diff --git a/aaar10assessingaispotentialtoassistresearch/86bc9287-7ab0-4f58-9e16-bf41de17edfb_content_list.json b/aaar10assessingaispotentialtoassistresearch/86bc9287-7ab0-4f58-9e16-bf41de17edfb_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..bcef1c1379183edc5f58bf592dc6a293fa737299
--- /dev/null
+++ b/aaar10assessingaispotentialtoassistresearch/86bc9287-7ab0-4f58-9e16-bf41de17edfb_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:18f0a2c4f7b7e42d838c3bee9d4889ed0b18a9c217e8cfdebdcf6648dd30a6ae
+size 137336
diff --git a/aaar10assessingaispotentialtoassistresearch/86bc9287-7ab0-4f58-9e16-bf41de17edfb_model.json b/aaar10assessingaispotentialtoassistresearch/86bc9287-7ab0-4f58-9e16-bf41de17edfb_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..bedf3cfce122c2b3dccf210352e411e8c8887c10
--- /dev/null
+++ b/aaar10assessingaispotentialtoassistresearch/86bc9287-7ab0-4f58-9e16-bf41de17edfb_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fe575a6024c9bc3f5c3e8037b6e45efdc2005be9380ef42bd61926343daee62c
+size 162123
diff --git a/aaar10assessingaispotentialtoassistresearch/86bc9287-7ab0-4f58-9e16-bf41de17edfb_origin.pdf b/aaar10assessingaispotentialtoassistresearch/86bc9287-7ab0-4f58-9e16-bf41de17edfb_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..7de3263a0c78f752a849b439a73bd7becd6b76a4
--- /dev/null
+++ b/aaar10assessingaispotentialtoassistresearch/86bc9287-7ab0-4f58-9e16-bf41de17edfb_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d91cc77a237ccddc1ddae9b9adbb09fb51daee7e46e45e40aa864c873ad87f37
+size 3477634
diff --git a/aaar10assessingaispotentialtoassistresearch/full.md b/aaar10assessingaispotentialtoassistresearch/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..4848153ec1008ec3ed722d3134a7fc558b1ade4d
--- /dev/null
+++ b/aaar10assessingaispotentialtoassistresearch/full.md
@@ -0,0 +1,450 @@
+Renze Lou1 Hanzi Xu2 Sijia Wang3 Jiangshu Du4 Ryo Kamoi1 Xiaoxin Lu1 Jian Xie5 Yuxuan Sun5 Yusen Zhang1 Jihyun Janice Ahn1 Hongchao Fang1 Zhuoyang Zou1 Wenchao Ma1 Xi Li6 Kai Zhang7 Congying Xia5 Lifu Huang3 Wenpeng Yin1
+
+# Abstract
+
+Numerous studies have assessed the proficiency of AI systems, particularly large language models (LLMs), in facilitating everyday tasks such as email writing, question answering, and creative content generation. However, researchers face unique challenges and opportunities in leveraging LLMs for their own work, such as brainstorming research ideas, designing experiments, and writing or reviewing papers. In this study, we introduce AAAR-1.0, a benchmark dataset designed to evaluate LLM performance in three fundamental, expertise-intensive research tasks: (i) EQUATIONINFERENCE, assessing the correctness of equations based on the contextual information in paper submissions; (ii) EXPERIMENTDESIGN, designing experiments to validate research ideas and solutions; and (iii) PAPERWEAKNESS, identifying weaknesses in paper submissions. AAAR-1.0 differs from prior benchmarks in two key ways: first, it is explicitly research-oriented, with tasks requiring deep domain expertise; second, it is researcher-oriented, mirroring the primary activities that researchers engage in on a daily basis. An evaluation of both open-source and closed-source LLMs reveals their potential as well as their limitations in conducting sophisticated research tasks. We will keep iterating AAAR-1.0 into new versions. Project Webpage: https://renzelou.github.io/AAAR-1.0/
+
+
+Figure 1: The input-output illustration of the three tasks in the proposed AAAR-1.0 benchmark.
+
+# 1. Introduction
+
+Although AI has brought transformative changes to various aspects of life, its impact on researchers unfolds in a nuanced manner. On the one hand, AI assists in various research disciplines, such as Social Science (Neuman et al., 2023), Finance (Gu et al., 2024), Medicine (Rakhimov et al., 2022), GeoScience (Praskievicz, 2018), etc., significantly expediting academic processes. However, many of these applications are superficial, often limited to data-driven clustering or classification. On the flip side, the AI era poses challenges for researchers. Despite its ability to streamline some activities, researchers still face demanding, cognitively intensive tasks such as staying current through extensive paper reading, rapidly generating ideas in response to fast-paced advancements, conducting rigorous experiments to substantiate claims, and managing an increasing volume of peer reviews. Then a question looms: How effectively can AI assist researchers in tasks that are domain-specific, expertise-demanding, and reasoning-intensive?
+
+Existing works have demonstrated the promising potential of using LLMs to assist AI research. Si et al. (2024) conducted a large-scale human study and found that LLMs can generate creative research ideas. Lu et al. (2024) proposed an autonomous agent to handle a complicated research workflow and write a whole research paper. However, most of these works focus on addressing highly subjective problems that require a high degree of expertise, making evaluation laborious and hard to reproduce. This underscores the need for a comprehensive benchmark that rigorously assesses LLMs' capabilities in expertise-intensive research activities.
+
+To this end, we introduce AAAR-1.0, a novel benchmark that aims to comprehensively assess LLMs' capacity for expert-level research tasks. As illustrated in Figure 1, AAAR-1.0 comprises three distinct expert-level AI research tasks drawn from researchers' daily activities: i) EQUATIONINFERENCE, investigating whether LLMs can infer the correctness of equations based on the paper context; ii) EXPERIMENTDESIGN, validating LLMs' ability to design reliable experiments for a research idea; and iii) PAPERWEAKNESS, testing the quality of weaknesses discovered by LLMs in paper drafts. To ensure data quality, senior AI researchers with extensive domain expertise perform the data annotation for AAAR-1.0, followed by rigorous multi-round data examination and filtering. All three tasks require models to possess strong domain knowledge covering various cutting-edge research findings, as well as expert-level research experience, to the extent that even humans need substantial research accumulation to tackle them. Crucially, the tasks here are singular, standalone challenges (with clear input and output expectations) rather than a complicated task chain (Li et al., 2024; Lu et al., 2024), providing a more transparent assessment of the model's intermediate outputs. Benefiting from the proposed automatic metrics, we conduct extensive experiments across numerous mainstream LLMs, where we find that:
+
+- Against a naive all-positive baseline of $40\%$ $\mathrm{F}_1$ , the performance of most LLMs on EQINFER hovers just slightly above it, with the top models reaching around $46\%$ . This highlights the difficulty of the task, despite its reliance primarily on local context reasoning.
+- In EXPDESIGN, LLM-designed experiments are innovative and more diverse than those by humans; however, many are trivial, lack feasibility, and stray from the original research objectives.
+- In PAPERWEAKNESS, LLM-identified weaknesses often lack depth and specificity, making them broadly applicable and less useful for providing feedback on paper drafts.
+
+# 2. Related Work
+
+LLMs for AI Research. With the rapid evolution of pretraining techniques, LLMs have been found useful in assisting various research disciplines (Yu et al., 2024a; Labrak et al., 2024), particularly in AI research, such as generating novel research ideas (Kumar et al., 2024; Yu et al., 2024b), reviewing research drafts (Gao et al., 2024; Du et al., 2024; Liang et al., 2024; Zhu et al., 2025), and writing scientific papers (Chamoun et al., 2024; Lu et al., 2024; Weng et al., 2024). For example, Si et al. (2024) conducted a large-scale human investigation of LLM-generated research ideas and found that LLMs can generate more novel ideas than humans, though the ideas lack feasibility. Du et al. (2024) found that while LLMs are effective at summarizing papers, they tend to overly trust the authors' claimed strengths and struggle to identify weaknesses specific to the paper. Furthermore, some works try to employ LLMs to solve more complicated research tasks that are composed of multiple steps (Li et al., 2024; Tang et al., 2023). Notably, Lu et al. (2024) proposed AI-SCIENTIST, an autonomous agent framework that can handle a series of challenging research tasks consecutively, including generating research ideas, devising the corresponding experiments along with their implementations, and then writing the final research paper, exactly how humans conduct a whole research pipeline. However, there is still a lack of systematic evaluation and quantitative analysis of LLMs' (intermediate) outputs on each single-step research task. Accordingly, our work focuses on building a benchmark consisting of individual research steps with clear input-output expectations, making it suitable for comprehensive LLM evaluation. Moreover, we emphasize that relying on LLMs to fully replace human effort might compromise academic integrity; our benchmark instead primarily serves an educational purpose, where LLMs assist junior researchers by providing imperfect but insightful ideas, rather than governing the entire research process.
+
+Benchmarks for AI Research Tasks. Existing "LLM assists research" benchmarks mainly focus on the implementation and execution parts of the research pipeline (Lu et al., 2024; Chen et al., 2024a; Li et al., 2024; Chan et al., 2024). For instance, Huang et al. (2024) proposed MLAgentBench to test LLMs' capacity for writing project code and training ML models, where the evaluation metric is the test performance of the models trained by the LLMs. However, real-world AI research activities are diverse, and some of them are hard to assess for quality; for example, generating research ideas requires intensive manual assessment (Si et al., 2024; Liang et al., 2024). Our work centers on tasks that emphasize a comprehensive mastery of the scientific research field and core elements of a researcher's daily workload, and we build curated task-specific metrics for every single task for a more efficient and accurate appraisal of LLMs.
+
+# 3. AAAR-1.0
+
+Figure 2 provides a data construction overview. In the following sections, we elaborate on the data collection details, including § 3.1 EQUATION INFERENCE (EQINFER), § 3.2 EXPERIMENT DESIGN (EXPDESIGN), and § 3.3 PAPER WEAKNESS (WEAKNESS).
+
+Figure 2: Data construction workflows of the three tasks in AAAR-1.0.
+
+# 3.1. EQUATIONINFERENCE
+
+Crafting a correct scientific equation in paper writing or validating an equation in paper reviewing is challenging, as it requires a thorough understanding of an algorithm or the intricate relationships among numerous variables. Directly prompting LLMs to generate equations proves overly demanding. Therefore, this work formulates EQINFER (Figure 1) as a binary inference task. $^{1}$
+
+$①$ Data crawling and cleaning. For the data source, we adopt the pre-compilation LaTeX code for two reasons: i) existing PDF parsing tools, such as PyMuPDF and PaperMage (Lo et al., 2023), can introduce considerable noise into the parsed equation text; ii) since most existing LLMs are capable of processing LaTeX code, using the LaTeX source instead of parsed text can be more accurate and provide LLMs with richer information. Meanwhile, we only crawl peer-reviewed papers accepted by top-tier conferences to avoid using low-quality human-written equations. Accordingly, we first obtain the accepted paper lists from the ACL Anthology, from 2019 to 2023. Next, we search for each paper on arXiv to crawl its LaTeX source (if it exists). In total, we obtain LaTeX source packages for 1,762 papers. We then clean the LaTeX sources by deleting all comments and combining multiple cross-referenced .tex files into a single main file. Afterward, we use regex to randomly extract (at most) 3 equations' code snippets per paper, resulting in 3,877 human-written equations.
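As a concrete illustration of this step, the comment stripping and equation extraction can be sketched as follows. This is a minimal sketch under our own assumptions: the paper does not specify which LaTeX environments its regex targets or how the random sampling is seeded, so the environment names and sampling logic below are illustrative only.

```python
import random
import re

# Match standard LaTeX display-equation environments (a simplifying
# assumption; the paper does not specify which environments it targets).
EQ_PATTERN = re.compile(
    r"\\begin\{(equation\*?|align\*?)\}(.*?)\\end\{\1\}",
    re.DOTALL,
)

def strip_comments(tex: str) -> str:
    """Delete LaTeX comments: an unescaped '%' up to the end of the line."""
    return re.sub(r"(?<!\\)%.*", "", tex)

def sample_equations(tex: str, k: int = 3, seed: int = 0) -> list:
    """Randomly keep at most k equation snippets from one paper's source."""
    eqs = [m.group(2).strip() for m in EQ_PATTERN.finditer(strip_comments(tex))]
    rng = random.Random(seed)
    return rng.sample(eqs, min(k, len(eqs)))

tex = r"""
Some context. % an inline comment
\begin{equation} E = m c^2 \end{equation}
\begin{align} a &= b + c \end{align}
"""
```

A real pipeline would also handle `\[...\]`, `gather`, and nested environments; the backreference `\1` ensures the matched `\end{...}` closes the same environment that was opened.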
+
+② LLM-based equation synthesis. As EQINFER assesses whether LLMs can infer the correctness of an equation (i.e., binary classification), for each human-written positive equation, we have to craft counterpart negative equations. To this end, for each positive equation, we prompt GPT-4 to synthesize a negative equation based on the paper context. We repeat this prompt (with a high decoding temperature) until three distinct negative equations are synthesized.
+
+$③$ LLM-based filtering. However, the LLM-synthesized equations can be context-unaligned, i.e., some synthesized equations contain notation that is never defined in the paper context, which becomes a superficial shortcut that makes identification too effortless for LLMs. To improve data quality, we prompt GPT-4 to identify context-unaligned negative equations. We then eliminate any positive equation whose three negative counterparts are all unaligned. This filtering leads to a final set of 1,449 positive equations and 4,347 negative equations (each positive equation has three negative counterparts, at least one of which is "challenging").
+
+$④$ Expert-based examination. Furthermore, it is also possible that synthesized negative equations are actually correct (i.e., false negatives): even if the negative and positive equations are written differently, the final compiled results might be the same. We therefore employ human experts to further review the data and filter out false-negative equations, checking the classification instances for accuracy.
+
+We asked 5 senior PhD students experienced in AI research to check all instances. We ask the human experts to consider the following criteria for each positive equation and its negative counterparts (each pair): i) Are all equations grammatically correct? ii) After compilation, are all negative equations different from the positive one? We ask every expert to use external LaTeX compilation tools (e.g., TeX Live) and identify the pairs that fail to meet these criteria. Each pair is examined by at least two experts, and we only keep pairs that all experts decide to keep. After this strict examination, a total of 1,049 pairs are eventually kept (27.6% of pairs are filtered out).
+
+Final data. We finally obtain 1,049 positive equations (each has three negative counterparts). We show data statistics of EQINFER in Table 7 and data examples in Figure 8.
+
+# 3.2. EXPERIMENTDESIGN
+
+Given a research topic, such as a novel ML algorithm, a qualified researcher can design a solid experiment plan for it and clarify the underlying motivation to ensure the reliability of the designed experiments. Unlike concurrent works that focus on experiment implementation (Lu et al., 2024; Huang et al., 2024), we emphasize the importance of assessing the high-level experiment design of LLMs before the subsequent implementation, to avoid expensive execution iterations. Therefore, as shown in Figure 1, we formulate EXPDESIGN as a text-generation task that takes the pre-experiment paper context as input and then generates the experiment and explanation lists.
+
+$①$ Data crawling. As for the data source, we first collect data for $\geq 10\mathrm{k}$ papers from arXiv, including LaTeX sources and PDFs, covering broad AI categories (cs.AI, cs.CL, and cs.CV) from 2018 to 2023. Similarly, to ensure source data quality, we only use papers that have appeared at well-known conferences.
+
+$②$ Domain-expert annotation. Making a reliable and executable experiment plan requires solid foundational knowledge of a specific research area. Consequently, we set a high standard for choosing annotators: i) being a senior Ph.D. student with at least one peer-reviewed publication in a leading AI venue; ii) having more than 4 years of AI research experience; iii) frequently serving as a conference reviewer. In the end, we invite a total of 10 qualified experts to participate in our data collection procedure. Given the $10\mathrm{k}$ crawled papers, we first ask every annotator to bid on the papers they are interested in. After bidding, each of them is assigned 10 papers, i.e., a total of 100 papers to be annotated. During annotation, we post each paper's PDF on Google Drive and ask the annotator to first carefully read the whole paper. Then, we ask them to identify and locate the key experiments in the paper (i.e., highlighting the relevant paragraphs of each experiment). We do not consider trivial experiments, such as the supplemental analyses in the appendix. For each identified experiment, the annotator has to concisely answer two questions: i) What did this experiment do? ii) Why did the paper's authors conduct this experiment? In other words, we ask the annotator to summarize all the key experiments in the paper and explain the underlying motivations based on their rich domain experience.
+
+$③$ Multi-round peer discussion. Intuitively, different experts might have different opinions on the same research topic. In particular, when explaining the underlying motivation of an experiment, adopting only a single expert's opinion might introduce bias into our annotation. Hence, we conduct a further multi-round peer discussion. For each paper, once all the key experiments are identified, summarized, and explained, we ask a different expert (the reviewer) to review the annotation according to the following three criteria: i) Are the identified experiments all the key experiments? ii) Does each experiment summarization cover all the key information? iii) Does each explanation sound reasonable and reliable? The reviewer must leave comments on the online PDF regarding the above criteria, and the annotator must then respond to each comment, either accepting the suggestion and revising the previous annotation or providing a "rebuttal" to the reviewer to uphold the annotation. This discussion iterates until the two opinions align. Eventually, for each paper, we collect two lists: i) the experiment list, summarizing each experiment step of the paper; and ii) the explanation list, the underlying motivations in one-to-one correspondence with the experiments.
+
+Final data. After annotation, we use the pre-experiment context of each paper (according to the first-experiment location identified by the annotator) as the input. Furthermore, we use GPT-4 to delete any sentence that potentially leaks the experiments from the input.$^{3}$ Similar to EQINFER, we utilize the source LaTeX as the input text to avoid PDF parsing noise. As for the image input, we collect the figures within each paper's source LaTeX package and only keep figures that are used in the pre-experiment context. Overall, a total of 100 instances are collected. As shown in Figure 1, the input of each instance is the pre-experiment context (including the figures), and the ground-truth output is the expert-annotated experiment plan and explanations. Table 8 shows the data statistics, and Figure 9 illustrates a sample case of EXPDESIGN.
+
+# 3.3. PAPERWEAKNESS
+
+Another critical research task is paper review. Previous works have demonstrated the usefulness of LLM-based review feedback (Gao et al., 2024; Jin et al., 2024; Lu et al., 2024). However, as indicated by Du et al. (2024) and Liang et al. (2024), LLMs only excel at summarizing research strengths while falling significantly short on weakness criticism. Hence, we build WEAKNESS specifically to investigate LLM-generated weaknesses.
+
+$①$ Data crawling. We first crawl a total of 3,779 anonymous submissions to ICLR 2023 from OpenReview,$^{4}$ including PDFs and other meta information (e.g., scores, decisions, and tracks). As ICLR 2023 has 13 distinct tracks and the paper distribution across tracks is highly skewed, we uniformly sample papers from the different research tracks to improve domain diversity. Meanwhile, during sampling, we also keep accepted and rejected papers equally distributed to avoid data bias. In total, we collect 1,000 papers (500 accepted; 500 rejected), uniformly covering all 13 tracks. Please refer to Figure 3 for the track and score distributions of the 1,000 papers.
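The track- and decision-balanced sampling described above can be sketched as follows. The field names `track` and `accepted`, and the equal per-bucket quota, are our assumptions for illustration; the paper does not specify its crawled metadata schema or the exact quota arithmetic.

```python
import random
from collections import defaultdict

def stratified_sample(papers, n_total=1000, seed=0):
    """Uniformly sample across tracks while balancing accept/reject.

    `papers` is a list of dicts with 'track' and 'accepted' keys
    (hypothetical field names; the real metadata schema is not given).
    """
    rng = random.Random(seed)
    buckets = defaultdict(list)
    for p in papers:
        buckets[(p["track"], p["accepted"])].append(p)
    tracks = sorted({t for t, _ in buckets})
    # Equal quota per (track, decision) bucket; real data needs rounding
    # adjustments when n_total is not divisible by the bucket count.
    per_bucket = n_total // (len(tracks) * 2)
    sample = []
    for track in tracks:
        for accepted in (True, False):
            pool = list(buckets[(track, accepted)])
            rng.shuffle(pool)
            sample.extend(pool[:per_bucket])
    return sample
```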
+
+$②$ Extraction of human-written weaknesses. Since the raw comments crawled from ICLR 2023 mix strengths and weaknesses, we further employ GPT-4 to extract all the weaknesses from each reviewer's comments and compose the multiple weaknesses into a list.$^{5}$ Notably, we force GPT-4 to keep the reviewer's original text, i.e., all weaknesses in our dataset are the original sentences written by the reviewer, without any modification. Moreover, one reviewer sometimes mentions the same weakness repeatedly throughout the comment. In this case, we simply keep all the repeated weaknesses: if a weakness is repeatedly mentioned by the reviewer, it is intuitively an important weakness that the reviewer wants to emphasize; accordingly, keeping the repeated items penalizes LLMs more for missing this weakness.
+
+For each paper, we thus obtain multiple weakness lists (one weakness list per reviewer; one paper can have multiple reviewers). We further remove the few papers for which no weaknesses are found in the raw comments, resulting in a total of 993 instances, i.e., 993 {paper, weakness lists} pairs.
+
+$③$ Input data processing. As mentioned before, we crawl papers from OpenReview instead of arXiv because the under-review paper draft is required for this task. However, not every paper on OpenReview can be found on arXiv, i.e., the source LaTeX code and figures of most under-review papers are unavailable. Therefore, we utilize VILA (Lin et al., 2023) to parse the text from the PDF; we also employ PDFFigures-2.0 (Clark & Divvala, 2016) to extract all the figures and tables (as images) from the paper, as VILA is not good at processing table data.
+
+4 We adopt ICLR because it releases full submissions, while some other conferences only release accepted papers.
+
+5We manually checked GPT-4's extraction results of 200 cases — GPT-4 only missed $\leq 1\%$ of reviewer-written weaknesses and maintained almost all the original text.
+
+Final data. Our final data comprises 993 instances; each input is the paper text along with figure/table images, and each output is the peer reviewers' weakness lists. Table 9 shows data statistics; Figure 10 presents an example data instance. We show the data diversity (score and track distributions) in Figure 3.
+
+# 4. Evaluation Criteria
+
+For EQINFER, we adopt $\mathrm{F}_1$ as the classification criterion. For EXPDESIGN and WEAKNESS, since both tasks have free-form outputs, we develop several novel task-specific metrics in addition to the conventional ROUGE (Lin, 2004).
+
+We use LLMs to evaluate the experiment list of EXPDESIGN. Specifically, given a model-predicted experiment list $p$ and the ground-truth list $g$, we calculate:
+
+$$
+\text{En-Precision} = \frac{1}{m} \sum_{i=1}^{m} f\left(p_{i}, g\right) \tag{1}
+$$
+
+$$
+\text{En-Recall} = \frac{1}{n} \sum_{j=1}^{n} f\left(g_{j}, p\right) \tag{2}
+$$
+
+where $m$ and $n$ are the list lengths of $p$ and $g$, respectively; $f(\cdot)$ represents LLM prompting: we prompt the LLM to decide whether each predicted experiment item $p_i$ is entailed by the whole ground-truth list $g$, producing a binary output, and vice versa for En-Recall. Intuitively, En-Precision reflects how many predicted experiments match ground-truth experiments. In this work, we use GPT-4o as the evaluator.
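Equations 1-2 can be sketched as follows, with the LLM judge abstracted into a pluggable `entails(item, ref_list)` predicate. The signature is hypothetical: the paper prompts GPT-4o for this decision, while the toy judge below only checks word overlap so the sketch stays self-contained.

```python
def en_precision_recall(pred, gold, entails):
    """En-Precision / En-Recall (Eqs. 1-2) with a pluggable judge.

    `entails(item, ref_list) -> bool` stands in for the GPT-4o prompt
    (hypothetical signature) deciding list-level entailment.
    """
    if not pred or not gold:
        return 0.0, 0.0
    precision = sum(entails(p, gold) for p in pred) / len(pred)
    recall = sum(entails(g, pred) for g in gold) / len(gold)
    return precision, recall

def toy_entails(item, refs):
    # Toy judge: entailed if the item shares any word with a reference.
    return any(set(item.split()) & set(r.split()) for r in refs)

pred = ["ablation on model size", "evaluate on new benchmark"]
gold = ["ablation study", "human evaluation", "scaling model size"]
p, r = en_precision_recall(pred, gold, toy_entails)
```

En-F1 then follows as the harmonic mean of the two scores, averaged per instance.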
+
+For the explanation generation of EXPDESIGN, as the predicted explanations are in one-to-one correspondence with the ground truth, we adopt a semantic-based metric:
+
+$$
+\text{S-Match} = \frac{1}{m} \sum_{i=1}^{m} \operatorname{sim}\left(p_{i}, g_{i}\right) \tag{3}
+$$
+
+where we use SentenceBERT (Reimers, 2019) to measure the semantic similarity between $p_i$ and $g_i$.
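Equation 3 reduces to a mean of aligned pairwise similarities. The sketch below substitutes a toy bag-of-words cosine for the SentenceBERT similarity, purely so it runs without model downloads; in practice one would encode $p_i$ and $g_i$ with a sentence-transformers model and take the cosine of the embeddings.

```python
from collections import Counter
from math import sqrt

def cosine_bow(a: str, b: str) -> float:
    """Toy bag-of-words cosine; the paper uses SentenceBERT embeddings
    instead (swap in a sentence-transformers encoder for real use)."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = sqrt(sum(v * v for v in ca.values()))
    nb = sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def s_match(pred, gold, sim=cosine_bow):
    """S-Match (Eq. 3): mean similarity over one-to-one aligned items."""
    assert len(pred) == len(gold), "explanations are aligned one-to-one"
    return sum(sim(p, g) for p, g in zip(pred, gold)) / len(pred)
```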
+
+Unlike EXPDESIGN, the ground truth of WEAKNESS consists of multiple reviewers' weakness lists. Instead of merely merging the opinions of various reviewers into one flattened list and keeping LLM-as-judge as the metric (which is not only costly but also loses the structural information of diverse reviewing perspectives), we employ the following semantic-based metrics to efficiently evaluate the predicted weaknesses:
+
+$$
+\text{S-Precision} = \frac{1}{m} \sum_{i=1}^{m} \left(\frac{1}{r} \sum_{k=1}^{r} \max_{j} \operatorname{sim}\left(p_{i}, g_{j}^{k}\right)\right) \tag{4}
+$$
+
+$$
+\text{S-Recall} = \frac{1}{r} \sum_{k=1}^{r} \left(\frac{1}{n_{k}} \sum_{j=1}^{n_{k}} \max_{i} \operatorname{sim}\left(g_{j}^{k}, p_{i}\right)\right) \tag{5}
+$$
+
+where $r$ is the number of reviewers of the given paper, $n_k$ is the length of the $k$-th reviewer's weakness list, and $g_j^k$ indicates the $j$-th item in the $k$-th reviewer's weakness list.
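Equations 4-5 can be sketched as below, with `sim` pluggable: SentenceBERT similarity in the paper, a toy exact-match similarity here purely for illustration.

```python
def s_precision_recall(pred, reviewer_lists, sim):
    """S-Precision / S-Recall (Eqs. 4-5) against r reviewers' lists."""
    r = len(reviewer_lists)
    # Eq. 4: each prediction is scored by its best match per reviewer,
    # averaged over reviewers, then over predictions.
    s_prec = sum(
        sum(max(sim(p, g) for g in gl) for gl in reviewer_lists) / r
        for p in pred
    ) / len(pred)
    # Eq. 5: each reviewer's items are scored by their best-matching
    # prediction, averaged within each list, then over reviewers.
    s_rec = sum(
        sum(max(sim(g, p) for p in pred) for g in gl) / len(gl)
        for gl in reviewer_lists
    ) / r
    return s_prec, s_rec

def exact_sim(a, b):
    # Toy similarity for illustration; the paper uses SentenceBERT scores.
    return 1.0 if a == b else 0.0

prec, rec = s_precision_recall(["w1", "w2"], [["w1"], ["w2", "w3"]], exact_sim)
```

Keeping each reviewer's list separate (rather than flattening) is what makes a prediction pay a precision cost when it matches only one reviewer's perspective.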
+
+Additionally, in the real world, we would consider a reviewed weakness reliable if it is specific to the paper. Meanwhile, we also want the review to be informative, i.e., without excessively similar weaknesses within one review. Inspired by the classic TF-IDF, we propose a novel review-diversity metric:
+
+$$
+\text{ITF-IDF} = \frac{1}{w} \sum_{j=1}^{w} \left(\frac{1}{m_{j}} \sum_{i=1}^{m_{j}} \log\left(\frac{m_{j}}{O_{i}^{j}}\right) \times \log\left(\frac{w}{R_{i}^{j}}\right)\right) \tag{6}
+$$
+
+$$
+O_{i}^{j} = \sum_{k=1}^{m_{j}} \operatorname{sim}\left(p_{i}^{j}, p_{k}^{j}\right) \tag{7}
+$$
+
+$$
+R_{i}^{j} = \sum_{l=1}^{w} \max_{s} \operatorname{sim}\left(p_{i}^{j}, p_{s}^{l}\right) \tag{8}
+$$
+
+where $w$ is the total number of papers in the dataset, $p^j$ is the $j$-th paper's predicted weakness list, and $p_i^j$ is the $i$-th weakness in $p^j$. Moreover, $O_i^j$ calculates the intra-paper occurrence frequency of $p_i^j$, while $R_i^j$ is the "soft" number of papers that also contain $p_i^j$, computed by summing the maximum similarity scores between $p_i^j$ and the other papers' weaknesses. In short, $O_i^j$ measures informativeness and $R_i^j$ measures specificity; the complete ITF-IDF considers both aspects and reflects the overall weakness diversity.
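Equations 6-8 can be combined into one pass over all papers' predicted weakness lists, again with a pluggable `sim`; the exact-match similarity in the test is only for illustration (the paper uses SentenceBERT scores). Note that a weakness perfectly unique within its paper and across the corpus gets $O_i^j = R_i^j = 1$, so its contribution is $\log m_j \cdot \log w$.

```python
from math import log

def itf_idf(all_pred_lists, sim):
    """ITF-IDF (Eqs. 6-8) over every paper's predicted weakness list.

    `sim(a, b)` is a similarity in [0, 1]; self-similarity is 1, so
    O and R are always >= 1 and the logs stay well-defined.
    """
    w = len(all_pred_lists)
    total = 0.0
    for pj in all_pred_lists:
        m_j = len(pj)
        paper_score = 0.0
        for p_i in pj:
            # Eq. 7: "soft" intra-paper occurrence count of p_i.
            o = sum(sim(p_i, p_k) for p_k in pj)
            # Eq. 8: "soft" number of papers whose list also contains p_i.
            r = sum(max(sim(p_i, p_s) for p_s in pl) for pl in all_pred_lists)
            paper_score += log(m_j / o) * log(w / r)
        total += paper_score / m_j
    return total / w

def exact_sim(a, b):
    return 1.0 if a == b else 0.0
```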
+
+# 5. Experiments and Analyses
+
+In this section, we conduct extensive experiments on AAAR-1.0, across various mainstream LLMs, to quantify the current LLMs' capacity to tackle high-level research tasks. Specifically, § 5.1 for EQINFER, § 5.2 for EXPDESIGN, and § 5.3 for WEAKNESS. Please refer to Appendix B.2 for running details of the LLMs.
+
+# 5.1. EQUATIONINFERENCE
+
+Settings. As different LLMs have distinct context windows, to ensure a fair comparison, we fix the maximum input length for all models. According to Table 7, we empirically use 1,000 words for the context on each side of the equation, i.e., 2,000 surrounding words in total.
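The fixed-window truncation can be sketched as follows; word-level splitting and keeping the words nearest the equation are our assumptions, since the paper only states the word budget.

```python
def truncate_context(before: str, after: str, k: int = 1000):
    """Keep only the k words adjacent to the equation on each side."""
    return " ".join(before.split()[-k:]), " ".join(after.split()[:k])

left, right = truncate_context("w1 w2 w3", "x1 x2 x3", k=2)
```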
+
+Main results. Table 1 shows the main results. Firstly, a simple baseline that predicts all equations as positive achieves $40\%$ $\mathrm{F_1}$ (due to the 1:3 ratio of positive to negative equations), while nearly all open-source LLMs fail to beat even this naive baseline. Notably, though Mixtral performs slightly above the baseline, its extremely biased precision and recall imply that it, too, simply predicts almost all samples as positive instead of truly inferring. Meanwhile, compared to the All-Positive baseline, the performance superiority of the strong closed-source LLMs is not significant; the best LLM on this task only obtains $47.98\%$, which demonstrates the challenge of EQINFER compared with other similar benchmarks (Song et al., 2023). The generally high recall with low precision of all LLMs also indicates real-world risks, e.g., when relying on LLMs to check the validity of equations in paper review.
+
+Table 1: Various LLMs' performances on the EQINFER task (1,049 positive and 3,147 negative samples). "All-Positive" indicates a baseline that predicts all equations as positive.
+
+| Methods | F1 | Prec. | Rec. |
+| --- | --- | --- | --- |
+| All-Positive | 40.00 | 25.00 | 100.00 |
+| *Open-source LLMs* | | | |
+| OLMo-7B (Groeneveld et al., 2024) | 13.64 | 11.93 | 15.91 |
+| Mistral-7B (Jiang et al., 2023) | 28.45 | 19.28 | 54.24 |
+| Mixtral-8x22B-MoE (Jiang et al., 2024) | 40.90 | 26.15 | 93.80 |
+| Qwen 2.5-72B (Qwen Team, 2024) | 31.22 | 26.28 | 57.40 |
+| Llama 3.1-70B (MetaAI, 2024) | 33.08 | 22.14 | 65.39 |
+| *Closed-source LLMs* | | | |
+| Gemini 1.5 Pro (Anil et al., 2023) | 46.74 | 32.05 | 86.27 |
+| Claude 3.5 Sonnet (Anthropic, 2024) | 45.13 | 29.48 | 96.18 |
+| GPT-4o (OpenAI, 2024a) | 40.35 | 30.79 | 58.53 |
+| o1-preview (OpenAI, 2024b) | 46.35 | 31.43 | 88.27 |
+| o3-mini (OpenAI, 2025) | 47.98 | 34.34 | 79.59 |
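As a sanity check on the All-Positive row: with a 1:3 positive-to-negative ratio, predicting every equation as positive gives

$$
\mathrm{Prec} = \frac{1}{4}, \quad \mathrm{Rec} = 1, \quad \mathrm{F_1} = \frac{2 \cdot \frac{1}{4} \cdot 1}{\frac{1}{4} + 1} = \frac{2}{5} = 40\%.
$$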
+
+$\mathcal{Q}$: Do more contexts boost performance? EQINFER places high demands on reasoning within the scientific context. To quantify the impact of input context length, we scale the input length (per side) from 100 to 1,500 words. As shown in Figure 4, for the open-source LLMs (Llama and Qwen), an appropriate context length can boost performance, while for GPT-4o, scaling up the context length does not contribute much to the $\mathrm{F}_1$. However, during scaling, we find that the precision of GPT-4o gradually increases while its recall decreases accordingly; considering the label distribution of EQINFER, we believe precision better reflects a model's true capacity on this task. Thus, we anticipate that scaling up the context will benefit strong closed-source LLMs such as GPT-4o.
+
+# 5.2. EXPERIMENTDESIGN
+
+Settings. Similarly, we unify the input context length across LLMs to ensure a fair comparison. According to Table 8, we set 2,000 and 3,000 input words for open- and closed-source LLMs, respectively. Meanwhile, as experiment explanation is the subsequent task of experiment design, using model-generated experiments would propagate errors into the explanations, leading to inferior results for most LLMs. We therefore provide LLMs with the oracle experiments when generating explanations.
+
+Main results. Table 2 shows the main results. For experiment design, the closed-source LLMs generally outperform open-source LLMs. However, the scores of all LLMs are relatively low $(20\% \sim 30\%)$, implying that LLMs consistently miss ground-truth experiments from the original paper (low recall) and tend to generate more novel experiments that did not appear in the original paper (low precision). As for the experiment explanation, the S-Match scores of closed-source LLMs still surpass those of open-source LLMs. Furthermore, there is a negative correlation between S-Match and ROUGE scores, where the ROUGE scores of closed-source LLMs are broadly inferior. We find that the open-source LLMs often copy terms or phrases from the given experiment, or even simply paraphrase the experiment instead of explaining it, which results in high superficial overlap with the ground-truth explanation. This observation highlights the importance of adopting the proposed S-Match to avoid the evaluation bias of traditional generation metrics.
+
+Table 2: Various LLMs' performances on the 100 instances of EXPDESIGN (the En-* metrics evaluate the experiment design; S-Match and ROUGE evaluate the experiment explanation). The explanation generation is based on the oracle experiments to prevent error propagation. "Copy Input" directly copies each experiment idea as the explanation.
+
+| Methods | En-F1 | En-Precision | En-Recall | S-Match | ROUGE-L | ROUGE-1 |
+| --- | --- | --- | --- | --- | --- | --- |
+| Copy Input | — | — | — | 40.32 | 22.06 | 25.28 |
+| *Open-source LLMs* | | | | | | |
+| OLMo-7B (Groeneveld et al., 2024) | 14.80 | 17.50 | 19.80 | 45.78 | 26.30 | 30.38 |
+| Mistral-7B (Jiang et al., 2023) | 18.96 | 24.83 | 21.38 | 50.18 | 30.20 | 34.69 |
+| Mixtral-8x22B-MoE (Jiang et al., 2024) | 23.16 | 24.45 | 30.57 | 49.07 | 29.96 | 34.53 |
+| Llama 3.1-70B (MetaAI, 2024) | 22.92 | 23.10 | 29.76 | 50.05 | 29.33 | 34.11 |
+| Qwen 2.5-72B (Qwen Team, 2024) | 24.28 | 22.48 | 34.44 | 51.12 | 29.46 | 34.68 |
+| *Closed-source LLMs* | | | | | | |
+| Gemini 1.5 Pro (Anil et al., 2023) | 27.25 | 28.66 | 34.92 | 52.87 | 28.52 | 33.80 |
+| Claude 3.5 Sonnet (Anthropic, 2024) | 27.99 | 24.48 | 42.09 | 53.03 | 18.75 | 26.15 |
+| GPT-4o (OpenAI, 2024a) | 25.03 | 22.25 | 36.59 | 54.79 | 27.54 | 34.31 |
+| o1-preview (OpenAI, 2024b) | 30.13 | 28.13 | 38.59 | 58.55 | 29.11 | 36.70 |
+| o3-mini (OpenAI, 2025) | 30.17 | 28.70 | 37.67 | 54.01 | 20.71 | 29.14 |
+
+$\mathcal{Q}_1$ : What is the quality of the model-generated novel experiments? The low En-Precision of LLMs in Table 2 reflects the LLMs' tendency to generate novel experiments. We therefore randomly sample 15 papers from EXPDESIGN and ask 3 experts to manually review the model-generated novel experiments. Specifically, we ask the experts to judge the necessity of the novel experiments on three levels: "A" indicates the experiment is necessary/mandatory to support the main claim, "B" represents optional/supplementary experiments, and "C" marks unrelated experiments (see Appendix C.2 for evaluation details). Table 3 shows the necessity scores of the three strongest LLMs. We find that the LLMs consistently generate many novel experiments, especially Claude; although most of them are optional or even unrelated, a considerable number of necessary experiments are still generated, e.g., by o1. We further find that some novel experiments can serve as useful supplementary analyses to the human-designed experiments. Table 11 shows examples of model-suggested experiments.
+
+Table 3: The human evaluation results on the novel experiments suggested by LLMs. "A", "B", and "C" represent the different quality levels (i.e., necessity); "A" is the best level.
+
+| Models | # of novel EXP | A (%) | B (%) |
+| --- | --- | --- | --- |
+| Gemini 1.5 Pro | 59 | 30.59 | 45.76 |
+| Claude 3.5 sonnet | 112 | 21.78 | 50.00 |
+| o1-preview | 71 | 35.84 | 36.61 |
+
+Table 4: The impact on S-Match scores of maintaining the experiment's self-containment for EXPDESIGN.
+
+| Models | One-by-One | Whole-List |
+| --- | --- | --- |
+| Llama 3.1-70B | 50.05 | 49.36 (↓ 0.7) |
+| Qwen 2.5-72B | 51.12 | 48.56 (↓ 2.6) |
+| Gemini 1.5 Pro | 52.87 | 57.48 (↑ 4.6) |
+| Claude 3.5 sonnet | 53.03 | 59.11 (↑ 6.1) |
+| GPT-4 | 55.03 | 56.95 (↑ 1.9) |
+| GPT-4o | 54.79 | 58.54 (↑ 3.8) |
+| o1-preview | 58.55 | 61.58 (↑ 3.0) |
+
+$\mathcal{Q}_2$ : Can self-contained experiment design enhance the experiment explanation? When generating the explanations in Table 2, we provide LLMs with each individual experiment and let them explain the experiments one by one, because we find that, when given the whole experiment list, the open-source models explain only some of the experiments due to their weaker instruction-following capacity. However, there are intuitively semantic or logical relations between different experiments, e.g., some experiments are prerequisites to others. Therefore, this one-by-one prompting might break the self-containment of an experiment plan. Consequently, we test "whole-list" prompting, where the LLMs are given the complete experiment list and are asked to explain all experiment steps together.
+
+Table 5: The human evaluation results on LLMs' output explanations of EXPDESIGN. "Acc. ratio" means the percentage of model outputs accepted by the annotators.
+
+| Models | Acc. ratio |
+| --- | --- |
+| Llama 3.1-70B | 22.93 |
+| Gemini 1.5 Pro | 55.07 |
+| Claude 3.5 sonnet | 61.46 |
+| GPT-4o | 69.72 |
+| o1-preview | 76.14 |
+
+As shown in Table 4, unlike the open-source LLMs, the explanation performance of the closed-source LLMs generally improves after adopting whole-list prompting. Further manual checking shows that, once the self-containment of the experiments is maintained, the LLMs can refer to other experiments and better grasp the underlying motivation of the current experiment.
+
+$\mathcal{Q}_3$ : Do human evaluation results align with automatic metrics for explanation? As the explanation can be open-ended, we provide human evaluation results on different LLMs' experiment explanation outputs. In detail, we randomly select 20 of the 100 papers and ask 5 annotators to read the experiments along with each model's explanations; the annotators then decide whether each model's explanation is acceptable (see Appendix C.3 for more details). Table 5 shows the results, where the score variance is higher than in Table 2. However, the performance rankings of the two tables are perfectly correlated (Spearman's rank correlation coefficient $= 1$), demonstrating the effectiveness of S-Match.
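+The perfect rank agreement can be checked directly from the reported numbers. A quick sketch using the one-by-one S-Match scores (Table 2) and the acceptance ratios (Table 5) of the five evaluated models:
+
+```python
+def ranks(xs):
+    # Rank positions (0 = smallest); assumes no ties, as in the tables.
+    order = sorted(range(len(xs)), key=lambda i: xs[i])
+    r = [0] * len(xs)
+    for rank, i in enumerate(order):
+        r[i] = rank
+    return r
+
+def spearman(xs, ys):
+    # Spearman's rho via the classic 1 - 6*sum(d^2) / (n*(n^2-1)) formula.
+    rx, ry = ranks(xs), ranks(ys)
+    n = len(xs)
+    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
+    return 1 - 6 * d2 / (n * (n ** 2 - 1))
+
+# Llama 3.1-70B, Gemini 1.5 Pro, Claude 3.5 sonnet, GPT-4o, o1-preview
+s_match = [50.05, 52.87, 53.03, 54.79, 58.55]  # Table 2, S-Match
+acc     = [22.93, 55.07, 61.46, 69.72, 76.14]  # Table 5, Acc. ratio
+print(spearman(s_match, acc))  # 1.0
+```
+
+Both score lists order the models identically, hence a coefficient of exactly 1.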
+
+$\mathcal{Q}_4$ : Does more context boost performance? We also investigate the impact of input context length for EXPDESIGN. As shown in Figure 5, we scale the input pre-experiment context length from 0.1k to 10k tokens (10k is the length of the longest paper). For experiment design, more input context does improve the performance of different LLMs, but the benefit stops after 8k tokens, meaning that once the necessary information is covered, scaling the context further becomes inefficient. Meanwhile, the explanation-generation results reveal that LLMs primarily depend on the given experiments rather than the paper context to explain motivations. This is not what we hope for: LLMs should explain the motivation based on a thorough understanding of the paper, just as human experts do. Hence, there is still a considerable gap between LLMs and humans in grasping research motivations.
+
+$\mathcal{Q}_5$ : Does multi-modal input boost performance? Intuitively, besides the text, figures can provide rich supplementary information when designing experiments for a given research topic; for example, an algorithm illustration can help the model better understand the research topic and its underlying motivations. Hence, we test the performance of different LMMs (Large Multimodal Models), including GPT-4o and InternVL2 (Chen et al., 2024b). Table 12 shows the ablation results on the figure data. To our surprise, the figure data does not improve the LMMs' results on this task and even harms their performance. This is probably due to the low informativeness of the figures: they usually consume more input tokens yet act only as supplementary information to the text. This points to future work on developing LMMs that can effectively leverage scientific figures.
+
+# 5.3. PAPERWEAKNESS
+
+Settings. Intuitively, the full paper content is necessary for paper reviewing. Therefore, instead of setting a maximum input length, in WEAKNESS we try to utilize the whole paper. As the input length of WEAKNESS is extremely long (see Table 9), we adopt a "split-combine" method: we first split the whole paper into smaller pieces and let LLMs predict the weaknesses of each piece separately; we then merge all pieces' weaknesses into a final prediction. We set the length of each piece to 2,000 and 3,000 words for open- and closed-source LLMs, respectively. Additionally, in this task, we also examine the performance of AI-SCI (Lu et al., 2024), which enhances LLMs' paper-review ability by leveraging advanced prompting techniques, e.g., self-reflection (Shinn et al., 2024) and response ensembling (Wang et al., 2023).
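+The split-combine procedure itself is straightforward; the sketch below illustrates it under simplifying assumptions (`ask_llm` is a hypothetical stand-in for any chat-completion call, and the merging here is plain concatenation rather than any deduplication the full pipeline might apply):
+
+```python
+def split_combine(paper_text, ask_llm, window=3000):
+    """Split a paper into word windows, elicit weaknesses per window, merge."""
+    words = paper_text.split()
+    pieces = [" ".join(words[i:i + window]) for i in range(0, len(words), window)]
+    weaknesses = []
+    for piece in pieces:
+        reply = ask_llm(
+            "List the weaknesses of the following paper excerpt, "
+            "one per line:\n\n" + piece
+        )
+        # Collect one weakness per non-empty output line.
+        weaknesses.extend(line.strip() for line in reply.splitlines() if line.strip())
+    return weaknesses
+```
+
+With a 3,000-word window, a 6,500-word paper yields three pieces, each reviewed independently before the lists are merged.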
+
+Main results. Table 6 shows the main results, where the closed-source LLMs' overall performances are generally superior to those of the open-source LLMs. Similarly, closed-source LLMs are particularly strong in S-Recall because they generate more weaknesses. However, there is still a considerable gap in weakness diversity between the LLMs and human experts. Compared with human reviews, most LLM-generated weaknesses are vague and lack the necessary knowledge about frontier research works. Surprisingly, AI-SCI performs worse than its backbone GPT-4o, especially on ITF-IDF, which underscores the challenge of WEAKNESS, i.e., simply adopting popular prompting techniques cannot adequately address this task.
+
+Table 6: Various LLMs' performances on the 993 instances of WEAKNESS. ITF-IDF (↑) measures weakness diversity.
+
+| Methods | S-F1 (%) | S-Precision (%) | S-Recall (%) | ITF-IDF (↑) |
+| --- | --- | --- | --- | --- |
+| Human Review | — | — | — | 7.69 |
+| *Open-source LLMs* | | | | |
+| OLMo-7B (Groeneveld et al., 2024) | 43.25 | 40.38 | 47.04 | 2.45 |
+| Mistral-7B (Jiang et al., 2023) | 42.03 | 43.80 | 40.77 | 1.17 |
+| Mixtral-8x22B-MoE (Jiang et al., 2024) | 43.23 | 44.59 | 42.23 | 0.98 |
+| Llama 3.1-70B (MetaAI, 2024) | 42.78 | 43.19 | 42.70 | 2.60 |
+| Qwen 2.5-72B (Qwen Team, 2024) | 42.74 | 43.80 | 42.05 | 1.21 |
+| *Closed-source LLMs* | | | | |
+| Gemini 1.5 Pro (Anil et al., 2023) | 48.75 | 43.97 | 55.08 | 5.88 |
+| Claude 3.5 sonnet (Anthropic, 2024) | 47.85 | 41.97 | 56.00 | 3.91 |
+| GPT-4o (OpenAI, 2024a) | 47.73 | 42.09 | 55.48 | 5.95 |
+| o1-preview (OpenAI, 2024b) | 48.62 | 42.54 | 57.08 | 5.63 |
+| o3-mini (OpenAI, 2025) | 46.33 | 42.00 | 51.99 | 5.85 |
+| *LLM Agent Framework* | | | | |
+| AI-SCI (GPT-4o) (Lu et al., 2024) | 45.05 | 40.02 | 51.91 | 2.23 |
+
+$\mathcal{Q}_1$ : Is split-combine effective? Ideally, if the LLM has a sufficient context window, splitting the input papers for separate processing is unnecessary. Consequently, we use LLMs that accept long context input to compare "split-combine" with "no-split", i.e., letting the LLMs write weaknesses given the full paper. In practice, we set the maximum number of input words to 20k, which ensures that $\geq 95\%$ of the papers in WEAKNESS can be fully processed. As shown in Table 10, compared with giving the full paper context, split-combine generally yields superior performance. During manual checking, we find that, when the full paper is available, LLMs frequently neglect some important sections and accordingly omit weaknesses, while split-combine ensures that the LLMs carefully brainstorm weaknesses within each smaller piece. Surprisingly, the LLMs' performance with the full paper context can be even worse than retaining only the first 3,000 words. This implies that even today's powerful long-context LLMs still fall short when processing long scientific documents.
+
+$\mathcal{Q}_2$ : Does multi-modal input boost performance? Our dataset covers both tables and figure illustrations extracted from the paper PDFs as inputs. Intuitively, when reviewing a paper, both figures and tables are critical, not only for a better understanding but also because some weaknesses relate to them. Therefore, in Table 13, we adopt two LMMs to investigate the effectiveness of image inputs. Overall, image information, covering both figures and tables, does not bring a significant performance improvement: only InternVL2 gains a boost after incorporating figures, while tables slightly hurt both models' results. This is probably because the LMMs cannot reason well over information-intensive images, especially table images.
+
+# 6. Conclusion
+
+In this work, we propose AAAR-1.0, a novel benchmark targeting a comprehensive evaluation of current LLMs' capacity to assist AI research. AAAR-1.0 consists of distinct expertise-intensive tasks along with curated evaluation metrics. We collect high-quality data by employing senior AI researchers and conducting strict data examination. Extensive experiments highlight the challenge and value of AAAR-1.0.
+
+# Acknowledgments
+
+The authors would like to thank Ibraheem Moosa and Sarkar Snigdha Sarathi Das for assisting in the data collection.
+
+# Impact Statement
+
+Our study explores whether LLMs can assist human researchers in AI research. We do not advocate for AI replacing human researchers. Instead, we stress that the primary responsibility for scientific research should remain with humans to prevent societal risks, with LLMs serving as tools to enhance research efficiency. Specifically, our work analyzes the strengths and weaknesses of LLMs so that researchers remain judicious in their use of these tools. Our goal is to mitigate risks while maximizing the benefits offered by LLMs. We are committed to the careful distribution of data collected in our research, ensuring it is used solely for research purposes.
+
+# References
+
+Almazrouei, E., Alobeidli, H., Alshamsi, A., Cappelli, A., Cojocaru, R., Debbah, M., Goffinet, E., Heslow, D., Launay, J., Malartic, Q., Noune, B., Pannier, B., and Penedo, G. Falcon-40B: an open large language model with state-of-the-art performance, 2023.
+Anil, R., Borgeaud, S., Wu, Y., Alayrac, J.-B., Yu, J., Soricut, R., Schalkwyk, J., Dai, A. M., Hauth, A., Team, G., et al. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805, 2023.
+Anthropic. Introducing claude 3.5 sonnet. https:// www.anthropic.com/news/claude-3-5-sonnet, June 2024.
+Chamoun, E., Schlichtkrull, M., and Vlachos, A. Automated focused feedback generation for scientific writing assistance. arXiv preprint arXiv:2405.20477, 2024.
+Chan, J. S., Chowdhury, N., Jaffe, O., Aung, J., Sherburn, D., Mays, E., Starace, G., Liu, K., Maksin, L., Patwardhan, T., et al. Mle-bench: Evaluating machine learning agents on machine learning engineering. arXiv preprint arXiv:2410.07095, 2024.
+Chen, Z., Chen, S., Ning, Y., Zhang, Q., Wang, B., Yu, B., Li, Y., Liao, Z., Wei, C., Lu, Z., et al. Scienceagentbench: Toward rigorous assessment of language agents for data-driven scientific discovery. arXiv preprint arXiv:2410.05080, 2024a.
+Chen, Z., Wang, W., Tian, H., Ye, S., Gao, Z., Cui, E., Tong, W., Hu, K., Luo, J., Ma, Z., et al. How far are we to gpt-4v? closing the gap to commercial multimodal models with open-source suites. arXiv preprint arXiv:2404.16821, 2024b.
+Clark, C. and Divvala, S. Pdfigures 2.0: Mining figures from research papers. In Proceedings of the 16th ACM/IEEE-CS on Joint Conference on Digital Libraries, pp. 143-152, 2016.
+Du, J., Wang, Y., Zhao, W., Deng, Z., Liu, S., Lou, R., Zou, H. P., Venkit, P. N., Zhang, N., Srinath, M., Zhang, H. R., Gupta, V., Li, Y., Li, T., Wang, F., Liu, Q., Liu, T., Gao, P., Xia, C., Xing, C., Cheng, J., Wang, Z., Su, Y., Shah, R. S., Guo, R., Gu, J., Li, H., Wei, K., Wang,
+
+Z., Cheng, L., Ranathunga, S., Fang, M., Fu, J., Liu, F., Huang, R., Blanco, E., Cao, Y., Zhang, R., Yu, P. S., and Yin, W. Llms assist NLP researchers: Critique paper (meta-)reviewing. In The 2024 Conference on Empirical Methods in Natural Language Processing, 2024. doi: 10.48550/ARXIV.2406.16253. URL https://doi.org/10.48550/arXiv.2406.16253.
+Gao, Z., Brantley, K., and Joachims, T. Reviewer2: Optimizing review generation through prompt generation. arXiv preprint arXiv:2402.10886, 2024.
+Groeneveld, D., Beltagy, I., Walsh, P., Bhagia, A., Kinney, R., Tafjord, O., Jha, A. H., Ivison, H., Magnusson, I., Wang, Y., Arora, S., Atkinson, D., Authur, R., Chandu, K., Cohan, A., Dumas, J., Elazar, Y., Gu, Y., Hessel, J., Khot, T., Merrill, W., Morrison, J., Muennighoff, N., Naik, A., Nam, C., Peters, M. E., Pyatkin, V., Ravichander, A., Schwenk, D., Shah, S., Smith, W., Subramani, N., Wortsman, M., Dasigi, P., Lambert, N., Richardson, K., Dodge, J., Lo, K., Soldaini, L., Smith, N. A., and Hajishirzi, H. Olmo: Accelerating the science of language models. Preprint, 2024.
+Gu, J., Ye, J., Yin, W., and Wang, G. Adaptive and explainable margin trading via large language models on portfolio management. In Proceedings of the 5th ACM International Conference on AI in Finance (ICAIF'24), 2024.
+Huang, Q., Vora, J., Liang, P., and Leskovec, J. Mlagentbench: Evaluating language agents on machine learning experimentation. In Forty-first International Conference on Machine Learning, 2024.
+Jiang, A. Q., Sablayrolles, A., Mensch, A., Bamford, C., Chaplot, D. S., Casas, D. d. l., Bressand, F., Lengyel, G., Lample, G., Saulnier, L., et al. Mistral 7b. arXiv preprint arXiv:2310.06825, 2023.
+Jiang, A. Q., Sablayrolles, A., Roux, A., Mensch, A., Savary, B., Bamford, C., Chaplot, D. S., Casas, D. d. l., Hanna, E. B., Bressand, F., et al. Mixtral of experts. arXiv preprint arXiv:2401.04088, 2024.
+Jin, Y., Zhao, Q., Wang, Y., Chen, H., Zhu, K., Xiao, Y., and Wang, J. Agentreview: Exploring peer review dynamics with llm agents. arXiv preprint arXiv:2406.12708, 2024.
+Kumar, S., Ghosal, T., Goyal, V., and Ekbal, A. Can large language models unlock novel scientific research ideas? arXiv preprint arXiv:2409.06185, 2024.
+Labrak, Y., Bazoge, A., Morin, E., Gourraud, P.-A., Rouvier, M., and Dufour, R. Biomistral: A collection of open-source pretrained large language models for medical domains. arXiv preprint arXiv:2402.10373, 2024.
+
+Li, H., Jiang, H., Zhang, T., Yu, Z., Yin, A., Cheng, H., Fu, S., Zhang, Y., and He, W. Traineragent: Customizable and efficient model training through llm-powered multi-agent system. arXiv preprint arXiv:2311.06622, 2023.
+Li, R., Patel, T., Wang, Q., and Du, X. Mlr-copilot: Autonomous machine learning research based on large language models agents. arXiv preprint arXiv:2408.14033, 2024.
+Liang, W., Zhang, Y., Cao, H., Wang, B., Ding, D. Y., Yang, X., Vodrahalli, K., He, S., Smith, D. S., Yin, Y., et al. Can large language models provide useful feedback on research papers? a large-scale empirical analysis. NEJM AI, 1(8):A1oa2400196, 2024.
+Lin, C.-Y. Rouge: A Package for Automatic Evaluation of Summaries. In Text summarization branches out, pp. 74-81, 2004.
+Lin, J., Yin, H., Ping, W., Lu, Y., Molchanov, P., Tao, A., Mao, H., Kautz, J., Shoeybi, M., and Han, S. Vila: On pre-training for visual language models, 2023.
+Liu, N. F., Lin, K., Hewitt, J., Paranjape, A., Bevilacqua, M., Petroni, F., and Liang, P. Lost in the middle: How language models use long contexts. Transactions of the Association for Computational Linguistics, 12:157-173, 2024.
+Lo, K., Shen, Z., Newman, B., Chang, J. Z., Authur, R., Bransom, E., Candra, S., Chandrasekhar, Y., Huff, R., Kuehl, B., et al. Papermage: A unified toolkit for processing, representing, and manipulating visually-rich scientific documents. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 495-507, 2023.
+Lu, C., Lu, C., Lange, R. T., Foerster, J., Clune, J., and Ha, D. The AI Scientist: Towards fully automated open-ended scientific discovery. arXiv preprint arXiv:2408.06292, 2024.
+MetaAI. Introducing llama 3.1: Our most capable models to date. https://ai.meta.com/blog/meta-llama-3-1/, July 2024.
+Neuman, Y., Cohen, Y., and Yin, W. Identifying social norm violation in movie plots: from borat to american pie. Digit. Scholarsh. Humanit., 38(4):1636-1645, 2023. doi: 10.1093/LLC/FQAD052. URL https://doi.org/10.1093/llc/fqad052.
+OpenAI. Hello gpt-4o. https://openai.com/index/hello-gpt-4o/, May 2024a.
+OpenAI. Introducing openai o1. https://openai.com/index/introducing-openai-o1-preview/, September 2024b.
+
+OpenAI. Openai o3-mini. https://openai.com/index/openai-o3-mini/, January 2025.
+Praskievicz, S. River classification as a geographic tool in the age of big data and global change. Geographical Review, 108(1):120-137, 2018.
+Rakhimov, M., Akhmadjonov, R., and Javliev, S. Artificial intelligence in medicine for chronic disease classification using machine learning. In 2022 IEEE 16th International Conference on Application of Information and Communication Technologies (AICT), pp. 1-6. IEEE, 2022.
+Reimers, N. Sentence-bert: Sentence embeddings using siamese bert-networks. arXiv preprint arXiv:1908.10084, 2019.
+Shinn, N., Cassano, F., Gopinath, A., Narasimhan, K., and Yao, S. Reflexion: Language agents with verbal reinforcement learning. Advances in Neural Information Processing Systems, 36, 2024.
+Si, C., Yang, D., and Hashimoto, T. Can llms generate novel research ideas? a large-scale human study with $100+$ nlp researchers. arXiv preprint arXiv:2409.04109, 2024.
+Song, L., Zhang, J., Cheng, L., Zhou, P., Zhou, T., and Li, I. Nlpbench: Evaluating large language models on solving nlp problems. arXiv preprint arXiv:2309.15630, 2023.
+Tang, X., Liu, Y., Cai, Z., Shao, Y., Lu, J., Zhang, Y., Deng, Z., Hu, H., An, K., Huang, R., et al. Ml-bench: Evaluating large language models and agents for machine learning tasks on repository-level code. arXiv e-prints, pp. arXiv-2311, 2023.
+Team, G. Google launches gemma 2, its next generation of open models. https://blog.google/technology/ developers/google-gemma-2/, Jun 2024a.
+Team, Q. Qwen2.5: A party of foundation models, September 2024b. URL https://qwenlm.github.io/blog/qwen2.5/.
+Wang, M., Chen, L., Fu, C., Liao, S., Zhang, X., Wu, B., Yu, H., Xu, N., Zhang, L., Luo, R., et al. Leave no document behind: Benchmarking long-context llms with extended multi-doc qa. arXiv preprint arXiv:2406.17419, 2024.
+Wang, X., Wei, J., Schuurmans, D., Le, Q. V., Chi, E. H., Narang, S., Chowdhery, A., and Zhou, D. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023. URL https://openreview.net/pdf?id=1PL1NIMMrw.
+
+Weng, Y., Zhu, M., Bao, G., Zhang, H., Wang, J., Zhang, Y., and Yang, L. Cycleresearcher: Improving automated research via automated review. arXiv preprint arXiv:2411.00816, 2024.
+Yu, B., Baker, F. N., Chen, Z., Ning, X., and Sun, H. Llasmol: Advancing large language models for chemistry with a large-scale, comprehensive, high-quality instruction tuning dataset. arXiv preprint arXiv:2402.09391, 2024a.
+Yu, H., Hong, Z., Cheng, Z., Zhu, K., Xuan, K., Yao, J., Feng, T., and You, J. Researchtown: Simulator of human research community. arXiv preprint arXiv:2412.17767, 2024b.
+Zhu, M., Weng, Y., Yang, L., and Zhang, Y. Deepreview: Improving llm-based paper review with human-like deep thinking process. arXiv preprint arXiv:2503.08569, 2025.
+
+# Appendices
+
+Within this supplementary material, we elaborate on the following aspects:
+
+- Appendix A: Data Statistics and Diversity
+- Appendix B: Implementation Details
+- Appendix C: More Experiment Results and Details
+- Appendix D: Data Cases and Annotation Platform Illustration
+- Appendix E: Prompt Templates
+
+# A. Data Statistics and Diversity
+
+We provide the detailed data statistics of the three datasets in our benchmark, as shown in Tables 7, 8, and 9. We use the NLTK package to tokenize words and count lengths. When calculating the length of equations, we first use the pylatexenc tool to simplify the equations.
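+For illustration, this length counting can be approximated with a small helper. Note that the regex-based simplifier below is only a rough stand-in for pylatexenc's LaTeX-to-text conversion, which the statistics actually use:
+
+```python
+import re
+
+def simplify_latex(eq: str) -> str:
+    # Rough approximation of LaTeX-to-text simplification:
+    # drop commands like \frac or \sum, then braces and math delimiters.
+    eq = re.sub(r"\\[a-zA-Z]+", " ", eq)   # \commands
+    eq = re.sub(r"[{}$\\]", "", eq)        # braces, $, stray backslashes
+    return re.sub(r"\s+", " ", eq).strip()
+
+def char_length(eq: str) -> int:
+    # Equation length in characters after simplification (cf. Table 7).
+    return len(simplify_latex(eq))
+
+print(simplify_latex(r"\frac{1}{n}\sum_{i=1}^{n} x_i"))
+```
+
+Counting on the simplified string avoids inflating equation lengths with LaTeX markup that carries no content.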
+
+Meanwhile, for WEAKNESS, we also plot the review-score distribution of the papers used in the dataset, as well as the track distribution. As shown in Figure 3, our dataset has a decent distribution: the papers are uniformly distributed across 13 tracks, and most papers' scores range from 5 to 8 (i.e., most papers are weakly rejected or weakly accepted).
+
+Table 7: The statistics of EQINFER. Here, the "left" and "right" input contexts indicate the paper contexts before and after the missing equation; "pos." denotes the ground-truth equations (written by the source paper authors), while "neg." denotes the GPT-4-synthesized wrong equations.
+
+| Statistic | Value |
+| --- | --- |
+| # of positive equations | 1,049 |
+| # of negative equations | 3,147 |
+| # of source papers | 869 |
+| ave. "left" input context length (in words) | 4,377 |
+| ave. "right" input context length (in words) | 6,362 |
+| max "left" input context length (in words) | 24,849 |
+| max "right" input context length (in words) | 32,948 |
+| min "left" input context length (in words) | 711 |
+| min "right" input context length (in words) | 8 |
+| ave. "pos." output equation length (in characters) | 55 |
+| ave. "neg." output equation length (in characters) | 48 |
+| max "pos." output equation length (in characters) | 1,039 |
+| max "neg." output equation length (in characters) | 306 |
+| min "pos." output equation length (in characters) | 6 |
+| min "neg." output equation length (in characters) | 4 |
+
+# B. Implementation Details
+
+# B.1. Metric Details
+
+When calculating the metrics, specifically the similarity-based scores, we utilize SentenceBERT (Reimers, 2019) to encode each segment (e.g., each experiment idea in the list) into a dense vector and then calculate the cosine similarity, which takes about 1GB of memory when running on a single A100 GPU.
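+A minimal sketch of this similarity-based matching is given below. It assumes an `embed` function (e.g., SentenceBERT's `encode`) and simplifies the matching rule to best-match thresholding, so it is illustrative rather than the exact S-Match implementation:
+
+```python
+import numpy as np
+
+def cosine(u, v):
+    # Cosine similarity between two dense vectors.
+    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
+
+def soft_f1(pred_segments, gold_segments, embed, threshold=0.5):
+    """Similarity-based F1 between predicted and gold segment lists.
+
+    `embed` maps a string to a dense vector (e.g., SentenceBERT encode);
+    a segment counts as matched if its best cosine similarity on the
+    other side exceeds `threshold`. Assumes both lists are non-empty.
+    """
+    pred_vecs = [embed(s) for s in pred_segments]
+    gold_vecs = [embed(s) for s in gold_segments]
+    sims = np.array([[cosine(p, g) for g in gold_vecs] for p in pred_vecs])
+    precision = float(np.mean(sims.max(axis=1) > threshold))
+    recall = float(np.mean(sims.max(axis=0) > threshold))
+    if precision + recall == 0:
+        return 0.0
+    return 2 * precision * recall / (precision + recall)
+```
+
+Because matching operates on embeddings rather than surface tokens, a paraphrase that copies wording gains nothing unless its meaning actually matches a gold segment.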
+
+Table 8: The statistics of EXPDESIGN.
+
+| Statistic | Value |
+| --- | --- |
+| # of instances | 100 |
+| # of source papers | 100 |
+| ave. input context length (in words) | 4,288 |
+| max input context length (in words) | 9,799 |
+| min input context length (in words) | 698 |
+| ave. # of input figures | 2.6 |
+| max # of input figures | 16.0 |
+| min # of input figures | 0.0 |
+| ave. length of Experiment&Explanation list | 5.7 |
+| ave. length per experiment (in words) | 34.3 |
+| ave. length per explanation (in words) | 27.1 |
+| max length of Experiment&Explanation list | 13 |
+| max length per experiment (in words) | 135 |
+| max length per explanation (in words) | 89 |
+| min length of Experiment&Explanation list | 2 |
+| min length per experiment (in words) | 9 |
+| min length per explanation (in words) | 9 |
+
+# B.2. LLMs Running Details
+
+In our experiments, we utilize various LLMs, both closed- and open-source. We list the model weight sources for the open-source LLMs:
+
+- OLMo-7B (Groeneveld et al., 2024): https://huggingface.co/allenai/OLMo-7B
+- Falcon-40B (Almazrouei et al., 2023): https://huggingface.co/tiiuae/falcon-40b
+- Gemma 2-27B (Gemma Team, 2024): https://huggingface.co/google/gemma-2-27b
+- Mistral-7B (Jiang et al., 2023): https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3
+- Mixtral-8x22B-MoE (Jiang et al., 2024): https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1
+- Llama 3.1-70B (MetaAI, 2024): https://huggingface.co/meta-llama/Llama-3.1-70B
+- Qwen 2.5-72B (Qwen Team, 2024): https://huggingface.co/Qwen/Qwen2.5-72B
+
+We use vLLM to unify the inference endpoints of all the above models. We use PyTorch 2.4.0 with CUDA 12.1 and 8 NVIDIA A100 GPUs for LLM inference.
+
+Meanwhile, we use gpt-4o-2024-08-06, gpt-4-1106-preview, o1-preview-2024-09-12, gemini-1.5-pro-002, and claude-3-5-sonnet-20240620 for the closed-source LLMs. We use LiteLLM to unify the API calls for all these LLMs.
+
+Given the unstable performance of LLMs, particularly closed-source ones, we run each model three times and select the median result from these repeated runs.
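+As a sketch, the median-of-three selection amounts to the following (a trivial helper shown for clarity, not the authors' code):
+
+```python
+from statistics import median
+
+def select_run(scores):
+    """Pick the median score from three repeated runs of the same model."""
+    assert len(scores) == 3, "each model is run three times"
+    return median(scores)
+
+print(select_run([27.25, 28.66, 26.90]))  # 27.25
+```
+
+Taking the median rather than the mean discards a single outlier run in either direction.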
+
+Table 9: The statistics of WEAKNESS.
+
+| Statistic | Value |
+| --- | --- |
+| # of instances | 993 |
+| # of source papers | 993 |
+| ave. input context length (in words) | 9,811 |
+| max input context length (in words) | 49,195 |
+| min input context length (in words) | 24 |
+| ave. # of input figures | 7.0 |
+| max # of input figures | 37.0 |
+| min # of input figures | 0.0 |
+| ave. # of input tables | 4.3 |
+| max # of input tables | 53.0 |
+| min # of input tables | 0.0 |
+| ave. # of reviewers per paper | 3.8 |
+| max # of reviewers per paper | 9.0 |
+| min # of reviewers per paper | 3.0 |
+| ave. # of weaknesses per reviewer | 4.8 |
+| max # of weaknesses per reviewer | 39.0 |
+| min # of weaknesses per reviewer | 1.0 |
+| ave. length of weakness (in words) | 39.1 |
+| max length of weakness (in words) | 371.0 |
+| min length of weakness (in words) | 2.0 |
+
+# C. More Experiment Results and Details
+
+# C.1. Input Context Scaling Investigation
+
+Figure 4, Figure 5, and Table 10 show the context scaling results of EQINFER, EXPDESIGN, and WEAKNESS.
+
+Table 10: The performance comparison of different input-processing methods for WEAKNESS. We use GPT-4o and GPT-4-Turbo because both accept a maximum input of 128k tokens. We also include the results of AI-SCI for reference. Here, "split-combine" splits the input paper into several pieces, where each piece's length is denoted as the "window size"; "no-split" means conventional input truncation, e.g., with a window size of 3,000, only the first 3,000 words of the paper are used. According to the data statistics, 20,000 words fully cover more than $95\%$ of the papers in our dataset.
+
+| Models | Input Context Processing | Window Size (in words) | S-F1 | S-Precision | S-Recall | ITF-IDF |
+| --- | --- | --- | --- | --- | --- | --- |
+| GPT-4o | split-combine | 3,000 | 47.73 | 42.09 | 55.48 | 5.95 |
+| GPT-4o | no-split | 3,000 | 45.74 | 43.45 | 48.54 | 5.92 |
+| GPT-4o | no-split | 20,000 | 45.47 | 42.97 | 48.51 | 6.02 |
+| AI-SCI | split-combine | 3,000 | 45.05 | 40.02 | 51.91 | 2.23 |
+| AI-SCI | no-split | 3,000 | 42.56 | 40.90 | 44.65 | 2.53 |
+| AI-SCI | no-split | 20,000 | 42.53 | 40.75 | 44.78 | 2.58 |
+
+# C.2. Human Evaluation on LLM-Generated Novel Experiments
+
+Figure 6 illustrates the evaluation guideline for novel experiments generated by LLMs. We ask 3 senior PhD students to evaluate each paper; that is, if the first two annotators disagree with each other, a third annotator will make a final decision. Table 11 presents several human evaluation cases.
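+The two-annotator-plus-tiebreaker protocol can be sketched as follows (`tiebreak` stands in for the third annotator; this is a hypothetical helper, not the authors' tooling):
+
+```python
+def adjudicate(label1, label2, tiebreak):
+    """First two annotators label independently; a third decides on disagreement.
+
+    `tiebreak` is a callable returning the third annotator's label and is
+    only consulted when the first two labels differ.
+    """
+    if label1 == label2:
+        return label1
+    return tiebreak()
+
+print(adjudicate("A", "A", lambda: "B"))  # A (agreement, no tiebreak)
+print(adjudicate("A", "B", lambda: "B"))  # B (third annotator decides)
+```
+
+Consulting the third annotator only on disagreement keeps annotation cost low while still resolving every conflict.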
+
+# C.3. Human Evaluation on LLM-Generated Explanation
+
+We ask 5 annotators to evaluate the LLM-generated explanations. Specifically, each of them is assigned 4 or 5 papers, along with the corresponding experiment lists. For each paper, the annotator is given 5 different models' outputs (model names are anonymized), and the annotator has to decide if each LLM-generated explanation is acceptable according to the experiment. We show the human evaluation results in Table 5.
+
+# C.4. Multi-Modal Input Ablation
+
+We post the multi-modal ablation study of EXPDESIGN and WEAKNESS in Table 12 and Table 13.
+
+# D. Data cases and Annotation Platform Illustration
+
+As shown in Figure 8, 9, and 10, we show the sample cases of the three tasks in AAAR-1.0. Meanwhile, we illustrate the screenshot of our annotation platform in Figure 7.
+
+# E. Prompt Templates
+
+In this appendix, we attach all the prompts used in this work, including prompts in data collection and model prediction, as shown in Figure 11, 12, and 13.
+
+
+(a) The review score distribution of the papers used in WEAKNESS.
+
+
+(b) The track distribution of the papers used in WEAKNESS.
+Figure 3: The data diversity illustration of WEAKNESS, including the score distribution and track distribution of the papers used in our dataset.
+
+
+Figure 4: The input context length scaling trend on the EQINFER task.
+
+
+
+
+Figure 5: The input context length scaling trend of different LLMs on the EXPDESIGN task.
+
+For each paper, you are given the paper's human-annotated experiments (Column C), along with three different models' predicted experiments (Columns D, G, J). These model-generated experiments are all novel experiments that the original human-annotated experiments (Column C) did not mention. Your task is to evaluate whether these novel experiments are good or not. Based on the original paper and its experiments, please rate the quality of each model-generated experiment:
+
+- A (necessary experiment): label an experiment with "A" if you think this experiment is necessary for this paper. A "necessary" experiment means that if the authors did not include it, the paper would very likely be rejected by reviewers. For example, if the paper proposes a novel neural adaptor model, an ablation study is required to see whether the proposed adaptor contributes to the performance.
+- B (optional experiment): label an experiment with "B" if you think this experiment is an optional choice for this paper. For example, if a paper proposes a new metric-learning algorithm, a representation-space visualization is not required but can be useful for enhancing the algorithm's explainability.
+- C (unrelated experiment): label an experiment with "C" if you think this experiment is unrelated to the core motivation of this paper, such as fancy experiments that could be omitted without any impact. Note that if a model-generated experiment is too general, e.g., simply suggesting an "ablation study" without any details, you can also categorize it as unrelated.
+
+In the "Your Assessment" column, write down your assessment of the model-generated experiments; for example, if there are five novel experiments, write a list of length 5: [A, B, C, A, B]. Leave a comment if you are not confident in any of your ratings.
+
+Figure 6: The human guideline for evaluating the LLM-generated novel experiments.
+
+Here is the suggested annotation pipeline shown in the screenshot:
+
+1. Click the PDF link (Column B, a Google Drive link) and read the "Experiment" section of the paper you are going to annotate. If you are not familiar with this paper, we also encourage you to read the full paper.
+2. For each experiment within the "Experiment" section, try to answer two questions: What experiments do you suggest doing? (Column C) Why do you suggest these experiments? (Column D) Write the "suggestion-style" answers by commenting on the PDF file directly, i.e., highlighting the related paragraphs/tables/figures (this comment-location information is a crucial part of your annotation); see the provided annotation examples for a better understanding.
+3. After finishing all the annotations on the PDF file, copy all your annotations into the sheet.
+4. Organize all the experiment suggestions into numbered lists in Columns C and D, and make sure the lists are consistent: if you make 7 experiment comments on the PDF, there must also be 7 items in Columns C and D.
+
+Annotators are asked to follow the same annotation format as the example sheet (e.g., how to write the list, how to make comments on the PDF). Usually, only the experiments in the paper's main body are considered, excluding the appendix, unless an appendix experiment is also critical to the paper, i.e., the authors explicitly claimed its importance or frequently mentioned it in the main body. (The remainder of the screenshot shows truncated spreadsheet columns with example annotations.)
+
+Figure 7: The annotation platform for collecting the annotation of EXPDESIGN. We ask annotators to first make comments on the Google Drive PDF, then move all the annotations to the online Google Doc (for further verification and discussion).
+
+Columns: Context Before | Context After | Equation | Answer.
+
+Context (shared by both cases): "In this paper, we investigate what types of stereotypical information are captured by pretrained language models. We present the first dataset comprising stereotypical attributes of a range of social groups and propose a method to elicit stereotypes encoded by pretrained language models in an unsupervised fashion. Moreover, we link the emergent stereotypes to their manifestation as basic emotions as a means to study their emotional effects in a more generalized manner [...] We then define emotion vectors $\hat{v} \in \mathbb{R}^{10}$ for each group $TGT$ [...]"
+
+Case 1 (answer: correct): $S_{emo}(TGT) = \sum_{i=1}^{|W_{TGT}|} w(i) / |W_{TGT}|$
+
+Case 2 (answer: incorrect): $S_{emo}(TGT) = \frac{1}{|W_{TGT}|} \sum_{w \in W_{TGT}} \mathrm{score}(w, emo)$
+
+Figure 8: Two sample cases of EQINFER.
+
+Table 11: Examples of human evaluation on the model-generated novel experiments.
+
+Paper: WiCE: Real-World Entailment for Claims in Wikipedia
+
+Original Experiments (by human):
+1. Analysis of Verification Problem Distribution: This paper should provide detailed analysis and statistics about the verification problems in the proposed dataset.
+2. Off-the-shelf entailment classification performance: The authors should provide the entailment classification performance of existing models on the proposed dataset without fine-tuning.
+3. Human Performance: The authors should show human performance on the proposed dataset.
+4. Performance of fine-tuned models: The authors should provide the performance of models fine-tuned on the proposed dataset.
+5. Performance on the evidence retrieval task: The authors should show the performance on the evidence retrieval task, which is a sub-task of the proposed dataset.
+6. Performance of LLMs: The authors should provide the performance of LLMs on the proposed dataset.
+7. Retrieval+Entailment: The authors should provide experiments on a framework of retrieving evidence sentences and evaluating entailment using the retrieved sentences.
+8. Analysis of Claim-Split on Downstream Tasks: The authors should analyze how claim-split, the proposed method, is effective on tasks other than the proposed dataset.
+
+Novel Experiment (by LLMs): Assess model performance on WiCE without fine-tuning to test domain generalization from traditional NLI datasets.
+
+Rating: A
+
+Paper: MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
+
+Original Experiments (by human):
+1. Results of multiple LLMs on popular math datasets: The authors should show the performance of multiple LLMs fine-tuned on their dataset on popular math datasets.
+2. Performance of open-source models with different sizes: The authors should show the performance of models with different sizes trained on the proposed dataset.
+3. Comparison to SOTA closed-source models: The authors should compare the performance of open-source models trained on the proposed dataset and strong closed-source models.
+4. Evaluate the effect of augmentations: The authors need to perform an ablation study to compare the different augmentation methods they proposed.
+5. Analyze Training on Incorrect Answers: The authors should analyze whether wrong answers generated in data augmentation can harm the performance.
+6. Evaluate other ways to increase the size of training data: The authors should evaluate other ways to increase the training data size and compare the performance with models trained on their proposed training data.
+7. Error Analysis: The authors should analyze the performance of their models in different conditions (e.g., lengths of questions).
+
+Novel Experiment (by LLMs): Prompt Sensitivity Analysis: Evaluate the sensitivity of MetaMath to different prompt formats or phrasings of mathematical questions.
+
+Rating: B
+
+Paper: Large Language Models Cannot Self-Correct Reasoning Yet
+
+Original Experiments (by human):
+1. Self-Correction with Oracle Labels: The authors should evaluate self-correction performance with oracle labels.
+2. Intrinsic Self-Correction: The authors should show performance without using the oracle labels.
+3. Analysis of Mistakes in Self-Correction: The authors should analyze the properties of mistakes made in the self-correction framework.
+4. Multi-Agent Debate: The authors should evaluate self-correction with multi-agent debate.
+5. Prompt Design Analysis: The authors should analyze the influence of prompt design for the initial responses on self-correction performance.
+
+Novel Experiment (by LLMs): Visualization of learned representations or attention mechanisms to provide insights into the model's inner workings.
+
+Rating: C
+
+Table 12: The figure inputs ablation of EXPDESIGN. For the maximum text input length, as in the setting of Table 2, we use 2,000 and 3,000 words for open- and closed-source models, respectively. For the closed-source GPT-4o and GPT-4, as they have long context windows, we use all the figures of each paper; for InternVL2, we randomly select two figures per input paper.
+
+Experiment Design metrics: En-F1, En-Precision, En-Recall, S-Match. Experiment Explanation metrics: ROUGE-L, ROUGE-1.
+
+| Models | En-F1 | En-Precision | En-Recall | S-Match | ROUGE-L | ROUGE-1 |
+|---|---|---|---|---|---|---|
+| GPT-4o | 25.03 | 22.25 | 36.59 | 58.54 | 29.25 | 35.50 |
+| w/ figures | 25.39 | 24.35 | 32.80 | 58.53 | 27.87 | 34.30 |
+| InternVL2-26B | 24.26 | 39.50 | 14.91 | 50.03 | 29.13 | 34.26 |
+| w/ figures | 15.04 | 38.50 | 8.64 | 50.29 | 29.29 | 34.06 |
+
+Table 13: The ablation study about the paper tables and figures of WEAKNESS. Based on the conclusion in Table 10, we use the "split Combine" to process the text input here (2,000 and 3,000 words context window size for open- and closed-source models). For GPT-4o, we use all the table/figure images; while for InternVL2, we randomly select two images per paper, i.e., two random figures, two random tables, or one random figure + table.
+
+| Models | S-F1 | S-Precision | S-Recall | ITF-IDF |
+|---|---|---|---|---|
+| GPT-4o | 47.73 | 42.09 | 55.48 | 5.95 |
+| w/ tables | 46.76 | 41.32 | 54.17 | 5.53 |
+| w/ figures | 46.62 | 41.20 | 54.04 | 5.48 |
+| w/ tables & figures | 46.58 | 41.17 | 53.98 | 5.36 |
+| InternVL2-26B | 41.91 | 41.02 | 43.28 | 1.48 |
+| w/ tables | 40.55 | 40.37 | 42.91 | 1.46 |
+| w/ figures | 42.88 | 42.10 | 43.76 | 1.46 |
+| w/ tables & figures | 42.44 | 42.00 | 43.31 | 1.44 |
+
+Pre-Experiment Context (Input): In this paper, we show that Multilingual BERT (\mbert{}), released by \cite{devlin2018bert} as a single language model pre-trained from monolingual corpora in 104 languages, is surprisingly good at zero-shot cross-lingual model transfer, in which task-specific annotations in one language are used to fine-tune the model for evaluation in another language [...]
+
+Experiment Design (Output):
+1. Expand the definition of overlap. The authors should calculate overlap based on all the words shared between two languages, instead of the shared vocabulary of just the entities.
+2. Report performance gains for using some popular language similarity criterion, e.g., WALS.
+3. Effect of tokens per word. The authors should perform experiments on more scripts, specifically looking at the effect of words being split into multiple tokens.
+4. Control for vocabulary overlap among languages. Choose languages that have large vocabulary overlap and different word-order features. Train on one set of languages and then perform zero-shot evaluation on the rest.
+5. Ablate the effect of common word pieces by using a non-overlapping tokenizer for different languages.
+
+Motivation Explanation (Output):
+1. To check whether non-entity overlap between two languages also contributes to better performance on recognizing the entities. The model may use information from non-entity words to recognize an entity. Additionally, successfully recognizing that a word is not an entity also contributes to the performance on the NER task.
+2. To understand which features the language model can exploit for cross-lingual transfer. This will give us insights into what typological similarity the multilingual language model can pick up during pretraining.
+3. To understand the effect of POS label frequency. The idea is that two languages with a similar token-to-word ratio will result in better cross-lingual transfer. The reason is that continuation tokens should be classified properly, and a change in the frequency of continuation tokens in the training corpus will result in different performance.
+4. To properly control for the effect of vocabulary overlap. Since large overlap in vocabulary can lead to performance gains, the reported results do not reflect the true impact of word order.
+5. To understand the effect of sentence structure in different languages for cross-lingual understanding of multilingual language models. Since there will be no overlap between different languages, the model must learn cross-lingual representations based on syntactic and semantic properties of the languages.
+
+Figure 9: A sample case of EXPDESIGN.
+
+Paper Context (Input) Weaknesses (Output) A Neural Process (NP) (Garnelo et al., 2018a;b) meta-learns a stochastic process describing the relationship between inputs and outputs in a given data stream, where each task in the data stream consists of a meta-training set of input-output pairs and also a meta-validation set. The NP then defines an implicit stochastic process whose functional form is determined by a neural network taking the meta-training set as an input [...] Reviewer#1:
+1. The writing is not on par with the idea.
+Reviewer#2:
+1. It would be informative to see how MPNPs scale with higher dimensionality. For example, empirical comparisons on a high-D regression task complementing the 1-D one.
+2. The results of the Lotka-Volterra task would deserve further analysis: Why is BNP/BANP seemingly more apt at dealing with misspecification than MPNPs? My understanding is that model data-mismatch is a problem general to Bayesian inference, i.e., should also affect B(A)NP.
+Reviewer#3:
+1. The consistent outperformance of BNP/BANP over MPNP/MPANP weakens the central hypothesis of the paper.
+2. The comparisons appear to be against relatively old versions of NPs. I wonder how the proposed method compares against more recent versions of NPs than ANPs (2018) and BNPs (2020), for instance Evidential Turing Processes (2022).
+3. I find that the adaptation of the MPNP idea to CANP a bit dilutes the main message of the paper. It is after all a heavy pipeline with many components.
+4. It is great that the paper points out the limitations of the presented method, but would be even better if it also gave an educated guess on which properties of the method cause them.
+
+Figure 10: A sample case of WEAKNESS.
+
+LLM-based Equation Synthesis:
+
+##### Task: You are asked to complete the equation in an NLP paper. Given the context before and after an equation, where the equation is deleted, you should help me recover that equation.
+##### Requirements: 1. Give me the latex source code of the missing equation. 2. Only give me the equation, avoid any other explanations.
+##### Context Before: {The context before the equation}
+##### Context After: {The context after the equation}
+##### Equation: {Left part of the ground truth equation}
+##### Your Answer:
+
+LLM-based Equation Filtering:
+
+##### Task: You are given the source code of a latex equation. Based on your knowledge regarding Machine Learning and NLP, you should help me identify if this equation has an obvious flaw.
+##### Equation: {equation}
+##### Your Answer:
+
+Model Prediction:
+
+##### Task: You are given the latex source code of the context before and after an equation in an NLP paper, while this equation is masked. Your task is to identify the correctness of the given candidate equation. Only provide either 'Correct' or 'Wrong'. Avoid any explanations.
+##### Your Answer:
+
+Figure 11: The prompts used in EQINFER, including both data collection and model prediction.
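The model-prediction step above reduces to assembling the prompt and mapping the model's reply onto a binary label. A minimal sketch, assuming the field wording paraphrased from Figure 11; the helper names are illustrative and the LLM call itself is out of scope:

```python
def build_eqinfer_prompt(context_before, context_after, candidate_eq):
    """Assemble an EQINFER model-prediction prompt in the Figure 11 style."""
    return (
        "##### Task:\nYou are given the latex source code of the context "
        "before and after an equation in an NLP paper, while this equation "
        "is masked. Your task is to identify the correctness of the given "
        "candidate equation. Only provide either 'Correct' or 'Wrong'. "
        "Avoid any explanations.\n"
        f"##### Context Before:\n{context_before}\n"
        f"##### Context After:\n{context_after}\n"
        f"##### Equation:\n{candidate_eq}\n"
        "##### Your Answer:"
    )

def parse_eqinfer_answer(raw):
    """Map a (possibly noisy) model reply onto the binary EQINFER label."""
    return raw.strip().lower().startswith("correct")
```

The lenient parsing in `parse_eqinfer_answer` matters in practice: models often prepend whitespace or vary capitalization even when told to answer with a single word.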
+
+LLM-based Leaking Sentence Deletion:
+
+You are given a sentence (or a short paragraph) from an ML paper, along with a list of the experiments from this paper; help me decide whether this sentence discusses any experiments in the list. Let's say, if one sentence includes clues for coming up with any experiments in the list, we call this sentence a 'leaking sentence'; otherwise, if any experiment ideas cannot be inferred from the sentence, we call it a 'non-leak sentence'. Please give me a '1' if this sentence is a 'leaking sentence'; otherwise, give me a '0'. ### Experiment List: {The experiment list}. ### Sentence: {The sentence}. Now, give me your decision (give me either '0' or '1', only the number, without any explanations):
+
+Model Prediction (Experiment Design):
+
+You are partially given an ML paper (in latex), including some useful sections (e.g., 'abstract' and 'introduction') having some basic introductions to the research of this paper, where all the 'experiment'-related sections are deleted. Please first help me carefully read these sections and try to understand the motivations of this research, such as 'what are the authors trying to propose/demonstrate?' and 'what are the main contributions/differences of this paper from others?' Then, based on your in-depth understanding of this paper, imagine that you are the authors of this paper; what experiments do you have to conduct to prove your research? Namely, you have to **recover the deleted experiments** by providing me with **a list of experiment ideas**, where the list briefly summarizes the experiments the authors should conduct. Here is an example: {few-shot examples} Here is the target ML paper (partial content): {The context input}. Now, based on this paper, give me a list of experiments the author has to do. Please only give me the list, without any other words. ### Your Experiment List:
+
+Model Prediction (Motivation Explanation):
+
+You are partially given an NLP paper (in latex), including some useful sections (e.g., 'abstract' and 'introduction') having some basic introductions to this research, where all the 'experiment'-related sections are deleted. Meanwhile, you are also given a list of experiments that try to predict the missed experiments in this paper. Now, imagine the experiment list you created; you have to explain **why you suggested these experiments**. Here is an example experiment list: {few-shot examples} Here is the example corresponding explanation list: {few-shot examples} Now, help me look at the following paper: ### Paper: {The context input}. ### Experiment List: {The experiment list}. Please give me your explanation list, which should be the same length as the 'Experiment List'; the items of the two lists correspond one-to-one. Only give me the list without any other useless words. ### Explanation List:
+
+Figure 12: The prompts used in EXPDESIGN, including both data collection and model prediction.
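The leaking-sentence deletion step is, structurally, a filter over the paper's sentences that keeps only those the LLM judge labels '0'. A minimal sketch, where `judge` stands in for the actual LLM call behind the Figure 12 prompt and the toy `keyword_judge` is purely illustrative:

```python
STOPWORDS = {"the", "of", "on", "a", "an", "we"}

def remove_leaking_sentences(sentences, experiment_list, judge):
    """Keep only sentences the judge labels '0' (non-leak); drop '1' (leak)."""
    return [s for s in sentences if judge(s, experiment_list).strip() != "1"]

def keyword_judge(sentence, experiment_list):
    """Toy stand-in for the LLM judge: flag a sentence as leaking ('1') if it
    shares a content word with any listed experiment, else return '0'."""
    words = set(sentence.lower().split())
    for exp in experiment_list:
        if (words & set(exp.lower().split())) - STOPWORDS:
            return "1"
    return "0"
```

A real pipeline would pass an LLM-backed callable as `judge`; the interface stays the same.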
+
+Model Prediction (Weaknesses):
+
+You are given an NLP paper, along with its figure illustrations. Imagine you are a machine learning expert with rich research experience. Please carefully review this paper and identify the weaknesses of this research. Here is the paper (it might be in partial content): {The context input}. Now, based on the provided context, give me a list of weaknesses of this research paper (such as '1. XXX\n2. XXX', one point per line). Note that if the given context is irrelevant to research, such as it is talking about 'acknowledgement', just generate 'No research content'. Please either give me the weakness list of this research paper or generate 'No research content' to clarify this is not a research paper, without any other words. Your Answer:
+
+Figure 13: The prompts used in WEAKNESS.
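The prompt's output contract (one numbered point per line, or the literal sentinel 'No research content') makes the reply easy to post-process. A hypothetical parser sketch, not code from the benchmark:

```python
import re

def parse_weakness_list(reply):
    """Split the expected reply format ('1. XXX' / '2. XXX', one point per
    line) into a list of weakness strings; the sentinel 'No research content'
    yields an empty list."""
    if reply.strip().lower() == "no research content":
        return []
    items = []
    for line in reply.splitlines():
        match = re.match(r"\s*\d+\.\s*(.+)", line)
        if match:
            items.append(match.group(1).strip())
    return items
```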
\ No newline at end of file
diff --git a/aaar10assessingaispotentialtoassistresearch/images.zip b/aaar10assessingaispotentialtoassistresearch/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..fa7f5d6fd30ff7efcc4a01d74ad46f8135d29579
--- /dev/null
+++ b/aaar10assessingaispotentialtoassistresearch/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4d68d8ce0541bfd100bc43dbcbdf45dc2cf4f1e0c3a28c36241a3dabf7a5db0b
+size 2114005
diff --git a/aaar10assessingaispotentialtoassistresearch/layout.json b/aaar10assessingaispotentialtoassistresearch/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..85ed0abec83329ecd204436ed4e66fe3be8a6779
--- /dev/null
+++ b/aaar10assessingaispotentialtoassistresearch/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5109a201ce296c857e2808c91f984400b10c298fdeaedfa00e98829b63dd5cad
+size 539383
diff --git a/abinitiononparametricvariableselectionforscalablesymbolicregressionwithlargep/be605c98-e986-4916-a1cf-5a2cf4d89930_content_list.json b/abinitiononparametricvariableselectionforscalablesymbolicregressionwithlargep/be605c98-e986-4916-a1cf-5a2cf4d89930_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..dd140247ea3854f6def66e14e9249c999b451769
--- /dev/null
+++ b/abinitiononparametricvariableselectionforscalablesymbolicregressionwithlargep/be605c98-e986-4916-a1cf-5a2cf4d89930_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:672207b0ab3f9251544a74055040f1b97e714aee6def49ff0ecc720fdd50b794
+size 123947
diff --git a/abinitiononparametricvariableselectionforscalablesymbolicregressionwithlargep/be605c98-e986-4916-a1cf-5a2cf4d89930_model.json b/abinitiononparametricvariableselectionforscalablesymbolicregressionwithlargep/be605c98-e986-4916-a1cf-5a2cf4d89930_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..fac0924fe872aa4740ee2cab887ba629a443773a
--- /dev/null
+++ b/abinitiononparametricvariableselectionforscalablesymbolicregressionwithlargep/be605c98-e986-4916-a1cf-5a2cf4d89930_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:13dc41e252e1be2190432c77945e1f50ef08ab05323986fadbe6c50ddfd31be7
+size 147979
diff --git a/abinitiononparametricvariableselectionforscalablesymbolicregressionwithlargep/be605c98-e986-4916-a1cf-5a2cf4d89930_origin.pdf b/abinitiononparametricvariableselectionforscalablesymbolicregressionwithlargep/be605c98-e986-4916-a1cf-5a2cf4d89930_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..031cfd568ddddcaf272a0dd607e5f0a2a889b43c
--- /dev/null
+++ b/abinitiononparametricvariableselectionforscalablesymbolicregressionwithlargep/be605c98-e986-4916-a1cf-5a2cf4d89930_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:359664e9e82636cddac56e31ee8dc1a4687b0ec5c42f8b36311a469125670446
+size 591272
diff --git a/abinitiononparametricvariableselectionforscalablesymbolicregressionwithlargep/full.md b/abinitiononparametricvariableselectionforscalablesymbolicregressionwithlargep/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..c5b858b8cba58d4e3029c9bb9eb80fc7565a1ba3
--- /dev/null
+++ b/abinitiononparametricvariableselectionforscalablesymbolicregressionwithlargep/full.md
@@ -0,0 +1,494 @@
+# Ab Initio Nonparametric Variable Selection for Scalable Symbolic Regression with Large $p$
+
+Shengbin Ye1 2 Meng Li1
+
+# Abstract
+
+Symbolic regression (SR) is a powerful technique for discovering symbolic expressions that characterize nonlinear relationships in data, gaining increasing attention for its interpretability, compactness, and robustness. However, existing SR methods do not scale to datasets with a large number of input variables (referred to as extreme-scale SR), which is common in modern scientific applications. This "large $p$ " setting, often accompanied by measurement error, leads to slow performance of SR methods and overly complex expressions that are difficult to interpret. To address this scalability challenge, we propose a method called PAN+SR, which combines a key idea of ab initio nonparametric variable selection with SR to efficiently pre-screen large input spaces and reduce search complexity while maintaining accuracy. The use of nonparametric methods eliminates model misspecification, supporting a strategy called parametric-assisted nonparametric (PAN). We also extend SRBench, an open-source benchmarking platform, by incorporating high-dimensional regression problems with various signal-to-noise ratios. Our results demonstrate that PAN+SR consistently enhances the performance of 19 contemporary SR methods, enabling several to achieve state-of-the-art performance on these challenging datasets.
+
+# 1. Introduction
+
+Symbolic regression (SR) is a mathematical technique for finding a symbolic expression that matches data from an unknown function. An early example of SR dates back to
+
+$^{1}$ Department of Statistics, Rice University, Houston, TX, USA $^{2}$ Department of Statistics and Data Science, Northwestern University, Evanston, IL, USA. Correspondence to: Meng Li .
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+the 1600s when Johannes Kepler used astronomical data to discover that Mars' orbit was elliptical. This discovery, along with Kepler's other parsimonious and analytically tractable laws of planetary motion, helped launch a scientific revolution.
+
+With the recent progress in theoretical modeling and experimental instrumentation, researchers have entered a new era of big data. The development of SR models is particularly important, as they have emerged as a powerful tool for developing machine learning models that are intelligible, interpretable, and compact. Unlike large numerical models, the mathematical expressions used in SR models enable an easy understanding of their behavior, making them valuable in fields such as physics, where they can connect newly discovered physical laws with theory to facilitate subsequent theoretical developments (Wu & Tegmark, 2019). Moreover, SR models offer a safe and responsible option for machine learning applications with high societal stakes, such as those related to human lives, as they are well-suited for human interpretability and in-depth analysis. As such, SR models have found successful applications across a range of fields, including astrophysics (Lemos et al., 2023), chemistry and materials science (Hernandez et al., 2019; Liu et al., 2020; 2022), control (Derner et al., 2020), economics (Verstyuk & Douglas, 2022), mechanical engineering (Kronberger et al., 2018), medicine (Virgolin et al., 2020), and space exploration (Märtens & Izzo, 2022), among others (Matsubara et al., 2024).
+
+SR literature has traditionally focused on datasets with low-dimensional inputs, often with $p \leq 10$ , and primarily considered only relevant variables—those used in the ground truth (La Cava et al., 2021; Kamienny et al., 2022; Shojaee et al., 2023; Tenachi et al., 2023; Li et al., 2024). In these settings, variable selection has not been critical, as SR has largely been viewed as an optimization problem under low-noise conditions. However, modern scientific applications increasingly involve datasets with far larger numbers of variables ( $p = 102$ to 459 in this work), often including irrelevant variables, rendering variable selection a critical yet underexplored concept in SR pipelines.
+
+While variable selection is a well-established topic in statistics, its adoption in SR has been limited and its effective
+
+ness in SR remains unclear. Existing approaches, such as random forest (RF)-based pre-selection in PySR (Cranmer, 2023), have demonstrated limited utility. Indeed, the PySR documentation explicitly notes that options like select_k_features are rarely used, suggesting that current methods are not well-suited to SR tasks. This observation is further supported by our analysis in Appendix D.2, where RF is shown to perform unsatisfactorily. The limited performance of off-the-shelf methods like RF highlights the unique challenges of variable selection in the context of SR. Unlike typical variable selection tasks, SR variable selection demands a near-zero false negative rate (FNR), as excluding even a single relevant variable from the search space prevents the recovery of the true underlying function. While false positives (FPs) primarily increase computational burden, they do not fundamentally impede the discovery of the underlying model. This asymmetry in performance requirements explains why standard methods often fall short and underscores the importance of designing variable selection methods specifically tailored to SR.
+
+In this paper, we introduce a versatile framework, PAN+SR, for improving SR methods at extreme scales. PAN+SR leverages the Parametric Assisted by Nonparametrics (PAN) strategy (Ye et al., 2024) for an ab initio screening of large influx of input variables before expression synthesis, enabling SR tasks at extreme scales. In light of the unique challenge of SR pre-screening, we propose a novel non-parametric variable selection method designed to minimize FN; we refer to this method as PAN throughout this paper. Furthermore, to evaluate PAN+SR at extreme scales, we extend the open-source SR benchmarking database, SRBench (La Cava et al., 2021), with high-dimensional problems containing white noise at various signal-to-noise ratios. In Section 6, we showcase the performance uplift of 19 contemporary SR methods under PAN+SR. The PAN+SR framework is available as an open-source project at https://github.com/mattsheng/PAN_SR.
+
+# 2. Background and Motivation
+
+Given a dataset $(\pmb{y},\pmb{X})$ with target $\pmb{y} \in \mathbb{R}^n$ and features $\pmb{X} = (\pmb{x}_1,\dots,\pmb{x}_p) \in \mathbb{R}^{n\times p}$ , SR assumes the existence of an analytical data-generating function that links $\pmb{X}$ to $\pmb{y}$ :
+
+$$
+y _ {i} = f _ {0} \left(x _ {i 1}, \dots , x _ {i p}\right) + \varepsilon_ {i}, \quad \text {f o r} \quad i = 1, \dots , n, \tag {1}
+$$
+
+in the presence of observation noise $\varepsilon_{i}$ . The goal of SR is to recover the unknown regression function $f_{0}(\cdot)$ symbolically. For example, consider regressing the gravitational force between two objects, $F$ , on their masses $(m_{1}, m_{2})$ and the distance between their centers $(r)$ . An SR algorithm would ideally re-discover the Newton's Law of Universal Gravitation, $F = 6.6743 \times 10^{-11} \cdot m_{1}m_{2} / r^{2}$ . This is typically done by randomly constructing mathematical expressions using the
+
+features, $\mathbf{X} = (m_{1}, m_{2}, r)$ in this case, and a set of mathematical operations, e.g., $\mathcal{O} = \{+, -, \times, \div, \exp, \log, \cdot^{2}\}$ . Even for this low-dimensional problem, it has been shown that exploring all expressions $\mathcal{F}(\mathbf{X}, \mathcal{O})$ , induced by $\mathbf{X}$ and $\mathcal{O}$ , is NP-hard (Virgolin & Pissis, 2022). Hence, typical SR algorithms only traverse through a small subset of the full search space, such as limiting the complexity of the candidate SR models, total runtime, number of mathematical operations, etc.
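To make the "randomly constructing mathematical expressions" step concrete, here is a toy generator that draws candidate expressions from a feature set and an operator set; the sampling scheme and names are illustrative only, not the search procedure of any SR method discussed here:

```python
import random

# Unary operators take one argument; everything else is treated as binary.
UNARY_OPS = {"exp", "log", "sq"}

def random_expr(features, ops, depth, rng):
    """Draw one random expression tree, rendered as a string."""
    if depth == 0 or rng.random() < 0.3:  # stop and emit a leaf (a feature)
        return rng.choice(features)
    op = rng.choice(ops)
    if op in UNARY_OPS:
        return f"{op}({random_expr(features, ops, depth - 1, rng)})"
    left = random_expr(features, ops, depth - 1, rng)
    right = random_expr(features, ops, depth - 1, rng)
    return f"({left} {op} {right})"

rng = random.Random(0)
example = random_expr(["m1", "m2", "r"], ["+", "-", "*", "/", "sq"], 3, rng)
```

Even with only three features and a handful of operators, the number of distinct trees grows explosively with depth, which is why pre-screening the feature set shrinks the search space so dramatically.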
+
+In realistic scientific applications, particularly in the era of big data, scientists often include as many intuitively reasonable features as possible, many of which may be irrelevant to the target $\mathbf{y}$. This practice causes the search space $\mathcal{F}(X, \mathcal{O})$ to expand double-exponentially quickly (Ye et al., 2024), making it extremely challenging—if not impossible—to recover $f_0(\cdot)$ using algorithmic approaches alone. To this end, we propose the PAN+SR framework, which integrates the nonparametric module of PAN as a model-based pre-screening step. This framework excludes irrelevant features prior to applying SR methods, thereby mitigating the explosion of the search space in high-dimensional problems. Here, we assume that a high-dimensional SR problem in (1) can be reduced to
+
+$$
+y _ {i} = f _ {0} \left(\boldsymbol {X} _ {i, S _ {0}}\right) + \varepsilon_ {i}, \quad \text {f o r} \quad i = 1, \dots , n, \tag {2}
+$$
+
+where only a small subset $\mathcal{S}_0$ of $p_0 = |\mathcal{S}_0| \ll p$ features exerts influence on $\pmb{y}$. Then the oracle search space $\mathcal{F}(X_{\mathcal{S}_0},\mathcal{O})$ is a significantly smaller subspace of the full search space $\mathcal{F}(\boldsymbol {X},\mathcal{O})$. Thus, the successful identification of $\mathcal{S}_0$, or at least a superset of $\mathcal{S}_0$, is critical for reducing high-dimensional SR problems into manageable low-dimensional ones. With this reduction, the dataset $(\pmb {y},\pmb{X}_{\mathcal{S}_0})$ becomes sufficient for discovering $f_{0}(\cdot)$, enabling SR methods to handle high-dimensional problems without requiring any modifications to their algorithms.
+
+# 3. Related Work
+
+SRBench (La Cava et al., 2021) is a reproducible and open-source benchmarking platform for SR that has made significant strides in the field through its curation of 122 real-world datasets and 130 ground-truth problems and its comprehensive evaluations of 14 contemporary SR methods. SRBench has quickly gained adaptations with numerous studies leveraging it to evaluate accuracy, exact solution rate, and solution complexity (Kamienny et al., 2022; Landajuela et al., 2022; Kamienny et al., 2023; Keren et al., 2023; Shojaee et al., 2023; Makke & Chawla, 2024). Despite its widespread use, SRBench primarily focuses on low-dimensional problems, which limits its applicability in the context of high-dimensional problems, a hallmark of the era of big data. In particular, the 130 ground-truth problems from the Feynman Symbolic Regression Database (Udrescu & Tegmark, 2020)
+
+and the ODE-Strogatz repository (Strogatz, 2015) contain only the oracle features $X_{S_0}$ with at most $p = 9$ features. This low and narrow dimensional scope leaves SRBench less suited for analyzing SR at extreme scales, underscoring the need for a high-dimensional SR database.
+
+# 4. Method
+
+Inspired by PAN, the PAN+SR framework utilizes a one-step nonparametric variable selection strategy to pre-screen a high-dimensional dataset $(\pmb{y},\pmb{X})$ and pass the reduced dataset $(\pmb{y},\pmb{X}_{\widehat{\mathcal{S}}})$ to SR methods for subsequent expression synthesis and selection. Unlike the traditional variable selection literature, where the primary focus is controlling the false discovery rate, the PAN criterion calls for minimizing the false negative rate (FNR), while controlling the false positive rate (FPR) is secondary. In other words, the selected set of features $\widehat{\mathcal{S}}$ should be a superset of $\mathcal{S}_0$ and as small as possible. When $\widehat{\mathcal{S}}$ fails to be a superset of $\mathcal{S}_0$ (i.e., there is at least one FN), the reduced search space $\mathcal{F}(\pmb{X}_{\widehat{\mathcal{S}}},\mathcal{O})$ no longer contains $f_0(\cdot)$, rendering any subsequent discovery based on $\pmb{X}_{\widehat{\mathcal{S}}}$ false.
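The PAN criterion can be stated operationally: given a selected set $\widehat{\mathcal{S}}$ and the true support $\mathcal{S}_0$, the FNR must be zero while the FPR should stay small. An illustrative helper, not code from the PAN_SR repository:

```python
def pan_rates(selected, true_support, p):
    """FNR and FPR of a selected feature set relative to the true support S0.

    PAN asks for FNR = 0 (the selected set must be a superset of S0);
    false positives merely enlarge the reduced search space.
    """
    s_hat, s0 = set(selected), set(true_support)
    fnr = len(s0 - s_hat) / len(s0)        # relevant features missed
    fpr = len(s_hat - s0) / (p - len(s0))  # irrelevant features kept
    return fnr, fpr
```

For instance, with $p = 100$ and $\mathcal{S}_0 = \{0, 1, 2\}$, selecting $\{0, 1, 2, 5\}$ achieves FNR $= 0$ at a tiny FPR, whereas selecting $\{0, 1\}$ has a nonzero FNR and is unacceptable under PAN regardless of its FPR.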
+
+Nonparametric, or model-free, variable selection has been extensively studied in the literature. Lafferty and Wasserman (2008) propose the RODEO method for nonparametric variable selection through regularization of the derivative expectation operator. Candès et al. (2018) propose a model-free knockoff procedure that controls the FDR with no assumptions on the conditional distribution of the response. Fan et al. (2011) propose a sure independence screening method for B-spline additive models. In the Bayesian literature, Bleich et al. (2014) design permutation tests for the variable inclusion proportions of Bayesian Additive Regression Trees (BART); Liu et al. (2021) deploy spike-and-slab priors directly on the nodes of Bayesian forests.
+
+Despite this diverse array of methods, few meet the unique requirements of the PAN criterion. Among the recent methods investigated by Ye et al. (2024), BART-G.SE (Bleich et al., 2014), a BART-based permutation variable selection method, was found to be particularly suitable for PAN. However, our comprehensive simulation study in Appendix D.2 reveals that BART-G.SE, along with three other methods, exhibits insufficient TPR, particularly under noisy or low-sample-size conditions. This deficiency renders these methods unsuitable for the PAN+SR framework.
+
+In this paper, we introduce a novel BART-based variable selection method and demonstrate its PAN criterion consistency through an extensive simulation study in Section 6.2. The key idea behind BART is to model the regression function $f_0(\cdot)$ by a sum of regression trees,
+
+$$
+\boldsymbol{y} = \sum_{i=1}^{M} \mathcal{T}_i\left(\boldsymbol{x}_1, \dots, \boldsymbol{x}_p\right) + \varepsilon, \quad \varepsilon \sim \mathcal{N}_n\left(\boldsymbol{0}, \sigma^2 \boldsymbol{I}_n\right), \tag{3}
+$$
+
+where each regression tree $\mathcal{T}_i(x_1,\ldots ,x_p)$ partitions the feature space based on the values of $x_{1},\ldots ,x_{p}$. For each posterior sample, we calculate the proportion of splits in the ensemble (3) that use $x_{j}$ as the splitting variable, for $j = 1,\dots ,p$. The variable inclusion proportion (VIP) $q_{j}$ of $x_{j}$ is then estimated as the posterior mean of these proportions across all posterior samples (Chipman et al., 2010). Intuitively, $q_{1},\ldots ,q_{p}$ encode the relative importance of each feature, where a large VIP $q_{j}$ suggests that $x_{j}$ is an important driver of the response $\pmb{y}$. However, deciding how large a VIP value must be to indicate relevance remains a challenge. For instance, BART-G.SE addresses this by using a permutation test on $q_{1},\ldots ,q_{p}$ to identify significant features, thereby controlling the family-wise error rate.
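
The VIP estimate described above can be sketched directly from per-sample split counts. The sketch below is illustrative: the `split_counts` array is our assumed interface to a BART implementation's output, not any specific library's API.

```python
import numpy as np

def variable_inclusion_proportions(split_counts):
    """Estimate the VIPs q_1, ..., q_p from BART posterior samples.

    split_counts: (n_samples, p) array whose [t, j] entry counts the
    splits on x_j across the M trees at posterior sample t.  The VIP
    q_j is the posterior mean of the per-sample split proportions.
    """
    split_counts = np.asarray(split_counts, dtype=float)
    # Per-sample proportion of splits that use each variable.
    proportions = split_counts / split_counts.sum(axis=1, keepdims=True)
    return proportions.mean(axis=0)

# Toy posterior with 2 samples and p = 3 features.
q = variable_inclusion_proportions([[6, 2, 2], [8, 1, 1]])
print(q)  # q is approximately [0.7, 0.15, 0.15]: x_1 dominates the splits
```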
+
+Here, we propose an alternative approach that utilizes the rankings of VIPs instead of their raw values. Specifically, let $r_j$ denote the ranking of the VIP $q_j$ . Relevant features $X_{S_0}$ are expected to occupy top-ranking positions, namely $\{1, \ldots, p_0\}$ , due to their strong associations with $y$ . In contrast, irrelevant features $X_{S_1}$ , $S_1 = [p] \setminus S_0$ , are expected to appear in lower-ranking positions, namely $\{p_0 + 1, \ldots, p\}$ , since they are only selected sporadically or by chance (Chipman et al., 2010; Bleich et al., 2014). Consequently, a natural decision rule is to select feature $x_j$ if $r_j$ falls within $\{1, \ldots, p_0\}$ .
+
+However, this decision rule is impractical in real-world applications since the sparsity $p_0$ is unknown. To address this limitation, we propose a method that leverages multiple independent runs of BART to estimate the feature rankings more robustly. Let $r_{j,k}$ denote the VIP ranking of $\boldsymbol{x}_j$ in the $k$ th run. Assume that the rankings of $\boldsymbol{x}_j$ are randomly distributed over the $K$ independent runs (see Appendix D.1 for empirical justification):
+
+$$
+r_{j,1}, \ldots, r_{j,K} \stackrel{\text{iid}}{\sim} \begin{cases} \operatorname{Unif}(\{1, \ldots, p_0\}), & \text{if } j \in \mathcal{S}_0, \\ \operatorname{Unif}(\{p_0 + 1, \ldots, p\}), & \text{if } j \notin \mathcal{S}_0. \end{cases}
+$$
+
+Then the average ranking $\bar{r}_j = \sum_{k=1}^{K} r_{j,k} / K$ of $\pmb{x}_j$ across $K$ independent runs forms two distinct clusters, $\mathcal{C}_0$ for $X_{S_0}$ and $\mathcal{C}_1$ for $X_{S_1}$. Specifically, $\bar{r}_j$ for $X_{S_0}$ are expected to cluster in $\mathcal{C}_0$ with mean $(1 + p_0) / 2$, while those for $X_{S_1}$ tend to cluster in $\mathcal{C}_1$ with mean $(p_0 + 1 + p) / 2$. Although both cluster means are unknown due to the unknown sparsity $p_0$, their separation can be identified using clustering techniques.
+
+To illustrate, consider the extended Feynman I-38-12 dataset (defined in Section 5.2) with $p = 204$ features, of which $p_0 = 4$ are relevant. Without loss of generality, we assume that the relevant features $X_{S_0}$ are $\boldsymbol{x}_1,\boldsymbol{x}_2,\boldsymbol{x}_3,\boldsymbol{x}_4$, i.e., $S_0 = \{1,2,3,4\}$ and $S_{1} = \{5,\dots ,204\}$. When $K = 20$ independent BART models are trained on the dataset, the rankings $r_{1,k},r_{2,k},r_{3,k},r_{4,k}$ frequently fall within $\{1,2,3,4\}$ across all $k = 1,\ldots ,20$ runs. This is because the relevant features are frequently selected for tree splits due to their strong associations with the response variable $\pmb{y}$, leading to high VIPs and consistently top rankings. In contrast, irrelevant features $\boldsymbol{x}_5,\dots ,\boldsymbol{x}_{204}$ are included sporadically in BART, with $r_{5,k},\dots ,r_{204,k}$ distributed randomly across $\{5,\dots ,204\}$. As evident in Figure 5 in Appendix D.1, the average VIP rankings $\bar{r}_j$ of the relevant features form a low-mean cluster $\mathcal{C}_0$ with a cluster mean of $(1 + p_0) / 2 = 2.5$, while those of the irrelevant features form a high-mean cluster $\mathcal{C}_1$, concentrating around $(p_0 + 1 + p) / 2 = 104.5$.
+
+However, the sparse regression setting naturally leads to a class imbalance problem, as $|\mathcal{C}_0| = p_0$ is much smaller than $|\mathcal{C}_1| = p - p_0$. To address this, we propose to apply agglomerative hierarchical clustering (AHC) with Euclidean distance and average linkage to $(\bar{r}_1, \dots, \bar{r}_{p})$ and cut the dendrogram to form two clusters: $\widehat{\mathcal{C}}_0$ and $\widehat{\mathcal{C}}_1$. Then, features in $\widehat{\mathcal{C}}_0$ are retained, while those in $\widehat{\mathcal{C}}_1$ are discarded. Notably, the proposed data-driven selection criterion does not require any knowledge of the sparsity level $p_0$ or a tunable selection threshold. An ablation study evaluating the effect of different clustering algorithms on selection accuracy is available in Appendix D.3. We herein refer to this variable selection method for SR pre-screening as PAN; see Appendix C.2 for implementation details.
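
The two-cluster cut can be sketched with SciPy's hierarchical clustering. The snippet below simulates the iid ranking model from above in place of actual BART fits (in practice, the rankings $r_{j,k}$ come from $K$ independent BART runs), so it is a sanity-check sketch rather than the full PAN implementation:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def pan_select(avg_ranks):
    """Cut an average-linkage dendrogram of the average VIP rankings
    into two clusters and retain the cluster with the smaller mean."""
    avg_ranks = np.asarray(avg_ranks, dtype=float)
    # AHC with Euclidean distance and average linkage on the 1-D rankings.
    Z = linkage(avg_ranks.reshape(-1, 1), method="average", metric="euclidean")
    labels = fcluster(Z, t=2, criterion="maxclust")
    means = {c: avg_ranks[labels == c].mean() for c in np.unique(labels)}
    keep = min(means, key=means.get)           # low-mean cluster C_0
    return np.flatnonzero(labels == keep)      # indices of retained features

# Simulate average rankings for p = 204 features, p0 = 4 relevant, K = 20 runs.
rng = np.random.default_rng(0)
p, p0, K = 204, 4, 20
ranks = np.vstack([
    rng.integers(1, p0 + 1, size=(p0, K)),          # relevant: Unif{1, ..., p0}
    rng.integers(p0 + 1, p + 1, size=(p - p0, K)),  # irrelevant: Unif{p0+1, ..., p}
])
selected = pan_select(ranks.mean(axis=1))
print(selected)  # recovers the four relevant features, indices 0-3
```

The dendrogram's final merge joins the tight low-mean group near $(1 + p_0)/2$ with the high-mean cloud near $(p_0 + 1 + p)/2$, so cutting at two clusters separates them without knowing $p_0$.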
+
+# 5. Experiment Design
+
+Using an open-source benchmarking platform, SRBench, we evaluate the PAN+SR framework on two separate tasks. First, we assess its ability to make accurate predictions on "black-box" regression problems in which the underlying regression function remains unknown. Second, we test PAN+SR's ability to find the correct data-generating function $f_{0}$ on synthetic datasets with known data-generating functions originating from the Feynman Lectures on Physics (Feynman et al., 2010; Udrescu & Tegmark, 2020).
+
+The experiment settings are summarized in Table 1. All experiments were run on a heterogeneous cluster. Each algorithm was trained on each dataset in 10 repeated trials, with a different random state controlling both the train/test split and the seed of the algorithm. Each run was performed until a 24-hour time limit was reached or up to 500,000 expression evaluations for black-box problems (1,000,000 for ground-truth problems). For ground-truth problems, we chose a few representative algorithms from the black-box problems and investigated additional settings of sample size and signal-to-noise ratio. Datasets were split $75\% / 25\%$ into training and testing sets. For black-box problems, hyperparameters were either set to the optimal values published by SRBench or to values recommended by the original authors of the respective methods. The best hyperparameter settings from the black-box regression problems were used in the ground-truth problems. Instructions for reproducing the experiments are available in Appendix A, and detailed experimental settings are described in Appendix C.
+
+# 5.1. Symbolic Regression Methods
+
+Here we summarize the SR methods evaluated in this paper. A long line of SR methods is based on genetic programming (GP), a technique for evolving executable data structures, such as expression trees. The most basic version we test is gplearn (Stephens, 2020), which performs random expression proposal and iterates through the steps of tournament selection, mutation, and crossover. Advanced GP-based methods utilize different evolutionary strategies and optimization objectives, ranging from Pareto optimization for efficient trade-offs between accuracy and model complexity to program semantics optimization for increasing coherence in expressions. Here we test an array of advanced GP-based SR algorithms, including Age-Fitness Pareto optimization (AFP) (Schmidt & Lipson, 2010), AFP with co-evolved fitness estimates (AFP_FE) (Schmidt & Lipson, 2010), Epigenetic Hill Climber (EHC) (La Cava et al., 2014), $\varepsilon$-lexicase selection (EPLEX) (La Cava et al., 2019a), Feature Engineering Automation Tool (FEAT) (La Cava et al., 2019b), Fast Function Extraction (FFX) (McConaghy, 2011), the GP version of the Gene-pool Optimal Mixing Evolutionary Algorithm (GP-GOMEA) (Virgolin et al., 2021), Interaction-Transformation Evolutionary Algorithm (ITEA) (de Franca & Aldeia, 2021), Multiple Regression Genetic Programming (MRGP) (Arnaldo et al., 2014), Operon (Burlacu et al., 2020), PySR (Cranmer, 2023), and Semantic Back-propagation Genetic Programming (SBP-GP) (Virgolin et al., 2019).
+
+Additional methods include Bayesian Symbolic Regression (BSR) (Jin et al., 2020), which places a prior on the expression tree; Deep Symbolic Regression (DSR) (Petersen et al., 2021), Unified Deep Symbolic Regression (uDSR) (Landajuela et al., 2022), and Dynamic Symbolic Network (DySymNet) (Li et al., 2024), which utilize recurrent neural networks to propose symbolic expressions; Transformer-based Planning for Symbolic Regression (TPSR) (Shojaee et al., 2023), which leverages pretrained transformer models; and AIFeynman 2.0 (Udrescu et al., 2020), which uses a divide-and-conquer technique to recursively decompose complex problems into lower-dimensional sub-problems.
+
+Table 1: Settings used in the experiments.
+
+| Setting | Black-box problems | Ground-truth problems |
+| --- | --- | --- |
+| # of datasets | 35 | 100 |
+| # of algorithms | 19 | 19 |
+| # of trials per dataset | 10 | 10 |
+| Train/test split | .75/.25 | .75/.25 |
+| Termination criteria | 500K evaluations or 24 hours | 1M evaluations or 24 hours |
+| Sample size | All | 500, 1000, 1500, 2000 |
+| Signal-to-noise ratio | None | 0.5, 1, 2, 5, 10, 15, 20, None |
+| Total comparisons | 12,250 | 142,000 |
+| Computation cost | 34K core hours | 104K core hours |
+| Memory allocation | 16 GB | 16 GB |
+
+# 5.2. Datasets
+
+We curated a database of high-dimensional regression problems for testing the capability of PAN+SR. We selected 35 black-box regression problems available in PMLB v1.0 (Romano et al., 2021) using the following criteria: ($n < 200$ and $p \geq 10$) or ($n \geq 200$ and $p \geq 20$). These problems were used in SRBench and overlap with various open-source repositories, including OpenML (Vanschoren et al., 2014) and the UCI Machine Learning Repository (Kelly et al., 2013).
+
+We also curated 100 high-dimensional ground-truth regression problems by modifying the Feynman Symbolic Regression Database (Udrescu & Tegmark, 2020) to include irrelevant features and white noise. For each equation $f_{0}(\cdot)$ in the Feynman Lectures on Physics, we generated the relevant features $X_{S_0}$ following Udrescu and Tegmark (2020):
+
+$$
+\left(x_{1,j}, \dots, x_{n,j}\right) \stackrel{\text{iid}}{\sim} \operatorname{Unif}\left(a_j, b_j\right), \quad \text{for } 1 \leq j \leq p_0, \tag{4}
+$$
+
+where $p_0 = |\mathcal{S}_0|$ is the number of relevant features, $n$ is the sample size, and $a_j$ and $b_j$ are the lower and upper bounds for feature $x_j$ described in Udrescu and Tegmark (2020). To study the effect of noise on PAN+SR, we tuned the signal-to-noise ratio (SNR) by adding a Gaussian error term when generating the response variable:
+
+$$
+y_i = f_0\left(x_{i,1}, \dots, x_{i,p_0}\right) + \varepsilon_i, \quad \text{for } 1 \leq i \leq n, \tag{5}
+$$
+
+where $\varepsilon_{i}\stackrel {\mathrm{iid}}{\sim}N(0,\sigma_{\varepsilon}^{2})$ with $\sigma_{\varepsilon}^{2} = \sigma_{f}^{2} / \mathrm{SNR}$. When $\sigma_{\varepsilon}^{2} = 0$ or $\mathrm{SNR} = \infty$, (4) and (5) generate the original Feynman Symbolic Regression Database.
+
+In addition to the relevant features $X_{\mathcal{S}_0} = (x_1,\dots ,x_{p_0})$, we included an array of irrelevant features $X_{\mathrm{irr}}$, representing the era of big data in which all reasonable features are included in the dataset. Specifically, for each relevant feature $\boldsymbol{x}_j$, $j\in S_0$, we generate $(\boldsymbol{x}_{j,\mathrm{irr}}^{1},\ldots ,\boldsymbol{x}_{j,\mathrm{irr}}^{s})\stackrel {\mathrm{iid}}{\sim}\mathrm{Unif}(a_j,b_j)$, representing $s$ copies of independent and irrelevant features drawn from the same distribution as $\boldsymbol{x}_j$. Then, the final feature matrix is $\pmb{X} = [X_{\mathcal{S}_0},X_{\mathrm{irr}}^1,\dots ,X_{\mathrm{irr}}^{p_0}]\in \mathbb{R}^{n\times p}$, where $\pmb{X}_{\mathrm{irr}}^{j} = (\pmb{x}_{j,\mathrm{irr}}^{1},\dots ,\pmb{x}_{j,\mathrm{irr}}^{s})\in \mathbb{R}^{n\times s}$ is the irrelevant feature matrix induced by the $j$th relevant feature, for $j = 1,\ldots ,p_0$, totaling $p = p_0(1 + s)$ features. In Section 6.2, we fix $s = 50$, so the total number of features is $p = 51p_{0}$. Additional dataset information and the sampling process are available in Appendix B.
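
The sampling scheme in (4)-(5) together with the irrelevant-copy construction can be sketched as follows. The target function and bounds in the usage example are hypothetical placeholders, not one of the Feynman equations:

```python
import numpy as np

def make_extended_dataset(f0, bounds, n=1000, s=50, snr=np.inf, seed=0):
    """Generate (y, X): p0 relevant features per Eq. (4), s irrelevant
    iid copies per relevant feature, and Gaussian noise per Eq. (5)."""
    rng = np.random.default_rng(seed)
    # Eq. (4): relevant features X_{S_0}, with x_j ~ Unif(a_j, b_j).
    X_rel = np.column_stack([rng.uniform(a, b, size=n) for a, b in bounds])
    # Irrelevant copies: s independent Unif(a_j, b_j) columns per x_j.
    X_irr = np.hstack([rng.uniform(a, b, size=(n, s)) for a, b in bounds])
    X = np.hstack([X_rel, X_irr])            # p = p0 * (1 + s) features
    signal = f0(X_rel)
    # Eq. (5): sigma_eps^2 = sigma_f^2 / SNR, with no noise when SNR = inf.
    sigma_eps = 0.0 if np.isinf(snr) else np.sqrt(signal.var() / snr)
    return signal + rng.normal(0.0, sigma_eps, size=n), X

# Hypothetical p0 = 2 target for illustration: f0(x1, x2) = x1 * sin(x2).
y, X = make_extended_dataset(lambda Z: Z[:, 0] * np.sin(Z[:, 1]),
                             bounds=[(1, 3), (1, 5)], n=500, s=50, snr=10)
print(X.shape)  # (500, 102), i.e. p = 51 * p0
```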
+
+Besides the 3,200 distinct simulation settings described in Table 1 (100 datasets, 8 SNRs, and 4 sample sizes), we include additional simulation settings in Appendix D.4 to further assess PAN+SR's behavior under alternative feature structures. These include (1) additive noise in features, (2) duplicated features, and (3) correlated features.
+
+# 5.3. Metrics
+
+**Predictive Accuracy** We assessed predictive accuracy using the coefficient of determination, defined as
+
+$$
+R^2 = 1 - \frac{\sum_{i=1}^{n} (y_i - \widehat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2}.
+$$
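
A direct NumPy transcription of this definition:

```python
import numpy as np

def r2_score(y, y_hat):
    """Coefficient of determination R^2 as defined above."""
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
    ss_res = np.sum((y - y_hat) ** 2)          # residual sum of squares
    ss_tot = np.sum((y - np.mean(y)) ** 2)     # total sum of squares
    return 1.0 - ss_res / ss_tot

print(r2_score([3.0, 5.0, 7.0], [2.5, 5.0, 7.5]))  # 0.9375
```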
+
+**Model Complexity** In line with SRBench, we define model complexity as the total number of mathematical operators, features, and constants in the model. To avoid redundancy, symbolic models are first simplified using SymPy (Meurer et al., 2017), a Python library for symbolic mathematics.
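
One way to realize this measure is to count the nodes of the simplified expression tree; the sketch below is a simplification of SRBench's accounting, whose details may differ:

```python
import sympy as sp

def complexity(expr):
    """Model complexity: number of operators, features, and constants,
    i.e. nodes in the SymPy-simplified expression tree."""
    return sum(1 for _ in sp.preorder_traversal(sp.simplify(expr)))

x1, x2 = sp.symbols("x1 x2")
# x1*x2 + x1*x2 simplifies to 2*x1*x2: nodes {Mul, 2, x1, x2} -> complexity 4.
print(complexity(x1 * x2 + x1 * x2))  # 4
```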
+
+**Solution Criteria** For ground-truth regression problems, we follow SRBench's definition of symbolic solution. A model $\widehat{f}(\mathbf{X})$ is considered a solution to the SR problem of $y = f_0(\mathbf{X}) + \varepsilon$ if $\widehat{f}(\mathbf{X})$ does not reduce to a constant and (1) $\widehat{f} - f_0 = a$ for some $a \in \mathbb{R}$ or (2) $\widehat{f} / f_0 = b$ for some $b \neq 0$. That is, the predicted model $\widehat{f}$ only differs from the true model $f_0$ by either an additive or a multiplicative constant.
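
A minimal SymPy sketch of this check (illustrative; SRBench's implementation includes further normalization of the candidate model):

```python
import sympy as sp

def is_symbolic_solution(f_hat, f_0):
    """True if f_hat is non-constant and differs from f_0 only by an
    additive constant a or a nonzero multiplicative constant b."""
    f_hat, f_0 = sp.simplify(f_hat), sp.simplify(f_0)
    if not f_hat.free_symbols:                     # reduces to a constant
        return False
    if not sp.simplify(f_hat - f_0).free_symbols:  # f_hat - f_0 = a
        return True
    ratio = sp.simplify(f_hat / f_0)
    return not ratio.free_symbols and ratio != 0   # f_hat / f_0 = b != 0

x1, x2 = sp.symbols("x1 x2")
print(is_symbolic_solution(2 * x1 * x2 + 3, 2 * x1 * x2))  # True  (a = 3)
print(is_symbolic_solution(3 * sp.sin(x1), sp.sin(x1)))    # True  (b = 3)
print(is_symbolic_solution(x1 + x2, x1 * x2))              # False
```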
+
+While predictive accuracy can be influenced by the simulation design, the symbolic solution criterion offers a more reliable metric for assessing whether an SR method can uncover the true data-generating process. However, since
+
+
+Figure 1: Results on the black-box regression problems. Points indicate the mean test set performance and bars represent the $95\%$ confidence intervals. Training time for PAN+SR includes the runtime of PAN, which averages only 74.14 seconds.
+
+SymPy's simplification process is not always optimal, it is possible that some symbolic solutions are not identified in the process.
+
+**Feature Usage Accuracy** The irrelevant features present a unique challenge for SR methods to identify the correct data-generating model $f_{0}$. When the predictive model $\widehat{f}$ includes irrelevant features (FPs), it cannot be considered a symbolic solution to $f_{0}$. Conversely, if $\widehat{f}$ excludes some relevant features (FNs), it also fails to meet the symbolic solution criteria. Although neither FPR nor FNR corresponds directly to symbolic solution rate, they can provide insights into why $\widehat{f}$ does not qualify as a symbolic solution.
+
+# 6. Results
+
+# 6.1. Blackbox Datasets
+
+Figure 1 shows that PAN+SR consistently improves test set $R^2$ across 18 out of 19 SR algorithms, with the largest gains observed in lower-performing methods such as BSR, AIfeynman, and ITEA. For top-performing SR algorithms, the improvements are more modest due to the natural upper limit of $R^2$ , but the uplift remains significant. For instance, PAN boosted uDSR from 14th to 5th place in the overall ranking and to 2nd among the standalone SR methods. Furthermore, these $R^2$ improvements are not accompanied by increased model complexity. In some cases, PAN+SR even reduces model complexity, enhancing both parsimony and interpretability.
+
+In addition to accuracy gain, PAN+SR significantly reduces training times for several SR algorithms, including SBP-GP, uDSR, AFP_FE, AIfeynman, and BSR. Notably, AIfeynman, the 2nd slowest running SR algorithm, achieves a 5-fold speedup (from 71250 seconds to 13997 seconds), while uDSR benefits from nearly a 3-fold speedup (from 7628 seconds to 2612 seconds) with PAN pre-screening. The computational overhead introduced by PAN is minimal, averaging only 74.14 seconds on a single core. As PAN relies on independent MCMC chains, this overhead can be further reduced through parallel processing, making PAN+SR both efficient and scalable.
+
+# 6.2. Ground-truth Datasets
+
+Figure 2 summarizes performance on the ground-truth regression problems with $n = 1000$, $\mathrm{SNR} = \infty$, and $s = 50$. Methods are sorted by their standalone $R^2$ on the test set. PAN+SR consistently improves both $R^2$ and solution rate across all 19 SR methods. Due to the high dimensionality of the ground-truth problems, the standalone AIfeynman encountered out-of-memory errors and failed to complete any of the 1000 runs. However, PAN significantly improves AIfeynman's performance, lifting it from last place to 2nd overall in symbolic solution rate. Furthermore, PAN consistently outperforms all other nonparametric variable selection methods tested, achieving the highest TPR among the five methods compared and delivering the best $R^2$ when paired with SR, as detailed in Appendix D.2. This underscores the effectiveness and necessity of nonparametric pre-screening in
+
+
+Figure 2: Results on the ground truth regression problems with $n = 1000$ , $\mathrm{SNR} = \infty$ , and $s = 50$ . Points indicate the mean test set performance and bars represent the $95\%$ confidence intervals. Training time for $\mathrm{PAN} + \mathrm{SR}$ includes the runtime of PAN, which averages 325 seconds. AIfeynman fails to complete any run in the standalone setting.
+
+high-dimensional SR problems.
+
+Similar to our findings on the black-box regression problems, this performance gain is not driven by increased model size, and PAN's average computational overhead of 325 seconds remains negligible relative to the runtimes of many SR methods. Remarkably, uDSR benefited from nearly a 6-fold speedup with PAN (from 9573 seconds to 1596 seconds) while almost doubling its solution rate (from $36.6\%$ to $71.8\%$), making it the best performer in solution rate. Additionally, PAN elevated several mid-tier performers, such as Operon, AFP_FE, AFP, and EHC, enabling them to surpass the 4th-place method, GP-GOMEA, in the standalone SR solution rate ranking.
+
+Beyond the specific simulation setting of $n = 1000$ and $\mathrm{SNR} = \infty$ , we also investigated the sensitivity of $\mathrm{PAN} + \mathrm{SR}$ across a range of sample sizes and SNR. In particular, we evaluated $\mathrm{PAN} + \mathrm{SR}$ with all combinations of sample size $n \in \{500, 1000, 1500, 2000\}$ and $\mathrm{SNR} \in \{0.5, 1, 2, 5, 10, 15, 20, \infty\}$ . Given the extreme computational burden, we select Operon, the best-performing algorithm in black-box regression problems, to be the SR module for the sensitivity analysis.
+
+Figure 3a demonstrates that both Operon and PAN+Operon maintain consistently low FPR across all settings of $n$ and SNR, with negligible differences between them. This low FPR reflects the rare inclusion of irrelevant features in the final symbolic models. In noisy settings, we notice a significant increase in PAN's FPR, from $0\%$ at $\mathrm{SNR} = \infty$ to over $30\%$ at $\mathrm{SNR} = 0.5$. While this noise sensitivity could be a concern for typical variable selection applications, it is crucial to emphasize that PAN's primary objective is to scale up SR methods by reliably identifying a superset of the relevant features $S_0$. In this context, minimizing FNs during pre-screening is more critical than avoiding FPs.
+
+Figure 3b illustrates that PAN achieves a near-$0\%$ FNR across most simulation settings, highlighting its ability to identify a superset of the true feature set $S_0$. This is crucial to ensure that the pre-screened dataset $(y, X_{\hat{S}})$ used for subsequent SR modeling is comprehensive enough to generate the correct expression $f_0$. However, in the most extreme case, where $n = 500$ and $\mathrm{SNR} = 0.5$, PAN's FNR rises to over $5\%$, and caution is advised when relying on PAN in such cases. On the other hand, the standalone Operon often fails to include all relevant features in its final models across all $n$ and SNR settings, while PAN consistently lowers Operon's FNR, improving its chances of identifying the true function $f_0$. Even with PAN, Operon fails to achieve the best-case FNR set by PAN, particularly under noisy conditions. This elevated FNR negatively impacts Operon's solution rate. For example, when the SNR changes from $\infty$ to 10, $\mathrm{PAN + Operon}$'s average solution rate drops from $27.4\%$ to $0\%$, and Operon's solution rate falls from $18.1\%$ to $0\%$. As La Cava et al. (2021) noted, this limitation persists even when Operon is provided with only the relevant features $X_{S_0}$ and under favorable conditions ($n = 100{,}000$ and $\mathrm{SNR} = 100$), indicating that the issue lies beyond PAN pre-screening. Other performance metrics of this sensitivity analysis are available in Appendix D.5.
+
+
+(a) False positive rate (FPR).
+
+
+(b) False negative rate (FNR).
+Figure 3: FPR and FNR of Operon, PAN+Operon, and PAN on the ground truth datasets. PAN refers to the proposed selection method in Section 4. Points indicate the mean performance and bars represent the $95\%$ confidence intervals.
+
+
+Figure 4: Results of selected methods on the ground truth problems with $n = 1000$ , $\mathrm{SNR} \in \{\infty, 10\}$ , and $s = 50$ . Points indicate the mean test set performance and bars represent the $95\%$ confidence intervals.
+
+Beyond Operon, we also evaluated several top-performing SR methods on the ground-truth problems using $n = 1000$ and $\mathrm{SNR} \in \{\infty, 10\}$. As shown in Figure 4, $\mathrm{PAN} + \mathrm{SR}$ consistently improves SR methods across all SNR levels, though all SR methods and their PAN-boosted variants become less accurate at $\mathrm{SNR} = 10$, highlighting the challenge posed by noise. In particular, GP-GOMEA performs similarly to Operon, with its solution rate dropping to $0\%$ at $\mathrm{SNR} = 10$ for both the standalone and PAN-boosted variants. The best-performing SR algorithm, uDSR, also exhibits vulnerability to noise, with its PAN-boosted solution rate falling from $71.8\%$ to $7.4\%$. Surprisingly, PAN significantly benefits DSR, the weakest SR algorithm in Figure 4, increasing its solution rate from $8.2\%$ to $14.9\%$ at $\mathrm{SNR} = 10$ and from $8.9\%$ to $25.8\%$ at $\mathrm{SNR} = \infty$. These findings highlight the fundamental challenges noise introduces to SR algorithms. To date, SR algorithms have been predominantly developed for noiseless or high-SNR settings, even for "small $p$" problems. We expect that iterative application of the proposed variable selection method, similar to Ye et al. (2024), along with careful consideration of the challenges in extreme-scale SR, could improve performance in low-SNR settings. This will be explored in future work.
+
+# 7. Discussion
+
+In this paper, we introduce PAN+SR, a novel framework designed to address the scalability challenges faced by SR methods when applied to high-dimensional datasets. The growing prevalence of big data necessitates tools capable of efficiently handling such complexity, and PAN+SR addresses this need by integrating a nonparametric pre-screening mechanism with SR. This integration enables the framework to focus the model search on a relevant subset of features, reducing computational burden and improving accuracy.
+
+The core innovation of PAN+SR lies in its nonparametric variable selection method, which filters the input dataset to reduce dimensionality before applying SR. A key challenge in this process is minimizing the risk of false negatives (FNs), where relevant features are mistakenly excluded. Such omissions can critically impair SR methods, as the success of SR depends on having access to the true feature set. To address this issue, we developed a variable selection method designed to ensure that the selected features form a superset of the true feature set, effectively minimizing the FNR. Our approach leverages the characteristics of VIP rankings derived from BART, providing a tuning-free, data-driven variable selection criterion capable of retaining relevant features while excluding irrelevant ones. By preserving a comprehensive set of candidate features, PAN+SR maximizes the likelihood of identifying the true underlying model.
+
+We evaluated PAN+SR across a diverse set of datasets, including 35 high-dimensional real-world datasets from the PMLB database and 100 modified simulated datasets based on the Feynman Symbolic Regression Database. The results were highly promising: PAN+SR improved the performance of 18 out of 19 SR methods on real datasets and all 19 methods on simulated datasets when noise is absent. These findings underscore the framework's potential to enhance the robustness and scalability of SR methods across diverse datasets.
+
+In addition, we explored the sensitivity of $\mathrm{PAN} + \mathrm{SR}$ to varying sample sizes and SNR. Our analysis demonstrated that the performance gains achieved by $\mathrm{PAN} + \mathrm{SR}$ are consistent across different sample sizes and remain robust in the presence of noise. Like our extended Feynman database, SRSD (Matsubara et al., 2024) augments the original Feynman database with irrelevant features, bringing the synthetic benchmarks closer to real-world scientific processes. However, SRSD adds only 1-3 irrelevant variables, while our setup introduces 100-450 irrelevant variables, posing a substantially more challenging test for both variable selection and symbolic regression. Nonetheless, SRSD rectifies several physical inconsistencies present in the original Feynman benchmark, such as a more realistic treatment of constants and integer-valued variables and a more careful specification of sampling ranges. Our investigation extends beyond ground-truth datasets by incorporating black-box datasets, thereby mitigating, to some extent, the limitations inherent in purely simulated data. Still, we view SRSD as a valuable and complementary benchmark and plan to incorporate its refinements in future evaluations. In summary, $\mathrm{PAN} + \mathrm{SR}$ provides a significant step forward in enabling SR methods to handle the complexities of modern datasets, offering improved performance and scalability across a wide range of applications.
+
+# Impact Statement
+
+This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
+
+# References
+
+Arnaldo, I., Krawiec, K., and O'Reilly, U.-M. Multiple regression genetic programming. In Proceedings of the 2014 Annual Conference on Genetic and Evolutionary Computation, GECCO '14, pp. 879-886, New York, NY, USA, 2014. Association for Computing Machinery.
+Bleich, J., Kapelner, A., George, E. I., and Jensen, S. T. Variable selection for BART: an application to gene regulation. Annals of Applied Statistics, 8(3):1750-1781, 09 2014.
+Burlacu, B., Kronberger, G., and Kommenda, M. Operon C++: an efficient genetic programming framework for symbolic regression. In Proceedings of the 2020 Genetic and Evolutionary Computation Conference Companion, GECCO '20, pp. 1562-1570, New York, NY, USA, 2020. Association for Computing Machinery.
+Candès, E., Fan, Y., Janson, L., and Lv, J. Panning for Gold: 'Model-X' Knockoffs for High Dimensional Controlled Variable Selection. Journal of the Royal Statistical Society Series B: Statistical Methodology, 80(3):551-577, 01 2018.
+Chipman, H. A., George, E. I., and McCulloch, R. E. BART: Bayesian additive regression trees. Annals of Applied Statistics, 4(1):266-298, 03 2010.
+Cranmer, M. Interpretable Machine Learning for Science with PySR and SymbolicRegression.jl. arXiv:2305.01582, 2023.
+de Franca, F. O. and Aldeia, G. S. I. Interaction-transformation evolutionary algorithm for symbolic regression. Evolutionary Computation, 29(3):367-390, 09 2021.
+Derner, E., Kubalík, J., Ancona, N., and Babuška, R. Constructing parsimonious analytic models for dynamic systems via symbolic regression. Applied Soft Computing, 94:106432, 2020.
+Dick, G. Genetic programming, standardisation, and stochastic gradient descent revisited: initial findings on srbench. In Proceedings of the Genetic and Evolutionary Computation Conference Companion, GECCO '22, pp. 2265-2273, New York, NY, USA, 2022. Association for Computing Machinery.
+Fan, J., Feng, Y., and Song, R. Nonparametric independence screening in sparse ultra-high-dimensional additive models. Journal of the American Statistical Association, 106(494):544-557, 2011.
+Feynman, R. P., Leighton, R. B., and Sands, M. The Feynman Lectures on Physics. Basic Books, New York, NY, 2010.
+
+Friedman, J. H. Multivariate Adaptive Regression Splines. The Annals of Statistics, 19(1):1-67, 1991.
+Hernandez, A., Balasubramanian, A., Yuan, F., Mason, S. A. M., and Mueller, T. Fast, accurate, and transferable many-body interatomic potentials by symbolic regression. npj Computational Materials, 5(1):112, November 2019.
+Jin, Y., Fu, W., Kang, J., Guo, J., and Guo, J. Bayesian Symbolic Regression. arXiv:1910.08892, 2020.
+Kamienny, P.-a., d'Ascoli, S., Lample, G., and Charton, F. End-to-end symbolic regression with transformers. In Koyejo, S., Mohamed, S., Agarwal, A., Belgrave, D., Cho, K., and Oh, A. (eds.), Advances in Neural Information Processing Systems, volume 35, pp. 10269-10281. Curran Associates, Inc., 2022.
+Kamienny, P.-A., Lample, G., Lamprier, S., and Virgolin, M. Deep generative symbolic regression with Monte-Carlo-tree-search. In Proceedings of the 40th International Conference on Machine Learning, ICML'23, pp. 15655-15668. JMLR.org, 2023.
+Kelly, M., Longjohn, R., and Nottingham, K. The UCI Machine Learning Repository, 2013.
+Keren, L. S., Liberzon, A., and Lazebnik, T. A computational framework for physics-informed symbolic regression with straightforward integration of domain knowledge. Scientific Reports, 13(1):1249, January 2023.
+Kronberger, G., Kommenda, M., Promberger, A., and Nickel, F. Predicting friction system performance with symbolic regression and genetic programming with factor variables. In Proceedings of the Genetic and Evolutionary Computation Conference, GECCO '18, pp. 1278-1285, New York, NY, USA, 2018. Association for Computing Machinery.
+La Cava, W., Spector, L., Danai, K., and Lackner, M. Evolving differential equations with developmental linear genetic programming and epigenetic hill climbing. In Proceedings of the Companion Publication of the 2014 Annual Conference on Genetic and Evolutionary Computation, GECCO Comp '14, pp. 141-142, New York, NY, USA, 2014. Association for Computing Machinery.
+La Cava, W., Helmuth, T., Spector, L., and Moore, J. H. A Probabilistic and Multi-Objective Analysis of Lexicase Selection and $\varepsilon$ -Lexicase Selection. Evolutionary Computation, 27(3):377-402, September 2019a.
+La Cava, W., Singh, T. R., Taggart, J., Suri, S., and Moore, J. Learning concise representations for regression by evolving networks of trees. In International Conference on Learning Representations, 2019b.
+
+La Cava, W., Orzechowski, P., Burlacu, B., de Franca, F., Virgolin, M., Jin, Y., Kommenda, M., and Moore, J. Contemporary symbolic regression methods and their relative performance. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks, volume 1, 2021.
+Lafferty, J. and Wasserman, L. Rodeo: Sparse, greedy nonparametric regression. The Annals of Statistics, 36(1): 28-63, 2008.
+Landajuela, M., Lee, C. S., Yang, J., Glatt, R., Santiago, C. P., Aravena, I., Mundhenk, T., Mulcahy, G., and Petersen, B. K. A unified framework for deep symbolic regression. In Koyejo, S., Mohamed, S., Agarwal, A., Belgrave, D., Cho, K., and Oh, A. (eds.), Advances in Neural Information Processing Systems, volume 35, pp. 33985-33998. Curran Associates, Inc., 2022.
+Lemos, P., Jeffrey, N., Cranmer, M., Ho, S., and Battaglia, P. Rediscovering orbital mechanics with machine learning. Machine Learning: Science and Technology, 4(4):045002, October 2023.
+Li, W., Li, W., Yu, L., Wu, M., Sun, L., Liu, J., Li, Y., Wei, S., Yusong, D., and Hao, M. A neural-guided dynamic symbolic network for exploring mathematical expressions from data. In Salakhutdinov, R., Kolter, Z., Heller, K., Weller, A., Oliver, N., Scarlett, J., and Berkenkamp, F. (eds.), Proceedings of the 41st International Conference on Machine Learning, volume 235 of Proceedings of Machine Learning Research, pp. 28222-28242. PMLR, 21-27 Jul 2024.
+Liu, C.-Y., Zhang, S., Martinez, D., Li, M., and Senftle, T. P. Using statistical learning to predict interactions between single metal atoms and modified MgO (100) supports. npj Computational Materials, 6(1):102, 2020.
+Liu, C.-Y., Ye, S., Li, M., and Senftle, T. P. A rapid feature selection method for catalyst design: Iterative Bayesian additive regression trees (iBART). The Journal of Chemical Physics, 156(16), 2022.
+Liu, Y., Ročková, V., and Wang, Y. Variable selection with ABC Bayesian forests. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 83(3):453-481, 04 2021.
+Makke, N. and Chawla, S. Interpretable scientific discovery with symbolic regression: a review. Artificial Intelligence Review, 57(1):2, January 2024.
+Märtens, M. and Izzo, D. Symbolic regression for space applications: Differentiable cartesian genetic programming powered by multi-objective memetic algorithms. arXiv:2206.06213, 2022.
+
+Matsubara, Y., Chiba, N., Igarashi, R., and Ushiku, Y. Rethinking symbolic regression datasets and benchmarks for scientific discovery. Journal of Data-centric Machine Learning Research, 2024.
+McConaghy, T. FFX: Fast, Scalable, Deterministic Symbolic Regression Technology, pp. 235-260. Springer New York, New York, NY, 2011.
+Meurer, A., Smith, C. P., Paprocki, M., Čertík, O., Kirpichev, S. B., Rocklin, M., Kumar, A., Ivanov, S., Moore, J. K., Singh, S., Rathnayake, T., Vig, S., Granger, B. E., Muller, R. P., Bonazzi, F., Gupta, H., Vats, S., Johansson, F., Pedregosa, F., Curry, M. J., Terrel, A. R., Roučka, Š., Saboo, A., Fernando, I., Kulal, S., Cimrman, R., and Scopatz, A. SymPy: symbolic computing in Python. PeerJ Computer Science, 3:e103, January 2017.
+Petersen, B. K., Larma, M. L., Mundhenk, T. N., Santiago, C. P., Kim, S. K., and Kim, J. T. Deep symbolic regression: Recovering mathematical expressions from data via risk-seeking policy gradients. In International Conference on Learning Representations, 2021.
+Romano, J. D., Le, T. T., La Cava, W., Gregg, J. T., Goldberg, D. J., Chakraborty, P., Ray, N. L., Himmelstein, D., Fu, W., and Moore, J. H. PMLB v1.0: an open-source dataset collection for benchmarking machine learning methods. Bioinformatics, 38(3):878-880, October 2021.
+Schmidt, M. D. and Lipson, H. Age-fitness pareto optimization. In Proceedings of the 12th Annual Conference on Genetic and Evolutionary Computation, GECCO '10, pp. 543-544, New York, NY, USA, 2010. Association for Computing Machinery.
+Shojaee, P., Meidani, K., Barati Farimani, A., and Reddy, C. Transformer-based planning for symbolic regression. In Oh, A., Naumann, T., Globerson, A., Saenko, K., Hardt, M., and Levine, S. (eds.), Advances in Neural Information Processing Systems, volume 36, pp. 45907-45919. Curran Associates, Inc., 2023.
+Stephens, T. gplearn: Genetic Programming in Python. https://github.com/trevorstephens/gplearn, 2020.
+Strogatz, S. H. Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering. CRC Press, 2015.
+Tenachi, W., Ibata, R., and Diakogiannis, F. I. Deep symbolic regression for physics guided by units constraints: Toward the automated discovery of physical laws. The Astrophysical Journal, 959(2):99, December 2023.
+Udrescu, S.-M. and Tegmark, M. AI Feynman: A physics-inspired method for symbolic regression. Science Advances, 6(16):eaay2631, 2020.
+
+Udrescu, S.-M., Tan, A., Feng, J., Neto, O., Wu, T., and Tegmark, M. AI Feynman 2.0: Pareto-optimal symbolic regression exploiting graph modularity. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., and Lin, H. (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 4860-4871. Curran Associates, Inc., 2020.
+Vanschoren, J., van Rijn, J. N., Bischl, B., and Torgo, L. Openml: networked science in machine learning. SIGKDD Explor. Newsl., 15(2):49-60, June 2014.
+Verstyuk, S. and Douglas, M. R. Machine learning the gravity equation for international trade. Available at SSRN 4053795, 2022.
+Virgolin, M. and Pissis, S. P. Symbolic regression is NP-hard. Transactions on Machine Learning Research, 2022.
+Virgolin, M., Alderliesten, T., and Bosman, P. A. N. Linear scaling with and within semantic backpropagation-based genetic programming for symbolic regression. In Proceedings of the Genetic and Evolutionary Computation Conference, GECCO '19, pp. 1084-1092, New York, NY, USA, 2019. Association for Computing Machinery.
+Virgolin, M., Wang, Z., Alderliesten, T., and Bosman, P. A. N. Machine learning for the prediction of pseudorealistic pediatric abdominal phantoms for radiation dose reconstruction. Journal of Medical Imaging, 7(4):046501, 2020.
+Virgolin, M., Alderliesten, T., Witteveen, C., and Bosman, P. A. N. Improving Model-Based Genetic Programming for Symbolic Regression of Small Expressions. Evolutionary Computation, 29(2):211-237, 06 2021.
+Wu, T. and Tegmark, M. Toward an artificial intelligence physicist for unsupervised learning. Phys. Rev. E, 100: 033311, September 2019.
+Ye, S., Senftle, T. P., and Li, M. Operator-Induced Structural Variable Selection for Identifying Materials Genes. Journal of the American Statistical Association, 119(545): 81-94, 2024.
+
+# A. Reproducing the Experiment
+
+The experiment builds on an existing symbolic regression (SR) benchmarking platform, SRBench (La Cava et al., 2021), with changes to support additional functionality, including signal-to-noise ratio (SNR) tuning, feature pre-screening, and variable usage accuracy calculation. The README file in our GitHub repository https://github.com/mattsheng/PAN_SR details the complete set of commands for reproducing the experiment; here, we provide a short summary of the experimental process. Experiments are launched from the experiments/ folder via the script analyze.py. After installing and configuring the conda environment provided by SRBench, the complete black-box experiment on standalone SR methods can be started with the following command:
+
+```shell
+python analyze.py /path/to/pmlb/ \
+    -results ./results/blackbox/SR/ \
+    -n_trials 10 \
+    -time_limit 24:00 \
+    -tuned -skip_tuning
+```
+
+To enable PAN pre-screening, users can either specify the path to a pre-computed variable selection result or run the pre-screening in place. The first option is useful when comparing different SR methods on the same dataset:
+
+```shell
+python analyze.py /path/to/pmlb \
+-results ../results/blackbox/SR_BART_VIP \
+-n_trials 10 \
+-time_limit 24:00 \
+-vs_method BART_VIP \
+-vs_result_path ../results/blackbox/pmlb_BART_VIP_withidx.feather \
+-vs_idx_label idx_hclst \
+-tuned -skip_tuning
+```
+
+If no path is given to -vs_result_path, the PAN pre-screening will be run in place. Similarly, the ground-truth experiment for the standalone SR methods on Feynman datasets with a sample size of $n = 1000$ and an SNR of 10 can be run by the following command:
+
+```shell
+python analyze.py /path/to/feynman \
+    -results ../results_feynman/SR \
+    -signal_to_noise 10 \
+    -n 1000 \
+    -sym_data \
+    -n_trials 10 \
+    -time_limit 24:00 \
+    -tuned -skip_tuning
+```
+
+Note that -sym_data enables additional performance-metric calculations that are only available for ground-truth problems. To run PAN pre-screening only on the Feynman datasets with a sample size of $n = 1000$ and an SNR of 10, we can use the following command:
+
+```shell
+python analyze.py /path/to/feynman \
+    -script BART_selection \
+    -ml BART_VIP \
+    -results ./results_feynman/BART_VIP/n_1000/ \
+    -signal_to_noise 10 \
+    -n 1000 \
+    -sym_data \
+    -n_trials 10 \
+    -rep 20 \
+    -time_limit 24:00
+```
+
+The -rep 20 argument instructs the program to run $K = 20$ replications of BART for estimating the variable ranking $r_{j,k}$ of the $j$th feature in the $k$th run. Users can substitute other variable selection methods by modifying the BART_selection.py script.
+
+# B. Additional Dataset Information
+
+PMLB datasets Black-box datasets and their metadata are available from PMLB under an MIT license and are described in detail in Romano et al. (2021). In this experiment, we focus only on high-dimensional regression datasets from PMLB. Specifically, we use PMLB regression datasets satisfying either of the following criteria:
+
+1. $n < 200$ and $p \geq 10$, or
+2. $n \geq 200$ and $p \geq 20$.
+
+Furthermore, datasets that have categorical features (number of unique values $\leq 5$) or a non-continuous response variable (proportion of unique values $< 0.9$) are excluded, since they are incorrectly classified as regression tasks (Dick, 2022). Among the datasets meeting these criteria, we found that two datasets, 195-auto-price and 207_autoprice, are identical, so we kept only 195-auto-price in our analysis. See Dick (2022) for a detailed analysis of the dataset duplication and problem misclassification issues in PMLB.
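The inclusion criteria above amount to a small filter applied to each candidate dataset. The sketch below is illustrative rather than the actual pipeline code: it assumes a pandas DataFrame with a `target` response column, and the helper name `is_valid_regression` is ours.

```python
import numpy as np
import pandas as pd

def is_valid_regression(df: pd.DataFrame, target: str = "target") -> bool:
    """Apply the PMLB inclusion criteria described above to one dataset."""
    X, y = df.drop(columns=[target]), df[target]
    n, p = X.shape
    # Size criteria: (n < 200 and p >= 10) or (n >= 200 and p >= 20).
    if not ((n < 200 and p >= 10) or (n >= 200 and p >= 20)):
        return False
    # Exclude datasets with categorical features (<= 5 unique values).
    if (X.nunique() <= 5).any():
        return False
    # Exclude non-continuous responses (< 90% of values unique).
    if y.nunique() / n < 0.9:
        return False
    return True
```

A dataset with a binarized feature, for instance, would be rejected by the categorical-feature check.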
+
+Feynman datasets The original Feynman database described in Udrescu and Tegmark (2020) consists of only the relevant features $X_{S_0}$ with a large sample size of $n = 10^5$, and is available from the Feynman Symbolic Regression Database (https://space.mit.edu/home/tegmark/aifeynman.html). We extended this database to include an irrelevant feature matrix $X_{\mathrm{irr}}^{j} \in \mathbb{R}^{n \times s}$ for each relevant feature $x_j$, $j \in S_0$. To take advantage of the SRBench platform, we standardized the Feynman equations to PMLB format and included metadata detailing the true model and the units of each variable. The extended Feynman datasets are generated using the Python script provided in feynman-dataset-code/generate_feynman-dataset.py. To avoid generating separate datasets for each sample size $n$ considered in the main paper, we set $s = 50$ and $n = 100{,}000$ for all Feynman equations with random state control; we refer to these as the full Feynman datasets. In the experiment, each full Feynman dataset is randomly split into a $75\% / 25\%$ train/test set. If the train set contains more samples than the desired training sample size $n$, the train and test sets are further subsampled so that $X_{\mathrm{train}}$ has exactly $n$ samples and $X_{\mathrm{test}}$ has exactly $\lfloor n / 3 \rfloor$ samples.
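The split-then-subsample step can be sketched as follows. This is an illustrative numpy version (the `subsample_split` name is ours), not the SRBench implementation itself:

```python
import numpy as np

def subsample_split(X: np.ndarray, y: np.ndarray, n_train: int, seed: int = 0):
    """75/25 split of the full data, then subsample to n_train and
    floor(n_train / 3) samples when the split is larger than requested."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    cut = int(0.75 * len(X))
    train_idx, test_idx = idx[:cut], idx[cut:]
    # Subsample only when the split exceeds the requested sizes.
    train_idx = train_idx[: min(n_train, len(train_idx))]
    test_idx = test_idx[: min(n_train // 3, len(test_idx))]
    return X[train_idx], y[train_idx], X[test_idx], y[test_idx]
```

For a full dataset of 1,000 rows and a requested $n = 300$, this yields a 300-row train set and a 100-row test set.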
+
+Users can also generate datasets using other data-generating functions $f_{0}$ by supplying a CSV file with the expression of $f_{0}(\cdot)$ and an additional CSV file describing the desired uniform distribution (i.e., the lower and upper bounds of the distribution) of each variable in $f_{0}(\cdot)$ . See feynman_dataset_code/FeynmanEquations.csv and feynman_dataset_code/units.csv for more details.
+
+Sampling Process for Extended Feynman Datasets The sampling process for the extended Feynman datasets is described in the main text and is reproduced here for completeness of the data description in this section.
+
+For each equation $f_0(\cdot)$ in the Feynman Lectures on Physics, we generated the relevant features $X_{S_0}$ following Udrescu and Tegmark (2020):
+
+$$
+\left(x_{1,j}, \dots, x_{n,j}\right) \stackrel{\text{iid}}{\sim} \operatorname{Unif}\left(a_j, b_j\right), \quad \text{for } 1 \leq j \leq p_0, \tag{6}
+$$
+
+where $p_0 = |\mathcal{S}_0|$ is the number of relevant features, $n$ is the sample size, and $a_j$ and $b_j$ are the lower and upper bounds for feature $x_j$ described in https://space.mit.edu/home/tegmark/aifeynman/FeynmanEquations.csv. Then, the response variable is generated as follows:
+
+$$
+y_i = f_0\left(x_{i,1}, \dots, x_{i,p_0}\right) + \varepsilon_i, \quad \text{for } 1 \leq i \leq n, \tag{7}
+$$
+
+where $\varepsilon_{i} \stackrel{\mathrm{iid}}{\sim} N(0,\sigma_{\varepsilon}^{2})$ is an additive Gaussian error, $\sigma_f^2$ denotes the sample variance of $f_0(\cdot)$, and $\sigma_{\varepsilon}^{2} = \sigma_{f}^{2} / \mathrm{SNR}$ is the error variance tuned to a prescribed signal-to-noise ratio (SNR). When $\sigma_{\varepsilon}^{2} = 0$ (i.e., $\mathrm{SNR} = \infty$), (6) and (7) generate the original Feynman Symbolic Regression Database.
+
+For each relevant feature $\boldsymbol{x}_j$, $j = 1, \ldots, p_0$, we generate $s = 50$ copies of irrelevant features following the distribution of $\boldsymbol{x}_j$: $(\boldsymbol{x}_{j,\mathrm{irr}}^1, \ldots, \boldsymbol{x}_{j,\mathrm{irr}}^s) \stackrel{\mathrm{iid}}{\sim} \mathrm{Unif}(a_j, b_j)$. The final feature matrix is then $\boldsymbol{X} = [X_{\mathcal{S}_0}, X_{\mathrm{irr}}^1, \ldots, X_{\mathrm{irr}}^{p_0}] \in \mathbb{R}^{n \times p}$, where $\boldsymbol{X}_{\mathrm{irr}}^j = (\boldsymbol{x}_{j,\mathrm{irr}}^1, \ldots, \boldsymbol{x}_{j,\mathrm{irr}}^s) \in \mathbb{R}^{n \times s}$ is the irrelevant feature matrix induced by the $j$th relevant feature, for $j = 1, \ldots, p_0$, giving $p = p_0(1 + s)$ features in total.
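Putting (6), (7), and the irrelevant-copy construction together, a minimal numpy sketch of the generator might look as follows. This is our own illustration, not the released generation script, and the toy `f0` in the test is a stand-in rather than an actual Feynman equation.

```python
import numpy as np

def generate_extended_dataset(f0, bounds, n=1000, s=50, snr=10.0, seed=0):
    """Sample X_S0 per (6), y per (7), plus s irrelevant copies per relevant
    feature, for p = p0 * (1 + s) features in total."""
    rng = np.random.default_rng(seed)
    # Relevant features: x_j ~ Unif(a_j, b_j), iid across rows.
    X_rel = np.column_stack([rng.uniform(a, b, size=n) for a, b in bounds])
    f_vals = f0(X_rel)
    # Error variance tuned to the prescribed SNR: sigma_eps^2 = var(f0) / SNR.
    sigma_eps = np.sqrt(np.var(f_vals) / snr) if np.isfinite(snr) else 0.0
    y = f_vals + rng.normal(0.0, sigma_eps, size=n)
    # Irrelevant copies X_irr^j, drawn from feature j's own distribution.
    X_irr = np.column_stack([rng.uniform(a, b, size=(n, s)) for a, b in bounds])
    return np.hstack([X_rel, X_irr]), y
```

With `snr=np.inf` the noise variance is zero and the generator reproduces noiseless data, matching the remark above about the original database.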
+
+In Appendix D.4, we consider sampling processes where features are not iid sampled from a uniform distribution.
+
+# C. Additional Experiment Details
+
+# C.1. General Experiment Settings
+
+Experiments were run in a heterogeneous cluster composed of nodes with Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.60GHz, Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz, Intel(R) Xeon(R) Gold 6126 CPU @ 2.60GHz, Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz, and AMD EPYC 7642 CPU @ 2.3GHz processors. The training of a single method on a single dataset for a fixed random seed was considered a job. Each job was managed by SLURM Workload Manager to receive one CPU core, 16GB of RAM, and a time limit of 24 hours. For the ground-truth problems, each final model was given an additional 5 minutes for each of the following steps: 1) cleaning the model for SymPy parsing, 2) simplifying the cleaned model using SymPy, 3) checking the difference solution criterion of the simplified model, 4) checking the ratio solution criterion of the simplified model, and 5) calculating model size (complexity). When the simplification of the cleaned model exceeded the 5-minute wall clock, steps 3-5 were run on the cleaned model instead.
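The per-step wall-clock guard can be implemented by running SymPy in a separate process and falling back to the cleaned model on timeout. A minimal sketch of this idea, assuming string-encoded models; the default 300-second limit mirrors the 5-minute budget described above, and the function names are ours:

```python
import multiprocessing as mp

import sympy as sp

def _simplify_worker(expr_str: str, queue) -> None:
    # Parse and simplify in the child process so the parent can enforce a limit.
    queue.put(str(sp.simplify(sp.sympify(expr_str))))

def simplify_with_timeout(expr_str: str, seconds: float = 300.0) -> str:
    """Simplify a model string with SymPy, falling back to the input on timeout."""
    queue = mp.Queue()
    proc = mp.Process(target=_simplify_worker, args=(expr_str, queue))
    proc.start()
    proc.join(seconds)
    if proc.is_alive():  # wall clock exceeded: kill and keep the cleaned model
        proc.terminate()
        proc.join()
        return expr_str
    return queue.get()
```

Running the solution-criterion checks on the fallback string corresponds to steps 3-5 being applied to the cleaned model when simplification times out.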
+
+# C.2. Implementation Details of the Proposed Variable Selection Method
+
+The proposed method uses the bartMachine R package for its BART implementation. For each dataset, we fit $K = 20$ independent BART models and record the ranking $r_{j,k}$ of variable $x_{j}$'s variable inclusion proportion (VIP) in the $k$th run; the hyperparameters for bartMachine are summarized in Table 2. To cluster the VIP rankings into 2 clusters, we use the hclust function in R to perform agglomerative clustering (unweighted pair group method with arithmetic mean) on the Euclidean dissimilarity matrix of the VIP rankings. Then, $x_{j}$ is selected if $\bar{r}_{j} = \sum_{k=1}^{K} r_{j,k} / K$ belongs to the low-mean cluster.
+
+Table 2: Hyperparameters in bartMachine.
+
+| Parameter | Value |
+| --- | --- |
+| # of trees | 20 |
+| # of burn-in samples | 10,000 |
+| # of posterior samples | 10,000 |
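A compact sketch of the selection rule, assuming the $K \times p$ matrix of per-run VIP rankings has already been computed (BART fitting itself is omitted, and SciPy's average-linkage clustering stands in for R's hclust; the `vip_rank_select` name is ours):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def vip_rank_select(rankings: np.ndarray) -> np.ndarray:
    """rankings: (K, p) array; rankings[k, j] is feature j's VIP rank in run k.
    Split features into 2 UPGMA clusters and keep the low-mean cluster."""
    r_bar = rankings.mean(axis=0)  # average rank per feature
    Z = linkage(r_bar.reshape(-1, 1), method="average", metric="euclidean")
    labels = fcluster(Z, t=2, criterion="maxclust")
    # The low-mean cluster holds the (putatively) relevant features.
    low = min(np.unique(labels), key=lambda c: r_bar[labels == c].mean())
    return np.where(labels == low)[0]
```

On rankings where a few features consistently sit near the top, the low-mean cluster recovers exactly those features.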
+
+# D. Additional Results
+
+# D.1. Visualization of Average VIP Rankings $\bar{r}_j$
+
+Figure 5 shows the average BART VIP rankings for Feynman equation I-38-12 with $n = 1000$. At high SNR, there is a clear separation between the low- and high-mean clusters, and the hypothesized cluster means closely match their actual values. As SNR decreases, irrelevant features tend to receive higher rankings, slightly shifting the cluster means and incurring more false positives (FPs). Despite this deviation, the cluster means remain far apart, ensuring separation between relevant and irrelevant features.
+
+Figure 6 further demonstrates the clustering accuracy of the proposed method. Regardless of the SNR level, all true features are consistently assigned to the low-mean cluster, which is highly desirable in the PAN+SR framework. While decreasing SNR leads to some misclassification of the irrelevant features, the proposed method ensures that no true features are excluded. This robustness in retaining the true features under varying noise levels makes the proposed method well-suited for the PAN+SR framework and for high-dimensional SR tasks.
+
+
+
+Figure 5: Average BART VIP rankings $\bar{r}_j$ over $K = 20$ runs on Feynman equation I-38-12 with $n = 1000$, $p_0 = 4$, and $p = 204$. Black vertical dashed lines indicate the cluster means. Red solid vertical lines are the hypothesized cluster means: $(1 + p_0) / 2 = 2.5$ and $(p_0 + 1 + p) / 2 = 104.5$.
+
+
+
+Figure 6: Hierarchical clustering accuracy on Feynman equation I-38-12 with $n = 1000$ , $p_0 = 4$ , and $p = 204$ . Red and teal represent the low- and high-mean clusters, respectively. Circles and triangles represent relevant and irrelevant features, respectively.
+
+
+# D.2. Analysis of Different Nonparametric Variable Selection Methods
+
+
+Figure 7: True positive rate (TPR) on the Feynman datasets for $n = 500,1000,1500,2000$ and $\mathrm{SNR} = \infty, 20, 15, 10, 5, 2, 1, 0.5$ . Points indicate the mean performance, and bars show the $95\%$ confidence interval. VIP Rank is the proposed method for PAN pre-screening. Local, G.SE, G.MAX, and RF are alternative nonparametric variable selection methods.
+
+PAN pre-screening presents a unique challenge for nonparametric variable selection methods: any missed signal (false negative) eliminates the correct expression $f_0(\cdot)$ from the search space. That is, a true positive rate (TPR) near $100\%$ in the pre-screening phase is necessary for successful SR. Figure 7 compares the average TPR of five nonparametric variable selection methods across various configurations of $n$ and SNR on the Feynman datasets. VIP Rank, the proposed method, is compared with three BART permutation test-based methods (Local, G.SE, and G.MAX) (Bleich et al., 2014) and the Random Forest (RF) variable selection method in PySR (Cranmer, 2023). Of the three BART permutation test-based methods, BART-Local applies the least stringent selection criteria, while BART-G.MAX is the most stringent, with BART-G.SE offering a balance between the two. The RF implementation requires users to specify the number of selected variables $k$, which we tuned over $\{1,2,\dots,20\}$ using 5-fold cross-validation.
+
+VIP Rank consistently achieves the highest TPR, nearing or reaching $100\%$ across all experimental settings. In noiseless conditions $(\mathrm{SNR} = \infty)$, only VIP Rank attains a perfect TPR of $100\%$. Although there is a slight TPR decline for VIP Rank at $n = 500$ and $\mathrm{SNR} \leq 5$, it still outperforms the other methods, particularly at $n = 500$ and $\mathrm{SNR} = 0.5$. These results reinforce the need for a specialized variable selection method for PAN pre-screening. In addition to the four methods considered here, we point readers to Ye et al. (2024), who analyzed three additional nonparametric variable selection methods and showed that none outperforms BART-G.SE in terms of TPR.
+
+Figure 8 illustrates the false positive rate (FPR), a crucial metric for evaluating variable selection accuracy. As discussed in the main paper, VIP Rank produces higher FPR under low SNR conditions—a tradeoff made to maintain a near-perfect true positive rate (TPR). While this tradeoff may be undesirable for typical variable selection tasks, it is acceptable for PAN pre-screening, where minimizing false negatives (FNs) is the priority. The three BART permutation-based methods and RF consistently maintain low and robust FPRs across all settings of $n$ and SNR. However, as Figure 7 shows, this strict control of FPR comes at the cost of worse TPR performance.
+
+To further evaluate the impact of variable selection methods in the PAN+SR framework, we replaced VIP Rank with BART-G.SE and compared their performance using Operon as the SR method. Operon was chosen for this analysis due to
+
+
+Figure 8: False positive rate (FPR) on the Feynman datasets for $n = 500,1000,1500,2000$ and $\mathrm{SNR} = \infty, 20, 15, 10, 5, 2, 1, 0.5$ . Points indicate the mean performance, and bars show the $95\%$ confidence interval. VIP Rank is the proposed method for PAN pre-screening. Local, G.SE, G.MAX, and RF are alternative nonparametric variable selection methods.
+
+its strong $R^2$ performance in both the black-box and ground-truth experiments. Table 3 summarizes the average test set $R^2$ on the Feynman dataset. VIP+SR consistently achieves the highest $R^2$ across all experimental settings. For instance, at $n = 500$ and $\mathrm{SNR} = 20$, VIP+SR achieves an average $R^2$ of 0.892, compared to 0.860 for GSE+SR and 0.846 for standalone SR. Under high noise conditions, VIP+SR continues to demonstrate better robustness than GSE+SR. At $n = 500$ and $\mathrm{SNR} = 0.5$, VIP+SR scores 0.145, slightly outperforming GSE+SR (0.142) and standalone SR (0.142). This trend is consistent across different sample sizes $n$.
+
+# D.3. Effect of Different Clustering Algorithms
+
+The proposed VIP Rank variable selection method can be implemented with various off-the-shelf clustering algorithms. However, due to the class-imbalanced nature of the variable selection problem, not all clustering algorithms are suitable. In this ablation study, we examine the effect of the clustering algorithm on the TPR and FPR performance of VIP Rank. We selected 10 clustering algorithms available in scikit-learn v1.5.7: agglomerative hierarchical clustering (AHC), k-means++, Gaussian mixture model (GMM), Birch, Mean Shift, Affinity Propagation, Spectral, OPTICS, HDBSCAN, and DBSCAN.
+
+As illustrated in Figure 9, the first 5 clustering algorithms (AHC, k-means++, GMM, Birch, Mean Shift) achieve the highest TPR across all simulation settings, with indistinguishable differences. Affinity Propagation attains a similar TPR to the top 5 algorithms but lags behind in noisy (e.g., $\mathrm{SNR} = 0.5$) and small-$n$ (e.g., $n = 500$) settings. The remaining algorithms have significantly worse TPRs and are thus not suitable for VIP Rank.
+
+Since the top 5 algorithms have indistinguishable TPRs, we select the one with the lowest FPR. As shown in Figure 10, AHC has a significantly lower FPR than the other top 5 algorithms across most simulation settings. Combined with its near-$100\%$ TPR, AHC identifies a more compact feature set that has a high probability of containing all relevant features.
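The flavor of this comparison can be reproduced in miniature with scikit-learn. The toy average rankings below are synthetic (our own construction, with 4 relevant features well separated from 100 irrelevant ones); on such well-separated data AHC and k-means++ agree on the low-mean cluster:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans

def low_mean_cluster(r_bar: np.ndarray, algo) -> set:
    """Indices assigned to whichever of the two clusters has the smaller mean."""
    labels = algo.fit_predict(r_bar.reshape(-1, 1))
    low = min(np.unique(labels), key=lambda c: r_bar[labels == c].mean())
    return set(np.where(labels == low)[0].tolist())

# Synthetic average rankings: 4 relevant features near the top of the ranking,
# 100 irrelevant ones far below (mimicking the extended Feynman setting).
rng = np.random.default_rng(0)
r_bar = np.concatenate([rng.uniform(1, 4, size=4), rng.uniform(40, 70, size=100)])

ahc = low_mean_cluster(r_bar, AgglomerativeClustering(n_clusters=2, linkage="average"))
kmpp = low_mean_cluster(r_bar, KMeans(n_clusters=2, n_init=10, random_state=0))
```

The differences reported in Figures 9 and 10 only emerge in harder regimes (low SNR, small $n$), where the gap between the two groups of rankings shrinks.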
+
+
+Figure 9: True positive rate of various ablations of clustering algorithm.
+
+
+Figure 10: False positive rate of various ablations of clustering algorithm.
+
+Table 3: Average test set $R^2$ . The highest value in each experimental setting is in bold.
+
+| | noiseless | 20 | 15 | 10 | 5 | 2 | 1 | 0.5 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| **$n = 500$** | | | | | | | | |
+| VIP+SR | **0.974** | **0.892** | **0.870** | **0.837** | **0.730** | **0.525** | **0.335** | **0.145** |
+| GSE+SR | 0.948 | 0.860 | 0.859 | 0.818 | 0.710 | 0.510 | 0.327 | 0.142 |
+| SR | 0.915 | 0.846 | 0.840 | 0.792 | 0.702 | 0.506 | 0.322 | 0.142 |
+| **$n = 1000$** | | | | | | | | |
+| VIP+SR | **0.984** | **0.919** | **0.901** | **0.867** | **0.774** | **0.586** | **0.406** | **0.229** |
+| GSE+SR | 0.971 | 0.914 | 0.897 | 0.851 | **0.774** | 0.574 | 0.405 | **0.229** |
+| SR | 0.942 | 0.883 | 0.867 | 0.825 | 0.747 | 0.580 | 0.393 | 0.227 |
+| **$n = 1500$** | | | | | | | | |
+| VIP+SR | **0.990** | **0.928** | **0.909** | **0.874** | **0.792** | **0.612** | **0.433** | **0.260** |
+| GSE+SR | 0.961 | 0.910 | 0.899 | 0.866 | 0.781 | 0.600 | 0.428 | 0.257 |
+| SR | 0.956 | 0.895 | 0.878 | 0.856 | 0.761 | 0.592 | 0.426 | 0.255 |
+| **$n = 2000$** | | | | | | | | |
+| VIP+SR | **0.990** | **0.935** | **0.914** | **0.887** | **0.805** | **0.619** | **0.448** | **0.277** |
+| GSE+SR | 0.963 | 0.918 | 0.905 | 0.872 | 0.787 | 0.617 | 0.445 | 0.272 |
+| SR | 0.960 | 0.907 | 0.892 | 0.855 | 0.781 | 0.611 | 0.437 | 0.272 |
+
+# D.4. Effect of Noisy, Duplicated, and Correlated Predictors
+
+In addition to the extensive simulation settings described in Section 5.2, we further evaluate VIP Rank under alternative predictor structures that challenge common modeling assumptions:
+
+- Baseline: $x_{1},\ldots ,x_{p} \stackrel{\mathrm{iid}}{\sim} \mathrm{Unif}(0,1)$
+- Noisy $X$: independent Gaussian noise is added to each predictor with variance equal to 1/5 of the signal variance
+- Duplicated $X$: a redundant feature $x_{6} = x_{1} + x_{2}$ is added, where $x_{1}$ and $x_{2}$ are relevant predictors
+- Correlated $X$: $x_1,\ldots ,x_p \sim \mathrm{Unif}(0,1)$ with an autocorrelation structure $\rho_{ij} = 0.9^{|i - j|}$
+
+The response variable $y$ is generated according to the Friedman (1991) equation:
+
+$$
+y = 10 \sin(\pi x_1 x_2) + 20 (x_3 - 0.5)^2 + 10 x_4 + 5 x_5 + \varepsilon, \quad \varepsilon \sim N(0, \sigma^2).
+$$
+
+We fix $n = 1000$ , $p = 100$ , $\mathrm{SNR} = 10$ , and repeat each scenario for 100 trials. Table 4 reports the average TPR and FPR. VIP Rank consistently identifies all relevant features across all scenarios, demonstrating strong robustness to noise, redundancy, and correlation among predictors.
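The Correlated $X$ scenario and the Friedman response can be sketched as follows. The Gaussian-copula construction is one standard way to obtain $\mathrm{Unif}(0,1)$ marginals with the stated autocorrelation (the text does not pin down the exact mechanism, so this is an assumption), and the correlation induced between the uniforms is slightly attenuated relative to the latent $\rho$:

```python
import numpy as np
from scipy.stats import norm

def correlated_uniform(n: int, p: int, rho: float = 0.9, seed: int = 0):
    """Unif(0,1) marginals with autocorrelation rho^|i-j| via a Gaussian copula."""
    rng = np.random.default_rng(seed)
    cov = rho ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    z = rng.multivariate_normal(np.zeros(p), cov, size=n)
    return norm.cdf(z)  # the normal CDF maps each marginal to Unif(0,1)

def friedman_response(X: np.ndarray, sigma: float, rng) -> np.ndarray:
    """Friedman (1991) test function with additive Gaussian noise."""
    f = (10 * np.sin(np.pi * X[:, 0] * X[:, 1])
         + 20 * (X[:, 2] - 0.5) ** 2 + 10 * X[:, 3] + 5 * X[:, 4])
    return f + rng.normal(0.0, sigma, size=len(X))
```

The other scenarios follow by adding Gaussian noise to, duplicating, or independently resampling the columns of the baseline design.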
+
+Table 4: Average performance in each scenario across 100 trials.
+
+| Scenario | TPR | FPR |
+| --- | --- | --- |
+| Baseline | 100% | 10.58% |
+| Noisy $X$ | 100% | 26.42% |
+| Duplicated $X$ | 100% | 11.11% |
+| Correlated $X$ | 100% | 15.98% |
+
+# D.5. Additional Performance Metrics for Operon vs PAN+Operon
+
+Figures 11, 12, and 13 show additional metrics not discussed in the main paper. Although PAN+Operon's solution rate plummets from $\sim 27\%$ at $\mathrm{SNR} = \infty$ to $0\%$ at $\mathrm{SNR} = 20$ across all $n$, Figure 11 shows that $R^2$ on the test set still improves across all $n$ and SNR, while model interpretability also improves, as evidenced by the uniformly lower model sizes in Figure 12.
+
+
+Figure 11: $R^2$ on test set with Operon as the SR module. Points indicate the average $R^2$ on test set and bars represent the $95\%$ confidence intervals.
+
+
+Figure 12: Model size with Operon as the SR module. Points indicate the average model size and bars represent the $95\%$ confidence intervals.
+
+
+Figure 13: Solution rate with Operon as the SR module. Points indicate the average solution rate and bars represent the $95\%$ confidence intervals.
\ No newline at end of file
diff --git a/abinitiononparametricvariableselectionforscalablesymbolicregressionwithlargep/images.zip b/abinitiononparametricvariableselectionforscalablesymbolicregressionwithlargep/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..f1ac94283d1a5292de668b9bf46a2029ffa734c1
--- /dev/null
+++ b/abinitiononparametricvariableselectionforscalablesymbolicregressionwithlargep/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:00b10f0b9a45765371a80a0f45f2cbc22c827d65d17d61125641269bee3ae9fa
+size 993963
diff --git a/abinitiononparametricvariableselectionforscalablesymbolicregressionwithlargep/layout.json b/abinitiononparametricvariableselectionforscalablesymbolicregressionwithlargep/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..d40b61ba650758a301a623128e36bf91e5229624
--- /dev/null
+++ b/abinitiononparametricvariableselectionforscalablesymbolicregressionwithlargep/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:51ede053ce49afcdf6118c21d249d780a8f1ae06f39522a1b379b5471ae0c2cf
+size 800765
diff --git a/abkdpursuingaproperallocationoftheprobabilitymassinknowledgedistillationviadivergence/89ad1898-ec6d-430c-a0d7-cdc1fb5659fa_content_list.json b/abkdpursuingaproperallocationoftheprobabilitymassinknowledgedistillationviadivergence/89ad1898-ec6d-430c-a0d7-cdc1fb5659fa_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..1ea2c467e98b2e41702e68e5baec9ddeb5673d4e
--- /dev/null
+++ b/abkdpursuingaproperallocationoftheprobabilitymassinknowledgedistillationviadivergence/89ad1898-ec6d-430c-a0d7-cdc1fb5659fa_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:33b207d7474c36e2244edb7e149418213d49043db35e2658a7eb824430b64c1a
+size 350143
diff --git a/abkdpursuingaproperallocationoftheprobabilitymassinknowledgedistillationviadivergence/89ad1898-ec6d-430c-a0d7-cdc1fb5659fa_model.json b/abkdpursuingaproperallocationoftheprobabilitymassinknowledgedistillationviadivergence/89ad1898-ec6d-430c-a0d7-cdc1fb5659fa_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..9806b3ff21151dfa7d7d33abe89e63db89253e76
--- /dev/null
+++ b/abkdpursuingaproperallocationoftheprobabilitymassinknowledgedistillationviadivergence/89ad1898-ec6d-430c-a0d7-cdc1fb5659fa_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c593f38714f8bc47b2771b1d1ab9a7bfa2849cd69d9e8915649232c13817f72b
+size 424430
diff --git a/abkdpursuingaproperallocationoftheprobabilitymassinknowledgedistillationviadivergence/89ad1898-ec6d-430c-a0d7-cdc1fb5659fa_origin.pdf b/abkdpursuingaproperallocationoftheprobabilitymassinknowledgedistillationviadivergence/89ad1898-ec6d-430c-a0d7-cdc1fb5659fa_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..4b1c4d9ac0c8ab61f633b913c446b7143f6fc82f
--- /dev/null
+++ b/abkdpursuingaproperallocationoftheprobabilitymassinknowledgedistillationviadivergence/89ad1898-ec6d-430c-a0d7-cdc1fb5659fa_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:83225f30b01acae912510db48b518ab047b09e6d3b7325aefbbf76a8031f7273
+size 4525036
diff --git a/abkdpursuingaproperallocationoftheprobabilitymassinknowledgedistillationviadivergence/full.md b/abkdpursuingaproperallocationoftheprobabilitymassinknowledgedistillationviadivergence/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..505042c5fa8b25c9c1eb7ec31f3bae4722faee3c
--- /dev/null
+++ b/abkdpursuingaproperallocationoftheprobabilitymassinknowledgedistillationviadivergence/full.md
@@ -0,0 +1,1897 @@
+# ABKD: Pursuing a Proper Allocation of the Probability Mass in Knowledge Distillation via $\alpha$ - $\beta$ -Divergence
+
+Guanghui Wang $^{1}$ Zhiyong Yang $^{1}$ Zitai Wang $^{2}$ Shi Wang $^{2}$ Qianqian Xu $^{2}$ Qingming Huang $^{1}$
+
+# Abstract
+
+Knowledge Distillation (KD) transfers knowledge from a large teacher model to a smaller student model by minimizing the divergence between their output distributions, typically using forward Kullback-Leibler divergence (FKLD) or reverse KLD (RKLD). It has become an effective training paradigm due to the broader supervision information provided by the teacher distribution compared to one-hot labels. We identify that the core challenge in KD lies in balancing two mode-concentration effects: the Hardness-Concentration effect, which refers to focusing on modes with large errors, and the Confidence-Concentration effect, which refers to focusing on modes with high student confidence. Through an analysis of how probabilities are reassigned during gradient updates, we observe that these two effects are entangled in FKLD and RKLD, but in extreme forms. Specifically, both are too weak in FKLD, causing the student to fail to concentrate on the target class. In contrast, both are too strong in RKLD, causing the student to overly emphasize the target class while ignoring the broader distributional information from the teacher. To address this imbalance, we propose ABKD, a generic framework with $\alpha$ - $\beta$ -divergence. Our theoretical results show that ABKD offers a smooth interpolation between FKLD and RKLD, achieving an effective trade-off between these effects. Extensive experiments on 17 language/vision datasets with 12 teacher-student settings confirm
+
+$^{1}$ School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing, China $^{2}$ Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China $^{3}$ Key Laboratory of Big Data Mining and Knowledge Management (BDKM), University of Chinese Academy of Sciences, Beijing, China. Correspondence to: Zhiyong Yang, Qingming Huang.
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+its efficacy. The code is available at https://github.com/ghwang-s/abkd.
+
+# 1. Introduction
+
+Knowledge Distillation (KD) (Hinton, 2015) is a widely-adopted technique for transferring knowledge from large models (teachers) to smaller models (students). In this setup, the student model, with a predictive distribution $q_{\theta}$ , learns to mimic the predictive distribution $p$ of the teacher model. This imitation is typically achieved by minimizing a predefined divergence $\mathbb{D}$ between the teacher distribution $p$ and the student distribution $q_{\theta}$ : $\ell_{\mathrm{KD}} \triangleq \mathbb{D}(p \| q_{\theta})$ . This way, KD allows the student to leverage richer soft label information from $p$ compared to one-hot labels, often leading to better performance than traditional supervised fine-tuning. This has been shown in tasks like image classification (Dosovitskiy, 2020; Radford et al., 2021; Yang et al., 2023b; Wang et al., 2022b) and text generation (Vaswani, 2017; Touvron et al., 2023a).
+
+A key step in KD is to choose a proper divergence $\mathbb{D}$ for distribution matching. One popular choice in previous works (Cho & Hariharan, 2019; Mirzadeh et al., 2020; Zhou et al., 2021; Zhao et al., 2022; Jin et al., 2023; Sun et al., 2024; Zheng & Yang, 2024) is the forward Kullback-Leibler divergence (FKLD). However, FKLD's asymmetry often results in a student distribution $q_{\theta}$ that is overly smooth, spreading across the entire support of $p$ . To address this, recent studies (Lee et al., 2023; Gu et al., 2024a; Kim et al., 2024; Gu et al., 2024b) have explored the reverse KLD (RKLD), which allows $q_{\theta}$ to focus on a few prominent modes of $p$ . Despite the effectiveness, empirical results (Wen et al., 2023; Wu et al., 2024; Ko et al., 2024) suggest that RKLD often yields suboptimal performance across a range of tasks. What is worse, there is no systematic approach to identify the essential issues hidden behind, which hinders the development of a more generic and effective KD framework. To get out of this dilemma, we first pose the following question:
+
+What underlying factors contribute to the suboptimal performance of FKLD and RKLD?
+
+Figure 1. (a) Illustration of the unified search space for our proposed ABKD, where height (color) represents performance $(\uparrow)$ . The FKLD and RKLD are special cases of ABKD when selecting $(\alpha = 1, \beta = 0)$ and $(\alpha = 0, \beta = 1)$ , respectively. The $\alpha$ -divergence can only search along the submanifold $\alpha + \beta = 1$ in the ABKD space. (b)-(c) illustrate how adjusting $\alpha$ and $\beta$ affects hardness-concentration and confidence-concentration. (d)-(g) illustrate how different divergences learn a student distribution from the given teacher distribution. The $\alpha$ - $\beta$ -divergence, compared to others, can more effectively learn soft label information while maintaining focus on the target class.
+
+To answer this, we analyze how different divergence functions affect the allocation of probability mass in the student distribution during training by tracking the log mass ratio LogR. Notably, LogR is proportional to the gradient of the loss function w.r.t. the logits. This insight allows us to frame the problem as understanding how divergence algorithms influence the reduction of LogR. Through this lens, we identify two key mode-concentration effects: Hardness-Concentration and Confidence-Concentration. Hardness-Concentration refers to focusing on modes in the loss where there is a large error between $p$ and $q_{\theta}$ , while Confidence-Concentration refers to focusing on modes in the loss where $q_{\theta}$ has high confidence.
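The mode-covering vs. mode-seeking contrast between FKLD and RKLD can be made concrete with a toy computation. The sketch below is illustrative only (the distributions and the `kl` helper are ours, not from the paper's code): FKLD scores a smooth student far better than a peaked one against a multi-mode teacher, while RKLD is much more tolerant of the peaked student.

```python
# Illustrative sketch (not from the paper): forward vs. reverse KL on toy distributions.
import math

def kl(p, q):
    # D_KL(p || q) = sum_k p(k) * log(p(k) / q(k)); terms with p(k) = 0 contribute 0.
    return sum(pk * math.log(pk / qk) for pk, qk in zip(p, q) if pk > 0)

teacher = [0.6, 0.3, 0.1]      # multi-mode teacher distribution p
smooth  = [0.4, 0.35, 0.25]    # student mass spread over the whole support
peaked  = [0.98, 0.01, 0.01]   # student mass concentrated on the top mode

# FKLD (mode-covering) strongly prefers the smooth student ...
assert kl(teacher, smooth) < kl(teacher, peaked)
# ... while RKLD (mode-seeking) penalizes the peaked student far less than FKLD does.
assert kl(peaked, teacher) < kl(teacher, peaked)
```

The asymmetry visible here is exactly what the LogR analysis below formalizes: the two directions of the same divergence reward very different allocations of probability mass.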
+
+On top of this, we find that the limitations of FKLD and RKLD stem from the extreme ways they utilize these concentration effects: a) FKLD exhibits weak concentration effects, treating mismatches from all classes equally, which fails to guide the student to concentrate on the target class and causes incorrect predictions (Fig. 1d). b) RKLD exhibits strong concentration effects, focusing on both hard classes with large errors and classes where the student has high confidence. This often leads to a trivial solution, where the well-trained student focuses exclusively on the target class and ignores broader knowledge from $p$ (Fig. 1e). With the limitations revealed, we continue to seek an answer to the following question:
+
+Can we find a generic, theoretically grounded method to balance hardness-concentration and confidence-concentration?
+
+In pursuit of this, we introduce the $\alpha$ - $\beta$ -divergence, a general extension of divergences that unifies FKLD and RKLD, while also extending to previously unexplored divergences like the Hellinger distance and $\beta$ -divergence. Our theoretical results demonstrate that the $\alpha$ - $\beta$ -divergence provides
+
+a flexible mechanism to smoothly interpolate between the extremes of FKLD and RKLD by controlling the trade-off between hardness-concentration (Fig. 1b) and confidence-concentration (Fig. 1c) via the hyperparameters $\alpha$ and $\beta$ . This mechanism ensures a more proper allocation of probability mass (Fig. 1g). Motivated by these insights, we propose ABKD, a generic distillation framework based on $\alpha - \beta$ -divergence. Empirical results across a variety of tasks, including instruction-following and image classification, demonstrate ABKD's generality and effectiveness. For instance, by modifying only the loss function, ABKD achieves performance improvements of 0.81 to 3.31 over FKLD and RKLD on five instruction-response datasets when distilling GPT-2 XL (1.5B) into GPT-2 (0.1B).
+
+In summary, the contributions of this work are three-fold:
+
+- Theoretically: We analyze the limitations of FKLD and RKLD from novel perspectives of hardness-concentration and confidence-concentration, and show that the $\alpha-\beta$ -divergence offers a flexible approach to balance these effects.
+- Methodologically: We propose ABKD, a flexible distillation framework that unifies FKLD and RKLD and generalizes to several other divergences, offering greater versatility and applicability.
+- Empirically: Extensive experiments on 17 language and vision datasets with 12 teacher-student configurations (0.85M-0.46M to 7B-3B) validate the theoretical insights. ABKD outperforms or matches state-of-the-art methods without extra trainable parameters and allows further gains by rectifying their loss functions.
+
+Prior Arts. We discuss related work and defer a detailed account to App. A.
+
+# 2. Preliminaries
+
+KD involves using a fixed teacher model $f_{T}$ to improve the performance of a parameterized student model $f_{S}$ . Given an input $x$ , the teacher $f_{T}$ and student $f_{S}$ produce probability distributions $p$ and $q_{\theta}$ , respectively.
+
+The goal of KD can be achieved by letting $q_{\theta}$ mimic $p$ for all samples in dataset $\mathcal{D}$ . A direct way to do this is minimizing:
+
+$$
+\ell_ {\mathrm {K D}} \triangleq \mathbb {D} (p \| q _ {\theta}), \tag {1}
+$$
+
+where $\mathbb{D}$ is a distribution measure. Optionally, practitioners can substitute $p$ with the one-hot vector $\boldsymbol{y}$ , where $\boldsymbol{y} \triangleq [0, \dots, 1, \dots, 0]$ with 1 at ground-truth label $y$ and 0 elsewhere. In this case, the loss is $\ell_{\mathrm{CE}} \triangleq \mathbb{D}(\boldsymbol{y} \| q_{\theta})$ , where $\mathbb{D}$ is typically the FKLD. The final training loss is:
+
+$$
+\ell = \ell_ {\mathrm {C E}} + \lambda \ell_ {\mathrm {K D}}, \tag {2}
+$$
+
+where $\lambda$ is a hyperparameter. Since $p$ provides richer information (i.e., soft label) than the one-hot vector $\mathbf{y}$ , KD outperforms traditional supervised fine-tuning on many downstream tasks, such as instruction-following and image classification. The settings for these tasks in KD are as follows.
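As a minimal sketch (our own toy code, not the authors' implementation), Eq. (2) with $\mathbb{D}$ instantiated as FKLD can be computed for a single example; the distributions, the `kd_objective` name, and the values of $\lambda$ are illustrative assumptions:

```python
# Hedged sketch of Eq. (2): total loss = CE to the one-hot label + lambda * FKLD to the teacher.
import math

def fkld(p, q):
    # Forward KL divergence D_KL(p || q) over discrete distributions.
    return sum(pk * math.log(pk / qk) for pk, qk in zip(p, q) if pk > 0)

def kd_objective(p_teacher, q_student, y, lam=1.0):
    # l_CE = D(y || q) with one-hot y reduces to -log q(y).
    ce = -math.log(q_student[y])
    return ce + lam * fkld(p_teacher, q_student)

# Toy teacher/student outputs over 3 classes; ground-truth label y = 0.
loss = kd_objective([0.7, 0.2, 0.1], [0.5, 0.3, 0.2], y=0, lam=0.5)
```

When the student matches the teacher exactly, the KD term vanishes and only the CE term remains, which is the usual sanity check for such objectives.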
+
+Instruction-following. Let $\mathbf{x}$ and $\mathbf{y}$ represent the input and output sequences, respectively. A token-level autoregressive model produces a $C$ -dimensional probability distribution for the $n$ -th token over the vocabulary $\mathbb{V}$ , conditioned on $\mathbf{x}$ and $\mathbf{y}_{<n}$ .
+
+2. RKLD preferentially increases the mass of underestimated $(p(y) > q_{\theta}(y))$ classes with higher $q_{\theta}(y)$ .
+3. RKLD preferentially reduces the mass of overestimated $(p(y) < q_{\theta}(y))$ classes with smaller $q_{\theta}(y)$ .
+
+As shown in Fig. 1(d), the equally weighted matching scheme in FKLD drives students to sub-optimal modes, which induces wrong predictions. For RKLD, the theorem states that it favors small-mass classes only when the teacher score is over-estimated, and favors large-mass classes only in the opposite scenario. As a total effect, the small mass tends to get smaller and the large mass tends to get larger. As an extreme result shown in Fig. 1(e), RKLD eventually forces the student to focus on one class. This makes the teacher's supervision degenerate to a one-hot label, which loses the distributional information hidden inside the teacher's prediction. This leads to the following conclusion:
+
+A proper divergence should achieve a moderate trade-off between hardness-concentration and confidence-concentration.
+
+# 3.3. Weighted Sum of FKLD and RKLD
+
+In pursuit of this, a naive solution is to take a weighted sum of FKLD and RKLD, which we call the weighted sum divergence (WSD):
+
+$$
+\mathbb {D} _ {\mathrm {W S D}} (p \| q) \triangleq \alpha \mathbb {D} _ {\mathrm {K L}} (p \| q) + \beta \mathbb {D} _ {\mathrm {K L}} (q \| p), \tag {6}
+$$
+
+where $\alpha$ and $\beta$ are hyperparameters. A more principled approach is to adapt the weighting coefficients dynamically during training based on the discrepancy (e.g., entropy) between $p$ and $q$ , as done in previous works (Amara et al., 2022; Wu et al., 2024).
+
+Unfortunately, such a composite metric overemphasizes modes with small probabilities in $p$ and $q$ . To see this, when either $q(k) \approx 0, p(k) > 0$ or $p(k) \approx 0, q(k) > 0$ , we have
+
+$\mathbb{D}_{\mathrm{WSD}}(p\| q)\to \infty$ . Hence, the algorithm must focus on extreme cases to minimize the objective function, leading to improper probability allocation. Moreover, similar to the analysis in Ko et al. (2024), one can easily show that the gradient norm in this case also grows excessively, leading to significant and potentially noisy parameter updates. Such behaviors can destabilize the optimization process and hinder convergence.
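This blow-up is easy to observe numerically. The sketch below is a toy illustration (distributions and the `wsd` helper are our own, with fixed weights $\alpha = \beta = 0.5$ rather than any tuned values): the objective grows without bound as one class's student mass vanishes while the teacher still supports it.

```python
# Sketch: the weighted-sum divergence of Eq. (6) diverges when q nearly
# vanishes on a class that p supports (and vice versa, by symmetry).
import math

def kl(p, q):
    return sum(pk * math.log(pk / qk) for pk, qk in zip(p, q) if pk > 0)

def wsd(p, q, a=0.5, b=0.5):
    # D_WSD(p || q) = a * D_KL(p || q) + b * D_KL(q || p)
    return a * kl(p, q) + b * kl(q, p)

p = [0.5, 0.5]
# Shrink q(2) toward 0 while p(2) = 0.5 > 0: the objective keeps growing.
vals = [wsd(p, [1.0 - eps, eps]) for eps in (1e-2, 1e-4, 1e-8)]
assert vals[0] < vals[1] < vals[2]
```

Minimizing such an objective therefore pushes the optimizer toward these extreme-ratio classes, which is precisely the improper allocation described above.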
+
+Another attractive approach is to use the Jensen-Shannon divergence (Binici et al., 2022; Agarwal et al., 2024) $\mathbb{D}_{\mathrm{JSD}}(p\| q)\triangleq \frac{1}{2}\mathbb{D}_{\mathrm{KL}}(p\| m) + \frac{1}{2}\mathbb{D}_{\mathrm{KL}}(q\| m)$ , where $m = \frac{1}{2} (p + q)$ . However, a major drawback of JSD is that it suffers from gradient vanishing (Arjovsky et al., 2017) when the distributions $p$ and $q_{\theta}$ are far apart (a common scenario in early training stages), which hinders model convergence.
+
+Above all, balancing hardness and confidence concentration is non-trivial if one only resorts to FKLD and RKLD. In the next section, we will introduce a generic notion of divergence to address this issue.
+
+# 4. ABKD: The Proposed Method
+
+# 4.1. ABKD
+
+One way to pursue a harmonic utilization of hardness- and confidence-concentration is to find a subtle point between FKLD and RKLD. The following $\alpha$ - $\beta$ -divergence exactly serves this purpose (Cichocki et al., 2011).
+
+Definition 4.1 ( $\alpha$ - $\beta$ -divergence). Consider $\alpha$ and $\beta \in \mathbb{R}$ , satisfying $\alpha, \beta, \alpha + \beta \neq 0$ . The $\alpha$ - $\beta$ -divergence of two distributions is given by:
+
+$$
+\mathbb {D} _ {\mathrm {A B}} ^ {(\alpha , \beta)} (p \| q) \triangleq - \frac {1}{\alpha \beta} \sum_ {k} \left[ p (k) ^ {\alpha} q (k) ^ {\beta} - \frac {\alpha}{\alpha + \beta} p (k) ^ {\alpha + \beta} - \frac {\beta}{\alpha + \beta} q (k) ^ {\alpha + \beta} \right],
+$$
+
+where $p = [p(k)]_{k=1}^{C}$ and $q = [q(k)]_{k=1}^{C}$ are two discrete distributions over $C$ classes.
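Definition 4.1 can be checked numerically. The sketch below (the `ab_div` and `kl` helper names are ours, not from the paper's code) implements the formula directly and verifies that the FKLD and RKLD special cases listed later emerge as limits, here approximated with a small but nonzero exponent:

```python
# Sketch of Def. 4.1 for discrete p, q (requires alpha, beta, alpha + beta != 0).
import math

def ab_div(p, q, a, b):
    s = 0.0
    for pk, qk in zip(p, q):
        s += (pk ** a) * (qk ** b) \
             - a / (a + b) * pk ** (a + b) \
             - b / (a + b) * qk ** (a + b)
    return -s / (a * b)

def kl(p, q):
    return sum(pk * math.log(pk / qk) for pk, qk in zip(p, q) if pk > 0)

p, q = [0.7, 0.2, 0.1], [0.5, 0.3, 0.2]
# As beta -> 0 with alpha = 1, the alpha-beta-divergence approaches FKLD;
# as alpha -> 0 with beta = 1, it approaches RKLD (limits via l'Hopital's rule).
assert abs(ab_div(p, q, 1.0, 1e-6) - kl(p, q)) < 1e-4
assert abs(ab_div(p, q, 1e-6, 1.0) - kl(q, p)) < 1e-4
```

The divergence also vanishes when $p = q$ for any valid $(\alpha, \beta)$, matching the usual requirements of a distribution measure.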
+
+As will soon be seen in Sec. 4.2, both the hardness-concentration and confidence-concentration effects in $\alpha$ - $\beta$ -divergence can be regarded as interpolations between the corresponding effects of FKLD and RKLD. This ability allows the $\alpha$ - $\beta$ -divergence to ensure a more proper allocation of probability mass.
+
+Inspired by this, we propose ABKD, which is formally defined as minimizing the following objective:
+
+$$
+\ell = \ell_ {\mathrm {C E}} + \lambda \mathbb {D} _ {\mathrm {A B}} ^ {(\alpha , \beta)} (p \| q _ {\theta}). \tag {7}
+$$
+
+Beyond this issue, $\alpha$ - $\beta$ -divergence is also a generic notion of a family of distribution divergences, which includes FKLD, RKLD, and other typical divergences as special cases. For example, when $(\alpha = 1, \beta = 0)$ , one obtains FKLD; when $(\alpha = 0, \beta = 1)$ , one obtains RKLD. Please see Tab. 1 for other special cases. In this way, ABKD naturally provides a generic framework for divergence-based distillation algorithms.
+
+Table 1. Some divergence functions and their corresponding choices of $\alpha$ and $\beta$ . The $\alpha$ - $\beta$ -divergence can be extended by continuity (by applying l'Hôpital's rule) to cover all values of $\alpha , \beta \in \mathbb{R}$ , as shown in App. B.
+
+| Distribution Measure | Reference | Range |
+| --- | --- | --- |
+| Kullback–Leibler (KL) divergence | Kullback & Leibler (1951) | $\alpha = 1, \beta = 0$ |
+| Reverse KL divergence | Kullback & Leibler (1951) | $\alpha = 0, \beta = 1$ |
+| $\alpha$ -divergence | Chernoff (1952) | $\alpha + \beta = 1$ |
+| $\beta$ -divergence | Basu et al. (1998) | $\alpha = 1$ |
+| Hellinger distance | Hellinger (1909) | $\alpha = \beta = 0.5$ |
+| Squared Euclidean distance | Heath (1956) | $\alpha = \beta = 1$ |
+
+# 4.2. Trading off Hardness-Concentration and Confidence-Concentration via $\alpha$ - $\beta$ -divergence
+
+ABKD offers a unified space to trade off the hardness-concentration and confidence-concentration effects.
+
+To explain this, we go back to the log mass ratio. The following proposition explains how the hyperparameters $\alpha$ and $\beta$ influence the reduction of $|\mathrm{LogR}_t^{(\alpha ,\beta)}(y)|$ .
+
+Proposition 4.2. The updates induced by $\alpha$ - $\beta$ -divergence for $q_{t}$ within one gradient descent step are given by:
+
+$$
+\left| \operatorname {LogR} _ {t} ^ {(\alpha , \beta)} (y) \right| \leq \eta \underbrace {q _ {t} (y) ^ {\beta}} _ {(a)} \underbrace {\left| \frac {p (y) ^ {\alpha} - q _ {t} (y) ^ {\alpha}}{\alpha} \right|} _ {(b)} + \eta q _ {t} (y) \sum_ {k} \underbrace {q _ {t} (k) ^ {\beta}} _ {(a _ {1})} \underbrace {\left| \frac {p (k) ^ {\alpha} - q _ {t} (k) ^ {\alpha}}{\alpha} \right|} _ {(b _ {1})} + \left| \mathsf {N} _ {t} ^ {(\alpha , \beta)} (y) \right|,
+$$
+
+where $\mathsf{N}_t^{(\alpha,\beta)}(y)$ denotes a constant normalization factor that is independent of $y$ and vanishes when $p = q_t$ .
+
+The proof is in App. G.5. In $(a)$ and $(a_{1})$ , $\alpha - \beta$ -divergence employs a power form $q_{t}(k)^{\beta}$ for confidence-concentration effect. It is easy to see when $\beta \to 1$ , it degenerates to the effect of RKLD, and when $\beta \to 0$ to the effect of FKLD. A larger $\beta$ provides a stronger effect of confidence-concentration, focusing the matching performance on its most confident classes (Fig. 1c). Meanwhile, terms $(b)$ and $(b_{1})$ use $|\frac{p(y)^{\alpha} - q_{t}(y)^{\alpha}}{\alpha}|$ for hardness-concentration effect. It is easy to see when $\alpha \to 1$ , it degenerates to the effect of FKLD, and when $\alpha \to 0$ to the effect of RKLD. A smaller $\alpha$ amplifies the hardness-concentration
+
+Table 2. ROUGE-L scores (↑) on five task-agnostic instruction-following datasets. Note that this is an unfair comparison because we only train on the fixed dataset while other KD methods employ augmentation. The fairer results using our method with different augmentation strategies can be found in Fig. 3(b) and Tab. 8. All results are based on our re-implementation. We report the average and standard deviation of ROUGE-L scores across five random seeds. Better results are shown in bold, and darker colors indicate superior performance.
+
+| Method | Dolly Eval | Self-Instruct | Vicuna Eval | Super-Natural | Unnatural |
+| --- | --- | --- | --- | --- | --- |
+| GPT-2 XL (Teacher) | 26.94 (0.23) | 13.31 (0.63) | 16.23 (0.62) | 24.28 (0.43) | 29.05 (0.14) |
+| *GPT-2 XL (1.5B) → GPT-2 (0.1B)* | | | | | |
+| SFT | 23.14 (0.23) | 10.22 (0.44) | 15.15 (0.31) | 17.41 (0.18) | 19.76 (0.09) |
+| KD (Hinton, 2015) | 23.80 (0.37) | 10.01 (0.75) | 15.25 (0.65) | 17.69 (0.26) | 18.99 (0.05) |
+| SeqKD (Kim & Rush, 2016) | 24.28 (0.22) | 11.24 (0.30) | 14.94 (0.58) | 20.66 (0.28) | 23.59 (0.13) |
+| MiniLLM (Gu et al., 2024a) | 24.62 (0.33) | 12.49 (0.56) | **17.30 (0.41)** | 23.76 (0.38) | 24.30 (0.14) |
+| GKD (Agarwal et al., 2024) | 24.49 (0.16) | 11.41 (0.14) | 16.01 (0.37) | 18.25 (0.24) | 21.41 (0.11) |
+| DISTILLM (Ko et al., 2024) | 25.32 (0.14) | 11.65 (0.28) | 16.76 (0.66) | 23.52 (0.47) | 25.79 (0.08) |
+| Ours (ABKD) | **25.65 (0.24)** | **13.47 (0.42)** | 16.06 (0.25) | **26.47 (0.31)** | **29.32 (0.08)** |
+| *GPT-2 XL (1.5B) → GPT-2 Medium (0.3B)* | | | | | |
+| SFT | 25.30 (0.31) | 12.56 (0.62) | 16.36 (0.22) | 23.32 (0.13) | 23.42 (0.07) |
+| KD (Hinton, 2015) | 24.71 (0.17) | 10.33 (0.54) | 16.23 (0.50) | 23.74 (0.32) | 23.97 (0.12) |
+| SeqKD (Kim & Rush, 2016) | 25.93 (0.44) | 12.98 (0.24) | 16.68 (0.30) | 21.95 (0.19) | 25.23 (0.08) |
+| MiniLLM (Gu et al., 2024a) | 25.34 (0.25) | 13.36 (0.62) | **17.25 (0.46)** | 25.68 (0.41) | 26.63 (0.12) |
+| GKD (Agarwal et al., 2024) | 24.75 (0.27) | 12.76 (0.85) | 16.54 (0.39) | 24.94 (0.14) | 26.42 (0.15) |
+| DISTILLM (Ko et al., 2024) | **26.21 (0.29)** | 13.53 (0.13) | 16.96 (0.66) | 25.78 (0.19) | 28.51 (0.26) |
+| Ours (ABKD) | 26.08 (0.36) | **13.86 (0.40)** | 16.63 (0.26) | **27.25 (0.38)** | **29.69 (0.21)** |
+| *GPT-2 XL (1.5B) → GPT-2 Large (0.8B)* | | | | | |
+| SFT | 25.42 (0.32) | 12.91 (0.46) | 16.31 (0.51) | 23.76 (0.28) | 25.72 (0.07) |
+| KD (Hinton, 2015) | 26.02 (0.43) | 12.34 (0.52) | 16.26 (0.44) | 25.11 (0.37) | 26.44 (0.12) |
+| SeqKD (Kim & Rush, 2016) | 26.29 (0.47) | 13.53 (0.34) | 16.39 (0.36) | 25.81 (0.40) | 27.51 (0.10) |
+| MiniLLM (Gu et al., 2024a) | 26.12 (0.25) | 13.79 (0.31) | **17.35 (0.51)** | 26.12 (0.37) | 28.53 (0.17) |
+| GKD (Agarwal et al., 2024) | 26.06 (0.34) | 13.21 (0.45) | 16.64 (0.45) | 26.13 (0.41) | 27.13 (0.21) |
+| DISTILLM (Ko et al., 2024) | **26.56 (0.36)** | 13.97 (0.36) | 16.61 (0.45) | 26.73 (0.36) | 29.24 (0.23) |
+| Ours (ABKD) | 26.51 (0.22) | **14.38 (0.43)** | 16.63 (0.42) | **28.05 (0.21)** | **29.92 (0.14)** |
+
+effect and tends to be more aggressive in achieving better matching by penalizing errors on hard classes (Fig. 1b).
+
+In this sense, by tuning $\alpha$ and $\beta$ , we can flexibly balance the influence of the two effects and avoid extreme cases (Fig. 1g). For a finer-grained theoretical analysis and hyperparameter tuning guidelines, please see App. D, Thm. D.1.
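The two underbraced factors in Prop. 4.2 can be probed directly. In the sketch below (toy numbers and a hypothetical `weight` helper of our own, mirroring the per-class term $q(k)^{\beta}\,|p(k)^{\alpha} - q(k)^{\alpha}|/\alpha$), increasing $\beta$ shifts relative emphasis toward a class the student is confident about, while decreasing $\alpha$ shifts it toward a hard, large-error class:

```python
# Sketch: per-class weight suggested by Prop. 4.2,
# w(k) = q(k)^beta * |p(k)^alpha - q(k)^alpha| / alpha.
def weight(pk, qk, a, b):
    return (qk ** b) * abs(pk ** a - qk ** a) / a

# A confident class (high q, small error) vs. a hard class (low q, large error).
conf_p, conf_q = 0.55, 0.60
hard_p, hard_q = 0.30, 0.05

def ratio(a, b):
    # Relative emphasis on the confident class vs. the hard class.
    return weight(conf_p, conf_q, a, b) / weight(hard_p, hard_q, a, b)

# Larger beta shifts emphasis toward the student's confident classes ...
assert ratio(1.0, 1.0) > ratio(1.0, 0.0)
# ... while smaller alpha shifts emphasis toward hard (large-error) classes.
assert ratio(0.5, 0.0) < ratio(1.0, 0.0)
```

This matches the qualitative picture in Figs. 1(b) and (c): $\beta$ controls confidence-concentration and $\alpha$ controls hardness-concentration, independently of each other.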
+
+Comparing with the Weighted Sum. As discussed earlier in Sec. 3.3, WSD often focuses excessively on the extreme values of $p / q$ , leading to unstable optimization. Fortunately, one can show that the $\alpha - \beta$ -divergence can finely adjust the focus on different likelihood ratios $p / q$ , thus enjoying a more stable gradient. For further analysis, see App. E.
+
+Comparing with the $\alpha$ -divergence. One might also recall the $\alpha$ -divergence to achieve the trade-off, which is defined as $\mathbb{D}_{\alpha}(p\| q)\triangleq \frac{1}{\alpha(\alpha - 1)}\left[\sum_{k}p(k)^{\alpha}q(k)^{1 - \alpha} - 1\right]$ . It includes $\mathbb{D}_{\mathrm{KL}}(p\| q_{\theta})$ as $\alpha \to 1$ , and $\mathbb{D}_{\mathrm{KL}}(q_{\theta}\| p)$ as $\alpha \to 0$ . Note that when $\beta = 1 - \alpha$ , it becomes a special case of our framework. According to Prop. 4.2, to decrease $\alpha$ , one has to increase $\beta$ to ensure that they add up to 1. Such an unnecessary restriction hinders its ability to achieve better performance, as shown in Fig. 1(a) and (f). For further analysis, please see App. F.
+
+# 5. Experiments
+
+In the following, we investigate to what extent our theoretical results translate into practice on natural language and vision tasks. Due to space limitations, please see App. I for more details on datasets, competitors, and implementation.
+
+# 5.1. Natural Language Processing Tasks
+
+Datasets. We evaluate our methods on five task-agnostic instruction-following benchmarks. The evaluation metric is ROUGE-L (Lin, 2004). Details about the datasets and evaluation metric can be found in App. I.1.1.
+
+Competitors. We consider the following state-of-the-art (SOTA) baselines: 1) supervised fine-tuning (SFT) of the student model alone on fixed datasets; 2) KD with FKLD on fixed datasets; 3) SeqKD, which applies SFT to teacher-generated outputs; 4) MiniLLM with RKLD using a policy gradient approach on student-generated outputs (SGOs); 5) GKD with JSD on a mixture of SGOs and fixed datasets; 6) DISTILLM with S(R)KL on a mixture of SGOs and fixed datasets. Please refer to App. I.1.2 for details about competitors and SGOs.
+
+Table 3. Evaluation of the effect of different loss functions. WSD: weighted sum of FKLD and RKLD. HD: Hellinger distance. SED: squared Euclidean distance.
+
+| Loss Function | Dolly Eval | Self-Instruct | Vicuna Eval | Super-Natural | Unnatural |
+| --- | --- | --- | --- | --- | --- |
+| FKLD | 23.80 (0.37) | 10.01 (0.75) | 15.25 (0.65) | 17.69 (0.26) | 18.99 (0.05) |
+| RKLD | 24.77 (0.37) | 12.02 (0.48) | 15.06 (0.28) | 23.27 (0.29) | 26.01 (0.11) |
+| WSD | 23.33 (0.52) | 10.52 (0.47) | 14.83 (0.61) | 19.67 (0.13) | 21.21 (0.21) |
+| HD | 25.15 (0.36) | 12.39 (0.77) | 15.43 (0.20) | 24.14 (0.23) | 26.83 (0.15) |
+| SED | 21.04 (0.51) | 10.00 (0.56) | 13.73 (0.17) | 19.34 (0.19) | 22.62 (0.19) |
+| $\alpha$ -divergence | 25.15 (0.41) | 12.92 (0.22) | 15.60 (0.27) | 24.83 (0.21) | 27.81 (0.10) |
+| $\beta$ -divergence | 24.12 (0.38) | 11.18 (0.27) | 14.95 (0.33) | 20.98 (0.23) | 23.15 (0.14) |
+| $\alpha$ - $\beta$ -divergence | **25.65 (0.24)** | **13.47 (0.42)** | **16.06 (0.25)** | **26.47 (0.31)** | **29.32 (0.08)** |
+
+Figure 2. Performance across different loss functions on the validation set.
+
+Results. From Tab. 2, we have the following observations: 1) Distillation methods often outperform SFT, showcasing their potential. However, they can sometimes yield worse results (e.g., KD on Unnatural when distilling GPT-2 XL into GPT-2), highlighting the importance of selecting a proper distillation objective. 2) By simply modifying the distillation objective, our framework outperforms vanilla KD and SFT across various datasets when distilling GPT-2 XL (1.5B) into the smaller GPT-2 family (0.1B~0.8B). 3) Prior arts (Ko et al., 2024; Agarwal et al., 2024) show that training with SGOs can lead to significant improvements. However, even when compared to SGOs-based methods (e.g., GKD, DISTILLM) under this inherently unfair setting, our approach consistently achieves superior or comparable results, especially on the Super-Natural and Unnatural datasets.
+
+Efficiency Comparison. Fig. 3(a) shows that our framework matches the training speed of vanilla KD, as it only modifies the distillation objective without introducing additional cost. This addresses concerns regarding the scalability of our method. In contrast, other distillation methods require 1.6 to 7 times longer training time due to the continuous need to sample the student's outputs during training.
+
+Effects of SGOs. We examine the robustness of our framework by evaluating its performance with various SGOs approaches. As shown in Fig. 3(b), our framework consistently delivers high performance across different settings, highlighting its adaptability and effectiveness. Further experimental results and analyses are provided in App. J.1.1.
+
+Figure 3. Comparison of training speeds (a) and the effects of using SGOs (b). Please see Sec. I.1.2 for details of different SGOs strategies.
+
+Figure 4. Comparison with SOTA methods on the base-to-new setting. HM denotes the harmonic mean of base and new accuracy. Results are averaged across 11 datasets, with per-dataset details in Tab. 16. Results of baseline CLIP are evaluated on the pre-trained model. Teacher: ViT-L/14 CLIP; Student: ViT-B/16 CLIP.
+
+Effects of Loss Functions. Tab. 3 compares the performance of various loss functions. The results show that $\alpha$ - $\beta$ -divergence consistently outperforms the others, while using only $\alpha$ - or $\beta$ -divergence degrades performance due to limited expressivity. In particular, $\alpha$ - $\beta$ -divergence achieves improvements of 0.81 to 3.31 over FKLD and RKLD across five datasets, whereas WSD, which combines weighted FKLD and RKLD, fails to deliver comparable results. Furthermore, Fig. 2 demonstrates the superior performance of $\alpha$ - $\beta$ -divergence during the entire training phase.
+
+In summary, these empirical results align with the theoretical insights in Sec. 3 and show that even modest adjustments to the loss function can yield significant improvements.
+
+# 5.2. Vision Tasks
+
+Datasets. We conduct experiments on 12 popular image recognition datasets. Dataset details are referred to App. I.2.1. The evaluation metric used is accuracy. Apart from the standard training-evaluation paradigm, we further consider a novel base-to-new setting (Zhou et al., 2022; Hua et al., 2025) to more thoroughly analyze the student model's generalization across classes. In this setup, training is performed on base classes, and accuracy is evaluated on both base and new classes. Please see App. I.2.3 for more details.
+
+Figure 5. Accuracy on CIFAR-100 for student models trained with different distillation methods across eight teacher-student pairs: (a) WRN-40-2 → WRN-16-2; (b) WRN-40-2 → WRN-40-1; (c) resnet56 → resnet20; (d) resnet110 → resnet20; (e) resnet110 → resnet32; (f) resnet32x4 → resnet8x4; (g) vgg13 → vgg8; (h) resnet110 → resnet44. ABDKD, ABTTM, ABLSD, and ABKD are our implementations obtained by rectifying the backbone's loss function. For details on backbones, please refer to Sec. I.2.2.
+
+Figure 6. Sensitivity analysis of hyperparameters $\alpha$ and $\beta$ : (a) $\alpha$ on CIFAR-100; (b) $\alpha$ on Dolly; (c) $\beta$ on CIFAR-100; (d) $\beta$ on Dolly. (a)-(b) For the low-dimensional output distribution in CIFAR-100, a smaller $\alpha$ leads to excessive penalization for errors with limited gains. However, for the higher-dimensional distribution in Dolly (e.g., 50,257 classes for GPT-2), a well-tuned smaller $\alpha$ is critical. (c)-(d) A larger $\beta$ sharpens output distributions by emphasizing classes with high student confidence.
+
+Competitors. We consider the following SOTA distillation methods: 1) KD, 2) DKD, 3) LSD, and 4) TTM. For the base-to-new setting, we also compare with SOTA SFT methods: 5) CoCoOp, 6) MaPLe, and 7) PromptSRC. Please refer to App. I.2.2 for method details.
+
+Results. Fig. 4 and Fig. 5 show results from 9 teacher-student architectures on 12 datasets. Based on these, we conclude: 1) Without modifying the distillation objective, methods that more effectively utilize teacher distribution knowledge (e.g., DKD, TTM, and LSD) can outperform vanilla KD; 2) However, their scores fall short in some cases, such as LSD in the base-to-new setting; 3) Orthogonal to them, our framework selects more suitable distillation objectives for specific teacher-student pairs, showing competitive or superior results, particularly in the base-to-new setting.
+
+Apply to Other Distillation Techniques. Fig. 5 also shows that our framework can act as a simple plug-and-play tool to rectify the loss functions used by existing methods, yielding further improvements (e.g., ABDKD vs DKD).
+
+# 5.3. Sensitivity Analysis
+
+We next analyze the effects of hardness-concentration and confidence-concentration, which helps validate the theoretical insights shown in Prop. 4.2 and Thm. D.1.
+
+Effect of $\alpha$ on hardness-concentration. Figs. 6(a) and (b) show performance during training for different $\alpha$ . In CIFAR-100, with its relatively low-dimensional output distribution, a smaller $\alpha$ (stronger hardness-concentration) aggressively penalizes errors but offers limited gains. However, in Dolly, with a higher-dimensional output (e.g., GPT-2's vocabulary size of 50,257), a well-tuned smaller $\alpha$ is crucial to avoid local optima, especially in early training stages.
+
+Effect of $\beta$ on confidence-concentration. Figs. 6(c) and (d) show how $\beta$ affects the Shannon entropy of the output distribution and the Self-BLEU score (Zhu et al., 2018) of output sequences (100 indicates deterministic outputs and 0 denotes maximum diversity). A smaller $\beta$ (weaker confidence-concentration) places more emphasis on classes with low student confidence, encouraging the student to focus more on learning the soft label information from the teacher distribution. This leads to a smoother output distribution (higher entropy) and more diverse generated sequences (lower Self-BLEU). Thus, selecting an appropriate $\beta$ ensures a balance between focusing on the target class and learning more soft label information.
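The entropy side of this effect is easy to illustrate with a minimal sketch (toy distributions of our choosing, unrelated to the paper's measurements): a sharper, more confident output distribution has strictly lower Shannon entropy than a smoother one.

```python
# Sketch: sharper (more confident) output distributions have lower Shannon entropy.
import math

def entropy(p):
    # H(p) = -sum_k p(k) * log p(k), with the convention 0 * log 0 = 0.
    return -sum(pk * math.log(pk) for pk in p if pk > 0)

smooth = [0.4, 0.3, 0.2, 0.1]    # mass spread over several classes
sharp  = [0.85, 0.05, 0.05, 0.05]  # mass concentrated on one class

assert entropy(sharp) < entropy(smooth)
```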
+
+# 6. Conclusion
+
+In this paper, we argue that the key to KD lies in trading off two mode-concentration effects: hardness-concentration and confidence-concentration. The widely used FKLD and RKLD fail to achieve this balance, instead representing two extreme cases that lead to improper probability allocation. To address this issue, we introduce ABKD, a generic distillation framework based on $\alpha-\beta$ -divergence. ABKD generalizes FKLD and RKLD to a broader family of divergences, offering greater flexibility. Our theoretical results show that ABKD can flexibly interpolate between the above two extremes, enabling an effective trade-off. Extensive experiments further demonstrate its effectiveness.
+
+# Acknowledgements
+
+This work was supported in part by the Fundamental Research Funds for the Central Universities, in part by the National Key R&D Program of China under Grant 2018AAA0102000, 2024QY210004 and 2022YFC3302300, in part by National Natural Science Foundation of China: 62236008, 62441232, U21B2038, U23B2051, 62122075, 62206264 and 92370102, in part by Youth Innovation Promotion Association CAS, in part by the Strategic Priority Research Program of the Chinese Academy of Sciences, Grant No.XDB0680201, in part by the China National Postdoctoral Program for Innovative Talents under Grant BX20240384. The authors thank anonymous reviewers for their insightful comments and suggestions.
+
+# Impact Statement
+
+The work presented in this paper aims to advance the field of knowledge distillation (KD), a promising direction to enable knowledge transfer between different models. This potential has been demonstrated in some recent frontier works, such as DeepSeek-R1 (Guo et al., 2025) and Qwen-3 (Yang et al., 2025). Although these methods primarily adopt SeqKD-based distillation techniques (Kim & Rush, 2016) and therefore do not involve distribution matching through KL divergence, the results in Tab. 2 demonstrate that logit-based methods possess even greater potential. We believe that the applications of KD techniques will continue to expand.
+
+In this work, we provide a theoretical analysis of the fundamentally different behaviors—mode-covering and mode-seeking—exhibited by forward and reverse KL divergences in KD, grounded in two novel mode-concentration effects. These new theoretical insights further shed light on the development of more principled and effective distillation objectives. Interestingly, our theoretical framework also aligns with findings from related studies, such as the phenomenon of likelihood displacement (Razin et al., 2024; Ren & Sutherland, 2024) observed in Direct Preference Optimization (DPO). This phenomenon stems from the equivalence of DPO w.r.t. the reverse optimization of KL divergence (Rafailov et al., 2023; Tajwar et al., 2024).
+
+# References
+
+Agarwal, R., Vieillard, N., Zhou, Y., Stanczyk, P., Garea, S. R., Geist, M., and Bachem, O. On-policy distillation of language models: Learning from self-generated mistakes. In The Twelfth International Conference on Learning Representations, 2024.
+Amara, I., Sepahvand, N., Meyer, B. H., Gross, W. J., and Clark, J. J. Bd-kd: balancing the divergences for online knowledge distillation. arXiv preprint arXiv:2212.12965, 2022.
+Arjovsky, M., Chintala, S., and Bottou, L. Wasserstein generative adversarial networks. In International conference on machine learning, pp. 214-223. PMLR, 2017.
+Basu, A., Harris, I. R., Hjort, N. L., and Jones, M. Robust and efficient estimation by minimising a density power divergence. Biometrika, 85(3):549-559, 1998.
+Binici, K., Pham, N. T., Mitra, T., and Leman, K. Preventing catastrophic forgetting and distribution mismatch in knowledge distillation via synthetic data. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pp. 663-671, 2022.
+Bossard, L., Guillaumin, M., and Van Gool, L. Food-101 - mining discriminative components with random forests. In Computer vision-ECCV 2014: 13th European conference, Zurich, Switzerland, September 6-12, 2014, proceedings, part VI 13, pp. 446-461. Springer, 2014.
+
+Chen, D., Mei, J.-P., Zhang, H., Wang, C., Feng, Y., and Chen, C. Knowledge distillation with the reused teacher classifier. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 11933-11942, 2022.
+Chernoff, H. A measure of asymptotic efficiency for tests of a hypothesis based on the sum of observations. The Annals of Mathematical Statistics, pp. 493-507, 1952.
+Cho, J. H. and Hariharan, B. On the efficacy of knowledge distillation. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 4794-4802, 2019.
+Cichocki, A., Cruces, S., and Amari, S.-i. Generalized alpha-beta divergences and their application to robust nonnegative matrix factorization. Entropy, 13(1):134-170, 2011. ISSN 1099-4300. doi: 10.3390/e13010134. URL https://www.mdpi.com/1099-4300/13/1/134.
+Conover, M., Hayes, M., Mathur, A., Xie, J., Wan, J., Shah, S., Ghodsi, A., Wendell, P., Zaharia, M., and Xin, R. Free dolly: Introducing the world's first truly open instruction-tuned llm. Company Blog of Databricks, 2023.
+Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248-255. IEEE, 2009.
+Dosovitskiy, A. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
+Fei-Fei, L., Fergus, R., and Perona, P. Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories. In 2004 conference on computer vision and pattern recognition workshop, pp. 178-178. IEEE, 2004.
+Gokaslan, A., Cohen, V., Pavlick, E., and Tellex, S. OpenWebText corpus, 2019.
+Gu, Y., Dong, L., Wei, F., and Huang, M. Minillm: Knowledge distillation of large language models. In The Twelfth International Conference on Learning Representations, 2024a.
+Gu, Y., Zhou, H., Meng, F., Zhou, J., and Huang, M. Mini- plm: Knowledge distillation for pre-training language models. arXiv preprint arXiv:2410.17215, 2024b.
+Guo, D., Yang, D., Zhang, H., Song, J., Zhang, R., Xu, R., Zhu, Q., Ma, S., Wang, P., Bi, X., et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.
+
+He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016.
+Heath, T. L. The thirteen books of Euclid's Elements. Dover Publications, Inc, 1956.
+Helber, P., Bischke, B., Dengel, A., and Borth, D. Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 12(7):2217-2226, 2019.
+Hellinger, E. Neue begründung der theorie quadratischer formen von unendlichvielen veränderlichen. Journal für die reine und angewandte Mathematik, 1909(136):210-271, 1909.
+Hinton, G. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
+Honovich, O., Scialom, T., Levy, O., and Schick, T. Unnatural instructions: Tuning language models with (almost) no human labor. In Rogers, A., Boyd-Graber, J., and Okazaki, N. (eds.), Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 14409-14428, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-long.806. URL https://aclanthology.org/2023.acl-long.806.
+Hua, C., Xu, Q., Yang, Z., Wang, Z., Bao, S., and Huang, Q. Openworldauc: Towards unified evaluation and optimization for open-world prompt tuning. arXiv preprint arXiv:2505.05180, 2025.
+Huang, T., You, S., Wang, F., Qian, C., and Xu, C. Knowledge distillation from a stronger teacher. Advances in Neural Information Processing Systems, 35:33716-33727, 2022.
+Huang, Z. and Wang, N. Like what you like: Knowledge distill via neuron selectivity transfer. arXiv preprint arXiv:1707.01219, 2017.
+Jiao, X., Yin, Y., Shang, L., Jiang, X., Chen, X., Li, L., Wang, F., and Liu, Q. Tinybert: Distilling bert for natural language understanding. arXiv preprint arXiv:1909.10351, 2019.
+Jin, Y., Wang, J., and Lin, D. Multi-level logit distillation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 24276-24285, 2023.
+Khattak, M. U., Rasheed, H., Maaz, M., Khan, S., and Khan, F. S. Maple: Multi-modal prompt learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 19113-19122, 2023a.
+
+Khattak, M. U., Wasim, S. T., Naseer, M., Khan, S., Yang, M.-H., and Khan, F. S. Self-regulating prompts: Foundational model adaptation without forgetting. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 15190-15200, 2023b.
+Kim, G., Jang, D., and Yang, E. PromptKD: Distilling student-friendly knowledge for generative language models via prompt tuning. In Al-Onaizan, Y., Bansal, M., and Chen, Y.-N. (eds.), Findings of the Association for Computational Linguistics: EMNLP 2024, pp. 6266-6282, Miami, Florida, USA, November 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.findings-emnlp.364. URL https://aclanthology.org/2024.findings-emnlp.364/.
+Kim, Y. and Rush, A. M. Sequence-level knowledge distillation. CoRR, abs/1606.07947, 2016. URL http://arxiv.org/abs/1606.07947.
+Ko, J., Kim, S., Chen, T., and Yun, S.-Y. Distillm: Towards streamlined distillation for large language models. arXiv preprint arXiv:2402.03898, 2024.
+Krause, J., Stark, M., Deng, J., and Fei-Fei, L. 3d object representations for fine-grained categorization. In Proceedings of the IEEE international conference on computer vision workshops, pp. 554-561, 2013.
+Kullback, S. and Leibler, R. A. On information and sufficiency. The annals of mathematical statistics, 22(1): 79-86, 1951.
+Lee, H., Park, Y., Seo, H., and Kang, M. Self-knowledge distillation via dropout. Computer Vision and Image Understanding, 233:103720, 2023.
+Li, L., Bao, Y., Dong, P., Yang, C., Li, A., Luo, W., Liu, Q., Xue, W., and Guo, Y. Detkds: Knowledge distillation search for object detectors. In Forty-first International Conference on Machine Learning, 2024.
+Li, X.-C., Fan, W.-S., Song, S., Li, Y., Yunfeng, S., Zhan, D.-C., et al. Asymmetric temperature scaling makes larger networks teach well again. Advances in neural information processing systems, 35:3830-3842, 2022.
+Liang, C., Jiang, H., Li, Z., Tang, X., Yin, B., and Zhao, T. Homodistil: Homotopic task-agnostic distillation of pretrained transformers. arXiv preprint arXiv:2302.09632, 2023a.
+Liang, C., Zuo, S., Zhang, Q., He, P., Chen, W., and Zhao, T. Less is more: Task-aware layer-wise distillation for language model compression. In International Conference on Machine Learning, pp. 20852-20867. PMLR, 2023b.
+
+Lin, C.-Y. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pp. 74-81, 2004.
+Lv, J., Yang, H., and Li, P. Wasserstein distance rivals kullback-leibler divergence for knowledge distillation. arXiv preprint arXiv:2412.08139, 2024.
+Maji, S., Rahtu, E., Kannala, J., Blaschko, M., and Vedaldi, A. Fine-grained visual classification of aircraft. arXiv preprint arXiv:1306.5151, 2013.
+Mirzadeh, S. I., Farajtabar, M., Li, A., Levine, N., Matsukawa, A., and Ghasemzadeh, H. Improved knowledge distillation via teacher assistant. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pp. 5191-5198, 2020.
+Nilsback, M.-E. and Zisserman, A. Automated flower classification over a large number of classes. In 2008 Sixth Indian conference on computer vision, graphics & image processing, pp. 722-729. IEEE, 2008.
+Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., et al. Training language models to follow instructions with human feedback. Advances in neural information processing systems, 35:27730-27744, 2022.
+Parkhi, O. M., Vedaldi, A., Zisserman, A., and Jawahar, C. Cats and dogs. In 2012 IEEE conference on computer vision and pattern recognition, pp. 3498-3505. IEEE, 2012.
+Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pp. 8748-8763. PMLR, 2021.
+Rafailov, R., Sharma, A., Mitchell, E., Manning, C. D., Ermon, S., and Finn, C. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36: 53728-53741, 2023.
+Razin, N., Malladi, S., Bhaskar, A., Chen, D., Arora, S., and Hanin, B. Unintentional unalignment: Likelihood displacement in direct preference optimization. arXiv preprint arXiv:2410.08847, 2024.
+Ren, Y. and Sutherland, D. J. Learning dynamics of llm finetuning. arXiv preprint arXiv:2407.10490, 2024.
+Romero, A., Ballas, N., Kahou, S. E., Chassang, A., Gatta, C., and Bengio, Y. Fitnets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550, 2014.
+
+Simonyan, K. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
+Soomro, K. Ucf101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012.
+Sun, S., Cheng, Y., Gan, Z., and Liu, J. Patient knowledge distillation for bert model compression. arXiv preprint arXiv:1908.09355, 2019.
+Sun, S., Ren, W., Li, J., Wang, R., and Cao, X. Logit standardization in knowledge distillation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 15731-15740, 2024.
+Sun, Z., Yu, H., Song, X., Liu, R., Yang, Y., and Zhou, D. Mobilebert: a compact task-agnostic bert for resource-limited devices. arXiv preprint arXiv:2004.02984, 2020.
+Tajwar, F., Singh, A., Sharma, A., Rafailov, R., Schneider, J., Xie, T., Ermon, S., Finn, C., and Kumar, A. Preference fine-tuning of llms should leverage suboptimal, on-policy data. arXiv preprint arXiv:2404.14367, 2024.
+Tian, Y., Krishnan, D., and Isola, P. Contrastive representation distillation. arXiv preprint arXiv:1910.10699, 2019.
+Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023a.
+Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., Rodriguez, A., Joulin, A., Grave, E., and Lample, G. Llama: Open and efficient foundation language models, 2023b. URL https://arxiv.org/abs/2302.13971.
+Vaswani, A. Attention is all you need. Advances in Neural Information Processing Systems, 2017.
+Wang, D., Gong, C., Li, M., Liu, Q., and Chandra, V. Alphanet: Improved training of supernets with alpha-divergence. In International Conference on Machine Learning, pp. 10760-10771. PMLR, 2021.
+Wang, W., Bao, H., Huang, S., Dong, L., and Wei, F. Minilmv2: Multi-head self-attention relation distillation for compressing pretrained transformers. arXiv preprint arXiv:2012.15828, 2020a.
+Wang, W., Wei, F., Dong, L., Bao, H., Yang, N., and Zhou, M. Minilm: Deep self-attention distillation for task-agnostic compression of pre-trained transformers. Advances in Neural Information Processing Systems, 33: 5776-5788, 2020b.
+
+Wang, Y., Mishra, S., Alipoormolabashi, P., Kordi, Y., Mirzaei, A., Naik, A., Ashok, A., Dhanasekaran, A. S., Arunkumar, A., Stap, D., Pathak, E., Karamanolakis, G., Lai, H., Purohit, I., Mondal, I., Anderson, J., Kuznia, K., Doshi, K., Pal, K. K., Patel, M., Moradshahi, M., Parmar, M., Purohit, M., Varshney, N., Kaza, P. R., Verma, P., Puri, R. S., Karia, R., Doshi, S., Sampat, S. K., Mishra, S., Reddy A, S., Patro, S., Dixit, T., and Shen, X. Super-Natural Instructions: Generalization via declarative instructions on $1600+$ NLP tasks. In Goldberg, Y., Kozareva, Z., and Zhang, Y. (eds.), Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pp. 5085-5109, Abu Dhabi, United Arab Emirates, December 2022a. Association for Computational Linguistics. doi: 10.18653/v1/2022.emnlp-main.340. URL https://aclanthology.org/2022.emnlp-main.340.
+Wang, Y., Kordi, Y., Mishra, S., Liu, A., Smith, N. A., Khashabi, D., and Hajishirzi, H. Self-instruct: Aligning language models with self-generated instructions. In Rogers, A., Boyd-Graber, J., and Okazaki, N. (eds.), Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 13484-13508, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-long.754. URL https://aclanthology.org/2023.acl-long.754.
+Wang, Z., Xu, Q., Yang, Z., He, Y., Cao, X., and Huang, Q. Openauc: Towards auc-oriented open-set recognition. Advances in Neural Information Processing Systems, 35: 25033-25045, 2022b.
+Wen, Y., Li, Z., Du, W., and Mou, L. f-divergence minimization for sequence-level knowledge distillation. arXiv preprint arXiv:2307.15190, 2023.
+Wu, T., Tao, C., Wang, J., Yang, R., Zhao, Z., and Wong, N. Rethinking kullback-leibler divergence in knowledge distillation for large language models. arXiv preprint arXiv:2404.02657, 2024.
+Xiao, J., Hays, J., Ehinger, K. A., Oliva, A., and Torralba, A. Sun database: Large-scale scene recognition from abbey to zoo. In 2010 IEEE computer society conference on computer vision and pattern recognition, pp. 3485-3492. IEEE, 2010.
+Yang, A., Li, A., Yang, B., Zhang, B., Hui, B., Zheng, B., Yu, B., Gao, C., Huang, C., Lv, C., et al. Qwen3 technical report. arXiv preprint arXiv:2505.09388, 2025.
+Yang, Z., Xu, Q., Bao, S., Cao, X., and Huang, Q. Learning with multiclass auc: Theory and algorithms. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44 (11):7747-7763, 2021.
+
+Yang, Z., Xu, Q., Bao, S., Wen, P., He, Y., Cao, X., and Huang, Q. Auc-oriented domain adaptation: From theory to algorithm. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(12):14161-14174, 2023a.
+Yang, Z., Xu, Q., Hou, W., Bao, S., He, Y., Cao, X., and Huang, Q. Revisiting auc-oriented adversarial training with loss-agnostic perturbations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(12):15494-15511, 2023b.
+Yang, Z., Xu, Q., Wang, Z., Li, S., Han, B., Bao, S., Cao, X., and Huang, Q. Harnessing hierarchical label distribution variations in test agnostic long-tail recognition. arXiv preprint arXiv:2405.07780, 2024.
+Zagoruyko, S. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.
+Zagoruyko, S. and Komodakis, N. Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer. arXiv preprint arXiv:1612.03928, 2016.
+Zhao, B., Cui, Q., Song, R., Qiu, Y., and Liang, J. Decoupled knowledge distillation. In Proceedings of the IEEE/CVF Conference on computer vision and pattern recognition, pp. 11953-11962, 2022.
+Zheng, K. and Yang, E.-H. Knowledge distillation based on transformed teacher matching. arXiv preprint arXiv:2402.11148, 2024.
+Zhou, H., Song, L., Chen, J., Zhou, Y., Wang, G., Yuan, J., and Zhang, Q. Rethinking soft labels for knowledge distillation: A bias-variance tradeoff perspective. arXiv preprint arXiv:2102.00650, 2021.
+Zhou, K., Yang, J., Loy, C. C., and Liu, Z. Conditional prompt learning for vision-language models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 16816-16825, 2022.
+Zhou, X., Ye, W., Lee, Z., Zou, L., and Zhang, S. Valuing training data via causal inference for in-context learning. IEEE Transactions on Knowledge and Data Engineering, 2025a.
+Zhou, X., Zhang, M., Lee, Z., Ye, W., and Zhang, S. Hademif: Hallucination detection and mitigation in large language models. In The Thirteenth International Conference on Learning Representations, 2025b.
+Zhu, Y., Lu, S., Zheng, L., Guo, J., Zhang, W., Wang, J., and Yu, Y. Texygen: A benchmarking platform for text generation models. In The 41st international ACM SIGIR conference on research & development in information retrieval, pp. 1097-1100, 2018.
+
+# Appendix
+
+# Table of Contents
+
+A Prior Arts
+
+B Continuous Extension of $\alpha$ - $\beta$ -Divergence
+
+C More Discussion Related to Tracking Probability Allocation with Log Mass Ratio
+
+C.1 The Relationship between Log Mass Ratio and Logit Gradient
+C.2 The Relationship Between Overall Gradient and Logit Gradient
+
+D $\alpha$ - $\beta$ -divergence: Further Analysis on Trading off Hardness-Concentration and Confidence-Concentration
+
+E Comparison of Gradients: FKLD, RKLD, and Our Framework
+
+F Pursuing a Proper Mass Allocation via $\alpha$ -Divergence
+
+G Proofs
+
+G.1 Proof of Proposition 3.1
+G.2 Proof of Theorem 3.2
+G.3 Proof of Proposition F.2
+G.4 Proof of Theorem F.3
+G.5 Proof of Proposition 4.2
+G.6 Proof of Theorem D.1
+
+H Algorithm Protocol
+
+I Additional Experiment Settings
+
+I.1 Natural Language Processing Tasks
+
+I.1.1 Datasets
+I.1.2 Competitors
+I.1.3 Implementation Details
+
+I.2 Vision Tasks
+
+I.2.1 Datasets
+I.2.2 Competitors
+I.2.3 Implementation Details
+
+J Additional Experiment Analysis
+
+J.1 Natural Language Processing Tasks
+
+J.1.1 Effects of SGOs
+J.1.2 Distilling from Stronger Teacher
+J.1.3 Comparison with More Baselines
+J.1.4 LLaMA Family Distillation
+J.1.5 Qualitative Evaluation
+
+J.2 Vision Tasks
+
+J.2.1 Distilling from Stronger Teacher
+J.2.2 Cross-Architecture Distillation
+J.2.3 How does ABKD perform with alpha/beta outside [0,1]?
+
+# A. Prior Arts
+
+Knowledge distillation (Hinton, 2015) is a promising technology for transferring knowledge between different models. The typical setup assumes the presence of a larger teacher model and a smaller student model with fewer parameters. To achieve this knowledge transfer, a common approach is to let the student distribution $q_{\theta}$ mimic the teacher distribution $p$ by minimizing a distributional measure $\mathbb{D}(p\|q_{\theta})$. This approach is referred to as logit-based distillation (the focus of this work). Another promising approach is to leverage the rich information in the intermediate layers of the model, such as the attention matrix (Zagoruyko & Komodakis, 2016; Sun et al., 2020; Jiao et al., 2019; Wang et al., 2020b;a), the embedding features and their relationships (Romero et al., 2014; Liang et al., 2023b; Sun et al., 2019; Liang et al., 2023a; Lv et al., 2024; Tian et al., 2019), etc. These methods are known as feature-based distillation. Owing to its effectiveness, KD often outperforms supervised fine-tuning and has improved performance in various downstream tasks, including image classification (Kim et al., 2024; Yang et al., 2021; 2023a; 2024), instruction generation (Gu et al., 2024b; Zhou et al., 2025b;a), neural architecture search (Wang et al., 2021), and object detection (Li et al., 2024; Lv et al., 2024).
+
+Logit-based methods, which aim to minimize the distance between the student and teacher distributions, have achieved profound success in the past few years. To do this, one can choose different distillation objectives, such as Maximum Mean Discrepancy (Huang & Wang, 2017), Total Variation Distance (Wen et al., 2023), Wasserstein Distance (Lv et al., 2024), or the Pearson correlation coefficient (Huang et al., 2022). Most prior methods (Hinton, 2015) primarily use the forward Kullback-Leibler divergence (FKLD) to let the student distribution mimic the teacher distribution. On this basis, a variety of methods have been proposed to help the student learn better from the teacher distribution, such as using asymmetric temperature scaling (Li et al., 2022), decomposing the teacher distribution into separate learning of target-class and non-target-class knowledge (Zhao et al., 2022), removing temperature scaling on the student side (Zheng & Yang, 2024), normalizing logits (Sun et al., 2024), reusing the teacher's classifier (Chen et al., 2022), and utilizing inter-class relationships (Lv et al., 2024; Jin et al., 2023). Despite this profound success, recent research points out that, due to its asymmetry, FKLD tends to cover the entire support of the teacher's distribution, leading to an overly smoothed student distribution. To address this issue, many works resort to the reverse Kullback-Leibler divergence (RKLD) (Lee et al., 2023; Gu et al., 2024a; Kim et al., 2024; Gu et al., 2024b), which forces the student distribution to focus on a few modes of the teacher's distribution. At the same time, some works explore more general distribution measures (Wen et al., 2023; Agarwal et al., 2024; Wang et al., 2021) and composite metrics (Wu et al., 2024; Amara et al., 2022; Binici et al., 2022). Recently, some studies (Ko et al., 2024; Wu et al., 2024; Wen et al., 2023) have found that the relative superiority of FKLD and RKLD depends on the task and dataset. However, systematic studies providing theoretical insights into the suboptimal performance of FKLD and RKLD are either scarce or predominantly qualitative, which limits further exploration in this field.
+
+Contributions: In this paper, we propose a generic distillation framework based on $\alpha$ - $\beta$ -divergence. Unlike previous generic methods, our approach is 1) built on balancing hardness-concentration and confidence-concentration. Based on analysis in the unified space of our framework, we 2) theoretically explain why FKLD and RKLD lead to suboptimal performance, which further complements previous empirical observations. Fortunately, our framework 3) allows for flexible interpolation between them, ensuring better performance. Furthermore, we 4) confirm the effectiveness of the newly introduced distillation objective through extensive empirical experiments on language and vision datasets.
+
+# B. Continuous Extension of $\alpha$ - $\beta$ -Divergence
+
+The $\alpha$ - $\beta$ -divergence can be continuously extended (by applying L'Hôpital's rule) to cover all values of $\alpha, \beta \in \mathbb{R}$. Its explicit form is defined as follows.
+
+$$
+\mathbb{D}_{\mathrm{AB}}^{(\alpha,\beta)} = \begin{cases} \sum_{k} -\frac{1}{\alpha\beta}\left[p(k)^{\alpha}q(k)^{\beta} - \frac{\alpha}{\alpha+\beta}p(k)^{\alpha+\beta} - \frac{\beta}{\alpha+\beta}q(k)^{\alpha+\beta}\right], & \text{for } \alpha, \beta, \alpha+\beta \neq 0, \\ \sum_{k} \frac{1}{\alpha^{2}}\left[p(k)^{\alpha}\left(\ln p(k)^{\alpha} - \ln q(k)^{\alpha}\right) - p(k)^{\alpha} + q(k)^{\alpha}\right], & \text{for } \alpha \neq 0,\ \beta = 0, \\ \sum_{k} \frac{1}{\alpha^{2}}\left[\ln q(k)^{\alpha} - \ln p(k)^{\alpha} + \left(\frac{q(k)^{\alpha}}{p(k)^{\alpha}}\right)^{-1} - 1\right], & \text{for } \alpha = -\beta \neq 0, \\ \sum_{k} \frac{1}{\beta^{2}}\left[q(k)^{\beta}\left(\ln q(k)^{\beta} - \ln p(k)^{\beta}\right) - q(k)^{\beta} + p(k)^{\beta}\right], & \text{for } \alpha = 0,\ \beta \neq 0, \\ \sum_{k} \frac{1}{2}\left[\ln p(k) - \ln q(k)\right]^{2}, & \text{for } \alpha, \beta = 0. \end{cases} \tag{8}
+$$
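For reference, Eq. (8) admits a direct transcription. The sketch below (a NumPy implementation written for this exposition, not the authors' code; the clipping constant `eps` is an illustrative numerical safeguard against $\log 0$) covers all five branches and recovers FKLD at $(\alpha,\beta)=(1,0)$ and RKLD at $(\alpha,\beta)=(0,1)$:

```python
import numpy as np

def ab_divergence(p, q, alpha, beta, eps=1e-12):
    """Alpha-beta divergence of Eq. (8) for discrete distributions."""
    p = np.clip(np.asarray(p, dtype=float), eps, None)
    q = np.clip(np.asarray(q, dtype=float), eps, None)
    if alpha != 0 and beta != 0 and alpha + beta != 0:
        ab = alpha + beta
        return float(np.sum(-(p**alpha * q**beta
                              - alpha / ab * p**ab
                              - beta / ab * q**ab) / (alpha * beta)))
    if alpha != 0 and beta == 0:  # alpha = 1 recovers FKLD
        pa, qa = p**alpha, q**alpha
        return float(np.sum((pa * (np.log(pa) - np.log(qa)) - pa + qa)
                            / alpha**2))
    if alpha == -beta and alpha != 0:
        pa, qa = p**alpha, q**alpha
        return float(np.sum((np.log(qa) - np.log(pa) + pa / qa - 1)
                            / alpha**2))
    if alpha == 0 and beta != 0:  # beta = 1 recovers RKLD
        pb, qb = p**beta, q**beta
        return float(np.sum((qb * (np.log(qb) - np.log(pb)) - qb + pb)
                            / beta**2))
    # alpha = beta = 0: squared log-difference
    return float(np.sum(0.5 * (np.log(p) - np.log(q))**2))

# (alpha, beta) = (1, 0) gives KL(p||q); (0, 1) gives KL(q||p).
p = np.array([0.7, 0.2, 0.1])
q = np.array([0.5, 0.3, 0.2])
assert abs(ab_divergence(p, q, 1.0, 0.0) - np.sum(p * np.log(p / q))) < 1e-9
assert abs(ab_divergence(p, q, 0.0, 1.0) - np.sum(q * np.log(q / p))) < 1e-9
```

Note that the two KL recoveries rely on $\sum_k p(k) = \sum_k q(k) = 1$, which cancels the $-p(k)^{\alpha} + q(k)^{\alpha}$ correction terms.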
+
+# C. More Discussion Related to Tracking Probability Allocation with Log Mass Ratio
+
+# C.1. The Relationship between Log Mass Ratio and Logit Gradient
+
+For a given divergence algorithm $\ell \triangleq \mathbb{D}_{\mathcal{A}}(p\parallel q)$, consider using gradient descent to update the loss function $\ell$ w.r.t. the logits $f_{y}^{t}$. The distribution at the next step, $q_{t+1}^{\mathcal{A}}$, is then given by:
+
+$$
+\begin{aligned} q_{t+1}^{\mathcal{A}}(y) &= \frac{\exp\left(f_{y}^{t+1}\right)}{\sum_{k} \exp\left(f_{k}^{t+1}\right)} \\ &= \frac{\exp\left(f_{y}^{t} - \eta \nabla_{f_{y}^{t}} \ell\right)}{\sum_{k} \exp\left(f_{k}^{t} - \eta \nabla_{f_{k}^{t}} \ell\right)} \\ &= q_{t}(y) \cdot \frac{\exp\left(-\eta \nabla_{f_{y}^{t}} \ell\right)}{\sum_{k} q_{t}(k) \exp\left(-\eta \nabla_{f_{k}^{t}} \ell\right)}. \end{aligned} \tag{9}
+$$
+
+Observing that the denominator serves as a normalization constant, this can be rewritten as:
+
+$$
+\frac {q _ {t + 1} ^ {\mathcal {A}} (y)}{q _ {t} (y)} \propto \exp \left(- \eta \nabla_ {f _ {y} ^ {t}} \ell\right). \tag {10}
+$$
+
+Taking the logarithm on both sides, we get:
+
+$$
+\log \frac {q _ {t + 1} ^ {\mathcal {A}} (y)}{q _ {t} (y)} = - \eta \cdot \nabla_ {f _ {y} ^ {t}} \ell + \mathrm {N} _ {t} ^ {\mathcal {A}} (y), \tag {11}
+$$
+
+where $\mathrm{N}_{t}^{\mathcal{A}}(y)$ denotes a normalization term that is constant across classes (i.e., independent of $y$). This indicates that, up to this constant, the log mass ratio is proportional to the logit gradient $\nabla_{f_y^t}\ell$ (with negative coefficient $-\eta$).
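The identity in Eq. (11) is easy to verify numerically. The sketch below uses illustrative values, with FKLD as the loss because its logit gradient $q_t - p$ is standard; it performs one gradient step and checks that the log mass ratio matches $-\eta \nabla_{f_y^t}\ell$ up to a class-independent constant:

```python
import numpy as np

def softmax(f):
    z = np.exp(f - f.max())
    return z / z.sum()

# Illustrative setup: FKLD loss, whose gradient w.r.t. the logits is q_t - p.
p = np.array([0.6, 0.3, 0.1])   # teacher distribution
f = np.array([0.2, -0.1, 0.5])  # student logits
eta = 0.05                      # learning rate

q_t = softmax(f)
grad = q_t - p                  # d FKLD / d f_y
q_next = softmax(f - eta * grad)

# Eq. (11): log(q_{t+1}(y) / q_t(y)) = -eta * grad_y + constant.
log_ratio = np.log(q_next / q_t)
residual = log_ratio + eta * grad
assert np.allclose(residual, residual[0])  # constant across classes

# Underestimated classes gain mass; overestimated classes lose it.
assert log_ratio[0] > 0 and log_ratio[2] < 0
```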
+
+# C.2. The Relationship Between Overall Gradient and Logit Gradient
+
+We first give the relationship between overall gradient and logit gradient:
+
+$$
+\nabla_ {\boldsymbol {W}} \ell = \boldsymbol {J} ^ {\top} \cdot \nabla_ {f} \ell . \tag {12}
+$$
+
+In this case, $J$ is the Jacobian matrix representing the gradient of the logits w.r.t. the model parameters; its dimensions are $C \times M$, where $C$ is the dimensionality of the logits and $M$ is the dimensionality of the model parameters $W$. Typically, we have $M \gg C$. For example, for image classification with ResNet-110 on CIFAR-100, $C = 100$ and $M = 1,110,240$; for instruction generation with GPT-2 XLarge, $C = 50,257$ and $M \approx 1,500,000,000$. Thus, the matrix $J$ generically has full (row) rank $C$.
+
+Hence, if $\nabla_W\ell \to 0$, then we must have:
+
+$$
+\boldsymbol {J} ^ {\top} \cdot \nabla_ {f} \ell \rightarrow 0. \tag {13}
+$$
+
+Since the Jacobian matrix $J$ is full rank, the product can only approach zero if $\nabla_f \ell \to 0$ , i.e., $\nabla_{f_y} \ell \to 0$ for all class channels $y$ .
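This injectivity argument can be illustrated numerically; in the sketch below a random matrix stands in for a real Jacobian, and the dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
C, M = 5, 200  # logit dimension << parameter dimension, as in the text

# A random C x M matrix stands in for the Jacobian J; generically it has
# full row rank C, so the map v -> J^T v is injective on R^C.
J = rng.normal(size=(C, M))
assert np.linalg.matrix_rank(J) == C

# If J^T v is (near) zero, v itself must be (near) zero: the smallest
# singular value of J lower-bounds ||J^T v|| / ||v||.
sigma_min = np.linalg.svd(J, compute_uv=False).min()
v = rng.normal(size=C)
assert np.linalg.norm(J.T @ v) >= sigma_min * np.linalg.norm(v) - 1e-9
```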
+
+# D. $\alpha$ - $\beta$ -divergence: Further Analysis on Trading off Hardness-Concentration and Confidence-Concentration
+
+Of course, there is no free lunch. A broader hyperparameter space offers more flexibility but also makes finding suitable values more difficult. While grid search could theoretically yield optimal performance, it introduces additional overhead in our framework. Fortunately, guided by the following theoretical insights, we can more efficiently design a principled divergence objective whose inductive bias suits the target task.
+
+Theorem D.1. Let $q_{t+1}^{(\alpha, \beta)}(y)$ be the distribution obtained after one gradient step, starting from $q_t$ using the $\alpha-\beta$ -divergence. Define $\Delta_t^{(\alpha, \beta)}$ as the difference of log mass ratios across two classes $y_1$ and $y_2$ , obtained from the $\alpha-\beta$ -divergence:
+
+$$
+\Delta_ {t} ^ {(\alpha , \beta)} \left(y _ {1}, y _ {2}\right) \triangleq \operatorname {L o g R} _ {t} ^ {(\alpha , \beta)} \left(y _ {1}\right) - \operatorname {L o g R} _ {t} ^ {(\alpha , \beta)} \left(y _ {2}\right). \tag {14}
+$$
+
+We have the following (for appropriate positive constants $\zeta$ , $\delta_1$ , $\delta_2$ , and any real numbers $\alpha_1$ and $\alpha_2$ in the range $[0,1]$ satisfying $\alpha_1 < \alpha_2$ ):
+
+1. $\alpha$ - $\beta$ -divergence transfers probability mass from overestimated classes to underestimated classes more aggressively as $\alpha$ decreases. If $y_{1}$ and $y_{2}$ are such that $\delta_{1} < q_{t}(y_{1}) = q_{t}(y_{2}) \leq p(y_{1})$ (where $\delta_{1} > 0$ ), and $p(y_{1}) \geq p(y_{2}) + \zeta$ , it holds that $\Delta_{t}^{(\alpha_{1},\beta)}(y_{1},y_{2}) \geq \Delta_{t}^{(\alpha_{2},\beta)}(y_{1},y_{2})$ .
+2. $\alpha$ - $\beta$ -divergence reduces the probability mass of classes with larger error $|p(y) - q_t(y)|$ more aggressively as $\alpha$ decreases. If $y_1$ and $y_2$ are such that $p(y_1) < q_t(y_1) = q_t(y_2) \leq 1 - \delta_2$ (where $\delta_2 > 0$ ), and $p(y_1) \geq p(y_2) + \zeta$ , it holds that $\Delta_t^{(\alpha_1, \beta)}(y_1, y_2) \geq \Delta_t^{(\alpha_2, \beta)}(y_1, y_2)$ .
+3. The $\alpha$ - $\beta$ -divergence becomes more (less) preferential in focusing the error on classes with higher student confidence as $\beta$ increases (decreases) when reducing $\left|\mathrm{LogR}_t^{(\alpha,\beta)}(y)\right|$ .
+
+The proof is in App. G.6. Case 1 shows that a smaller $\alpha$ (stronger hardness-concentration) leads to aggressive mass reallocation across classes when some classes are overestimated. Case 2 shows that a smaller $\alpha$ will more aggressively penalize overestimated classes with large errors (a.k.a., hard classes). In this sense, a proper assignment of $\alpha$ leads to a better hardness-concentration effect. On the other hand, case 3 shows that a larger $\beta$ tends to focus more on reducing the error from classes with high student confidence and ignores the error from other classes, as shown in Fig. 6(c) and Fig. 6(d), resulting in a sharper student distribution. As such, one can select a proper $\beta$ to ensure a better confidence-concentration effect.
+
Hyperparameter tuning guidelines. In principle, one may prefer to select a $\beta$ larger than 0, inversely proportional to the ratio of non-target classes in the output distribution. A proper $\beta$ allows the student to effectively learn from the teacher's soft labels while maintaining an adequate focus on the target class. On the other hand, choosing an $\alpha$ smaller than 1 leads to more aggressive probability mass reallocation across classes. Therefore, a small but proper $\alpha$ can more effectively avoid local optima. This becomes particularly important when the two distributions are far apart, as shown in Fig. 6(b).
+
Empirically, we find that for tasks with low-dimensional output distributions, such as image classification on CIFAR-100, selecting a large $\alpha$ and a small $\beta$ is sufficient to achieve optimal performance, as shown in Tab. 7 and Tab. 4. However, for tasks with higher-dimensional output distributions, such as instruction generation on the Dolly dataset, selecting a small $\alpha$ (which leads to more aggressive reallocation of probability mass) and a large $\beta$ (to emphasize learning the soft-label information) is crucial for achieving exceptional performance, as shown in Fig. 1(a) and App. I.1.3.
+
+# E. Comparison of Gradients: FKLD, RKLD, and Our Framework
+
Lemma E.1 (Cichocki et al., 2011). Given two distributions $p$ and $q_{\theta}$ , the gradient of the $\alpha$ - $\beta$ -divergence with respect to $\theta$ is calculated as:
+
+$$
\frac {\partial \mathbb {D} _ {\mathrm {A B}} ^ {(\alpha , \beta)} (p \| q _ {\theta})}{\partial \theta} = - \sum_ {k} \frac {\partial q _ {\theta} (k)}{\partial \theta} \cdot \underbrace {q _ {\theta} (k) ^ {\alpha + \beta - 1}} _ {\text {weights}} \underbrace {\frac {\mathbf {r} _ {p , q _ {\theta}} ^ {\alpha} - 1}{\alpha}} _ {\alpha \text {-zoom}}, \tag {15}
+$$
+
where $\mathbf{r}_{p,q_{\theta}}(k) \triangleq p(k) / q_{\theta}(k)$ denotes the element-wise ratio between the distributions $p$ and $q_{\theta}$ .
+
Proof. The $\alpha$ - $\beta$ -divergence is defined as follows:
+
+$$
+\mathbb {D} _ {\mathrm {A B}} ^ {(\alpha , \beta)} (p \| q _ {\theta}) = - \frac {1}{\alpha \beta} \sum_ {k} \left[ p (k) ^ {\alpha} q _ {\theta} (k) ^ {\beta} - \frac {\alpha}{\alpha + \beta} p (k) ^ {\alpha + \beta} - \frac {\beta}{\alpha + \beta} q _ {\theta} (k) ^ {\alpha + \beta} \right]. \tag {16}
+$$
+
+Taking the derivative of each term with respect to the parameter $\theta$ , we have:
+
+$$
+\frac {\partial \mathbb {D} _ {\mathrm {A B}} ^ {(\alpha , \beta)} (p \| q _ {\theta})}{\partial \theta} = \sum_ {k} \frac {\partial}{\partial q _ {\theta} (k)} \mathbb {D} _ {\mathrm {A B}} ^ {(\alpha , \beta)} (p \| q _ {\theta}) \cdot \frac {\partial q _ {\theta} (k)}{\partial \theta}, \tag {17}
+$$
+
+where
+
+$$
+\begin{array}{l} \frac {\partial}{\partial q _ {\theta} (k)} \mathbb {D} _ {\mathrm {A B}} ^ {(\alpha , \beta)} (p \| q _ {\theta}) = - \frac {1}{\alpha \beta} \left(\beta p (k) ^ {\alpha} q _ {\theta} (k) ^ {\beta - 1} - \beta q _ {\theta} (k) ^ {\alpha + \beta - 1}\right) \\ = - \frac {1}{\alpha} \left(p (k) ^ {\alpha} q _ {\theta} (k) ^ {\beta - 1} - q _ {\theta} (k) ^ {\alpha + \beta - 1}\right) \tag {18} \\ = - q _ {\theta} (k) ^ {\alpha + \beta - 1} \cdot \frac {\left(\frac {p (k)}{q _ {\theta} (k)}\right) ^ {\alpha} - 1}{\alpha}. \\ \end{array}
+$$
+
+Substituting Eq. 18 into Eq. 17, we obtain:
+
+$$
+\frac {\partial \mathbb {D} _ {\mathrm {A B}} ^ {(\alpha , \beta)} (p \| q _ {\theta})}{\partial \theta} = - \sum_ {k} \frac {\partial q _ {\theta} (k)}{\partial \theta} \cdot q _ {\theta} (k) ^ {\alpha + \beta - 1} \frac {\left(\frac {p (k)}{q _ {\theta} (k)}\right) ^ {\alpha} - 1}{\alpha}. \tag {19}
+$$
+
+This concludes the proof.
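
Before moving on, Eq. 18 can be sanity-checked numerically. The sketch below is our own (NumPy; function names are illustrative); the unconstrained central differences are valid because Eq. 18 treats each $q_{\theta}(k)$ as a free variable:

```python
import numpy as np

def ab_divergence(p, q, alpha, beta):
    # Eq. 16: alpha-beta-divergence (alpha, beta, alpha + beta all nonzero).
    return -np.sum(
        p**alpha * q**beta
        - alpha / (alpha + beta) * p**(alpha + beta)
        - beta / (alpha + beta) * q**(alpha + beta)
    ) / (alpha * beta)

def ab_grad_q(p, q, alpha, beta):
    # Eq. 18: closed-form partial derivative with respect to each q(k).
    return -q**(alpha + beta - 1) * ((p / q)**alpha - 1.0) / alpha

rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(5))
q = rng.dirichlet(np.ones(5))
alpha, beta = 0.5, 0.7

analytic = ab_grad_q(p, q, alpha, beta)
eps = 1e-6
numeric = np.empty_like(q)
for k in range(len(q)):
    qp, qm = q.copy(), q.copy()
    qp[k] += eps
    qm[k] -= eps
    numeric[k] = (ab_divergence(p, qp, alpha, beta)
                  - ab_divergence(p, qm, alpha, beta)) / (2 * eps)

assert np.allclose(analytic, numeric, atol=1e-5)
```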
+
+
+
+The gradient of the FKLD with respect to $\theta$ is given by
+
+$$
+\frac {\partial}{\partial \theta} \mathbb {D} _ {\mathrm {K L}} (p \| q _ {\theta}) = - \sum_ {k} \frac {\partial q _ {\theta} (k)}{\partial \theta} \cdot \frac {p (k)}{q _ {\theta} (k)}. \tag {20}
+$$
+
+The result is the negative gradient of the model probability, inversely weighted by its value. As Ko et al. (2024) stated, when $q_{\theta}(k) \approx 0$ , the gradient norm increases, causing large, noisy updates that can hinder optimization. Similarly, the derivative of the reverse KL divergence with respect to $\theta$ is given by:
+
+$$
\frac {\partial}{\partial \theta} \mathbb {D} _ {\mathrm {K L}} \left(q _ {\theta} \| p\right) = \sum_ {k} \frac {\partial q _ {\theta} (k)}{\partial \theta} \cdot \left(\log \frac {q _ {\theta} (k)}{p (k)} + 1\right). \tag {21}
+$$
+
The magnitude of this gradient becomes very large when $p(k) \approx 0$ . In our framework, by Lem. E.1, the derivative of the $\alpha$ - $\beta$ -divergence with respect to $\theta$ is:
+
+$$
\frac {\partial \mathbb {D} _ {\mathrm {A B}} ^ {(\alpha , \beta)} (p \| q _ {\theta})}{\partial \theta} = - \sum_ {k} \frac {\partial q _ {\theta} (k)}{\partial \theta} \cdot q _ {\theta} (k) ^ {\alpha + \beta - 1} \cdot \frac {\left(\frac {p (k)}{q _ {\theta} (k)}\right) ^ {\alpha} - 1}{\alpha}.
+$$
+
When $\alpha = 1$ and $\beta = 0$ , FKLD becomes a special case; when $\alpha = 0$ and $\beta = 1$ , RKLD becomes a special case. The parameter $\alpha$ controls the focus on large or small ratios $p / q_{\theta}$ , while $\beta$ adjusts the weighting of these ratios through the scaling factor $q_{\theta}^{\alpha + \beta - 1}$ . When choosing $\alpha = 1$ , $\frac{\partial \mathbb{D}_{\mathrm{AB}}^{(\alpha, \beta)}(p \| q_{\theta})}{\partial \theta}$ tends to focus excessively on the extreme values of $p / q_{\theta}$ (i.e., when $p(x) > 0$ and $q_{\theta}(x) \approx 0$ ). On the other hand, choosing an $\alpha$ close to 0 overly focuses on the extreme values of $q_{\theta} / p$ (i.e., when $q_{\theta}(x) > 0$ and $p(x) \approx 0$ ). Additionally, choosing $\alpha + \beta > 1$ places more focus on the ratios $p / q_{\theta}$ of classes with high student confidence, whereas a smaller $\alpha + \beta$ treats all classes more equally. This means the two parameters provide a fine-grained way to tune the model to emphasize specific likelihood-ratio ranges, thereby ensuring stable gradient optimization.
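
These limiting cases can be verified numerically. The sketch below is our own illustration (it assumes a softmax parametrization so that Lemma G.2 supplies the Jacobian; variable names are hypothetical). Per-class weights may differ across formulations by an additive constant, which contraction with the softmax Jacobian annihilates, so we compare gradients with respect to the logits:

```python
import numpy as np

def softmax_jacobian(q):
    # Lemma G.2: dq(i)/df_j = q(i) * (delta_ij - q(j)).
    return np.diag(q) - np.outer(q, q)

def ab_dD_dq(p, q, alpha, beta):
    # Eq. 18: derivative of the AB-divergence wrt each q(k).
    return -q**(alpha + beta - 1) * ((p / q)**alpha - 1.0) / alpha

rng = np.random.default_rng(1)
p = rng.dirichlet(np.ones(6))
f = rng.normal(size=6)
q = np.exp(f) / np.exp(f).sum()
J = softmax_jacobian(q)

# FKLD: gradient wrt the logits is the familiar q - p.
g_fkld = J.T @ (-p / q)
assert np.allclose(g_fkld, q - p)

# (alpha, beta) = (1, 0) recovers the FKLD gradient exactly.
assert np.allclose(J.T @ ab_dD_dq(p, q, 1.0, 0.0), g_fkld)

# alpha -> 0, beta = 1 approaches the RKLD gradient.
g_rkld = J.T @ (np.log(q / p) + 1.0)
g_ab = J.T @ ab_dD_dq(p, q, 1e-6, 1.0)
assert np.allclose(g_ab, g_rkld, atol=1e-4)
```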
+
# F. Pursuing a Proper Mass Allocation via $\alpha$ -Divergence
+
+To achieve an effective trade-off between hardness-concentration and confidence-concentration, one can, for example, extend FKLD and RKLD to a family of generalized divergences by introducing an additional dimension $\alpha$ , which is known as the $\alpha$ -divergence (Chernoff, 1952).
+
Definition F.1 ( $\alpha$ -divergence). For $\alpha \in \mathbb{R} \setminus \{0, 1\}$ , the $\alpha$ -divergence between two distributions is given by:
+
+$$
+\mathbb {D} _ {\alpha} (p \parallel q) \triangleq \frac {1}{\alpha (\alpha - 1)} \left[ \sum_ {k} p (k) ^ {\alpha} q (k) ^ {1 - \alpha} - 1 \right],
+$$
+
+where $p = [p(k)]_{k=1}^{C}$ and $q = [q(k)]_{k=1}^{C}$ are two discrete distributions over $C$ classes.
+
+Remark. The $\alpha$ -divergence includes $\mathbb{D}_{\mathrm{KL}}(p \parallel q_{\theta})$ as $\alpha \to 1$ , and $\mathbb{D}_{\mathrm{KL}}(q_{\theta} \parallel p)$ as $\alpha \to 0$ .
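
The two limits in the remark can be checked numerically; the sketch below is our own (the tolerances are illustrative) and evaluates Definition F.1 at values of $\alpha$ close to the excluded points:

```python
import numpy as np

def alpha_div(p, q, alpha):
    # Definition F.1 (valid for alpha not in {0, 1}).
    return (np.sum(p**alpha * q**(1.0 - alpha)) - 1.0) / (alpha * (alpha - 1.0))

rng = np.random.default_rng(2)
p = rng.dirichlet(np.ones(8))
q = rng.dirichlet(np.ones(8))

kl_pq = np.sum(p * np.log(p / q))  # FKLD, D_KL(p || q)
kl_qp = np.sum(q * np.log(q / p))  # RKLD, D_KL(q || p)

assert abs(alpha_div(p, q, 1.0 - 1e-5) - kl_pq) < 1e-3  # alpha -> 1
assert abs(alpha_div(p, q, 1e-5) - kl_qp) < 1e-3        # alpha -> 0
```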
+
+The following proposition characterizes the effect of hyperparameter $\alpha$ on reducing $|\mathrm{LogR}_t^\alpha (y)|$ . The proof is in App. G.3.
+
+Proposition F.2. The updates induced by $\alpha$ -divergence for $q_{t}$ within one gradient descent step are given by:
+
+$$
+\left|\operatorname{Log}\mathsf{R}_{t}^{\alpha}(y)\right|\leq \eta \underbrace{q_{t}(y)^{1 - \alpha}}_{(a)}\underbrace{\left|\frac{p(y)^{\alpha} - q_{t}(y)^{\alpha}}{\alpha}\right|}_{(b)} + q_{t}(y)\sum_{k}\underbrace{q_{t}(k)^{1 - \alpha}}_{(a)}\underbrace{\left|\frac{p(k)^{\alpha} - q_{t}(k)^{\alpha}}{\alpha}\right|}_{(b)} + \left|\mathsf{N}_{t}^{\alpha}(y)\right|,
+$$
+
+where $\mathsf{N}_t^\alpha (y)$ denotes constant normalization factors.
+
+This proposition indicates that term (a) scales the relative importance of large versus small $q_{t}(y)$ , controlling the confidence-concentration effect. Term (b) adjusts the relative emphasis of the error between $p(y)$ and $q_{t}(y)$ , controlling the hardness-concentration effect. Together, these terms are coupled, with their interaction governed by $\alpha$ and $1 - \alpha$ . Unfortunately, this unnecessary constraint makes it intractable to adjust their effects independently.
+
The following theorem validates our idea and shows that the $\alpha$ -divergence can only inflexibly interpolate between FKLD and RKLD (Fig. 1(e)) within a linear subspace of the planar space formed by terms (a) and (b), as shown in Fig. 1(a). The proof is deferred to App. G.4.
+
+Theorem F.3. Let $q_{t+1}^{\alpha}(y)$ be the distribution obtained after one gradient step, starting from $q_t$ using the $\alpha$ -divergence. Define $\Delta_t^\alpha$ as the difference of log mass ratios across two classes $y_1$ and $y_2$ , obtained from the $\alpha$ -divergence:
+
+$$
+\Delta_ {t} ^ {\alpha} \left(y _ {1}, y _ {2}\right) \triangleq \operatorname {L o g} R _ {t} ^ {\alpha} \left(y _ {1}\right) - \operatorname {L o g} R _ {t} ^ {\alpha} \left(y _ {2}\right).
+$$
+
+We observe the following linear trend (for appropriate positive constants $\zeta$ , $\delta_1$ , $\delta_2$ , and any real numbers $\alpha_1$ and $\alpha_2$ in the range $[0, 1]$ satisfying $\alpha_1 < \alpha_2$ ):
+
+1. The $\alpha$ -divergence transfers the probability mass of overestimated classes to underestimated ones more aggressively as $\alpha$ decreases. If $y_{1}$ and $y_{2}$ are such that $p(y_{2}) < \delta_{1} < q_{t}(y_{1}) = q_{t}(y_{2}) \leq p(y_{1})$ (where $\delta_{1} > 0$ ), and $p(y_{1}) \geq p(y_{2}) + \zeta$ , it holds that $\Delta_t^{\alpha_1}(y_1, y_2) \geq \Delta_t^{\alpha_2}(y_1, y_2)$ .
+2. $\alpha$ -divergence reduces the probability mass of classes with larger error $|p(y) - q_t(y)|$ more aggressively as $\alpha$ decreases. If $y_1$ and $y_2$ are such that $p(y_1) < q_t(y_1) = q_t(y_2) \leq 1 - \delta_2$ (where $\delta_2 > 0$ ), and $p(y_1) \geq p(y_2) + \zeta$ , it holds that $\Delta_t^{\alpha_1}(y_1, y_2) \geq \Delta_t^{\alpha_2}(y_1, y_2)$ .
+3. $\alpha$ -divergence increases the probability mass more preferentially on underestimated classes with larger probabilities $q_{t}(y)$ as $\alpha$ decreases. If $y_{1}$ and $y_{2}$ are such that $q_{t}(y_{2}) + \zeta \leq q_{t}(y_{1}) \leq 1 - \delta_{2}$ , and $p(y_{1}) = p(y_{2}) > c_{0} \cdot q_{t}(y_{1})$ , where $c_{0}$ is a positive constant $> 1$ , it holds that $\Delta_{t}^{\alpha_{1}}(y_{1}, y_{2}) \geq \Delta_{t}^{\alpha_{2}}(y_{1}, y_{2})$ .
+4. $\alpha$ -divergence reduces the probability mass on overestimated classes with larger probabilities $q_{t}(y)$ more conservatively as $\alpha$ decreases. If $y_{1}$ and $y_{2}$ are such that $q_{t}(y_{2}) + \zeta \leq q_{t}(y_{1}) \leq 1 - \delta_{2}$ , and $c_{0} \cdot q_{t}(y_{2}) < p(y_{1}) = p(y_{2}) < c_{1} \cdot q_{t}(y_{1})$ , where $c_{0}$ and $c_{1}$ are constants with $c_{0} > 1$ and $c_{1} < 1$ , it holds that $\Delta_{t}^{\alpha_{1}}(y_{1}, y_{2}) \geq \Delta_{t}^{\alpha_{2}}(y_{1}, y_{2})$ .
+
+These cases illustrate the differences in probability mass allocation for different $\alpha$ . Specifically, from Case 1 and Case 2, it can be seen that a smaller $\alpha$ leads to a more aggressive reduction of the probability mass on overestimated classes, transferring it to underestimated classes. Case 3 and Case 4 show that a smaller $\alpha$ (or larger $1 - \alpha$ ) tends to concentrate the probability mass more on classes with high student confidence.
+
+In summary, we have the following conclusion.
+
+The $\alpha$ -divergence achieves suboptimal balance between hardness-concentration and confidence-concentration inflexibly.
+
+# G. Proofs
+
+# G.1. Proof of Proposition 3.1
+
+Lemma G.1 (Tajwar et al., 2024). For a given distribution $q_{t}$ , the algebraic relationships between LogR and the gradient of the logit $f_{y}$ with a given learning rate $\eta$ in FKLD and RKLD are given by:
+
+$$
\mathrm {FKLD:} \quad \operatorname {LogR} _ {t} ^ {\mathcal {F}} (y) = \eta \cdot \left(p (y) - q _ {t} (y)\right) + \mathsf {N} _ {t} ^ {\mathcal {F}} (y),
+$$
+
+$$
\mathrm {RKLD:} \quad \operatorname {LogR} _ {t} ^ {\mathcal {R}} (y) = \eta \cdot q _ {t} (y) \Big (\log p (y) - \log q _ {t} (y) + \sum_ {k} q _ {t} (k) \big (\log q _ {t} (k) - \log p (k) \big) \Big) + \mathsf {N} _ {t} ^ {\mathcal {R}} (y),
+$$
+
+where $\mathsf{N}_t^{\mathcal{F}}(y)$ and $\mathsf{N}_t^{\mathcal{R}}(y)$ denote constant normalization factors independent of $y$ and vanish to zero when $p = q_t$ .
+
+Restate of Proposition 3.1. The updates induced by FKLD and RKLD for $q_{t}$ within one gradient descent step are given by:
+
+$$
\text {FKLD:} \quad \left| \operatorname {LogR} _ {t} ^ {\mathcal {F}} (y) \right| \leq \eta \cdot \underbrace {1} _ {(a)} \cdot \underbrace {\left| p (y) - q _ {t} (y) \right|} _ {(b)} + \left| \mathsf {N} _ {t} ^ {\mathcal {F}} (y) \right|,
+$$
+
+$$
+\text{RKLD:}\quad \big|\operatorname {Log}\mathsf{R}_{t}^{\mathcal{R}}(y)\big|\leq \eta \cdot \underbrace{q_{t}(y)}_{(a_{1})}\Big(\underbrace{\big|\log p(y) - \log q_{t}(y)\big|}_{(b_{1})} + \sum_{k}\underbrace{q_{t}(k)}_{(a_{2})}\underbrace{\big|\log p(k) - \log q_{t}(k)\big|}_{(b_{2})}\Big) + \big|\mathsf{N}_{t}^{\mathcal{R}}(y)\big|,
+$$
+
+where $\mathsf{N}_t^{\mathcal{F}}(y)$ and $\mathsf{N}_t^{\mathcal{R}}(y)$ denote constant normalization factors independent of $y$ and vanish to zero when $p = q_t$ .
+
+Proof. By Lemma G.1, and applying the triangle inequality, we can directly obtain the following bounds:
+
+$$
+\left| \operatorname {L o g} \mathsf {R} _ {t} ^ {\mathcal {F}} (y) \right| \leq \eta \cdot 1 \cdot \left| p (y) - q _ {t} (y) \right| + \left| \mathsf {N} _ {t} ^ {\mathcal {F}} (y) \right|, \tag {22}
+$$
+
+$$
\left| \operatorname {LogR} _ {t} ^ {\mathcal {R}} (y) \right| \leq \eta \cdot q _ {t} (y) \left(\left| \log p (y) - \log q _ {t} (y) \right| + \sum_ {k} q _ {t} (k) \left| \log p (k) - \log q _ {t} (k) \right| \right) + \left| \mathsf {N} _ {t} ^ {\mathcal {R}} (y) \right|. \tag {23}
+$$
+
+This completes the proof.
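
The FKLD bound can also be exercised end-to-end. The sketch below is our own (it uses the standard fact that the gradient of $\mathbb{D}_{\mathrm{KL}}(p\|q_f)$ with respect to the logits is $q - p$); it performs one explicit gradient step and checks that pairwise differences of the log mass ratios match Lemma G.1, since the normalization term $\mathsf{N}_t^{\mathcal{F}}(y)$ is independent of $y$:

```python
import numpy as np

def softmax(f):
    e = np.exp(f - f.max())
    return e / e.sum()

eta = 0.1
rng = np.random.default_rng(3)
p = rng.dirichlet(np.ones(5))
f = rng.normal(size=5)
q = softmax(f)

# One FKLD gradient step on the logits: grad wrt f_y is q(y) - p(y).
q_next = softmax(f - eta * (q - p))
log_ratio = np.log(q_next / q)

# Lemma G.1: LogR^F(y) = eta * (p(y) - q(y)) + N, with N constant in y,
# so pairwise differences of the log mass ratios must match exactly.
expected = eta * ((p[0] - q[0]) - (p[1] - q[1]))
assert np.isclose(log_ratio[0] - log_ratio[1], expected)
```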
+
+# G.2. Proof of Theorem 3.2
+
Restate of Theorem 3.2. Let $q_{t+1}^{f}(y)$ be the distribution obtained after one gradient step, starting from $q_{t}$ using the FKLD. Likewise, let $q_{t+1}^{r}(y)$ be the distribution obtained from $q_{t}$ using the RKLD. Define $\Delta_{t}^{f}$ and $\Delta_{t}^{r}$ as the difference of log mass ratios across two classes $y_{1}$ and $y_{2}$ , obtained from the forward and reverse divergences respectively:
+
+$$
+\Delta_ {t} ^ {f} (y _ {1}, y _ {2}) \triangleq \operatorname {L o g R} _ {t} ^ {\mathcal {F}} (y _ {1}) - \operatorname {L o g R} _ {t} ^ {\mathcal {F}} (y _ {2}),
+$$
+
+and $\Delta_t^r$ is similarly defined. Then we have the following (for appropriate positive constants $\zeta, \delta_1, \delta_2$ ):
+
1. RKLD transfers probability mass from overestimated classes to underestimated classes more aggressively than FKLD. If $y_{1}$ and $y_{2}$ are such that $p(y_{2}) < \delta_{1} < q_{t}(y_{1}) = q_{t}(y_{2}) < p(y_{1})$ (where $\delta_{1} > 0$ ), but $p(y_{1}) \geq p(y_{2}) + \zeta$ , then $\Delta_t^r (y_1,y_2) > \Delta_t^f (y_1,y_2)$ .
+2. RKLD reduces the probability mass of overestimated classes with higher error $|p(y) - q_t(y)|$ more aggressively than FKLD. If $y_1$ and $y_2$ are such that $p(y_1) < q_t(y_1) = q_t(y_2) \leq 1 - \delta_2$ , but $p(y_1) \geq p(y_2) + \zeta$ , then, $\Delta_t^r (y_1,y_2) > \Delta_t^f (y_1,y_2)$ .
+3. RKLD more preferentially increases probability mass on underestimated classes with larger probability $q_{t}(y)$ than FKLD. If $y_{1}$ and $y_{2}$ are such that $q_{t}(y_{2}) + \zeta \leq q_{t}(y_{1}) \leq 1 - \delta_{2}$ , and $p(y_{1}) = p(y_{2}) > c_{0} \cdot q_{t}(y_{1})$ , where $c_{0}$ is a positive constant $> 1$ , then, $\Delta_t^r (y_1,y_2) > \Delta_t^f (y_1,y_2)$ .
+4. RKLD reduces probability mass on overestimated classes with larger probability $q_{t}(y)$ more conservatively than FKLD. If $y_{1}$ and $y_{2}$ are such that $q_{t}(y_{2}) + \zeta \leq q_{t}(y_{1}) \leq 1 - \delta_{2}$ , and $c_{0} \cdot q_{t}(y_{2}) < p(y_{1}) = p(y_{2}) < c_{1} \cdot q_{t}(y_{1})$ , where $c_{0}$ and $c_{1}$ are constants with $c_{0} > 1$ and $c_{1} < 1$ , then, $\Delta_t^r (y_1,y_2) > \Delta_t^f (y_1,y_2)$ .
+
Proof. For Case 1, note that $q_{t}(y_{1}) = q_{t}(y_{2}) < p(y_{1})$ . Based on Lem. G.1, we have:
+
+$$
+\Delta^ {f} \left(y _ {1}, y _ {2}\right) = \eta \left(p \left(y _ {1}\right) - p \left(y _ {2}\right)\right), \tag {24}
+$$
+
+$$
+\Delta^ {r} \left(y _ {1}, y _ {2}\right) = \eta q \left(y _ {1}\right) \left[ \log p \left(y _ {1}\right) - \log p \left(y _ {2}\right) \right]. \tag {25}
+$$
+
+The discrepancy between $\Delta^f$ and $\Delta^r$ is now given by:
+
+$$
+\Delta^ {r} \left(y _ {1}, y _ {2}\right) - \Delta^ {f} \left(y _ {1}, y _ {2}\right) = \eta \cdot q \left(y _ {1}\right) \left[ \log p \left(y _ {1}\right) - \log p \left(y _ {2}\right) - \frac {p \left(y _ {1}\right) - p \left(y _ {2}\right)}{q \left(y _ {1}\right)} \right]. \tag {26}
+$$
+
Notice that by Lagrange's mean value theorem, there exists a $c_{0} \in (p(y_{2}), p(y_{1}))$ such that:
+
+$$
+\log p \left(y _ {1}\right) - \log p \left(y _ {2}\right) = \frac {d \log p}{d p} \Bigg | _ {p = c _ {0}} \cdot \left(p \left(y _ {1}\right) - p \left(y _ {2}\right)\right). \tag {27}
+$$
+
+Since $\left.\frac{d\log p}{dp}\right|_{p = c_0} = \frac{1}{c_0}$ , we have that:
+
+$$
+\Delta^ {r} \left(y _ {1}, y _ {2}\right) - \Delta^ {f} \left(y _ {1}, y _ {2}\right) = \eta \cdot \left(p \left(y _ {1}\right) - p \left(y _ {2}\right)\right) \cdot \left[ \frac {q \left(y _ {1}\right)}{c _ {0}} - 1 \right]. \tag {28}
+$$
+
This quantity is positive whenever $q(y_{1}) > c_{0}$ , which is ensured by choosing $\delta_{1} \geq c_{0}$ (recall that $\delta_{1} < q(y_{1})$ ).
+
+Then, for Case 2, similarly, since $p(y_{1}) < q_{t}(y_{1}) = q_{t}(y_{2})$ , we have:
+
+$$
+\Delta^ {r} \left(y _ {1}, y _ {2}\right) - \Delta^ {f} \left(y _ {1}, y _ {2}\right) = \eta \cdot \left(p \left(y _ {1}\right) - p \left(y _ {2}\right)\right) \cdot \left[ \frac {q \left(y _ {1}\right)}{c _ {0}} - 1 \right]. \tag {29}
+$$
+
where $c_0 \in (p(y_2), p(y_1))$ is obtained by applying Lagrange's mean value theorem to the difference $\log p(y_1) - \log p(y_2)$ . Notice that $q(y_1) > p(y_1) > c_0$ , so this term is always positive.
+
+The proof for Case 3 is consistent with Tajwar et al. (2024), and is omitted here.
+
+Finally, we prove Case 4. Noting that $q(y_{2}) < p(y_{1}) = p(y_{2}) < q(y_{1})$ , we have
+
+$$
+\Delta^ {f} \left(y _ {1}, y _ {2}\right) = \eta \left(q \left(y _ {2}\right) - q \left(y _ {1}\right)\right). \tag {30}
+$$
+
+Additionally, for $\Delta^r (y_1,y_2)$ , we have
+
+$$
\begin{array}{l} \Delta^ {r} (y _ {1}, y _ {2}) = \underbrace {\eta \left[ q (y _ {1}) - q (y _ {2}) \right] \log p (y _ {1})} _ {(a)} - \underbrace {\eta \left[ q (y _ {1}) \log q (y _ {1}) - q (y _ {2}) \log q (y _ {2}) \right]} _ {(b)} \\ + \underbrace {\eta \left(q \left(y _ {1}\right) - q \left(y _ {2}\right)\right) \mathbb {D} _ {\mathrm {K L}} (q \| p)} _ {\geq 0}. \tag {31} \\ \end{array}
+$$
+
For item (b), by Lagrange's mean value theorem, there exists a point $c_{0} \in (q(y_{2}), q(y_{1}))$ such that
+
+$$
\begin{array}{l} \eta \left[ q \left(y _ {1}\right) \log q \left(y _ {1}\right) - q \left(y _ {2}\right) \log q \left(y _ {2}\right) \right] = \eta \left. \frac {d (q \log q)}{d q} \right\rvert_ {q = c _ {0}} \left(q \left(y _ {1}\right) - q \left(y _ {2}\right)\right) \tag {32} \\ = \eta (1 + \log c _ {0}) (q (y _ {1}) - q (y _ {2})). \\ \end{array}
+$$
+
+Substituting into Eq. 31, we obtain
+
+$$
\Delta^ {r} \left(y _ {1}, y _ {2}\right) = \eta \left[ q \left(y _ {1}\right) - q \left(y _ {2}\right) \right] \left[ \log p \left(y _ {1}\right) - \log c _ {0} - 1 + \mathbb {D} _ {\mathrm {K L}} (q \| p) \right]. \tag {33}
+$$
+
+We need to prove that
+
+$$
\eta \left[ q \left(y _ {1}\right) - q \left(y _ {2}\right) \right] \left[ \log p \left(y _ {1}\right) - \log c _ {0} - 1 + \mathbb {D} _ {\mathrm {K L}} (q \| p) \right] > - \eta \left[ q \left(y _ {1}\right) - q \left(y _ {2}\right) \right]. \tag {34}
+$$
+
+It suffices to choose a sufficiently large $p(y_1) \in (q(y_2), q(y_1))$ to ensure that
+
+$$
\log p \left(y _ {1}\right) - \log c _ {0} + \mathbb {D} _ {\mathrm {K L}} (q \| p) > 0. \tag {35}
+$$
+
+This condition can be satisfied.
+
Remark. These cases highlight the contrasting behaviors of FKLD and RKLD in certain scenarios. Specifically, Case 1 shows that RKLD is more aggressive in transferring probability mass from overestimated classes (i.e., $p(y) < q(y)$ ) to underestimated ones (i.e., $p(y) > q(y)$ ). Case 2 indicates that when two classes have the same predicted values $q(y)$ greater than the target values $p(y)$ , RKLD more aggressively penalizes hard classes with larger overestimations (i.e., larger error $|p(y) - q(y)|$ ). Case 3 indicates that when the predicted values $q(y)$ for two classes $y_1$ and $y_2$ are both below the target value $p(y)$ , RKLD preferentially increases the probability mass of the class with larger student confidence $q(y)$ , even though both classes share the same target value $p(y)$ . Case 4 suggests that when one class $y_1$ is overestimated and another class $y_2$ is underestimated, RKLD reduces the overestimation of high-probability classes more conservatively.
+
+In summary, combining Case 1 and Case 2, we conclude that RKLD reallocates the probability mass across classes more aggressively by penalizing errors in hard classes compared to FKLD, aiming to achieve better matching. This can help avoid local optima more quickly in some cases, ensuring superior performance and faster convergence throughout the training process, as shown in Fig. 2. From Case 3 and Case 4, it can be inferred that RKLD tends to concentrate the probability mass on a few classes with high prediction confidence. In contrast, FKLD reallocates the probability mass evenly across all classes since the weights for different classes are identical. This can lead to a sharper distribution when using RKLD, as empirically validated in Fig. 6(c) and (d) ( $\beta = 0$ for FKLD and $\beta = 1$ for RKLD).
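 
As a concrete (hypothetical) numerical instance of Case 1, take $\eta = 1$, $q_t(y_1) = q_t(y_2) = 0.3$, $p(y_1) = 0.6$, and $p(y_2) = 0.01$; Eqs. 24 and 25 then give the two log-mass-ratio gaps directly:

```python
import numpy as np

eta = 1.0
# Hypothetical Case-1 configuration: q_t(y1) = q_t(y2) <= p(y1), p(y2) small.
p1, p2, q0 = 0.6, 0.01, 0.3

delta_f = eta * (p1 - p2)                       # Eq. 24
delta_r = eta * q0 * (np.log(p1) - np.log(p2))  # Eq. 25

# RKLD shifts mass toward the underestimated class y1 more aggressively.
assert delta_r > delta_f
```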
+
+# G.3. Proof of Proposition F.2
+
+Lemma G.2. The gradient of the softmax function $q(i) = \frac{e^{f_i}}{\sum_k e^{f_k}}$ with respect to $f_j$ is given by:
+
+$$
\frac {\partial q (i)}{\partial f _ {j}} = \left\{ \begin{array}{l l} q (i) (1 - q (i)) & \text {if } i = j, \\ - q (i) q (j) & \text {if } i \neq j. \end{array} \right.
+$$
+
+Proof. To derive $\frac{\partial q(i)}{\partial f_j}$ , we consider two cases: $i = j$ and $i \neq j$ .
+
+1. Case $i = j$ : Using the quotient rule, we have:
+
+$$
+\begin{array}{l} \frac {\partial q (i)}{\partial f _ {i}} = \frac {e ^ {f _ {i}} \cdot \sum_ {k} e ^ {f _ {k}} - e ^ {f _ {i}} \cdot e ^ {f _ {i}}}{\left(\sum_ {k} e ^ {f _ {k}}\right) ^ {2}} \tag {36} \\ = \frac {e ^ {f _ {i}}}{\sum_ {k} e ^ {f _ {k}}} - \frac {e ^ {f _ {i}} \cdot e ^ {f _ {i}}}{\left(\sum_ {k} e ^ {f _ {k}}\right) ^ {2}}. \\ \end{array}
+$$
+
+Simplifying:
+
+$$
+\frac {\partial q (i)}{\partial f _ {i}} = q (i) (1 - q (i)). \tag {37}
+$$
+
+2. Case $i \neq j$ : We have
+
+$$
+\begin{array}{l} \frac {\partial q (i)}{\partial f _ {j}} = - \frac {e ^ {f _ {i}} e ^ {f _ {j}}}{\left(\sum_ {k} e ^ {f _ {k}}\right) ^ {2}} \tag {38} \\ = - q (i) q (j). \\ \end{array}
+$$
+
+In conclusion, the derivative is:
+
+$$
\frac {\partial q (i)}{\partial f _ {j}} = \left\{ \begin{array}{l l} q (i) (1 - q (i)) & \text {if } i = j, \\ - q (i) q (j) & \text {if } i \neq j. \end{array} \right. \tag {39}
+$$
+
+Restate of Proposition F.2. The updates induced by $\alpha$ -divergence for $q_{t}$ within one gradient descent step are given by:
+
+$$
\left| \log \frac {q _ {t + 1} ^ {\alpha} (y)}{q _ {t} (y)} \right| \leq \eta \underbrace{q_{t}(y)^{1 - \alpha}}_{(a)}\underbrace{\left|\frac{p(y)^{\alpha} - q_{t}(y)^{\alpha}}{\alpha}\right|}_{(b)} + q_{t}(y)\sum_{k}\underbrace{q_{t}(k)^{1 - \alpha}}_{(a)}\underbrace{\left|\frac{p(k)^{\alpha} - q_{t}(k)^{\alpha}}{\alpha}\right|}_{(b)} + \bigl|\mathsf{N}_{t}^{\alpha}(y)\bigr|,
+$$
+
+where $\mathsf{N}_t^\alpha (y)$ denotes constant normalization factors.
+
+Proof. The formula for the $\alpha$ -divergence is:
+
+$$
+\mathbb {D} _ {\alpha} (p \| q) \triangleq \frac {1}{\alpha (\alpha - 1)} \left[ \sum_ {k} p (k) ^ {\alpha} q (k) ^ {1 - \alpha} - 1 \right], \tag {40}
+$$
+
+Using the chain rule, we have:
+
+$$
+\frac {\partial}{\partial f _ {y}} \mathbb {D} _ {\alpha} (p \| q) = \sum_ {k} \frac {\partial \mathbb {D} _ {\alpha} (p \| q)}{\partial q (k)} \frac {\partial q (k)}{\partial f _ {y}}, \tag {41}
+$$
+
+where
+
+$$
+\frac {\partial \mathbb {D} _ {\alpha} (p \| q)}{\partial q (y)} = - \frac {1}{\alpha} \left(\frac {p (y)}{q (y)}\right) ^ {\alpha}, \tag {42}
+$$
+
+and
+
+$$
+\frac {\partial q (y)}{\partial f _ {y}} = \frac {\partial}{\partial f _ {y}} \frac {e ^ {f _ {y}}}{\sum_ {k} e ^ {f _ {k}}}. \tag {43}
+$$
+
Combining this with Lem. G.2, Eq. 41 can be expressed as
+
+$$
\begin{array}{l} \frac {\partial}{\partial f _ {y}} \mathbb {D} _ {\alpha} (p \| q) = \sum_ {k \neq y} \frac {\partial \mathbb {D} _ {\alpha} (p \| q)}{\partial q (k)} \frac {\partial q (k)}{\partial f _ {y}} + \frac {\partial \mathbb {D} _ {\alpha} (p \| q)}{\partial q (y)} \frac {\partial q (y)}{\partial f _ {y}} \\ = \sum_ {k \neq y} \left(- \frac {1}{\alpha} \left(\frac {p (k)}{q (k)}\right) ^ {\alpha}\right) \cdot (- q (k) q (y)) + \left(- \frac {1}{\alpha} \left(\frac {p (y)}{q (y)}\right) ^ {\alpha}\right) \cdot (q (y) (1 - q (y))) \tag {44} \\ = - \frac {1}{\alpha} \left[ q (y) ^ {1 - \alpha} \left(p (y) ^ {\alpha} - q (y) ^ {\alpha}\right) + q (y) \left(\sum_ {k} q (k) ^ {1 - \alpha} \left(q (k) ^ {\alpha} - p (k) ^ {\alpha}\right)\right) \right]. \\ \end{array}
+$$
+
Now, consider using the gradient descent method to update the loss function $\ell$ with respect to the logits $f_{y}^{t}$ , then the distribution at the next step $q_{t + 1}$ is given by:
+
+$$
\begin{array}{l} q _ {t + 1} (y) = \frac {\exp \left(f _ {y} ^ {t + 1}\right)}{\sum_ {k} \exp \left(f _ {k} ^ {t + 1}\right)} \\ = \frac {\exp \left(f _ {y} ^ {t} - \eta \nabla_ {f _ {y} ^ {t}} \ell\right)}{\sum_ {k} \exp \left(f _ {k} ^ {t} - \eta \nabla_ {f _ {k} ^ {t}} \ell\right)} \tag {45} \\ = q _ {t} (y) \cdot \frac {\exp (- \eta \nabla_ {f _ {y} ^ {t}} \ell)}{\sum_ {k} q _ {t} (k) \exp (- \eta \nabla_ {f _ {k} ^ {t}} \ell)}. \\ \end{array}
+$$
+
Now, substituting the gradient formula of the $\alpha$ -divergence, the characterization of $q_{t + 1}^{\alpha}(y)$ is obtained as:
+
+$$
+q _ {t + 1} ^ {\alpha} (y) = q _ {t} (y) \cdot \frac {\exp \left(\frac {\eta}{\alpha} [ q (y) ^ {1 - \alpha} (p (y) ^ {\alpha} - q (y) ^ {\alpha}) + q (y) (\sum_ {k} q (k) ^ {1 - \alpha} (q (k) ^ {\alpha} - p (k) ^ {\alpha})) ]\right)}{\sum_ {i} q _ {t} (i) \exp \left(\frac {\eta}{\alpha} [ q (i) ^ {1 - \alpha} (p (i) ^ {\alpha} - q (i) ^ {\alpha}) + q (i) (\sum_ {k} q (k) ^ {1 - \alpha} (q (k) ^ {\alpha} - p (k) ^ {\alpha})) ]\right)}. \tag {46}
+$$
+
+Observing that the denominator serves as a normalization constant, this can be rewritten as:
+
+$$
\frac {q _ {t + 1} ^ {\alpha} (y)}{q _ {t} (y)} \propto \exp \left(\frac {\eta}{\alpha} \left[ q (y) ^ {1 - \alpha} (p (y) ^ {\alpha} - q (y) ^ {\alpha}) + q (y) \left(\sum_ {k} q (k) ^ {1 - \alpha} (q (k) ^ {\alpha} - p (k) ^ {\alpha})\right) \right]\right). \tag {47}
+$$
+
+Taking the logarithm on both sides, we get:
+
+$$
+\log \frac {q _ {t + 1} ^ {\alpha} (y)}{q _ {t} (y)} = \eta \underbrace {\left[ q _ {t} (y) ^ {1 - \alpha} \left(\frac {p (y) ^ {\alpha} - q _ {t} (y) ^ {\alpha}}{\alpha}\right) + q _ {t} (y) \sum_ {k} q _ {t} (k) ^ {1 - \alpha} \left(\frac {q _ {t} (k) ^ {\alpha} - p (k) ^ {\alpha}}{\alpha}\right) \right]} _ {- \nabla_ {f _ {y}} \ell} + \mathsf {N} _ {t} ^ {\alpha} (y), \tag {48}
+$$
+
+where $\mathsf{N}_t^\alpha (y)$ denotes constant normalization factors. We can further derive that
+
+$$
+\left| \log \frac {q _ {t + 1} ^ {\alpha} (y)}{q _ {t} (y)} \right| \leq \eta \underbrace {q _ {t} (y) ^ {1 - \alpha}} _ {(a)} \underbrace {\left| \frac {p (y) ^ {\alpha} - q _ {t} (y) ^ {\alpha}}{\alpha} \right|} _ {(b)} + q _ {t} (y) \sum_ {k} \underbrace {q _ {t} (k) ^ {1 - \alpha}} _ {(a)} \underbrace {\left| \frac {p (k) ^ {\alpha} - q _ {t} (k) ^ {\alpha}}{\alpha} \right|} _ {(b)} + \left| \mathrm {N} _ {t} ^ {\alpha} (y) \right|. \tag {49}
+$$
+
+This completes the proof.
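
The chain of identities above can be replayed numerically. The sketch below is our own (names and tolerances are illustrative): it first checks the gradient formula of Eq. 44 by finite differences, then performs one explicit gradient step and verifies that pairwise differences of the log mass ratios equal $-\eta$ times the corresponding gradient differences, since the normalization term in Eq. 48 is constant across classes:

```python
import numpy as np

def softmax(f):
    e = np.exp(f - f.max())
    return e / e.sum()

def alpha_div(p, q, a):
    # Eq. 40
    return (np.sum(p**a * q**(1 - a)) - 1.0) / (a * (a - 1.0))

def alpha_grad_logits(p, q, a):
    # Eq. 44: gradient of the alpha-divergence wrt each logit f_y.
    s = np.sum(q**(1 - a) * (q**a - p**a))
    return -(q**(1 - a) * (p**a - q**a) + q * s) / a

eta, a = 0.1, 0.5
rng = np.random.default_rng(5)
p = rng.dirichlet(np.ones(5))
f = rng.normal(size=5)
q = softmax(f)
g = alpha_grad_logits(p, q, a)

# Check Eq. 44 against central finite differences over the logits.
eps = 1e-6
g_num = np.empty(5)
for j in range(5):
    fp, fm = f.copy(), f.copy()
    fp[j] += eps
    fm[j] -= eps
    g_num[j] = (alpha_div(p, softmax(fp), a)
                - alpha_div(p, softmax(fm), a)) / (2 * eps)
assert np.allclose(g, g_num, atol=1e-6)

# One gradient step; Eq. 48 holds up to the constant normalization term.
log_ratio = np.log(softmax(f - eta * g) / q)
assert np.isclose(log_ratio[0] - log_ratio[1], -eta * (g[0] - g[1]))
```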
+
+# G.4. Proof of Theorem F.3
+
Lemma G.3. Let $p$ and $q$ be normalized probability distributions over $C$ classes, i.e., $\sum_{k=1}^{C} p(k) = \sum_{k=1}^{C} q(k) = 1$ . Define the function $F(\alpha)$ for $\alpha \in [0,1]$ as:
+
+$$
+F (\alpha) \triangleq \frac {1}{\alpha} \left(1 - \sum_ {k} p (k) ^ {\alpha} q (k) ^ {1 - \alpha}\right).
+$$
+
+Then, $F(\alpha)$ decreases monotonically as $\alpha$ increases, and $F(\alpha) \geq 0$ on $[0,1]$ .
+
+Proof. Define:
+
+$$
+S (\alpha) \triangleq \sum_ {k} p (k) ^ {\alpha} q (k) ^ {1 - \alpha}. \tag {50}
+$$
+
+Thus, the function $F(\alpha)$ can be written as:
+
+$$
+F (\alpha) = \frac {1 - S (\alpha)}{\alpha}. \tag {51}
+$$
+
+First, compute the first derivative of $S(\alpha)$ :
+
+$$
+S ^ {\prime} (\alpha) = \sum_ {k} p (k) ^ {\alpha} q (k) ^ {1 - \alpha} (\ln p (k) - \ln q (k)). \tag {52}
+$$
+
+and the second derivative:
+
+$$
+\begin{array}{l} S ^ {\prime \prime} (\alpha) = \sum_ {k} p (k) ^ {\alpha} q (k) ^ {1 - \alpha} \left(\ln p (k) - \ln q (k)\right) ^ {2} \tag {53} \\ \geq 0. \\ \end{array}
+$$
+
+Since $S''(\alpha) \geq 0$ for all $\alpha$ , $S(\alpha)$ is a convex function of $\alpha$ .
+
+Now, compute the derivative of $F(\alpha)$ :
+
+$$
+\begin{array}{l} F ^ {\prime} (\alpha) = \frac {d}{d \alpha} \left(\frac {1 - S (\alpha)}{\alpha}\right) \\ = \frac {- S ^ {\prime} (\alpha) \cdot \alpha - (1 - S (\alpha))}{\alpha^ {2}} \tag {54} \\ = \frac {N (\alpha)}{\alpha^ {2}}, \\ \end{array}
+$$
+
+where
+
+$$
+N (\alpha) \triangleq - \alpha S ^ {\prime} (\alpha) - 1 + S (\alpha). \tag {55}
+$$
+
To show $F^{\prime}(\alpha)\leq 0$ , it suffices to prove $N(\alpha)\leq 0$ .
+
+Since $S(\alpha)$ is convex, $S'(\alpha)$ is non-decreasing. Additionally, considering the boundary conditions:
+
+$$
+N (0) = 0, \quad N (1) = - \mathbb {D} _ {\mathrm {K L}} (p \| q) \leq 0, \tag {56}
+$$
+
+and since
+
+$$
+\begin{array}{l} N ^ {\prime} (\alpha) = \frac {d}{d \alpha} (S (\alpha) - \alpha S ^ {\prime} (\alpha) - 1) \\ = - \alpha S ^ {\prime \prime} (\alpha) \tag {57} \\ \leq 0, \\ \end{array}
+$$
+
+$N(\alpha)$ is monotonically decreasing.
+
+Therefore, for all $\alpha \in [0,1]$ :
+
+$$
+N (\alpha) \leq 0. \tag {58}
+$$
+
+Thus,
+
+$$
+\begin{array}{l} F ^ {\prime} (\alpha) = \frac {N (\alpha)}{\alpha^ {2}} \tag {59} \\ \leq 0. \\ \end{array}
+$$
+
+Hence, $F(\alpha)$ is monotonically decreasing with respect to $\alpha$ on the interval [0, 1].
+
+Finally, note that $F(1) = 0$ , and therefore $F(\alpha) \geq 0$ . This completes the proof.
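
Lemma G.3 is easy to probe numerically; the sketch below (our own) evaluates $F(\alpha)$ on a grid and checks both monotonicity and nonnegativity:

```python
import numpy as np

def F(p, q, alpha):
    # Lemma G.3: F(alpha) = (1 - sum_k p(k)^alpha q(k)^(1-alpha)) / alpha.
    return (1.0 - np.sum(p**alpha * q**(1.0 - alpha))) / alpha

rng = np.random.default_rng(6)
p = rng.dirichlet(np.ones(7))
q = rng.dirichlet(np.ones(7))

alphas = np.linspace(0.01, 1.0, 100)
vals = np.array([F(p, q, a) for a in alphas])

assert np.all(np.diff(vals) < 0)  # monotonically decreasing in alpha
assert np.all(vals >= -1e-12)     # nonnegative on (0, 1], with F(1) = 0
```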
+
Lemma G.4. Consider the function $f(\alpha) \triangleq p(x)^{\alpha} - q(y_{i})^{\alpha}$ , where $p(x)$ and $q(y_{i})$ are constants with $p(x) > q(y_{i}) > 0$ . The derivative of the logarithm of $f(\alpha)$ with respect to $\alpha$ is:
+
+$$
+\frac {d}{d \alpha} \ln (p (x) ^ {\alpha} - q (y _ {i}) ^ {\alpha}) = \ln p (x) + \frac {- \left(\frac {q (y _ {i})}{p (x)}\right) ^ {\alpha} \ln \left(\frac {q (y _ {i})}{p (x)}\right)}{1 - \left(\frac {q (y _ {i})}{p (x)}\right) ^ {\alpha}}. \tag {60}
+$$
+
+Proof. Start with:
+
+$$
+\ln \left(p (x) ^ {\alpha} - q \left(y _ {i}\right) ^ {\alpha}\right) = \ln p (x) ^ {\alpha} + \ln \left(1 - \left(\frac {q \left(y _ {i}\right)}{p (x)}\right) ^ {\alpha}\right). \tag {61}
+$$
+
+Differentiating with respect to $\alpha$ , we get:
+
+$$
+\frac {d}{d \alpha} \ln (p (x) ^ {\alpha} - q (y _ {i}) ^ {\alpha}) = \frac {d}{d \alpha} \ln p (x) ^ {\alpha} + \frac {d}{d \alpha} \ln \left(1 - \left(\frac {q (y _ {i})}{p (x)}\right) ^ {\alpha}\right). \tag {62}
+$$
+
+The derivative of $\ln p(x)^{\alpha}$ is $\ln p(x)$ , and the derivative of the second term is:
+
+$$
+\frac {d}{d \alpha} \ln \left(1 - \left(\frac {q \left(y _ {i}\right)}{p (x)}\right) ^ {\alpha}\right) = \frac {- \left(\frac {q \left(y _ {i}\right)}{p (x)}\right) ^ {\alpha} \ln \left(\frac {q \left(y _ {i}\right)}{p (x)}\right)}{1 - \left(\frac {q \left(y _ {i}\right)}{p (x)}\right) ^ {\alpha}}. \tag {63}
+$$
+
+Combining these gives the desired result:
+
+$$
+\frac {d}{d \alpha} \ln (p (x) ^ {\alpha} - q (y _ {i}) ^ {\alpha}) = \ln p (x) + \frac {- \left(\frac {q (y _ {i})}{p (x)}\right) ^ {\alpha} \ln \left(\frac {q (y _ {i})}{p (x)}\right)}{1 - \left(\frac {q (y _ {i})}{p (x)}\right) ^ {\alpha}}. \tag {64}
+$$
+
+Lemma G.5. Let $1 \geq \alpha > 0$ and define
+
+$$
+h (s) \triangleq 1 - s ^ {2 \alpha} - 2 \alpha s ^ {\alpha} | \ln s | \tag {65}
+$$
+
+for $0 < s < 1$ . Then $h(s)$ is strictly decreasing on $(0,1)$ .
+
+Proof. Since $0 < s < 1$ , we have $\ln s < 0$ , thus $|\ln s| = -\ln s$ . Substitute this:
+
+$$
+h (s) = 1 - s ^ {2 \alpha} + 2 \alpha s ^ {\alpha} \ln s. \tag {66}
+$$
+
+Differentiating term-by-term,
+
+$$
+\begin{array}{l} h ^ {\prime} (s) = - 2 \alpha s ^ {2 \alpha - 1} + 2 \alpha (\alpha s ^ {\alpha - 1} \ln s + s ^ {\alpha - 1}) \tag {67} \\ = 2 \alpha s ^ {\alpha - 1} (\alpha \ln s + 1 - s ^ {\alpha}). \\ \end{array}
+$$
+
+Set
+
+$$
+q (s) = \alpha \ln s + 1 - s ^ {\alpha}. \tag {68}
+$$
+
+As $s \to 0^+$ , $\ln s \to -\infty$ and thus $q(s) \to -\infty$ . At $s = 1$ , $q(1) = \alpha \cdot 0 + 1 - 1 = 0$ . Moreover,
+
+$$
+\begin{array}{l} q ^ {\prime} (s) = \frac {\alpha (1 - s ^ {\alpha})}{s} \tag {69} \\ > 0. \\ \end{array}
+$$
+
This holds because $0 < s < 1$ implies $1 - s^{\alpha} > 0$ . Hence $q(s)$ is strictly increasing on $(0,1)$ with $q(1) = 0$ , and thus $q(s) < 0$ for all $0 < s < 1$ .
+
Since $2\alpha s^{\alpha -1} > 0$ and $q(s) < 0$ , we have $h^\prime (s) < 0$ for all $0 < s < 1$ . Therefore, $h(s)$ is strictly decreasing on $(0,1)$ .
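A direct numerical check of the strict monotonicity of $h$ (the value of $\alpha$ below is hypothetical):

```python
import math

alpha = 0.7  # any hypothetical value in (0, 1]

def h(s):
    return 1 - s**(2 * alpha) - 2 * alpha * s**alpha * abs(math.log(s))

grid = [0.01 * i for i in range(1, 100)]           # points in (0, 1)
vals = [h(s) for s in grid]
assert all(a > b for a, b in zip(vals, vals[1:]))  # strictly decreasing
print("h(0.01) = %.4f > h(0.99) = %.6f" % (vals[0], vals[-1]))
```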
+
+Lemma G.6. For $0 < s < 1$ and $1 \geq \alpha > 0$ , define
+
+$$
+\beta (s, \alpha) \triangleq \frac {2 (1 - s ^ {\alpha}) (1 + s ^ {\alpha}) | \ln s |}{(1 + s ^ {\alpha}) ^ {2} (\ln s) ^ {2}}. \tag {70}
+$$
+
+Then $\beta(s, \alpha)$ is strictly increasing in $s$ on $(0, 1)$ and increases from $0$ to $\alpha$ as $s$ goes from $0$ to $1$ .
+
+Proof. Since $0 < s < 1$ , we have $\ln s < 0$ and thus $|\ln s| = -\ln s$ . Substituting this into Eq. 70 and simplifying, we obtain:
+
+$$
+\beta (s, \alpha) = \frac {2 (1 - s ^ {\alpha})}{(1 + s ^ {\alpha}) | \ln s |}. \tag {71}
+$$
+
+To differentiate $\beta (s,\alpha)$ with respect to $s$ , let:
+
+$$
+f (s) \triangleq 1 - s ^ {\alpha}, \quad f ^ {\prime} (s) = - \alpha s ^ {\alpha - 1}, \tag {72}
+$$
+
+$$
+g (s) \triangleq 1 + s ^ {\alpha}, \quad g ^ {\prime} (s) = \alpha s ^ {\alpha - 1}, \tag {73}
+$$
+
+$$
+h (s) \triangleq | \ln s | = - \ln s, \quad h ^ {\prime} (s) = - \frac {1}{s}. \tag {74}
+$$
+
+Thus,
+
+$$
+\beta (s, \alpha) = \frac {2 f (s)}{g (s) h (s)}. \tag {75}
+$$
+
+Applying the quotient rule:
+
+$$
+\frac {d \beta}{d s} = 2 \frac {f ^ {\prime} (s) g (s) h (s) - f (s) g ^ {\prime} (s) h (s) - f (s) g (s) h ^ {\prime} (s)}{\left[ g (s) h (s) \right] ^ {2}}. \tag {76}
+$$
+
+Plugging Eq. 72, Eq. 73 and Eq. 74 into Eq. 76 and simplifying, we arrive at:
+
+$$
\frac {d \beta}{d s} = \frac {2 h (s)}{(1 + s ^ {\alpha}) ^ {2} (\ln s) ^ {2}} \quad \text {with} \quad h (s) = 1 - s ^ {2 \alpha} - 2 \alpha s ^ {\alpha} | \ln s |. \tag {77}
+$$
+
+From Lem. G.5, it follows that $h(s)$ is decreasing on $(0,1)$ . As $s \to 1^{-}$ , $h(s) \to 0$ . Therefore, $h(s) > 0$ on $(0,1)$ .
+
+Since $(1 + s^{\alpha})^{2}(\ln s)^{2} > 0$ , it follows from Eq. 77 that $\frac{d\beta}{ds} > 0$ . Therefore, $\beta(s, \alpha)$ is strictly increasing. Furthermore, taking limits:
+
+$$
+\lim _ {s \to 0 ^ {+}} \beta (s, \alpha) = 0, \quad \lim _ {s \to 1 ^ {-}} \beta (s, \alpha) = \alpha ,
+$$
+
+so $\beta (s,\alpha)$ increases from 0 to $\alpha$ as $s$ goes from 0 to 1.
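Both the monotonicity and the two limits can be verified numerically (the value of $\alpha$ below is hypothetical):

```python
import math

alpha = 0.5  # hypothetical value in (0, 1]

def beta(s):
    # Simplified form of Eq. 71.
    return 2 * (1 - s**alpha) / ((1 + s**alpha) * abs(math.log(s)))

grid = [0.001 * i for i in range(1, 1000)]
vals = [beta(s) for s in grid]
assert all(a < b for a, b in zip(vals, vals[1:]))  # strictly increasing in s
assert beta(1e-12) < 0.1                           # beta -> 0 as s -> 0+
assert abs(beta(1 - 1e-9) - alpha) < 1e-4          # beta -> alpha as s -> 1-
print("beta rises from %.4f to %.4f over the grid" % (vals[0], vals[-1]))
```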
+
+Lemma G.7. Let $f(s, \alpha) \triangleq \frac{s^{\alpha} (\ln s)^{2}}{(1 - s^{\alpha})^{2}}$ , where $0 < s < 1$ and $\alpha > 0$ . Then, $f(s, \alpha)$ is strictly increasing with respect to $s$ for all $0 < s < 1$ and $\alpha > 0$ , i.e., $\frac{\partial f}{\partial s} > 0$ .
+
+Proof. To compute $\frac{\partial f}{\partial s}$ , we use the quotient rule. Define:
+
+$$
+u \triangleq s ^ {\alpha} (\ln s) ^ {2}, \quad v \triangleq (1 - s ^ {\alpha}) ^ {2}, \quad f (s, \alpha) \triangleq \frac {u}{v}. \tag {78}
+$$
+
+The derivative is given by:
+
+$$
+\frac {\partial f}{\partial s} = \frac {u ^ {\prime} v - u v ^ {\prime}}{v ^ {2}}. \tag {79}
+$$
+
+First, compute $u'$ :
+
+$$
+u = s ^ {\alpha} (\ln s) ^ {2}, \quad u ^ {\prime} = \alpha s ^ {\alpha - 1} (\ln s) ^ {2} + 2 s ^ {\alpha - 1} \ln s = s ^ {\alpha - 1} [ \alpha (\ln s) ^ {2} + 2 \ln s ]. \tag {80}
+$$
+
+Next, compute $v'$ :
+
+$$
+v = (1 - s ^ {\alpha}) ^ {2}, \quad v ^ {\prime} = 2 (1 - s ^ {\alpha}) \cdot (- \alpha s ^ {\alpha - 1}) = - 2 \alpha s ^ {\alpha - 1} (1 - s ^ {\alpha}). \tag {81}
+$$
+
+Substituting $u'$ and $v'$ into the quotient rule:
+
+$$
+\frac {\partial f}{\partial s} = \frac {s ^ {\alpha - 1} \left[ \alpha (\ln s) ^ {2} + 2 \ln s \right] (1 - s ^ {\alpha}) ^ {2} + 2 \alpha s ^ {2 \alpha - 1} (\ln s) ^ {2} (1 - s ^ {\alpha})}{(1 - s ^ {\alpha}) ^ {4}}. \tag {82}
+$$
+
Canceling one factor of $(1 - s^{\alpha})$ , the numerator of Eq. 82 simplifies to:

$$
\text {Numerator} = s ^ {\alpha - 1} \left(1 - s ^ {\alpha}\right) \left[ \alpha (\ln s) ^ {2} + 2 \ln s \right] + 2 \alpha s ^ {2 \alpha - 1} (\ln s) ^ {2}. \tag {83}
$$
+
+Factorize:
+
+$$
\text {Numerator} = s ^ {\alpha - 1} \left[ \alpha \left(1 + s ^ {\alpha}\right) \left(\ln s\right) ^ {2} + 2 \left(1 - s ^ {\alpha}\right) \ln s \right]. \tag {84}
+$$
+
+Thus:
+
+$$
+\frac {\partial f}{\partial s} = \frac {s ^ {\alpha - 1} \left[ \alpha \left(1 + s ^ {\alpha}\right) (\ln s) ^ {2} + 2 \left(1 - s ^ {\alpha}\right) \ln s \right]}{\left(1 - s ^ {\alpha}\right) ^ {3}}. \tag {85}
+$$
+
+To determine the sign of $\frac{\partial f}{\partial s}$ , note:
+
+- $s^{\alpha - 1} > 0$ since $0 < s < 1$ and $\alpha > 0$ ,
+- $(1 - s^{\alpha})^{3} > 0$ since $0 < s < 1$ and $\alpha > 0$ ,
+- $\ln s < 0$ for $0 < s < 1$ , hence $(\ln s)^2 > 0$ .
+
+Denote the remaining expression inside the brackets as:
+
+$$
+N \triangleq \alpha (1 + s ^ {\alpha}) (\ln s) ^ {2} + 2 (1 - s ^ {\alpha}) \ln s. \tag {86}
+$$
+
+Rewrite $N$ as:
+
+$$
N = \left| \ln s \right| \left[ \alpha \left(1 + s ^ {\alpha}\right) \left| \ln s \right| - 2 \left(1 - s ^ {\alpha}\right) \right]. \tag {87}
+$$
+
Since $|\ln s| > 0$ , we have $N > 0$ if and only if $\alpha (1 + s^{\alpha})|\ln s| - 2(1 - s^{\alpha}) > 0$ , i.e.,
+
+$$
+\alpha > \frac {2 (1 - s ^ {\alpha})}{(1 + s ^ {\alpha}) | \ln s |}. \tag {88}
+$$
+
+From Lem. G.6, it follows that for all $0 < s < 1$ and $\alpha > 0$ , $\alpha > \beta(s, \alpha)$ always holds, implying:
+
+$$
+N > 0. \tag {89}
+$$
+
+Hence:
+
+$$
+\frac {\partial f}{\partial s} > 0, \tag {90}
+$$
+
+which shows that $f(s, \alpha)$ is strictly increasing with respect to $s$ .
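A grid check of the claimed monotonicity (the values of $\alpha$ below are illustrative):

```python
import math

def f(s, alpha):
    return s**alpha * math.log(s) ** 2 / (1 - s**alpha) ** 2

for alpha in (0.3, 0.7, 1.0):
    grid = [0.01 * i for i in range(1, 100)]
    vals = [f(s, alpha) for s in grid]
    assert all(a < b for a, b in zip(vals, vals[1:]))  # strictly increasing in s
print("f(s, alpha) is increasing in s for every alpha tested")
```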
+
Lemma G.8. Consider $p(x), q(y_1), q(y_2) \in [0, 1]$ such that $p(x) \geq q(y_1) \geq q(y_2)$ , and $\alpha > 0$ . Define the function:
+
+$$
+F (\alpha) \triangleq \frac {q \left(y _ {1}\right) ^ {1 - \alpha}}{q \left(y _ {2}\right) ^ {1 - \alpha}} \cdot \frac {p (x) ^ {\alpha} - q \left(y _ {1}\right) ^ {\alpha}}{p (x) ^ {\alpha} - q \left(y _ {2}\right) ^ {\alpha}}, \tag {91}
+$$
+
Then $F(\alpha)$ decreases as $\alpha$ increases.
+
+Proof. Let $h(\alpha) \triangleq \ln F(\alpha)$ , we get
+
+$$
+h (\alpha) = (1 - \alpha) \ln \frac {q \left(y _ {1}\right)}{q \left(y _ {2}\right)} + \ln \left(p (x) ^ {\alpha} - q \left(y _ {1}\right) ^ {\alpha}\right) - \ln \left(p (x) ^ {\alpha} - q \left(y _ {2}\right) ^ {\alpha}\right). \tag {92}
+$$
+
+Differentiating $h(\alpha)$ with respect to $\alpha$ , we get
+
+$$
+h ^ {\prime} (\alpha) = - \ln \frac {q \left(y _ {1}\right)}{q \left(y _ {2}\right)} + \frac {d \ln (p (x) ^ {\alpha} - q \left(y _ {1}\right) ^ {\alpha})}{d \alpha} - \frac {d \ln (p (x) ^ {\alpha} - q \left(y _ {2}\right) ^ {\alpha})}{d \alpha}. \tag {93}
+$$
+
+Combining Lem. G.4, we obtain the following expression for the derivative of $h(\alpha)$ :
+
+$$
+h ^ {\prime} (\alpha) = - \ln \frac {q (y _ {1})}{q (y _ {2})} - \frac {\left(\frac {q (y _ {1})}{p (x)}\right) ^ {\alpha} \ln \left(\frac {q (y _ {1})}{p (x)}\right)}{1 - \left(\frac {q (y _ {1})}{p (x)}\right) ^ {\alpha}} + \frac {\left(\frac {q (y _ {2})}{p (x)}\right) ^ {\alpha} \ln \left(\frac {q (y _ {2})}{p (x)}\right)}{1 - \left(\frac {q (y _ {2})}{p (x)}\right) ^ {\alpha}}. \tag {94}
+$$
+
For convenience, define the variables $s_i = \frac{q(y_i)}{p(x)}$ for $i = 1,2$ , where $1 \geq s_1 \geq s_2$ . Using this substitution, we obtain:
+
+$$
+h ^ {\prime} (\alpha) = - \ln \frac {s _ {1}}{s _ {2}} - \underbrace {\frac {s _ {1} ^ {\alpha} \ln s _ {1}}{1 - s _ {1} ^ {\alpha}}} _ {(a)} + \underbrace {\frac {s _ {2} ^ {\alpha} \ln s _ {2}}{1 - s _ {2} ^ {\alpha}}} _ {(b)}. \tag {95}
+$$
+
To analyze the sign of $h^\prime (\alpha)$ , consider the second derivative with respect to $\alpha$ :
+
+$$
+h ^ {\prime \prime} (\alpha) = \frac {d}{d \alpha} \left(- \frac {s _ {1} ^ {\alpha} \ln s _ {1}}{1 - s _ {1} ^ {\alpha}} + \frac {s _ {2} ^ {\alpha} \ln s _ {2}}{1 - s _ {2} ^ {\alpha}}\right). \tag {96}
+$$
+
+Using the quotient rule, differentiate each term separately:
+
+$$
+\frac {d}{d \alpha} \left(\frac {s ^ {\alpha} \ln s}{1 - s ^ {\alpha}}\right) = \frac {s ^ {\alpha} (\ln s) ^ {2}}{(1 - s ^ {\alpha}) ^ {2}}. \tag {97}
+$$
+
+Therefore:
+
+$$
+h ^ {\prime \prime} (\alpha) = - \frac {s _ {1} ^ {\alpha} (\ln s _ {1}) ^ {2}}{(1 - s _ {1} ^ {\alpha}) ^ {2}} + \frac {s _ {2} ^ {\alpha} (\ln s _ {2}) ^ {2}}{(1 - s _ {2} ^ {\alpha}) ^ {2}}. \tag {98}
+$$
+
From Lem. G.7 and $s_1 \geq s_2$ , we have
+
+$$
+h ^ {\prime \prime} (\alpha) \leq 0.
+$$
+
Thus, we have shown that $h'(\alpha)$ is monotonically decreasing. Considering the limit of $h'(\alpha)$ as $\alpha \to 0^+$ , a second-order expansion of $1 - s_i^{\alpha}$ in $\alpha$ gives:

$$
\lim_{\alpha \to 0^{+}}h^{\prime}(\alpha) = - \frac {1}{2} \ln \frac {s _ {1}}{s _ {2}} \leq 0.
$$

Since $h'(\alpha)$ is monotonically decreasing from this nonpositive limit, $h'(\alpha) < 0$ always holds, and $F(\alpha)$ decreases as $\alpha$ increases.
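Lemma G.8 can be confirmed numerically; the constants below are hypothetical and only need to satisfy $p(x) > q(y_1) > q(y_2) > 0$ :

```python
# Hypothetical values with p(x) > q(y1) > q(y2) > 0.
px, q1, q2 = 0.8, 0.5, 0.2

def F(a):
    return (q1 / q2) ** (1 - a) * (px**a - q1**a) / (px**a - q2**a)

alphas = [0.02 * i for i in range(1, 50)]
vals = [F(a) for a in alphas]
assert all(a > b for a, b in zip(vals, vals[1:]))  # strictly decreasing in alpha
print("F(%.2f) = %.4f > F(%.2f) = %.4f" % (alphas[0], vals[0], alphas[-1], vals[-1]))
```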
+
+Lemma G.9. Define
+
+$$
+\begin{array}{l} f (p (y _ {1})) \triangleq - (\log p (y _ {1}) - \log q (y _ {2})) ^ {2} q (y _ {2}) ^ {1 - \alpha} \\ + \left(\log p (y _ {1}) - \log q (y _ {1})\right) ^ {2} q (y _ {1}) ^ {1 - \alpha}, \\ \end{array}
+$$
+
+where $0 < q(y_{2}) < p(y_{1}) < q(y_{1})$ . Then:
+
+1. The function $f(p(y_1))$ is monotonically decreasing with respect to $p(y_1)$ .
+2. There exists a unique constant $c_0 \in (q(y_2), q(y_1))$ such that $f(c_0) = 0$ . Moreover, for all $p(y_1) > c_0$ , it holds that $f(p(y_1)) < 0$ .
+
+Proof. The derivative of $f$ with respect to $p(y_{1})$ is:
+
+$$
+f ^ {\prime} \left(p \left(y _ {1}\right)\right) = \frac {2}{p \left(y _ {1}\right)} \left[ \left(\log p \left(y _ {1}\right) - \log q \left(y _ {1}\right)\right) q \left(y _ {1}\right) ^ {1 - \alpha} - \left(\log p \left(y _ {1}\right) - \log q \left(y _ {2}\right)\right) q \left(y _ {2}\right) ^ {1 - \alpha} \right]. \tag {99}
+$$
+
Since $0 < q(y_{2}) < p(y_{1}) < q(y_{1})$ , we have $\log p(y_1) - \log q(y_1) < 0$ and $\log p(y_1) - \log q(y_2) > 0$ , so $f'(p(y_{1})) \leq 0$ . When $p(y_{1}) = q(y_{2})$ :
+
+$$
+f \left(q _ {2}\right) = \left(\log q _ {2} - \log q _ {1}\right) ^ {2} q _ {1} ^ {1 - \alpha} > 0. \tag {100}
+$$
+
+When $p(y_1) = q(y_1)$ :
+
+$$
+f \left(q _ {1}\right) = - \left(\log q _ {1} - \log q _ {2}\right) ^ {2} q _ {2} ^ {1 - \alpha} < 0. \tag {101}
+$$
+
+According to the intermediate value theorem, since $f(p)$ is continuous on $p \in (q_2, q_1)$ and decreases from a positive value to a negative value, there exists a unique $c_0 \in (q_2, q_1)$ such that $f(c_0) = 0$ . Therefore, when $p > c_0$ , $f(p) < 0$ .
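The sign change and the unique root $c_0$ can be located numerically by bisection (hypothetical $q(y_1)$ , $q(y_2)$ , and $\alpha$ ):

```python
import math

# Hypothetical values with 0 < q2 < q1 and alpha in (0, 1].
q1, q2, alpha = 0.6, 0.1, 0.5

def f(p):
    return (-(math.log(p) - math.log(q2)) ** 2 * q2 ** (1 - alpha)
            + (math.log(p) - math.log(q1)) ** 2 * q1 ** (1 - alpha))

assert f(q2) > 0 and f(q1) < 0   # sign change across (q2, q1)

lo, hi = q2, q1                  # bisection for the unique root c0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
c0 = 0.5 * (lo + hi)
assert q2 < c0 < q1
print("c0 is approximately %.4f" % c0)
```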
+
+Restate of Theorem F.3. Let $q_{t + 1}^{\alpha}(y)$ be the distribution obtained after one gradient step, starting from $q_{t}$ using the $\alpha$ -divergence. Define $\Delta_t^\alpha$ as the difference of log mass ratios across two classes $y_{1}$ and $y_{2}$ , obtained from the $\alpha$ -divergence:
+
+$$
\Delta_ {t} ^ {\alpha} \left(y _ {1}, y _ {2}\right) \triangleq \operatorname {LogR} _ {t} ^ {\alpha} \left(y _ {1}\right) - \operatorname {LogR} _ {t} ^ {\alpha} \left(y _ {2}\right).
+$$
+
+We observe the following linear trend (for appropriate positive constants $\zeta$ , $\delta_1$ , $\delta_2$ , and any real numbers $\alpha_1$ and $\alpha_2$ in the range $[0, 1]$ satisfying $\alpha_1 < \alpha_2$ ):
+
+1. The $\alpha$ -divergence transfers the probability mass of overestimated classes to underestimated ones more aggressively as $\alpha$ decreases. If $y_{1}$ and $y_{2}$ are such that $p(y_{2}) < \delta_{1} < q_{t}(y_{1}) = q_{t}(y_{2}) \leq p(y_{1})$ (where $\delta_{1} > 0$ ), and $p(y_{1}) \geq p(y_{2}) + \zeta$ , it holds that $\Delta_t^{\alpha_1}(y_1, y_2) \geq \Delta_t^{\alpha_2}(y_1, y_2)$ .
+2. $\alpha$ -divergence reduces the probability mass of classes with larger error $|p(y) - q_t(y)|$ more aggressively as $\alpha$ decreases. If $y_1$ and $y_2$ are such that $p(y_1) < q_t(y_1) = q_t(y_2) \leq 1 - \delta_2$ (where $\delta_2 > 0$ ), and $p(y_1) \geq p(y_2) + \zeta$ , it holds that $\Delta_t^{\alpha_1}(y_1, y_2) \geq \Delta_t^{\alpha_2}(y_1, y_2)$ .
+3. $\alpha$ -divergence increases the probability mass more preferentially on underestimated classes with larger probabilities $q_{t}(y)$ as $\alpha$ decreases. If $y_{1}$ and $y_{2}$ are such that $q_{t}(y_{2}) + \zeta \leq q_{t}(y_{1}) \leq 1 - \delta_{2}$ , and $p(y_{1}) = p(y_{2}) > c_{0} \cdot q_{t}(y_{1})$ , where $c_{0}$ is a positive constant $> 1$ , it holds that $\Delta_{t}^{\alpha_{1}}(y_{1}, y_{2}) \geq \Delta_{t}^{\alpha_{2}}(y_{1}, y_{2})$ .
+4. $\alpha$ -divergence reduces the probability mass on overestimated classes with larger probabilities $q_{t}(y)$ more conservatively as $\alpha$ decreases. If $y_{1}$ and $y_{2}$ are such that $q_{t}(y_{2}) + \zeta \leq q_{t}(y_{1}) \leq 1 - \delta_{2}$ , and $c_{0} \cdot q_{t}(y_{2}) < p(y_{1}) = p(y_{2}) < c_{1} \cdot q_{t}(y_{1})$ , where $c_{0}$ and $c_{1}$ are constants with $c_{0} > 1$ and $c_{1} < 1$ , it holds that $\Delta_{t}^{\alpha_{1}}(y_{1}, y_{2}) \geq \Delta_{t}^{\alpha_{2}}(y_{1}, y_{2})$ .
+
Proof. First, we prove Case 1. Note that $q_{t}(y_{1}) = q_{t}(y_{2}) = q(x)$ . Based on Eq. 48, we have
+
+$$
+\Delta_ {t} ^ {\alpha} = \eta q (x) ^ {1 - \alpha} \cdot \frac {p \left(y _ {1}\right) ^ {\alpha} - p \left(y _ {2}\right) ^ {\alpha}}{\alpha}. \tag {102}
+$$
+
Let $f(\alpha) = q(x)^{1 - \alpha} \cdot \frac{p(y_1)^\alpha - p(y_2)^\alpha}{\alpha}$ , which can be rewritten as
+
+$$
+f (\alpha) \triangleq q (x) ^ {1 - \alpha} \int_ {p \left(y _ {2}\right)} ^ {p \left(y _ {1}\right)} y ^ {\alpha - 1} d y. \tag {103}
+$$
+
+Let $t = \frac{y}{q(x)}$ , substituting gives
+
+$$
\begin{array}{l} f (\alpha) = q (x) ^ {1 - \alpha} \int_ {\frac {p (y _ {2})}{q (x)}} ^ {\frac {p (y _ {1})}{q (x)}} t ^ {\alpha - 1} \cdot q (x) ^ {\alpha - 1} \cdot q (x) d t \tag {104} \\ = q (x) \int_ {\frac {p (y _ {2})}{q (x)}} ^ {\frac {p (y _ {1})}{q (x)}} t ^ {\alpha - 1} d t. \\ \end{array}
+$$
+
+Using Leibniz's rule, we get
+
+$$
+f ^ {\prime} (\alpha) = q (x) \int_ {\frac {p (y _ {2})}{q (x)}} ^ {\frac {p (y _ {1})}{q (x)}} t ^ {\alpha - 1} \ln t d t. \tag {105}
+$$
+
+Note that the sign of $f'(\alpha)$ depends only on $\int_{\frac{p(y_2)}{q(x)}}^{\frac{p(y_1)}{q(x)}} t^{\alpha - 1} \ln t \, dt$ , so we define
+
+$$
+h \triangleq \int_ {\frac {p (y _ {2})}{q (x)}} ^ {\frac {p (y _ {1})}{q (x)}} t ^ {\alpha - 1} \ln t d t. \tag {106}
+$$
+
+Clearly, the sign of $h$ depends on the relative size of $q(x)$ with respect to $p(y_1)$ and $p(y_2)$ . To differentiate $h$ with respect to $q(x)$ , using Leibniz's Rule, we get
+
+$$
+\begin{array}{l} h ^ {\prime} (q (x)) = \frac {d}{d q (x)} \int_ {\frac {p (y _ {2})}{q (x)}} ^ {\frac {p (y _ {1})}{q (x)}} t ^ {\alpha - 1} \ln t d t \tag {107} \\ = \left[ t ^ {\alpha - 1} \ln t \right] _ {t = \frac {p (y _ {1})}{q (x)}} \cdot \left(- \frac {p (y _ {1})}{q (x) ^ {2}}\right) - \left[ t ^ {\alpha - 1} \ln t \right] _ {t = \frac {p (y _ {2})}{q (x)}} \cdot \left(- \frac {p (y _ {2})}{q (x) ^ {2}}\right). \\ \end{array}
+$$
+
+Simplifying, we obtain:
+
+$$
+\begin{array}{l} h ^ {\prime} (q (x)) = \frac {p \left(y _ {2}\right) ^ {\alpha} \ln \left(\frac {p \left(y _ {2}\right)}{q (x)}\right) - p \left(y _ {1}\right) ^ {\alpha} \ln \left(\frac {p \left(y _ {1}\right)}{q (x)}\right)}{q (x) ^ {\alpha + 1}} \tag {108} \\ = \frac {p (y _ {2}) ^ {\alpha} \ln p (y _ {2}) - p (y _ {1}) ^ {\alpha} \ln p (y _ {1}) + (p (y _ {1}) ^ {\alpha} - p (y _ {2}) ^ {\alpha}) \ln q (x)}{q (x) ^ {\alpha + 1}}. \\ \end{array}
+$$
+
Note that the sign of $h'(q(x))$ depends only on the sign of the numerator, and the numerator is a monotonically increasing function of $q(x)$ (since $p(y_1) \geq p(y_2)$ ). Since $q(x) \leq p(y_1)$ , we have
+
+$$
+\begin{array}{l} h ^ {\prime} (q (x)) \leq h ^ {\prime} (p (y _ {1})) \\ = \frac {p \left(y _ {2}\right) ^ {\alpha} \ln \left(\frac {p \left(y _ {2}\right)}{p \left(y _ {1}\right)}\right)}{p \left(y _ {1}\right) ^ {\alpha + 1}} \tag {109} \\ \leq 0. \\ \end{array}
+$$
+
+Thus, we have proven that $h$ is monotonically decreasing as $q(x)$ increases. Also, since
+
+$$
+\begin{array}{l} h \left(p \left(y _ {2}\right)\right) = \int_ {1} ^ {\frac {p \left(y _ {1}\right)}{p \left(y _ {2}\right)}} t ^ {\alpha - 1} \ln t d t \tag {110} \\ \geq 0, \\ \end{array}
+$$
+
+and
+
+$$
\begin{array}{l} h \left(p \left(y _ {1}\right)\right) = \int_ {\frac {p \left(y _ {2}\right)}{p \left(y _ {1}\right)}} ^ {1} t ^ {\alpha - 1} \ln t d t \tag {111} \\ \leq 0. \\ \end{array}
+$$
+
By the intermediate value theorem, there exists $c_{\alpha} \in [p(y_2), p(y_1)]$ such that whenever $q(x) > c_{\alpha}$ , we have

$$
h (q (x)) \leq 0. \tag {112}
$$

Combining this with Eq. 105, we conclude that $f'(\alpha) \leq 0$ . Furthermore, let $\delta_1 \triangleq \max_{\alpha \in [0,1]} c_\alpha$ . Then, when $p(y_1) \geq q(x) > \delta_1$ , we have $f'(\alpha) \leq 0$ for all $\alpha \in [0,1]$ . Thus, we have proven that for any $\alpha_1$ and $\alpha_2 \in [0,1]$ such that $\alpha_1 < \alpha_2$ , we have
+
+$$
+\Delta_ {t} ^ {\alpha_ {1}} \left(y _ {1}, y _ {2}\right) \geq \Delta_ {t} ^ {\alpha_ {2}} \left(y _ {1}, y _ {2}\right).
+$$
+
+Next, we prove Case 2. Similarly, since $q_{t}(y_{1}) = q_{t}(y_{2}) = q(x)$ , we can deduce that
+
+$$
+\Delta_ {t} ^ {\alpha} = \eta q (x) ^ {1 - \alpha} \cdot \frac {p \left(y _ {1}\right) ^ {\alpha} - p \left(y _ {2}\right) ^ {\alpha}}{\alpha}. \tag {113}
+$$
+
+Let $f(\alpha) \triangleq q(x)^{1 - \alpha} \cdot \frac{p(y_1)^\alpha - p(y_2)^\alpha}{\alpha}$ , we get
+
+$$
+f (\alpha) = q (x) \int_ {\frac {p (y _ {2})}{q (x)}} ^ {\frac {p (y _ {1})}{q (x)}} t ^ {\alpha - 1} d t. \tag {114}
+$$
+
+Using Leibniz's rule, we get
+
+$$
+f ^ {\prime} (\alpha) = q (x) \int_ {\frac {p (y _ {2})}{q (x)}} ^ {\frac {p (y _ {1})}{q (x)}} t ^ {\alpha - 1} \ln t d t. \tag {115}
+$$
+
+Note that $q(x) > p(y_{1})$ , so
+
+$$
+f ^ {\prime} (\alpha) \leq 0. \tag {116}
+$$
+
+Thus, this proves that for any $\alpha_{1}$ and $\alpha_{2}$ such that $\alpha_{1} < \alpha_{2}$ , we have
+
+$$
+\Delta_ {t} ^ {\alpha_ {1}} \left(y _ {1}, y _ {2}\right) \geq \Delta_ {t} ^ {\alpha_ {2}} \left(y _ {1}, y _ {2}\right).
+$$
+
+We now proceed to prove Case 3. Note that $p(y_1) = p(y_2) \geq q(y_1) \geq q(y_2) + \zeta$ , thus we can deduce
+
+$$
+\begin{array}{l} \Delta^ {\alpha} (y _ {1}, y _ {2}) = \underbrace {\eta q (y _ {1}) ^ {1 - \alpha} \cdot \frac {p (y _ {1}) ^ {\alpha} - q (y _ {1}) ^ {\alpha}}{\alpha}} _ {(a)} - \underbrace {\eta q (y _ {2}) ^ {1 - \alpha} \cdot \frac {p (y _ {1}) ^ {\alpha} - q (y _ {2}) ^ {\alpha}}{\alpha}} _ {(b)} \\ + \underbrace {\frac {\eta (q \left(y _ {1}\right) - q \left(y _ {2}\right))}{\alpha} \cdot \left[ \sum_ {k} q (k) ^ {1 - \alpha} (q (k) ^ {\alpha} - p (k) ^ {\alpha}) \right]} _ {(c)}. \tag {117} \\ \end{array}
+$$
+
+First, through Lem. G.3, we know that $(c) \geq 0$ , and it increases as $\alpha$ decreases on $[0,1]$ . Then, we consider the term $(a) - (b)$ :
+
+$$
+\begin{array}{l} \Delta^ {\alpha} \left(y _ {1}, y _ {2}\right) \geq (a) - (b) \\ = \eta q (y _ {1}) ^ {1 - \alpha} \cdot \frac {p (y _ {1}) ^ {\alpha} - q (y _ {1}) ^ {\alpha}}{\alpha} - \eta q (y _ {2}) ^ {1 - \alpha} \cdot \frac {p (y _ {2}) ^ {\alpha} - q (y _ {2}) ^ {\alpha}}{\alpha} \\ = \eta \underbrace {q (y _ {2}) ^ {1 - \alpha} \cdot \frac {p (y _ {2}) ^ {\alpha} - q (y _ {2}) ^ {\alpha}}{\alpha}} _ {(1)} \left[ \underbrace {\frac {q (y _ {1}) ^ {1 - \alpha}}{q (y _ {2}) ^ {1 - \alpha}} \cdot \frac {p (y _ {1}) ^ {\alpha} - q (y _ {1}) ^ {\alpha}}{p (y _ {2}) ^ {\alpha} - q (y _ {2}) ^ {\alpha}}} _ {(2)} - 1 \right]. \tag {118} \\ \end{array}
+$$
+
+From Lem. G.8, it can be seen that term (2) increases as $\alpha$ decreases. Term (1) can be expressed as:
+
+$$
+\begin{array}{l} q \left(y _ {2}\right) ^ {1 - \alpha} \cdot \frac {p \left(y _ {2}\right) ^ {\alpha} - q \left(y _ {2}\right) ^ {\alpha}}{\alpha} = q \left(y _ {2}\right) ^ {1 - \alpha} \cdot \int_ {q \left(y _ {2}\right)} ^ {p \left(y _ {2}\right)} t ^ {\alpha - 1} d t \\ = q \left(y _ {2}\right) \cdot \int_ {1} ^ {\frac {p \left(y _ {2}\right)}{q \left(y _ {2}\right)}} t ^ {\alpha - 1} d t. \tag {119} \\ \end{array}
+$$
+
+Noting that
+
+$$
+\begin{array}{l} \frac {d}{d \alpha} \int_ {1} ^ {\frac {p (y _ {2})}{q (y _ {2})}} t ^ {\alpha - 1} d t = \int_ {1} ^ {\frac {p (y _ {2})}{q (y _ {2})}} t ^ {\alpha - 1} \ln t d t \tag {120} \\ \geq 0, \\ \end{array}
+$$
+
it follows that term (1) decreases as $\alpha$ decreases. Therefore, if term $(2) \leq 1$ , the value of $(a) - (b)$ increases as $\alpha$ decreases, although it remains negative overall; if term $(2) \geq 1$ , we have $(a) - (b) \geq 0$ , even though $\Delta^{\alpha = 1}(y_1, y_2) < 0$ .
+
+Finally, we prove Case 4. Since $p(y_1) = p(y_2)$ and $q(y_2) \leq q(y_1)$ , we have
+
+$$
+\begin{array}{l} \Delta^ {\alpha} \left(y _ {1}, y _ {2}\right) = \eta q \left(y _ {1}\right) ^ {1 - \alpha} \cdot \frac {p \left(y _ {1}\right) ^ {\alpha} - q \left(y _ {1}\right) ^ {\alpha}}{\alpha} - \eta q \left(y _ {2}\right) ^ {1 - \alpha} \cdot \frac {p \left(y _ {1}\right) ^ {\alpha} - q \left(y _ {2}\right) ^ {\alpha}}{\alpha} \\ + \frac {\eta (q (y _ {1}) - q (y _ {2}))}{\alpha} \cdot \left[ \sum_ {k} q (k) ^ {1 - \alpha} (q (k) ^ {\alpha} - p (k) ^ {\alpha}) \right]. \tag {121} \\ \end{array}
+$$
+
+Considering $f(\alpha) \triangleq \Delta^{\alpha}(y_1, y_2) / \eta$ , taking the derivative with respect to $\alpha$ yields:
+
+$$
\begin{array}{l} f ^ {\prime} (\alpha) = \frac {1}{\alpha^ {2}} \left[ (- 1 + \alpha \log p (y _ {1}) - \alpha \log q (y _ {1})) p (y _ {1}) ^ {\alpha} q (y _ {1}) ^ {1 - \alpha} \right. \\ + (1 - \alpha \log p (y _ {1}) + \alpha \log q (y _ {2})) p (y _ {1}) ^ {\alpha} q (y _ {2}) ^ {1 - \alpha} \\ - \left(q \left(y _ {1}\right) - q \left(y _ {2}\right)\right) \left(- 1 + \sum_ {k} q (k) ^ {1 - \alpha} \left(- p (k) ^ {\alpha} + q (k) ^ {\alpha}\right) \right. \tag {122} \\ - \alpha \sum_ {k} \left(- \log q (k) q (k) ^ {1 - \alpha} \left(- p (k) ^ {\alpha} + q (k) ^ {\alpha}\right) \right. \\ \left. \left. + q (k) ^ {1 - \alpha} \left(- \log p (k) p (k) ^ {\alpha} + \log q (k) q (k) ^ {\alpha}\right)\right)\right) \Bigg ]. \\ \end{array}
+$$
+
+Noting that the sign of $f^{\prime}(\alpha)$ depends solely on the numerator, let $h(\alpha)$ denote its numerator. Differentiating $h(\alpha)$ with respect to $\alpha$ , we obtain:
+
+$$
+\begin{array}{l} h ^ {\prime} (\alpha) = \alpha p (y _ {1}) ^ {\alpha} q (y _ {1}) ^ {- \alpha} q (y _ {2}) ^ {- \alpha} \left[ - \left(\log p (y _ {1}) - \log q (y _ {2})\right) ^ {2} q (y _ {1}) ^ {\alpha} q (y _ {2}) \right. \\ \left. + \left(\log p (y _ {1}) - \log q (y _ {1})\right) ^ {2} q (y _ {1}) q (y _ {2}) ^ {\alpha} \right] \\ - \alpha \underbrace {\left(q \left(y _ {1}\right) - q \left(y _ {2}\right)\right) \sum_ {k} q (k) ^ {1 - \alpha} p (k) ^ {\alpha} \left(\log q (k) - \log p (k)\right) ^ {2}} _ {\geq 0} \tag {123} \\ \leq \alpha p (y _ {1}) ^ {\alpha} q (y _ {1}) ^ {- \alpha} q (y _ {2}) ^ {- \alpha} \left[ - (\log p (y _ {1}) - \log q (y _ {2})) ^ {2} q (y _ {1}) ^ {\alpha} q (y _ {2}) \right. \\ \left. + \left(\log p (y _ {1}) - \log q (y _ {1})\right) ^ {2} q (y _ {1}) q (y _ {2}) ^ {\alpha} \right]. \\ \end{array}
+$$
+
From Lem. G.9, it can be concluded that for any $\alpha$ , there exists a point $c_{\alpha} \in (q(y_2), q(y_1))$ such that when $p(y_1) > c_{\alpha}$ , $h'(\alpha) < 0$ . Therefore, let $\delta_1 = \max_{\alpha \in [0,1]} c_{\alpha}$ . Then, when $p(y_1) > \delta_1$ , $h(\alpha)$ is monotonically decreasing for $\alpha \in [0,1]$ .
+
Noting that $\lim_{\alpha \to 0^{+}}h(\alpha) = 0$ and that $h(\alpha)$ is monotonically decreasing, we have $h(\alpha) < 0$ for $\alpha > 0$ . In this way, we have proven that $f^{\prime}(\alpha) < 0$ , i.e., $\Delta^{\alpha}(y_1,y_2)$ increases as $\alpha$ decreases.
+
+
+
+# G.5. Proof of Proposition 4.2
+
Restate of Proposition 4.2. The updates induced by $\alpha$ - $\beta$ -divergence for $q_{t}$ within one gradient descent step are given by:
+
+$$
\begin{array}{l} \left| \operatorname {LogR} _ {t} ^ {(\alpha , \beta)} (y) \right| \leq \eta \underbrace {q _ {t} (y) ^ {\beta}} _ {(a)} \underbrace {\left| \frac {p (y) ^ {\alpha} - q _ {t} (y) ^ {\alpha}}{\alpha} \right|} _ {(b)} \\ + \eta q _ {t} (y) \sum_ {k} \underbrace {q _ {t} (k) ^ {\beta}} _ {(a)} \underbrace {\left| \frac {p (k) ^ {\alpha} - q _ {t} (k) ^ {\alpha}}{\alpha} \right|} _ {(b)} + \left| \mathsf {N} _ {t} ^ {(\alpha , \beta)} (y) \right|, \\ \end{array}
+
where $\mathsf{N}_t^{(\alpha ,\beta)}(y)$ denotes a constant normalization factor independent of $y$ .
+
+Proof. The formula for the $\alpha -\beta$ -divergence is:
+
+$$
+\mathbb {D} _ {\mathrm {A B}} ^ {(\alpha , \beta)} (p \| q) \triangleq - \frac {1}{\alpha \beta} \sum_ {k} \left[ p (k) ^ {\alpha} q (k) ^ {\beta} - \frac {\alpha}{\alpha + \beta} p (k) ^ {\alpha + \beta} - \frac {\beta}{\alpha + \beta} q (k) ^ {\alpha + \beta} \right]. \tag {124}
+$$
+
+Using the chain rule, we have:
+
+$$
+\frac {\partial}{\partial f _ {y}} \mathbb {D} _ {\mathrm {A B}} ^ {(\alpha , \beta)} (p \| q) = \sum_ {k} \frac {\partial \mathbb {D} _ {\mathrm {A B}} ^ {(\alpha , \beta)} (p \| q)}{\partial q (k)} \frac {\partial q (k)}{\partial f _ {y}}, \tag {125}
+$$
+
+where
+
+$$
+\frac {\partial \mathbb {D} _ {\mathrm {A B}} ^ {(\alpha , \beta)} (p \| q)}{\partial q (y)} = - \frac {1}{\alpha} \left(p (y) ^ {\alpha} q (y) ^ {\beta - 1} - q (y) ^ {\alpha + \beta - 1}\right), \tag {126}
+$$
+
+and
+
+$$
+\frac {\partial q (y)}{\partial f _ {y}} = \frac {\partial}{\partial f _ {y}} \frac {e ^ {f _ {y}}}{\sum_ {k} e ^ {f _ {k}}}. \tag {127}
+$$
+
+Combining Lem. G.2, the Eq. 125 can be expressed as
+
+$$
\begin{array}{l} \frac {\partial}{\partial f _ {y}} \mathbb {D} _ {\mathrm {A B}} ^ {(\alpha , \beta)} (p \| q) = \sum_ {k \neq y} \frac {\partial \mathbb {D} _ {\mathrm {A B}} ^ {(\alpha , \beta)} (p \| q)}{\partial q (k)} \frac {\partial q (k)}{\partial f _ {y}} + \frac {\partial \mathbb {D} _ {\mathrm {A B}} ^ {(\alpha , \beta)} (p \| q)}{\partial q (y)} \frac {\partial q (y)}{\partial f _ {y}} \\ = - \frac {1}{\alpha} \left[ q (y) ^ {\beta} (p (y) ^ {\alpha} - q (y) ^ {\alpha}) + q (y) \left(\sum_ {k} q (k) ^ {\beta} (q (k) ^ {\alpha} - p (k) ^ {\alpha})\right) \right]. \tag {128} \\ \end{array}
+$$
+
Now, consider using the gradient descent method to update the loss function $\ell$ with respect to the logits $f_{y}^{t}$ ; then the distribution at the next step $q_{t + 1}$ is given by:
+
+$$
\begin{array}{l} q _ {t + 1} (y) = \frac {\exp \left(f _ {y} ^ {t + 1}\right)}{\sum_ {k} \exp \left(f _ {k} ^ {t + 1}\right)} \\ = \frac {\exp \left(f _ {y} ^ {t} - \eta \nabla_ {f _ {y} ^ {t}} \ell\right)}{\sum_ {k} \exp \left(f _ {k} ^ {t} - \eta \nabla_ {f _ {k} ^ {t}} \ell\right)} \tag {129} \\ = q _ {t} (y) \cdot \frac {\exp (- \eta \nabla_ {f _ {y} ^ {t}} \ell)}{\sum_ {k} q _ {t} (k) \exp (- \eta \nabla_ {f _ {k} ^ {t}} \ell)}. \\ \end{array}
+$$
+
Now, substituting the gradient formula of the $\alpha$ - $\beta$ -divergence, the characterization of $q_{t+1}^{(\alpha,\beta)}(y)$ is obtained as:
+
+$$
+q _ {t + 1} ^ {(\alpha , \beta)} (y) = q _ {t} (y) \cdot \frac {\exp \left(\frac {\eta}{\alpha} [ q (y) ^ {\beta} (p (y) ^ {\alpha} - q (y) ^ {\alpha}) + q (y) (\sum_ {k} q (k) ^ {\beta} (q (k) ^ {\alpha} - p (k) ^ {\alpha})) ]\right)}{\sum_ {i} q _ {t} (i) \exp \left(\frac {\eta}{\alpha} [ q (i) ^ {\beta} (p (i) ^ {\alpha} - q (i) ^ {\alpha}) + q (i) (\sum_ {k} q (k) ^ {\beta} (q (k) ^ {\alpha} - p (k) ^ {\alpha})) ]\right)}. \tag {130}
+$$
+
+Observing that the denominator serves as a normalization constant, this can be rewritten as:
+
+$$
+\frac {q _ {t + 1} ^ {(\alpha , \beta)} (y)}{q _ {t} (y)} \propto \exp \left(\frac {\eta}{\alpha} \left[ q (y) ^ {\beta} (p (y) ^ {\alpha} - q (y) ^ {\alpha}) + q (y) \left(\sum_ {k} q (k) ^ {\beta} (q (k) ^ {\alpha} - p (k) ^ {\alpha})\right) \right]\right). \tag {131}
+$$
+
+Taking the logarithm on both sides, we get:
+
+$$
+\log \frac {q _ {t + 1} ^ {(\alpha , \beta)} (y)}{q _ {t} (y)} = \eta \underbrace {\left[ q _ {t} (y) ^ {\beta} \left(\frac {p (y) ^ {\alpha} - q _ {t} (y) ^ {\alpha}}{\alpha}\right) + q _ {t} (y) \sum_ {k} q _ {t} (k) ^ {\beta} \left(\frac {q _ {t} (k) ^ {\alpha} - p (k) ^ {\alpha}}{\alpha}\right) \right]} _ {- \nabla_ {f _ {y}} \ell} + \mathsf {N} _ {t} ^ {(\alpha , \beta)} (y), \tag {132}
+$$
+
+where $\mathsf{N}_t^{(\alpha ,\beta)}(y)$ denotes constant normalization factors independent of $y$ . We can further derive that
+
+$$
\left| \log \frac {q _ {t + 1} ^ {(\alpha , \beta)} (y)}{q _ {t} (y)} \right| \leq \eta \underbrace {q _ {t} (y) ^ {\beta}} _ {(a)} \underbrace {\left| \frac {p (y) ^ {\alpha} - q _ {t} (y) ^ {\alpha}}{\alpha} \right|} _ {(b)} + \eta q _ {t} (y) \sum_ {k} \underbrace {q _ {t} (k) ^ {\beta}} _ {(a)} \underbrace {\left| \frac {p (k) ^ {\alpha} - q _ {t} (k) ^ {\alpha}}{\alpha} \right|} _ {(b)} + \left| \mathsf {N} _ {t} ^ {(\alpha , \beta)} (y) \right|. \tag {133}
+$$
+
+This completes the proof.
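The closed-form gradient in Eq. 128 can be validated against finite differences through the softmax parameterization (the target $p$ , logits, and $(\alpha, \beta)$ below are hypothetical; the softmax Jacobian of Lem. G.2 is the standard one):

```python
import math

def softmax(f):
    # Standard softmax over a list of logits.
    m = max(f)
    e = [math.exp(v - m) for v in f]
    s = sum(e)
    return [v / s for v in e]

def ab_div(p, q, a, b):
    # alpha-beta divergence of Eq. 124.
    return -(1.0 / (a * b)) * sum(
        pk**a * qk**b - a / (a + b) * pk**(a + b) - b / (a + b) * qk**(a + b)
        for pk, qk in zip(p, q))

def grad_closed(p, q, a, b, y):
    # Closed-form gradient w.r.t. logit f_y from Eq. 128.
    inner = sum(qk**b * (qk**a - pk**a) for pk, qk in zip(p, q))
    return -(1.0 / a) * (q[y]**b * (p[y]**a - q[y]**a) + q[y] * inner)

p = [0.5, 0.3, 0.2]          # hypothetical target distribution
logits = [0.4, -0.1, 1.2]    # arbitrary logits
a, b = 0.5, 0.7
eps = 1e-6
q = softmax(logits)
for y in range(3):
    fp = [v + (eps if i == y else 0.0) for i, v in enumerate(logits)]
    fm = [v - (eps if i == y else 0.0) for i, v in enumerate(logits)]
    numeric = (ab_div(p, softmax(fp), a, b) - ab_div(p, softmax(fm), a, b)) / (2 * eps)
    assert abs(numeric - grad_closed(p, q, a, b, y)) < 1e-6
print("closed-form gradient matches finite differences")
```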
+
+# G.6. Proof of Theorem D.1
+
+Restate of Theorem D.1. Let $q_{t+1}^{(\alpha, \beta)}(y)$ be the distribution obtained after one gradient step, starting from $q_t$ using the $\alpha-\beta$ -divergence. Define $\Delta_t^{(\alpha, \beta)}$ as the difference of log mass ratios across two classes $y_1$ and $y_2$ , obtained from the $\alpha-\beta$ -divergence:
+
+$$
\Delta_ {t} ^ {(\alpha , \beta)} (y _ {1}, y _ {2}) \triangleq \operatorname {LogR} _ {t} ^ {(\alpha , \beta)} (y _ {1}) - \operatorname {LogR} _ {t} ^ {(\alpha , \beta)} (y _ {2}).
+$$
+
+We have the following (for appropriate positive constants $\zeta$ , $\delta_1$ , $\delta_2$ , and any real numbers $\alpha_1$ and $\alpha_2$ in the range $[0,1]$ satisfying $\alpha_1 < \alpha_2$ ):
+
+1. $\alpha$ - $\beta$ -divergence transfers probability mass from overestimated classes to underestimated classes more aggressively as $\alpha$ decreases. If $y_{1}$ and $y_{2}$ are such that $\delta_{1} < q_{t}(y_{1}) = q_{t}(y_{2}) \leq p(y_{1})$ (where $\delta_{1} > 0$ ), and $p(y_{1}) \geq p(y_{2}) + \zeta$ , it holds that $\Delta_{t}^{(\alpha_{1},\beta)}(y_{1},y_{2}) \geq \Delta_{t}^{(\alpha_{2},\beta)}(y_{1},y_{2})$ .
+2. $\alpha$ - $\beta$ -divergence reduces the probability mass of classes with larger error $|p(y) - q_t(y)|$ more aggressively as $\alpha$ decreases. If $y_1$ and $y_2$ are such that $p(y_1) < q_t(y_1) = q_t(y_2) \leq 1 - \delta_2$ (where $\delta_2 > 0$ ), and $p(y_1) \geq p(y_2) + \zeta$ , it holds that $\Delta_t^{(\alpha_1, \beta)}(y_1, y_2) \geq \Delta_t^{(\alpha_2, \beta)}(y_1, y_2)$ .
+3. The $\alpha$ - $\beta$ -divergence becomes more (less) preferential in focusing the error on classes with higher student confidence as $\beta$ increases (decreases) when reducing $\left|\mathrm{LogR}_t^{(\alpha,\beta)}(y)\right|$ .
+
+Proof. First, we prove Case 1. Since $q_{t}(y_{1}) = q_{t}(y_{2})$ , denote this common value by $q(x)$ . Based on Eq. 132, we have
+
+$$
+\Delta_ {t} ^ {(\alpha , \beta)} = \eta q (x) ^ {\beta} \cdot \frac {p (y _ {1}) ^ {\alpha} - p (y _ {2}) ^ {\alpha}}{\alpha} = \eta q (x) ^ {\beta} \cdot \int_ {p (y _ {2})} ^ {p (y _ {1})} t ^ {\alpha - 1} \, d t. \tag {134}
+$$
+
+Taking the derivative with respect to $\alpha$ , we get
+
+$$
+\frac {\partial}{\partial \alpha} \Delta_ {t} ^ {(\alpha , \beta)} = \eta q (x) ^ {\beta} \cdot \int_ {p \left(y _ {2}\right)} ^ {p \left(y _ {1}\right)} t ^ {\alpha - 1} \ln t d t. \tag {135}
+$$
+
+Since $p(y_{1})\leq 1$ and $p(y_{2})\leq 1$ , the integrand satisfies $t^{\alpha - 1}\ln t \leq 0$ for $t \in (0,1]$ , and hence
+
+$$
+\frac {\partial}{\partial \alpha} \Delta_ {t} ^ {(\alpha , \beta)} \leq 0. \tag {136}
+$$
+
+Therefore, we have $\Delta_t^{(\alpha_1,\beta)} \geq \Delta_t^{(\alpha_2,\beta)}$ when $\alpha_1 < \alpha_2$ .
+
+The proof of Case 2 is similar to Case 1 and thus is omitted.
+
+Finally, we prove Case 3. Recall that when reducing $\left|\operatorname{LogR}_t^{(\alpha,\beta)}(y)\right|$ , we have the following relationship:
+
+$$
+\left|\operatorname{Log}\mathsf{R}_{t}^{(\alpha ,\beta)}(y)\right|\leq \eta \underbrace{q_{t}(y)^{\beta}}_{(a)}\underbrace{\left|\frac{p(y)^{\alpha} - q_{t}(y)^{\alpha}}{\alpha}\right|}_{(b)} + \eta q_{t}(y)\sum_{k}\underbrace{q_{t}(k)^{\beta}}_{(a_{1})}\underbrace{\left|\frac{p(k)^{\alpha} - q_{t}(k)^{\alpha}}{\alpha}\right|}_{(b_{1})} + \left|\mathsf{N}_{t}^{(\alpha ,\beta)}(y)\right|,
+$$
+
+where terms (a) and $\left(a_{1}\right)$ act as weighting functions. Therefore, selecting a larger (smaller) $\beta$ will place more (less) emphasis on errors from classes with higher student confidence $q_{t}(y)$ , as shown in Fig. 1(c).
+
+Remark. Case 1 and Case 2 show that selecting a smaller $\alpha$ leads to a more aggressive reduction of errors across classes by shifting the probability mass from overestimated to underestimated classes. On the other hand, Case 3 shows that increasing $\beta$ emphasizes minimizing errors more in classes with higher student confidence.
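+
+The monotonicity behind Cases 1 and 2 is easy to check numerically. Below is a minimal sketch (the values of $\eta$ , $q(x)$ , $p(y_1)$ , $p(y_2)$ are arbitrary illustrative choices, not taken from the paper) that evaluates Eq. 134 for two values of $\alpha$ and confirms that the smaller $\alpha$ yields the larger mass shift:
+
+```python
+def delta(alpha, beta, p1, p2, q, eta=1.0):
+    # Eq. 134: difference of log mass ratios for two classes with equal
+    # student mass q and teacher masses p1 > p2.
+    return eta * q ** beta * (p1 ** alpha - p2 ** alpha) / alpha
+
+# Case 1: a smaller alpha transfers probability mass more aggressively.
+d_small_alpha = delta(alpha=0.2, beta=0.5, p1=0.6, p2=0.2, q=0.1)
+d_large_alpha = delta(alpha=0.8, beta=0.5, p1=0.6, p2=0.2, q=0.1)
+assert d_small_alpha >= d_large_alpha
+```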
+
+Algorithm 1 $\alpha$ - $\beta$ -divergence function
+Input: Student distribution $q_{\theta}$ , Teacher distribution $p$ , and Hyperparameters $\alpha$ and $\beta$
+Output: Divergence value $\mathbb{D}_{\mathrm{AB}}^{(\alpha ,\beta)}(p||q_{\theta})$
+1: return $-\frac{1}{\alpha\beta}\sum_{k}\left(p(k)^{\alpha}q_{\theta}(k)^{\beta} - \frac{\alpha}{\alpha + \beta} p(k)^{\alpha +\beta} - \frac{\beta}{\alpha + \beta} q_{\theta}(k)^{\alpha +\beta}\right)$
+Algorithm 2 Generalized distillation framework with $\alpha$ - $\beta$ -divergence.
+Input: Dataset $\mathcal{D}$ with input-target pairs $\{\{\pmb {x}_n,\pmb {y}_n\} \}_{n = 1}^N$ , Teacher $f_{T}$ , Student $f_{S}$ , loss weight $\lambda$ , $\alpha$ - $\beta$ -divergence function $\mathbb{D}_{\mathrm{AB}}^{(\alpha ,\beta)}$ in Algo. 1, and Hyperparameters $\alpha$ and $\beta$
+Output: Trained student model $f_{S}$
+1: for each $(\pmb{x}_{n},\pmb{y}_{n})$ in $\mathcal{D}$ do
+2: $f^{T}\gets f_{T}(\pmb{x}_{n}),\ f^{S}\gets f_{S}(\pmb{x}_{n})$
+3: $p\gets \mathrm{softmax}(f^T)$
+4: $q_{\theta}\gets \mathrm{softmax}(f^{S})$
+5: $\ell_{\mathrm{KD}}\gets \mathbb{D}_{\mathrm{AB}}^{(\alpha ,\beta)}(p||q_{\theta})$
+6: Update $f_{S}$ towards minimizing $\ell_{\mathrm{CE}}(\pmb {y}_n,q_\theta) + \lambda \ell_{\mathrm{KD}}(p,q_\theta)$
+7: end for
+
+# H. Algorithm Protocol
+
+Algo. 1 and Algo. 2 give the algorithmic protocol of our framework, which is easy to implement and applicable to common KD downstream tasks.
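+
+For reference, the return statement of Algo. 1 can be sketched directly in plain Python (a minimal sketch assuming dense probability vectors and $\alpha$ , $\beta$ , $\alpha + \beta$ all nonzero; the FKLD/RKLD limits require the usual continuous extensions):
+
+```python
+def ab_divergence(p, q, alpha, beta):
+    # Algo. 1: alpha-beta-divergence D_AB(p || q).
+    # Requires alpha != 0, beta != 0, and alpha + beta != 0.
+    s = alpha + beta
+    total = sum(pk ** alpha * qk ** beta
+                - alpha / s * pk ** s
+                - beta / s * qk ** s
+                for pk, qk in zip(p, q))
+    return -total / (alpha * beta)
+
+p = [0.7, 0.2, 0.1]  # teacher distribution
+q = [0.5, 0.3, 0.2]  # student distribution
+assert abs(ab_divergence(p, p, 0.2, 0.7)) < 1e-12  # vanishes when q = p
+assert ab_divergence(p, q, 0.2, 0.7) > 0
+```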
+
+# I. Additional Experiment Settings
+
+In this section, we provide a more detailed description of the experimental protocol.
+
+# I.1. Natural Language Processing Tasks
+
+# I.1.1. DATASETS
+
+Following Gu et al. (2024a); Ko et al. (2024), we select 14K samples from databricks-dolly-15k (Conover et al., 2023) for training and 500 samples each for validation and testing. After distillation, the models are evaluated on five task-agnostic instruction-following benchmarks: Dolly-evaluation, Self-Instruct, Vicuna-evaluation, Super-Natural Instructions, and Unnatural Instruction. The details for each dataset are as follows:
+
+- databricks-dolly-15k (Conover et al., 2023): An open-source dataset of instruction-following records created by thousands of Databricks employees. It includes several behavioral classes from Ouyang et al. (2022), such as brainstorming, classification, closed QA, generation, information extraction, open QA, and summarization.
+- Self-Instruct (Wang et al., 2023): A framework that enhances a language model's instruction-following by using its own outputs to generate extensive instructional data. It contains 52K instructions and 82K input-output pairs for tuning, 252 expert-written tasks for practical use, and 50K public dataset examples for benchmarking.
+- Vicuna: A set of 80 challenging questions originally used to evaluate Vicuna; we follow the usage in (Ko et al., 2024; Gu et al., 2024a).
+- Super-Natural Instruction (Wang et al., 2022a): A benchmark of 1,616 diverse NLP tasks with expert-written instructions, covering 76 task types. The test set includes 9K samples across 119 tasks.
+- Unnatural Instruction (Honovich et al., 2023): An AI-generated dataset of 240K instructions created with minimal human input, showing that AI-generated data can match human-written data for training language models. The core set has 60K samples.
+
+For experiments on these datasets, we use ROUGE-L (Lin, 2004) as the evaluation metric. ROUGE-L measures the quality of generated text by computing the Longest Common Subsequence (LCS) between the generated text $\boldsymbol{y}$ and the reference text $\boldsymbol{x}$ . A higher ROUGE-L score indicates that the generated text is more similar to the reference text. The metric is the harmonic mean of recall $R_{\mathrm{LCS}}$ and precision $P_{\mathrm{LCS}}$ , defined as:
+
+$$
+R _ {\mathrm {LCS}} = \frac {\mathrm {LCS} (\boldsymbol {x} , \boldsymbol {y})}{L _ {\boldsymbol {x}}}, \qquad P _ {\mathrm {LCS}} = \frac {\mathrm {LCS} (\boldsymbol {x} , \boldsymbol {y})}{L _ {\boldsymbol {y}}}, \qquad \mathrm {ROUGE\text{-}L} = \frac {2 \cdot R _ {\mathrm {LCS}} \cdot P _ {\mathrm {LCS}}}{R _ {\mathrm {LCS}} + P _ {\mathrm {LCS}}}.
+$$
+
+Here, $\operatorname{LCS}(\pmb{x},\pmb{y})$ is the length of the longest common subsequence, $L_{x}$ is the length of the reference text, and $L_{y}$ is the length of the generated text.
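+
+The metric above translates directly into code. A minimal sketch with whitespace tokenization (practical evaluations typically rely on an existing ROUGE implementation):
+
+```python
+def rouge_l(reference, candidate):
+    # ROUGE-L F1 from the longest common subsequence of the token lists.
+    x, y = reference.split(), candidate.split()
+    # Standard dynamic-programming LCS table.
+    dp = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
+    for i, xi in enumerate(x):
+        for j, yj in enumerate(y):
+            dp[i + 1][j + 1] = (dp[i][j] + 1 if xi == yj
+                                else max(dp[i][j + 1], dp[i + 1][j]))
+    lcs = dp[len(x)][len(y)]
+    if lcs == 0:
+        return 0.0
+    r_lcs, p_lcs = lcs / len(x), lcs / len(y)   # recall, precision
+    return 2 * r_lcs * p_lcs / (r_lcs + p_lcs)  # harmonic mean
+```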
+
+# I.1.2. COMPETITORS
+
+Here we give a more detailed summary of the competitors mentioned in the experiments and their SGO strategies (if any).
+
+- SFT is supervised fine-tuning of the student model using ground truth on the Fixed dataset (i.e., predefined input-output pairs).
+- KD (Hinton, 2015) trains the student distribution to mimic the teacher distribution on the Fixed dataset using FKLD.
+- SeqKD (Kim & Rush, 2016) maximizes the likelihood of high probability sequences generated by the teacher, and can be viewed as SFT on teacher-generated outputs.
+- MiniLLM (Gu et al., 2024a) trains on student-generated sentences (SGOs) using an on-policy gradient method. Its distillation objective is to minimize the RKLD between the teacher and student distributions.
+- GKD (Agarwal et al., 2024) uses the generalized Jensen-Shannon divergence $(\mathbb{D}_{\mathrm{JSD}(\beta)}(p\| q_{\theta}) = \beta \mathbb{D}(p\| \beta p + (1 - \beta)q_{\theta}) + (1 - \beta)\mathbb{D}(q_{\theta}\| \beta p + (1 - \beta)q_{\theta}))$ , training on a Mixture of datasets, either teacher-generated or ground-truth, and on-policy student-generated sequences.
+- DISTILLM (Ko et al., 2024) uses Skew KL $(\mathbb{D}(p\| \alpha p + (1 - \alpha) q_{\theta}))$ or Skew RKL $(\mathbb{D}(q_{\theta}\| \alpha q_{\theta} + (1 - \alpha) p))$ and reports the better-performing one. They train on a mixed dataset consisting of fixed outputs and student-generated outputs. Additionally, they use an adaptive off-policy method that decides, based on validation loss, whether to use student-generated outputs for training, thereby removing noisy SGO data.
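+
+As a concrete reference, the mixture-based objectives used by GKD and DISTILLM can be sketched as follows (a minimal sketch over dense probability vectors; the function names are ours):
+
+```python
+import math
+
+def kl(p, q):
+    # Forward KL divergence D(p || q); assumes q > 0 wherever p > 0.
+    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
+
+def skew_kl(p, q, lam):
+    # Skew KL of DISTILLM: D(p || lam * p + (1 - lam) * q).
+    mix = [lam * pi + (1 - lam) * qi for pi, qi in zip(p, q)]
+    return kl(p, mix)
+
+def generalized_jsd(p, q, beta):
+    # Generalized JSD of GKD with mixture m = beta * p + (1 - beta) * q.
+    m = [beta * pi + (1 - beta) * qi for pi, qi in zip(p, q)]
+    return beta * kl(p, m) + (1 - beta) * kl(q, m)
+```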
+
+# I.1.3. IMPLEMENTATION DETAILS
+
+Training. For training the teacher and student models, we used four RTX 3090 24GB GPUs. Our experimental setup for training LMs on databricks-dolly-15k primarily follows that of Ko et al. (2024). We search for the learning rate in $\{5\mathrm{e}{-4}, 1\mathrm{e}{-4}, 5\mathrm{e}{-5}\}$ and the batch size in $\{4, 8, 16\}$ (up to the maximum batch size that fits on a 3090 24GB GPU), and train these models for 20 epochs. We use the distillation loss on the instruction-following dataset and the language modeling loss on the OpenWebText (Gokaslan et al., 2019) corpus. The checkpoint of each student is selected by its ROUGE-L score on the validation set. For all teacher-student configurations, we set $\alpha = 0.2$ and $\beta = 0.7$ . Additionally, the cross-entropy loss $\ell_{\mathrm{CE}}$ was not used, to ensure a fair comparison with previous methods.
+
+To ensure a fair comparison, for other competitors, we rerun them (with the necessary hyperparameter tuning) and select the best-performing checkpoint on the validation set.
+
+Evaluation. For evaluating the teacher and student models, we used a single RTX 3090 24GB GPU. Following (Ko et al., 2024; Gu et al., 2024a), we adopt the prompt template shown in Fig. 7. We sample the responses from each model using a temperature of 1.0, a max-length limit of 512, and five random seeds (i.e., {10, 20, 30, 40, 50}).
+
+```txt
+Below is an instruction that describes a task. Write a response that appropriately completes the request.
+# Instruction: {instruction}
+# Input: {input}
+# Response:
+```
+
+Figure 7. The prompt template for training and evaluation of instruction-following task experiments from (Ko et al., 2024; Gu et al., 2024a).
+
+# I.2. Vision Tasks
+
+# I.2.1. DATASETS
+
+In this section, we provide detailed descriptions of the image datasets used.
+
+- CIFAR-100: It is a generic image dataset consisting of 60,000 $32 \times 32$ color images across 100 classes, with 600 images per class. It is further split into 50,000 training images and 10,000 test images.
+- ImageNet (Deng et al., 2009): It is a widely recognized object classification dataset containing approximately 1.28 million training images and 50,000 test images across 1,000 object classes. The images are sourced from the web and organized using the WordNet hierarchy, making it a standard benchmark for evaluating object recognition models.
+- Caltech101 (Fei-Fei et al., 2004): It is an object classification dataset with 101 classes and a background class, containing approximately 7,650 training images and 3,300 test images. The images vary significantly in scale, orientation, and lighting conditions.
+- OxfordPets (Parkhi et al., 2012): It is a fine-grained pet classification dataset with 37 pet breed classes, featuring nearly equal numbers of training (3,680) and test (3,669) images. It also includes pixel-level segmentation masks.
+- StanfordCars (Krause et al., 2013): It is a fine-grained car model recognition dataset with 196 classes, based on make, model, and year. It contains 8,144 training images and 8,041 test images, capturing diverse vehicle angles and environments.
+- Flowers102 (Nilsback & Zisserman, 2008): It is a dataset of 102 flower species for fine-grained classification tasks. It includes 6,149 training images and 1,020 test images, posing challenges in distinguishing visually similar flower classes.
+- Food101 (Bossard et al., 2014): It is a fine-grained food classification dataset with 101 dish classes, comprising 75,750 training images and 25,250 test images. It poses challenges in recognizing overlapping ingredients and presentation styles.
+- FGVCAircraft (Maji et al., 2013): It is a fine-grained aircraft classification dataset with 100 classes, distinguishing between models and manufacturers. It contains 6,667 training images and 3,333 test images.
+- SUN397 (Xiao et al., 2010): It is a comprehensive scene recognition dataset with 397 classes, including natural landscapes, indoor spaces, and urban environments. It contains approximately 50,000 training images and 50,000 test images.
+- UCF101 (Soomro, 2012): It is a video dataset for action recognition, featuring 101 action classes ranging from sports to daily activities. It contains approximately 9,500 training clips and 3,700 test clips, collected from YouTube.
+- DTD (Cimpoi et al., 2014): It is a texture classification dataset with 47 texture classes described using human-interpretable attributes. It contains 3,760 training images and 1,880 test images.
+
+- EuroSAT (Helber et al., 2019): It is a satellite image dataset for land-use and land-cover classification, with 10 classes such as agricultural areas, forests, and urban regions. It contains 21,600 training images and 5,400 test images.
+
+For experiments on these datasets, we follow the popular setup in classification tasks to use accuracy as the evaluation metric.
+
+# I.2.2. COMPETITORS
+
+In this section, we provide a more in-depth overview of the competitors discussed in the vision experiments.
+
+First, we introduce the distillation-based methods:
+
+- KD (Hinton, 2015) directly minimizes the FKLD between the student and teacher distributions to transfer knowledge.
+- DKD (Zhao et al., 2022) uses FKLD for distillation, where the knowledge of the teacher's distribution is decoupled into target class and non-target class knowledge for separate learning.
+- LSD (Sun et al., 2024) uses FKLD for distillation in their experiments, where they first normalize the logit vector before obtaining the model output distribution.
+- TTM (Zheng & Yang, 2024) uses FKLD for distillation, and they also introduce Rényi entropy as a regularization to make the student distribution smoother.
+
+Next, we describe the SFT-based methods:
+
+- CLIP is supervised fine-tuning using ground-truth on standard datasets.
+- CoCoOp (Zhou et al., 2022) enhances new class performance by transforming the unified context into an instance-adaptive context, where each sample is assigned a specific prompt that focuses on its unique features or attributes.
+- MaPLe (Khattak et al., 2023a) improves vision-language alignment by simultaneously adapting both the text and image encoders in CLIP using hierarchical prompts.
+- PromptSRC (Khattak et al., 2023b) ensures better performance on both base and new classes by minimizing the task cross-entropy loss and the FKLD between the output distribution of the model and the pre-trained model.
+
+# I.2.3. IMPLEMENTATION DETAILS
+
+We conduct all vision experiments on a single RTX 3090 GPU. The detailed experimental setups are as follows.
+
+Standard Training-Evaluation setup. In this experimental setup, we consider model architectures including VGG (Simonyan, 2014), ResNet (He et al., 2016), and WideResNet (Zagoruyko, 2016). Following (Zheng & Yang, 2024; Sun et al., 2024; Zhao et al., 2022), we train the student models on all class samples. We also consider a standard training data augmentation scheme including padding 4 pixels prior to random cropping and horizontal flipping. We set the batch size as 64 and the initial learning rate as 0.05. We train the model for 240 epochs, in which the learning rate is decayed by 10 every 30 epochs after 150 epochs. We use stochastic gradient descent (SGD) as the optimizer with weight decay 5e-4 and momentum 0.9.
+
+For evaluation, we report the average accuracy across all classes on the test set. We list the hyperparameters $\alpha$ and $\beta$ used across the above experiments in Tab. 4. In our ABKD, the weight $\lambda$ of $\ell_{\mathrm{KD}}$ is set to the default value 32. For those re-implemented methods, we only adjust $\alpha$ and $\beta$ and follow the other hyperparameters as reported in their original papers.
+
+Base-to-New setup. In this experimental setup, we use the ViT-L/14 CLIP model as the teacher and the ViT-B/16 CLIP model as the student. We adopt the recently popular prompt tuning setup for CLIP, as it performs sufficiently well across many tasks, despite freezing most of the model parameters and training only a subset of the learnable prompt tokens. We split the training and testing datasets into base and new classes same as previous work (Khattak et al., 2023a; Kim et al., 2024). Tab. 5 provides the details of the number of images used for training on the base-to-new setup.
+
+Table 4. Hyperparameters for different architecture distillations on CIFAR-100.
+
+| Teacher | WRN-40-2 | WRN-40-2 | resnet56 | resnet110 | resnet110 | resnet32x4 | vgg13 | resnet110 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Student | WRN-16-2 | WRN-40-1 | resnet20 | resnet20 | resnet32 | resnet8x4 | vgg8 | resnet44 |
+| ABKD | α=0.6, β=0.5 | α=0.9, β=0.2 | α=0.8, β=0.3 | α=0.8, β=0.3 | α=0.7, β=0.4 | α=0.5, β=0.5 | α=0.9, β=0.2 | α=0.8, β=0.3 |
+| ABDKD | α=0.8, β=0.4 | α=1.0, β=0.2 | α=1.0, β=0.2 | α=0.8, β=0.3 | α=0.9, β=0.3 | α=0.8, β=0.3 | α=0.7, β=0.4 | α=0.8, β=0.3 |
+| ABLSD | α=0.9, β=0.1 | α=0.8, β=0.4 | α=0.9, β=0.3 | α=1.2, β=-0.1 | α=0.9, β=0.2 | α=1.2, β=-0.2 | α=1.0, β=0.2 | α=1.0, β=0.2 |
+| ABTTM | α=0.8, β=0.3 | α=1.0, β=0.1 | α=0.7, β=0.5 | α=0.9, β=0.2 | α=0.8, β=0.3 | α=0.8, β=0.3 | α=0.7, β=0.5 | α=0.8, β=0.2 |
+
+Table 5. Number of images used for distillation and testing per dataset. To ensure a fair comparison, we follow the same data split as prior work (Kim et al., 2024).
+
+| Dataset | ImageNet | Caltech101 | OxfordPets | StanfordCars | Flowers102 | Food101 | FGVCAircraft | SUN397 | DTD | EuroSAT | UCF101 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Train | 1,281,167 | 4,128 | 2,944 | 6,509 | 4,093 | 50,500 | 3,334 | 15,880 | 2,820 | 13,500 | 7,639 |
+| Test (Base) | 25,000 | 1,549 | 1,881 | 4,002 | 1,053 | 15,300 | 1,666 | 9,950 | 864 | 4,200 | 1,934 |
+| Test (New) | 25,000 | 916 | 1,788 | 4,039 | 1,410 | 15,000 | 1,667 | 9,900 | 828 | 3,900 | 1,849 |
+
+The teacher is pre-trained using the PromptSRC (Khattak et al., 2023b) method, following which it is fine-tuned on the base classes using ground truth supervision. Following (Kim et al., 2024), all distillation-based methods use the unlabeled training set to train students for a fair comparison, and we search for $\lambda$ in $\{100, 200, 300, 500, 1000, 2000, 3000\}$ . We set the prompt depth to 9 and the vision and language prompt lengths to 4. We use SGD as the optimizer. All student models are trained for 20 epochs with a batch size of 8 and a learning rate of 0.005. We follow the data augmentation scheme as in Khattak et al. (2023b), i.e., random resized cropping and random flipping. The text prompts of the first layer are initialized with the word embeddings of "a photo of a {classname}".
+
+For evaluation, we report the model's accuracy on both the base classes and the new classes separately. Additionally, we report the Harmonic Mean (HM) of the two accuracies, defined as:
+
+$$
+\mathrm {HM} = \frac {2 \times \mathrm {BaseAcc} \times \mathrm {NewAcc}}{\mathrm {BaseAcc} + \mathrm {NewAcc}}. \tag {137}
+$$
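+
+Eq. 137 is a plain harmonic mean; as a spot check against the ImageNet teacher row of Tab. 16:
+
+```python
+def harmonic_mean(base_acc, new_acc):
+    # Eq. 137: harmonic mean of base- and new-class accuracies.
+    return 2 * base_acc * new_acc / (base_acc + new_acc)
+
+# ImageNet teacher in Tab. 16: Base 83.24, Novel 76.83 -> HM 79.91.
+assert round(harmonic_mean(83.24, 76.83), 2) == 79.91
+```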
+
+We list the hyperparameters $\alpha$ and $\beta$ used across different datasets in Tab. 7. In addition, since LSD, DKD, and TTM did not report performance under the base-to-new setting, we reran their source code and report the best results (with necessary hyperparameter tuning).
+
+Table 6. Hyperparameters for different image datasets
+
+| Dataset | ImageNet | Caltech101 | OxfordPets | StanfordCars | Flowers102 | Food101 | FGVCAircraft | SUN397 | DTD | EuroSAT | UCF101 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| α | 0.5 | 0.8 | 0.8 | 0.6 | 0.9 | 0.5 | 0.6 | 0.8 | 1.0 | 0.6 | 0.8 |
+| β | 0.5 | 0.2 | 0.4 | 0.4 | 0.1 | 0.5 | 0.5 | 0.2 | 0.2 | 0.5 | 0.2 |
+
+# J. Additional Experiment Analysis
+
+In this section, we present additional experimental results on language and vision tasks.
+
+# J.1. Natural Language Processing Tasks
+
+# J.1.1. EFFECTS OF SGOS
+
+Prior work (Ko et al., 2024; Agarwal et al., 2024) highlights that existing KD methods suffer from a distribution mismatch between the output sequences seen during training and those generated by the student during inference in auto-regressive language models. To address this, these works incorporate student-generated outputs (SGOs) along with teacher feedback (i.e., token-level predictive distributions for these sentences) during training, leading to significant improvements. To assess the applicability of our framework, we evaluate the performance after training with different SGO strategies, as shown in Tab. 8. The results indicate that by integrating these promising techniques, our framework achieves further improvements across most datasets compared to training with fixed data.
+
+Table 7. Hyperparameters for different instruction datasets
+
+| Dataset | Dolly Eval | Self-Instruct | Vicuna Eval | Super-Natural | Unnatural |
+| --- | --- | --- | --- | --- | --- |
+| α | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 |
+| β | 0.7 | 0.7 | 0.7 | 0.7 | 0.7 |
+
+Table 8. Effects of our framework using different SGO strategies (i.e., on-policy, mixed, and adaptive off-policy). Fixed denotes that our framework uses only the original dataset for training without augmentation. Following (Ko et al., 2024), in the mixed strategy, we apply the on-policy optimization method with probability 0.5; otherwise, we sample from the fixed dataset.
+
+| Method | Dolly Eval | Self-Instruct | Vicuna Eval | Super-Natural | Unnatural |
+| --- | --- | --- | --- | --- | --- |
+| Prior SOTA result | 25.32 (0.14) | 12.49 (0.56) | 17.30 (0.41) | 23.76 (0.38) | 25.79 (0.08) |
+| Fixed + Ours | 25.65 (0.24) | 13.47 (0.42) | 16.06 (0.25) | 26.47 (0.31) | 29.32 (0.08) |
+| On-policy (Gu et al., 2024a) + Ours | 25.96 (0.42) | 13.44 (0.37) | 17.32 (0.38) | 26.86 (0.26) | 29.57 (0.13) |
+| Mixed (Agarwal et al., 2024) + Ours | 26.49 (0.23) | 14.62 (0.27) | 17.14 (0.26) | 27.54 (0.44) | 30.98 (0.09) |
+| Adaptive Off-policy (Ko et al., 2024) + Ours | 26.58 (0.18) | 14.25 (0.25) | 17.79 (0.35) | 27.79 (0.26) | 31.13 (0.12) |
+
+# J.1.2. DISTILLING FROM STRONGER TEACHER
+
+Recent research (Cho & Hariharan, 2019; Huang et al., 2022) shows that as the size of the teacher model increases, distillation performance does not always improve for the student and may even degrade due to the capacity gap between them. It is unclear how our framework behaves as teacher size scales up. To this end, we report the performance of our method using teacher models of varying sizes while keeping the student model size fixed, as shown in Tab. 9. From the results, we make the following observations: 1) our $\alpha$ - $\beta$ -divergence consistently outperforms FKLD and RKLD across different teacher models; 2) FKLD and RKLD fail to ensure that the student consistently benefits from the richer supervision of larger teachers, leading to suboptimal performance; 3) in contrast, the $\alpha$ - $\beta$ -divergence keeps the student's performance nearly positively correlated with teacher model size by smoothly interpolating between FKLD and RKLD.
+
+Table 9. Performance of GPT-2 on five task-agnostic instruction-following datasets with different teacher model sizes.
+
+| Method | Dolly Eval | Self-Instruct | Vicuna Eval | Super-Natural | Unnatural |
+| --- | --- | --- | --- | --- | --- |
+| **GPT-2 Medium (0.3B) → GPT-2 (0.1B)** | | | | | |
+| FKLD | 23.68 (0.29) | 10.14 (0.53) | 15.44 (0.48) | 18.54 (0.30) | 20.44 (0.20) |
+| RKLD | 24.66 (0.20) | 11.73 (0.31) | 15.27 (0.41) | 22.65 (0.25) | 25.27 (0.24) |
+| α-β-divergence (Ours) | 25.47 (0.25) | 13.13 (0.46) | 15.84 (0.21) | 26.29 (0.13) | 28.31 (0.14) |
+| **GPT-2 Large (0.8B) → GPT-2 (0.1B)** | | | | | |
+| FKLD | 20.01 (0.23) | 9.89 (0.59) | 14.98 (0.35) | 19.00 (0.26) | 18.72 (0.13) |
+| RKLD | 25.27 (0.28) | 11.77 (0.24) | 14.78 (0.26) | 23.61 (0.36) | 26.41 (0.13) |
+| α-β-divergence (Ours) | 25.72 (0.52) | 13.08 (0.45) | 15.80 (0.62) | 26.44 (0.32) | 29.25 (0.10) |
+| **GPT-2 XL (1.5B) → GPT-2 (0.1B)** | | | | | |
+| FKLD | 23.80 (0.37) | 10.01 (0.75) | 15.25 (0.65) | 17.69 (0.26) | 18.99 (0.05) |
+| RKLD | 24.77 (0.37) | 12.02 (0.48) | 15.06 (0.28) | 23.27 (0.29) | 26.01 (0.11) |
+| α-β-divergence (Ours) | 25.65 (0.24) | 13.47 (0.42) | 16.06 (0.25) | 26.47 (0.31) | 29.32 (0.08) |
+
+# J.1.3. COMPARISON WITH MORE BASELINES
+
+To further demonstrate the effectiveness of the proposed method, we additionally consider several KD baselines that were not included in the main text, namely (1) AlphaNet (Wang et al., 2021), (2) BDKD (Amara et al., 2022), (3) AKL (Wu et al., 2024), and (4) Jensen's KL (Binici et al., 2022).
+
+The results are shown in Tab. 10. ABKD outperforms all other baselines across different benchmarks, with improvements ranging from 0.42 to 1.76 ROUGE-L. In addition, we visualize the performance dynamics of different methods throughout training, as shown in Fig. 8. The $\alpha$ - $\beta$ -divergence consistently achieves the highest performance throughout training, especially in the early stages, demonstrating faster convergence.
+
+
+Figure 8. Performance on the validation set when distilling GPT-2 XL (1.5B) to GPT-2 (0.1B).
+
+Finally, it is worth noting that although AlphaNet achieves performance comparable to ours, it requires tuning three hyperparameters (while ours only has two), which significantly increases the search overhead. In addition, a key advantage of our approach is that its hyperparameters have clear physical interpretations. This allows one to leverage inductive bias and follow principled guidelines to more efficiently search for suitable hyperparameters for the target task (App. D).
+
+Table 10. ROUGE-L scores (↑) of different loss functions on five task-agnostic instruction-following datasets when distilling GPT-2 XL (1.5B) into GPT-2 (0.1B). We report the average and standard deviation of ROUGE-L scores across five random seeds [10, 20, 30, 40, 50].
+
+| Loss Function | Dolly | Self-Instruct | Vicuna | Super-Natural | Unnatural |
+| --- | --- | --- | --- | --- | --- |
+| FKLD | 23.80 (0.37) | 10.01 (0.75) | 15.25 (0.65) | 17.69 (0.26) | 18.99 (0.05) |
+| RKLD | 24.77 (0.37) | 12.02 (0.48) | 15.06 (0.28) | 23.27 (0.29) | 26.01 (0.11) |
+| WSD | 23.33 (0.52) | 10.52 (0.47) | 14.83 (0.61) | 19.67 (0.13) | 21.21 (0.21) |
+| BDKD | 23.94 (0.24) | 11.83 (0.39) | 15.21 (0.23) | 19.56 (0.23) | 21.66 (0.23) |
+| Jensen-Shannon divergence | 23.79 (0.24) | 11.52 (0.18) | 15.35 (0.80) | 21.36 (0.17) | 21.97 (0.10) |
+| AKL | 23.83 (0.59) | 10.87 (0.42) | 15.63 (0.66) | 20.07 (0.32) | 21.97 (0.13) |
+| AlphaNet | 25.13 (0.27) | 12.46 (0.46) | 15.64 (0.40) | 25.27 (0.20) | 27.56 (0.15) |
+| SKL | 25.01 (0.23) | 12.47 (0.29) | 15.98 (0.84) | 25.56 (0.31) | 27.51 (0.07) |
+| SRKL | 25.75 (0.39) | 11.58 (0.49) | 15.56 (0.17) | 26.13 (0.25) | 27.37 (0.18) |
+| α-divergence | 25.15 (0.41) | 12.92 (0.22) | 15.60 (0.27) | 24.83 (0.21) | 27.81 (0.10) |
+| β-divergence | 24.12 (0.38) | 11.18 (0.27) | 14.95 (0.33) | 20.98 (0.23) | 23.15 (0.14) |
+| α-β-divergence (Ours) | 25.65 (0.24) | 13.47 (0.42) | 16.06 (0.25) | 26.47 (0.31) | 29.32 (0.08) |
+
+# J.1.4. LLAMA FAMILY DISTILLATION
+
+The following analysis investigates whether the proposed method remains effective in distillation experiments involving larger-scale models. To this end, we conducted distillation from OpenLLaMA2-7B (Touvron et al., 2023b) to OpenLLaMA2-3B and compared our approach with various KD baselines.
+
+The results are presented in Tab. 11. ABKD outperforms others by 0.65-3.26 ROUGE-L scores, especially excelling in Dolly and Unnatural.
+
+# J.1.5. QUALITATIVE EVALUATION
+
+In this section, we present several case studies to illustrate the effectiveness of ABKD. As shown in Tab. 17, ABKD is better at generating more accurate responses according to the predefined requirements specified in the instructions.
+
+Table 11. ROUGE-L scores (↑) on five task-agnostic instruction-following datasets when distilling OpenLLaMA2-7B into OpenLLaMA2-3B. Experiments are conducted on eight RTX 3090 24GB GPUs. * indicates that SGOs are used.
+
+| Method | Dolly | Self-Instruct | Vicuna | Super-Natural | Unnatural |
+| --- | --- | --- | --- | --- | --- |
+| SFT | 24.54 (0.51) | 16.80 (0.64) | 16.15 (0.15) | 29.29 (0.13) | 27.43 (0.21) |
+| FKLD | 25.23 (0.44) | 18.90 (1.20) | 16.67 (0.35) | 31.68 (0.22) | 29.36 (0.13) |
+| RKLD | 27.74 (0.45) | 20.61 (0.80) | 18.83 (0.40) | 35.31 (0.24) | 33.86 (0.16) |
+| Jensen's KL | 26.28 (0.43) | 18.84 (0.66) | 17.81 (0.38) | 30.92 (0.12) | 29.79 (0.17) |
+| BDKD | 26.78 (0.53) | 18.94 (0.68) | 17.81 (0.52) | 32.15 (0.34) | 30.89 (0.24) |
+| AKL | 26.38 (0.41) | 17.69 (0.46) | 16.72 (0.48) | 33.02 (0.16) | 31.29 (0.08) |
+| DISTILLM* | 28.24 (0.48) | 21.00 (0.72) | 19.12 (0.53) | 37.06 (0.35) | 35.05 (0.13) |
+| AlphaNet | 28.11 (0.29) | 21.30 (0.63) | 18.70 (0.23) | 37.86 (0.44) | 35.40 (0.17) |
+| Ours (ABKD) | 30.25 (0.37) | 22.39 (0.62) | 20.83 (0.42) | 38.51 (0.32) | 38.66 (0.10) |
+
+Table 12. The distillation results of Qwen2.5-Math on English mathematical benchmarks. Models are evaluated with chain-of-thought prompting.
+
+| Model | GSM8K | MATH | GaoKao 2023 En | OlympiadBench | College Math | Avg. |
+| --- | --- | --- | --- | --- | --- | --- |
+| Qwen2.5-Math-7B-Instruct (Teacher) | 95.5 | 82.8 | 66.8 | 38.5 | 37.7 | 64.3 |
+| Qwen2.5-1.5B-Instruct (Student) | 73.3 | 54.9 | 45.9 | 18.9 | 30.3 | 44.7 |
+| SeqKD | 75.8 | 57.3 | 47.3 | 17.7 | 31.3 | 45.9 |
+| KD | 75.9 | 58.1 | 45.5 | 21.1 | 31.3 | 46.3 |
+| ABKD | 77.4 | 58.6 | 48.5 | 20.4 | 32.0 | 47.4 |
+
+# J.2. Vision Tasks
+
+# J.2.1. DISTILLING FROM STRONGER TEACHER
+
+To evaluate the potential of our framework to benefit from a larger teacher, we examine the distillation effect when different-sized teacher models are used to distill a student model of the same size, as shown in Tab. 13. Encouragingly, our framework consistently outperforms FKLD and RKLD, with performance gains remaining stable as the teacher model size increases.
+
+Table 13. Performance of resnet20 on CIFAR-100 with different teacher model sizes. For a fair comparison, we set $\alpha = 0.8$ and $\beta = 0.3$ across all teacher-student configurations.
+
+| Student | Teacher | Student Acc. | Teacher Acc. | FKLD | RKLD | α-β-divergence (Ours) |
+| --- | --- | --- | --- | --- | --- | --- |
+| resnet32 | resnet32 | 69.06 | 71.93 | 71.03 (0.23) | 70.91 (0.29) | 71.46 (0.15) |
+| | resnet44 | | 72.25 | 71.51 (0.11) | 71.29 (0.16) | 71.76 (0.25) |
+| | resnet56 | | 72.34 | 70.66 (0.24) | 71.43 (0.16) | 71.79 (0.16) |
+| | resnet110 | | 74.31 | 70.67 (0.27) | 71.41 (0.23) | 71.72 (0.18) |
+
+# J.2.2. CROSS-ARCHITECTURE DISTILLATION
+
+Although the analysis in the main text has demonstrated the effectiveness of the proposed method for distillation within the same architecture, it remains unclear how much improvement our method can achieve when the teacher and student have different architectures. To this end, we performed distillation from ResNet50 to VGG8. The results in Tab. 14 show that our method outperforms previous approaches by a margin of 0.15 to 0.89.
+
+Table 14. Accuracy (%) comparison of different distillation methods from ResNet50 to VGG8 on CIFAR-100.
+
+| Method | Accuracy (%) | Method | Accuracy (%) |
+| --- | --- | --- | --- |
+| KD | 73.81 | ABKD (Ours) | 74.62 (0.81) |
+| DKD | 74.37 | ABDKD (Ours) | 75.26 (0.89) |
+| LSD | 74.52 | ABLSD (Ours) | 74.77 (0.25) |
+| TTM | 74.87 | ABTTM (Ours) | 75.02 (0.15) |
+
+# J.2.3. HOW DOES ABKD PERFORM WITH ALPHA/BETA OUTSIDE [0,1]?
+
+Another interesting question is how ABKD performs when $\alpha$ or $\beta$ fall outside the range [0, 1] (e.g., $\alpha = 1.5$ , $\beta = -0.5$ ). To address this concern, we tested the settings with $\alpha > 1$ and $\beta < 0$ when distilling ResNet56 to ResNet20, as shown in Tab. 15.
+
+Table 15. Accuracy (%) of ABKD with $\alpha > 1$ and $\beta < 0$ when distilling ResNet56 to ResNet20.
+
+| α \ β | -0.1 | -0.3 | -0.5 |
+| --- | --- | --- | --- |
+| 1.2 | 70.81 | 71.10 | 70.35 |
+| 1.4 | 71.29 | 71.24 | 70.92 |
+| 1.6 | 70.55 | 70.53 | 70.34 |
+
+The results indicate that excessively large values of $\alpha$ weaken the hardness-concentration effect, while overly small values of $\beta$ diminish the confidence-concentration effect. Both cases can lead to degraded distillation performance. This observation aligns with our objective: we aim to balance FKLD and RKLD, which correspond to the extreme cases $(\alpha = 1, \beta = 0)$ and $(\alpha = 0, \beta = 1)$ , respectively. Therefore, a natural approach is to search for parameters within the range [0, 1].
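+
+Since the search space is just the unit square, hyperparameter selection reduces to a small two-dimensional sweep. A minimal sketch (the `evaluate` callback, which would train a student and return a validation metric, is hypothetical):
+
+```python
+import itertools
+
+def search_alpha_beta(evaluate, step=0.1):
+    # Grid search for (alpha, beta) over [0, 1]^2. The corners alpha = 1,
+    # beta = 0 (FKLD) and alpha = 0, beta = 1 (RKLD) are the two limiting
+    # cases the alpha-beta-divergence interpolates between.
+    grid = [round(i * step, 10) for i in range(int(round(1 / step)) + 1)]
+    return max(itertools.product(grid, grid),
+               key=lambda ab: evaluate(*ab))
+
+# Hypothetical metric peaking at the paper's NLP setting (0.2, 0.7).
+best = search_alpha_beta(lambda a, b: -(a - 0.2) ** 2 - (b - 0.7) ** 2)
+assert best == (0.2, 0.7)
+```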
+
+Table 16. Comparison with existing SOTA methods on base-to-new generalization. Teacher: ViT-L/14 CLIP. Student: ViT-B/16 CLIP.
+
+| ViT-B/16 | Base | Novel | HM |
+| --- | --- | --- | --- |
+| Teacher | 87.85 | 81.45 | 84.39 |
+| CLIP | 69.34 | 74.22 | 71.70 |
+| CoCoOp | 80.47 | 71.69 | 75.83 |
+| MaPLe | 82.28 | 75.14 | 78.55 |
+| PromptSRC | 84.26 | 76.10 | 79.97 |
+| KD | 86.96 | 80.73 | 83.63 |
+| DKD | 87.02 | 81.02 | 83.79 |
+| LSD | 86.31 | 79.99 | 82.89 |
+| Ours | 87.27 | 81.41 | 84.17 |
+
+(a) Average over 11 datasets
+
+| ViT-B/16 | Base | Novel | HM |
+| --- | --- | --- | --- |
+| Teacher | 83.24 | 76.83 | 79.91 |
+| CLIP | 72.43 | 68.14 | 70.22 |
+| CoCoOp | 76.66 | 70.54 | 73.47 |
+| MaPLe | 77.00 | 74.05 | 75.49 |
+| PromptSRC | 77.63 | 74.97 | 76.26 |
+| KD | 80.83 | 74.66 | 77.62 |
+| DKD | 80.98 | 74.85 | 77.79 |
+| LSD | 80.85 | 74.62 | 77.61 |
+| Ours | 81.23 | 75.02 | 78.00 |
+
+(b) ImageNet
+
+| ViT-B/16 | Base | Novel | HM |
+| --- | --- | --- | --- |
+| Teacher | 98.71 | 98.03 | 98.37 |
+| CLIP | 96.84 | 94.00 | 95.40 |
+| CoCoOp | 97.96 | 93.81 | 95.84 |
+| MaPLe | 97.74 | 94.36 | 96.02 |
+| PromptSRC | 98.10 | 94.03 | 96.02 |
+| KD | 98.91 | 96.65 | 97.77 |
+| DKD | 99.12 | 96.52 | 97.80 |
+| LSD | 99.05 | 96.24 | 97.62 |
+| Ours | 99.46 | 96.93 | 98.18 |
+
+(c) Caltech101
+
+| ViT-B/16 | Base | Novel | HM |
+| --- | --- | --- | --- |
+| Teacher | 96.86 | 98.82 | 97.83 |
+| CLIP | 91.17 | 97.26 | 94.12 |
+| CoCoOp | 95.23 | 97.69 | 96.43 |
+| MaPLe | 95.43 | 97.96 | 96.68 |
+| PromptSRC | 95.33 | 97.30 | 96.30 |
+| KD | 96.30 | 98.01 | 97.15 |
+| DKD | 96.36 | 98.52 | 97.43 |
+| LSD | 95.96 | 98.32 | 97.13 |
+| Ours | 96.49 | 98.55 | 97.51 |
+
+(d) OxfordPets
+
+| ViT-B/16 | Base | Novel | HM |
+| --- | --- | --- | --- |
+| Teacher | 84.53 | 84.25 | 84.39 |
+| CLIP | 63.37 | 74.89 | 68.65 |
+| CoCoOp | 70.49 | 73.59 | 72.01 |
+| MaPLe | 72.94 | 74.00 | 73.47 |
+| PromptSRC | 78.27 | 74.97 | 76.56 |
+| KD | 82.80 | 83.37 | 83.13 |
+| DKD | 82.23 | 84.20 | 83.21 |
+| LSD | 78.29 | 79.48 | 78.88 |
+| Ours | 83.43 | 84.01 | 83.72 |
+
+(e) StanfordCars
+
+| ViT-B/16 | Base | Novel | HM |
+| --- | --- | --- | --- |
+| Teacher | 99.05 | 82.60 | 90.08 |
+| CLIP | 72.08 | 77.80 | 74.83 |
+| CoCoOp | 94.47 | 71.75 | 81.01 |
+| MaPLe | 95.92 | 72.64 | 82.56 |
+| PromptSRC | 98.02 | 76.50 | 85.92 |
+| KD | 99.42 | 82.62 | 90.24 |
+| DKD | 99.15 | 82.64 | 90.15 |
+| LSD | 98.86 | 81.84 | 89.55 |
+| Ours | 99.24 | 83.47 | 90.67 |
+
+(f) Flowers102
+
+| ViT-B/16 | Base | Novel | HM |
+| --- | --- | --- | --- |
+| Teacher | 94.56 | 95.15 | 94.85 |
+| CLIP | 90.10 | 91.22 | 90.66 |
+| CoCoOp | 90.70 | 91.29 | 90.99 |
+| MaPLe | 90.91 | 91.25 | 91.08 |
+| PromptSRC | 90.67 | 91.53 | 91.10 |
+| KD | 92.43 | 93.68 | 93.05 |
+| DKD | 92.35 | 93.72 | 93.03 |
+| LSD | 92.07 | 93.07 | 92.57 |
+| Ours | 92.46 | 93.84 | 93.14 |
+
+(g) Food101
+
+| ViT-B/16 | Base | Novel | HM |
+| --- | --- | --- | --- |
+| Teacher | 54.44 | 43.07 | 48.09 |
+| CLIP | 27.19 | 36.29 | 31.09 |
+| CoCoOp | 33.41 | 23.71 | 27.74 |
+| MaPLe | 37.44 | 35.41 | 36.50 |
+| PromptSRC | 42.73 | 37.87 | 40.15 |
+| KD | 49.12 | 41.81 | 45.17 |
+| DKD | 48.92 | 42.43 | 45.44 |
+| LSD | 47.76 | 39.84 | 43.44 |
+| Ours | 49.06 | 43.05 | 45.86 |
+
+(h) FGVCAircraft
+
+| ViT-B/16 | Base | Novel | HM |
+| --- | --- | --- | --- |
+| Teacher | 84.97 | 81.09 | 82.98 |
+| CLIP | 69.36 | 75.35 | 72.23 |
+| CoCoOp | 79.74 | 76.86 | 78.27 |
+| MaPLe | 82.88 | 78.70 | 80.75 |
+| PromptSRC | 82.67 | 78.47 | 80.52 |
+| KD | 83.69 | 81.54 | 82.60 |
+| DKD | 83.87 | 81.32 | 82.58 |
+| LSD | 83.34 | 80.62 | 81.96 |
+| Ours | 83.88 | 81.75 | 82.80 |
+
+(i) SUN397
+
+| ViT-B/16 | Base | Novel | HM |
+| --- | --- | --- | --- |
+| Teacher | 85.76 | 70.65 | 77.48 |
+| CLIP | 53.24 | 59.90 | 56.37 |
+| CoCoOp | 77.01 | 56.00 | 64.85 |
+| MaPLe | 80.36 | 59.18 | 68.16 |
+| PromptSRC | 83.37 | 62.97 | 71.75 |
+| KD | 85.84 | 71.37 | 77.94 |
+| DKD | 86.83 | 71.91 | 78.67 |
+| LSD | 86.35 | 70.97 | 77.91 |
+| Ours | 86.52 | 72.65 | 78.98 |
+
+(j) DTD
+
+| ViT-B/16 | Base | Novel | HM |
+| --- | --- | --- | --- |
+| Teacher | 94.79 | 83.15 | 88.59 |
+| CLIP | 56.48 | 64.05 | 60.03 |
+| CoCoOp | 90.89 | 71.64 | 80.11 |
+| MaPLe | 93.32 | 71.26 | 80.94 |
+| PromptSRC | 92.90 | 73.90 | 82.32 |
+| KD | 97.54 | 82.08 | 89.14 |
+| DKD | 97.14 | 83.72 | 89.93 |
+| LSD | 97.31 | 83.31 | 89.77 |
+| Ours | 97.75 | 83.92 | 90.30 |
+
+(k) EuroSAT
+
+| ViT-B/16 | Base | Novel | HM |
+| --- | --- | --- | --- |
+| Teacher | 89.50 | 82.26 | 85.73 |
+| CLIP | 70.53 | 77.50 | 73.85 |
+| CoCoOp | 82.33 | 74.95 | 78.47 |
+| MaPLe | 83.08 | 78.60 | 80.76 |
+| PromptSRC | 87.10 | 78.01 | 82.30 |
+| KD | 89.71 | 82.27 | 86.10 |
+| DKD | 90.28 | 81.40 | 85.61 |
+| LSD | 89.62 | 81.53 | 85.38 |
+| Ours | 90.44 | 82.31 | 86.73 |
+
+(l) UCF101
+
+Table 17. Instruction-following cases from the Unnatural (Honovich et al., 2023) dataset. ABKD is better at generating responses that meet the requirements of the given instructions.
+
+**Case #1**
+
+- **Instruction:** In this task, you're given a short description of an event. Your job is to order the steps involved in the event from first to last. Note that there may be multiple correct answers for each event.
+- **Input:** Description: A man walks into a bar and orders a drink. He pays for his drink and leaves the bar.
+- **Ground-truth:** The man walks into the bar.\nHe orders a drink.\nHe pays for his drink.\nHe leaves the bar.
+- **SFT:** A man walks into a bar, asks a bartender who is in the bar and drinks.
+- **KD:** A man walks into a bar, pays for his drink, and leaves the bar with a short description of the event.\n\nA woman walks into a coffee shop, opens the door, and asks the bartender for him to order them.\n\nA man asks the bartender for their order and alcohol. The woman hands the coffee, and begins making her coffee. At the end, the men exit, and the woman returns to the bar.
+- **SeqKD:** A man walks into a bar with a drink and leaves the bar; he pays for his drink and enters the bar.
+- **MiniLLM:** The event was a man walking into a bar and ordering a drink.
+- **GKD:** A man walks into a bar. He paid for his drink and leaves The bar.
+- **DISTILLM:** A man walks into a bar, orders a drink and leaves the bar
+- **Ours (ABKD):** A man walks into a bar & orders a drink. He pays for his drink & leaves the bar.
+
+**Case #2**
+
+- **Instruction:** You need to output the text of the passage after capitalizing all proper nouns.
+- **Input:** Rami went to new york city on business. While he was there, he met his good friend ali, who showed him around town.
+- **Ground-truth:** Rami went to New York City on business. While he was there, he met his good friend Ali, who showed him around town.
+- **SFT:** Rami went to new yORK city on business. While she was there, he met His good friend ali, who shows him around town.
+- **KD:** Rami went to new yORK city on business. While she was there, he met His good friend ali, who taught him how to go city.
+- **SeqKD:** Rami went to new york city on business. While being there, he met his bad friend ali, who showed you around town.
+- **MiniLLM:** Rami went to new york city on business. While being there, he met his friend ali, who showed he around town.
+- **GKD:** Rami went to new yORK city on business. While there, he met his bad friend Ali, who showed him around Town.
+- **DISTILLM:** York City, Holds: ali, who showed him that you are a good friend.
+- **Ours (ABKD):** Rami went to New York City's on business. While he was there, he met her good friend Ali, who shown him around town.
+
+**Case #3**
+
+- **Instruction:** You will be given a text consisting of multiple sentences. The task is to find the number of questions present in the text, and then print them out separately. A question is defined as a sentence that ends with a question mark ('?').
+- **Input:** Text: I was born on October 3rd, 1984. Do you know when you were born? What month are we in currently?
+- **Ground-truth:** Do you know when you were born? \nWhat month are we in currently?
+- **SFT:** I was born on October 3, 1984. What year are we in currently?
+- **KD:** I was born on October 3nd, 1984. Do you actually know? I'm not sure because I don't know when I was born.
+- **SeqKD:** You are in currently.
+- **MiniLLM:** The text of the question is: I was born on September 3rd, 1984.
+- **GKD:** I am born on October 3rd 1984. Do you know what month are we in currently, year?
+- **DISTILLM:** The task is to find the answer to a question that begins with a question mark ('?'). The task is to find the answer to a question that begins with a question mark ('?').
+- **Ours (ABKD):** I was born on October 3nd, 1984. Do you remember when you were born?
\ No newline at end of file
diff --git a/abkdpursuingaproperallocationoftheprobabilitymassinknowledgedistillationviadivergence/images.zip b/abkdpursuingaproperallocationoftheprobabilitymassinknowledgedistillationviadivergence/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..666985762b0b567524cdd3f043b79a40a8451998
--- /dev/null
+++ b/abkdpursuingaproperallocationoftheprobabilitymassinknowledgedistillationviadivergence/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:98f5804d7214939d3df0aea427f9f5d5638af1d59e486482c1349da72b74536e
+size 3279171
diff --git a/abkdpursuingaproperallocationoftheprobabilitymassinknowledgedistillationviadivergence/layout.json b/abkdpursuingaproperallocationoftheprobabilitymassinknowledgedistillationviadivergence/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..c720c72ab86573d16b47c541594e68c2348d43a5
--- /dev/null
+++ b/abkdpursuingaproperallocationoftheprobabilitymassinknowledgedistillationviadivergence/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d4e94951a55f72c1a0eec75caed4084d39d760b0fbeb7c1df2440b4b4021ddd6
+size 2371685
diff --git a/abnetadaptiveexplicitbarriernetforsafeandscalablerobotlearning/474f3ca7-b597-40ce-a816-f5a449c555ce_content_list.json b/abnetadaptiveexplicitbarriernetforsafeandscalablerobotlearning/474f3ca7-b597-40ce-a816-f5a449c555ce_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..6ae03a791d3e6fd7a7ecc4ad21537eb67548de05
--- /dev/null
+++ b/abnetadaptiveexplicitbarriernetforsafeandscalablerobotlearning/474f3ca7-b597-40ce-a816-f5a449c555ce_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5c8a2506a248605cca7f55088c1f937f440cb40cdc1fc05ed9b4e08d2d2b8a27
+size 139846
diff --git a/abnetadaptiveexplicitbarriernetforsafeandscalablerobotlearning/474f3ca7-b597-40ce-a816-f5a449c555ce_model.json b/abnetadaptiveexplicitbarriernetforsafeandscalablerobotlearning/474f3ca7-b597-40ce-a816-f5a449c555ce_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..b0c67dcf69f73065f897b19dd8f98bfedc6892c4
--- /dev/null
+++ b/abnetadaptiveexplicitbarriernetforsafeandscalablerobotlearning/474f3ca7-b597-40ce-a816-f5a449c555ce_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:906bb486389d63fdbd6ad1414d9dbdf6539f235614c62468cbec66a342d3a5b9
+size 164121
diff --git a/abnetadaptiveexplicitbarriernetforsafeandscalablerobotlearning/474f3ca7-b597-40ce-a816-f5a449c555ce_origin.pdf b/abnetadaptiveexplicitbarriernetforsafeandscalablerobotlearning/474f3ca7-b597-40ce-a816-f5a449c555ce_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..3abab61ed351fb7fb2d82b3ebf98e23d12d36084
--- /dev/null
+++ b/abnetadaptiveexplicitbarriernetforsafeandscalablerobotlearning/474f3ca7-b597-40ce-a816-f5a449c555ce_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c7eddcb1426ffe42653316d7e16b9a0ac182f1fbdfee573ba1a25cbf119d1134
+size 1518672
diff --git a/abnetadaptiveexplicitbarriernetforsafeandscalablerobotlearning/full.md b/abnetadaptiveexplicitbarriernetforsafeandscalablerobotlearning/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..e34bb5d77e7f74ceab92bd05fc97e540b495f613
--- /dev/null
+++ b/abnetadaptiveexplicitbarriernetforsafeandscalablerobotlearning/full.md
@@ -0,0 +1,615 @@
+# ABNet: Adaptive explicit-Barrier Net for Safe and Scalable Robot Learning
+
+Wei Xiao $^{1}$ Tsun-Hsuan Wang $^{1}$ Chuang Gan $^{2}$ Daniela Rus $^{1}$
+
+# Abstract
+
+Safe learning is central to AI-enabled robots, where a single failure may lead to catastrophic results. Existing safe learning methods are not scalable, are inefficient and hard to train, and tend to generate unstable signals under noisy inputs, which makes them challenging to deploy on robots. To address these challenges, we propose the Adaptive explicit-Barrier Net (ABNet), in which barriers explicitly show up in the closed-form model that guarantees safety. The ABNet has the potential to incrementally scale toward larger safe foundation models. Each head of ABNet can learn a safe control policy from different features, focusing on a specific part of the observation. In this way, we do not need to directly construct a large model for complex tasks, which significantly facilitates training while ensuring stable output. Most importantly, we can still formally prove the safety guarantees of the ABNet. We demonstrate the efficiency and strength of ABNet in 2D robot obstacle avoidance, safe robot manipulation, and vision-based end-to-end autonomous driving, with results showing much better robustness and guarantees than existing models1 .
+
+# 1. Introduction
+
+Robot learning usually requires scalable training and vast amounts of data. There are many large models (Li et al., 2022) for complex robotic tasks, including manipulation, locomotion, and autonomous driving (Bommasani et al., 2021) (Singh et al., 2023) (Wang et al., 2023a). However, these models are not trustworthy and have no safety guarantees. Existing methods that incorporate guarantees or certificates into neural networks are not scalable and hard to
+
+1 Computer Science and Artificial Intelligence Lab, MIT, USA
+2 UMass Amherst and MIT-IBM Watson AI Lab, USA. Correspondence to: Wei Xiao .
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+1Code is available at: https://github.com/Weixy21/ ABNet
+
+train (Pereira et al., 2020) (Xiao et al., 2023) (Wang et al., 2023b). It is desirable to merge these models, as merged controllers generally perform better (Beygelzimer et al., 2015) (Agarwal et al., 2020). Traditional mixture-of-experts methods (Shazeer et al., 2017) (Riquelme et al., 2021) (Zhou et al., 2022) and other merging approaches (Huang et al., 2023) (Ramé et al., 2023) (Wang et al., 2024) are not designed to retain the safety of the models. In this work, we explore leveraging the collective power of many safety-critical models to handle complex tasks while preserving the safety of the models.
+
+There are various definitions of safety for robotics and autonomy; informally, safety means that something bad never happens. Mathematically, safety can be defined via a continuously differentiable constraint on the system state and further captured by the forward invariance of the safe set induced by that constraint (Ames et al., 2017) (Xiao & Belta, 2021) (Glotfelter et al., 2017). In other words, we can use different constraints and approaches to enforce safety. How we learn such safety enforcement methods may depend on which observation features are in focus. For instance, some human drivers may focus on the left lane boundary to achieve safe lane keeping, while others may focus on the right lane boundary, as shown in Fig. 1. Merging these models enables us to build robust and powerful learning models. However, the adaptivity of the merging method to different safe models is crucial, especially for retaining safety.
+
+In the literature, differentiable Quadratic Programs (dQP) (Amos & Kolter, 2017) and differentiable Model Predictive Control (dMPC) (Amos et al., 2018) are widely used for safe robot learning. However, dMPC is restricted to linear systems with linear constraints. Barrier-based learning methods (Robey et al., 2020) (Pereira et al., 2020) (Srinivasan et al., 2020), such as the BarrierNet (Xiao et al., 2023) (Wang et al., 2023b) (Liu et al., 2023), are widely used to transform nonlinear problems into dQPs and can equip deep learning systems with safety guarantees. However, these learning methods have several limitations: $(i)$ they involve solving batch QPs during training, which is inefficient, and dQPs tend to return poor solutions that significantly degrade the model; $(ii)$ they can only implement a single safety enforcement method as the last layer of the neural network, which does not scale to larger safe learning
+
+
+
+Figure 1: The proposed ABNet is efficient, scalable, and generates stable output while guaranteeing safety for robots. Each head of ABNet can learn safe control policies with a focus on different observation features, in a scalable or one-shot/direct manner. Barriers play the role of gates in determining the closed-form safe control and are more interpretable.
+
+models; (iii) these methods tend to generate unstable outputs under noise and thus cannot be deployed on robots.
+
+In this paper, we propose the Adaptive explicit-Barrier Net (ABNet) to merge many safety-critical models while preserving their safety guarantees. The ABNet is efficient, scalable, robust to noise, and easy to train incrementally. As shown in Fig. 1, we may build multi-head models within the ABNet. Each head of the ABNet may pay attention to different observation features to generate a safe control policy. We combine the outputs of all the safe learning models in a way that is provably safe. The weights of this combination quantify the importance of each head of the model, and they are trainable. The structure of the ABNet allows us to build larger safe foundation models for complicated robotic applications: we can incrementally train safe models corresponding to different robot skills, which simply increases the head count $h$ of the ABNet.
+
+In summary, we make the following new contributions:
+
+- We propose a novel explicit-Barrier model that achieves better training stability and computational efficiency than dQP (Amos & Kolter, 2017) and BarrierNet (Xiao et al., 2023) while guaranteeing the safety of robot learning; the explicit-Barrier model is crucial for building larger models via merging due to its efficiency in handling nonlinear systems and constraints.
+- We propose a novel ABNet that merges many safety-critical learning models; this new model is scalable, robust, and easy to train.
+- We formally prove the safety guarantees of the proposed ABNet.
+- We demonstrate the strength and effectiveness of our model on a variety of robot control tasks, including 2D robot obstacle avoidance, safe robot manipulation, and vision-based end-to-end autonomous driving on an open dataset. We also show that existing model/policy merging approaches can make safety worse in complicated tasks (such as
+
+in vision-based driving).
+
+# 2. Problem Formulation
+
+We consider the following safe robot learning problem:
+
+Problem. Given (a) a robotic system with dynamics; (b) a state-feedback nominal controller $\pi^{*}(\pmb{x}) = \pmb{u}^{*}$ (such as a model predictive controller) that provides the training label; (c) a set of safety constraints $b_{j}(\pmb{x}) \geq 0, j \in S$ ($b_{j}$ is continuously differentiable and $S$ is a constraint index set); (d) a neural network controller $\pi (\pmb {x},\pmb {z}|\theta) = \pmb{u}$ parameterized by $\theta$ (under observation $\pmb{z}$ );
+
+Our goal is to find the optimal parameter
+
+$$
+\theta^ {*} = \arg \min _ {\theta} \mathbb {E} _ {\boldsymbol {x}, \boldsymbol {z}} [ \ell (\pi^ {*} (\boldsymbol {x}), \pi (\boldsymbol {x}, \boldsymbol {z} | \theta)) ], \tag {1}
+$$
+
+while satisfying all the safety constraints in (c) and the dynamics constraint (a). $\mathbb{E}$ is the expectation, and $\ell$ is a loss function.
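As a concrete reading of objective (1), the expectation can be estimated over a batch of state/observation pairs. The snippet below is a minimal sketch with a squared-error choice of $\ell$ ; the function names are illustrative, not the paper's API.

```python
import numpy as np

def imitation_loss(pi_star, pi_theta, states, observations):
    """Monte-Carlo estimate of Eq. (1) with l(u*, u) = ||u* - u||^2.

    pi_star(x) returns the nominal (label) control u*; pi_theta(x, z)
    returns the neural controller's output under observation z.
    """
    losses = [np.sum((np.asarray(pi_star(x)) - np.asarray(pi_theta(x, z))) ** 2)
              for x, z in zip(states, observations)]
    return float(np.mean(losses))
```

In the paper's setting the safety constraints (c) are enforced by the network architecture itself rather than by this loss, which only fits the nominal controller.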
+
+# 3. Adaptive Explicit-Barrier Net
+
+In this section, we present the architecture of the Adaptive explicit-Barrier Net (ABNet) and formally prove its safety guarantees in learning systems.
+
+Our proposed method can fuse machine learning models that strictly enforce system safety. In the literature, to make a safety model trainable without losing guarantees, we usually require the model to be in the form of a differentiable convex optimization, such as differentiable QP (Amos & Kolter, 2017), differentiable MPC (Amos et al., 2018), or differentiable CBF (Xiao et al., 2023). In the former two cases, the considered robot learning problems usually involve linear dynamics and linear constraints; otherwise, the optimization becomes nonlinear (i.e., not trainable in neural networks). Although one can transform constrained optimizations into unconstrained, trainable ones using classical barrier functions (Boyd & Vandenberghe, 2004), this may make the system lose its safety guarantees.
+
+# 3.1. Multi-head Explicit-Barrier
+
+We focus on general safe robot learning problems with nonlinear dynamics and constraints. For such problems, it has been shown that we can use the CBF transformation to reduce nonlinear optimizations onto quadratic optimizations with safety guarantees (Ames et al., 2017) (Xiao & Belta, 2021), which gives rise to the so-called BarrierNet (Xiao et al., 2023).
+
+Specifically, consider robot dynamics as: $\dot{\pmb{x}} = f(\pmb{x}) + g(\pmb{x})\pmb{u}$ , where $\pmb{x} \in \mathbb{R}^n$ is the robot state, $f: \mathbb{R}^n \to \mathbb{R}^n$ and $g: \mathbb{R}^n \to \mathbb{R}^{n \times q}$ are locally Lipschitz, and $\pmb{u} \in \mathbb{R}^q$ is the control. We can also consider non-affine control systems by defining auxiliary systems (Xiao et al., 2021).
+
+Implicit-Barrier. The constrained optimal control in the considered problem in Sec. 2 is then transformed into the following differentiable CBF/BarrierNet (Xiao et al., 2023), which may form a head of the model:
+
+$$
+\boldsymbol {u} _ {k} = \arg \min _ {\boldsymbol {u} (t)} \frac {1}{2} \boldsymbol {u} (t) ^ {T} H \left(\boldsymbol {z} _ {k} \mid \theta_ {h, k}\right) \boldsymbol {u} (t) + F ^ {T} \left(\boldsymbol {z} _ {k} \mid \theta_ {f, k}\right) \boldsymbol {u} (t) \tag {2}
+$$
+
+s.t.
+
+$$
+\begin{array}{l} L _ {f} \psi_ {j, m - 1} (\boldsymbol {x}, \boldsymbol {z} | \theta_ {p}) + \left[ L _ {g} \psi_ {j, m - 1} (\boldsymbol {x}, \boldsymbol {z} | \theta_ {p}) \right] \boldsymbol {u} \\ + p _ {m, k} \left(\boldsymbol {z} _ {k} \mid \theta_ {p, k} ^ {m}\right) \alpha_ {j, m} \left(\psi_ {j, m - 1} (\boldsymbol {x}, \boldsymbol {z} \mid \theta_ {p})\right) \geq 0, j \in S, \\ \psi_ {j, i} (\boldsymbol {x}, \boldsymbol {z} | \theta_ {p}) = \dot {\psi} _ {j, i - 1} (\boldsymbol {x}, \boldsymbol {z} | \theta_ {p}) \\ + p _ {i} (\boldsymbol {z} | \theta_ {p} ^ {i}) \alpha_ {j, i} \left(\psi_ {j, i - 1} (\boldsymbol {x}, \boldsymbol {z} | \theta_ {p})\right), i \in \{1, \dots , m - 1 \}, j \in S, \\ \psi_ {j, 0} (\boldsymbol {x}, \boldsymbol {z} | \theta_ {p}) = b _ {j} (\boldsymbol {x}), j \in S, \tag {3} \\ \end{array}
+$$
+
+where $H(\pmb{z}_k|\theta_{h,k}) \in \mathbb{R}^{q \times q}$ is positive definite, and $-H^{-1}(\pmb{z}_k|\theta_{h,k})F(\pmb{z}_k|\theta_{f,k})$ can be interpreted as a reference control (the output of previous network layers). The constraints above are the High-Order CBFs (HOCBFs) constructed to enforce the safety constraints $b_j(\pmb{x}) \geq 0$ of relative degree $m$ (Xiao & Belta, 2021). It can be shown that the satisfaction of the first HOCBF constraint above implies the satisfaction of $b_j(\pmb{x}) \geq 0, \forall j \in S$ , which proves the safety of the differentiable CBF. In the above, $L_f\psi = \frac{d\psi}{dx} f(\pmb{x}), L_g\psi = \frac{d\psi}{dx} g(\pmb{x}), k \in \{1, \dots, h\}$ , and $h$ is the number of heads (as shown in Fig. 1). $p_i \geq 0, i \in \{1, \dots, m-1\}$ and $p_{m,k} \geq 0$ are penalty functions (outputs of the previous network, as shown in Fig. 2) on the strictly increasing, zero-passing functions $\alpha_{j,i}, i \in \{1, \dots, m\}, j \in S$ , and they determine the conservativeness of the robot. $\theta := (\theta_{h,k}, \theta_{f,k}, \theta_{p,k}^m, \theta_p), k \in \{1, \dots, h\}$ , where $\theta_p := (\theta_p^1, \dots, \theta_p^{m-1})$ , are all trainable parameters. $z_k$ is the observation of head $k$ , and all heads may share the same observation, i.e., $z_k = z, \forall k \in \{1, \dots, h\}$ .
+
+The training of the above differentiable CBF (3) involves
+
+
+Figure 2: Architecture of multi-head explicit-barriers. ABNet is capable (adaptive) to fuse any safe learning models, such as the proposed explicit-barriers, BarrierNet, dMPC, etc. The ABNet is usually used in conjunction with any other neural networks and can be implemented in parallel. The parameters (inputs) of each head of ABNet are the outputs of previous layers (such as CNN or LSTM).
+
+solving batch QPs (Amos & Kolter, 2017), which is inefficient. Since CBFs do not explicitly show up in the solution, we refer to the differentiable CBF (3) as an implicit-barrier. In the following, we derive the trainable explicit solution of the differentiable CBF, which is our proposed explicit-barrier.
+
+Explicit-Barrier. It has been shown in (Luenberger, 1997) that we can find the explicit solution of a QP if there are only two constraints. As the cardinality of the safety constraint set $S$ may be greater than two, the number of HOCBFs in the differentiable CBF (3) will also be greater than two. In order to address this, first, we can define two safety functions $b_{I}(\pmb{x}) = -\ln(\sum_{j \in S_{1}} \exp(-b_{j}(\pmb{x})))$ and $b_{II}(\pmb{x}) = -\ln(\sum_{j \in S_{2}} \exp(-b_{j}(\pmb{x})))$ , where $S = S_{1} \cup S_{2}$ . By (Boyd & Vandenberghe, 2004), we have that $\max_{j \in S} b_{j}(\pmb{x}) \leq \ln(\sum_{j \in S} \exp(b_{j}(\pmb{x})))$ . It can be easily shown that $b_{I}(\pmb{x}) \geq 0$ and $b_{II}(\pmb{x}) \geq 0$ implies $b_{j}(\pmb{x}) \geq 0, \forall j \in S$ .
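The composite safety functions above are smooth under-approximations of the minimum of the individual barriers. A small numerically stable sketch (illustrative, not the paper's code):

```python
import numpy as np

def soft_min_barrier(b_values):
    """b_I = -log(sum_j exp(-b_j)): a smooth lower bound on min_j b_j.

    Because b_I <= min_j b_j, enforcing b_I >= 0 implies b_j >= 0 for
    every j, i.e., all individual safety constraints hold.
    """
    b = np.asarray(b_values, dtype=float)
    m = b.min()  # shift by the minimum for numerical stability
    return float(m - np.log(np.exp(-(b - m)).sum()))
```

This collapses an arbitrary number of constraints into one scalar, which is what allows the QP to be reduced to the two-constraint case with a closed-form solution.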
+
+Alternatively, we can simply consider the two riskiest safety specifications, i.e., $b_{I}(\pmb{x}) = \min_{j\in S}b_{j}(\pmb{x})$ and $b_{II}(\pmb{x}) = \min_{j\in S\setminus \arg \min_{j\in S}b_{j}(\pmb{x})}b_{j}(\pmb{x})$ . This approach is simpler, and it works well for most obstacle avoidance tasks.
+
+Then, the explicit optimal solution of the differentiable CBF with the above two safety specifications can be given by
+
+$$
+\begin{array}{l} \boldsymbol{u}_{k} = -\lambda_{1}(\boldsymbol{x}) H^{-1} L_{g}\psi_{I,m-1}(\boldsymbol{x}) \\ \quad - \lambda_{2}(\boldsymbol{x}) H^{-1} L_{g}\psi_{II,m-1}(\boldsymbol{x}) - H^{-1}F, \end{array} \tag{4}
+$$
+
+where $H, F$ are given as in (2) with arguments omitted, $\lambda_1(\pmb{x}) \leq 0, \lambda_2(\pmb{x}) \leq 0$ are two gate functions ((14), (15) in the Appendix), and $\psi_{I,m-1}(\pmb{x}), \psi_{II,m-1}(\pmb{x})$ are the two HOCBFs corresponding to $b_I(\pmb{x}), b_{II}(\pmb{x})$ , defined similarly as in (3). Since $b_I(\pmb{x}), b_{II}(\pmb{x})$ explicitly show up in the above equation while safety is still guaranteed, we call it an explicit-Barrier, which also makes it more interpretable. The whole derivation of the explicit solution is given in Appendix Sec. A.
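Structurally, the closed-form control is just a gated sum of three $H^{-1}$ -scaled terms. The sketch below assumes the gate values $\lambda_1, \lambda_2 \leq 0$ (whose exact expressions are given in the paper's appendix) are supplied by the caller:

```python
import numpy as np

def explicit_barrier_control(H, F, Lg_psi_I, Lg_psi_II, lam1, lam2):
    """Closed-form control in the style of Eq. (4):
    u = -lam1 * H^{-1} Lg_psi_I - lam2 * H^{-1} Lg_psi_II - H^{-1} F.

    lam1, lam2 <= 0 are the barrier 'gate' values; the barriers thus
    enter the solution explicitly instead of sitting inside a QP solver.
    """
    Hinv = np.linalg.inv(np.asarray(H, dtype=float))
    return (-lam1 * Hinv @ np.asarray(Lg_psi_I, dtype=float)
            - lam2 * Hinv @ np.asarray(Lg_psi_II, dtype=float)
            - Hinv @ np.asarray(F, dtype=float))
```

When both gates are zero the output reduces to the reference control $-H^{-1}F$ , i.e., the barriers only intervene when the constraints become active.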
+
+Adaptive mechanism. Each head of ABNet may learn a different safe control policy even if all the heads share the same observation $z$ . The benefit is that the final performance is achieved "collectively" by all heads, so each head can focus on its "subproblem" while maintaining safety. Alternatively, we may make each head of ABNet focus on a different observation $z_{k}$ . The observation $z_{k}$ may come from different parts of the sensor observation (such as the left and right lane boundaries in driving, shown in Fig. 1), or even from different sensing modalities (such as vision, lidar, etc.).
+
+Cross connection. It can be noted from (3) that each head of ABNet $k \in \{1, \dots, h\}$ has some cross connection with other heads, as also shown in Fig. 1. In other words, $\psi_{j,i}(\boldsymbol{x}, \boldsymbol{z} | \theta_p), i \in \{1, \dots, m-1\}, j \in S$ are formulated in the same way through the shared parameter $\theta_p$ (independent from $k$ ). This is to ensure (i) the construction for provable safety (as shown later), and (ii) some shared information across different heads of ABNet as they all generate safe controls for the robot.
+
+Fusion. Another important consideration is how to fuse all these controls $\pmb{u}_k, k \in \{1, \dots, h\}$ while preserving the safety property of each head of the ABNet. We propose the following form:
+
+$$
+\boldsymbol{u} = \sum_{k=1}^{h} w_{k} \boldsymbol{u}_{k}, \quad \text{where} \quad \sum_{k=1}^{h} w_{k} = 1. \tag{5}
+$$
+
+In the above, $w_{k} \geq 0, k \in \{1, \dots, h\}$ are trainable parameters. The composition of explicit-Barriers (4), BarrierNets, dMPCs, etc., in the form of (5) is our proposed ABNet, as shown in Fig. 2. The safety guarantees of the ABNet are stated in the following theorem:
+
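One hedged way to keep the fusion weights on the probability simplex during training is to parameterize them by a softmax over unconstrained logits; this is an implementation choice, not mandated by the paper:

```python
import numpy as np

def fuse_heads(controls, logits):
    """Eq. (5): u = sum_k w_k u_k with w_k >= 0 and sum_k w_k = 1.

    The weights come from a softmax over trainable logits, which
    enforces the simplex constraint by construction.
    """
    logits = np.asarray(logits, dtype=float)
    w = np.exp(logits - logits.max())  # max-shift for numerical stability
    w /= w.sum()
    return w @ np.asarray(controls, dtype=float)
```

Because the fused control is a convex combination, it inherits the per-head safety property proven in Theorem 3.1 and also averages out per-head noise.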
+Theorem 3.1. (Safety of ABNets) Consider the multi-head ABNet with heads formulated as in (4), possibly together with other safe learning models (BarrierNet, dMPC, etc.). If the system is initially safe (i.e., $b_{j}(\boldsymbol{x}(t_{0})) \geq 0, \forall j \in S$ ), then a control policy $\boldsymbol{u}$ from the ABNet output (5) guarantees the safety of the system, i.e., $b_{j}(\boldsymbol{x}(t)) \geq 0, \forall j \in S, \forall t \geq t_{0}$ .
+
+All proofs of the theorems are given in Appendix B. If the system is not initially safe (i.e., $b_{j}(\boldsymbol{x}(t_{0})) < 0$ for some $j \in S$ ), then the system state $\boldsymbol{x}$ will be driven to the safe side of the state space due to the Lyapunov property of CBFs/HOCBFs (Ames et al., 2017) (Xiao & Belta, 2021). This makes it possible to utilize data that violates safety for adversarial training of the ABNet.
+
+Natural noise filter. The ABNet acts as a natural noise filter since $w_{k} \in [0,1], \forall k \in \{1, \dots, h\}$ in (5). This ensures that the output $\pmb{u}$ of the model is stable for a large enough head number $h$ if the heads have different observations $z_{k}$ of the current environment. This feature makes ABNet a very robust and adaptive controller for robotic systems, and thus,
+
+# Algorithm 1 Construction and training of ABNet
+
+Input: the problem setup (a)-(d) given in the problem formulation (Sec. 2).
+
+Output: a robust and safe controller $\pmb{u}$ for the system.
+
+(a) Formulate each head of explicit-Barriers as in (4).
+(b) Build the cross connection among explicit-Barriers via $p_i(\mathbf{z}|\theta_p^i), i \in \{1, \dots, m-1\}$ .
+(c) Fuse all the heads of explicit-Barriers as in (5).
+
+if incremental training then
+
+1. Decouple $p_i(\boldsymbol{z}|\theta_p^i), i \in \{1, \dots, m-1\}$ and define them for each explicit-Barrier.
+2. Train each head of explicit-Barriers separately.
+3. Choose a $p_i(\boldsymbol{z}|\theta_p^i), i \in \{1, \dots, m-1\}$ from one of the explicit-Barriers to build the cross connection.
+4. Fuse all the explicit-Barriers via (6).
+
+else
+
+Directly train the ABNet via reverse-mode error back-propagation.
+
+end if
+
+ABNet can generate smooth signals.
+
+Theorem 3.2. (Safety of merging ABNets) Given two ABNets, the model merged using the form in (5) again guarantees the safety of the system.
+
+# 3.2. Model Training
+
+The ABNet can be trained incrementally or in one shot. This is because each head of ABNet can generate a control policy that is applicable to the system. The linear combination weights $w_{k}, k \in \{1, \dots, h\}$ in the ABNet denote the importance of the corresponding control policies.
+
+Incremental training. In ABNet, we may train each head $k \in \{1,\dots ,h\}$ of the model in a scalable way, since we also wish to minimize the loss between each output $\pmb{u}_k$ and the label $\pmb{u}^*$ . The training can be done by directly incorporating the explicit-Barrier (4) into the model. The cross connections via $p_i(z|\theta_p)$ between explicit-Barriers in the ABNet may prevent independent training; we address this by training a separate $p_i(z|\theta_p)$ for each head of the ABNet. After training all heads of the model, we may fix their parameters, choose a $p_i(z|\theta_p)$ from one of the explicit-Barriers (or take the average of all $p_i(z|\theta_p)$ among the models) to build the cross connection, and train the weights $w_{k}$ for some more iterations. Another way is to fuse these explicit-Barriers by testing loss. In other words, the weight $w_{k},k\in \{1,\ldots ,h\}$ can be determined by:
+
+$$
+w_{k} = \frac{1/\ell_{k}\left(\boldsymbol{u}_{k}, \boldsymbol{u}^{*}\right)}{\sum_{k'=1}^{h} 1/\ell_{k'}\left(\boldsymbol{u}_{k'}, \boldsymbol{u}^{*}\right)}, \tag{6}
+$$
+
+where $\ell_{k}$ is a loss function.
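Eq. (6) is plain inverse-loss weighting; a minimal sketch:

```python
def inverse_loss_weights(losses):
    """Eq. (6): w_k proportional to 1 / l_k, normalized to sum to 1.

    Heads with lower testing loss receive larger fusion weights.
    Assumes every loss is strictly positive.
    """
    inv = [1.0 / l for l in losses]
    total = sum(inv)
    return [v / total for v in inv]
```

The resulting weights are nonnegative and sum to one, so they remain valid coefficients for the safe fusion in (5).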
+
+If we already have some trained ABNet, and we wish to
+
+
+
+
+Figure 3: Computation (upper; numbers in brackets denote variance) and training efficiency (lower; numbers in brackets denote testing loss) comparison of our proposed explicit-Barrier (ABNet) with dQP and BarrierNet (BNet). The use of dQP in BarrierNet can yield very poor solutions. NN is a normal neural network without safety guarantees.
+
+add some new capabilities (such as safe driving by focusing only on the left lane boundary) to the model, then we can train some new heads of the ABNet on the new data. Finally, we can fuse the models similarly with safety guarantees, as shown in Thm. 3.2. This shows the scalability of the proposed ABNet, which allows us to build larger foundational safe models in an incremental way.
+
+One-shot/Direct training. The one-shot training of the ABNet can be done directly using traditional reverse-mode automatic differentiation. In addition to the loss between the eventual output $\pmb{u}$ of the ABNet and the label $\pmb{u}^{*}$ , we may also consider losses on $\pmb{u}_k, k\in \{1,\dots,h\}$ , as well as on the reference controls $-H^{-1}(z_k|\theta_{h,k})F(z_k|\theta_{f,k})$ , in order to improve training performance.
+
+The construction and training of the ABNet involve the formulation of each head of explicit-Barriers as in (4), the model fusion as in (5), and the scalable or direct training as shown above (Alg. 1).
+
+# 4. Experiments
+
+In this section, we conduct several experiments to answer the following questions:
+
+
+
+
+Figure 4: 2D robot obstacle avoidance closed-loop testing control profiles (upper) and ABNet performance as the number of heads increases under scalable training (lower). This scalable training for ABNet comes with safety guarantees. The controls are subject to input noise and are thus nonsmooth.
+
+- How does the proposed explicit-Barrier compare with dQP (Amos & Kolter, 2017) and BarrierNet (Xiao & Belta, 2021) in terms of computation and training efficiency?
+- Does our method match the theoretical results in experiments, and is it scalable?
+- How does our method compare with state-of-the-art models in enforcing safety constraints?
+- What are the benefits of model/policy merging, and how robust are our models in terms of safety and smoothness?
+
+Benchmark models: We compare with (i) baselines: a single end-to-end learning model (E2E) (Levine et al., 2016) in Tables 1 and 2, and a single vanilla end-to-end (V-E2E) model (Amini et al., 2022) in Table 3; (ii) safety-guaranteed models: (implicit) BarrierNet (BNet) (Xiao et al., 2023) and the Deep Forward and Backward (DFB) model (Pereira et al., 2020); (iii) policy merging: BarrierNet policies merged with uncertainty propagation (BNet-UP) (Wang et al., 2023b), which employs Gaussian kernels with Scott's rule (Scott, 2015) to select the bandwidth; (iv) model merging: E2Es merged with Monte-Carlo Dropout (E2Es-MCD) (Gal & Ghahramani, 2016) and E2Es merged with Deep Ensembles (E2Es-DR) (Lakshminarayanan et al., 2017).
+
+Table 1: 2D robot obstacle avoidance closed-loop testing under noisy input.
+
+| MODEL | SAFETY (≥0) | CONSER. (≥0 & ↓) | MSE (↓) | $u_1$ UNCERTAINTY (↓) | $u_2$ UNCERTAINTY (↓) | THEORET. GUAR. |
+| --- | --- | --- | --- | --- | --- | --- |
+| E2E (Levine et al., 2016) | -14.140 | -2.976±3.770 | 0.007±0.004 | 0.063 | 0.049 | × |
+| E2Es-MCD (Gal & Ghahramani, 2016) | -2.087 | -1.341±0.824 | 0.004±0.001 | 0.041 | 0.026 | × |
+| E2Es-DR (Lakshminarayanan et al., 2017) | -35.130 | -3.176±4.299 | 0.080±0.006 | 0.032 | 0.020 | × |
+| DFB (Pereira et al., 2020) | 36.659 | 47.810±4.377 | 0.013±0.003 | 0.062 | 0.052 | ✓ |
+| BNet (Xiao et al., 2023) | 5.045 | 7.966±1.287 | 0.014±0.006 | 0.074 | 0.047 | ✓ |
+| BNet-UP (Wang et al., 2023b) | 5.988 | 8.573±1.738 | 0.008±0.004 | 0.054 | 0.028 | × |
+| ABNet-10-SC (ours) | 5.731 | 6.269±0.319 | 0.011±0.007 | 0.065 | 0.027 | ✓ |
+| ABNet-10 (ours) | 12.639 | 13.887±1.323 | 0.008±0.005 | 0.049 | 0.030 | ✓ |
+| ABNet-100 (ours) | 10.122 | 11.729±0.816 | 0.012±0.006 | 0.049 | 0.013 | ✓ |
+
+Our models: We use the minimum-function method to determine $b_{I}(\pmb{x})$ and $b_{II}(\pmb{x})$. For Sec. 4.2 and 4.3: ABNet trained in a scalable way with 10 heads (ABNet-10-SC), ABNet trained in one shot with 10 heads (ABNet-10), and ABNet trained in one shot with 100 heads (ABNet-100). For Sec. 4.4: ABNet trained in one shot with 10 heads using the same input images (ABNet), ABNet with attention images and 10 heads (ABNet-att), and ABNet-SC, obtained by scalable training that fuses ABNet with ABNet-att (20 heads in total).
+
+Evaluation metrics: The evaluation metrics are defined as follows: mean square error of the model testing (MSE), satisfaction of the safety constraints, where non-negative values indicate guaranteed safety (SAFETY), system conservativeness (CONSER.), steering control uncertainty ($u_{1}$ UNCERTAINTY), acceleration control uncertainty ($u_{2}$ UNCERTAINTY), and theoretical safety guarantees (THEORET. GUAR.). The metrics are explicitly defined in Appendix C.
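As a rough illustration (our own plausible operationalization; the exact definitions live in Appendix C, which is not reproduced here), SAFETY can be read as the worst-case constraint value along a closed-loop trajectory, and control uncertainty as the spread of a control channel across repeated runs:

```python
import statistics

def safety_metric(b_traj):
    """Worst-case value of the safety specifications b_j(x(t)) along a
    trajectory; non-negative means the constraints were never violated."""
    return min(min(b_t) for b_t in b_traj)

def control_uncertainty(u_samples):
    """Population standard deviation of one control channel across runs."""
    return statistics.pstdev(u_samples)
```

Under this reading, a single negative entry anywhere in the trajectory makes the SAFETY metric negative, which is why the E2E rows in Tables 1–3 fail the check.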
+
+# 4.1. Computation and Training Time
+
+We first compare the training stability and efficiency of our proposed explicit-Barrier (ABNet) with dQP (Amos & Kolter, 2017) and BarrierNet (Xiao et al., 2023). The dQP method is based on the "QPFunction" library from OptNet (Amos & Kolter, 2017), and BarrierNet is built on dQP. The computation times under different batch sizes are shown in Fig. 3 (upper). The computation time of dQP increases significantly with the batch size, while the proposed explicit-Barrier remains efficient. Fig. 3 (lower) shows the training time (of the model for the 2D robot case in Sec. 4.2) under different numbers of heads. The training time of our proposed ABNet is comparable to that of a normal NN (which has no safety guarantees), whereas BarrierNet (based on dQP) tends to produce very bad training and testing solutions, as also shown in Fig. 7 in the Appendix, which significantly deteriorates the quality of model training.
+
+# 4.2. 2D Robot Obstacle Avoidance
+
+We aim to find a neural network controller for a 2D robot that drives the robot from an initial location to an arbitrary destination while avoiding collision with the obstacle. All the models ($h$ copies/heads) have the same input (with uniformly distributed noise at $10\%$ of the input magnitude during testing). The detailed problem setup and model introductions are given in Appendix C.2.
+
+
+
+
+Figure 5: Robot manipulation closed-loop end-effector trajectories (upper) and ABNet performance as the number of model heads increases under scalable training (lower). The transparent trajectories in the upper figure correspond to the results of all runs.
+
+Models/policies merging can improve the performance, as shown by the MSE metrics in Table 1 and the scalable training in Fig. 4. Note that our scalable training for ABNets has safety guarantees. The DFB tends to be very conservative because the CBFs within it are not differentiable, which yields the high conservativeness value in Table 1. Our proposed ABNets can significantly reduce the uncertainty of the outputs (controls) under noisy input while guaranteeing safety, and this uncertainty decreases as the number of heads in the ABNets increases, as shown by the two uncertainty columns in Table 1, as well as in Fig. 4 and in Fig. 8 of Appendix C.2, where the control uncertainty of ABNet-100 is lower than that of BNet. The smoothness of the controls also increases with the number of model heads (e.g., blue for ABNet vs. red for BNet in Fig. 8). In terms of performance, our proposed ABNets also improve the testing errors compared to BNet and DFB, as shown by the MSE in Table 1. The E2Es-MCD model achieves the best performance, but at the cost of safety (its SAFETY metric in Table 1 is negative, which implies safety violations).
+
+Table 2: Robot manipulation closed-loop testing under noisy input and comparisons with benchmarks.
+
+| MODEL | SAFETY (≥0) | CONSER. (≥0 & ↓) | MSE (↓) | $u_1$ UNCERTAINTY (↓) | $u_2$ UNCERTAINTY (↓) | THEORET. GUAR. |
+| --- | --- | --- | --- | --- | --- | --- |
+| E2E (Levine et al., 2016) | -11.027 | -1.082±2.992 | 3.6e-4±1.7e-4 | 0.013 | 0.009 | × |
+| E2Es-MCD (Gal & Ghahramani, 2016) | -11.827 | 0.162±2.085 | 1.1e-4±7.3e-5 | 0.008 | 0.005 | × |
+| E2Es-DR (Lakshminarayanan et al., 2017) | -11.381 | -0.958±1.875 | 1.3e-4±8.5e-5 | 0.007 | 0.005 | × |
+| DFB (Pereira et al., 2020) | 2.905 | 6.023±3.110 | 8.7e-4±1.9e-4 | 0.019 | 0.018 | ✓ |
+| BNet (Xiao et al., 2023) | 0.147 | 0.745±0.505 | 2.3e-4±1.2e-4 | 0.010 | 0.009 | ✓ |
+| BNet-UP (Wang et al., 2023b) | 0.206 | 0.346±0.098 | 5.2e-5±3.2e-5 | 0.005 | 0.005 | × |
+| ABNet-10-SC (ours) | 0.233 | 0.570±0.360 | 5.9e-5±5.5e-5 | 0.006 | 0.005 | ✓ |
+| ABNet-10 (ours) | 0.039 | 0.272±0.443 | 1.2e-4±9.6e-5 | 0.008 | 0.007 | ✓ |
+| ABNet-100 (ours) | 0.053 | 0.123±0.177 | 1.1e-4±4.4e-5 | 0.005 | 0.004 | ✓ |
+
+# 4.3. Safe Robot Manipulation
+
+In robot manipulation, we employ a two-link planar robot manipulator to move a grasped object from an arbitrary point to an arbitrary destination while avoiding collisions with obstacles. All the models ($h$ copies/heads) have the same input (with uniformly distributed noise at $10\%$ of the input magnitude during testing). We compare our proposed ABNets with the same benchmark models as in the last subsection. The detailed problem setup and model introductions are given in Appendix C.3.
+
+Again, models/policies merging can improve the performance, as shown by the MSE metrics in Table 2 and the scalable training in Fig. 5. All the E2E-related models are not robust to noise and violate safety constraints (i.e., crash into obstacles) under noisy input since they have no formal guarantees; such an example is shown by the magenta end-effector trajectory in Fig. 5. As shown in Table 2, the proposed ABNet-100 model is the least conservative one and also has the lowest control uncertainties under noisy inputs (significantly improved compared with BNet and DFB), which demonstrates its advantage over other models. This uncertainty improvement is also shown by the control distributions in Fig. 9 in Appendix C.3 (BNet: red area vs. ABNet-100: blue area). BNet-UP achieves the best performance, but without safety guarantees.
+
+# 4.4. Vision-based End-to-End Autonomous Driving
+
+We finally test our models on a more complicated and realistic task: vision-based driving, using an open dataset and benchmark from VISTA (Amini et al., 2022). One of the ABNets, named ABNet-att, is constructed such that different heads focus on different parts of the image (left lane boundary, right lane boundary, etc.; the corresponding images are shown in Fig. 10 of Appendix C.4). For more experiment and model details, please refer to Appendix C.4.
+
+As shown in Table 3, the proposed ABNets avoid crashing into obstacles and achieve a $100\%$ obstacle passing rate, including ABNet-sc, which is trained in a scalable way from two ABNets (also shown by the scalable training in Fig. 6). This is because the ABNets can learn the correct steering control (the blue and green sine waves shown in Fig. 11 (right) in Appendix C.4) to avoid the obstacle without stopping in front of it. Compared to the baseline MPC, the proposed ABNet is much more efficient (0.004s vs. 0.872s). Although linearization could improve the efficiency of MPC, it may make the MPC lose its safety guarantees. The DFB and BNet-related models learn a significant deceleration control (shown in Fig. 11) to avoid crashing into obstacles, which explains why their obstacle passing rates are low compared to other models in Table 3, and why the blue trajectories (BNet) terminate near the obstacle in Fig. 6 (upper). Nonetheless, there are still some crash cases for the DFB and BNet models due to badly learned CBF parameters that make the inter-sampling effect (i.e., safety violation between discretized times) serious. Most importantly, our proposed ABNet can learn less uncertain controls for this complicated task, as shown in Table 3, the scalable training in Fig. 6, and Fig. 11 (e.g., ABNet: blue or ABNet-att: green area vs. BNet: red area).
+
+Table 3: Vision-based end-to-end autonomous driving closed-loop testing and comparisons with benchmarks. The new metrics are obstacle crash rate (CRASH) and obstacle passing rate (PASS).
+
+| MODEL | CRASH (↓) | PASS (↑) | SAFETY (≥0) | CONSER. (≥0 & ↓) | $u_1$ UNCERTAINTY (↓) | $u_2$ UNCERTAINTY (↓) | THEORET. GUAR. |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| V-E2E (Amini et al., 2022) | 6% | 94% | -60.297 | -0.610±21.165 | 0.443 | 0.222 | × |
+| E2Es-MCD (Gal & Ghahramani, 2016) | 8% | 92% | -60.566 | -2.211±22.343 | 0.429 | 0.227 | × |
+| E2Es-DR (Lakshminarayanan et al., 2017) | 9% | 91% | -60.572 | -1.499±21.500 | 0.431 | 0.224 | × |
+| DFB (Pereira et al., 2020) | 4% | 39% | -18.114 | -0.828±5.444 | 0.513 | 0.125 | ✓ |
+| BNet (Xiao et al., 2023) | 3% | 33% | -16.694 | -4.882±4.817 | 0.724 | 0.385 | ✓ |
+| BNet-UP (Wang et al., 2023b) | 2% | 35% | -23.252 | -5.190±4.920 | 0.726 | 0.532 | × |
+| ABNet (ours) | 0% | 100% | 1.455 | 6.132±2.181 | 0.168 | 0.316 | ✓ |
+| ABNet-att (ours) | 0% | 100% | 4.198 | 8.053±1.449 | 0.172 | 0.269 | ✓ |
+| ABNet-sc (ours) | 0% | 100% | 2.221 | 7.224±1.667 | 0.130 | 0.256 | ✓ |
+
+
+
+
+Figure 6: Vision-based end-to-end autonomous driving closed-loop testing trajectories in VISTA (upper) and ABNet performance as the number of model heads increases under scalable training (lower). This scalable training fuses the ABNet and ABNet-att models from Table 3 with safety guarantees.
+
+The ABNet-att can learn more consistent autonomous driving behavior than the ABNet due to the image attention setting, as shown by the magenta (ABNet-att) and cyan (ABNet) trajectories in Fig. 6 (upper) and the green (ABNet-att) and blue (ABNet) areas in Fig. 11. Ablation studies on the robustness of our ABNets in terms of safety under highly noisy inputs (50% noise level) are given in Table 4 of Appendix C.4.
+
+# 5. Related Works
+
+Scalability, merging and uncertainty in safe robot learning. Machine learning has been widely used in robot control (Bommasani et al., 2021; Singh et al., 2023; Wang et al., 2023a). However, there is increasing concern about machine learning, especially large foundation models, being used in robotics (Bommasani et al., 2021). Mixture-of-experts methods (Shazeer et al., 2017; Riquelme et al., 2021; Zhou et al., 2022) are scalable but make it hard to retain properties (such as safety) of the models. The uncertainty resulting from noisy model inputs or datasets also prevents deployment on real robots (Loquercio et al., 2020; Kahn et al., 2017). To address this, predictive uncertainty quantification (Gal & Ghahramani, 2016; Lakshminarayanan et al., 2017), which is also a model merging approach, has been widely adopted. It has been shown to work well in vision-based autonomous driving under noisy input (Wang et al., 2023b), using a Gaussian kernel with Scott's rule (Scott, 2015) to select the bandwidth. The main drawback of this technique is that it may make the system lose performance guarantees, such as safety. Other model merging approaches (Huang et al., 2023; Ramé et al., 2023; Wang et al., 2024) do not preserve safety either. We address the uncertainty and scalability problems with the proposed ABNets, which come with provable safety.
+
+CBFs and set invariance. In control theory, set invariance has been widely adopted to prove and enforce the safety of dynamical systems (Blanchini, 1999; Rakovic et al., 2005; Ames et al., 2017; Xiao & Belta, 2021; Xiao et al., 2023). The Control Barrier Function (CBF) (Ames et al., 2017; Xiao & Belta, 2021) is a state-of-the-art technique that can enforce set invariance (Aubin, 2009; Prajna et al., 2007; Wisniewski & Sloth, 2013) and transforms a nonlinear optimization problem into a quadratic program that is very efficient to solve. CBFs originate from the barrier functions originally used in optimization (Boyd & Vandenberghe, 2004). However, CBFs tend to make the system conservative (i.e., enforce safety at the cost of performance), and they are not scalable for building large models. Our proposed ABNet addresses all these limitations.
+
+Safety in neural networks. Safety is usually enforced using optimization. Barrier functions have been widely used in safe Reinforcement Learning (RL) (Tessler et al., 2018; Achiam et al., 2017). However, safety cannot be guaranteed in safe RL because the barrier functions are used as part of the reward function (a soft constraint). Recently, differentiable optimization has shown great potential for learning-based control with safety guarantees (Pereira et al., 2020; Amos et al., 2018; Xiao et al., 2023; Liu et al., 2023). A quadratic program (QP) can be employed as a layer in a neural network, i.e., OptNet (Amos & Kolter, 2017). OptNet has been used with CBFs in neural networks as a safety filter for controls (Pereira et al., 2020), in which the CBFs themselves are not trainable, which can significantly limit the learning capability. Neural network controllers with safety certificates have also been learned through verification-in-the-loop training (Deshmukh et al., 2019; Zhao et al., 2021; Ferlez et al., 2020). However, verification cannot be guaranteed to cover the whole state space, and it is also very computationally expensive. None of these methods are scalable to larger models, and all are subject to uncertainty, both of which the proposed ABNet addresses.
+
+# 6. Conclusions, Limitations and Future Work
+
+In this paper, we propose the Adaptive explicit-Barrier Net (ABNet), which merges many safety-critical learning models while preserving safety. The proposed ABNet is efficient to train, scalable for building larger safe learning models, achieves better performance, and is robust to input noise. We have demonstrated the effectiveness of the model on a series of robot control tasks. Nonetheless, our model (like all other barrier-based learning models (Ferlez et al., 2020; Xiao et al., 2023)) still has a few limitations that motivate further research.
+
+Limitations. First, all the ABNets have the same safety constraints; we will explore how to merge ABNets with different safety constraints in the future. Second, the ABNet requires safety specifications that may be unknown in some robot control tasks; we may learn the safety specifications from data (Robey et al., 2020; Srinivasan et al., 2020), and this can also be done in conjunction with ABNet. Third, the model merging is done in the output space; future work will focus on model merging with safety guarantees in the parameter space. Finally, we will apply the proposed model in environments that involve contact handling, such as grasping.
+
+# Acknowledgements
+
+The research was supported in part by Capgemini Engineering. It was also partially sponsored by the United States Air Force Research Laboratory and the United States Air Force Artificial Intelligence Accelerator and was accomplished under Cooperative Agreement Number FA8750-19-2-1000. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the United States Air Force or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein. This research was also supported in part by the AI2050 program at Schmidt Futures (Grant G-965 22-63172), and by the ONR Science of Autonomy program N00014-23-1-2354.
+
+# Impact Statement
+
+This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
+
+# References
+
+Achiam, J., Held, D., Tamar, A., and Abbeel, P. Constrained policy optimization. In International conference on machine learning, pp. 22-31. PMLR, 2017.
+Agarwal, N., Brukhim, N., Hazan, E., and Lu, Z. Boosting for control of dynamical systems. In International Conference on Machine Learning, pp. 96-103. PMLR, 2020.
+Ames, A. D., Xu, X., Grizzle, J. W., and Tabuada, P. Control barrier function based quadratic programs for safety critical systems. IEEE Transactions on Automatic Control, 62(8):3861-3876, 2017.
+Amini, A., Wang, T.-H., Gilitschenski, I., Schwarting, W., Liu, Z., Han, S., Karaman, S., and Rus, D. Vista 2.0: An open, data-driven simulator for multimodal sensing and policy learning for autonomous vehicles. In 2022 International Conference on Robotics and Automation (ICRA), pp. 2419-2426. IEEE, 2022.
+Amos, B. and Kolter, J. Z. Optnet: Differentiable optimization as a layer in neural networks. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, pp. 136-145, 2017.
+Amos, B., Rodriguez, I. D. J., Sacks, J., Boots, B., and Kolter, J. Z. Differentiable mpc for end-to-end planning and control. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pp. 8299-8310. Curran Associates Inc., 2018.
+Aubin, J.-P. Viability theory. Springer, 2009.
+Beygelzimer, A., Hazan, E., Kale, S., and Luo, H. Online gradient boosting. Advances in neural information processing systems, 28, 2015.
+Blanchini, F. Set invariance in control. Automatica, 35(11): 1747-1767, 1999.
+Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., Bernstein, M. S., Bohg, J., Bosse-lut, A., Brunskill, E., et al. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258, 2021.
+Boyd, S. P. and Vandenberghe, L. Convex optimization. Cambridge university press, New York, 2004.
+Deshmukh, J. V., Kapinski, J. P., Yamaguchi, T., and Prokhorov, D. Learning deep neural network controllers for dynamical systems with safety guarantees: Invited paper. In 2019 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), pp. 1-7, 2019.
+Ferlez, J., Elnaggar, M., Shoukry, Y., and Fleming, C. Shieldnn: A provably safe nn filter for unsafe nn controllers. preprint arXiv:2006.09564, 2020.
+Gal, Y. and Ghahramani, Z. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In international conference on machine learning, pp. 1050-1059. PMLR, 2016.
+Glotfelter, P., Cortes, J., and Egerstedt, M. Nonsmooth barrier functions with applications to multi-robot systems. IEEE control systems letters, 1(2):310-315, 2017.
+Huang, C., Liu, Q., Lin, B. Y., Pang, T., Du, C., and Lin, M. Lorahub: Efficient cross-task generalization via dynamic lora composition. arXiv preprint arXiv:2307.13269, 2023.
+Kahn, G., Villaflor, A., Pong, V., Abbeel, P., and Levine, S. Uncertainty-aware reinforcement learning for collision avoidance. arXiv preprint arXiv:1702.01182, 2017.
+Lakshminarayanan, B., Pritzel, A., and Blundell, C. Simple and scalable predictive uncertainty estimation using deep ensembles. Advances in neural information processing systems, 30, 2017.
+
+Levine, S., Finn, C., Darrell, T., and Abbeel, P. End-to-end training of deep visuomotor policies. Journal of Machine Learning Research, 17(39):1-40, 2016.
+Li, J., Li, D., Xiong, C., and Hoi, S. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In International conference on machine learning, pp. 12888-12900. PMLR, 2022.
+Liu, W., Xiao, W., and Belta, C. Learning robust and correct controllers from signal temporal logic specifications using barriernet. In 2023 62nd IEEE Conference on Decision and Control (CDC), pp. 7049-7054. IEEE, 2023.
+Loquercio, A., Segu, M., and Scaramuzza, D. A general framework for uncertainty estimation in deep learning. IEEE Robotics and Automation Letters, 5(2):3153-3160, 2020.
+Luenberger, D. G. Optimization by vector space methods. John Wiley & Sons, 1997.
+Nagumo, M. Über die lage der integralkurven gewöhnlicher differentialgleichungen. In Proceedings of the Physico-Mathematical Society of Japan. 3rd Series. 24:551-559, 1942.
+Pereira, M. A., Wang, Z., Exarchos, I., and Theodorou, E. A. Safe optimal control using stochastic barrier functions and deep forward-backward sdes. In Conference on Robot Learning, 2020.
+Prajna, S., Jadbabaie, A., and Pappas, G. J. A framework for worst-case and stochastic safety verification using barrier certificates. IEEE Transactions on Automatic Control, 52 (8):1415-1428, 2007.
+Rakovic, S. V., Kerrigan, E. C., Kouramas, K. I., and Mayne, D. Q. Invariant approximations of the minimal robust positively invariant set. IEEE Transactions on automatic control, 50(3):406-410, 2005.
+Ramé, A., Ahuja, K., Zhang, J., Cord, M., Bottou, L., and Lopez-Paz, D. Model ratatouille: Recycling diverse models for out-of-distribution generalization. In International Conference on Machine Learning, pp. 28656-28679. PMLR, 2023.
+Riquelme, C., Puigcerver, J., Mustafa, B., Neumann, M., Jenatton, R., Susano Pinto, A., Keysers, D., and Houlsby, N. Scaling vision with sparse mixture of experts. Advances in Neural Information Processing Systems, 34: 8583-8595, 2021.
+Robey, A., Hu, H., Lindemann, L., Zhang, H., Dimarogonas, D. V., Tu, S., and Matni, N. Learning control barrier functions from expert demonstrations. In 2020 59th IEEE Conference on Decision and Control (CDC), pp. 3717-3724, 2020.
+Rucco, A., Notarstefano, G., and Hauser, J. An efficient minimum-time trajectory generation strategy for two-track car vehicles. IEEE Transactions on Control Systems Technology, 23(4):1505-1519, 2015.
+Scott, D. W. Multivariate density estimation: theory, practice, and visualization. John Wiley & Sons, 2015.
+Shazeer, N., Mirhoseini, A., Maziarz, K., Davis, A., Le, Q., Hinton, G., and Dean, J. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538, 2017.
+Singh, I., Blukis, V., Mousavian, A., Goyal, A., Xu, D., Tremblay, J., Fox, D., Thomason, J., and Garg, A. Prompt: Generating situated robot task plans using large language models. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pp. 11523-11530. IEEE, 2023.
+Srinivasan, M., Dabholkar, A., Coogan, S., and Vela, P. A. Synthesis of control barrier functions using a supervised machine learning approach. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 7139-7145, 2020.
+Tessler, C., Mankowitz, D. J., and Mannor, S. Reward constrained policy optimization. arXiv preprint arXiv:1805.11074, 2018.
+Wang, L., Zhao, J., Du, Y., Adelson, E. H., and Tedrake, R. Poco: Policy composition from and for heterogeneous robot learning. arXiv preprint arXiv:2402.02511, 2024.
+Wang, T.-H., Maalouf, A., Xiao, W., Ban, Y., Amini, A., Rosman, G., Karaman, S., and Rus, D. Drive anywhere: Generalizable end-to-end autonomous driving with multi-modal foundation models. arXiv preprint arXiv:2310.17642, 2023a.
+Wang, T.-H., Xiao, W., Chahine, M., Amini, A., Hasani, R., and Rus, D. Learning stability attention in vision-based end-to-end driving policies. In Proceedings of The 5th Annual Learning for Dynamics and Control Conference, volume 211 of Proceedings of Machine Learning Research, pp. 1099-1111. PMLR, 15-16 Jun 2023b.
+Wisniewski, R. and Sloth, C. Converse barrier certificate theorem. In Proc. of 52nd IEEE Conference on Decision and Control, pp. 4713-4718, Florence, Italy, 2013.
+Xiao, W. and Belta, C. High-order control barrier functions. IEEE Transactions on Automatic Control, 67(7):3655-3662, 2021.
+
+Xiao, W., Belta, C., and Cassandras, C. G. Adaptive control barrier functions. IEEE Transactions on Automatic Control, 67(5):2267-2281, 2021.
+Xiao, W., Wang, T.-H., Hasani, R., Chahine, M., Amini, A., Li, X., and Rus, D. Barriernet: Differentiable control barrier functions for learning of safe robot control. IEEE Transactions on Robotics, 39(3):2289-2307, 2023.
+Zhao, H., Zeng, X., Chen, T., Liu, Z., and Woodcock, J. Learning safe neural network controllers with barrier certificates. Form Asp Comp, 33:437-455, 2021.
+Zhou, Y., Lei, T., Liu, H., Du, N., Huang, Y., Zhao, V., Dai, A. M., Le, Q. V., Laudon, J., et al. Mixture-of-experts with expert choice routing. Advances in Neural Information Processing Systems, 35:7103-7114, 2022.
+
+# A. Closed-form Solution of the Explicit-Barrier
+
+Here, we show the process of deriving the closed-form solution of the explicit-Barrier following (Luenberger, 1997) (Ames et al., 2017).
+
+Similarly to (3), we consider the following optimization problem (with the two specifications $b_{I}(x), b_{II}(x)$ defined in the main text) corresponding to the explicit-Barrier:
+
+$$
+\boldsymbol {u} _ {k} = \arg \min _ {\boldsymbol {u} (t)} \frac {1}{2} \boldsymbol {u} (t) ^ {T} H \left(\boldsymbol {z} _ {k} \mid \theta_ {h, k}\right) \boldsymbol {u} (t) + F ^ {T} \left(\boldsymbol {z} _ {k} \mid \theta_ {f, k}\right) \boldsymbol {u} (t) \tag {7}
+$$
+
+s.t.
+
+$$
+L _ {f} \psi_ {I, m - 1} (\boldsymbol {x}, \boldsymbol {z} | \theta_ {p}) + \left[ L _ {g} \psi_ {I, m - 1} (\boldsymbol {x}, \boldsymbol {z} | \theta_ {p}) \right] \boldsymbol {u} + p _ {m, k} \left(\boldsymbol {z} _ {k} | \theta_ {p, k} ^ {m}\right) \alpha_ {I, m} \left(\psi_ {I, m - 1} (\boldsymbol {x}, \boldsymbol {z} | \theta_ {p})\right) \geq 0,
+$$
+
+$$
+L _ {f} \psi_ {I I, m - 1} (\boldsymbol {x}, \boldsymbol {z} | \theta_ {p}) + \left[ L _ {g} \psi_ {I I, m - 1} (\boldsymbol {x}, \boldsymbol {z} | \theta_ {p}) \right] \boldsymbol {u} + p _ {m, k} \left(\boldsymbol {z} _ {k} | \theta_ {p, k} ^ {m}\right) \alpha_ {I I, m} \left(\psi_ {I I, m - 1} (\boldsymbol {x}, \boldsymbol {z} | \theta_ {p})\right) \geq 0.
+$$
+
+We first define
+
+$$
+g _ {1} (\boldsymbol {x}) = \left[ - L _ {g} \psi_ {I, m - 1} (\boldsymbol {x}, \boldsymbol {z} | \theta_ {p}) \right], \quad h _ {1} (\boldsymbol {x}) = L _ {f} \psi_ {I, m - 1} (\boldsymbol {x}, \boldsymbol {z} | \theta_ {p}) + p _ {m, k} \left(\boldsymbol {z} _ {k} \mid \theta_ {p, k} ^ {m}\right) \alpha_ {I, m} \left(\psi_ {I, m - 1} (\boldsymbol {x}, \boldsymbol {z} | \theta_ {p})\right), \tag {8}
+$$
+
+$$
+g _ {2} (\boldsymbol {x}) = \left[ - L _ {g} \psi_ {I I, m - 1} (\boldsymbol {x}, \boldsymbol {z} | \theta_ {p}) \right], \quad h _ {2} (\boldsymbol {x}) = L _ {f} \psi_ {I I, m - 1} (\boldsymbol {x}, \boldsymbol {z} | \theta_ {p}) + p _ {m, k} (z _ {k} | \theta_ {p, k} ^ {m}) \alpha_ {I I, m} (\psi_ {I I, m - 1} (\boldsymbol {x}, \boldsymbol {z} | \theta_ {p})).
+$$
+
+Since the matrix $H$ in the above optimization (7) is positive definite, we can define
+
+$$
+[ \hat {g} _ {1} (\boldsymbol {x}), \hat {g} _ {2} (\boldsymbol {x}) ] = H \left(\boldsymbol {z} _ {k} \mid \theta_ {h, k}\right) ^ {- 1} \left[ g _ {1} (\boldsymbol {x}), g _ {2} (\boldsymbol {x}) \right],
+$$
+
+$$
+\left[ \begin{array}{l} \hat {h} _ {1} (\boldsymbol {x}) \\ \hat {h} _ {2} (\boldsymbol {x}) \end{array} \right] = \left[ \begin{array}{l} h _ {1} (\boldsymbol {x}) \\ h _ {2} (\boldsymbol {x}) \end{array} \right] - \left[ \begin{array}{l} g _ {1} (\boldsymbol {x}) ^ {T} \\ g _ {2} (\boldsymbol {x}) ^ {T} \end{array} \right] \hat {\boldsymbol {u}} _ {k} \tag {9}
+$$
+
+where
+
+$$
+\hat {\boldsymbol {u}} _ {k} = - H \left(\boldsymbol {z} _ {k} \mid \theta_ {h, k}\right) ^ {- 1} F \left(\boldsymbol {z} _ {k} \mid \theta_ {f, k}\right). \tag {10}
+$$
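To make the quantities in (8)-(10) concrete, here is a small pure-Python numerical sketch (the values of $H$, $F$, $g_i$, $h_i$ below are hypothetical illustration data, not from the paper); it also forms the Gram matrix $G_{ij} = \langle \hat{g}_i, \hat{g}_j \rangle$ used later in the derivation:

```python
def inv2(H):
    """Inverse of a 2x2 matrix (assumed invertible)."""
    (a, b), (c, d) = H
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Hypothetical data: H positive definite, F a learned reference term,
# g_i, h_i the constraint data from (8).
H = [[2.0, 0.0], [0.0, 1.0]]
F = [1.0, -2.0]
g1, h1 = [1.0, 0.0], 0.5
g2, h2 = [0.0, 1.0], 1.0

Hinv = inv2(H)
u_hat = [-x for x in matvec(Hinv, F)]   # (10): u_hat = -H^{-1} F
g1_hat = matvec(Hinv, g1)               # hat{g}_i = H^{-1} g_i
g2_hat = matvec(Hinv, g2)
h1_hat = h1 - dot(g1, u_hat)            # (9): hat{h}_i = h_i - g_i^T u_hat
h2_hat = h2 - dot(g2, u_hat)
# Gram matrix G_ij = <g_hat_i, g_hat_j> under the H-weighted inner product
G = [[dot(gi, matvec(H, gj)) for gj in (g1_hat, g2_hat)]
     for gi in (g1_hat, g2_hat)]
```

Note that $G$ is symmetric by construction, and $H \hat{\pmb{u}} = -F$ recovers the unconstrained minimizer of (7).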
+
+Next, let $\pmb{v}_k \coloneqq \pmb{u}_k - \hat{\pmb{u}}_k$ and $\langle \cdot, \cdot \rangle$ define an inner product with weight matrix $H(\pmb{z}_k | \theta_{h,k})$ so that $\langle \pmb{v}_k, \pmb{v}_k \rangle = (\pmb{v}_k)^T H(\pmb{z}_k | \theta_{h,k}) \pmb{v}_k$ . The optimization problem (7) is equivalent to:
+
+$$
+\boldsymbol {v} _ {k} ^ {*} = \underset {\boldsymbol {v} _ {k}} {\arg \min } \langle \boldsymbol {v} _ {k}, \boldsymbol {v} _ {k} \rangle ,
+$$
+
+$$
+\text {s.t.} \quad \langle \hat {g} _ {1} (\boldsymbol {x}), \boldsymbol {v} _ {k} \rangle \leq \hat {h} _ {1} (\boldsymbol {x}), \tag {11}
+$$
+
+$$
+\langle \hat {g} _ {2} (\boldsymbol {x}), \boldsymbol {v} _ {k} \rangle \leq \hat {h} _ {2} (\boldsymbol {x}).
+$$
+
+Finally, we have that the optimal solution of (7) is given by
+
+$$
+\boldsymbol {u} _ {k} = \boldsymbol {v} _ {k} ^ {*} + \hat {\boldsymbol {u}} _ {k}. \tag {12}
+$$
+
+Let $G(\pmb{x}) = [G_{ij}(\pmb{x})] = [\langle \hat{g}_i(\pmb{x}),\hat{g}_j(\pmb{x})\rangle ],\ i,j = 1,2$ be the Gram matrix. Following (Luenberger, 1997) [Ch. 3], the unique solution $\pmb{v}_k^*$ to (11) is given by
+
+$$
+\boldsymbol {v} _ {k} ^ {*} = \lambda_ {1} (\boldsymbol {x}) \hat {g} _ {1} (\boldsymbol {x}) + \lambda_ {2} (\boldsymbol {x}) \hat {g} _ {2} (\boldsymbol {x}) \tag {13}
+$$
+
+where the two gate functions $\lambda_1(x),\lambda_2(x)$ are given by:
+
+$$
+\lambda_ {1} (\boldsymbol {x}) = \left\{ \begin{array}{l l} 0 & \text {i f} G _ {2 1} (\boldsymbol {x}) \max \left(\hat {h} _ {2} (\boldsymbol {x}), 0\right) - G _ {2 2} (\boldsymbol {x}) \hat {h} _ {1} (\boldsymbol {x}) < 0 \\ \frac {\operatorname* {m a x} \left(\hat {h} _ {1} (\boldsymbol {x}) , 0\right)}{G _ {1 1} (\boldsymbol {x})} & \text {i f} G _ {1 2} (\boldsymbol {x}) \max \left(\hat {h} _ {1} (\boldsymbol {x}), 0\right) - G _ {1 1} (\boldsymbol {x}) \hat {h} _ {2} (\boldsymbol {x}) < 0 \\ \frac {\operatorname* {m a x} \left(G _ {2 2} (\boldsymbol {x}) \hat {h} _ {1} (\boldsymbol {x}) - G _ {2 1} (\boldsymbol {x}) \hat {h} _ {2} (\boldsymbol {x}), 0\right)}{G _ {1 1} (\boldsymbol {x}) G _ {2 2} (\boldsymbol {x}) - G _ {1 2} (\boldsymbol {x}) G _ {2 1} (\boldsymbol {x})} & \text {o t h e r w i s e .} \end{array} \right. \tag {14}
+$$
+
$$
\lambda_{2}(\boldsymbol{x}) = \left\{ \begin{array}{ll} \frac{\max\big(\hat{h}_{2}(\boldsymbol{x}),0\big)}{G_{22}(\boldsymbol{x})} & \text{if } G_{21}(\boldsymbol{x})\max\big(\hat{h}_{2}(\boldsymbol{x}),0\big) - G_{22}(\boldsymbol{x})\hat{h}_{1}(\boldsymbol{x}) < 0 \\ 0 & \text{if } G_{12}(\boldsymbol{x})\max\big(\hat{h}_{1}(\boldsymbol{x}),0\big) - G_{11}(\boldsymbol{x})\hat{h}_{2}(\boldsymbol{x}) < 0 \\ \frac{\max\big(G_{11}(\boldsymbol{x})\hat{h}_{2}(\boldsymbol{x}) - G_{12}(\boldsymbol{x})\hat{h}_{1}(\boldsymbol{x}),0\big)}{G_{11}(\boldsymbol{x})G_{22}(\boldsymbol{x}) - G_{12}(\boldsymbol{x})G_{21}(\boldsymbol{x})} & \text{otherwise.} \end{array} \right. \tag{15}
$$
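For concreteness, the case analysis of Eqs. (14)-(15) can be transcribed directly. The sketch below (hypothetical helper name `gate_functions`; it assumes scalar slacks $\hat{h}_1,\hat{h}_2$ and a $2\times 2$ Gram matrix at a fixed state $\pmb{x}$) evaluates both gates and the resulting solution $\pmb{v}_k^* = \lambda_1\hat{g}_1 + \lambda_2\hat{g}_2$:

```python
import numpy as np

def gate_functions(G, h1, h2):
    """Evaluate the closed-form gates lambda_1, lambda_2 of Eqs. (14)-(15)
    for a 2x2 Gram matrix G = [<g_i, g_j>] and scalar slacks h1, h2."""
    G11, G12 = G[0, 0], G[0, 1]
    G21, G22 = G[1, 0], G[1, 1]
    if G21 * max(h2, 0.0) - G22 * h1 < 0:          # first branch of (14)-(15)
        return 0.0, max(h2, 0.0) / G22
    if G12 * max(h1, 0.0) - G11 * h2 < 0:          # second branch
        return max(h1, 0.0) / G11, 0.0
    det = G11 * G22 - G12 * G21                    # "otherwise" branch
    lam1 = max(G22 * h1 - G21 * h2, 0.0) / det
    lam2 = max(G11 * h2 - G12 * h1, 0.0) / det
    return lam1, lam2

def v_star(g1, g2, h1, h2):
    """Eq. (13): combine the basis directions with the two gates."""
    G = np.array([[g1 @ g1, g1 @ g2], [g2 @ g1, g2 @ g2]])
    lam1, lam2 = gate_functions(G, h1, h2)
    return lam1 * g1 + lam2 * g2
```

Note that when both slacks are negative (no constraint is active), both gates vanish and $\pmb{v}_k^* = 0$.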
+
+# B. Proof of Theorems
+
Theorem 3.1. (Safety of ABNets) Consider the multi-head ABNet formulated as in (4), possibly incorporating other safe learning models (BarrierNet, dMPC, etc.). If the system is initially safe (i.e., $b_{j}(\boldsymbol{x}(t_{0})) \geq 0, \forall j \in S$ ), then a control policy $\boldsymbol{u}$ from the ABNet output (5) guarantees the safety of the system, i.e., $b_{j}(\boldsymbol{x}(t)) \geq 0, \forall j \in S, \forall t \geq t_{0}$ .
+
Proof: The proof outline is to first show the existence of new HOCBF constraints (corresponding to all the safety specifications) that are defined over the output of the ABNet. We then use Nagumo's theorem (Nagumo, 1942) to recursively show the forward invariance of each safety set in the HOCBFs, which eventually implies the satisfaction of the safety specifications $b_{j}(\boldsymbol{x}) \geq 0, \forall j \in S$ .
+
First, we show how we may ensure the safety of the ABNet when it incorporates other safe learning models, such as BarrierNet, dMPC, etc. Given a safe learning model, we have $b_{j}(\boldsymbol{x}) \geq 0, \forall j \in S$ . By the adaptive CBF theorem (Xiao et al., 2021), the satisfaction of the adaptive CBF constraint is a necessary and sufficient condition for the safety of the system. In other words, $b_{j}(\boldsymbol{x}) \geq 0, \forall j \in S$ implies that there exists an adaptive CBF:
+
+$$
+L _ {f} \psi_ {j, m - 1} (\boldsymbol {x}, \boldsymbol {z} | \theta_ {p}) + \left[ L _ {g} \psi_ {j, m - 1} (\boldsymbol {x}, \boldsymbol {z} | \theta_ {p}) \right] \boldsymbol {u} _ {k} + p _ {m, k} \left(\boldsymbol {z} _ {k} | \theta_ {p, k} ^ {m}\right) \alpha_ {j, m} \left(\psi_ {j, m - 1} (\boldsymbol {x}, \boldsymbol {z} | \theta_ {p})\right) \geq 0, j \in S, \tag {16}
+$$
+
+where $p_{m,k}(\boldsymbol{z}_k|\boldsymbol{\theta}_{p,k}^m) > 0$ is the penalty (adaptive) function, and $\psi_{j,m-1}(\boldsymbol{x},\boldsymbol{z}|\boldsymbol{\theta}_p)$ is defined as in (4).
+
Next, we consider the explicit-barrier model. As shown in Appendix Sec. A, the explicit-barrier (4) is the exact solution of the QP (7). The solution of the QP (7) further implies the satisfaction of $b_{I}(\pmb{x}) \geq 0, b_{II}(\pmb{x}) \geq 0$ by the HOCBF theory (Xiao & Belta, 2021), which is equivalent to having $b_{j}(\pmb{x}) \geq 0, \forall j \in S$ (shown right before (4)). Again, by the adaptive CBF theorem, there exist adaptive CBFs of the form (16).
+
+Finally, we only need to consider the case of fusing controllers $\pmb{u}_k, k \in \{1, \dots, h\}$ that satisfy the following:
+
+$$
+L _ {f} \psi_ {j, m - 1} (\boldsymbol {x}, \boldsymbol {z} | \theta_ {p}) + \left[ L _ {g} \psi_ {j, m - 1} (\boldsymbol {x}, \boldsymbol {z} | \theta_ {p}) \right] \boldsymbol {u} _ {k} + p _ {m, k} \left(\boldsymbol {z} _ {k} | \theta_ {p, k} ^ {m}\right) \alpha_ {j, m} \left(\psi_ {j, m - 1} (\boldsymbol {x}, \boldsymbol {z} | \theta_ {p})\right) \geq 0, j \in S, \tag {17}
+$$
+
Multiplying the last equation by the weight $w_{k} \geq 0$ , we have
+
+$$
+w _ {k} L _ {f} \psi_ {j, m - 1} (\boldsymbol {x}, \boldsymbol {z} | \theta_ {p}) + w _ {k} \left[ L _ {g} \psi_ {j, m - 1} (\boldsymbol {x}, \boldsymbol {z} | \theta_ {p}) \right] \boldsymbol {u} _ {k} + w _ {k} p _ {m, k} \left(z _ {k} | \theta_ {p, k} ^ {m}\right) \alpha_ {j, m} \left(\psi_ {j, m - 1} (\boldsymbol {x}, \boldsymbol {z} | \theta_ {p})\right) \geq 0, j \in S, \tag {18}
+$$
+
Summing the last equation over all $k \in \{1, \dots, h\}$ yields:
+
$$
\begin{array}{l} \displaystyle \sum_{k=1}^{h} w_{k} L_{f}\psi_{j,m-1}(\boldsymbol{x},\boldsymbol{z}|\theta_{p}) + \sum_{k=1}^{h} w_{k}\left[L_{g}\psi_{j,m-1}(\boldsymbol{x},\boldsymbol{z}|\theta_{p})\right]\boldsymbol{u}_{k} \\ \displaystyle \quad + \sum_{k=1}^{h} w_{k} p_{m,k}\left(\boldsymbol{z}_{k}\mid\theta_{p,k}^{m}\right)\alpha_{j,m}\left(\psi_{j,m-1}(\boldsymbol{x},\boldsymbol{z}\mid\theta_{p})\right) \geq 0, \quad j \in S, \end{array} \tag{19}
$$
+
Since $L_{g}\psi_{j,m - 1}(\pmb {x},\pmb {z}|\theta_{p})$ is a vector that is independent of $k$ and $\sum_{k = 1}^{h}w_{k} = 1$ , the last equation can be rewritten as:
+
$$
\begin{array}{l} \displaystyle L_{f}\psi_{j,m-1}(\boldsymbol{x},\boldsymbol{z}|\theta_{p}) + L_{g}\psi_{j,m-1}(\boldsymbol{x},\boldsymbol{z}|\theta_{p})\left(\sum_{k=1}^{h} w_{k}\boldsymbol{u}_{k}\right) \\ \displaystyle \quad + \sum_{k=1}^{h} w_{k} p_{m,k}\left(\boldsymbol{z}_{k}\mid\boldsymbol{\theta}_{p,k}^{m}\right)\alpha_{j,m}\left(\psi_{j,m-1}(\boldsymbol{x},\boldsymbol{z}\mid\boldsymbol{\theta}_{p})\right) \geq 0, \quad j \in S, \end{array} \tag{20}
$$
+
The summation of class $\mathcal{K}$ functions is also a class $\mathcal{K}$ function. Since the $\alpha_{j,m}$ are class $\mathcal{K}$ functions, $\sum_{k=1}^{h} w_k p_{m,k} (\pmb{z}_k | \theta_{p,k}^m) \alpha_{j,m} (\psi_{j,m-1}(\pmb{x}, \pmb{z} | \theta_p))$ is also a class $\mathcal{K}$ function of $\psi_{j,m-1}(\pmb{x}, \pmb{z} | \theta_p)$ . Therefore, the last equation gives the new HOCBF constraints defined over the output of the ABNet, i.e., $\sum_{k=1}^{h} w_k \pmb{u}_k$ . In other words, whenever $\psi_{j,m-1}(\pmb{x}, \pmb{z} | \theta_p) = 0$ , we have
+
+$$
+L _ {f} \psi_ {j, m - 1} (\boldsymbol {x}, \boldsymbol {z} | \boldsymbol {\theta} _ {p}) + L _ {g} \psi_ {j, m - 1} (\boldsymbol {x}, \boldsymbol {z} | \boldsymbol {\theta} _ {p}) \left(\sum_ {k = 1} ^ {h} w _ {k} \boldsymbol {u} _ {k}\right) \geq 0, j \in S, \tag {21}
+$$
+
The controls (outputs of the ABNet) $\sum_{k=1}^{h} w_k \pmb{u}_k \equiv \pmb{u}$ are directly used to drive the system, and $\pmb{z}$ is taken as piecewise-constant within discretized time intervals (Xiao et al., 2023). Therefore, the last equation can be rewritten as
+
+$$
+\frac {\partial \psi_ {j , m - 1} (\boldsymbol {x} , \boldsymbol {z} | \theta_ {p})}{\partial \boldsymbol {x}} (f (\boldsymbol {x}) + g (\boldsymbol {x}) \boldsymbol {u}) = \frac {\partial \psi_ {j , m - 1} (\boldsymbol {x} , \boldsymbol {z} | \theta_ {p})}{\partial \boldsymbol {x}} \dot {\boldsymbol {x}} = \dot {\psi} _ {j, m - 1} (\boldsymbol {x}, \boldsymbol {z} | \theta_ {p}) \geq 0, j \in S, \tag {22}
+$$
+
Since $b_{j}(\pmb{x}(t_{0})) \geq 0$ , we can always initialize the HOCBF definition such that $\psi_{j,m-1}(\pmb{x},\pmb{z}|\theta_{p}) \geq 0$ is satisfied at $t_{0}$ (Xiao & Belta, 2021). By Nagumo's theorem (Nagumo, 1942) and (20)-(22), we have that $\psi_{j,m-1}(\pmb{x},\pmb{z}|\theta_{p}) \geq 0, \forall t \geq t_{0}$ .
+
Recursively, we can show that $\psi_{j,i}(\pmb{x},\pmb{z}|\theta_p) \geq 0, \forall t \geq t_0, \forall i \in \{0,\dots,m-1\}$ from $i = m-1$ to $i = 0$ . Since $b_j(\pmb{x}) = \psi_{j,0}(\pmb{x},\pmb{z}|\theta_p)$ by (3), we have that $b_j(\pmb{x}(t)) \geq 0, \forall t \geq t_0, \forall j \in S$ , which establishes the safety guarantees of the ABNet for the system.
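The fusion step of the proof, from (17) to (20), can be sanity-checked numerically. The sketch below (an illustration under the simplifying assumption of a fixed state, so that $L_f\psi$ and $L_g\psi$ reduce to constants) draws random head controls that each satisfy their own constraint (17) and verifies that the convex combination satisfies the fused constraint (20):

```python
import numpy as np

rng = np.random.default_rng(0)

Lg = rng.normal(size=2)            # shared row vector L_g psi (independent of k)
Lf = 0.3                           # shared drift term L_f psi
h = 5                              # number of heads
w = rng.random(h)
w /= w.sum()                       # convex weights, sum to 1

# Draw head controls u_k and per-head class-K terms alpha_k so that (17)
# holds for every head: Lf + Lg @ u_k + alpha_k >= 0.
us, alphas = [], []
for _ in range(h):
    u = rng.normal(size=2)
    alpha = max(0.0, -(Lf + Lg @ u)) + rng.random()  # slack makes (17) hold
    us.append(u)
    alphas.append(alpha)

# Fused control and fused class-K term as in (19)-(20).
u_fused = sum(wk * uk for wk, uk in zip(w, us))
alpha_fused = sum(wk * ak for wk, ak in zip(w, alphas))
lhs = Lf + Lg @ u_fused + alpha_fused   # left-hand side of (20)
```

Because each per-head inequality is linear in $\pmb{u}_k$ and $\alpha_k$, the weighted sum preserves the sign exactly, matching the algebra in (18)-(20).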
+
Theorem 3.2. (Safety of merging of ABNets) Given two ABNets, the merged model using the form in (5) again guarantees the safety of the system.
+
Proof: The proof outline is similar to that of Theorem 3.1. From each ABNet, we can show the existence of new HOCBF constraints (corresponding to all the safety specifications) that are defined over the output of each ABNet. We can then show the existence of another set of new HOCBF constraints (corresponding to all the safety specifications) that are defined over the output of the merged ABNet. Finally, we again use Nagumo's theorem (Nagumo, 1942) to recursively show the forward invariance of each safety set in the HOCBFs, which eventually implies the satisfaction of the safety specifications $b_{j}(\boldsymbol{x}) \geq 0, \forall j \in S$ .
+
+The mathematical proof is similar to that of Theorem 3.1, and thus is omitted.
+
+# C. Experiment Details
+
+Metrics used in all the tables. The SAFETY metric is defined as:
+
$$
\text{SAFETY} = \min_{k}\left\{\min_{t \in [t_{0},T]} b(\boldsymbol{x}(t))\right\}_{k}, \quad k \in \{1,\dots,N\}, \tag{23}
$$
+
where $N$ is the number of testing runs ( $N = 100$ in this case), $T$ is the final time of each run, and $b(\pmb{x}) \geq 0$ is the safety constraint given explicitly in each experiment below.
+
+The CONSER. metric is defined as
+
$$
\begin{array}{l} \displaystyle \text{CONSER. mean} = \operatorname*{mean}_{k}\left\{\min_{t \in [t_{0},T]} b(\boldsymbol{x}(t))\right\}_{k}, \quad k \in \{1,\dots,N\}, \\ \displaystyle \text{CONSER. std} = \operatorname*{std}_{k}\left\{\min_{t \in [t_{0},T]} b(\boldsymbol{x}(t))\right\}_{k}, \quad k \in \{1,\dots,N\}. \end{array} \tag{24}
$$
+
The UNCERTAINTY metric for both controls is calculated by:
+
$$
u_{i}\ \text{UNCERTAINTY} = \operatorname*{mean}_{t \in [t_{0},T]}\left\{\operatorname*{std}_{k}\left\{u_{i}(t)\right\}_{k},\ k \in \{1,\dots,N\}\right\}, \quad i \in \{1,2\}. \tag{25}
$$
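The three metrics (23)-(25) reduce to a few array reductions. A minimal NumPy sketch (hypothetical helper name `metrics`; it assumes the $N$ runs are stored as time-aligned arrays, so the std over runs in (25) is well defined):

```python
import numpy as np

def metrics(b_runs, u_runs):
    """Compute the SAFETY (23), CONSER. (24), and UNCERTAINTY (25) metrics.
    b_runs: array of shape (N, T)    -- b(x(t)) per run and time step.
    u_runs: array of shape (N, T, 2) -- the two controls per run and time step."""
    per_run_min = b_runs.min(axis=1)          # min_t b(x(t)) for each run k
    safety = per_run_min.min()                # Eq. (23): worst case over runs
    conser_mean = per_run_min.mean()          # Eq. (24): mean over runs
    conser_std = per_run_min.std()            # Eq. (24): std over runs
    # Eq. (25): std over runs at each time step, then mean over time.
    uncertainty = u_runs.std(axis=0).mean(axis=0)
    return safety, conser_mean, conser_std, uncertainty
```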
+
All the class $\mathcal{K}$ functions in the BarrierNets/ABNets are implemented as linear functions with trainable slopes.
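One simple way to keep such a trainable slope valid is to map a raw parameter through a softplus, so the slope stays positive and the function remains class $\mathcal{K}$ (strictly increasing with $\alpha(0)=0$). This is a sketch of that parameterization, not the authors' exact implementation:

```python
import numpy as np

def softplus(x):
    """Smooth positive mapping: log(1 + exp(x)) > 0 for all x."""
    return np.log1p(np.exp(x))

def linear_class_k(s, theta):
    """Linear class-K function alpha(s) = p * s with trainable slope p > 0.
    The raw parameter theta is unconstrained; softplus enforces positivity."""
    return softplus(theta) * s
```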
+
+# C.1. Training Stability and Efficiency
+
The dQP (Amos & Kolter, 2017) can give very poor solutions (although they still satisfy the safety constraints), as shown in Fig. 7 (right); this can significantly deteriorate the training quality of the model.
+
+# C.2. 2D Robot Obstacle Avoidance
+
Models. All the models include fully connected layers of shape [5, 128, 32, 32, 2] with ReLU activation functions. Models other than the E2E-related ones include additional differentiable QP layers. The model input is the system state and the goal.
+
+
Figure 7: Comparison of ABNet (left) and BarrierNet (right, based on dQP) in training stability. BarrierNet tends to give very poor solutions. "time" on the x-axis denotes training iterations.
+
+
+
Training and Dataset. The dataset includes 100 trajectories, and each trajectory has 137 trajectory points. The ground truth controls (i.e., training labels) are obtained via solving HOCBF-based QPs (Xiao & Belta, 2021). We use Adam as the optimizer to train the model with an MSE loss function and a learning rate of 0.001. We use the QPFunction from OptNet (Amos & Kolter, 2017) to solve the dQPs. Training the ABNet takes about 1 hour for 20 epochs on an RTX-3090 GPU.
+
+Robot dynamics and safety constraints. We employ the bicycle model as the robot dynamics:
+
+$$
+\underbrace {\left[ \begin{array}{l} \dot {x} (t) \\ \dot {y} (t) \\ \dot {\theta} (t) \\ \dot {v} (t) \end{array} \right]} _ {\dot {\boldsymbol {x}} (t)} = \underbrace {\left[ \begin{array}{c} v (t) \cos \theta (t) \\ v (t) \sin \theta (t) \\ 0 \\ 0 \end{array} \right]} _ {f (\boldsymbol {x})} + \underbrace {\left[ \begin{array}{l l} 0 & 0 \\ 0 & 0 \\ 1 & 0 \\ 0 & 1 \end{array} \right]} _ {g (\boldsymbol {x})} \underbrace {\left[ \begin{array}{l} u _ {1} (t) \\ u _ {2} (t) \end{array} \right]} _ {\boldsymbol {u}} \tag {26}
+$$
+
where $(x,y)\in \mathbb{R}^2$ denotes the 2D location of the robot, $\theta \in \mathbb{R}$ is its heading angle, and $v \in \mathbb{R}$ is its linear speed. $u_{1},u_{2}$ are the angular speed and acceleration controls, respectively.
+
+The safety constraint of the robot is defined as:
+
+$$
+b (\boldsymbol {x}) = \left(x - x _ {0}\right) ^ {2} + \left(y - y _ {0}\right) ^ {2} - R ^ {2} \geq 0, \tag {27}
+$$
+
+where $(x_0,y_0)\in \mathbb{R}^2$ is the 2D location of the obstacle, and $R > 0$ is its size.
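The control-affine dynamics (26) and safety constraint (27) can be coded directly. The sketch below (hypothetical helper names, not the authors' implementation) returns $\dot{\pmb{x}} = f(\pmb{x}) + g(\pmb{x})\pmb{u}$ and the barrier value $b(\pmb{x})$:

```python
import numpy as np

def bicycle_dynamics(state, u):
    """Eq. (26): state = (x, y, theta, v), u = (angular speed, acceleration)."""
    x, y, theta, v = state
    f = np.array([v * np.cos(theta), v * np.sin(theta), 0.0, 0.0])
    g = np.array([[0.0, 0.0],
                  [0.0, 0.0],
                  [1.0, 0.0],
                  [0.0, 1.0]])
    return f + g @ u

def b_obstacle(state, x0, y0, R):
    """Eq. (27): squared distance from the robot to the obstacle minus R^2."""
    x, y = state[0], state[1]
    return (x - x0) ** 2 + (y - y0) ** 2 - R ** 2
```

With a simple Euler step `state += dt * bicycle_dynamics(state, u)`, one can roll out trajectories and monitor `b_obstacle` exactly as in the SAFETY metric (23).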
+
+Acceleration control profiles. We show the acceleration control profiles in Fig. 8. The corresponding uncertainty is also significantly decreased with the proposed ABNet.
+
+# C.3. Safe Robot Manipulation
+
Models. All the models include fully connected layers of shape [6, 128, 256, 128, 128, 32, 32, 2] with ReLU activation functions. Models other than the E2E-related ones include additional differentiable QP layers. The model input is the system state and the goal.
+
Training and Dataset. The dataset includes 1000 trajectories, and each trajectory has about 350 trajectory points. The ground truth controls (i.e., training labels) are obtained via solving HOCBF-based QPs (Xiao & Belta, 2021). We use Adam as the optimizer to train the model with an MSE loss function and a learning rate of 0.001. We use the QPFunction from OptNet (Amos & Kolter, 2017) to solve the dQPs. Training the ABNet takes about 2 hours for 10 epochs on an RTX-3090 GPU.
+
+
+Figure 8: 2D robot obstacle avoidance acceleration control profiles and their distributions. The controls are subject to input noise, and thus are non-smooth. All the testings are done in a closed-loop fashion, i.e., the model outputs are directly used to control the robot.
+
+Robot dynamics and safety constraints. We employ the following model as the manipulator dynamics:
+
+$$
+\underbrace {\left[ \begin{array}{l} \dot {\theta} _ {1} \\ \dot {\omega} _ {1} \\ \dot {\theta} _ {2} \\ \dot {\omega} _ {2} \end{array} \right]} _ {\boldsymbol {x}} = \underbrace {\left[ \begin{array}{c} \omega_ {1} \\ 0 \\ \omega_ {2} \\ 0 \end{array} \right]} _ {f (\boldsymbol {x})} + \underbrace {\left[ \begin{array}{l l} 0 & 0 \\ 1 & 0 \\ 0 & 0 \\ 0 & 1 \end{array} \right]} _ {g (\boldsymbol {x})} \underbrace {\left[ \begin{array}{l} u _ {1} \\ u _ {2} \end{array} \right]} _ {\boldsymbol {u}} \tag {28}
+$$
+
where $(\theta_{1},\theta_{2})\in \mathbb{R}^{2}$ denotes the angles of the two-link manipulator joints (defined in Cartesian space; the joint-space angles are $q_{1} = \theta_{1}, q_{2} = \theta_{2} - \theta_{1}$ ), $(\omega_{1},\omega_{2})\in \mathbb{R}^{2}$ are the angular speeds of the two joints, and $u_{1},u_{2}$ are the angular acceleration controls of the two joints, respectively.
+
+The safety constraint of the robot is defined as:
+
+$$
+b (\boldsymbol {x}) = \left(l _ {1} \cos \theta_ {1} + l _ {2} \cos \theta_ {2} - x _ {0}\right) ^ {2} + \left(l _ {1} \sin \theta_ {1} + l _ {2} \sin \theta_ {2} - y _ {0}\right) ^ {2} - R ^ {2} \geq 0, \tag {29}
+$$
+
where $(x_0, y_0) \in \mathbb{R}^2$ is the location of the obstacle, and $R > 0$ is its size. $l_1 > 0, l_2 > 0$ are the lengths of the two links of the manipulator, respectively. In the current setting, the non-collision of the end-effector implies the non-collision of the links; therefore, we only need to consider the safety of the end-effector. We show both the $u_1, u_2$ control profiles in Fig. 9 to demonstrate the advantage of the proposed ABNet. The metric definitions are the same as in the 2D robot obstacle avoidance, and the number of testing runs is $N = 100$ .
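The constraint (29) is just the obstacle-avoidance constraint (27) applied to the end-effector position computed by forward kinematics. A minimal sketch (hypothetical helper names; it assumes the Cartesian-space angles $\theta_1,\theta_2$ as defined above):

```python
import numpy as np

def end_effector(theta1, theta2, l1, l2):
    """Forward kinematics of the two-link arm with Cartesian-space angles."""
    return (l1 * np.cos(theta1) + l2 * np.cos(theta2),
            l1 * np.sin(theta1) + l2 * np.sin(theta2))

def b_manipulator(theta1, theta2, l1, l2, x0, y0, R):
    """Eq. (29): safety constraint evaluated at the end-effector."""
    ex, ey = end_effector(theta1, theta2, l1, l2)
    return (ex - x0) ** 2 + (ey - y0) ** 2 - R ** 2
```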
+
+# C.4. Vision-based End-to-End Autonomous Driving
+
Models. All the models include CNN layers ([3, 24, 5, 2, 2], [24, 36, 5, 2, 2], [36, 48, 3, 2, 1], [48, 64, 3, 1, 1], [64, 64, 3, 1, 1]), an LSTM layer (size: 64), and fully connected layers of shape $[32, 32, 2] \times 2$ with ReLU activation functions. The dropout rates for both the CNN and fully connected layers are 0.3. Models other than the E2E-related ones include additional differentiable QP layers. The model input is the front-view RGB image (shape: $3 \times 45 \times 155$ ) of the ego vehicle, and the outputs are the steering rate and acceleration controls of the vehicle.
+
Training and Dataset. The dataset is open-sourced and includes 0.4 million image-control pairs from a closed-road sim-to-real driving field. Static and parked cars of different types and colors are used as obstacles in the dataset. The dataset is collected from the VISTA simulator (Amini et al., 2022). The ground truth controls (i.e., training labels) are obtained via nonlinear model predictive control (NMPC). We use Adam as the optimizer to train the model with an MSE loss function and a learning rate of 0.001. We use the QPFunction from OptNet (Amos & Kolter, 2017) to solve the dQPs. Training the ABNet takes about 15 hours for 5 epochs on an RTX-3090 GPU.
+
+
+Figure 9: Robot manipulation joint control profiles and their distributions. The controls are subject to input noise, and thus are non-smooth. All the testings are done in a closed-loop fashion, i.e., the model outputs are directly used to control the manipulator.
+
+
+
Brief introduction to VISTA. VISTA is a sim-to-real driving simulator that can generate driving scenarios from real driving data (Amini et al., 2022). VISTA allows us to train our model with guided policy learning, a method that has been shown to transfer to a full-scale real autonomous vehicle. There are three steps to generate the data: (i) in VISTA, we randomly initialize the locations and poses of ego- and ado-cars associated with the real driving data; (ii) we use NMPC to collect ground-truth controls (training labels) with corresponding states; and (iii) we collect front-view RGB images along the trajectories generated by the NMPC.
+
+Vehicle dynamics and safety constraints. The vehicle dynamics are specified with respect to a reference trajectory (Rucco et al., 2015), such as the lane center line. The two most important states are the along-trajectory progress $s \in \mathbb{R}$ and the lateral offset distance $d \in \mathbb{R}$ of the vehicle center with respect to the trajectory. The dynamics are defined as:
+
+$$
+\underbrace {\left[ \begin{array}{l} \dot {s} \\ \dot {d} \\ \dot {\mu} \\ \dot {v} \\ \dot {\delta} \end{array} \right]} _ {\dot {x}} = \underbrace {\left[ \begin{array}{c} \frac {v \cos (\mu + \beta)}{1 - d \kappa} \\ v \sin (\mu + \beta) \\ \frac {v}{l _ {r}} \sin \beta - \kappa \frac {v \cos (\mu + \beta)}{1 - d \kappa} \\ 0 \\ 0 \end{array} \right]} _ {f (\boldsymbol {x})} + \underbrace {\left[ \begin{array}{l l} 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 1 & 0 \\ 0 & 1 \end{array} \right]} _ {g (\boldsymbol {x})} \underbrace {\left[ \begin{array}{l} u _ {1} \\ u _ {2} \end{array} \right]} _ {\boldsymbol {u}}, \tag {30}
+$$
+
where $\mu$ is the local heading error of the vehicle with respect to the reference trajectory, $v$ is the linear speed of the vehicle, and $\kappa$ is the curvature of the trajectory at progress $s$ . $l_{r}$ is the distance from the vehicle's tail to its center and $l_{f}$ is the distance from its head to its center, with $\beta = \arctan \left(\frac{l_r}{l_r + l_f}\tan \delta\right)$ . $u_{1}, u_{2}$ are the acceleration and steering rate controls of the vehicle, respectively.
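The curvilinear dynamics (30) translate directly into code. The sketch below (hypothetical helper name; it assumes the curvature $\kappa$ at the current progress is supplied by the caller, and reads the control indices off the $g(\pmb{x})$ matrix, i.e., $\dot v = u_1$ and $\dot\delta = u_2$):

```python
import numpy as np

def vehicle_dynamics(state, u, kappa, l_r, l_f):
    """Eq. (30): state = (s, d, mu, v, delta) in the curvilinear frame."""
    s, d, mu, v, delta = state
    beta = np.arctan(l_r / (l_r + l_f) * np.tan(delta))  # slip angle
    s_dot = v * np.cos(mu + beta) / (1.0 - d * kappa)    # progress rate
    d_dot = v * np.sin(mu + beta)                        # lateral offset rate
    mu_dot = v / l_r * np.sin(beta) - kappa * s_dot      # heading error rate
    return np.array([s_dot, d_dot, mu_dot, u[0], u[1]])  # v_dot=u1, delta_dot=u2
```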
+
+The safety constraint of the vehicle is defined as:
+
+$$
+b (\boldsymbol {x}) = (s - s _ {0}) ^ {2} + (d - d _ {0}) ^ {2} - R ^ {2} \geq 0, \tag {31}
+$$
+
where $(s_0, d_0) \in \mathbb{R}^2$ is the location of the obstacle in the curvilinear frame (i.e., defined with respect to the reference trajectory), and $R > 0$ is its size, chosen such that satisfying the above constraint keeps the ego vehicle from crashing into the obstacle.
+
Closed-loop testing. We test all of our models in a closed-loop manner in VISTA. In other words, at each time step, we get the front-view RGB image observation from VISTA. Then, the model generates a control based on the image. Finally, the control is used to drive the "virtual" vehicle in VISTA. This process repeats until the final time. The total number of testing runs is $N = 100$ for all the tables. The obstacles are randomly initialized (with uniform probability) with lateral distance $d_0$ ranging from $\pm 0.1m$ to $\pm 1.5m$ . In Figs. 6 and 11, the ego vehicle is randomly initialized (with uniform probability) with $d \in [-0.5, 0.5]m$ .
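The closed-loop protocol above can be sketched as a simple observe-act-step loop. Everything here is illustrative: `simulator` stands in for a wrapper around VISTA (its `reset`/`observe`/`step`/`safety` methods are hypothetical, not the actual VISTA API), and `model` is the trained policy:

```python
import numpy as np

def closed_loop_rollout(model, simulator, n_steps, rng):
    """One closed-loop test run: observe image, compute control, step, repeat.
    Returns the per-run minimum of the safety value b(x), as used in Eq. (23)."""
    # Obstacle lateral offset: uniform magnitude in [0.1, 1.5] m, random side.
    d0 = rng.choice([-1.0, 1.0]) * rng.uniform(0.1, 1.5)
    simulator.reset(obstacle_d0=d0, ego_d=rng.uniform(-0.5, 0.5))
    b_values = []
    for _ in range(n_steps):
        image = simulator.observe()          # front-view RGB observation
        u = model(image)                     # model output: the two controls
        simulator.step(u)                    # drive the "virtual" vehicle
        b_values.append(simulator.safety())  # b(x) from Eq. (31)
    return min(b_values)
```

Repeating this for $N = 100$ seeds and aggregating the returned minima reproduces the SAFETY and CONSER. columns of the tables.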
+
+
+
+
+
+
+
+
+
+
Figure 10: Attention-based image observations for the ABNet-att model. From left to right and top to bottom: attention on the full image, left-most part, left lane boundary, lane center, right lane boundary, and right-most part.
+
+
+
Image observations for the ABNet-att model. We generate the attention-based observations as shown in Fig. 10. Each of the attention images may play an important role in a specific driving scenario (e.g., attention on the left-most part may be crucial for a sharp left turn).
+
Acceleration control profiles. We present both the acceleration and steering rate control profiles in Fig. 11. When approaching the obstacle, both the BNet and BNet-UP models force the ego vehicle into a large deceleration instead of steering it past the obstacle. This can make the ego vehicle get stuck at the obstacle, and thus the obstacle passing rate (as shown in Table 3) is low for these two models.
+
Ablation studies on model robustness in terms of safety under noisy input. To further test safety robustness, we add random noise (50% of the magnitude of the image values) to all the image observations. The results are presented in Table 4. Our proposed ABNets can still guarantee the safety of the vehicle under noisy input (0% crash rate), while the crash rates of the other models increase significantly, except for the DFB model. The DFB model is robust here because its HOCBFs are not trainable and the corresponding parameters are fixed; in contrast, badly trained HOCBFs could make a method fail to guarantee safety due to the inter-sampling effect.
+
+
+Figure 11: Vision-based end-to-end autonomous driving closed-loop testing control profiles. The models directly take images as inputs, and output controls for the vehicle. All the testings are done in closed-loop in VISTA.
+
+
+
Table 4: Ablation study: vision-based end-to-end autonomous driving closed-loop testing under noise and comparisons with benchmarks. Items in the first row are short for obstacle crash rate (CRASH), obstacle passing rate (PASS), satisfaction of safety constraints where non-negative values mean safety guarantees (SAFETY), system conservativeness (CONSER.), acceleration control $u_{1}$ uncertainty ( $u_{1}$ UNCERTAINTY), steering rate control $u_{2}$ uncertainty ( $u_{2}$ UNCERTAINTY), and theoretical safety guarantees (THEORET. GUAR.), respectively. In the model column, items are short for single vanilla end-to-end driving model (V-E2E), E2Es merged with Monte-Carlo Dropout (E2Es-MCD), E2Es merged with deep ensembles (E2Es-DR), deep forward and backward model (DFB), single BarrierNet (BNet), BarrierNet policies with uncertainty propagation (BNet-UP), ABNet with 10 heads (ABNet), ABNet with attention images and 10 heads (ABNet-att), and our scaled ABNet merging ABNet-att with ABNet (20 heads, ABNet-SC), respectively. The safety metric is defined as the minimum value of the safety specification $b_{j}(\boldsymbol{x})$ , $j \in S$ among all runs. The conservativeness metric is defined as the mean (with std) of the minimum value (in each run) of the safety specification $b_{j}(\boldsymbol{x})$ , $j \in S$ among all runs. The uncertainty metrics for both $u_{1}$ and $u_{2}$ are measured by the standard deviations of the model outputs (two controls) among all runs.
+
| MODEL | CRASH (↓) | PASS (↑) | SAFETY (≥0) | CONSER. (≥0 & ↓) | $u_1$ UNCERTAINTY (↓) | $u_2$ UNCERTAINTY (↓) | THEORET. GUAR. |
| --- | --- | --- | --- | --- | --- | --- | --- |
| V-E2E (Amini et al., 2022) | 31% | 69% | -59.455 | -8.932±19.741 | 0.529 | 0.239 | × |
| E2Es-MCD (Gal & Ghahramani, 2016) | 28% | 72% | -58.405 | -8.116±20.802 | 0.524 | 0.232 | × |
| E2Es-DR (Lakshminarayanan et al., 2017) | 27% | 73% | -60.267 | -8.781±20.910 | 0.512 | 0.225 | × |
| DFB (Pereira et al., 2020) | 1% | 37% | -13.281 | -0.256±4.348 | 0.482 | 0.127 | √ |
| BNet (Xiao et al., 2023) | 23% | 37% | -45.415 | -9.114±13.382 | 0.730 | 0.316 | √ |
| BNet-UP (Wang et al., 2023b) | 24% | 39% | -44.634 | -8.866±13.167 | 0.747 | 0.278 | × |
| ABNet (ours) | 0% | 100% | 4.268 | 8.315±2.147 | 0.151 | 0.326 | √ |
| ABNet-att (ours) | 0% | 100% | 5.986 | 7.032±0.405 | 0.118 | 0.213 | √ |
| ABNet-SC (ours) | 0% | 100% | 4.118 | 7.515±1.120 | 0.128 | 0.255 | √ |
\ No newline at end of file
diff --git a/abnetadaptiveexplicitbarriernetforsafeandscalablerobotlearning/images.zip b/abnetadaptiveexplicitbarriernetforsafeandscalablerobotlearning/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..7687a6e0b24ad31fd40b2d9396fbc917555692ea
--- /dev/null
+++ b/abnetadaptiveexplicitbarriernetforsafeandscalablerobotlearning/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ea9d4785c9158ef0f5773a4a62c0269abea7ace0d3f5889da1b29859d5230839
+size 1201298
diff --git a/abnetadaptiveexplicitbarriernetforsafeandscalablerobotlearning/layout.json b/abnetadaptiveexplicitbarriernetforsafeandscalablerobotlearning/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..a1b0fc40182868e3dd1e2e3b3c84f060cb7fb62d
--- /dev/null
+++ b/abnetadaptiveexplicitbarriernetforsafeandscalablerobotlearning/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7ee459a7ec7426bef79f82d2baaaa2532df989c034d1aac73a51cd677bb07bfa
+size 681679
diff --git a/accelerateddiffusionmodelsviaspeculativesampling/69ab461c-709a-4477-81b9-473befc6be24_content_list.json b/accelerateddiffusionmodelsviaspeculativesampling/69ab461c-709a-4477-81b9-473befc6be24_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..504fca7fa57de848d7dd42d51b7e31ebb85735c3
--- /dev/null
+++ b/accelerateddiffusionmodelsviaspeculativesampling/69ab461c-709a-4477-81b9-473befc6be24_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:48940f76beea29f20a7fcfce331fc8bb92453bc77bcbf461f12c205b12442be0
+size 312072
diff --git a/accelerateddiffusionmodelsviaspeculativesampling/69ab461c-709a-4477-81b9-473befc6be24_model.json b/accelerateddiffusionmodelsviaspeculativesampling/69ab461c-709a-4477-81b9-473befc6be24_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..6b8ff6b54177a7dc7d52a37b55101b6b6005c87f
--- /dev/null
+++ b/accelerateddiffusionmodelsviaspeculativesampling/69ab461c-709a-4477-81b9-473befc6be24_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dec2cd4491b9a809cbc043d213bfffd82b0cfb5461f250356510afa7b047f94b
+size 363599
diff --git a/accelerateddiffusionmodelsviaspeculativesampling/69ab461c-709a-4477-81b9-473befc6be24_origin.pdf b/accelerateddiffusionmodelsviaspeculativesampling/69ab461c-709a-4477-81b9-473befc6be24_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..b47819f9ba51f026dd139ff5f6dda23c322b4749
--- /dev/null
+++ b/accelerateddiffusionmodelsviaspeculativesampling/69ab461c-709a-4477-81b9-473befc6be24_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9f3b6f8d6f0299df0c79fd0f1c4857d9f87ef5e8c42f874dde3cb6dd6b1f9961
+size 2938091
diff --git a/accelerateddiffusionmodelsviaspeculativesampling/full.md b/accelerateddiffusionmodelsviaspeculativesampling/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..70f7502bb9d8a0b40ad0b5b03ad8659d1d1a41d4
--- /dev/null
+++ b/accelerateddiffusionmodelsviaspeculativesampling/full.md
@@ -0,0 +1,1560 @@
+# Accelerated Diffusion Models via Speculative Sampling
+
+Valentin De Bortoli $^{*1}$ Alexandre Galashov $^{*1}$ Arthur Gretton $^{1}$ Arnaud Doucet $^{1}$
+
+# Abstract
+
+Speculative sampling is a popular technique for accelerating inference in Large Language Models by generating candidate tokens using a fast draft model and then accepting or rejecting them based on the target model's distribution. While speculative sampling was previously limited to discrete sequences, we extend it to diffusion models, which generate samples via continuous, vector-valued Markov chains. In this context, the target model is a high-quality but computationally expensive diffusion model. We propose various drafting strategies, including a simple and effective approach that does not require training a draft model and is applicable out-of-the-box to any diffusion model. We demonstrate significant generation speedup on various diffusion models, halving the number of function evaluations while generating exact samples from the target model. Finally, we also show how this procedure can be used to accelerate Langevin diffusions to sample unnormalized distributions.
+
+# 1. Motivation
+
+Denoising diffusion models (DDMs), introduced by Sohl-Dickstein et al. (2015) and further developed by Ho et al. (2020) and Song et al. (2021), are generative models exhibiting state-of-the-art performance in a wide variety of domains. The core concept behind DDMs is the progressive transformation of a data distribution into a Gaussian distribution through the addition of noise. Sample generation is achieved by simulating an approximation of the time-reversal of this noising process. This requires multiple evaluations of a neural network that approximates the scores of the noising process, and typically involves simulating a Markov chain over hundreds of steps.
+
Since sample generation is computationally expensive, several techniques have been proposed to accelerate it. These include distillation techniques (e.g. Salimans & Ho, 2022; Meng et al., 2023; Song et al., 2023), better sampling schemes (e.g. Karras et al., 2022; Lu et al., 2022; Zhang & Chen, 2023) and parallel simulation methods (e.g. Shih et al., 2023; Chen et al., 2024). However, distillation techniques inherently require training a student model and often underperform compared to the teacher model (Dieleman, 2024). While better sampling schemes can improve performance, using too small a number of steps degrades performance; see e.g. (Karras et al., 2022). Finally, parallel simulation methods relying on Picard iterations over a sliding window have been proposed (Shih et al., 2023; Chen et al., 2024; Tang et al., 2024). However, they are inherently iterative, requiring repeated parallel sampling within a window until errors fall below a pre-specified tolerance.
+
+In the context of Large Language Models (LLMs), various techniques have also been proposed to speed up inference. Notably, speculative sampling, first introduced by Leviathan et al. (2023) and later proposed independently by Chen et al. (2023), has become prominent in this area and has spawned numerous extensions (Xia et al., 2024). Given a target LLM, this algorithm enables faster sampling than serial token decoding without compromising quality, as the sampled tokens remain exactly distributed according to the target model's distribution. This is achieved by considering a smaller and faster LLM model generating a draft sequence. The target model is then used to compute in parallel the conditional probabilities of these draft tokens, and these probabilities are used to decide sequentially whether to accept or reject the draft tokens. Upon the first rejection, a new token is sampled using an adjusted distribution combining the draft and target distributions. Many extensions of speculative sampling have been proposed to reduce latency; see Xia et al. (2024) and further related works in Section 6.
+
In the present work, we adapt speculative sampling to accelerate DDMs. We assume a computationally cheap draft model that generates a sequence of draft states for the denoising Markov chain of a target DDM. The transition probability densities of these states under the target model are then computed in parallel and used to sequentially accept/reject the draft states. At rejection, a new state is sampled from an adjusted distribution dependent on both the draft and target distributions; see Figure 1. As for LLMs, the procedure is designed such that it outputs samples distributed exactly according to the target DDM.
+
+Wang et al. (2024) concurrently proposed an adaptation of speculative sampling for continuous-valued autoregressive processes, specifically for Masked Autoregressive models (Li et al., 2024b). In this setting, they sample from the adjusted distribution appearing at rejection using a standard rejection sampling algorithm. However, as demonstrated in Section 3.2, this approach is, on average, more computationally expensive than directly sampling from the target model in our context. Furthermore, it exhibits counter-intuitive performance degradation as the draft model more closely approximates the target model. We present a method to circumvent these issues while retaining the optimality properties of speculative sampling. Our contributions are summarized below. Proofs are in the Supplementary Material.
+
+- By leveraging the connections between speculative sampling and coupling techniques (Lindvall, 1992), first observed by Sun et al. (2023) in the context of LLMs, we show in Section 3.3 that we can sample efficiently from a novel adjusted distribution for DDMs using reflection maximal coupling (Bou-Rabee et al., 2020). Our procedure returns exact samples from the target model, and it is optimal in the sense that it maximizes the probability of accepting each draft state.
- We investigate several drafting strategies (Section 3.1 and Appendix B). As with LLMs, one can rely on a "cheap" diffusion model as draft model, or use a draft model learned from the target model. We instead propose a simple and effective draft model that relies solely on the target model. This eliminates any need for learning a separate draft model and is readily applicable to any diffusion model.
+- We present a complexity analysis and a lower bound on the acceptance ratio of the draft states in Section 4.
+- We explain in Section 5 how this method can be adapted to accelerate Langevin diffusions to sample unnormalized distributions.
- The proposed method achieves significant speed-ups for image generation on CIFAR10 and LSUN using pixel-space diffusion models, without any loss of quality (Section 7). Furthermore, we show similar speed-ups for policy generation in robotics.
+
+# 2. Speculative Sampling for LLMs
+
+We begin with a review of speculative sampling for LLMs. Consider two probability distributions $q$ and $p$ for sequences on some finite space $\mathcal{X}$ . In this context, $q$ corresponds to the joint distribution of tokens for the target LLM, while $p$ represents the draft model.
+
+# 2.1. Speculative Sampling for Autoregressive Targets
+
+Speculative sampling generates $L$ candidate tokens according to the draft model $p$ which are scored in parallel using the target model $q$ . They are then accepted sequentially using an adjusted rejection sampling algorithm. At the first rejection, one needs to sample a new token from an adjusted distribution denoted $r$ . A new set of $L$ candidate tokens is then generated, and so on. This is detailed in Algorithm 1 using notation $z_{k:\ell} = (z_k,z_{k + 1},\dots,z_\ell)$ for $k\leq \ell$ and $z_{k:\ell} = \emptyset$ for $k > \ell$ for any sequence $(z_{k})_{k\in \mathbb{N}}$ and $[k] = \{1,\ldots ,k\}$ for any positive integer $k$ . We denote sequential computations by (Seq.) and parallel computations by (Par.).
+
+Algorithm 1 Speculative Sampling for LLM
Require: Lookahead integer $L$ , maximum length $K$ , draft model $p$ , target model $q$ , initial context $X_{0:n_0}$
Set $n\gets n_0$
while $n < n_0 + K$ do
  (Seq.) Sample $\tilde{X}_{n + 1:n + L}\sim p(\cdot |X_{0:n})$ . Get $p_{n + j} = p(\cdot |X_{0:n},\tilde{X}_{n + 1:n + j - 1})$ , $j\in [L]$ .
  (Par.) Get $q_{n + j} = q(\cdot |X_{0:n},\tilde{X}_{n + 1:n + j - 1})$ , $j\in [L]$
  for $k = n + 1:n + L$ do
    $(X_{k},\mathsf{bool})\gets \mathsf{REJECTION}(p_{k},q_{k},\tilde{X}_{k})$
    if not(bool) or $X_{k} = \mathrm{EOS}$ then Exit For Loop end if
  end for
  Set $n\gets k$
end while
return $X_{n_0 + 1:n}$
+
+The rejection mechanism is described in Algorithm 2.
+
+Algorithm 2 REJECTION $(p,q,\tilde{X})$
Require: Proba. distributions $p,q$ and $\tilde{X}\sim p$
Sample $U\sim \mathrm{Unif}[0,1]$
bool $= \mathbb{I}[U\leq \min (1,q(\tilde{X}) / p(\tilde{X}))]$
if bool then Set $X = \tilde{X}$
else Sample $X\sim r(\cdot)$ where $r(x)\propto \max (0,q(x) - p(x))$
end if
return $(X,\mathrm{bool})$ where $X\sim q$
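For concreteness, the rejection step above can be sketched for a finite vocabulary as follows (a minimal NumPy sketch; the function name and interface are ours, not from the original works):

```python
import numpy as np

def rejection_step(p, q, x_draft, rng):
    """Adjusted rejection step of Algorithm 2 for a finite vocabulary.

    p and q are probability vectors (draft and target distributions of the
    current token) and x_draft ~ p. Returns (x, accepted) with x ~ q exactly.
    """
    if rng.uniform() <= min(1.0, q[x_draft] / p[x_draft]):
        return x_draft, True
    # Rejection: sample from r(x) proportional to max(0, q(x) - p(x)).
    r = np.maximum(0.0, q - p)
    return rng.choice(len(q), p=r / r.sum()), False
```

Over many calls with drafts drawn from $p$ , the outputs are distributed according to $q$ , and the fraction of changed tokens matches $||p - q||_{\mathrm{TV}}$ , as stated in Proposition 2.1.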
+
In (Chen et al., 2023; Leviathan et al., 2023), the draft sequence is sampled using a "cheap" autoregressive LLM, i.e., $\tilde{X}_{n+1} \sim p(\cdot | X_{0:n})$ , $\tilde{X}_{n+2} \sim p(\cdot | X_{0:n}, \tilde{X}_{n+1})$ , ..., $\tilde{X}_{n+L} \sim p(\cdot | X_{0:n}, \tilde{X}_{n+1:n+L-1})$ . However, this does not have to be the case, and any distribution $p(x_{n+1:n+L} | x_{0:n})$ can be used, e.g., in Medusa (Cai et al., 2024) one samples the draft tokens in parallel by considering a factorized draft distribution $p(x_{n+1:n+L} | x_{0:n}) = \prod_{k=n+1}^{n+L} p(x_k | x_{0:n})$ . Note that in this context, the distributions $\{p(x_{n+1:n+L} | x_{0:n})\}_{n \geq n_0}$ are usually not compatible, i.e., they are not the conditional distributions of a joint distribution. To be more precise, we should write $p_n(x_{n+1:n+L} | x_{0:n})$ instead of $p(x_{n+1:n+L} | x_{0:n})$ but we slightly abuse notation here.

Figure 1. Speculative Sampling for diffusion models. Draft states are efficiently generated and verified in parallel. Upon the first rejection, a new state is sampled using an adjusted distribution combining draft & target models, and the remainder of the draft sequence is discarded.
+
+# 2.2. Adjusted Rejection Sampling as Maximal Coupling
+
+At the core of speculative sampling lies an adjusted rejection sampling mechanism which allows for sampling from the (conditional) distribution of a token $q(x) \coloneqq q(x|\text{past tokens})$ for a target LLM given the (conditional) distribution $p(x) \coloneqq p(x|\text{past tokens})$ of a token for a draft model.
+
As pointed out by Sun et al. (2023), this procedure, summarized in Algorithm 2, is well known in the probability literature: the joint distribution of $(X,Y)$ it induces is a so-called maximal coupling; see e.g. (Lindvall, 1992), Section 4.5 in (Thorisson, 2000) and (Jacob, 2021) for a comprehensive introduction. A maximal coupling is any joint distribution on $(X,Y)$ with marginals $X\sim p$ and $Y\sim q$ that maximizes the probability that $X = Y$ . For completeness, and without any claim of originality, we give a formal statement in Proposition 2.1 and a proof in the supplementary material.
+
+Proposition 2.1: Let $\tilde{X} \sim p$ then Algorithm 2 outputs $X \sim q$ . This procedure is optimal in the sense that it maximizes the probability that $X = \tilde{X}$ under the constraints $\tilde{X} \sim p$ , $X \sim q$ . Additionally, we have
+
+$$
+\mathbb {P} (X \neq \tilde {X}) = | | p - q | | _ {\mathrm {T V}},
+$$
+
+where $||p - q||_{\mathrm{TV}} \coloneqq \frac{1}{2}\sum_{x\in \mathcal{X}}|p(x) - q(x)|.$
+
+# 3. Speculative Sampling for Diffusion Models
+
We now present our main contribution, which is the adaptation of speculative sampling to DDMs. Our DDM target model and some drafting strategies are given in Section 3.1, leading to our speculative sampling procedure in Algorithm 3. As for LLMs, this algorithm requires an adjusted rejection sampling procedure. After analyzing the difficulties of implementing Algorithm 2 in the context of DDMs (Section 3.2), we present an original solution resolving these difficulties in Section 3.3.
+
+# 3.1. Denoising diffusion models, draft models and speculative sampling
+
We first define the target DDM model we want to sample from. Following Song et al. (2021), consider a forward noising process where $\mathbf{X}_0\sim q_{\mathrm{data}}$ and $\mathrm{d}\mathbf{X}_t = f_t\mathbf{X}_t\mathrm{d}t + g_t\mathrm{d}\mathbf{B}_t$ where $(\mathbf{B}_t)_{t\in [0,1]}$ is a $d$ -dimensional Brownian motion. Let $q_{t}$ denote the density of $\mathbf{X}_t$ ; we select $f_{t},g_{t}$ such that $q_{1}\approx$ $\mathcal{N}(0,\mathrm{Id})$ . We then consider the process $(\mathbf{Y}_t)_{t\in [0,1]}$ defined by
+
+$$
\mathrm{d}\mathbf{Y}_{t} = b_{t}\left(\mathbf{Y}_{t}\right)\mathrm{d}t + \varepsilon g_{1 - t}\mathrm{d}\mathbf{W}_{t}, \quad \mathbf{Y}_{0}\sim q_{1}, \tag{1}
+$$
+
+$$
+b _ {t} (x) = - f _ {1 - t} x + \frac {1 + \varepsilon^ {2}}{2} g _ {1 - t} ^ {2} s _ {1 - t} (x),
+$$
+
where $s_t(x) = \nabla \log q_t(x)$ is the Stein score, $(\mathbf{W}_t)_{t\in [0,1]}$ is another Brownian motion and $\varepsilon \geq 0$ is a hyperparameter which controls the stochasticity level of $(\mathbf{Y}_t)_{t\in [0,1]}$ (Albergo et al., 2023), referred to as the churn parameter in the literature (Karras et al., 2022). This process is such that $\mathbf{Y}_{1 - t} \sim q_t$ for all $t \in [0,1]$ and corresponds to the time-reversal of $(\mathbf{X}_t)_{t\in [0,1]}$ for $\varepsilon = 1$ . In practice, $b_{t}$ is approximated using a neural network denoted $b_{t}^{q}$ . At inference we consider $K + 1$ discretization steps: let $\gamma = 1 / K$ and $(t_k)_{k=0}^{K}$ with $t_k = k\gamma$ . The distribution of the resulting Markov chain obtained by the Euler-Maruyama discretisation of (1) and initialized at $\mathcal{N}(0,\mathrm{Id})\approx q_1$ is denoted $q(y_{0:K}) = q(y_0)\prod_{k = 1}^{K}q(y_k|y_{k - 1})$ where
+
+$$
+q \left(y _ {k} \mid y _ {k - 1}\right) = \mathcal {N} \left(y _ {k}; m _ {k - 1} ^ {q} \left(y _ {k - 1}\right), \sigma_ {k - 1} ^ {2} \mathrm {I d}\right), \tag {2}
+$$
+
+with $q(y_0) = \mathcal{N}(y_0;0,\mathrm{Id})\approx q_1(y_0)$ , $m_k^q (y_k) = y_k + \gamma b_{t_k}^q (y_k)$ and $\sigma_{k} = \sqrt{\gamma}\varepsilon g_{1 - t_{k}}$ . The distribution (2) defines the target model in our speculative sampling procedure.
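The transition kernel (2) can be sketched directly (a minimal NumPy sketch; `b_q` and `g` stand in for the learned drift network and the noise schedule, which are not specified here):

```python
import numpy as np

def target_transition(y_prev, k, b_q, g, gamma, eps, rng):
    """One step of the target chain (2).

    q(y_k | y_{k-1}) = N(y_k; m^q_{k-1}(y_{k-1}), sigma_{k-1}^2 Id) with
    m^q_{k-1}(y) = y + gamma * b^q_{t_{k-1}}(y) and
    sigma_{k-1} = sqrt(gamma) * eps * g(1 - t_{k-1}).
    b_q and g are placeholders for the learned drift and the schedule.
    """
    t_prev = (k - 1) * gamma
    mean = y_prev + gamma * b_q(t_prev, y_prev)
    sigma = np.sqrt(gamma) * eps * g(1.0 - t_prev)
    y_k = mean + sigma * rng.standard_normal(y_prev.shape)
    return y_k, mean, sigma
```

Iterating this step from $Y_0 \sim \mathcal{N}(0, \mathrm{Id})$ for $k = 1, \dots, K$ produces one sample of the target chain $q(y_{0:K})$ .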
+
+Speculative sampling requires specifying a draft model. All the draft models we consider are of the form $p(y_{n+1:n_L}|y_n) = \prod_{k=n+1}^{n_L} p(y_k|y_{n:k-1})$ where $n_L = \min(n + L, K)$ , $L$ is the length of the draft sequence and
+
+$$
+p \left(y _ {k} \mid y _ {n: k - 1}\right) = \mathcal {N} \left(y _ {k}; m _ {k - 1} ^ {p} \left(y _ {n: k - 1}\right), \sigma_ {k - 1} ^ {2} \mathrm {I d}\right). \tag {3}
+$$
+
+Independent draft model. A first choice, similar to the original speculative sampling algorithm (Leviathan et al., 2023), is to consider a draft model with the same sampling strategy as $q$ but with an approximation $b_{t}^{p}$ which is cheaper to evaluate than $b_{t}^{q}$ . Hence, the draft model satisfies $p(y_{k}|y_{n:k - 1}) = p(y_{k}|y_{k - 1})$ with
+
+$$
+m _ {k} ^ {p} \left(y _ {n: k}\right) = y _ {k} + \gamma b _ {t _ {k}} ^ {p} \left(y _ {k}\right), \sigma_ {k} = \sqrt {\gamma} \varepsilon g _ {1 - t _ {k}}. \tag {4}
+$$
+
+This choice of draft requires the availability of a cheaper DDM. For $p$ and $q$ to be close and to obtain better performance (i.e., higher acceptance rate of the draft states), this requires training $p$ on the same dataset as $q$ , which would be costly and might not be feasible. Even if $p$ and $q$ are trained with the same architecture on the same dataset, there can still be a significant mismatch between $b^{p}$ and $b^{q}$ .
+
+Frozen target draft model. Another popular choice in speculative sampling is to derive a draft model directly from the target model, see for instance (Cai et al., 2024). In the context of diffusion models, we consider here a very simple draft model where $p(y_{k}|y_{n:k - 1}) = p(y_{k}|y_{n},y_{k - 1})$ with
+
+$$
+m _ {k} ^ {p} \left(y _ {n: k}\right) = y _ {k} + \gamma b _ {t _ {n}} ^ {q} \left(y _ {n}\right), \sigma_ {k} = \sqrt {\gamma} \varepsilon g _ {1 - t _ {k}}. \tag {5}
+$$
+
This draft model is similar to the target model, except that we replace $b_{t_k}^q (y_k)$ by $b_{t_n}^q (y_n)$ . Importantly, on a window of size $L$ , we only need to query the target model once in order to draw a draft sequence. This strategy is thus computationally inexpensive, requires no additional training and allows parallel sampling of the draft sequence. However, the differences between the draft and target models can be large near the data distribution as the score function typically exhibits significant variation there. Consequently, $b_{t_n}^q (y_n)$ may deviate substantially from $b_{t_k}^q (y_k)$ when $k$ and $n$ are close to $K$ , rendering the approximation $b_{t_k}^q (y_k)\approx b_{t_n}^q (y_n)$ inaccurate. This issue can be addressed at higher computational cost using alternative and more involved drafting procedures, as discussed in Appendix B.
+
+# Algorithm 3 Speculative Sampling for DDM
+
Require: Lookahead integer $L$ , sequence length $K$ , target model $q$ and draft model $p$
Sample $Y_0 \sim \mathcal{N}(0, \mathrm{Id})$ and set $n = 0$
while $n < K$ do
  Set $\tilde{Y}_n \gets Y_n$
  Set $n_L \gets \min(n + L, K)$ and $\tilde{L} = n_L - n$
  (Seq.) Sample draft states $\tilde{Y}_{n+1:n_L} \sim p(\cdot|\tilde{Y}_n)$ using (3). Get means of $p_{n+j} = p(\cdot|\tilde{Y}_{n:n+j-1})$ , $j \in [\tilde{L}]$ .
  (Par.) Get means of $q_{n+j} = q(\cdot|\tilde{Y}_{n+j-1})$ , $j \in [\tilde{L}]$
  for $k = n+1:n_L$ do
    $(Y_k, \mathsf{bool}) \gets \mathsf{REJECTION}(p_k, q_k, \tilde{Y}_k)$
    if not(bool) then Exit For Loop end if
  end for
  Set $n \gets k$
end while
return $Y_{0:K}$
+
+Having now defined the target and draft model, we present Algorithm 3, our speculative sampling algorithm for diffusion models. This algorithm is similar in principle to Algorithm 1 for LLMs. The evaluation of the means of $q(y_{n + j}|\tilde{Y}_{n:n + j - 1})$ for $j\in [n_L - n]$ is done in parallel. The rejection steps within the for loop are also implemented in parallel. However, the REJECTION step in our algorithm requires a substantially different implementation compared to the one defined by Algorithm 2 used for LLMs. This difference arises because directly applying the rejection mechanism of Algorithm 2 to diffusion models presents significant challenges, as we will demonstrate.
+
+# 3.2. Adjusted Rejection Sampling: Implementation Issues
+
+Using Algorithm 2 to define REJECTION in Algorithm 3 would yield a valid speculative sampling algorithm for diffusion models, i.e., this algorithm would produce a Markov chain exactly distributed according to the target model, $Y_{0:K} \sim q$ , and Proposition 2.1 would also apply directly. However, we show below that implementing Algorithm 2 is problematic in the context of diffusion models. If a draft state is rejected at iteration $k$ , where $k > n$ , we must then sample $Y_k$ from
+
+$$
+r (x) = \frac {\operatorname* {m a x} (0 , q (x) - p (x))}{\int_ {\mathbb {R} ^ {d}} \operatorname* {m a x} (0 , q (x) - p (x)) \mathrm {d} x}, \tag {6}
+$$
+
for $q(y_{k}) \coloneqq q(y_{k}|y_{k - 1})$ , $p(y_{k}) \coloneqq p(y_{k}|y_{n:k - 1})$ . Although straightforward for LLMs due to the discrete nature of $r(x)$ , a satisfactory solution for continuous state-spaces remains elusive. Leveraging the fact that
+
+$$
+r (x) \propto q (x) \left(1 - \min (1, p (x) / q (x))\right), \tag {7}
+$$
+
+we could sample from $r(x)$ using standard rejection sampling. Using $q(x)$ as proposal, the acceptance probability is $1 - \min(1, p(x) / q(x))$ so that the average acceptance probability is
+
+$$
\int q(x)\left(1 - \min \left(1, p(x) / q(x)\right)\right)\mathrm{d}x = \|p - q\|_{\mathrm{TV}}.
+$$
+
+This is an approach analyzed by Jacob (2021) and adopted by Wang et al. (2024) for continuous-valued autoregressive processes. From standard results on rejection sampling, it is known that the number of trials to simulate from the target $q$ before acceptance follows a geometric distribution with parameter $||p - q||_{\mathrm{TV}}$ . This distribution has mean $1 / ||p - q||_{\mathrm{TV}}$ and variance $(1 - ||p - q||_{\mathrm{TV}}) / ||p - q||_{\mathrm{TV}}^2$ (see (Jacob, 2021) for instance). This implementation of Algorithm 2 proves inefficient, as demonstrated by the following simple analysis. With probability $||p - q||_{\mathrm{TV}}$ , one needs to sample from (7) and, due to the properties of the geometric distribution, the expected number of samples from $q$ we need is $||p - q||_{\mathrm{TV}} \times (1 / ||p - q||_{\mathrm{TV}}) = 1$ . This rejection sampling procedure is thus practically useless, as it requires sampling on average from both $p$ and $q$ , as well as computing the acceptance probability $\min(1, q(x) / p(x))$ . Another undesirable property of this implementation is that the variance of the number of samples from $q$ one would have to simulate increases rapidly as the draft model $p$ better approximates the target $q$ (i.e., as $||p - q||_{\mathrm{TV}}$ decreases). These issues have been extensively reported in the literature (Jacob, 2021).
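This analysis is easy to verify numerically. The sketch below (illustrative one-dimensional unit-variance Gaussian transitions; not the paper's code) counts the extra draws from $q$ incurred when the residual $r$ is sampled by vanilla rejection with proposal $q$ ; the average settles near one per speculative step, as computed above.

```python
import numpy as np

def normal_pdf(x, m):
    """Density of N(m, 1) at x."""
    return np.exp(-0.5 * (x - m) ** 2) / np.sqrt(2.0 * np.pi)

def avg_extra_q_draws(mp, mq, n_steps, seed=0):
    """Average number of q-proposals per speculative step when
    r(x) proportional to max(0, q - p) is sampled by vanilla rejection
    with proposal q, for p = N(mp, 1) and q = N(mq, 1)."""
    rng = np.random.default_rng(seed)
    total = 0
    for _ in range(n_steps):
        x = mp + rng.standard_normal()                       # draft ~ p
        if rng.uniform() <= min(1.0, normal_pdf(x, mq) / normal_pdf(x, mp)):
            continue                                         # draft accepted
        while True:                                          # sample from r
            total += 1
            y = mq + rng.standard_normal()
            if rng.uniform() <= 1.0 - min(1.0, normal_pdf(y, mp) / normal_pdf(y, mq)):
                break
    return total / n_steps
```

Rejections occur with probability $||p - q||_{\mathrm{TV}}$ and each then costs $1/||p - q||_{\mathrm{TV}}$ proposals on average, so the product is one regardless of how close $p$ is to $q$ .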
+
+# 3.3. Adjusted Rejection Sampling via Reflection-Maximal Coupling
+
+As discussed in Section 2.2, the adjusted rejection sampling procedure from Algorithm 2 is identical to a specific maximal coupling described, for example, in (Lindvall, 1992). For DDMs, we have shown that implementing this procedure is challenging. However, it is essential to note that maximal couplings are not unique. Bou-Rabee et al. (2020) proposed an algorithm known as reflection maximal coupling to implement a maximal coupling for two Gaussian distributions $\mathcal{N}(m^p,\sigma^2\mathrm{Id})$ and $\mathcal{N}(m^q,\sigma^2\mathrm{Id})$ . This is directly applicable to diffusion models, since $p(y_{k}|y_{n:k - 1}) = \mathcal{N}(y_{k};m_{k - 1}^{p}(y_{n:k - 1}),\sigma_{k - 1}^{2}\mathrm{Id})$ and $q(y_{k}|y_{k - 1}) = \mathcal{N}(y_{k};m_{k - 1}^{q}(y_{k - 1}),\sigma_{k - 1}^{2}\mathrm{Id})$ are Gaussian distributions with different means but identical variances. Introduced to establish convergence results for Hamiltonian Monte Carlo, this procedure is noteworthy for its conciseness and its bounded and short running time. We detail it in Algorithm 4.
+
Direct calculations show that the acceptance probability of the proposal $\tilde{Y} \sim \mathcal{N}(m^p, \sigma^2\mathrm{Id})$ computed with this procedure is identical to the one used in Algorithm 2. This follows from the fact that $q(\tilde{Y}) / p(\tilde{Y}) = \mathcal{N}(Z + \Delta; 0, \mathrm{Id}) / \mathcal{N}(Z; 0, \mathrm{Id})$ with $\Delta = (m^p - m^q) / \sigma$ for $\tilde{Y} = m^p + \sigma Z$ . At acceptance, we also have $Y = \tilde{Y}$ as in Algorithm 2. However, Algorithm 4 differs fundamentally from Algorithm 2, as upon rejection of the draft state, the new state is computed deterministically as a function of the rejected state, instead of sampling from (6); see Figure 2 for an illustration. This requires only one evaluation of the target $q(y_k | y_{k-1})$ to obtain the state $Y_k$ . Therefore, we use Algorithm 4 for REJECTION in our implementation of speculative sampling. A detailed full implementation is provided in Algorithm 6. The following proposition, which parallels Proposition 2.1, establishes the correctness of the method and follows Section 2.3.2 from Bou-Rabee et al. (2020).

Figure 2. Two maximal couplings between $p = \mathcal{N}(0.5,0.25)$ and $q = \mathcal{N}(1.5,0.25)$ : the one given by Algorithm 2 (top) and the reflection maximal coupling from Algorithm 4 (bottom). By definition, both couplings have $p$ and $q$ as their margins. As they are maximal couplings, their probability mass on the diagonal is identical and is the maximum among all valid couplings.
+
+Proposition 3.1 (Reflection Coupling): Let $p(x) = \mathcal{N}(x;m^p,\sigma^2\mathrm{Id})$ , $q(x) = \mathcal{N}(x;m^q,\sigma^2\mathrm{Id})$ and $\tilde{Y} \sim p$ . Algorithm 4 outputs $Y \sim q$ . Additionally, it maximizes the probability that $Y = \tilde{Y}$ and
+
+$$
+\mathbb {P} (Y \neq \tilde {Y}) = | | p - q | | _ {\mathrm {T V}} = 2 \Phi (\sigma^ {- 1} | | m ^ {p} - m ^ {q} | | / 2) - 1,
+$$
+
+where $||p - q||_{\mathrm{TV}} = \frac{1}{2}\int |p(x) - q(x)|\mathrm{d}x$ and $\Phi$ is the c.d.f. of the standard normal random variable.
+
This result shows that the efficiency of speculative sampling at time $k$ , i.e., the probability of accepting a draft state, is a decreasing function of $||m_{k-1}^{p}(\tilde{Y}_{n:k-1}) - m_{k-1}^{q}(\tilde{Y}_{k-1})|| / \sigma_{k-1}$ . This means that, as expected, a draft model must reasonably approximate the target for good performance.
+
+Algorithm 4 REJECTION $(p,q,\tilde{Y})$ for two Gaussians with same covariance
+Require: Gaussians $p(x) = \mathcal{N}(x;m^p,\sigma^2\mathrm{Id}),q(x) =$ $\mathcal{N}(x;m^q,\sigma^2\mathrm{Id})$ and $\tilde{Y}\sim p$
+Set $\Delta = (m^{p} - m^{q}) / \sigma$ and $e = \Delta /\| \Delta \|$
+Let $Z = (\tilde{Y} -m^{p}) / \sigma$
+Sample $U\sim \mathrm{Unif}[0,1]$
+bool $= \mathbb{I}\Big[U\leq \min \Big(1,\frac{\mathcal{N}(Z + \Delta;0,\mathrm{Id})}{\mathcal{N}(Z;0,\mathrm{Id})}\Big)\Big].$
+if bool then Set $Y = \tilde{Y}$
+else Set $Y = m^{q} + \sigma (\mathrm{Id} - 2ee^{\top})Z$
+end if
+return $(Y,\mathsf{bool})$ where $Y\sim q$
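Algorithm 4 can be sketched in a few lines (a minimal NumPy sketch; the function name and interface are ours):

```python
import numpy as np

def reflection_coupling_step(m_p, m_q, sigma, y_draft, rng):
    """Reflection maximal coupling (Algorithm 4) of N(m_p, sigma^2 Id) and
    N(m_q, sigma^2 Id). Given a draft y_draft ~ N(m_p, sigma^2 Id), returns
    (y, accepted) with y exactly distributed as N(m_q, sigma^2 Id)."""
    delta = (m_p - m_q) / sigma
    z = (y_draft - m_p) / sigma
    # log of N(z + delta; 0, Id) / N(z; 0, Id), computed in log space
    log_ratio = 0.5 * np.sum(z ** 2) - 0.5 * np.sum((z + delta) ** 2)
    if rng.uniform() <= np.exp(min(0.0, log_ratio)):
        return y_draft, True            # accept: Y = Y_draft
    e = delta / np.linalg.norm(delta)
    # reject: deterministic reflection Y = m_q + sigma (Id - 2 e e^T) Z
    return m_q + sigma * (z - 2.0 * e * np.dot(e, z)), False
```

Note that the rejection branch involves no extra sampling: the reflected state is a deterministic function of the rejected draft, which is what makes this coupling cheap to run inside Algorithm 3.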
+
+# 4. Theoretical analysis
+
+We provide an analysis of the proposed methodology. We derive an approximation of the complexity of speculative sampling in Section 4.1, and a lower-bound on the acceptance ratio when using an independent draft model, in Section 4.2.
+
+# 4.1. Complexity analysis
+
We analyze here the computational benefits of using speculative sampling for DDMs under a simplified computational model. We assume an independent draft model $p$ given by (4). The costs of evaluating $b_{t}^{q}$ and $b_{t}^{p}$ are $C_{q}$ and $C_{p}$ respectively, with $C_{q} > C_{p}$ . Using a window size $L$ , each step of speculative sampling increases the iteration index $n$ by a random variable $\hat{L} \in \{1, \dots, L\}$ . The cost of running the target model for $K$ iterations is $C_{\mathrm{original}} = KC_{q}$ , while speculative sampling approximately requires $C_{\mathrm{spec}} = (K / \hat{L})(LC_{p} + C_{q})$ . This simplified computational model leads directly to the following proposition.
+
+Proposition 4.1 (Average cost ratio): We have that
+
+$$
+\mathbb {E} \left[ C _ {\text {o r i g i n a l}} / C _ {\text {s p e c}} \right] = \frac {\mathbb {E} [ \hat {L} ]}{1 + L C _ {p} / C _ {q}}. \tag {8}
+$$
+
+Note that the average cost ratio (8) is independent of $K$ . Speculative sampling is beneficial if this ratio exceeds one,
+
+which occurs if and only if
+
+$$
+\mathbb {E} [ \hat {L} ] / L \geq C _ {p} / C _ {q} + 1 / L. \tag {9}
+$$
+
+Under the simplifying assumption that the acceptance probability of any draft state is lower bounded by $\alpha$ , independent across the state sequence, one has
+
+$$
\mathbb{E}[\hat{L}] \geq \sum_{\ell = 0}^{L - 1}(\ell + 1)\alpha^{\ell}(1 - \alpha) + L\alpha^{L} = \frac{1 - \alpha^{L}}{1 - \alpha}.
+$$
+
+This highlights the competing factors in speculative sampling. To satisfy (9), we aim for $\mathbb{E}[\hat{L}] / L$ to be as close to one as possible, indicating a high acceptance ratio and thus a draft model that closely approximates the target model. This typically implies that $C_p \approx C_q$ . Conversely, while (9) is made easier to satisfy by minimizing $C_p / C_q$ , this will in practice cause the acceptance ratio to deteriorate, consequently decreasing $\mathbb{E}[\hat{L}] / L$ .
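The bound above and the cost ratio (8) are simple to evaluate numerically (a small sketch with illustrative numbers; the function names are ours):

```python
import numpy as np

def expected_advance(alpha, L):
    """Lower bound on E[L_hat] when each draft state is accepted with
    probability at least alpha, independently across the window: the
    displayed sum over the first rejection index, capped at L."""
    ells = np.arange(L)
    return float(np.sum((ells + 1) * alpha ** ells * (1 - alpha)) + L * alpha ** L)

def average_cost_ratio(alpha, L, cp_over_cq):
    """Average cost ratio (8): E[C_original / C_spec]."""
    return expected_advance(alpha, L) / (1.0 + L * cp_over_cq)
```

With, for instance, an acceptance lower bound $\alpha = 0.9$ , window $L = 5$ and a draft model ten times cheaper than the target ( $C_p/C_q = 0.1$ ), one gets $\mathbb{E}[\hat{L}] \approx 4.1$ and an average cost ratio of about $2.7$ , i.e., a substantial speed-up.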
+
+# 4.2. Lower bound on acceptance ratio
+
We shed light here on how the acceptance ratio depends on the problem parameters when using an independent draft model. Let $a_{n} = \mathcal{N}(Z + \Delta_{n};0,\mathrm{Id}) / \mathcal{N}(Z;0,\mathrm{Id})$ for $Z\sim \mathcal{N}(0,\mathrm{Id})$ where
+
+$$
+\| \Delta_ {n} \| ^ {2} = \frac {1}{4} \gamma (\varepsilon + \frac {1}{\varepsilon}) ^ {2} g _ {1 - t _ {n}} ^ {2} \| s _ {1 - t _ {n}} ^ {p} (\tilde {Y} _ {n}) - s _ {1 - t _ {n}} ^ {q} (\tilde {Y} _ {n}) \| ^ {2},
+$$
+
+for $b_{t}^{q}(x) = -f_{1-t}x + \frac{1 + \varepsilon^{2}}{2} g_{1-t}^{2}s_{1-t}^{q}(x)$ , where both $s_{t}^{q}$ and $s_{t}^{p}$ approximate the true score $s_{t}$ at differing computational cost. The draft state at time $n+1$ is accepted with probability $\min(1, a_{n})$ . We have the following lower bound.
+
+Lemma 4.2 (Control of acceptance ratio): We have
+
+$$
+\mathbb {E} \left[ a _ {n} \right] \geq \exp \left[ - \frac {1}{2} \mathbb {E} \left[ \| \Delta_ {n} \| ^ {2} \right] \right].
+$$
+
+Similar results can be established for the frozen draft model.
+
+We next assume that the target model has access to the exact score for distribution $q_{\mathrm{data}}$ , and that the draft model score corresponds to an exact score for some distribution $p_{\mathrm{data}}$ (we can think of this as a means of characterizing the inexactness of the draft model). We obtain the following result.
+
+Theorem 4.3 (Control of acceptance ratio (II)): Under assumptions on $p_{data}$ and $q_{data}$ detailed in the supplementary material, we have
+
+$$
\mathbb{E}\left[a_{n}\right] \geq \exp \left[-C\frac{\gamma g_{s_{n}}^{2}}{8}\left(\varepsilon + \frac{1}{\varepsilon}\right)^{2} \min \left(\left(\frac{1}{\sigma_{s_{n}}} - \sigma_{s_{n}}\right)^{2} + \alpha_{s_{n}}^{2},\ \frac{1}{\alpha_{s_{n}}^{2}}\mathrm{D}(p_{\mathrm{data}}, q_{\mathrm{data}})\right)\right],
+$$
+
where $s_n = 1 - t_n$ , $C$ is a constant and $\mathrm{D}(p_{data}, q_{data})$ is some divergence between $p_{data}$ and $q_{data}$ made explicit in the proof.
+
+There are different factors influencing the lower bound of Theorem 4.3:
+
+- As $\gamma \to 0$ , we have $\mathbb{E}[a_n] \geq 1$ , implying that a smaller discretization step size leads to higher acceptance rates of draft states. However, a smaller step size also necessitates a larger total number of steps to reach the target.
+- If $\mathrm{D}(p_{\mathrm{data}}, q_{\mathrm{data}}) \to 0$ then $\mathbb{E}[a_n] \geq 1$ . This means that if the draft and target models approximate the same data distribution then we obtain a higher acceptance rate.
+- If $g_{t}^{2}((\frac{1}{\sigma_{t}} - \sigma_{t})^{2} + \alpha_{t}^{2}) \to 0$ as $t \to 1$ then $\mathbb{E}[a_n] \geq 1$ for $n$ close to 0. This is the case for classical schedules $(f_{t}, g_{t})$ used in practice. Hence at the beginning of the denoising process, the acceptance rate is high.
+- The dependency with respect to $\varepsilon$ is such that both low and high values worsen the lower bound. There exists an optimal parameter $\varepsilon$ ( $\varepsilon = 1.0$ in this bound). In practice, we sweep over $\varepsilon > 0$ .
+
+# 5. Speculative Sampling for Langevin Diffusions
+
+Consider a scenario where we are interested in sampling from an unnormalized density $\pi (x)$ on $\mathbb{R}^d$ , i.e.
+
+$$
+\pi (x) = \frac {\exp (- E (x))}{Z}, \qquad Z = \int \exp (- E (x)) \mathrm {d} x,
+$$
+
+where the energy function $E(x)$ can be evaluated pointwise, but each evaluation is computationally expensive, and $Z$ is intractable. To sample from such distributions, we typically use Markov chain Monte Carlo (MCMC) techniques which are iterative algorithms requiring evaluating the energy function at each iteration. We show here how we can accelerate MCMC methods when we have access to a computationally cheap proxy energy function $\hat{E}(x) \approx E(x)$ defining $\hat{\pi}(x) \propto \exp(-\hat{E}(x))$ using speculative sampling. Access to such proxies is common in many domains of computational science and engineering; see e.g. (Christen & Fox, 2005; Cui et al., 2011; Sherlock et al., 2017; Peherstorfer et al., 2018).
+
+A standard MCMC technique to sample from $\pi$ is the Langevin diffusion defined by
+
+$$
+\mathrm {d} \mathbf {X} _ {t} = - \nabla E (\mathbf {X} _ {t}) \mathrm {d} t + \sqrt {2} \mathrm {d} \mathbf {B} _ {t},
+$$
+
+where $(\mathbf{B}_t)_{t\geq 0}$ is a Brownian motion. The limiting distribution of this diffusion is $\pi$ . In practice, the so-called unadjusted Langevin algorithm (ULA) (Durmus & Moulines, 2017; Vempala & Wibisono, 2019) is often implemented
+
+$$
+X _ {k + 1} = X _ {k} - \gamma \nabla E (X _ {k}) + \sqrt {2 \gamma} W _ {k}, \tag {10}
+$$
+
for a stepsize $\gamma > 0$ and $W_{k} \stackrel{\mathrm{i.i.d.}}{\sim} \mathcal{N}(0, \mathrm{Id})$ . Due to this time discretization, ULA only samples from an approximation of $\pi$ , but explicit bounds on the bias incurred are available (Durmus & Moulines, 2017).
+
+The speculative sampling procedure for DDMs presented in Algorithm 3 and relying on reflection maximal coupling (Algorithm 4) can be easily modified to accelerate the simulation of (10). In this scenario, (10) plays the role of the target model while
+
+$$
+X _ {k + 1} = X _ {k} - \gamma \nabla \hat {E} (X _ {k}) + \sqrt {2 \gamma} W _ {k}, \tag {11}
+$$
+
is the draft model. As a cheap proxy, we can also use the frozen draft model strategy, that is, set $\nabla \hat{E}(x_{n+k}) = \nabla E(x_n)$ for $k = 1, \ldots, n_L - n$ .
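Under the frozen-gradient strategy, the whole draft window for (11) requires a single evaluation of the expensive $\nabla E$ and can be generated at once with cumulative sums (a minimal NumPy sketch; the interface is ours):

```python
import numpy as np

def ula_frozen_draft(x_n, grad_E, gamma, L, rng):
    """Frozen-gradient draft for speculative ULA: one evaluation of the
    (costly) gradient at x_n is reused for L draft steps of (11).
    Returns an array of shape (L, d) with drafts X_{n+1}, ..., X_{n+L}."""
    g = grad_E(x_n)                     # single expensive gradient call
    steps = np.arange(1, L + 1)
    noise = np.sqrt(2 * gamma) * rng.standard_normal((L, x_n.shape[0]))
    # x_{n+j} = x_n - j*gamma*grad_E(x_n) + sum of the first j noise terms
    return x_n[None, :] - gamma * np.outer(steps, g) + np.cumsum(noise, axis=0)
```

The drafts are then verified against the target kernel (10) exactly as in Algorithm 3, with the reflection coupling of Algorithm 4 used at rejection.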
+
Speculative sampling can be interpreted here as a prefetching technique (Brockwell, 2006; Angelino et al., 2014); see Appendix K for a detailed description of the algorithm. It is an alternative to recent methods proposed to accelerate Langevin diffusions which also rely on parallel evaluations of $\nabla E$ (Shen & Lee, 2019; Anari et al., 2024; Yu & Dalalyan, 2024; Zhou & Sugiyama, 2024).
+
+# 6. Related works
+
Speculative sampling. Introduced in the context of LLMs by Leviathan et al. (2023); Chen et al. (2023), speculative sampling relies on a draft model based on a cheap LLM. An early drafting methodology proposing multiple tokens at once was put forth by Stern et al. (2018), while drafting with independent models was explored in (Chen et al., 2023; Leviathan et al., 2023; Spector & Re, 2023; Sun et al., 2023; Christopher et al., 2024). Efficient drafting using the target model with additional feedforward neural network (FFN) heads was considered in (Stern et al., 2018; Sun et al., 2021; Xia et al., 2023; Cai et al., 2024). Finally, it has been proposed very recently by Christopher et al. (2024) to use a discrete DDM (Austin et al., 2021; Campbell et al., 2022) as draft model for an autoregressive target model. For a comprehensive review of speculative sampling techniques for LLMs, we refer to Xia et al. (2024). Wang et al. (2024) adapted speculative sampling to continuous state spaces but samples from the adjusted distribution (6) using rejection sampling, which is computationally inefficient in our context.
+
Acceleration of diffusion models. One line of work distills a teacher DDM into a student DDM for faster sampling; see (Luhman & Luhman, 2021; Salimans & Ho, 2022; Berthelot et al., 2023; Liu et al., 2023; Meng et al., 2023; Sauer et al., 2024; Song et al., 2023; Katzir et al., 2024; Kim et al., 2024; Xu et al., 2024; Yin et al., 2024). For a review of distillation methods, we refer to Luo (2023); Dieleman (2024). Another line of work pursues accelerating sampling through improved integrators (Dockhorn et al., 2022; Liu et al., 2022; Lu et al., 2022; Xiao et al., 2022; Zhang & Chen, 2023). Additionally, parallel sampling of DDMs has been explored in (Shih et al., 2023; Chen et al., 2024; Li et al., 2024a; Ma et al., 2024; Tang et al., 2024). Our approach complements these methods and can be combined with parallel sampling and/or better integrators. In Appendix J, we support this claim by combining our method with the parallel sampling integrator from (Shih et al., 2023). Specifically, we show that our method can benefit in terms of both NFE and FID from using a single parallel call. In addition, our method can be seamlessly used in combination with timestep distillation methods, such as those in (Sabour et al., 2024; Tong et al., 2024).
+
+# 7. Experiments
+
In all of our experiments, we track two different types of metrics. First, we assess the quality of the output distribution obtained with the speculative sampling strategy (Wasserstein-2 in the low dimensional case, FID (Heusel et al., 2017) and IS (Salimans et al., 2016) in the image experiments and reward (Chi et al., 2023) in the robotics setting). We also report the Number of Function Evaluations (NFE) of the target model; a function evaluation is defined as a call to the target model with a batch of data, irrespective of the batch size. Experiments to accelerate Langevin diffusions can be found in Appendix K.
+
Low dimensional experiments. We first investigate Algorithm 3 in a low dimensional setting in order to better understand the effect of its key hyperparameters. We consider a mixture of Gaussians target distribution with 16 components and dimension varying in $\{2, 4, 8, 16, 32\}$ . All diffusion models are trained with a velocity objective, see Appendix I.1. We consider two drafting strategies: the INDEPENDENT strategy and the FROZEN strategy as described in Section 3.1. We also refer to Appendix I.1 for the architectural details. In Figure 3, we display the effects of the stochasticity $\varepsilon$ in the sampler and of the window size $L$ on the performance of the algorithm.
+
+Figure 3 illustrates that FROZEN drafting is more efficient than INDEPENDENT drafting, as it provides a larger reduction of the NFE of the target model. This is in accordance with findings in speculative decoding/sampling for LLMs; see, e.g., (Cai et al., 2024). Regarding the amount of stochasticity $\varepsilon$, there appears to be a value of $\varepsilon$ for which the speculative sampling gains are maximal, in agreement with the theoretical insights derived in Theorem 4.3. Finally, increasing the window size $L$ improves the performance of speculative sampling.
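+
+To make the mechanism concrete, here is a minimal, hypothetical sketch of the draft-then-verify loop on a toy drift, using the Gaussian reflection-coupling acceptance test of Section 3. The function names, scalar drifts, and sequential verification are our simplifications; the actual Algorithm 3 verifies drafts with batched parallel calls to the target model.
+
+```python
+import numpy as np
+
+def couple_step(m_p, m_q, sigma, rng):
+    # Reflection maximal coupling of N(m_p, sigma^2 Id) and
+    # N(m_q, sigma^2 Id): keep the draft sample when the Gaussian
+    # ratio test passes, otherwise reflect (cf. Proposition 3.1).
+    z = rng.standard_normal(m_p.shape)
+    delta = (m_p - m_q) / sigma
+    log_ratio = -0.5 * (np.sum((z + delta) ** 2) - np.sum(z ** 2))
+    if np.log(rng.uniform()) <= min(0.0, log_ratio):
+        return m_p + sigma * z, True   # accepted: Y = Y_draft
+    e = delta / np.linalg.norm(delta)
+    return m_q + sigma * (z - 2.0 * e * (e @ z)), False
+
+def speculative_window(x, drift_draft, drift_target, sigma, L, rng):
+    # Draft up to L transitions, verifying each against the target
+    # drift; stop at the first rejection.
+    accepted = 0
+    for _ in range(L):
+        m_p = x + drift_draft(x)    # draft transition mean
+        m_q = x + drift_target(x)   # target transition mean
+        x, ok = couple_step(m_p, m_q, sigma, rng)
+        if not ok:
+            break
+        accepted += 1
+    return x, accepted
+
+rng = np.random.default_rng(0)
+x0 = np.zeros(4)
+# Identical draft and target drifts => every proposal is accepted.
+_, n_acc = speculative_window(x0, lambda x: -x, lambda x: -x, 0.1, 8, rng)
+print(n_acc)  # -> 8
+```
+
+The closer the draft transition mean is to the target one, the smaller $\Delta$ and the higher the acceptance rate, which is why the FROZEN strategy (whose drafts track the target trajectory) outperforms INDEPENDENT drafting.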
+
+Image space experiments. Next, we demonstrate speculative sampling in higher dimensional settings on two datasets: CIFAR10 $(32 \times 32 \times 3)$ and LSUN $(64 \times 64 \times 3)$ . In all settings, the backbone architecture is a U-Net. We
+
+
+
+
+Figure 3. In each panel, the $y$ axis corresponds to the number of evaluations of the target model. Without speculative sampling, we evaluate the target model with 200 steps and show the improvements obtained using our approach. Each dotted line corresponds to INDEPENDENT drafting and each solid line to FROZEN drafting. The color gradient from purple to yellow corresponds to the dimensions of the target distribution [2, 4, 8, 16, 32].
+
+refer to Appendix I.1 for architectural and training details. For all experiments, we report the FID score computed on 50k training samples. Our results are given in Table 1. We also investigate the effect of temperature on our models. More precisely, we introduce a hyperparameter $\tau > 0$ such that $\mathcal{N}(Z + \Delta; 0, \mathrm{Id}) / \mathcal{N}(Z; 0, \mathrm{Id})$ is replaced by $\mathcal{N}(Z + \Delta; 0, \tau \mathrm{Id}) / \mathcal{N}(Z; 0, \tau \mathrm{Id})$; see Appendix F.1 for more details. Note that upon choosing $\tau > 1$ we no longer sample exactly from $q$, but we improve the acceptance rate. In Table 1, we sweep over the values of $\varepsilon$ and $\tau$ on the CIFAR10 dataset. The main conclusion is that our proposed speculative sampling algorithm provides a significant speed-up (x2 to x3) while maintaining the quality of the target model. For example, on CIFAR10 we reach an FID score of 2.34 with only 35 calls to the target model, while the classical sampling procedure requires 100 calls to the target model to reach an FID score of 2.45, a reduction of $65\%$ in the number of calls to the target model. Running the target model for only 30 steps, on the other hand, reduces the image quality, as the FID score worsens to 4.32. It is also worth noting that increasing the temperature marginally improves the FID and IS scores for some values of $\varepsilon$.
+
+| Configuration | Draft (100 steps) FID ↓ | Draft (100 steps) IS ↑ | Target (100 steps) FID ↓ | Target (100 steps) IS ↑ | Target (30 steps) FID ↓ | Target (30 steps) IS ↑ | Speculative FID ↓ | Speculative IS ↑ | Speculative NFE ↓ |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| ε = 0.01, τ = 0.5 | **17.05** | **8.67** | 2.86 | 10.10 | **4.32** | 10.83 | 2.84 | 10.11 | 65.36 |
+| ε = 0.01, τ = 1.0 | **17.05** | **8.67** | 2.86 | 10.10 | **4.32** | 10.83 | 2.84 | 10.11 | 61.64 |
+| ε = 0.01, τ = 2.0 | **17.05** | **8.67** | 2.86 | 10.10 | **4.32** | 10.83 | 2.83 | 10.12 | 57.47 |
+| ε = 0.25, τ = 0.5 | 81.58 | 7.60 | **2.45** | 10.31 | 7.68 | 11.32 | 2.42 | 10.24 | 42.44 |
+| ε = 0.25, τ = 1.0 | 81.58 | 7.60 | **2.45** | 10.31 | 7.68 | 11.32 | 2.35 | 10.25 | 39.31 |
+| ε = 0.25, τ = 2.0 | 81.58 | 7.60 | **2.45** | 10.31 | 7.68 | 11.32 | **2.34** | 10.32 | **35.40** |
+| ε = 0.5, τ = 0.5 | 115.57 | 5.25 | 2.81 | 10.72 | 10.28 | **11.55** | 2.71 | 10.59 | 43.08 |
+| ε = 0.5, τ = 1.0 | 115.57 | 5.25 | 2.81 | 10.72 | 10.28 | **11.55** | 2.71 | 10.57 | 40.37 |
+| ε = 0.5, τ = 2.0 | 115.57 | 5.25 | 2.81 | 10.72 | 10.28 | **11.55** | 2.74 | 10.52 | 36.72 |
+| ε = 1.0, τ = 0.5 | 188.29 | 2.64 | 7.09 | **11.22** | 28.93 | 11.48 | 7.12 | 11.14 | 46.54 |
+| ε = 1.0, τ = 1.0 | 188.29 | 2.64 | 7.09 | **11.22** | 28.93 | 11.48 | 7.10 | **11.18** | 44.81 |
+| ε = 1.0, τ = 2.0 | 188.29 | 2.64 | 7.09 | **11.22** | 28.93 | 11.48 | 7.11 | 11.14 | 42.11 |
+
+Table 1. CIFAR-10 evaluation. For each column, we report the best result in bold.
+
+| Configuration | Target Reward ↑ | Target NFE ↓ | Speculative Reward ↑ | Speculative NFE ↓ |
+| --- | --- | --- | --- | --- |
+| L = 20, K = 100 | 0.889 ± 0.008 | 100 | 0.898 ± 0.008 | 27.245 ± 0.002 |
+| L = 20, K = 80 | 0.882 ± 0.008 | 80 | 0.899 ± 0.008 | 23.890 ± 0.003 |
+| L = 20, K = 40 | 0.898 ± 0.008 | 40 | 0.875 ± 0.008 | 15.544 ± 0.005 |
+| L = 20, K = 20 | 0.887 ± 0.008 | 20 | 0.901 ± 0.008 | 9.430 ± 0.004 |
+| L = 10, K = 10 | 0.901 ± 0.008 | 10 | 0.903 ± 0.007 | 5.053 ± 0.001 |
+| L = 5, K = 5 | 0.876 ± 0.008 | 5 | 0.870 ± 0.009 | 3.000 ± 0.000 |
+
+Table 2. PushT evaluation.
+
+We observe similar improvements (around halving the NFE) in the case of LSUN; see Appendix J.
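+
+As a concrete illustration, here is a minimal sketch of the temperature-scaled acceptance ratio described above (the function name is ours; NumPy assumed). For $\tau > 1$ the clipped ratio can only increase, which is why acceptance improves at the cost of exactness:
+
+```python
+import numpy as np
+
+def accept_prob(z, delta, tau=1.0):
+    # Temperature-scaled ratio N(z+delta; 0, tau*Id) / N(z; 0, tau*Id),
+    # clipped at 1; tau > 1 flattens the ratio, trading exact sampling
+    # from q for a higher acceptance rate.
+    log_ratio = -(np.sum((z + delta) ** 2) - np.sum(z ** 2)) / (2.0 * tau)
+    return min(1.0, float(np.exp(log_ratio)))
+
+rng = np.random.default_rng(1)
+z = rng.standard_normal(8)
+delta = 0.5 * rng.standard_normal(8)
+probs = [accept_prob(z, delta, tau) for tau in (0.5, 1.0, 2.0)]
+print(probs)  # non-decreasing in tau
+```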
+
+PushT dataset. Finally, we conclude our experimental study by showing that speculative sampling also yields improvements on a robotics task, where the policy is generated using a diffusion model following (Chi et al., 2023). In our setting, we focus on the PushT dataset. The state space has dimension (16, 2), where 16 corresponds to the prediction horizon and 2 is the dimension of the action. We refer to Appendix I.1 for more details. The metric we report is the reward $r \in [0,1]$, where $r = 1.0$ means that the policy achieves perfect coverage over an episode. For robustness, we run 1000 episodes and compute the mean of the maximum rewards. In each episode, we run the policy for 300 steps, stopping early if the maximum reward is reached. We follow the setting of (Chi et al., 2023); see also Appendix I.1. For our speculative sampler, we fix $\tau = 1$, do not perform any parallel call, and only consider the FROZEN drafting strategy. We report our results in Table 2. We consistently observe that the speculative sampling strategy reduces the number of calls to the target model while preserving the quality of the model. For instance, with only 5 calls to the target model, our speculative sampler achieves a reward of $0.903 \pm 0.007$, while running the target model for only 5 steps yields a reward of $0.876 \pm 0.008$.
+
+# 8. Discussion
+
+We have developed a novel speculative sampling procedure to accelerate diffusion models. This was achieved by exploiting the connections between speculative sampling and maximal coupling, specifically through the use of reflection maximal coupling. We have demonstrated that a significant speed-up can be achieved while sampling exactly from the target distribution.
+
+This approach also has limitations. It is not directly applicable to deterministic samplers, although noise can be added in a principled way to such samplers to obtain a valid stochastic sampler, to which speculative sampling can then be applied (see e.g. Section 3.1). Moreover, similarly to Picard iteration techniques (Shih et al., 2023; Chen et al., 2024), it increases memory overhead due to the parallel calls to the sampler during the verification procedure.
+
+In particular, while LLMs are memory-bound and therefore heavily benefit from speculative decoding/sampling techniques that increase arithmetic intensity and reduce latency, this is not necessarily the case for diffusion models, which already benefit from parallelism. The applicability of parallel techniques for serving diffusion models, therefore, remains an active area of investigation.
+
+# Impact Statement
+
+This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
+
+# References
+
+Albergo, M. S., Boffi, N. M., and Vanden-Eijnden, E. Stochastic interpolants: A unifying framework for flows and diffusions. arXiv preprint arXiv:2303.08797, 2023.
+Anari, N., Chewi, S., and Vuong, T.-D. Fast parallel sampling under isoperimetry. In The Thirty Seventh Annual Conference on Learning Theory, 2024.
+Angelino, E., Kohler, E., Waterland, A., Seltzer, M., and Adams, R. P. Accelerating MCMC via parallel predictive prefetching. arXiv preprint arXiv:1403.7265, 2014.
+Austin, J., Johnson, D. D., Ho, J., Tarlow, D., and Van Den Berg, R. Structured denoising diffusion models in discrete state-spaces. In Advances in Neural Information Processing Systems, 2021.
+Berthelot, D., Autef, A., Lin, J., Yap, D. A., Zhai, S., Hu, S., Zheng, D., Talbott, W., and Gu, E. Tract: Denoising diffusion models with transitive closure time-distillation. arXiv preprint arXiv:2303.04248, 2023.
+Bou-Rabee, N., Eberle, A., and Zimmer, R. Coupling and convergence for Hamiltonian Monte Carlo. The Annals of Applied Probability, 30(3):1209-1250, 2020.
+Brockwell, A. Parallel Markov chain Monte Carlo simulation by pre-fetching. Journal of Computational and Graphical Statistics, 15(1):246-261, 2006.
+Cai, T., Li, Y., Geng, Z., Peng, H., Lee, J. D., Chen, D., and Dao, T. Medusa: Simple LLM inference acceleration framework with multiple decoding heads. In International Conference on Machine Learning, 2024.
+Campbell, A., Benton, J., De Bortoli, V., Rainforth, T., Deligiannidis, G., and Doucet, A. A continuous time framework for discrete denoising models. In Advances in Neural Information Processing Systems, 2022.
+Chen, C., Borgeaud, S., Irving, G., Lespiau, J.-B., Sifre, L., and Jumper, J. Accelerating large language model decoding with speculative sampling. arXiv preprint arXiv:2302.01318, 2023.
+Chen, H., Ren, Y., Ying, L., and Rotskoff, G. M. Accelerating diffusion models with parallel sampling: Inference at sub-linear time complexity. In Advances in Neural Information Processing Systems, 2024.
+
+Chi, C., Xu, Z., Feng, S., Cousineau, E., Du, Y., Burchfiel, B., Tedrake, R., and Song, S. Diffusion policy: Visuomotor policy learning via action diffusion. The International Journal of Robotics Research, 2023.
+Christen, J. A. and Fox, C. Markov chain Monte Carlo using an approximation. Journal of Computational and Graphical statistics, 14(4):795-810, 2005.
+Christopher, J. K., Bartoldson, B. R., Kailkhura, B., and Fioretto, F. Speculative diffusion decoding: Accelerating language generation through diffusion. arXiv preprint arXiv:2408.05636, 2024.
+Cui, T., Fox, C., and O'sullivan, M. Bayesian calibration of a large-scale geothermal reservoir model by a new adaptive delayed acceptance Metropolis Hastings algorithm. Water Resources Research, 47(10), 2011.
+De Bortoli, V., Hutchinson, M., Wirnsberger, P., and Doucet, A. Target score matching. arXiv preprint arXiv:2402.08667, 2024.
+Del Moral, P. Feynman-Kac Formulae: Genealogical and Interacting Particle Approximations. Springer, 2004.
+Dieleman, S. The paradox of diffusion distillation, 2024. URL https://sander.ai/2024/02/28/paradox.html.
+Dockhorn, T., Vahdat, A., and Kreis, K. Genie: Higher-order denoising diffusion solvers. In Advances in Neural Information Processing Systems, 2022.
+Doucet, A., De Freitas, N., and Gordon, N. J. Sequential Monte Carlo Methods in Practice. Information Science and Statistics. New York, NY: Springer, 2001.
+Durmus, A. and Moulines, E. Nonasymptotic convergence analysis for the unadjusted Langevin algorithm. Annals of Applied Probability, 27(3):1551-1587, 2017.
+Esser, P., Rombach, R., and Ommer, B. Taming transformers for high-resolution image synthesis. In Proceedings of the IEEE/CVF conference on Computer Vision and Pattern Recognition, 2021.
+Esser, P., Kulal, S., Blattmann, A., Entezari, R., Müller, J., Saini, H., Levi, Y., Lorenz, D., Sauer, A., Boesel, F., Podell, D., Dockhorn, T., English, Z., Lacey, K., Goodwin, A., Marek, Y., and Rombach, R. Scaling rectified flow transformers for high-resolution image synthesis. In International Conference on Machine Learning, 2024.
+Guth, F., Coste, S., De Bortoli, V., and Mallat, S. Wavelet score-based generative modeling. In Advances in neural information processing systems, 2022.
+
+Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., and Hochreiter, S. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems, 2017.
+Higgins, I., Matthew, L., Pal, A., Burgess, C. P., Glorot, X., Botvinick, M. M., Mohamed, S., and Lerchner, A. betavae: Learning basic visual concepts with a constrained variational framework. In International Conference on Learning Representations, 2016.
+Ho, J., Jain, A., and Abbeel, P. Denoising diffusion probabilistic models. In Advances in Neural Information Processing Systems, 2020.
+Hsu, E. P. and Sturm, K.-T. Maximal coupling of Euclidean Brownian motions. Communications in Mathematics and Statistics, 1:93-104, 2013.
+Jacob, P. Lectures on Couplings and Monte Carlo. https://sites.google.com/site/pierrejacob/cmclectures, 2021.
+Karras, T., Aittala, M., Aila, T., and Laine, S. Elucidating the design space of diffusion-based generative models. In Advances in Neural Information Processing Systems, 2022.
+Karras, T., Aittala, M., Kynkänniemi, T., Lehtinen, J., Aila, T., and Laine, S. Guiding a diffusion model with a bad version of itself. In Advances in Neural Information Processing Systems, 2024.
+Katzir, O., Patashnik, O., Cohen-Or, D., and Lischinski, D. Noise-free score distillation. In International Conference on Learning Representations, 2024.
+Kim, D., Lai, C.-H., Liao, W.-H., Murata, N., Takida, Y., Uesaka, T., He, Y., Mitsufuji, Y., and Ermon, S. Consistency trajectory models: Learning probability flow ODE trajectory of diffusion. In International Conference on Learning Representations, 2024.
+Kingma, D. P. and Welling, M. Auto-encoding variational Bayes. International Conference on Learning Representations, 2014.
+Leviathan, Y., Kalman, M., and Matias, Y. Fast inference from transformers via speculative decoding. In International Conference on Machine Learning, 2023.
+Li, M., Cai, T., Cao, J., Zhang, Q., Cai, H., Bai, J., Jia, Y., Li, K., and Han, S. Distrifusion: Distributed parallel inference for high-resolution diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024a.
+
+Li, T., Tian, Y., Li, H., Deng, M., and He, K. Autoregressive image generation without vector quantization. In Advances in Neural Information Processing Systems, 2024b.
+Lindvall, T. Lectures on the Coupling Method. John Wiley & Sons, New York, 1992.
+Liu, L., Ren, Y., Lin, Z., and Zhao, Z. Pseudo numerical methods for diffusion models on manifolds. In International Conference on Learning Representations, 2022.
+Liu, X., Zhang, X., Ma, J., Peng, J., and Liu, Q. Instaflow: One step is enough for high-quality diffusion-based text-to-image generation. In International Conference on Learning Representations, 2023.
+Lu, C., Zhou, Y., Bao, F., Chen, J., Li, C., and Zhu, J. Dpm-solver: A fast ODE solver for diffusion probabilistic model sampling in around 10 steps. In Advances in Neural Information Processing Systems, 2022.
+Luhman, E. and Luhman, T. Knowledge distillation in iterative generative models for improved sampling speed. arXiv preprint arXiv:2101.02388, 2021.
+Luo, W. A comprehensive survey on knowledge distillation of diffusion models. arXiv preprint arXiv:2304.04262, 2023.
+Ma, X., Fang, G., and Wang, X. Deepcache: Accelerating diffusion models for free. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024.
+Meng, C., Rombach, R., Gao, R., Kingma, D., Ermon, S., Ho, J., and Salimans, T. On distillation of guided diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023.
+Milchev, A., Heermann, D., and Binder, K. Finite-size scaling analysis of the $\phi^4$ field theory on the square lattice. Journal of Statistical Physics, 44:749-784, 1986.
+Peherstorfer, B., Willcox, K., and Gunzburger, M. Survey of multifidelity methods in uncertainty propagation, inference, and optimization. SIAM Review, 60(3):550-591, 2018.
+Rombach, R., Blattmann, A., Lorenz, D., Esser, P., and Ommer, B. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022.
+Sabour, A., Fidler, S., and Kreis, K. Align your steps: Optimizing sampling schedules in diffusion models. arXiv preprint arXiv:2404.14507, 2024.
+
+Salimans, T. and Ho, J. Progressive distillation for fast sampling of diffusion models. In International Conference on Learning Representations, 2022.
+Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., and Chen, X. Improved techniques for training GANs. In Advances in Neural Information Processing Systems, 2016.
+Salimans, T., Mensink, T., Heek, J., and Hoogeboom, E. Multistep distillation of diffusion models via moment matching. In Advances in Neural Information Processing Systems, 2024.
+Sauer, A., Lorenz, D., Blattmann, A., and Rombach, R. Adversarial diffusion distillation. In European Conference on Computer Vision, 2024.
+Shen, R. and Lee, Y. T. The randomized midpoint method for log-concave sampling. In Advances in Neural Information Processing Systems, 2019.
+Sherlock, C., Golightly, A., and Henderson, D. A. Adaptive, delayed-acceptance MCMC for targets with expensive likelihoods. Journal of Computational and Graphical Statistics, 26(2):434-444, 2017.
+Shih, A., Belkhale, S., Ermon, S., Sadigh, D., and Anari, N. Parallel sampling of diffusion models. In Advances in Neural Information Processing Systems, 2023.
+Shoji, I. and Ozaki, T. Estimation for nonlinear stochastic differential equations by a local linearization method. Stochastic Analysis and Applications, 16(4):733-752, 1998.
+Sohl-Dickstein, J., Weiss, E., Maheswaranathan, N., and Ganguli, S. Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learning, 2015.
+Song, J., Meng, C., and Ermon, S. Denoising diffusion implicit models. In International Conference on Learning Representations, 2021.
+Song, Y., Dhariwal, P., Chen, M., and Sutskever, I. Consistency models. In International Conference on Machine Learning, 2023.
+Spector, B. F. and Re, C. Accelerating LLM inference with staged speculative decoding. In Workshop on Efficient Systems for Foundation Models@ ICML2023, 2023.
+Stern, M., Shazeer, N., and Uszkoreit, J. Blockwise parallel decoding for deep autoregressive models. In Advances in Neural Information Processing Systems, 2018.
+
+Sun, X., Ge, T., Wei, F., and Wang, H. Instantaneous grammatical error correction with shallow aggressive decoding. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, 2021.
+Sun, Z., Suresh, A. T., Ro, J. H., Beirami, A., Jain, H., and Yu, F. Spectr: Fast speculative decoding via optimal transport. In Advances in Neural Information Processing Systems, 2023.
+Tang, Z., Tang, J., Luo, H., Wang, F., and Chang, T.-H. Accelerating parallel sampling of diffusion models. In International Conference on Machine Learning, 2024.
+Thorisson, H. Coupling, Stationarity, and Regeneration. Springer, New York, 2000.
+Tong, V., Trung-Dung, H., Liu, A., Broeck, G. V. d., and Niepert, M. Learning to discretize denoising diffusion ODEs. arXiv preprint arXiv:2405.15506, 2024.
+Vempala, S. and Wibisono, A. Rapid convergence of the unadjusted Langevin algorithm: Isoperimetry suffices. In Advances in Neural Information Processing Systems, 2019.
+Vincent, P. A connection between score matching and denoising autoencoders. Neural Computation, 23(7):1661-1674, 2011.
+Wang, Z., Zhang, R., Ding, K., Yang, Q., Li, F., and Xiang, S. Continuous speculative decoding for autoregressive image generation. arXiv preprint arXiv:2411.11925, 2024.
+Xia, H., Ge, T., Wang, P., Chen, S.-Q., Wei, F., and Sui, Z. Speculative decoding: Exploiting speculative execution for accelerating seq2seq generation. In Findings of the Association for Computational Linguistics: EMNLP 2023, pp. 3909-3925, 2023.
+Xia, H., Yang, Z., Dong, Q., Wang, P., Li, Y., Ge, T., Liu, T., Li, W., and Sui, Z. Unlocking efficiency in large language model inference: A comprehensive survey of speculative decoding. In Findings of the Association for Computational Linguistics: ACL 2024, 2024.
+Xiao, Z., Kreis, K., and Vahdat, A. Tackling the generative learning trilemma with denoising diffusion GANs. In International Conference on Learning Representations, 2022.
+Xu, Y., Zhao, Y., Xiao, Z., and Hou, T. Ufogen: You forward once large scale text-to-image generation via diffusion GANs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024.
+
+Yin, T., Gharbi, M., Zhang, R., Shechtman, E., Durand, F., Freeman, W. T., and Park, T. One-step diffusion with distribution matching distillation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024.
+Yu, L. and Dalalyan, A. Parallelized midpoint randomization for Langevin Monte Carlo. arXiv preprint arXiv:2402.14434, 2024.
+Zhang, Q. and Chen, Y. Fast sampling of diffusion models with exponential integrator. In International Conference on Learning Representations, 2023.
+Zhang, R., Isola, P., Efros, A. A., Shechtman, E., and Wang, O. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018.
+Zhou, H. and Sugiyama, M. Parallel simulation for sampling under isoperimetry and score-based diffusion models. arXiv preprint arXiv:2412.07435, 2024.
+
+# Organization of the Supplementary Material
+
+This supplementary material is organized as follows. Proofs of the main results are gathered in Appendix A. Potential alternative drafting strategies are discussed in Appendix B. A detailed implementation of speculative sampling for diffusion models is presented in Appendix C. In Appendix D, we present various results on the first rejection time and how it can be efficiently approximated numerically. An extension of the maximal coupling strategy to Gaussians with different variances is proposed in Appendix E. In Appendix F, we investigate alternative acceptance criteria relying on either the introduction of a temperature parameter or the use of the "typical acceptance criterion" of (Cai et al., 2024) introduced for LLMs. Appendix G presents an extension of speculative sampling that incorporates spatial transformations. In Appendix H, we establish a lower bound on the expectation of the log-acceptance ratio. Experimental details are gathered in Appendix I, and additional experimental results are provided in Appendix J. Finally, Appendix K details how the speculative sampling procedure proposed in this work can be used to accelerate the simulation of Langevin diffusions to sample from unnormalized target distributions, and presents simulations in this context.
+
+# A. Proofs of the Main Results
+
+# A.1. Proof of Proposition 2.1
+
+The joint distribution of $(\tilde{X}, X)$ generated by Algorithm 2 is
+
+$$
+f (\tilde {x}, x) = p (\tilde {x}) \left(\alpha (\tilde {x}) \delta_ {\tilde {x}} (x) + (1 - \alpha (\tilde {x})) r (x)\right),
+$$
+
+with $\delta_{\tilde{x}}(x)$ the Kronecker delta and $\alpha (\tilde{x}) = \min (1,q(\tilde{x}) / p(\tilde{x}))$ . That is, we first sample $\tilde{X}\sim p$ , then set $X = \tilde{X}$ with probability $\alpha (\tilde{X})$ and sample $X\sim r$ otherwise. It follows that the marginal distribution of $X$ is given by
+
+$$
+f (x) = \sum_ {\tilde {x} \in \mathcal {X}} f (\tilde {x}, x) = \alpha (x) p (x) + \left(1 - \sum_ {\tilde {x} \in \mathcal {X}} \alpha (\tilde {x}) p (\tilde {x})\right) r (x). \tag {12}
+$$
+
+We have
+
+$$
+\begin{array}{l} r (x) \propto \max (0, q (x) - p (x)) \\ = q (x) - \min (p (x), q (x)) \\ = q (x) - \alpha (x) p (x). \\ \end{array}
+$$
+
+Therefore, we have that
+
+$$
+r (x) = \frac {q (x) - \alpha (x) p (x)}{1 - \sum_ {\tilde {x} \in \mathcal {X}} \alpha (\tilde {x}) p (\tilde {x})}.
+$$
+
+Hence, by substituting the expression of $r(x)$ in (12), we obtain $f(x) = q(x)$ , that is $X \sim q$ . Now by construction, we have that
+
+$$
+\begin{array}{l} \mathbb {P} (X \neq \tilde {X}) = 1 - \sum_ {x \in \mathcal {X}} p (x) \alpha (x) \\ = 1 - \sum_ {x \in \mathcal {X}} \min (p (x), q (x)) \\ = | | p - q | | _ {\mathrm {T V}}, \\ \end{array}
+$$
+
+using $\min(a, b) = \frac{1}{2}(a + b - |a - b|)$ for any $a, b$ , together with $\sum_{x \in \mathcal{X}} (p(x) + q(x)) = 2$ . Conversely, Lindvall's inequality (Lindvall, 1992) (also known as the coupling inequality) shows that any pair of random variables $(\tilde{X}, X)$ satisfying marginally $\tilde{X} \sim p$ and $X \sim q$ verifies
+
+$$
+\left| \left| p - q \right| \right| _ {\mathrm {T V}} \leq \mathbb {P} (X \neq \tilde {X}). \tag {13}
+$$
+
+Algorithm 2 generates a joint distribution for which the inequality (13) becomes an equality; hence it is optimal.
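+
+Proposition 2.1 can be checked empirically on a small discrete example. The sketch below (hypothetical helper names; NumPy assumed) implements the accept/residual step of Algorithm 2 and verifies that the output marginal is $q$ and that the mismatch probability equals $||p - q||_{\mathrm{TV}}$:
+
+```python
+import numpy as np
+
+def speculative_step(p, q, rng):
+    # Algorithm 2: draw X_draft ~ p, accept with prob min(1, q/p),
+    # otherwise draw from the residual r(x) prop. to max(0, q(x) - p(x)).
+    x_draft = rng.choice(len(p), p=p)
+    if rng.uniform() <= min(1.0, q[x_draft] / p[x_draft]):
+        return x_draft, x_draft
+    r = np.maximum(0.0, q - p)
+    return x_draft, rng.choice(len(q), p=r / r.sum())
+
+rng = np.random.default_rng(0)
+p = np.array([0.5, 0.3, 0.2])
+q = np.array([0.2, 0.3, 0.5])
+n = 100_000
+pairs = np.array([speculative_step(p, q, rng) for _ in range(n)])
+freq = np.bincount(pairs[:, 1], minlength=3) / n
+mismatch = np.mean(pairs[:, 0] != pairs[:, 1])
+tv = 0.5 * np.abs(p - q).sum()  # ||p - q||_TV = 0.3
+print(freq, mismatch)  # freq ≈ q, mismatch ≈ 0.3
+```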
+
+# A.2. The need for adjusted rejection sampling
+
+One could wonder if an algorithm where we sample from $q$ under rejection and not from the modified probability $r(x) \propto \max(0, q(x) - p(x))$ would work. In particular, we could consider Algorithm 5.
+
+Algorithm 5 INCORRECT REJECTION $(p,q,\tilde{X})$
+```txt
+Require: Proba. distributions $p, q$ and $\tilde{X} \sim p$
+Sample $U \sim \mathrm{Unif}[0,1]$
+bool $= \mathbb{I}[U \leq \min(1, q(\tilde{X}) / p(\tilde{X}))]$
+if bool then
+    Set $X = \tilde{X}$
+else
+    Sample $X \sim q(\cdot)$
+end if
+return $(X, \mathrm{bool})$
+```
+
+It can easily be shown that Algorithm 5 does not output $X$ with distribution $q$ , as in this case the joint distribution of $(\tilde{X}, X)$ is
+
+$$
+f (\tilde {x}, x) = p (\tilde {x}) \left(\alpha (\tilde {x}) \delta_ {\tilde {x}} (x) + (1 - \alpha (\tilde {x})) q (x)\right),
+$$
+
+so the marginal distribution of $X$ is given by
+
+$$
+f (x) = \alpha (x) p (x) + \left(1 - \sum_ {\tilde {x} \in \mathcal {X}} \alpha (\tilde {x}) p (\tilde {x})\right) q (x) \neq q (x).
+$$
+
+In particular, the following example illustrates the problem with Algorithm 5. Consider $p = \mathrm{Unif}(\{0,1\})$ and $q = \mathrm{Unif}(\{0,1,2,3\})$ . In that case, we have $\mathrm{bool} \sim \operatorname{Ber}(1/2)$ , so we accept $\tilde{X}$ half of the time in expectation. If we were to sample from $q = \mathrm{Unif}(\{0,1,2,3\})$ upon rejection, then the output distribution $f(x)$ of $X$ would be given by
+
+$$
+X \sim \frac {3}{4} \mathrm {U n i f} (\{0, 1 \}) + \frac {1}{4} \mathrm {U n i f} (\{2, 3 \}).
+$$
+
+This means that we sample too much on the set $\{0,1\}$ . In order to get $X \sim q(\cdot)$ , we need to sample more on the set outside of the support of $p$ . This is exactly the purpose of $r(\cdot)$ . Indeed, we have $r = \mathrm{Unif}(\{2,3\})$ . Hence, using Algorithm 2 we get
+
+$$
+X \sim \frac {1}{2} \mathrm {U n i f} (\{0, 1 \}) + \frac {1}{2} \mathrm {U n i f} (\{2, 3 \}),
+$$
+
+that is, $X\sim q$ as required.
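+
+This counterexample is easy to reproduce numerically. The toy simulation below (hypothetical names; NumPy assumed) contrasts Algorithm 5 with the corrected residual sampling of Algorithm 2:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+n = 100_000
+
+def one_sample(resample_from_q):
+    x_draft = rng.choice([0, 1])         # X_draft ~ Unif{0, 1}
+    # On the support of p: q(x)/p(x) = (1/4)/(1/2) = 1/2.
+    if rng.uniform() <= 0.5:
+        return x_draft                    # accept the draft
+    if resample_from_q:                   # Algorithm 5 (incorrect)
+        return rng.choice([0, 1, 2, 3])
+    return rng.choice([2, 3])             # residual r = Unif{2, 3}
+
+wrong = np.array([one_sample(True) for _ in range(n)])
+right = np.array([one_sample(False) for _ in range(n)])
+# Mass on {0, 1} should be 1/2 under q; Algorithm 5 gives 3/4.
+print(np.mean(wrong <= 1), np.mean(right <= 1))  # ≈ 0.75 vs ≈ 0.5
+```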
+
+# A.3. Proof of Proposition 3.1
+
+We have $\tilde{Y} \sim \mathcal{N}(m^p; \sigma^2\mathrm{Id})$ . We check here that the algorithm also returns $Y \sim \mathcal{N}(m^q; \sigma^2\mathrm{Id})$ . To show this, we leverage the fact that we can rewrite $Y = m^q + \sigma \tilde{Z}$ for some random variable $\tilde{Z}$ whose distribution follows
+
+$$
+\begin{array}{l} f (\tilde {z}) = \int \delta_ {z + \Delta} (\tilde {z}) \min \left(1, \frac {\mathcal {N} (z + \Delta ; 0 , \operatorname {I d})}{\mathcal {N} (z ; 0 , \operatorname {I d})}\right) \mathcal {N} (z; 0, \operatorname {I d}) \mathrm {d} z \\ + \int \delta_ {\left(\mathrm {I d} - 2 e e ^ {\top}\right) z} (\tilde {z}) \max \left(0, 1 - \frac {\mathcal {N} (z + \Delta ; 0 , \mathrm {I d})}{\mathcal {N} (z ; 0 , \mathrm {I d})}\right) \mathcal {N} (z; 0, \mathrm {I d}) \mathrm {d} z, \\ \end{array}
+$$
+
+where we have used $1 - \min(1, a) = \max(0, 1 - a)$ for $a \geq 0$ . Hence to show the validity of the procedure, we need now to show that $\tilde{Z} \sim \mathcal{N}(0, \mathrm{Id})$ , i.e., $f(\tilde{z}) = \mathcal{N}(\tilde{z}; 0, \mathrm{Id})$ . We have
+
+$$
+\begin{array}{l} \int \delta_ {z + \Delta} (\tilde {z}) \min \left(1, \frac {\mathcal {N} (z + \Delta ; 0 , \mathrm {I d})}{\mathcal {N} (z ; 0 , \mathrm {I d})}\right) \mathcal {N} (z; 0, \mathrm {I d}) \mathrm {d} z \\ = \int \delta_ {z + \Delta} (\tilde {z}) \min \left(\mathcal {N} (z; 0, \operatorname {I d}), \mathcal {N} (z + \Delta ; 0, \operatorname {I d})\right) \mathrm {d} z \\ = \min \left(\mathcal {N} (\tilde {z} - \Delta ; 0, \operatorname {I d}), \mathcal {N} (\tilde {z}; 0, \operatorname {I d})\right). \\ \end{array}
+$$
+
+In addition, we have that
+
+$$
+\begin{array}{l} \int \delta_ {(\mathrm {I d} - 2 e e ^ {\top}) z} (\tilde {z}) \max \Big (0, 1 - \frac {\mathcal {N} (z + \Delta ; 0 , \mathrm {I d})}{\mathcal {N} (z ; 0 , \mathrm {I d})} \Big) \mathcal {N} (z; 0, \mathrm {I d}) \mathrm {d} z \\ = \int \delta_ {\left(\operatorname {I d} - 2 e e ^ {\top}\right) z} (\tilde {z}) \max \left(0, \mathcal {N} (z; 0, \operatorname {I d}) - \mathcal {N} (z + \Delta ; 0, \operatorname {I d})\right) \mathrm {d} z \\ = \max \left(0, \mathcal {N} \left(\left(\operatorname {I d} - 2 e e ^ {\top}\right) \tilde {z}; 0, \operatorname {I d}\right) - \mathcal {N} \left(\left(\operatorname {I d} - 2 e e ^ {\top}\right) \tilde {z} + \Delta ; 0, \operatorname {I d}\right)\right) \\ = \max (0, \mathcal {N} (\tilde {z}; 0, \operatorname {I d}) - \mathcal {N} (\tilde {z} - \Delta ; 0, \operatorname {I d})) \\ \end{array}
+$$
+
+as $\tilde{z} = (\mathrm{Id} - 2ee^{\top})z$ implies that $z = (\mathrm{Id} - 2ee^{\top})\tilde{z}$ and $\mathcal{N}((\mathrm{Id} - 2ee^{\top})\tilde{z};0,\mathrm{Id}) = \mathcal{N}(\tilde{z};0,\mathrm{Id})$ because $||(\mathrm{Id} - 2ee^{\top})\tilde{z} || = ||\tilde{z} ||$ . Finally we used the fact that $\mathcal{N}((\mathrm{Id} - 2ee^{\top})\tilde{z} +\Delta ;0,\mathrm{Id}) = \mathcal{N}(\tilde{z} -\Delta ;0,\mathrm{Id})$ as
+
+$$
+\begin{array}{l} \left\| (\mathrm{Id} - 2 e e^{\top}) \tilde{z} + \Delta \right\|^{2} = \| \Delta \|^{2} + \left\| (\mathrm{Id} - 2 e e^{\top}) \tilde{z} \right\|^{2} + 2 \Delta^{\top} \tilde{z} - 4 \Delta^{\top} e e^{\top} \tilde{z} \\ = \| \Delta \|^{2} + \| \tilde{z} \|^{2} - 2 \Delta^{\top} \tilde{z} \\ = \| \tilde{z} - \Delta \|^{2}, \\ \end{array}
+$$
+
+as $e e^{\top} = \Delta \Delta^{\top} / ||\Delta||^{2}$ . Combining these results, we obtain that
+
+$$
+\begin{array}{l} f (\tilde {z}) = \min (\mathcal {N} (\tilde {z} - \Delta ; 0, \operatorname {I d}), \mathcal {N} (\tilde {z}; 0, \operatorname {I d})) + \max (0, \mathcal {N} (\tilde {z}; 0, \operatorname {I d}) - \mathcal {N} (\tilde {z} - \Delta ; 0, \operatorname {I d})) \\ = \mathcal {N} (\tilde {z}; 0, \mathrm {I d}). \\ \end{array}
+$$
+
+We thus have proved that $\tilde{Z} \sim \mathcal{N}(0, \mathrm{Id})$ , so $Y \sim \mathcal{N}(m^q, \sigma^2 \mathrm{Id})$ . To prove now that this coupling is a maximal coupling, we compute $\mathbb{P}(Y \neq \tilde{Y})$ . Recall that $Y = \tilde{Y}$ if $U \leq \min(1, \mathcal{N}(z + \Delta; 0, \mathrm{Id}) / \mathcal{N}(z; 0, \mathrm{Id}))$ so
+
+$$
+\mathbb {P} (Y \neq \tilde {Y}) = 1 - \int \min \left(\mathcal {N} (z; 0, \operatorname {I d}), \mathcal {N} (\Delta + z; 0, \operatorname {I d})\right) \mathrm {d} z.
+$$
+
+It is straightforward to check that this is indeed equal to
+
+$$
+\left\| p - q \right\|_{\mathrm{TV}} = \left\| \mathcal{N}(m^{p}, \sigma^{2} \mathrm{Id}) - \mathcal{N}(m^{q}, \sigma^{2} \mathrm{Id}) \right\|_{\mathrm{TV}} = 2 \Phi \left( \| \Delta \| / 2 \right) - 1,
+$$
+
+where $\Phi$ is the cumulative distribution function of the standard normal random variable. Hence, it follows from Lindvall's inequality (Lindvall, 1992) that Algorithm 4 outputs a maximal coupling.
+
+Similarly to Appendix A.2, it can easily be shown that we cannot simply sample independently from $q$ upon rejection, as this would output a random variable $Y$ whose distribution differs from $q$ .
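+
+The statements of Proposition 3.1 can also be checked numerically. The sketch below (hypothetical helper names; NumPy assumed) simulates the reflection coupling of Algorithm 4 and verifies empirically that $Y \sim \mathcal{N}(m^q, \sigma^2 \mathrm{Id})$ and that $\mathbb{P}(Y \neq \tilde{Y})$ matches $2\Phi(\|\Delta\|/2) - 1$:
+
+```python
+import numpy as np
+from math import erf, sqrt
+
+def reflection_couple(m_p, m_q, sigma, rng):
+    # Algorithm 4: reflection maximal coupling of N(m_p, sigma^2 Id)
+    # and N(m_q, sigma^2 Id); returns (Y_draft, Y).
+    z = rng.standard_normal(m_p.shape)
+    delta = (m_p - m_q) / sigma
+    log_ratio = -0.5 * (np.sum((z + delta) ** 2) - np.sum(z ** 2))
+    if np.log(rng.uniform()) <= min(0.0, log_ratio):
+        y = m_p + sigma * z               # accept: Y = Y_draft
+        return y, y
+    e = delta / np.linalg.norm(delta)     # reflect on rejection
+    return m_p + sigma * z, m_q + sigma * (z - 2.0 * e * (e @ z))
+
+rng = np.random.default_rng(0)
+m_p, m_q, sigma = np.array([0.0, 0.0]), np.array([1.0, 0.0]), 1.0
+n = 100_000
+out = [reflection_couple(m_p, m_q, sigma, rng) for _ in range(n)]
+ys = np.array([y for _, y in out])
+mismatch = np.mean([not np.array_equal(yt, y) for yt, y in out])
+
+Phi = lambda t: 0.5 * (1.0 + erf(t / sqrt(2.0)))
+tv = 2.0 * Phi(np.linalg.norm((m_p - m_q) / sigma) / 2.0) - 1.0
+print(mismatch, tv)     # both ≈ 0.383
+print(ys.mean(axis=0))  # ≈ m_q = [1., 0.]
+```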
+
+# A.4. Optimality of reflection maximal coupling
+
+In Appendix A.3 we have shown that the reflection coupling is a maximal coupling. In what follows, we denote by $\mathcal{C}(m^p,m^q)$ the set of couplings, i.e., distributions on $\mathbb{R}^d\times \mathbb{R}^d$ with marginals $\mathcal{N}(m^p,\sigma^2\mathrm{Id})$ and $\mathcal{N}(m^q,\sigma^2\mathrm{Id})$ . We also denote by $\Pi_{\mathrm{reflection}}\in \mathcal{C}(m^p,m^q)$ the reflection coupling. Proposition 3.1 shows that
+
+$$
\Pi_ {\mathrm {reflection}} \in \operatorname* {argmin} _ {\Pi \in \mathcal {C} (m ^ {p}, m ^ {q})} \mathbb {E} _ {(\tilde {Y}, Y) \sim \Pi} [ \mathbf {1} _ {\tilde {Y} \neq Y} ].
+$$
+
+In fact, Hsu & Sturm (2013, Theorem 4.2) show that
+
+$$
\Pi_ {\mathrm {reflection}} \in \operatorname* {argmin} _ {\Pi \in \mathcal {C} (m ^ {p}, m ^ {q})} \mathbb {E} _ {(\tilde {Y}, Y) \sim \Pi} [ \phi (\| \tilde {Y} - Y \|) ],
+$$
+
for every non-negative, strictly increasing and strictly concave function $\phi$ with $\phi(0) = 0$ . Hence, the reflection coupling also naturally appears if one considers cost functions other than $(\tilde{y}, y) \mapsto \mathbf{1}_{\tilde{y} \neq y}$ .
+
+# B. Alternative drafting strategies for diffusion models
+
+Medusa-like correction. To improve the frozen target as draft model (see (16)), we can introduce a correction term to the frozen model. Our correction is inspired by the Medusa architecture (Cai et al., 2024). More precisely, we consider a smaller correction model $c_{s,t}^{\theta}$ trained with the following loss
+
+$$
+\mathcal {L} (\theta) = \int_ {0} ^ {1} \int_ {0} ^ {1} \| b _ {t} ^ {q} (x _ {t}) - b _ {s} ^ {q} (x _ {s}) - c _ {s, t} ^ {\theta} (x _ {s}, x _ {t}) \| ^ {2} p _ {s, t} (x _ {s}, x _ {t}) \mathrm {d} x _ {s} \mathrm {d} x _ {t}. \tag {14}
+$$
+
Here $p_{s,t}$ can be any distribution with support on $\mathbb{R}^d \times \mathbb{R}^d$ as the minimizer for $c_{s,t}^{\theta}(x_s, x_t)$ is then always $b_t^q(x_t) - b_s^q(x_s)$ . However, in practice, one may choose $p_{s,t}(x_s, x_t)$ defined by the following procedure
+
+$$
\mathbf {X} _ {s} = \alpha_ {s} \mathbf {X} _ {0} + \sigma_ {s} \mathbf {Z} _ {s}, \qquad \mathbf {X} _ {t} = \frac {\alpha_ {t}}{\alpha_ {s}} \mathbf {X} _ {s} + \Big(\sigma_ {t} ^ {2} - \Big(\frac {\sigma_ {s} \alpha_ {t}}{\alpha_ {s}}\Big) ^ {2}\Big) ^ {1 / 2} \mathbf {Z} _ {t},
+$$
+
where $\mathbf{Z}_t$ and $\mathbf{Z}_s$ are independent Gaussian random variables with zero mean and identity covariance matrix; this joint distribution of $\mathbf{X}_s$ and $\mathbf{X}_t$ is the one induced by the diffusion
+
+$$
\mathrm {d} \mathbf {X} _ {u} = f _ {u} \mathbf {X} _ {u} \, \mathrm {d} u + g _ {u} \, \mathrm {d} \mathbf {B} _ {u}.
+$$
+
+If the correction model is expressive enough then the minimizer of (14) is given by $c_{s,t}^{\theta}(x_s,x_t) = b_t^q (x_t) - b_s^q (x_s)$ and in that case the draft model is equal to the target model.
+
+We then have a model $p(y_{k}|y_{n:k - 1}) = \mathcal{N}(y_{k};m_{k - 1}^{p}(y_{n:k - 1}),\sigma_{k - 1}^{2}\mathrm{Id})$ with
+
+$$
+m _ {k} ^ {p} (y _ {n: k}) = y _ {k} + \gamma \{b _ {t _ {n}} ^ {q} (y _ {n}) + c _ {t _ {n}, t _ {k}} ^ {\theta} (y _ {n}, y _ {k}) \}, \sigma_ {k} = \varepsilon \sqrt {\gamma} g _ {1 - t _ {k}}.
+$$
+
+Hence, on a window of size $L$ , we only need to evaluate the target model once while the (cheap) correction model is evaluated $L$ times. If $c_{s,t}^{\theta} = 0$ then we recover the draft model proposed in (16). Note that similarly to the frozen model, we can sample the draft states in parallel.
+
Combining draft models. Assume we have $N_{p}$ draft models such that $p_{\ell}(y_k|y_{k - 1}) = \mathcal{N}(y_k;m_{k - 1}^{p,\ell}(y_{k - 1}),\sigma_{k - 1}^2\mathrm{Id})$ and let $\alpha_{k - 1}^{\ell}(y_{k - 1})\geq 0$ be such that $\sum_{\ell = 1}^{N_p}\alpha_{k - 1}^\ell (y_{k - 1}) = 1$ . We can define a new draft distribution
+
+$$
p _ {\mathrm {mix}} ^ {\alpha} \left(y _ {k} \mid y _ {k - 1}\right) = \mathcal {N} \left(y _ {k}; m _ {k - 1} ^ {\mathrm {mix}} \left(y _ {k - 1}\right), \sigma_ {k - 1} ^ {2} \mathrm {Id}\right), \quad m _ {k - 1} ^ {\mathrm {mix}} \left(y _ {k - 1}\right) = \sum_ {\ell = 1} ^ {N _ {p}} \alpha_ {k - 1} ^ {\ell} \left(y _ {k - 1}\right) m _ {k - 1} ^ {p, \ell} \left(y _ {k - 1}\right). \tag {15}
+$$
+
+The distribution in (15) mixes together $N_{p}$ draft models by considering a convex combination of their means. Since it is a Gaussian, Section 3.3 applies and we get that
+
+$$
\mathbb {P} \big (Y _ {k} \neq \tilde {Y} _ {k} | Y _ {k - 1} \big) = 2 \Phi \big (\sigma_ {k - 1} ^ {- 1} \| m _ {k - 1} ^ {q} (Y _ {k - 1}) - m _ {k - 1} ^ {\mathrm {mix}} (Y _ {k - 1}) \| / 2 \big) - 1.
+$$
+
The parameters $\alpha_{k - 1}^{\ell}(y_{k - 1})$ can either be hyperparameters (constants) specified by the practitioner, or can be represented by a mapping $\alpha_{k - 1}^{\ell}(y_{k - 1};\theta)$ with parameters $\theta$ . These parameters $\theta$ can be learned by minimizing the sum of average rejection probabilities $\mathbb{E}_{Y_{0:K}\sim q}\left[\sum_{k = 1}^{K}\mathbb{P}(Y_k\neq \tilde{Y}_k|Y_{k - 1})\right]$ .
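A small NumPy sketch of the mixture draft (15) and the resulting rejection probability; the means, weights, and dimensions below are arbitrary illustrative choices, and the rejection probability compares the target mean with the mixture mean:

```python
import numpy as np
from scipy.stats import norm

def mixture_mean(means, weights):
    """Convex combination of the draft means, as in (15)."""
    w = np.asarray(weights, dtype=float)
    assert np.all(w >= 0.0) and np.isclose(w.sum(), 1.0)
    return np.tensordot(w, np.asarray(means), axes=1)

def rejection_prob(m_target, m_mix, sigma):
    """Rejection probability for two Gaussian steps with covariance sigma^2 Id."""
    return 2.0 * norm.cdf(np.linalg.norm(m_target - m_mix) / (2.0 * sigma)) - 1.0

m_mix = mixture_mean([np.array([1.0, 0.0]), np.array([0.0, 2.0])], [0.25, 0.75])
print(m_mix)  # 0.25*[1, 0] + 0.75*[0, 2] = [0.25, 1.5]
print(rejection_prob(np.array([0.0, 1.5]), m_mix, 1.0))  # small: means are close
```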
+
Parallel sampling and speculative correction. We now show how one can combine Picard iterations from ParaDiGMS (Shih et al., 2023) with speculative sampling by using the output of ParaDiGMS as a draft model. We start by recalling the Picard iterations from Shih et al. (2023). Consider a draft sequence initialized with some deterministic transformation of $\tilde{Y}_n$ , i.e. $\tilde{Y}_{n+1:n_L}^0 = (F_{n+1}^0 (\tilde{Y}_n), \dots, F_{n_L}^0 (\tilde{Y}_n))$ where $n_L = \min(K, n + L)$ . Then, we define the Picard iterations as
+
+$$
+\tilde {Y} _ {k} ^ {m} = \tilde {Y} _ {k - 1} ^ {m - 1} + \gamma \bar {b} _ {t _ {k - 1}} ^ {q} (\tilde {Y} _ {k - 1} ^ {m - 1}), \quad \tilde {Y} _ {n} ^ {m} = \tilde {Y} _ {n},
+$$
+
+where $k \in \{n + 1, \ldots, n_L\}$ and $m \in \{1, \ldots, M - 1\}$ . Here we use Picard iterations for the deterministic sampler, that is $\varepsilon = 0$ and $\bar{b}_t^q(x) = -f_{1-t}(x) + \frac{1}{2} g_{1-t}^2 s_{1-t}(x)$ , see Equation (1). Hence, for any $k \in \{n + 1, \ldots, n_L\}$ there exists a deterministic function $F_k^m$ such that
+
+$$
+\tilde {Y} _ {k} ^ {m} = F _ {k} ^ {m} (\tilde {Y} _ {n}).
+$$
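To illustrate, here is a toy NumPy sketch of these deterministic Picard sweeps (the linear drift and step size are our own arbitrary choices): after $j$ sweeps the first $j$ states of the window agree with the sequential Euler recursion, so a window of length $L$ is solved exactly by $L$ sweeps.

```python
import numpy as np

def picard_window(b, y_n, gamma, L, M):
    """M deterministic Picard sweeps over a window of size L (epsilon = 0):
    the m-th sweep updates Y_k from the previous sweep's Y_{k-1}."""
    ys = np.repeat(y_n[None, :], L + 1, axis=0)  # initialize Y_n, ..., Y_{n+L}
    for _ in range(M):
        prev = ys.copy()
        for k in range(1, L + 1):
            ys[k] = prev[k - 1] + gamma * b(prev[k - 1])
    return ys

b = lambda y: -y  # toy linear drift, chosen for illustration
y0 = np.ones(2)
par = picard_window(b, y0, gamma=0.1, L=5, M=5)
seq = [y0]
for _ in range(5):  # sequential Euler reference
    seq.append(seq[-1] + 0.1 * b(seq[-1]))
print(np.allclose(par, np.array(seq)))  # True: 5 sweeps solve a window of 5
```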
+
Lastly, we perform a final Picard iteration
+
+$$
\tilde {Y} _ {k} ^ {M} = \tilde {Y} _ {k - 1} ^ {M - 1} + \gamma b _ {t _ {k - 1}} ^ {q} (\tilde {Y} _ {k - 1} ^ {M - 1}) + \varepsilon \sqrt {\gamma} g _ {1 - t _ {k - 1}} Z _ {k},
+$$
+
+where $Z_{k}\sim \mathcal{N}(0,\mathrm{Id})$ . Hence, we have
+
+$$
\tilde {Y} _ {k} ^ {M} = F _ {k - 1} ^ {M - 1} (\tilde {Y} _ {n}) + \gamma b _ {t _ {k - 1}} ^ {q} (F _ {k - 1} ^ {M - 1} (\tilde {Y} _ {n})) + \varepsilon \sqrt {\gamma} g _ {1 - t _ {k - 1}} Z _ {k}.
+$$
+
+We consider the sequence $\tilde{Y}_{n + 1:n_L}^M$ as a draft sequence. In that case we still have that $\tilde{Y}_{n + 1:n_L}^M\sim p(y_{n + 1:n_L}|y_n)$ with
+
+$$
+p (y _ {n + 1: n _ {L}} | y _ {n}) = \prod_ {k = n + 1} ^ {n _ {L}} p (y _ {k} | y _ {n: k - 1}) = \prod_ {k = n + 1} ^ {n _ {L}} \mathcal {N} (y _ {k}; m _ {k - 1} ^ {p} (y _ {n: k - 1}), \sigma_ {k - 1} ^ {2} \mathrm {I d}),
+$$
+
+as required where
+
+$$
m _ {k} ^ {p} \left(y _ {n: k}\right) = F _ {k} ^ {M - 1} \left(y _ {n}\right) + \gamma b _ {t _ {k}} ^ {q} \left(F _ {k} ^ {M - 1} \left(y _ {n}\right)\right), \quad \sigma_ {k} = \varepsilon \sqrt {\gamma} g _ {1 - t _ {k}}.
+$$
+
Efficient frozen draft strategy. We start by recalling the frozen draft strategy described in Section 3. There, we consider a very simple draft model where $p(y_{k}|y_{n:k - 1}) = p(y_{k}|y_{n},y_{k - 1})$ with
+
+$$
+m _ {k} ^ {p} \left(y _ {n: k}\right) = y _ {k} + \gamma b _ {t _ {n}} ^ {q} \left(y _ {n}\right), \sigma_ {k} = \sqrt {\gamma} \varepsilon g _ {1 - t _ {k}}. \tag {16}
+$$
+
This draft model is similar to the target model, except that we replace $b_{t_k}^q (y_k)$ with $b_{t_n}^q (y_n)$ . Importantly, on a window of size $L$ , we only need to query the target model once in order to draw a draft sequence. In practice, a more efficient modification of this frozen draft strategy can be obtained if we replace $b_{t_n}^q (y_n)$ with $b_{t_n}^q (\tilde{y}_n)$ . Note that if all the samples in the previous window have been accepted, then $b_{t_n}^q (y_n)$ coincides with $b_{t_n}^q (\tilde{y}_n)$ ; otherwise, the two differ. The main advantage of this procedure is that we can leverage the quantities computed on the previous window during the iterated speculative sampling procedure. Indeed, $b_{t_n}^q (\tilde{y}_n)$ is always computed when doing speculative sampling on the previous window, in order to perform the verification stage, see Algorithm 3. This drastically reduces the cost of the draft model: we do not need to call any model to compute the proposals, since $b_{t_n}^q (\tilde{y}_n)$ has already been computed at the previous verification stage. The only caveat is that this method requires an initialization, i.e., we need to compute $b_{t_0}^q (\tilde{y}_0)$ . In that case, we simply compute $b_{t_0}^q (y_0)$ (so the cost of running the whole draft model is one function evaluation of the target model). Finally, another alternative strategy is to use $b_{t_{n - 1}}^q (y_{n - 1})$ in place of $b_{t_n}^q (\tilde{y}_n)$ when $\tilde{y}_n\neq y_n$ .
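A Python sketch of this cached variant (class and method names are our own): drafting a window triggers no new target calls, because the drift at the last accepted state is reused from the verification stage.

```python
import numpy as np

class FrozenDraft:
    """Frozen draft strategy with caching of the target drift b_q."""

    def __init__(self, b_q):
        self.b_q = b_q            # target drift: the expensive network
        self.calls = 0            # counts target-model evaluations
        self.cached_drift = None

    def initialize(self, t0, y0):
        self.cached_drift = self._eval(t0, y0)   # one call at the start

    def draft_window(self, y_n, gamma, sigmas, rng):
        """Draft states use only the cached drift: zero target calls."""
        ys, y = [], y_n
        for sig in sigmas:
            y = y + gamma * self.cached_drift + sig * rng.standard_normal(y.shape)
            ys.append(y)
        return ys

    def after_verification(self, t_n, y_tilde_n):
        # The verification stage queries the target anyway; cache that value.
        self.cached_drift = self._eval(t_n, y_tilde_n)

    def _eval(self, t, y):
        self.calls += 1
        return self.b_q(t, y)

draft = FrozenDraft(lambda t, y: -y)  # toy linear drift
draft.initialize(0.0, np.zeros(2))
window = draft.draft_window(np.zeros(2), 0.1, [0.1] * 4, np.random.default_rng(0))
print(len(window), draft.calls)  # 4 1 -> a window of 4 drafts for one target call
```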
+
+# C. Detailed implementation of speculative sampling for diffusion models
+
+We present in Algorithm 6 a detailed implementation of speculative sampling for diffusion models, an algorithm combining Algorithm 3 and Algorithm 4.
+
+# D. Distribution of time to rejection
+
Consider the process $(\tilde{Y}_k, Y_k)_{k \geq 0}$ with distribution
+
+$$
+\Gamma (\tilde {y} _ {0: n}, y _ {0: n}) = p (\tilde {y} _ {0}) \delta_ {\tilde {y} _ {0}} (y _ {0}) \prod_ {k = 1} ^ {n} p (\tilde {y} _ {k} | y _ {k - 1}) \Big (\alpha_ {k} (y _ {k - 1}, \tilde {y} _ {k}) \delta_ {\tilde {y} _ {k}} (y _ {k}) + (1 - \alpha_ {k} (y _ {k - 1}, \tilde {y} _ {k})) r (y _ {k} | y _ {k - 1}, \tilde {y} _ {k}) \Big).
+$$
+
+where
+
+$$
+\alpha_ {k} \left(y _ {k - 1}, \tilde {y} _ {k}\right) = \min \left(1, \frac {q \left(\tilde {y} _ {k} \mid y _ {k - 1}\right)}{p \left(\tilde {y} _ {k} \mid y _ {k - 1}\right)}\right)
+$$
+
+and
+
+$$
+r \left(y _ {k} \mid y _ {k - 1}, \tilde {y} _ {k}\right) = \delta_ {f \left(y _ {k - 1}, \tilde {y} _ {k}\right)} \left(y _ {k}\right)
+$$
+
corresponds to the reflection maximal coupling for an appropriate function $f$ . This distribution describes the speculative sampling algorithm for diffusions (it does not describe the drafting process over a horizon of $L$ , but this is irrelevant here). By construction, we have $\int \Gamma (\tilde{y}_{0:n},y_{0:n})\mathrm{d}\tilde{y}_{0:n} = q(y_{0:n})$ . Starting from $\tilde{Y}_0 = Y_0$ , we can look at the first time $\tau$ , $\tau \geq 1$ , at which the draft state is rejected. This is equivalent to looking at the first time that $\tilde{Y}_k\neq Y_k$ . We have from direct calculations that
+
+$$
+\mathbb {P} (\tau > k) = \int \dots \int p \left(\tilde {y} _ {0: k}\right) \prod_ {i = 1} ^ {k} \alpha_ {i} \left(\tilde {y} _ {i - 1}, \tilde {y} _ {i}\right) \mathrm {d} \tilde {y} _ {0: k}.
+$$
+
Hence $\mathbb{P}(\tau > k)$ is given by a Feynman-Kac formula (Del Moral, 2004), so we could estimate this quantity efficiently by running a particle filter (Del Moral, 2004; Doucet et al., 2001). A similar expression can be obtained for the distribution of $\tau_{n}$ , the $n^{\mathrm{th}}$ time the draft state is rejected starting from the last rejection, $p(\tilde{y}_0)$ being replaced by $p(\tilde{y}_{\tau_{n - 1} + 1}|y_{\tau_{n - 1}})$ .
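For intuition, $\mathbb{P}(\tau > k)$ can also be estimated by plain Monte Carlo in one dimension (the drifts, step size, and sample sizes below are our own arbitrary choices); a particle filter would reduce the variance when the product of acceptance probabilities is small:

```python
import numpy as np
from scipy.stats import norm

def survival_prob(b_p, b_q, gamma, y0, k_max, n_mc, rng):
    """Monte Carlo estimate of P(tau > k) for k = 1..k_max: along a draft
    trajectory from p, each step stays coupled with probability
    2*Phi(-sqrt(gamma) * |b_p(y) - b_q(y)| / 2)."""
    surv = np.zeros(k_max)
    for _ in range(n_mc):
        y = y0
        for k in range(k_max):
            accept = 2.0 * norm.cdf(-np.sqrt(gamma) * abs(b_p(y) - b_q(y)) / 2.0)
            if rng.uniform() > accept:
                break  # first rejection: tau = k + 1
            surv[k] += 1.0
            y = y + gamma * b_p(y) + np.sqrt(gamma) * rng.standard_normal()
    return surv / n_mc

rng = np.random.default_rng(0)
probs = survival_prob(lambda y: -y, lambda y: -y + 1.0, 0.1, 0.0, 5, 20_000, rng)
print(probs)  # non-increasing in k
```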
+
+Algorithm 6 Speculative Sampling for DDM
+```txt
Require: Lookahead integer $L$ , sequence length $K$ , target model $q$ (see eq. (2)) and draft model $p$ (see eq. (3)).
Sample $Y_0 \sim \mathcal{N}(0, \mathrm{Id})$ and set $n = 0$ .
while $n < K$ do
  Set $\tilde{Y}_n \gets Y_n$ and $n_L = \min(n + L, K)$ .
  for $k = n + 1 : n_L$ do
    Sample $\tilde{Y}_k \sim \mathcal{N}(m_{k-1}^p(\tilde{Y}_{n:k-1}), \sigma_{k-1}^2 \mathrm{Id})$ .
  end for
  In parallel, compute $m_n^q(\tilde{Y}_n)$ , $m_{n+1}^q(\tilde{Y}_{n+1})$ , ..., $m_{n_L-1}^q(\tilde{Y}_{n_L-1})$ .
  for $k = n + 1 : n_L$ do
    Set $\Delta_{k-1} = (m_{k-1}^p(\tilde{Y}_{n:k-1}) - m_{k-1}^q(\tilde{Y}_{k-1})) / \sigma_{k-1}$ and $e = \Delta_{k-1} / \|\Delta_{k-1}\|$ .
    Let $Z_{k-1} = (\tilde{Y}_k - m_{k-1}^p(\tilde{Y}_{n:k-1})) / \sigma_{k-1}$ .
    Sample $U \sim \mathrm{Unif}[0, 1]$ .
    bool = $\mathbb{I}[U \leq \min(1, \mathcal{N}(Z_{k-1} + \Delta_{k-1}; 0, \mathrm{Id}) / \mathcal{N}(Z_{k-1}; 0, \mathrm{Id}))]$ .
    if bool then
      Set $Y_k = \tilde{Y}_k$ .
    else
      Set $Y_k = m_{k-1}^q(\tilde{Y}_{k-1}) + \sigma_{k-1}(\mathrm{Id} - 2ee^\top)Z_{k-1}$ .
      Exit For Loop
    end if
  end for
  Set $n \gets k$ .
end while
return $Y_K$
+```
+
+
+Figure 4. Evolution of the rejection probability $(y$ -axis) with the dimension $d$ ( $x$ -axis) for $\sigma_{1} = 0.2$ and $\sigma_{2} = 0.1$
+
Consider now the case where
+
+$$
+p (y _ {k} | y _ {k - 1}) = \mathcal {N} (y _ {k}; y _ {k - 1} + \gamma b ^ {p} (y _ {k - 1}), \gamma \mathrm {I d}), \quad q (y _ {k} | y _ {k - 1}) = \mathcal {N} (y _ {k}; y _ {k - 1} + \gamma b ^ {q} (y _ {k - 1}), \gamma \mathrm {I d}).
+$$
+
+Under the simplifying assumption that $||b^{p}(x) - b^{q}(x)||\geq M$ , we have
+
+$$
\begin{array}{l} \alpha_ {\gamma , k} = \mathbb {P} (\tilde {Y} _ {0} = Y _ {0}) \prod_ {i = 1} ^ {k} \mathbb {P} (\tilde {Y} _ {i} = Y _ {i} | \tilde {Y} _ {i - 1} = Y _ {i - 1}) \\ = \prod_ {i = 1} ^ {k} 2 \Phi \left(- \sqrt {\gamma} \| b ^ {p} \left(Y _ {i - 1}\right) - b ^ {q} \left(Y _ {i - 1}\right) \| / 2\right) \\ \leq \left(2 \Phi (- \sqrt {\gamma} M / 2)\right) ^ {k}. \end{array}
+$$
+
Since $2\Phi (-x) = 1 - \sqrt{\frac{2}{\pi}}\, x + O(x^3)$ , we obtain
+
+$$
+\lim_{\gamma \to 0}\alpha_{\gamma ,1 / \gamma} = 0.
+$$
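This limit is easy to check numerically ($M = 1$ is an arbitrary choice):

```python
import numpy as np
from scipy.stats import norm

M = 1.0
alphas = []
for gamma in [1e-1, 1e-2, 1e-3, 1e-4]:
    # probability that all 1/gamma steps of a trajectory remain coupled
    alpha = (2.0 * norm.cdf(-np.sqrt(gamma) * M / 2.0)) ** (1.0 / gamma)
    alphas.append(alpha)
    print(gamma, alpha)  # decreases rapidly towards 0 as gamma -> 0
```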
+
+# E. Maximal coupling between Gaussian distributions with different covariance matrices
+
Algorithm 4 is restricted to Gaussian random variables admitting the same covariance matrix. This implies that the draft and the target samplers introduce the same amount of noise at each step. One can wonder whether the two samplers could instead have different noise levels. In the following example, we show that, even if the means are equal, the probability of rejection becomes extremely high as the dimension increases.
+
+Indeed, consider two centered $d$ -dimensional normals $p(x) = \mathcal{N}(x;0,\sigma_1^2\mathrm{Id})$ and $q(x) = \mathcal{N}(x;0,\sigma_2^2\mathrm{Id})$ . We assume that $\sigma_{2} < \sigma_{1}$ . In that case, we have that $p(x)\leq q(x)$ if and only if $\| x\| ^2\leq R^2$ with
+
+$$
+R ^ {2} = d \log \left(\sigma_ {1} ^ {2} / \sigma_ {2} ^ {2}\right) \left(1 / \sigma_ {2} ^ {2} - 1 / \sigma_ {1} ^ {2}\right) ^ {- 1}.
+$$
+
Hence, in the ideal case where the acceptance probability $r$ is given by $1 - \| p - q\|_{\mathrm{TV}}$ , we get that

$$
r = \mathbb {E} [ \mathbf {1} _ {\sigma_ {1} ^ {2} Q \leq R ^ {2}} ] + \mathbb {E} [ \mathbf {1} _ {\sigma_ {2} ^ {2} Q \geq R ^ {2}} ] = 1 - \mathbb {E} [ \mathbf {1} _ {Q \leq R ^ {2} / \sigma_ {2} ^ {2}} ] + \mathbb {E} [ \mathbf {1} _ {Q \leq R ^ {2} / \sigma_ {1} ^ {2}} ],
$$
+
+where $Q$ is a $\chi^2$ random variable with $d$ degrees of freedom. Unfortunately this probability is extremely close to 0 as $d$ increases, see Figure 4. Therefore, the probability of coupling is very low in high dimensions.
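The acceptance probability is straightforward to evaluate with the chi-square CDF; a short sketch reproducing the trend of Figure 4 ($\sigma_1 = 0.2$, $\sigma_2 = 0.1$):

```python
import numpy as np
from scipy.stats import chi2

def coupling_prob(d, s1, s2):
    """Acceptance probability 1 - ||p - q||_TV for N(0, s1^2 Id) vs
    N(0, s2^2 Id) with s2 < s1, via the chi-square representation."""
    R2 = d * np.log(s1**2 / s2**2) / (1.0 / s2**2 - 1.0 / s1**2)
    return 1.0 - chi2.cdf(R2 / s2**2, df=d) + chi2.cdf(R2 / s1**2, df=d)

rs = [coupling_prob(d, 0.2, 0.1) for d in (1, 10, 100, 1000)]
print(rs)  # decays towards 0 as the dimension grows
```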
+
+# F. Alternative verification strategies
+
+In this section, we investigate different verification strategies for Algorithm 4. First, we introduce a temperature parameter in Appendix F.1. Then, in Appendix F.2, we adapt the typical acceptance criterion of Stern et al. (2018); Cai et al. (2024) to our setting.
+
+# F.1. Influence of the temperature
+
Consider Algorithm 7, which is a version of Algorithm 4 including an additional temperature parameter $\tau > 0$ .

Algorithm 7 (Temperature) REJECTION$(p,q,\tilde{Y})$ for two Gaussians with the same covariance
```txt
Require: Probability densities $p(x) = \mathcal{N}(x; m^p, \sigma^2 \mathrm{Id})$, $q(x) = \mathcal{N}(x; m^q, \sigma^2 \mathrm{Id})$, $\tilde{Y} \sim p$, temperature $\tau > 0$.
Set $\Delta = (m^p - m^q) / \sigma$ and $e = \Delta / \|\Delta\|$.
Let $Z = (\tilde{Y} - m^p) / \sigma$.
Sample $U \sim \mathrm{Unif}[0, 1]$.
bool = $\mathbb{I}[U \leq \min(1, \mathcal{N}(Z + \Delta; 0, \tau \mathrm{Id}) / \mathcal{N}(Z; 0, \tau \mathrm{Id}))]$.
if bool then
  Set $Y = \tilde{Y}$.
else
  Set $Y = m^q + \sigma(\mathrm{Id} - 2ee^\top)Z$.
end if
return $(Y, \mathrm{bool})$
```
+
Setting $\tau = 1$ , we recover Algorithm 4, but for $\tau > 1$ we have a larger probability of accepting the current proposal $\tilde{Y}$ . This higher acceptance rate, of course, is not without its drawbacks since we are no longer sampling from the correct distribution. In what follows, we analyze how the distribution is shifted when tuning the temperature parameter $\tau$ . To shorten notation, we use in the following proof the notation $\varphi_{\tau}(z) = \mathcal{N}(z;0,\tau \mathrm{Id})$ and $\varphi = \varphi_{1}$ for $\tau = 1$ . We recall that $\mathrm{C}_c(\mathbb{R}^d)$ is the set of continuous functions with compact support. For any $f \in \mathrm{C}_c(\mathbb{R}^d)$ , using that for any $x \in \mathbb{R}^d$ , $\|\hat{x}\| = \|x\|$ for $\hat{x} = (\mathrm{Id} - 2ee^\top)x$ , and that $x \mapsto (\mathrm{Id} - 2ee^\top)x$ is an involution, we have
+
+$$
\begin{array}{l} \mathbb {E} [ f (Y) ] = \int_ {\mathbb {R} ^ {d}} f (m ^ {p} + \sigma z) \min (1, \varphi_ {\tau} (z + \Delta) / \varphi_ {\tau} (z)) \varphi (z) \mathrm {d} z \\ \quad + \int_ {\mathbb {R} ^ {d}} f (m ^ {q} + \sigma \hat {z}) (1 - \min (1, \varphi_ {\tau} (z + \Delta) / \varphi_ {\tau} (z))) \varphi (z) \mathrm {d} z \\ = \int_ {\mathbb {R} ^ {d}} f (m ^ {p} + \sigma z) \min (1, \varphi_ {\tau} (z + \Delta) / \varphi_ {\tau} (z)) \varphi (z) \mathrm {d} z \\ \quad + \int_ {\mathbb {R} ^ {d}} f (m ^ {q} + \sigma z) (1 - \min (1, \varphi_ {\tau} (\hat {z} + \Delta) / \varphi_ {\tau} (z))) \varphi (z) \mathrm {d} z \\ = \int_ {\mathbb {R} ^ {d}} f (m ^ {p} + \sigma z) \min (1, \varphi_ {\tau} (z + \Delta) / \varphi_ {\tau} (z)) \varphi (z) \mathrm {d} z \\ \quad - \int_ {\mathbb {R} ^ {d}} f (m ^ {q} + \sigma z) \min (1, \varphi_ {\tau} (\hat {z} + \Delta) / \varphi_ {\tau} (z)) \varphi (z) \mathrm {d} z + \int_ {\mathbb {R} ^ {d}} f (m ^ {q} + \sigma z) \varphi (z) \mathrm {d} z \\ = \int_ {\mathbb {R} ^ {d}} f (m ^ {q} + \sigma z) \min \left(1, \varphi_ {\tau} (z) / \varphi_ {\tau} (z - \Delta)\right) \varphi (z - \Delta) \mathrm {d} z \\ \quad - \int_ {\mathbb {R} ^ {d}} f (m ^ {q} + \sigma z) \min (1, \varphi_ {\tau} (\hat {z} + \Delta) / \varphi_ {\tau} (z)) \varphi (z) \mathrm {d} z + \int_ {\mathbb {R} ^ {d}} f (m ^ {q} + \sigma z) \varphi (z) \mathrm {d} z. \end{array}
+$$
+
Since $x \mapsto \hat{x}$ is an involution and $\hat{\Delta} = -\Delta$ , we have that $\widehat{\hat{z} + \Delta} = z - \Delta$ , and therefore $\varphi_{\tau}(\hat{z} + \Delta) = \varphi_{\tau}(z - \Delta)$ . Hence, we get that
+
+$$
\begin{array}{l} \mathbb {E} [ f (Y) ] = \int_ {\mathbb {R} ^ {d}} f (m ^ {q} + \sigma z) \min \left(1, \varphi_ {\tau} (z) / \varphi_ {\tau} (z - \Delta)\right) \varphi (z - \Delta) \mathrm {d} z \\ \quad - \int_ {\mathbb {R} ^ {d}} f (m ^ {q} + \sigma z) \min \left(1, \varphi_ {\tau} (z - \Delta) / \varphi_ {\tau} (z)\right) \varphi (z) \mathrm {d} z + \int_ {\mathbb {R} ^ {d}} f (m ^ {q} + \sigma z) \varphi (z) \mathrm {d} z \\ = \int_ {\mathbb {R} ^ {d}} f (m ^ {q} + \sigma z) \min \left(\varphi (z - \Delta) / \varphi (z), (\varphi_ {\tau} (z) \varphi (z - \Delta)) / (\varphi_ {\tau} (z - \Delta) \varphi (z))\right) \varphi (z) \mathrm {d} z \\ \quad - \int_ {\mathbb {R} ^ {d}} f (m ^ {q} + \sigma z) \min \left(1, \varphi_ {\tau} (z - \Delta) / \varphi_ {\tau} (z)\right) \varphi (z) \mathrm {d} z + \int_ {\mathbb {R} ^ {d}} f (m ^ {q} + \sigma z) \varphi (z) \mathrm {d} z. \end{array}
+$$
+
+Hence, we get that
+
+$$
+\mathbb {E} [ f (Y) ] = \int_ {\mathbb {R} ^ {d}} f (m ^ {q} + \sigma z) (1 + a _ {\tau} (z)) \varphi (z) \mathrm {d} z, \tag {17}
+$$
+
+where
+
+$$
a _ {\tau} (z) = - \min \left(1, \varphi_ {\tau} (z - \Delta) / \varphi_ {\tau} (z)\right) + \min \left(\left(\varphi_ {\tau} (z) \varphi (z - \Delta)\right) / \left(\varphi_ {\tau} (z - \Delta) \varphi (z)\right), \varphi (z - \Delta) / \varphi (z)\right).
+$$
+
+Note that for $\tau = 1$ we have that $a_{\tau}(z) = 0$ . If we let $\tau \rightarrow +\infty$ then we get that $a_{\tau}(z) = -1 + \varphi (z - \Delta) / \varphi (z)$ so $Y\sim \mathcal{N}(m^p,\sigma^2\mathrm{Id})$ from (17), i.e., we always accept the draft model. In Figure 5, we show the effect of the temperature on the output distribution. The influence of the temperature in more realistic settings is studied in Appendix I.
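The shift of the distribution with $\tau$ can be observed directly by simulating the tempered step in one dimension (the means, $\sigma$, and sample size below are our own arbitrary choices); in 1D the reflection $(\mathrm{Id} - 2ee^\top)Z$ reduces to a sign flip:

```python
import numpy as np

def tempered_step(m_p, m_q, sigma, tau, rng):
    """One output Y of the tempered rejection step (Algorithm 7), 1D sketch."""
    delta = (m_p - m_q) / sigma
    z = rng.standard_normal()
    ratio = np.exp(-((z + delta) ** 2 - z**2) / (2.0 * tau))
    if rng.uniform() <= min(1.0, ratio):
        return m_p + sigma * z  # accept the draft
    return m_q - sigma * z      # reflection: sign flip in 1D

rng = np.random.default_rng(1)
means = {tau: np.mean([tempered_step(1.0, 0.0, 1.0, tau, rng)
                       for _ in range(100_000)]) for tau in (0.5, 1.0, 2.0)}
print(means)  # tau = 1 recovers mean m_q = 0; tau > 1 shifts towards m_p = 1
```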
+
+Link with guidance. We first give the following result and subsequently explain its connections with (Karras et al., 2024).
+
Proposition F.1 (Link with guidance): Let $(\tilde{Y}, Y)$ be the output of Algorithm 7. We have that
+
+$$
\mathbb {E} [ Y ] = m ^ {q} + C _ {\tau} (\| \Delta \|) (m ^ {p} - m ^ {q}),
+$$
+
with $\Delta = (m^p - m^q) / \sigma$ and $C_{\tau}(\| \Delta \|) \leq 0$ if $\tau \leq 1$ and $C_{\tau}(\| \Delta \|) \geq 0$ otherwise. In particular, we have that $C_{\tau}(\| \Delta \|) = 0$ if $\tau = 1$ . An explicit expression for $C_{\tau}(\| \Delta \|)$ is given in the proof.
+
We can interpret this result as follows. For $\tau = 1$ , we recover that the mean of $Y$ is the mean of the target, as expected: we have a maximal coupling in this case, so $Y$ follows the correct target distribution. For $\tau > 1$ , we increase the acceptance probability: this intuitively has the effect of moving the distribution of $Y$ towards the distribution of $\tilde{Y}$ . Looking at the mean, we can interpret this effect as a guidance effect, where we push towards $m^p$ and away from $m^q$ . For $\tau < 1$ , we are pushing towards the target distribution even more than with $\tau = 1$ . Looking at the mean of $Y$ , we can interpret this effect as a guidance term, i.e., pushing away from the draft model and towards the target model. This last setting is similar to (Karras et al., 2024), which considers an explicit guidance of a "good" model with a "bad" model.
+
+Proof. Using (17), we have that
+
+$$
+\begin{array}{l} \mathbb {E} [ Y ] = \int_ {\mathbb {R} ^ {d}} (m ^ {q} + \sigma z) (1 + a _ {\tau} (z)) \varphi (z) \mathrm {d} z \\ = m ^ {q} + m ^ {q} \int_ {\mathbb {R} ^ {d}} a _ {\tau} (z) \varphi (z) \mathrm {d} z + \sigma \int_ {\mathbb {R} ^ {d}} z a _ {\tau} (z) \varphi (z) \mathrm {d} z. \\ \end{array}
+$$
+
First, we show that $\int_{\mathbb{R}^d}a_\tau (z)\varphi (z)\mathrm{d}z = 0$ . Indeed, using the changes of variables $z\mapsto -z$ and $z\mapsto z - \Delta$ we get
+
+$$
\begin{array}{l} \int_ {\mathbb {R} ^ {d}} a _ {\tau} (z) \varphi (z) \mathrm {d} z = \int_ {\mathbb {R} ^ {d}} \min \left(\left(\varphi_ {\tau} (z) \varphi (z - \Delta)\right) / \left(\varphi_ {\tau} (z - \Delta) \varphi (z)\right), \varphi (z - \Delta) / \varphi (z)\right) \varphi (z) \mathrm {d} z \\ \quad - \int_ {\mathbb {R} ^ {d}} \min \left(1, \varphi_ {\tau} (z - \Delta) / \varphi_ {\tau} (z)\right) \varphi (z) \mathrm {d} z \\ = \int_ {\mathbb {R} ^ {d}} \min \left(\left(\varphi_ {\tau} (z) \varphi (z + \Delta)\right) / \left(\varphi_ {\tau} (z + \Delta) \varphi (z)\right), \varphi (z + \Delta) / \varphi (z)\right) \varphi (z) \mathrm {d} z \\ \quad - \int_ {\mathbb {R} ^ {d}} \min \left(1, \varphi_ {\tau} (z - \Delta) / \varphi_ {\tau} (z)\right) \varphi (z) \mathrm {d} z \\ = \int_ {\mathbb {R} ^ {d}} \min \left(\left(\varphi_ {\tau} (z - \Delta) \varphi (z)\right) / \left(\varphi_ {\tau} (z) \varphi (z - \Delta)\right), \varphi (z) / \varphi (z - \Delta)\right) \varphi (z - \Delta) \mathrm {d} z \\ \quad - \int_ {\mathbb {R} ^ {d}} \min \left(1, \varphi_ {\tau} (z - \Delta) / \varphi_ {\tau} (z)\right) \varphi (z) \mathrm {d} z \\ = \int_ {\mathbb {R} ^ {d}} \min \left(\varphi_ {\tau} (z - \Delta) \varphi (z) / \varphi_ {\tau} (z), \varphi (z)\right) \mathrm {d} z - \int_ {\mathbb {R} ^ {d}} \min \left(\varphi (z), \varphi (z) \varphi_ {\tau} (z - \Delta) / \varphi_ {\tau} (z)\right) \mathrm {d} z = 0. \end{array}
+$$
+
+Hence, we have that
+
+$$
+\mathbb {E} [ Y ] = \int_ {\mathbb {R} ^ {d}} (m ^ {q} + \sigma z) (1 + a _ {\tau} (z)) \varphi (z) \mathrm {d} z = m ^ {q} + \sigma \int_ {\mathbb {R} ^ {d}} z a _ {\tau} (z) \varphi (z) \mathrm {d} z.
+$$
+
We are going to show that the components orthogonal to $e = \Delta / \|\Delta\|$ vanish, i.e., that for any $i \in \{1, \ldots, d-1\}$ ,

$$
\int_ {\mathbb {R} ^ {d}} \langle z, e _ {i} \rangle a _ {\tau} (z) \varphi (z) \mathrm {d} z = 0,
$$

where, for any $z \in \mathbb{R}^d$ , we write $z = z_{e}e + \sum_{i=1}^{d-1}z_{e_{i}}e_{i}$ , with $z_{e} = \langle z, e \rangle$ and $z_{e_{i}} = \langle z, e_{i} \rangle$ , and $\{e, e_{i}\}_{i=1}^{d-1}$ an orthonormal basis. Note in particular that for any $i \in \{1, \ldots, d-1\}$ , $\langle e_{i}, \Delta \rangle = 0$ . We have that
+
+$$
\begin{array}{l} \int_ {\mathbb {R} ^ {d}} z a _ {\tau} (z) \varphi (z) \mathrm {d} z = \int_ {\mathbb {R} ^ {d}} z \min \left(\left(\varphi_ {\tau} (z) \varphi (z - \Delta)\right) / \left(\varphi_ {\tau} (z - \Delta) \varphi (z)\right), \varphi (z - \Delta) / \varphi (z)\right) \varphi (z) \mathrm {d} z \\ \quad - \int_ {\mathbb {R} ^ {d}} z \min \left(1, \varphi_ {\tau} (z - \Delta) / \varphi_ {\tau} (z)\right) \varphi (z) \mathrm {d} z \\ = \int_ {\mathbb {R} ^ {d}} z \min \left(\varphi_ {\tau} (z) \varphi (z - \Delta) / \varphi_ {\tau} (z - \Delta), \varphi (z - \Delta)\right) \mathrm {d} z - \int_ {\mathbb {R} ^ {d}} z \min \left(\varphi (z), \varphi_ {\tau} (z - \Delta) \varphi (z) / \varphi_ {\tau} (z)\right) \mathrm {d} z \\ = - \int_ {\mathbb {R} ^ {d}} (z - \Delta) \min \left(\varphi_ {\tau} (z - \Delta) \varphi (z) / \varphi_ {\tau} (z), \varphi (z)\right) \mathrm {d} z - \int_ {\mathbb {R} ^ {d}} z \min \left(\varphi (z), \varphi_ {\tau} (z - \Delta) \varphi (z) / \varphi_ {\tau} (z)\right) \mathrm {d} z \\ = - 2 \int_ {\mathbb {R} ^ {d}} z \min \left(\varphi_ {\tau} (z - \Delta) \varphi (z) / \varphi_ {\tau} (z), \varphi (z)\right) \mathrm {d} z + \Delta \int_ {\mathbb {R} ^ {d}} \min \left(\varphi_ {\tau} (z - \Delta) \varphi (z) / \varphi_ {\tau} (z), \varphi (z)\right) \mathrm {d} z. \tag {18} \end{array}
+$$
+
Next, we look at $z \mapsto \min(\varphi_{\tau}(z - \Delta)\varphi(z) / \varphi_{\tau}(z), \varphi(z))$ . For any $z$ , let $z_{e^{\perp}} = z - z_{e}e$ . Note that $\langle z_{e^{\perp}}, \Delta \rangle = 0$ . We have that
+
+$$
\begin{array}{l} \min \left(\varphi_ {\tau} (z - \Delta) \varphi (z) / \varphi_ {\tau} (z), \varphi (z)\right) \\ = \min \left(\varphi_ {\tau} (z _ {e} - \| \Delta \|) \varphi_ {\tau} (z _ {e ^ {\perp}}) \varphi (z _ {e}) \varphi (z _ {e ^ {\perp}}) / (\varphi_ {\tau} (z _ {e}) \varphi_ {\tau} (z _ {e ^ {\perp}})), \varphi (z _ {e}) \varphi (z _ {e ^ {\perp}})\right) \\ = \min \left(\varphi_ {\tau} (z _ {e} - \| \Delta \|) \varphi (z _ {e}) \varphi (z _ {e ^ {\perp}}) / \varphi_ {\tau} (z _ {e}), \varphi (z _ {e}) \varphi (z _ {e ^ {\perp}})\right) \\ = \varphi (z _ {e ^ {\perp}}) \min \left(\varphi_ {\tau} (z _ {e} - \| \Delta \|) \varphi (z _ {e}) / \varphi_ {\tau} (z _ {e}), \varphi (z _ {e})\right). \end{array}
+$$
+
Using this result, (18) and the fact that for any $i\in \{1,\dots ,d - 1\}$ , $\langle e_i,\Delta \rangle = 0$ , we get
+
+$$
+\left\langle \int_ {\mathbb {R} ^ {d}} z a _ {\tau} (z) \varphi (z) \mathrm {d} z, e _ {i} \right\rangle = 0.
+$$
+
+Therefore, we get that
+
+$$
\mathbb {E} [ Y ] = m ^ {q} + C _ {\tau} (\| \Delta \|) (m ^ {p} - m ^ {q}).
+$$
+
In the rest of the proof, we give an explicit expression for the parameter $C_{\tau}(\|\Delta\|)$ . We first determine for which $z$ we have $\varphi_{\tau}(z_e - \|\Delta\|)\varphi(z_e)/\varphi_{\tau}(z_e) \leq \varphi(z_e)$ , i.e., $\log(\varphi_{\tau}(z_e - \|\Delta\|)) \leq \log(\varphi_{\tau}(z_e))$ , i.e., $z_e \leq \|\Delta\|/2$ . In particular, we have that
+
+$$
\begin{array}{l} \int_ {\mathbb {R}} \min \left(\varphi_ {\tau} (z _ {e} - \| \Delta \|) \varphi (z _ {e}) / \varphi_ {\tau} (z _ {e}), \varphi (z _ {e})\right) \mathrm {d} z _ {e} \\ = \int_ {- \infty} ^ {\| \Delta \| / 2} \varphi_ {\tau} (z _ {e} - \| \Delta \|) \varphi (z _ {e}) / \varphi_ {\tau} (z _ {e}) \, \mathrm {d} z _ {e} + \int_ {\| \Delta \| / 2} ^ {+ \infty} \varphi (z _ {e}) \mathrm {d} z _ {e} \\ = \Phi (- \| \Delta \| / 2) + \int_ {- \infty} ^ {\| \Delta \| / 2} \varphi_ {\tau} (z _ {e} - \| \Delta \|) \varphi (z _ {e}) / \varphi_ {\tau} (z _ {e}) \, \mathrm {d} z _ {e}. \end{array}
+$$
+
+In addition, we have that
+
+$$
\varphi_ {\tau} \left(z _ {e} - \| \Delta \|\right) \varphi \left(z _ {e}\right) / \varphi_ {\tau} \left(z _ {e}\right) = \varphi \left(z _ {e} - \| \Delta \| / \tau\right) \exp \left[ (1 - \tau) \| \Delta \| ^ {2} / (2 \tau^ {2}) \right].
+$$
+
+Therefore, we get that
+
+$$
\begin{array}{l} \int_ {\mathbb {R}} \min \left(\varphi_ {\tau} (z _ {e} - \| \Delta \|) \varphi (z _ {e}) / \varphi_ {\tau} (z _ {e}), \varphi (z _ {e})\right) \mathrm {d} z _ {e} \\ = \Phi (- \| \Delta \| / 2) + \exp \left[ (1 - \tau) \| \Delta \| ^ {2} / (2 \tau^ {2}) \right] \int_ {- \infty} ^ {\| \Delta \| / 2} \varphi (z _ {e} - \| \Delta \| / \tau) \mathrm {d} z _ {e} \\ = \Phi (- \| \Delta \| / 2) + \exp \left[ (1 - \tau) \| \Delta \| ^ {2} / (2 \tau^ {2}) \right] \Phi \left(- \| \Delta \| \left(\frac {1}{\tau} - \frac {1}{2}\right)\right). \tag {19} \end{array}
+$$
+
+Similarly, we have that
+
+$$
\begin{array}{l} \int_ {\mathbb {R}} z _ {e} \min \left(\varphi_ {\tau} \left(z _ {e} - \| \Delta \|\right) \varphi \left(z _ {e}\right) / \varphi_ {\tau} \left(z _ {e}\right), \varphi \left(z _ {e}\right)\right) \mathrm {d} z _ {e} \\ = \int_ {- \infty} ^ {\| \Delta \| / 2} z _ {e} \varphi_ {\tau} (z _ {e} - \| \Delta \|) \varphi (z _ {e}) / \varphi_ {\tau} (z _ {e}) \, \mathrm {d} z _ {e} + \int_ {\| \Delta \| / 2} ^ {+ \infty} z _ {e} \varphi (z _ {e}) \mathrm {d} z _ {e} \\ = \exp \left[ (1 - \tau) \| \Delta \| ^ {2} / (2 \tau^ {2}) \right] \int_ {- \infty} ^ {\| \Delta \| / 2} z _ {e} \varphi (z _ {e} - \| \Delta \| / \tau) \mathrm {d} z _ {e} + \varphi (\| \Delta \| / 2) \\ = \exp \left[ (1 - \tau) \| \Delta \| ^ {2} / (2 \tau^ {2}) \right] \left\{ \int_ {- \infty} ^ {\| \Delta \| / 2} \left(z _ {e} - \| \Delta \| / \tau\right) \varphi (z _ {e} - \| \Delta \| / \tau) \mathrm {d} z _ {e} + \frac {\| \Delta \|}{\tau} \Phi \left(- \| \Delta \| \left(\frac {1}{\tau} - \frac {1}{2}\right)\right) \right\} + \varphi (\| \Delta \| / 2) \\ = \varphi (\| \Delta \| / 2) - \exp \left[ (1 - \tau) \| \Delta \| ^ {2} / (2 \tau^ {2}) \right] \varphi \left(\| \Delta \| \left(\frac {1}{\tau} - \frac {1}{2}\right)\right) + \frac {\| \Delta \|}{\tau} \exp \left[ (1 - \tau) \| \Delta \| ^ {2} / (2 \tau^ {2}) \right] \Phi \left(- \| \Delta \| \left(\frac {1}{\tau} - \frac {1}{2}\right)\right). \tag {20} \end{array}
+$$
+
+Combining (19), (20) and (18), we get that
+
+$$
\begin{array}{l} \left\langle e, \int_ {\mathbb {R} ^ {d}} z a _ {\tau} (z) \varphi (z) \mathrm {d} z \right\rangle = - 2 \varphi (\| \Delta \| / 2) + 2 \exp \left[ (1 - \tau) \| \Delta \| ^ {2} / (2 \tau^ {2}) \right] \varphi \left(\| \Delta \| \left(\frac {1}{\tau} - \frac {1}{2}\right)\right) \\ \quad - \frac {2 \| \Delta \|}{\tau} \exp \left[ (1 - \tau) \| \Delta \| ^ {2} / (2 \tau^ {2}) \right] \Phi \left(- \| \Delta \| \left(\frac {1}{\tau} - \frac {1}{2}\right)\right) + \| \Delta \| \Phi (- \| \Delta \| / 2) + \| \Delta \| \exp \left[ (1 - \tau) \| \Delta \| ^ {2} / (2 \tau^ {2}) \right] \Phi \left(- \| \Delta \| \left(\frac {1}{\tau} - \frac {1}{2}\right)\right) \\ = - 2 \varphi (\| \Delta \| / 2) + 2 \exp \left[ (1 - \tau) \| \Delta \| ^ {2} / (2 \tau^ {2}) \right] \varphi \left(\| \Delta \| \left(\frac {1}{\tau} - \frac {1}{2}\right)\right) + \| \Delta \| \Phi (- \| \Delta \| / 2) + \| \Delta \| \left(1 - \frac {2}{\tau}\right) \exp \left[ (1 - \tau) \| \Delta \| ^ {2} / (2 \tau^ {2}) \right] \Phi \left(- \| \Delta \| \left(\frac {1}{\tau} - \frac {1}{2}\right)\right). \end{array}
+$$
+
+Therefore, we have
+
+$$
\begin{array}{l} C _ {\tau} (\| \Delta \|) = - \frac {2}{\| \Delta \|} \varphi (\| \Delta \| / 2) + \frac {2}{\| \Delta \|} \exp \left[ (1 - \tau) \| \Delta \| ^ {2} / (2 \tau^ {2}) \right] \varphi \left(\| \Delta \| \left(\frac {1}{\tau} - \frac {1}{2}\right)\right) \\ \quad + \Phi (- \| \Delta \| / 2) + \left(1 - \frac {2}{\tau}\right) \exp \left[ (1 - \tau) \| \Delta \| ^ {2} / (2 \tau^ {2}) \right] \Phi \left(- \| \Delta \| \left(\frac {1}{\tau} - \frac {1}{2}\right)\right). \end{array}
+$$
+
+It can be checked that $C_{\tau}(\| \Delta \|) \leq 0$ if $\tau \leq 1$ and $C_{\tau}(\| \Delta \|) \geq 0$ otherwise. In particular, we have that $C_{\tau}(\| \Delta \|) = 0$ if $\tau = 1$ .
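This sign behavior can be checked numerically from the closed form above. The sketch below (our helper names; we read $\exp[\|\Delta\|^2/\tau^2(1-\tau)]$ as $\exp(\|\Delta\|^2(1-\tau)/\tau^2)$, an assumption about the intended grouping) evaluates $C_{\tau}$ at a few points:

```python
import math

def phi(x):
    # standard normal pdf
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    # standard normal cdf
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def C_tau(delta, tau):
    """Closed form for C_tau(||Delta||) displayed above (delta = ||Delta||)."""
    a = delta * (1 / tau - 0.5)
    return (-2 / delta * phi(a) + 2 / delta * phi(delta / 2)
            + Phi(delta / 2) - math.exp(delta**2 * (1 - tau) / tau**2) * Phi(a))

print(C_tau(1.0, 1.0))                         # 0.0: the coupling is exact at tau = 1
print(C_tau(1.0, 0.9) <= 0, C_tau(1.0, 1.5) >= 0)
```

At $\tau = 1$ the two $\varphi$ terms and the two $\Phi$ terms cancel exactly, so $C_1 = 0$ up to floating point.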
+
+# F.2. Typical acceptance in the Gaussian case
+
+We adapt here the typical acceptance criterion introduced in (Cai et al., 2024) to our setting; i.e. we consider the following acceptance ratio
+
+$$
+a (x) = \min (1, \max (q (x) / \kappa , q (x) \exp [ \mathrm {H} (q) ] / \delta)). \tag {21}
+$$
+
+where $\mathrm{H}(q)$ is the differential entropy of $q$ , i.e., $\mathrm{H}(q) = -\int_{\mathbb{R}^d} q(x) \log q(x) \, \mathrm{d}x$ . The hyperparameters $\kappa, \delta > 0$ are assumed to be fixed. We recall that $q(x) = \mathcal{N}(x; m^q, \sigma^2 \mathrm{Id})$ . In that case we have that
+
+$$
+\mathrm {H} (q) = (d / 2) \left(1 + \log (2 \pi) + \log \sigma^ {2}\right).
+$$
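As a quick numerical sketch of (21) (with illustrative values for $\kappa$ and $\delta$, and the standard Gaussian closed form $\mathrm{H}(q) = (d/2)(1 + \log(2\pi\sigma^2))$), one can check that the ratio stays in $[0,1]$ and that a Monte Carlo estimate of $\mathbb{E}[-\log q(X)]$ recovers $\mathrm{H}(q)$:

```python
import math
import random

d, sigma = 2, 0.8
m_q = [0.3, -1.2]

def log_q(x):
    # log-density of N(m_q, sigma^2 Id)
    sq = sum((xi - mi) ** 2 for xi, mi in zip(x, m_q))
    return -0.5 * sq / sigma**2 - 0.5 * d * math.log(2 * math.pi * sigma**2)

H = 0.5 * d * (1 + math.log(2 * math.pi * sigma**2))  # differential entropy of q

# Monte Carlo estimate of H(q) = E[-log q(X)], X ~ q
random.seed(0)
n = 50_000
est = 0.0
for _ in range(n):
    x = [mi + sigma * random.gauss(0, 1) for mi in m_q]
    est += -log_q(x) / n
print(abs(est - H) < 0.05)

def accept_ratio(x, kappa=0.1, delta=0.05):
    # typical acceptance ratio (21); kappa, delta are illustrative values
    qx = math.exp(log_q(x))
    return min(1.0, max(qx / kappa, qx * math.exp(H) / delta))

print(0.0 <= accept_ratio(m_q) <= 1.0)
```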
+
+Now, if we replace the acceptance criterion in Algorithm 4 with (21) and, if the sample is rejected, apply a deterministic orthogonal transformation $z \mapsto \hat{z}$ to the Gaussian noise to obtain $Y$ , we get that for any $f \in \mathrm{C}_c(\mathbb{R}^d)$
+
+$$
+\mathbb {E} [ f (Y) ] = \int_ {\mathbb {R} ^ {d}} [ a (m ^ {p} + \sigma z) f (m ^ {p} + \sigma z) + (1 - a (m ^ {p} + \sigma z)) f (m ^ {q} + \sigma \hat {z}) ] \mathcal {N} (z; 0, \operatorname {I d}) \mathrm {d} z
+$$
+
+Hence, we get
+
+$$
+\begin{array}{l} \mathbb {E} [ f (Y) ] = \int_ {\mathbb {R} ^ {d}} [ a (m ^ {p} + \sigma z) f (m ^ {p} + \sigma z) + (1 - a (m ^ {p} + \sigma z)) f (m ^ {q} + \sigma \hat {z}) ] \mathcal {N} (z; 0, \operatorname {I d}) \mathrm {d} z \\ = \int_ {\mathbb {R} ^ {d}} [ a (m ^ {p} + \sigma z) f (m ^ {p} + \sigma z) + (1 - a (m ^ {p} + \sigma \hat {z})) f (m ^ {q} + \sigma z) ] \mathcal {N} (z; 0, \operatorname {I d}) \mathrm {d} z \\ = \int_ {\mathbb {R} ^ {d}} \left[ \frac {\mathcal {N} (z - \Delta ; 0 , \operatorname {I d})}{\mathcal {N} (z ; 0 , \operatorname {I d})} a (m ^ {q} + \sigma z) f (m ^ {q} + \sigma z) + (1 - a (m ^ {p} + \sigma \hat {z})) f (m ^ {q} + \sigma z) \right] \mathcal {N} (z; 0, \operatorname {I d}) \mathrm {d} z \\ = \int_ {\mathbb {R} ^ {d}} \left[ \frac {\mathcal {N} (z - \Delta ; 0 , \operatorname {I d})}{\mathcal {N} (z ; 0 , \operatorname {I d})} a \left(m ^ {q} + \sigma z\right) + \left(1 - a \left(m ^ {p} + \sigma \hat {z}\right)\right) \right] f \left(m ^ {q} + \sigma z\right) \mathcal {N} (z; 0, \operatorname {I d}) \mathrm {d} z \\ \end{array}
+$$
+
+Therefore, we have
+
+$$
+\mathbb {E} [ f (Y) ] = \int_ {\mathbb {R} ^ {d}} f (m ^ {q} + \sigma z) \left(1 + \frac {\mathcal {N} (z - \Delta ; 0 , \operatorname {I d})}{\mathcal {N} (z ; 0 , \operatorname {I d})} a (m ^ {q} + \sigma z) - a (m ^ {p} + \sigma \hat {z})\right) \mathcal {N} (z; 0, \operatorname {I d}) \mathrm {d} z.
+$$
+
+# G. Projection and extension to operators
+
+In this section, we show that we can introduce an acceptance criterion so that two random variables are maximally coupled in a latent space. This relaxes the criterion introduced in Algorithm 4. In particular, it is possible to reach higher acceptance
+
+
+Figure 5. Effect of the temperature on the distribution of $Y$ (probability density functions with varying temperatures). The draft model has mean 1.0 and standard deviation 0.5; the target model has mean 3.0 and standard deviation 0.5.
+
+Algorithm 8 REJECTION $(p_{\mathrm{A}}, q_{\mathrm{A}}, \tilde{Y}_{\mathrm{A}})$ for two Gaussians with same (full) covariance
+```latex
+Require: matrix $\mathrm{A}$, $p_{\mathrm{A}}(x) = \mathcal{N}(x;\mathrm{A}m^p,\sigma^2\mathrm{AA}^\top)$, $q_{\mathrm{A}}(x) = \mathcal{N}(x;\mathrm{A}m^q,\sigma^2\mathrm{AA}^\top)$, $\tilde{Y}_{\mathrm{A}}\sim p_{\mathrm{A}}$
+Set $\Delta_{\mathrm{A}} = (\mathrm{AA}^{\top})^{-1 / 2}\mathrm{A}(m^{p} - m^{q}) / \sigma$ and $e_{\mathrm{A}} = \Delta_{\mathrm{A}} / \|\Delta_{\mathrm{A}}\|$
+Let $Z_{\mathrm{A}} = (\mathrm{AA}^{\top})^{-1 / 2}(\tilde{Y}_{\mathrm{A}} - \mathrm{A}m^{p}) / \sigma$
+Sample $U\sim \mathrm{Unif}[0,1]$
+bool $= \mathbb{I}[U\leq \min (1,\mathcal{N}(Z_{\mathrm{A}} + \Delta_{\mathrm{A}};0,\mathrm{Id}) / \mathcal{N}(Z_{\mathrm{A}};0,\mathrm{Id}))]$
+if bool then
+Set $Y_{\mathrm{A}} = \tilde{Y}_{\mathrm{A}}$
+else
+Set $Y_{\mathrm{A}} = \mathrm{A}m^{q} + \sigma (\mathrm{AA}^{\top})^{1 / 2}(\mathrm{Id} - 2e_{\mathrm{A}}e_{\mathrm{A}}^{\top})Z_{\mathrm{A}}$
+end if
+return $(Y_{\mathrm{A}},\texttt{bool})$
+```
+
+rate than with Algorithm 4. Of course, there is a price to pay for this increased flexibility as the variable $Y$ does not follow the target distribution $q$ anymore.
+
+To start with, consider a linear operator $\mathrm{A} \in \mathbb{R}^{d \times d}$ such that $\mathrm{AA}^\top$ is invertible, i.e., $\mathrm{A}$ is surjective. In Algorithm 8, we show how to maximally couple two $d$ -dimensional densities of the form $\mathcal{N}(x; \mathrm{Am}^p, \sigma^2 \mathrm{AA}^\top)$ and $\mathcal{N}(x; \mathrm{Am}^q, \sigma^2 \mathrm{AA}^\top)$ .
+
+Algorithm 8 operates directly in the "latent" space, i.e., it provides a maximal coupling $(\tilde{Y}_{\mathrm{A}}, Y_{\mathrm{A}})$ where $\tilde{Y}_{\mathrm{A}} \sim \mathcal{N}(\mathrm{A}m^p, \sigma^2 \mathrm{AA}^\top)$ and $Y_{\mathrm{A}} \sim \mathcal{N}(\mathrm{A}m^q, \sigma^2 \mathrm{AA}^\top)$ .
+
+We now present Algorithm 9, which is a non-trivial rewriting of Algorithm 8 operating on the original pair $(\tilde{Y}, Y)$ and thus induces maximally coupled $(\tilde{Y}_{\mathrm{A}}, Y_{\mathrm{A}})$ . In what follows, we denote by $\mathrm{A}^{\dagger}$ the Moore-Penrose inverse of $\mathrm{A}$ , defined by
+
+$$
+\mathbf {A} ^ {\dagger} = \mathbf {A} ^ {\top} (\mathbf {A A} ^ {\top}) ^ {- 1}.
+$$
+
+The validity of Algorithm 9 is based on the following lemma.
+
+Lemma G.1 (Latent reflection): Let $\mathrm{AA}^{\top}$ be invertible. Let $\tilde{Y} = m^{p} + \sigma Z$ with $Z\sim \mathcal{N}(0,\mathrm{Id})$ . Let $Z_{\mathrm{A}} = (\mathrm{AA}^{\top})^{-1 / 2}\mathrm{AZ}$ . Let $e_{\mathrm{A}} = \Delta_{\mathrm{A}} / \| \Delta_{\mathrm{A}}\|$ where $\Delta_{\mathrm{A}} = (\mathrm{AA}^{\top})^{-1 / 2}\mathrm{A}\Delta$ and $\Delta = (m^{p} - m^{q}) / \sigma$ . We have that
+
+$$
+\mathrm {A} m ^ {q} + \sigma \left(\mathrm {A A} ^ {\top}\right) ^ {1 / 2} \left(\mathrm {I d} - 2 e _ {\mathrm {A}} e _ {\mathrm {A}} ^ {\top}\right) Z _ {\mathrm {A}} = \mathrm {A} \left[ m ^ {q} + \sigma \left(Z - 2 \frac {Z ^ {\top} \mathrm {A} ^ {\dagger} \mathrm {A} \Delta}{\Delta^ {\top} \mathrm {A} ^ {\dagger} \mathrm {A} \Delta} \Delta\right) \right]. \tag {22}
+$$
+
+In addition, we have that
+
+$$
+\exp \left[ - \frac {1}{2} (\Delta + 2 Z) ^ {\top} A ^ {\dagger} A \Delta \right] = \mathcal {N} \left(Z _ {A} + \Delta_ {A}; 0, I d\right) / \mathcal {N} \left(Z _ {A}; 0, I d\right). \tag {23}
+$$
+
+Proof. First, we have the following
+
+$$
+\begin{array}{l} \mathrm {A} (m ^ {p} - m ^ {q}) (m ^ {p} - m ^ {q}) ^ {\top} \mathrm {A} ^ {\dagger} \mathrm {A} Z = (\mathrm {A A} ^ {\top}) ^ {1 / 2} (\mathrm {A A} ^ {\top}) ^ {- 1 / 2} \mathrm {A} (m ^ {p} - m ^ {q}) (m ^ {p} - m ^ {q}) ^ {\top} \mathrm {A} ^ {\top} (\mathrm {A A} ^ {\top}) ^ {- 1} \mathrm {A} Z \\ = \left(\mathrm {A A} ^ {\top}\right) ^ {1 / 2} \left(\mathrm {A A} ^ {\top}\right) ^ {- 1 / 2} \mathrm {A} \left(m ^ {p} - m ^ {q}\right) \left(m ^ {p} - m ^ {q}\right) ^ {\top} \mathrm {A} ^ {\top} \left(\mathrm {A A} ^ {\top}\right) ^ {- 1 / 2} Z _ {\mathrm {A}} \\ = \sigma^ {2} (\mathrm {A A} ^ {\top}) ^ {1 / 2} \Delta_ {\mathrm {A}} \Delta_ {\mathrm {A}} ^ {\top} Z _ {\mathrm {A}}. \\ \end{array}
+$$
+
+Hence, we have that
+
+$$
+\mathrm {A} \Delta \Delta^ {\top} \mathrm {A} ^ {\dagger} \mathrm {A} Z = \left(\mathrm {A A} ^ {\top}\right) ^ {1 / 2} \Delta_ {\mathrm {A}} \Delta_ {\mathrm {A}} ^ {\top} Z _ {\mathrm {A}}. \tag {24}
+$$
+
+Next, we have that for any $u\in \mathbb{R}^d$
+
+$$
+u ^ {\top} \mathrm {A} ^ {\dagger} \mathrm {A} u = u ^ {\top} \mathrm {A} ^ {\top} \left(\mathrm {A A} ^ {\top}\right) ^ {- 1} \mathrm {A} u = \left\| \left(\mathrm {A A} ^ {\top}\right) ^ {- 1 / 2} \mathrm {A} u \right\| ^ {2}. \tag {25}
+$$
+
+Hence, we have that
+
+$$
+\Delta^ {\top} \mathrm {A} ^ {\dagger} \mathrm {A} \Delta = \| (\mathrm {A A} ^ {\top}) ^ {- 1 / 2} \mathrm {A} \Delta \| ^ {2} = \| \Delta_ {\mathrm {A}} \| ^ {2}.
+$$
+
+Combining this result and (24), we have
+
+$$
+\begin{array}{l} \mathrm {A} \left[ m ^ {q} + \sigma \Big (Z - 2 \frac {Z ^ {\top} \mathrm {A} ^ {\dagger} \mathrm {A} \Delta}{\Delta^ {\top} \mathrm {A} ^ {\dagger} \mathrm {A} \Delta} \Delta \Big) \right] = \mathrm {A} \left[ m ^ {q} + \sigma \Big (Z - 2 \frac {\Delta^ {\top} \mathrm {A} ^ {\dagger} \mathrm {A} Z}{\Delta^ {\top} \mathrm {A} ^ {\dagger} \mathrm {A} \Delta} \Delta \Big) \right] \\ = \mathrm {A} \left[ m ^ {q} + \sigma \left(Z - 2 \frac {\Delta \Delta^ {\top}}{\Delta^ {\top} \mathrm {A} ^ {\dagger} \mathrm {A} \Delta} \mathrm {A} ^ {\dagger} \mathrm {A} Z\right) \right] \\ = \mathrm {A m} ^ {q} + \sigma \left(\mathrm {A A} ^ {\top}\right) ^ {1 / 2} \left(\mathrm {I d} - 2 e _ {\mathrm {A}} e _ {\mathrm {A}} ^ {\top}\right) Z _ {\mathrm {A}}, \\ \end{array}
+$$
+
+which concludes the proof of (22). Second, we have that
+
+$$
+\begin{array}{l} \left(\Delta + 2 Z\right) ^ {\top} \mathrm {A} ^ {\dagger} \mathrm {A} \Delta = \Delta^ {\top} \mathrm {A} ^ {\dagger} \mathrm {A} \Delta + Z ^ {\top} \mathrm {A} ^ {\dagger} \mathrm {A} \Delta + \Delta^ {\top} \mathrm {A} ^ {\dagger} \mathrm {A} Z \\ = (Z + \Delta) ^ {\top} A ^ {\dagger} A (Z + \Delta) - Z ^ {\top} A ^ {\dagger} A Z \\ = \left\| \left(\mathrm {A A} ^ {\top}\right) ^ {- 1 / 2} \mathrm {A} (Z + \Delta) \right\| ^ {2} - \left\| \left(\mathrm {A A} ^ {\top}\right) ^ {- 1 / 2} \mathrm {A} Z \right\| ^ {2} \\ = \left\| Z _ {\mathrm {A}} + \Delta_ {\mathrm {A}} \right\| ^ {2} - \left\| Z _ {\mathrm {A}} \right\| ^ {2}, \\ \end{array}
+$$
+
+where we have used (25). This concludes the proof of (23).
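The two identities of Lemma G.1 can also be verified numerically. The sketch below (our variable names; a random square $\mathrm{A}$, so $\mathrm{AA}^{\top}$ is invertible almost surely) checks (22) directly and compares the two sides of (23) on the log scale:

```python
import numpy as np

rng = np.random.default_rng(0)
d, sigma = 4, 0.7
A = rng.normal(size=(d, d))
AAt = A @ A.T
w, V = np.linalg.eigh(AAt)                       # symmetric square roots of AA^T
AAt_half = V @ np.diag(np.sqrt(w)) @ V.T
AAt_neg_half = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
A_dag = A.T @ np.linalg.inv(AAt)                 # Moore-Penrose inverse of A

m_p, m_q, Z = rng.normal(size=d), rng.normal(size=d), rng.normal(size=d)
Delta = (m_p - m_q) / sigma
Delta_A = AAt_neg_half @ A @ Delta
e_A = Delta_A / np.linalg.norm(Delta_A)
Z_A = AAt_neg_half @ A @ Z
M = A_dag @ A

# identity (22): the latent reflection is A applied to an oblique reflection
lhs22 = A @ m_q + sigma * AAt_half @ (np.eye(d) - 2 * np.outer(e_A, e_A)) @ Z_A
rhs22 = A @ (m_q + sigma * (Z - 2 * (Z @ M @ Delta) / (Delta @ M @ Delta) * Delta))
# identity (23), compared through the exponents
lhs23 = -0.5 * (Delta + 2 * Z) @ M @ Delta
rhs23 = -0.5 * (np.sum((Z_A + Delta_A) ** 2) - np.sum(Z_A ** 2))
print(np.allclose(lhs22, rhs22), abs(lhs23 - rhs23) < 1e-8)
```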
+
+
+
+Algorithm 9 REJECTION $(p,q,\tilde{Y})$ for two Gaussians with same (full) covariance
+Require: matrix $\mathrm{A}$, $p(x) = \mathcal{N}(x;m^p,\sigma^2\mathrm{Id})$, $q(x) = \mathcal{N}(x;m^q,\sigma^2\mathrm{Id})$, $\tilde{Y}\sim p$
+Set $\Delta = (m^{p} - m^{q}) / \sigma$ and $Z = (\tilde{Y} - m^{p}) / \sigma$
+Sample $U\sim \mathrm{Unif}[0,1]$
+bool $= \mathbb{I}[U\leq \min (1,\exp [-\frac{1}{2} (\Delta +2Z)^{\top}\mathrm{A}^{\dagger}\mathrm{A}\Delta])]$
+if bool then
+Set $Y = \tilde{Y}$
+else
+Set $Y = m^{q} + \sigma \Big(Z - 2\frac{Z^{\top}\mathrm{A}^{\dagger}\mathrm{A}\Delta}{\Delta^{\top}\mathrm{A}^{\dagger}\mathrm{A}\Delta}\Delta \Big)$
+end if
+return $(Y,\texttt{bool})$
+
+The main advantage of Algorithm 9 over Algorithm 8 is that it only requires knowledge of $\mathrm{A}$ and $\mathrm{A}^{\dagger}$ , and it implicitly provides a maximal coupling between $\tilde{Y}_{\mathrm{A}}$ and $Y_{\mathrm{A}}$ . Note that if $\mathrm{A}$ is invertible, then $\mathrm{A}^{\dagger} = \mathrm{A}^{-1}$ and Algorithm 9 becomes identical to Algorithm 4, and thus returns a maximal coupling between $\tilde{Y}$ and $Y$ . However, Algorithm 9 is also applicable when only $\mathrm{AA}^{\top}$ is invertible. In that case, we do not recover $Y \sim \mathcal{N}(x; m^{q}, \sigma^{2}\mathrm{Id})$ , but the algorithm can still be applied and does induce maximally coupled $(\tilde{Y}_{\mathrm{A}}, Y_{\mathrm{A}})$ .
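As a sketch of the reduction just noted, the following implements one step of Algorithm 9 (our helper names, with $M = \mathrm{A}^{\dagger}\mathrm{A}$ precomputed) and checks, for $\mathrm{A} = \mathrm{Id}$, that the empirical acceptance rate matches the classical maximal-coupling rate $\int \min(p,q) = 2\Phi(-\|\Delta\|/2)$ of Algorithm 4:

```python
import math
import numpy as np

def rejection_projected(m_p, m_q, sigma, M, Y_tilde, u):
    """One step of Algorithm 9; M = A^dagger A."""
    Z = (Y_tilde - m_p) / sigma
    Delta = (m_p - m_q) / sigma
    ratio = math.exp(-0.5 * float((Delta + 2 * Z) @ M @ Delta))
    if u <= min(1.0, ratio):
        return Y_tilde, True
    Y = m_q + sigma * (Z - 2 * float(Z @ M @ Delta) / float(Delta @ M @ Delta) * Delta)
    return Y, False

rng = np.random.default_rng(1)
d, sigma = 3, 1.0
m_p, m_q = np.zeros(d), np.ones(d)
M = np.eye(d)            # invertible A: Algorithm 9 reduces to Algorithm 4
n = 50_000
acc = sum(
    rejection_projected(m_p, m_q, sigma, M,
                        m_p + sigma * rng.normal(size=d), rng.uniform())[1]
    for _ in range(n)
)
# maximal-coupling acceptance rate 2 Phi(-||Delta||/2)
target = math.erfc(np.linalg.norm((m_p - m_q) / sigma) / (2 * math.sqrt(2)))
print(abs(acc / n - target) < 0.01)
```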
+
+In particular, given a mapping $f$ and a mapping $g$ such that $g(f(x)) \approx x$ for $x \in \mathbb{R}^d$ , we can define Algorithm 10, which is a non-linear approximate version of Algorithm 9. In particular, in Algorithm 10, $f$ can be thought of as an encoder and $g$ as a decoder. In the case where $f(x) = \mathrm{A}x$ , then $\Delta^{\star} = g(f(\Delta)) = \mathrm{A}^{\dagger}\mathrm{A}\Delta$ . Note that by letting $\Delta^{\star} = \Delta/\tau$ in Algorithm 10, we recover Algorithm 7.
+
+Algorithm 10 REJECTION $(p,q,\tilde{Y})$ for two Gaussians with auto-encoders
+Require: $f$, $g$, $p(x) = \mathcal{N}(x;m^p,\sigma^2\mathrm{Id})$, $q(x) = \mathcal{N}(x;m^q,\sigma^2\mathrm{Id})$, $\tilde{Y}\sim p$
+Set $\Delta = (m^{p} - m^{q}) / \sigma$, $\Delta^{\star} = g(f(\Delta))$ and $Z = (\tilde{Y} - m^{p}) / \sigma$
+Sample $U\sim \mathrm{Unif}[0,1]$
+bool $= \mathbb{I}[U\leq \min (1,\exp [-\frac{1}{2} (\Delta +2Z)^{\top}\Delta^{\star}])]$
+if bool then
+Set $Y = \tilde{Y}$
+else
+Set $Y = m^{q} + \sigma \Big(Z - 2\frac{Z^{\top}\Delta^{\star}}{\Delta^{\top}\Delta^{\star}}\Delta \Big)$
+end if
+return $(Y,\texttt{bool})$
+
+# H. Some Theoretical Results
+
+In Appendix H.1, we establish Lemma 4.2 while we prove Theorem 4.3 in Appendix H.2.
+
+# H.1. Control of acceptance ratio
+
+We now provide a lower bound on the expectation of the logarithm of the acceptance ratio for speculative sampling. At step $n + 1$ , the target density is $q(y_{n + 1}|y_n) = \mathcal{N}(y_{n + 1};m_{t_n}^q (y_n),\sigma_n^2\mathrm{Id})$ and, for an independent draft model, the proposal density is $p(y_{n + 1}|y_n) = \mathcal{N}(y_{n + 1};m_{t_n}^p (y_n),\sigma_n^2\mathrm{Id})$ , where
+
+$$
+m _ {t _ {n}} ^ {q} (y) = y + \gamma b _ {1 - t _ {n}} ^ {q} (y), \qquad m _ {t _ {n}} ^ {p} (y) = y + \gamma b _ {1 - t _ {n}} ^ {p} (y)
+$$
+
+with
+
+$$
+b _ {1 - t _ {n}} ^ {q} (y) = - f _ {1 - t _ {n}} y + \frac {g _ {1 - t _ {n}} ^ {2}}{2} s _ {1 - t _ {n}} ^ {q} (y), b _ {1 - t _ {n}} ^ {p} (y) = - f _ {1 - t _ {n}} y + \frac {g _ {1 - t _ {n}} ^ {2}}{2} s _ {1 - t _ {n}} ^ {p} (y).
+$$
+
+The acceptance ratio is then given by
+
+$$
+a _ {n} = \frac {\mathcal {N} (Z + \Delta_ {n} ; 0 , \mathrm {I d})}{\mathcal {N} (Z ; 0 , \mathrm {I d})}
+$$
+
+for $Z\sim \mathcal{N}(Z;0,\mathrm{Id})$ where
+
+$$
+\| \Delta_ {n} \| ^ {2} = \frac {1}{4} \gamma (\varepsilon + \frac {1}{\varepsilon}) ^ {2} g _ {1 - t _ {n}} ^ {2} \| s _ {1 - t _ {n}} ^ {p} (\tilde {Y} _ {n}) - s _ {1 - t _ {n}} ^ {q} (\tilde {Y} _ {n}) \| ^ {2}.
+$$
+
+So we obtain that
+
+$$
+\log a _ {n} = - \frac {1}{2} \| Z + \Delta_ {n} \| ^ {2} + \frac {1}{2} \| Z \| ^ {2}
+$$
+
+so that
+
+$$
+\mathbb {E} [ \log (a _ {n}) | y _ {n} ] = - \frac {1}{2} \| \Delta_ {n} \| ^ {2}.
+$$
+
+Now using Jensen's inequality
+
+$$
+\mathbb {E} [ a _ {n} ] \geq \exp [ \mathbb {E} [ \log (a _ {n}) ] ] = \exp \left[ - \frac {1}{2} \mathbb {E} [ \| \Delta_ {n} \| ^ {2} ] \right].
+$$
+
+This proves the result.
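A quick Monte Carlo check of the displays above (with an illustrative $\Delta_n$ of unit norm; variable names are ours): the sample average of $\log a_n$ concentrates around $-\|\Delta_n\|^2/2$, and the average of $a_n$ sits above the Jensen lower bound:

```python
import math
import random

random.seed(0)
delta = [0.6, -0.8]                  # illustrative Delta_n with ||Delta_n|| = 1
nd2 = 0.5 * sum(x * x for x in delta)
n = 100_000
mean_log_a = 0.0
mean_a = 0.0
for _ in range(n):
    z = [random.gauss(0, 1) for _ in delta]
    # log a_n = -||Z + Delta||^2 / 2 + ||Z||^2 / 2
    log_a = (-0.5 * sum((zi + di) ** 2 for zi, di in zip(z, delta))
             + 0.5 * sum(zi * zi for zi in z))
    mean_log_a += log_a / n
    mean_a += math.exp(log_a) / n
print(abs(mean_log_a + nd2) < 0.02)  # E[log a_n] = -||Delta_n||^2 / 2
print(mean_a >= math.exp(-nd2))      # Jensen lower bound
```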
+
+# H.2. Control of acceptance ratio under exact scores
+
+We consider the following setting. Let $(\mathbf{X}_t^i)_{t\in [0,1]}$ for any $i\in \{0,1\}$ be given by
+
+$$
+\mathrm {d} \mathbf {X} _ {t} ^ {i} = f _ {t} \mathbf {X} _ {t} ^ {i} \mathrm {d} t + g _ {t} \mathrm {d} \mathbf {B} _ {t} ^ {i}, \qquad \mathbf {X} _ {0} ^ {i} \sim \pi_ {0} ^ {i}
+$$
+
+where $f:[0,1)\to \mathbb{R}$ and $g:[0,1)\to [0, + \infty)$ are functions introduced further, $\pi_0^0$ and $\pi_0^1$ are distributions over $\mathbb{R}^d$ and $(\mathbf{B}_t^i)_{t\in [0,1]}$ are $d$ -dimensional Brownian motions. In what follows, we define for any $t\in [0,1)$
+
+$$
+f _ {t} = - 1 / (1 - t), \quad g _ {t} ^ {2} = 2 t / (1 - t).
+$$
+
+In that case, we have that for any $t \in [0,1]$ and $i \in \{0,1\}$
+
+$$
+\mathbf {X} _ {t} ^ {i} = \alpha_ {t} \mathbf {X} _ {0} ^ {i} + \sigma_ {t} \mathbf {Z}, \quad \mathbf {Z} \sim \mathcal {N} (0, \mathrm {I d}) \tag {26}
+$$
+
+with $\alpha_{t} = 1 - t$ and $\sigma_t = t$ . We assume that for any $i\in \{0,1\}$ , $\pi_0^i$ has a density with respect to the Lebesgue measure denoted $p_0^i$ . In that case, for any $t\in [0,1]$ and $i\in \{0,1\}$ , $\mathbf{X}_t^i$ admits a density with respect to the Lebesgue measure denoted $p_t^i$ . In this section, we show that for any $t\in (0,1]$
+
+$$
+\int_ {\mathbb {R} ^ {d}} \| \nabla \log p _ {t} ^ {0} (x _ {t}) - \nabla \log p _ {t} ^ {1} (x _ {t}) \| ^ {2} p _ {t} ^ {0} (x _ {t}) \mathrm {d} x _ {t} \leq C (t, p _ {0} ^ {0}, p _ {0} ^ {1}), \tag {27}
+$$
+
+such that
+
+1. $\lim_{t\to 1}C(t,p_0^0,p_0^1) = 0$
+2. $D(p_0^0 | p_0^1) \to 0$ implies that $C(t, p_0^0, p_0^1) \to 0$ , where $D$ is a measure of divergence between $p_0^0$ and $p_0^1$ defined further.
+
+In other words, item 1 shows that the Fisher score between $p_t^0$ and $p_t^1$ gets smaller as $t$ gets larger, as expected since $p_1^0 = p_1^1$ is a normal density. Item 2 shows that the Fisher score between $p_t^0$ and $p_t^1$ is small if $p_0^0$ and $p_0^1$ are close.
+
+We will also establish in our main result, Theorem H.9, a lower bound for the expectation of the logarithm of the acceptance ratio in our speculative sampling setting based on (27).
+
+Time control. First, we provide an upper-bound on the Fisher score that goes to 0 as $t \to 1$ . We begin with the following result.
+
+Lemma H.1 (Convergence of Fisher score): Assume that $\int_{\mathbb{R}^d}\| x\|^2\mathrm{d}\pi_0^i (x) = C_2^i < + \infty$ for $i\in \{0,1\}$ . Then, we have that for any $t\in (0,1]$ and $i\in \{0,1\}$
+
+$$
+\int_ {\mathbb {R} ^ {d}} \| \nabla \log p _ {t} ^ {i} (x _ {t}) - \nabla \log p _ {1} (x _ {t}) \| ^ {2} p _ {t} ^ {i} (x _ {t}) \mathrm {d} x _ {t} \leq (\frac {1}{\sigma_ {t}} - \sigma_ {t}) ^ {2} d + \alpha_ {t} ^ {2} C _ {2} ^ {i},
+$$
+
+where $p_1$ is the density of $\mathcal{N}(0,\mathrm{Id})$ with respect to the Lebesgue measure. In addition, assume that $\int_{\mathbb{R}^d}\| x\|^4\mathrm{d}\pi_0^i (x) = C_4^i < + \infty$ , we have that for any $t\in (0,1]$ and $i\in \{0,1\}$
+
+$$
+\begin{array}{l} \int_ {\mathbb {R} ^ {d}} \| \nabla \log p _ {t} ^ {i} (x _ {t}) - \nabla \log p _ {1} (x _ {t}) \| ^ {4} p _ {t} ^ {i} (x _ {t}) d x _ {t} \leq 3 \left(\frac {1}{\sigma_ {t}} - \sigma_ {t}\right) ^ {4} d ^ {2} + \alpha_ {t} ^ {4} C _ {4} ^ {i} + 6 \alpha_ {t} ^ {2} \left(\frac {1}{\sigma_ {t}} - \sigma_ {t}\right) ^ {2} C _ {2} ^ {i} d \\ \leq 1 2 \left(\frac {1}{\sigma_ {t}} - \sigma_ {t}\right) ^ {4} d ^ {2} + 2 \alpha_ {t} ^ {4} C _ {4} ^ {i}. \\ \end{array}
+$$
+
+Proof. Let $i \in \{0, 1\}$ . First, using Tweedie's identity, see (Vincent, 2011) for instance, we recall that for any $t \in (0, 1)$ , we have that for any $x_{t} \in \mathbb{R}^{d}$
+
+$$
+\nabla \log p _ {t} ^ {i} (x _ {t}) = \int_ {\mathbb {R} ^ {d}} \nabla \log p _ {t | 0} (x _ {t} | x _ {0}) p _ {0 | t} ^ {i} (x _ {0} | x _ {t}) \mathrm {d} x _ {0} = \mathbb {E} [ - \mathbf {Z} / \sigma_ {t} \mid \mathbf {X} _ {t} ^ {i} = x _ {t} ],
+$$
+
+where we recall that $\mathbf{X}_t^i = \alpha_t\mathbf{X}_0^i +\sigma_t\mathbf{Z}$ , see (26). Hence, using Jensen's inequality, we have that
+
+$$
+\begin{array}{l} \int_ {\mathbb {R} ^ {d}} \| \nabla \log p _ {t} ^ {i} (x _ {t}) - \nabla \log p _ {1} (x _ {t}) \| ^ {2} p _ {t} ^ {i} (x _ {t}) \mathrm {d} x _ {t} = \mathbb {E} [ \| \mathbb {E} [ \mathbf {Z} / \sigma_ {t} - \mathbf {X} _ {t} ^ {i} | \mathbf {X} _ {t} ^ {i} ] \| ^ {2} ] \\ \leq \mathbb {E} [ \| (\frac {1}{\sigma_ {t}} - \sigma_ {t}) \mathbf {Z} - \alpha_ {t} \mathbf {X} _ {0} ^ {i} \| ^ {2} ] \\ \leq \big (\frac {1}{\sigma_ {t}} - \sigma_ {t} \big) ^ {2} \mathbb {E} [ \| \mathbf {Z} \| ^ {2} ] + \alpha_ {t} ^ {2} \mathbb {E} [ \| \mathbf {X} _ {0} ^ {i} \| ^ {2} ], \\ \end{array}
+$$
+
+where we have used that $\mathbf{X}_0^i$ and $\mathbf{Z}$ are independent. Finally, using $\mathbb{E}[\| \mathbf{Z}\|^2] = d$ , we obtained the first result. The second part of the proof is similar and left to the reader.
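For Gaussian initial laws, both sides of Lemma H.1 are explicit, which gives a quick numerical sanity check (the assumption $\pi_0 = \mathcal{N}(\mu, s^2\mathrm{Id})$ is made only for this sketch; then $\mathbf{X}_t \sim \mathcal{N}(\alpha_t\mu, v_t\mathrm{Id})$ with $v_t = \alpha_t^2 s^2 + \sigma_t^2$, and the score difference is $(1 - 1/v_t)x + \alpha_t\mu/v_t$):

```python
import numpy as np

d, s = 3, 1.3
mu = np.array([0.5, -1.0, 2.0])
C2 = d * s**2 + mu @ mu                      # second moment of pi_0

def fisher_gap(t):
    """Closed-form LHS of the lemma for Gaussian pi_0."""
    alpha = 1 - t
    v = alpha**2 * s**2 + t**2               # X_t ~ N(alpha mu, v Id)
    return (1 - 1 / v)**2 * v * d + alpha**2 * (mu @ mu)

def bound(t):
    alpha, sigma = 1 - t, t
    return (1 / sigma - sigma)**2 * d + alpha**2 * C2

ts = np.linspace(0.05, 0.95, 10)
print(all(fisher_gap(t) <= bound(t) + 1e-12 for t in ts))
print(fisher_gap(0.999) < 1e-2)              # the gap vanishes as t -> 1
```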
+
+We recall that for any $\alpha \geq 1$ the $\chi_{\alpha}$ divergence between two densities over $\mathbb{R}^d$ , $p, q$ is given by
+
+$$
+\chi_ {\alpha} (p | q) = \int_ {\mathbb {R} ^ {d}} \left(1 - \frac {p (x)}{q (x)}\right) ^ {\alpha} q (x) d x.
+$$
+
+If $\alpha = 2$ , we also have
+
+$$
+\chi_ {2} (p | q) = \int_ {\mathbb {R} ^ {d}} \frac {p (x) ^ {2}}{q (x)} \mathrm {d} x - 1. \tag {28}
+$$
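The identity (28) can be sanity-checked by Monte Carlo in a case where $\chi_2$ is explicit: for two 1-D Gaussians $\mathcal{N}(a, s^2)$ and $\mathcal{N}(b, s^2)$, a standard computation gives $\chi_2(p|q) = \exp((a-b)^2/s^2) - 1$ (our notation):

```python
import math
import random

a, b, s = 0.0, 0.5, 1.0
closed = math.exp((a - b) ** 2 / s**2) - 1

def pdf(x, m):
    return math.exp(-((x - m) ** 2) / (2 * s**2)) / (s * math.sqrt(2 * math.pi))

random.seed(0)
n = 200_000
est = 0.0
for _ in range(n):
    x = random.gauss(a, s)               # X ~ p, so E[p(X)/q(X)] = int p^2/q
    est += pdf(x, a) / pdf(x, b) / n
print(abs((est - 1) - closed) < 0.02)    # matches int p^2/q - 1 from (28)
```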
+
+In addition, we have the following useful result.
+
+Lemma H.2 ( $\chi_{\alpha}$ -data processing inequality): For any $\alpha \geq 1$ , $t \in [0,1]$ , $\chi_{\alpha}(p_t^0 | p_t^1) \leq \chi_{\alpha}(p_0^0 | p_0^1)$ .
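Lemma H.2 can be illustrated in a fully explicit case (the Gaussian assumption is made only for this sketch): if $\pi_0^i = \mathcal{N}(\mu_i, s^2\mathrm{Id})$ with equal variances, then by (26) $p_t^i = \mathcal{N}(\alpha_t\mu_i, v_t\mathrm{Id})$ with $v_t = \alpha_t^2 s^2 + \sigma_t^2$, and $\chi_2(p_t^0 | p_t^1) = \exp(\alpha_t^2\|\mu_0 - \mu_1\|^2/v_t) - 1$, which is non-increasing in $t$:

```python
import math

s, D2 = 1.0, 2.0                     # s^2 = 1 and ||mu_0 - mu_1||^2 = 2

def chi2_t(t):
    # chi_2 divergence between p_t^0 and p_t^1 in the equal-variance Gaussian case
    alpha = 1 - t
    v = alpha**2 * s**2 + t**2
    return math.exp(alpha**2 * D2 / v) - 1

vals = [chi2_t(t) for t in [0.0, 0.25, 0.5, 0.75, 1.0]]
print(all(x >= y for x, y in zip(vals, vals[1:])))   # non-increasing in t
print(vals[-1] == 0.0)                               # both laws are N(0, Id) at t = 1
```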
+
+Note that this data processing inequality is in fact valid for every $f$ -divergence with $f$ convex. Combining Lemma H.1 and Lemma H.2, we have the following result.
+
+Lemma H.3 (Convergence of modified Fisher score): Assume that $\int_{\mathbb{R}^d}\| x\|^4\mathrm{d}\pi_0^i (x) = C_4^i < + \infty$ for $i\in \{0,1\}$ . Let $C_4 = \max (C_4^0,C_4^1)$ . Then, we have that for any $t\in (0,1]$
+
+$$
+\int_ {\mathbb {R} ^ {d}} \| \nabla \log p _ {t} ^ {1} (x _ {t}) - \nabla \log p _ {1} (x _ {t}) \| ^ {2} p _ {t} ^ {0} (x _ {t}) \mathrm {d} x _ {t} \leq 4 (1 + \chi_ {2} (p _ {0} ^ {0} | p _ {0} ^ {1})) ^ {1 / 2} ((\frac {1}{\sigma_ {t}} - \sigma_ {t}) ^ {2} d + \alpha_ {t} ^ {2} C _ {4} ^ {1 / 2}),
+$$
+
+where $p_1$ is the density of $\mathcal{N}(0,\mathrm{Id})$ with respect to the Lebesgue measure.
+
+Proof. For any $t \in (0,1)$ , let $A_{t} = \int_{\mathbb{R}^{d}} \| \nabla \log p_{t}^{1}(x_{t}) - \nabla \log p_{1}(x_{t}) \|^{2} p_{t}^{0}(x_{t}) \mathrm{d}x_{t}$ . Using the Cauchy-Schwarz inequality and (28), we have that for any $t \in (0,1)$
+
+$$
+\begin{array}{l} A _ {t} ^ {2} = \left(\int_ {\mathbb {R} ^ {d}} \| \nabla \log p _ {t} ^ {1} (x _ {t}) - \nabla \log p _ {1} (x _ {t}) \| ^ {2} \frac {p _ {t} ^ {0} (x _ {t})}{p _ {t} ^ {1} (x _ {t})} p _ {t} ^ {1} (x _ {t}) \mathrm {d} x _ {t}\right) ^ {2} \\ \leq \int_ {\mathbb {R} ^ {d}} \| \nabla \log p _ {t} ^ {1} (x _ {t}) - \nabla \log p _ {1} (x _ {t}) \| ^ {4} p _ {t} ^ {1} (x _ {t}) d x _ {t} \int_ {\mathbb {R} ^ {d}} \frac {p _ {t} ^ {0} (x _ {t}) ^ {2}}{p _ {t} ^ {1} (x _ {t})} d x _ {t} \\ \leq \int_ {\mathbb {R} ^ {d}} | | \nabla \log p _ {t} ^ {1} (x _ {t}) - \nabla \log p _ {1} (x _ {t}) | | ^ {4} p _ {t} ^ {1} (x _ {t}) \mathrm {d} x _ {t} (1 + \chi_ {2} (p _ {t} ^ {0} | p _ {t} ^ {1})). \\ \end{array}
+$$
+
+We conclude upon combining Lemma H.1, Lemma H.2, the fact that $\sqrt{a + b} \leq \sqrt{a} + \sqrt{b}$ for any $a, b \geq 0$ , and that $\max(\sqrt{12}, \sqrt{2}) \leq 4$ .
+
+Finally, combining Lemma H.3 and Lemma H.1, we get the following result.
+
+Proposition H.4 (Control of Fisher score (I)): Assume that $\int_{\mathbb{R}^d}\| x\|^4\mathrm{d}\pi_0^i (x) = C_4^i < + \infty$ for $i\in \{0,1\}$ . Let $C_4 = \max (C_4^0,C_4^1)$ . Then, we have that for any $t\in (0,1]$
+
+$$
+\int_ {\mathbb {R} ^ {d}} \| \nabla \log p _ {t} ^ {0} (x _ {t}) - \nabla \log p _ {t} ^ {1} (x _ {t}) \| ^ {2} p _ {t} ^ {0} (x _ {t}) \mathrm {d} x _ {t} \leq 1 0 (1 + \chi_ {2} (p _ {0} ^ {0} | p _ {0} ^ {1})) ^ {1 / 2} ((\frac {1}{\sigma_ {t}} - \sigma_ {t}) ^ {2} d + \alpha_ {t} ^ {2} C _ {4} ^ {1 / 2}).
+$$
+
+Proof. For any $t \in (0,1)$ , we have that
+
+$$
+\begin{array}{l} \int_ {\mathbb {R} ^ {d}} \left\| \nabla \log p _ {t} ^ {0} (x _ {t}) - \nabla \log p _ {t} ^ {1} (x _ {t}) \right\| ^ {2} p _ {t} ^ {0} (x _ {t}) \mathrm {d} x _ {t} \\ \leq 2 \int_ {\mathbb {R} ^ {d}} \| \nabla \log p _ {t} ^ {0} (x _ {t}) - \nabla \log p _ {1} (x _ {t}) \| ^ {2} p _ {t} ^ {0} (x _ {t}) d x _ {t} \\ + 2 \int_ {\mathbb {R} ^ {d}} \| \nabla \log p _ {t} ^ {1} (x _ {t}) - \nabla \log p _ {1} (x _ {t}) \| ^ {2} p _ {t} ^ {0} (x _ {t}) \mathrm {d} x _ {t}. \\ \end{array}
+$$
+
+We conclude upon combining Lemma H.1 and Lemma H.3.
+
+In particular, Proposition H.4 shows that $\lim_{t\to 1}\int_{\mathbb{R}^d}\| \nabla \log p_t^0 (x_t) - \nabla \log p_t^1 (x_t)\| ^2 p_t^0 (x_t)\mathrm{d}x_t = 0$ .
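This limit is explicit for Gaussian initial laws (an assumption made only for this sketch): if $\pi_0^i = \mathcal{N}(\mu_i, s^2\mathrm{Id})$, the score difference at time $t$ is the constant vector $\alpha_t(\mu_0 - \mu_1)/v_t$ with $v_t = \alpha_t^2 s^2 + \sigma_t^2$, so the left-hand side equals $\alpha_t^2\|\mu_0 - \mu_1\|^2/v_t^2$, which vanishes as $t \to 1$:

```python
s2, D2 = 1.0, 4.0          # s^2 and ||mu_0 - mu_1||^2, illustrative values

def lhs(t):
    # closed-form Fisher gap between p_t^0 and p_t^1 in the Gaussian case
    alpha = 1 - t
    v = alpha**2 * s2 + t**2
    return alpha**2 * D2 / v**2

print(lhs(0.999) < 1e-4, lhs(1.0) == 0.0)
```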
+
+Measure control. We now provide a control on the Fisher score that depends on some divergence between the measures $\pi_0^0$ and $\pi_0^1$ . We first recall a useful result on the score which can be found, for instance, in (De Bortoli et al., 2024).
+
+Lemma H.5 (Target Score Identity): Assume that for any $i \in \{0,1\}$ , $p_0^i \in \mathrm{C}^1(\mathbb{R}^d, \mathbb{R})$ and for any $t \in [0,1]$ and $x_t \in \mathbb{R}^d$ , $\int_{\mathbb{R}^d} \| \nabla \log p_0^i(x_0) \| p_{0|t}^i(x_0|x_t) \, \mathrm{d}x_0 < +\infty$ . Then, we have that for any $i \in \{0,1\}$ , $t \in [0,1)$ and $x_t \in \mathbb{R}^d$
+
+$$
+\nabla \log p _ {t} ^ {i} (x _ {t}) = \frac {1}{\alpha_ {t}} \int_ {\mathbb {R} ^ {d}} \nabla \log p _ {0} ^ {i} (x _ {0}) p _ {0 | t} ^ {i} (x _ {0} | x _ {t}) \mathrm {d} x _ {0}.
+$$
+
+Next, we show the following result.
+
+Lemma H.6 (Posterior control): We have that for any $\alpha \geq 2$ , $\alpha$ even and $t \in (0,1)$
+
+$$
+\int_ {\mathbb {R} ^ {d}} \chi_ {\alpha} (p _ {0 | t} ^ {1} (x _ {0} | x _ {t}) | p _ {0 | t} ^ {0} (x _ {0} | x _ {t})) p _ {t} ^ {0} (x _ {t}) \mathrm {d} x _ {t} \leq D _ {0, \alpha} (\chi_ {4 \alpha} (p _ {0} ^ {0} | p _ {0} ^ {1}) ^ {1 / 4} + \chi_ {4 \alpha} (p _ {0} ^ {1} | p _ {0} ^ {0}) ^ {1 / 4}),
+$$
+
+with
+
+$$
+D _ {0, \alpha} \leq 2 ^ {2 \alpha - \frac {3}{2}} (1 + \chi_ {2 \alpha} (p _ {0} ^ {1} | p _ {0} ^ {0})) ^ {1 / 2} (1 + \chi_ {2} (p _ {0} ^ {0} | p _ {0} ^ {1})) ^ {1 / 4}.
+$$
+
+Proof. First, we have that for any $\alpha \geq 2$ , $\alpha$ even and $t \in (0,1)$
+
+$$
+\begin{array}{l} \int_ {\mathbb {R} ^ {d}} \chi_ {\alpha} \left(p _ {0 | t} ^ {1} \left(x _ {0} \mid x _ {t}\right) \mid p _ {0 | t} ^ {0} \left(x _ {0} \mid x _ {t}\right)\right) p _ {t} ^ {0} \left(x _ {t}\right) \mathrm {d} x _ {t} = \int_ {\mathbb {R} ^ {d} \times \mathbb {R} ^ {d}} \left(1 - \frac {p _ {0 | t} ^ {1} \left(x _ {0} \mid x _ {t}\right)}{p _ {0 | t} ^ {0} \left(x _ {0} \mid x _ {t}\right)}\right) ^ {\alpha} p _ {0, t} ^ {0} \left(x _ {0}, x _ {t}\right) \mathrm {d} x _ {0} \mathrm {d} x _ {t} \\ = \int_ {\mathbb {R} ^ {d} \times \mathbb {R} ^ {d}} \left(1 - \frac {p _ {0} ^ {1} (x _ {0}) p _ {t} ^ {0} (x _ {t})}{p _ {0} ^ {0} (x _ {0}) p _ {t} ^ {1} (x _ {t})}\right) ^ {\alpha} p _ {0, t} ^ {0} (x _ {0}, x _ {t}) \mathrm {d} x _ {0} \mathrm {d} x _ {t} \\ \leq 2 ^ {\alpha - 1} \int_ {\mathbb {R} ^ {d}} \left(1 - \frac {p _ {0} ^ {1} (x _ {0})}{p _ {0} ^ {0} (x _ {0})}\right) ^ {\alpha} p _ {0} ^ {0} (x _ {0}) \mathrm {d} x _ {0} \\ + 2 ^ {\alpha - 1} \int_ {\mathbb {R} ^ {d} \times \mathbb {R} ^ {d}} \left(\frac {p _ {0} ^ {1} (x _ {0})}{p _ {0} ^ {0} (x _ {0})}\right) ^ {\alpha} \left(1 - \frac {p _ {t} ^ {0} (x _ {t})}{p _ {t} ^ {1} (x _ {t})}\right) ^ {\alpha} p _ {0, t} ^ {0} (x _ {0}, x _ {t}) \mathrm {d} x _ {0} \mathrm {d} x _ {t} \\ = 2 ^ {\alpha - 1} \chi_ {\alpha} \left(p _ {0} ^ {1} \mid p _ {0} ^ {0}\right) + 2 ^ {\alpha - 1} \int_ {\mathbb {R} ^ {d} \times \mathbb {R} ^ {d}} \left(\frac {p _ {0} ^ {1} \left(x _ {0}\right)}{p _ {0} ^ {0} \left(x _ {0}\right)}\right) ^ {\alpha} \left(1 - \frac {p _ {t} ^ {0} \left(x _ {t}\right)}{p _ {t} ^ {1} \left(x _ {t}\right)}\right) ^ {\alpha} p _ {0, t} ^ {0} \left(x _ {0}, x _ {t}\right) \mathrm {d} x _ {0} \mathrm {d} x _ {t}. \tag {29} \\ \end{array}
+$$
+
+Next, we note that for any $\beta \geq 1$ , $\beta$ even and densities $p, q$
+
+$$
+\int_ {\mathbb {R} ^ {d}} \left(\frac {q (x)}{p (x)}\right) ^ {\beta} p (x) d x \leq 2 ^ {\beta - 1} (1 + \chi_ {\beta} (q | p)).
+$$
+
+Using this result and (29), we have that
+
+$$
+\begin{array}{l} \int_ {\mathbb {R} ^ {d}} \chi_ {\alpha} \left(p _ {0 | t} ^ {1} \left(x _ {0} \mid x _ {t}\right) \mid p _ {0 | t} ^ {0} \left(x _ {0} \mid x _ {t}\right)\right) p _ {t} ^ {0} \left(x _ {t}\right) \mathrm {d} x _ {t} \\ \leq 2 ^ {\alpha - 1} \chi_ {\alpha} (p _ {0} ^ {1} | p _ {0} ^ {0}) + 2 ^ {\alpha - 1} \int_ {\mathbb {R} ^ {d} \times \mathbb {R} ^ {d}} \left(\frac {p _ {0} ^ {1} (x _ {0})}{p _ {0} ^ {0} (x _ {0})}\right) ^ {\alpha} \left(1 - \frac {p _ {t} ^ {0} (x _ {t})}{p _ {t} ^ {1} (x _ {t})}\right) ^ {\alpha} p _ {0, t} ^ {0} (x _ {0}, x _ {t}) \mathrm {d} x _ {0} \mathrm {d} x _ {t} \\ \leq 2 ^ {\alpha - 1} \chi_ {\alpha} (p _ {0} ^ {1} | p _ {0} ^ {0}) + 2 ^ {\alpha - 1} 2 ^ {\frac {2 \alpha - 1}{2}} \left(1 + \chi_ {2 \alpha} \left(p _ {0} ^ {1} \mid p _ {0} ^ {0}\right)\right) ^ {1 / 2} \left(\int_ {\mathbb {R} ^ {d}} \left(1 - \frac {p _ {t} ^ {0} \left(x _ {t}\right)}{p _ {t} ^ {1} \left(x _ {t}\right)}\right) ^ {2 \alpha} p _ {t} ^ {0} \left(x _ {t}\right) \mathrm {d} x _ {t}\right) ^ {1 / 2} \\ \leq 2 ^ {\alpha - 1} \chi_ {\alpha} \left(p _ {0} ^ {1} \mid p _ {0} ^ {0}\right) + 2 ^ {2 \alpha - \frac {3}{2}} \left(1 + \chi_ {2 \alpha} \left(p _ {0} ^ {1} \mid p _ {0} ^ {0}\right)\right) ^ {1 / 2} \left(\int_ {\mathbb {R} ^ {d}} \left(1 - \frac {p _ {t} ^ {0} \left(x _ {t}\right)}{p _ {t} ^ {1} \left(x _ {t}\right)}\right) ^ {2 \alpha} \frac {p _ {t} ^ {0} \left(x _ {t}\right)}{p _ {t} ^ {1} \left(x _ {t}\right)} p _ {t} ^ {1} \left(x _ {t}\right) \mathrm {d} x _ {t}\right) ^ {1 / 2} \\ \leq 2 ^ {\alpha - 1} \chi_ {\alpha} (p _ {0} ^ {1} | p _ {0} ^ {0}) + 2 ^ {2 \alpha - \frac {3}{2}} (1 + \chi_ {2 \alpha} (p _ {0} ^ {1} | p _ {0} ^ {0})) ^ {1 / 2} (1 + \chi_ {2} (p _ {t} ^ {0} | p _ {t} ^ {1})) ^ {1 / 4} \chi_ {4 \alpha} (p _ {t} ^ {0} | p _ {t} ^ {1}) ^ {1 / 4} \\ \leq 2 ^ {\alpha - 1} \chi_ {\alpha} (p _ {0} ^ {1} | p _ {0} ^ {0}) + 2 ^ {2 \alpha - \frac {3}{2}} (1 + \chi_ {2 \alpha} (p _ {0} ^ {1} | p _ {0} ^ {0})) ^ {1 / 2} (1 + \chi_ {2} (p _ {0} ^ {0} | p _ {0} ^ {1})) ^ {1 / 4} \chi_ {4 \alpha} (p _ {0} ^ {0} | p _ {0} ^ {1}) ^ {1 / 4} \\ \leq 2 ^ {\alpha} \chi_ {4 \alpha} (p _ {0} ^ {1} | p _ {0} ^ {0}) ^ {1 / 4} + 2 ^ {2 \alpha - \frac {3}{2}} (1 + \chi_ {2 \alpha} (p _ {0} ^ {1} | p _ {0} ^ {0})) ^ {1 / 2} (1 + \chi_ {2} (p _ {0} ^ {0} | p _ {0} ^ {1})) ^ {1 / 4} \chi_ {4 \alpha} (p _ {0} ^ {0} | p _ {0} ^ {1}) ^ {1 / 4} \\ \leq 2 ^ {2 \alpha - \frac {3}{2}} (1 + \chi_ {2 \alpha} (p _ {0} ^ {1} | p _ {0} ^ {0})) ^ {1 / 2} (1 + \chi_ {2} (p _ {0} ^ {0} | p _ {0} ^ {1})) ^ {1 / 4} (\chi_ {4 \alpha} (p _ {0} ^ {0} | p _ {0} ^ {1}) ^ {1 / 4} + \chi_ {4 \alpha} (p _ {0} ^ {1} | p _ {0} ^ {0}) ^ {1 / 4}), \\ \end{array}
+$$
+
+where we used the data processing inequality. This concludes the proof.
+
+Finally, for ease of notation, we introduce for any $t\in [0,1]$
+
+$$
+\operatorname {F I} \left(p _ {t} ^ {0} \mid p _ {t} ^ {1}\right) = \int_ {\mathbb {R} ^ {d}} \| \nabla \log p _ {t} ^ {0} (x _ {t}) - \nabla \log p _ {t} ^ {1} (x _ {t}) \| ^ {2} p _ {t} ^ {0} (x _ {t}) \mathrm {d} x _ {t}.
+$$
+
+We obtain the following result.
+
+Proposition H.7 (Control of Fisher score (II)): Assume that for any $i \in \{0,1\}$ , $p_0^i \in \mathrm{C}^1(\mathbb{R}^d, \mathbb{R})$ and for any $t \in [0,1]$ and $x_t \in \mathbb{R}^d$ , $\int_{\mathbb{R}^d} \| \nabla \log p_0^i(x_0) \| p_{0|t}^i(x_0|x_t) \, \mathrm{d}x_0 < +\infty$ . In addition, assume that for any $i \in \{0,1\}$ , $\int_{\mathbb{R}^d} \| \nabla \log p_0^i(x_0) \|^4 (p_0^0(x_0) + p_1(x_0)) \, \mathrm{d}x_0 = D_4^i < +\infty$ . Then for any $t \in [0,1)$ , we have
+
+$$
+\int_ {\mathbb {R} ^ {d}} \| \nabla \log p _ {t} ^ {0} (x _ {t}) - \nabla \log p _ {t} ^ {1} (x _ {t}) \| ^ {2} p _ {t} ^ {0} (x _ {t}) \mathrm {d} x _ {t} \leq \frac {2 D}{\alpha_ {t} ^ {2}} (\operatorname {F I} (p _ {0} ^ {0} | p _ {0} ^ {1}) + \chi_ {1 6} (p _ {0} ^ {1} | p _ {0} ^ {0}) ^ {1 / 8} + \chi_ {1 6} (p _ {0} ^ {0} | p _ {0} ^ {1}) ^ {1 / 8}),
+$$
+
+where $D$ is explicit in the proof.
+
+Proof. For any $t \in (0,1)$ , let $A_{t} = \int_{\mathbb{R}^{d}} \| \nabla \log p_{t}^{0}(x_{t}) - \nabla \log p_{t}^{1}(x_{t}) \|^{2} p_{t}^{0}(x_{t}) \mathrm{d} x_{t}$ . Using Lemma H.5, we have that for any $t \in (0,1)$
+
+$$
+A _ {t} = \frac {1}{\alpha_ {t} ^ {2}} \int_ {\mathbb {R} ^ {d}} \left\| \int_ {\mathbb {R} ^ {d}} \nabla \log p _ {0} ^ {0} (x _ {0}) p _ {0 | t} ^ {0} (x _ {0} | x _ {t}) \mathrm {d} x _ {0} - \int_ {\mathbb {R} ^ {d}} \nabla \log p _ {0} ^ {1} (x _ {0}) p _ {0 | t} ^ {1} (x _ {0} | x _ {t}) \mathrm {d} x _ {0} \right\| ^ {2} p _ {t} ^ {0} (x _ {t}) \mathrm {d} x _ {t}.
+$$
+
+Hence, for any $t \in (0,1)$ , $A_{t} \leq \frac{2}{\alpha_{t}^{2}} (A_{t}^{1} + A_{t}^{2})$ with
+
+$$
+A _ {t} ^ {1} = \int_ {\mathbb {R} ^ {d}} \left\| \int_ {\mathbb {R} ^ {d}} \nabla \log p _ {0} ^ {0} (x _ {0}) p _ {0 | t} ^ {0} (x _ {0} | x _ {t}) \mathrm {d} x _ {0} - \int_ {\mathbb {R} ^ {d}} \nabla \log p _ {0} ^ {1} (x _ {0}) p _ {0 | t} ^ {0} (x _ {0} | x _ {t}) \mathrm {d} x _ {0} \right\| ^ {2} p _ {t} ^ {0} (x _ {t}) \mathrm {d} x _ {t},
+$$
+
+$$
+A _ {t} ^ {2} = \int_ {\mathbb {R} ^ {d}} \left\| \int_ {\mathbb {R} ^ {d}} \nabla \log p _ {0} ^ {1} (x _ {0}) p _ {0 | t} ^ {0} (x _ {0} | x _ {t}) \mathrm {d} x _ {0} - \int_ {\mathbb {R} ^ {d}} \nabla \log p _ {0} ^ {1} (x _ {0}) p _ {0 | t} ^ {1} (x _ {0} | x _ {t}) \mathrm {d} x _ {0} \right\| ^ {2} p _ {t} ^ {0} (x _ {t}) \mathrm {d} x _ {t}.
+$$
+
+Using Jensen's inequality, we have that for any $t \in (0,1)$
+
+$$
+A_{t}^{1} \leq \int_{\mathbb{R}^{d}} \| \nabla \log p_{0}^{0}(x_{0}) - \nabla \log p_{0}^{1}(x_{0}) \|^{2} p_{0}^{0}(x_{0}) \, \mathrm{d}x_{0}. \tag{30}
+$$
+
+Second, using Jensen's inequality, the Cauchy-Schwarz inequality, and Lemma H.6, we obtain
+
+$$
+\begin{array}{l} A_{t}^{2} \leq \int_{\mathbb{R}^{d} \times \mathbb{R}^{d}} \| \nabla \log p_{0}^{1}(x_{0}) \|^{2} \left(1 - \frac{p_{0|t}^{1}(x_{0}|x_{t})}{p_{0|t}^{0}(x_{0}|x_{t})}\right)^{2} p_{0,t}^{0}(x_{0}, x_{t}) \, \mathrm{d}x_{0} \mathrm{d}x_{t} \\ \quad \leq \left(\int_{\mathbb{R}^{d}} \| \nabla \log p_{0}^{1}(x_{0}) \|^{4} p_{0}^{0}(x_{0}) \, \mathrm{d}x_{0}\right)^{1/2} \left(\int_{\mathbb{R}^{d} \times \mathbb{R}^{d}} \left(1 - \frac{p_{0|t}^{1}(x_{0}|x_{t})}{p_{0|t}^{0}(x_{0}|x_{t})}\right)^{4} p_{0,t}^{0}(x_{0}, x_{t}) \, \mathrm{d}x_{0} \mathrm{d}x_{t}\right)^{1/2} \\ \quad \leq D_{0,4}^{1/2} \left(\int_{\mathbb{R}^{d}} \| \nabla \log p_{0}^{1}(x_{0}) \|^{4} p_{0}^{0}(x_{0}) \, \mathrm{d}x_{0}\right)^{1/2} \left(\chi_{16}(p_{0}^{1}|p_{0}^{0})^{1/8} + \chi_{16}(p_{0}^{0}|p_{0}^{1})^{1/8}\right). \end{array}
+$$
+
+Combining this result and (30) concludes the proof with $D = 2(1 + D_{0,4}^{1/2}\max(D_4^0, D_4^1))$ .
+
+Finally, combining Proposition H.4 and Proposition H.7, we get the following proposition.
+
+Proposition H.8 (Control of Fisher score (III)): Assume that for any $i \in \{0,1\}$ , $p_0^i \in \mathrm{C}^1(\mathbb{R}^d, \mathbb{R})$ and for any $t \in [0,1]$ and $x_t \in \mathbb{R}^d$ , $\int_{\mathbb{R}^d} \| \nabla \log p_0^i(x_0) \| p_{0|t}^i(x_0|x_t) \, \mathrm{d}x_0 < +\infty$ . In addition, assume that for any $i \in \{0,1\}$ , $\int_{\mathbb{R}^d} \| \nabla \log p_0^i(x_0) \|^4 (p_0^0(x_0) + p_0^1(x_0)) \, \mathrm{d}x_0 = D_4^i < +\infty$ . Assume that $\int_{\mathbb{R}^d} \| x \|^4 \, \mathrm{d}\pi_0^i(x) = C_4^i < +\infty$ for $i \in \{0,1\}$ . Then, we have for any $t \in (0,1)$
+
+$$
+\begin{array}{l} \int_{\mathbb{R}^{d}} \| \nabla \log p_{t}^{0}(x_{t}) - \nabla \log p_{t}^{1}(x_{t}) \|^{2} p_{t}^{0}(x_{t}) \, \mathrm{d}x_{t} \\ \quad \leq C \min \left( \left(\frac{1}{\sigma_{t}} - \sigma_{t}\right)^{2} + \alpha_{t}^{2}, \; \frac{1}{\alpha_{t}^{2}} \left(\operatorname{FI}(p_{0}^{0} | p_{0}^{1}) + \chi_{16}(p_{0}^{1} | p_{0}^{0})^{1/8} + \chi_{16}(p_{0}^{0} | p_{0}^{1})^{1/8}\right) \right), \end{array}
+$$
+
+where $C\geq 0$ can be made explicit.
+
+Control of acceptance ratio. We now provide a lower bound on the expectation of the logarithm of the acceptance ratio in the speculative sampling framework. We consider a discretization of the interval $[0,1]$ given by $K\in \mathbb{N}$ and $t_k = k / K$ , and let $\gamma = 1 / K$ . We consider the target model given for any $k\in \{0,\dots,K - 1\}$ by
+
+$$
+Y_{k+1}^{t} = Y_{k}^{t} + \gamma \left\{ -f_{1-t_{k}} Y_{k}^{t} + \frac{1+\varepsilon^{2}}{2} g_{1-t_{k}}^{2} \nabla \log p_{1-t_{k}}^{0}(Y_{k}^{t}) \right\} + \sqrt{\gamma}\, g_{1-t_{k}} Z_{k}^{t}, \qquad Y_{0}^{t} \sim \mathcal{N}(0, \mathrm{Id}),
+$$
+
+where $(Z_k^t)_{k\in \mathbb{N}}\stackrel {\mathrm{i.i.d.}}{\sim}\mathcal{N}(0,\mathrm{Id})$ . We now fix $k_0\in \{0,\dots ,K - 1\}$ , $L\in \mathbb{N}$ , $k_{L} = \min (K - 1,k_{0} + L - 1)$ and consider the associated draft model given for any $k\in \{k_0,\dots,k_L\}$ by
+
+$$
+Y_{k+1}^{d} = Y_{k}^{d} + \gamma \left\{ -f_{1-t_{k}} Y_{k}^{d} + \frac{1+\varepsilon^{2}}{2} g_{1-t_{k}}^{2} \nabla \log p_{1-t_{k}}^{1}(Y_{k}^{d}) \right\} + \sqrt{\gamma}\, g_{1-t_{k}} Z_{k}^{d}, \qquad Y_{k_{0}}^{d} = Y_{k_{0}}^{t},
+$$
+
+where $(Z_k^d)_{k\in \mathbb{N}}\stackrel {\mathrm{i.i.d.}}{\sim}\mathcal{N}(0,\mathrm{Id})$ .
+
+The step $k_{0} + 1$ is accepted if $U \leq \min(1, \mathcal{N}(Z_{k_{0}}^{t} + \Delta_{k_{0}}, \mathrm{Id}) / \mathcal{N}(Z_{k_{0}}^{t}, \mathrm{Id}))$ with
+
+$$
+\| \Delta_{k_{0}} \|^{2} = \frac{1}{4} \gamma \left(\varepsilon + \frac{1}{\varepsilon}\right)^{2} g_{1-t_{k_{0}}}^{2} \| \nabla \log p_{1-t_{k_{0}}}^{1}(Y_{k_{0}}^{t}) - \nabla \log p_{1-t_{k_{0}}}^{0}(Y_{k_{0}}^{t}) \|^{2}.
+$$
+
+So we obtain
+
+$$
+\mathbb{E}[\log(a_{k_{0}})] = -\frac{1}{2} \mathbb{E}[\| \Delta_{k_{0}} \|^{2}].
+$$
+
+Combining this result with the previous Proposition, we obtain the following result.
+
+Theorem H.9 (Control of log-acceptance ratio): Assume that for any $i \in \{0,1\}$ , $p_0^i \in \mathrm{C}^1(\mathbb{R}^d, \mathbb{R})$ and for any $t \in [0,1]$ and $x_t \in \mathbb{R}^d$ , $\int_{\mathbb{R}^d} \| \nabla \log p_0^i(x_0) \| p_{0|t}^i(x_0|x_t) \, \mathrm{d}x_0 < +\infty$ . In addition, assume that for any $i \in \{0,1\}$ , $\int_{\mathbb{R}^d} \| \nabla \log p_0^i(x_0) \|^4 (p_0^0(x_0) + p_0^1(x_0)) \, \mathrm{d}x_0 = D_4^i < +\infty$ and that $\int_{\mathbb{R}^d} \| x \|^4 \, \mathrm{d}\pi_0^i(x) = C_4^i < +\infty$ for $i \in \{0,1\}$ . Finally, assume that $Y_{k_0}^t \sim p_{1 - t_{k_0}}$ . Then
+
+$$
+\begin{array}{l} \mathbb{E}[\log(a_{k_{0}})] \\ \quad \geq -\frac{C}{8} \left(\varepsilon + \frac{1}{\varepsilon}\right)^{2} \gamma g_{s_{0}}^{2} \min \left( \left(\frac{1}{\sigma_{s_{0}}} - \sigma_{s_{0}}\right)^{2} + \alpha_{s_{0}}^{2}, \; \frac{1}{\alpha_{s_{0}}^{2}} \left(\operatorname{FI}(p_{0}^{0} | p_{0}^{1}) + \chi_{16}(p_{0}^{1} | p_{0}^{0})^{1/8} + \chi_{16}(p_{0}^{0} | p_{0}^{1})^{1/8}\right) \right), \end{array}
+$$
+
+where $s_0 = 1 - t_{k_0}$ and $C \geq 0$ is explicit in the proof.
+
+The final result is obtained by using Jensen's inequality, i.e., $\mathbb{E}[a_{k_0}] \geq \exp[\mathbb{E}[\log(a_{k_0})]]$ .
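+As a quick numerical sanity check of the identity $\mathbb{E}[\log(a_{k_0})] = -\frac{1}{2}\mathbb{E}[\|\Delta_{k_0}\|^2]$ (dropping the $\min$ in the acceptance ratio), one can verify by Monte Carlo that for a fixed shift $\Delta$ and $Z \sim \mathcal{N}(0,\mathrm{Id})$ the expected Gaussian log-density ratio equals $-\|\Delta\|^2/2$. A minimal sketch in NumPy, where the dimension and the shift are arbitrary illustrative choices:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+d = 4
+delta = 0.3 * rng.normal(size=d)      # fixed shift Delta (arbitrary)
+Z = rng.normal(size=(200_000, d))     # Z ~ N(0, Id)
+
+# log N(Z + Delta; 0, Id) - log N(Z; 0, Id) = -(Z . Delta + ||Delta||^2 / 2)
+log_ratio = -(Z @ delta + 0.5 * delta @ delta)
+
+empirical = log_ratio.mean()
+exact = -0.5 * delta @ delta          # claimed expectation of the log-ratio
+```
+
+The empirical average matches $-\|\Delta\|^2/2$ up to Monte Carlo error.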
+
+Let us interpret Theorem H.9. We aim at maximizing $\log (a_{k_0})$ since a high acceptance ratio yields a lower computational cost of the speculative sampling method. We here give a lower bound on its expectation. There are different factors that influence this bound:
+
+- $\gamma \rightarrow 0$ yields $\mathbb{E}[\log (a_{k_0})] \geq 0$ . Hence a small discretization step is associated with better acceptance of the method. However, we emphasize that a small discretization step also gives a larger total number of steps. Hence the benefits of reducing the stepsize must be weighed against the additional computational requirement of running the speculative procedure for a larger number of iterations.
+- If $p_0^0 \to p_0^1$ (in Fisher and $\chi_{16}$ divergence) then $\mathbb{E}[\log(a_{k_0})] \geq 0$ . This means that if, during speculative sampling, the two models target similar distributions, then we obtain a higher acceptance rate. This remark is verified empirically in ... and echoes similar findings in LLMs (Cai et al., 2024).
+- If $g_{t}^{2}((\frac{1}{\sigma_{t}} - \sigma_{t})^{2} + \alpha_{t}^{2}) \to 0$ as $t \to 1$ then $\mathbb{E}[\log (a_{k_0})] \geq 0$ . Hence, in that case, for low values of $k_0$ , i.e., at the beginning of the denoising process, the acceptance rate is high. This observation is also confirmed empirically and is specific to the diffusion model setting.
+
+In our setting, i.e., for $\alpha_t = 1 - t$ and $\sigma_t = t$ , we have
+
+$$
+g_{t}^{2} \left( \left(\frac{1}{\sigma_{t}} - \sigma_{t}\right)^{2} + \alpha_{t}^{2} \right) = 2t(1-t)\left(1 + (1 + 1/t)^{2}\right).
+$$
+
+Hence $\lim_{t\to 1}g_t^2 ((\frac{1}{\sigma_t} -\sigma_t)^2 +\alpha_t^2) = 0.$
+
+We plot $A(t) = g_t^2\left(\left(\frac{1}{\sigma_t} - \sigma_t\right)^2 + \alpha_t^2\right)$ in Figure 6.
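+The closed form above can be checked numerically by computing $g_t^2$ with finite differences from its definition $g_t^2 = 2\alpha_t\sigma_t\,\partial_t(\sigma_t/\alpha_t)$. A small sketch for the linear schedule $\alpha_t = 1-t$, $\sigma_t = t$:
+
+```python
+import numpy as np
+
+def alpha(t): return 1.0 - t
+def sigma(t): return t
+
+def g_sq(t, h=1e-6):
+    # g_t^2 = 2 alpha_t sigma_t d/dt (sigma_t / alpha_t), via central differences
+    ratio = lambda s: sigma(s) / alpha(s)
+    return 2.0 * alpha(t) * sigma(t) * (ratio(t + h) - ratio(t - h)) / (2.0 * h)
+
+def A(t):
+    return g_sq(t) * ((1.0 / sigma(t) - sigma(t)) ** 2 + alpha(t) ** 2)
+
+def A_closed(t):
+    return 2.0 * t * (1.0 - t) * (1.0 + (1.0 + 1.0 / t) ** 2)
+
+ts = np.linspace(0.05, 0.95, 19)
+max_rel_err = max(abs(A(t) - A_closed(t)) / A_closed(t) for t in ts)
+```
+
+The two expressions agree to finite-difference accuracy, and the closed form makes the limit $A(t) \to 0$ as $t \to 1$ apparent.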
+
+# I. Experimental details
+
+In this section, we provide details about our experimental setup in Appendix I.1. Our setting for the low dimensional Gaussian Mixture Models (GMMs) is described in Appendix I.2. Similarly, our setting for the image experiments is given in Appendix I.3.
+
+
+Figure 6. The value of $A(t)$ as a function of $t$ for $\alpha_{t} = 1 - t$ and $\sigma_{t} = t$ (blue). The value of $A(t)$ for $\alpha_{t} = \cos((\pi / 2)t)$ and $\sigma_{t} = \sin((\pi / 2)t)$ (orange).
+
+# I.1. Experiment setting.
+
+In our setting, we consider the stochastic interpolant framework (Albergo et al., 2023) for greater flexibility. Namely, we consider a noising interpolant given by
+
+$$
+\mathbf {X} _ {t} = \alpha_ {t} \mathbf {X} _ {0} + \sigma_ {t} \mathbf {X} _ {1}, \quad \mathbf {X} _ {0} \sim \pi_ {0}, \mathbf {X} _ {1} \sim \mathcal {N} (0, \mathrm {I d}), \tag {31}
+$$
+
+where $t \mapsto \alpha_t$ is a non-increasing function and $t \mapsto \sigma_t$ is a non-decreasing function so that $\alpha_1 = 0$ , $\sigma_1 = 1$ . The interpolation (31) can be associated with the following forward process
+
+$$
+\mathrm{d}\mathbf{X}_{t} = f_{t} \mathbf{X}_{t} \, \mathrm{d}t + g_{t} \, \mathrm{d}\mathbf{B}_{t}, \quad \mathbf{X}_{0} \sim \pi_{0}, \tag{32}
+$$
+
+where for any $t\in (0,1)$
+
+$$
+f _ {t} = \partial_ {t} \log (\alpha_ {t}), \qquad g _ {t} ^ {2} = 2 \alpha_ {t} \sigma_ {t} \partial_ {t} (\sigma_ {t} / \alpha_ {t}).
+$$
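+As a concrete instance, the cosine schedule $\alpha_t = \cos((\pi/2)t)$, $\sigma_t = \sin((\pi/2)t)$ shown in Figure 6 gives $f_t = -(\pi/2)\tan((\pi/2)t)$ and $g_t^2 = \pi\tan((\pi/2)t)$. A quick finite-difference check of these closed forms against the defining formulas:
+
+```python
+import numpy as np
+
+# Cosine schedule: alpha_t = cos((pi/2) t), sigma_t = sin((pi/2) t)
+alpha = lambda t: np.cos(0.5 * np.pi * t)
+sigma = lambda t: np.sin(0.5 * np.pi * t)
+
+def f_closed(t):
+    return -(np.pi / 2.0) * np.tan(0.5 * np.pi * t)
+
+def g_sq_closed(t):
+    return np.pi * np.tan(0.5 * np.pi * t)
+
+def f_num(t, h=1e-6):
+    # f_t = d/dt log(alpha_t), via central differences
+    return (np.log(alpha(t + h)) - np.log(alpha(t - h))) / (2.0 * h)
+
+def g_sq_num(t, h=1e-6):
+    # g_t^2 = 2 alpha_t sigma_t d/dt (sigma_t / alpha_t)
+    ratio = lambda s: sigma(s) / alpha(s)
+    return 2.0 * alpha(t) * sigma(t) * (ratio(t + h) - ratio(t - h)) / (2.0 * h)
+
+ts = np.linspace(0.1, 0.9, 9)
+f_err = max(abs(f_num(t) - f_closed(t)) for t in ts)
+g_sq_err = max(abs(g_sq_num(t) - g_sq_closed(t)) for t in ts)
+```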
+
+The time-reversal of the noising process (32) is given by $(\mathbf{Y}_t)_{t\in [0,1]}$ which satisfies
+
+$$
+\mathrm {d} \mathbf {Y} _ {t} = \left\{- f _ {1 - t} \mathbf {Y} _ {t} + g _ {1 - t} ^ {2} \nabla \log p _ {1 - t} (\mathbf {Y} _ {t}) \right\} \mathrm {d} t + g _ {1 - t} \mathrm {d} \mathbf {B} _ {t}, \quad \mathbf {Y} _ {0} \sim p _ {1} \tag {33}
+$$
+
+where $p_t$ is the density of $\mathbf{X}_t$ with respect to the Lebesgue measure. In practice, we do not typically know $p_1$ and let $\mathbf{Y}_0 \sim \mathcal{N}(0, \sigma_1^2\mathrm{Id})$ . For a given hyperparameter $\varepsilon > 0$ , one can also consider
+
+$$
+\mathrm {d} \mathbf {Y} _ {t} = \left\{- f _ {1 - t} \mathbf {Y} _ {t} + \frac {1}{2} \left(1 + \varepsilon^ {2}\right) g _ {1 - t} ^ {2} \nabla \log p _ {1 - t} (\mathbf {Y} _ {t}) \right\} \mathrm {d} t + \varepsilon g _ {1 - t} \mathrm {d} \mathbf {B} _ {t}, \tag {34}
+$$
+
+which has the same marginals as (33). This can also be rewritten as
+
+$$
+\mathrm {d} \mathbf {Y} _ {t} = \left\{- v _ {1 - t} \left(\mathbf {Y} _ {t}\right) + \frac {1}{2} \varepsilon^ {2} g _ {1 - t} ^ {2} \nabla \log p _ {1 - t} \left(\mathbf {Y} _ {t}\right) \right\} \mathrm {d} t + \varepsilon g _ {1 - t} \mathrm {d} \mathbf {B} _ {t}, \tag {35}
+$$
+
+where the so-called velocity $v_{t}$ is given by
+
+$$
+v _ {t} (x) = \mathbb {E} [ \partial_ {t} \alpha_ {t} \mathbf {X} _ {0} + \partial_ {t} \sigma_ {t} \mathbf {X} _ {1} \mid \mathbf {X} _ {t} = x ].
+$$
+
+Upon combining (34) and (35), we have that for any $t \in (0,1)$
+
+$$
+\nabla \log p _ {t} (x) = - \mathbb {E} [ \mathbf {X} _ {1} \mid \mathbf {X} _ {t} = x ] / \sigma_ {t} = \frac {2}{g _ {t} ^ {2}} \left(f _ {t} x - v _ {t} (x)\right). \tag {36}
+$$
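+Relation (36) can be sanity-checked in a fully Gaussian toy case where both sides are available in closed form. The sketch below assumes the linear schedule $\alpha_t = 1-t$, $\sigma_t = t$ and data $\pi_0 = \mathcal{N}(0, \mathrm{Id})$, so that $\mathbf{X}_t \sim \mathcal{N}(0, (\alpha_t^2+\sigma_t^2)\mathrm{Id})$:
+
+```python
+import numpy as np
+
+# Linear schedule alpha_t = 1 - t, sigma_t = t, Gaussian data pi_0 = N(0, Id),
+# so X_t ~ N(0, (alpha_t^2 + sigma_t^2) Id) and all quantities are closed form.
+alpha, dalpha = lambda t: 1.0 - t, lambda t: -1.0
+sigma, dsigma = lambda t: t, lambda t: 1.0
+
+def velocity(x, t):
+    # v_t(x) = E[alpha'_t X_0 + sigma'_t X_1 | X_t = x], linear in x here
+    var = alpha(t) ** 2 + sigma(t) ** 2
+    return (dalpha(t) * alpha(t) + dsigma(t) * sigma(t)) / var * x
+
+def score_from_velocity(x, t):
+    # Eq. (36): grad log p_t(x) = (2 / g_t^2) (f_t x - v_t(x))
+    f = dalpha(t) / alpha(t)
+    g_sq = 2.0 * sigma(t) / alpha(t)  # = 2 alpha sigma d/dt(sigma/alpha) for this schedule
+    return (2.0 / g_sq) * (f * x - velocity(x, t))
+
+def score_exact(x, t):
+    return -x / (alpha(t) ** 2 + sigma(t) ** 2)
+
+x = np.array([1.5, -0.7, 0.2])
+ok = all(np.allclose(score_from_velocity(x, t), score_exact(x, t))
+         for t in (0.1, 0.5, 0.9))
+```
+
+Both routes give the same score, illustrating that estimating the velocity suffices.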
+
+In particular, in order to estimate the score function, we only need to estimate the velocity, and vice versa. In practice, we consider the following loss function
+
+$$
+\mathcal {L} _ {\theta} = \int_ {0} ^ {1} w _ {t} \mathbb {E} [ \| \partial_ {t} \alpha_ {t} \mathbf {X} _ {0} + \partial_ {t} \sigma_ {t} \mathbf {X} _ {1} - v _ {\theta , t} (\mathbf {X} _ {t}) \| ^ {2} ] \mathrm {d} t,
+$$
+
+where $w_{t} > 0$ is a weighting function, see Esser et al. (2024) for some possible choices for $w_{t}$ . We denote by $s_{\theta}$ the score estimated from $v_{\theta}$ using (36). At sampling time, we consider the Euler-Maruyama discretisation of (35). More precisely, we define some timesteps $\{t_i\}_{i=0}^N$ with $0 = t_0 < t_1 < \dots < t_N = 1$ and consider the following Markov chain
+
+$$
+Y_{k+1} = Y_{k} + \gamma_{k} \left\{ -v_{\theta, 1-t_{k}}(Y_{k}) + \frac{1}{2} \varepsilon^{2} g_{1-t_{k}}^{2} s_{\theta, 1-t_{k}}(Y_{k}) \right\} + \sqrt{\gamma_{k}}\, \varepsilon g_{1-t_{k}} Z_{k}, \tag{37}
+$$
+
+where $(Z_{k})_{k\in \mathbb{N}}$ is a sequence of independent and identically distributed Gaussian random variables with zero mean and identity covariance matrix and $\gamma_{k} = t_{k + 1} - t_{k}$ . When additional conditioning information is available, one can consider an additional guidance term and (37) is changed into
+
+$$
+Y_{k+1} = Y_{k} + \gamma_{k} \Big\{ -(1+\delta) v_{\theta, 1-t_{k}}(Y_{k}, c) + \delta v_{\theta, 1-t_{k}}(Y_{k}, \emptyset) + \frac{1}{2} \varepsilon^{2} g_{1-t_{k}}^{2} s_{\theta, 1-t_{k}}(Y_{k}) \Big\} + \sqrt{\gamma_{k}}\, \varepsilon g_{1-t_{k}} Z_{k},
+$$
+
+where $v_{\theta, t}(\cdot, c)$ corresponds to a conditional model and $v_{\theta, t}(\cdot, \emptyset)$ to an unconditional one.
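+A minimal numerical sketch of an Euler-Maruyama sampler with the drift of (35) is given below; we assume the linear schedule $\alpha_t = 1-t$, $\sigma_t = t$ and Gaussian data $\pi_0 = \mathcal{N}(0,\mathrm{Id})$, for which the exact velocity and score are known in closed form, and we trim the time grid away from the endpoints to avoid the singularity of $g_t$:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+
+# Linear schedule alpha_s = 1 - s, sigma_s = s with Gaussian data pi_0 = N(0, Id):
+# the exact velocity and score are linear in y, so the chain can be tested end to end.
+def var(s): return (1.0 - s) ** 2 + s ** 2
+def velocity(y, s): return ((2.0 * s - 1.0) / var(s)) * y
+def score(y, s): return -y / var(s)
+def g_sq(s): return 2.0 * s / (1.0 - s)
+
+eps, n_steps, delta = 0.5, 400, 1e-2
+ts = np.linspace(delta, 1.0 - delta, n_steps + 1)  # reverse-time grid, endpoints trimmed
+Y = rng.normal(size=(20_000, 2))                   # Y_0 ~ N(0, Id), close to p_1
+
+for k in range(n_steps):
+    t, gamma = ts[k], ts[k + 1] - ts[k]
+    s = 1.0 - t                                    # forward time of this step
+    drift = -velocity(Y, s) + 0.5 * eps ** 2 * g_sq(s) * score(Y, s)
+    Y = Y + gamma * drift + np.sqrt(gamma * g_sq(s)) * eps * rng.normal(size=Y.shape)
+```
+
+After the loop, `Y` is approximately distributed as $\pi_0 = \mathcal{N}(0,\mathrm{Id})$, up to discretization and truncation error.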
+
+# I.2. Low dimensional experiments.
+
+In our low-dimensional setting, we create a dataset by sampling from a mixture of Gaussians. The means are sampled uniformly and independently from $[-2, 2]^d$ , where $d$ is the dimension. Each Gaussian component has a covariance matrix of the form $\sigma^2 \mathrm{Id}$ , where the standard deviation $\sigma$ is also sampled uniformly and independently from $[0.1, 0.2]$ . We test across dimensions $d \in \{2, 4, 8, 16, 32\}$ and numbers of components $n \in \{1, 2, 4, 8, 16\}$ .
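+The dataset construction above can be sketched as follows (the function name and the sample count are illustrative):
+
+```python
+import numpy as np
+
+def make_gmm_dataset(n_samples, d, n_components, rng):
+    """Sample a dataset from the random Gaussian mixture described above."""
+    means = rng.uniform(-2.0, 2.0, size=(n_components, d))
+    stds = rng.uniform(0.1, 0.2, size=n_components)   # covariance sigma^2 Id per component
+    labels = rng.integers(n_components, size=n_samples)
+    x = means[labels] + stds[labels, None] * rng.normal(size=(n_samples, d))
+    return x, labels
+
+rng = np.random.default_rng(0)
+x, labels = make_gmm_dataset(10_000, d=2, n_components=4, rng=rng)
+```
+
+The labels are kept so that the velocity network can be conditioned on the mixture component.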
+
+The velocity of the diffusion model is parameterized with a sequence of MLPs. For all MLPs we use the GeLU activation function. The label, corresponding to the component of the mixture, is encoded using an embedding layer with feature dimension 512. Similarly, the time information is encoded using a sinusoidal embedding with feature dimension 512. This encoding is then processed with an MLP with output dimension 512. The time embedding and the label embedding are then concatenated into a conditioning embedding. The conditioning embedding and the input $x_{t}$ of the velocity network are then processed independently with 3 MLP layers with output dimensions (64, 64, 128). The obtained embeddings are then concatenated and processed with 3 MLP layers with output dimensions (128, 64, 64). Finally, a last dense layer with output dimension $d$ is added. We do not consider any normalisation layer. In the case of the training of an independent draft model, the three preprocessing MLP layers are replaced with one MLP layer with output dimension 4. Similarly, the three postprocessing MLP layers are replaced with one MLP layer with output dimension 4. For sampling, we use 250 sampling steps. We refer to Appendix J for additional results.
+
+# I.3. Image experiments
+
+All FID and IS scores are evaluated with 50,000 images.
+
+CIFAR10. The shape of the samples in the training dataset is $(32 \times 32 \times 3)$ . The batch size is set to 128. Images are rescaled between $-1.0$ and $1.0$ . We consider an augmentation pipeline similar to the one of (Karras et al., 2022), applied with a global probability $p = 0.12$ . In particular, we consider flipping (both $x$ and $y$ axis), anisotropy transformations, non-integer rotation, scaling and non-integer translation.
+
+For the model, we consider a U-Net architecture with GeLU activations and 4 levels, with a residual attention block applied on the second level. The channel multipliers are given by $(1,2,2,2)$ . The channel size is 256. We consider a dropout rate of 0.2. The normalization layers are RMS normalization layers. For the attention layers we consider 8 heads. The number of residual blocks is 2. For the skip connections, we add (and normalize) the current activation with the stored activation. The time is embedded using a sinusoidal embedding with hidden dimension 256. We embed the 10 different classes and consider conditional models. We also condition the model on the augmentation vector. These two conditionings are added, and the time embedding is added on top of this. The conditioning occurs through adaptive normalization layers. We train the model for 1M steps with the Adam optimizer, a learning rate of $10^{-4}$ and an EMA decay of 0.9999.
+
+LSUN. We consider the same configuration as CIFAR10. However, the samples do not have labels and we only consider the augmentation conditioning.
+
+# I.4. Latent CIFAR-10 experiments
+
+In the first stage, as an auto-encoder, we use a variational auto-encoder (VAE) (Kingma & Welling, 2014) with a smaller weight $\beta$ on the KL term, as in $\beta$ -VAE (Higgins et al., 2016). The encoder and decoder are represented by U-Nets: the encoder follows only the downsampling and middle bottleneck paths, while the decoder follows the middle bottleneck and upsampling paths, similar to what is used in (Rombach et al., 2022). We use 128 channels for the corresponding U-Nets, without attention in the downsampling/upsampling paths but with attention in the middle (1 head), and with channel multipliers $(1,2,4,4)$ . We use the SiLU activation function and RMSNorm for normalization, as opposed to GroupNorm. We also employ a perceptual LPIPS loss (Zhang et al., 2018) with coefficient 1 as well as a patch-based discriminator as in (Esser et al., 2021). The dimensionality of the latent space is $(4,4,32)$ , which is 6 times smaller than the original CIFAR-10 image dimensionality $(32,32,3)$ . We train the autoencoder for 500,000 steps. We track the FID on a subset of 12,800 images, comparing clean images and the reconstructions decoder(encoder $(x))$ , and select hyperparameters which achieve the smallest FID. The selected hyperparameters as well as their ranges are:
+
+- Number of discriminator filters = 32. Range: [32, 64, 128].
+- Number of discriminator layers = 6. Range: [3, 6, 9].
+- Dropout rate for both encoder and decoder = 0.0. Range: [0, 0.1, 0.2, 0.3].
+- $\beta$ parameter = 1e-6. Range: [1e-4, 5e-5, 1e-5, 5e-6, 1e-6, 5e-7, 1e-7].
+- Generator loss coefficient = 0.01. Range: [0.001, 0.01, 0.1, 1.0].
+- Adversarial loss coefficient = 0.001. Range: [0.001, 0.01, 0.1, 1.0].
+- Batch size = 1024. Range: [128, 256, 512, 1024].
+
+For the second stage, we freeze the encoder and decoder and train a diffusion model on the encoded images (we take the means), similar to (Rombach et al., 2022). We use a U-Net with 256 channels, $(2, 2)$ channel multipliers with attention performed (False, True), attention in the middle with 8 attention heads, RMSNorm, and GeLU activation. We train the latent diffusion model for 160,000 iterations with batch size 256. We track FID on a subset of 12,800 images to select the hyperparameters. The selected hyperparameters as well as their ranges are:
+
+- Prediction target = $x_0$ . Range: [$x_0$ , velocity].
+- U-Net dropout rate = 0.0. Range: [0, 0.1, 0.2, 0.3].
+- Learning rate = 1e-4. Range: [1e-3, 1e-4, 5e-5].
+- Noise process type = cosine. Range: [linear, cosine, rectified flow].
+
+Once the models are trained, we employ the same sampling strategy as in the CIFAR-10 experiments.
+
+# I.5. PushT dataset.
+
+We consider the PushT dataset. The task here is to push a $T$ shape onto a target shape on a two-dimensional plane. The action dimension is 2 and the action horizon is 8. We keep a history of length 2 and consider a maximum of 300 steps when unrolling the policy. We consider a prediction horizon of length 16. This means that the dimension of the target is $16 \times 2$ , and we condition on the last two previous states (each of dimension 5), hence the conditioning signal has shape $(2 \times 5)$ . Once we have predicted 16 actions, we execute 8 of them. As specified before, we execute a maximum of 300 steps or stop if the reward reaches one, i.e., the $T$ shape is perfectly aligned.
+
+We train the model for 1M steps with Adam and stepsize $10^{-4}$ . We consider a one-dimensional U-Net with time embedding. The architecture follows (Chi et al., 2023).
+
+At inference time, we rely on the DDPM sampler.
+
+# J. Additional results
+
+We run similar experiments in latent space to showcase the flexibility of our method. We follow the approach described in (Rombach et al., 2022): we pre-train an autoencoder on the whole dataset and then train a diffusion model on the latent-encoded dataset. We consider a latent space of shape $(4 \times 4 \times 32)$ , which is 6 times smaller than the dimensionality $(32 \times 32 \times 3)$ of CIFAR10. We refer to Appendix I.4 for architectural and training details. We report the FID score computed on 50k training samples. Our results are reported in Table 4. We found that latent diffusion on CIFAR-10 achieved a better FID score when the target used only 30 sampling iterations. Nevertheless, we see that our speculative sampling method still provides a roughly 3× speed-up in NFE while maintaining quality similar to the target model. We also considered using a target model with only 10 NFEs, and Table 4 suggests that it achieves considerably worse results. This highlights the strength of our approach.
+
+Combining speculative sampling and parallel sampling. We report FID score and NFE for CIFAR-10 with a number of steps of 30. We vary the temperature parameter $\tau$ , the churn parameter $\varepsilon$ as well as the number of parallel iterations, see (Shih et al., 2023; Tang et al., 2024). For each combination of hyperparameters we also consider window sizes 5, 10 and 20 and report the best run (in terms of FID).
+
+The original speculative sampling procedure corresponds to $p = 0$ . The best FID that can be achieved with this configuration is 2.23 with an NFE of 15.69. However, by combining our speculative sampling procedure with parallel sampling, we can reach a FID of 2.07 with an NFE of 15.42. This shows the benefits of combining our speculative sampling procedure with other acceleration methods. We report these results in Table 8.
+
+Combining speculative sampling and step distillation. We now compare our approach with LD3 (Tong et al., 2024) and AYS (Sabour et al., 2024), using the CIFAR-10 results reported in LD3 (Tong et al., 2024). Our best speculative sampling method outperformed both LD3 and AYS. We also included our best results obtained with a uniform timestep spacing and the EDM timestep spacing (Karras et al., 2022); these results are based on the same model as "Best speculative". We sweep over $\rho \in [1.0, \dots, 8.0]$ in the case of the EDM timestep spacing. This improves the quality of the samples, but they remain inferior to the ones obtained with our best speculative model. We also re-implemented LD3 (Tong et al., 2024) in our setting and used it to learn a timestep spacing. Finally, we compare our approach with a distilled generator trained on top of our best model, focusing on Multistep Moment Matching Distillation (MMD) (Salimans et al., 2024).
+
+| Configuration | FID | NFE |
+| --- | --- | --- |
+| DPM Solver++ (naive - reported) | 2.37 | 20 |
+| DPM Solver++ (AYS (Sabour et al., 2024) - reported) | 2.10 | 20 |
+| DPM Solver++ (LD3 (Tong et al., 2024) - reported) | 2.36 | 20 |
+| Uniform timesteps | 7.14 | 15 |
+| EDM timesteps | 4.22 | 15 |
+| LD3 timesteps | 3.49 | 15 |
+| MultiStep Moment Matching | 2.76 | 15 |
+| Best speculative | 2.07 | 15.4 |
+
+Table 3. Comparison of model configurations, including our best speculative methods against several baselines. The top section shows reported results from prior work, while the bottom section details our experiments.
+
+# K. Accelerating Langevin Diffusions using Speculative Sampling
+
+We detail in this appendix the application of speculative sampling to Langevin diffusions proposed in Section 5. Assume that we are interested in sampling from an unnormalized density $\pi (x)$ on $\mathbb{R}^d$ , i.e.
+
+$$
+\pi (x) = \frac {\exp (- E (x))}{Z}, \qquad Z = \int \exp (- E (x)) \mathrm {d} x,
+$$
+
+where the energy function $E(x)$ can be evaluated pointwise, but each evaluation is computationally expensive, and $Z$ is an intractable normalizing constant. We are interested here in accelerating MCMC sampling in the context where we have
+
+| Configuration | Draft FID ↓ | Draft IS ↑ | Target (30 steps) FID ↓ | Target (30 steps) IS ↑ | Target (10 steps) FID ↓ | Target (10 steps) IS ↑ | Speculative FID ↓ | Speculative IS ↑ | NFE ↓ |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| ε = 0.01, τ = 0.5 | 80.92 | 5.59 | 2.67 | 11.09 | 39.48 | 7.42 | 2.66 | 11.13 | 18.53 |
+| ε = 0.01, τ = 1.0 | 80.92 | 5.59 | 2.67 | 11.09 | 39.48 | 7.42 | 2.66 | 11.14 | 17.78 |
+| ε = 0.01, τ = 2.0 | 80.92 | 5.59 | 2.67 | 11.09 | 39.48 | 7.42 | 2.66 | 11.14 | 17.09 |
+| ε = 0.25, τ = 0.5 | 82.28 | 5.50 | 2.64 | 11.15 | 87.39 | 4.82 | 2.68 | 11.18 | 10.37 |
+| ε = 0.25, τ = 1.0 | 82.28 | 5.50 | 2.64 | 11.15 | 87.39 | 4.82 | 2.66 | 11.23 | 9.36 |
+| ε = 0.25, τ = 2.0 | 82.28 | 5.50 | 2.64 | 11.15 | 87.39 | 4.82 | 2.66 | 11.21 | 8.36 |
+| ε = 0.5, τ = 0.5 | 83.27 | 5.42 | 2.51 | 11.08 | 118.78 | 3.81 | 2.56 | 11.11 | 9.35 |
+| ε = 0.5, τ = 1.0 | 83.27 | 5.42 | 2.51 | 11.08 | 118.78 | 3.81 | 2.50 | 11.12 | 8.30 |
+| ε = 0.5, τ = 2.0 | 83.27 | 5.42 | 2.51 | 11.08 | 118.78 | 3.81 | 2.52 | 11.07 | 7.30 |
+| ε = 1.0, τ = 0.5 | 97.67 | 4.72 | 37.54 | 7.09 | 182.94 | 2.43 | 37.13 | 7.11 | 9.57 |
+| ε = 1.0, τ = 1.0 | 97.67 | 4.72 | 37.54 | 7.09 | 182.94 | 2.43 | 37.85 | 7.07 | 8.36 |
+| ε = 1.0, τ = 2.0 | 97.67 | 4.72 | 37.54 | 7.09 | 182.94 | 2.43 | 38.32 | 7.09 | 7.19 |
+
+Table 4. Latent diffusion on CIFAR-10 with window size = 15 for speculative sampling. For each column, we report the best result in bold.
+
+| Configuration | Draft FID ↓ | Draft IS ↑ | Target (500 steps) FID ↓ | Target (500 steps) IS ↑ | Speculative FID ↓ | Speculative IS ↑ | NFE ↓ |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| ε = 0.005, τ = 0.25 | 5.76 | 1.93 | 4.66 | 2.02 | 4.69 | 2.01 | 305.39 |
+| ε = 0.005, τ = 0.5 | 5.76 | 1.93 | 4.66 | 2.02 | 4.68 | 2.01 | 286.07 |
+| ε = 0.005, τ = 1.0 | 5.76 | 1.93 | 4.66 | 2.02 | 4.70 | 2.01 | 263.56 |
+| ε = 0.005, τ = 2.0 | 5.76 | 1.93 | 4.66 | 2.02 | 4.72 | 2.01 | 238.01 |
+| ε = 0.01, τ = 0.25 | 5.76 | 1.93 | 4.66 | 2.02 | 4.65 | 2.01 | 257.96 |
+| ε = 0.01, τ = 0.5 | 5.76 | 1.93 | 4.66 | 2.02 | 4.67 | 2.01 | 236.04 |
+| ε = 0.01, τ = 1.0 | 5.76 | 1.93 | 4.66 | 2.02 | 4.69 | 2.00 | 211.47 |
+| ε = 0.01, τ = 2.0 | 5.76 | 1.93 | 4.66 | 2.02 | 4.76 | 2.00 | 184.98 |
+| ε = 0.05, τ = 0.25 | 5.97 | 1.91 | 4.66 | 2.01 | 4.48 | 2.02 | 186.63 |
+| ε = 0.05, τ = 0.5 | 5.97 | 1.91 | 4.66 | 2.01 | 4.53 | 2.00 | 164.07 |
+| ε = 0.05, τ = 1.0 | 5.97 | 1.91 | 4.66 | 2.01 | 4.62 | 2.01 | 140.06 |
+| ε = 0.05, τ = 2.0 | 5.97 | 1.91 | 4.66 | 2.01 | 4.86 | 1.99 | 116.38 |
+| ε = 0.1, τ = 0.25 | 6.46 | 1.91 | 4.52 | 2.00 | 4.36 | 2.03 | 176.02 |
+| ε = 0.1, τ = 0.5 | 6.46 | 1.91 | 4.52 | 2.00 | 4.38 | 2.02 | 154.73 |
+| ε = 0.1, τ = 1.0 | 6.46 | 1.91 | 4.52 | 2.00 | 4.56 | 1.99 | 131.47 |
+| ε = 0.1, τ = 2.0 | 6.46 | 1.91 | 4.52 | 2.00 | 4.79 | 1.97 | 108.40 |
+| ε = 0.25, τ = 0.25 | 10.11 | 1.96 | 4.13 | 1.96 | 3.94 | 2.01 | 172.65 |
+| ε = 0.25, τ = 0.5 | 10.11 | 1.96 | 4.13 | 1.96 | 3.98 | 1.97 | 153.05 |
+| ε = 0.25, τ = 1.0 | 10.11 | 1.96 | 4.13 | 1.96 | 4.24 | 1.97 | 130.71 |
+| ε = 0.25, τ = 2.0 | 10.11 | 1.96 | 4.13 | 1.96 | 4.53 | 1.96 | 107.92 |
+| ε = 0.5, τ = 0.25 | 17.53 | 2.11 | 4.18 | 1.96 | 4.02 | 1.96 | 178.45 |
+| ε = 0.5, τ = 0.5 | 17.53 | 2.11 | 4.18 | 1.96 | 4.02 | 1.95 | 160.68 |
+| ε = 0.5, τ = 1.0 | 17.53 | 2.11 | 4.18 | 1.96 | 4.26 | 1.93 | 139.16 |
+| ε = 0.5, τ = 2.0 | 17.53 | 2.11 | 4.18 | 1.96 | 4.51 | 1.93 | 116.33 |
+
+Table 5. LSUN with window size = 50, no last step function, 500 steps. For each column, we report the best result in bold.
+
+| Configuration | Draft FID ↓ | Draft IS ↑ | Target (200 steps) FID ↓ | Target (200 steps) IS ↑ | Speculative FID ↓ | Speculative IS ↑ | NFE ↓ |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| ε = 0.001, τ = 0.25 | 10.56 | 1.89 | 3.99 | 1.99 | 3.99 | 1.98 | 176.85 |
+| ε = 0.001, τ = 0.5 | 10.56 | 1.89 | 3.99 | 1.99 | 3.99 | 1.98 | 173.49 |
+| ε = 0.001, τ = 1.0 | 10.56 | 1.89 | 3.99 | 1.99 | 3.99 | 1.98 | 168.23 |
+| ε = 0.001, τ = 2.0 | 10.56 | 1.89 | 3.99 | 1.99 | 3.99 | 1.98 | 160.89 |
+| ε = 0.005, τ = 0.25 | 10.58 | 1.89 | 4.02 | 1.98 | 4.00 | 1.98 | 137.95 |
+| ε = 0.005, τ = 0.5 | 10.58 | 1.89 | 4.02 | 1.98 | 3.99 | 1.98 | 131.53 |
+| ε = 0.005, τ = 1.0 | 10.58 | 1.89 | 4.02 | 1.98 | 3.99 | 1.98 | 124.52 |
+| ε = 0.005, τ = 2.0 | 10.58 | 1.89 | 4.02 | 1.98 | 4.00 | 1.98 | 117.13 |
+| ε = 0.01, τ = 0.25 | 10.63 | 1.89 | 3.99 | 1.98 | 3.98 | 1.98 | 121.26 |
+| ε = 0.01, τ = 0.5 | 10.63 | 1.89 | 3.99 | 1.98 | 3.98 | 1.98 | 114.51 |
+| ε = 0.01, τ = 1.0 | 10.63 | 1.89 | 3.99 | 1.98 | 3.99 | 1.98 | 107.26 |
+| ε = 0.01, τ = 2.0 | 10.63 | 1.89 | 3.99 | 1.98 | 4.01 | 1.98 | 99.20 |
+| ε = 0.05, τ = 0.25 | 12.73 | 1.91 | 3.95 | 1.98 | 3.94 | 1.98 | 92.66 |
+| ε = 0.05, τ = 0.5 | 12.73 | 1.91 | 3.95 | 1.98 | 3.96 | 1.97 | 86.26 |
+| ε = 0.05, τ = 1.0 | 12.73 | 1.91 | 3.95 | 1.98 | 4.03 | 1.96 | 78.75 |
+| ε = 0.05, τ = 2.0 | 12.73 | 1.91 | 3.95 | 1.98 | 4.14 | 1.95 | 70.04 |
+| ε = 0.1, τ = 0.25 | 18.56 | 1.99 | 3.92 | 1.99 | 3.89 | 1.97 | 87.74 |
+| ε = 0.1, τ = 0.5 | 18.56 | 1.99 | 3.92 | 1.99 | 3.93 | 1.99 | 82.05 |
+| ε = 0.1, τ = 1.0 | 18.56 | 1.99 | 3.92 | 1.99 | 3.97 | 1.98 | 74.87 |
+| ε = 0.1, τ = 2.0 | 18.56 | 1.99 | 3.92 | 1.99 | 4.16 | 1.94 | 66.28 |
+| ε = 0.25, τ = 0.25 | 33.76 | 2.28 | 3.83 | 1.94 | 3.76 | 1.96 | 85.60 |
+| ε = 0.25, τ = 0.5 | 33.76 | 2.28 | 3.83 | 1.94 | 3.74 | 1.97 | 80.82 |
+| ε = 0.25, τ = 1.0 | 33.76 | 2.28 | 3.83 | 1.94 | 3.94 | 1.95 | 74.27 |
+| ε = 0.25, τ = 2.0 | 33.76 | 2.28 | 3.83 | 1.94 | 4.12 | 1.94 | 66.01 |
+| ε = 0.5, τ = 0.25 | 49.82 | 2.65 | 4.09 | 1.95 | 3.93 | 1.95 | 87.12 |
+| ε = 0.5, τ = 0.5 | 49.82 | 2.65 | 4.09 | 1.95 | 3.97 | 1.95 | 83.29 |
+| ε = 0.5, τ = 1.0 | 49.82 | 2.65 | 4.09 | 1.95 | 4.14 | 1.93 | 77.55 |
+| ε = 0.5, τ = 2.0 | 49.82 | 2.65 | 4.09 | 1.95 | 4.22 | 1.96 | 69.81 |
+| ε = 1.0, τ = 0.25 | 115.98 | 3.44 | 4.76 | 1.93 | 4.75 | 1.95 | 93.13 |
+| ε = 1.0, τ = 0.5 | 115.98 | 3.44 | 4.76 | 1.93 | 4.73 | 1.97 | 90.88 |
+| ε = 1.0, τ = 1.0 | 115.98 | 3.44 | 4.76 | 1.93 | 4.77 | 1.96 | 87.24 |
+| ε = 1.0, τ = 2.0 | 115.98 | 3.44 | 4.76 | 1.93 | 4.85 | 1.95 | 81.40 |
+
+Table 6. LSUN with window size = 50, no last step function, 200 steps. For each column, we report the best result in bold.
+
+| Configuration | Draft FID ↓ | Draft IS ↑ | Target (100 steps) FID ↓ | Target (100 steps) IS ↑ | Speculative FID ↓ | Speculative IS ↑ | NFE ↓ |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| ε = 0.001, τ = 0.25 | 24.04 | 1.99 | 3.81 | 1.95 | 3.78 | 1.95 | 91.82 |
+| ε = 0.001, τ = 0.5 | 24.04 | 1.99 | 3.81 | 1.95 | 3.79 | 1.95 | 90.57 |
+| ε = 0.001, τ = 1.0 | 24.04 | 1.99 | 3.81 | 1.95 | 3.79 | 1.95 | 88.56 |
+| ε = 0.001, τ = 2.0 | 24.04 | 1.99 | 3.81 | 1.95 | 3.79 | 1.95 | 85.46 |
+| ε = 0.005, τ = 0.25 | 24.26 | 2.00 | 3.81 | 1.95 | 3.78 | 1.95 | 73.13 |
+| ε = 0.005, τ = 0.5 | 24.26 | 2.00 | 3.81 | 1.95 | 3.77 | 1.95 | 70.13 |
+| ε = 0.005, τ = 1.0 | 24.26 | 2.00 | 3.81 | 1.95 | 3.77 | 1.95 | 66.67 |
+| ε = 0.005, τ = 2.0 | 24.26 | 2.00 | 3.81 | 1.95 | 3.77 | 1.95 | 63.10 |
+| ε = 0.01, τ = 0.25 | 25.05 | 2.01 | 3.80 | 1.95 | 3.77 | 1.94 | 65.75 |
+| ε = 0.01, τ = 0.5 | 25.05 | 2.01 | 3.80 | 1.95 | 3.77 | 1.94 | 62.68 |
+| ε = 0.01, τ = 1.0 | 25.05 | 2.01 | 3.80 | 1.95 | 3.77 | 1.94 | 59.59 |
+| ε = 0.01, τ = 2.0 | 25.05 | 2.01 | 3.80 | 1.95 | 3.77 | 1.94 | 56.71 |
+| ε = 0.05, τ = 0.25 | 48.62 | 2.27 | 3.75 | 1.96 | 3.75 | 1.94 | 52.44 |
+| ε = 0.05, τ = 0.5 | 48.62 | 2.27 | 3.75 | 1.96 | 3.76 | 1.93 | 50.07 |
+| ε = 0.05, τ = 1.0 | 48.62 | 2.27 | 3.75 | 1.96 | 3.77 | 1.93 | 47.57 |
+| ε = 0.05, τ = 2.0 | 48.62 | 2.27 | 3.75 | 1.96 | 3.85 | 1.93 | 44.52 |
+| ε = 0.1, τ = 0.25 | 69.55 | 2.53 | 3.74 | 1.95 | 3.78 | 1.94 | 49.65 |
+| ε = 0.1, τ = 0.5 | 69.55 | 2.53 | 3.74 | 1.95 | 3.79 | 1.94 | 47.75 |
+| ε = 0.1, τ = 1.0 | 69.55 | 2.53 | 3.74 | 1.95 | 3.79 | 1.93 | 45.51 |
+| ε = 0.1, τ = 2.0 | 69.55 | 2.53 | 3.74 | 1.95 | 3.86 | 1.91 | 42.52 |
+| ε = 0.25, τ = 0.25 | 97.47 | 3.17 | 3.85 | 1.92 | 3.79 | 1.93 | 48.11 |
+| ε = 0.25, τ = 0.5 | 97.47 | 3.17 | 3.85 | 1.92 | 3.82 | 1.94 | 46.76 |
+| ε = 0.25, τ = 1.0 | 97.47 | 3.17 | 3.85 | 1.92 | 3.81 | 1.93 | 44.92 |
+| ε = 0.25, τ = 2.0 | 97.47 | 3.17 | 3.85 | 1.92 | 3.90 | 1.92 | 42.16 |
+| ε = 0.5, τ = 0.25 | 147.36 | 3.62 | 4.08 | 1.97 | 4.01 | 1.95 | 48.38 |
+| ε = 0.5, τ = 0.5 | 147.36 | 3.62 | 4.08 | 1.97 | 4.06 | 1.96 | 47.43 |
+| ε = 0.5, τ = 1.0 | 147.36 | 3.62 | 4.08 | 1.97 | 4.14 | 1.95 | 46.02 |
+| ε = 0.5, τ = 2.0 | 147.36 | 3.62 | 4.08 | 1.97 | 4.21 | 1.95 | 43.70 |
+| ε = 1.0, τ = 0.25 | 231.66 | 2.74 | 5.76 | 2.02 | 5.72 | 2.00 | 50.08 |
+| ε = 1.0, τ = 0.5 | 231.66 | 2.74 | 5.76 | 2.02 | 5.69 | 2.00 | 49.59 |
+| ε = 1.0, τ = 1.0 | 231.66 | 2.74 | 5.76 | 2.02 | 5.70 | 2.01 | 49.00 |
+| ε = 1.0, τ = 2.0 | 231.66 | 2.74 | 5.76 | 2.02 | 5.65 | 2.02 | 47.89 |
+
+Table 7. LSUN with window size = 50, no last step function, 100 steps. For each column, we report the best result in bold.
+
+| Configuration | FID ↓ | NFE ↓ |
+| --- | --- | --- |
+| p=0, ε=0.25, τ=1.0 | 2.23 | 15.69 |
+| p=1, ε=0.25, τ=1.0 | 2.09 | 23.80 |
+| p=5, ε=0.25, τ=1.0 | 2.09 | 57.85 |
+| p=0, ε=0.5, τ=1.0 | 2.77 | 17.06 |
+| p=1, ε=0.5, τ=1.0 | 2.75 | 23.42 |
+| p=5, ε=0.5, τ=1.0 | 2.75 | 57.80 |
+| p=0, ε=0.25, τ=2.0 | 2.24 | 14.89 |
+| p=1, ε=0.25, τ=2.0 | 2.09 | 21.12 |
+| p=5, ε=0.25, τ=2.0 | 2.08 | 51.45 |
+| p=0, ε=0.5, τ=2.0 | 2.74 | 16.47 |
+| p=1, ε=0.5, τ=2.0 | 2.77 | 20.62 |
+| p=5, ε=0.5, τ=2.0 | 2.77 | 50.40 |
+| p=0, ε=0.25, τ=10.0 | 2.39 | 12.86 |
+| p=1, ε=0.25, τ=10.0 | 2.07 | 15.42 |
+| p=5, ε=0.25, τ=10.0 | 2.07 | 37.50 |
+| p=0, ε=0.5, τ=10.0 | 2.73 | 14.49 |
+| p=1, ε=0.5, τ=10.0 | 2.79 | 16.38 |
+| p=5, ε=0.5, τ=10.0 | 2.79 | 40.25 |
+
+Table 8. Results on CIFAR-10 when combining speculative sampling and parallel sampling. The hyperparameter $p$ represents the number of parallel calls.
+
+access to a computationally cheap proxy energy function $\hat{E}(x) \approx E(x)$ defining $\hat{\pi}(x) \propto \exp(-\hat{E}(x))$ . Access to such proxies is common in many domains of computational science and engineering, see e.g. (Peherstorfer et al., 2018) for a review.
+
+In this context, a popular modification of the Metropolis-Hastings (MH) algorithm to sample from $\pi$ leveraging an energy proxy was proposed by Christen & Fox (2005). It is known in the literature as delayed acceptance MH (Cui et al., 2011; Sherlock et al., 2017) or two-stage Markov chain Monte Carlo (Peherstorfer et al., 2018). We present here a completely different approach to accelerate another popular MCMC algorithm, namely the Unadjusted Langevin algorithm (ULA).
+
+The Langevin diffusion is defined by
+
+$$
+\mathrm {d} \mathbf {X} _ {t} = - \nabla E (\mathbf {X} _ {t}) \mathrm {d} t + \sqrt {2} \mathrm {d} \mathbf {B} _ {t},
+$$
+
+where $(\mathbf{B}_t)_{t\geq 0}$ is a standard multivariate Brownian motion. The limiting distribution of this diffusion is $\pi$ . In practice, we discretize this diffusion to obtain the ULA algorithm, i.e.
+
+$$
+X _ {k + 1} = X _ {k} - \gamma \nabla E (X _ {k}) + \sqrt {2 \gamma} W _ {k}, \tag {38}
+$$
+
+for a stepsize $\gamma > 0$ and $W_{k} \stackrel{\mathrm{i.i.d.}}{\sim} \mathcal{N}(0, \mathrm{Id})$ . Contrary to MH, this algorithm only samples from an approximation of $\pi$ due to the time-discretization but explicit bounds on the bias incurred are available (Durmus & Moulines, 2017). The speculative sampling algorithm is directly applicable to accelerate the simulation of (38). In this case, (38) plays the role of the target model while
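As a sketch, the update (38) is a one-line recursion; the following minimal NumPy version takes the gradient as a generic callable (the function name and arguments are illustrative, not from the paper).

```python
import numpy as np

def ula(grad_E, x0, n_steps, gamma, rng):
    """Unadjusted Langevin algorithm, Eq. (38):
    X_{k+1} = X_k - gamma * grad_E(X_k) + sqrt(2 * gamma) * W_k."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        x = (x - gamma * grad_E(x)
             + np.sqrt(2 * gamma) * rng.standard_normal(x.shape))
    return x
```

For a standard Gaussian target ($E(x) = \|x\|^2/2$), the chain's stationary variance is $1/(1-\gamma/2)$ per coordinate rather than 1, which makes the discretization bias discussed above explicit.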
+
+$$
+X _ {k + 1} = X _ {k} - \gamma \nabla \hat {E} (X _ {k}) + \sqrt {2 \gamma} W _ {k}, \tag {39}
+$$
+
+corresponds to the draft model. In this case, the general speculative sampling from Algorithm 6 simplifies drastically and we obtain Algorithm 11. As a cheap proxy, we can still use the frozen draft model strategy in this context, that is, set $\nabla \hat{E}(x_{n+k}) = \nabla E(x_n)$ for $k = 1, \dots, n_L$ in (39). Again, speculative sampling returns exact samples from ULA. This method can be thought of as a novel pre-fetching technique to accelerate MCMC (Brockwell, 2006; Angelino et al., 2014).
+
+We now demonstrate the efficiency of Algorithm 11. We consider the $\phi^4$ model (Guth et al., 2022; Milchev et al., 1986), whose energy function is given by
+
+$$
+E (x) = \frac {\beta}{2} \sum_ {| i - j | = 1} \left(x _ {i} - x _ {j}\right) ^ {2} + \sum_ {i} \left(x _ {i} ^ {2} - 1\right) ^ {2},
+$$
+
+on a grid of shape $(8, 8)$ with $\beta = 100$. Sampling from $\pi$ is challenging, as it requires sampling so-called ordered states. In this context, the target model is the Langevin diffusion on $E(x)$ run for 100,000 iterations with stepsize $10^{-3}$, while our speculative sampling algorithm uses the frozen-prediction draft model and a window size of 20. We report the mean and the standard deviation of the energy over the last 500 simulated samples, averaged over 500 runs. The NFE is reduced by a factor of 2.
+
+| Method | Mean energy | Standard deviation of energy | NFE |
+| --- | --- | --- | --- |
+| Langevin sampling | 62.27 | 13.32 | 100000 |
+| Speculative sampling | 65.90 | 12.48 | 48564 |
+
+Table 9. Comparison of sampling metrics for the $\phi^4$ model.
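The $\phi^4$ energy above and its gradient can be written in a few lines of NumPy. Periodic boundary conditions are assumed here for concreteness, since the text does not state the boundary convention; `phi4_energy` and `phi4_grad` are illustrative names.

```python
import numpy as np

def phi4_energy(x, beta=100.0):
    """E(x) = beta/2 * sum_{|i-j|=1} (x_i - x_j)^2 + sum_i (x_i^2 - 1)^2,
    on a 2-D grid with periodic boundaries (an assumption)."""
    coupling = sum(np.sum((x - np.roll(x, 1, axis=a)) ** 2) for a in (0, 1))
    return 0.5 * beta * coupling + np.sum((x ** 2 - 1) ** 2)

def phi4_grad(x, beta=100.0):
    """Gradient of phi4_energy: beta times the discrete (graph) Laplacian
    plus the 4 x (x^2 - 1) term from the double-well potential."""
    lap = sum(2 * x - np.roll(x, 1, axis=a) - np.roll(x, -1, axis=a)
              for a in (0, 1))
    return beta * lap + 4 * x * (x ** 2 - 1)
```

A finite-difference check of `phi4_grad` against `phi4_energy` is a quick way to validate the implementation before plugging it into a Langevin sampler.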
+
+Algorithm 11 Speculative Sampling for Unadjusted Langevin Diffusion
+```txt
+Require: Lookahead integer $L$ , sequence length $K$ , stepsize $\gamma > 0$ , target distribution $\pi$ and proxy distribution $\hat{\pi}$ . Set $Y_0$ arbitrarily and set $n = 0$ .
+while $n < K$ do
+Set $\tilde{Y}_n \gets Y_n$ and $n_L = \min(n + L, K)$ .
+for $k = n + 1:n_L$ do
+Set $\tilde{Y}_k = \tilde{Y}_{k-1} - \gamma \nabla \hat{E}(\tilde{Y}_{k-1}) + \sqrt{2\gamma} Z_{k-1}$ for $Z_{k-1} \sim \mathcal{N}(0, \mathrm{Id})$ .
+end for
+In parallel, compute $\nabla E(\tilde{Y}_n)$ , $\nabla E(\tilde{Y}_{n+1}), \dots, \nabla E(\tilde{Y}_{n_L-1})$ .
+for $k = n + 1:n_L$ do
+Set $\Delta_{k-1} = \sqrt{\gamma/2} (\nabla E(\tilde{Y}_{k-1}) - \nabla \hat{E}(\tilde{Y}_{k-1}))$ and $e = \Delta_{k-1} / ||\Delta_{k-1}||$ .
+Sample $U \sim \mathrm{Unif}[0,1]$ .
+bool $= \mathbb{I}[U \leq \min(1, \mathcal{N}(Z_{k-1} + \Delta_{k-1}; 0, \mathrm{Id}) / \mathcal{N}(Z_{k-1}; 0, \mathrm{Id}))]$ .
+if bool then
+Set $Y_k = \tilde{Y}_k$ .
+else
+Set $Y_k = \tilde{Y}_{k-1} - \gamma \nabla E(\tilde{Y}_{k-1}) + \sqrt{2\gamma} (\mathrm{Id} - 2ee^\top) Z_{k-1}$ .
+end if
+if not (bool) then
+Exit For Loop
+end if
+end for
+Set $n \gets k$ .
+end while
+return $Y_{0:K}$
+```
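To make Algorithm 11 concrete, the following is a minimal NumPy sketch on vector-valued states. The callables `grad_E` and `grad_E_hat` are generic placeholders for the target and proxy gradients, and counting one NFE per batched parallel gradient evaluation is our illustrative convention, not necessarily the paper's exact accounting.

```python
import numpy as np

def speculative_ula(grad_E, grad_E_hat, y0, K, L, gamma, rng):
    """Sketch of speculative sampling for ULA (Algorithm 11).

    Drafts L steps with the cheap proxy gradient, evaluates the target
    gradient at all drafted states in one batch, then accepts each draft
    with probability min(1, N(Z+Delta; 0, Id) / N(Z; 0, Id))."""
    Y = [np.asarray(y0, dtype=float)]
    n, nfe = 0, 0
    while n < K:
        nL = min(n + L, K)
        # Draft phase: roll out the proxy dynamics, Eq. (39).
        y_tilde, Z = [Y[-1]], []
        for _ in range(nL - n):
            z = rng.standard_normal(y_tilde[-1].shape)
            Z.append(z)
            y_tilde.append(y_tilde[-1] - gamma * grad_E_hat(y_tilde[-1])
                           + np.sqrt(2 * gamma) * z)
        # One batched (parallel) target-gradient evaluation per window.
        grads = [grad_E(y) for y in y_tilde[:-1]]
        nfe += 1
        # Verification phase.
        advanced = 0
        for k in range(nL - n):
            delta = np.sqrt(gamma / 2) * (grads[k] - grad_E_hat(y_tilde[k]))
            z = Z[k]
            log_ratio = -0.5 * (np.sum((z + delta) ** 2) - np.sum(z ** 2))
            if np.log(rng.uniform()) <= min(0.0, log_ratio):
                Y.append(y_tilde[k + 1])              # accept drafted state
                advanced += 1
            else:
                e = delta / np.linalg.norm(delta)     # reflection direction
                z_ref = z - 2 * e * (e @ z)           # (Id - 2 e e^T) Z
                Y.append(y_tilde[k] - gamma * grads[k]
                         + np.sqrt(2 * gamma) * z_ref)
                advanced += 1
                break                                 # discard later drafts
        n += advanced
    return np.array(Y), nfe
```

When the proxy gradient is accurate, most drafts are accepted and the number of sequential target-gradient rounds drops well below the number of ULA steps, which is the source of the wall-clock savings.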
\ No newline at end of file
diff --git a/accelerateddiffusionmodelsviaspeculativesampling/images.zip b/accelerateddiffusionmodelsviaspeculativesampling/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..09de8d895285257fb50c2323e7d0602961162aa1
--- /dev/null
+++ b/accelerateddiffusionmodelsviaspeculativesampling/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e89507a4ed0198e604ce7b0f04ec11bccc008e3069e3dd6492cc8a6c557a473d
+size 2641573
diff --git a/accelerateddiffusionmodelsviaspeculativesampling/layout.json b/accelerateddiffusionmodelsviaspeculativesampling/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..ee82356c3c5b61f462644ae88ab658add7e438c6
--- /dev/null
+++ b/accelerateddiffusionmodelsviaspeculativesampling/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:225693367c3da58a725a3549fed602a4dce2097f41d4b753247e2f3d925bd428
+size 1928463
diff --git a/acceleratinglargelanguagemodelreasoningviaspeculativesearch/421c4b62-cf3e-458b-b763-7a580b21e488_content_list.json b/acceleratinglargelanguagemodelreasoningviaspeculativesearch/421c4b62-cf3e-458b-b763-7a580b21e488_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..0c2e3e91c498c6a94510286127efedea606972ec
--- /dev/null
+++ b/acceleratinglargelanguagemodelreasoningviaspeculativesearch/421c4b62-cf3e-458b-b763-7a580b21e488_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a8c0b08d28329a708f93bd1efc0b25be10605e17ce59acf5e39b577a15530e0d
+size 211567
diff --git a/acceleratinglargelanguagemodelreasoningviaspeculativesearch/421c4b62-cf3e-458b-b763-7a580b21e488_model.json b/acceleratinglargelanguagemodelreasoningviaspeculativesearch/421c4b62-cf3e-458b-b763-7a580b21e488_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..02a92ffa35a13363aa91452a59117770105ca554
--- /dev/null
+++ b/acceleratinglargelanguagemodelreasoningviaspeculativesearch/421c4b62-cf3e-458b-b763-7a580b21e488_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:542d10a4bc2f6721874d682619f96ea306e51ec98746095d27a3c55d07fb8db8
+size 251730
diff --git a/acceleratinglargelanguagemodelreasoningviaspeculativesearch/421c4b62-cf3e-458b-b763-7a580b21e488_origin.pdf b/acceleratinglargelanguagemodelreasoningviaspeculativesearch/421c4b62-cf3e-458b-b763-7a580b21e488_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..a22fc99069fea09a86da0d786196a8347df7e0c1
--- /dev/null
+++ b/acceleratinglargelanguagemodelreasoningviaspeculativesearch/421c4b62-cf3e-458b-b763-7a580b21e488_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:90661bb325709885baef428502077673085647748fe79d35ffa23e6207286b9e
+size 795250
diff --git a/acceleratinglargelanguagemodelreasoningviaspeculativesearch/full.md b/acceleratinglargelanguagemodelreasoningviaspeculativesearch/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..b3baf45567e6ce918da1664e2295ec39cb4d12f8
--- /dev/null
+++ b/acceleratinglargelanguagemodelreasoningviaspeculativesearch/full.md
@@ -0,0 +1,981 @@
+# Accelerating Large Language Model Reasoning via Speculative Search
+
+Zhihai Wang $^{1}$ Jie Wang $^{1\boxtimes}$ Jilai Pan $^{1}$ Xilin Xia $^{1}$ Huiling Zhen $^{2}$ Mingxuan Yuan $^{2}$ Jianye Hao $^{23}$ Feng Wu
+
+# Abstract
+
+Tree-search-based reasoning methods have significantly enhanced the reasoning capability of large language models (LLMs) by facilitating the exploration of multiple intermediate reasoning steps, i.e., thoughts. However, these methods suffer from substantial inference latency, as they have to generate numerous reasoning thoughts, severely limiting LLM applicability. To address this challenge, we propose a novel Speculative Search (SpecSearch) framework that significantly accelerates LLM reasoning by optimizing thought generation. Specifically, SpecSearch utilizes a small model to strategically collaborate with a large model at both thought and token levels, efficiently generating high-quality reasoning thoughts. The major pillar of SpecSearch is a novel quality-preserving rejection mechanism, which effectively filters out thoughts whose quality falls below that of the large model's outputs. Moreover, we show that SpecSearch preserves comparable reasoning quality to the large model. Experiments on both the Qwen and Llama models demonstrate that SpecSearch significantly outperforms state-of-the-art approaches, achieving up to $2.12 \times$ speedup with comparable reasoning quality. Code is available at https://github.com/MIRALab-USTC/LLMReasoning-SpecSearch.
+
+# 1. Introduction
+
+The reasoning capabilities of large language models (LLMs) have significantly advanced with the adoption of slow-thinking processes based on tree-search-based (TSB) reasoning methods (Yao et al., 2023; Wan et al., 2024a; Jiang et al., 2024; Wu et al., 2024). These TSB methods enhance reasoning by following the Chain-of-Thought (CoT) approach (Wei et al., 2022), which decomposes problem-solving into a sequence of intermediate reasoning steps, termed thoughts. Building upon this, TSB frameworks such as Tree-of-Thoughts (ToT) (Yao et al., 2023) integrate thought generation and evaluation with search algorithms—such as beam search (Kang et al., 2024) and Monte Carlo Tree Search (MCTS) (Chen et al., 2024; Zhang et al., 2024b)—to systematically explore diverse reasoning paths. By incorporating these techniques, TSB methods provide LLMs with a deliberate and structured reasoning framework, significantly improving their capability to tackle complex and multi-step reasoning tasks.
+
+This work was done when Zhihai Wang was an intern at Huawei. $^{1}$MoE Key Laboratory of Brain-inspired Intelligent Perception and Cognition, University of Science and Technology of China; $^{2}$Noah's Ark Lab, Huawei Technologies; $^{3}$College of Intelligence and Computing, Tianjin University. Correspondence to: Jie Wang .
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+Figure 1. (a) Latency-Accuracy: the inference latency increases by several orders of magnitude with the introduction of tree-search-based reasoning methods. (b) Bottleneck of Reasoning: thought generation acts as an efficiency bottleneck of tree-search-based reasoning methods.
+
+However, existing TSB reasoning methods often suffer from substantial inference latency (Gao et al., 2024; Wang et al., 2024e), with inference latency increasing by several orders of magnitude (see Figure 1a). The primary bottleneck stems from the need to explore a vast number of reasoning thoughts (see Figure 1b). This substantial increase in inference latency poses significant challenges for practical deployment, particularly in real-time applications requiring low-latency performance (Zhou et al., 2024; Xia et al., 2024). However, effective and efficient strategies to accelerate slow-thinking reasoning in LLMs without compromising reasoning quality remain largely underexplored.
+
+In this paper, we propose Speculative Search (SpecSearch), a novel and efficient framework that significantly accelerates LLM reasoning while maintaining comparable quality. At its core, SpecSearch features a bi-level speculative
+
+thought generator, where a small model strategically collaborates with a large model at both coarse-grained thought and fine-grained token levels. This innovative design optimizes thought generation efficiency, enabling faster yet effective reasoning. To ensure reasoning quality, SpecSearch proposes to filter out thoughts that fall below the quality of the large model's outputs. SpecSearch achieves this by accurately and efficiently estimating the quality through a non-parametric statistical estimation method, leveraging historical reasoning thoughts from the large model. Moreover, we establish a theoretical guarantee that SpecSearch preserves comparable reasoning quality to the large model.
+
+To demonstrate the effectiveness of SpecSearch, we evaluate it on two complex reasoning datasets: MATH and GSM8K. Experiments using both the Qwen and Llama models demonstrate that our method significantly outperforms state-of-the-art (SOTA) approaches, achieving up to $2.12 \times$ speedup while maintaining comparable reasoning quality. Moreover, experiments demonstrate that SpecSearch seamlessly integrates with several tree search algorithms and thought evaluators, delivering substantial acceleration without compromising reasoning quality. These results underscore SpecSearch's ability to significantly enhance the efficiency of existing TSB reasoning methods.
+
+We summarize our major contributions as follows. (1) A Novel SpecSearch Framework: Observing that thought generation is a major efficiency bottleneck, we propose SpecSearch, which utilizes a small model collaborating with a large model at both coarse-grained thought and fine-grained token levels. This design significantly improves thought generation efficiency, thereby accelerating LLM reasoning. (2) Quality-Preserving Rejection Mechanism: To ensure high reasoning quality, we propose to filter out thoughts whose quality falls below that of the large model's outputs, and efficiently estimate the large model's quality via its historical reasoning thoughts. (3) Theoretical Guarantee: We provide a theoretical analysis showing that SpecSearch preserves reasoning quality comparable to that of the large model. (4) Significant Speedup and Versatility: Experiments demonstrate that SpecSearch significantly outperforms SOTA approaches, achieving up to $2.12 \times$ speedup while preserving comparable reasoning quality. Moreover, experiments demonstrate the strong compatibility of SpecSearch with different LLMs, search algorithms, and thought evaluators, highlighting its broad applicability.
+
+# 2. Related Work
+
+Speculative Decoding As the number of parameters in LLMs increases, inference latency has become a major obstacle to their broader applications (Zhou et al., 2024; Wan et al., 2024b; Xia et al., 2024; Zhang et al., 2024a). This latency is primarily caused by the autoregressive decoding process, where each token is generated sequentially, dependent on the preceding token's completion (Lu et al., 2024; Xia et al., 2024). To accelerate LLM decoding, an innovative paradigm (Leviathan et al., 2023; Chen et al., 2023a,b; Yang et al., 2024; Li et al., 2024; Kou et al., 2024; Zhong & Bharadwaj, 2024) introduces the idea of speculative execution (Burton, 1985; Hennessy & Patterson, 2012) from computer architecture to LLM decoding in a draft-then-verify style. Specifically, speculative decoding methods speculatively draft a sequence of tokens via a small model, and then verify the sequence via a large model in parallel, thus speeding up the LLM decoding process (see Figure 6a in Appendix B). However, speculative decoding—a token-level inference acceleration method—can be poorly aligned with optimizing search-based reasoning approaches that involve intricate, non-sequential reasoning thought generation, leading to suboptimal acceleration performance. Inspired by speculative execution, we propose a novel SpecSearch framework to leverage the inherent structure of TSB reasoning frameworks by formulating both thought and token generation as speculative tasks. To the best of our knowledge, we are the first to generalize speculative execution to TSB reasoning, providing a novel speculative execution formulation for accelerating LLM reasoning. Moreover, we provide a detailed discussion on the novelty of SpecSearch over standard speculative decoding and TreeBon (Qiu et al., 2024) in Appendix G.
+
+The Novelty of SpecSearch over Speculative Decoding Methods We discuss the novelty of SpecSearch compared to existing SD techniques, emphasizing key distinctions in terms of speculative formulation, verification and rejection strategies, and theoretical guarantees. (1) Bi-Level Speculative Formulation: Unlike existing SD methods focused solely on tokens, SpecSearch treats both high-level thoughts and low-level tokens as bi-level speculative tasks. This enables 1) Structural Alignment with reasoning frameworks, where thoughts are fundamental units, and 2) Compatibility with standard SD methods through low-level token-level speculation. (2) Contextual Verification for Higher Acceptance and Speedup: Unlike existing SD methods that enforce strict token-level alignment, leading to frequent rejections, SpecSearch verifies the contextual quality of reasoning thoughts. This allows acceptance of correct but non-aligned outputs, substantially boosting acceptance rates and achieving significant speedups. (3) Quality-Preserving Rejection Mechanism: Unlike token-level rejection in standard SD methods, SpecSearch proposes quality-preserving thought-level rejection based on contextual quality. It discards entire thoughts only when their quality is lower than the large model's, ensuring high-quality reasoning throughout decoding. (4) Theoretical Guarantee of Reasoning Quality: While standard SD methods preserve token-level distributions, SpecSearch guarantees that reasoning quality remains comparable with outputs from the large model.
+
+Tree-Search-Based Reasoning Acceleration In recent years, tree-search-based reasoning methods (Yao et al., 2023; Hao et al., 2023; Hui et al., 2024; Wan et al., 2024a; Jiang et al., 2024; Wu et al., 2024; Xie et al., 2023) have significantly enhanced the reasoning capabilities of LLMs. To accelerate tree-search-based reasoning, Gao et al. (2024) directly integrate standard speculative decoding techniques with reasoning methods. Subsequently, SEED (Wang et al., 2024e) proposes a Scheduled Speculative Decoding method, which manages the scheduling of parallel small models based on only one shared large model. These methods improve the efficiency of tree-search-based reasoning methods, demonstrating the potential of designing speculative execution strategies in the LLM reasoning framework. However, these methods primarily design speculative execution strategies at the token level, neglecting the inherent structure of LLM frameworks, where reasoning thoughts are fundamental units. This oversight results in suboptimal acceleration performance. In contrast, our SpecSearch proposes a novel bi-level speculative thought generator, which utilizes a small model collaborating with a large model at both coarse-grained thought and fine-grained token levels, leading to significant acceleration with comparable quality.
+
+The Novelty of SpecSearch over TreeBon (Qiu et al., 2024) We discuss the novelty of SpecSearch compared to Treebon (Qiu et al., 2024), emphasizing key distinctions in terms of motivation, speculative formulation, rejection strategies, and theoretical guarantees. (1) Distinct Motivation: Unlike Treebon, which targets accelerating best-of-n sampling through speculative rejection combined with tree search, SpecSearch is the first to well generalize speculative execution to tree-based LLM reasoning. (2) Bi-Level Speculative Formulation: Treebon treats fixed-length token sequences as speculative tasks, while SpecSearch introduces a flexible bi-level approach—modeling full reasoning thoughts as high-level tasks and tokens as low-level ones. Unlike Treebon's fixed-length design, SpecSearch leverages LLMs' reasoning capabilities to generate semantically coherent thoughts of dynamic length. (3) Quality-Preserving Rejection Mechanism: Treebon rejects a fixed proportion of token sequences using a preset threshold. SpecSearch, instead, scores reasoning thoughts and adaptively rejects those with lower contextual quality based on the large model's reasoning quality, enabling finer control and better quality preservation. (4) Theoretical Guarantee: Unlike Treebon, which lacks theoretical guarantees, SpecSearch provides formal assurance that reasoning quality remains uncompromised, matching that of the large model's outputs.
+
+# 3. Background
+
+Speculative Sampling in LLM Decoding We introduce Speculative Sampling (SpS) (Leviathan et al., 2023; Chen et al., 2023a), a decoding technique that accelerates LLM inference while preserving the target model's distribution. Given a prefix $c$ , a draft model $M_{q}$ , a target model $M_{p}$ , and step size $\gamma$ , SpS operates in two phases. (1) Drafting $M_{q}$ autoregressively generates $\gamma$ tokens $x_{1}, x_{2}, \ldots, x_{\gamma}$ . (2) Verification $M_{p}$ verifies these tokens in parallel, accepting $x_{i}$ with probability $\min \left(1, \frac{M_{p}(x_{i}|x_{i-1}, \ldots, x_{1}, c)}{M_{q}(x_{i}|x_{i-1}, \ldots, x_{1}, c)}\right)$ . If $x_{i}$ is rejected, a resampling method generates $\tilde{x}_{i}$ . This process theoretically guarantees alignment with the target model's distribution while significantly enhancing inference speed.
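The drafting and verification phases described above can be sketched on a toy vocabulary. Here `p` and `q` are fixed next-token distributions standing in for $M_p$ and $M_q$ (real models condition on the growing prefix $c$), and the residual resampling rule on rejection is the standard one from Leviathan et al. (2023).

```python
import numpy as np

def speculative_step(p, q, gamma, rng):
    """One draft-then-verify round of speculative sampling on a toy
    vocabulary: draft up to gamma tokens from q, accept each with
    probability min(1, p[x] / q[x]), resample from the residual on the
    first rejection."""
    out = []
    for _ in range(gamma):
        x = int(rng.choice(len(q), p=q))          # draft token from M_q
        if rng.uniform() <= min(1.0, p[x] / q[x]):
            out.append(x)                         # accepted by M_p
        else:
            resid = np.maximum(p - q, 0.0)        # residual distribution
            resid /= resid.sum()
            out.append(int(rng.choice(len(p), p=resid)))
            break                                 # stop at first rejection
    return out
```

Marginally, each emitted token is distributed exactly according to `p`, which is the losslessness guarantee the paragraph above refers to.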
+
+Tree-Search-Based Reasoning Methods Tree-search-based reasoning methods formulate tree nodes as intermediate reasoning steps (thoughts) and tree paths as potential solutions to multi-step reasoning problems. They comprise a Thought Generator $(G)$ , a Thought Evaluator $(V)$ , and a search algorithm (see Figure 6b in Appendix B). From the root node $(c, \text{input question})$ , $G$ expands the tree by generating $N$ child nodes (thoughts). $V$ evaluates their quality, guiding the search algorithm. This iterative process constructs a reasoning tree, culminating in a final reasoning path $P$ , formed by $z_{n}, \ldots, z_{1}, c$ . Common search algorithms include Beam Search and MCTS. See Appendix B for details.
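The generator/evaluator/search decomposition above can be sketched with beam search as the search algorithm. `generate` and `evaluate` are placeholder callables standing in for $G$ and $V$; this is a toy sketch, not the paper's implementation.

```python
from typing import Callable, List, Tuple

def beam_search_reasoning(generate: Callable[[str, int], List[str]],
                          evaluate: Callable[[str], float],
                          question: str, beam: int, width: int,
                          depth: int) -> str:
    """Toy beam search over thoughts: expand each kept partial path with
    `width` child thoughts from G, score full paths with V, and keep the
    best `beam` paths per level."""
    paths: List[Tuple[float, str]] = [(0.0, question)]
    for _ in range(depth):
        candidates = []
        for _, prefix in paths:
            for thought in generate(prefix, width):
                path = prefix + "\n" + thought
                candidates.append((evaluate(path), path))
        paths = sorted(candidates, key=lambda s: -s[0])[:beam]
    return paths[0][1]
```

Swapping the level-wise pruning for tree-node statistics would turn the same skeleton into an MCTS-style search.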
+
+# 4. Speculative Search for LLM Reasoning
+
+We begin with an overview of the SpecSearch framework in Section 4.1. Next, we detail the formal procedure underlying SpecSearch, describing the bi-level speculative thought generator in Section 4.2 and the quality-preserving rejection mechanism in Section 4.3. Lastly, we present the theoretical guarantee of SpecSearch in Section 4.4.
+
+# 4.1. Overview of Speculative Search Framework
+
+In this part, we first present several key motivations for our proposed SpecSearch. Then, we describe the overview of SpecSearch as shown in Figure 3.
+
+Motivation 1: Thought generation dominates computation time in tree-search-based reasoning, consuming over $91\%$ of total runtime in reasoning (see Figure 1b).
+
+Motivation 2: Small models can generate high-quality reasoning thoughts. In multi-step reasoning, some steps are inherently simpler than others. For example, solving $99^{2} + 99 + 1$ requires computing $99^{2}$ ("harder") and $9900 + 1$ ("easier"). Moreover, an analysis of the quantized Qwen2.5-7B-Instruct model shows that over $40\%$ of its reasoning thoughts outperformed the average reward score of those from the larger Qwen2.5-72B-Instruct model (see Figure 2a). These findings suggest that assigning simple steps to a small model and complex ones to a large model can speed up reasoning without sacrificing accuracy.
+
+Figure 2. (a) Scores of a Small Model: small models can generate thoughts with high reward scores. (b) Large Model Engagement: simple large model engagement strategies at the thought level struggle to preserve comparable reasoning quality.
+
+Motivation 3: Simple large model engagement strategies at the thought level struggle to maintain reasoning quality. Motivated by Motivation 2, we design a simple large-model engagement strategy where a thought evaluator scores the small model's outputs, and the bottom $\mathrm{X\%}$ (X is a hyperparameter) is reprocessed by the large model for refinement. However, as shown in Figure 2b, maintaining reasoning quality remains challenging when collaboration occurs at the thought level.
+
+Overview of SpecSearch Building on the aforementioned key motivations, SpecSearch proposes a bi-level speculative thought generation framework, leveraging both a small model and a large model to efficiently produce high-quality reasoning thoughts. Guided by a thought evaluator, this method operates at both the thought and token levels, integrating seamlessly into any search algorithm as an efficient node expansion module.
+
+The bi-level speculative thought generation follows a draft-evaluate-reject-correct paradigm, where the first three stages operate at a coarse-grained thought level, while the final stage refines outputs at a fine-grained token level. Initially, a small model drafts multiple reasoning thoughts, which are then evaluated by a thought evaluator to reject low-quality candidates. Finally, a lossless speculative decoding method corrects the rejected thoughts, ensuring robust and accurate reasoning.
+
+# 4.2. Bi-Level Speculative Thought Generator
+
+This section first discusses the advantages of using a small model in collaboration with a large model at both the coarse-grained thought and fine-grained token levels. We then describe the bi-level speculative thought generator. An illustration of the generator is provided in Figure 3. The procedure is summarized in Algorithm 1.
+
+Advantages Compared to standard token-level speculative decoding (Xia et al., 2024; Zhang et al., 2024a; Leviathan et al., 2023; Chen et al., 2023a; Li et al., 2024), thought-level speculative execution offers several key advantages. First, it leverages the inherent structure of the tree-search-based reasoning framework, where each thought serves as a fundamental unit (i.e., a node) within the reasoning tree. This structure allows for effective utilization of components such as the thought evaluator, enabling seamless integration into the reasoning process. Second, since a single thought typically comprises more than fifty tokens (see Table 8 in Appendix I.2), thought-level speculation facilitates coarse-grained collaboration, increasing the number of tokens generated by the small model throughout the search process. This, in turn, can significantly enhance the efficiency of thought generation. Third, it harnesses the reasoning capabilities of small models to generate high-quality thoughts (see Figure 2a). As a result, it carries the potential to maintain or even improve reasoning quality compared to the original large model (see Table 1 in Section 5).
+
+Algorithm Design We propose the following bi-level speculative thought generator, which follows a draft-evaluate-reject-correct paradigm. First, it drafts multiple reasoning thoughts using a small model, then evaluates the quality of these thoughts and rejects those of low quality. Finally, the rejected thoughts are corrected using a lossless token-level speculative decoding method.
+
+(1) Drafting Phase at the Thought Level To leverage the reasoning capability of small models, as shown in Figure 2a, we propose using a small model to efficiently generate multiple reasoning thoughts. These drafted thoughts serve as potential candidates for further evaluation and correction.
+(2) Evaluating Phase at the Thought Level To evaluate the quality of the generated thoughts, previous speculative decoding methods (Xia et al., 2024; Zhang et al., 2024a) typically use a large model to verify the token sequences within each thought. Verifying thoughts with a large model poses several challenges. First, it struggles to capture the intrinsic structure and semantics of reasoning thoughts, leading to potential evaluation inaccuracies. Second, while token-level distributions are well understood, preserving thought distributions is far more complex. The exponential growth of possible thoughts makes accurate modeling difficult. Third, in tree-search-based reasoning, multiple valid paths can lead to the same answer, creating ambiguity in defining lossless thought generation. A speculative model may generate different reasoning paths than a large model while still being correct. Overall, these challenges significantly limit the accuracy and efficiency of large-model-based verification.
+
+To address these challenges, we propose utilizing the inherent thought evaluator within the existing LLM reasoning framework for accurate thought evaluation. For example, a process reward model (PRM) can be employed to assign a reward score to each thought, offering an accurate evaluation of its quality. This approach addresses the aforementioned challenges and offers several advantages. A detailed discussion is provided in Appendix H.1.
+
+
+Figure 3. Illustration of our proposed SpecSearch. SpecSearch proposes a bi-level speculative thought generator with a quality-preserving rejection mechanism, which significantly accelerates LLM reasoning while preserving comparable quality.
+
+(3) Rejection Phase at the Thought Level The primary objective of this phase is to effectively reject generated thoughts that are of lower quality than the large model's outputs—a task made particularly challenging by the lack of access to the large model's outputs. To address this challenge, we propose a novel quality-preserving rejection mechanism, as detailed in Section 4.3.
+(4) Correction Phase at the Token Level To correct rejected low-quality thoughts, we propose utilizing a lossless token-level speculative decoding method to refine them at a fine-grained token level. By applying lossless speculative decoding, we ensure that the corrected thoughts maintain the same distribution as the large model's outputs. For token-level correction, we propose regenerating the entire thought using a token-level speculative model to replace the rejected one for simplicity. Unless otherwise specified, we use the terms "large model" and "token-level speculative model" interchangeably in the following.
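The four phases above can be sketched as a single node-expansion routine. All callables here are placeholders for the paper's components (small-model drafting, PRM scoring, lossless token-level speculative decoding), and the threshold `beta` anticipates the rejection mechanism of Section 4.3.

```python
from typing import Callable, List

def bi_level_generate(draft: Callable[[str, int], List[str]],
                      score: Callable[[str], float],
                      correct: Callable[[str], str],
                      prefix: str, n: int, beta: float) -> List[str]:
    """Draft-evaluate-reject-correct sketch of the bi-level generator.

    draft: small model proposing n candidate thoughts for a prefix;
    score: thought evaluator (e.g. a PRM) returning a value in [0, 1];
    correct: lossless token-level speculative decoding that regenerates
    a thought under the large model's distribution."""
    thoughts = draft(prefix, n)                  # (1) thought-level draft
    out = []
    for t in thoughts:
        if score(t) >= beta:                     # (2)+(3) evaluate / reject
            out.append(t)
        else:
            out.append(correct(prefix))          # (4) token-level correction
    return out
```

Because the routine only replaces a whole node, it drops into any search algorithm's expansion step without changing the surrounding tree logic.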
+
+# 4.3. Quality-Preserving Rejection Mechanism
+
+Unlike standard token-level speculative decoding, our approach has the potential to significantly reduce inference latency through thought-level speculation, as discussed in Section 4.2. However, since a reasoning thought consists of more than fifty token-generation steps, a small model is more prone to generating misleading thoughts, as errors can accumulate across multiple token-generation steps. Therefore, a robust thought rejection mechanism is essential to ensure reasoning quality. To address this quality-preserving challenge, we first present several mathematical definitions.
+
+Let $\mathbb{Z}$ be the set of all possible reasoning thoughts, and let $V:\mathbb{Z}\to [0,1]$ be a process reward model (PRM) that assigns a quality score to each thought. Given a sequence of generated thoughts $z_{k - 1},z_{k - 2},\ldots ,z_{1}$ and an initial condition $c$ (e.g., input question and prompt), a thought generator $G$ samples thoughts from the distribution $G(\cdot |z_{< k})$ over $\mathbb{Z}$ .
+
+Definition 4.1. (Quality of Thoughts and Thought Generators) The quality of a thought $z$ is given by $V(z)$ . The reasoning quality of a thought generator $G$ is defined as $\mathbb{E}_{z \sim G(\cdot | z_{<k})}[V(z)]$ . We assume that the quality of the large model's thoughts at the $k$ -th reasoning step follows a distribution with mean $\mu_p^{(k)}$ and standard deviation $\sigma_p^{(k)} > 0$ . This assumption is commonly used in data science (Gopinath, 1998; Zhang, 2010).
+
+Denote our speculative thought generator with threshold $\beta^{(k)}, k = 1,2,\ldots,K$ as $G_{s}\left(\left\{\beta^{(k)}\right\}_{k=1}^{K}\right)$ , where $K$ is the maximum number of reasoning steps. Note that we use $\beta^{(k)}$ to denote a general threshold, while $\hat{\beta}^{(k)}$ represents the estimate of the threshold obtained using our estimation method. The following theorem guarantees that, under ideal conditions, as long as the threshold meets or exceeds the quality of the large model, our generator ensures the undegraded quality condition defined in Definition 4.2.
+
+Theorem 4.3. (Quality-Preserving Condition on the Threshold) The generator $G_{s}\left(\{\beta^{(k)}\}_{k = 1}^{K}\right)$ preserves undegraded quality if the following condition holds: $\beta^{(k)} \geq \mu_p^{(k)}, \forall k = 1,2,\ldots ,K$ .
+
+This theorem provides a quality-preserving condition on the threshold in our designed speculative thought generator. Specifically, if the threshold estimation method proposed in Section 4.3 satisfies this condition, then our SpecSearch guarantees undegraded quality.
+
+We then make the following quality-descending assumption based on our observations in Figure 7 in Appendix I.2.
+
+Assumption 4.4. (Descending Quality and Bounded Variance) At the $k$ -th step in the beam search algorithm, which selects the candidate with optimal quality, we assume that $\mu_p^{(k)} \leq \gamma \mu_p^{(k-1)}$ , $\forall k = 2, \ldots, K$ , where $\gamma < 1$ is the decay factor. We further assume that $\sigma_p^{(k)} \leq \sigma_c$ , $\forall k = 1, 2, \ldots, K$ , where $\sigma_c > 0$ is a constant.
+
+Building upon this assumption, the following theorem establishes a lower bound on the probability that our speculative thought generator preserves quality at each reasoning step.
+
+Theorem 4.5. (Probability Bound for Step-Wise Quality-Preserving) Consider a speculative thought generator $G_{s}\left(\left\{\beta^{(k)}\right\}_{k = 1}^{K}\right)$ with weight $\theta \geq \gamma$ . Given that the generator $G_{s}$ preserves undegraded quality at step $k \geq 1$ , the probability that it also preserves undegraded quality at step $k + 1$ is lower bounded by:
+
+$$
+\begin{array}{l} P \left(\hat {\beta} ^ {(k + 1)} \geq \mu_ {p} ^ {(k + 1)} \mid \hat {\beta} ^ {(k)} \geq \mu_ {p} ^ {(k)}\right) \geq \\ \frac {\left[ \frac {1 - \gamma}{\gamma} \mu_ {p} ^ {(k + 1)} \right] ^ {2}}{\left[ \frac {1 - \gamma}{\gamma} \mu_ {p} ^ {(k + 1)} \right] ^ {2} + \left(\frac {1}{N + 1} + \frac {2}{N + 2}\right) (\sigma_ {c}) ^ {2}}. \tag {1} \\ \end{array}
+$$
+
+Furthermore, for a beam search algorithm with up to $K$ reasoning steps, we derive a lower bound on the probability that our speculative thought generator maintains undegraded quality, as stated in the following theorem.
+
+Theorem 4.6. (Probability Bound for Quality-Preserving) For a speculative thought generator $G_{s}\left(\left\{\beta^{(k)}\right\}_{k = 1}^{K}\right)$ with a maximum of $K$ reasoning steps, where $K\in \mathbb{N}$ , and weight $\theta \geq \gamma$ , the lower bound on the probability of $G_{s}$ preserving undegraded quality is given by:
+
+$$
+\begin{array}{l} P \left(\hat {\beta} ^ {(k)} \geq \mu_ {p} ^ {(k)}, 1 \leq k \leq K\right) \geq \\ \left(1 - \frac {1}{2 ^ {N + 1}}\right) \prod_ {k = 1} ^ {K - 1} \left[ \frac {\left[ \frac {1 - \gamma}{\gamma} \mu_ {p} ^ {(k + 1)} \right] ^ {2}}{\left[ \frac {1 - \gamma}{\gamma} \mu_ {p} ^ {(k + 1)} \right] ^ {2} + \left(\frac {1}{N + 1} + \frac {2}{N + 2}\right) (\sigma_ {c}) ^ {2}} \right]. \tag {2} \\ \end{array}
+$$
+
+This probability bound increases monotonically with respect to the sample size $N$ . Furthermore, as $N \to \infty$ , the lower bound approaches 1, implying that our speculative generator can achieve higher reasoning quality by generating more samples during the drafting phase. A detailed discussion of this result is provided in Appendix A.5.
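+
+Both bounds can be evaluated numerically. The following sketch (helper names and the parameter values $\gamma = 0.8$ , $\sigma_c = 0.3$ are our own illustrative choices, not from the paper) checks that the bound in Equation (2) increases monotonically with $N$ and approaches 1:
+
+```python
+def step_bound(mu_next, gamma, sigma_c, n):
+    """Right-hand side of Eq. (1): step-wise quality-preserving bound."""
+    a = ((1 - gamma) / gamma * mu_next) ** 2
+    return a / (a + (1 / (n + 1) + 2 / (n + 2)) * sigma_c ** 2)
+
+def overall_bound(mus, gamma, sigma_c, n):
+    """Right-hand side of Eq. (2): the first-step term (1 - 2^-(N+1))
+    times the product of step-wise bounds over the remaining K - 1 steps."""
+    prod = 1.0
+    for mu_next in mus[1:]:
+        prod *= step_bound(mu_next, gamma, sigma_c, n)
+    return (1 - 2 ** -(n + 1)) * prod
+```
+
+For example, with `mus = [0.9 * 0.8**k for k in range(5)]` the bound grows monotonically as `n` ranges over 1, 4, 16, 64, and exceeds 0.999 for very large `n`.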
+
+# 5. Experiments
+
+Our experiments have four main parts. Experiment 1. We evaluate the performance of SpecSearch and the baselines on different datasets and LLMs. Experiment 2. We evaluate the generalization performance of SpecSearch across different search algorithms and thought evaluators. Experiment 3. We conduct carefully designed ablation studies to demonstrate the effectiveness of SpecSearch. Experiment 4. We perform a visualization analysis of SpecSearch to provide further insight into its behavior.
+
+Experimental Setup We use quantized Qwen2.5-72B-Instruct and Qwen2.5-7B-Instruct (Team, 2024) as the large and small models, respectively, along with quantized Llama3-70B-Instruct and Llama3-8B-Instruct (Dubey et al., 2024). Unless stated otherwise, experiments follow the OpenR (Wang et al., 2024a) settings: tree width of 6, tree depth of 50, MATH-psa as the process reward model (PRM), Qwen models as the main LLMs, and beam search as the main search algorithm. Throughout all experiments, we set the EMA weight $\theta$ in SpecSearch to 0.9.
+
+Baselines This study aims to accelerate thought generation in reasoning trees without modifying search algorithms or prompting techniques. Thus, we compare our method with two baselines: (1) AR, the original ToT method using autoregressive decoding with a large model, and (2) SpS, a state-of-the-art (SOTA) lossless speculative decoding approach. Details are in Appendix D.
+
+Datasets We use two well-established mathematical problem datasets, GSM8K (Cobbe et al., 2021) and MATH (Hendrycks et al., 2021), to evaluate the acceleration performance of the proposed framework. GSM8K contains high-quality elementary mathematics word problems, while MATH comprises advanced high school competition-level math problems. Due to the long testing times of tree-search-based reasoning methods, we randomly select 100 samples from both the GSM8K and MATH datasets for evaluation.
+
+Evaluation Metrics. We use two widely-used metrics, accuracy and speedup, to compare our method's performance with that of the baselines. We define the accuracy by the percentage of correct predictions. We define the speedup by the ratio of the baseline's latency to our approach's latency.
+
+Table 1. The results demonstrate that SpecSearch significantly accelerates LLM reasoning with comparable reasoning accuracy.
+
+| Models | Dataset | Methods | Reasoning Accuracy (%) ↑ | Average Inference Latency (s) ↓ | Speedup (vs AR) ↑ | Speedup (vs SpS) ↑ |
+| --- | --- | --- | --- | --- | --- | --- |
+| Qwen | MATH-100 | AR | 87.00 | 275.78 | NA | 0.51 |
+| Qwen | MATH-100 | SpS | 88.00 | 141.55 | 1.95 | NA |
+| Qwen | MATH-100 | SpecSearch (Ours) | 87.00 | 82.35 | 3.35 | 1.72 |
+| Qwen | GSM8K-100 | AR | 97.00 | 138.24 | NA | 0.50 |
+| Qwen | GSM8K-100 | SpS | 97.00 | 69.43 | 1.99 | NA |
+| Qwen | GSM8K-100 | SpecSearch (Ours) | 96.00 | 48.18 | 2.87 | 1.44 |
+| Llama | MATH-100 | AR | 62.00 | 170.84 | NA | 0.79 |
+| Llama | MATH-100 | SpS | 61.00 | 134.34 | 1.27 | NA |
+| Llama | MATH-100 | SpecSearch (Ours) | 64.00 | 129.65 | 1.32 | 1.04 |
+| Llama | GSM8K-100 | AR | 87.00 | 90.04 | NA | 0.71 |
+| Llama | GSM8K-100 | SpS | 86.00 | 64.29 | 1.40 | NA |
+| Llama | GSM8K-100 | SpecSearch (Ours) | 88.00 | 45.34 | 1.99 | 1.42 |
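+
+As a concrete instance, the Qwen latencies on MATH-100 from Table 1 reproduce the reported speedup entries:
+
+```python
+def speedup(baseline_latency_s, our_latency_s):
+    """Speedup: ratio of the baseline's latency to our approach's latency."""
+    return baseline_latency_s / our_latency_s
+
+# Qwen on MATH-100 (Table 1): AR 275.78 s, SpS 141.55 s, SpecSearch 82.35 s.
+print(round(speedup(275.78, 82.35), 2))  # speedup vs AR -> 3.35
+print(round(speedup(141.55, 82.35), 2))  # speedup vs SpS -> 1.72
+```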
+
+Experiment 1. Main Evaluation We evaluate SpecSearch against two competitive baselines on two math datasets using the Qwen and Llama models. Table 1 highlights four key findings. Moreover, we provide additional evaluation on four more distinct dataset categories, including the full GSM8K, AIME, Olympiad Bench, and a code-generation benchmark, in Appendix I.1.
+
+(1) High Speedup SpecSearch consistently outperforms all baselines, achieving up to $1.72 \times$ speedup over SpS and $3.35 \times$ over AR on MATH-100 with Qwen. (2) Broad Compatibility Our method accelerates both Llama and Qwen models, demonstrating strong adaptability across LLMs. (3) Superior Reasoning Ability On Llama, SpecSearch surpasses the baselines in reasoning accuracy on MATH-100 and GSM8K-100, highlighting its ability to coordinate the small and large models effectively to maintain reasoning quality. (4) Accuracy Degradation Analysis We conduct a case study to explore the reasons behind the accuracy degradation of SpecSearch on GSM8K-100. The results in Appendix I.3 show that the degradation primarily arises from certain misleading thoughts that deceive the PRM.
+
+Experiment 2. Generalization We evaluate SpecSearch's generalization across different search algorithms and thought evaluators on GSM8K-100. Due to limited space, we defer results on MATH-100 to Appendix I.4.
+
+Search Algorithms To demonstrate the broad applicability of SpecSearch, we apply it to two distinct search algorithms: beam search and MCTS. We compare SpecSearch against AR and SpS on both algorithms. As shown in Table 2, SpecSearch significantly outperforms the baselines, reducing inference latency while maintaining comparable accuracy. These results highlight both its efficiency and its adaptability across different search algorithms.
+
+Different Thought Evaluators To evaluate the generalization of SpecSearch across thought evaluators, we test it with two PRMs—Math-Shepherd (Wang et al., 2024c) and MATH-psa (Wang et al., 2024a)—on beam search. As shown in Table 2, SpecSearch maintains nearly the same accuracy while significantly accelerating inference across different PRMs, achieving up to $2.12 \times$ speedup. This demonstrates its strong adaptability to various evaluators.
+
+Experiment 3. Ablation Study As the MATH dataset is harder than the GSM8K dataset, we perform an ablation study and sensitivity analysis on the MATH dataset. Specifically, we further randomly sample 50 problems from MATH-100, called MATH-50.
+
+Contribution of Each Component To assess the effectiveness of each component, we conduct an ablation study. For the Evaluation Module in SpecSearch, we replace PRM evaluation with evaluation via the log probabilities of a large model (SpecSearch w/ LMV in Table 3). For the Rejection Module, we compare three variations: SpecSearch with Fixed Large Model Engagement (SpecSearch w/ FLME), SpecSearch with Fixed Threshold (SpecSearch w/ FT), and SpecSearch with Random Rejection (SpecSearch w/ RR). SpecSearch w/ FLME implements a simple collaboration strategy between large and small models. SpecSearch w/ FT replaces the step-wise threshold estimation with a fixed threshold in the rejection process. SpecSearch w/ RR randomly rejects $50\%$ of the small model's generated thoughts. As shown in Table 3, the evaluation and rejection modules are both essential for preserving reasoning quality, suggesting that each component of our proposed SpecSearch is important for its significant performance improvement.
+
+Sensitivity Analysis (1) The EMA Weight $\theta$ . We analyze the sensitivity of SpecSearch to the EMA weight $\theta$ . Due to limited space, we defer results to Appendix I.6. The results in Table 11 in Appendix I.6 show that SpecSearch achieves similar average performance across a wide range of $\theta$ . (2) Draft Model's Size We have investigated SpecSearch's performance using multiple small draft models. The results in Table 10 in Appendix I.5 reveal that SpecSearch achieves speedups ranging from $2.18 \times$ to $2.87 \times$ , underscoring its acceleration capabilities across diverse small-model settings.
+
+Table 2. The results demonstrate the Broad Compatibility of Our SpecSearch with different search algorithms and PRMs.
+
+| Setting | Methods | Reasoning Accuracy (%) ↑ | Average Inference Latency (s) ↓ | Speedup (vs AR) ↑ | Speedup (vs SpS) ↑ |
+| --- | --- | --- | --- | --- | --- |
+| Beam Search | AR | 97.00 | 138.24 | NA | 0.50 |
+| Beam Search | SpS | 97.00 | 69.43 | 1.99 | NA |
+| Beam Search | SpecSearch (Ours) | 96.00 | 48.18 | 2.87 | 1.44 |
+| MCTS | AR | 98.00 | 256.17 | NA | 0.51 |
+| MCTS | SpS | 98.00 | 129.74 | 1.97 | NA |
+| MCTS | SpecSearch (Ours) | 98.00 | 98.16 | 2.61 | 1.32 |
+| Math-Shepherd (PRM) | AR | 96.00 | 124.76 | NA | 0.51 |
+| Math-Shepherd (PRM) | SpS | 95.00 | 64.17 | 1.94 | NA |
+| Math-Shepherd (PRM) | SpecSearch (Ours) | 94.00 | 30.32 | 4.11 | 2.12 |
+| MATH-psa (PRM) | AR | 97.00 | 138.24 | NA | 0.50 |
+| MATH-psa (PRM) | SpS | 97.00 | 69.43 | 1.99 | NA |
+| MATH-psa (PRM) | SpecSearch (Ours) | 96.00 | 48.18 | 2.87 | 1.44 |
+
+Table 3. The results demonstrate that each component within SpecSearch is significant for maintaining reasoning accuracy.
+
+Dataset: MATH-50.
+
+| Module | Methods | Reasoning Accuracy (%) ↑ | Average Inference Latency (s) ↓ | Speedup (vs AR) ↑ |
+| --- | --- | --- | --- | --- |
+|  | AR | 88.00 | 256.05 | NA |
+|  | SpS | 90.00 | 132.68 | 1.93 |
+|  | SpecSearch (Ours) | 88.00 | 70.63 | 3.63 |
+| Evaluation Module | SpecSearch w/ LMV | 78.00 | 172.26 | 1.49 |
+| Rejection Module | SpecSearch w/ FT | 80.00 | 68.84 | 3.72 |
+| Rejection Module | SpecSearch w/ RR | 80.00 | 99.73 | 2.57 |
+| Rejection Module | SpecSearch w/ FLME | 84.00 | 105.25 | 2.43 |
+
+
+Figure 4. To verify that our method preserves comparable reward scores for reasoning thoughts, we visualize the average reward scores at each reasoning step during the tree search process: (a) Qwen models; (b) Llama models.
+
+Experiment 4. Visualization Analysis To evaluate whether our SpecSearch can preserve comparable reward scores for reasoning thoughts, we visualize the average reward scores at each reasoning step during the tree search process for SpecSearch and the baselines on the MATH-100 dataset. As shown in Figure 4, SpecSearch achieves reward scores comparable to those of the large model across all reasoning steps. This result highlights SpecSearch's ability to significantly accelerate inference while maintaining comparable reasoning quality to the large model.
+
+# 6. Conclusion
+
+We propose Speculative Search (SpecSearch), a framework that accelerates LLM reasoning by having a small model collaborate with a large model to generate speculative thoughts at both the thought and token levels. With a quality-preserving rejection mechanism, SpecSearch theoretically maintains reasoning quality comparable to that of the large model. Experiments show up to $2.12 \times$ speedup over state-of-the-art speculative decoding while preserving comparable reasoning quality.
+
+# Acknowledgements
+
+This work was supported in part by the National Key R&D Program of China under contract 2022ZD0119801, and by National Natural Science Foundation of China grants U23A20388 and 62021001. This work was also supported in part by Huawei. We would like to thank all the anonymous reviewers for their insightful comments.
+
+# Impact Statement
+
+This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
+
+# References
+
+Berto, F., Hua, C., Park, J., Kim, M., Kim, H., Son, J., Kim, H., Kim, J., and Park, J. RL4CO: a unified reinforcement learning for combinatorial optimization library. In NeurIPS 2023 Workshop: New Frontiers in Graph Learning, 2023. URL https://openreview.net/forum?id=YXSJxi8dOV.
+Burkardt, J. The truncated normal distribution. Department of Scientific Computing Website, Florida State University, 1(35):58, 2014.
+Burton, F. W. Speculative computation, parallelism, and functional programming. IEEE Transactions on Computers, 34(12):1190-1193, 1985. doi: 10.1109/TC.1985.6312218. URL https://doi.org/10.1109/TC.1985.6312218.
+Chen, C., Borgeaud, S., Irving, G., Lespiau, J., Sifre, L., and Jumper, J. Accelerating large language model decoding with speculative sampling. CoRR, abs/2302.01318, 2023a. doi: 10.48550/ARXIV.2302.01318. URL https://doi.org/10.48550/arXiv.2302.01318.
+Chen, G., Liao, M., Li, C., and Fan, K. Alphamath almost zero: process supervision without process. CoRR, abs/2405.03553, 2024. doi: 10.48550/ARXIV.2405.03553. URL https://doi.org/10.48550/arXiv.2405.03553.
+Chen, Z., Yang, X., Lin, J., Sun, C., Huang, J., and Chang, K. C. Cascade speculative drafting for even faster LLM inference. CoRR, abs/2312.11462, 2023b. doi: 10.48550/ARXIV.2312.11462. URL https://doi.org/10.48550/arXiv.2312.11462.
+Cobbe, K., Kosaraju, V., Bavarian, M., Chen, M., Jun, H., Kaiser, L., Plappert, M., Tworek, J., Hilton, J., Nakano, R., Hesse, C., and Schulman, J. Training verifiers to solve math word problems. CoRR, abs/2110.14168, 2021. URL https://arxiv.org/abs/2110.14168.
+Dekking, F. M., Kraaikamp, C., Lopuhaa, H. P., and Meester, L. E. A Modern Introduction to Probability and Statistics: Understanding why and how. Springer Science & Business Media, 2006.
+Delarue, A., Anderson, R., and Tjandraatmadja, C. Reinforcement learning with combinatorial actions: An application to vehicle routing. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., and Lin, H. (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 609-620. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/06a9d51e04213572ef0720dd27a84792-Paper.pdf.
+Dubey, A., Jauhri, A., Pandey, A., Kadian, A., Al-Dahle, A., Letman, A., Mathur, A., Schelten, A., Yang, A., Fan, A., Goyal, A., Hartshorn, A., Yang, A., Mitra, A., Sravankumar, A., Korenev, A., Hinsvark, A., Rao, A., Zhang, A., Rodriguez, A., Gregerson, A., Spataru, A., Rozière, B., Biron, B., Tang, B., Chern, B., Caucheteux, C., Nayak, C., Bi, C., Marra, C., McConnell, C., Keller, C., Touret, C., Wu, C., Wong, C., Ferrer, C. C., Nikolaidis, C., Allonsius, D., Song, D., Pintz, D., Livshits, D., Esiobu, D., Choudhary, D., Mahajan, D., Garcia-Olano, D., Perino, D., Hupkes, D., Lakomkin, E., AlBadawy, E., Lobanova, E., Dinan, E., Smith, E. M., Radenovic, F., Zhang, F., Synnaeve, G., Lee, G., Anderson, G. L., Nail, G., Mialon, G., Pang, G., Cucurell, G., Nguyen, H., Korevaar, H., Xu, H., Touvron, H., Zarov, I., Ibarra, I. A., Kloumann, I. M., Misra, I., Evtimov, I., Copet, J., Lee, J., Geffert, J., Vranes, J., Park, J., Mahadeokar, J., Shah, J., van der Linde, J., Billock, J., Hong, J., Lee, J., Fu, J., Chi, J., Huang, J., Liu, J., Wang, J., Yu, J., Bitton, J., Spisak, J., Park, J., Rocca, J., Johnstun, J., Saxe, J., Jia, J., Alwala, K. V., Upasani, K., Plawiak, K., Li, K., Heafield, K., Stone, K., and et al. The llama 3 herd of models. CoRR, abs/2407.21783, 2024. doi: 10.48550/ARXIV.2407.21783. URL https://doi.org/10.48550/arXiv.2407.21783.
+Gao, Z., Niu, B., He, X., Xu, H., Liu, H., Liu, A., Hu, X., and Wen, L. Interpretable contrastive monte carlo tree search reasoning. CoRR, abs/2410.01707, 2024. doi: 10.48550/ARXIV.2410.01707. URL https://doi.org/10.48550/arXiv.2410.01707.
+Geng, Z., Wang, J., Liu, Z., Xu, S., Tang, Z., Yuan, M., Hao, J., Zhang, Y., and Wu, F. Reinforcement learning within tree search for fast macro placement. In Forty-first International Conference on Machine Learning, 2024. URL https://openreview.net/forum?id=AJGwSx0RUV.
+Gopinath, R. A. Maximum likelihood modeling with gaussian distributions for classification. In Proceedings of the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP'98 (Cat. No. 98CH36181), volume 2, pp. 661-664. IEEE, 1998.
+Guo, H., Liu, Z., Zhang, Y., and Wang, Z. Can large language models play games? a case study of a self-play approach. arXiv preprint arXiv:2403.05632, 2024.
+Hao, S., Gu, Y., Ma, H., Hong, J. J., Wang, Z., Wang, D. Z., and Hu, Z. Reasoning with language model is planning with world model. In Bouamor, H., Pino, J., and Bali, K. (eds.), Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pp. 8154-8173. Association for Computational Linguistics, 2023. doi: 10.18653/V1/2023.EMNLP-MAIN.507. URL https://doi.org/10.18653/v1/2023.emnlp-main.507.
+Hendrycks, D., Burns, C., Kadavath, S., Arora, A., Basart, S., Tang, E., Song, D., and Steinhardt, J. Measuring mathematical problem solving with the MATH dataset. In Vanschoren, J. and Yeung, S. (eds.), Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks 2021, December 2021, virtual, 2021.
+Hennessy, J. L. and Patterson, D. A. Computer Architecture - A Quantitative Approach, 5th Edition. Morgan Kaufmann, 2012. ISBN 978-0-12-383872-8.
+
+Hu, J. Reinforce++: A simple and efficient approach for aligning large language models.
+Hui, W., Jiang, C., Wang, Y., and Tu, K. Rot: Enhancing large language models with reflection on search trees. CoRR, abs/2404.05449, 2024. doi: 10.48550/ARXIV.2404.05449. URL https://doi.org/10.48550/arXiv.2404.05449.
+Janner, M., Fu, J., Zhang, M., and Levine, S. When to trust your model: Model-based policy optimization. Advances in neural information processing systems, 32, 2019.
+Jiang, J., Chen, Z., Min, Y., Chen, J., Cheng, X., Wang, J., Tang, Y., Sun, H., Deng, J., Zhao, W. X., Liu, Z., Yan, D., Xie, J., Wang, Z., and Wen, J. Technical report: Enhancing LLM reasoning with reward-guided tree search. CoRR, abs/2411.11694, 2024. doi: 10.48550/ARXIV.2411.11694. URL https://doi.org/10.48550/arXiv.2411.11694.
+Kang, J., Li, X. Z., Chen, X., Kazemi, A., and Chen, B. Mindstar: Enhancing math reasoning in pre-trained llms at inference time. CoRR, abs/2405.16265, 2024. doi: 10.48550/ARXIV.2405.16265. URL https://doi.org/10.48550/arXiv.2405.16265.
+Klinker, F. Exponential moving average versus moving exponential average. Mathematische Semesterberichte, 58:97-107, 2011.
+Kou, S., Hu, L., He, Z., Deng, Z., and Zhang, H. Cllms: Consistency large language models. In *Forty-first International Conference on Machine Learning*, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net, 2024. URL https://openreview.net/forum?id=8uzBOVmh8H.
+Kwon, W., Li, Z., Zhuang, S., Sheng, Y., Zheng, L., Yu, C. H., Gonzalez, J., Zhang, H., and Stoica, I. Efficient memory management for large language model serving with pagedattention. In Flinn, J., Seltzer, M. I., Druschel, P., Kaufmann, A., and Mace, J. (eds.), Proceedings of the 29th Symposium on Operating Systems Principles, SOSP 2023, Koblenz, Germany, October 23-26, 2023, pp. 611-626. ACM, 2023. doi: 10.1145/3600006.3613165. URL https://doi.org/10.1145/3600006.3613165.
+Langley, P. Crafting papers on machine learning. In Langley, P. (ed.), Proceedings of the 17th International Conference on Machine Learning (ICML 2000), pp. 1207-1216, Stanford, CA, 2000. Morgan Kaufmann.
+Leviathan, Y., Kalman, M., and Matias, Y. Fast inference from transformers via speculative decoding. In Krause, A., Brunskill, E., Cho, K., Engelhardt, B., Sabato, S., and Scarlett, J. (eds.), International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pp. 19274-19286. PMLR, 2023. URL https://proceedings.mlr.press/v202/leviathan23a.html.
+Li, Y., Wei, F., Zhang, C., and Zhang, H. EAGLE: speculative sampling requires rethinking feature uncertainty. In *Forty-first International Conference on Machine Learning*, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net, 2024. URL https://openreview.net/forum?id=1NdN7eXyb4.
+Liu, Z., Hu, H., Zhang, S., Guo, H., Ke, S., Liu, B., and Wang, Z. Reason for future, act for now: A principled architecture for autonomous llm agents. In *Forty-first International Conference on Machine Learning*, 2023.
+Lu, J., Yang, Z., Wang, Y., Liu, X., Namee, B. M., and Huang, C. Padellm-ner: Parallel decoding in large language models for named entity recognition. CoRR, abs/2402.04838, 2024. doi: 10.48550/ARXIV.2402.04838. URL https://doi.org/10.48550/arXiv.2402.04838.
+Mazyavkina, N., Sviridov, S., Ivanov, S., and Burnaev, E. Reinforcement learning for combinatorial optimization: A survey. Computers & Operations Research, 134: 105400, 2021.
+Qiu, J., Lu, Y., Zeng, Y., Guo, J., Geng, J., Wang, H., Huang, K., Wu, Y., and Wang, M. Treebon: Enhancing inference-time alignment with speculative tree-search and best-of-n sampling. arXiv preprint arXiv:2410.16033, 2024.
+Schulman, J., Levine, S., Abbeel, P., Jordan, M., and Moritz, P. Trust region policy optimization. In International conference on machine learning, pp. 1889-1897. PMLR, 2015.
+Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
+Shao, Z., Wang, P., Zhu, Q., Xu, R., Song, J., Bi, X., Zhang, H., Zhang, M., Li, Y., Wu, Y., et al. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300, 2024.
+Team, Q. Qwen2.5: A party of foundation models, September 2024. URL https://qwenlm.github.io/blog/qwen2.5/.
+Wan, Z., Feng, X., Wen, M., McAleer, S. M., Wen, Y., Zhang, W., and Wang, J. Alphazero-like tree-search can guide large language model decoding and training. In Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net, 2024a. URL https://openreview.net/forum?id=C4OpREezgj.
+Wan, Z., Wang, X., Liu, C., Alam, S., Zheng, Y., Liu, J., Qu, Z., Yan, S., Zhu, Y., Zhang, Q., Chowdhury, M., and Zhang, M. Efficient large language models: A survey. Transactions on Machine Learning Research, 2024b. ISSN 2835-8856. URL https://openreview.net/forum?id=bsCCJHbO8A. Survey Certification.
+Wang, J., Fang, M., Wan, Z., Wen, M., Zhu, J., Liu, A., Gong, Z., Song, Y., Chen, L., Ni, L. M., Yang, L., Wen, Y., and Zhang, W. Openr: An open source framework for advanced reasoning with large language models. CoRR, abs/2410.09671, 2024a. doi: 10.48550/ARXIV.2410.09671. URL https://doi.org/10.48550/arXiv.2410.09671.
+Wang, J., Wang, Z., Li, X., Kuang, Y., Shi, Z., Zhu, F., Yuan, M., Zeng, J., Zhang, Y., and Wu, F. Learning to cut via hierarchical sequence/set model for efficient mixed-integer programming. IEEE Transactions on Pattern Analysis and Machine Intelligence, 46(12):9697-9713, 2024b. doi: 10.1109/TPAMI.2024.3432716.
+Wang, P., Li, L., Shao, Z., Xu, R., Dai, D., Li, Y., Chen, D., Wu, Y., and Sui, Z. Math-shepherd: Verify and reinforce llms step-by-step without human annotations. In Ku, L., Martins, A., and Srikumar, V. (eds.), Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2024, Bangkok, Thailand, August 11-16, 2024, pp. 9426-9439. Association for Computational Linguistics, 2024c. doi: 10.18653/V1/2024.ACL-LONG.510. URL https://doi.org/10.18653/v1/2024.acl-long.510.
+Wang, Z., Wang, J., Zhou, Q., Li, B., and Li, H. Sample-efficient reinforcement learning via conservative model-based actor-critic. Proceedings of the AAAI Conference on Artificial Intelligence, 36(8):8612-8620, Jun. 2022. doi: 10.1609/aaai.v36i8.20839. URL https://ojs.aaai.org/index.php/AAAI/article/view/20839.
+Wang, Z., Li, X., Wang, J., Kuang, Y., Yuan, M., Zeng, J., Zhang, Y., and Wu, F. Learning cut selection for mixed-integer linear programming via hierarchical sequence model. In The Eleventh International Conference on Learning Representations, 2023a. URL https://openreview.net/forum?id=Zob4P9bRNcK.
+Wang, Z., Pan, T., Zhou, Q., and Wang, J. Efficient exploration in resource-restricted reinforcement learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(8):10279-10287, Jun. 2023b. doi: 10.1609/aaai.v37i8.26224. URL https://ojs.aaai.org/index.php/AAAI/article/view/26224.
+Wang, Z., Wang, J., Zuo, D., Yunjie, J., Xia, X., Ma, Y., HAO, J., Yuan, M., Zhang, Y., and Wu, F. A hierarchical adaptive multi-task reinforcement learning framework for multiplier circuit design. In *Forty-first International Conference on Machine Learning*, 2024d. URL https://openreview.net/forum?id=LGz7GaUSEB.
+Wang, Z., Wu, J., Lai, Y., Zhang, C., and Zhou, D. SEED: accelerating reasoning tree construction via scheduled speculative decoding. CoRR, abs/2406.18200, 2024e. doi: 10.48550/ARXIV.2406.18200. URL https://doi.org/10.48550/arXiv.2406.18200.
+Wang, Z., Wang, J., Xia, X., Zuo, D., Chen, L., Ma, Y., HAO, J., Yuan, M., and Wu, F. Computing circuits optimization via model-based circuit genetic evolution. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=KWH4UIoQKS.
+Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E. H., Le, Q. V., and Zhou, D. Chain-of-thought prompting elicits reasoning in large language models. In Koyejo, S., Mohamed, S., Agarwal, A., Belgrave, D., Cho, K., and Oh, A. (eds.), Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022, 2022.
+Wikipedia contributors. Cantelli's inequality — Wikipedia, the free encyclopedia, 2024. URL https://en.wikipedia.org/w/index.php?title=Cantelli%27s_inequality&oldid=1244860887. [Online; accessed 21-January-2025].
+Wu, Y., Sun, Z., Li, S., Welleck, S., and Yang, Y. Scaling inference computation: Compute-optimal inference for problem-solving with language models. In The 4th Workshop on Mathematical Reasoning and AI at NeurIPS'24, 2024. URL https://openreview.net/forum?id=j7DZwSc8qu.
+Xia, H., Yang, Z., Dong, Q., Wang, P., Li, Y., Ge, T., Liu, T., Li, W., and Sui, Z. Unlocking efficiency in large language model inference: A comprehensive survey of speculative decoding. In Ku, L., Martins, A., and Srikumar, V. (eds.), Findings of the Association for Computational Linguistics, ACL 2024, Bangkok, Thailand and virtual meeting, August 11-16, 2024, pp. 7655-7671. Association for Computational Linguistics, 2024. doi: 10.18653/V1/2024.FINDINGS-ACL.456. URL https://doi.org/10.18653/v1/2024.findings-acl.456.
+Xie, Y., Kawaguchi, K., Zhao, Y., Zhao, X., Kan, M., He, J., and Xie, Q. Decomposition enhances reasoning via self-evaluation guided decoding. CoRR, abs/2305.00633, 2023. doi: 10.48550/ARXIV.2305.00633. URL https://doi.org/10.48550/arXiv.2305.00633.
+Yang, R., Wang, J., Geng, Z., Ye, M., Ji, S., Li, B., and Wu, F. Learning task-relevant representations for generalization via characteristic functions of reward sequence distributions. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 2242-2252, 2022.
+Yang, S., Huang, S., Dai, X., and Chen, J. Multi-candidate speculative decoding. CoRR, abs/2401.06706, 2024. doi: 10.48550/ARXIV.2401.06706. URL https://doi.org/10.48550/arXiv.2401.06706.
+Yao, S., Yu, D., Zhao, J., Shafran, I., Griffiths, T., Cao, Y., and Narasimhan, K. Tree of thoughts: Deliberate problem solving with large language models. In Oh, A., Naumann, T., Globerson, A., Saenko, K., Hardt, M., and Levine, S. (eds.), Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023, 2023.
+Yu, Q., Zhang, Z., Zhu, R., Yuan, Y., Zuo, X., Yue, Y., Fan, T., Liu, G., Liu, L., Liu, X., et al. Dapo: An open-source llm reinforcement learning system at scale. arXiv preprint arXiv:2503.14476, 2025.
+Yue, Y., Yuan, Y., Yu, Q., Zuo, X., Zhu, R., Xu, W., Chen, J., Wang, C., Fan, T., Du, Z., et al. Vapo: Efficient and reliable reinforcement learning for advanced reasoning tasks. arXiv preprint arXiv:2504.05118, 2025.
+Zhang, C., Liu, Z., and Song, D. Beyond the speculative game: A survey of speculative execution in large language models. CoRR, abs/2404.14897, 2024a. doi: 10.48550/ARXIV.2404.14897. URL https://doi.org/10.48550/arXiv.2404.14897.
+Zhang, D., Zhoubian, S., Yue, Y., Dong, Y., and Tang, J. Rest-mcts*: LLM self-training via process reward guided tree search. CoRR, abs/2406.03816, 2024b. doi: 10.48550/ARXIV.2406.03816. URL https://doi.org/10.48550/arXiv.2406.03816.
+Zhang, X. Gaussian distribution. 2010.
+Zheng, R., Guo, H., Liu, Z., Zhang, X., Yao, Y., Xu, X., Wang, Z., Xi, Z., Gui, T., Zhang, Q., et al. Toward optimal llm alignments using two-player games. arXiv preprint arXiv:2406.10977, 2024.
+
+Zhong, W. and Bharadwaj, M. S3D: A simple and cost-effective self-speculative decoding scheme for low-memory gpus. CoRR, abs/2405.20314, 2024. doi: 10.48550/ARXIV.2405.20314. URL https://doi.org/10.48550/arXiv.2405.20314.
+Zhou, Q., Li, H., and Wang, J. Deep model-based reinforcement learning via estimated uncertainty and conservative policy optimization. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pp. 6941-6948, 2020.
+Zhou, Z., Ning, X., Hong, K., Fu, T., Xu, J., Li, S., Lou, Y., Wang, L., Yuan, Z., Li, X., Yan, S., Dai, G., Zhang, X., Dong, Y., and Wang, Y. A survey on efficient inference for large language models. CoRR, abs/2404.14294, 2024. doi: 10.48550/ARXIV.2404.14294. URL https://doi.org/10.48550/arXiv.2404.14294.
+
+# A. Theoretical Analysis
+
+In this section, we provide proof of the theorems in the main paper along with further discussions.
+
+For analytical clarity, our analysis is confined to the case in which the sample mean serves as the nonparametric estimation method, i.e.,
+
+$$
+\hat {\beta} ^ {(k + 1)} = \theta \hat {\beta} ^ {(k)} + \frac {1 - \theta}{M + 1} \sum_ {i = 1} ^ {M + 1} V _ {i} ^ {(k)}, \tag {3}
+$$
+
+where we assume that the large model generates one additional thought to avoid the case where $M = 0$ . We present further discussion of these settings in Appendix A.5.2 and Appendix A.5.3.
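+
+This EMA update can be sketched directly in code (a minimal sketch; variable names are ours, not the authors'):
+
+```python
+def update_threshold(beta_prev, theta, large_model_scores):
+    """EMA threshold update of Eq. (3): blend the previous threshold with the
+    sample mean of the M + 1 large-model thought qualities at this step."""
+    sample_mean = sum(large_model_scores) / len(large_model_scores)
+    return theta * beta_prev + (1 - theta) * sample_mean
+```
+
+For example, `update_threshold(0.8, 0.9, [0.6, 0.8, 1.0])` keeps the threshold at 0.80, since the new sample mean coincides with the previous threshold.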
+
+# A.1. Proof for Theorem 4.3
+
+Lemma A.1. Let $\varphi(x)$ and $\Phi(x)$ denote the probability density function (PDF) and cumulative distribution function (CDF) of the standard normal distribution, respectively. Then for any $x \in \mathbb{R}$ , we have
+
+$$
+\varphi (x) - x \left(1 - \Phi (x)\right) > 0. \tag {4}
+$$
+
+Proof. For $x \leq 0$ , the claim is immediate since $\varphi(x) > 0$ and $x\left(1 - \Phi(x)\right) \leq 0$ . For $x > 0$ , notice that
+
+$$
+\begin{array}{l} 1 - \Phi (x) = \int_ {x} ^ {\infty} \varphi (t) d t \\ = \int_ {x} ^ {\infty} \frac {1}{t} \cdot t \varphi (t) d t \\ = - \int_ {x} ^ {\infty} \frac {1}{t} \cdot d \varphi (t) \\ = - \frac {1}{t} \varphi (t) \bigg | _ {t = x} ^ {\infty} + \int_ {x} ^ {\infty} \varphi (t) d \left(\frac {1}{t}\right) \\ = \frac {1}{x} \varphi (x) - \int_ {x} ^ {\infty} \varphi (t) \frac {1}{t ^ {2}} d t \\ < \frac {1}{x} \varphi (x). \tag {5} \\ \end{array}
+$$
+
+Then we have
+
+$$
+\varphi (x) - x \left(1 - \Phi (x)\right) > 0. \tag {6}
+$$
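+Lemma A.1 can also be checked numerically. The following self-contained sketch (with hypothetical helper names) evaluates the gap $\varphi(x) - x(1 - \Phi(x))$ at several points using only the standard library.
+
+```python
+import math
+
+def phi(x):
+    # standard normal PDF
+    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
+
+def Phi(x):
+    # standard normal CDF via the error function
+    return 0.5 * (1 + math.erf(x / math.sqrt(2)))
+
+def lemma_gap(x):
+    # the quantity shown to be positive in Lemma A.1
+    return phi(x) - x * (1 - Phi(x))
+
+gaps = [lemma_gap(x) for x in (-3.0, -1.0, 0.0, 0.5, 1.0, 2.0, 4.0)]
+```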
+
+Then we prove Theorem 4.3 as follows.
+
+Proof. At step $k$ , let the qualities of the $N$ thoughts obtained from the small model $G_{q}$ at the $k$ -th step be denoted as $\hat{V}_{i}^{(k)}$ , $i = 1,2,\ldots,N$ , and the threshold of the generator as $\beta^{(k)}$ . Among those, $U$ thoughts with qualities $\hat{V}_{i_l}^{(k)}, l = 1,2,\ldots,U$ are retained, with $\hat{V}_{i_l}^{(k)} \geq \beta^{(k)}$ . The remaining $M = N - U$ thoughts are refined by the large model $G_{p}$ , yielding qualities $V_{i}^{(k)}, i = 1,2,\ldots,N - U$ , along with an additional thought of quality $V_{N-U+1}^{(k)}$ generated by the target model. Then the probability that a sample passes the threshold $\beta^{(k)}$ is given by
+
+$$
+p _ {s} = 1 - \Phi \left(\frac {\beta^ {(k)} - \mu_ {q} ^ {(k)}}{\sigma_ {q} ^ {(k)}}\right). \tag {7}
+$$
+
+Thus, the number of passing thoughts $U$ follows a binomial distribution over $N$ trials:
+
+$$
+P (U = u) = \binom {N} {u} p _ {s} ^ {u} \left(1 - p _ {s}\right) ^ {N - u}. \tag {8}
+$$
+
+For the qualities of the passing thoughts $\hat{V}_{i_l}^{(k)}$ , the distribution is a truncated normal distribution (Burkardt, 2014). The expected quality is computed as
+
+$$
+\mu_ {q} ^ {\prime} = \mathbb {E} \left[ \hat {V} _ {i _ {l}} ^ {(k)} \right] = \mu_ {q} ^ {(k)} + \sigma_ {q} ^ {(k)} \frac {\varphi \left(\frac {\beta^ {(k)} - \mu_ {q} ^ {(k)}}{\sigma_ {q} ^ {(k)}}\right)}{1 - \Phi \left(\frac {\beta^ {(k)} - \mu_ {q} ^ {(k)}}{\sigma_ {q} ^ {(k)}}\right)}. \tag {9}
+$$
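+As a sanity check, the closed-form truncated mean in (9) can be compared against direct numerical integration; this is an illustrative sketch with hypothetical function names, not part of the method.
+
+```python
+import math
+
+def phi(x):
+    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
+
+def mills(a):
+    # phi(a) / (1 - Phi(a)), computed stably via erfc
+    return phi(a) / (0.5 * math.erfc(a / math.sqrt(2)))
+
+def truncated_mean(mu, sigma, beta):
+    """Closed-form mean of N(mu, sigma^2) truncated below at beta, as in Eq. (9)."""
+    return mu + sigma * mills((beta - mu) / sigma)
+
+def truncated_mean_numeric(mu, sigma, beta, n=100000):
+    # trapezoidal integration of x * density over [beta, mu + 12 sigma]
+    lo, hi = beta, mu + 12 * sigma
+    h = (hi - lo) / n
+    num = den = 0.0
+    for i in range(n + 1):
+        x = lo + i * h
+        w = 0.5 if i in (0, n) else 1.0
+        d = phi((x - mu) / sigma) / sigma
+        num += w * x * d
+        den += w * d
+    return num / den
+```
+
+The truncated mean always exceeds the cutoff $\beta$ , which is what drives the losslessness argument below.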
+
+Let the mean quality of the new batch of solutions be denoted as
+
+$$
+\bar {V} ^ {(k)} = \frac {1}{N + 1} \left(\sum_ {l = 1} ^ {U} \hat {V} _ {i _ {l}} ^ {(k)} + \sum_ {j = 1} ^ {N - U + 1} V _ {j} ^ {(k)}\right). \tag {10}
+$$
+
+By the law of total expectation (Dekking et al., 2006), we have
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \bar {V} ^ {(k)} \right] = \mathbb {E} \left[ \mathbb {E} \left[ \bar {V} ^ {(k)} \mid U \right] \right] \\ = \sum_ {u = 0} ^ {N} P (U = u) \, \mathbb {E} \left[ \bar {V} ^ {(k)} \mid U = u \right] \\ = \sum_ {u = 0} ^ {N} P (U = u) \, \mathbb {E} \left[ \frac {1}{N + 1} \left(\sum_ {l = 1} ^ {u} \hat {V} _ {i _ {l}} ^ {(k)} + \sum_ {j = 1} ^ {N - u + 1} V _ {j} ^ {(k)}\right) \,\middle|\, U = u \right] \\ = \sum_ {u = 0} ^ {N} P (U = u) \, \frac {1}{N + 1} \left(\sum_ {l = 1} ^ {u} \mathbb {E} [ \hat {V} _ {i _ {l}} ^ {(k)} ] + \sum_ {j = 1} ^ {N - u + 1} \mathbb {E} [ V _ {j} ^ {(k)} ]\right) \\ = \sum_ {u = 0} ^ {N} P (U = u) \left[ \frac {u}{N + 1} \mu_ {q} ^ {\prime} + \frac {N - u + 1}{N + 1} \mu_ {p} ^ {(k)} \right] \\ = \sum_ {u = 0} ^ {N} P (U = u) \left[ \mu_ {p} ^ {(k)} + \frac {u}{N + 1} \left(\mu_ {q} ^ {\prime} - \mu_ {p} ^ {(k)}\right) \right] \\ = \mu_ {p} ^ {(k)} + \frac {\mu_ {q} ^ {\prime} - \mu_ {p} ^ {(k)}}{N + 1} \sum_ {u = 0} ^ {N} u P (U = u) \\ = \mu_ {p} ^ {(k)} + \frac {\mu_ {q} ^ {\prime} - \mu_ {p} ^ {(k)}}{N + 1} \mathbb {E} [ U ] \\ = \mu_ {p} ^ {(k)} + \frac {N p _ {s}}{N + 1} \left(\mu_ {q} ^ {\prime} - \mu_ {p} ^ {(k)}\right). \tag {11} \\ \end{array}
+$$
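+The final closed form in (11) can be verified exactly by summing the conditional expectation over the binomial law of $U$ ; a small illustrative sketch (names are ours):
+
+```python
+from math import comb
+
+def mean_quality_direct(N, p_s, mu_q_trunc, mu_p):
+    # condition on U = u and average over the Binomial(N, p_s) law of U
+    total = 0.0
+    for u in range(N + 1):
+        w = comb(N, u) * p_s ** u * (1 - p_s) ** (N - u)
+        total += w * (u * mu_q_trunc + (N - u + 1) * mu_p) / (N + 1)
+    return total
+
+def mean_quality_closed(N, p_s, mu_q_trunc, mu_p):
+    # closed form at the end of Eq. (11)
+    return mu_p + N * p_s / (N + 1) * (mu_q_trunc - mu_p)
+```
+
+Whenever the truncated small-model mean exceeds $\mu_p^{(k)}$ , the mixture mean exceeds $\mu_p^{(k)}$ as well, which is the crux of the proof.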
+
+Let
+
+$$
+h (x) = \frac {\varphi (x)}{1 - \Phi (x)}. \tag {12}
+$$
+
+Differentiating, we have
+
+$$
+h ^ {\prime} (x) = \frac {- x \varphi (x) (1 - \Phi (x)) + \varphi^ {2} (x)}{(1 - \Phi (x)) ^ {2}} = \frac {\varphi (x) (\varphi (x) - x (1 - \Phi (x)))}{(1 - \Phi (x)) ^ {2}}. \tag {13}
+$$
+
+According to Lemma A.1, $\varphi (x) - x(1 - \Phi (x)) > 0$ , so $h^{\prime}(x) > 0$ and the function $h(x)$ is monotonically increasing; moreover, since $1 - \Phi(x) > 0$ , the lemma also gives $h(x) > x$ . Therefore, for any $k\geq 1$ ,
+
+$$
+\mu_ {q} ^ {\prime} = \mu_ {q} ^ {(k)} + \sigma_ {q} ^ {(k)} h \left(\frac {\beta^ {(k)} - \mu_ {q} ^ {(k)}}{\sigma_ {q} ^ {(k)}}\right) \geq \mu_ {q} ^ {(k)} + \sigma_ {q} ^ {(k)} h \left(\frac {\mu_ {p} ^ {(k)} - \mu_ {q} ^ {(k)}}{\sigma_ {q} ^ {(k)}}\right) \geq \mu_ {q} ^ {(k)} + \sigma_ {q} ^ {(k)} \frac {\mu_ {p} ^ {(k)} - \mu_ {q} ^ {(k)}}{\sigma_ {q} ^ {(k)}} = \mu_ {p} ^ {(k)}. \tag {14}
+$$
+
+Substituting into (11), we obtain
+
+$$
+\mathbb {E} \left[ \bar {V} ^ {(k)} \right] \geq \mu_ {p} ^ {(k)}, \forall k \geq 1, \tag {15}
+$$
+
+which is the definition of a lossless thought generator.
+
+
+
+# A.2. Further Discussion on Theorem 4.3
+
+Additionally, we provide the necessary and sufficient conditions for losslessness in the following proposition.
+
+Proposition A.2. (Necessary And Sufficient Lossless Threshold Condition) The generator $G_{s}(\beta)$ is lossless if and only if for any reasoning step $k \geq 1$ ,
+
+$$
+\beta^ {(k)} \geq \mu_ {q} ^ {(k)} + \alpha^ {(k)} \sigma_ {q} ^ {(k)}, \tag {16}
+$$
+
+where $\alpha^{(k)}$ is the solution to the equation
+
+$$
+\frac {\varphi (x)}{1 - \Phi (x)} = \frac {\mu_ {p} ^ {(k)} - \mu_ {q} ^ {(k)}}{\sigma_ {q} ^ {(k)}}. \tag {17}
+$$
+
+Proof. As shown in the proof for Theorem 4.3, we have
+
+$$
+\mathbb {E} \left[ \bar {V} ^ {(k)} \right] = \mu_ {p} ^ {(k)} + \frac {N p _ {s}}{N + 1} \left(\mu_ {q} ^ {\prime} - \mu_ {p} ^ {(k)}\right). \tag {18}
+$$
+
+The condition is equivalent to
+
+$$
+\beta^ {(k)} \geq \mu_ {q} ^ {(k)} + \alpha^ {(k)} \sigma_ {q} ^ {(k)} \Longleftrightarrow \frac {\beta^ {(k)} - \mu_ {q} ^ {(k)}}{\sigma_ {q} ^ {(k)}} \geq \alpha^ {(k)}, \forall k \geq 1. \tag {19}
+$$
+
+By the monotonicity of the function $h(x) = \frac{\varphi(x)}{1 - \Phi(x)}$ , inequality (19) is equivalent to
+
+$$
+\frac {\varphi \left(\frac {\beta^ {(k)} - \mu_ {q} ^ {(k)}}{\sigma_ {q} ^ {(k)}}\right)}{1 - \Phi \left(\frac {\beta^ {(k)} - \mu_ {q} ^ {(k)}}{\sigma_ {q} ^ {(k)}}\right)} \geq h \left(\alpha^ {(k)}\right) = \frac {\mu_ {p} ^ {(k)} - \mu_ {q} ^ {(k)}}{\sigma_ {q} ^ {(k)}}, \forall k \geq 1. \tag {20}
+$$
+
+Rearranging, we obtain
+
+$$
+\mu_ {q} ^ {\prime} - \mu_ {p} ^ {(k)} = \mu_ {q} ^ {(k)} + \sigma_ {q} ^ {(k)} \frac {\varphi \left(\frac {\beta^ {(k)} - \mu_ {q} ^ {(k)}}{\sigma_ {q} ^ {(k)}}\right)}{1 - \Phi \left(\frac {\beta^ {(k)} - \mu_ {q} ^ {(k)}}{\sigma_ {q} ^ {(k)}}\right)} - \mu_ {p} ^ {(k)} \geq 0, \forall k \geq 1. \tag {21}
+$$
+
+Substituting into (18), we see that (21) is equivalent to
+
+$$
+\mathbb {E} \left[ \overline {{V}} ^ {(k)} \right] \geq \mu_ {p} ^ {(k)}, \forall k \geq 1. \tag {22}
+$$
+
+This is equivalent to the definition of a lossless thought generator.
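+Since $h$ is monotonically increasing, the coefficient $\alpha^{(k)}$ in (17) has no closed form but is easily computed by bisection; a minimal illustrative sketch (valid whenever the right-hand side of (17) is positive):
+
+```python
+import math
+
+def h(x):
+    # inverse Mills ratio phi(x) / (1 - Phi(x)), computed stably via erfc
+    pdf = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
+    return pdf / (0.5 * math.erfc(x / math.sqrt(2)))
+
+def solve_alpha(c, lo=-8.0, hi=8.0, iters=80):
+    """Solve h(x) = c for the alpha of Eq. (17); h is increasing, so
+    bisection on a bracketing interval converges; requires c > 0."""
+    for _ in range(iters):
+        mid = 0.5 * (lo + hi)
+        if h(mid) < c:
+            lo = mid
+        else:
+            hi = mid
+    return 0.5 * (lo + hi)
+```
+
+For instance, with $(\mu_p^{(k)} - \mu_q^{(k)}) / \sigma_q^{(k)} = 1.2$ , `solve_alpha(1.2)` returns the standardized offset $\alpha^{(k)}$ to add to $\mu_q^{(k)}$ .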
+
+
+
+# A.3. Proof for Theorem 4.5
+
+Proof. Let the condition be denoted as $\mathcal{C} = \left\{\hat{\beta}^{(k)}\geq \mu_p^{(k)}\right\}$ . Then we have
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \hat {\beta} ^ {(k + 1)} \mid \mathcal {C} \right] = \theta \mathbb {E} \left[ \hat {\beta} ^ {(k)} \mid \mathcal {C} \right] + (1 - \theta) \mathbb {E} \left[ \overline {{V}} ^ {(k)} \mid \mathcal {C} \right] \\ \geq \theta \mu_ {p} ^ {(k)} + (1 - \theta) \mu_ {p} ^ {(k)} \\ = \mu_ {p} ^ {(k)} \geq \frac {1}{\gamma} \mu_ {p} ^ {(k + 1)} \tag {23} \\ \end{array}
+$$
+
+Thus,
+
+$$
+\mathbb {E} \left[ \hat {\beta} ^ {(k + 1)} \mid \mathcal {C} \right] - \mu_ {p} ^ {(k + 1)} \geq \frac {1 - \gamma}{\gamma} \mu_ {p} ^ {(k + 1)} \tag {24}
+$$
+
+Then, according to the conditional variance decomposition (law of total variance) (Dekking et al., 2006), we have
+
+$$
+\begin{array}{l} V a r \left[ \hat {\beta} ^ {(k + 1)} \mid \mathcal {C} \right] = V a r \left[ \mathbb {E} \left[ \hat {\beta} ^ {(k + 1)} \mid \mathcal {C}, \hat {\beta} ^ {(k)} \right] \mid \mathcal {C} \right] + \mathbb {E} \left[ V a r \left[ \hat {\beta} ^ {(k + 1)} \mid \mathcal {C}, \hat {\beta} ^ {(k)} \right] \mid \mathcal {C} \right] \\ = I + I I \tag {25} \\ \end{array}
+$$
+
+where $I = Var\left[\mathbb{E}\left[\hat{\beta}^{(k + 1)} \mid \mathcal{C}, \hat{\beta}^{(k)}\right] \mid \mathcal{C}\right]$ and $II = \mathbb{E}\left[Var\left[\hat{\beta}^{(k + 1)} \mid \mathcal{C}, \hat{\beta}^{(k)}\right] \mid \mathcal{C}\right]$ . For $I$ , since
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \hat {\beta} ^ {(k + 1)} \mid \mathcal {C}, \hat {\beta} ^ {(k)} \right] = \mathbb {E} \left[ \theta \hat {\beta} ^ {(k)} + (1 - \theta) \bar {V} ^ {(k)} \mid \mathcal {C}, \hat {\beta} ^ {(k)} \right] \\ = \theta \mathbb {E} \left[ \hat {\beta} ^ {(k)} \mid \mathcal {C}, \hat {\beta} ^ {(k)} \right] + (1 - \theta) \mathbb {E} \left[ \bar {V} ^ {(k)} \mid \mathcal {C}, \hat {\beta} ^ {(k)} \right] \\ = \theta \hat {\beta} ^ {(k)} + (1 - \theta) \mu_ {p} ^ {(k)}. \tag {26} \\ \end{array}
+$$
+
+and therefore we have,
+
+$$
+I = \operatorname {V a r} \left[ \theta \hat {\beta} ^ {(k)} + (1 - \theta) \mu_ {p} ^ {(k)} \mid \mathcal {C} \right] = \theta^ {2} \operatorname {V a r} \left[ \hat {\beta} ^ {(k)} \mid \mathcal {C} \right]. \tag {27}
+$$
+
+For $II$ , now that
+
+$$
+\begin{array}{l} V a r \left[ \hat {\beta} ^ {(k + 1)} \mid \mathcal {C}, \hat {\beta} ^ {(k)} \right] = V a r \left[ \theta \hat {\beta} ^ {(k)} + (1 - \theta) \bar {V} ^ {(k)} \mid \mathcal {C}, \hat {\beta} ^ {(k)} \right] \\ = (1 - \theta) ^ {2} V a r \left[ \bar {V} ^ {(k)} \mid \mathcal {C}, \hat {\beta} ^ {(k)} \right] \\ = (1 - \theta) ^ {2} \left(V a r \left[ \mathbb {E} \left[ \bar {V} ^ {(k)} \mid \mathcal {C}, \hat {\beta} ^ {(k)}, U ^ {(k)} \right] \mid \mathcal {C}, \hat {\beta} ^ {(k)} \right] + \mathbb {E} \left[ V a r \left[ \bar {V} ^ {(k)} \mid \mathcal {C}, \hat {\beta} ^ {(k)}, U ^ {(k)} \right] \mid \mathcal {C}, \hat {\beta} ^ {(k)} \right]\right) \quad (28) \\ = (1 - \theta) ^ {2} \left(I I _ {1} + I I _ {2}\right), \quad (29) \\ \end{array}
+$$
+
+where $U^{(k)}$ is the number of retained draft thoughts. We find that
+
+$$
+\begin{array}{l} I I _ {1} = \operatorname {V a r} \left[ \mathbb {E} \left[ \overline {{V}} ^ {(k)} \mid \mathcal {C}, \hat {\beta} ^ {(k)}, U ^ {(k)} \right] \mid \mathcal {C}, \hat {\beta} ^ {(k)} \right] \\ = \operatorname {V a r} \left[ \mu_ {p} ^ {(k)} \mid \mathcal {C}, \hat {\beta} ^ {(k)} \right] = 0, \tag {30} \\ \end{array}
+$$
+
+and
+
+$$
+\begin{array}{l} V a r \left[ \overline {{V}} ^ {(k)} \mid \mathcal {C}, \hat {\beta} ^ {(k)}, U ^ {(k)} \right] \\ = V a r \left[ \frac {1}{N - U ^ {(k)} + 1} \sum_ {i = 1} ^ {N - U ^ {(k)} + 1} V _ {i} ^ {(k)} \mid \mathcal {C}, \hat {\beta} ^ {(k)}, U ^ {(k)} \right] \\ = \frac {\left(\sigma_ {p} ^ {(k)}\right) ^ {2}}{N - U ^ {(k)} + 1}. \tag {31} \\ \end{array}
+$$
+
+Since the function $g(x) = \frac{1}{N - x + 1}$ is concave, by Jensen's inequality (Dekking et al., 2006),
+
+$$
+\begin{array}{l} I I _ {2} = \left(\sigma_ {p} ^ {(k)}\right) ^ {2} \mathbb {E} \left[ g (U ^ {(k)}) \mid \mathcal {C}, \hat {\beta} ^ {(k)} \right] \\ \leq \left(\sigma_ {p} ^ {(k)}\right) ^ {2} g \left(\mathbb {E} \left[ U ^ {(k)} \mid \mathcal {C}, \hat {\beta} ^ {(k)} \right]\right) \\ = \frac {\left(\sigma_ {p} ^ {(k)}\right) ^ {2}}{N - \mathbb {E} \left[ U ^ {(k)} \mid \mathcal {C} , \hat {\beta} ^ {(k)} \right] + 1} \\ = \frac {\left(\sigma_ {p} ^ {(k)}\right) ^ {2}}{N - N p _ {s} + 1} \tag {32} \\ \end{array}
+$$
+
+where $p_s$ is defined in (7). Under the condition $\hat{\beta}^{(k)}\geq \mu_p^{(k)}$ , we have
+
+$$
+p _ {s} = 1 - \Phi \left(\frac {\hat {\beta} ^ {(k)} - \mu_ {q} ^ {(k)}}{\sigma_ {q} ^ {(k)}}\right) \leq 1 - \Phi \left(\frac {\mu_ {p} ^ {(k)} - \mu_ {q} ^ {(k)}}{\sigma_ {q} ^ {(k)}}\right) \leq 1 - \Phi (0) = \frac {1}{2}.
+$$
+
+Therefore,
+
+$$
+I I _ {2} \leq \frac {\left(\sigma_ {p} ^ {(k)}\right) ^ {2}}{N - \frac {1}{2} N + 1} = \frac {2}{N + 2} \left(\sigma_ {p} ^ {(k)}\right) ^ {2} \tag {33}
+$$
+
+Overall, we can find the recursive expression of variance of $\hat{\beta}^{(k)}$ :
+
+$$
+V a r \left[ \hat {\beta} ^ {(k + 1)} \mid \mathcal {C} \right] \leq \theta^ {2} V a r \left[ \hat {\beta} ^ {(k)} \mid \mathcal {C} \right] + \frac {2 (1 - \theta) ^ {2}}{N + 2} \left(\sigma_ {p} ^ {(k)}\right) ^ {2} \tag {34}
+$$
+
+Since $\left(\sigma_p^{(k)}\right)^2 \leq (\sigma_c)^2$ , we can derive that
+
+$$
+\begin{array}{l} V a r \left[ \hat {\beta} ^ {(k + 1)} \mid \mathcal {C} \right] \leq \theta^ {2} V a r \left[ \hat {\beta} ^ {(k)} \mid \mathcal {C} \right] + \frac {2 (1 - \theta) ^ {2}}{N + 2} (\sigma_ {c}) ^ {2} \\ \leq \theta^ {2 k} \operatorname {V a r} \left[ \hat {\beta} ^ {(1)} \mid \mathcal {C} \right] + \frac {2 (1 - \theta) ^ {2}}{N + 2} \sum_ {i = 1} ^ {k} \theta^ {2 i - 2} \left(\sigma_ {c}\right) ^ {2} \\ \leq \left(\frac {\theta^ {2 k}}{N + 1} + \frac {2}{N + 2} \frac {(1 - \theta) ^ {2} (1 - \theta^ {2 k})}{1 - \theta^ {2}}\right) (\sigma_ {c}) ^ {2} \\ = \left(\frac {\theta^ {2 k}}{N + 1} + \frac {2}{N + 2} \frac {(1 - \theta) (1 - \theta^ {2 k})}{1 + \theta}\right) (\sigma_ {c}) ^ {2} \\ \leq \left(\frac {1}{N + 1} + \frac {2}{N + 2} \frac {(1 - \gamma) (1 - \gamma^ {2 k})}{1 + \gamma}\right) (\sigma_ {c}) ^ {2} \\ \leq \left(\frac {1}{N + 1} + \frac {2}{N + 2}\right) \left(\sigma_ {c}\right) ^ {2}. \tag {35} \\ \end{array}
+$$
+
+By Cantelli's inequality (Wikipedia contributors, 2024),
+
+$$
+\begin{array}{l} P \left(\hat {\beta} ^ {(k + 1)} \leq \mu_ {p} ^ {(k + 1)} \middle | \mathcal {C}\right) = P \left(\hat {\beta} ^ {(k + 1)} - \mathbb {E} \left[ \hat {\beta} ^ {(k + 1)} \mid \mathcal {C} \right] \leq - \left(\mathbb {E} \left[ \hat {\beta} ^ {(k + 1)} \mid \mathcal {C} \right] - \mu_ {p} ^ {(k + 1)}\right) \middle | \mathcal {C}\right) \\ \leq \left(1 + \frac {\left(\mathbb {E} \left[ \hat {\beta} ^ {(k + 1)} | \mathcal {C} \right] - \mu_ {p} ^ {(k + 1)}\right) ^ {2}}{\operatorname {V a r} \left[ \hat {\beta} ^ {(k + 1)} - \mathbb {E} \left[ \hat {\beta} ^ {(k + 1)} | \mathcal {C} \right] | \mathcal {C} \right]}\right) ^ {- 1} \\ = \left(1 + \frac {\left(\mathbb {E} \left[ \hat {\beta} ^ {(k + 1)} | \mathcal {C} \right] - \mu_ {p} ^ {(k + 1)}\right) ^ {2}}{V a r \left[ \hat {\beta} ^ {(k + 1)} | \mathcal {C} \right]}\right) ^ {- 1} \\ \leq \left(1 + \frac {\left[ \frac {1 - \gamma}{\gamma} \mu_ {p} ^ {(k + 1)} \right] ^ {2}}{\left(\frac {1}{N + 1} + \frac {2}{N + 2}\right) \left(\sigma_ {c}\right) ^ {2}}\right) ^ {- 1}. \tag {36} \\ \end{array}
+$$
+
+Therefore,
+
+$$
+P \left(\hat {\beta} ^ {(k + 1)} \geq \mu_ {p} ^ {(k + 1)} \mid \mathcal {C}\right) \geq \frac {\left[ \frac {1 - \gamma}{\gamma} \mu_ {p} ^ {(k + 1)} \right] ^ {2}}{\left[ \frac {1 - \gamma}{\gamma} \mu_ {p} ^ {(k + 1)} \right] ^ {2} + \left(\frac {1}{N + 1} + \frac {2}{N + 2}\right) (\sigma_ {c}) ^ {2}}. \tag {37}
+$$
+
+# A.4. Proof for Theorem 4.6
+
+Proof. When $k = 0$ , we have
+
+$$
+\mathbb {E} \left[ \hat {\beta} ^ {(1)} \right] = \theta \mu_ {p} ^ {(0)} \geq \gamma \mu_ {p} ^ {(0)} \geq \mu_ {p} ^ {(1)} \tag {38}
+$$
+
+and
+
+$$
+\operatorname {V a r} \left[ \hat {\beta} ^ {(1)} \right] = \frac {\theta ^ {2}}{N + 1} \left(\sigma_ {p} ^ {(0)}\right) ^ {2} \leq \frac {1}{N + 1} \left(\sigma_ {p} ^ {(0)}\right) ^ {2} \tag {39}
+$$
+
+Additionally, we have
+
+$$
+P \left(\hat {\beta} ^ {(1)} \leq \mu_ {p} ^ {(1)}\right) \leq P \left(\hat {\beta} ^ {(1)} \leq \mu_ {p} ^ {(0)}\right) = \frac {1}{2 ^ {N + 1}}. \tag {40}
+$$
+
+Then for $k\geq 1$ , according to Theorem 4.5,
+
+$$
+P \left(\hat {\beta} ^ {(k + 1)} \geq \mu_ {p} ^ {(k + 1)} \mid \mathcal {C}\right) \geq \frac {\left[ \frac {1 - \gamma}{\gamma} \mu_ {p} ^ {(k + 1)} \right] ^ {2}}{\left[ \frac {1 - \gamma}{\gamma} \mu_ {p} ^ {(k + 1)} \right] ^ {2} + \left(\frac {1}{N + 1} + \frac {2}{N + 2}\right) (\sigma_ {c}) ^ {2}}. \tag {41}
+$$
+
+Noting the Markov property of $\hat{\beta}^{(k + 1)}$ , we have
+
+$$
+\begin{array}{l} P \left(\hat {\beta} ^ {(k)} \geq \mu_ {p} ^ {(k)}, 1 \leq k \leq K\right) = P \left(\hat {\beta} ^ {(1)} \geq \mu_ {p} ^ {(1)}\right) \prod_ {k = 1} ^ {K - 1} P \left(\hat {\beta} ^ {(k + 1)} \geq \mu_ {p} ^ {(k + 1)} \mid \hat {\beta} ^ {(k)} \geq \mu_ {p} ^ {(k)}\right) \\ \geq \left(1 - \frac {1}{2 ^ {N + 1}}\right) \prod_ {k = 1} ^ {K - 1} \left[ \frac {\left[ \frac {1 - \gamma}{\gamma} \mu_ {p} ^ {(k + 1)} \right] ^ {2}}{\left[ \frac {1 - \gamma}{\gamma} \mu_ {p} ^ {(k + 1)} \right] ^ {2} + \left(\frac {1}{N + 1} + \frac {2}{N + 2}\right) (\sigma_ {c}) ^ {2}} \right]. \tag {42} \\ \end{array}
+$$
+
+# A.5. Further Discussion on Theorem 4.6
+
+# A.5.1. NUMERICAL ANALYSIS FOR PROBABILITY BOUND
+
+We conduct a numerical analysis of the probability lower bound presented in Theorem 4.6 for a common scenario. Specifically, we set the decay factor $\gamma = 0.9$ , the quality of the large model at the initial step $\mu_p^{(0)} = 0.85$ , the maximum reasoning quality variance $\sigma_c = 0.01$ , the drafting size $N = 10$ , and the maximum number of reasoning steps $K = 10$ . Using Theorem 4.6, we compute the probability lower bound up to the $k$ -th step, $1 \leq k \leq K$ . The results are presented in Figure 5. At the 10-th reasoning step, the probability lower bound remains as high as 0.90. Although this conclusion is derived under highly idealized conditions, it provides theoretical support for the high quality of thoughts generated by our method.
+
+
+Figure 5. The bound decays rapidly with the number of reasoning steps, yet remains as high as 0.90 even at the 10-th step.
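+The curve in Figure 5 can be reproduced with a few lines of Python. We assume, as an idealized instantiation, that the large-model quality decays geometrically as $\mu_p^{(k)} = \gamma^k \mu_p^{(0)}$ ; this modeling choice and the function name are ours, for illustration only.
+
+```python
+def quality_bound(K, gamma=0.9, mu_p0=0.85, sigma_c=0.01, N=10):
+    """Lower bound of Theorem 4.6 on P(beta^(k) >= mu_p^(k), 1 <= k <= K),
+    assuming mu_p^(k) = gamma**k * mu_p0 (idealized geometric decay)."""
+    var_term = (1 / (N + 1) + 2 / (N + 2)) * sigma_c ** 2
+    bound = 1 - 1 / 2 ** (N + 1)      # initial-step factor
+    for k in range(1, K):             # factors for steps 2..K
+        gap = ((1 - gamma) / gamma) * gamma ** (k + 1) * mu_p0
+        bound *= gap ** 2 / (gap ** 2 + var_term)
+    return bound
+```
+
+With the settings above, `quality_bound(10)` evaluates to roughly 0.90, matching the value reported for the 10-th step.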
+
+# A.5.2. THRESHOLD ESTIMATOR WITH MAXIMUM ESTIMATION
+
+In practice, due to the limited number of samples, the accuracy of the average estimation method tends to be lower, which results in a decrease in the quality of the thoughts generated by our speculative reasoning algorithm. Therefore, we incorporate the solutions from the small model $G_{q}$ that passed the threshold into the estimation of $\mu_p^{(k)}$ and use the maximum as a non-parametric estimator. Specifically, at reasoning step $k + 1$ , denote the set of qualities of thoughts from small model $G_{q}$ that passed $\tilde{\beta}^{(k)}$ by $\mathcal{V}_q^{(k)} = \left\{\hat{V}_{i_1}^{(k)},\hat{V}_{i_2}^{(k)},\dots ,\hat{V}_{i_{N - M}}^{(k)}\right\}$ , and the set of qualities of thoughts generated by large model (speculative model) $G_{p}$ by $\mathcal{V}_p^{(k)} = \left\{V_1^{(k)},V_2^{(k)},\ldots ,V_M^{(k)}\right\}$ . Then, our estimator takes the form of:
+
+$$
+\tilde {\beta} ^ {(k + 1)} = \theta \tilde {\beta} ^ {(k)} + (1 - \theta) \max \left(\mathcal {V} _ {p} ^ {(k)} \cup \mathcal {V} _ {q} ^ {(k)}\right), \tag {43}
+$$
+
+with the initial threshold $\tilde{\beta}^{(0)} = \theta \max \mathcal{V}_p^{(0)}$ . It is easy to see that $\tilde{\beta}^{(k)} \geq \hat{\beta}^{(k)}$ for $k = 1, 2, \ldots$ . Therefore, we have
+
+$$
+P \left(\tilde {\beta} ^ {(k + 1)} \geq \mu_ {p} ^ {(k + 1)}, 1 \leq k \leq K\right) \geq P \left(\hat {\beta} ^ {(k + 1)} \geq \mu_ {p} ^ {(k + 1)}, 1 \leq k \leq K\right). \tag {44}
+$$
+
+This indicates that the maximum estimation method yields a higher probability of producing quality-preserved thoughts, at the cost of increased computational resources.
+
+# A.5.3. THRESHOLD ESTIMATOR WITH NO ADDITIONAL $G_{p}$ SAMPLES
+
+For the sake of simplicity in the previous analysis, we assumed that the large model (speculative model) $G_{p}$ generates $M + 1$ solutions at each step to ensure the existence of at least one large-model solution. In practice, we can adopt the more realistic assumption that $G_{p}$ generates only $M$ thoughts, with $M \geq 1$ . Then $U = N - M$ follows a truncated binomial distribution:
+
+$$
+P (U = u) = \left\{ \begin{array}{l l} 0, & u = N, \\ \binom {N} {u} \frac {p _ {s} ^ {u} \left(1 - p _ {s}\right) ^ {N - u}}{1 - p _ {s} ^ {N}}, & \text {otherwise}, \end{array} \right. \tag {45}
+$$
+
+where $p_{s} = 1 - \Phi \left(\frac{\beta^{(k)} - \mu_{q}^{(k)}}{\sigma_{q}^{(k)}}\right)$ . We can calculate that
+
+$$
+\mathbb {E} [ U ] = \frac {N \left(p _ {s} - p _ {s} ^ {N}\right)}{1 - p _ {s} ^ {N}}. \tag {46}
+$$
+
+Then
+
+$$
+\mathbb {E} \left[ \bar {V} ^ {(k)} \right] = \mu_ {p} ^ {(k)} + \frac {p _ {s} - p _ {s} ^ {N}}{1 - p _ {s} ^ {N}} \left(\mu_ {q} ^ {\prime} - \mu_ {p} ^ {(k)}\right). \tag {47}
+$$
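+The truncated-binomial mean (46) can be checked by direct summation over the distribution (45); a small illustrative sketch (function names are ours):
+
+```python
+from math import comb
+
+def mean_retained_direct(N, p_s):
+    # E[U] by summing over the truncated binomial of Eq. (45); u = N is excluded
+    z = 1 - p_s ** N                   # normalizing constant
+    return sum(u * comb(N, u) * p_s ** u * (1 - p_s) ** (N - u)
+               for u in range(N)) / z
+
+def mean_retained_closed(N, p_s):
+    # closed form of Eq. (46)
+    return N * (p_s - p_s ** N) / (1 - p_s ** N)
+```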
+
+Therefore, we can draw the same conclusion as in Theorem 4.3. In addition, we find that (32) changes into
+
+$$
+I I _ {2} = \frac {\left(\sigma_ {p} ^ {(k)}\right) ^ {2}}{N - \mathbb {E} \left[ U ^ {(k)} \mid \mathcal {C} , \hat {\beta} ^ {(k)} \right]} = \frac {\left(\sigma_ {p} ^ {(k)}\right) ^ {2}}{N - \frac {N \left(p _ {s} - p _ {s} ^ {N}\right)}{1 - p _ {s} ^ {N}}} \leq \frac {2}{N} \left(\sigma_ {p} ^ {(k)}\right) ^ {2}, \tag {48}
+$$
+
+and the probability bound for quality-preserving changes to
+
+$$
+P \left(\hat {\beta} ^ {(k + 1)} \geq \mu_ {p} ^ {(k + 1)}, 0 \leq k \leq K\right) \geq \left(1 - \frac {1}{2 ^ {N}}\right) \prod_ {k = 0} ^ {K} \left[ \frac {\left[ \frac {1 - \gamma}{\gamma} \mu_ {p} ^ {(k + 1)} \right] ^ {2}}{\left[ \frac {1 - \gamma}{\gamma} \mu_ {p} ^ {(k + 1)} \right] ^ {2} + \frac {3}{N} \left(\sigma_ {c}\right) ^ {2}} \right]. \tag {49}
+$$
+
+# B. More Background
+
+
+Figure 6. (a) Illustration of standard speculative decoding methods. (b) Illustration of the beam-search-based reasoning method.
+
+Details on Speculative Sampling Here is a detailed introduction of speculative sampling (SpS) (Leviathan et al., 2023; Chen et al., 2023a), a state-of-the-art decoding technique that significantly accelerates LLM inference while preserving the target model's distribution. Specifically, let $c$ denote the prefix, $M_q$ and $M_p$ be the small and large models, respectively, and $\gamma$ represent the number of tokens generated per step. SpS operates in two phases: drafting and verification. In the drafting phase, the small model $M_q$ performs autoregressive sampling to generate $\gamma$ tokens, denoted as $x_1, x_2, \ldots, x_\gamma$ , where each $x_i \sim M_q(x_i \mid x_{i-1}, x_{i-2}, \ldots, x_1, c)$ . In the verification phase, the large model $M_p$ verifies the tokens generated by $M_q$ in parallel, obtaining the probability distribution $M_p(x \mid x_{i-1}, \ldots, x_1, c)$ . Each token $x_i$ is then verified sequentially using a modified rejection sampling mechanism, accepted with probability $\min\left(1, \frac{M_p(x_i \mid x_{i-1}, \ldots, x_1, c)}{M_q(x_i \mid x_{i-1}, \ldots, x_1, c)}\right)$ . If $x_i$ is rejected, the verification process terminates, and a resampling phase begins to generate a new token $\tilde{x}_i$ . Theoretically, this approach ensures that the distribution of accepted tokens matches that of the large model.
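+The verification phase described above can be sketched in a few lines. This is an illustrative skeleton of the accept/reject rule only (the residual-resampling step for rejected tokens is elided); all names are hypothetical.
+
+```python
+import random
+
+def verify_draft(draft_tokens, q_probs, p_probs, rng):
+    """Sequentially verify draft tokens: token x_i is accepted with
+    probability min(1, p(x_i) / q(x_i)); stop at the first rejection."""
+    accepted = []
+    for x, q, p in zip(draft_tokens, q_probs, p_probs):
+        if rng.random() < min(1.0, p / q):
+            accepted.append(x)
+        else:
+            break  # rejected: a new token would be resampled from the residual of p - q
+    return accepted
+```
+
+When the target probability exceeds the draft probability, the token is always accepted, which is what makes speculative sampling lossless in distribution.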
+
+Details on Beam Search and MCTS Beam Search is a heuristic search algorithm that starts from the root node and generates $N$ child nodes. At each depth level, only the top $k$ most promising nodes (beam size) are retained. This process is repeated for the selected $k$ nodes until a termination condition is met. The highest-scoring path is returned as the solution. MCTS is a simulation-based decision-making method that starts from the root node and selects child nodes according to a specific strategy (e.g., UCB) until an unexpanded node is reached. A new child node is then expanded. From this new node, a series of random steps are executed to simulate the search process until the end or a predefined depth is reached. After the simulation, rewards propagate back up the tree, updating the value of each visited node based on the simulation results. Through multiple iterations, MCTS converges to an optimal solution.
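+The beam-search procedure described above can be condensed into a generic sketch; `expand` and `score` are hypothetical callbacks supplied by the caller, not APIs from this paper.
+
+```python
+def beam_search(root, expand, score, beam_size, max_depth):
+    """Keep the top `beam_size` highest-scoring paths at each depth and
+    return the best path found. `expand` maps a path to its next steps."""
+    beams = [[root]]
+    for _ in range(max_depth):
+        candidates = [path + [step] for path in beams for step in expand(path)]
+        if not candidates:       # every kept path has terminated
+            break
+        candidates.sort(key=score, reverse=True)
+        beams = candidates[:beam_size]
+    return max(beams, key=score)
+
+# toy usage: grow digit sequences of length 3, scored by their sum
+best = beam_search(0, lambda p: [1, 2, 3] if len(p) < 3 else [], sum,
+                   beam_size=2, max_depth=5)
+```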
+
+# C. More Related Work
+
+Here we discuss more related work as follows.
+
+Reinforcement Learning and Search Recent advancements in enhancing the reasoning capabilities of large language models (LLMs) broadly fall into two categories: reinforcement learning-based post-training methods (Shao et al., 2024; Yu et al., 2025; Zheng et al., 2024; Hu; Yue et al., 2025) and search-based slow-thinking frameworks (Yao et al., 2023; Gao et al., 2024; Zhang et al., 2024b). Reinforcement learning-based post-training methods closely align with reinforcement learning techniques (Schulman et al., 2015; Zhou et al., 2020; Wang et al., 2023b; Schulman et al., 2017; Wang et al., 2022; Janner et al., 2019; Yang et al., 2022) employed for combinatorial optimization tasks (Berto et al., 2023; Wang et al., 2025; 2024b; Delarue et al., 2020; Wang et al., 2024d; Geng et al., 2024; Mazyavkina et al., 2021; Wang et al., 2023a), predominantly leveraging policy-gradient algorithms to optimize autoregressive models toward improved reasoning performance. In contrast, search-based slow-thinking frameworks augment pretrained LLMs by incorporating external search methodologies—such as beam search (Kang et al., 2024) and Monte Carlo Tree Search (MCTS) (Gao et al., 2024; Liu et al., 2023; Zhang et al., 2024b; Guo et al., 2024)—to enable systematic and structured deliberation without necessitating further training. This paper specifically emphasizes the exploration and advancement of these search-based slow-thinking frameworks.
+
+# D. Implementation Details of the Baselines
+
+We implement the baselines used in our paper based on the OpenR code framework.
+
+# D.1. AR
+
+AR refers to Autoregressive Generation, a sequence-based model generation method widely used in language models. In the standard Tree of Thoughts (ToT) method, autoregressive generation involves constructing solutions step by step, where each step uses information from previous steps to guide current choices. This approach is straightforward and intuitive but can be limited in terms of speed. In our work, the AR methods using Beam Search and MCTS are based on existing open-source code from OpenR.
+
+# D.2. SpS
+
+SpS refers to Speculative Sampling, a parallel decoding method for model generation. By introducing a small model, speculative sampling accelerates the generation process in tree search reasoning methods. We implement an efficient SpS method for Beam Search and MCTS using the vLLM (Kwon et al., 2023) package, building on the open-source code from OpenR.
+
+# E. Details of the Datasets Used in This Paper
+
+# E.1. The Datasets Used in the Main Evaluation
+
+MATH-100 The MATH dataset (Hendrycks et al., 2021) consists of 12,500 challenging competition mathematics problems, each with a full step-by-step solution. These solutions can be used to teach models to generate answer derivations and explanations. We randomly select 100 problems from this dataset as our test set: the inference latency of tree search algorithms is substantial, and even with our efficient SpecSearch acceleration framework, evaluating the full dataset in a single run would take prohibitively long.
+
+GSM8K-100 The GSM8K dataset (Cobbe et al., 2021) contains 8.5K high-quality, linguistically diverse grade school math word problems. We randomly select 100 problems from this dataset for our test set. The reason for selecting 100 problems is the same as for the MATH dataset: to manage inference latency effectively.
+
+# E.2. The Datasets Used in the Ablation Study
+
+MATH-50 For the ablation study, we need to test multiple variants of SpecSearch and conduct hyperparameter robustness experiments, which requires running the experiments multiple times. To facilitate this process, we select 50 mathematical problems from the MATH dataset as the test set for the ablation study.
+
+# F. Illustration of Using Models
+
+Thought Generator For the Qwen series of models, we use the Qwen2.5-72B-Instruct-GPTQ-Int4 model as the large model and the Qwen2.5-7B-Instruct-GPTQ-Int4 model as the small model. For the Llama series of models, we use the Llama-3-70B-Instruct-GPTQ-Int4 model as the large model and the Llama-3-8B-Instruct-GPTQ-Int4 model as the small model.
+
+Thought Evaluator We use two PRM models as thought evaluators, one is Math-Shepherd (Wang et al., 2024c) and the other is Math-psa (Wang et al., 2024a) for Experiment 2.
+
+# G. Discussion on the novelty of SpecSearch over standard speculative decoding and TreeBon (Qiu et al., 2024)
+
+# G.1. Comparison with Existing Speculative Decoding Techniques
+
+Relation to Standard Speculative Decoding (SD) Methods. We discuss the novelty of SpecSearch compared to existing SD techniques, emphasizing key distinctions in terms of speculative formulation, verification and rejection strategies, and theoretical guarantees.
+
+- Bi-Level Speculative Formulation: Unlike existing SD methods focused solely on tokens, SpecSearch treats both high-level thoughts and low-level tokens as bi-level speculative tasks. This enables (1) Structural Alignment with reasoning frameworks, where thoughts are fundamental units, and (2) Compatibility with standard SD methods through low-level token-level speculation.
+- Contextual Verification for Higher Acceptance and Speedup: Unlike SD methods that enforce strict token-level alignment, leading to frequent rejections, SpecSearch verifies the contextual quality of reasoning thoughts. This allows acceptance of correct but non-aligned outputs, substantially boosting acceptance rates and achieving significant speedups.
+- Quality-Preserving Rejection Mechanism: In contrast to token-level rejection in standard SD methods, SpecSearch introduces quality-preserving thought-level rejection based on contextual quality. It discards entire thoughts only when their quality is lower than the large model's, ensuring high-quality reasoning throughout decoding.
+- Theoretical Guarantee of Reasoning Quality: While standard SD methods preserve token-level distributions, SpecSearch guarantees that the reasoning quality remains comparable to that of the large model.
+
+# G.2. Comparison with Treebon (Qiu et al., 2024)
+
+We discuss the novelty of SpecSearch compared to Treebon (Qiu et al., 2024), emphasizing key distinctions in terms of motivation, speculative formulation, rejection strategies, and theoretical guarantees.
+
+- Distinct Motivation: Unlike Treebon, which aims to accelerate best-of- $n$ sampling via speculative rejection and tree search, SpecSearch is the first to generalize speculative execution to LLM reasoning tasks.
+- Bi-Level Speculative Formulation: Treebon treats fixed-length token sequences as speculative units, while SpecSearch adopts a flexible bi-level formulation—modeling full reasoning thoughts as high-level tasks and tokens as low-level ones. Unlike Treebon's fixed-length design, SpecSearch leverages LLMs' reasoning capabilities to generate semantically coherent thoughts of dynamic length.
+- Quality-Preserving Rejection Mechanism: Treebon rejects a fixed proportion of token sequences using a preset threshold. In contrast, SpecSearch scores reasoning thoughts and adaptively rejects those with lower contextual quality relative to the large model's output, enabling finer control and better quality preservation.
+- Theoretical Guarantee: Unlike Treebon, which lacks theoretical guarantees, SpecSearch offers formal assurance that the quality of the output reasoning remains on par with that of the large model.
+
+Algorithm 2 Pseudo Code for SpecSearch
+
+```
+Input: question c; large model (speculative model) G_p; small model G_q;
+       evaluation model V; expansion width N; beam size b; EMA weight θ;
+       reasoning depth K; nonparametric estimation method Θ.
+Initialize beam: B ← ∅                      ▷ each element is [sequence, quality]
+Initialize candidate thoughts: T ← ∅, V ← ∅
+▷ Initial reasoning from the large model G_p
+for i = 1 to N do
+    Generate from large model: z ← G_p(· | c)
+    Evaluate generated thought: v ← V(z)
+    Update candidates: T ← T ∪ {(z, c)}, V ← V ∪ {v}
+end for
+Initialize threshold: β^(1) ← θ · Θ(V)
+Update beam: B ← Top_b(T)                   ▷ retain top b by quality
+for k = 1 to K do
+    Initialize candidate thoughts: T ← ∅
+    for z_{<k} in B do
+        Generate thoughts: β^(k+1), T_i ← G_s(z_{<k}, G_p, G_q, V, β^(k), N, θ, Θ)
+        Update candidate thoughts: T ← T ∪ T_i
+    end for
+    Update beam: B ← Top_b(T)               ▷ retain top b by quality
+    if ∀ z_{≤k} ∈ B: last(z_{≤k}) = <STOP> then break
+end for
+return B
+```
+
+# H. Implementation Details of Our SpecSearch
+
+# H.1. Discussion on Advantages of Our Evaluation Method
+
+Here we present a detailed discussion on using a process reward model to evaluate the quality of thoughts. First, the thought evaluator accurately captures a thought's complete semantic meaning. Second, it converts thought distribution into a structured, manageable quality distribution, enabling a clearer definition of lossless reasoning acceleration. Third, it assigns high scores to different valid reasoning paths, improving the assessment of the small model's thought quality.
+
+# H.2. SpecSearch Implementation Details
+
+# H.2.1. SMALL MODEL PARALLEL THOUGHT GENERATION
+
+Because small models have a small memory footprint, multiple instances can operate in parallel even under limited memory conditions. Generating multiple thoughts simultaneously does not significantly increase latency compared to generating a single thought. Therefore, we use a small model to generate thoughts in parallel. Although the overall quality of generation from small models may not match that of large models, they still produce high-quality thoughts.
+
+We utilize the small model to generate $2N$ thoughts in parallel, combining the efficiency of parallel processing with the ability to generate high-quality thoughts. This approach introduces more high-quality thoughts into the Tree of Thoughts (ToT), enhancing both efficiency and thought quality.
+
+# H.2.2. ACCEPTANCE-REJECTION MECHANISM
+
+After generating $2N$ thoughts with the small model, we evaluate these thoughts using the Process Reward Model (PRM) to determine their rewards. Each thought's reward is compared to a dynamically calculated threshold. If the reward surpasses the threshold, the thought is retained; otherwise, it is discarded. If more than $N$ thoughts are retained, we select the top $N$ thoughts with the highest rewards for final acceptance.
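+This filter takes only a few lines in code. The sketch below is illustrative (function and variable names are ours, not from the released implementation):
+
+```python
+def accept_thoughts(thoughts, rewards, threshold, N):
+    """Keep thoughts whose PRM reward exceeds the dynamic threshold;
+    if more than N survive, keep only the N highest-reward ones."""
+    kept = [(r, t) for t, r in zip(thoughts, rewards) if r > threshold]
+    kept.sort(key=lambda pair: pair[0], reverse=True)
+    return [t for _, t in kept[:N]]
+```
+
+For instance, with rewards `[0.9, 0.2, 0.7, 0.8]`, threshold `0.5`, and `N = 2`, the two highest-reward surviving thoughts are accepted.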
+
+# H.2.3. ALGORITHM IMPLEMENTATION
+
+Table 4. Full GSM8K. Evaluation on the full GSM8K-1319 dataset. (1) Setup: we utilize quantized versions of Qwen2.5-72B-Instruct and Qwen2.5-7B-Instruct as the large and small language models, use MATH-psa as the Process Reward Model, and employ beam search as the search algorithm. Unless explicitly stated otherwise, all results presented below follow this setting. (2) Results: SpecSearch achieves comparable accuracy while significantly reducing inference latency.
+
+Qwen models, dataset: GSM8K-1319.
+
+| Methods | Reasoning Accuracy (%) | Average Inference Latency (s) | Speedup (vs AR) | Speedup (vs SpS) |
+| --- | --- | --- | --- | --- |
+| AR | 96.66 | 144.63 | NA | 0.48 |
+| SpS | 96.66 | 70.04 | 2.06 | NA |
+| SpecSearch (Ours) | 95.83 | 50.99 | 2.84 | 1.37 |
+
+Table 5. AIME. Evaluation on the AIME dataset. The results demonstrate that SpecSearch achieves comparable accuracy while significantly reducing inference latency.
+
+Qwen models, dataset: AIME.
+
+| Methods | Reasoning Accuracy (%) | Average Inference Latency (s) | Speedup (vs AR) | Speedup (vs SpS) |
+| --- | --- | --- | --- | --- |
+| AR | 16.67 | 562.89 | NA | 0.57 |
+| SpS | 13.33 | 318.71 | 1.77 | NA |
+| SpecSearch (Ours) | 13.33 | 264.44 | 2.13 | 1.21 |
+
+The procedure of our bi-level speculative thought generator is outlined in Algorithm 1 in the main text. Here, we further present the complete SpecSearch algorithm, which is based on the beam search algorithm, as shown in Algorithm 2.
+
+Furthermore, due to the limited sample size $M$ , we adopt a more conservative estimation strategy in the implementation, utilizing the maximum value as an estimate of the upper confidence bound for $\mu_p^{(k + 1)}$ . Specifically, let $\mathcal{V}_q^{(k)}$ denote the set of qualities of thoughts generated by the small model $G_{q}$ , and $\mathcal{V}_p^{(k)}$ denote the set of qualities of thoughts generated by the large model (speculative model) $G_{p}$ . The threshold estimation method we employ is as follows:
+
+$$
+\tilde{\beta}^{(k + 1)} = \theta \tilde{\beta}^{(k)} + (1 - \theta) \max\left(\mathcal{V}_{p}^{(k)} \cup \mathcal{V}_{q}^{(k)}\right). \tag{50}
+$$
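+A direct transcription of Eq. (50), assuming the step's thought qualities are collected into two Python lists (names are illustrative):
+
+```python
+def update_threshold(beta_prev, qualities_large, qualities_small, theta=0.9):
+    """EMA threshold update of Eq. (50): blend the previous threshold with
+    the maximum quality observed from either model at the current step."""
+    step_max = max(list(qualities_large) + list(qualities_small))
+    return theta * beta_prev + (1.0 - theta) * step_max
+```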
+
+# H.2.4. SPECULATIVE MODEL SERIAL THOUGHT GENERATION
+
+If the number of accepted thoughts from the small model is less than $N$ after filtering through the acceptance-rejection mechanism, we use a speculative model to serially generate additional thoughts until the total number of thoughts reaches $N$.
+
+# H.3. Hyperparameters
+
+SpecSearch In our experiments, unless otherwise specified, we set the EMA weight $\theta$ in SpecSearch to 0.9.
+
+Beam Search In our experiments, unless otherwise specified, we set the tree width to 6, the tree depth to 50, and the beam size to 2 in the Beam Search.
+
+MCTS In our experiments, unless otherwise specified, we set the tree width to 6, the tree depth to 50, and the iteration number to 4 in the MCTS.
+
+# I. More Results
+
+# I.1. More Main Evaluation
+
+We conduct comprehensive evaluations across three distinct dataset categories to rigorously demonstrate the efficiency and generalizability of SpecSearch. Specifically, these include: (1) the full GSM8K dataset comprising 1,319 problems; (2) more challenging mathematical reasoning benchmarks, namely the AIME and Olympiad datasets; and (3) a code-generation benchmark. As illustrated in Tables 4, 5, 6, and 7, SpecSearch consistently and significantly surpasses state-of-the-art approaches across all three dataset categories, achieving speedups ranging from $2.04 \times$ to $2.84 \times$ while maintaining
+
+Table 6. Olympiad Bench. Evaluation on the Olympiad Bench (OE-TO-maths-zh-CEE) dataset. The results demonstrate that SpecSearch achieves comparable accuracy while significantly reducing inference latency.
+
+Qwen models, dataset: Olympiad Bench.
+
+| Methods | Reasoning Accuracy (%) | Average Inference Latency (s) | Speedup (vs AR) | Speedup (vs SpS) |
+| --- | --- | --- | --- | --- |
+| AR | 63.75 | 358.44 | NA | 0.67 |
+| SpS | 58.75 | 241.80 | 1.48 | NA |
+| SpecSearch (Ours) | 58.75 | 176.02 | 2.04 | 1.37 |
+
+Table 7. Code-Generation Benchmark. Evaluation on the HumanEval dataset. The results show that SpecSearch achieves comparable accuracy while significantly reducing inference latency.
+
+Qwen models, coding dataset: HumanEval.
+
+| Methods | Reasoning Accuracy (%) | Average Inference Latency (s) | Speedup (vs AR) | Speedup (vs SpS) |
+| --- | --- | --- | --- | --- |
+| AR | 85.37 | 342.18 | NA | 0.65 |
+| SpS | 84.15 | 223.30 | 1.53 | NA |
+| SpecSearch (Ours) | 85.37 | 158.43 | 2.16 | 1.41 |
+
+comparable reasoning accuracy. These findings highlight SpecSearch's versatility and robustness, demonstrating substantial improvements in inference speed with minimal or no compromise in accuracy across diverse tasks.
+
+Setup. Throughout our experiments, we utilize quantized versions of Qwen2.5-72B-Instruct and Qwen2.5-7B-Instruct as the large and small language models, respectively. Additionally, we incorporate MATH-psa as the Process Reward Model and employ beam search as the search algorithm.
+
+# Results.
+
+(1) Full GSM8K Dataset (1,319 Problems): SpecSearch achieves a substantial $2.84 \times$ speedup compared to the AR baseline, with only a minimal accuracy reduction of $0.83\%$ . This result highlights SpecSearch's capability to effectively scale to larger problem sets while preserving high reasoning accuracy.
+(2) High-Difficulty Mathematics (AIME and Olympiad Bench): We conduct experiments on the AIME and Olympiad Bench (OE-TO-maths-zh-CEE) datasets. Notably, SpecSearch maintains identical accuracy to the SpS method while achieving speedups of $1.21 \times$ and $1.37 \times$ , respectively. These results demonstrate the method's effectiveness in handling challenging, competition-level mathematics problems.
+(3) Code Generation (HumanEval): To assess SpecSearch beyond mathematical reasoning, we evaluate its performance on the HumanEval code-generation benchmark. The results show that SpecSearch achieves a $2.16 \times$ speedup over the AR baseline without any reduction in accuracy. Furthermore, it surpasses SpS by $1.22\%$ in accuracy while simultaneously delivering a $1.41 \times$ speedup. These results underscore SpecSearch's strong generalization capabilities across diverse domains.
+
+# I.2. More Motivating Results
+
+Reward Distribution Across Reasoning Steps We analyze the reward distributions across different reasoning steps in our experiments. Figure 7 shows the reward distribution for each step in the reasoning path. The figure illustrates that the average reward decreases as the reasoning process moves from initial steps to later stages.
+
+Initially, reasoning steps tend to yield higher rewards because they are simpler and require less cognitive effort, allowing for higher thresholds. As the reasoning progresses, subsequent steps become more complex, resulting in lower average reward scores and necessitating lower thresholds. This pattern supports our approach of using dynamic thresholds.
+
+Similar Output Lengths Enable Effective Model Collaboration We calculate the average number of tokens generated by large and small models in a single reasoning step. Table 8 shows that both model types produce a similar number of tokens. This similarity suggests that the length of the thought or reasoning process at each step is comparable. This feature supports
+
+
+Figure 7. The distribution of rewards for generated thoughts decreases step by step.
+
+Table 8. The results demonstrate that the number of tokens generated by large and small models in a single reasoning step is comparable.
+
+| Model | Average number of tokens in a reasoning step |
+| --- | --- |
+| Qwen2.5-7B-Instruct | 59.088 |
+| Qwen2.5-1.5B-Instruct | 57.037 |
+
+collaboration between large and small models, as it implies they can efficiently divide tasks and work together within the same reasoning framework.
+
+# I.3. Case Study
+
+Case 1 Figure 8 shows a challenging case study. It illustrates how different reasoning steps vary in difficulty. In this process, identifying the curve type from its equation is an easy step, while transforming an equation from polar to Cartesian coordinates is more difficult.
+
+Case 2 Figure 9 presents a simple case study showing how different reasoning steps have different levels of difficulty. In this reasoning process, calculating $9900 + 1$ is an easy step while computing the square of 99 is a hard step.
+
+Figures 8 and 9 illustrate varying difficulty levels among reasoning steps. Simpler steps are efficiently handled by a small model, while more complex steps are managed by a large model. This division optimizes efficiency and accuracy throughout the reasoning process.
+
+Case 3 We select a problem from the GSM8K-100 dataset where SpecSearch made an error for a case study analysis. Figure 10 shows the PRM scores along the incorrect reasoning path. In this scenario, the first three reasoning steps are correct, but an error occurs at the fourth step, and all subsequent steps remain incorrect. Notably, the fourth step, despite being wrong, receives a high PRM score of 0.8916015625. This indicates that incorrect steps can mislead the PRM and prevent it from accurately identifying errors. This observation helps explain the small accuracy loss we observe with SpecSearch.
+
+# I.4. More Broad Compatibility Results
+
+In this section, we provide more results of the broad compatibility experiment. We conduct broad compatibility experiments on the MATH-100 dataset using the Qwen models. The results in Table 9 show the performance of SpecSearch and the baselines in different search algorithms and different thought evaluators. SpecSearch accelerates beam search and MCTS, outperforming baselines by reducing latency with minimal accuracy loss, and shows consistent performance across different PRMs, demonstrating broad applicability and generalization.
+
+This experimental result supplements the broad compatibility experiment in the main text, verifying that our method has broad applicability across different datasets.
+
+Table 9. The results demonstrate the broad compatibility of our SpecSearch with different search algorithms and PRMs on the MATH-100 dataset.
+
+Search algorithms — Beam Search:
+
+| Methods | Reasoning Accuracy (%) ↑ | Average Inference Latency (s) ↓ | Speedup (vs AR) ↑ | Speedup (vs SpS) ↑ |
+| --- | --- | --- | --- | --- |
+| AR | 87.00 | 275.78 | NA | 0.51 |
+| SpS | 88.00 | 141.55 | 1.95 | NA |
+| SpecSearch (Ours) | 87.00 | 82.35 | 3.35 | 1.72 |
+
+Search algorithms — MCTS:
+
+| Methods | Reasoning Accuracy (%) ↑ | Average Inference Latency (s) ↓ | Speedup (vs AR) ↑ | Speedup (vs SpS) ↑ |
+| --- | --- | --- | --- | --- |
+| AR | 93.00 | 523.54 | NA | 0.49 |
+| SpS | 91.00 | 257.62 | 2.03 | NA |
+| SpecSearch (Ours) | 90.00 | 171.59 | 3.05 | 1.50 |
+
+PRMs — Math-psa:
+
+| Methods | Reasoning Accuracy (%) ↑ | Average Inference Latency (s) ↓ | Speedup (vs AR) ↑ | Speedup (vs SpS) ↑ |
+| --- | --- | --- | --- | --- |
+| AR | 87.00 | 275.78 | NA | 0.51 |
+| SpS | 88.00 | 141.55 | 1.95 | NA |
+| SpecSearch (Ours) | 87.00 | 82.35 | 3.35 | 1.72 |
+
+PRMs — Math-Shepherd:
+
+| Methods | Reasoning Accuracy (%) ↑ | Average Inference Latency (s) ↓ | Speedup (vs AR) ↑ | Speedup (vs SpS) ↑ |
+| --- | --- | --- | --- | --- |
+| AR | 88.00 | 265.29 | NA | 0.55 |
+| SpS | 85.00 | 145.53 | 1.82 | NA |
+| SpecSearch (Ours) | 85.00 | 118.67 | 2.24 | 1.23 |
+
+Table 10. Sensitivity to Draft Models. We investigate SpecSearch's performance using multiple small draft models—Qwen2.5-3B-Instruct, Qwen2.5-1.5B-Instruct, and Qwen2.5-0.5B-Instruct. The results demonstrate that our method maintains stable accuracy while achieving significant latency reduction across various draft models.
+
+Dataset: GSM8K-100.
+
+| Methods | Reasoning Accuracy (%) | Average Inference Latency (s) | Speedup (vs AR) | Speedup (vs SpS) | Draft Acceptance Rate (%) |
+| --- | --- | --- | --- | --- | --- |
+| AR | 97 | 138.24 | NA | 0.50 | NA |
+| SpS (Draft-7B) | 97 | 69.43 | 1.99 | NA | NA |
+| SpecSearch (Ours, Draft-7B) | 96 | 48.18 | 2.87 | 1.44 | 49.19 |
+| SpecSearch (Ours, Draft-3B) | 96 | 63.48 | 2.18 | 1.09 | 44.54 |
+| SpecSearch (Ours, Draft-1.5B) | 95 | 53.49 | 2.58 | 1.30 | 45.79 |
+| SpecSearch (Ours, Draft-0.5B) | 96 | 49.54 | 2.79 | 1.40 | 35.48 |
+
+# I.5. Sensitivity Analysis to Draft Model's Size
+
+We have investigated SpecSearch's performance using multiple small draft models. The results in Table 10 reveal that SpecSearch achieves speedups ranging from $2.18 \times$ to $2.87 \times$ , underscoring its robust acceleration capabilities across diverse small-model settings.
+
+# I.6. More Ablation Study Results
+
+Sensitivity Analysis Hyperparameter $\theta$ , which controls the relative importance of reward information from the previous layer when updating the threshold for the current layer, is crucial for balancing between accuracy and latency. To understand the impact of hyperparameter $\theta$ on the performance of SpecSearch, we conduct a detailed sensitivity analysis focusing exclusively on this parameter.
+
+We vary $\theta$ across a range from $\theta_{\mathrm{min}} = 0.8$ to $\theta_{\mathrm{max}} = 0.95$ , with increments of $\Delta \theta = 0.05$ . For each value of $\theta$ , we evaluate SpecSearch using the MATH-50 dataset, ensuring that all other hyperparameters are held constant to isolate the effect of $\theta$ .
+
+The results in Table 11 show that the accuracy of SpecSearch remains largely unchanged when $\theta$ is large and latency decreases as $\theta$ increases. These findings suggest that while $\theta$ does not significantly affect accuracy, setting $\theta$ closer to 1 can lead to substantial improvements in computational efficiency without compromising the quality of the generated reasoning paths. This demonstrates the robustness of $\theta$ .
+
+Table 11. The results demonstrate that SpecSearch achieves similar average performance across a wide range of $\theta$ .
+
+Dataset: MATH-50.
+
+| Methods | Reasoning Accuracy (%) ↑ | Average Inference Latency (s) ↓ | Speedup (vs AR) ↑ |
+| --- | --- | --- | --- |
+| AR | 88.00 | 256.05 | NA |
+| SD | 90.00 | 132.68 | 1.93 |
+| SpecSearch (Ours, θ = 0.95) | 88.00 | 66.48 | 3.85 |
+| SpecSearch (Ours, θ = 0.9) | 88.00 | 70.63 | 3.63 |
+| SpecSearch (Ours, θ = 0.85) | 88.00 | 73.66 | 3.48 |
+| SpecSearch (Ours, θ = 0.8) | 88.00 | 83.29 | 3.07 |
+
+# Q&A Case 1
+
+# Question:
+
+Find the curve defined by the equation $r^2 \cos 2\theta = 4$ . Which of the following does this represent?
+
+(A) Line (B) Circle (C) Parabola (D) Ellipse (E) Hyperbola.
+
+Enter the letter of the correct option.
+
+# Reason:
+
+Step 1: Recall the polar to Cartesian coordinate transformations:
+
+$$
+x = r \cos \theta , y = r \sin \theta
+$$
+
+and the identity:
+
+$$
+\cos 2 \theta = \cos^ {2} \theta - \sin^ {2} \theta
+$$
+
+Step 2: Substitute $r\cos\theta$ and $r\sin\theta$ into the identity:
+
+$\rightarrow$ hard
+
+$$
+\cos 2 \theta = \left(\frac {x}{r}\right) ^ {2} - \left(\frac {y}{r}\right) ^ {2} = \frac {x ^ {2}}{r ^ {2}} - \frac {y ^ {2}}{r ^ {2}} = \frac {x ^ {2} - y ^ {2}}{r ^ {2}}
+$$
+
+Step 3: Substitute $\cos 2\theta$ back into the original equation:
+
+$$
+r ^ {2} \cos 2 \theta = 4 \Longrightarrow r ^ {2} \left(\frac {x ^ {2} - y ^ {2}}{r ^ {2}}\right) = 4 \Longrightarrow x ^ {2} - y ^ {2} = 4
+$$
+
+Step 4: Analyze the resulting Cartesian equation:
+
+$\rightarrow$ easy
+
+$$
+x ^ {2} - y ^ {2} = 4
+$$
+
+This is the standard form of a hyperbola. Specifically, it is a hyperbola centered at the origin with its transverse axis along the x-axis.
+
+So the answer is:
+
+E
+
+Figure 8. A difficult case study showing how different reasoning steps have different levels of difficulty.
+
+# Q&A Case 2
+
+# Question:
+
+Compute $99^2 + 99 + 1$ in your head.
+
+# Reason:
+
+Step 1: Calculate $99^2$ :
+
+$\rightarrow$ hard
+
+$$
+99^{2} = (100 - 1)^{2} = 100^{2} - 2 \cdot 100 \cdot 1 + 1^{2} = 10000 - 200 + 1 = 9801
+$$
+
+Step 2: Add 99 to $99^2$ :
+
+$$
+99^{2} + 99 = 9801 + 99 = 9900
+$$
+
+Step 3: Add 1 to the result:
+
+$\rightarrow$ easy
+
+$$
+9900 + 1 = 9901
+$$
+
+So the answer is:
+
+9901
+
+Figure 9. A simple case study showing how different reasoning steps have different levels of difficulty.
+
+# Q&A Case 3
+
+# Question:
+
+Raymond and Samantha are cousins. Raymond was born 6 years before Samantha. Raymond had a son at the age of 23. If Samantha is now 31, how many years ago was Raymond's son born?
+
+# Reason:
+
+Step 1: Identify the current age of Samantha: Samantha is currently 31 years old. [0.93701171875]
+
+Step 2: Determine Raymond's age when his son was born: Raymond had a son at the age of 23. [0.91845703125]
+
+Step 3: Find the age difference between Raymond and Samantha: We are told that Raymond was born 6 years before Samantha. This means Samantha is 6 years younger than Raymond. [0.92529296875]
+
+Step 4: Calculate Raymond's current age: If Samantha is 31 and she is 6 years older than Raymond, then Raymond is currently $31 - 6 = 25$ years old. [0.8916015625]
+
+Step 5: Determine the age of Raymond when his son was born: Raymond had his son at the age of 23. [0.81982421875]
+
+Step 6: Calculate the number of years since Raymond's son was born: Since Raymond is currently 25 years old and he had his son at age 23, it has been $25 - 23 = 2$ years since Raymond's son was born. [0.56982421875]
+
+So the answer is:
+
+
+Figure 10. A case study showing the PRM score on the wrong reasoning path.
\ No newline at end of file
diff --git a/acceleratinglargelanguagemodelreasoningviaspeculativesearch/images.zip b/acceleratinglargelanguagemodelreasoningviaspeculativesearch/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..295706d3f4df1962b0db6eab33470a0090be2fc0
--- /dev/null
+++ b/acceleratinglargelanguagemodelreasoningviaspeculativesearch/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2847a510a56463b51f3c84eb2789fd9593984d34e65772ee4bd77013f818eff7
+size 1251975
diff --git a/acceleratinglargelanguagemodelreasoningviaspeculativesearch/layout.json b/acceleratinglargelanguagemodelreasoningviaspeculativesearch/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..43c7b7c6ae05b96ca5cfe6b762e9edfb4157e962
--- /dev/null
+++ b/acceleratinglargelanguagemodelreasoningviaspeculativesearch/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6090cecf7a41ed2cc99257c870c947ee196849d5551be82f5f0beefe460bf542
+size 1093460
diff --git a/acceleratingllminferencewithlosslessspeculativedecodingalgorithmsforheterogeneousvocabularies/6e68aa2b-d20f-464e-8322-2e96f58dc240_content_list.json b/acceleratingllminferencewithlosslessspeculativedecodingalgorithmsforheterogeneousvocabularies/6e68aa2b-d20f-464e-8322-2e96f58dc240_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..1033f225e72f2bee1128748ffcf07b87b199aa04
--- /dev/null
+++ b/acceleratingllminferencewithlosslessspeculativedecodingalgorithmsforheterogeneousvocabularies/6e68aa2b-d20f-464e-8322-2e96f58dc240_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:291823dc6413aea3da059a2a6376edbd49c978495a857f24b75b7528c71889a9
+size 171572
diff --git a/acceleratingllminferencewithlosslessspeculativedecodingalgorithmsforheterogeneousvocabularies/6e68aa2b-d20f-464e-8322-2e96f58dc240_model.json b/acceleratingllminferencewithlosslessspeculativedecodingalgorithmsforheterogeneousvocabularies/6e68aa2b-d20f-464e-8322-2e96f58dc240_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..99c149c1da76ba9ef9f96edf740889fdd2f6e0f3
--- /dev/null
+++ b/acceleratingllminferencewithlosslessspeculativedecodingalgorithmsforheterogeneousvocabularies/6e68aa2b-d20f-464e-8322-2e96f58dc240_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4d86b0d197c48507f25e70150a88db07b7276c5e22b27548c5a5536d38d36c05
+size 200610
diff --git a/acceleratingllminferencewithlosslessspeculativedecodingalgorithmsforheterogeneousvocabularies/6e68aa2b-d20f-464e-8322-2e96f58dc240_origin.pdf b/acceleratingllminferencewithlosslessspeculativedecodingalgorithmsforheterogeneousvocabularies/6e68aa2b-d20f-464e-8322-2e96f58dc240_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..296901e1dab7ca22b588e4fdae7d73c6dd19320f
--- /dev/null
+++ b/acceleratingllminferencewithlosslessspeculativedecodingalgorithmsforheterogeneousvocabularies/6e68aa2b-d20f-464e-8322-2e96f58dc240_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:49a63dafaad60cdfba3544ba3e49f53061e2f8605c2d90d293ef6d9b3b26b5b3
+size 681948
diff --git a/acceleratingllminferencewithlosslessspeculativedecodingalgorithmsforheterogeneousvocabularies/full.md b/acceleratingllminferencewithlosslessspeculativedecodingalgorithmsforheterogeneousvocabularies/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..057ba54d3b38e698461ce9a1fda0e676840f27f2
--- /dev/null
+++ b/acceleratingllminferencewithlosslessspeculativedecodingalgorithmsforheterogeneousvocabularies/full.md
@@ -0,0 +1,386 @@
+# Accelerating LLM Inference with Lossless Speculative Decoding Algorithms for Heterogeneous Vocabularies
+
+Nadav Timor1 Jonathan Mamou2 Daniel Korat2 Moshe Berchansky2 Gaurav Jain3 Oren Pereg2 Moshe Wasserblat2 David Harel1
+
+# Abstract
+
+Accelerating the inference of large language models (LLMs) is a critical challenge in generative AI. Speculative decoding (SD) methods offer substantial efficiency gains by generating multiple tokens using a single target forward pass. However, existing SD approaches require the drafter and target models to share the same vocabulary, thus limiting the pool of possible drafters, often necessitating the training of a drafter from scratch. We present three new SD methods that remove this shared-vocabulary constraint. All three methods preserve the target distribution (i.e., they are lossless) and work with off-the-shelf models without requiring additional training or modifications. Empirically, on summarization, programming, and long-context tasks, our algorithms demonstrate significant speedups of up to $2.8 \times$ over standard autoregressive decoding. By enabling any off-the-shelf model to serve as a drafter and requiring no retraining, this work substantially broadens the applicability of the SD framework in practice.
+
+# 1 Introduction
+
+Speculative decoding (SD; Leviathan et al., 2023; Chen et al., 2023) is an effective method for reducing the latency of LLM inference and increasing its throughput. A necessary condition for SD to be effective is that the drafter is sufficiently fast and accurate in approximating the target distribution (Timor et al., 2025; Chen et al., 2024). State-of-the-art verification methods for SD employ rejection sampling algorithms that are designed to work with a single vocabulary, where the draft tokens are sampled from the same vocabulary as the target tokens (Leviathan et al., 2023; Chen et al., 2023; Miao et al., 2024; Sun et al., 2024). However,
+
+1 Weizmann Institute of Science 2 Intel Labs 3 d-Matrix. Correspondence to: Nadav Timor .
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+often in practice, such drafters are not available—either because the target model is not part of a model family (examples of families include StarCoder, Li et al., 2023; Llama, Dubey et al., 2024; and DeepSeek, DeepSeek-AI et al., 2025) or because the smallest model in the same family remains too large and slow. An alternative approach—training a drafter from scratch (Zafrir et al., 2024)—is a challenging task that requires computational resources, data, time, and expertise. Even when such training succeeds, the resulting drafter cannot be reused for other models with different vocabularies.
+
+Our Contributions. We relax a key constraint of the speculative decoding (SD) framework—the requirement that the drafter must use the same vocabulary as the target model. By allowing heterogeneous vocabularies, we eliminate the need to train a drafter from scratch and enable any model to operate as a drafter, thereby significantly broadening the applicability of SD methods. By unlocking any off-the-shelf model to serve as a drafter, we were able to find drafters that are even more effective than drafters from the same model family. Our main contributions are:
+
+- Algorithm 2 (String-Level Exact Match, SLEM): An algorithm that uses plain text as a shared intermediate representation between the draft and target vocabularies, enabling exact matching of tokens. It solves the problem of non-injective tokenizers (Section 3.2) to support any off-the-shelf model pair. We evaluate the algorithm on summarization, programming, and long-context tasks, demonstrating robust speedups of up to $2.8 \times$ over autoregressive decoding.
+- Algorithm 4 (Token-Level Intersection, TLI): A purely token-based approach that adjusts the drafter's distribution to sample only from the intersection between the two vocabularies and employs the standard SD verification method. We prove theoretically that this approach outperforms a simple "union" strategy by increasing the probability of accepting tokens (Theorem 4.1). Empirically, Algorithm 4 demonstrates significant speedups of up to $1.7 \times$ over autoregressive decoding.
+- Algorithm 3 (String-Level Rejection Sampling, SLRS): A novel verification mechanism that implements rejection sampling at the string level instead of the token level. We prove that it is lossless (Theorem 3.2) and guarantees higher expected acceptance rates than string-level exact matching, under the same target distribution (Theorem 3.1). Our theoretical and empirical analysis shows rapid growth in computational cost for vocabularies with longer tokens, thus making this method most suitable for drafters with shorter tokens (Section 3.4).
+
+We merged our open-source implementation of Algorithm 2 and Algorithm 4 into Hugging Face Transformers (Wolf et al., 2020), the most popular LLM library, with more than 378,000 repositories and 6,000 open-source packages that depend on it. Independently of our benchmarks, Hugging Face's core maintainers have thoroughly evaluated the effectiveness of SLEM and TLI (Algorithms 2 and 4) and found our methods to be the most effective among all the speculative decoding algorithms they have previously supported—over various use cases and hardware setups. As a result, they made SLEM and TLI the default for heterogeneous SD in Hugging Face Transformers.
+
+All our algorithms are lossless, namely, outputs preserve the target distribution, and we provide acceptance rate expectations (Table 3) and other bounds. Our experiments—covering summarization, programming, and long-context tasks—demonstrate speedups versus autoregressive decoding. By open-sourcing these methods via Hugging Face Transformers, we have already enabled immediate, practical acceleration of LLMs under heterogeneous vocabularies—a scenario that is increasingly common in real-world deployments.
+
+# 2 Motivating Examples
+
+Existing SD methods are designed to work with a single vocabulary, where the drafter samples from the same vocabulary as the target model. As an example, see Algorithm 5, which is the standard SD algorithm proposed by Leviathan et al. (2023); Chen et al. (2023).
+
+Algorithm 1 offers a simple way to extend these methods to operate in cases where the drafter's vocabulary differs from the target's, by virtually extending both vocabularies to their union. For example, consider the case of disjoint vocabularies where the target vocabulary is $T = \{\text{'a'}\}$ and the draft vocabulary is $D = \{\text{'b'}\}$ . Although all the draft tokens 'b' are rejected, the target distribution is preserved because we use the standard verification method of SD, which is lossless as proved in Leviathan et al. (2023); Chen et al. (2023). Even if one vocabulary is a proper subset of the other, for example, if $T = \{\text{'a'}, \text{'b'}\}$ and $D = \{\text{'b'}\}$ , or if $T = \{\text{'a'}\}$ and $D = \{\text{'a'}, \text{'b'}\}$ , the target distribution is still preserved thanks to the guarantee of the standard verification method.
+
+Algorithm 1 An iteration of speculative decoding for heterogeneous vocabularies with a simple "union" strategy
+
+1: Input: Probability distributions $p$ and $q$ over vocabularies $T$ and $D$ , respectively. Drafting lookahead $i \in \mathbb{N}$ . An input prompt $c$ .
+2: Output: A sequence of tokens from $T$ , containing between 1 and $i + 1$ tokens.
+3: Procedure:
+4: Define probability distributions $p'$ and $q'$ over the vocabulary $T \cup D$ as follows. $p'(x) = p(x)$ if $x \in T$ and $p'(x) = 0$ otherwise. $q'(x) = q(x)$ if $x \in D$ and $q'(x) = 0$ otherwise.
+5: Run Algorithm 5 with $p', q', i, c$ .
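+Step 4 amounts to zero-padding both distributions onto the union vocabulary. A dict-based sketch (illustrative; real implementations operate on logit tensors):
+
+```python
+def union_distributions(p, q):
+    """Lift target p (over T) and drafter q (over D) to the union of their
+    vocabularies. Tokens outside a model's own vocabulary get probability 0."""
+    union = set(p) | set(q)
+    p_ext = {tok: p.get(tok, 0.0) for tok in union}
+    q_ext = {tok: q.get(tok, 0.0) for tok in union}
+    return p_ext, q_ext
+```
+
+In the disjoint example with $T = \{\text{'a'}\}$ and $D = \{\text{'b'}\}$ , every draft token 'b' has $p'(\text{'b'}) = 0$ and is therefore always rejected by the standard verification step.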
+
+As long as $p(t) \leq q(t)$ for all $t \in T$ , where $p$ is the target and $q$ is the drafter, the simple approach of Algorithm 1 is optimal in terms of maximizing the probability of accepting a draft token. However, this condition fails if $\exists d \in D$ such that $q(d) > 0$ and $d \notin T$ , because we then have $\sum_{t \in T} q(t) < 1$ . Although the simple approach of Algorithm 1 to extend Algorithm 5 preserves the target distribution (the verification method remains unchanged), it might not yield the maximum probability of accepting a draft token (see Theorem 4.1). Below, we present Algorithm 4, which improves on Algorithm 1 by adjusting the distribution of the drafter such that the probability of sampling tokens that are not in the target vocabulary is zero. This adjustment normalizes the distribution of the drafter such that the probabilities of the tokens that are in the target vocabulary sum to one. For example, if $T = \{\text{'a'}, \text{'b'}\}$ and $D = \{\text{'a'}, \text{'b'}, \text{'c'}\}$ where $q(\text{'a'}) = q(\text{'b'}) = q(\text{'c'}) = \frac{1}{3}$ , we adjust the distribution of the drafter to be $q'(\text{'a'}) = q'(\text{'b'}) = \frac{1}{2}$ and $q'(\text{'c'}) = 0$ . This approach increases the probability of accepting a draft token while still preserving the target distribution, as Theorem 4.1 proves. However, the expected acceptance rate of both Algorithm 1 and Algorithm 4 might still be suboptimal in other cases. For example, consider the case where $T = \{\text{'a'}\}$ and $D = \{\text{'a'}, \text{'aa'}\}$ and there is a nonzero probability that the drafter samples the token 'aa'. Algorithm 1 and Algorithm 4 are suboptimal because they always reject the token 'aa'. In this example, it is easy to see that both models can only generate concatenations of the token 'a', hence the token 'aa' should have been accepted, unless it is the last token to generate. Below, we also present Algorithm 2, which solves this problem by allowing the drafter to sample tokens that are not in the target vocabulary. Algorithm 2 preserves the target distribution because it replaces the standard verification method, which guarantees that the output tokens distribute according to the target distribution, with exact matching, which guarantees that the output tokens are exactly the target tokens.
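The drafter adjustment described above can be sketched in a few lines. Here `project_drafter` is a hypothetical helper name, and distributions are plain dictionaries rather than model logits:

```python
def project_drafter(q, target_vocab):
    # Zero out draft tokens outside the target vocabulary and renormalize
    # so the probabilities of tokens in the target vocabulary sum to one.
    mass = sum(prob for tok, prob in q.items() if tok in target_vocab)
    if mass == 0.0:
        raise ValueError("empty intersection: no draft token is a target token")
    return {tok: (prob / mass if tok in target_vocab else 0.0)
            for tok, prob in q.items()}

# The worked example above: T = {'a', 'b'}, D = {'a', 'b', 'c'}, uniform q.
q = {"a": 1 / 3, "b": 1 / 3, "c": 1 / 3}
q_prime = project_drafter(q, {"a", "b"})
# q_prime: {'a': 0.5, 'b': 0.5, 'c': 0.0}
```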
+
+# 3 Speculative Decoding for Heterogeneous Vocabularies with String-Level Verification
+
+Notation. Vocabularies are finite sets of strings, also called tokens. We say that a string $a$ is expressible in a vocabulary $B$ if there exist strings $b_{1}, b_{2}, \ldots, b_{n} \in B$ such that $a = b_{1} \oplus b_{2} \oplus \ldots \oplus b_{n}$ , where $\oplus$ denotes string concatenation. We say that a vocabulary $A$ is expressible in a vocabulary $B$ if all strings in $A$ are expressible in $B$ , and denote this relationship by $A \twoheadrightarrow B^{*}$ , where $B^{*}$ is the Kleene closure of $B$ under string concatenation. Tokenizing a string $s$ with respect to a vocabulary $A$ is the process of partitioning $s$ into a sequence of tokens $a_{1}, a_{2}, \ldots, a_{n}$ , where $a_{1}$ is the longest prefix of $s$ that is a token in $A$ , $a_{2}$ is the longest prefix of $s$ that is in $A$ after removing $a_{1}$ , and so on. The tokenization of $s$ with respect to $A$ is a finite sequence of tokens, denoted by $A(s)$ . The $i$ -th token of $A(s)$ is denoted as $A(s)_i \in A$ .
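The longest-prefix tokenization defined above can be written directly as a brute-force sketch (real tokenizers use tries or learned merges, but the greedy rule is the same):

```python
def tokenize(s, vocab):
    # Greedy longest-prefix tokenization of s with respect to `vocab`:
    # repeatedly strip the longest prefix of s that is a token.
    tokens = []
    while s:
        prefix = next((s[:k] for k in range(len(s), 0, -1) if s[:k] in vocab), None)
        if prefix is None:
            raise ValueError(f"{s!r} is not expressible in this vocabulary")
        tokens.append(prefix)
        s = s[len(prefix):]
    return tokens

# Over a vocabulary containing 'a', 'ab', and 'c', the string 'abc' is
# partitioned by taking the longest matching prefix 'ab' first, then 'c'.
```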
+
+# 3.1 String-Level Exact Match (SLEM)
+
+Algorithm 2 is one solution to the problem of heterogeneous vocabularies. It implements a variant of SD with the verification method of exact matching. The key mechanism involves translating tokens bidirectionally between the draft and target vocabularies. Tokens generated by the drafter are first decoded into text and subsequently re-tokenized using the target model's vocabulary. After the target model verifies the generated tokens, the sequence is converted back into the drafter's tokenization format for the next iteration. This process ensures that the target model's distribution is preserved while allowing the drafter to operate within its own vocabulary constraints.
+
+Vocabulary Constraints. Algorithm 2 assumes that the target vocabulary $T$ is expressible in the draft vocabulary $D$ , i.e., $T \twoheadrightarrow D^*$ . Additionally, it assumes $D^* \twoheadrightarrow T^*$ , namely, every concatenation of draft tokens, $d_1 \oplus \ldots \oplus d_i$ for some $i$ , must be expressible by concatenations of target tokens $t_1 \oplus t_2 \oplus \ldots \oplus t_m \in T^*$ for some $m$ , i.e., $T(d_1 \oplus \ldots \oplus d_i) \neq \emptyset$ in line 7. If these conditions do not hold, converting strings from one vocabulary to another becomes undefined, leading to a decreased acceptance rate and rendering the algorithm ineffective. In practice, assuming $T \twoheadrightarrow D^*$ and $D^* \twoheadrightarrow T^*$ is reasonable due to the way vocabularies are typically constructed. The process of constructing a vocabulary often begins by determining its size, i.e., the number of tokens it contains. Informally, vocabularies are designed to maximize the frequency of token appearances in a given corpus, avoid splitting frequently co-occurring tokens, or both. Known tokenization methods such as BPE (Sennrich et al., 2016), WordPiece (Schuster & Nakajima, 2012), Unigram (Kudo, 2018), and SentencePiece (Kudo & Richardson, 2018) are heuristic and greedy approaches that generate vocabularies containing all the characters of the alphabet in the given
+
+Algorithm 2 (SLEM), an iteration of speculative decoding for heterogeneous vocabularies with string-level exact match verification
+
+1: Input: Target model $p$ and drafter model $q$ over vocabularies $T$ and $D$ , respectively, where $T \twoheadrightarrow D^{*}$ and $D^{*} \twoheadrightarrow T^{*}$ . Drafting lookahead value $i \in \mathbb{N}$ . A prompt $c \in T^{*}$ .
+
+2: Output: A non-empty sequence of accepted tokens from $T$ .
+
+3: Procedure:
+
+4: Tokenize the prompt to the draft vocabulary, $D(c)$ .
+
+5: For $j\gets 1,\ldots ,i$
+
+6: Sample a draft token from the drafter conditioned on the prompt and previous draft tokens, $d_{j} \sim q_{D(c) \oplus d_{1} \oplus \ldots \oplus d_{j-1}}$ (where $d_{0} := c$ ).
+
+7: Tokenize the concatenation of the draft tokens, $(t_{1},t_{2},\ldots ,t_{m})\gets T(d_{1}\oplus \ldots \oplus d_{i})$ .
+
+8: With data parallelism (batching), compute via one target forward pass the $m + 1$ logits of the target model conditioned on the prompt and all the draft continuations, $p_{T(c)}, p_{T(c) \oplus t_1}, \dots, p_{T(c) \oplus t_1 \oplus \dots \oplus t_m}$ .
+
+9: Sample a token from each logit, $t_1' \sim p_{T(c)}, t_2' \sim p_{T(c) \oplus t_1}, \dots, t_{m+1}' \sim p_{T(c) \oplus t_1 \oplus \dots \oplus t_m}$ .
+
+10: Find the first index where the draft differs from the target, $j \coloneqq \arg \min_{j \in \{1, \dots, m + 1\}} t_j' \neq t_j$ .
+
+11: Accept $t_1, t_2, \ldots, t_{j-1}, t_j'$ .
+
+corpus when the vocabulary size is greater than the alphabet cardinality, which is often the case (see Table 8 for examples). Typically, the corpus used for constructing a vocabulary comprises extensive texts, such as books or collections of documents. Unless the target and draft tokenizers are constructed using a narrow corpus, it is reasonable to assume $T \twoheadrightarrow D^{*}$ and $D^{*} \twoheadrightarrow T^{*}$ because both vocabularies usually include all the characters of the alphabet, hence satisfying even stronger relations of the form $T \twoheadrightarrow D^{*}$ and $D \twoheadrightarrow T^{*}$ .
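The control flow of Algorithm 2 can be sketched with toy callables standing in for the models. The names `target_sample`, `drafter_sample`, and `target_tokenize` are illustrative stand-ins, not the paper's implementation, and the batched forward pass of line 8 is simulated by a loop:

```python
def slem_iteration(target_sample, drafter_sample, target_tokenize, prompt, lookahead):
    # Lines 5-6: sample `lookahead` draft tokens autoregressively (as text).
    draft_text = ""
    for _ in range(lookahead):
        draft_text += drafter_sample(prompt + draft_text)
    # Line 7: re-tokenize the drafted string into the target vocabulary.
    draft_targets = target_tokenize(draft_text)
    # Lines 8-10: the target proposes a token at every position; in practice
    # this is a single batched forward pass, simulated here by a loop.
    accepted = []
    context = target_tokenize(prompt)
    for j, t in enumerate(draft_targets + [None]):
        t_prime = target_sample(context + draft_targets[:j])
        if t_prime != t:              # first mismatch, or the extra "bonus" slot
            accepted.append(t_prime)  # line 11: accept the target's correction
            break
        accepted.append(t)
    return accepted
```

With a drafter and target that both deterministically emit 'a', every draft token matches exactly and one bonus token is appended, so a lookahead of 2 yields three accepted tokens.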
+
+# 3.2 Non-Injective Tokenizers
+
+A common issue with tokenizers is that they do not always implement an injective function, meaning that for any given string $s$ , it is possible for $s \neq \text{decode}(\text{encode}(s))$ . This can occur due to so-called "normalization steps" or "pre-tokenization rules" that discard certain details of the input text. In practice, common examples include tokenizers that treat multiple spaces as a single space, lowercase all characters, or replace accented characters with their standard counterparts, such as 'é' being replaced by 'e'. In standard autoregressive decoding or speculative decoding, where the target and draft vocabularies are the same, we tokenize the input prompt $c$ into tokens only once at the beginning of the decoding process. Conditioned on the encoded prompt, we sample $N$ tokens $t_1, t_2, \ldots, t_N$ directly from the target
+
(autoregressive decoding) or using a rejection sampling procedure with draft tokens (speculative decoding). Then, we return the string $c \oplus t_1 \oplus t_2 \oplus \ldots \oplus t_N$ . Since language models output token IDs, returning this string requires decoding each of the output tokens $t_1, t_2, \ldots, t_N$ from its ID back into text; concatenating them with the prompt then yields $c \oplus t_1 \oplus t_2 \oplus \ldots \oplus t_N$ . Pre-tokenization rules are only applied to the input prompt $c$ once, before applying the model, and therefore they limit the ability of the model to distinguish between different input strings $c \neq c'$ that are equivalent under pre-tokenization rules, namely, $T(c) = T(c')$ given a non-injective tokenizer $T$ . This behavior is not necessarily problematic, and has been used in practice for a long time. It is important to note that the pre-tokenization rules are not directly applied on the output tokens $c, t_1, t_2, \ldots, t_N$ that are concatenated to form the final output string. That is, pre-tokenization rules do not alter the tokens $t_1, t_2, \ldots, t_N$ after these tokens are sampled. The final returned string starts with the given prompt $c$ without any modifications and ends with a concatenation of the sampled tokens $t_1 \oplus t_2 \oplus \ldots \oplus t_N$ . Unlike decoding over homogeneous vocabularies—where the target vocabulary $T$ and the draft vocabulary $D$ are the same—in decoding over heterogeneous vocabularies, we may have $T \neq D$ , which limits the ability of the target and drafter models to communicate token IDs. Algorithm 2 employs plain text as an intermediate representation that is shared between the two different vocabularies. This means that the output tokens $t_1, t_2, \ldots, t_N$ are decoded back into text and then re-tokenized using the draft vocabulary in line 4.
This process may apply pre-tokenization rules to the output tokens, which can lead to a discrepancy between the output tokens and the target tokens. To evaluate whether various tokenizers exhibit injectivity on a specific dataset, we conduct a simple experiment that heuristically tests the consistency of the decoding and encoding, as detailed in Appendix F. Our findings indicate that some commonly used tokenizers do not maintain injectivity even when tested heuristically on a specific dataset. When we developed and tested Algorithm 2, we found that the non-injective behavior of tokenizers significantly impacted the algorithm's acceptance rate. To address this issue and broaden the applicability of Algorithm 2 to a wider range of tokenizers, we propose the following simple solution.
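The round-trip consistency test can be mimicked with a toy check. The space-collapsing tokenizer below is a contrived example of a pre-tokenization rule, not any specific real tokenizer:

```python
import re

def roundtrip_violations(encode, decode, samples):
    # Report every sample string s for which decode(encode(s)) != s,
    # i.e. the tokenizer is not injective on the sample set.
    return [s for s in samples if decode(encode(s)) != s]

# A toy tokenizer whose pre-tokenization collapses runs of spaces:
encode = lambda s: re.sub(r" +", " ", s).split(" ")
decode = lambda tokens: " ".join(tokens)

violations = roundtrip_violations(encode, decode, ["a b", "a  b"])
# 'a  b' decodes back to 'a b', so it is flagged; 'a b' round-trips cleanly.
```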
+
+Algorithm 2 Supports Non-Injective Tokenizers. Given a prompt $c \in T^*$ , Algorithm 2 starts by tokenizing it into the draft vocabulary, $D(c)$ , in line 4. The prompt is also tokenized into the target vocabulary, $T(c)$ , to allow the target model to compute the logits in line 8. Line 7 tokenizes into the target vocabulary the concatenation of the $i$ draft tokens that are previously sampled from the drafter, namely, computes $T(d_1 \oplus \ldots \oplus d_i)$ . Since the output of Algorithm 2 is in the target vocabulary, following runs of Algorithm 2 can use the output as-is without decoding it back into text. Only in the last run do we need to decode the output of Algorithm 2 back into text before returning the final string. Because each tokenizer might apply different normalization rules, there can be a mismatch between what the target model sees and what the drafter model intended to produce. To handle these mismatches, we look for the longest stretch of matched tokens between the tokens we already accepted in the target tokenizer's space and the newly proposed tokens re-encoded in the target tokenizer's space. Conceptually, this search procedure is a way of finding the largest overlap (or suffix match) between the old and new sequences. We then take only the suffix of the new tokens that falls beyond that overlap. This effectively aligns the newly added tokens to the correct place in the target-token space. The algorithm can "look behind" a small number of tokens to try to realign the sequences. By doing so, we mitigate the effect of the mismatch and preserve as much of the previously decoded text as possible. We provide the implementation in the Supplementary Material.
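The overlap search can be sketched as a bounded longest suffix/prefix match. The function name and the `max_lookbehind` default are illustrative choices, not the paper's implementation:

```python
def realign(accepted, proposed, max_lookbehind=5):
    # Find the longest suffix of `accepted` that equals a prefix of
    # `proposed`, looking behind at most `max_lookbehind` tokens, and
    # return only the genuinely new suffix of `proposed`.
    for k in range(min(max_lookbehind, len(accepted)), 0, -1):
        if accepted[-k:] == proposed[:k]:
            return proposed[k:]
    return proposed  # no overlap found: keep the whole proposal
```

For example, if the last two accepted tokens reappear at the head of the re-encoded proposal, only the tokens beyond that overlap are appended.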
+
+KV Caching. Storing the KV cache of models is a common practice that has been shown to be crucial for efficient inference (Pope et al., 2023; Kwon et al., 2023). In particular, without KV caching, the additional number of operations (e.g., floating-point operations) required for the decoding might grow quadratically with respect to the number of tokens in the context for self-attention transformers. Algorithm 2 implements only a single iteration of SD. SD over heterogeneous vocabularies that is based on Algorithm 2 therefore may include multiple runs of Algorithm 2. These runs are sequential and autoregressive, namely, the output of each run of Algorithm 2 is used as the input for the next run of Algorithm 2. Therefore, implementations of Algorithm 2 should store the KV cache from one run of Algorithm 2 to the next run. With KV caching, the prompt $c$ needs to be encoded into the target and draft vocabularies only once, during the first run of Algorithm 2 (that is, the first iteration, also referred to as "pre-filling"), to facilitate line 8 and line 4, respectively.
+
+# 3.3 Verification via Rejection Sampling
+
+The standard verification method of SD guarantees that the output tokens are distributed according to the target distribution, but it does not guarantee that the output tokens are exactly the target tokens, as in exact matching. For example, if the drafter is another instance of the target model $p$ , the standard verification method of SD will accept all the draft tokens because, in general, the expected acceptance rate is $\sum_{t\in T}\min \left\{p(t),q(t)\right\}$ for any drafter $q$ and vocabulary $T$ , according to Leviathan et al. (2023). Hence, the expected acceptance rate of a drafter that is an instance of the target model is $\sum_{t\in T}p(t) = 1$ . For any drafter different from the target model, $q\neq p$ , the expected acceptance rate is strictly lower than one. Theorem 3.1 proves that, in general, for any non-trivial target distribution $p$ , the expected acceptance rate of exact matching is strictly less than the expected acceptance rate of SD for homogeneous vocabularies under the same target distribution.
+
+Theorem 3.1. Let $p$ be a non-trivial target probability distribution over a vocabulary $T$ , where there exist $t_1, t_2 \in T$ such that $p(t_1) \neq p(t_2)$ . Let $q$ be the drafter probability distribution over the same vocabulary $T$ . If $q = p$ , namely, the drafter is another instance of the target model, then the expected acceptance rate of the exact matching method $\alpha_{EM}$ is strictly less than the expected acceptance rate of the standard speculative decoding method $\alpha_{SD}$ . Namely, it holds that $\alpha_{EM} < \alpha_{SD}$ .
+
+Proof. See Appendix G.
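A quick numeric illustration of Theorem 3.1 (not a proof): with $q = p$, standard SD accepts with probability $\sum_{t} \min\{p(t), q(t)\} = 1$, while exact matching accepts only when two independent samples coincide, i.e., with probability $\sum_{t} p(t) q(t)$, which is below one for any non-trivial $p$:

```python
p = {"a": 0.7, "b": 0.2, "c": 0.1}  # a non-trivial target distribution
q = dict(p)                          # drafter is another instance of the target

alpha_sd = sum(min(p[t], q[t]) for t in p)  # standard SD: accepts everything
alpha_em = sum(p[t] * q[t] for t in p)      # exact match: two samples coincide
# alpha_sd is 1 while alpha_em is about 0.54, so alpha_em < alpha_sd.
```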
+
+Since Algorithm 2 implements exact matching verification, its expected acceptance rate is relatively low compared to the standard verification method of SD, which implements a rejection sampling procedure. To increase the acceptance rate of Algorithm 2, we propose Algorithm 3, introducing a novel verification method that employs lossless rejection sampling at the string level. Algorithm 3 samples draft tokens autoregressively from the drafter until a lookahead condition is satisfied, then tokenizes the concatenation of the draft tokens into the target vocabulary. It is lossless, as Theorem 3.2 proves, because it uses the same structure as the standard verification method of SD, which is lossless, as proved in Leviathan et al. (2023); Chen et al. (2023). The primary difference is that the probabilities are for generating a certain string rather than a single token.
+
+Algorithm 3 (SLRS), string-level rejection sampling verification for speculative decoding with heterogeneous vocabularies
+
+1: Input: Probability distributions $p$ and $q$ over vocabularies $T$ and $D$ , respectively, where $T \twoheadrightarrow D^{*}$ and $D^{*} \twoheadrightarrow T^{*}$ . Lookahead indicator function $S_{1}$ from the current state to a boolean value.
+2: Output: A token from $T$ .
+3: Procedure:
+4: Sample $d_1, \ldots, d_i \sim q$ until $i$ satisfies $S_{1}(i)$ .
+5: Tokenize $(t_{1}, t_{2}, \ldots, t_{m}) \gets T(d_{1} \oplus \ldots \oplus d_{i})$ .
+6: If $p(t_1) \geq \psi(t_1)$ , accept $t_1$ .
+7: Otherwise, accept $t_1$ with probability $\frac{p(t_1)}{\psi(t_1)}$ .
+8: Otherwise, reject $t_1$ , sample $t \sim \frac{p(t) - \min\{p(t), \psi(t)\}}{1 - \sum_{t'} \min\{p(t'), \psi(t')\}}$ , and return $t$ .
+
+Theorem 3.2. For any token in the target vocabulary $t \in T$ , Algorithm 3 outputs the token $t$ with probability $p(t)$ if we define $\psi(t) := \sum_{d_1, d_2, \ldots, d_i: t = T(d_1 \oplus \ldots \oplus d_i)_1} \prod_{j \in \{1, \ldots, i\}} q(d_j)$ .
+
+Namely, Algorithm 3 is lossless.
+
+Proof. See Appendix G.
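For toy vocabularies, $\psi(t)$ can be brute-forced. The sketch below makes the simplifying assumption of a fixed lookahead of exactly $n$ draft tokens, so $\psi$ is a distribution over first target tokens; Section 3.4 shows why this enumeration is expensive in general:

```python
from itertools import product

def first_target_token(s, target_vocab):
    # Greedy longest-prefix match, as in the tokenization of Section 3.
    for k in range(len(s), 0, -1):
        if s[:k] in target_vocab:
            return s[:k]
    return None

def psi_fixed_lookahead(q, target_vocab, n):
    # psi(t): total drafter probability that the concatenation of exactly n
    # draft tokens starts with target token t (brute-force enumeration).
    psi = {}
    for seq in product(q, repeat=n):
        t = first_target_token("".join(seq), target_vocab)
        prob = 1.0
        for d in seq:
            prob *= q[d]
        psi[t] = psi.get(t, 0.0) + prob
    return psi

# Toy example: T = {'a', 'b', 'ab'}, uniform drafter over D = {'a', 'b'}.
psi = psi_fixed_lookahead({"a": 0.5, "b": 0.5}, {"a", "b", "ab"}, 2)
# Draft pairs: 'aa'->'a', 'ab'->'ab', 'ba'->'b', 'bb'->'b', so
# psi = {'a': 0.25, 'ab': 0.25, 'b': 0.5}; line 6 compares p(t_1)
# against these masses.
```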
+
+Lookahead. The lookahead controls a tradeoff between the probability of accepting a token and the number of drafter forwards, since every sampling of a draft token requires computing a forward pass of the drafter. The lookahead indicator function $S_{1}$ determines whether the algorithm should stop sampling draft tokens. Naively, we can set $S_{1}(i) := \mathbb{1}[i > n]$ for some threshold $n \in \mathbb{N}$ , and stop sampling draft tokens after $n$ draft tokens have been sampled in line 4 of Algorithm 3. On one hand, increasing the threshold $n$ necessarily increases the number of drafter forwards that Algorithm 3 requires. On the other hand, selecting a larger value of $n$ may increase the probability that Algorithm 3 accepts a token because it may increase the number of feasible values of $t_1$ in line 5. Small values of $n$ may lead to scenarios where some target tokens are never accepted. For example, if the target vocabulary $T$ includes a token $t$ with ten characters, and the longest token in the draft vocabulary $D$ is four characters, selecting $n < 3$ will never accept $t$ . However, since increasing $n$ also increases the number of drafter forward passes, it is important to select a value of $n$ that optimizes our objective function, which is, most commonly, maximizing the throughput of the inference or minimizing its latency. Target tokens $t \in T$ may correspond to more than one sequence of draft tokens $d_1, \ldots, d_i \in D$ for which the tokenized concatenation $T(d_1 \oplus \ldots \oplus d_i)$ starts with $t$ , namely, $T(d_1 \oplus \ldots \oplus d_i)_1 = t$ . These cases are common in practice, especially for a target vocabulary $T$ that is larger and includes longer tokens than the draft vocabulary $D$ . For example, consider a draft vocabulary $D = \{\text{'hello\_'}, \text{'world'}, \text{'wo'}, \text{'rld'}\}$ and a target vocabulary $T = D \cup \{\text{'hello\_world'}\}$ . The target token 'hello_world' is the first token in the tokenized concatenation of two different sequences of draft tokens: $\text{'hello\_world'} = T(\text{'hello\_'} \oplus \text{'world'})_1 = T(\text{'hello\_'} \oplus \text{'wo'} \oplus \text{'rld'})_1$ . In fact, there are infinitely many sequences of draft tokens that start with 'hello_world'. Since Algorithm 3 uses only the first target token $T(d_1 \oplus \ldots \oplus d_i)_1$ , it is redundant to sample more than three draft tokens in this example. Moreover, if the first two draft tokens are 'hello_' and 'world', there is no need to sample a third token since the first target token has already been determined. To capture this behavior and avoid unnecessary drafter forwards during inference time, we can calculate the maximum lookahead $n_{\max}$ at preprocessing time, by calculating the maximum number of draft tokens that need to be sampled to determine the first target token ( $n_{\max} = 3$ in the example above). Defining the lookahead indicator function to be $S_{1}(i) = \mathbb{1}[i > n_{\max}]$ is a simple heuristic ensuring that the algorithm stops sampling draft tokens after the first target token has been determined. However, this heuristic might still sample more draft tokens than necessary, as we saw in the example, where the first target token is determined after the two draft tokens, 'hello_' and 'world', have been sampled. To avoid computing unnecessary drafter forward passes, we can define the lookahead
+
+indicator function $S_{1}$ to combine a maximum threshold $n \leq n_{\max}$ and a stopping condition of whether the first target token has been determined. Namely, $S_{1}(i)$ is true if $i > n$ or $\operatorname{Pr}\left[T(d_1 \oplus \ldots \oplus d_i)_1 \neq T(d_1 \oplus \ldots \oplus d_i \oplus d_{i+1} \oplus \ldots \oplus d_n)_1\right] = 0$ , and false otherwise. Algorithm 3 and Theorem 3.2 both hold for this more general lookahead indicator function. In cases where the additional drafter forward passes are expensive or longer tokens are less likely to be accepted, setting the threshold $n$ to a value that is strictly less than $n_{\max}$ can be beneficial. More sophisticated lookahead indicator functions control the lookahead based on additional information about the current state, as seen in other recent works. For example, Mamou et al. (2024) trained a small neural network to estimate the likelihood of the next draft token being accepted and used this information to decide whether to sample the next draft token or stop drafting. Their experiments showed that even a simple controller that attends to the drafter's logits is highly effective, and the controller generalizes well across different datasets and tasks. Following their success in both increasing the throughput and reducing the latency of the inference, Hugging Face's Transformers, the commonly used open-source library for training and deploying LLMs, has recently incorporated such a controller into their default inference pipeline. While implementing the lookahead indicator function $S_{1}$ as such a controller seems promising, it might be computationally expensive to calculate $\psi(t)$ for longer lookahead values, as Section 3.4 shows.
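One conservative stopping rule from the discussion above can be written as a tiny predicate: once the drafted string is at least as long as the longest target token, greedy longest-prefix matching cannot look past those leading characters, so the first target token is already determined. The names and threshold logic here are illustrative:

```python
def lookahead_done(i, drafted, n, longest_target_token_len):
    # S_1(i): stop after n draft tokens, or earlier once the drafted string
    # already pins down the first target token -- greedy longest-prefix
    # matching never inspects more characters than the longest target token.
    return i > n or len(drafted) >= longest_target_token_len

# 'hello_world' example: the longest target token has 11 characters, so
# after drafting 'hello_' + 'world' (11 characters) no third draft is needed.
```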
+
+Block Verification is Non-Trivial. In Algorithm 3, the vocabularies $T$ and $D$ are related only by $T \twoheadrightarrow D^*$ and $D^* \twoheadrightarrow T^*$ rather than by stricter relationships like bijection or $D \subseteq T$ . After Algorithm 3 removes the prefix $t_1$ from the concatenation $d_1 \oplus \ldots \oplus d_i$ , the remaining string is $t_2 \oplus \ldots \oplus t_m$ , and its tokenization back into the draft vocabulary $D$ might differ from $(d_{j > 1},\dots ,d_i)$ . For example, consider a simple case where $D \not\subseteq T$ , such that $T = \{\text{'a'}, \text{'b'}\}$ and $D = \{\text{'a'}, \text{'b'}, \text{'aa'}\}$ . Let $d_1 = \text{'aa'}$ and assume that $i = 1$ , meaning that only one draft token is sampled in line 4. We then have $T(d_{1}) = (t_{1},t_{2}) = (\text{'a'}, \text{'a'})$ . Therefore, the remainder of the drafted string 'aa' after removing $t_1 = \text{'a'}$ is the token $t_2 = \text{'a'}$ , which was not sampled. Such scenarios can arise only when shifting from settings where $D = T$ to settings where $D \neq T$ . Applying Algorithm 3 to the string that remains after removing the candidate token $t_1$ is, therefore, more challenging. This issue makes it nontrivial to generalize the block verification mechanism of Sun et al. (2024) to the case of heterogeneous vocabularies, despite its proven advantage in homogeneous setups.
+
+# 3.4 Efficient Calculation of $\psi (t)$
+
+Calculating $\psi(t)$ in line 6 of Algorithm 3 requires summing over all the probabilities of sampling sequences of draft
+
+tokens $d_{1},\ldots ,d_{i}$ such that their concatenation $d_{1}\oplus \ldots \oplus d_{i}$ starts with the target token $t$ , namely, $T(d_{1}\oplus \ldots \oplus d_{i})_{1} = t$ . For general vocabularies, the number of such sequences $d_{1},\ldots ,d_{i}$ grows rapidly with the length of $t$ . For example, consider a complete vocabulary $D_{n}$ that contains all possible strings of length up to $n$ over a fixed alphabet $\Sigma$ . A simple case is the alphabet $\Sigma = \{\text{'a'}, \text{'b'}\}$ , where $D_{1} = \{\text{'a'}, \text{'b'}\}$ , $D_{2} = D_{1}\cup \{\text{'aa'}, \text{'ab'}, \text{'ba'}, \text{'bb'}\}$ , and $D_{3} = D_{2}\cup \{\text{'aaa'}, \text{'aab'}, \text{'aba'}, \text{'abb'}, \text{'baa'}, \text{'bab'}, \text{'bba'}, \text{'bbb'}\}$ . For such a vocabulary $D_{n}$ , the number of terms in the sum of $\psi (t)$ from Theorem 3.2 for a target token $t$ of length $m\leq n$ is $2^{m - 1}$ , as Lemma 3.1 proves. Here, the length of token $t$ is defined to be the maximum number of tokens whose concatenation equals $t$ . In the example above, 'aaa' has length three because it is the concatenation of three 'a' tokens, while 'aa' has length two because it is the concatenation of two 'a' tokens.
+
+Lemma 3.1. For a target token $t$ of length $m \leq n$ in a complete vocabulary $D_{n}$ that contains all possible strings of length up to $n$ over a fixed alphabet $\Sigma$ , the number of distinct sequences of draft tokens $d_{1}, \ldots, d_{i}$ such that their concatenation $d_{1} \oplus \ldots \oplus d_{i}$ starts with $t$ , namely, $T(d_{1} \oplus \ldots \oplus d_{i})_{1} = t$ , is $2^{m - 1}$ .
+
+Proof. See Appendix G.
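Lemma 3.1's count can be checked by brute force: the sequences in question correspond to the compositions of $m$, of which there are $2^{m-1}$. The enumeration below is a sketch over the toy alphabet from the text:

```python
from itertools import product

def count_compositions(t, D):
    # Number of ways to write the string t as a concatenation of tokens
    # from D (each such sequence has T(d_1 + ... + d_i)_1 = t).
    if t == "":
        return 1
    return sum(count_compositions(t[len(d):], D) for d in D if t.startswith(d))

# Complete vocabulary D_3 over {'a', 'b'}: all strings of length up to 3.
D3 = ["".join(s) for k in (1, 2, 3) for s in product("ab", repeat=k)]
counts = [count_compositions("a" * m, D3) for m in (1, 2, 3)]
# counts == [1, 2, 4], i.e. 2**(m - 1) for m = 1, 2, 3.
```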
+
+Appendix C provides details of an experiment conducted to examine the complexity of calculating $\psi(t)$ given the vocabulary of a real-world, off-the-shelf drafter (Qwen2-7B-Instruct from Yang et al., 2024). The results indicate that the number of terms in the sum of $\psi(t)$ grows exponentially with the length of the target token $t$ , as predicted by Lemma 3.1. Although Algorithm 3 is lossless (Theorem 3.2) and its acceptance rates are likely to be higher than those of Algorithm 2 (Theorem 3.1), calculating $\psi(t)$ during runtime might be too computationally expensive for practical use cases—especially if the drafter's vocabulary includes long tokens, as shown in the proof of Lemma 3.1 and supported by the experiment in Appendix C. Beyond its theoretical guarantees, Algorithm 3 might be suitable in practice only for specific drafters with small vocabularies, where the number of terms in the sum of $\psi(t)$ is manageable. For example, modern models like the recent MambaByte (Wang et al., 2024) could potentially be suitable drafters for Algorithm 3. However, the applicability of Algorithm 3 to a wider range of drafters with larger vocabularies is an open question that requires further research, and we propose it as future work.
+
+# 4 Speculative Decoding for Heterogeneous Vocabularies with Token-Level Verification
+
+This section introduces additional algorithms that extend the standard SD framework to operate over heterogeneous vocabularies, namely, where the drafter's vocabulary differs from the target's. Unlike Section 3, the algorithms in this section do not use strings as an intermediate, shared representation. Instead, they operate at the token level, as in standard SD algorithms (for example, see Algorithm 5). The primary idea is to project the drafter's probability distribution over its vocabulary onto the intersection between the vocabularies of the draft and the target models. In doing so, Algorithm 4 adjusts the drafter to sample only tokens that are in the intersection between the two vocabularies while keeping the target model unchanged.
+
+Algorithm 4 (Token-Level Intersection, TLI), an iteration of speculative decoding for heterogeneous vocabularies with token-level rejection sampling verification
+
+1: Input: Probability distributions $p$ and $q$ over vocabularies $T$ and $D$ , respectively. Drafting lookahead $i \in \mathbb{N}$ . A prompt $c \in T^{*}$ .
+2: Output: A sequence of tokens from $T$ , containing between 1 and $i + 1$ tokens.
+3: Procedure:
+4: Define a probability distribution $q'$ over the vocabulary $T \cap D$ such that $q'(x) = \frac{q(x)}{\sum_{t \in T} q(t)}$ if $x \in T$ and $q'(x) = 0$ otherwise.
+5: Run Algorithm 5 with $p, q', i, c$ .
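A single token-level verification step of this scheme can be sketched as follows. This is an illustrative sketch, not the paper's implementation: `tli_verify` is a hypothetical helper, distributions are plain dictionaries, and `rng` draws a uniform sample in $[0, 1)$:

```python
import random

def tli_verify(p, q, target_vocab, draft_token, rng=random.random):
    # Line 4 of Algorithm 4: project the drafter's distribution onto T
    # by zeroing tokens outside T and renormalizing.
    mass = sum(q.get(t, 0.0) for t in target_vocab)
    q_proj = {t: q.get(t, 0.0) / mass for t in target_vocab}
    # Standard SD test: accept the draft with probability min(1, p/q').
    if rng() < min(1.0, p.get(draft_token, 0.0) / q_proj[draft_token]):
        return draft_token, True
    # On rejection, resample from the normalized residual max(p - q', 0).
    residual = {t: max(p[t] - q_proj.get(t, 0.0), 0.0) for t in p}
    z = sum(residual.values())
    r, acc = rng() * z, 0.0
    for t, w in residual.items():
        acc += w
        if r < acc:
            return t, False
    return max(p, key=p.get), False  # guard against floating-point leftovers
```

With `p = {'a': 0.9, 'b': 0.1}` and a uniform drafter over `{'a', 'b', 'c'}`, the projected distribution gives 'a' and 'b' probability 0.5 each, so a drafted 'a' is always accepted (its ratio exceeds one) while a drafted 'b' is accepted only 20% of the time.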
+
+Theorem 4.1 proves that the acceptance rate of Algorithm 4 is greater than or equal to the acceptance rate of the simple solution that Algorithm 1 implements.
+
+Theorem 4.1. Let $p$ and $q$ be target and drafter probability distributions over vocabularies $T$ and $D$ , respectively. Define $p', q_1, q_2$ to be probability distributions over $T \cup D$ as follows. $p'(x) = p(x)$ if $x \in T$ and $p'(x) = 0$ otherwise. $q_1(x) = q(x)$ if $x \in D$ and $q_1(x) = 0$ otherwise. $q_2(x) = \frac{q(x)}{\sum_{t \in T} q(t)}$ if $x \in T$ and $q_2(x) = 0$ otherwise. Given the target $p'$ , we define $\alpha_1$ and $\alpha_2$ to be the probability of accepting a token $x \sim q_1$ and $x \sim q_2$ , respectively, by the rejection sampling algorithm of speculative decoding from Leviathan et al. (2023); Chen et al. (2023). Then, $\alpha_1 \leq \alpha_2$ , and the output tokens distribute according to $p$ .
+
+Proof. See Appendix G.
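A numeric sanity check of Theorem 4.1 on a toy pair of vocabularies (an illustration, not a proof):

```python
T, D = {"a", "b"}, {"a", "b", "c"}
p = {"a": 0.6, "b": 0.4}                  # target over T
q = {"a": 1 / 3, "b": 1 / 3, "c": 1 / 3}  # drafter over D

union = T | D
p_prime = {x: p.get(x, 0.0) for x in union}
q1 = {x: q.get(x, 0.0) for x in union}    # unadjusted drafter (Algorithm 1)
mass = sum(q[t] for t in T)
q2 = {x: (q.get(x, 0.0) / mass if x in T else 0.0) for x in union}  # Algorithm 4

# Acceptance rates of the standard rejection-sampling test for each drafter:
alpha1 = sum(min(p_prime[x], q1[x]) for x in union)  # 1/3 + 1/3 = 2/3
alpha2 = sum(min(p_prime[x], q2[x]) for x in union)  # 0.5 + 0.4 = 0.9
```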
+
+Although the acceptance rate of Algorithm 4 is at least as high as that of Algorithm 1 (Theorem 4.1), it still depends on the intersection between the two vocabularies. For example, if the intersection is empty, the acceptance rate of both algorithms is zero. This dependence on the acceptance rate is not new or unique to our setting; it is a known limitation of SD algorithms. Timor et al. (2025) analyzed the expected speedups of SD for any drafter size and acceptance rate and studied the slowdowns that standard SD algorithms cause given sufficiently low acceptance rates. In practice, the intersection between the draft and target vocabularies is often non-empty because of how tokenizers are constructed. The intuition is based on commonly used tokenization methods, as mentioned in Section 3.1. Our experiments with real-world off-the-shelf models support the assumption that the intersection between the vocabularies is non-empty. Tokens in the intersection have a non-zero probability of being sampled by both models and, therefore, the intersection supports a non-zero expected acceptance rate, as shown by Leviathan et al. (2023).
+
+# 5 Empirical Results
+
+Our empirical results have had an impact on the open-source ecosystem: Algorithm 2 and Algorithm 4 have been successfully integrated into Hugging Face Transformers (Wolf et al., 2020), the most widely adopted library in the AI field, with over 145,000 GitHub stars, more than 378,000 repositories, and 6,000 open-source packages that depend on it. Thanks to their versatility and broad applicability, Algorithm 2 and Algorithm 4 have become the default inference pipeline behavior (in October 2024 and February 2025, respectively), enabling efficient speculative decoding (SD) for heterogeneous vocabularies across diverse applications. The open-source community has quickly embraced our approach to heterogeneous SD, which unlocks any model to serve as a drafter, driving widespread adoption and enabling further enhancements by engineers and researchers. Its seamless integration into existing workflows has empowered practitioners to achieve substantial improvements in inference efficiency with minimal effort. This broad adoption underscores the practical utility and robustness of our approach in real-world scenarios, and the rapid uptake of our algorithms demonstrates their effectiveness across a diverse range of model pairs, tasks, and hardware setups. The following section presents only a selection of examples.
+
+We evaluate Algorithm 2 (SLEM) and Algorithm 4 (TLI) over widely used models, tasks, and hardware setups, including DeepSeek (DeepSeek-AI et al., 2025), Phi (Abdin et al., 2024b;a), Mixtral (Jiang et al., 2024), Qwen2.5 (Qwen et al., 2025), Vicuna (Chiang et al., 2023), Llama (Dubey et al., 2024), CodeLlama (Rozière et al., 2024), Starcoder (Li et al., 2023), and Gemma2 (Team et al., 2024). Table 1 benchmarks SLEM and autoregressive decoding (AR) where both employ a temperature of zero. Table 2 benchmarks TLI and AR where both employ a temperature of one. The results demonstrate throughput accelerations over AR of up to $2.8 \times$ with SLEM and $1.7 \times$ with TLI. Note that the target models in Tables 1 and 2 do not have homogeneous drafters that are available off-the-shelf and therefore we cannot accelerate them using standard SD. Tables 6 and 7 in Appendix D add results for additional models, including those with homogeneous drafters (e.g., Gemma2). For exact implementation details, we refer the reader to Appendix D.
+
+Table 1: Benchmark comparing Algorithm 2 (SLEM) and autoregressive decoding (AR) for widely used models, tasks, and hardware setups. The results demonstrate that SLEM increases throughput by up to $2.8 \times$ over AR. Note that the target models below do not have homogeneous drafters that are available off-the-shelf. For some target models, their in-family drafters are heterogeneous, as their vocabularies differ. Examples include the target model phi-4 with the drafter Phi-3.5-mini-instruct, and the DeepSeek-R1-Distill-Qwen model family.
+
+| Target | Dataset | Hardware | Method | Drafter | TTFT (ms) | TPOT (ms) | Tok/s | Speedup |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Mixtral-8x22B-Instruct-v0.1 | cnn_dailymail | 4 * H100 NVL | AR | No Drafter (Autoregressive) | 266.8 | 127.9 | 7.8 | 1.0 |
+| | | | SLEM | Qwen2.5-0.5B-Instruct | 321.2 | 68.3 | 13.3 | 1.71 |
+| | | | SLEM | vicuna-68m | 302.4 | 57.3 | 16.4 | 2.1 |
+| | scrolls | 4 * H100 NVL | AR | No Drafter (Autoregressive) | 1331.9 | 163.0 | 6.0 | 1.0 |
+| | | | SLEM | Qwen2.5-0.5B-Instruct | 1414.2 | 81.0 | 10.3 | 1.71 |
+| | | | SLEM | vicuna-68m | 1344.5 | 132.5 | 7.4 | 1.24 |
+| | openai_humaneval | 4 * H100 NVL | AR | No Drafter (Autoregressive) | 217.5 | 127.9 | 7.8 | 1.0 |
+| | | | SLEM | Qwen2.5-0.5B-Instruct | 484.4 | 70.2 | 12.0 | 1.53 |
+| | | | SLEM | vicuna-68m | 231.5 | 73.3 | 12.6 | 1.61 |
+| DeepSeek-R1-Distill-Qwen-14B | scrolls | 1 * RTX 6000 | AR | No Drafter (Autoregressive) | 1481.0 | 87.5 | 10.9 | 1.0 |
+| | | | SLEM | DeepSeek-R1-Distill-Qwen-1.5B | 1665.4 | 59.1 | 16.0 | 1.48 |
+| | | | SLEM | vicuna-68m | 1566.8 | 56.0 | 17.3 | 1.59 |
+| | cnn_dailymail | 1 * RTX 6000 | AR | No Drafter (Autoregressive) | 176.8 | 51.7 | 19.2 | 1.0 |
+| | | | SLEM | DeepSeek-R1-Distill-Qwen-1.5B | 287.5 | 69.9 | 14.1 | 0.73 |
+| | | | SLEM | vicuna-68m | 243.0 | 36.2 | 27.4 | 1.43 |
+| | openai_humaneval | 1 * RTX 6000 | AR | No Drafter (Autoregressive) | 91.3 | 50.3 | 19.8 | 1.0 |
+| | | | SLEM | tiny_starcoder.py | 113.4 | 43.8 | 22.4 | 1.14 |
+| | | | SLEM | CodeLlama-7b-Instruct-hf | 256.6 | 77.5 | 12.4 | 0.63 |
+| | | | SLEM | DeepSeek-R1-Distill-Qwen-1.5B | 292.5 | 70.9 | 13.6 | 0.69 |
+| DeepSeek-R1-Distill-Qwen-32B | cnn_dailymail | 1 * H100 NVL | AR | No Drafter (Autoregressive) | 121.2 | 48.0 | 20.8 | 1.0 |
+| | | | SLEM | DeepSeek-R1-Distill-Qwen-1.5B | 167.1 | 51.3 | 18.9 | 0.91 |
+| | | | SLEM | vicuna-68m | 148.1 | 32.5 | 30.6 | 1.47 |
+| | openai_humaneval | 1 * H100 NVL | AR | No Drafter (Autoregressive) | 72.0 | 48.3 | 20.7 | 1.0 |
+| | | | SLEM | tiny_starcoder.py | 80.1 | 34.2 | 28.5 | 1.38 |
+| | | | SLEM | CodeLlama-7b-Instruct-hf | 182.7 | 64.4 | 14.9 | 0.72 |
+| | | | SLEM | DeepSeek-R1-Distill-Qwen-1.5B | 196.4 | 50.3 | 19.5 | 0.94 |
+| | scrolls | 1 * H100 NVL | AR | No Drafter (Autoregressive) | 933.1 | 77.7 | 12.5 | 1.0 |
+| | | | SLEM | DeepSeek-R1-Distill-Qwen-1.5B | 988.1 | 57.6 | 17.1 | 1.37 |
+| | | | SLEM | vicuna-68m | 979.9 | 59.3 | 16.5 | 1.32 |
+| phi-4 | scrolls | 1 * H100 NVL | AR | No Drafter (Autoregressive) | 483.9 | 47 | 21.3 | 1.0 |
+| | | | SLEM | Qwen2.5-0.5B-Instruct | 457.7 | 29.5 | 33.9 | 1.59 |
+| | | | SLEM | Phi-3.5-mini-instruct | 646.9 | 39.6 | 25.3 | 1.19 |
+| CodeLlama-13b-Instruct-hf | humaneval | 1 * A6000 | AR | No Drafter (Autoregressive) | 70.7 | 46.8 | 21.4 | 1.0 |
+| | | | SLEM | tiny_starcoder.py | 109.7 | 16.7 | 59.7 | 2.79 |
+| | | | SLEM | CodeLlama-7b-Instruct-hf | 146.5 | 21.8 | 45.8 | 2.14 |
+
+Table 2: Benchmark comparing Algorithm 4 (TLI) and autoregressive decoding (AR) for widely used models, tasks, and hardware setups. The results demonstrate that TLI increases throughput by up to $1.7 \times$ over AR. Note that the target models below do not have homogeneous drafters that are available off-the-shelf. For some target models, their in-family drafters are heterogeneous, as their vocabularies differ. Examples include the target model phi-4 with the drafter Phi-3.5-mini-instruct, and the DeepSeek-R1-Distill-Qwen model family.
+
+| Target | Dataset | Hardware | Method | Drafter | TTFT (ms) | TPOT (ms) | Tok/s | Speedup |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Mixtral-8x22B-Instruct-v0.1 | scrolls | 4 * H100 NVL | AR | No Drafter (Autoregressive) | 1334.7 | 168.7 | 5.9 | 1.0 |
+| | | | TLI | Qwen2.5-0.5B-Instruct | 1372.6 | 97.8 | 9.9 | 1.69 |
+| | | | TLI | vicuna-68m | 1329.7 | 138.2 | 7.2 | 1.22 |
+| | openai_humaneval | 4 * H100 NVL | AR | No Drafter (Autoregressive) | 217.5 | 128.1 | 7.8 | 1.0 |
+| | | | TLI | Qwen2.5-0.5B-Instruct | 266.9 | 90.6 | 10.9 | 1.4 |
+| | | | TLI | vicuna-68m | 228.5 | 74.8 | 13.0 | 1.67 |
+| | cnn_dailymail | 4 * H100 NVL | AR | No Drafter (Autoregressive) | 266.8 | 128.1 | 7.8 | 1.0 |
+| | | | TLI | Qwen2.5-0.5B-Instruct | 294.5 | 88.9 | 11.2 | 1.43 |
+| | | | TLI | vicuna-68m | 297.3 | 81.0 | 11.9 | 1.53 |
+| phi-4 | scrolls | 1 * H100 NVL | AR | No Drafter (Autoregressive) | 487.4 | 47.2 | 21.2 | 1.0 |
+| | | | TLI | Qwen2.5-0.5B-Instruct | 454.7 | 32.5 | 30.8 | 1.45 |
+| | | | TLI | Phi-3.5-mini-instruct | 610.4 | 46.0 | 21.7 | 1.03 |
+| CodeLlama-13b-Instruct-hf | humaneval | 1 * A6000 | AR | No Drafter (Autoregressive) | 70.5 | 45.3 | 22.1 | 1.0 |
+| | | | TLI | tiny_starcoder.py | 65.1 | 25.9 | 38.5 | 1.74 |
+| | | | TLI | CodeLlama-7b-Instruct-hf | 141.3 | 25.6 | 39.1 | 1.77 |
+| DeepSeek-R1-Distill-Qwen-14B | scrolls | 1 * RTX 6000 | AR | No Drafter (Autoregressive) | 1479.5 | 88.3 | 10.8 | 1.0 |
+| | | | TLI | DeepSeek-R1-Distill-Qwen-1.5B | 1640.7 | 61.6 | 16.1 | 1.5 |
+| | | | TLI | vicuna-68m | 1502.2 | 57.2 | 17.1 | 1.59 |
+| | cnn_dailymail | 1 * RTX 6000 | AR | No Drafter (Autoregressive) | 176.1 | 54.4 | 18.4 | 1.0 |
+| | | | TLI | DeepSeek-R1-Distill-Qwen-1.5B | 240.5 | 44.7 | 21.4 | 1.16 |
+| | | | TLI | vicuna-68m | 202.4 | 40.6 | 24.1 | 1.31 |
+| | openai_humaneval | 1 * RTX 6000 | AR | No Drafter (Autoregressive) | 90.4 | 50.9 | 19.6 | 1.0 |
+| | | | TLI | tiny_starcoder.py | 93.9 | 38.6 | 25.4 | 1.3 |
+| | | | TLI | CodeLlama-7b-Instruct-hf | 150.2 | 66.0 | 14.6 | 0.75 |
+| | | | TLI | DeepSeek-R1-Distill-Qwen-1.5B | 172.6 | 45.6 | 21.2 | 1.08 |
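As a quick sanity check on these benchmarks, the Speedup column is consistent with the ratio between each method's throughput (Tok/s) and the autoregressive baseline's throughput on the same target, dataset, and hardware, up to rounding of the reported values. A minimal sketch, using the Mixtral-8x22B-Instruct-v0.1 / cnn_dailymail rows of the SLEM benchmark:

```python
# Sanity check: Speedup ≈ (method Tok/s) / (AR Tok/s), up to rounding.
# Values are copied from the Mixtral-8x22B-Instruct-v0.1 / cnn_dailymail
# rows of the SLEM benchmark table.
ar_tok_s = 7.8  # autoregressive baseline throughput
reported = {
    "Qwen2.5-0.5B-Instruct": (13.3, 1.71),  # (Tok/s, reported Speedup)
    "vicuna-68m": (16.4, 2.1),
}
for drafter, (tok_s, speedup) in reported.items():
    # The two ratios agree up to the one-decimal rounding of the table.
    assert abs(tok_s / ar_tok_s - speedup) < 0.05, drafter
print("Speedup column consistent with Tok/s ratios")
```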
+
+Tables 8, 9, and 10 in Appendix E examine the vocabularies of widely used off-the-shelf target and drafter models. Table 8 shows the vocabulary size of each model. Table 9 shows the size of the intersection between the draft and target vocabularies and the ratio of the intersection size to the target vocabulary size for various model pairs. We observe a wide range of overlap sizes and ratios; however, none of the intersections is empty. This observation is consistent with our aforementioned assumption that the intersection between the draft and target vocabularies is non-empty in practice. Table 10 extends Table 9 by showing the overlap sizes and ratios over various tasks.
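The overlap statistics reported in Tables 8-10 reduce to a set intersection. The sketch below uses two toy vocabularies as illustrative stand-ins; in practice each set would be the key set of an off-the-shelf tokenizer's vocabulary (e.g., the result of a Hugging Face tokenizer's `get_vocab()`):

```python
# Sketch of the overlap statistics of Tables 8-10: the size of the
# draft/target vocabulary intersection and its ratio to the target
# vocabulary size. The toy vocabularies below are illustrative only.

def overlap_stats(target_vocab, draft_vocab):
    """Return (intersection size, intersection size / target vocab size)."""
    intersection = set(target_vocab) & set(draft_vocab)
    return len(intersection), len(intersection) / len(target_vocab)

target = {"the", "cat", "sat", "on", "the_mat", "##s"}  # hypothetical target vocab
draft = {"the", "cat", "dog", "on", "a"}                # hypothetical drafter vocab

size, ratio = overlap_stats(target, draft)
print(size, ratio)  # 3 shared tokens, ratio 0.5
```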
+
+To facilitate additional standardized benchmarks, we have open-sourced our benchmarking repository, which provides full reproducibility. The code is available at github.com/keyboardAnt/hf-bench. See Appendix D for implementation details.
+
+# 6 Discussion
+
+To speed up the inference of a given target model, we need to select a drafter and a decoding algorithm. Table 3 summarizes the expected probability of accepting the next token for all the speculation algorithms when the drafter has a different vocabulary than the target. Note that the effectiveness of each algorithm depends on the properties of the drafter. Table 4 outlines the necessary constraints that the drafter must satisfy for each algorithm to be effective in practice. If these constraints are not met, selecting an alternative algorithm is recommended. Future work is discussed in Appendix A.
+
+Table 3: Expected acceptance rates given heterogeneous vocabularies for all speculation methods. The expected acceptance rate of Algorithm 1 is always less than or equal to the expected acceptance rate of Algorithm 4, as Theorem 4.1 proves.
+
+| Method | Expected Acceptance Rate |
+| --- | --- |
+| Alg 5 (SD) | Undefined |
+| Alg 1 | $\sum_{t \in T \cap D} \min\{p(t), q(t)\}$ |
+| Alg 2 (SLEM) | $\sum_{t \in T} \left[ p(t) \cdot \psi(t) \right]$ |
+| Alg 3 (SLRS) | $\sum_{t \in T} \min\{p(t), \psi(t)\}$ |
+| Alg 4 (TLI) | $\sum_{t \in T \cap D} \min\left\{ p(t), \frac{q(t)}{\sum_{x \in T \cap D} q(x)} \right\}$ |
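These expressions are straightforward to evaluate numerically. A minimal sketch, computing the Algorithm 1 and Algorithm 4 (TLI) rates from Table 3 on toy distributions $p$ and $q$ (the token names and probabilities are illustrative, not taken from any real model), and checking the ordering guaranteed by Theorem 4.1:

```python
# Sketch: the expected acceptance rates of Alg 1 and Alg 4 (TLI) from
# Table 3, evaluated on toy distributions. Values are illustrative only.

def alpha_alg1(p, q, shared):
    # sum over T ∩ D of min{p(t), q(t)}
    return sum(min(p.get(t, 0.0), q.get(t, 0.0)) for t in shared)

def alpha_tli(p, q, shared):
    # q is renormalized over the intersection before taking the minimum
    z = sum(q.get(t, 0.0) for t in shared)
    return sum(min(p.get(t, 0.0), q.get(t, 0.0) / z) for t in shared)

p = {"a": 0.5, "b": 0.3, "c": 0.2}   # target distribution over T
q = {"a": 0.4, "b": 0.2, "d": 0.4}   # drafter distribution over D
shared = set(p) & set(q)              # T ∩ D = {"a", "b"}

a1, a4 = alpha_alg1(p, q, shared), alpha_tli(p, q, shared)
assert a1 <= a4 + 1e-12  # Theorem 4.1: TLI never accepts less in expectation
print(round(a1, 3), round(a4, 3))  # → 0.6 0.8
```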
+
+Practical Implications. Practitioners can leverage speculative decoding (SD) to significantly accelerate the inference of off-the-shelf LLMs, even when no drafter with the same vocabulary as the target model is available. This advancement eliminates the need for extensive computational resources, as it bypasses the costly and time-consuming
+
+Table 4: Informal constraints on the drafter for different algorithms to ensure effectiveness. If the constraints are not met, an alternative algorithm should be selected. Since the acceptance rate of Algorithm 4 is always greater than or equal to that of Algorithm 1, selecting Algorithm 4 over Algorithm 1 is always beneficial, assuming the implementation overhead is negligible. A necessary condition for the effectiveness of Algorithms 2, 3, and 4 is that the drafter must approximate the target distribution sufficiently well. The effectiveness of Algorithm 3 is further enhanced when the drafter's vocabulary consists of short tokens. The effectiveness of Algorithm 4 improves as the number of tokens in the intersection between the vocabularies increases.
+
+| Algorithm | Drafter Constraints |
+| --- | --- |
+| Alg 1 | Not applicable (select Alg 4 instead) |
+| Alg 2 (SLEM) | Accurate |
+| Alg 3 (SLRS) | Accurate, short tokens |
+| Alg 4 (TLI) | Accurate, large overlap of vocabularies |
+
+process of training a dedicated drafter. Furthermore, our approach allows practitioners to integrate SD seamlessly into existing inference pipelines without requiring any modifications to the target model's architecture or retraining procedures. The proposed algorithms expand the applicability of SD to a broader range of use cases, including models with different tokenization schemes. This is particularly relevant for practitioners and researchers who rely on pre-trained models (e.g., from the Hugging Face Hub), each with distinct vocabularies. Our methods provide practical solutions to unify heterogeneous models under a single SD framework, enhancing efficiency across diverse applications.
+
+Limitations. A fundamental limitation of SD algorithms is their dependence on the acceptance rate of the drafter and the latency of its forward pass, as extensively analyzed in Timor et al. (2025). When the drafter approximates the target distribution inaccurately, the acceptance rate decreases, leading to diminished performance improvements. Our proposed methods are no exception to this constraint. Unlike standard SD, which limits the drafter to in-family models, our algorithms open the door to using off-the-shelf target-drafter pairs that differ in their architectures and training procedures, although both can critically affect the acceptance rate. For drafters with a heterogeneous vocabulary, the inherent mismatches in token granularity might further reduce the likelihood of draft tokens being accepted. Despite these challenges, our algorithms empirically demonstrate significant accelerations not only for heterogeneous drafters (Section 5) but also for homogeneous ones (Appendix D), while employing drafters that are faster than the fastest in-family model. However, for cases with insufficiently fast or accurate drafters, our methods might fail, as Appendix D shows.
+
+# Acknowledgments
+
+We are grateful to Roy Schwartz from The Hebrew University of Jerusalem for his valuable feedback in improving this work. We thank João Gante and the Hugging Face team for reviewing the code and providing valuable feedback that contributed to its implementation in the Transformers library.
+
+This work was partially funded by the Israel Science Foundation (ISF grant 3698/21). Additional support was provided by a research grant to David Harel from Louis J. Lavigne and Nancy Rothman, the Carter Chapman Shreve Family Foundation, Dr. and Mrs. Donald Rivin, and the Estate of Smigel Trust.
+
+# Impact Statement
+
+This work lowers the cost and latency of LLM inference—making the serving of these models cheaper, faster, and more accessible to a wider range of users.
+
+# References
+
+Abdin, M., Aneja, J., Awadalla, H., Awadallah, A., Awan, A. A., Bach, N., Bahree, A., Bakhtiari, A., Bao, J., Behl, H., Benhaim, A., Bilenko, M., Bjorck, J., Bubeck, S., Cai, M., Cai, Q., Chaudhary, V., Chen, D., Chen, D., Chen, W., Chen, Y.-C., Chen, Y.-L., Cheng, H., Chopra, P., Dai, X., Dixon, M., Eldan, R., Fragoso, V., Gao, J., Gao, M., Gao, M., Garg, A., Giorno, A. D., Goswami, A., Gunasekar, S., Haider, E., Hao, J., Hewett, R. J., Hu, W., Huynh, J., Iter, D., Jacobs, S. A., Javaheripi, M., Jin, X., Karampatziakis, N., Kauffmann, P., Khademi, M., Kim, D., Kim, Y. J., Kurilenko, L., Lee, J. R., Lee, Y. T., Li, Y., Li, Y., Liang, C., Liden, L., Lin, X., Lin, Z., Liu, C., Liu, L., Liu, M., Liu, W., Liu, X., Luo, C., Madan, P., Mahmoudzadeh, A., Majercak, D., Mazzola, M., Mendes, C. C. T., Mitra, A., Modi, H., Nguyen, A., Norick, B., Patra, B., Perez-Becker, D., Portet, T., Pryzant, R., Qin, H., Radmilac, M., Ren, L., de Rosa, G., Rosset, C., Roy, S., Ruwase, O., Saarikivi, O., Saied, A., Salim, A., Santacroce, M., Shah, S., Shang, N., Sharma, H., Shen, Y., Shukla, S., Song, X., Tanaka, M., Tupini, A., Vaddamanu, P., Wang, C., Wang, G., Wang, L., Wang, S., Wang, X., Wang, Y., Ward, R., Wen, W., Witte, P., Wu, H., Wu, X., Wyatt, M., Xiao, B., Xu, C., Xu, J., Xu, W., Xue, J., Yadav, S., Yang, F., Yang, J., Yang, Y., Yang, Z., Yu, D., Yuan, L., Zhang, C., Zhang, C., Zhang, J., Zhang, L. L., Zhang, Y., Zhang, Y., Zhang, Y., and Zhou, X. Phi-3 technical report: A highly capable language model locally on your phone, 2024a. URL https://arxiv.org/abs/2404.14219.
+Abdin, M., Aneja, J., Behl, H., Bubeck, S., Eldan, R., Gunasekar, S., Harrison, M., Hewett, R. J., Javaheripi, M., Kauffmann, P., Lee, J. R., Lee, Y. T., Li, Y., Liu, W., Mendes, C. C. T., Nguyen, A., Price, E., de Rosa, G., Saarikivi, O., Salim, A., Shah, S., Wang, X., Ward, R., Wu, Y., Yu, D., Zhang, C., and Zhang, Y. Phi-4 technical report, 2024b. URL https://arxiv.org/abs/2412.08905.
+Chen, C., Borgeaud, S., Irving, G., Lespiau, J.-B., Sifre, L., and Jumper, J. Accelerating large language model decoding with speculative sampling. arXiv preprint arXiv:2302.01318, 2023.
+Chen, J., Tiwari, V., Sadhukhan, R., Chen, Z., Shi, J., Yen, I. E.-H., and Chen, B. Magicdec: Breaking the latency-throughput tradeoff for long context generation with speculative decoding. arXiv preprint arXiv:2408.11049, 2024.
+Chen, M., Tworek, J., Jun, H., Yuan, Q., de Oliveira Pinto, H. P., Kaplan, J., Edwards, H., Burda, Y., Joseph, N., Brockman, G., Ray, A., Puri, R., Krueger, G., Petrov, M., Khlaaf, H., Sastry, G., Mishkin, P., Chan, B., Gray, S., Ryder, N., Pavlov, M., Power, A., Kaiser, L., Bavarian, M., Winter, C., Tillet, P., Such, F. P., Cummings, D., Plappert, M., Chantzis, F., Barnes, E., Herbert-Voss, A., Guss, W. H., Nichol, A., Paino, A., Tezak, N., Tang, J., Babuschkin, I., Balaji, S., Jain, S., Saunders, W., Hesse, C., Carr, A. N., Leike, J., Achiam, J., Misra, V., Morikawa, E., Radford, A., Knight, M., Brundage, M., Murati, M., Mayer, K., Welinder, P., McGrew, B., Amodei, D., McCandlish, S., Sutskever, I., and Zaremba, W. Evaluating large language models trained on code, 2021. URL https://arxiv.org/abs/2107.03374.
+Chiang, W.-L., Li, Z., Lin, Z., Sheng, Y., Wu, Z., Zhang, H., Zheng, L., Zhuang, S., Zhuang, Y., Gonzalez, J. E., Stoica, I., and Xing, E. P. Vicuna: An open-source chatbot impressing GPT-4 with $90\%$ ChatGPT quality, 2023. URL https://lmsys.org/blog/2023-03-30-vicuna/.
+DeepSeek-AI, Guo, D., Yang, D., Zhang, H., Song, J., Zhang, R., Xu, R., Zhu, Q., Ma, S., Wang, P., Bi, X., Zhang, X., Yu, X., Wu, Y., Wu, Z. F., Gou, Z., Shao, Z., Li, Z., Gao, Z., Liu, A., Xue, B., Wang, B., Wu, B., Feng, B., Lu, C., Zhao, C., Deng, C., Zhang, C., Ruan, C., Dai, D., Chen, D., Ji, D., Li, E., Lin, F., Dai, F., Luo, F., Hao, G., Chen, G., Li, G., Zhang, H., Bao, H., Xu, H., Wang, H., Ding, H., Xin, H., Gao, H., Qu, H., Li, H., Guo, J., Li, J., Wang, J., Chen, J., Yuan, J., Qiu, J., Li, J., Cai, J. L., Ni, J., Liang, J., Chen, J., Dong, K., Hu, K., Gao, K., Guan, K., Huang, K., Yu, K., Wang, L., Zhang, L., Zhao, L., Wang, L., Zhang, L., Xu, L., Xia, L., Zhang, M., Zhang, M., Tang, M., Li, M., Wang, M., Li, M., Tian, N., Huang, P., Zhang, P., Wang, Q., Chen, Q., Du, Q., Ge, R., Zhang, R., Pan, R., Wang, R., Chen, R. J., Jin, R. L., Chen, R., Lu, S., Zhou, S., Chen, S., Ye, S., Wang, S., Yu, S., Zhou, S., Pan, S., Li, S. S., Zhou, S., Wu, S., Ye, S., Yun, T., Pei, T., Sun, T., Wang, T., Zeng, W., Zhao, W., Liu, W., Liang, W., Gao, W., Yu, W., Zhang, W., Xiao, W. L., An, W., Liu, X., Wang, X., Chen, X., Nie, X., Cheng, X., Liu, X., Xie, X., Liu, X., Yang, X., Li, X., Su, X., Lin, X., Li, X. Q., Jin, X., Shen, X., Chen, X., Sun, X., Wang, X., Song, X., Zhou, X., Wang, X., Shan, X., Li, Y. K., Wang, Y. Q., Wei, Y. X., Zhang, Y., Xu, Y., Li, Y., Zhao, Y., Sun, Y., Wang, Y., Yu, Y., Zhang, Y., Shi, Y., Xiong, Y., He, Y., Piao, Y., Wang, Y., Tan, Y., Ma, Y., Liu, Y., Guo, Y., Ou, Y., Wang, Y., Gong, Y., Zou, Y., He, Y., Xiong, Y., Luo, Y., You, Y., Liu, Y., Zhou, Y., Zhu, Y. X., Xu, Y., Huang, Y., Li, Y., Zheng, Y., Zhu, Y., Ma, Y., Tang, Y., Zha, Y., Yan, Y., Ren, Z. Z., Ren, Z., Sha, Z., Fu, Z., Xu, Z., Xie, Z., Zhang, Z., Hao, Z., Ma, Z., Yan, Z., Wu, Z., Gu, Z., Zhu, Z., Liu, Z., Li, Z., Xie, Z., Song, Z., Pan, Z., Huang, Z., Xu, Z., Zhang, Z., and Zhang, Z. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning, 2025. URL https://arxiv.org/abs/2501.12948.
+Dubey, A., Jauhri, A., Pandey, A., Kadian, A., Al-Dahle, A., Letman, A., Mathur, A., Schelten, A., Yang, A., Fan, A., et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.
+Jiang, A. Q., Sablayrolles, A., Roux, A., Mensch, A., Savary, B., Bamford, C., Chaplot, D. S., de las Casas, D., Hanna, E. B., Bressand, F., Lengyel, G., Bour, G., Lample, G., Lavaud, L. R., Saulnier, L., Lachaux, M.-A., Stock, P., Subramanian, S., Yang, S., Antoniak, S., Scao, T. L., Gervet, T., Lavril, T., Wang, T., Lacroix, T., and Sayed, W. E. Mixtral of experts, 2024. URL https://arxiv.org/abs/2401.04088.
+Joao Gante. Assisted generation: a new direction toward low-latency text generation, 2023. URL https://huggingface.co/blog/assisted-generation.
+Kudo, T. Subword regularization: Improving neural network translation models with multiple subword candidates. In Gurevych, I. and Miyao, Y. (eds.), Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 66-75, Melbourne, Australia, July 2018. Association for Computational Linguistics. doi: 10.18653/v1/P18-1007. URL https://aclanthology.org/P18-1007.
+Kudo, T. and Richardson, J. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Blanco, E. and Lu, W. (eds.), Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 66-71, Brussels, Belgium, November 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-2012. URL https://aclanthology.org/D18-2012.
+Kwon, W., Li, Z., Zhuang, S., Sheng, Y., Zheng, L., Yu, C. H., Gonzalez, J., Zhang, H., and Stoica, I. Efficient memory management for large language model serving with pagedattention. In Proceedings of the 29th Symposium on Operating Systems Principles, pp. 611-626, 2023.
+Leviathan, Y., Kalman, M., and Matias, Y. Fast inference from transformers via speculative decoding. In International Conference on Machine Learning, pp. 19274-19286. PMLR, 2023.
+Li, R., allal, L. B., Zi, Y., Muennighoff, N., Kocetkov, D., Mou, C., Marone, M., Akiki, C., LI, J., Chim, J., Liu, Q., Zheltonozhskii, E., Zhuo, T. Y., Wang, T., Dehaene, O., Lamy-Poirier, J., Monteiro, J., Gontier, N., Yee, M.-H., Umapathi, L. K., Zhu, J., Lipkin, B., Oblokulov, M., Wang, Z., Murthy, R., Stillerman, J. T., Patel, S. S., Abulkhanov, D., Zocca, M., Dey, M., Zhang, Z., Bhattacharyya, U., Yu, W., Luccioni, S., Villegas, P., Zhdanov, F., Lee, T., Timor, N., Ding, J., Schlesinger, C. S., Schoelkopf, H., Ebert, J., Dao, T., Mishra, M., Gu, A., Anderson, C. J., Dolan-Gavitt, B., Contractor, D., Reddy, S., Fried, D., Bahdanau, D., Jernite, Y., Ferrandis, C. M., Hughes, S., Wolf, T., Guha, A., Werra, L. V., and de Vries, H. Starcoder: may the source be with you! Transactions on Machine Learning Research, 2023. ISSN 2835-8856. URL https://openreview.net/forum?id=KoFOg41haE. Reproducibility Certification.
+Mamou, J., Pereg, O., Korat, D., Berchansky, M., Timor, N., Wasserblat, M., and Schwartz, R. Dynamic speculation lookahead accelerates speculative decoding of large language models. In Proceedings of The 4th NeurIPS Efficient Natural Language and Speech Processing Workshop, volume 262 of Proceedings of Machine Learning Research, pp. 456-467. PMLR, 2024. URL https://proceedings.mlr.press/v262/mamou24a.html.
+Miao, X., Oliaro, G., Zhang, Z., Cheng, X., Wang, Z., Zhang, Z., Wong, R. Y. Y., Zhu, A., Yang, L., Shi, X., Shi, C., Chen, Z., Arfeen, D., Abhyankar, R., and Jia, Z. Specinfer: Accelerating large language model serving with tree-based speculative inference and verification. In Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 3. ACM, 2024. doi: 10.1145/3620666.3651335. URL http://dx.doi.org/10.1145/3620666.3651335.
+Nallapati, R., Zhou, B., dos Santos, C., Gulçehre, C., and Xiang, B. Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Riezler, S. and Goldberg, Y. (eds.), Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, pp. 280-290, Berlin, Germany, August 2016a. Association for Computational Linguistics. doi: 10.18653/v1/K16-1028. URL https://aclanthology.org/K16-1028.
+Nallapati, R., Zhou, B., dos Santos, C., Gulçehre, C., and Xiang, B. Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Riezler, S. and Goldberg, Y. (eds.), Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, pp. 280-290, Berlin, Germany, 2016b. Association for Computational Linguistics. doi: 10.18653/v1/K16-1028. URL https://aclanthology.org/K16-1028/.
+Pope, R., Douglas, S., Chowdhery, A., Devlin, J., Bradbury, J., Heek, J., Xiao, K., Agrawal, S., and Dean, J. Efficiently scaling transformer inference. Proceedings of Machine Learning and Systems, 5:606-624, 2023.
+Qwen, Yang, A., Yang, B., Zhang, B., Hui, B., Zheng, B., Yu, B., Li, C., Liu, D., Huang, F., Wei, H., Lin, H., Yang, J., Tu, J., Zhang, J., Yang, J., Yang, J., Zhou, J., Lin, J., Dang, K., Lu, K., Bao, K., Yang, K., Yu, L., Li, M., Xue, M., Zhang, P., Zhu, Q., Men, R., Lin, R., Li, T., Tang, T., Xia, T., Ren, X., Ren, X., Fan, Y., Su, Y., Zhang, Y., Wan, Y., Liu, Y., Cui, Z., Zhang, Z., and Qiu, Z. Qwen2.5 technical report, 2025. URL https://arxiv.org/abs/2412.15115.
+Rozière, B., Gehring, J., Gloeckle, F., Sootla, S., Gat, I., Tan, X. E., Adi, Y., Liu, J., Sauvestre, R., Remez, T., Rapin, J., Kozhevnikov, A., Evtimov, I., Bitton, J., Bhatt, M., Ferrer, C. C., Grattafori, A., Xiong, W., Defossez, A., Copet, J., Azhar, F., Touvron, H., Martin, L., Usunier, N., Scialom, T., and Synnaeve, G. Code llama: Open foundation models for code, 2024. URL https://arxiv.org/abs/2308.12950.
+Schuster, M. and Nakajima, K. Japanese and korean voice search. In 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5149-5152, 2012. doi: 10.1109/ICASSP.2012.6289079.
+Sennrich, R., Haddow, B., and Birch, A. Neural machine translation of rare words with subword units. In Erk, K. and Smith, N. A. (eds.), Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1715-1725, Berlin, Germany, August 2016. Association for Computational Linguistics. doi: 10.18653/v1/P16-1162. URL https://aclanthology.org/P16-1162.
+Shaham, U., Segal, E., Ivgi, M., Efrat, A., Yoran, O., Haviv, A., Gupta, A., Xiong, W., Geva, M., Berant, J., and Levy, O. SCROLLS: Standardized CompaRison over long language sequences. In Goldberg, Y., Kozareva, Z., and Zhang, Y. (eds.), Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pp. 12007-12021, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.emnlp-main.823. URL https://aclanthology.org/2022.emnlp-main.823/.
+Sun, Z., Ro, J. H., Beirami, A., and Suresh, A. T. Optimal block-level draft verification for accelerating speculative decoding. arXiv preprint arXiv:2403.10444, 2024.
+Team, G., Riviere, M., Pathak, S., Sessa, P. G., Hardin, C., Bhupatiraju, S., Hussenot, L., Mesnard, T., Shahriari, B., Rame, A., Ferret, J., Liu, P., Tafti, P., Friesen, A., Casbon, M., Ramos, S., Kumar, R., Lan, C. L., Jerome, S., Tsitsulin, A., Vieillard, N., Stanczyk, P., Girgin, S., Momchev, N., Hoffman, M., Thakoor, S., Grill, J.-B., Neyshabur, B., Bachem, O., Walton, A., Severyn, A., Parrish, A., Ahmad, A., Hutchison, A., Abdagic, A., Carl, A., Shen, A., Brock, A., Coenen, A., Laforge, A., Paterson, A., Bastian, B., Piot, B., Wu, B., Royal, B., Chen, C., Kumar, C., Perry, C., Welty, C., Choquette-Choo, C. A., Sinopalnikov, D., Weinberger, D., Vijaykumar, D., Rogozinska, D., Herbison, D., Bandy, E., Wang, E., Noland, E., Moreira, E., Senter, E., Eltsyshev, E., Visin, F., Rasskin, G., Wei, G., Cameron, G., Martins, G., Hashemi, H., Klimczak-Plucińska, H., Batra, H., Dhand, H., Nardini, I., Mein, J., Zhou, J., Svensson, J., Stanway, J., Chan, J., Zhou, J. P., Carrasqueira, J., Iljazi, J., Becker, J., Fernandez, J., van Amersfoort, J., Gordon, J., Lipschultz, J., Newlan, J., yeong Ji, J., Mohamed, K., Badola, K., Black, K., Millican, K., McDonell, K., Nguyen, K., Sodhia, K., Greene, K., Sjoesund, L. L., Usui, L., Sifre, L., Heuermann, L., Lago, L., McNealus, L., Soares, L. B., Kilpatrick, L., Dixon, L., Martins, L., Reid, M., Singh, M., Iverson, M., Gorner, M., Velloso, M., Wirth, M., Davidow, M., Miller, M., Rahtz, M., Watson, M., Risdal, M., Kazemi, M., Moynihan, M., Zhang, M., Kahng, M., Park, M., Rahman, M., Khatwani, M., Dao, N., Bardoliwalla, N., Devanathan, N., Dumai, N., Chauhan, N., Wahltinez, O., Botarda, P., Barnes, P., Barham, P., Michel, P., Jin, P., Georgiev, P., Culliton, P., Kuppala, P., Comanescu, R., Merhej, R., Jana, R., Rokni, R. A., Agarwal, R., Mullins, R., Saadat, S., Carthy, S. M., Cogan, S., Perrin, S., Arnold, S. M. R., Krause, S., Dai, S., Garg, S., Sheth, S., Ronstrom, S., Chan, S., Jordan, T., Yu, T., Eccles, T., Hennigan, T., Kocisky, T., Doshi, T., Jain, V., Yadav, V., Meshram, V., Dharmadhikari, V., Barkley, W., Wei, W., Ye, W., Han, W., Kwon, W., Xu, X., Shen, Z., Gong, Z., Wei, Z., Cotruta, V., Kirk, P., Rao, A., Jiang, M., Peran, L., Warkentin, T., Collins, E., Barral, J., Ghahramani, Z., Hadsell, R., Sculley, D., Banks, J., Dragan, A., Petrov, S., Vinyals, O., Dean, J., Hassabis, D., Kavukcuoglu, K., Farabet, C., Buchatskaya, E., Borgeaud, S., Fiedel, N., Joulin, A., Kenealy, K., Dadashi, R., and Andreev, A. Gemma 2: Improving open language models at a practical size, 2024. URL https://arxiv.org/abs/2408.00118.
+Timor, N., Mamou, J., Korat, D., Berchansky, M., Pereg, O., Wasserblat, M., Galanti, T., Gordon, M., and Harel, D. Distributed speculative inference (dsi): Speculation parallelism for provably faster lossless language model inference. In International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=cJd1BgZ9CS.
+Wang, J., Gangavarapu, T., Yan, J. N., and Rush, A. M. Mambabyte: Token-free selective state space model. In First Conference on Language Modeling, 2024. URL https://openreview.net/forum?id=X1xNsuKssb.
+Wolf, T., Debut, L., Sanh, V., Chaumont, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., von Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Scao, T. L., Gugger, S., Drame, M., Lhoest, Q., and Rush, A. M. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38-45, Online, October 2020. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/2020.emnlp-demos.6.
+Yang, A., Yang, B., Zhang, B., Hui, B., Zheng, B., Yu, B., Li, C., Liu, D., Huang, F., Wei, H., et al. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115, 2024.
+Zafrir, O., Margulis, I., Shteyman, D., and Boudoukh, G. Fastdraft: How to train your draft, 2024. URL https://arxiv.org/abs/2411.11055.
+
+# A Future Work
+
+Future work includes assessing the effectiveness and applicability of Algorithm 3 in real-world scenarios, particularly with drafters that have small vocabularies, such as MambaByte (Wang et al., 2024), and exploring drafter adjustment strategies for Algorithm 4 to increase acceptance rates.
+
+# B Standard Speculative Decoding
+
+Generating the next token via autoregressive decoding requires computing a target forward pass. Standard SD methods, like Algorithm 5, tend to utilize this target forward pass to verify multiple candidate tokens at once via a data parallelism technique known as batching, which is supported by modern hardware such as GPUs and TPUs. Running Algorithm 5 to generate the next target token requires only one target forward pass (line 6 of Algorithm 5) although the algorithm could generate between one and $i + 1$ new tokens. Since computing the target forward pass is often the slowest and most expensive operation in the inference pipeline, the ability of SD methods like Algorithm 5 to reduce the number of required target forward passes is the key to their efficiency, as was previously shown in Leviathan et al. (2023); Chen et al. (2023); Timor et al. (2025).
+
+Algorithm 5 samples draft tokens from the drafter and then decides whether to accept or reject each draft token based on the target model's logits. The algorithm is widely used in practice and has been shown to be effective in accelerating the inference of large language models. The algorithm is lossless, meaning that its output tokens are distributed identically to the output tokens of standard autoregressive decoding.
+
+Algorithm 5 Standard Speculative Decoding (Adapted from Leviathan et al., 2023; Chen et al., 2023)
+
+1: Input: Probability distributions $p$ and $q$ over a vocabulary $T$ . Drafting lookahead $i \in \mathbb{N}$ . An input prompt $c$ .
+2: Output: A sequence of tokens from $T$ , containing between 1 and $i + 1$ tokens.
+3: Procedure:
+4: For $j \gets 1, \ldots, i$ :
+5: Sample a draft token from the drafter conditioned on the prompt and previous drafts, $d_{j} \sim q_{c \oplus d_{1} \oplus \ldots \oplus d_{j-1}}$ (where $d_{0} := c$ ).
+6: With data parallelism (batching), compute via one target forward pass the $i + 1$ logits of the target model conditioned on the prompt and all the draft continuations, $p_c$ , $p_{c \oplus d_1}$ , $\dots$ , $p_{c \oplus d_1 \oplus \dots \oplus d_i}$ .
+7: For $j \gets 1, \ldots, i$:
+8: Let $x \gets c \oplus d_1 \oplus \dots \oplus d_{j-1}$ (where $d_0 \coloneqq c$ ).
+9: If $p_x(d_j) \leq q_x(d_j)$ , with probability $1 - \frac{p_x(d_j)}{q_x(d_j)}$ , reject $d_j$ and go to line 11 (namely, break this for-loop).
+10: Accept the draft token $d_{j}$.
+11: Let $j \in \{0, 1, \dots, i\}$ be the number of accepted drafts. Set $x \gets c \oplus d_1 \oplus \dots \oplus d_j$ .
+12: Sample $t \sim r_x$ for $r_x(t) := \frac{p_x(t) - \min\{p_x(t), q_x(t)\}}{1 - \sum_{t' \in T} \min\{p_x(t'), q_x(t')\}}$ if line 9 ever rejected a token. Otherwise, sample $t \sim p_x$ .
+13: Return $d_1, \ldots, d_j, t$ .
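The verification phase of Algorithm 5 (lines 7-12) can be sketched in a few lines. In the sketch below, `p_dists` and `q_dists` are toy stand-ins for the per-position target and drafter distributions; a real implementation would obtain them from model logits computed in the single batched target forward pass of line 6:

```python
import random

# Minimal sketch of the verification phase of Algorithm 5 (lines 7-12).
# Distributions are plain dicts mapping tokens to probabilities; the names
# `verify` and `sample_from` are illustrative, not from any library.

def sample_from(weights, u):
    """Inverse-CDF sampling from unnormalized weights, given u in [0, 1)."""
    z = sum(weights.values())
    r, acc = u * z, 0.0
    for t, w in weights.items():
        acc += w
        if r < acc:
            return t
    return t  # guard against floating-point leftovers

def verify(drafts, p_dists, q_dists, rng=random.random):
    """Return the accepted draft prefix plus one freshly sampled token."""
    accepted = []
    for j, d in enumerate(drafts):
        p, q = p_dists[j], q_dists[j]
        # Line 9: when p(d) <= q(d), reject d with probability 1 - p(d)/q(d).
        if p.get(d, 0.0) <= q[d] and rng() >= p.get(d, 0.0) / q[d]:
            # Line 12: sample from the residual r(t) ∝ p(t) - min{p(t), q(t)}.
            residual = {t: p[t] - min(p[t], q.get(t, 0.0)) for t in p}
            return accepted + [sample_from(residual, rng())]
        accepted.append(d)
    # All i drafts accepted: sample the bonus token from the next target dist.
    return accepted + [sample_from(p_dists[len(drafts)], rng())]
```

For instance, with `drafts=["b"]`, `p_dists=[{"a": 0.9, "b": 0.1}]`, `q_dists=[{"a": 0.1, "b": 0.9}]`, and a deterministic `rng=lambda: 0.5`, the draft is rejected and the residual sample yields `["a"]`.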
+
+# C Empirical Analysis of $\psi(t)$ Computation in Algorithm 3: Challenges and Insights
+
+This section presents our empirical analysis of the computational complexity involved in calculating $\psi(t)$ using a real-world vocabulary. Specifically, we examine the Qwen2-7B-Instruct model's vocabulary to evaluate how the number of terms in $\psi(t)$ scales with the length of the target token $t$. Our findings support the theoretical prediction in Lemma 3.1, which states that the number of terms grows exponentially with the token length. We select 150,000 of the shortest tokens from a total of 151,646 tokens in the Qwen2-7B-Instruct vocabulary to keep the computation tractable. We then count how many ways a target token $t$ can be reconstructed by concatenating these shorter tokens. For instance, in the case of $t = \text{hello}$, we found 14 valid combinations out of the 16 that would appear in a complete vocabulary (as defined in Section 3.4), indicating that the vocabulary of this model may be nearly complete for five-character tokens. Figure 1 lists all 14 valid combinations for the string 'hello' and visualizes them in a tree structure, where each leaf node represents a valid combination. In general, the number of forward passes of the drafter model that are required to calculate $\psi(t)$ is equal to the number of non-leaf nodes in the tree plus one. In this example, calculating $\psi(\text{hello})$ requires 16 forward passes of the drafter model, which makes Algorithm 3 with this vocabulary impractical for many target models that are considered state-of-the-art, including the open-access models StarCoder (Li et al., 2023), Llama (Dubey et al., 2024), and DeepSeek (DeepSeek-AI et al., 2025). In a similar way to the above example for the token 'hello', we decompose each of the 150,000 selected tokens into the set of all its corresponding combinations. Table 5 summarizes the statistical properties of the token
+
+1. ['H', 'e', 'l', 'l', 'o']
+2. ['H', 'e', 'l', 'lo']
+3. ['H', 'e', 'll', 'o']
+4. ['H', 'el', 'l', 'o']
+5. ['H', 'el', 'lo']
+6. ['H', 'ell', 'o']
+7. ['H', 'ello']
+8. ['He', 'l', 'l', 'o']
+9. ['He', 'l', 'lo']
+10. ['He', 'll', 'o']
+11. ['Hel', 'l', 'o']
+12. ['Hel', 'lo']
+13. ['Hell', 'o']
+14. ['Hello']
+
+Figure 1: Left: All the 14 valid combinations of tokens from the Qwen2-7B-Instruct vocabulary that can be concatenated to form the string 'hello'. Right: Tree visualization of all these combinations. Each of the 14 checkmarks indicates a valid combination, which is a leaf in the visualized tree. In this example, calculating $\psi(t)$ from Algorithm 3 requires 16 forward passes of the drafter model, which is the number of non-leaf nodes in the tree plus one. This large number of forward passes is due to the exponential growth in the number of valid combinations as the token length increases, as shown in Figure 2.
+
+lengths and the number of combinations for the selected tokens. The mean token length is 6.21 characters, with a standard deviation of 2.87. The mean number of combinations is 144.31, with a standard deviation of 880.98. The maximum number of combinations is 65,536. The median number of combinations is 15, and the 75th percentile is 56. Figure 2 shows the number of combinations for different token lengths. The number of combinations grows exponentially with the token length, as expected. Figure 3 shows the histogram and kernel density estimate of the number of combinations for the 150,000 selected tokens. The distribution is right-skewed, with a long tail of tokens having a large number of combinations. This exponential blow-up renders the calculation of $\psi(t)$ computationally infeasible for longer tokens, especially those among the 1,646 longest in the vocabulary. In practice, we could not count all the combinations for those tokens even after hours of computing time on a server, although merely counting the combinations is an easier task than listing them. These results align with our theoretical expectations. While shorter tokens have a manageable number of decompositions, longer tokens exhibit
+
+Table 5: Statistical summary of token length and number of combinations for a set of 150,000 shortest tokens (out of a total of 151,646 tokens) in the Qwen2-7B-Instruct vocabulary.
+
+| Statistic | Token Length (Number of Characters) | Number of Combinations |
+| --- | --- | --- |
+| Mean | 6.21 | 144.31 |
+| Standard Deviation | 2.87 | 880.98 |
+| Minimum | 1.00 | 1.00 |
+| 25th Percentile | 4.00 | 7.00 |
+| 50th Percentile (Median) | 6.00 | 15.00 |
+| 75th Percentile | 8.00 | 56.00 |
+| Maximum | 17.00 | 65536.00 |
+
+a combinatorial explosion, underscoring the importance of using drafter models with smaller, more concise vocabularies to reduce computational overhead. Although Algorithm 3 guarantees lossless speculative decoding, the latency incurred by the computation of $\psi(t)$ may be prohibitive when the vocabulary includes very long tokens. Consequently, its applicability might be limited to models with compact or pruned vocabularies—such as MambaByte (Wang et al., 2024)—that can balance accuracy with computational feasibility. Further research should explore heuristic or approximate methods to calculate $\psi(t)$ without exhaustive enumeration. Additionally, continued work on vocabulary construction and pruning techniques that reduce redundant token entries could help mitigate these computational challenges.
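The decomposition counts discussed above can be reproduced with a short dynamic program. The sketch below uses a toy vocabulary of substrings rather than the actual Qwen2-7B-Instruct tokenizer; under a complete vocabulary it recovers the $2^{m-1}$ bound of Lemma 3.1, and pruning vocabulary entries reduces the count.

```python
from functools import lru_cache

def count_decompositions(target: str, vocab: set[str]) -> int:
    """Count sequences of vocabulary tokens whose concatenation equals `target`."""
    @lru_cache(maxsize=None)
    def count(i: int) -> int:
        if i == len(target):
            return 1  # the empty suffix has exactly one (empty) decomposition
        return sum(count(j) for j in range(i + 1, len(target) + 1)
                   if target[i:j] in vocab)
    return count(0)

t = "hello"
# Complete toy vocabulary: every substring of t is a token.
full_vocab = {t[i:j] for i in range(len(t)) for j in range(i + 1, len(t) + 1)}
assert count_decompositions(t, full_vocab) == 2 ** (len(t) - 1)  # 16

# Pruning two entries removes exactly the decompositions that used them.
pruned = full_vocab - {"hell", "ello"}
print(count_decompositions(t, pruned))  # 14
```

Memoization keeps the count quadratic in the token length, but enumerating (rather than counting) the decompositions, as $\psi(t)$ requires, still faces the exponential number of valid combinations.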
+
+# D Speedups
+
+We evaluate our methods on various combinations of models, tasks, and hardware setups. Tables 6 and 7 provide full benchmarks for SLEM (Algorithm 2) where the temperature is zero, and TLI (Algorithm 4) where the temperature is one, respectively. The benchmark includes widely used models: DeepSeek (DeepSeek-AI et al., 2025), Phi (Abdin et al., 2024b;a), Gemma2 (Team et al., 2024), Mixtral (Jiang et al., 2024), Qwen2.5 (Qwen et al., 2025), Vicuna (Chiang et al., 2023), Llama (Dubey et al., 2024), CodeLlama (Rozière et al., 2024), and Starcoder (Li et al., 2023). Note that some drafters are heterogeneous even though they belong to the same model family as their target. For example, for the target phi-4, the drafter Phi-3.5-mini-instruct is heterogeneous. The same holds for the DeepSeek-R1-Distill-Qwen model family, where some in-family models are heterogeneous and therefore cannot be accelerated using standard speculative decoding. The datasets span three tasks: code generation using HumanEval (Chen et al., 2021), text summarization using CNN-DailyMail (Nallapati et al., 2016a), and a long-context task using SCROLLS (Shaham et al., 2022). For each dataset, the results are averaged over 30 prompts such that we generate between 128 and 512 new tokens for each prompt. AR denotes autoregressive decoding; SD denotes standard speculative decoding (Algorithm 5), as officially implemented in Hugging Face Transformers (Joao Gante, 2023).
+
+
+Figure 2: The number of combinations for different token lengths for the 150,000 selected tokens from the Qwen2-7B-Instruct vocabulary. We can see that the number of combinations grows exponentially with the token length.
+
+
+Figure 3: Histogram and Kernel Density Estimate of number of combinations for the 150,000 selected tokens from the Qwen2-7B-Instruct vocabulary. We can see that the number of combinations is right-skewed, with a long tail of tokens with a large number of combinations. For exact values, see Table 5.
+
+Table 6: Full benchmark for SLEM (Algorithm 2).
+
+| Target | Dataset | Hardware | Method | Drafter | TTFT (ms) | TPOT (ms) | Tok/s | Speedup |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Mixtral-8x22B-Instruct-v0.1 | cnn_dailymail | 4 * H100 NVL | AR | No Drafter (Autoregressive) | 266.8 | 127.9 | 7.8 | 1.0 |
+| | | | SLEM | Qwen2.5-0.5B-Instruct | 321.2 | 68.3 | 13.3 | 1.71 |
+| | | | | vicuna-68m | 302.4 | 57.3 | 16.4 | 2.1 |
+| | scrolls | 4 * H100 NVL | AR | No Drafter (Autoregressive) | 1331.9 | 163.0 | 6.0 | 1.0 |
+| | | | SLEM | Qwen2.5-0.5B-Instruct | 1414.2 | 81.0 | 10.3 | 1.71 |
+| | | | | vicuna-68m | 1344.5 | 132.5 | 7.4 | 1.24 |
+| | openai_humaneval | 4 * H100 NVL | AR | No Drafter (Autoregressive) | 217.5 | 127.9 | 7.8 | 1.0 |
+| | | | SLEM | Qwen2.5-0.5B-Instruct | 484.4 | 70.2 | 12.0 | 1.53 |
+| | | | | vicuna-68m | 231.5 | 73.3 | 12.6 | 1.61 |
+| DeepSeek-R1-Distill-Qwen-14B | scrolls | 1 * RTX 6000 | AR | No Drafter (Autoregressive) | 1481.0 | 87.5 | 10.9 | 1.0 |
+| | | | SLEM | DeepSeek-R1-Distill-Qwen-1.5B | 1665.4 | 59.1 | 16.0 | 1.48 |
+| | | | | vicuna-68m | 1566.8 | 56.0 | 17.3 | 1.59 |
+| | cnn_dailymail | 1 * RTX 6000 | AR | No Drafter (Autoregressive) | 176.8 | 51.7 | 19.2 | 1.0 |
+| | | | SLEM | DeepSeek-R1-Distill-Qwen-1.5B | 287.5 | 69.9 | 14.1 | 0.73 |
+| | | | | vicuna-68m | 243.0 | 36.2 | 27.4 | 1.43 |
+| | openai_humaneval | 1 * RTX 6000 | AR | No Drafter (Autoregressive) | 91.3 | 50.3 | 19.8 | 1.0 |
+| | | | SLEM | tiny_starcoder.py | 113.4 | 43.8 | 22.4 | 1.14 |
+| | | | | CodeLlama-7b-Instruct-hf | 256.6 | 77.5 | 12.4 | 0.63 |
+| | | | | DeepSeek-R1-Distill-Qwen-1.5B | 292.5 | 70.9 | 13.6 | 0.69 |
+| DeepSeek-R1-Distill-Qwen-32B | cnn_dailymail | 1 * H100 NVL | AR | No Drafter (Autoregressive) | 121.2 | 48.0 | 20.8 | 1.0 |
+| | | | SLEM | DeepSeek-R1-Distill-Qwen-1.5B | 167.1 | 51.3 | 18.9 | 0.91 |
+| | | | | vicuna-68m | 148.1 | 32.5 | 30.6 | 1.47 |
+| | openai_humaneval | 1 * H100 NVL | AR | No Drafter (Autoregressive) | 72.0 | 48.3 | 20.7 | 1.0 |
+| | | | SLEM | tiny_starcoder.py | 80.1 | 34.2 | 28.5 | 1.38 |
+| | | | | CodeLlama-7b-Instruct-hf | 182.7 | 64.4 | 14.9 | 0.72 |
+| | | | | DeepSeek-R1-Distill-Qwen-1.5B | 196.4 | 50.3 | 19.5 | 0.94 |
+| | scrolls | 1 * H100 NVL | AR | No Drafter (Autoregressive) | 933.1 | 77.7 | 12.5 | 1.0 |
+| | | | SLEM | DeepSeek-R1-Distill-Qwen-1.5B | 988.1 | 57.6 | 17.1 | 1.37 |
+| | | | | vicuna-68m | 979.9 | 59.3 | 16.5 | 1.32 |
+| phi-4 | scrolls | 1 * H100 NVL | AR | No Drafter (Autoregressive) | 483.9 | 47.0 | 21.3 | 1.0 |
+| | | | SLEM | Qwen2.5-0.5B-Instruct | 457.7 | 29.5 | 33.9 | 1.59 |
+| | | | | Phi-3.5-mini-instruct | 646.9 | 39.6 | 25.3 | 1.19 |
+| CodeLlama-13b-Instruct-hf | humaneval | 1 * A6000 | AR | No Drafter (Autoregressive) | 70.7 | 46.8 | 21.4 | 1.0 |
+| | | | SLEM | tiny_starcoder.py | 109.7 | 16.7 | 59.7 | 2.79 |
+| | | | | CodeLlama-7b-Instruct-hf | 146.5 | 21.8 | 45.8 | 2.14 |
+| DeepSeek-R1-Distill-Qwen-7B | cnn_dailymail | 1 * H100 NVL | AR | No Drafter (Autoregressive) | 34.7 | 19.3 | 51.8 | 1.0 |
+| | | | SLEM | DeepSeek-R1-Distill-Qwen-1.5B | 85.1 | 38.6 | 24.6 | 0.48 |
+| | | | | vicuna-68m | 65.2 | 17.6 | 55.2 | 1.07 |
+| | openai_humaneval | 1 * H100 NVL | AR | No Drafter (Autoregressive) | 24.4 | 19.6 | 50.9 | 1.0 |
+| | | | SLEM | tiny_starcoder.py | 36.1 | 23.7 | 39.6 | 0.78 |
+| | | | | CodeLlama-7b-Instruct-hf | 138.0 | 54.4 | 17.5 | 0.34 |
+| | | | | DeepSeek-R1-Distill-Qwen-1.5B | 149.3 | 42.2 | 22.7 | 0.45 |
+| | scrolls | 1 * H100 NVL | AR | No Drafter (Autoregressive) | 221.2 | 22.4 | 44.0 | 1.0 |
+| | | | SLEM | DeepSeek-R1-Distill-Qwen-1.5B | 296.2 | 41.1 | 23.5 | 0.54 |
+| | | | | vicuna-68m | 245.4 | 24.6 | 39.9 | 0.91 |
+| gemma-2-9b-it | scrolls | 1 * H100 NVL | AR | No Drafter (Autoregressive) | 584.0 | 95.3 | 9.9 | 1.0 |
+| | | | SD | gemma-2-2b-it | 739.0 | 31.3 | 30.2 | 3.05 |
+| | | | SLEM | gemma-2-2b-it | 592.4 | 48.3 | 18.6 | 1.87 |
+| | openai_humaneval | 1 * H100 NVL | AR | No Drafter (Autoregressive) | 42.6 | 37.0 | 27.0 | 1.0 |
+| | | | SD | gemma-2-2b-it | 446.5 | 29.1 | 33.2 | 1.23 |
+| | | | SLEM | vicuna-68m | 51.6 | 24.5 | 40.2 | 1.49 |
+| | cnn_dailymail | 1 * H100 NVL | AR | No Drafter (Autoregressive) | 73.7 | 37.3 | 26.7 | 1.0 |
+| | | | SD | gemma-2-2b-it | 125.4 | 39.7 | 24.6 | 0.92 |
+| | | | SLEM | vicuna-68m | 83.9 | 26.9 | 37.1 | 1.39 |
+| DeepSeek-R1-Distill-Llama-70B | openai_humaneval | 2 * A100 80GB PCIe | AR | No Drafter (Autoregressive) | 297.1 | 122.6 | 8.2 | 1.0 |
+| | | | SD | CodeLlama-7b-Instruct-hf | 428.7 | 101.5 | 9.6 | 1.18 |
+| | | | | DeepSeek-R1-Distill-Llama-8B | 353.5 | 54.3 | 18.3 | 2.25 |
+| | | | SLEM | tiny_starcoder.py | 265.9 | 84.6 | 11.8 | 1.44 |
+| | | 2 * H100 NVL | AR | No Drafter (Autoregressive) | 130.1 | 76.1 | 13.1 | 1.0 |
+| | | | SD | CodeLlama-7b-Instruct-hf | 277.3 | 93.3 | 10.4 | 0.79 |
+| | | | | DeepSeek-R1-Distill-Llama-8B | 223.5 | 52.8 | 18.8 | 1.43 |
+| | | | SLEM | DeepSeek-R1-Distill-Qwen-1.5B | 297.3 | 60.6 | 16.3 | 1.24 |
+| | | | | tiny_starcoder.py | 143.3 | 63.2 | 15.6 | 1.19 |
+| | cnn_dailymail | 2 * H100 NVL | AR | No Drafter (Autoregressive) | 230.7 | 77.6 | 12.9 | 1.0 |
+| | | | SD | DeepSeek-R1-Distill-Llama-8B | 452.5 | 78.7 | 12.5 | 0.97 |
+| | | | | Llama-3.1-8B | 277.9 | 74.4 | 13.4 | 1.04 |
+| | | | | Llama-3.2-1B | 242.3 | 51.4 | 19.4 | 1.51 |
+| | | | | Llama-3.2-3B | 252.1 | 66.8 | 14.9 | 1.16 |
+| | | | SLEM | DeepSeek-R1-Distill-Qwen-1.5B | 296.1 | 72.3 | 13.6 | 1.06 |
+| | | | | vicuna-68m | 263.8 | 51.5 | 19.3 | 1.5 |
+| | scrolls | 2 * H100 NVL | AR | No Drafter (Autoregressive) | 1836.9 | 127.1 | 7.7 | 1.0 |
+| | | | SD | DeepSeek-R1-Distill-Llama-8B | 2121.4 | 88.0 | 10.9 | 1.42 |
+| | | | SLEM | DeepSeek-R1-Distill-Qwen-1.5B | 1890.9 | 85.8 | 11.3 | 1.47 |
+| DeepSeek-R1-Distill-Llama-8B | scrolls | 1 * H100 NVL | AR | No Drafter (Autoregressive) | 245.3 | 34.3 | 27.9 | 1.0 |
+| | | | SD | Llama-3.2-1B | 283.1 | 24.6 | 39.2 | 1.41 |
+| | | | | Llama-3.2-3B | 353.1 | 35.2 | 27.3 | 0.98 |
+| | | | SLEM | DeepSeek-R1-Distill-Qwen-1.5B | 315.7 | 45.2 | 20.2 | 0.73 |
+| | | | | vicuna-68m | 263.4 | 27.7 | 34.9 | 1.25 |
+| | cnn_dailymail | 1 * H100 NVL | AR | No Drafter (Autoregressive) | 38.9 | 21.8 | 45.9 | 1.0 |
+| | | | SD | Llama-3.2-1B | 48.2 | 25.6 | 38.6 | 0.84 |
+| | | | | Llama-3.2-3B | 57.2 | 38.1 | 26.0 | 0.57 |
+| | | | SLEM | DeepSeek-R1-Distill-Qwen-1.5B | 88.2 | 43.2 | 22.7 | 0.5 |
+| | | | | vicuna-68m | 66.9 | 19.7 | 50.0 | 1.09 |
+| | openai_humaneval | 1 * H100 NVL | AR | No Drafter (Autoregressive) | 31.9 | 21.8 | 45.8 | 1.0 |
+| | | | SD | CodeLlama-7b-Instruct-hf | 144.0 | 72.3 | 12.5 | 0.27 |
+| | | | SLEM | tiny_starcoder.py | 36.7 | 37.1 | 25.8 | 0.56 |
+| | | 1 * RTX 6000 | AR | No Drafter (Autoregressive) | 73.4 | 40.8 | 24.5 | 1.0 |
+| | | | SD | CodeLlama-7b-Instruct-hf | 279.8 | 120.1 | 7.9 | 0.32 |
+| | | | SLEM | tiny_starcoder.py | 96.4 | 52.6 | 18.2 | 0.74 |
+| | | | | DeepSeek-R1-Distill-Qwen-1.5B | 246.2 | 42.7 | 20.4 | 0.83 |
+
+Table 7: Full benchmark for TLI (Algorithm 4).
+
+| Target | Dataset | Hardware | Method | Drafter | TTFT (ms) | TPOT (ms) | Tok/s | Speedup |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Mixtral-8x22B-Instruct-v0.1 | scrolls | 4 * H100 NVL | AR | No Drafter (Autoregressive) | 1334.7 | 168.7 | 5.9 | 1.0 |
+| | | | TLI | Qwen2.5-0.5B-Instruct | 1372.6 | 97.8 | 9.9 | 1.69 |
+| | | | | vicuna-68m | 1329.7 | 138.2 | 7.2 | 1.22 |
+| | openai_humaneval | 4 * H100 NVL | AR | No Drafter (Autoregressive) | 217.5 | 128.1 | 7.8 | 1.0 |
+| | | | TLI | Qwen2.5-0.5B-Instruct | 266.9 | 90.6 | 10.9 | 1.4 |
+| | | | | vicuna-68m | 228.5 | 74.8 | 13.0 | 1.67 |
+| | cnn_dailymail | 4 * H100 NVL | AR | No Drafter (Autoregressive) | 266.8 | 128.1 | 7.8 | 1.0 |
+| | | | TLI | Qwen2.5-0.5B-Instruct | 294.5 | 88.9 | 11.2 | 1.43 |
+| | | | | vicuna-68m | 297.3 | 81.0 | 11.9 | 1.53 |
+| phi-4 | scrolls | 1 * H100 NVL | AR | No Drafter (Autoregressive) | 487.4 | 47.2 | 21.2 | 1.0 |
+| | | | TLI | Qwen2.5-0.5B-Instruct | 454.7 | 32.5 | 30.8 | 1.45 |
+| | | | | Phi-3.5-mini-instruct | 610.4 | 46.0 | 21.7 | 1.03 |
+| CodeLlama-13b-Instruct-hf | humaneval | 1 * A6000 | AR | No Drafter (Autoregressive) | 70.5 | 45.3 | 22.1 | 1.0 |
+| | | | TLI | tiny_starcoder.py | 65.1 | 25.9 | 38.5 | 1.74 |
+| | | | | CodeLlama-7b-Instruct-hf | 141.3 | 25.6 | 39.1 | 1.77 |
+| DeepSeek-R1-Distill-Qwen-14B | scrolls | 1 * RTX 6000 | AR | No Drafter (Autoregressive) | 1479.5 | 88.3 | 10.8 | 1.0 |
+| | | | TLI | DeepSeek-R1-Distill-Qwen-1.5B | 1640.7 | 61.6 | 16.1 | 1.5 |
+| | | | | vicuna-68m | 1502.2 | 57.2 | 17.1 | 1.59 |
+| | cnn_dailymail | 1 * RTX 6000 | AR | No Drafter (Autoregressive) | 176.1 | 54.4 | 18.4 | 1.0 |
+| | | | TLI | DeepSeek-R1-Distill-Qwen-1.5B | 240.5 | 44.7 | 21.4 | 1.16 |
+| | | | | vicuna-68m | 202.4 | 40.6 | 24.1 | 1.31 |
+| | openai_humaneval | 1 * RTX 6000 | AR | No Drafter (Autoregressive) | 90.4 | 50.9 | 19.6 | 1.0 |
+| | | | TLI | tiny_starcoder.py | 93.9 | 38.6 | 25.4 | 1.3 |
+| | | | | CodeLlama-7b-Instruct-hf | 150.2 | 66.0 | 14.6 | 0.75 |
+| | | | | DeepSeek-R1-Distill-Qwen-1.5B | 172.6 | 45.6 | 21.2 | 1.08 |
+| DeepSeek-R1-Distill-Qwen-7B | cnn_dailymail | 1 * H100 NVL | AR | No Drafter (Autoregressive) | 35.0 | 19.8 | 50.6 | 1.0 |
+| | | | TLI | DeepSeek-R1-Distill-Qwen-1.5B | 95.0 | 27.2 | 36.6 | 0.72 |
+| | | | | vicuna-68m | 108.5 | 18.4 | 54.2 | 1.07 |
+| | openai_humaneval | 1 * H100 NVL | AR | No Drafter (Autoregressive) | 23.4 | 20.0 | 49.9 | 1.0 |
+| | | | TLI | tiny_starcoder.py | 40.0 | 22.3 | 44.7 | 0.9 |
+| | | | | CodeLlama-7b-Instruct-hf | 59.6 | 39.2 | 25.3 | 0.51 |
+| | | | | DeepSeek-R1-Distill-Qwen-1.5B | 88.3 | 24.3 | 40.9 | 0.82 |
+| | scrolls | 1 * H100 NVL | AR | No Drafter (Autoregressive) | 220.5 | 22.8 | 43.2 | 1.0 |
+| | | | TLI | DeepSeek-R1-Distill-Qwen-1.5B | 296.9 | 29.1 | 34.2 | 0.79 |
+| | | | | vicuna-68m | 238.6 | 25.0 | 39.2 | 0.91 |
+| gemma-2-9b-it | scrolls | 1 * H100 NVL | AR | No Drafter (Autoregressive) | 585.1 | 90.6 | 10.4 | 1.0 |
+| | | | TLI | vicuna-68m | 603.0 | 46.0 | 21.4 | 2.04 |
+| | | | SD | gemma-2-2b-it | 742.3 | 37.7 | 26.0 | 2.49 |
+| | openai_humaneval | 1 * H100 NVL | AR | No Drafter (Autoregressive) | 42.6 | 37.3 | 26.8 | 1.0 |
+| | | | TLI | vicuna-68m | 92.8 | 25.1 | 39.2 | 1.46 |
+| | | | SD | gemma-2-2b-it | 384.1 | 27.2 | 36.4 | 1.36 |
+| | cnn_dailymail | 1 * H100 NVL | AR | No Drafter (Autoregressive) | 73.7 | 37.8 | 26.5 | 1.0 |
+| | | | TLI | vicuna-68m | 100.1 | 30.0 | 33.2 | 1.26 |
+| | | | SD | gemma-2-2b-it | 117.8 | 33.5 | 29.8 | 1.13 |
+| DeepSeek-R1-Distill-Llama-70B | openai_humaneval | 2 * A100 80GB PCIe | AR | No Drafter (Autoregressive) | 244.0 | 123.5 | 8.1 | 1.0 |
+| | | | TLI | tiny_starcoder.py | 258.7 | 85.5 | 11.7 | 1.44 |
+| | | | SD | CodeLlama-7b-Instruct-hf | 317.3 | 87.2 | 11.4 | 1.41 |
+| | | | | DeepSeek-R1-Distill-Llama-8B | 358.2 | 53.1 | 18.6 | 2.3 |
+| | | 2 * H100 NVL | AR | No Drafter (Autoregressive) | 129.9 | 76.7 | 13.0 | 1.0 |
+| | | | TLI | tiny_starcoder.py | 147.9 | 55.3 | 18.0 | 1.38 |
+| | | | SD | CodeLlama-7b-Instruct-hf | 214.5 | 68.8 | 14.5 | 1.11 |
+| | | | | DeepSeek-R1-Distill-Llama-8B | 179.7 | 44.6 | 22.4 | 1.72 |
+| | | | TLI | DeepSeek-R1-Distill-Qwen-1.5B | 220.4 | 45.5 | 21.9 | 1.68 |
+| | scrolls | 2 * H100 NVL | AR | No Drafter (Autoregressive) | 1837.1 | 126.6 | 7.7 | 1.0 |
+| | | | SD | DeepSeek-R1-Distill-Llama-8B | 2059.4 | 65.1 | 15.2 | 1.98 |
+| | | | TLI | DeepSeek-R1-Distill-Qwen-1.5B | 1898.5 | 70.9 | 13.9 | 1.82 |
+| | cnn_dailymail | 2 * H100 NVL | AR | No Drafter (Autoregressive) | 231.2 | 77.9 | 12.8 | 1.0 |
+| | | | SD | DeepSeek-R1-Distill-Llama-8B | 342.5 | 58.2 | 17.1 | 1.33 |
+| | | | TLI | DeepSeek-R1-Distill-Qwen-1.5B | 315.5 | 59.7 | 16.7 | 1.3 |
+| | | | | vicuna-68m | 263.4 | 55.8 | 17.8 | 1.39 |
+| | | | SD | Llama-3.1-8B | 262.3 | 59.6 | 16.7 | 1.31 |
+| | | | | Llama-3.2-1B | 248.5 | 51.3 | 19.4 | 1.51 |
+| | | | | Llama-3.2-3B | 259.6 | 56.7 | 17.5 | 1.37 |
+| DeepSeek-R1-Distill-Qwen-32B | scrolls | 1 * H100 NVL | AR | No Drafter (Autoregressive) | 946.4 | 77.3 | 12.5 | 1.0 |
+| | | | TLI | DeepSeek-R1-Distill-Qwen-1.5B | 997.8 | 43.0 | 22.7 | 1.82 |
+| | | | | vicuna-68m | 977.1 | 61.6 | 15.9 | 1.27 |
+| | openai_humaneval | 1 * H100 NVL | AR | No Drafter (Autoregressive) | 72.2 | 48.6 | 20.6 | 1.0 |
+| | | | TLI | tiny_starcoder.py | 86.2 | 35.5 | 28.1 | 1.37 |
+| | | | | CodeLlama-7b-Instruct-hf | 123.4 | 49.5 | 20.1 | 0.97 |
+| | | | | DeepSeek-R1-Distill-Qwen-1.5B | 147.4 | 28.5 | 35.0 | 1.7 |
+| | cnn_dailymail | 1 * H100 NVL | AR | No Drafter (Autoregressive) | 121.5 | 49.0 | 20.4 | 1.0 |
+| | | | TLI | DeepSeek-R1-Distill-Qwen-1.5B | 167.5 | 37.9 | 26.1 | 1.28 |
+| | | | | vicuna-68m | 146.4 | 34.2 | 29.1 | 1.42 |
+| DeepSeek-R1-Distill-Llama-8B | scrolls | 1 * H100 NVL | AR | No Drafter (Autoregressive) | 246.7 | 38.6 | 24.9 | 1.0 |
+| | | | TLI | DeepSeek-R1-Distill-Qwen-1.5B | 324.4 | 33.5 | 29.6 | 1.19 |
+| | | | | vicuna-68m | 256.5 | 28.1 | 34.5 | 1.39 |
+| | | | SD | Llama-3.2-1B | 295.2 | 24.9 | 39.7 | 1.6 |
+| | | | | Llama-3.2-3B | 355.9 | 31.7 | 31.2 | 1.25 |
+| | cnn_dailymail | 1 * H100 NVL | AR | No Drafter (Autoregressive) | 39.6 | 22.3 | 44.9 | 1.0 |
+| | | | TLI | DeepSeek-R1-Distill-Qwen-1.5B | 93.4 | 31.9 | 31.1 | 0.69 |
+| | | | | vicuna-68m | 75.4 | 20.3 | 49.1 | 1.09 |
+| | | | SD | Llama-3.2-1B | 51.8 | 22.6 | 44.2 | 0.98 |
+| | | | | Llama-3.2-3B | 60.2 | 29.2 | 34.2 | 0.76 |
+| | openai_humaneval | 1 * H100 NVL | AR | No Drafter (Autoregressive) | 31.2 | 22.3 | 44.8 | 1.0 |
+| | | | TLI | tiny_starcoder.py | 43.5 | 23.6 | 42.1 | 0.94 |
+| | | | SD | CodeLlama-7b-Instruct-hf | 99.0 | 38.0 | 26.0 | 0.58 |
+| | | 1 * RTX 6000 | AR | No Drafter (Autoregressive) | 73.4 | 41.1 | 24.3 | 1.0 |
+| | | | TLI | tiny_starcoder.py | 82.5 | 39.5 | 25.1 | 1.03 |
+| | | | | CodeLlama-7b-Instruct-hf | 218.6 | 63.0 | 15.7 | 0.65 |
+| | | | | DeepSeek-R1-Distill-Qwen-1.5B | 145.8 | 35.7 | 26.0 | 1.07 |
+
+# E Vocabulary and Overlap
+
+This section examines the vocabularies of widely used off-the-shelf target and drafter models. Table 8 shows the vocabulary sizes of widely used target and drafter models. Table 9 shows the vocabulary overlap between the target and drafter models. Table 10 shows the ratio of the number of tokens in the intersection between the target and draft vocabularies $|T' \cap D'|$ to the number of tokens in the target vocabulary $|T'|$ , considering only the tokens that appeared in 50 randomly selected prompts for the given task.
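The per-task ratio $|T' \cap D'| / |T'|$ can be computed with plain set operations over the tokens observed in the prompts. The snippet below is a toy illustration with hypothetical character-level vocabularies and a trivial "tokenizer", not the actual tokenizers from the tables.

```python
def task_overlap_ratio(target_vocab, draft_vocab, prompts, tokenize):
    """Ratio |T' ∩ D'| / |T'| restricted to tokens appearing in the prompts."""
    seen = set()
    for prompt in prompts:
        seen.update(tokenize(prompt))
    t_prime = seen & set(target_vocab)          # target tokens seen in the task
    return len(t_prime & set(draft_vocab)) / len(t_prime)

# Hypothetical vocabularies; `list` splits a string into characters.
target_vocab = set("abcdefgh")
draft_vocab = set("abcd")
prompts = ["abcabc", "de"]
print(task_overlap_ratio(target_vocab, draft_vocab, prompts, list))  # 0.8
```

Only {a, b, c, d, e} appear in the prompts, and the drafter covers four of those five, so the task-restricted ratio (0.8) differs from the full-vocabulary ratio (4/8 = 0.5), mirroring the note under Table 10.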
+
+Table 8: Vocabulary sizes of widely used target and drafter models.
+
+| Target Model | Vocabulary Size $\lvert T \rvert$ |
+| --- | --- |
+| google/gemma-2-9b | 256,000 |
+| meta-llama/Llama-3.1-70B | 128,256 |
+| mistralai/Mixtral-8x22B-Instruct-v0.1 | 32,768 |
+| microsoft/Phi-3-medium-128k-instruct | 32,011 |
+| codellama/CodeLlama-13b-Instruct-hf | 32,016 |
+
+| Drafter Model | Vocabulary Size $\lvert D \rvert$ |
+| --- | --- |
+| Qwen/Qwen2-0.5B-Instruct | 151,646 |
+| bigcode/tiny_starcoder.py | 49,152 |
+| double7/vicuna-68m | 32,000 |
+
+Table 9: Vocabulary overlap metrics for widely used target and drafter models: the size of the intersection between the target vocabulary and the draft vocabulary, and the ratio of the intersection size to the target vocabulary size. We can see a wide range of overlap sizes and ratios.
+
+| Target Model | Drafter Model | $\lvert T \cap D \rvert$ | $\lvert T \cap D \rvert / \lvert T \rvert$ |
+| --- | --- | --- | --- |
+| Llama-3.1-70B | Qwen2-0.5B-Instruct | 109,566 | 0.85 |
+| Gemma-2-9b | vicuna-68m | 30,489 | 0.12 |
+| Mixtral-8x22B-Instruct-v0.1 | vicuna-68m | 24,184 | 0.74 |
+| Mixtral-8x22B-Instruct-v0.1 | Qwen2-0.5B-Instruct | 10,566 | 0.32 |
+| Phi-3-medium-128k-instruct | Qwen2-0.5B-Instruct | 9,588 | 0.30 |
+| CodeLlama-13b-Instruct-hf | tiny_starcoder.py | 8,481 | 0.26 |
+
+# F Injectivity of Tokenizers Under the CNN-DM Dataset
+
+The experiment sampled examples uniformly at random from the CNN-DM dataset (Nallapati et al., 2016b) and took the prefix of 100 characters from each example. Using a SentencePiece tokenizer (Kudo & Richardson, 2018) or various other Hugging Face Transformers tokenizers (Wolf et al., 2020), we encoded the prefix into tokens, and then decoded the tokens back into text. We then checked whether the original prefix could be recovered, that is, whether $s = \text{decode}(\text{encode}(s))$. While a tokenizer may implement a non-injective function in general, this experiment specifically tested its injectivity on the given dataset. The results of our experiment are summarized in Table 11.
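The round-trip test is straightforward to express in code. The sketch below uses a deliberately lossy toy tokenizer (whitespace splitting, not one of the tokenizers from Table 11) to show how non-injectivity is detected.

```python
def is_injective_on(samples, encode, decode):
    """Empirically test s == decode(encode(s)) on the given samples."""
    return all(decode(encode(s)) == s for s in samples)

# Toy tokenizer that collapses runs of whitespace (a common lossy
# normalization): the round-trip fails on inputs with repeated spaces.
def encode(s):
    return s.split()          # loses the exact whitespace

def decode(tokens):
    return " ".join(tokens)

print(is_injective_on(["a b"], encode, decode))   # True: single spaces survive
print(is_injective_on(["a  b"], encode, decode))  # False: "a  b" is not recovered
```

A failure on even one sample, as with double7/vicuna-68m in Table 11, is enough to rule out losslessness of exact string matching on that dataset.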
+
+# G Proofs
+
+Theorem 3.1. Let $p$ be a non-trivial target probability distribution over a vocabulary $T$ , where there exist $t_1, t_2 \in T$ such that $p(t_1) \neq p(t_2)$ . Let $q$ be the drafter probability distribution over the same vocabulary $T$ . If $q = p$ , namely, the drafter is another instance of the target model, then the expected acceptance rate of the exact matching method $\alpha_{EM}$ is strictly less than the expected acceptance rate of the standard speculative decoding method $\alpha_{SD}$ . Namely, it holds that $\alpha_{EM} < \alpha_{SD}$ .
+
+Proof. The expected acceptance rate of the standard speculative decoding verification method is $\alpha_{\mathrm{SD}} = \sum_{t\in T}\min \{p(t),q(t)\}$ by Leviathan et al. (2023). If $q = p$ , we have $\alpha_{\mathrm{SD}} = \sum_{t\in T}\min \{p(t),p(t)\} = \sum_{t\in T}p(t) = 1$ . For exact matching, a token $t$ is accepted if it is sampled by both the draft and the target models. Since these are independent events, the probability of accepting $t$ is $p(t)\cdot p(t) = p(t)^2$ . Thus, we have $\alpha_{\mathrm{EM}} = \sum_{t\in T}p(t)^2$ . For any $p(t)$ such that $0 < p(t) < 1$ , it holds that $p(t)^2 < p(t)$ . Summing over all tokens $t\in T$ , we get that $\sum_{t\in T}p(t)^2 < \sum_{t\in T}p(t) = 1$ . Therefore, $\alpha_{\mathrm{EM}} < \alpha_{\mathrm{SD}}$ for any non-trivial target distribution $p$ .
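The gap between the two acceptance rates can be checked numerically. The following sketch uses a hypothetical three-token distribution and computes $\alpha_{\mathrm{SD}} = \sum_t \min\{p(t), q(t)\}$ and $\alpha_{\mathrm{EM}} = \sum_t p(t)\,q(t)$ for $q = p$, as in the proof.

```python
p = {"a": 0.5, "b": 0.3, "c": 0.2}   # non-trivial target distribution (hypothetical)
q = dict(p)                           # drafter identical to the target

alpha_sd = sum(min(p[t], q[t]) for t in p)  # standard speculative decoding: 1.0
alpha_em = sum(p[t] * q[t] for t in p)      # exact matching: sum of p(t)^2 = 0.38

print(alpha_sd, round(alpha_em, 2))
assert alpha_em < alpha_sd
```

Exact matching accepts only 38% of drafts here even though the drafter is a perfect copy of the target, which is the inefficiency the theorem formalizes.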
+
+Theorem 3.2. For any token in the target vocabulary $t \in T$, Algorithm 3 outputs the token $t$ with probability $p(t)$ if we define
+
+$$
+\psi(t) \coloneqq \sum_{d_1, d_2, \dots, d_i \,:\, t = T(d_1 \oplus \dots \oplus d_i)_1} \; \prod_{j \in \{1, \dots, i\}} q(d_j).
+$$
+
+Namely, Algorithm 3 is lossless.
+
+Proof. Denote the probability of accepting the token $t$, given that the drafted tokens concatenate to $t$, by $\operatorname*{Pr}[\text{accept } t \mid t]$. We have that $\operatorname*{Pr}[\text{accept } t \mid t] = 1$ if $p(t) \geq \psi(t)$, and $\frac{p(t)}{\psi(t)}$ otherwise. We also have that the probability of sampling tokens from $q$ such that their concatenation forms $t$ is $\psi(t)$. Therefore, $\operatorname*{Pr}[\text{accept } t] = \operatorname*{Pr}[\text{accept } t \mid t] \cdot \psi(t) = \min\{p(t), \psi(t)\}$, and hence $\sum_{t'} \operatorname*{Pr}[\text{accept } t'] = \sum_{t'} \min\{p(t'), \psi(t')\}$. The probability of outputting $t$ is then $\operatorname*{Pr}[\text{output } t] = \operatorname*{Pr}[\text{accept } t] + \left(1 - \sum_{t'} \operatorname*{Pr}[\text{accept } t']\right) \cdot \frac{p(t) - \min\{p(t), \psi(t)\}}{1 - \sum_{t'} \min\{p(t'), \psi(t')\}} = p(t)$.
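The accept-plus-residual identity in the proof can be verified exactly, with no sampling, for hypothetical values of $p$ and $\psi$ (here $\psi$ need not sum to one, since drafted sequences may not form any single target token):

```python
# Hypothetical target probabilities p and drafter-induced probabilities psi.
p   = {"x": 0.6, "y": 0.3, "z": 0.1}
psi = {"x": 0.2, "y": 0.5, "z": 0.1}

# Pr[accept t] = min(p(t), psi(t)); rejected mass is redistributed by the
# normalized residual (p(t) - min(p(t), psi(t))) / (1 - sum of accept mass).
accept = {t: min(p[t], psi[t]) for t in p}
total_accept = sum(accept.values())
residual = {t: (p[t] - accept[t]) / (1 - total_accept) for t in p}

out = {t: accept[t] + (1 - total_accept) * residual[t] for t in p}
assert all(abs(out[t] - p[t]) < 1e-12 for t in p)  # output distribution equals p
print(out)
```

The computation assumes `total_accept < 1`; when the drafter already covers all of $p$, every token is accepted and no residual step is needed.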
+
+Lemma 3.1. For a target token $t$ of length $m \leq n$ in a complete vocabulary $D_{n}$ that contains all possible strings of length up to $n$ over a fixed alphabet $\Sigma$ , the number of distinct sequences of draft tokens $d_{1}, \ldots, d_{i}$ such that their concatenation $d_{1} \oplus \ldots \oplus d_{i}$ starts with $t$ , namely, $T(d_{1} \oplus \ldots \oplus d_{i})_{1} = t$ , is $2^{m - 1}$ .
+
+Proof. This counting problem is a combinatorial composition problem: the number of ways to write the length $m$ of the target token $t$ as an ordered sum of strictly positive integers. Write $t$ as the sequence of its $m$ characters $t_1, t_2, \ldots, t_m$. Each composition of $m$ into contiguous segments corresponds to a unique sequence of draft tokens from the complete vocabulary whose concatenation yields $t$. There are exactly $2^{m-1}$ such compositions because at each of the $m-1$ positions between adjacent characters we make a binary choice: either merge the next character into the current segment or start a new segment. For example, given the sequence $t_1, t_2, \ldots, t_m$, the possible compositions include $(t_1 \oplus t_2), t_3, \ldots, t_m$; $t_1, (t_2 \oplus t_3), \ldots, t_m$; and $(t_1 \oplus t_2 \oplus t_3), t_4, \ldots, t_m$; and so forth, covering all
+
+Table 10: The ratio of the number of tokens in the intersection between the target and draft vocabularies $|T' \cap D'|$ to the number of tokens in the target vocabulary $|T'|$ , considering only the tokens that appeared in 50 randomly selected prompts for the given task. Note that $|T' \cap D'| / |T'|$ for a given task could differ from $|T \cap D| / |T|$ because some tokens of $T$ or $D$ might not appear in the prompts of the given task.
+
+| Target Model | Drafter Model | Task | Dataset | $\lvert T' \cap D' \rvert / \lvert T' \rvert$ |
+| --- | --- | --- | --- | --- |
+| CodeLlama-13b-Instruct-hf | CodeLlama-7b-Instruct-hf | coding | openai_humaneval | 1.0 |
+| CodeLlama-13b-Instruct-hf | tiny_starcoder.py | coding | openai_humaneval | 0.86 |
+| DeepSeek-R1-Distill-Llama-70B | CodeLlama-7b-Instruct-hf | coding | openai_humaneval | 0.84 |
+| DeepSeek-R1-Distill-Llama-70B | DeepSeek-R1-Distill-Llama-8B | coding | openai_humaneval | 1.0 |
+| DeepSeek-R1-Distill-Llama-70B | DeepSeek-R1-Distill-Llama-8B | long-ctx summ | scrolls | 1.0 |
+| DeepSeek-R1-Distill-Llama-70B | DeepSeek-R1-Distill-Llama-8B | summ | cnn_dailymail | 1.0 |
+| DeepSeek-R1-Distill-Llama-70B | DeepSeek-R1-Distill-Qwen-1.5B | coding | openai_humaneval | 1.0 |
+| DeepSeek-R1-Distill-Llama-70B | DeepSeek-R1-Distill-Qwen-1.5B | long-ctx summ | scrolls | 1.0 |
+| DeepSeek-R1-Distill-Llama-70B | DeepSeek-R1-Distill-Qwen-1.5B | summ | cnn_dailymail | 1.0 |
+| DeepSeek-R1-Distill-Llama-70B | Llama-3.1-8B | long-ctx summ | scrolls | 1.0 |
+| DeepSeek-R1-Distill-Llama-70B | Llama-3.1-8B | summ | cnn_dailymail | 1.0 |
+| DeepSeek-R1-Distill-Llama-70B | Llama-3.2-1B | long-ctx summ | scrolls | 1.0 |
+| DeepSeek-R1-Distill-Llama-70B | Llama-3.2-1B | summ | cnn_dailymail | 1.0 |
+| DeepSeek-R1-Distill-Llama-70B | Llama-3.2-3B | long-ctx summ | scrolls | 1.0 |
+| DeepSeek-R1-Distill-Llama-70B | Llama-3.2-3B | summ | cnn_dailymail | 1.0 |
+| DeepSeek-R1-Distill-Llama-70B | tiny_starcoder.py | coding | openai_humaneval | 0.94 |
+| DeepSeek-R1-Distill-Llama-70B | vicuna-68m | long-ctx summ | scrolls | 0.97 |
+| DeepSeek-R1-Distill-Llama-70B | vicuna-68m | summ | cnn_dailymail | 0.98 |
+| DeepSeek-R1-Distill-Llama-8B | CodeLlama-7b-Instruct-hf | coding | openai_humaneval | 0.77 |
+| DeepSeek-R1-Distill-Llama-8B | DeepSeek-R1-Distill-Qwen-1.5B | coding | openai_humaneval | 1.0 |
+| DeepSeek-R1-Distill-Llama-8B | DeepSeek-R1-Distill-Qwen-1.5B | long-ctx summ | scrolls | 1.0 |
+| DeepSeek-R1-Distill-Llama-8B | Llama-3.2-1B | long-ctx summ | scrolls | 1.0 |
+| DeepSeek-R1-Distill-Llama-8B | Llama-3.2-1B | summ | cnn_dailymail | 1.0 |
+| DeepSeek-R1-Distill-Llama-8B | Llama-3.2-3B | long-ctx summ | scrolls | 1.0 |
+| DeepSeek-R1-Distill-Llama-8B | Llama-3.2-3B | summ | cnn_dailymail | 1.0 |
+| DeepSeek-R1-Distill-Llama-8B | tiny_starcoder.py | coding | openai_humaneval | 0.94 |
+| DeepSeek-R1-Distill-Llama-8B | vicuna-68m | long-ctx summ | scrolls | 0.98 |
+| DeepSeek-R1-Distill-Llama-8B | vicuna-68m | summ | cnn_dailymail | 0.98 |
+| DeepSeek-R1-Distill-Qwen-14B | CodeLlama-7b-Instruct-hf | coding | openai_humaneval | 0.83 |
+| DeepSeek-R1-Distill-Qwen-14B | DeepSeek-R1-Distill-Qwen-1.5B | coding | openai_humaneval | 1.0 |
+| DeepSeek-R1-Distill-Qwen-14B | DeepSeek-R1-Distill-Qwen-1.5B | long-ctx summ | scrolls | 1.0 |
+| DeepSeek-R1-Distill-Qwen-14B | DeepSeek-R1-Distill-Qwen-1.5B | summ | cnn_dailymail | 1.0 |
+| DeepSeek-R1-Distill-Qwen-14B | tiny_starcoder.py | coding | openai_humaneval | 0.93 |
+| DeepSeek-R1-Distill-Qwen-14B | vicuna-68m | long-ctx summ | scrolls | 0.98 |
+| DeepSeek-R1-Distill-Qwen-14B | vicuna-68m | summ | cnn_dailymail | 0.99 |
+| DeepSeek-R1-Distill-Qwen-32B | CodeLlama-7b-Instruct-hf | coding | openai_humaneval | 0.83 |
+| DeepSeek-R1-Distill-Qwen-32B | DeepSeek-R1-Distill-Qwen-1.5B | coding | openai_humaneval | 1.0 |
+| DeepSeek-R1-Distill-Qwen-32B | DeepSeek-R1-Distill-Qwen-1.5B | long-ctx summ | scrolls | 1.0 |
+| DeepSeek-R1-Distill-Qwen-32B | DeepSeek-R1-Distill-Qwen-1.5B | summ | cnn_dailymail | 1.0 |
+| DeepSeek-R1-Distill-Qwen-32B | tiny_starcoder.py | coding | openai_humaneval | 0.93 |
+| DeepSeek-R1-Distill-Qwen-32B | vicuna-68m | long-ctx summ | scrolls | 0.98 |
+| DeepSeek-R1-Distill-Qwen-32B | vicuna-68m | summ | cnn_dailymail | 0.99 |
+| DeepSeek-R1-Distill-Qwen-7B | CodeLlama-7b-Instruct-hf | coding | openai_humaneval | 0.83 |
+| DeepSeek-R1-Distill-Qwen-7B | DeepSeek-R1-Distill-Qwen-1.5B | coding | openai_humaneval | 1.0 |
+| DeepSeek-R1-Distill-Qwen-7B | DeepSeek-R1-Distill-Qwen-1.5B | long-ctx summ | scrolls | 1.0 |
+| DeepSeek-R1-Distill-Qwen-7B | DeepSeek-R1-Distill-Qwen-1.5B | summ | cnn_dailymail | 1.0 |
+| DeepSeek-R1-Distill-Qwen-7B | tiny_starcoder.py | coding | openai_humaneval | 0.93 |
+| DeepSeek-R1-Distill-Qwen-7B | vicuna-68m | long-ctx summ | scrolls | 0.98 |
+| DeepSeek-R1-Distill-Qwen-7B | vicuna-68m | summ | cnn_dailymail | 0.99 |
+| Llama-3.1-70B | Llama-3.1-8B | coding | openai_humaneval | 1.0 |
+| Llama-3.1-70B | Llama-3.1-8B | summ | cnn_dailymail | 1.0 |
+| Llama-3.1-70B | Llama-3.2-1B | long-ctx summ | scrolls | 1.0 |
+| Llama-3.1-70B | Llama-3.2-1B | summ | cnn_dailymail | 1.0 |
+| Llama-3.1-70B | Llama-3.2-3B | coding | openai_humaneval | 1.0 |
+| Llama-3.1-70B | Llama-3.2-3B | long-ctx summ | scrolls | 1.0 |
+| Llama-3.1-70B | Llama-3.2-3B | summ | cnn_dailymail | 1.0 |
+| Llama-3.1-70B | Qwen2.5-0.5B-Instruct | coding | openai_humaneval | 1.0 |
+| Llama-3.1-70B | Qwen2.5-0.5B-Instruct | long-ctx summ | scrolls | 1.0 |
+| Llama-3.1-70B | Qwen2.5-0.5B-Instruct | summ | cnn_dailymail | 1.0 |
+| Llama-3.1-70B-Instruct | Llama-3.1-8B-Instruct | coding | openai_humaneval | 1.0 |
+| Llama-3.1-70B-Instruct | Llama-3.1-8B-Instruct | long-ctx summ | scrolls | 1.0 |
+| Llama-3.1-70B-Instruct | Llama-3.1-8B-Instruct | summ | cnn_dailymail | 1.0 |
+| Llama-3.1-70B-Instruct | Llama-3.2-1B-Instruct | coding | openai_humaneval | 1.0 |
+| Llama-3.1-70B-Instruct | Llama-3.2-1B-Instruct | long-ctx summ | scrolls | 1.0 |
+| Llama-3.1-70B-Instruct | Llama-3.2-1B-Instruct | summ | cnn_dailymail | 1.0 |
+| Llama-3.1-70B-Instruct | Llama-3.2-3B-Instruct | coding | openai_humaneval | 1.0 |
+| Llama-3.1-70B-Instruct | Llama-3.2-3B-Instruct | long-ctx summ | scrolls | 1.0 |
+| Llama-3.1-70B-Instruct | Llama-3.2-3B-Instruct | summ | cnn_dailymail | 1.0 |
+| Llama-3.1-70B-Instruct | Qwen2.5-0.5B-Instruct | coding | openai_humaneval | 1.0 |
+| Llama-3.1-70B-Instruct | Qwen2.5-0.5B-Instruct | long-ctx summ | scrolls | 1.0 |
+| Llama-3.1-70B-Instruct | Qwen2.5-0.5B-Instruct | summ | cnn_dailymail | 0.98 |
+| Mixtral-8x22B-Instruct-v0.1 | Qwen2.5-0.5B-Instruct | coding | openai_humaneval | 1.0 |
+| Mixtral-8x22B-Instruct-v0.1 | Qwen2.5-0.5B-Instruct | long-ctx summ | scrolls | 0.89 |
+| Mixtral-8x22B-Instruct-v0.1 | Qwen2.5-0.5B-Instruct | summ | cnn_dailymail | 1.0 |
+
+Table 11: Results of injectivity tests for various tokenizers.
+
+| Library | Tokenizer | Injective |
+| --- | --- | --- |
+| SentencePiece | SentencePiece | True |
+| Hugging Face | gpt2 | True |
+| Hugging Face | double7/vicuna-68m | False |
+| Hugging Face | bigcode/tiny_starcoder.py | True |
+| Hugging Face | Qwen/Qwen2-0.5B-Instruct | True |
+
+possible ways to concatenate adjacent tokens. Thus, the total number of valid concatenations is $2^{m - 1}$ , which follows from the combinatorial nature of partitioning the sequence into contiguous segments.
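The $2^{m-1}$ count can be confirmed by exhaustive enumeration of the boundary choices between adjacent characters:

```python
from itertools import product

def compositions(m: int):
    """All ways to split a length-m string into contiguous non-empty segments:
    each of the m-1 internal boundaries is either a cut or a merge."""
    for cuts in product([False, True], repeat=m - 1):
        bounds = [0] + [i + 1 for i, cut in enumerate(cuts) if cut] + [m]
        yield [(bounds[k], bounds[k + 1]) for k in range(len(bounds) - 1)]

# The number of compositions doubles with each extra character.
for m in range(1, 8):
    assert len(list(compositions(m))) == 2 ** (m - 1)
print("verified 2^(m-1) for m = 1..7")
```

For 'hello' ($m = 5$) this yields 16 compositions; Figure 1 shows 14 valid ones because the actual vocabulary is not complete and misses two of the segmentations.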
+
+Theorem 4.1. Let $p$ and $q$ be target and drafter probability distributions over vocabularies $T$ and $D$ , respectively. Define $p', q_1, q_2$ to be probability distributions over $T \cup D$ as follows. $p'(x) = p(x)$ if $x \in T$ and $p'(x) = 0$ otherwise. $q_1(x) = q(x)$ if $x \in D$ and $q_1(x) = 0$ otherwise. $q_2(x) = \frac{q(x)}{\sum_{t \in T} q(t)}$ if $x \in T$ and $q_2(x) = 0$ otherwise. Given the target $p'$ , we define $\alpha_1$ and $\alpha_2$ to be the probability of accepting a token $x \sim q_1$ and $x \sim q_2$ , respectively, by the rejection sampling algorithm of speculative decoding from Leviathan et al. (2023); Chen et al. (2023). Then, $\alpha_1 \leq \alpha_2$ , and the output tokens distribute according to $p$ .
+
+Proof. By Leviathan et al. (2023), the expected acceptance rate is the sum of the minimum probabilities of the target and draft distributions, namely, we have $\alpha_{1} = \sum_{x\in T\cup D}\min \left\{p^{\prime}(x),q_{1}(x)\right\} = \sum_{x\in T}\min \left\{p^{\prime}(x),q_{1}(x)\right\} \leq \sum_{x\in T}\min \left\{p^{\prime}(x),q_{2}(x)\right\} = \sum_{x\in T\cup D}\min \left\{p^{\prime}(x),q_{2}(x)\right\} = \alpha_{2}$ since $\sum_{x\in T}q(x)\leq 1$ . The output tokens distribute according to $p^\prime$ because the rejection sampling algorithm of speculative decoding preserves the target distribution. Since $p^{\prime}(x) = p(x)$ for $x\in T$ , we have that the output tokens distribute according to $p$ .
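A small numerical example (with hypothetical distributions) illustrates the inequality $\alpha_1 \leq \alpha_2$ from the theorem:

```python
# Toy vocabularies and distributions; q2 renormalizes the drafter's mass on T.
T = {"a", "b"}
D = {"b", "c"}
p = {"a": 0.6, "b": 0.4}              # target over T
q = {"a": 0.2, "b": 0.3, "c": 0.5}    # drafter over D ∪ {a} (hypothetical)

union = T | D
p_prime = {x: p.get(x, 0.0) for x in union}
q1 = {x: q.get(x, 0.0) for x in union}
mass_on_T = sum(q[t] for t in T if t in q)                      # 0.5
q2 = {x: (q[x] / mass_on_T if x in T and x in q else 0.0) for x in union}

alpha1 = sum(min(p_prime[x], q1[x]) for x in union)  # 0.5
alpha2 = sum(min(p_prime[x], q2[x]) for x in union)  # 0.8
assert alpha1 <= alpha2
print(alpha1, alpha2)
```

Restricting and renormalizing the drafter onto the target vocabulary moves probability mass from tokens the target can never emit onto tokens it can, which is exactly why the acceptance rate cannot decrease.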
\ No newline at end of file
diff --git a/acceleratingllminferencewithlosslessspeculativedecodingalgorithmsforheterogeneousvocabularies/images.zip b/acceleratingllminferencewithlosslessspeculativedecodingalgorithmsforheterogeneousvocabularies/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..487bf8776ed98db47da58e5dba234fc4f7225a1c
--- /dev/null
+++ b/acceleratingllminferencewithlosslessspeculativedecodingalgorithmsforheterogeneousvocabularies/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:803a273d43458ba964ec54b93db99dd2fff84a3520b3904e0902caa157f7ca3e
+size 1407866
diff --git a/acceleratingllminferencewithlosslessspeculativedecodingalgorithmsforheterogeneousvocabularies/layout.json b/acceleratingllminferencewithlosslessspeculativedecodingalgorithmsforheterogeneousvocabularies/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..f18826996b873c2eb3022f324e8c486d7b84d661
--- /dev/null
+++ b/acceleratingllminferencewithlosslessspeculativedecodingalgorithmsforheterogeneousvocabularies/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ae1a9f395ee6be11493e5589aac56f8dc712b25aa46e435bacfac608d096d27e
+size 897399
diff --git a/addqadaptivedistributionaldoubleqlearning/42431bfd-4f0e-441e-9c8a-947af21cd543_content_list.json b/addqadaptivedistributionaldoubleqlearning/42431bfd-4f0e-441e-9c8a-947af21cd543_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..a27b96fad4ab77ae3ce3b74d82feef444dd41541
--- /dev/null
+++ b/addqadaptivedistributionaldoubleqlearning/42431bfd-4f0e-441e-9c8a-947af21cd543_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:64206ab6f0c89252fcabd8889f45b71c9edf81c8946c1bb0c688426410456a7b
+size 317713
diff --git a/addqadaptivedistributionaldoubleqlearning/42431bfd-4f0e-441e-9c8a-947af21cd543_model.json b/addqadaptivedistributionaldoubleqlearning/42431bfd-4f0e-441e-9c8a-947af21cd543_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..765d7312cce8ffda05b1d0a20413a5749ea52008
--- /dev/null
+++ b/addqadaptivedistributionaldoubleqlearning/42431bfd-4f0e-441e-9c8a-947af21cd543_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:aa42aff04887ae66409f10bbc7c9b4270fb02f0e4e52694fb6ecd11f13cfa070
+size 360933
diff --git a/addqadaptivedistributionaldoubleqlearning/42431bfd-4f0e-441e-9c8a-947af21cd543_origin.pdf b/addqadaptivedistributionaldoubleqlearning/42431bfd-4f0e-441e-9c8a-947af21cd543_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..c337072c0e67100b231ac8009e7bbb14bdf4d5d3
--- /dev/null
+++ b/addqadaptivedistributionaldoubleqlearning/42431bfd-4f0e-441e-9c8a-947af21cd543_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:84a37ae895e4b76becbc1b0c6bac8aaf3ed2db17a33864598fcc1ae178c19ec8
+size 12691043
diff --git a/addqadaptivedistributionaldoubleqlearning/full.md b/addqadaptivedistributionaldoubleqlearning/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..2a51aa9ad43e18d3d8f4c3fb0e3c94105c3150a4
--- /dev/null
+++ b/addqadaptivedistributionaldoubleqlearning/full.md
@@ -0,0 +1,1678 @@
+Leif Döring $^{1}$ Benedikt Wille $^{1}$ Maximilian Birr $^{1}$ Mihail Bīrsan $^{2}$ Martin Slowik $^{1}$
+
+# Abstract
+
+Bias problems in the estimation of $Q$ -values are a well-known obstacle that slows down convergence of $Q$ -learning and actor-critic methods. Part of the success of modern RL algorithms stems from direct or indirect overestimation-reduction mechanisms. We propose an easy-to-implement method, built on top of distributional reinforcement learning (DRL) algorithms, that deals with overestimation in a locally adaptive way. Our framework is simple to implement: existing distributional algorithms can be improved with a few lines of code. We provide theoretical evidence and use double $Q$ -learning to show how to include locally adaptive overestimation control in existing algorithms. Experiments are provided for tabular, Atari, and MuJoCo environments.
+
+# 1. Introduction
+
+A fundamental building block of many modern reinforcement learning (RL) algorithms is Watkins' $Q$ -learning (QL) (Watkins & Dayan, 1992). In each round the agent observes a new reward signal and updates the currently estimated state-action value function by combining the new reward signal with the value of the best currently estimated action in the next state. Unfortunately, the update rule involves a maximum, and maxima suffer both from overestimation bias and from function approximation uncertainty. Thus, estimated $Q$ -values are initially much too large. Although convergence of $Q$ -learning and its variants can be proved rigorously in tabular cases, convergence can often be seen only after millions of iterations. In the context of $Q$ -learning we refer to the seminal paper (Thrun & Schwartz, 1993). The overestimation effect is harmful not only for simple QL and variants with function approximation such as DQN (Mnih et al., 2015), but also for critic estimation in actor-critic methods such as
+
+$^{1}$ Institute of Mathematics, University of Mannheim, Germany $^{2}$ Department of Mathematics and Computer Science, Freie Universität Berlin, Germany. Correspondence to: Leif Döring .
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+soft actor-critic (SAC) of (Haarnoja et al., 2018). Motivated by statistical approaches to estimating the expectation of maxima of random variables, the concept of double $Q$ -learning (DQL) was introduced in (van Hasselt, 2010). Instead of one set of random variables, two independent sets are used: one to detect the maximal index, the other to evaluate the random variable corresponding to that index. For $Q$ -learning this translates to keeping track of two copies of the $Q$ -matrix that are alternated in order to detect the best action and evaluate the corresponding $Q$ -value. DQL and its actor-critic variants reduce the overestimation (see for instance Figure 2 in (Fujimoto et al., 2018)) and sometimes even underestimate $Q$ -values. This can be seen, for example, in a simple chain MDP (Example 6.7 in (Sutton & Barto, 2018); see also (Lan et al., 2020) for the underestimation effect in the same example). (Fujimoto et al., 2018) argue that overestimation should be addressed particularly in state-action regions with high uncertainty. Indirectly, this is taken into account by ensemble methods such as Maxmin QL (Lan et al., 2020). These methods use ensembles of more than two $Q$ -estimators; different ensemble estimators take the minimum, full ensemble averages (Anschel et al., 2017), or random ensemble averages (Chen et al., 2021). A related line of research uses uncertainty-based RL, also with the goal of reducing overestimation; see for instance (Wu et al., 2021), (Ghasemipour et al., 2022).
+
+There is no rule for whether QL, DQL, or any ensemble variant works well for a given environment; algorithms sometimes perform well and sometimes fail. This article proposes a novel approach: take QL and (at least) one other method and control whether the QL update should be replaced by, or mixed with, another, underestimating, method. The control must be locally adaptive, as the need to manage the bias depends on the local uncertainty (aleatoric and epistemic randomness, function approximation). For an approach similar in spirit to ours, see (Dorka et al., 2021).
+
+- We show theoretically how distributional RL helps the agent identify the need for overestimation control.
+- As a test case, we combine QL and DQL using a local weighting that we call ADDQ.
+- Convergence of ADDQ is proved, experiments are performed on tabular, Atari, and MuJoCo environments.
+
+# 2. $Q$ -learning and the overestimation problem
+
+# 2.1. Tabular $Q$ -learning
+
+Let us fix a (discrete) Markov decision model $(\mathcal{S},\mathcal{A},\mathcal{R},p)$ , where $\mathcal{S}$ is a finite state-space, $\mathcal{A}$ a finite space of allowed actions, $\mathcal{R}$ the reward space, and $p$ a transition kernel describing the distribution of the reward $r$ and the new state $s'$ when action $a$ is played in state $s$ . Given a time-stationary policy $\pi$ , a Markov kernel on $\mathcal{S} \times \mathcal{A}$ , there is a Markov reward process $(S_{t},A_{t},R_{t})$ with transitions
+
+$$
+\begin{aligned}
+&\mathbb{P}^{\pi}\left(R_{t} = r,\; S_{t+1} = s^{\prime},\; A_{t+1} = a^{\prime} \mid S_{t} = s,\; A_{t} = a\right) \\
+&\qquad = \pi(a^{\prime} \colon s^{\prime})\, p(r, s^{\prime} \colon s, a).
+\end{aligned}
+$$
+
+The goal of the agent in reinforcement learning is to use rollouts of the MDP to find a policy that maximizes $Q^{\pi}(s,a) = \mathbb{E}^{\pi}[\sum_{t=0}^{\infty}\gamma^{t}R_{t}|S_{0}=s,A_{0}=a]$ , the expected discounted reward. The discount factor $\gamma \in (0,1)$ is fixed. In the discrete setting with $\mathcal{S}$ and $\mathcal{A}$ finite it is well known that optimal stationary policies exist and can be found as the greedy policy with respect to the unique solution matrix $Q^{*}$ of $T^{*}Q = Q$ . The non-linear operator $(T^{*}Q)(s,a) = r(s,a) + \sum_{s^{\prime}\in S}p(\mathcal{R}\times \{s^{\prime}\} :s,a)\gamma \max_{a^{\prime}\in A}Q(s^{\prime},a^{\prime})$ is called Bellman's optimality operator; it is a max-norm contraction on the $S\times A$ matrices. By Banach's fixed point theorem, the solution can in principle be found by iteratively applying $T^{*}$ to some initial matrix $Q_{0}$ . The drawback of this approach is the need to know the operator $T^{*}$ , and thus the transitions $p$ . Using standard stochastic approximation algorithms the fixed point $Q^{*}$ can be approximated by
+
+$$
+Q (s, a) \gets (1 - \alpha) Q (s, a) + \alpha \big (r + \gamma \max _ {a ^ {\prime}} Q (s ^ {\prime}, a ^ {\prime}) \big),
+$$
+
+called $Q$ -learning (QL). The state-action pairs can be chosen synchronously or asynchronously using rollouts. Typically, to update at $(s, a)$ a one-step sample $(r, s')$ is obtained from $p(\cdot : s, a)$ and the step-sizes $\alpha$ are assumed to satisfy the Robbins-Monro conditions. The exploration (the choice of $(s, a)$ to be updated) can be on-policy (using the $Q$ -estimates) or off-policy (using a behavior policy); the only requirement is that all state-action pairs are visited infinitely often. The recursively defined matrix-sequence $(Q_t)$ was proved to converge in the tabular setting, see e.g. (Tsitsiklis, 1994).
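As a minimal illustration (our own sketch, not the paper's code), the operator $T^{*}$ and its sample-based stochastic approximation can be written as follows; array shapes and function names are assumptions for the example:

```python
import numpy as np

def bellman_optimality(Q, r, P, gamma=0.9):
    """(T*Q)(s,a) = r(s,a) + gamma * sum_s' p(s'|s,a) * max_a' Q(s',a').
    r has shape (S, A); P has shape (S, A, S)."""
    return r + gamma * P @ Q.max(axis=1)

def q_learning_update(Q, counts, s, a, r, s_next, gamma=0.9):
    """One asynchronous QL step with alpha_t(s,a) = 1/T_{s,a}(t),
    which satisfies the Robbins-Monro conditions."""
    counts[s, a] += 1
    alpha = 1.0 / counts[s, a]
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    return Q
```

Iterating `bellman_optimality` to a fixed point is value iteration with a known model; `q_learning_update` replaces the unknown expectation under $p$ by a one-step sample.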
+
+# 2.2. The overestimation problem
+
+Even though QL converges to $Q^{*}$ as the number of updates goes to infinity, the convergence is very slow. One of the known sources is the so-called overestimation problem of QL. The algorithm does not provide unbiased estimates of $Q^{*}(s,a)$ ; instead, the estimates $Q_{t}(s,a)$ tend to overestimate $Q^{*}(s,a)$ . A statistical explanation is based on the simple fact that the point estimator $\max \{\hat{X}_1,\dots,\hat{X}_n\}$ is not an unbiased estimator of $\max \{\mathbb{E}[X_1],\dots,\mathbb{E}[X_n]\}$ but is positively biased. Thus, the update targets $r + \gamma \max_{a'} Q_t(s', a')$ overestimate the true Bellman optimality operator at each step. The consequences of overestimation are less obvious than they seem at first sight. Shifting all $Q$ -values by the same amount globally has no effect, neither for $Q$ -based exploration nor for best-action selection. It is the local differences in overestimation that must be avoided so as not to confuse the agent. Thus, it is crucial to understand the root causes of overestimation in order to mitigate it by taking alternative updates to QL where needed.
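The positive bias of the maximum of sample means is easy to check numerically. The following simulation is purely illustrative (the ten-action setup and all sizes are our own choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions, n_samples, n_trials = 10, 5, 20_000

# All true action values are 0, so max_a E[X_a] = 0, yet the maximum of the
# per-action sample means is positively biased, of order sigma*sqrt(log k / n).
sample_means = rng.normal(0.0, 1.0,
                          size=(n_trials, n_actions, n_samples)).mean(axis=2)
bias = sample_means.max(axis=1).mean()
print(f"average of max_a sample mean: {bias:.2f}")  # clearly positive, not 0
```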
+
+There is little quantitative understanding of the overestimation; for some rough bounds see (van Hasselt, 2011). We used tools from probability theory to compute estimation bounds for a simple explanatory example. The example is
+
+
+Figure 1. Two-sided bandit MDP, start in $s_0$ , gray boxes terminal
+
+intriguing, as it is simple but hard to learn - even more so if one side is replaced by a chain of decisions. Each side gives a reward (for simplicity 0) followed by a Gaussian reward from one of $k$ actions. QL and DQL can both fail badly (each on one side) for the same parameter configuration. If $\mu_{1} > 0 > \mu_{2}$ , then the optimal action in $s_0$ is "left". If $\sigma_{2}$ (and/or $k_{2}$ ) is large compared to $\sigma_{1}$ (and/or $k_{1}$ ), then the overestimated $Q$ -values of QL will confuse the agent and lead it to believe that "right" is optimal. Similarly, underestimation can make the agent believe "down" is optimal. Our approach of locally learning to use QL or DQL (or another variant) can mitigate that problem by learning to use QL for one side and DQL for the other. The method is motivated by two theoretical results, a lower bound on overestimation (Proposition 2.1) and a computation with DRL connecting estimated sample variances and overestimation (Proposition 2.2). If step-sizes are chosen in the common way as $\alpha_{t}(s,a) = \frac{1}{T_{s,a}(t)}$ , with $T_{s,a}(t)$ the number of visits at $(s,a)$ up to time $t$ , then results on sums and maxima of Gaussian random variables can be used to prove a lower bound on the expected overestimation of the true value $\gamma \mu$ .
+
+Proposition 2.1. If the left side has been explored $Nk_{1}$ times and the exploration was sufficiently exploratory (see Theorem A.2), then the $Q$ -estimate at $(s_0, "left")$ has bias at least $\frac{\gamma}{\sqrt{\pi\log(2)}}\frac{\sigma_1\sqrt{\log(k_1)}}{\sqrt{N}}$ and analogously at $(s_0, "right")$ .
+
+The lower bound quantifies the idea that uncertainty forces overestimation and the bias only decreases slowly over time. A proof is given in Appendix A. Since the estimate is relatively tight one gets a feeling for how many reward samples
+
+are needed to get sufficiently precise estimates of the $Q$ -values so the agent makes the right decision.
+
+There has been considerable interest in recent years in understanding the sources of uncertainty in RL. While many sources of uncertainty exist, they are often categorized as either aleatoric or epistemic. Aleatoric uncertainty is given by the model and cannot be reduced by more data or learning; in RL this is the uncertainty implied by the random variables governing rewards and transitions. Epistemic uncertainty refers to uncertainty that could potentially be reduced using more data and better algorithms (including better function approximation). Epistemic uncertainty in RL is induced by all random variables used to run the learning procedure (exploration, replay buffer, etc.) and by function approximation in deep RL. Keeping the different sources of uncertainty in mind is useful in order to identify algorithmic potential for improvement but also theoretical limitations. It is also important to realize that aleatoric and epistemic randomness strongly influence each other: if a reward has large variance (aleatoric uncertainty), then estimation from samples creates more epistemic uncertainty. For estimating expectations, this is due to the central limit theorem.
+
+In deep RL one of the major sources of epistemic uncertainty is function approximation. As in (Thrun & Schwartz, 1993) we could add independent error-noise to our analysis to model the function approximation. If the error noise is assumed Gaussian then our results readily extend by adding the additional noise-variance. Since the modeling assumption of independent noise is rather special we refrain from studying the effect of function approximation on epistemic uncertainty. Instead, we now show how to use additional information from distributional QL to deal with overestimation in a local way.
+
+# 2.3. Tabular distributional $Q$ -learning
+
+To make this article as self-contained as possible we give a minimal overview of DRL. For a concise treatment we refer to the recent book (Bellemare et al., 2023). Given a Markov decision model and a stationary policy $\pi$ , (Rowland et al., 2018) define the return distribution function as
+
+$$
+\eta^ {\pi} (s, a) (B) := \mathbb {P} ^ {\pi} \left(\sum_ {t = 0} ^ {\infty} \gamma^ {t} R _ {t} \in B \mid S _ {0} = s, A _ {0} = a\right)
+$$
+
+for $B \in \mathcal{B}(\mathbb{R})$ . The expectation over the measure $\eta^{\pi}$ is the classical state-action value function $Q^{\pi}$ . In contrast to ordinary RL, the target in DRL is to learn return distributions instead of only return expectations. There have been a lot of theoretical articles on DRL (Bellemare et al., 2017; Dabney et al., 2018; Rowland et al., 2018; Lyle et al., 2019; Bellemare et al., 2023; Rowland et al., 2023b;a) establishing distributional Bellman operators, contractivity, convergence proofs of dynamic programming,
+
+etc. It was shown in (Bellemare et al., 2017; 2023) that the return distribution function is the unique solution to $\eta^{\pi} = T^{\pi}\eta^{\pi}$ , where $T^{\pi}:\mathcal{P}(\mathbb{R})^{S\times A}\to \mathcal{P}(\mathbb{R})^{S\times A}$ is the distributional Bellman operator defined as $(T^{\pi}\eta)(s,a) = \sum_{(r,s',a')\in \mathcal{R}\times S\times A}b_{r,\gamma}\# \eta (s',a')\pi (a';s')p(s',r;s,a)$ with bootstrap function $b_{r,\gamma}(z) = r + \gamma z$ and push-forward of measures $f\# \nu (B)\coloneqq \nu (f^{-1}(B))$ . In essence, sample-based dynamic programming can also be carried out in a distributional sense, replacing the classical Bellman operator by its distributional analogue. Distributional QL proceeds similarly to classical expectation-based QL:
+
+$$
+\eta (s, a) \leftarrow (1 - \alpha) \eta (s, a) + \alpha \left(b _ {r, \gamma} \# \eta \left(s ^ {\prime}, a ^ {*}\right)\right),
+$$
+
+with $a^{*} = \operatorname{argmax}_{a^{\prime}} Q(s^{\prime}, a^{\prime})$ , where $Q(s^{\prime}, a^{\prime})$ are the expectations of the probability measures $\eta(s^{\prime}, a^{\prime})$ . In order to work algorithmically with DRL parametrizations $\mathcal{F}$ of measures need to be used. Distributional learning algorithms in practice then work similarly to deep learning algorithms, alternating Bellman operators and function class projection, see Algorithm 1. There are two simple
+
+Algorithm 1 Distributional $Q$ -learning update step
+Require: Proxy $\eta$ for $\eta^{*}$ and pair $(s,a)$ to be updated
+Determine step-size $\alpha$
+Sample reward/next state $(r,s^{\prime})$
+# Compute target
+ $a^* \gets \mathrm{argmax}_a\,\mathbb{E}_{Z\sim \eta (s',a)}[Z]$
+ $\hat{\eta}_{*} \gets b_{r,\gamma} \# \eta (s',a^{*})$
+# Project target back onto support
+ $\hat{\eta} \gets \Pi_{\mathcal{F}}(\hat{\eta}_{*})$
+# Move $\eta (s,a)$ towards the target, for tabular RL e.g.
+ $\eta (s,a) \gets (1 - \alpha)\eta (s,a) + \alpha \hat{\eta}$
+
+parametrizations that have been used frequently: the categorical parametrization (a fixed number of atoms with variable weights at fixed locations) and the quantile parametrization (a fixed number of atoms with fixed weights but variable locations). For the categorical parametrization a set of $m$ evenly spaced locations $\theta_{1} < \dots < \theta_{m}$ needs to be fixed; the categorical measures are then defined by $\mathcal{F}_{C,m} = \left\{\sum_{i=1}^{m} p_{i} \delta_{\theta_{i}} \mid p_{i} \geq 0, \sum_{i=1}^{m} p_{i} = 1\right\}$ . That is, measures in $\mathcal{F}_{C,m}$ are parametrized by an $m$ -dimensional probability vector with weights for the $m$ fixed atoms. In contrast, the quantile parametrization $\mathcal{F}_{Q,m} = \left\{\sum_{i=1}^{m} \frac{1}{m} \delta_{\theta_{i}} : \theta_{i} \in \mathbb{R}\right\}$ fixes the weights to $\frac{1}{m}$ with variable atom locations.
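For concreteness, here is a sketch of how one might represent measures in $\mathcal{F}_{C,m}$ and project an arbitrary discrete measure back onto the fixed atom grid (the standard mass-splitting projection used in categorical DRL; function names and array shapes are our own assumptions):

```python
import numpy as np

def project_categorical(src_atoms, src_probs, theta):
    """Project sum_j q_j * delta_{z_j} onto the evenly spaced grid theta by
    distributing each mass q_j linearly between the two neighboring atoms."""
    dz = theta[1] - theta[0]
    p = np.zeros_like(theta)
    z = np.clip(src_atoms, theta[0], theta[-1])
    b = (z - theta[0]) / dz                      # fractional grid position
    lo = np.floor(b).astype(int)
    hi = np.minimum(lo + 1, len(theta) - 1)
    np.add.at(p, lo, src_probs * (1.0 - (b - lo)))  # unbuffered accumulation
    np.add.at(p, hi, src_probs * (b - lo))
    return p
```

A measure in $\mathcal{F}_{C,m}$ is then just a probability vector over `theta`; e.g. the push-forward $b_{r,\gamma}\#\eta$ can be projected via `project_categorical(r + gamma * theta, probs, theta)`.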
+
+# 2.4. Overestimation mitigation using distributional RL
+
+We follow the insight from Proposition 2.1 that large uncertainty, aleatoric (large $\sigma$ ) as well as epistemic (small $N$ ; additional $\sigma$ if function approximation is modeled as Gaussian error), implies large overestimation of $Q$ -values. While we skip function approximation in the theoretical considerations, we are as precise as possible in the simplest tabular settings. The estimate from Proposition 2.1 suggests replacing QL-updates at $(s,a)$ if the uncertainty is large (compared to other actions). Unfortunately, in standard QL the agent has no direct access to such information in order to adjust the update rule at $(s,a)$ . This is where our idea comes into play: DRL gives the agent exactly the needed information through distributional insight into the return estimate $\eta_t(s,a)$ . It is crucial to note that DRL learns random measures: $\eta(s,a)$ is a probability measure that depends on the random samples used in the updates, so it has an expectation and a variance that are both random in terms of those samples. The situation becomes tricky as we are going to take variances of the variances. To avoid confusion we use an analogy to statistics. We speak of sample averages $M$ (resp. sample variances $S^2$ ) of $\eta(s,a)$ and of expectation $\mathbb{E}$ (resp. variance $\mathbb{V}$ ) for integrals against the randomness induced by the probability space behind all random variables.
+
+We now use the bandit MDP from above to explain why the DRL agent has access to uncertainty estimates during the learning process. To allow concrete computations with distributions, we use a particularly simple QL update mechanism (the one also used for estimates in (van Hasselt, 2011)). First explore all actions $N$ times, then propagate to $(s_0, \text{"left"})$ . In fact, this is nothing but distributional QL with cyclic exploration and target-matrix trick (Mnih et al., 2015) as we explain in Appendix A. The obtained estimate of $\eta^*(s_0, \text{"left"})$ will be denoted by $\hat{\eta}(s_0, \text{"left"})$ .
+
+Proposition 2.2. The sample variance of $\hat{\eta}(s_0, "left")$ , analogously for "right", after $Nk_{1}$ steps is $\frac{\sigma_{1}^{2}}{N-1} \chi_{N-1}^{2}$ distributed. The expectation is $\sigma_{1}^{2}$ , the variance is $\frac{2 \sigma_{1}^{4}}{N-1}$ .
+
+A proof is given in Appendix A. It is surprising that the sample-variance distribution can be identified explicitly as chi-squared, unlike that of the sample expectations (maxima of independent Gaussians). This is a consequence of the well-known fact in statistics that sample variances of Gaussians are independent of sample expectations. Thus, if return estimates are Gaussian the max-operation of QL (with respect to sample averages) is only delicate for sample expectations, not for sample variances. We emphasize that, in contrast to QL, the agent in distributional QL does have access to the $\frac{\sigma^2}{N-1}\chi_{N-1}^2$ -distributed sample variance by computing sums over the atoms! Hence, the agent can make use of an unbiased estimate of the aleatoric uncertainty $\sigma^2$ , entangled with epistemic uncertainty that decreases with $N$ .
+
+Implications of our theoretical considerations: We propose the following locally adaptive overestimation mitigation method. (i) Use distributional QL. (ii) At every update compute the sample variance of the current return estimate $\eta_t(s, a)$ . (iii) If the sample variance is large, replace the QL update by another update (e.g. DQL, an ensemble update, or a mixture). Since "large" has no absolute meaning in RL we compare variances among all actions in $s$ and reduce according to the relative sample variance.
+
+Exploration and overestimation control with ensembles and double $Q$ -learning: Known algorithms that directly try to estimate the $Q$ -values better require setting up a particular algorithmic architecture. Most use an ensemble of $Q$ -copies which are then combined by taking minima (Lan et al., 2020), averages (Peer et al., 2021), or averages over random choices (Chen et al., 2021). Ensemble methods are promising in theory (assuming independent ensembles) but more problematic for deep RL, as storage constraints force the ensembles to be parametrized by the same neural network. The optimal number of copies (a hyperparameter) varies between environments; some choices work well, others fail.
+
+Our approach is different. We suggest using QL when it works well (small sample variance) and another algorithm where QL fails (large sample variance). We could combine QL with ensemble methods but for this article decided to combine QL with DQL, somewhat in the spirit of weighted DQL (Zhang et al., 2017) but with completely different weights - we compare to weighted DQL in Appendix C.2. For DQL (van Hasselt, 2010) two copies $Q^A$ and $Q^B$ are stored. The update mechanism is similar to QL, where the matrix to be updated is chosen randomly in every step. The main difference is the target used: the matrices $Q^A$ and $Q^B$ are flipped:
+
+$$
+\begin{array}{l} Q ^ {A / B} (s, a) \\ \leftarrow (1 - \alpha) Q ^ {A / B} (s, a) + \alpha \big (r + \gamma Q ^ {B / A} \big (s ^ {\prime}, z ^ {*} \big) \big), \\ \end{array}
+$$
+
+with $z^{*} = \operatorname{argmax}_{a^{\prime}}Q^{A / B}(s^{\prime},a^{\prime})$ . Here and in the following we use $Q^{A / B}$ to allow either the choice of $Q^{A}$ or $Q^{B}$ . Double $Q$ -learning strongly reduces the overestimation but sometimes leads to severe underestimation.
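A tabular DQL step can be sketched as follows (our own naming; the table to update is chosen by a fair coin as in (van Hasselt, 2010)):

```python
import numpy as np

def double_q_update(QA, QB, s, a, r, s_next, alpha, gamma, rng):
    """One double Q-learning step: one table selects z*, the other evaluates it."""
    if rng.random() < 0.5:
        z_star = int(np.argmax(QA[s_next]))   # A selects the action ...
        QA[s, a] += alpha * (r + gamma * QB[s_next, z_star] - QA[s, a])  # ... B evaluates
    else:
        z_star = int(np.argmax(QB[s_next]))   # B selects ...
        QB[s, a] += alpha * (r + gamma * QA[s_next, z_star] - QB[s, a])  # ... A evaluates
    return QA, QB
```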
+
+The use of the sample variance of estimated return distributions is not new to RL. For bandits the simplest example is the UCB exploration bonus for unknown variances. In the context of RL, uncertainty-dependent exploration has been applied using distributional RL (see for instance (Mavrin et al., 2019), (Moerland et al., 2018)). The use of sample variances in distributional RL to locally mitigate overestimation is, to the best of our knowledge, new. In order not to mix up effects we stick to the standard exploration choices.
+
+# 3. ADDQ: Tabular setting
+
+Based on the theoretical insight we will now introduce a concrete method. We use DQL-updates as an alternative to QL-updates when the agent expects QL-updates to be harmful (large estimated uncertainty by means of sample variance). The weighted DQL-approach we suggest is readily implemented into existing DRL implementations.
+
+# 3.1. Weighted DQL: From SARSA-trick to ADDQ
+
+To derive the algorithm let us recall the SARSA convergence proof of (Singh et al., 2000) that was also used to prove convergence for DQL (van Hasselt, 2010) and variants such as clipped $Q$ -learning (Fujimoto et al., 2018) or Maxmin (Lan et al., 2020). The idea is to add a clever zero, adding and subtracting what is missing from the QL update, thus writing the algorithm as QL with a bias. If the bias can be proved to vanish, a comparison to QL implies convergence to $Q^{*}$ :
+
+$$
+\begin{array}{c} Q^{A/B}(s, a) \leftarrow \overbrace{(1 - \alpha) Q^{A/B}(s, a) + \alpha \big(r + \gamma Q^{A/B}(s^{\prime}, z^{*})\big)}^{Q\text{-learning update}} \\ + \overbrace{\alpha \big(\gamma Q^{B/A}(s^{\prime}, z^{*}) - \gamma Q^{A/B}(s^{\prime}, z^{*})\big)}^{=:\, b^{A/B}(s, a)}, \end{array}
+$$
+
+with $z^{*} = \operatorname{argmax}_{a^{\prime}} Q^{A/B}(s^{\prime}, a^{\prime})$ . Taking this point of view there are plenty of possibilities to modify DQL to achieve more or less over-/underestimation. As an example, choosing the negative bias-terms $b_{\mathrm{clip}}^{A / B} = \alpha \min \{\gamma Q^{B / A}(s', z^{*}) - \gamma Q^{A / B}(s', z^{*}), 0\}$ yields so-called clipped $Q$ -learning, introduced as part of TD3 in (Fujimoto et al., 2018). As motivated in Section 2.4 we suggest a locally adaptive overestimation control. The main insight is as follows: one can locally interpolate between QL and DQL by multiplying the bias terms with locally adaptive weights:
+
+$$
+\bar{b}^{A/B}(s, a) := \underbrace{\beta^{A/B}(s, a)}_{\text{new}}\, b^{A/B}(s, a).
+$$
+
+Replacing the bias terms $b$ by $\bar{b}$ generalizes the update. The aggressive choice $\beta = 1$ results in QL updates (overestimation), $\beta = 0$ results in DQL updates (tendency towards underestimation). The choices of $\beta$ suggested in the present article are motivated by the propositions of Sections 2.2 and 2.4: if a lot of (relative) uncertainty is present, the algorithm uses small $\beta$ (more DQL), otherwise large $\beta$ (more QL). The aggressive choices are not necessarily the best; in our experiments below softer choices were more effective. Since overestimation is a priori not problematic for the learning process (if all estimates are equally overestimated the best action does not change, only skewed overestimation slows down the learning) we suggest a choice of $\beta$ that takes the local structure into account, normalizing variances over the possible actions.
+
+# 3.2. New algorithm: locally adaptive distributional double $Q$ -learning
+
+Following the ideas above we introduce ADDQ, integrating locally adaptive overestimation mitigation in DQL. The pseudo-code given in Algorithm 2 extends distributional RL pseudo-code to include the double algorithm and adaptive weights. For the tabular target update we follow the measure-mixture approach of (Rowland et al., 2018), Section 8. Changes to the code for other target updates (for
+
+instance gradient steps to minimize a KL loss) are straightforward. The algorithm is a combination of distributional QL and distributional DQL.
+
+# Algorithm 2 ADDQ update step
+
+Require: Proxies $\eta^A, \eta^B$ for $\eta^*$ , pair $(s, a)$ to be updated
+Determine step-size $\alpha$
+Sample reward/next state $(r, s')$
+Randomly choose Update(A) or Update(B)
+if Update(A) then
+# Compute target with locally adapted weight
+ $a^* \gets \operatorname{argmax}_a \mathbb{E}_{Z \sim \eta^A(s', a)}[Z]$
+Determine weight $\beta \in [0, 1]$ based on $\eta_{old}^A, \eta_{old}^B$
+ $\nu \gets (1 - \beta)\eta^B(s', a^*) + \beta\eta^A(s', a^*)$
+ $\hat{\eta}_*^A \gets b_{r,\gamma}\# \nu$
+# Project target back onto support
+ $\hat{\eta}^A \gets \Pi_{\mathcal{F}}(\hat{\eta}_*^A)$
+# Move $\eta(s, a)$ towards the target, for tabular RL e.g.
+ $\eta^A(s, a) \gets (1 - \alpha)\eta^A(s, a) + \alpha\hat{\eta}^A$
+end if
+if Update(B) then
+Proceed analogously with $A$ and $B$ exchanged
+end if
+
+Keeping $\beta$ constant at 1 yields distributional QL, constant at 0 distributional DQL. The key is to choose $\beta$ dependent on the uncertainty that drives the skewed QL overestimation for different actions. Extending arguments from the literature, notably the convergence proof of (Rowland et al., 2018) for categorical $Q$ -learning with stochastic approximation target update and the SARSA trick of (Singh et al., 2000), we prove convergence of Algorithm 2 for categorical measure parametrizations:
+
+Theorem 3.1. Let $\eta_0^A, \eta_0^B$ be initial return distribution functions supported within $[\theta_1, \theta_m]$ . If
+
+- rewards are bounded in $[R_{min}, R_{max}]$ and it holds $\left[\frac{R_{min}}{1 - \gamma}, \frac{R_{max}}{1 - \gamma}\right] \subseteq [\theta_1, \theta_m]$ ,
+- step-sizes fulfill the Robbins-Monro conditions and $\eta^A$ or $\eta^B$ are updated randomly,
+- the sequences $(\beta_{t}^{A})_{t\in \mathbb{N}},(\beta_{t}^{B})_{t\in \mathbb{N}}$ only depend on the past and fulfill $\lim_{t\to \infty}|\beta_t^A -\beta_t^B | = 0$ almost surely,
+
+then the induced $Q$ -values converge almost surely towards $Q^{*}$ . If additionally the MDP has a unique optimal policy $\pi^{*}$ , then $(\eta_t^A), (\eta_t^B)$ converge almost surely in $\bar{\ell}_2$ to some $\eta_C^* \in \mathcal{F}_{C,m}$ and the greedy policy with respect to $\eta_C^*$ is $\pi^{*}$ .
+
+According to Theorem 3.1 symmetric sequences $\beta^{A} = \beta^{B}$ that can depend on past distributions $\eta^{A},\eta^{B}$ yield convergence. A particularly simple choice compares the deviations in $\eta^{A / B}$ locally as a local measure for uncertainty.
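For illustration, the A-branch of the ADDQ update can be sketched in the tabular categorical setting as follows (a non-authoritative sketch: `project` stands for the projection $\Pi_{\mathcal{F}}$ onto the fixed atoms `theta` and is passed in, and all names and shapes are our own):

```python
import numpy as np

def addq_update_A(etaA, etaB, theta, s, a, r, s_next, alpha, beta, gamma, project):
    """ADDQ update of copy A (copy B is symmetric). beta = 1 recovers
    distributional QL, beta = 0 distributional double QL."""
    a_star = int(np.argmax(etaA[s_next] @ theta))   # greedy w.r.t. expectations
    # Mix the double-Q target with the Q-target using the local weight beta
    nu = (1.0 - beta) * etaB[s_next, a_star] + beta * etaA[s_next, a_star]
    target = project(r + gamma * theta, nu)         # b_{r,gamma} pushforward + projection
    etaA[s, a] = (1.0 - alpha) * etaA[s, a] + alpha * target
    return etaA
```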
+
+
+Figure 2. Grid world. First column: Biases summed over all state-action pairs. Other columns highlight the relation of sample variance (bottom) and the effect on $Q$ -value (top) estimation. For more experiments and comparisons to Maxmin, EBQL, REDQ see Appendix C.
+
+An exemplary choice of adaptive $\beta$ : As motivated earlier, QL suffers from overestimation in the presence of uncertainty (function approximation, aleatoric randomness, epistemic randomness), which is reflected in the sample variance (or other measures of the spread) of the discrete (random) distributions $\eta_t^{A / B}$ . As uniform overestimation is not particularly troubling (e.g. adding a constant to all $Q$ -values does no harm at all), the learning process is slowed down by differences in sample variances across allowed actions. There are many other possibilities to choose $\beta$ but we fixed this example for all our experiments. For a finite atomic measure $\nu = \sum_{i = 1}^{n}p_{i}\delta_{a_{i}}$ define the sample mean $M(\nu) = \sum_{i = 1}^{n}p_{i}a_{i}$ and the sample variance $S^2 (\nu) = \sum_{i = 1}^{n}p_i(a_i - M(\nu))^2$ . Now define
+
+$$
+S_{s,a}^{2} := \frac{1}{2}\left(S^{2}\big(\eta_{t}^{A}(s,a)\big) + S^{2}\big(\eta_{t}^{B}(s,a)\big)\right),
+$$
+
+$$
+S_{s}^{2} := \frac{1}{|\mathcal{A}|} \sum_{a \in \mathcal{A}} S_{s,a}^{2}, \quad \text{and} \quad S_{rel}^{2}(s, a) := \frac{S_{s,a}^{2}}{S_{s}^{2}}.
+$$
+
+According to our computations in the two-sided bandit model, evenly distributed relative sample variances (i.e. all values around 1) correspond to balanced overestimation effects. Otherwise, overestimation is unbalanced. We define locally adaptive weights for the next update at $(s,a)$ :
+
+$$
+\beta := \left\{ \begin{array}{ll} 0.75 & : S_{rel}^{2}(s,a) < 0.75 \\ 0.5 & : S_{rel}^{2}(s,a) \in [0.75, 1.25] \\ 0.25 & : S_{rel}^{2}(s,a) > 1.25 \end{array} \right. \tag{1}
+$$
+
+For algorithmic simplicity one could also use squared deviations from the median instead of the mean, as the median can be read off directly for measures in $\mathcal{F}_{C,m}$ and $\mathcal{F}_{Q,m}$ . This choice combines QL and DQL rather softly; more aggressive choices shorten the middle interval and move the outer values towards $\beta = 1$ and $\beta = 0$ . In practice (tabular, all Atari, all MuJoCo) the choice worked well without any tuning. The thresholds in the definition of $\beta$ can be seen as hyperparameters of the algorithm. We leave the development of adaptive thresholds for future research.
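The weights of Equation (1) are cheap to compute from the stored atoms. A sketch for the categorical case (we use the plain weighted variance of the discrete measures for $S^2$ , and the array layout and function names are our own assumptions):

```python
import numpy as np

def local_betas(etaA, etaB, theta, s, low=0.75, high=1.25):
    """Return the weight beta of Eq. (1) for every action in state s.
    etaX[s] holds one probability vector over the atoms theta per action."""
    def S2(p):                    # variance of the measure sum_i p_i delta_{theta_i}
        m = p @ theta
        return p @ (theta - m) ** 2
    S2_sa = 0.5 * (np.apply_along_axis(S2, 1, etaA[s])
                   + np.apply_along_axis(S2, 1, etaB[s]))
    S2_rel = S2_sa / (S2_sa.mean() + 1e-12)   # relative sample variance S^2_rel
    return np.where(S2_rel < low, 0.75, np.where(S2_rel > high, 0.25, 0.5))
```

A high relative sample variance yields a small $\beta$ (more DQL), a low one a large $\beta$ (more QL), matching Equation (1).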
+
+# 3.3. A grid world example
+
+To show the advantages of ADDQ we present a grid world that highlights, in a more complicated and more realistic way, the main effects of the bandit MDP. For more details, see Appendix C. In particular, in Appendix C.3 we compare ADDQ to other algorithms for overestimation reduction.
+
+Deep RL experiments presented below use essentially deterministic environments with uncertainty mainly from function approximation. We thus provide a grid world with complicated stochasticity that, depending on parameters, is hard for QL and DQL. The high-stochasticity region with low rewards confuses the QL agent: the agent strongly overestimates suboptimal $Q$ -values that lead into the gray area, and within the gray area $Q$ -values are overestimated for actions that stay there. Hence, all $\varepsilon$ -greedy exploration mechanisms spend much time in the gray area. On the other hand, DQL is motivated by estimators of iid random variables and underestimates strongly if action-values for different actions are unevenly distributed. Thus, the DQL agent gets confused by the local deviations caused by the fake goal and the stochastic region. What results in overestimation for QL results in underestimation for DQL. In contrast, ADDQ locally combines the update rules of QL and DQL to substantially reduce the estimation biases. In Appendix C we show experimentally that the choice of thresholds in $\beta$ is relatively harmless: more or less aggressive updates still perform well. In contrast, constant $\beta$ and the non-distributional choice from (Zhang et al., 2017) with $c = 10$ have larger biases.
+
Plots in Figure 3 show a few key insights; more are given in Appendix C. First, the estimation bias of ADDQ is much smaller than those of QL and DQL, most importantly in the complicated states 1, 4, 6, 7. The reason ADDQ estimates the $Q$ -values better is the adaptive choice between QL and DQL, depending on (relatively) large variances.
+
+
+
+Figure 3. Start in S, goal in G, fake goal in F, high stochasticity and low reward area in gray
+
+# 3.4. ADDQ adaptation for distributional DQN
+
Our DRL-based local adaptive overestimation mitigation can be integrated into existing code for DRL with a few extra lines. In the following we describe the integration into C51 (Bellemare et al., 2017). We run experiments on Atari environments from the Arcade Learning Environment (Bellemare et al., 2013) using the Gymnasium API (Towers et al., 2023). All algorithms are few-line modifications based on the RL Baselines3 Zoo (Raffin, 2020) training framework, without any further tuning of hyperparameters.
+
The C51 algorithm obtained its name from using a categorical representation of return distributions with $m = 51$ atoms. The probability weights of the parametrization are produced by feedforward neural networks following the DQN architecture (Mnih et al., 2015). The state $s$ serves as input and the last layer outputs $m = 51$ logits for each action, followed by a softmax to return probability weights. We write $\eta_{\omega}(s, a) = \sum_{i=1}^{m} p_i(s, a; \omega) \delta_{\theta_i}$ , where $\omega$ comprises the online network's weights. We add a bar, $\bar{\eta}$ , to denote a delayed target network, which is kept constant and overwritten with the parameters of the online network every, e.g., 10000 steps. The corresponding expectation is denoted by $Q_{\omega}(s, a) = \sum_{i=1}^{m} p_i(s, a; \omega) \theta_i$ . Given a transition $(s, a, r, s')$ the projected target is
+
+$$
+\hat {\eta} = \Pi_ {C} \left(b _ {r, \gamma} \# \eta_ {\bar {\omega}} \left(s ^ {\prime}, a ^ {*}\right)\right) =: \sum_ {i = 1} ^ {m} \hat {p} _ {i} \delta_ {\theta_ {i}},
+$$
+
where $a^* = \operatorname{argmax}_{a'} Q_{\bar{\omega}}(s', a')$ . Gradient descent on the weights with respect to some loss (here: cross-entropy) is used to move the distribution $\eta_{\omega}(s, a)$ towards the target distribution $\hat{\eta}$ . As in the tabular setting, algorithm variants with overestimation reduction intervene in the target definition. We keep track of two independently initialized online networks, denoted $\omega^A, \omega^B$ , and a pair of respective target networks $\bar{\omega}^A, \bar{\omega}^B$ . For each gradient step we simulate a vector of random variables of the same size as the batch, each element determining which of the two estimators is updated based on the transition at the same position in the batch. Accordingly, we use twice the batch size for these methods, so that on average the same number of transitions is used for each estimator per gradient step as in the single-estimator case. We can now describe how to modify the targets for a given transition $(s, a, r, s')$ . With the placeholder $\Gamma$ in
+
+$$
+\bar {\eta} ^ {A / B} = \Pi_ {C} (b _ {r, \gamma} \# \Gamma^ {A / B})
+$$
+
only the placeholder is modified for different algorithms:
+
+Double C51: Set $\Gamma^{A / B} = \eta_{\bar{\omega}^{B / A}}(s',z^*)$ with $z^{*} = \arg \max_{a'}Q_{\bar{\omega}^{A / B}}(s',a')$
+
Clipped C51: Inspired by (Fujimoto et al., 2018). Set $\Gamma^{A / B} = \eta_{\bar{\omega}^X}(s',z^*)$ with $z^{*} = \operatorname{argmax}_{a'}Q_{\bar{\omega}^{A / B}}(s',a')$ and $X = \operatorname{argmin}_{c\in \{A,B\}}Q_{\bar{\omega}^c}(s',z^*)$ .
+
+ADDQ (us): The ADDQ target uses
+
+$$
+\Gamma^ {A / B} = \beta \eta_ {\bar {\omega} ^ {A / B}} \left(s ^ {\prime}, z ^ {*}\right) + (1 - \beta) \eta_ {\bar {\omega} ^ {B / A}} \left(s ^ {\prime}, z ^ {*}\right),
+$$
+
+where $z^{*} = \operatorname{argmax}_{a^{\prime}}Q_{\bar{\omega}^{A / B}}(s^{\prime},a^{\prime})$ . The locally adaptive weights $\beta$ may depend on entire state-action return distributions. For the experiments we used the choice from Equation (1) based on the online networks $\eta_{\omega^A},\eta_{\omega^B}$ .
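A minimal NumPy sketch of the pieces above, assuming fixed atoms and given probability vectors. This is an illustrative reimplementation of the C51 projection and the ADDQ mixture, not the authors' code; network evaluation and the selection of $z^*$ are omitted.

```python
import numpy as np

def categorical_q_values(probs, atoms):
    """Expectation Q(s, a) = sum_i p_i(s, a) * theta_i for each action;
    probs has shape (num_actions, m)."""
    return probs @ atoms

def project_categorical(probs, atoms, r, gamma):
    """Push a categorical distribution through b_{r,gamma}(z) = r + gamma*z
    and project back onto the fixed atoms by splitting each shifted atom's
    mass linearly between its two neighbours (the C51 projection Pi_C)."""
    v_min, v_max = atoms[0], atoms[-1]
    dz = atoms[1] - atoms[0]
    out = np.zeros_like(probs)
    for p, z in zip(probs, atoms):
        tz = np.clip(r + gamma * z, v_min, v_max)  # shifted, clipped atom
        b = (tz - v_min) / dz                      # fractional grid index
        lo, hi = int(np.floor(b)), int(np.ceil(b))
        if lo == hi:
            out[lo] += p
        else:
            out[lo] += p * (hi - b)
            out[hi] += p * (b - lo)
    return out

def addq_categorical_target(p_a, p_b, beta, atoms, r, gamma):
    """ADDQ placeholder: mix the two target distributions atom-wise (they
    share the same support), then shift and project as usual."""
    return project_categorical(beta * p_a + (1.0 - beta) * p_b, atoms, r, gamma)
```

Since both target distributions live on the same atom grid, the mixture is a convex combination of probability vectors and remains a valid distribution before and after projection.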
+
Results for two Atari environments and an RLiable comparison (Agarwal et al., 2021) are presented in Figure 4 and more extensively in Appendix E. The experiments show that ADDQ is more stable than QL and DQL (it never fails completely) and achieves higher scores in different metrics.
+
+# 3.5. ADDQ adaptation for QRDQN
+
Modifications and results for quantile DRL (Dabney et al., 2018) are similar to the categorical setting. The setup is explained in Appendix D; for a quick view, two experiments and an RLiable plot for probability of improvement can be found in Figure 4. More experimental results are provided in Appendix E. ADDQ is more stable (it never fails completely) and achieves higher scores in different metrics.
+
+# 3.6. ADDQ adaptation for quantile SAC
+
Compared to Atari environments, it is known that overestimation in MuJoCo environments is more severe. Using the double estimator in critic estimation is not enough; the clipping trick of TD3 (Fujimoto et al., 2018) greatly improves performance. Recall that, in the formulation of biased $Q$ -learning of Section 3.1, clipping uses bias $\min \{Q^{B / A} - Q^{A / B},0\}$ instead of $Q^{B / A} - Q^{A / B}$ . Thus, using our distributional overestimation identification to combine QL and DQL does not hint towards an algorithm competitive with SAC/TD3 or algorithms with more refined overestimation control such as REDQ (Chen et al., 2021) or TQC (Kuznetsov et al., 2020). For completeness we still consider the ADDQ effect with the standard single and double estimator. We used the base implementations from Stable-Baselines3 to study SAC with quantile regression: with a single critic estimator (we call this QRSAC), with a double estimator, with a clipped double estimator (QB-SAC in the terminology of (Kuznetsov et al., 2020)), and with the ADDQ estimator (ADQRSAC). Experiments are shown in Figure 5 and more extensively in Appendix E.
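For concreteness, the clipping rule for quantile critics can be sketched as picking, at the chosen next action, the critic with the smaller mean and bootstrapping from its full set of quantiles, mirroring the Clipped C51 rule of Section 3.4. The helper below is a hypothetical illustration, not the Stable-Baselines3 API.

```python
import numpy as np

def clipped_target_quantiles(quantiles_a, quantiles_b):
    """TD3-style clipping adapted to quantile critics: bootstrap from the
    critic whose mean Q-value at the chosen next action is smaller."""
    if quantiles_a.mean() <= quantiles_b.mean():
        return quantiles_a
    return quantiles_b
```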
+
As expected, ADDQ improves QRSAC and double QRSAC but is not competitive with clipping. We leave experimenting with distributional-RL-based overestimation identification for REDQ and TQC to future work.
+
+
+
+
+
+
+
+
+Figure 4. Our method ADDQ (blue) compared to distributional DQN (orange), with double estimator (green) and clipped double estimator (red). First row with categorical, second row quantile parametrization. Learning curves are given for two environments, 8 more in Appendix E. We used 10 seeds. RLiable probability of improvement plots are for all 10 environments, more metrics in Appendix E.
+
+
+
+
+
+
+Figure 5. For completeness: Learning curves for two MuJoCo environments and RLiable probability of improvement on 5 environments. Adapting locally (blue, us) the double critic estimator (green) and direct estimator (orange) cannot (by far) reach performance of the clipping estimator (red). Nonetheless, ADDQ improves distributional direct and double critic estimators with almost no change to the code. Runs are averaged on 10 seeds, more details can be found in the Appendix E.
+
+
+
+
+
+# 4. Summary, limitations, and future work
+
Building on theoretical insight for a bandit MDP, we suggest using sample variances in distributional RL to mitigate the overestimation of QL. Our approach does not use a novel estimation procedure for $Q$ -values but instead combines known estimators and tries to use the better one. Using DRL, the agent in state-action pair $(s,a)$ has access to next-state information that predicts the overestimation of QL-updates and accordingly prefers QL or an alternative. The approach can be incorporated into different estimation methods (e.g. (random) ensemble methods, truncation methods). For the present article we decided to improve DQL, leading to our algorithm ADDQ. In contrast to DQ/DQL, the algorithm has the expected feature that it does not fail completely on some environments. Probability of improvement and normalized scores using the RLiable library show clear improvement of ADDQ over the underlying deep DQ/DQL.
+
While ADDQ improves QL and DQL in different settings, there is future work to be done. (i) It is clear that our probabilistic calculations can be extended; it is quite likely that results from Gaussian processes can be used for computations in more general settings. (ii) The exemplary choice of $\beta$ can be improved. It would be interesting to replace the hyperparameter thresholds by some adaptive learnable choice. (iii) The MuJoCo simulation study was only included for completeness; it would have been very surprising if a local adaptation of single and double critic estimates could improve on the clipped critic estimate (or even better algorithms such as REDQ or TQC). It is interesting future research to include local overestimation mitigation into REDQ (make the chosen ensemble number depend locally on sample variances) or TQC (make the number of truncated atoms depend locally on sample variances). (iv) Use sample variances to perform target-network updates after a non-constant number of steps.
+
+# Code
+
+The code used in our experiments can be found on GitHub: https://github.com/BommeHD/ADDQ.git.
+
+# Acknowledgement
+
+The authors acknowledge support by the state of Baden-Württemberg through bwHPC and the German Research Foundation (DFG) through grant INST 35/1597-1 FUGG.
+
+# Impact statement
+
This paper presents work whose goal is to advance the field of machine learning, in particular reinforcement learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
+
+# References
+
+Adler, R. J. and Taylor, J. E. Random fields and geometry. Springer Monographs in Mathematics. Springer, New York, 2007. ISBN 978-0-387-48112-8.
+Agarwal, R., Schwarzer, M., Castro, P. S., Courville, A. C., and Bellemare, M. Deep reinforcement learning at the edge of the statistical precipice. In Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P., and Vaughan, J. W. (eds.), Advances in Neural Information Processing Systems, volume 34, pp. 29304-29320. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper_files/paper/2021/file/f514cec81cb148559cf475e7426eed5e-Paper.pdf.
+Anschel, O., Baram, N., and Shimkin, N. Averaged-DQN: Variance reduction and stabilization for deep reinforcement learning. In Precup, D. and Teh, Y. W. (eds.), Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pp. 176-185. PMLR, 06-11 Aug 2017. URL https://proceedings.mlr.press/v70/anschel17a.html.
+Badia, A., Piot, B., Kapturowski, S., Sprechmann, P., Vitvitskyi, A., Guo, D., and Blundell, C. Agent57: Outperforming the atari human benchmark, 03 2020. URL http://arxiv.org/abs/2003.13350.
+Bellemare, M. G., Naddaf, Y., Veness, J., and Bowling, M. The arcade learning environment: an evaluation platform for general agents. J. Artif. Int. Res., 47(1):253-279, may 2013. ISSN 1076-9757.
Bellemare, M. G., Dabney, W., and Munos, R. A distributional perspective on reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML'17, pp. 449-458. JMLR.org, 2017.
+Bellemare, M. G., Dabney, W., and Rowland, M. Distributional Reinforcement Learning. MIT Press, 2023. http://www.distributional-rl.org.
+Bertsekas, D. P. and Tsitsiklis, J. N. Neuro-dynamic programming., volume 3 of Optimization and neural computation series. Athena Scientific, 1996. ISBN 1886529108.
+Castro, P. S., Moitra, S., Gelada, C., Kumar, S., and Bellemare, M. G. Dopamine: A Research Framework for Deep Reinforcement Learning. Preprint available on arXiv:1812.06110, 2018. URL http://arxiv.org/abs/1812.06110.
+Chen, X., Wang, C., Zhou, Z., and Ross, K. Randomized ensembled double q-learning: Learning fast without a model. Preprint available on arXiv:2101.05982, 2021. URL http://arxiv.org/abs/2101.05982.
+Dabney, W., Rowland, M., Bellemare, M. G., and Munos, R. Distributional reinforcement learning with quantile regression. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence and Thirtieth Innovative Applications of Artificial Intelligence Conference and Eighth AAAI Symposium on Educational Advances in Artificial Intelligence, AAAI'18/IAAI'18/EAAI'18. AAAI Press, 2018. ISBN 978-1-57735-800-8.
Dorka, N., Welschehold, T., Boedecker, J., and Burgard, W. Adaptively calibrated critic estimates for deep reinforcement learning. IEEE Robotics and Automation Letters, 8:624-631, 2021. URL https://api.semanticscholar.org/CorpusID:244527082.
Fernique, X. Régularité des trajectoires des fonctions aléatoires gaussiennes. In École d'Été de Probabilités de Saint-Flour, IV-1974, volume Vol. 480 of Lecture Notes in Math., pp. 1-96. Springer, Berlin-New York, 1975.
+Fujimoto, S., van Hoof, H., and Meger, D. Addressing function approximation error in actor-critic methods. Preprint available on arXiv:1802.09477, 2018. URL http://arxiv.org/abs/1802.09477.
+Ghasemipour, K., Gu, S. S., and Nachum, O. Why so pessimistic? estimating uncertainties for offline rl through ensembles, and why their independence matters. In Proceedings of the 36th International Conference on Neural Information Processing Systems, NIPS '22, Red Hook, NY, USA, 2022. Curran Associates Inc. ISBN 9781713871088.
+
+Haarnoja, T., Zhou, A., Abbeel, P., and Levine, S. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In International conference on machine learning, pp. 1861-1870. PMLR, 2018.
+Kuznetsov, A., Shvechikov, P., Grishin, A., and Vetrov, D. Controlling overestimation bias with truncated mixture of continuous distributional quantile critics. In Proceedings of the 37th International Conference on Machine Learning, ICML'20. JMLR.org, 2020.
+Lan, Q., Pan, Y., Fyshe, A., and White, M. Maxmin q-learning: Controlling the estimation bias of q-learning. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020. URL https://openreview.net/forum?id=Bkg0u3Etwr.
Lyle, C., Bellemare, M. G., and Castro, P. S. A comparative analysis of expected and distributional reinforcement learning. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01):4504-4511, Jul. 2019. doi: 10.1609/aaai.v33i01.33014504. URL https://ojs.aaai.org/index.php/AAAI/article/view/4365.
+Mavrin, B., Yao, H., Kong, L., Wu, K., and Yu, Y. Distributional reinforcement learning for efficient exploration. In Chaudhuri, K. and Salakhutdinov, R. (eds.), ICML, volume 97 of Proceedings of Machine Learning Research, pp. 4424-4434. PMLR, 2019. URL http://dblp.uni-trier.de/db/conf/icml/icml2019.html#MavrinYKWY19.
+Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., and Hassabis, D. Human-level control through deep reinforcement learning. Nature, 518 (7540):529-533, 2015. doi: 10.1038/nature14236. URL https://doi.org/10.1038/nature14236.
+Moerland, T., Broekens, J., and Jonker, C. The potential of the return distribution for exploration in rl, 06 2018. URL http://arxiv.org/abs/1806.04242.
Peer, O., Tessler, C., Merlis, N., and Meir, R. Ensemble bootstrapping for q-learning. In International Conference on Machine Learning, 2021. URL https://api.semanticscholar.org/CorpusID:232076148.
+Quan, J. and Ostrovski, G. DQN Zoo: Reference implementations of DQN-based agents, 2020. URL http://github.com/deepmind/dqn_zoo.
+
Raffin, A. RL Baselines3 Zoo. https://github.com/DLR-RM/rl-baselines3-zoo, 2020.
+Raffin, A., Hill, A., Gleave, A., Kanervisto, A., Ernestus, M., and Dormann, N. Stable-baselines3: Reliable reinforcement learning implementations. Journal of Machine Learning Research, 22(268):1-8, 2021. URL http://jmlr.org/papers/v22/20-1364.html.
Rowland, M., Bellemare, M. G., Dabney, W., Munos, R., and Teh, Y. W. An analysis of categorical distributional reinforcement learning. In Storkey, A. and Perez-Cruz, F. (eds.), Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, volume 84 of Proceedings of Machine Learning Research, pp. 29-37. PMLR, 09-11 Apr 2018. URL https://proceedings.mlr.press/v84/rowland18a.html.
+Rowland, M., Munos, R., Azar, M. G., Tang, Y., Ostrovski, G., Harutyunyan, A., Tuyls, K., Bellemare, M. G., and Dabney, W. An analysis of quantile temporal-difference learning. Preprint available on arXiv:2301.04462, 2023a. URL http://arxiv.org/abs/2301.04462.
+Rowland, M., Tang, Y., Lyle, C., Munos, R., Bellemare, M. G., and Dabney, W. The statistical benefits of quantile temporal-difference learning for value estimation. In Proceedings of the 40th International Conference on Machine Learning, ICML'23. JMLR.org, 2023b.
+Singh, S., Jaakkola, T., Littman, M., and Szepesvári, C. Convergence results for single-step on-policy reinforcement-learning algorithms. Machine Learning, 38(3):287-308, 2000. doi: 10.1023/A:1007678930559.
+Sudakov, V. N. Gaussian random processes, and measures of solid angles in Hilbert space. Dokl. Akad. Nauk SSSR, 197:43-45, 1971. ISSN 0002-3264.
+Sudakov, V. N. Geometric problems of the theory of infinite-dimensional probability distributions. Trudy Mat. Inst. Steklov., 141:191, 1976. ISSN 0371-9685.
+Sutton, R. S. and Barto, A. G. Reinforcement Learning: An Introduction. The MIT Press, second edition, 2018. URL http://incompleteideas.net/book/the-book-2nd.html.
+Thrun, S. and Schwartz, A. Issues in using function approximation for reinforcement learning. In Mozer, M., Smolensky, P., Touretzky, D., Elman, J., and Weigend, A. (eds.), Proceedings of the 1993 Connectionist Models Summer School, pp. 255-263. Lawrence Erlbaum, 1993. URL http://www.ri.cmu.edu/pub_files/pub1/thrun_sebastian_1993_1/thrun_sebastian_1993_1.pdf.
+
+Todorov, E., Erez, T., and Tassa, Y. Mujoco: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026-5033, 2012. doi: 10.1109/IROS.2012.6386109.
+Towers, M., Terry, J. K., Kwiatkowski, A., Balis, J. U., Cola, G. d., Deleu, T., Goulão, M., Kallinteris, A., KG, A., Krimmel, M., Perez-Vicente, R., Pierre, A., Schulhoff, S., Tai, J. J., Shen, A. T. J., and Younis, O. G. Gymnasium, March 2023. URL https://zenodo.org/record/8127025.
+Tsitsiklis, J. N. Asynchronous stochastic approximation and q-learning. Machine Learning, 16(3):185-202, 1994. doi: 10.1023/A:1022689125041. URL https://doi.org/10.1023/A:1022689125041.
van Handel, R. Probability in High Dimension. Princeton University, lecture notes, 2016.
+van Hasselt, H. Double q-learning. In Lafferty, J., Williams, C., Shawe-Taylor, J., Zemel, R., and Culotta, A. (eds.), Advances in Neural Information Processing Systems, volume 23. Curran Associates, Inc., 2010. URL https://proceedings.neurips.cc/paper_files/paper/2010/file/091d584fced301b442654dd8c23b3fc9-Paper.pdf.
van Hasselt, H., Guez, A., and Silver, D. Deep reinforcement learning with double q-learning. Preprint available on arXiv:1509.06461, 2015. URL http://arxiv.org/abs/1509.06461. AAAI 2016.
+van Hasselt, H. P. Insights in Reinforcement Learning: formal analysis and empirical evaluation of temporal-difference learning algorithms. PhD thesis, Universiteit Utrecht, January 2011. URL http://homepages.cwi.nl/~hasselt/papers/Insights_in_Reinforcement_Learning_Hado_van_Hasselt.pdf.
+Watkins, C. J. C. H. and Dayan, P. Q-learning. Machine Learning, 8(3):279-292, May 1992. ISSN 1573-0565. doi: 10.1007/BF00992698. URL https://doi.org/10.1007/BF00992698.
+Wu, Y., Zhai, S., Srivastava, N., Susskind, J. M., Zhang, J., Salakhutdinov, R., and Goh, H. Uncertainty weighted actor-critic for offline reinforcement learning. In Meila, M. and Zhang, T. (eds.), Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pp. 11319-11328. PMLR, 18-24 Jul 2021. URL https://proceedings.mlr.press/v139/wu21i.html.
+
Zhang, Z., Pan, Z., and Kochenderfer, M. J. Weighted double q-learning. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17, pp. 3455-3461, 2017. doi: 10.24963/ijcai.2017/483. URL https://doi.org/10.24963/ijcai.2017/483.
+
+# A. Theoretical backup from probability theory
+
Theoretical backup for all known overestimation reduction algorithms is rather weak; the update rule of $Q$ -learning makes precise computations very difficult. As we discuss below, the stochastic approximation rule requires computations with sums of random variables, which is not well compatible with computations of maxima of random variables. Justifications for overestimation reduction algorithms are typically based on qualitative arguments for the estimation bias of expectations $\mathbb{E}[\max\{X_1,\dots,X_K\}]$ of maxima of independent random variables, even though the random variables appearing in $Q$ -learning maxima are clearly not independent. We will give explicit estimates using probabilistic insights for sums and maxima of independent random variables.
+
We should make very clear that several different factors contribute to the overestimation problem, which are captured in the algorithmic approach of ADDQ. To provide some rigorous theoretical evidence, we focus in this section on the tabular setting without function approximation error. We refer the reader to the seminal article (Thrun & Schwartz, 1993) for some simplified computations on the overestimation effect caused by approximation errors.
+
+# A.1. A lower bound computation for the overestimation bias in an episodic bandit MDP (Proof of Proposition 2.1)
+
+
+
The two-sided bandit MDP from the main text can be analyzed side by side. Without loss of generality we thus study the overestimation on the left side, which is essentially the bandit MDP that has appeared frequently in the literature on overestimation of $Q$ -learning (see (van Hasselt, 2011), Example 6.7 of (Sutton & Barto, 2018), or (Lan et al., 2020)).
+
+Before stating our results on the overestimation bias we state a technical lemma that is based on the Sudakov-Fernique inequality, see (Sudakov, 1971; 1976; Fernique, 1975), which is a comparison inequality for Gaussian processes.
+
Lemma A.1. Let $k \in \mathbb{N}$ and suppose $(X_j^i)_{j \in \mathbb{N}, i \in \{1, \dots, k\}}$ are iid $\mathcal{N}(\mu, \sigma^2)$ . Further, let $n_1, \ldots, n_k \in \mathbb{N}$ be such that $n_1 + \dots + n_k \leq kn$ and set $\gamma := \max_{1 \leq i \neq i' \leq k} \left| 1/n_i + 1/n_{i'} - 2/n \right|$ . Then,
+
+$$
+\left| \mathbb {E} \left[ \max _ {1 \leq i \leq k} \frac {1}{n _ {i}} \sum_ {j = 1} ^ {n _ {i}} X _ {j} ^ {i} \right] - \mathbb {E} \left[ \max _ {1 \leq i \leq k} \frac {1}{n} \sum_ {j = 1} ^ {n} X _ {j} ^ {i} \right] \right| \leq \sqrt {2 \gamma \ln k}. \tag {2}
+$$
+
+In particular, if, additionally, $1 / n_{i} + 1 / n_{i^{\prime}}\geq 2 / n$ for all $i,i^{\prime}\in \{1,\ldots ,k\}$ with $i\neq i^{\prime}$ then
+
+$$
+\mathbb {E} \left[ \max _ {1 \leq i \leq k} \frac {1}{n} \sum_ {j = 1} ^ {n} X _ {j} ^ {i} \right] \leq \mathbb {E} \left[ \max _ {1 \leq i \leq k} \frac {1}{n _ {i}} \sum_ {j = 1} ^ {n _ {i}} X _ {j} ^ {i} \right]. \tag {3}
+$$
+
+Proof. Given $n_1, \ldots, n_k \in \mathbb{N}$ with $n_1 + \dots + n_k \leq kn$ , set $Y^i \coloneqq n^{-1} \sum_{j=1}^{n} X_j^i$ and $Z^i \coloneqq n_i^{-1} \sum_{j=1}^{n_i} X_j^i$ for any $i \in \{1, \ldots, k\}$ . Clearly, $Y = (Y^1, \ldots, Y^k)$ and $Z = (Z^1, \ldots, Z^k)$ are Gaussian vectors with $\mathbb{E}[Y^i] = \mathbb{E}[Z^i]$ for all $i \in \{1, \ldots, k\}$ and
+
+$$
\gamma_{i, i'}^{Y} := \mathbb{E}\left[\left(Y^{i} - Y^{i'}\right)^{2}\right] = 2/n \quad \text{and} \quad \gamma_{i, i'}^{Z} := \mathbb{E}\left[\left(Z^{i} - Z^{i'}\right)^{2}\right] = 1/n_{i} + 1/n_{i'}
+$$
+
for all $i, i' \in \{1, \dots, k\}$ with $i \neq i'$ . Thus, (2) is an immediate consequence of (Adler & Taylor, 2007, Theorem 2.2.5, Eq. (2.2.11)). If, in addition, $1/n_i + 1/n_{i'} \geq 2/n$ for all $i \neq i'$ , then $\gamma_{i,i'}^Y \leq \gamma_{i,i'}^Z$ and (3) follows from (Adler & Taylor, 2007, Theorem 2.2.5, Eq. (2.2.12)).
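Inequality (3) can be sanity-checked numerically. The sketch below uses arbitrary sample counts $n_i$ chosen so that $n_1 + \dots + n_k \leq kn$ and $1/n_i + 1/n_{i'} \geq 2/n$ for all pairs $i \neq i'$; it is a Monte Carlo illustration, not part of the proof.

```python
import numpy as np

# Monte Carlo check of inequality (3): with unequal sample counts n_i
# satisfying 1/n_i + 1/n_j >= 2/n pairwise and sum(n_i) <= k*n, the expected
# maximum of the unequal-sample averages dominates the equal-sample case.
rng = np.random.default_rng(4)
k, n, runs = 4, 10, 50000
n_i = np.array([5, 5, 5, 25])    # sum = 40 = k*n; all pairs satisfy >= 2/n

x = rng.normal(size=(runs, k, n_i.max()))
means_equal = x[:, :, :n].mean(axis=2)
means_unequal = np.stack(
    [x[:, i, :n_i[i]].mean(axis=1) for i in range(k)], axis=1
)

lhs = means_equal.max(axis=1).mean()     # E[max of equal-sample averages]
rhs = means_unequal.max(axis=1).mean()   # E[max of unequal-sample averages]
assert lhs <= rhs
```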
+
We are now in a position to give a lower bound on the overestimation bias for all exploration policies under the typical $\frac{1}{\# \text{visits}}$ -step-size schedule. This extends the overestimation analysis of (van Hasselt, 2011), which was based on a simple synchronous exploration mechanism. The step-size schedule is crucial for our probabilistic analysis as it turns the analysis into a computation with sums and maxima of independent random variables. The sum structure is a consequence of the simple fact that $z_{t} \coloneqq \frac{1}{t}\sum_{i = 1}^{t}a_{i - 1}$ solves the stochastic approximation recursion $z_{t + 1} = (1 - \alpha_t)z_t + \alpha_t a_t$ , $z_0 = 0$ , with $\alpha_{t} = \frac{1}{t + 1}$ . Here is a formal lower bound on the overestimation bias in the episodic bandit MDP. As mentioned above, we only consider exploration of the left side of the bandit MDP.
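The running-average fact is easy to verify numerically (an illustrative check, with an arbitrary input sequence):

```python
import numpy as np

# The stochastic-approximation recursion z_{t+1} = (1 - alpha_t) z_t +
# alpha_t * a_t with alpha_t = 1/(t+1) and z_0 = 0 reproduces the running
# average z_t = (1/t) * sum_{i=1}^{t} a_{i-1}.
rng = np.random.default_rng(1)
a = rng.normal(size=50)          # arbitrary sequence a_0, a_1, ...

z = 0.0
for t, value in enumerate(a):
    alpha = 1.0 / (t + 1)
    z = (1.0 - alpha) * z + alpha * value

assert np.isclose(z, a.mean())   # z equals the plain sample mean
```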
+
Theorem A.2. Suppose $Q_0$ is initialized as the zero matrix and step-sizes are chosen as $\alpha_t(s, a) = \frac{1}{T_{s,a}(t)}$ , with $T_{s,a}(t)$ the number of updates of $Q(s, a)$ . If rewards are $\mathcal{N}(\mu, \sigma^2)$ -distributed, then every sufficiently exploratory exploration rule leads to an overestimation bias of at least
+
+$$
\mathbb{E}\left[Q_{Nk}(s_{0}, \text{"left"})\right] - Q^{*}(s_{0}, \text{"left"}) \geq \frac{\gamma}{\sqrt{\pi \log(2)}} \frac{\sigma \sqrt{\log(k)}}{\sqrt{N}}.
+$$
+
By sufficiently exploratory we mean that $\frac{1}{n_a(t)} + \frac{1}{n_{a'}(t)} \geq \frac{2}{N}$ for all actions $a \neq a'$ , with $n_a(t) = T_{s_1,a}(t)$ .
+
+In simple words, the expected overestimation in $n$ episodes of $Q$ -learning on the left-side of the bandit MDP is at least of the order $\frac{\sigma\sqrt{k\log(k)}}{\sqrt{n}}$ . Higher variance and more actions obviously lead to stronger overestimation.
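The order of the bound can be illustrated with a short Monte Carlo simulation of the left side: each of the $k$ actions at $s_1$ is sampled $N$ times, followed by a single bootstrap to $(s_0, \text{"left"})$. All parameter values below are arbitrary illustrative choices.

```python
import numpy as np

# Monte Carlo illustration of Theorem A.2: sample each action at s1 N times,
# bootstrap once to (s0, "left"), and compare the bias with the lower bound.
rng = np.random.default_rng(2)
k, N, gamma, mu, sigma, runs = 8, 20, 1.0, 0.0, 1.0, 2000

estimates = np.empty(runs)
for j in range(runs):
    q_s1 = rng.normal(mu, sigma, size=(k, N)).mean(axis=1)  # N(mu, sigma^2/N)
    estimates[j] = gamma * q_s1.max()

bias = estimates.mean() - gamma * mu     # true Q*(s0, "left") = gamma * mu
lower = gamma * sigma * np.sqrt(np.log(k)) / np.sqrt(np.pi * np.log(2) * N)
assert bias >= lower
```

Increasing `sigma` or `k`, or decreasing `N`, visibly inflates the bias, matching the qualitative statement above.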
+
Proof. In contrast to (van Hasselt, 2011) we do not assume synchronous exploration of all actions in each episode. Instead, we give a comparison argument that allows us to reduce sufficiently exploratory exploration to simple cyclic exploration. The approach is motivated by the target network (here: target matrix) trick from DQN (Mnih et al., 2015). With target networks the update works as follows:
+
+$$
+Q _ {t + 1} (s, a) \leftarrow (1 - \alpha_ {t}) Q _ {t} (s, a) + \alpha_ {t} \big (r + \gamma \max _ {a ^ {\prime}} \bar {Q} _ {t} (s ^ {\prime}, a ^ {\prime}) \big),
+$$
+
where $\bar{Q}$ is kept constant for a fixed number of steps and then updated from the current $Q$ -matrix. Although the target network was introduced in DQN to reduce overfitting in function approximation, it also reduces the overestimation of $Q$ -values. We use the latter effect locally in this proof to construct a lower bound. The target matrix is kept constant (the zero matrix) at $s_0$ until the last update. We show that (i) this target matrix trick with cyclic exploration yields a lower bound for the overestimation bias of standard $Q$ -learning with a sufficiently exploratory exploration strategy and (ii) allows for computations using elements of probability theory. In what follows we denote by $X_i^a$ the reward obtained when playing $a$ for the $i$ th time. By assumption all $(X_i^a)$ are iid.
+
+Step 1: In the first step we show that using cyclic exploration (playing one action after the other) for $N$ rounds ( $Nk$ steps in total) followed by one update at $(s_0, \text{"left"})$ minimizes the overestimation of $Q$ -learning estimators $\hat{Q}^{cyc,tar}(s_0, \text{"left"})$ that explore each action at most $N$ times. For the claim we compare arbitrary $Q$ -learning with the cyclic variant. Recalling the $Q$ -learning update (with arbitrary exploration rule) and the step-size schedule shows that
+
+$$
Q_{t}(s_{1}, a) = \frac{1}{T_{s_{1},a}(t)} \sum_{j=1}^{T_{s_{1},a}(t)} X_{j}^{a} \sim \mathcal{N}\left(\mu, \frac{\sigma^{2}}{T_{s_{1},a}(t)}\right).
+$$
+
+Since $s_0$ is explored once per episode the step-size schedule of regular $Q$ -learning yields
+
+$$
Q_{Nk}(s_{0}, \text{"left"}) = \frac{1}{Nk} \sum_{t=1}^{Nk} \gamma \max_{a} Q_{t}(s_{1}, a)
+$$
+
for the $N$ th episode. We next show that $\mathbb{E}[Q_{Nk}(s_0, \text{"left"})] \geq \mathbb{E}[\hat{Q}^{cyc, tar}(s_0, \text{"left"})]$ . Denoting $n_k(t) = T_{s_1, a_k}(t)$ and using Lemma A.1 we obtain
+
+$$
\begin{aligned}
\mathbb{E}\left[Q_{Nk}(s_{0}, \text{"left"})\right] &= \mathbb{E}\left[\frac{1}{Nk} \sum_{t=1}^{Nk} \gamma \max\left\{Q_{t}(s_{1}, a_{1}), \dots, Q_{t}(s_{1}, a_{k})\right\}\right] \\
&= \frac{1}{Nk} \sum_{t=1}^{Nk} \gamma\, \mathbb{E}\left[\max\left\{\frac{1}{n_{1}(t)} \sum_{j=1}^{n_{1}(t)} X_{j}^{a_{1}}, \dots, \frac{1}{n_{k}(t)} \sum_{j=1}^{n_{k}(t)} X_{j}^{a_{k}}\right\}\right] \\
&\geq \frac{1}{Nk} \sum_{t=1}^{Nk} \gamma\, \mathbb{E}\left[\max\left\{\frac{1}{N} \sum_{j=1}^{N} X_{j}^{a_{1}}, \dots, \frac{1}{N} \sum_{j=1}^{N} X_{j}^{a_{k}}\right\}\right] \\
&= \gamma\, \mathbb{E}\left[\max\left\{\frac{1}{N} \sum_{j=1}^{N} X_{j}^{a_{1}}, \dots, \frac{1}{N} \sum_{j=1}^{N} X_{j}^{a_{k}}\right\}\right] \\
&= \mathbb{E}\left[\hat{Q}^{cyc, tar}(s_{0}, \text{"left"})\right].
\end{aligned}
+$$
+
Step 2: We now analyze the overestimation bias for $Q$ -learning with the target matrix trick after $Nk$ episodes. To deduce the claim we use a fact on the expectation of the maximum of independent $\mathcal{N}(\mu, \sigma^2)$ -distributed random variables:
+
+$$
+\mu + \frac {1}{\sqrt {\pi \log (2)}} \sigma \sqrt {\log (k)} \leq \mathbb {E} [ \max \{X _ {1}, \dots , X _ {k} \} ] \leq \mu + \sqrt {2} \sigma \sqrt {\log (k)}
+$$
+
The inequality can for instance be found in (van Handel, 2016). According to the step-size schedule the update is as follows. After $Nk$ steps of cyclic exploration every action has been played $N$ times, so $Q_{Nk}^{cyc,tar}(s_1,a) = \frac{1}{N}\sum_{i = 1}^{N}X_i^a$ for independent $\mathcal{N}(\mu ,\sigma^2)$ -distributed reward samples. Thus, $Q_{Nk}(s_1,a_1),\dots,Q_{Nk}(s_1,a_k)$ are iid and $\mathcal{N}(\mu ,\frac{\sigma^2}{N})$ -distributed. The final update after the $N$ episodes is to set
+
+$$
\hat{Q}^{cyc, tar}(s_{0}, \text{"left"}) = \gamma \max\{Q_{Nk}(s_{1}, a_{1}), \dots, Q_{Nk}(s_{1}, a_{k})\} \stackrel{(d)}{=} \gamma \max\{Z_{1}, \dots, Z_{k}\}
+$$
+
+for $Z_{1},\ldots ,Z_{k}$ iid $\mathcal{N}(\mu ,\frac{\sigma^2}{N})$ . The claim follows from the lower bound on the expectation of maxima of independent Gaussians.
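The two-sided bracket on the expected maximum used in Step 2 can also be checked by simulation (an illustrative Monte Carlo, with arbitrary parameter choices):

```python
import numpy as np

# Check mu + sigma*sqrt(log k)/sqrt(pi*log 2) <= E[max{X_1,...,X_k}]
#       <= mu + sqrt(2)*sigma*sqrt(log k) for iid N(mu, sigma^2) samples.
rng = np.random.default_rng(3)
k, mu, sigma = 16, 0.0, 1.0
samples = rng.normal(mu, sigma, size=(200000, k))
emax = samples.max(axis=1).mean()

lo = mu + sigma * np.sqrt(np.log(k)) / np.sqrt(np.pi * np.log(2))
hi = mu + np.sqrt(2.0) * sigma * np.sqrt(np.log(k))
assert lo <= emax <= hi
```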
+
The computations give a number of quantitative insights. First, it becomes very clear why explicit computations for $Q$ -learning are complicated. The stochastic approximation update is closely related to sums (exactly equal to sums for $\frac{1}{\# \text{visits}}$ step-sizes), while the update target involves a maximum. Unfortunately, sums and maxima of random variables do not get along well. We thus performed computations for Gaussian reward distributions, for which sums are Gaussian again. Maxima of Gaussian random variables (processes) are a well-studied field in mathematics and estimates can be derived.
+
+Finally, the computations show how variances influence the overestimation problem. Stochastic rewards with high variance are more problematic than rewards with small variance. In the next section we study distributional RL in the same simple problem.
+
+# A.2. On the use of variance in local overestimation control via distributional RL (Proof of Proposition 2.2)
+
+Our computations for the bandit MDP above suggest that uncertainty at next states (aleatoric uncertainty of rewards, epistemic uncertainty through small $N$) leads to larger overestimation. Thus, algorithms mitigating the overestimation problem locally at $(s, a)$ must somehow use uncertainty at next states $s'$. In the following proposition we analyze the distributional $Q$-learning update scheme (compare Section 8 of (Rowland et al., 2018))
+
+$$
+\eta (s, a) \leftarrow (1 - \alpha) \eta (s, a) + \alpha \left(b _ {r, \gamma} \# \eta \left(s ^ {\prime}, a ^ {*}\right)\right),
+$$
+
+with $a^* = \operatorname{argmax}_{a'} Q(s', a')$, where $Q(s', a')$ are the (random) sample means of the estimated return distributions $\eta(s', a')$. The bootstrap function is $b_{r, \gamma}(z) = r + \gamma z$ and $f \# \nu(B) := \nu(f^{-1}(B))$ denotes the push-forward of measures. Recall from the main text that distributional QL learns random measures: $\eta(s, a)$ itself is a probability measure that depends on the random samples used in the updates. As a probability measure $\eta(s, a)$ has an expectation and a variance that are both random in terms of the random samples. The situation becomes a bit tricky as we are going to take variances of the variances.
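As an illustration (a sketch on finite atom/weight representations, not the authors' implementation), the push-forward and the mixture update can be realized as follows:

```python
import numpy as np

def pushforward(r, gamma, atoms, weights):
    """(b_{r,gamma})#eta: map each atom z to r + gamma*z, weights unchanged."""
    return r + gamma * np.asarray(atoms), np.asarray(weights)

def mixture_update(atoms, weights, t_atoms, t_weights, alpha):
    """eta <- (1-alpha)*eta + alpha*target, as a mixture of the atom sets."""
    new_atoms = np.concatenate([atoms, t_atoms])
    new_weights = np.concatenate([(1 - alpha) * np.asarray(weights),
                                  alpha * np.asarray(t_weights)])
    return new_atoms, new_weights

# toy example: eta(s', a*) = delta_1, reward r = 0.5, gamma = 0.9
ta, tw = pushforward(0.5, 0.9, [1.0], [1.0])   # single atom at 1.4
a, w = mixture_update([0.0], [1.0], ta, tw, alpha=0.5)
print(np.dot(a, w))  # expectation of the updated measure: 0.7
```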
+
+To avoid confusion we use an analogy to statistics and speak of (random) sample averages $M$ (resp. sample variances $S^2$ ) of $\eta(s, a)$ and expectation $\mathbb{E}$ (resp. variance $\mathbb{V}$ ) for the integrals against the true randomness induced by the probability space behind all appearing random variables.
+
+Motivated by the previous section we only study cyclic exploration with "target measure", i.e. all actions in $s_1$ are explored $N$ times before bootstrapping the estimates to $s_0$ . Using the standard $\frac{1}{\# \text{visits}}$ -step-size schedule allows us to carry out fairly explicit computations using tools from probability theory. Most importantly, we can derive the exact distribution for the sample variance of $\eta(s_0, \text{"left"})$ , the state-action pair at risk for overestimation. The insight obtained from the computation motivates our ADDQ algorithm.
+
+Proposition A.3. Consider the bandit MDP introduced above with $\mathcal{N}(\mu, \sigma^2)$ reward distributions and $k$ actions. Denote by $\hat{\eta}^{cyc,tar}(s_0, \text{"left"})$ the return distribution of distributional $Q$-learning with cyclic exploration, step-size schedule $\alpha_t(s, a) = \frac{1}{T_{s,a}(t)}$, and "target distribution". More precisely, all arms are explored $N$ times before the first and only update at $(s_0, \text{"left"})$. It then follows that
+
+$$
+S^2\left(\hat{\eta}^{cyc,tar}(s_0, \text{"left"})\right) \sim \frac{\sigma^2}{N - 1} \chi_{N - 1}^2,
+$$
+
+where $\chi_n^2$ denotes the chi-squared distribution with $n$ degrees of freedom.
+
+It is interesting to note that while there is a simple distributional expression for the sample variance, there is no simple expression for the distribution of the sample mean $\max \left\{\frac{1}{N}\sum_{i = 1}^{N}X_{i}^{a_{1}},\dots,\frac{1}{N}\sum_{i = 1}^{N}X_{i}^{a_{k}}\right\}$, which is the maximum of independent Gaussians.
+
+Proof. For the proof we need to spell out the distributional $Q$-learning update and then use basic results from statistics for sample means and sample variances.
+
+Let us first understand the distributional update at the last step. Similar to scalar stochastic approximation schemes, the $\frac{1}{n + 1}$ -step-size schedule also gives the distributional estimator
+
+$$
+\hat {\eta} ^ {c y c, t a r} \left(s _ {1}, a\right) = \frac {1}{N} \sum_ {i = 1} ^ {N} \delta_ {X _ {i} ^ {a}} \tag {4}
+$$
+
+for the pre-terminal state $s_1$ after $N$ explorations of action $a$. To see why, write $\eta_n \coloneqq \frac{1}{n} \sum_{i=1}^{n} \delta_{X_i^a}$ to get
+
+$$
+\eta_ {n + 1} = \frac {1}{n + 1} \sum_ {i = 1} ^ {n + 1} \delta_ {X _ {i} ^ {a}} = \left(1 - \frac {1}{n + 1}\right) \frac {1}{n} \sum_ {i = 1} ^ {n} \delta_ {X _ {i} ^ {a}} + \frac {1}{n + 1} \delta_ {X _ {n + 1} ^ {a}} = (1 - \alpha_ {n}) \eta_ {n} + \alpha_ {n} \delta_ {X _ {n + 1} ^ {a}}
+$$
+
+and recall that there is no max-term in the update for pre-terminal states. Note that the equal weights $\frac{1}{N}$ are a consequence of the step-size schedule. To compute the return distribution at $s_0$ for action "left" we need to identify $a^*$. For that we compute the sample means of $\hat{\eta}^{cyc,tar}(s_1,a)$:
+
+$$
+\hat {Q} ^ {c y c, t a r} (s _ {1}, a) = \frac {1}{N} \sum_ {i = 1} ^ {N} X _ {i} ^ {a} \sim \mathcal {N} (\mu , \sigma^ {2} / N)
+$$
+
+Now we choose $a^* = \arg \max_a \hat{Q}^{cyc, tar}(s_1, a)$ and set
+
+$$
+\hat{\eta}^{cyc,tar}\left(s_0, \text{"left"}\right) = b_{0, \gamma} \# \hat{\eta}^{cyc,tar}\left(s_1, a^*\right) =: \gamma X.
+$$
+
+While we do not know much about $X$ (it is the empirical distribution of the set of Gaussians $X_1^a, \ldots, X_N^a$ with maximal sum) we know the exact distribution of the sample variance for the following reason. The sample variance of every $\hat{\eta}^{cyc, tar}(s_1, a)$ is nothing but what is called the sample variance of the iid observations $X_1^a, \ldots, X_N^a$ in statistics. If $M_a := \frac{1}{N} \sum_{i=1}^{N} X_i^a$ is the sample mean and $S_a^2 := \frac{1}{N-1} \sum_{i=1}^{N} (X_i^a - M_a)^2$ the sample variance, then it is well-known that
+
+- $\mathbb{E}[S_a^2] = \sigma^2$,
+- $S_a^2$ is $\frac{\sigma^2}{N - 1}\chi_{N - 1}^2$-distributed,
+- $M_a$ and $S_a^2$ are independent.
+
+The third property implies that if $S_{a_1}^2, \ldots, S_{a_k}^2$ are independent sample variances and $a^*$ is chosen to maximize the sample means, then still $S_{a^*}^2 \sim \frac{\sigma^2}{N - 1} \chi_{N - 1}^2$, even though $M_{a^*}$, the maximum of the sample means, is not distributed like a single $M_a$.
+
+As a consequence, even though we cannot compute the distribution of the sample mean of $\hat{\eta}^{cyc,tar}(s_0, \text{"left"})$ we can compute the distribution of its sample variance as $\frac{\sigma^2}{N-1}\chi_{N-1}^2$. The expectation is $\mathbb{E}[S_a^2] = \sigma^2$ while the variance is $\mathbb{V}(S_a^2) = \frac{2\sigma^4}{N-1}$. Note that one might be tempted to believe the sample variance is independent of the number $k$ of actions. This is not the case, as the total number $Nk$ of episodes needed to explore every arm $N$ times depends on $k$. Alternatively, one might denote the total number of episodes by $n$ and replace $N$ by $\lfloor \frac{n}{k} \rfloor$.
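The selection effect can again be checked by simulation (illustrative parameters): picking the arm with the maximal sample mean leaves the distribution of the selected sample variance unchanged.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, k, N, runs = 0.0, 2.0, 5, 20, 50_000  # illustrative parameters

X = rng.normal(mu, sigma, size=(runs, k, N))   # N reward samples per arm
M = X.mean(axis=2)                              # sample means M_a
S2 = X.var(axis=2, ddof=1)                      # sample variances S_a^2
a_star = M.argmax(axis=1)                       # arm with maximal sample mean
S2_star = S2[np.arange(runs), a_star]           # sample variance of the winner

# independence of M_a and S_a^2 makes the selection unbiased:
print(abs(S2_star.mean() - sigma**2) < 0.05)
```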
+
+We now come back to the motivation of our ADDQ algorithm, in particular the choice of $\beta$. Let us consider the two-sided bandit MDP with $\mathcal{N}(\mu_1,\sigma_1^2)$ (resp. $\mathcal{N}(\mu_2,\sigma_2^2)$) distributed rewards. Suppose $\mu_{1} > \mu_{2}$ are small but $\sigma_1^2\ll \sigma_2^2$ and/or $k_{1}\ll k_{2}$. The MDP is delicate, as the following can happen using QL (overestimation) and DQL (global overestimation reduction): in QL the agent will believe for a long time that "right" is the optimal action in $s_0$; in DQL the agent will believe for a long time that "down" is the optimal action, as both non-trivial $Q$-values can be underestimated to be negative. It is thus more reasonable to mitigate overestimation locally instead of globally. This is what ADDQ achieves through the choice of $\beta$.
+
+To use our explicit computations above, the agent explores both sides with cyclic exploration and target-matrix update. Now assume both sides have been explored with a total number of $n$ episodes each. The agent knows the estimates $\hat{\eta}(s_0, \text{"left"})$ (resp. $\hat{\eta}(s_0, \text{"right"})$) and the corresponding $Q$-values (sample means of return distributions) $\hat{Q}(s_0, \text{"left"})$ (resp. $\hat{Q}(s_0, \text{"right"})$). From the overestimation study of the previous section, the agent unwittingly overestimates the true $Q$-values by amounts of order $\frac{\sigma_1 \sqrt{k_1 \log(k_1)}}{\sqrt{n}}$ (resp. $\frac{\sigma_2 \sqrt{k_2 \log(k_2)}}{\sqrt{n}}$) and will believe "right" is the correct action. A smart agent should additionally take into account the sample variances, which are known to the agent and, as shown above, are of order $\sigma_1^2$ (resp. $\sigma_2^2$) with concentration depending on $k_1$ (resp. $k_2$). The local overestimation control of $\beta$ (based on relative sample variances) from (1) thus compares $\sigma_1^2$ to $\sigma_2^2$ and suggests to mitigate overestimation of $\hat{Q}(s_0, \text{"right"})$. In our algorithm we use double $Q$-learning to mitigate; the same idea can of course be integrated into other algorithms (such as changing the number of truncation atoms in truncated quantile critics (Kuznetsov et al., 2020)).
+
+# B. Experimental confirmation of theoretical results for two-sided bandit MDP
+
+We provide an experimental analysis of the two-sided bandit MDP for which we proved theoretical results on the overestimation. For reimplementation purposes we collect here all required information on the environment and the training for all plots.
+
+Environment:
+
+- $\gamma = 0.9$
+- $\mu_{1} = -0.1$ , $\mu_{2} = 0.1$ , $k_{2} = 5$ , $\sigma_{2} = 1$ ; the correct decision is thus moving to the right in the Start State.
+
+Distributional properties for ADDQ:
+
+- categorical parametrization, 51 atoms equally spaced on $[-3,3]$
+- initialization as $\delta_0$
+
+Algorithmic choices:
+
+- $\beta$ is chosen according to (1)
+- step-size schedules $\alpha_{t}(s,a) = \frac{1}{T_{s,a}(t)}$ , with $T_{s,a}(t)$ the number of visits in $(s,a)$ up to time $t$ , i.e. $\frac{1}{n}$ state-action wise counted
+- exploration: either $\epsilon$ -greedy with $\epsilon$ linearly decreasing from 1 to 0.1 in 10000 steps, then constant (E) or uniform random (U)
+
+Policy evaluation: every 500 steps the current greedy policy is evaluated for 3 steps; correct action rates indicate whether the greedy action coincides with the correct action.
+
+To demonstrate the proven proportionality of the overestimation of $Q$-values by QL in the number of arms and the variance on the left side, we keep one of the two values fixed and vary the other.
+
+- First experiment: $\sigma_{1} = 5$ fixed, iterating over $k_{1} = 5, 10, 15, 20$ , denoted as $K$ in the legend
+- Second experiment: $k_{1} = 10$ fixed, iterating over $\sigma_{1} = 2,4,6,8$ , denoted as $S$ in the legend
+
+The plots also demonstrate that
+
+- ADDQ is much better in terms of bias, leveraging local information given by the (relative) variances
+- the variances and relative variances at state 0 capture the real variances given by the MDP
+- although our theorems assumed a sufficiently exploratory policy, the results seem to generalize to the much more commonly used $\epsilon$ -greedy setting
+
+
+
+
+Different choices for number of arms on the left side with epsilon-greedy exploration
+
+Figure 6. Comparing ADDQ and QL on two-sided bandit MDP with different number of arms on the left side and different exploration settings. Legend: Q and ADDQ for $K = 5, 10, 15, 20$ with $\epsilon$-greedy (E) and uniform (U) exploration.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Figure 7. Comparing ADDQ and QL on two-sided bandit MDP with different variances on the left side and different exploration settings. Legend: Q and ADDQ for $S = 2, 4, 6, 8$ with $\epsilon$-greedy (E) and uniform (U) exploration.
+
+# C. Grid world details
+
+# C.1. Experiment from the main text
+
+For reimplementation purposes we collect here all required information on the environment and the training for all plots.
+
+Environment:
+
+- $\gamma = 0.9$
+- white low-stochasticity high-average-reward region: rewards $-0.05, +0.05$ with equal probabilities
+- gray high-stochasticity low-average-reward region: rewards $-2.1, +2$ with equal probabilities
+- goal: deterministic reward of 1, fake goal: deterministic reward of 0.65
+
+Distributional properties for CategoricalQ, CategoricalDouble, and ADDQ:
+
+- categorical parametrization, 51 atoms equally spaced on $[-3,3]$
+- initialization as $\delta_0$
+
+Algorithmic choices:
+
+- $\beta$ is chosen according to (1)
+- step-size schedule $\alpha_{t}(s,a) = \frac{1}{T_{s,a}(t)}$ , with $T_{s,a}(t)$ the number of visits in $(s,a)$ up to time $t$ , i.e. $\frac{1}{n}$ state-action wise counted,
+- exploration: $\varepsilon$ -greedy with $\varepsilon$ linearly decreasing from 1 to 0.1 in 10000 steps, then constant
+
+Policy evaluation: every 500 steps the current greedy policy is evaluated for 6 steps; correct action rates indicate whether the greedy action coincides with the correct action.
+
+
+Figure 8. Additional plots.
+
+For completeness we give plots pairing learning curves for $Q$ -values with the corresponding sample variance. We give all state-action combinations for the interesting states 1 and 4 next to the fake goal, 6 and 7 next to the region with high stochasticity, and 10 and 14 before leaving the region with high stochasticity.
+
+
+
+
+
+
+
+Finally, the following plot demonstrates that, in this GridWorld example, the relative variances are also strongly determined by the variances of the next state.
+
+
+
+# C.2. Ablation study
+
+In the following, on the same environment as before, we experiment with different choices for $\beta$ and compare with weighted DQL (Zhang et al., 2017) with $c = 10$ (WDQ), the standard choice from that paper. The names of the $\beta$ choices are composed as follows:
+
+- (Optional) First two letters: Left-tilted (lt), Right-tilted (rt)
+- First letter (third if a prefix is present): Neutral (n), Aggressive (a), Conservative (c)
+- Final digit: the number of intervals in the definition of $\beta$ (3 or 5)
+
+Aggressive, Conservative, and Neutral refer to the trade-off between no interpolation (just choosing which algorithm's update to take) and softening the interpolation, with Neutral lying in between and corresponding to the choice presented in the main body. Left- and Right-tilted refer to shifting the intervals into which the relative variance falls when choosing the interpolation coefficient. Left-tilted favors the Q update, Right-tilted the DQ update.
+
+The concrete choices are:
+
+$$
+\mathrm{n3}: \qquad \beta := \left\{ \begin{array}{ll} 0.75 & : S_{rel}^{2}(s,a) < 0.75 \\ 0.5 & : S_{rel}^{2}(s,a) \in [0.75, 1.25] \\ 0.25 & : S_{rel}^{2}(s,a) > 1.25 \end{array} \right.
+$$
+
+$$
+\mathrm{ltn3}: \qquad \beta := \left\{ \begin{array}{ll} 0.75 & : S_{rel}^{2}(s,a) < 1.25 \\ 0.5 & : S_{rel}^{2}(s,a) \in [1.25, 1.75] \\ 0.25 & : S_{rel}^{2}(s,a) > 1.75 \end{array} \right.
+$$
+
+$$
+\mathrm{rtn3}: \qquad \beta := \left\{ \begin{array}{ll} 0.75 & : S_{rel}^{2}(s,a) < 0.25 \\ 0.5 & : S_{rel}^{2}(s,a) \in [0.25, 0.75] \\ 0.25 & : S_{rel}^{2}(s,a) > 0.75 \end{array} \right.
+$$
+
+$$
+\mathrm{c3}: \qquad \beta := \left\{ \begin{array}{ll} 0.6 & : S_{rel}^{2}(s,a) < 0.6 \\ 0.5 & : S_{rel}^{2}(s,a) \in [0.6, 1.4] \\ 0.4 & : S_{rel}^{2}(s,a) > 1.4 \end{array} \right.
+$$
+
+$$
+\mathrm{ltc3}: \qquad \beta := \left\{ \begin{array}{ll} 0.6 & : S_{rel}^{2}(s,a) < 1.1 \\ 0.5 & : S_{rel}^{2}(s,a) \in [1.1, 1.9] \\ 0.4 & : S_{rel}^{2}(s,a) > 1.9 \end{array} \right.
+$$
+
+$$
+\mathrm{rtc3}: \qquad \beta := \left\{ \begin{array}{ll} 0.6 & : S_{rel}^{2}(s,a) < 0.1 \\ 0.5 & : S_{rel}^{2}(s,a) \in [0.1, 0.9] \\ 0.4 & : S_{rel}^{2}(s,a) > 0.9 \end{array} \right.
+$$
+
+$$
+\mathrm{a3}: \qquad \beta := \left\{ \begin{array}{ll} 1 & : S_{rel}^{2}(s,a) < 0.99 \\ 0.5 & : S_{rel}^{2}(s,a) \in [0.99, 1.01] \\ 0 & : S_{rel}^{2}(s,a) > 1.01 \end{array} \right.
+$$
+
+$$
+\mathrm{lta3}: \qquad \beta := \left\{ \begin{array}{ll} 1 & : S_{rel}^{2}(s,a) < 1.49 \\ 0.5 & : S_{rel}^{2}(s,a) \in [1.49, 1.51] \\ 0 & : S_{rel}^{2}(s,a) > 1.51 \end{array} \right.
+$$
+
+$$
+\mathrm{rta3}: \qquad \beta := \left\{ \begin{array}{ll} 1 & : S_{rel}^{2}(s,a) < 0.49 \\ 0.5 & : S_{rel}^{2}(s,a) \in [0.49, 0.51] \\ 0 & : S_{rel}^{2}(s,a) > 0.51 \end{array} \right.
+$$
+
+$$
+\mathrm{n5}: \qquad \beta := \left\{ \begin{array}{ll} 1 & : S_{rel}^{2}(s,a) \leq 0.25 \\ 0.75 & : S_{rel}^{2}(s,a) \in (0.25, 0.75) \\ 0.5 & : S_{rel}^{2}(s,a) \in [0.75, 1.25] \\ 0.25 & : S_{rel}^{2}(s,a) \in (1.25, 1.75) \\ 0 & : S_{rel}^{2}(s,a) \geq 1.75 \end{array} \right.
+$$
+
+$$
+\mathrm{ltn5}: \qquad \beta := \left\{ \begin{array}{ll} 1 & : S_{rel}^{2}(s,a) \leq 0.75 \\ 0.75 & : S_{rel}^{2}(s,a) \in (0.75, 1.25) \\ 0.5 & : S_{rel}^{2}(s,a) \in [1.25, 1.75] \\ 0.25 & : S_{rel}^{2}(s,a) \in (1.75, 2.25) \\ 0 & : S_{rel}^{2}(s,a) \geq 2.25 \end{array} \right.
+$$
+
+$$
+\mathrm{rtn5}: \qquad \beta := \left\{ \begin{array}{ll} 1 & : S_{rel}^{2}(s,a) \leq -0.25 \\ 0.75 & : S_{rel}^{2}(s,a) \in (-0.25, 0.25) \\ 0.5 & : S_{rel}^{2}(s,a) \in [0.25, 0.75] \\ 0.25 & : S_{rel}^{2}(s,a) \in (0.75, 1.25) \\ 0 & : S_{rel}^{2}(s,a) \geq 1.25 \end{array} \right.
+$$
+
+$$
+\mathrm{a5}: \qquad \beta := \left\{ \begin{array}{ll} 1 & : S_{rel}^{2}(s,a) \leq 0.99 \\ 0.75 & : S_{rel}^{2}(s,a) \in (0.99, 0.995) \\ 0.5 & : S_{rel}^{2}(s,a) \in [0.995, 1.005] \\ 0.25 & : S_{rel}^{2}(s,a) \in (1.005, 1.01) \\ 0 & : S_{rel}^{2}(s,a) \geq 1.01 \end{array} \right.
+$$
+
+$$
+\mathrm{c5}: \qquad \beta := \left\{ \begin{array}{ll} 0.5 & : S_{rel}^{2}(s,a) \in [0.7, 1.3] \\ 0.4 & : S_{rel}^{2}(s,a) \in (1.3, 1.9) \\ 0.3 & : S_{rel}^{2}(s,a) \geq 1.9 \end{array} \right.
+$$
+
+$$
+\mathrm{lta5}: \qquad \beta := \left\{ \begin{array}{ll} 1 & : S_{rel}^{2}(s,a) \leq 1.49 \\ 0.75 & : S_{rel}^{2}(s,a) \in (1.49, 1.495) \\ 0.5 & : S_{rel}^{2}(s,a) \in [1.495, 1.505] \\ 0.25 & : S_{rel}^{2}(s,a) \in (1.505, 1.51) \\ 0 & : S_{rel}^{2}(s,a) \geq 1.51 \end{array} \right.
+$$
+
+$$
+\mathrm{ltc5}: \qquad \beta := \left\{ \begin{array}{ll} 0.5 & : S_{rel}^{2}(s,a) \in [1.2, 1.8] \\ 0.4 & : S_{rel}^{2}(s,a) \in (1.8, 2.4) \\ 0.3 & : S_{rel}^{2}(s,a) \geq 2.4 \end{array} \right.
+$$
+
+$$
+\mathrm{rta5}: \qquad \beta := \left\{ \begin{array}{ll} 1 & : S_{rel}^{2}(s,a) \leq 0.49 \\ 0.75 & : S_{rel}^{2}(s,a) \in (0.49, 0.495) \\ 0.5 & : S_{rel}^{2}(s,a) \in [0.495, 0.505] \\ 0.25 & : S_{rel}^{2}(s,a) \in (0.505, 0.51) \\ 0 & : S_{rel}^{2}(s,a) \geq 0.51 \end{array} \right.
+$$
+
+$$
+\mathrm{rtc5}: \qquad \beta := \left\{ \begin{array}{ll} 0.5 & : S_{rel}^{2}(s,a) \in [0.2, 0.8] \\ 0.4 & : S_{rel}^{2}(s,a) \in (0.8, 1.4) \\ 0.3 & : S_{rel}^{2}(s,a) \geq 1.4 \end{array} \right.
+$$
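For reference, piecewise-constant schedules of this kind can be generated by a single hypothetical helper; note that the helper assigns boundary points to the upper piece, which differs from the closed middle intervals above exactly at the cut-points:

```python
def make_beta(cutpoints, values):
    """Piecewise-constant beta(S^2_rel): values[i] below cutpoints[i],
    values[-1] above the last cut-point (boundaries go to the upper piece)."""
    def beta(s2_rel):
        for c, v in zip(cutpoints, values[:-1]):
            if s2_rel < c:
                return v
        return values[-1]
    return beta

beta_n3 = make_beta([0.75, 1.25], [0.75, 0.5, 0.25])     # neutral, 3 pieces
beta_ltn3 = make_beta([1.25, 1.75], [0.75, 0.5, 0.25])   # left-tilted
beta_n5 = make_beta([0.25, 0.75, 1.25, 1.75], [1, 0.75, 0.5, 0.25, 0])

print(beta_n3(1.0), beta_ltn3(1.0), beta_n5(1.9))  # 0.5 0.75 0
```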
+
+Figure 9. Ablation study plots regarding bias improvement. State 1 is adjacent to the fake goal, state 6 adjacent to the stochastic region.
+
+
+
+
+
+
+
+
+
+It turns out that
+
+- the choice of thresholds in $\beta$ (a hyperparameter of the algorithm) is harmless, results do not vary a lot,
+- conservative choices seem to work especially well,
+- weighted double Q-learning, an algorithm similar to ADDQ with a non-distributional choice of $\beta$, is improved by our locally adaptive distributional RL based choice of $\beta$.
+
+# C.3. Comparison to other algorithms
+
+We compared ADDQ with other bias reduction methods on the same GridWorld environment, using the hyperparameters from the respective papers:
+
+- Maxmin with 2, 4, 6, and 8 ensembles (Lan et al., 2020)
+- Ensemble Bootstrapped QL (EBQL) with 3, 7, 10, and 15 ensembles (Peer et al., 2021)
+- Randomized Ensemble DQL (REDQ) with 3 and 5 ensembles and random update subsets of size 1 and 2 (Chen et al., 2021)
+
+It turns out that ADDQ decreases the estimation bias more strongly than those algorithms while only using two ensembles.
+
+
+Figure 10. Comparison to MaxMin. Panels: policy correctness rates at states 1, 4, 6, and 7; correct action rates at epochs; summed total (squared) biases; scores and lengths of evaluations and epochs; $Q$-function values and sample variances next to the fake goal for all actions at states 1, 4, 6, and 7. Legend: CategoricalQ, CategoricalDouble, ADDQ (us), Maxmin2, Maxmin4, Maxmin6, Maxmin8.
+
+
+Figure 11. Comparison to EBQL. Panels: policy correctness rates at states 1, 4, 6, and 7; correct action rates at epochs; summed total (squared) biases; scores and lengths of evaluations and epochs; $Q$-function values and sample variances next to the fake goal for all actions at states 1, 4, 6, and 7. Legend: CategoricalQ, CategoricalDouble, ADDQ (us), EBQL3, EBQL7, EBQL10, EBQL15.
+
+
+Figure 12. Comparison to REDQ. Panels: policy correctness rates at states 1, 4, 6, and 7; correct action rates at epochs; summed total (squared) biases; scores and lengths of evaluations and epochs; $Q$-function values and sample variances next to the fake goal for all actions at states 1, 4, 6, and 7. Legend: CategoricalQ, CategoricalDouble, ADDQ (us), REDQ(3,1), REDQ(3,2), REDQ(5,1), REDQ(5,2).
+
+# D. ADDQ adaptation for QRDQN - setup
+
+To avoid too much duplication with the main text we moved the adaptation of ADDQ to the quantile setup (Dabney et al., 2018) to the appendix. In this section we explain how to adapt the ADDQ idea to QRDQN; experimental results are provided in the next section.
+
+The categorical approach has multiple disadvantages; most notably, the rewards and the fixed atom positions must be compatible. The categorical algorithm was included in the main text to keep the notation simple. For quantile distributional RL the return distributions are parametrized by
+
+$$
+\mathcal {F} _ {Q, m} = \Big \{\sum_ {i = 1} ^ {m} \frac {1}{m} \delta_ {\theta_ {i}}: \theta_ {i} \in \mathbb {R} \Big \}.
+$$
+
+In contrast to the categorical setting, the positions of the atoms are not fixed, but the weights of all atoms are equal. The computation of the target is the same as in the categorical version (except using the projection onto $\mathcal{F}_{Q,m}$ instead of $\mathcal{F}_{C,m}$). The update step is a gradient step towards the Wasserstein projection onto $\mathcal{F}_{Q,m}$ of the target distribution $\hat{\eta}$, that is, a gradient step in the quantile Huber loss minimization:
+
+$$
+\min _ {\hat {\theta} _ {1} ^ {A} (s, a), \dots , \hat {\theta} _ {m} ^ {A} (s, a)} \sum_ {i = 1} ^ {m} \mathbb {E} _ {Z \sim \hat {\eta}} [ \rho_ {\tau_ {i}} ^ {\kappa} (Z - \hat {\theta} _ {i} ^ {A} (s, a)) ],
+$$
+
+with quantile mid-points $\tau_{i} = \frac{2i - 1}{2m}$ and
+
+$$
+\rho_ {\tau} ^ {\kappa} (u) = \left\{ \begin{array}{l l} | \tau - \mathbf {1} _ {u < 0} | \frac {1}{2} u ^ {2} & : | u | \leq \kappa \\ | \tau - \mathbf {1} _ {u < 0} | \kappa (| u | - \frac {1}{2} \kappa) & : | u | > \kappa \end{array} \right..
+$$
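A direct NumPy transcription of $\rho_\tau^\kappa$ (a sketch, not a reference implementation):

```python
import numpy as np

def quantile_huber(u, tau, kappa=1.0):
    """rho^kappa_tau(u): Huber loss weighted asymmetrically by tau."""
    weight = np.abs(tau - (u < 0))
    huber = np.where(np.abs(u) <= kappa,
                     0.5 * u**2,
                     kappa * (np.abs(u) - 0.5 * kappa))
    return weight * huber

# under- and overshooting by the same amount is penalized asymmetrically:
print(quantile_huber(np.array([-0.5, 0.5]), tau=0.9))  # [0.0125 0.1125]
```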
+
+The quantile $Q$ -learning modification of classical DQN (Mnih et al., 2015) is called QRDQN. Using the same network architecture as C51, QRDQN approximates return distributions using the quantile representation. Therefore the last layer outputs the $m$ quantile locations for each action. In the quantile setup we write $\eta_{\omega}(s,a) = \frac{1}{m}\sum_{i=1}^{m}\delta_{\theta_i(s,a;\omega)}$ with induced mean values $Q_{\omega}(s,a) = \frac{1}{m}\sum_{i=1}^{m}\theta_i(s,a;\omega)$ . Given a sample transition $(s,a,r,s')$ , the network parameters are updated via gradient descent with respect to the loss function
+
+$$
+\mathcal {L} (\omega) = \frac {1}{m} \sum_ {i, j = 1} ^ {m} \rho_ {\tau_ {i}} ^ {1} \left(r + \gamma \theta_ {j} \left(s ^ {\prime}, z ^ {*}; \bar {\omega}\right) - \theta_ {i} (s, a; \omega)\right), \quad z ^ {*} = \operatorname {a r g m a x} _ {a ^ {\prime}} Q _ {\bar {\omega}} \left(s ^ {\prime}, a ^ {\prime}\right), \tag {5}
+$$
+
+and the quantile mid-points $\tau_{i} = \frac{2i - 1}{2m}$.
+
+In what follows we turn three known double variants of DQN with overestimation reduction into QRDQN variants and compare them to our algorithms on several Arcade environments. We use double DQN (van Hasselt et al., 2015), a $Q$-learning adaptation of the clipping trick included in TD3 (Fujimoto et al., 2018), a quantile version of our ADDQ algorithm, and an additional variant of ADDQ. Using the Stable-Baselines3 framework (Raffin et al., 2021) there is very little that must be modified. Return distributions $\eta^A$ and $\eta^B$ are parametrized with two independently initialized neural networks denoted by $\omega^A$ and $\omega^B$. As in DQN we use delayed target networks, one for $A$ and one for $B$, indicated with an additional bar. For each gradient step we simulate a vector of random variables of the same size as the batch, each element determining which of the two estimators is updated based on the transition at the same position in the batch. Accordingly, we use twice the batch size for these methods, so that on average, per gradient step, the same number of transitions is used for each estimator as in the single-estimator case. The only difference between the algorithms is the target used to update the neural networks.
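The per-transition estimator selection can be sketched as follows (illustrative, not the actual Stable-Baselines3 code; all names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
batch_size = 32
# draw twice the batch size, one Bernoulli draw per transition deciding
# which estimator (A or B) is updated with that transition
mask = rng.random(2 * batch_size) < 0.5
batch_A = np.where(mask)[0]    # transition indices used for the A update
batch_B = np.where(~mask)[0]   # transition indices used for the B update

# on average each estimator sees batch_size transitions per gradient step
print(len(batch_A) + len(batch_B) == 2 * batch_size)  # True
```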
+
+Similar to modifying C51 implementations we only modify the target return distributions $b_{r,\gamma} \# \eta_{\overline{\omega}}(s',z^{*})$ for an appropriate action $z^{*}$ . In the quantile setup those are given by the locations of their atoms:
+
+$$
+\Gamma = \{r + \gamma \theta_j(s', z^*; \bar{\omega}) : j = 1, \dots, m\}
+$$
+
+We again use the compact $A / B$ notation to indicate how the update applies for $\Gamma^A$ and $\Gamma^B$ .
+
+Double QRDQN: $\Gamma^{A / B} = \eta_{\bar{\omega}^{B / A}}(s',z^*)$ , where $z^{*} = \mathrm{argmax}_{a^{\prime}}Q_{\bar{\omega}^{A / B}}(s^{\prime},a^{\prime})$
+
+Clipped QRDQN: $\Gamma^{A / B} = \eta_{\bar{\omega}^X}(s',z^*)$ , where $z^{*} = \operatorname{argmax}_{a'} Q_{\bar{\omega}^{A / B}}(s',a')$ and $X = \operatorname{argmin}_{c\in \{A,B\}}Q_{\bar{\omega}^c}(s',z^*)$ .
+
+ADDQ (us): $\Gamma^{A / B} = \beta \eta_{\bar{\omega}^{A / B}}(s',z^*) + (1 - \beta)\eta_{\bar{\omega}^{B / A}}(s',z^*)$, where $z^{*} = \operatorname{argmax}_{a'} Q_{\bar{\omega}^{A / B}}(s',a')$. The weights $\beta = \beta(s,a;\omega)$ are essentially arbitrary and can depend on the current estimated return distributions $\eta^{A}$ and $\eta^{B}$. For the experiments we take the same choice (1) as used for the tabular and the categorical settings.
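The three targets can be sketched on plain arrays; `quantile_targets`, `theta_A`, and `theta_B` are hypothetical names, and the ADDQ mixture of two measures is represented by concatenating the atom sets with reweighted masses:

```python
import numpy as np

def quantile_targets(r, gamma, theta_A, theta_B, beta):
    """Target (atoms, weights) for updating estimator A (swap A/B for B).
    theta_X[a, j] is the j-th quantile location of action a at s'."""
    m = theta_A.shape[1]
    q_A, q_B = theta_A.mean(axis=1), theta_B.mean(axis=1)
    z = int(q_A.argmax())                         # z* = argmax_a Q_A(s', a)
    # Double: bootstrap from the other estimator at z*
    double = (r + gamma * theta_B[z], np.full(m, 1 / m))
    # Clipped: bootstrap from the estimator with the smaller Q-value at z*
    x = theta_A if q_A[z] <= q_B[z] else theta_B
    clipped = (r + gamma * x[z], np.full(m, 1 / m))
    # ADDQ: mixture beta*eta_A + (1-beta)*eta_B at z*
    addq = (r + gamma * np.concatenate([theta_A[z], theta_B[z]]),
            np.concatenate([np.full(m, beta / m), np.full(m, (1 - beta) / m)]))
    return double, clipped, addq

theta_A = np.array([[0.0, 1.0], [2.0, 3.0]])
theta_B = np.array([[0.5, 0.5], [1.0, 2.0]])
d, c, a = quantile_targets(0.0, 1.0, theta_A, theta_B, beta=0.5)
print(np.dot(*a))  # mixture expectation 0.5*2.5 + 0.5*1.5 = 2.0
```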
+
+Experimental results are presented in the next section.
+
+# E. Deep reinforcement learning experiments
+
+To ensure fair comparison we modified the algorithms C51 (Bellemare et al., 2017) and QRDQN (Dabney et al., 2018) within the Stable-Baselines3 framework (Raffin et al., 2021). The C51 implementation has been added to this framework by adapting from the Dopamine framework (Castro et al., 2018) and the DQN Zoo (Quan & Ostrovski, 2020). We run Atari environments from the Arcade Learning Environment (Bellemare et al., 2013) and MuJoCo (Todorov et al., 2012) environments, both using the Gymnasium API (Towers et al., 2023). We run the experiments via the RL Baselines3 Zoo (Raffin, 2020) training framework.
+
+The experiments were executed on a HPC cluster with NVIDIA Tesla V100 and NVIDIA A100 GPUs. The replay buffer on Atari environments takes around 57GB of memory and less than 7 GB of memory for MuJoCo environments.
+
+For the experiments the training has been interrupted every 50000 steps and 10 evaluation episodes on 10 evaluation environments without exploration have been performed. The plots below show the mean total reward (sum of all rewards) averaged over 10 seeds with standard errors across seeds as the shaded regions. To improve visibility a rolling window of size 4 is applied. Atari runs took less than 48 hours for 20 million train steps and periodic evaluations, and MuJoCo runs less than 36 hours for 10 million train steps (Humanoid) and periodic evaluations. Note that one timestep in the Atari environments corresponds to 4 frames, which are stacked together; this corresponds to repeating every action 4 times in the actual game. Therefore 20 million timesteps correspond to 80 million frames. Additionally, a small ablation study comprising 10 seeds on one evaluation environment with some of the choices of $\beta$ detailed in Appendix C.2 has been conducted.
+
+# E.1. Full experimental results for the categorical parametrization
+
+As in (Bellemare et al., 2017) we use 51 atoms for all C51 variants.
+
+
+Figure 13. Learning curves on 10 Atari environments, averaged over 10 seeds
+
+
+
+
+
+We additionally provide plots using the RLiable library (Agarwal et al., 2021).
+
+
+Figure 14. RLiable probability of improvement plot
+
+
+Figure 15. RLiable human normalized scores plot (based on (Badia et al., 2020))
+
+# E.2. Full experimental results for the quantile parametrization
+
+
+
+
+
+Figure 16. Learning curves on 10 Atari environments, averaged over 10 seeds. Legend: ADDQ (us), QRDQN, DOUBLE QRDQN, CLIP QRDQN.
+
+
+
+
+
+
+Figure 17. RLiable probability of improvement plot
+
+
+Figure 18. RLiable human normalized scores plot (based on (Badia et al., 2020))
+
+# E.3. Full experimental results for the actor critic setting
+
+ADQRSAC (us) QRSAC DOUBLE QRSAC CLIP QRSAC
+
+
+Figure 19. Learning curves on 5 MuJoCo environments, averaged over 10 seeds
+
+
+Figure 20. RLiable probability of improvement plot
+
+
+Figure 21. RLiable normalized scores plot, based on highest and lowest performance on each environment
+
+# F. Convergence proof in the tabular categorical setup
+
+In this section we give a convergence proof for the adaptive distributional double-Q algorithm in the simplest setting, the categorical setting. The proof is based on known arguments from the literature and requires some modifications to work in our generality. Since many papers only sketched proofs we decided to spell out all details.
+
+Remark F.1 (Notation and short recap). The Cramer distance $\ell_2$ for probability distributions $\nu, \nu' \in \mathbb{P}(\mathbb{R})$ is given by
+
+$$
+\ell_ {2} (\nu , \nu^ {\prime}) = \left(\int_ {\mathbb {R}} \left| F _ {\nu} (z) - F _ {\nu^ {\prime}} (z) \right| ^ {2} d z\right) ^ {1 / 2}.
+$$
+
+Following (Rowland et al., 2018; Bellemare et al., 2023) the supremum extension of a probability metric $d$ between two return distribution functions $\eta, \eta' \in \mathbb{P}^{S \times A}$ is denoted as
+
+$$
+\bar{d}(\eta, \eta^{\prime}) = \sup_{(s, a) \in \mathcal{S} \times \mathcal{A}} d(\eta(s, a), \eta^{\prime}(s, a)).
+$$
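+
+For return distributions supported on a common atom grid, both $\ell_2$ and the supremum extension $\bar{d}$ reduce to sums over CDF gaps; a minimal sketch (the grid, the shapes, and the example distributions are illustrative assumptions):
+
+```python
+import numpy as np
+
+def cramer_l2(p, q, atoms):
+    """Cramer distance l2 between two categorical distributions p, q on
+    equally spaced atoms: L2 norm of the gap between the step CDFs."""
+    dz = atoms[1] - atoms[0]
+    gap = np.cumsum(p) - np.cumsum(q)   # F_p - F_q, constant between atoms
+    return np.sqrt(np.sum(gap ** 2) * dz)
+
+def sup_l2(eta, eta_prime, atoms):
+    """Supremum extension over state-action pairs; eta has shape (S, A, m)."""
+    return max(
+        cramer_l2(eta[s, a], eta_prime[s, a], atoms)
+        for s in range(eta.shape[0])
+        for a in range(eta.shape[1])
+    )
+
+atoms = np.linspace(0.0, 1.0, 51)
+p = np.full(51, 1.0 / 51)               # uniform over the atoms
+q = np.zeros(51); q[0] = 1.0            # point mass on the first atom
+d = cramer_l2(p, q, atoms)
+
+eta = np.broadcast_to(p, (2, 3, 51))
+eta_prime = np.broadcast_to(q, (2, 3, 51))
+d_sup = sup_l2(eta, eta_prime, atoms)
+```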
+
+The iterates $\eta_{k + 1} = \Pi_C\mathcal{T}^\pi \eta_k$ converge to the unique fixed point in $\mathcal{F}_{C,m}^{S\times A}$ with respect to $\bar{\ell}_2$ based on Banach's fixed point Theorem. This follows from the contraction property
+
+$$
+\bar {\ell} _ {2} \left(\Pi_ {C} \mathcal {T} ^ {\pi} \eta , \Pi_ {C} \mathcal {T} ^ {\pi} \eta^ {\prime}\right) \leq \sqrt {\gamma} \bar {\ell} _ {2} (\eta , \eta^ {\prime}), \tag {6}
+$$
+
+[compare (Rowland et al., 2018; Bellemare et al., 2023)].
+
+Theorem F.2 (Convergence of adaptive distributional $Q$ -learning in the categorical setting). Given some initial return distribution functions $\eta_0^A, \eta_0^B$ supported within $[\theta_1, \theta_m]$ , the induced $Q$ -values, i.e. the expected values of the return distributions $(\eta_t^A), (\eta_t^B)$ , recursively defined by Algorithm 1 converge almost surely towards $Q^*$ if the following conditions are satisfied:
+
+1. the step sizes $\alpha_{t}(s,a)$ almost surely fulfill the Robbins-Monro conditions $\sum_{t=0}^{\infty}\alpha_{t}(s,a) = \infty$ and $\sum_{t=0}^{\infty}\alpha_{t}^{2}(s,a) < \infty$ .
+2. rewards are bounded in $[R_{min}, R_{max}]$ and $[\frac{R_{min}}{1 - \gamma}, \frac{R_{max}}{1 - \gamma}] \subseteq [\theta_1, \theta_m]$ ,
+3. the choice of updating $\eta^A$ or $\eta^B$ is random and independent of all previous random variables
+4. the sequences $(\beta_t^A)_{t\in \mathbb{N}},(\beta_t^B)_{t\in \mathbb{N}}$ only depend on the past and fulfill $\lim_{t\to \infty}|\beta_t^A -\beta_t^B | = 0$ almost surely.
+
+If additionally the MDP has a unique optimal policy $\pi^{*}$ , then $(\eta_t^A), (\eta_t^B)$ converge almost surely in $\bar{\ell}_2$ to some limit $\eta_C^* \in \mathcal{F}_{C,m}$ and the greedy policy with respect to $\eta_C^*$ is the optimal policy.
+
+Note that the algorithm and proof uses $\beta_{t + 1}^{A / B}(s,a)$ with index $t + 1$ when updating $\eta_t^{A / B}$ . This is to show that in general the parameter is allowed to depend on $S_{t + 1}$ and the respective greedy action, i.e. it must only be $\mathcal{F}_{t + 1}$ measurable. To portray this generality in the following we will only write $\beta_{t + 1}^{A / B}$ without referencing a state-action pair.
+
+The simplest way to guarantee that the assumptions on the adaptive parameters $\beta^{A}, \beta^{B}$ are satisfied is to choose them equal.
+
+As in (van Hasselt, 2010), the proof is based on the following stochastic approximation result, see also (Bertsekas & Tsitsiklis, 1996), Proposition 4.5.
+
+Lemma F.3 ((Singh et al., 2000), Lemma 1). Suppose $(\Omega, \mathcal{A}, \mathbb{P}, (\mathcal{F}_n))$ is a filtered probability space on which all appearing random variables are defined. Suppose that
+
+1. a stochastic process $(F_{n})_{n\in \mathbb{N}}\subset \mathbb{R}^{d}$ with the coordinates $F_{i,n}$ for $i = 1,\ldots ,d$ such that $F_{n}$ is $\mathcal{F}_{n + 1}$ -measurable and for all $i = 1,\dots ,d$
+
+$$
+\left\| \mathbb{E}\left[ F_n \mid \mathcal{F}_n \right] \right\|_{\infty} \leq \kappa \| X_n \|_{\infty} + c_n \quad \text{and} \quad \mathbb{V}\left[ F_{i,n} \mid \mathcal{F}_n \right] \leq K (1 + \kappa \| X_n \|_{\infty})^2, \quad n \geq 1,
+$$
+
+where $\kappa \in [0,1)$ , an adapted, stochastic process $(c_{n})_{n\in \mathbb{N}}\subset \mathbb{R}^{+}$ that converges to 0 almost surely and some constant $K > 0$
+
+2. the non-negative stochastic process $(\alpha_{n})_{n\in \mathbb{N}}\subset \mathbb{R}^{d}$ , with the coordinates $\alpha_{i,n}\in [0,1]$ for $i = 1,\ldots ,d$ is adapted with
+
+$$
+\sum_{n=1}^{\infty} \alpha_{i,n} = \infty \quad \text{and} \quad \sum_{n=1}^{\infty} \alpha_{i,n}^2 < \infty \quad a.s.
+$$
+
+Then, for any $\mathcal{F}_0$ -measurable initial condition $X_0$ the stochastic process $(X_{n})_{n\in \mathbb{N}}\subset \mathbb{R}^{d}$ with coordinates $X_{i,n}$ for $i = 1,\ldots ,d$ that is recursively defined by
+
+$$
+X _ {i, n + 1} = (1 - \alpha_ {i, n}) X _ {i, n} + \alpha_ {i, n} F _ {i, n}, \quad n \in \mathbb {N},
+$$
+
+converges almost surely to zero.
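+
+The mechanics of Lemma F.3 can be illustrated with a toy simulation in which $F_n$ is a noisy $\kappa$-contraction of $X_n$ plus a vanishing bias $c_n$ (all concrete choices here, $\kappa = 0.5$, $\alpha_n = 1/n$, Gaussian noise, are illustrative assumptions, not part of the lemma):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+d, kappa = 4, 0.5
+x = np.full(d, 10.0)                     # F_0-measurable initial condition
+for n in range(1, 100_001):
+    alpha = 1.0 / n                      # Robbins-Monro step sizes
+    c_n = 1.0 / np.sqrt(n)               # adapted sequence converging to 0
+    noise = rng.normal(0.0, 1.0, d)      # bounded conditional variance
+    # F_n satisfies ||E[F_n | F_n]||_inf <= kappa * ||x||_inf + c_n
+    f = kappa * np.max(np.abs(x)) * np.sign(x) + c_n + noise
+    x = (1 - alpha) * x + alpha * f
+```
+
+After $10^5$ iterations the iterates have decayed from $\|X_0\|_\infty = 10$ to a small residual driven by the remaining noise and bias.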
+
+Furthermore, we follow (Rowland et al., 2018) by first showing the convergence of the mean-values to $Q^{*}$ and afterwards showing convergence of the return distribution functions, under the assumption of a unique optimal policy, by coupling it with policy evaluation. The convergence of the latter is easier to prove and we will do so at the end.
+
+Lemma F.4 (Adaptive Double Categorical Temporal Difference for Policy Evaluation). Given some initial return distribution functions $\eta_0^A, \eta_0^B$ supported within $[\theta_1, \theta_m]$ and a stationary policy $\pi \in \Pi_S$ , the return distribution functions $(\eta_t^A), (\eta_t^B)$ recursively defined by Algorithm 1, but with $a^* \sim \pi(\cdot; S_{t+1})$ instead, converge almost surely towards the unique fixed point $\eta_C \in \mathbb{P}(\mathbb{R})^{S \times A}$ of the operator $\Pi_C T^\pi$ with respect to $\bar{\ell}_2$ , if the following conditions are satisfied:
+
+1. the step sizes $\alpha_{t}(s,a)$ fulfill the Robbins-Monro conditions:
+
+- $\sum_{t=0}^{\infty} \alpha_t(s, a) = \infty$
+- $\sum_{t=0}^{\infty} \alpha_t^2(s, a) < \infty$,
+
+2. rewards are bounded in $[R_{min}, R_{max}]$ and $[\frac{R_{min}}{1 - \gamma}, \frac{R_{max}}{1 - \gamma}] \subseteq [\theta_1, \theta_m]$ ,
+3. the choice of updating $\eta^A$ or $\eta^B$ is random and independent of all other previous random variables
+
+The above result is only relevant for the proof of Theorem F.2, as policy evaluation with a double estimator is not of interest. Note that convergence of categorical temporal difference for policy evaluation (in the single estimator case) has been proven in [(Rowland et al., 2018) Theorem 2 mimicking (Tsitsiklis, 1994) Theorem 2] and [(Bellemare et al., 2023) Theorem 6.12 applying (Tsitsiklis, 1994) Theorem 3 or (Bertsekas & Tsitsiklis, 1996) Proposition 4.5].
+
+Lemma F.5. Let $(\alpha_{t})_{t\in \mathbb{N}_{0}}$ be a sequence fulfilling the Robbins-Monro conditions and $(Y_{t})_{t\in \mathbb{N}_0}$ an iid sequence of Bernoulli(0.5) random variables, i.e. $\mathbb{P}(Y_t = 1) = \mathbb{P}(Y_t = 0) = 0.5$ for all $t\in \mathbb{N}_0$. Then $(\alpha_{t}Y_{t})_{t\in \mathbb{N}_{0}}$ also fulfills the Robbins-Monro conditions.
+
+Proof. The almost sure convergence of the summed squares is obviously fulfilled due to
+
+$$
+\sum_{t=0}^{\infty} \left( \alpha_t Y_t \right)^2 \leq \sum_{t=0}^{\infty} \alpha_t^2 < \infty \quad \text{almost surely}.
+$$
+
+Due to independence of each $Y_{t}$ with $\{Y_{n}|n\in \mathbb{N}_{0}, n\neq t\}$ as well as with $\alpha = (\alpha_{t})_{t = 0}^{\infty}$ we will consider a two stage experiment, where we first draw the sequence $\alpha = (\alpha_{t})_{t = 0}^{\infty}$ and then independently of this realization sample the iid sequence $Y = (Y_{t})_{t = 0}^{\infty}$ . Due to the independence the joint measure of $\alpha$ and $Y$ is the product measure. Consider the product space $(\Omega ,\mathcal{F},\mathbb{P}) = (\Omega_{\alpha}\times \Omega_{Y},\mathcal{F}_{\alpha}\otimes \mathcal{F}_{Y},\mathbb{P}_{\alpha}^{\otimes \mathbb{N}}\otimes \mathbb{P}_{Y}^{\otimes \mathbb{N}})$ where $\Omega_{\alpha},\Omega_{Y} = [0,1]^{\mathbb{N}},\mathcal{F}_{\alpha},\mathcal{F}_{Y} = \mathcal{B}([0,1])^{\otimes \mathbb{N}}$ . Then, using that $\sum_{t = 0}^{\infty}\alpha_{t} = \infty \mathbb{P}_{\alpha}$ -almost surely, we have
+
+$$
+\begin{array}{l} \mathbb {P} \Big (\sum_ {t = 0} ^ {\infty} \alpha_ {t} Y _ {t} = \infty \Big) = \int_ {\Omega_ {\alpha}} \mathbb {P} _ {Y} \Big (\sum_ {t = 0} ^ {\infty} \alpha_ {t} Y _ {t} = \infty \Big) d \mathbb {P} _ {\alpha} (\alpha) \\ = \int_ {\left\{\left(\alpha_ {t}\right) _ {t = 0} ^ {\infty} \in \Omega_ {\alpha}: \sum_ {t = 0} ^ {\infty} \alpha_ {t} = \infty \right\}} \mathbb {P} _ {Y} \left(\sum_ {t = 0} ^ {\infty} \alpha_ {t} Y _ {t} = \infty\right) d \mathbb {P} _ {\alpha} (\alpha) \\ \stackrel {(a)} {=} \int_ {\left\{\left(\alpha_ {t}\right) _ {t = 0} ^ {\infty} \in \Omega_ {\alpha}: \sum_ {t = 0} ^ {\infty} \alpha_ {t} = \infty \right\}} 1 d \mathbb {P} _ {\alpha} (\alpha) \\ = 1, \\ \end{array}
+$$
+
+where $(a)$ can be seen as follows. Consider any deterministic sequence $(b_{t})\subseteq [0,1]$ fulfilling $\sum_{t = 0}^{\infty}b_{t} = \infty$ . Then
+
+$$
+\infty = \sum_ {t = 0} ^ {\infty} b _ {t} = \sum_ {t = 0} ^ {\infty} b _ {t} Y _ {t} + \sum_ {t = 0} ^ {\infty} b _ {t} \mathbf {1} _ {Y _ {t} = 0}.
+$$
+
+Now notice that $A = \sum_{t=0}^{\infty} b_t Y_t$ and $B = \sum_{t=0}^{\infty} b_t \mathbf{1}_{Y_t = 0}$ are identically distributed, and since the sum of $A$ and $B$ is always infinite, at least one of them must be infinite. Given the identical distribution, we infer
+
+$$
+\mathbb {P} _ {Y} \big (\sum_ {t = 0} ^ {\infty} b _ {t} Y _ {t} = \infty \big) > 0.
+$$
+
+But since $(b_{t}Y_{t})$ is an independent sequence of random variables and the event that the infinite sum diverges is in the tail sigma algebra, the Kolmogorov 0-1 law yields:
+
+$$
+\mathbb {P} _ {Y} (\sum_ {t = 0} ^ {\infty} b _ {t} Y _ {t} = \infty) = 1.
+$$
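+
+Lemma F.5 is easy to check empirically for a concrete choice such as $\alpha_t = 1/(t+1)$ (an illustrative sanity check, not part of the proof): the Bernoulli-thinned partial sums still diverge like half the harmonic series, while the squared sums stay summable.
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+T = 1_000_000
+t = np.arange(T)
+alpha = 1.0 / (t + 1)                  # Robbins-Monro sequence
+y = rng.integers(0, 2, size=T)         # iid Bernoulli(0.5) thinning
+thinned = alpha * y
+
+partial_sum = thinned.sum()            # grows like 0.5 * log(T): diverges
+partial_sq = (thinned ** 2).sum()      # stays below pi^2 / 6: summable
+```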
+
+Remark F.6. As outlined in (Rowland et al., 2018), proof of Proposition 1, denoting by $\mathcal{M}(\mathbb{R})$ the space of all finite signed measures on $(\mathbb{R},\mathcal{B}(\mathbb{R}))$ , the subspace
+
+$$
+\mathcal {M} _ {0} (\mathbb {R}) := \{\nu \in \mathcal {M} (\mathbb {R}) | \nu (\mathbb {R}) = 0, \int_ {\mathbb {R}} F _ {\nu} (x) ^ {2} d x < \infty \},
+$$
+
+"where $F_{\nu}(x) = \nu([- \infty, x))$ for $x \in \mathbb{R}$ , is isometrically isomorphic to a subspace of the Hilbert space $L^2(\mathbb{R})$ with inner product given by
+
+$$
+\langle \nu_1, \nu_2 \rangle_{\ell_2} = \int_{\mathbb{R}} F_{\nu_1}(x) F_{\nu_2}(x) \, dx.
+$$
+
+Then the affine translation $\delta_0 + \mathcal{M}_0$ is also a Hilbert space endowed with the same inner product. It contains the probability measures $\nu \in \mathbb{P}(\mathbb{R})$ satisfying
+
+$$
+\int_ {- \infty} ^ {0} F _ {\nu} (x) ^ {2} d x < \infty \quad \mathrm {a n d} \quad \int_ {0} ^ {\infty} (1 - F _ {\nu} (x)) ^ {2} d x < \infty .
+$$
+
+To see this, note that $\mu = \nu - \delta_0$ fulfills $F_{\mu}(x) = F_{\nu}(x)$ for $x < 0$ and $F_{\mu}(x) = F_{\nu}(x) - 1$ for $x \geq 0$. Hence, $\mu \in \mathcal{M}_0$. The two conditions ensure that the tails decay fast enough.
+
+Note that the inner product induces a norm through $\| \nu \|_{\ell_2}^2 = \langle \nu, \nu \rangle_{\ell_2}$, and we have $\ell_2(\nu_1, \nu_2) = \| \nu_1 - \nu_2 \|_{\ell_2}$. In the following proof, we will make use of the relationship
+
+$$
+\ell_ {2} ^ {2} (\nu_ {1} + \nu_ {2}, \nu_ {1} ^ {\prime} + \nu_ {2} ^ {\prime}) = \| \nu_ {1} - \nu_ {1} ^ {\prime} \| _ {\ell_ {2}} ^ {2} + \| \nu_ {2} - \nu_ {2} ^ {\prime} \| _ {\ell_ {2}} ^ {2} + 2 \langle \nu_ {1} - \nu_ {1} ^ {\prime}, \nu_ {2} - \nu_ {2} ^ {\prime} \rangle
+$$
+
+holding by bilinearity of the inner product.
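+
+Since $F_{\nu_1 + \nu_2} = F_{\nu_1} + F_{\nu_2}$, this relationship is just the expansion of a squared norm and can be checked directly on discretized CDFs (the grid and the random example measures below are illustrative assumptions):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(1)
+z = np.linspace(-5.0, 5.0, 1001)       # integration grid for the CDFs
+dz = z[1] - z[0]
+
+def step_cdf(atoms, probs):
+    """CDF of a discrete probability measure, evaluated on the grid z."""
+    return np.array([probs[atoms <= x].sum() for x in z])
+
+def inner(F, G):
+    """<.,.>_{l2} approximated by a Riemann sum over the grid."""
+    return float(np.sum(F * G) * dz)
+
+def random_cdf():
+    atoms = rng.uniform(-4.0, 4.0, 5)
+    probs = rng.dirichlet(np.ones(5))
+    return step_cdf(atoms, probs)
+
+F1, F1p, F2, F2p = (random_cdf() for _ in range(4))
+# l2^2(nu1 + nu2, nu1' + nu2') expanded via bilinearity
+diff = (F1 + F2) - (F1p + F2p)
+lhs = inner(diff, diff)
+rhs = (inner(F1 - F1p, F1 - F1p) + inner(F2 - F2p, F2 - F2p)
+       + 2.0 * inner(F1 - F1p, F2 - F2p))
+```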
+
+# Proof of Theorem F.2. Step 1: Convergence of mean values to $Q^{*}$
+
+The proof mainly follows (Rowland et al., 2018) and (van Hasselt, 2010). Let the filtration be given by $\mathcal{F}_t = \sigma(\eta_0^A, \eta_0^B, s_0, a_0, \alpha_0, R_0, S_1, Y_1, \beta_1^A, \beta_1^B, \ldots, s_t, a_t, \alpha_t)$, where $(Y_n)_{n \in \mathbb{N}}$ is an iid sequence of Bernoulli(0.5) random variables, independent of all other appearing random variables, such that $A$ is updated when $Y_{n+1} = 1$. Denote the expected values of the return distributions by $Q_t^A(s, a) = \mathbb{E}_{R \sim \eta_t^A(s, a)}[R]$ and, overloading notation, let us further write $\mathbb{E}[\nu]$ for the expected value $\mathbb{E}_{R \sim \nu}[R]$ of a probability distribution $\nu \in \mathbb{P}(\mathbb{R})$. We will first consider how the expected values evolve. Due to the symmetry of the updates it is sufficient to show convergence of $Q_t^A$ to $Q^*$. It is implied that $\alpha_t(s, a) = 0$ for $(s, a) \neq (s_t, a_t)$. Further, define
+
+$$
+\begin{array}{l} X _ {t} \left(s _ {t}, a _ {t}\right) := Q _ {t} ^ {A} \left(s _ {t}, a _ {t}\right) - Q ^ {*} \left(s _ {t}, a _ {t}\right) \\ F _ {t} \left(s _ {t}, a _ {t}\right) := \mathbf {1} _ {Y _ {t + 1} = 1} \left(R _ {t} + \gamma \left(\beta_ {t + 1} ^ {A} Q _ {t} ^ {A} \left(S _ {t + 1}, a ^ {*}\right) + \left(1 - \beta_ {t + 1} ^ {A}\right) Q _ {t} ^ {B} \left(S _ {t + 1}, a ^ {*}\right)\right) - Q ^ {*} \left(s _ {t}, a _ {t}\right)\right) \\ + \mathbf {1} _ {Y _ {t + 1} = 0} X _ {t} \left(s _ {t}, a _ {t}\right) \\ \end{array}
+$$
+
+$$
+F_t(s, a) := 0 \quad \text{whenever } (s, a) \neq (s_t, a_t)
+$$
+
+with $a^* = \arg \max_{a' \in \mathcal{A}_{S_{t+1}}} Q_t^A(S_{t+1}, a')$. According to [(Lyle et al., 2019), Proposition 1] the projection $\Pi_C$ is mean-preserving, i.e. $\mathbb{E}[\Pi_C \nu] = \mathbb{E}[\nu]$ whenever $\nu$ is a distribution supported within $[\theta_1, \theta_m]$. This is the case for every $\hat{\eta}_*$ as in Algorithm 1, which can be seen as follows. Assume $\eta_t^A(s_t, a_t), \eta_t^B(s_t, a_t) \in \mathcal{F}_{C,m}$. Then also
+
+$$
+\nu = \beta_{t+1}^A \eta_t^A(S_{t+1}, a^*) + (1 - \beta_{t+1}^A) \eta_t^B(S_{t+1}, a^*) \in \mathcal{F}_{C,m},
+$$
+
+and suppose $\nu = \sum_{i=1}^{m} p_i \delta_{\theta_i}$ for some $p_i$ . Then
+
+$$
+\hat {\eta} _ {*} := b _ {R _ {t}, \gamma} \# \nu = \sum_ {i = 1} ^ {m} p _ {i} \delta_ {R _ {t} + \gamma \theta_ {i}}.
+$$
+
+But now
+
+$$
+\theta_1 \leq \frac{R_{min}}{1 - \gamma} \leq \frac{R_{max}}{1 - \gamma} \leq \theta_m
+$$
+
+(assumption $(ii)$) guarantees that
+
+$$
+\theta_ {1} \leq R _ {t} + \gamma \theta_ {i} \leq \theta_ {m} \quad \forall i \in \{1, \dots , m \}
+$$
+
+and $\hat{\eta}_{*}$ is supported within $[\theta_1,\theta_m]$ . Similarly for a realized transition with $(R_{t},S_{t + 1}) = (r_{t},s_{t + 1})$ , we have for the expected value of the distribution
+
+$$
+\begin{array}{l} \mathbb{E}\left[ b_{r_t, \gamma} \# \left( \beta_{t+1}^A \eta_t^A(s_{t+1}, a^*) + (1 - \beta_{t+1}^A) \eta_t^B(s_{t+1}, a^*) \right) \right] \\ = r_t + \gamma \left( \beta_{t+1}^A Q_t^A(s_{t+1}, a^*) + (1 - \beta_{t+1}^A) Q_t^B(s_{t+1}, a^*) \right). \\ \end{array}
+$$
+
+Hence, the expected values of the return distributions $\eta_t^A$ subtracted by $Q^{*}$ indeed evolve as
+
+$$
+X _ {t + 1} (s, a) = \left(1 - \alpha_ {t} (s, a)\right) X _ {t} (s, a) + \alpha_ {t} (s, a) F _ {t} (s, a).
+$$
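+
+The mean-preservation of $\Pi_C$ used in this step is easy to check numerically with a standard C51-style projection (a sketch under the assumption of equally spaced atoms; the sample distribution is arbitrary):
+
+```python
+import numpy as np
+
+def project_categorical(support, probs, atoms):
+    """Pi_C: project a discrete distribution onto fixed, equally spaced atoms
+    by splitting each point mass between its two neighbouring atoms."""
+    proj = np.zeros(len(atoms))
+    dz = atoms[1] - atoms[0]
+    for x, p in zip(support, probs):
+        b = (np.clip(x, atoms[0], atoms[-1]) - atoms[0]) / dz
+        lo, hi = int(np.floor(b)), int(np.ceil(b))
+        if lo == hi:
+            proj[lo] += p
+        else:
+            proj[lo] += p * (hi - b)   # linear split preserves the mean
+            proj[hi] += p * (b - lo)
+    return proj
+
+atoms = np.linspace(-10.0, 10.0, 51)   # [theta_1, theta_m] with m = 51
+rng = np.random.default_rng(0)
+support = rng.uniform(-10.0, 10.0, 7)  # support inside [theta_1, theta_m]
+probs = rng.dirichlet(np.ones(7))
+proj = project_categorical(support, probs, atoms)
+```
+
+For any point inside $[\theta_1, \theta_m]$ the linear split onto the two neighbouring atoms reproduces the point exactly in expectation, so the projected mean matches the original one.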
+
+We now proceed similarly as in (van Hasselt, 2010) to show that the conditions of Lemma F.3 are satisfied.
+
+We first show that $\mathbb{V}[F_t(s,a) \mid \mathcal{F}_t]$ is bounded for all $(s,a) \in \mathcal{S} \times \mathcal{A}$ and therefore satisfies $\mathbb{V}[F_t(s,a) \mid \mathcal{F}_t] \leq K(1 + \kappa \| X_t \|_\infty)^2$ as required. Since the rewards were assumed to be bounded, there is an $\bar{R} > 0$ such that $|r|, |\theta_1|, |\theta_m| \leq \bar{R}$ for all $r \in \mathcal{R}$. Hence, we have
+
+$$
+\begin{array}{l} \left| F _ {t} \left(s _ {t}, a _ {t}\right) \right| \leq \left| R _ {t} + \gamma \left(\beta_ {t + 1} ^ {A} Q _ {t} ^ {A} \left(S _ {t + 1}, a ^ {*}\right) + \left(1 - \beta_ {t + 1} ^ {A}\right) Q _ {t} ^ {B} \left(S _ {t + 1}, a ^ {*}\right)\right) - Q ^ {*} \left(s _ {t}, a _ {t}\right) \right| \\ + \left| X _ {t} \left(s _ {t}, a _ {t}\right) \right| \\ \le \bar {R} + 3 \bar {R} + 2 \frac {\bar {R}}{1 - \gamma}. \\ \end{array}
+$$
+
+Next, we need to show that $\| \mathbb{E}[F_t \mid \mathcal{F}_t] \|_{\infty} \leq \kappa \| X_t \|_{\infty} + c_t$. Let us therefore decompose
+
+$$
+\begin{array}{l} F_t(s_t, a_t) = \mathbf{1}_{Y_{t+1}=1} \left( F_t^Q(s_t, a_t) + \gamma (1 - \beta_{t+1}^A) \left( Q_t^B(S_{t+1}, a^*) - Q_t^A(S_{t+1}, a^*) \right) \right) \\ + \mathbf{1}_{Y_{t+1}=0} X_t(s_t, a_t) \\ \end{array}
+$$
+
+with $F_{t}^{Q}(s_{t},a_{t}):= R_{t} + \gamma Q_{t}^{A}(S_{t + 1},a^{*}) - Q^{*}(s_{t},a_{t})$ . This yields
+
+$$
+\begin{array}{l} \left| \mathbb{E}\left[ \mathbf{1}_{Y_{t+1}=1} F_t^Q(s_t, a_t) + \mathbf{1}_{Y_{t+1}=0} X_t(s_t, a_t) \mid \mathcal{F}_t \right] \right| \\ = \left| \frac{1}{2} \left( \mathbb{E}\left[ R_t + \gamma Q_t^A(S_{t+1}, a^*) \mid \mathcal{F}_t \right] - Q^*(s_t, a_t) \right) + \frac{1}{2} X_t(s_t, a_t) \right| \\ \leq \frac{1}{2} \left| T^* Q_t^A(s_t, a_t) - T^* Q^*(s_t, a_t) \right| + \frac{1}{2} \left| X_t(s_t, a_t) \right| \\ \leq \frac{1}{2} \gamma \| Q_t^A - Q^* \|_{\infty} + \frac{1}{2} \| X_t \|_{\infty} \\ = \underbrace{\left( \frac{1}{2} \gamma + \frac{1}{2} \right)}_{<1} \| X_t \|_{\infty}, \\ \end{array}
+$$
+
+since the Bellman optimality operator is a $\gamma$-contraction. Consequently, it only remains to show that
+
+$$
+c _ {t} := | \mathbb {E} [ \mathbf {1} _ {Y _ {t + 1} = 1} \gamma (1 - \beta_ {t + 1} ^ {A}) (Q _ {t} ^ {B} (S _ {t + 1}, a ^ {*}) - Q _ {t} ^ {A} (S _ {t + 1}, a ^ {*})) | \mathcal {F} _ {t} ] |
+$$
+
+goes to zero almost surely. This is immediate if we verify that
+
+$$
+X _ {t} ^ {B A} (s, a) := Q _ {t} ^ {B} (s, a) - Q _ {t} ^ {A} (s, a)
+$$
+
+goes to zero almost surely for all $(s,a)\in S\times \mathcal{A}$ which will be achieved by another application of Lemma F.3. We infer that
+
+$$
+\begin{array}{l} X_{n+1}^{BA}(s_n, a_n) \\ = X_n^{BA}(s_n, a_n) + \alpha_n(s_n, a_n) \Big( \mathbf{1}_{Y_{n+1}=0} \big( R_n + \gamma ( \beta_{n+1}^B Q_n^B(S_{n+1}, b^*) + (1 - \beta_{n+1}^B) Q_n^A(S_{n+1}, b^*) ) - Q_n^B(s_n, a_n) \big) \\ \quad - \mathbf{1}_{Y_{n+1}=1} \big( R_n + \gamma ( \beta_{n+1}^A Q_n^A(S_{n+1}, a^*) + (1 - \beta_{n+1}^A) Q_n^B(S_{n+1}, a^*) ) - Q_n^A(s_n, a_n) \big) \Big) \\ = (1 - \alpha_n(s_n, a_n)) X_n^{BA}(s_n, a_n) + \alpha_n(s_n, a_n) \Big( \mathbf{1}_{Y_{n+1}=0} \big( R_n + \gamma ( \beta_{n+1}^B Q_n^B(S_{n+1}, b^*) + (1 - \beta_{n+1}^B) Q_n^A(S_{n+1}, b^*) ) \big) \\ \quad - \mathbf{1}_{Y_{n+1}=1} \big( R_n + \gamma ( \beta_{n+1}^A Q_n^A(S_{n+1}, a^*) + (1 - \beta_{n+1}^A) Q_n^B(S_{n+1}, a^*) ) \big) + \mathbf{1}_{Y_{n+1}=1} Q_n^B(s_n, a_n) - \mathbf{1}_{Y_{n+1}=0} Q_n^A(s_n, a_n) \Big) \\ = (1 - \alpha_n(s_n, a_n)) X_n^{BA}(s_n, a_n) + \alpha_n(s_n, a_n) \tilde{F}_n(s_n, a_n), \\ \end{array}
+$$
+
+with
+
+$$
+\begin{array}{l} \tilde {F} _ {n} \left(s _ {n}, a _ {n}\right) = \left(\mathbf {1} _ {Y _ {n + 1} = 0} \left(R _ {n} + \gamma \left(\beta_ {n + 1} ^ {B} Q _ {n} ^ {B} \left(S _ {n + 1}, b ^ {*}\right) + \left(1 - \beta_ {n + 1} ^ {B}\right) Q _ {n} ^ {A} \left(S _ {n + 1}, b ^ {*}\right)\right)\right) \right. \\ - \mathbf {1} _ {Y _ {n + 1} = 1} \left(R _ {n} + \gamma \left(\beta_ {n + 1} ^ {A} Q _ {n} ^ {A} \left(S _ {n + 1}, a ^ {*}\right) + \left(1 - \beta_ {n + 1} ^ {A}\right) Q _ {n} ^ {B} \left(S _ {n + 1}, a ^ {*}\right)\right)\right) \\ \left. + \mathbf {1} _ {Y _ {n + 1} = 1} Q _ {n} ^ {B} (s _ {n}, a _ {n}) - \mathbf {1} _ {Y _ {n + 1} = 0} Q _ {n} ^ {A} (s _ {n}, a _ {n})\right). \\ \end{array}
+$$
+
+Now, using that $Q_{n}^{B}(s_{n},a_{n}),Q_{n}^{A}(s_{n},a_{n}),X_{n}^{BA}(s_{n},a_{n}),\alpha_{n}(s_{n},a_{n})$ are $\mathcal{F}_n$ -measurable and $Y_{n + 1}$ is independent of $\mathcal{F}_n$ , the conditional expectation satisfies
+
+$$
+\begin{array}{l} \left| \mathbb{E}\left[ \tilde{F}_n(s_n, a_n) \mid \mathcal{F}_n \right] \right| = \frac{1}{2} \gamma \left| \mathbb{E}\left[ \beta_{n+1}^B Q_n^B(S_{n+1}, b^*) + (1 - \beta_{n+1}^B) Q_n^A(S_{n+1}, b^*) \right.\right. \\ \quad \left.\left. - \beta_{n+1}^A Q_n^A(S_{n+1}, a^*) - (1 - \beta_{n+1}^A) Q_n^B(S_{n+1}, a^*) \mid \mathcal{F}_n \right] \right| + \frac{1}{2} \left| Q_n^B(s_n, a_n) - Q_n^A(s_n, a_n) \right| \\ \leq \frac{1}{2} \gamma \Big( \left| \mathbb{E}\left[ \beta_{n+1}^B \left( Q_n^B(S_{n+1}, b^*) - Q_n^A(S_{n+1}, a^*) \right) \mid \mathcal{F}_n \right] \right| + \left| \mathbb{E}\left[ (1 - \beta_{n+1}^B) \left( Q_n^A(S_{n+1}, b^*) - Q_n^B(S_{n+1}, a^*) \right) \mid \mathcal{F}_n \right] \right| \\ \quad + \left| \mathbb{E}\left[ (\beta_{n+1}^B - \beta_{n+1}^A) Q_n^A(S_{n+1}, a^*) \mid \mathcal{F}_n \right] \right| + \left| \mathbb{E}\left[ ((1 - \beta_{n+1}^B) - (1 - \beta_{n+1}^A)) Q_n^B(S_{n+1}, a^*) \mid \mathcal{F}_n \right] \right| \Big) \\ \quad + \frac{1}{2} \| X_n^{BA} \|_{\infty}. \\ \end{array}
+$$
+
+Now if $\mathbb{E}[Q_n^B(S_{n+1}, b^*) \mid \mathcal{F}_n] \geq \mathbb{E}[Q_n^A(S_{n+1}, a^*) \mid \mathcal{F}_n]$ holds, then by definition of $a^*$ we have $Q_n^A(S_{n+1}, a^*) = \max_{a \in \mathcal{A}_{S_{n+1}}} Q_n^A(S_{n+1}, a) \geq Q_n^A(S_{n+1}, b^*)$ and therefore
+
+$$
+\begin{array}{l} \left| \mathbb {E} \left[ Q _ {n} ^ {B} \left(S _ {n + 1}, b ^ {*}\right) - Q _ {n} ^ {A} \left(S _ {n + 1}, a ^ {*}\right) \mid \mathcal {F} _ {n} \right] \right| = \mathbb {E} \left[ Q _ {n} ^ {B} \left(S _ {n + 1}, b ^ {*}\right) - Q _ {n} ^ {A} \left(S _ {n + 1}, a ^ {*}\right) \mid \mathcal {F} _ {n} \right] \\ \leq \mathbb {E} [ Q _ {n} ^ {B} (S _ {n + 1}, b ^ {*}) - Q _ {n} ^ {A} (S _ {n + 1}, b ^ {*}) | \mathcal {F} _ {n} ] \leq \| X _ {n} ^ {B A} \| _ {\infty}. \\ \end{array}
+$$
+
+Analogously, if $\mathbb{E}[Q_n^B (S_{n + 1},b^*)|\mathcal{F}_n] < \mathbb{E}[Q_n^A (S_{n + 1},a^*)|\mathcal{F}_n]$ , then we have by definition of $b^{*}$
+
+$$
+\begin{array}{l} \left| \mathbb {E} \left[ Q _ {n} ^ {B} \left(S _ {n + 1}, b ^ {*}\right) - Q _ {n} ^ {A} \left(S _ {n + 1}, a ^ {*}\right) \mid \mathcal {F} _ {n} \right] \right| = \mathbb {E} \left[ Q _ {n} ^ {A} \left(S _ {n + 1}, a ^ {*}\right) - Q _ {n} ^ {B} \left(S _ {n + 1}, b ^ {*}\right) \mid \mathcal {F} _ {n} \right] \\ \leq \mathbb {E} [ Q _ {n} ^ {A} (S _ {n + 1}, a ^ {*}) - Q _ {n} ^ {B} (S _ {n + 1}, a ^ {*}) | \mathcal {F} _ {n} ] \leq \| X _ {n} ^ {B A} \| _ {\infty}. \\ \end{array}
+$$
+
+Similarly, by distinguishing cases, one shows that
+
+$$
+\left| \mathbb{E}\left[ Q_n^A(S_{n+1}, b^*) - Q_n^B(S_{n+1}, a^*) \mid \mathcal{F}_n \right] \right| \leq \| X_n^{BA} \|_{\infty}.
+$$
+
+Combining the above yields
+
+$$
+\begin{array}{l} \left| \mathbb{E}\left[ \tilde{F}_n(s_n, a_n) \mid \mathcal{F}_n \right] \right| \leq \frac{1}{2} \gamma \left( \beta_{n+1}^B + (1 - \beta_{n+1}^B) \right) \| X_n^{BA} \|_{\infty} + \frac{1}{2} \| X_n^{BA} \|_{\infty} + \bar{c}_n = \underbrace{\frac{1 + \gamma}{2}}_{<1} \| X_n^{BA} \|_{\infty} + \bar{c}_n, \\ \text{where } \bar{c}_n := \frac{1}{2} \gamma \Big( \big| \mathbb{E}\big[ (\beta_{n+1}^B - \beta_{n+1}^A) \underbrace{Q_n^A(S_{n+1}, a^*)}_{|\cdot| \leq \bar{R} < \infty} \mid \mathcal{F}_n \big] \big| + \big| \mathbb{E}\big[ ((1 - \beta_{n+1}^B) - (1 - \beta_{n+1}^A)) \underbrace{Q_n^B(S_{n+1}, a^*)}_{|\cdot| \leq \bar{R} < \infty} \mid \mathcal{F}_n \big] \big| \Big) \to 0 \text{ almost surely, since } |\beta_n^A - \beta_n^B| \to 0 \text{ for } n \to \infty \text{ due to } (iv). \\ \end{array}
+$$
+
+Hence, we invoke Lemma F.3 to obtain convergence of $X_t^{BA}$, and thus, with another application of Lemma F.3, $X_t(s,a)$ converges to zero, which finally implies that $Q_t^A(s,a)$ (and also $Q_t^B(s,a)$) converges to $Q^*(s,a)$ almost surely for every $(s,a) \in \mathcal{S} \times \mathcal{A}$.
+
+Since $S, A$ are finite, for every $\varepsilon > 0$ , there exists a random variable $N > 0$ such that for all $t > N$ , we have
+
+$$
+\max_{z \in \{A, B\}} \| Q_t^z - Q^* \|_{\infty} < \varepsilon \quad \text{almost surely}.
+$$
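+
+Step 1 can be illustrated on a toy example: the following sketch runs the mean-value recursion with a $\beta$-mixed double-Q target on an assumed two-state MDP with uniform transitions (this is not Algorithm 1 itself, which updates full return distributions; all MDP details here are invented for illustration):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+gamma = 0.5
+R = np.array([[1.0, 0.0], [0.0, 2.0]])   # reward r(s, a); next state uniform
+
+# ground-truth Q* by value iteration (transitions uniform over both states)
+Q_star = np.zeros((2, 2))
+for _ in range(200):
+    v = Q_star.max(axis=1).mean()         # E_{s'}[max_a Q*(s', a)]
+    Q_star = R + gamma * v
+
+QA, QB = np.zeros((2, 2)), np.zeros((2, 2))
+visits = np.zeros((2, 2))
+s = 0
+for _ in range(200_000):
+    a = rng.integers(2)                   # uniform behaviour policy
+    visits[s, a] += 1
+    alpha = 1.0 / visits[s, a]            # Robbins-Monro steps per (s, a)
+    beta = 0.5                            # constant beta^A = beta^B
+    r, s_next = R[s, a], rng.integers(2)
+    if rng.integers(2):                   # Y = 1: update estimator A
+        a_star = QA[s_next].argmax()
+        target = r + gamma * (beta * QA[s_next, a_star]
+                              + (1 - beta) * QB[s_next, a_star])
+        QA[s, a] += alpha * (target - QA[s, a])
+    else:                                 # Y = 0: update estimator B
+        b_star = QB[s_next].argmax()
+        target = r + gamma * (beta * QB[s_next, b_star]
+                              + (1 - beta) * QA[s_next, b_star])
+        QB[s, a] += alpha * (target - QB[s, a])
+    s = s_next
+
+err = max(np.abs(QA - Q_star).max(), np.abs(QB - Q_star).max())
+```
+
+Both estimators approach $Q^*$, matching the statement of Step 1 for this toy instance.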
+
+# Step 2: Convergence of return distributions
+
+Suppose the MDP has a unique optimal policy $\pi^{*}$ . Now following (Rowland et al., 2018), we take $\varepsilon$ to be half the minimum action gap for the optimal action-value function $Q^{*} = Q^{\pi^{*}}$ , i.e.
+
+$$
+\varepsilon = \frac{1}{2} \min_{s \in \mathcal{S}} \left( Q^{\pi^*}(s, \pi^*(s)) - \max_{a \neq \pi^*(s)} Q^{\pi^*}(s, a) \right),
+$$
+
+which is greater than zero by assumption $(v)$ . Hence, denoting the action of the deterministic optimal policy in a certain state $s$ by $\pi^{*}(s)$ , we get
+
+$$
+\arg\max_{a} Q_t^A(s, a) = \arg\max_{a} Q_t^B(s, a) = \pi^*(s)
+$$
+
+for all $t > N$ . For some initial condition $\tilde{\eta}_0 \in \mathcal{F}_{C,m}^S$ , let now $\tilde{\eta}_k$ be the iterates created by a double categorical policy evaluation algorithm for the optimal policy $\pi^*$ , i.e.
+
+$$
+\begin{array}{l} \tilde{\eta}_{k+1}^A(s_k, a_k) = (1 - \mathbf{1}_{Y_{k+1}=1} \alpha_k(s_k, a_k)) \tilde{\eta}_k^A(s_k, a_k) \\ + \mathbf{1}_{Y_{k+1}=1} \alpha_k(s_k, a_k) \Pi_C \Big( b_{R_k, \gamma} \# \big( \beta_{k+1}^A \tilde{\eta}_k^A(S_{k+1}, \pi^*(S_{k+1})) + (1 - \beta_{k+1}^A) \tilde{\eta}_k^B(S_{k+1}, \pi^*(S_{k+1})) \big) \Big) \\ \end{array}
+$$
+
+$$
+\tilde{\eta}_{k+1}^A(s, a) = \tilde{\eta}_k^A(s, a) \quad \text{for } (s, a) \neq (s_k, a_k),
+$$
+
+and analogously for $\tilde{\eta}^B$. Note that the appearing $Y_k, \alpha_k, \beta_k^A, \beta_k^B$ are chosen to be the same as in the control case above. Then $\tilde{\eta}^A, \tilde{\eta}^B$ converge almost surely to the unique fixed point $\eta_C^*$ of the projected operator $\Pi_C T^{\pi^*}$ with respect to $\bar{\ell}_2$ by Lemma F.4.
+
+Similarly to (Rowland et al., 2018), we now proceed by a coupling argument. Denote by $\pi_k^A, \pi_k^B$ any greedy selection rule with respect to $\eta_k^A$ and $\eta_k^B$, and let $A_k = \{\pi_n^A = \pi_n^B = \pi^* \text{ for all } n \geq k\}$. Then $A_k \subseteq A_{k+1}$ and by the above explanation we have $\mathbb{P}(A_k) \nearrow 1$. Additionally, let $B$ be the event of probability 1 on which the (double) policy evaluation algorithm converges. Now on the event $B \cap A_k$, we have
+
+$$
+\overline {{\ell}} _ {2} ^ {2} (\tilde {\eta} _ {n} ^ {A}, \eta_ {C} ^ {*}) \to 0 \quad \mathrm {a n d} \quad \overline {{\ell}} _ {2} ^ {2} (\tilde {\eta} _ {n} ^ {B}, \eta_ {C} ^ {*}) \to 0.
+$$
+
+Then by the triangle inequality it suffices to show $\bar{\ell}_2(\eta_n^A, \tilde{\eta}_n^A) \to 0$ and $\bar{\ell}_2(\eta_n^B, \tilde{\eta}_n^B) \to 0$ on this event too, since then the theorem follows from $\mathbb{P}(B \cap A_k) \nearrow 1$.
+
+To prove this we will again apply Lemma F.3. This time with $d = 2\cdot |\mathcal{S}||\mathcal{A}|$ , where we identify
+
+$$
+X _ {n} := \left[ \begin{array}{c} \ell_ {2} ^ {2} (\eta_ {n} ^ {A}, \tilde {\eta} _ {n} ^ {A}) \\ \ell_ {2} ^ {2} (\eta_ {n} ^ {B}, \tilde {\eta} _ {n} ^ {B}) \end{array} \right] \in \mathbb {R} ^ {2 | \mathcal {S} | | \mathcal {A} |}.
+$$
+
+Additionally, we expand the filtration to $\tilde{\mathcal{F}}_n = \sigma(\mathcal{F}_n, Y_{n+1})$ and define $\tilde{\alpha}_n^A(s, a) = \alpha_n(s, a)\mathbf{1}_{Y_{n+1}=1}$ and $\tilde{\alpha}_n^B(s, a) = \alpha_n(s, a)\mathbf{1}_{Y_{n+1}=0}$. By Lemma F.5 these step-size sequences still fulfill the Robbins-Monro conditions. Then, writing
+
+$$
+\nu^ {A} = \beta_ {n + 1} ^ {A} \eta_ {n} ^ {A} (S _ {n + 1}, \pi^ {*} (S _ {n + 1})) + (1 - \beta_ {n + 1} ^ {A}) \eta_ {n} ^ {B} (S _ {n + 1}, \pi^ {*} (S _ {n + 1}))
+$$
+
+$$
+\tilde {\nu} ^ {A} = \beta_ {n + 1} ^ {A} \tilde {\eta} _ {n} ^ {A} (S _ {n + 1}, \pi^ {*} (S _ {n + 1})) + (1 - \beta_ {n + 1} ^ {A}) \tilde {\eta} _ {n} ^ {B} (S _ {n + 1}, \pi^ {*} (S _ {n + 1}))
+$$
+
+for short, for $n\geq k$ , on $A_{k}$ we have
+
+$$
+\begin{array}{l} \ell_ {2} ^ {2} \left(\eta_ {n + 1} ^ {A} \left(s _ {n}, a _ {n}\right), \tilde {\eta} _ {n + 1} ^ {A} \left(s _ {n}, a _ {n}\right)\right) \\ = (1 - \tilde {\alpha} _ {n} ^ {A} (s _ {n}, a _ {n})) ^ {2} \| \eta_ {n} ^ {A} (s _ {n}, a _ {n}) - \tilde {\eta} _ {n} ^ {A} (s _ {n}, a _ {n}) \| _ {\ell_ {2}} ^ {2} \\ + \tilde {\alpha} _ {n} ^ {A} \left(s _ {n}, a _ {n}\right) ^ {2} \| \Pi_ {C} \left(b _ {R _ {n}, \gamma} \# \nu^ {A}\right) - \Pi_ {C} \left(b _ {R _ {n}, \gamma} \# \tilde {\nu} ^ {A}\right) \| _ {\ell_ {2}} ^ {2} \\ + (1 - \tilde {\alpha} _ {n} ^ {A} (s _ {n}, a _ {n})) \tilde {\alpha} _ {n} ^ {A} (s _ {n}, a _ {n}) 2 \langle \eta_ {n} ^ {A} (s _ {n}, a _ {n}) - \tilde {\eta} _ {n} ^ {A} (s _ {n}, a _ {n}), \Pi_ {C} (b _ {R _ {n}, \gamma} \# \nu^ {A}) - \Pi_ {C} (b _ {R _ {n}, \gamma} \# \tilde {\nu} ^ {A}) \rangle_ {\ell_ {2}}. \\ \end{array}
+$$
+
+This can be rewritten in terms of Lemma F.3 as
+
+$$
+X _ {n + 1} ^ {A} (s _ {n}, a _ {n}) = (1 - \zeta_ {n} ^ {A} (s _ {n}, a _ {n})) X _ {n} ^ {A} (s _ {n}, a _ {n}) + \zeta_ {n} ^ {A} (s _ {n}, a _ {n}) F _ {n} ^ {A} (s _ {n}, a _ {n})
+$$
+
+with $\zeta_n^A (s_n,a_n) = 2\tilde{\alpha}_n^A (s_n,a_n) - \tilde{\alpha}_n^A (s_n,a_n)^2$ and
+
+$$
+\begin{array}{l} F _ {n} ^ {A} (s _ {n}, a _ {n}) = \frac {1}{\zeta_ {n} ^ {A} (s _ {n} , a _ {n})} (\tilde {\alpha} _ {n} ^ {A} (s _ {n}, a _ {n}) ^ {2} \| \Pi_ {C} (b _ {R _ {n}, \gamma} \# \nu^ {A}) - \Pi_ {C} (b _ {R _ {n}, \gamma} \# \tilde {\nu} ^ {A}) \| _ {\ell_ {2}} ^ {2} \\ + (1 - \tilde {\alpha} _ {n} ^ {A} (s _ {n}, a _ {n})) \tilde {\alpha} _ {n} ^ {A} (s _ {n}, a _ {n}) 2 \langle \eta_ {n} ^ {A} (s _ {n}, a _ {n}) - \tilde {\eta} _ {n} ^ {A} (s _ {n}, a _ {n}), \\ \Pi_ {C} \left(b _ {R _ {n}, \gamma} \# \nu^ {A}\right) - \Pi_ {C} \left(b _ {R _ {n}, \gamma} \# \tilde {\nu} ^ {A}\right) \rangle_ {\ell_ {2}}) \\ \end{array}
+$$
+
+and $F_{n}^{A}(s,a) = 0$ if $(s,a)\neq (s_n,a_n)$. Recall that $\zeta_n^A (s_n,a_n) > 0$. Notice that
+
+$$
+\sum_ {n = 1} ^ {\infty} \zeta_ {n} ^ {A} \left(s _ {n}, a _ {n}\right) = \sum_ {n = 1} ^ {\infty} \left(2 \tilde {\alpha} _ {n} ^ {A} \left(s _ {n}, a _ {n}\right) - \tilde {\alpha} _ {n} ^ {A} \left(s _ {n}, a _ {n}\right) ^ {2}\right) = \infty \quad a. s.
+$$
+
+$$
+\sum_ {n = 1} ^ {\infty} \zeta_ {n} ^ {A} (s _ {n}, a _ {n}) ^ {2} = \sum_ {n = 1} ^ {\infty} \left(4 \tilde {\alpha} _ {n} ^ {A} (s _ {n}, a _ {n}) ^ {2} - 4 \tilde {\alpha} _ {n} ^ {A} (s _ {n}, a _ {n}) ^ {3} + \tilde {\alpha} _ {n} ^ {A} (s _ {n}, a _ {n}) ^ {4}\right) < \infty \quad a. s.
+$$
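As a sanity check, the two displays above are exactly the Robbins-Monro conditions transported from $\tilde{\alpha}_n^A$ to the effective step sizes $\zeta_n^A = 2\tilde{\alpha}_n^A - (\tilde{\alpha}_n^A)^2$. A small numerical sketch with the illustrative choice $\alpha_n = 1/n$ (an assumption for the demo, not taken from the paper):

```python
# Illustrative check (not part of the proof): with alpha_n = 1/n the
# effective step sizes zeta_n = 2*alpha_n - alpha_n**2 keep the
# Robbins-Monro behaviour: divergent sum, summable squares.
N = 100_000
zetas = [2.0 / n - 1.0 / n ** 2 for n in range(1, N + 1)]

sum_zeta = sum(zetas)                     # grows like 2*log(N)
sum_zeta_sq = sum(z * z for z in zetas)   # dominated by the sum of 4/n^2

print(f"sum zeta_n   ~ {sum_zeta:.2f}")
print(f"sum zeta_n^2 ~ {sum_zeta_sq:.2f}")
```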
+
+Finally we have
+
+$$
+\begin{array}{l} | F _ {n} ^ {A} (s _ {n}, a _ {n}) | \leq \frac {1}{\zeta_ {n} ^ {A} (s _ {n}, a _ {n})} \Big (\tilde {\alpha} _ {n} ^ {A} (s _ {n}, a _ {n}) ^ {2} \gamma \bar {\ell} _ {2} ^ {2} \big (\beta_ {n + 1} ^ {A} \eta_ {n} ^ {A} + (1 - \beta_ {n + 1} ^ {A}) \eta_ {n} ^ {B}, \beta_ {n + 1} ^ {A} \tilde {\eta} _ {n} ^ {A} + (1 - \beta_ {n + 1} ^ {A}) \tilde {\eta} _ {n} ^ {B} \big) \\ + (1 - \tilde {\alpha} _ {n} ^ {A} (s _ {n}, a _ {n})) \tilde {\alpha} _ {n} ^ {A} (s _ {n}, a _ {n}) 2 \sqrt {\gamma} \big | \big \langle \eta_ {n} ^ {A} - \tilde {\eta} _ {n} ^ {A}, \beta_ {n + 1} ^ {A} \eta_ {n} ^ {A} + (1 - \beta_ {n + 1} ^ {A}) \eta_ {n} ^ {B} - \beta_ {n + 1} ^ {A} \tilde {\eta} _ {n} ^ {A} - (1 - \beta_ {n + 1} ^ {A}) \tilde {\eta} _ {n} ^ {B} \big \rangle_ {\bar {\ell} _ {2}} \big | \Big) \\ \leq \frac {1}{\zeta_ {n} ^ {A} (s _ {n}, a _ {n})} \Big (\tilde {\alpha} _ {n} ^ {A} (s _ {n}, a _ {n}) ^ {2} \gamma \max _ {z \in \{A, B \}} \bar {\ell} _ {2} ^ {2} (\eta_ {n} ^ {z}, \tilde {\eta} _ {n} ^ {z}) + (1 - \tilde {\alpha} _ {n} ^ {A} (s _ {n}, a _ {n})) \tilde {\alpha} _ {n} ^ {A} (s _ {n}, a _ {n}) 2 \sqrt {\gamma} \max _ {z \in \{A, B \}} \bar {\ell} _ {2} ^ {2} (\eta_ {n} ^ {z}, \tilde {\eta} _ {n} ^ {z}) \Big) \\ = \frac {\tilde {\alpha} _ {n} ^ {A} (s _ {n}, a _ {n}) ^ {2} \gamma + (1 - \tilde {\alpha} _ {n} ^ {A} (s _ {n}, a _ {n})) \tilde {\alpha} _ {n} ^ {A} (s _ {n}, a _ {n}) 2 \sqrt {\gamma}}{2 \tilde {\alpha} _ {n} ^ {A} (s _ {n}, a _ {n}) - \tilde {\alpha} _ {n} ^ {A} (s _ {n}, a _ {n}) ^ {2}} \max _ {z \in \{A, B \}} \bar {\ell} _ {2} ^ {2} (\eta_ {n} ^ {z}, \tilde {\eta} _ {n} ^ {z}) \\ \leq \frac {\big (2 \tilde {\alpha} _ {n} ^ {A} (s _ {n}, a _ {n}) - \tilde {\alpha} _ {n} ^ {A} (s _ {n}, a _ {n}) ^ {2} \big) \sqrt {\gamma}}{2 \tilde {\alpha} _ {n} ^ {A} (s _ {n}, a _ {n}) - \tilde {\alpha} _ {n} ^ {A} (s _ {n}, a _ {n}) ^ {2}} \max _ {z \in \{A, B \}} \bar {\ell} _ {2} ^ {2} (\eta_ {n} ^ {z}, \tilde {\eta} _ {n} ^ {z}) \\ = \sqrt {\gamma} \max _ {z \in \{A, B \}} \bar {\ell} _ {2} ^ {2} (\eta_ {n} ^ {z}, \tilde {\eta} _ {n} ^ {z}) = \sqrt {\gamma} \| X _ {n} \| _ {\infty} \\ \end{array}
+$$
+
+where we used regularity and $1/2$ -homogeneity of the $\ell_2$ metric as described in [(Bellemare et al., 2023) Section 4.6] as well as that $\Pi_C$ is a non-expansion in $\ell_2$ and
+
+$$
+\begin{array}{l} | \langle u, \beta u + (1 - \beta) v \rangle | = \beta \langle u, u \rangle + (1 - \beta) | \langle u, v \rangle | \leq \beta \max (\| u \| ^ {2}, \| v \| ^ {2}) + (1 - \beta) \| u \| \| v \| \\ \leq \max (\| u \| ^ {2}, \| v \| ^ {2}) \\ \end{array}
+$$
+
+by the Cauchy-Schwarz inequality. Further, by the above bound, the conditional variance also fulfills
+
+$$
+\begin{array}{l} \mathbb {V} [ F _ {n} ^ {A} (s _ {n}, a _ {n}) | \tilde {\mathcal {F}} _ {n} ] = \mathbb {E} [ F _ {n} ^ {A} (s _ {n}, a _ {n}) ^ {2} | \tilde {\mathcal {F}} _ {n} ] - \mathbb {E} [ F _ {n} ^ {A} (s _ {n}, a _ {n}) | \tilde {\mathcal {F}} _ {n} ] ^ {2} \\ \leq 2 \big (\sqrt {\gamma} \max _ {z \in \{A, B \}} \bar {\ell} _ {2} ^ {2} (\eta_ {n} ^ {z}, \tilde {\eta} _ {n} ^ {z}) \big) ^ {2} \\ \leq 2 \gamma \sup_ {\eta, \eta^ {\prime} \in \mathcal {F} _ {C, m} ^ {S \times A}} \bar {\ell} _ {2} ^ {4} (\eta, \eta^ {\prime}) < \infty . \\ \end{array}
+$$
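The elementary inequality $|\langle u, \beta u + (1-\beta) v\rangle| \leq \max(\|u\|^2, \|v\|^2)$ invoked above is easy to spot-check numerically. The following is an illustrative sketch, not part of the proof:

```python
# Numerical spot-check (illustrative only) of the inequality
# |<u, beta*u + (1-beta)*v>| <= max(||u||^2, ||v||^2) for beta in [0, 1].
import random

random.seed(0)
violations = 0
for _ in range(1000):
    u = [random.uniform(-1, 1) for _ in range(5)]
    v = [random.uniform(-1, 1) for _ in range(5)]
    beta = random.random()
    inner = sum(ui * (beta * ui + (1 - beta) * vi) for ui, vi in zip(u, v))
    bound = max(sum(x * x for x in u), sum(x * x for x in v))
    if abs(inner) > bound + 1e-12:
        violations += 1
print("violations:", violations)   # 0
```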
+
+Therefore, by Lemma F.3 we obtain convergence $\bar{\ell}_2(\eta_n^A,\tilde{\eta}_n^A)\to 0$ and $\bar{\ell}_2(\eta_n^B,\tilde{\eta}_n^B)\to 0$ on $A_{k}$ . As already described above, this results in
+
+$$
+\bar {\ell} _ {2} \left(\eta_ {n} ^ {A}, \eta_ {C} ^ {*}\right) \rightarrow 0 \quad \text {and} \quad \bar {\ell} _ {2} \left(\eta_ {n} ^ {B}, \eta_ {C} ^ {*}\right) \rightarrow 0 \quad \text {almost surely}.
+$$
+
+Proof of Lemma F.4. Let the filtration be given by $\mathcal{F}_t = \sigma(\eta_0^A, \eta_0^B, s_0, a_0, \alpha_0, R_0, S_1, Y_1, \beta_1^A, \beta_1^B, \ldots, s_t, a_t, \alpha_t, Y_{t+1})$ , where $(Y_n)_{n \in \mathbb{N}}$ is an i.i.d. sequence of Bernoulli(0.5) random variables, independent of all other appearing random variables, such that $A$ is updated when $Y_{n+1} = 1$ . To clarify, abbreviating
+
+$$
+\nu^ {A} = \beta_ {t + 1} ^ {A} \eta_ {t} ^ {A} (S _ {t + 1}, A _ {t + 1}) + (1 - \beta_ {t + 1} ^ {A}) \eta_ {t} ^ {B} (S _ {t + 1}, A _ {t + 1})
+$$
+
+$$
+\nu^ {B} = \beta_ {t + 1} ^ {B} \eta_ {t} ^ {B} \left(S _ {t + 1}, A _ {t + 1}\right) + \left(1 - \beta_ {t + 1} ^ {B}\right) \eta_ {t} ^ {A} \left(S _ {t + 1}, A _ {t + 1}\right) \quad \text {where}
+$$
+
+$$
+A _ {t + 1} \sim \pi (\cdot ; S _ {t + 1}),
+$$
+
+we are confronted with the updates
+
+$$
+\eta_ {t + 1} ^ {A} (s, a) = \eta_ {t} ^ {A} (s, a) + \alpha_ {t} (s, a) \mathbf {1} _ {Y _ {t + 1} = 1} \left(\Pi_ {C} \left(b _ {R _ {t}, \gamma} \# \nu^ {A}\right) - \eta_ {t} ^ {A} (s, a)\right)
+$$
+
+$$
+\eta_ {t + 1} ^ {B} (s, a) = \eta_ {t} ^ {B} (s, a) + \alpha_ {t} (s, a) \mathbf {1} _ {Y _ {t + 1} = 0} (\Pi_ {C} (b _ {R _ {t}, \gamma} \# \nu^ {B}) - \eta_ {t} ^ {B} (s, a)).
+$$
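To make the structure of these two updates concrete, here is a minimal tabular sketch of the coin-flip scheme. Every numeric choice in it is an illustrative assumption (a single state-action pair, a three-atom support, a deterministic reward, constant step size and mixing weight); it mimics only the update pattern of the two displays above, not the paper's algorithm.

```python
# Coin-flip double update on categorical return distributions: a
# Bernoulli(0.5) draw Y decides whether table A or table B is updated;
# the target mixes both tables with weight beta (here constant).
import random

random.seed(1)
SUPPORT = [0.0, 0.5, 1.0]   # atom locations theta_1..theta_m (assumed)
GAMMA = 0.5                 # discount factor (assumed)
ALPHA = 0.1                 # constant step size (assumed)

def project(value, mass):
    """Categorical projection Pi_C: split `mass` at `value` onto the
    two neighbouring atoms of SUPPORT, clipping to the support range."""
    g = min(max(value, SUPPORT[0]), SUPPORT[-1])
    out = [0.0] * len(SUPPORT)
    for j in range(len(SUPPORT) - 1):
        lo, hi = SUPPORT[j], SUPPORT[j + 1]
        if lo <= g <= hi:
            w = (g - lo) / (hi - lo)
            out[j] += mass * (1 - w)
            out[j + 1] += mass * w
            return out
    out[-1] = mass
    return out

def target_dist(reward, mix):
    """Pi_C(b_{R,gamma} # mix): push each atom through r + gamma*z."""
    out = [0.0] * len(SUPPORT)
    for p, z in zip(mix, SUPPORT):
        for k, m in enumerate(project(reward + GAMMA * z, p)):
            out[k] += m
    return out

eta_A = [1.0, 0.0, 0.0]     # distribution estimates for one fixed (s, a)
eta_B = [1.0, 0.0, 0.0]
for _ in range(2000):
    beta, reward = 0.5, 0.25   # constant mixing weight and reward (assumed)
    mix = [beta * a + (1 - beta) * b for a, b in zip(eta_A, eta_B)]
    tgt = target_dist(reward, mix)
    if random.random() < 0.5:  # Y_{t+1} = 1: update table A
        eta_A = [(1 - ALPHA) * a + ALPHA * t for a, t in zip(eta_A, tgt)]
    else:                      # Y_{t+1} = 0: update table B
        eta_B = [(1 - ALPHA) * b + ALPHA * t for b, t in zip(eta_B, tgt)]

print(eta_A, eta_B)  # both tables drift towards the same fixed point
```

With reward 0.25 and discount 0.5, the return is deterministically 0.5, so both tables concentrate their mass on the middle atom, illustrating the common fixed point of the coupled updates.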
+
+As in the proof above, define $\tilde{\alpha}_n^A (s,a) = \alpha_n(s,a)\mathbf{1}_{Y_{n + 1} = 1}$ and $\tilde{\alpha}_n^B (s,a) = \alpha_n(s,a)\mathbf{1}_{Y_{n + 1} = 0}$ . By Lemma F.5 these step-size sequences still fulfill the Robbins-Monro conditions. Also note that as in step 2 of the proof of Theorem F.2, $Y_{t + 1}$ is $\mathcal{F}_t$ -measurable and hence so is $\tilde{\alpha}_t^{A / B}$ . In order to align this with Lemma F.3, we rewrite
+
+$$
+\begin{array}{l} X _ {t + 1} ^ {A} (s, a) = \ell_ {2} ^ {2} \left(\eta_ {t + 1} ^ {A} (s, a), \eta_ {C} (s, a)\right) \\ = (1 - \tilde {\alpha} _ {t} ^ {A} (s, a)) ^ {2} \| \eta_ {t} ^ {A} (s, a) - \eta_ {C} (s, a) \| _ {\ell_ {2}} ^ {2} \\ + \tilde {\alpha} _ {t} ^ {A} (s, a) ^ {2} \| \Pi_ {C} \left(b _ {R _ {t}, \gamma} \# \nu^ {A}\right) - \eta_ {C} (s, a) \| _ {\ell_ {2}} ^ {2} \\ + \left(1 - \tilde {\alpha} _ {t} ^ {A} (s, a)\right) \tilde {\alpha} _ {t} ^ {A} (s, a) 2 \langle \eta_ {t} ^ {A} (s, a) - \eta_ {C} (s, a), \Pi_ {C} \left(b _ {R _ {t}, \gamma} \# \nu^ {A}\right) - \eta_ {C} (s, a) \rangle_ {\ell_ {2}} \\ = (1 - \zeta_ {t} ^ {A} (s, a)) X _ {t} ^ {A} (s, a) + \zeta_ {t} ^ {A} (s, a) F _ {t} ^ {A} (s, a) \\ \end{array}
+$$
+
+with $\zeta_t^A (s,a) = 2\tilde{\alpha}_t^A (s,a) - \tilde{\alpha}_t^A (s,a)^2$ ,
+
+$$
+X _ {t} := \left[ \begin{array}{c} \ell_ {2} ^ {2} (\eta_ {t} ^ {A}, \eta_ {C}) \\ \ell_ {2} ^ {2} (\eta_ {t} ^ {B}, \eta_ {C}) \end{array} \right] \in \mathbb {R} ^ {2 | \mathcal {S} | | \mathcal {A} |}
+$$
+
+and
+
+$$
+\begin{array}{l} F _ {t} ^ {A} (s, a) = \frac {1}{\zeta_ {t} ^ {A} (s , a)} \mathbf {1} _ {\tilde {\alpha} _ {t} ^ {A} (s, a) > 0} (\tilde {\alpha} _ {t} ^ {A} (s, a) ^ {2} \ell_ {2} ^ {2} (\Pi_ {C} (b _ {R _ {t}, \gamma} \# \nu^ {A}), \eta_ {C} (s, a)) \\ + (1 - \tilde {\alpha} _ {t} ^ {A} (s, a)) \tilde {\alpha} _ {t} ^ {A} (s, a) 2 \langle \eta_ {t} ^ {A} (s, a) - \eta_ {C} (s, a), \Pi_ {C} \left(b _ {R _ {t}, \gamma} \# \nu^ {A}\right) - \eta_ {C} (s, a) \rangle_ {\ell_ {2}}). \\ \end{array}
+$$
+
+As in Equation (7), the sequence $\zeta_t^A(s, a)$ fulfills the Robbins-Monro conditions. Additionally, note that there exists $K > 0$ such that $\ell_2^2(\Pi_C(b_{R_t, \gamma} \# \nu^A), \eta_C(s, a)) < K$ independent of $s, a, t$ . Further, observe that
+
+$$
+c _ {t} := \max _ {z \in \{A, B \}} \frac {1}{\zeta_ {t} ^ {z} (s , a)} \mathbf {1} _ {\tilde {\alpha} _ {t} ^ {z} (s, a) > 0} \tilde {\alpha} _ {t} ^ {z} (s, a) ^ {2} K \rightarrow 0 \quad \text {for } t \rightarrow \infty \quad \text {almost surely}.
+$$
+
+We use that $\Pi_C$ is mean-preserving [(Lyle et al., 2019) Proposition 1] for discrete distributions supported within $[\theta_1, \theta_m]$ , which is satisfied by $b_{R_t, \gamma} \# \nu^A$ , due to Assumption (ii) and $\nu^A \in \mathcal{F}_m$ . Together with the fact that $\Pi_C \mathcal{T}^\pi$ is a $\sqrt{\gamma}$ -contraction with respect to $\bar{\ell}_2$ and the Cauchy-Schwarz inequality, we have
+
+$$
+\begin{array}{l} | \mathbb {E} [ \langle \eta_ {t} ^ {A} (s, a) - \eta_ {C} (s, a), \Pi_ {C} \left(b _ {R _ {t}, \gamma} \# \nu^ {A}\right) - \eta_ {C} (s, a) \rangle_ {\ell_ {2}} | \mathcal {F} _ {t} ] | \\ = \left| \left\langle \eta_ {t} ^ {A} (s, a) - \eta_ {C} (s, a), \mathbb {E} \left[ \Pi_ {C} \left(b _ {R _ {t}, \gamma} \# \nu^ {A}\right) \mid \mathcal {F} _ {t} \right] - \eta_ {C} (s, a) \right\rangle_ {\ell_ {2}} \right| \\ = \left| \left\langle \eta_ {t} ^ {A} (s, a) - \eta_ {C} (s, a), \Pi_ {C} \, \mathbb {E} \left[ b _ {R _ {t}, \gamma} \# \nu^ {A} \mid \mathcal {F} _ {t} \right] - \eta_ {C} (s, a) \right\rangle_ {\ell_ {2}} \right| \\ = \left| \left\langle \eta_ {t} ^ {A} (s, a) - \eta_ {C} (s, a), \Pi_ {C} \mathcal {T} ^ {\pi} \left(\beta_ {t + 1} ^ {A} \eta_ {t} ^ {A} + \left(1 - \beta_ {t + 1} ^ {A}\right) \eta_ {t} ^ {B}\right) (s, a) - \left(\Pi_ {C} \mathcal {T} ^ {\pi} \eta_ {C}\right) (s, a) \right\rangle_ {\ell_ {2}} \right| \\ \leq \sqrt {\gamma} | \langle \eta_ {t} ^ {A} - \eta_ {C}, (\beta_ {t + 1} ^ {A} \eta_ {t} ^ {A} + (1 - \beta_ {t + 1} ^ {A}) \eta_ {t} ^ {B}) - \eta_ {C} \rangle_ {\bar {\ell} _ {2}} | \\ \leq \sqrt {\gamma} \left(\beta_ {t + 1} ^ {A} \bar {\ell} _ {2} ^ {2} \left(\eta_ {t} ^ {A}, \eta_ {C}\right) + \left(1 - \beta_ {t + 1} ^ {A}\right) | \langle \eta_ {t} ^ {A} - \eta_ {C}, \eta_ {t} ^ {B} - \eta_ {C} \rangle_ {\bar {\ell} _ {2}} |\right) \\ \leq \sqrt {\gamma} \left(\beta_ {t + 1} ^ {A} \max _ {z \in \{A, B \}} \bar {\ell} _ {2} ^ {2} \left(\eta_ {t} ^ {z}, \eta_ {C}\right) + \left(1 - \beta_ {t + 1} ^ {A}\right) \| \eta_ {t} ^ {A} - \eta_ {C} \| _ {\bar {\ell} _ {2}} \| \eta_ {t} ^ {B} - \eta_ {C} \| _ {\bar {\ell} _ {2}}\right) \\ \leq \sqrt {\gamma} \max _ {z \in \{A, B \}} \bar {\ell} _ {2} ^ {2} (\eta_ {t} ^ {z}, \eta_ {C}) \\ = \sqrt {\gamma} \| X _ {t} \| _ {\infty}. \\ \end{array}
+$$
+
+In total, this yields
+
+$$
+\begin{array}{l} | \mathbb {E} [ F _ {t} ^ {A} (s, a) | \mathcal {F} _ {t} ] | \\ \leq \frac {1}{\zeta_ {t} ^ {A} (s , a)} \mathbf {1} _ {\tilde {\alpha} _ {t} ^ {A} (s, a) > 0} \tilde {\alpha} _ {t} ^ {A} (s, a) ^ {2} K + \frac {1}{\zeta_ {t} ^ {A} (s , a)} \mathbf {1} _ {\tilde {\alpha} _ {t} ^ {A} (s, a) > 0} (1 - \tilde {\alpha} _ {t} ^ {A} (s, a)) \tilde {\alpha} _ {t} ^ {A} (s, a) 2 \sqrt {\gamma} \| X _ {t} \| _ {\infty}. \\ \leq c _ {t} + \sqrt {\gamma} \| X _ {t} \| _ {\infty}. \\ \end{array}
+$$
+
+Since $\bar{\ell}_2(\eta, \eta') < K$ for every $\eta, \eta' \in \mathcal{F}_{C,m}^{S \times A}$ and some constant $K > 0$ , the conditional variance $\mathbb{V}[F_t^A | \mathcal{F}_t]$ can be bounded uniformly in $t$ .
+
+Therefore, the requirements of Lemma F.3 are fulfilled, and its application yields $X_{t}^{A}(s,a) = \ell_{2}^{2}(\eta_{t}^{A}(s,a),\eta_{C}(s,a))\to 0$ and $X_{t}^{B}(s,a) = \ell_{2}^{2}(\eta_{t}^{B}(s,a),\eta_{C}(s,a))\to 0$ . Hence, also $\eta_t^A,\eta_t^B$ converge to $\eta_C$ with respect to $\bar{\ell}_2$ .
\ No newline at end of file
diff --git a/addqadaptivedistributionaldoubleqlearning/images.zip b/addqadaptivedistributionaldoubleqlearning/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..211adf1caef469774c7d3ae717488eca987eaaa1
--- /dev/null
+++ b/addqadaptivedistributionaldoubleqlearning/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a1a497f078be36b96a3ba39d22717eadff2ca8d6c296c088454b779285d88ab6
+size 4174943
diff --git a/addqadaptivedistributionaldoubleqlearning/layout.json b/addqadaptivedistributionaldoubleqlearning/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..cfea478f082608cc58932191b44537d977588e13
--- /dev/null
+++ b/addqadaptivedistributionaldoubleqlearning/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:44a110ec45be95f0f437302705ed461d6fe2acf238fc53ed7510356870f76e9f
+size 2084351
diff --git a/adhmraligningdiffusionbasedhumanmeshrecoveryviadirectpreferenceoptimization/a763c3ea-e1d0-42c6-96b4-e51045315b61_content_list.json b/adhmraligningdiffusionbasedhumanmeshrecoveryviadirectpreferenceoptimization/a763c3ea-e1d0-42c6-96b4-e51045315b61_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..2162cb655753aff5b188884374063f725f987427
--- /dev/null
+++ b/adhmraligningdiffusionbasedhumanmeshrecoveryviadirectpreferenceoptimization/a763c3ea-e1d0-42c6-96b4-e51045315b61_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:368bd51e4890c9965713daa0dd40fcf35cf6bfb8ff0e4c98fba1c931fcc38cc7
+size 86203
diff --git a/adhmraligningdiffusionbasedhumanmeshrecoveryviadirectpreferenceoptimization/a763c3ea-e1d0-42c6-96b4-e51045315b61_model.json b/adhmraligningdiffusionbasedhumanmeshrecoveryviadirectpreferenceoptimization/a763c3ea-e1d0-42c6-96b4-e51045315b61_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..7a41bbd10b0e81fbe380e9b4f5f4a931fa1ba35f
--- /dev/null
+++ b/adhmraligningdiffusionbasedhumanmeshrecoveryviadirectpreferenceoptimization/a763c3ea-e1d0-42c6-96b4-e51045315b61_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3a1093328b04cf829a3d1064ecba055fd7031aec88c13ab8251659f4abde3143
+size 106654
diff --git a/adhmraligningdiffusionbasedhumanmeshrecoveryviadirectpreferenceoptimization/a763c3ea-e1d0-42c6-96b4-e51045315b61_origin.pdf b/adhmraligningdiffusionbasedhumanmeshrecoveryviadirectpreferenceoptimization/a763c3ea-e1d0-42c6-96b4-e51045315b61_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..c8a80263d213ea72e3f30971b31c12468998da49
--- /dev/null
+++ b/adhmraligningdiffusionbasedhumanmeshrecoveryviadirectpreferenceoptimization/a763c3ea-e1d0-42c6-96b4-e51045315b61_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:49e272a19f6b54528ae6b849fea9afe8ec83acc9c87e675ad7d07c355c829103
+size 1235189
diff --git a/adhmraligningdiffusionbasedhumanmeshrecoveryviadirectpreferenceoptimization/full.md b/adhmraligningdiffusionbasedhumanmeshrecoveryviadirectpreferenceoptimization/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..2148e1be676de64998aaaa50e491d0e0c5ea30e1
--- /dev/null
+++ b/adhmraligningdiffusionbasedhumanmeshrecoveryviadirectpreferenceoptimization/full.md
@@ -0,0 +1,290 @@
+# ADHMR: Aligning Diffusion-based Human Mesh Recovery via Direct Preference Optimization
+
+Wenhao Shen1 Wanqi Yin2 Xiaofeng Yang1 Cheng Chen1 Chaoyue Song1 Zhongang Cai2 Lei Yang2 Hao Wang3 Guosheng Lin1
+
+# Abstract
+
+Human mesh recovery (HMR) from a single image is inherently ill-posed due to depth ambiguity and occlusions. Probabilistic methods have tried to solve this by generating numerous plausible 3D human mesh predictions, but they often exhibit misalignment with 2D image observations and weak robustness to in-the-wild images. To address these issues, we propose ADHMR, a framework that Aligns a Diffusion-based HMR model in a preference optimization manner. First, we train a human mesh prediction assessment model, HMR-Scorer, capable of evaluating predictions even for in-the-wild images without 3D annotations. We then use HMR-Scorer to create a preference dataset, where each input image has a pair of winner and loser mesh predictions. This dataset is used to finetune the base model using direct preference optimization. Moreover, HMR-Scorer also helps improve existing HMR models by data cleaning, even with fewer training samples. Extensive experiments show that ADHMR outperforms current state-of-the-art methods. Code is available at: https://github.com/shenwenhao01/ADHMR.
+
+# 1. Introduction
+
+Human mesh recovery (HMR) is a fundamental challenge in computer vision, focused on estimating the 3D human shape and pose from a single RGB image. HMR enables various downstream applications, including clothed human reconstruction (Shuai et al., 2022; Hong & Shen, 2024; Yao et al., 2025), virtual try-on, AR/VR content creation (Xu et al., 2024c; Yang et al., 2024) and etc.
+
+$^{1}$ Nanyang Technological University $^{2}$ SenseTime Research $^{3}$ The Hong Kong University of Science and Technology (Guangzhou). Correspondence to: Guosheng Lin, Hao Wang.
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+Prevailing approaches usually adopt a deterministic style, generating a single prediction for each image (Cai et al., 2024a; Goel et al., 2023; Li et al., 2023; Moon et al., 2022). However, this task faces inherent uncertainty when lifting 2D observations to 3D models, due to depth ambiguity and occlusion. Accordingly, the community is now shifting to probabilistic methods. Probabilistic methods tackle the uncertainty by generating multiple plausible human mesh predictions for each image (Kolotouros et al., 2021; Sengupta et al., 2023). For instance, recent approaches (Foo et al., 2023; Cho & Kim, 2023) frame this task as a denoising diffusion process. However, these probabilistic approaches suffer from limited emphasis on obtaining accurate estimates.
+
+Specifically, the current state-of-the-art probabilistic method ScoreHypo (Xu et al., 2024b) tackles this by designing an additional network for test-time prediction selection after the diffusion-based prediction model. However, we observe that ScoreHypo still exhibits the following shortcomings: (1) misalignment between 3D mesh predictions and 2D image cues, and (2) poor performance on in-the-wild images. This is primarily because end-to-end diffusion models predicting from pure noise typically avoid 3D reprojection loss, as early denoising steps yield unrealistic poses, making such loss ineffective (Huang et al., 2024). Instead, the diffusion loss focuses on generating the target human mesh distribution rather than precisely aligning joints. While this produces plausible poses, it may neglect the alignment between the 3D mesh and the image. Moreover, existing datasets often use optimization-based HMR methods to generate pseudo 3D annotations for in-the-wild images, which inevitably contain some inaccurate or noisy data.
+
+To address the above challenges, we introduce Aligned Diffusion-based Human Mesh Recovery (ADHMR). The key insight is to distill the knowledge of a powerful scorer into the 3D human mesh predictor in a preference optimization manner. Technically, we begin by training the HMR-Scorer, essentially a reward model that assigns a quality score to quantify the human mesh prediction quality. HMR-Scorer gives higher scores to predictions better aligned with the input image ("winners") and lower scores to those poorly aligned ("losers"). In order to increase the sensitivity of
+
+HMR-Scorer to nuances in the image cues, we extract multiscale image features as conditions, which provide global and local pixel-aligned features sampled by reprojected human keypoints, enabling HMR-Scorer to identify misalignment between the predicted mesh and 2D image cues.
+
+Then we draw on the concept of direct preference optimization (DPO) for diffusion models (Wallace et al., 2024) to optimize a diffusion-based HMR base model. Traditional joint-wise or pixel-wise losses could overfit noisy labels in real-world data. Besides, a trade-off between 2D reprojection fidelity and 3D accuracy exists due to imprecise camera estimation (Dwivedi et al., 2024). In contrast, DPO focuses on the relative prediction quality, being more robust to imperfect data. However, DPO requires an annotated preference dataset, which is costly to obtain (Rafailov et al., 2024). To this end, we employ HMR-Scorer to evaluate and rank the predictions generated by the base model, resulting in a preference dataset composed of (winner, loser) prediction pairs. Guided by this synthetic dataset, ADHMR refines the HMR base model towards producing human pose predictions that are both more plausible and more closely aligned with 2D image cues. Moreover, ADHMR improves its robustness by finetuning on in-the-wild images without the need for pseudo labels.
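The pair-construction step described in this paragraph can be sketched as follows; the names (`build_preference_pairs`, `score_fn`) and data layout are illustrative assumptions, not the paper's API.

```python
# For each image, rank the base model's candidate predictions by the
# scorer and keep the best/worst candidates as the (winner, loser) pair.
def build_preference_pairs(candidates_per_image, score_fn):
    """candidates_per_image: {image_id: [prediction, ...]};
    score_fn(image_id, prediction) -> float (higher = better)."""
    pairs = {}
    for image_id, preds in candidates_per_image.items():
        ranked = sorted(preds, key=lambda p: score_fn(image_id, p))
        pairs[image_id] = (ranked[-1], ranked[0])   # (winner, loser)
    return pairs

# Toy usage with a fake scorer that prefers smaller "err" fields.
cands = {"img0": [{"err": 3.0}, {"err": 1.0}, {"err": 2.0}]}
pairs = build_preference_pairs(cands, lambda _i, p: -p["err"])
print(pairs["img0"])   # ({'err': 1.0}, {'err': 3.0})
```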
+
+Notably, HMR-Scorer can also be leveraged to improve the performance of state-of-the-art HMR models through data cleaning. Many models (Yin et al., 2025; Pang et al., 2024; Sun et al., 2024) incorporate in-the-wild datasets for training to enhance their generalizability. However, as mentioned earlier, the 3D pseudo-labels in these datasets are often unreliable. Prior work relies on expensive manual curation (Lassner et al., 2017) or rigid reprojection-error filtering (Kolotouros et al., 2019) to combat these issues. Instead, we propose to conduct a fully automated data cleaning process to build higher-quality training datasets. We sort the pseudo-labeled images in a dataset based on their scores given by HMR-Scorer and only retain samples with scores above a certain threshold. By filtering out poorly annotated data, we reduce the influence of noisy annotations and boost model performance.
+
+Comprehensive experimental results demonstrate the effectiveness of our approach. The main contributions are summarized below:
+
+- We propose ADHMR, a novel framework for improving existing diffusion-based HMR models by adapting human preference optimization methods to an unlabeled setting, thus outperforming existing state-of-the-art probabilistic HMR methods.
+- We introduce HMR-Scorer, a robust reward model that effectively quantifies the alignment between human mesh predictions and corresponding input images.
+
+- We show that using HMR-Scorer for data cleaning boosts the performance of state-of-the-art HMR models, even when trained on fewer data.
+
+# 2. Related work
+
+# 2.1. Human Mesh Recovery from a Single Image
+
+Current HMR approaches can be broadly categorized into two paradigms: deterministic and probabilistic. Deterministic approaches (Goel et al., 2023; Cai et al., 2024a; Moon et al., 2022; Shen et al., 2024; Yin et al., 2024) produce a single estimate for each input. Owing to intrinsic reconstruction ambiguities, probabilistic approaches instead focus on generating multiple plausible hypotheses or capturing probabilistic distributions. ProHMR (Kolotouros et al., 2021) leverages a conditional normalizing flow to model a conditional probability distribution. Fang et al. (2023) propose learning probability distributions over human joint rotations by utilizing a learned analytical posterior probability. EgoHMR (Zhang et al., 2023) proposes a 3D scene-conditioned diffusion approach for reconstructing human meshes from egocentric views. ScoreHypo (Xu et al., 2024b) uses a diffusion-based generator to produce a diverse set of plausible estimates, and a separate network is employed to choose from these estimates. Despite their effectiveness, they often require generating numerous candidate poses for selection or averaging. In contrast, we employ direct preference optimization to improve the performance of the prediction model directly.
+
+# 2.2. Human Preference Optimization
+
+The initial efforts to learn from human preferences originated in training agents (Christiano et al., 2017; Ibarz et al., 2018), later expanding to incorporate reinforcement learning from human feedback (RLHF) for enhancing tasks like translation (Kreutzer et al., 2018) and summarization (Stiennon et al., 2020; Ziegler et al., 2019). These methods first train a reward model to align with human preferences and then finetune a language model to maximize this reward using reinforcement learning techniques such as PPO (Schulman et al., 2017). Several solutions have been proposed to simplify this complex pipeline: HIVE (Zhang et al., 2024) uses offline reinforcement learning to align instruction editing. Direct Preference Optimization (DPO) (Rafailov et al., 2024) directly optimizes the model using a supervised classification objective on preference data. This approach is now being increasingly adopted across other domains. For instance, ImageReward (Xu et al., 2024a) and Lee et al. (2023) apply RLHF to text-to-image synthesis models; DreamReward (Ye et al., 2024) and CADCrafter (Chen et al., 2025) use RLHF for text-to-3D generation. Diffusion-DPO (Wallace et al., 2024) adapts the DPO objective for Diffusion Models, improving the performance of models like Stable
+
+
+Figure 1. Overview of ADHMR. We aim to finetune a probabilistic HMR base model that generates multiple human mesh predictions conditioned on the input image. We first train the HMR-Scorer that assesses the reconstruction quality given an image and corresponding human mesh predictions. The reconstruction quality annotations $Q^{*}$ are computed using standard HMR metrics, including PVE $Q^{pve}$ , MPJPE $Q^{mpjpe}$ , PA-MPJPE $Q^{pajpe}$ , and PA-PVE $Q^{papve}$ . Next, we construct a synthetic human preference dataset, where each sample is a (winner, loser) prediction pair rated by the HMR-Scorer. Finally, ADHMR uses this synthetic human preference dataset to finetune the base model to preferentially generate predictions that are more plausible and better aligned with the image cues.
+
+Diffusion for enhanced visual appeal and textual coherence. Our method is inspired by Diffusion-DPO but differs in its implementation. Rather than depending on curated manually labeled human feedback datasets, we devise a method to automatically generate a human preference dataset using HMR-Scorer, offering greater flexibility for our scenario.
+
+# 3. Preliminary
+
+# 3.1. HMR Evaluation Metrics
+
+We use four standard metrics for HMR: Mean Per Vertex Position Error (PVE) and Mean Per Joint Position Error (MPJPE), along with their Procrustes-aligned variants (PA-PVE and PA-MPJPE). All metrics compute the average distance (in mm) between predicted and ground-truth positions, with the pelvis joint aligned as the reference point. We apply the joint regressor of SMPL(-X) to the predicted mesh to obtain 3D joint coordinates.
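A hedged sketch of these metrics might look as follows: MPJPE and PA-MPJPE are shown on joints, and PVE/PA-PVE are the same computations applied to mesh vertices. The 29-keypoint count and pelvis index are illustrative assumptions.

```python
# Pelvis-aligned joint error (MPJPE) and its Procrustes-aligned variant
# (PA-MPJPE), following the standard similarity-alignment recipe.
import numpy as np

def mpjpe(pred, gt, pelvis=0):
    """Mean per-joint error after aligning the pelvis joint."""
    pred = pred - pred[pelvis]
    gt = gt - gt[pelvis]
    return np.linalg.norm(pred - gt, axis=1).mean()

def pa_mpjpe(pred, gt):
    """MPJPE after similarity (Procrustes) alignment of pred onto gt."""
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    p, g = pred - mu_p, gt - mu_g
    U, S, Vt = np.linalg.svd(p.T @ g)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        S[-1] *= -1
        R = (U @ Vt).T
    scale = S.sum() / (p ** 2).sum()
    aligned = scale * p @ R.T + mu_g
    return np.linalg.norm(aligned - gt, axis=1).mean()

# A rigidly rotated prediction has a large MPJPE but near-zero PA-MPJPE.
rng = np.random.default_rng(0)
gt = rng.normal(size=(29, 3))         # 29 hypothetical keypoints
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
pred = gt @ Rz.T
err = mpjpe(pred, gt)
pa_err = pa_mpjpe(pred, gt)
print(err, pa_err)                    # first > 0, second ~ 0
```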
+
+# 3.2. Diffusion-based HMR Base Model
+
+Modeling HMR as a reverse diffusion process over noisy samples $\{\mathbf{x}_t\}_{t=0}^T$ , the base model HypoNet (Xu et al., 2024b) $\epsilon_{\mathrm{ref}}$ is a denoiser that progressively removes random pose noise based on the input image $I$ to reconstruct the human mesh. $T$ is the total number of timesteps. This process is formulated as:
+
+$$
+p _ {\theta} \left(\mathbf {x} _ {t - 1} \mid \mathbf {x} _ {t}\right) = \mathcal {N} \left(\mathbf {x} _ {t - 1}; \epsilon_ {\text {r e f}} \left(\mathbf {x} _ {t}, t, I\right)\right), \tag {1}
+$$
+
+where the mean of the reverse-process distribution $p_{\theta}$ is predicted by the denoiser $\epsilon_{\text{ref}}$ .
+
+Specifically, the base model follows (Li et al., 2021) by breaking down the SMPL (Loper et al., 2015) pose parameters into two components: swing, derived from 3D joint positions, and twist, representing rotational details for each body part. These two elements are combined into a single data sample, which is then processed through a forward diffusion step to gradually add noise. The noisy samples are mapped to a high-dimensional feature space using a multilayer perceptron. To guide the denoising process, the model incorporates image features extracted through a convolutional neural network backbone. These preprocessed image features are concatenated and fed into a transformer-encoder (Vaswani, 2017) based network. The transformer integrates global image context through a cross-attention mechanism, aligning the denoising process with the input image. Finally, the network reconstructs the human pose by removing the added noise. The human shape parameters are estimated by the convolutional backbone.
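The reverse process of Equation (1) can be sketched as follows. The denoiser here is a hypothetical stand-in for $\epsilon_{\text{ref}}$ (it simply contracts the sample towards a fixed target pose), and every name and constant is an illustrative assumption.

```python
# Starting from Gaussian noise x_T, the denoiser predicts the mean of
# each x_{t-1} conditioned on the image; sampling adds small noise at
# every step except the last.
import random

random.seed(0)

def reverse_diffusion(denoiser, image_feat, dim, T=10, sigma=0.1):
    x = [random.gauss(0.0, 1.0) for _ in range(dim)]      # x_T ~ N(0, I)
    for t in range(T, 0, -1):
        mean = denoiser(x, t, image_feat)
        noise = sigma if t > 1 else 0.0                   # last step: deterministic
        x = [m + random.gauss(0.0, noise) for m in mean]  # x_{t-1} ~ N(mean, noise^2 I)
    return x

# Toy denoiser: contracts the sample halfway towards an image-dependent pose.
target = [0.5, -0.2, 0.1]
toy = lambda x, t, img: [xi + 0.5 * (ti - xi) for xi, ti in zip(x, target)]
pose = reverse_diffusion(toy, image_feat=None, dim=3)
print(pose)   # close to `target`
```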
+
+# 4. Method
+
+# 4.1. Overview
+
+An overview of ADHMR is presented in Figure 1. Given an input image $I$ , we aim to reconstruct the 3D human mesh in a parameterized way, which is to predict the pose parameters $\theta \in \mathbb{R}^{24 \times 3}$ and shape parameters $\beta \in \mathbb{R}^{10}$ of the predefined SMPL model (Pavlakos et al., 2019). We formulate this problem as a generation process conditioned on the input image to tackle the inherent reconstruction ambiguity.
+
+We begin by training a diffusion-based HMR base model (Sec. 3.2). Next, we construct a synthetic human preference dataset (Sec. 4.3), where candidate human mesh predictions are generated by the base model and then paired
+
+based on scores provided by the assessment model HMR-Scorer (Sec. 4.2). To distill knowledge from this synthetic preference dataset, we propose a preference optimization framework ADHMR that finetunes the base model to preferentially generate winner predictions over losers (Sec. 4.4). Furthermore, thanks to the strong capacity of HMR-Scorer to assess mesh predictions, we filter training data to enhance several popular HMR models (Sec. 4.5).
+
+# 4.2. HMR-Scorer
+
+Given a set of predictions $\{\mathbf{P}_m = (\theta, \beta, \Pi)_m\}_{m=0}^M$ for an input image $I$ , HMR-Scorer aims to assign an estimated quality score $\{s_m \in \mathbb{R}\}_{m=0}^M$ to each prediction. $M$ represents the number of predictions, and $\Pi$ is the predicted camera parameters. Higher scores should be assigned to predictions of higher quality that are better aligned with the image.
+
+Model architecture. We first introduce the input features into HMR-Scorer. Instead of directly encoding pose parameters, which are prone to ambiguities in representing joint positions and deficient in spatial context, we leverage UVD coordinates as the input to HMR-Scorer. This provides a unified and consistent representation of the 3D human skeleton and preserves the geometric relationships of the skeleton. Specifically, using the camera model, we first project human body keypoints to the input image space to get their UVD coordinates $\mathbf{J}_{uvd} \in \mathbb{R}^{N \times 3}$ , where $N = 29$ is the number of keypoints. We use a multilayer perceptron (MLP) to map $\mathbf{J}_{uvd}$ to a high-dimensional feature vector $\mathbf{F}^J \in \mathbb{R}^{C^l \times N}$ .
+
+We utilize multi-scale image features as the image condition, denoted as $\mathbf{c} \coloneqq \{\mathbf{F}^g, \mathbf{F}^l\}$ . The input image $I$ is initially divided into fixed-size patches through a patch embedding mechanism, producing a sequence of image tokens. These tokens are subsequently processed using a ViT-Base model (Dosovitskiy et al., 2021) to generate a series of global image feature tokens, denoted as $\mathbf{F}^g \in \mathbb{R}^{C^g \times H^g \times W^g}$ . The global image feature tokens are then passed through a convolutional network to derive the low-channel global features, represented as $\mathbf{F}^g \in \mathbb{R}^{C^l \times H^g \times W^g}$ . A de-convolution head is deployed on the global feature $\mathbf{F}^g$ to obtain the high-resolution local feature map $\mathbf{F}^l \in \mathbb{R}^{C^l \times H^l \times W^l}$ . $C^*$ and $H^* \times W^*$ denote the feature channel and size, respectively. We sample the local feature $\mathbf{F}^l$ according to the re-projected 2D joint positions $\mathbf{J}_{uv}$ and obtain pixel-aligned local image features $\mathbf{F}^L \in \mathbb{R}^{C^l \times N}$ for each joint.
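The pixel-aligned sampling step can be sketched with plain bilinear interpolation; shapes and names are assumptions for illustration (a real model would typically use a batched GPU grid-sampling op).

```python
# Bilinearly read the local feature map F^l at each re-projected 2D joint
# location, yielding one C^l-dimensional feature per joint.
import numpy as np

def sample_pixel_aligned(feat, joints_uv):
    """feat: (C, H, W) feature map; joints_uv: (N, 2) pixel coordinates."""
    C, H, W = feat.shape
    out = np.zeros((C, len(joints_uv)))
    for k, (u, v) in enumerate(joints_uv):
        u = min(max(u, 0.0), W - 1.0)            # clamp to the map
        v = min(max(v, 0.0), H - 1.0)
        x0, y0 = int(u), int(v)
        x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
        du, dv = u - x0, v - y0
        out[:, k] = ((1 - du) * (1 - dv) * feat[:, y0, x0]
                     + du * (1 - dv) * feat[:, y0, x1]
                     + (1 - du) * dv * feat[:, y1, x0]
                     + du * dv * feat[:, y1, x1])
    return out

feat = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)
joints = np.array([[1.5, 2.5], [0.0, 0.0]])      # two toy joint locations
sampled = sample_pixel_aligned(feat, joints)
print(sampled.shape)   # (2, 2)
```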
+
+The concatenated features of $\mathbf{F}^J$ and $\mathbf{F}^L$ are subsequently fed into a transformer-encoder-based network comprising $B$ fundamental blocks. Each block integrates a multi-head self-attention (MHSA) mechanism, a cross-attention (CA) unit, and a feed-forward network (FFN). Within the CA unit, the global image feature $\mathbf{F}^g$ serves as the key and value features, while the query feature is derived from the output of the preceding MHSA unit. Through the cross-attention mechanism, the geometric information from the human mesh predictions is effectively aligned with image features, ensuring a coherent integration of structural and visual cues. Finally, a decoder network, implemented as an MLP, estimates the score $s$ .
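+The pixel-aligned sampling of $\mathbf{F}^l$ at the re-projected joint positions amounts to bilinear interpolation of a feature map. A minimal pure-Python sketch (in practice a batched tensor op such as grid sampling would be used; the helper name and nested-list layout are illustrative):
+
+```python
+# Hypothetical sketch of pixel-aligned feature sampling: bilinearly
+# interpolate a local feature map (shape C x H x W, as nested lists)
+# at each re-projected 2D joint position (u, v).
+def sample_pixel_aligned(feature_map, joints_uv):
+    C = len(feature_map)
+    H, W = len(feature_map[0]), len(feature_map[0][0])
+    sampled = []
+    for u, v in joints_uv:
+        x0, y0 = int(u), int(v)                      # top-left neighbor
+        x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
+        dx, dy = u - x0, v - y0                      # fractional offsets
+        feat = []
+        for c in range(C):
+            f = feature_map[c]
+            top = f[y0][x0] * (1 - dx) + f[y0][x1] * dx
+            bot = f[y1][x0] * (1 - dx) + f[y1][x1] * dx
+            feat.append(top * (1 - dy) + bot * dy)
+        sampled.append(feat)                         # one C-dim feature per joint
+    return sampled
+```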
+
+Training. We construct a training dataset comprising human mesh predictions for corresponding images, together with their quality labels, to train HMR-Scorer. Specifically, predictions are generated by adding joint-wise Gaussian noise to the ground truth SMPL pose to simulate rotational errors, with magnitudes determined empirically. The reconstruction quality labels are annotated using standard HMR metrics, including PVE $Q^{pve}$ , MPJPE $Q^{mpjpe}$ , PA-MPJPE $Q^{pajpe}$ , and PA-PVE $Q^{papve}$ ; details of these metrics are provided in Sec. 3.1. To accurately capture subtle quality differences, the training process learns relative quality preferences among predictions. Inspired by RankNet (Burges et al., 2005), we utilize a probabilistic ranking cost function:
+
+$$
+\mathcal{L}_{mn}\left(s_{mn}, y_{mn}\right) := -\, y_{mn} \log s_{mn} - \left(1 - y_{mn}\right) \log\left(1 - s_{mn}\right), \tag{2}
+$$
+
+where $s_{mn} = \mathrm{Sigmoid}(s_m - s_n)$ is the predicted probability that prediction $\mathbf{P}_m$ is of higher quality than $\mathbf{P}_n$ , and $y_{mn}$ is the ground truth preference label derived from each of the four HMR metrics mentioned above. For instance, for the PVE benchmark, the label is $y_{mn}^{pve}$ ( $y_{mn}^{pve} = 1$ if $Q_m^{pve} < Q_n^{pve}$ , and 0 otherwise). The overall training loss for HMR-Scorer is defined as follows:
+
+$$
+\mathcal{L}^{\text{HMR-Scorer}} = \sum_{\substack{m, n = 0 \\ n \neq m}}^{M} \left(\mathcal{L}_{mn}^{pve} + \mathcal{L}_{mn}^{papve} + \mathcal{L}_{mn}^{pajpe} + \mathcal{L}_{mn}^{mpjpe}\right). \tag{3}
+$$
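+Equations (2) and (3) reduce to a standard pairwise binary cross-entropy on score differences. A minimal sketch, assuming scalar scores:
+
+```python
+import math
+
+# Minimal sketch of the pairwise ranking loss (Eq. 2): s_m and s_n are
+# predicted scores; y_mn is 1 when prediction m has the lower ground
+# truth error (i.e., is preferred), else 0.
+def ranking_loss(s_m, s_n, y_mn):
+    s_mn = 1.0 / (1.0 + math.exp(-(s_m - s_n)))  # Sigmoid(s_m - s_n)
+    return -y_mn * math.log(s_mn) - (1 - y_mn) * math.log(1 - s_mn)
+```
+
+The loss shrinks as the score gap agrees with the preference label, so the scorer only needs to get relative orderings right rather than absolute error magnitudes.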
+
+# 4.3. HMR Preference Dataset Construction
+
+We leverage preference-based optimization rather than traditional supervised training to finetune the base HMR model. However, traditional preference optimization methods require preference datasets labeled by human annotators, and no such dataset currently exists for this field.
+
+To this end, we propose to use HMR-Scorer to synthesize an HMR preference dataset $\mathcal{D} = \left\{\left(I,\mathbf{x}_0^w,\mathbf{x}_0^l\right)\right\}$ , where each sample contains the input image $I$ and a pair of human mesh predictions generated from the HMR base model $\epsilon_{\mathrm{ref}}$ . Specifically, given a set of predictions $\{\mathbf{P}_m\}_{m=0}^M$ , HMR-Scorer assigns scores $\{s_m \in \mathbb{R}\}_{m=0}^M$ to these predictions. The predictions are then ranked by score from highest to lowest fidelity, emulating human preference over human mesh reconstructions.
+
+| Method | GTA-Human PVE (PLCC↑ / SRCC↑) | GTA-Human MPJPE (PLCC↑ / SRCC↑) | GTA-Human PA-MPJPE (PLCC↑ / SRCC↑) | DNA-Rendering PVE (PLCC↑ / SRCC↑) | DNA-Rendering MPJPE (PLCC↑ / SRCC↑) | DNA-Rendering PA-MPJPE (PLCC↑ / SRCC↑) |
+| --- | --- | --- | --- | --- | --- |
+| ScoreNet (Xu et al., 2024b) | 0.52 / 0.49 | 0.52 / 0.50 | 0.47 / 0.43 | 0.55 / 0.51 | 0.55 / 0.50 | 0.50 / 0.46 |
+| HMR-Scorer-P | 0.30 / 0.28 | 0.28 / 0.26 | 0.29 / 0.25 | 0.34 / 0.31 | 0.32 / 0.29 | 0.34 / 0.27 |
+| HMR-Scorer-2D | 0.59 / 0.58 | 0.59 / 0.58 | 0.50 / 0.49 | 0.62 / 0.60 | 0.63 / 0.61 | 0.56 / 0.53 |
+| HMR-Scorer (Ours) | 0.63 / 0.62 | 0.63 / 0.62 | 0.57 / 0.54 | 0.66 / 0.64 | 0.66 / 0.64 | 0.62 / 0.58 |
+
+Table 1. Score prediction results on the GTA-Human (Cai et al., 2024b) and DNA-Rendering (Cheng et al., 2023) datasets. We report the PLCC and SRCC between the predicted scores and the PVE, MPJPE, and PA-MPJPE ground truth errors, respectively.
+
+This hierarchical organization facilitates the extraction of paired samples $\left(\mathbf{P}^w, \mathbf{P}^l\right)$ , denoting superior (winner) and inferior (loser) predictions respectively, whose associated scores satisfy the preference relation $\left(\mathbf{P}^w \succ \mathbf{P}^l \mid I\right)$ . The pairing process stochastically selects winners from the $K$ highest-scoring predictions and losers from the $K$ lowest-scoring predictions, generating $K$ distinct pairs per image. For studio-captured datasets with precise human mesh annotations, the prediction quality ordering is instead determined directly by computing the reconstruction error against the ground truth labels.
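+The pairing procedure can be sketched as follows (a hypothetical helper; the paper's exact sampling scheme may differ in details such as pair deduplication):
+
+```python
+import random
+
+# Hypothetical sketch of preference-pair construction: rank predictions
+# by their HMR-Scorer scores, then pair winners drawn from the top-K
+# with losers drawn from the bottom-K (K pairs per image).
+def build_preference_pairs(predictions, scores, k, rng=random):
+    order = sorted(range(len(predictions)), key=lambda i: scores[i], reverse=True)
+    top = [predictions[i] for i in order[:k]]      # candidate winners
+    bottom = [predictions[i] for i in order[-k:]]  # candidate losers
+    return [(rng.choice(top), rng.choice(bottom)) for _ in range(k)]
+```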
+
+# 4.4. ADHMR
+
+ADHMR aligns the base model $\epsilon_{\mathrm{ref}}$ using the constructed preference dataset $\mathcal{D} = \left\{\left(I,\mathbf{x}_0^w,\mathbf{x}_0^l\right)\right\}$ to produce superior predictions. The aligned model $\epsilon_{\theta}$ is initialized with the parameters of the base model $\epsilon_{\mathrm{ref}}$ , whose parameters remain frozen throughout training. The proposed optimization framework extends direct preference optimization (DPO). The principle of DPO lies in directly optimizing the conditional distribution $\epsilon_{\theta}\left(\mathbf{x}_{0} \mid \mathbf{c}\right)$ , in contrast to RLHF, which first fits a reward model $r\left(\mathbf{c},\mathbf{x}_0\right)$ and then optimizes against it, while constraining the KL-divergence from a reference distribution $\epsilon_{\mathrm{ref}}$ :
+
+$$
+\max_{\epsilon_\theta} \; \mathbb{E}_{\mathbf{c} \sim \mathcal{D}_c,\, \mathbf{x}_0 \sim \epsilon_\theta(\mathbf{x}_0 \mid \mathbf{c})} \left[ r\left(\mathbf{c}, \mathbf{x}_0\right) \right] - \beta\, \mathbb{D}_{\mathrm{KL}} \left[ \epsilon_\theta\left(\mathbf{x}_0 \mid \mathbf{c}\right) \,\|\, \epsilon_{\mathrm{ref}}\left(\mathbf{x}_0 \mid \mathbf{c}\right) \right]. \tag{4}
+$$
+
+Following (Wallace et al., 2024), a significant challenge in applying DPO to diffusion models is the intractability of the parameterized distribution $\epsilon_{\theta}(\mathbf{x}_0 \mid \mathbf{c})$ , which stems from the need to marginalize over all possible diffusion trajectories $(\mathbf{x}_1,\dots ,\mathbf{x}_T)$ that culminate in $\mathbf{x}_0$ . This challenge is addressed by reformulating the objective to operate on the complete denoising trajectory $\mathbf{x}_{0:T}$ :
+
+$$
+\begin{aligned}
+\mathcal{L}^{\mathrm{DDPO}}(\theta) = -\, \mathbb{E}_{(\mathbf{x}_0^w, \mathbf{x}_0^l) \sim \mathcal{D},\, t \sim \mathcal{U}(0, T),\, \mathbf{x}_t^w \sim q(\mathbf{x}_t^w \mid \mathbf{x}_0^w),\, \mathbf{x}_t^l \sim q(\mathbf{x}_t^l \mid \mathbf{x}_0^l)} \log \sigma \Big( -\beta T \omega(\lambda_t) \big( & \left\| \boldsymbol{\epsilon}^w - \boldsymbol{\epsilon}_\theta\left(\mathbf{x}_t^w, t\right) \right\|_2^2 - \left\| \boldsymbol{\epsilon}^w - \boldsymbol{\epsilon}_{\mathrm{ref}}\left(\mathbf{x}_t^w, t\right) \right\|_2^2 \\
+& - \left( \left\| \boldsymbol{\epsilon}^l - \boldsymbol{\epsilon}_\theta\left(\mathbf{x}_t^l, t\right) \right\|_2^2 - \left\| \boldsymbol{\epsilon}^l - \boldsymbol{\epsilon}_{\mathrm{ref}}\left(\mathbf{x}_t^l, t\right) \right\|_2^2 \right) \big) \Big), \tag{5}
+\end{aligned}
+$$
+
+where $\mathbf{x}_t^* = \alpha_t\mathbf{x}_0^* + \sigma_t\boldsymbol{\epsilon}^*$ is drawn from $q\left(\mathbf{x}_t^*\mid \mathbf{x}_0^*\right)$ with $\boldsymbol{\epsilon}^{*}\sim \mathcal{N}(0,\mathbf{I})$ . Here, $\lambda_{t} = \alpha_{t}^{2} / \sigma_{t}^{2}$ denotes the signal-to-noise ratio, $\omega (\lambda_t)$ is a weighting function, and the constant $T$ is factored into $\beta$ .
+
+During training, the model improves by comparing points along the diffusion trajectory with examples from the synthetic preference dataset. This helps the model better denoise winner mesh predictions compared to losers, as evaluated by HMR-Scorer. Hence, this methodology guides the model to generate human mesh predictions that not only align closely with the input image but also adhere to a realistic distribution of human poses. By exclusively finetuning the denoiser component within the latent space, the approach achieves more generalized and well-aligned results without overfitting to noisy labels, especially for in-the-wild datasets.
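+For intuition, the per-sample objective in Eq. (5) can be sketched with the squared denoising errors precomputed as scalars. Here `beta_T` and `w_t` stand in for $\beta T$ and $\omega(\lambda_t)$; this is an illustrative reduction, not the actual training code:
+
+```python
+import math
+
+# Scalar sketch of the Diffusion-DPO objective (Eq. 5): each err_* is a
+# precomputed squared denoising error ||eps - eps_model(x_t, t)||^2.
+def ddpo_loss(err_w_theta, err_w_ref, err_l_theta, err_l_ref, beta_T, w_t):
+    inner = (err_w_theta - err_w_ref) - (err_l_theta - err_l_ref)
+    # -log sigmoid(-beta_T * w_t * inner): the loss is small when the
+    # finetuned model denoises the winner better, relative to the frozen
+    # reference model, than it denoises the loser.
+    return -math.log(1.0 / (1.0 + math.exp(beta_T * w_t * inner)))
+```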
+
+# 4.5. Data Cleaning
+
+To further evaluate the efficacy of our trained HMR-Scorer, we propose to apply it to the training data cleaning process, aiming to determine whether the proposed scorer can effectively identify noisy data. While many indoor datasets have ground truth labels, in-the-wild datasets often rely on noisy pseudo labels, which hinders model training and generalization. Therefore, we use HMR-Scorer to remove low-quality samples, ensuring a reliable training dataset.
+
+The data cleaning process begins with score computation, where a quality score $s_i \in [0,1]$ is assigned to each sample $(I_i, \hat{\theta}_i, \hat{\beta}_i, \hat{\Pi}_i)$ in the dataset $\mathcal{X}$ using HMR-Scorer. The score evaluates the alignment of predictions with the input image and the plausibility of the model parameters. Next, a threshold $\tau$ is applied to filter out low-quality samples, resulting in the cleaned dataset $\mathcal{X}_{\mathrm{clean}} = \{(I_i, \hat{\theta}_i, \hat{\beta}_i, \hat{\Pi}_i) | s_i \geq \tau\}$ . Finally, only high-confidence pseudo-labels are retained for model training.
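+The filtering step is a simple threshold over the per-sample scores; a minimal sketch:
+
+```python
+# Sketch of the data-cleaning filter: keep a sample only if its
+# HMR-Scorer quality score meets the threshold tau (tau = 0.6 in Sec. 5.4).
+def clean_dataset(samples, scores, tau=0.6):
+    return [s for s, score in zip(samples, scores) if score >= tau]
+```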
+
+# 5. Experiments
+
+# 5.1. Setup
+
+Training. HMR-Scorer is trained on five datasets, including HI4D (Yin et al., 2023), BEDLAM (Black et al., 2023), DNA-Rendering (Cheng et al., 2023), GTA-Human (Cai
+
+| Method | M | 3DPW PVE ↓ | 3DPW MPJPE ↓ | 3DPW PA-MPJPE ↓ | Human3.6M PVE ↓ | Human3.6M MPJPE ↓ | Human3.6M PA-MPJPE ↓ |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| HMR (Kanazawa et al., 2018) | - | 152.7 | 130.0 | 81.3 | 96.1 | 88.0 | 56.8 |
+| HybrIK (Li et al., 2021) | - | 86.5 | 74.1 | 45.0 | 65.7 | 54.4 | 34.5 |
+| PyMaf (Zhang et al., 2021) | - | 110.1 | 92.8 | 58.9 | - | 57.7 | 40.5 |
+| POTTER (Zheng et al., 2023) | - | 87.4 | 75.0 | 44.8 | - | 56.5 | 35.1 |
+| ImpHMR (Cho et al., 2023) | - | 87.1 | 74.3 | 45.4 | - | - | - |
+| Zolly (Wang et al., 2023) | - | 76.3 | 65.0 | 39.8 | - | 49.4 | 32.3 |
+| HMR 2.0a (Goel et al., 2023) | - | - | 70.0 | 44.5 | - | 44.8 | 33.6 |
+| HMR 2.0b (Goel et al., 2023) | - | - | 81.3 | 54.3 | - | 50.0 | 32.4 |
+| ScoreHMR (Stathopoulos et al., 2024) | - | - | 76.8 | 51.1 | - | - | - |
+| Biggs et al. (Biggs et al., 2020a) | 10 | - | 79.4 | 56.6 | - | 59.2 | 42.2 |
+| | 25 | - | 75.8 | 55.6 | - | 58.2 | 42.2 |
+| Sengupta et al. (Sengupta et al., 2021) | 25 | - | 75.1 | 47.0 | - | - | - |
+| ProHMR (Kolotouros et al., 2021) | 25 | - | - | 52.4 | - | - | 36.8 |
+| HuManiFlow (Sengupta et al., 2023) | 100 | - | 65.1 | 39.9 | - | - | - |
+| HMDiff (Foo et al., 2023) | 25 | 82.4 | 72.7 | 44.5 | - | 49.3 | 32.4 |
+| HypoNet (Base Model) | 10 | 79.8 | 68.5 | 41.0 | 52.5 | 42.4 | 29.0 |
+| | 100 | 73.4 | 63.0 | 37.6 | 47.5 | 38.4 | 26.0 |
+| | 200 | 71.9 | 61.8 | 36.1 | 46.4 | 37.4 | 25.3 |
+| ADHMR | 10 | 73.8 | 64.2 | 38.3 | 52.1 | 41.8 | 28.4 |
+| | 100 | 65.4 | 57.2 | 33.5 | 45.9 | 36.9 | 24.8 |
+| | 200 | 63.5 | 55.7 | 32.5 | 44.6 | 35.8 | 24.1 |
+| ADHMR (ITW) | 10 | 71.3 | 61.3 | 37.1 | 52.2 | 41.9 | 28.3 |
+| | 100 | 62.6 | 54.2 | 32.0 | 45.9 | 37.0 | 24.8 |
+| | 200 | 60.5 | 52.6 | 30.8 | 44.6 | 35.9 | 24.0 |
+
+et al., 2024b), and SPEC (Kocabas et al., 2021). These datasets contain accurate 3D annotations of human pose, which is important for training an effective scorer. The HMR base model is the current state-of-the-art probabilistic HMR method, HypoNet from (Xu et al., 2024b).
+
+Evaluation metrics. We use four standard human mesh reconstruction metrics: PVE, MPJPE, PA-PVE, and PA-MPJPE, as detailed in Sec. 3.1. To evaluate the scorer, we follow score prediction assessment (Zhai & Min, 2020) to employ two standard metrics: the Pearson linear correlation coefficient (PLCC) and Spearman rank correlation coefficient (SRCC). These correlation coefficients quantify the alignment between the predicted scores and ground truth reconstruction error.
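+For reference, the two correlation metrics can be computed as follows (a pure-Python sketch; SRCC is shown without tie handling):
+
+```python
+import math
+
+# Pearson linear correlation coefficient (PLCC): linear agreement
+# between predicted scores x and ground truth errors y.
+def plcc(x, y):
+    n = len(x)
+    mx, my = sum(x) / n, sum(y) / n
+    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
+    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
+    sy = math.sqrt(sum((b - my) ** 2 for b in y))
+    return cov / (sx * sy)
+
+# Spearman rank correlation coefficient (SRCC): PLCC computed on ranks,
+# so it measures monotonic rather than linear agreement.
+def srcc(x, y):
+    def ranks(v):
+        order = sorted(range(len(v)), key=lambda i: v[i])
+        r = [0.0] * len(v)
+        for rank, i in enumerate(order):
+            r[i] = float(rank)
+        return r
+    return plcc(ranks(x), ranks(y))
+```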
+
+# 5.2. HMR-Scorer Evaluation
+
+Test benchmark. Since no existing datasets support score prediction for HMR models, we construct a test set from the synthetic dataset GTA-Human (Cai et al., 2024b), which is produced with rendering engines (e.g., Unreal Engine) and contains accurate 3D annotations. We also construct a second test set from DNA-Rendering (Cheng et al., 2023), a large-scale, ultra-high-resolution multi-view studio dataset, to demonstrate capacity on common studio-based scenes. We adopt the original test sets of the two selected datasets, then perturb the ground truth pose labels with random Gaussian noise to simulate
+
+Table 2. Comparison with state-of-the-arts on the 3DPW (Von Marcard et al., 2018) and Human3.6M (Ionescu et al., 2013) dataset. $M$ is the number of predictions in probabilistic methods. ADHMR is finetuned on the target benchmark dataset, while ADHMR (ITW) is further finetuned on the preference dataset constructed from an in-the-wild dataset.
+
+| Method | 3DPW PVE ↓ | 3DPW MPJPE ↓ | 3DPW PA-MPJPE ↓ |
+| --- | --- | --- | --- |
+| HypoNet (Base Model) | 73.4 | 63.0 | 37.6 |
+| *(a) Finetune on target benchmark dataset* | | | |
+| Supervised finetuning | 68.0 | 59.4 | 35.9 |
+| ADHMR | 65.4 | 57.2 | 33.5 |
+| *(b) Finetune on in-the-wild dataset* | | | |
+| Supervised finetuning | 70.2 | 61.3 | 36.5 |
+| ADHMR | 62.6 | 54.2 | 32.0 |
+
+Table 3. Ablation of preference finetuning on 3DPW (Von Marcard et al., 2018) dataset. $M = 100$ for all models.
+
+predictions of HMR models. We save the corresponding reconstruction errors for the noised predictions. We then measure the PLCC and SRCC between the ground truth reconstruction metrics and the predicted scores.
+
+Results. Table 1 presents the comparison results for score prediction. As a baseline, we adapt the reward model from ScoreHypo (denoted ScoreNet). We also study two ablations of HMR-Scorer: HMR-Scorer-P takes SMPL(-X) pose rotation vectors as input, and HMR-Scorer-2D takes 2D joint positions (without joint depth) as input. Our scorer outperforms all baselines in both PLCC and SRCC, i.e., its scores align best with the real reconstruction errors, underscoring its effectiveness in indicating human mesh prediction quality.
+
+
+Figure 2. Qualitative comparison of the state-of-the-art probabilistic model ScoreHypo (Xu et al., 2024b) and our ADHMR. Our framework significantly improves image alignment and in-the-wild robustness. (a) $\sim$ (f) are from the 3DPW (Von Marcard et al., 2018) dataset, and (g) $\sim$ (h) are challenging in-the-wild images.
+
+| Method | 3DPW PVE ↓ | 3DPW MPJPE ↓ | 3DPW PA-MPJPE ↓ |
+| --- | --- | --- | --- |
+| HypoNet (Base Model) | 71.9 | 61.8 | 36.1 |
+| Supervised finetuning | 69.9 | 59.7 | 35.2 |
+| ADHMR (ITW) | 60.5 | 52.6 | 30.8 |
+
+Table 4. Ablation of extra training data on 3DPW (Von Marcard et al., 2018) dataset. We use multiple training datasets of the scorer to perform supervised finetuning. $M = 200$ for all models.
+
+# 5.3. ADHMR Evaluation
+
+Comparisons with state-of-the-art methods. In Table 2, we compare the accuracy of ADHMR with state-of-the-art methods on two widely used benchmark datasets: Human3.6M (Ionescu et al., 2013) and 3DPW (Von Marcard et al., 2018), the latter an in-the-wild dataset. We report results for both deterministic and probabilistic methods. Following the conventions of standard probabilistic approaches (Xu et al., 2024b; Biggs et al., 2020b), we generate multiple estimates and report the min-MPJPE and min-PVE over the $M$ predictions. When finetuned directly on the target benchmark dataset, ADHMR achieves further improvements, showcasing its strong ability to adapt to domain-specific distributions. To evaluate effectiveness in the in-the-wild setting, ADHMR (ITW) is
+
+further finetuned on the InstaVariety dataset (Kanazawa et al., 2019), which contains diverse in-the-wild images from Instagram annotated with noisy pseudo-labels. Note that we do not use the original 3D labels; instead, we use HMR-Scorer to construct a preference dataset. Notably, our finetuned model with $M = 10$ predictions outperforms the base model with $M = 200$ predictions on the in-the-wild 3DPW benchmark, showing that our finetuning pipeline greatly enhances both generalization to in-the-wild data and efficiency. Moreover, ADHMR consistently outperforms existing state-of-the-art methods by a substantial margin.
+
+Qualitative results. Fig. 2 shows qualitative comparisons between ADHMR and the previous state-of-the-art probabilistic method ScoreHypo. We show randomly selected results of ScoreHypo and ADHMR on 3DPW and internet images with $M = 10$ candidate predictions. The finetuned model produces more accurate body poses in challenging cases, such as dense human-environment interactions, whereas the base model fails to achieve good image-mesh alignment. For example, in instance (a), our method provides more reasonable poses for the occluded right arm. In instance (c), ScoreHypo gives erroneous predictions for the person's arms, while ours gives a more accurate body pose prediction, proving the
+
+| Method | 3DPW MPJPE ↓ | 3DPW PA-MPJPE ↓ | Human3.6M MPJPE ↓ | Human3.6M PA-MPJPE ↓ | EHF PA-PVE ↓ | EHF PVE ↓ | EHF PA-MPJPE ↓ |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| Hand4Whole (Moon et al., 2022) | 115.2 | 75.4 | 78.8 | 57.7 | 57.8 | 89.2 | 70.2 |
+| + data cleaning | 112.3 | 73.7 | 77.9 | 57.0 | 56.2 | 88.8 | 69.8 |
+| OSX (Base) (Lin et al., 2023) | 100.4 | 66.4 | 69.5 | 48.9 | 54.7 | 86.6 | 63.7 |
+| + data cleaning | 99.4 | 65.2 | 65.7 | 46.4 | 53.1 | 84.0 | 62.4 |
+| SMPLer-X-Base (Cai et al., 2024a) | 99.5 | 64.2 | 59.8 | 45.8 | 51.0 | 82.4 | 59.7 |
+| + data cleaning | 97.9 | 62.9 | 57.5 | 43.8 | 51.4 | 78.6 | 58.6 |
+
+Table 5. Quantitative comparisons between several state-of-the-art methods with and without data cleaning on the 3DPW (Von Marcard et al., 2018), Human3.6M (Ionescu et al., 2013) and EHF (Pavlakos et al., 2019) dataset. All methods are trained on four commonly used datasets. After cleaning the training data, these models achieve higher accuracy even when trained on a smaller subset.
+
+efficacy of the proposed finetuning pipeline. In instance (g), ScoreHypo produces inaccurate elbow poses, while our prediction fits the input image better. These results show that ADHMR is more robust to challenging internet images than ScoreHypo. Please zoom in to observe our improvement over the base model.
+
+Ablation of preference finetuning. As shown in Table 3, we conduct an ablation study on different finetuning methods to demonstrate the advantages of ADHMR over traditional supervised finetuning. We construct two baselines in which the base model is finetuned using the ground truth labels in the datasets: first on the training sets of the two target benchmarks (3DPW and Human3.6M), and second on the pseudo labels of InstaVariety to simulate training on noisy pseudo-labeled in-the-wild data. ADHMR consistently outperforms traditional supervised finetuning in both settings. When finetuned directly on the target benchmark dataset, our method achieves larger improvements than supervised finetuning, showcasing its strong ability to adapt to domain-specific distributions, whereas supervised finetuning may overfit one training dataset (3DPW) and degrade performance on the other test benchmark (Human3.6M). On the in-the-wild dataset, ADHMR remains robust under noisy pseudo-label conditions, while supervised finetuning on the noisy labels can overfit the training dataset and harm performance on the 3DPW benchmark.
+
+Ablation of extra training datasets. In Table 4, we aim to confirm that the improvements achieved by ADHMR are attributable to its optimization strategy rather than to extra information from the datasets used during scorer training. To this end, we additionally finetune the base model on the scorer's training sets. Supervised finetuning on these datasets yields insignificant gains, confirming that the improvements are not primarily due to external dataset information; simply exposing the base model to more data does not guarantee better performance. In contrast, ADHMR benefits from the ability of HMR-Scorer to implicitly guide the base model towards generating high-quality predictions.
+
+# 5.4. Data Cleaning Results
+
+Current HMR models are trained on quite different sets of datasets. For a fair and comprehensive comparison, we select several state-of-the-art HMR methods and retrain them on four commonly used datasets: MSCOCO (Lin et al., 2014), MPII (Andriluka et al., 2014), Human3.6M (Ionescu et al., 2013), and MPI-INF-3DHP (Mehta et al., 2017). We compare training the models on the full training sets of these datasets against training on the filtered datasets obtained after applying data cleaning with HMR-Scorer, using a filter threshold of $\tau = 0.6$ .
+
+Quantitative results are shown in Table 5. Models trained on the cleaned datasets achieve better performance despite using less training data. This improvement highlights the utility of our method in filtering out low-quality training samples, enabling the models to focus on higher-quality data. This finding offers a new perspective for improving large HMR models by strategically cleaning training data: by leveraging HMR-Scorer to curate datasets, we can achieve higher-quality model training with less data, making it a valuable tool for inexpensive performance gains.
+
+# 6. Conclusion
+
+In this work, we propose ADHMR, the first framework for aligning diffusion-based HMR models with direct preference optimization. We leverage a trained HMR-Scorer to synthesize a preference dataset automatically without the need for manual annotation. This dataset is then used to align the diffusion-based HMR model through direct preference optimization. Additionally, HMR-Scorer improves the performance of several state-of-the-art HMR models by filtering out low-quality training data. Extensive experiments validate the effectiveness of ADHMR. We believe that our work will pave the way for future advancements in alignment techniques for human mesh recovery.
+
+# Acknowledgements
+
+This study is supported under the RIE2020 Industry Alignment Fund - Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s). This research is also supported by A*STAR under its MTC Programmatic Funds (Grant No. M23L7b0021) and the MoE AcRF Tier 1 grant (RG14/22).
+
+# Impact Statement
+
+This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
+
+# References
+
+Andriluka, M., Pishchulin, L., Gehler, P., and Schiele, B. 2d human pose estimation: New benchmark and state of the art analysis. In Proceedings of the IEEE Conference on computer Vision and Pattern Recognition, pp. 3686-3693, 2014.
+Biggs, B., Novotny, D., Ehrhardt, S., Joo, H., Graham, B., and Vedaldi, A. 3d multi-bodies: Fitting sets of plausible 3d human models to ambiguous image data. Advances in neural information processing systems, 33:20496-20507, 2020a.
+Biggs, B., Novotny, D., Ehrhardt, S., Joo, H., Graham, B., and Vedaldi, A. 3d multi-bodies: Fitting sets of plausible 3d human models to ambiguous image data. Advances in neural information processing systems, 33:20496-20507, 2020b.
+Black, M. J., Patel, P., Tesch, J., and Yang, J. Bedlam: A synthetic dataset of bodies exhibiting detailed lifelike animated motion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8726-8737, 2023.
+Burges, C., Shaked, T., Renshaw, E., Lazier, A., Deeds, M., Hamilton, N., and Hullender, G. Learning to rank using gradient descent. In Proceedings of the 22nd international conference on Machine learning, pp. 89-96, 2005.
+Cai, Z., Yin, W., Zeng, A., Wei, C., Sun, Q., Yanjun, W., Pang, H. E., Mei, H., Zhang, M., Zhang, L., et al. SMPLer-X: Scaling up expressive human pose and shape estimation. Advances in Neural Information Processing Systems, 36, 2024a.
+Cai, Z., Zhang, M., Ren, J., Wei, C., Ren, D., Lin, Z., Zhao, H., Yang, L., Loy, C. C., and Liu, Z. Playing for 3d human recovery. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024b.
+
+Chen, C., Wei, J., Chen, T., Zhang, C., Yang, X., Zhang, S., Yang, B., Foo, C.-S., Lin, G., Huang, Q., et al. Cadcrafter: Generating computer-aided design models from unconstrained images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025.
+Cheng, W., Chen, R., Fan, S., Yin, W., Chen, K., Cai, Z., Wang, J., Gao, Y., Yu, Z., Lin, Z., et al. Dna-rendering: A diverse neural actor repository for high-fidelity human-centric rendering. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 19982-19993, 2023.
+Cho, H. and Kim, J. Generative approach for probabilistic human mesh recovery using diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 4183-4188, 2023.
+Cho, H., Cho, Y., Ahn, J., and Kim, J. Implicit 3d human mesh recovery using consistency with pose and shape from unseen-view. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 21148-21158, 2023.
+Christiano, P. F., Leike, J., Brown, T., Martic, M., Legg, S., and Amodei, D. Deep reinforcement learning from human preferences. Advances in neural information processing systems, 30, 2017.
+Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., and Houlsby, N. An image is worth 16x16 words: Transformers for image recognition at scale. ICLR, 2021.
+Dwivedi, S. K., Sun, Y., Patel, P., Feng, Y., and Black, M. J. Tokenhmr: Advancing human mesh recovery with a tokenized pose representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1323-1333, 2024.
+Fang, Q., Chen, K., Fan, Y., Shuai, Q., Li, J., and Zhang, W. Learning analytical posterior probability for human mesh recovery. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8781-8791, 2023.
+Foo, L. G., Gong, J., Rahmani, H., and Liu, J. Distribution-aligned diffusion for human mesh recovery. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9221-9232, 2023.
+Goel, S., Pavlakos, G., Rajasegaran, J., Kanazawa, A., and Malik, J. Humans in 4d: Reconstructing and tracking humans with transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 14783-14794, 2023.
+
+Hong, Z. and Shen, W. Free-viewpoint video in the wild using a flying camera. In ECCV 2024 Workshop on Wild 3D: 3D Modeling, Reconstruction, and Generation in the Wild, 2024.
+Huang, B., Li, C., Xu, C., Pan, L., Wang, Y., and Lee, G. H. Closely interactive human reconstruction with proxemics and physics-guided adaption. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024.
+Ibarz, B., Leike, J., Pohlen, T., Irving, G., Legg, S., and Amodei, D. Reward learning from human preferences and demonstrations in atari. Advances in neural information processing systems, 31, 2018.
+Ionescu, C., Papava, D., Olaru, V., and Sminchisescu, C. Human3.6M: Large scale datasets and predictive methods for 3d human sensing in natural environments. IEEE transactions on pattern analysis and machine intelligence, 36(7):1325-1339, 2013.
+Kanazawa, A., Black, M. J., Jacobs, D. W., and Malik, J. End-to-end recovery of human shape and pose. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 7122-7131, 2018.
+Kanazawa, A., Zhang, J. Y., Felsen, P., and Malik, J. Learning 3d human dynamics from video. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 5614-5623, 2019.
+Kocabas, M., Huang, C.-H. P., Tesch, J., Müller, L., Hilliges, O., and Black, M. J. Spec: Seeing people in the wild with an estimated camera. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 11035-11045, 2021.
+Kolotouros, N., Pavlakos, G., Black, M. J., and Daniilidis, K. Learning to reconstruct 3d human pose and shape via model-fitting in the loop. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 2252-2261, 2019.
+Kolotouros, N., Pavlakos, G., Jayaraman, D., and Daniilidis, K. Probabilistic modeling for human mesh recovery. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 11605-11614, 2021.
+Kreutzer, J., Uyheng, J., and Riezler, S. Reliability and learnability of human bandit feedback for sequence-to-sequence reinforcement learning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1777-1788, 2018.
+
+Lassner, C., Romero, J., Kiefel, M., Bogo, F., Black, M. J., and Gehler, P. V. Unite the people: Closing the loop between 3d and 2d human representations. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 6050-6059, 2017.
+Lee, K., Liu, H., Ryu, M., Watkins, O., Du, Y., Boutilier, C., Abbeel, P., Ghavamzadeh, M., and Gu, S. S. Aligning text-to-image models using human feedback. arXiv preprint arXiv:2302.12192, 2023.
+Li, J., Xu, C., Chen, Z., Bian, S., Yang, L., and Lu, C. Hybrik: A hybrid analytical-neural inverse kinematics solution for 3d human pose and shape estimation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 3383-3393, 2021.
+Li, J., Bian, S., Xu, C., Chen, Z., Yang, L., and Lu, C. Hybrik-x: Hybrid analytical-neural inverse kinematics for whole-body mesh recovery. arXiv preprint arXiv:2304.05690, 2023.
+Lin, J., Zeng, A., Wang, H., Zhang, L., and Li, Y. One-stage 3d whole-body mesh recovery with component aware transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 21159-21168, 2023.
+Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollar, P., and Zitnick, C. L. Microsoft coco: Common objects in context. In Computer Vision-ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pp. 740-755. Springer, 2014.
+Loper, M., Mahmood, N., Romero, J., Pons-Moll, G., and Black, M. J. Smpl: a skinned multi-person linear model. ACM Transactions on Graphics (TOG), 34(6):1-16, 2015.
+Mehta, D., Rhodin, H., Casas, D., Fua, P., Sotnychenko, O., Xu, W., and Theobalt, C. Monocular 3d human pose estimation in the wild using improved cnn supervision. In 2017 international conference on 3D vision (3DV), pp. 506-516. IEEE, 2017.
+Moon, G., Choi, H., and Lee, K. M. Accurate 3d hand pose estimation for whole-body 3d human mesh estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2308-2317, 2022.
+Pang, H. E., Cai, Z., Yang, L., Tao, Q., Wu, Z., Zhang, T., and Liu, Z. Towards robust and expressive whole-body human pose and shape estimation. Advances in Neural Information Processing Systems, 36, 2024.
+Pavlakos, G., Choutas, V., Ghorbani, N., Bolkart, T., Osman, A. A., Tzionas, D., and Black, M. J. Expressive body capture: 3d hands, face, and body from a single image.
+
+In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10975-10985, 2019.
+Rafailov, R., Sharma, A., Mitchell, E., Manning, C. D., Ermon, S., and Finn, C. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36, 2024.
+Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
+Sengupta, A., Budvytis, I., and Cipolla, R. Hierarchical kinematic probability distributions for 3d human shape and pose estimation from images in the wild. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 11219-11229, 2021.
+Sengupta, A., Budvytis, I., and Cipolla, R. HuManiFlow: Ancestor-conditioned normalising flows on SO(3) manifolds for human pose and shape distribution estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4779-4789, 2023.
+Shen, W., Yin, W., Wang, H., Wei, C., Cai, Z., Yang, L., and Lin, G. Hmr-adapter: A lightweight adapter with dual-path cross augmentation for expressive human mesh recovery. In Proceedings of the 32nd ACM International Conference on Multimedia, pp. 6093-6102, 2024.
+Shuai, Q., Geng, C., Fang, Q., Peng, S., Shen, W., Zhou, X., and Bao, H. Novel view synthesis of human interactions from sparse multi-view videos. In ACM SIGGRAPH 2022 Conference Proceedings, pp. 1-10, 2022.
+Stathopoulos, A., Han, L., and Metaxas, D. Score-guided diffusion for 3d human recovery. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 906-915, 2024.
+Stiennon, N., Ouyang, L., Wu, J., Ziegler, D., Lowe, R., Voss, C., Radford, A., Amodei, D., and Christiano, P. F. Learning to summarize with human feedback. Advances in Neural Information Processing Systems, 33: 3008-3021, 2020.
+Sun, Q., Wang, Y., Zeng, A., Yin, W., Wei, C., Wang, W., Mei, H., Leung, C.-S., Liu, Z., Yang, L., et al. Aios: All-in-one-stage expressive human pose and shape estimation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 1834-1843, 2024.
+Vaswani, A. Attention is all you need. Advances in Neural Information Processing Systems, 2017.
+
+Von Marcard, T., Henschel, R., Black, M. J., Rosenhahn, B., and Pons-Moll, G. Recovering accurate 3d human pose in the wild using imus and a moving camera. In Proceedings of the European conference on computer vision (ECCV), pp. 601-617, 2018.
+Wallace, B., Dang, M., Rafailov, R., Zhou, L., Lou, A., Purushwalkam, S., Ermon, S., Xiong, C., Joty, S., and Naik, N. Diffusion model alignment using direct preference optimization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8228-8238, 2024.
+Wang, W., Ge, Y., Mei, H., Cai, Z., Sun, Q., Wang, Y., Shen, C., Yang, L., and Komura, T. Zolly: Zoom focal length correctly for perspective-distorted human mesh reconstruction. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 3925-3935, 2023.
+Xu, J., Liu, X., Wu, Y., Tong, Y., Li, Q., Ding, M., Tang, J., and Dong, Y. Imagereward: Learning and evaluating human preferences for text-to-image generation. Advances in Neural Information Processing Systems, 36, 2024a.
+Xu, Y., Ma, X., Su, J., Zhu, W., Qiao, Y., and Wang, Y. Scorehypo: Probabilistic human mesh estimation with hypothesis scoring. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 979-989, 2024b.
+Xu, Z., Song, C., Song, G., Zhang, J., Liew, J. H., Xu, H., Xie, Y., Luo, L., Lin, G., Feng, J., et al. High quality human image animation using regional supervision and motion blur condition. arXiv preprint arXiv:2409.19580, 2024c.
+Yang, F., Chen, T., He, X., Cai, Z., Yang, L., Wu, S., and Lin, G. Attribuman-3d: Editable 3d human avatar generation with attribute decomposition and indexing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10596-10605, 2024.
+Yao, N., Zhang, G., Shen, W., Shu, J., and Wang, H. Unify3d: An augmented holistic end-to-end monocular 3d human reconstruction via anatomy shaping and twins negotiating. arXiv preprint arXiv:2504.18215, 2025.
+Ye, J., Liu, F., Li, Q., Wang, Z., Wang, Y., Wang, X., Duan, Y., and Zhu, J. Dreamreward: Text-to-3d generation with human preference. arXiv preprint arXiv:2403.14613, 2024.
+Yin, W., Cai, Z., Wang, R., Wang, F., Wei, C., Mei, H., Xiao, W., Yang, Z., Sun, Q., Yamashita, A., et al. Whac: World-grounded humans and cameras. In European Conference on Computer Vision, pp. 20-37. Springer, 2024.
+
+Yin, W., Cai, Z., Wang, R., Zeng, A., Wei, C., Sun, Q., Mei, H., Wang, Y., Pang, H. E., Zhang, M., et al. Smallest-x: Ultimate scaling for expressive human pose and shape estimation. arXiv preprint arXiv:2501.09782, 2025.
+Yin, Y., Guo, C., Kaufmann, M., Zarate, J. J., Song, J., and Hilliges, O. Hi4d: 4d instance segmentation of close human interaction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 17016-17027, 2023.
+Zhai, G. and Min, X. Perceptual image quality assessment: a survey. Science China Information Sciences, 63:1-52, 2020.
+Zhang, H., Tian, Y., Zhou, X., Ouyang, W., Liu, Y., Wang, L., and Sun, Z. Pymaf: 3d human pose and shape regression with pyramidal mesh alignment feedback loop. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 11446-11456, 2021.
+Zhang, S., Ma, Q., Zhang, Y., Aliakbarian, S., Cosker, D., and Tang, S. Probabilistic human mesh recovery in 3d scenes from egocentric views. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 7989-8000, 2023.
+Zhang, S., Yang, X., Feng, Y., Qin, C., Chen, C.-C., Yu, N., Chen, Z., Wang, H., Savarese, S., Ermon, S., et al. Hive: Harnessing human feedback for instructional visual editing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9026-9036, 2024.
+Zheng, C., Liu, X., Qi, G.-J., and Chen, C. Potter: Pooling attention transformer for efficient human mesh recovery. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1611-1620, 2023.
+Ziegler, D. M., Stiennon, N., Wu, J., Brown, T. B., Radford, A., Amodei, D., Christiano, P., and Irving, G. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593, 2019.
\ No newline at end of file
diff --git a/adhmraligningdiffusionbasedhumanmeshrecoveryviadirectpreferenceoptimization/images.zip b/adhmraligningdiffusionbasedhumanmeshrecoveryviadirectpreferenceoptimization/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..68364bb187d792945f81e47a2bab8a617e197995
--- /dev/null
+++ b/adhmraligningdiffusionbasedhumanmeshrecoveryviadirectpreferenceoptimization/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4dcc17c749ee923a0192b4c7a0b6114c9bc2604ddf4473e6b015f26f7350528e
+size 567330
diff --git a/adhmraligningdiffusionbasedhumanmeshrecoveryviadirectpreferenceoptimization/layout.json b/adhmraligningdiffusionbasedhumanmeshrecoveryviadirectpreferenceoptimization/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..7f692c1d52e4516cb6cf73afb3367ce6650f614e
--- /dev/null
+++ b/adhmraligningdiffusionbasedhumanmeshrecoveryviadirectpreferenceoptimization/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4cb41e3f375a30a004d4795d05a9a7afe6e69864b2462b41897d310b43f34b1c
+size 395170
diff --git a/adiosantibodydevelopmentviaopponentshaping/8cf0227a-0185-4b01-8517-955402c070be_content_list.json b/adiosantibodydevelopmentviaopponentshaping/8cf0227a-0185-4b01-8517-955402c070be_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..8a9a2010295902314be9ade7dd3584eaebf6fcde
--- /dev/null
+++ b/adiosantibodydevelopmentviaopponentshaping/8cf0227a-0185-4b01-8517-955402c070be_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dd6b6da51da912a6ae34aaccc7b5ec1b5ca3aca068ce85ce1c3bef4786d4be1e
+size 138832
diff --git a/adiosantibodydevelopmentviaopponentshaping/8cf0227a-0185-4b01-8517-955402c070be_model.json b/adiosantibodydevelopmentviaopponentshaping/8cf0227a-0185-4b01-8517-955402c070be_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..c9a51d69324709bd508f62490c7fda746a1a240e
--- /dev/null
+++ b/adiosantibodydevelopmentviaopponentshaping/8cf0227a-0185-4b01-8517-955402c070be_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:db2e87f3b9f4869f44a062febca89a4f74c3520dbd28d31d41c0c8e3e8a3aeb5
+size 159922
diff --git a/adiosantibodydevelopmentviaopponentshaping/8cf0227a-0185-4b01-8517-955402c070be_origin.pdf b/adiosantibodydevelopmentviaopponentshaping/8cf0227a-0185-4b01-8517-955402c070be_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..5e1d9b22159b7cb733d8214d3512628d35f002a8
--- /dev/null
+++ b/adiosantibodydevelopmentviaopponentshaping/8cf0227a-0185-4b01-8517-955402c070be_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:269bb2a115eab5d7f6e49ba4e657f9fbcbb519bc836fb5f41ba1ac4597c2c68b
+size 4077425
diff --git a/adiosantibodydevelopmentviaopponentshaping/full.md b/adiosantibodydevelopmentviaopponentshaping/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..e1b83587e1f874f4dcd871a06cabf4fc2e5a0110
--- /dev/null
+++ b/adiosantibodydevelopmentviaopponentshaping/full.md
@@ -0,0 +1,562 @@
+# ADIOS: Antibody Development via Opponent Shaping
+
+Sebastian Towers *12 Aleksandra Kalisz *12 Philippe A. Robert 3 Alicia Higueruelo 4 Francesca Vianello 5 Ming-Han Chloe Tsai 6 Harrison Steel 2 Jakob Foerster 12
+
+# Abstract
+
+Anti-viral therapies are typically designed to target only the current strains of a virus, a myopic response. However, therapy-induced selective pressures drive the emergence of new viral strains, against which the original myopic therapies are no longer effective. This evolutionary response presents an opportunity: our therapies could both defend against and actively influence viral evolution. This motivates our method ADIOS: Antibody Development vIa Opponent Shaping. ADIOS is a meta-learning framework where the process of antibody therapy design, the outer loop, accounts for the virus's adaptive response, the inner loop. With ADIOS, antibodies are not only robust against potential future variants, they also influence, i.e., shape, which future variants emerge. In line with the opponent shaping literature, we refer to our optimised antibodies as shapers. To demonstrate the value of ADIOS, we build a viral evolution simulator using the Absolut! framework, in which shapers successfully target both current and future viral variants, outperforming myopic antibodies. Furthermore, we show that shapers modify the distribution over viral evolutionary trajectories to result in weaker variants. We believe that our ADIOS paradigm will facilitate the discovery of long-lived vaccines and antibody therapies while also generalising to other domains. Specifically, domains such as antimicrobial resistance, cancer treatment, and others with evolutionarily adaptive opponents. Our code is available at https://github.com/olakalisz/adios.
+
+*Equal contribution 1FLAIR, Foerster Lab for AI Research
+2Department of Engineering, University of Oxford, Oxford, UK
+3Department of Biomedicine, University of Basel, Basel, Switzerland
+4Isomorphic Labs, London, UK 5Exscientia, Oxford, UK
+6Epsilon Ltd., London, UK. Correspondence to: Sebastian Towers, Aleksandra Kalisz.
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+# 1. Introduction
+
+Designing effective therapies to fight off viral pathogens is crucial for limiting their devastating social and economic costs (Nandi & Shet; Orenstein & Ahmed, 2017; Samsudin et al., 2024; Faramarzi et al., 2024). However, traditional design approaches only target the current variant of a virus. Although this myopic design approach may yield therapies with high initial efficacy, it fails to account for viral adaptation, leaving treatments vulnerable to becoming ineffective over time (Weisblum et al., 2020; Doud et al., 2018; Lee et al., 2019; Dingens et al., 2019; Greaney et al., 2021).
+
+
+Figure 1. Myopic Therapies Become Ineffective Over Time. Initial virus (variant A) evolves in response to the evolutionary pressures induced by therapies, resulting in new variants. In case of traditional myopic therapies (top), the new emerging variants (variant B) are often therapy resistant. In contrast, ADIOS designs shapers or shaper therapies (bottom) which remain effective and steer the viral evolution towards less harmful variants (variant C).
+
+The COVID-19 pandemic starkly illustrated the challenges of adaptive viruses. While the rapid development of vaccines was a remarkable achievement, concerns quickly arose about their long-term efficacy against new emerging COVID variants (Carabelli et al., 2023; Hu et al., 2021). For example, the B.1.351 variant demonstrated that the vaccine loses its effectiveness against new strains (Madhi et al., 2021). This underscores the need for approaches that consider both the current and future efficacy of a designed therapy.
+
+The virus inevitably adapts in response to selective pressures imposed by our therapies, i.e., we influence the viral evolution (Chéron et al., 2016; Meijers et al., 2022). Our work turns this influence in our favour, designing therapies that steer the virus towards less dangerous variants, see Figure 1.
+
+
+Figure 2. Main Components of ADIOS. a The ADIOS framework. In the Antibody Optimisation Loop (i.e., outer loop), we optimise the antibody to perform well against current and future virus variants; thus influencing the viral evolution. We approximate the future variants through our Simulated Viral Escape via Evolution (i.e., inner loop) where the viruses evolve to escape from the current antibody over a given horizon length. b The payoffs of the antibody and virus. Red arrows indicate binding interactions that players aim to minimise, while green arrows represent those they aim to maximise. The antibody optimises for binding to the virus while avoiding its anti-target. In this zero-sum game, the antibody's optimisation indirectly counters the virus's binding to its target, see Equation 1. c Binding simulator. Our JAX (Bradbury et al., 2018) implementation of the binding calculation uses binding poses generated by Absolut! (Robert et al., 2022) and the Miyazawa-Jernigan energy potential matrix (Miyazawa & Jernigan, 1999).
+
+To achieve this we utilise principles from opponent shaping (Foerster et al., 2018), a multi-agent reinforcement learning framework that allows agents to both anticipate and influence the future policies of other agents in their environment. This approach, exemplified by methods such as Learning with Opponent-Learning Awareness (Foerster et al., 2018) and Model-Free Opponent Shaping (Lu et al., 2022), allows agents to consider not only their current performance but also the consequences of their actions on their opponents' future behaviour.
+
+Building on these principles, we introduce ADIOS: Antibody Development vIa Opponent Shaping. Antibodies are immune system proteins that bind to pathogens such as viruses. While naturally produced by the body, it is also possible to design and synthetically produce antibodies as therapies. ADIOS frames the interaction between antibodies and viruses as a two-player zero-sum game. In this game,
+
+the antibody's payoff is primarily determined by its binding strength to the virus, while the virus has the opposite payoff (Figure 2b). Although our framework can use any binding model in principle, in this work we build on the Absolut! framework (Robert et al., 2022) to estimate the binding strength of protein-protein interactions. To improve computational efficiency, we reimplement parts of Absolut! in JAX (Bradbury et al., 2018), allowing GPU acceleration and a 10,000-fold speedup over the original implementation (Figure 2c).
+
+We use this game to model viral escape - the process through which mutations allow a virus to evade a host's immune system (Lucas et al., 2001). Following a meta-learning approach, ADIOS implements two nested optimisation loops, an inner loop and an outer loop (Figure 2a). In the inner loop, we simulate viral escape via evolution, where the virus adapts to the current antibody by repeatedly finding approximate best responses that decrease binding strength. The outer loop uses a genetic algorithm to optimise the antibodies to be effective across viral evolutionary trajectories, resulting in antibodies we call shapers. This is in contrast to only optimising for binding to the initial virus, which results in myopic antibodies.
+
+Importantly, our simulations show that shapers not only outperform myopic antibodies in long-term efficacy but also demonstrate the ability to shape viral evolution. Moreover, shapers guide viruses toward variants that are more susceptible to binding to a broad spectrum of antibodies, not just the shaper that induced the given viral evolution, providing insights into the scalability and potential for practical deployment of ADIOS. Our study also explores the trade-offs between the effectiveness of shapers and the computational resources required for their optimisation. Finally, we present an explainability analysis of the key features that distinguish shapers from myopic antibodies.
+
+Our key contributions include:
+
+- ADIOS: A framework that brings opponent shaping to antibody design to address viral escape.
+- A GPU-accelerated JAX implementation of the binding simulator Absolut!, achieving a 10,000x speedup.
+- An open-source instantiation of ADIOS applied to antibody design for both the dengue virus and three other viruses using our JAX implementation.
+- Empirical results showing ADIOS-optimised shapers both significantly outperform myopic antibodies by limiting long-horizon viral escape and guide viral evolution towards variants that can be more easily targeted.
+- Analysis of computational trade-offs in shaping horizons, providing practical guidance for deploying ADIOS in compute-constrained settings, e.g. due to more realistic binding simulators.
+- Interpretability analysis into how antibody shapers influence viral escape, which could, in principle, provide inspiration to antibody designers.
+
+While our results provide a promising proof of concept, they are based on simplified models of binding and viral escape. However, we believe that as more sophisticated simulators emerge, the ADIOS framework has the potential to significantly impact future antiviral therapy design.
+
+# 2. Related Work
+
+Antibody Design: Antibodies are essential components of the immune system that bind to unique identifiers (antigens) present on pathogens, including viruses, to identify and neutralise them. While natural antibodies emerge through an immune response, it is also possible to design antibodies for use as therapies. Recent work has made significant progress in computational antibody design (Cutting et al., 2024; Zambaldi et al., 2024; Bennett et al., 2024). The common approaches to antibody design utilise energy-based antibody optimisation methods (Li et al., 2014; Adolf-Bryfogle et al., 2018; Pereira et al., 2024), sequence-based language models (Liu et al., 2020; Saka et al., 2021) or structure-based approaches relying on GNNs (Jin et al., 2022) and diffusion models (Martinkus et al., 2024).
+
+In contrast to these works, we are not interested in generating better antibody design methods immediately, but rather in how we should make new methods in the future to account for our effect on evolving viruses.
+
+Predicting Viral Escape: Recent machine learning methods have demonstrated success in predicting future viral strains (Shanker et al., 2024; Wang et al., 2023; Nie et al., 2025). EVEscape (Thadani et al., 2023) decomposes the likelihood of a mutation into three parts: maintaining fitness, accessibility to antibodies, and disrupting binding, demonstrating success through retrospective identification of COVID variants. Han et al. (2023) take a different approach by modelling viral evolution through simulated fitness landscapes. Unlike these methods, ADIOS models the antibody influence on viral evolution, enabling both the simulation of viral escape trajectories and the optimisation of antibodies to minimise viral escape.
+
+# 3. Background
+
+Antibody Binding Simulators: In our setting, the interaction between antibodies and viruses is characterised by their binding strength $B(\cdot, \cdot)$ - a measure of how strongly the two "attach" to each other through molecular forces. Molecular dynamics simulations offer high accuracy but are computationally intensive (Hollingsworth & Dror, 2018). Sequence-based ML models (Mason et al., 2021; Lim et al., 2022; Ruffolo et al., 2023; Yan Huang et al., 2022) provide faster alternatives but struggle to generalise beyond their training distribution, making them unsuitable for exploring novel viral mutations. To evaluate $B(\cdot, \cdot)$ we use the Absolut! framework (Robert et al., 2022). Absolut! offers a balance between speed and generalisation by modelling binding through discretised protein structures. It focuses on the CDRH3 region of the antibody, the most variable portion that primarily determines binding specificity (VanDyk & Meek, 1992). For each antibody-antigen pair, Absolut! enumerates possible binding poses and computes their energy using the Miyazawa-Jernigan potential (Miyazawa & Jernigan, 1999). The binding strength $B(\cdot, \cdot)$ is then defined as the negative of the lowest binding energy, see Figure 2c and Appendix D for details.
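+The pose-enumeration scheme just described can be sketched in a few lines. This is a toy stand-in only: a random symmetric matrix replaces the real Miyazawa-Jernigan potential, and hand-written contact lists replace Absolut!'s pose enumeration.

```python
import numpy as np

# Toy sketch of the Absolut!-style binding computation: each enumerated pose
# is scored by summing pairwise contact energies from a Miyazawa-Jernigan-style
# 20x20 potential, and the binding strength B is the negative of the lowest
# pose energy. The matrix and poses below are hypothetical stand-ins.
rng = np.random.default_rng(0)
MJ = rng.normal(size=(20, 20))
MJ = (MJ + MJ.T) / 2  # contact potentials are symmetric

def pose_energy(contacts, virus_seq, antibody_seq):
    """Sum contact energies over (virus_residue, antibody_residue) index pairs."""
    return sum(MJ[virus_seq[i], antibody_seq[j]] for i, j in contacts)

def binding_strength(poses, virus_seq, antibody_seq):
    """B(v, a): negative of the lowest energy over all enumerated poses."""
    return -min(pose_energy(p, virus_seq, antibody_seq) for p in poses)

virus = rng.integers(0, 20, size=8)      # toy antigen sequence (as indices)
antibody = rng.integers(0, 20, size=5)   # toy CDRH3 sequence (as indices)
poses = [[(0, 0), (1, 1), (2, 2)], [(3, 0), (4, 1), (5, 2)]]
B = binding_strength(poses, virus, antibody)
```

+Taking the minimum over poses corresponds to assuming the complex settles into its lowest-energy (most stable) binding configuration.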
+
+Opponent Shaping: A multi-agent reinforcement learning framework which allows agents to anticipate and influence the future policies of other agents in their environment. Learning with Opponent-Learning Awareness (LOLA) (Foerster et al., 2018) introduced this concept by having LOLA
+
+Algorithm 1 $\operatorname{Ev}(\hat{v}, a)$: Simulated Viral Escape (Inner Loop)
+Input: virus $\hat{v}$, antibody $a$, horizon $H$, population size $P$, inverse temperature $\beta$
+Output: Sampled viral trajectory $\hat{\pmb{v}} = [\hat{v}^0, \hat{v}^1, \dots, \hat{v}^H]$
+ $\hat{v}^0 \gets \hat{v}$
+for $i = 0$ to $H - 1$ do
+  for $k = 1$ to $P$ do
+    $v_k^i \gets \hat{v}^i \oplus \text{Mutation}$
+  end for
+  $p(v) \leftarrow \mathbb{P}(v = v_k^i) \propto \exp(\beta R_{\mathrm{v}}(v_k^i, a))$ // See Eq. 1
+  $\hat{v}^{i+1} \sim p(v)$
+end for
+ $\hat{\pmb{v}} \gets [\hat{v}^0, \dots, \hat{v}^H]$
+return $\hat{\pmb{v}}$
+
+agents optimise against anticipated opponent updates rather than static opponent policies. They achieved this through an augmented value function that accounts for the opponent's learning step.
+
+Meta-learning is a set of methods for optimising a learning process itself, "learning to learn". In multi-agent systems, this concept extends to learning about and influencing how other agents learn. Model-Free Opponent Shaping (M-FOS) (Lu et al., 2022) showcases this idea by using gradient-free optimisation to learn meta-policies that accomplish long-horizon opponent shaping. Our approach follows a similar principle, using evolutionary optimisation to shape viral escape trajectories.
+
+# 4. Method
+
+ADIOS frames antibody design as a two-player game between an antibody shaper agent and a naive virus agent, building on principles from opponent shaping (Figure 2). We present our method in three parts:
+
+In Section 4.1, we introduce the virus-antibody game, defining the action spaces and payoffs for both players. Section 4.2 describes our simulated viral evolution process, modelling how viruses evolve to escape a given antibody. Finally, Section 4.3 presents our antibody optimisation approach, which optimises the antibody shapers in a way that accounts for future viral mutations and learns to influence viral evolution away from escape.
+
+# 4.1. Virus-Antibody Game
+
+We formalise the interaction between antibodies and viruses as a two-player zero-sum game: one player's gain is the other's loss. The game is defined by the set of actions available to each player and their respective payoffs. The players' actions are represented by their amino acid sequences: an antigen protein sequence for the virus, and a fragment of the hypervariable region of the heavy chain for the antibody.
+
+Algorithm 2 Antibody Optimisation (Outer Loop)
+Input: antibody $\hat{a}$, virus $\hat{v}$, horizon $H$, population size $P_a$, steps $N$
+Output: Trajectory of antibodies $\hat{\pmb{a}} = [\hat{a}^0, \hat{a}^1, \dots, \hat{a}^N]$
+ $\hat{a}^0 \gets \hat{a}$
+for $i = 0$ to $N - 1$ do
+  $a_1^i \gets \hat{a}^i$
+  for $k = 2$ to $P_a$ do
+    $a_k^i \gets \hat{a}^i \oplus \text{Point Mutation}$
+  end for
+  $\hat{a}^{i+1} \gets \arg\max_k \mathbb{E}\left[F_{\hat{v}}^H(a_k^i)\right]$ // See Eq. 2
+end for
+ $\hat{\pmb{a}} \gets [\hat{a}^0, \dots, \hat{a}^N]$
+return $\hat{\pmb{a}}$
+
+We define the set of 20 amino acids as $\mathbb{A}$ . Let $N_{v}$ be the virus sequence length and $N_{a}$ be the antibody sequence length. So an action of the virus is $v\in \mathbb{A}^{N_v}$ , and an action of the antibody is $a\in \mathbb{A}^{N_a}$ . Let $B:\mathbb{A}^{N_v}\times \mathbb{A}^{N_a}\to \mathbb{R}$ be our binding function, which measures the strength of the binding between the antibody and the virus with increasing values corresponding to stronger binding1 .
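+These action spaces can be made concrete with a small encoding sketch; the helper names below are hypothetical, not part of the paper's code.

```python
# Illustrative encoding of the action spaces A^{N_v} and A^{N_a}: each player's
# action is a sequence over the 20-letter amino-acid alphabet, stored as a
# list of integer indices.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def encode(seq):
    """Map an amino-acid string to a list of indices in [0, 20)."""
    return [AA_INDEX[aa] for aa in seq]

def decode(idx):
    """Inverse of encode: indices back to an amino-acid string."""
    return "".join(AMINO_ACIDS[i] for i in idx)
```

+Integer indices make mutation (replace one index) and binding-energy lookups (index into a 20x20 potential) straightforward array operations.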
+
+The payoff structure is designed to capture the biological incentives of both players: the antibody aims to bind strongly to the virus while avoiding binding to human proteins (an anti-target), whereas the virus seeks to evade antibody binding while maintaining its ability to bind to host cell receptors (a binding target). Mathematically, we define the antibody's payoff $R_{a}$ as:
+
+$$
+R _ {a} (v, a) = B (v, a) - B \left(t _ {a} ^ {-}, a\right) - B \left(v, t _ {v} ^ {+}\right) \tag {1}
+$$
+
+where $B(v, a)$ represents the binding strength between the virus $v$ and antibody $a$ , $t_a^-$ is the antibody's anti-target, and $t_v^+$ is the virus's binding target. The virus's payoff $R_v$ is simply the negative of the antibody's payoff: $R_v(v, a) = -R_a(v, a)$ , see Figure 2b. This formulation also ensures that neither player can adopt an overly simplistic strategy: the virus can't become entirely inert without losing its ability to infect host cells, and the antibody can't become universally "sticky" without binding to the human protein, a "false positive", potentially causing the immune system to attack the human body. We give the full Markov Decision Process (MDP) definition $\mathcal{M} = \langle S, \mathcal{A}^{\mathrm{v}}, \mathcal{A}^{\mathrm{a}}, P, R, \mu \rangle$ in Appendix E.
+
+# 4.2. Simulated Viral Escape via Evolution
+
+We model the viral escape as a virus $\hat{v}$ naively evolving for $H$ steps in response to some fixed antibody $a$. The simulated viral escape via evolution, see Figure 2a and Algorithm 1, is defined as follows. Given a starting virus $\hat{v}$, the fixed antibody $a$ induces a distribution $\operatorname{Ev}(\hat{v}, a)$ over sequences of viruses $\hat{\pmb{v}} = [\hat{v}^0, \hat{v}^1, \hat{v}^2, \dots, \hat{v}^H]$, where $\hat{v}^0 = \hat{v}$ and $H$ is the chosen horizon length. We write $\hat{\pmb{v}} \sim \operatorname{Ev}(\hat{v}, a)$ to denote this relationship.
+
+We define the process of generating the escape trajectories inductively. In generation $i$ , we have a virus $\hat{v}^i$ . We generate a population of viruses $v_1^i, v_2^i \ldots v_P^i$ by duplicating $\hat{v}^i$ $P$ times, then randomly applying mutations such that on average there is one amino acid mutation per viral sequence:
+
+$$
+v_k^i = \hat{v}^i \oplus \text{Mutation}
+$$
+
+In our experiments $P = 15$ . For every virus in the population, we evaluate its fitness given by $R_{v}(v_{k}^{i},a)$ . We then sample a new virus $\hat{v}^{i + 1}$ based on the fitness values, in particular:
+
+$$
+\mathbb {P} (\hat {v} ^ {i + 1} = v _ {k} ^ {i}) \propto \exp (\beta R _ {\mathrm {v}} (v _ {k} ^ {i}, a))
+$$
+
+Duplicates in the population are considered distinct, so the likelihood of a particular variant increases with its number of copies. Furthermore, $\beta$ is a constant which reflects how random the selection process is, with $\beta \rightarrow \infty$ corresponding to deterministic max-fitness selection. After $H$ generations, a full escape trajectory $\hat{\pmb{v}} = [\hat{v}^0, \hat{v}^1, \hat{v}^2, \dots, \hat{v}^H]$ has been generated and the simulated viral escape process ends.
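+The inner loop above can be sketched compactly. The antibody $a$ is held fixed inside `fitness`, which stands in for $R_{\mathrm{v}}(v, a)$; the concrete fitness and mutation operators in the usage example are toy stand-ins, not the paper's binding model.

```python
import numpy as np

# Minimal sketch of simulated viral escape (Algorithm 1): mutate a population
# of P copies, then sample the survivor with probability proportional to
# exp(beta * fitness), for H generations.
def viral_escape(v0, fitness, mutate, H, P=15, beta=1.0, seed=0):
    rng = np.random.default_rng(seed)
    traj = [v0]
    v_hat = v0
    for _ in range(H):
        pop = [mutate(v_hat, rng) for _ in range(P)]   # P candidate mutants
        f = np.array([fitness(v) for v in pop])
        p = np.exp(beta * (f - f.max()))               # numerically stable softmax
        p /= p.sum()                                   # P(v) proportional to exp(beta R_v)
        v_hat = pop[rng.choice(P, p=p)]                # fitness-weighted selection
        traj.append(v_hat)
    return traj  # [v^0, v^1, ..., v^H]

def point_mutation(v, rng):
    w = v.copy()
    w[rng.integers(len(w))] = rng.integers(20)
    return w

# Toy escape: fitness counts residues that have mutated away from residue 0.
traj = viral_escape(np.zeros(6, dtype=int), lambda v: float((v != 0).sum()),
                    point_mutation, H=4)
```

+Subtracting the maximum before exponentiating leaves the selection probabilities unchanged while avoiding overflow for large $\beta$.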
+
+# 4.3. Antibody Optimisation
+
+We define the antibody fitness $F_{\hat{v}}^{H}(a)$ such that it represents the true objective of the antibody, which accounts for the viral escape. Given a horizon $H$ and starting virus $\hat{v}$ , the antibody fitness is:
+
+$$
+F _ {\hat {v}} ^ {H} (a) = \mathbb {E} _ {\hat {v} \sim \operatorname {E v} (\hat {v}, a)} \left[ \frac {1}{H + 1} \sum_ {i = 0} ^ {H} R _ {a} \left(\hat {v} ^ {i}, a\right) \right] \tag {2}
+$$
+
+Note that if $H = 0$ this fitness defaults to a naive antibody payoff that ignores viral escape, i.e., $F_{\hat{v}}^{0}(a) = R_{a}(\hat{v},a)$ . We refer to this as the myopic objective.
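+The Monte Carlo estimate of Eq. 2 used in practice is a plain average over sampled trajectories. `sample_escape(a)` and `R_a(v, a)` below are placeholders for the inner loop and payoff defined above; $\eta = 5$ follows the paper.

```python
# Monte Carlo estimator for the shaping fitness F^H in Eq. 2: average the
# antibody payoff along each sampled escape trajectory, then over eta
# independent trajectories.
def fitness_estimate(a, sample_escape, R_a, eta=5):
    total = 0.0
    for _ in range(eta):
        traj = sample_escape(a)                       # [v^0, ..., v^H]
        total += sum(R_a(v, a) for v in traj) / len(traj)
    return total / eta

# With H = 0 the trajectory is just [v^0] and the estimator reduces to the
# myopic objective R_a(v^0, a).
myopic = fitness_estimate("a", lambda a: ["v0"], lambda v, a: 2.5)
```

+Dividing by the trajectory length matches the $\frac{1}{H+1}$ normalisation in Eq. 2, so fitness values are comparable across horizons.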
+
+To optimise both shapers and myopic antibodies, we employ Monte Carlo simulations to estimate the antibody fitness, combined with an evolutionary optimisation algorithm. We refer to this process as the antibody optimisation loop, see Figure 2a and Algorithm 2. In meta-learning terms, this is the outer loop or the meta-loop, contrasting to the inner loop, which is the simulated viral escape via evolution.
+
+Given a starting antibody $\hat{a}$, a starting virus $\hat{v}$ and a viral escape horizon $H$, the antibody optimisation process generates a trajectory of antibodies $\hat{\pmb{a}} = [\hat{a}^0, \hat{a}^1, \hat{a}^2, \dots, \hat{a}^N]$, where $N$ is the number of antibody optimisation steps (i.e., meta-steps). In the trajectory, $\hat{a}^0 = \hat{a}$ is the starting antibody and $\hat{a}^N$ is the final optimised antibody. This optimisation could, in principle, start from any antibody, but for simplicity we opt to start from purely random antibodies, meaning $\hat{a}$ is random. In most of our experiments $N = 30$.
+
+At the start of antibody optimisation step $i$, we have an antibody $\hat{a}^i$. We first generate a population of $P_{a}$ antibodies $[a_1^i, a_2^i, \dots, a_{P_a}^i]$ by taking the antibody $\hat{a}^i$ itself together with $P_{a} - 1$ copies of it, each copy carrying exactly one random mutation in its amino acid sequence. For our experiments, $P_{a} = 40$.
+
+We then estimate their fitness values $F_{\hat{v}}^{H}(a_{k}^{i})$ with a fixed number $\eta$ of Monte Carlo roll-outs, i.e., we sample $\eta$ independent viral escape trajectories, each with horizon $H$ viral escape steps. We found $\eta = 5$ to be sufficient. Finally, we select $\hat{a}^{i + 1}$ to be the best-performing antibody:
+
+$$
+\hat {a} ^ {i + 1} = \arg \max _ {k} \mathbb {E} \left[ F _ {\hat {v}} ^ {H} (a _ {k} ^ {i}) \right]
+$$
+
+Once the final optimised antibody $\hat{a}^N$ is generated, a full optimisation trajectory is complete, $\hat{\pmb{a}} = [\hat{a}^0,\hat{a}^1,\hat{a}^2,\dots ,\hat{a}^N ]$ and the antibody optimisation process finishes.
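+The outer loop can be sketched as a simple elitist evolutionary step: keep the incumbent antibody in the population, add $P_a - 1$ single-point mutants, and greedily select the argmax under the (estimated) fitness. The fitness and mutation operators in the example are toy stand-ins for the Monte Carlo estimate of Eq. 2.

```python
import numpy as np

# Sketch of antibody optimisation (Algorithm 2, outer loop). The paper uses
# P_a = 40 and N = 30; keeping the incumbent in the population (elitism)
# guarantees the selected fitness never decreases across meta-steps.
def optimise_antibody(a0, F, mutate, N, P_a=40, seed=0):
    rng = np.random.default_rng(seed)
    traj = [a0]
    a_hat = a0
    for _ in range(N):
        pop = [a_hat] + [mutate(a_hat, rng) for _ in range(P_a - 1)]
        a_hat = max(pop, key=F)            # a^{i+1} = argmax_k E[F(a_k^i)]
        traj.append(a_hat)
    return traj  # [a^0, ..., a^N]

def point_mutation(a, rng):
    a = list(a)
    a[rng.integers(len(a))] = int(rng.integers(20))
    return tuple(a)

# Toy run: drive the sequence toward low residue indices.
traj = optimise_antibody(tuple([10] * 6), lambda a: -float(sum(a)),
                         point_mutation, N=10, P_a=8)
```

+In the paper the fitness $F$ is itself a noisy Monte Carlo estimate, so in practice the argmax selects the best antibody under the averaged roll-outs rather than a deterministic score.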
+
+# 5. Experimental Setup
+
+# 5.1. Absolut! Speedup
+
+To meet the computational demands of our opponent shaping approach, which requires rapid evaluation of numerous antibody-virus interactions, we reimplement the core binding calculation of Absolut! (Robert et al., 2022) using JAX (Bradbury et al., 2018), a framework that facilitates GPU-accelerated computation (Figure 2c). Our efficient JAX implementation and the GPU acceleration result in a 10,000-fold speedup compared to the original implementation, see Table 1.
+
+| | Absolut! | Absolut! + JAX |
+| --- | --- | --- |
+| Hardware | Apple M2 Max | Nvidia A40 |
+| Time/Antigen (s) | 1.8 | $2.1 \times 10^{-4}$ |
+
+Table 1. Comparison of the time taken to compute a single binding query between the original implementation of Absolut! (Robert et al., 2022) and our reimplementation of Absolut! in JAX (Bradbury et al., 2018). The original Absolut! implementation runs on CPU only, hence the difference in evaluation hardware.
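+The key to this speedup is expressing the per-pose energy evaluation as a single gather-and-sum over arrays, which is exactly the form that `jax.jit`/`jax.vmap` can compile and batch on a GPU. The sketch below uses NumPy so it is self-contained; all shapes and data are toy stand-ins for the real Absolut! structures.

```python
import numpy as np

# Vectorised pose scoring in the style of the JAX reimplementation: all pose
# energies are computed with one fancy-indexed gather and a sum, instead of a
# Python loop over poses. The potential and contact indices are random toys.
rng = np.random.default_rng(0)
MJ = rng.normal(size=(20, 20))                  # stand-in contact potential
virus = rng.integers(0, 20, size=97)            # dengue antigen, N_v = 97
antibody = rng.integers(0, 20, size=11)         # CDRH3-length sequence

n_poses, n_contacts = 1000, 11
v_idx = rng.integers(0, 97, size=(n_poses, n_contacts))   # per-pose contacts
a_idx = rng.integers(0, 11, size=(n_poses, n_contacts))

energies = MJ[virus[v_idx], antibody[a_idx]].sum(axis=1)  # all poses at once
B = -energies.min()                             # binding strength
```

+Because the computation is a fixed-shape array program, swapping `np` for `jax.numpy` and wrapping it in `jax.jit` would let the same code run batched on a GPU, which is the essence of the reported speedup.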
+
+# 5.2. Dengue Virus
+
+Figure 3. Shapers Outperform Myopic Antibodies. a Distribution of antibody shapers optimised with horizon $H = 100$ (orange) vs. the distribution of myopic antibodies (blue). We highlight the top $10\%$ of shapers with respect to $F_{v}^{100}(a)$ in red, and the top $10\%$ of myopic antibodies with respect to $R_{a}(v,a)$ in green. The x-axis is the myopic antibody fitness $R_{a}(v,a)$ and the y-axis is the escape-averaged antibody fitness for $H = 100$, i.e., $F_{v}^{100}(a)$. Higher values on both axes indicate better performance. b Viral escape curves (inner-loop performance) at different steps of the antibody optimisation process (outer loop) for antibody shapers optimised with horizon 100 (solid lines) and myopic antibodies (dashed lines). Lighter lines indicate early antibody optimisation steps; darker lines show later steps. The x-axis shows the evolutionary steps of viral escape. The y-axis represents the virus fitness/payoff $R_{v}(v,a)$; higher values indicate better virus fitness, i.e., lower values indicate better antibody performance. c, d Antibody optimisation learning curves (outer-loop performance) for varying horizon lengths. The x-axis shows the antibody optimisation steps, i.e., meta-steps (c), or the number of samples from the binding simulator (d). The y-axis shows antibody fitness $F_{v}^{100}(a)$. Error bars correspond to the standard error. Higher values indicate better performance.
+
+We use the antigen protein from the Dengue Virus for our main experiments; specifically, the structure with Protein Data Bank (PDB) code 2R29 (Berman et al., 2000; Lok et al., 2008). First, Absolut! processes this structure to generate binding-relevant information, which is then used by our JAX implementation (details given in Appendix D). In the viral escape step, we mutate only the amino acid sequence of the dengue envelope antigen, which is composed of $N_{v} = 97$ amino acids, and do not consider other components of the virus. Importantly, we assume that the structure of the antigen does not change significantly over the course of viral escape. All experiments except those discussed in Section 6.3 and Appendix A use the dengue virus.
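A greedy single-mutation escape step of the kind described here can be sketched as follows; the greedy acceptance rule and the toy payoff are illustrative assumptions, not the exact procedure of our simulator:

```python
def escape_step(virus, payoff, alphabet):
    """One viral escape step: try every single amino-acid substitution
    and keep the mutation that most improves the virus payoff R_v
    (a greedy variant, for illustration)."""
    best_v, best_p = virus, payoff(virus)
    for i in range(len(virus)):
        for aa in alphabet:
            if aa == virus[i]:
                continue
            candidate = virus[:i] + [aa] + virus[i + 1:]
            p = payoff(candidate)
            if p > best_p:
                best_v, best_p = candidate, p
    return best_v

# Toy usage over a binary alphabet: the virus escapes toward residue 0
# under a payoff that counts zeros.
mutated = escape_step([1, 1], payoff=lambda v: v.count(0), alphabet=[0, 1])
```

Each escape step costs one payoff evaluation per candidate mutation, which is why fast batched binding queries matter for long escape horizons.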
+
+# 5.3. Additional Viruses and Bacterium
+
+To demonstrate the robustness of the results we achieve with ADIOS on dengue virus, we conduct additional experiments with three other viruses and one bacterium. The three viral antigens we use are: West Nile Virus, PDB code 1ZTX (Nybakken et al., 2005); Influenza Neuraminidase, PDB code 4QNP (Wan et al., 2015); and MERS-CoV, PDB code 5DO2 (Li et al., 2015). Furthermore, we show that ADIOS can easily be applied to other pathogens, such as bacteria. We perform an additional experiment with the Clostridium difficile bacterium, PDB code 4NP4 (Orth et al., 2014).
+
+# 6. Results
+
+# 6.1. Shapers vs. Myopic Antibodies
+
+We validate the effectiveness of the antibody shapers in optimising the escape-averaged antibody fitness function $F_{v}^{H}(a)$ compared to myopic antibodies that only respond to the current virus $v$. For our shaper antibodies, we select a long horizon of $H = 100$ to capture extended viral escape trajectories. Both shapers and myopic antibodies are optimised for $N = 30$ steps. Figure 3a presents the performance distributions of shapers and myopic antibodies under both objective functions.
+
+Figure 4. Shapers Outperform Myopic Antibodies on Other Viruses and a Bacterium. Antibody optimisation learning curves (outer-loop performance) for varying horizon lengths across three viruses - West Nile Virus, Influenza Virus, and MERS-CoV - as well as the bacterium Clostridium difficile. The x-axis shows the antibody optimisation steps, i.e., meta-steps, and the y-axis shows the antibody fitness $F_{v}^{100}(a)$. Raw fitness values depend on Absolut! scores and are relative to the specific antigen, i.e., absolute values should not be compared between different pathogens; only the overall trends are meaningful. Error bars correspond to the standard error. Higher values indicate better performance. The full set of ADIOS results for these four pathogens is provided in Appendix A.
+
+Our results demonstrate a clear advantage of shapers in the escape-averaged objective $F_{v}^{100}(a)$ . The mean of the shapers distribution significantly exceeds that of the myopic distribution, as evident from the marginal density plot in Figure 3a. Notably, none of the myopic antibodies outperform any of the top $10\%$ of shapers in this long-term objective. However, there is a trade-off between short-term and long-term optimisation. While shapers do better on the escape-averaged objective, they underperform on the myopic objective $R_{a}(v,a)$ .
+
+We next examine the influence of antibody shapers on viral escape trajectories, comparing $H = 100$ shapers with myopic antibodies, both optimised for $N = 30$ steps. Figure 3b illustrates the viral escape curves induced by both antibody types at different stages of their optimisation process. We first complete the antibody optimisation process, saving antibodies generated at steps 0, 10, 20, and 30. For each of these optimisation steps, we then simulate viral escape over $H = 100$ evolutionary steps using the corresponding saved antibodies. The presented viral escape curves are averages derived from multiple simulations.
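The evaluation procedure above can be sketched as follows; `simulate_escape` is a hypothetical callback standing in for the full escape simulation:

```python
def escape_curves(saved_antibodies, simulate_escape, n_seeds=8):
    """For each checkpointed antibody, average the virus-fitness curve
    R_v over several independent escape simulations.
    `simulate_escape(a, seed)` is assumed to return the list of virus
    payoffs along one H-step escape trajectory against antibody a."""
    curves = {}
    for step, antibody in saved_antibodies.items():
        runs = [simulate_escape(antibody, seed) for seed in range(n_seeds)]
        curves[step] = [sum(col) / n_seeds for col in zip(*runs)]
    return curves

# Toy usage with a 2-step "trajectory" whose payoffs depend on the seed.
curves = escape_curves({0: 1}, lambda a, seed: [a + seed, a + 2 * seed],
                       n_seeds=2)
```

Averaging over several seeds is what smooths the escape curves plotted in Figure 3b.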
+
+At the outset of the antibody optimisation process (step 0), both the shapers and the myopic antibodies induce similar escape curves, an expected outcome given their initialisation from random antibody sequences. However, as we examine antibodies from later optimisation steps, we observe diverging trends. Myopic antibodies cause the viral fitness to be lower in the initial escape steps, outperforming the shapers. After about 10 escape steps, corresponding to $\approx 10$ viral mutations, the two antibody types perform similarly. Beyond that, shapers demonstrate superior results in later escape stages, more effectively preventing viral escape.
+
+These results show that as the antibody optimisation process progresses, shapers learn to influence viral trajectories in a way that minimises long-term viral escape, albeit at the cost of initial performance. While myopic antibodies may offer better immediate control, shapers provide more sustained effectiveness against evolving viral populations.
+
+# 6.2. Antibody Shapers with Varying Horizons
+
+Next, we investigate the impact of varying the horizon $H$ on the optimisation process of antibody shapers. We optimise myopic antibodies and shapers using horizons $H \in \{5, 10, 20, 100\}$ for $N = 30$ steps. To evaluate these antibodies against a consistent "true" objective, we simulate viral escape over $H = 100$ steps for each antibody, regardless of the horizon used during its optimisation. Figure 3c presents these results, demonstrating that shapers optimised with longer horizons $H$ consistently yield better performance throughout all steps of the optimisation process.
+
+However, the number of antibody optimisation steps does not accurately reflect the computational or experimental cost of optimisation. Each simulation of viral escape requires a number of binding samples that increases linearly with the horizon length $H$ . Yet, shorter horizon antibodies optimise an objective that diverges further from our "true" antibody objective $F_{v}^{100}(a)$ . Due to this trade-off, we observe that the optimal training horizon varies depending on the available computational budget.
+
+To illustrate this trade-off, we conduct an additional experiment shown in Figure 3d. Here, instead of fixing the number of optimisation steps $N$ , we constrain the total number of binding samples - queries to our binding strength simulator used to evaluate all antibody and virus payoffs throughout the optimisation process - to be constant across different horizons. This approach provides a performance comparison that accounts for the computational resources
+
+necessary across varying horizon lengths. Interestingly, $H = 20$ shapers perform strongly, nearly matching the performance of those optimised with horizon $H = 100$ for a given number of antibody optimisation steps, and far exceeding it when accounting for the differing computational cost. This suggests that using a cheaper, shorter-horizon proxy for the true antibody objective $F_{v}^{100}(a)$ can yield substantial benefits.
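To make the trade-off concrete, here is a back-of-the-envelope sketch. The candidate count and the assumption that each meta-step costs $n_{\text{candidates}} \times \eta \times H$ binding queries are illustrative, not figures from our experiments:

```python
def meta_steps_for_budget(total_samples, horizon, n_candidates, eta=5):
    """Number of outer-loop (meta-)steps affordable under a fixed budget
    of binding-simulator queries, assuming each meta-step evaluates
    n_candidates antibodies with eta roll-outs of `horizon` escape
    steps each (illustrative cost model)."""
    cost_per_step = n_candidates * eta * horizon
    return total_samples // cost_per_step

# Under the same query budget, H=20 shapers get 5x more meta-steps
# than H=100 shapers.
budget = 1_000_000
steps_h20 = meta_steps_for_budget(budget, horizon=20, n_candidates=16)
steps_h100 = meta_steps_for_budget(budget, horizon=100, n_candidates=16)
```

Because the per-step cost scales linearly with $H$, a shorter-horizon proxy buys proportionally more optimisation steps for the same budget.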
+
+More generally, we find that the optimisation horizon significantly influences the performance of antibody shapers. While longer horizons lead to better long-term performance, the optimal horizon length is dependent on the available computational resources. Thus, it is important to consider the balance between computational cost and the fidelity of the optimisation objective when designing antibodies for long-term effectiveness against evolving viral populations.
+
+# 6.3. Antibody Shapers for Other Viruses and Bacterium
+
+To test whether the shaping effects observed on dengue generalise, we evaluate ADIOS on three additional viruses (West Nile, Influenza, and MERS-CoV), as well as the Clostridium difficile bacterium (Figure 4, Appendix A). In all cases, we observe consistent trends: shapers outperform myopic antibodies in escape-averaged fitness, confirming that ADIOS can successfully achieve shaping across diverse pathogens.
+
+For West Nile virus and MERS-CoV, $H = 100$ shapers appear to perform worse than shorter-horizon $H = 20$ antibodies; see Figure 4. However, given that all antibodies are optimised for only 30 meta-steps, and the compute-normalised (bottom-row) plots in Figure A.1 clearly show that $H = 100$ shapers have not yet converged, we hypothesise that $H = 100$ shapers would ultimately yield the best performance given more optimisation time.
+
+Interestingly, the shaping effect is especially strong on the Clostridium difficile bacterium, where $H = 100$ shapers significantly outperform all other antibody types across all reported metrics; see the right-most column in Figure A.1. These results suggest that different antigens can exhibit very different behaviour in a shaping setting. Nonetheless, ADIOS is able to produce antibodies with effective shaping behaviour across all of them.
+
+# 6.4. Attack is the Best Defence
+
+Our previous results demonstrate that antibody shapers, particularly those optimised with longer horizons, effectively minimise viral escape. However, we hypothesise they can achieve this through two distinct strategies: robustness or shaping. A robustness strategy involves developing antibodies that are inherently resistant to a wide range of potential viral variants - a "good defence" approach. In contrast, a shaping strategy aims to actively influence the evolutionary trajectory of the virus itself, creating evolutionary pressures that guide viral mutations in a direction more favourable to antibody binding - an "attack" approach.
+
+
+Figure 5. Robustness vs. Shaping. We optimise 80 different antibodies $a_{H}$ across multiple horizons (Myopic, $H = 5$ , $H = 10$ , $H = 20$ , $H = 100$ ), these are represented by the y-axis. We simulate the viral escape to each of these antibodies for 100 steps, and we group the escape viruses $v_{H}$ by the horizon $H$ of the antibody that induced them; these escape viruses are represented by the x-axis. In colour, we show the mean antibody payoff $R_{a}(v_{H}, a_{H'})$ for each group of optimised antibodies $a_{H'}$ against the final escape viral variant $v_{H}$ induced by other antibodies optimised with horizon $H$ . Darker colours correspond to better antibody payoff.
+
+To disentangle these strategies, we separately evaluate the antibodies and the viruses that evolve in response to them. To do this, we compare the viruses against other antibodies which did not influence the viral evolution. The intuition is that an antibody which is good at shaping (good attack), but less robust (poor defence), will induce viruses which other antibodies will perform well against. Specifically, we generate antibodies $a_{H}$ for each horizon $H$ and simulate viral escape against these antibodies for 100 steps, resulting in viruses $v_{H}$ . For all pairs of horizons $(H, H')$ , we then cross-evaluate the antibody payoff $R_{a}(v_{H}, a_{H'})$ . Figure 5 presents the result of this analysis.
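The cross-evaluation can be sketched as follows, with a toy payoff standing in for Absolut! binding scores:

```python
def cross_evaluate(antibodies_by_h, viruses_by_h, payoff):
    """Build the Figure-5-style matrix of mean antibody payoffs
    R_a(v_H, a_{H'}): escape viruses grouped by the horizon H of the
    antibody that induced them, crossed with antibodies grouped by
    their own optimisation horizon H'."""
    horizons = sorted(antibodies_by_h)
    return {
        (h_v, h_a): sum(payoff(v, a)
                        for v in viruses_by_h[h_v]
                        for a in antibodies_by_h[h_a])
        / (len(viruses_by_h[h_v]) * len(antibodies_by_h[h_a]))
        for h_v in horizons
        for h_a in horizons
    }

# Toy usage: payoff is simply v - a, with one antibody and one induced
# virus per horizon.
matrix = cross_evaluate({5: [1], 100: [2]}, {5: [10], 100: [20]},
                        payoff=lambda v, a: v - a)
```

Off-diagonal entries are the informative ones: they measure how well antibodies do against viruses whose evolution they did not influence.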
+
+Interestingly, viruses $v_{100}$ induced by $H = 100$ shapers are consistently more exploitable by antibodies across all optimisation horizons. This suggests that $H = 100$ shapers actively shape the escape trajectories of the virus in a way that makes the resulting variants more susceptible to antibody binding in general. However, this shaping effect comes at a cost. The $H = 100$ shapers ($a_{100}$) show slightly lower payoffs compared to the peak performance of shorter-horizon antibodies ($a_5$ and $a_{10}$) against the viruses $v_{100}$ induced by the $H = 100$ shapers (see rightmost column of Figure 5). This trade-off indicates that to exert a stronger shaping influence on viral evolution, $H = 100$ shapers sacrifice some degree of immediate performance or robustness, that is, their ability to perform well against a wide range of viruses. Therefore, a potential strategy could involve using a mixture of antibodies as therapy, where some are optimised for shaping the virus's evolutionary trajectory, and others are designed for strong immediate binding.
+
+
+Figure 6. Shaping with External Pressure. A similar experiment to Figure 5, but with the additional external pressure of a separate myopic antibody (see Equation 3).
+
+To investigate whether shaping persists in more realistic scenarios with multiple therapeutic pressures, we conduct an additional experiment. Using the same groups of antibodies $a_{H}$ , we simulate viral escape with external pressure from a myopic antibody $a_{Ext}$ (not included in the original $a_{myopic}$ set). The viruses $v_{H + Ext}$ now evolve according to a modified payoff:
+
+$$
+R _ {v} ^ {E x t} (v, a, a _ {E x t}) = \frac {1}{2} R _ {v} (v, a) + \frac {1}{2} R _ {v} (v, a _ {E x t}) \tag {3}
+$$
+
+which represents the scenario where multiple therapies are present in the environment (for example, during COVID-19 when multiple vaccines were available). Figure 6 shows that while the shaping effect is somewhat reduced compared to our original results, it remains clearly visible. This demonstrates that our shaping approach transfers to test regimes where external pressures from other therapies are present.
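Equation 3 amounts to an equal-weight mixture of two virus payoffs, which can be written directly; `R_v` below is a stand-in for the simulator's virus payoff function:

```python
def virus_payoff_with_external(R_v, a, a_ext):
    """Modified virus payoff of Equation 3: the virus now evolves
    against an equal-weight mixture of the optimised antibody a and
    an external myopic antibody a_ext."""
    def payoff(v):
        return 0.5 * R_v(v, a) + 0.5 * R_v(v, a_ext)
    return payoff

# Toy usage with a multiplicative payoff: the mixture averages the
# pressure exerted by the two antibodies.
p = virus_payoff_with_external(lambda v, ab: v * ab, a=2, a_ext=4)
```

The mixture weights (here both $\tfrac{1}{2}$) could in principle be varied to model therapies with unequal prevalence.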
+
+# 6.5. Explainability Analysis
+
+To understand what distinguishes shapers from myopic antibodies, we conduct two complementary analyses of amino acid distributions and binding poses (Appendices B and C). Examining amino acid distributions, we find that long-horizon shapers exhibit more uniform distributions, while myopic antibodies tend to cluster around amino acids with extreme binding energies. We hypothesise that by maintaining diversity in their amino acid composition, shapers can preserve robustness against viral mutation, since the virus cannot easily escape by avoiding specific high-binding parts of the antibody.
+
+Through an analysis of binding pose matrices, which represent which parts of the antibody and virus bind with each other, we observe that these interaction patterns change significantly as the virus adapts. However, the type of antibody influences the nature of these changes: $H = 100$ shapers actively constrain viral evolution by both preventing unfavourable binding configurations and preserving favourable ones. While these findings are specific to our Absolut! binding simulator, they hint at explainable strategies that shapers use to influence viral evolution, which could inform future antibody design approaches.
+
+# 7. Conclusion & Future Work
+
+In this work, we introduce ADIOS, a meta-learning framework for designing therapeutic antibodies that not only defend against current viral strains but also actively shape viral evolution. We provide a GPU-accelerated JAX implementation of Absolut!, enabling rapid simulation of viral escape trajectories and outer-loop optimisation. Our results demonstrate that shapers are not only more robust against viral escape, but also shape viral evolution toward more targetable variants. Lastly, we provide an explainability analysis of how shapers achieve this level of robustness and influence, which we hope will inspire practitioners.
+
+Although dengue virus served as our primary benchmark, we evaluated ADIOS on three other viruses (West Nile, Influenza and MERS-CoV) and on the bacterium Clostridium difficile. In all four cases, ADIOS consistently achieves shaping. These results confirm that ADIOS generalises across a diverse set of pathogens. More broadly, the same opponent-shaping principle can be transferred to monoclonal-antibody (mAb) therapy for cancer (Zahavi & Weiner, 2020). In that setting, the outer loop would optimise therapeutic mAbs, while the inner loop would simulate the evolution of cancer-cell growth-factor receptors; the goal would be to shape cancer cells toward variants that do not proliferate well. Exploring this direction, together with other bacterial or antimicrobial-resistance scenarios, remains an exciting avenue for future work.
+
+While our current implementation uses simplified binding and evolutionary escape models that prevent direct therapeutic application, ADIOS could be integrated with more sophisticated models, like AlphaFold3 (Abramson et al., 2024), to better capture evolving viral and antibody structures. As computational models of protein interactions and evolutionary processes continue to improve, ADIOS has the potential to transform how we develop therapies against viruses, cancers, and other evolving adversaries.
+
+# Acknowledgements
+
+We would like to thank Marius Urbonas, Duygu Açıkalin, Michael Matthews, Benjamin Ellis, Benedetta L. Mussati, Mattie Fellows, Lisa Zillig, Ting Lee, Matthew Raybould and Charlotte Deane for their invaluable contributions to this work. Their thoughtful insights helped guide the project direction, their detailed feedback on earlier drafts significantly enhanced the manuscript's clarity and accessibility, and their support during the review process was instrumental in addressing reviewers' concerns. We also thank the anonymous reviewers for their constructive feedback, which led to improvements in the paper. This work was supported by UK Research and Innovation and the European Research Council - Jakob Foerster is partially funded by the UKRI grant EP/Y028481/1, originally selected for funding by the ERC. This work was also supported by Exscientia, which provided Aleksandra Kalisz with a PhD studentship.
+
+# Impact Statement
+
+This work introduces a machine learning framework for antibody design that could contribute to developing more effective therapies against evolving pathogens like viruses. While this has a potential positive societal impact through improved disease treatment and pandemic preparedness, we acknowledge that therapeutic development tools carry risks if misused. Our current implementation uses simplified models and significant additional research and safety validation would be required before any real-world therapeutic applications.
+
+# References
+
+Abramson, J., Adler, J., Dunger, J., Evans, R., Green, T., Pritzel, A., Ronneberger, O., Willmore, L., Ballard, A. J., Bambrick, J., Bodenstein, S. W., Evans, D. A., Hung, C.-C., O'Neill, M., Reiman, D., Tunyasuvunakool, K., Wu, Z., Žemgulytė, A., Arvaniti, E., Beattie, C., Bertolli, O., Bridgland, A., Cherepanov, A., Congreve, M., Cowen-Rivers, A. I., Cowie, A., Figurnov, M., Fuchs, F. B., Gladman, H., Jain, R., Khan, Y. A., Low, C. M. R., Perlin, K., Potapenko, A., Savy, P., Singh, S., Stecula, A., Thillaisundaram, A., Tong, C., Yakneen, S., Zhong, E. D., Zielinski, M., Žídek, A., Bapat, V., Kohli, P., Jaderberg, M., Hassabis, D., and Jumper, J. M. Accurate structure prediction of biomolecular interactions with AlphaFold 3. Nature, 630(8016): 493-500, June 2024. ISSN 1476-4687. doi: 10.1038/s41586-024-07487-w. URL https://www.nature.com/articles/s41586-024-07487-w.
+Adolf-Bryfogle, J., Kalyuzhniy, O., Kubitz, M., Weitzner, B. D., Hu, X., Adachi, Y., Schief, W. R., and Jr, R. L. D. RosettaAntibodyDesign (RAbD): A general framework
+
+for computational antibody design. PLOS Computational Biology, 14(4):e1006112, April 2018. ISSN 1553-7358. doi: 10.1371/journal.pcbi.1006112. URL https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1006112. Publisher: Public Library of Science.
+Bennett, N. R., Watson, J. L., Ragotte, R. J., Borst, A. J., See, D. L., Weidle, C., Biswas, R., Shrock, E. L., Leung, P. J. Y., Huang, B., Goreshnik, I., Ault, R., Carr, K. D., Singer, B., Criswell, C., Vafeados, D., Sanchez, M. G., Kim, H. M., Torres, S. V., Chan, S., and Baker, D. Atomically accurate de novo design of single-domain antibodies, March 2024. URL https://www.biorxiv.org/content/10.1101/2024.03.14.585103v1. Pages: 2024.03.14.585103 Section: New Results.
+Berman, H. M., Westbrook, J., Feng, Z., Gilliland, G., Bhat, T. N., Weissig, H., Shindyalov, I. N., and Bourne, P. E. The Protein Data Bank. *Nucleic Acids Research*, 28(1): 235-242, January 2000. ISSN 0305-1048. doi: 10.1093/nar/28.1.235. URL https://doi.org/10.1093/nar/28.1.235.
+Bradbury, J., Frostig, R., Hawkins, P., Johnson, M. J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., and Wanderman-Milne, S. JAX: composable transformations of Python+ NumPy programs. 2018. URL https://scholar.google.com/scholar?cluster=8800325452556162710&hl=en&oi=scholarr.
+Carabelli, A. M., Peacock, T. P., Thorne, L. G., Harvey, W. T., Hughes, J., de Silva, T. I., Peacock, S. J., Barclay, W. S., de Silva, T. I., Towers, G. J., and Robertson, D. L. SARS-CoV-2 variant biology: immune escape, transmission and fitness. Nature Reviews Microbiology, 21(3): 162-177, March 2023. ISSN 1740-1534. doi: 10.1038/s41579-022-00841-7. URL https://www.nature.com/articles/s41579-022-00841-7. Publisher: Nature Publishing Group.
+Chéron, N., Serohijos, A. W., Choi, J., and Shakhnovich, E. I. Evolutionary dynamics of viral escape under antibodies stress: A biophysical model. *Protein Science: A Publication of the Protein Society*, 25(7):1332-1340, July 2016. ISSN 0961-8368. doi: 10.1002/pro.2915. URL https://www.ncbi.nlm.nih.gov/PMC/articles/PMC4918420/.
+wwPDB consortium. Protein Data Bank: the single global archive for 3D macromolecular structure data. Nucleic Acids Research, 47(D1):D520-D528, October 2018. ISSN 0305-1048. doi: 10.1093/nar/gky949. URL https://doi.org/10.1093/nar/gky949.
+
+Cutting, D., Dreyer, F. A., Errington, D., Schneider, C., and Deane, C. M. De novo antibody design with SE(3) diffusion, May 2024. URL http://arxiv.org/abs/2405.07622.arXiv:2405.07622 [q-bio].
+Dingens, A. S., Arenz, D., Weight, H., Overbaugh, J., and Bloom, J. D. An Antigenic Atlas of HIV-1 Escape from Broadly Neutralizing Antibodies Distinguishes Functional and Structural Epitopes. Immunity, 50(2):520-532.e3, February 2019. ISSN 1074-7613. doi: 10.1016/j.immuni.2018.12.017. URL https://www.cell.com/immunity/abstract/S1074-7613(18)30565-X. Publisher: Elsevier.
+Doud, M. B., Lee, J. M., and Bloom, J. D. How single mutations affect viral escape from broad and narrow antibodies to H1 influenza hemagglutinin. Nature Communications, 9(1):1386, April 2018. ISSN 2041-1723. doi: 10.1038/s41467-018-03665-3. URL https://www.nature.com/articles/s41467-018-03665-3. Publisher: Nature Publishing Group.
+Faramarzi, A., Norouzi, S., Dehdarirad, H., Aghlmand, S., Yusefzadeh, H., and Javan-Noughabi, J. The global economic burden of COVID-19 disease: a comprehensive systematic review and meta-analysis. Systematic Reviews, 13(1):68, February 2024. ISSN 2046-4053. doi: 10.1186/s13643-024-02476-6. URL https://doi.org/10.1186/s13643-024-02476-6.
+Foerster, J., Chen, R. Y., Al-Shedivat, M., Whiteson, S., Abbeel, P., and Mordatch, I. Learning with Opponent-Learning Awareness. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS '18, pp. 122-130, Richland, SC, July 2018. International Foundation for Autonomous Agents and Multiagent Systems.
+Greaney, A. J., Starr, T. N., Gilchuk, P., Zost, S. J., Binshtein, E., Loes, A. N., Hilton, S. K., Huddleston, J., Eguia, R., Crawford, K. H. D., Dingens, A. S., Nargi, R. S., Sutton, R. E., Suryadevara, N., Rothlauf, P. W., Liu, Z., Whelan, S. P. J., Carnahan, R. H., Crowe, J. E., and Bloom, J. D. Complete Mapping of Mutations to the SARS-CoV-2 Spike Receptor-Binding Domain that Escape Antibody Recognition. Cell Host & Microbe, 29(1):44-57.e9, January 2021. ISSN 1931-3128. doi: 10.1016/j.chom.2020.11.007. URL https://www.cell.com/cell-host-microbe/abstract/S1931-3128(20)30624-7. Publisher: Elsevier.
+Han, W., Chen, N., Xu, X., Sahil, A., Zhou, J., Li, Z., Zhong, H., Gao, E., Zhang, R., Wang, Y., Sun, S., Cheung, P. P.-H., and Gao, X. Predicting the antigenic evolution of SARS-COV-2 with
+
+deep learning. Nature Communications, 14(1):3478, June 2023. ISSN 2041-1723. doi: 10.1038/s41467-023-39199-6. URL https://www.nature.com/articles/s41467-023-39199-6.
+Hollingsworth, S. A. and Dror, R. O. Molecular dynamics simulation for all. Neuron, 99(6):1129-1143, September 2018. ISSN 0896-6273. doi: 10.1016/j.neuron.2018.08.011. URL https://www.ncbi.nlm.nih.gov/PMC/articles/PMC6209097/.
+Hu, J., Peng, P., Wang, K., Fang, L., Luo, F.-y., Jin, A.-s., Liu, B.-z., Tang, N., and Huang, A.-l. Emerging SARS-CoV-2 variants reduce neutralization sensitivity to convalescent sera and monoclonal antibodies. Cellular and Molecular Immunology, 18(4):1061-1063, April 2021. ISSN 1672-7681. doi: 10.1038/s41423-021-00648-1. URL https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7905196/.
+Jin, W., Barzilay, R., and Jaakkola, T. Antibody-Antigen Docking and Design via Hierarchical Equivariant Refinement, July 2022. URL http://arxiv.org/abs/2207.06616.arXiv:2207.06616 [q-bio].
+Lee, J. M., Eguia, R., Zost, S. J., Choudhary, S., Wilson, P. C., Bedford, T., Stevens-Ayers, T., Boeckh, M., Hurt, A. C., Lakdawala, S. S., Hensley, S. E., and Bloom, J. D. Mapping person-to-person variation in viral mutations that escape polyclonal serum targeting influenza hemagglutinin. eLife, 8:e49324, August 2019. ISSN 2050-084X. doi: 10.7554/eLife.49324. URL https://doi.org/10.7554/eLife.49324. Publisher: eLife Sciences Publications, Ltd.
+Li, T., Pantazes, R. J., and Maranas, C. D. OptMAVEN - A New Framework for the de novo Design of Antibody Variable Region Models Targeting Specific Antigen Epitopes. PLOS ONE, 9(8):e105954, August 2014. ISSN 1932-6203. doi: 10.1371/journal.pone.0105954. URL https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0105954. Publisher: Public Library of Science.
+Li, Y., Wan, Y., Liu, P., Zhao, J., Lu, G., Qi, J., Wang, Q., Lu, X., Wu, Y., Liu, W., Zhang, B., Yuen, K.-Y., Perlman, S., Gao, G. F., and Yan, J. A humanized neutralizing antibody against MERS-CoV targeting the receptor-binding domain of the spike protein. Cell Research, 25 (11):1237-1249, November 2015. ISSN 1748-7838. doi: 10.1038/cr.2015.113. URL https://www.nature.com/articles/cr2015113. Publisher: Nature Publishing Group.
+Lim, Y. W., Adler, A. S., and Johnson, D. S. Predicting antibody binders and generating synthetic antibodies us-
+
+ing deep learning. mAbs, 14(1):2069075, 2022. ISSN 1942-0870. doi: 10.1080/19420862.2022.2069075.
+Liu, G., Zeng, H., Mueller, J., Carter, B., Wang, Z., Schilz, J., Horny, G., Birnbaum, M. E., Ewert, S., and Gifford, D. K. Antibody complementarity determining region design using high-capacity machine learning. Bioinformatics, 36(7):2126-2133, April 2020. ISSN 1367-4803. doi: 10.1093/bioinformatics/btz895. URL https://doi.org/10.1093/bioinformatics/btz895.
+Lok, S.-M., Kostyuchenko, V., Nybakken, G. E., Holdaway, H. A., Battisti, A. J., Sukupolvi-Petty, S., Sedlak, D., Fremont, D. H., Chipman, P. R., Roehrig, J. T., Diamond, M. S., Kuhn, R. J., and Rossmann, M. G. Binding of a neutralizing antibody to dengue virus alters the arrangement of surface glycoproteins. Nature Structural & Molecular Biology, 15(3):312-317, March 2008. ISSN 1545-9985. doi: 10.1038/nsmb.1382.
+Lu, C., Willi, T., Witt, C. A. S. D., and Foerster, J. Model-Free Opponent Shaping. In Proceedings of the 39th International Conference on Machine Learning, pp. 14398-14411. PMLR, June 2022. URL https://proceedings.mlr.press/v162/lu22d.html.
+Lucas, M., Karrer, U., Lucas, A., and Klenerman, P. Viral escape mechanisms - escapology taught by viruses. International Journal of Experimental Pathology, 82(5):269-286, 2001. ISSN 1365-2613. doi: 10.1046/j.1365-2613.2001.00204.x. URL https://onlinelibrary.wiley.com/doi/abs/10.1046/j.1365-2613.2001.00204.x.
+Madhi, S. A., Baillie, V., Cutland, C. L., Voysey, M., Koen, A. L., Fairlie, L., Padayachee, S. D., Dheda, K., Barnabas, S. L., Bhorat, Q. E., Briner, C., Kwatra, G., Ahmed, K., Aley, P., Bhikha, S., Bhiman, J. N., Bhorat, A. E., du Plessis, J., Esmail, A., Groenewald, M., Horne, E., Hwa, S.-H., Jose, A., Lambe, T., Laubscher, M., Malahleha, M., Masenya, M., Masilela, M., McKenzie, S., Molapo, K., Moultrie, A., Oelofse, S., Patel, F., Pillay, S., Rhead, S., Rodel, H., Rossouw, L., Taoushanis, C., Tegally, H., Thombrayil, A., van Eck, S., Wibmer, C. K., Durham, N. M., Kelly, E. J., Villafana, T. L., Gilbert, S., Pollard, A. J., de Oliveira, T., Moore, P. L., Sigal, A., Izu, A., NGS-SA Group, and Wits-VIDA COVID Group. Efficacy of the ChAdOx1 nCoV-19 Covid-19 Vaccine against the B.1.351 Variant. The New England Journal of Medicine, 384(20):1885–1898, May 2021. ISSN 1533-4406. doi: 10.1056/NEJMoa2102214.
+Martinkus, K., Ludwiczak, J., Cho, K., Liang, W.-C., Lafrance-Vanasse, J., Hotzel, I., Rajpal, A., Wu, Y., Bonneau, R., Gligorijevic, V., and Loukas, A. AbDiffuser:
+
+Full-Atom Generation of in vitro Functioning Antibodies, March 2024. URL http://arxiv.org/abs/2308.05027.arXiv:2308.05027 [q-bio].
+Mason, D. M., Friedensohn, S., Weber, C. R., Jordi, C., Wagner, B., Meng, S. M., Ehling, R. A., Bonati, L., Dahinden, J., Gainza, P., Correia, B. E., and Reddy, S. T. Optimization of therapeutic antibodies by predicting antigen specificity from antibody sequence via deep learning. Nature Biomedical Engineering, 5(6): 600-612, June 2021. ISSN 2157-846X. doi: 10.1038/s41551-021-00699-9. URL https://www.nature.com/articles/s41551-021-00699-9.
+Meijers, M., Ruchnewitz, D., Łuksza, M., and Lässig, M. Vaccination shapes evolutionary trajectories of SARS-CoV-2, July 2022. URL http://arxiv.org/abs/2207.09329.arXiv:2207.09329 [q-bio].
+Miyazawa, S. and Jernigan, R. L. An empirical energy potential with a reference state for protein fold and sequence recognition. Proteins: Structure, Function, and Bioinformatics, 36(3):357-369, 1999. ISSN 1097-0134. doi: 10.1002/(SICI)1097-0134(19990815)36:3<357::AID-PROT10>3.0.CO;2-U. URL https://onlinelibrary.wiley.com/doi/abs/10.1002/%28SICI%291097-0134%2819990815%2936%3A3%3C357%3A%3AAID-PROT10%3E3.0.CO%3B2-U.
+Nandi, A. and Shet, A. Why vaccines matter: understanding the broader health, economic, and child development benefits of routine vaccination. Human Vaccines & Immunotherapeutics, 16(8):1900-1904. ISSN 2164-5515. doi: 10.1080/21645515.2019.1708669. URL https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7482790/.
+Nie, Z., Liu, X., Chen, J., Wang, Z., Liu, Y., Si, H., Dong, T., Xu, F., Song, G., Wang, Y., Zhou, P., Gao, W., and Tian, Y. A unified evolution-driven deep learning framework for virus variation driver prediction. Nature Machine Intelligence, 7(1):131-144, January 2025. ISSN 2522-5839. doi: 10.1038/s42256-024-00966-9. URL https://www.nature.com/articles/s42256-024-00966-9. Publisher: Nature Publishing Group.
+Nybakken, G. E., Oliphant, T., Johnson, S., Burke, S., Diamond, M. S., and Fremont, D. H. Structural basis of West Nile virus neutralization by a therapeutic antibody. Nature, 437(7059):764-769, September 2005. ISSN 1476-4687. doi: 10.1038/nature03956. URL https://www.nature.com/articles/nature03956. Publisher: Nature Publishing Group.
+
+Orenstein, W. A. and Ahmed, R. Simply put: Vaccination saves lives. Proceedings of the National Academy of Sciences, 114(16):4031-4033, April 2017. doi: 10.1073/pnas.1704507114. URL https://www.pnas.org/doi/10.1073/pnas.1704507114. Publisher: Proceedings of the National Academy of Sciences.
+Orth, P., Xiao, L., Hernandez, L. D., Reichert, P., Sheth, P. R., Beaumont, M., Yang, X., Murgolo, N., Ermakov, G., DiNunzio, E., Racine, F., Karczewski, J., Secore, S., Ingram, R. N., Mayhood, T., Strickland, C., and Therien, A. G. Mechanism of Action and Epitopes of Clostridium difficile Toxin B-neutralizing Antibody Bezlotoxumab Revealed by X-ray Crystallography. Journal of Biological Chemistry, 289(26):18008-18021, June 2014. ISSN 0021-9258. doi: 10.1074/jbc.M114.560748. URL https://www.sciencedirect.com/science/article/pii/S0021925820405149.
+Pereira, P., Minoux, H., Walczak, A. M., and Mora, T. Energy-based generative models for monoclonal antibodies, November 2024. URL http://arxiv.org/abs/2411.13390. arXiv:2411.13390 [q-bio].
+Robert, P. A., Akbar, R., Frank, R., Pavlovic, M., Widrich, M., Snapkov, I., Slabodkin, A., Chernigovskaya, M., Scheffer, L., Smorodina, E., Rawat, P., Mehta, B. B., Vu, M. H., Mathisen, I. F., Prósz, A., Abram, K., Olar, A., Miho, E., Haug, D. T. T., Lund-Johansen, F., Hochreiter, S., Haff, I. H., Klambauer, G., Sandve, G. K., and Greiff, V. Unconstrained generation of synthetic antibody-antigen structures to guide machine learning methodology for antibody specificity prediction. Nature Computational Science, 2(12):845-865, December 2022. ISSN 2662-8457. doi: 10.1038/s43588-022-00372-4. URL https://www.nature.com/articles/s43588-022-00372-4. Publisher: Nature Publishing Group.
+Ruffolo, J. A., Chu, L.-S., Mahajan, S. P., and Gray, J. J. Fast, accurate antibody structure prediction from deep learning on massive set of natural antibodies. Nature Communications, 14(1):2389, April 2023. ISSN 2041-1723. doi: 10.1038/s41467-023-38063-x.
+Saka, K., Kakuzaki, T., Metsugi, S., Kashiwagi, D., Yoshida, K., Wada, M., Tsunoda, H., and Teramoto, R. Antibody design using LSTM based deep generative model from phage display library for affinity maturation. Scientific Reports, 11(1):5852, March 2021. ISSN 2045-2322. doi: 10.1038/s41598-021-85274-7. URL https://www.nature.com/articles/s41598-021-85274-7. Publisher: Nature Publishing Group.
+Samsudin, E. Z., Yasin, S. M., Ruslan, N.-H., Abdullah, N. N., Noor, A. F. A., and Hair, A. F. A. Socioeconomic impacts of airborne and droplet-borne infectious diseases on industries: a systematic review. BMC Infectious Diseases, 24(1):93, January 2024. ISSN 1471-2334. doi: 10.1186/s12879-024-08993-y. URL https://doi.org/10.1186/s12879-024-08993-y.
+Shanker, V. R., Bruun, T. U. J., Hie, B. L., and Kim, P. S. Unsupervised evolution of protein and antibody complexes with a structure-informed language model. Science, 385(6704):46-53, July 2024. doi: 10.1126/science.adk8946. URL https://www.science.org/doi/10.1126/science.adk8946. Publisher: American Association for the Advancement of Science.
+Thadani, N. N., Gurev, S., Notin, P., Youssef, N., Rollins, N. J., Ritter, D., Sander, C., Gal, Y., and Marks, D. S. Learning from prepandemic data to forecast viral escape. Nature, 622(7984):818-825, October 2023. ISSN 1476-4687. doi: 10.1038/s41586-023-06617-0. URL https://www.nature.com/articles/s41586-023-06617-0. Publisher: Nature Publishing Group.
+VanDyk, L. and Meek, K. Assembly of IgH CDR3: mechanism, regulation, and influence on antibody diversity. International Reviews of Immunology, 8(2-3):123-133, 1992. ISSN 0883-0185. doi: 10.3109/08830189209055568.
+Wan, H., Yang, H., Shore, D. A., Garten, R. J., Couzens, L., Gao, J., Jiang, L., Carney, P. J., Villanueva, J., Stevens, J., and Eichelberger, M. C. Structural characterization of a protective epitope spanning A(H1N1)pdm09 influenza virus neuraminidase monomers. Nature Communications, 6(1):6114, February 2015. ISSN 2041-1723. doi: 10.1038/ncomms7114. URL https://www.nature.com/articles/ncomms7114. Publisher: Nature Publishing Group.
+Wang, G., Liu, X., Wang, K., Gao, Y., Li, G., Baptista-Hon, D. T., Yang, X. H., Xue, K., Tai, W. H., Jiang, Z., Cheng, L., Fok, M., Lau, J. Y.-N., Yang, S., Lu, L., Zhang, P., and Zhang, K. Deep-learning-enabled protein-protein interaction analysis for prediction of SARS-CoV-2 infectivity and variant evolution. Nature Medicine, 29(8):2007-2018, August 2023. ISSN 1546-170X. doi: 10.1038/s41591-023-02483-5. URL https://www.nature.com/articles/s41591-023-02483-5. Publisher: Nature Publishing Group.
+Weisblum, Y., Schmidt, F., Zhang, F., DaSilva, J., Poston, D., Lorenzi, J. C., Muecksch, F., Rutkowska, M., Hoffmann, H.-H., Michailidis, E., Gaebler, C., Agudelo, M., Cho, A., Wang, Z., Gazumyan, A., Cipolla, M., Luchsinger, L., Hillyer, C. D., Caskey, M., Robbiani, D. F., Rice, C. M., Nussenzweig, M. C., Hatzioannou, T., and Bieniasz, P. D. Escape from neutralizing antibodies by SARS-CoV-2 spike protein variants. eLife, 9:e61312, October 2020. ISSN 2050-084X. doi: 10.7554/eLife.61312. URL https://doi.org/10.7554/eLife.61312. Publisher: eLife Sciences Publications, Ltd.
+Huang, Y., Zhang, Z., and Zhou, Y. AbAgIntPre: A deep learning method for predicting antibody-antigen interactions based on sequence information. Frontiers in Immunology, 2022. doi: 10.3389/fimmu.2022.1053617.
+Zahavi, D. and Weiner, L. Monoclonal Antibodies in Cancer Therapy. Antibodies, 9(3):34, July 2020. ISSN 2073-4468. doi: 10.3390/antib9030034. URL https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7551545/.
+Zambaldi, V., La, D., Chu, A. E., Patani, H., Danson, A. E., Kwan, T. O. C., Frerix, T., Schneider, R. G., Saxton, D., Thillaisundaram, A., Wu, Z., Moraes, I., Lange, O., Papa, E., Stanton, G., Martin, V., Singh, S., Wong, L. H., Bates, R., Kohl, S. A., Abramson, J., Senior, A. W., Alguel, Y., Wu, M. Y., Aspalter, I. M., Bentley, K., Bauer, D. L. V., Cherepanov, P., Hassabis, D., Kohli, P., Fergus, R., and Wang, J. De novo design of high-affinity protein binders with AlphaProteo, September 2024. URL http://arxiv.org/abs/2409.08022. arXiv:2409.08022 [q-bio].
+
+# A. Experimental Results on Other Viruses and a Bacterium
+
+Figure A.1. Shapers Outperform Myopic Antibodies on Other Viruses and a Bacterium. The figure shows results across four different pathogens: West Nile Virus, Influenza Virus, MERS-CoV Virus, and Clostridium Difficile Bacterium (left to right columns). First row: Distribution of antibody shapers optimised with horizon $H = 100$ (orange) vs. the distribution of myopic antibodies (blue). We highlight the top $10\%$ of shapers with respect to $F_{v}^{100}(a)$ in red, and the top $10\%$ of myopic antibodies with respect to $R_{a}(v,a)$ in green. The x-axis is the myopic antibody fitness $R_{a}(v,a)$ and the y-axis is the escape-averaged antibody fitness for $H = 100$ , i.e., $F_{v}^{100}(a)$ . Higher values on both axes indicate better performance. Second row: Viral escape curves (inner-loop performance) for different steps of the antibody optimisation process (outer loop) for antibody shapers optimised with horizon $H = 100$ (solid lines) and myopic antibodies (dashed lines). The lighter lines indicate early antibody optimisation steps, and the darker lines show the later steps. The x-axis shows the evolutionary steps of viral escape. The y-axis represents the virus fitness/payoff $R_{v}(v,a)$ , where higher values indicate better virus fitness (and lower values denote better antibody performance, so lower is better for us). Third row: Antibody optimisation learning curves (outer-loop performance) for varying horizon lengths. The x-axis shows the antibody optimisation steps, i.e., meta-steps, and the y-axis shows antibody fitness $F_{v}^{100}(a)$ . Fourth row: Antibody optimisation learning curves accounting for computational cost. The x-axis shows the number of samples from the binding simulator, and the y-axis shows antibody fitness $F_{v}^{100}(a)$ . Higher values indicate better performance. Error bars correspond to the standard error. In all these results, raw payoff/fitness values depend on Absolut! scores and are relative to the specific antigen, i.e., absolute values should not be compared between different viruses or the bacterium; only the overall trends should be.
+
+# B. Amino Acid Distribution in Shapers and Myopic Antibodies
+
+To further understand the performance differences between myopic antibodies and shapers, we analyse how the amino acid distributions of the antibodies change with the optimisation horizon $H$ . Within our model, each amino acid is solely characterised by its binding strength to other amino acids as defined by the Miyazawa-Jernigan energy potential matrix (Miyazawa & Jernigan, 1999). However, despite this simplicity, we still see interesting patterns in the amino acid distribution. Figure B.1 showcases the results of our experiment.
+
+Antibodies optimised with longer horizons, especially the $H = 100$ shapers, exhibit a more uniform distribution of amino acids, while those with shorter horizons show a tendency to cluster around amino acids associated with either high or low binding energies. The flatter distribution of long-horizon shapers suggests a more diverse and balanced approach to viral antigen binding. We hypothesise that this strategy helps to preserve robustness against viral mutations. By maintaining a more even distribution across energy levels, these antibodies may be less susceptible to viral escape.
+
+In contrast, the clustering behaviour we observe in shorter-horizon antibodies indicates a more specialised strategy. By concentrating on amino acids at the extremes of the binding energy spectrum, these antibodies may achieve strong immediate binding but potentially at the cost of long-term robustness. However, while this analysis hints at the robustness of long-term shapers, it does not fully explain the shaping behaviour we observed in our previous results. In the next section, we investigate the distribution of amino acids within specific binding poses.
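The "flatness" discussed above can be quantified with Shannon entropy over the pooled amino-acid counts; a uniform distribution over the 20 amino acids attains the maximum of $\log_2 20 \approx 4.32$ bits, while clustering on a few residues lowers it. A minimal sketch (the sequences below are illustrative, not taken from our antibody pools):

```python
import math
from collections import Counter

def aa_entropy(sequences):
    """Shannon entropy (bits) of the pooled amino-acid distribution."""
    counts = Counter(aa for seq in sequences for aa in seq)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical pools: a clustered short-horizon pool vs a more diverse one.
clustered = ["KKKLLL", "KKLLKK"]
diverse = ["ACDEFG", "HIKLMN"]
assert aa_entropy(clustered) < aa_entropy(diverse)
```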
+
+
+Figure B.1. Distribution of amino acids in myopic antibodies and shapers. The antibodies are optimised for $N = 30$ steps using different viral escape horizons $H$ . Longer horizon shapers push the amino acid distribution closer to a uniform distribution.
+
+# C. Influence of Antibody Shapers on Binding Positions
+
+In the Absolut! framework (Robert et al., 2022), binding poses are defined as sets of interacting residue pairs between the antibody and the antigen. The binding energy of a pose is calculated by populating these residue locations with the amino acid sequences of both the antibody and the virus and then summing the pairwise interaction energies defined by Miyazawa & Jernigan (1999). Absolut! considers a vast number of possible poses (on the order of $10^{6}$ ) and determines the overall interaction energy as that of the minimum-energy pose; refer to Appendix D for more details. Importantly, only a small part of the viral sequence contributes to this minimum-energy pose.
+
+As both the virus and the antibody mutate during our optimisation process, the lowest energy pose can change. To capture these dynamics, we introduced the concept of a pose matrix: a $20 \times 20$ matrix with one entry for each possible pair of amino acids. One dimension corresponds to the antibody amino acids, and the other to the viral amino acids. The entries in this matrix represent the number of interactions between the specific amino acid pairs in the lowest energy pose for the binding configuration between an antibody and an antigen.
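As a sketch of this bookkeeping, a pose matrix can be computed by counting amino-acid pairs over the residue-index pairs of the lowest-energy pose (the sequences and pose below are illustrative, not Absolut! data):

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # standard 20-letter alphabet
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def pose_matrix(virus_seq, antibody_seq, pose):
    """20x20 count matrix of amino-acid pairs in a pose.

    `pose` is a set of (i, j) residue-index pairs: position i on the
    virus interacts with position j on the antibody.
    """
    mat = [[0] * 20 for _ in range(20)]
    for i, j in pose:
        mat[AA_INDEX[virus_seq[i]]][AA_INDEX[antibody_seq[j]]] += 1
    return mat

# Illustrative example: two contacts, K-C and M-D.
m = pose_matrix("KIM", "CDR", {(0, 0), (2, 1)})
assert m[AA_INDEX["K"]][AA_INDEX["C"]] == 1
```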
+
+Figure C.1a presents average pose matrices from multiple optimisation runs of both myopic antibodies and long-horizon shapers. We observe two key trends. First, as viral escape steps increase (top row vs bottom row), the pose matrices become more "diffused". This is expected, as the virus explores more "pose possibilities" through mutations during escape. Second, as the horizon of antibody optimisation increases, the poses also become more "diffused". This is particularly interesting, as all antibodies have the same number of mutations regardless of the horizon, suggesting that this diffusion might relate to the increased robustness of shapers.
+
+
+Figure C.1. Influence of antibody shapers on binding poses. a Average pose matrices between antibodies optimised using different horizons and the virus at various stages of its escape. The escape steps increase from left to right, and the horizon increases from top to bottom. The full grid of matrices with more antibody horizons and virus escape steps is available in the Supplementary Information, Figure C.2. b, c Aggregated sum of pose matrices w.r.t. the antibody axis (b) and w.r.t. the virus axis (c). The plots show the change in the interaction counts in the poses from viral escape step 0 to 100. Red indicates a decrease in the interaction count and green an increase.
+
+
+
+To further understand these pose dynamics, we aggregate the pose matrices along the antibody axis (Figure C.1b) and the virus axis (Figure C.1c). These figures show the change in interaction counts between viral escape steps 0 and 100. Figure C.1b shows that as the virus escapes it includes more of the antibody's lowest binding amino acids (particularly K, Lysine) in the pose. Notably, long-horizon shapers, especially $H = 100$ shapers, are most effective at preventing this increase in K (Lysine) interactions. Furthermore, Figure C.1c shows another viral escape strategy, where the virus removes its high-binding amino acids I (Isoleucine) and M (Methionine) from the pose. Again, $H = 100$ shapers are most successful in counteracting this trend, although they cannot completely prevent it.
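The aggregation in Figure C.1b/c amounts to marginalising each pose matrix over one axis and differencing two escape steps. A minimal sketch, assuming rows index virus amino acids and columns index antibody amino acids (the toy 2x2 matrices stand in for the real 20x20 ones):

```python
def marginal_delta(mat_step0, mat_step100, axis):
    """Per-amino-acid change in interaction counts between escape steps.

    axis=0 sums over virus amino acids, leaving a per-antibody-amino-acid
    profile (as in Figure C.1b); axis=1 sums over antibody amino acids,
    leaving a per-virus-amino-acid profile (Figure C.1c). Negative entries
    correspond to the red (decrease) cells, positive to green (increase).
    """
    def marginal(mat):
        if axis == 0:
            return [sum(col) for col in zip(*mat)]  # collapse virus axis
        return [sum(row) for row in mat]            # collapse antibody axis
    m0, m100 = marginal(mat_step0), marginal(mat_step100)
    return [after - before for before, after in zip(m0, m100)]

# Toy matrices for escape steps 0 and 100.
step0 = [[1, 0], [0, 1]]
step100 = [[2, 1], [0, 0]]
assert marginal_delta(step0, step100, 0) == [1, 0]
assert marginal_delta(step0, step100, 1) == [2, -1]
```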
+
+
+Figure C.2. Full grid of pose matrices. They represent the average pose interactions between myopic antibodies or antibody shapers optimised with different horizons and viruses at different stages of their escape.
+
+Based on these observations, we hypothesise that the shaping ability of $H = 100$ shapers relies on two main mechanisms: preventing the virus from including the antibody's lowest-binding amino acids in the pose, and inhibiting the virus from removing its own high-binding amino acids from the pose. These strategies constrain the viral escape trajectories, making the resulting viral variants more susceptible to antibody binding in general. While these results are specific to our Absolut! binding simulator, they demonstrate that the behaviour of antibody shapers is both explainable and intuitive. This work serves as a proof of concept, showing that opponent shaping techniques can optimise antibodies to more effectively prevent viral escape.
+
+# D. Binding Function
+
+In general, ADIOS is independent of the choice of the binding function $B:\mathbb{A}^{N_v}\times \mathbb{A}^{N_a}\to \mathbb{R}$ . In our work, we rely on the Absolut! framework (Robert et al., 2022) to implement the binding function. In this section, we mathematically formalise the binding energy calculation that Absolut! uses. For further explanation, readers are recommended to refer to the original Absolut! paper (Robert et al., 2022).
+
+For two given protein structures, there are many possible joint configurations, each of which yields an energy. Configurations associated with lower energy require more external energy for the system to leave that state, and are therefore more stable. If a configuration is sufficiently stable, it may be referred to as a binding pose.
+
+In Absolut!, poses are represented as pairs of residues which are adjacent to each other in that pose. In particular, the pairs may be from the antigen to the antibody, or from the antibody to itself. We define the space of possible poses $\Phi$ :
+
+$$
+\Phi = 2 ^ {N _ {v} \times N _ {a}} \times 2 ^ {N _ {a} \times N _ {a}}
+$$
+
+where $N_{v}$ and $N_{a}$ in the exponents are taken to be the sets of integers up to $N_{v}$ and $N_{a}$ , respectively.
+
+The energy of a complex of a virus $v \in \mathbb{A}^{N_v}$ and an antibody $a \in \mathbb{A}^{N_a}$ , in a given pose $(\phi^{v \times a}, \phi^{a \times a}) \in \Phi$ , is defined as the sum of the energies of all adjacent residue pairs. The energy of a residue pair is determined by the two amino acids it contains, given by a symmetric interaction matrix $M: \mathbb{A} \times \mathbb{A} \to \mathbb{R}$ , which is determined experimentally (Miyazawa & Jernigan, 1999).
+
+We then define the energy of a single pose to be:
+
+$$
+\hat{E}(v, a; (\phi^{v \times a}, \phi^{a \times a}), M) = \sum_{(i, j) \in \phi^{v \times a}} M(v_i, a_j) + \sum_{(i, j) \in \phi^{a \times a}} M(a_i, a_j)
+$$
+
+Finally, given a set of poses $S \subseteq \Phi$ , the binding strength is:
+
+$$
+B (v, a) = - E (v, a; S, M) = - \min _ {\phi \in S} \hat {E} (v, a; \phi , M) \tag {4}
+$$
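A minimal sketch of this calculation, using a toy, strictly negative interaction dictionary in place of the real Miyazawa-Jernigan matrix (the sequences, poses, and energies are illustrative, not Absolut! data):

```python
def mj(x, y, M):
    """Symmetric lookup into a toy interaction matrix M."""
    return M[tuple(sorted((x, y)))]

def pose_energy(virus, antibody, pose, M):
    """E-hat of one pose: sum of pairwise interaction energies over the
    virus-antibody pairs phi_va and antibody-antibody pairs phi_aa."""
    phi_va, phi_aa = pose
    return (sum(mj(virus[i], antibody[j], M) for i, j in phi_va)
            + sum(mj(antibody[i], antibody[j], M) for i, j in phi_aa))

def binding_strength(virus, antibody, poses, M):
    """B(v, a) = - min over poses of the pose energy (Equation 4)."""
    return -min(pose_energy(virus, antibody, p, M) for p in poses)

# Toy interaction matrix (NOT the real Miyazawa-Jernigan values).
M = {("A", "A"): -1.0, ("A", "K"): -2.0, ("K", "K"): -0.5}
poses = [(((0, 0),), ()), (((0, 1),), ())]
assert binding_strength("A", "KA", poses, M) == 2.0  # lowest-energy pose wins
```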
+
+Absolut! generates $S$ through a two-step process. First, Absolut! discretises a given structure of the virus $v$ (or any antigen), which is taken from the PDB (consortium, 2018). Second, Absolut! does a brute-force search over possible (discretised) poses for an antibody $a$ to join to the viral structure. The exact details are not necessary for this paper; we refer interested readers to the original paper.
+
+However, we find that Absolut! generates more poses than we require. Since the energy function $E$ is a minimum over poses, certain poses contribute far more than others. In particular, if a pose $\phi$ tends to yield higher energies, so that $\hat{E}(v,a;\phi,M)$ is relatively large, it will have little impact on the result of $B$ .
+
+To give a more concrete example, for this paper, we use the dengue virus antigen (Lok et al., 2008). Absolut! gives $\approx 1.5$ million poses for this structure. Absolut! also comes with $\approx 20$ million real-world antibody sequences. When using the base dengue sequence as the antigen, across the 20 million binding calculations only 1027 binding poses are ever the minimum. Furthermore, the relevance of each pose drops exponentially: the most relevant pose accounts for $20\%$ of binding configurations, and using only the top 100 poses would give the exact same binding result for $95\%$ of antibodies. This gives us a way to make the computation 1000 times faster with a negligible accuracy drop for this particular antigen sequence.
+
+However, this leads to more errors as soon as we change the viral antigen sequence. Looking at the particular poses which lead to binding reveals another way to cut down on the total number of poses: all of the poses contain at least 18 pairs of residues. As the interaction matrix $M$ is strictly negative, having more pairs of residues always makes the binding energy of a pose lower, meaning it is more likely to be where binding occurs. Out of the original 1.5 million poses, only approximately 37 thousand (1 in 40) contain 18 or more pairs of residues. When using only these poses, we see no differences across any of the evolutionary simulations. It is possible that a pose with 17 or fewer pairs is the dominant one for some antigen $v$ with antibody $a$ , but if so, such poses appear to be extremely rare.
+
+Using these methods of pruning poses gives us two subsets of the original set of poses: a larger one which almost exactly matches performance, and a smaller one which sometimes differs but is much faster to compute. We refer to these as the high-resolution and low-resolution binding simulators, respectively. Note that for the low-resolution binding simulator, the more mutations the virus undergoes, the less accurate it becomes. Furthermore, we also compute binding to the antibody anti-target, $t_{a}^{-}$ . To account for this, we compute the relevant poses for this anti-target too.
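The two pruning strategies can be sketched as follows; `energy_fn`, the toy poses, and the reference antibody library are hypothetical stand-ins for the Absolut! machinery:

```python
from collections import Counter

def prune_by_argmin(poses, antibodies, energy_fn, top_k):
    """Low-resolution pruning: keep the top_k poses that are most often
    the minimum-energy pose across a reference antibody library."""
    wins = Counter(
        min(range(len(poses)), key=lambda idx: energy_fn(a, poses[idx]))
        for a in antibodies
    )
    return [poses[idx] for idx, _ in wins.most_common(top_k)]

def prune_by_size(poses, min_pairs=18):
    """High-resolution pruning: with a strictly negative interaction
    matrix, poses with fewer residue pairs rarely attain the minimum,
    so keep only poses with at least `min_pairs` pairs."""
    return [p for p in poses if len(p) >= min_pairs]

# Toy example: poses as tuples of residue-index pairs, and a placeholder
# energy function in which more contacts means lower energy.
poses = [((0, 0),), ((0, 0), (1, 1))]
assert prune_by_size(poses, min_pairs=2) == [((0, 0), (1, 1))]
```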
+
+When running dengue virus experiments, we always train with the low-resolution binding simulator and then perform “verification” with the high-resolution one; these are the results we report throughout the paper. For the other viruses and the bacterium, we run “verification” in a differently initialised instantiation of the low-resolution simulator. The reason is twofold. Firstly, this enables us to run many more evolution experiments. Secondly, this mimics the real-life process of transferring out of simulation to the real world. By showing that we transfer from the low-resolution binding simulation to the slower, high-resolution binding simulation, we demonstrate that our results are not extremely specific to the exact simulation we use and that any result will not disappear as soon as a more accurate simulation is used. We emphasise that Absolut! does not represent an accurate model of antibody binding. It is instead a toy simulation to demonstrate our methodology. For example, we do not expect our framework, when used with this simulation model, to yield highly effective, superior antibodies for real-world applications.
+
+# E. MDP for the Virus-Antibody Game
+
+We formalise a single interaction round between an antibody and a virus as the finite-horizon Markov Decision Process $\mathcal{M}$ :
+
+$$
+\mathcal {M} = \left\langle \mathcal {S}, \mathcal {A} ^ {\mathrm {v}}, \mathcal {A} ^ {\mathrm {a}}, P, R, \mu \right\rangle .
+$$
+
+- State space: $\mathcal{S} = \mathbb{A}^{N_v}\times \mathbb{A}^{N_a}$ , where a state $s = (v,a)$ is the pair of viral $(v)$ and antibody $(a)$ amino-acid sequences.
+- Action spaces (chosen simultaneously):
+
+$$
+\mathcal {A} ^ {\mathrm {v}} (s) = \mathbb {A} ^ {N _ {v}}, \quad \mathcal {A} ^ {\mathrm {a}} (s) = \mathbb {A} ^ {N _ {a}}.
+$$
+
+- Transition kernel (episode terminates after one step):
+
+$$
+P\left(s' \mid s, a^{\mathrm{v}}, a^{\mathrm{a}}\right) = \mathbf{1}_{s' = s_{\star}}, \quad s_{\star} \ \text{is terminal}.
+$$
+
+- Reward vector $R = (R_{\mathrm{v}}, R_{\mathrm{a}})$ . Given joint action $(a^{\mathrm{v}}, a^{\mathrm{a}})$ in state $s$ ,
+
+$$
+R _ {\mathrm {a}} \left(s, a ^ {\mathrm {v}}, a ^ {\mathrm {a}}\right) = B \left(a ^ {\mathrm {v}}, a ^ {\mathrm {a}}\right) - B \left(t _ {a} ^ {-}, a ^ {\mathrm {a}}\right) - B \left(a ^ {\mathrm {v}}, t _ {v} ^ {+}\right), \quad R _ {\mathrm {v}} = - R _ {\mathrm {a}}.
+$$
+
+- Initial-state distribution $\mu$ puts mass on the wild-type virus and the initial antibody candidate: $\mu (v_0,a_0) = 1$ .
+
+Because this is a single-step MDP, the return equals the immediate reward $R$ and the discount factor is irrelevant.
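One round of this zero-sum game can be sketched as follows, with the binding function `B` left abstract; the toy `B` below is a placeholder, not Absolut!:

```python
def antibody_reward(B, v, a, t_a_minus, t_v_plus):
    """R_a for one round: reward binding the virus, penalise binding the
    antibody anti-target t_a^-, and penalise the virus binding its
    target t_v^+ (following the reward definition above)."""
    return B(v, a) - B(t_a_minus, a) - B(v, t_v_plus)

def virus_reward(B, v, a, t_a_minus, t_v_plus):
    """The game is zero-sum: R_v = -R_a."""
    return -antibody_reward(B, v, a, t_a_minus, t_v_plus)

# Placeholder binding function (NOT Absolut!): longer complexes "bind" more.
B = lambda x, y: float(len(x) + len(y))
assert antibody_reward(B, "vv", "aaa", "t", "tttt") == -5.0
assert virus_reward(B, "vv", "aaa", "t", "tttt") == 5.0
```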
\ No newline at end of file
diff --git a/adiosantibodydevelopmentviaopponentshaping/images.zip b/adiosantibodydevelopmentviaopponentshaping/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..8743ebebe5e720b79c501c5336671bf9ad5dfb1a
--- /dev/null
+++ b/adiosantibodydevelopmentviaopponentshaping/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:760fba3d339c261caae2374adfe3a1b58a3dc3670c2f35bfafdb5643ab69b494
+size 1131343
diff --git a/adiosantibodydevelopmentviaopponentshaping/layout.json b/adiosantibodydevelopmentviaopponentshaping/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..f5c1426d5dff3f93dcb8343b329eb7b1ef609fc3
--- /dev/null
+++ b/adiosantibodydevelopmentviaopponentshaping/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1fe063a4a5022ebe331bdf351fe95d1405a404d052745ba68878c10deef2d37d
+size 748094
diff --git a/aeqanatadaptiveendtoendquantizationalignmenttrainingframeworkfornonautoregressivemachinetranslation/ce2d9e25-c788-46ff-a4b8-aa6e92c97f32_content_list.json b/aeqanatadaptiveendtoendquantizationalignmenttrainingframeworkfornonautoregressivemachinetranslation/ce2d9e25-c788-46ff-a4b8-aa6e92c97f32_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..30a7ed4508e77d5597d0bce58d6e586099cd3740
--- /dev/null
+++ b/aeqanatadaptiveendtoendquantizationalignmenttrainingframeworkfornonautoregressivemachinetranslation/ce2d9e25-c788-46ff-a4b8-aa6e92c97f32_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:23511f723229ed0ea6092ad52568fae53dde49a28aad1cda371ee60f2ec7437e
+size 123437
diff --git a/aeqanatadaptiveendtoendquantizationalignmenttrainingframeworkfornonautoregressivemachinetranslation/ce2d9e25-c788-46ff-a4b8-aa6e92c97f32_model.json b/aeqanatadaptiveendtoendquantizationalignmenttrainingframeworkfornonautoregressivemachinetranslation/ce2d9e25-c788-46ff-a4b8-aa6e92c97f32_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..fc0c9da3dd157034ffa106ba8407b79e8474d5c7
--- /dev/null
+++ b/aeqanatadaptiveendtoendquantizationalignmenttrainingframeworkfornonautoregressivemachinetranslation/ce2d9e25-c788-46ff-a4b8-aa6e92c97f32_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1748e3af3b5b443f164b461f04f4e9dbe21a612c7322d5d3e03629d8457b1568
+size 150146
diff --git a/aeqanatadaptiveendtoendquantizationalignmenttrainingframeworkfornonautoregressivemachinetranslation/ce2d9e25-c788-46ff-a4b8-aa6e92c97f32_origin.pdf b/aeqanatadaptiveendtoendquantizationalignmenttrainingframeworkfornonautoregressivemachinetranslation/ce2d9e25-c788-46ff-a4b8-aa6e92c97f32_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..14684b950aa1e04a421f609f090fdd94a57ce6ae
--- /dev/null
+++ b/aeqanatadaptiveendtoendquantizationalignmenttrainingframeworkfornonautoregressivemachinetranslation/ce2d9e25-c788-46ff-a4b8-aa6e92c97f32_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7df8c5cf9bc5bb9c836b6a2fffcdb3e925b9cb4ac57d505cbf141bef9d8818ff
+size 2174279
diff --git a/aeqanatadaptiveendtoendquantizationalignmenttrainingframeworkfornonautoregressivemachinetranslation/full.md b/aeqanatadaptiveendtoendquantizationalignmenttrainingframeworkfornonautoregressivemachinetranslation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..cfac71b1b1c356b88a671c90f4a9828b620c73cf
--- /dev/null
+++ b/aeqanatadaptiveendtoendquantizationalignmenttrainingframeworkfornonautoregressivemachinetranslation/full.md
@@ -0,0 +1,508 @@
+# AEQA-NAT: Adaptive End-to-end Quantization Alignment Training Framework for Non-autoregressive Machine Translation
+
+Xiangyu Qu 12 Guojing Liu 3 Liang Li 124
+
+# Abstract
+
+Non-autoregressive Transformers (NATs) have garnered significant attention due to their efficient decoding compared to autoregressive methods. However, existing conditional dependency modeling schemes based on masked language modeling introduce a training-inference gap in NATs. For instance, while NATs sample target words during training to enhance input, this condition cannot be met during inference, and simply annealing the sampling rate to zero during training leads to model performance degradation. We demonstrate that this training-inference gap prevents NATs from fully realizing their potential. To address this, we propose an adaptive end-to-end quantization alignment training framework, which introduces a semantic consistency space to adaptively align NAT training, eliminating the need for target information and thereby bridging the training-inference gap. Experimental results demonstrate that our method outperforms most existing fully NAT models, delivering performance on par with Autoregressive Transformer (AT) while being 17.0 times more efficient in inference.
+
+# 1. Introduction
+
+Non-autoregressive Transformer (NAT, Gu et al. 2018) has emerged as a promising approach to mitigate the high latency inherent in Autoregressive Transformer (AT, Vaswani et al. 2017), which stems from their sequential token-by-token decoding mechanism. While NAT achieves significant speedup by parallel decoding, it often suffers from translation quality degradation due to insufficient modeling of
+
+$^{1}$ School of Cyber Science and Technology, Shandong University $^{2}$ State Key Laboratory of Cryptography and Digital Economy Security, Shandong University $^{3}$ Ocean University of China $^{4}$ QCL. Correspondence to: Liang Li .
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+
+Figure 1. An example illustrating the challenge of the training-inference gap faced by MLM-based Non-autoregressive Transformer (NAT) models. In this context, the x-axis represents the sampling rate of unmasked target tokens during the inference phase of the NAT model, which is consistent with the training phase. The y-axis denotes the translation quality. The dashed line indicates the baseline performance of the NAT model under its original inference conditions.
+
+interdependencies among target tokens (Guo et al., 2019; Xiao et al., 2023). Current research in NAT primarily explores two paradigms: iterative decoding, which refines translations through multiple iterations to balance quality and efficiency (Ghazvininejad et al., 2020b; Sahara et al., 2020), and non-iterative decoding, which generates translations in a single forward pass to maximize speed advantages (Gu & Kong, 2021; An et al., 2023).
+
+Conditional masked language modeling (CMLM) stands as one of the most effective approaches for enhancing target token dependencies in NAT (Ghazvininejad et al., 2019; Shao & Feng, 2022; Huang et al., 2022d). This method explicitly guides the model to learn the mapping from the source sentence $X$ and observable target tokens $Y_{obs}$ to the masked target tokens $Y_{mask}$ by incorporating additional target words as part of the input. Qian et al. (2021) further advanced this explicit dependency modeling capability by introducing the Glancing Transformer (GLAT), which adaptively adjusts the number of sampled target tokens, a technique that has been widely adopted (Bao et al., 2022; Huang et al., 2022c; Guo et al., 2023). However, this MLM-based training paradigm $(X + Y_{obs} \rightarrow Y_{mask})$ creates a training-inference gap, as
+
+NAT inference requires predicting the complete translation $(X\rightarrow Y)$ rather than reconstructing partial tokens. While annealing the target token sampling rate to zero during training might theoretically address this gap, GLAT demonstrates that doing so leads to significant performance degradation.
+
+The state-of-the-art NAT model, Directed Acyclic Transformer (DAT, Huang et al. 2022c), optimizes translation paths by constructing a directed acyclic graph, achieving performance comparable to AT models without knowledge distillation. Nevertheless, DAT is not without its limitations: 1) its performance gains come at the expense of decoding speed, with speedups ranging from $14.0\mathrm{x}$ to $7.0\mathrm{x}$ (Huang et al., 2022c); and 2) despite employing GLAT training methods, DAT fails to completely bridge the gap between training and inference. Furthermore, empirical investigations into the unified training and inference of NAT—where target information is input during inference to prompt translation—reveal significant improvements in translation quality for MLM-based NAT models, as shown in Figure 1. This indicates that eliminating the training-inference gap can significantly enhance model performance.
+
+Our main contributions are as follows:
+
+- Our research has revealed a significant discrepancy between the training and inference of contemporary advanced NAT networks. Specifically, we've identified that NAT fails to reach its full potential during inference.
+- We propose an Adaptive End-to-End Quantization Alignment (AEQA) framework for NAT, which introduces a semantic consistency space to jointly optimize all model components without requiring additional target information. This approach effectively eliminates the training-inference gap inherent in NAT systems.
+- Experimental results demonstrate that our method achieves the fastest decoding speed among all NAT networks while attaining state-of-the-art performance across multiple translation directions. Additionally, our method reduces the performance gap between training on raw data and distilled data to 0.29 BLEU points, demonstrating its superiority in handling multimodal distributions.
+
+# 2. Methodology
+
+In this section, we first systematically review the widely used techniques of existing advanced NAT models and reveal their limitations. We then present our AEQA-NAT framework in detail, including its training and inference process along with several effective schemes.
+
+# 2.1. Preliminary
+
+Non-autoregressive Machine Translation The machine translation task can be formally defined as a sequence-to-sequence generation problem: given a source sentence $X = \{x_{1}, x_{2}, \dots, x_{M}\}$, the network generates a target sentence $Y = \{y_{1}, y_{2}, \dots, y_{N}\}$. The Autoregressive Transformer (AT, Vaswani et al. 2017) factorizes the translation probability left to right and maximizes it with the cross-entropy loss
+
+$$
+\mathcal{L}_{\mathrm{AT}} = -\sum_{i=1}^{N} \log p_{\theta}\left(y_{i} \mid y_{<i}, X\right) \tag{1}
+$$
+
+where $y_{i}$ is predicted based on the prefix $y_{< i}$ .
+
+The Vanilla NAT (Gu et al., 2018) makes a conditional independence assumption: given $X$, the target tokens are mutually independent. Formally, we have
+
+$$
+\mathcal{L}_{\mathrm{NAT}} = -\sum_{i=1}^{N} \log p_{\theta}\left(y_{i} \mid X\right) \tag{2}
+$$
+
+All target tokens are generated in parallel through the conditional probability of $X$ . During decoding, NAT lacks appropriate methods to restore correct inter-word dependencies.
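+The cost of this assumption is the well-known multimodality problem: when a source has several valid translations, per-position marginals blend them. A toy sketch (the vocabulary and probabilities are invented for illustration):
+
+```python
+import numpy as np
+
+# Two valid references ("modes") for the same source:
+#   mode A: ["thank", "you"]    mode B: ["thanks", "!"]
+# A model trained under Eq. (2) fits each position's marginal
+# distribution, blending the two modes position by position.
+vocab = ["thank", "you", "thanks", "!"]
+probs = np.array([
+    [0.4, 0.0, 0.6, 0.0],   # position 1: "thanks" slightly preferred
+    [0.0, 0.6, 0.0, 0.4],   # position 2: "you" slightly preferred
+])
+
+# Parallel argmax decoding picks every position independently,
+# so it can splice the two modes into an inconsistent output.
+decoded = [vocab[i] for i in probs.argmax(axis=1)]
+print(decoded)  # ['thanks', 'you'] -- a mix of mode A and mode B
+```
+
+An AT model largely avoids this failure because conditioning on the prefix $y_{<i}$ commits the decoder to a single mode.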
+
+The Conditional Masked Language Model Dependency modeling is commonly enhanced with the masked language model (MLM) idea, e.g., in CMLM (Ghazvininejad et al., 2019), GLAT (Qian et al., 2021) and DAT (Huang et al., 2022c), which lets the model explicitly learn a mapping from the source sequence $X$ and the observed variables $Y_{\text{obs}}$ (unmasked ground-truth tokens) to the masked target tokens $Y_{\text{mask}}$
+
+$$
+\mathcal{L}_{\text{mask}} = -\sum_{y \in Y_{\text{mask}}} \log p_{\theta}\left(y \mid Y_{\text{obs}}, X\right) \tag{3}
+$$
+
+$\mathcal{L}_{\text{mask}}$ denotes the reconstruction loss. This MLM-based reconstruction loss plays a critical role in enhancing dependency modeling of target tokens, which significantly improves the translation quality of NAT. However, the training paradigm $\langle X + Y_{\text{obs}} \to Y_{\text{mask}} \rangle$ makes NAT learn a revised distribution (Huang et al., 2022b), resulting in a gap between NAT training and inference (i.e., at inference NAT receives only $X$ and must predict the full $Y$).
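+As a minimal numeric sketch of Eq. (3) (toy probabilities, not the authors' implementation), only the masked positions contribute to the loss:
+
+```python
+import math
+
+# Toy per-position probabilities p_theta(y_i | Y_obs, X) for a 4-token
+# target; positions 1 and 3 (0-indexed) belong to Y_mask.
+p = [0.9, 0.7, 0.95, 0.6]
+masked = {1, 3}
+
+# L_mask sums -log p over Y_mask only; observed tokens act as free context.
+l_mask = -sum(math.log(p[i]) for i in masked)  # ~= 0.8675
+```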
+
+
+Figure 2. Overview of the AEQA training framework. In the pre-alignment phase, we utilize exponential moving average updates on the mBART model to refine the Semantic Quantization Space. In the training stage, we optimize the whole model through $\mathcal{L}_{\mathrm{NAT}}$ , $\mathcal{L}_{\mathrm{SQA}}$ , and $\mathcal{L}_{\mathrm{LEN}}$ .
+
+
+
+# 2.2. Adaptive End-to-End Quantization Alignment
+
+In contrast to Autoregressive Transformers, which employ a consistent left-to-right decoding strategy during both training and inference, NATs face incompatibility issues due to the additional information required during training that is unavailable at inference. Inspired by Vector Quantized Variational Autoencoder (VQ-VAE, Van Den Oord et al. 2017), we construct an external semantic quantization space (SQS) to serve as a bridge and constraint for the NAT system during training. This SQS is co-optimized with the model to ensure that the NAT system captures the semantic consistency between the source and target languages. By doing so, we achieve a unified framework for NAT systems across both training and inference phases. Existing approaches such as latent-GLAT (Bao et al., 2022) enhance NAT through latent variable modeling. Our method fundamentally diverges from such VAE-based NAT frameworks, as elaborated in Appendix A.1.
+
+# 2.3. Architecture of AEQA Training Framework
+
+As illustrated in Fig 2, the adaptive end-to-end quantization alignment training framework mainly consists of three modules: a NAT encoder $f_{\mathrm{enc}}$ , a shared semantic quantization space SQS and a NAT decoder $f_{\mathrm{dec}}$ . Their functions can be formalized as
+
+$$
+\left(h_{x,1}, h_{x,2}, \dots, h_{x,n}\right) \leftarrow f_{\mathrm{enc}}\left(x_{1}, x_{2}, \dots, x_{n}\right),
+$$
+
+$$
+\left(h_{1}, h_{2}, \dots, h_{m}\right) \leftarrow \operatorname{softcopy}\left(f_{\mathrm{len}}\left(h_{x,1}, h_{x,2}, \dots, h_{x,n}\right)\right),
+$$
+
+$$
+\left(z_{q}(h_{1}), z_{q}(h_{2}), \dots, z_{q}(h_{m})\right) \leftarrow \operatorname{SQS}\left(h_{1}, h_{2}, \dots, h_{m}\right),
+$$
+
+$$
+p_{\theta}(Y \mid \mathcal{SQ}(X), X) \leftarrow f_{\mathrm{dec}}\left(z_{q}(h_{1:m}), h_{x,1:n}\right).
+$$
+
+We use an extra module $f_{\mathrm{len}}$ to predict the target length $m$ and initialize the decoder inputs $\mathbf{H}_m = \{h_1, h_2, \dots, h_m\}$ with the Softcopy (Li et al., 2018; Wei et al., 2019) mechanism, see Appendix C.2 for more details.
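+Softcopy admits several variants; the sketch below shows one common formulation, in which each decoder position is a distance-weighted mixture of encoder states (the temperature `tau` and the exact weighting are assumptions; see the cited papers for the precise form):
+
+```python
+import numpy as np
+
+def soft_copy(H_x, m, tau=0.3):
+    # Decoder input j is a soft mixture of encoder states, with weights
+    # decaying in the position distance |i - j| (one common formulation).
+    n = H_x.shape[0]
+    logits = -np.abs(np.arange(n)[None, :] - np.arange(m)[:, None]) / tau
+    w = np.exp(logits)
+    w /= w.sum(axis=1, keepdims=True)   # each row is a distribution over i
+    return w @ H_x                      # (m, D) initial decoder inputs H_m
+
+H_x = np.random.randn(6, 4)             # n=6 encoder states, D=4
+H_m = soft_copy(H_x, m=5)               # predicted target length m=5
+# As tau -> 0 the mixture sharpens toward a hard copy of the nearest state.
+```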
+
+Pre-aligned Semantic Quantization Space We leverage the pre-trained multilingual model mBART (Liu et al., 2020) to align the source and target languages within the SQS, as depicted in Fig 2. mBART is a cross-lingual model pre-trained on 25 languages; relying on its shared encoder, it can effectively embed both the source and target sequences into the SQS. Specifically, the SQS can be viewed as a $K \times D$-dimensional vocabulary $S = \{e_1, e_2, \dots, e_K\}$, where $K$ is the number of word embeddings and $D$ is the dimension of each embedding $e_k$. Intuitively, given the hidden states $\mathbf{H}_x = \{h_{x,1}, h_{x,2}, \ldots, h_{x,N_x}\}$, each hidden state is mapped to one of the $K$ embeddings contained in the SQS. Formally, we have
+
+$$
+z_{q}\left(h_{x,i}\right) = \underset{e_{k'}}{\arg\min} \left\| h_{x,i} - e_{k'} \right\|_{2} \tag{4}
+$$
+
+$$
+z_{q}\left(h_{y,j}\right) = \underset{e_{k''}}{\arg\min} \left\| h_{y,j} - e_{k''} \right\|_{2} \tag{5}
+$$
+
+where $k'$ denotes the index of the SQS vector $e_{k'}$ that minimizes the distance $\| h_{x,i} - e_{k'}\|_2$, effectively quantizing $h_{x,i}$ into the category represented by $e_{k'}$.
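+Eqs. (4)-(5) are a nearest-neighbour lookup into the codebook; a small sketch (shapes and names assumed):
+
+```python
+import numpy as np
+
+def quantize(h, codebook):
+    # h: (T, D) hidden states; codebook: (K, D) SQS embeddings.
+    # Pairwise L2 distances, then argmin over the K codes per position.
+    d = np.linalg.norm(h[:, None, :] - codebook[None, :, :], axis=-1)
+    idx = d.argmin(axis=1)            # k' for each h_i
+    return codebook[idx], idx         # z_q(h_i) and its code index
+
+codebook = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
+z_q, idx = quantize(np.array([[0.9, 1.1], [1.8, 2.1]]), codebook)
+# idx -> [1, 2]: each hidden state snaps to its closest codebook entry.
+```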
+
+Following (Van Den Oord et al., 2017), we constrain the outputs of the encoder and decoder to the vectors in SQS with commitment loss (i.e., $\mathcal{L}_{\mathrm{x}}$ ). Concretely, we use exponential moving average (EMA) to update SQS
+
+$$
+w_{k} = \sum_{i}^{N_{x}} \mathbb{I}\left(z_{q}(h_{x,i}) = e_{k}\right) h_{x,i}
+$$
+
+$$
+n_{k} \leftarrow \gamma n_{k} + (1 - \gamma) \sum_{i}^{N_{x}} \mathbb{I}\left(z_{q}(h_{x,i}) = e_{k}\right) \tag{6}
+$$
+
+$$
+e_{k} \leftarrow \frac{1}{n_{k}}\left(\gamma e_{k} + (1 - \gamma) w_{k}\right)
+$$
+
+where $\mathbb{I}(\cdot)$ is the indicator function, $w_{k}$ is the sum of the encoder hidden states $h_{x,i}$ assigned to code $k$, and $\gamma$ is a decay factor. $e_k$ is updated by averaging the previous embedding and the newly computed $w_{k}$, normalized by $n_k$. For more details on constructing the SQS, refer to Appendix A.2.
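+One EMA step of Eq. (6) can be sketched as a direct transcription of the update rules (variable names assumed):
+
+```python
+import numpy as np
+
+def ema_update(e, n, h, assign, gamma=0.9):
+    """One EMA codebook step of Eq. (6).
+    e: (K, D) codebook, n: (K,) EMA counts,
+    h: (T, D) encoder states, assign: (T,) code index per state."""
+    K = e.shape[0]
+    one_hot = np.eye(K)[assign]                     # indicator I(z_q(h_i) = e_k)
+    w = one_hot.T @ h                               # w_k: sum of states on code k
+    n = gamma * n + (1 - gamma) * one_hot.sum(0)    # EMA cluster size n_k
+    e = (gamma * e + (1 - gamma) * w) / np.maximum(n, 1e-8)[:, None]
+    return e, n
+
+e = np.array([[0.0, 0.0], [2.0, 2.0]])
+n = np.array([1.0, 1.0])
+e, n = ema_update(e, n, h=np.array([[1.0, 1.0]]), assign=[0], gamma=0.5)
+# Code 0 moves toward the state assigned to it; code 1 keeps its value.
+```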
+
+# 2.4. Training
+
+AEQA-NAT optimizes two training objectives: semantic quantization alignment loss $\mathcal{L}_{\mathrm{SQA}}$ and translation maximum likelihood loss $\mathcal{L}_{\mathrm{NAT}}$ .
+
+Semantic Quantization Alignment Loss $\mathcal{L}_{\mathrm{SQA}}$ We leverage the pre-aligned SQS to facilitate the model's semantic mapping from the source sequence to the target sequence, thereby ensuring the consistency of the encoder and decoder outputs within the SQS during the training process. Specifically, we have
+
+$$
+\mathcal{L}_{\mathrm{SQA}} = \mathcal{L}_{\mathrm{ax}} + \mathcal{L}_{\mathrm{xy}} + \mathcal{L}_{\mathrm{ay}} \tag{7}
+$$
+
+$$
+\mathcal{L}_{\mathrm{ax}} = \left\| \mathbf{H}_{x} - \operatorname{sg}\left(z_{q}\left(\mathbf{H}_{x}\right)\right) \right\|_{2}^{2} \tag{8}
+$$
+
+$$
+\mathcal{L}_{\mathrm{xy}} = \left\| z_{\bar{q}}\left(\mathbf{H}_{x}\right) - \operatorname{sg}\left(z_{\bar{q}}\left(\mathbf{H}_{y}\right)\right) \right\|_{2}^{2} \tag{9}
+$$
+
+$$
+\mathcal{L}_{\mathrm{ay}} = \left\| \hat{\mathbf{H}}_{y} - \operatorname{sg}\left(z_{q}\left(\mathbf{H}_{y}\right)\right) \right\|_{2}^{2} \tag{10}
+$$
+
+where $\hat{\mathbf{H}}_y$ represents the hidden representation output by the last layer of the decoder, whose dimension matches that of the SQS. Note that $z_{\bar{q}}(\mathbf{H}_x)$ is calculated as $\frac{1}{N_x}\sum_{i = 1}^{N_x}z_q(h_{x,i})$ and $z_{\bar{q}}(\mathbf{H}_y)$ as $\frac{1}{N_y}\sum_{j = 1}^{N_y}z_q(h_{y,j})$.
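+In the forward pass, Eqs. (8)-(10) are plain squared L2 terms; $\operatorname{sg}$ only matters for gradients (it is the identity forward). A numeric sketch with a no-op stop-gradient (numpy carries no autograd, so this shows the loss values only):
+
+```python
+import numpy as np
+
+sg = lambda t: t.copy()   # stop-gradient: identity in the forward pass
+
+def sqa_loss(H_x, H_y_hat, zq_Hx, zq_Hy):
+    l_ax = np.sum((H_x - sg(zq_Hx)) ** 2)                       # Eq. (8)
+    l_xy = np.sum((zq_Hx.mean(0) - sg(zq_Hy.mean(0))) ** 2)     # Eq. (9)
+    l_ay = np.sum((H_y_hat - sg(zq_Hy)) ** 2)                   # Eq. (10)
+    return l_ax + l_xy + l_ay                                   # Eq. (7)
+
+loss = sqa_loss(
+    H_x=np.array([[1.0, 0.0]]), H_y_hat=np.array([[1.0, 1.0]]),
+    zq_Hx=np.array([[0.0, 0.0]]), zq_Hy=np.array([[0.0, 0.0]]),
+)
+# loss = 1 (l_ax) + 0 (l_xy) + 2 (l_ay) = 3
+```
+
+In a real autodiff framework, gradients would flow only through the non-`sg` argument of each term.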
+
+Translation Maximum Likelihood Loss $\mathcal{L}_{\mathrm{NAT}}$ To ensure the model generates complete translations while preserving the benefits of Glancing training, we design a learning strategy compatible with Glancing Targets (Qian et al., 2021). Specifically, $\mathcal{L}_{\mathrm{NAT}}$ maximizes the conditional probability of the target sequence $Y$ given the input sequence $X$ :
+
+$$
+\mathcal{SQ}(X) = \mathbb{GS}\left(\operatorname{SQS}\left(f_{\mathrm{enc}}(X)\right), \mathcal{D}(Y, \hat{Y})\right) \tag{11}
+$$
+
+$$
+\mathcal{L}_{\mathrm{NAT}} = -\sum_{i=1}^{T} \log p_{\theta}\left(y_{i} \mid \mathcal{SQ}(X), X\right) \tag{12}
+$$
+
+where $\mathbb{GS}(\cdot)$ denotes the Glancing Sampling strategy (Qian et al., 2021) and $\mathcal{D}(Y,\hat{Y})$ determines the number of sampled positions. Note that what is replaced here are discrete vectors, rather than target tokens. $\mathcal{SQ}(\cdot)$ is the quantized input after masking. Finally, the overall training loss $\mathcal{L}$ is obtained by
+
+$$
+\mathcal{L} = \mathcal{L}_{\mathrm{NAT}} + \epsilon \mathcal{L}_{\mathrm{SQA}} + \delta \mathcal{L}_{\mathrm{LEN}} \tag{13}
+$$
+
+where $\epsilon$ and $\delta$ are hyperparameters that control the impact of the Semantic Quantization Alignment loss $\mathcal{L}_{\mathrm{SQA}}$ and the length prediction loss $\mathcal{L}_{\mathrm{LEN}}$, respectively. Based on this, gradients are computed to update the various parameters of the model, including the encoder parameters, decoder parameters, and the embedding vectors $e_k$ of the SQS.
+
+Figure 3. Aligned Reordering (AR) process applied to a sequence of semantic representations. The input sequence $z_{q}(\mathbf{H}_{m}) = \{h_{1}, h_{2}, \dots, h_{m}\}$ is aligned with the ground truth sequence $Y = \{y_{1}, y_{2}, \dots, y_{n}\}$. The alignment probability distribution matrix $\mathbf{A}$ is computed to determine the most probable order of the input words. The AR process outputs the reordered sequence $\mathbb{AR}(z_{q}(\mathbf{H}_{m})) = \{h_{1}, h_{3}, h_{5}, h_{4}, h_{2}\}$, aligning the input sequence with the correct syntactic structure as indicated by the matrix $\mathbf{A}$. The checked positions in the matrix represent the optimal alignment of each word in the sequence.
+
+Aligned Reordering Intuitively, the input to the decoder, $\mathcal{SQ}(X)$, is strongly correlated with the word order of the source sequence rather than that of the target sequence, so these representations may not initially follow the correct syntactic order of the target language. The Aligned Reordering (AR) mechanism addresses this by adjusting the discrete representations $z_{q}(\mathbf{H}_{m})$ drawn from the SQS, which correspond to words in the source sequence, as illustrated in Fig 3. Specifically, given the input $z_{q}(\mathbf{H}_{m}) = \{h_{1}, h_{2}, \dots, h_{m}\}$ from the SQS and the ground truth $Y = \{y_{1}, y_{2}, \dots, y_{n}\}$, the alignment probability distribution matrix $\mathbf{A}$ is computed as follows
+
+$$
+\mathbf{A} = \operatorname{softmax}\left(Y z_{q}\left(\mathbf{H}_{m}\right)^{T}\right) \tag{14}
+$$
+
+where $\mathbf{A} \in \mathbb{R}^{n \times m}$ is the alignment probability distribution matrix normalized by rows, which captures the alignment probabilities between the input representation $z_{q}(\mathbf{H}_{m})$ and the ground truth $Y$ . The AR process uses these probabilities to reorder the input sequence into the correct syntactic order, as indicated by the reordered sequence $\mathbb{AR}(z_{q}(\mathbf{H}_{m})) = \{h_{1}, h_{3}, h_{5}, h_{4}, h_{2}\}$ . The detailed formulation is presented in Appendix C.1.
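+A sketch of Eq. (14) with a hard argmax selection standing in for the full AR procedure (the appendix gives the exact formulation; the embeddings below are contrived so the alignment is unambiguous):
+
+```python
+import numpy as np
+
+def softmax(x, axis=-1):
+    e = np.exp(x - x.max(axis=axis, keepdims=True))
+    return e / e.sum(axis=axis, keepdims=True)
+
+def aligned_reorder(Y, Zq):
+    # A = softmax(Y Zq^T), rows normalized: alignment of each target
+    # position over the m source-side quantized representations.
+    A = softmax(Y @ Zq.T, axis=-1)          # (n, m)
+    order = A.argmax(axis=-1)               # most probable source slot per row
+    return Zq[order], order
+
+# Contrived embeddings: code i is the scaled i-th basis vector, and the
+# "target" wants them in the order 2, 0, 1.
+Zq = 5.0 * np.eye(3)
+Y = Zq[[2, 0, 1]]
+reordered, order = aligned_reorder(Y, Zq)
+# order -> [2, 0, 1]: the input is permuted into the target's word order.
+```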
+
+Table 1. Performance comparison between our models and existing methods. The speedup is measured on the WMT14 EN←DE test set with batch size 1. $I_{dec}$ denotes the number of iterations at inference time, Adv means adaptive, and $m$ is the number of length reranking candidates. Results of prior work are quoted from the respective papers. NPD represents noisy parallel decoding; Reordering denotes the aligned reordering mechanism; DSLP (Huang et al., 2022a) denotes deep supervision with additional layer-wise predictions fed forward. The best performance of non-iterative NATs ($I_{dec} = 1$) is bolded. * indicates results of our re-implementation; † denotes the results of our implementations.
+
+| Model | $I_{dec}$ | WMT14 EN-DE Raw | WMT14 EN-DE KD | WMT14 DE-EN Raw | WMT14 DE-EN KD | WMT16 EN-RO Raw | WMT16 EN-RO KD | WMT16 RO-EN Raw | WMT16 RO-EN KD | Speedup |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Transformer (Vaswani et al., 2017) | M | 27.37 | 27.48 | 31.33 | 31.34 | 33.89 | 33.65 | 34.12 | 34.00 | 1.0x |
+| Transformer (Ours) | M | 28.11 | 28.37 | 31.90 | 31.88 | 33.70 | 33.68 | 34.39 | 34.05 | 1.0x |
+| CMLM (Ghazvininejad et al., 2019) | 10 | 24.61 | 27.03 | 29.40 | 30.53 | 32.86 | 33.08 | - | 33.31 | 1.7x |
+| JM-NAT (Guo et al., 2020) | 10 | - | 27.69 | - | 32.24 | - | 33.52 | - | 33.72 | 5.7x |
+| SMART (Ghazvininejad et al., 2020b) | 10 | 25.10 | 27.65 | 29.58 | 31.27 | - | - | - | - | 2.2x |
+| DisCo (Kasai et al., 2020) | Adv | - | 27.34 | - | 31.31 | - | 33.22 | - | 33.25 | 2.6x |
+| Multi-Task NAT (Hao et al., 2021) | 10 | 25.79 | 27.98 | 30.32 | 31.27 | - | 33.80 | - | 33.60 | 2.6x |
+| RewriteNAT (Geng et al., 2021) | Adv | - | 27.83 | - | 31.52 | - | 33.63 | - | 34.09 | - |
+| CMLMC (Huang et al., 2022d) | 10 | 26.40 | 28.37 | 30.92 | 31.41 | 34.14 | 34.57 | 34.13 | 34.14 | - |
+| Con-NAT (Cheng & Zhang, 2022) | 10 | 25.60 | 27.93 | 30.05 | 31.57 | - | 33.88 | - | 34.18 | - |
+| Vanilla NAT (Gu et al., 2018) | 1 | 10.84* | 18.21* | 15.85* | 24.33* | 19.82* | 27.29 | 21.93* | 29.06 | 15.6x |
+| CTC (Libovický & Helcl, 2018) | 1 | 17.73* | 25.52 | 21.44* | 28.73 | 23.12* | 32.60 | 25.05* | 33.46 | 14.6x |
+| AXE (Ghazvininejad et al., 2020a) | 1 | 20.40 | 23.53 | 24.90 | 27.90 | 30.47 | 30.75 | 31.42 | 31.54 | 14.5x |
+| GLAT (Qian et al., 2021) | 1 | 18.94* | 25.21 | 25.71* | 29.84 | 26.38* | 31.19 | 27.99* | 32.04 | 14.2x |
+| OaXE (Du et al., 2021) | 1 | 22.40 | 26.10 | 26.80 | 30.20 | 25.43* | 32.40 | 28.17* | 33.30 | 12.4x |
+| DAT (Huang et al., 2022c) | 1 | 26.57 | 27.49 | 30.68 | 31.37 | 32.71* | 32.79* | 33.25* | 33.85* | 13.9x |
+| MgMO (Li et al., 2022) | 1 | - | 26.40 | - | 30.30 | - | 32.90 | - | 33.60 | - |
+| DePA (Zhan et al., 2023) | 1 | - | 26.43 | - | 30.42 | - | 33.07 | - | 33.82 | 15.1x |
+| RenewNAT (Guo et al., 2023) | 1 | - | 26.65 | - | 30.65 | - | 33.02 | - | 33.74 | 11.2x |
+| FA-DAT (Ma et al., 2023) | 1 | 27.47 | 27.17 | 31.44 | - | - | - | - | - | 13.2x |
+| CMLM-rephraser (Shao et al., 2023) | 1 | 23.12 | 26.65 | 27.44 | 30.70 | 32.30 | 32.72 | 32.07 | 33.03 | 15.0x |
+| DAT* (Li et al., 2024) | 1 | 26.48* | 27.01* | 30.62* | 31.15* | 33.18 | 33.25 | 33.02* | 33.14* | 12.0x |
+| †AEQA-NAT | 1 | 25.96 | 26.24 | 28.78 | 29.04 | 30.83 | 31.11 | 31.29 | 31.67 | 17.0x |
+| †AEQA-NAT w/ Reordering | 1 | 26.82 | 27.04 | 29.61 | 29.85 | 32.75 | 32.80 | 32.07 | 32.11 | 17.0x |
+| †AEQA-NAT w/ NPD (m=5) | 1 | 26.87 | 27.20 | 30.14 | 30.53 | 32.78 | 32.79 | 32.25 | 32.59 | 13.5x |
+| †+DSLP | 1 | 26.22 | 26.29 | 28.95 | 29.14 | 28.97 | 29.16 | 32.01 | 32.34 | 15.3x |
+| †AEQA-NAT w/ Reordering | 1 | 26.95 | 27.21 | 30.25 | 30.34 | 32.80 | 32.83 | 32.96 | 33.27 | 15.3x |
+| †AEQA-NAT w/ NPD (m=5) | 1 | 27.10 | 27.30 | 30.71 | 31.42 | 33.26 | 33.31 | 33.42 | 33.76 | 12.2x |
+| †AEQA-DAT | 1 | 27.03 | 27.62 | 30.94 | 31.70 | 33.09 | 33.37 | 33.56 | 33.89 | 9.1x |
+
+# 2.5. Inference
+
+AEQA-NAT utilizes the source text and the quantization alignment representations during inference, which is consistent with the training process. Specifically, we obtain
+
+$$
+\hat{Y} = \underset{Y}{\arg\max} \log p_{\theta}(Y \mid \mathcal{SQ}(X), X) \tag{15}
+$$
+
+Benefiting from the fact that no additional target information is introduced during training, AEQA-NAT seamlessly generalizes the knowledge from the training data to the inference stage.
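+Putting the modules of Section 2.3 together, inference is a single parallel pass. The sketch below wires up toy stand-ins for $f_{\mathrm{enc}}$, $f_{\mathrm{len}}$, softcopy and $f_{\mathrm{dec}}$ (all shapes and components are illustrative, not the trained system):
+
+```python
+import numpy as np
+
+# Schematic single-pass inference (Eq. 15). All components are stand-in
+# toy functions; a real system would use the trained f_enc, f_len, f_dec.
+rng = np.random.default_rng(0)
+codebook = rng.normal(size=(8, 4))                 # SQS with K=8, D=4
+
+f_enc = lambda X: rng.normal(size=(len(X), 4))     # toy encoder states
+f_len = lambda H: H.shape[0] + 1                   # toy length predictor
+def quantize(H):                                   # SQS lookup, Eq. (4)
+    d = np.linalg.norm(H[:, None] - codebook[None], axis=-1)
+    return codebook[d.argmin(axis=1)]
+f_dec = lambda Zq, Hx: rng.normal(size=(Zq.shape[0], 16))  # toy logits
+
+def translate(X):
+    Hx = f_enc(X)
+    m = f_len(Hx)
+    # hard positional copy as a stand-in for softcopy:
+    Hm = Hx[np.linspace(0, len(X) - 1, m).round().astype(int)]
+    logits = f_dec(quantize(Hm), Hx)
+    return logits.argmax(axis=-1)    # all target tokens in one parallel step
+
+tokens = translate(["ich", "mag", "katzen"])
+```
+
+The point is structural: no target token ever enters the pipeline, so this pass is identical to the training-time forward computation.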
+
+# 3. Experiments
+
+Dataset We validate our proposed models on four widely used translation benchmarks, i.e., WMT14 EN $\leftrightarrow$ DE (4.0M), WMT16 EN $\leftrightarrow$ RO (610K), WMT17 ZH $\leftrightarrow$ EN (20M) and IWSLT16 DE $\rightarrow$ EN (153K), where we follow (Zhou et al. 2020, Lee et al. 2018a, Kasai et al. 2020) for pre-processing. Consistent with previous work (Gu et al. 2018, Qian et al. 2021), we employ sequence-level knowledge distillation for all datasets, refer to Appendix B.
+
+Evaluation Following prior work, we compute tokenized BLEU (Papineni et al., 2002) for WMT14 EN $\leftrightarrow$ DE and WMT16 EN $\leftrightarrow$ RO, while using SacreBLEU (Post, 2018) for WMT17 ZH $\leftrightarrow$ EN. To comprehensively assess translation quality, we utilize multiple additional metrics, including the rule-based metric chrF (Popović, 2015) and two model-based metrics: COMET (Rei et al., 2020) and BLEURT (Sellam et al., 2020). Specifically, for COMET we use the wmt22-comet-da model (Rei et al., 2022), and for BLEURT we adopt the BLEURT-20 model (Pu et al., 2021).
+
+Figure 4. Effect of the size factor $L$ on WMT14 EN-DE. The graph size or the output length is $L$ times the source length.
+
+Figure 5. The BLEU score on WMT14 EN-DE bucketed by the reference length.
+
+Figure 6. N-gram repetition of different models.
+
+Implementations Our models generally use the hyperparameters of transformer-base (Vaswani et al., 2017). We set the size of the SQS to 2048. During pre-aligning, we set $\alpha$ in Eq (16) to 0.5. We set the dropout rate to 0.1 and use the Adam optimizer (Kingma & Ba, 2014) with $\beta = (0.9, 0.999)$. During training, we set $\epsilon$ and $\delta$ in Eq (13) to 0.25 and 0.1, respectively. We apply weight decay 0.01 and label smoothing $l = 0.1$. We train the model with batches of $64\mathrm{K}/8\mathrm{K}$ tokens for the WMT/IWSLT datasets, respectively. The learning rate warms up to $5e-4$ in 4K steps and then decays according to an inverse square root schedule. For the hyperparameter $\lambda$, we adopt linear annealing from 0.6 to 0.4 during training and a fixed value of 0.4 at inference. We apply noisy parallel decoding (NPD, Gu et al., 2018) and set the length beam to 5. We extend our method to DAT (Huang et al., 2022c) by setting the graph size factor $L$ and removing the length prediction module; refer to Appendix D. All models are implemented on fairseq (Ott et al., 2019).
+
+# 4. Main Results
+
+The main results on the benchmarks are presented in Table 1. AEQA-NAT demonstrates significant advantages in translation performance. Our method enhances conditional dependency modeling at the decoder through the Semantic Quantization Space (SQS), effectively eliminating the training-inference gap and enabling the model to achieve its full potential. The SQS ensures that the model can directly learn complex raw data distributions, thereby providing a promising solution to eliminate the reliance on knowledge distillation.
+
+1) In terms of translation quality, AEQA-NAT achieves state-of-the-art results across multiple translation directions, demonstrating significant performance advantages compared to fully NAT models. Compared to AT, AEQA-NAT exhibits a performance gap of less than 1 BLEU across all benchmarks, surpassing other NAT models and indicating its ability to generate translations that are semantically and syntactically more aligned with the target language.
+
+Table 2. Results of AT and NAT models trained with (or without) knowledge distillation on WMT14 EN↔DE and IWSLT16 DE→EN.
+
+| Methods | WMT14 EN→DE | WMT14 DE→EN | IWSLT16 DE→EN | Avg Gap Δ↓ |
+| --- | --- | --- | --- | --- |
+| Vanilla NAT | 10.84 | 15.85 | 17.93 | +7.24 |
+| w/ KD | 18.21 | 24.33 | 23.81 | |
+| GLAT | 18.94 | 25.71 | 29.20 | +4.21 |
+| w/ KD | 25.21 | 29.84 | 31.43 | |
+| DAT | 26.57 | 30.68 | 31.57 | +0.81 |
+| w/ KD | 27.49 | 31.37 | 32.40 | |
+| AEQA-NAT | 26.87 | 30.14 | 32.34 | +0.29 |
+| w/ KD | 27.20 | 30.53 | 32.50 | |
+| Transformer | 28.11 | 31.90 | 32.92 | +0.03 |
+| w/ KD | 28.37 | 31.88 | 32.78 | |
+
+2) In terms of decoding speed, AEQA-NAT shows a clear advantage over AT and iterative NAT models, achieving a maximum speedup of $17.0\mathrm{x}$ , which is also superior to other fully NAT models. Even when integrated with NPD techniques, AEQA-NAT maintains a high decoding speed with a speedup ratio of $12.2\mathrm{x}$ , retaining its edge over other NAT models. This demonstrates that AEQA-NAT achieves an effective balance between translation quality and decoding efficiency.
+3) In terms of portability, AEQA-NAT exhibits exceptional performance. It can seamlessly integrate advanced NAT techniques, such as NPD and DSLP, without significantly increasing computational overhead. This highlights the efficiency and flexibility of the AEQA-NAT framework.
+
+# 4.1. Analysis
+
+AEQA Enhances Dependency Modeling on Raw Data Knowledge distillation (KD) is widely employed to enhance NAT learning. Distilled data constitutes a mixed marginal distribution derived from the raw data and the teacher model's distribution. As illustrated in Table 2, the Transformer's translation quality improves by merely 0.03 BLEU after applying KD, underscoring its robust ability to model raw data and minimal dependence on distilled data. Intuitively, a model's capacity to capture data distribution characteristics inversely correlates with its reliance on distilled data. Vanilla NAT shows significant dependence on distilled data, achieving a BLEU improvement of 7.24 post-KD. DAT markedly reduces this gap to 0.81, while AEQA-NAT achieves a further reduction to 0.29 for the first time. These results demonstrate that AEQA effectively captures raw data distribution features and exhibits advanced dependency modeling capabilities.
+
+Table 3. Performances on WMT14 EN→DE and WMT17 ZH→EN with the fixed sampling ratio in inference.
+
+| Sampling ratio λ | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 |
+| --- | --- | --- | --- | --- | --- | --- |
+| WMT14 EN-DE | 18.34 | 20.71 | 24.53 | 26.82 | 25.64 | 23.87 |
+| WMT17 ZH-EN | 12.96 | 15.37 | 19.11 | 23.18 | 23.26 | 20.95 |
+
+Table 4. Results on WMT14 EN→DE test sets with different numbers of references (e.g., "Single" and "Multiple"). "Δ" indicates the performance gap over the Vanilla NAT.
+
+| Methods | Single BLEU | Single Δ | Multiple BLEU | Multiple Δ |
+| --- | --- | --- | --- | --- |
+| **Raw Data** | | | | |
+| Vanilla NAT | 10.8 | - | 25.0 | - |
+| CMLM | 11.0 | +0.2 | 28.1 | +3.1 |
+| GLAT | 18.9 | +8.1 | 51.5 | +26.5 |
+| AEQA-NAT | 26.8 | +16.0 | 71.3 | +46.3 |
+| **Distillation** | | | | |
+| Vanilla NAT | 15.9 | - | 41.9 | - |
+| CMLM | 18.6 | +2.7 | 50.7 | +8.8 |
+| GLAT | 25.2 | +9.3 | 65.3 | +23.4 |
+| AEQA-NAT | 27.0 | +11.1 | 73.5 | +31.6 |
+
+Graph Size The original DAT requires a pre-defined graph size significantly larger than the source sequence length to model translation references, typically set to $L = 8$. As illustrated in Fig 4, AEQA-DAT achieves peak performance with only $L = 3$, whereas DAT and DAT* exhibit gradual performance improvements as $L$ increases, yet their final performance remains inferior to that of AEQA-DAT at $L = 3$. Overall, AEQA enables DAT to use a smaller graph size, thereby enhancing training efficiency.
+
+Different Lengths To analyze the impact of varying lengths on model performance, we categorized references into different length intervals and evaluated translation quality within each interval. As shown in Fig 5, AEQA-NAT demonstrates strong performance across all reference length intervals, achieving results comparable to state-of-the-art NAT models. For AEQA-DAT, its translation quality surpasses that of DAT in every interval, further validating the effectiveness and stability of AEQA.
+
+Table 5. Results of different translation models on WMT16 EN $\rightarrow$ RO. We encompass a wide range of metrics, including rule-based metrics (BLEU and chrF) and model-based metrics (COMET and BLEURT).
+
+| Methods | BLEU↑ | chrF↑ | COMET↑ | BLEURT↑ | Speedup↑ |
+| --- | --- | --- | --- | --- | --- |
+| **Raw Data** | | | | | |
+| Vanilla NAT | 19.82 | 50.65 | 65.22 | 53.17 | 15.6x |
+| GLAT | 26.38 | 56.34 | 73.53 | 62.89 | 14.2x |
+| DAT | 32.71 | 57.28 | 76.08 | 66.45 | 13.9x |
+| AEQA-NAT | 32.75 | 57.40 | 77.12 | 67.64 | 17.0x |
+| **Distillation** | | | | | |
+| Vanilla NAT | 27.29 | 56.38 | 72.16 | 62.01 | 15.6x |
+| GLAT | 31.19 | 57.07 | 75.23 | 65.17 | 14.2x |
+| DAT | 32.79 | 57.81 | 76.52 | 66.84 | 13.9x |
+| AEQA-NAT | 32.80 | 57.80 | 77.01 | 66.92 | 17.0x |
+
+N-gram Repetition We evaluated the ability of AEQA to handle n-gram repetition. As illustrated in Fig 6, AEQA-DAT demonstrates a significant advantage over the DAT models in reducing n-gram repetition. This improvement can be attributed to AEQA's reduction of the input length, where the graph size decreases from 8 to 3, thereby streamlining the number of references on the directed acyclic graph. The effect is particularly pronounced for n-grams smaller than 4. Compared to all NATs, AEQA-NAT consistently maintains a lower n-gram repetition count, demonstrating its robust capability to handle multimodal challenges effectively.
+
+Sampling Ratio in Inference The results across varying sampling rates demonstrate that AEQA-NAT maintains robust performance under diverse data sampling conditions during inference, as illustrated in Table 3. This underscores that the SQS effectively addresses the limitations of traditional methods reliant on explicit target word dependencies for token-level relationship modeling, thereby affirming the efficacy of NATs in leveraging semantic consistency spaces. For WMT14 En-De, model performance improves progressively as the sampling rate $\lambda$ increases from 0.1 to 0.4, reaching its peak at $\lambda = 0.4$ . Beyond this point, performance declines, indicating that excessively high sampling rates may introduce redundancy or noise. A comparable pattern is observed for WMT17 Zh-En, with optimal performance achieved at $\lambda = 0.5$ . These findings suggest that optimal sampling rates are dataset-dependent, emphasizing the necessity of tailoring $\lambda$ to specific data distributions.
+
+Multiple References To assess the translation quality of AEQA-NAT from a multimodal perspective, we evaluated
+
+its performance on the dataset released by Ott et al. (2018), which includes ten reference translations for each of the 500 sentences from the WMT14 EN-DE test set. As demonstrated in Table 4, AEQA-NAT significantly surpasses other models in multi-reference translation tasks on raw data. This superiority arises from its ability to effectively capture multimodal references, allowing it to fully exploit its robust diversity generation capability in handling "one-to-many" mapping relationships. Notably, while all models show performance gains after distillation, AEQA-NAT's advantage becomes less pronounced compared to its performance on raw data. This further underscores that AEQA-NAT substantially reduces its dependence on knowledge distillation.
+
+Comprehensive Performance Evaluation To comprehensively evaluate the performance of AEQA-NAT, we conducted assessments across multiple key benchmarks. As shown in Table 5, on raw data, AEQA-NAT outperforms other NAT models on both rule-based metrics (BLEU and chrF) and model-based metrics (COMET and BLEURT), indicating its ability to generate more coherent and higher-quality translations. On distilled data, AEQA-NAT achieves superior performance on model-based metrics, suggesting that the introduction of the Semantic Quantization Space (SQS) better captures the semantic relationships between the source and target languages. This is because model-based metrics evaluate translation quality by measuring semantic relevance between sentences using parametric knowledge.
+
+# 4.2. Ablation Study
+
+AEQA significantly enhances model performance. As presented in Table 6, the incorporation of the Semantic Quantization Space (SQS) results in a performance gain exceeding 9 BLEU points (Line 1 vs. Line 2). This substantial improvement underscores the critical role of AEQA training in enhancing the model's capacity to capture data distribution features. Furthermore, we observe a consistent trend of improved translation quality with an increase in the categorical number $K$ of the SQS, with optimal performance attained at $K = 2048$ . Specifically, as demonstrated in Line 8 and Line 9, the BLEU scores achieve 25.14 and 25.96 under distinct sampling strategies, respectively. These results highlight the importance of selecting an appropriate $K$ value for maximizing model efficacy.
+
+Influence of Sampling Strategies. The experimental results reveal that Adaptive Sampling exhibits a marginally better performance trend than Uniform Sampling under identical $K$ value configurations. At $K = 512$, the performance gap between the two exceeds 1 BLEU point (Line 2 vs. Line 3). As $K$ increases, this gap progressively narrows, with BLEU score differences of 0.84, 0.82, and 0.68 for $K = 1024$, 2048, and 4096, respectively. These findings indicate that the impact of sampling strategies on model performance decreases with larger $K$ values.
+
+Table 6. Ablation on WMT14 EN→DE test set with different combinations of techniques. "AR" denotes Aligned Reordering.
+
+| Line | K | Sampling | AR | DSLP | BLEU |
+| --- | --- | --- | --- | --- | --- |
+| 1 | - | - | | | 10.84 |
+| 2 | 512 | Uniform | | | 19.93 |
+| 3 | 512 | Adaptive | | | 20.62 |
+| 4 | 1024 | Uniform | | | 21.85 |
+| 5 | 1024 | Adaptive | | | 22.69 |
+| 6 | 4096 | Uniform | | | 22.83 |
+| 7 | 4096 | Adaptive | | | 23.51 |
+| 8 | 2048 | Uniform | | | 25.14 |
+| 9 | 2048 | Adaptive | | | 25.96 |
+| 10 | 2048 | Adaptive | ✓ | | 26.82 |
+| 11 | 2048 | Adaptive | ✓ | ✓ | 26.95 |
+
+Aligned Reordering When evaluating only the predefined $K$ values and sampling methods without integrating AR and DSLP, the BLEU score is 25.96, as indicated in Line 9. With the inclusion of AR (Line 10), the BLEU score rises to 26.82. This improvement suggests that the AR mechanism further enhances the model's translation performance, potentially by refining syntactic structures or improving semantic alignment, thereby elevating the quality of the generated translations.
+
+# 5. Related Work
+
+The optimization strategies for NAT systems can be broadly categorized into two dimensions: 1) input-side enhancement, which incorporates explicit target information to improve the model's ability to capture data distributions, as exemplified by techniques such as the Conditional Masked Language Model (CMLM, Ghazvininejad et al. 2019; Du et al. 2021; Cheng & Zhang 2022; Shao et al. 2023) and the Glancing Transformer (GLAT, Qian et al. 2021; Bao et al. 2022; Schmidt et al. 2022; An et al. 2023); and 2) target-side optimization, which modifies learning objectives to alleviate training difficulties, including methods such as Aligned Cross Entropy (AXE, Ghazvininejad et al. 2020a), Order-Agnostic Cross Entropy (OAXE, Du et al. 2021), Multi-Granularity Optimization (MgMO, Li et al. 2022) and Non-Monotonic Latent Alignments (NMLA, Shao & Feng 2022). Target-side optimization techniques, such as AXE and OAXE, accommodate NAT training by relaxing strict word-order alignment. Furthermore, several reordering approaches have been applied to NAT. Reorder-NAT (Ran et al., 2021) incorporates a dedicated reordering module to explicitly model word rearrangement during decoding. AligNART (Song et al., 2021) achieves a one-to-one mapping between encoder hidden representations and target information through alignment estimation, thereby reducing the modality of the target distributions. AEQA-NAT leverages the synergistic effect of multiple loss components to jointly consider both the input and decoding phases, guiding the model to adaptively align the source and target texts.
+
+# 6. Conclusion
+
+In this work, we propose the AEQA-NAT training framework for non-autoregressive translation. By introducing a semantic quantization space, AEQA-NAT eliminates reliance on target information and effectively bridges the training-inference gap in NAT. Experimental results demonstrate that AEQA-NAT achieves state-of-the-art performance among fully non-autoregressive models across multiple translation benchmarks, while maintaining decoding speeds comparable to Vanilla NAT. Furthermore, our approach enhances the ability of NAT models to learn from raw data distributions, reducing the performance gap between raw data and knowledge distillation to 0.29 BLEU.
+
+# Impact Statement
+
+This work presents a novel approach to bridge the training-inference gap in NAT models. By reducing the dependency on knowledge distillation, our method enhances the efficiency and scalability of NAT training. The proposed framework has the potential to enable more effective and efficient NAT deployment in real-world applications, contributing to the advancement of machine translation and other sequence generation tasks.
+
+# References
+
+An, C., Feng, J., Huang, F., Qiu, X., and Kong, L. Optimizing non-autoregressive transformers with contrastive learning, 2023. URL https://arxiv.org/abs/2305.13667.
+Bao, Y., Huang, S., Xiao, T., Wang, D., Dai, X., and Chen, J. Non-autoregressive translation by learning target categorical codes. In Toutanova, K., Rumshisky, A., Zettlemoyer, L., Hakkani-Tur, D., Beltagy, I., Bethard, S., Cotterell, R., Chakraborty, T., and Zhou, Y. (eds.), Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 5749-5759, Online, June 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.458. URL https://aclanthology.org/2021.naacl-main.458/.
+Bao, Y., Zhou, H., Huang, S., Wang, D., Qian, L., Dai, X., Chen, J., and Li, L. latent-GLAT: Glancing at latent variables for parallel text generation. In Muresan, S., Nakov, P., and Villavicencio, A. (eds.), Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 8398-8409, Dublin, Ireland, May 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.acl-long.575. URL https://aclanthology.org/2022.acl-long.575/.
+Chen, B. and Cherry, C. A systematic comparison of smoothing techniques for sentence-level BLEU. In Proceedings of the ninth workshop on statistical machine translation, pp. 362-367, 2014.
+Cheng, H. and Zhang, Z. Con-NAT: Contrastive nonautoregressive neural machine translation. In Goldberg, Y., Kozareva, Z., and Zhang, Y. (eds.), Findings of the Association for Computational Linguistics: EMNLP 2022, pp. 6219-6231, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.findings-emnlp.463. URL https://aclanthology.org/2022.findings-emnlp.463/.
+Du, C., Tu, Z., and Jiang, J. Order-agnostic cross entropy for non-autoregressive machine translation. In International conference on machine learning, pp. 2849–2859. PMLR, 2021.
+Geng, X., Feng, X., and Qin, B. Learning to rewrite for non-autoregressive neural machine translation. In Moens, M.-F., Huang, X., Specia, L., and Yih, S. W.-t. (eds.), Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3297-3308, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.emnlp-main.265. URL https://aclanthology.org/2021.emnlp-main.265/.
+Ghazvininejad, M., Levy, O., Liu, Y., and Zettlemoyer, L. Mask-predict: Parallel decoding of conditional masked language models. In Inui, K., Jiang, J., Ng, V., and Wan, X. (eds.), Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 6112-6121, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1633. URL https://aclanthology.org/D19-1633.
+Ghazvininejad, M., Karpukhin, V., Zettlemoyer, L., and Levy, O. Aligned cross entropy for non-autoregressive machine translation. In International Conference on Machine Learning, pp. 3515-3523. PMLR, 2020a.
+
+Ghazvininejad, M., Levy, O., and Zettlemoyer, L. Semi-autoregressive training improves mask-predict decoding. arXiv preprint arXiv:2001.08785, 2020b.
+Gu, J. and Kong, X. Fully non-autoregressive neural machine translation: Tricks of the trade. In Zong, C., Xia, F., Li, W., and Navigli, R. (eds.), Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pp. 120-133, Online, August 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.findings-acl.11. URL https://aclanthology.org/2021.findings-acl.11.
+Gu, J., Bradbury, J., Xiong, C., Li, V. O., and Socher, R. Non-autoregressive neural machine translation. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=B1l8BtlCb.
+Guo, J., Tan, X., He, D., Qin, T., Xu, L., and Liu, T.-Y. Non-autoregressive neural machine translation with enhanced decoder input. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence and Thirty-First Innovative Applications of Artificial Intelligence Conference and Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, AAAI'19/IAAI'19/EAAI'19. AAAI Press, 2019. ISBN 978-1-57735-809-1. doi: 10.1609/aaai.v33i01.33013723. URL https://doi.org/10.1609/aaai.v33i01.33013723.
+Guo, J., Xu, L., and Chen, E. Jointly masked sequence-to-sequence model for non-autoregressive neural machine translation. In Jurafsky, D., Chai, J., Schluter, N., and Tetreault, J. (eds.), Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 376-385, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.36. URL https://aclanthology.org/2020.acl-main.36/.
+Guo, P., Xiao, Y., Li, J., and Zhang, M. RenewNAT: Renewing potential translation for non-autoregressive transformer. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pp. 12854-12862, 2023.
+Hao, Y., He, S., Jiao, W., Tu, Z., Lyu, M., and Wang, X. Multi-task learning with shared encoder for non-autoregressive machine translation. In Toutanova, K., Rumshisky, A., Zettlemoyer, L., Hakkani-Tur, D., Beltagy, I., Bethard, S., Cotterell, R., Chakraborty, T., and Zhou, Y. (eds.), Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 3989-3996, Online, June 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.313. URL https://aclanthology.org/2021.naacl-main.313/.
+Huang, C., Zhou, H., Zaiane, O. R., Mou, L., and Li, L. Non-autoregressive translation with layer-wise prediction and deep supervision. In Proceedings of the AAAI conference on artificial intelligence, volume 36, pp. 10776-10784, 2022a.
+Huang, F., Tao, T., Zhou, H., Li, L., and Huang, M. On the learning of non-autoregressive transformers. In International Conference on Machine Learning, pp. 9356-9376. PMLR, 2022b.
+Huang, F., Zhou, H., Liu, Y., Li, H., and Huang, M. Directed acyclic transformer for non-autoregressive machine translation. In Chaudhuri, K., Jegelka, S., Song, L., Szepesvari, C., Niu, G., and Sabato, S. (eds.), Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pp. 9410-9428. PMLR, 17-23 Jul 2022c. URL https://proceedings.mlr.press/v162/huang22m.html.
+Huang, X. S., Perez, F., and Volkovs, M. Improving non-autoregressive translation models without distillation. In International Conference on Learning Representations, 2022d.
+Kaiser, L., Bengio, S., Roy, A., Vaswani, A., Parmar, N., Uszkoreit, J., and Shazeer, N. Fast decoding in sequence models using discrete latent variables. In Dy, J. and Krause, A. (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 2390-2399. PMLR, 10-15 Jul 2018. URL https://proceedings.mlr.press/v80/kaiser18a.html.
+Kasai, J., Cross, J., Ghazvininejad, M., and Gu, J. Nonautoregressive machine translation with disentangled context transformer. In International conference on machine learning, pp. 5144-5155. PMLR, 2020.
+Kim, Y. and Rush, A. M. Sequence-level knowledge distillation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 1317-1327, 2016.
+Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014. URL https://api.semanticscholar.org/CorpusID:6628106.
+Lee, J., Mansimov, E., and Cho, K. Deterministic non-autoregressive neural sequence modeling by iterative refinement. In Riloff, E., Chiang, D., Hockenmaier, J., and Tsujii, J. (eds.), Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 1173-1182, Brussels, Belgium, October-November 2018a. Association for Computational Linguistics. doi: 10.18653/v1/D18-1149. URL https://aclanthology.org/D18-1149/.
+Lee, J., Mansimov, E., and Cho, K. Deterministic non-autoregressive neural sequence modeling by iterative refinement, 2018b. URL https://arxiv.org/abs/1802.06901.
+Li, Y., Cui, L., Yin, Y., and Zhang, Y. Multi-granularity optimization for non-autoregressive translation. In Goldberg, Y., Kozareva, Z., and Zhang, Y. (eds.), Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pp. 5073-5084, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.emnlp-main.339. URL https://aclanthology.org/2022.emnlp-main.339.
+Li, Y., Zhang, H., Yan, J., Yin, Y., and Zhang, Y. What have we achieved on non-autoregressive translation? In Ku, L.-W., Martins, A., and Srikumar, V. (eds.), Findings of the Association for Computational Linguistics: ACL 2024, pp. 7585-7606, Bangkok, Thailand, August 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.findings-acl.452. URL https://aclanthology.org/2024.findings-acl.452/.
+Li, Z., He, D., Tian, F., Qin, T., Wang, L., and Liu, T.-Y. Hint-based training for non-autoregressive translation. 2018.
+Libovicky, J. and Helcl, J. End-to-end non-autoregressive neural machine translation with connectionist temporal classification. In Riloff, E., Chiang, D., Hockenmaier, J., and Tsujii, J. (eds.), Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 3016-3021, Brussels, Belgium, October-November 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-1336. URL https://aclanthology.org/D18-1336/.
+Liu, Y., Gu, J., Goyal, N., Li, X., Edunov, S., Ghazvininejad, M., Lewis, M., and Zettlemoyer, L. Multilingual denoising pre-training for neural machine translation. Transactions of the Association for Computational Linguistics, 8:726-742, 2020.
+Ma, Z., Shao, C., Gui, S., Zhang, M., and Feng, Y. Fuzzy alignments in directed acyclic graph for nonautoregressive machine translation. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=LSz-gQyd0zE.
+
+Ott, M., Auli, M., Grangier, D., and Ranzato, M. Analyzing uncertainty in neural machine translation. In International Conference on Machine Learning, 2018.
+Ott, M., Edunov, S., Baevski, A., Fan, A., Gross, S., Ng, N., Grangier, D., and Auli, M. fairseq: A fast, extensible toolkit for sequence modeling. In Ammar, W., Louis, A., and Mostafazadeh, N. (eds.), Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pp. 48-53, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-4009. URL https://aclanthology.org/N19-4009/.
+Papineni, K., Roukos, S., Ward, T., and Zhu, W.-J. Bleu: a method for automatic evaluation of machine translation. In Isabelle, P., Charniak, E., and Lin, D. (eds.), Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pp. 311-318, Philadelphia, Pennsylvania, USA, July 2002. Association for Computational Linguistics. doi: 10.3115/1073083.1073135. URL https://aclanthology.org/P02-1040/.
+Popović, M. chrF: character n-gram F-score for automatic MT evaluation. In Bojar, O., Chatterjee, R., Federmann, C., Haddow, B., Hokamp, C., Huck, M., Logacheva, V., and Pecina, P. (eds.), Proceedings of the Tenth Workshop on Statistical Machine Translation, pp. 392–395, Lisbon, Portugal, September 2015. Association for Computational Linguistics. doi: 10.18653/v1/W15-3049. URL https://aclanthology.org/W15-3049/.
+Post, M. A call for clarity in reporting BLEU scores. In Bojar, O., Chatterjee, R., Federmann, C., Fishel, M., Graham, Y., Haddow, B., Huck, M., Yepes, A. J., Koehn, P., Monz, C., Negri, M., Néveol, A., Neves, M., Post, M., Specia, L., Turchi, M., and Verspoor, K. (eds.), Proceedings of the Third Conference on Machine Translation: Research Papers, pp. 186-191, Brussels, Belgium, October 2018. Association for Computational Linguistics. doi: 10.18653/v1/W18-6319. URL https://aclanthology.org/W18-6319/.
+Pu, A., Chung, H. W., Parikh, A., Gehrmann, S., and Sellam, T. Learning compact metrics for MT. In Moens, M.-F., Huang, X., Specia, L., and Yih, S. W.-t. (eds.), Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 751-762, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.emnlp-main.58. URL https://aclanthology.org/2021.emnlp-main.58/.
+Qian, L., Zhou, H., Bao, Y., Wang, M., Qiu, L., Zhang, W., Yu, Y., and Li, L. Glancing transformer for non-autoregressive neural machine translation. In Zong, C., Xia, F., Li, W., and Navigli, R. (eds.), Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 1993-2003, Online, August 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.acl-long.155. URL https://aclanthology.org/2021.acl-long.155.
+Ran, Q., Lin, Y., Li, P., and Zhou, J. Guiding non-autoregressive neural machine translation decoding with reordering information. Proceedings of the AAAI Conference on Artificial Intelligence, 35(15):13727-13735, May 2021. doi: 10.1609/aaai.v35i15.17618. URL https://ojs.aaai.org/index.php/AAAI/article/view/17618.
+Rei, R., Stewart, C., Farinha, A. C., and Lavie, A. COMET: A neural framework for MT evaluation. In Webber, B., Cohn, T., He, Y., and Liu, Y. (eds.), Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 2685-2702, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.213. URL https://aclanthology.org/2020.emnlp-main.213/.
+Rei, R., C. de Souza, J. G., Alves, D., Zerva, C., Farinha, A. C., Glushkova, T., Lavie, A., Coheur, L., and Martins, A. F. T. COMET-22: Unbabel-IST 2022 submission for the metrics shared task. In Koehn, P., Barrault, L., Bojar, O., Bougares, F., Chatterjee, R., Costa-jussa, M. R., Federmann, C., Fishel, M., Fraser, A., Freitag, M., Graham, Y., Grundkiewicz, R., Guzman, P., Haddow, B., Huck, M., Jimeno Yepes, A., Kocmi, T., Martins, A., Morishita, M., Monz, C., Nagata, M., Nakazawa, T., Negri, M., Néveol, A., Neves, M., Popel, M., Turchi, M., and Zampieri, M. (eds.), Proceedings of the Seventh Conference on Machine Translation (WMT), pp. 578-585, Abu Dhabi, United Arab Emirates (Hybrid), December 2022. Association for Computational Linguistics. URL https://aclanthology.org/2022.wmt-1.52/.
+Saharia, C., Chan, W., Saxena, S., and Norouzi, M. Nonautoregressive machine translation with latent alignments. In Webber, B., Cohn, T., He, Y., and Liu, Y. (eds.), Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1098-1108, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.83. URL https://aclanthology.org/2020.emnlp-main.83.
+Schmidt, R., Pires, T., Peitz, S., and Lööf, J. Non-autoregressive neural machine translation: A call for clarity. In Goldberg, Y., Kozareva, Z., and Zhang, Y. (eds.), Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pp. 2785-2799, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.emnlp-main.179. URL https://aclanthology.org/2022.emnlp-main.179.
+Sellam, T., Das, D., and Parikh, A. BLEURT: Learning robust metrics for text generation. In Jurafsky, D., Chai, J., Schluter, N., and Tetreault, J. (eds.), Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 7881-7892, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.704. URL https://aclanthology.org/2020.acl-main.704/.
+Shao, C. and Feng, Y. Non-monotonic latent alignments for ctc-based non-autoregressive machine translation. Advances in Neural Information Processing Systems, 35: 8159-8173, 2022.
+Shao, C., Zhang, J., Zhou, J., and Feng, Y. Rephrasing the reference for non-autoregressive machine translation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pp. 13538-13546, 2023.
+Shu, R., Lee, J., Nakayama, H., and Cho, K. Latent-variable non-autoregressive neural machine translation with deterministic inference using a delta posterior. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):8846-8853, Apr. 2020. doi: 10.1609/aaai.v34i05.6413. URL https://ojs.aaai.org/index.php/AAAI/article/view/6413.
+Song, J., Kim, S., and Yoon, S. AligNART: Non-autoregressive neural machine translation by jointly learning to estimate alignment and translate. In Moens, M.-F., Huang, X., Specia, L., and Yih, S. W.-t. (eds.), Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 1-14, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.emnlp-main.1. URL https://aclanthology.org/2021.emnlp-main.1/.
+Van Den Oord, A., Vinyals, O., et al. Neural discrete representation learning. Advances in neural information processing systems, 30, 2017.
+Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. Attention is all you need. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf.
+Wei, B., Wang, M., Zhou, H., Lin, J., and Sun, X. Imitation learning for non-autoregressive neural machine translation. In Korhonen, A., Traum, D., and Marquez, L. (eds.), Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 1304-1312, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1125. URL https://aclanthology.org/P19-1125/.
+Xiao, Y., Wu, L., Guo, J., Li, J., Zhang, M., Qin, T., and Liu, T.-Y. A survey on non-autoregressive generation for neural machine translation and beyond. IEEE Trans. Pattern Anal. Mach. Intell., 45(10):11407-11427, October 2023. ISSN 0162-8828. doi: 10.1109/TPAMI.2023.3277122. URL https://doi.org/10.1109/TPAMI.2023.3277122.
+Zhan, J., Chen, Q., Chen, B., Wang, W., Bai, Y., and Gao, Y. DePA: Improving non-autoregressive translation with dependency-aware decoder. In Salesky, E., Federico, M., and Carpuat, M. (eds.), Proceedings of the 20th International Conference on Spoken Language Translation (IWSLT 2023), pp. 478-490, Toronto, Canada (in-person and online), July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.iwslt-1.47. URL https://aclanthology.org/2023.iwslt-1.47/.
+Zhou, C., Gu, J., and Neubig, G. Understanding knowledge distillation in non-autoregressive machine translation. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=BygFVAEKDH.
+
+# A. Pre-aligned SQS
+
+# A.1. VAE-based NAT
+
+Latent Transformer (LT, Kaiser et al. 2018) was the first to use the vector quantization (VQ) technique to improve NAT learning. During inference, VAE-based NATs first predict the latent variable, then non-autoregressively produce the entire target sequence $y$ conditioned on the latent sequence (Kaiser et al., 2018; Shu et al., 2020; Bao et al., 2021; 2022). In contrast, we do not introduce an additional network to model the distributions of latent variables. Instead, we design a NAT-based Semantic Quantization Space (SQS) for aligning the discrete representations of the source and target languages. Specifically, AEQA-NAT differs from latent-GLAT in several key aspects:
+
+- AEQA-NAT introduces a novel Semantic Quantization Space (SQS) that jointly models discrete latent variables for both source and target texts during pre-alignment. This bilateral alignment mechanism, driven by the collaborative effect of multiple loss terms, establishes a unified semantic quantization space, ensuring cross-lingual consistency between training and inference. In contrast, latent-GLAT adopts a unilateral approach, encoding only target-side information and requiring an auxiliary network for latent variable prediction during inference.
+- While latent-GLAT employs glancing training—optimizing masked token reconstruction by training on both latent variables and explicit tokens—it directly predicts complete translations during inference. Conversely, AEQA-NAT maintains training-inference parity by directly generating full translations in both phases, eliminating architectural discrepancies.
+
+# A.2. Optimizing Semantic Quantization Alignment
+
+To ensure effective alignment, we design a fine-tuning task on the pre-trained multilingual model mBART (Liu et al., 2020) for optimizing the pre-aligned SQS, as shown in Fig 2. For brevity, only the details related to the SQS are presented here. Given a language pair $(X,Y)$, we obtain the intermediate representation $\mathbf{H}_x$ through the mBART encoder. Note that the SQS contains the pre-aligned embeddings $z_{q}(\mathbf{H}_{x})$ and $z_{q}(\mathbf{H}_{y})$. To preserve translation quality, we prefer that $z_{q}(\mathbf{H}_{y})$ keep its original embedding position while $z_{q}(\mathbf{H}_{x})$ clusters toward $z_{q}(\mathbf{H}_{y})$. We define the training objective for the alignment of the semantic quantization, $\mathcal{L}_{\mathrm{AL}}$, as
+
+$$
+\mathcal{L}_{\mathrm{AL}} = \mathcal{L}_{\mathrm{a}} + \alpha \mathcal{L}_{\mathrm{x}} \tag{16}
+$$
+
+$$
+\mathcal{L}_{\mathrm{a}} = \left\| z_{\bar{q}}\left(\mathbf{H}_{x}\right) - \operatorname{sg}\left(z_{\bar{q}}\left(\mathbf{H}_{y}\right)\right) \right\|_{2}^{2} \tag{17}
+$$
+
+$$
+\mathcal{L}_{\mathrm{x}} = \left\| \mathbf{H}_{x} - \operatorname{sg}\left(z_{q}\left(\mathbf{H}_{x}\right)\right) \right\|_{2}^{2} \tag{18}
+$$
+
+where $\alpha$ is a hyperparameter controlling the effect of $\mathcal{L}_{\mathrm{x}}$, $\operatorname{sg}(\cdot)$ is the stop-gradient operation, and $\mathbf{H}_y$ is the intermediate representation output by the encoder for the target text, having the same dimension as $z_{q}(\mathbf{H}_{y})$. Note that $z_{\bar{q}}(\mathbf{H}_x)$ is calculated as $\frac{1}{N_x}\sum_{i = 1}^{N_x}z_q(h_{x,i})$ and $z_{\bar{q}}(\mathbf{H}_y)$ as $\frac{1}{N_y}\sum_{j = 1}^{N_y}z_q(h_{y,j})$; these mean-pooled vectors represent the closest quantized embeddings in the SQS for $X$ and $Y$, respectively.
+
+The specific meaning of each loss term is as follows:
+
+- $\mathcal{L}_{\mathrm{a}}$ : Ensures that the representation $z_{\bar{q}}(\mathbf{H}_x)$ of the source text is semantically aligned with the representation $z_{\bar{q}}(\mathbf{H}_y)$ of the target text quantization. This helps the model establish accurate semantic associations between different texts, enhancing the accuracy of cross-text semantic transfer and facilitating the subsequent generation of semantically coherent texts.
+- $\mathcal{L}_{\mathrm{x}}$ : Maintains a reasonable relationship between the original intermediate representation $\mathbf{H}_x$ of the source text and its corresponding quantized representation $z_{q}(\mathbf{H}_{x})$ (after stop-gradient processing), preventing the quantization process from excessively distorting the original semantics and ensuring that the quantized representation $z_{q}(\mathbf{H}_{x})$ can still effectively carry the key semantic information of the source text.
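+The interplay of $\mathcal{L}_{\mathrm{a}}$ and $\mathcal{L}_{\mathrm{x}}$ can be sketched numerically. The NumPy toy below is illustrative only: the shapes, the codebook, and $\alpha = 0.25$ are assumptions, and the stop-gradient is a no-op here (in an autograd framework it would be a `detach`):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+
+# Assumed toy shapes: K codebook entries of dimension d, and source/target
+# encoder states H_x of shape (N_x, d) and H_y of shape (N_y, d).
+K, d, N_x, N_y = 8, 4, 5, 6
+codebook = rng.normal(size=(K, d))
+H_x = rng.normal(size=(N_x, d))
+H_y = rng.normal(size=(N_y, d))
+
+def quantize(H, codebook):
+    """z_q: map each hidden state to its nearest codebook embedding."""
+    dists = ((H[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
+    return codebook[dists.argmin(axis=1)]
+
+def sg(x):
+    """Stop-gradient: an identity here; use detach() in a real framework."""
+    return x
+
+z_q_x, z_q_y = quantize(H_x, codebook), quantize(H_y, codebook)
+
+# Mean-pooled quantized representations (the z_q-bar terms of Eq. 17).
+z_bar_x, z_bar_y = z_q_x.mean(axis=0), z_q_y.mean(axis=0)
+
+# Eq. 17: pull the pooled source quantization toward the frozen target one.
+L_a = ((z_bar_x - sg(z_bar_y)) ** 2).sum()
+# Eq. 18: commitment term keeping H_x close to its own quantization.
+L_x = ((H_x - sg(z_q_x)) ** 2).sum()
+
+alpha = 0.25  # hyperparameter of Eq. 16 (value assumed)
+L_AL = L_a + alpha * L_x  # Eq. 16
+```
+
+Because the target side sits behind the stop-gradient, only the source-side representations move during alignment, matching the preference stated above that $z_q(\mathbf{H}_y)$ keep its position.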
+
+# B. Knowledge Distillation
+
+Existing NAT techniques rely on knowledge distillation (Kim & Rush, 2016; Zhou et al., 2020) to mitigate the multimodality problem. Formally, the distillation data is a mixture of a marginal distribution approximating the original data distribution and the teacher model's distribution
+
+$$
+\tilde{y} = \underset{\hat{y} \in \mathcal{T}}{\arg\max}\, \operatorname{sim}(\hat{y}, y)\, q_{\theta}(\hat{y} \mid x) \approx \underset{\hat{y} \in \mathcal{T}_{K}}{\arg\max}\, \operatorname{sim}(\hat{y}, y) \tag{19}
+$$
+
+where $\operatorname{sim}$ is a function measuring closeness by sentence-level BLEU (Chen & Cherry, 2014), $\mathcal{T}_K$ is the $K$-best list from beam search, and $q_{\theta}$ is the teacher model. The distribution learned by the student model should match the mixture distribution
+
+$$
+\mathcal{D}_{\mathrm{KD}}(x, \tilde{y}) \sim (1 - \alpha)\, \mathcal{D}_{\text{data}}(x, y) + \alpha\, q_{\theta}(\hat{y} \mid x) \tag{20}
+$$
+
+Notably, the argmax of this mixture distribution is unlikely to correspond to either $y$ (the ground truth of the original data) or $\hat{y}$ (the beam search output). Thus, distilled data serves as a compromise between real data and data suitable for NAT training, rather than representing real sentences applicable to real-world scenarios. Furthermore, distilled data has inherent limitations, such as its dependence on guidance from a teacher model, compared to the richness and potential of the original corpus.
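+The approximation in Eq. 19 amounts to a simple selection over the teacher's K-best list. In the sketch below, `sim` is a dependency-free unigram-F1 stand-in for the smoothed sentence-level BLEU used in practice, and the sentences are invented:
+
+```python
+def sim(hyp, ref):
+    """Toy closeness measure (unigram F1) standing in for sentence-BLEU."""
+    hyp_set, ref_set = set(hyp.split()), set(ref.split())
+    overlap = len(hyp_set & ref_set)
+    return 2 * overlap / (len(hyp_set) + len(ref_set))
+
+def select_distill_target(kbest, reference):
+    """Eq. 19 (approx.): pick the teacher hypothesis closest to the ground truth."""
+    return max(kbest, key=lambda hyp: sim(hyp, reference))
+
+# Invented K-best list from a hypothetical teacher, plus the reference.
+kbest = ["the cat sat on mat", "a cat sat on the mat", "cats sit on mats"]
+reference = "the cat sat on the mat"
+best = select_distill_target(kbest, reference)
+```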
+
+# C. Details of Model Components
+
+# C.1. Correcting Word Order via Alignment Probability Matrix
+
+The alignment probability distribution matrix $\mathbf{A}$ is used to reorder the words in the input sequence $z_{q}(\mathbf{H}_{m})$ to match the syntactic order of the target sequence. The steps are as follows:
+
+# Matrix Computation
+
+The alignment probability matrix $\mathbf{A}$ is defined as:
+
+$$
+\mathbf{A} = \operatorname{softmax}\left(\mathbf{Y}\, z_{q}(\mathbf{H}_{m})^{T}\right)
+$$
+
+where each element $A_{ij}$ denotes the probability of aligning input word $h_j$ with target position $y_i$ .
+
+# Algorithm for Reordering
+
+# Algorithm 1 Aligned Reordering Process
+
+Require: Alignment probability matrix A, Input sequence $z_{q}(\mathbf{H}_{m}) = \{h_{1}, h_{2}, h_{3}, h_{4}, h_{5}\}$
+Ensure: Reordered sequence $\mathbb{A}\mathbb{R}(z_q(\mathbf{H}_m))$
+
+1: Initialize an empty list reordered_sequence.
+2: for each target position $i$ in the range 1 to $n$ (loop over target sequence positions $y_{i}$ ) do
+3: Extract row $i$ from $\mathbf{A}$ to get alignment probabilities for $h_1, h_2, \ldots, h_m$ .
+4: Identify column index $j$ with the maximum probability in row $i$ , which corresponds to the alignment of $y_{i}$ with $h_{j}$ .
+5: Place $h_j$ in position $i$ of reordered_sequence.
+6: end for
+7: return reordered_sequence
+
+# Outcome
+
+The final reordered_sequence aligns with the syntactic structure of the ground truth sequence and is returned as the corrected order.
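+The procedure of Algorithm 1 can be sketched in a few lines of NumPy; the one-hot states are invented for illustration, while in the model $\mathbf{Y}$ and $z_q(\mathbf{H}_m)$ are learned representations:
+
+```python
+import numpy as np
+
+def softmax(x, axis=-1):
+    e = np.exp(x - x.max(axis=axis, keepdims=True))
+    return e / e.sum(axis=axis, keepdims=True)
+
+def aligned_reorder(Y, Zq):
+    """A = softmax(Y Zq^T); target position i takes the input state h_j
+    with the highest alignment probability in row i (Algorithm 1)."""
+    A = softmax(Y @ Zq.T, axis=-1)  # (n_target, m_input) alignment matrix
+    j = A.argmax(axis=-1)           # best input index per target position
+    return Zq[j]
+
+# Toy case: the input order is exactly reversed relative to the target order.
+Zq = np.eye(3)          # h_1, h_2, h_3 as one-hot states
+Y = np.eye(3)[::-1]     # target wants h_3, h_2, h_1
+out = aligned_reorder(Y, Zq)
+```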
+
+# C.2. Length Predictor
+
+Length prediction can be formulated as a classification problem based on the intermediate representations generated by the encoder. Following Lee et al. (2018b), we predict the target sequence length $m$. Given $m$, the decoder inputs $\mathbf{H}_m = h_{1:m}$ are computed using Softcopy (Li et al., 2018; Wei et al., 2019) as:
+
+$$
+w_{i,j} = \operatorname{softmax}\left(-|j - i| / \tau\right),
+$$
+
+$$
+h_{j} = \sum_{i = 0}^{T} w_{i,j}\, h_{x,i}, \tag{21}
+$$
+
+where the weight $w_{i,j}$ is determined by the positional distance between the source position $i$ and the target position $j$ , and $\tau$ is a hyperparameter controlling the sharpness of the softmax function.
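+Eq. 21 can be sketched directly; the shapes and $\tau = 0.3$ below are illustrative assumptions:
+
+```python
+import numpy as np
+
+def softcopy(H_x, m, tau=0.3):
+    """Decoder input h_j = sum_i w_{i,j} h_{x,i}, where w is a softmax over
+    source positions i of the negative positional distance -|j - i| / tau."""
+    T = H_x.shape[0]
+    i = np.arange(T)[:, None]             # source positions, column vector
+    j = np.arange(m)[None, :]             # target positions, row vector
+    logits = -np.abs(j - i) / tau         # (T, m) distance-based scores
+    w = np.exp(logits - logits.max(axis=0, keepdims=True))
+    w = w / w.sum(axis=0, keepdims=True)  # softmax over source positions
+    return w.T @ H_x                      # (m, d) decoder inputs
+
+rng = np.random.default_rng(0)
+H_x = rng.normal(size=(4, 8))  # 4 source states of dimension 8 (assumed)
+H_m = softcopy(H_x, m=6)
+```
+
+Smaller $\tau$ makes each decoder input concentrate on its nearest source position; larger $\tau$ spreads the copy over more positions.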
+
+# D. DA-Transformer
+
+The strict position alignment between predicted and target tokens in vanilla NAT models struggles to capture multimodal data distributions, often leading to generated tokens with mixed modality and repeated words. To address this, DA-Transformer upsamples the decoder state length to $L$, and $\mathbf{H}_{L} = [h_{1}, h_{2}, \dots, h_{L}]$ denotes the decoder output hidden states, defined as vertex states. The probability of a path $\mathbf{a}$ is redefined as a product of position-transition probabilities:
+
+$$
+p_{\theta}(\mathbf{a} \mid x) = \prod_{i} p_{\theta}\left(a_{i+1} \mid a_{i}, x\right) = \prod_{i} \mathbf{E}_{a_{i}, a_{i+1}}, \tag{22}
+$$
+
+where $\mathbf{E} \in \mathbb{R}^{L \times L}$ is the transition matrix normalized by rows. Here, $\mathbf{a} = \{a_1, a_2, \ldots, a_T\}$ is a set of decoder position indices sorted in ascending order, with size $|\mathbf{a}| = n$, and $\Gamma$ contains all possible paths $\mathbf{a}$, of which there are $\binom{L}{n}$. For example, if the target length is $n = 3$ and $L = 5$, $\Gamma$ contains $\binom{5}{3} = 10$ possible paths, such as $\{0, 1, 2\}$, $\{0, 1, 3\}$, and $\{2, 3, 4\}$. Specifically, the transition matrix is computed as:
+
+$$
+\mathbf{E} = \operatorname{softmax}\left(\frac{\mathbf{Q}\mathbf{K}^{T}}{\sqrt{d}}\right), \tag{23}
+$$
+
+$$
+\mathbf{Q} = \mathbf{H}\mathbf{W}_{\mathrm{Q}}, \quad \mathbf{K} = \mathbf{H}\mathbf{W}_{\mathrm{K}},
+$$
+
+where $d$ is the hidden size, and $\mathbf{W}_{\mathrm{Q}}$ and $\mathbf{W}_{\mathrm{K}}$ are learnable parameters. DAT applies a lower-triangular mask to $\mathbf{E}$, restricting transitions so that they flow only from vertices with smaller indices to vertices with larger indices. Conditioned on the vertex states in $\mathbf{H}$ and the selected path $\mathbf{a}$, the posterior probability of $y$ is calculated as:
+
+$$
+p_{\theta}(y \mid \mathbf{a}, x) = \prod_{i = 1}^{T} p_{\theta}\left(y_{i} \mid a_{i}, x\right) = \prod_{i = 1}^{T} \operatorname{softmax}\left(\mathbf{W}_{\mathrm{p}}\, \mathbf{h}_{a_{i}}\right), \tag{24}
+$$
+
+where $\mathbf{h}_{a_{i}}$ is the representation of the $i$-th vertex on the path $\mathbf{a}$.
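+Eqs. 22-23 can be sketched with random toy tensors (the real $\mathbf{H}$, $\mathbf{W}_{\mathrm{Q}}$, and $\mathbf{W}_{\mathrm{K}}$ are learned): build the masked, row-normalized transition matrix, then score one path:
+
+```python
+import numpy as np
+
+def softmax(x, axis=-1):
+    e = np.exp(x - x.max(axis=axis, keepdims=True))
+    return e / e.sum(axis=axis, keepdims=True)
+
+rng = np.random.default_rng(0)
+L, d = 5, 8                    # vertex count and hidden size (toy values)
+H = rng.normal(size=(L, d))    # vertex states
+W_Q, W_K = rng.normal(size=(d, d)), rng.normal(size=(d, d))
+
+# Eq. 23: scaled dot-product scores; the lower triangle (including the
+# diagonal) is masked out so only transitions to larger indices survive.
+scores = (H @ W_Q) @ (H @ W_K).T / np.sqrt(d)
+allowed = np.triu(np.ones((L, L), dtype=bool), k=1)
+scores = np.where(allowed, scores, -np.inf)
+E = softmax(scores[:-1], axis=-1)   # last vertex has no outgoing edge
+
+# Eq. 22: probability of one path a, e.g. vertices 0 -> 2 -> 4.
+a = [0, 2, 4]
+p_path = np.prod([E[a[i], a[i + 1]] for i in range(len(a) - 1)])
+```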
+
+# E. More Analyses
+
+# E.1. Sampling Rate in Training
+
+Table 7. Results on IWSLT16 with decreasing sampling ratio.
+
+| Sampling Ratio | $\lambda_s$ | $\lambda_e$ | BLEU |
+| :--: | :--: | :--: | :--: |
+| Decreasing | 0.6 | 0 | 26.28 |
+| | 0.6 | 0.1 | 26.75 |
+| | 0.6 | 0.2 | 29.83 |
+| | 0.6 | 0.3 | 31.10 |
+| | 0.6 | 0.4 | 32.34 |
+| | 0.6 | 0.5 | 30.36 |
+
+Drawing inspiration from the GLAT methodology, we reduce the sampling rate during training. In contrast to GLAT, our initial state begins at $\lambda_{s} = 0.6$, as AEQA does not depend on explicit target token inputs for dependency learning and necessitates richer semantic-consistency information. Table 7 demonstrates the influence of different termination values $\lambda_{e}$ on model performance. As $\lambda_{e}$ increases from 0 to 0.4, the BLEU score exhibits a notable upward trend, reaching its peak of 32.34 at $\lambda_{e} = 0.4$. Therefore, we establish the training termination value at $\lambda_{e} = 0.4$ to fully exploit the model's capabilities.
+
+Figure 7. The tradeoff between Speedup and BLEU on WMT14 EN-DE.
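+One plausible realization of such a decreasing sampling schedule is a linear decay between the two endpoints; the endpoints $\lambda_s = 0.6$ and $\lambda_e = 0.4$ come from the experiments above, while the linear shape is an assumption of this sketch:
+
+```python
+def sampling_ratio(step, total_steps, lam_s=0.6, lam_e=0.4):
+    """Hypothetical linear decay of the sampling ratio from lam_s to lam_e."""
+    frac = min(step / total_steps, 1.0)   # training progress in [0, 1]
+    return lam_s + (lam_e - lam_s) * frac
+```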
+
+# E.2. Tradeoff
+
+As illustrated in Fig 7, AEQA-NAT achieves a relative decoding speedup that surpasses all other NATs, reinforcing the decoding-speed advantage of NAT models. Regarding translation quality, AEQA-NAT also excels: its BLEU score surpasses that of the other NAT models, indicating that its translations are closer in quality to the reference translations and further narrowing the performance gap with AT models. Overall, AEQA-NAT achieves an optimal performance-speed tradeoff.
\ No newline at end of file
diff --git a/aeqanatadaptiveendtoendquantizationalignmenttrainingframeworkfornonautoregressivemachinetranslation/images.zip b/aeqanatadaptiveendtoendquantizationalignmenttrainingframeworkfornonautoregressivemachinetranslation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..9be1411b972d4c4a3f583368588d1b8eca80d6f5
--- /dev/null
+++ b/aeqanatadaptiveendtoendquantizationalignmenttrainingframeworkfornonautoregressivemachinetranslation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d9a5acd2d134eb583765e11ffd7fb5250efb2089dec92ed6dc550f37147c10ae
+size 773542
diff --git a/aeqanatadaptiveendtoendquantizationalignmenttrainingframeworkfornonautoregressivemachinetranslation/layout.json b/aeqanatadaptiveendtoendquantizationalignmenttrainingframeworkfornonautoregressivemachinetranslation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..7af3c84454e3c505197236ccec88a09858aa2761
--- /dev/null
+++ b/aeqanatadaptiveendtoendquantizationalignmenttrainingframeworkfornonautoregressivemachinetranslation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:57e19e29079ccf4acd74c7b74524bbc98162f969f5bbda14ce1451d1d99e1bc8
+size 658102
diff --git a/agavrateradaptinglargemultimodalmodelforaigeneratedaudiovisualqualityassessment/7e0ade36-8aff-48d4-b6c5-49925da9acb5_content_list.json b/agavrateradaptinglargemultimodalmodelforaigeneratedaudiovisualqualityassessment/7e0ade36-8aff-48d4-b6c5-49925da9acb5_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..218e348e12918435d31f562dd75f4b4444f85bfd
--- /dev/null
+++ b/agavrateradaptinglargemultimodalmodelforaigeneratedaudiovisualqualityassessment/7e0ade36-8aff-48d4-b6c5-49925da9acb5_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b31cad209b0e7dad747348c92360738e3a366c9ed6c40c072c7bb86912a812f2
+size 141067
diff --git a/agavrateradaptinglargemultimodalmodelforaigeneratedaudiovisualqualityassessment/7e0ade36-8aff-48d4-b6c5-49925da9acb5_model.json b/agavrateradaptinglargemultimodalmodelforaigeneratedaudiovisualqualityassessment/7e0ade36-8aff-48d4-b6c5-49925da9acb5_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..e7e5bcfa27d90216ad6a8687868ce2048a8bea3c
--- /dev/null
+++ b/agavrateradaptinglargemultimodalmodelforaigeneratedaudiovisualqualityassessment/7e0ade36-8aff-48d4-b6c5-49925da9acb5_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fcb85a75ab08ba209d0fc63932d75a1d504b3cb400b01fa46d104fa6dc9d6b79
+size 174278
diff --git a/agavrateradaptinglargemultimodalmodelforaigeneratedaudiovisualqualityassessment/7e0ade36-8aff-48d4-b6c5-49925da9acb5_origin.pdf b/agavrateradaptinglargemultimodalmodelforaigeneratedaudiovisualqualityassessment/7e0ade36-8aff-48d4-b6c5-49925da9acb5_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..70ce7fdee59e302a3c0e54b9e856c68034c87f9a
--- /dev/null
+++ b/agavrateradaptinglargemultimodalmodelforaigeneratedaudiovisualqualityassessment/7e0ade36-8aff-48d4-b6c5-49925da9acb5_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9b428cbec92cafcc689fac6b1379bf2b736c012a2b1c7921bb9238cb90a495c2
+size 2052472
diff --git a/agavrateradaptinglargemultimodalmodelforaigeneratedaudiovisualqualityassessment/full.md b/agavrateradaptinglargemultimodalmodelforaigeneratedaudiovisualqualityassessment/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..5856d1e1454170d7659aefd7eae04f56bbb92a6f
--- /dev/null
+++ b/agavrateradaptinglargemultimodalmodelforaigeneratedaudiovisualqualityassessment/full.md
@@ -0,0 +1,513 @@
+# AGAV-Rater: Adapting Large Multimodal Model for AI-Generated Audio-Visual Quality Assessment
+
+Yuqin Cao $^{1}$ Xiongkuo Min $^{1}$ Yixuan Gao $^{1}$ Wei Sun $^{2}$ Guangtao Zhai $^{1}$
+
+# Abstract
+
+Many video-to-audio (VTA) methods have been proposed for dubbing silent AI-generated videos. An efficient quality assessment method for AI-generated audio-visual content (AGAV) is crucial for ensuring audio-visual quality. Existing audio-visual quality assessment methods struggle with unique distortions in AGAVs, such as unrealistic and inconsistent elements. To address this, we introduce AGAVQA-3k, the first large-scale AGAV quality assessment dataset, comprising 3,382 AGAVs from 16 VTA methods. AGAVQA-3k includes two subsets: AGAVQA-MOS, which provides multi-dimensional scores for audio quality, content consistency, and overall quality, and AGAVQA-Pair, designed for optimal AGAV pair selection. We further propose AGAV-Rater, an LMM-based model that can score AGAVs, as well as audio and music generated from text, across multiple dimensions, and select the best AGAV generated by VTA methods to present to the user. AGAV-Rater achieves state-of-the-art performance on AGAVQA-3k, Text-to-Audio, and Text-to-Music datasets. Subjective tests also confirm that AGAV-Rater enhances VTA performance and user experience. The dataset and code are available at https://github.com/charlotte9524/AGAVRater.
+
+This work was supported in part by the National Natural Science Foundation of China under Grant 62271312, Grant 62132006, Grant 62225112 and Grant 62301316; in part by STCSM under Grant 22DZ2229005; and in part by the Oceanic Interdisciplinary Program of Shanghai Jiao Tong University (project number SL2020ZD102). $^{1}$Institute of Image Communication and Network Engineering, Shanghai Key Laboratory of Digital Media Processing and Transmissions, Shanghai Jiao Tong University, Shanghai; $^{2}$School of Communication & Electronic Engineering, East China Normal University, Shanghai. Correspondence to: Guangtao Zhai, Xiongkuo Min.
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+
+Figure 1. Comparison of answer accuracy between AGAV-Rater and proprietary LMMs on the AGAVQA-Pair subset. We construct question-answer pairs from demonstration AGAVs on VTA GitHub pages, prompting LMMs to identify the optimal AGAV pair. Each dimension in the radar chart represents the answer accuracy with the correct answer corresponding to the current VTA method.
+
+
+
+# 1. Introduction
+
+The quality of AI-generated content needs to be controlled. Many researchers focus on developing video-to-audio (VTA) methods to add sound to silent AI-generated videos. Some researchers (Wang et al., 2024f; Lu et al., 2024) combine large multimodal models (LMMs) with diffusion models, empowering them with the ability to generate audio from videos. On the commercial side, companies like ElevenLabs have launched efficient VTA models to dub videos. Although VTA methods can significantly improve post-production efficiency and enhance the audio-visual (A/V) experience of AI-generated content (AIGC), they occasionally encounter issues such as poor A/V alignment or low perceptual audio quality when generating audio. Therefore, there is a need for an automated model to evaluate AI-generated audio-visual content (AGAV), select the most user-preferred results to present to users, and provide feedback for improving the generated content.
+
+Traditional audio-visual quality assessment (AVQA) methods (Cao et al., 2023a;b; Min et al., 2020) focus on distortions caused during the capture and transmission stages, which makes it difficult to identify unique distortions in AIGC, such as inconsistent A/V content and unnatural audio. With the rise of LMMs, researchers (Wang et al., 2024b;c; Wu et al.) utilize the powerful content and language comprehension capabilities of LMMs to evaluate the quality of AIGC images and videos more accurately. However, most quality assessment research focuses on the visual capabilities of LMMs (Sun et al., 2024b;a; 2023), with little exploration of their A/V capabilities. This raises the question:
+
+Can LMMs be utilized to evaluate the quality of audio-visual content generated by VTA methods?
+
+In this paper, to develop and refine methods for evaluating AGAV quality, we construct the first AI-generated audiovisual quality assessment dataset, AGAVQA-3k, which contains 3,382 AGAVs generated by 16 VTA methods. Fig. 2 illustrates the dataset construction pipeline. We hope that AGAV quality assessment methods can score AGAVs from multiple dimensions, while also assisting VTA models in selecting the optimal result from multiple generated outputs. To meet these two needs, our AGAVQA-3k dataset is divided into two subsets. In the AGAVQA-MOS subset, we utilize 8 VTA methods to generate 3,088 AGAVs from 386 AIGC videos. Then we conduct a subjective experiment to collect 9,264 Mean Opinion Scores (MOSs) across three dimensions, including audio perceptual quality, A/V content consistency, and overall A/V quality. In the AGAVQA-Pair subset, we collect 294 AGAVs from 8 VTA GitHub pages. AGAVs with the same video content form a group, resulting in 75 question-answer pairs. We evaluate 7 closed-source LMMs on the AGAVQA-Pair subset to assess accuracy in selecting the optimal AGAVs, as shown in Fig. 1. The results indicate that LMMs still have significant room for improvement in evaluating AGAV quality. Moreover, existing closed-source LMMs have difficulty providing a numerical score for AGAV quality as humans do. Therefore, this paper primarily focuses on:
+
+How to adapt LMMs to score AGAV like humans?
+
+We propose the first LMM-based quality assessment method for AGAV, AGAV-Rater. AGAV-Rater is trained through two stages to perceive AGAV in a human-like manner and outputs numerical scores across three dimensions. Firstly, we create 50,952 instruction-response pairs related to perceived quality from 3 large-scale real-world audio-caption datasets: the audio-visual dataset VGGSound (Chen et al., 2020), the audio captioning dataset AudioCaps (Kim et al., 2019), and the music captioning dataset MusicCaps (Agostinelli et al., 2023). These instruction-response pairs do not require human annotations. Instead, the responses are automatically labeled using two text-defined rating levels (excellent and bad). These labels are then utilized to pretrain the LMM, enabling it to roughly assess whether the quality is good or bad. This approach significantly reduces the labor and costs associated with dataset construction and allows the model to better predict numerical scores in subsequent stages. Finally, we fine-tune the pre-trained LMM on human-annotated multi-dimensional MOSs.
+
+Our experimental results demonstrate that AGAV-Rater achieves state-of-the-art performance on three quality assessment datasets: AGAVQA-MOS, text-to-audio (TTA), and text-to-music (TTM) (Deshmukh et al., 2024). Since the video content and VTA methods in the AGAVQA-MOS and AGAVQA-Pair subsets do not overlap, we validate the generalization ability and robustness of AGAV-Rater on the unseen dataset AGAVQA-Pair, as shown in Fig. 1. We further conduct a subjective experiment and find that AGAV-Rater helps VTA methods select high-quality generated results to present to users, thereby enhancing user experience. Our core contributions are threefold:
+
+- A large-scale AGAV quality assessment dataset AGAVQA-3k. It labels AGAVs' quality in two ways: multi-dimensional numerical scores and the optimal AGAV pair.
+- A novel LMM-based AGAV quality assessment model, AGAV-Rater. It can predict multi-dimensional quality scores for AGAVs, TTA, and TTM, and assist VTA methods in selecting the optimal AGAV samples.
+- Enhance the user experience of AGAVs generated by VTA methods. According to our experiment, $80\%$ of users recognize that using AGAV-Rater to select higher-quality AGAVs offers a better A/V experience, validating that AGAV-Rater can assist VTA methods in improving quality.
+
+# 2. Related Works
+
+# 2.1. Audio-Visual Quality Assessment Dataset
+
+Early research focused on compression distortions during transmission, leading to the construction of several traditionally distorted AVQA datasets. The largest one is the LIVE-SJTU dataset proposed by Min et al. (Min et al., 2020), which contains 14 original high-quality A/V sequences and 336 degraded ones. With the development of streaming media, researchers (Cao et al., 2023b; Ying et al., 2022) found that user-uploaded videos have more complex and diverse distortions, thus constructing authentically distorted AVQA datasets. With the rise of generative models, AIGC exhibits unique distortions that do not occur in real-world scenarios. To explore the distortions and perceived quality of AGAVs, we establish the first AGAV quality assessment dataset, AGAVQA-3k. Compared to single-modal AIGC image (video) quality assessment datasets, the AGAVQA-3k dataset tackles more complex multimodal challenges, such as A/V content inconsistency and synchronization issues.
+
+# 2.2. Quality Assessment Methods
+
+Audio Quality Assessment. Most audio quality assessment (AQA) methods focus on distortions in speech recordings
+
+
+Figure 2. The construction process of the AGAVQA-3k dataset. The AGAVQA-3k dataset is divided into two subsets: (a) AGAVQA-MOS and (b) AGAVQA-Pair, which involve multi-dimensional score prediction and optimal AGAV pair selection tasks. (c) We present the maximum, minimum, and average subjective scores for the 8 VTA methods across the three dimensions.
+
+from modern communication networks, such as noise and discontinuities. Early speech quality assessments (Rix et al., 2001; Beerends et al., 2013) used handcrafted metrics designed by speech experts to predict speech quality. However, these methods rely on comparing degraded speech with clean references, severely limiting their application in real-world scenarios. As a result, researchers proposed machine learning-based methods to predict speech quality using only degraded speech, eliminating the need for clean speech references during inference. Training a robust speech quality evaluator requires large-scale listening tests to collect speech and MOS for training. For example, NORESQA-MOS (Manocha & Kumar, 2022) was trained on 7,000 audio recordings, and NISQA (Mittag et al., 2021) was trained on 72,903 audio files. Deshmukh et al. (2024) applied the original weights of audio-language models directly to predict TTA and TTM quality without additional training data, which led to limited performance. Despite the high cost of collecting training samples, most AQA methods have difficulty handling the unique distortions in AI-generated audio, which differ from real-world audio distortions. In this paper, we construct 50,952 instruction-response pairs without human annotations for pre-training the AGAV-Rater, thereby alleviating the burden of large-scale subjective experiments while allowing the AGAV-Rater to capture distortions in AIGC.
+
+Audio-Visual Quality Assessment. Compared to AQA, AVQA is a more complex task as it requires handling the interaction between video and audio modalities. Early research (Min et al., 2020) utilized Support Vector Regression (SVR) to predict A/V quality scores by regressing handcrafted features extracted from video and audio. Although this method is effective for evaluating compressed A/V content, it performs poorly in real-world scenarios with mixed distortions. To address this issue, researchers (Cao et al., 2023b) have proposed deep learning-based approaches to predict A/V quality in real-world environments. However, these methods are not suitable for AGAVs, as they do not consider A/V content consistency and fail to address unnatural audio often present in AIGC scenarios. Our proposed AGAV-Rater not only predicts the quality of AGAVs but also evaluates the quality of TTAs and TTMs. We validate the effectiveness of AGAV-Rater on the AGAVQA-3k, TTA, and TTM datasets.
+
+# 3. Dataset Construction
+
+Our proposed AGAVQA-3k dataset consists of two subsets: the AGAVQA-MOS and AGAVQA-Pair, designed for multidimensional score prediction and optimal AGAV selection, respectively. In this section, we introduce the construction process of two subsets and analyze the subjective scores in the AGAVQA-MOS subset, as illustrated in Fig. 2.
+
+
+Figure 3. Training process and architecture of AGAV-Rater. The training process of AGAV-Rater consists of two steps: first, pre-training AGAV-Rater via text-defined levels, and then fine-tuning it via numerical scores.
+
+(Figure 3 panels illustrate example inputs and predicted scores, e.g., "Audio quality: 62, audio-visual consistency: 50, overall audio-visual quality: 64" for an A/V pair with horseshoe sounds, and "Music quality: 78, music-text consistency: 64" for an electric-guitar music-text pair, together with the instruction templates of the two training stages and the model structure of AGAV-Rater.)
+
+Stage 1 instruction (content-related example): "USER: Can you evaluate the audio-visual content consistency of the given content in one word? Assistant: Audio-visual consistency: [Mask]."
+
+Stage 2 instruction (audio-visual pair example): "USER: Can you evaluate the audio quality, audio-visual content consistency and overall audio-visual quality of the given content one by one? Assistant: Audio quality: [Mask], audio-visual consistency: [Mask], overall audio-visual quality: [Mask]."
+
+# 3.1. AGAVQA-MOS Subset Generation
+
+AIGC Video Collection. We first collected high-quality AIGC videos from AIGC video display websites (Sora (sor, 2024), KLing (kli, 2024), and Gen3 (Gen, 2024)) and the video generation benchmark Vbench (Huang et al., 2024). Additionally, we gathered prompts from FETV (Liu et al., 2024b) and generated AIGC videos using closed-source text-to-video platforms (Pika 1.0 (pik, 2023) and Gen3 (Gen, 2024)). From these AIGC videos, we manually selected 386 high-quality videos with clear audio source information, covering 11 types of audio sources. The distribution of main audio sources in AIGC videos is shown in Fig. 2(a).
+
+Audio Generation from Video. We utilized the 8 latest VTA methods, including Diff-Foley (Luo et al., 2024), FoleyCrafter (Zhang et al., 2024a), VTA-LDM (Hu et al., 2024), Im2wav (Sheffer & Adi, 2023), SpecVQGAN (Iashin & Rahtu, 2021), ModaVerse (Wang et al., 2024f), SVA (Chen et al., 2024), and ElevenLabs (ele, 2023), to generate audio from AIGC videos, thus producing AGAVs. For the ElevenLabs closed-source platform, we used its API, while for the other 7 VTA methods, we used their default weights and code to generate audio. As a result, we obtained a total of 3,088 AGAVs (8 VTA methods × 386 AIGC videos).
+
+Human Evaluation. We asked subjects to rate AGAVs across three dimensions: audio quality, A/V content consistency, and overall A/V quality. Audio quality evaluates the perceived quality of the audio, including clarity, naturalness, and pleasantness. A/V content consistency primarily
+
+assesses whether the audio aligns with the corresponding visual elements in the video. Overall A/V quality evaluates the overall quality of the audio and video, including video quality, audio quality, and the compatibility between audio and video. These three quality dimensions are related yet distinct, offering a comprehensive evaluation of AGAVs from multiple perspectives.
+
+We invited 15 subjects to participate in our subjective experiment. We designed a user interface that allows subjects to watch, listen, and rate the AGAVs across three dimensions. The interface displayed 3 continuous quality rating bars, each labeled with a 1-5 Likert scale for rating. We first explained the experiment requirements and the three rating dimensions to each subject. Then, subjects entered a brief training phase to familiarize themselves with the user interface and scoring rules by watching 24 AGAVs. Afterward, they proceeded to the official testing phase. Finally, we converted the raw scores on each dimension to Z-scores, rescaled them to the range 0 to 100, and calculated the mean Z-scores to obtain the mean opinion scores.
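The normalization step can be sketched as follows. This is a minimal illustration; mapping per-subject Z-scores onto [0, 100] via the conventional [-3, 3] range is our assumption, as the paper does not state the exact rescaling.

```python
import numpy as np

def mos_from_raw(ratings: np.ndarray) -> np.ndarray:
    """Compute MOSs from raw Likert ratings.

    ratings: (num_subjects, num_stimuli) array of raw 1-5 scores.
    Each subject's scores are standardized to Z-scores, linearly
    mapped from [-3, 3] to [0, 100] (an assumed convention), and
    averaged across subjects to give one MOS per stimulus.
    """
    z = (ratings - ratings.mean(axis=1, keepdims=True)) / ratings.std(axis=1, keepdims=True)
    z100 = 100.0 * (z + 3.0) / 6.0   # map [-3, 3] -> [0, 100]
    return z100.mean(axis=0)          # MOS per stimulus
```

A stimulus rated at each subject's personal mean lands at exactly 50 after rescaling, regardless of how strict or lenient that subject is.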
+
+# 3.2. MOSs Analysis
+
+In Fig. 2(c), the maximum, minimum, and average subjective scores for the 8 VTA methods are presented across the three dimensions. We can observe that the AGAVs generated by SVA (Chen et al., 2024) exhibit the best audio quality, which can be attributed to the use of proprietary TTA tools, i.e., AudioGen (Kreuk et al., 2022) and MusicGen (Copet et al., 2024), to generate high-quality sound effects and background music. ElevenLabs achieves the best A/V content consistency and overall quality by using ChatGPT-4 to extract key sound information from videos and generate high-quality audio with its proprietary TTA tool, ensuring seamless coherence between audio and video. Krippendorff's $\alpha$ (Hayes & Krippendorff, 2007) can be used to measure the quality of the subjects' ratings. We calculate Krippendorff's $\alpha$ for audio quality, A/V content consistency, and A/V overall quality, which are 0.6814, 0.7343, and 0.7143, respectively, indicating acceptable inter-subject agreement. We also randomly divide subjects into two groups and calculate the SRCC of average scores between the two groups. After ten repetitions, the average SRCC for audio quality, A/V content consistency, and A/V overall quality are 0.8043, 0.8318, and 0.8297, validating rating consistency.
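The split-half consistency check described above can be sketched as follows (a minimal illustration of the protocol; the random seed, the rank-based SRCC helper, and its arbitrary tie-breaking are our assumptions):

```python
import numpy as np

def srcc(x: np.ndarray, y: np.ndarray) -> float:
    """Spearman rank correlation via Pearson correlation of the ranks.

    Ties are broken arbitrarily by argsort (a simplification; a full
    implementation would use average ranks for ties).
    """
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

def split_half_srcc(ratings: np.ndarray, repeats: int = 10, seed: int = 0) -> float:
    """Randomly split subjects into two groups, correlate the groups'
    mean scores, and average over `repeats` random splits, mirroring
    the ten-repetition protocol in Section 3.2."""
    rng = np.random.default_rng(seed)
    n_subjects = ratings.shape[0]
    vals = []
    for _ in range(repeats):
        perm = rng.permutation(n_subjects)
        a, b = perm[: n_subjects // 2], perm[n_subjects // 2:]
        vals.append(srcc(ratings[a].mean(axis=0), ratings[b].mean(axis=0)))
    return float(np.mean(vals))
```

When all subjects rank the stimuli identically, the split-half SRCC is exactly 1; lower values reflect disagreement between subject groups.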
+
+# 3.3. AGAVQA-Pair Subset Collection
+
+Most VTA GitHub pages demonstrate their superior performance by dubbing the same video using both their approach and other VTA methods. As shown in Fig. 2(b), we collected 294 publicly available AGAVs from 8 VTA GitHub pages, including SSV2A (Guo et al., 2024a), ReWaS (Jeong et al., 2025), TIVA (Wang et al., 2024d), V2A-Mapper (Wang et al., 2024a), STAV2A (Ren et al., 2024), V2A-SceneDetector (Yi & Li, 2024), Frieren (Wang et al., 2024g), and SonicVisionLM (Xie et al., 2024). These AGAVs, sourced from third-party platforms, offer a more objective and impartial dataset. These VTA GitHub pages were all released within the past year, representing the latest technology in VTA methods. AGAVs with the same video content are grouped, with the optimal AGAVs already labeled on the GitHub pages. We manually verified the accuracy of the optimal AGAVs, and then formed 75 instruction-response pairs as follows:
+
+Instruction: The video is , Audio 1 is , Audio 2 is , Audio 3 is ... Which audio best matches this video in terms of audio content, quality, and rhythm? Response: Audio 1.
+
+Due to the inherent subjectivity in human evaluations of AGAVs, different subjects may provide varying scores to the same AGAV. However, when selecting the optimal AGAV, most subjects tend to give the same choice, leading to more reliable quality labels. In some application scenarios, businesses only need to select the highest-quality result from multiple generated options to present to users, without requiring detailed quality scores for each AGAV. Therefore, we construct the AGAVQA-Pair subset. This subset evaluates the performance of AGAV quality assessment methods by measuring their accuracy in selecting the optimal AGAV. More details of the AGAVQA-3k dataset are provided in Appendix A.
+
+# 4. The AGAV-Rater
+
+In this section, we provide a detailed description of the training process and architecture of AGAV-Rater, as illustrated in Fig. 3. We first construct 50,952 instruction-response pairs for AGAV-Rater pre-training, where no human annotations are required, and two text-defined levels are utilized as labels. Then, we utilize the three-dimensional numerical scores from the AGAVQA-MOS subset as labels to finetune the AGAV-Rater.
+
+# 4.1. Pre-Training via Text-Defined Levels
+
+To alleviate the burden of constructing large-scale quality assessment datasets, we first construct 50,952 instruction-response pairs from 3 real-world audio-caption related datasets for AGAV-Rater pre-training, as shown in Fig. 3. The responses are automatically labeled with two text-defined levels (excellent and bad). We select the VGGSound, AudioCaps, and MusicCaps datasets to cover 3 different scenarios: audio-video, audio-text, and music-text. Among the 50,952 instruction-response pairs, the audio-video, audio-text, and music-text scenarios contain 25,952, 19,000, and 6,000 pairs, respectively. Instruction-response pairs are designed from two perspectives: content consistency and audio quality. Under each scenario, half of the pairs focus on content consistency, and the other half on audio quality. Taking the A/V scenario as an example, the instruction for content-consistency related pairs is as follows:
+
+```
+#User: Can you evaluate the audio-visual content consistency of the given content in one word?
+#Assistant: Audio-visual consistency: [Mask].
+```
+
+In the VGGSound dataset, we consider the A/V content consistency of the original video to be excellent. After replacing the original audio with audio from other categories, the consistency quality becomes bad. In the AudioCaps and MusicCaps datasets, we replace the original caption with another caption and ensure no overlapping nouns between the captions to create audio-text samples with bad audio-text consistency.
+
+Since the unnatural distortions in AIGC audio are difficult to simulate with real-world distortions like white noise or Gaussian noise, we simulate the unnaturalness by reversing the audio. In the VGGSound dataset, the audio quality of the original A/V sample is labeled as excellent. Videos with reversed audio are marked as having bad audio quality. Similarly, for AudioCaps and MusicCaps datasets, if the audio in the audio-caption samples is reversed, the audio quality is labeled as bad. We utilize the content consistency and audio quality related instruction-response pairs together to pre-train the AGAV-Rater, allowing it to develop a basic level of quality perception. The AGAV-Rater utilizes the standard loss function of LMMs, which is the cross-entropy between the labels and output logits.
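The pseudo-labeling scheme above can be sketched as follows. The function and field names are illustrative, not from the paper's code; what is grounded in the text is the labeling logic itself: the original clip is "excellent", a reversed waveform simulates AIGC-style unnaturalness ("bad" audio quality), and a swapped caption with no overlapping nouns gives "bad" content consistency.

```python
import numpy as np

def make_quality_pairs(audio: np.ndarray, caption: str, wrong_caption: str) -> list:
    """Build text-defined pre-training samples from one audio-caption pair.

    audio: 1-D waveform array; caption: its true description;
    wrong_caption: a caption sharing no nouns with the true one.
    """
    return [
        # original sample: good audio quality
        {"audio": audio,       "text": caption,       "dim": "quality",     "label": "excellent"},
        # reversed waveform simulates unnatural AIGC audio
        {"audio": audio[::-1], "text": caption,       "dim": "quality",     "label": "bad"},
        # original pairing: consistent content
        {"audio": audio,       "text": caption,       "dim": "consistency", "label": "excellent"},
        # mismatched caption: inconsistent content
        {"audio": audio,       "text": wrong_caption, "dim": "consistency", "label": "bad"},
    ]
```

Each real sample thus yields balanced excellent/bad responses for both the quality and consistency instructions, with no human annotation required.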
+
+Table 1. Performance comparisons on the AGAVQA-MOS subset from three dimensions. The best performance results are shown in bold, and the second-best performance results are underlined.
+
+(AQ = audio quality, CC = content consistency, OQ = overall quality.)
+
+| Model Type | Model/Metrics | AQ SRCC↑ | AQ KRCC↑ | AQ PLCC↑ | AQ RMSE↓ | CC SRCC↑ | CC KRCC↑ | CC PLCC↑ | CC RMSE↓ | OQ SRCC↑ | OQ KRCC↑ | OQ PLCC↑ | OQ RMSE↓ |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Audio-Visual LMMs | PandaGPT (Arxiv 2023) | 0.1326 | 0.0887 | 0.1697 | 10.7643 | 0.2739 | 0.1861 | 0.2943 | 10.8103 | 0.1272 | 0.0844 | 0.1922 | 11.1854 |
+| | NextGPT (ICML 2024) | 0.1523 | 0.1029 | 0.0962 | 10.8685 | 0.0076 | 0.0047 | 0.0827 | 11.2758 | 0.0418 | 0.0278 | 0.0576 | 11.3874 |
+| | VITA-1.0 (Arxiv 2024) | 0.0717 | 0.0484 | 0.0980 | 10.8600 | 0.0603 | 0.0403 | 0.1015 | 11.2522 | 0.1284 | 0.0859 | 0.1760 | 11.2200 |
+| | VideoLLaMA2 (Arxiv 2024) | 0.4079 | 0.2787 | 0.4384 | 9.8146 | 0.2326 | 0.1559 | 0.2706 | 10.8631 | 0.2438 | 0.1652 | 0.2657 | 10.9716 |
+| AQA | MOSNet (INTERSPEECH 2019) | 0.4182 | 0.2888 | 0.4926 | 9.5064 | 0.2567 | 0.1713 | 0.2759 | 10.8737 | 0.2723 | 0.1829 | 0.2963 | 10.8916 |
+| | STOI-Net (APSIPA ASC 2020) | 0.2071 | 0.1364 | 0.3285 | 10.2281 | 0.0408 | 0.0239 | 0.1280 | 11.2099 | 0.1692 | 0.1179 | 0.2157 | 11.1291 |
+| | NISQA (INTERSPEECH 2021) | 0.5258 | 0.3701 | 0.5875 | 8.8416 | 0.3839 | 0.2641 | 0.4064 | 10.3339 | 0.3374 | 0.2286 | 0.3461 | 10.7026 |
+| | PAM (INTERSPEECH 2024) | 0.3180 | 0.2149 | 0.3608 | 10.0233 | -0.0183 | -0.0102 | 0.1441 | 11.8682 | 0.1421 | 0.0950 | 0.1876 | 11.3385 |
+| Audio-Video Alignment | AVID-CMA (CVPR 2021) | 0.6986 | 0.5101 | 0.7350 | 7.4310 | 0.5486 | 0.3851 | 0.5669 | 9.3384 | 0.6148 | 0.4350 | 0.6246 | 8.8641 |
+| | VAST (NIPS 2023) | 0.7640 | 0.5682 | 0.7848 | 6.7285 | 0.6811 | 0.4944 | 0.6958 | 8.1110 | 0.7094 | 0.5166 | 0.7180 | 7.8624 |
+| | VALOR (TPAMI 2024) | 0.7474 | 0.5549 | 0.7773 | 6.8629 | 0.6471 | 0.4662 | 0.6635 | 8.4625 | 0.6888 | 0.4975 | 0.7034 | 8.0904 |
+| AVQA | DNN-RNT (TIP 2023) | 0.5228 | 0.3656 | 0.5447 | 9.1389 | 0.4348 | 0.2970 | 0.4460 | 10.1231 | 0.4940 | 0.3406 | 0.5031 | 9.8489 |
+| | DNN-SND (TIP 2023) | 0.5582 | 0.3932 | 0.5782 | 8.9103 | 0.4686 | 0.3225 | 0.4821 | 9.9072 | 0.5457 | 0.3815 | 0.5548 | 9.4669 |
+| | GeneralAVQA (TIP 2023) | 0.6102 | 0.4346 | 0.6458 | 8.3252 | 0.4658 | 0.3219 | 0.4768 | 9.9448 | 0.6007 | 0.4249 | 0.6160 | 8.9777 |
+| | **AGAV-Rater (Ours)** | **0.7909** | **0.5980** | **0.8108** | **6.3894** | **0.7553** | **0.5639** | **0.7645** | **7.2956** | **0.7458** | **0.5516** | **0.7552** | **7.4611** |
+
+Table 2. Performance comparisons on the TTA and TTM datasets from two dimensions. The best performance results are shown in bold, and the second-best performance results are underlined.
+
+(TTA = Text-to-Audio, TTM = Text-to-Music; AQ = audio/music quality, CC = content consistency.)
+
+| Model Type | Model/Metrics | TTA-AQ SRCC↑ | TTA-AQ KRCC↑ | TTA-AQ PLCC↑ | TTA-CC SRCC↑ | TTA-CC KRCC↑ | TTA-CC PLCC↑ | TTM-AQ SRCC↑ | TTM-AQ KRCC↑ | TTM-AQ PLCC↑ | TTM-CC SRCC↑ | TTM-CC KRCC↑ | TTM-CC PLCC↑ |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Audio-Visual LMMs | PandaGPT (Arxiv 2023) | 0.0888 | 0.0616 | 0.2049 | 0.1976 | 0.1327 | 0.3436 | 0.1722 | 0.1162 | 0.2560 | 0.0748 | 0.0494 | 0.2240 |
+| | NextGPT (ICML 2024) | -0.0160 | -0.0115 | 0.1677 | -0.0097 | -0.0068 | 0.1599 | 0.0498 | 0.0334 | 0.1648 | -0.0586 | -0.0397 | 0.2149 |
+| | VITA-1.0 (Arxiv 2024) | 0.1324 | 0.0915 | 0.2559 | -0.0116 | -0.0092 | 0.1521 | -0.0961 | -0.0682 | 0.1526 | 0.1886 | 0.1224 | 0.3325 |
+| | VideoLLaMA2 (Arxiv 2024) | 0.4698 | 0.3277 | 0.5061 | 0.5472 | 0.3880 | 0.5514 | 0.5046 | 0.3510 | 0.5258 | 0.1402 | 0.0957 | 0.2754 |
+| AQA | MOSNet (INTERSPEECH 2019) | 0.4658 | 0.3328 | 0.4865 | 0.4223 | 0.2987 | 0.4537 | 0.4646 | 0.3110 | 0.4600 | 0.3206 | 0.2213 | 0.3384 |
+| | STOI-Net (APSIPA ASC 2020) | 0.4327 | 0.3032 | 0.4603 | 0.4080 | 0.2858 | 0.4425 | 0.2760 | 0.2084 | 0.3346 | 0.2924 | 0.2298 | 0.3734 |
+| | NISQA (INTERSPEECH 2021) | 0.4262 | 0.3007 | 0.4603 | 0.4072 | 0.2830 | 0.4353 | 0.6264 | 0.4376 | 0.6394 | 0.5724 | 0.4036 | 0.5859 |
+| | PAM (INTERSPEECH 2024) | 0.5165 | 0.3655 | 0.5264 | 0.4100 | 0.2833 | 0.4217 | 0.6435 | 0.4630 | 0.6448 | 0.3465 | 0.2354 | 0.3813 |
+| Audio (Music)-Text Alignment | CLAP (ICASSP 2023) | 0.7040 | 0.5231 | 0.7149 | 0.6700 | 0.4876 | 0.6782 | 0.8103 | 0.6265 | 0.8096 | 0.7474 | 0.5553 | 0.7445 |
+| | TTM-Retrieval (ICASSP 2023) | 0.6121 | 0.4465 | 0.6455 | 0.5818 | 0.4173 | 0.5995 | 0.7586 | 0.5649 | 0.7649 | 0.6974 | 0.5102 | 0.7063 |
+| | VAST (NIPS 2023) | 0.7255 | 0.5421 | 0.7312 | 0.6879 | 0.5021 | 0.6956 | 0.8289 | 0.6412 | 0.8175 | 0.7441 | 0.5557 | 0.7494 |
+| | VALOR (TPAMI 2024) | 0.6711 | 0.4972 | 0.6879 | 0.5948 | 0.4250 | 0.5963 | 0.2619 | 0.1838 | 0.3146 | 0.2350 | 0.1653 | 0.3013 |
+| | **AGAV-Rater (Ours)** | **0.7390** | **0.5566** | **0.7495** | **0.7330** | **0.5427** | **0.7367** | **0.8322** | **0.6500** | **0.8277** | **0.7719** | **0.5819** | **0.7811** |
+
+# 4.2. Fine-Tuning via Numerical Scores
+
+Multi-Dimensional Evaluation Instruction. During the subjective experiment, we observe that subjects typically first evaluate audio quality, followed by A/V content consistency, and finally the overall A/V quality. To mimic the human thought process, we design a multi-dimensional instruction to fine-tune the LMM to evaluate the three dimensions sequentially, e.g.,
+
+```
+#User: Can you evaluate the audio quality, audio-visual content consistency, and overall audio-visual quality of the given content one by one?
+#Assistant: Audio quality: [Mask], audio-visual consistency: [Mask], overall audio-visual quality: [Mask].
+```
+
+Masking Quality Levels. To further improve the model's evaluation capability, we mask the quality levels in instructions, which can prevent the model from overly relying on the ground truth of other scores when predicting quality. It can encourage the model to utilize the inferred quality levels to guide subsequent score predictions, thereby deepening the understanding of the relationships between 3 quality dimensions.
+
+Numerical Score Prediction. Most researchers (Wu et al.; Kou et al., 2024; Zhang et al., 2024b) consider it suboptimal to directly tune LMMs to output numerical scores. They convert the MOSs into five text-defined rating levels (excellent, good, fair, poor, and bad) and subsequently format them into instruction-response pairs for visual instruction tuning. However, converting MOSs to five rating levels discards a significant amount of information. We directly regress the LMM's last hidden states to output three-dimensional numerical scores. The loss between the predicted scores and the ground truth is calculated using the PLCC loss. By predicting numerical scores instead of text-defined levels, AGAV-Rater can more precisely understand human subjective perception.
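A common formulation of the PLCC loss is one minus the Pearson correlation between predicted scores and MOSs over a batch. The paper names the loss but not its exact form, so the sketch below is a representative assumption (written with NumPy for clarity; in training it would use differentiable torch tensors):

```python
import numpy as np

def plcc_loss(pred: np.ndarray, mos: np.ndarray) -> float:
    """1 - Pearson linear correlation between predictions and MOSs.

    Correlation-based losses are scale- and shift-invariant: they
    reward preserving the ranking and relative spacing of quality
    scores rather than matching absolute values.
    """
    p = pred - pred.mean()
    m = mos - mos.mean()
    r = (p @ m) / (np.sqrt((p @ p) * (m @ m)) + 1e-8)  # eps guards division by zero
    return float(1.0 - r)
```

A perfectly correlated prediction drives the loss toward 0; an anti-correlated one drives it toward 2.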
+
+# 4.3. Model Structure
+
+As shown in Fig. 3, the structure of AGAV-Rater is based on the recently released open-source LMM VideoLLaMA2 (Cheng et al., 2024), which exhibits excellent A/V perception abilities and strong language comprehension. The video is first converted into a 1 fps image sequence. Then, the image sequence and audio signal are separately encoded by the video encoder and audio encoder. After encoding, they are projected into the same vector space through the video projection and audio projection, and input into the large language model together with the text embedding. This enables us to process AGAV, audio-text, and music-text quality assessment tasks within a unified model framework.
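The fusion step can be sketched with toy shapes: modality-specific encoder outputs are linearly projected into a shared space and concatenated with the text embeddings as LLM input. All dimensions, token counts, and the plain linear projections here are illustrative assumptions, not VideoLLaMA2's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions): encoder widths and LLM hidden size.
d_vid, d_aud, d_llm = 768, 512, 1024
w_vid = rng.standard_normal((d_vid, d_llm)) * 0.02  # video projection
w_aud = rng.standard_normal((d_aud, d_llm)) * 0.02  # audio projection

frames   = rng.standard_normal((8, d_vid))   # 8 frames sampled at 1 fps, encoded
audio    = rng.standard_normal((16, d_aud))  # audio encoder output tokens
text_emb = rng.standard_normal((12, d_llm))  # tokenized instruction embeddings

# Project A/V tokens into the LLM space and concatenate with text tokens.
llm_input = np.concatenate([frames @ w_vid, audio @ w_aud, text_emb])
```

The resulting sequence interleaves no modalities; it simply stacks 8 video tokens, 16 audio tokens, and 12 text tokens into one (36, 1024) LLM input.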
+
+Table 3. Answer accuracy on AGAVQA-Pair subset. Based on the VTA method source, we divide the AGAVQA-Pair subset into 8 categories, with All representing the entire dataset. AGAV-Rater is trained on the AGAVQA-MOS subset, demonstrating cross-dataset performance on the AGAVQA-Pair subset. The best result is shown in bold, and the second-best is underlined.
+
+| Category | SonicVisionLM | Frieren | V2AMapper | TIVA | V2A-SceneDetector | STAV2A | SSV2A | ReWaS | All |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Random | 0.33 | 0.20 | 0.41 | 0.20 | 0.50 | 0.25 | 0.20 | 0.25 | 0.32 |
+| *Question Type: Multi-Input Comparison, w/o fine-tuning on the AGAVQA-MOS subset.* | | | | | | | | | |
+| PandaGPT (Su et al., 2023) | 0.29 | 0.20 | 0.43 | 0.22 | 0.43 | 0.23 | 0.20 | 0.22 | 0.26 |
+| NextGPT (Wu et al., 2024) | 0.33 | 0.20 | 0.28 | 0.04 | 0.36 | 0.30 | 0.16 | 0.22 | 0.22 |
+| VITA-1.0 (Fu et al., 2024) | 0.33 | 0.12 | 0.26 | 0.08 | 0.43 | 0.18 | 0.04 | 0.19 | 0.17 |
+| VideoLLaMA2 (Cheng et al., 2024) | 0.38 | 0.17 | 0.26 | 0.14 | 0.21 | 0.18 | 0.12 | 0.28 | 0.21 |
+| Gemini-1.5 Flash-8b (Team et al., 2024a) | 0.10 | 0.05 | 0.20 | 0.06 | 0.29 | 0.10 | 0.04 | 0.17 | 0.11 |
+| Gemini-1.5 Pro (Team et al., 2024a) | 0.29 | 0.23 | 0.35 | 0.22 | 0.07 | 0.13 | 0.28 | 0.17 | 0.23 |
+| Gemini-2.0 Flash (Team et al., 2024a) | 0.10 | 0.18 | 0.43 | 0.26 | 0.64 | 0.25 | 0.24 | 0.19 | 0.27 |
+| Reka Core (Team et al., 2024b) | 0.19 | 0.18 | 0.39 | 0.22 | 0.43 | 0.18 | 0.08 | 0.22 | 0.23 |
+| Reka Flash (Team et al., 2024b) | 0.24 | 0.20 | 0.30 | 0.30 | 0.50 | 0.20 | 0.28 | 0.22 | 0.26 |
+| GPT-4o+Audio Caption (Hurst et al., 2024) | 0.29 | 0.20 | 0.46 | 0.18 | 0.57 | 0.30 | 0.20 | 0.36 | 0.29 |
+| GPT-4o+Video Caption (Hurst et al., 2024) | 0.29 | 0.18 | 0.48 | 0.16 | 0.64 | 0.40 | 0.12 | 0.36 | 0.30 |
+| *Question Type: Single-Input Scoring, w/o fine-tuning on the AGAVQA-MOS subset.* | | | | | | | | | |
+| PandaGPT (Su et al., 2023) | 0.29 | 0.17 | 0.67 | 0.10 | 0.57 | 0.20 | 0.20 | 0.33 | 0.35 |
+| NextGPT (Wu et al., 2024) | 0.43 | 0.08 | 0.28 | 0.30 | 0.57 | 0.30 | 0.40 | 0.33 | 0.31 |
+| VITA (Fu et al., 2024) | 0.50 | 0.17 | 0.39 | 0.10 | 0.29 | 0.30 | 0.20 | 0.22 | 0.27 |
+| VideoLLaMA2 (Cheng et al., 2024) | 0.71 | 0.25 | 0.67 | 0.30 | 0.29 | 0.10 | 0.20 | 0.44 | 0.40 |
+| *Question Type: Single-Input Scoring, with fine-tuning on the AGAVQA-MOS subset.* | | | | | | | | | |
+| AVID-CMA (Morgado et al., 2021) | 0.29 | 0.58 | 0.61 | 0.50 | 0.71 | 0.50 | 0.40 | 0.44 | 0.52 |
+| VALOR (Liu et al., 2024a) | 1.00 | 0.75 | 0.72 | 0.70 | 0.71 | 0.70 | 0.40 | 0.44 | 0.55 |
+| VAST (Chen et al., 2023) | 0.86 | 0.83 | 0.78 | 0.80 | 0.43 | 0.40 | 0.40 | 0.56 | 0.64 |
+| **AGAV-Rater (Ours)** | **1.00** | **0.92** | **0.83** | **0.80** | **0.71** | **0.70** | **0.60** | **0.56** | **0.78** |
+
+# 5. Experiments
+
+# 5.1. Experimental Settings
+
+In this paper, we fine-tune AGAV-Rater from the pretrained weights of VideoLLaMA2 (Cheng et al., 2024). The AGAV-Rater model is implemented in PyTorch and trained on two 96GB H20 GPUs. The learning rate is set to $1\times 10^{-5}$, and the batch size is set to 9. During pre-training, the number of training epochs is set to 1. For fine-tuning, the number of training epochs is set to 5 on the AGAVQA-MOS subset and 10 on the TTA and TTM datasets (Deshmukh et al., 2024). Fine-tuning the AGAV-Rater model on the AGAVQA-MOS subset for 5 epochs on two 96GB H20 GPUs takes approximately 5 hours. Each compared method is retrained on the AGAVQA-MOS subset using 5-fold cross-validation. The reported performance of AGAV-Rater is evaluated on the final weights after training.
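The 5-fold protocol can be sketched as follows. This is a generic illustration; the actual fold partitioning of AGAVQA-MOS is defined by the authors:

```python
import numpy as np

def five_fold_splits(n_items, seed=0):
    """Yield (train_idx, test_idx) index pairs for 5-fold cross-validation:
    each item appears in exactly one test fold."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_items)
    folds = np.array_split(idx, 5)
    for k in range(5):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(5) if j != k])
        yield train, test

for train, test in five_fold_splits(100):
    assert len(train) + len(test) == 100
    assert set(train).isdisjoint(test)  # no leakage between train and test
print("every item is tested exactly once across the 5 folds")
```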
+
+# 5.2. Compared Methods
+
+Since no specific method has been proposed for evaluating AGAVs, we select state-of-the-art methods from four areas for comparison: audio-visual LMMs, AQA, AVQA, and multimodal alignment, including:
+
+- Audio-visual LMMs: PandaGPT (Su et al., 2023), NextGPT (Wu et al., 2024), VITA-1.0 (Fu et al., 2024), and VideoLLaMA2 (Cheng et al., 2024).
+- AQA: MOSNet (Lo et al., 2019), STOI-Net (Zezario et al., 2020), NISQA (Mittag et al., 2021), and PAM
+
+(Deshmukh et al., 2024).
+
+- AVQA: DNN-RNT (Cao et al., 2023a), DNN-SND (Cao et al., 2023a), and GeneralAVQA (Cao et al., 2023b).
+- Multimodal alignment: AVID-CMA (Morgado et al., 2021) aligns video features with audio features. VAST (Chen et al., 2023) and VALOR (Liu et al., 2024a) map video, audio, and text into the same semantic space. CLAP (Elizalde et al., 2023) and TTM-Retrieval (Doh et al., 2023) align audio and music with text, respectively.
+
+Except for audio-visual LMMs, all methods are retrained on the AGAVQA-MOS, TTA, and TTM datasets after loading the default weights. Original multimodal alignment methods extract audio and video features using their encoders, then align them into a common vector space. We load the default parameters and use these encoders to extract audio and video features, which are then concatenated. The concatenated features are subsequently fed into a fully connected layer with an output dimension of 3 to predict the three-dimensional scores.
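The adaptation of the alignment baselines can be sketched as below. The encoder outputs are stubbed with random vectors, and the feature dimensions are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
D_AUD, D_VID = 512, 768  # placeholder widths for the pretrained encoders

audio_feat = rng.standard_normal(D_AUD)  # from the method's audio encoder
video_feat = rng.standard_normal(D_VID)  # from the method's video encoder

# Fully connected regression head: concatenated features -> 3 quality scores
W = rng.standard_normal((D_AUD + D_VID, 3)) * 0.01
b = np.zeros(3)

fused = np.concatenate([audio_feat, video_feat])  # (1280,)
scores = fused @ W + b  # audio quality, A/V consistency, overall A/V quality
print(scores.shape)  # (3,)
```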
+
+Audio-visual LMMs are directly tested on the dataset with their default weights. For quality-related questions, most audio-visual LMMs respond with text-defined quality levels and cannot stably output numerical scores. Therefore, following the testing scheme designed in (Wu et al.), we prompt LMMs with quality questions, extract the probability distribution $\mathcal{X}$ over the predicted level tokens, and convert it into the final predicted score $\mathrm{S}_{\mathrm{LMM}}$ as follows:
+
+$$
+\mathrm {S} _ {\mathrm {L M M}} = \sum_ {i = 1} ^ {5} i \times \frac {e ^ {\mathcal {X} _ {l _ {i}}}}{\sum_ {j = 1} ^ {5} e ^ {\mathcal {X} _ {l _ {j}}}} \tag {1}
+$$
+
+where $\{l_i|_{i = 1}^5\} = \{\text{excellent, good, fair, poor, bad}\}$ are the standard text quality levels. A detailed introduction to compared methods can be found in Appendix C.3.
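Eq. 1 is a softmax-weighted expectation over the five level-token logits. A minimal sketch, taking the level ordering exactly as written (so index 1 corresponds to "excellent"):

```python
import numpy as np

LEVELS = ["excellent", "good", "fair", "poor", "bad"]  # l_1 ... l_5

def level_logits_to_score(logits):
    """Eq. (1): softmax over the five level-token logits, then a
    probability-weighted sum of the indices i = 1..5."""
    probs = np.exp(logits - logits.max())  # max-shifted for numerical stability
    probs /= probs.sum()
    return float(sum((i + 1) * p for i, p in enumerate(probs)))

# Uniform logits spread mass evenly over the levels, giving the midpoint 3.
print(round(level_logits_to_score(np.zeros(5)), 6))  # 3.0
```

A model that puts nearly all probability mass on a single level token recovers that level's index, while uncertain predictions land between levels, which is exactly the extra granularity lost when only the argmax level text is read off.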
+
+# 5.3. Evaluation on Multi-dimensional Scoring Tasks
+
+We test the multi-dimensional scoring ability of AGAV-Rater on the AGAVQA-MOS, TTA, and TTM datasets, using Spearman Rank-order Correlation (SRCC), Kendall Rank-order Correlation (KRCC), Pearson Linear Correlation (PLCC), and Root Mean Squared Error (RMSE). The experimental results are presented in Tab. 1 and Tab. 2. AGAV-Rater demonstrates the best performance across all three datasets. In particular, on the content consistency dimension, the SRCC metric shows $11\%$, $7\%$, and $3\%$ improvements on the AGAVQA-MOS, TTA, and TTM datasets, respectively. This highlights the powerful content understanding capability of LMMs, which significantly aids in evaluating the consistency of AIGC audio and video (text). Furthermore, the poor performance of off-the-shelf audio-visual LMMs shows that they cannot adapt well to quality assessment tasks, demonstrating that our training process effectively enhances the LMM's understanding of human perception. Traditional AQA and AVQA methods perform worse on A/V content consistency than on audio quality, as these methods have difficulty adapting to the unique distortions in AIGC media. Audio-video alignment methods focus on the semantic alignment between audio and video; they can recognize semantic-level distortions in AGAVs and thus achieve second-best performance.
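The four criteria can be computed with scipy. This is a plain illustration; published quality-assessment results often additionally fit a logistic mapping between predictions and MOSs before reporting PLCC and RMSE:

```python
import numpy as np
from scipy import stats

def quality_metrics(pred, gt):
    """Return (SRCC, KRCC, PLCC, RMSE) between predicted and ground-truth MOSs."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    srcc = stats.spearmanr(pred, gt)[0]   # rank-order correlation
    krcc = stats.kendalltau(pred, gt)[0]  # pairwise-order correlation
    plcc = stats.pearsonr(pred, gt)[0]    # linear correlation
    rmse = float(np.sqrt(np.mean((pred - gt) ** 2)))
    return srcc, krcc, plcc, rmse

srcc, krcc, plcc, rmse = quality_metrics([1.1, 2.0, 2.9, 4.2], [1.0, 2.0, 3.0, 4.0])
print(srcc, krcc)  # 1.0 1.0: the prediction preserves the ranking exactly
```

SRCC/KRCC reward correct ordering regardless of scale, while PLCC and RMSE also penalize miscalibrated score magnitudes, which is why all four are reported together.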
+
+# 5.4. Evaluation on Optimal AGAV Selection Tasks
+
+The AGAVQA-Pair subset was collected from 8 VTA webpages, and we divide it into 8 corresponding categories. For each category, the optimal AGAV is generated by the corresponding VTA method. Since there is no overlap between the video content in the AGAVQA-MOS and AGAVQA-Pair subsets, we perform cross-dataset validation by training AGAV-Rater on the AGAVQA-MOS subset and testing on the unseen AGAVQA-Pair subset. We collect 11 audio-visual LMMs as comparison methods, including 4 open-source LMMs and 7 closed-source LMMs. For GPT-4o+audio caption, we utilize GPT-4o-audio to generate captions for audio, then feed the text, video, and audio captions into GPT-4o. Similarly, for GPT-4o+video caption, we utilize GPT-4o to generate captions for video, then feed the text, audio, and video captions into GPT-4o-audio. As shown in Tab. 3, we test their accuracy in answering optimal-AGAV questions on the AGAVQA-Pair subset. We design two types of instructions to prompt LMMs for the optimal AGAV.
+
+For the multi-input comparison instruction type, we utilize instruction-response pairs in the AGAVQA-Pair subset to let LMMs answer the number of the optimal audio. To prevent the model from guessing, we shuffle the audio order and iterate through each audio number as the correct answer, inputting them into LMMs to calculate the response accuracy. This instruction type applies to both open-source and closed-source LMMs, with the optimal AGAV judged from their textual responses. For the single-input scoring instruction type, we sequentially ask for the overall quality of each AGAV and compute the final predicted scores by Eq. 1. The AGAV with the highest score is selected as the optimal AGAV. Since closed-source LMMs cannot access the token probability distribution, this method is only applied to open-source LMMs. The above 11 audio-visual LMMs use their original model parameters without fine-tuning on the AGAVQA-MOS subset. We also test the accuracy of the audio-video alignment methods, which have been fine-tuned on the AGAVQA-MOS subset. As seen in Tab. 3, our method achieves the highest accuracy across all 8 VTA methods, demonstrating that our model has the best ability to judge the optimal AGAV and exhibits strong transferability.
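The shuffle-and-iterate protocol for the multi-input comparison setting can be sketched as follows. `ask_model` here is a stand-in for querying an LMM and parsing its answer, not part of any real API:

```python
import random

def multi_input_accuracy(ask_model, audios, best_idx, seed=0):
    """Place the ground-truth best audio at every candidate position in turn,
    shuffling the remaining candidates, so a model that always guesses one
    fixed answer cannot score well. `ask_model(candidates)` returns the
    index the model picks."""
    rng = random.Random(seed)
    others = [a for i, a in enumerate(audios) if i != best_idx]
    n = len(audios)
    correct = 0
    for pos in range(n):  # iterate every position as the correct answer
        rest = others[:]
        rng.shuffle(rest)
        candidates = rest[:pos] + [audios[best_idx]] + rest[pos:]
        if ask_model(candidates) == pos:
            correct += 1
    return correct / n

oracle = lambda cands: cands.index("best")  # always recognizes the best audio
print(multi_input_accuracy(oracle, ["a", "best", "c"], best_idx=1))  # 1.0
```

A model that always answers the same fixed position only scores 1/n under this protocol, which is the point of iterating the correct answer through every slot.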
+
+# 5.5. Ablation Study
+
+Effects of Pretraining Procedure. The "w/o Pretrain" row in Tab. 4 shows AGAV-Rater's performance without the pretraining step. Relative to its effect on the AGAVQA-MOS subset, the pretraining step brings a larger gain on the smaller TTA and TTM datasets. This is because pretraining provides rich prior knowledge, which effectively compensates for the limited scale and diversity of these smaller datasets. As shown in Tab. 1, VideoLLaMA2 exhibits relatively weak performance on the content consistency dimension of the TTM dataset. The pre-training procedure, through the music-text instruction-response pairs, enhances the music perception ability of AGAV-Rater, thereby improving its performance on the consistency dimension of the TTM dataset. On the AGAVQA-MOS subset, the pretraining step yields a larger improvement in overall A/V quality than in audio quality and A/V content consistency. We believe this is because AGAV-Rater relies on the scores of audio quality and A/V content consistency when predicting the overall A/V quality, and pretraining helps AGAV-Rater learn audio quality and A/V content consistency faster, reducing disturbances in learning overall A/V quality.
+
+Effects of Scoring Method. We convert MOSs into text-defined levels and use these levels as labels in the fine-tuning step in place of MOSs. As shown in the "Finetuning with Levels" row of Tab. 4, replacing numerical scores with text-defined levels for fine-tuning results in a slight performance decrease. This demonstrates that mapping the hidden states to numerical scores through quality regression allows AGAV-Rater to learn human subjective perception at a finer granularity.
+
+Table 4. Ablation study of the proposed AGAV-Rater.
+
+| Strategy | AGAVQA-MOS Audio SRCC↑ | AGAVQA-MOS Audio PLCC↑ | AGAVQA-MOS Cons. SRCC↑ | AGAVQA-MOS Cons. PLCC↑ | AGAVQA-MOS Overall SRCC↑ | AGAVQA-MOS Overall PLCC↑ | TTA Audio SRCC↑ | TTA Audio PLCC↑ | TTA Cons. SRCC↑ | TTA Cons. PLCC↑ | TTM Audio SRCC↑ | TTM Audio PLCC↑ | TTM Cons. SRCC↑ | TTM Cons. PLCC↑ |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| w/o Pretrain | 0.7856 | 0.8030 | 0.7503 | 0.7599 | 0.7141 | 0.7202 | 0.7040 | 0.7117 | 0.7104 | 0.7215 | 0.8218 | 0.8233 | 0.7042 | 0.7188 |
+| Finetuning with Levels | 0.7832 | 0.8043 | 0.7499 | 0.7451 | 0.7078 | 0.7167 | 0.7005 | 0.7051 | 0.7249 | 0.7324 | 0.8102 | 0.8121 | 0.7359 | 0.7532 |
+| Single-Dimension Instruction | 0.7845 | 0.8035 | 0.7511 | 0.7599 | 0.6865 | 0.7007 | 0.7143 | 0.7152 | 0.7286 | 0.7277 | 0.8297 | 0.8253 | 0.7565 | 0.7759 |
+| **AGAV-Rater** | **0.7909** | **0.8108** | **0.7553** | **0.7645** | **0.7458** | **0.7552** | **0.7390** | **0.7495** | **0.7330** | **0.7367** | **0.8322** | **0.8277** | **0.7719** | **0.7811** |
+
+Table 5. Ablation study of the base models.
+
+| Model | Audio Quality SRCC↑ | Audio Quality PLCC↑ | Consistency SRCC↑ | Consistency PLCC↑ | Overall Quality SRCC↑ | Overall Quality PLCC↑ |
+| --- | --- | --- | --- | --- | --- | --- |
+| GroundingGPT (ACL 2024) | 0.4387 | 0.4494 | 0.5067 | 0.4764 | 0.4975 | 0.5297 |
+| OneLLM (CVPR 2024) | 0.6578 | 0.6879 | 0.6184 | 0.6297 | 0.6327 | 0.6388 |
+| **AGAV-Rater** | **0.7909** | **0.8108** | **0.7553** | **0.7645** | **0.7458** | **0.7552** |
+
+Effects of Multi-Dimension Instruction. To guide LMMs to score according to the human thought process, we design a multi-dimensional instruction that enables AGAV-Rater to predict scores for all three dimensions simultaneously. For comparison, we break this down into three single-dimensional instructions, each focusing on the quality of one dimension. As shown in the "Single-Dimension Instruction" row of Tab. 4, the multi-dimensional instruction improves AGAV-Rater's performance on the overall A/V quality. This is because guiding AGAV-Rater to first consider audio quality and A/V content consistency helps it better predict overall A/V quality. Additionally, the multi-dimensional instruction enables mutual enhancement between the two dimensions on the TTA and TTM datasets, further boosting performance.
+
+Effects of the Base Models. We conduct ablation studies using GroundingGPT (Li et al., 2024) and OneLLM (Han et al., 2024) as base models on the AGAVQA-MOS subset. For OneLLM and GroundingGPT, we first load the default weights, and then fine-tune them using the official training code on the AGAVQA-MOS subset. To ensure fairness, we also add the quality regression module, directly regressing the LLM's last hidden states to output three-dimensional numerical scores. As shown in Tab. 5, AGAV-Rater achieves the best performance. The main reason for this is that VideoLLaMA2 is designed for audio-video content understanding and pre-trained on more diverse audio-video datasets, making it more suitable for our quality assessment task. GroundingGPT focuses more on localization and visual understanding and is not designed or trained to understand continuous audio-video content. Its ability to comprehend video quality may be weaker. OneLLM is a general multimodal model that, while supporting audio and video processing, is not specifically optimized or enhanced for video and audio alignment. Its audio-related dataset only includes audio-text data, and OneLLM is more suited to text-vision or text-audio matching and understanding, rather than specific audio-video content.
+
+# 5.6. Enhancing ElevenLabs Results via AGAV-Rater
+
+We finally conduct a subjective experiment to demonstrate that AGAV-Rater can help ElevenLabs select higher-quality audio to present to users. We collect 230 silent AIGC videos from T2V-CompBench (Ji et al., 2024) and Sora (sor, 2024), generating 2 AGAVs for each video using ElevenLabs. We then utilize AGAV-Rater to select the higher-quality AGAV of each pair. A total of 10 subjects are invited to watch and listen to the AGAVs; $80\%$ of them prefer the AGAVs selected by AGAV-Rater for their better quality. This demonstrates that AGAV-Rater can be applied in real-world scenarios to help improve the quality of outputs from VTA methods. On the project page, we display the AGAVs that AGAV-Rater considers high quality and low quality.
+
+# 6. Conclusion
+
+In this paper, we construct AGAVQA-3k, the first AGAV quality assessment dataset, which labels AGAV quality in two ways: multi-dimensional score prediction and optimal AGAV selection. We propose a novel LMM-based AGAV quality assessment method, AGAV-Rater. AGAV-Rater demonstrates superior score prediction capabilities on the AGAVQA-MOS, TTA, and TTM datasets, and achieves the highest accuracy in identifying the optimal AGAV on the AGAVQA-Pair subset. AGAV-Rater provides users with a better audio-visual experience and enhances the quality of VTA method outputs.
+
+# Impact Statement
+
+The quality of AI-generated audio and video must align with human preferences. Among existing models, AGAV-Rater achieves the highest consistency with human perceptual evaluations of AGAVs, indicating its potential for supervising and controlling the quality of AGAVs. We will continue to focus on and advance this research in the future.
+
+# References
+
+Elevenlabs. Prime Voice AI, 2023. URL https://elevenlabs.io/.
+Pika 1.0. Accessed December 28, 2023 [Online] https://www.pika.art/, 2023. URL https://www.pika.art/.
+
+Gen-3. Accessed June 17, 2024 [Online] https://runwayml.com/research/introducing-gen-3-alpha, 2024. URL https://runwayml.com/research/introducing-gen-3-alpha.
+Kling. Accessed June 6, 2024 [Online] https://klingai.kuaishou.com/, 2024. URL https://klingai.kuaishou.com/.
+Sora. Accessed February 16, 2024 [Online] https://openai.com/sora/, 2024. URL https://openai.com/sora/.
+Agostinelli, A., Denk, T. I., Borsos, Z., Engel, J., Verzetti, M., Caillon, A., Huang, Q., Jansen, A., Roberts, A., Tagliasacchi, M., et al. MusicLM: Generating music from text. arXiv preprint arXiv:2301.11325, 2023.
+Beerends, J. G., Schmidmer, C., Berger, J., Obermann, M., Ullmann, R., Pomy, J., and Keyhl, M. Perceptual objective listening quality assessment (polqa), the third generation itu-t standard for end-to-end speech quality measurement part i—temporal alignment. AES, 61(6):366-384, 2013.
+Cao, Y., Min, X., Sun, W., and Zhai, G. Attention-guided neural networks for full-reference and no-reference audiovisual quality assessment. IEEE TIP, 32:1882-1896, 2023a.
+Cao, Y., Min, X., Sun, W., and Zhai, G. Subjective and objective audio-visual quality assessment for user generated content. IEEE TIP, 2023b.
+Chen, G., Wang, G., Huang, X., and Sang, J. Semantically consistent video-to-audio generation using multimodal language large model. arXiv preprint arXiv:2404.16305, 2024.
+Chen, H., Xie, W., Vedaldi, A., and Zisserman, A. Vggsound: A large-scale audio-visual dataset. In ICASSP, pp. 721-725, 2020.
+Chen, S., Li, H., Wang, Q., Zhao, Z., Sun, M., Zhu, X., and Liu, J. Vast: A vision-audio-subtitle-text omni-modality foundation model and dataset. NeurIPS, 36:72842-72866, 2023.
+Cheng, Z., Leng, S., Zhang, H., Xin, Y., Li, X., Chen, G., Zhu, Y., Zhang, W., Luo, Z., Zhao, D., et al. VideoLLaMA 2: Advancing spatial-temporal modeling and audio understanding in video-LLMs. arXiv preprint arXiv:2406.07476, 2024.
+Copet, J., Kreuk, F., Gat, I., Remez, T., Kant, D., Synnaeve, G., Adi, Y., and Defossez, A. Simple and controllable music generation. NeurIPS, 36, 2024.
+
+Deshmukh, S., Alharthi, D., Elizalde, B., Gamper, H., Ismail, M. A., Singh, R., Raj, B., and Wang, H. Pam: Prompting audio-language models for audio quality assessment. arXiv preprint arXiv:2402.00282, 2024.
+Doh, S., Won, M., Choi, K., and Nam, J. Toward universal text-to-music retrieval. In ICASSP, 2023.
+Elizalde, B., Deshmukh, S., Al Ismail, M., and Wang, H. Clap learning audio concepts from natural language supervision. In ICASSP, pp. 1-5, 2023.
+Fu, C., Lin, H., Long, Z., Shen, Y., Zhao, M., Zhang, Y., Dong, S., Wang, X., Yin, D., Ma, L., et al. Vita: Towards open-source interactive omni multimodal llm. arXiv preprint arXiv:2408.05211, 2024.
+Guo, W., Wang, H., Cai, W., and Ma, J. Gotta hear them all: Sound source aware vision to audio generation. arXiv preprint arXiv:2411.15447, 2024a.
+Guo, Y., Yang, C., Rao, A., Wang, Y., Qiao, Y., Lin, D., and Dai, B. AnimateDiff: Animate your personalized text-to-image diffusion models without specific tuning. In ICLR, 2024b.
+Han, J., Gong, K., Zhang, Y., Wang, J., Zhang, K., Lin, D., Qiao, Y., Gao, P., and Yue, X. Onellm: One framework to align all modalities with language. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 26584-26595, 2024.
+Hayes, A. F. and Krippendorff, K. Answering the call for a standard reliability measure for coding data. Communication methods and measures, 1(1):77-89, 2007.
+Hong, W., Ding, M., Zheng, W., Liu, X., and Tang, J. CogVideo: Large-scale pretraining for text-to-video generation via transformers. arXiv preprint arXiv:2205.15868, 2022.
+Hu, Y., Gu, Y., Li, C., Chen, R., and Yu, D. Video-to-audio generation with fine-grained temporal semantics. arXiv preprint arXiv:2409.14709, 2024.
+Huang, Z., He, Y., Yu, J., Zhang, F., Si, C., Jiang, Y., Zhang, Y., Wu, T., Jin, Q., Chanpaisit, N., et al. Vbench: Comprehensive benchmark suite for video generative models. In CVPR, pp. 21807-21818, 2024.
+Hurst, A., Lerer, A., Goucher, A. P., Perelman, A., Ramesh, A., Clark, A., Ostrow, A., Welihinda, A., Hayes, A., Radford, A., et al. Gpt-4o system card. arXiv preprint arXiv:2410.21276, 2024.
+Iashin, V. and Rahtu, E. Taming visually guided sound generation. arXiv preprint arXiv:2110.08791, 2021.
+
+Jeong, Y., Kim, Y., Chun, S., and Lee, J. Read, watch and scream! sound generation from text and video. In AAAI, 2025.
+Ji, P., Xiao, C., Tai, H., and Huo, M. T2VBench: Benchmarking temporal dynamics for text-to-video generation. In CVPR, pp. 5325-5335, 2024.
+Kim, C. D., Kim, B., Lee, H., and Kim, G. Audiocaps: Generating captions for audios in the wild. In NAACL-HLT, pp. 119-132, 2019.
+Kou, T., Liu, X., Zhang, Z., Li, C., Wu, H., Min, X., Zhai, G., and Liu, N. Subjective-aligned dataset and metric for text-to-video quality assessment. In ACMM, pp. 7793-7802, 2024.
+Kreuk, F., Synnaeve, G., Polyak, A., Singer, U., Defossez, A., Copet, J., Parikh, D., Taigman, Y., and Adi, Y. Audiogen: Textually guided audio generation. arXiv preprint arXiv:2209.15352, 2022.
+Li, Z., Xu, Q., Zhang, D., Song, H., Cai, Y., Qi, Q., Zhou, R., Pan, J., Li, Z., Tu, V., et al. Groundinggpt: Language enhanced multi-modal grounding model. In ACL, pp. 6657-6678, 2024.
+Liu, J., Chen, S., He, X., Guo, L., Zhu, X., Wang, W., and Tang, J. VALOR: Vision-audio-language omniperception pretraining model and dataset. IEEE TPAMI, 2024a.
+Liu, Y., Li, L., Ren, S., Gao, R., Li, S., Chen, S., Sun, X., and Hou, L. Fetv: A benchmark for fine-grained evaluation of open-domain text-to-video generation. NeurIPS, 36, 2024b.
+Lo, C.-C., Fu, S.-W., Huang, W.-C., Wang, X., Yamagishi, J., Tsao, Y., and Wang, H.-M. Mosnet: Deep learning based objective assessment for voice conversion. In IN-TERSPEECH, 2019.
+Lu, J., Clark, C., Lee, S., Zhang, Z., Khosla, S., Marten, R., Hoiem, D., and Kembhavi, A. Unified-IO 2: Scaling autoregressive multimodal models with vision language audio and action. In CVPR, pp. 26439–26455, 2024.
+Luo, S., Yan, C., Hu, C., and Zhao, H. Diff-foley: Synchronized video-to-audio synthesis with latent diffusion models. NeurIPS, 36, 2024.
+Manocha, P. and Kumar, A. Speech quality assessment through MOS using non-matching references. arXiv preprint arXiv:2206.12285, 2022.
+Min, X., Zhai, G., Zhou, J., Farias, M. C., and Bovik, A. C. Study of subjective and objective quality assessment of audio-visual signals. IEEE TIP, 29:6054-6068, 2020.
+
+Mittag, G., Naderi, B., Chehadi, A., and Möller, S. Nisqa: A deep cnn-self-attention model for multidimensional speech quality prediction with crowdsourced datasets. In INTERSPEECH, 2021.
+Morgado, P., Vasconcelos, N., and Misra, I. Audio-visual instance discrimination with cross-modal agreement. In CVPR, pp. 12475–12486, 2021.
+Ren, Y., Li, C., Xu, M., Liang, W., Gu, Y., Chen, R., and Yu, D. STA-V2A: Video-to-audio generation with semantic and temporal alignment. arXiv preprint arXiv:2409.08601, 2024.
+Rix, A. W., Beerends, J. G., Hollier, M. P., and Hekstra, A. P. Perceptual evaluation of speech quality (PESQ)-a new method for speech quality assessment of telephone networks and codec's. In ICASSP, volume 2, pp. 749-752, 2001.
+Sheffer, R. and Adi, Y. I hear your true colors: Image guided audio generation. In ICASSP, pp. 1-5, 2023.
+Su, Y., Lan, T., Li, H., Xu, J., Wang, Y., and Cai, D. PandaGPT: One model to instruction-follow them all. arXiv preprint arXiv:2305.16355, 2023.
+Sun, Y., Min, X., Duan, H., and Zhai, G. The influence of text-guidance on visual attention. In IEEE ISCAS, pp. 1-5, 2023.
+Sun, Y., Min, X., Duan, H., and Zhai, G. How is visual attention influenced by text guidance? database and model. IEEE TIP, 2024a.
+Sun, Y., Zhang, Z., Wu, H., Liu, X., Lin, W., Zhai, G., and Min, X. Explore the hallucination on low-level perception for mllms. arXiv preprint arXiv:2409.09748, 2024b.
+Team, G., Georgiev, P., Lei, V. I., Burnell, R., Bai, L., Gulati, A., Tanzer, G., Vincent, D., Pan, Z., Wang, S., et al. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv preprint arXiv:2403.05530, 2024a.
+Team, R., Ormazabal, A., Zheng, C., d'Autume, C. d. M., Yogatama, D., Fu, D., Ong, D., Chen, E., Lamprecht, E., Pham, H., et al. Reka core, flash, and edge: A series of powerful multimodal language models. arXiv preprint arXiv:2404.12387, 2024b.
+Wang, H., Ma, J., Pascual, S., Cartwright, R., and Cai, W. V2a-mapper: A lightweight solution for vision-to-audio generation by connecting foundation models. In AAAI, volume 38, pp. 15492-15501, 2024a.
+Wang, J., Duan, H., Zhai, G., and Min, X. Understanding and evaluating human preferences for ai generated images
+
+with instruction tuning. arXiv preprint arXiv:2405.07346, 2024b.
+Wang, J., Duan, H., Zhai, G., Wang, J., and Min, X. AIGV-Assessor: Benchmarking and evaluating the perceptual quality of text-to-video generation with lmm. arXiv preprint arXiv:2411.17221, 2024c.
+Wang, X., Wang, Y., Wu, Y., Song, R., Tan, X., Chen, Z., Xu, H., and Sui, G. Tiva: Time-aligned video-to-audio generation. In ACMM, pp. 573-582, 2024d.
+Wang, X., Zhang, S., Yuan, H., Qing, Z., Gong, B., Zhang, Y., Shen, Y., Gao, C., and Sang, N. A recipe for scaling up text-to-video generation with text-free videos. In CVPR, 2024e.
+Wang, X., Zhuang, B., and Wu, Q. Modaverse: Efficiently transforming modalities with llms. In CVPR, pp. 26606-26616, 2024f.
+Wang, Y., Chen, X., Ma, X., Zhou, S., Huang, Z., Wang, Y., Yang, C., He, Y., Yu, J., Yang, P., et al. LAVIE: High-quality video generation with cascaded latent diffusion models. arXiv preprint arXiv:2309.15103, 2023.
+Wang, Y., Guo, W., Huang, R., Huang, J., Wang, Z., You, F., Li, R., and Zhao, Z. Frieren: Efficient video-to-audio generation with rectified flow matching. arXiv preprint arXiv:2406.00320, 2024g.
+Wu, H., Zhang, Z., Zhang, W., Chen, C., Liao, L., Li, C., Gao, Y., Wang, A., Zhang, E., Sun, W., et al. Q-Align: Teaching LMMs for visual scoring via discrete text-defined levels. In ICML.
+Wu, S., Fei, H., Qu, L., Ji, W., and Chua, T.-S. NExT-GPT: Any-to-any multimodal llm. In ICML, pp. 53366-53397, 2024.
+Xie, Z., Yu, S., He, Q., and Li, M. Sonicvisionlm: Playing sound with vision language models. In CVPR, pp. 26866-26875, 2024.
+Yi, M. and Li, M. Efficient video to audio mapper with visual scene detection. arXiv preprint arXiv:2409.09823, 2024.
+Ying, Z., Ghadiyaram, D., and Bovik, A. Telepresence video quality assessment. In ECCV, pp. 327-347. Springer, 2022.
+Zezario, R. E., Fu, S.-W., Fuh, C.-S., Tsao, Y., and Wang, H.-M. STOI-Net: A deep learning based non-intrusive speech intelligibility assessment model. In APSIPA ASC, pp. 482-486, 2020.
+
+Zhang, Y., Gu, Y., Zeng, Y., Xing, Z., Wang, Y., Wu, Z., and Chen, K. Foleycrafter: Bring silent videos to life with lifelike and synchronized sounds. arXiv preprint arXiv:2407.01494, 2024a.
+Zhang, Z., Wu, H., Zhou, Y., Li, C., Sun, W., Chen, C., Min, X., Liu, X., Lin, W., and Zhai, G. Lmm-pcqa: Assisting point cloud quality assessment with lmm. In ACMM, pp. 7783-7792, 2024b.
+
+Table 6. Categorization of prompts for generating AIGC videos.
+
+| Category | Description | Key Words |
+| --- | --- | --- |
+| Animals | Describe animal behaviors and movements | birds fly, dogs sleep, dogs sneeze, penguins walk, fishes float, bees fly, ducks swim, turtles swim, dragons fly, camels walk, cats meow |
+| Artificial | Include environmental sounds generated by human-made objects | flag waves, country road, restaurant, rocket goes off, fireworks explode, burn a candle, city road, fire |
+| Food | Describe food preparation and eating | pour whiskey, pack box, bread, coffee machine, make a salad, preparation of vegetable soup, sparkling champagne, cook pasta, coffee beans fall, pour into a cocktail glass |
+| People | Describe human action | cook, water ski, go boating, wash gravel, swim, walk, get out a lake, box, pour liquid, fight with swords |
+| Nature | Include natural environmental sounds | flood water, ice melt, fallen leaves, water plants, mountain river, forest, wind, beach, sea, snowfall, waterfall, rain |
+| Vehicles | Describe vehicles | car, war vehicle, boat, aircraft, bike |
+
+# A. More Details of AGAVQA-3k Construction.
+
+# A.1. Detailed Information of AIGC Video Collection
+
+We begin by collecting AIGC videos from public display websites, including Sora, KLing, and Gen-3. Additionally, we gather AIGC videos generated by AnimateDiff (Guo et al., 2024b), CogVideo (Hong et al., 2022), Gen3 (Gen, 2024), KLing (kli, 2024), LaVie (Wang et al., 2023), Pika (pik, 2023), and TF-T2V (Wang et al., 2024e) from the video generation benchmark Vbench (Huang et al., 2024). To diversify the types of audio sources in the videos, we also collect prompts containing audio information from FETV (Liu et al., 2024b). As shown in Tab. 6, these prompts are categorized into 6 types: animals, artificial, food, people, nature, and vehicles. We also list the key terms associated with each category. Finally, we use these prompts to generate AIGC videos via closed-source text-to-video platforms (Pika 1.0 (pik, 2023) and Gen3 (Gen, 2024)).
+
+# A.2. Detailed Information of VTA Methods
+
+We utilize 16 state-of-the-art VTA methods to construct the AGAVQA-3k dataset, encompassing both open-source and closed-source methods. For open-source models, we rely on official repositories and utilize default weights to generate audio. For closed-source models, we utilize publicly accessible APIs provided by open platforms. For models without publicly available code, we collect public AGAVs showcased on their GitHub pages.
+
+# A.2.1. DIFFUSION BASED VTA METHODS
+
+Diff-Foley. Diff-Foley (Luo et al., 2024) is a synchronized video-to-audio synthesis method that utilizes a latent diffusion model (LDM) to generate high-quality audio with improved temporal synchronization and audio-visual relevance.
+
+FoleyCrafter. FoleyCrafter (Zhang et al., 2024a) is a pluggable module integrated into a text-to-audio generator, enabling it to generate high-quality audio synchronized with video content. It primarily utilizes two key components: a semantic adapter for semantic alignment and a temporal controller for temporal synchronization.
+
+VTA-LDM. VTA-LDM (Hu et al., 2024) leverages the recently popular grounding segment anything model (Grounding SAM) to extract fine-grained semantic features from video frames and then uses an LDM to generate high-quality audio.
+
+SSV2A. SSV2A (Guo et al., 2024a) is a Sound Source-Aware Video-to-Audio generator, which can locally perceive multimodal sound sources from a scene through visual detection and cross-modality translation.
+
+ReWaS. ReWaS (Jeong et al., 2025) is a video-and-text-to-sound generation method, where video conditions control the text-to-audio generation model to create audio that matches the video.
+
+TIVA. TIVA (Wang et al., 2024d) is a novel time-aligned video-to-audio generator that jointly achieves semantic matching and temporal synchronization when generating audio. TIVA encodes the semantic information of the video and predicts its rhythmic layout, then utilizes this information as conditioning for a latent diffusion-based audio generator to produce the audio.
+
+V2A-Mapper. V2A-Mapper (Wang et al., 2024a) employs CLIP, CLAP, and AudioLDM to design a lightweight VTA method. It maps the latent space from the visual CLIP model to the auditory CLAP model and then uses the pre-trained AudioLDM to generate high-fidelity, visually-aligned sound.
+
+STAV2A. STAV2A (Ren et al., 2024) is a semantic and temporal aligned video-to-audio method that generates audio by conditioning on both text and video features. STAV2A utilizes an LDM initialized with text-to-audio prior knowledge and guided by cross-modal features from both text and video.
+
+V2A-SceneDetector. V2A-SceneDetector (Yi & Li, 2024) combines LDM with a scene detector to address the challenge of multiple visual scene transitions in videos. It can identify and handle multiple scenes in a video, generating corresponding audio for each.
+
+# A.2.2. TRANSFORMER BASED VTA METHODS
+
+Im2wav. Im2wav (Sheffer & Adi, 2023) is based on two transformer language models. First, a language model generates a low-level audio representation. Then, an additional language model upsamples the audio tokens to generate high-fidelity audio samples.
+
+SpecVQGAN. SpecVQGAN (Iashin & Rahtu, 2021) is a visually-induced audio generation method. It utilizes a transformer to sample a new spectrogram from a pre-trained spectrogram codebook, given the set of video features, thereby generating the corresponding audio.
+
+# A.2.3. LLM BASED VTA METHODS
+
+ModaVerse. ModaVerse (Wang et al., 2024f) is a multi-modal large language model capable of understanding and converting content across various modalities, including images, videos, and audio. We leverage its video-to-audio capability to add sound to silent videos.
+
+SVA. SVA (Chen et al., 2024) is a semantically consistent video-to-audio generation framework. SVA leverages the capabilities of multi-modal large language models (MLLMs) to understand the video semantics from key frames and generate creative audio plans. These plans are then used as prompts for text-to-audio models, simultaneously generating sound effects and background music.
+
+SonicVisionLM. SonicVisionLM $^{11}$ (Xie et al., 2024) is a new framework designed to generate a wide range of sound effects by leveraging vision-language models (VLMs). It uses VLMs to recognize events in the video and generate sounds that match the video content.
+
+# A.2.4. FLOW MATCHING BASED VTA METHOD
+
+Frieren. Frieren $^{12}$ (Wang et al., 2024g) is a VTA model based on rectified flow matching. Frieren generates high-quality audio in just a few, or even a single, sampling step through reflow and a guided vector field distillation process.
+
+# A.2.5. PROPRIETARY VTA METHOD
+
+ElevenLabs. ElevenLabs $^{13}$ (ele, 2023) utilizes the ElevenLabs text-to-sound-effects API. It extracts sound source information from key frames using ChatGPT-4o and then inputs this data into ElevenLabs to generate the corresponding audio.
+
+
+Figure 4. An example of the scoring interface. The audio-visual setup consists of two 1080p monitors and a pair of headphones. One monitor displays the scoring interface, while the other is used for subjects to watch the AGAV videos and listen to the audio through the headphones.
+
+
+
+# B. More Details of Human Evaluation
+
+# B.1. Scoring Dimensions
+
+During the subjective scoring process, participants are asked to evaluate the AGAVs from three dimensions: audio quality, A/V content consistency, and overall A/V quality. These three dimensions provide a comprehensive and detailed assessment of the AGAVs. Audio quality mainly focuses on the naturalness and clarity of the audio, minimizing the influence of the video content. Higher scores indicate that the audio is more natural, clear, and free of distortion, while lower scores reflect that distortions and noise in the sound degrade the listener's auditory experience. The scoring range is from 0 to 5, with scores accurate to one decimal place. The scoring criteria are as follows:
+
+- 4–5 (Excellent): The audio is natural, clear, and free of distortion.
+- 3–4 (Good): The audio is generally clear but contains slight distortion or noise.
+- 2–3 (Fair): The audio is somewhat muffled, with noticeable distortion or noise.
+- 1–2 (Poor): The audio is quite muffled, with severe distortion and noise that significantly affect the listening experience.
+- 0–1 (Bad): The audio is completely distorted.
+
+A/V content consistency is independent of audio quality and primarily focuses on whether the sounds or music in the audio align with the video content. Higher scores indicate a strong correlation between the audio and the video content, while lower scores indicate a low correlation, with the audio lacking the sound elements intended to be conveyed by the video. The scoring criteria are as follows:
+
+- 4–5 (Excellent): The audio content is highly consistent with the video content.
+- 3–4 (Good): The audio content is generally consistent with the video content.
+- 2–3 (Fair): The audio content has limited correlation with the video content.
+- 1–2 (Poor): The audio content is mostly inconsistent with the video content.
+- 0–1 (Bad): The audio is largely distorted, and the audio content is completely inconsistent with the video content.
+
+Overall A/V quality mainly focuses on the overall perceptual experience of the audio and video, including audio quality, A/V content consistency, and A/V temporal synchronization. Higher scores indicate clear and natural audio, with both content and timing highly consistent with the video. Lower scores indicate obvious audio distortion and low correlation with the video content. The scoring criteria are as follows:
+
+
+Figure 5. Distribution of MOSs across three dimensions in the AGAVQA-MOS subset: (a) audio quality, (b) A/V content consistency, (c) overall A/V quality.
+
+- 4–5 (Excellent): The overall quality of the audio and video is excellent.
+- 3–4 (Good): The overall quality of the audio and video is good, with slight distortion.
+- 2–3 (Fair): The overall quality of the audio and video is average, with noticeable distortion.
+- 1–2 (Poor): The overall audio and video quality is poor, with severe distortion or mostly inconsistent content.
+- 0–1 (Bad): The overall audio and video quality is very poor, with completely inconsistent content and severely distorted audio or video.
+
+We developed a subjective evaluation guideline that includes the scoring criteria for the three dimensions mentioned above. Before each participant begins the subjective experiment, we guide them through reading this document to familiarize themselves with the scoring criteria for each dimension, thereby ensuring accuracy and consistency of the ratings across different participants.
+
+# B.2. Scoring Interface
+
+An example of the scoring interface is shown in Fig. 4. We designed the scoring interface using the Python Tkinter package. Before scoring, each subject is prompted to enter their username, which allows the retrieval of their scoring progress and subsequent presentation of the scoring interface. The interface includes three continuous quality rating bars, three navigation buttons, and a display of the number of AGAVs that have been scored. Each rating bar is labeled with a 1–5 Likert scale, and participants can drag the slider to assign scores accurate to one decimal place. The navigation buttons, including "Prev", "Repeat", and "Next", enable participants to modify the previous AGAV's score, replay the current AGAV, and submit their score to proceed to the next AGAV, respectively, facilitating efficient scoring. All AGAVs are viewed at their original resolution without any scaling or cropping.
+
+# B.3. Evaluation Environment
+
+The official testing phase was conducted in a controlled laboratory setting with normal indoor lighting and a quiet environment. Subjects were seated at a comfortable distance of approximately $60~\mathrm{cm}$ from the screen. We used a Redmi 23.8-inch monitor with a resolution of $1920\times 1080$ and Sony WH-1000XM4 headphones to minimize distortions introduced by the viewing and listening devices during human evaluation.
+
+# B.4. Subject Selection
+
+We invited subjects familiar with AVQA and AGAV to participate in on-site training sessions. Detailed explanations of the scoring criteria for each dimension were provided, along with additional AGAV samples for practice. Expert reviewers then evaluated the annotations and selected 15 qualified subjects. Each subject rated all 3,088 samples in the AGAVQA-MOS subset in a randomized order. To prevent fatigue, each subject was limited to rating a maximum of 60 samples per day, completing the entire task in approximately two months. This protocol ensured the validity and robustness of each subject's ratings.
+
+
+Figure 6. Comparison of mean MOSs of different VTA methods across various audio sources in videos. (a) Results on audio quality, (b) Results on A/V content consistency, (c) Results on overall A/V quality.
+
+Table 7. Standard deviation of subjective scores for each category in the AGAVQA-MOS subset.
+
+| Animal | Water | People | Vehicle | Object | Scenery | Sea | Fantasy | Fire | Instrument | Cooking |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| 12.56 | 12.16 | 12.20 | 11.79 | 12.57 | 12.74 | 12.37 | 13.11 | 12.35 | 11.93 | 12.23 |
+
+# B.5. More Subjective Scores Analysis
+
+In the AGAVQA-MOS subset, we collected MOSs across three dimensions. The SRCC between audio quality and A/V content consistency is 0.6860, indicating that the two dimensions are only moderately correlated and thus capture relatively independent aspects of quality. The SRCC between audio quality and overall A/V quality is 0.7876, and between content consistency and overall quality it is 0.7926, suggesting that overall quality is influenced by both audio quality and content consistency.
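+
+The SRCC figures above can be reproduced from raw scores with little code. Below is a minimal pure-Python sketch of the Spearman rank correlation (the function name is ours, and it assumes no tied values, which suffices for continuous MOSs):
+
+```python
+def srcc(x, y):
+    """Spearman rank correlation: the Pearson correlation of the ranks.
+    Minimal version that assumes no tied values."""
+    def ranks(v):
+        # Rank of each element = its position in the sorted order.
+        order = sorted(range(len(v)), key=lambda i: v[i])
+        r = [0] * len(v)
+        for rank, i in enumerate(order):
+            r[i] = rank
+        return r
+    rx, ry = ranks(x), ranks(y)
+    n = len(x)
+    mx, my = sum(rx) / n, sum(ry) / n
+    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
+    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
+    sy = sum((b - my) ** 2 for b in ry) ** 0.5
+    return cov / (sx * sy)
+
+print(srcc([1, 2, 3, 4], [10, 20, 30, 40]))  # monotone scores -> 1.0
+```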
+
+The distribution of MOSs for each dimension is shown in Fig. 5. MOSs are evenly distributed between 20 and 80 for all three dimensions. Additionally, as illustrated in Fig. 6, we compare the average MOSs of different VTA methods across various audio sources in videos. The audio generated by SVA consistently demonstrates superior quality across all source types. Most VTA methods show better audio quality in the instrument category. In the A/V content consistency dimension, most VTA methods struggle with the fantasy, cooking, people, and scenery categories, where it is challenging to generate audio that aligns with the content of these four types. In the overall A/V quality dimension, VTA methods underperform in the people and cooking categories.
+
+The range of MOSs is [0, 100]. We categorize the video content into 11 main audio sources and then calculate the standard deviation of the overall quality scores among 15 subjects. We compute the average standard deviation for each category, as shown in Tab. 7. The "Fantasy" category shows the highest standard deviation, as it represents unreal scenarios, leading to more diverse interpretations among subjects.
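+
+The per-category figures in Tab. 7 follow directly from this procedure. A numpy sketch with synthetic data (the function name and toy scores are illustrative, not the actual annotations):
+
+```python
+import numpy as np
+
+def mean_category_std(scores_by_category):
+    """For each category: per-AGAV standard deviation of the subjects'
+    overall quality scores (range [0, 100]), averaged over the
+    category's AGAVs. Input maps a category name to an array of shape
+    (num_agavs, num_subjects)."""
+    return {
+        cat: np.asarray(scores, dtype=float).std(axis=1).mean()
+        for cat, scores in scores_by_category.items()
+    }
+
+# Two toy categories: identical ratings give std 0; spread ratings do not.
+demo = {
+    "Animal": [[50.0] * 15, [60.0] * 15],      # full subject agreement
+    "Fantasy": [[40.0] * 7 + [60.0] * 8] * 3,  # subjects disagree
+}
+stds = mean_category_std(demo)
+print(stds["Animal"], round(stds["Fantasy"], 2))  # -> 0.0 9.98
+```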
+
+# C. More Details of Experiments
+
+# C.1. Details of Datasets
+
+We conduct experiments on our proposed AGAVQA-3k dataset, as well as on the TTA and TTM datasets. The TTA dataset (Deshmukh et al., 2024) generates 500 audio samples from 100 prompts using 5 text-to-audio generation methods. Subjects were then invited to rate the quality of the audio and its relevance to the provided description. Similarly, the TTM dataset (Deshmukh et al., 2024) generates 500 music samples from 100 prompts using 5 text-to-music generation methods. Subjects were asked to rate the quality of the music and its relevance to the provided description.
+
+The TTM dataset does not include an evaluation of music's aesthetic quality. Since music aesthetic quality assessment is heavily influenced by personal preferences and the subject's taste in different music styles, it is difficult to achieve an objective evaluation. Therefore, in the AGAVQA-MOS, TTA, and TTM datasets, the audio quality dimension primarily focuses on the quality and realism of the audio.
+
+# C.2. Details of Loss Function
+
+We first pre-train AGAV-Rater using text-defined levels. The model predicts a fixed-length text output of 1 token, representing a quality level from "Excellent" to "Bad". During pre-training, the cross-entropy loss is used as the objective function, which can be
+
+Table 8. Instructions for Audio-Visual LMMs to Obtain Quality Levels for AGAV, TTA, and TTM.
+
+| Input Type | Dimension | Instruction |
+| --- | --- | --- |
+| AGAV | Audio Quality | `<video><audio>` Can you evaluate the audio quality in terms of quality and realism? Response with excellent, good, fair, bad, or poor. |
+| AGAV | A/V Content Consistency | `<video><audio>` Can you evaluate the audio and video content consistency? Response with excellent, good, fair, bad, or poor. |
+| AGAV | Overall A/V Quality | `<video><audio>` Can you evaluate the overall audio-visual quality in terms of audio quality, audio and video content consistency? Response with excellent, good, fair, bad, or poor. |
+| TTA | Audio Quality | `<audio>` Can you evaluate the audio quality in terms of quality and realism? Response with excellent, good, fair, bad, or poor. |
+| TTA | Audio-Text Consistency | `<audio>` The text is `<text>` Can you evaluate the audio-text content consistency of the audio and text? Response with excellent, good, fair, bad, or poor. |
+| TTM | Audio Quality | `<music>` Can you evaluate the music quality in terms of quality and realism? Response with excellent, good, fair, bad, or poor. |
+| TTM | Music-Text Consistency | `<music>` The text is `<text>` Can you evaluate the music-text content consistency of the music and text? Response with excellent, good, fair, bad, or poor. |
+
+defined as:
+
+$$
+L_{CE} = -\frac{1}{N} \sum_{n=1}^{N} \sum_{c=1}^{C} y_{n,c} \log\left(\hat{y}_{n,c}\right), \tag{2}
+$$
+
+where $N$ is the number of AGAVs in the batch, $C$ is the vocabulary size, $y_{n,c}$ is the ground-truth label for the $n$ -th AGAV at vocabulary position $c$ , $\hat{y}_{n,c}$ is the predicted probability output by the model at position $c$ for the $n$ -th AGAV. For fine-tuning, we train AGAV-Rater using numerical scores and employ the Pearson Linear Correlation Coefficient (PLCC) loss as the objective. The PLCC loss is defined as:
+
+$$
+L = \left(1 - \frac{\langle \widehat{s} - \mathrm{mean}(\widehat{s}),\, s - \mathrm{mean}(s) \rangle}{\left\|\widehat{s} - \mathrm{mean}(\widehat{s})\right\|_{2} \left\|s - \mathrm{mean}(s)\right\|_{2}}\right) / 2, \tag{3}
+$$
+
+where $s$ and $\widehat{s}$ are the vectors of MOSs and predicted scores of AGAVs in a batch respectively, $\langle \cdot \rangle$ represents the inner product of two vectors, $\| \cdot \|$ denotes the norm operator for a vector, and mean is the average operator for a vector.
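+
+As a concrete illustration of Eq. 3, here is a minimal numpy sketch of the PLCC loss (the function name and toy batches are ours):
+
+```python
+import numpy as np
+
+def plcc_loss(pred, mos):
+    """PLCC loss of Eq. 3: (1 - Pearson correlation) / 2.
+
+    pred: predicted scores for a batch of AGAVs
+    mos:  ground-truth mean opinion scores for the same batch
+    """
+    pred = np.asarray(pred, dtype=float)
+    mos = np.asarray(mos, dtype=float)
+    p = pred - pred.mean()  # center predictions
+    m = mos - mos.mean()    # center MOSs
+    # Inner product over the product of L2 norms = Pearson correlation.
+    plcc = np.dot(p, m) / (np.linalg.norm(p) * np.linalg.norm(m))
+    return (1.0 - plcc) / 2.0
+
+# Perfectly correlated predictions give loss 0; anti-correlated give 1.
+print(plcc_loss([1, 2, 3], [10, 20, 30]))  # -> 0.0
+print(plcc_loss([3, 2, 1], [10, 20, 30]))  # -> 1.0
+```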
+
+# C.3. Detailed Information of Compared Methods
+
+For the multi-dimensional scoring task, we selected 16 compared methods, covering four types: audio-visual LMMs, AQA, AVQA, and audio-video (text) alignment.
+
+Audio-Visual LMMs. We chose 4 latest open-source audio-visual LMM models, including PandaGPT (Su et al., 2023), NextGPT (Wu et al., 2024), VITA-1.0 (Fu et al., 2024), and VideoLLaMA2 (Cheng et al., 2024). We rely on official repositories and use default weights to initialize the models. Then, we input AGAVs, TTAs, and TTMs into the models, sequentially asking about the quality of each dimension and computing the final predicted scores by Eq. 1. The format of the instruction for each dimension is shown in Tab. 8.
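+
+Eq. 1 is defined in the main text and not reproduced here; the usual recipe for turning level-token probabilities into a scalar score is a softmax-weighted expectation over the five quality levels. A sketch of that recipe (the function name and logit values are invented for illustration):
+
+```python
+import math
+
+# Standard mapping from quality-level words to numeric weights.
+LEVELS = {"excellent": 5, "good": 4, "fair": 3, "poor": 2, "bad": 1}
+
+def level_logits_to_score(logits):
+    """Softmax over the level-token logits, then the expected level
+    value. `logits` maps each level word to the LMM's logit for that
+    token (values below are hypothetical)."""
+    exps = {k: math.exp(v) for k, v in logits.items()}
+    z = sum(exps.values())
+    return sum(LEVELS[k] * e / z for k, e in exps.items())
+
+score = level_logits_to_score(
+    {"excellent": 2.0, "good": 1.0, "fair": 0.0, "poor": -1.0, "bad": -2.0}
+)
+print(round(score, 2))  # -> 4.45, a score between "good" and "excellent"
+```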
+
+AQA. We selected 4 popular AQA methods, including MOSNet (Lo et al., 2019), STOI-Net (Zezario et al., 2020), NISQA (Mittag et al., 2021), and PAM (Deshmukh et al., 2024). PAM is based on a large language model and does not require training, allowing for direct testing. For all other models, we initialize them with default weights and train them on the AGAVQA-MOS, TTA, and TTM datasets. Since these AQA methods predict a single quality score based only on audio information, we modified the models to output multi-dimensional scores. Specifically, for the AGAVQA-MOS subset, we adjust the final fully connected layer output dimension from 1 to 3, and for the TTA and TTM datasets, we change the output dimension from 1 to 2, allowing the models to predict scores across multiple dimensions.
+
+AVQA. We selected 3 latest deep learning-based AVQA methods: DNN-RNT (Cao et al., 2023a), DNN-SND (Cao et al., 2023a), and GeneralAVQA (Cao et al., 2023b). DNN-RNT and DNN-SND are used to predict quality scores for compressed distorted A/V content, while GeneralAVQA is designed for predicting quality scores of real-world distorted audio-visual content, which includes issues such as jitter, overexposure, and motion blur caused during user capturing. All 3 AVQA methods extract features separately from both video and audio, then fuse these features and output a one-dimensional quality score via a fully connected layer. We modified the final fully connected layer's output dimension to 3, enabling the three AVQA methods to be trained on the three-dimensional MOS scores in the AGAVQA-MOS subset.
+
+Table 9. Inference latency and throughput of AGAV-Rater on videos on an RTX 4090. As videos have variable lengths, we set the batch size to 1 to avoid padding costs.
+
+| Video Length (sec) | 3 | 5 | 7 | 9 | 11 |
+| --- | --- | --- | --- | --- | --- |
+| Latency (ms) | 157 | 204 | 220 | 256 | 332 |
+| Throughput (video/sec) | 6.36 | 4.91 | 4.55 | 3.90 | 3.01 |
+
+Table 10. 230 AIGC videos content distribution and AGAV-Rater filtering accuracy for each category. We collected these AIGC videos and then used ElevenLabs to generate AGAVs, which were subsequently filtered by AGAV-Rater to select high-quality AGAVs in Section 5.6.
+
+| Metric | Animal | Water | People | Vehicle | Object | Scenery | Sea | Fantasy | Fire | Instrument | Cooking | All |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Video Number | 41 | 15 | 20 | 22 | 31 | 19 | 20 | 11 | 15 | 24 | 12 | 230 |
+| AGAV-Rater filtering accuracy | 0.78 | 0.87 | 0.85 | 0.77 | 0.81 | 0.79 | 0.80 | 0.82 | 0.80 | 0.83 | 0.75 | 0.80 |
+
+Audio-Video Alignment. AVID-CMA (Morgado et al., 2021) is a self-supervised learning approach that learns audio-visual representations from video and audio, achieving highly competitive performance when fine-tuned on action recognition tasks. VAST (Chen et al., 2023) is an omni-modality video-text foundational model capable of processing vision, audio, and subtitles. VALOR (Liu et al., 2024a) is a Vision-Audio-Language Omni-peRception pretraining model that projects vision, language, and audio into a shared common space, enabling alignment across vision-language, audio-language, and audiovisual-language domains. All three models align video and audio at the semantic level, so we fine-tune them on the AGAVQA-MOS subset. We use their encoders to extract audio and video features, then concatenate them and apply a fully connected layer to regress the three-dimensional scores.
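+
+The fine-tuning head described above (concatenate the modality features, then a fully connected layer regressing the three-dimensional scores) can be sketched as follows; all dimensions and weights here are illustrative placeholders, not the actual fine-tuned parameters:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+audio_feat = rng.standard_normal(128)   # hypothetical audio embedding
+video_feat = rng.standard_normal(256)   # hypothetical video embedding
+
+# Concatenate the modality features and regress the three scores with
+# one fully connected layer (weights are random placeholders; in
+# practice they are fine-tuned on the AGAVQA-MOS subset).
+feat = np.concatenate([audio_feat, video_feat])   # shape (384,)
+W = rng.standard_normal((3, feat.size)) * 0.01    # FC weight matrix
+b = np.zeros(3)                                   # FC bias
+scores = W @ feat + b                             # (audio, consistency, overall)
+print(scores.shape)  # -> (3,)
+```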
+
+Audio(Music)-Text Alignment. CLAP (Elizalde et al., 2023) utilizes two encoders and contrastive learning to map audio and text descriptions into a shared multimodal space. TTM-Retrieval (Doh et al., 2023) learns a text-music representation for universal text-to-music retrieval. CLAP, TTM-Retrieval, VAST, and VALOR are all capable of aligning audio and text features. We use their encoders to extract audio and text features, and then apply a fully connected layer to regress and predict audio quality and content consistency scores. We load their pre-trained weights and fine-tune them separately on the TTA and TTM datasets.
+
+# C.4. Details of Audio Preprocessing
+
+AGAV-Rater uses default parameters from VideoLLaMA2 for audio preprocessing. Assuming $T$ video frames are extracted, the steps are:
+
+1. Divide the audio into $T$ segments.
+2. Concatenate all segments, then crop or zero-pad to a fixed length.
+3. Transform into fbank spectrograms with 128 frequency bins.
+4. Use BEATs and an MLP block to extract features from the spectrograms.
+5. Concatenate audio, video, and text features, and input them into the LLM.
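+
+Steps 1 and 2 can be sketched in numpy (the function name and fixed length are illustrative; the fbank/BEATs stages of steps 3-4 are omitted):
+
+```python
+import numpy as np
+
+def preprocess_audio(waveform, num_frames, fixed_len):
+    """Split the audio into `num_frames` segments aligned with the
+    extracted video frames, concatenate them, then crop or zero-pad
+    to `fixed_len` samples."""
+    # 1. Divide the audio into T segments, one per extracted video frame.
+    segments = np.array_split(waveform, num_frames)
+    # 2. Concatenate all segments, then crop or zero-pad to a fixed length.
+    audio = np.concatenate(segments)
+    if len(audio) >= fixed_len:
+        audio = audio[:fixed_len]
+    else:
+        audio = np.pad(audio, (0, fixed_len - len(audio)))
+    return audio
+
+x = preprocess_audio(np.ones(1000), num_frames=8, fixed_len=1200)
+print(x.shape)  # -> (1200,), with a zero-padded tail
+```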
+
+# C.5. More Details of Enhancing ElevenLabs Results via AGAV-Rater
+
+In Section 5.6, we collected 230 silent AIGC videos and used ElevenLabs to generate audio, aiming to verify that AGAV-Rater can assist ElevenLabs in selecting high-quality audio for users. We conducted a statistical analysis of the content distribution of these 230 AIGC videos, with the results presented in Tab. 10. The analysis shows that the video content is quite diverse, enabling a comprehensive evaluation of AGAV-Rater's performance across different types of video content. We further present the accuracy of AGAV-Rater in identifying higher-quality AGAVs across these categories. As shown in Tab. 10, AGAV-Rater achieves over $75\%$ accuracy in each category.
+
+# C.6. Cost Analysis of AGAV-Rater
+
+In Tab. 9, we report the inference latency of AGAV-Rater on AGAVs. On a single RTX 4090 GPU, the model can score 6.36 three-second videos or 3.01 eleven-second videos per second. With performance roughly $20\times$ faster than real time, the low latency of AGAV-Rater paves the way for broader real-world applications of LMM-based A/V scoring.
\ No newline at end of file
diff --git a/agavrateradaptinglargemultimodalmodelforaigeneratedaudiovisualqualityassessment/images.zip b/agavrateradaptinglargemultimodalmodelforaigeneratedaudiovisualqualityassessment/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..e754e59bcf8ad179e156fde146447c19096dc924
--- /dev/null
+++ b/agavrateradaptinglargemultimodalmodelforaigeneratedaudiovisualqualityassessment/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fbcd38ccafae858cd11c08566276db9c0b647bb8a8e75a6d4b7b37258df6a56d
+size 1451824
diff --git a/agavrateradaptinglargemultimodalmodelforaigeneratedaudiovisualqualityassessment/layout.json b/agavrateradaptinglargemultimodalmodelforaigeneratedaudiovisualqualityassessment/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..45149d02decef11e66d251dc132cf12ce48bac96
--- /dev/null
+++ b/agavrateradaptinglargemultimodalmodelforaigeneratedaudiovisualqualityassessment/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0639ee951d92d8c599e15fa7bcbba50d588957466ff00764e3c60b1c30d52ee8
+size 578618
diff --git a/aiforglobalclimatecooperationmodelingglobalclimatenegotiationsagreementsandlongtermcooperationinricen/3c4322e9-f766-43cb-9ce1-a347532b364e_content_list.json b/aiforglobalclimatecooperationmodelingglobalclimatenegotiationsagreementsandlongtermcooperationinricen/3c4322e9-f766-43cb-9ce1-a347532b364e_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..d6a9ca98883239eccf3340e3ffc741f8a8f4f397
--- /dev/null
+++ b/aiforglobalclimatecooperationmodelingglobalclimatenegotiationsagreementsandlongtermcooperationinricen/3c4322e9-f766-43cb-9ce1-a347532b364e_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ff45594e2d808013152c066e41f7648c565c8e4f4cb11e728b76f472493a641c
+size 196079
diff --git a/aiforglobalclimatecooperationmodelingglobalclimatenegotiationsagreementsandlongtermcooperationinricen/3c4322e9-f766-43cb-9ce1-a347532b364e_model.json b/aiforglobalclimatecooperationmodelingglobalclimatenegotiationsagreementsandlongtermcooperationinricen/3c4322e9-f766-43cb-9ce1-a347532b364e_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..0713193bcc37c659e85200544c0df6c8ef679622
--- /dev/null
+++ b/aiforglobalclimatecooperationmodelingglobalclimatenegotiationsagreementsandlongtermcooperationinricen/3c4322e9-f766-43cb-9ce1-a347532b364e_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fdf2a0a4e8e87275e722e5ce93d47f0d4e806bf252bda5dc2ec82ac2ede628d6
+size 242915
diff --git a/aiforglobalclimatecooperationmodelingglobalclimatenegotiationsagreementsandlongtermcooperationinricen/3c4322e9-f766-43cb-9ce1-a347532b364e_origin.pdf b/aiforglobalclimatecooperationmodelingglobalclimatenegotiationsagreementsandlongtermcooperationinricen/3c4322e9-f766-43cb-9ce1-a347532b364e_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..90fe8d9e8b4f3291355877d6c3fe40ecd9755e67
--- /dev/null
+++ b/aiforglobalclimatecooperationmodelingglobalclimatenegotiationsagreementsandlongtermcooperationinricen/3c4322e9-f766-43cb-9ce1-a347532b364e_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4c3fcc0cec2b89819e2068517313b4fccf94caec3185ffd3b0505164b1155b33
+size 2739634
diff --git a/aiforglobalclimatecooperationmodelingglobalclimatenegotiationsagreementsandlongtermcooperationinricen/full.md b/aiforglobalclimatecooperationmodelingglobalclimatenegotiationsagreementsandlongtermcooperationinricen/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..9f52dc7dba3431e8ac8683301f3d9eb9e4b1c184
--- /dev/null
+++ b/aiforglobalclimatecooperationmodelingglobalclimatenegotiationsagreementsandlongtermcooperationinricen/full.md
@@ -0,0 +1,910 @@
+# AI for Global Climate Cooperation: Modeling Global Climate Negotiations, Agreements, and Long-Term Cooperation in RICE-N
+
+Tianyu Zhang $^{*12}$ Andrew Williams $^{*12}$ Phillip Wozny $^{*3}$ Kai-Hendrik Cohrs $^{*4}$ Koen Ponse $^{5}$ Marco Jiralerspong $^{12}$ Soham Rajesh Phade $^{6}$ Sunil Srinivasa $^{7}$ Lu Li $^{8}$ Yang Zhang $^{9}$ Prateek Gupta $^{10}$ Erman Acar $^{11}$ Irina Rish $^{1212}$ Yoshua Bengio $^{1212}$ Stephan Zheng $^{13}$
+
+# Abstract
+
+Global cooperation on climate change mitigation is essential to limit temperature increases while supporting long-term, equitable economic growth and sustainable development. Achieving such cooperation among diverse regions, each with different incentives, in a dynamic environment shaped by complex political and economic factors, without a central authority, is a profoundly challenging game-theoretic problem. This article introduces RICE-N, a multi-region integrated assessment model that simulates the global climate, economy, and climate negotiations and agreements. RICE-N uses multi-agent reinforcement learning (MARL) to incentivize agents to develop strategic behaviors based on the environmental dynamics and the actions of others. We present two negotiation protocols: (1) Bilateral Negotiation, an example protocol and (2) Basic Club, inspired by Climate Clubs and the carbon border adjustment mechanism (Nordhaus, 2015; Commission, 2022). When we compare their impact against a no-negotiation baseline with various mitigation strategies, we find that both protocols significantly reduce temperature growth at the cost of a minor drop in production while ensuring a more equitable distribution of the emissions reduction costs.
+
+# 1 Introduction
+
+The latest Intergovernmental Panel on Climate Change (IPCC) report emphasizes the urgent need for immediate
+
+*Equal contribution $^{1}$ Mila - Quebec AI Institute $^{2}$ DIRO, Université de Montréal $^{3}$ Vrije Universiteit Amsterdam $^{4}$ Universitat de València $^{5}$ Leiden University $^{6}$ Wayve $^{7}$ NVIDIA $^{8}$ University of Pennsylvania $^{9}$ Bank of Canada $^{10}$ University of Oxford $^{11}$ University of Amsterdam $^{12}$ CIFAR $^{13}$ Asari AI. Correspondence to: Tianyu Zhang .
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+action, warning that it is "now or never" to avert a climate disaster (Pörtner et al., 2022). Ecosystems are drastically changing: the Amazon rainforest is receding (Lovejoy & Nobre, 2018) and polar ice sheets are melting (Boers & Rypdal, 2021; DeConto et al., 2021). Extreme weather events, including the recent increase in coastal flooding and forest fires are unequivocal warning signs (Kundzewicz, 2016; Schmidt et al., 2022). These developments are increasingly being attributed to climate change and driving towards a system-wide tipping point (van Oldenborgh et al., 2021; Stott et al., 2016).
+
+Climate change is a global issue impacting all. In response, public and private financing have driven technological innovation (e.g. in renewable energy) and community initiatives for systemic change. However, mitigation investments vary across countries due to social and economic factors. For example, developing nations may prioritize basic needs, while developed nations likely have more resources to address climate impacts.1 This creates a "tragedy of the commons," where self-interest can lead to harmful outcomes for all (Gardiner, 2001).
+
+As such, achieving and maintaining global cooperation is crucial to achieve the Paris Agreement's long-term goal of limiting the global temperature rise above pre-industrial levels to well below $2^{\circ}\mathrm{C}$ (DeConto et al., 2021). At the same time, it is important to maintain economic development, including growth and inequality reduction. For instance, international trade treaties, foreign investment, and technology transfer can help developing countries meet net-zero targets while supporting global economic growth. Such cooperation could be fostered through climate clubs, which tackle barriers to climate action (Nordhaus, 2015).
+
+From a modeling perspective, achieving and maintaining global cooperation poses a complex game-theoretic problem involving cooperation, communication, and competition. It can be modeled with $n$ strategic agents, each representing a region or nation seeking to maximize their own utility through policies aimed at their own socio-economic and
+
+climate goals, which may conflict with those of the other agents. These agents interact through trade, diplomacy, or foreign aid and investments, with cooperation manifesting through mutual negotiation and agreements.
+
+A key issue is the lack of central authority to enforce cooperation or compliance with the agreements in the real world. Therefore, it is essential to design negotiations and agreements that promote sustained cooperation in mitigating climate change while allowing all parties to achieve their individual policy goals.
+
+Such game-theoretic problems present unresolved technical challenges. For instance, a key analysis in the 2022 IPCC report predicts climate change under five different so-called Shared Socio-Economic Pathways (SSPs), each based on a set of predefined climate-economic policies for each global region (Pörtner et al., 2022). However, a key limitation is that it is unclear whether these policies would be implemented by utility-maximizing actors, making it uncertain how likely these scenarios are to occur or how robust they are to changes in agent behavior over time.
+
+Climate-economic policy tradeoffs are typically modeled using integrated assessment models (IAMs), which quantify the effects of economic activity and $\mathrm{CO}_{2}$ emissions on global temperatures and long-term economic development. A pioneering example is the Dynamic Integrated model of Climate and Economy (DICE) from (Nordhaus, 2007), which models the links among climate and economic factors, such as population growth, technological change, $\mathrm{CO}_{2}$ emissions, global temperatures, and economic damages. DICE uses a single global economy. The Regional Integrated Model of Climate and Economy (RICE) extends DICE to multiple regions (Nordhaus & Yang, 1996b) and can include tariffs and trade (Nordhaus, 2015; Lessmann et al., 2009). While widely used, RICE has its limitations, such as unrealistic assumptions, lack of distributional analysis, simplistic definition of regional interaction, and no consideration of uncertainty (Pindyck, 2013; Farmer et al., 2015; Gazzotti, 2022). Thus, RICE needs significant modification in order to capture the strategic behavior in climate negotiations. To address these issues, we draw inspiration from agent-based modeling (Bonabeau, 2002) as a bottom-up modeling framework and advances in MARL to identify effective policies (Zheng et al., 2022a) and train strategic agents (Silver et al., 2016; Vinyals et al., 2019).
+
+Our main contributions are as follows:
+
+RICE-N Integrated Assessment Model We introduce RICE-N, an integrated climate-economic model based on RICE (Nordhaus & Yang, 1996a). Designed as a simulation tool for climate negotiations, RICE-N incorporates MARL to model realistic interactions between agents. Furthermore, RICE-N is modular, allowing flexibility to accommodate
+
+various climate-economic model configurations.
+
+Negotiation Protocols As RICE-N is built for modeling negotiations, we develop two novel protocols: (1) Bilateral Negotiation, a baseline protocol that provides simple inter-agent communication; (2) Basic Club, a protocol inspired by actual climate economic policy (Nordhaus, 2015; Commission, 2022) to foster burden sharing among agents.
+
+Analyzing Results To evaluate the Basic Club and Bilateral Negotiation protocols, we compare the climate-economic performance of agents trained with and without these negotiation protocols. Metrics include emissions and inequality in emission-reduction costs as a percentage of GDP. Compared to agents trained without negotiation protocols, agents trained under the negotiation protocols reduce temperature growth at the cost of a minor drop in production while ensuring a more equitable distribution of emission-reduction costs. Our code samples are available here: https://github.com/mila-iqia/climate-cooperation-competition
+
+# 2 Related Work
+
+Previous work has studied climate change through various lenses: political economy and negotiations (Chan et al., 2022; Bakaki, 2022), public perception and institutional dynamics (Moore et al., 2022), and coalition formation (Zenker, 2019). Although IAMs have been used to study the impacts of political negotiation (Rochedo et al., 2018), the game-theoretic aspects of climate cooperation using machine learning and calibrated IAMs remain unexplored.
+
+Climate negotiations have been analyzed using game theory, from simple prisoner's dilemma models (DeCanio & Fremstad, 2013) to more complex bargaining games with learning (Smead et al., 2014; Greeven et al., 2016). However, these simplified models often lack crucial real-world features like multilateral settings, strategic behavior with multiple goals, evolving dynamics, and agent heterogeneity (Madani, 2013).
+
+Recent advances in multi-agent reinforcement learning (MARL) offer new approaches to studying strategic behavior in complex environments (Shoham & Leyton-Brown, 2008). While MARL has been applied to climate-related problems like HVAC optimization (Mai et al., 2024) and theoretical cooperation studies (Jaques et al., 2019), it has not yet been extended to rich, calibrated climate-economic simulations. Our work addresses this gap by combining MARL with detailed climate-economic modeling. For an extensive review of related work, please refer to Appendix A.
+
+# 3 The RICE-N Integrated Assessment Model
+
+We introduce RICE-N, an IAM that further augments RICE with a framework for negotiation protocols, and also includes international trade and tariffs, following (Lessmann et al.,
+
+[Figure 1 schematic. Observations panel: temperature, carbon mass, population, capital, production, consumption, trades. Actions panel: mitigation rate, savings rate, tariffs, import bids, export limits. Top: the no-negotiation baseline, with a learning loop only. Bottom: a negotiation protocol adds a negotiation stage to the learning loop.]
+Figure 1. Schematic overview of how negotiation can lead to better outcomes in RICE-N. Each region (agent) uses a policy model to make climate, economic, and trade decisions. For clarity, we show the flow of information and actions for a single agent only. Agents can negotiate, but a negotiation protocol must be implemented, such as the example negotiation protocols in Section 4. At a high-level, at each timestep, each policy chooses climate and economic actions based on the observations they receive. (TOP) If there is no negotiation protocol, agents immediately decide their actions, which leads to high climate damage as can be seen in Figure 3. (BOTTOM) If a negotiation protocol has been implemented, agents must first negotiate before performing actions. The outcome of all negotiations is agreements (or lack thereof), which may be between two or more agents. In particular, an agreement may influence the remaining actions that an agent can take in the climate and economic domains. For the same timestep, each agent then makes decisions with respect to the climate, economy, and trade.
+
+2009). As such, RICE-N shares climate, economic, and social characteristics with the real world.
+
+In RICE-N, there are $n$ regions, each modeled as an independent decision-making agent. At each time step, regions interact with each other and the environment through their actions: setting a savings rate, a mitigation rate, trades and tariffs, and negotiation actions.
+
+RICE-N has two main components: negotiation and climate-economic activity, see Figure 1. The activity component simulates the physical actions of the agents and the resulting evolution of the environment. The negotiation component simulates communication between regions, allowing them to influence each other's behaviors and form agreements. Agreements may, in turn, adjust the available actions for each region during the activity stage.
+
+Each simulation episode consists of $H$ steps, each representing $\Delta$ years (e.g., $\Delta = 5$ ). Thus, the simulation lasts for $H \times \Delta$ simulation-years. At every step, the simulation goes through the negotiation stages, and agreements are formed between regions. The simulation then enters the activity stage where each region takes actions that are affected by the agreements formed during the negotiation stages.
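+The per-step structure described above can be sketched as follows (a minimal, self-contained sketch; all names and the trivial dynamics are illustrative, not the actual RICE-N codebase):
+
+```python
+# Minimal sketch of the RICE-N episode structure: each of the H steps runs a
+# negotiation stage, then an activity stage, and advances DELTA years.
+# All names and dynamics are illustrative.
+
+H, DELTA = 20, 5  # 20 steps of 5 years each -> 100 simulation-years
+
+def run_episode(n_agents, negotiate=None):
+    year = 2020
+    min_mitigation = [0.0] * n_agents       # default: no binding commitment
+    mitigation_log = []
+    for t in range(H):
+        if negotiate is not None:           # negotiation stage (optional)
+            min_mitigation = negotiate(n_agents, t)
+        # Activity stage: each region picks a mitigation rate subject to its
+        # agreement; here a fixed preference of 0.1 unless committed higher.
+        chosen = [max(0.1, m) for m in min_mitigation]
+        mitigation_log.append(chosen)
+        year += DELTA                       # advance climate-economic dynamics
+    return year, mitigation_log
+
+final_year, log = run_episode(3, negotiate=lambda n, t: [0.3] * n)
+```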
+
+Climate-economic dynamics overview. The state of the world is characterized by global variables, such as the concentration of $\mathrm{CO}_{2}$ in the Earth's atmosphere and the average global temperature, as well as region-specific variables such as population, capital, technology level, carbon intensity of economic activity, and balance of trade. For more details, see Table 2 in Appendix B for variables and Appendix H for calibration details.
+
+RICE-N has climate and economic dynamics. The climate dynamics model how $\mathrm{CO}_{2}$ levels in the atmosphere impact global temperatures. The economic dynamics model how technology levels, capital, population, and gross domestic product evolve. Notably, the climate dynamics impact the economic dynamics through a damage function, which describes how higher temperatures lead to losses in capital.
+
+These dynamics depend on the savings and mitigation rates set by each agent; e.g., agents may choose to invest more in climate change mitigation, but this may lower economic productivity in the short term. As global $\mathrm{CO}_{2}$ levels and temperatures affect all agents, these dynamics mean the decisions of each agent affect the climate-economic outcomes for other agents, too.
+
+The activity component encapsulates these dynamics. For each step:
+
+1. The gross output production for each region is computed based on the state of the region, in particular its capital investment, labor (or population), and technology factor.
+2. The net economic output is the gross output production reduced by climate damages from rising global temperatures and the cost of this region's mitigation efforts.
+3. The region consumes domestic goods equal to the quantity of the net economic output left after capital investment and export. It also consumes foreign goods from imports.
+4. The consumption utility for each region from consuming domestic and foreign goods is computed using the Armington elasticity assumption, which has become standard in international computable general equilibrium models (Armington, 1969). This gives the reward corresponding to each region in every step.
+
+For more details, please refer to Appendix C.1 and Table 3 in Appendix B.
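+Under DICE-style functional forms, one activity step for a single region can be sketched as follows (Cobb-Douglas production, a quadratic damage factor, and a power-law abatement cost; the constants below are illustrative, not the paper's calibration, which is given in Appendix C.1):
+
+```python
+# Sketch of one activity step for a single region. Constants are illustrative.
+
+def activity_step(K, L, A, mu, T, savings, exports):
+    # 1. Gross output: Cobb-Douglas in capital K and labor L, scaled by technology A.
+    gamma = 0.3
+    gross = A * K**gamma * L**(1 - gamma)
+    # 2. Net output: gross output reduced by climate damages (growing with
+    #    temperature T) and by the cost of mitigation effort mu.
+    damage_factor = 1.0 / (1.0 + 0.00236 * T**2)
+    abatement_cost = 0.05 * mu**2.6          # fraction of output spent on mitigation
+    net = gross * damage_factor * (1.0 - abatement_cost)
+    # 3. Domestic consumption: what remains after investment and exports
+    #    (foreign consumption from imports is handled by the trade component).
+    investment = savings * net
+    domestic_consumption = net - investment - exports
+    return net, investment, domestic_consumption
+
+net, inv, cons = activity_step(K=100.0, L=50.0, A=5.0, mu=0.3,
+                               T=1.2, savings=0.25, exports=2.0)
+```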
+
+International trade and tariffs. RICE-N features international trade to exchange and transfer goods between agents, following (Lessmann et al., 2009). Here, agents are modeled to seek diversity in their consumption according to the Armington assumption (Armington, 1969), so they want to consume goods produced by other agents and are willing to export some of their own goods in exchange. Each agent specifies the amount of goods it wishes to import and sees its orders filled partially or fully, depending on the other agents' willingness to export. In addition, agents can choose to impose import tariffs to restrict trading. Import tariffs restrict the consumption of goods to which they are applied, implicitly increasing their prices. These price increases make imported goods without import tariffs more attractive. Therefore, trade and tariffs force agents to engage with other regions and be strategic, incentivizing negotiations and agreements. See Appendix C.2 for more details.
+
+Extensions The climate dynamics of DICE and RICE are based on (Nordhaus, 2018; Kellett et al., 2019). Economic analysis based on these dynamics suggests that optimal policy paths limit global warming to $3.5^{\circ}\mathrm{C}$ and lead to net zero economies in the next century (Nordhaus, 2019). Several works have criticized the trustworthiness of these results due to shortcomings in the model, such as missing representations of climate risk and uncertainty (Daniel et al., 2019), unrealistic damage functions (Drupp & Hänsel, 2021) and oversimplified climate dynamics (Mattauch et al., 2020). Recent work has shown that updating the model with a more realistic climate emulator and recent damage estimates results in optimal trajectories that are more aligned with the climate targets of the Paris agreement (Hänsel et al., 2020).
+
+RICE-N's modularity enables us to easily update the equations governing the dynamics, enabling comparison to existing studies with various setups. In particular, the climate, economic, and trade components are loosely coupled to enable isolated extensions and modifications. The order in which the dynamics act on the world state is determined in the global climate_and_economy_step() function, which can accommodate extensions or modifications with minimal changes. For example, similar to (Hänsel et al., 2020), we provide an updated, higher damage function based on a recent meta-analysis (Howard & Sterner, 2017). We also implement the climate dynamics of the well-known Finite Amplitude Impulse Response (FaIR) model, an emissions-based climate model that is also featured in the IPCC reports (Millar et al., 2017).
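+The loose coupling can be illustrated with a toy version of this step function (component names, signatures, and constants are hypothetical; only the name climate_and_economy_step() comes from the text above):
+
+```python
+# Sketch of swappable dynamics components. Swapping the damage function
+# requires no change to the step function itself.
+
+def dice2016_damages(T):
+    return 1.0 / (1.0 + 0.00236 * T**2)
+
+def steeper_damages(T):            # e.g. a Howard & Sterner-style, more punishing curve
+    return 1.0 / (1.0 + 0.00744 * T**2)
+
+def climate_and_economy_step(state, damage_fn, climate_fn):
+    """One global step: climate dynamics first, then damage-adjusted output."""
+    state["T"] = climate_fn(state["T"], state["emissions"])
+    state["output"] *= damage_fn(state["T"])
+    return state
+
+linear_warming = lambda T, e: T + 0.0001 * e   # toy climate dynamics
+state = {"T": 1.1, "emissions": 40.0, "output": 100.0}
+s1 = climate_and_economy_step(dict(state), dice2016_damages, linear_warming)
+s2 = climate_and_economy_step(dict(state), steeper_damages, linear_warming)
+```
+
+Because each component is a plain function over the world state, comparing damage functions (as in Figure 2) reduces to passing a different callable.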
+
+In Figure 2, we analyze different damage and climate functions. We compare three baselines: agents that always perform the minimum mitigation, agents that always perform the maximum mitigation, and agents trained with no negotiation. We see that the effect of the damage function on the strategy of the agents is marginal: even higher damages do not steer them away from maximizing their own utility. The choice of climate dynamics, however, clearly affects the resulting possible trajectories. The lowest emissions scenario under the DICE dynamics limits warming to between $2.6^{\circ}\mathrm{C}$ and $2.7^{\circ}\mathrm{C}$ by the end of the century. In contrast, the updated FaIR dynamics show that $2^{\circ}\mathrm{C}$ is still possible, which corresponds roughly to the low GHG emission scenario (SSP1-2.6) of the IPCC (Pörtner et al., 2022). As DICE2016 remains the most ubiquitous model in use, we continue with the DICE2016 climate and damage functions for the remainder of the work.
+
+# 4 Negotiation Protocols
+
+A negotiation protocol is a communication channel through which agents can make promises and demands that ultimately constrain their behavior. We design RICE-N to be modular such that its base dynamics can be used to test a variety of different negotiation protocols.
+
+In principle, a negotiation protocol can have many desirable properties, including but not limited to the following (Nisan et al., 2007; Luo et al., 2024):
+
+- Incentive compatibility: Agents have an incentive to act according to their true preferences (Pavan et al., 2014).
+- Self-enforcement: No external body or agent is required to compel agents to participate in negotiations and adhere to agreements (Telser, 1980).
+- Fairness: Agents obtain comparable utilities (Luo et al., 2024).
+
+Negotiation steps occur prior to the climate and economic step so that agents' climate-economic behavior is influenced by the outcome of negotiations through binding commitments.
+
+RICE-N is flexible enough to serve as a basis for different negotiation protocols. We implement two negotiation protocols: Bilateral Negotiation and Basic Club. The former serves as an example illustrating the key phases of negotiation; the latter is modeled on actual climate economic policy.
+
+Bilateral Negotiation In addition to the base dynamics of RICE-N, Bilateral Negotiation agents perform the following steps:
+
+- Proposal stage: At this stage, each agent $i$ makes a proposal, $(\hat{\mu}_i,\hat{\mu}_j)$ , to every other agent $j$ where $\hat{\mu}_i$ indicates the mitigation level that agent $i$ promises and $\hat{\mu}_j$ what agent $i$ requests from agent $j$ .
+- Evaluation stage: At this stage, each agent observes the proposals made to it in the preceding stage and takes an action of accepting or rejecting each of the proposals.
+- Commitment stage: At this stage, each agent commits to the maximum mitigation rate of all accepted proposals.
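+The three stages above can be sketched as follows (an illustrative sketch, not the RICE-N implementation):
+
+```python
+# Sketch of one Bilateral Negotiation round: proposal, evaluation, commitment.
+
+def bilateral_round(proposals, accept):
+    """proposals[i][j] = (mu_i_promised, mu_j_requested) from agent i to agent j.
+    accept[j][i] = True if agent j accepts agent i's proposal.
+    Returns the committed minimum mitigation rate per agent."""
+    n = len(proposals)
+    committed = [0.0] * n
+    for i in range(n):
+        for j in range(n):
+            if i == j:
+                continue
+            promised, requested = proposals[i][j]
+            if accept[j][i]:
+                # Commitment stage: each party keeps the max over accepted terms.
+                committed[i] = max(committed[i], promised)
+                committed[j] = max(committed[j], requested)
+    return committed
+
+# Agent 0 proposes (promise 0.2, request 0.5) to agent 1, who accepts;
+# agent 1 proposes (promise 0.1, request 0.4) to agent 0, who rejects.
+props = [[None, (0.2, 0.5)], [(0.1, 0.4), None]]
+acc = [[False, False], [True, False]]
+rates = bilateral_round(props, acc)
+```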
+
+Basic Club Basic Club is a multi-agent negotiation protocol based on climate clubs and the Carbon Border Adjustment Mechanism (CBAM) (Nordhaus, 2015; Commission, 2022). The former was initially proposed by William Nordhaus and codified by Article 6 of the Paris Agreement; a climate club is a coalition of regions that agree to a common emission reduction target and impose a uniform tariff on all goods coming from non-club members (Nordhaus, 2015; 2021b). The latter (i.e., CBAM) is a feature of the EU Green Deal, aiming to address carbon leakage and facilitate net zero emissions by 2050 (Commission, 2020; 2022). Basic Club borrows from Nordhaus the idea of a uniform tariff on all goods, and from CBAM a variable tariff whose value depends on the emission reduction target of the exporting country (Nordhaus, 2015; Commission, 2020; 2022). The legality of Basic Club with respect to World Trade Organization compliance is described in Appendix J.
+
+Figure 2. Extensions to the base setup of the framework. RICE-N is highly adaptable in that different components, such as the climate model or damage function, can easily be exchanged. We compare the basic no-negotiation baseline (see Section 6) over different damage functions and climate dynamics. The DICE2016 damage function and DICE2016 climate model are taken from (Nordhaus, 2018). The H&S damage function is based on the recent estimates of (Hänsel et al., 2020), and the FaIR climate model is a well-known emission-based climate emulator (Millar et al., 2017). The H&S damage function punishes temperature increases more strongly than the DICE2016 damage function, but the effect is still too mild during the roll-out period to lead to significant behavior changes in the agents under the no-negotiation scenario. The FaIR climate model implements more realistic climate dynamics and shows that a path to $2^{\circ}\mathrm{C}$ is still possible.
+
+Formally, the Basic Club functions as follows:
+
+- Proposal: Each agent $i \in \{1, \dots, n\}$ proposes $\hat{\mu}_i \in [0, 1]$ , indicative of the mitigation rate of the club they would like to join.
+- Evaluation: Each agent $i \in \{1, \dots, n\}$ evaluates proposals $\hat{\mu}_j$ from other agents $j \neq i$ , by either accepting or rejecting them. Let $A_i$ be the set of mitigation rates of accepted proposals for region $i$ .
+- Club Formation: We define the minimum mitigation rate of agent $i$ as $\mu_{i} = \max A_{i}$ if $A_{i}$ is not empty, otherwise, it is 0. A club $c$ is a subset of agents with a common minimum mitigation rate $\mu_{c}$ .
+- Sanctions: For each club $c$, non-members whose minimum mitigation rate is below the club mitigation rate, i.e., $\mu_{j} < \mu_{c}$, receive a tariff equal to the difference in rates, i.e., $\tau_{j,c} = \mu_{c} - \mu_{j}$. Members and non-members with $\mu_{j} \geq \mu_{c}$ have $\tau_{j,c} = 0$.
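+Club formation and sanctions can be sketched as follows (illustrative code, not the RICE-N implementation):
+
+```python
+# Sketch of Basic Club club formation and sanctions.
+
+def basic_club(accepted):
+    """accepted[i] = set of mitigation rates from proposals agent i accepted.
+    Returns each agent's minimum mitigation rate mu and pairwise tariffs."""
+    # Club formation: each agent's minimum rate is the max accepted proposal
+    # (0 if it accepted none); agents sharing a rate form a club.
+    mu = [max(A) if A else 0.0 for A in accepted]
+    # Sanctions: the club at rate mu_c tariffs agent j by mu_c - mu_j
+    # whenever mu_j < mu_c; otherwise the tariff is zero.
+    tariff = [[max(0.0, mu_c - mu_j) for mu_c in mu] for mu_j in mu]
+    return mu, tariff
+
+# Agents 0 and 1 both end up at rate 0.5 (same club); agent 2 commits to 0.2,
+# so it faces a 0.3 tariff from the 0.5-club.
+mu, tariff = basic_club([{0.5, 0.3}, {0.5}, {0.2}])
+```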
+
+Exact descriptions of the action spaces for both protocols can be found in Appendix C.3.
+
+Binding Commitments Commitments are made binding through action masks, which control the accessible action space during the following step. For example, negotiation could yield a binding commitment to a minimum of $20\%$ mitigation rate. Then, the action mask only allows setting mitigation rates above that level for this region. We discuss extensions beyond this paradigm in Section 7.
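+As a sketch, assuming a discretised mitigation action space (the 10% granularity here is illustrative), such a mask could look like:
+
+```python
+# Sketch of enforcing a binding commitment via an action mask over a
+# discretised mitigation action space (granularity is illustrative).
+
+def mitigation_mask(committed_rate, n_levels=10):
+    """Return a 0/1 mask over discrete mitigation levels 0.0, 0.1, ..., 1.0;
+    only levels at or above the committed rate remain available."""
+    return [1 if level / n_levels >= committed_rate else 0
+            for level in range(n_levels + 1)]
+
+# A binding commitment to a 20% minimum mitigation rate masks out
+# levels 0.0 and 0.1; levels 0.2 through 1.0 remain available.
+mask = mitigation_mask(0.2)
+```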
+
+In Section 6, we will compare the climate-economic impacts of Basic Club, Bilateral Negotiation and No Negotiation.
+
+# 5 Modeling Strategic Agents using MARL
+
+The negotiation protocols and the climate-economic dynamics of RICE-N define a game-theoretic setup between the different regions. We model the behavior of an agent $i$ using its policy $\pi_i(a_t|o_t)$ that maps the agent's observations $o_t$ to a probability distribution over its actions $a_t$ at time $t$ .
+
+Reason to use MARL Existing IAMs often use a predetermined policy for each agent that is fixed exogenously. Such approaches would require handcrafted agent policies for each different negotiation protocol. Furthermore, the reliability of the outcomes from the simulation would depend on the modeled policies.
+
+Figure 3. Comparison of climate-economic outcomes among different baselines (minimum mitigation, maximum mitigation, and no negotiation) against negotiation protocols (Bilateral Negotiation and Basic Club) with mean $\pm 1.96$ stderr (see Section 6 for more details). In respective order, we show the comparison of (a) global temperature anomaly (lower is better), (b) global carbon emissions (lower is better), (c) global output production (higher is better) and (d) global consumption (higher is better). While no negotiation performs best in the early stages of the simulation, it exhibits a downward trend after 2090, suggesting that the adverse effects of rising temperatures start to outweigh the economic benefits. In contrast, Basic Club surpasses no negotiation economically towards the end of the simulation while successfully maintaining a lower increase in temperature anomaly and carbon emissions.
+
+In contrast, we assume that each region strategically interacts with the environment and other strategic agents in it. Therefore, instead of manually setting the behavioral policy $\pi_{i}$ , we use machine learning techniques to find policies that seek to maximize the objectives of the agents, and hence derive the agent policies endogenously.
+
+Specifically, the agents are assumed to be utility-maximizing such that each agent $i$ optimizes its policy $\pi_i(a_t|o_t)$ to maximize its long-term aggregate $\gamma$ -discounted utility:
+
+$$
+\max_{\pi_{i}} \mathbb{E}_{\pi_{1},\dots,\pi_{n}}\left[\sum_{t=0}^{H} \gamma^{t} r_{i,t}\right], \tag{1}
+$$
+
+where $r_{i,t}$ is the utility of the region $i$ at step $t$ determined by its aggregate consumption $C_{i,t}$ as follows:
+
+$$
+r_{i,t} = U_{i,t} = \frac{w}{1-\alpha} L_{i,t}\left(\left(\frac{C_{i,t}}{L_{i,t}}\right)^{1-\alpha} - 1\right), \tag{2}
+$$
+
+where $L_{i,t}$ is the population of the region $i$ at step $t$ ; $w$ is the welfare loss multiplier that decreases the utility that a given
+
+region receives proportionally to how other regions tariff that region's exports (Nordhaus, 2021a) (see Appendix G for more details); the parameter $\alpha \geq 0$ is the consumption elasticity, which can represent the degree of risk aversion or the (un-)willingness of society to sacrifice consumption today for consumption in the future. The discount factor $\gamma$ models the long-term value of rewards for the agents, and as such it could differ across agents; the discount factor for each region is often updated with changing administrations, and in the absence of any consensus, we fix $\gamma$ to be homogeneous across agents, as assumed in (Nordhaus, 2015). The aggregate consumption $C_{i,t}$ in the above equation is obtained by combining the domestic and foreign goods consumption using the Armington model (Armington, 1969):
+
+$$
+C_{i,t} = \left(\psi^{\mathrm{dom}}\left(C_{i,i,t}\right)^{\lambda} + \sum_{j\neq i}\psi^{\mathrm{for}}\left(C_{i,j,t}\right)^{\lambda}\right)^{\frac{1}{\lambda}}, \tag{3}
+$$
+
+$$
+C_{i,j,t} = x_{i,j,t}\left(1-\tau_{i,j,t}\right) \quad \forall j \neq i, \tag{4}
+$$
+
+where $C_{i,j,t}$ is the foreign goods consumed after imposing tariffs $\tau_{i,j,t}$ on the imported goods $x_{i,j,t}$ by region $i$ from region $j$ at step $t$ . $\psi^{\mathrm{dom}}$ and $\psi^{\mathrm{for}}$ are shared parameters, and $\lambda$ is the Armington elasticity parameter that represents the degree to which consumers are willing to switch between domestic and imported goods when prices change.
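+Equations (2)-(4) can be computed directly; the parameter values below are illustrative rather than the paper's calibration:
+
+```python
+# Direct computation of the reward in Eqs. (2)-(4). Parameters are illustrative.
+
+def armington_consumption(domestic, imports, tariffs,
+                          psi_dom=1.0, psi_for=0.5, lam=0.5):
+    # Eq. (4): tariffs reduce effective foreign consumption.
+    foreign = [x * (1.0 - tau) for x, tau in zip(imports, tariffs)]
+    # Eq. (3): CES aggregation of domestic and foreign goods.
+    agg = psi_dom * domestic**lam + sum(psi_for * c**lam for c in foreign)
+    return agg ** (1.0 / lam)
+
+def utility(C, L, w=1.0, alpha=0.5):
+    # Eq. (2): isoelastic utility of per-capita consumption, scaled by population.
+    return (w / (1.0 - alpha)) * L * ((C / L) ** (1.0 - alpha) - 1.0)
+
+# One region consuming 10 units domestically, importing 2 and 4 units from two
+# partners, with a 50% tariff on the second partner's goods.
+C = armington_consumption(domestic=10.0, imports=[2.0, 4.0], tariffs=[0.0, 0.5])
+r = utility(C, L=5.0)
+```
+
+Note that with full tariffs ($\tau = 1$) foreign consumption vanishes and the aggregate reduces to domestic consumption, matching Eq. (3).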
+
+Multi-agent reinforcement learning (MARL) Simulating a utility-maximizing agent requires computing the optimal policy for each agent. Finding the optimal utility-maximizing policy for each agent in response to complex environment dynamics and other agent policies naturally leads to MARL; (Busoniu et al., 2008) provides a comprehensive overview of the topic. In short, MARL extends single-agent RL to find an optimal policy for each agent interacting in a dynamic environment to solve Equation 1. The RL framework models how an agent's actions affect the state of the environment and its rewards (utilities). Thus, an RL agent has to learn to anticipate the long-term effects of its actions. This is especially challenging in multi-agent environments such as RICE-N, where agent actions affect key climate-economic metrics, including global temperatures, capital investments, and carbon emissions. In addition, MARL algorithms have to deal with the additional game-theoretic challenge of each agent's response to the policies of other agents. This makes finding the optimal policy a moving target (until a form of equilibrium is reached in the agent policies).
+
+Equilibrium Concept In contrast to the original RICE IAM (Nordhaus & Yang, 1996b), which discusses pure strategy Nash equilibria, RICE-N employs reinforcement learning agents operating in a dynamic, partially observed environment with expanded action spaces and evolving negotiation protocols. As such, there is no single, fixed equilibrium concept that applies across all scenarios modeled in RICE-N. Instead, the emergent behavior of agents can approximate different forms of equilibria depending on the structure of the negotiation protocol:
+
+1. Equilibrium type: The negotiation protocol can introduce otherwise irrelevant equilibrium concepts, such as correlated equilibria arising from a stochastic protocol.
+2. Punishment and enforcement: Action spaces that include tariffs or sanctions can affect the equilibrium and introduce the possibility of self-enforcement through collective punishment for defection.
+3. Non-binding agreements: If commitment masks are relaxed to allow cheap talk, other solution concepts such as coalition-proof Nash equilibria become relevant (Bernheim et al., 1987).
+4. Information structure: The negotiation protocol can affect what information is public vs private.
+
+Thus, equilibrium behavior in RICE-N is not defined a priori, but emerges from the interaction of learning dynamics, available actions and the design of the negotiation process.
+
+Implementing RL Agents Our code includes both CPU and GPU implementations of the full RL pipeline using A2C (Mnih et al., 2016). RICE-N can also be used with other RL implementations. Our base implementation models each RL agent using a neural network policy that shares weights across agents, but uses agent-specific inputs. The architecture of the network can be adjusted, e.g., the number of layers and the dimension of each layer. Agent policies use separate heads for each action. To distinguish between agents, the policy model's input contains agent-specific features, e.g., their population, capital, technology factor, damage function, and a one-hot representation of the region's index, as well as the public state of the world (e.g., climate conditions). In addition, each agent receives information about negotiations, e.g., the latest proposals made to and by this agent, or the minimum mitigation rate agreed upon by this agent. Depending on the negotiation protocol, not all observations and actions are relevant to each agent. How negotiations evolve depends on the specifics of the protocol and the different actions executed by the agent, e.g., proposals for other agents, decisions on proposals made by other agents, and setting mitigation and savings rates that may or may not be in line with what was agreed upon.
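+A minimal sketch of such a shared-weight policy with per-action heads follows (dimensions, architecture, and head names are illustrative, not the paper's configuration):
+
+```python
+import numpy as np
+
+# Sketch of a shared-weight policy with agent-specific inputs: one network
+# serves all agents, and a one-hot region index distinguishes them.
+rng = np.random.default_rng(0)
+n_agents, obs_dim, hidden = 3, 8, 16
+heads = {"mitigation": 10, "savings": 10}   # one categorical head per action
+
+W1 = rng.normal(size=(obs_dim + n_agents, hidden)) * 0.1
+W_heads = {k: rng.normal(size=(hidden, d)) * 0.1 for k, d in heads.items()}
+
+def policy(obs, agent_idx):
+    one_hot = np.eye(n_agents)[agent_idx]           # agent-specific feature
+    h = np.tanh(np.concatenate([obs, one_hot]) @ W1)  # shared trunk
+    out = {}
+    for name, W in W_heads.items():                 # separate head per action
+        logits = h @ W
+        out[name] = np.exp(logits) / np.exp(logits).sum()
+    return out
+
+probs = policy(np.zeros(obs_dim), agent_idx=1)
+```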
+
+# 6 Evaluating Negotiation Protocols
+
+In this section, we use RICE-N to analyze the climate and economic outcomes of the Basic Club and Bilateral Negotiation protocols (see Section 4). We compare the climate-economic outcomes of these protocols with three baselines, namely a minimal mitigation policy, a maximal mitigation policy, and a policy trained with no negotiation. We also examine the (group) fairness of these protocols across regional contributions by calculating the Gini Index of various climate-economic variables measured by RICE-N.
+
+Experimental Setup To compare outcomes with and without negotiation, we train five models consisting of (i) Basic Club, (ii) Bilateral Negotiation, (iii) a no negotiation baseline, (iv) Maximum mitigation, and (v) Minimum mitigation. The latter two models can only mitigate either the maximum or minimum possible amount, respectively; all other actions in (iv) and (v) are trained as normal. Each model is trained for 30,000 episodes. For evaluation, we gather 50 rollouts of each model using a unique seed per run to ensure a varied distribution of outputs. For more details about the concrete time series, please refer to Appendix M.
+
+Negotiation Protocols Can Improve Climate-Economic Outcomes. In Figure 3, we illustrate the global temperature anomaly, carbon emissions, output production and consumption across time steps. To establish upper and lower bounds in performance, we also compare our findings to maximal and minimal mitigation strategies. The former consists of the maximum possible emissions reductions per time step and the latter features no emissions reduction.
+
+We see that Basic Club and Bilateral Negotiation perform vastly better than the minimal mitigation and no negotiation baselines, with temperature increase and carbon emission outcomes nearly matching the maximal mitigation baseline.
+
+Economically, however, Bilateral Negotiation falls relatively short while Basic Club is not significantly worse than the minimum mitigation, maximum mitigation and no negotiation baselines at the last time step. In fact, Basic Club ends with steady output growth, as opposed to the slowing economic growth of the minimum mitigation and no negotiation baselines. The distribution of mitigation strategies under different negotiation protocols is visible in Appendix L.
+
+For global consumption, we see that no negotiation performs best, but this trend is unlikely to continue given the decrease in economic production observed in later time steps: damage from rising temperatures would significantly limit sustained economic growth in the long term. Minimum mitigation performs second best, but Basic Club results in nearly as much consumption at the last time step, which, paired with its promising steady growth in economic output, appears likely to yield more sustainable outcomes in the long run.
+
+Overall, the large improvement in global temperature levels at the expense of a comparatively small difference in production output shows the value of these protocols. To assess the robustness of our results, we conduct a sensitivity analysis on selected parameters grounded in economic theory: the discount factor, welfare loss weight, consumption substitution rate, and relative preference for domestic goods. The results, discussed in Appendix K, confirm that our findings are stable under changes in critical model parameters across different scenarios.
+
+Negotiation Protocols Impact Fairness To measure the (group) fairness of outcomes across climate-economic variables of interest, we use the Gini index (Gini, 1912), a statistical measure quantifying the inequality of a variable $x$ within a population. The Gini index ranges from 0 (perfect equality) to 1 (absolute inequality) on the variable of interest:
+
+$$
+G(x) = \frac{\sum_{i=1}^{n}\sum_{j=1}^{n}\left|x_{i} - x_{j}\right|}{2n^{2}\bar{x}} \tag{5}
+$$
+
+where $n$ is the number of agents, and $\bar{x}$ represents the mean.
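+Equation (5) translates directly into code:
+
+```python
+# Gini index per Eq. (5): mean absolute difference over twice the mean,
+# for a population x of n values.
+
+def gini(x):
+    n = len(x)
+    mean = sum(x) / n
+    mad = sum(abs(xi - xj) for xi in x for xj in x)
+    return mad / (2 * n * n * mean)
+
+g_equal = gini([1.0, 1.0, 1.0, 1.0])    # perfect equality -> 0
+g_unequal = gini([0.0, 0.0, 0.0, 1.0])  # one agent holds everything
+```
+
+Note that for a finite population the maximum attainable value is $(n-1)/n$ rather than exactly 1.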
+
+Table 1 (left) illustrates the degree of inequality of different negotiation protocols across regions with respect to abatement cost, mitigation rate, carbon emissions, and consumption. We first note that both Basic Club and Bilateral Negotiation result in lower inequality than No Negotiation when it comes to abatement cost, mitigation rate, and carbon emissions. That is, the economic burden of emission reduction is more equitably shared when agents have the opportunity to negotiate with one another about their emission reduction targets. The inequality of carbon emissions itself is less impacted, as carbon intensity is a largely region-specific parameter. However, for the mitigation rate, Bilateral Negotiation has a significantly lower Gini index than Basic Club. Although this may appear desirable, it may not be optimal for all regions to contribute equally to mitigation efforts. In fact, contrasting the Global Output Production and Global Carbon Emissions of Basic Club and Bilateral Negotiation in Table 1 (right), Basic Club achieves significantly better economic outcomes at the cost of a slight increase in temperature while maintaining a similar Gini index for consumption.
+
+Discussion and Analysis From a climate-oriented perspective, Bilateral Negotiation outperforms Basic Club. This stems from the commitment mechanism used: Bilateral Negotiation agents take the maximum of all accepted proposals and requests. Early in training, those proposals exhibit a high degree of randomness, so the maximum of random proposals is often near the maximum possible mitigation rate. We emphasize that Bilateral Negotiation is intended as an example for RICE-N users that illustrates the basic steps of negotiation; it is not a realistic negotiation protocol. While Basic Club performs comparably, it involves a much smaller proposal action space and is designed to reflect realistic climate policy and real-world climate negotiations (Commission, 2022; Nordhaus, 2015; Commission, 2020).
+
+Bilateral Negotiation also has limited real-world applicability: the $O(n^{2})$ communication complexity for $n$ agents is inefficient for large-scale international negotiations. Moreover, bilateral negotiations can lead to contradictory agreements with no feasible solution. This challenge underscores why most international climate negotiations opt for multilateral forums where all parties collectively discuss and agree upon terms, as exemplified by the United Nations Framework Convention on Climate Change (UNFCCC) process (Mantlana & Jegede, 2022).
+
+From a climate justice standpoint, both the Basic Club and Bilateral Negotiation protocols evaluated in this study do not adequately consider regional differences. The high mitigation rates achieved by these protocols may not be equitable or desirable for regions whose historical emissions are typically lower than those of others.
+
+Potential Policy Implications Our findings suggest that the Basic Club protocol holds promise as a basis for real-world climate policy. However, the impacts of climate clubs and of border adjustment mechanisms depend significantly on how they are implemented. Without complementary measures such as redistribution and technology transfer, these mechanisms risk functioning as de facto carbon taxes on developing countries that are heavily reliant on carbon-intensive development pathways (Goldthau & Tagliapietra, 2022; Perdana & Vielle, 2022). One way to mitigate this risk is through instruments like the Loss and Damage Fund, which can support climate justice and compensate vulnerable countries for harms that are difficult to avoid or adapt to (Boyd et al., 2021). That said, we caution that the outcomes observed in our framework should not be interpreted as direct predictions for real-world negotiations. Rather, RICE-N offers a simulation platform to explore, test, and compare the dynamics and consequences of alternative climate policy designs under controlled and transparent assumptions.
+
+# 7 Limitations
+
+RICE-N can be improved in a number of aspects. First, we do not make use of regional damages and temperatures, which are captured by other RICE models; representing regional disparities is critical for analyzing model outcomes from a climate justice perspective (Gazzotti et al., 2021). Second, longer-time-horizon versions of RICE are available (Biswas et al., 2024) that extend the simulation up to 300 years. Furthermore, RICE-N does not represent damage disparities within regions (Dennig et al., 2015), which may obscure the mediating role of socioeconomic class on climate damages. Finally, the reasoning and decision-making logic behind the negotiation process remains a black box and lacks interpretability, as it directly stems from the actions of agents trained through multi-agent reinforcement learning (MARL) algorithms.
+
+Table 1. Climate-economic outcomes and Gini index at the last time step (year 2115). In bold are the best results. If two results are not significantly different, we bold both.
+
+| Negotiation protocol | Gini: Abatement cost (%GDP) | Gini: Mitigation rate (%) | Gini: Carbon (GtC) | Gini: Consumption (USD trillion) | Temp. (℃) | Output (USD trillion) | Carbon (GtC) | Consumption (USD trillion) |
+|---|---|---|---|---|---|---|---|---|
+| None | 0.695±0.018 | 0.405±0.018 | 0.654±0.008 | 0.529±0.002 | 5.121±0.032 | 233.6±3.9 | 1697.2±25.6 | 8.96±0.04 |
+| Bilateral | 0.333±0.005 | 0.011±0.007 | 0.565±0.020 | 0.540±0.004 | 3.34±0.03 | 209.4±5.5 | 487.0±15.2 | 8.33±0.05 |
+| Basic Club | 0.339±0.006 | 0.030±0.006 | 0.579±0.017 | 0.541±0.003 | 3.422±0.018 | 235.1±5.0 | 534.7±9.50 | 8.55±0.05 |
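+
+For reference, the Gini index (Gini, 1912) reported in Table 1 can be computed from a sample of per-region values with the standard sorted-sample formula. A minimal sketch, not the paper's implementation:
+
+```python
+def gini(values):
+    """Gini index of a non-negative sample: 0 for perfect equality,
+    approaching 1 when a single agent holds everything."""
+    xs = sorted(values)
+    n, total = len(xs), sum(xs)
+    if n == 0 or total == 0:
+        return 0.0
+    # G = 2 * sum_i i * x_(i) / (n * sum x) - (n + 1) / n, with x sorted
+    weighted = sum(i * x for i, x in enumerate(xs, start=1))
+    return 2.0 * weighted / (n * total) - (n + 1) / n
+
+print(gini([1, 1, 1, 1]))   # perfect equality -> 0.0
+print(gini([0, 0, 0, 1]))   # one holder among four -> 0.75
+```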
+
+In addition to addressing the aforementioned shortcomings, we aim to extend RICE-N in the following directions. We plan to integrate multi-level reinforcement learning (Zheng et al., 2022b) to model the negotiation protocol of the Conference of the Parties. Moreover, we aim to explore the inclusion of a welfare redistribution mechanism (Orlov et al., 2024) as a component of negotiation. Additionally, we are exploring the use of JAX (Bradbury et al., 2018) to significantly enhance the performance of RICE-N. This would enable wide-scale sensitivity analysis across simulation parameters. Furthermore, we will leverage large language models to enhance the interpretability of the negotiation process.
+
+In our current setup, agents cannot deviate from agreed-upon actions for the specified time step (5 years). While this assumption simplifies analysis and isolates the impact of the negotiation mechanism, it does not reflect the uncertainty and strategic mistrust present in real-world climate negotiations (e.g., countries withdrawing from agreements). Future work should explore the incorporation of non-binding commitments, which would allow agents to deviate from agreements and engage in strategic communication or cheap talk (Crawford & Sobel, 1982; Caparros, 2016).
+
+# 8 Conclusion
+
+In this paper, we introduced RICE-N, a novel integrated assessment model that combines climate-economic dynamics with multi-agent reinforcement learning to simulate global climate negotiations and agreements. RICE-N offers a flexible framework for testing various negotiation protocols and their impact on long-term climate and economic outcomes. We demonstrated the utility of RICE-N by implementing and comparing two negotiation protocols: Bilateral Negotiation and Basic Club. Our results show that both protocols can lead to improved climate outcomes compared to scenarios without negotiation while maintaining comparable economic performance. Notably, the Basic Club protocol, inspired by real-world climate policy proposals, achieved a balance between emissions reduction and economic growth that surpassed the no-negotiation baseline in the long term.
+
+Beyond this specific application, RICE-N offers value to a range of research and policy communities. Machine learning researchers can leverage the modular RL component to benchmark different reinforcement learning algorithms in a dynamic, real-world-calibrated environment. Climate scientists can use the framework to compare alternative climate modules and damage functions under consistent economic and strategic assumptions. Governments may apply it to test the robustness of proposed climate-economic policies to strategic behavior and to anticipate potential impacts on international trade. International organizations, such as the OECD or WTO, could use the tool to analyze the distributional and economic trade-offs associated with alternative climate policy designs and negotiation strategies. In this way, RICE-N contributes to the development of more robust and equitable climate policies, supporting efforts to mitigate climate change while maintaining sustainable economic development.
+
+# Impact Statement
+
+The goal of this work is to produce a climate-economic model that helps foster robust, durable negotiation protocols. Tools for collective cooperation such as RICE-N can help us move toward more sustainable, fair, and long-lasting climate-economic outcomes. However, such tools can also lead to unintended consequences, including the carbon footprint of running RICE-N, economic inequality, and the limits of its applicability to the real world.
+
+Carbon footprint It is important to acknowledge that using RICE-N inevitably results in carbon emissions. Therefore, we encourage users to consider their energy usage when running experiments and offset their carbon emissions.
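+
+As a rough way to act on this, the energy drawn during an experiment can be converted into an emissions estimate via grid carbon intensity. All numbers below are assumed placeholders (intensity varies widely by region), not measurements from RICE-N runs:
+
+```python
+def run_emissions_kg(gpu_hours: float, watts: float = 300.0,
+                     kg_co2_per_kwh: float = 0.4) -> float:
+    """Back-of-envelope kgCO2 for a run drawing `watts` for `gpu_hours`.
+
+    Both the 300 W draw and the 0.4 kgCO2/kWh grid intensity are
+    illustrative defaults, not measured values.
+    """
+    kwh = gpu_hours * watts / 1000.0
+    return kwh * kg_co2_per_kwh
+
+# 100 GPU-hours at the assumed draw and intensity -> 12.0 kgCO2
+print(run_emissions_kg(100.0))
+```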
+
+Economic Inequality As previously discussed, economic inequality is inevitably intertwined with climate change. Any approach to addressing climate change should therefore account for the economic inequalities it could affect.
+
+Real World Potential and Limits It is important to note that predictions made with RICE-N will inevitably differ from actual outcomes due to inherent real-world complexity and the limitations of simulation dynamics. Decision makers should therefore weigh these limitations and the possible gaps between simulated and real outcomes before making any policy decisions.
+
+# Acknowledgments
+
+T Zhang acknowledges the support from Microsoft and Samsung. P Wozny acknowledges the support of the Fiscal Institute of Tilburg. K H Cohrs acknowledges the support from the European Research Council (ERC) under the ERC Synergy Grant USMILE (grant agreement 855187). We acknowledge the support from the Canada CIFAR AI Chair Program and the Canada Excellence Research Chairs Program. We also acknowledge everyone who contributed to or joined the AI4GCC (AI for Global Climate Cooperation) competition.
+
+# References
+
+Anderson, S. P., Goeree, J. K., and Holt, C. A. A theoretical analysis of altruism and decision error in public goods games. Journal of Public Economics, 70(2):297-323, 1998.
+Armington, P. S. A theory of demand for products distinguished by place of production. Staff Papers, 16(1): 159-178, 1969.
+Bakaki, Z. The impact of climate summits. Nature Climate Change, 12(7):611-612, 2022.
+Barrett, S. and Dannenberg, A. Climate negotiations under scientific uncertainty. Proceedings of the National Academy of Sciences, 109(43):17372-17376, 2012.
+Bernheim, B., Peleg, B., and Whinston, M. D. Coalition-proof Nash equilibria I. Concepts. Journal of Economic Theory, 42(1):1-12, 1987. ISSN 0022-0531. doi: https://doi.org/10.1016/0022-0531(87)90099-8. URL https://www.sciencedirect.com/science/article/pii/0022053187900998.
+Biswas, P., Zatarain Salazar, J., and Kwakkel, J. Adaptive strategies to reconcile diverse equity preferences in climate policies. In EGU General Assembly Conference Abstracts, pp. 4387, 2024.
+Boers, N. and Rypdal, M. Critical slowing down suggests that the western greenland ice sheet is close to a tipping point. Proceedings of the National Academy of Sciences, 118(21), 2021.
+Bonabeau, E. Agent-based modeling: Methods and techniques for simulating human systems. Proceedings of the national academy of sciences, 99(suppl_3):7280-7287, 2002.
+Boyd, E., Chaffin, B. C., Dorkenoo, K., Jackson, G., Harrington, L., N'Guetta, A., Johansson, E. L., Nordlander, L., Paolo De Rosa, S., Raju, E., Scown, M., Soo, J., and Stuart-Smith, R. Loss and damage from climate change: A new climate justice agenda. One Earth, 4(10):1365-1370, 2021. ISSN 2590-3322. doi: https://doi.org/10.1016/j.onear.2021.09.015. URL https://www.sciencedirect.com/science/article/pii/S2590332221005376.
+Bradbury, J., Frostig, R., Hawkins, P., James, M. J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., and Zhang, Q. JAX: Composable transformations of Python+NumPy programs, 2018.
+Buitinck, L., Louppe, G., Blondel, M., Pedregosa, F., Mueller, A., Grisel, O., Niculae, V., Prettenhofer, P., Gramfort, A., Grobler, J., Layton, R., VanderPlas, J., Joly, A., Holt, B., and Varoquaux, G. API design for machine learning software: experiences from the scikit-learn project, 2013.
+Busoniu, L., Babuska, R., and De Schutter, B. A comprehensive survey of multiagent reinforcement learning. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 38(2):156-172, 2008.
+Cao, K., Lazaridou, A., Lanctot, M., Leibo, J. Z., Tuyls, K., and Clark, S. Emergent communication through negotiation. arXiv preprint arXiv:1804.03980, 2018.
+Caparros, A. The paris agreement as a step backward to gain momentum: Lessons from and for theory. Revue d'économie politique, 126:347, 08 2016. doi: 10.3917/rdp.263.0347.
+
+Carney, M. Breaking the tragedy of the horizon-climate change and financial stability. Speech given at Lloyd's of London, 29:220-230, 2015.
+Chan, S., Hale, T., Deneault, A., Shrivastava, M., Mbeva, K., Chengo, V., and Atela, J. Assessing the effectiveness of orchestrated climate action from five years of summits. Nature Climate Change, pp. 1-6, 2022.
+Chelarescu, P. Deception in social learning: A multi-agent reinforcement learning perspective. arXiv preprint arXiv:2106.05402, 2021.
+Comin, D. Total factor productivity, pp. 260-263. Palgrave Macmillan UK, London, 2010. ISBN 978-0-230-28082-3. doi: 10.1057/9780230280823_32. URL https://doi.org/10.1057/9780230280823_32.
+Commission, E. European Green Deal. 2020.
+Commission, E. Regulation of the European Parliament and of the Council establishing a Carbon Border Adjustment Mechanism (CBAM). 2022.
+Crawford, V. P. and Sobel, J. Strategic information transmission. Econometrica, 50(6):1431-1451, 1982. ISSN 00129682, 14680262. URL http://www.jstor.org/stable/1913390.
+Daniel, K. D., Litterman, R. B., and Wagner, G. Declining CO₂ price paths. Proceedings of the National Academy of Sciences, 116(42):20886-20891, 2019. doi: 10.1073/pnas.1817444116. URL https://www.pnas.org/doi/abs/10.1073/pnas.1817444116.
+Dannenberg, A., Löschel, A., Paolacci, G., Reif, C., and Tavoni, A. On the provision of public goods with probabilistic and ambiguous thresholds. Environmental and Resource economics, 61:365-383, 2015.
+DeCanio, S. J. and Fremstad, A. Game theory and climate diplomacy. Ecological Economics, 85:177-187, 2013.
+DeConto, R. M., Pollard, D., Alley, R. B., Velicogna, I., Gasson, E., Gomez, N., Sadai, S., Condron, A., Gilford, D. M., Ashe, E. L., et al. The paris climate agreement and future sea-level rise from antarctica. Nature, 593(7857): 83-89, 2021.
+Dennig, F., Budolfson, M. B., Fleurbaey, M., Siebert, A., and Socolow, R. H. Inequality, climate impacts on the future poor, and carbon prices. Proceedings of the National Academy of Sciences, 112(52):15827-15832, 2015.
+Drupp, M. A. and Hänsel, M. C. Relative prices and climate policy: How the scarcity of nonmarket goods drives policy evaluation. American Economic Journal: Economic Policy, 13(1):168-201, February 2021. doi: 10.1257/pol.20180760. URL https://www.aeaweb.org/articles?id=10.1257/pol.20180760.
+Duque, J. A., Aghajohari, M., Cooijmans, T., Zhang, T., Samiei, M., and Courville, A. Advantage alignment algorithms. 2024. URL https://openreview.net/forum?id=X7QYCcAHw3.
+Erickson, P., Kartha, S., Lazarus, M., and Tempest, K. Assessing carbon lock-in. Environmental Research Letters, 10(8):084023, 2015.
+Farmer, J. D., Hepburn, C., Mealy, P., and Teytelboym, A. A third wave in the economics of climate change. Environmental and Resource Economics, 62(2):329-357, 2015.
+Gardiner, S. M. The real tragedy of the commons. Philosophy & public affairs, 30(4):387-416, 2001.
+Gazzotti, P. Rice50+: Dice model at country and regional level. Socio-Environmental Systems Modelling, 4: 18038-18038, 2022.
+Gazzotti, P., Emmerling, J., Marangoni, G., Castelletti, A., Wijst, K.-I. v. d., Hof, A., and Tavoni, M. Persistent inequality in economically optimal climate policies. Nature Communications, 12(1):3421, 2021.
+Gini, C. Variabilità e mutabilità: contributo allo studio delle distribuzioni e delle relazioni statistiche. [Fasc. I.]. Tipogr. di P. Cuppini, Bologna, Italy, 1912.
+Goldthau, A. and Tagliapietra, S. How an open climate club can generate carbon dividends for the poor. EURACTIV media network, 2022.
+Greeven, S., Kraan, O., Chappin, E. J., et al. The emergence of climate change and mitigation action by society: an agent-based scenario discovery study. Journal of Artificial Societies and Social Simulation, 19(3):9, 2016.
+Hänsel, M. C., Drupp, M. A., Johansson, D. J. A., Nesje, F., Azar, C., Freeman, M. C., Groom, B., and Sterner, T. Climate economics support for the un climate targets. Nature Climate Change, 10(8):781-789, Aug 2020. ISSN 1758-6798. doi: 10.1038/s41558-020-0833-x. URL https://doi.org/10.1038/s41558-020-0833-x.
+Hanumaiah, V. and Genc, S. Distributed multiagent deep reinforcement learning framework for whole-building hvac control, 2021. URL https://arxiv.org/abs/2110.13450.
+Hardin, G. The tragedy of the commons: the population problem has no technical solution; it requires a fundamental extension in morality. science, 162(3859):1243-1248, 1968.
+
+Hou, X., Yuan, J., Leibo, J. Z., and Jaques, N. Investesg: A multi-agent reinforcement learning benchmark for studying climate investment as a social dilemma, 2025. URL https://arxiv.org/abs/2411.09856.
+Howard, P. H. and Sterner, T. Few and not so far between: A meta-analysis of climate damage estimates. Environmental and Resource Economics, 68(1):197-225, Sep 2017. ISSN 1573-1502. doi: 10.1007/s10640-017-0166-z. URL https://doi.org/10.1007/s10640-017-0166-z.
+Jaques, N., Lazaridou, A., Hughes, E., Gulcehre, C., Ortega, P., Strouse, D., Leibo, J. Z., and De Freitas, N. Social influence as intrinsic motivation for multi-agent deep reinforcement learning. In International conference on machine learning, pp. 3040-3049. PMLR, 2019.
+Kellett, C. M., Weller, S. R., Faulwasser, T., Grüne, L., and Semmler, W. Feedback, dynamics, and optimal control in climate economics. Annual Reviews in Control, 47:7-20, 2019.
+Kundzewicz, Z. Extreme weather events and their consequences. 23:59-69, 2016. doi: 10.1515/IGBP-2016-0005.
+Le Glaou, T., Marjou, X., Lemlouma, T., and Radier, B. Towards circular and asymmetric cooperation in a multi-player graph-based iterated prisoner's dilemma. In 14th International Conference on Agents and Artificial Intelligence, 2022.
+Lerer, A. and Peysakhovich, A. Maintaining cooperation in complex social dilemmas using deep reinforcement learning, 2018.
+Lessmann, K., Marschinski, R., and Edenhofer, O. The effects of tariffs on coalition formation in a dynamic global warming game. Economic Modelling, 26(3):641-649, 2009.
+Loschel, A., Kallis, G., Dannenberg, A., and Tavoni, A. Inequality, communication and the avoidance of disastrous climate change. 2011.
+Lovejoy, T. E. and Nobre, C. Amazon tipping point, 2018.
+Luo, X., Li, Y., Huang, Q., and Zhan, J. A survey of automated negotiation: Human factor, learning, and application. Computer Science Review, 54:100683, 2024. ISSN 1574-0137. doi: https://doi.org/10.1016/j.cosrev.2024.100683. URL https://www.sciencedirect.com/science/article/pii/S1574013724000674.
+Madani, K. Modeling international climate change negotiations more responsibly: Can highly simplified game theory models provide reliable policy insights? Ecological Economics, 90:68-76, 2013.
+
+Mai, V., Maisonneuve, P., Zhang, T., Nekoei, H., Paull, L., and Lesage-Landry, A. Multi-agent reinforcement learning for fast-timescale demand response of residential loads. Machine Learning, 113(8):5203-5234, Aug 2024. ISSN 1573-0565. doi: 10.1007/s10994-023-06460-4. URL https://doi.org/10.1007/s10994-023-06460-4.
+Mantlana, B. and Jegede, A. O. Understanding the multilateral negotiations on climate change ahead of cop27: Priorities for the african region. South African Journal of International Affairs, 29:255-270, 2022. doi: 10.1080/10220461.2022.2134201.
+Mattauch, L., Matthews, H. D., Millar, R., Rezai, A., Solomon, S., and Venmans, F. Steering the climate system: Using inertia to lower the cost of policy: Comment. American Economic Review, 110(4):1231-37, April 2020. doi: 10.1257/aer.20190089. URL https://www.aeaweb.org/articles?id=10.1257/aer.20190089.
+Mavroidis, P. C. and de Melo, J. Climate change policies and the WTO: Greening the GATT, revisited. Towards a Workable and Effective Climate Regime, pp. 225, 2015.
+May, R. and Huang, P. A multi-agent reinforcement learning approach for investigating and optimising peer-to-peer prosumer energy markets. Applied Energy, 334:120705, 2023. ISSN 0306-2619. doi: https://doi.org/10.1016/j.apenergy.2023.120705. URL https://www.sciencedirect.com/science/article/pii/S0306261923000697.
+Milinski, M., Sommerfeld, R. D., Krambeck, H.-J., Reed, F. A., and Marotzke, J. The collective-risk social dilemma and the prevention of simulated dangerous climate change. Proceedings of the National Academy of Sciences, 105 (7):2291-2294, 2008.
+Millar, R. J., Nicholls, Z. R., Friedlingstein, P., and Allen, M. R. A modified impulse-response representation of the global near-surface air temperature and atmospheric concentration response to carbon dioxide emissions. Atmospheric Chemistry and Physics, 17(11):7213-7228, 2017. doi: 10.5194/acp-17-7213-2017. URL https://acp.copernicus.org/articles/17/7213/2017/.
+Mnih, V., Badia, A. P., Mirza, M., Graves, A., Lillicrap, T. P., Harley, T., Silver, D., and Kavukcuoglu, K. Asynchronous methods for deep reinforcement learning. arXiv preprint arXiv: 1602.01783, 2016.
+Moore, F. C., Lacasse, K., Mach, K. J., Shin, Y. A., Gross, L. J., and Beckage, B. Determinants of emissions pathways in the coupled climate-social system. Nature, 603(7899):103-111, 2022.
+
+Nisan, N., Roughgarden, T., Tardos, E., and Vazirani, V. V. (eds.). Algorithmic Game Theory. Cambridge University Press, 2007. URL https://EconPapers.repec.org/RePEc:cup:cbooks:9780521872829.
+Nordhaus, W. D. A review of the Stern review on the economics of climate change. Journal of economic literature, 45(3):686-702, 2007.
+Nordhaus, W. D. Climate clubs: Overcoming free-riding in international climate policy. American Economic Review, 105(4):1339-70, 2015.
+Nordhaus, W. D. Evolution of modeling of the economics of global warming: changes in the DICE model, 1992-2017. Climatic Change, 148(4):623-640, June 2018. doi: 10.1007/s10584-018-2218-y. URL https://ideas.repec.org/a/spr/climat/v148y2018i4d10.1007_s10584-018-2218-y.html.
+Nordhaus, W. D. Climate change: The ultimate challenge for economics. American Economic Review, 109(6):1991-2014, June 2019. doi: 10.1257/aer.109.6.1991. URL https://www.aeaweb.org/articles?id=10.1257/aer.109.6.1991.
+Nordhaus, W. D. Climate club futures: On the effectiveness of future climate clubs. 2021a.
+Nordhaus, W. D. Dynamic climate clubs: On the effectiveness of incentives in global climate agreements. Proceedings of the National Academy of Sciences, 118 (45):e2109988118, 2021b.
+Nordhaus, W. D. and Yang, Z. A regional dynamic general-equilibrium model of alternative climate-change strategies. The American Economic Review, 86(4):741-765, 1996a. ISSN 00028282. URL http://www.jstor.org/stable/2118303.
+Nordhaus, W. D. and Yang, Z. A regional dynamic general-equilibrium model of alternative climate-change strategies. The American Economic Review, pp. 741-765, 1996b.
+Orlov, S., Rovenskaya, E., Puaschunder, J., and Semmler, W. Green bonds, transition to a low-carbon economy, and intertemporal welfare allocation: Evidence from an extended DICE model. AIMS Environmental Science, 11(4):628-648, 2024.
+Orzan, N., Acar, E., Grossi, D., Mannion, P., and Radulescu, R. Learning in multi-objective public goods games with non-linear utilities. In 27th European Conference on Artificial Intelligence, pp. 2749-2756. IOS Press, 2024.
+Paquette, P. Diplomacy. URL https://github.com/diplomacy/diplomacy.
+
+Paquette, P., Lu, Y., Bocco, S. S., Smith, M., O-G, S., Kummerfeld, J. K., Pineau, J., Singh, S., and Courville, A. C. No-press diplomacy: Modeling multi-agent gameplay. Advances in Neural Information Processing Systems, 32, 2019.
+Pavan, A., Segal, I., and Toikka, J. Dynamic mechanism design: A Myersonian approach. Econometrica, 82(2):601-653, 2014.
+Perdana, S. and Vielle, M. Making the eu carbon border adjustment mechanism acceptable and climate friendly for least developed countries. Energy Policy, 170:113245, 2022. ISSN 0301-4215. doi: https://doi.org/10.1016/j.enpol.2022.113245. URL https://www.sciencedirect.com/science/article/pii/S0301421522004645.
+Pihl, H. A climate club as a complementary design to the UN Paris Agreement. Policy Design and Practice, 3(1):45-57, 2020.
+Pindyck, R. S. Climate change policy: What do the models tell us? Journal of Economic Literature, 51(3):860-72, September 2013. doi: 10.1257/jel.51.3.860. URL https://www.aeaweb.org/articles?id=10.1257/jel.51.3.860.
+Pörtner, H. O., Roberts, D. C., Adams, H., Adler, C., Aldunce, P., Ali, E., Begum, R. A., Betts, R., Kerr, R. B., Biesbroek, R., et al. Climate change 2022: impacts, adaptation and vulnerability. 2022.
+Rapoport, A. and Chammah, A. Prisoner's Dilemma: A Study in Conflict and Cooperation. University of Michigan Press, 1965.
+Rochedo, P. R., Soares-Filho, B., Schaeffer, R., Viola, E., Szklo, A., Lucena, A. F., Koberle, A., Davis, J. L., Rajão, R., and Rathmann, R. The threat of political bargaining to climate mitigation in brazil. Nature Climate Change, 8(8):695-698, 2018.
+Santos, F. C., Santos, M. D., and Pacheco, J. M. Social diversity promotes the emergence of cooperation in public goods games. Nature, 454(7201):213-216, 2008.
+Schelling, T. C. The Strategy of Conflict: with a new Preface by the Author. Harvard university press, 1980.
+Schmidt, V., Luccioni, A., Teng, M., Zhang, T., Reynaud, A., Raghupathi, S., Cosne, G., Juraver, A., Vardanyan, V., Hernández-García, A., and Bengio, Y. Climategan: Raising climate change awareness by generating images of floods. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022. URL https://openreview.net/forum?id=EZNOb_uNpJk.
+
+Schrittwieser, J., Antonoglou, I., Hubert, T., Simonyan, K., Sifre, L., Schmitt, S., Guez, A., Lockhart, E., Hassabis, D., Graepel, T., Lillicrap, T., and Silver, D. Mastering atari, go, chess and shogi by planning with a learned model. Nature, 588(7839):604-609, dec 2020. doi: 10.1038/s41586-020-03051-4. URL https://doi.org/10.1038%2Fs41586-020-03051-4.
+Shaffer, G. United states-import prohibition of certain shrimp and shrimp products. Am. J. Int'l L., 93:507, 1999.
+Shoham, Y. and Leyton-Brown, K. Multiagent systems: Algorithmic, game-theoretic, and logical foundations. Cambridge University Press, 2008.
+Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., van den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., Dieleman, S., Grewe, D., Nham, J., Kalchbrenner, N., Sutskever, I., Lillicrap, T., Leach, M., Kavukcuoglu, K., Graepel, T., and Hassabis, D. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484-489, January 2016. ISSN 1476-4687. doi: 10.1038/nature16961. URL https://www.nature.com/articles/nature16961.
+Smead, R., Sandler, R. L., Forber, P., and Basl, J. A bargaining game analysis of international climate negotiations. Nature Climate Change, 4(6):442-445, 2014.
+Stott, P., Christidis, N., Otto, F., Sun, Y., Vanderlinden, J., Oldenborgh, G. V., Vautard, R., Storch, H., Walton, P., Yiou, P., and Zwiers, F. Attribution of extreme weather events in the context of climate change. Wiley Interdisciplinary Reviews: Climate Change, 7:23-41, 2016. doi: 10.17226/21852.
+Tavoni, A., Dannenberg, A., Kallis, G., and Löschel, A. Inequality, communication, and the avoidance of disastrous climate change in a public goods game. Proceedings of the National Academy of Sciences, 108 (29):11825-11829, 2011.
+Telser, L. G. A theory of self-enforcing agreements. Journal of business, pp. 27-44, 1980.
+van Oldenborgh, G. J., van der Wiel, K., Kew, S., Philip, S., Otto, F., Vautard, R., King, A., Lott, F., Arrighi, J., Singh, R., and van Aalst, M. Pathways and pitfalls in extreme event attribution. Climatic Change, 166(1):13, May 2021. ISSN 1573-1480. doi: 10.1007/s10584-021-03071-7. URL https://doi.org/10.1007/s10584-021-03071-7.
+
+Vinyals, O., Babuschkin, I., Czarnecki, W. M., Mathieu, M., Dudzik, A., Chung, J., Choi, D. H., Powell, R., Ewalds, T., Georgiev, P., Oh, J., Horgan, D., Kroiss, M., Danihelka, I., Huang, A., Sifre, L., Cai, T., Agapiou, J. P., Jaderberg, M., Vezhnevets, A. S., Leblond, R., Pohlen, T., Dalibard, V., Budden, D., Sulsky, Y., Molloy, J., Paine, T. L., Gulcehre, C., Wang, Z., Pfaff, T., Wu, Y., Ring, R., Yogatama, D., Wünsch, D., McKinney, K., Smith, O., Schaul, T., Lillicrap, T., Kavukcuoglu, K., Hassabis, D., Apps, C., and Silver, D. Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature, 575(7782):350-354, November 2019. ISSN 1476-4687. doi: 10.1038/s41586-019-1724-z. URL https://www.nature.com/articles/s41586-019-1724-z.
+Virtanen, P., Gommers, R., Oliphant, T. E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., van der Walt, S. J., Brett, M., Wilson, J., Millman, K. J., Mayorov, N., Nelson, A. R. J., Jones, E., Kern, R., Larson, E., Carey, C. J., Polat, I., Feng, Y., Moore, E. W., VanderPlas, J., Laxalde, D., Perktold, J., Cimrman, R., Henriksen, I., Quintero, E. A., Harris, C. R., Archibald, A. M., Ribeiro, A. H., Pedregosa, F., van Mulbregt, P., and SciPy 1.0 Contributors. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nature Methods, 17:261-272, 2020. doi: 10.1038/s41592-019-0686-2.
+Wang, X., Ke, L., Qiao, Z., and Chai, X. Large-scale traffic signal control using a novel multiagent reinforcement learning. IEEE Transactions on Cybernetics, 51(1): 174-187, 2021. doi: 10.1109/TCYB.2020.3015811.
+World Bank. The World Bank documents and reports API, 2022. URL https://documents.worldbank.org/en/publication/documents-reports/api.
+Yu, C., Velu, A., Vinitsky, E., Wang, Y., Bayen, A., and Wu, Y. The surprising effectiveness of PPO in cooperative, multi-agent games, 2021. URL https://arxiv.org/abs/2103.01955.
+Yu, L., Sun, Y., Xu, Z., Shen, C., Yue, D., Jiang, T., and Guan, X. Multi-agent deep reinforcement learning for HVAC control in commercial buildings, 2020. URL https://arxiv.org/abs/2006.14156.
+Zenker, A. International Climate Agreements Under Review: The Potential of Negotiation Linkage Between Climate Change and Preferential Free Trade. Springer Nature, 2019.
+
+Zheng, S., Trott, A., Srinivasa, S., Parkes, D. C., and Socher, R. The AI economist: Taxation policy design via two-level deep multiagent reinforcement learning. Science Advances, 8(18):eabk2607, 2022a. doi: 10.1126/sciadv.abk2607. URL https://www.science.org/doi/abs/10.1126/sciadv.abk2607.
+Zheng, S., Trott, A., Srinivasa, S., Parkes, D. C., and Socher, R. The AI economist: Taxation policy design via two-level deep multiagent reinforcement learning. Science advances, 8(18):eabk2607, 2022b.
+
+# A Extended Related Work
+
+Multi-agent aspects of climate change. Previous work has studied the connection between political economy, negotiations, and climate change. Empirical work has found that previous climate summits have had inconsistent or too little impact (Chan et al., 2022; Bakaki, 2022). The impact of social dynamics has been studied in a stylized climate-social model, finding that public perception and institutional responsiveness are important to explain variations in emissions (Moore et al., 2022). The formation of coalitions and agreements under climate negotiations has also been studied from a game-theory perspective (Zenker, 2019). IAMs have also been used to study the impact of political bargaining on the economic burden required to meet climate targets (Rochedo et al., 2018). However, to the best of our knowledge, no work has analyzed the game-theoretic aspects of climate cooperation using machine learning and calibrated IAMs.
+
+Social dilemmas. Our work also falls within the study of social dilemmas: situations in which individually selfish actions lead to poor collective outcomes. These dilemmas are a subset of general-sum games. Prominent examples of social dilemmas include the Iterated Prisoner's Dilemma (Rapoport & Chammah, 1965), where two agents repeatedly decide whether to cooperate or defect, and the Coin Game (Lerer & Peysakhovich, 2018), in which two agents navigate a $3 \times 3$ grid to collect coins that appear in red or blue. Agents receive rewards for collecting any coin but incur penalties when their opponent collects a coin of their own color, creating tension between selfish coin-collecting and the Pareto-optimal "color-aligned" strategy. Another example is the Negotiation Game (Cao et al., 2018), where two agents simultaneously propose how to divide valuable items, each striving to secure items they highly value while risking worse outcomes if both demand large shares of the same limited resources. Furthermore, Diplomacy (Paquette), an adaptation of the classic Diplomacy board game tailored for multi-agent research, requires players to negotiate alliances, coordinate actions, and balance cooperative and adversarial incentives to expand territorial control without overextending themselves.
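+
+The Coin Game's mixed incentives can be captured in a tiny reward function. The exact constants vary across implementations; the values below (+1 for any coin, -2 when the opponent takes your colour) are a common convention, shown here only to illustrate the dilemma:
+
+```python
+def coin_rewards(collector: str, coin_colour: str) -> dict:
+    """Per-step rewards for players 'red' and 'blue' when `collector`
+    picks up a coin of `coin_colour` (illustrative constants)."""
+    other = "blue" if collector == "red" else "red"
+    rewards = {collector: 1.0, other: 0.0}
+    if coin_colour == other:  # collector grabbed the opponent's colour
+        rewards[other] -= 2.0
+    return rewards
+
+# Greedily grabbing the opponent's coin yields +1 for the collector but a
+# net -1 for the pair, which is exactly the social-dilemma tension.
+print(coin_rewards("red", "blue"))
+```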
+
+Strategic behavior and climate change. Game theory has long studied the collective behavior of self-interested agents, e.g., the tragedy of the commons (Hardin, 1968) and the negotiation of agreements among agents with conflicting and common goals (Schelling, 1980). Several works have analyzed international negotiations on climate collaboration and on agreements linking economic activity to climate efforts, e.g., imposing tariffs on countries that do not mitigate sufficiently. Experimental research highlights the collective-action dynamics of climate cooperation. Uncertainty about the likelihood of catastrophic scenarios (Barrett & Dannenberg, 2012; Milinski et al., 2008) and interagent inequality (Dannenberg et al., 2015) have been found to hinder cooperation, whereas interagent communication (Loschel et al., 2011) and early mitigation commitments (Dannenberg et al., 2015) facilitate it. Climate negotiations have also been studied using mathematical games, e.g., coordination games or prisoner's dilemmas (DeCanio & Fremstad, 2013). However, the reliability of such simplified models for real-world policy has been called into question. In particular, these games lack (i) a multilateral, rather than bilateral, setting, (ii) strategic behavior from agents with multiple, possibly conflicting, goals, (iii) evolving climate dynamics and changing agent behavior that lead to non-equilibrium outcomes, and (iv) heterogeneity among agents (Madani, 2013).
+
Subsequent work has gone beyond equilibrium analysis by modeling climate negotiations as a bargaining game in which agents learn, albeit in a highly simplified manner (Smead et al., 2014). Climate scenarios can then be studied through the mitigation behavior that emerges from such learning games, in which regions can cooperate or compete (Greeven et al., 2016). Furthermore, other work has studied the difficulty of long-term climate collaboration (Carney, 2015), as well as potential mechanisms for overcoming the associated issues (Nordhaus, 2015).
+
Multi-agent reinforcement learning (MARL). MARL has emerged in recent years as an attractive framework for training utility-maximizing agents that may communicate, cooperate, or compete. This is a rich area of research that intersects machine learning with game theory, economics, and other domains (Shoham & Leyton-Brown, 2008). Games can be classified as cooperative, competitive, or a mixture of both. In fully cooperative games, agents learn to work together, e.g., to lower the carbon-intensive power consumption of heating, ventilation and air conditioning (HVAC) systems (Mai et al., 2024; Hanumaiah & Genc, 2021; Yu et al., 2020), or in the game of Hanabi (Yu et al., 2021). On the other hand, in a competitive game, agents may need to find strategies to defeat opponents, e.g., in Diplomacy (Paquette et al., 2019) and Go (Schrittwieser et al., 2020). However, many games are neither purely competitive nor purely cooperative (Duque et al., 2024). These are called mixed-motive games, since the incentives of agents are partly misaligned. A common and extensively studied example is public goods games (PGG), which capture the social dilemma between collaboration (contributing to a common pot), the Pareto-optimal outcome, and free riding (keeping the resource for oneself) (Anderson et al., 1998; Santos et al., 2008; Orzan et al., 2024). Public goods games have also been studied in the context of climate change (Tavoni et al., 2011).
+
Beyond abstract games, MARL has been increasingly applied to real-world scenarios where cooperation and competition are intertwined. For instance, Wang et al. (2021) applied MARL to large-scale traffic signal control, balancing individual intersection throughput with global traffic flow. In energy systems, May & Huang (2023) used MARL to model and optimize peer-to-peer prosumer markets, where agents must trade off self-interest and collective grid stability. Similarly, Hou et al. (2025) introduced InvestESG, a MARL benchmark in which agents must navigate the trade-off between short-term profit goals and long-term climate resilience by strategically investing in mitigation, greenwashing, and adaptation measures.
+
Recent work has explored the link between MARL and negotiation (Cao et al., 2018), as well as cooperation in social dilemmas and collaboration on climate change (Jaques et al., 2019; Chelarescu, 2021; Le Glaeu et al., 2022). As such, MARL is an attractive framework for analyzing climate outcomes while taking strategic behavior into account. However, previous work has largely considered highly stylized environments and has not yet been applied to rich, calibrated climate-economic simulations; our work fills this gap.
+
+# B Parameters and variables
+
+Tables 2, 3, 4, 5 and 7 list all (calibrated) parameters and variables.
+Table 2. World-state variables. Global type variables correspond to the entire world, whereas regional type variables correspond to each region. Endogenous variables are those which are affected by the agent actions, whereas exogenous variables are those that are predetermined and not affected by agent actions. Note that the values of endogenous variables can vary across steps in a predetermined manner. Notation: indices are separated from subscripts referring to a name by semicolons (;). For instance, the parameter $\theta_{1}$ varies in time $t$ and by region $i$ , which is denoted as $\theta_{1:i:t}$ .
+
| Variable | Type | Symbol | Description |
| --- | --- | --- | --- |
| Carbon mass | Global, endogenous | $M_t = [M_t^{\mathrm{AT}}, M_t^{\mathrm{UP}}, M_t^{\mathrm{LO}}]$ | A three-dimensional vector that indicates the average carbon accumulation in the atmosphere, upper oceans, and lower oceans. |
| Temperature | Global, endogenous | $T_t = [T_t^{\mathrm{AT}}, T_t^{\mathrm{LO}}]$ | A two-dimensional vector that indicates the average temperature of the atmosphere and the lower ocean. |
| Population | Regional, exogenous | $L_{i,t}$ | Population and labor in a region. |
| Technology | Regional, exogenous | $A_{i,t}$ | Technology factor in the production function of a region. |
| Capital | Regional, endogenous | $K_{i,t}$ | Total capital accumulated by a region. |
| Carbon intensity of economic activity | Regional, exogenous | $\sigma_{i,t}$ | A scalar coefficient that gives the emissions resulting from economic production. |
| Balance of trade | Regional, endogenous | $D_{i,t}$ | Surplus or deficit from international trade activities. |
| Cost of mitigation efforts | Global, endogenous | $\theta_{1;i,t}$ | An estimate of the cost of mitigation efforts. |
| Emission due to land use | Regional, exogenous | $E_t^{\mathrm{Land}}$ | Carbon emission from land use in a specific region. |
+
+Table 3. Agent-action variables.
+
| Variable | Symbol | Description |
| --- | --- | --- |
| Savings rate | $s_{i,t}$ | The fraction of output production to be invested in capital. |
| Mitigation rate | $\mu_{i,t}$ | The fraction of mitigation efforts by a region. |
| Import tariffs | $\tau_{i,j,t}$ | The fraction of imports that is converted to tariff revenue. |
| Export limits | $p^{x}_{i,t}$ | The fraction of domestic production that regions are willing to export. |
| Import bids | $b_{i,j,t}$ | The amount of production each region is willing to import from other regions. |
+
+# C The Activity Component: Climate, Economics, Trade, and Tariffs
+
+# C.1 Climate and Economic Dynamics
+
+We now describe the RICE-N dynamics developed from DICE and RICE models by (Nordhaus, 2018; Kellett et al., 2019) that govern the evolution of the world state from time $t$ to $t + 1$ for the different regions. Note that variables without an agent index are global quantities.
+
+Table 4. Agent-specific constants.
+
| Variable | Symbol | Description |
| --- | --- | --- |
| Initial population | $L_{0;i}$ | The initial population for a specific region. |
| Population convergence target | $L_{a;i}$ | The estimated convergence population for a specific region. |
| Population convergence rate | $l_{g;i}$ | How fast the current population converges. |
| Initial capital | $K_{0;i}$ | The initial capital for a specific region. |
| Initial carbon intensity | $\sigma_{0;i}$ | The initial carbon intensity for a specific region. |
| Carbon intensity parameters | $g_{\sigma;i}$ and $\delta_{\sigma;i}$ | The decay speed of the carbon intensity. |
| Initial technology factor | $A_{0;i}$ | The initial technology factor for a specific region. |
| Technology factor parameters | $g_{A;i}$ and $\delta_{A;i}$ | The update pattern of the technology factor. |
| Initial land use emission | $E_{L0;i}$ | The initial land use emission for a specific region. |
| Land use emission parameter | $\delta_{EL;i}$ | The depreciation rate for the land use emission in a specific region. |
+
+Table 5. Global constants.
+
| Variable | Symbol | Description |
| --- | --- | --- |
| Capital elasticity of production | $\gamma$ | The contribution of capital versus population to the economy. |
| Armington substitution parameter | $\lambda$ | How substitutable consumption goods from different regions are. |
| Long-term welfare discount rate | $\rho$ | How much short-term welfare is weighted versus long-term welfare. |
| Capital depreciation rate | $\Phi_K$ | The capital depreciation rate. |
| Backstop technology | $p_b$ | Price of a backstop technology that can remove carbon dioxide from the atmosphere. |
| Backstop technology parameter | $\delta_{pb}$ | The decay speed of the cost of the backstop technology. |
| Mitigation efficiency parameter | $\theta_2$ | The efficiency-loss component of mitigation. |
| Domestic share parameter | $\psi^{\mathrm{dom}}$ | The relative preference for domestic goods. |
| Foreign share parameter | $\psi^{\mathrm{for}}$ | The relative preference for foreign goods. |
+
+Carbon mass. The total carbon mass in the climate system is given by:
+
+$$
+M _ {t + 1} = \Phi_ {M} M _ {t} + B _ {M} \sum_ {i} E _ {i, t}, \tag {6}
+$$
+
+$$
E_{i,t} = E_{t}^{\text{Land}} + \sigma_{i,t} (1 - \mu_{i,t}) Y_{i,t}, \tag{7}
+$$
+
+$$
+M _ {t} \doteq \left[ \begin{array}{l l l} M _ {t} ^ {\mathrm {A T}} & M _ {t} ^ {\mathrm {U P}} & M _ {t} ^ {\mathrm {L O}} \end{array} \right] ^ {\top} \in \mathbb {R} ^ {3}, \tag {8}
+$$
+
+$$
+\Phi_ {M} \doteq \left[ \begin{array}{c c c} \zeta_ {1 1} & \zeta_ {1 2} & 0 \\ \zeta_ {2 1} & \zeta_ {2 2} & \zeta_ {2 3} \\ 0 & \zeta_ {3 2} & \zeta_ {3 3} \end{array} \right], \tag {9}
+$$
+
+$$
+B _ {M} \doteq \left[ \begin{array}{c} \xi_ {2} \\ 0 \\ 0 \end{array} \right]. \tag {10}
+$$
+
This describes a three-reservoir model of the global carbon cycle, in which $M^{\mathrm{AT}}$ is the average mass of carbon in the atmosphere, $M^{\mathrm{UP}}$ the average mass of carbon in the upper ocean, and $M^{\mathrm{LO}}$ the average mass of carbon in the deep or lower ocean; see Figure 5. $\Phi_M$ is the Markov transition matrix describing how carbon transfers between the reservoirs. $B_M$ describes how carbon emissions enter the reservoirs.
+
+Global temperature. Ultimately, increasing carbon mass leads to rising temperatures:
+
+$$
+T _ {t + 1} = \Phi_ {T} T _ {t} + B _ {T} F _ {t}, \tag {11}
+$$
+
+$$
+T _ {t} \doteq \left[ \begin{array}{l l} T _ {t} ^ {\mathrm {A T}} & T _ {t} ^ {\mathrm {L O}} \end{array} \right] ^ {\top} \in \mathbb {R} ^ {2}, \tag {12}
+$$
+
+$$
+F _ {t} = F _ {2 \times} \log_ {2} \left(\frac {M _ {t} ^ {\mathrm {A T}}}{M ^ {\mathrm {A T} , 1 7 5 0}}\right), \tag {13}
+$$
+
+$$
+\Phi_ {T} \doteq \left[ \begin{array}{l l} \phi_ {1 1} & \phi_ {1 2} \\ \phi_ {2 1} & \phi_ {2 2} \end{array} \right], \tag {14}
+$$
+
+$$
+B _ {T} \doteq \left[ \begin{array}{l} \xi_ {1} \\ 0 \end{array} \right]. \tag {15}
+$$
+
Similar to the carbon mass dynamics, the energy-balance model has two layers; see Figure 4. $T^{\mathrm{AT}}$ is the combined average temperature of the atmosphere, land surface, and upper ocean (simply referred to as the "atmospheric layer" hereafter). $T^{\mathrm{LO}}$ is the temperature of the lower ocean. $\Phi_T$ is the Markov transition matrix describing how heat transfers between the layers. $B_T$ describes how the radiative forcing induced by atmospheric carbon contributes to the temperature increase.
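To make these two linear recursions concrete, they can be sketched in a few lines of NumPy. All matrix entries and state values below are illustrative placeholders, not the calibrated RICE-N parameters:

```python
import numpy as np

# Sketch of Equations 6-15: one step of the carbon-cycle and energy-balance
# recursions. All numerical values are made-up placeholders.

# Carbon cycle: three reservoirs (atmosphere, upper ocean, lower ocean).
Phi_M = np.array([
    [0.88, 0.196, 0.0],      # zeta_11, zeta_12, 0
    [0.12, 0.797, 0.0015],   # zeta_21, zeta_22, zeta_23
    [0.0,  0.007, 0.9985],   # 0, zeta_32, zeta_33
])
B_M = np.array([0.2727, 0.0, 0.0])  # xi_2: emissions enter the atmosphere only

# Energy balance: two layers (atmospheric layer, lower ocean).
Phi_T = np.array([
    [0.8718, 0.0088],
    [0.025,  0.975],
])
B_T = np.array([0.1005, 0.0])  # xi_1: forcing heats the atmospheric layer
F_2x = 3.6813        # forcing from a doubling of CO2 (placeholder)
M_AT_1750 = 588.0    # pre-industrial atmospheric carbon (placeholder)

def climate_step(M, T, total_emissions):
    """M_{t+1} = Phi_M M_t + B_M E_t and T_{t+1} = Phi_T T_t + B_T F_t."""
    M_next = Phi_M @ M + B_M * total_emissions
    forcing = F_2x * np.log2(M[0] / M_AT_1750)   # Equation 13
    T_next = Phi_T @ T + B_T * forcing
    return M_next, T_next

M = np.array([850.0, 460.0, 1740.0])  # [M_AT, M_UP, M_LO]
T = np.array([1.1, 0.3])              # [T_AT, T_LO]
M1, T1 = climate_step(M, T, total_emissions=40.0)
```

Because the current atmospheric carbon exceeds the pre-industrial level, the forcing in this sketch is positive and the atmospheric-layer temperature rises.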
+
+Output production. The production in a region is given by the total factor productivity (TFP) (Comin, 2010) formula:
+
+$$
+Y _ {i, t} = A _ {i, t} K _ {i, t} ^ {\gamma} L _ {i, t} ^ {1 - \gamma}. \tag {41}
+$$
+
Production depends on three factors: total factor productivity ("technology") $A_{i,t}$, capital $K_{i,t}$, and labor $L_{i,t}$. This production function is common in the economic literature and used in the DICE/RICE models. The capital elasticity $\gamma \in [0,1]$ captures the relative contributions of capital and labor.
+
Population. The number of people in a region, denoted $L_{i,t}$, evolves as:
+
+$$
+L _ {i, t + 1} = L _ {i, t} \left(\frac {1 + L _ {a ; i}}{1 + L _ {i , t}}\right) ^ {l _ {g; i}}. \tag {39}
+$$
+
There are two parameters, $L_{a;i}$ and $l_{g;i}$: $L_{a;i}$ represents the convergence population of region $i$, and $l_{g;i}$ determines how fast the population $L_{i,t}$ converges to $L_{a;i}$. Please refer to Appendix H for a more detailed analysis and the calibration procedure.
+
+Level of technology. The technology factor $A_{t}$ describes how efficient production is, i.e., how many units of output a region achieves given fixed capital and labor:
+
+$$
+A _ {i, t + 1} = \left(e ^ {\eta} + g _ {A; i} e ^ {- \delta_ {A; i} \Delta (t - 1)}\right) A _ {i, t}. \tag {40}
+$$
+
Here, $\eta$ represents the long-term economic growth rate, which is usually larger than 0, $g_{A;i}$ the short-term component of growth, and $\delta_{A;i}$ the speed of decay of the short-term growth factor. $\Delta$ is the time difference between steps. We use $\eta = 0.33\%$ as in (Nordhaus, 2018).
+
+Capital. The amount of capital evolves as:
+
+$$
+\begin{array}{l} \Phi_ {K} \doteq (1 - \delta_ {K}) ^ {\Delta}, (16) \\ K _ {i, t + 1} = \Phi_ {K, t} K _ {i, t} + \Delta \left(1 - a _ {1} T _ {t} ^ {\mathrm {A T}} - a _ {2} \left(T _ {t} ^ {\mathrm {A T}}\right) ^ {2}\right) (17) \\ \times \left(1 - \theta_ {1; i, t} \mu_ {i, t} ^ {\theta_ {2}}\right) Y _ {i, t} s _ {i, t}. (18) \\ \end{array}
+$$
+
The evolution of capital has two parts. The first is capital inherited from the previous period, subject to depreciation. In the second, $s_{i,t}$ is a control variable representing the investment/savings rate (as a fraction of production). That is, as a base amount, the economy invests/saves a total of $Y_{i,t} s_{i,t}$, which yields new capital. This base amount is further modified by two multipliers: the damage function and the mitigation/abatement cost, which are discussed below.
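A minimal sketch of this capital recursion, with the damage and abatement multipliers applied to new investment. All parameter values (`delta_K`, `a1`, `a2`, `theta1`, `theta2`) are made-up placeholders rather than the calibrated ones:

```python
# Sketch of the capital update (Equations 16-18): inherited capital plus new
# investment, scaled down by climate damages and mitigation costs.
# All parameter values are illustrative placeholders.

def capital_step(K, Y, s, mu, T_AT, *, delta_K=0.1, Delta=5,
                 a1=0.0, a2=0.00236, theta1=0.05, theta2=2.6):
    Phi_K = (1.0 - delta_K) ** Delta           # depreciation over one step
    damage = 1.0 - a1 * T_AT - a2 * T_AT ** 2  # damage multiplier (Equation 19)
    abatement = 1.0 - theta1 * mu ** theta2    # mitigation-cost multiplier
    return Phi_K * K + Delta * damage * abatement * Y * s

K_next = capital_step(K=100.0, Y=20.0, s=0.25, mu=0.3, T_AT=1.2)
```

Higher mitigation rates shrink the abatement multiplier, so, holding everything else fixed, a region that mitigates more accumulates less new capital.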
+
Damage function. The climate damage function represents the economic damage due to climate change, e.g., due to increases in the atmospheric temperature $T_{t}^{\mathrm{AT}}$. That is, in Equation 37, the fraction of new capital is modified by the damage function
+
+$$
+1 - a _ {1} T _ {t} ^ {\mathrm {A T}} - a _ {2} \left(T _ {t} ^ {\mathrm {A T}}\right) ^ {2}, \tag {19}
+$$
+
following (Nordhaus, 2015). That is, higher temperatures lead to less new capital. Similarly, $1 - \theta_{1;i,t}\mu_{i,t}^{\theta_2}$ is the fraction of new capital remaining after accounting for carbon emission mitigation: more mitigation (higher $\mu_{i,t}$) means (dirty) production needs to be lowered, hence less new capital.
+
+Mitigation (abatement) cost. Following (Kellett et al., 2019), for a mitigation rate $\mu_{i,t}$ , the mitigation cost is
+
+$$
+\theta_ {1; i, t} \mu_ {i, t} ^ {\theta_ {2}} Y _ {i, t} s _ {i, t}, \tag {20}
+$$
+
+where $\theta_{1;i,t}$ is given by Equation 38. This represents the loss in capital growth due to a fraction of production being used for mitigation.
+
+
+Figure 4. The two-reservoir temperature model.
+
+
+Figure 5. The three-reservoir carbon mass model.
+
+Carbon intensity of economic activity. A critical part of the model is the interaction between the climate and economic parts. Specifically, the RICE model describes how production leads to carbon emissions:
+
+$$
E_{t}^{\text{Land}} = E_{L0} \cdot \left(1 - \delta_{EL}\right)^{t-1}, \tag{21}
+$$
+
+$$
E_{i,t} = E_{t}^{\text{Land}} + \sigma_{i,t} A_{i,t} (1 - \mu_{i,t}) Y_{i,t} \tag{22}
+$$
+
+$$
+\sigma_ {i, t + 1} = \sigma_ {i, t} e ^ {- g _ {\sigma ; i} (1 - \delta_ {\sigma ; i}) ^ {\Delta (t - 1)} \Delta}. \tag {23}
+$$
+
Here $E_{t}^{\text{Land}}$ is the carbon emission due to (changes in) land use, $E_{L0}$ is the land-use emission in the base year, and $\delta_{EL}$ is the speed of decrease of land-use emissions. The rates $0 < \delta_{EL} < 1$, $0 < \delta_{L0} < 1$ are free parameters. Due to a lack of data, $E_{t}^{\text{Land}}$ is set to be the same for each region.
+
$E_{i,t}$ is the total carbon emission: $E_t^{\text{Land}}$ is the emission from natural sources, while $E_{i,t} - E_t^{\text{Land}}$ is the emission caused by economic activity. $\sigma_{i,t}A_{i,t}$ is the effective carbon intensity of economic activity: a higher technology factor leads to higher emissions, but this can be modulated by a lower $\sigma$ (which can be thought of as the degree of "clean" production). $\mu_{i,t} \in [0,1]$ is a control variable called the abatement (ratio), which represents the proportion of economic output devoted to reducing carbon emissions. Furthermore, we have two parameters, $g_{\sigma}$ and $\delta_{\sigma}$, that are fitted to data; $g_{\sigma}$ is the rate of decrease of the carbon intensity.
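The three emission-related recursions above can be sketched as follows; all numbers are illustrative placeholders, not calibrated values:

```python
import numpy as np

# Sketch of Equations 21-23: land-use emission decay, production emissions,
# and the carbon-intensity decay. Values are made-up placeholders.

def land_emissions(E_L0, delta_EL, t):
    return E_L0 * (1.0 - delta_EL) ** (t - 1)    # Equation 21

def emissions(E_land, sigma, A, mu, Y):
    return E_land + sigma * A * (1.0 - mu) * Y   # Equation 22

def sigma_step(sigma, g_sigma, delta_sigma, t, Delta=5):
    decay = g_sigma * (1.0 - delta_sigma) ** (Delta * (t - 1)) * Delta
    return sigma * np.exp(-decay)                # Equation 23

E_land = land_emissions(E_L0=2.6, delta_EL=0.115, t=3)
E = emissions(E_land, sigma=0.35, A=5.0, mu=0.3, Y=20.0)
sigma_next = sigma_step(sigma=0.35, g_sigma=0.0152, delta_sigma=0.001, t=3)
```

The intensity $\sigma$ decays toward zero over time, so even without mitigation, production becomes gradually cleaner in this sketch.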
+
+# C.2 Trade
+
We now describe the international trade dynamics and the resulting regional consumption and utilities. Regions trade by exporting their own consumption goods and importing other regions' consumption goods at a fixed unit price.
+
Agent actions. Each region $i$ at time $t$ must first specify a desired basket of consumption goods $\pmb{b}_{i,t} = [b_{i,1,t},\dots,b_{i,k,t}]$ that it is willing to import from the other regions. These desired imports form a matrix of bids $B_{t}$, where the entry $b_{i,j,t} \geq 0$ is the amount of goods region $i$ is willing to import from region $j$ at time $t$. Regions also set an upper bound $p_{i,t}^{x} \in [0,1]$ on the proportion of their own consumption goods that they are willing to export.
+
Tariffs. Regions can also choose to impose import tariffs on other regions. We denote the import tariff imposed by region $i$ on region $j$ by $\tau_{i,j,t} \in [0,1]$. If region $i$ imposes such a tariff on region $j$, region $i$ consumes
+
+$$
+C _ {i, j, t} = x _ {i, j, t} \left(1 - \tau_ {i, j, t}\right), \tag {24}
+$$
+
+and $\tau_{i,j,t}x_{i,j,t}$ is added to a reserve fund specific to that region.
+
+Consumption. Consumption of domestic goods $C_{i,i,t}$ is determined according to gross output, the savings rate and exports:
+
+$$
+C _ {i, i, t} = \left(1 - s _ {i, t}\right) Q _ {i, t} - \sum_ {j \neq i} x _ {j, i, t}. \tag {25}
+$$
+
The aggregated consumption $C_{i,t}$ at time $t$ for region $i$ is given by the Armington elasticity model (Lessmann et al., 2009) as follows:
+
+$$
+C _ {i, t} = \left(\psi^ {d o m} \left(C _ {i, i, t}\right) ^ {\lambda} + \sum_ {j \neq i} \psi^ {f o r} \left(C _ {i, j, t}\right) ^ {\lambda}\right) ^ {\frac {1}{\lambda}}. \tag {26}
+$$
+
+# C.3 Negotiations
+
+Here, we outline the extra state representations and actions that are introduced when either of our two negotiation protocols is active in the simulation. These variables are used in the negotiation phases, which are detailed in Section 4, on top of the already available variables in Tables 2, 3, 4, 5 and 7.
+
+Table 6 presents the agent actions required by the Bilateral Negotiation and Basic Club protocols. These actions are additionally incorporated into the observation space of the agents at each timestep as bilateral observations, meaning that an agent will observe its own negotiation action and the action of other agents when it is part of that negotiation. In addition, each agent observes an additional indicator that signals the current phase of the environment (proposal, evaluation, or no negotiation step). Lastly, agents also privately observe the outcome of their negotiation: their minimum mitigation rate or club mitigation rate $\mu_{c}$ .
+
We note that other negotiation protocols may modify these action and observation spaces as needed.
+
Table 6. Agent-action variables introduced when one of our negotiation protocols is enabled in RICE-N. These variables are additionally added as bilateral observations to the observation space. In the Basic Club protocol, however, the request action is excluded, and each agent proposes a single mitigation rate to all agents. The result of these negotiation actions sets an agent's minimum mitigation rate or club mitigation rate $\mu_{c}$.
+
| Variable | Symbol | Description |
| --- | --- | --- |
| Proposed mitigation rate | $\mu_{i,j,t}$ / $\mu_{i,t}$ | The minimum mitigation rate proposed by agent $i$ to agent $j$. In Basic Club, a single proposal is made by each agent to all other agents. |
| Requested mitigation rate | $\mu_{i,j,t}$ | The minimum mitigation rate requested from agent $j$ by agent $i$. Only used in the Bilateral Negotiation protocol and not in Basic Club. |
| Proposal decisions | $e_{i,j,t}$ | The decision of agent $i$ on the proposal and request by agent $j$. |
+
+# D RICE-N dynamics
+
+At a high-level, Equations 35 and 36 capture climate dynamics (temperature and carbon mass), while Equations 37, 39, 40, and 41 capture economic dynamics. Finally, Equation 42 captures the carbon-intensity of production, providing a key link between the climate and economic sectors.
+
+# E Computational Complexity
+
The computational complexity of our MARL approach is driven by the number of regions $n$ (i.e., agents). Since each agent's action space scales linearly with $n$, the total action space across all agents grows quadratically ($\mathcal{O}(n^2)$). However, the number of agents in our setting is naturally bounded by the number of countries on the planet. Currently, training 27 agents for 100 thousand episodes takes approximately 3 hours on a 30-CPU cluster. Future efforts will be directed at more efficient implementations using JAX-based acceleration and model parallelism to improve runtime, which will enable large-scale sensitivity analyses and experiments.
+
+# F Creating a 27-Region Simulation
+
+We feature $n = 27$ fictitious regions in our simulation. These are inspired by merging and splitting real-world countries, but are not exactly the same as real-world regions.
+
We used real data from the World Bank API (WorldBank, 2022), e.g., GDP, capital stock, population, and $\mathrm{CO}_{2}$-equivalent ($\mathrm{CO}_{2}$eq) emissions. Furthermore, the World Bank groups countries into regions, including Sub-Saharan Africa, South Asia, North America, the Middle East and North Africa, Latin America and the Caribbean, Europe and Central Asia, and East Asia and Pacific. In each region, the different countries (or sub-regions) are classified into 4 income groups: high income, upper middle income, lower middle income, and low income.

Algorithm 1. Activity Component (implemented by `Climate_and_economy_simulation_step()`). Note that we only list input state variables and omit model parameters.

    Require: exogenous emissions, land emissions, intensity, production factor, labor, capital,
             previous global temperature, previous government balance
    Require: actions: mitigation rates, saving rates, tariffs, export rate limit, desired imports
    for each region do
        mitigation cost <- f(intensity)                                   # Equation 38
        damages <- f(previous global temperature)                         # Equation 19
        abatement cost <- f(mitigation rate, mitigation cost)             # Equation 20
        production <- f(production factor, capital, labor)                # Equation 41
        gross output <- f(damages, abatement cost, production)            # Equation 41
        government balance <- f(interest rate, previous government balance)
        investment <- f(saving rate, gross output)                        # Using Equation 37
        scaled imports <- f(gross output, desired imports)                # Equation 55
        debt ratio <- f(previous government balance)                      # Equation 56
        scaled imports <- f(scaled imports, debt ratio)                   # Equation 57
    end for
    for each region do
        max potential exports <- f(gross output, investment, export rate limit)  # Equation 58
        scaled imports <- f(scaled imports, max potential exports)        # Equation 59
    end for
    for each region do
        tariffed imports, tariff revenue <- f(scaled imports, tariffs)    # Equation 24
        domestic consumption <- f(savings, gross output, scaled imports)  # Equation 25
        aggregate consumption <- f(domestic consumption, tariffed imports) # Equation 26
        utility <- f(labor, aggregate consumption)                        # Equation 2
        government balance <- f(imports, exports)                         # Equation 60
    end for
    temperature <- f(previous temperature, previous carbon mass, exogenous emissions)
    carbon mass <- f(previous carbon mass, intensity, mitigation rate, production, land emissions)
    for each region do
        capital <- f(capital, investment)                                 # Equation 37
        labor <- f(labor)                                                 # Equation 39
        production factor <- f(capital)                                   # Equation 40
        carbon intensity <- f(carbon intensity)                           # Equation 42
    end for
+
+Merging regions. We assume the GDP, capital stock, and population for the regions are additive. We also assume the gross $\mathrm{CO}_{2}$ eq emissions across the regions are additive. Thus, we have
+
+$$
+K _ {m} = \sum_ {i} K _ {i}, \tag {27}
+$$
+
+$$
+L _ {m} = \sum_ {i} L _ {i}, \tag {28}
+$$
+
+$$
Y_{m} = \sum_{i} Y_{i}, \quad \text{where} \quad Y_{i} := A_{i} K_{i}^{\gamma} L_{i}^{1-\gamma}, \tag{29}
+$$
+
+$$
+A _ {m} = \frac {Y _ {m}}{K _ {m} ^ {\gamma} L _ {m} ^ {1 - \gamma}}, \tag {30}
+$$
+
+$$
+\sigma_ {m} = \frac {\sum_ {i} \sigma_ {i} Y _ {i}}{Y _ {m}}. \tag {31}
+$$
+
+Note that the production function is not scale-invariant:
+
+$$
+Y _ {t} = \left(A _ {t} K _ {t}\right) ^ {\gamma} \left(A _ {t} L _ {t}\right) ^ {1 - \gamma} \tag {32}
+$$
+
+$$
+c \cdot Y _ {t} = \left(c \cdot A _ {t} K _ {t}\right) ^ {\gamma} \left(c \cdot A _ {t} L _ {t}\right) ^ {1 - \gamma} \tag {33}
+$$
+
+$$
+\neq \left(c \cdot A _ {t}\right) \left(c \cdot K _ {t}\right) ^ {\gamma} \left(c \cdot L _ {t}\right) ^ {1 - \gamma}, \quad \forall c > 0. \tag {34}
+$$
+
Hence, one cannot obtain the technology factor of a merged region by simply adding the individual technology levels. Rather, the combined technology factor is imputed from the combined production, labor, and capital.
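A small numeric check of the merging rules (Equations 27-31), with fabricated regional data: the imputed $A_m$ reproduces the merged output exactly, while a naive sum of technology levels would not.

```python
import numpy as np

# Numeric illustration of Equations 27-31: merging two regions. GDP, capital,
# labor, and emissions are added; the merged technology factor is imputed
# from the merged aggregates. All numbers are fabricated.

gamma = 0.3
A = np.array([3.0, 5.0])
K = np.array([10.0, 40.0])
L = np.array([100.0, 50.0])
sigma = np.array([0.5, 0.3])

Y = A * K ** gamma * L ** (1 - gamma)             # regional production (Eq. 29)
K_m, L_m, Y_m = K.sum(), L.sum(), Y.sum()         # additive quantities (Eqs. 27-29)
A_m = Y_m / (K_m ** gamma * L_m ** (1 - gamma))   # imputed, not summed (Eq. 30)
sigma_m = (sigma * Y).sum() / Y_m                 # production-weighted (Eq. 31)

# The production function is not additive in A: by construction, only the
# imputed A_m reproduces the merged output.
Y_check = A_m * K_m ** gamma * L_m ** (1 - gamma)
```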
+
+$$
+T _ {t + 1} = \Phi_ {T} T _ {t} + B _ {T} \left(F _ {2 \times} \log_ {2} \left(\frac {M _ {t} ^ {\mathrm {A T}}}{M ^ {\mathrm {A T} , 1 7 5 0}}\right) + F _ {t} ^ {\mathrm {E X}}\right), \tag {35}
+$$
+
+$$
M_{t+1} = \Phi_{M} M_{t} + B_{M} \left( \sum_{i} \sigma_{i,t} \left(1 - \mu_{i,t}\right) Y_{i,t} + E_{t}^{\text{Land}} \right), \tag{36}
+$$
+
+$$
+K _ {i, t + 1} = \Phi_ {K, i} K _ {i, t} + \Delta \left(1 - a _ {1} T _ {t} ^ {\mathrm {A T}} - a _ {2} \left(T _ {t} ^ {\mathrm {A T}}\right) ^ {2}\right) \left(1 - \theta_ {1; i, t} \mu_ {i, t} ^ {\theta_ {2}}\right) Y _ {i, t} s _ {i, t}, \tag {37}
+$$
+
+$$
+\theta_ {1; i, t} = \frac {p _ {b}}{1 0 0 0 \cdot \theta_ {2}} \left(1 - \delta_ {p b}\right) ^ {t - 1} \cdot \sigma_ {i, t}, \tag {38}
+$$
+
+$$
+L _ {i, t + 1} = L _ {i, t} \left(\frac {1 + L _ {a ; i}}{1 + L _ {i , t}}\right) ^ {l _ {g; i}}, \tag {39}
+$$
+
+$$
+A _ {i, t + 1} = \left(e ^ {\eta} + g _ {A; i} e ^ {- \delta_ {A; i} \Delta (t - 1)}\right) A _ {i, t}, \tag {40}
+$$
+
+$$
+Y _ {i, t} = A _ {i, t} K _ {i, t} ^ {\gamma} L _ {i, t} ^ {1 - \gamma}, \tag {41}
+$$
+
+$$
+\sigma_ {i, t + 1} = \sigma_ {i, t} e ^ {- g _ {\sigma ; i} (1 - \delta_ {\sigma ; i}) ^ {\Delta (t - 1)} \Delta}. \tag {42}
+$$
+
Splitting large regions. To avoid huge economies that dominate the fictitious world, we split large economies into pieces based on predetermined fractions $c_{i}$ and randomly sampled $A_{i}$:
+
+$$
+\sum_ {i} c _ {i} = 1, \tag {43}
+$$
+
+$$
+L _ {i} = c _ {i} L _ {m}, \tag {44}
+$$
+
+$$
+Y _ {i} = c _ {i} Y _ {m}, \tag {45}
+$$
+
+$$
+K _ {i} = \frac {Y _ {i}}{A _ {i} L _ {i} ^ {1 - \gamma}}, \tag {46}
+$$
+
+$$
+\sigma_ {i} = \sigma_ {m}. \tag {47}
+$$
+
+# G Welfloss
+
Welfloss, $w$, stands for the loss of welfare due to imposed tariffs (Nordhaus, 2015). It relies on $wl$, the welfare loss per unit of tariff, which Nordhaus calibrates to 0.4:
+
+$$
wl = 0.4 \tag{48}
+$$
+
+$$
+w _ {i, t} = 1 - w l \sum_ {j} \frac {b _ {i , j , t}}{Y _ {i , t}} \tau_ {i, j, t} \tag {49}
+$$
+
+# H Model Calibration
+
+The structural parameters of the RICE-N simulation were calibrated to meet the following objectives:
+
1. Temperatures match the real data in different versions of RICE-N with 3, 7, 20, 27, and 189 regions, under $0\%$ and $100\%$ mitigation.
2. The optimistic-pessimistic temperature outcomes fit the projections of the Shared Socioeconomic Pathways in the IPCC Sixth Assessment Report (Pörtner et al., 2022) ($2^{\circ}\mathrm{C}$-$5^{\circ}\mathrm{C}$ increase by the year 2100). In the pessimistic case, each region optimizes its own target without negotiation or direct cooperation. In the optimistic case, regions negotiate with each other using the baseline bilateral negotiation protocol. Note also that in the extremely pessimistic case, in which regions ignore climate change entirely and always choose $0\%$ mitigation and $100\%$ savings, the temperature increase reaches approximately $7^{\circ}\mathrm{C}$ by the year 2100.
+
+The parameters that we estimated and the corresponding estimation methods are listed below:
+
+- The dynamic parameters for total factor productivity $A$ : $g_{A}$ and $\delta_{A}$ .
+- The capital $K$ : for the regions whose capital data is not available, we use a KNN regressor (Buitinck et al., 2013) to estimate it.
+- The dynamic parameters for population $L$ : $l_g$ ; similarly, for the regions whose convergence population data is not available, we use a KNN regressor to estimate it.
- The initial carbon intensity $\sigma_0$: for the regions whose emission data is not available, we use a KNN regressor to estimate it.
+- KNN regressor: Because all regions have GDP and population data, we use them as features. For each region that lacks emission data and capital data, we find the nearest 5 neighbors according to its GDP and population. We use the average of the 5 neighbors' emission data and capital data as the estimated values.
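The imputation step can be sketched with a minimal NumPy 5-nearest-neighbor average (the paper uses scikit-learn's KNN regressor); all data below are fabricated for illustration:

```python
import numpy as np

# Minimal NumPy re-implementation of the imputation above: for a region
# missing capital data, average the capital of its 5 nearest neighbors in
# (GDP, population) space. All data are fabricated.

rng = np.random.default_rng(0)
features = rng.uniform(0.0, 10.0, size=(20, 2))   # (GDP, population) per region
capital = features[:, 0] * 2.0 + rng.normal(0.0, 0.1, size=20)

def knn_impute(query, features, values, k=5):
    # In practice the features should be standardized first; omitted here.
    dists = np.linalg.norm(features - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return values[nearest].mean()

estimate = knn_impute(np.array([5.0, 5.0]), features, capital)
```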
+
+# H.1 Population dynamic calibration
+
+Denoting $L_{\infty ;i}\coloneqq \lim_{t\to \infty}L_{i,t}$ , in the limit $t\rightarrow \infty$ we have:
+
+$$
L_{\infty;i} = L_{\infty;i} \left( \frac{1 + L_{a;i}}{1 + L_{\infty;i}} \right)^{l_{g;i}}, \tag{50}
+$$
+
+$$
1 = \left( \frac{1 + L_{a;i}}{1 + L_{\infty;i}} \right)^{l_{g;i}}. \tag{51}
+$$
+
+As long as $l_{g;i}$ is not zero, $L_{\infty;i} = L_{a;i}$ . Thus, $L_{a;i}$ is the long-term population size and a free parameter that is fitted to data. Assuming $\{L_{i,t}\}_{t=1,2,\ldots}$ is monotonically increasing or decreasing, the absolute value of $l_{g;i}$ represents how fast it converges to $L_{a;i}$ . The closer $L_{i,t}$ is to monotonically increasing or monotonically decreasing in the real data, the easier it is to fit $l_{g;i}$ and $L_{a;i}$ .
+
+To fit the population parameters, we take logs on both sides of Equation 39:
+
+$$
+\log L _ {i, t + 1} =
+$$
+
+$$
+\log L _ {i, t} + l _ {g; i} \left(\log \left(1 + L _ {a; i}\right) - \log \left(1 + L _ {i, t + 1}\right)\right), \tag {52}
+$$
+
where $\log L_{i,t+1} - \log L_{i,t}$ and $\log(1 + L_{i,t})$ are given by the data. $l_{g;i}$ and $\log(1 + L_{a;i})$ can then be estimated by linear regression.
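Since Equation 52 is linear in $l_{g;i}$ and $l_{g;i}\log(1 + L_{a;i})$, both parameters can be recovered by ordinary least squares. The sketch below generates synthetic data from Equation 39 with made-up "true" parameters and checks that the regression recovers them:

```python
import numpy as np

# Sketch of the population calibration: regress the log population growth on
# log(1 + L_t) to recover l_g and L_a. True values below are made up.

l_g_true, L_a_true = 0.05, 600.0

L = [400.0]
for _ in range(30):                        # simulate Equation 39
    L.append(L[-1] * ((1.0 + L_a_true) / (1.0 + L[-1])) ** l_g_true)
L = np.array(L)

# y_t = log L_{t+1} - log L_t = l_g * log(1 + L_a) - l_g * log(1 + L_t)
y = np.log(L[1:]) - np.log(L[:-1])
X = np.column_stack([np.ones(len(y)), np.log(1.0 + L[:-1])])
(intercept, slope), *_ = np.linalg.lstsq(X, y, rcond=None)

l_g_hat = -slope                           # slope is -l_g
L_a_hat = np.exp(intercept / l_g_hat) - 1.0  # intercept is l_g * log(1 + L_a)
```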
+
+# H.2 Technology dynamic calibration
+
We estimate both $g_{A;i}$ and $\delta_{A;i}$ from the existing data $\{A_{i,t}\}$ by solving a regression problem:
+
+$$
+g _ {a; i} ^ {*}, \delta_ {a; i} ^ {*} = \underset {g _ {a; i}, \delta_ {A, i}} {\arg \max } \mathcal {L} _ {i, t} \tag {53}
+$$
+
+$$
+\mathcal {L} _ {i, t} = \left\| A _ {i, t + 1} - \left(\exp \eta + g _ {A, i} \exp \left(- \delta_ {A, i} \Delta (t - 1)\right)\right) A _ {i, t} \right\| ^ {2}. \tag {54}
+$$
+
+This can be solved by numerical optimization algorithms, e.g., as provided in SciPy (Virtanen et al., 2020).
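As a sanity check of this fitting problem: since the growth residual $A_{i,t+1}/A_{i,t} - e^{\eta} = g_{A;i} e^{-\delta_{A;i}\Delta(t-1)}$ is positive and log-linear in $t$, the parameters can also be recovered by linear regression on its logarithm, as sketched below with made-up values (the paper instead solves the least-squares problem numerically):

```python
import numpy as np

# Sketch of the technology calibration (Equations 53-54). The growth residual
# r_t = A_{t+1}/A_t - exp(eta) = g_A * exp(-delta_A * Delta * (t-1)) is
# log-linear in t, so g_A and delta_A follow from a linear regression on
# log r_t. Parameter values are made up.

eta, Delta = 0.0033, 5
g_A_true, delta_A_true = 0.1, 0.15

t = np.arange(1, 12)                       # time steps t = 1..11
ratio = np.exp(eta) + g_A_true * np.exp(-delta_A_true * Delta * (t - 1))
r = ratio - np.exp(eta)                    # growth residual, assumed > 0

X = np.column_stack([np.ones(len(t)), Delta * (t - 1)])
(log_g, neg_delta), *_ = np.linalg.lstsq(X, np.log(r), rcond=None)

g_A_hat = np.exp(log_g)
delta_A_hat = -neg_delta
```

On noisy real data the residual can be non-positive, in which case the log trick fails and a numerical solver (e.g., SciPy, as cited above) is the robust choice.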
+
Because the emissions data from the World Bank API do not fit the form of the $\sigma$ dynamic assumed by DICE2016, we use the DICE2016 parameter values for $g_{\sigma}$ and $\delta_{\sigma}$.
+
+Table 7. Calibrated parameters for 27 regions
+
| Region ID | $A_0$ | $K_0$ | $L_0$ | $L_a$ | $\delta_A$ | $g_A$ | $l_g$ | $\sigma_0$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 1.872 | 0.239 | 476.878 | 669.594 | 0.139 | 0.122 | 0.034 | 0.456 |
| 2 | 8.405 | 3.304 | 68.395 | 93.497 | 0.188 | 0.103 | 0.058 | 0.529 |
| 3 | 3.558 | 0.109 | 64.122 | 135.074 | 0.161 | 0.127 | 0.026 | 0.816 |
| 4 | 1.927 | 1.424 | 284.699 | 465.308 | 0.244 | 0.134 | 0.024 | 1.221 |
| 5 | 8.111 | 0.268 | 28.141 | 23.574 | 0.163 | 0.106 | -0.057 | 0.290 |
| 6 | 4.217 | 3.184 | 548.754 | 560.054 | 0.170 | 0.095 | 0.080 | 0.302 |
| 7 | 2.491 | 0.044 | 46.489 | 59.988 | 0.058 | 0.049 | 0.037 | 0.420 |
| 8 | 2.525 | 1.080 | 69.194 | 100.016 | 0.346 | 0.079 | 0.029 | 1.010 |
| 9 | 2.460 | 0.184 | 513.737 | 1867.771 | 1.839 | 0.462 | 0.017 | 0.310 |
| 10 | 12.158 | 2.642 | 38.101 | 56.990 | 0.131 | 0.063 | 0.020 | 0.350 |
| 11 | 0.993 | 0.160 | 522.482 | 1830.325 | 0.086 | 0.065 | 0.019 | 0.235 |
| 12 | 5.000 | 2.289 | 165.293 | 230.191 | 0.183 | 0.071 | 0.027 | 0.419 |
| 13 | 29.854 | 2.020 | 165.751 | 216.927 | 0.088 | 0.075 | -0.002 | 0.254 |
| 14 | 23.315 | 3.039 | 109.395 | 143.172 | 0.088 | 0.075 | -0.002 | 0.254 |
| 15 | 29.854 | 0.687 | 56.355 | 73.755 | 0.088 | 0.075 | -0.002 | 0.254 |
| 16 | 10.922 | 0.606 | 705.465 | 532.497 | 0.096 | 0.168 | -0.016 | 0.781 |
| 17 | 9.634 | 0.608 | 465.607 | 351.448 | 0.096 | 0.168 | -0.016 | 0.781 |
| 18 | 8.621 | 0.453 | 239.858 | 181.049 | 0.096 | 0.168 | -0.016 | 0.781 |
| 19 | 3.190 | 0.129 | 690.002 | 723.513 | 0.054 | 0.068 | -0.013 | 0.949 |
| 20 | 2.034 | 0.381 | 455.401 | 477.518 | 0.054 | 0.068 | -0.013 | 0.949 |
| 21 | 13.220 | 16.295 | 502.410 | 445.861 | 0.252 | 0.074 | -0.033 | 0.170 |
| 22 | 3.190 | 0.044 | 234.601 | 245.994 | 0.054 | 0.068 | -0.013 | 0.949 |
| 23 | 6.387 | 1.094 | 317.880 | 287.533 | 0.194 | 0.237 | -0.053 | 0.840 |
| 24 | 2.481 | 0.090 | 94.484 | 102.997 | 0.203 | 0.201 | 0.037 | 1.665 |
| 25 | 10.853 | 17.554 | 222.891 | 168.351 | 0.005 | 0.000 | -0.012 | 0.285 |
| 26 | 4.135 | 1.002 | 103.294 | 87.418 | 0.158 | 0.123 | -0.063 | 0.601 |
| 27 | 2.716 | 1.034 | 573.818 | 681.210 | 0.097 | 0.101 | 0.043 | 0.638 |
+
+# I Trade constraints
+
+To ensure that total imports and total exports match, three constraints are enforced on regions' trade flows.
+
+1. For each region $i$ , if the region's total desired imports from other regions exceed its own gross output, then the imports are scaled to sum to the region's gross output. That is, we enforce the constraint $\sum_{j \neq i} b_{i,j,t} \leq Q_{i,t}$ , which is to say that a region may not import more goods than its current gross output capacity. This constraint helps the agents avoid insurmountable debt, thereby stabilizing trade balances over the entire time period while also easing learning. If a region's desired imports exceed its production capacity, its import bids are scaled down proportionally:
+
+$$
+b _ {i, j, t} \leftarrow b _ {i, j, t} \min \left\{1, \frac {Q _ {i , t}}{\sum_ {j \neq i} b _ {i , j , t}} \right\}. \tag {55}
+$$
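A vectorized sketch of this scaling step (Eq. 55); the array layout, with `bids[i, j]` holding region $i$'s desired imports from region $j$, is an assumption for illustration.

```python
import numpy as np

def scale_import_bids(bids, gross_output):
    """Eq. (55): if a region's total import bids exceed its gross output Q,
    scale that region's bids down proportionally; otherwise leave them as is."""
    totals = bids.sum(axis=1)          # each region's total desired imports
    factors = np.minimum(1.0, gross_output / np.maximum(totals, 1e-12))
    return bids * factors[:, None]

# Region 0 over-bids (6 > 4) and is scaled down; regions 1 and 2 are unchanged.
b = np.array([[0.0, 3.0, 3.0],
              [1.0, 0.0, 1.0],
              [0.5, 0.5, 0.0]])
Q = np.array([4.0, 5.0, 5.0])
b_scaled = scale_import_bids(b, Q)
```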
+
+2. Regions are allowed to carry a (positive or negative) trade balance $D_{i,t}$ . At the start of each new time step, each region's trade balance, positive or negative, accumulates interest at a fixed rate of $10\%$ . Based on this balance, a region's debt-to-initial-capital ratio is determined and the imports are scaled according to this ratio:
+
+$$
+d _ {i, t} = 10 \, \frac {D _ {i , t}}{K _ {0}}, \tag {56}
+$$
+
+$$
+b _ {i, j, t} \leftarrow b _ {i, j, t} \left(1 + d _ {i, t}\right). \tag {57}
+$$
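The interest accrual and debt-based rescaling (Eqs. 56-57) might be sketched as follows; the two-region numbers are hypothetical, while the $10\%$ interest rate and the factor of 10 come from the text above.

```python
import numpy as np

def apply_debt_adjustment(bids, balance, K0, interest_rate=0.10):
    """Eqs. (56)-(57): accrue interest on each trade balance, compute
    d = 10 * D / K0, and scale the region's import bids by (1 + d), so
    debt (D < 0) shrinks the bids and surplus enlarges them."""
    balance = balance * (1.0 + interest_rate)
    d = 10.0 * balance / K0                  # Eq. (56)
    return bids * (1.0 + d)[:, None], balance

# Hypothetical two-region example: region 0 carries a small debt.
bids = np.array([[0.0, 2.0], [1.0, 0.0]])
D = np.array([-0.05, 0.0])
K0 = np.array([1.0, 1.0])
new_bids, new_D = apply_debt_adjustment(bids, D, K0)
```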
+
+3. If other regions' total desired imports from region $i$ exceed region $i$ 's upper bound on exports $x_{i,t}^{\max}$ , then the bids for goods from region $i$ are scaled proportionally to $x_{i,t}^{\max}$ . Otherwise, each region receives its full import bid from region $i$ . In other words, region $i$ cannot export more goods at time $t$ than it could consume at time $t$ , so other regions will import less from region $i$ .
+
+$$
+x _ {i, t} ^ {\max } = \min \left(p _ {i, t} ^ {x} Q _ {i, t}, Q _ {i, t} - I _ {i, t}\right), \tag {58}
+$$
+
+$$
+x _ {i, j, t} = b _ {i, j, t} \min \left\{1, \frac {x _ {i , t} ^ {\max }}{\sum_ {j \neq i} b _ {i , j , t}} \right\}. \tag {59}
+$$
+
+After all constraints have been applied, the trade balance for the next period is calculated:
+
+$$
+D _ {i, t + 1} = D _ {i, t} + \Delta \left(\sum_ {j \neq i} x _ {j, i, t} - \sum_ {j \neq i} x _ {i, j, t}\right). \tag {60}
+$$
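Putting the export cap and balance update together (Eqs. 58-60), one plausible reading of the condensed index conventions is sketched below; `bids[i, j]` is taken to be region $i$'s (already constrained) bid for region $j$'s goods, and the toy numbers are hypothetical.

```python
import numpy as np

def settle_trade(bids, Q, invest, p_x, D, delta_t=1.0):
    """Cap each exporter's outflow at x_max = min(p_x * Q, Q - I) (Eq. 58),
    scale the bids for its goods proportionally (Eq. 59), and update the
    trade balances from net flows (Eq. 60)."""
    x_max = np.minimum(p_x * Q, Q - invest)
    demand = bids.sum(axis=0)                   # total bids for each exporter's goods
    factors = np.minimum(1.0, x_max / np.maximum(demand, 1e-12))
    flows = bids * factors[None, :]             # flows[i, j]: i's imports from j
    exports, imports = flows.sum(axis=0), flows.sum(axis=1)
    return flows, D + delta_t * (exports - imports)

# Two hypothetical regions; region 1's exports are capped at 1.0.
b = np.array([[0.0, 2.0], [1.0, 0.0]])
flows, D_next = settle_trade(b, Q=np.array([3.0, 3.0]), invest=np.array([1.0, 2.0]),
                             p_x=np.array([1.0, 1.0]), D=np.zeros(2))
```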
+
+# J Legal Framework
+
+Tariff-enforced mechanisms, such as Basic Club, must comply with the World Trade Organization's (WTO) General Agreement on Tariffs and Trade (GATT); specifically, the "most favored nation" clause, which requires that tariffs be non-discriminatory. At face value, Basic Club would appear to violate this clause; however, exceptions are made in the following circumstances:
+
+- The agreement promotes one of the GATT article XX (g) objectives; namely, "relating to the conservation of exhaustible natural resources."
+- The agreement should contribute to the objective.
+- The agreement should not discriminate between countries. If it appears to, the discrimination must be justified on grounds consistent with the stated objective.
+
+This legal framework has precedent in the 1998 WTO Appellate Body Report "United States - Import Prohibition of Certain Shrimp and Shrimp Products" (Shaffer, 1999). Basic Club inherits this legal framework with respect to GATT compliance. Furthermore, tariffs can be WTO-compliant if they correct existing trade imbalances, as is the case with carbon-leaking regions that hold a competitive advantage, or if they are used as a punitive measure against misconduct (Pihl, 2020; Mavroidis & de Melo, 2015).
+
+# K Sensitivity Analysis
+
+We carry out a sensitivity analysis to test the robustness of the results under different parameter settings. Since the space of possible configurations is large, we perform the analysis over a subset of economically relevant parameters, namely the discount factor, welfare loss weight, consumption substitution rate, and relative preference for domestic goods. Figure 6 shows the percentage change in outcome variables of interest across different scenarios when critical model parameters are perturbed by a multiplication factor ranging from 0.96 to 1.04. The maximum percentage change is $3.16\%$ , while the mean is $-0.22\%$ and the median is $-0.36\%$ . We thus conclude that the dynamics are stable with respect to changes in critical model parameters.
+
+# L Mitigation Distribution
+
+To explore the range of strategies that emerge under different negotiation protocols, we analyze the distribution of final mitigation rates of each agent. This clarifies for each negotiation protocol what proportion of agents are ambitious mitigators, free-riders, or low-effort mitigators. Results are visible in Figure 9.
+
+# M Detailed Outcome and Fairness Time Series
+
+Figure 7 and Figure 8 provide time series and equity measures of key variables across relevant scenarios. The bump in carbon emissions within the first time steps results from the constraint that regions cannot abruptly change their mitigation rate, but may only adapt it stepwise, leading to a slow ramp-up of mitigation at the start of the rollout. Even at the maximum mitigation rate, a base emission level remains, as land emissions are assumed to be non-reducible.
+
+
+Figure 6. Sensitivity analysis: heatmap showing the percentage change in variables of interest (including Temperature, Carbon Emissions, GDP) across different scenarios (x-axis) when critical model parameters (including the discount factor, welfare loss weight, consumption substitution rate and relative preference for domestic goods) are perturbed by a multiplication factor of $(1 + \Delta)$ . The parameter $\Delta$ varies from $-0.04$ to $0.04$ (y-axis). For example, looking at the bottom left corner of the Temperature Change heatmap, a $4\%$ decrease in the model parameters leads to an increase in final temperature of $0.41\%$ . Overall, the variables of interest are rather insensitive to changes in the model parameter values. For reference, the global temperature anomaly increases by over $200\%$ from 2015 to 2115 in the Maximum mitigation scenario.
+
+
+
+
+
+
+Figure 7. Time series of key variables (panels (a)-(d)) across various scenarios.
+
+Figure 8. Equity of key variables (panels (a)-(d)) across various scenarios.
+
+
+Figure 9. We compare the distribution of final mitigation rates across 50 seeds. Under the default (no negotiation), most regions either free-ride or reduce $\leq 30\%$ of their emissions. Under Basic Club and Bilateral Negotiation, the majority of agents reduce $80\%$ of their emissions.
\ No newline at end of file
diff --git a/aiforglobalclimatecooperationmodelingglobalclimatenegotiationsagreementsandlongtermcooperationinricen/images.zip b/aiforglobalclimatecooperationmodelingglobalclimatenegotiationsagreementsandlongtermcooperationinricen/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..40455af53fa436106f949adf620f11eb7b7d5b0f
--- /dev/null
+++ b/aiforglobalclimatecooperationmodelingglobalclimatenegotiationsagreementsandlongtermcooperationinricen/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9cafc5dcb547c99637b7f9647d25f04138c84f5ce809889d8bf31028ed94711a
+size 1589547
diff --git a/aiforglobalclimatecooperationmodelingglobalclimatenegotiationsagreementsandlongtermcooperationinricen/layout.json b/aiforglobalclimatecooperationmodelingglobalclimatenegotiationsagreementsandlongtermcooperationinricen/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..a358a1f64f37e170aefde89420dc612acfe6690a
--- /dev/null
+++ b/aiforglobalclimatecooperationmodelingglobalclimatenegotiationsagreementsandlongtermcooperationinricen/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1e724c5ac2716396d3003ce8704b993cf4015c5ee902b0e90c9894393432ffa0
+size 1037229
diff --git a/akornadaptiveknotsgeneratedonlineforregressionsplines/c72c64d6-2843-4128-8565-88537def6ef5_content_list.json b/akornadaptiveknotsgeneratedonlineforregressionsplines/c72c64d6-2843-4128-8565-88537def6ef5_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..f3104bc5fd6dd9818611cb677aa1d07263695b4f
--- /dev/null
+++ b/akornadaptiveknotsgeneratedonlineforregressionsplines/c72c64d6-2843-4128-8565-88537def6ef5_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0ae1b54e03941919005356d95e964fca91030517a7fa39f1ef7603b22ca9f72e
+size 207422
diff --git a/akornadaptiveknotsgeneratedonlineforregressionsplines/c72c64d6-2843-4128-8565-88537def6ef5_model.json b/akornadaptiveknotsgeneratedonlineforregressionsplines/c72c64d6-2843-4128-8565-88537def6ef5_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..212cd5bde7005735121698d92cf8b53ebfe43a4b
--- /dev/null
+++ b/akornadaptiveknotsgeneratedonlineforregressionsplines/c72c64d6-2843-4128-8565-88537def6ef5_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bab1a81730ad7641a7af39a4ad22a0027f2f6ad5bc98f75a421b845a80c02dad
+size 248949
diff --git a/akornadaptiveknotsgeneratedonlineforregressionsplines/c72c64d6-2843-4128-8565-88537def6ef5_origin.pdf b/akornadaptiveknotsgeneratedonlineforregressionsplines/c72c64d6-2843-4128-8565-88537def6ef5_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..7bf5188e0e6c24bd393a5452d42804f0ff47d253
--- /dev/null
+++ b/akornadaptiveknotsgeneratedonlineforregressionsplines/c72c64d6-2843-4128-8565-88537def6ef5_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ef8437e3b01ad163696a8259663b247e1368208fbcde9ec34ac69c89adddf3d1
+size 833894
diff --git a/akornadaptiveknotsgeneratedonlineforregressionsplines/full.md b/akornadaptiveknotsgeneratedonlineforregressionsplines/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..4ac64c55cfac1c5225b0593f5417d0233ca2f202
--- /dev/null
+++ b/akornadaptiveknotsgeneratedonlineforregressionsplines/full.md
@@ -0,0 +1,1201 @@
+# AKORN: Adaptive Knots generated Online for RegressioN splines
+
+Sunil Madhow¹ Dheeraj Baby² Yu-Xiang Wang¹
+
+# Abstract
+
+In order to attain optimal rates, state-of-the-art algorithms for non-parametric regression require that a hyperparameter be tuned according to the smoothness of the ground truth (Tibshirani, 2014). This amounts to an assumption of oracle access to certain features of the data-generating process. We present a parameter-free algorithm for offline non-parametric regression over $TV_{1}$ -bounded functions. By feeding offline data into an optimal online denoising algorithm styled after (Baby et al., 2021), we are able to use changepoints to adaptively select knots that respect the geometry of the underlying ground truth. We call this procedure AKORN (Adaptive Knots generated Online for RegressioN splines). By combining forward and backward passes over the data, we obtain an estimator whose empirical performance is close to Trend Filtering (Kim et al., 2009; Tibshirani, 2014), even when we provide the latter with oracle knowledge of the ground truth's smoothness.
+
+# 1. Introduction
+
+When estimating a nonparametric function with noisy data, the key challenge is knowing where to smooth observations and by how much. Because the "wiggliness" of the ground truth is unknown, practitioners are almost always left with a hyperparameter to tune, which corresponds to the wiggliness of the fit. Attaining optimal statistical rates often requires this parameter to be tuned with oracle knowledge of the ground truth. In this paper, we propose a (near)-optimal, parameter-free algorithm for non-parametric regression that uses techniques from online learning to automatically adapt to the smoothness of the ground truth.
+
+1 Halicioglu Data Science Institute, UC San Diego 2 Amazon (Work was completed prior to joining.). Correspondence to: Sunil Madhow .
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+Consider the problem of non-parametric regression over total variation smoothness classes. For some covariates $\{x_{i}\}_{i = 1}^{n}$ , we observe data
+
+$$
+y _ {i} = f \left(x _ {i}\right) + \epsilon_ {i} \tag {1}
+$$
+
+where $\{\epsilon_i\}$ are i.i.d. $\mathcal{N}(0,\sigma^2)$ random variables and $f$ has bounded $k$ -th order total variation, which means that the variation in its $k$ -th derivative is controlled.
+
+Traditionally, the best solutions to this problem solve functional risk-minimization objectives with regularization on $TV_{k}$ -smoothness (Tibshirani, 2014). In order to enjoy optimal statistical rates, such methods require the regularization to be tuned in correspondence with a tight upper-bound on the $TV_{k}$ of the ground truth. This makes it difficult for practitioners to be sure that they are benefiting from the powerful theory already established in the literature (Tibshirani, 2014; Guntuboyina et al., 2020).
+
+Recent work uses algorithms from the Online Learning (OL) literature (Hazan et al., 2006; Baby et al., 2021; Chatterjee & Goswami, 2023) to treat the online version of this regression problem, where points $(x_{i},y_{i})$ are revealed one at a time. Thanks to the powerful oracle inequalities enjoyed by OL algorithms, these methods attain optimal statistical rates while obviating the necessity for a priori knowledge of the smoothness of $f$ .
+
+When applied directly in the offline setting, however, existing OL-based methods have serious drawbacks. Principal among these is the fact that their output is not a function $\hat{f}$ , but rather a highly non-smooth sequence of predictions, $\{\hat{y}_1,\dots \hat{y}_n\}$ . This is problematic because inferring a smooth, functional form is one of the key goals in the regression literature (Donoho & Johnstone, 1994). At the same time, each prediction, $\hat{y}_t$ , is made with only knowledge of $y_{1},\ldots y_{t - 1}$ making it harder to pick up patterns in the data. The result is that, when specialized to the offline setting (for instance, by interpolating the predictions $\hat{y}_1,\dots \hat{y}_n$ ), online algorithms are badly outperformed by traditional methods in terms of both MSE and attractiveness of fit (Baby et al., 2021).
+
+Figure 1. "Attention Map" for AKORN compared to ADDLE and Local Linear Regression for noisy evaluations of the Doppler function of (Donoho & Johnstone, 1994). Observe that ADDLE and AKORN can select the appropriate "bandwidth" for the local linear fit adaptively.
+
+Is it possible to inject the instance-dependent knowledge acquired by online algorithms into inherently offline algorithms? In this paper, we present an OL-based algorithm called AKORN (Adaptive Knots generated Online for RegressioN splines) for offline non-parametric regression that retains some of the best properties of online and offline methods:
+
+1. AKORN adapts to the smoothness of the ground truth with no need for hyperparameter tuning. That is, without any knowledge of $TV_{1}[f]$ , AKORN outputs a linear spline, $\hat{f}$ , satisfying
+
+$$
+\frac {1}{n} \sum_ {i = 1} ^ {n} (\hat {f} (x _ {i}) - f (x _ {i})) ^ {2} = \tilde {O} _ {P} (n ^ {- 4 / 5} T V _ {1} [ f ] ^ {2 / 5})
+$$
+
+Furthermore, $\hat{f}$ is guaranteed to have a number of knots scaling as $\tilde{O}(n^{1/5}TV_1[f]^{2/5})$ .
+
+2. AKORN learns a function, $\hat{f}$ , rather than a sequence of point predictions. As such, AKORN offers a principled way of using online methodology for inference, rather than pure prediction.
+3. We can visualize AKORN's "attention map" as in Figure 1. This highlights AKORN's ability to optimize a bias-variance tradeoff in a neighborhood of each covariate $x_{i}$ - a property inherited from its online subroutines. Details on the attention map are in Section 5.1.
+4. AKORN enjoys the optimal statistical rate for the offline regression problem without requiring us to restrict our attention to functions whose $TV_{1}$ is bounded by a known constant. That is, the rate in item 1 is an instance-dependent rate that holds over the set of ground truths $\{f : TV_{1}[f] < \infty\}$ rather than just a minimax rate over $\{f : TV_{1}[f] \leq \alpha\}$ for some $\alpha \in \mathbb{R}$ .
+5. Empirically, AKORN is competitive with state of the art offline methods, even when they are artificially provided with the best possible hyperparameters.
+
+# 1.1. Key Techniques and Other Contributions
+
+1. We reduce knot-selection to optimal online denoising. By suitably measuring the stationarity of an online learner's predictions, AKORN adaptively generates a set of knots, which, when used to fit a linear regression spline, gives optimal statistical rates.
+2. To this end, we introduce ADDLE (Adaptive Denoising with Linear Experts), which optimally solves the online non-parametric denoising problem for functions of bounded 1st-order TV: an extension of "Aligator" from (Baby et al., 2021).
+3. We carry out the analysis for AKORN by introducing a fictitious estimator which allows us to deal with the statistical dependence between the adaptive knots and the data. This technique may be useful in other work on knot-selection.
+
+# 2. Related Work
+
+Regression over total variation classes is well-studied in the literature (Muller, 1992; Donoho & Johnstone, 1994; 1998; Tibshirani, 2014). Optimal techniques include wavelet smoothing (Donoho & Johnstone, 1994; 1998), Locally Adaptive Regression Splines (Mammen & van de Geer, 1997), and the now state-of-the-art Trend Filtering estimate (Kim et al., 2009; Tibshirani, 2014). Crucially, most of these methods require an injection of a priori knowledge of the smoothness of $f$ via hyperparameter, unlike AKORN.
+
+Some work makes use of Stein's Unbiased Risk Estimator (SURE) to select hyperparameters (Tibshirani & Taylor, 2012; Tibshirani, 2015; Donoho & Johnstone, 1995). These methods are typically heuristic, as extracting provable guarantees involves proving uniform convergence of SURE. The exception is (Donoho & Johnstone, 1995), which obtains this uniform convergence for wavelet smoothing. Though wavelets enjoy powerful adaptivity properties from a theoretical perspective, Trend Filtering achieves much better results in practice (Tibshirani, 2014).
+
+There has also been interest in the online nonparametric regression setting, which surprisingly is not much harder than the offline setting (Rakhlin & Sridharan, 2014). Recently, Baby et al. (2021) uses oracle inequalities from the online literature (Hazan et al., 2006) to design an optimal parameter-free algorithm called "Aligator" for the online regression problem over $TV_{0}$ . Unfortunately, when specialized to the offline setting, Aligator cannot compete with Trend Filtering empirically.
+
+All minimax optimal estimates for TV classes must be nonlinear functions of the responses in order to display local adaptivity, as shown in Donoho & Johnstone (1998). For our method, this means knots must be adaptively spaced according to the change-points of the ground truth. Many algorithms have been proposed for knot selection (Friedman, 1991; Luo & Wahba, 1997; Wand, 2000), typically by choosing a large knotset and recursively purging knots (Goepp et al., 2025). To our knowledge, AKORN is the first knot selection algorithm with provable guarantees.
+
+A more detailed discussion of related work is available in Appendix A.
+
+# 3. Problem Setup
+
+We now instantiate the model in Equation 1 by defining our assumption on $f$ . We impose regularity on the ground truth $\pmb{\theta} = [f(x_1),\dots f(x_n)]$ as measured by its 1st-order total variation.
+
+Definition 3.1. The 1st-order total variation of a vector $\pmb{\theta} \in \mathbb{R}^n$ with respect to the points $\mathcal{D}_X = \{x_1, \dots, x_n\}$ is defined as
+
+$$
+T V _ {1} [ \pmb {\theta}; \mathcal {D} _ {X} ] = \sum_ {i = 3} ^ {n} \left| \frac {\theta_ {i} - \theta_ {i - 1}}{x _ {i} - x _ {i - 1}} - \frac {\theta_ {i - 1} - \theta_ {i - 2}}{x _ {i - 1} - x _ {i - 2}} \right|
+$$
+
+By extension, we define the discrete total variation of a function $f$ with respect to the points $\mathcal{D}_X = \{x_1, \ldots, x_n\}$ as
+
+$$
+T V _ {1} [ f; \mathcal {D} _ {X} ] = T V _ {1} [ \boldsymbol {\theta}; \mathcal {D} _ {X} ]
+$$
+
+where $\pmb {\theta} = [f(x_1),\dots f(x_n)]$
+
+As we shall be computing $TV_{1}$ with respect to fixed covariates, we tend to suppress $\mathcal{D}_X$ in the above notation.
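Definition 3.1 amounts to summing the absolute changes in consecutive divided differences (slopes); a minimal sketch:

```python
import numpy as np

def tv1(theta, x):
    """Discrete 1st-order total variation (Definition 3.1): total absolute
    change in consecutive divided differences (slopes) of theta over x."""
    slopes = np.diff(theta) / np.diff(x)
    return np.sum(np.abs(np.diff(slopes)))

# A piecewise-linear curve with one kink: TV_1 equals the slope change (1.0).
x = np.linspace(0.0, 1.0, 11)
theta = np.minimum(x, 0.5)
```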
+
+Remark 3.2. The definitions adopted above, while standard (Guntuboyina et al., 2020), differ subtly from the definition of the true 1st-order total variation seminorm, $\| \cdot \|_{TV_1}$ , defined for weakly differentiable functions (Tibshirani, 2014). To our knowledge, neither is more general than the other. However for $g$ differentiable, we have
+
+$$
+T V _ {1} [ g; \mathcal {D} _ {X} ] \leq \| g \| _ {T V _ {1}}
+$$
+
+for all $\mathcal{D}_X$ , implying that $TV_{1}[f]$ can be replaced by $\| f\|_{TV_1}$ in our bound when $f$ is differentiable. Throughout this paper, we use only the discrete total variation from Definition 3.1.
+
+We assume that we are given data of the form $\{(x_i, y_i)\}_{i=1}^n$ according to Equation 1 where $f$ has bounded $TV_1$ . We reserve the letter $C$ to denote $C := TV_1[f; \{x_i\}_{i=1}^n]$ .
+
+Remark 3.3. Why $TV_{1}[\cdot]$ ? While $TV_{0}$ -functions can be approximated by a sparse combination of Heaviside functions, $TV_{1}$ -functions are well approximated by linear splines with a sparse number of knots. Thus, in the $TV_{0}$ setting, it is "proper" to output a discontinuous, piecewise constant estimate ((Baby et al., 2023) provides a recipe for doing this with a sparse number of segments). On the other hand, "proper" estimates for $TV_{1}$ functions (and $TV_{k\geq 1}$ ) should be continuous, in addition to having a sparse number of change points. Thus, $TV_{1}$ is the first level at which the mismatch between inherently discontinuous online predictions and an inherently continuous ground truth must be addressed in offline-to-online reductions.
+
+In the online setting, each data point comes to us one at a time, and the goal is to produce a sequence of predictions $\hat{y}_t$ so that $\sum_{t=1}^{n} (\hat{y}_t - \theta_t)^2$ is as small as possible. In Section 4.1, we describe an optimal parameter-free algorithm for the online problem.
+
+In the offline setting (our main target), we wish to produce a model $\hat{f}$ such that $\sum_{i=1}^{n} (\hat{f}(x_i) - f(x_i))^2$ is small, assuming simultaneous access to all data points. Though the asymptotic rates for the offline setting and online setting are the same (up to lower-order terms) (Rakhlin & Sridharan, 2014; Baby & Wang, 2019), algorithms for the offline setting typically substantially outperform algorithms for the online setting. In Section 4.2, we propose a reduction from the offline setting to the online setting that mitigates the empirical drawbacks typically suffered by online algorithms.
+
+# 3.1. Additional assumptions
+
+We assume that $f$ is bounded by an unknown constant, $|f| \leq B$ . As mentioned in (Baby et al., 2021), this assumption is typically not made in the literature (Donoho & Johnstone, 1994; Tibshirani, 2014). When $f$ is continuous, this assumption is vacuous. Without loss of generality, we let $B = 1$ .
+
+In the body of this paper, we assume that the covariates are equally spaced: $x_{i} = i / n$ . This assumption is rather strong, but has been the starting point for many non-parametric regression algorithms, including Trend Filtering (Donoho & Johnstone, 1994; Tibshirani, 2014), where it was subsequently relaxed (Wang et al., 2014; Sadhanala & Tibshirani, 2019). In Appendix F, we show how a modified version of AKORN can handle uneven and random covariates.
+
+# 3.2. Additional Notation
+
+For a single natural number, $a$ , we let $[a] := \{1, 2, \dots, a\}$ . For a real number, $z$ , we let $(z)_{+} = \max \{z, 0\}$ . Let $e_i \in \mathbb{R}^n$ be the $i$ th standard basis vector.
+
+We introduce the notation $\mathcal{D}_X = \{x_1, \ldots, x_n\}$ and $\mathcal{D}_Y = \{y_1, \ldots, y_n\}$ . When $k, k' \in \mathcal{D}_X$ with $k < k'$ , we define $[k, k'] = \{x \in \mathcal{D}_X : k \leq x \leq k'\}$ .
+
+We also note that, for all $i \in [n]$ , $x_i = i / n$ . Strictly speaking, each $x_i$ is a scalar. However, as a matter of convenience, we will sometimes denote $[1, x_i]$ as $x_i$ when the distinction is clear from context. We define the truncated power basis, $\{g_i : [0, 1] \to \mathbb{R}\}_{i=1}^n$ as follows:
+
+$$
+\begin{array}{l} g _ {j} (x) = \left(x - x _ {j}\right) _ {+}, \quad 1 \leq j \leq n - 1 \\ g _ {n} (x) = 1 \\ \end{array}
+
+For any $j \in [n]$ , we vectorize the evaluations of $g_{j}$ on the data as $\pmb{g}_{j} = [g_{j}(x_{1}),\dots g_{j}(x_{n})]^{T}$ .
+
+Finally, for any set of (non-repeating) knots $K = \{k_1,\dots k_l\} \subset \mathcal{D}_X$ we let $\mathcal{G}(K) = \{\pmb {g}_1,\pmb {g}_n\} \cup \{\pmb {g}_j\}_{j\in K}$ . For this $K$ , we also let $H_{K}$ be the matrix whose columns are $\{[1,(x_i - x_1)_+, g_{k_1}(x_i),\dots g_{k_l}(x_i)]^T\}_{i = 1}^n$ . We use $S(K) =$ span $\mathcal{G}(K)$ to denote the space of (evaluations of) linear splines with knotset $K$ . We use $F(K) = \mathrm{span}\{\mathcal{G}(K)\cup$ $\{\xi_1,\ldots \xi_{|K|}\}\}$ , where $\xi_{j} = \sum_{i = j}^{n}e_{i}$ , to denote the space of (evaluations of) piecewise linear functions with knotset $K$ .
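For concreteness, the truncated power basis evaluations can be assembled into a design matrix as below; note that here rows index data points, i.e. the transpose of the $H_K$ convention above, and the covariates and knot are illustrative.

```python
import numpy as np

def spline_design_matrix(x, knots):
    """Evaluations of the basis {1, (x - x_1)_+, (x - k)_+ for k in knots}
    on the covariates; rows index data points, columns index basis functions."""
    cols = [np.ones_like(x), np.maximum(x - x[0], 0.0)]
    cols += [np.maximum(x - k, 0.0) for k in knots]
    return np.stack(cols, axis=1)    # shape (n, 2 + |K|)

x = np.linspace(0.0, 1.0, 5)
H = spline_design_matrix(x, knots=[0.5])
```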
+
+We sometimes abuse notation by identifying functions $p:[0,1]\to \mathbb{R}$ with the finite-dimensional vector of their evaluations on $\mathcal{D}_X$ , $[p(x_1),\dots,p(x_n)]^T$ . This is done only when the underlying function $p$ is clear from context, as is the case when we are discussing the ground truth ( $f =: \pmb{\theta}$ ), or any linear spline ( $\sum_{j = 1}^{n}\beta_{j}g_{j}(x) =: \sum_{j = 1}^{n}\beta_{j}\pmb{g}_{j}$ ).
+
+# 4. ADDLE and AKORN
+
+# 4.1. ADDLE: Online Denoising for $TV_{1}$
+
+We first introduce ADDLE (ADaptive Denoising with Linear Experts) to treat the online problem of denoising the sequence of responses $\{y_1, \ldots, y_n\}$ with a sequence $\{\hat{y}_1, \ldots, \hat{y}_n\}$ , where the responses come from the data model in Equation 1.
+
+ADDLE operates by running Follow-the-Leading-History (FLH) (Hazan et al., 2006) with experts given by online linear regression² (Algorithm 3 in Appendix B) and loss functions $f_{t}(\cdot) = (\cdot - y_{t})^{2}$ . FLH predicts a weighted combination of the predictions by each expert, and uses an exponential reweighting scheme to update its weights at each time step according to observed losses. A formal description of FLH/ADDLE appears in Appendix B.
+
+Since our end goal is to address offline data, we assume the data is revealed in isotonic order (i.e., our $t$ -th observation is $y_{t}$ ), though this assumption can be relaxed by means of a geometric cover (Baby et al., 2021). Furthermore, the algorithm can be generalized to handle any $TV_{k}$ by augmenting the experts to perform online regression with $k$ -th degree polynomials.
+
+Remark 4.1. Though ADDLE has not, to our knowledge, appeared in the literature, much of the technical scaffolding for online denoising over $TV$ classes via expert aggregation appeared in (Baby et al., 2021). As such, the main technical contribution of this paper is AKORN.
+
+# 4.2. AKORN
+
+As we have mentioned, one issue with ADDLE is that the predictions, $\{\hat{y}_i\}$ , are not very useful in the offline setting. We now present AKORN, which uses online forward and backward passes together with an adaptive restarting rule in order to curate a set of knots, $K = \{k_1,\dots k_l\} \subset \mathcal{D}_X$ . With these knots in hand, AKORN then returns the best linear regression spline with knots in $K$ :
+
+$$
+\begin{array}{l} \hat {f} (x) = [ 1, (x - x _ {1}) _ {+}, g _ {k _ {1}} (x), \dots g _ {k _ {l}} (x) ] (H _ {K} H _ {K} ^ {T}) ^ {- 1} H _ {K} Y \\ =: P _ {S (K)} Y (x) \\ \end{array}
+
+where we recall from Section 3.2 that $g_{k_i}$ is the truncated power basis function $(x - x_{k_i})_+$ and $H_K$ is the data-matrix whose columns are the features $[1, (\cdot - x_1)_+, (\cdot - k_1)_+, \dots (\cdot - k_l)_+]^T$ and we have engaged in the aforementioned abuse of notation $\hat{f} = P_{S(K)}Y$ .
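This projection is an ordinary least-squares fit in the truncated power basis; a sketch, with `numpy.linalg.lstsq` standing in for the normal-equations form above and a hypothetical noiseless example:

```python
import numpy as np

def fit_spline(x, y, knots):
    """Least-squares projection of y onto linear splines with the given
    knots (P_{S(K)} Y), via the truncated power basis of Section 3.2."""
    def design(z):
        cols = [np.ones_like(z), np.maximum(z - x[0], 0.0)]
        cols += [np.maximum(z - k, 0.0) for k in knots]
        return np.stack(cols, axis=1)
    beta, *_ = np.linalg.lstsq(design(x), y, rcond=None)
    return lambda z: design(z) @ beta

# With the correct knot and no noise, a kinked ground truth is recovered exactly.
x = np.linspace(0.0, 1.0, 21)
y = np.abs(x - 0.5)
f_hat = fit_spline(x, y, knots=[0.5])
```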
+
+To form $K$ , AKORN begins by generating a forward knotset, $K_{f}$ , and a backward knotset, $K_{b}$ . These are generated by feeding the data to Algorithm 1 in isotonic and reverse-isotonic order respectively. Algorithm 1 is inspired by Algorithm 5 in (Baby et al., 2023), which was designed to impose a low-switching constraint on online predictions. Essentially, every time ADDLE starts (say, at time $b \in [n]$ ), the predictions of ADDLE are compared to a linear regression that also starts at time $b$ . When the total square distance between these sequences drifts above a certain constant, we conclude that we have exited the interval in which we can linearly approximate $f$ , and we put down a knot. The effect is that ADDLE restarts only when a noisy surrogate of the $TV_{1}$ within the interval exceeds $n / (\text{Interval Size})^{3/2}$ (see Lemma C.1).
+
+To complete the construction of $K$ , we generate preliminary fits $g = P_{F(K_f)}Y \in \mathbb{R}^n$ and $h = P_{F(K_b)}Y \in \mathbb{R}^n$ and form $\tilde{K}$ , the set of all crossover points of $g$ and $h$ . Crossover
+
+Algorithm 1 FindKnots
+Input: data $\{(x_{t},y_{t})\}_{t = 1}^{n}$ , variance $\sigma^2$
+$b\gets 0$ ; $\mathcal{K}\gets \{\}$
+Start ADDLE instance $\mathcal{A}$
+for $t\in \{1,\dots ,n\}$ do
+  $\tilde{\theta}_t\gets$ prediction for $x_{t}$ from $\mathcal{A}$
+  $\hat{a}_{t}\gets$ LinearLeastSquares $(z_{b:(t - 1)})$ $\{\in \mathbb{R}^2\}$
+  For all $j\in [b,\ldots ,t]$ set $\hat{w}_j^t\gets \hat{a}_t^T x_j$
+  $s\gets \sum_{j = b}^{t}(\hat{w}_j^t -\tilde{\theta}_j)^2$
+  if $s > 5\sigma^2\log \frac{2n^2}{\delta}$ then
+    $\mathcal{K}\gets \mathcal{K}\cup \{x_{t - 1}\}$
+    $b\gets t$
+    Restart $\mathcal{A}$
+  end if
+  Update $\mathcal{A}$ with $y_{t}$
+end for
+output $\mathcal{K}$
+
+Algorithm 2 AKORN
+Input: data $\{(x_{t},y_{t})\}_{t = 1}^{n}$ , variance $\sigma^2$
+$K_{f}\gets$ FindKnots $(\{(x_t,y_t)\}_{t = 1}^n,\sigma^2)$
+$K_{b}\gets$ FindKnots $(\{(x_t,y_t)\}_{t = n}^1,\sigma^2)$
+$g\gets P_{F(K_f)}Y$
+$h\gets P_{F(K_b)}Y$
+$\tilde{K}\gets$ Crossovers $(g,h,\{x_t\}_{t = 1}^n)$
+$K_{collision}\gets \{x_t:(t < n - 1)\wedge x_{t + 1}\in K_f\cap K_b\}$
+$K\gets K_{f}\cup K_{b}\cup \tilde{K}\cup K_{collision}$
+output $\hat{f} = P_{S(K)}Y$
+
+points are defined as covariates $z_{t}$ where either of the following holds
+
+1. $h(z_{t})\geq g(z_{t})$ and $g(z_{t + 1}) > h(z_{t + 1})$
+2. $g(z_{t})\geq h(z_{t})$ and $h(z_{t + 1}) > g(z_{t + 1})$
+
+We then report $K = K_{f} \cup K_{b} \cup \tilde{K}$ and perform least-squares regression of $Y$ onto the space $S(K)$ .
+
+In effect, AKORN is forced to first think about the data from an online perspective – at this step it forms a qualitative understanding of the ground truth $f$ , summarized by the knot sets $K_{f}$ and $K_{b}$ . It then combines this understanding with offline access to the data in order to produce a fit that is simultaneously attractive and adaptive.
+
+Remark 4.2. Strictly speaking, our proofs require that we also add to $K$ all points in the set $K_{collision} \coloneqq \{x_t : (t < n - 1) \wedge x_{t + 1} \in K_f \cap K_b\}$ . When $n$ , the number of data points, is large and $\sigma > 0$ , it is rare that $K_f$ and $K_b$ share knots, so we mention this step only parenthetically.
+
+# 4.3. Computational Complexity
+
+Using an $O(1)$ -time update rule for each linear expert, ADDLE can be implemented in $O(n^2)$ time. AKORN also runs in $O(n^2)$ . As an aside, we observe that Algorithm 1 can be viewed as an optimized form of ADDLE that adaptively purges the pool of experts. Using a Geometric Cover as in (Baby et al., 2021), it is straightforward to reduce the run-time of ADDLE to $O(n \log n)$ . However, this does not immediately lead to an $O(n \log n)$ runtime for AKORN.
+
+For comparison, the worst-case computational complexity of Trend Filtering is $O(n^{3/2} \log \frac{1}{\epsilon})$ to find an $\epsilon$ -approximate solution (Tibshirani, 2014). This does not take into account the cost of parameter tuning. If we tune parameters heuristically using Stein's Unbiased Risk Estimator at a discretization level $\Delta$ , we need to solve trend filtering $C / \Delta$ times and compute the effective degrees of freedom (dof) of each fit. In general, the only possible a-priori upper bound on $C$ is $C = O(n^2)$ , leading to a generic complexity of $O(n^{7/2} \log 1 / \epsilon)$ . In practice, algorithms for solving trend filtering are extremely fast, and the main computational burden comes from computing dof for several candidate fits.
+
+# 5. Experimental Results
+
+# 5.1. Local adaptivity
+
+As a matter of interest, we begin by noting that the fitted values from ADDLE can be expressed as
+
+$$
+\hat{Y}_{\text{addle}} = W_{\text{addle}} Y
+$$
+
+for some hat-matrix $W_{\text{addle}}$ that depends on the weights of the ADDLE instance. We can do the same for AKORN:
+
+$$
+\hat{Y}_{\text{akorn}} = W_{\text{akorn}} Y
+$$
+
+where $W_{\text{akorn}} = H_K^T (H_K H_K^T)^{-1} H_K$ . Because the bandwidth of these matrices around each diagonal element $(i, i)$ corresponds to the neighborhood of the data used in forming the $i$ th prediction, $\hat{y}_i$ , we dub $W_{\text{addle}}$ and $W_{\text{akorn}}$ "attention maps."
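As a sanity check on this view, one can build a piecewise-linear design for a knot set $K$ and verify that the resulting hat matrix is a projection. The sketch below uses the equivalent design-matrix form $F(F^T F)^{-1}F^T$ of the projection; the truncated-hinge basis is one standard choice for continuous piecewise-linear fits, assumed here for illustration.

```python
import numpy as np

def hinge_design(x, knots):
    """Design matrix for a continuous piecewise-linear function with the
    given interior knots: columns 1, x, and (x - k)_+ for each knot k."""
    cols = [np.ones_like(x), x] + [np.maximum(x - k, 0.0) for k in knots]
    return np.column_stack(cols)

def hat_matrix(x, knots):
    # W = F (F^T F)^{-1} F^T, algebraically the same projection the paper
    # writes as H_K^T (H_K H_K^T)^{-1} H_K
    F = hinge_design(x, knots)
    return F @ np.linalg.solve(F.T @ F, F.T)

x = np.linspace(0.0, 1.0, 50)
W = hat_matrix(x, knots=[0.3, 0.7])
# A projection matrix is symmetric and idempotent; its trace equals the
# dimension of the spline space (here 2 + number of knots = 4)
print(np.allclose(W, W.T), np.allclose(W @ W, W))  # → True True
```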
+
+It is informative to compare these learned attention maps to the static attention induced by local linear regression, as we do in Figure 1 for Donoho & Johnstone (1994)'s "Doppler" function. Unlike local linear regression, we can see that ADDLE has learned how to adaptively optimize a bias-variance trade-off in the neighborhood of each data point by choosing a spatially varying "bandwidth". Similarly, we see that AKORN inherits ADDLE's learned knowledge of the geometry of the ground truth. It is well-known that this kind of local adaptivity is essential to getting optimal rates over $TV$ classes (Donoho & Johnstone, 1994).
+
+Empirical Rates
+
+| | PW-Lin | Doppler | Jump |
+| --- | --- | --- | --- |
+| Oracle TF | -0.95 | -0.84 | -0.37 |
+| AKORN | -1 | -0.91 | -0.7 |
+
+Table 1. Estimated rates, determined as the slopes of the lines corresponding to Oracle TF and AKORN in Figure 2. Each entry gives the exponent $\gamma$ in the rate $O(n^{\gamma})$ .
+
+# 5.2. Performance comparison
+
+While ADDLE's ability to learn the local smoothness of the ground truth is remarkable, it performs comparatively poorly on offline datasets, as we demonstrate in Figure 5 in Appendix G.2. In this section, we show that AKORN is able to use ADDLE's adaptivity while efficiently leveraging offline data. We compare several policies.
+
+1. Oracle linear trend filtering. We solve the variational problem described in Section A for a grid of possible $\lambda$ . We then measure the true MSE against the ground truth, and return the best fit. We emphasize that this policy requires oracle knowledge of the ground truth.
+2. DoF linear trend filtering. We solve the same trend filtering optimization problem for the same set of possible $\lambda$ . We then form the Stein estimate of the risk by estimating the degrees of freedom of each model (Tibshirani & Taylor, 2012), and choose the best fit. Note that this procedure has no theoretical guarantees, is computationally intensive, and can require high-precision arithmetic when the dataset is large.
+3. Wavelets. We use the soft-thresholding estimator of (Donoho & Johnstone, 1998) with Daubechies 2 wavelets. To our knowledge, apart from AKORN, this is the only optimal and parameter-free algorithm for estimating $TV_{1}$ functions.
+4. AKORN. We run AKORN as described in Section 4.2, with failure probability $\delta = 0.1$ .
+
+In Figure 2, we display a log-log plot of the error of each policy for various ground truths, against exponentially increasing values of $n$ . In Table 1, we report the slope of the lines corresponding to Oracle Trend Filtering and AKORN, as estimated by the Linear Least Squares fit. This gives the approximate rate of each estimator. Across all ground truths, we observe that AKORN competes closely with Trend Filtering, despite the fact that the latter is provided with access to the ground truth.
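The slopes reported in Table 1 are ordinary least-squares fits on log-log data. The sketch below shows that estimate on synthetic errors; the $n^{-0.8}$ decay and constants are made up for illustration.

```python
import numpy as np

def empirical_rate(ns, mses):
    """Slope of the least-squares line of log(MSE) against log(n),
    i.e. the exponent gamma in MSE ~ n^gamma."""
    slope, _intercept = np.polyfit(np.log(ns), np.log(mses), 1)
    return slope

ns = np.array([250, 500, 1000, 2000, 4000])
mses = 0.5 * ns ** (-0.8)  # synthetic errors decaying at the rate n^{-4/5}
print(round(empirical_rate(ns, mses), 3))  # → -0.8
```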
+
+The first example in Figure 2, together with the first column in Table 1, suggests that AKORN adaptively achieves the parametric rate $\frac{1}{n}$ on piecewise linear functions, as does trend filtering. The second example in Figure 2 and
+
+MSEs for Doppler Function ($n = 1000$)
+
+| $\sigma$ | Oracle TF | DoF TF | AKORN | Wavelets |
+| --- | --- | --- | --- | --- |
+| 0 | 0 | 0 | 0 | 0 |
+| 0.1 | 0.0004 | 0.0006 | 0.0008 | 0.005 |
+| 0.2 | 0.0012 | 0.0016 | 0.0023 | 0.014 |
+| 0.3 | 0.0023 | 0.003 | 0.0039 | 0.024 |
+| 0.4 | 0.0034 | 0.0041 | 0.0052 | 0.034 |
+| 0.5 | 0.0057 | 0.007 | 0.007 | 0.046 |
+
+Table 2. Values averaged over 20 runs and rounded to the nearest $10^{-4}$ .
+
+Table 1 validates that AKORN achieves the claimed rate of $\tilde{O}(n^{-4/5})$ on spatially heterogeneous functions like the Doppler function of (Donoho & Johnstone, 1994).
+
+The final example in Figure 2 represents runs of each policy on the ground truths $\pmb{\theta}_n = [\mathbf{0}_{n-5}^T, 1, 2, 3, 4, 5]^T$ , where $\mathbf{0}_{n-5} \in \mathbb{R}^{n-5}$ is the zero vector. In this setting, the fast rates for Trend Filtering from (Guntuboyina et al., 2020) do not apply, because the final linear segment of the ground truth is small. Notice that in this case, $TV_1[\pmb{\theta}] = \Theta(n)$ , which means that the rate predicted both by our theory and that of Trend Filtering is $\tilde{O}(n^{-4/5}n^{2/5}) = \tilde{O}(n^{-2/5})$ . While this rate seems to be accurate for Oracle Trend Filtering, our experimental results indicate that AKORN outperforms this rate substantially, as the least-squares slope of the orange line is about $-0.7$ (Table 1). These results indicate that AKORN's enhanced adaptivity leads to favorable performance on highly irregular problem instances. Visually, we see that AKORN's proposed model is much more attractive than that of Trend Filtering.
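The $TV_1[\pmb{\theta}] = \Theta(n)$ claim is easy to check numerically. Under the convention that, for equally spaced design on $[0,1]$, $TV_1$ of a sequence is $n$ times the $\ell_1$ norm of its second differences (a discretization convention assumed here for illustration), the jump ground truth has total variation exactly $n$:

```python
import numpy as np

def tv1(theta, n):
    # TV_1 as n times the l1 norm of second differences, for equally
    # spaced design on [0, 1] (this scaling convention is an assumption)
    return n * np.abs(np.diff(theta, n=2)).sum()

n = 1000
theta = np.concatenate([np.zeros(n - 5), np.arange(1.0, 6.0)])
print(tv1(theta, n))  # → 1000.0
```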
+
+In Table 2, we report the MSEs of each policy for fixed $n$ and various noise levels $\sigma$ . From the table, we gather that AKORN is competitive with both Oracle and DoF Trend Filtering, especially for larger values of $\sigma$ . Wavelets is substantially behind all other policies.
+
+In summary, AKORN's performance is very close to both Oracle Trend Filtering and DoF Trend Filtering across all tests, and it outperforms even Oracle Trend Filtering on such pathologies as the jump function of Figure 2. Wavelet denoising, which is the only other method we know of that provides adaptivity to $TV_{1}[f]$ , is behind the pack empirically. Code is available at github.com/SunilMadhow/AKORN.
+
+# 6. Theoretical Results
+
+We begin by confirming that ADDLE achieves the optimal total square error, $\tilde{O}(n^{1/5}C^{2/5})$ . Note that this implies that the average square error (nearly) matches the optimal rate, $\tilde{O}(n^{-4/5}C^{2/5})$ .
+
+Figure 2. Estimated rates for AKORN and Oracle Trend Filtering formed using 20 Monte-Carlo runs for each $n$ , together with representative fits for $n = 5000$ . $\sigma = 0.3$ , $\delta = 0.1$ . Top left: PW Linear function; Top right: Doppler function; Bottom: Jump function.
+
+Theorem 6.1 (Bound on ADDLE error). Consider equally spaced design points $\{x_{i}\}_{i = 1}^{n}$ and any $f$ with $C\coloneqq TV_{1}[f;\mathcal{D}_{X}] < \infty$ . Let responses $\{y_{t}\}$ come from the model in Equation 1. Let $\{\hat{y}_t\}_{t = 1}^n$ be the predictions generated by ADDLE when fed the data in isotonic order. With probability $1 - \delta$ , the total squared error satisfies:
+
+$$
+\sum_ {t = 1} ^ {n} \left(\hat {y} _ {t} - f (x _ {t})\right) ^ {2} = \tilde {O} \left(n ^ {1 / 5} C ^ {2 / 5}\right)
+$$
+
+where $\tilde{O}$ hides constants (including $\sigma$ ) and polylog factors of $n$ and $\delta$ .
+
+With this result in hand, we are able to prove that the function, $\hat{f}$ , outputted by AKORN also adaptively achieves the rate of $\tilde{O}(n^{-4/5}C^{2/5})$ .
+
+Theorem 6.2 (Bound on AKORN MSE). Consider equally spaced design points, $\{x_{t} = t / n\}_{t = 1}^{n}$ and $f$ such that $C\coloneqq TV_{1}[f;\mathcal{D}_{X}] < \infty$ . Let responses $\{y_t\}$ come from the model in Equation 1. Let $\hat{f}$ be the function returned by AKORN. Then, with probability $1 - \delta$ , the average square error satisfies:
+
+$$
+\frac {1}{n} \sum_ {t = 1} ^ {n} (\hat {f} (x _ {t}) - f (x _ {t})) ^ {2} = \tilde {O} (n ^ {- 4 / 5} C ^ {2 / 5})
+$$
+
+where $\tilde{O}$ hides constants (including $\sigma$ ) and polylog factors of $n$ and $\delta$ .
+
+Let us emphasize the message of Theorem 6.2 by comparing it with the standard guarantees of Locally Adaptive Regression Splines and Trend Filtering (Donoho & Johnstone, 1998; Mammen & van de Geer, 1997; Tibshirani, 2014), each of which provides an algorithm $\mathcal{A}_{\lambda}$ with hyperparameter $\lambda$ so that for any $\alpha \in \mathbb{R}$ , there exists $\lambda (\alpha)$ so that $\mathcal{A}_{\lambda (\alpha)}$ (nearly) achieves the minimax rate $O(\alpha^{2 / 5}n^{-4 / 5})$ over $\{f:\| f\|_{TV_1}\leq \alpha \}$ .
+
+On the other hand, Theorem 6.2 says that AKORN is not only (nearly) minimax over $\{f:TV_1[f]\leq \alpha \}$ in a parameter-free way, but it achieves the instance-dependent rate $\tilde{O} (TV_{1}[f]^{2 / 5}n^{-4 / 5})$ over $\{f:TV_1[f] < \infty \}$ .
+
+In Appendix F, we describe how a modified version of ADDLE can achieve the same rate (up to log terms) when the covariates $\{x_{i}\}_{i = 1}^{n}$ satisfy $\max_{i = 2,\dots,n}|x_i - x_{i - 1}|\leq \frac{\log n}{p_0n}$ . This condition is satisfied with high probability when $x_{i}\stackrel {iid}{\sim}p_{X}(\cdot)$ , where $p_X$ is a density with support on $[0,1]$ that satisfies $p_X(x)\geq p_0 > 0$ for $x\in [0,1]$ (Wang et al., 2014). This implies an optimal variant of AKORN under the same conditions. Detailed theorem statements and proofs can be found in Appendix F.
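The covariate-spacing condition from Appendix F is straightforward to check for a given design. The helper below is illustrative (the paper does not prescribe code for this check):

```python
import numpy as np

def max_gap_ok(x, p0):
    """Check the Appendix F spacing condition: the maximum consecutive
    gap between sorted covariates is at most log(n) / (p0 * n)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    return bool(np.max(np.diff(x)) <= np.log(n) / (p0 * n))

x = np.linspace(0.0, 1.0, 1001)  # equally spaced: every gap is 1/1000
print(max_gap_ok(x, p0=1.0))     # → True
```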
+
+# 7. Proof Sketches
+
+# 7.1. Proof Sketch of Theorem 6.1
+
+The proof of Theorem 6.1 follows along the lines of (Baby et al., 2021). The crucial component is the following classical Lemma from (Hazan & Seshadhri, 2007).
+
+Proposition 7.1 ((Hazan & Seshadhri, 2007), informal). For any interval $I = [r,s]$ in time, the algorithm FLH (Figure 4) with learning rate $\zeta = \alpha$ applied to the loss functions $\ell_t(\cdot) = (\cdot - y_t)^2$ gives $O(\alpha^{-1} (\log r + \log |I|))$ regret against the best base learner in hindsight, where $\alpha$ upper bounds the parameter of exp-concavity for all $\ell_t$ .
+
+For any fixed partition $\mathcal{P}$ of $[n]$ into intervals, this lemma directly controls the quantity
+
+$$
+\sum_ {p = [ r, s ] \in \mathcal {P}} \sum_ {t \in p} (\hat {y} _ {t} - y _ {t}) ^ {2} - (\hat {z} ^ {r} (t) - y _ {t}) ^ {2} \tag {2}
+$$
+
+where $\hat{z}^r (t)$ is the prediction at time $t$ of an online linear regression expert that starts at time $r$ (specifically, an instance of Algorithm 3 in Appendix B).
+
+Appendix D concerns itself with establishing:
+
+1. The existence of a partition, $\mathcal{P}^*$ , of $f$ into $O(n^{1/5}C^{2/5})$ roughly linear chunks, such that Equation (2) is $\tilde{O}(1)$ for each $p \in \mathcal{P}^*$ .
+2. A statistical control on the difference between Equation 2 and the corresponding quantity with noisy responses $y_{t}$ replaced by the ground truth $\theta_t = f(x_t)$ .
+
+One appealing aspect of such proofs is that the heavy lifting is done by approximation-theoretic analysis. That is, we can encode oracle knowledge about the structure of $f$ into the partition $\mathcal{P}^*$ while proving theorems, and be sure that FLH will discover this structure without any additional algorithmic input. This type of adaptivity is the core contribution that online learning can make to statistics. The complete proof for ADDLE is in Appendix D.
+
+# 7.2. Proof Sketch of Theorem 6.2
+
+In order to prove the optimality of $\hat{f} = P_{S(K)}Y$ , we begin by establishing properties about the knotset $K$ . Recall that $K = K_{f} \cup K_{b} \cup \tilde{K}$ , where $K_{f}$ and $K_{b}$ are generated according to the online passes of Algorithm 1. Using the optimality of ADDLE (Theorem 6.1), we can prove the following "Change-point Detection Lemma," which tells us that $K_{f}$ (and $K_{b}$ ) divide $f$ into a small number of piecewise linear chunks.
+
+Lemma 7.2 (Change-point Detection Lemma (Lemma C.1 informal)). With high probability, we have
+
+$$
+\left| K _ {f} \right| \lesssim \max \left\{n ^ {1 / 5} C ^ {2 / 5}, 1 \right\}
+$$
+
+and for all $k_i^f \in K_f$ there is a linear function $w^i$ defined on $[k_i^f, k_{i+1}^f)$ such that
+
+$$
+\sum_ {t = k _ {i} ^ {f}} ^ {k _ {i + 1} ^ {f} - 1} (w ^ {i} (x _ {t}) - \theta_ {t}) ^ {2} \lesssim 1 + n _ {i} ^ {1 / 5} T V _ {1} [ \pmb {\theta} ^ {i} ] ^ {2 / 5}
+$$
+
+While the adaptivity of an online algorithm is typically evaluated on the basis of its regret bound, the Change-point Detection Lemma tells us that online algorithms are implicitly making fairly deep inferences about their data. In particular, ADDLE provides not only predictions with small error, but also access to a sparse set of change-points that encode an optimal bias/variance tradeoff around each covariate.
+
+The properties established in the Change-point Detection Lemma quickly imply that back-fitting piecewise linear functions on either $K_{f}$ or $K_{b}$ gives the optimal rate. In particular, the smallness of $K_{f}$ (resp. $K_{b}$ ) allows us to control the variance of $P_{F(K_f)}Y$ (resp. $P_{F(K_b)}Y$ ), while the approximate linearity property allows us to control $\| \mathbb{E}[P_{F(K_f)}Y] - \pmb{\theta}\|_2^2$ (resp. $\| \mathbb{E}[P_{F(K_b)}Y] - \pmb{\theta}\|_2^2$ ) (Lemmas C.3, C.4 and C.5 in the Appendix). This certifies that $g = P_{F(K_f)}Y$ and $h = P_{F(K_b)}Y$ have total square error scaling as $\tilde{O}(n^{1/5}C^{2/5})$ . While $g$ and $h$ are, in general, discontinuous (and therefore improper estimates of the ground truth $f$ ), we rely on their existence later in the proof. We summarize in the following Lemma:
+
+Lemma 7.3. [Corollary C.6 (informal)] With high probability, we have
+
+$$
+\left\| P _ {F \left(K _ {f}\right)} Y - \boldsymbol {\theta} \right\| _ {2} ^ {2} = \tilde {O} \left(n ^ {1 / 5} C ^ {2 / 5}\right)
+$$
+
+and
+
+$$
+\left\| P _ {F \left(K _ {b}\right)} Y - \boldsymbol {\theta} \right\| _ {2} ^ {2} = \tilde {O} \left(n ^ {1 / 5} C ^ {2 / 5}\right)
+$$
+
+Turning our attention to $\hat{f} = P_{S(K)}Y$ , Lemma C.2 provides us with the following bound, which relates the square error of $\hat{f}$ with that of a fictitious estimator $\hat{f}_f = P_{F(K)}Y$ .
+
+$$
+\left\| P _ {S (K)} Y - \boldsymbol {\theta} \right\| _ {2} ^ {2} \leq 2 \left\| P _ {F (K)} Y - \boldsymbol {\theta} \right\| _ {2} ^ {2} + 2 \left\| P _ {S (K)} \boldsymbol {\theta} - \boldsymbol {\theta} \right\| _ {2} ^ {2} \tag {3}
+$$
+
+This is crucial because we are able to cover the space of possible knot sets $K$ in the first term on the right hand side by covering intervals independently. The first term is easily bounded using the methodology of Lemma 7.3. The second term is free from dependence on the responses, $Y$ , and is bounded using the following approximation theoretic lemma, which asserts the existence of a linear spline $s \in S(K_f \cup K_b \cup \tilde{K})$ whose curve lies in between the curves of any two functions $g \in F(K_f)$ and $h \in F(K_b)$ .
+
+Lemma 7.4. [Lemma C.7 (informal)] If $K_{f} \cap K_{b} = \emptyset$ then for all $g \in F(K_{f})$ and $h \in F(K_{b})$ there exists $s \in S(K)$ so that for all $x \in \mathcal{D}_X$ there exists $\lambda_{x} \in [0,1]$ with
+
+$$
+s (x) = \lambda_ {x} g (x) + (1 - \lambda_ {x}) h (x)
+$$
+
+Finally, we apply Lemma 7.4 with $g = P_{F(K_f)}Y$ and $h = P_{F(K_b)}Y$ , in order to assert the existence of $s \in S(K)$ with small error. Concretely, by the convexity of the square loss, we have in Equation 3
+
+$$
+\left\| P _ {S (K)} \boldsymbol {\theta} - \boldsymbol {\theta} \right\| _ {2} ^ {2} \leq \| s - \boldsymbol {\theta} \| _ {2} ^ {2} \leq \left\| g - \boldsymbol {\theta} \right\| _ {2} ^ {2} + \left\| h - \boldsymbol {\theta} \right\| _ {2} ^ {2} = \tilde {O} \left(n ^ {1 / 5} C ^ {2 / 5}\right)
+$$
+
+The complete proof for Theorem 6.2 is in Appendix C.
+
+# 8. Conclusion and Future Work
+
+The main contribution of this paper is AKORN, a parameter-free algorithm which uses ideas from online learning to produce a function $\hat{f}$ that (a) empirically competes with the output of linear Trend Filtering, even when the latter is given oracle access to the ground truth for hyperparameter tuning, (b) achieves the optimal rate for $TV_{1}$ -bounded functions by adapting to the local smoothness of $f$ , and (c) operates by means of a reduction from knot-selection to adaptive online prediction.
+
+A major limitation of AKORN is that it does not handle higher order $TV_{k}$ classes. An extension to arbitrary $k > 1$ would place AKORN on even footing with Trend Filtering theoretically. The key challenge here is generalizing Lemma 7.4 to splines of higher degree. Another limitation of AKORN is its $O(n^{2})$ runtime. We believe that a more careful reduction to a geometric cover version of ADDLE (Baby et al., 2021) may lead to an $O(n\log n)$ algorithm, though in practice this may come at the cost of additional MSE.
+
+Online methods have long promised to expand the scope of theory by eliminating assumptions on optimally tuned parameters and enhancing adaptivity to problem instances (Cutkosky & Orabona, 2018; Cutkosky et al., 2023). If, as AKORN suggests, such methods can be modified to compete ex-situ with state-of-the-art offline algorithms, offline-to-online reductions could yield new theorems whose hypotheses are more likely to hold in real-world scenarios.
+
+# Acknowledgments
+
+We thank the anonymous reviewers for their thoughtful feedback, which helped improve the clarity and presentation of this paper. This work was partially supported by NSF Awards #2134214 and #2007117.
+
+# Impact Statement
+
+This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
+
+# References
+
+Baby, D. and Wang, Y. Online forecasting of total-variation-bounded sequences. In Advances in Neural Information Processing Systems, 2019.
+
+Baby, D. and Wang, Y. Optimal dynamic regret in exp-concave online learning. In Conference on Learning Theory, Proceedings of Machine Learning Research, 2021.
+Baby, D. and Wang, Y. Optimal dynamic regret in proper online learning with strongly convex losses and beyond. 2022.
+Baby, D. and Wang, Y. Second order path variationals in non-stationary online learning. In International Conference on Artificial Intelligence and Statistics, 2023.
+Baby, D. and Wang, Y.-X. Adaptive online estimation of piecewise polynomial trends. In Advances in Neural Information Processing Systems, 2020.
+Baby, D., Zhao, X., and Wang, Y. An optimal reduction of tv-denoising to adaptive online learning. In International Conference on Artificial Intelligence and Statistics, 2021.
+Baby, D., Garg, S., Yen, T., Balakrishnan, S., Lipton, Z. C., and Wang, Y. Online label shift: Optimal dynamic regret meets practical algorithms. In Advances in Neural Information Processing Systems, 2023.
+Beygelzimer, A., Langford, J., Li, L., Reyzin, L., and Schapire, R. E. Contextual bandit algorithms with supervised learning guarantees. In International Conference on Artificial Intelligence and Statistics, volume 15, 2011.
+Cesa-Bianchi, N. and Lugosi, G. Prediction, learning, and games. Cambridge University Press, 2006. ISBN 978-0-521-84108-5. doi: 10.1017/CBO9780511546921.
+Chatterjee, S. and Goswami, S. Spatially adaptive online prediction of piecewise regular functions. In International Conference on Algorithmic Learning Theory, 2023.
+Chaudhuri, K., Freund, Y., and Hsu, D. J. A parameter-free hedging algorithm. In Advances in Neural Information Processing Systems, 2009.
+Cutkosky, A. and Orabona, F. Black-box reductions for parameter-free online learning in banach spaces. In _Conference On Learning Theory_, 2018.
+Cutkosky, A., Mehta, H., and Orabona, F. Optimal stochastic non-smooth non-convex optimization through online-to-non-convex conversion. In International Conference on Machine Learning, 2023.
+Daniely, A., Gonen, A., and Shalev-Shwartz, S. Strongly adaptive online learning. In International Conference on Machine Learning, 2015.
+Donoho, D. and Johnstone, I. Ideal spatial adaptation by wavelet shrinkage. Biometrika, 81(3), 1994.
+
+Donoho, D. L. and Johnstone, I. M. Adapting to unknown smoothness via wavelet shrinkage. Journal of the American Statistical Association, 90(432), 1995.
+Donoho, D. L. and Johnstone, I. M. Minimax estimation via wavelet shrinkage. The Annals of Statistics, 1998.
+Foster, D. J., Rakhlin, A., and Sridharan, K. Adaptive online learning. Advances in Neural Information Processing Systems, 2015.
+Friedman, J. H. Multivariate Adaptive Regression Splines. The Annals of Statistics, 19(1), 1991.
+Goepp, V., Bouaziz, O., and Nuel, G. Spline regression with automatic knot selection. Computational Statistics Data Analysis, 202, 2025.
+Gradu, P., Hazan, E., and Minasyan, E. Adaptive regret for control of time-varying dynamics. In Proceedings of The 5th Annual Learning for Dynamics and Control Conference, 2023.
+Guntuboyina, A., Lieu, D., Chatterjee, S., and Sen, B. Adaptive risk bounds in univariate total variation denoising and trend filtering. The Annals of Statistics, 2020.
+Hazan, E. and Seshadhri, C. Adaptive algorithms for online decision problems. *Electron. Colloquium Comput. Complex.*, TR07-088, 2007.
+Hazan, E. and Seshadhri, C. Efficient learning algorithms for changing environments. In Proceedings of the 26th Annual International Conference on Machine Learning, 2009.
+Hazan, E., Kalai, A., Kale, S., and Agarwal, A. Logarithmic regret algorithms for online convex optimization. 2006.
+Kim, S.-J., Koh, K., Boyd, S., and Gorinevsky, D. $\ell_{1}$ trend filtering. SIAM review, 51(2), 2009.
+Luo, Z. and Wahba, G. Hybrid adaptive splines. Journal of the American Statistical Association, 92(437), 1997.
+Mammen, E. and van de Geer, S. Locally adaptive regression splines. The Annals of Statistics, 25, 1997.
+Muller, H.-G. Change-Points in Nonparametric Regression Analysis. The Annals of Statistics, 20(2), 1992.
+Orabona, F. Simultaneous model selection and optimization through parameter-free stochastic learning. In Advances in Neural Information Processing System, 2014.
+Parhi, R. and Nowak, R. D. Banach space representer theorems for neural networks and ridge splines. Journal of Machine Learning Research, 22(43), 2021.
+
+Rakhlin, A. and Sridharan, K. Online nonparametric regression. In Conference on Learning Theory, 2014.
+Rhee, W. and Talagrand, M. Uniform bound in the central limit theorem for banach space valued dependent random variables. Journal of Multivariate Analysis, 20(2), 1986.
+Rosenfeld, E., Ravikumar, P., and Risteski, A. An online learning approach to interpolation and extrapolation in domain generalization. In International Conference on Artificial Intelligence and Statistics, 2022.
+Sadhanala, V. glmgen. URL https://github.com/glmgen/glmgen.
+Sadhanala, V. and Tibshirani, R. J. Additive models with trend filtering. The Annals of Statistics, 47, 2019.
+Sherman, J. and Morrison, W. J. Adjustment of an inverse matrix corresponding to a change in one element of a given matrix. The Annals of Mathematical Statistics, 1950.
+Streeter, M. J. and McMahan, H. B. No-regret algorithms for unconstrained online convex optimization. In Advances in Neural Information Processing Systems, 2012.
+Tibshirani, R. J. Adaptive piecewise polynomial estimation via trend filtering. The Annals of Statistics, 42(1), 2014.
+Tibshirani, R. J. Degrees of freedom and model search. Statistica Sinica, 2015.
+Tibshirani, R. J. Divided differences, falling factorials, and discrete splines: Another look at trend filtering and related problems, 2022.
+Tibshirani, R. J. and Taylor, J. Degrees of freedom in lasso problems. The Annals of Statistics, 40, 2012.
+Wand, M. P. A comparison of regression spline smoothing procedures. Comput. Stat., 15(4), 2000.
+Wang, Y.-X., Smola, A., and Tibshirani, R. The falling factorial basis and its statistical applications. In International Conference on Machine Learning, 2014.
+Wu, R., Guo, C., Su, Y., and Weinberger, K. Q. Online adaptation to label distribution shift. In Advances in Neural Information Processing Systems 34: Annual Conference, 2021.
+Zhang, K. and Wang, Y.-X. Deep learning meets nonparametric regression: Are weight-decayed dnns locally adaptive? In International Conference on Learning Representations, 2022.
+
+1. For $t = 1 \dots T$
+
+(a) Player plays an action $z_{t} \in V$
+(b) Universe chooses a loss function $f_{t}$
+(c) Player suffers loss $f_{t}(z_{t})$
+
+Figure 3. Online interaction protocol
+
+# A. More Background on Related Work
+
+One solution to the non-parametric problem of Equation 1 is given by $k$ -th order trend filtering (Kim et al., 2009; Tibshirani, 2014), which efficiently solves the minimization problem
+
+$$
+\hat {f} _ {t f} = \underset {g \in \mathcal {U} _ {n} ^ {k}} {\arg \min } \sum_ {i = 1} ^ {n} \left(y _ {i} - g \left(x _ {i}\right)\right) ^ {2} + \lambda T V _ {k} (g) \tag {4}
+$$
+
+where $\mathcal{U}_n^k$ is the span of a certain collection of functions called the falling factorial basis functions (Wang et al., 2014; Tibshirani, 2022). For $\lambda = \Theta(n^{\frac{1}{2k + 3}}C^{\frac{-(2k + 1)}{2k + 3}})$ , the loss of $\hat{f}_{tf}$ satisfies
+
+$$
+\frac {1}{n} \sum_ {i = 1} ^ {n} (f (x _ {i}) - \hat {f} (x _ {i})) ^ {2} = \tilde {O} _ {P} (n ^ {\frac {- (2 k + 2)}{(2 k + 3)}} C ^ {\frac {2}{2 k + 3}})
+$$
+
+where $C$ is an upper bound on $TV_{k}(f)$ ; this is known to be the minimax-optimal rate. Crucially, in order to benefit from tight bounds, a practitioner would need to choose $\lambda$ with knowledge of the smoothness of the ground truth, $f$ . Said another way, Trend Filtering is optimal only over the class of functions $\mathcal{F}_k(C) = \{f:TV_k(f)\leq C\}$ and requires knowledge of $C$ . For a comprehensive treatment of the theory underlying Trend Filtering, which is the state-of-the-art solution for regression over $TV_{k}$ classes, we refer the reader to (Tibshirani, 2022).
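For intuition, the $k = 1$ instance of Equation 4 can be written down directly in its discrete form. Minimizing it requires a dedicated solver (e.g. glmgen), so the sketch below only evaluates the objective; the penalty's scaling convention is an assumption made for illustration.

```python
import numpy as np

def tf_objective(g, y, lam):
    """Discrete k=1 trend filtering objective: squared loss plus an l1
    penalty on second differences of the fitted values (a sketch; the
    exact penalty scaling varies by convention)."""
    return np.sum((y - g) ** 2) + lam * np.abs(np.diff(g, n=2)).sum()

y = np.array([0.0, 1.0, 2.1, 2.9, 4.0])
line = np.array([0.0, 1.0, 2.0, 3.0, 4.0])  # exactly linear fit
print(tf_objective(line, y, lam=1.0))  # squared loss only: zero TV_1 penalty
```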
+
+Regression over non-parametric classes is a much broader field, with a well-developed theory of minimaxity (Donoho & Johnstone, 1994; Mammen & van de Geer, 1997; Rakhlin & Sridharan, 2014; Tibshirani, 2014; Baby & Wang, 2019). In particular, (Rakhlin & Sridharan, 2014) observes that the complexity of the online problem typically does not differ too much from that of the offline problem.
+
+$TV_{k}$ function classes admit functions whose smoothness varies spatially, which makes the estimation problem especially challenging. While Holder or Sobolev functions can be optimally estimated by linear smoothers, optimal estimators for $TV_{k}$ must display local adaptivity to the smoothness of the ground truth (Muller, 1992; Donoho & Johnstone, 1994; Mammen & van de Geer, 1997; Tibshirani, 2014).
+
+# A.1. Online learning
+
+Online learning studies algorithms for playing the game in Figure 3.
+
+The goal is to choose actions such that the cumulative loss is small (with respect to some comparator). This setting is entirely non-stochastic, and we therefore measure an algorithm's performance in terms of the regret of its predictions against a comparator:
+
+$$
+\operatorname{Regret} \left(z _ {1}, \dots , z _ {T} \mid u\right) = \sum_ {t = 1} ^ {T} \left(f _ {t} \left(z _ {t}\right) - f _ {t} (u)\right)
+$$
+
+If we let $f_{t}(\cdot) = (\cdot - y_{t})^{2}$ , we have a stochastic relaxation of the adversarial setting. When specialized to stochastic/batch settings, OL algorithms often enjoy remarkable adaptivity to problem features (Orabona, 2014; Baby et al., 2021; Wu et al., 2021; Cutkosky et al., 2023). In our setting, particularly relevant is the Aligator algorithm (Baby et al., 2021), which addresses online nonparametric denoising over $TV_{0}$ by using an expert aggregation algorithm to adaptively select the best window in which to perform averaging for each $x_{t}$ . For $k = 0$ , Aligator's predictions incur the optimal error of $\tilde{O}(n^{1/3}C^{1/3})$ .
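As a tiny concrete instance of this protocol with squared loss, consider the follow-the-leader strategy that predicts the running mean of past responses (predicting 0 before any data arrive, as the regression experts in Appendix B do). Its regret against the best fixed constant in hindsight can be computed directly; the sequence below is illustrative:

```python
def ftl_regret(ys):
    """Regret of the running-mean predictor against the best fixed
    constant in hindsight, under squared loss f_t(z) = (z - y_t)^2."""
    ys = [float(v) for v in ys]
    total = 0.0
    for t, y in enumerate(ys):
        pred = sum(ys[:t]) / t if t > 0 else 0.0  # running mean of the past
        total += (y - pred) ** 2
    mean = sum(ys) / len(ys)
    best = sum((y - mean) ** 2 for y in ys)       # best fixed action u = mean
    return total - best

print(ftl_regret([1.0, 3.0]))  # → 3.0
```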
+
+Central to our work is the theory of adaptivity in OL (Hazan & Seshadhri, 2009; Daniely et al., 2015), which seeks to guarantee that the regret within each interval is small with respect to some set of comparators. In particular, we will make use of the Follow-the-Leading-History expert aggregation algorithm from (Hazan et al., 2006), which has the remarkable property that, in any interval, the aggregate prediction competes with each individual expert.
+
+Online forecasting over $TV_{k}$ -bounded sequences has also been addressed for $k = 0,1$ in the fully adversarial setting in a sequence of papers by (Baby & Wang, 2021; Baby et al., 2021; Baby & Wang, 2023).
+
+More broadly, it is widely believed that hyperparameters, which can typically only be tuned heuristically, are responsible for large gaps between learning theory and practice across the entire discipline of Machine Learning (Chaudhuri et al., 2009). This has led to a growing movement to design parameter-free algorithms, whose theoretical guarantees can reasonably be expected to hold in real-world scenarios (Cutkosky & Orabona, 2018; Chaudhuri et al., 2009; Orabona, 2014).
+
+# B. Algorithm descriptions
+
+The following algorithm, due to Hazan & Seshadhri (2007), is called Follow-the-Leading-History (FLH). ADDLE operates by running FLH with online linear regression experts.
+
+FLH: inputs - Learning rate $\zeta$ and $n$ base learners $E^1, \ldots, E^n$
+
+1. For each $t$ , $v_{t} = (v_{t}^{(1)}, \ldots, v_{t}^{(t)})$ is a probability vector in $\mathbb{R}^t$ . Initialize $v_{1}^{(1)} = 1$ .
+2. In round $t$ , set $\forall j \leq t$ , $x_{t}^{(j)} \gets E^{j}(t)$ (the prediction of the $j^{th}$ base learner at time $t$ ). Play $x_{t} = \sum_{j=1}^{t} v_{t}^{(j)} x_{t}^{(j)}$ .
+3. After receiving $f_{t}$ , set $\hat{v}_{t + 1}^{(t + 1)} = 0$ and perform update for $1\leq i\leq t$ :
+
+$$
+\hat {v} _ {t + 1} ^ {(i)} = \frac {v _ {t} ^ {(i)} e ^ {- \zeta f _ {t} \left(x _ {t} ^ {(i)}\right)}}{\sum_ {j = 1} ^ {t} v _ {t} ^ {(j)} e ^ {- \zeta f _ {t} \left(x _ {t} ^ {(j)}\right)}} \tag {5}
+$$
+
+4. Addition step - Set $v_{t+1}^{(t+1)}$ to $1/(t+1)$ and for $i \neq t+1$ :
+
+$$
+v _ {t + 1} ^ {(i)} = (1 - (t + 1) ^ {- 1}) \hat {v} _ {t + 1} ^ {(i)} \tag {6}
+$$
+
+Figure 4. FLH algorithm (copied verbatim from (Baby & Wang, 2021))
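The weight recursion of Figure 4 (Equations 5 and 6) can be sketched compactly, with per-round expert losses supplied as a table; the loss values below are illustrative:

```python
import numpy as np

def flh_weights(expert_losses, zeta):
    """Run the FLH weight recursion over experts' per-round losses.
    expert_losses[j][t] = loss of expert j (born at round j+1) at round t+1;
    returns the final weight vector (a sketch of Figure 4's updates)."""
    T = len(expert_losses)
    v = np.array([1.0])                       # round 1: all mass on expert 1
    for t in range(1, T):
        losses = np.array([expert_losses[j][t] for j in range(t)])
        v_hat = v * np.exp(-zeta * losses)    # multiplicative update, Eq. (5)
        v_hat /= v_hat.sum()
        # addition step, Eq. (6): mix in the newly born expert
        v = np.append((1.0 - 1.0 / (t + 1)) * v_hat, 1.0 / (t + 1))
    return v

# Expert 1 always suffers loss 1; later-born experts suffer loss 0,
# so mass should drain away from expert 1
T = 6
losses = [[1.0] * T] + [[0.0] * T for _ in range(T - 1)]
w = flh_weights(losses, zeta=1.0)
print(np.isclose(w.sum(), 1.0), w[0] < w[1])  # → True True
```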
+
+In the course of the paper, we will make references to both bounded linear regression experts and unbounded linear regression experts, described in Algorithms 3 and 4 respectively.
+
+Algorithm 5 gives the full description of ADDLE, which technically requires us to use the bounded linear regression expert.
+
+Algorithm 3 Bounded Linear Regression Expert
+Input: history $\mathcal{H}\subset (\mathcal{X}\times [0,1])^*$ , feature $x$ , bound $B$ {Generates a $B$ -bounded prediction given history $\mathcal{H}$ }
+if $\mathcal{H}$ is empty then Output 0
+else if $\mathcal{H} = \{(x_1^{\mathcal{H}},y_1^{\mathcal{H}})\}$ then Output $[y_1^{\mathcal{H}}]_{[-B,B]}$ $\{[z]_{[A,B]}$ clips a real number $z$ between $A$ and $B\}$
+else $\hat{l}\gets$ linear least squares fit on $\mathcal{H}$
+Output $[\hat{l} (x)]_{[-B,B]}$
+end if
+
+Algorithm 4 Linear regression expert
+Input: history $\mathcal{H}\subset (\mathcal{X}\times [0,1])^*$ , feature $x$ {Generates prediction given history $\mathcal{H}$ }
+if $\mathcal{H}$ is empty then Output 0
+else if $\mathcal{H} = \{(x_1^{\mathcal{H}},y_1^{\mathcal{H}})\}$ then Output $y_{1}^{\mathcal{H}}$
+else $\hat{l}\gets$ linear least squares fit on $\mathcal{H}$
+Output $\hat{l} (x)$
+end if
+
+Algorithm 5 ADDLE
+input $\mathcal{D},\sigma$ $B\gets \max_{i}|y_{i}| + \max \{\sigma \sqrt{2\log 4n / \delta},1\}$ Run FLH with learning rate $\eta = \frac{1}{8(1 + \sigma\sqrt{\log 2n / \delta})^2}$ , base learners $E^{j}(t) = \mathrm{Predict}(\{(x_{k},y_{k})_{k = j}^{t - 1},x_{t},B)$
+
+# C. Proofs for AKORN
+
+# C.1. Properties of Knot Selection
+
+For $\theta$ fixed, and some interval $I = [r,s] \subset [n]$ , we introduce the notation $TV_{1}(I) \coloneqq TV_{1}[\theta[r:s]]$ . In the following lemma, we prove that Algorithm 1 induces a not-too-large partition of $f$ into roughly linear segments.
+
+Lemma C.1 (Compare to Theorem 18 in (Baby et al., 2023)). Consider the set of knots, $K_{0} = \{k_{1},\ldots k_{l},k_{l + 1}\coloneqq x_{n + 1}\}$ , output by Algorithm 1 when exposed to the data isotonically, as well as the induced partition
+
+$$
+\mathcal {P} = \left\{p _ {i} := \left[ k _ {i}, \dots k _ {i + 1} - 1 \right] \mid i \in [ l ] \right\}
+$$
+
+Let $n_i$ be the size of $p_i$ .
+
+Then, for any $\delta > 0$ , there is an event $\mathcal{E}_1(\delta)$ which holds with probability at least $1 - \delta$ and upon which
+
+$$
+| \mathcal {P} | \lesssim \max \left\{n ^ {1 / 5} C ^ {2 / 5}, 1 \right\}
+$$
+
+and
+
+$$
+\forall i \; \exists w ^ {i} \text { linear}: \sum_ {t \in p _ {i}} (w ^ {i} (x _ {t}) - \theta_ {t}) ^ {2} \lesssim 1 + n _ {i} ^ {1 / 5} T V _ {1} (p _ {i}) ^ {2 / 5}
+$$
+
+where $\lesssim$ hides constants and polylog factors of $\delta, n$ .
+
+Proof. We follow the methodology of Theorem 18 from (Baby et al., 2023). Suppose that at time $t$ , the condition inside the if-statement of Algorithm 1 triggers for the $i$ th time. We start by bounding $C_i \coloneqq TV_1([b, t])$ .
+
+$$
+\begin{array}{l} \sum_ {j = b + 1} ^ {t} \left(\hat {w} _ {j} ^ {t} - \tilde {\theta} _ {j}\right) ^ {2} = \sum_ {j = b + 1} ^ {t} \left(\hat {w} _ {j} ^ {t} - \mathbb {E} [ \hat {w} _ {j} ^ {t} ] + \mathbb {E} [ \hat {w} _ {j} ^ {t} ] - \theta_ {j} + \theta_ {j} - \tilde {\theta} _ {j}\right) ^ {2} \\ \leq 2 \sum_ {j = b + 1} ^ {t} \left(\hat {w} _ {j} ^ {t} - \mathbb {E} \left[ \hat {w} _ {j} ^ {t} \right]\right) ^ {2} + 4 \sum_ {j = b + 1} ^ {t} \left(\mathbb {E} \left[ \hat {w} _ {j} ^ {t} \right] - \theta_ {j}\right) ^ {2} + 4 \sum_ {j = b + 1} ^ {t} \left(\theta_ {j} - \tilde {\theta} _ {j}\right) ^ {2} \tag {7} \\ \end{array}
+$$
+
+By Lemma C.10, the first term is bounded by $4\sigma^2 \log(1/\delta)$ with probability $1 - \delta$ . The second term is bounded by $C_i^2 n_i^3 / n^2$ , which can be shown by direct computation, as in Appendix E. By Theorem D.1 (Optimality of ADDLE), the final term is bounded (with probability $1 - \delta$ ) by $\iota n_i^{3/5} C_i^{2/5} / n^{2/5}$ , where $\iota$ contains constants and log-factors. Union bounding and then combining this with the condition inside the if statement, we see that with probability $1 - \delta$ we have:
+
+$$
+4 \sigma^ {2} \log (4 / \delta) + 4 \max \left\{\iota C _ {i} ^ {2 / 5} n _ {i} ^ {3 / 5} / n ^ {2 / 5}, C _ {i} ^ {2} n _ {i} ^ {3} / n ^ {2} \right\} \geq 5 \sigma^ {2} \log 4 / \delta \tag {8}
+$$
+
+Regardless of which value the maximum takes, we conclude that
+
+$$
+C _ {i} \gtrsim n / n _ {i} ^ {3 / 2} \tag {9}
+$$
+
+At the same time, on this same event, we have by definition that
+
+$$
+\sum_ {j = b + 1} ^ {t} \left(\hat {w} _ {j} ^ {t} - \theta_ {j}\right) ^ {2} \leq 2 \sum_ {j = b + 1} ^ {t} \left(\hat {w} _ {j} ^ {t} - \tilde {\theta} _ {j}\right) ^ {2} + 2 \sum_ {j = b + 1} ^ {t} \left(\tilde {\theta} _ {j} - \theta_ {j}\right) ^ {2} \lesssim 1 + n _ {i} ^ {1 / 5} C _ {i} ^ {2 / 5}
+$$
+
+We may now cover all $n^2$ possible intervals, which amounts to adding an $n^2$ in the log and yields $\mathcal{E}_1(\delta)$ .
+
+Now let $M - 1$ be the total number of times that the if-block is executed. So $M = |\mathcal{P}|$ is the number of bins spawned. Returning to Equation 9, Jensen's inequality with $\phi : x \mapsto x^{-3/2}$ gives
+
+$$
+\begin{array}{l} C \geq \sum_ {i = 1} ^ {M - 1} C _ {i} \\ \gtrsim \sum_ {i = 1} ^ {M - 1} \frac {n}{n _ {i} ^ {3 / 2}} \\ = \sum_ {i = 1} ^ {M - 1} \phi \left(\frac {n _ {i}}{n ^ {\frac {2}{3}} (M - 1) ^ {\frac {2}{3}}}\right) \frac {1}{(M - 1)} \\ \geq \phi (\sum_ {i = 1} ^ {M - 1} \frac {n _ {i}}{n ^ {\frac {2}{3}} (M - 1) ^ {\frac {5}{3}}}) \\ \geq \phi (n ^ {\frac {1}{3}} / (M - 1) ^ {\frac {5}{3}}) = (M - 1) ^ {\frac {5}{2}} / n ^ {\frac {1}{2}} \\ \end{array}
+$$
+
+So for $M > 1$ , we have
+
+$$
+| \mathcal {P} | = M \lesssim M - 1 \lesssim C ^ {2 / 5} n ^ {1 / 5}
+$$
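
The Jensen step can be sanity-checked numerically: for any partition of $n$ points into $M-1$ bins of sizes $n_i$ , we should have $\sum_i n / n_i^{3/2} \geq (M-1)^{5/2}/\sqrt{n}$ . A small randomized check (our own illustration; the seed and parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
worst_slack = np.inf
for _ in range(100):
    M = int(rng.integers(2, 50))
    # random positive bin sizes summing to n
    sizes = rng.dirichlet(np.ones(M - 1)) * n
    lhs = float(np.sum(n / sizes ** 1.5))       # sum_i n / n_i^{3/2}
    rhs = (M - 1) ** 2.5 / np.sqrt(n)           # (M-1)^{5/2} / sqrt(n)
    worst_slack = min(worst_slack, lhs - rhs)
```

Equality is attained at the uniform partition $n_i = n/(M-1)$ , matching the convexity argument.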
+
+# C.2. Introducing the fictitious estimator: $P_{S(K)}Y$ versus $P_{F(K)}Y$
+
+In our analysis, it is necessary to introduce a fictitious estimator that conducts independent fits within each partition. This will allow us to cover knot-sets more efficiently. The following lemma relates the error of $\hat{f}$ to that of the fictitious estimator, $\hat{f}_f$ . Recall the definitions of $F(K)$ and $S(K)$ for knot-sets $K$ from Section 3.2. Note that, in the following lemma, we abuse notation by identifying $\hat{f}$ with $[\hat{f}(x_1), \dots, \hat{f}(x_n)]^T$ (and likewise for $\hat{f}_f$ ), which allows us to use the notation $P_{S(K)}f(z)$ for $[1, (z - x_1)_+, g_{k_1}(z), \dots, g_{k_l}(z)](H_K H_K^T)^{-1}H_K\pmb{\theta} = \mathbb{E}_{\mathcal{D}_Y}[\hat{f}(z)]$ for fixed $K$ .
+
+Lemma C.2. Let $K = \{k_1, \ldots, k_l\}$ be a (possibly random) set of knot points in the data, and $Y$ be the vector of responses.
+
+Let $F(K)$ be the subspace of vectors in $\mathbb{R}^n$ representable as evaluations (on the data) of piecewise-linear functions with optional discontinuities at the points of $K$ . Let $S(K)$ be the subspace of vectors in $\mathbb{R}^n$ representable as evaluations (on the data) of linear splines with knots in $K$ . Let $P_{(\cdot)}$ be the corresponding projection map. Now consider the following two estimators:
+
+1. $\hat{f}_f$ is the fictitious estimator, whose predictions are given by:
+
+$$
+[ \hat {f} _ {f} (x _ {1}) \dots \hat {f} _ {f} (x _ {n}) ] ^ {T} = P _ {F (K)} Y
+$$
+
+2. $\hat{f}$ is AKORN's output, whose predictions are given by:
+
+$$
+\left[ \hat {f} \left(x _ {1}\right)... \hat {f} \left(x _ {n}\right) \right] ^ {T} = P _ {S (K)} Y
+$$
+
+Then the following holds deterministically
+
+$$
+\sum_ {i = 1} ^ {n} (\hat {f} (x _ {i}) - f (x _ {i})) ^ {2} \leq 2 \sum_ {i = 1} ^ {n} (\hat {f} _ {f} (x _ {i}) - f (x _ {i})) ^ {2} + 2 \sum_ {i = 1} ^ {n} (P _ {S (K)} f (x _ {i}) - f (x _ {i})) ^ {2}
+$$
+
+Proof. We abuse notation by letting $\hat{f} \coloneqq [\hat{f}(x_1), \dots, \hat{f}(x_n)]$ and $\hat{f}_f = [\hat{f}_f(x_1), \dots, \hat{f}_f(x_n)]$ and suppressing the dependence of the spaces $S(K)$ and $F(K)$ on $K$ . Now, with $\| \cdot \| = \| \cdot \|_2$ , we obtain
+
+$$
+\begin{array}{l} \| \hat {f} _ {f} - f \| \geq \| P _ {S} \hat {f} _ {f} - P _ {S} f \| = \| P _ {S} P _ {F} Y - P _ {S} f \| = \\ \| P _ {S} Y - P _ {S} f \| = \| \hat {f} - P _ {S} f \| \\ \geq | | \hat {f} - f | | - | | P _ {S} f - f | | \\ \end{array}
+$$
+
+where the first inequality holds because projections are contractions, and the second equality holds because $S \subset F$ (so that $P_S P_F = P_S$ ). The final inequality is the reverse triangle inequality. Rearranging, we obtain
+
+$$
+\| \hat {f} - f \| \leq \| \hat {f} _ {f} - f \| + \| P _ {S} f - f \|
+$$
+
+Squaring both sides and applying $(a + b)^2 \leq 2a^2 + 2b^2$ yields the claim.
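
Since the inequality of Lemma C.2 is deterministic, it can be checked directly on random nested subspaces. A small numpy sketch (the function name, dimensions, and noise level are our own choices):

```python
import numpy as np

def projection_gap(n=30, d_small=3, d_big=8, seed=1):
    """Check Lemma C.2 numerically: with nested subspaces S ⊂ F,
    ||P_S Y - f||^2 <= 2 ||P_F Y - f||^2 + 2 ||P_S f - f||^2."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((n, d_big)))
    P_F = Q @ Q.T                              # projection onto F
    P_S = Q[:, :d_small] @ Q[:, :d_small].T    # projection onto S ⊂ F
    f = rng.standard_normal(n)
    Y = f + 0.5 * rng.standard_normal(n)
    lhs = np.linalg.norm(P_S @ Y - f) ** 2
    rhs = 2 * np.linalg.norm(P_F @ Y - f) ** 2 + 2 * np.linalg.norm(P_S @ f - f) ** 2
    return lhs, rhs
```

The inequality holds for every draw, not just on average, mirroring the "deterministically" in the lemma statement.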
+
+
+
+# C.3. Analysis for Piecewise Linear Estimates: $\| P_{F(K_r)}Y - \theta \|_2^2$
+
+For any random knot-set $K_r$ , we now analyze the quality of the fit $P_{F(K_r)}Y$ . This is necessary in two parts of our proof: for providing certificates $g$ and $h$ to be plugged into Lemma C.7 (with $K_r = K_f$ and $K_r = K_b$ respectively) and for bounding the error of the fictitious estimator in Lemma C.2 (with $K_r = K = K_f \cup K_b \cup \tilde{K}$ ).
+
+The next Lemma gives a bias-variance decomposition of $\| P_{F(K_r)}Y - \pmb{\theta} \|_2^2$ .
+
+Lemma C.3. There exists an event $\mathcal{E}_0(\delta)$ which holds with probability at least $1 - \delta$ upon which the following holds for all knot-sets $K_{r} = \{k_{1},\ldots k_{l},k_{l + 1}\coloneqq n\}$ with their associated fit $\hat{f}_f = P_{F(K_r)}Y$
+
+$$
+\sum_ {t = 1} ^ {n} (\hat {f} _ {f} (x _ {t}) - f (x _ {t})) ^ {2} \lesssim \sum_ {i = 1} ^ {l} (\sum_ {t = k _ {i}} ^ {k _ {i + 1} - 1} (f (x _ {t}) - \mu_ {t}) ^ {2} + \sum_ {t = k _ {i}} ^ {k _ {i + 1} - 1} \sigma_ {t} ^ {2} \log (\frac {n ^ {3}}{\delta}))
+$$
+
+where $\mu_t$ and $\sigma_t^2$ are the mean and variance of $\hat{f}_f(x_t)$ respectively, treating the knots as fixed.
+
+Proof. Begin by fixing a static interval $p_i = [k_i, k_{i+1} - 1]$ . Further, fix some $t \in p_i$ .
+
+For any vector $q \in \mathbb{R}^n$ , let $q^i = [q_{k_i}, \dots, q_{k_{i+1}-1}]^T$ . Let $X_i$ collect only the covariates in $p_i$ . Now note that
+
+$$
+\hat {f} _ {f} (x _ {t}) = x _ {t} ^ {T} \left(X _ {i} X _ {i} ^ {T}\right) ^ {- 1} X _ {i} Y ^ {i} \sim \mathcal {N} (\mu_ {t}, \sigma_ {t} ^ {2})
+$$
+
+where $\mu_t = \langle x_t, (X_i X_i^T)^{-1} X_i \pmb{\theta}^i \rangle$ and $\sigma_t^2 = \sigma^2 x_t^T (X_i X_i^T)^{-1} x_t$ .
+
+Thus, letting $W_{t} \coloneqq \frac{\hat{f}_{f}(x_{t}) - \mu_{t}}{\sigma_{t}}$ , we have that
+
+$$
+\Pr \left[ \left| W _ {t} \right| \leq \sqrt {2 \log (2 n / \delta)} \right] > 1 - \delta / n
+$$
+
+Or equivalently
+
+$$
+\Pr \left[ \left| \hat {f} _ {f} \left(x _ {t}\right) - \mu_ {t} \right| \leq \sigma_ {t} \sqrt {2 \log (2 n / \delta)} \right] > 1 - \delta / n \tag {10}
+$$
+
+By reverse triangle inequality,
+
+$$
+| \hat {f} _ {f} (x _ {t}) - \mu_ {t} | = | \hat {f} _ {f} (x _ {t}) - \theta_ {t} - (\mu_ {t} - \theta_ {t}) | \geq | \hat {f} _ {f} (x _ {t}) - \theta_ {t} | - | \mu_ {t} - \theta_ {t} |
+$$
+
+So that Equation (10), applied with $\delta / n^2$ in place of $\delta$ , implies
+
+$$
+| \hat {f} _ {f} (x _ {t}) - \theta_ {t} | \leq | \mu_ {t} - \theta_ {t} | + \sigma_ {t} \sqrt {2 \log {(2 n ^ {3} / \delta)}}
+$$
+
+holds with probability at least $1 - \delta / n^3$ .
+
+Squaring each side and summing (and union bounding) over $t$ from $k_{i}$ to $k_{i + 1} - 1$ , this implies that with probability $1 - \delta / n^2$ :
+
+$$
+\sum_ {t = k _ {i}} ^ {k _ {i + 1} - 1} (\hat {f} _ {f} (x _ {t}) - \theta_ {t}) ^ {2} \leq 2 \sum_ {t = k _ {i}} ^ {k _ {i + 1} - 1} (\mu_ {t} - f (x _ {t})) ^ {2} + 4 \sum_ {t = k _ {i}} ^ {k _ {i + 1} - 1} \sigma_ {t} ^ {2} \log \left(\frac {2 n ^ {3}}{\delta}\right)
+$$
+
+where we have used $(a + b)^2\leq 2a^2 +2b^2$
+
+To finish, cover all $n^2$ realizations of $p_i$ . This means that with probability $1 - \delta$ , for all $k_i, k_{i+1}$ :
+
+$$
+\sum_ {t = k _ {i}} ^ {k _ {i + 1} - 1} \left(\hat {f} _ {f} \left(x _ {t}\right) - \theta_ {t}\right) ^ {2} \leq 2 \sum_ {t = k _ {i}} ^ {k _ {i + 1} - 1} \left(\mu_ {t} - f \left(x _ {t}\right)\right) ^ {2} + 4 \sum_ {t = k _ {i}} ^ {k _ {i + 1} - 1} \sigma_ {t} ^ {2} \log \left(\frac {2 n ^ {3}}{\delta}\right)
+$$
+
+In particular, we have the claimed decomposition for all $K_r$ .
+
+
+
+We elaborate on the bias and variance terms in the following two lemmas.
+
+Lemma C.4. [Variance of $P_{F(K_r)}Y$ ] Within the setting of Lemma C.3, we have that for each $k_i \in K_r$
+
+$$
+\sum_ {t = k _ {i}} ^ {k _ {i + 1} - 1} \sigma_ {t} ^ {2} \log \left(\frac {2 n}{\delta}\right) \lesssim \sigma^ {2}
+$$
+
+Proof.
+
+$$
+\sum_ {t = k _ {i}} ^ {k _ {i + 1} - 1} \sigma_ {t} ^ {2} = \sigma^ {2} \sum_ {t = k _ {i}} ^ {k _ {i + 1} - 1} x _ {t} ^ {T} (X _ {i} X _ {i} ^ {T}) ^ {- 1} x _ {t} = 2 \sigma^ {2}
+$$
+
+because the sum of the leverage scores is the number of parameters in the model.
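
The leverage-score identity used here (the leverage scores of a least-squares projection sum to its rank, i.e. the number of fitted parameters) can be verified directly. The snippet below uses the transposed convention $H = X(X^TX)^{-1}X^T$ with an $m \times 2$ design (intercept plus slope); seed and sizes are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(2)
m = 25
# design with intercept and slope: a 2-parameter linear model
X = np.column_stack([np.ones(m), rng.uniform(size=m)])
H = X @ np.linalg.solve(X.T @ X, X.T)   # hat (projection) matrix
leverage_sum = float(np.trace(H))       # = sum of the leverage scores
```

The trace of a projection matrix equals its rank, here 2, matching the $2\sigma^2$ in the display above.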
+
+For $K_{r} = K_{0}$ a knot-set from Lemma C.1 (on either a forward or backward pass), we have a control on the total bias of $P_{F(K_0)}Y$ .
+
+Lemma C.5. [Bias] In the setting of Lemma C.1, we have the following on the good event $\mathcal{E}_1(\delta)$ :
+
+$$
+\left\| P _ {F \left(K _ {0}\right)} \boldsymbol {\theta} - \boldsymbol {\theta} \right\| _ {2} ^ {2} = \tilde {O} \left(n ^ {1 / 5} C ^ {2 / 5}\right)
+$$
+
+Proof. Lemma C.1's success event guarantees that there is a vector $\pmb{\eta} \in F(K_0)$ with
+
+$$
+\begin{array}{l} \| \boldsymbol {\eta} - \boldsymbol {\theta} \| _ {2} ^ {2} \lesssim \sum_ {i = 1} ^ {l} \left(1 + n _ {i} ^ {3 / 5} \| D ^ {2} \boldsymbol {\theta} ^ {i} \| _ {1} ^ {2 / 5}\right) \\ \lesssim l + \sum_ {i = 1} ^ {l} n _ {i} ^ {3 / 5} \| D ^ {2} \boldsymbol {\theta} ^ {i} \| _ {1} ^ {2 / 5} \\ \end{array}
+$$
+
+Now using Hölder's inequality with the dual exponent pair $(5 / 3, 5 / 2)$ , we obtain
+
+$$
+\sum_ {i = 1} ^ {l} \| D ^ {2} \boldsymbol {\theta} ^ {i} \| _ {1} ^ {2 / 5} n _ {i} ^ {3 / 5} \leq \left(\sum_ {i = 1} ^ {l} \| D ^ {2} \boldsymbol {\theta} ^ {i} \| _ {1}\right) ^ {2 / 5} \left(\sum_ {i = 1} ^ {l} n _ {i}\right) ^ {3 / 5} \leq \| D ^ {2} \boldsymbol {\theta} \| _ {1} ^ {2 / 5} n ^ {3 / 5} \leq C ^ {2 / 5} n ^ {1 / 5}
+$$
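
The Hölder step, in the form $\sum_i a_i^{2/5} b_i^{3/5} \leq (\sum_i a_i)^{2/5} (\sum_i b_i)^{3/5}$ for nonnegative $a_i, b_i$ , can be spot-checked numerically. A randomized sketch of ours (seed and ranges are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
max_ratio = 0.0
for _ in range(200):
    l = int(rng.integers(1, 20))
    a = rng.uniform(0.0, 5.0, size=l)   # stand-in for ||D^2 theta^i||_1
    b = rng.uniform(0.0, 5.0, size=l)   # stand-in for n_i
    lhs = float(np.sum(a ** 0.4 * b ** 0.6))
    rhs = float(a.sum() ** 0.4 * b.sum() ** 0.6)
    max_ratio = max(max_ratio, lhs / rhs)
```

The ratio never exceeds 1, consistent with the exponents $2/5 + 3/5 = 1$ forming a valid Hölder split.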
+
+An application of the previous three lemmas shows that $P_{F(K_0)}Y$ has small error for $K_0$ coming from the knot selection algorithm (i.e. both with $K_0 = K_f$ and $K_0 = K_b$ ).
+
+Corollary C.6. Let $\mathcal{E}_0(\delta /2)$ be the good event from Lemma C.3 and $\mathcal{E}_1(\delta /2)$ be the good event from Lemma C.1. Then on $\mathcal{E}_0(\delta /2)\cap \mathcal{E}_1(\delta /2)$ we have
+
+$$
+\left\| P _ {F \left(K _ {0}\right)} Y - \boldsymbol {\theta} \right\| _ {2} ^ {2} = \tilde {O} \left(n ^ {1 / 5} C ^ {2 / 5}\right)
+$$
+
+Proof. On $\mathcal{E}_0(\delta /2)$ we have
+
+$$
+\| P _ {F (K _ {0})} Y - \pmb {\theta} \| _ {2} ^ {2} \leq \sum_ {t = 1} ^ {n} (\hat {f} _ {f} (x _ {t}) - f (x _ {t})) ^ {2} \lesssim \sum_ {i = 1} ^ {l} (\sum_ {t = k _ {i}} ^ {k _ {i + 1} - 1} (f (x _ {t}) - \mu_ {t}) ^ {2} + \sum_ {t = k _ {i}} ^ {k _ {i + 1} - 1} \sigma_ {t} ^ {2} \log (\frac {n ^ {3}}{\delta}))
+$$
+
+On the event $\mathcal{E}_1(\delta /2)$ from Lemma C.5 we can bound the bias term by $\tilde{O} (n^{1 / 5}C^{2 / 5})$ . On this same event, we can bound the variance term by $\tilde{O} (\sigma^2 n^{1 / 5}C^{2 / 5}) = \tilde{O} (n^{1 / 5}C^{2 / 5})$ by Lemma C.4.
+
+# C.4. Spline Existence
+
+Lemma C.7. Suppose $g \in F(K_{f})$ and $h \in F(K_{b})$ for $K_{f} \cap K_{b} = \{\}$ . Let $K = K_{f} \cup K_{b} \cup \tilde{K}$ where $\tilde{K}$ contains all the crossover points between $g$ and $h$ . Then there exists $f \in S(K)$ such that for all $x \in [0,1]$ there exists $\lambda_{x} \in [0,1]$ such that
+
+$$
+f (x) = \lambda_ {x} g (x) + (1 - \lambda_ {x}) h (x) \tag {11}
+$$
+
+The idea behind the proof is simple. We construct a linear spline left-to-right that greedily sticks with whichever function of $g$ and $h$ is furthest from a change point, transitioning linearly between the two as necessary. We must include crossover points in order to ensure we do not exit the region between the curves $g$ and $h$ when we perform a "slide" from one to the other.
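
The key geometric fact behind the construction is that, if $g$ and $h$ do not cross on $[a,b]$ , the linear "slide" from $g(a)$ to $h(b)$ stays between the two curves. This can be checked numerically for random pairs of lines (our own illustration; seed and ranges are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
z = np.linspace(0.0, 1.0, 201)
ok = True
for _ in range(200):
    g = rng.uniform(-1, 1) + rng.uniform(-1, 1) * z     # a random line
    h = g + rng.uniform(0.0, 2.0) + rng.uniform(-0.5, 0.5) * z
    if np.any(h < g):   # the lines cross on [0, 1]; a crossover knot would be added
        continue
    lam = z             # lambda_z = (z - a)/(b - a) with [a, b] = [0, 1]
    chord = (1 - lam) * g[0] + lam * h[-1]   # linear slide from g(0) to h(1)
    ok = ok and bool(np.all(chord >= g - 1e-9) and np.all(chord <= h + 1e-9))
```

When the lines do cross, the slide can leave the region between them, which is exactly why the crossover points $\tilde{K}$ must be included as knots.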
+
+Proof. Assume $K = \{k_{1}, \ldots, k_{l}\}$ is ordered. Let $k_{l+1} = 1$ for convenience.
+
+We prove a stronger result, where we enforce the following additional requirements at each knot $k_{i}$ :
+
+1. $f(k_{i}) = g(k_{i})$ or $f(k_{i}) = h(k_{i})$
+2. If $i \in [l - 1]$ is such that $k_{i} \in \tilde{K}$ and $k_{i + 1} \in K_{f}$ , we have that $f(k_{i}) = h(k_{i})$ .
+3. If $i \in [l - 1]$ is such that $k_{i} \in \tilde{K}$ and $k_{i + 1} \in K_b$ , we have that $f(k_{i}) = g(k_{i})$ .
+
+We construct $f$ in cases while iterating over knots.
+
+Base case: If $k_{1} \in K_{f}$ , let $f(z) = h(z)$ for $z \in [0, k_{1}]$ . If $k_{1} \in K_{b}$ , let $f(z) = g(z)$ for $z \in [0, k_{1}]$ . If $k_{1} \in \tilde{K}$ , then let $f(z) = g(z)$ for $z \in [0, k_{1}]$ if $k_{2} \in K_{b}$ and $f(z) = h(z)$ for $z \in [0, k_{1}]$ if $k_{2} \in K_{f}$ .
+
+"Inductive" step: Assume that $(f:[0,k_{i-1}]\to \mathbb{R})\in S([k_1,\dots k_{i-1}])$ is constructed such that the above requirements are satisfied for all knots $k_{1},\ldots k_{i-1}$ . We now extend $f$ to $(f:[0,k_i]\to \mathbb{R})\in S([k_1,\dots k_i])$ .
+
+Case 0: $f(k_{i-1}) = g(k_{i-1})$ and $k_{i-1}$ is the last knot ( $i - 1 = l$ )
+
+We can extend $f$ to $[0, k_{l + 1}] = [0, 1]$ by letting $f(z) = g(z)$ for $z \in [k_{i - 1}, k_i]$ . Because $g$ is linear on $[k_{i - 1}, k_i]$ , $f$ is in $S(K)$ . Equation 11 holds by construction, and requirements 1, 2 and 3 all hold by our iterative hypothesis.
+
+Case 1: $f(k_{i-1}) = g(k_{i-1})$ and $k_i \in K_b$
+
+Extend $f$ by letting $f(z) = g(z)$ for all $z \in [k_{i-1}, k_i]$ . Because $g$ is linear on $[k_{i-1}, k_i]$ and $g(k_{i-1}) = f(k_{i-1})$ , we still have that $f \in S([k_1, \ldots, k_i])$ . We also have $f(z) = g(z)$ for all $z \in [k_{i-1}, k_i]$ and $f(k_i) = g(k_i)$ .
+
+Case 2: $f(k_{i-1}) = g(k_{i-1})$ and $k_i \in K_f$
+
+Extend $f$ by letting $f(z) = g(z) + (h(z) - g(z))\frac{z - k_{i-1}}{k_i - k_{i-1}}$ for $z \in [k_{i-1}, k_i]$ . By construction, we are also assured that $k_{i-1} \notin \tilde{K}$ (because otherwise we would have $f(k_{i-1}) = h(k_{i-1})$ ). Because $g$ and $h$ do not cross on the interval $[k_{i-1}, k_i]$ , we have that $f$ is in $S([k_1, \ldots, k_i])$ and satisfies Equation 11 (with $x$ restricted to $[0, k_{i-1}] \cup [k_{i-1}, k_i] = [0, k_i]$ ). By construction, we also have that $f(k_i) = h(k_i)$ (satisfying requirement 1). By our inductive hypothesis, the extended version of $f$ satisfies requirements 2 and 3.
+
+Case 3: $f(k_{i-1}) = g(k_{i-1})$ and $k_i \in \tilde{K}$ and $k_{i+1} \in K_b \cup \{k_{l+1}\}$
+
+Define $f(z) = g(z)$ for all $z \in [k_{i-1}, k_i]$ . Because $g$ is linear on $[k_{i-1}, k_i]$ , we still have that $f \in S([k_1, \ldots, k_i])$ . We also have $f(z) = g(z)$ for all $z \in [k_{i-1}, k_i]$ and $f(k_i) = g(k_i) = h(k_i)$ .
+
+Further, by construction we have that requirements 1 and 3 hold after extension. By hypothesis, requirement 2 still holds.
+
+Case 4: $f(k_{i-1}) = g(k_{i-1})$ and $k_i \in \tilde{K}$ and $k_{i+1} \in K_f$
+
+Define $f(z) = g(z) + (h(z) - g(z))\frac{z - k_{i-1}}{k_i - k_{i-1}}$ for $z \in [k_{i-1}, k_i]$ . Because all crossover points lie in $K$ , $g$ and $h$ do not cross on the open interval $(k_{i-1}, k_i)$ , so $f$ satisfies Equation 11 (with $x$ restricted to $[0, k_{i-1}] \cup [k_{i-1}, k_i] = [0, k_i]$ ).
+
+Furthermore, by construction, requirements 1 and 2 hold after extension. By hypothesis, requirement 3 still holds.
+
+The remaining cases are symmetric to the above ones (i.e. the orders of $g$ and $h$ and $K_{f}$ and $K_{b}$ are flipped). We can iterate this scheme left-to-right over all knots $k \in K$ to prove the result.
+
+Generally speaking, the odds of $K_{f}$ and $K_{b}$ sharing knots when generated according to AKORN are not high. However, the following corollary shows that we can handle this case should it occur.
+
+Corollary C.8. Suppose $g \in F(K_{f})$ and $h \in F(K_{b})$ , with $K_{f} \cap K_{b} \neq \{\}$ . Let $K = K_{f} \cup K_{b} \cup \tilde{K} \cup Q$ , where $Q = \{x_{i-1} : x_{i} \in K_{b} \cap K_{f}\}$ and $\tilde{K}$ is the set of crossover points of $g$ and $h$ . Then there exists $s \in S(K)$ such that for all $x \in [0,1]$ there exists $\lambda_{x} \in [0,1]$ such that
+
+$$
+s (x) = \lambda_ {x} g (x) + (1 - \lambda_ {x}) h (x) \tag {12}
+$$
+
+Proof. Let $Q = x_{q_1}, \ldots, x_{q_w}$ be ordered. On each interval $[x_{q_i}, x_{q_{i+1}-1}]$ we may construct a corresponding linear spline $s_i$ using Lemma C.7. We can then construct $s$ by linearly interpolating between the various $s_i$ using the knots in $Q$ .
+
+# C.5. Proof of Theorem 6.2
+
+Theorem C.9 (Theorem 6.2).
+
+Proof. Let $\hat{f} = P_{S(K)}Y$ . Let $\mathcal{E}_0(\delta /2)$ be the event from Lemma C.3, which satisfies $P[\mathcal{E}_0]\geq 1 - \delta /2$ . On $\mathcal{E}_0(\delta /2)$ , the following holds for all knot-sets $K = \{k_1,\dots k_l\}$ , $l > 0$ .
+
+$$
+\begin{array}{l} \sum_ {t = 1} ^ {n} (\hat {f} (x _ {t}) - f (x _ {t})) ^ {2} \leq 2 \sum_ {t = 1} ^ {n} (\hat {f} _ {f} (x _ {t}) - f (x _ {t})) ^ {2} + 2 \sum_ {t = 1} ^ {n} (P _ {S (K)} f (x _ {t}) - f (x _ {t})) ^ {2} \\ \lesssim \sum_ {i = 1} ^ {l} \sum_ {t = k _ {i}} ^ {k _ {i + 1} - 1} (f (x _ {t}) - \mathbb {E} [ \hat {f} _ {f} (x _ {t}) ]) ^ {2} + \sum_ {i = 1} ^ {l} \sum_ {t = k _ {i}} ^ {k _ {i + 1} - 1} \sigma_ {t} ^ {2} \iota (\delta) + \sum_ {t = 1} ^ {n} (P _ {S (K)} f (x _ {t}) - f (x _ {t})) ^ {2} \\ \lesssim \sum_ {i = 1} ^ {l} \sum_ {t = k _ {i}} ^ {k _ {i + 1} - 1} \left(f \left(x _ {t}\right) - \mathbb {E} \left[ \hat {f} _ {f} \left(x _ {t}\right) \right]\right) ^ {2} + 2 \sigma^ {2} l + \sum_ {t = 1} ^ {n} \left(P _ {S (K)} f \left(x _ {t}\right) - f \left(x _ {t}\right)\right) ^ {2} \tag {13} \\ \end{array}
+$$
+
+where the first line holds deterministically by Lemma C.2, the second line holds on the event $\mathcal{E}_0$ from Lemma C.3, and the third line holds by Lemma C.4.
+
+Now, let $K_{f}$ and $K_{b}$ be the random knot-sets from isotonic and reverse isotonic runs of Algorithm 1 and let $K = K_{f} \cup K_{b} \cup \tilde{K}$ . Let $\mathcal{E}_1^f(\delta/4)$ be the event from Lemma C.1 applied to the forward run of AKORN. By the conclusion of Lemma C.1, we have $|K_{f}| = \tilde{O}(n^{1/5}C^{2/5})$ . By Lemma C.5, we also have $\| f - P_{F(K_{f})}f\|_{2}^{2} = \tilde{O}(n^{1/5}C^{2/5})$ on $\mathcal{E}_1^f(\delta/4)$ . Similarly, we let $\mathcal{E}_1^b(\delta/4)$ be the event from Lemma C.1 applied to the backward run of AKORN, upon which $|K_{b}| = \tilde{O}(n^{1/5}C^{2/5})$ and $\| f - P_{F(K_{b})}f\|_{2}^{2} = \tilde{O}(n^{1/5}C^{2/5})$ .
+
+Let $\mathcal{E}_1 = \mathcal{E}_1^f\cap \mathcal{E}_1^b$ . By union bound, $\operatorname *{Pr}[\mathcal{E}_1]\geq 1 - \delta /2$ . On $\mathcal{E}_1$ , we may bound the first term in Equation 13 as
+
+$$
+\sum_ {i = 1} ^ {l} \sum_ {t = k _ {i}} ^ {k _ {i + 1} - 1} (f (x _ {t}) - \mathbb {E} [ \hat {f} _ {f} (x _ {t}) ]) ^ {2} = \| f - P _ {F (K)} f \| _ {2} ^ {2} \leq \| f - P _ {F (K _ {f})} f \| _ {2} ^ {2} = \tilde {O} (n ^ {1 / 5} C ^ {2 / 5})
+$$
+
+because $F(K_{f})\subset F(K)$
+
+and the second term as $2\sigma^2 |K| \lesssim \sigma^2 (|K_f| + |K_b| + |\tilde{K}|) = \tilde{O} (n^{1 / 5}C^{2 / 5})$ , since each pair of linear pieces of $g$ and $h$ crosses at most once, so $|\tilde{K}| \lesssim |K_f| + |K_b|$ .
+
+All that remains is to bound the final term of Equation 13. To do this, first define $\mathcal{E}_3 = \mathcal{E}_0 \cap \mathcal{E}_1$ . By construction, all the previous bounds still hold on $\mathcal{E}_3$ , and we have $\operatorname*{Pr}[\mathcal{E}_3] > 1 - \delta$ (by a union bound). Now apply Lemma C.7 (or Corollary C.8 if $K_f \cap K_b \neq \emptyset$ ) to $g = P_{F(K_f)}Y$ and $h = P_{F(K_b)}Y$ to get a function $s \in S(K)$ that lies in between $g$ and $h$ for all $x \in [0,1]$ (i.e. $s(x_i) = \lambda_{x_i} g(x_i) + (1 - \lambda_{x_i}) h(x_i)$ for $\lambda_{x_i} \in [0,1]$ ). Using convexity of square loss and the bounds on the error of $g$ and $h$ from the previous paragraph, we have
+
+$$
+\begin{array}{l} \| P _ {S (K)} f - f \| _ {2} ^ {2} \leq \| s - f \| _ {2} ^ {2} = \sum_ {i = 1} ^ {n} \left(\lambda_ {x _ {i}} g \left(x _ {i}\right) + \left(1 - \lambda_ {x _ {i}}\right) h \left(x _ {i}\right) - f \left(x _ {i}\right)\right) ^ {2} \\ \leq \sum_ {i = 1} ^ {n} \lambda_ {x _ {i}} (g (x _ {i}) - f (x _ {i})) ^ {2} + \sum_ {i = 1} ^ {n} (1 - \lambda_ {x _ {i}}) (h (x _ {i}) - f (x _ {i})) ^ {2} = \tilde {O} (n ^ {1 / 5} C ^ {2 / 5}) \\ \end{array}
+$$
+
+where the inequality holds by convexity of $\ell^2$ -loss and the final bound holds on $\mathcal{E}_3$ due to Corollary C.6, which guarantees small error for $g$ and $h$ on $\mathcal{E}_0 \cap \mathcal{E}_1^f \subset \mathcal{E}_3$ and $\mathcal{E}_0 \cap \mathcal{E}_1^b \subset \mathcal{E}_3$ respectively.
+
+# C.6. Helper Lemmas
+
+Lemma C.10. [Simplification of Lemma 4 from (Rhee & Talagrand, 1986)] Let $Z \sim \mathcal{N}(0, \Sigma)$ . Then
+
+$$
+\Pr [ \| Z \| \geq t ] \leq \exp \left( \frac {- t ^ {2}}{2 \operatorname {t r} (\Sigma)} \right)
+$$
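
A quick Monte Carlo check of this tail bound, on a random covariance with threshold $t = 2\sqrt{\operatorname{tr}(\Sigma)}$ (the seed, dimension, and trial count are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(5)
d, trials = 5, 100_000
A = rng.standard_normal((d, d))
Sigma = A @ A.T                              # a random covariance matrix
L = np.linalg.cholesky(Sigma)
Z = rng.standard_normal((trials, d)) @ L.T   # rows ~ N(0, Sigma)
t = 2.0 * np.sqrt(np.trace(Sigma))
empirical = float(np.mean(np.linalg.norm(Z, axis=1) >= t))
bound = float(np.exp(-t ** 2 / (2 * np.trace(Sigma))))   # = e^{-2}
```

The empirical exceedance frequency falls well below the bound $e^{-2} \approx 0.135$ .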
+
+# D. Proofs for ADDLE
+
+The proofs in this section represent fairly straightforward extensions of those found in (Baby et al., 2021).
+
+We will prove the following theorem. If $N = n$ , then Theorem D.1 becomes Theorem 6.1.
+
+Theorem D.1. Consider equally spaced design points, $\{x_{t} = t / n\}_{t = p}^{p + N}$ , for $p \geq 1$ and $p + N \leq n$ . Let $C := TV_1[f|_{[x_p,x_{p + N}]}]$ . Let $\{\hat{y}_t\}_{t = p}^{p + N}$ be the predictions generated by Algorithm 5 when fed these data in order. With probability $1 - \delta$ , the total squared error satisfies:
+
+$$
+\sum_ {t = p} ^ {p + N} (\hat {y} _ {t} - f (x _ {t})) ^ {2} = \tilde {O} (N ^ {3 / 5} C ^ {2 / 5} / n ^ {2 / 5})
+$$
+
+where $\tilde{O}$ hides constants (including $\sigma$ ) and polylog factors of $n$ and $\delta$ .
+
+Proof Sketch: The proof of the bound for ADDLE in Theorem 6.1 follows along the same lines as the proof of Aligator in (Baby et al., 2021). The idea is that, for any $f$ , there exists a not-too-large partition of $\{x_1, \ldots, x_n\}$ into intervals such that $f$ is approximately linear within each interval (as measured by $TV_1$ ). On each of these intervals, the linear expert who starts at the beginning of the interval achieves low error. Furthermore, by the adaptivity property of FLH (Proposition D.2 below), ADDLE competes with the best expert on each interval. Thus, summing over intervals, we observe that ADDLE achieves the optimal rate.
+
+# Beginning of formal proof
+
+The main tool is the following lemma, which states that FLH competes with each expert in each interval.
+
+Proposition D.2 ((Hazan & Seshadhri, 2007)). Suppose the loss functions are exp-concave with parameter $\alpha$ . For any interval $I = [r,s]$ in time, the FLH algorithm (Figure 4) with learning rate $\zeta = \alpha$ gives $O(\alpha^{-1}(\log r + \log |I|))$ regret against the base learner in hindsight.
+
+The following lemma follows instantly from a subgaussian tail bound and a union bound.
+
+Lemma D.3 (Lemma 16 from (Baby et al., 2021)). Let $\mathcal{V}$ be the event that $|\epsilon_t| \leq \sigma \sqrt{2\log (4n / \delta)}$ for all $t \in [n]$ . Then $\operatorname*{Pr}[\mathcal{V}] \geq 1 - \delta / 2$ .
+
+Note that, conditioned on $\mathcal{V}$ , the quantity $B$ from Algorithm 5 upper bounds $|\theta_t|$ for all $t$ .
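
The event $\mathcal{V}$ can also be simulated directly: the fraction of trials on which some $|\epsilon_t|$ exceeds the threshold should fall below $\delta / 2$ . A Monte Carlo sketch with arbitrary parameter choices of ours:

```python
import numpy as np

rng = np.random.default_rng(6)
n, sigma, delta, trials = 100, 1.5, 0.1, 20_000
thresh = sigma * np.sqrt(2 * np.log(4 * n / delta))   # the Lemma D.3 threshold
eps = sigma * rng.standard_normal((trials, n))        # n noise draws per trial
violation_rate = float(np.mean(np.max(np.abs(eps), axis=1) > thresh))
```

The observed rate is far below $\delta / 2$ , reflecting the slack in the union bound.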
+
+Define the filtration $\mathcal{F}_j = \sigma \{y_1,\dots y_{j - 1}\}$ and let $\mathbb{E}_j[\cdot ] = \mathbb{E}[\cdot |\mathcal{F}_j]$ and $\mathrm{Var}_j[\cdot ] = \mathrm{Var}[\cdot |\mathcal{F}_j]$ .
+
+Let $\hat{y}_t$ be ADDLE's prediction at time $t$ . Let $\hat{z}_t^r$ be the prediction at time $t$ of the linear expert that starts at time $r$ . Let $R_{\sigma} = 16(1 + \sigma \sqrt{\log 4n / \delta})^2$ . Let $\tilde{\sigma} = \max \{\sigma \sqrt{\log 4n / \delta}, 1\}$ .
+
+The following Lemma ensures that $B$ , as defined in Algorithm 3 is an upper-bound on $f$ .
+
+Lemma D.4. Let $B = \max_{i}|y_{i}| + \tilde{\sigma}$ . On the event $\mathcal{V}$ , we have that $|f(x_{i})|\leq B\leq 1 + 2\tilde{\sigma}$ for every $i\in [n]$
+
+Proof. On $\mathcal{V}$ , we have that, for any $i$ , $y_{i} \in (f(x_{i}) - \tilde{\sigma}, f(x_{i}) + \tilde{\sigma})$ . Therefore, $B \geq |y_{i}| + \tilde{\sigma} \geq |f(x_{i})|$ . Also, $B \leq \max_{i} |y_{i}| + \tilde{\sigma} \leq 1 + 2\tilde{\sigma}$ .
+
+Lemma D.5. Let $I = [r,s]$ be any interval. On the event $\mathcal{V}$ , the predictions $\hat{y}_j$ made by ADDLE satisfy:
+
+$$
+\sum_ {j = r} ^ {s} (\hat {y} _ {j} - y _ {j}) ^ {2} \leq \sum_ {j = r} ^ {s} (\hat {z} _ {j} ^ {r} - y _ {j}) ^ {2} + \frac {2 \log n}{R _ {\sigma}}
+$$
+
+Proof. On the event $\mathcal{V}$ , each loss function $(\cdot - y_t)^2$ is exp-concave with parameter $\eta \coloneqq R_{\sigma}^{-1}$ .
+
+Now apply Proposition D.2 and bound $r, s - r \leq n$ .
+
+The following Lemma is proved as Lemma 18 in (Baby et al., 2021), recalling that $|\hat{z}_j^r -\theta_j|\leq 2(1 + \tilde{\sigma})$ .
+
+Lemma D.6. For any $j \in [n]$ , we have
+
+1. $\mathbb{E}_j[(y_j - \hat{z}_j^r)^2 -(y_j - \theta_j)^2 |\mathcal{V}] = \mathbb{E}_j[(\hat{z}_j^r -\theta_j)^2 |\mathcal{V}].$
+2. $\operatorname{Var}_j[(y_j - \hat{z}_j^r)^2 -(y_j - \theta_j)^2 |\mathcal{V}]\leq R_\sigma \mathbb{E}_j[(\hat{z}_j^r - \theta_j)^2 |\mathcal{V}].$
+
+Lemma D.7. (Freedman type inequality, (Beygelzimer et al., 2011)) For any real valued martingale difference sequence $\{Z_t\}_{t=1}^T$ with $|Z_t| \leq R$ it holds that,
+
+$$
+\sum_ {t = 1} ^ {T} Z _ {t} \leq \eta (e - 2) \sum_ {t = 1} ^ {T} \operatorname {V a r} _ {t} [ Z _ {t} ] + \frac {R \log (1 / \delta)}{\eta}, \tag {14}
+$$
+
+with probability at least $1 - \delta$ for all $\eta \in [0,1 / R]$ .
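
As a sanity check of Equation 14, one can simulate a bounded MDS (scaled Rademacher increments, for which $\operatorname{Var}_t[Z_t] = R^2$ ) and verify that the bound fails on at most a $\delta$ fraction of trials. Parameters below are our own arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(7)
T, R, delta, eta, trials = 500, 1.0, 0.05, 0.5, 5_000
# Z_t = R * (Rademacher sign): an MDS with |Z_t| <= R and Var_t[Z_t] = R^2
Z = R * rng.choice([-1.0, 1.0], size=(trials, T))
lhs = Z.sum(axis=1)
rhs = eta * (np.e - 2) * T * R ** 2 + R * np.log(1 / delta) / eta
failure_rate = float(np.mean(lhs > rhs))
```

For this i.i.d. special case the bound is very loose, so failures are essentially never observed.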
+
+We use these Lemmas to define and bound two Martingale Difference Sequences (MDS). Again, compare to Lemma 19 in (Baby et al., 2021).
+
+Lemma D.8. Condition on $\mathcal{V}$ . For any interval $[r,s]$ , it holds with probability at least $1 - \delta$ that
+
+1. $\sum_{j=r}^{s}(y_j - \hat{z}_j^r)^2 - (y_j - \theta_j)^2 \leq (e-1)\sum_{j=r}^{s}(\hat{z}_j^r - \theta_j)^2 + R_\sigma^2\log 4/\delta,$
+2. $\sum_{j = r}^{s}(y_{j} - \hat{y}_{j})^{2} - (y_{j} - \theta_{j})^{2}\geq (3 - e)\sum_{j = r}^{s}(\hat{y}_{j} - \theta_{j})^{2} - R_{\sigma}^{2}\log 4 / \delta .$
+
+Proof. We continue to condition on $\mathcal{V}$ . By Lemma D.6, $Z_{j} := (\hat{z}_{j}^{r} - y_{j})^{2} - (y_{j} - \theta_{j})^{2} - (\hat{z}_{j}^{r} - \theta_{j})^{2} = -2\epsilon_{j}(\hat{z}_{j}^{r} - \theta_{j})$ is an MDS. Note that, because of the truncation step, $|Z_{j}| = 2|(\hat{z}_{j}^{r} - \theta_{j})(\theta_{j} - y_{j})| \leq 2(2B)\tilde{\sigma} \leq 4(1 + 2\tilde{\sigma})\tilde{\sigma} \leq R_{\sigma}$
+
+By Lemma D.7 with $\eta = \frac{1}{R_{\sigma}}$ , we therefore obtain
+
+$$
+\sum_ {j = r} ^ {s} (\hat {z} _ {j} ^ {r} - y _ {j}) ^ {2} - (y _ {j} - \theta_ {j}) ^ {2} - (\hat {z} _ {j} ^ {r} - \theta_ {j}) ^ {2} \leq (e - 2) \sum_ {j = r} ^ {s} (\hat {z} _ {j} ^ {r} - \theta_ {j}) ^ {2} + R _ {\sigma} ^ {2} \log \frac {1}{\delta}
+$$
+
+with probability $1 - \delta$ ; rearranging yields the first claim.
+
+We obtain the second inequality by an identical argument with the MDS
+
+$$
+(\hat {y} _ {j} - \theta_ {j}) ^ {2} + (y _ {j} - \theta_ {j}) ^ {2} - (\hat {y} _ {j} - y _ {j}) ^ {2}
+$$
+
+Union bounding over 1 and 2 gives the result.
+
+Now, note that by Lemma D.5, we have that
+
+$$
+\sum_ {j = r} ^ {s} (\hat {y} _ {j} - y _ {j}) ^ {2} - (y _ {j} - \theta_ {j}) ^ {2} \leq \sum_ {j = r} ^ {s} (\hat {z} _ {j} ^ {r} - y _ {j}) ^ {2} - (y _ {j} - \theta_ {j}) ^ {2} + \frac {2 \log n}{R _ {\sigma}}
+$$
+
+So that, by Lemma D.8
+
+$$
+(3 - e) \sum_ {j = r} ^ {s} (\hat {y} _ {j} - \theta_ {j}) ^ {2} - R _ {\sigma} ^ {2} \log \frac {4}{\delta} \leq \sum_ {j = r} ^ {s} (\hat {y} _ {j} - y _ {j}) ^ {2} - (y _ {j} - \theta_ {j}) ^ {2} \leq \sum_ {j = r} ^ {s} (\hat {z} _ {j} ^ {r} - \theta_ {j}) ^ {2} + R _ {\sigma} ^ {2} \log \frac {4}{\delta} + \frac {2 \log n}{R _ {\sigma}}
+$$
+
+which (for fixed $r, s$ ) leads to the following high-probability relation:
+
+$$
+\sum_ {j = r} ^ {s} \left(\hat {y} _ {j} - \theta_ {j}\right) ^ {2} \leq \frac {(e - 1)}{(3 - e)} \sum_ {j = r} ^ {s} \left(\hat {z} _ {j} ^ {r} - \theta_ {j}\right) ^ {2} + 2 R _ {\sigma} ^ {2} \log 4 / \delta + 2 \log n / R _ {\sigma}
+$$
+
+We union bound over the $n^2$ possibilities for $r, s$ to get that, with probability $1 - \delta$ , for all intervals $[r, s]$ :
+
+$$
+\sum_ {j = r} ^ {s} \left(\hat {y} _ {j} - \theta_ {j}\right) ^ {2} \leq \frac {(e - 1)}{(3 - e)} \sum_ {j = r} ^ {s} \left(\hat {z} _ {j} ^ {r} - \theta_ {j}\right) ^ {2} + 2 R _ {\sigma} ^ {2} \log 4 n ^ {2} / \delta + 2 \log n / R _ {\sigma} \tag {15}
+$$
+
+Since we are conditioning on $\mathcal{V}$ , observe that, if $\hat{w}_j^r = \text{Predict}(\{(x_t, y_t)\}_{t=r}^{j-1}, x_j)$ is the prediction from a hypothetical unbounded linear expert (Algorithm 4), then $(\hat{z}_j^r - \theta_j)^2 \leq (\hat{w}_j^r - \theta_j)^2$ , since $|\theta_j| \leq B$ and clipping to $[-B, B]$ can only move the prediction closer to $\theta_j$ . Thus, from this point forward, we consider $\hat{w}_j^r$ instead of $\hat{z}_j^r$ .
+
+Now, noting that (for $j > r + 1$ ) $\hat{w}_j^r \sim \mathcal{N}(w_j^r, \underbrace{\sigma^2 x_j^T(X_{j-1} X_{j-1}^T)^{-1} x_j}_{\sigma_j^2})$ , we have by direct computation (Lemma E.2 in Appendix E)
+
+$$
+\sum_ {j = r} ^ {s} (\hat {w} _ {j} ^ {r} - \theta_ {j}) ^ {2} \leq 3 \sigma^ {2} \log (2 n / \delta) \log (e n) + \sum_ {j = r + 2} ^ {s} (w _ {j} ^ {r} - \theta_ {j}) ^ {2}
+$$
+
+Plugging this result into Equation 15, we summarize in the following lemma.
+
+Lemma D.9. Condition on $\mathcal{V}$ . Within this conditioning, with probability $1 - \delta / 2$ , the following bound holds over all intervals $[r, s]$
+
+$$
+\sum_ {j = r} ^ {s} \big (\hat {y} _ {j} - \theta_ {j} \big) ^ {2} \leq \tilde {O} \big (1 + \sum_ {j = r + 2} ^ {s} \big (w _ {j} ^ {r} - \theta_ {j} \big) ^ {2} + \sigma_ {j} ^ {2} \big)
+$$
+
+where $\tilde{O} (\cdot)$ hides only constants, as well as log factors of $n$ and $\delta$ , and where the sum is considered to be zero if $s\leq r + 1$ .
+
+We can also now uncondition on $\mathcal{V}$ , and union bound over $\mathcal{V}^c$ and $\mathcal{V} \cap \mathcal{C}^c$ , where $\mathcal{C}$ is the good event from Lemma D.9.
+
+Lemma D.10. With probability $1 - \delta$ , the following bound holds over all intervals $[r, s]$
+
+$$
+\sum_ {j = r} ^ {s} (\hat {y} _ {j} - \theta_ {j}) ^ {2} \leq \tilde {O} (1 + \sum_ {j = r + 2} ^ {s} (w _ {j} ^ {r} - \theta_ {j}) ^ {2} + \sigma_ {j} ^ {2})
+$$
+
+where $\tilde{O} (\cdot)$ hides only constants, as well as log factors of $n$ and $\delta$ , and where the sum is considered to be zero if $s\leq r + 1$ .
+
+As computed in Appendix E (Equation 16 of Lemma E.2), $\sum_{j=r}^{s} \sigma_{j}^{2} = \tilde{O}(1)$ . By Lemma E.1 we obtain $\sum_{j=r+2}^{s} (w_{j}^{r} - \theta_{j})^{2} \leq TV_{1}(\pmb{\theta}[r:s])^{2}|r - s|^{3}/n^{2}$ for equally spaced $\{x_{j} = j/n\}_{j=r}^{s}$ . This leads to the following lemma.
+
+Lemma D.11. Let $\mathcal{P} = [r_1 = 1, r_2] \cup \{[r_i, r_{i+1} - 1]\}_{i=2}^{l-2} \cup [r_{l-1}, r_l - 1]$ be any partition of $[n]$ into contiguous intervals with $r_l = n + 1$ . Let $n_i = r_{i+1} - r_i$ be the length of the $i$ th interval, and $TV_1(i) \coloneqq TV_1(\theta[r_i : r_{i+1} - 1])$ . Then, with probability $1 - \delta$ :
+
+$$
+\sum_ {j = 1} ^ {n} (\hat {y} _ {j} - \theta_ {j}) ^ {2} \leq \tilde {O} (\sum_ {i = 1} ^ {l - 1} (T V _ {1} (i) ^ {2} n _ {i} ^ {3} / n ^ {2} + 1))
+$$
+
+Now consider the partitioning scheme that scans left to right from $p$ to $p + N$ , and adds points to the current bin so long as $TV_{1}(\theta[\text{current bin}]) < \frac{n}{\text{current bin size}^{3/2}}$ . It follows immediately that the $TV_{1}$ inside each bin satisfies $TV_{1}(\text{bin})^{2} \leq n^{2}/(\text{bin size})^{3}$ . This is analogous to the $TV_{0}$ case from (Baby et al., 2021). By Lemma 23 of (Baby & Wang, 2020), the total number of bins in this partition is bounded by $O(N^{3/5}C^{2/5}/n^{2/5})$ . Thus, letting $\mathcal{P}$ be this partition, Lemma D.11 becomes Theorem D.1.
+
+# E. Some missing computations
+
+Lemma E.1 (Bias of linear regression). Suppose $x_{1}, \ldots, x_{n}$ are sorted covariates such that $\max_{j=2,\ldots,n}(x_{j} - x_{j-1}) \leq \log n / (p_{0}n)$ for some constant $p_{0} > 0$ . Let $\theta_{j} := f(x_{j})$ , so that our data is $\{(x_{i},\theta_{i})\}_{i=1}^{n}$ . Further, consider some subset $\{(x_{i},\theta_{i})\}_{i=r}^{N}$ . Let $\hat{l}(x) = \hat{a} + \hat{b}x$ be the linear least squares fit trained on this subset. Then the error of $\hat{l}$ is bounded as
+
+$$
+\sum_ {i = r} ^ {N} (\hat {l} (x _ {i}) - \theta_ {i}) ^ {2} \leq N ^ {3} T V _ {1} (\theta [ r: N ]) ^ {2} \log^ {2} n / (p _ {0} ^ {2} n ^ {2})
+$$
+
+In the special case where $x_{i} = i / n$ for $i = 1, \dots, n$ , we have
+
+$$
+\sum_ {i = r} ^ {N} (\hat {l} \left(x _ {i}\right) - \theta_ {i}) ^ {2} \leq N ^ {3} T V _ {1} \left(\theta [ r: N ]\right) ^ {2} / n ^ {2}
+$$
+
+Proof. WLOG suppose $r = 1$ .
+
+Define $\bar{a}$ to be equal to $\theta_{1}$ and $\bar{b}$ to be $\frac{1}{N}\sum_{j = 1}^{N}s_{j}$ , where for $j > 1$ we let $s_j = \frac{\theta_j - \theta_{j-1}}{x_j - x_{j-1}}$ be the slope from the datapoint $j - 1$ to $j$ . We then have
+
+$$
+\sum_ {i = 1} ^ {N} (\hat {a} + \hat {b} x _ {i} - \theta_ {i}) ^ {2}
+$$
+
+$$
+\begin{array}{l} \stackrel {(1)} {\leq} \sum_ {i = 1} ^ {N} (\bar {a} + \bar {b} (x _ {i} - x _ {1}) - \theta_ {1} - \sum_ {k = 2} ^ {i} s _ {k} (x _ {k} - x _ {k - 1})) ^ {2} \\ = \sum_ {i = 1} ^ {N} (\bar {b} \sum_ {k = 2} ^ {i} (x _ {k} - x _ {k - 1}) - \sum_ {k = 2} ^ {i} (x _ {k} - x _ {k - 1}) s _ {k}) ^ {2} \\ = \sum_ {i = 1} ^ {N} \left(\sum_ {k = 2} ^ {i} (\bar {b} - s _ {k}) (x _ {k} - x _ {k - 1})\right) ^ {2} \\ \leq \sum_ {i = 1} ^ {N} \sum_ {k = 2} ^ {i} (\bar {b} - s _ {k}) ^ {2} \sum_ {k = 2} ^ {i} (x _ {k} - x _ {k - 1}) ^ {2} \\ \stackrel {(2)} {\leq} \sum_ {i = 1} ^ {N} N T V _ {1} (\theta [ 1: N ]) ^ {2} \times \frac {N \log^ {2} n}{p _ {0} ^ {2} n ^ {2}} \\ \leq N ^ {3} T V _ {1} \left(\theta [ 1: N ]\right) ^ {2} \times \log^ {2} n / \left(p _ {0} ^ {2} n ^ {2}\right) \\ \end{array}
+$$
+
+(1) holds because $\hat{a},\hat{b}$ minimize square loss among linear functions. (2) holds because for any vector $z\in \mathbb{R}^d$ , we have $\sum_{i = 1}^{d}(z[i] - \bar{z})^{2}\leq dTV_{0}(z)^{2}$ , where $\bar{z} = \sum_{i = 1}^{d}z_{i} / d$ . This fact is applied with $z_{j} = s_{j}$ , which leads to $TV_{0}(z) = TV_{1}(\theta [1:N])$ .
+
+The equal-spacing case follows from an identical argument where $(x_{k} - x_{k - 1})^{2}$ is instead set to $\frac{1}{n^2}$ .
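As a numerical sanity check of the equal-spacing bound, the sketch below fits a least squares line to $f(x) = x^2$ on the grid $x_i = i/n$ (for which the discrete $TV_1$ is roughly 2) and compares the squared error to $N^3 TV_1^2 / n^2$; the setup is illustrative, not from the paper.

```python
import numpy as np

n = 50
x = np.arange(1, n + 1) / n
theta = x ** 2                                   # f(x) = x^2

# Discrete TV_1(theta[1:N]): total variation of consecutive slopes.
slopes = np.diff(theta) / np.diff(x)
tv1 = float(np.sum(np.abs(np.diff(slopes))))

# Offline least squares line a + b*x and its squared error.
b, a = np.polyfit(x, theta, 1)
err = float(np.sum((a + b * x - theta) ** 2))

# Lemma E.1 bound in the equal-spacing case (here N = n, r = 1).
bound = n ** 3 * tv1 ** 2 / n ** 2
```

The fit error (about 0.3 here) sits far below the bound, as expected since the bound is worst-case over all functions of the same $TV_1$.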
+
+Lemma E.2 (Running variance for ADDLE). Consider a set of covariates, $\{x_{t} = t / n\}_{t = 1}^{n}$ , and responses $\{y_{t} = f(x_{t}) + \epsilon_{t}\}_{t = 1}^{n}$ . For any interval $[a,b] \subset [1,n]$ with length $l > 2$ , consider $\hat{z}_t$ to be the prediction of online linear regression (Algorithm 4) at time $t$ after starting at time $a$ . Let $z_{t} = \mathbb{E}[\hat{z}_{t}]$ . Then with probability $1 - \delta$ :
+
+$$
+\sum_ {t = a} ^ {b} (\hat {z} _ {t} - z _ {t}) ^ {2} \leq 2 \sigma^ {2} \log \left(\frac {2 n}{\delta}\right) \log (e n) + \sigma^ {2} \log (2 / \delta)
+$$
+
+Proof. Without loss of generality let $[a, b] = [1, l]$ . Start by fixing $t \in [3, l-1]$ . Let $X_{t} \in \mathbb{R}^{2 \times t}$ have columns $\{[x_{i}, 1]^{T}\}_{i=1}^{t}$ .
+
+$$
+\hat {z} _ {t + 1} - z _ {t + 1} = x _ {t + 1} ^ {T} (X _ {t} X _ {t} ^ {T}) ^ {- 1} X _ {t} (Y - \pmb {\theta}) \sim \mathcal {N} (0, \sigma^ {2} x _ {t + 1} ^ {T} (X _ {t} X _ {t} ^ {T}) ^ {- 1} x _ {t + 1})
+$$
+
+Letting $\sigma_t^2 = \sigma^2 x_{t + 1}^T(X_tX_t^T)^{-1}x_{t + 1}$ , and applying a Gaussian tail bound, we obtain:
+
+$$
+\operatorname * {P r} [ | \hat {z} _ {t + 1} - z _ {t + 1} | \leq \sigma_ {t} \sqrt {2 \log 2 n / \delta} ] \geq 1 - \delta / n
+$$
+
+Squaring each side, then union bounding over $t \in [3,l]$ and summing up, we have that, with probability $1 - \delta$ :
+
+$$
+\sum_ {t = 3} ^ {l} \left(\hat {z} _ {t + 1} - z _ {t + 1}\right) ^ {2} \leq 2 \sigma^ {2} \log (2 n / \delta) \sum_ {t = 3} ^ {l} x _ {t} ^ {T} \left(X _ {t} X _ {t} ^ {T}\right) ^ {- 1} x _ {t} \tag {16}
+$$
+
+Thus, we need to analyze the "out-of-sample leverage scores". Observe that $(X_{t}X_{t}^{T})^{-1}$ is given by:
+
+$$
+(X _ {t} X _ {t} ^ {T}) ^ {- 1} = \frac {1}{t ^ {2} \times \frac {1}{t} \times \sum_ {i} (x _ {i} - \overline {{x}}) ^ {2}} \left[ \begin{array}{c c} t & - \sum_ {i = 1} ^ {t} x _ {i} \\ - \sum_ {i = 1} ^ {t} x _ {i} & \sum_ {i = 1} ^ {t} x _ {i} ^ {2} \end{array} \right]
+$$
+
+If we assume equally spaced points with pairwise distance $1 / n$ , we can compute
+
+$$
+\frac {n ^ {2}}{t ^ {2} (t + 1) (t - 1)} \left[ \begin{array}{c c} t & - t (t + 1) / 2 n \\ - t (t + 1) / 2 n & t (t + 1) (2 t + 1) / 6 n ^ {2} \end{array} \right] = \left[ \begin{array}{c c} n ^ {2} / t (t + 1) (t - 1) & - n / 2 t (t - 1) \\ - n / 2 t (t - 1) & (2 t + 1) / 6 t (t - 1) \end{array} \right]
+$$
+
+So that
+
+$$
+x _ {t + 1} ^ {T} (X _ {t} X _ {t} ^ {T}) ^ {- 1} x _ {t + 1} = \frac {(t + 1) ^ {2}}{n ^ {2}} \times \frac {n ^ {2}}{t (t + 1) (t - 1)} - 2 \times \frac {(t + 1)}{2 t (t - 1)} + \frac {2 t + 1}{6 (t - 1) t} = \frac {2 t + 1}{6 (t - 1)} \times \frac {1}{t} \leq \frac {1}{t}
+$$
+
+Thus, using $\sum_{t=1}^{l} \frac{1}{t} \leq \log n + 1 = \log (en)$ when we plug into Equation 16, we are left with:
+
+$$
+\sum_ {t = 3} ^ {l} \left(\hat {z} _ {t + 1} - z _ {t + 1}\right) ^ {2} \leq 2 \sigma^ {2} \log (2 n / \delta) \log e n \tag {17}
+$$
+
+To finish, recall that for $t = 1$ , online regression predicts 0 deterministically, so that $\hat{z}_t - z_t = 0$ . For $t = 2$ , it predicts $y_1$ which will yield a $\mathcal{N}(0, \sigma^2)$ summand, which can be bounded with high probability. We tack these terms onto the above display after a union bound.
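The logarithmic bound above relies only on the out-of-sample leverage scores decaying like $1/t$ up to a constant. The sketch below checks this decay numerically for equally spaced $x_i = i/n$; the constant 10 is a loose illustrative choice, not a claim about the exact constant.

```python
import numpy as np

def leverage(t, n):
    """Out-of-sample leverage score x_{t+1}^T (X_t X_t^T)^{-1} x_{t+1}
    for equally spaced covariates x_i = i/n and features [x_i, 1]^T."""
    X = np.vstack([np.arange(1, t + 1) / n, np.ones(t)])
    x_next = np.array([(t + 1) / n, 1.0])
    return float(x_next @ np.linalg.inv(X @ X.T) @ x_next)
```

Summing leverage scores that are $O(1/t)$ over $t \leq l$ gives the $O(\log n)$ factor used when plugging into Equation 16.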
+
+
+
+# F. Uneven and Random Covariates
+
+# F.1. Theorem statements for uneven covariates
+
+In this section, we explain how ADDLE can be generalized to handle the case of uneven covariates. These proofs rely on a minor algorithmic change: we replace each clipped online linear regression expert of Figure 4 by a clipped Vovk-Azoury-Warmuth (VAW) forecaster with the same start-point (see (Baby & Wang, 2020; Cesa-Bianchi & Lugosi, 2006) for descriptions of the VAW forecaster).
+
+We show how to prove the following generalization of Theorem 6.1.
+
+Theorem F.1. For some $p_0 > 0$ , consider sorted design points $0 \leq x_{1}, \ldots, x_{n} \leq 1$ such that $\max_{j=2,\ldots,n} |x_{j} - x_{j-1}| \leq \frac{\log n}{p_0 n}$ . Let $f$ be a function with $C := TV_1[f, \mathcal{D}_X]$ , and consider responses $\{y_t\}$ coming from the regression model. Let $\{\hat{y}_t\}_{t=1}^n$ be the predictions generated by ADDLE, now with (clipped) VAW forecasters as experts. With probability $1 - \delta$ , the total squared error satisfies:
+
+$$
+\sum_ {t = 1} ^ {n} (\hat {y _ {t}} - f (x _ {t})) ^ {2} = \tilde {O} (n ^ {1 / 5} C ^ {2 / 5})
+$$
+
+where $\tilde{O}$ hides constants (including $\sigma$ ) and polylog factors of $n$ and $\delta$ .
+
+This leads directly to the corresponding generalization of Theorem 6.2.
+
+Theorem F.2. For some $p_0 > 0$ , consider sorted design points $0 \leq x_{1}, \ldots, x_{n} \leq 1$ such that $\max_{j=2,\ldots,n} |x_{j} - x_{j-1}| \leq \frac{\log n}{p_0 n}$ . Let $f$ be a function with $C := TV_1[f, \mathcal{D}_X]$ , and consider responses $\{y_t\}$ coming from the regression model. Let $\hat{f}$ be the function returned by AKORN. Then, with probability $1 - \delta$ , the average square error satisfies:
+
+$$
+\frac {1}{n} \sum_ {t = 1} ^ {n} (\hat {f} (x _ {t}) - f (x _ {t})) ^ {2} = \tilde {O} (n ^ {- 4 / 5} C ^ {2 / 5})
+$$
+
+where $\tilde{O}$ hides constants (including $\sigma$ ) and polylog factors of $n$ and $\delta$ .
+
+# F.2. Proof steps for Theorems F.1 and F.2
+
+# Steps for Theorem F.1
+
+We now consider as our experts clipped linear Vovk-Azoury-Warmuth (VAW) forecasters ((Cesa-Bianchi & Lugosi, 2006)) starting at time $r$ for each $r \in [n]$ . This is a very minor change from the original linear regression experts, and does not affect computational or statistical efficiency. The VAW expert starting at $r$ is fed data $D_{r,s} := \{(x_j, y_j)\}_{j=r}^s$ in an online fashion, and produces estimates $\hat{w}_r^r, \ldots, \hat{w}_s^r$ .
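For concreteness, here is a minimal sketch of a clipped VAW forecaster on features $[x_t, 1]^T$. The regularization $\lambda = 1$ and the clipping range are illustrative choices, not values taken from the paper; the update itself is the standard VAW rule, where the current feature enters the Gram matrix before the label is seen.

```python
import numpy as np

def vaw_predictions(xs, ys, lam=1.0, clip=1.0):
    """Clipped Vovk-Azoury-Warmuth forecaster on features [x_t, 1]^T.
    Returns one prediction per round, made before seeing y_t."""
    A = lam * np.eye(2)              # lam*I + sum_{i<=t} phi_i phi_i^T
    b = np.zeros(2)                  # sum_{i<t} y_i phi_i
    preds = []
    for x, y in zip(xs, ys):
        phi = np.array([x, 1.0])
        A += np.outer(phi, phi)      # VAW adds the current feature to A first
        w = np.linalg.solve(A, b)
        preds.append(float(np.clip(phi @ w, -clip, clip)))
        b += y * phi                 # only now does y_t become available
    return preds
```

On noiseless data from a fixed line, the predictions converge to the line up to the $O(1/t)$ ridge shrinkage, which is the behavior Lemma 24 of (Baby & Wang, 2020) quantifies.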
+
+Notice that, even with these changes to our setting, we can run the proof of Appendix D up until Equation (15), where now $\hat{z}_j^r$ is the clipped VAW expert that starts at time $r$ . We can still replace this expert with a hypothetical unclipped expert, $\hat{w}_j^r$ .
+
+By Lemma 24 of (Baby & Wang, 2020), we have:
+
+$$
+\sum_ {j = r} ^ {s} (\theta_ {j} - \hat {w} _ {j} ^ {r}) ^ {2} \leq \sum_ {j = r} ^ {s} (\theta_ {j} - l _ {r: s} (x _ {j})) ^ {2} + \| u \| _ {2} ^ {2} + \tilde {\mathcal {O}} (1)
+$$
+
+where $l_{r:s}(x_i) = u^T x_i$ is the offline linear least squares estimate trained on the noiseless data $\{(x_j, \theta_j)\}_{j=r}^{s}$ . From Corollary 40 of (Baby & Wang, 2020), we have $\|u\|_2^2 = O(1)$ . By Lemma E.1, the first term is bounded by $|r - s|^3 TV_1(\theta[r : s])^2 / n^2$ .
+
+Plugging this argument into Equation (15), we recover Lemma D.11's statement that, with high probability, for any partition $\mathcal{P} = p_1, \ldots, p_l$ with $p_i = [x_{r_i}, x_{r_{i+1}-1}]$
+
+$$
+\sum_ {j = 1} ^ {n} (\hat {y} _ {j} - \theta_ {j}) ^ {2} \leq \tilde {O} (\sum_ {i = 1} ^ {l} (T V _ {1} (i) ^ {2} n _ {i} ^ {3} / n ^ {2} + 1))
+$$
+
+To complete the proof, we may now construct the oracle partition in the same way as before, where $TV_{1}$ of a bin is computed with respect to realized covariate spacing.
+
+# Steps for Theorem F.2
+
+All of the spline approximation results of Appendix C go through without technical changes. Now that ADDLE has been generalized to the uneven covariate setting, Lemma C.1 also goes through by an application of Lemma E.1 to the bias of the linear fits $\hat{a}_t$ (concentration is not an issue, as we still have $\sum_{j=1}^{n} \sigma^2 x_j^T (XX^T)^{-1} x_j = 2\sigma^2$ ).
+
+# F.3. Theorem for random covariates
+
+First, we cite a result that tells us that draws are roughly evenly spaced when they come from a distribution whose density is bounded below on $[0, 1]$ . We do not have control over the probability of the good event in this lemma.
+
+Lemma F.3 (Lemma 5 of (Wang et al., 2014)). Suppose $p$ is a pdf with support in $[0,1]$ and such that $p(x) \geq p_0 > 0$ . Let $x_1, \ldots, x_n$ be a sorted list of iid draws from $p$ . Then, with probability at least $1 - 2p_0n^{-10}$ , the maximum gap between two draws satisfies
+
+$$
+\max _ {i > 1} \left| x _ {i} - x _ {i - 1} \right| \leq \frac {c \log n}{p _ {0} n}
+$$
+
+where $c$ is a universal constant.
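A small simulation illustrates Lemma F.3 for the uniform density (so $p_0 = 1$), checking the maximum gap against $c \log n / n$ with a loose stand-in constant $c = 10$; this is a sanity check, not a proof of the lemma's constant.

```python
import numpy as np

# Max gap between sorted iid uniform draws, over several replications.
rng = np.random.default_rng(0)
n = 5000
max_gaps = [
    float(np.max(np.diff(np.sort(rng.uniform(size=n)))))
    for _ in range(20)
]
```

Typical max gaps here are on the order of $\log n / n \approx 0.0017$, well inside the bound.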
+
+By including the bad event of Lemma F.3 in a union bound, we have the following corollary of Theorem F.1. Notice that now, $TV_{1}[f; \mathcal{D}_{X}]$ is a random variable depending on the sampled covariates. In the special case where $f$ is differentiable, $TV_{1}[f; \mathcal{D}_{X}] \leq \| f \|_{TV_{1}}$ a.s.
+
+Corollary F.4. Suppose the same setting as Theorem F.1, except that $x_{1}, \ldots, x_{n}$ are sorted draws from a pdf $p$ with support in $[0, 1]$ and such that $p(x) \geq p_{0} > 0$ . Then, with probability at least $1 - p_{0} n^{-10} - \delta$ , the error satisfies
+
+$$
+\sum_ {t = 1} ^ {n} (\hat {y _ {t}} - f (x _ {t})) ^ {2} = \tilde {\cal O} (n ^ {1 / 5} T V _ {1} [ f; \mathcal {D} _ {X} ] ^ {2 / 5})
+$$
+
+where $\tilde{O}$ hides constants (including $\sigma$ ) and polylog factors of $n$ , $p_0$ and $\delta$ .
+
+Similarly, for AKORN's Theorem F.2:
+
+Corollary F.5. Suppose now that $x_{1}, \ldots, x_{n}$ are sorted draws from a pdf $p$ with support in $[0, 1]$ and such that $p(x) \geq p_{0} > 0$ . Then, with probability at least $1 - p_{0} n^{-10} - \delta$ , the error of $\hat{f}$ satisfies
+
+$$
+\sum_ {t = 1} ^ {n} (\hat {f} (x _ {t}) - f (x _ {t})) ^ {2} = \tilde {O} (n ^ {1 / 5} T V _ {1} [ f; \mathcal {D} _ {X} ] ^ {2 / 5})
+$$
+
+where $\tilde{O}$ hides constants (including $\sigma$ ) and polylog factors of $n$ , $p_0$ and $\delta$ .
+
+# G. Experimental Details and Additional Simulations
+
+# G.1. Details
+
+A single run of AKORN, Oracle Trend Filtering, and DoF Trend Filtering is performed as follows:
+
+1. Generate $\epsilon \sim \mathcal{N}(0, \sigma^2 I_n)$
+2. Get $\hat{f} = \mathrm{AKORN}(\{x_i, f(x_i) + \epsilon_i\}, \sigma)$
+3. Get $\hat{f}_{tf}^{\lambda}$ for data $\{x_i, f(x_i) + \epsilon_i\}$ and each parameter $\lambda$ in a grid $E$ . We use the library glmgen: https://github.com/glmgen/glmgen.
+4. Let $\hat{f}_{o - tf}$ be the $\hat{f}_{tf}^{\lambda}$ which has the smallest MSE with respect to the noiseless data
+5. Let $\hat{f}_{s - tf} = \hat{f}_{tf}^{\lambda}$ where $\lambda = \arg \min_{\lambda \in E}\{\| \hat{f}_{tf}^{\lambda} - Y\| _2^2 +2\sigma^2 L(\hat{f}_{tf}^{\lambda})\}$ where $L(g)$ gives the number of linear pieces of $g$
+6. Produce fitted values $\hat{y} = [\hat{f}(x_1) \dots \hat{f}(x_n)]$ and $\hat{y}_{o - tf} = [\hat{f}_{o - tf}(x_1) \dots \hat{f}_{o - tf}(x_n)]$ and $\hat{y}_{s - tf} = [\hat{f}_{s - tf}(x_1) \dots \hat{f}_{s - tf}(x_n)]$ for comparison
+
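Step 5 is an AIC/SURE-style penalized selection over the grid $E$. A minimal sketch follows, with hypothetical fitted values standing in for glmgen output (the piece-counting tolerance is an illustrative choice):

```python
import numpy as np

def count_linear_pieces(fit, x, tol=1e-8):
    """Number of linear pieces of a piecewise-linear fit on grid x,
    counted as 1 + number of slope changes."""
    slopes = np.diff(fit) / np.diff(x)
    return 1 + int(np.sum(np.abs(np.diff(slopes)) > tol))

def select_dof_fit(fits, y, x, sigma):
    """fits: dict mapping lambda -> fitted values. Returns the lambda
    minimizing ||f - y||^2 + 2*sigma^2 * (number of linear pieces)."""
    def crit(lam):
        f = fits[lam]
        return np.sum((f - y) ** 2) + 2 * sigma ** 2 * count_linear_pieces(f, x)
    return min(fits, key=crit)
```

The penalty trades training error against model complexity, so a wiggly fit with slightly smaller error loses to a simple fit with far fewer pieces.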
+This procedure is used as a subroutine for producing Tables 1, 2, and Figure 2.
+
+# G.2. ADDLE is worse than AKORN
+
+In this appendix, we back up the statement made in Section 5.2 that ADDLE is not competitive with offline methods. In Figure 5, we reproduce a couple of entries of Figure 2, substituting TF-DoF with ADDLE.
+
+
+Figure 5. Same methodology as Figure 2 applied to the Doppler function and the Jump function, but this time comparing ADDLE to AKORN and Trend Filtering. There is roughly an order of magnitude difference in the MSEs for all $n$ .
+
+
\ No newline at end of file
diff --git a/akornadaptiveknotsgeneratedonlineforregressionsplines/images.zip b/akornadaptiveknotsgeneratedonlineforregressionsplines/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..4b94904cf443d0b972d0ecf42c8cd4c750b8a0b8
--- /dev/null
+++ b/akornadaptiveknotsgeneratedonlineforregressionsplines/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:528deb1b089be7234ed3db9d6c87edd9129b40dd6f0d05d48c4b54fa4c3c4c52
+size 916187
diff --git a/akornadaptiveknotsgeneratedonlineforregressionsplines/layout.json b/akornadaptiveknotsgeneratedonlineforregressionsplines/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..8a308c845bd85a4de3c2d930998affed339068e0
--- /dev/null
+++ b/akornadaptiveknotsgeneratedonlineforregressionsplines/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2e5b3424dce6fe61b0c2e07ac880efde0ed4adc37f771efcaa5a578334dcaa80
+size 1668798
diff --git a/akrmapadaptivekernelregressionfortrustworthyvisualizationofcrossmodalembeddings/a448a509-d25f-4d52-947d-9a2f85aa48eb_content_list.json b/akrmapadaptivekernelregressionfortrustworthyvisualizationofcrossmodalembeddings/a448a509-d25f-4d52-947d-9a2f85aa48eb_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..3e7d17f524d418c0631ca019161366e5e08a5cad
--- /dev/null
+++ b/akrmapadaptivekernelregressionfortrustworthyvisualizationofcrossmodalembeddings/a448a509-d25f-4d52-947d-9a2f85aa48eb_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:05c957bdf6f4b256c6163847cd8c0966fa83a7a73fb82e26be706989ddf07a0e
+size 126715
diff --git a/akrmapadaptivekernelregressionfortrustworthyvisualizationofcrossmodalembeddings/a448a509-d25f-4d52-947d-9a2f85aa48eb_model.json b/akrmapadaptivekernelregressionfortrustworthyvisualizationofcrossmodalembeddings/a448a509-d25f-4d52-947d-9a2f85aa48eb_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..25bfb17b5743efe494845f4218250e3aa4791b4e
--- /dev/null
+++ b/akrmapadaptivekernelregressionfortrustworthyvisualizationofcrossmodalembeddings/a448a509-d25f-4d52-947d-9a2f85aa48eb_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ea13fbea6983f45026c1d2bd85f08cfc230cc135a2e3dc28c18b37ab8120f53c
+size 153860
diff --git a/akrmapadaptivekernelregressionfortrustworthyvisualizationofcrossmodalembeddings/a448a509-d25f-4d52-947d-9a2f85aa48eb_origin.pdf b/akrmapadaptivekernelregressionfortrustworthyvisualizationofcrossmodalembeddings/a448a509-d25f-4d52-947d-9a2f85aa48eb_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..ae01e00257c9f5e1030c7a980cf2a727a2451997
--- /dev/null
+++ b/akrmapadaptivekernelregressionfortrustworthyvisualizationofcrossmodalembeddings/a448a509-d25f-4d52-947d-9a2f85aa48eb_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fed0f581c5fa31952dbc32ba646016b8c2760f7d0d0c0199f7720201068938eb
+size 9970718
diff --git a/akrmapadaptivekernelregressionfortrustworthyvisualizationofcrossmodalembeddings/full.md b/akrmapadaptivekernelregressionfortrustworthyvisualizationofcrossmodalembeddings/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..42c4f817628b854791f6d48e67b8dde2829198e0
--- /dev/null
+++ b/akrmapadaptivekernelregressionfortrustworthyvisualizationofcrossmodalembeddings/full.md
@@ -0,0 +1,515 @@
+# AKRMap: Adaptive Kernel Regression for Trustworthy Visualization of Cross-Modal Embeddings
+
+Yilin Ye $^{1,2}$ Junchao Huang $^{3}$ Xingchen Zeng $^{1}$ Jiazhi Xia $^{4}$ Wei Zeng $^{1,2}$
+
+# Abstract
+
+Cross-modal embeddings form the foundation for multi-modal models. However, visualization methods for interpreting cross-modal embeddings have been primarily confined to traditional dimensionality reduction (DR) techniques like PCA and t-SNE. These DR methods primarily focus on feature distributions within a single modality, whilst failing to incorporate metrics (e.g., CLIPScore) across multiple modalities. This paper introduces AKRMap, a new DR technique designed to visualize cross-modal embedding metrics with enhanced accuracy by learning kernel regression of the metric landscape in the projection space. Specifically, AKRMap constructs a supervised projection network guided by a post-projection kernel regression loss, and employs adaptive generalized kernels that can be jointly optimized with the projection. This approach enables AKRMap to efficiently generate visualizations that capture complex metric distributions, while also supporting interactive features such as zoom and overlay for deeper exploration. Quantitative experiments demonstrate that AKRMap outperforms existing DR methods in generating more accurate and trustworthy visualizations. We further showcase the effectiveness of AKRMap in visualizing and comparing cross-modal embeddings for text-to-image models. Code and demo are available at https://github.com/yilinye/AKRMap.
+
+1 The Hong Kong University of Science and Technology (Guangzhou) 2 The Hong Kong University of Science and Technology 3 The Chinese University of Hong Kong (Shenzhen) 4 Central South University. Correspondence to: Wei Zeng .
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+# 1. Introduction
+
+Cross-modal embeddings play a fundamental role for multimodal models, functioning as cross-modal encoders (Rombach et al., 2022), objective functions (Ramesh et al., 2022), or evaluation metrics (Hessel et al., 2021; Wu et al., 2023a) for tasks like text-to-image (T2I) generation (Peebles & Xie, 2023; Hu et al., 2025). To assess the alignment of multimodal models, various evaluation metrics based on these embeddings have been introduced, such as CLIPScore (Hessel et al., 2021) and Human Preference Score (HPS) (Wu et al., 2023a;b; Zhang et al., 2024). Despite their utility, embedding-based evaluations are often difficult to interpret, as the metrics are typically reported as aggregated values, without providing insights into instance-level performance. This deficiency undermines the trustworthiness and transparency of the evaluation.
+
+To address this limitation, dimensionality reduction (DR) visualization serves as a crucial tool, offering a way to reveal the landscape of cross-modal metrics and enabling a more comprehensive understanding of model performance. Recent studies (Liang et al., 2022; Wang et al., 2023b;c) have explored the use of DR methods for cross-modal embeddings. These efforts often leverage established techniques such as PCA (Li et al., 2018), UMAP (McInnes et al., 2018) and t-SNE (Van der Maaten & Hinton, 2008), as well as autoencoder-based approaches (Le et al., 2018; Elhamod & Karpatne, 2024). However, existing DR methods are primarily designed to depict feature distributions within a single modality. When applied to cross-modal metrics, these methods often produce dense neighborhoods where points with significantly different metric values are positioned close, leading to overlap and local occlusion (Figure 1(b)). Moreover, multi-modal models are typically evaluated on large-scale datasets containing millions of data points. This creates a need for rendering contour maps that can reveal continuous metric distributions. However, local neighboring points with a mix of high and low metric values may be misrepresented, as the contour map depicts only a single aggregated value, causing inaccurate contour mapping as illustrated in Figure 1(c).
+
+To construct trustworthy visualizations for cross-modal embeddings, a key consideration is to enhance the accuracy
+
+
+(a) t-SNE projection
+(b) local occlusion
+(c) inaccurate contour map
+Figure 1. CLIPScore distribution on the COCO dataset by t-SNE (a). The visualization shows dense neighboring points with significantly different metric values, causing overlapping and occlusion (b) and highly inaccurate contour mapping (c).
+
+of metric contour mapping in the projected 2D space. As contour estimation typically relies on radial basis function (RBF) kernel (e.g., Gaussian kernel), we seek to enhance the kernel-based mapping by coordinating it with the DR process. Drawing inspiration from supervised t-SNE (Hajderanj et al., 2019), we introduce Adaptive Kernel Regression Map (AKRMap), a supervised DR method that leverages adaptive kernel regression to effectively visualize the distribution of cross-modal embedding metrics. Specifically, AKRMap first constructs a supervised DR network explicitly guided by post-projection kernel regression loss with neighborhood constraint (Sect. 4.1.1). Next, to account for the gap between high-dimensional and low dimensional kernels, we improve the flexibility of the post-projection kernel through adaptive generalized kernel jointly learned with the projection (Sect. 4.1.2). This approach enables the generation of scatterplots for discrete data points and contour maps for continuous metric landscapes, while also supporting advanced features such as overlay views and zooming for interactive interpretation (Sect. 4.2). Both the DR method and the visualization tool have been implemented as a Python package and made publicly available.
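As a point of reference for the kernel regression component, the standard Nadaraya-Watson estimator with a fixed Gaussian (RBF) kernel over projected 2D points can be sketched as follows; this is a generic fixed-bandwidth baseline, not AKRMap's adaptive generalized kernel, and the function names are illustrative.

```python
import numpy as np

def nw_regress(points, values, grid, bandwidth=0.1):
    """Nadaraya-Watson estimate of a metric surface at 2D grid locations,
    using a Gaussian kernel over projected points.
    points: (n, 2) projected embeddings; values: (n,) metric scores;
    grid: (m, 2) query locations."""
    # Pairwise squared distances between queries and data points: (m, n).
    d2 = np.sum((grid[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))
    return (w @ values) / np.clip(w.sum(axis=1), 1e-12, None)
```

When nearby points carry very different metric values, this weighted average collapses them into one aggregated contour value, which is precisely the inaccuracy AKRMap's joint projection-and-kernel training is designed to avoid.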
+
+We conduct quantitative experiments to evaluate AKRMap against traditional and autoencoder-based DR visualizations (Sect. 5.1). The results highlight the superior performance of AKRMap in accurately mapping cross-modal metrics for both in-sample and out-of-sample data points. We further demonstrate the practical applications of AKRMap across three distinct scenarios (Sect. 5.2): 1) visual exploration of human preference dataset (HPD) (Wu et al., 2023a), 2) visual comparison of diffusion-based and auto-regressive T2I models, and 3) visual examination of the global impact for fine-tuning. These applications illustrate that AKRMap effectively enables human-in-the-loop interpretation of model performance on large-scale datasets. Our method can also be potentially generalized to other modalities such as text-to-video task, with additional quantitative experiments shown in Appendix F.
+
+In summary, our contributions are as follows:
+
+- We propose a novel DR method for cross-modal metric visualization, which jointly learns the projection and metric contour mapping through kernel regression supervised DR with adaptive generalized kernel.
+- We develop a tool for trustworthy visualization of cross-modal metrics, incorporating visualization features such as a scatterplot view and a contour map, along with interactive features like zooming and overlaying.
+- We conduct quantitative experiments to demonstrate the superior performance of AKRMap in generating more accurate visualizations of cross-modal metric, and highlight its applications across three scenarios to enhance the trustworthiness of T2I model evaluation.
+
+# 2. Related Work
+
+# 2.1. Cross-Modal Embedding-based Evaluation
+
+Cross-modal embeddings like CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021) form the foundation of multimodal learning. Specifically, many evaluation methods for multi-modal models, such as T2I models, rely on these embeddings. Conventional metrics like Inception Score (Salimans et al., 2016) and Fréchet Inception Distance (Heusel et al., 2017) compute the average distance or distributional difference between generated and reference images in the embedding space, yet they are insufficient to measure cross-modal alignment and instance-level performance. With the advancement of multi-modal AI, cross-modal embedding metrics like CLIPScore (Hessel et al., 2021) have emerged to measure the alignment between prompts and generated images. While being effective in capturing the alignment, CLIPScore fails to model human preferences. To address this limitation, recent studies have proposed training specialized human preference models such as HPS (Wu et al., 2023a;b; Zhang et al., 2024), PickScore (Kirstain et al., 2023), and ImageReward (Xu et al., 2024), to better align with human judgments. For example, ImageReward (Xu et al., 2024) introduces a systematic pipeline for expert annotations to train preference models, while PickScore (Kirstain et al., 2023) leverages crowd-sourced human comparisons.
+
+However, these metrics generally provide only an average score, offering a broad overview of a model's performance across the entire embedding space. This highlights the need for neural embedding visualizations that can expose the detailed distribution of metric values within the cross-modal embeddings, facilitating instance-level inspection and supporting human-in-the-loop evaluation.
+
+# 2.2. Visualization for Neural Embeddings
+
+Visualization is an important tool to support data analysis in conjunction with AI (Wang et al., 2023a; Ye et al., 2024).
+
+
+Figure 2. AKRMap is a neural network based DR method designed to learn adaptive kernel regression for visualizing cross-modal embeddings. The network integrates two key components to jointly learn data point projection and cross-modal metric estimation: 1) Kernel regression supervision, and 2) Adaptive generalized kernel. The resulting visualizations, including scatterplots and contour maps, provide a clearer and more accurate representation of the cross-modal metric distribution.
+
+Particularly, it has proven to be effective for enhancing the understanding of various types of neural embeddings, including word embeddings (Mikolov et al., 2013; Chiang et al., 2020), vision-language embeddings (Liang et al., 2022; Wang et al., 2023b; Ye et al., 2025), and parameter spaces within loss landscape (Li et al., 2018; Elhamod & Karpatne, 2024). Commonly, dimensionality reduction (DR) techniques are employed for visualization. These include linear DR methods such as PCA (Abdi & Williams, 2010), as well as nonlinear DR methods like t-SNE (Van der Maaten & Hinton, 2008) and UMAP (McInnes et al., 2018). Parametric versions of traditional DR methods, which provide explicit mapping for projections, can be achieved by training neural networks, such as parametric t-SNE (Gisbrecht et al., 2015; Damrich et al., 2023), parametric UMAP (Sainburg et al., 2021) and other parametric methods for tasks like interactive clustering (Xia et al., 2023) or streaming data (Xia et al., 2024). Similarly, autoencoder-based DR methods (Le et al., 2018; Elhamod & Karpatne, 2024) have also been developed, incorporating an additional objective of reconstructing high-dimensional embeddings from their projections. Recently, there has been a growing interest in utilizing DR techniques to visualize cross-modal embeddings. For instance, UMAP has been employed to explore and visualize the modality gap between text and image embeddings (Liang et al., 2022; Wang et al., 2023b). To address this modality gap and enhance the cross-modal alignment in visualization space, some studies have introduced fusion-based DR methods (Ye et al., 2025), enabling the visualization of image embeddings in relation to text embedding anchor points.
+
+However, previous efforts treat multi-modal embeddings separately, failing to effectively depict the landscape of cross-modal metrics (e.g., CLIPScore and HPS). In addition, existing metric-aware DR techniques focus on a narrow set of metrics such as distance (Sainburg et al., 2021) and density (Narayan et al., 2021), without a generalizable mechanism for preserving other metrics. To address this limitation, we propose a kernel regression-based supervised projection method combined with an adaptive generalized kernel. Different from traditional kernel DR methods like kernel PCA (Scholkopf et al., 1997), which mainly use kernels to transform the high-dimensional features, we develop an adaptive kernel in the projection space to guide the DR. Notably, to construct the kernel guidance, we propose a novel cross-validation supervision technique that bridges traditionally nonparametric kernel regression and parametric DR. This design allows contour estimation errors to propagate back into the projection process. In this way, our method can dynamically adjust both the DR mapping and the kernel shape, enabling it to accurately fit the complex landscape of cross-modal metrics.
+
+# 3. Problem Definition
+
+Taking text-to-image generation models as an example, we formally define the embedding-based metric for multi-modal models: a pretrained neural network $e(\cdot)$ encodes the prompt $t$ and the generated image $v$ into a high-dimensional embedding representation $e(t,v)$, which is subsequently used to predict a metric score $s = f(e)$. Here, $f(\cdot)$ represents the final operation applied to the embeddings, such as cosine distance in HPS (Wu et al., 2023a).
+
+The problem of generating trustworthy visualizations for cross-modal embeddings can be defined as follows: given a dataset of prompt-image pairs $D = \{(t_i, v_i)\}_{i=1}^N$ and its embedding representations $e(D) = \{e(t_i, v_i)\}_{i=1}^N$, we first seek to learn a manifold $M$ (Goldberg et al., 2008) on which $e(D)$ resides in the high-dimensional embedding space $R^d$. This manifold can be "spread out" (reparametrized) in the visualization space $R^2$ to show the distribution of metric scores across the dataset, a process that can be modeled as a projection mapping $P(\cdot)$ s.t. $P(e(D)) \in R^2$. This process aims to learn an explicit mapping function that projects high-dimensional embeddings to points in 2D space while accurately preserving the underlying metric distribution.
+
+Next, to visualize a continuous distribution from the discrete projected sample points in $P(e(D))$, a contour map in the 2D space needs to be estimated. Here, we adopt the approach used by contouring algorithms in various Python libraries, such as Plotly, dividing the projected 2D space into a grid and calculating the metric distribution value at each grid point. Specifically, suppose the projected coordinates lie within a normalized 2D space $[0,1] \times [0,1]$. For each grid point $\mathbf{x}_g = \left(\frac{i}{N_w},\frac{j}{N_h}\right)$, we compute a value $\hat{s}(\mathbf{x}_g)$ based on the local metric distribution, which is then colored using a continuous colormap; $N_{w}$ and $N_{h}$ denote the number of grid cells along the x-axis and y-axis, respectively. In this manner, we can generate a contour map depicting the continuous landscape of the metric distribution from the discrete projected sample points.
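+As a concrete illustration of this grid construction, the following sketch builds the normalized grid and evaluates an arbitrary local estimator at every grid point (function names are ours, not from the paper's package):
+
+```python
+import numpy as np
+
+def grid_points(n_w: int, n_h: int) -> np.ndarray:
+    """Grid points x_g = (i/N_w, j/N_h) over the normalized space [0,1] x [0,1]."""
+    xs = np.linspace(0.0, 1.0, n_w + 1)            # i / N_w for i = 0..N_w
+    ys = np.linspace(0.0, 1.0, n_h + 1)            # j / N_h for j = 0..N_h
+    gx, gy = np.meshgrid(xs, ys, indexing="ij")
+    return np.stack([gx.ravel(), gy.ravel()], axis=1)
+
+def contour_values(grid: np.ndarray, estimate) -> np.ndarray:
+    """Evaluate a local metric estimator s_hat at every grid point."""
+    return np.array([estimate(g) for g in grid])
+```
+
+Here `estimate` stands in for the estimator $\hat{s}(\cdot)$ developed in Section 4.1; the resulting grid values would then be rendered with a continuous colormap, e.g., via a Plotly contour trace.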
+
+# 4. AKRMap
+
+We propose Adaptive Kernel Regression Map (AKRMap), with the workflow illustrated in Figure 2. The input is a set of cross-modal embeddings with each high-dimensional vector representing a data point in the embedding space. First, to synchronize the contour mapping with the projection for accurate metric landscape estimation, we propose a cross-modal metric-supervised projection method comprising two key components: 1) Kernel Regression Supervision, which guides the projection to achieve more precise metric mapping (Sect. 4.1.1), and 2) an Adaptive Generalized Kernel, which accounts for the gap between high-dimensional and low-dimensional kernels and allows for more flexible contour mapping to capture complex metric landscapes (Sect. 4.1.2). Then, we design an interactive visualization tool to facilitate multi-scale exploration of metric distribution and individual data points (Sect. 4.2).
+
+# 4.1. Adaptive Kernel Regression
+
+# 4.1.1. KERNEL REGRESSION SUPERVISED PROJECTION
+
+According to the problem definition, achieving an accurate cross-modal metric visualization requires cross-modal metric supervision. To address the challenge of constructing appropriate supervision for a continuous cross-modal metric, we develop a kernel regression supervised projection method. Specifically, we propose learning a projection network $P: R^d \to R^2$ with Nadaraya-Watson kernel regression (Ali et al., 2023) in the projected space:
+
+$$
+\hat{s}(\mathbf{x}) = \frac{\sum_{k=1}^{N} K\left(\mathbf{x} - P\left(e\left(t_{k}, v_{k}\right)\right)\right) \cdot s_{k}}{\sum_{k=1}^{N} K\left(\mathbf{x} - P\left(e\left(t_{k}, v_{k}\right)\right)\right)}, \tag{1}
+$$
+
+where $P(e(t_k, v_k))$ is the projected sample point of metric embeddings and $s_k$ is the corresponding ground-truth metric value in the training set. $K(\cdot)$ is an RBF kernel in $R^2$ .
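+Equation (1) is a Nadaraya-Watson estimator evaluated in the 2D map. A minimal NumPy sketch, using a plain Gaussian RBF kernel with an assumed bandwidth (the full method instead uses the adaptive kernel of Sect. 4.1.2):
+
+```python
+import numpy as np
+
+def rbf_kernel(sq_dists: np.ndarray, bandwidth: float = 0.05) -> np.ndarray:
+    # Gaussian RBF on squared 2D distances; the bandwidth value is an assumption.
+    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))
+
+def nadaraya_watson(x, proj_points, scores, kernel=rbf_kernel) -> float:
+    """Eq. (1): s_hat(x) = sum_k K(x - P_k) * s_k / sum_k K(x - P_k)."""
+    sq_dists = np.sum((proj_points - x) ** 2, axis=1)   # ||x - P(e(t_k, v_k))||^2
+    w = kernel(sq_dists)                                # kernel weights K(.)
+    return float(np.sum(w * scores) / np.sum(w))
+```
+
+In AKRMap this estimate is differentiable with respect to the projected points, so estimation errors can be backpropagated into the projection network.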
+
+Subsequently, to construct a supervised learning objective for the projection, inspired by the cross-validation method for kernel learning (Silverman, 2018), we randomly split the dataset $D$ into a training set $D_{tr}$ and a validation set $D_{vl}$ with a $9:1$ ratio in each epoch. Then, in equation (1), we estimate the metric distribution only with points in the training set $((t_k,v_k)\in D_{tr})$. Next, we seek to minimize the weighted mean square error loss:
+
+$$
+MSE_{p} = \frac{1}{|D_{p}|} \sum_{x_{i} \in D_{p}} \left(\hat{s}(x_{i}) - s_{i}\right)^{2}, \quad p \in \{vl, tr\}, \tag{2}
+$$
+
+$$
+MSE_{r} = w_{1} MSE_{vl} + w_{2} MSE_{tr}, \tag{3}
+$$
+
+where the overall regression loss $MSE_{r}$ is a weighted sum of the loss on the validation set ($MSE_{vl}$) and that on the training set ($MSE_{tr}$). We balance these loss terms to ensure mapping accuracy at both in-sample and out-of-sample positions on the contour; the weights $w_{1} = 1$ and $w_{2} = 0.3$ are set empirically. The construction of this weighted train-val loss may differ from common practice, but it is motivated by the problem of connecting projection with kernel regression, explained as follows. Unlike neural network methods, kernel regression is nonparametric: parameters such as the bandwidth are traditionally either precomputed or optimized via cross-validation. We therefore face the interesting problem of bridging parametric projection and nonparametric kernel regression, which motivates us to hold out a validation set so that kernel parameters are learned jointly with the neural network. The validation loss improves the generalization of kernel regression to unseen positions; however, relying solely on the validation loss would decrease the local detail of the map. Indeed, as shown by additional experiments in Figure 14 in Appendix E, this train-val scheme proves essential for robust mapping.
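+The per-epoch split and the weighted loss of equations (2)-(3) can be sketched as follows (function names and the seed are ours; the actual method backpropagates this loss through the projection network):
+
+```python
+import numpy as np
+
+def split_train_val(n: int, val_ratio: float = 0.1, seed: int = 0):
+    """Random 9:1 train/validation split, redrawn at each epoch."""
+    idx = np.random.default_rng(seed).permutation(n)
+    n_val = max(1, int(round(n * val_ratio)))
+    return idx[n_val:], idx[:n_val]                 # D_tr indices, D_vl indices
+
+def regression_loss(s_hat, s, tr_idx, vl_idx, w1: float = 1.0, w2: float = 0.3):
+    """Eqs. (2)-(3): MSE_r = w1 * MSE_vl + w2 * MSE_tr."""
+    mse_vl = np.mean((s_hat[vl_idx] - s[vl_idx]) ** 2)
+    mse_tr = np.mean((s_hat[tr_idx] - s[tr_idx]) ** 2)
+    return w1 * mse_vl + w2 * mse_tr
+```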
+
+Neighborhood Preservation Constraint. To encourage the projection to maintain some neighborhood information of the high-dimensional embeddings, we combine our regression loss with a constraint term from traditional dimensionality reduction. Specifically, we incorporate the KL-divergence loss from t-SNE:
+
+$$
+KL = \sum_{i}^{n} \sum_{j, j \neq i}^{n} p_{ij} \ln \frac{p_{ij}}{q_{ij}}, \tag{4}
+$$
+
+where $p_{ij}$ and $q_{ij}$ are the neighborhood distribution probabilities in high-dimensional and low-dimensional space respectively, as detailed in (Van der Maaten & Hinton, 2008). Specifically, we adopt a perplexity-free implementation (De Bodt et al., 2018; Crecchi et al., 2020) of the KL loss for our parametric projection network. Overall, the objective function of our method is:
+
+$$
+L = \lambda MSE_{r} + KL, \tag{5}
+$$
+
+where $\lambda = 0.125$ is empirically set to balance the loss.
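+For illustration, a simplified version of the KL term and the combined objective in equations (4)-(5). For brevity this uses a single global bandwidth for the high-dimensional affinities rather than the perplexity-free multi-scale scheme, and all function names are ours:
+
+```python
+import numpy as np
+
+def affinities(X: np.ndarray, student: bool = False) -> np.ndarray:
+    """Normalized pairwise similarities with zero diagonal.
+    Gaussian for high-dimensional p_ij; Student-t for low-dimensional q_ij.
+    Simplification: one global bandwidth, not the perplexity-free scheme."""
+    sq = np.sum(X ** 2, axis=1)
+    D2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T   # pairwise squared distances
+    A = 1.0 / (1.0 + D2) if student else np.exp(-D2)
+    np.fill_diagonal(A, 0.0)
+    return A / A.sum()
+
+def kl_loss(P: np.ndarray, Q: np.ndarray, eps: float = 1e-12) -> float:
+    """Eq. (4): sum over i != j of p_ij * ln(p_ij / q_ij)."""
+    return float(np.sum(P * np.log((P + eps) / (Q + eps))))
+
+def total_loss(mse_r: float, kl: float, lam: float = 0.125) -> float:
+    """Eq. (5): L = lambda * MSE_r + KL."""
+    return lam * mse_r + kl
+```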
+
+# 4.1.2. ADAPTIVE GENERALIZED KERNEL
+
+In this section we illustrate the design of the kernel $K(\cdot)$ used in the regression supervision in equation (1). The key challenge is the discrepancy between high-dimensional and low-dimensional kernels. First, to avoid unstable optimization caused by exponentials and to make the regression more compatible with our low-dimensional t-SNE neighborhood constraint, we adopt a t-distribution-like kernel instead of a Gaussian kernel. However, for complex metric distribution landscapes, especially those arising in embedding-based generative model evaluation, a standard kernel may lack sufficient flexibility; in particular, it is difficult to determine a proper decay rate of the kernel value with respect to distance. To address this problem, we draw inspiration from the observation in a previous study (Narayan et al., 2021) of an approximate power-law relationship between the local radius in the low-dimensional space ($r_e$) and that in the original high-dimensional space ($r_o$): $r_e(x_i) \approx \alpha [r_o(x_i^h)]^\beta$. We hypothesize that this transformation can also improve the adaptability of the low-dimensional kernel. Therefore, we propose an adaptive generalized t-distribution kernel:
+
+$$
+K(\mathbf{x}, \alpha, \beta) = \left(1 + \alpha \|\mathbf{x}\|^{2\beta}\right)^{-1}, \tag{6}
+$$
+
+where $\alpha$ and $\beta$ are learnable parameters that are jointly optimized with the projection. In this way, our method can dynamically change the shape of the kernel to accurately fit the cross-modal metric landscape and reduce the risk of overfitting or underfitting due to suboptimal decay rate.
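+A possible PyTorch realization of equation (6), with $\alpha$ and $\beta$ parameterized in log space to keep them positive (an implementation choice of ours, not stated in the paper):
+
+```python
+import torch
+import torch.nn as nn
+
+class AdaptiveGeneralizedKernel(nn.Module):
+    """Eq. (6): K(x; alpha, beta) = (1 + alpha * ||x||^(2*beta))^(-1).
+    alpha and beta are optimized jointly with the projection network."""
+
+    def __init__(self, alpha0: float = 1.0, beta0: float = 1.0):
+        super().__init__()
+        # log-space parameterization keeps alpha and beta positive (our choice)
+        self.log_alpha = nn.Parameter(torch.tensor(float(alpha0)).log())
+        self.log_beta = nn.Parameter(torch.tensor(float(beta0)).log())
+
+    def forward(self, diffs: torch.Tensor) -> torch.Tensor:
+        # diffs: (..., 2) displacements x - P(e(t_k, v_k)) in the 2D map
+        alpha, beta = self.log_alpha.exp(), self.log_beta.exp()
+        sq_norm = (diffs ** 2).sum(dim=-1)              # ||x||^2
+        return 1.0 / (1.0 + alpha * sq_norm.pow(beta))  # (||x||^2)^beta = ||x||^(2*beta)
+```
+
+With $\alpha = \beta = 1$, this reduces to the standard t-distribution kernel $1/(1+\|\mathbf{x}\|^2)$.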
+
+# 4.2. Cross-modal Metric Visualization
+
+On the basis of AKRMap, we further develop an interactive visualization tool that provides two distinct views for exploring cross-modal embedding metrics:
+
+- Scatterplot view. The scatterplot view displays all individual data points within the embedding space, with each point color-coded according to the cross-modal embedding metric. This view allows for direct interaction with the data points. Unlike scatterplots generated by baseline DR methods, which often suffer from significant occlusion that obscures the true metric distribution, AKRMap effectively reveals extreme values with greater clarity, as demonstrated in Figure 3.
+- Contour map. For large datasets like HPD and COCO, the scatterplot can become overcrowded with cluttered points. To address this limitation, we introduce a contour map that represents the cross-modal metric distribution in a continuous manner by dividing the 2D space into grids and computing grid values based on the local metric distribution. This representation makes regions with extreme values more prominent and reveals distribution patterns with greater clarity.
+
+
+Figure 3. Comparison of scatterplots generated by (a) t-SNE $(p = 100)$, (b) Pt-SNE, and (c) AKRMap for the HPSv2 metric. Despite the visual clutter introduced by the large-scale dataset, AKRMap provides a clearer and more accurate representation of the HPSv2 metric distribution.
+
+Figure 4. Contour map combined with zoom and overlay with point sampling for multiscale exploration of the HPD dataset ((a) Overview; (c) Instance).
+
+The visualization tool includes various interactive features designed to facilitate in-depth exploration, including:
+
+- Zoom. AKRMap enables efficient multiscale zooming by dynamically computing the contour map using varying grid resolutions, as illustrated in Figure 4. Specifically, it adjusts the level of detail by increasing the grid resolution proportionally to the zoom level, revealing finer details in the map as users zoom in. This capability is unique to AKRMap and not achievable with traditional DR methods, which lack the ability to preserve and display local details.
+- Overlay. The scatterplot view and contour map allow data points to be overlaid onto the contour map, enabling users to simultaneously observe the overall distribution while interacting with individual data points, as shown in Figure 4. To mitigate occlusion, we have implemented sampling techniques, such as random sampling and Poisson disk sampling.
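+One way to realize the zoom behaviour is to recompute the contour grid over the visible window with a resolution proportional to the zoom level. The following sketch is illustrative (constants and function name are ours, not the package's API):
+
+```python
+def zoom_grid(center, zoom, base_res=100, max_res=2000):
+    """Visible window in the normalized [0,1]^2 map and its grid resolution.
+    Resolution grows proportionally to the zoom level, so zooming in
+    reveals finer detail; base_res and max_res are assumed defaults."""
+    half = 0.5 / zoom                                   # half-width of the window
+    cx, cy = center
+    window = (max(0.0, cx - half), min(1.0, cx + half),
+              max(0.0, cy - half), min(1.0, cy + half))
+    res = min(max_res, int(base_res * zoom))            # finer grid when zoomed in
+    return window, res
+```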
+
+The visualization tool has been built into an easy-to-use Python package with minimal dependencies, requiring only PyTorch and Plotly, and can be seamlessly integrated into interactive computational notebooks.
+
+# 5. Experiment
+
+# 5.1. Quantitative Experiments
+
+Baseline methods. We compare AKRMap against three commonly used DR methods (PCA, t-SNE, and UMAP) as well as two encoder-based approaches: SAE (Le et al., 2018) and Neuro-Visualizer (Elhamod & Karpatne, 2024). For the encoder-based methods, the decoder is used to estimate grid colors, while the other baselines rely on the common Gaussian RBF kernel to estimate the 2D distribution map. Additional details about RBF parameter selection are provided in Appendix C. We use a distance threshold to cut off empty areas for traditional DR techniques but keep the full landscape for the autoencoder methods, because far-away positions in traditional DR results are ambiguous and can hardly be mapped back to the original high-dimensional space without an inverse decoder. On the other hand, as autoencoders can easily lead to value explosion, we set an upper bound for the map to align with the maximum metric value in the dataset.
+
+Table 1. Quantitative comparison of cross-modal embedding metric visualizations for test (out-of-sample) points on the HPD dataset. Our AKRMap outperforms baseline methods and ablations, achieving the best performance across all four cross-modal embedding metrics.
+
+| Method | CLIPScore (mae / mape / rmse) | HPSv2 (mae / mape / rmse) | PickScore (mae / mape / rmse) | Aesthetic Score (mae / mape / rmse) |
+| --- | --- | --- | --- | --- |
+| PCA | 4.0042 / 16.5506 / 5.1142 | 1.4444 / 5.7870 / 1.8749 | 1.1497 / 5.9206 / 1.4686 | 0.5322 / 11.5645 / 0.6756 |
+| t-SNE | 4.0241 / 16.6643 / 5.1523 | 1.4361 / 5.7568 / 1.8629 | 1.1579 / 5.9429 / 1.4640 | 0.4725 / 10.3128 / 0.6018 |
+| UMAP | 4.0819 / 16.9066 / 5.2179 | 1.4432 / 5.7817 / 1.8725 | 1.2214 / 6.2721 / 1.5536 | 0.4463 / 9.6323 / 0.5721 |
+| SAE | 4.1843 / 16.9386 / 5.2844 | 1.4483 / 5.8117 / 1.8696 | 1.2357 / 6.3952 / 1.5650 | 0.5809 / 12.7041 / 0.7251 |
+| Neuro-Visualizer | 12.4201 / 43.8219 / 13.7318 | 1.6871 / 6.4526 / 2.0649 | 1.9655 / 9.5295 / 2.3427 | 0.4900 / 10.6306 / 0.6252 |
+| AKRMap (w/o KR) | 4.0164 / 16.4830 / 5.1274 | 1.3978 / 5.6039 / 1.8102 | 1.1284 / 5.8004 / 1.4353 | 0.4752 / 10.3943 / 0.6003 |
+| AKRMap (w/o GK) | 2.2812 / 9.1189 / 2.9458 | 1.1430 / 4.5498 / 1.4695 | 0.9064 / 4.6194 / 1.1686 | 0.4555 / 9.9267 / 0.5718 |
+| AKRMap | 1.8707 / 7.3649 / 2.4253 | 0.8108 / 3.1935 / 1.1200 | 0.7712 / 3.9225 / 1.0225 | 0.4305 / 9.3340 / 0.5488 |
+
+Ablation Study. We evaluate AKRMap under two alternative settings: 1) training the projection network using only the neighborhood constraint, without the supervision provided by kernel regression (AKRMap w/o KR), and 2) removing the adaptive generalized kernel component (AKRMap w/o GK).
+
+Dataset. To evaluate performance on the cross-modal embedding metric, we select the widely used large-scale T2I dataset, the Human Preference Dataset (HPD) (Wu et al., 2023a). The HPD dataset contains over 430,000 unique images generated by various T2I models, along with their corresponding prompts, in the official training set, and 3,700 images in the test set.
+
+Cross-modal Embedding Metrics. We compare AKRMap against baseline methods for visualizing several commonly used cross-modal embedding metrics in T2I generation, including CLIPScore (Hessel et al., 2021), HPSv2 (Wu et al., 2023a), PickScore (Kirstain et al., 2023), and a unimodal embedding metric commonly used in this cross-modal scenario, the Aesthetic Score (Schuhmann et al., 2022).
+
+Performance Evaluation. We evaluate the accuracy and trustworthiness of the visualization methods using mapping errors, including mae, rmse, and mape. Specifically, we calculate these errors on the test set of HPD for out-of-sample points that were not used during training. We also report the errors for in-sample points $(mae_{in}, mape_{in}, rmse_{in})$ from the HPD training set in Appendix B. This dual evaluation is important because a good visualization method must accurately represent the training data distribution while avoiding misleading maps caused by overfitting, ensuring reliable accuracy for test points.
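+The three mapping errors are standard; a routine sketch (mape is in percent, matching the magnitudes reported in Table 1):
+
+```python
+import numpy as np
+
+def mapping_errors(s_hat, s, eps=1e-12):
+    """mae, mape (%), and rmse between estimated and ground-truth metric values."""
+    s_hat, s = np.asarray(s_hat, float), np.asarray(s, float)
+    err = s_hat - s
+    mae = float(np.mean(np.abs(err)))
+    mape = float(100.0 * np.mean(np.abs(err) / (np.abs(s) + eps)))
+    rmse = float(np.sqrt(np.mean(err ** 2)))
+    return mae, mape, rmse
+```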
+
+Training implementations. The architecture of our projection network is a 4-layer MLP, with shape $(d,d)$ in the first three layers and $(d,2)$ in the last layer, where $d$ is the dimension of the input embeddings. Batch normalization and ReLU activation are applied to each layer. Our projection model is trained on one Nvidia L4 GPU with a batch size of 1000 for 20 epochs, using the Adam optimizer with a learning rate of 0.002. For the t-SNE and PCA implementations, we use the Python scikit-learn package, where the t-SNE method adopts Barnes-Hut t-SNE (Van Der Maaten, 2014). For the UMAP implementation, we adopt the umap-learn Python package. The Neuro-Visualizer implementation is based on the open-source code of the original paper (Elhamod & Karpatne, 2024), while SAE is based on a GitHub reimplementation$^{1}$.
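+The described projection network can be sketched in PyTorch as follows; we apply batch normalization and ReLU to the hidden layers only and leave the 2D output linear, which is our reading of the description:
+
+```python
+import torch
+import torch.nn as nn
+
+def build_projection_network(d: int) -> nn.Sequential:
+    """4-layer MLP P: R^d -> R^2, i.e. three (d, d) layers with
+    BatchNorm + ReLU, then a final (d, 2) linear layer."""
+    layers = []
+    for _ in range(3):
+        layers += [nn.Linear(d, d), nn.BatchNorm1d(d), nn.ReLU()]
+    layers.append(nn.Linear(d, 2))
+    return nn.Sequential(*layers)
+```
+
+Training would then use `torch.optim.Adam(net.parameters(), lr=0.002)` with a batch size of 1000 for 20 epochs, per the setup above.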
+
+The experiment results are presented in Table 1.
+
+Comparison with baselines. Our method AKRMap consistently outperforms baseline methods in mapping accuracy across all the embedding metrics. Notably, AKRMap effectively reduces mapping errors for both training (in-sample) points (see Appendix B) and test (out-of-sample) points, demonstrating its ability to produce more trustworthy mappings. For instance, when applied to the HPSv2 metric, AKRMap reduces the MAE by nearly $50\%$ for in-sample points and approximately $43\%$ for out-of-sample points compared to the best-performing baseline, t-SNE. In addition, while autoencoder methods are effective for loss landscapes, they show weaknesses in cross-modal metric mapping, with relatively higher errors and unstable performance. These findings highlight AKRMap's superior reliability and robustness across diverse embedding scenarios.
+
+Ablation results. The ablation results highlight the importance of the two key components of our method: kernel regression supervision (KR) and the adaptive generalized kernel (GK), both of which are critical for enhancing mapping accuracy. Among these, kernel regression supervision proves to be the most impactful, as the results for the AKRMap w/o KR setting are nearly indistinguishable from those of the baseline methods in Table 1. This outcome is expected: removing kernel regression supervision leaves the projection network relying solely on the neighborhood constraint, making it functionally similar to t-SNE. In addition, the adaptive generalized kernel demonstrates its value by further reducing errors across various metrics, underscoring its effectiveness in capturing and fitting complex metric landscapes. These findings validate the necessity of both components for achieving superior performance.
+
+# 5.2. Applications
+
+# 5.2.1. VISUALIZING LARGE T2I DATASET
+
+AKRMap can provide an accurate overview of large T2I datasets by effectively capturing the cross-modal metric distribution. Figure 5 shows the contour maps of CLIPScore, HPSv2, PickScore, and Aesthetic Score distributions in the HPD dataset, generated by our AKRMap and four baseline methods: PCA, UMAP, t-SNE, and Neuro-Visualizer.
+
+In alignment with the results from the quantitative experiments, our method demonstrates superior performance by accurately depicting score distributions with rich local details. In contrast, the baselines suffer from various limitations, such as over-smoothing due to averaging effects, which obscures local structures. For example, PCA introduces pronounced block effects with sharp edges at region boundaries, significantly disrupting the overall smoothness of the visualizations. Similarly, UMAP and t-SNE often struggle with low-value regions being either overshadowed by high values or averaged out, making it challenging to identify clusters with suboptimal performance. Surprisingly, Neuro-Visualizer performs the worst among the baselines for visualizing cross-modal embedding metrics. Except for the Aesthetic Score, its results are highly unstable, with excessively large estimates in out-of-sample areas, and its contour maps are riddled with jagged terrain, exhibiting poor smoothness. This highlights the increased challenge of cross-modal embeddings compared to unimodal embeddings, and it further underscores the strength of our AKRMap in producing more accurate and visually coherent representations. Overall, AKRMap achieves a superior balance between local accuracy and smoothness. Furthermore, the mapping performance of traditional methods can deteriorate significantly if a smaller bandwidth is manually set, as demonstrated by additional results in Appendix C.
+
+# 5.2.2. COMPARING DIFFUSION-BASED MODEL AND AUTO-REGRESSIVE MODEL
+
+We conduct a visual comparison of two representative T2I models from different architectural families: Stable Diffusion-v2.1 (Rombach et al., 2022) (SD-2.1), a diffusion-based model, and Infinity-2B (Han et al., 2024), an autoregressive model. Using approximately 590,000 image captions from the MS-COCO (Lin et al., 2014) training dataset as prompts, we generate corresponding images using both models. To illustrate their performance differences, we visualize the HPSv2 score differences between Infinity and SD-2.1 (calculated as Infinity's score minus SD-2.1's score), as shown in Figure 6. Here, red regions highlight areas where Infinity outperforms SD-2.1 significantly, while blue regions indicate smaller differences.
+
+The visualization reveals interesting patterns, particularly in Region A, where a significant performance gap exists between the two models. A deeper analysis shows that this region is primarily associated with prompts related to sports players and athletes. By examining instance-level generated images with the overlay feature, we find that SD-2.1 often struggles to generate human figures or produces black-and-white photographs, whereas Infinity consistently generates high-quality, colored images of athletes; see the right side of Figure 6. This indicates that Infinity demonstrates stronger capabilities in generating human figures, especially in sports-related contexts. Overall, AKRMap effectively highlights performance distribution patterns across large-scale datasets, facilitating a detailed comparative analysis of the strengths and limitations of different models.
+
+# 5.2.3. VISUALIZING FINE-TUNED MODEL
+
+We showcase AKRMap's ability to analyze the impact of model fine-tuning by comparing the SD model (Rombach et al., 2022) with Dreamlike Photoreal 2.0 (DP-2.0) (dream-like.art, 2024), a fine-tuned variant of SD-1.5 optimized for photorealistic image generation. Our evaluation utilizes two distinct datasets: the MS-COCO (Lin et al., 2014) validation set, from which we extract 25,000 image captions as prompts, and the PartiPrompts (Yu et al., 2022) dataset, which contains 1,600 diverse prompts. Consistent with the previous analysis, we visualize the HPSv2 score differences between DP-2.0 and SD-1.5 to illustrate their comparative performance, with the results presented in Figure 7.
+
+Figure 5. Qualitative comparison of contour map visualizations of CLIPScore, HPSv2, PickScore, and Aesthetic Score distributions in the HPD dataset generated by our AKRMap and four baselines: PCA, UMAP, t-SNE, and Neuro-Visualizer.
+
+Figure 6. AKRMap can be used to compare the generative performance of an auto-regressive model and a diffusion model.
+
+Overall, the effects of fine-tuning are more pronounced on the PartiPrompts benchmark, as evidenced by a larger presence of red regions in the right visualization. This is likely because the MS-COCO dataset contains common web captions similar to the pretraining data of the SD model, while PartiPrompts contains manually created sophisticated prompts that are challenging in different aspects. Moreover, the visualizations highlight several regions of interest (A, B, and C) where DP-2.0 demonstrates significant improvements in photorealism and overall image quality. In Region A, further analysis of individual instances reveals that DP-2.0 achieves enhanced realism in both static and dynamic objects (such as oranges and flying birds) and shows notable improvements in scenes, such as beach waves and sand textures. Region B primarily includes various stylistic renditions of raccoon images, where DP-2.0 not only accurately captures the intended styles but also excels in rendering intricate details, such as patterns on ties and textiles. Region C encompasses a variety of automotive scenes, where both the environmental contexts and the vehicles themselves exhibit greater realism and richer detail compared to SD-1.5. Importantly, our visualization demonstrates that DP-2.0 consistently outperforms SD-1.5 across the entire distribution space. This indicates that the fine-tuning process successfully enhanced the model's photorealism while maintaining its general capabilities across other domains.
+
+# 6. Discussion
+
+Limitations. As AKRMap prioritizes metric mapping, its neighborhood preservation performance may not be comparable to traditional DR methods. Nevertheless, due to the incorporation of the neighborhood constraint, AKRMap is able to maintain a desirable level of neighborhood preservation, with detailed comparison results shown in Appendix A. Furthermore, for more complex embedding metrics used in other tasks, such as those computed over sequences of embeddings (e.g., CodeBertScore (Zhou et al., 2023)), it remains uncertain how AKRMap can effectively determine a single vector representation for each instance prior to projection. This presents a potential area for further investigation.
+
+Figure 7. AKRMap can be used to show the global impact of fine-tuning by comparison to the base model ((a) COCO evaluation; (b) PartiPrompts evaluation).
+
+Future Work. A recent study introduces Multi-dimensional Human Preference (MPS) (Zhang et al., 2024), which evaluates embedding scores across four dimensions: Overall, Aesthetics, Alignment, and Detail. Leveraging AKRMap to visualize multi-dimensional comparisons of different T2I models would be an interesting avenue for future exploration. AKRMap also offers the potential to enhance existing cross-modal metric models through interactive human feedback; for example, we aim to fine-tune models to address domain-specific needs, such as evaluating and filtering game assets generated by T2I models.
+
+Moreover, our method provides a versatile framework that can be adapted to other value landscape mapping challenges for high-dimensional data. For instance, AKRMap could be employed to visualize the distribution of predicted values in classical multivariate regression tasks. Beyond evaluation, AKRMap could also play a role in supporting trustworthy filtering of pretraining data by visualizing filtering scores across large-scale datasets. One pertinent example involves the use of Aesthetic Scores to filter the massive LAION-5B dataset (Schuhmann et al., 2022). We plan to scale our approach to datasets containing billions of data points, thereby increasing transparency in pretraining data selection and promoting better understanding of these processes.
+
+# 7. Conclusion
+
+In this paper, we introduce AKRMap, a dimensionality reduction method that visualizes cross-modal embedding metrics through kernel-regression-supervised projection with an adaptive generalized kernel. Based on AKRMap, we develop a visualization tool to support metric-aware visualization of cross-modal embeddings for the evaluation of text-to-image generative models. Quantitative experiments and three application scenarios show that AKRMap can facilitate trustworthy visualization of cross-modal metrics for transparent evaluation.
+
+# Acknowledgments
+
+We would like to extend our gratitude to the anonymous reviewers for their valuable comments. This work is partially supported by the National Natural Science Foundation of China (62172398, U23A20313, 62372471), The Science Foundation for Distinguished Young Scholars of Hunan Province (NO. 2023JJ10080), and Guangzhou Basic and Applied Basic Research Foundation (2024A04J6462).
+
+# Impact Statement
+
+AKRMap advances transparent multi-modal evaluation by trustworthy visualization of cross-modal embedding metrics, which is pivotal for enhancing human-in-the-loop evaluation in tasks like T2I generation. The strategy of jointly learning discrete point DR with continuous metric landscape estimation proves highly effective in scenarios like visualizing large generated dataset, comparing different models' performance, and revealing the global impact of fine-tuning. Our method also provides a general framework that can be adapted to embedding-based metric visualization for other tasks such as 3D generation and pretraining data filtering.
+
+Our method highlights the synergy between automatically computed metrics and humans' interactive interpretation. With the rapid development of AI models trained and evaluated on datasets of tremendous scale, it is critical to strike a balance between the efficiency and trustworthiness of evaluation, where transparency and visibility to humans are essential factors.
+
+# References
+
+Abdi, H. and Williams, L. J. Principal component analysis. Wiley Interdisciplinary Reviews: Computational Statistics, 2(4):433-459, 2010.
+
+Ali, T. H., Hayawi, H. A. A.-M., and Botani, D. S. I. Estimation of the bandwidth parameter in Nadaraya-Watson kernel non-parametric regression based on universal threshold level. Communications in Statistics-Simulation and Computation, 52(4):1476-1489, 2023.
+Cheng, S. and Mueller, K. The data context map: Fusing data and attributes into a unified display. IEEE Transactions on Visualization and Computer Graphics, 22(1): 121-130, 2015.
+Chiang, H.-Y., Camacho-Collados, J., and Pardos, Z. Understanding the source of semantic regularities in word embeddings. In Proceedings of the Conference on Computational Natural Language Learning, pp. 119-131, 2020.
+Crecchi, F., De Bodt, C., Verleysen, M., Lee, J., Bacciu, D., et al. Perplexity-free parametric t-sne. In European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, pp. 387-392, 2020.
+Damrich, S., Böhm, N., Hamprecht, F. A., and Kobak, D. From t-SNE to UMAP with contrastive learning. In The International Conference on Learning Representations, 2023.
+De Bodt, C., Mulders, D., Verleysen, M., and Lee, J. A. Perplexity-free t-sne and twice student tt-sne. In European Symposium on Artificial Neural Networks, pp. 123-128, 2018.
+dreamlike.art. Dreamlike photoreal 2.0 - a photorealistic model based on stable diffusion 1.5, 2024. URL https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0.
+Elhamod, M. and Karpatne, A. Neuro-visualizer: A novel auto-encoder-based loss landscape visualization method with an application in knowledge-guided machine learning. In International Conference on Machine Learning, 2024.
+Gisbrecht, A., Schulz, A., and Hammer, B. Parametric nonlinear dimensionality reduction using kernel t-sne. Neurocomputing, 147:71-82, 2015.
+Goldberg, Y., Zakai, A., Kushner, D., and Ritov, Y. Manifold learning: The price of normalization. Journal of Machine Learning Research, 9(8), 2008.
+Hajderanj, L., Weheliye, I., and Chen, D. A new supervised t-SNE with dissimilarity measure for effective data visualization and classification. In Proceedings of the International Conference on Software and Information Engineering, pp. 232-236, 2019.
+Han, J., Liu, J., Jiang, Y., Yan, B., Zhang, Y., Yuan, Z., Peng, B., and Liu, X. Infinity: Scaling bitwise autoregressive modeling for high-resolution image synthesis. arXiv preprint arXiv:2412.04431, 2024.
+
+Harwath, D. and Glass, J. Deep multimodal semantic embeddings for speech and images. In IEEE Workshop on Automatic Speech Recognition and Understanding, pp. 237-244. IEEE, 2015.
+Hessel, J., Holtzman, A., Forbes, M., Le Bras, R., and Choi, Y. CLIPScore: A reference-free evaluation metric for image captioning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 7514-7528, 2021.
+Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., and Hochreiter, S. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems, 30, 2017.
+Hu, X., Wang, H., Lenssen, J. E., and Schiele, B. Personaho: Effortlessly improving personalized face with human-object interaction generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2025.
+Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., and Duerig, T. Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning, pp. 4904-4916. PMLR, 2021.
+Kaski, S., Nikkilä, J., Oja, M., Venna, J., Törönen, P., and Castrén, E. Trustworthiness and metrics in visualizing similarity of gene expression. BMC Bioinformatics, 4(1):1-13, 2003. doi: https://doi.org/10.1186/1471-2105-4-48.
+Kirstain, Y., Polyak, A., Singer, U., Matiana, S., Penna, J., and Levy, O. Pick-a-pic: An open dataset of user preferences for text-to-image generation. In Thirty-seventh Conference on Neural Information Processing Systems, 2023.
+Le, L., Patterson, A., and White, M. Supervised autoencoders: Improving generalization performance with unsupervised regularizers. Advances in Neural Information Processing Systems, 31, 2018.
+Li, H., Xu, Z., Taylor, G., Studer, C., and Goldstein, T. Visualizing the loss landscape of neural nets. Advances in Neural Information Processing Systems, 31, 2018.
+Liang, V. W., Zhang, Y., Kwon, Y., Yeung, S., and Zou, J. Y. Mind the gap: Understanding the modality gap in multi-modal contrastive representation learning. Advances in Neural Information Processing Systems, 35: 17612-17625, 2022.
+Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., and Zitnick, C. L. Microsoft COCO: Common objects in context. In Computer Vision-ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pp. 740-755. Springer, 2014.
+McInnes, L., Healy, J., Saul, N., and Großberger, L. UMAP: Uniform manifold approximation and projection. Journal of Open Source Software, 3(29):861, 2018.
+Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. Distributed representations of words and phrases and their compositionality. Advances in Neural Information Processing Systems, 26, 2013.
+Narayan, A., Berger, B., and Cho, H. Assessing single-cell transcriptomic variability through density-preserving data visualization. Nature Biotechnology, 39(6):765-774, 2021.
+Peebles, W. and Xie, S. Scalable diffusion models with transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 4195-4205, 2023.
+Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pp. 8748-8763. PMLR, 2021.
+Ramesh, A., Dhariwal, P., Nichol, A., Chu, C., and Chen, M. Hierarchical text-conditional image generation with CLIP latents. arXiv preprint arXiv:2204.06125, 1(2):3, 2022.
+Rombach, R., Blattmann, A., Lorenz, D., Esser, P., and Ommer, B. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10684-10695, 2022.
+Sainburg, T., McInnes, L., and Gentner, T. Q. Parametric umap embeddings for representation and semisupervised learning. Neural Computation, 33(11):2881-2907, 2021.
+Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., and Chen, X. Improved techniques for training GANs. Advances in Neural Information Processing Systems, 29, 2016.
+Schölkopf, B., Smola, A., and Müller, K.-R. Kernel principal component analysis. In International Conference on Artificial Neural Networks, pp. 583-588. Springer, 1997.
+Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al. LAION-5B: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems, 35:25278-25294, 2022.
+
+Silverman, B. W. Density estimation for statistics and data analysis. Routledge, 2018.
+Van Der Maaten, L. Accelerating t-SNE using tree-based algorithms. The Journal of Machine Learning Research, 15(1):3221-3245, 2014.
+Van der Maaten, L. and Hinton, G. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(11), 2008.
+Wang, X., Wu, Z., Huang, W., Wei, Y., Huang, Z., Xu, M., and Chen, W. VIS+AI: integrating visualization with artificial intelligence for efficient data analysis. Frontiers of Computer Science, 17(6):176709, 2023a.
+Wang, Z. J., Hohman, F., and Chau, D. H. WizMap: Scalable interactive visualization for exploring large machine learning embeddings. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations), pp. 516-523, 2023b.
+Wang, Z. J., Montoya, E., Munechika, D., Yang, H., Hoover, B., and Chau, D. H. DiffusionDB: A large-scale prompt gallery dataset for text-to-image generative models. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 893-911, 2023c.
+Weglarczyk, S. Kernel density estimation and its application. In ITM Web of Conferences, volume 23, pp. 00037. EDP Sciences, 2018.
+Wu, X., Hao, Y., Sun, K., Chen, Y., Zhu, F., Zhao, R., and Li, H. Human preference score v2: A solid benchmark for evaluating human preferences of text-to-image synthesis. arXiv preprint arXiv:2306.09341, 2023a.
+Wu, X., Sun, K., Zhu, F., Zhao, R., and Li, H. Human preference score: Better aligning text-to-image models with human preference. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 2096-2105, 2023b.
+Xia, J., Huang, L., Lin, W., Zhao, X., Wu, J., Chen, Y., Zhao, Y., and Chen, W. Interactive visual cluster analysis by contrastive dimensionality reduction. IEEE Transactions on Visualization and Computer Graphics, 29(1):734-744, 2023.
+Xia, J., Huang, L., Sun, Y., Deng, Z., Zhang, X. L., and Zhu, M. A parallel framework for streaming dimensionality reduction. IEEE Transactions on Visualization and Computer Graphics, 30(1):142-152, 2024.
+Xu, J., Mei, T., Yao, T., and Rui, Y. MSR-VTT: A large video description dataset for bridging video and language. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5288-5296, 2016.
+
+Xu, J., Liu, X., Wu, Y., Tong, Y., Li, Q., Ding, M., Tang, J., and Dong, Y. Imagereward: Learning and evaluating human preferences for text-to-image generation. Advances in Neural Information Processing Systems, 36, 2024.
+Ye, Y., Hao, J., Hou, Y., Wang, Z., Xiao, S., Luo, Y., and Zeng, W. Generative AI for visualization: State of the art and future directions. Visual Informatics, 8(2):43-66, 2024.
+Ye, Y., Xiao, S., Zeng, X., and Zeng, W. ModalChorus: Visual probing and alignment of multi-modal embeddings via modal fusion map. IEEE Transactions on Visualization and Computer Graphics, 31(1):294-304, 2025.
+Yu, J., Xu, Y., Koh, J. Y., Luong, T., Baid, G., Wang, Z., Vasudevan, V., Ku, A., Yang, Y., Ayan, B. K., et al. Scaling autoregressive models for content-rich text-to-image generation. Transactions on Machine Learning Research, 2022.
+Zhang, S., Wang, B., Wu, J., Li, Y., Gao, T., Zhang, D., and Wang, Z. Learning multi-dimensional human preference for text-to-image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8018-8027, 2024.
+Zhou, S., Alon, U., Agarwal, S., and Neubig, G. CodeBERTScore: Evaluating code generation with pretrained models of code. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 13921-13937, 2023.
+
+# Appendix
+
+# A. Performance on the Traditional Neighborhood Preservation Objective
+
+Traditional dimensionality reduction (DR) methods often seek to preserve the high-dimensional neighborhood structure in the visualization scatterplot. Since our method does not aim to optimize neighborhood preservation, we do not expect it to surpass traditional DR methods in this regard. In Table 2, we report the traditional neighborhood trustworthiness metric (Kaski et al., 2003) to show that our method does not significantly harm the neighborhood property. Furthermore, in our application, which focuses on continuous metrics, cluster separation is not a meaningful target because we do not deal with discrete categories as in classification tasks. As shown in Figure 8, for the HPSv2 metric on the HPD dataset, AKRMap reveals the metric distribution significantly better than traditional DR methods even in the scatterplot view. We also note that we deliberately select a larger perplexity for t-SNE because of the large scale of the HPD dataset; Figure 9 shows that smaller perplexities lead to worse results.
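The trustworthiness score reported in Table 2 can be computed directly from pairwise distances. Below is a minimal NumPy sketch of the metric of Kaski et al. (2003) (the function name and vectorization are ours, not from the paper's code); it penalizes points that enter a 2D neighborhood without being neighbors in the high-dimensional space:

```python
import numpy as np

def trustworthiness(X, Z, n_neighbors=20):
    """Neighborhood trustworthiness of embedding Z w.r.t. data X."""
    n, k = X.shape[0], n_neighbors
    dX = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    dZ = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(dX, np.inf)           # exclude self-neighbors
    np.fill_diagonal(dZ, np.inf)
    rank_X = dX.argsort(1).argsort(1) + 1  # rank of j among i's true neighbors
    penalty = 0.0
    for i in range(n):
        knn_Z = set(np.argsort(dZ[i])[:k])
        knn_X = set(np.argsort(dX[i])[:k])
        # points that are embedding neighbors but not high-dimensional neighbors
        penalty += sum(rank_X[i, j] - k for j in knn_Z - knn_X)
    return 1.0 - 2.0 * penalty / (n * k * (2 * n - 3 * k - 1))
```

A perfect embedding (Z identical to X) scores 1.0; Table 2 shows AKRMap stays above 0.82 for all tested neighborhood sizes.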
+
+Table 2. Neighborhood Trustworthiness metrics for visualization of HPSv2 embeddings on HPD.
+
+| Method | n=20 | n=30 | n=40 | n=50 |
+| --- | --- | --- | --- | --- |
+| PCA | 0.7243 | 0.7246 | 0.7245 | 0.7242 |
+| t-SNE | 0.8945 | 0.8786 | 0.8667 | 0.8575 |
+| UMAP | 0.8604 | 0.8467 | 0.8380 | 0.8309 |
+| AKRMap | 0.8284 | 0.8287 | 0.8283 | 0.8277 |
+
+
+(a) t-SNE (perplexity=100)
+
+
+(b) UMAP (n_neighbors=100)
+
+
+(c) PCA
+
+
+(d) AKRMap
+
+
+Figure 8. Comparison of scatterplots produced by different DR methods for HPSv2.
+Perplexity = 5
+
+Perplexity = 10
+
+Perplexity = 30
+
+Figure 9. Results of smaller perplexity settings for t-SNE on HPSv2.
+
+# B. Mapping Accuracy Evaluation for In-Sample Points
+
+In this section, we report the quantitative in-sample mapping accuracy of different methods, as shown in Table 3. Combined with Table 1, the results indicate that AKRMap consistently outperforms other methods for both in-sample and out-of-sample points.
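For reference, the three error metrics used throughout these tables can be sketched as follows (a minimal NumPy illustration; the helper name is ours, and mape assumes nonzero ground-truth values):

```python
import numpy as np

def mapping_errors(y, y_hat):
    """mae, mape (in percent), and rmse between true and mapped metric values."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    err = y_hat - y
    mae = np.abs(err).mean()
    mape = 100.0 * np.abs(err / y).mean()   # assumes y != 0
    rmse = np.sqrt((err ** 2).mean())
    return mae, mape, rmse
```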
+
+Table 3. Quantitative comparison of mapping accuracy at in-sample points of HPD dataset.
+
+| Method | CLIPScore (mae / mape / rmse) | HPSv2 (mae / mape / rmse) | PickScore (mae / mape / rmse) | Aesthetic Score (mae / mape / rmse) |
+| --- | --- | --- | --- | --- |
+| PCA | 4.0494 / 22.8348 / 5.3954 | 1.3744 / 5.3870 / 1.7690 | 1.1341 / 5.7472 / 1.4437 | 0.5209 / 10.6224 / 0.6749 |
+| t-SNE | 4.0573 / 22.8162 / 5.3798 | 1.3567 / 5.3259 / 1.7606 | 1.1200 / 5.6774 / 1.4340 | 0.5192 / 10.5890 / 0.6728 |
+| UMAP | 4.0906 / 22.8612 / 5.4106 | 1.4755 / 5.7803 / 1.9073 | 1.1804 / 5.9824 / 1.5145 | 0.5241 / 10.6851 / 0.6782 |
+| SAE | 4.2591 / 23.2463 / 5.5196 | 1.5323 / 5.9966 / 1.9738 | 1.2731 / 6.4886 / 1.6270 | 0.5207 / 10.6322 / 0.6746 |
+| Neuro-Visualizer | 10.4468 / 39.8135 / 12.1020 | 1.8716 / 7.0312 / 2.2237 | 2.0031 / 9.6188 / 2.3844 | 0.3102 / 6.1445 / 0.4009 |
+| AKRMap (w/o KR) | 4.0506 / 22.6324 / 5.3415 | 1.3696 / 5.3706 / 1.7761 | 1.1237 / 5.6914 / 1.4343 | 0.3619 / 7.2980 / 0.4654 |
+| AKRMap (w/o GK) | 1.6359 / 8.3539 / 2.1271 | 1.0479 / 4.0655 / 1.3358 | 0.9062 / 4.5448 / 1.1537 | 0.3390 / 6.7634 / 0.4320 |
+| AKRMap | 1.1652 / 5.4238 / 1.5029 | 0.6834 / 2.6159 / 0.8980 | 0.6815 / 3.4181 / 0.8862 | 0.3142 / 6.1747 / 0.4018 |
+
+# C. RBF Kernel Settings for Traditional DR Methods
+
+Here we provide details and additional experimental results concerning RBF kernel settings for traditional projection methods. Table 4 and Table 5 show that traditional methods cannot achieve satisfactory performance regardless of the kernel parameter settings. Specifically, we test the following commonly used bandwidth selection methods:
+
+- Plug-in method (Silverman) (Silverman, 2018). This is the default method used in the paper due to its efficiency. For $d$-dimensional data, Silverman's rule is given by:
+
+$$
+h = \left(\frac {4}{(d + 2) n}\right) ^ {\frac {1}{d + 4}} \hat {\sigma}, \tag {7}
+$$
+
+where $\hat{\sigma}$ is the estimated standard deviation.
+
+- Adaptive local bandwidth (ALB) (Cheng & Mueller, 2015). This method computes a bandwidth for each point by adapting a pre-computed bandwidth based on the locally estimated density $f(x_{i})$:
+
+$$
+h_{i} = \lambda_{i} \times h, \quad \lambda_{i} = \left(G / f(x_{i})\right)^{2}, \quad G = \left(\prod_{i=1}^{n} f(x_{i})\right)^{1/n}. \tag{8}
+$$
+
+- Cross-validation (LOOCV) (Weglarczyk, 2018). Cross-validation is performed in a leave-one-out manner:
+
+$$
+CV(h) = n^{-1} \sum_{j=1}^{n} \left[ y_{j} - \hat{s}_{j}\left(x_{j}\right) \right]^{2} w\left(x_{j}\right), \tag{9}
+$$
+
+where $\hat{s}_j(x_j)$ is the leave-one-out estimate of $y_{j}$ computed on all data points except $x_{j}$, and $w(x_{j})$ is a nonnegative weight function (set to one by default). $h$ is then selected to minimize this validation error.
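The three selectors above (Eqs. 7-9) can be illustrated in one dimension with Gaussian kernels. This is a minimal sketch under those assumptions; the helper names are ours and the paper's implementation may differ:

```python
import numpy as np

def silverman_bandwidth(x, d=1):
    # Eq. 7: plug-in rule with the sample standard deviation.
    n = len(x)
    sigma = x.std(ddof=1)
    return (4.0 / ((d + 2) * n)) ** (1.0 / (d + 4)) * sigma

def adaptive_bandwidths(x, h):
    # Eq. 8: lambda_i = (G / f(x_i))^2 with G the geometric mean of f.
    f = np.array([np.exp(-0.5 * ((x - xi) / h) ** 2).mean()
                  / (h * np.sqrt(2 * np.pi)) for xi in x])
    G = np.exp(np.log(f).mean())
    return (G / f) ** 2 * h

def loocv_bandwidth(x, y, grid):
    # Eq. 9 with w(x_j) = 1: pick h minimizing the leave-one-out error
    # of a Nadaraya-Watson estimate.
    def cv(h):
        err = 0.0
        for j in range(len(x)):
            w = np.exp(-0.5 * ((x - x[j]) / h) ** 2)
            w[j] = 0.0                       # leave point j out
            err += (y[j] - (w @ y) / w.sum()) ** 2
        return err / len(x)
    return min(grid, key=cv)
```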
+
+Table 4. Quantitative comparison of out-of-sample mapping accuracy of different bandwidth selection methods for traditional DR.
+
+| Method | CLIPScore (mae / mape / rmse) | HPSv2 (mae / mape / rmse) | PickScore (mae / mape / rmse) |
+| --- | --- | --- | --- |
+| PCA+Silverman | 4.0042 / 16.5506 / 5.1142 | 1.4444 / 5.7870 / 1.8749 | 1.1497 / 5.9209 / 1.4686 |
+| PCA+ALB | 4.0163 / 16.5445 / 5.1148 | 1.4390 / 5.7662 / 1.8651 | 1.1469 / 5.9140 / 1.4658 |
+| PCA+LOOCV | 4.0000 / 16.5140 / 5.1034 | 1.4347 / 5.7492 / 1.8598 | 1.1415 / 5.8769 / 1.4557 |
+| t-SNE+Silverman | 4.0241 / 16.6643 / 5.1523 | 1.4361 / 5.7568 / 1.8629 | 1.1579 / 5.9429 / 1.4640 |
+| t-SNE+ALB | 4.0215 / 16.6381 / 5.1497 | 1.5147 / 6.1203 / 1.9546 | 1.1463 / 5.8838 / 1.4524 |
+| t-SNE+LOOCV | 4.0205 / 16.6276 / 5.1366 | 1.5250 / 6.1516 / 1.9573 | 1.1520 / 5.9210 / 1.4639 |
+| UMAP+Silverman | 4.0819 / 16.9066 / 5.2179 | 1.4432 / 5.7817 / 1.8725 | 1.2214 / 6.2721 / 1.5536 |
+| UMAP+ALB | 4.0142 / 16.7284 / 5.1332 | 1.4171 / 5.6262 / 1.8285 | 1.2019 / 6.1807 / 1.5276 |
+| UMAP+LOOCV | 4.0417 / 16.7337 / 5.1617 | 1.4830 / 5.9180 / 1.9233 | 1.2715 / 6.5419 / 1.6274 |
+
+In addition, we show that accurate mapping cannot be achieved by manually setting a smaller bandwidth than the automatically selected one. In fact, this causes severe overfitting and results in fragmented blocks in the map, as shown for t-SNE in Figure 10. It also degrades the quantitative accuracy of the mapping: for the case in Figure 10, the error increases as shown in Table 6.
+
+Table 5. Quantitative comparison of in-sample mapping accuracy of different bandwidth selection methods for traditional DR.
+
+| Method | CLIPScore (mae / mape / rmse) | HPSv2 (mae / mape / rmse) | PickScore (mae / mape / rmse) |
+| --- | --- | --- | --- |
+| PCA+Silverman | 4.0494 / 22.8348 / 5.3954 | 1.3744 / 5.3870 / 1.7690 | 1.1341 / 5.7472 / 1.4437 |
+| PCA+ALB | 4.0415 / 22.8072 / 5.3806 | 1.3702 / 5.3707 / 1.7621 | 1.1284 / 5.7254 / 1.4394 |
+| PCA+LOOCV | 4.0505 / 22.9057 / 5.3927 | 1.3747 / 5.3874 / 1.7674 | 1.1351 / 5.7502 / 1.4447 |
+| t-SNE+Silverman | 4.0573 / 22.8162 / 5.3798 | 1.3567 / 5.3259 / 1.7606 | 1.1200 / 5.6774 / 1.4340 |
+| t-SNE+ALB | 4.0476 / 22.7261 / 5.2578 | 1.3549 / 5.3168 / 1.7564 | 1.1164 / 5.6626 / 1.4292 |
+| t-SNE+LOOCV | 4.0447 / 22.6818 / 5.3511 | 1.3545 / 5.3158 / 1.7569 | 1.1131 / 5.6390 / 1.4270 |
+| UMAP+Silverman | 4.0906 / 22.8612 / 5.4106 | 1.4755 / 5.7803 / 1.9073 | 1.1804 / 5.9824 / 1.5145 |
+| UMAP+ALB | 4.0675 / 22.9090 / 5.3831 | 1.4621 / 5.7202 / 1.8871 | 1.1729 / 5.9458 / 1.5051 |
+| UMAP+LOOCV | 4.0684 / 22.7450 / 5.3680 | 1.4699 / 5.7576 / 1.9013 | 1.1730 / 5.9430 / 1.5083 |
+
+
+(a) auto-selected bandwidth
+
+
+(b) manually set 0.1x bandwidth
+Figure 10. Manually selecting a smaller bandwidth for traditional methods may lead to worse mapping performance with severe overfitting, as exemplified by the t-SNE visualization of the HPSv2 score distribution.
+
+# D. Zoom in by Different Grid Numbers
+
+This section shows the zoom-in effect of AKRMap as the grid number increases. As shown in Figure 11, AKRMap accurately estimates the local distribution of the cross-modal metric and adds detail on demand. With a small grid of 30 by 30, AKRMap already achieves a level of detail close to most baseline methods with a 500 by 500 grid. As the grid number increases, Figure 11 shows that our method accurately renders the detailed contours of the metric distribution.
+
+# E. Hyperparameters
+
+In this section we discuss hyperparameter trade-offs in our method with additional experimental results. First, we show the trade-off induced by different $\lambda$ values weighting the regression loss against the KL loss, with test (out-of-sample) quantitative results in Table 7 and visualization effects in Figure 12. Next, we show the scatterplot view of our ablation studies in Figure 13 to demonstrate the effect of our proposed components in large-scale DR point plots. Finally, in Figure 14, we illustrate the qualitative and quantitative effect of setting the train-val weights $w_{1}$ or $w_{2}$ to zero, demonstrating the necessity of the train-val balance scheme in our kernel regression guidance loss.
+
+Table 6. Impact of manually setting smaller bandwidth for Figure 10.
+
+| Bandwidth | mae (in) | mape (in) | rmse (in) | mae (out) | mape (out) | rmse (out) |
+| --- | --- | --- | --- | --- | --- | --- |
+| auto-selected | 1.3567 | 5.3259 | 1.7606 | 1.4361 | 5.7568 | 1.8629 |
+| 0.1x | 1.5403 | 6.0284 | 2.0470 | 1.7050 | 6.7994 | 2.1951 |
+
+
+Figure 11. Zoom-in effect by different grid numbers in our method.
+
+
+
+
+
+Table 7. Experimental results on trade-off effect by $\lambda$ settings.
+
+| Setting | CLIPScore (mae / rmse / trust) | HPSv2 (mae / rmse / trust) | PickScore (mae / rmse / trust) | Aesthetic Score (mae / rmse / trust) |
+| --- | --- | --- | --- | --- |
+| $\lambda = 0.05$ | 2.0752 / 2.6639 / 0.7849 | 1.3613 / 1.7624 / 0.8635 | 1.1048 / 1.4027 / 0.8457 | 0.4625 / 0.5869 / 0.8731 |
+| $\lambda = 0.125$ | 1.8707 / 2.4253 / 0.7672 | 0.8108 / 1.1200 / 0.8284 | 0.7712 / 1.0225 / 0.8143 | 0.4305 / 0.5488 / 0.8704 |
+| $\lambda = 0.25$ | 3.4395 / 4.4237 / 0.6865 | 0.5621 / 0.8048 / 0.8111 | 0.5511 / 0.7413 / 0.7990 | 0.4039 / 0.5149 / 0.8626 |
+| $\lambda = 0.5$ | 3.4811 / 4.4620 / 0.7083 | 0.4453 / 0.6554 / 0.7847 | 0.4597 / 0.6346 / 0.7791 | 0.3875 / 0.4925 / 0.8545 |
+
+# F. Test on Other Modality Embeddings
+
+We further perform quantitative experiments on CLIP text-video embeddings from the MSR-VTT dataset (Xu et al., 2016), which consists of 200,000 text-video pairs (Table 10), as well as text-audio and image-audio embeddings on the Flickr 8k Audio Caption Corpus (Harwath & Glass, 2015), which consists of 40,000 pairs (Tables 9 and 8). Overall, the results indicate that our method improves upon traditional DR in test-set mapping accuracy.
+
+Table 8. ImageBind image-audio embeddings mapping accuracy on Flickr 8k Audio Caption Corpus.
+
+| Metric | PCA | t-SNE | UMAP | AKRMap |
+| --- | --- | --- | --- | --- |
+| mae | 4.1239 | 3.9576 | 3.8811 | 1.9425 |
+| mape | 19.1321 | 18.3808 | 18.1024 | 8.9521 |
+| rmse | 5.1737 | 5.0254 | 4.8633 | 2.6425 |
+
+# G. Weight Balancing Mechanism
+
+We also test an automatic weight balancing method that uses a sigmoid function to adjust the weight of the KL loss:
+
+$$
+w (x, \mu) = \sigma (k (x - \mu)) = \frac {1}{1 + e ^ {- k (x - \mu)}}, \tag {10}
+$$
+
+where $\mu$ is a threshold of acceptable KL loss, which we set to 2 by default, and $k$ is a parameter controlling the decay rate, which defaults to 1. The total loss then becomes:
+
+$$
+L _ {1} = \lambda M S E _ {r} + w (K L, \mu) \cdot K L, \tag {11}
+$$
+
+where $\lambda$ is fixed to the default value 0.125.
+
+Alternatively, for users who care more about the neighborhood preservation, we can use this method to weight the MSE term with a user-specified threshold $\mu_{1}$ :
+
+$$
+L_{2} = w\left(MSE_{r}, \mu_{1}\right) \cdot \lambda MSE_{r} + KL. \tag{12}
+$$
+
+The experimental results for different $\mu$ and $\mu_{1}$ settings are presented in Table 11.
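The scheme of Eqs. 10-12 amounts to a few lines. Below is a minimal sketch with the stated defaults ($\mu = 2$, $k = 1$, $\lambda = 0.125$); the function names are ours, not from the paper's code:

```python
import math

def w(x, mu, k=1.0):
    """Eq. 10: sigmoid gate that turns a loss term on once it exceeds mu."""
    return 1.0 / (1.0 + math.exp(-k * (x - mu)))

def loss_L1(mse_r, kl, lam=0.125, mu=2.0):
    # Eq. 11: down-weight the KL term while it stays below the threshold.
    return lam * mse_r + w(kl, mu) * kl

def loss_L2(mse_r, kl, lam=0.125, mu1=1.0):
    # Eq. 12: gate the regression term instead, for users who prioritize
    # neighborhood preservation.
    return w(mse_r, mu1) * lam * mse_r + kl
```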
+
+
+Figure 12. AKRMap mapping results for different $\lambda$ settings.
+
+Table 9. ImageBind text-audio embeddings mapping accuracy on Flickr 8k Audio Caption Corpus.
+
+| Metric | PCA | t-SNE | UMAP | AKRMap |
+| --- | --- | --- | --- | --- |
+| mae | 3.7194 | 3.5992 | 3.6801 | 1.8629 |
+| mape | 21.1113 | 20.3858 | 20.8498 | 10.1821 |
+| rmse | 4.7175 | 4.5607 | 4.6541 | 2.4841 |
+
+# H. Interactive Features Evaluation
+
+We conduct a user study with 12 users on our interactive features, including zoom and overlay. After using our interactive visualization of HPSv2 scores on the HPD dataset, they complete a 7-point Likert-scale survey. As shown in Figure 15, users generally appreciate the interactive features across several usability aspects, including effectiveness, ease of use, interpretability, and future use.
+
+# I. Convergence Properties of the Adaptive Generalized Kernel Regression
+
+In this appendix, we summarize the theoretical conditions and practical considerations related to the convergence of our adaptive generalized kernel regression method, which extends the classical Nadaraya-Watson kernel regression framework.
+
+To guarantee convergence in terms of mean squared error (MSE), the following conditions are essential:
+
+- 1) Integrability of the generalized kernel on $\mathbb{R}^2$ : The kernel is defined as:
+
+$$
+K (x) = \left(1 + \alpha | x | ^ {2 \beta}\right) ^ {- 1}. \tag {13}
+$$
+
+For $K(x)$ to be integrable and finite over $\mathbb{R}^2$ , parameters must satisfy $\alpha > 0$ and $\beta > 1$ , ensuring sufficient decay of the kernel function at infinity.
+
+
+Figure 13. Scatterplots of the ablation studies.
+
+Table 10. CLIP text-video embeddings mapping accuracy on the MSR-VTT dataset.
+
+| Metric | PCA | t-SNE | UMAP | AKRMap |
+| --- | --- | --- | --- | --- |
+| mae | 1.4908 | 1.4201 | 1.4562 | 0.6763 |
+| mape | 8.3080 | 7.9589 | 8.1636 | 3.4832 |
+| rmse | 2.1231 | 2.0216 | 2.0723 | 0.9578 |
+
+- 2) Growth behavior of the parameter $\alpha$ with sample size $N$: The parameter $\alpha$ must effectively increase as the sample size $N$ grows, analogous to the classical kernel bandwidth $h$ decreasing with larger $N$. This condition is critical for establishing consistency and uniform convergence of the estimator.
+
+- 3) Smoothness of the regression target function $s(x)$ : The target function $s(x)$ should satisfy smoothness assumptions such as Lipschitz continuity. This is a standard requirement in kernel regression theory to control the bias term and facilitate uniform convergence as bandwidth parameters are tuned.
+
+In our training procedure, we enforce the non-negativity of parameters $\alpha$ and $\beta$ through reparameterization (by squaring them) and include them within a set of learnable parameters updated via backpropagation. Intuitively, this process pushes the network to automatically identify suitable decay rates for the kernel in each epoch or mini-batch, adapting progressively to match the magnitude scale of the Nadaraya-Watson kernel. Empirical experiments demonstrate that the learned parameter values $(\alpha, \beta)$ converge successfully. For example, the values for HPSv2 are (68.57, 1.61), for PickScore are (59.13, 1.35), for CLIPScore are (104.70, 3.18), and for Aesthetic Score are (74.95, 1.11). These results indicate that our data-driven approach effectively satisfies conditions (1) and (2) mentioned above.
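The pieces above can be sketched as follows (NumPy, forward pass only; in training, $\alpha$ and $\beta$ are updated by backpropagation through the same squaring reparameterization, and the raw values below are illustrative, not the learned ones):

```python
import numpy as np

def generalized_kernel(r2, a_raw, b_raw):
    # Squaring reparameterization keeps alpha = a_raw^2 and beta = b_raw^2
    # non-negative, as in our training procedure.
    alpha, beta = a_raw ** 2, b_raw ** 2
    return 1.0 / (1.0 + alpha * r2 ** beta)   # Eq. 13 with r2 = |x|^2

def nw_estimate(query, points, values, a_raw, b_raw):
    # Nadaraya-Watson estimate: kernel-weighted average of the sample
    # metric values around each 2D query point.
    r2 = ((query[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    K = generalized_kernel(r2, a_raw, b_raw)
    return (K @ values) / K.sum(1)
```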
+
+
+Figure 14. Comparison with setting $w_{1} = 0$ and $w_{2} = 0$ in our MSE loss on HPSv2.
+
+Table 11. Weight balancing results of $L_{1}$ and $L_{2}$ on HPSv2.
+
+| Setting | mae | mape | rmse | trust |
+| --- | --- | --- | --- | --- |
+| w/o weight balance | 0.8108 | 3.1935 | 1.1200 | 0.8284 |
+| $\mu = 2$ | 0.5118 | 2.0318 | 0.7464 | 0.8053 |
+| $\mu = 1.8$ | 0.4963 | 1.9632 | 0.7051 | 0.7979 |
+| $\mu = 1.6$ | 0.5336 | 2.1115 | 0.7772 | 0.8059 |
+| $\mu_{1} = 1$ | 1.0427 | 4.1393 | 1.3787 | 0.8399 |
+| $\mu_{1} = 0.8$ | 0.9610 | 3.8069 | 1.2686 | 0.8316 |
+| $\mu_{1} = 0.6$ | 1.0118 | 1.9999 | 1.3478 | 0.8426 |
+
+Figure 15. User study results on interactive features.
+
+In practical multimodal scenarios, although the target function might not strictly satisfy Lipschitz continuity, numerical experiments indicate that kernel regression methods remain robust as long as the target distribution does not exhibit extreme irregularities, such as highly discontinuous or severely jagged patterns. In other words, even piecewise smooth or locally continuous target functions can yield stable and accurate kernel regression estimates in practice. Specifically, in our scenario involving high-dimensional to low-dimensional mappings, there exists a continuous relationship between the evaluation metrics and the projected cross-modal embeddings. For example, CLIPScore, which is computed as the cosine similarity between image and text embeddings, is clearly a continuous function of the original high-dimensional embeddings. Furthermore, our parametric projection network is an explicitly continuous and differentiable function. The Implicit Function Theorem can then ensure the continuity of the metric w.r.t. the projected 2D embeddings. Thus, our practical setup implicitly fulfills the smoothness assumption (condition 3) required for convergence.
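As a concrete instance of this continuity argument, a CLIPScore-style similarity is a smooth function of the embeddings away from the origin (a minimal sketch; the 2.5 scaling and clipping at zero follow Hessel et al., 2021):

```python
import numpy as np

def clip_score(img_emb, txt_emb):
    # Cosine similarity, rescaled and clipped as in CLIPScore.
    cos = img_emb @ txt_emb / (np.linalg.norm(img_emb) * np.linalg.norm(txt_emb))
    return 2.5 * max(cos, 0.0)
```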
+
+Furthermore, the mapping function $s(x)$ is continuously updated during dimensionality reduction (projection). Our adaptively updated generalized kernel not only naturally accommodates complex distributions encountered in practice but also ensures consistency in metric spaces through supervised kernel regression. Regarding projection stability, our kernel regression loss can be regarded as a regularization term that enforces the global structure of the projection, as it explicitly pushes the projected sample points to regress a global metric distribution.
\ No newline at end of file
diff --git a/akrmapadaptivekernelregressionfortrustworthyvisualizationofcrossmodalembeddings/images.zip b/akrmapadaptivekernelregressionfortrustworthyvisualizationofcrossmodalembeddings/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..961ea7c1551659496cd73073edceda2ccd94e4ec
--- /dev/null
+++ b/akrmapadaptivekernelregressionfortrustworthyvisualizationofcrossmodalembeddings/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:36f005182fe1ac8852b816eb513dd44ecacadc02597f5ba26c9b0db92e6066ed
+size 1486135
diff --git a/akrmapadaptivekernelregressionfortrustworthyvisualizationofcrossmodalembeddings/layout.json b/akrmapadaptivekernelregressionfortrustworthyvisualizationofcrossmodalembeddings/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..a8cd11d3db198128f48cf3ce8ad809fb07489102
--- /dev/null
+++ b/akrmapadaptivekernelregressionfortrustworthyvisualizationofcrossmodalembeddings/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f581ae09c87e33f585ae9e68763f5c5fb193f4094d0a1dfcfb71397303ee2d56
+size 597905
diff --git a/almtokenizeralowbitrateandsemanticrichaudiocodectokenizerforaudiolanguagemodeling/35fcf7bc-c28c-42ae-8935-4781cda4dad2_content_list.json b/almtokenizeralowbitrateandsemanticrichaudiocodectokenizerforaudiolanguagemodeling/35fcf7bc-c28c-42ae-8935-4781cda4dad2_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..e5f100c6e18df88fdf6381a1d4287178c35e9d28
--- /dev/null
+++ b/almtokenizeralowbitrateandsemanticrichaudiocodectokenizerforaudiolanguagemodeling/35fcf7bc-c28c-42ae-8935-4781cda4dad2_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:76dda28bace76b35e87f3df69f83ccbacc1fd502e60987fd4fad6b16a8244c4e
+size 156667
diff --git a/almtokenizeralowbitrateandsemanticrichaudiocodectokenizerforaudiolanguagemodeling/35fcf7bc-c28c-42ae-8935-4781cda4dad2_model.json b/almtokenizeralowbitrateandsemanticrichaudiocodectokenizerforaudiolanguagemodeling/35fcf7bc-c28c-42ae-8935-4781cda4dad2_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..62908f2bd895b0ac01902846496d875252928afb
--- /dev/null
+++ b/almtokenizeralowbitrateandsemanticrichaudiocodectokenizerforaudiolanguagemodeling/35fcf7bc-c28c-42ae-8935-4781cda4dad2_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c0e91731a56d061662e615a1629410851ed66deffec07b728d72f0cd1c74133e
+size 186377
diff --git a/almtokenizeralowbitrateandsemanticrichaudiocodectokenizerforaudiolanguagemodeling/35fcf7bc-c28c-42ae-8935-4781cda4dad2_origin.pdf b/almtokenizeralowbitrateandsemanticrichaudiocodectokenizerforaudiolanguagemodeling/35fcf7bc-c28c-42ae-8935-4781cda4dad2_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..59d079639e2a8033b74d2f2f53ad6fa90cd86024
--- /dev/null
+++ b/almtokenizeralowbitrateandsemanticrichaudiocodectokenizerforaudiolanguagemodeling/35fcf7bc-c28c-42ae-8935-4781cda4dad2_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a11c054fa915cbfd21870aee49d02bb2af5711f95d7d3cad8710b89888a67c38
+size 2683500
diff --git a/almtokenizeralowbitrateandsemanticrichaudiocodectokenizerforaudiolanguagemodeling/full.md b/almtokenizeralowbitrateandsemanticrichaudiocodectokenizerforaudiolanguagemodeling/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..300ca36291f55ca2c2174920f28e9f29d5a9a5f3
--- /dev/null
+++ b/almtokenizeralowbitrateandsemanticrichaudiocodectokenizerforaudiolanguagemodeling/full.md
@@ -0,0 +1,640 @@
+# ALMTokenizer: A Low-bitrate and Semantic-rich Audio Codec Tokenizer for Audio Language Modeling
+
+Dongchao Yang $^{1}$ Songxiang Liu $^{2}$ Haohan Guo $^{1}$ Jiankun Zhao $^{1}$ Yuanyuan Wang $^{1}$ Helin Wang $^{2}$ Zeqian Ju $^{2}$ Xubo Liu $^{2}$ Xueyuan Chen $^{1}$ Xu Tan $^{2}$ Xixin Wu $^{1}$ Helen Meng $^{1}$
+
+# Abstract
+
+Recent advancements in audio language models have underscored the pivotal role of audio tokenization, which converts audio signals into discrete tokens, thereby facilitating the application of language model architectures to the audio domain. In this study, we introduce ALMTokenizer, a novel low-bitrate and semantically rich audio codec tokenizer for audio language models. Prior methods, such as Encodec, typically encode individual audio frames into discrete tokens without exploiting context information across frames. Unlike these methods, we introduce a novel query-based compression strategy that captures holistic information with a set of learnable query tokens by explicitly modeling the context information across frames. This design not only enables the codec model to capture more semantic information but also encodes the audio signal with shorter token sequences. Additionally, to enhance the semantic information in audio codec models, we introduce the following: (1) a masked autoencoder (MAE) loss; (2) vector quantization based on semantic priors; and (3) an autoregressive (AR) prediction loss. As a result, ALMTokenizer achieves competitive reconstruction performance relative to state-of-the-art approaches while operating at a lower bitrate. Within the same audio language model framework, ALMTokenizer outperforms previous tokenizers in audio understanding and generation tasks. $^{1}$
+
+$^{1}$ The Chinese University of Hong Kong, Hong Kong, China. $^{2}$ Independent Researchers. Correspondence to: Dongchao Yang, Helen Meng.
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+1 http://dongchaoyang.top/ALMTokensizer/
+
+# 1. Introduction
+
+The field of generative modeling has witnessed remarkable progress, largely driven by the success of autoregressive (AR) models in the development of large language models (LLMs) (OpenAI, 2023). Inspired by the success of LLMs in natural language processing (NLP), recent works have begun to employ AR transformers for audio generation (Borsos et al., 2023a; Agostinelli et al., 2023; Yang et al., 2023c), such as using the AR transformer paradigm to solve the text-to-speech task (Wang et al., 2023), or expanding the text LLM into a multimodal LLM by integrating the audio modality into the original LLM (Défossez et al., 2024). The audio tokenizer plays an important role in all of these models, converting audio signals into discrete token sequences for AR audio language modeling.
+
+In the literature, audio codec models, such as SoundStream (Zeghidour et al., 2021) and Encodec (Défossez et al., 2022), have been widely adopted as audio tokenizers for audio language models. These generative models aim to represent audio data in a quantized discrete latent space, where the codec's decoder is then used to reconstruct the audio signals from the generated discrete token sequences. Recently, there has been significant interest in the audio community regarding audio codec tokenizers, leading to the proposal of several novel models (Kumar et al., 2023; Ji et al., 2024; Défossez et al., 2024; Parker et al., 2024; Zhang et al., 2023). Despite the advancements in audio codec models, an important research question remains unanswered: which type of audio codec is most suitable for audio language modeling? Guided by previous works (Borsos et al., 2023a; Parker et al., 2024; Ji et al., 2024; Défossez et al., 2024), we examine two key properties of audio codec models: low bitrate and semantic richness. We first conduct a set of evaluation experiments to explore the influence of bitrate and semantic information on audio language modeling. Specifically, we train three audio codec models with varying bitrates, keeping the number of vector quantization (VQ) layers constant and adjusting the frame rates to $50\mathrm{Hz}$, $25\mathrm{Hz}$, and $12.5\mathrm{Hz}$. We then train the audio language model with different audio tokenizers on the same dataset. To assess the impact of semantic information, we
+
+also train a $12.5\mathrm{Hz}$ semantic tokenizer and incorporate it into the audio language model. Further details can be found in Appendix B. Figure 1 presents the results, which show that: (1) low-bitrate audio codec models significantly enhance training and inference efficiency; and (2) semantic information is more easily modeled by LM-based generative methods, e.g. lower PPL and loss. The experimental findings demonstrate the importance of constructing a low-bitrate and semantic-rich audio codec tokenizer for audio language modeling. Based on these results, we propose a novel audio codec tokenizer that offers the following advantages: (1) Low-bitrate: it compresses the audio data into fewer tokens; (2) Semantic-rich: it incorporates abundant semantic information; (3) AR-driven latent space: it optimizes the latent space for autoregressive (AR) modeling.
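The bitrates compared in these experiments follow directly from the frame rate, the number of VQ layers, and the codebook size; the helper below is a minimal sketch of that arithmetic (the function name is ours, not from the paper):

```python
import math

def codec_bitrate_bps(frame_rate_hz, num_vq_layers, codebook_size):
    """Bits per second = frames/s * VQ layers * bits per codebook index."""
    return frame_rate_hz * num_vq_layers * math.log2(codebook_size)

# A 12.5 Hz codec with 3 RVQ layers and a 2048-entry codebook:
# 12.5 * 3 * 11 = 412.5 bps, i.e. roughly 0.41 kbps.
```

Halving the frame rate halves the bitrate at a fixed number of VQ layers, which is why the $50\mathrm{Hz}$, $25\mathrm{Hz}$, and $12.5\mathrm{Hz}$ variants isolate the effect of bitrate.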
+
+To achieve this objective, we propose the following novel techniques: (1) We introduce a novel query-based compression strategy, which uses a set of learnable query tokens to capture holistic information by explicitly modeling the context information across audio frames with transformer layers. This strategy effectively takes advantage of the strong modeling capabilities of transformers to achieve better compression and semantic modeling. It also enables dynamic control over the compression rate by adjusting the number of query tokens. (2) To enhance semantic richness in the codec model, we introduce a Masked Autoencoder (MAE) loss, which encourages the model to capture more global information. (3) Inspired by previous works (Zhu et al., 2024), we propose the integration of semantic priors into the VQ layer. Specifically, we perform k-means clustering on the pre-trained wav2vec2 (Baevski et al., 2020) and BEATs (Chen et al., 2022b) encoder outputs, using the cluster centers to initialize the VQ layer. (4) We observe that AR models struggle to fit the distribution of the residuals in the VQ layers, with token prediction accuracy being notably lower in the second and third VQ layers compared to the first. To address this issue, we introduce an AR prediction loss to optimize the latent space.
+
+To evaluate the effectiveness of the ALMTokenizer, we first compare its reconstruction and semantic performance with previous state-of-the-art models. Using the same audio language model framework, we then demonstrate that ALMTokenizer achieves superior performance in LM-based audio understanding and generation tasks, including text-to-speech (TTS), speech-to-text (ASR), audio captioning, text-to-sound, text-to-music, and music captioning.
+
+# 2. Related Works
+
+# 2.1. Audio Language Models
+
+Recently, there has been a growing interest in bridging audio and text through multimodal learning approaches. Models such as AudioLM (Borsos et al., 2023a) leverage AR transformers and hierarchical modeling techniques to process audio data directly, learning representations that capture both linguistic and acoustic features. Inspired by AudioLM, VALL-E (Wang et al., 2023) and SPEAR-TTS (Kharitonov et al., 2023) formulate the text-to-speech task as an audio language modeling problem: generating an audio token sequence with the help of an autoregressive transformer. MusicLM (Agostinelli et al., 2023) and MusicGen (Copet et al., 2023) frame the text-to-music task as an audio language modeling problem. UniSep (Wang et al., 2025) explores using an audio LM to solve audio separation tasks with the help of an audio tokenizer. Moshi (Défossez et al., 2024), SpiRiTLM (Nguyen et al., 2025), and GLM4-Voice (Zeng et al., 2024) explore speech-to-speech conversation. Furthermore, audio tokenizers can also be combined with discrete diffusion models (Yang et al., 2023d;a; Borsos et al., 2023b; Ju et al., 2024). In all of these models, the audio tokenizer plays a crucial role by transforming audio data into a discrete latent sequence, reducing computational demands compared to directly processing the audio signal, and enhancing the effectiveness and efficiency of the generation process.
+
+Figure 1. The performance comparison when different types of tokenizers are used for audio modeling. PPL refers to perplexity.
+
+# 2.2. Audio Tokenizer
+
+In the literature, both semantic and acoustic tokenizers are widely employed in audio language models. The semantic tokenizer is trained using pre-trained self-supervised learning (SSL) models, such as Hubert (Hsu et al., 2021) and WavLM (Chen et al., 2022a). Applying k-means or vector quantization to these models generates semantic tokens (Zeng et al., 2024; Du et al., 2024; Liu et al., 2024). Previous works (Borsos et al., 2023a) demonstrate that semantic tokens are more easily modeled by language models. However, because significant acoustic information is lost in semantic tokens, they rely on an additional decoder to generate high-fidelity waveforms, such as a diffusion model (Ho et al., 2020) or flow matching (Lipman et al., 2022). Inevitably, this additional module results in increased inference complexity and poorer reconstruction.
+
+Figure 2. The left part illustrates the framework of the previous audio codec, while the right part provides an overview of the proposed ALMTokenizer. $w$ denotes the window size. The details of ALMTokenizer can be found in Section 3.2.
+
+Acoustic tokenizers refer to audio codec models trained for acoustic-level reconstruction tasks. Audio codecs (Zeghidour et al., 2021; Défossez et al., 2022; Yang et al., 2023b; Kumar et al., 2023) have demonstrated exceptional performance in reconstructing high-quality audio. In general, these codec models consist of an encoder, a quantizer, and a decoder. Both the encoder and decoder are lightweight, resulting in minimal inference costs. Compared to semantic tokens, codec models can support the audio, speech, and music domains, and their rich acoustic details mitigate the need for cascading architectures in downstream generative models. Recently, an increasing number of audio codec models have been proposed, focusing on (1) better reconstruction quality, such as DAC (Kumar et al., 2023), Vocos (Siuzdak, 2023), SQ-Codec (Yang et al., 2024c;b), and APCodec (Ai et al., 2024); (2) low-bitrate models, such as HiFiCodec (Yang et al., 2023b), WavTokenizer (Ji et al., 2024), StableCodec (Parker et al., 2024), and TS3-Codec (Wu et al., 2024); (3) task-driven codecs, designed for text-to-speech tasks, such as FACodec (Ju et al., 2024), SpeechTokenizer (Zhang et al., 2023), Single-Codec (Li et al., 2024), and audio retrieval-based tokenizers (Banerjee & Arora, 2022; van Niekerk et al., 2024). In this study, we focus on developing a low-bitrate, semantically rich audio codec tokenizer. The most closely related work to ours is MimiCodec (Défossez et al., 2024), which provides high-quality semantic information while achieving a low bitrate (1.1 kbps). However, MimiCodec relies on knowledge distillation from WavLM (Chen et al., 2022a) to the first VQ layer, whereas the remaining VQ layers do not incorporate semantic information. Furthermore, it is specifically designed for speech tasks and has not been validated on non-speech tasks, such as sound and music generation. In contrast to MimiCodec, our ALMTokenizer encodes more semantic information across all VQ layers, achieves a lower bitrate, and is designed for both speech and general sound.
+
+# 3. Proposed Method
+
+This section introduces the technical details of the proposed ALMTokenizer. Section 3.1 reviews the framework of previous audio codec models. Section 3.2 presents the details of the proposed audio codec framework. Sections 3.3 and 3.4 present the training loss and training strategies.
+
+# 3.1. Preliminary
+
+Previous audio codecs (Défossez et al., 2022; Zeghidour et al., 2021) typically adopt an encoder-quantizer-decoder framework, as shown in the left part of Figure 2. The audio is encoded into a sequence of audio frames by the encoder. Then, residual vector quantization (RVQ) (Zeghidour et al., 2021) is used to quantize these audio frames. Lastly, the decoder recovers the waveform from the quantized audio frames. Note that previous works treat each audio frame equally and rely on these quantized frames to recover the audio. Such a strategy (1) ignores the fact that different audio frames encode different levels of information, so some audio frames become difficult to recover in low-bitrate settings (e.g., encoding the audio frames at $12.5\mathrm{Hz}$); and (2) fails to utilize the context information between different frames.
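The RVQ step in this pipeline can be illustrated with a toy residual quantizer; the numpy sketch below shows the layer-by-layer residual coding idea only, not the actual SoundStream/Encodec implementation:

```python
import numpy as np

def rvq_encode(frames, codebooks):
    """Toy RVQ: each VQ layer quantizes the residual left by the
    previous layers, so later layers refine earlier ones."""
    quantized = np.zeros_like(frames)
    codes = []
    for cb in codebooks:
        residual = frames - quantized
        # nearest codebook entry (L2 distance) for every frame's residual
        dists = ((residual[:, None, :] - cb[None, :, :]) ** 2).sum(-1)
        idx = dists.argmin(axis=1)
        codes.append(idx)
        quantized = quantized + cb[idx]
    return codes, quantized
```

Decoding simply sums the selected codebook entries layer by layer, which is why the decoder can reconstruct from the discrete codes alone.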
+
+# 3.2. Query-based Audio Compression
+
+To construct a low-bitrate, semantically rich audio codec model, we propose a query-based compression strategy. Our
+
+approach is inspired by the success of MAE (He et al., 2022), which applies a masking operation to the original image with a high mask rate (75%). With the help of a transformer encoder and decoder, it is possible to recover the masked image content by utilizing the context information between different patches. Thus, we propose using a group of query tokens ${}^{2}$ to capture holistic audio context information from the audio frames with the assistance of a transformer encoder. Since these query tokens encode rich context information, it is possible to reconstruct the audio from them. A transformer decoder and mask tokens are then employed to reconstruct the audio from the quantized query tokens. This strategy leverages the powerful modeling capabilities of transformers to achieve better compression and semantic modeling. Similar query-based strategies have been widely explored in previous works, such as BLIP2 (Li et al., 2023), SALMONN (Tang et al., 2024), and TiTok (Yu et al., 2024). The right part of Figure 2 illustrates the overall framework of ALMTokenizer. In the following sections, we detail each component and the associated training loss.
+
+Patchify and UnPatchify We explore two types of Patchify modules: (1) Following Encodec (Défossez et al., 2022), a convolution-based module, which encodes the audio data $\mathbf{x}$ into $e \in \mathcal{R}^{T \times d}$ , where $T$ and $d$ denote the number of frames and the vector dimension, and (2) Following StableCodec (Parker et al., 2024), which directly uses a linear layer to encode the audio data into $e \in \mathcal{R}^{T \times d}$ and adds several transformer layers. Similarly, the UnPatchify mirrors the architecture of Patchify. If we use the Encodec-style Patchify module, the UnPatchify module substitutes stride convolutions with transposed convolutions and reverses the stride order. If we use the StableCodec-style Patchify module, the UnPatchify module includes a transformer block and a reshape operation. In our preliminary experiments, we find that the Encodec-style Patchify and UnPatchify modules bring better reconstruction performance. We adopt the Encodec-style Patchify module as our default setting.
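The StableCodec-style option can be sketched as one reshape plus a linear projection (a numpy stand-in; the real module adds transformer layers on top, and the helper name is ours):

```python
import numpy as np

def linear_patchify(x, patch, W):
    """Chop the waveform into non-overlapping patches of `patch` samples
    and project each patch to a d-dim frame with one linear map W."""
    T = len(x) // patch
    frames = x[: T * patch].reshape(T, patch)
    return frames @ W  # shape (T, d)
```

UnPatchify mirrors this: project each frame back to `patch` samples and concatenate the results into a waveform.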
+
+Token Interleaving The token interleaving module combines two token sequences into a single sequence. In the encoder, we combine the audio frames $e \in \mathcal{R}^{T \times d}$ and the query token [CLS]. Given a window size of $w$, a query token is inserted into the audio frame sequence after every $w$ frames. In the decoder, the token interleaving module combines the quantized query tokens and learnable mask tokens: we insert $w$ mask tokens before each query token. During training, we dynamically choose the window size for each iteration.
+
+Token Retrieval The token retrieval module aims to retrieve the relevant tokens from a sequence. In the encoder part, we
+
+use it to retrieve the learnable query tokens. In the decoder part, we use it to retrieve the learnable mask tokens.
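Under the assumption that a query token is simply appended after each window of $w$ frames (our reading of Figure 2; the helper names are hypothetical), the interleaving and retrieval indexing can be sketched as:

```python
def interleave(frames, token, w):
    """Insert a copy of `token` after every window of w frames."""
    out = []
    for i in range(0, len(frames), w):
        out.extend(frames[i:i + w])
        out.append(token)
    return out

def retrieve(seq, w):
    """Recover the inserted tokens: they sit at every (w+1)-th slot."""
    return [seq[i] for i in range(w, len(seq), w + 1)]
```

With $T$ frames and window $w$ this yields $\lfloor T/w \rfloor$ query tokens when $w$ divides $T$, matching the shape of $h$ in Section 3.2.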
+
+Query-based Transformer Encoder As discussed above, we introduce a learnable query token $[cls] \in \mathcal{R}^{1 \times d}$ to capture holistic information from the audio frames $e$. As Figure 2 shows, we first combine the audio frames and query tokens using the token interleaving module with a window size $w$. Then, a transformer module models the whole sequence $e_a$. Finally, we employ the token retrieval module to extract the query tokens $h \in \mathcal{R}^{\lfloor T / w \rfloor \times d}$:
+
+$$
+\boldsymbol{e} = P(\boldsymbol{x}), \quad \boldsymbol{e}_{a} = \mathrm{Interleaving}(\boldsymbol{e}, \mathrm{cls}, w), \tag{1}
+$$
+
+$$
+\boldsymbol{e}_{a} = En(\boldsymbol{e}_{a}), \quad \boldsymbol{h} = \mathrm{Retrieval}(\boldsymbol{e}_{a}, w)
+$$
+
+where $P(\cdot)$ denotes the Patchify module and $En(\cdot)$ denotes the transformer encoder.
+
+Residual Vector Quantization To build a low-bitrate audio codec, we empirically set the number of RVQ layers to 3, since we found that 3 RVQ layers suffice to build an effective audio codec model: $\hat{h} = Q(h)$ . Inspired by previous works (Zhu et al., 2024; Yang et al., 2024a), we first obtain the k-means clusters of Wav2vec2 (Baevski et al., 2020) to represent the speech semantic prior, and the k-means clusters of the BEATs (Chen et al., 2022b) to represent the general sound semantic prior. Assuming the codebook size is $C$ , we set $C / 2$ to represent speech, with the remaining portion representing general sound. We then use these semantic priors to initialize the codebook of the VQ layer and fix it. Next, we apply a linear layer to map the input features into the VQ layer.
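The semantic-prior codebook initialization can be sketched with a tiny hand-rolled k-means; in the paper the features come from pre-trained Wav2vec2 and BEATs encoders, while here they are stand-in arrays and the function names are ours:

```python
import numpy as np

def kmeans_centers(feats, k, iters=25, seed=0):
    """Plain Lloyd's k-means; returns k cluster centers."""
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), size=k, replace=False)].copy()
    for _ in range(iters):
        assign = ((feats[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = feats[assign == j].mean(0)
    return centers

def init_semantic_codebook(speech_feats, sound_feats, C):
    """Half of the C entries come from speech SSL features,
    the other half from general-sound SSL features."""
    return np.concatenate([kmeans_centers(speech_feats, C // 2),
                           kmeans_centers(sound_feats, C // 2)])
```

The resulting codebook is then frozen, so the quantizer only learns the linear map from encoder features into this fixed semantic space.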
+
+Query-based Transformer Decoder To recover the audio information, we construct a reverse process mirroring the encoder. We first use the token interleaving module to combine the mask token $m \in \mathcal{R}^{1 \times d}$ with $\hat{\pmb{h}}$. The new sequence is then modeled by a transformer module. We expect these mask tokens to recover the audio information with the help of the UnPatchify module:
+
+$$
+\boldsymbol{q}_{a} = \mathrm{Interleaving}(\hat{\boldsymbol{h}}, \boldsymbol{m}, w), \quad \boldsymbol{q}_{a} = De(\boldsymbol{q}_{a}), \tag{2}
+$$
+
+$$
+\boldsymbol{e}_{o} = \mathrm{Retrieval}(\boldsymbol{q}_{a}, w), \quad \hat{\boldsymbol{x}} = UnP(\boldsymbol{e}_{o}),
+$$
+
+where $UnP(\cdot)$ denotes the UnPatchify module and $De(\cdot)$ denotes the transformer decoder.
+
+# 3.3. Training Loss
+
+Similar to previous audio codecs, our approach is based on a GAN objective, where we optimize both the generator (which consists of the Patchify module, transformer encoder, quantizer, transformer decoder, and UnPatchify module) and the discriminators. For the generator, the training loss comprises four components: (1) a reconstruction loss term; (2) an adversarial loss term; (3) a Masked AutoEncoder (MAE) loss; and (4) an AR prediction loss. The reconstruction and adversarial losses follow previous works (Défossez et al., 2022; Zeghidour et al., 2021). In the following, we describe the MAE loss and the AR prediction loss. More details of the training loss are provided in Appendix G.
+
+MAE Loss As discussed in Section 1, a semantic-rich audio codec tokenizer is better suited for audio language modeling. Inspired by the success of MAE (He et al., 2022), we incorporate an MAE loss during the training of the audio codec. Specifically, for the frame sequence $e$, we randomly choose several audio frame features and set them to zero, $e_m = \mathrm{Mask}(e)$. We pass the masked features $e_m$ into the encoder transformer. The encoded features are then passed into an MAE-decoder transformer block to predict $e$. In our experiments, we adopt a dynamic mask rate (from 0.2 to 0.3), since we found that a large mask rate significantly harms reconstruction performance. Following MAE (He et al., 2022), we apply the MSE loss to the masked audio frames only.
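The masking and masked-only MSE described above can be sketched as follows (the mask-rate range is taken from the text; the array shapes and helper names are illustrative):

```python
import numpy as np

def random_frame_mask(T, rng, low=0.2, high=0.3):
    """Draw a dynamic mask rate in [low, high], then mask frames."""
    rate = rng.uniform(low, high)
    return rng.random(T) < rate

def mae_loss(pred, target, mask):
    """MSE computed only over the masked frames, as in MAE."""
    diff = (pred - target)[mask]
    return float((diff ** 2).mean()) if mask.any() else 0.0
```

Restricting the loss to masked frames forces the encoder to infer missing content from context, which is the mechanism that injects global (semantic) information.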
+
+AR Loss As shown in Figure 3, we find that the first layer of RVQ-based audio codec models is easier for the audio language model to fit than the other layers (e.g., layers 2 and 3). One possible reason is that the first layer encodes more semantically related information: for speech data, most of the content information can be recovered from the first VQ layer, while the residual layers primarily encode acoustic-level information, which influences speech quality. To make the tokens in the residual layers easier to fit, we introduce an autoregressive (AR) prediction prior (Wang et al., 2024a) in the RVQ latent space. Specifically, we introduce a lightweight continuous autoregressive (AR) transformer$^{3}$, which conducts next-token prediction in the RVQ layers. For example, it is tasked with predicting the quantized feature of the third VQ layer based on the features of the first and second VQ layers. We use the mean squared error (MSE) loss for optimization:
+
+$$
+p_{\theta}\left(\boldsymbol{x}_{1}, \dots, \boldsymbol{x}_{3}\right) = \prod_{i=1}^{3} p_{\theta}\left(\boldsymbol{x}_{i} \mid \boldsymbol{x}_{1}, \dots, \boldsymbol{x}_{i-1}\right) \tag{3}
+$$
+
+where $\theta$ denotes the parameter of AR transformer.
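A minimal sketch of this continuous AR prediction loss: each RVQ layer's quantized features are predicted from the preceding layers and penalized with MSE. In the paper the predictor is a lightweight transformer; here `predict` is any stand-in callable:

```python
import numpy as np

def ar_prediction_loss(layer_feats, predict):
    """Sum of MSE terms for next-layer prediction in the RVQ stack.
    layer_feats: list of (T, d) quantized features, one per VQ layer.
    predict: callable mapping the list of preceding layers to a (T, d)
    prediction of the next layer (stand-in for the AR transformer)."""
    loss = 0.0
    for i in range(1, len(layer_feats)):
        pred = predict(layer_feats[:i])  # context: layers 1 .. i-1
        loss += float(((pred - layer_feats[i]) ** 2).mean())
    return loss
```

Because the loss acts on continuous quantized features rather than discrete indices, gradients flow back into the codec and shape the latent space toward AR predictability.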
+
+# 3.4. Two-stage Training Strategy
+
+Although training the ALMTokenizer using the typical Encodec (Défossez et al., 2022) setting is feasible, we introduce a two-stage training paradigm to improve both reconstruction performance and semantic information. Our motivation stems from the fact that audio codec quantization focuses on modeling local relationships, whereas semantic information focuses on modeling global relationships. These two goals are in conflict. To resolve this conflict, we present a two-stage training strategy. In the first stage, we do not incorporate the quantization part; instead, we directly train an AutoEncoder with Patchify and UnPatchify modules. To encode more semantic information in the Patchify module, we introduce the MAE loss during this stage by adding a transformer-based MAE encoder and decoder: the encoder processes the masked frame sequence, and the decoder predicts the masked part. After training, this transformer encoder and decoder are discarded. In the second stage, we first initialize the ALMTokenizer's Patchify and UnPatchify modules with the checkpoint from the first stage and freeze the parameters of the Patchify module. Then, we train the model using the training loss described in Section 3.3.
+
+3 The term continuous autoregressive (AR) transformer is used to distinguish our approach from traditional discrete AR models, which operate on discrete token sequences and are optimized using cross-entropy loss. In our study, to facilitate gradient backpropagation, we apply the AR transformer directly to continuous features.
+
+# 4. Experiments
+
+# 4.1. Dataset and Training Details
+
+Data preparation for the audio codec ALMTokenizer is trained on approximately 4,500 hours of data. In the speech domain, we utilize the LibriTTS training set (Zen et al., 2019) and a 2,000-hour random subset of Multilingual LibriSpeech (MLS) (Pratap et al., 2020). In the sound domain, we utilize a 1,000-hour random subset of AudioSet; in the music domain, we employ a 1,000-hour random subset of the Million Song Dataset (Bertin-Mahieux et al., 2011). We evaluate the codec's speech reconstruction performance using a subset of the VCTK dataset (Veaux et al., 2017), and assess sound and music reconstruction performance using the AudioCaps (Kim et al., 2019) validation set and the MusicCaps dataset (Agostinelli et al., 2023), respectively.
+
+Data for Audio Language Models To assess the effectiveness of the proposed audio tokenizer, we construct an audio language model framework to perform six audio-related tasks. The details are provided in Appendix D.3 and D.4. For speech data, we select 2,000 hours of speech-text pairs from LibriHeavy (Kang et al., 2024). For sound data, we utilize the AudioCaps training set and BBC Sound Effects. For music data, we use a subset of the Million Song dataset and the caption data from LP-MusicCaps (Doh et al., 2023).
+
+Table 1. The speech reconstruction and semantic performance comparison between ALMTokenizer and previous tokenizers. FPS denotes the number of frames per second; TPS denotes the number of tokens per second. CS denotes the codebook size; BR denotes the bitrate. ST denotes SpeechTokenizer. UTMOS, DNS-MOS, VISQOL, STOI, and PESQ measure reconstruction; ASR and ER measure semantic performance. Bold for the best result and underline for the second-best result. Evaluation on the VCTK dataset.
+
+| Models | FPS/TPS | CS/BR | UTMOS (↑) | DNS-MOS (↑) | VISQOL (↑) | STOI (↑) | PESQ (↑) | ASR (↓) | ER (↑) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Hubert (Hsu et al., 2021) | - | - | - | - | - | - | - | 6.5 | 31.0 |
+| WavLM (Chen et al., 2022a) | - | - | - | - | - | - | - | 6.2 | 29.0 |
+| Encodec (Défossez et al., 2022) | 50/150 | 1024/1.5kbps | 2.58 | 3.27 | 3.64 | 0.81 | 2.0 | 35.3 | 26.5 |
+| DAC (Kumar et al., 2023) | 50/150 | 1024/1.5kbps | 3.13 | 3.41 | 3.67 | 0.81 | 2.1 | 44.1 | 17.6 |
+| Wavtokenizer (Ji et al., 2024) | 40/40 | 4096/0.48kbps | 3.67 | 3.50 | 3.72 | 0.79 | 1.9 | 44.6 | 19.8 |
+| StableCodec (Parker et al., 2024) | 25/25 | 46656/0.4kbps | 4.22 | 3.64 | 3.40 | 0.76 | 1.8 | 98.3 | 15.8 |
+| ST (Zhang et al., 2023) | 50/150 | 1024/1.5kbps | 3.41 | 3.36 | 3.68 | 0.79 | 1.7 | 19.8 | 27.0 |
+| Mimi (Défossez et al., 2024) | 12.5/37.5 | 2048/0.41kbps | 3.01 | 3.14 | 3.28 | 0.75 | 1.5 | 25.1 | 28.0 |
+| Mimi (Défossez et al., 2024) | 12.5/100 | 2048/1.1kbps | 3.65 | 3.38 | 3.82 | 0.82 | 2.1 | 23.8 | 28.3 |
+| ALMTokenizer (Ours) | 12.5/37.5 | 2048/0.41kbps | 3.76 | 3.64 | 3.78 | 0.81 | 2.0 | 18.3 | 29.0 |
+
+Implementation Details ALMTokenizer first performs patchification on the audio data. We set the patch size to 320 in all experiments, which encodes 1 second of $24\mathrm{kHz}$ audio into 75 frames. For the Encodec-style Patchify module, we adopt the settings of the Encodec (Défossez et al., 2022) encoder. To enable streaming for the codec model, causal convolution layers are employed. For the encoder-transformer and decoder-transformer components, we use 24 self-attention layers, with latent dimensions of 256 and 512, respectively. Following StableCodec (Parker et al., 2024), the self-attention mechanism uses a causal sliding attention window of 64 steps to restrict the receptive field and promote generalization to sequences of arbitrary length. Rotary Positional Embeddings (RoPE) are used. Refer to Appendix G for the details of ALMTokenizer model training. For the audio language model, we follow the framework of Moshi (Défossez et al., 2024). For further details, refer to Appendix A.
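The causal sliding attention window mentioned above can be expressed as a boolean mask; a sketch (True = may attend), under the common convention that position $t$ sees the previous `window` positions including itself:

```python
import numpy as np

def causal_sliding_window_mask(T, window):
    """mask[i, j] is True iff j <= i (causal) and j > i - window
    (bounded receptive field)."""
    i = np.arange(T)[:, None]
    j = np.arange(T)[None, :]
    return (j <= i) & (j > i - window)
```

Because each layer only looks back `window` steps, the effective receptive field grows linearly with depth while memory stays bounded, which is what allows generalization to arbitrary-length sequences.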
+
+# 4.2. Evaluation Metrics
+
+We evaluate the performance of previous SOTA audio tokenizers and our proposed ALMTokenizer across audio reconstruction, audio semantic information, audio understanding, and audio generation tasks.
+
+Audio Reconstruction For speech reconstruction, we use DNS-MOS, UTMOS, PESQ, STOI (Short-Time Objective Intelligibility), and VISQOL. For sound and music data, VISQOL (audio version), STFT loss, and Mel loss are used. Furthermore, following (Kumar et al., 2023), the MUSHRA subjective test is conducted for speech, sound, and music. Refer to Appendix D for more details.
+
+Audio Semantic Information Previous SSL models, such as Hubert (Hsu et al., 2021), have shown that semantic-rich representations can solve downstream recognition tasks after fine-tuning several adaptor layers. Thus, we validate the audio tokenizer's features on downstream recognition tasks. For speech data, we conduct the automatic speech recognition (ASR) task on the LibriSpeech (Panayotov et al., 2015) dataset and the emotion classification (EC) task on the EMOVO (Costantini et al., 2014) dataset. For sound data, we conduct sound classification on the ESC-50 dataset (Piczak, 2015). For music data, we conduct music classification on the Medley-solos-DB dataset (Lostanlen & Cella, 2016).
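This probing protocol can be mimicked with a trivial classifier on frozen features; the nearest-class-centroid probe below is a deliberate simplification of the adaptor fine-tuning described above (names and data are illustrative):

```python
import numpy as np

def centroid_probe_accuracy(train_x, train_y, test_x, test_y):
    """Classify frozen tokenizer features by nearest class centroid."""
    classes = np.unique(train_y)
    cents = np.stack([train_x[train_y == c].mean(0) for c in classes])
    d = ((test_x[:, None] - cents[None]) ** 2).sum(-1)
    pred = classes[d.argmin(1)]
    return float((pred == test_y).mean())
```

The tokenizer stays frozen in either variant, so accuracy differences reflect how much task-relevant (semantic) information the features already carry.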
+
+Audio Understanding To further validate whether the audio tokenizer is suitable for building an audio language model, we conduct understanding tasks using discrete tokens. We conduct three tasks: ASR, audio captioning, and music captioning. For audio data, we use the audio tokenizer to transform the waveform into discrete tokens; for text data, we use the BPE tokenizer of LLAMA 3.2. For audio and music captioning, we follow (Drossos et al., 2020) and adopt the BLEU-1, BLEU-2, BLEU-3, METEOR, ROUGE-L, CIDEr-D, SPICE, and SPIDEr metrics.
+
+Table 2. The sound reconstruction performance comparison between the proposed ALMTokenizer and previous audio tokenizer models. SC denotes the sound classification task. Evaluation on the AudioCaps validation set.
+
+| Models | ViSQOL (↑) | Mel loss (↓) | STFT loss (↓) | SC (↑) |
+| --- | --- | --- | --- | --- |
+| BEATs | - | - | - | 24% |
+| Wav2vec2 | - | - | - | 53% |
+| Encodec | 3.05 | 16.3 | 1.23 | 15% |
+| DAC | 2.98 | 17.6 | 1.24 | 20% |
+| Wavtokenizer | 2.18 | 32.7 | 2.50 | 12% |
+| Ours | 2.99 | 15.0 | 1.24 | 44% |
+
+Table 3. The music reconstruction and semantic performance comparison between ALMTokenizer and previous audio tokenizers. MC denotes the music classification task. Evaluation on the MusicCaps dataset.
+
+| Models | ViSQOL (↑) | Mel loss (↓) | STFT loss (↓) | MC (↑) |
+| --- | --- | --- | --- | --- |
+| BEATs | - | - | - | 54% |
+| Wav2vec2 | - | - | - | 65% |
+| Encodec | 4.04 | 34.8 | 1.26 | 45% |
+| DAC | 4.06 | 35.9 | 1.28 | 48% |
+| Wavtokenizer | 3.85 | 48.2 | 1.47 | 54% |
+| Ours | 3.96 | 34.4 | 1.32 | 59% |
+
+Audio Generation We also conduct audio generation tasks, including text-to-speech, text-to-sound, and text-to-music. Refer to Appendix D for more details.
+
+# 4.3. The Reconstruction and Semantic Performance
+
+We first compare the reconstruction and semantic performance of ALMTokenizer with previous audio tokenizers. Table 1 presents the speech reconstruction and semantic results. We observe the following: (1) In terms of reconstruction, ALMTokenizer achieves impressive results in the low-bitrate setting. For example, compared with the previous SOTA models MimiCodec and WavTokenizer, ALMTokenizer achieves better reconstruction performance at a lower bitrate. We also note that StableCodec performs well on UTMOS; the main reason is that StableCodec has denoising capabilities, while the original audio includes some noise. This explains why StableCodec achieves good results on UTMOS but performs poorly on PESQ and STOI. (2) In terms of semantic information, ALMTokenizer demonstrates superior performance, outperforming previous SOTA models such as WavTokenizer and StableCodec$^{4}$. Notably, in the emotion classification task, ALMTokenizer achieves performance comparable to previous SSL models, such as Hubert and WavLM. However, ALMTokenizer still lags behind these SSL models in ASR performance; we speculate that the inclusion of acoustic information may detract from ASR performance, despite ALMTokenizer containing rich semantic information. Tables 2 and 3 show the sound and music experimental results. ALMTokenizer demonstrates strong reconstruction performance under the low-bitrate setting; compared to WavTokenizer, the reconstruction performance shows significant improvement. Furthermore, sound and music are inherently more complex than speech, and encoding them at very low bitrates remains a challenge. In terms of semantic information, ALMTokenizer significantly surpasses previous works, such as WavTokenizer and Encodec, and shows performance comparable to the SSL models BEATs (Chen et al., 2022b) and the AudioSet version of Wav2vec2. We also perform the MUSHRA subjective test for reconstruction. As shown in Table 7, ALMTokenizer effectively maintains strong subjective reconstruction performance on speech, music, and audio, even in a very low-bitrate setting.
+
+# 4.4. Audio Understanding and Generation Results
+
+Speech Understanding and Generation Tasks Table 4 shows the LM-based TTS and ASR results. For the TTS task, we mainly focus on robustness and speech quality. In terms of robustness, the GLM4-voice tokenizer (Zeng et al., 2024), MimiCodec, and the proposed ALMTokenizer perform better than the others, highlighting the importance of semantic information for LM-based speech generation. Compared to previous audio codec tokenizers, ALMTokenizer brings a significant improvement. In terms of generated speech quality, ALMTokenizer also shows clear advantages, further demonstrating that the proposed tokenizer is more suitable for audio language modeling. Similarly, when we conduct the ASR task using discrete tokens as input, semantic information is also important. Traditional audio codec models, such as DAC, Encodec, and WavTokenizer, perform poorly in this setting. StableCodec was fine-tuned with a CTC head to predict force-aligned phoneme tags from pre-bottleneck latents, and MimiCodec distills semantic information from WavLM; thus, they perform better than previous codec models. In ALMTokenizer, we propose a novel codec framework and training losses to better encode semantic information in the codec model.
+
+Table 4. The LM-based TTS and ASR results. The first three metrics are used for TTS, while the last one is used for ASR. GLM4-Voice (Zeng et al., 2024) is a single-layer semantic tokenizer. Evaluation on the LibriSpeech test-clean set.
+
+| Models | WER (↓) | DNSMOS (↑) | UT-MOS (↑) | ASR (↓) |
+| --- | --- | --- | --- | --- |
+| GLM4-voice | 9.9 | 3.96 | 3.79 | 16.3 ± 1.5 |
+| DAC | 24.5 | 3.14 | 2.06 | 58.4 ± 1.2 |
+| Encodec | 22.9 | 3.48 | 2.14 | 77.2 ± 2.3 |
+| StableCodec | 22.7 | 3.63 | 3.70 | 28.0 ± 1.9 |
+| Wavtokenizer | 18.5 | 3.72 | 3.58 | 45.6 ± 2.7 |
+| MimiCodec | 16.0 | 3.67 | 2.93 | 23.1 ± 1.5 |
+| Ours | 11.7 | 3.75 | 3.88 | 19.6 ± 1.8 |
+
+Figure 3. The performance comparison with or without AR loss.
+
+Sound/Music Understanding and Generation Results We conduct text-to-sound, text-to-music, audio captioning, and music captioning tasks within the same audio language model framework. The experimental results in Table 5 indicate that ALMTokenizer performs better in both the captioning and generation tasks, further demonstrating its advantages. We provide more audio tokenizer reconstruction experiments in Appendix F, including evaluation on the LibriTTS test set, length generalization, and comparisons with diffusion-based audio codec models.
+
+Table 5. The LM-based sound and music understanding and generation results. B1, B2, B3, ME, RG, CD, SP, and SD denote BLEU-1, BLEU-2, BLEU-3, METEOR, ROUGE-L, CIDEr-D, SPICE, and SPIDEr, respectively; these measure understanding (captioning), while FD, FAD, and KL measure generation. Evaluation on the AudioCaps and MusicCaps datasets.
+
+| Models | B1 (↑) | B2 (↑) | B3 (↑) | ME (↑) | RG (↑) | CD (↑) | SP (↑) | SD (↑) | FD (↓) | FAD (↓) | KL (↓) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Sound Task |  |  |  |  |  |  |  |  |  |  |  |
+| Encodec | 0.25 | 0.15 | 0.08 | 0.11 | 0.24 | 0.57 | 0.14 | 0.35 | 10.03 | 8.22 | 1.73 |
+| DAC | 0.26 | 0.15 | 0.08 | 0.11 | 0.26 | 0.51 | 0.13 | 0.32 | 14.14 | 11.7 | 1.55 |
+| Wavtokenizer | 0.24 | 0.14 | 0.08 | 0.10 | 0.22 | 0.38 | 0.11 | 0.25 | 6.76 | 4.55 | 1.28 |
+| ALMTokenizer (Ours) | 0.28 | 0.17 | 0.11 | 0.12 | 0.24 | 0.60 | 0.15 | 0.37 | 4.11 | 6.16 | 0.55 |
+| Music Task |  |  |  |  |  |  |  |  |  |  |  |
+| Encodec | 0.30 | 0.14 | 0.08 | 0.11 | 0.23 | 0.37 | 0.09 | 0.23 | 7.22 | 5.48 | 1.06 |
+| DAC | 0.29 | 0.14 | 0.08 | 0.11 | 0.23 | 0.37 | 0.09 | 0.23 | 12.89 | 8.36 | 1.68 |
+| Wavtokenizer | 0.19 | 0.06 | 0.02 | 0.06 | 0.13 | 0.06 | 0.05 | 0.05 | 4.39 | 11.93 | 0.88 |
+| ALMTokenizer (Ours) | 0.34 | 0.15 | 0.07 | 0.13 | 0.25 | 0.44 | 0.10 | 0.27 | 3.55 | 4.58 | 0.43 |
+
+# 4.5. Ablation Study
+
+In order to gain a more comprehensive understanding of ALMTokenizer, we systematically compare each key component using a controlled experimental setup, employing identical architectures and hyperparameters across all trials.
+
+The Effectiveness of Query-based Audio Compression In this study, we propose a query-based audio compression strategy for compressing audio data in a very low-bitrate setting. To validate its effectiveness, we build a baseline following previous audio codec models, such as MimiCodec (Défossez et al., 2024): in the encoder, we use strides of [8, 6, 5, 4, 2] to compress 1-second, $24\mathrm{kHz}$ audio to $12.5\mathrm{Hz}$, followed by 3 RVQ layers for quantization. As shown in Table 6, previous audio codec frameworks struggle to maintain good reconstruction performance in very low-bitrate settings; the proposed query-based compression method is more effective in this setting.
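The stride arithmetic for this baseline can be checked with a trivial helper (the function name is ours):

```python
from math import prod

def frame_rate_hz(sample_rate, strides):
    """Overall frame rate after a stack of strided conv layers."""
    return sample_rate / prod(strides)

# Strides [8, 6, 5, 4, 2] give a total stride of 1920,
# so 24000 / 1920 = 12.5 Hz.
```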
+
+The Influence of Semantic Prior for VQ To explore the influence of semantic priors on the audio codec model, we conduct an experiment where we remove the semantic prior and instead train a learnable RVQ following Encodec. As shown in Table 6, we find that updating the RVQ layer improves reconstruction performance but reduces semantic information, demonstrating that integrating semantic priors into the VQ layer enhances semantic information.
+
+The Influence of MAE Loss We also conduct experiments to evaluate the effectiveness of the MAE loss. As shown in Table 6, we find that the MAE loss is crucial for enhancing the semantic information in the codec model. Although the MAE loss has a slight negative effect on reconstruction, it is a crucial factor in building a better audio tokenizer.
+
+The Influence of AR Loss From Table 6, we observe that adding the AR loss reduces reconstruction performance. In Figure 3, we compare token prediction accuracy and TTS performance with and without the LM (AR) loss. We observe that using the LM loss significantly improves token prediction accuracy, particularly for the second and third VQ layers, which demonstrates the effectiveness of our motivation and solution.
+
+The Influence of Two-stage Training As Table 6 shows, the two-stage training strategy is crucial: it significantly improves both reconstruction performance and the semantic information in the codec model.
+
+The Influence of Patchify Module We investigate two types of Patchify modules: Encodec-style and StableCodec-style. As shown in Table 6, Encodec-style Patchify modules yield better performance. One possible reason is that StableCodec-style Patchify modules (Parker et al., 2024) may depend on larger training data and model scale, as the original paper scales the model to 1B parameters; in contrast, we use only four transformer layers to ensure a fair comparison with Encodec-style modules. Due to page limitations, we defer the ablation studies on the window size $w$ in query-based compression, the codebook size, the mask rate, and the model size to Appendix C.
+
+# 4.6. Discussion
+
+In this section, we discuss two fundamental questions in audio tokenization. Question 1: Is a single quantization layer better than multiple quantization layers? Question 2: Does a low bitrate with high reconstruction performance define a good audio tokenizer?
+
+Question 1 Although WavTokenizer and StableCodec demonstrate the potential of building a low-bitrate audio codec tokenizer with a single quantization layer, they rely on a higher frame rate (e.g., 25 or $40\mathrm{Hz}$). As shown in Figure 1, a lower frame rate (e.g., $12.5\mathrm{Hz}$) is critical for improving training efficiency. Thanks to the audio language model frameworks of UniAudio (Yang et al., 2023c) and Moshi (Défossez et al., 2024), multiple quantization layers do not increase the temporal sequence length. Therefore, multiple quantization layers present an effective approach for building a low-bitrate, semantically rich audio codec.
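+The sequence-length argument can be made concrete with a small calculation (our illustration, assuming the depth transformer absorbs the per-frame RVQ layers so the temporal LM sees one position per frame):

```python
# Sketch: temporal sequence length seen by the main LM for 10 s of audio.
# With a UniAudio/Moshi-style depth transformer, the RVQ layers of a frame
# are predicted by the small depth model, so they do not lengthen the
# temporal sequence.
duration_s = 10

single_layer_25hz = 25 * duration_s           # StableCodec-style frame rate
single_layer_40hz = 40 * duration_s           # WavTokenizer-style frame rate
multi_layer_12p5hz = int(12.5 * duration_s)   # 3 RVQ layers + depth transformer

print(single_layer_25hz, single_layer_40hz, multi_layer_12p5hz)  # 250 400 125
```

+Even with three quantization layers, the 12.5 Hz tokenizer yields the shortest temporal sequence, which is where the training-efficiency gain comes from.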
+
+Question 2 To address this question, we present two comparisons. First, as shown in Tables 4 and 1, StableCodec exhibits better reconstruction performance and a lower bit-rate compared to WavTokenizer. However, when applied to the
+
+Table 6. Ablation study of the codec framework, training losses, and training strategy. ASR and ER evaluate semantic information; the other metrics evaluate reconstruction performance. Experiments are conducted on the VCTK dataset.
+
+| Setting | UTMOS (↑) | DNSMOS (↑) | VISQOL (↑) | PESQ (↑) | STOI (↑) | ASR (↓) | ER (↑) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| ALMTokenizer | 3.76 | 3.64 | 3.78 | 2.0 | 0.81 | 18.3 | 29.0 |
+| *Framework ablation* |  |  |  |  |  |  |  |
+| w/o the query-based framework | 2.49 | 3.13 | 3.37 | 1.58 | 0.77 | 34.5 | 22.6 |
+| Only query-based framework | 3.54 | 3.41 | 3.44 | 1.69 | 0.78 | 27.2 | 24.5 |
+| *Training loss ablation* |  |  |  |  |  |  |  |
+| w/o semantic prior for VQ | 3.79 | 3.66 | 3.78 | 2.12 | 0.83 | 19.2 | 28.4 |
+| w/o MAE loss | 3.70 | 3.76 | 3.83 | 2.10 | 0.82 | 24.5 | 23.2 |
+| w/o AR loss | 3.72 | 3.81 | 3.80 | 2.08 | 0.82 | 18.8 | 30.2 |
+| *Different Patchify module* |  |  |  |  |  |  |  |
+| use Linear-Patchify | 3.47 | 3.36 | 3.27 | 1.78 | 0.78 | 20.3 | 26.7 |
+| *Training strategy ablation* |  |  |  |  |  |  |  |
+| w/o two-stage training | 3.60 | 3.39 | 3.24 | 1.55 | 0.74 | 22.8 | 25.9 |
+
+Table 7. The subjective reconstruction results using MUSHRA (comparative scoring of samples) of codec models on speech, sound and music. Bold for the best result and underline for the second-best result.
+
+| Models | FPS/TPS | CS/BR | Speech (↑) | Sound (↑) | Music (↑) |
+| --- | --- | --- | --- | --- | --- |
+| *Speech* |  |  |  |  |  |
+| MimiCodec (3 RVQ) (Défossez et al., 2024) | 12.5/37.5 | 2048/0.41 kbps | 65.61 ± 5.2 | - | - |
+| MimiCodec (8 RVQ) (Défossez et al., 2024) | 12.5/100 | 2048/1.1 kbps | 86.7 ± 2.3 | - | - |
+| StableCodec (Parker et al., 2024) | 25/25 | 46656/0.4 kbps | 81.7 ± 4.4 | - | - |
+| SpeechTokenizer (Zhang et al., 2023) | 50/150 | 1024/1.5 kbps | 73.7 ± 4.6 | - | - |
+| *Audio* |  |  |  |  |  |
+| Encodec (Défossez et al., 2022) | 50/150 | 1024/1.5 kbps | 75.1 ± 3.9 | 77.2 ± 4.2 | 73.7 ± 4.6 |
+| DAC (Kumar et al., 2023) | 50/150 | 1024/1.5 kbps | 79.3 ± 4.2 | 71.3 ± 4.1 | 71.3 ± 4.1 |
+| WavTokenizer (Ji et al., 2024) | 40/40 | 4096/0.48 kbps | 84.0 ± 2.1 | 63.1 ± 4.6 | 54.1 ± 5.4 |
+| Ours | 12.5/37.5 | 2048/0.41 kbps | 84.8 ± 3.7 | 72.4 ± 4.7 | 69.0 ± 4.5 |
+
+text-to-speech generation task, WavTokenizer demonstrates better robustness. One possible reason is that StableCodec uses a large codebook (46,656 entries), which may increase modeling complexity. Second, although MimiCodec has a higher bitrate and poorer reconstruction performance than StableCodec, it demonstrates more stable TTS generation and better ASR performance. This phenomenon further underscores the importance of semantic information. In summary, a good audio tokenizer for an audio language model should not only target a low bitrate and strong reconstruction, but also account for the semantic information in the codec model.
+
+# 5. Conclusion
+
+In this study, we present a low-bitrate, semantically rich audio codec tokenizer. Specifically, we propose a query-based compression strategy to effectively compress audio data into a low-bitrate format while incorporating more semantic information. Furthermore, we introduce several training losses to enhance semantic information, including the MAE loss and the AR loss. Extensive experiments demonstrate the effectiveness of ALMTokenizer. Within the same audio language modeling framework, ALMTokenizer exhibits superior performance in both understanding and generation tasks. We discuss the limitations of this study in Appendix I.
+
+# Impact Statement
+
+This paper presents an audio tokenizer for audio language models, which can be applied to various audio generation tasks, such as text-to-speech and text-to-music. There is potential for misuse in generating misinformation, deepfake audio, or other harmful content. We advocate for the development of a detection model to identify audio produced by the codec model and generated by other generative models.
+
+# Acknowledgements
+
+This study was supported in part by the Centre for Perceptual and Interactive Intelligence (CPII) Ltd., a CUHK-led InnoCentre under the InnoHK initiative of the Innovation and Technology Commission of the Hong Kong Special Administrative Region Government.
+
+# References
+
+Agostinelli, A., Denk, T. I., Borsos, Z., Engel, J., Verzetti, M., Caillon, A., Huang, Q., Jansen, A., Roberts, A., Tagliasacchi, M., et al. Musiclm: Generating music from text. arXiv preprint arXiv:2301.11325, 2023.
+Ai, Y., Jiang, X.-H., Lu, Y.-X., Du, H.-P., and Ling, Z.-H. Apcodec: A neural audio codec with parallel amplitude and phase spectrum encoding and decoding. arXiv preprint arXiv:2402.10533, 2024.
+Baevski, A., Zhou, Y., Mohamed, A., and Auli, M. wav2vec 2.0: A framework for self-supervised learning of speech representations. Advances in neural information processing systems, 33:12449-12460, 2020.
+Banerjee, A. and Arora, V. wav2tok: Deep sequence tokenizer for audio retrieval. In The Eleventh International Conference on Learning Representations, 2023.
+Bertin-Mahieux, T., Ellis, D. P., Whitman, B., and Lamere, P. The million song dataset. In Proceedings of the 12th International Conference on Music Information Retrieval (ISMIR 2011), 2011.
+Borsos, Z., Marinier, R., Vincent, D., Kharitonov, E., Pietquin, O., Sharifi, M., Roblek, D., Teboul, O., Grangier, D., Tagliasacchi, M., et al. Audiolm: a language modeling approach to audio generation. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2023a.
+Borsos, Z., Sharifi, M., Vincent, D., Kharitonov, E., Zeghidour, N., and Tagliasacchi, M. Soundstorm: Efficient parallel audio generation. arXiv preprint arXiv:2305.09636, 2023b.
+Chen, S., Wang, C., Chen, Z., Wu, Y., Liu, S., Chen, Z., Li, J., Kanda, N., Yoshioka, T., Xiao, X., et al. Wavlm: Large-scale self-supervised pre-training for full stack speech processing. IEEE Journal of Selected Topics in Signal Processing, 16(6):1505-1518, 2022a.
+Chen, S., Wu, Y., Wang, C., Liu, S., Tompkins, D., Chen, Z., and Wei, F. Beats: Audio pre-training with acoustic tokenizers. arXiv preprint arXiv:2212.09058, 2022b.
+Copet, J., Kreuk, F., Gat, I., Remez, T., Kant, D., Synnaeve, G., Adi, Y., and Defossez, A. Simple and controllable music generation. arXiv preprint arXiv:2306.05284, 2023.
+Costantini, G., Iaderola, I., Paoloni, A., Todisco, M., et al. Emovo corpus: an italian emotional speech database. In Proceedings of the ninth international conference on language resources and evaluation (LREC'14), pp. 3501-3504. European Language Resources Association (ELRA), 2014.
+
+Défossez, A., Copet, J., Synnaeve, G., and Adi, Y. High fidelity neural audio compression. arXiv preprint arXiv:2210.13438, 2022.
+Défossez, A., Mazaré, L., Orsini, M., Royer, A., Pérez, P., Jégou, H., Grave, E., and Zeghidour, N. Moshi: a speech-text foundation model for real-time dialogue. arXiv preprint arXiv:2410.00037, 2024.
+Doh, S., Choi, K., Lee, J., and Nam, J. Lp-musiccaps: Llm-based pseudo music captioning. arXiv preprint arXiv:2307.16372, 2023.
+Drossos, K., Lipping, S., and Virtanen, T. Clotho: An audio captioning dataset. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 736-740. IEEE, 2020.
+Du, Z., Chen, Q., Zhang, S., Hu, K., Lu, H., Yang, Y., Hu, H., Zheng, S., Gu, Y., Ma, Z., et al. Cosyvoice: A scalable multilingual zero-shot text-to-speech synthesizer based on supervised semantic tokens. arXiv preprint arXiv:2407.05407, 2024.
+Hao, H., Zhou, L., Liu, S., Li, J., Hu, S., Wang, R., and Wei, F. Boosting large language model for speech synthesis: An empirical study. arXiv preprint arXiv:2401.00246, 2023.
+He, K., Chen, X., Xie, S., Li, Y., Dollar, P., and Girshick, R. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 16000-16009, 2022.
+Ho, J., Jain, A., and Abbeel, P. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33:6840-6851, 2020.
+Hsu, W.-N., Bolte, B., Tsai, Y.-H. H., Lakhotia, K., Salakhutdinov, R., and Mohamed, A. Hubert: Self-supervised speech representation learning by masked prediction of hidden units. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:3451-3460, 2021.
+Hu, E. J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L., and Chen, W. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021.
+Huang, P.-Y., Xu, H., Li, J., Baevski, A., Auli, M., Galuba, W., Metze, F., and Feichtenhofer, C. Masked autoencoders that listen. Advances in Neural Information Processing Systems, 35:28708-28720, 2022.
+Ji, S., Jiang, Z., Wang, W., Chen, Y., Fang, M., Zuo, J., Yang, Q., Cheng, X., Wang, Z., Li, R., et al. Wavtokenizer: an efficient acoustic discrete codec tokenizer for audio
+
+language modeling. arXiv preprint arXiv:2408.16532, 2024.
+Ju, Z., Wang, Y., Shen, K., Tan, X., Xin, D., Yang, D., Liu, Y., Leng, Y., Song, K., Tang, S., et al. Naturalspeech 3: Zero-shot speech synthesis with factorized codec and diffusion models. arXiv preprint arXiv:2403.03100, 2024.
+Kang, W., Yang, X., Yao, Z., Kuang, F., Yang, Y., Guo, L., Lin, L., and Povey, D. Libriheavy: a 50,000 hours asr corpus with punctuation casing and context. In ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 10991-10995. IEEE, 2024.
+Kharitonov, E., Vincent, D., Borsos, Z., Marinier, R., Girgin, S., Pietquin, O., Sharifi, M., Tagliasacchi, M., and Zeghidour, N. Speak, read and prompt: High-fidelity text-to-speech with minimal supervision. arXiv preprint arXiv:2302.03540, 2023.
+Kim, C. D., Kim, B., Lee, H., and Kim, G. Audiocaps: Generating captions for audios in the wild. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 119-132, 2019.
+Kreuk, F., Synnaeve, G., Polyak, A., Singer, U., Defossez, A., Copet, J., Parikh, D., Taigman, Y., and Adi, Y. Audiogen: Textually guided audio generation. arXiv preprint arXiv:2209.15352, 2022.
+Kumar, R., Seetharaman, P., Luebs, A., Kumar, I., and Kumar, K. High-fidelity audio compression with improved RVQGAN. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id=qjnl1QUUnFA.
+La Quatra, M., Koudounas, A., Vaiani, L., Baralis, E., Cagliero, L., Garza, P., and Siniscalchi, S. M. Benchmarking representations for speech, music, and acoustic events. In 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW), pp. 505-509, 2024. doi: 10.1109/ICASSPW62465.2024.10625960.
+Li, H., Xue, L., Guo, H., Zhu, X., Lv, Y., Xie, L., Chen, Y., Yin, H., and Li, Z. Single-codec: Single-codebook speech codec towards high-performance speech generation. arXiv preprint arXiv:2406.07422, 2024.
+Li, J., Li, D., Savarese, S., and Hoi, S. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In International conference on machine learning, pp. 19730-19742. PMLR, 2023.
+
+Lipman, Y., Chen, R. T., Ben-Hamu, H., Nickel, M., and Le, M. Flow matching for generative modeling. arXiv preprint arXiv:2210.02747, 2022.
+Liu, H., Xu, X., Yuan, Y., Wu, M., Wang, W., and Plumbley, M. D. Semanticodec: An ultra low bitrate semantic audio codec for general sound. arXiv preprint arXiv:2405.00233, 2024.
+Lostanlen, V. and Cella, C.-E. Deep convolutional networks on the pitch spiral for musical instrument recognition. arXiv preprint arXiv:1605.06644, 2016.
+Mei, X., Meng, C., Liu, H., Kong, Q., Ko, T., Zhao, C., Plumbley, M. D., Zou, Y., and Wang, W. Wavcaps: A chatgpt-assisted weakly-labelled audio captioning dataset for audio-language multimodal research. arXiv preprint arXiv:2303.17395, 2023.
+Nguyen, T. A., Muller, B., Yu, B., Costa-Jussa, M. R., Elbayad, M., Popuri, S., Ropers, C., Duquenne, P.-A., Algayres, R., Mavlyutov, R., et al. Spirit-lm: Interleaved spoken and written language model. Transactions of the Association for Computational Linguistics, 13:30-52, 2025.
+OpenAI. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
+Panayotov, V., Chen, G., Povey, D., and Khudanpur, S. Librispeech: an asr corpus based on public domain audio books. In 2015 IEEE international conference on acoustics, speech and signal processing (ICASSP), pp. 5206-5210. IEEE, 2015.
+Parker, J. D., Smirnov, A., Pons, J., Carr, C., Zukowski, Z., Evans, Z., and Liu, X. Scaling transformers for low-bitrate high-quality speech coding. arXiv preprint arXiv:2411.19842, 2024.
+Piczak, K. J. Esc: Dataset for environmental sound classification. In Proceedings of the 23rd ACM international conference on Multimedia, pp. 1015-1018, 2015.
+Pratap, V., Xu, Q., Sriram, A., Synnaeve, G., and Collobert, R. Mls: A large-scale multilingual dataset for speech research. arXiv preprint arXiv:2012.03411, 2020.
+Reddy, C. K., Gopal, V., and Cutler, R. Dnsmos p. 835: A non-intrusive perceptual objective speech quality metric to evaluate noise suppressors. In ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 886-890. IEEE, 2022.
+Saeki, T., Xin, D., Nakata, W., Koriyama, T., Takamichi, S., and Saruwatari, H. Utmos: Utokyo-sarulab system for voicemos challenge 2022. arXiv preprint arXiv:2204.02152, 2022.
+
+Siuzdak, H. Vocos: Closing the gap between time-domain and fourier-based neural vocoders for high-quality audio synthesis. arXiv preprint arXiv:2306.00814, 2023.
+Tang, C., Yu, W., Sun, G., Chen, X., Tan, T., Li, W., Lu, L., MA, Z., and Zhang, C. SALMONN: Towards generic hearing abilities for large language models. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=14rn7HpKVk.
+van Niekerk, B., Zaidi, J., Carbonneau, M.-A., and Kamper, H. Spoken-term discovery using discrete speech units. arXiv preprint arXiv:2408.14390, 2024.
+Veaux, C., Yamagishi, J., MacDonald, K., et al. Cstr vctk corpus: English multi-speaker corpus for cstr voice cloning toolkit. University of Edinburgh. The Centre for Speech Technology Research (CSTR), 6:15, 2017.
+Wang, C., Chen, S., Wu, Y., Zhang, Z., Zhou, L., Liu, S., Chen, Z., Liu, Y., Wang, H., Li, J., et al. Neural codec language models are zero-shot text to speech synthesizers. arXiv preprint arXiv:2301.02111, 2023.
+Wang, H., Suri, S., Ren, Y., Chen, H., and Shrivastava, A. Larp: Tokenizing videos with a learned autoregressive generative prior. arXiv preprint arXiv:2410.21264, 2024a.
+Wang, Y., Chen, H., Yang, D., Yu, J., Weng, C., Wu, Z., and Meng, H. Consistent and relevant: Rethink the query embedding in general sound separation. In ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 961-965. IEEE, 2024b.
+Wang, Y., Chen, H., Yang, D., Li, W., Luo, D., Li, G., Yang, S., Wu, Z., Meng, H., and Wu, X. Unisep: Universal target audio separation with language models at scale. arXiv preprint arXiv:2503.23762, 2025.
+Wu, H., Kanda, N., Eskimez, S. E., and Li, J. Ts3-codec: Transformer-based simple streaming single codec. arXiv preprint arXiv:2411.18803, 2024.
+Yang, D., Liu, S., Huang, R., Lei, G., Weng, C., Meng, H., and Yu, D. Instructts: Modelling expressive tts in discrete latent space with natural language style prompt. arXiv preprint arXiv:2301.13662, 2023a.
+Yang, D., Liu, S., Huang, R., Tian, J., Weng, C., and Zou, Y. Hifi-codec: Group-residual vector quantization for high fidelity audio codec. arXiv preprint arXiv:2305.02765, 2023b.
+
+Yang, D., Tian, J., Tan, X., Huang, R., Liu, S., Chang, X., Shi, J., Zhao, S., Bian, J., Wu, X., et al. Uniaudio: An audio foundation model toward universal audio generation. arXiv preprint arXiv:2310.00704, 2023c.
+Yang, D., Yu, J., Wang, H., Wang, W., Weng, C., Zou, Y., and Yu, D. Diffsound: Discrete diffusion model for text-to-sound generation. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2023d.
+Yang, D., Guo, H., Wang, Y., Huang, R., Li, X., Tan, X., Wu, X., and Meng, H. Uniaudio 1.5: Large language model-driven audio codec is a few-shot audio task learner. arXiv preprint arXiv:2406.10056, 2024a.
+Yang, D., Huang, R., Wang, Y., Guo, H., Chong, D., Liu, S., Wu, X., and Meng, H. Simplespeech 2: Towards simple and efficient text-to-speech with flow-based scalar latent transformer diffusion models. arXiv preprint arXiv:2408.13893, 2024b.
+Yang, D., Wang, D., Guo, H., Chen, X., Wu, X., and Meng, H. Simplespeech: Towards simple and efficient text-to-speech with scalar latent transformer diffusion models. arXiv preprint arXiv:2406.02328, 2024c.
+Yang, S.-w., Chi, P.-H., Chuang, Y.-S., Lai, C.-I. J., Lakhotia, K., Lin, Y. Y., Liu, A. T., Shi, J., Chang, X., Lin, G.-T., et al. Superb: Speech processing universal performance benchmark. arXiv preprint arXiv:2105.01051, 2021.
+Yu, Q., Weber, M., Deng, X., Shen, X., Cremers, D., and Chen, L.-C. An image is worth 32 tokens for reconstruction and generation. arXiv preprint arXiv:2406.07550, 2024.
+Zeghidour, N., Luebs, A., Omran, A., Skoglund, J., and Tagliasacchi, M. Soundstream: An end-to-end neural audio codec. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 30:495-507, 2021.
+Zen, H., Dang, V., Clark, R., Zhang, Y., Weiss, R. J., Jia, Y., Chen, Z., and Wu, Y. Libritts: A corpus derived from librispeech for text-to-speech. arXiv preprint arXiv:1904.02882, 2019.
+Zeng, A., Du, Z., Liu, M., Wang, K., Jiang, S., Zhao, L., Dong, Y., and Tang, J. Glm-4-voice: Towards intelligent and human-like end-to-end spoken chatbot. arXiv preprint arXiv:2412.02612, 2024.
+Zhang, X., Zhang, D., Li, S., Zhou, Y., and Qiu, X. Speechtokenizer: Unified speech tokenizer for speech large language models. arXiv preprint arXiv:2308.16692, 2023.
+Zhu, L., Wei, F., Lu, Y., and Chen, D. Scaling the codebook size of vqgan to 100,000 with a utilization rate of $99\%$ . arXiv preprint arXiv:2406.11837, 2024.
+
+
+Figure 4. The left diagram illustrates the framework of the audio language model, which includes a pre-trained LLM, a LoRA module, and a depth transformer. The audio language model can process both text and audio streaming inputs and generate corresponding text and audio outputs. The right diagram provides details of hierarchical audio modeling.
+
+
+
+# A. The details of audio language model framework
+
+In this section, we provide details of the audio language model. We follow the framework of UniAudio (Yang et al., 2023c) and Moshi (Défossez et al., 2024), which combines a pre-trained LLM with a smaller Transformer model to predict audio tokens in a hierarchical manner. In the original papers, both the LLM and the small Transformer are updated during training. Due to resource limitations, and following (Hao et al., 2023), we instead incorporate LoRA (Hu et al., 2021) into the LLM. For the LLM, we use LLaMA 3.2 1B. During training, we update only the LoRA module and the small Transformer.
+
+LoRA setting For the LoRA module, we add LoRA parameters to the self-attention and linear layers, with `lora_r = 32` and `lora_alpha = 16`.
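+A minimal sketch of the LoRA parameterization (Hu et al., 2021) with these settings; the layer sizes and random data below are placeholders, not the model's actual configuration:

```python
# Sketch: a LoRA-adapted linear layer. The frozen weight W receives a
# low-rank update (alpha / r) * B @ A, with r=32 and alpha=16 as above.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 64, 32, 16   # placeholder dimensions

W = rng.normal(size=(d_out, d_in))       # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01    # trainable down-projection
B = np.zeros((d_out, r))                 # trainable up-projection (zero init)

def lora_forward(x):
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.normal(size=(1, d_in))
# With B initialized to zero, the adapted layer starts identical to the base.
assert np.allclose(lora_forward(x), x @ W.T)
```

+Only $A$ and $B$ are updated during training, which is what keeps the fine-tuning cost far below that of updating the full LLM.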
+
+Depth Transformer setting For the depth transformer, we use 6 self-attention layers with 32 attention heads. The attention dimension is the same as in LLaMA 3.2 1B.
+
+# B. The details of the influence of bitrate and semantic information for audio language model.
+
+In this section, we provide details of the validation experiments to explore the influence of bitrate and semantic information on audio language models. Following AudioLM (Borsos et al., 2023a), we construct an audio token pre-training task similar to text pre-training, where the model is tasked with predicting the next audio token based on the previous token sequence.
+
+# B.1. Training data
+
+We conduct the experiments on 2,000 hours of speech data selected from the MLS dataset (Pratap et al., 2020).
+
+# B.2. Test data
+
+We evaluate on the LibriSpeech test-clean set.
+
+Table 8. Reconstruction performance of audio tokenizers at different frame rates.
+
+| Version | Bitrate (↓) | FPS (↓) | Codebook size | PESQ (↑) | UTMOS (↑) | VISQOL (↑) | STOI (↑) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 50 Hz | 1650 bps | 50 | 2048 | 2.22 | 3.69 | 3.63 | 0.86 |
+| 25 Hz | 825 bps | 25 | 2048 | 2.07 | 3.56 | 3.61 | 0.83 |
+| 12.5 Hz | 412.5 bps | 12.5 | 2048 | 1.58 | 2.49 | 3.37 | 0.77 |
+
+# B.3. Framework
+
+We use the same framework as described in Section A; the difference is that we do not use text streaming.
+
+# B.4. Three Types of Audio Tokenizers
+
+Following the structure of MimiCodec (Défossez et al., 2024), we train three versions of the audio codec tokenizer, all on $24\mathrm{kHz}$ speech data:
+
+(V1) We set the down-sampling rate to [2, 5, 6, 8], resulting in a $50\mathrm{Hz}$ frame rate. We use three RVQ layers, and the codebook size is 2,048. The bitrate of this audio codec is 1.65 kbps.
+(V2) We set the down-sampling rate to [4, 5, 6, 8], resulting in a $25\mathrm{Hz}$ frame rate. We use three RVQ layers, and the codebook size is 2,048. The bitrate of this audio codec is 825 bps.
+(V3) We set the down-sampling rate to [2, 4, 5, 6, 8], resulting in a $12.5\mathrm{Hz}$ frame rate. We use three RVQ layers, and the codebook size is 2,048. The bitrate of this audio codec is 412.5 bps.
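+The three bitrates above follow directly from frame rate × number of RVQ layers × bits per codeword (a codebook of size 2,048 needs $\log_2 2048 = 11$ bits per token); a minimal sketch:

```python
# Sketch: bitrate of each codec version in bits per second.
from math import log2

def bitrate_bps(frame_rate_hz, num_rvq_layers, codebook_size):
    return frame_rate_hz * num_rvq_layers * log2(codebook_size)

print(bitrate_bps(50, 3, 2048))    # 1650.0 bps (V1)
print(bitrate_bps(25, 3, 2048))    # 825.0 bps  (V2)
print(bitrate_bps(12.5, 3, 2048))  # 412.5 bps  (V3)
```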
+
+Note that the original MimiCodec is trained with distillation loss from WavLM; we do not add this loss during the training of our audio tokenizer. Therefore, these three audio tokenizers do not include any semantic information. Table 8 shows the reconstruction performance of the three audio tokenizers.
+
+# B.5. Semantic Tokenizer
+
+The previous three audio codec tokenizers do not consider semantic information. To evaluate the importance of semantic information, we follow WhisperSpeech to build a Whisper-based semantic tokenizer. Specifically, we follow the training code of WhisperSpeech, using two down-sampling layers to compress the Whisper encoder's features to a $12.5\mathrm{Hz}$ frame rate, and then add three RVQ layers to quantize them. Thus, this semantic tokenizer has the same bitrate as the V3 audio tokenizer.
+
+# B.6. Evaluation metrics
+
+We evaluate the pre-training performance from the following aspects:
+
+Training efficiency: The time and memory cost of self-attention scales as $O(T^2)$, where $T$ is the sequence length. A low-bitrate audio tokenizer compresses the audio signal into a shorter token sequence, thereby improving training efficiency. For all experiments, we use the same GPU machine to train the model and record the training duration.
+
+Inference efficiency: Similarly, a low-bitrate audio tokenizer improves inference efficiency, as it requires fewer inference steps. We use the Real-Time Factor (RTF), the ratio of processing time to audio duration, to assess inference efficiency. Note that for all experiments, we do not use any inference optimizations, such as KV caching.
+
+Validation loss and perplexity: Following text LLMs (OpenAI, 2023), we use validation loss and perplexity to evaluate model performance.
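+For reference, perplexity is the exponential of the per-token validation loss, and the quadratic attention cost is why a lower frame rate helps training efficiency; a minimal sketch with illustrative numbers:

```python
# Sketch: perplexity from validation loss, and relative O(T^2) attention
# cost for a 10 s utterance at different tokenizer frame rates.
import math

def perplexity(val_loss):
    return math.exp(val_loss)

def attention_cost(frame_rate_hz, duration_s=10):
    t = frame_rate_hz * duration_s   # sequence length T
    return t * t                     # number of attention entries, O(T^2)

# Halving the frame rate quarters the attention cost.
print(attention_cost(25.0) / attention_cost(12.5))  # 4.0
```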
+
+
+Figure 5. Performance comparison with different window sizes during inference.
+
+Table 9. The influence of codebook size on reconstruction performance.
+
+| Codebook size | PESQ (↑) | UTMOS (↑) | VISQOL (↑) | STOI (↑) | STFT loss (↓) | Token utilization (↑) |
+| --- | --- | --- | --- | --- | --- | --- |
+| 2048 | 2.0 | 3.76 | 3.78 | 0.81 | 1.20 | 100% |
+| 1024 | 1.83 | 3.66 | 3.65 | 0.80 | 1.14 | 100% |
+| 512 | 1.69 | 3.64 | 3.58 | 0.792 | 1.18 | 100% |
+
+# C. Ablation study
+
+# C.1. The influence of window size for ALMTokenizer
+
+As discussed in the previous section, the proposed ALMTokenizer supports a dynamic compression rate by changing the window size $w$. Figure 5 compares reconstruction performance across window sizes. We observe that a smaller window size yields better reconstruction performance but also increases the bitrate: for example, a window size of 2 gives a bitrate of 1237.5 bps, while a window size of 6 gives 412.5 bps. This also highlights an advantage of the proposed method: the frame rate can be changed dynamically at inference time simply by setting a different window size.
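+A minimal sketch of this bitrate/window-size trade-off, under the assumption (inferred from the two operating points quoted above, not taken from released code) that the query frame rate is inversely proportional to the window size $w$:

```python
# Sketch: bitrate as a function of window size w, assuming 3 VQ layers,
# 11-bit codewords, and a query frame rate proportional to 1/w. The base
# rate of 75 Hz is an assumption chosen so that w=2 -> 1237.5 bps and
# w=6 -> 412.5 bps, matching the numbers quoted in the text.
def bitrate_bps(window_size, base_rate_hz=75.0, layers=3, bits_per_code=11):
    return (base_rate_hz / window_size) * layers * bits_per_code

print(bitrate_bps(2), bitrate_bps(6))  # 1237.5 412.5
```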
+
+# C.2. The influence of codebook size
+
+We explore three codebook sizes: 512, 1024, and 2048. To align with the setting of MimiCodec (Défossez et al., 2024), we set the maximum codebook size to 2048. The results are shown in Table 9. We observe that scaling up the codebook size improves reconstruction performance. Furthermore, we find that almost all tokens are used.
+
+# C.3. The influence of model size for reconstruction performance
+
+To explore the influence of model size on reconstruction performance, we set up two configurations: (1) We use 24 self-attention layers for both the transformer encoder and transformer decoder, resulting in 174M parameters. (2) We use 12 self-attention layers for both the transformer encoder and transformer decoder, resulting in 87M parameters. In both settings, we keep the Patchify module the same size, as it consists of several convolutional layers, and its total parameters are small. The experimental results, as shown in Table 10, indicate that using a larger model can improve reconstruction but also increases computational resource consumption (higher RTF). Previous work, StableCodec (Parker et al., 2024), shows that scaling the codec model to 1B parameters can lead to better performance. Due to computational resource limitations, we leave scaling to a larger model size for future work.
+
+Table 10. The influence of model size on reconstruction performance.
+
+| Setting | PESQ (↑) | UTMOS (↑) | VISQOL (↑) | STOI (↑) | Model size, M (↓) | RTF (↓) |
+| --- | --- | --- | --- | --- | --- | --- |
+| 24 attention layers | 2.0 | 3.76 | 3.78 | 0.81 | 174 | 0.031 |
+| 12 attention layers | 1.87 | 3.57 | 3.70 | 0.79 | 87 | 0.019 |
+
+# C.4. The influence of mask-rate in MAE loss
+
+Inspired by MAE (He et al., 2022), we test three mask-rate ranges: 10–20%, 20–30%, and 30–40%. The results are shown in Table 11. Higher rates (30–40%) benefit semantics but harm reconstruction, leading us to adopt the intermediate range (20–30%).
+
+Table 11. The influence of the mask rate in the MAE loss.
+
+| Mask rate range | UTMOS (↑) | DNSMOS (↑) | VISQOL (↑) | PESQ (↑) | STOI (↑) | ASR (↓) | ER (↑) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 10–20% | 3.77 | 3.62 | 3.80 | 2.0 | 0.81 | 18.7 | 27.7 |
+| 20–30% | 3.76 | 3.64 | 3.78 | 2.0 | 0.81 | 18.3 | 29.0 |
+| 30–40% | 3.36 | 3.06 | 3.31 | 1.58 | 0.77 | 18.1 | 29.6 |
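+The mask-rate sampling can be sketched as follows (our illustration of drawing a rate from the adopted 20–30% range, not the authors' implementation):

```python
# Sketch: MAE-style masking with a rate sampled uniformly from a range.
import random

def sample_mask(num_tokens, rate_range=(0.2, 0.3), seed=0):
    rng = random.Random(seed)
    rate = rng.uniform(*rate_range)              # e.g. 0.2 <= rate < 0.3
    num_masked = max(1, int(num_tokens * rate))  # number of tokens to hide
    return sorted(rng.sample(range(num_tokens), num_masked))

masked = sample_mask(100)
print(len(masked))  # between 20 and 30 positions are masked
```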
+
+# D. Evaluation
+
+We evaluate the performance of previous SOTA audio tokenizers and our proposed ALMTokenizer on audio reconstruction, audio semantic information, audio understanding, and audio generation tasks.
+
+# D.1. Audio Reconstruction
+
+For speech data, we use DNS-MOS (Reddy et al., 2022), UT-MOS (Saeki et al., 2022), PESQ, STOI (Short-Time Objective Intelligibility), VISQOL (speech version), and STFT loss as metrics.
+
+For sound and music data, we use VISQOL (audio version), STFT loss, and Mel loss. Furthermore, following (Kumar et al., 2023), we conduct a MUSHRA subjective test for speech, sound, and music. Specifically, we recruit 10 audio researchers to conduct the evaluation, asking each listener to rate every audio sample on a scale from 0 to 100. Refer to Appendix D.5 for details.
+
+Evaluation Datasets: For speech data, we evaluate on a subset of VCTK (Veaux et al., 2017) (200 speech utterances) and a subset of the LibriTTS test clean set (Zen et al., 2019) (400 speech utterances). For sound data, we evaluate on a subset of the AudioCaps validation set (Kim et al., 2019) (200 sound utterances). For music data, we evaluate on a subset of the MusicCaps (Agostinelli et al., 2023) dataset (200 music utterances).
+
+# D.2. Audio Semantic Information
+
+Previous SSL models, such as Hubert (Hsu et al., 2021) and WavLM (Chen et al., 2022a), have shown that semantic-rich representations can be used to solve downstream recognition tasks by fine-tuning several adaptor layers. Inspired by these works, we propose evaluating the performance of the audio tokenizer for downstream recognition tasks. We use the quantized features of the audio tokenizer as the input for downstream tasks. We follow two popular benchmarks: SUPERB (Yang et al., 2021) and ARCH (La Quatra et al., 2024).
+
+For speech data, we conduct the automatic speech recognition (ASR) task on the LibriSpeech (Panayotov et al., 2015) dataset and the emotion classification (EC) task on the EMOVO (Costantini et al., 2014) dataset. For the ASR task, we train on the LibriSpeech train-100 set and evaluate on the LibriSpeech test clean set. For the EC task, we follow ARCH (La Quatra et al., 2024) to split the training and test sets.
+
+For sound data, we conduct the sound classification task on the ESC-50 dataset (Piczak, 2015). For music data, we conduct the music classification task on the Medley-Solos-DB dataset (Lostanlen & Cella, 2016). For both tasks, we follow the ARCH benchmarking settings to split the training and test sets.
+
+For all experiments, we train for 10 epochs with the same learning rate and batch size. For the automatic speech recognition
+
+task, we use word error rate (WER) as the metric. For the other classification tasks, we use accuracy as the metric.
+
+# D.3. LM-based Audio Understanding
+
+Overview To further validate whether the audio tokenizer is suitable for building an audio language model, we propose conducting an audio understanding task using discrete tokens as input. We conduct three tasks: automatic speech recognition (ASR), audio captioning, and music captioning. We use the framework introduced in Section A. For audio data, we use the audio tokenizer to encode it as discrete tokens; for text data, we use the BPE tokenizer of LLAMA 3.2. We construct the sequence as [audio token, text token], then the model is asked to predict the text token based on the previous audio token.
+
+Training Data For the ASR task, we select 2,000 hours of LibriHeavy speech data (Kang et al., 2024). For the audio captioning tasks, we use AudioCaps (Kim et al., 2019) and BBC sound effects (Mei et al., 2023). For the BBC sound effects, we truncate each utterance to its first 10 seconds if its duration exceeds 10 seconds, which yields about 500 hours of sound data. For the music captioning task, we use a subset of the Million Song dataset, truncating each utterance to its first 10 seconds, which results in about 500 hours of music data. For the corresponding captions, we use LPMusicCaps (Doh et al., 2023).
+
+Test Data For the ASR task, we evaluate on the LibriSpeech test clean set. For the audio captioning task, we evaluate on the AudioCaps dataset (Kim et al., 2019). For the music captioning task, we evaluate on the MusicCaps dataset (Agostinelli et al., 2023).
+
+Metrics Similarly, we use WER as the evaluation metric for the ASR task. For audio and music captioning, we follow (Drossos et al., 2020) and adopt the BLEU-1, BLEU-2, BLEU-3, METEOR, ROUGE-L, CIDEr-D, SPICE, and SPIDEr metrics.
+
+Inference Setting For inference, we directly use the top-k sampling strategy and set $k = 30$ for all experiments.
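A minimal numpy sketch of this top-k strategy (a hypothetical helper, not the authors' implementation):

```python
import numpy as np

def top_k_sample(logits, k=30, rng=None):
    """Sample a token id after masking all but the k highest logits."""
    if rng is None:
        rng = np.random.default_rng()
    logits = np.asarray(logits, dtype=np.float64)
    k = min(k, logits.size)
    kth_value = np.sort(logits)[-k]       # smallest logit that is kept
    # ties at the threshold may keep a few extra entries in this sketch
    masked = np.where(logits >= kth_value, logits, -np.inf)
    probs = np.exp(masked - masked.max()) # softmax over the kept set
    probs /= probs.sum()
    return int(rng.choice(logits.size, p=probs))
```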
+
+# D.4. LM-based Audio Generation
+
+We also perform audio generation tasks, including text-to-speech, text-to-sound, and text-to-music generation. Similarly, we construct the sequence as [text token, audio token], then the model is asked to predict the audio token based on the previous text token.
+
+Training and Test Data We use the same training and test data as the audio comprehension task.
+
+Metrics For TTS evaluation, we use WER to evaluate robustness, while UTMOS and DNSMOS assess speech quality. For text-to-sound and text-to-music, we follow previous work such as AudioGen (Kreuk et al., 2022), using Fréchet Audio Distance (FAD), Kullback-Leibler (KL) divergence, and Fréchet Distance (FD) to measure audio fidelity and similarity.
+
+Inference Setting During the inference stage, we use the top-k sampling strategy and set $k = 30$ for all experiments.
+
+# D.5. Subjective Evaluations
+
+For the subjective evaluations, we adopt the approach used in previous works (Kumar et al., 2023; Parker et al., 2024) and use the MUSHRA format without a hidden anchor. Listeners are asked to compare multiple versions of an example simultaneously, including both a labeled reference and a hidden reference. They are given the following instructions: "Please assess the quality similarity between an audio sample and its reference. Listen carefully to the reference audio, then rate the quality of each test clip in comparison. A score of 0 indicates no resemblance to the reference, while a score of 100 means it is identical to the reference." We randomly select 10 samples from each category (speech, music, and sound) in the test set, ensuring that each sample receives 10 ratings.
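Each system's MUSHRA score is then summarized over all listener-sample ratings; a toy sketch with hypothetical scores (`mushra_summary` is an illustrative helper, not part of the paper):

```python
import statistics

def mushra_summary(ratings):
    """ratings: {system_name: list of 0-100 scores over listeners x samples}.
    Returns {system_name: (mean, stdev)} per system."""
    return {name: (statistics.mean(scores),
                   statistics.stdev(scores) if len(scores) > 1 else 0.0)
            for name, scores in ratings.items()}
```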
+
+# E. Audio Tokenizer Baselines
+
+To make a fair comparison, we classify the audio tokenizers into two types: (1) speech-based tokenizers, which are trained on speech datasets, and (2) audio-based tokenizers, which are trained on speech, sound, and music datasets.
+
+# E.1. Speech Tokenizer
+
+For speech data, we compare with:
+
+Table 12. The performance comparison on LibriTTS test clean. Bold for the best result and underline for the second-best result.
+
+| Models | FPS/TPS | CS/BR | UTMOS (↑) | DNSMOS (↑) | VISQOL (↑) | STOI (↑) | PESQ (↑) | Model size (M) (↓) | RTF (↓) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Encodec | 50/400 | 1024/6kbps | 3.30 | 3.76 | 3.95 | 0.94 | 2.72 | 14 | 0.019 |
+| Encodec | 50/150 | 1024/1.5kbps | 2.02 | 3.27 | 3.83 | 0.88 | 1.79 | 14 | 0.019 |
+| DAC | 50/150 | 1024/1.5kbps | 2.61 | 3.36 | 3.85 | 0.89 | 1.96 | 71 | 0.026 |
+| WavTokenizer | 40/40 | 4096/0.48kbps | 3.65 | 3.61 | 3.80 | 0.87 | 1.81 | 77 | 0.017 |
+| StableCodec | 25/25 | 46656/0.4kbps | 4.20 | 3.74 | 3.51 | 0.88 | 1.85 | 950 | 0.039 |
+| MimiCodec (3 RVQ) | 12.5/37.5 | 2048/0.41kbps | 2.82 | 3.28 | 3.34 | 0.83 | 1.40 | 75.6 | 0.023 |
+| ALMTokenizer (Ours) | 12.5/37.5 | 2048/0.41kbps | 3.68 | 3.64 | 3.90 | 0.90 | 1.92 | 174 | 0.031 |
+
+(1) Encodec (Defossez et al., 2022), a SOTA audio codec model trained on large-scale speech, sound, and music datasets. The official open-sourced $24\mathrm{kHz}$ version is used.
+(2) DAC-Codec (Kumar et al., 2023), which offers very high reconstruction performance. It is trained on large-scale speech, sound, and music datasets. The official open-sourced $24\mathrm{kHz}$ version is used.
+(3) MimiCodec (Défossez et al., 2024), a SOTA low-bitrate speech codec model trained on a large-scale speech dataset. The sampling rate is $24\mathrm{kHz}$ .
+(4) SpeechTokenizer (Zhang et al., 2023), a semantic-rich speech codec model trained on a large-scale speech dataset. The sampling rate is $16\mathrm{kHz}$ .
+(5) WavTokenizer (Ji et al., 2024), an audio codec tokenizer trained on large-scale speech, sound, and music datasets. The sampling rate is $24\mathrm{kHz}$ .
+
+To make a fair comparison, for Encodec, DAC-Codec, and SpeechTokenizer, we use the first three RVQ layers to control the bitrate during inference.
+
+# E.2. Audio Tokenizer
+
+For sound and music data, we compare with Encodec, DAC-Codec, and WavTokenizer. These three models are trained on large-scale speech, sound, and music datasets.
+
+# E.3. Semantic Models
+
+Furthermore, to evaluate the performance of semantic information, we also introduce several SSL-based models. For speech, we use WavLM (Chen et al., 2022a) and HuBERT (Hsu et al., 2021). For sound and music, we use BEATs (Chen et al., 2022b) and Wav2Vec2-AudioSet $^{6}$ .
+
+# F. More audio tokenizer evaluation experiments
+
+# F.1. The subjective evaluation for audio tokenizer
+
+Table 7 shows the subjective evaluation results for the audio tokenizers.
+
+# F.2. Evaluation results on LibriTTS test clean
+
+We report the reconstruction performance evaluated on a subset of the LibriTTS test clean set, where we randomly select 400 speech utterances. Additionally, we calculate the Real-Time Factor (RTF) and model size to assess efficiency. For RTF evaluation, we use an NVIDIA A100 GPU to evaluate all models.
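RTF here is wall-clock processing time divided by audio duration (lower is better; below 1 means faster than real time); a minimal sketch with a stand-in codec function (`real_time_factor` is an illustrative helper):

```python
import time

def real_time_factor(codec_fn, audio, sample_rate):
    """Return processing-time / audio-duration for one encode-decode pass."""
    duration_s = len(audio) / sample_rate      # audio length in seconds
    start = time.perf_counter()
    codec_fn(audio)                            # full forward pass
    elapsed_s = time.perf_counter() - start
    return elapsed_s / duration_s
```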
+
+# F.3. Length generalization
+
+StableCodec (Parker et al., 2024) highlights that the introduction of transformer-based architectures can lead to the length generalization problem. For instance, the training data of ALMTokenizer consists of 5-second segments, whereas the test
+
+Table 13. Objective metrics for the ALMTokenizer and baselines, evaluated on utterances from length 4s to 10s, showing generalization of models across lengths
+
+| Length | Model | FPS | TPS | Bitrate | PESQ (↑) | UTMOS (↑) | VISQOL (↑) | STOI (↑) | DNSMOS (↑) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| 4 s | Encodec | 50 | 150 | 1.5kbps | 1.97 | 2.64 | 3.62 | 0.80 | 3.26 |
+| 4 s | DAC | 50 | 150 | 1.5kbps | 2.1 | 3.17 | 3.65 | 0.81 | 3.26 |
+| 4 s | Ours | 12.5 | 37.5 | 0.41kbps | 1.84 | 3.63 | 3.69 | 0.79 | 3.41 |
+| 6 s | Encodec | 50 | 150 | 1.5kbps | 1.97 | 2.54 | 3.63 | 0.81 | 3.26 |
+| 6 s | DAC | 50 | 150 | 1.5kbps | 2.0 | 3.11 | 3.65 | 0.81 | 3.28 |
+| 6 s | Ours | 12.5 | 37.5 | 0.41kbps | 1.89 | 3.66 | 3.75 | 0.81 | 3.62 |
+| 8 s | Encodec | 50 | 150 | 1.5kbps | 1.96 | 2.52 | 3.63 | 0.81 | 3.34 |
+| 8 s | DAC | 50 | 150 | 1.5kbps | 2.1 | 3.18 | 3.66 | 0.81 | 3.28 |
+| 8 s | Ours | 12.5 | 37.5 | 0.41kbps | 1.95 | 3.55 | 3.74 | 0.81 | 3.66 |
+| 10 s | Encodec | 50 | 150 | 1.5kbps | 1.95 | 2.53 | 3.65 | 0.81 | 3.32 |
+| 10 s | DAC | 50 | 150 | 1.5kbps | 2.1 | 2.19 | 3.67 | 0.81 | 3.25 |
+| 10 s | Ours | 12.5 | 37.5 | 0.41kbps | 1.96 | 3.54 | 3.73 | 0.81 | 3.66 |
+
+data comprises segments of varying durations. We evaluate the model across four distinct length levels: 4, 6, 8, and 10 seconds. Encodec and DAC are selected as baselines due to their reliance on convolutional layers, which are robust to variable input lengths. As shown in Table 13, the evaluation results indicate that ALMTokenizer effectively handles inference across these diverse lengths. These findings suggest that ALMTokenizer exhibits strong generalization with respect to input length variation.
+
+# F.4. Comparison with diffusion-based audio codec models
+
+We compare ALMTokenizer with an alternative family of audio tokenizers that leverage discrete semantic tokens derived from self-supervised learning (SSL) models (e.g., HuBERT (Hsu et al., 2021), WavLM (Chen et al., 2022a), AudioMAE (Huang et al., 2022)). These models first quantize the SSL features into semantic tokens and then use a generative model to resynthesize the waveform; diffusion (Ho et al., 2020) and flow matching (Lipman et al., 2022) are two popular choices. Previous works, such as the GLM4-Voice tokenizer (Zeng et al., 2024) and SemantiCodec (Liu et al., 2024), have demonstrated success using diffusion-based decoders. However, such strategies tend to incur significant information loss. For instance, the semantic tokens in GLM4-Voice lack timbre information and require additional prompts to control timbre during decoding; notably, the open-sourced GLM4-Voice tokenizer uses a fixed timbre, meaning that any speech encoded by GLM4-Voice loses its original timbre. To address this information loss, SemantiCodec introduces an additional acoustic stream to enhance waveform reconstruction. A key concern, however, is that both SemantiCodec and the GLM4-Voice tokenizer demand significantly more computational resources at inference time. In the following, we present a comprehensive comparison between ALMTokenizer and SemantiCodec, focusing on: (1) reconstruction performance for speech, sound, and music; (2) semantic information performance for speech, sound, and music; and (3) computational resource requirements during inference, measured using RTF.
+
+Table 14 shows the speech reconstruction and semantic performance, where we observe that ALMTokenizer outperforms the alternatives in both aspects while using less bitrate. Table 15 presents experimental results for sound and music data, where ALMTokenizer again demonstrates superior performance across all metrics compared to SemantiCodec. In Table 16, we present the model size and RTF metrics, showing that ALMTokenizer has fewer model parameters and significantly surpasses SemantiCodec in inference speed (0.031 vs 0.92).
+
+# G. The details of ALMTokenizer structure and training
+
+# G.1. Model structure
+
+Table 17 gives the details of the ALMTokenizer configuration, which results in 174M parameters. In all experiments, we adopt 8 transformer layers for the MAE-transformer encoder and decoder.
+
+Table 14. The performance comparison between ALMTokenizer and SemantiCodec on the VCTK dataset.
+
+| Models | FPS/TPS | CS/BR | UTMOS (↑) | DNSMOS (↑) | VISQOL (↑) | STOI (↑) | PESQ (↑) | ASR (↓) | EC (↑) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| SemantiCodec | 50/50 | 16384/0.68kbps | 3.2 | 3.57 | 3.90 | 0.81 | 1.76 | 48.3 | 17.8 |
+| ALMTokenizer | 12.5/37.5 | 2048/0.41kbps | 3.76 | 3.64 | 3.78 | 0.81 | 2.0 | 18.3 | 29.0 |
+
+Table 15. The performance comparison between ALMTokenizer and SemantiCodec on music (MusicCaps) and sound (AudioCaps) data.
+
+| Data | Models | FPS/TPS | CS/BR | Mel loss (↓) | STFT loss (↓) | VISQOL (↑) | Classification (↑) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| Sound | SemantiCodec | 50/50 | 16384/0.68kbps | 18.45 | 1.40 | 2.47 | 38.8% |
+| Sound | ALMTokenizer | 12.5/37.5 | 2048/0.41kbps | 15.0 | 1.24 | 2.99 | 44% |
+| Music | SemantiCodec | 50/50 | 16384/0.68kbps | 47.9 | 1.58 | 2.49 | 48% |
+| Music | ALMTokenizer | 12.5/37.5 | 2048/0.41kbps | 34.4 | 1.32 | 3.96 | 59% |
+
+Patchify and UnPatchify modules A single-channel audio signal $\pmb{x} \in \mathcal{R}^{1 \times N}$ (where $N$ denotes the sampling points) is processed through the Encodec-style Patchify and UnPatchify modules, which adopt the same structure as Encodec (Défossez et al., 2022), consisting of four convolutional blocks. Each convolutional block consists of a residual unit followed by a down-sampling layer. These convolution blocks effectively encode the audio signal $\pmb{x}$ into an audio frame representation $e \in \mathcal{R}^{T \times d}$ , where $T$ denotes the number of frames and $d$ denotes the dimension of each vector. The convolution blocks are followed by a two-layer LSTM for sequence modeling, followed by a final 1D convolutional layer with a kernel size of 7 and $D$ output channels. The UnPatchify module mirrors the Patchify architecture by substituting stride convolutions with transposed convolutions and reversing the stride order.
+
+For the StableCodec-style Patchify and UnPatchify modules, we follow the approach in StableCodec (Parker et al., 2024) and use a reshape operation to transform $\boldsymbol{x} \in \mathcal{R}^{t \times sr}$ into $e \in \mathcal{R}^{T \times d}$ , where $T = N / 320$ and $d = 320$ . We then apply a linear layer to map the dimension to $D$ . Finally, we add four transformer layers for sequence modeling. Similarly, the UnPatchify module mirrors the Patchify architecture.
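The StableCodec-style Patchify amounts to a reshape plus a linear projection; a numpy sketch of this step (`patchify_reshape` is a hypothetical helper; random weights stand in for the learned linear layer, and the four transformer layers are omitted):

```python
import numpy as np

def patchify_reshape(x, patch=320, D=512, seed=0):
    """Reshape a waveform (N,) into frames (T, patch), then project to (T, D)."""
    x = np.asarray(x, dtype=np.float64)
    T = len(x) // patch
    frames = x[: T * patch].reshape(T, patch)             # (T, 320)
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((patch, D)) / np.sqrt(patch)  # stand-in linear layer
    return frames @ W                                     # (T, D)
```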
+
+Discriminators For the discriminators, we follow prior work (Défossez et al., 2022), which combines mel-spectrogram and log-mel-spectrogram features and inputs them into a network consisting of several convolutional layers. Specifically, we use six discriminators with different configurations: the hidden dimensions are set as 64, 128, 256, 512, 512, 512, and the hop lengths are set as 32, 64, 128, 256, 512, 1024.
+
+# G.2. Reconstruction loss and adversarial loss for ALMTokenizer
+
+Let the reconstructed signal be $\hat{\pmb{x}}$ . For the reconstruction loss, we design it from two perspectives: the time domain and the frequency domain. We first compute the $L_{1}$ loss between $\pmb{x}$ and $\hat{\pmb{x}}$ in the time domain. Next, we compute the $L_{1}$ loss between the STFT spectrogram of $\pmb{x}$ and $\hat{\pmb{x}}$ in the frequency domain. Following (Wang et al., 2024b), we employ a sub-band split strategy to divide the spectrogram into several parts. The adversarial loss is employed to enhance the perceptual quality of the generated audio:
+
+$$
+\mathcal{L}_{d} = \frac{1}{K} \sum_{k=1}^{K} \left[ \max(0, 1 - D_{k}(\boldsymbol{x})) + \max(0, 1 + D_{k}(\hat{\boldsymbol{x}})) \right] \tag{4}
+$$
+
+where $K$ denotes the number of discriminators. During the training stage, the adversarial loss for the generator is computed as a hinge loss over the logits of these discriminators:
+
+$$
+\mathcal{L}_{adv} = \frac{1}{K} \sum_{k=1}^{K} \max(0, 1 - D_{k}(\hat{\boldsymbol{x}})) \tag{5}
+$$
+
+The feature loss $\mathcal{L}_{feat}$ is computed by taking the average absolute difference between the discriminator's internal layer outputs for the generated audio and those for the corresponding real audio.
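Equations (4) and (5) are the standard hinge GAN losses averaged over the $K$ discriminators; a minimal numpy sketch (illustrative helper names, operating on per-discriminator logit arrays):

```python
import numpy as np

def disc_hinge_loss(real_logits, fake_logits):
    """Eq. (4): discriminator hinge loss averaged over K discriminators."""
    K = len(real_logits)
    return sum(np.maximum(0.0, 1.0 - r).mean() + np.maximum(0.0, 1.0 + f).mean()
               for r, f in zip(real_logits, fake_logits)) / K

def gen_adv_loss(fake_logits):
    """Eq. (5): generator adversarial hinge loss over K discriminators."""
    K = len(fake_logits)
    return sum(np.maximum(0.0, 1.0 - f).mean() for f in fake_logits) / K
```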
+
+Table 16. The model size and RTF comparison between SemantiCodec and ALMTokenizer.
+
+| Model | Model size (M) (↓) | RTF (↓) |
+| --- | --- | --- |
+| SemantiCodec | 507 | 0.92 |
+| ALMTokenizer (Ours) | 174 | 0.031 |
+
+Table 17. ALMTokenizer model backbone configurations.
+
+| Component | Configuration |
+| --- | --- |
+| Input shape | (B, 1, N) |
+| Patchify module (output) | (B, T, d), T = N/320 |
+| Token interleaving/retrieval window w | [2, 3, 4, 5, 6, 7, 8, 9, 10] |
+| Transformer encoder dimension | 256 |
+| Transformer encoder layers | 24 |
+| Transformer decoder dimension | 512 |
+| Transformer decoder layers | 24 |
+| Codebook size | 2048 |
+| VQ layers | 3 |
+| Transformer heads | 64 |
+| UnPatchify module (output) | (B, 1, N) |
+
+# G.3. Training details
+
+The AdamW optimizer is used for training, with a learning rate of $1e-4$ and 200k training steps. The final loss is shown below, where we set $\lambda_{1} = 0.5$ and $\lambda_{2} = 0.1$ in our experiments. We conduct all experiments on 4 NVIDIA A100-80G GPUs.
+
+$$
+\mathcal{L} = \mathcal{L}_{\text{adv}} + \mathcal{L}_{\text{feat}} + \mathcal{L}_{\text{rec}} + \lambda_{1} \mathcal{L}_{\text{MAE}} + \lambda_{2} \mathcal{L}_{\text{AR}} \tag{6}
+$$
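As a sanity check, the weighted combination in Eq. (6) with the stated weights is simply:

```python
def total_loss(l_adv, l_feat, l_rec, l_mae, l_ar, lam1=0.5, lam2=0.1):
    """Eq. (6) with the paper's weights lambda_1 = 0.5, lambda_2 = 0.1."""
    return l_adv + l_feat + l_rec + lam1 * l_mae + lam2 * l_ar
```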
+
+# H. Reproducibility Statement
+
+To enhance reproducibility, we provide the pseudocode of ALMTokenizer. In the future, we plan to improve both the model structure and training data to obtain more robust models, especially for music and sound, and to release the code for the research community.
+
+Listing 1. Pseudocode of ALMTokenizer
+```python
+class ALMTokenizer:
+    def __init__(self, transformerEncoder_args, transformerDecoder_args,
+                 maeDecoder_args, depth_gpt_args, patchify_args,
+                 encoder_embedding_dim, decoder_embedding_dim,
+                 semantic_prior_path, mask_rate,
+                 window_sizes=[2, 3, 4, 5, 6, 7, 8, 9, 10]):
+        # helper modules (Transformer, EncodecEncoder/Decoder, RVQ_semantic,
+        # GPTDecoder, L1_loss, apply_mask, choice) are omitted for brevity
+        self.window_sizes = window_sizes
+        self.transformerEncoder = Transformer(transformerEncoder_args)
+        self.transformerDecoder = Transformer(transformerDecoder_args)
+        self.maeDecoder = Transformer(maeDecoder_args)
+        self.Patchify = EncodecEncoder(patchify_args)
+        self.UnPatchify = EncodecDecoder(patchify_args)
+        # learnable query (cls) token and mask token
+        self.cls_token = nn.Parameter(torch.zeros(1, 1, encoder_embedding_dim))
+        self.masked_token = nn.Parameter(torch.zeros(1, 1, decoder_embedding_dim))
+        checkpoint = torch.load(semantic_prior_path, map_location="cpu")
+        self.vq = RVQ_semantic(input_dim=encoder_embedding_dim,
+                               semantic_prior=checkpoint, layers=3)
+        self.depth_gpt = GPTDecoder(depth_gpt_args)
+        self.tmp_window_size = 6
+        self.mask_rate = mask_rate
+
+    def Encoder_token_Interleaving(self, x):
+        # append a learnable query (cls) token after every window of w frames
+        B, T, D = x.shape  # batch, length, dim
+        cls_tokens = self.cls_token.repeat(B, T // self.tmp_window_size, 1).unsqueeze(2)
+        x_reshaped = x.reshape(B, T // self.tmp_window_size, self.tmp_window_size, D)
+        x_with_cls = torch.cat([x_reshaped, cls_tokens], dim=2)
+        return x_with_cls.reshape(B, -1, D)
+
+    def Encoder_token_Retrieval(self, x):
+        # keep only the query (cls) token positions
+        B, new_T, D = x.shape
+        original_T = new_T - new_T // (self.tmp_window_size + 1)
+        cls_indices = [(i + 1) * (self.tmp_window_size + 1) - 1
+                       for i in range(original_T // self.tmp_window_size)]
+        return x[:, cls_indices, :]
+
+    def Decoder_token_Interleaving(self, en_token):
+        # place w mask tokens in front of each quantized query token
+        B, T, D = en_token.shape
+        masks = self.masked_token.repeat(B, T * self.tmp_window_size, 1)
+        masks = masks.reshape(B, T, self.tmp_window_size, D)
+        x_with_masks = torch.cat([masks, en_token.unsqueeze(2)], dim=2)
+        return x_with_masks.reshape(B, -1, D)
+
+    def Decoder_token_Retrieval(self, new_x):
+        # keep only the mask-token positions (drop the query tokens)
+        B, new_T, D = new_x.shape
+        num_queries = new_T // (self.tmp_window_size + 1)
+        query_indices = set((i + 1) * (self.tmp_window_size + 1) - 1
+                            for i in range(num_queries))
+        mask_indices = [i for i in range(new_T) if i not in query_indices]
+        return new_x[:, mask_indices, :]
+
+    def forward(self, x):
+        self.tmp_window_size = choice(self.window_sizes)
+        emb_frames = self.Patchify(x)
+        if self.training:
+            emb_frames_mask = self.apply_mask(emb_frames, mask_rate=self.mask_rate)
+        else:
+            emb_frames_mask = emb_frames
+        interleaved = self.Encoder_token_Interleaving(emb_frames_mask)
+        predict_latent = self.maeDecoder(interleaved)
+        mae_loss = L1_loss(predict_latent, emb_frames)
+        latent_tokens = self.transformerEncoder(interleaved)
+        query_tokens = self.Encoder_token_Retrieval(latent_tokens)
+        quantized_token, codes, all_quantized = self.vq(query_tokens)
+        cat_quantized = []
+        for q_emb in all_quantized:
+            cat_quantized.append(q_emb.reshape(-1, q_emb.shape[-1]).unsqueeze(1))
+        cat_quantized = torch.cat(cat_quantized, dim=1)
+        gpt_loss = self.depth_gpt.compute_prior_loss(cat_quantized)
+        de_interleaved = self.Decoder_token_Interleaving(quantized_token)
+        de_latent = self.transformerDecoder(de_interleaved)
+        mask_tokens = self.Decoder_token_Retrieval(de_latent)
+        x_hat = self.UnPatchify(mask_tokens)
+        return x_hat, mae_loss, gpt_loss
+```
+
+# I. Limitation
+
+In this study, we present ALMTokenizer, a low-bitrate, semantic-rich audio codec tokenizer. We demonstrate that ALMTokenizer excels in both reconstruction and semantic information retention under low-bitrate conditions. However, we acknowledge that there is still significant room for improvement in reconstruction performance, particularly for sound and music data; building an audio tokenizer for sound and music in the low-bitrate setting poses additional challenges. In terms of semantic information, ALMTokenizer still lags behind traditional SSL models. Although we propose several training losses to enhance semantic information in the codec model, the improvements are limited and, in some cases, negatively impact reconstruction quality. We recognize the need for careful design and balancing of these semantic loss terms. Additionally, the multi-stage training strategy increases training complexity and is wasteful: most of the auxiliary components, e.g., the MAE-transformer encoder/decoder, the MAE decoder, and the depth AR-transformer, are eventually discarded. These components could still be put to use, e.g., the AR decoder could initialize the depth transformer for the language modeling task. These concerns are left for future work.
\ No newline at end of file
diff --git a/almtokenizeralowbitrateandsemanticrichaudiocodectokenizerforaudiolanguagemodeling/images.zip b/almtokenizeralowbitrateandsemanticrichaudiocodectokenizerforaudiolanguagemodeling/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..fa8c4e96448bbe7e31a66090a865fb787bcb293e
--- /dev/null
+++ b/almtokenizeralowbitrateandsemanticrichaudiocodectokenizerforaudiolanguagemodeling/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2de90bafe3a5d5033a1ce855c12127cf317ed36b51c448bccf2b110a9af95e86
+size 891932
diff --git a/almtokenizeralowbitrateandsemanticrichaudiocodectokenizerforaudiolanguagemodeling/layout.json b/almtokenizeralowbitrateandsemanticrichaudiocodectokenizerforaudiolanguagemodeling/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..65120dd14e34302c794909ee5bba861ed3b6fbfc
--- /dev/null
+++ b/almtokenizeralowbitrateandsemanticrichaudiocodectokenizerforaudiolanguagemodeling/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:77b7dae3c984dd60b0ac50d1d6a26897db98589eba10b643b58cc3265f22a42f
+size 653403
diff --git a/amodelofplacefieldreorganizationduringrewardmaximization/a3169d99-2a7b-4cdd-83b8-70a53f1cdac7_content_list.json b/amodelofplacefieldreorganizationduringrewardmaximization/a3169d99-2a7b-4cdd-83b8-70a53f1cdac7_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..84a68015247df7e6cc21787099e738d6cb4573de
--- /dev/null
+++ b/amodelofplacefieldreorganizationduringrewardmaximization/a3169d99-2a7b-4cdd-83b8-70a53f1cdac7_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d88a0b5c0b46db3aff1dae99508be5f67fb9f575d6c6ef9bda58789997c32b79
+size 287330
diff --git a/amodelofplacefieldreorganizationduringrewardmaximization/a3169d99-2a7b-4cdd-83b8-70a53f1cdac7_model.json b/amodelofplacefieldreorganizationduringrewardmaximization/a3169d99-2a7b-4cdd-83b8-70a53f1cdac7_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..2ace681aeeb96011e51c538e8eaf2d8dae486d9a
--- /dev/null
+++ b/amodelofplacefieldreorganizationduringrewardmaximization/a3169d99-2a7b-4cdd-83b8-70a53f1cdac7_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ccc573a10e9c3f4cfefa674893d666d03e4194834c65dcc574f09a8688728677
+size 311160
diff --git a/amodelofplacefieldreorganizationduringrewardmaximization/a3169d99-2a7b-4cdd-83b8-70a53f1cdac7_origin.pdf b/amodelofplacefieldreorganizationduringrewardmaximization/a3169d99-2a7b-4cdd-83b8-70a53f1cdac7_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..2f26a1bddcefe3b5fdd3c7f3982fc2e5026be9fb
--- /dev/null
+++ b/amodelofplacefieldreorganizationduringrewardmaximization/a3169d99-2a7b-4cdd-83b8-70a53f1cdac7_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:39862085dcce3a33f9bc9978d0425b80b99dafe2605697a2afc791b5274931b0
+size 11823960
diff --git a/amodelofplacefieldreorganizationduringrewardmaximization/full.md b/amodelofplacefieldreorganizationduringrewardmaximization/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..5a828f635685f0d763f8c0efdd8d2dece58e9097
--- /dev/null
+++ b/amodelofplacefieldreorganizationduringrewardmaximization/full.md
@@ -0,0 +1,1391 @@
+# A Model of Place Field Reorganization During Reward Maximization
+
+M Ganesh Kumar $^{123}$ Blake Bordelon $^{123}$ Jacob A. Zavatone-Veth $^{24}$ Cengiz Pehlevan $^{123}$
+
+# Abstract
+
+When rodents learn to navigate in a novel environment, a high density of place fields emerges at reward locations, fields elongate against the trajectory, and individual fields change spatial selectivity while demonstrating stable behavior. Why place fields demonstrate these characteristic phenomena during learning remains elusive. We develop a normative framework using a reward maximization objective, whereby the temporal difference (TD) error drives place field reorganization to improve policy learning. Place fields are modeled using Gaussian radial basis functions to represent states in an environment, and directly synapse to an actor-critic for policy learning. Each field's amplitude, center, and width, as well as downstream weights, are updated online at each time step to maximize rewards. We demonstrate that this framework unifies three disparate phenomena observed in navigation experiments. Furthermore, we show that these place field phenomena improve policy convergence when learning to navigate to a single target and relearning multiple new targets. To conclude, we develop a simple normative model that recapitulates several aspects of hippocampal place field learning dynamics and unifies mechanisms to offer testable predictions for future experiments.
+
+# 1. Introduction
+
+A place field is canonically described as a localized region in an environment where a hippocampal neuron's firing rate is maximal and robust across trials (O'Keefe & Dostrovsky, 1971; O'Keefe, 1978). Classically, each neuron has a unique
+
+1 The John A. Paulson School of Engineering and Applied Sciences, Harvard University 2 Center for Brain Science, Harvard University 3 The Kempner Institute for the Study of Natural and Artificial Intelligence, Harvard University 4 Society of Fellows, Harvard University. Correspondence to: M Ganesh Kumar .
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+spatial receptive field such that the population activity can describe an animal's allocentric position within the environment (Moser et al., 2015). Ablation studies demonstrate that the hippocampal representation is useful for learning to navigate to new targets (Morris et al., 1982; Packard & McGaugh, 1996; Steele & Morris, 1999). Importantly, each field's spatial selectivity evolves with experience in a new environment before stabilizing in the later stages of learning (Frank et al., 2004). Specifically, a high density of place fields emerge at reward locations (Gauthier & Tank, 2018; Lee et al., 2020; Sosa et al., 2023), place fields elongate backward against the trajectory (Mehta et al., 1997; Priestley et al., 2022), and individual field's spatial selectivity continues to change or "drift" even when animals demonstrate stable behavior (Geva et al., 2023; Krishnan & Sheffield, 2023; Kentros et al., 2004; Mankin et al., 2012; Ziv et al., 2013). Although disparate mechanisms have been proposed to model these phenomena, a framework that can unify these phenomena and clarify their computational role remains elusive. Here, we propose a single normative model for spatial representation learning, based on the hippocampal CA1 given its role in representing salient spatial information (Dong et al., 2021; Dupret et al., 2010). Our key contributions are:
+
+- We develop a two-layered reinforcement learning model to study spatial representation learning by place fields (Fig.1A). The first layer is a population of Gaussian radial basis functions that transform continuous spatial information into a relevant representational substrate or "state", which feed into an actor-critic network in the second layer that uses these representations to learn actions that maximize cumulative discounted reward. Besides the actor and critic weights, each place field's amplitude, center and width is optimized by the Temporal Difference error.
+- Our model recapitulates three experimentally-observed neural phenomena during task learning: (1) the emergence of high place field density at rewards, (2) elongation of fields against the trajectory, and (3) drifting fields that do not affect task performance.
+- We analyze the factors that influence these representational changes: a low number of fields drives greater spatial representation learning, the mean population firing rate reflects the value of that location, and increasing noise magnitude during field parameter updates causes a monotonic decrease in population vector correlation but non-monotonic change in behavior.
+
+- We demonstrate that optimizing place field widths and amplitudes enhances reward maximization and policy convergence. However, field parameter optimization alone is insufficient for learning to navigate to new targets. Introducing noisy field parameter updates improves new target learning, suggesting a functional role for noise.
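The first-layer representation described above, a population of Gaussian radial basis place fields each with its own amplitude, center, and width, can be sketched as follows; parameter names and shapes are illustrative, not the authors' code:

```python
import numpy as np

def place_field_activity(pos, centers, widths, amplitudes):
    """Firing of M Gaussian place fields at a D-dimensional position pos."""
    pos = np.atleast_1d(np.asarray(pos, dtype=np.float64))
    # squared distance from pos to each field center, shape (M,)
    d2 = ((pos[None, :] - centers) ** 2).sum(axis=1)
    return amplitudes * np.exp(-d2 / (2.0 * widths ** 2))
```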
+
+# 2. Related Works
+
+# Anatomically constrained architecture for navigation.
+
+Learning to navigate involves the hippocampus encoding spatial information and its strong glutamatergic projections to the striatum (Lisman & Grace, 2005; Floresco et al., 2001). The ventral and dorsal regions of the striatum are associated with value estimation and stimulus-response associations, functioning similarly to a critic and an actor, respectively (Niv, 2009; Joel et al., 2002; Houk et al., 1994). Additionally, dopamine neurons in the Ventral Tegmental Area influence plasticity in the striatal synapses (Reynolds et al., 2001; Russo & Nestler, 2013). This anatomical insight has led to the design of a biologically plausible navigation model, where place fields connect directly to an actor-critic framework, and synapses are modulated by the TD error (Arleo & Gerstner, 2000; Foster et al., 2000; Frémaux et al., 2013; Brown & Sharp, 1995; Kumar et al., 2022). Recent evidence shows direct dopaminergic projections to the hippocampus that modulate place cell activity, strengthening the case for navigation models with adaptive place fields (Palacios-Filardo & Mellor, 2019; Krishnan et al., 2022; Kempadoo et al., 2016; Sayegh et al., 2024). How upstream information from the entorhinal cortex influences place field representations for policy learning remains unclear (Fiete et al., 2008; Bush et al., 2015). As new experiments challenge the canonical definition that a place cell only has one place field (Eliav et al., 2021), we study spatial representational learning using Gaussian place fields instead of place cells.
+
+Field density increases near reward locations. Density traditionally refers to the number of field centers of mass in a location. However, we also consider changes in the mean population firing rate, which includes variations in each field's width and amplitude. As animals learn to navigate towards a reward, a high density of place fields emerge at reward locations (Gauthier & Tank, 2018; Lee et al., 2020; Sosa et al., 2023). Reward location based reorganization was observed in hippocampal CA1 and not in CA3 (Dupret et al., 2010). Interestingly, a recent study showed that place fields initially coding the reward location shifted backwards against the trajectory causing a decrease in reward coding fields, suggesting of a representation predictively coding for reward (Yaghoubi et al., 2024).
+
+Fields learn to encode future occupancy. As animals traverse a 1D track, most CA1 fields increase in size and their centers of mass shift backwards against the trajectory of motion (Mehta et al., 1997; Frank et al., 2004; Priestley et al., 2022). A proposed explanation is that fields initially coding for location $x_{t}$ learn to also fire at the previous location $x_{t-1}$, hence predictively coding for location occupancy $p(x_{t+1}|x_{t})$ (Mehta et al., 2000; Stachenfeld et al., 2017). While algorithms such as the successor representation (Dayan, 1993) learn to predict the transition structure (Gershman, 2018), the representation depends on a predefined navigation policy. Hence, a complete normative argument, one that includes policy learning, for why fields exhibit this behavior is still lacking.
+
+Fields drift during stable behavior. After animals reach a certain performance criterion in navigating to a reward location, the spatial selectivity of individual place fields changes across days, even though animals exhibit stable behavior (Kentros et al., 2004; Mankin et al., 2012; Ziv et al., 2013; Geva et al., 2023; de Snoo et al., 2023). A proposal is that these fields continue to drift within a degenerate solution space while the overall representational manifold or the chosen performance metric remains stable (Qin et al., 2023; Pashakhanloo & Koulakov, 2023; Masset et al., 2022; Kappel et al., 2015; Rokni et al., 2007). Another proposal is that compensatory synaptic plasticity adjusts the readout to maintain stable decoding over time (Rule et al., 2020; Rule & O'Leary, 2022). However, a model that demonstrates stable navigation learning behavior with drifting fields is absent, and the functional role of drift remains unclear.
+
+# 3. Task and Model setup
+
+Most navigational experiments involve an animal moving from a start location to a target location to receive a reward, either on a one-dimensional (1D) track or in a two-dimensional (2D) arena. Similarly, our agents receive their true position at every time step $(t)$, described by the variable $x_{t}$ (a scalar in 1D, a vector in 2D), and have to learn a policy $(\pi)$ that specifies the actions to take $(g_{t})$ to move from a start location (e.g. $x_{start} = -0.75$, Fig. 1A green dash) to a target with reward values following a Gaussian distribution $(x_{r} = 0.5$, $\sigma_{r} = 0.05$, Fig. 1A red area). The agent outputs a one-hot vector $g_{t}$ (left or right in 1D; left, right, up or down in 2D), which makes its motion discrete, similar to a trajectory in a grid world. To model smooth trajectories in a continuous space, as in an animal's behavior (Foster et al., 2000; Frémaux et al., 2013; Kumar et al., 2022; 2024), we use a low-pass filter to smooth $g_{t}$ using a constant $\alpha_{env} = 0.2$ after scaling by the maximum displacement $v_{max} = 0.1$:
+
+$$
+x_{t+1} = x_{t} + \bar{g}_{t}, \qquad \bar{g}_{t+1} = (1 - \alpha_{env}) \bar{g}_{t} + \alpha_{env} v_{\max} g_{t}. \tag{1}
+$$
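Equation (1) can be stepped directly; below is a minimal sketch, where the signed scalar `g` stands in for the displacement decoded from the one-hot action (an assumption for the 1D case):

```python
def step_position(x, g_bar, g, alpha_env=0.2, v_max=0.1):
    """One step of the smoothed motion model (Eq. 1).

    x     : current position (scalar in 1D)
    g_bar : low-pass filtered action from the previous step
    g     : signed displacement decoded from the sampled one-hot action,
            e.g. +1 (right) or -1 (left) on a 1D track
    """
    x_next = x + g_bar
    g_bar_next = (1 - alpha_env) * g_bar + alpha_env * v_max * g
    return x_next, g_bar_next

# Holding the action fixed, the filtered velocity saturates at v_max
x, g_bar = -0.75, 0.0
for _ in range(100):
    x, g_bar = step_position(x, g_bar, 1.0)
```

Because the filter is a geometric average, repeatedly choosing the same action drives the effective velocity $\bar{g}_t$ toward $v_{max}$, which is what produces the smooth, animal-like trajectories.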
+
+To track an agent's reward maximization performance during navigational learning, we compute the cumulative discounted reward $(G = \sum_{t=0}^{T} \gamma^{t} r_{t})$ over the entire trajectory of each trial, using $\gamma = 0.9$ as the discount factor; this is similar to tracking the cumulative reward. The agent will continue to receive rewards by staying at the target. Hence, the trial is terminated either when the maximum trial time $T_{max}$ is reached or when the total reward achieved $\sum_{t=0}^{T} r_t$ reaches a threshold $R_{max}$.
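As a sketch of the performance metric and the termination rule, assuming placeholder values for `T_max` and `R_max` (the paper's own values are not restated here):

```python
def discounted_return(rewards, gamma=0.9):
    """Cumulative discounted reward G = sum_t gamma^t * r_t for one trial."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

def trial_done(rewards, t, T_max=1000, R_max=5.0):
    """Terminate at the maximum trial time T_max, or once the total
    (undiscounted) reward reaches the threshold R_max.
    T_max and R_max here are illustrative placeholders."""
    return t >= T_max or sum(rewards) >= R_max
```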
+
+# 3.1. Place fields as spatial features
+
+The agent represents space through $N$ place fields, which have spatial selectivity modeled as simple Gaussian bumps:
+
+$$
+\phi_{i}\left(x_{t}\right) = \alpha_{i}^{2} \exp\left(-\left\|x_{t} - \lambda_{i}\right\|_{2}^{2} / 2\sigma_{i}^{2}\right), \tag{2}
+$$
+
+where $\alpha$, $\lambda$ and $\sigma$ set the amplitude, center, and width respectively. Two types of place field distributions were initialized to tile the environment: (1) a homogeneous population with constant amplitudes $\alpha_{i} = 0.5$, constant widths $\sigma_{i} = 0.1$, and centers uniformly tiling the environment $\lambda = [-1,\dots,1]$ (Foster et al., 2000; Frémaux et al., 2013; Kumar et al., 2022; 2024); (2) a heterogeneous population with amplitudes, widths and centers drawn from uniform random distributions over $[0,1]$, $[10^{-5},0.1]$, and $[-1,1]$ respectively. These ranges are consistent with experimental data where place fields were $20~\mathrm{cm}$ to $50~\mathrm{cm}$ wide in small to medium environments (Lee et al., 2020; Frank et al., 2004; Mehta et al., 1997; Sosa et al., 2023). 2D place fields have scalar amplitudes, two-dimensional vectors for the center, and square covariance matrices for the width (Menache et al., 2005). Refer to App. A for further details.
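The feature layer of Eq. (2), with the homogeneous 1D initialization described above, can be sketched as:

```python
import numpy as np

def place_field_activity(x, alphas, lambdas, sigmas):
    """Gaussian place field features (Eq. 2) at a 1D position x:
    phi_i(x) = alpha_i^2 * exp(-(x - lambda_i)^2 / (2 sigma_i^2))."""
    return alphas ** 2 * np.exp(-((x - lambdas) ** 2) / (2 * sigmas ** 2))

# Homogeneous population: constant amplitudes/widths, centers tiling [-1, 1]
N = 16
alphas = np.full(N, 0.5)
sigmas = np.full(N, 0.1)
lambdas = np.linspace(-1.0, 1.0, N)
phi = place_field_activity(0.5, alphas, lambdas, sigmas)  # activity at the target
```

The peak response of field $i$ is $\alpha_i^2$ at its center $\lambda_i$, so the field nearest the query position dominates the feature vector.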
+
+# 3.2. Policy learning using an Actor-Critic
+
+To model an animal's trial-and-error based learning behavior, we adopt the reinforcement learning framework, specifically the actor-critic (Arleo & Gerstner, 2000; Brown & Sharp, 1995; Foster et al., 2000; Frémaux et al., 2013; Kumar et al., 2022; 2024). The critic linearly weights place field activity using a vector $w_{i}^{v}$ to estimate the value of the current location:
+
+$$
+v\left(x_{t}\right) = \sum_{i}^{N} w_{i}^{v} \phi_{i}\left(x_{t}\right). \tag{3}
+$$
+
+The value of a location corresponds to the expected cumulative discounted reward for that location. The actor has $M$ units, each specifying a movement direction. In the 1D and 2D environments, $M = 2$ and $M = 4$ respectively to code for opposing directions in each dimension e.g. left versus right and up versus down. Each actor unit $a_{j}$ linearly weights the place field activity such that the matrix $W_{ji}^{\pi}$ computes the preference for moving in the $j$ -th direction:
+
+$$
+a_{j}\left(x_{t}\right) = \sum_{i}^{N} W_{ji}^{\pi} \phi_{i}\left(x_{t}\right), \quad P_{j} = \frac{\exp\left(a_{j}\right)}{\sum_{k}^{M} \exp\left(a_{k}\right)}, \tag{4}
+$$
+
+with the probability of taking an action computed using a softmax. A one-hot vector $g_{j}$ is sampled from the action probability distribution $P$ as in Foster et al. (2000), making this policy stochastic. $w_{i}^{v}$ and $W_{ji}^{\pi}$ were initialized by sampling from a normal distribution $\mathcal{N}(0,10^{-5})$, with other initializations (uniform or all zeros) reproducing similar results.
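A minimal sketch of the linear critic and actor readouts (Eqs. 3-4) with softmax action sampling; reading $\mathcal{N}(0,10^{-5})$ as variance $10^{-5}$ is an assumption here:

```python
import numpy as np

rng = np.random.default_rng(0)

def critic_value(phi, w_v):
    """Linear value estimate v(x) = sum_i w_v[i] * phi[i] (Eq. 3)."""
    return phi @ w_v

def actor_probs(phi, W_pi):
    """Action preferences a_j and softmax probabilities P_j (Eq. 4)."""
    a = W_pi @ phi
    e = np.exp(a - a.max())  # subtract max for numerical stability
    return e / e.sum()

def sample_one_hot(P):
    """Sample a one-hot action vector g from the policy distribution P."""
    g = np.zeros(len(P))
    g[rng.choice(len(P), p=P)] = 1.0
    return g

N, M = 16, 2                             # 1D track: left vs. right
phi = rng.random(N)
w_v = rng.normal(0.0, np.sqrt(1e-5), N)  # N(0, 1e-5) read as variance
W_pi = rng.normal(0.0, np.sqrt(1e-5), (M, N))
P = actor_probs(phi, W_pi)
g = sample_one_hot(P)
```

With near-zero initial weights the softmax is close to uniform, so early trials are essentially random exploration.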
+
+# 3.3. Reward Maximization Learning Objective
+
+The objective of our agent is to learn a policy $\pi$ that maximizes the expected cumulative discounted reward $\mathcal{J}^G = \mathbb{E}_{\tau}[G(\tau)] = \mathbb{E}_{\tau}\left[\sum_{t=0}^{T} \gamma^t r_t\right]$ over all trajectories $\tau$. To achieve this goal, our agent uses the standard actor-critic algorithm with the temporal difference (TD) residual (refer to App. A):
+
+$$
+\mathcal{J}^{TD} = \mathbb{E}_{\tau}\left[\delta_{t}\right], \qquad \delta_{t} = r_{t} + \gamma v\left(x_{t+1}\right) - v\left(x_{t}\right), \tag{5}
+$$
+
+which reduces variance and speeds up policy convergence (Sutton & Barto, 2018; Dayan & Abbott, 2005; Wang et al., 2018; Schulman et al., 2017; Mnih et al., 2016). The TD residual is also biologically relevant, as the responses of midbrain dopamine neurons resemble the TD reward prediction error (Schultz et al., 1997; Starkweather & Uchida, 2021; Gershman & Uchida, 2019; Amo et al., 2022; Montague et al., 1996). The actor learns a reward-maximizing policy by ascending the gradient of the policy log likelihood, modulated by the TD residual. To accurately estimate the value function and critique policy learning using the TD error, the critic minimizes the squared TD error $\mathcal{L} = \mathbb{E}_{\tau}\left[\frac{1}{2}\delta_t^2\right]$.
+
+As our agent uses a single population of place fields, these fields must learn spatial features that enhance both policy and value learning. The field parameters $\theta = \{\alpha, \lambda, \sigma\}$ and the policy weights $W^{\pi}$ , $w^{v}$ are updated by gradient ascent using a joint objective modified from Wang et al. (2018):
+
+$$
+\nabla_{\theta, W^{\pi}, w^{v}} \mathcal{J}^{TD} = \mathbb{E}\left[\sum_{t}^{T}\left(\nabla_{\theta, W^{\pi}} \log \pi\left(g_{t} \mid x_{t}\right) + \nabla_{\theta, w^{v}} v\left(x_{t}\right)\right) \cdot \delta_{t}\right], \tag{6}
+$$
+
+with $\nabla_{w^v}\mathcal{J}^{TD} = 0$ and $\nabla_{W^{\pi}}\mathcal{L} = 0$ . We estimate all parameter gradients online, and provide the explicit update equations for each parameter in App. A. The learning rates for the actor-critic and place field parameters can be the same (Fig. S13). For theoretical analysis, we assume a separation of timescales between learning the actor-critic weights and updating place field parameters (App. B).
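Restricted to the linear readouts (the place field parameter gradients of App. A are omitted here), the online updates implied by Eq. (6) reduce to standard TD actor-critic steps; a sketch under that simplification:

```python
import numpy as np

def td_actor_critic_step(phi_t, phi_t1, r, g, w_v, W_pi, gamma=0.9, eta=0.01):
    """One online update of the critic (w_v) and actor (W_pi) readouts.

    The TD residual delta_t = r + gamma*v(x_{t+1}) - v(x_t) modulates both
    updates; for a softmax over linear preferences, the log-policy gradient
    w.r.t. W_pi is (g - P) outer phi.
    """
    delta = r + gamma * (phi_t1 @ w_v) - (phi_t @ w_v)
    a = W_pi @ phi_t
    P = np.exp(a - a.max())
    P /= P.sum()
    w_v = w_v + eta * delta * phi_t                     # critic: semi-gradient TD
    W_pi = W_pi + eta * delta * np.outer(g - P, phi_t)  # actor: policy gradient
    return w_v, W_pi, delta
```

Note that both updates are modulated by the same scalar $\delta_t$, mirroring the dopamine-like global modulatory signal in Fig. 1A.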
+
+# 4. Results
+
+# 4.1. High field density emerges at the reward location
+
+We first examine the neural phenomenon where a high field density emerges at the reward location over the course of learning. Field density is defined by the distribution of place field center of mass (COM) (Lee et al., 2020), which we estimate using Gaussian kernel smoothing.
+
+Figure 1B shows how our agent's track occupancy $(p(x))$
+
+Figure 1. Fields shift towards and amplify at locations with high value. (A) The task is to navigate from the start (green dash) to the target (red area) to receive rewards. The agent has $N$ place fields (blue) which synapse to an actor (red) and critic (green). The TD error $\delta$ modulates all parameter updates. (B) Example change in field centers for an agent on a 1D track when only optimizing field centers $(\Delta \lambda)$ . Initially $(T = 50)$ , the agent spends a high proportion of time $(p_{RM}(x)$ , black) at the start location while place fields are uniformly distributed. After some learning trials $(T = 1200)$ , the agent still spends a high proportion of time at the start but a high field density $(d(x))$ and mean firing rate $(f(x))$ emerges at the reward location. As learning proceeds, the agent spends a higher amount of time at the reward location but now field density and mean firing rate starts to increase at the start location $(T = 12000)$ . (Right) A high field density and mean firing rate emerge at the reward and start locations for 50 agents that were randomly initialized with heterogeneous populations and when all parameters $(\lambda, \alpha, \sigma, w^v, W^\pi)$ are updated using the same learning rate $(\eta = 0.0005)$ . (C) Example change in field centers for an agent in a 2D arena with a start (green), target (red), and obstacle (gray). In the early learning phase, field centers (black dots) near the target shift closer, causing a high number of field centers (goal representation) to aggregate at the reward (10 agents with random seeds, right). In the later learning phase, field centers align along the trajectory. The start, reward locations and radius for goal representation (G.R.) are marked by green, red and blue circles. 
(D) When initialized with a heterogeneous field population, the enhancement of average field density $(d(x))$ at the reward location $x_r$ compared to non-reward location $x'$ decreases as the number of fields increases. This density increases when the reward magnitude $(R_{max})$ increases, and reward location's size $(R_{size})$ decreases. (E) Example field dynamics when an agent $(N = 512)$ navigates a 1D track. Fields initialized before $(\lambda_i = 0.5,$ blue) and after $(\lambda_i = 0.6,$ orange) the target move forward and backward respectively, increasing the density near the target. (F) Fields closest to the reward $(\lambda_i = 0.5:$ blue) show rapid amplification compared to other fields $(\lambda_i = -0.75, 0.0:$ green and purple). The first order perturbative prediction (theory) provides a good approximation. Shaded area and error bars are $95\%$ CI over 50 seeds.
+
+field density $(d(x))$ , mean firing rate $(f(x))$ , and individual field's spatial selectivity $(\phi(x))$ change when learning to navigate in a 1D track from the start $x_{start} = -0.75$ to the target at $x_r = 0.5$ , when only optimizing place field centers $(\Delta \lambda)$ . In the early stages of learning ( $T = 50$ to $T = 1200$ ), the agent with a homogeneous place field population spends a higher proportion of time at the start location with only sporadic exploration towards the reward. Despite this behavior, a high field density and mean firing rate emerges at the target after a few trials. Individual fields at the reward location shift closer to the target, as seen in Sosa et al. (2023), in contrast to fields at non-rewarded locations. As learning progresses and the agent spends a higher proportion of time at the reward ( $T = 12000$ ), field density and mean firing
+
+rate at the start location also begin to rise, replicating the two-peaked field distribution in Gauthier & Tank (2018), with some fields shifting backwards from the reward location towards the start as in Yaghoubi et al. (2024). Figure 1B (right) also shows that a high field density at the reward location, followed by the start location, robustly emerges in agents initialized with a heterogeneous place field population, and when all the field parameters $(\lambda, \alpha, \sigma)$ as well as the actor $(W^{\pi})$ and critic $(w^{v})$ parameters are optimized using the same learning rate $(\eta = 0.0005)$.
+
+Similar field dynamics are observed in a 2D arena with an obstacle, where agents have to navigate to a target from a starting location (Fig. 1C). When optimizing all the place field parameters in a homogeneous population, a high field density rapidly emerges at the reward location to increase goal representation (number of COMs within a 0.25 unit radius from the target center), as seen in Dupret et al. (2010), followed by gradual reorganization of field density backward along the agent's trajectory.
+
+Increasing the number of fields in a heterogeneous place field population reduced the average density and mean firing rate that emerge near the reward location (Fig. 1D, Fig. S1). This is because, as the number of fields increases, the agent enters a weak feature learning regime (Fig. S4) in which feature learning does not contribute additional advantage. Conversely, the density and mean firing rate are proportional to the reward magnitude, and inversely proportional to the reward location width, as a narrower target might require higher discriminability for the agent to maximize rewards.
+
+To understand why place fields exhibit these dynamics, we perform a perturbative approximation to the place field parameter changes under TD learning updates (Menache et al., 2005; Bordelon et al., 2024). In this approximation, we assume that the change to the field parameters is small, controlled by the number of fields, and by the large separation between learning rates. Focusing on the place field centers, we derive in App. B the approximation where $\eta_{\lambda} = 0.0001$ is the learning rate for the field centers and $\eta = 0.01$ is the learning rate for the critic weights such that $\eta_{\lambda} \ll \eta$ :
+
+$$
+\lambda_{i}(t) - \lambda_{i}(0) \approx \frac{\eta_{\lambda}}{\eta}\left(\frac{2}{\sigma_{i}^{2}} + \frac{1}{\sigma_{x}^{2}}\right)^{-1} \left[\frac{\bar{\lambda} - \lambda_{i}(0)}{\sigma_{i}^{2}} + \frac{\bar{\mu}_{x} - \lambda_{i}(0)}{\sigma_{x}^{2}}\right] w_{v, i}^{2}(t). \tag{7}
+$$
+
+Under this approximation, each field's center shifts proportionally to the squared magnitude of the critic weights $(w_{v}^{2})$, implying that fields at locations with a high value will shift at a faster rate than those at locations with a low value. In addition to the value of a location, the agent's start location (modeled as a Gaussian with mean $\bar{\mu}_x = -0.75$ and spread $\sigma_x$) and the mean field center location $\bar{\lambda}$ over time under the policy influence each field's displacement. As the reward location is visited frequently, we expect $\bar{\lambda} \approx 0.5$. As the term within the square bracket changes sign depending on the field location, only the fields near the reward location will shift towards the reward, while the rest of the fields will move towards the start location. Due to these influences, the field density at the reward location will increase first, followed by a gradual increase at the start location (Fig. 1B,E). Additional approximations are needed to model the agent's trajectory and improve the simulation-theory fit for place field centers (see App. B).
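Eq. (7) can be evaluated directly. In this sketch, the start spread `sigma_x` and the critic weight `w_vi` are placeholder values, since both are learned or fitted quantities in the paper:

```python
def predicted_center_shift(lam0, w_vi, sigma_i=0.1, lam_bar=0.5, mu_x=-0.75,
                           sigma_x=0.3, eta_lam=0.0001, eta=0.01):
    """First-order perturbative prediction of a field-center shift (Eq. 7).

    The sign of the bracketed term determines whether field i drifts toward
    the mean field center (lam_bar, near the reward) or the start (mu_x);
    the squared critic weight w_vi sets the rate. sigma_x and w_vi are
    placeholder values here, not fitted quantities.
    """
    prefactor = (eta_lam / eta) / (2.0 / sigma_i ** 2 + 1.0 / sigma_x ** 2)
    bracket = (lam0 * 0 + (lam_bar - lam0) / sigma_i ** 2
               + (mu_x - lam0) / sigma_x ** 2)
    return prefactor * bracket * w_vi ** 2
```

The key qualitative property is the $w_{v,i}^2$ factor: doubling a field's critic weight quadruples its predicted displacement, so high-value locations reorganize first.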
+
+A similar perturbative analysis for amplitude yields:
+
+$$
+\alpha_{i}(t) - \alpha_{i}(0) \approx 2 \frac{\eta_{\alpha}}{\eta} w_{v, i}^{2}(t), \tag{8}
+$$
+
+when $\eta_{\alpha} \ll \eta$ and where $\eta_{\alpha} = 0.0001$ is the learning rate for the $\alpha$ parameters. Thus, place fields will be amplified at a rate similar to learning the value function, causing fields at the reward location to be amplified first, followed by the start location (Fig. 1F, Fig. S1). Therefore, these approximations predict fields shifting to the start and reward locations, with field amplification at locations of high value.
+
+# 4.2. Fields elongate against the movement direction
+
+We now turn to the next phenomenon where place field sizes increase and their centers shift backward against the movement direction as animals learn to navigate. This behavior suggests predictive coding for future occupancy, which can be learned through Hebbian association of fields (Mehta et al., 2000), or through the successor representation (SR) algorithm, which minimizes state prediction error for each place field to learn the transition probabilities (Stachenfeld et al., 2017). Here, we show that our proposed Reward Maximizing (RM) agent also recapitulates field elongation in a 1D track and 2D arena around an obstacle.
+
+For comparison, we developed two agents: A) an SR agent that learns the transition probabilities in parallel to policy learning (Fig. 2A). The SR agent has a similar architecture to our (RM) agent (Fig. 1A), with two key differences: 1) It has one set of place fields with fixed parameters, and only the synapses from these place fields to the actor-critic are optimized for policy learning. 2) There is a separate set of $N$ successor place fields $\psi(x)$ that receive input from the fixed place fields via synapses $U$ which are optimized using the SR algorithm (App. C). We compare the learned successor place fields to the learned place fields in our RM model, referring to them henceforth as place fields. B) a Metric Representation (MR) agent (Fig. 2B) that estimates its current coordinates in an environment $(z_t)$ . This metric representation enables navigation to recalled targets by vector subtraction (Foster et al., 2000; Kumar et al., 2024). The coordinate readout weights and place field parameters are updated by gradient descent to minimize the path integration-derived TD error $\mathcal{L}^{MR} = \mathbb{E}\left[\frac{1}{2}(z_{t+1} - (z_t + a_t))^2\right]$ , while only the actor and critic readout weights are updated to learn a policy. This objective allows place fields $\phi(x)$ to reorganize even in the absence of rewards. See App. D for the derivation.
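For reference, the successor weights $U$ of the SR agent can be learned with the standard feature-based SR temporal-difference rule; this is a sketch, with App. C being the authoritative form:

```python
import numpy as np

def sr_update(U, phi_t, phi_t1, gamma=0.9, eta_sr=0.01):
    """One TD update of the successor weights U, where the successor fields
    are psi(x) = U @ phi(x). The target phi_t + gamma*psi(x_{t+1})
    bootstraps the expected discounted future feature occupancy, so the
    learned psi depends on the current navigation policy."""
    sr_error = phi_t + gamma * (U @ phi_t1) - (U @ phi_t)
    return U + eta_sr * np.outer(sr_error, phi_t)
```

Because the transitions are generated by the agent's own policy, the successor fields track occupancy under that policy, which is exactly the policy-dependence noted in Section 2.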
+
+All three agents (SR, MR, RM) recapitulate the phenomena seen in Mehta et al. (1997): on average, place fields increase in size over learning (Fig. 2C), and the centers of mass (COM) shift backwards from their initialized positions (Fig. 2D). However, the place field dynamics evolve differently. All agents initially spend a high proportion of time at the start location and gradually learn a policy to spend a higher proportion of time at the reward (Fig. 2E, Fig. S5E).
+
+The SR, by design, tracks the transition probabilities of the agent's policy. Consequently, the SR population mean firing rate $f_{\psi}(x)$ closely aligns with the agent's probability of being in a location $p_{SR}$ , showing a high positive correlation (Fig. 2F, blue). Since the MR representation is modulated
+
+Figure 2. Different objectives cause fields to elongate against the trajectory, but with different dynamics. (A) Successor Representation agent architecture with successor fields $(\psi)$ as a separate set of features that are learned in parallel to navigational learning. (B) Metric Representation agent architecture where the path-integration derived temporal difference error drives place field reorganization, while policy learning occurs in parallel. (C-D) The Reward Maximization (RM), Successor Representation (SR) and Metric Representation (MR) algorithms cause (C) field sizes to increase and (D) centers of mass to shift backwards against the trajectory in a 1D track. Field changes were normalized separately to be between 0 to 1 for visualization. (E) All agents initially spend a high proportion of time at the start location, and later learn to dwell at the target (black). Individual SR fields and mean firing rate (red) closely track the proportion of time the agent spends in a location (top). MR fields reorganize only at the start location (middle). Conversely, individual RM fields and mean firing rate show an inverse relationship against the proportion of time the RM agent spends at a location in the early learning phase, but start to align in the later phases (bottom). (F) SR agents show a consistently high, positive correlation (blue) between mean firing rate and proportion of time spent in a location. MR agents show a non-monotonic increase in correlation (green). Conversely, the RM agents' mean firing rate and time spent at a location become anti-correlated before becoming positively correlated (orange). (G) The SR and RM mean firing rates (blue) become anti-correlated before becoming positively correlated at the later learning phase, while the SR and MR fields align momentarily before de-correlating (orange), and the RM and MR fields become anti-correlated (green).
(H) Example change in field selectivity by SR (top), MR (middle), and RM (bottom) agents in a 2D arena with an obstacle. The RM agent's field elongation is more pronounced than the SR and MR agents. Summary statistics in Fig. S6. Shaded area is $95\%$ CI over 10 seeds.
+
+by the agent's displacement $a_{t}$ , fields reorganize more at the start location since displacement is nonzero, causing a higher mean firing rate. Conversely, displacement becomes zero at the reward location as the agent comes to a stop to maximize rewards, causing low field reorganization at the reward. Hence, MR fields $f_{\phi_{MR}}(x)$ become positively correlated with $p_{MR}$ at the start location, but do not fully align with the agent's time spent at the reward location (Fig. 2F, green). Conversely, during early learning, the RM agent exhibits a high population mean firing rate $f_{\phi_{RM}}(x)$ at the reward location, which contrasts sharply with the proportion of time spent at that location, leading to a highly negative correlation between $f_{\phi_{RM}}(x)$ and $p_{RM}$ (Fig. 2F, orange). Interestingly, in the later phase of learning, $f_{\phi_{RM}}(x)$ and $p_{RM}$ become positively correlated.
+
+The mean firing rates learned by the SR and RM agents become negatively correlated during the early learning phase but positively correlated at the later learning phase (Fig. 2G, blue). Conversely, the mean firing rate correlation decreases monotonically towards zero for the MR and RM agents (Fig. 2G, green), while the correlation between SR and MR increases due to the alignment at the start location in the early learning phase before becoming uncorrelated in the later learning phase. A similar change in correlation is observed when comparing the individual field selectivity and the spatial representation similarity matrix (Fig. S5F,G). Hence, while all three algorithms demonstrate qualitatively similar neural phenomena, the dynamics of learning these representations are different, with SR and RM agents eventually learning similar spatial representations.
+
+In a 2D arena with an obstacle, the three agents show field elongation against the movement direction (Fig. 2H) while also accounting for the blockage of the path by the obstacle. The RM agent shows a significantly larger elongation of fields, spanning the entire corridor, while the elongation of fields by the SR agent is subtle and field elongation by the MR agent is pronounced only at the start location. See Fig. S6 for summary statistics.
+
+# 4.3. Stable navigation behavior with drifting fields
+
+The third phenomenon that the model captures has been described as representational drift, where the agent demonstrates stable behavior but the spatial selectivity of individual place fields changes over time (Fig. 3A, Fig. S8G), as seen in Ziv et al. (2013). Although our agent uses a stochastic policy, both the navigation behavior (Fig. 3E, blue) and the population vector (PV) correlation (Fig. 3C, blue) are extremely stable.
+
+To drive larger variability in the representation, we introduced Gaussian noise $(\xi_{t})$ of varying magnitudes during place field parameter updates at every time step $(\theta_{t + 1} = \theta_t + \xi_t$, see App. E). Increasing the noise magnitude led to a faster decrease in PV correlation but also disrupted agents' policy convergence for magnitudes greater than $10^{-3}$ (Fig. 3E purple, Fig. S7). Hence, we consider noise magnitudes between $10^{-4}$ and $10^{-3}$ to be relevant. As the noise magnitude increases, the agents' reward maximization behavior remains stable (Fig. 3E) while the PV correlation decreases rapidly (Fig. 3C). This demonstrates that agents can optimize their policies to maintain stable behavior even though individual spatial selectivity is changing. Interestingly, the spatial representation similarity matrix remains more stable than the PV correlation (Fig. 3B), even at higher noise magnitudes (Fig. 3D), and even though the agents are not explicitly optimizing for representational similarity (Qin et al., 2023).
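The noisy parameter update and the population-vector comparison can be sketched as follows; the explicit gradient term and the map shapes are illustrative additions, not the paper's exact pipeline (App. E):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_param_update(theta, grad, eta=0.0005, sigma_noise=1e-4):
    """Field-parameter step with additive Gaussian noise xi_t,
    theta_{t+1} = theta_t + eta*grad + xi_t. The gradient term is shown
    for context; the paper's drift manipulation is the additive xi_t."""
    return theta + eta * grad + rng.normal(0.0, sigma_noise, size=theta.shape)

def pv_correlation(map_a, map_b):
    """Population-vector correlation between two (fields x spatial bins)
    activity maps, computed over the flattened maps."""
    return np.corrcoef(map_a.ravel(), map_b.ravel())[0, 1]
```

Tracking `pv_correlation` between a reference trial's activity map and later trials gives the $R_{PV}$ curve, while the agent's return $G$ is monitored in parallel to confirm behavior stays stable.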
+
+Unlike noisy field parameter updates, adding noise to the actor and critic synapses $(w_{t + 1}^v = w_t^v +\xi_t^v$ and $W_{t + 1}^{\pi} = W_{t}^{\pi} + \xi_{t}^{\pi})$ caused the reward maximization behavior, representation similarity and PV correlation to change at similar rates (Fig. S7), which is not as consistent with experiments (See Fig. S9 for comparisons to data).
+
+We quantified this drifting behavior at the level of individual neurons by summing the normalized (between [0, 1]) variance in each field's parameters $(\sum Var(\tilde{\theta}) = Var(\tilde{\alpha}) + Var(\tilde{\lambda}) + Var(\tilde{\sigma}))$ across learning trials, and comparing this against the mean amplitude of each field. When no Gaussian noise is added (Fig. 3F, blue), fields with a higher mean amplitude showed a higher variance in their parameters, which is expected since fields with a higher amplitude are more likely to be involved in policy learning. Conversely, with a small Gaussian noise, we see the opposite trend, where fields with a smaller mean amplitude showed a higher variance in parameters while fields with a higher mean amplitude were more stable (Fig. 3F, orange). At smaller noise magnitudes, there is a strong positive correlation between higher amplitude fields and the magnitude of the actor and critic readout weights (Fig. S8). This suggests that high-amplitude fields are more involved in policy learning and thus more stable, whereas less important fields can alter
+
+Figure 3. Stable representation similarity and anchor fields facilitate consistent behavior. (A) Injecting Gaussian noise with a magnitude $\sigma_{noise} = 0.0001$ into field parameters causes individual fields' spatial selectivity to change across trials while (B) the representation similarity matrix (dot product of population activity) remains stable. (C) Injecting higher noise magnitudes ( $\sigma_{noise} = 0.0001, \dots, 0.001$ ) leads to a faster decrease in population vector correlation ( $R_{PV}$ ) across trials while (D) the similarity matrix correlation ( $R_{RS}$ ) decreases at a slower rate. (E) Agents' reward maximization performance ( $G$ ) remains fairly stable even when the noise magnitude increases. However, beyond $\sigma_{noise} = 0.001$ , performance becomes highly unstable. Black dash indicates the trial from which PV and similarity matrix correlations were measured. (F) Normalized variance in field parameters ( $\theta = \{\alpha, \lambda, \sigma\}$ ) between trials 25,000 and 200,000 quantifies the change in individual place fields' spatial selectivity. With no noise (blue) or a larger noise magnitude ( $\sigma_{noise} = 0.001$ ), fields with a larger amplitude undergo a greater change in their parameters. When $\sigma_{noise} = 0.0001$ , we see the opposite trend, where fields with a larger amplitude are more stable than fields with a smaller amplitude, suggesting that fields with larger amplitudes act as anchored representations in an environment. Refer to Fig. S8 for other $\sigma_{noise}$ values. Shaded area is $95\%$ CI over 10 seeds.
+
+their spatial selectivity to maintain stable behavior (Fig. 3E, orange), consistent with Qin et al. (2023). Increasing the noise magnitude beyond this range causes place fields with large amplitudes that are more involved in policy learning to drift more (Fig. 3F, purple), causing unstable behavior (Fig. 3E, purple).
+
+# 4.4. Field reorganization improves policy convergence
+
+
+Figure 4. Field reorganization and noisy updates improve target learning. (A) Optimizing all three field parameters (amplitude, width and center) of randomly distributed fields allowed agents $(N = 16,\sigma = 0.1)$ to attain the highest cumulative discounted reward $(G)$ , while agents with fixed field parameters attained the lowest. (B) Optimizing place field widths $(\sigma)$ , followed by field amplitudes $(\alpha)$ and lastly field centers $(\lambda)$ , caused the biggest decrease in the number of trials needed for policy convergence ($T_{G > 45}$: attaining a running average of $G = 45$ over 300 trials). As the number of fields increased, the number of trials needed for policy convergence decreased and the computational advantage afforded by field optimization vanished. (C) Agents need to navigate to a target that changed after every 50,000 trials $x_{r} = \{0.5,0.0,0.75, - 0.25,0.5\}$ . Without noisy field parameter updates, agents $(N = 128,\sigma = 0.1)$ struggled to learn new targets (blue, $\sigma_{noise} = 0.0$ ). Field updates with different noise magnitudes influenced the policy convergence speed and maximum cumulative reward for subsequent targets, with $\sigma_{noise} = 0.0005$ (red) demonstrating the highest improvement. Shaded area is $95\%$ CI over 50 seeds.
+
+As the reward-maximizing model recapitulates experimentally-observed changes in place fields, it is natural to ask what computational advantage representation learning might offer during reinforcement learning. To probe the contributions of each field parameter to policy learning, we perform ablation experiments. These ablations are particularly important due to the parameter degeneracies in the model: one can trade off the place field amplitudes and the critic and actor weights.
+
+We first considered the task of navigating to a single fixed target. Agents with fixed place fields attained the lowest navigational performance, with cumulative reward plateauing at $G = 33$ (Fig. 4A), and showed the slowest policy convergence even as the number of fields increased (Fig. 4B). Optimizing place field widths $(\sigma)$ contributed the greatest improvement in maximum reward and the largest decrease in the number of trials needed for policy convergence (Fig. 4A-B). Optimizing place field amplitudes $(\alpha)$ contributed the next most significant improvement (Fig. 4A-B). Interestingly, place field center $(\lambda)$ optimization did not contribute a significant improvement in performance, and in fact caused a decrease in reward maximization performance and speed of policy convergence when optimized together with the amplitude parameter. Hence, optimizing field widths followed by amplitudes and lastly centers significantly improved the agent's reward maximization performance and increased the speed of policy convergence.
+
+Optimizing field parameters using the auxiliary metric representation (MR) objective, inspired by Fang & Stachenfeld (2023), marginally improved policy learning (Fig. S15). However, as the number of place fields increases (Fig. 4B), the computational advantage afforded by place field optimization is extinguished. Nevertheless, optimizing all the parameters of a small number of fields, e.g. 8, leads to a similar rate of policy convergence as a larger number of randomly initialized fields, e.g. 128, which hints that representational flexibility could allow efficient learning in systems with few neurons.
+
+We now turn to the influence of noisy fields when learning to navigate to new targets, inspired by Dohare et al. (2024). Agents now have to navigate from the same start location to a target that repeatedly changes location. Although all agents learned to navigate to the first $(x_{r} = 0.5)$ and second $(x_{r} = 0.0)$ targets equally well, agents without noisy field updates struggled to learn the next three targets and achieved a lower average cumulative reward (Fig. 4C, blue). Increasing the noise magnitude improved new target learning. Some place fields coding for the initial reward location shifted to code for the new reward location (Fig. S3), replicating the behavior of place cells that selectively code for rewards (Gauthier & Tank, 2018; Sosa et al., 2023); this behavior was suppressed in agents without noisy field parameter updates. However, noise magnitudes beyond a threshold $(\sigma_{noise} = 0.001)$ caused the average cumulative reward to decrease. These results suggest a functional role for noise, especially for new target learning. We see a similar improvement in reward maximization performance with noisy field updates in a 2D arena with an obstacle when we change either the target or the obstacle location (Fig. S12).
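+
+The noisy update rule used in these experiments can be sketched as follows. This is a minimal illustration, not the released implementation: the function name, learning rate and gradient placeholder are ours, while the noise magnitude is the $\sigma_{noise} = 0.0005$ value that performed best above.
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+
+def noisy_field_update(params, grad, lr=0.01, sigma_noise=0.0005):
+    """One gradient-ascent step on place field parameters with additive
+    Gaussian noise. `params` and `grad` hold field parameters (e.g.
+    centers, widths, amplitudes) and their reward-objective gradients;
+    `lr` is an illustrative learning rate."""
+    noise = sigma_noise * rng.standard_normal(params.shape)
+    return params + lr * grad + noise
+
+# Even with a zero gradient, 128 field centers drift slightly each step,
+# which is what lets stale fields escape solutions for old targets.
+centers = np.linspace(-1.0, 1.0, 128)
+drifted = noisy_field_update(centers, np.zeros_like(centers))
+```
+
+The design choice mirrors the result in Fig. 4C: too little noise traps fields in a previous solution, while too much noise washes out the learned policy.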
+
+# 5. Discussion
+
+We present a two-layer navigation model which uses tunable place fields as feature inputs to an actor and a critic for policy learning. The noisy parameters of the place fields and of the policy and value function learn to maximize rewards using the temporal difference (TD) error. Our simple reinforcement learning model reproduces three experimentally observed neural phenomena: (1) the emergence of a high place field density at rewards, (2) enlargement of fields against the trajectory, and (3) drifting fields that do not influence task performance. We analyzed the model to understand how the TD error, the number of place fields and the noise magnitude influence place field representations. Lastly, we demonstrate that learning place field representations with noisy field parameters improves the rate of policy convergence when learning single and multiple targets.
+
+Our goal was not to replicate every known mechanistic detail of place fields, but to build a minimal, biologically grounded model that captures as many phenomena as possible. This required deliberate decisions to omit certain granularities. While this necessitates some disconnect from neural mechanisms, it affords the parsimony (Van Vreeswijk & Sompolinsky, 1996; Burak & Fiete, 2009; Hopfield, 1982) and interpretability (Bordelon et al., 2024) needed to make experimentally testable predictions for neural systems (Montague et al., 1996; Schultz et al., 1997; Kumar et al., 2024). For instance, our model gives an alternative normative account of field elongation against the trajectory, which can be contrasted with the successor representation algorithm (Raju et al., 2024; Kumar et al., 2024). As field dynamics differ between these two models, they could be distinguished by experiments that track fields over the full course of learning (Fig. 2C-E, Fig. S6). Furthermore, place field width and amplitude optimization increases maximum cumulative reward and accelerates policy convergence (Fig. 4A-B).
+
+Most models that characterize representational drift have not been studied in the context of navigational policy learning (Masset et al., 2022; Pashakhanloo & Koulakov, 2023; Ratzon et al., 2024). We showed that increasing the noise magnitude produced different drift regimes (Fig. 3F; Fig. S9D), and that at very high noise levels navigation behavior started to collapse (Fig. 3C, Fig. S7). Importantly, we showed that fields in the noisy regime allowed agents to consistently learn new targets in both 1D (Fig. 4C) and 2D (Fig. S12A-B) environments, without getting stuck in local minima. The biological origins of noise in place field parameters can be attributed to noisy synaptic plasticity mechanisms (Mongillo et al., 2017; Kappel et al., 2015; Attardo et al., 2015). Other mechanisms, such as unstable dynamics in downstream networks (Sorscher et al., 2023) and modulatory processes such as dopamine fluctuations (Krishnan & Sheffield, 2023), could adaptively control drift rates. A difficult but direct test of our model would be to induce or constrain place field drift rates in animals and determine how this perturbation influences new target learning. How fluctuations in dopamine, stochastic actions and stochastic firing rates within place fields drive drift rates remains to be explored. The current model provides a starting point for this investigation.
+
+The proposed model is not without limitations. First, we modeled single-peaked place fields instead of the complex representations produced by single "place" cells, which can be multi-field and multi-scale. Nevertheless, the proposed online reinforcement learning framework is general enough to accommodate other descriptions of place cells (Mainali et al., 2024; Sorscher et al., 2023), e.g. Fig. S14, and can be extended to study representation learning in other brain regions, e.g. the medial entorhinal (Boccara et al., 2019; Wen et al., 2024) or posterior parietal (Suhaimi et al., 2022) cortex.
+
+Next, place field parameters are optimized by backpropagating the temporal difference error through the actor and critic components (Fig. S15). Since the motivation was to develop a normative model whose objective was to maximize rewards, this was a reasonable starting point. However, this model must be extended using biologically-plausible learning rules (Miconi, 2017; Murray, 2019; Lillicrap et al., 2016; Nokland, 2016; Overwiening et al., 2025) before it can in any way be considered mechanistic (Lee et al., 2024; Starkweather & Uchida, 2021; Krishnan et al., 2022; Kempadoo et al., 2016; Edelmann & Lessmann, 2018).
+
+Although we explored a simple non-reward-dependent objective to drive place field reorganization, the next step is to extend the model to other auxiliary objectives (Low et al., 2018; Schaeffer et al., 2022) to understand how they influence representation learning for policy learning. While our computational experiments successfully demonstrated the model's effectiveness in reproducing three disparate phenomena, further work should test its robustness across other reinforcement learning algorithms, e.g. policy gradient (Kumar & Pehlevan, 2024), and network architectures (Team et al., 2021; Kumar et al., 2025). Additionally, we need to explore how place field reorganization scales to larger, more complex environments (Hill et al., 2020; Lin et al., 2023; Nieh et al., 2021; Kumar et al., 2024) beyond the few environments we considered.
+
+# Code Availability
+
+The code for our agents and to reproduce all figures in this paper is available at: https://github.com/Pehlevan-Group/placefield_reorg_agent
+
+# Author Contributions
+
+MGK and CP conceptualized and designed the study. BB performed the theoretical analysis. MGK performed the simulation experiments and wrote the original draft. BB, JZV and CP revised the manuscript.
+
+# Acknowledgments
+
+We would like to thank Albert Lee, Lucas Janson, Peter Dayan, Farhad Pashakhanloo, Shahriar Talebi, Paul Masset, as well as the members of the Pehlevan, Ba, Janson and Murthy labs for useful insights. We also appreciate the discussions during the Analytical Connectionism Summer School 2024 and COSYNE 2025. This research was supported in part by grants NSF PHY-1748958 and PHY-2309135 to the Kavli Institute for Theoretical Physics (KITP). MGK and CP are supported by NSF Award DMS-2134157. BB is supported by a Google PhD Fellowship. JAZV was supported by a Junior Fellowship from the Harvard Society of Fellows. CP is further supported by NSF CAREER Award IIS-2239780, DARPA grant DIAL-FP-038, a Sloan Research Fellowship, and The William F. Milton Fund from Harvard University. This work has been made possible in part by a gift from the Chan Zuckerberg Initiative Foundation to establish the Kempner Institute for the Study of Natural and Artificial Intelligence.
+
+# Impact Statement
+
+This paper presents work whose goal is to advance the field of Neuroscience and/or Reinforcement Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
+
+# References
+
+Amo, R., Matias, S., Yamanaka, A., Tanaka, K. F., Uchida, N., and Watabe-Uchida, M. A gradual temporal shift of dopamine responses mirrors the progression of temporal difference error in machine learning. Nature neuroscience, 25(8):1082-1092, 2022.
+Arleo, A. and Gerstner, W. Spatial cognition and neuromimetic navigation: a model of hippocampal place cell activity. Biological cybernetics, 83(3):287-299, 2000.
+Attardo, A., Fitzgerald, J. E., and Schnitzer, M. J. Impermanence of dendritic spines in live adult ca1 hippocampus. Nature, 523(7562):592-596, 2015.
+Boccara, C. N., Nardin, M., Stella, F., O'Neill, J., and Csicsvari, J. The entorhinal cognitive map is attracted to goals. Science, 363(6434):1443-1447, 2019.
+Bordelon, B., Masset, P., Kuo, H., and Pehlevan, C. Loss dynamics of temporal difference reinforcement learning. Advances in Neural Information Processing Systems, 36, 2024.
+Brown, M. A. and Sharp, P. E. Simulation of spatial learning in the morris water maze by a neural network model of the hippocampal formation and nucleus accumbens. Hippocampus, 5(3):171-188, 1995.
+
+Burak, Y. and Fiete, I. R. Accurate path integration in continuous attractor network models of grid cells. PLoS computational biology, 5(2):e1000291, 2009.
+Bush, D., Barry, C., Manson, D., and Burgess, N. Using grid cells for navigation. *Neuron*, 87(3):507-520, 2015.
+Dayan, P. Improving generalization for temporal difference learning: The successor representation. Neural computation, 5(4):613-624, 1993.
+Dayan, P. and Abbott, L. F. Theoretical neuroscience: computational and mathematical modeling of neural systems. MIT press, 2005.
+de Snoo, M. L., Miller, A. M., Ramsaran, A. I., Josselyn, S. A., and Frankland, P. W. Exercise accelerates place cell representational drift. Current Biology, 33(3):R96-R97, 2023.
+Dohare, S., Hernandez-Garcia, J. F., Lan, Q., Rahman, P., Mahmood, A. R., and Sutton, R. S. Loss of plasticity in deep continual learning. Nature, 632(8026):768-774, 2024.
+Dong, C., Madar, A. D., and Sheffield, M. E. Distinct place cell dynamics in ca1 and ca3 encode experience in new environments. Nature communications, 12(1):2977, 2021.
+Dupret, D., O'Neill, J., Pleydell-Bouverie, B., and Csicsvari, J. The reorganization and reactivation of hippocampal maps predict spatial memory performance. Nature neuroscience, 13(8):995-1002, 2010.
+Edelmann, E. and Lessmann, V. Dopaminergic innervation and modulation of hippocampal networks. Cell and tissue research, 373:711-727, 2018.
+Eliav, T., Maimon, S. R., Aljadeff, J., Tsodyks, M., Ginosar, G., Las, L., and Ulanovsky, N. Multiscale representation of very large environments in the hippocampus of flying bats. Science, 372(6545):eabg4020, 2021.
+Fang, C. and Stachenfeld, K. L. Predictive auxiliary objectives in deep rl mimic learning in the brain. arXiv preprint arXiv:2310.06089, 2023.
+Fiete, I. R., Burak, Y., and Brookings, T. What grid cells convey about rat location. Journal of Neuroscience, 28 (27):6858-6871, 2008.
+Floresco, S. B., Todd, C. L., and Grace, A. A. Glutamatergic afferents from the hippocampus to the nucleus accumbens regulate activity of ventral tegmental area dopamine neurons. Journal of Neuroscience, 21(13):4915-4922, 2001.
+
+Foster, D. J., Morris, R. G., and Dayan, P. A model of hippocampally dependent navigation, using the temporal difference learning rule. Hippocampus, 10(1):1-16, 2000.
+Frank, L. M., Stanley, G. B., and Brown, E. N. Hippocampal plasticity across multiple days of exposure to novel environments. Journal of Neuroscience, 24(35):7681-7689, 2004.
+Fremaux, N., Sprekeler, H., and Gerstner, W. Reinforcement learning using a continuous time actor-critic framework with spiking neurons. PLoS computational biology, 9(4): e1003024, 2013.
+Gardner, M. P., Schoenbaum, G., and Gershman, S. J. Rethinking dopamine as generalized prediction error. Proceedings of the Royal Society B, 285(1891):20181645, 2018.
+Gauthier, J. L. and Tank, D. W. A dedicated population for reward coding in the hippocampus. *Neuron*, 99(1): 179–193, 2018.
+Gershman, S. J. The successor representation: its computational logic and neural substrates. Journal of Neuroscience, 38(33):7193-7200, 2018.
+Gershman, S. J. and Uchida, N. Believing in dopamine. Nature Reviews Neuroscience, 20(11):703-714, 2019.
+Geva, N., Deitch, D., Rubin, A., and Ziv, Y. Time and experience differentially affect distinct aspects of hippocampal representational drift. Neuron, 111(15):2357-2366, 2023.
+Gonzalez, W. G., Zhang, H., Harutyunyan, A., and Lois, C. Persistence of neuronal representations through time and damage in the hippocampus. Science, 365(6455): 821-825, 2019.
+Hill, F., Tieleman, O., Von Glehn, T., Wong, N., Merzic, H., and Clark, S. Grounded language learning fast and slow. arXiv preprint arXiv:2009.01719, 2020.
+Hopfield, J. J. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the national academy of sciences, 79(8):2554-2558, 1982.
+Houk, J. C., Adams, J. L., and Barto, A. G. A Model of How the Basal Ganglia Generate and Use Neural Signals That Predict Reinforcement. In Models of Information Processing in the Basal Ganglia. The MIT Press, 11 1994. ISBN 9780262275774. doi: 10.7551/mitpress/4708.003.0020. URL https://doi.org/10.7551/mitpress/4708.003.0020.
+Joel, D., Niv, Y., and Ruppin, E. Actor-critic models of the basal ganglia: New anatomical and computational perspectives. Neural networks, 15(4-6):535-547, 2002.
+
+Kappel, D., Habenschuss, S., Legenstein, R., and Maass, W. Network plasticity as bayesian inference. PLoS computational biology, 11(11):e1004485, 2015.
+Kempadoo, K. A., Mosharov, E. V., Choi, S. J., Sulzer, D., and Kandel, E. R. Dopamine release from the locus coeruleus to the dorsal hippocampus promotes spatial learning and memory. Proceedings of the National Academy of Sciences, 113(51):14835-14840, 2016.
+Kentros, C. G., Agnihotri, N. T., Streater, S., Hawkins, R. D., and Kandel, E. R. Increased attention to spatial context increases both place field stability and spatial memory. Neuron, 42(2):283-295, 2004.
+Krishnan, S. and Sheffield, M. E. Reward expectation reduces representational drift in the hippocampus. *bioRxiv*, 2023.
+Krishnan, S., Heer, C., Cherian, C., and Sheffield, M. E. Reward expectation extinction restructures and degrades ca1 spatial maps through loss of a dopaminergic reward proximity signal. Nature communications, 13(1):6662, 2022.
+Kumar, M. G. and Pehlevan, C. Place fields organize along goal trajectory with reinforcement learning. Cognitive Computational Neuroscience, 2024.
+Kumar, M. G., Tan, C., Libedinsky, C., Yen, S.-C., and Tan, A. Y. A nonlinear hidden layer enables actor-critic agents to learn multiple paired association navigation. *Cerebral Cortex*, 32(18):3917-3936, 2022.
+Kumar, M. G., Tan, C., Libedinsky, C., Yen, S.-C., and Tan, A. Y.-Y. One-shot learning of paired association navigation with biologically plausible schemas, 2024. URL https://arxiv.org/abs/2106.03580.
+Kumar, M. G., Manoogian, A., Qian, B., Pehlevan, C., and Rhoads, S. A. Neurocomputational underpinnings of suboptimal beliefs in recurrent neural network-based agents. bioRxiv, pp. 2025-03, 2025.
+Lee, J. S., Briguglio, J. J., Cohen, J. D., Romani, S., and Lee, A. K. The statistical structure of the hippocampal code for space as a function of time, context, and value. Cell, 183(3):620-635, 2020.
+Lee, R. S., Sagiv, Y., Engelhard, B., Witten, I. B., and Daw, N. D. A feature-specific prediction error model explains dopaminergic heterogeneity. Nature neuroscience, 27(8): 1574-1586, 2024.
+Lillicrap, T. P., Cownden, D., Tweed, D. B., and Akerman, C. J. Random synaptic feedback weights support error backpropagation for deep learning. Nature communications, 7(1):13276, 2016.
+
+Lin, Z., Azaman, H., Kumar, M. G., and Tan, C. Compositional learning of visually-grounded concepts using reinforcement. arXiv preprint arXiv:2309.04504, 2023.
+Lisman, J. E. and Grace, A. A. The hippocampal-vta loop: controlling the entry of information into long-term memory. Neuron, 46(5):703-713, 2005.
+Low, R. J., Lewallen, S., Aronov, D., Nevers, R., and Tank, D. W. Probing variability in a cognitive map using manifold inference from neural dynamics. *BioRxiv*, pp. 418939, 2018.
+Mainali, N., da Silveira, R. A., and Burak, Y. Universal statistics of hippocampal place fields across species and dimensionalities. bioRxiv, pp. 2024-06, 2024.
+Mankin, E. A., Sparks, F. T., Slayyeh, B., Sutherland, R. J., Leutgeb, S., and Leutgeb, J. K. Neuronal code for extended time in the hippocampus. Proceedings of the National Academy of Sciences, 109(47):19462-19467, 2012.
+Masset, P., Qin, S., and Zavatone-Veth, J. A. Drifting neuronal representations: Bug or feature? Biological cybernetics, 116(3):253-266, 2022.
+Mehta, M. R., Barnes, C. A., and McNaughton, B. L. Experience-dependent, asymmetric expansion of hippocampal place fields. Proceedings of the National Academy of Sciences, 94(16):8918-8921, 1997.
+Mehta, M. R., Quirk, M. C., and Wilson, M. A. Experience-dependent asymmetric shape of hippocampal receptive fields. Neuron, 25(3):707-715, 2000.
+Menache, I., Mannor, S., and Shimkin, N. Basis function adaptation in temporal difference reinforcement learning. Annals of Operations Research, 134(1):215-238, 2005.
+Miconi, T. Biologically plausible learning in recurrent neural networks reproduces neural dynamics observed during cognitive tasks. *Elife*, 6:e20899, 2017.
+Mnih, V., Badia, A. P., Mirza, M., Graves, A., Lillicrap, T., Harley, T., Silver, D., and Kavukcuoglu, K. Asynchronous methods for deep reinforcement learning. In Balcan, M. F. and Weinberger, K. Q. (eds.), Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pp. 1928-1937, New York, New York, USA, 20-22 Jun 2016. PMLR. URL https://proceedings.mlr.press/v48/mniha16.html.
+Mongillo, G., Rumpel, S., and Loewenstein, Y. Intrinsic volatility of synaptic connections—a challenge to the synaptic trace theory of memory. *Current opinion in neurobiology*, 46:7-13, 2017.
+
+Montague, P. R., Dayan, P., and Sejnowski, T. J. A framework for mesencephalic dopamine systems based on predictive hebbian learning. Journal of neuroscience, 16(5): 1936-1947, 1996.
+Morris, R. G., Garrud, P., Rawlins, J. a., and O'Keefe, J. Place navigation impaired in rats with hippocampal lesions. Nature, 297(5868):681-683, 1982.
+Moser, M.-B., Rowland, D. C., and Moser, E. I. Place cells, grid cells, and memory. Cold Spring Harbor perspectives in biology, 7(2):a021808, 2015.
+Murray, J. M. Local online learning in recurrent networks with random feedback. *Elife*, 8:e43299, 2019.
+Nieh, E. H., Schottdorf, M., Freeman, N. W., Low, R. J., Lewallen, S., Koay, S. A., Pinto, L., Gauthier, J. L., Brody, C. D., and Tank, D. W. Geometry of abstract learned knowledge in the hippocampus. Nature, 595(7865):80-84, 2021.
+Niv, Y. Reinforcement learning in the brain. Journal of Mathematical Psychology, 53(3):139-154, 2009.
+Nøkland, A. Direct feedback alignment provides learning in deep neural networks. Advances in neural information processing systems, 29, 2016.
+O'Keefe, J. The hippocampus as a cognitive map, 1978.
+O'Keefe, J. and Dostrovsky, J. The hippocampus as a spatial map: preliminary evidence from unit activity in the freely-moving rat. *Brain research*, 1971.
+Overwiening, J., Kumar, M. G., and Sompolinsky, H. Tedfaδ: Temporal integration in deep spiking networks trained with feedback alignment improves policy learning. Cognitive Computational Neuroscience, 2025.
+Packard, M. G. and McGaugh, J. L. Inactivation of hippocampus or caudate nucleus with lidocaine differentially affects expression of place and response learning. Neurobiology of learning and memory, 65(1):65-72, 1996.
+Palacios-Filardo, J. and Mellor, J. R. Neuromodulation of hippocampal long-term synaptic plasticity. Current opinion in neurobiology, 54:37-43, 2019.
+Pashakhanloo, F. and Koulakov, A. Stochastic gradient descent-induced drift of representation in a two-layer neural network. In International Conference on Machine Learning, pp. 27401-27419. PMLR, 2023.
+Priestley, J. B., Bowler, J. C., Rolotti, S. V., Fusi, S., and Losonczy, A. Signatures of rapid plasticity in hippocampal ca1 representations during novel experiences. Neuron, 110(12):1978-1992, 2022.
+
+Qin, S., Farashahi, S., Lipshutz, D., Sengupta, A. M., Chklovskii, D. B., and Pehlevan, C. Coordinated drift of receptive fields in hebbian/anti-hebbian network models during noisy representation learning. Nature Neuroscience, 26(2):339-349, 2023.
+Raju, R. V., Guntupalli, J. S., Zhou, G., Wendelken, C., Lázaro-Gredilla, M., and George, D. Space is a latent sequence: A theory of the hippocampus. Science Advances, 10(31):eadm8470, 2024.
+Ratzon, A., Derdikman, D., and Barak, O. Representational drift as a result of implicit regularization. *Elife*, 12: RP90069, 2024.
+Reynolds, J. N., Hyland, B. I., and Wickens, J. R. A cellular mechanism of reward-related learning. Nature, 413 (6851):67-70, 2001.
+Rokni, U., Richardson, A. G., Bizzi, E., and Seung, H. S. Motor learning with unstable neural representations. *Neuron*, 54(4):653-666, 2007.
+Rule, M. E. and O'Leary, T. Self-healing codes: How stable neural populations can track continually reconfiguring neural representations. Proceedings of the National Academy of Sciences, 119(7):e2106692119, 2022.
+Rule, M. E., Loback, A. R., Raman, D. V., Driscoll, L. N., Harvey, C. D., and O'Leary, T. Stable task information from an unstable neural population. *elife*, 9:e51121, 2020.
+Russo, S. J. and Nestler, E. J. The brain reward circuitry in mood disorders. Nature reviews neuroscience, 14(9): 609-625, 2013.
+Sayegh, F. J., Mouledous, L., Macri, C., Pi Macedo, J., Lejards, C., Rampon, C., Verret, L., and Dahan, L. Ventral tegmental area dopamine projections to the hippocampus trigger long-term potentiation and contextual learning. Nature Communications, 15(1):4100, 2024.
+Schaeffer, R., Khona, M., and Fiete, I. No free lunch from deep learning in neuroscience: A case study through models of the entorhinal-hippocampal circuit. Advances in neural information processing systems, 35:16052-16067, 2022.
+Schulman, J., Moritz, P., Levine, S., Jordan, M., and Abbeel, P. High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438, 2015.
+Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
+
+Schultz, W., Dayan, P., and Montague, P. R. A neural substrate of prediction and reward. Science, 275(5306): 1593-1599, 1997.
+Sorscher, B., Mel, G. C., Ocko, S. A., Giocomo, L. M., and Ganguli, S. A unified theory for the computational and mechanistic origins of grid cells. *Neuron*, 111(1): 121-137, 2023.
+Sosa, M., Plitt, M. H., and Giocomo, L. M. Hippocampal sequences span experience relative to rewards. *bioRxiv*, 2023.
+Stachenfeld, K. L., Botvinick, M. M., and Gershman, S. J. The hippocampus as a predictive map. Nature neuroscience, 20(11):1643-1653, 2017.
+Starkweather, C. K. and Uchida, N. Dopamine signals as temporal difference errors: recent advances. *Current Opinion in Neurobiology*, 67:95–105, 2021.
+Steele, R. and Morris, R. Delay-dependent impairment of a matching-to-place task with chronic and intrahippocampal infusion of the nmda-antagonist d-ap5. Hippocampus, 9(2):118-136, 1999.
+Stöckl, C., Yang, Y., and Maass, W. Local prediction-learning in high-dimensional spaces enables neural networks to plan. Nature Communications, 15(1):2344, 2024.
+Suhaimi, A., Lim, A. W., Chia, X. W., Li, C., and Makino, H. Representation learning in the artificial and biological neural networks underlying sensorimotor integration. Science Advances, 8(22):eabn0984, 2022.
+Sutton, R. S. and Barto, A. G. Reinforcement learning: An introduction. A Bradford Book, 2018.
+Sutton, R. S., McAllester, D., Singh, S., and Mansour, Y. Policy gradient methods for reinforcement learning with function approximation. Advances in neural information processing systems, 12, 1999.
+Team, O. E. L., Stooke, A., Mahajan, A., Barros, C., Deck, C., Bauer, J., Sygnowski, J., Trebacz, M., Jaderberg, M., Mathieu, M., et al. Open-ended learning leads to generally capable agents. arXiv preprint arXiv:2107.12808, 2021.
+Tolman, E. C. Cognitive maps in rats and men. Psychological review, 55(4):189, 1948.
+Tse, D., Langston, R. F., Kakeyama, M., Bethus, I., Spooner, P. A., Wood, E. R., Witter, M. P., and Morris, R. G. Schemas and memory consolidation. Science, 316(5821): 76-82, 2007.
+
+Van Vreeswijk, C. and Sompolinsky, H. Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science, 274(5293):1724-1726, 1996.
+Wang, J. X., Kurth-Nelson, Z., Kumaran, D., Tirumala, D., Soyer, H., Leibo, J. Z., Hassabis, D., and Botvinick, M. Prefrontal cortex as a meta-reinforcement learning system. Nature neuroscience, 21(6):860-868, 2018.
+Wen, J. H., Sorscher, B., Aery Jones, E. A., Ganguli, S., and Giocomo, L. M. One-shot entorhinal maps enable flexible navigation in novel environments. Nature, pp. 1-8, 2024.
+Yaghoubi, M., Nieto-Posadas, A., Mosser, C.-A., Gisiger, T., Wilson, E., Williams, S., and Brandon, M. P. Predictive coding of reward in the hippocampus. *bioRxiv*, pp. 2024–09, 2024.
+Ziv, Y., Burns, L. D., Cocker, E. D., Hamel, E. O., Ghosh, K. K., Kitch, L. J., Gamal, A. E., and Schnitzer, M. J. Long-term dynamics of ca1 hippocampal place codes. Nature neuroscience, 16(3):264-266, 2013.
+
+# A. Details of the Place field-based navigation model
+
+The code for initializing and training the model in 1D and 2D environments, along with the code for analyzing neural phenomena and generating all figures, is available at the GitHub repository listed under Code Availability.
+
+# A.1. Place fields in 1D and 2D environments
+
+The agent contains $N$ place fields. In a 1D track, each place field is described as
+
+$$
+\phi_ {i} \left(x _ {t}\right) = \alpha_ {i} ^ {2} \exp \left(- \frac {\left| \left| x _ {t} - \lambda_ {i} \right| \right| _ {2} ^ {2}}{2 \sigma_ {i} ^ {2}}\right), \tag {9}
+$$
+
+with $\alpha$ , $\lambda$ and $\sigma$ describing the amplitude, center and width, adapted from Foster et al. (2000); Kumar et al. (2022; 2024). Most of the simulations were initialized with amplitudes $\alpha_{i} = 0.5$ and widths $\sigma_{i} = 0.1$ , with centers uniformly tiling the environment $\lambda \in [-1, 1]$ . Nevertheless, similar representations emerge for amplitudes drawn uniformly from $[0, 1]$ and widths drawn uniformly from $[0.01, 0.25]$ ; this parameter initialization was used for the ablation studies in Fig. 4. In a 2D arena, each place field is described as
+
+$$
+\phi_ {i} \left(x _ {t}\right) = \alpha_ {i} ^ {2} \exp \left[ - \frac {1}{2} \left(x _ {t} - \lambda_ {i}\right) ^ {\top} \Sigma_ {i} ^ {- 1} \left(x _ {t} - \lambda_ {i}\right) \right], \tag {10}
+$$
+
+where $\Sigma_{i}$ is a $2 \times 2$ covariance matrix, adapted from Menache et al. (2005). The off-diagonal entries were initialized to zero and the diagonal entries were initialized to match the variance of the 1D place field description, i.e. diagonals set to $0.1^2$ , so that field widths are consistent between 1D and 2D.
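+
+The field descriptions in Eqs. 9 and 10 can be sketched directly. The function names are ours, and the initialization values follow the ones stated above; this is an illustration of the activations, not the released implementation.
+
+```python
+import numpy as np
+
+def place_field_1d(x, centers, widths, amps):
+    """Eq. (9): Gaussian place field activations on a 1D track."""
+    return amps**2 * np.exp(-(x - centers)**2 / (2 * widths**2))
+
+def place_field_2d(x, center, cov, amp):
+    """Eq. (10): a single 2D field with covariance matrix `cov`."""
+    d = x - center
+    return amp**2 * np.exp(-0.5 * d @ np.linalg.inv(cov) @ d)
+
+# Initialization used in most simulations: alpha = 0.5, sigma = 0.1,
+# with centers uniformly tiling [-1, 1].
+N = 16
+centers = np.linspace(-1.0, 1.0, N)
+phi = place_field_1d(0.0, centers, 0.1, 0.5)
+
+# A 2D field matching the 1D variance: Sigma = 0.1^2 * I.
+phi2 = place_field_2d(np.zeros(2), np.zeros(2), 0.1**2 * np.eye(2), 0.5)
+```
+
+At its own center a field fires at its peak rate $\alpha^2 = 0.25$; activity decays smoothly with distance, giving the agent a population code for position.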
+
+# A.2. Reward Maximization Objective (Policy Gradient)
+
+The objective of the model is to learn a policy $\pi$ parameterized by $W^{\pi}$ , together with spatial features $\phi$ parameterized by $\theta$ , that maximizes the expected cumulative discounted reward over any trajectory $\tau = \{x_0, g_0, r_0, x_1, \dots, x_T, g_T, r_T, x_{T+1}\}$ in a finite-horizon setting, modeling the trial structure of neuroscience experiments:
+
+$$
+\mathcal {J} ^ {G} = \mathbb {E} _ {\tau \sim \phi_ {\theta}, \pi_ {W \pi}} \left[ \sum_ {t = 0} ^ {T} \gamma^ {t} r _ {t} \right] = \mathbb {E} _ {\tau} [ G (\tau) ], \tag {11}
+$$
+
+where $\gamma$ is the discount factor, $r_t$ is the reward at time step $t$ after choosing an action $g_t$ at state $x_t$ , and the time horizon $T$ is finite with trials ending after a maximum of 100 steps in the 1D track and 300 steps in the 2D arena.
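+
+The return in Eq. 11 can be computed in a single backward pass over a trial's rewards. This short sketch (function name ours, discount value illustrative) makes the recursion $G_t = r_t + \gamma G_{t+1}$ explicit.
+
+```python
+def discounted_return(rewards, gamma=0.95):
+    """G = sum_t gamma^t * r_t, accumulated backwards in one pass."""
+    G = 0.0
+    for r in reversed(rewards):
+        G = r + gamma * G
+    return G
+
+# A trial that reaches the reward (r = 1) only on its final step:
+G = discounted_return([0.0, 0.0, 0.0, 1.0], gamma=0.9)  # 0.9**3 = 0.729
+```
+
+The backward accumulation is the same recursion that underlies the value-function bootstrapping used later in Eq. 24.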
+
+To maximize the cumulative reward objective, we perform gradient ascent on the policy and place field parameters,
+
+$$
+\theta_ {n e w} = \theta_ {o l d} + \eta_ {\theta} \nabla_ {\theta} \mathcal {J} ^ {G}, \quad W _ {n e w} ^ {\pi} = W _ {o l d} ^ {\pi} + \eta \nabla_ {W ^ {\pi}} \mathcal {J} ^ {G}, \tag {12}
+$$
+
+where $\eta_{\theta}$ and $\eta$ are learning rates for $\theta$ and $W^{\pi}$ respectively. The gradients are derived using the log-derivative trick,
+
+$$
+\begin{aligned}
+\nabla_{\theta, W^{\pi}} \mathcal{J}^{G} &= \nabla_{\theta, W^{\pi}} \mathbb{E}_{\tau}[G(\tau)] & (13) \\
+&= \nabla_{\theta, W^{\pi}} \int_{\tau} p(\tau \mid \theta, W^{\pi}) \, G(\tau) \, d\tau & (14) \\
+&= \int_{\tau} p(\tau \mid \theta, W^{\pi}) \, \nabla_{\theta, W^{\pi}} \log p(\tau \mid \theta, W^{\pi}) \, G(\tau) \, d\tau & (15) \\
+&= \mathbb{E}_{\tau} \left[ \nabla_{\theta, W^{\pi}} \log p(\tau \mid \theta, W^{\pi}) \, G(\tau) \right], & (16)
+\end{aligned}
+$$
+
+where the trajectory $\tau$ describes the state-to-state transitions. We expand the above using the Markov assumption that the transition to future states depends only on the present state and not on the states preceding it,
+
+$$
+p (\tau | \theta , W ^ {\pi}) = p (x _ {0}) \prod_ {t = 0} ^ {T} p \left(x _ {t + 1} \mid x _ {t}\right) \pi \left(g _ {t} \mid x _ {t}; \theta , W ^ {\pi}\right) \tag {17}
+$$
+
+$$
+\log p \left(\tau \mid \theta , W ^ {\pi}\right) = \log p \left(x _ {0}\right) + \sum_ {t = 0} ^ {T} \left(\log p \left(x _ {t + 1} \mid x _ {t}\right) + \log \pi \left(g _ {t} \mid x _ {t}; \theta , W ^ {\pi}\right)\right) \tag {18}
+$$
+
+$$
+\nabla_ {\theta , W ^ {\pi}} \log p (\tau | \theta , W ^ {\pi}) = \sum_ {t = 0} ^ {T} \nabla_ {\theta , W ^ {\pi}} \log \pi \left(g _ {t} \mid x _ {t}; \theta , W ^ {\pi}\right). \tag {19}
+$$
+
+Since the initial state distribution and the state transitions do not depend on $\theta$ or $W^{\pi}$ , their gradients vanish and the last line retains only the policy terms. Substituting Eq. 19 into Eq. 16
+
+yields
+
+$$
+\nabla_ {\theta , W ^ {\pi}} \mathcal {J} ^ {G} = \mathbb {E} _ {\tau} \left[ \sum_ {t = 0} ^ {T} \nabla_ {\theta , W ^ {\pi}} \log \pi \left(g _ {t} \mid x _ {t}; \theta , W ^ {\pi}\right) \cdot G _ {t} \right], \tag {20}
+$$
+
+which completes the derivation of the policy gradient theorem (Sutton et al., 1999; Sutton & Barto, 2018). The policy gradient objective was used by Kumar & Pehlevan (2024) to optimize the policy and place field parameters. However, this learning signal requires an explicit reward, and policy gradient methods are slow to converge because they suffer from high variance, for two reasons:
+
+- Monte Carlo sampling: Agents need to sample an entire episode to estimate the return $G_t = r_t + \gamma r_{t+1} + \gamma^2 r_{t+2} + \ldots$ before updating the policy. This introduces significant variance because the estimate is based on a single path through the stochastic environment, which may not be representative of the expected value over many episodes.
+- No Baseline: The basic policy gradient algorithm computes the gradient solely based on the return $G$ from each trajectory. By introducing a baseline (either constant $b$ or dynamically evolving $b_{t}$ e.g. value function $v_{t}$ ), which estimates the expected return from a given state, the variance of the gradient estimate can be reduced, because now the policy learns which action is better than the previous (concept of using an Advantage $A_{t}$ instead of rewards).
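
To see the baseline's effect concretely, here is a small sketch (our construction, not from the paper: a one-step, two-action task with a softmax policy) comparing score-function gradient estimates with and without a baseline:

```python
import numpy as np

# Score-function (REINFORCE) gradient estimates for a Bernoulli-softmax
# policy in a one-step task; both estimators are unbiased, but subtracting
# the expected return as a baseline reduces the variance.
rng = np.random.default_rng(0)
theta = 0.0                      # logit preference for action 1
rewards = np.array([1.0, 3.0])   # deterministic reward per action

def grad_estimates(n_episodes, use_baseline):
    p1 = 1.0 / (1.0 + np.exp(-theta))          # P(action 1)
    probs = np.array([1.0 - p1, p1])
    baseline = rewards @ probs if use_baseline else 0.0
    grads = []
    for _ in range(n_episodes):
        a = rng.choice(2, p=probs)
        score = a - p1           # d/dtheta log pi(a) for this parameterization
        grads.append(score * (rewards[a] - baseline))
    return np.array(grads)

g_plain = grad_estimates(20000, use_baseline=False)
g_base = grad_estimates(20000, use_baseline=True)
print(g_plain.mean(), g_base.mean())   # similar means (unbiased)
print(g_plain.var(), g_base.var())     # baseline: much smaller variance
```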
+
+Value-based methods (Sutton & Barto (2018), Chapter 3.5) were introduced to address some of these issues. For instance, instead of sampling returns $G_{t}$ , value functions $V_{t}$ learn to estimate the expected return at each time step $t$ :
+
+$$
+V _ {t} = \mathbb {E} [ G _ {t} ], \tag {21}
+$$
+
+which can reduce the variance during credit assignment. The combination of policy gradient with value-based methods leads us to the Actor-Critic algorithm.
+
+# A.3. Alternative reward maximization objective (Temporal Difference)
+
+The optimal value function $V_{t}$ reflects the true expected cumulative discounted rewards, hence the policy gradient objective can be rewritten as
+
+$$
+\mathcal {J} ^ {G} = \mathbb {E} _ {\tau} [ G (\tau) ] = \mathbb {E} \left[ \sum_ {t = 0} ^ {T} \gamma^ {t} r _ {t} \right] \tag {22}
+$$
+
+$$
+= \mathbb {E} \left[ r _ {0} + \gamma \sum_ {t = 1} ^ {T} \gamma^ {t - 1} r _ {t} \right]. \tag {23}
+$$
+
+Applying the same one-step decomposition at an arbitrary time step $t$ and identifying the discounted tail with the value function gives
+
+$$
+\mathcal {J} ^ {G} = \mathbb {E} \left[ r _ {t} + \gamma V _ {t + 1} \right], \tag {24}
+$$
+
+which yields the following self-consistency equation
+
+$$
+r _ {t} + \gamma V _ {t + 1} - V _ {t} \equiv 0, \tag {25}
+$$
+
+as argued by Sutton & Barto (2018); Frémaux et al. (2013).
+
+Variance-reduced variants of the policy gradient algorithm subtract a baseline, which can be a fixed constant $b$ or a dynamically changing variable $b_{t}$ . Since we have the value function $V_{t}$ we can modify the objective to be
+
+$$
+\mathcal {J} ^ {A} = \mathbb {E} [ G _ {t} - V _ {t} ] = \mathbb {E} [ A _ {t} ] = \mathbb {E} \left[ \sum_ {t = 0} ^ {T} r _ {t} + \gamma V _ {t + 1} - V _ {t} \right], \tag {26}
+$$
+
+which gives us the Advantage function (Mnih et al., 2016; Schulman et al., 2015). This reduces the variance, as the policy has to learn to select actions that give an advantage over the current value function. We get a learning objective that is an analogue of maximizing the expected cumulative discounted returns while subtracting a baseline (Eq. 11):
+
+$$
+\nabla_ {\theta , W ^ {\pi}} \mathcal {J} ^ {A} = \mathbb {E} \left[ \sum_ {t = 0} ^ {T} \nabla_ {\theta} \log \pi \left(g _ {t} \mid x _ {t}; \theta , W ^ {\pi}\right) \cdot A _ {t} \right]. \tag {27}
+$$
+
+However, we have assumed that we are given the optimal value function $V_{t}$ to tell the actor whether it is doing better or worse than before. Instead, we can learn to estimate the value function $v_{t}$ using a critic by minimizing the Temporal Difference error
+
+$$
+\delta_ {t} = r _ {t} + \gamma v _ {t + 1} - v _ {t}. \tag {28}
+$$
+
+The critic can learn to approximate the true value function by minimizing the mean squared error between the true value function $V_{t}$ and the predicted $v_{t}$ , or the temporal difference error $\delta_{t}$
+
+$$
+\mathcal {L} ^ {v} = \mathbb {E} \left[ \sum_ {t = 0} ^ {T} \frac {1}{2} \left(V \left(x _ {t}\right) - v \left(x _ {t}; \theta , w ^ {v}\right)\right) ^ {2} \right] \tag {29}
+$$
+
+$$
+= \mathbb {E} \left[ \sum_ {t = 0} ^ {T} \frac {1}{2} \left(r _ {t} + \gamma V \left(x _ {t + 1}\right) - v \left(x _ {t}; \theta , w ^ {v}\right)\right) ^ {2} \right]. \tag {30}
+$$
+
+Since we do not have the optimal value function $V_{t}$ , we can approximate it by bootstrapping the estimated value function $v_{t}$ and ensuring that we do not take gradients with respect to the time shifted value estimate $v(x_{t + 1})$ using a stop gradient (sg):
+
+$$
+\mathcal {L} ^ {T D} = \mathbb {E} \left[ \sum_ {t = 0} ^ {T} \frac {1}{2} \left(r _ {t} + \gamma v \left(x _ {t + 1}; \mathrm {sg} \left(\theta , w ^ {v}\right)\right) - v \left(x _ {t}; \theta , w ^ {v}\right)\right) ^ {2} \right] \tag {31}
+$$
+
+$$
+= \mathbb {E} \left[ \sum_ {t = 0} ^ {T} \frac {1}{2} \delta_ {t} ^ {2} \left(\theta , w ^ {v}\right) \right]. \tag {32}
+$$
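
A minimal numerical sketch of this bootstrapped, stop-gradient rule (our construction: a five-state rightward chain with one-hot features, not the paper's task):

```python
import numpy as np

# Semi-gradient TD(0): the target r + gamma * v(x') is treated as a constant
# (stop-gradient), so only v(x) is differentiated, giving w += eta*delta*phi(x).
rng = np.random.default_rng(1)
n_states, gamma, eta = 5, 0.9, 0.1
phi = np.eye(n_states)                 # one-hot features (tabular case)
w = np.zeros(n_states)                 # critic weights
r = np.zeros(n_states); r[-1] = 1.0    # reward on entering the last state

for _ in range(3000):
    x = rng.integers(0, n_states - 1)  # sample a non-terminal state
    x_next = x + 1                     # deterministic rightward transition
    terminal = x_next == n_states - 1
    delta = r[x_next] + gamma * (w @ phi[x_next]) * (not terminal) - w @ phi[x]
    w += eta * delta * phi[x]          # gradient flows only through v(x_t)

# w converges to the true values V(x) = gamma**(n_states - 2 - x)
print(w)
```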
+
+We minimize the temporal difference error using gradient descent for the critic to estimate the value function
+
+$$
+\nabla_ {\theta , w ^ {v}} \mathcal {L} ^ {T D} = \frac {\partial \mathcal {L} ^ {T D}}{\partial \delta} \cdot \frac {\partial \delta}{\partial v} \cdot \nabla_ {\theta , w ^ {v}} v (\theta , w ^ {v}) \tag {33}
+$$
+
+$$
+= \mathbb {E} \left[ \sum_ {t = 0} ^ {T} \delta_ {t} \cdot (- 1) \cdot \nabla_ {\theta , w ^ {v}} v \left(x _ {t}; \theta , w ^ {v}\right) \right] \tag {34}
+$$
+
+$$
+= \mathbb {E} \left[ \sum_ {t = 0} ^ {T} - \nabla_ {\theta , w ^ {v}} v \left(x _ {t}; \theta , w ^ {v}\right) \cdot \delta_ {t} \right]. \tag {35}
+$$
+
+Notice the additional negative sign that pops out when you take the derivative of $\delta$ only with respect to $v_{t}$
+
+$$
+\frac {\partial \delta}{\partial v} = \frac {\partial \left(r _ {t} + \gamma v _ {t + 1} - v _ {t}\right)}{\partial v _ {t}} = - 1, \tag {36}
+$$
+
+since $r_t$ and $v_{t+1}$ are treated as constants, we do not take their derivatives. Since we do not have the optimal value function $V_t$ but a biased estimate $v_t$ , we can use the temporal difference error as our reward maximization objective
+
+$$
+\mathcal {J} ^ {T D} = \mathbb {E} \left[ \sum_ {t = 0} ^ {T} r _ {t} + \gamma v _ {t + 1} - v _ {t} \right] = \mathbb {E} \left[ \sum_ {t = 0} ^ {T} \delta_ {t} \right]. \tag {37}
+$$
+
+As the value estimation becomes closer to the optimal value $v_{t} \rightarrow V_{t}$ , this objective becomes similar to the advantage objective $\mathcal{J}^{TD} \rightarrow \mathcal{J}^A$ . Note that we are not directly maximizing the TD error during policy learning. Rather, we want to optimize the policy $\pi$ and place field parameters $\theta$ by gradient ascent, using the biased estimate of the advantage function
+
+$$
+\nabla_ {\theta , W ^ {\pi}} \mathcal {J} ^ {T D} = \mathbb {E} \left[ \sum_ {t = 0} ^ {T} \nabla_ {\theta , W ^ {\pi}} \log \pi \left(g _ {t} \mid x _ {t}; \theta , W ^ {\pi}\right) \cdot \delta_ {t} \right]. \tag {38}
+$$
+
+An alternative interpretation is that during policy learning, the agent learns a policy that maximizes the difference between the actual reward and the estimated value.
+
+# A.4. Combined reward maximization objective for place field parameters
+
+In our model (Fig. 1A), actor $W^{\pi}$ and critic $w^{v}$ weights are optimized separately, while the place field parameters $\theta$ are shared between them. The actor uses gradient ascent on Eq. 27, and the critic employs gradient descent on Eq. 35. Since we have a single population of place fields, we optimize these parameters to support both objectives. Thus, we derive a combined objective function to update $W^{\pi}, w^{v}$ , and $\theta$ in a single gradient pass
+
+$$
+\nabla_ {W ^ {\pi}, w ^ {v}, \theta} \mathcal {J} = \nabla_ {W ^ {\pi}, w ^ {v}, \theta} \mathcal {J} ^ {T D} - \nabla_ {W ^ {\pi}, w ^ {v}, \theta} \mathcal {L} ^ {T D} \tag {39}
+$$
+
+$$
+= \mathbb {E} \left[ \sum_ {t = 0} ^ {T} \nabla_ {W ^ {\pi}, w ^ {v}, \theta} \log \pi \left(g _ {t} \mid x _ {t}; W ^ {\pi}, \theta\right) \delta_ {t} \right] - \mathbb {E} \left[ \sum_ {t = 0} ^ {T} - \nabla_ {W ^ {\pi}, w ^ {v}, \theta} v \left(x _ {t}; w ^ {v}, \theta\right) \delta_ {t} \right] \tag {40}
+$$
+
+$$
+= \mathbb {E} \left[ \sum_ {t = 0} ^ {T} \nabla_ {W ^ {\pi}, w ^ {v}, \theta} \log \pi \left(g _ {t} \mid x _ {t}; W ^ {\pi}, \theta\right) \delta_ {t} + \nabla_ {W ^ {\pi}, w ^ {v}, \theta} v \left(x _ {t}; w ^ {v}, \theta\right) \delta_ {t} \right] \tag {41}
+$$
+
+$$
+= \mathbb {E} \left[ \sum_ {t = 0} ^ {T} \left(\nabla_ {W ^ {\pi}, w ^ {v}, \theta} \log \pi \left(g _ {t} \mid x _ {t}; W ^ {\pi}, \theta\right) + \nabla_ {W ^ {\pi}, w ^ {v}, \theta} v \left(x _ {t}; w ^ {v}, \theta\right)\right) \delta_ {t} \right]. \tag {42}
+$$
+
+where $\nabla_{w^v}\mathcal{J}^{TD} = 0$ and $\nabla_{W^{\pi}}\mathcal{L}^{TD} = 0$ , since $\mathcal{J}^{TD}$ is not parameterized by $w^v$ and $\mathcal{L}^{TD}$ is not parameterized by $W^{\pi}$ . This means that $W^{\pi}$ is tuned to maximize $\mathcal{J}^{TD}$ , $w^v$ is tuned to minimize $\mathcal{L}^{TD}$ and $\theta$ is tuned to balance both objectives.
+
+Since most optimizers, e.g. in TensorFlow or PyTorch, perform gradient descent rather than ascent, we can minimize the negative of the policy gradient objective (Eq. 27), which is equivalent to a TD-error-weighted negative log-likelihood
+
+$$
+\nabla_ {W ^ {\pi}, w ^ {v}, \theta} \mathcal {L} = - \nabla_ {W ^ {\pi}, w ^ {v}, \theta} \mathcal {J} ^ {T D} + \nabla_ {W ^ {\pi}, w ^ {v}, \theta} \mathcal {L} ^ {T D} \tag {43}
+$$
+
+$$
+= - \mathbb {E} \left[ \sum_ {t = 0} ^ {T} \nabla_ {W ^ {\pi}, w ^ {v}, \theta} \log \pi \left(g _ {t} \mid x _ {t}; W ^ {\pi}, \theta\right) \cdot \delta_ {t} \right] + \mathbb {E} \left[ \sum_ {t = 0} ^ {T} - \nabla_ {W ^ {\pi}, w ^ {v}, \theta} \tilde {v} \left(x _ {t}; w ^ {v}, \theta\right) \cdot \delta_ {t} \right] \tag {44}
+$$
+
+$$
+= \mathbb {E} \left[ \sum_ {t = 0} ^ {T} - \nabla_ {W ^ {\pi}, w ^ {v}, \theta} \log \pi \left(g _ {t} \mid x _ {t}; W ^ {\pi}, \theta\right) \cdot \delta_ {t} \right] + \mathbb {E} \left[ \sum_ {t = 0} ^ {T} - \nabla_ {W ^ {\pi}, w ^ {v}, \theta} \tilde {v} \left(x _ {t}; w ^ {v}, \theta\right) \cdot \delta_ {t} \right] \tag {45}
+$$
+
+$$
+= \nabla_ {W ^ {\pi}, w ^ {v}, \theta} \mathcal {L} _ {\pi} ^ {T D} + \nabla_ {W ^ {\pi}, w ^ {v}, \theta} \mathcal {L} _ {v} ^ {T D}. \tag {46}
+$$
+
+which is the same update rule used in Wang et al. (2018); Mnih et al. (2016) to train the actor and critic separately while the feature parameters are trained jointly.
+
+It is also possible to initialize two separate populations of place fields, each for the actor and critic. Alternatively, we only optimize place field parameters using the actor's objective while the critic uses the spatial features to learn the value function. The converse is also possible where the place field parameters and critic weights are optimized to minimize the TD error while the actor learns a policy without optimizing the spatial representations, as we did in the perturbative approximation (App. B). From numerical experiments, optimizing place field parameters using both the actor and critic objectives allowed the agent to achieve the fastest policy convergence and highest cumulative reward performance (Fig. S15).
+
+# A.5. Online update of place field and actor-critic parameters
+
+Now, we derive an online implementation of Eq. 6 which is the same as Eq. 42, so that all parameters are updated at every time step. Extending from Foster et al. (2000); Kumar et al. (2022), the actor and critic weights are updated according to the gradients
+
+$$
+\Delta \boldsymbol {w} ^ {v} (t + 1) = \eta \delta_ {t} \phi (x _ {t}), \quad \Delta \boldsymbol {W} ^ {\pi} (t + 1) = \eta \delta_ {t} \tilde {\boldsymbol {g}} _ {t} \phi (x _ {t}) ^ {\top}, \tag {47}
+$$
+
+where $\tilde{g}_{t} = g_{t} - P_{t}$ , the one-hot encoded selected action minus the vector of action probabilities $P_{t} = \pi(\cdot \mid x_{t})$ , and $\eta = 0.01$ . The gradient updates for place field parameters follow
+
+$$
+\Delta \boldsymbol {\theta} (t + 1) = \eta_ {\theta} \delta_ {t} \left(\boldsymbol {w} _ {v} (t) + \boldsymbol {W} _ {\pi} ^ {\top} (t) \cdot \tilde {\boldsymbol {g}} _ {t}\right) \nabla_ {\theta} \phi \left(x _ {t}; \boldsymbol {\theta}\right), \tag {48}
+$$
+
+where we use a significantly smaller learning rate $\eta_{\theta} = 0.0001$ so that the spatial representation evolves in a stable manner.
+
+Specifically, each field parameter is updated according to
+
+$$
+\delta_ {i, t} ^ {b p} = \delta_ {t} \left(w _ {i} ^ {v} (t) + W _ {j i} ^ {\pi} (t) \cdot \tilde {g} _ {t, j}\right), \tag {49}
+$$
+
+$$
+\Delta \alpha_ {i, t} = \eta_ {\alpha} \cdot \delta_ {i, t} ^ {b p} \cdot \phi_ {i} \left(x _ {t}\right) \cdot \left(\frac {2}{\alpha_ {i}}\right), \tag {50}
+$$
+
+$$
+\Delta \lambda_ {i, t} = \eta_ {\lambda} \cdot \delta_ {i, t} ^ {b p} \cdot \phi_ {i} \left(x _ {t}\right) \cdot \left(\frac {x _ {t} - \lambda_ {i}}{\sigma_ {i} ^ {2}}\right), \tag {51}
+$$
+
+where $\delta_{i,t}^{bp}$ is the TD error backpropagated through the actor and critic weights. Using only the $w_{i}^{v}(t)$ or only the $W_{ji}^{\pi}(t)$ weights to backpropagate the TD error influences the representation learned by the place field population and ultimately the navigation performance (Fig. S15).
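
The online loop implied by Eqs. 47-51 can be sketched as follows (our minimal reimplementation under assumed settings: 16 fields in 1D, a 3-action softmax policy over velocities, and $\tilde{g}_t$ taken as the one-hot action minus the action probabilities):

```python
import numpy as np

# One actor-critic step per time step: TD error (Eq. 28), weight updates
# (Eq. 47), and backpropagated place field updates (Eqs. 49-51).
rng = np.random.default_rng(2)
N, n_actions = 16, 3
eta, eta_theta, gamma = 0.01, 0.0001, 0.9
alpha = np.full(N, 0.5)                  # amplitudes
lam = np.linspace(-1, 1, N)              # field centers
sigma = np.full(N, 0.1)                  # field widths (held fixed here)
w_v = np.zeros(N)                        # critic weights
W_pi = np.zeros((n_actions, N))          # actor weights
actions = np.array([-0.1, 0.0, 0.1])     # velocities

def fields(x):
    return alpha**2 * np.exp(-0.5 * (x - lam)**2 / sigma**2)

x = -0.75                                # start location
for t in range(200):
    phi = fields(x)
    logits = W_pi @ phi
    p = np.exp(logits - logits.max()); p /= p.sum()
    a = rng.choice(n_actions, p=p)
    x_next = np.clip(x + actions[a], -1, 1)
    r = 1.0 if x_next >= 0.45 else 0.0   # reward zone near x = 0.5
    delta = r + gamma * (w_v @ fields(x_next)) - w_v @ phi  # Eq. 28 (no terminal handling)
    g_tilde = (np.arange(n_actions) == a) - p               # one-hot minus probabilities
    w_v += eta * delta * phi                                # Eq. 47 (critic)
    W_pi += eta * delta * np.outer(g_tilde, phi)            # Eq. 47 (actor)
    dbp = delta * (w_v + W_pi.T @ g_tilde)                  # Eq. 49
    alpha += eta_theta * dbp * phi * (2 / alpha)            # Eq. 50
    lam += eta_theta * dbp * phi * (x - lam) / sigma**2     # Eq. 51
    x = -0.75 if r > 0 else x_next       # reset on reward
```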
+
+There are two ways to optimize the place field width parameter. The first and straightforward method is to update the width parameter according to
+
+$$
+\Delta \sigma_ {i, k, t} = \eta_ {\sigma} \cdot \delta_ {i, t} ^ {b p} \cdot \phi_ {i, k} \left(x _ {t}\right) \cdot \left(\frac {\left(x _ {t} - \lambda_ {i}\right) ^ {2}}{\sigma_ {i , k} ^ {3}}\right), \tag {52}
+$$
+
+where $k = 1$ in a 1D place field. In a 2D place field with $k = 2$ , we can update the diagonal elements of the covariance matrix while keeping the off-diagonals at zero, as in Menache et al. (2005). However, fields can then only elongate along the coordinate axes. Instead, in our simulations, we optimized the off-diagonals using the same gradient flow equations. We then needed additional constraints so that each place field's covariance matrix remains 1) symmetric, 2) bounded, and 3) positive semi-definite, so that it can be inverted. Specifically, the entries of the covariance matrix were bounded within $[10^{-5}, 0.5]$ to prevent exploding widths and gradients.
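
A sketch of how such constraints can be enforced after each gradient step (our construction; the projection via eigenvalue clipping is one possible choice, not necessarily the paper's exact procedure):

```python
import numpy as np

def constrain_cov(S, hi=0.5, eps=1e-5):
    """Project a 2D covariance update back to a valid matrix:
    1) symmetrize, 2) bound the entries, 3) clip eigenvalues into
    [eps, hi] so the matrix stays positive definite and invertible."""
    S = 0.5 * (S + S.T)                  # 1) symmetric
    S = np.clip(S, -hi, hi)              # 2) bounded entries
    vals, vecs = np.linalg.eigh(S)
    vals = np.clip(vals, eps, hi)        # 3) positive (semi-)definite
    return vecs @ np.diag(vals) @ vecs.T

# A badly-behaved update: asymmetric, with a negative eigenvalue.
S = constrain_cov(np.array([[0.02, 0.9], [-0.3, -0.01]]))
print(np.allclose(S, S.T), np.linalg.eigvalsh(S).min() > 0)
```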
+
+# B. Derivation for perturbative expansion
+
+The dynamics of place field parameters are nonlinear and difficult to characterize analytically. To gain some analytical tractability, we impose a strong separation of timescales between policy learning updates and place field parameter updates. To do so, we set the learning rates for the actor-critic $\eta$ to be much larger than the learning rates for the place field parameters $\eta_{\alpha}, \eta_{\lambda}, \eta_{\sigma} \ll \eta$ . In simulations, we use $\eta = 0.01$ and $\eta_{\theta} = 0.0001$ .
+
+The critic estimates the value as
+
+$$
+v \left(x _ {t}\right) = \sum_ {i = 1} ^ {N} w _ {i} \phi_ {i} \left(x _ {t}, \boldsymbol {\theta} _ {i}\right), \tag {53}
+$$
+
+where $\theta_{i} = (\alpha_{i},\lambda_{i},\sigma_{i})$ are neuron-specific parameters (amplitude, mean, and bandwidth respectively). We write $w^{v}$ as $w$ for clarity. To start, consider the Gaussian tuning curve
+
+$$
+\phi_ {i} \left(x _ {t}, \boldsymbol {\theta} _ {i}\right) = \alpha_ {i} ^ {2} \exp \left(- \frac {1}{2 \sigma_ {i} ^ {2}} \left(x _ {t} - \lambda_ {i}\right) ^ {2}\right). \tag {54}
+$$
+
+We consider a TD based update, which in the gradient flow (infinitesimal learning rate) limit can be approximated as
+
+$$
+\frac {d}{d t} \boldsymbol {w} (t) = \boldsymbol {M} (t) \left(\boldsymbol {w} ^ {V} - \boldsymbol {w} (t)\right), \tag {55}
+$$
+
+$$
+\frac {d}{d t} \boldsymbol {\theta} _ {i} (t) = \epsilon w _ {i} (t) \mathbb {E} _ {x _ {t}} \nabla_ {\boldsymbol {\theta} _ {i}} \phi_ {i} \left(x _ {t}, \boldsymbol {\theta} _ {i}\right) \delta_ {t}, \tag {56}
+$$
+
+The key assumption we make is that the dimensionless ratio of learning rates, $\epsilon$ is perturbatively small
+
+$$
+\epsilon = \frac {\eta_ {\theta}}{\eta} \ll 1, \tag {57}
+$$
+
+where $\eta_{\theta}$ is the learning rate for the place field parameters $\theta_{i}$ and $\eta$ is the learning rate for the actor-critic. The matrix $M(t) = \Sigma (t) - \gamma \Sigma_{+}(t)$, where $\Sigma (t) = \langle \psi (x_t)\psi (x_t)^{\top}\rangle$ and $\Sigma_{+}(t) = \langle \psi (x_{t})\psi (x_{t + 1})^{\top}\rangle$, depends on the equal-time and one-step-shifted correlations of the features. The vector $\pmb{w}^{V} = \pmb{M}^{-1}\pmb{\Sigma}\pmb{w}_{R}$, where $\pmb{w}_R\cdot \pmb {\psi}(x) = R(x)$. We investigate a simple perturbation series,
+
+$$
+\boldsymbol {w} (t) = \boldsymbol {w} _ {0} (t) + \epsilon \boldsymbol {w} _ {1} (t) + \epsilon^ {2} \boldsymbol {w} _ {2} (t) + \dots
+$$
+
+$$
+\boldsymbol {\theta} (t) = \boldsymbol {\theta} _ {0} (t) + \epsilon \boldsymbol {\theta} _ {1} (t) + \epsilon^ {2} \boldsymbol {\theta} _ {2} (t) + \dots \tag {58}
+$$
+
+and examine the dynamics up to first order in $\epsilon$ . We will show that this recovers many qualitative features of the observed representational updates.
+
+The leading zeroth order dynamics are
+
+$$
+\frac {d}{d t} \boldsymbol {\theta} _ {0} (t) = 0, \frac {d}{d t} \boldsymbol {w} _ {0} (t) = \boldsymbol {M} _ {0} (\boldsymbol {w} _ {V} - \boldsymbol {w} _ {0} (t)), \tag {59}
+$$
+
+where $M_0 = \Sigma(0) - \gamma \Sigma_+(0)$ is the initial feature covariance under the initial policy.
+
+# B.1. Place Field Amplitude
+
+We start by asserting a separation of timescales between training readout weights and feature parameters during a simple TD learning setup
+
+$$
+\frac {d}{d t} w _ {i} (t) = \sum_ {j} M _ {i j} \left(w _ {j} ^ {V} - w _ {j}\right), \tag {60}
+$$
+
+$$
+\frac {d}{d t} \alpha_ {i} (t) = \epsilon \frac {2}{\alpha_ {i} (t)} w _ {i} \sum_ {j} M _ {i j} \left(w _ {j} ^ {V} - w _ {j}\right), \tag {61}
+$$
+
+The zeroth-order solution to Eq. 55 is
+
+$$
+\Delta \boldsymbol {w} _ {0} (t) \equiv \boldsymbol {w} _ {V} - \boldsymbol {w} _ {0} (t) = \exp (- M t) \boldsymbol {w} _ {V}, \tag {62}
+$$
+
+$$
+\boldsymbol {w} _ {0} (t) = [ \boldsymbol {I} - \exp (- M t) ] \boldsymbol {w} _ {V}, \tag {63}
+$$
+
+which can be substituted in to obtain the first-order correction to the amplitude dynamics
+
+$$
+\frac {d}{d t} \boldsymbol {\alpha} _ {1} (t) = 2 \boldsymbol {\alpha} _ {0} ^ {- 1} \odot [ \boldsymbol {I} - \exp (- \boldsymbol {M} t) ] \boldsymbol {w} _ {V} \odot \boldsymbol {M} \exp (- \boldsymbol {M} t) \boldsymbol {w} _ {V}. \tag {64}
+$$
+
+Under the condition that $\alpha_0 = 1$ and $M = M^{\top}$ we can work out an exact expression in terms of the eigendecomposition $M = \sum_{k} \lambda_k u_k u_k^\top$
+
+$$
+\boldsymbol {\alpha} _ {1} (t) = 2 \sum_ {k \ell} (\boldsymbol {w} _ {V} \cdot \boldsymbol {u} _ {k}) (\boldsymbol {u} _ {\ell} \cdot \boldsymbol {w} _ {V}) (\boldsymbol {u} _ {k} \odot \boldsymbol {u} _ {\ell}) \left[ \left(1 - e ^ {- \lambda_ {k} t}\right) - \frac {\lambda_ {k}}{\lambda_ {k} + \lambda_ {\ell}} \left(1 - e ^ {- \left(\lambda_ {k} + \lambda_ {\ell}\right) t}\right) \right], \tag {65}
+$$
+
+we can approximate this at late times as
+
+$$
+\lim _ {t \rightarrow \infty} \boldsymbol {\alpha} _ {1} (t) \approx 2 \boldsymbol {w} _ {V} \odot \boldsymbol {w} _ {V}. \tag {66}
+$$
+
+This indicates that neurons which are heavily involved in reconstructing the value function have their amplitudes upweighted.
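
The zeroth-order solution in Eq. 63, which drives these amplitude dynamics, can be checked numerically (a sketch with a randomly drawn symmetric positive-definite $M$; the matrix size and integration settings are arbitrary choices):

```python
import numpy as np

# Check that w_0(t) = [I - exp(-M t)] w_V solves dw/dt = M (w_V - w), w(0) = 0,
# by comparing the closed form against forward-Euler integration.
rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n))
M = A @ A.T + n * np.eye(n)              # symmetric, positive definite
w_V = rng.standard_normal(n)

evals, U = np.linalg.eigh(M)
def w0(t):                               # closed form via eigendecomposition of M
    return U @ ((1 - np.exp(-evals * t)) * (U.T @ w_V))

w, dt = np.zeros(n), 1e-4
for _ in range(10000):                   # integrate the ODE up to t = 1
    w = w + dt * M @ (w_V - w)

print(np.allclose(w, w0(1.0), atol=1e-2))
```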
+
+# B.2. Field Center
+
+Based on the place field center update equation and rewriting the terms as above,
+
+$$
+\frac {d}{d t} \lambda_ {i} (t) \approx \epsilon \frac {x _ {t} - \lambda_ {i}}{\sigma_ {i} ^ {2}} w _ {i} \phi_ {i} (x) \sum_ {j} \phi_ {j} (x) \left(w _ {j} ^ {V} - w _ {j}\right). \tag {67}
+$$
+
+We need to compute an average over spatial positions. Early in training, we approximate the position distribution as a Gaussian with mean $\bar{\mu}_x$ and variance $\sigma_x^2$
+
+$$
+\left\langle \frac {\left(x _ {t} - \lambda_ {i}\right)}{\sigma^ {2}} \phi_ {i} (x) \phi_ {j} (x) \right\rangle \approx \frac {\mu_ {i j} - \lambda_ {i}}{\sigma^ {2}} M _ {i j}, \tag {68}
+$$
+
+where $\mu_{ij} = \left(\frac{2}{\sigma^2} +\frac{1}{\sigma_x^2}\right)^{-1}\left(\frac{1}{\sigma^2} (\lambda_i + \lambda_j) + \frac{1}{\sigma_x^2}\bar{\mu}_x\right)$ is the mean value of $x$ obtained by the above Gaussian integral under the approximation that $p(x)\sim \mathcal{N}(\bar{\mu}_x,\sigma_x^2)$ . Approximating $\lambda_{j}$ as the mean position of the tuning curves $\bar{\lambda}$ we obtain the following prediction
+
+$$
+\boldsymbol {\lambda} (t) - \boldsymbol {\lambda} (0) \approx \epsilon \boldsymbol {w} ^ {V} \odot \left[ \left(\frac {2}{\sigma^ {2}} + \frac {1}{\sigma_ {x} ^ {2}}\right) ^ {- 1} \left(\frac {1}{\sigma^ {2}} (\boldsymbol {\lambda} (0) + \bar {\lambda}) + \frac {1}{\sigma_ {x} ^ {2}} \bar {\mu} _ {x}\right) - \boldsymbol {\lambda} (0) \right] \odot [ \boldsymbol {I} - \exp (- M t) ] \boldsymbol {w} ^ {V}. \tag {69}
+$$
+
+Following the solution in Eq. 63, we can approximate this at late times as
+
+$$
+\lim _ {t \rightarrow \infty} \boldsymbol {\lambda} (t) - \boldsymbol {\lambda} (0) \approx \epsilon \boldsymbol {w} ^ {V} \odot \left[\left(\frac {2}{\sigma^ {2}} + \frac {1}{\sigma_ {x} ^ {2}}\right) ^ {- 1} \left(\frac {1}{\sigma^ {2}} (\boldsymbol {\lambda} (0) + \bar {\lambda}) + \frac {1}{\sigma_ {x} ^ {2}} \bar {\mu} _ {x}\right) - \boldsymbol {\lambda} (0) \right] \odot \boldsymbol {w} ^ {V}. \tag {70}
+$$
+
+Hence, in addition to the value of a location, three additional factors influence each field's displacement.
+
+$$
+\lambda_ {i} (t) - \lambda_ {i} (0) \approx \frac {\eta_ {\lambda}}{\eta} \left(\frac {2}{\sigma_ {i} ^ {2}} + \frac {1}{\sigma_ {x} ^ {2}}\right) ^ {- 1} \left[ \frac {\bar {\lambda} - \lambda_ {i} (0)}{\sigma_ {i} ^ {2}} + \frac {\bar {\mu} _ {x} - \lambda_ {i} (0)}{\sigma_ {x} ^ {2}} \right] w _ {v, i} ^ {2} (t), \eta_ {\lambda} \ll \eta , \tag {71}
+$$
+
+where $\bar{\lambda}$ is the agent's expected location sampled from its policy, $\bar{\mu}_x = -0.75$ is the starting location and $\sigma_x$ is the estimated spread of the trajectory. This analysis suggests that fields will be influenced by both the start location and the locations where the agent spends a higher proportion of time. In later learning phases, this will be the reward location $\bar{\lambda} = 0.5$ . Consequently, only the fields near the reward location will shift towards the reward, while the rest of the fields will move towards the start location. We illustrate this perturbative approximation at early and late times of training in Figure 5. The theory is quite accurate early in training, but fails at sufficiently long training times.
+
+
+Figure 5. Difference in early versus late time perturbative approximation. Blue scatter points show the magnitude and direction of change in $(N = 256)$ field center position compared to the position at which the fields were initialized $(\lambda_i(T) - \lambda_i(0))$ . (A) In early time, the perturbative expansion is a good fit to the field center displacement, and captures the shift in fields towards the reward location $x_r = 0.5$ (red) (B) As learning proceeds, the approximation begins to break down for fields further from the reward location. Free parameters were fit with $\bar{\lambda} = 0.535$ and $\sigma_x = 0.45$ .
+
+
+
+# C. Successor Representation agent
+
+The generalized temporal difference error is given by
+
+$$
+\delta_ {t, j} ^ {S R} = \phi_ {j} \left(x _ {t}\right) + \gamma \psi_ {j} ^ {\pi} \left(x _ {t + 1}\right) - \psi_ {j} ^ {\pi} \left(x _ {t}\right), \tag {72}
+$$
+
+with $\psi^{\pi}$ representing the predicted successor representation and $\phi (x)$ representing the initialized place field representation, which is not optimized.
+
+$$
+\psi_ {j} ^ {\pi} \left(x _ {t}\right) = \sum_ {i} ^ {N} \left[ U _ {j i} \right] _ {+} \phi_ {i} \left(x _ {t}\right), \tag {73}
+$$
+
+The successor representation is computed as a summation of the place fields weighted by a learned matrix $U$ that is positively rectified. The rectification is necessary to keep the representation non-negative.
+
+$$
+\Delta U _ {t} = \delta_ {t} ^ {S R} \cdot \phi \left(x _ {t}\right) ^ {\top}, \tag {74}
+$$
+
+The matrix $U$ is initialized as an identity matrix and is updated with a two-factor rule modulated by the TD error, as in Gardner et al. (2018).
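
A minimal sketch of this SR learning rule (our construction; the fixed rightward policy, field placement, and learning rate are illustrative assumptions):

```python
import numpy as np

# SR features psi = [U]_+ phi (Eq. 73), with U trained by the generalized
# TD error (Eq. 72) and the two-factor rule (Eq. 74) on a 1D track.
rng = np.random.default_rng(4)
N, gamma, eta = 20, 0.9, 0.05
centers = np.linspace(-1, 1, N)

def phi(x):                              # fixed (non-optimized) place fields
    return np.exp(-0.5 * (x - centers)**2 / 0.05**2)

U = np.eye(N)
for _ in range(2000):
    x = rng.uniform(-1, 0.9)
    x_next = x + 0.1                     # deterministic rightward policy
    psi = np.maximum(U, 0) @ phi(x)
    psi_next = np.maximum(U, 0) @ phi(x_next)
    delta_sr = phi(x) + gamma * psi_next - psi     # Eq. 72
    U += eta * np.outer(delta_sr, phi(x))          # Eq. 74 (two-factor rule)

# Under a rightward policy the SR at x = 0 puts more mass on fields ahead.
psi0 = np.maximum(U, 0) @ phi(0.0)
print(psi0[centers > 0.05].sum(), psi0[centers < -0.05].sum())
```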
+
+# D. Metric Representation learning objective
+
+The hippocampus is known to learn and represent spatial maps even in the absence of rewards, enabling rapid navigation to new locations when required (Tolman, 1948; Steele & Morris, 1999; Tse et al., 2007). This requires reorganization of place fields in non-rewarded conditions, which has been proposed as a mechanism for learning a predictive map that estimates future spatial occupancy (Mehta et al., 1997; Stachenfeld et al., 2017). To describe this non-reward-based reorganization, the successor representation algorithm (Dayan, 1993) has been used. More recently, an auxiliary predictive objective has been proposed (Fang & Stachenfeld, 2023).
+
+Here, we present a simple predictive objective for place field reorganization that is independent of rewards. We introduce a previously described objective called the Metric Representation (MR), which learns a low-dimensional representation of an environment using place field activity and a biologically plausible learning rule that is modulated by a path integration-derived Temporal Difference error. This representation allows an agent to predict its current coordinates $z(x_{t})$ and perform vector subtraction to rapidly navigate to recalled goals (Foster et al., 2000; Kumar et al., 2024). However, representation learning was not studied using this objective. Recently, a similar objective was proposed to learn a spatial map using local learning rules, although as a high-dimensional representation (Stöckl et al., 2024).
+
+The dimensionality of the coordinate prediction $z(x_{t})$ is equal to the dimensionality of the environment, calculated through a linear summation of place field activity:
+
+$$
+z _ {j} \left(x _ {t}\right) = \sum_ {i} ^ {N} W _ {j i} ^ {M R} \phi_ {i} \left(x _ {t}\right), \quad z \in \mathbb {R} ^ {D}. \tag {75}
+$$
+
+When the agent accurately predicts its coordinates in the environment, the following path integration-derived self-consistency equation holds:
+
+$$
+z _ {j} \left(x _ {t + 1}\right) \equiv z _ {j} \left(x _ {t}\right) + a _ {j} \left(x _ {t}\right), \tag {76}
+$$
+
+$$
+z _ {j} \left(x _ {t + 1}\right) - z _ {j} \left(x _ {t}\right) - a _ {j} \left(x _ {t}\right) \equiv 0, \tag {77}
+$$
+
+where $a_{j}(x_{t})$ is the true displacement of the agent in the environment. However, if the prediction is inaccurate, Eq. 77 can be reformulated into a temporal difference error for each dimension $j$ of the environment as described by Foster et al. (2000); Kumar et al. (2024):
+
+$$
+\chi_ {t, j} = z _ {j} \left(x _ {t + 1}\right) - z _ {j} \left(x _ {t}\right) - a _ {j} \left(x _ {t}\right), \quad \boldsymbol {\chi} _ {t} \in \mathbb {R} ^ {D}. \tag {78}
+$$
+
+This one-step prediction error $(\boldsymbol{\chi}_t)$ can be expressed as a loss function, similar to Fang & Stachenfeld (2023) but without the temporal discounting factor:
+
+$$
+\mathcal {L} ^ {M R} = \mathbb {E} _ {g \sim \pi} \left[ \sum_ {t = 0} ^ {T} \frac {1}{2} \boldsymbol {\chi} _ {t} ^ {2} \right] = \mathbb {E} \left[ \sum_ {t = 0} ^ {T} \frac {1}{2} \left(\boldsymbol {z} \left(x _ {t + 1}; W ^ {M R}\right) - \boldsymbol {z} \left(x _ {t}; W ^ {M R}\right) - \boldsymbol {a} \left(x _ {t}\right)\right) ^ {2} \right], \tag {79}
+$$
+
+which can be minimized by gradient descent, optimizing both the coordinate readout weights $(W^{MR})$ and the place field parameters $(\theta \in \{\alpha, \lambda, \sigma\})$ :
+
+$$
+\nabla_ {W ^ {M R}, \theta} \mathcal {L} ^ {M R} = \mathbb {E} \left[ \sum_ {t = 0} ^ {T} \boldsymbol {\chi} _ {t} \nabla \phi \left(x _ {t}; \theta\right) ^ {\top} \right]. \tag {80}
+$$
+
+The gradient updates were implemented in an online manner:
+
+$$
+\Delta \mathbf {W} ^ {M R} (t + 1) = \eta \boldsymbol {\chi} _ {t} \phi \left(x _ {t}\right) ^ {\top}, \tag {81}
+$$
+
+$$
+\Delta \theta (t + 1) = \eta_ {\theta} \mathbf {W} _ {M R} ^ {\top} (t) \boldsymbol {\chi} _ {t} \nabla_ {\theta} \phi \left(x _ {t}; \theta\right). \tag {82}
+$$
+
+We can analyze how the different temporal difference residues (both the canonical reward-dependent and newly proposed metric representation-based) influence place field reorganization and agent policy learning performance by propagating the residues through a combination of actor, critic, and metric representation weights:
+
+$$
+\boldsymbol {\delta} _ {t} ^ {b p} = \delta_ {t} \left(\beta_ {v} \boldsymbol {w} _ {v} (t) + \beta_ {\pi} \boldsymbol {W} _ {\pi} ^ {\top} (t) \cdot \tilde {g} _ {t}\right) + \beta_ {M R} \mathbf {W} _ {M R} ^ {\top} (t) \boldsymbol {\chi} _ {t}, \tag {83}
+$$
+
+$$
+\Delta \theta_ {i} (t + 1) = \eta_ {\theta} \boldsymbol {\delta} _ {t} ^ {b p} \nabla_ {\theta} \phi \left(x _ {t}; \theta\right). \tag {84}
+$$
+
+This can be done by setting the weighting of each component $\beta_v, \beta_\pi, \beta_{MR} \in \{0,1\}$ . Refer to Fig. S15 for policy convergence performance in both the 1D and 2D environments when using different combinations to learn place field representations.
+
+# E. Details for noisy field updates
+
+To induce drift, we independently introduced noise to the field amplitudes, centers and widths, as well as to the synapses onto the actor and critic $(\theta \in \{\alpha ,\lambda ,\sigma ,w^v,W^\pi \})$ :
+
+$$
+\theta_ {t + 1} = \theta_ {t} + \xi_ {t}, \tag {85}
+$$
+
+where the noise terms $\xi_{t}$ are independent zero-mean Gaussian variables with standard deviation $\sigma_{noise} \in [10^{-6}, 10^{-1}]$ . We performed a noise sweep to determine how increasing the noise magnitude affects the agent's reward maximization behavior, population vector correlation and representation similarity. Refer to Fig. S7.
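
The noise injection in Eq. 85 amounts to a few lines (our sketch; the parameter shapes and the swept values shown are illustrative):

```python
import numpy as np

# Independent zero-mean Gaussian noise added to every plastic parameter at
# each step (Eq. 85); drift magnitude grows roughly as sigma_noise * sqrt(steps).
rng = np.random.default_rng(6)
N, n_actions = 64, 3
params = {
    "alpha": np.full(N, 0.5),            # amplitudes
    "lam": np.linspace(-1, 1, N),        # centers
    "sigma": np.full(N, 0.1),            # widths
    "w_v": np.zeros(N),                  # critic synapses
    "W_pi": np.zeros((n_actions, N)),    # actor synapses
}

def noisy_step(p, sigma_noise):
    # theta_{t+1} = theta_t + xi_t, applied independently to each parameter
    return {k: v + rng.normal(0.0, sigma_noise, v.shape) for k, v in p.items()}

for sigma_noise in [1e-6, 1e-3, 1e-1]:
    drifted = params
    for _ in range(100):
        drifted = noisy_step(drifted, sigma_noise)
    print(sigma_noise, np.abs(drifted["lam"] - params["lam"]).mean())
```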
+
+
+Figure S1. Influence of place field parameter optimization for a single seed. Example change in individual field's spatial selectivity $(\phi(x), \text{colored})$ , mean firing activity at a location $\left(\sum_{i}^{N} \phi_{i}(x)\right)$ , field density which is the number of Center of Mass (COM) in a location after smoothing with a Gaussian kernel density estimate (gKDE) (gKDE(COM), blue) and, the frequency of being in a location $(p_{RM}(x))$ , when optimizing different combinations of field parameters $(\alpha, \lambda, \sigma)$ during reward maximization (RM). The location in which the highest value for mean firing activity, field density and frequency is attained is indicated by a red, blue and black vertical dash line respectively. Optimizing a (A) small number $(N = 16)$ and (B) large number of place fields yields a similar high mean firing rate at the reward location followed by the start location. However, the field density evolves differently when in the low field regime, (A) a high density emerges at the reward location in the early stages of learning, but it shifts to the start location at later stages of learning. This effect was observed in a recent experiment where place fields which initially encoded the reward location, gradually shifted backward towards the corresponding start location. This shift led to a decrease in place fields specifically coding for the reward, suggesting that the hippocampal representation reorganizes to predictively code for the reward (Yaghoubi et al., 2024). Whether experiments demonstrate such misalignment between place field density and mean firing rate needs to be analyzed. Based on the ablation studies (Fig. 4A,B), mean firing rate will be a stronger indicator of learning performance than field density. (B) In the high field regime, a high field density at the reward location remains stable throughout learning. Note that COM changes only when the place field centers are optimized $(\Delta \lambda)$ . 
+Distribution is shown for a single seed run for a homogeneous place field population that has been initialized with equal spacing between field centers $(\lambda \in [-1,1])$ , equal amplitude $(\alpha = 0.5)$ and width $(\sigma = 0.01)$ . Refer to Fig. S2 for general place field reorganization over different seeds.
+
+(Figure S2 panel titles: A. Fields with constant spacing, width, amplitude. B. Heterogeneous place field population.)
+
+Figure S2. Average change in field density and mean firing rate for different number of place fields. Vertical blue and red dash lines indicate the location with the highest density and mean firing rate, with the legend indicating the location $(x)$ . (A) Homogeneous place field distribution was initialized with field parameters similar to Fig. S1, equal spacing between field centers $(\lambda \in [-1,1])$ , equal amplitude $(\alpha = 0.5)$ and equal width $(\sigma = 0.01)$ . (B) All place field parameters center $(\lambda)$ , amplitude $(\alpha)$ , and width $(\sigma)$ were initialized by sampling from a uniform distribution between $[-1,1]$ , $[0,1]$ , $[10^{-5}, 0.1]$ respectively to model a heterogeneous place field population. Learning rates for the place field parameters and actor-critic were $\eta_{\theta} = 0.0001$ and $\eta = 0.01$ respectively. Shaded area is $95\%$ CI over 50 different seeds.
+
+Figure S3. A small proportion of reward-encoding place fields shift to the new reward location. Agents with $N = 256$ place fields and Gaussian noise injected into field parameters $(\sigma_{noise} = 0.0001)$ were trained to navigate to a reward location at $x_r = 0.75$ for 50,000 trials; thereafter, the reward location was shifted to $x_r = -0.2$ for the next 50,000 trials. (A) Place field density at the start of learning was uniformly distributed (left) and increased near the first reward location by the end of the first 50,000 trials (center). After the shift in reward location, a high density of fields emerged at the new reward location (right). The black line shows the learned policy, where a velocity of 0.1 (-0.1) indicates moving right (left). Agents learn to navigate to the reward location both before and after the shift. (B) Example distribution of individual place fields before learning (left), before the shift (center) and after the shift (right). All place field parameters $\lambda$ , $\alpha$ , and $\sigma$ were initialized by sampling from uniform distributions over $[-1,1]$ , $[0,1]$ and $[10^{-5}, 0.1]$ respectively to model a heterogeneous place field population. Notice the backward shift of some place fields from the initial reward location to the new reward location. (C) About $2.6\%$ of the place fields coding for the initial reward at $x_r = 0.75$ (green dots) shifted to the new reward location at $x_r = -0.2$ (about 19 of the 734 green dots are within the blue circle). Other place fields at $x_r = -0.2$ increased their firing rate to encode the new reward location. We see a large number of fields shifting backward, though not entirely to the new reward location. Shaded area shows $95\%$ CI for 10 seeds of agents with 256 place fields each. Black and green dots show a total of 2560 place fields across all 10 agents.
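The bookkeeping behind the $2.6\%$ figure can be sketched as a simple COM test. This is a toy helper, not the paper's analysis code; the tolerance `radius` defining "coding for" a location is an assumed parameter.

```python
import numpy as np

def shifted_fraction(coms_before, coms_after, old_r, new_r, radius=0.1):
    """Fraction of fields coding the old reward location whose COM ends up
    within `radius` of the new reward location. `radius` is an assumed
    tolerance, not a value from the paper."""
    coding_old = np.abs(coms_before - old_r) < radius   # green dots
    at_new = np.abs(coms_after - new_r) < radius        # inside the blue circle
    n_old = coding_old.sum()
    return float((coding_old & at_new).sum()) / n_old if n_old else 0.0

# Tiny worked example: two fields coded 0.75, one of them moved to -0.2
frac = shifted_fraction(np.array([0.75, 0.74, 0.0]),
                        np.array([-0.2, 0.70, 0.0]),
                        old_r=0.75, new_r=-0.2)
```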
+
+
+
+
+
+
+Figure S4. Weak feature learning with a large number of place fields. Critic $w_{i}^{v}$ and actor $W_{ji}^{\pi}$ weights were initialized by sampling from a normal distribution $\mathcal{N}(0,10^{-5})$ , irrespective of the number of place fields $N$ , similar to Foster et al. (2000); Kumar et al. (2022); Frémaux et al. (2013). (A) Homogeneous place field population: place field parameters were initialized with equal spacing between field centers $(\lambda \in [-1,1])$ , equal amplitude $(\alpha = 0.5)$ and equal width $(\sigma = 0.01)$ . (B) Heterogeneous place field population: all place field parameters, center $(\lambda)$ , amplitude $(\alpha)$ , and width $(\sigma)$ , were initialized by sampling from uniform distributions over $[-1,1]$ , $[0,1]$ and $[10^{-5},0.1]$ respectively. (A-B) The sum of the L2 norms between each place field's initialized and final center $\lambda$ , amplitude $\alpha$ and width $\sigma$ decreases as the number of available fields increases. Hence, as the number of fields increases, the change in each place field's parameters becomes smaller. This suggests a weak feature learning regime at large $N$ . (C) Similar to Fig. 1D. Density at the reward location $d(x_{r})$ compared to a non-reward location $d(x')$ decreases with a higher number of fields. (D) The mean firing rate at the reward location $\sum \phi(x_{r})$ compared to a non-reward location $\sum \phi(x')$ decreases with a higher number of fields. (C-D) Density and mean firing rate at the reward location are proportional to the reward magnitude $(R_{max})$ , and inversely proportional to the size of the reward location $(R_{size})$ . Error bars show $95\%$ CI over 50 different seeds.
+
+Figure S5. SR and MR agent architecture, and representation dynamics. (A) Successor Representation (SR) agent architecture to learn a navigational policy and the SR place fields. Only the synapses from the initialized place fields $(\phi_{fixed})$ to the actor (red) and critic (green), and the synapses $(U)$ to the SR fields $(\psi)$ , were plastic. Refer to App. C for implementation details. (B) Left: the Metric Representation (MR) agent architecture learns to predict the agent's coordinates in an environment. The coordinate readout is a linear summation of place field activity, and its dimension is the same as the displacement in the environment. The agent learns to predict its coordinates by minimizing a path-integration-derived temporal difference error $\chi_t$ . The gradient updates are performed on both the coordinate readout weights $W_{ji}^{MR}$ and the place field parameters $\alpha, \lambda, \sigma$ . The agent learns to navigate to the reward location only by updating the actor and critic weights, without influencing place field parameters. Refer to App. D for details. Hence, place fields in the MR agent will reorganize even in the absence of rewards. Right: change in the MR agent's coordinate estimation in a 1D track across trials $(T = 0, 1, 10, 50000)$ . Coordinate estimation was close to zero at $W_{ji}^{MR}$ initialization. After 10 trials, the agent starts to show a monotonic increase in coordinate estimation as it moves from $x = -1$ to $x = 1$ . By 50,000 trials, the agent's coordinate estimation becomes stable. (C) Average change in 16 and 64 place fields' size (firing rate greater than $10^{-3}$ in the track) (top row) and center of mass (bottom row) when SR, RM and MR agents navigate in a 1D track, with the absolute change reflected on the y axis. Shaded area shows $95\%$ CI over 5 seeds for agents with 16 and 64 place fields. (D) Spatial representation similarity matrices for the SR (top row), RM (middle row) and MR (bottom row) agents in a 1D track, visualized by taking the dot product of the place field activity at each location. (E) Difference in the correlation of the proportion of time spent at each location between the SR, RM and MR agents. (F) The correlations between the individual field firing rates learned by the SR, RM, and MR agents rapidly diverge but remain positive. (G) The correlations between the spatial representation similarity matrices (purple) learned by SR, RM and MR rapidly diverge in the early learning phase but stabilize and remain positive in later phases.
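The MR agent's coordinate learning described in panel (B) can be sketched as a semi-gradient update on a path-integration TD error. This is a minimal stand-in for the scheme of Foster et al. (2000) as used here, not the paper's implementation: the learning rate, field widths, velocity, and number of traversals are all assumed values, and the paper's variants additionally use eligibility traces.

```python
import numpy as np

def fields(x, lams, alphas, sigmas):
    # Gaussian place field population activity at position x
    return alphas * np.exp(-(x - lams) ** 2 / (2 * sigmas ** 2))

N = 64
lams = np.linspace(-1, 1, N)
alphas = np.full(N, 0.5)
sigmas = np.full(N, 0.1)
W = np.zeros(N)              # coordinate readout weights W^MR
eta, v, dt = 0.1, 0.1, 0.05  # assumed learning rate, velocity, time step

for _ in range(150):                       # repeated left-to-right traversals
    x = -1.0
    phi_t = fields(x, lams, alphas, sigmas)
    while x < 1.0:
        x_next = x + v * dt
        phi_next = fields(x_next, lams, alphas, sigmas)
        # path-integration TD error: previous estimate plus displacement,
        # minus the current coordinate readout
        chi = (W @ phi_t + v * dt) - (W @ phi_next)
        W += eta * chi * phi_next          # move x_hat(x_next) toward target
        x, phi_t = x_next, phi_next

# Readout should now increase monotonically along the track
x_hat = np.array([W @ fields(xi, lams, alphas, sigmas)
                  for xi in np.linspace(-0.9, 0.9, 10)])
```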
+
+[Figure panels A-E; axis labels: $\psi_{i}^{SR}(x)$ , $\phi_i^{MR}(x)$ , $\phi_i^{RM}(x)$ ]
+
+Figure S6. Field elongation in a 2D arena. (A-C) 2D place field distortion dynamics for the SR (A), RM (B), and MR (C) agents as learning proceeds. Numbers in yellow on the obstacle indicate (Field ID)-(Maximum firing rate). (D) Average change in 256 place fields' size (top row) and center of mass (COM) (bottom row) when SR, RM and MR agents navigate in a 2D arena, with the absolute change reflected on the y axis. Area was determined by computing where the firing rate was greater than $10^{-3}$ in the arena. The 2D arena was divided into three sub-areas to track COM movement: 1) away from the reward location, 2) the corridor from right to left, and 3) towards the start location. All three agents showed an increase in field area and a backward COM shift towards the start location. Shaded area shows $95\%$ CI over 3 seeds. (E) Change in coordinate readout weights in a 2D environment. Each plot shows the synaptic weights $W_{ji}^{MR}$ from place fields to the $x$ (top row) and $y$ (bottom row) dimensions of the 2D environment respectively. Weights were randomly initialized at trial 0. As the agent explores the environment, the weights converge to reflect a spatial map where the coordinate estimates for the $X$ and $Y$ axes increase monotonically as the agent moves left to right and bottom to top respectively, similar to Foster et al. (2000); Kumar et al. (2024), which used a similar path-integration TD error but with eligibility traces instead.
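The 2D fields whose elongation is tracked here are Gaussians with a covariance matrix $\Sigma$. A minimal sketch, assuming a diagonal $\Sigma$ and the width clipping mentioned in Fig. S12 (the clip range and amplitude scaling are illustrative, not the paper's exact parameterization):

```python
import numpy as np

def field_2d(pos, center, widths, alpha=1.0):
    """2D Gaussian place field with diagonal covariance Sigma = diag(widths).
    Widths are clipped to an assumed range so Sigma stays invertible."""
    w = np.clip(np.asarray(widths, dtype=float), 1e-5, 0.5)
    d = np.asarray(pos) - np.asarray(center)
    Sigma_inv = np.diag(1.0 / w)                  # inverse of diagonal covariance
    return alpha * np.exp(-0.5 * d @ Sigma_inv @ d)

# An elongated field: wide along x, narrow along y
f_along = field_2d((0.3, 0.0), (0.0, 0.0), (0.2, 0.01))    # along the long axis
f_across = field_2d((0.0, 0.3), (0.0, 0.0), (0.2, 0.01))   # across the short axis
```

Elongation along the trajectory corresponds to one diagonal entry of $\Sigma$ growing, so activity falls off slowly along that axis (`f_along` ≫ `f_across` at equal distance).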
+
+Figure S7. Noise amplitude monotonically influences population vector correlation and agent performance. Adding Gaussian noise of increasing magnitude $[5 \times 10^{-7}, 10^{1}]$ either to the field parameters $(\alpha, \lambda, \sigma)$ or to the Actor-Critic weights $(W_{\pi}, w_{v})$ influences the variance in the Population Vector Correlation $(R_{PV}, \text{blue})$ , the Spatial Representation Similarity, which is the dot product of field activity $(R_{RS}, \text{orange})$ , and the cumulative discounted reward $(G, \text{green})$ . Low variance of $R_{PV}$ and $R_{RS}$ indicates high correlation as learning progresses. Low variance in $G$ indicates stable performance. As the noise amplitude increases, $G$ first increases and then decreases; past this point, the agent's navigation performance collapses and the agent achieves 0 reward with low variance. A high ratio of the variance in population vector correlation to that in reward maximization behavior $(R_{PV}/G, \text{red})$ indicates that there is an optimal noise amplitude which causes high variance in population vector correlation (low PV correlation) while maintaining stable performance. A similar analysis can be performed using representational similarity $(R_{PV}/R_{RS}, \text{purple})$ to determine the optimal noise amplitude for high variance in population vector correlation but stable representation similarity, as seen in Qin et al. (2023). Note that our agents only optimize for navigation behavior, not representation similarity.
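The two correlation measures can be sketched as follows. This is an illustrative computation on synthetic activity maps, not the paper's analysis pipeline; the map sizes and noise levels are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

def pv_correlation(A, B):
    """Mean per-location Pearson correlation of population vectors.
    A, B: (n_fields, n_locations) activity maps from two time points."""
    r = [np.corrcoef(A[:, j], B[:, j])[0, 1] for j in range(A.shape[1])]
    return float(np.mean(r))

def similarity(A):
    # spatial representation similarity: dot products of activity across locations
    return A.T @ A

A = rng.random((64, 50))                       # baseline population activity
small = A + 0.01 * rng.normal(size=A.shape)    # mild parameter noise
large = A + 0.30 * rng.normal(size=A.shape)    # strong parameter noise

r_pv_small = pv_correlation(A, small)
r_pv_large = pv_correlation(A, large)
r_rs_small = np.corrcoef(similarity(A).ravel(), similarity(small).ravel())[0, 1]
```

Increasing the injected noise lowers `r_pv_*`, mirroring the figure's trend of $R_{PV}$ variance growing with noise amplitude.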
+
+[Figure S8 panels A-G]
+
+Figure S8. Influence of noisy fields on agent performance and field representation. (A) Reward maximization performance variability increases when noise magnitude increases. (B) With no noise injection, variance in parameter update is initially positively correlated with field amplitude (blue). When a small amount of noise is added, fields with a larger mean amplitude show a smaller variance in change in parameter while fields with a smaller amplitude show higher variance. Conversely, when the magnitude of noise is further increased (purple), fields with a higher amplitude show higher variance in its parameters. (C) The correlation between mean amplitude and the magnitude of the readout weights (sum over all actions for squared actor weights and squared critic weights) is high and positively correlated when the noise magnitude is low. This correlation decreases and becomes weakly positive when $\sigma_{noise} = 0.001$ . This supports the claim that in the low noise regime, fields with a high amplitude are more involved in policy learning and hence drift less or are more stable to maintain performance integrity. (D) Population vector correlation decreases at a faster rate than the similarity matrix when noise magnitude increases. (E) Representation similarity correlation decreases as the noise magnitude increases, but at a slower rate than PV correlation. (F) Proportion of fields that are active (average fraction of fields with firing rate less than 0.05, 0.1, 0.25) continues to increase with higher noise magnitude. (G) Introducing Gaussian noise with zero mean and variance $N(0, 0.00025)$ to place field parameters during updates $\theta_{t+1} = \theta_t + \xi_t$ caused each place field's center, firing rate and width to fluctuate as trials progressed. See App. E for details. This causes each field's spatial selectivity to change over time. 
Specifically, each field's centroid ( $\lambda$ ) shifted from its initialized location, firing rates fluctuated ( $\alpha^2$ ) causing fields to gain or lose selectivity, and most fields increased in size ( $\sigma^2$ ) while some did not. The first two were observed by Qin et al. (2023) who analyzed Gonzalez et al. (2019). Each color corresponds to the dynamics of a specific field, with 5 example fields shown.
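The noisy update rule $\theta_{t+1} = \theta_t + \xi_t$ can be sketched as below. This is a minimal illustration: it treats $\sigma_{noise}$ as the noise standard deviation (the caption's $\mathcal{N}(0, 0.00025)$ could denote std or variance, so this is an assumption), and uses a zero task gradient to isolate the drift component.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_step(theta, grad, eta=1e-4, sigma_noise=2.5e-4):
    """One field-parameter update: gradient step plus Gaussian drift noise,
    theta_{t+1} = theta_t + eta * grad + xi_t with xi_t ~ N(0, sigma_noise^2)."""
    xi = rng.normal(0.0, sigma_noise, size=theta.shape)
    return theta + eta * grad + xi

theta0 = rng.uniform(-1, 1, size=(16, 3))   # (lambda, alpha, sigma) per field
theta_quiet, theta_noisy = theta0.copy(), theta0.copy()
for _ in range(1000):
    # zero gradient: any movement comes purely from the injected noise
    theta_quiet = noisy_step(theta_quiet, np.zeros_like(theta0), sigma_noise=0.0)
    theta_noisy = noisy_step(theta_noisy, np.zeros_like(theta0))
```

Without noise the parameters stay put; with noise every field's center, amplitude and width performs a random walk, which is the drift shown in panel (G).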
+
+[Figure S9 panels A-C; panel B label: centroid shift $\lambda (\Delta T) = \lambda (T) - \lambda (T = 25000)$ ]
+
+Figure S9. Noisy place field parameter updates replicate drift dynamics seen in neural data. (A) Place field centroids become distinctly different across trials after stable navigation performance was attained at trial 25,000, similar to Ziv et al. (2013); de Snoo et al. (2023). Each place field's centroid position was sorted according to trial 25,000, 125,000 and 195,000. (B) When no Gaussian noise is added to place field parameters $(\alpha, \lambda, \sigma)$ , place field optimization alone does not cause centroids to shift. Instead, adding small Gaussian noise $(\sigma_{noise} \in \{0.0001, 0.00025, 0.0005\})$ replicates the gradual shift in centroid position across trials (25,100 to 125,000) as seen in Qin et al. (2023); Ziv et al. (2013); Geva et al. (2023). When the noise magnitude is high, e.g. $\sigma_{noise} \geq 0.001$ , centroids shift rapidly to new locations, similar to the random shuffle or null hypothesis seen in Ziv et al. (2013); Qin et al. (2023); Geva et al. (2023). (A-B) Analysis was done for 64 place fields aggregated over 10 agents initialized with different seeds, giving 640 fields in total. (C) Example graph topology for one agent with $N = 64$ place fields and Gaussian noise $\sigma_{noise} = 0.00025$ added to field parameters. Each node indicates a place field's centroid position across learning, and each edge is weighted by the normalized (between 0 and 1) cosine distance between nodes when it is less than 0.55. Red, green, blue, orange and black nodes indicate centroids initialized at the reward location, the start location, the end of the track near the reward, the end of the track near the start, and the middle of the track respectively. As learning progressed, the cosine distance between centroids changed and the ensemble representation rotated. Nevertheless, fields encoding the reward, start, and track were fairly stable, as seen in Gonzalez et al. (2019), and the greater separation of clusters supports the phenomenon whereby a high density of fields emerges at the reward and start locations.
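The graph construction of panel (C) can be sketched as below. This is an illustrative reading of the caption, not the paper's code: in particular, mapping the cosine distance from $[0, 2]$ to $[0, 1]$ by halving is an assumed normalization convention.

```python
import numpy as np

def cosine_distance(u, v):
    return 1.0 - (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def centroid_graph(trajs, thresh=0.55):
    """Weighted adjacency over fields: an edge is kept when the normalized
    cosine distance between two centroid trajectories is below thresh.
    trajs: (n_fields, n_checkpoints) centroid positions across learning."""
    n = len(trajs)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = 0.5 * cosine_distance(trajs[i], trajs[j])  # map [0, 2] -> [0, 1]
            if d < thresh:
                A[i, j] = A[j, i] = d
    return A

trajs = np.array([[0.75, 0.70, 0.60],    # reward-coding field drifting backward
                  [0.74, 0.71, 0.62],    # similar trajectory: strong edge
                  [-0.9, -0.9, -0.9]])   # start-coding field, stable
A = centroid_graph(trajs)
```

Fields with similar drift trajectories end up tightly connected, while fields at opposite ends of the track get no edge, producing the separated clusters described in the caption.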
+
+[Figure S10 panels A-C]
+
+Figure S10. Influence of field width and number of fields on agent performance. (A) Fields initialized with $\sigma = 0.1$ and (B) $\sigma = 0.05$ . Policy learning is slower when fields are initialized with a smaller width. (C) Influence of field parameter optimization on the average maximum cumulative reward (left) and the trial at which the agent achieves a cumulative discounted reward of 45 and above for the previous 300 trials (right). The correlation plot shows the p-values of pairwise t-tests performed to determine the influence of field parameters on learning performance.
+
+Figure S11. Influence of noise on new target learning performance in a 1D track. Increasing the number of place fields $(N)$ and field widths $(\sigma)$ led to a general increase in new target learning performance. When no noise was injected into the field parameters $(\sigma_{noise} = 0.0, \text{blue})$ , most agents struggled to learn to navigate to new targets and seemed to be stuck in a local minimum. In contrast, a noise magnitude of $\sigma_{noise} = 0.0005$ allowed agents to maximize rewards throughout the 250,000 trials. Increasing the noise magnitude beyond this $(\sigma_{noise} = 0.001)$ negatively affected the agents' target learning performance, especially when the number of fields was low.
+
+[Figure S12 panels A-B]
+
+Figure S12. Influence of noise on learning performance in a 2D arena with an obstacle. (A) Agents started at the same location $x_{start} = (0.0, 0.75)$ and had to navigate to a target that changed to a new location every 50,000 trials following the sequence $(x_r \in [(0.75, -0.75), (-0.75, 0.75), (0.75, 0.75), (-0.75, -0.75)])$ . Increasing the noise magnitude improved new target learning performance. (B) Agents learned to navigate to a target at $x_r = (0.75, 0.0)$ from a start location $x_{start} = (-0.75, 0.0)$ with an obstacle at coordinates $(x_{min} = -0.2, x_{max} = 0.2, y_{min} = -1.0, y_{max} = 0.5)$ for the first 50,000 trials. Thereafter, the obstacle was shifted up to $(x_{min} = -0.2, x_{max} = 0.2, y_{min} = -0.5, y_{max} = 1.0)$ while the start and target locations remained the same. Agents with a noise magnitude of $\sigma_{noise} = 0.00025$ showed the highest average reward maximization performance, followed by $\sigma_{noise} = 0.0005$ . A high noise magnitude $(\sigma_{noise} = 0.001)$ disrupted learning performance, while agents without noisy field updates $(\sigma_{noise} = 0.0)$ did not learn to navigate around the new obstacle. Note that field amplitudes and widths were clipped to lie within $[10^{-5}, 2]$ and $[10^{-5}, 0.5]$ respectively to ensure the covariance matrix $\Sigma$ of the 2D place fields remained valid for matrix inversion. Performance was averaged over agents initialized with different numbers of 2D place fields $(N \in \{64, 144, 256, 576\})$ , with the diagonals of the field width initialized to $\Sigma = 0.01$ and constant amplitude $\alpha = 1.0$ , over 30 different seeds. Shaded area is $95\%$ CI.
+
+Figure S13. Using the same learning rates for the place field parameters and the actor-critic recovers the same phenomena: a high field density emerging at the reward location followed by the start location, and field elongation against the agent's trajectory. (A) Each place field's amplitude, center and width were sampled from uniform distributions over $[0, 1]$ , $[-1, 1]$ and $[10^{-5}, 0.1]$ respectively to model a heterogeneous place field distribution. After learning, a high density (number) of fields emerged at the start (green dash) and reward (red area) locations, similar to Fig. 1B (right) and Fig. S2B. This phenomenon is consistent across different numbers of place fields. Shaded area is $95\%$ CI over 50 different seeds. (C) In a 2D arena with obstacles, place fields elongate from the reward location (red circle) back to the start location (green circle), while narrowing along the corridor with an obstacle (gray), similar to Fig. 2F. Learning rates for the actor, critic and place field parameters were $\eta = \eta_{\theta} = 0.0005$ .
+
+Figure S14. Center-surround place fields reproduce the emergence of a high density of fields at the reward location. (A) Example of 16 center-surround fields uniformly distributed before (left) and after learning for 10,000 trials (right), with the learning rates for the center-surround place field parameters and the policy network being the same ( $\eta = \eta_{\theta} = 0.001$ ). Place fields near the reward shifted to the reward location while others elongated from the reward location back to the start location, similar to Fig. 2C (bottom row). (B) A high field density (gKDE(COM)) and mean firing rate ( $\sum \phi(x)$ ) emerged at the reward location for $N = 16$ (left) and $N = 64$ (right) when using center-surround fields. However, we do not see a high density emerging robustly at the start location. Further analysis is needed to verify whether the representations learned by Gaussian basis functions and center-surround fields (difference of Gaussians) are similar, and if not, why. Shaded area is $95\%$ CI for 10 seeds.
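A center-surround field is a difference of Gaussians: a narrow excitatory center minus a wider, weaker inhibitory surround. A minimal sketch follows; the surround width and amplitude ratios are illustrative constants, not values from the paper.

```python
import numpy as np

def center_surround(x, lam, alpha, sigma, surround_width=2.0, surround_amp=0.5):
    """Difference-of-Gaussians field: excitatory center of width sigma minus a
    surround that is surround_width times wider and surround_amp times weaker
    (both factors are assumed for illustration)."""
    center = np.exp(-(x - lam) ** 2 / (2 * sigma ** 2))
    surround = surround_amp * np.exp(
        -(x - lam) ** 2 / (2 * (surround_width * sigma) ** 2))
    return alpha * (center - surround)

f_peak = center_surround(0.0, 0.0, 1.0, 0.1)   # positive at the field center
f_flank = center_surround(0.35, 0.0, 1.0, 0.1) # inhibitory flank away from center
```

Unlike a plain Gaussian, the field goes slightly negative on its flanks, which changes how overlapping fields interact during learning.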
+
+[Figure S14 panels A-B]
+
+Figure S15. Difference in policy convergence when backpropagating the temporal difference error to optimize place field parameters. We evaluated the speed of policy learning when optimizing (A) a heterogeneously distributed place field population in the 1D track and (B) a homogeneously distributed place field population in the 2D arena using: (1) fixed place field parameters (blue), (2) backpropagating the TD error $\delta_t$ through the actor weights $W_{\pi}^{\top}\tilde{g}_t$ ( $\beta_{\pi} = 1, \beta_v = 0, \beta_{MR} = 0$ , orange), (3) backpropagating the TD error through the critic weights $w^v$ ( $\beta_{\pi} = 0, \beta_v = 1, \beta_{MR} = 0$ , green), (4) backpropagating the path-integration-derived TD error $\chi_t$ through the metric representation weights $W_{MR}$ ( $\beta_{\pi} = 0, \beta_v = 0, \beta_{MR} = 1$ , red) while learning the value function and policy by optimizing only the readout critic and actor weights, (5) backpropagating the TD error through both the actor and critic weights, otherwise called the Reward Maximization agent ( $\beta_{\pi} = 1, \beta_v = 1, \beta_{MR} = 0$ , purple), and (6) backpropagating the TD error through both the actor and critic weights and the path-integration-based TD error through the metric representation weights ( $\beta_{\pi} = 1, \beta_v = 1, \beta_{MR} = 1$ , brown). The combined RM+MR objective for place field parameter optimization achieved the fastest policy learning, similar to Stachenfeld et al. (2017), when the number of fields was low ( $N = \{4,8,16,32\}$ in 1D and $N = \{4,8\}$ in 2D). With more fields, the reward maximization agent ( $RM$ , purple) was almost as effective as the combined objective ( $RM + MR$ , brown). Optimizing place field parameters using only the actor weights led to the slowest policy convergence (orange), though still faster than using fixed place fields. The same learning rates were used as the number of fields increased. Hence, tuning learning rates should improve the stability of policy learning, especially in the 2D arena for the agent with the combined RM+MR objective. Shaded area indicates $95\%$ CI over 50 random seeds.
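The $\beta$-weighted mixing of the three feedback pathways can be sketched as a single per-field error signal routed through the Jacobian of field activity with respect to the field parameters. The shapes and the exact composition below are assumptions made for illustration (the paper's full update also involves the policy-gradient term $\tilde{g}_t$ structure described above), not the exact implementation.

```python
import numpy as np

def field_param_grad(dphi_dtheta, delta, chi, w_v, W_pi, g_tilde, W_mr,
                     beta_pi=1.0, beta_v=1.0, beta_mr=0.0):
    """Combine the three feedback pathways into one per-field error signal,
    then route it through the Jacobian of field activity w.r.t. parameters.
    Assumed shapes: dphi_dtheta (n_params, n_fields), W_pi (n_actions, n_fields),
    g_tilde (n_actions,), w_v and W_mr (n_fields,)."""
    per_field = (beta_pi * delta * (W_pi.T @ g_tilde)   # through actor weights
                 + beta_v * delta * w_v                 # through critic weights
                 + beta_mr * chi * W_mr)                # path-integration pathway
    return dphi_dtheta * per_field                      # broadcast over n_params
```

Setting $(\beta_\pi, \beta_v, \beta_{MR})$ to the six combinations in the caption selects which pathways contribute to the place field parameter update.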
+
+
+
+
\ No newline at end of file
diff --git a/amodelofplacefieldreorganizationduringrewardmaximization/images.zip b/amodelofplacefieldreorganizationduringrewardmaximization/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..1f043601cfec53707e038350e0a1c670238ed668
--- /dev/null
+++ b/amodelofplacefieldreorganizationduringrewardmaximization/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ed080c89f85a0f1fcb85f4a5f59ee5886100952f7be4137cfe1d4a9c610d3520
+size 3338074
diff --git a/amodelofplacefieldreorganizationduringrewardmaximization/layout.json b/amodelofplacefieldreorganizationduringrewardmaximization/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..6570ae0b21a0d5fc1573f89888f1ef81e2138192
--- /dev/null
+++ b/amodelofplacefieldreorganizationduringrewardmaximization/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d1a551fb669d9df41da196904ad2ebc7fdd8f3026945498b8f402289510b6cc3
+size 1681295
diff --git a/ampoactivemultipreferenceoptimizationforselfplaypreferenceselection/c74f8cbf-a541-4761-9169-f3f6f46210bb_content_list.json b/ampoactivemultipreferenceoptimizationforselfplaypreferenceselection/c74f8cbf-a541-4761-9169-f3f6f46210bb_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..6cf658ce6f6f1b30e0c6792663a1de5908ca61b3
--- /dev/null
+++ b/ampoactivemultipreferenceoptimizationforselfplaypreferenceselection/c74f8cbf-a541-4761-9169-f3f6f46210bb_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:84e0c31719bc2c8ba4e916a6125667a8e612d6b47863dc06aca248e0c7141c15
+size 199924
diff --git a/ampoactivemultipreferenceoptimizationforselfplaypreferenceselection/c74f8cbf-a541-4761-9169-f3f6f46210bb_model.json b/ampoactivemultipreferenceoptimizationforselfplaypreferenceselection/c74f8cbf-a541-4761-9169-f3f6f46210bb_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..020948eb7952456474341c9ad8776ff5c6e9f6c4
--- /dev/null
+++ b/ampoactivemultipreferenceoptimizationforselfplaypreferenceselection/c74f8cbf-a541-4761-9169-f3f6f46210bb_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1b2d10cec85bbc6200d50e8841fc1b3332cd9e2c8217f780125e036a71b2ecb0
+size 251626
diff --git a/ampoactivemultipreferenceoptimizationforselfplaypreferenceselection/c74f8cbf-a541-4761-9169-f3f6f46210bb_origin.pdf b/ampoactivemultipreferenceoptimizationforselfplaypreferenceselection/c74f8cbf-a541-4761-9169-f3f6f46210bb_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..3243df17d5fb83ad4e6f214044c3a30b04170c21
--- /dev/null
+++ b/ampoactivemultipreferenceoptimizationforselfplaypreferenceselection/c74f8cbf-a541-4761-9169-f3f6f46210bb_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b2555a0960a20f3f3b44bf0d71a5e460d30d4d774ad469e650a978dcebcfe54b
+size 1297566
diff --git a/ampoactivemultipreferenceoptimizationforselfplaypreferenceselection/full.md b/ampoactivemultipreferenceoptimizationforselfplaypreferenceselection/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..ba954d8be11b4d5e70d4e96871abbc99af205c5b
--- /dev/null
+++ b/ampoactivemultipreferenceoptimizationforselfplaypreferenceselection/full.md
@@ -0,0 +1,1086 @@
+# AMPO: Active Multi-Preference Optimization for Self-play Preference Selection
+
+Taneesh Gupta\*${}^{1}$ Rahul Madhavan\*${}^{2}$ Xuchao Zhang${}^{1}$ Chetan Bansal${}^{1}$ Saravan Rajmohan
+
+# Abstract
+
+Multi-preference optimization enriches language-model alignment beyond pairwise preferences by contrasting entire sets of helpful and undesired responses, enabling richer training signals for large language models. During self-play alignment, these models often produce numerous candidate answers per query, making it computationally infeasible to include all of them in the training objective. We propose Active Multi-Preference Optimization (AMPO), which combines on-policy generation, a multi-preference group-contrastive loss, and active subset selection. Specifically, we score and embed large candidate pools of responses, then pick a small but informative subset—covering reward extremes and distinct semantic clusters—for preference optimization. The resulting contrastive training scheme identifies not only the best and worst answers but also subtle, underexplored modes crucial for robust alignment. Theoretically, we provide guarantees of expected reward maximization using our active selection method. Empirically, AMPO achieves state-of-the-art results on AlpacaEval with Llama 8B and Mistral 7B. We release our datasets here.
+
+# 1. Introduction
+
+Preference Optimization (PO) has become a standard approach for aligning large language models (LLMs) with human preferences (Christiano et al., 2017; Ouyang et al., 2022; Bai et al., 2022). Traditional alignment pipelines typically rely on pairwise or binary preference comparisons, which may not fully capture the subtleties of human judgment (Rafailov et al., 2024; Liu et al., 2024a; Korbak et al., 2023). As a remedy, there is increasing interest in multi-preference methods, which consider entire sets of responses
+
+*Equal contribution ${}^{1}$ Microsoft ${}^{2}$ IISc, Bangalore. Correspondence to: Taneesh Gupta, Rahul Madhavan.
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+
+Figure 1. Overview of the Active Multi-Preference Optimization framework. Given a query, the LLM generates diverse responses, which are evaluated by a rater model. Selected responses with different ratings and semantics are then used to train and align the LLM through preference optimization. Active selection of the preferences to optimize over improves training dynamics.
+
+when providing feedback (Cui et al., 2023; Chen et al., 2024a; Gupta et al., 2024b). By learning from multiple "good" and "bad" outputs simultaneously, these approaches deliver richer alignment signals. At the same time, an important trend in alignment is the shift to on-policy data generation, where the policy learns directly from its own distribution of outputs at each iteration (Chen et al., 2024b; Kumar et al., 2024; Wu et al., 2023; 2024). This feedback loop can accelerate convergence, ensuring that the training data stays relevant to the model's behavior.
+
+However, multi-preference alignment faces a serious bottleneck: modern LLMs can easily generate dozens of candidate responses per query, and incorporating all of these into a single training objective can become computationally infeasible (Askell et al., 2021). Many of these sampled responses end up being highly similar or near-duplicates, providing limited additional information for gradient updates (Long et al., 2024). Consequently, naive attempts to process all generated responses cause both memory blow-ups and diminishing returns in training (Dubey et al., 2024). Given these constraints, identifying a small yet highly informative subset of candidate responses is critical for effective multi-preference learning.
+
+To understand the challenge of selecting informative responses, consider the query's answer space as a response landscape (see Figure 2). Each point in this landscape represents a possible response, characterized by its semantic properties (its location in the embedding space) and its quality (determined by a reward model). Furthermore, the LLM's current policy defines a probability density over this landscape. A naive approach of randomly sampling responses and treating them equally might overemphasize frequently generated areas, even if they contain only mediocre or slightly problematic answers. This risks overlooking critical feedback from less common, yet highly informative, regions—such as subtle failure points in underexplored semantic terrains, or exceptionally good responses that are rarely generated. Therefore, an ideal selection strategy must actively explore this landscape, identifying responses that are not just "good" or "bad" but also semantically distinct, covering reward extremes, and exposing underexplored modes that are crucial for robust alignment (Yu et al., 2024). In this paper, we show that this targeted selection can be tied to an optimal way of suppressing undesired modes under a mild Lipschitz assumption (see Section 7).
+
+At its core, the problem of efficiently selecting the most impactful responses for feedback aligns with the principles of active learning (Cohn et al., 1996; Ceravolo et al., 2024; Xiao et al., 2023). By selecting a small yet semantically diverse subset of responses, the model effectively creates a curriculum for itself. Rather than passively training on random or exhaustively sampled data, an active learner queries the examples that yield the greatest improvement when labeled. In our context, we actively pick a handful of responses that best illustrate extreme or underexplored behaviors – whether very good, very bad, or semantically distinct (Wu et al., 2023). This helps the model quickly eliminate problematic modes while reinforcing the most desirable responses. Crucially, we remain on-policy: after each update, the newly refined policy generates a fresh batch of responses, prompting another round of active subset selection (Liu et al., 2021).
+
+We propose Active Multi-Preference Optimization (AMPO), a framework that unifies (a) on-policy data generation, (b) group-based preference learning, and (c) active subset selection. Specifically, we adopt a reference-free group-contrastive objective known as REFA (Gupta et al., 2024a), which jointly leverages multiple "positive" and "negative" responses in a single loss term. On top of this, we explore various active selection schemes, ranging from the simplest bottom- $K$ ranking (Meng et al., 2024) to coreset-based clustering (Cohen-Addad et al., 2021; 2022; Huang et al., 2019) and a more theoretically grounded "Opt-Select" method that ties coverage to maximizing expected reward. Our contributions are: (i) a unifying algorithmic pipeline for multi-preference alignment with active selection, (ii) theoretical results demonstrating that coverage of distinct clusters, à la $k$ -medoids, can serve as an optimal
+
+
+
+Figure 2. A learner can easily generate $n$ responses to a given query, but selecting a much smaller subset $k \ll n$ to train on is a hard problem. This paper addresses this problem through techniques from clustering as well as knapsack-related problems. (Panel labels: "Generations are cheap"; "Optimization is costly"; "Selection is hard"; "AMPO helps select an informative subset of data for optimization.")
+
+negative-selection strategy, and (iii) empirical evaluations showing that AMPO achieves state-of-the-art results compared to strong alignment baselines like SIMPO. Altogether, our approach enables models to learn more reliably from diverse sets of model behaviors.
+
+# 1.1. Our Contributions
+
+- Algorithmic Novelty: We propose Active Multi-Preference Optimization (AMPO), an on-policy framework that blends group-based preference alignment with active subset selection without exhaustively training on all generated responses. This opens up avenues for research on how to select synthetic data, as we outline in Sections 5 and 9.
+- Theoretical Insights: Under mild Lipschitz assumptions, we show that coverage-based negative selection systematically suppresses low-reward modes and maximizes expected reward. This analysis (in Sections 6 and 7) connects our method to the weighted $K$-medoids problem, yielding performance guarantees for alignment.
+- State-of-the-Art Results: Empirically, AMPO sets a new benchmark on AlpacaEval with Llama 8B, surpassing strong baselines like SIMPO by focusing on a small but strategically chosen set of responses each iteration (see Section 8).
+
+- Dataset Releases: We publicly release our AMPO-Coreset-Selection and AMPO-Opt-Selection datasets on Hugging Face. These contain curated response subsets for each prompt, facilitating research on multi-preference alignment.
+
+# 2. Related Work
+
+Recent advances in preference optimization (Rafailov et al., 2024; Azar et al., 2023; Hong et al., 2024a) have moved beyond simple pairwise comparisons to include multiple responses per query. This shift is largely driven by datasets like UltraFeedback (Cui et al., 2023), which provide scalar rewards for diverse candidate outputs. Within this multi-preference paradigm, methods such as InfoNCA (Chen et al., 2024a) utilize noise-contrastive objectives to align models with scalar rewards. AMPO builds upon these multi-preference approaches by employing REFA (Gupta et al., 2024a), a group-contrastive objective that contrasts sets of selected and rejected responses to emphasize multiple highly informative (positive or negative) examples.
+
+A crucial development in LLM alignment is the adoption of on-policy or "self-play" data generation (Chen et al., 2024b; Wu et al., 2024). While ensuring training data relevance and accelerating convergence, this process can generate a vast number of candidate responses per query. Incorporating all these responses into the training objective becomes computationally infeasible and leads to diminishing returns due to high similarity and redundancy (Askell et al., 2021; Long et al., 2024).
+
+To address this computational bottleneck, AMPO integrates principles from active learning (Cohn et al., 1996; Settles, 2009). Our active subset selection strategies draw from combinatorial optimization and clustering techniques, such as weighted $k$ -medoids and coreset construction (Har-Peled & Mazumdar, 2004; Cohen-Addad et al., 2022). These methods enable AMPO to efficiently identify a small, high-impact subset of responses that effectively cover the diverse landscape of generated outputs, encompassing both reward extremes and distinct semantic regions, thereby facilitating robust and efficient alignment.
+
+# 3. Notations and Preliminaries
+
+On-policy alignment of LLMs with learned preference scores often involves generating multiple candidate responses (say, $N$ responses) for a given prompt. Utilizing all of these $N$ candidates for training can be computationally prohibitive and may offer diminishing returns due to response similarity. Our framework, Active Multi-Preference Optimization (AMPO), addresses this by focusing on the active selection of a small, yet highly informative, subset of these responses within a pre-specified budget (say, a budget of $K$ responses with $K \ll N$). This section establishes the notation and foundational concepts for generating responses, evaluating them, defining selection criteria like coverage, and choosing subsets for efficient alignment using a group-contrastive objective.
+
+Queries, Policy, and Response Generation. Let $\mathcal{D} = \{x_1, x_2, \ldots, x_M\}$ be a dataset of $M$ queries (or prompts), each from a larger space $\mathcal{X}$ . We have a policy model $P_{\theta}(y \mid x)$ , parameterized by $\theta$ , which produces a distribution over possible responses $y \in \mathcal{Y}$ . For each query $x_i$ , we generate a pool of $N$ candidate responses $\{y_{i,1}, y_{i,2}, \ldots, y_{i,N}\}$ by sampling from $P_{\theta}(y \mid x_i)$ at a fixed temperature (e.g., Temp. = 0.8). For notational simplicity, we consider a single query $x$ and its $N$ sampled responses $\{y_1, \ldots, y_N\}$ .
+
+Response Evaluation and Embedding. Each response $y_{j}$ (for $j = 1, \dots, N$ ) is assigned a scalar reward
+
+$$
+r _ {j} = \mathcal {R} (x, y _ {j}) \in [ 0, 1 ], \tag {1}
+$$
+
+where $\mathcal{R}$ is a fixed reward function. We also embed each response via $\mathbf{e}_j = \mathcal{E}(y_j)\in \mathbb{R}^d$ , where $\mathcal{E}$ is an encoder capturing semantic properties. The distance between any two responses $y_{j}$ and $y_{l}$ in this embedding space is denoted $d(\mathbf{e}_j,\mathbf{e}_l)$ (e.g., Euclidean or L2 distance).
+
+Budgeted Subset Selection and Coverage. Given the $N$ generated responses, our objective is to select a subset $\mathcal{S} \subset \{y_1, \ldots, y_N\}$ of size $K < N$ , where $K$ is a pre-specified budget of responses to be used for training. The selection aims to maximize a utility function $\mathcal{U}$ that considers factors such as response quality (rewards), probability of generation, and embedding space coverage. Formally,
+
+$$
+\mathcal{S}^{*} = \underset{\substack{\mathcal{S}^{\prime} \subset \{y_{1}, \dots, y_{N}\} \\ |\mathcal{S}^{\prime}| = K}}{\arg\max}\; \mathcal{U}\left(\mathcal{S}^{\prime}, \{r_{j}\}_{y_{j} \in \mathcal{S}^{\prime}}, \{\mathbf{e}_{j}\}_{y_{j} \in \mathcal{S}^{\prime}}\right). \tag{2}
+$$
+
+A key aspect of a "good" subset $\mathcal{S}^*$ is its coverage of the original $N$ responses. High coverage implies that for any response $y_j$ from the original $N$ candidates, its minimum distance to any response $y_l \in \mathcal{S}^*$ is small. More formally, the coverage cost of a chosen subset $\mathcal{S}$ with respect to the initial $N$ responses is defined as:
+
+$$
+\operatorname{coverage\_cost}(\mathcal{S}) = \sum_{j = 1}^{N} \min_{y_{l} \in \mathcal{S}} d(\mathbf{e}_{j}, \mathbf{e}_{l}).
+$$
+
+A subset $S$ provides high coverage if this sum is minimized, ensuring that the selected $K$ responses are representative of the diverse characteristics present in the initial pool of $N$ . The active selection strategies discussed later (Section 5) aim to find such high-coverage subsets.
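+Concretely, the coverage cost above can be computed directly from pairwise embedding distances. A minimal pure-Python sketch (function and variable names are illustrative, not from any released implementation):
+
+```python
+import math
+
+def coverage_cost(embeddings, subset_idx):
+    """Sum, over all N responses, of the distance to the nearest selected response."""
+    def dist(a, b):
+        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
+    return sum(min(dist(e, embeddings[l]) for l in subset_idx) for e in embeddings)
+
+# Toy example: four responses forming two well-separated semantic "modes".
+embs = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]]
+print(coverage_cost(embs, [0, 2]))  # one representative per mode: low cost
+print(coverage_cost(embs, [0, 1]))  # far mode left uncovered: high cost
+```
+
+A subset that places at least one representative in each mode drives this sum down, which is exactly the coverage notion the active selection strategies optimize.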
+
+Group-Contrastive Alignment with REFA. Once the subset $S^*$ (of size $K$ ) is selected, it is partitioned into a set of accepted responses $S^+$ and a set of rejected responses $S^-$ , such that $S^* = S^+ \cup S^-$ and $S^+ \cap S^- = \emptyset$ . The specific criteria for this partitioning can vary (e.g., based on reward thresholds, or a one-vs-many split as in Algorithm 1). For a query $x$ , we train $\theta$ using the reference-free group-contrastive objective REFA (Gupta et al., 2024a) by contrasting these two sets:
+
+$$
+L _ {\mathrm {R E F A}} (\theta) = - \log \left(\frac {\sum_ {y _ {j} \in \mathcal {S} ^ {+}} \exp \left[ s _ {\theta} \left(y _ {j} \mid x\right) \right]}{\sum_ {y _ {j} \in \left(\mathcal {S} ^ {+} \cup \mathcal {S} ^ {-}\right)} \exp \left[ s _ {\theta} \left(y _ {j} \mid x\right) \right]}\right) \tag {3}
+$$
+
+where the score for a response $y_{j}$ , $s_{\theta}(y_j \mid x)$ , incorporates its log-probability under the current policy $P_{\theta}(y_j \mid x)$ and its associated reward $r_{j}$ . This score is given by:
+
+$$
+s _ {\theta} \left(y _ {j} \mid x\right) = \log P _ {\theta} \left(y _ {j} \mid x\right) + \alpha \left| r _ {j} - \bar {r} \right|.
+$$
+
+Algorithm 1 AMPO: One-Positive vs. $K$ -Active Negatives
+
+1: Input: (1) A set of $N$ responses $\{y_{i}\}$ sampled from $P_{\theta}(y \mid x)$ ; (2) Their rewards $\{r_i\}$ , embeddings $\{\mathbf{e}_i\}$ , and probabilities $\{\pi_i\}$ ; (3) Number of negatives $K$ , initial $P_{\theta}$ , and hyperparameter $\alpha$
+2: Output: (i) Positive $y_{+}$ ; (ii) Negatives $\{y_{j}\}_{j\in S^{-}}$ ; (iii) Updated parameters $\theta$ via REFA
+3: 1. Select One Positive (Highest Reward)
+4: $i_{+} \gets \arg \max_{i = 1,\dots,N} r_{i}, \quad y_{+} \gets y_{i_{+}}$
+5: 2. Choose $K$ Negatives via Active Selection
+6: $\Omega \gets \{1,\dots ,N\} \setminus \{i_{+}\}$
+7: $S^{-}\gets \mathrm{ACTIVESELECTION}(\Omega ,\{r_i\} ,\{\mathbf{e}_i\} ,\{\pi_i\} ,K)$
+8: 3. Form One-vs.-K REFA Objective
+9: $\overline{r} \gets \frac{r_{i_+} + \sum_{j \in S^-} r_j}{1 + K}$
+10: For each $y_{i}$ :
+11: $s_{\theta}'(y_i) = \log P_{\theta}(y_i \mid x) + \alpha |r_i - \overline{r}|$
+12: $L_{\mathrm{REFA}}(\theta) = -\log \left(\frac{\exp\big[s_{\theta}'(y_{+})\big]}{\exp\big[s_{\theta}'(y_{+})\big] + \sum_{j\in S^{-}}\exp\big[s_{\theta}'(y_{j})\big]}\right)$
+13: 4. Update Model Parameters: $\theta \gets \theta -\eta \nabla_{\theta}L_{\mathrm{REFA}}(\theta)$
+14: return The chosen positive $y_{+}$ , the negative set $\{y_j\}_{j\in S^{-}}$ , and the updated parameters $\theta$
+
+Here, $\alpha$ is a hyperparameter scaling the influence of the reward, $\log P_{\theta}(y_j \mid x)$ is the generation probability of response $y_j$ given query $x$ , and $\bar{r} = \mathrm{mean}_{y_j \in S}(\mathcal{R}(x, y_j))$ . REFA encourages the model to increase the collective preference score of responses in $S^+$ relative to those in $S^-$ . This procedure extends to any dataset $\mathcal{D}$ by summing $L_{\mathrm{REFA}}$ across all queries. Subsequent sections detail strategies for selecting $S^*$ and partitioning it to maximize training efficiency and alignment quality.
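+As a concrete illustration, the one-positive-vs-$K$-negatives instance of this loss (used in Algorithm 1) can be written out in a few lines. This is a pure-Python sketch with toy numbers; all names are illustrative:
+
+```python
+import math
+
+def refa_loss(logp, rewards, pos_idx, neg_idx, alpha=1.0):
+    """Reference-free group-contrastive loss (Eq. 3), one positive vs. K negatives.
+
+    logp:    log P_theta(y_i | x) for each candidate response
+    rewards: scalar rewards r_i in [0, 1]
+    """
+    idx = [pos_idx] + list(neg_idx)
+    r_bar = sum(rewards[i] for i in idx) / len(idx)  # mean reward of the subset
+    score = {i: logp[i] + alpha * abs(rewards[i] - r_bar) for i in idx}
+    denom = sum(math.exp(score[i]) for i in idx)
+    return -math.log(math.exp(score[pos_idx]) / denom)
+
+# One high-reward positive (index 0) against two negatives.
+loss = refa_loss([-1.0, -2.0, -0.5], [0.9, 0.2, 0.1], pos_idx=0, neg_idx=[1, 2])
+print(loss)
+```
+
+Minimizing this quantity raises the positive's score relative to the negatives'; in a real training loop the gradient would flow through $\log P_\theta$, which is held as a plain number here.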
+
+# 4. Algorithm and Methodology
+
+Our methodology employs a one-vs- $K$ selection scheme: one best response is chosen as positive, and an active subroutine selects $K$ negative responses from the remaining $N - 1$ candidates. This active selection must balance three key objectives:
+
+Probability: High-probability responses under $P_{\theta}(y \mid x)$ can dominate even if suboptimal by reward.
+
+Rewards: Simply selecting extremes by reward misses problematic "mediocre" outputs.
+
+Semantics: Diverse but undesired responses in distant embedding regions must be penalized.
+
+While positives reinforce a single high-reward candidate, active negative selection balances probability, reward and diversity to systematically suppress problematic regions of the response space.
+
+Algorithm. Formally, let $\{y_1,\ldots ,y_N\}$ be the sampled responses for a single prompt $x$ . Suppose we have:
+
+Algorithm 2 AMPO-CORESET via k-means
+
+1: Input:
+
+2: (1) $N$ responses, each with embedding $\mathbf{e}_i\in \mathbb{R}^d$ and rating $r_i$
+3: (2) Desired number of negatives $K$
+
+4:
+
+5: Step 1: Run $K$ -means on embeddings
+6: Initialize $\{\mathbf{c}_1,\dots ,\mathbf{c}_K\} \subset \mathbb{R}^d$ (e.g., via $K$ -means++)
+7: repeat
+8: $\pi (i) = \arg \min_{1\leq j\leq K}\| \mathbf{e}_i - \mathbf{c}_j\|^2,\quad i = 1,\ldots ,N$
+9: $\mathbf{c}_j = \frac{\sum_{i:\pi(i) = j}\mathbf{e}_i}{\sum_{i:\pi(i) = j}1},\quad j = 1,\ldots ,K$
+10: until convergence
+11:
+12: Step 2: In each cluster, pick the bottom-rated response
+13: For each $j \in \{1, \dots, K\}$ , define $C_j = \{i \mid \pi(i) = j\}$ .
+14: Then $i_j^- = \arg \min_{i\in C_j}r_i$ $j = 1,\ldots ,K$
+15:
+16: Step 3: Return negatives
+17: $S^{-} = \{i_{1}^{-}, i_{2}^{-}, \ldots, i_{K}^{-}\}$
+18: return $S^{-}$ as the set of $K$ negatives
+
+1. A reward function $r_i = \mathcal{R}(x,y_i)\in [0,1]$
+2. An embedding $\mathbf{e}_i = \mathcal{E}(y_i)$
+3. A model probability $\pi_i = P_\theta(y_i \mid x)$ .
+
+Our selection algorithms combine rating-based selection (to identify truly poor or excellent answers) with coverage-based selection (to explore distinct regions in the embedding space), exposing the model to both common and outlier responses. This ensures that the REFA loss provides strong gradient signals across the spectrum of answers the model is prone to generating. In Algorithm 1, ACTIVESELECTION $(\cdot)$ is a generic subroutine that selects a set of $K$ "high-impact" negatives. We detail concrete implementations (e.g., bottom-$K$ by rating, clustering-based) in later sections.
+
+# 5. Active Subset Selection Strategies
+
+This section details two effective strategies for actively selecting $K$ negative responses within AMPO: AMPO-BottomK, which selects the lowest-rated responses, and AMPO-Coreset, a clustering-based method ensuring broad semantic coverage by selecting one negative per cluster. We connect AMPO-Coreset to coreset construction literature (Section E).
+
+# 5.1. AMPO-BottomK
+
+AMPO-BottomK is the most direct approach that we use for comparison: given $N$ sampled responses and their scalar ratings $\{r_i\}_{i=1}^N$ , we simply pick the $K$ lowest-rated responses as negatives. This can be expressed as:
+
+$$
+S^{-} = \operatorname{argtopk}_{i}(-r_{i}, K), \tag{4}
+$$
+
+which identifies the $K$ indices with smallest $r_i$ . Although
+
+# Algorithm 3 AMPO-OPTSELECT via Solving MIP
+
+1: Input: Candidates $\{y_i\}_{i = 1}^N$ with $r_i, \mathbf{e}_i$ ; integer $K$
+2: Compute $i_{\mathrm{top}} = \arg \max_i r_i$
+3: Let $w_{i} = \exp (\overline{r} - r_{i})$ with $\overline{r}$ as mean reward
+4: Solve Problem equation 8 to get $\{x_j^*\}$ , $\{z_{i,j}^*\}$ , $\{y_i^*\}$
+5: Let $S_{\mathrm{neg}} = \{j \mid x_j^* = 1\}$ (size $K$ )
+6: return $\{i_{\mathrm{top}}\} \cup S_{\mathrm{neg}}$ for REFA training
+
+conceptually simple, this method can be quite effective when the reward function reliably indicates "bad" behavior. Furthermore, to break ties, we select the candidate with minimal cosine similarity to the currently selected set.
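+A sketch of this rule with the cosine tie-break, in pure Python (illustrative names; not the paper's released code):
+
+```python
+import math
+
+def bottom_k(rewards, embeddings, k):
+    """Pick the K lowest-rated responses (Eq. 4); ties are broken by preferring
+    the candidate least cosine-similar to the already-selected set."""
+    def cos(a, b):
+        dot = sum(x * y for x, y in zip(a, b))
+        na = math.sqrt(sum(x * x for x in a)) or 1.0
+        nb = math.sqrt(sum(x * x for x in b)) or 1.0
+        return dot / (na * nb)
+
+    selected, remaining = [], list(range(len(rewards)))
+    for _ in range(k):
+        low = min(rewards[i] for i in remaining)
+        tied = [i for i in remaining if rewards[i] == low]
+        # Among equally low-rated candidates, take the most dissimilar one.
+        best = min(tied, key=lambda i: max(
+            (cos(embeddings[i], embeddings[j]) for j in selected), default=0.0))
+        selected.append(best)
+        remaining.remove(best)
+    return selected
+
+print(bottom_k([0.1, 0.1, 0.1], [[1, 0], [1, 0], [0, 1]], 2))
+```
+
+With three equally rated candidates, the rule picks index 0 first and then prefers the dissimilar index 2 over the duplicate index 1, illustrating the tie-break.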
+
+# 5.2. AMPO-Coreset (Clustering-Based Selection)
+
+AMPO-BottomK may overlook problematic modes that are slightly better than the bottom-$K$ but still important to learn from. A diversity-driven approach, which we refer to as AMPO-CORESET, explicitly seeks coverage in the embedding space by partitioning the $N$ candidate responses into $K$ clusters and then selecting the lowest-rated response within each cluster. Formally:
+
+$$
+i_{j}^{-} = \arg\min_{i \in C_{j}} r_{i}, \quad j = 1, \dots, K, \qquad S^{-} = \left\{i_{1}^{-}, \dots, i_{K}^{-}\right\}
+$$
+
+where $C_j$ is the set of responses assigned to cluster $j$ by a $K$ -means algorithm (Har-Peled & Mazumdar 2004; Cohen-Addad et al. 2022; see also Section E). The pseudo-code is provided in Algorithm 2.
+
+This approach enforces that each cluster—a potential "mode" in the response space—contributes at least one negative example. Hence, AMPO-CORESET can be interpreted as selecting representative negatives from diverse semantic regions, ensuring that the model is penalized for a wide variety of undesired responses.
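+Algorithm 2 can be sketched end-to-end with a toy Lloyd's $K$-means in pure Python (in practice a tuned library implementation with $K$-means++ initialization would be used; all names here are illustrative):
+
+```python
+import random
+
+def kmeans_assign(points, k, iters=50, seed=0):
+    """Minimal Lloyd's K-means; returns a cluster index for each point."""
+    rng = random.Random(seed)
+    centers = [list(p) for p in rng.sample(points, k)]
+    assign = [0] * len(points)
+    for _ in range(iters):
+        for i, p in enumerate(points):
+            assign[i] = min(range(k), key=lambda j: sum(
+                (a - b) ** 2 for a, b in zip(p, centers[j])))
+        for j in range(k):
+            members = [points[i] for i in range(len(points)) if assign[i] == j]
+            if members:  # guard against an empty cluster
+                centers[j] = [sum(c) / len(members) for c in zip(*members)]
+    return assign
+
+def coreset_negatives(rewards, embeddings, k):
+    """AMPO-Coreset: one lowest-rated response per semantic cluster (Algorithm 2)."""
+    assign = kmeans_assign(embeddings, k)
+    return [min(cluster, key=lambda i: rewards[i])
+            for j in range(k)
+            if (cluster := [i for i in range(len(rewards)) if assign[i] == j])]
+
+# Two semantic modes; the worst response of each mode becomes a negative.
+print(coreset_negatives([0.3, 0.8, 0.9, 0.2],
+                        [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]], 2))
+```
+
+On this toy input the two modes are well separated, so the selected negatives are the lowest-rated member of each mode rather than the two globally worst responses.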
+
+# 6. Opt-Select: Active Subset Selection by Optimizing Expected Reward
+
+We propose Opt-Select, a strategy for choosing $K$ negative responses and one positive to maximize expected reward under a Lipschitz assumption. Opt-Select models the local influence of penalizing negatives, formulating an optimization problem to suppress low-reward regions while preserving high-reward modes. We present solutions via mixed-integer programming (MIP) and local search.
+
+# 6.1. Lipschitz-Driven Objective
+
+Let $\{y_i\}_{i=1}^n$ be candidate responses sampled on-policy, each with reward $r_i \in [0,1]$ and embedding $\mathbf{e}_i \in \mathbb{R}^d$ . Suppose that if we completely suppress a response $y_j$ (i.e. set its probability to zero), all answers within distance $\|\mathbf{e}_i - \mathbf{e}_j\|$ must also decrease in probability proportionally, due to a
+
+# Algorithm 4 AMPO-OPTSELECT via Coordinate Descent
+
+1: Input: Set $I = \{1, \dots, N\}$ , integer $K$ , distances $A_{i,j}$ , rewards $\{r_i\}$
+2: Find $i_{\mathrm{top}} = \arg \max_i r_i$
+3: Compute $w_{i} = \exp (\overline{r} - r_{i})$ and $d_{i,j} = A_{i,j}$
+4: Initialize a random subset $S \subseteq I \setminus \{i_{\mathrm{top}}\}$ of size $K$
+5: while improving do
+6: Swap $j_{\mathrm{out}} \in S$ with $j_{\mathrm{in}} \notin S$ if it decreases $\sum_{i \in I} w_i \min_{j \in S} d_{i,j}$
+7: end while
+8: return $S_{\mathrm{neg}} = S$ (negatives) and $i_{\mathrm{top}}$ (positive)
+
+Lipschitz constraint on the policy. Concretely, if the distance is $d_{i,j} = \|\mathbf{e}_i - \mathbf{e}_j\|$, and the model's Lipschitz constant is $L$, then the probability of $y_i$ cannot remain above $L\,d_{i,j}$ if $y_j$ is forced to probability zero.
+
+From an expected reward perspective, assigning zero probability to low-reward responses (and their neighborhoods) improves overall alignment. To capture this rigorously, observe that the penalty from retaining a below-average answer $y_{i}$ can be weighted by:
+
+$$
+w _ {i} = \exp (\bar {r} - r _ {i}), \tag {5}
+$$
+
+where $\overline{r}$ is (for instance) the mean reward of $\{r_i\}$ . Intuitively, $w_{i}$ is larger for lower-reward $y_{i}$ , indicating it is more harmful to let $y_{i}$ and its neighborhood remain at high probability.
+
+Next, define a distance matrix
+
+$$
+A _ {i, j} = \left\| \mathbf {e} _ {i} - \mathbf {e} _ {j} \right\| _ {2}, \quad 1 \leq i, j \leq N. \tag {6}
+$$
+
+Selecting a subset $S \subseteq \{1, \dots, N\}$ of "negatives" to penalize suppresses the probability of each $i$ in proportion to $\min_{j \in S} A_{i,j}$ . Consequently, a natural cost function measures how much "weighted distance" $y_i$ has to its closest chosen negative:
+
+$$
+\operatorname{Cost}(S) = \sum_{i = 1}^{N} w_{i} \min_{j \in S} A_{i,j}. \tag{7}
+$$
+
+Minimizing equation 7 yields a subset $S$ of size $K$ that "covers" or "suppresses" as many low-reward responses (large $w_{i}$ ) as possible. We then add one positive index $i_{\mathrm{top}}$ with the highest $r_i$ to amplify a top-quality answer. This combination of one positive plus $K$ negatives provides a strong signal in the training loss.
+
+Interpretation and Connection to Weighted k-medoids. If each negative $j$ "covers" responses $i$ within some radius (or cost) $A_{i,j}$ , then equation 7 is analogous to a weighted $K$ -medoid objective, where we choose $K$ items (negatives) to minimize a total weighted distance. Formally, this can be cast as a mixed-integer program (MIP) (Problem 8 below). For large $N$ , local search offers an efficient approximation.
+
+# 6.2. Mixed-Integer Programming Formulation
+
+Define binary indicators $x_{j} = 1$ if we choose $y_{j}$ as a negative, and $z_{i,j} = 1$ if $i$ is assigned to $j$ (i.e. $\min_{j\in S}A_{i,j}$ is realized by $j$ ). We write:
+
+$$
+\text{Problem } \mathcal{P}: \quad \min_{x_{j} \in \{0,1\},\; z_{i,j} \in \{0,1\},\; y_{i} \geq 0} \; \sum_{i = 1}^{N} w_{i} y_{i} \tag{8}
+$$
+
+$$
+\text{s.t.} \quad \sum_{j = 1}^{N} x_{j} = K, \qquad z_{i,j} \leq x_{j}, \qquad \sum_{j = 1}^{N} z_{i,j} = 1, \quad \forall i,
+$$
+
+$$
+A_{i,j} - M(1 - z_{i,j}) \;\leq\; y_{i} \;\leq\; A_{i,j} + M(1 - z_{i,j}), \quad \forall i, j, \tag{9}
+$$
+
+where $M = \max_{i,j} A_{i,j}$. In essence, each $i$ is forced to assign to exactly one chosen negative $j$, making $y_i = A_{i,j}$, i.e., the distance between the embeddings of answers $i$ and $j$. Minimizing $\sum_{i} w_i y_i$ (i.e., equation 7) then ensures that low-reward points ($w_i$ large) lie close to at least one penalized center.
+
+Algorithmic Overview. Solving $\mathcal{P}$ gives the $K$ negatives $S_{\mathrm{neg}}$ , while the highest-reward index $i_{\mathrm{top}}$ is chosen as a positive. The final subset $\{i_{\mathrm{top}}\} \cup S_{\mathrm{neg}}$ is then passed to the REFA loss (see Section 4). Algorithm 3 outlines the procedure succinctly.
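+For small $N$, the optimum of Problem $\mathcal{P}$ can equivalently be recovered by direct enumeration, since the assignment variables $z_{i,j}$ merely realize $\min_{j \in S} A_{i,j}$. A brute-force sketch (pure Python, exponential in $N$, illustrative names only):
+
+```python
+import itertools, math
+
+def opt_select_exact(rewards, embeddings, k):
+    """Exhaustively minimize Cost(S) = sum_i w_i * min_{j in S} A_{i,j} (Eq. 7).
+    Returns the single positive i_top and the K negatives."""
+    n = len(rewards)
+    r_bar = sum(rewards) / n
+    w = [math.exp(r_bar - r) for r in rewards]            # Eq. 5 weights
+    dist = [[math.dist(embeddings[i], embeddings[j]) for j in range(n)]
+            for i in range(n)]                            # Eq. 6 distance matrix
+    i_top = max(range(n), key=lambda i: rewards[i])       # single positive
+    candidates = [i for i in range(n) if i != i_top]
+
+    def cost(S):
+        return sum(w[i] * min(dist[i][j] for j in S) for i in range(n))
+
+    best = min(itertools.combinations(candidates, k), key=cost)
+    return i_top, list(best)
+
+print(opt_select_exact([0.9, 0.1, 0.8, 0.2],
+                       [[0, 0], [0, 1], [5, 5], [5, 6]], 2))
+```
+
+For $N$ beyond a few dozen responses this enumeration becomes infeasible, which is where the MIP solver or the local search of the next subsection takes over.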
+
+# 6.3. Local Search Approximation
+
+For large $N$ , an exact MIP can be expensive. A simpler local search approach initializes a random subset $\mathcal{S}$ of size $K$ and iteratively swaps elements in and out if it lowers the cost equation 7. In practice, this provides an efficient approximation, especially when $N$ or $K$ grows.
+
+Intuition. If $y_{i}$ is far from all penalized points $j \in S$ , then it remains relatively "safe" from suppression, which is undesirable if $r_{i}$ is low (i.e. $w_{i}$ large). By systematically choosing $S$ to reduce $\sum_{i} w_{i} \min_{j \in S} d_{i,j}$ , we concentrate penalization on high-impact, low-reward regions. The local search repeatedly swaps elements until no single exchange can further reduce the cost.
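+The 1-swap procedure of Algorithm 4 is equally short. A pure-Python sketch with illustrative names (recomputing the full cost per swap is wasteful but keeps the logic transparent):
+
+```python
+import itertools, math
+
+def local_search_negatives(rewards, embeddings, k, seed_subset=None):
+    """1-swap local search on Cost(S) = sum_i w_i * min_{j in S} d_{i,j}."""
+    n = len(rewards)
+    r_bar = sum(rewards) / n
+    w = [math.exp(r_bar - r) for r in rewards]
+    d = [[math.dist(embeddings[i], embeddings[j]) for j in range(n)]
+         for i in range(n)]
+    i_top = max(range(n), key=lambda i: rewards[i])
+    pool = [i for i in range(n) if i != i_top]
+
+    def cost(S):
+        return sum(w[i] * min(d[i][j] for j in S) for i in range(n))
+
+    S = set(seed_subset) if seed_subset is not None else set(pool[:k])
+    improved = True
+    while improved:
+        improved = False
+        for j_out, j_in in itertools.product(list(S), [i for i in pool if i not in S]):
+            T = (S - {j_out}) | {j_in}
+            if cost(T) < cost(S):       # accept any strictly improving swap
+                S, improved = T, True
+                break
+    return i_top, sorted(S)
+
+print(local_search_negatives([0.9, 0.1, 0.8, 0.2],
+                             [[0, 0], [0, 1], [5, 5], [5, 6]], 2))
+```
+
+The loop terminates because the cost strictly decreases with every accepted swap and there are finitely many subsets of size $K$.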
+
+# 6.4. Why "Opt-Select"? A Lipschitz Argument for Expected Reward
+
+We name the procedure "Opt-Select" because solving equation 8 (or its local search variant) directly approximates an optimal subset for improving the policy's expected reward. Specifically, under a Lipschitz constraint with constant $L$ , assigning zero probability to each chosen negative $y_{j}$ implies neighboring answers $y_{i}$ at distance $d_{i,j}$ cannot exceed probability $Ld_{i,j}$ . Consequently, their contribution to the "bad behavior" portion of expected reward is bounded by
+
+$$
+\exp \big (r _ {\max} - r _ {i} \big) \left(L d _ {i, j}\right),
+$$
+
+where $r_{\mathrm{max}}$ is the rating of the best-rated response. Dividing by a normalization factor (such as $\exp(r_{\mathrm{max}} - \overline{r}) L$ ), one arrives at a cost akin to $w_i d_{i,j}$ with $w_i = \exp(\overline{r} - r_i)$ .
+
+Remark 6.1. This objective aligns with the classical min-knapsack setting of minimizing a cost subject to covering constraints, and closely mirrors the weighted $K$-medoids notion of "covering" important items at minimum cost.
+
+# 7. Key Theoretical Results
+
+This section presents core theoretical statements underpinning AMPO's active selection. Full proofs are in Appendices C-E. We assume a budget of $K$ responses to be selected from $N$ candidates.
+
+# 7.1. Setup and Assumptions
+
+(A1) $L$ -Lipschitz Constraint. When a response $y_{j}$ is penalized (probability $p_{j} = 0$ ), any other response $y_{i}$ within embedding distance $A_{i,j}$ must satisfy $p_{i} \leq L A_{i,j}$ .
+(A2) Single Positive Enforcement. We allow one highest-reward response $y_{i_{\mathrm{top}}}$ to be unconstrained, i.e. $p_{i_{\mathrm{top}}}$ is not pulled down by the negatives.
+(A3) Finite Support. We focus on a finite set of $N$ candidate responses $\{y_1,\ldots ,y_N\}$ and their scalar rewards $\{r_i\}$ , each embedded in $\mathbb{R}^d$ with distance $A_{i,j} = \| \mathbf{e}_i - \mathbf{e}_j\|$ .
+
+# 7.2. Optimal Negatives via Coverage
+
+Theorem 7.1 (Optimality of OPT-SELECT). Under assumptions (A1)-(A3), let $S^*$ be the set of $K$ "negative" responses that minimizes the coverage cost
+
+$$
+\operatorname{Cost}(\mathcal{S}) = \sum_{i = 1}^{N} \exp\left(\bar{r} - r_{i}\right) \min_{j \in \mathcal{S}} A_{i,j}, \tag{10}
+$$
+
+where $\overline{r}$ is a reference reward (e.g. average of $\{r_i\}$ ). Then $S^*$ also maximizes the expected reward among all Lipschitz-compliant policies of size $K$ (with a single positive). Consequently, selecting $S^*$ and allowing $p_{i_{\mathrm{top}}}\approx 1$ is optimal.
+
+Sketch of Proof. (See Appendix C for details.) We show a one-to-one correspondence between minimizing coverage cost $\sum_{i} w_{i} \min_{j \in S} A_{i,j}$ and maximizing the feasible expected reward $\sum_{i} r_{i} p_{i}$ under the Lipschitz constraint. Low-reward responses with large $w_{i}$ must lie close to at least one negative $j \in S$ ; else, they are not sufficiently suppressed. A mixed-integer program encodes this cost explicitly, and solving it yields the unique $S^{*}$ that maximizes reward.
+
+# 7.3. Local Search for Weighted $K$ -Medoids
+
+(A4) Weighted $K$-Medoids Setup. We have $N$ points $\{1, \ldots, N\}$ in a metric space with distance $d(\cdot, \cdot) \geq 0$, each with weight $w_i \geq 0$. Our goal is to find a subset $\mathcal{S}$ of size $K$ to minimize $\operatorname{Cost}(\mathcal{S}) = \sum_{i=1}^{N} w_i \min_{j \in \mathcal{S}} d(i, j)$.
+
+**Mistral-Instruct (7B)**
+
+| Method | AlpacaEval 2 LC (%) | AlpacaEval 2 WR (%) | Arena-Hard WR (%) | MT-Bench (GPT-4) |
+| --- | --- | --- | --- | --- |
+| Base | 17.1 | 14.7 | 12.6 | 7.5 |
+| RRHF$^1$ | 25.3 | 24.8 | 18.1 | 7.6 |
+| SLiC-HF$^1$ | 24.1 | 24.6 | 18.9 | 7.8 |
+| DPO$^1$ | 26.8 | 24.9 | 16.3 | 7.6 |
+| IPO$^1$ | 20.3 | 20.3 | 16.2 | 7.8 |
+| CPO$^1$ | 23.8 | 28.8 | 22.6 | 7.5 |
+| KTO$^1$ | 24.5 | 23.6 | 17.9 | 7.7 |
+| ORPO$^1$ | 24.5 | 24.9 | 20.8 | 7.7 |
+| R-DPO$^1$ | 27.3 | 24.5 | 16.1 | 7.5 |
+| SIMPO | 30.1 | 32.3 | 21.1 | 7.6 |
+| AMPO-BottomK | 32.1 | 37.0 | 22.1 | 7.7 |
+| AMPO-Coreset | 32.8 | 37.3 | 22.6 | 7.8 |
+| AMPO-Opt-Select | 33.1 | 37.8 | 22.8 | 7.7 |
+
+**Llama-3-Instruct (8B)**
+
+| Method | AlpacaEval 2 LC (%) | AlpacaEval 2 WR (%) | Arena-Hard WR (%) | MT-Bench (GPT-4) |
+| --- | --- | --- | --- | --- |
+| Base | 28.4 | 28.4 | 26.9 | 7.93 |
+| RRHF$^1$ | 31.3 | 28.4 | 26.5 | 7.9 |
+| SLiC-HF$^1$ | 26.9 | 27.5 | 26.2 | 8.1 |
+| DPO$^1$ | 40.3 | 37.9 | 32.6 | 8.0 |
+| IPO$^1$ | 35.6 | 35.6 | 30.5 | 8.3 |
+| CPO$^1$ | 28.9 | 32.2 | 28.8 | 8.0 |
+| KTO$^1$ | 33.1 | 31.8 | 26.4 | 8.2 |
+| ORPO$^1$ | 28.5 | 27.4 | 25.8 | 8.0 |
+| R-DPO$^1$ | 41.1 | 37.8 | 33.1 | 8.0 |
+| SIMPO | 47.6 | 44.7 | 34.9 | 7.5 |
+| AMPO-BottomK | 50.8 | 50.5 | 45.2 | 8.1 |
+| AMPO-Coreset | 52.4 | 52.1 | 47.8 | 8.1 |
+| AMPO-Opt-Select | 51.6 | 51.2 | 46.4 | 8.0 |
+
+Table 1. Comparison of various preference optimization baselines on AlpacaEval 2, Arena-Hard, and MT-Bench benchmarks for Mistral-Instruct (7B) and Llama-3-Instruct (8B). LC represents length-controlled win rate, and WR represents raw win rate. Our method (AMPO) achieves SOTA performance across all metrics, with different variants achieving either best or second-best results consistently.
+
+Theorem 7.2 (Local Search Approximation). Suppose we apply a 1-swap local search algorithm to select $K$ medoids. Let $\widehat{S}$ be the resulting local optimum and let $S^*$ be the globally optimal subset. Then
+
+$$
+\operatorname {C o s t} (\widehat {\mathcal {S}}) \leq 5 \times \operatorname {C o s t} \left(\mathcal {S} ^ {*}\right).
+$$
+
+The running time is polynomial in $N$ and $K$ .
+
+Sketch of Proof. (See Appendix D for a complete proof.) Assume by contradiction that $\operatorname{Cost}(\widehat{\mathcal{S}}) > 5\operatorname{Cost}(\mathcal{S}^*)$ . We then show there exists a profitable swap (removing some $j \in \widehat{\mathcal{S}}$ and adding $j^* \in \mathcal{S}^*$ ) that strictly decreases cost, contradicting the local optimality of $\widehat{\mathcal{S}}$ .
+
+# 7.4. Coreset Guarantee for AMPO-Coreset
+
+(A5) Bounded-Diameter Clusters: For AMPO-CORESET, we assume the $N - 1$ non-positive candidate responses can be grouped into $K$ semantic clusters, each with an embedding-space diameter at most $d_{\mathrm{max}}$ .
+
+Intuition: AMPO-CORESET selects one lowest-rated negative from each of the $K$ semantic clusters. Under the Lipschitz constraint (A1), penalizing this single representative from a bounded-diameter cluster (A5) effectively suppresses all other semantically similar (i.e., same-cluster) responses. This ensures broad coverage across the response landscape.
+
+Formal Result: (Theorem E.1, Appendix E). The induced policy's maximum expected reward is at least
+
+$$
+r _ {\max } - L d _ {\max }, \tag {11}
+$$
+
+i.e. within additive $Ld_{\mathrm{max}}$ of the unconstrained optimum given assumptions on cluster diameter $(d_{\mathrm{max}})$ and the policy's smoothness $(L)$ .
+
+# 8. Experiments
+
+# 8.1. Experimental Setup
+
+Model and Training Settings: For our experiments, we utilize a pretrained instruction-tuned model (meta-llama/Meta-Llama-3-8B-Instruct) as the SFT model. This model has undergone extensive instruction tuning, making it more capable and robust than the SFT models used in the Base setup. However, its reinforcement learning from human feedback (RLHF) procedure remains undisclosed, making it less transparent.
+
+To reduce distribution shift between the SFT model and the preference optimization process, we follow the approach in (Tran et al., 2023) and generate the preference dataset using the same SFT model. This ensures that our setup is more aligned with an on-policy setting. Specifically, we utilize prompts from the UltraFeedback dataset (Cui et al., 2023) and regenerate the responses using the SFT model. For each prompt $\mathbf{x}$, we produce 32 responses by sampling from the SFT model with a sampling temperature of 0.8. We then use the reward model (Skywork/Skywork-Reward-Llama-3.1-8B-v0.2) (Liu et al., 2024b) to score all 32 responses. The responses are then selected using the active subset selection strategies: (a) AMPO-BottomK, (b) AMPO-Coreset, and (c) AMPO-Opt-Select.
+
+In our experiments, we observed that tuning hyperparameters is critical for optimizing performance, and carefully selected values significantly impact the effectiveness of these methods across various datasets. We found that setting the $\beta$ (inverse temperature) parameter in the range of 5.0 to 10.0 consistently yields strong performance, while tuning the $\gamma$ parameter within the range of 2 to 4 further improves results. These observations highlight the importance of systematic hyperparameter tuning to achieve reliable outcomes across diverse datasets.
+
+Figure 3. t-SNE visualization of high-dimensional response embeddings projected into a 2D space, illustrating the separation of actively selected responses. (a) AMPO-BottomK (baseline). (b) AMPO-Coreset (ours). (c) Opt-Select (ours). Traditional baselines select many responses close to each other based on their rating, providing insufficient feedback to the LLM during preference optimization. In contrast, our methods simultaneously optimize for coverage, generation probability, and preference rating.
+
+Evaluation Benchmarks We evaluate our models using three widely recognized open-ended instruction-following benchmarks: MT-Bench (Zheng et al., 2023), AlpacaEval2 (Dubois et al., 2024), and Arena-Hard v0.1. These benchmarks are commonly used in the community to assess the conversational versatility of models across a diverse range of queries.
+
+AlpacaEval 2 comprises 805 questions sourced from five datasets, while MT-Bench spans eight categories with a total of 80 questions. The recently introduced Arena-Hard builds upon MT-Bench, featuring 500 well-defined technical problem-solving queries designed to test more advanced capabilities.
+
+We adhere to the evaluation protocols specific to each benchmark when reporting results. For AlpacaEval 2, we provide both the raw win rate (WR) and the length-controlled win rate (LC), with the latter designed to mitigate the influence of model verbosity. For Arena-Hard, we report the win rate (WR) against a baseline model. For MT-Bench, we present the scores as evaluated by GPT-4-Preview-1106, which serves as the judge model.
+
+# 8.2. Experimental Result
+
+Impact of Selection Strategies on Diversity. Figure 3 shows a t-SNE projection of response embeddings, highlighting how each selection method samples the answer space:
+
+AMPO-BottomK: Tends to pick a tight cluster of low-rated responses, limiting coverage and redundancy in feedback.
+
+AMPO-Coreset: Uses coreset-based selection to cover more diverse regions, providing broader coverage of representative examples.
+
+Opt-Select: Further balances reward extremity and embedding coverage, yielding well-separated response clusters and more effective supervision for preference alignment.
+
+Key Takeaway: Figure 3 demonstrates that our selection strategies significantly improve response diversity compared to traditional baselines. By actively optimizing for coverage-aware selection, our methods mitigate redundancy in selected responses, leading to better preference modeling and enhanced LLM alignment.
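+A minimal sketch of the kind of projection shown in Figure 3, using scikit-learn's t-SNE; the embeddings and selected indices below are random placeholders rather than the paper's data:
+
+```python
+import numpy as np
+from sklearn.manifold import TSNE
+
+rng = np.random.default_rng(0)
+embeddings = rng.normal(size=(32, 768))   # N=32 candidate responses, 768-dim embeddings (placeholder)
+selected = [0, 2, 5, 8, 11, 19, 27, 30]   # indices picked by some selection strategy (placeholder)
+
+# Project the high-dimensional embeddings into 2D for visualization.
+proj = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(embeddings)
+
+# Crude diversity proxy: mean pairwise 2D distance among the selected responses.
+pts = proj[selected]
+dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
+mean_pairwise = dists.sum() / (len(pts) * (len(pts) - 1))
+print(f"mean pairwise distance of selected responses: {mean_pairwise:.2f}")
+```
+
+A larger mean pairwise distance among selected points is one crude numerical proxy for the coverage that the figure illustrates visually.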
+
+Impact of Temperature Sampling for Different Active Selection Approaches To analyze the impact of temperature-controlled response sampling on different active selection approaches, we conduct an ablation study by varying the sampling temperature from 0 to 1.0 in increments of 0.25 on the AlpacaEval 2 benchmark, as shown in Figure 4. Across our active selection strategies, we observe a general trend of declining performance with increasing temperature.
+
+Key Takeaway: AMPO-Coreset and AMPO-Opt-Select demonstrate robustness to temperature variations, whereas the LC-WR of SimPO and of bottom- $k$ selection is more sensitive.
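+For concreteness, here is a small sketch (with illustrative logits, not the paper's sampler) of how temperature reshapes a sampling distribution over the 0-to-1.0 sweep used above; temperature 0 is treated as greedy argmax decoding:
+
+```python
+import numpy as np
+
+def apply_temperature(logits: np.ndarray, temperature: float) -> np.ndarray:
+    """Return a sampling distribution over tokens for a given temperature."""
+    if temperature == 0.0:                  # greedy: all mass on the argmax
+        probs = np.zeros_like(logits)
+        probs[np.argmax(logits)] = 1.0
+        return probs
+    z = logits / temperature
+    z -= z.max()                            # stabilize the softmax
+    e = np.exp(z)
+    return e / e.sum()
+
+logits = np.array([2.0, 1.0, 0.5, -1.0])    # illustrative token logits
+for t in [0.0, 0.25, 0.5, 0.75, 1.0]:       # the sweep used in the ablation
+    print(t, np.round(apply_temperature(logits, t), 3))
+```
+
+Higher temperatures flatten the distribution, so sampled responses drift away from the model's highest-probability outputs.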
+
+Effect of $\gamma$ for Active Selection Approaches To investigate the sensitivity of core-set selection to different hyperparameter settings, we conduct an ablation study on the impact of varying $\gamma$, as shown in Figure 5. As $\gamma$ increases from 1 to 3, we observe a consistent improvement in both LC-WR and WR scores.
+
+Key Takeaway: This highlights the importance of tuning $\gamma$ appropriately to maximize the effectiveness of active-selection approaches.
+
+Robustness to Reward Model Choice To assess the robustness of AMPO to the choice of reward model, we evaluate performance using two distinct reward models: Skywork-Reward-LM and GRM-Reward-LM. Table 2 presents results
+
+
+Figure 4. Effect of sampling temperature on different baselines on the AlpacaEval 2 benchmark: (a) Length-Controlled Win Rate (LC) and (b) Overall Win Rate (WR).
+
+across three AMPO selection strategies—Bottom- $k$ , Core-set, and Opt-Select—on AlpacaEval 2, Arena-Hard, and MT-Bench.
+
+| Method | Reward Model | AlpacaEval 2 LC (%) | AlpacaEval 2 WR (%) |
+| --- | --- | --- | --- |
+| AMPO-Bottomk | Skywork-Reward-LM | 50.8 | 50.5 |
+| AMPO-Coreset | Skywork-Reward-LM | 52.4 | 52.1 |
+| AMPO-Opt-Select | Skywork-Reward-LM | 51.6 | 51.2 |
+| AMPO-Bottomk | GRM-Reward-LM | 51.5 | 49.3 |
+| AMPO-Coreset | GRM-Reward-LM | 52.5 | 49.7 |
+| AMPO-Opt-Select | GRM-Reward-LM | 52.9 | 51.7 |
+
+We observe that the relative ranking of methods remains largely consistent across reward models, with Opt-Select and Coreset outperforming Bottom- $k$ across metrics.
+
+Key Takeaway: AMPO exhibits robust generalization across distinct reward models, indicating that its effectiveness is not tied to specific reward functions.
+
+Effect of Negative Set Size $(K)$ in AMPO To examine how the number of negative comparisons affects performance, we evaluate AMPO-Opt-Select with increasing values of $K$ in the 1-vs- $K$ selection strategy—specifically, $K \in \{3, 5, 7\}$. The results across AlpacaEval 2, Arena-Hard, and MT-Bench are presented in Table 3.
+
+We observe that even with a small number of negatives (e.g., $K = 3$ ), AMPO maintains strong performance, indicating that our selection identifies high-utility contrastive examples. As $K$ increases, performance improves slightly, peaking at 1-vs-7, yet the marginal gains diminish.
+
+Key Takeaway: AMPO is highly effective even with a small number of negative samples, and further increases in $K$ yield diminishing returns. This suggests that our method is well suited to resource-constrained alignment settings where generating large negative sets is costly.
+
+Effect of Total Number of Responses $(N)$ in AMPO-Opt-Select We investigate the impact of varying the total number of generated responses $N$ available for selection
+
+
+Figure 5. Effect of Gamma on AlpacaEval2 for Active Subset Selection Strategies.
+
+Table 2. Comparison of AMPO baselines on AlpacaEval 2 using LLaMA-3-Instruct (8B) across different reward models.
+
+Table 3. Effect of increasing the negative set size $(K)$ in AMPO-Opt-Select on AlpacaEval 2, Arena-Hard, and MT-Bench.
+
+| Method | AlpacaEval 2 LC (%) | AlpacaEval 2 WR (%) | Arena-Hard WR (%) | MT-Bench (GPT-4) |
+| --- | --- | --- | --- | --- |
+| AMPO-Opt-Select (1vs3) | 49.6 | 48.5 | 46.1 | 8.03 |
+| AMPO-Opt-Select (1vs5) | 50.3 | 49.9 | 43.9 | 7.84 |
+| AMPO-Opt-Select (1vs7) | 51.6 | 51.2 | 46.4 | 8.11 |
+
+Table 4. Effect of increasing the number of responses $(N)$ for selection using AMPO-Opt-Select (1 vs 7) on AlpacaEval 2, Arena-Hard, and MT-Bench.
+
+| Method | AlpacaEval 2 LC (%) | AlpacaEval 2 WR (%) | Arena-Hard WR (%) | MT-Bench (GPT-4) |
+| --- | --- | --- | --- | --- |
+| AMPO-Opt-Select (N = 16) | 50.6 | 50.1 | 45.5 | 7.76 |
+| AMPO-Opt-Select (N = 24) | 51.1 | 50.5 | 45.7 | 7.88 |
+| AMPO-Opt-Select (N = 32) | 51.6 | 51.2 | 46.4 | 8.11 |
+
+in AMPO-Opt-Select under a fixed 1-vs-7 contrastive setting. Specifically, we compare performance when $N \in \{16, 24, 32\}$, as shown in Table 4.
+
+Our findings reveal that while increasing $N$ leads to consistent improvements across all evaluation benchmarks, the performance gains are marginal. Notably, even with $N = 16$, the results remain competitive, suggesting that AMPO-Opt-Select effectively identifies high-quality contrastive sets with limited candidate pools. Nonetheless, a larger $N$ introduces greater response diversity, which can enhance the coverage of the preference space and lead to modest performance gains—culminating at $N = 32$ with the highest scores across AlpacaEval 2, Arena-Hard, and MT-Bench.
+
+Key Takeaway: Increasing the pool size of generated responses for AMPO improves performance, but the method remains strong even at lower $N$ , demonstrating its efficiency and robustness in low-sample settings.
+
+# 9. Discussion & Future Work
+
+Iteration via Active Synthetic Data Generation. The on-policy, coverage-focused active selection in AMPO naturally surfaces candidates for synthetic data generation. This opens avenues for future work in co-adapting the policy and reward model through this actively generated data via a robust policy-reward feedback loop.
+
+# Impact Statement
+
+This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
+
+# References
+
+Arya, V., Garg, N., Khandekar, R., Meyerson, A., Munagala, K., and Pandit, V. Local search heuristic for k-median and facility location problems. In Proceedings of the thirty-third annual ACM symposium on Theory of computing, pp. 21-29, 2001.
+Askell, A., Bai, Y., Chen, A., Drain, D., Ganguli, D., Henighan, T., Jones, A., Joseph, N., Mann, B., DasSarma, N., et al. A general language assistant as a laboratory for alignment. arXiv preprint arXiv:2112.00861, 2021.
+Azar, M. G., Rowland, M., Piot, B., Guo, D., Calandriello, D., Valko, M., and Munos, R. A general theoretical paradigm to understand learning from human preferences. *ArXiv*, abs/2310.12036, 2023.
+Azar, M. G., Guo, Z. D., Piot, B., Munos, R., Rowland, M., Valko, M., and Calandriello, D. A general theoretical paradigm to understand learning from human preferences. In International Conference on Artificial Intelligence and Statistics, pp. 4447-4455. PMLR, 2024.
+Bachem, O., Lucic, M., and Krause, A. Practical core-set constructions for machine learning. arXiv preprint arXiv:1703.06476, 2017.
+Bai, Y., Jones, A., Ndousse, K., Askell, A., Chen, A., Das-Sarma, N., Drain, D., Fort, S., Ganguli, D., Henighan, T., et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022.
+Cacchiani, V., Iori, M., Locatelli, A., and Martello, S. Knapsack problems—an overview of recent advances. part ii: Multiple, multidimensional, and quadratic knapsack problems. Computers & Operations Research, 143:105693, 2022.
+Ceravolo, P., Mohammadi, F., and Tamborini, M. A. Active learning methodology in llms fine-tuning. In 2024 IEEE International Conference on Cyber Security and Resilience (CSR), pp. 743-749. IEEE, 2024.
+Chen, H., He, G., Yuan, L., Cui, G., Su, H., and Zhu, J. Noise contrastive alignment of language models with explicit rewards. arXiv preprint arXiv:2402.05369, 2024a.
+Chen, T., Kornblith, S., Swersky, K., Norouzi, M., and Hinton, G. E. Big self-supervised models are strong
+
+semi-supervised learners. Advances in neural information processing systems, 33:22243-22255, 2020.
+Chen, Z., Deng, Y., Yuan, H., Ji, K., and Gu, Q. Self-play fine-tuning converts weak language models to strong language models. arXiv preprint arXiv:2401.01335, 2024b.
+Christiano, P. F., Leike, J., Brown, T., Martic, M., Legg, S., and Amodei, D. Deep reinforcement learning from human preferences. Advances in neural information processing systems, 30, 2017.
+Cohen-Addad, V., Saulpic, D., and Schwiegelshohn, C. A new coreset framework for clustering. In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing, pp. 169–182, 2021.
+Cohen-Addad, V., Green Larsen, K., Saulpic, D., Schwiegelshohn, C., and Sheikh-Omar, O. A. Improved coresets for euclidean $k$ -means. Advances in Neural Information Processing Systems, 35:2679-2694, 2022.
+Cohn, D. A., Ghahramani, Z., and Jordan, M. I. Active learning with statistical models. Journal of artificial intelligence research, 4:129-145, 1996.
+Cui, G., Yuan, L., Ding, N., Yao, G., Zhu, W., Ni, Y., Xie, G., Liu, Z., and Sun, M. Ultrafeedback: Boosting language models with high-quality feedback. arXiv preprint arXiv:2310.01377, 2023.
+Dong, H., Xiong, W., Goyal, D., Zhang, Y., Chow, W., Pan, R., Diao, S., Zhang, J., Shum, K., and Zhang, T. Raft: Reward ranked finetuning for generative foundation model alignment. arXiv preprint arXiv:2304.06767, 2023.
+Dubey, A., Jauhri, A., Pandey, A., Kadian, A., Al-Dahle, A., Letman, A., Mathur, A., Schelten, A., Yang, A., Fan, A., et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.
+Dubois, Y., Galambosi, B., Liang, P., and Hashimoto, T. B. Length-controlled alpacaeval: A simple way to debias automatic evaluators. arXiv preprint arXiv:2404.04475, 2024.
+Ethayarajh, K., Xu, W., Muennighoff, N., Jurafsky, D., and Kiela, D. Kto: Model alignment as prospect theoretic optimization. arXiv preprint arXiv:2402.01306, 2024.
+Feldman, D. Core-sets: Updated survey. Sampling techniques for supervised or unsupervised tasks, pp. 23-44, 2020.
+Feldman, D., Schmidt, M., and Sohler, C. Turning big data into tiny data: Constant-size coresets for k-means, pca, and projective clustering. SIAM Journal on Computing, 49(3):601-657, 2020.
+
+Gupta, A. and Tangwongsan, K. Simpler analyses of local search algorithms for facility location. arXiv preprint arXiv:0809.2554, 2008.
+Gupta, T., Madhavan, R., Zhang, X., Bansal, C., and Rajmohan, S. Refa: Reference free alignment for multi-preference optimization. arXiv preprint arXiv:2412.16378, 2024a.
+Gupta, T., Madhavan, R., Zhang, X., Bansal, C., and Rajmohan, S. Swepo: Simultaneous weighted preference optimization for group contrastive alignment. arXiv preprint arXiv:2412.04628, 2024b.
+Har-Peled, S. and Mazumdar, S. On coresets for k-means and k-median clustering. In Proceedings of the thirty-sixth annual ACM symposium on Theory of computing, pp. 291-300, 2004.
+Hartigan, J. A. and Wong, M. A. Algorithm as 136: A k-means clustering algorithm. Journal of the royal statistical society. series c (applied statistics), 28(1):100-108, 1979.
+Hong, J., Lee, N., and Thorne, J. ORPO: Monolithic preference optimization without reference model. *ArXiv*, abs/2403.07691, 2024a.
+Hong, J., Lee, N., and Thorne, J. Orpo: Monolithic preference optimization without reference model. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pp. 11170-11189, 2024b.
+Huang, L., Jiang, S., and Vishnoi, N. Coresets for clustering with fairness constraints. Advances in neural information processing systems, 32, 2019.
+Kellerer, H., Pferschy, U., Pisinger, D., Kellerer, H., Pferschy, U., and Pisinger, D. Introduction to np-completeness of knapsack problems. Knapsack problems, pp. 483-493, 2004a.
+Kellerer, H., Pferschy, U., Pisinger, D., Kellerer, H., Pferschy, U., and Pisinger, D. Multidimensional knapsack problems. Springer, 2004b.
+Kim, D., Kim, Y., Song, W., Kim, H., Kim, Y., Kim, S., and Park, C. sdpo: Don't use your data all at once. arXiv preprint arXiv:2403.19270, 2024.
+Korbak, T., Shi, K., Chen, A., Bhalerao, R. V., Buckley, C., Phang, J., Bowman, S. R., and Perez, E. Pretraining language models with human preferences. In International Conference on Machine Learning, pp. 17506-17533. PMLR, 2023.
+Kumar, A., Zhuang, V., Agarwal, R., Su, Y., Co-Reyes, J. D., Singh, A., Baumli, K., Iqbal, S., Bishop, C., Roelofs,
+
+R., et al. Training language models to self-correct via reinforcement learning. arXiv preprint arXiv:2409.12917, 2024.
+Liu, A., Bai, H., Lu, Z., Sun, Y., Kong, X., Wang, S., Shan, J., Jose, A. M., Liu, X., Wen, L., et al. Tisdpo: Token-level importance sampling for direct preference optimization with estimated weights. arXiv preprint arXiv:2410.04350, 2024a.
+Liu, C. Y., Zeng, L., Liu, J., Yan, R., He, J., Wang, C., Yan, S., Liu, Y., and Zhou, Y. Skywork-reward: Bag of tricks for reward modeling in llms. arXiv preprint arXiv:2410.18451, 2024b.
+Liu, J., Zhou, Z., Liu, J., Bu, X., Yang, C., Zhong, H.-S., and Ouyang, W. Iterative length-regularized direct preference optimization: A case study on improving 7b language models to gpt-4 level. arXiv preprint arXiv:2406.11817, 2024c.
+Liu, T., Zhao, Y., Joshi, R., Khalman, M., Saleh, M., Liu, P. J., and Liu, J. Statistical rejection sampling improves preference optimization. arXiv preprint arXiv:2309.06657, 2023.
+Liu, X., Zhang, F., Hou, Z., Mian, L., Wang, Z., Zhang, J., and Tang, J. Self-supervised learning: Generative or contrastive. IEEE transactions on knowledge and data engineering, 35(1):857-876, 2021.
+Long, D. X., Ngoc, H. N., Sim, T., Dao, H., Joty, S., Kawaguchi, K., Chen, N. F., and Kan, M.-Y. Llms are biased towards output formats! systematically evaluating and mitigating output format bias of llms. arXiv preprint arXiv:2408.08656, 2024.
+Meng, Y., Xia, M., and Chen, D. Simpo: Simple preference optimization with a reference-free reward. arXiv preprint arXiv:2405.14734, 2024.
+Oh Song, H., Jegelka, S., Rathod, V., and Murphy, K. Deep metric learning via facility location. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5382-5390, 2017.
+Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., et al. Training language models to follow instructions with human feedback. Advances in neural information processing systems, 35:27730-27744, 2022.
+Pang, R. Y., Yuan, W., Cho, K., He, H., Sukhbaatar, S., and Weston, J. Iterative reasoning preference optimization. arXiv preprint arXiv:2404.19733, 2024.
+Park, R., Rafailov, R., Ermon, S., and Finn, C. Disentangling length from quality in direct preference optimization. arXiv preprint arXiv:2403.19159, 2024.
+
+Qi, B., Li, P., Li, F., Gao, J., Zhang, K., and Zhou, B. Online dpo: Online direct preference optimization with fast-slow chasing. arXiv preprint arXiv:2406.05534, 2024.
+Rafailov, R., Sharma, A., Mitchell, E., Manning, C. D., Ermon, S., and Finn, C. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36, 2024.
+Schulman, J., Levine, S., Abbeel, P., Jordan, M., and Moritz, P. Trust region policy optimization. In International conference on machine learning, pp. 1889-1897. PMLR, 2015.
+Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
+Sener, O. and Savarese, S. Active learning for convolutional neural networks: A core-set approach. arXiv preprint arXiv:1708.00489, 2017.
+Settles, B. Active learning literature survey, 2009.
+Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., et al. Mastering the game of go with deep neural networks and tree search. nature, 529(7587):484-489, 2016.
+Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., Hubert, T., Baker, L., Lai, M., Bolton, A., et al. Mastering the game of go without human knowledge. nature, 550(7676):354-359, 2017.
+Song, F., Yu, B., Li, M., Yu, H., Huang, F., Li, Y., and Wang, H. Preference ranking optimization for human alignment. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pp. 18990-18998, 2024.
+Tang, Y., Guo, Z. D., Zheng, Z., Calandriello, D., Munos, R., Rowland, M., Richemond, P. H., Valko, M., Pires, B. Á., and Piot, B. Generalized preference optimization: A unified approach to offline alignment. arXiv preprint arXiv:2402.05749, 2024.
+Tran, H., Glaze, C., and Hancock, B. Iterative dpo alignment. Technical report, Snorkel AI, 2023.
+Wu, Y., Sun, Z., Yuan, H., Ji, K., Yang, Y., and Gu, Q. Self-play preference optimization for language model alignment. arXiv preprint arXiv:2405.00675, 2024.
+Wu, Z., Hu, Y., Shi, W., Dziri, N., Suhr, A., Ammanabrolu, P., Smith, N. A., Ostendorf, M., and Hajishirzi, H. Finegrained human feedback gives better rewards for language model training. Advances in Neural Information Processing Systems, 36:59008-59033, 2023.
+
+Xiao, R., Dong, Y., Zhao, J., Wu, R., Lin, M., Chen, G., and Wang, H. Freeal: Towards human-free active learning in the era of large language models. arXiv preprint arXiv:2311.15614, 2023.
+Xu, H., Sharaf, A., Chen, Y., Tan, W., Shen, L., Durme, B. V., Murray, K., and Kim, Y. J. Contrastive preference optimization: Pushing the boundaries of LLM performance in machine translation. *ArXiv*, abs/2401.08417, 2024.
+Yu, Y., Zhuang, Y., Zhang, J., Meng, Y., Ratner, A. J., Krishna, R., Shen, J., and Zhang, C. Large language model as attributed training data generator: A tale of diversity and bias. Advances in Neural Information Processing Systems, 36, 2024.
+Yuan, W., Kulikov, I., Yu, P., Cho, K., Sukhbaatar, S., Weston, J., and Xu, J. Following length constraints in instructions. arXiv preprint arXiv:2406.17744, 2024.
+Yuan, Z., Yuan, H., Tan, C., Wang, W., Huang, S., and Huang, F. Rrhf: Rank responses to align language models with human feedback without tears. arXiv preprint arXiv:2304.05302, 2023.
+Zhang, Y., Feng, S., and Tan, C. Active example selection for in-context learning. arXiv preprint arXiv:2211.04486, 2022.
+Zhao, Y., Joshi, R., Liu, T., Khalman, M., Saleh, M., and Liu, P. J. SLiC-HF: Sequence likelihood calibration with human feedback. ArXiv, abs/2305.10425, 2023.
+Zheng, L., Chiang, W.-L., Sheng, Y., Zhuang, S., Wu, Z., Zhuang, Y., Lin, Z., Li, Z., Li, D., Xing, E., et al. Judging llm-as-a-judge with mt-bench and chatbot arena. Advances in Neural Information Processing Systems, 36: 46595-46623, 2023.
+Ziegler, D. M., Stiennon, N., Wu, J., Brown, T. B., Radford, A., Amodei, D., Christiano, P., and Irving, G. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593, 2019.
+
+# SUPPLEMENTARY MATERIALS
+
+These supplementary materials provide additional details, derivations, and experimental results for our paper. The appendix is organized as follows:
+
+- Section A provides a more comprehensive overview of the related literature.
+- Section B provides additional experiments to supplement the experiments provided in the main part of the paper.
+- Section C provides theoretical analysis of the equivalence of the optimal selection integer program and the reward maximization objective.
+- Section D shows a constant factor approximation for the coordinate descent algorithm in polynomial time.
+- Section E provides theoretical guarantees for our k-means style coreset selection algorithm.
+- Section F provides the code for computation of the optimal selection algorithm.
+- Section G provides t-SNE plots for the various queries, highlighting the performance of our algorithms.
+
+# A. Related Work
+
+We start this survey with a high-level overview of the broader Reinforcement Learning from Human Feedback (RLHF) literature, then take a deeper look at preference optimization and multi-preference optimization, and finally discuss active learning and subset selection techniques relevant to our work.
+
+Preference Optimization in RLHF. Reinforcement Learning from Human Feedback (RLHF) has emerged as a robust alignment paradigm for language models. Early methods, such as Trust Region Policy Optimization (TRPO) (Schulman et al., 2015) and Proximal Policy Optimization (PPO) (Schulman et al., 2017), extend direct RL methods by constraining policy updates for stability. PPO, in particular, has been successfully applied to RLHF, allowing LLMs to produce outputs aligned with human preferences (Ziegler et al., 2019; Ouyang et al., 2022). However, the complexity of training separate reward models and the potential instability of direct RL prompted simpler approaches.
+
+Direct Preference Optimization (DPO) (Rafailov et al., 2024) simplifies LLM alignment by optimizing a contrastive loss directly over paired preference data, bypassing the intermediate reward modeling step. This makes DPO computationally efficient and suitable for limited preference datasets. A wide array of DPO extensions and alternative preference optimization methods have since emerged. These include variants like Identity Preference Optimization (IPO) (Azar et al., 2024), self-play preference optimization (Wu et al., 2024), preference ranking optimization (Song et al., 2024), rejection sampling optimization (Liu et al., 2023), and generalized preference optimization (Tang et al., 2024). Many of these methods also address common DPO limitations, such as the need for a fixed reference model, which adds complexity. Works like RRHF (Yuan et al., 2023) and SLiC-HF (Zhao et al., 2023) propose rank-based loss techniques. KTO (Ethayarajh et al., 2024) is a framework inspired by prospect theory that directly learns desirability, while RAFT (Dong et al., 2023) introduces a list-wise finetuning approach. Other notable methods include SPIN (Chen et al., 2024b), which treats the model as part of an adversarial game, CPO (Xu et al., 2024), which reworks the DPO objective, and ORPO (Hong et al., 2024b), which unifies SFT and preference training. SimPO (Meng et al., 2024) removes the reference model and incorporates length normalization to mitigate verbosity issues. Further variants like R-DPO (Park et al., 2024), LD-DPO (Liu et al., 2024c), sDPO (Kim et al., 2024), IRPO (Pang et al., 2024), OFS-DPO (Qi et al., 2024), and LIFT-DPO (Yuan et al., 2024) address specific challenges like length bias, reasoning chains, and training stability.
+
+Multi-Preference Optimization. Traditional preference optimization methods primarily rely on pairwise comparisons. However, the advent of richer datasets, such as UltraFeedback (Cui et al., 2023), which provide multiple graded responses per query, highlights the necessity of multi-preference optimization. These methods move beyond simple binary preferences by leveraging all available positive and negative responses simultaneously, leading to more nuanced feedback signals (Rafailov et al., 2024; Cui et al., 2023; Chen et al., 2024a). Multi-preference objectives can reduce alignment bias and better approximate the true preference distribution by incorporating the diversity of acceptable and suboptimal responses. Examples include InfoNCA (Chen et al., 2024a), which utilizes a noise-contrastive objective based on scalar rewards. MPO (Gupta et al., 2024b) builds upon this by introducing deviation-based weighting, giving stronger influence to responses that deviate significantly (positively or negatively) from the average quality. REFA (Gupta et al., 2024a) addresses common issues with MPO in multi-preference optimization, including length bias and the reliance on a fixed reference model. We build upon this framework to address the problem of response selection in multi-preference optimization. Here, we emphasize highly informative examples while mitigating the overemphasis on less informative negative samples, a common challenge in these contrastive methods.
+
+On-Policy Self-Play. A key advancement in reinforcement learning that directly impacts LLM alignment is self-play or on-policy data generation. In this paradigm, the model continuously updates its policy and re-generates data from its evolving distribution (Silver et al., 2016; 2017). This ensures that the training set remains aligned with the model's current behavior (Christiano et al., 2017; Wu et al., 2023; 2024), accelerating convergence and maintaining data relevance. However, this dynamic generation process can significantly inflate the number of candidate responses per query, thereby motivating the need for selective down-sampling of training examples to manage computational load.
+
+Active Learning for Policy Optimization. The notion of selectively querying the most informative examples is central to active learning (Cohn et al., 1996; Settles, 2009), which aims to reduce labeling effort by focusing on high-utility samples. Several works incorporate active learning ideas into reinforcement learning, e.g., uncertainty sampling or diversity-based selection (Sener & Savarese, 2017; Zhang et al., 2022). In the RLHF setting, Christiano et al. (2017) highlight how strategic feedback can accelerate policy improvements, while others apply active subroutines to refine reward models (Wu et al., 2023). By picking a small yet diverse set of responses, we avoid both computational blow-ups and redundant training signals.
+
+Links with Classical Problems. Our work draws heavily from classic problems in machine learning and combinatorial optimization related to selecting representative subsets. Clustering techniques such as $K$ -means and $K$ -medoids (Hartigan & Wong, 1979) are used to group points and ensure coverage over semantically distinct modes in the embedding space (Har-Peled & Mazumdar, 2004; Cohen-Addad et al., 2022). These methods connect to the facility location problem (Oh Song et al., 2017), which seeks to minimize the cost of "covering" all points with a fixed number of centers, often addressed via coreset construction (Feldman, 2020). Furthermore, when selecting a subset of size $K$ to cover or suppress "bad" outputs, the objective can be framed as a min-knapsack or combinatorial optimization problem (Kellerer et al., 2004a). Such formulations often involve integer programs (Chen et al., 2020), for which approximate solutions can achieve strong empirical results in high-dimensional scenarios (Cohen-Addad et al., 2022; Har-Peled & Mazumdar, 2004). Our method frames the selection of negative samples in a Lipschitz coverage sense, thereby enabling both theoretical guarantees and practical efficiency in multi-preference alignment.
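+To make the coverage objectives referenced above concrete, here is a sketch of greedy $k$-center (farthest-point) selection, a classical 2-approximation for the covering problem that underlies many coreset constructions. This is an illustration of the general technique, not the paper's Opt-Select procedure:
+
+```python
+import numpy as np
+
+def k_center_greedy(points: np.ndarray, k: int, seed: int = 0) -> list[int]:
+    """Pick k indices so the selected points cover the embedding space."""
+    rng = np.random.default_rng(seed)
+    chosen = [int(rng.integers(len(points)))]
+    # Distance of every point to its nearest chosen center.
+    d = np.linalg.norm(points - points[chosen[0]], axis=1)
+    while len(chosen) < k:
+        nxt = int(np.argmax(d))          # farthest point from current centers
+        chosen.append(nxt)
+        d = np.minimum(d, np.linalg.norm(points - points[nxt], axis=1))
+    return chosen
+
+pts = np.random.default_rng(1).normal(size=(100, 8))   # placeholder embeddings
+subset = k_center_greedy(pts, k=7)
+print(sorted(subset))
+```
+
+Each round greedily adds the point farthest from all current centers, so the selected subset spreads out over the space rather than clustering, mirroring the coverage behavior discussed above.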
+
+Collectively, our work stands at the intersection of multi-preference alignment (Gupta et al., 2024a; Cui et al., 2023), on-policy data generation (Silver et al., 2017; Ouyang et al., 2022), and active learning (Cohn et al., 1996; Settles, 2009). We leverage ideas from clustering (k-means, k-medoids) and combinatorial optimization (facility location, min-knapsack) (Kellerer et al., 2004b; Cacchiani et al., 2022) to construct small yet powerful training subsets that capture both reward extremes and semantic diversity. The result is an efficient pipeline for aligning LLMs via multi-preference signals without exhaustively processing all generated responses.
+
+# B. Additional Experiments
+
+| Method | Reward Model | AlpacaEval 2 LC (%) | AlpacaEval 2 WR (%) | Arena-Hard WR (%) | MT-Bench (GPT-4) |
+| --- | --- | --- | --- | --- | --- |
+| AMPO-Opt-Select-$\ell_0$ (1vs$k$) | Skywork-Reward-LM | 51.6 | 51.2 | **46.4** | **8.11** |
+| AMPO-Opt-Select-$\ell_1$ (1vs$k$) | Skywork-Reward-LM | **52.16** | **51.58** | 45.4 | 8.07 |
+
+Reward Models as Classifiers vs. Regressors To further analyze how reward scores influence alignment, we compare two approaches to forming preference pairs and computing loss in AMPO-Opt-Select: $\ell_0$ (classification-style) and $\ell_1$ (magnitude-aware).
+
+For both settings, we generate a fixed number of responses per prompt. The response with the highest reward is placed in the positive set $\mathcal{Y}^{+}$ , while a contrastive negative subset is selected via Opt-Select.
+
+- $\ell_0$ (Uniform Preference Weighting): Each preference pair contributes equally to the loss, regardless of the magnitude of reward difference. This reflects a pure classifier-style view of the reward model: only the relative ordering matters, not the exact values.
+- $\ell_{1}$ (Reward Gap-Weighted Preference): Each preference pair is weighted by the absolute deviation of the rejected response's reward from the mean reward value, i.e., $w_{i} = |\text{reward}_{i} - \overline{\text{reward}}|$ . This encourages the model to prioritize learning from examples with larger reward separation, treating them as more informative for optimization.
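+The two weighting schemes above can be sketched as follows (illustrative reward values, not the paper's implementation):
+
+```python
+import numpy as np
+
+# Illustrative rejected-response rewards (placeholders).
+rewards = np.array([0.92, 0.40, 0.35, 0.10, 0.55])
+
+# l0-style: uniform weighting -- every preference pair counts equally.
+w_l0 = np.ones_like(rewards)
+
+# l1-style: weight each pair by |reward_i - mean reward|, per the definition above.
+w_l1 = np.abs(rewards - rewards.mean())
+
+print(w_l0.tolist())
+print(np.round(w_l1, 3).tolist())
+```
+
+Under $\ell_1$, pairs whose rewards sit far from the mean receive the largest weights, while $\ell_0$ keeps only the ordering information.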
+
+Table 5 presents the results. While $\ell_1$ improves AlpacaEval metrics, $\ell_0$ performs better on Arena-Hard and MT-Bench, which are known to be noisier and more ambiguous in reward calibration. These findings reinforce the hypothesis that reward magnitude may not always reflect true quality, especially when misaligned with task-specific evaluation criteria.
+
+Key Finding: While weighting by reward magnitude can improve alignment on clean datasets, uniform weighting with classifier-style preferences $(\ell_0)$ offers better robustness across varied and noisy evaluation settings—supporting recent trends advocating for classification-based use of reward models.
+
+Why 1-vs- $k$ Preference Selection is Superior to $k$ -vs- $k$ We ablate between 1-vs- $k$ and $k$ -vs- $k$ preference construction strategies in AMPO, where the number of total responses is held fixed. In the 1-vs- $k$ setting, we select the single highest-scoring response as the positive and sample $k$ diverse negatives using AMPO strategies. In contrast, the $k$ -vs- $k$ setup selects multiple top-scoring responses and treats them equally as positives, paired against $k$ negatives.
+
+Table 5. Comparison of AMPO-Opt-Select variants with and without $\ell_1$-based selection on AlpacaEval 2, Arena-Hard, and MT-Bench. The $\ell_1$ variant improves LC and WR on AlpacaEval 2, while slightly underperforming on Arena-Hard and MT-Bench. Best results are in bold.
+
+| Method | Reward Model | AlpacaEval 2 LC (%) | AlpacaEval 2 WR (%) | Arena-Hard WR (%) | MT-Bench (GPT-4) |
+| --- | --- | --- | --- | --- | --- |
+| AMPO-Bottomk (1vs7) | Skywork-Reward-LM | 50.8 | 50.5 | 45.2 | 8.11 |
+| AMPO-Bottomk (4vs4) | Skywork-Reward-LM | 45.44 | 51.25 | 42.2 | 7.77 |
+| AMPO-Coreset (1vs7) | Skywork-Reward-LM | 52.4 | 52.1 | 47.8 | 8.12 |
+| AMPO-Coreset (4vs4) | Skywork-Reward-LM | 46.61 | 51.4 | 46.3 | 7.67 |
+| AMPO-Opt-Select (1vs7) | Skywork-Reward-LM | 51.6 | 51.2 | 46.4 | 8.11 |
+| AMPO-Opt-Select (4vs4) | Skywork-Reward-LM | 47.16 | 52.5 | 44.9 | 7.72 |
+
+Table 6. Dynamic AMPO-Based Top/Bottom Response Selection Across Evaluation Benchmarks for Llama-3-Instruct (8B)
+
+Theoretical Motivation If the goal is to maximize expected reward, the optimal strategy—when only the sampling probabilities over responses can be controlled—is to assign the highest probability to the response with the maximum reward score. This ensures reward-weighted sampling favors the best response. Including multiple responses in the positive set can dilute this probability mass and introduce ambiguity, especially when the difference between top-ranked responses is small or noisy.
+
+This analysis is formalized in Section B.1 of the paper, where we show that concentrating probability mass on the single best response, while distributing mass away from contrastive negatives, is provably optimal in expectation under a fixed response budget.
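+A quick numeric check of this claim (a sketch, not the formal proof of Section B.1): random distributions over the responses never exceed the expected reward of the point mass on the highest-reward response.
+
+```python
+import numpy as np
+
+r = np.array([0.9, 0.7, 0.4, 0.2])          # illustrative reward scores
+
+rng = np.random.default_rng(0)
+best_random = 0.0
+for _ in range(10_000):                      # random policies on the probability simplex
+    p = rng.dirichlet(np.ones(len(r)))
+    best_random = max(best_random, float(r @ p))
+
+point_mass = float(r.max())                  # expected reward of the argmax point mass
+print(round(best_random, 4), point_mass)
+```
+
+Any mixture dilutes probability mass onto lower-reward responses, so the point mass on the best response is the maximizer of the expected reward.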
+
+| Method | Reward Model | AlpacaEval 2 LC (%) | AlpacaEval 2 WR (%) | Arena-Hard WR (%) | MT-Bench (GPT-4) |
+| --- | --- | --- | --- | --- | --- |
+| AMPO-Opt-Select (1vs3) | Skywork-Reward-LM | 49.6 | 48.5 | 46.1 | 8.03 |
+| AMPO-Opt-Select (2vs2) | Skywork-Reward-LM | 48.64 | 47.85 | 42.1 | 7.87 |
+
+Table 7. Dynamic AMPO-Based Top/Bottom Response Selection with Skywork-Reward-LM Across Evaluation Benchmarks for Llama-3-Instruct (8B).
+
+Empirical Evidence We validate this hypothesis with empirical results presented in Table 6 and Table 7. Across Bottom- $k$ , Coreset, and Opt-Select variants, the 1-vs-7 configuration consistently outperforms 4-vs-4, particularly in terms of LC win rate (AlpacaEval2), Arena-Hard, and MT-Bench. Similarly, 1-vs-3 outperforms 2-vs-2.
+
+This suggests that:
+
+- Selecting a single clear positive introduces less ambiguity.
+- Including multiple positives can inject noise if some "positive" responses are marginal or inconsistent.
+- A broader negative set (larger $k$) allows for better contrast and generalization.
+
+Key Finding: The 1-vs- $k$ preference setup is theoretically optimal for maximizing expected reward and empirically leads to better performance. This supports our design choice of using a single, high-confidence positive response when constructing preference data for alignment.
+
+# C. Extended Theoretical Analysis of OPT-SELECT
+
+In this appendix, we present a more detailed theoretical treatment of AMPO-OPTSELECT. We restate the core problem setup and assumptions, then provide rigorous proofs of our main results. Our exposition here augments the concise version from the main text.
+
+# C.1. Problem Setup
+
+Consider a single prompt (query) $x$ for which we have sampled $N$ candidate responses $\{y_1, y_2, \ldots, y_N\}$ . Each response $y_i$ has:
+
+- A scalar reward $r_i \in [0,1]$.
+- An embedding $\mathbf{e}_i \in \mathbb{R}^d$.
+
+We define the distance between two responses $y_{i}$ and $y_{j}$ by
+
+$$
+A _ {i, j} = \left\| \mathbf {e} _ {i} - \mathbf {e} _ {j} \right\|. \tag {12}
+$$
+
+Throughout we rescale the embedding so that $\max_{i,j}A_{i,j} = 1$ ; the Lipschitz constant $L\in (0,1]$ then compares quantities of the same scale.
+
+We wish to learn a policy $\{p_i\}$ , where $p_i \geq 0$ and $\sum_{i=1}^{N} p_i = 1$ . The policy's expected reward is
+
+$$
+\operatorname{ER}(p) = \sum_{i=1}^{N} r_i p_i. \tag{13}
+$$
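+As a tiny numeric sketch of equation 13 (the rewards and policy below are made up for illustration):
+
+```python
+import numpy as np
+
+# Hypothetical rewards r_i in [0, 1] for N = 4 candidate responses
+r = np.array([0.9, 0.4, 0.7, 0.2])
+# A policy: nonnegative sampling probabilities summing to 1
+p = np.array([0.6, 0.1, 0.2, 0.1])
+
+# ER(p) = sum_i r_i * p_i  (equation 13)
+expected_reward = float(r @ p)
+print(expected_reward)  # approximately 0.74
+```
+
+Shifting probability mass toward the highest-reward response (here $r_1 = 0.9$) increases $\operatorname{ER}(p)$.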
+
+Positive and Negative Responses. We designate exactly one response, denoted $y_{i_{\mathrm{top}}}$ , as a positive (the highest-reward candidate). All other responses are potential "negatives." Concretely:
+
+- We fix one index $i_{\mathrm{top}}$ with $i_{\mathrm{top}} = \arg \max_{i \in \{1, \dots, N\}} r_i$ .
+- We choose a subset $\mathcal{S} \subseteq \{1, \dots, N\} \setminus \{i_{\mathrm{top}}\}$ of size $K$ , whose elements are forced to have $p_j = 0$ . (These are the "negatives.")
+
+Tie-breaking. If several responses attain the maximal reward, we keep the one with the smallest index; thus $i_{\mathrm{top}}$ is unique.
+
+# C.1.1. LIPSCHITZ SUPPRESSION CONSTRAINT
+
+We assume a mild Lipschitz-like rule:
+
+(A1) $L$ -Lipschitz Constraint. If $p_j = 0$ for some $j \in S$ , then for every response $y_i$ , we must have
+
+$$
+p _ {i} \leq L A _ {i, j} = L \| \mathbf {e} _ {i} - \mathbf {e} _ {j} \|. \tag {14}
+$$
+
+The effect is that whenever we force a particular negative $j$ to have $p_j = 0$ , any response $i$ near $j$ in embedding space also gets pushed down, since $p_i \leq L A_{i,j}$ . By selecting a set of $K$ negatives covering many "bad" or low-reward regions, we curb the policy's probability of generating undesirable responses.
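+A minimal numeric sketch of this suppression effect, using synthetic embeddings and an illustrative value of $L$ (all values below are assumptions for demonstration):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+E = rng.normal(size=(5, 3))        # hypothetical embeddings for N = 5 responses
+A = np.linalg.norm(E[:, None, :] - E[None, :, :], axis=-1)
+A /= A.max()                       # rescale so max_{i,j} A_{i,j} = 1
+L = 0.5                            # assumed Lipschitz constant in (0, 1]
+
+S = [3, 4]                         # negatives forced to p_j = 0
+caps = L * A[:, S].min(axis=1)     # (A1): p_i <= L * min_{j in S} A_{i,j}
+print(caps)                        # the negatives themselves get a cap of 0
+```
+
+Responses close to a chosen negative receive a small cap, so the entire neighborhood of each negative is suppressed, not just the negative itself.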
+
+Goal. Define the feasible set of distributions:
+
+$$
+\mathcal{F}(\mathcal{S}) = \left\{ \{p_i\} : p_j = 0 \ \forall j \in \mathcal{S},\ p_i \leq L \min_{j \in \mathcal{S}} A_{i,j} \ \forall i \notin \{i_{\mathrm{top}}\} \cup \mathcal{S} \right\}. \tag{15}
+$$
+
+Feasibility condition. For a given $S$ the constraint set $\mathcal{F}(S)$ is non-empty iff
+
+$$
+\sum_{i\notin \mathcal{S}\cup \{i_{\mathrm{top}}\}}\min_{j\in \mathcal{S}}A_{i,j} \leq 1 / L.
+$$
+
+Hence we assume $K$ and $L$ are chosen so that the above inequality holds for at least one subset of size $K$ .
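+For small $N$ the feasibility condition can be checked by direct enumeration; a sketch with synthetic embeddings and a deliberately small $L$ (chosen so that every size- $K$ subset is feasible):
+
+```python
+import numpy as np
+from itertools import combinations
+
+rng = np.random.default_rng(1)
+N, K, i_top = 6, 2, 0              # illustrative sizes; index 0 plays the role of i_top
+L = 0.2                            # assumed Lipschitz constant, small enough for feasibility
+E = rng.normal(size=(N, 4))        # hypothetical embeddings
+A = np.linalg.norm(E[:, None] - E[None, :], axis=-1)
+A /= A.max()
+
+def feasible(S):
+    # F(S) is non-empty iff sum over i outside S and i_top of min_{j in S} A_{i,j} <= 1/L
+    rest = [i for i in range(N) if i != i_top and i not in S]
+    return sum(A[i, list(S)].min() for i in rest) <= 1.0 / L
+
+feasible_sets = [S for S in combinations([i for i in range(N) if i != i_top], K)
+                 if feasible(S)]
+print(len(feasible_sets))          # with L = 0.2 every one of the C(5, 2) = 10 subsets passes
+```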
+
+We then have a two-level problem:
+
+$$
+\max _ { \begin{array}{c} \mathcal {S} \subseteq \{1, \ldots , N \} \setminus \{i _ {\mathrm {t o p}} \} \\ | \mathcal {S} | = K \end{array} } \quad \max _ { \begin{array}{c} \{p _ {i} \} \in \mathcal {F} (\mathcal {S}) \\ \sum_ {i} p _ {i} = 1, p _ {i} \geq 0 \end{array} } \sum_ {i = 1} ^ {N} r _ {i} p _ {i},
+$$
+
+$$
+\text{subject to} \quad p_{i_{\mathrm{top}}} \text{ is unconstrained (no Lipschitz bound)}. \tag{16}
+$$
+
+We seek $S$ that maximizes the best possible Lipschitz-compliant expected reward.
+
+# C.2. Coverage View and the MIP Formulation
+
+Coverage Cost. To highlight the crucial role of "covering" low-reward responses, define a weight
+
+$$
+w _ {i} := r _ {\max } - r _ {i} \tag {17}
+$$
+
+where $r_{\max} = \max_j r_j$ . Then a natural coverage cost is
+
+$$
+\operatorname{Cost}(\mathcal{S}) = \sum_{i=1}^{N} w_i \min_{j \in \mathcal{S}} A_{i,j}. \tag{18}
+$$
+
+A small $\min_{j\in S}A_{i,j}$ means response $i$ is "close" to at least one negative center $j$ . If $r_i$ is low, then $w_{i}$ is large, so we put higher penalty on leaving $i$ uncovered. Minimizing $\mathrm{Cost}(S)$ ensures that important (low-reward) responses are forced near penalized centers, thus suppressing them in the policy distribution.
+
+MIP $\mathcal{P}$ for Coverage Minimization. We can write a mixed-integer program:
+
+$$
+\begin{array}{l} \textbf{Problem } \mathcal{P}: \quad \min_{\substack{x_{j}\in \{0,1\} \\ z_{i,j}\in \{0,1\} \\ y_{i}\geq 0}} \sum_{i = 1}^{N} w_{i} y_{i}, \\ \text{subject to} \left\{ \begin{array}{l} \sum_{j = 1}^{N} x_{j} = K, \\ z_{i,j} \leq x_{j}, \quad \sum_{j = 1}^{N} z_{i,j} = 1, \quad \forall i, \\ y_{i} \leq A_{i,j} + M (1 - z_{i,j}), \\ y_{i} \geq A_{i,j} - M (1 - z_{i,j}), \quad \forall i, j, \end{array} \right. \tag{19} \end{array}
+$$
+
+where $M = \max_{i,j} A_{i,j}$ . Intuitively, each $x_j$ indicates if $j$ is chosen as a negative; each $z_{i,j}$ indicates whether $i$ is "assigned" to $j$ . At optimality, $y_i = \min_{j \in S} A_{i,j}$ , so the objective $\sum_i w_i y_i$ is precisely $\mathrm{Cost}(S)$ . Hence solving $\mathcal{P}$ yields $S^*$ that minimizes coverage cost equation 18.
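+For small $N$, the subset found by $\mathcal{P}$ can be reproduced by brute-force enumeration of all size- $K$ subsets, with no MIP solver; a sketch on synthetic rewards and embeddings:
+
+```python
+import numpy as np
+from itertools import combinations
+
+rng = np.random.default_rng(2)
+N, K = 8, 3                        # illustrative sizes
+E = rng.normal(size=(N, 2))        # hypothetical embeddings
+r = rng.uniform(size=N)            # hypothetical rewards in [0, 1]
+A = np.linalg.norm(E[:, None] - E[None, :], axis=-1)
+A /= A.max()
+w = r.max() - r                    # coverage weights w_i = r_max - r_i (equation 17)
+i_top = int(np.argmax(r))
+
+def cost(S):
+    # Cost(S) = sum_i w_i * min_{j in S} A_{i,j} (equation 18)
+    return float(sum(w[i] * A[i, list(S)].min() for i in range(N)))
+
+candidates = [S for S in combinations(range(N), K) if i_top not in S]
+S_star = min(candidates, key=cost)
+print(S_star, round(cost(S_star), 4))
+```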
+
+Lemma C.1 (Coverage cost controls negative reward). Under (A1)-(A3), suppose $S$ of size $K$ satisfies the feasibility condition, i.e. there exists $\{p_i\} \in \mathcal{F}(S)$ with $\sum_{i} p_i = 1$ . Then for every normalized feasible $\{p_i\}$ (i.e. $\forall \{p_i\} \in \mathcal{F}(S)$ ), we have:
+
+$$
+\sum_{i = 1}^{N} \left(r_{\max} - r_i\right) p_i \leq L \sum_{i = 1}^{N} \left(r_{\max} - r_i\right) \min_{j \in S} A_{i,j} = L \operatorname{Cost}(S),
+$$
+
+Consequently,
+
+$$
+\max _ {\{p _ {i} \} \in \mathcal {F} (S)} \sum_ {i} r _ {i} p _ {i} = r _ {\max } - \min _ {\{p _ {i} \} \in \mathcal {F} (S)} \sum_ {i} \left(r _ {\max } - r _ {i}\right) p _ {i}
+$$
+
+is maximized exactly when $\operatorname{Cost}(S)$ is minimized.
+
+Furthermore, this bound is tight: one can set
+
+$$
+p_i = L \min_{j \in S} A_{i,j} \quad (i \neq i_{\mathrm{top}}), \qquad p_{i_{\mathrm{top}}} = 1 - L \sum_{i \neq i_{\mathrm{top}}} \min_{j \in S} A_{i,j},
+$$
+
+which is feasible by assumption, and gives $\sum_{i}(r_{\mathrm{max}} - r_i)p_i = L\operatorname{Cost}(S)$, so $\max \sum_{i}r_{i}p_{i} = r_{\mathrm{max}} - L\operatorname{Cost}(S)$.
+
+Proof. By (A1), any $i \notin S \cup \{i_{\mathrm{top}}\}$ satisfies $p_i \leq L \min_{j \in S} A_{i,j}$, hence $(r_{\max} - r_i)p_i \leq L(r_{\max} - r_i) \min_{j \in S} A_{i,j}$. Summing over $i$ yields the claimed bound, and the equivalence between minimizing $\operatorname{Cost}(S)$ and maximizing $\sum_{i} r_i p_i$ follows by writing
+
+$$
+\sum_ {i} r _ {i} p _ {i} = r _ {\max} \underbrace {\sum_ {i} p _ {i}} _ {= 1} - \sum_ {i} (r _ {\max} - r _ {i}) p _ {i} = r _ {\max} - \sum_ {i} (r _ {\max} - r _ {i}) p _ {i},
+$$
+
+and observing the inequality becomes an equality for the choice above.
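+The tight construction in the lemma can be verified numerically; a sketch on synthetic data, with $L$ chosen small enough that the saturated caps leave probability mass for $i_{\mathrm{top}}$:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(3)
+N = 6
+E = rng.normal(size=(N, 3))        # hypothetical embeddings
+r = rng.uniform(size=N)            # hypothetical rewards
+A = np.linalg.norm(E[:, None] - E[None, :], axis=-1)
+A /= A.max()
+i_top = int(np.argmax(r))
+S = [i for i in range(N) if i != i_top][:2]               # an arbitrary negative set
+rest = [i for i in range(N) if i != i_top and i not in S]
+
+total = sum(A[i, S].min() for i in rest)
+L = min(1.0, 0.9 / total)          # ensures p_{i_top} = 1 - L * total >= 0
+
+p = np.zeros(N)
+for i in rest:
+    p[i] = L * A[i, S].min()       # saturate the Lipschitz caps
+p[i_top] = 1.0 - p.sum()           # remaining mass on the single positive
+
+cost_S = sum((r.max() - r[i]) * A[i, S].min() for i in range(N))
+print(float(r @ p), r.max() - L * cost_S)   # the two quantities coincide
+```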
+
+# C.3. Main Theorem: Optimality of $\mathcal{P}$ for Lipschitz Alignment
+
+Theorem C.2 (Optimal Negative Set via $\mathcal{P}$ ). Let $S^*$ be the solution to the MIP $\mathcal{P}$ in equation 19, i.e. it minimizes $\mathrm{Cost}(\mathcal{S})$. Then $S^*$ also maximizes the objective equation 16. Consequently, picking $S^*$ and placing the remaining probability mass on $i_{\mathrm{top}} = \arg \max_i r_i$ yields the optimal Lipschitz-compliant policy.
+
+Proof. By construction, solving $\mathcal{P}$ returns $S^*$ with $\mathrm{Cost}(\mathcal{S}^*) = \min_{|\mathcal{S}| = K} \mathrm{Cost}(\mathcal{S})$. By Lemma C.1, minimizing $\mathrm{Cost}(S)$ maximizes the best achievable feasible expected reward, so $S^*$ is precisely the negative set that achieves the maximum of equation 16.
+
+
+
+Interpretation. Under a mild Lipschitz assumption in embedding space, penalizing (assigning zero probability to) a small set $S$ and forcing all items near $S$ to have small probability is equivalent to a coverage problem. Solving (or approximating) $\mathcal{P}$ selects negatives that push down low-reward modes as effectively as possible.
+
+# C.4. Discussion and Practical Implementation
+
+OPT-SELECT thus emerges from optimizing coverage:
+
+1. Solve or approximate the MIP $\mathcal{P}$ to find the best subset $\mathcal{S} \subseteq \{1, \dots, N\} \setminus \{i_{\mathrm{top}}\}$ .
+2. Force $p_j = 0$ for each $j \in S$; retain $i_{\mathrm{top}}$ with full probability ( $p_{i_{\mathrm{top}}} \approx 1$ ), subject to normalizing the distribution.
+
+In practice, local search or approximate clustering-based approaches (e.g. Weighted $K$ -Medoids) can find good solutions without exhaustively solving $\mathcal{P}$ . The method ensures that near any chosen negative $j$ , all semantically similar responses $i$ have bounded probability $p_i \leq L A_{i,j}$ . Consequently, OPT-SELECT simultaneously covers and suppresses undesired modes while preserving at least one high-reward response unpenalized.
+
+# Additional Remarks.
+
+- The single-positive assumption reflects a practical design where one high-reward response is explicitly promoted. This can be extended to multiple positives, e.g. top $K^{+}$ responses each unconstrained.
+- For large $N$ , the exact MIP solution may be expensive; local search (see Appendix D) still achieves a constant-factor approximation.
+- The embedding-based Lipschitz constant $L$ is rarely known exactly; however, the coverage perspective remains valid for "sufficiently smooth" reward behaviors in the embedding space.
+
+Overall, these results solidify OPT-SELECT as a principled framework for negative selection under Lipschitz-based alignment objectives.
+
+# D. Local Search Guarantees for Weighted $K$ -Medoids and Lipschitz-Reward Approximation
+
+In this appendix, we show in Theorem D.1 that a standard local search algorithm for Weighted $K$ -Medoids achieves a constant-factor approximation in polynomial time.
+
+# D.1. Weighted $K$ -Medoids Setup
+
+We are given:
+
+- A set of $N$ points, each indexed by $i \in \{1, \dots, N\}$ .
+- A distance function $d(i,j) \geq 0$ , which forms a metric: $d(i,j) \leq d(i,k) + d(k,j)$ , $d(i,i) = 0$ , $d(i,j) = d(j,i)$ .
+- A nonnegative weight $w_{i}$ for each point $i$ .
+- A budget $K$ , $1 \leq K \leq N$ .
+
+We wish to pick a subset $S \subseteq \{1, \dots, N\}$ of medoids (centers) with size $|S| = K$ that minimizes the objective
+
+$$
+\operatorname{Cost}(\mathcal{S}) = \sum_{i=1}^{N} w_i \cdot \min_{j \in \mathcal{S}} d(i, j). \tag{20}
+$$
+
+We call this the Weighted $K$ -Medoids problem. Note that medoids must come from among the data points, as opposed to $K$ -median or $K$ -means where centers can be arbitrary points in the metric or vector space. Our Algorithm 3 reduces to exactly this problem.
+
+# D.2. Coordinate Descent Algorithm via Local Search
+
+To cope with the NP-hardness of Algorithm 3, we recast it as a simpler coordinate-descent procedure, Algorithm 4, which performs a local search over candidate medoids toward the optimal solution. Let $\mathrm{Cost}(S)$ be as in equation 20.
+
+1. Initialize: pick any subset $S \subseteq \{1, \dots, N\}$ of size $K$ (e.g. random or greedy).
+2. Repeat: Try all possible single swaps of the form
+
+$$
+\mathcal {S} ^ {\prime} = \left(\mathcal {S} \backslash \{j \}\right) \cup \{j ^ {\prime} \},
+$$
+
+where $j\in S$ and $j^{\prime}\notin S$
+
+3. If any swap improves cost: i.e. $\mathrm{Cost}(\mathcal{S}') < \mathrm{Cost}(\mathcal{S})$ , then set $S \leftarrow S'$ and continue.
+4. Else terminate: no single swap can further reduce cost.
+
+When the algorithm stops, we say $\mathcal{S}$ is a local optimum under 1-swaps.
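+Steps 1-4 above can be sketched as a first-improvement 1-swap search on synthetic data (points, weights, and seed are illustrative assumptions):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(4)
+N, K = 12, 3
+pts = rng.normal(size=(N, 2))                 # hypothetical points; Euclidean is a metric
+d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
+w = rng.uniform(0.5, 1.5, size=N)             # nonnegative weights
+
+def cost(S):
+    # Cost(S) = sum_i w_i * min_{j in S} d(i, j) (equation 20)
+    return float(sum(w[i] * d[i, list(S)].min() for i in range(N)))
+
+S = set(rng.choice(N, size=K, replace=False).tolist())   # step 1: arbitrary initialization
+improved = True
+while improved:                                # steps 2-4: repeat until no swap improves cost
+    improved = False
+    for j in list(S):
+        for j_new in range(N):
+            if j_new in S:
+                continue
+            S_new = (S - {j}) | {j_new}
+            if cost(S_new) < cost(S) - 1e-12:  # step 3: take an improving swap
+                S = S_new
+                improved = True
+                break
+        if improved:
+            break
+
+print(sorted(S))                               # a local optimum under 1-swaps
+```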
+
+# D.3. Constant-Factor Approximation in Polynomial Time
+
+We now present and prove a result: such local search yields a constant-factor approximation. Below, we prove a version with a factor 5 guarantee for Weighted $K$ -Medoids. Tighter analyses can improve constants, but 5 is a commonly cited bound for this simple variant.
+
+Theorem D.1 (Local Search for Weighted $K$ -Medoids). Let $\mathcal{S}^*$ be an optimal subset of medoids of size $K$ . Let $\widehat{\mathcal{S}}$ be any local optimum obtained by the above 1-swap local search. Then
+
+$$
+\operatorname{Cost}(\widehat{\mathcal{S}}) \leq 5 \times \operatorname{Cost}\left(\mathcal{S}^{*}\right). \tag{21}
+$$
+
+Moreover, the procedure runs in polynomial time (in principle up to $\binom{N}{K}$ worst-case swaps, but in practice each improving swap decreases the cost by a non-negligible amount, which bounds the iteration count).
+
+Remark D.2. We follow the result from Arya et al. (2001) who define the locality gap of the single-swap local-search procedure as the worst-case ratio between the cost of any local optimum and the global optimum. They prove that for the metric K-median problem, this gap is exactly 5. More precisely, permitting only one swap per step guarantees
+
+$$
+\operatorname{Cost}(\widehat{S}) \leq 5 \operatorname{Cost}(S^{*}) \tag{22}
+$$
+
+for every local optimum $\widehat{S}$ and global optimum $S^{*}$.
+
+Sketch of Arya et al. (2001)'s Analysis. They partition the data according to the Voronoi cells of the global optimum, then show via a "coupling" argument (together with repeated triangle-inequality bounds) that whenever a local swap cannot improve the solution, the total service cost from each cell is bounded by five times its optimal cost.
+
+# Proof. Notation.
+
+- Let $\widehat{S}$ be the final local optimum of size $K$ .
+- Let $S^*$ be an optimal set of size $K$ .
+- For each point $i$ , define
+
+$$
+r _ {i} = d (i, \widehat {\mathcal {S}}) = \min _ {j \in \widehat {\mathcal {S}}} d (i, j), \quad r _ {i} ^ {*} = \min _ {j \in \mathcal {S} ^ {*}} d (i, j).
+$$
+
+Thus $\operatorname{Cost}(\widehat{\mathcal{S}}) = \sum_{i} w_i r_i$ and $\operatorname{Cost}(\mathcal{S}^*) = \sum_{i} w_i r_i^*$ .
+
+- Let $c(\mathcal{S}) = \sum_{i} w_{i} d(i, \mathcal{S})$ as shorthand for $\operatorname{Cost}(\mathcal{S})$ .
+
+# Step 1: Construct a "Combined" Set. Consider
+
+$$
+\mathcal {S} ^ {\dagger} = \widehat {\mathcal {S}} \cup \mathcal {S} ^ {*}.
+$$
+
+We have $|\mathcal{S}^{\dagger}| \leq 2K$ . Let $c(\mathcal{S}^{\dagger}) = \sum_{i} w_{i} d(i, \mathcal{S}^{\dagger})$ .
+
+Observe that
+
+$$
+d (i, \mathcal {S} ^ {\dagger}) = \min \left\{d (i, \widehat {\mathcal {S}}), d (i, \mathcal {S} ^ {*}) \right\} = \min \left\{r _ {i}, r _ {i} ^ {*} \right\}.
+$$
+
+Hence
+
+$$
+c \left(\mathcal {S} ^ {\dagger}\right) = \sum_ {i = 1} ^ {N} w _ {i} \min \left\{r _ {i}, r _ {i} ^ {*} \right\}.
+$$
+
+We will relate $c(\mathcal{S}^{\dagger})$ to $c(\widehat{S})$ and $c(S^{*})$ .
+
+Step 2: Partition Points According to $S^*$ . For each $j^* \in S^*$ , define the cluster
+
+$$
+C (j ^ {*}) = \left\{i \mid j ^ {*} = \arg \min _ {j ^ {\prime} \in \mathcal {S} ^ {*}} d (i, j ^ {\prime}) \right\}.
+$$
+
+Hence $\{C(j^{*}):j^{*}\in S^{*}\}$ is a partition of $\{1,\ldots ,N\}$ . We now group the cost contributions by these clusters.
+
+Goal: Existence of a Good Swap. We will assume $c(\hat{S}) > 5c(S^{*})$ and derive a contradiction by producing a profitable swap that local search should have found.
+
+Specifically, we show that there must be a center $j^{*} \in S^{*}$ whose cluster $C(j^{*})$ is "costly enough" under $\widehat{S}$ , so that swapping out some center $j \in \widehat{S}$ for $j^{*}$ significantly reduces cost. But since $\widehat{S}$ was a local optimum, no such profitable swap could exist. This contradiction implies $c(\widehat{S}) \leq 5c(S^{*})$ .
+
+# Step 3: Detailed Bounding.
+
+We have
+
+$$
+c \left(\mathcal {S} ^ {\dagger}\right) = \sum_ {i = 1} ^ {N} w _ {i} \min \left\{r _ {i}, r _ {i} ^ {*} \right\} \leq \sum_ {i = 1} ^ {N} w _ {i} r _ {i} ^ {*} = c \left(\mathcal {S} ^ {*}\right).
+$$
+
+Similarly,
+
+$$
+c (\mathcal {S} ^ {\dagger}) \leq \sum_ {i = 1} ^ {N} w _ {i} r _ {i} = c \big (\widehat {\mathcal {S}} \big).
+$$
+
+Hence $c(\mathcal{S}^{\dagger})\leq \min \bigl \{c(\widehat{\mathcal{S}}),c(\mathcal{S}^{*})\bigr \}$ . Now define
+
+$$
+D = \sum_ {i = 1} ^ {N} w _ {i} \left[ r _ {i} - \min \left\{r _ {i}, r _ {i} ^ {*} \right\} \right] = \sum_ {i = 1} ^ {N} w _ {i} \left(r _ {i} - r _ {i} ^ {*}\right) _ {+},
+$$
+
+where $(x)_{+} = \max \{x,0\}$ . By rearranging,
+
+$$
+\sum_ {i = 1} ^ {N} w _ {i} r _ {i} - \sum_ {i = 1} ^ {N} w _ {i} \min \{r _ {i}, r _ {i} ^ {*} \} = D.
+$$
+
+Thus
+
+$$
+c (\widehat {\mathcal {S}}) - c (\mathcal {S} ^ {\dagger}) = D \geq c (\widehat {\mathcal {S}}) - c (\mathcal {S} ^ {*}).
+$$
+
+So
+
+$$
+D \geq c (\widehat {\mathcal {S}}) - c (\mathcal {S} ^ {*}).
+$$
+
+Under the assumption $c(\widehat{S}) > 5c(S^{*})$ , we get
+
+$$
+D > 4 c \left(\mathcal {S} ^ {*}\right). \tag {*}
+$$
+
+Step 4: Find a Center $j^*$ with Large $D$ Contribution. We now "distribute" $D$ over clusters $C(j^*)$ . Let
+
+$$
+D _ {j ^ {*}} = \sum_ {i \in C (j ^ {*})} w _ {i} \left(r _ {i} - r _ {i} ^ {*}\right) _ {+}.
+$$
+
+Then $D = \sum_{j^{*} \in \mathcal{S}^{*}} D_{j^{*}}$ . Since $D > 4c(\mathcal{S}^{*})$ , at least one $j^{*} \in \mathcal{S}^{*}$ satisfies
+
+$$
+D _ {j ^ {*}} > 4 \frac {c \left(\mathcal {S} ^ {*}\right)}{\left| \mathcal {S} ^ {*} \right|} = 4 \frac {c \left(\mathcal {S} ^ {*}\right)}{K}, \tag {23}
+$$
+
+because $|\mathcal{S}^*| = K$ . Denote this center as $j_{\mathrm{large}}^*$ and its cluster $C^* = C(j_{\mathrm{large}}^*)$ .
+
+Step 5: Swapping $j^*$ into $\hat{S}$ . Consider the swap
+
+$$
+\widehat{\mathcal{S}}_{\text{swap}} = \left(\widehat{\mathcal{S}} \setminus \{j_{\text{out}}\}\right) \cup \{j_{\text{large}}^{*}\}
+$$
+
+where $j_{\mathrm{out}}$ is whichever center in $\widehat{\mathcal{S}}$ we choose to remove. We must show that for an appropriate choice of $j_{\mathrm{out}}$ , the cost $c(\widehat{\mathcal{S}}_{\mathrm{swap}})$ is at least $(r_i - r_i^*)_+$ smaller on average for the points in $C^*$ , forcing a net cost reduction large enough to offset any potential cost increase for points outside $C^*$ .
+
+In detail, partition $\widehat{S}$ into $K$ clusters under Voronoi assignment:
+
+$$
+\widehat{C}(j) = \left\{i : j = \operatorname*{argmin}_{x \in \widehat{\mathcal{S}}} d(i, x)\right\}, \quad j \in \widehat{\mathcal{S}}.
+$$
+
+Since $|\widehat{S}| = K$ , there must exist at least one $j_{\mathrm{out}} \in \widehat{S}$ whose cluster $\widehat{C}(j_{\mathrm{out}})$ has weight $\sum_{i \in \widehat{C}(j_{\mathrm{out}})} w_i \leq \frac{1}{K} \sum_{i=1}^{N} w_i$ . We remove that $j_{\mathrm{out}}$ and add $j_{\mathrm{large}}^*$ .
+
+Step 6: Net Cost Change Analysis. After the swap, the net change in cost is $\Delta = \Delta_{in} + \Delta_{out}$ . The "in-gain" for points $i \in C^{*} = C(j_{large}^{*})$ is bounded by:
+
+$$
+\Delta_{\text{in}} = \sum_{i \in C^{*}} w_{i} \left(d(i, \widehat{\mathcal{S}}_{\text{swap}}) - d(i, \widehat{\mathcal{S}})\right) \leq -\sum_{i \in C^{*}} w_{i} \left(r_{i} - r_{i}^{*}\right)_{+} = -D_{j_{\text{large}}^{*}}. \tag{24}
+$$
+
+The "out-loss," $\Delta_{out}$ , represents the potential cost increase for points not in $C^*$ , primarily those that were served by the removed center $j_{out}$ . Bounding this term is the most complex part of the proof.
+
+Remark D.3 (Bounding $\Delta_{out}$ ). The analysis in Arya et al. (2001) uses a series of clever applications of the triangle inequality to show that the cost increase, $\Delta_{out}$ , is bounded relative to the cost of the optimal solution. A simplified (though non-trivial) result of this bounding shows that for an appropriately chosen $j_{out}$ , the increase can be bounded such that:
+
+$$
+\Delta_ {o u t} \leq \frac {c \left(\mathcal {S} ^ {*}\right)}{K}. \tag {25}
+$$
+
+This bound is sufficient to complete the proof. We defer the detailed derivation of this specific bound to the original literature and proceed with this result.
+
+Step 7: Arriving at a contradiction. Combining our bounds, the total change in cost is:
+
+$$
+c\big(\widehat{\mathcal{S}}_{\mathrm{swap}}\big) - c\big(\widehat{\mathcal{S}}\big) = \Delta_{in} + \Delta_{out} \leq -D_{j_{\mathrm{large}}^{*}} + \frac{c(\mathcal{S}^{*})}{K}.
+$$
+
+From Step 4 (Eq. 23), we know $D_{j_{large}^*} > 4\frac{c(S^*)}{K}$ . Substituting this in gives:
+
+$$
+c \big (\widehat {\mathcal {S}} _ {\mathrm {s w a p}} \big) - c \big (\widehat {\mathcal {S}} \big) < - 4 \frac {c \left(\mathcal {S} ^ {*}\right)}{K} + \frac {c \left(\mathcal {S} ^ {*}\right)}{K} = - 3 \frac {c \left(\mathcal {S} ^ {*}\right)}{K} < 0.
+$$
+
+This shows a strict decrease in cost, which contradicts the local optimality of $\widehat{S}$ . Therefore, our initial assumption must be false, and we conclude that $c(\widehat{S}) \leq 5c(S^{*})$ .
+
+Time Complexity. At each iteration we try all $O(KN)$ possible 1-swaps. By maintaining for each point $i$ its distance to the nearest center in $S$ , we can update the total cost in $O(N)$ time per swap check; hence each pass costs $O(KN^2)$ . Moreover, letting
+
+$$
+W_{\mathrm{tot}} = \sum_{i=1}^{N} w_i, \qquad D_{\mathrm{max}} = \max_{i,j} d(i,j),
+$$
+
+we have
+
+$$
+0 \leq c(\mathcal{S}) \leq W_{\mathrm{tot}} D_{\max}.
+$$
+
+Since all weights and distances come from the finite input, there is a minimum positive gap $\delta > 0$ between any two distinct cost values. Therefore each improving swap decreases $c(S)$ by at least $\delta$ , so there can be at most
+
+$$
+\frac{W_{\mathrm{tot}} D_{\mathrm{max}}}{\delta}
+$$
+
+such swaps. Altogether the algorithm performs
+
+$$
+O\left(KN^{2}\right) \times O\left(\frac{W_{\mathrm{tot}} D_{\max}}{\delta}\right) = \mathrm{poly}(\text{input size})
+$$
+
+total work, i.e. it runs in polynomial time.
+
+Remark D.4 (Improved Constants). A more intricate analysis can tighten the factor 5 in Theorem D.1 to 3 or 4. See, e.g., (Gupta & Tangwongsan, 2008; Arya et al., 2001) for classical refinements. The simpler argument here suffices to establish the main principles.
+
+# E. Theoretical Guarantee for AMPO-Coreset
+
+This appendix provides the theoretical motivation for the AMPO-CORESET selection strategy. We first introduce the concept of a coreset and then present a formal theorem showing that, under certain clustering assumptions, this strategy yields a policy with a guaranteed additive bound on its expected reward.
+
+**Coresets for Representative Selection.** The term coreset originates in computational geometry and machine learning, referring to a small, weighted subset of data that approximates the entire dataset with respect to a particular objective or loss function (Bachem et al., 2017; Feldman et al., 2020). In the context of AMPO-CORESET, the $K$ -means clustering subroutine identifies representative embedding-space regions. By choosing a single worst-rated example from each region, we mimic a coreset-based selection principle: our selected negatives approximate the distributional diversity of the entire batch of responses. This ensures the model receives penalizing signals for all major modes of undesired behavior, mitigating the risk of ignoring infrequent but problematic minority clusters.
+
+# E.1. Additive Guarantee under Bounded-Diameter Clustering
+
+Recall from Appendix C that we use normalized weights
+
+$$
+W = \sum_{j=1}^{N} (r_{\max} - r_j), \qquad w_i = \frac{r_{\max} - r_i}{W}, \quad \text{so that} \quad \sum_i w_i = 1.
+$$
+
+This allows Lemma C.1 (Coverage-cost controls negative reward) to give the tight bound $\max_{p \in \mathcal{F}(S)} \sum_{i} r_i p_i = r_{\max} - L \, \mathrm{Cost}(S)$ . We now show that under the clustering assumption of AMPO-coreset, the cost term is bounded by $d_{\max}$ .
+
+Theorem E.1 (Additive $Ld_{\mathrm{max}}$ -Guarantee for Coreset Selection). Suppose the $N$ candidate responses can be partitioned into $K$ clusters $\{C_1, \ldots, C_K\}$ in embedding space, each of diameter at most $d_{\mathrm{max}}$ :
+
+$$
+\max_{i, i' \in C_j} \|\mathbf{e}_i - \mathbf{e}_{i'}\| \leq d_{\max} \quad (j = 1, \ldots, K).
+$$
+
+Let the negative set $S$ be formed by picking one arbitrary index $i_j^- \in C_j$ from each cluster $C_j$ . Then, the maximum expected reward achievable by a Lipschitz-compliant policy using this negative set $S$ is bounded by:
+
+$$
+\max_{\substack{p_{j} = 0 (\forall j\in \mathcal{S})\\ p_{i}\leq L\min_{l\in \mathcal{S}}\| \mathbf{e}_{i} - \mathbf{e}_{l}\|}}\sum_{i = 1}^{N}r_{i} p_{i}\geq r_{\max} - L d_{\max},
+$$
+
+where $r_{\mathrm{max}} = \max_i r_i$ . This guarantees the expected reward is within an additive error of $Ld_{\mathrm{max}}$ of the highest possible reward.
+
+Proof. We use the result from Lemma C.1, which states that the maximum expected reward is $r_{\mathrm{max}} - L \cdot \mathrm{Cost}(\mathcal{S})$ , where the cost function uses normalized weights $w_{i} = (r_{\mathrm{max}} - r_{i}) / W$ . We need to bound $\mathrm{Cost}(\mathcal{S})$ for our chosen $\mathcal{S}$ .
+
+$$
+\operatorname{Cost}(\mathcal{S}) = \sum_{i=1}^{N} w_i \min_{l \in \mathcal{S}} \|\mathbf{e}_i - \mathbf{e}_l\|.
+$$
+
+For any point $y_{i}$ , it belongs to some cluster $C_j$ . By construction, the set of negatives $S$ contains the point $i_j^-$ from that same cluster. Therefore, the distance from $y_{i}$ to its closest negative in $S$ is at most its distance to $y_{i_j^-}$ , which is bounded by the cluster diameter:
+
+$$
+\min_{l \in \mathcal{S}} \|\mathbf{e}_i - \mathbf{e}_l\| \leq \|\mathbf{e}_i - \mathbf{e}_{i_j^-}\| \leq d_{\max}, \quad \forall i \in C_j.
+$$
+
+Since this holds for all points $i$ , we can bound the cost:
+
+$$
+\operatorname{Cost}(\mathcal{S}) = \sum_{i=1}^{N} w_i \underbrace{\min_{l \in \mathcal{S}} \|\mathbf{e}_i - \mathbf{e}_l\|}_{\leq d_{\max}} \leq \sum_{i=1}^{N} w_i d_{\max} = d_{\max} \sum_{i=1}^{N} w_i = d_{\max},
+$$
+
+where the final step uses the fact that the normalized weights sum to 1. Substituting this bound back into the expression for the maximum expected reward gives:
+
+$$
+\max \sum_{i=1}^{N} r_i p_i = r_{\max} - L \operatorname{Cost}(\mathcal{S}) \geq r_{\max} - L d_{\max},
+$$
+
+which completes the proof.
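+Theorem E.1 can be checked numerically on synthetic clustered data (cluster centers, noise scale, and ratings below are illustrative assumptions):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(5)
+K, per = 3, 5                                  # K clusters, 5 responses each
+centers = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
+E = np.vstack([c + 0.3 * rng.normal(size=(per, 2)) for c in centers])
+r = rng.uniform(size=K * per)                  # hypothetical rewards
+labels = np.repeat(np.arange(K), per)
+
+A = np.linalg.norm(E[:, None] - E[None, :], axis=-1)
+W = (r.max() - r).sum()
+w = (r.max() - r) / W                          # normalized weights, summing to 1
+
+# Coreset negatives: one (here the worst-rated) response from each cluster
+S = [int(np.where(labels == j)[0][np.argmin(r[labels == j])]) for j in range(K)]
+
+d_max = max(A[labels == j][:, labels == j].max() for j in range(K))
+cost_S = float(sum(w[i] * A[i, S].min() for i in range(K * per)))
+print(cost_S <= d_max)                         # Cost(S) <= d_max, as the theorem guarantees
+```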
+
+
+
+Remark (Distribution-Dependent Guarantee). The above theorem provides a deterministic guarantee for a fixed set of $N$ points. In practice, we learn from a finite sample of responses drawn from an unknown underlying distribution $\mathcal{D}$ . If we learn $K$ clusters from a sufficiently large i.i.d. sample of responses, standard uniform convergence arguments (see, e.g., (Bachem et al., 2017)) show that these empirical clusters will, with high probability, also cover new responses drawn from $\mathcal{D}$ . Consequently, for a high fraction of new queries, the policy derived from the coreset selection strategy is expected to achieve a near-optimal reward, with an additive error similar to the $Ld_{\mathrm{max}}$ bound.
+
+# F. Optimal Selection Code
+
+In this section we provide the actual code used to compute the optimal selection.
+
+```python
+import numpy as np
+from scipy.spatial.distance import cdist
+
+
+def solve_local_search_min_dist_normalized(
+    vectors: np.ndarray,
+    rating: np.ndarray,
+    k: int,
+    max_iter: int = 100,
+    random_seed: int = 42,
+):
+    # Normalize ratings to [0, 1] (constant ratings map to 0.5)
+    rating_min, rating_max = np.min(rating), np.max(rating)
+    if rating_max > rating_min:
+        rating_normalized = (rating - rating_min) / (rating_max - rating_min)
+    else:
+        rating_normalized = np.zeros_like(rating) + 0.5
+
+    # Identify the top-rated point; it is kept as the positive and excluded from selection
+    excluded_top_index = int(np.argmax(rating_normalized))
+
+    # Reduce the dataset by removing the top-rated point
+    new_to_old = [idx for idx in range(len(rating_normalized)) if idx != excluded_top_index]
+    vectors_reduced = np.delete(vectors, excluded_top_index, axis=0)
+    rating_reduced = np.delete(rating_normalized, excluded_top_index)
+
+    if len(rating_reduced) == 0:
+        return excluded_top_index, None, [], [], []
+
+    # Compute pairwise L2 distances and normalize so the maximum distance is 1
+    distance_matrix = cdist(vectors_reduced, vectors_reduced, metric="euclidean")
+    distance_matrix /= distance_matrix.max() if distance_matrix.max() > 1e-12 else 1
+
+    # Compute weights: responses rated below the mean receive larger weight
+    mean_rating_reduced = np.mean(rating_reduced)
+    w = np.exp(mean_rating_reduced - rating_reduced)
+
+    # Local search setup
+    def compute_objective(chosen_set):
+        return sum(
+            w[i] * min(distance_matrix[i, j] for j in chosen_set)
+            for i in range(len(w))
+        )
+
+    rng = np.random.default_rng(random_seed)
+    all_indices = np.arange(len(rating_reduced))
+    if k < len(rating_reduced):
+        current_set = set(rng.choice(all_indices, size=k, replace=False))
+    else:
+        current_set = set(all_indices)
+    current_cost = compute_objective(current_set)
+
+    # Local search loop: apply the best improving 1-swap until none exists
+    improved, iteration = True, 0
+    while improved and iteration < max_iter:
+        improved = False
+        iteration += 1
+        best_swap = (None, None, 0)
+        for j_out in list(current_set):
+            for j_in in all_indices:
+                if j_in not in current_set:
+                    candidate_set = (current_set - {j_out}) | {j_in}
+                    improvement = current_cost - compute_objective(candidate_set)
+                    if improvement > best_swap[2]:
+                        best_swap = (j_out, j_in, improvement)
+        if best_swap[2] > 1e-12:
+            current_set.remove(best_swap[0])
+            current_set.add(best_swap[1])
+            current_cost -= best_swap[2]
+            improved = True
+
+    # Map the reduced indices back to the original indexing
+    chosen_indices_original = [new_to_old[j] for j in sorted(current_set)]
+    rejected_indices_original = [new_to_old[j] for j in sorted(set(all_indices) - current_set)]
+
+    return (
+        excluded_top_index,
+        chosen_indices_original[0],
+        rejected_indices_original[:k],
+        chosen_indices_original,
+        rejected_indices_original,
+    )
+```
+
+# G. Visualization of t-SNE embeddings for Diverse Responses Across Queries
+
+In this section, we showcase the performance of our method through t-SNE plots across various examples. These illustrative figures show how our baseline Bottom-$k$ algorithm (Section 5.1) selects similar responses that are often close to each other, so the model misses feedback on parts of the answer space that it frequently explores. In contrast, both the AMPO-OPTSELECT and AMPO-CORESET algorithms select noticeably more diverse responses.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Figure 6. t-SNE visualization of projected high-dimensional response embeddings into a 2D space, illustrating the separation of actively selected responses. (a) AMPO-BottomK (baseline). (b) AMPO-Coreset (ours). (c) Opt-Select (ours). Traditional baselines select many responses close to each other based on their rating, providing insufficient feedback to the LLM during preference optimization. In contrast, our methods optimize for objectives including coverage, generation probability, and preference rating.
+
+
+
+
+oid sha256:6a1b677716ea1a7189f9414f96560ac2bddc07135a056d7511390ee429978466
+size 191988
diff --git a/amultiregionbrainmodeltoelucidatetheroleofhippocampusinspatiallyembeddeddecisionmaking/8dbf2916-b95a-49cc-ae75-c9a112fa148f_origin.pdf b/amultiregionbrainmodeltoelucidatetheroleofhippocampusinspatiallyembeddeddecisionmaking/8dbf2916-b95a-49cc-ae75-c9a112fa148f_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..7a301ded4d7d40a22b1605c2389e5564e4cab929
--- /dev/null
+++ b/amultiregionbrainmodeltoelucidatetheroleofhippocampusinspatiallyembeddeddecisionmaking/8dbf2916-b95a-49cc-ae75-c9a112fa148f_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:54533eb10b32d355f63cbbf3e1c0f1a40036874906b97223db087881d6255aa3
+size 18966834
diff --git a/amultiregionbrainmodeltoelucidatetheroleofhippocampusinspatiallyembeddeddecisionmaking/full.md b/amultiregionbrainmodeltoelucidatetheroleofhippocampusinspatiallyembeddeddecisionmaking/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..c06adc53e8dce1b50a2c87acfdfd0f5cfb1320fe
--- /dev/null
+++ b/amultiregionbrainmodeltoelucidatetheroleofhippocampusinspatiallyembeddeddecisionmaking/full.md
@@ -0,0 +1,702 @@
+# A Multi-Region Brain Model to Elucidate the Role of Hippocampus in Spatially Embedded Decision-Making
+
+Yi Xie $^{1,2}$ Jaedong Hwang $^{1}$ Carlos Brody $^{2,3}$ David Tank $^{2}$ Ila Fiete $^{1}$
+
+# Abstract
+
+Brains excel at robust decision-making and data-efficient learning. Understanding the architectures and dynamics underlying these capabilities can inform inductive biases for deep learning. We present a multi-region brain model that explores the normative role of structured memory circuits in a spatially embedded binary decision-making task from neuroscience. We counterfactually compare the learning performance and neural representations of reinforcement learning (RL) agents with brain models of different interaction architectures between grid and place cells in the entorhinal cortex and hippocampus, coupled with an action-selection cortical recurrent neural network. We demonstrate that a specific architecture—where grid cells receive and jointly encode self-movement velocity signals and decision evidence increments—optimizes learning efficiency while best reproducing experimental observations relative to alternative architectures. Our findings thus suggest brain-inspired structured architectures for efficient RL. Importantly, the models make novel, testable predictions about organization and information flow within the entorhinal-hippocampal-neocortical circuit: we predict that grid cells must conjunctively encode position and evidence for effective spatial decision-making, directly motivating new neurophysiological experiments.*
+
+# 1. Introduction
+
+Deep learning has advanced through the adoption of larger datasets (Lin et al., 2014; Russakovsky et al., 2015; Schuhmann et al., 2022) and deeper architectures (Dosovitskiy et al., 2021; He et al., 2016; Jiang et al., 2023), frequently emphasizing scale over the efficiency driven by biologically inspired models (Banino et al., 2018). To bridge this gap, insights from neuroscience can inform more efficient architectures by studying how biological systems process information to make robust and adaptive decisions in dynamic and uncertain environments. The brain contains specialized circuits including the hippocampal circuit (HPC), a key set of brain areas critical for spatial, contextual, and associative learning and memory (O'Keefe, 1978; Dostrovsky & O'Keefe, 1971; Squire, 1992; Scoville & Milner, 1957). Meanwhile, cortical and subcortical regions play a central role in evidence accumulation and decision-making (Pinto et al., 2019; IBL et al., 2023).
+
+*See project page at https://minzsiure.github.io/multiregion-brain-model/. $^{1}$Massachusetts Institute of Technology, Cambridge, MA, USA $^{2}$Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA $^{3}$Howard Hughes Medical Institute, USA. Correspondence to: Yi Xie , Ila Fiete .
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+Brain-scale neural recordings at cellular resolution, which are only recently possible, open a window into how brain regions interact with each other to perform complex tasks. Here we focus on the accumulating tower task, a widely adopted, interpretable benchmark in neuroscience for probing multi-region brain interactions underlying spatially embedded evidence accumulation and decision-making (Pinto et al., 2019; 2022; Nieh et al., 2021; Brown et al., 2024).
+
+In the task, mice navigate an immersive virtual reality corridor, where they are stochastically presented with visual towers on both sides. At the end of the corridor, they must turn left or right depending on which side has more towers (see Figure 1D). This task requires integrating evidence: computing the difference in the total number of towers on each side ("accumulated evidence"). Task standardization enables reproducible experiments, and these in turn inform theoretical models (Lee et al., 2024; Karniol-Tambour et al., 2024) that generate testable predictions about behavior and neural dynamics. We focus on this task to build a multisystem brain model spanning memory, integration, spatial navigation, and decision circuits.
+
+During the task, the dorsal CA1 region of the hippocampus encodes conjunctive cognitive maps of both the animal's location and accumulated evidence. Place fields in this region are tuned not only to spatial position but also to task-relevant accumulated evidence, meaning that individual neurons fire selectively based on both variables (Nieh et al., 2021).
+
+This finding is intriguing given that the task does not explicitly require spatial information for decision-making—the correct choice depends only on the relative frequency of towers on each side. It raises several key questions: Why does the hippocampus, typically associated with spatial navigation and episodic memory, represent accumulated evidence in this task? Does this suggest that the hippocampus has a broader functional role in spatially embedded decision-making tasks, even when spatial information is unnecessary for the decision? Furthermore, why are spatial and evidence representations jointly encoded in the hippocampus? These questions point to the potential involvement of the hippocampus in coordinating computations across multiple brain regions during decision-making.
+
+Decision-making tasks are often modeled within a reinforcement learning (RL) framework (Gershman & Niv, 2015; Gershman & Daw, 2017). While these models excel in task-level performance, they often overlook structured neural architectures and dynamics observed in biological systems. For instance, deep RL approaches applied to the accumulating tower task (Mochizuki-Freeman et al., 2023; Lee et al., 2024) focus on optimizing performance post-training but fail to capture the distributed computations across brain regions and do not explain how natural systems perform efficient and robust learning.
+
+To address these gaps, we developed a multi-region brain model that incorporates an architecturally and dynamically prestructured circuit model of the hippocampal-entorhinal system, Vector Hippocampal Scaffolded Heteroassociative Memory (Vector-HaSH) (Chandra et al., 2025). Our model extends Vector-HaSH by integrating it with cortical and subcortical regions, abstracted as a recurrent neural network (RNN) (Elman, 1990) to serve as the RL decision-making actor. This integration enables the model to function as a biologically grounded RL solver, leveraging structured memory circuits to support spatially embedded decision-making. Inspired by the vision for autonomous machine intelligence outlined in LeCun (2022), we demonstrate that integrating structured, content-addressable associative memory with neural representations is a promising approach for efficient task learning and navigation. Specifically, our work highlights the essential role of structured coding schemes, such as grid cells, in forming world models (cognitive maps) that support efficient task-solving.
+
+We apply this framework to the accumulating tower task and test counterfactual scenarios in which the entorhinal-hippocampal networks receive different inputs. Our model generates normative predictions about tuning in grid cells, the role of entorhinal-hippocampal networks, and the conditions that give rise to efficient learning and performance of spatially embedded decision-making tasks. These findings illuminate how the brain may coordinate computations across its many substructures to flexibly and efficiently tackle complex challenges.
+
+The contributions of this paper are four-fold:
+
+- We propose and demonstrate a multi-region brain model framework that counterfactually tests the computational roles of entorhinal-hippocampal-neocortical interactions during spatially embedded decision-making tasks. Our framework makes novel experimentally verifiable predictions for neuroscience.
+- The model enables systematic exploration of how neural computations shape cognitive capabilities, offering a tool to guide and interpret future neuroscience experiments.
+- We predict that conjunctive position-evidence tuning in grid cells is essential to the emergence of experimentally observed conjunctive position-evidence hippocampal representations (Nieh et al., 2021).
+- Finally, we demonstrate that conjunctive grid cell tuning and non-grid sensory inputs to the hippocampus are critical for learning spatially embedded contexts (model M5).
+
+# 2. Related Works
+
+# 2.1. Biological Evidence and Gaps on Entorhinal-Hippocampal-Neocortical Interactions
+
+Hippocampal place cells (HPC), which encode spatial locations through their activity patterns (Dostrovsky & O'Keefe, 1971), form the basis of the cognitive map theory (O'Keefe, 1978; Fenton, 2015; Moser & Moser, 2016). This theory provides a foundational framework for understanding how flexible and intelligent behaviors arise from coordinated neuronal populations (Fenton, 2024). Cognitive maps not only enable flexible spatial navigation but also support memory organization and the construction of coherent narratives of personal experiences (O'Keefe, 1978; Tolman, 1948; Whittington et al., 2022; Fenton, 2024). Furthermore, the HPC's ability to encode internal cognitive variables, such as accumulated evidence and task-relevant information, highlights its broader role as a model system for studying internally generated cognition (Bostock et al., 1991; Nieh et al., 2021; Olafsdóttir et al., 2015; Tavares et al., 2015).
+
+Interactions between the HPC and the entorhinal cortex (EC) are well documented as crucial for navigation (McNaughton et al., 1996; O'Keefe, 1978) and declarative memory (Scoville & Milner, 1957; Squire, 1992). Within the medial entorhinal cortex (MEC), grid cells (Hafting et al., 2005) provide a spatial metric through their periodic, hexagonal firing fields (Krupic et al., 2012). These grid cells, along
+
+with hippocampal place cells, form the building blocks of neural systems that support both physical navigation and abstract cognitive functions. Recent hypotheses propose that memory and planning mechanisms evolved from processes originally adapted for physical navigation, suggesting a shared computational framework for navigating both physical and mental spaces (Buzsaki & Moser, 2013). Moreover, hippocampal-prefrontal interactions have been shown to play a significant role in higher-order cognitive functions, including decision-making and planning (Eichenbaum, 2017; Preston & Eichenbaum, 2013). These established interactions inform the design of our multi-region model to uncover neural mechanisms underlying cognition.
+
+Recent advances in experimental techniques, such as large-scale neural recordings (Jun et al., 2017; Steinmetz et al., 2021), have made it possible to investigate how thousands of neurons across multiple brain regions coordinate to function coherently (Bondy et al., 2024). However, despite these advances, understanding the mechanistic roles and interactions of individual brain regions in cognition remains a challenge. Our theoretical work bridges this gap by providing interpretable, mechanistic model "testbeds" that complement experimental findings and generate testable predictions to guide future investigations.
+
+# 2.2. Models of Entorhinal-Hippocampal Interactions
+
+Prominent models of entorhinal-hippocampal interactions include the Tolman-Eichenbaum Machine (TEM) (Whittington et al., 2020), a statistical generative model, and Vector-HaSH (Chandra et al., 2025), a biologically realistic, mechanistic model. Vector-HaSH separates fixed-point dynamics for pattern completion from content encoding, leveraging grid-cell scaffolds to prevent catastrophic forgetting and memory capacity cliffs. Unlike generative models, it provides a high-capacity, generalizable framework for spatial and non-spatial memory, making it well-suited to studying episodic and spatial representations.
+
+In contrast to data-driven approaches, such as inferring brain-wide interactions with constrained RNNs (Perich et al., 2020) or disentangling shared and private latent variables across regions (Koukuntla et al., 2024), our mechanistic model provides interpretable hypotheses about entorhinal-hippocampal interactions. By integrating Vector-HaSH with multi-region dynamics, our approach bridges experimental findings with theoretical predictions, advancing understanding of distributed neural computations.
+
+# 2.3. Bidirectional Insights Between Deep Learning and Neuroscience
+
+Machine learning-based frameworks are increasingly applied in neuroscience studies (Richards et al., 2019). For instance, deep neural networks have emerged as plausible
+
+models of the brain (Sacramento et al., 2018; Whittington & Bogacz, 2017), mimicking representational transformations in primate perceptual systems (Bashivan et al., 2019; Kell et al., 2018). These models often exhibit classic behavioral and neurophysiological phenomena when trained on tasks similar to those performed by animals (Banino et al., 2018; Pospisil et al., 2018; Wang et al., 2018). Complementarily, Yamins & DiCarlo (2016) demonstrated correlations between artificial neural network (ANN) representations and neuronal activity in the monkey visual cortex during image classification tasks, informing the design of brain-like ANNs (Kubilius et al., 2019; Zhuang et al., 2021). Such brain-inspired approaches provide value to both neuroscience and machine learning. For example, while navigation is fundamental for humans, it remains challenging for ANNs (Mirowski et al., 2017). Leveraging grid cell-like representations, critical for mammalian navigation, Banino et al. (2018) developed a deep RL agent with navigation abilities resembling those of primates.
+
+# 3. Methods
+
+# 3.1. Entorhinal-Hippocampal-Neocortical Spatial Decision Model
+
+Our multi-region brain model integrates a cortical circuit, abstracted into an action-selection RNN policy, with a prestructured entorhinal-hippocampal circuit inspired by Chandra et al. (2025), which incorporates bidirectional computations between grid cells and place cells to associate, encode, and learn environmental information. As shown in Chandra et al. (2025) and Fig 1A (purple and orange), the entorhinal-hippocampal memory scaffold features a bipartite architecture comprising hidden (hippocampal) and label (grid cell) layers. The scaffold's design is based on established and inferred recurrent connectivity patterns between the MEC and HPC (Witter & Groenewegen, 1984; Amaral & Witter, 1989; Witter & Amaral, 1991; Witter et al., 2017) and among grid cells in the MEC (Burak & Fiete, 2009).
+
+Connections from grid cells to the hippocampus $(\mathbf{W}_{hg})$ are fixed and random, while connections from the hippocampus to grid cells $(\mathbf{W}_{gh})$ are set through associative learning and remain fixed thereafter. Connections between the HPC and non-grid lateral entorhinal cortex (LEC) $(\mathbf{W}_{hs}$ and $\mathbf{W}_{sh}$ ) are learned bidirectionally through associative learning. The grid cell layer operates as a $k$ -hot modular vector, constrained by local recurrent inhibition, where $k$ reflects the number of one-hot grid modules. Each module has a unique periodicity, and velocity inputs (e.g., position and evidence) drive the phase progression of each module within its 2D representational space (i.e., a 2D torus).
+
+We build upon the architecture proposed by Chandra et al. (2025) (Fig 1A), termed Vector-HaSH+ (Fig 1B). Unlike the
+
+Figure 1. Task schematics. (A) Schematic of Vector-HaSH (Chandra et al., 2025), the basis of our model architecture. (B) Schematic of the Vector-HaSH+ circuit, which we propose to model and investigate the neural computation process for spatially embedded decision-making tasks. The numbers in parentheses are the order of computation. (C) Schematic of grid cell code, for a specific example of an agent moving 3 positions forward while encountering evidence value $+1$ , $-1$ , and $+1$ . Here, we assume the grid state is initialized at the top left corner, but the coding scheme is invariant regardless of the initial state. A joint grid representation (top) utilizes both axes of the grid module 2D space for both task variables, position and accumulated evidence, yielding a wiggling activation pattern (red arrows). A disjoint grid representation (bottom) encodes task variables in separate modules, such that each grid module only fires along one axis (red arrows). The periodicity of each module is indicated by dashed gray arrows, as the representation space of a grid module is effectively a 2D torus. (D) Schematic of the RL setup in which an agent navigates a virtual T-maze with towers appearing on both sides, and a reward is given when it turns to the side with more towers in the end. The agent has some field of view ahead, and the visual sensory information is communicated to HPC through MEC and/or non-grid LEC. The HPC code is then mapped by an RNN policy (cortex) to select an action. The action updates the agent's position, which updates the sensory input and then the grid states. This process repeats until task termination.
+
+original Vector-HaSH, Vector-HaSH+ provides flexibility in hippocampal readouts, allowing them to receive projections from both grid cells and non-grid sensory inputs simultaneously or from just one source (Fig 1B, orange and green). Grid states are updated by task-relevant velocity inputs, either across all modules or selectively along specific axes in some modules (see Fig 1C and Appendix A.1). A multilayer perceptron (MLP) processes sensory inputs to extract evidence velocity (Fig 1B, yellow), which informs grid cell updates. Similar to Hwang et al. (2023); Wang et al. (2024), the resulting hippocampal vector is the input to the RNN policy, which is trained using RL policy gradient method to make action decisions (see Fig 1D and Appendix A.3 for a detailed step-by-step procedure).
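The flow just described (sensory input, MLP velocity estimate, grid update, hippocampal readout, RNN action selection) can be caricatured in a few lines. Every component here is an illustrative placeholder of our own, not the paper's trained modules: a `tanh` of the mean sensory input stands in for the MLP, a circular shift stands in for the attractor update, and a fixed linear readout stands in for the RNN policy.

```python
import numpy as np

rng = np.random.default_rng(0)
N_s, N_g, N_h, n_actions = 16, 34, 50, 3
W_hg = rng.normal(size=(N_h, N_g))        # fixed random grid-to-HPC projection
W_pi = rng.normal(size=(n_actions, N_h))  # stand-in for the RNN policy readout

def step(sensory, grid_state):
    """One agent step: sensory -> velocity (MLP stand-in) -> grid shift
    (CAN stand-in) -> hippocampal readout -> policy -> action."""
    velocity = float(np.tanh(sensory.mean()))
    shift = 1 if velocity >= 0 else -1
    grid_state = np.roll(grid_state, shift)
    h = np.maximum(W_hg @ grid_state, 0.0)
    logits = W_pi @ h
    return int(np.argmax(logits)), grid_state

action, g = step(rng.normal(size=N_s), (rng.random(N_g) < 0.1).astype(float))
```

The chosen action then updates the agent's position, which changes the sensory input on the next step, closing the loop in Fig 1D.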
+
+While the MEC, HPC, and LEC all interact with the cortex
+
+biologically (Preston & Eichenbaum, 2013; Eichenbaum, 2017; Canto et al., 2008), we model the hippocampal vector as the primary cortical input for simplicity. This simplification assumes the hippocampal vector alone is sufficient for learning the task, while allowing us to test hypotheses about conjunctive hippocampal representations observed in Nieh et al. (2021), providing a foundation for further investigations. For instance, future studies could evaluate the computational benefits of various combinations of {MEC, HPC, LEC} inputs to the cortex in enabling generalization and efficient learning.
+
+# 3.2. Model Setup
+
+We describe the model formally in the context of the accumulating tower task. As the agent navigates in space
+
+at time $t$ , it processes sensory information from the left and right visual fields, $\vec{f}_L$ and $\vec{f}_R$ , which is projected through the dorsal visual stream to downstream processing regions. This results in a sensory vector in LEC modeled by $\vec{s}(t) = \mathbf{W}_R \cdot \vec{f}_L(t) + \mathbf{W}_L \cdot \vec{f}_R(t)$ , representing a weighted integration of the two fields that can be used for further computations. We assume a simple concatenation of $\vec{f}_L(t)$ and $\vec{f}_R(t)$ , but this setup provides flexibility for modeling ablation studies, e.g., simulating the effects of optogenetically inhibiting one hemisphere. The downstream computation includes velocity prediction that updates grid cell states and projection into HPC.
+
+Following Chandra et al. (2025), the MEC layer of the model contains $k$ one-hot grid cell modules, each a binary-valued periodic function on a discretized 2D hexagonal lattice with periodicity $\lambda$ . Thus, each module state is a vector of dimension $\lambda \times \lambda$ . The module states are concatenated to form a collective grid state $\vec{g} \in \{0,1\}^{N_g}$ , where the vector length $N_{g} = \sum_{M}\lambda_{M}^{2}$ .
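As an illustration of this coding scheme (our own toy encoding, consistent with $N_g = \sum_M \lambda_M^2$; the helper name and phase convention are ours):

```python
import numpy as np

def grid_state(phases, periods):
    """Concatenate k one-hot module states: module M with period lambda_M is a
    (lambda_M x lambda_M) array with a single active phase, flattened."""
    modules = []
    for (px, py), lam in zip(phases, periods):
        m = np.zeros((lam, lam), dtype=int)
        m[px % lam, py % lam] = 1
        modules.append(m.ravel())
    return np.concatenate(modules)

g = grid_state(phases=[(0, 1), (2, 2)], periods=[3, 5])
print(g.size, g.sum())  # 34 2  (N_g = 3**2 + 5**2, one active cell per module)
```

Exactly one cell per module is active, so the collective state is $k$-hot, matching the local recurrent inhibition described above.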
+
+The grid cell state is updated through continuous attractor recurrence dynamics (Burak & Fiete, 2009), where a module-wise winner-take-all mechanism, $CAN[\cdot]$ , shifts each grid module based on position and evidence velocity signals $v(t)$ informed by $\vec{s}(t)$ . We model the velocity estimation as an MLP (Rosenblatt, 1958) that processes sensory inputs for simplicity, representing a form of visual-vestibular integration (DeAngelis & Angelaki, 2012). Following Chandra et al. (2025), we assume this process occurs externally to the entorhinal-hippocampal circuit. This simplification does not affect our results and is beyond the scope of this framework.
+
+The grid cell state update at time $t$ is thus formalized as
+
+$$
+\vec{g}(t+1) = \operatorname{CAN}\left[\vec{g}(t), v(t)\right]. \tag{1}
+$$
+
+The full implementation of $\mathrm{CAN}[\cdot]$ is provided in Appendix A.5.
+
+The grid cell layer and the non-grid sensory layer project onto the HPC layer, such that the hippocampal activities are
+
+$$
+\vec{h}_{\mathrm{mix}}(t+1) = \operatorname{ReLU}\left[\mathbf{W}_{hs}\cdot\vec{s}(t) + \mathbf{W}_{hg}\cdot\vec{g}(t+1)\right]. \tag{2}
+$$
+
+We also test the variants of hippocampal coding, in which only the grid cell layer projects onto the HPC layer, such that the hippocampal activities are
+
+$$
+\vec{h}_{\mathrm{nonmix}}(t+1) = \operatorname{ReLU}\left[\mathbf{W}_{hg}\cdot\vec{g}(t+1)\right]. \tag{3}
+$$
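Eqns (2) and (3) amount to a rectified linear readout of grid and sensory drive. A minimal NumPy sketch, with dimensions and random weights that are illustrative only (in the model, $\mathbf{W}_{hg}$ is fixed random and $\mathbf{W}_{hs}$ is learned):

```python
import numpy as np

rng = np.random.default_rng(0)
N_s, N_g, N_h = 16, 34, 50
W_hs = rng.normal(size=(N_h, N_s))  # sensory-to-HPC weights (random stand-in)
W_hg = rng.normal(size=(N_h, N_g))  # grid-to-HPC weights (random stand-in)

s = rng.normal(size=N_s)                    # non-grid sensory vector s(t)
g = (rng.random(N_g) < 0.05).astype(float)  # sparse binary grid state g(t+1)

relu = lambda x: np.maximum(x, 0.0)
h_mix = relu(W_hs @ s + W_hg @ g)  # Eqn (2): grid + sensory drive
h_nonmix = relu(W_hg @ g)          # Eqn (3): grid drive only
print(h_mix.shape)  # (50,)
```

The two readouts differ only in whether the sensory term is included, which is exactly the axis along which the model variants in Table 2 are distinguished.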
+
+The connectivity between the HPC layer and the EC layer is updated in both cases as pseudo-inverse $(^{+})$ learned heteroassociative weights,
+
+$$
+\mathbf {W} _ {h s} = \mathbf {H} \mathbf {S} ^ {+}, \tag {4}
+$$
+
+$$
+\mathbf {W} _ {s h} = \mathbf {S} \mathbf {H} ^ {+}, \tag {5}
+$$
+
+Table 1. Overview of how model variants correspond to alternative hypotheses of neural coding and information flow based on evidence source. Our final model, M5, is marked with *.
+
+| Source of evidence | Not grid cells | Grid cells |
+| --- | --- | --- |
+| Not sensory | M1 | M3 |
+| Sensory | M2 | M4, M5* |
+
+Table 2. Summary of the neural coding and information flow in each model variant. Our final model, M5, is marked with $*$ .
+
+| Model | Grid cell code | Place cell code | MLP input | RNN input |
+| --- | --- | --- | --- | --- |
+| M0 | - | - | - | s |
+| M0+ | - | - | s | s & v_pos & evi. |
+| M1 | pos. | g | - | p |
+| M2 | pos. | g & s | - | p |
+| M3 | joint pos. & evi. | g | s | p |
+| M4 | disjoint pos. & evi. | g & s | s | p |
+| M5* | joint pos. & evi. | g & s | s | p |
+
+where $\mathbf{H}$ is a $N_{h} \times N_{patts}$ matrix with $N_{patts}$ hippocampal states, each of length $N_{h}$ , and $\mathbf{S}$ is a $N_{s} \times N_{patts}$ matrix with columns as the encoded sensory inputs of length $N_{s}$ .
+
+In Appendix F, we model variants with recurrence in the HPC layer (namely, CA3 recurrence (Sammons et al., 2024)); adding it does not alter the conclusions in the main paper. The hippocampal state $\vec{h}$ in Eqn 2 (or Eqn 3) is the readout of the entorhinal-hippocampal circuit to the cortex (under the modeling rationale explained in Section 3.1), which is an action-selection RNN policy trained via policy gradient under reinforcement learning. Please refer to Appendix A.3 for a step-by-step account of what one agent step in the environment entails across the involved brain regions.
+
+# 4. Alternative Multi-Region Interaction Hypotheses
+
+Nieh et al. (2021) observed conjunctive coding of both accumulated evidence (cognitive variable) and position (physical variable) in the hippocampus when mice perform the accumulating tower task, suggesting that the hippocampus also performs a general computation, rather than merely responding to features of external stimulus such as space (O'Keefe & Burgess, 1996). This discovery highlights the need for mechanistic modeling at an appropriate scale to explore how interactions across multiple brain regions contribute to internally generated cognition. Such processes enable individuals to flexibly navigate spaces and organize, relate, and integrate experiences, objects, and events.
+
+We use the accumulating tower task as a minimal framework to hypothesize three potential mechanisms underlying the conjunctive HPC code of physical and cognitive variables:
+
+
+Figure 2. Model schematics. Counterfactual models of hypotheses on neural code and information flow, as detailed in Tables 1 and 2.
+
+- Grid cells encode position, aligning with the prevailing view (Moser et al., 2008), and the conjunctive encoding in the HPC arises from sensory input provided by non-grid LEC neurons (M2).
+- Grid cells co-tune to both position and evidence, a phenomenon that has not been extensively investigated experimentally to the best of our knowledge. The LEC pathway is neither necessary nor relevant (M3).
+- Grid cells co-tune to both position and evidence, and the EC pathway contributes to the formation of the conjunctive hippocampal code (M4, M5).
+
+Notably, there are two possible mechanisms for grid cell tuning of accumulated evidence and position:
+
+- Joint Integration Model: Grid cell modules each encode a combination of evidence and position by leveraging their 2D toroidal attractor network. This implies the simultaneous representation of spatial and cognitive variables within the same grid modules (M3 & M5, see Fig 1C top).
+- Disjoint Integration Model: Individual grid cell modules each encode distinct task variables. Specifically, some grid cell modules exclusively encode position, while others exclusively encode evidence (M4, see Fig 1C bottom).
+
+Our framework provides a systematic approach to evaluate the proposed hypotheses shown in Tables 1 and 2, and Fig 2. This evaluation includes (a) quantitative analyses of learning performance and behavioral outcomes (Section 5.1) and (b) qualitative alignment with experimental findings (Section 5.2). Furthermore, we hypothesize the roles of individual brain regions in spatially embedded decision-making tasks by analyzing neural representations (Section 5.3).
+
+# 5. Results
+
+# 5.1. Joint Integration Model Induces Efficient Learning
+
+We compare the performance of different model variants in terms of cumulative success rate and exploration efficiency during training (Fig 3). Additionally, as shown in Appendices G and H, our findings remain consistent after tuning the learning rate or matching the number of parameters in M0 and $\mathrm{M}0+$ to that of M5. Our results show that agents fail to solve the task when grid cells do not encode evidence (Fig 3A, orange and green), highlighting the importance of MEC in integrating cognitive variables. Moreover, RNN-only baselines (M0 and $\mathrm{M}0+$ , in blue and black, respectively), with the same number of neurons as M1-M5, perform poorly even when supplied with positional and velocity information ( $\mathrm{M}0+$ , in black), suggesting that the EC-HPC network is critical for temporal integration. Additionally, models with jointly tuned grid cells (red and brown) learn more efficiently than those with disjointly tuned grid cells (purple). This phenomenon lacks an immediately clear computational explanation, which we investigate in Section 5.2.
+
+Interestingly, when sensory information is projected to the hippocampus (M4, M5), the learning performance becomes more variable relative to M3 (Fig 3A), possibly because mixing place codes with sensory signals complicates the representation. However, including sensory information in HPC increases exploration efficiency (brown in Fig 3B), presumably because it captures nuances of the environment, such as wall positions, that complement the rigid information encoded by grid cells. In support of this view, M3, a variant of M5 without sensory projections, requires longer navigation times (red in Fig 3B). Please refer to Appendix A.1 for implementation details.
+
+
+Figure 3. Learning performance measured by cumulative success rate and exploration efficiency over the course of training for all model variants. We present the mean and standard deviation of these metrics across three trials. Baselines are indicated by dashed gray lines. In (A), using a window size of 5,000 episodes, we observe efficient learning in models with jointly tuned grid cells (M3, in red; M5, in brown). In (B), with a window size of 10,000 episodes, M5 demonstrates its ability to effectively leverage spatial information, navigating the maze more quickly (brown). For clarity, data from the first 100 episodes are excluded due to initial instability.
+
+
+
+# 5.2. Joint Integration Model Predicts Evidence-Position Co-Tuning in Grid Cells
+
+As the simulated hypotheses show, joint tuning of position and evidence in the MEC promotes efficient learning, while the sensory pathway enhances efficient navigation. Here, we further analyze the relationship between grid cell computations and place cell firing patterns. Our results reveal that model variants capable of efficient task learning exhibit firing fields closely resembling the experimental observations in Nieh et al. (2021). This alignment yields a clear prediction: conjunctive grid cell representations give rise to the joint encoding of spatial and cognitive variables in the HPC.
+
+# 5.2.1. JOINT INTEGRATION MODEL EXHIBITS EXPERIMENTALLY ALIGNED HPC FIELDS
+
+Nieh et al. (2021) demonstrated experimentally that individual CA1 neurons encode both position and accumulated evidence. This interdependence implies that trials leading to the same final decision would evoke distinct hippocampal firing sequences, as the agent traverses different tower/evidence configurations (Fig 4, left). Consequently, smaller firing fields would partition the evidence dimension within the Evidence (E) $\times$ Position (Y) space of hippocampal activity.
+
+We found that joint integration models (M5 in Fig 4, bottom right; M3 in Appendix C) successfully replicate the $E \times Y$ place fields observed by Nieh et al. (2021). In contrast, models lacking joint integration, such as M4 (Fig 4, top right), fail to reproduce this behavior, instead exhibiting stripe-like firing patterns indicative of independent representations of position and evidence. In Appendix I, we additionally demonstrate that smoothing hippocampal activity reveals more localized and stereotyped tuning, consistent with experimental observations.
+
+
+Figure 4. Place cell tuning during the task. The schematic plot (left, adapted from Nieh et al. (2021)), and the firing fields of selective hippocampal neurons in models M4 (right, top) and M5 (right, bottom). The firing field of each hippocampal cell is determined by averaging smoothed neural activity across trials and normalizing within the cell. See Appendix C for raw firing fields without smoothing and Appendix I for smoothing details using $\sigma_{1},\sigma_{2} = 1$ . In Nieh et al. (2021), since hippocampal cells have a conjunctive code of evidence (E) and position (Y), smaller firing fields effectively partition the evidence dimension in the $\mathrm{E}\times \mathrm{Y}$ space. Notably, only models with jointly tuned grid cells exhibit conjunctive place fields in the $\mathrm{E}\times \mathrm{Y}$ space (right, bottom).
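+The averaging, smoothing, and within-cell normalization described in the caption can be sketched as follows. This is a minimal numpy sketch under our own assumptions about the array layout and function names; it is not the paper's code.
+
+```python
+import numpy as np
+
+def gaussian_kernel1d(sigma=1.0, radius=4):
+    """Normalized 1-D Gaussian kernel."""
+    x = np.arange(-radius, radius + 1)
+    k = np.exp(-0.5 * (x / sigma) ** 2)
+    return k / k.sum()
+
+def smooth2d(field, sigma=1.0):
+    """Separable Gaussian smoothing with sigma_1 = sigma_2 = sigma."""
+    k = gaussian_kernel1d(sigma)
+    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, field)
+    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)
+
+def firing_field(trial_maps, sigma=1.0):
+    """Trial-averaged, smoothed, within-cell-normalized E x Y field.
+
+    trial_maps: (n_trials, n_evidence_bins, n_position_bins) binned
+    activity of one hippocampal cell (hypothetical layout).
+    """
+    smoothed = np.stack([smooth2d(m, sigma) for m in trial_maps])
+    field = smoothed.mean(axis=0)       # average across trials
+    field -= field.min()                # normalize within the cell
+    peak = field.max()
+    return field / peak if peak > 0 else field
+```
+
+The resulting map lies in [0, 1] per cell, so fields of different cells can be compared on the same color scale, as in the figure.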
+
+# 5.2.2. JOINT INTEGRATION MODEL EXHIBITS BOTH CHOICE-SPECIFIC FIELDS & EVIDENCE FIELDS
+
+Here, we demonstrate that only the joint integration model aligns perfectly with the experimental findings, strongly supporting its role in governing the neural computations underlying place cell behaviors described in Nieh et al. (2021). In this model, grid cells jointly encode evidence and position, enabling the HPC to create integrated maps.
+
+Only joint integration models exhibit choice-specific neurons Nieh et al. (2021) observed that CA1 neurons exhibit choice-specific place cell sequences when sorted by their
+
+
+Figure 5. Choice-specific place cell sequences & evidence fields. Only models with a conjunctive grid code exhibit the choice-specific place cell sequences observed in Nieh et al. (2021). Activation is averaged and normalized within each cell. Panel columns (1)-(3) compare results from Nieh et al. (2021), the disjoint integration model M4, and the joint integration model M5, respectively. (A) Choice-specific place cell sequences. Cells are categorized into left-choice-preferring (top), right-choice-preferring (middle), and non-preferring (bottom) based on the significance of mutual information; within each row, cells are sorted by the peak activities of the respective neurons on preferred choices. (B) Firing fields of place cells in accumulated evidence space, sorted by the positions of peak activities.
+
+peak activity positions. To ensure a fair comparison, we analyze our models using the same approach, computing the mutual information (see Appendix B) between each cell's activity and the agent's position during left- and right-choice trials, and comparing these results to a shuffled dataset for their significance. Our analysis illustrates that only joint integration models (M3, M5) have a subset of place cells that are choice-specific under this metric (Fig 5, A3; Fig 9, B), closely matching experimental observations (Fig 5, A1). In contrast, the disjoint integration model does not exhibit choice-specific place cell sequences (Fig 5, A2).
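+The shuffle-based significance test described above can be sketched with a simple histogram estimate of mutual information. This is a simplified stand-in for illustration; the paper's exact estimator and binning are specified in Appendix B, which we do not reproduce here.
+
+```python
+import numpy as np
+
+def mutual_information(x, y, bins=8):
+    """Histogram-based MI estimate (in nats) between two 1-D signals."""
+    joint, _, _ = np.histogram2d(x, y, bins=bins)
+    p = joint / joint.sum()
+    px = p.sum(axis=1, keepdims=True)        # marginal of x
+    py = p.sum(axis=0, keepdims=True)        # marginal of y
+    nz = p > 0
+    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())
+
+def shuffle_significance(activity, variable, n_shuffles=200, alpha=0.05, seed=0):
+    """Shuffle test: is the activity-variable MI above the null
+    distribution obtained by permuting activity across timepoints?"""
+    rng = np.random.default_rng(seed)
+    mi = mutual_information(activity, variable)
+    null = np.array([mutual_information(rng.permutation(activity), variable)
+                     for _ in range(n_shuffles)])
+    return mi > np.quantile(null, 1 - alpha), mi
+```
+
+Running this per cell against position during left- and right-choice trials separately yields the left-/right-/non-preferring categorization used in Fig 5.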
+
+Place cells form firing fields in evidence space Similarly, we measure the mutual information between accumulated evidence and the neural activity of each place cell. As expected, when grid cells encode evidence, place cells then form firing fields in evidence space, spanning small segments of evidence values, consistent with Fig 4, left. Conversely, place cells fail to form evidence fields in models M1 and M2, where evidence information is either absent or originates from the LEC instead of MEC (see Appendix D).
+
+# 5.3. Only Joint Integration Model With Activated EC Pathway Exhibits Well-Separated Low-Dimensional Co-representation of Task Variables
+
+We performed Principal Component Analysis (PCA) on hippocampal and cortical activities to assess whether task variables form visually separable clusters in low-dimensional principal component (PC) space. The presence of such low-dimensional representations would provide insight into the functional roles of specific brain regions and the computational strategies employed by different model variants.
+
+We showed that only the joint integration model with an activated LEC pathway (M5) exhibits distinct, visually separable clusters of hippocampal activity in PC space for both position (Fig 6, B1) and local evidence velocity (#R-#L towers at a position, Fig 6, B2). This contrasts sharply with other model variants (Figs 6, A1, A2, and Appendix E). Interestingly, we did not observe separability in accumulated evidence within the first three PCs of hippocampal activity. This is counterintuitive, given that grid cells encode and communicate accumulated evidence to the HPC. Since hippocampal neurons are projection neurons (Fox & Ranck Jr, 1975; 1981), the source and encoding mechanism of local evidence velocity in the HPC warrant further investigation. Future studies could analyze existing experimental data to validate the predicted hippocampal role in representing local evidence velocity and conduct ablation studies using the proposed model to understand the underlying mechanisms. Together, these findings underscore the importance of sensory inputs from the LEC in generating cohesive, low-dimensional representations of task variables in the hippocampus, which are critical for efficient learning and spatial navigation (Fig 3).
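+The PC projection behind these visualizations can be sketched in a few lines (numpy-only; the plotting and the coloring by task variable are omitted, and the function name is ours):
+
+```python
+import numpy as np
+
+def top_pcs(activity, k=3):
+    """Project population activity (timepoints x neurons) onto its
+    top-k principal components via SVD of the centered data."""
+    X = activity - activity.mean(axis=0)        # center each neuron
+    U, S, Vt = np.linalg.svd(X, full_matrices=False)
+    return X @ Vt[:k].T                         # (timepoints, k) scores
+```
+
+Scatter-plotting the three score columns, colored by position or by local evidence velocity, is how separability of the kind shown in Fig 6 would be assessed.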
+
+Figure 6. Separability of hippocampal (column 1, 2) and cortical representations (column 3, 4) in low-dimensional PC space in M4 (row A) and M5 (row B). Joint grid cell code with activated EC-HPC pathway (M5) uniquely leads to separable, low-dimensional hippocampal representations when colored by position (B1) and local evidence velocity (#R-#L per position, B2). We did not observe other separable task variables in the first 3 PCs of hippocampal activities in all variants (Appendix E). We observe the separability of actions in hippocampal and cortical activities in PC space in both M4 and M5 (columns 3 and 4). The gray lines indicate the trace of temporal trajectories.
+
+
+
+# 6. Discussion
+
+Our work predicts that grid cells jointly encode spatial and task-relevant information, and that this conjunctive coding, along with LEC sensory input to the hippocampus, facilitates efficient decision-making in spatial contexts. Moreover, our findings indicate that conjunctive grid coding is essential for replicating the experimental results of Nieh et al. (2021), extending the prevailing view that grid cells primarily support spatial representations (Moser et al., 2008).
+
+We derive this prediction by proposing and testing counterfactual neural codes and information flows in variants of our multi-region brain models, which integrate a prestructured entorhinal-hippocampal circuit (Chandra et al., 2025) with a cortical action-selecting recurrent neural network (RNN). These models are evaluated based on their task performance and hippocampal representations in the accumulating tower task (Nieh et al., 2021), formulated as an RL problem.
+
+While the CA3 region of the hippocampus is known to exhibit recurrent dynamics (Sammons et al., 2024), we follow Chandra et al. (2025) in assuming, based on anatomical evidence (Donato et al., 2017), that structured, input-driven coding in the MEC plays a primary role, as it matures first and drives the development of hippocampal circuits. In Appendix F, we test variants of models M2 and M4 incorporating CA3 recurrence and find that this modification neither reproduces the experimentally observed conjunctive hippocampal code in Nieh et al. (2021) nor alters the conclusions drawn in the main paper. Building on these results, future studies could extend our framework by systematically ablating models M1-M5 to test the computational roles of CA3 recurrence and other mechanisms. Concurrently, we are collecting neurophysiological data to directly test our falsifiable predictions and assess whether CA3 recurrence further refines hippocampal place cell tuning.
+
+Taken together, our findings demonstrate that conjunctive grid coding is fundamental to spatially embedded decision-making, supporting the hypothesis (Buzsaki & Moser, 2013) that spatial and cognitive processes are deeply interconnected within the brain's navigation and memory systems. More broadly, neural algorithms that support path integration and spatial navigation may be repurposed for abstract cognitive functions, suggesting that the hippocampal-entorhinal network facilitates both physical navigation and decision-making based on internal cognitive states.
+
+In conclusion, we presented a comprehensive testbed for exploring how the hippocampal-entorhinal-neocortical network integrates physical and cognitive information to build flexible neural representations that facilitate learning, decision-making, and navigation. This framework provides a foundation for future wet lab studies to test clear, falsifiable predictions while minimizing reliance on invasive and resource-intensive animal experiments, offering a platform to investigate the links between physical navigation and abstract cognitive processes. Furthermore, it highlights the mutual benefits of integrating machine learning and neuroscience in advancing our understanding of neural phenomena and guiding future research. This synergy underscores the transformative potential of neuro-inspired artificial intelligence.
+
+# Acknowledgments
+
+Ila Fiete is supported by the Office of Naval Research, the Howard Hughes Medical Institute (HHMI), and NIH (NIMH MH129046). Carlos Brody and David Tank are supported by NIH (U19NS132720). We are grateful to Dr. Sarthak Chandra, Dr. Manuel Schottdorf, and the BRAIN CoGS community for their many insightful discussions. We appreciate the constructive feedback received during our presentations of preliminary results at the 2024 Conference on Cognitive Computational Neuroscience and the 2025 Society for Neuroscience annual meeting.
+
+# Impact Statement
+
+This work advances both machine learning and neuroscience by presenting a biologically plausible multi-region brain model as a computational framework for studying decision-making in spatial contexts in mammals. The model is a testbed for hypothesis generation, offering an efficient way to explore neural mechanisms, such as the role of conjunctive coding in hippocampal-entorhinal circuits, that are difficult to measure experimentally. By reducing the need for invasive or resource-intensive experiments, our approach has the potential to accelerate neuroscience discovery while minimizing unnecessary animal studies. Additionally, the structured neural architectures studied in this work may inform the design of more robust and interpretable machine learning systems that leverage biological principles for real-world decision-making tasks.
+
+This work contributes to advancing the fundamental understanding of the brain and bridging the gap between neuroscience and machine learning. We do not foresee significant negative societal impacts from this research.
+
+# References
+
+Amaral, D. G. and Witter, M. P. The three-dimensional organization of the hippocampal formation: a review of anatomical data. Neuroscience, 31(3):571-591, 1989.
+Banino, A., Barry, C., Uria, B., Blundell, C., Lillicrap, T., Mirowski, P., Pritzel, A., Chadwick, M. J., Degris, T., Modayil, J., et al. Vector-based navigation using grid-like representations in artificial agents. Nature, 557(7705): 429-433, 2018.
+Bashivan, P., Kar, K., and DiCarlo, J. J. Neural population control via deep image synthesis. Science, 364(6439): eaav9436, 2019.
+Bondy, A. G., Charlton, J. A., Luo, T. Z., Kopec, C. D., Stagnaro, W. M., Venditto, S. J. C., Lynch, L., Janarthanan, S., Oline, S. N., Harris, T. D., et al. Coordinated cross-brain activity during accumulation of sensory evidence and decision commitment. bioRxiv, pp. 2024-08, 2024.
+Bostock, E., Muller, R. U., and Kubie, J. L. Experience-dependent modifications of hippocampal place cell firing. Hippocampus, 1(2):193-205, 1991.
+Brown, L. S., Cho, J. R., Bolkan, S. S., Nieh, E. H., Schottdorf, M., Tank, D. W., Brody, C. D., Witten, I. B., and Goldman, M. S. Neural circuit models for evidence accumulation through choice-selective sequences. bioRxiv, 2024. doi: 10.1101/2023.09.01.555612.
+Burak, Y. and Fiete, I. R. Accurate path integration in continuous attractor network models of grid cells. PLoS computational biology, 5(2):e1000291, 2009.
+Buzsaki, G. and Moser, E. I. Memory, navigation and theta rhythm in the hippocampal-entorhinal system. Nature neuroscience, 16(2):130-138, 2013.
+Canto, C. B., Wouterlood, F. G., and Witter, M. P. What does the anatomical organization of the entorhinal cortex tell us? Neural plasticity, 2008(1):381243, 2008.
+Chandra, S., Sharma, S., Chaudhuri, R., and Fiete, I. Episodic and associative memory from spatial scaffolds in the hippocampus. Nature, pp. 1-13, 2025.
+DeAngelis, G. C. and Angelaki, D. E. Visual-vestibular integration for self-motion perception. In Murray, M. and Wallace, M. (eds.), The Neural Bases of Multisensory Processes, chapter 31. CRC Press/Taylor & Francis, Boca Raton, FL, 2012.
+Donato, F., Jacobsen, R. I., Moser, M.-B., and Moser, E. I. Stellate cells drive maturation of the entorhinal-hippocampal circuit. Science, 355(6330):eaai8178, 2017.
+Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., and Houlsby, N. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2021.
+Dostrovsky, J. and O'Keefe, J. The hippocampus as a spatial map. preliminary evidence from unit activity in the freely moving rat. Brain research, 34(1):171-175, 1971.
+Eichenbaum, H. Prefrontal-hippocampal interactions in episodic memory. Nature Reviews Neuroscience, 18(9): 547-558, 2017.
+Elman, J. L. Finding structure in time. Cognitive science, 14(2):179-211, 1990.
+Fenton, A. A. Coordinating with the "inner gps". Hippocampus, 25(6):763-769, 2015.
+
+Fenton, A. A. Remapping revisited: how the hippocampus represents different spaces. Nature Reviews Neuroscience, pp. 1-21, 2024.
+Fox, S. and Ranck Jr, J. Electrophysiological characteristics of hippocampal complex-spike cells and theta cells. Experimental Brain Research, 41(3):399-410, 1981.
+Fox, S. E. and Ranck Jr, J. B. Localization and anatomical identification of theta and complex spike cells in dorsal hippocampal formation of rats. Experimental neurology, 49(1):299-313, 1975.
+Gershman, S. J. and Daw, N. D. Reinforcement learning and episodic memory in humans and animals: an integrative framework. Annual review of psychology, 68(1):101-128, 2017.
+Gershman, S. J. and Niv, Y. Novelty and inductive generalization in human reinforcement learning. *Topics in cognitive science*, 7(3):391-415, 2015.
+Hafting, T., Fyhn, M., Molden, S., Moser, M.-B., and Moser, E. I. Microstructure of a spatial map in the entorhinal cortex. Nature, 436(7052):801-806, 2005.
+He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In CVPR, 2016.
+Hwang, J., Neupane, S., Jazayeri, M., and Fiete, I. A grid cell-place cell scaffold allows rapid learning and generalization at multiple levels on mental navigation tasks. In CCN, 2023.
+International Brain Laboratory (IBL), Benson, B., Benson, J., Birman, D., Bonacchi, N., Carandini, M., Catarino, J. A., Chapuis, G. A., Churchland, A. K., Dan, Y., et al. A brain-wide map of neural activity during complex behaviour. bioRxiv, pp. 2023-07, 2023.
+Jiang, A. Q., Sablayrolles, A., Mensch, A., Bamford, C., Chaplot, D. S., Casas, D. d. l., Bressand, F., Lengyel, G., Lample, G., Saulnier, L., et al. Mistral 7b. arXiv preprint arXiv:2310.06825, 2023.
+Jun, J. J., Steinmetz, N. A., Siegle, J. H., Denman, D. J., Bauza, M., Barbarits, B., Lee, A. K., Anastassiou, C. A., Andrei, A., Aydin, C., et al. Fully integrated silicon probes for high-density recording of neural activity. Nature, 551(7679):232-236, 2017.
+Karniol-Tambour, O., Zoltowski, D. M., Diamanti, E. M., Pinto, L., Brody, C. D., Tank, D. W., and Pillow, J. W. Modeling state-dependent communication between brain regions with switching nonlinear dynamical systems. In The Twelfth International Conference on Learning Representations, 2024.
+
+Kell, A. J., Yamins, D. L., Shook, E. N., Norman-Haignere, S. V., and McDermott, J. H. A task-optimized neural network replicates human auditory behavior, predicts brain responses, and reveals a cortical processing hierarchy. Neuron, 98(3):630-644, 2018.
+Koukuntla, S., Julian, J. B., Kaminsky, J. C., Schottdorf, M., Tank, D. W., Brody, C. D., and Charles, A. S. Unsupervised discovery of the shared and private geometry in multi-view data, 2024.
+Krupic, J., Burgess, N., and O'Keefe, J. Neural representations of location composed of spatially periodic bands. Science, 337(6096):853-857, 2012.
+Kubilius, J., Schrimpf, M., Kar, K., Rajalingham, R., Hong, H., Majaj, N., Issa, E., Bashivan, P., Prescott-Roy, J., Schmidt, K., et al. Brain-like object recognition with high-performing shallow recurrent anns. Advances in neural information processing systems, 32, 2019.
+LeCun, Y. A path towards autonomous machine intelligence version 0.9.2, 2022-06-27. Open Review, 62(1):1-62, 2022.
+Lee, R. S., Sagiv, Y., Engelhard, B., Witten, I. B., and Daw, N. D. A feature-specific prediction error model explains dopaminergic heterogeneity. Nature neuroscience, pp. 1-13, 2024.
+Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollar, P., and Zitnick, C. L. Microsoft coco: Common objects in context. In ECCV, 2014.
+McNaughton, B. L., Barnes, C. A., Gerrard, J. L., Gothard, K., Jung, M. W., Knierim, J. J., Kudrimoti, H., Qin, Y., Skaggs, W., Suster, M., et al. Deciphering the hippocampal polyglot: the hippocampus as a path integration system. Journal of Experimental Biology, 199(1):173-185, 1996.
+Mirowski, P., Pascanu, R., Viola, F., Soyer, H., Ballard, A. J., Banino, A., Denil, M., Goroshin, R., Sifre, L., Kavukcuoglu, K., et al. Learning to navigate in complex environments. In ICLR, 2017.
+Mochizuki-Freeman, J., Maini, S. S., and Tiganj, Z. Characterizing neural activity in cognitively inspired rl agents during an evidence accumulation task. In IJCNN, 2023.
+Moser, E. I., Kropff, E., and Moser, M.-B. Place cells, grid cells, and the brain's spatial representation system. Annu. Rev. Neurosci., 31(1):69-89, 2008.
+Moser, M.-B. and Moser, E. I. Where am i? where am i going? Scientific American, 314(1):26-33, 2016.
+Nieh, E. H., Schottdorf, M., Freeman, N. W., Low, R. J., Lewallen, S., Koay, S. A., Pinto, L., Gauthier, J. L., Brody, C. D., and Tank, D. W. Geometry of abstract learned knowledge in the hippocampus. Nature, 595(7865):80-84, 2021.
+O'Keefe, J. The hippocampus as a cognitive map, 1978.
+O'Keefe, J. and Burgess, N. Geometric determinants of the place fields of hippocampal neurons. Nature, 381(6581): 425-428, 1996.
+Ólafsdóttir, H. F., Barry, C., Saleem, A. B., Hassabis, D., and Spiers, H. J. Hippocampal place cells construct reward related sequences through unexplored space. *Elife*, 4: e06063, 2015.
+Perich, M. G., Arlt, C., Soares, S., Young, M. E., Mosher, C. P., Minxha, J., Carter, E., Rutishauser, U., Rudebeck, P. H., Harvey, C. D., et al. Inferring brain-wide interactions using data-constrained recurrent neural network models. *BioRxiv*, pp. 2020–12, 2020.
+Pinto, L., Rajan, K., DePasquale, B., Thiberge, S. Y., Tank, D. W., and Brody, C. D. Task-dependent changes in the large-scale dynamics and necessity of cortical regions. Neuron, 104(4):810-824, 2019.
+Pinto, L., Tank, D. W., and Brody, C. D. Multiple timescales of sensory-evidence accumulation across the dorsal cortex. *Elife*, 11:e70263, 2022.
+Pospisil, D. A., Pasupathy, A., and Bair, W. 'Artiphysiology' reveals V4-like shape tuning in a deep network trained for image classification. *Elife*, 7:e38242, 2018.
+Preston, A. R. and Eichenbaum, H. Interplay of hippocampus and prefrontal cortex in memory. Current biology, 23 (17):R764-R773, 2013.
+Richards, B. A., Lillicrap, T. P., Beaudoin, P., Bengio, Y., Bogacz, R., Christensen, A., Clopath, C., Costa, R. P., de Berker, A., Ganguli, S., et al. A deep learning framework for neuroscience. Nature neuroscience, 22(11): 1761-1770, 2019.
+Rosenblatt, F. The perceptron: a probabilistic model for information storage and organization in the brain. *Psychological review*, 65(6):386, 1958.
+Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A. C., and Fei-Fei, L. ImageNet Large Scale Visual Recognition Challenge. IJCV, 115(3):211-252, 2015.
+Sacramento, J., Ponte Costa, R., Bengio, Y., and Senn, W. Dendritic cortical microcircuits approximate the backpropagation algorithm. Advances in neural information processing systems, 31, 2018.
+
+Sammons, R. P., Vezir, M., Moreno-Velasquez, L., Cano, G., Orlando, M., Sievers, M., Grasso, E., Metodieva, V. D., Kempter, R., Schmidt, H., et al. Structure and function of the hippocampal ca3 module. Proceedings of the National Academy of Sciences, 121(6):e2312281120, 2024.
+Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al. Laion-5b: An open large-scale dataset for training next generation image-text models. NeurIPS, 2022.
+Scoville, W. B. and Milner, B. Loss of recent memory after bilateral hippocampal lesions. Journal of neurology, neurosurgery, and psychiatry, 20(1):11, 1957.
+Skaggs, W., McNaughton, B., and Gothard, K. An information-theoretic approach to deciphering the hippocampal code. In Hanson, S., Cowan, J., and Giles, C. (eds.), Advances in Neural Information Processing Systems, volume 5. Morgan-Kaufmann, 1992.
+Squire, L. R. Memory and the hippocampus: a synthesis from findings with rats, monkeys, and humans. Psychological review, 99(2):195, 1992.
+Steinmetz, N. A., Aydin, C., Lebedeva, A., Okun, M., Pachitariu, M., Bauza, M., Beau, M., Bhagat, J., Böhm, C., Broux, M., et al. Neuropixels 2.0: A miniaturized high-density probe for stable, long-term brain recordings. Science, 372(6539):eabf4588, 2021.
+Sutton, R. S., McAllester, D., Singh, S., and Mansour, Y. Policy gradient methods for reinforcement learning with function approximation. Advances in neural information processing systems, 12, 1999.
+Tavares, R. M., Mendelsohn, A., Grossman, Y., Williams, C. H., Shapiro, M., Trope, Y., and Schiller, D. A map for social navigation in the human brain. Neuron, 87(1): 231-243, 2015.
+Tolman, E. C. Cognitive maps in rats and men. Psychological review, 55(4):189, 1948.
+Wang, J. X., Kurth-Nelson, Z., Kumaran, D., Tirumala, D., Soyer, H., Leibo, J. Z., Hassabis, D., and Botvinick, M. Prefrontal cortex as a meta-reinforcement learning system. Nature neuroscience, 21(6):860-868, 2018.
+Wang, R., Hwang, J., Boopathy, A., and Fiete, I. R. Rapid learning without catastrophic forgetting in the morris water maze. In ICML, 2024.
+Whittington, J. C. and Bogacz, R. An approximation of the error backpropagation algorithm in a predictive coding network with local hebbian synaptic plasticity. Neural computation, 29(5):1229-1262, 2017.
+
+Whittington, J. C., Muller, T. H., Mark, S., Chen, G., Barry, C., Burgess, N., and Behrens, T. E. The tolman-eichenbaum machine: unifying space and relational memory through generalization in the hippocampal formation. Cell, 183(5):1249-1263, 2020.
+Whittington, J. C., McCaffary, D., Bakermans, J. J., and Behrens, T. E. How to build a cognitive map. Nature neuroscience, 25(10):1257-1272, 2022.
+Witter, M. P. and Amaral, D. G. Entorhinal cortex of the monkey: V. projections to the dentate gyrus, hippocampus, and subicular complex. Journal of Comparative Neurology, 307(3):437-459, 1991.
+Witter, M. P. and Groenewegen, H. J. Laminar origin and septotemporal distribution of entorhinal and perirhinal projections to the hippocampus in the cat. Journal of Comparative Neurology, 224(3):371-385, 1984.
+Witter, M. P., Doan, T. P., Jacobsen, B., Nilssen, E. S., and Ohara, S. Architecture of the entorhinal cortex: a review of entorhinal anatomy in rodents with some comparative notes. Frontiers in systems neuroscience, 11:46, 2017.
+Yamins, D. L. and DiCarlo, J. J. Using goal-driven deep learning models to understand sensory cortex. Nature neuroscience, 19(3):356-365, 2016.
+Zhuang, C., Yan, S., Nayebi, A., Schrimpf, M., Frank, M. C., DiCarlo, J. J., and Yamins, D. L. Unsupervised neural network models of the ventral visual stream. Proceedings of the National Academy of Sciences, 118(3):e2014196118, 2021.
+
+# Appendix
+
+# A. Experimental Details
+
+# A.1. Model
+
+For models M1-M5, we use three grid modules with periodicities of 7, 8, and 11, resulting in a grid cell layer dimension of $N_{g} = 234$ ( $= 7^{2} + 8^{2} + 11^{2}$ ). We simulate 800 hippocampal cells. Both the MLP and RNN models have a learning rate of 0.0005 and a hidden size of 32. The RNN consists of leaky units with $\alpha = 0.025$ .
+
+The grid coding scheme is hand-designed to test the counterfactual of joint versus disjoint coding, as illustrated in Fig. 1C. In general, velocity inputs update the phases of each grid module via path integration, following the Vector-HaSH implementation from Chandra et al. (2025). Evidence velocity for the grid cell modules is computed as the difference between the number of towers on the right and the number on the left at the current position. This is predicted through an MLP based on the current field of view (sensory inputs). Positional velocity, in contrast, is represented as either 0 (stationary) or +1 (forward movement), without an MLP, as backward movement is not task-relevant (Nieh et al. (2021), behavioral training). While an MLP could be used for positional velocity, we opted for a simplified approach since this modification does not affect the results.
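+A toy version of this update, our deliberate simplification of the Vector-HaSH path-integration step rather than the paper's code, treats each grid module as a pair of cyclic phases, one for position and one for evidence:
+
+```python
+def evidence_velocity(left_towers, right_towers, pos):
+    """Ground-truth evidence increment at the current position:
+    #right - #left towers. In the model this quantity is estimated
+    by an MLP from the field of view rather than read off directly."""
+    return int(right_towers[pos]) - int(left_towers[pos])
+
+def update_phases(phases, periods, pos_vel, ev_vel):
+    """Joint-tuning toy update: each module carries a (position,
+    evidence) phase pair that wraps at the module's period."""
+    return [((p + pos_vel) % lam, (e + ev_vel) % lam)
+            for (p, e), lam in zip(phases, periods)]
+```
+
+For example, `update_phases([(0, 0)] * 3, [7, 8, 11], 1, -1)` advances every module one step forward in position and one step toward left-biased evidence, wrapping modulo each period.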
+
+For the two standalone RNN baselines (M0 and M0+), we scale up the hidden size to $32 + N_{g} + N_{p} + N_{s} = 1076$ to match the total number of neurons used in M1-M5. The input to these RNN baselines consists of sensory information. The $\mathrm{M}0+$ variant additionally incorporates positional velocity (whether the agent has moved) and evidence velocity (predicted by an MLP with the same setup as in M1-M5). We use a learning rate of 0.0001 with gradient clipping to a maximum norm of 1. This adjustment was necessary because the large standalone RNNs failed to train with the 0.0005 learning rate used in M1-M5 due to learning instabilities (e.g., exploding or vanishing gradients). To address this, we conducted a hyperparameter search and selected the settings that produced the best performance.
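+The global-norm gradient clipping used to stabilize these baselines can be sketched as follows (a numpy sketch of the standard technique; in a deep learning framework this corresponds to clipping the gradient norm to 1 before each update):
+
+```python
+import numpy as np
+
+def clip_grad_norm(grads, max_norm=1.0):
+    """Rescale a list of gradient arrays so their global L2 norm is
+    at most max_norm, as used for the large M0/M0+ RNN baselines."""
+    total = np.sqrt(sum(float((g ** 2).sum()) for g in grads))
+    if total > max_norm:
+        grads = [g * (max_norm / total) for g in grads]
+    return grads, total
+```
+
+Clipping bounds the update magnitude without changing the gradient direction, which is why it guards against the exploding-gradient instabilities mentioned above.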
+
+# A.2. Environment
+
+The accumulating tower task consists of an agent navigating a T-maze with towers positioned on both sides (Fig. 1C). The agent must decide which direction to turn at the end of the corridor, aiming to turn toward the side with more towers to receive a reward. The maze is divided into distinct regions: a start region (9% of the total length) with no towers, a cue region (61%) containing towers, and a delay and decision region (the remaining portion) without towers. This structure aligns approximately with the division used in Nieh et al. (2021). Each episode presents a unique configuration of towers, requiring the agent to traverse the corridor step by step before reaching the T-junction and making a decision. The left and right sides of the maze are encoded as vectors, where a value of 1 represents a tower, 0 indicates an empty position, and -1 denotes areas outside the maze. The agent has a limited field of view that allows it to perceive a certain number of positions ahead.
+
+In each episode, the rewarded side (i.e., the side with more towers) is chosen uniformly at random, with the number of towers on that side, $x_{\text{reward}}$ , sampled from $\text{Uniform}(1, K)$ , where $K$ is the maximum allowable number of towers. The non-rewarded side contains strictly fewer towers, with its count $x_{\text{non-reward}}$ drawn from $\text{Uniform}(0, x_{\text{reward}})$ . At each step, the agent can take one of three possible actions: left (0), right (1), or forward (2). Before reaching the T-junction, the agent receives a small reward of 0.01 for moving forward and a penalty of -0.001 for any other action. Once at the end of the maze, it receives a reward of 10 for making the correct turn, no reward for choosing the wrong direction, and a penalty of -1 for attempting to move forward and colliding with the wall. The episode ends when the agent turns or reaches the maximum number of decision attempts, the latter incurring an additional penalty of -5. For this study, we set the maze sequence length to 20, dividing the start, cue, delay, and decision regions into segments of length $\{1, 12, 6, 1\}$ , respectively. The agent has a field of view spanning five positions. Training is conducted using the REINFORCE algorithm (policy gradient) (Sutton et al., 1999) until convergence.
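+The episode sampling and reward scheme above can be sketched as follows. This is our reading of the description, with Uniform taken as a uniform integer draw truncated so the non-rewarded count is strictly smaller; it is not the paper's environment code.
+
+```python
+import random
+
+def sample_towers(K, rng=random):
+    """Sample one episode's tower counts: rewarded side uniform,
+    rewarded count in 1..K, non-rewarded count strictly smaller."""
+    rewarded = rng.choice(["left", "right"])
+    x_reward = rng.randint(1, K)           # inclusive 1..K
+    x_non = rng.randint(0, x_reward - 1)   # strictly fewer towers
+    return rewarded, x_reward, x_non
+
+def step_reward(at_junction, action, correct_side, last_attempt=False):
+    """Reward scheme. Actions: 0 = left, 1 = right, 2 = forward."""
+    if not at_junction:
+        return 0.01 if action == 2 else -0.001
+    if action == 2:                        # forward into the wall
+        return -1 + (-5 if last_attempt else 0)
+    return 10 if action == correct_side else 0
+```
+
+Under this scheme the corridor shaping rewards are two to three orders of magnitude smaller than the terminal reward, so the final turn dominates the return.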
+
+# A.3. Single step in the task
+
+Each step in the accumulating tower task follows a structured sequence (Fig. 1C). First, the agent perceives sensory information from its field of view. This information is processed by an MLP, which extracts an estimate of evidence velocity. The evidence velocity, along with position velocity, is then used to update the grid cell state through path integration. The updated grid representation is projected onto the hippocampal layer, alongside the non-grid sensory input, forming the agent's internal state representation. The hippocampal code is then passed to the cortical RNN policy, which selects an action: moving left, right, or forward. Once an action is executed, the agent's position is updated accordingly. This new position brings in fresh sensory input, which is implicitly processed by the grid cell subnetwork to update the grid state. The updated grid state, along with the newly perceived sensory information, is then used to refine the hippocampal representation. This iterative process continues, with the agent repeatedly updating its internal representations and selecting actions, until it reaches the T-junction and makes a final turning decision. The agent's behavior is governed by the reward scheme outlined in Appendix A.2.
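+The step sequence above can be condensed into one function. All callables here are hypothetical placeholders for the model components (the names are ours, not the paper's API):
+
+```python
+def agent_step(obs, grid_state, pos, mlp, path_integrate, hpc_encode, policy):
+    """One task step (Appendix A.3): sensory input -> evidence velocity
+    (MLP) -> grid path integration -> hippocampal code (grid + sensory)
+    -> cortical RNN action -> position update."""
+    ev_vel = mlp(obs)                   # estimated evidence velocity
+    pos_vel = 1                         # forward movement; 0 if stationary
+    grid_state = path_integrate(grid_state, pos_vel, ev_vel)
+    hpc = hpc_encode(grid_state, obs)   # joint grid + non-grid sensory code
+    action = policy(hpc)                # 0 = left, 1 = right, 2 = forward
+    new_pos = pos + 1 if action == 2 else pos
+    return action, grid_state, new_pos
+```
+
+Calling this in a loop until the T-junction, and feeding the rewards from Appendix A.2 into the policy gradient, reproduces the training loop in outline.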
+
+# A.4. Further details on the biological grounding of Vector-HaSH
+
+While the relevant biological groundings of Vector-HaSH are inherited from and addressed in Chandra et al. (2025), we include below, for completeness, a summary of how the model enforces sparsity and spatial selectivity in place cells.
+
+Sparsity As described in the main text, the projection matrix $\mathbf{W}_{hg}$ from grid cells to hippocampal (HPC) units is drawn from a standard Gaussian distribution, consistent with classical random projection models (see Methods, Chandra et al. (2025)). Due to the symmetry of the distribution, each entry has zero mean, and hence half the activations are expected to be subthreshold. Moreover, each grid module encodes its position using a one-hot representation, enforcing input sparsity via inductive bias. Nonlinear gating through ReLU is applied to $h$ (Eqns. 2, 3), further ensuring that only a sparse subset of HPC units are active for a given input. Importantly, the number of unique grid states $\left(\prod_{i} \lambda_{i}^{2}\right)$ is much smaller than the total number of possible HPC activation patterns $(2^{N_{h}})$ . As a result, only a very small subset of HPC units are active for any given grid code, which enforces a highly sparse representation in the hippocampus.
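The roughly-half-subthreshold argument can be checked numerically with a toy random projection (all sizes and phases below are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(0)
N_h, periods = 2000, (3, 4, 5)            # toy hippocampal size and 1-D module periods
N_g = sum(periods)
W_hg = rng.standard_normal((N_h, N_g))    # zero-mean Gaussian projection

# One-hot phase per module, concatenated into a grid code.
g = np.zeros(N_g)
for offset, phase in zip((0, 3, 7), (0, 2, 4)):
    g[offset + phase] = 1.0

h = np.maximum(W_hg @ g, 0.0)             # ReLU gating
frac_active = float(np.mean(h > 0))       # symmetry => about half the units active
```

Because each hippocampal pre-activation is a zero-mean Gaussian sum, `frac_active` concentrates near 0.5 before any further thresholding, consistent with the argument above.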
+
+Selectivity To ensure spatial selectivity, each sensory state is associated with a specific grid state. This is implemented by updating the weights $\mathbf{W}_{hs}$ and $\mathbf{W}_{sh}$ during training, such that the sensory input strongly modulates only a small subset of HPC units. Consequently, each sensory state drives a selective hippocampal code via its associated grid representation, thereby grounding place field formation in both sensory and grid input pathways.
+
+# A.5. Continuous-attractor (CAN) update rule in Vector-HaSH
+
+For completeness, here we make explicit the equations that govern the CAN update used in Eqn. 1 of the main text (Burak & Fiete, 2009; Chandra et al., 2025). Let $\mathbf{g}(t)\in \{0,1\}^{N_g}$ denote the concatenated grid-code vector at time $t$ and $v(t)\in \{-1, + 1\}$ the 1-D velocity signal. The CAN step is a velocity-dependent cyclic shift of each grid module, implemented by a block-diagonal shift matrix $\mathbf{M}\big(v(t)\big)$ :
+
+$$
+\vec{g}(t + 1) = \operatorname{CAN}\left[\vec{g}(t),\, v(t)\right] = \mathbf{M}(v(t))\, \vec{g}(t). \tag{6}
+$$
+
+Example with two modules $(\lambda_{1} = 3, \lambda_{2} = 4)$ For illustration, consider two one-dimensional grid modules of periods $\lambda_{1} = 3$ and $\lambda_{2} = 4$ ( $N_{g} = 7$ ). A rightward velocity $v = +1$ is realized by the block-diagonal matrix $\mathbf{U}$ :
+
+$$
+\mathbf{M}(+1) = \mathbf{U} = \begin{bmatrix} \mathbf{S}_{3} & \mathbf{0} \\ \mathbf{0} & \mathbf{S}_{4} \end{bmatrix}, \qquad \text{where} \quad \mathbf{S}_{3} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix}, \quad \mathbf{S}_{4} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \end{bmatrix}.
+$$
+
+A leftward step $(v = -1)$ is obtained with $\mathbf{M}(-1) = \mathbf{U}^{\top}$ .
+
+Formal definition Let $\lambda_{k}$ be the periodicity of module $k$ and $X_{k} = \sum_{l = 1}^{k}\lambda_{l}$ (with $X_0 = 0$ ) the cumulative offset. For indices $i,j$ belonging to the same module ( $X_{k - 1}\leq i,j < X_k$ ) we set
+
+$$
+M_{ij}(v) = \begin{cases} 1, & \text{if } (j - X_{k-1}) \bmod \lambda_{k} = (i - X_{k-1} + v) \bmod \lambda_{k}, \\ 0, & \text{otherwise.} \end{cases} \tag{7}
+$$
+
+All cross-module blocks are zero, making $\mathbf{M}(v)$ block-diagonal.
+
+Extension to 2-D For two-dimensional Vector-HaSH, we apply the same 1-D rule independently to the $x$ - and $y$ -components and construct the full 2-D shift via a Kronecker product of the corresponding 1-D shift matrices.
+
+# B. Mutual information
+
+# B.1. Mutual information analysis
+
+We follow the mutual information analysis in Nieh et al. (2021). Here we reiterate this procedure for completeness. For each neuron, we evaluate the mutual information metric defined in Skaggs et al. (1992),
+
+$$
+I = \int_{x} \lambda(x) \log_{2}{\frac{\lambda(x)}{\lambda}}\, p(x)\, dx,
+$$
+
+in which $I$ is the mutual information rate of the neuron in bits per second, $x$ is the spatial location (or accumulated evidence) of the agent, $\lambda(x)$ is the mean firing rate of the neuron at location (accumulated evidence) $x$ , $p(x)$ is the probability density of the agent occupying location (accumulated evidence) $x$ , and $\lambda = \int_{x} \lambda(x)p(x)dx$ is the overall mean firing rate of the neuron.
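A discrete-bin version of this metric can be written as follows (a sketch; the binning and variable names are our assumptions):

```python
import numpy as np

def skaggs_information(rate_map, occupancy):
    """I = sum_x p(x) * lam(x) * log2(lam(x) / lam_bar), in bits per second,
    where p is the normalized occupancy and lam_bar the occupancy-weighted mean rate."""
    rate_map = np.asarray(rate_map, float)
    p = np.asarray(occupancy, float)
    p = p / p.sum()
    lam_bar = float(np.sum(rate_map * p))
    nz = rate_map > 0                      # zero-rate bins contribute nothing
    return float(np.sum(p[nz] * rate_map[nz] * np.log2(rate_map[nz] / lam_bar)))
```

For example, a cell firing at 4 Hz in one of four equally occupied bins carries $0.25 \cdot 4 \cdot \log_2 4 = 2$ bits/s, while a spatially uniform cell carries 0.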
+
+# B.2. Scatterplots of mutual information
+
+Figure 7. Scatterplots of the hippocampal mutual information in $E \times R_{Y}$ space versus $E \times Y$ space (top row), and scatterplots of mutual information in $R_{E} \times Y$ space versus $E \times Y$ space (bottom row). We show data for all model variants M1 to M5, in panels A to E, respectively. We observe that evidence and position interact to provide meaningful information in M3, M4, and M5 (when grid cells co-tune position and evidence), while M1 and M2 rely on positional information (when grid cells tune evidence only, and there is either no or some sensory information projected into the hippocampus). Here, $R_{Y}$ is a randomized position, generated by randomly sampling from the $Y$ distribution that corresponds to the non-randomized E value of the cell. A similar procedure is performed for generating the $R_{E} \times Y$ variables. More details of the procedure are described in the Mutual Information Analysis section of Nieh et al. (2021).
+
+# C. Hippocampal firing fields within $E \times Y$ space in model variants
+
+Figure 8. Example hippocampal firing fields in $E \times Y$ space in M1 (A), M2 (B), M3 (C), M4 (D), and M5 (E). We observe that the firing fields of M1 and M2 (A, B) form stripe patterns that do not depend on evidence. M2 firing fields occasionally show some gradient, a potential artifact of sensory injection, similar to the firing fields of M4 and M5 (D, E). M3 (C) firing fields exhibit conjoint tuning of position and evidence and have no apparent gradient artifacts.
+
+# D. Hippocampal evidence and place fields in model variants
+
+
+Figure 9. Hippocampal firing fields in evidence (A) and in space (B), for M1 (A1), M2 (A2), and M3 (A3, B). We see M1 and M2 do not have firing fields in evidence (A1, A2), while M3 does (A3). Furthermore, M3 contains choice-specific place fields (B) similar to M4 (Fig 5, A3), implying that joint tuning of position and evidence in grid cells is key to forming a conjoint hippocampal map.
+
+# E. HPC and RNN PC representation in model variants
+
+Figure 10. Low-dimensional representation of hippocampal and RNN activities in PC space, shown for M1 (row A), M2 (row B), and M3 (row C). We show the representations colored according to selected task variables, specifically position, local evidence velocity, and action, for which M4 shows clear separation (in correspondence with Fig 6). Other variables visualized in HPC and RNN activity PC space include accumulated evidence, position changes, left-/right-choice trials, total evidence of the trial, and ground-truth action; we observe no visual separation.
+
+Figure 11. Cumulative variance explained (in percent) of hippocampal (orange) and RNN (blue) activities, as a function of the number of principal components (PCs), and low-dimensional representations of hippocampal activities in PC space, colored by accumulated evidence, shown for M1 (column A), M2 (column B), M3 (column C), M4 (column D), and M5 (column E). The first two PCs in M5 explain the most variance in hippocampal representations $(68\%)$ in comparison to the other model variants. We do not observe any visual separability of accumulated evidence in the space of the first three PCs, as shown in the second row.
+
+# F. Effect of CA3 Recurrence in M2 & M4
+
+In this section, we demonstrate in M2 and M4, as a proof of concept, that the inclusion of CA3 recurrent connectivity in the HPC layer does not affect the general conclusions presented in the main paper. Specifically, the inclusion of CA3 recurrence in M2 or M4 does not induce the experimentally observed place cell phenomena (Nieh et al., 2021), producing results similar to those obtained without recurrence (see Figs 4, 5, 7, 8, and 9).
+
+To model CA3 recurrence, we incorporate additional recurrent connections within the HPC layer, $\mathbf{W}_{hh}$ , updated through Hebbian-like associative learning using $\vec{h}_{\mathrm{mix}}(t)$ and $\vec{h}_{\mathrm{mix}}(t + 1)$ , analogous to the learning updates for $\mathbf{W}_{hs}$ and $\mathbf{W}_{sh}$ (see Eqns. 4 and 5). The activity of mixed hippocampal cells is then described by:
+
+$$
+\vec{h}_{\text{mix}}(t + 1) = \operatorname{ReLU}\left[\mathbf{W}_{hs} \cdot \vec{s}(t) + \mathbf{W}_{hg} \cdot \vec{g}(t + 1) + \mathbf{W}_{hh} \cdot \vec{h}^{\prime}_{\text{mix}}(t)\right]. \tag{8}
+$$
+
+The rest of the setup remains consistent with Appendix A.1. The same changes apply to models with $\vec{p}_{\mathrm{nonmix}}(t)$ and $\vec{h}_{\mathrm{nonmix}}(t + 1)$ .
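A toy sketch of this recurrent update, with an outer-product Hebbian-style increment for $\mathbf{W}_{hh}$ (all dimensions and the learning rate are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)
N_h, N_s, N_g = 40, 5, 7                       # toy sizes
W_hs = 0.1 * rng.standard_normal((N_h, N_s))
W_hg = rng.standard_normal((N_h, N_g))
W_hh = np.zeros((N_h, N_h))                    # CA3 recurrent weights, learned online

def hpc_step(s, g_next, h_prev, W_hh):
    """Hippocampal update with the extra recurrent drive W_hh @ h_prev."""
    return np.maximum(W_hs @ s + W_hg @ g_next + W_hh @ h_prev, 0.0)

def hebbian_increment(h_prev, h_next, lr=0.01):
    """Associative outer-product update pairing h(t) with h(t+1)."""
    return lr * np.outer(h_next, h_prev)
```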
+
+As shown in Fig. 12, the inclusion of the recurrent integration of positional information from MEC and sensory information from non-grid EC does not result in the emergence of conjunctive place cells in M2. Similarly, Fig. 13 shows that the lack of conjunctive place cells in M4 persists when the grid cell modules encode position and evidence disjointly.
+
+These findings confirm that the recurrent integration in HPC alone does not induce conjunctive coding, underscoring the critical role of joint integration of position and evidence in grid cells for producing co-tuned place cells.
+
+
+
+
+Figure 12. Analysis of hippocampal code in M2 with CA3 recurrence. (A) Example hippocampal firing fields in $E \times Y$ space. (B) Scatterplots of the hippocampal mutual information in M2 with CA3 recurrence when only the position is randomized (B1), and when only the evidence is randomized. The model shows higher mutual information in position only. See the caption of Fig. 7 (Appendix B.2) for implementation details. (C) Hippocampal firing fields in evidence in M2 with CA3 recurrence.
+
+
+Figure 13. Analysis of hippocampal code in M4 with CA3 recurrence. (A) Example hippocampal firing fields in $E \times Y$ space. (B) Scatterplots of the hippocampal mutual information in M4 with CA3 recurrence when only the position is randomized (B1), and when only the evidence is randomized. The model shows higher mutual information in both position and evidence, consistent with the case when the recurrence is not considered (Fig. 7, column D). See the caption of Fig. 7 (Appendix B.2) for implementation details. (C) Hippocampal firing fields in evidence in M4 with CA3 recurrence. We still observe evidence fields. (D) Hippocampal firing fields in space. We do not observe the choice-specific place fields shown in Nieh et al. (2021) after considering the CA3 recurrence in M4.
+
+# G. Learning Performance Under Hyperparameter Tuning
+
+To evaluate the impact of hyperparameter choices on model learning, we conducted a sweep over learning rates (LR) $[5\mathrm{e} - 5, 1\mathrm{e} - 4, 5\mathrm{e} - 4, 1\mathrm{e} - 3]$ for all models, except where a specific learning rate had already been selected in the main text, in which case we retained it without sweeping for consistency. The performance curves with hyperparameter tuning results are visualized in Fig. 14.
+
+Table 3 reports the mean final success rate and average exploration time (measured over the last 100 episodes across three independent trials), formatted as success [%] ± standard deviation / steps per episode ± standard deviation. The best-performing metrics for each model are highlighted in bold.
+
+Table 3. Mean success rate ± standard deviation and mean exploration time ± standard deviation across 3 trials over the last 100 out of 17400 episodes. Only the best success rate and lowest exploration time per model are in bold. The maximum number of steps allowed per episode is 200 steps. Our final model, M5, is fairly robust to learning rates as shown.
+
+| LR | Metric | M0 | M0+ | M1 | M2 | M3 | M4 | M5 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| 5e-5 | Success (%) | 59.33±11.73 | 72.00±4.97 | 49.33±2.87 | 50.33±4.19 | 95.67±2.05 | 93.67±2.05 | 95.00±1.63 |
+| 5e-5 | Steps/ep. | 31.23±2.46 | 27.94±3.20 | 24.77±0.66 | **21.33±0.80** | 28.91±0.68 | 26.09±1.67 | 25.71±0.70 |
+| 1e-4 | Success (%) | **68.67±15.46** | **81.67±4.03** | 47.33±3.40 | 49.00±3.56 | **97.00±2.45** | **99.33±0.94** | **99.00±0.82** |
+| 1e-4 | Steps/ep. | **25.22±1.76** | **22.95±1.18** | 23.97±0.81 | 23.01±0.66 | 29.83±1.45 | **25.52±0.87** | 27.65±1.45 |
+| 5e-4 | Success (%) | 8.00±11.31 | 0.00±0.00 | 51.33±4.64 | 49.00±2.45 | 96.33±1.25 | 76.67±16.54 | 97.00±4.24 |
+| 5e-4 | Steps/ep. | 193.48±9.22 | 200.00±0.00 | **23.31±0.38** | 29.14±8.35 | **23.34±0.77** | 28.02±6.18 | **19.70±0.22** |
+| 1e-3 | Success (%) | 0.00±0.00 | 12.33±17.44 | **51.67±4.19** | **51.67±4.19** | 67.33±17.46 | 83.33±19.48 | 90.00±8.29 |
+| 1e-3 | Steps/ep. | 200.00±0.00 | 193.36±9.39 | 25.81±1.35 | 24.86±2.13 | 33.71±3.49 | 28.79±4.02 | 30.74±2.85 |
+
+Across models, we observe that the hyperparameters chosen in the main text (1e-4 for M0 and M0+; 5e-4 for M1-M5) generally yielded near-optimal performance. For M4, an LR of 1e-4 mitigated the instability noted in the main paper's Fig. 3. This updated choice is applied to generate Fig. 14.
+
+Conclusion Although learning rate tuning helped stabilize M4, none of these modifications altered the qualitative conclusions or key claims presented in the main paper. Section 5.1's observations regarding learning efficiency and rapid exploration in M5 remain unchanged.
+
+
+Figure 14. Updated version of Fig. 3 with the optimized learning rate for each model, i.e., changing M4's learning rate to $1\mathrm{e} - 4$ . The rest remains consistent with Fig. 3, and this does not alter the claims made in the paper. The moving-average setup is the same as in Fig. 3, including the window sizes used (5,000 for A and 10,000 for B) and the exclusion of the first 100 episodes.
+
+
+
+# H. Results for Controlling Trainable Parameter Count in RNN Baselines
+
+Here we provide additional ablations of the RNN baselines (M0 and $\mathrm{M}0+$ ) to address whether the later-onset performance of M0 and $\mathrm{M}0+$ might simply reflect a larger number of weights to optimize. We conclude that the performance of RNN baselines with the same number of parameters as M5, with or without additional velocity input, does not alter the conclusions drawn in the main paper.
+
+Parameter formula For a vanilla RNN with input dimension $I$ , hidden dimension $H$ , and output dimension $O$ , the total number of back-prop-trainable parameters, including biases, is
+
+$$
+\#\text{params} = \underbrace{(IH + HH + H)}_{\text{input \& recurrent weights} \, + \, \text{hidden bias}} + \underbrace{(HO + O)}_{\text{output weights} \, + \, \text{output bias}} = H^{2} + H(I + O + 1) + O. \tag{9}
+$$
+
+Matching M0 or M0+ to M5 Model M5 (with an RNN of input size $I = 800$ , hidden size $H_{M5} = 32$ , output size $O = 3$ ) contains
+
+$$
+\#\text{params}_{M5} \approx 26755.
+$$
+
+Solving Eq. (9) for $H$ with $I_{\mathrm{mini - M0}} = 10$ and $I_{\mathrm{mini - M0 + }} = 12$ , as well as the same $O = 3$ yields
+
+$$
+H_{\text{mini-M0}} \approx 158, \qquad H_{\text{mini-M0+}} \approx 157,
+$$
+
+so that the mini versions of M0 and $\mathrm{M}0+$ have the same order of trainable parameters as M5.
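Both the M5 count and the matched hidden sizes follow mechanically from Eq. (9); a quick check (helper names are ours):

```python
def rnn_param_count(I, H, O):
    """Eq. (9): trainable parameters of a vanilla RNN, biases included."""
    return H * H + H * (I + O + 1) + O

def matching_hidden_size(I, O, target):
    """Smallest hidden size whose parameter count reaches `target`."""
    H = 1
    while rnn_param_count(I, H, O) < target:
        H += 1
    return H
```

With $I = 800$, $H = 32$, $O = 3$ this gives exactly 26755 parameters for M5's RNN, and hidden sizes in the 156-158 range for the mini baselines, matching the values above up to rounding.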
+
+Control experiment We trained these mini-models for three independent trials (learning rate $10^{-5}$ ). Figure 15 shows their learning curves alongside the original $\mathrm{M}0 / \mathrm{M}0+$ and M5. The qualitative conclusion of the main paper is unchanged: even when parameter counts are matched, M0 and $\mathrm{M}0+$ still lag behind M5 in both learning speed and final performance.
+
+
+Figure 15. Parameter count alone does not explain performance differences. Learning curves for parameter-matched mini-M0 and mini-M0+ (red and orange), compared with the original M0 (blue), M0+ (black), and M5 (brown). (A) Cumulative success rate during training. (B) Steps spent per episode (exploration time). The plotting setup and color conventions match Fig. 3.
+
+
+
+Additional learning-rate sweeps We further trained the mini-models under alternative learning rates $\{1\mathrm{e} - 3,5\mathrm{e} - 4,5\mathrm{e} - 5\}$ . The supplementary results (see Fig. 16) likewise show no qualitative change, reinforcing that parameter count alone does not explain the performance gap.
+
+
+Figure 16. Mini-models consistently underperform across learning rates. Additional learning curves for parameter-matched mini-M0 and mini-M0+ trained with varying learning rates. (A) Cumulative success rate and (B) exploration time for mini-M0 and mini-M0+ (red and orange), which have approximately the same number of trainable parameters as M5 (brown). Original M0 and M0+ (blue and black) are shown for comparison. The plotting format and color scheme follow Fig. 3. Each row corresponds to a different learning rate used for the mini-models, indicated above the legend: 1e-3 (top), 5e-4 (middle), and 5e-5 (bottom).
+
+
+
+# I. Effect of smoothing on HPC tuning visualization
+
+Here we provide additional clarification and visual evidence regarding the appearance of conjunctive tuning in hippocampal neurons, particularly in models M4 and M5.
+
+It was noted that the tuning curves shown in Fig. 4 appear weaker and more diffuse than those observed in Nieh et al. (2021). However, here we show that this difference can largely be attributed to differences in preprocessing and visualization—particularly the use of smoothing.
+
+Smoothing improves the appearance of conjunctive tuning Smoothing neural data is standard empirical practice, used in both Nieh et al. (2021) and Chandra et al. (2025). When we apply smoothing, the tuning curves become substantially more localized and stereotyped. Specifically, we follow the two-stage filtering process described in Nieh et al. (2021):
+
+1. Apply a 1D Gaussian filter with standard deviation $\sigma_{1}$.
+2. Threshold the result by zeroing all values below a fixed multiple of the standard deviation across time.
+3. Apply a second 1D Gaussian filter with standard deviation $\sigma_{2}$.
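The three steps above can be sketched as follows (the truncation radius, edge padding, and threshold multiple are our assumptions, not necessarily those of Nieh et al. (2021)):

```python
import numpy as np

def gaussian_filter1d(x, sigma):
    """Minimal 1-D Gaussian filter, truncated at 4*sigma, edge-padded."""
    r = max(1, int(round(4 * sigma)))
    k = np.exp(-0.5 * (np.arange(-r, r + 1) / sigma) ** 2)
    k /= k.sum()
    return np.convolve(np.pad(x, r, mode="edge"), k, mode="valid")

def two_stage_smooth(trace, sigma1=1.0, sigma2=2.0, n_std=0.5):
    stage1 = gaussian_filter1d(np.asarray(trace, float), sigma1)   # step 1
    stage1[stage1 < n_std * stage1.std()] = 0.0                    # step 2: threshold
    return gaussian_filter1d(stage1, sigma2)                       # step 3
```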
+
+We show examples of tuning curves after smoothing in selected HPC units from models M4 and M5 in Fig. 17. As anticipated, smoothing reveals structure that is otherwise obscured, making the tuning appear more consistent with findings reported in experimental work, such as Nieh et al. (2021).
+
+
+
+
+Figure 17. Smoothing enhances visualization of conjunctive tuning. Tuning curves from selected hippocampal neurons in M5 (top row) and M4 (bottom row) after applying a two-stage smoothing and thresholding procedure. Each row shows two example neurons (A, B in M5; C, D in M4), with each neuron visualized under $\sigma_{1} = 1$ , then two different levels of secondary smoothing: $\sigma_{2} = 1$ (left two columns) and $\sigma_{2} = 2$ (right two columns). Increased $\sigma_{2}$ leads to more diffuse but still conjunctive tuning. Smoothing reveals structure that is less apparent in raw activity and produces hippocampal fields more consistent with experimental observations.
+
+
+
+
+
+Relation to overall findings While smoothing improves the interpretability of individual neuron tuning curves, our broader conclusions regarding conjunctive representations are supported by quantitative measures such as mutual information (Appendix B.2) and the presence of both place fields and evidence fields in HPC (Figs 5, 9). Thus, the visual appearance of raw tuning curves does not reflect a fundamental limitation of the model, but rather a visualization artifact that can be addressed through appropriate preprocessing.
\ No newline at end of file
diff --git a/amultiregionbrainmodeltoelucidatetheroleofhippocampusinspatiallyembeddeddecisionmaking/images.zip b/amultiregionbrainmodeltoelucidatetheroleofhippocampusinspatiallyembeddeddecisionmaking/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..cc96db2733c9650fe364f92cd55011f4585e4894
--- /dev/null
+++ b/amultiregionbrainmodeltoelucidatetheroleofhippocampusinspatiallyembeddeddecisionmaking/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:908663d142ea501637e477b202562c3ccade15320258a89ace2d3beb16efee03
+size 1863626
diff --git a/amultiregionbrainmodeltoelucidatetheroleofhippocampusinspatiallyembeddeddecisionmaking/layout.json b/amultiregionbrainmodeltoelucidatetheroleofhippocampusinspatiallyembeddeddecisionmaking/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..d22049da46dbc3712ca4c43b9a47ea18b1107821
--- /dev/null
+++ b/amultiregionbrainmodeltoelucidatetheroleofhippocampusinspatiallyembeddeddecisionmaking/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:484f633cd1595c6ae57385d7d2e5fc87932dfd4daed9a0af00829714d395ebe4
+size 840762
diff --git a/anearlinearquerylowerboundforsubmodularmaximization/35ab4aba-c63c-47c4-bf31-729f73a67229_content_list.json b/anearlinearquerylowerboundforsubmodularmaximization/35ab4aba-c63c-47c4-bf31-729f73a67229_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..525da1772b196a2343a6558f9f24f884b369a4fc
--- /dev/null
+++ b/anearlinearquerylowerboundforsubmodularmaximization/35ab4aba-c63c-47c4-bf31-729f73a67229_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b2e0b7a0a6192da704d4be1adb7b2fbd7ec35c47dbc41c8f48b1106884111211
+size 157131
diff --git a/anearlinearquerylowerboundforsubmodularmaximization/35ab4aba-c63c-47c4-bf31-729f73a67229_model.json b/anearlinearquerylowerboundforsubmodularmaximization/35ab4aba-c63c-47c4-bf31-729f73a67229_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..bae2b0a6ee1a42e399b769453b1f2bf8056d9b64
--- /dev/null
+++ b/anearlinearquerylowerboundforsubmodularmaximization/35ab4aba-c63c-47c4-bf31-729f73a67229_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:43d158abfea47f6ccbae06c1e3fd6f5c43e8b1d97d0cfca9e4e87b8386550af8
+size 195023
diff --git a/anearlinearquerylowerboundforsubmodularmaximization/35ab4aba-c63c-47c4-bf31-729f73a67229_origin.pdf b/anearlinearquerylowerboundforsubmodularmaximization/35ab4aba-c63c-47c4-bf31-729f73a67229_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..9aa3dc3b727afca6c574d84af0280bd617424096
--- /dev/null
+++ b/anearlinearquerylowerboundforsubmodularmaximization/35ab4aba-c63c-47c4-bf31-729f73a67229_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d36760c2dd9182f8b96094ea148a8cfdc29805b9a2cb50e72ad4dbbc2f7b93db
+size 422116
diff --git a/anearlinearquerylowerboundforsubmodularmaximization/full.md b/anearlinearquerylowerboundforsubmodularmaximization/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..448faa04a54ea71240dbb43aa58d4c0517cbeb49
--- /dev/null
+++ b/anearlinearquerylowerboundforsubmodularmaximization/full.md
@@ -0,0 +1,922 @@
+# A Near Linear Query Lower Bound for Submodular Maximization
+
+Binghui Peng1 Aviad Rubinstein2
+
+# Abstract
+
+We revisit the problem of selecting $k$ -out-of- $n$ elements with the goal of optimizing an objective function, and ask whether it can be solved approximately with sublinear query complexity. For objective functions that are monotone submodular, [Li, Feldman, Kazemi, Karbasi, NeurIPS'22; Kuhnle, AISTATS'21] gave an $\Omega(n / k)$ query lower bound for approximating to within any constant factor. We strengthen their lower bound to a nearly tight $\widetilde{\Omega}(n)$ . This lower bound holds even for estimating the value of the optimal subset. When the objective function is additive, we prove that finding an approximately optimal subset still requires near-linear query complexity, but we can estimate the value of the optimal subset in $\widetilde{O}(n / k)$ queries, and that this is tight up to polylog factors.
+
+# 1. Introduction
+
+We consider the problem of selecting the "best" $k$ -out-of- $n$ elements, e.g. selecting $k$ locations to place sensors, selecting $k$ features to include in data analysis, or selecting $k$ samples for training a model. We are particularly interested in the case where:
+
+Expensive query access We are not given an explicit function to compute what makes one subset "better" than another: rather we have expensive query access to an oracle that evaluates different candidate subsets (e.g. estimating the quality of prediction from a subset of features or samples by training a smaller model on them).
+
+Sublinear query complexity Motivated by the expensive query constraint and large problem sizes in machine learning applications, we ask whether it is possible to obtain approximately optimal subsets with query complexity that
+
+$^{1}$ Department of Computer Science, Stanford University, United States $^{2}$ Department of Computer Science, Stanford University, United States. Correspondence to: Binghui Peng, Aviad Rubinstein.
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+scales sublinearly with $n$ .
+
+In full generality, it is clearly hopeless to find an approximately optimal $k$ -subset without querying all $\binom{n}{k}$ subsets (let alone with sublinear query complexity), so, as is common in the literature, we restrict our attention to monotone submodular objective functions, i.e. we assume that the marginal contribution from each additional element is diminishing, yet always non-negative (see Section 2 for a formal definition). Since our result for worst-case functions in this class is quite negative, we also consider the most important special case: (monotone) additive objective functions, i.e. $f(S) = \sum_{i\in S}w_{i}$ for unknown $w_{i}\geq 0$ .
+
+Monotone submodular maximization has important applications in machine learning (see e.g. the recent survey of Bilmes (2022) and references therein), and the problem sizes in those machine learning applications increase rapidly. Over the past decade, this has inspired an extensive effort to obtain sublinear time algorithms for approximate submodular maximization by parallelization (Balkanski et al., 2016; Balkanski & Singer, 2018; Balkanski et al., 2018; 2019; Chekuri & Quanrud, 2019a;b; Chen et al., 2019; Ene & Nguyen, 2019; Fahrbach et al., 2019; Kazemi et al., 2019a; Breuer et al., 2020; Ene & Nguyen, 2020; Balkanski et al., 2022a;b). In contrast, in this work we look for algorithms whose total work (specifically, total query complexity) is sublinear. Indeed, this is motivated in part by the increasing monetary and environmental cost of the total work of state-of-the-art machine learning algorithms that already achieve significant parallelization. Beyond these important connections to applications in machine learning, submodular maximization itself is a fundamental problem in optimization.
+
+The question of sublinear-query algorithms for submodular maximization was considered by Li et al. (2022); Kuhnle (2021), who showed that any constant factor approximation algorithm requires query complexity $\Omega(n / k)$ ; Li et al. (2022) additionally proved a lower bound of $\Omega\left(\frac{n}{\log(n)}\right)$ for the special case of $k = \Theta(n)$ . Hence, for the special cases of $k = \Theta(1)$ and $k = \Theta(n)$ , previous work rules out truly sublinear query complexity algorithms.
+
+Our first result strengthens this impossibility result by ruling out approximate submodular maximization in sublinear query complexity for any $k \ll n$ , even if we are just interested in estimating the value of the optimal subset:
+
+Theorem (Submodular: informal version of Theorem 3.9). With a monotone submodular objective function and for any $k = o(n)$ , no $\widetilde{o}(n)$ -query complexity algorithm can approximate the value of the optimal $k$ -subset to within any constant factor.
+
+We note that for typical applications of submodular maximization (e.g. dataset selection – selecting a small dataset to train a model; influence maximization – selecting a small subset of influential users), the subset size $k$ is neither linear in the entire dataset, nor a small constant that can be ignored. Comparing with previous work (Li et al., 2022; Kuhnle, 2021), we obtain an improved (and optimal) lower bound for the critical regime of $\mathrm{polylog}(n) \leq k \leq \frac{n}{\mathrm{polylog}(n)}$ . Our lower bound rules out sublinear query algorithms over all possible regimes of $k$ and fully resolves the query complexity of monotone submodular maximization (up to polylog factors), a fundamental question in optimization theory.
+
+Given this sweeping negative result, we turn our attention to characterizing the complexity of selecting an approximately optimal $k$ -subset with an additive objective function. Here, we have a mix of negative/positive results, where the key difference depends on whether we just want to estimate the value of the optimal subset, or actually find it:
+
+Theorem (Additive: Informal version of Theorems 3.7, 4.1, 4.7).
+
+With a monotone additive objective function and for any $k = o(n)$ ,
+
+- No $\widetilde{o}(n)$ -query complexity algorithm can find an approximately optimal $k$ -subset, for any constant approximation factor (Theorem 3.7).
+- For any constant $\epsilon > 0$ , there exists an $(1 \pm \epsilon)$ -approximation algorithm for estimating the value of the optimal $k$ -subset using only $\widetilde{O}(n / k)$ queries (Theorem 4.1).
+- Furthermore, this query complexity is nearly tight for any algorithm obtaining any constant factor approximation (Theorem 4.7).
+
+Our algorithm uses sublinear queries and estimates the value of the optimal $k$ -subset; this is a standard goal in the field of sublinear algorithms, see (Chen et al., 2022; Charikar et al., 2023; Behnezhad, 2023; Behnezhad et al., 2023; Bhattacharya et al., 2024) and references therein. To motivate this, consider a scenario where one needs to select a subset from a large dataset under a budget constraint. Our algorithm can first be used to assess whether the dataset is sufficiently valuable – that is, whether it contains elements with large values. If the dataset meets this criterion, one
+
+can proceed with further analysis or selection; otherwise, unnecessary efforts can be avoided.
+
+Query complexity vs. Runtime The focus of this paper is on query complexity, which is standard in the literature. Our algorithm uses sublinear queries (i.e., $\widetilde{O}(n / k)$ ) and linear computation time (i.e., $\widetilde{O}(n)$ ), both of which are optimal$^{1}$. The motivation for studying query complexity arises from practical concerns in training large models using only a subset of the data. Even under the simplest assumption – an additive function – we lack explicit input specifying each data point's marginal value. Instead, we estimate the value of a subset by, for example, training a smaller model from scratch. Related applications include data valuation (Ilyas et al., 2022), where a query to subset $S$ corresponds to training a neural network over set $[n] \setminus S$ ; and influence maximization (Kempe et al., 2003), where a query to subset $S$ corresponds to a simulation or a social experiment using $S$ as the seed set. In all these settings, queries are significantly more expensive than individual computational steps: while linear computation time (e.g., loading or processing the full dataset) is feasible, making a linear number of queries (e.g., training $n$ different models) is impractical.
+
+Additional related work The query complexity of monotone submodular maximization has been studied extensively in the literature. The greedy algorithm (Fisher et al., 1978) finds a $(1 - 1 / e)$ -approximate solution using $O(nk)$ queries. The query complexity can be improved: the stochastic greedy algorithm (Mirzasoleiman et al., 2015) makes $O(n\log(1/\epsilon))$ queries and returns a $(1 - 1 / e - \epsilon)$ -approximate solution, and (Li et al., 2022) gives a deterministic algorithm using $O(n/\epsilon)$ queries. Despite extensive research, the only known query lower bound is the $\Omega(n/k)$ bound from the recent work of (Li et al., 2022; Kuhnle, 2021).
+
+In addition to previously mentioned work on parallel algorithms for submodular maximization, our work is also related to algorithms for submodular maximization in other sublinear models such as dynamic (where the update time needs to be sublinear) (Chen & Peng, 2022; Peng, 2021; Peng & Rubinstein, 2023; Lattanzi et al., 2020; Monemizadeh, 2020; Dütting et al., 2023; Agarwal & Balkanski, 2023; Banihashem et al., 2024; 2023) and streaming (where the space needs to be sublinear) (Badanidiyuru et al., 2014; Feldman et al., 2023; Chakrabarti & Kale, 2015; Chekuri et al., 2015; Feldman et al., 2021; Huang et al., 2021; Liu et al., 2021; Shadravan, 2020; Norouzi-Fard et al., 2018; Alaluf & Feldman, 2019; Agrawal et al., 2018; Kazemi et al., 2019b; Huang et al., 2022; Feldman et al., 2018; Alaluf et al., 2022; McGregor & Vu, 2019; Indyk & Vakilian, 2019).
+
+# 1.1. Technique Overview
+
+The linear lower bound for submodular maximization The key idea driving our approach for proving lower bounds is to relate the query complexity of submodular maximization to the communication complexity of the distributed set detection problem.
+
+We first describe the distributed set detection task. Distributed set detection is a multi-party communication problem where each party observes the outcome of $n$ coins. Among these $n$ coins, $k$ coins are fair and have mean $1/2$ , while the remaining $n - k$ coins are biased and have a small mean. The goal of the parties is to collectively identify a small fraction (e.g., $0.1k$ ) of the fair coins. We prove that distributed set detection requires $\widetilde{\Omega}(n)$ communication cost, using the distributed strong data processing inequality (SDPI) and a direct-sum argument.
+
+Our key observation is that the linear communication lower bound for distributed set detection yields a linear query lower bound for submodular maximization. To this end, consider a linear function whose weight on the $i$ -th element equals the sum of the outcomes of the $i$ -th coin across parties. Given that the number of parties is roughly $\Theta (\log (n))$ , the optimal $k$ -subset is exactly the set of $k$ fair coins. The crucial observation is that one can simulate the query algorithm in the communication model with only $\mathrm{polylog}(n)$ overhead: if the query algorithm asks for the value of $f(S)$ for some set $S$ , then all parties locally compute the sum of the coins in $S$ and broadcast their results, which takes at most $O((\log (n))^2)$ communication bits ( $O(\log (n))$ parties and $O(\log (n))$ bits per party). This proves that finding the optimal $k$ -subset requires $\widetilde{\Omega} (n)$ queries, due to the $\widetilde{\Omega} (n)$ communication lower bound for distributed set detection.
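+
+The simulation step above can be sketched in a few lines. This is an illustrative toy of our own (the party inputs, the query set, and the bit accounting are assumptions, not code from the paper): each value query $f(S)$ on the linear function is answered by every party broadcasting one $O(\log n)$-bit local partial sum.
+
+```python
+import math, random
+
+def simulate_query(X, S):
+    """All m parties answer one value query f(S) on the linear function
+    f(S) = sum_{i in S} sum_t X_{t,i}; returns (f(S), bits broadcast)."""
+    partial = [sum(row[i] for i in S) for row in X]   # local computation, free
+    f_S = sum(partial)                                # read off the blackboard
+    n = len(X[0])
+    bits = len(X) * math.ceil(math.log2(n + 1))       # one O(log n)-bit message per party
+    return f_S, bits
+
+rng = random.Random(1)
+m, n = 8, 64
+X = [[rng.randint(0, 1) for _ in range(n)] for _ in range(m)]
+value, cost = simulate_query(X, {0, 5, 17})
+assert value == sum(X[t][i] for t in range(m) for i in {0, 5, 17})
+```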
+
+When the goal is to estimate the value of the optimal $k$ -subset, the above hard instance fails because the query algorithm could easily find one fair coin using $\widetilde{O}(n / k)$ queries. To this end, we construct a monotone submodular function by applying two levels of truncation to the above linear function. Roughly speaking, the optimal $k$ -subset still corresponds to the $k$ fair coins in distributed set detection, but the two-level truncation over the linear function (see Eq. (2)(3)) masks off useful information on large sets and requires the detection of a non-trivial fraction of the fair coins (see Section 3.2 for detailed description).
+
+Remark 1.1 (Comparison with previous work (Li et al., 2022; Kuhnle, 2021)). The previous works (Li et al., 2022; Kuhnle, 2021) prove a lower bound of $\widetilde{\Omega}(n/k)$ , and (Li et al., 2022) additionally proves a lower bound of $\widetilde{\Omega}(n)$ when $k = \Theta(n)$ . Our approach yields a significantly stronger lower bound: e.g., when $k = \sqrt{n}$ , our lower bound (i.e., $\widetilde{\Omega}(n)$ ) is quadratically better than that of (Li et al., 2022; Kuhnle, 2021) (i.e., $\widetilde{\Omega} (\sqrt{n})$ ). From a technical point of view, all previous lower bounds are proved using counting arguments. Our technique is completely different: it is based on novel ideas, including (1) the query-to-communication reduction, (2) a communication lower bound using information complexity and the distributed data-processing inequality, and (3) a new construction of the hard submodular function with two-level truncation.
+
+Sublinear algorithm for linear function Our algorithm is a multi-scale combination of two base subroutines. The first subroutine randomly selects a set of size $n / m$ and estimates the value of each individual element. This subroutine yields a good estimate of the top $m$ -th quantile of the ground set, but fails to give an accurate estimate of the top $o(m)$ elements (to see this, imagine there are $o(m)$ elements with very large value; then one can observe their value only after sampling $\gg n / m$ elements). The second subroutine partitions the ground set into $m$ subsets and estimates the value of each subset. Intuitively, this subroutine alleviates the weakness of the first one – if there are $o(m)$ elements with very large value, then most of them fall into different subsets, and we get a good sense of their value after querying the value of $m$ subsets. Nevertheless, the second subroutine can fail if the top $o(m)$ elements are not significantly larger than the rest (say they have value 2 and the rest have value 0 or 1; then querying the value of $m$ subsets gives negligible hints about the value of the top $o(m)$ elements).
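+
+The two base subroutines can be sketched as follows. This is a hedged toy of our own (the weight vector, bucket count, and helper names are illustrative assumptions, not the paper's algorithm); note how a few huge elements are invisible to singleton sampling but dominate their bucket sums.
+
+```python
+import random
+
+def sample_individuals(w, m, rng):
+    """Subroutine 1: query ~n/m random singletons; good for the top m-th
+    quantile, but blind to o(m) outliers."""
+    n = len(w)
+    idx = rng.sample(range(n), max(1, n // m))
+    return sorted((w[i] for i in idx), reverse=True)
+
+def bucket_values(w, m, rng):
+    """Subroutine 2: randomly partition [n] into m buckets and query each
+    bucket's total; outliers tend to land in distinct buckets."""
+    n = len(w)
+    perm = list(range(n))
+    rng.shuffle(perm)
+    buckets = [perm[j::m] for j in range(m)]
+    return sorted((sum(w[i] for i in b) for b in buckets), reverse=True)
+
+rng = random.Random(0)
+w = [1000.0] * 3 + [1.0] * 997       # 3 huge elements hidden among 997 small ones
+m = 100
+solo = sample_individuals(w, m, rng)  # 10 singleton queries likely miss them
+sums = bucket_values(w, m, rng)       # but the top bucket sums expose them
+assert sums[0] >= 1000                # a bucket holding an outlier dominates
+```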
+
+Intuitively, both subroutines have their own weaknesses, but they are somewhat complementary to each other. We wish to combine them carefully to get an optimal sublinear algorithm. The real situation gets complicated due to the possible intricate choice of the top $k$ elements, so we only sketch our final solution here. Indeed, we compose the two base subroutines at multiple scales. Let $k_{r} = (1 + \epsilon)^{r}$ ; at each level $r$ , the algorithm randomly partitions the ground set into $nk_{r} / k$ subsets and estimates the $k_{r}$ -th quantile of these subsets (hence each level takes $\widetilde{O}(n / k)$ queries). Roughly speaking, we expect the top $k_{r}$ -th subset to be as valuable as the top $k_{r}$ -th element plus an average bucket. Our final estimate is a weighted average over the outputs at each scale. See Section 4 for details.
+
+Organization of the paper We describe notation and models in Section 2. The lower bound for submodular maximization is presented in Section 3, and the algorithm for additive functions is in Section 4. Due to space constraints, we defer detailed proofs to the appendix.
+
+# 2. Preliminary
+
+Notation We write $[n] = \{1,2,\dots ,n\}$ . For random variables $X,Y$ , we use $I(X;Y)$ to denote the mutual information of $X,Y$ , $h(X;Y)$ to denote the Hellinger distance, and $\mathrm{TV}(X;Y)$ to denote the total variation distance. For any value $p\in [0,1]$ , let $B_{p}$ denote the Bernoulli distribution with mean $p$ .
+
+Submodular maximization Let $f: \{0,1\}^n \to \mathbb{R}^+$ be a nonnegative set function. For any sets $A, B \subseteq [n]$ , let $f_A(B) := f(A \cup B) - f(A)$ be the marginal value of a set $B$ w.r.t. a set $A$ . The function is monotone if $f(B) \geq f(A)$ for any $A \subseteq B$ . The function is said to be submodular if $f_A(u) \geq f_B(u)$ holds for all sets $A \subseteq B \subseteq [n]$ and every element $u \in [n] \setminus B$ . In a constrained monotone submodular maximization problem, there is a budget $k \in [n]$ and the goal is to find a subset $S \subseteq [n]$ of size at most $k$ that (approximately) maximizes the function value, i.e., $\max_{S \subseteq [n], |S| \leq k} f(S)$ . The problem is studied in the query oracle model, where each time the algorithm submits a query $V \subseteq [n]$ and the oracle returns the function value $f(V)$ .
+
+Let $S^{*} \coloneqq \operatorname{argmax}_{S \subseteq [n], |S| \leq k} f(S)$ be the optimum solution set (breaking ties arbitrarily). In the search problem, the goal is to find an (approximately) optimal solution, and we say the algorithm finds an $\alpha$ -approximate solution if it returns a set $S$ ( $|S| \leq k$ ) such that $f(S) \geq \alpha f(S^{*})$ . In the decision problem, the goal is to determine the optimal value: given a value $\mathrm{OPT}$ , we say an algorithm is $\alpha$ -approximate if it can distinguish between $f(S^{*}) \geq \mathrm{OPT}$ and $f(S^{*}) \leq \alpha \,\mathrm{OPT}$ .
+
+We have the following basic fact about submodular functions.
+
+Fact 2.1. We have the following basic facts about submodular functions: (1) a linear function is submodular; (2) if $f, g$ are submodular, then $f + g$ is submodular; and (3) if $f$ is monotone submodular and $c > 0$ is any constant, then $g(S) = \min \{f(S), c\}$ for $S \subseteq [n]$ is also monotone submodular.
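+
+Fact 2.1 can be sanity-checked by brute force on a tiny ground set. The checker below is our own illustrative helper (the weights and truncation threshold are arbitrary choices), verifying items (1) and (3) directly from the definition of submodularity.
+
+```python
+from itertools import combinations
+
+def is_submodular(f, n):
+    """Brute-force check of f_A(u) >= f_B(u) for all A <= B <= [n], u not in B."""
+    sets = [frozenset(c) for r in range(n + 1) for c in combinations(range(n), r)]
+    for A in sets:
+        for B in sets:
+            if not A <= B:
+                continue
+            for u in range(n):
+                if u in B:
+                    continue
+                if f(A | {u}) - f(A) < f(B | {u}) - f(B) - 1e-9:
+                    return False
+    return True
+
+w = [3.0, 1.0, 2.0]                       # weights of a toy linear function
+lin = lambda S: sum(w[i] for i in S)      # (1) a linear function is submodular
+trunc = lambda S: min(lin(S), 4.0)        # (3) truncation preserves submodularity
+assert is_submodular(lin, 3) and is_submodular(trunc, 3)
+```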
+
+Information theory To establish communication lower bounds, we need a few basic facts of information theory.
+
+Fact 2.2 (Hellinger vs. total variation). For any two distributions $P, Q$ , we have
+
+$$
+h ^ {2} (P, Q) \leq \mathrm {T V} (P, Q) \leq \sqrt {2} \, h (P, Q).
+$$
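+
+For Bernoulli distributions the two distances have closed forms, and Fact 2.2 can be spot-checked numerically. This is an illustrative check of our own, using the standard normalization $h^2(P,Q) = 1 - \sum_x \sqrt{P(x)Q(x)}$ .
+
+```python
+import math
+
+def hellinger(p, q):
+    """Hellinger distance between Bernoulli distributions B_p and B_q."""
+    return math.sqrt(0.5 * ((math.sqrt(p) - math.sqrt(q)) ** 2
+                            + (math.sqrt(1 - p) - math.sqrt(1 - q)) ** 2))
+
+def tv(p, q):
+    """Total variation distance between B_p and B_q."""
+    return abs(p - q)
+
+# Spot-check Fact 2.2: h^2(P,Q) <= TV(P,Q) <= sqrt(2) h(P,Q) on a grid.
+for p in (0.01, 0.1, 0.3, 0.5, 0.9):
+    for q in (0.05, 0.2, 0.5, 0.7, 0.99):
+        h = hellinger(p, q)
+        assert h ** 2 <= tv(p, q) + 1e-12
+        assert tv(p, q) <= math.sqrt(2) * h + 1e-12
+```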
+
+Fact 2.3. Suppose $A, B$ are independent conditioned on $C$ . Then we have
+
+$$
+I (D; A | C) + I (D; B | C) \leq I (D; A, B | C)
+$$
+
+# 3. Lower Bound for Submodular Maximization
+
+We prove that distributed set detection requires $\widetilde{\Omega}(n)$ communication cost in Section 3.1, using the distributed SDPI and a direct-sum argument. In Section 3.2, we give a simple reduction from distributed set detection to submodular maximization, which rules out the possibility of approximately finding the optimal $k$ -subset using sublinear queries (and this holds even for additive functions). In Section 3.3, we present a more involved reduction that proves the hardness of approximating the value of the optimal $k$ -subset.
+
+# 3.1. Communication Lower Bound
+
+A key ingredient of our proof is a communication lower bound on the distributed set detection problem. Distributed set detection is a multi-party communication problem, and we study it in the blackboard model, where every party can write to and read from a common blackboard.
+
+Definition 3.1 (Distributed set detection). Let $n, k, m$ be input parameters and $\mathcal{D}_0, \mathcal{D}_1$ be two Bernoulli distributions with means $\mu_0, \mu_1$ . Here $m$ is the number of parties, who communicate in the blackboard model. Let $\mathcal{I} \subseteq [n]$ be an index set whose size lies in $[k/2, k]$ . The input of the $t$ -th party ( $t \in [m]$ ) is a vector $X_t \in \{0, 1\}^n$ , such that for any $i \in [n]$ , $X_{t,i} \sim \mathcal{D}_0$ if $i \in [n] \backslash \mathcal{I}$ and $X_{t,i} \sim \mathcal{D}_1$ if $i \in \mathcal{I}$ . The goal is to output a set $\widehat{\mathcal{I}} \subseteq [n]$ ( $|\widehat{\mathcal{I}}| = k$ ) that maximizes the intersection $|\mathcal{I} \cap \widehat{\mathcal{I}}|$ .
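+
+A small sampler for Definition 3.1 makes the planted structure concrete (the parameter values below are our own illustrative choices, not from the paper): with many parties, the column sums cleanly separate $\mathcal{I}$ from its complement.
+
+```python
+import random
+
+def sample_instance(n, k, mu0, mu1, m, seed=0):
+    """Draw one distributed set detection input (Definition 3.1): a hidden
+    index set I of size k and an n-bit input vector per party."""
+    rng = random.Random(seed)
+    I = set(rng.sample(range(n), k))      # hidden "fair" coordinates
+    X = [[int(rng.random() < (mu1 if i in I else mu0)) for i in range(n)]
+         for _ in range(m)]               # X[t][i] ~ D_1 if i in I else D_0
+    return I, X
+
+I, X = sample_instance(n=1000, k=30, mu0=0.05, mu1=0.5, m=200)
+col = [sum(X[t][i] for t in range(200)) for i in range(1000)]
+top_k = set(sorted(range(1000), key=lambda i: -col[i])[:30])
+assert len(top_k & I) >= 27               # column sums expose I (~100 vs ~10)
+```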
+
+The main goal of this section is to establish the following communication lower bound of distributed set detection.
+
+Theorem 3.2 (Communication lower bound). Let $\epsilon \in (0,1)$ , and let $n,k,m$ be integers satisfying $k \leq \epsilon n / 4$ . Let $c \geq 1$ be some constant with $\frac{1}{c}\mu_0 \leq \mu_1 \leq c\mu_0$ . For the problem of distributed set detection, any (randomized) communication protocol that outputs an $\epsilon$ -fraction of the index set (i.e., $|\mathcal{I} \cap \widehat{\mathcal{I}}| \geq \epsilon k$ ) in expectation has communication complexity at least $\Omega(\epsilon^2 n / c)$ .
+
+With the goal of proving Theorem 3.2, we first introduce the distributed detection problem, where each party only receives one coordinate and the goal is to detect whether they come from $\mathcal{D}_0$ or $\mathcal{D}_1$ .
+
+Definition 3.3 (Distributed detection). Let $V \sim B_{1/2}$ and let $\mathcal{D}_0, \mathcal{D}_1$ be two Bernoulli distributions with means $\mu_0, \mu_1$ . There are $m$ parties communicating in the blackboard model, and each party receives a single bit $Z_t$ independently drawn from $\mathcal{D}_V$ (with the same $V$ for all parties). The goal is to determine the value of $V$ .
+
+The distributed detection problem has large information cost.
+
+- Using public randomness, the $m$ parties sample a set $\mathcal{I} = \{i_1,\dots ,i_k\} \subseteq [n]$
+- For any $t \in [m]$ , the $t$ -th party constructs the vector $X_{t} \in \{0,1\}^{n}$ as follows:
+
+$$
+X _ {t, i} \sim \left\{ \begin{array}{l l} \mathcal {D} _ {0} & i \in [ n ] \backslash \mathcal {I} \\ \mathcal {D} _ {1} & i \in \{i _ {2}, \ldots , i _ {k} \} \\ Z _ {t} & i = i _ {1} \end{array} \right.
+$$
+
+- Then the $m$ parties follow the communication protocol of distributed set detection, and they output $V = 1$ if $i_1 \in \widehat{\mathcal{I}}$ and $V = 0$ otherwise.
+
+Figure 1. Communication protocol for distributed detection
+
+Lemma 3.4 (Distributed SDPI, Theorem 1.1 in (Braverman et al., 2016)). Suppose $\frac{1}{c}\mu_0\leq \mu_1\leq c\mu_0$ , and let $\beta (\mu_0,\mu_1)\leq 1$ be the SDPI constant of $\mu_0,\mu_1$ . Let $\Pi$ be the communication transcript, and let $\Pi |_{V = 0}$ (resp. $\Pi |_{V = 1}$ ) be the transcript when $V = 0$ (resp. $V = 1$ ). In the distributed detection problem, we have the following distributed strong data processing inequality,
+
+$$
+h ^ {2} \left(\Pi | _ {V = 0}, \Pi | _ {V = 1}\right) \leq K \cdot c \beta \left(\mu_ {0}, \mu_ {1}\right) \cdot \min \left\{I (\Pi ; Z _ {1}, \dots , Z _ {m} | V = 0), I (\Pi ; Z _ {1}, \dots , Z _ {m} | V = 1) \right\}
+$$
+
+for some fixed constant $K > 0$ .
+
+We apply a direct sum argument and prove the information cost for distributed set detection is $\Omega(n)$ times the information cost of distributed detection. To this end, consider the communication protocol in Figure 1 for distributed detection.
+
+Let $\Pi$ be the transcript of distributed set detection. Let $R$ be the public randomness used by the distributed set detection, $R' = (R, \mathcal{I})$ be the public randomness of distributed detection. The information cost of distributed set detection is defined as
+
+$$
+\mathsf {I C} = \max _ {\mathcal {I} \subseteq [ n ], k / 2 \leq | \mathcal {I} | \leq k} I (\Pi ; X _ {1}, \dots , X _ {m} | \mathcal {I}, R). \tag {1}
+$$
+
+The information cost is also a lower bound on the communication cost of distributed set detection.
+
+We have the following direct-sum theorem on the information cost; the proof is similar to that of Proposition 5.2 in (Braverman et al., 2016).
+
+Lemma 3.5. For any $k \geq 2$ , we have
+
+$$
+I (\Pi , R ^ {\prime}; Z _ {1}, \dots , Z _ {m} | V = 0) \leq \frac {\mathsf {I C}}{n - k + 1}.
+$$
+
+We next analyse the correctness of the protocol.
+
+Lemma 3.6. Suppose the communication protocol of distributed set detection outputs an $\epsilon$ -fraction of the index set. Then the protocol in Figure 1 correctly guesses the value of $V$ with probability at least $\frac{1}{2} + \frac{\epsilon}{4}$ .
+
+Combining Lemma 3.4 - 3.6, we can prove Theorem 3.2.
+
+# 3.2. Lower Bound for Search Problem
+
+We start with a simpler linear query lower bound for the search problem. It is worth noting that the lower bound holds even when the function is additive.
+
+Theorem 3.7. Let $n$ be the size of the ground set, $k \in [n]$ be the cardinality constraint, $\alpha \in (0,1)$ be the approximation ratio and $k \leq O(\alpha n)$ . A randomized algorithm must make at least $\Omega (\alpha^5 n / \log^2 (n))$ queries in order to find an $\alpha$ -approximate solution for submodular maximization under the cardinality constraint.
+
+We reduce from the distributed set detection problem. Let
+
+$$
+\epsilon = \alpha / 15, \quad m = \frac {100 \log n}{\epsilon ^ {2}},
+$$
+
+and
+
+$$
+\mu_ {0} = \epsilon , \quad \mu_ {1} = 1 / 2, \quad c = 1 / (2 \epsilon).
+$$
+
+Given an instance of distributed set detection with input $X_{1},\ldots ,X_{m}\in \{0,1\}^{n}$ , consider the following function
+
+$$
+f (S) = \sum_ {i \in S} \sum_ {t \in [ m ]} X _ {t, i} \quad \forall S \subseteq [ n ].
+$$
+
+It is clear that the function $f$ is non-negative, monotone, and submodular, since it is a linear function. Furthermore, its optimal solution satisfies the following.
+
+Lemma 3.8. With probability at least $1 - 1 / n^{10}$ , for any $\alpha$ -approximate solution set $S \subseteq [n]$ ( $|S| \leq k$ ), we have $|S \cap \mathcal{I}| \geq \epsilon k$ .
+
+The proof of Theorem 3.7 follows from Lemma 3.8 and the communication lower bound for distributed set detection (Theorem 3.2).
+
+# 3.3. Lower Bound for Decision Problem
+
+We next prove a linear query lower bound for the decision problem.
+
+Theorem 3.9. Let $n$ be the size of the ground set, $k \in [n]$ be the cardinality constraint, $\alpha \in (0,1)$ be the approximation ratio, $\mathrm{OPT} \in \mathbb{R}^{+}$ and $k \leq O(\alpha^{2}n)$ . A randomized algorithm must make at least $\Omega (\alpha^{11}n / \log^2 (n))$ queries in order to distinguish between
+
+- YES Instance: $f(S^{*}) \geq \mathrm{OPT}$
+- NO Instance: $f(S^{*}) \leq \alpha \,\mathrm{OPT}$
+
+We present a reduction from the distributed set detection problem. The construction of the submodular function and the choice of parameters are slightly different from Theorem 3.7. Let
+
+$$
+\epsilon = \frac {\alpha ^ {2}}{800}, \qquad m = \frac {100 \log n}{\epsilon ^ {2}}
+$$
+
+and
+
+$$
+\mu_ {0} = \epsilon , \qquad \mu_ {1} = 1 / 2, \qquad c = 1 / (2 \epsilon).
+$$
+
+Given an instance of distributed set detection with input $X_{1},\ldots ,X_{m}\in \{0,1\}^{n}$ , consider the following functions
+
+$$
+f _ {\mathrm {yes}} (S) = \min \left\{ \sum_ {i \in S \cap \mathcal {I}} \sum_ {t \in [ m ]} X _ {t, i} + \sum_ {i \in S \cap ([ n ] \backslash \mathcal {I})} \sum_ {t \in [ m ]} X _ {t, i} + \frac {\alpha m | S |}{20}, \; m k \right\} \tag {2}
+$$
+
+and
+
+$$
+f _ {\mathrm {no}} (S) = \min \left\{ \min \left\{ \sum_ {i \in S \cap \mathcal {I}} \sum_ {t \in [ m ]} X _ {t, i}, \; \frac {\alpha m k}{10} \right\} + \sum_ {i \in S \cap ([ n ] \backslash \mathcal {I})} \sum_ {t \in [ m ]} X _ {t, i} + \frac {\alpha m | S |}{20}, \; m k \right\}. \tag {3}
+$$
+
+It is easy to see that both $f_{\mathrm{yes}}$ and $f_{\mathrm{no}}$ are monotone and submodular, because taking linear combinations and taking the minimum with a constant preserve submodularity and monotonicity (see Fact 2.1).
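+
+An assumed direct implementation of Eq. (2)-(3) may help, using precomputed column sums $\mathrm{col}[i] = \sum_t X_{t,i}$ (the toy parameter values below are our own). It illustrates that the two functions agree on sets containing few indices of $\mathcal{I}$ , and differ only once the inner truncation binds.
+
+```python
+def f_yes(S, col, I, alpha, m, k):
+    """Eq. (2): truncated linear function, no inner truncation on I."""
+    inside = sum(col[i] for i in S if i in I)
+    outside = sum(col[i] for i in S if i not in I)
+    return min(inside + outside + alpha * m * len(S) / 20, m * k)
+
+def f_no(S, col, I, alpha, m, k):
+    """Eq. (3): same, but the contribution of I is capped at alpha*m*k/10."""
+    inside = min(sum(col[i] for i in S if i in I), alpha * m * k / 10)
+    outside = sum(col[i] for i in S if i not in I)
+    return min(inside + outside + alpha * m * len(S) / 20, m * k)
+
+# Toy instance: every column sum is 10, I = {0..19}.
+col = [10] * 100
+I = set(range(20))
+alpha, m, k = 0.5, 20, 20
+S_small = set(range(18, 25))   # touches I in only 2 indices: truncation inactive
+assert f_yes(S_small, col, I, alpha, m, k) == f_no(S_small, col, I, alpha, m, k)
+assert f_yes(I, col, I, alpha, m, k) > f_no(I, col, I, alpha, m, k)
+```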
+
+We first make a few simple observations.
+
+Lemma 3.10. With probability at least $1 - 1/n^{10}$ , we have
+
+$$
+\sum_ {t \in [ m ]} X _ {t, i} \in \left[ (1 / 2 - \epsilon) m, (1 / 2 + \epsilon) m \right] \quad \forall i \in \mathcal {I} \tag {4}
+$$
+
+and
+
+$$
+\sum_ {t \in [ m ]} X _ {t, i} \in (\epsilon m / 2, 2 \epsilon m) \quad \forall i \in [ n ] \backslash \mathcal {I} \tag {5}
+$$
+
+In the rest of the proof, we condition on the high probability event of Lemma 3.10. Let $\mathrm{OPT} = mk / 5$ . We have the following observation on the optimal values of $f_{\mathrm{yes}}$ and $f_{\mathrm{no}}$ .
+
+Lemma 3.11. We have (1) $\max_{S^* \subseteq [n], |S^*| = k} f_{\mathrm{yes}}(S^*) \geq \mathrm{OPT}$ ; and (2) $f_{\mathrm{no}}(S) \leq \alpha \,\mathrm{OPT}$ for any $S \subseteq [n]$ , $|S| = k$ .
+
+Now we can proceed to prove Theorem 3.9.
+
+Proof of Theorem 3.9. Suppose there exists an algorithm ALG that makes at most $R$ queries and distinguishes between the YES/NO instance. Consider the communication protocol in Algorithm 1 for distributed set detection.
+
+From a high level, the communication proceeds in (at most) $R$ rounds. At the $r$ -th round, the $m$ parties look at the $r$ -th query $S_{r} \subseteq [n]$ of ALG, and they either decide on an output set or construct the value oracle $f(S_{r})$ . Concretely, if the size of $S_{r}$ is large, they set $f(S_{r}) = mk$ and proceed to the next round/query. Otherwise, they partition the set $S_{r} = S_{r,1} \cup \dots \cup S_{r,20/\alpha}$ into $20/\alpha$ subsets, each of size at most $k$ . The $m$ parties then compute the value of $\sum_{i \in S_{r,\tau}} \sum_{t \in [m]} X_{t,i}$ for each subset $S_{r,\tau}$ ( $\tau \in [20/\alpha]$ ). If the value is large, they return the set $\widehat{\mathcal{I}} \gets S_{r,\tau}$ ; otherwise, they set the oracle value $f(S_{r}) = \min \{\sum_{t \in [m]} \sum_{i \in S_{r}} X_{t,i} + \alpha m |S_{r}| / 20, mk\}$ and proceed to the next round/query.
+
+Algorithm 1 Reduction: From submodular maximization to distributed set detection
+
+1: Input: $X_{1},\ldots ,X_{m}\in \{0,1\}^{n}$ , submodular maximization algorithm ALG
+2: for $r = 1,2,\ldots ,R$ do
+3: Let $S_r \subseteq [n]$ be the $r$ -th query of ALG
+4: if $|S_r| \geq 20k / \alpha$ then
+5: $f(S_{r})\gets mk$
+6: else
+7: Partition $S_{r} = S_{r,1}\cup \dots \cup S_{r,20 / \alpha}$
+8: for $\tau = 1,2,\ldots ,20 / \alpha$ do
+9: if $\sum_{i\in S_{r,\tau}}\sum_{t\in [m]}X_{t,i}\geq \alpha^2 mk / 200$ then
+10: $\widehat{\mathcal{I}} \gets S_{r,\tau}$ and return
+11: end if
+12: end for
+13: $f(S_{r}) \gets \min \{\sum_{i\in S_{r}}\sum_{t\in [m]}X_{t,i} + \alpha m|S_{r}| / 20,\; mk\}$
+14: end if
+15: end for
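+
+A runnable rendition of Algorithm 1's oracle loop is sketched below. This is our own toy (the instance and parameter values are chosen so the thresholds behave as in the analysis, not taken from the paper): each small query is split into blocks of size at most $k$ and the Line 9 test is applied to every block.
+
+```python
+def run_reduction(queries, col, m, k, alpha):
+    """Sketch of Algorithm 1. `queries` lists the sets ALG asks about;
+    col[i] = sum_t X_{t,i}. Returns (('found', block) or ('no-find', None),
+    list of oracle answers produced so far)."""
+    answers = []
+    for S in queries:
+        S = sorted(S)
+        if len(S) >= 20 * k / alpha:                  # Line 4: cap large queries
+            answers.append(m * k)
+            continue
+        blocks = [S[j:j + k] for j in range(0, len(S), k)]  # <= 20/alpha blocks
+        for block in blocks:
+            if sum(col[i] for i in block) >= alpha ** 2 * m * k / 200:  # Line 9
+                return ('found', set(block)), answers
+        total = sum(col[i] for i in S)
+        answers.append(min(total + alpha * m * len(S) / 20, m * k))     # Line 13
+    return ('no-find', None), answers
+
+# Toy instance: I = {0..9} has column sums m/2; the rest have small sums.
+m, k, alpha = 1000, 10, 0.9
+col = [m // 2 if i < 10 else 2 for i in range(100)]
+result, answers = run_reduction([set(range(20, 40)), set(range(5))], col, m, k, alpha)
+assert result[0] == 'found' and result[1] <= set(range(10))
+assert answers == [min(40 + alpha * m, m * k)]
+```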
+
+Correctness First, we prove the correctness of the protocol. We divide into two cases.
+
+Case 1. Algorithm 1 returns a set $\widehat{\mathcal{I}}\gets S_{r,\tau}$ at some round $r\in [R]$ and $\tau \in [20 / \alpha ]$ . Then we prove it must satisfy $|S_{r,\tau}\cap \mathcal{I}|\geq \epsilon k.$
+
+We have
+
+$$
+\begin{array}{l} \alpha ^ {2} m k / 200 \leq \sum_ {i \in S _ {r, \tau}} \sum_ {t \in [ m ]} X _ {t, i} \\ = \sum_ {i \in S _ {r, \tau} \cap \mathcal {I}} \sum_ {t \in [ m ]} X _ {t, i} + \sum_ {i \in S _ {r, \tau} \cap ([ n ] \backslash \mathcal {I})} \sum_ {t \in [ m ]} X _ {t, i} \\ \leq \left| S _ {r, \tau} \cap \mathcal {I} \right| \cdot (1 / 2 + \epsilon) m + k \cdot 2 \epsilon m. \end{array}
+$$
+
+Here the first step follows from the assumption, and the third step follows from Lemma 3.10. Plugging in the values of $\alpha, \epsilon$ , we have that
+
+$$
+\left| S _ {r, \tau} \cap \mathcal {I} \right| \geq \frac {\alpha ^ {2} / 200 - 2 \epsilon}{1 / 2 + \epsilon} k \geq \epsilon k.
+$$
+
+Case 2. Suppose Algorithm 1 never returns anything, i.e., the if condition at Line 9 of Algorithm 1 is never satisfied. Then we prove $f(S_{r}) = f_{\mathrm{yes}}(S_{r}) = f_{\mathrm{no}}(S_{r})$ holds for every $r \in [R]$ .
+
+Fix a round $r \in [R]$ , if $|S_r| \geq 20k / \alpha$ , then we have $f_{\mathrm{yes}}(S_r) = f_{\mathrm{no}}(S_r) = mk = f(S_r)$ due to Line 5 of Algorithm 1. Now suppose $|S_r| < \frac{20k}{\alpha}$ , we have
+
+$$
+\begin{array}{l} f \left(S _ {r}\right) = \min \left\{\sum_ {i \in S _ {r}} \sum_ {t \in [ m ]} X _ {t, i} + \alpha m \left| S _ {r} \right| / 20, m k \right\} \\ = f _ {\mathrm {yes}} \left(S _ {r}\right). \end{array}
+$$
+
+Meanwhile, by the assumption of the second case, we also have
+
+$$
+\begin{array}{l} \sum_ {i \in S _ {r} \cap \mathcal {I}} \sum_ {t \in [ m ]} X _ {t, i} \leq \sum_ {i \in S _ {r}} \sum_ {t \in [ m ]} X _ {t, i} \\ = \sum_ {\tau \in [ 20 / \alpha ]} \sum_ {i \in S _ {r, \tau}} \sum_ {t \in [ m ]} X _ {t, i} \\ \leq (20 / \alpha) \cdot (\alpha ^ {2} m k / 200) = \alpha m k / 10. \end{array}
+$$
+
+Hence, we also have $f_{\mathrm{no}}(S_r) = f_{\mathrm{yes}}(S_r)$ (see the definition in Eq. (2)(3)). This proved that $f_{\mathrm{yes}}(S_r) = f_{\mathrm{no}}(S_r) = f(S_r)$ for any $r \in [R]$ .
+
+In summary, we conclude that in Case 1, the reduction outputs a set $\widehat{\mathcal{I}}$ with $|\mathcal{I} \cap \widehat{\mathcal{I}}| \geq \epsilon k$ , while in Case 2, the transcript of ALG is the same for $f_{\mathrm{yes}}$ and $f_{\mathrm{no}}$ . Since we assume ALG is able to distinguish between $f_{\mathrm{yes}}$ and $f_{\mathrm{no}}$ after $R$ queries, we must fall into Case 1. This proves the correctness of Algorithm 1.
+
+Communication complexity Next we analyse the communication complexity. At each round $r \in [R]$ , the $m$ parties need to compute $\sum_{t \in [m]} \sum_{i \in S_{r,\tau}} X_{t,i}$ for $\tau \in [20 / \alpha]$ . For each $\tau \in [20 / \alpha]$ , the $t$ -th party ( $t \in [m]$ ) can compute the partial sum $\sum_{i \in S_{r,\tau}} X_{t,i}$ locally and then write it on the blackboard. Hence, the total communication cost per round is at most $(20/\alpha) \cdot m \cdot \log(n) = O(\log^2(n) / \alpha \epsilon^2) = O(\log^2(n) / \alpha^5)$ . There are at most $R$ rounds, so the total communication cost over $R$ rounds is $O(R\log^2(n) / \alpha^5)$ . By Theorem 3.2, we must have
+
+$$
+R \log ^ {2} (n) / \alpha ^ {5} \geq \Omega (\epsilon ^ {2} n / c) \Rightarrow R \geq \Omega (\alpha ^ {11} n / \log ^ {2} (n)).
+$$
+
+# 4. Sublinear Algorithm for Additive Function
+
+Theorem 4.1. Let $n$ be the size of the ground set, $k$ be the constraint, and $\epsilon \in (0,1/8)$ be a constant. Suppose $f$ is monotone and additive. Then there is an algorithm that approximates the value of $\max_{S \subseteq [n], |S| = k} f(S)$ within a $(1 \pm \epsilon)$ factor using $(n/k) \cdot \mathrm{poly}(\log(n), \epsilon^{-1})$ queries and succeeds with probability at least $4/5$ .
+
+Parameters and notation Let $w_{i} = f(i)\geq 0$ and we have $f(S) = \sum_{i\in S}w_{i}$ for all $S\subseteq [n]$ . For parameters $R,R_1,R_2$ defined below, we define a set $\{k_r\}_r$ of scales ranging from $k_{1} = 1$ to $k_{R + R_1 + R_2} = k$ . Specifically, we let $R = 100\log (n) / \epsilon^2$ , $R_{1} = \log_{1 + \epsilon}(k / R^{3})$ , $R_{2} = \log_{1 + \epsilon}(R^{2})$ . Let $k_{r} = r$ for $r\leq R$ , and $k_{r} = R(1 + \epsilon)^{r - R}$ for $r\in [R + 1:R + R_1 + R_2]$ .
+
+Algorithm description Our approach is depicted as LINEARSUM (Algorithm 2). From a high level, LINEARSUM divides the largest $k$ elements into multiple scales. For the largest scales, $k_{r} \in \{k_{R + R_{1} + 1}, \ldots, k_{R + R_{1} + R_{2}}\}$ , LINEARSUM directly estimates the $k_{r}$ -th quantile and it takes roughly $\widetilde{O}(n / k_{r}) = \widetilde{O}(n / k)$ samples.
+
+For smaller scales, $k_{r} \in \{k_{1},\ldots ,k_{R + R_{1}}\}$ , we cannot directly use this naive estimation approach, as we cannot afford the $\widetilde{O}(n / k_{r})$ query complexity for smaller $k_{r}$ . To avoid this increase in query complexity, LINEARSUM randomly partitions the ground set $[n]$ into $nk_{r} / k$ buckets $A_{r,1},\dots,A_{r,nk_{r} / k}$ . Defining $f_{r}(i) := f(A_{r,i})$ for $i \in [nk_{r} / k]$ , LINEARSUM estimates the $k_{r}$ -th largest element of $f_{r}$ . Roughly speaking, we expect the top $k_{r}$ -th bucket to be as valuable as the top $k_{r}$ -th element plus an average bucket, so LINEARSUM further subtracts the average value of a bucket to get an estimate of the contribution just from the top $k_{r}$ elements.
+
+# Algorithm 2 LINEARSUM(f,k)
+
+1: for $r = 1,2,\ldots ,R + R_{1}$ do
+2: Random partition $[n] = A_{r,1} \cup \dots \cup A_{r,nk_r / k}$ into $nk_{r} / k$ subsets
+3: Define $f_{r}(i) \coloneqq f(A_{r,i})$ for $\forall i \in [nk_r / k]$
+4: $b_r \gets \text{ESTIMATEQUANTILE}(f_r, k_r)$
+5: $c_{r}\gets b_{r} - \frac{k}{nk_{r}} f([n])$
+6: end for
+7: $c_{r} \gets \text{ESTIMATEQUANTILE}(f, k_{r})$ for $r \in [R + R_{1} + 1 : R + R_{1} + R_{2}]$
+8: Return $\sum_{r=1}^{R+R_1+R_2}(k_r - k_{r-1})c_r$
+
+# Algorithm 3 ESTIMATEQUANTILE $(g,t)$
+
+1: if $t > 100\log (n) / \epsilon^2$ then
+2: Random sample a set $S \subseteq [m]$ of size $100m\log (n) / t\epsilon^2$
+3: Query $g(i)$ for all $i \in S$ and let $b_{t}$ be the $100\log (n) / \epsilon^{2}$ -th largest element of $\{g(i)\}_{i \in S}$
+4: Return $b_{t}$
+5: else
+6: Query $g$ and Return the $t$ -th largest element
+7: end if
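+
+A sketch of ESTIMATEQUANTILE over an explicit list of oracle values may help; this is our own rendition (the exact-branch threshold $100\log(n)/\epsilon^2$ follows the pseudocode, but the data and parameters below are illustrative assumptions).
+
+```python
+import math, random
+
+def estimate_quantile(g_values, t, eps, n, rng):
+    """Sketch of ESTIMATEQUANTILE (Algorithm 3). For large t it samples
+    ~100 m log(n)/(t eps^2) entries and returns the (100 log(n)/eps^2)-th
+    largest sampled value; for small t it sorts everything and is exact."""
+    m = len(g_values)
+    thresh = int(100 * math.log(n) / eps ** 2)
+    if t > thresh:
+        s = min(m, int(100 * m * math.log(n) / (t * eps ** 2)))
+        sample = sorted(rng.sample(g_values, s), reverse=True)
+        return sample[thresh - 1]
+    return sorted(g_values, reverse=True)[t - 1]
+
+rng = random.Random(0)
+# Small t falls in the exact branch and returns the t-th largest value.
+assert estimate_quantile(list(range(100)), 5, 0.5, 100, rng) == 95
+```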
+
+# 4.1. Analysis
+
+We relabel the ground set elements such that $w_{1} \geq \dots \geq w_{n}$ for convenience; note that LINEARSUM is oblivious to the labeling of ground set elements so this is without loss of generality.
+
+We first prove that $\text{ESTIMATEQUANTILE}(g, t)$ gives a good estimate on the value of the top $t$ -th element of $g$ .
+
+Lemma 4.2. Given any function $g:[m]\to \mathbb{R}^{+}$ , let $\widehat{b}_1\geq \widehat{b}_2\geq \dots \geq \widehat{b}_m$ be the descending ordering of $\{g(i)\}_{i\in [m]}$ . For any $t\in [m]$ , the output $b_{t}$ of ESTIMATEQUANTILE $(g,t)$ satisfies
+
+$$
+\widehat {b} _ {(1 + \epsilon) t} \leq b _ {t} \leq \widehat {b} _ {(1 + \epsilon) ^ {- 1} t}
+$$
+
+for $t > 100\log (n) / \epsilon^2$ with probability at least $1 - 1 / n^{4}$ , and $b_{t} = \widehat{b}_{t}$ for $t\leq 100\log (n) / \epsilon^2$ .
+
+In the rest of the proof, it is convenient to assume $k$ divides $n$ , and further that
+
+$$
+\left(\log (n) / \epsilon\right) ^ {8} \leq k \leq n \cdot \left(\epsilon / \log (n)\right) ^ {8}. \tag {6}
+$$
+
+This assumption is without loss of generality as we could just add dummy elements to the ground set.
+
+The key step is to prove that $c_{r}$ gives a good estimate on the $k_{r}$ -th largest element of $f$ .
+
+Lemma 4.3. For $r \leq R$ , with probability at least $1 - \frac{1}{100R}$
+
+$$
+w _ {k _ {r}} - \frac {2 \epsilon}{R} \sum_ {j \leq k} w _ {j} \leq c _ {r} \leq w _ {k _ {r}} + \frac {2 \epsilon}{R} \sum_ {j \leq k} w _ {j}. \tag {7}
+$$
+
+and for $r \in [R + 1 : R + R_1]$ , with probability at least $1 - \frac{1}{n^4}$ , we have
+
+$$
+w _ {(1 + 2 \epsilon) k _ {r}} - \frac {2}{k _ {r}} \sum_ {j \leq k} w _ {j} \leq c _ {r} \leq w _ {(1 - 2 \epsilon) k _ {r}} + \frac {2}{k _ {r}} \sum_ {j \leq k} w _ {j}. \tag {8}
+$$
+
+The detailed proof of Lemma 4.3 can be found in Appendix B; we sketch the high-level idea here. Fix a value of $r$ , and let $t_r = \log^3 (n) / \epsilon^3$ when $r\leq R$ and $t_r = k_r\log^2 (n) / \epsilon$ when $r\in [R + 1:R + R_1]$ . For each subset $i\in [nk_r / k]$ , we decompose $f(A_{r,i})$ into three parts
+
+$$
+\begin{array}{l} f \left(A _ {r, i}\right) = \sum_ {j \in A _ {r, i}} w _ {j} \\ = \sum_ {j \in A _ {r, i}} w _ {j} 1 _ {j \leq t _ {r}} + \sum_ {j \in A _ {r, i}} w _ {j} 1 _ {t _ {r} < j \leq k} + \sum_ {j \in A _ {r, i}} w _ {j} 1 _ {j > k}. \\ \end{array}
+$$
+
+From a high level, we wish to prove that (1) at most one element contributes to the first term; (2) the second term is negligible; and (3) the last term concentrates around its mean.
+
+Fix a partition $A_{r,1} \cup \dots \cup A_{r,nk_r / k}$ . For any subset $S \subseteq [n]$ , the number of collisions among $S$ is defined as the total number of elements in $S$ that are allocated to subsets with more than one element of $S$ . We first prove that (with sufficiently high probability) there are few collisions among the largest $t_r$ elements.
+
+Lemma 4.4. We have
+
+- For $r \leq R$ , with probability at least $1 - \frac{1}{100R}$ , there are no collisions among the top $t_r$ elements.
+- For $r \in [R + 1 : R + R_1]$ , with probability at least $1 - 1 / n^4$ , the number of collisions among the top $t_r$ elements is at most $\epsilon k_r$ .
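+
+The collision count used in Lemma 4.4 can be computed directly; the helper below is our own illustration of the definition (the bucket assignment is an arbitrary toy).
+
+```python
+from collections import Counter
+
+def num_collisions(bucket_of, S):
+    """Number of elements of S assigned to a bucket that contains more than
+    one element of S (the collision count defined above)."""
+    counts = Counter(bucket_of[i] for i in S)
+    return sum(c for c in counts.values() if c > 1)
+
+# Elements 0 and 1 share bucket 'a'; elements 2 and 3 sit alone.
+bucket_of = {0: 'a', 1: 'a', 2: 'b', 3: 'c'}
+assert num_collisions(bucket_of, {0, 1, 2}) == 2
+assert num_collisions(bucket_of, {2, 3}) == 0
+```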
+
+We next prove $\sum_{j\in A_{r,i}}w_j1_{t_r < j\leq k}$ is negligible.
+
+Lemma 4.5. For any $r \in [R + R_1]$ and $i \in [nk_r / k]$ , with probability at least $1 - 1 / n^4$ , we have
+
+$$
+\sum_ {j \in A _ {r, i}} w _ {j} 1 _ {t _ {r} < j \leq k} \leq \frac {\log^ {2} (n)}{t _ {r}} \sum_ {j \leq k} w _ {j}.
+$$
+
+Finally, we prove $\sum_{j\in A_{r,i}}w_j\cdot 1_{j > k}$ concentrates around the mean.
+
+Lemma 4.6. For any $r \in [R + R_1]$ and $i \in [nk_r / k]$ , with probability at least $1 - 1 / n^5$ ,
+
+$$
\sum_ {j \in A _ {r, i}} w _ {j} \cdot 1 _ {j > k} = \frac {k}{n k _ {r}} \sum_ {j > k} w _ {j} \pm \sqrt {\frac {k}{k _ {r}}} \log (n) w _ {k}.
+$$
+
+We can obtain Lemma 4.3 from Lemma 4.4 - Lemma 4.6, and obtain Theorem 4.1 using Lemma 4.3.
+
+# 4.2. A Nearly-matching Lower Bound
+
We present a nearly-matching lower bound to Theorem 4.1, whose proof can be found in the Appendix.
+
Theorem 4.7. Let $f$ be an additive function, $n$ be the size of the ground set, $k$ be the cardinality constraint, and $\alpha \in (0,1)$ be a constant. Then it takes $\Omega(\alpha^3 n / (k \log(n)))$ queries to distinguish between
+
+- YES Instance: $f(S^{*}) \geq \mathrm{OPT}$
- NO Instance: $f(S^{*}) \leq \alpha \cdot \mathrm{OPT}$
+
+# Acknowledgement
+
The authors would like to thank Yair Carmon for useful discussions about the project. This work is supported by NSF CCF-1954927, a David and Lucile Packard Fellowship, AFOSR award FA95502310251, and ONR award N000142212771.
+
+# Impact Statement
+
This is a theoretical paper, and we do not foresee any negative societal impacts.
+
+# References
+
+Agarwal, A. and Balkanski, E. Learning-augmented dynamic submodular maximization. arXiv preprint arXiv:2311.13006, 2023.
+Agrawal, S., Shadravan, M., and Stein, C. Submodular secretary problem with shortlists. arXiv preprint arXiv:1809.05082, 2018.
+Alaluf, N. and Feldman, M. Making a sieve random: Improved semi-streaming algorithm for submodular maximization under a cardinality constraint. arXiv preprint arXiv:1906.11237, 2019.
+Alaluf, N., Ene, A., Feldman, M., Nguyen, H. L., and Suh, A. An optimal streaming algorithm for submodular maximization with a cardinality constraint. Mathematics of Operations Research, 47(4):2667-2690, 2022.
+Badanidiyuru, A., Mirzasoleiman, B., Karbasi, A., and Krause, A. Streaming submodular maximization: Massive data summarization on the fly. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 671-680, 2014.
+Balkanski, E. and Singer, Y. The adaptive complexity of maximizing a submodular function. In Diakonikolas, I., Kempe, D., and Henzinger, M. (eds.), Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2018, Los Angeles, CA, USA, June 25-29, 2018, pp. 1138-1151. ACM, 2018. doi: 10.1145/3188745.3188752. URL https://doi.org/10.1145/3188745.3188752.
+Balkanski, E., Rubinstein, A., and Singer, Y. The power of optimization from samples. In Lee, D. D., Sugiyama, M., von Luxburg, U., Guyon, I., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pp. 4017-4025, 2016.
+Balkanski, E., Breuer, A., and Singer, Y. Non-monotone submodular maximization in exponentially fewer iterations. In Bengio, S., Wallach, H. M., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pp. 2359-2370, 2018.
+
+Balkanski, E., Rubinstein, A., and Singer, Y. An exponential speedup in parallel running time for submodular maximization without loss in approximation. In Chan, T. M. (ed.), Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2019, San Diego, California, USA, January 6-9, 2019, pp. 283-302. SIAM, 2019. doi: 10.1137/1.9781611975482.19.
+Balkanski, E., Rubinstein, A., and Singer, Y. An optimal approximation for submodular maximization under a matroid constraint in the adaptive complexity model. Oper. Res., 70(5):2967-2981, 2022a. doi: 10.1287/OPRE.2021.2170.
+Balkanski, E., Rubinstein, A., and Singer, Y. The limitations of optimization from samples. J. ACM, 69(3):21:1-21:33, 2022b. doi: 10.1145/3511018.
+Banihashem, K., Biabani, L., Goudarzi, S., Hajiaghayi, M., Jabbarzade, P., and Monemizadeh, M. Dynamic constrained submodular optimization with polylogarithmic update time. In International Conference on Machine Learning, pp. 1660-1691. PMLR, 2023.
+Banihashem, K., Biabani, L., Goudarzi, S., Hajiaghayi, M., Jabbarzade, P., and Monemizadeh, M. Dynamic algorithms for matroid submodular maximization. In Proceedings of the 2024 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 3485-3533. SIAM, 2024.
+Behnezhad, S. Dynamic algorithms for maximum matching size. In Proceedings of the 2023 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 129-162. SIAM, 2023.
+Behnezhad, S., Roghani, M., and Rubinstein, A. Sublinear time algorithms and complexity of approximate maximum matching. In Proceedings of the 55th Annual ACM Symposium on Theory of Computing, pp. 267-280, 2023.
+Bhattacharya, S., Kiss, P., Saranurak, T., and Wajc, D. Dynamic matching with better-than-2 approximation in polylogarithmic update time. Journal of the ACM, 71(5):1-32, 2024.
+Bilmes, J. A. Submodularity in machine learning and artificial intelligence. CoRR, abs/2202.00132, 2022. URL https://arxiv.org/abs/2202.00132.
+Braverman, M., Garg, A., Ma, T., Nguyen, H. L., and Woodruff, D. P. Communication lower bounds for statistical estimation problems via a distributed data processing inequality. In Proceedings of the forty-eighth annual ACM symposium on Theory of Computing, pp. 1011-1020, 2016.
+
+Breuer, A., Balkanski, E., and Singer, Y. The FAST algorithm for submodular maximization. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pp. 1134-1143. PMLR, 2020.
+Chakrabarti, A. and Kale, S. Submodular maximization meets streaming: matchings, matroids, and more. Mathematical Programming, 154:225-247, 2015.
+Charikar, M., Chen, B., Ré, C., and Waingarten, E. Fast algorithms for a new relaxation of optimal transport. In The Thirty Sixth Annual Conference on Learning Theory, pp. 4831-4862. PMLR, 2023.
+Chekuri, C. and Quanrud, K. Parallelizing greedy for submodular set function maximization in matroids and beyond. In Charikar, M. and Cohen, E. (eds.), Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, STOC 2019, Phoenix, AZ, USA, June 23-26, 2019, pp. 78-89. ACM, 2019a. doi: 10.1145/3313276.3316406.
+Chekuri, C. and Quanrud, K. Submodular function maximization in parallel via the multilinear relaxation. In Chan, T. M. (ed.), Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2019, San Diego, California, USA, January 6-9, 2019, pp. 303-322. SIAM, 2019b. doi: 10.1137/1.9781611975482.20. URL https://doi.org/10.1137/1.9781611975482.20.
+Chekuri, C., Gupta, S., and Quanrud, K. Streaming algorithms for submodular function maximization. In Automata, Languages, and Programming: 42nd International Colloquium, ICALP 2015, Kyoto, Japan, July 6-10, 2015, Proceedings, Part I 42, pp. 318-330. Springer, 2015.
+Chen, L., Feldman, M., and Karbasi, A. Unconstrained submodular maximization with constant adaptive complexity. In Charikar, M. and Cohen, E. (eds.), Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, STOC 2019, Phoenix, AZ, USA, June 23-26, 2019, pp. 102-113. ACM, 2019. doi: 10.1145/3313276.3316327.
+Chen, X. and Peng, B. On the complexity of dynamic submodular maximization. In Proceedings of the 54th Annual ACM SIGACT Symposium on Theory of Computing, pp. 1685-1698, 2022.
+Chen, X., Jayaram, R., Levi, A., and Waingarten, E. New streaming algorithms for high dimensional emd and mst. In Proceedings of the 54th Annual ACM SIGACT Symposium on Theory of Computing, pp. 222-233, 2022.
+
+Dütting, P., Fusco, F., Lattanzi, S., Norouzi-Fard, A., and Zadimoghaddam, M. Fully dynamic submodular maximization over matroids. In International Conference on Machine Learning, pp. 8821-8835. PMLR, 2023.
+Ene, A. and Nguyen, H. L. Submodular maximization with nearly-optimal approximation and adaptivity in nearly-linear time. In Chan, T. M. (ed.), Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2019, San Diego, California, USA, January 6-9, 2019, pp. 274-282. SIAM, 2019. doi: 10.1137/1.9781611975482.18.
+Ene, A. and Nguyen, H. L. Parallel algorithm for non-monotone dr-submodular maximization. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pp. 2902-2911. PMLR, 2020. URL http://proceedings.mlr.press/v119/ene20a.html.
+Fahrbach, M., Mirrokni, V. S., and Zadimoghaddam, M. Submodular maximization with nearly optimal approximation, adaptivity and query complexity. In Chan, T. M. (ed.), Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2019, San Diego, California, USA, January 6-9, 2019, pp. 255-273. SIAM, 2019. doi: 10.1137/1.9781611975482.17. URL https://doi.org/10.1137/1.9781611975482.17.
+Feldman, M., Karbasi, A., and Kazemi, E. Do less, get more: Streaming submodular maximization with subsampling. Advances in Neural Information Processing Systems, 31, 2018.
+Feldman, M., Liu, P., Norouzi-Fard, A., Svensson, O., and Zenklusen, R. Streaming submodular maximization under matroid constraints. arXiv preprint arXiv:2107.07183, 2021.
+Feldman, M., Norouzi-Fard, A., Svensson, O., and Zenklusen, R. The one-way communication complexity of submodular maximization with applications to streaming and robustness. Journal of the ACM, 70(4):1-52, 2023.
+Fisher, M. L., Nemhauser, G. L., and Wolsey, L. A. An analysis of approximations for maximizing submodular set functions—II. Springer, 1978.
+Huang, C.-C., Thiery, T., and Ward, J. Improved multi-pass streaming algorithms for submodular maximization with matroid constraints. arXiv preprint arXiv:2102.09679, 2021.
+
+Huang, C.-C., Kakimura, N., Mauras, S., and Yoshida, Y. Approximability of monotone submodular function maximization under cardinality and matroid constraints in the streaming model. SIAM Journal on Discrete Mathematics, 36(1):355-382, 2022.
+Ilyas, A., Park, S. M., Engstrom, L., Leclerc, G., and Madry, A. Datamodels: Predicting predictions from training data. In Proceedings of the 39th International Conference on Machine Learning, 2022.
+Indyk, P. and Vakilian, A. Tight trade-offs for the maximum k-coverage problem in the general streaming model. In Proceedings of the 38th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems, pp. 200-217, 2019.
+Kazemi, E., Mitrovic, M., Zadimoghaddam, M., Lattanzi, S., and Karbasi, A. Submodular streaming in all its glory: Tight approximation, minimum memory and low adaptive complexity. In Chaudhuri, K. and Salakhutdinov, R. (eds.), Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pp. 3311-3320. PMLR, 2019a. URL http://proceedings.mlr.press/v97/kazemi19a.html.
+Kazemi, E., Mitrovic, M., Zadimoghaddam, M., Lattanzi, S., and Karbasi, A. Submodular streaming in all its glory: Tight approximation, minimum memory and low adaptive complexity. In International Conference on Machine Learning, pp. 3311-3320. PMLR, 2019b.
+Kempe, D., Kleinberg, J., and Tardos, E. Maximizing the spread of influence through a social network. In Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 137-146, 2003.
+Kuhnle, A. Quick streaming algorithms for maximization of monotone submodular functions in linear time. In International Conference on Artificial Intelligence and Statistics, pp. 1360-1368. PMLR, 2021.
+Lattanzi, S., Mitrovic, S., Norouzi-Fard, A., Tarnawski, J. M., and Zadimoghaddam, M. Fully dynamic algorithm for constrained submodular optimization. Advances in Neural Information Processing Systems, 33:12923-12933, 2020.
+Li, W., Feldman, M., Kazemi, E., and Karbasi, A. Submodular maximization in clean linear time. In Koyejo, S., Mohamed, S., Agarwal, A., Belgrave, D., Cho, K., and Oh, A. (eds.), Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022, 2022.
+
+Liu, P., Rubinstein, A., Vondrak, J., and Zhao, J. Cardinality constrained submodular maximization for random streams. Advances in Neural Information Processing Systems, 34:6491-6502, 2021.
+McGregor, A. and Vu, H. T. Better streaming algorithms for the maximum coverage problem. Theory of Computing Systems, 63:1595-1619, 2019.
+Mirzasoleiman, B., Badanidiyuru, A., Karbasi, A., Vondrák, J., and Krause, A. Lazier than lazy greedy. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 29, 2015.
+Monemizadeh, M. Dynamic submodular maximization. Advances in Neural Information Processing Systems, 33: 9806-9817, 2020.
+Norouzi-Fard, A., Tarnawski, J., Mitrovic, S., Zandieh, A., Mousavifar, A., and Svensson, O. Beyond 1/2-approximation for submodular maximization on massive data streams. In International Conference on Machine Learning, pp. 3829-3838. PMLR, 2018.
+Peng, B. Dynamic influence maximization. Advances in Neural Information Processing Systems, 34:10718-10731, 2021.
+Peng, B. and Rubinstein, A. Fully-dynamic-to-incremental reductions with known deletion order (eg sliding window). In Symposium on Simplicity in Algorithms (SOSA), pp. 261-271. SIAM, 2023.
+Shadravan, M. Improved submodular secretary problem with shortlists. arXiv preprint arXiv:2010.01901, 2020.
+
+# A. Missing Proof from Section 3
+
+We first prove Lemma 3.5.
+
+Proof of Lemma 3.5. First, we have
+
+$$
\begin{array}{l} I (\Pi , R ^ {\prime}; Z _ {1}, \dots , Z _ {m} | V = 0) = I (\Pi ; Z _ {1}, \dots , Z _ {m} | R ^ {\prime}, V = 0) \\ = I (\Pi ; X _ {1, \mathcal {I} _ {1}}, \dots , X _ {m, \mathcal {I} _ {1}} | R, \mathcal {I}, V = 0) \\ = \underset {i _ {2}, \dots , i _ {k}} {\mathbb {E}} \left[ I \left(\Pi ; X _ {1, \mathcal {I} _ {1}}, \dots , X _ {m, \mathcal {I} _ {1}} \mid R, \mathcal {I} _ {1}, \mathcal {I} _ {2} = i _ {2}, \dots , \mathcal {I} _ {k} = i _ {k}, V = 0\right) \right]. \tag {9} \\ \end{array}
+$$
+
The first step follows because the public randomness $R'$ is independent of $Z_{1},\ldots ,Z_{m}$ conditioned on $V = 0$ ; the second step follows since $R^{\prime} = (R,\mathcal{I})$ and $Z_{1},\ldots ,Z_{m}$ are embedded into the $\mathcal{I}_1$ -th coordinate. The third step follows from the definition of conditional mutual information.
+
+For any fixed $i_2, \ldots, i_k$ , we bound the RHS of Eq. (9) and our goal is to prove
+
+$$
I \left(\Pi ; X _ {1, \mathcal {I} _ {1}}, \dots , X _ {m, \mathcal {I} _ {1}} \mid R, \mathcal {I} _ {1}, \mathcal {I} _ {2} = i _ {2}, \dots , \mathcal {I} _ {k} = i _ {k}, V = 0\right) \leq \frac {\mathsf {IC}}{n - k + 1}. \tag {10}
+$$
+
+To this end, we have
+
+$$
+\begin{array}{l} I (\Pi ; X _ {1, \mathcal {I} _ {1}}, \dots , X _ {m, \mathcal {I} _ {1}} | R, \mathcal {I} _ {1}, \mathcal {I} _ {2} = i _ {2}, \dots , \mathcal {I} _ {k} = i _ {k}, V = 0) \\ = \frac {1}{n - k + 1} \sum_ {i \in [ n ] \backslash \{i _ {2}, \dots , i _ {k} \}} I (\Pi ; X _ {1, \mathcal {I} _ {1}}, \dots , X _ {m, \mathcal {I} _ {1}} | R, \mathcal {I} _ {1} = i, \mathcal {I} _ {2} = i _ {2}, \dots , \mathcal {I} _ {k} = i _ {k}, V = 0) \\ = \frac {1}{n - k + 1} \sum_ {i \in [ n ] \backslash \{i _ {2}, \dots , i _ {k} \}} I (\Pi ; X _ {1, i}, \dots , X _ {m, i} | R, \mathcal {I} _ {2} = i _ {2}, \dots , \mathcal {I} _ {k} = i _ {k}, V = 0) \\ \leq \frac {1}{n - k + 1} I (\Pi ; \{X _ {1, i}, \dots , X _ {m, i} \} _ {i \in [ n ] \backslash \{i _ {2}, \dots , i _ {k} \}} | R, \mathcal {I} _ {2} = i _ {2}, \dots , \mathcal {I} _ {k} = i _ {k}, V = 0) \\ \leq \frac {1}{n - k + 1} I \left(\Pi ; X _ {1}, \dots , X _ {m} \mid R, \mathcal {I} _ {2} = i _ {2}, \dots , \mathcal {I} _ {k} = i _ {k}, V = 0\right). \tag {11} \\ \end{array}
+$$
+
The first step holds since the choice of $\mathcal{I}_{1}$ is uniform over $[n] \backslash \{i_2, \ldots, i_k\}$ . The second step holds since the distribution of $X_{1,i}, \ldots, X_{m,i}$ for $i \in [n] \backslash \{i_2, \ldots, i_k\}$ does not depend on $i$ (because they are drawn from $\mathcal{D}_0^m$ ), and the transcript $\Pi$ is oblivious to $\mathcal{I}_1$ . The third step holds because $X_{1,i}, \ldots, X_{m,i}$ are independent across $i \in [n] \backslash \{i_2, \ldots, i_k\}$ , together with Fact 2.3.
+
+Finally, for any $i_2,\ldots ,i_k$ , we have
+
+$$
I \left(\Pi ; X _ {1}, \dots , X _ {m} \mid R, \mathcal {I} _ {2} = i _ {2}, \dots , \mathcal {I} _ {k} = i _ {k}, V = 0\right) \leq \mathsf {IC} \tag {12}
+$$
+
+holds for any $k \geq 2$ due to the definition of information cost (see Eq. (1)).
+
Combining Eq. (11) and Eq. (12) proves Eq. (10); combining this with Eq. (9) completes the proof.
+
We next prove Lemma 3.6.
+
Proof of Lemma 3.6. When $V = 1$ , note that $\{X_{1,i},\ldots ,X_{m,i}\}_{i\in \mathcal{I}}$ are drawn from the same distribution; hence we have
+
+$$
\Pr \left[ i _ {1} \in \widehat {\mathcal {I}} \right] = \frac {\left| \mathcal {I} \cap \widehat {\mathcal {I}} \right|}{\left| \mathcal {I} \right|} \geq \frac {\epsilon k}{\left| \mathcal {I} \right|} \geq \epsilon .
+$$
+
When $V = 0$ , note that $\{X_{1,i},\ldots ,X_{m,i}\}_{i\in [n]\setminus \{i_2,\ldots ,i_k\}}$ have the same distribution; hence we have
+
+$$
\Pr [ i _ {1} \in \widehat {\mathcal {I}} ] = \frac {| \widehat {\mathcal {I}} |}{n - k + 1} = \frac {k}{n - k + 1}.
+$$
+
+Combining the above two cases, the success probability is at least
+
+$$
+\frac {1}{2} \cdot \epsilon + \frac {1}{2} \cdot \left(1 - \frac {k}{n - k + 1}\right) \geq \frac {1}{2} + \frac {\epsilon}{4}.
+$$
+
+Here we use the fact that $k \leq \epsilon n / 4$ .
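
As a quick numerical sanity check (ours, not from the paper), the bound above can be verified directly: for any $\epsilon$ and the largest $k$ allowed by the assumption $k \leq \epsilon n / 4$ , the success probability is at least $1/2 + \epsilon /4$ .

```python
# Numeric sanity check of the final bound: whenever k <= eps*n/4, the success
# probability (1/2)*eps + (1/2)*(1 - k/(n-k+1)) is at least 1/2 + eps/4.
def success_lower_bound(n, k, eps):
    return 0.5 * eps + 0.5 * (1.0 - k / (n - k + 1))

for eps in (0.05, 0.1, 0.2):
    for n in (10**3, 10**5):
        k = int(eps * n / 4)  # the largest k allowed by the assumption
        assert success_lower_bound(n, k, eps) >= 0.5 + eps / 4
```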
+
Now we provide the proof of Theorem 3.2.
+
+Proof of Theorem 3.2. By Lemma 3.6, we have that $\mathsf{TV}(\Pi|_{V=0}, \Pi|_{V=1}) \geq \frac{\epsilon}{4}$ , and therefore,
+
+$$
h ^ {2} (\Pi | _ {V = 0}, \Pi | _ {V = 1}) \geq \frac {1}{2} \mathsf {TV} ^ {2} (\Pi | _ {V = 0}, \Pi | _ {V = 1}) \geq \frac {\epsilon^ {2}}{32}.
+$$
+
+Here, the first step follows from Fact 2.2.
+
+By Lemma 3.4 and the fact that the SDPI constant $\beta (\mu_0,\mu_1)$ is at most 1, we have
+
+$$
+I (\Pi ; Z _ {1}, \dots , Z _ {m} | V = 0) \geq \Omega (1 / c) \cdot h ^ {2} (\Pi | _ {V = 0}, \Pi | _ {V = 1}) = \Omega (\epsilon^ {2} / c).
+$$
+
+By Lemma 3.5, we have $\mathsf{IC}\geq \Omega (\epsilon^2 n / c)$ . This completes the proof since the communication cost is at least the information cost.
+
+Next we prove lower bound for submodular maximization. We first prove Lemma 3.8.
+
+Proof of Lemma 3.8. By Chernoff bound, we have
+
+$$
+f (i) = \sum_ {t \in [ m ]} X _ {t, i} \in \left[ \left(1 / 2 - \epsilon\right) m, \left(1 / 2 + \epsilon\right) m \right] \quad \forall i \in \mathcal {I} \tag {13}
+$$
+
+and
+
+$$
+f (i) = \sum_ {t \in [ m ]} X _ {t, i} \leq 2 \epsilon m \quad \forall i \in [ n ] \backslash \mathcal {I} \tag {14}
+$$
+
+holds with probability at least $1 - 1 / n^{10}$ . Hence, we have
+
+$$
+f \left(S ^ {*}\right) \geq f (\mathcal {I}) = \sum_ {i \in \mathcal {I}} f (i) \geq (k / 2) (1 / 2 - \epsilon) m. \tag {15}
+$$
+
The first two steps follow from the definition; the last step follows from $|\mathcal{I}|\geq k / 2$ and Eq. (13).
+
+On the other hand, for any set $S \subseteq [n]$ with size $k$ , we have
+
+$$
+f (S) = f (S \cap \mathcal {I}) + f (S \setminus \mathcal {I}) \leq | S \cap \mathcal {I} | \cdot \left(\frac {1}{2} + \epsilon\right) \cdot m + k \cdot 2 \epsilon m. \tag {16}
+$$
+
The first step follows from the definition of $f$ . The second step follows from Eq. (13) and Eq. (14). Combining Eq. (15) and Eq. (16), we conclude that for any $\alpha$ -approximate solution $S$ ,
+
+$$
\frac {f (S)}{f \left(S ^ {*}\right)} \geq \alpha = 15 \epsilon \Rightarrow \frac {\left| S \cap \mathcal {I} \right| \cdot \left(\frac {1}{2} + \epsilon\right) \cdot m + k \cdot 2 \epsilon m}{(k / 2) (1 / 2 - \epsilon) m} \geq 15 \epsilon \Rightarrow | S \cap \mathcal {I} | \geq \epsilon k .
+$$
+
We next prove Theorem 3.7.
+
Proof of Theorem 3.7. Suppose there exists an algorithm ALG that makes at most $R$ queries and outputs an $\alpha$ -approximate solution $S$ for the submodular maximization problem. Consider the following communication protocol: the protocol proceeds in $R$ rounds, where in the $r$ -th round ( $r \in [R]$ ), if ALG queries set $S_r$ , then the $m$ parties collectively compute the value of $f(S_r)$ . Since
+
+$$
+f (S _ {r}) = \sum_ {i \in S _ {r}} \sum_ {t \in [ m ]} X _ {t, i} = \sum_ {t \in [ m ]} \sum_ {i \in S _ {r}} X _ {t, i},
+$$
+
it suffices for the $t$ -th party to compute $\sum_{i \in S_r} X_{t,i}$ locally and write it on the blackboard. Given the knowledge of $S_1, \ldots, S_r$ and $f(S_1), \ldots, f(S_r)$ , the $m$ parties can simulate ALG to determine the next query $S_{r+1}$ , and therefore continue the protocol. Finally, the $m$ parties output the solution set $\widehat{\mathcal{I}} = S$ .
+
The communication cost of each round equals $m \cdot \log(n) = O(\log^2(n) / \epsilon^2)$ , and there is a sequence of $R$ queries, so there are $O(R\log^2(n) / \epsilon^2)$ bits of communication in total. Moreover, ALG guarantees that the output solution $\widehat{\mathcal{I}} = S$ is $\alpha = 15\epsilon$ approximate; by Lemma 3.8, we know that $|\widehat{\mathcal{I}} \cap \mathcal{I}| \geq \epsilon k$ . By Theorem 3.2, we must have
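
The simulation step can be sketched in a few lines of Python (our illustration, not the paper's code; the toy sizes `m`, `n` and the query set `S` are arbitrary): party $t$ holds row $X_t$ , and $f(S) = \sum_{i\in S}\sum_{t\in [m]} X_{t,i}$ is recovered by having each party post one local partial sum.

```python
import random

rng = random.Random(1)
m, n = 8, 32  # toy sizes (illustrative only)
X = [[rng.randint(0, 1) for _ in range(n)] for _ in range(m)]  # party t holds row X[t]

def f(S):
    """Centralized evaluation of the additive objective f(S)."""
    return sum(X[t][i] for t in range(m) for i in S)

def blackboard_eval(S):
    """Each party posts its local partial sum; the blackboard total equals f(S)."""
    posts = [sum(X[t][i] for i in S) for t in range(m)]
    return sum(posts)

S = {0, 3, 7, 20}
assert blackboard_eval(S) == f(S)
```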
+
+$$
+R \log^ {2} (n) / \epsilon^ {2} \geq \Omega (\epsilon^ {2} n / c) \Rightarrow R \geq \Omega (\epsilon^ {5} n / \log^ {2} (n)) = \Omega (\alpha^ {5} n / \log^ {2} (n)).
+$$
+
+We next prove Lemma 3.10 and Lemma 3.11.
+
Proof of Lemma 3.10. This follows directly from the Chernoff bound. For any $i \in \mathcal{I}$ , $X_{t,i} \sim B_{1/2}$ , and therefore
+
+$$
\Pr \left[ \left| \sum_ {t \in [ m ]} X _ {t, i} - m / 2 \right| \geq \epsilon m \right] \leq 2 \exp (- 2 m \epsilon^ {2}) = \frac {2}{n ^ {200}}
+$$
+
and for any $i\in [n]\backslash \mathcal{I}$ , $X_{t,i}\sim B_{\epsilon}$ , and therefore
+
+$$
\Pr \left[ \left| \sum_ {t \in [ m ]} X _ {t, i} - \epsilon m \right| \geq \epsilon m / 2 \right] \leq 2 \exp (- m \epsilon^ {2} / 2) = \frac {2}{n ^ {50}}.
+$$
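
As an empirical illustration (ours, not from the paper; the sizes `m`, `eps`, and `trials` are arbitrary), the concentration used above is easy to observe: a sum of $m$ fair coins stays within $\epsilon m$ of $m/2$ in every trial.

```python
import random

rng = random.Random(2)
m, eps, trials = 4000, 0.1, 200
deviations = 0
for _ in range(trials):
    s = sum(rng.randint(0, 1) for _ in range(m))  # a column sum under B_{1/2}
    if abs(s - m / 2) >= eps * m:
        deviations += 1
# The Chernoff bound predicts Pr <= 2*exp(-2*m*eps^2), which is astronomically
# small at these parameters, so no trial should deviate.
assert deviations == 0
```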
+
+Proof of Lemma 3.11. For the YES instance, we have
+
+$$
+\max _ {S ^ {*} \subseteq [ n ], | S ^ {*} | = k} f _ {\mathrm {y e s}} (S ^ {*}) \geq f _ {\mathrm {y e s}} (\mathcal {I}) \geq \min \left\{\sum_ {i \in \mathcal {I}} \sum_ {t \in [ m ]} X _ {t, i}, m k \right\} \geq (k / 2) (1 / 2 - \epsilon) m \geq \text {O P T}
+$$
+
where the second step follows from the definition of $f_{\mathrm{yes}}$ (see Eq. (2)), the third step follows from Eq. (4), and the last step holds by the definition of OPT.
+
+For the NO instance, for any set $S$ of size at most $k$ , we have
+
+$$
+\begin{array}{l} f _ {\mathrm {n o}} (S) \leq \min \left\{\sum_ {i \in S \cap \mathcal {I}} \sum_ {t \in [ m ]} X _ {t, i}, \frac {\alpha m k}{1 0} \right\} + \sum_ {i \in S \cap [ n ] \setminus \mathcal {I}} \sum_ {t \in [ m ]} X _ {t, i} + \frac {\alpha m k}{2 0} \\ \leq \alpha m k / 1 0 + k \cdot 2 \epsilon m + \alpha m k / 2 0 \leq \alpha \mathrm {O P T} \\ \end{array}
+$$
+
+where the first step follows from the definition of $f_{\mathrm{no}}$ (see Eq. (3)), and the second step holds due to Eq. (5) and $|S| \leq k$ . The last step follows from the choice of parameters.
+
+# B. Missing Proof from Section 4
+
+We first prove Lemma 4.2.
+
+Proof of Lemma 4.2. We focus on the non-trivial case of $t > 100\log (n) / \epsilon^2$ . By Chernoff bound, we have
+
+$$
\Pr [ b _ {t} \leq \widehat {b} _ {(1 + \epsilon) t} ] = \Pr \left[ | S \cap [ (1 + \epsilon) t ] | \leq \frac {100 \log (n)}{\epsilon^ {2}} \right] \leq \exp \left(- \epsilon^ {2} \cdot \frac {100 \log (n)}{\epsilon^ {2}} \cdot \frac {1}{2}\right) \leq \frac {1}{n ^ {50}}
+$$
+
+here the second step follows from Chernoff bound and
+
+$$
\mathbb {E} [ | S \cap [ (1 + \epsilon) t ] | ] = \frac {100 m \log (n)}{t \epsilon^ {2}} \cdot \frac {(1 + \epsilon) t}{m} = (1 + \epsilon) \cdot \frac {100 \log (n)}{\epsilon^ {2}}.
+$$
+
+Similarly,
+
+$$
\begin{array}{l} \Pr [ b _ {t} \geq \widehat {b} _ {(1 + \epsilon) ^ {- 1} t} ] = \Pr \left[ | S \cap [ (1 + \epsilon) ^ {- 1} t ] | \geq \frac {100 \log (n)}{\epsilon^ {2}} \right] \\ \leq \exp \left(- \left(\frac {\epsilon}{1 + \epsilon}\right) ^ {2} \cdot \frac {100 \log (n)}{(1 + \epsilon) \epsilon^ {2}} \cdot \frac {1}{3}\right) \leq \frac {1}{n ^ {4}}. \\ \end{array}
+$$
+
+where the second step follows from Chernoff bound and
+
+$$
\mathbb {E} [ | S \cap [ (1 + \epsilon) ^ {- 1} t ] | ] = \frac {100 m \log (n)}{t \epsilon^ {2}} \cdot \frac {t}{(1 + \epsilon) m} = \frac {100 \log (n)}{(1 + \epsilon) \epsilon^ {2}}.
+$$
+
We next prove Lemma 4.4 - Lemma 4.6.
+
+Proof of Lemma 4.4. For $r \leq R$ , the probability that there are no collisions among $[t_r]$ equals
+
+$$
1 \cdot \left(1 - \frac {k}{k _ {r} n}\right) \dots \left(1 - \frac {\left(t _ {r} - 1\right) k}{k _ {r} n}\right) \geq 1 - t _ {r} ^ {2} \cdot \frac {k}{k _ {r} n} \geq 1 - \frac {1}{100 R}
+$$
+
+(here the last inequality uses (6)).
+
For $r \in [R + 1 : R + R_1]$ , consider the random process in which $j = 1, 2, 3, \ldots, t_r$ are randomly placed into $A_{r,1}, \ldots, A_{r,nk_r / k}$ . Let $X_j$ indicate whether $j$ falls into the same set as some element $j' < j$ , i.e.,
+
+$$
X _ {j} = \left\{ \begin{array}{l l} 1 & j \in A _ {r, i}, j ^ {\prime} \in A _ {r, i} \text { for some } j ^ {\prime} < j, i \in [ n k _ {r} / k ] \\ 0 & \text {otherwise} \end{array} \right.
+$$
+
+We know that $\mathbb{E}[X_j] \leq t_r \cdot \frac{k}{k_r n}$ , and
+
+$$
\mathbb {E} \left[ \sum_ {j \leq t _ {r}} X _ {j} \right] \leq t _ {r} ^ {2} \cdot \frac {k}{k _ {r} n} = (k _ {r} \log^ {2} (n) / \epsilon) ^ {2} \cdot \frac {k}{k _ {r} n} = \frac {k \log^ {4} (n)}{n \epsilon^ {2}} \cdot k _ {r} \leq \epsilon k _ {r} / 4
+$$
+
+where the last step holds since we assume $k \leq \epsilon^8 n / \log^8 (n)$ .
+
By the Azuma-Hoeffding bound, we have
+
+$$
\Pr \left[ \sum_ {j \leq t _ {r}} X _ {j} \geq \epsilon k _ {r} / 2 \right] \leq \exp (- \epsilon k _ {r} / 12) \leq 1 / n ^ {5}.
+$$
+
+This completes the proof.
+
Proof of Lemma 4.5. For any $r \in [R + R_1]$ and any $i \in [nk_r / k]$ , we have
+
+$$
\mathbb {E} \left[ \sum_ {j \in A _ {r, i}} w _ {j} 1 _ {t _ {r} < j \leq k} \right] = \frac {k}{n k _ {r}} \sum_ {t _ {r} < j \leq k} w _ {j} \leq \frac {\log^ {2} (n)}{2 t _ {r}} \sum_ {j \leq k} w _ {j},
+$$
+
+where the second step follows from the choice of parameters. By Chernoff bound, we have
+
+$$
+\begin{array}{l} \Pr \left[ \sum_ {j \in A _ {r, i}} w _ {j} 1 _ {t _ {r} < j \leq k} \geq \frac {\log^ {2} (n)}{t _ {r}} \sum_ {j \leq k} w _ {j} \right] \leq \exp \left(- \sum_ {j \leq k} w _ {j} \log^ {2} (n) / 6 t _ {r} w _ {t _ {r}}\right) \\ \leq \exp (- \log^ {2} (n) / 6) \leq 1 / n ^ {6} \\ \end{array}
+$$
+
where the second step follows from $w_{t_r} \leq \frac{1}{t_r} \sum_{j \leq t_r} w_j \leq \frac{1}{t_r} \sum_{j \leq k} w_j$ .
+
+
+
Proof of Lemma 4.6. We have
+
+$$
\mathbb {E} \left[ \sum_ {j \in A _ {r, i}} w _ {j} \cdot 1 _ {j > k} \right] = \frac {k}{n k _ {r}} \sum_ {j > k} w _ {j}.
+$$
+
We apply the Chernoff bound. If $\frac{k}{nk_r}\sum_{j > k}w_j\geq \sqrt{k / k_r}\log (n)w_k$ , then we have
+
+$$
+\begin{array}{l} \Pr \left[ \left| \sum_ {j \in A _ {r, i}} w _ {j} \cdot 1 _ {j > k} - \frac {k}{n k _ {r}} \sum_ {j > k} w _ {j} \right| \leq \sqrt {k / k _ {r}} \log (n) w _ {k} \right] \\ \geq 1 - 2 \exp \left(- \frac {\log^ {2} (n) k / k _ {r}}{3 k \sum_ {j > k} w _ {j} / n k _ {r} w _ {k}}\right) \\ \geq 1 - \exp (- \log^ {2} (n) / 3) \geq 1 - \frac {1}{n ^ {5}}. \\ \end{array}
+$$
+
Otherwise, if $\frac{k}{nk_r}\sum_{j > k}w_j < \sqrt{k / k_r}\log (n)w_k$ , then we have
+
+$$
+\begin{array}{l} \Pr \left[ \left| \sum_ {j \in A _ {r, i}} w _ {j} \cdot 1 _ {j > k} - \frac {k}{n k _ {r}} \sum_ {j > k} w _ {j} \right| \leq \sqrt {k / k _ {r}} \log (n) w _ {k} \right] \\ \geq 1 - 2 \exp \left(- \log^ {2} (n) k / 3 k _ {r}\right) \\ \geq 1 - \exp (- \log^ {2} (n) / 3) \geq 1 - \frac {1}{n ^ {5}}. \\ \end{array}
+$$
+
+Now we can finish the proof of Lemma 4.3.
+
Proof of Lemma 4.3. Fix a value of $r$ . For each subset $i \in [nk_r / k]$ , we decompose $f(A_{r,i})$ into three parts
+
+$$
+f \left(A _ {r, i}\right) = \sum_ {j \in A _ {r, i}} w _ {j} = \sum_ {j \in A _ {r, i}} w _ {j} 1 _ {j \leq t _ {r}} + \sum_ {j \in A _ {r, i}} w _ {j} 1 _ {t _ {r} < j \leq k} + \sum_ {j \in A _ {r, i}} w _ {j} 1 _ {j > k}. \tag {17}
+$$
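
The decomposition in Eq. (17) can be sanity-checked numerically (our sketch, not from the paper; the weight vector, $t_r$ , $k$ , and the subset below are arbitrary illustrative choices):

```python
# The indicator ranges j <= t_r, t_r < j <= k, and j > k partition the index
# set, so the three partial sums recover f(A_{r,i}) exactly.
w = {j: 1.0 / j for j in range(1, 101)}  # arbitrary sorted weights w_1 >= w_2 >= ...
t_r, k = 5, 20  # illustrative parameter values
A = {2, 7, 15, 33, 80}  # an arbitrary subset A_{r,i}

f_A = sum(w[j] for j in A)
part1 = sum(w[j] for j in A if j <= t_r)
part2 = sum(w[j] for j in A if t_r < j <= k)
part3 = sum(w[j] for j in A if j > k)
assert abs(f_A - (part1 + part2 + part3)) < 1e-12
```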
+
+CASE 1: $r \in [R + 1 : R + R_1]$
+
We first consider the case $r \in [R + 1 : R + R_1]$ and prove Eq. (8). For the LHS of Eq. (8), let $\mathcal{I} \subseteq [nk_r / k]$ be the collection of subsets that together contain the top $(1 + 2\epsilon)k_{r}$ elements, i.e., $[(1 + 2\epsilon)k_{r}] \subseteq \cup_{i \in \mathcal{I}} A_{r,i}$ . By Lemma 4.4, we know that $|\mathcal{I}| \geq (1 + \epsilon)k_{r}$ . We have
+
+$$
+\begin{array}{l} b _ {r} \geq \min _ {i \in \mathcal {I}} \left\{\sum_ {j \in A _ {r, i}} w _ {j} \right\} \\ = \min _ {i \in \mathcal {I}} \left\{\sum_ {j \in A _ {r, i}} w _ {j} 1 _ {j \leq t _ {r}} + \sum_ {j \in A _ {r, i}} w _ {j} 1 _ {t _ {r} < j \leq k} + \sum_ {j \in A _ {r, i}} w _ {j} 1 _ {j > k} \right\} \\ \geq w _ {(1 + 2 \epsilon) k _ {r}} + 0 + \frac {k}{n k _ {r}} \sum_ {j > k} w _ {j} - \sqrt {k / k _ {r}} \log (n) w _ {k} \\ \geq w _ {(1 + 2 \epsilon) k _ {r}} + \frac {k}{n k _ {r}} \sum_ {j > k} w _ {j} - \frac {1}{k _ {r}} \sum_ {j \leq k} w _ {j} \tag {18} \\ \end{array}
+$$
+
The first step follows from the guarantee of QUANTILEESTIMATE (see Lemma 4.2), the third step follows from Lemma 4.6, and the last step follows from $k_r \leq k / \log^2(n)$ .
+
+Meanwhile, we have
+
+$$
\frac {k}{n k _ {r}} f ([ n ]) = \frac {k}{n k _ {r}} \sum_ {j = 1} ^ {n} w _ {j} = \frac {k}{n k _ {r}} \sum_ {j \leq k} w _ {j} + \frac {k}{n k _ {r}} \sum_ {j > k} w _ {j} \leq \frac {1}{k _ {r}} \sum_ {j \leq k} w _ {j} + \frac {k}{n k _ {r}} \sum_ {j > k} w _ {j} \tag {19}
+$$
+
+Combining Eq. (18)(19), we have
+
+$$
c _ {r} = b _ {r} - \frac {k}{n k _ {r}} f ([ n ]) \geq w _ {(1 + 2 \epsilon) k _ {r}} - \frac {2}{k _ {r}} \sum_ {j \leq k} w _ {j}.
+$$
+
For the RHS of Eq. (8), let $\mathcal{I}_r \subseteq [nk_r / k]$ be the top $(1 - \epsilon)k_r$ subsets of $\{A_{r,i}\}_{i\in [nk_r / k]}$ . By Lemma 4.4, we know there exists $i_r \in \mathcal{I}_r$ such that either $|A_{r,i_r} \cap [t_r]| = 0$ , or $A_{r,i_r} \cap [t_r] = \{j_r\}$ with $j_r \geq (1 - 2\epsilon)k_r$ . Therefore, we have
+
+$$
\begin{array}{l} b _ {r} \leq \sum_ {j \in A _ {r, i _ {r}}} w _ {j} \\ = \sum_ {j \in A _ {r, i _ {r}}} w _ {j} 1 _ {j \leq t _ {r}} + \sum_ {j \in A _ {r, i _ {r}}} w _ {j} 1 _ {t _ {r} < j \leq k} + \sum_ {j \in A _ {r, i _ {r}}} w _ {j} 1 _ {j > k} \\ \leq w _ {(1 - 2 \epsilon) k _ {r}} + \frac {\log^ {2} (n)}{t _ {r}} \sum_ {j \leq k} w _ {j} + \frac {k}{n k _ {r}} \sum_ {j > k} w _ {j} + \sqrt {k / k _ {r}} \log (n) w _ {k} \\ \leq w _ {(1 - 2 \epsilon) k _ {r}} + \frac {k}{n k _ {r}} \sum_ {j > k} w _ {j} + \frac {2}{k _ {r}} \sum_ {j \leq k} w _ {j} \\ \end{array}
+$$
+
The first step follows from the guarantee of QUANTILEESTIMATE (see Lemma 4.2), the third step follows from Lemma 4.5 and Lemma 4.6, and the last step follows from the choice of parameters.
+
+Hence, we have
+
+$$
\begin{array}{l} c _ {r} = b _ {r} - \frac {k}{n k _ {r}} f ([ n ]) \\ \leq w _ {(1 - 2 \epsilon) k _ {r}} + \frac {k}{n k _ {r}} \sum_ {j > k} w _ {j} + \frac {2}{k _ {r}} \sum_ {j \leq k} w _ {j} - \frac {k}{n k _ {r}} \sum_ {j > k} w _ {j} \\ \leq w _ {(1 - 2 \epsilon) k _ {r}} + \frac {2}{k _ {r}} \sum_ {j \leq k} w _ {j}. \\ \end{array}
+$$
+
+CASE 2: $r \leq R$
+
+We next study the case that $r \leq R$ . The proof is similar. For the LHS of Eq. (7), let $\mathcal{I} \subseteq [nk_r / k]$ be the collection of subsets such that $[k_r] \subseteq \bigcup_{i \in \mathcal{I}} A_{r,i}$ . By Lemma 4.4, we know that $|\mathcal{I}| = pk_r$ . We have
+
+$$
+\begin{array}{l} b _ {r} \geq \min _ {i \in \mathcal {I}} \left\{\sum_ {j \in A _ {r, i}} w _ {j} \right\} \\ = \min _ {i \in \mathcal {I}} \left\{\sum_ {j \in A _ {r, i}} w _ {j} 1 _ {j \leq t _ {r}} + \sum_ {j \in A _ {r, i}} w _ {j} 1 _ {t _ {r} < j \leq k} + \sum_ {j \in A _ {r, i}} w _ {j} 1 _ {j > k} \right\} \\ \geq w _ {k _ {r}} + 0 + \frac {k}{n k _ {r}} \sum_ {j > k} w _ {j} - \sqrt {k / k _ {r}} \log (n) w _ {k} \\ \geq w _ {k _ {r}} + \frac {k}{n k _ {r}} \sum_ {j > k} w _ {j} - \frac {\epsilon}{R} \sum_ {j \leq k} w _ {j} \tag {20} \\ \end{array}
+$$
+
+The first step follows from the guarantee of QUANTILEESTIMATE (see Lemma 4.2), the third step follows from Lemma 4.5, and the last step follows from the choice of parameters.
+
+Meanwhile, we have
+
+$$
+\frac {k}{n k _ {r}} f ([ n ]) = \frac {k}{n k _ {r}} \sum_ {j = 1} ^ {n} w _ {j} = \frac {k}{n k _ {r}} \sum_ {j \leq k} w _ {j} + \frac {k}{n k _ {r}} \sum_ {j > k} w _ {j} \leq \frac {\epsilon}{R} \sum_ {j \leq k} w _ {j} + \frac {k}{n k _ {r}} \sum_ {j > k} w _ {j} \tag {21}
+$$
+
+Combining Eq. (20) and (21), we obtain
+
+$$
+c _ {r} = b _ {r} - \frac {k}{n k _ {r}} f ([ n ]) \geq w _ {k _ {r}} - \frac {2 \epsilon}{R} \sum_ {j \leq k} w _ {j}.
+$$
+
+For the RHS of Eq. (7), let $\mathcal{I}_r \subseteq [nk_r / k]$ be the top $k_r$ subsets of $\{A_{r,i}\}_{i\in [nk_r / k]}$ . By Lemma 4.4, we know that there exists $i_r \in \mathcal{I}_r$ such that either $|A_{r,i_r} \cap [t_r]| = 0$ , or $\{j_r\} = A_{r,i_r} \cap [t_r]$ with $j_r \geq k_r$ . Therefore, we have
+
+$$
+\begin{array}{l} b _ {r} \leq \sum_ {j \in A _ {r, i _ {r}}} w _ {j} \\ = \sum_ {j \in A _ {r, i _ {r}}} w _ {j} 1 _ {j \leq t _ {r}} + \sum_ {j \in A _ {r, i _ {r}}} w _ {j} 1 _ {t _ {r} < j \leq k} + \sum_ {j \in A _ {r, i _ {r}}} w _ {j} 1 _ {j > k} \\ \leq w _ {k _ {r}} + \frac {\log^ {2} (n)}{t _ {r}} \sum_ {j \leq k} w _ {j} + \frac {k}{n k _ {r}} \sum_ {j > k} w _ {j} + \sqrt {k / k _ {r}} \log (n) w _ {k} \\ \leq w _ {k _ {r}} + \frac {k}{n k _ {r}} \sum_ {j > k} w _ {j} + \frac {2 \epsilon}{R} \sum_ {j \leq k} w _ {j} \\ \end{array}
+$$
+
+The first step follows from the guarantee of QUANTILEESTIMATE (see Lemma 4.2), the third step follows from Lemma 4.5 and Lemma 4.6, and the last step follows from the choice of parameters.
+
+Consequently, we have
+
+$$
+\begin{array}{l} c _ {r} = b _ {r} - \frac {k}{n k _ {r}} f ([ n ]) \\ \leq w _ {k _ {r}} + \frac {k}{n k _ {r}} \sum_ {j > k} w _ {j} + \frac {2 \epsilon}{R} \sum_ {j \leq k} w _ {j} - \frac {k}{n k _ {r}} \sum_ {j > k} w _ {j} \\ \leq w _ {k _ {r}} + \frac {2 \epsilon}{R} \sum_ {j \leq k} w _ {j}. \\ \end{array}
+$$
+
+This completes the proof.
+
+Now we can wrap up the proof of Theorem 4.1.
+
+Proof of Theorem 4.1. We prove that LINEARSUM $(f,k)$ approximates the optimal value within a $(1\pm O(\log (n)\epsilon))$ factor with probability at least $4 / 5$ , and that it draws at most $(n / k)\cdot \mathrm{poly}(\log (n),\epsilon^{-1})$ samples.
+
+For the correctness guarantee, we condition on the event of Lemma 4.3, which holds with probability at least $5/6$ . For $r \leq R$ , by Lemma 4.3, we have
+
+$$
+\sum_ {r = 1} ^ {R} \left(k _ {r} - k _ {r - 1}\right) c _ {r} = \sum_ {j = 1} ^ {k _ {R}} w _ {j} \pm 2 \epsilon \sum_ {j \leq k} w _ {j}. \tag {22}
+$$
+
+For $r\in [R + 1,R + R_1]$ , we have
+
+$$
+\begin{array}{l} \sum_ {r = R + 1} ^ {R + R _ {1}} \left(k _ {r} - k _ {r - 1}\right) c _ {r} \leq \sum_ {r = R + 1} ^ {R + R _ {1}} \left(k _ {r} - k _ {r - 1}\right) w _ {(1 - 2 \epsilon) k _ {r}} + 2 \log (n) \epsilon \sum_ {j \leq k} w _ {j} \\ \leq \sum_ {j = k _ {R} + 1} ^ {k _ {R + R _ {1}}} w _ {j} + O (\log (n) \epsilon) \sum_ {j \leq k} w _ {j} \tag {23} \\ \end{array}
+$$
+
+where the first step follows from Lemma 4.3 and $k_{r} - k_{r - 1}\leq \epsilon k_{r}$ . Similarly, we have
+
+$$
+\begin{array}{l} \sum_ {r = R + 1} ^ {R + R _ {1}} \left(k _ {r} - k _ {r - 1}\right) c _ {r} \geq \sum_ {r = R + 1} ^ {R + R _ {1}} \left(k _ {r} - k _ {r - 1}\right) w _ {(1 + 2 \epsilon) k _ {r}} - 2 \log (n) \epsilon \sum_ {j \leq k} w _ {j} \\ \geq \sum_ {j = k _ {R} + 1} ^ {k _ {R + R _ {1}}} w _ {j} - O (\log (n) \epsilon) \sum_ {j \leq k} w _ {j} \tag {24} \\ \end{array}
+$$
+
+Finally, for $r \in [R + R_1 + 1 : R + R_1 + R_2]$ , by the guarantee of QUANTILEESTIMATE (see Lemma 4.2),
+
+$$
+\sum_ {r = R + R _ {1} + 1} ^ {R + R _ {1} + R _ {2}} \left(k _ {r} - k _ {r - 1}\right) c _ {r} = \sum_ {j = k _ {R + R _ {1}} + 1} ^ {k _ {R + R _ {1} + R _ {2}}} w _ {j} \pm O (\epsilon) \cdot \sum_ {j \leq k} w _ {j} \tag {25}
+$$
+
+Combining Eq. (22)-(25), we have
+
+$$
+\sum_ {r = 1} ^ {R + R _ {1} + R _ {2}} \left(k _ {r} - k _ {r - 1}\right) c _ {r} = \sum_ {j = 1} ^ {k} w _ {j} \pm O (\log (n) \epsilon) \cdot \sum_ {j \leq k} w _ {j}
+$$
+
+For the sample complexity, for $r \in [R + R_1]$ , the total number of samples to obtain $\{b_r\}_{r \in [R + R_1]}$ is at most
+
+$$
+\sum_ {r = 1} ^ {R + R _ {1}} 100 \left(n k _ {r} / k\right) \log (n) / (k _ {r} \epsilon^ {2}) = (n / k) \cdot \mathrm {poly} (\log (n), \epsilon^ {- 1}),
+$$
+
+for $r \in [R + R_1 + 1 : R + R_1 + R_2]$ , the total number of samples to obtain $\{b_r\}_{r \in [R + R_1 + 1 : R + R_1 + R_2]}$ is at most
+
+$$
+\sum_ {r = R + R _ {1} + 1} ^ {R + R _ {1} + R _ {2}} 100 n \log (n) / (k _ {r} \epsilon^ {2}) = (n / k) \cdot \mathrm {poly} (\log (n), \epsilon^ {- 1}).
+$$
+
+This completes the proof.
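As a quick numeric check of the first sample-count sum (hypothetical values of $n, k, \epsilon, R, R_1$ and an arbitrary geometric schedule $k_r$; the point is that $k_r$ cancels from each summand):

```python
import math

# Hypothetical parameter choices for illustration (not the paper's exact schedule).
n, k, eps = 10**6, 10**3, 0.1
R, R1 = 20, 30
# Each summand 100*(n*k_r/k)*log(n)/(k_r*eps**2) is independent of k_r,
# so the total is (R+R1) * 100*(n/k)*log(n)/eps**2 = (n/k)*poly(log n, 1/eps).
k_rs = [2**r for r in range(1, R + R1 + 1)]

total = sum(100 * (n * kr / k) * math.log(n) / (kr * eps**2) for kr in k_rs)
closed_form = 100 * (R + R1) * (n / k) * math.log(n) / eps**2
assert math.isclose(total, closed_form)
```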
+
+Next we prove Theorem 4.7. We reduce from a decision version of the distributed index detection problem.
+
+Definition B.1 (Distributed index detection). Let $n, m$ be input parameters, and let $\mathcal{D}_0, \mathcal{D}_1$ be two Bernoulli distributions with means $\mu_0, \mu_1$ , respectively. There are $m$ parties, who communicate in the blackboard model. The input of the $t$ -th party $(t \in [m])$ is a vector $X_{t} \in \{0, 1\}^{n}$ such that
+
+- YES Instance: $X_{t,i} \sim \mathcal{D}_0$ for $i \in [n] \backslash \{i^*\}$ and $X_{t,i^*} \sim \mathcal{D}_1$ ;
+- NO Instance: $X_{t,i} \sim \mathcal{D}_0$ for all $i \in [n]$ .
+
+The goal is to distinguish between the YES/NO instance.
+
+The communication complexity of distributed index detection is at least $\Omega(n / (c \log(n)))$ , because there is an $\Omega(n / c)$ lower bound for finding the index $i^*$ in the YES instance (taking $k = 1$ , $\epsilon = 1/2$ in Theorem 3.2), and one could find the index $i^*$ by performing binary search using $O(\log(n))$ calls to distributed index detection.
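The binary-search reduction can be sketched as follows (the `detect` oracle interface is hypothetical; here it is instantiated exactly, for illustration only):

```python
# A hypothetical oracle interface: detect(lo, hi) returns True iff the planted
# index i_star lies in [lo, hi). Binary search then recovers i_star with
# O(log n) oracle calls, matching the reduction described above.
def find_index(n, detect):
    lo, hi = 0, n
    calls = 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        calls += 1
        if detect(lo, mid):
            hi = mid          # planted index is in the left half
        else:
            lo = mid          # planted index is in the right half
    return lo, calls

# Toy instantiation: a perfect oracle for a fixed planted index.
i_star = 713
locate, calls = find_index(1024, lambda lo, hi: lo <= i_star < hi)
assert locate == i_star and calls == 10  # log2(1024) = 10 calls
```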
+
+Proof of Theorem 4.7. Given $n, k, \alpha$ , suppose there exists an algorithm ALG that makes at most $R$ queries and approximates the value of optimal solution. Consider an instance of distributed index detection with
+
+$$
+\epsilon = \alpha / 15, \qquad n ^ {\prime} = n / k, \qquad m = \frac {10 \log (n)}{\epsilon^ {2}}, \qquad \mu_ {0} = \epsilon , \qquad \mu_ {1} = 1 / 2.
+$$
+
+Let $X_{1},\ldots ,X_{m}\in \{0,1\}^{n^{\prime}}$ be the input of distributed index detection. Consider the following set function $f: 2^{[n]}\to \mathbb{R}_{+}$ :
+
+$$
+f (S) = \sum_ {i \in S} f (i) \quad \text {and} \quad f (i) = \sum_ {t \in [ m ]} X _ {t, i \bmod n ^ {\prime}}.
+$$
+
+It is easy to see that $f$ is additive and monotone. By Lemma 3.10, in the YES instance, taking $S^{*} = \{i : i \equiv i^{*} \pmod {n'}\}$ , we have
+
+$$
+f _ {\mathrm {yes}} \left(S ^ {*}\right) \geq k f \left(i ^ {*}\right) \geq (1 / 2 - \epsilon) m k. \tag {26}
+$$
+
+In the NO instance, for any set $S$ of size at most $k$ , we have
+
+$$
+f _ {\mathrm {n o}} (S) \leq 2 \epsilon m k. \tag {27}
+$$
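A toy instantiation of this construction (all sizes and parameters below are illustrative), checking the additivity that underlies Eq. (26):

```python
import numpy as np

# Toy YES instance of Definition B.1 and the reduction function f.
rng = np.random.default_rng(0)
n, k = 40, 4
n_prime, m = n // k, 60          # n' = n/k coordinates per party, m parties
mu0, mu1, i_star = 0.1, 0.5, 3

# Coordinate i_star is Bernoulli(mu1); all other coordinates are Bernoulli(mu0).
X = (rng.random((m, n_prime)) < mu0).astype(int)
X[:, i_star] = (rng.random(m) < mu1).astype(int)

def f(S):
    # f(S) = sum_{i in S} f(i), with f(i) = sum_t X[t, i mod n'].
    return sum(int(X[:, i % n_prime].sum()) for i in S)

S_star = [i for i in range(n) if i % n_prime == i_star]
assert len(S_star) == k
# Additivity: f(S*) = k * f({i_star}), since all members share the same residue.
assert f(S_star) == k * f([i_star])
```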
+
+Let $\mathrm{OPT} = (1 / 2 - \epsilon)mk$ . Consider the following communication protocol, which proceeds in $R$ rounds: in the $r$ -th round ( $r \in [R]$ ), when ALG queries a set $S_r$ , the $m$ parties collectively compute the value of $f(S_r)$ . Since
+
+$$
+f (S _ {r}) = \sum_ {i \in S _ {r}} \sum_ {t \in [ m ]} X _ {t, i \bmod n ^ {\prime}} = \sum_ {t \in [ m ]} \sum_ {i \in S _ {r}} X _ {t, i \bmod n ^ {\prime}},
+$$
+
+it suffices for the $t$ -th party to compute $\sum_{i \in S_r} X_{t,i \bmod n'}$ locally and write it on the blackboard. Given the knowledge of $S_1, \ldots, S_r, f(S_1), \ldots, f(S_r)$ , the $m$ parties can simulate ALG to determine the next query $S_{r+1}$ and, therefore, continue the protocol. Finally, the $m$ parties can distinguish between (1) $f(S^*) \geq \mathrm{OPT} = (1/2 - \epsilon)mk$ and (2) $f(S^*) \leq \alpha \mathrm{OPT} = \alpha (1/2 - \epsilon)mk$ , and therefore resolve the distributed index detection task (see Eq. (26) and (27)).
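The exchange of summations is what makes the blackboard protocol work; a minimal sketch (arbitrary toy data) verifying that the sum of the $m$ local posts equals $f(S_r)$:

```python
import numpy as np

# Each party t posts its local sum over the queried set; the sum of the m
# posts on the blackboard equals f(S_r) by exchanging the order of summation.
rng = np.random.default_rng(1)
m, n_prime = 8, 10
X = rng.integers(0, 2, size=(m, n_prime))
S_r = [0, 3, 10, 13, 17]  # an arbitrary query set (indices taken mod n')

f_Sr = sum(X[t, i % n_prime] for i in S_r for t in range(m))
posts = [sum(X[t, i % n_prime] for i in S_r) for t in range(m)]  # local sums
assert sum(posts) == f_Sr
```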
+
+The communication cost at each round equals $m \cdot \log(n) = O(\log^2(n) / \epsilon^2)$ , and there is a sequence of $R$ queries, so there are $O(R\log^2(n) / \epsilon^2)$ bits of communication in total. Hence, we have
+
+$$
+R \log^ {2} (n) / \epsilon^ {2} \gtrsim \epsilon n ^ {\prime} / \log (n) \quad \Longrightarrow \quad R \gtrsim \epsilon^ {3} n ^ {\prime} / \log^ {3} (n) = \Omega \left(\alpha^ {3} n / (k \log^ {3} (n))\right).
+$$
+
+
\ No newline at end of file
diff --git a/anearlinearquerylowerboundforsubmodularmaximization/images.zip b/anearlinearquerylowerboundforsubmodularmaximization/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..00ad8b277a151c3339e0292be17ed9bfb0e1ede0
--- /dev/null
+++ b/anearlinearquerylowerboundforsubmodularmaximization/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fb2c4732ef1128c6aa12278bdaa088b9b7dbf176b356a9572b15184926c7b4e7
+size 815143
diff --git a/anearlinearquerylowerboundforsubmodularmaximization/layout.json b/anearlinearquerylowerboundforsubmodularmaximization/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..8b854b73850828f0890b8b3585418c1acbf7f0fe
--- /dev/null
+++ b/anearlinearquerylowerboundforsubmodularmaximization/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3a0afbbed59433900dc57d896332353d080379873f7ed2c51f4249ebe8a8796d
+size 1159704
diff --git a/anearoptimalsingleloopstochasticalgorithmforconvexfinitesumcoupledcompositionaloptimization/caf47eee-42a3-47c6-8570-41b24397e746_content_list.json b/anearoptimalsingleloopstochasticalgorithmforconvexfinitesumcoupledcompositionaloptimization/caf47eee-42a3-47c6-8570-41b24397e746_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..18120468026a23007dbcbd1a97741bb77a3dd8c9
--- /dev/null
+++ b/anearoptimalsingleloopstochasticalgorithmforconvexfinitesumcoupledcompositionaloptimization/caf47eee-42a3-47c6-8570-41b24397e746_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cf29d63d07170ddbccd26f2f20e890cef6d7c65a557ba2494d2022b1722e9bf8
+size 274490
diff --git a/anearoptimalsingleloopstochasticalgorithmforconvexfinitesumcoupledcompositionaloptimization/caf47eee-42a3-47c6-8570-41b24397e746_model.json b/anearoptimalsingleloopstochasticalgorithmforconvexfinitesumcoupledcompositionaloptimization/caf47eee-42a3-47c6-8570-41b24397e746_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..63727624768c52e58cc5a0481cb0b3167128e50a
--- /dev/null
+++ b/anearoptimalsingleloopstochasticalgorithmforconvexfinitesumcoupledcompositionaloptimization/caf47eee-42a3-47c6-8570-41b24397e746_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0cd168c5c68bf35a820b2888dff92cdfc4c5caa5bc3579ae3331066eb33cf1be
+size 312960
diff --git a/anearoptimalsingleloopstochasticalgorithmforconvexfinitesumcoupledcompositionaloptimization/caf47eee-42a3-47c6-8570-41b24397e746_origin.pdf b/anearoptimalsingleloopstochasticalgorithmforconvexfinitesumcoupledcompositionaloptimization/caf47eee-42a3-47c6-8570-41b24397e746_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..3452bafbdf61529ee3bf5428d14e6ae455b71d1b
--- /dev/null
+++ b/anearoptimalsingleloopstochasticalgorithmforconvexfinitesumcoupledcompositionaloptimization/caf47eee-42a3-47c6-8570-41b24397e746_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f708b762a32aef4e9335bd1bf8129c465550968ac4ff765c9e9425cd9089abae
+size 1552152
diff --git a/anearoptimalsingleloopstochasticalgorithmforconvexfinitesumcoupledcompositionaloptimization/full.md b/anearoptimalsingleloopstochasticalgorithmforconvexfinitesumcoupledcompositionaloptimization/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..23bebd4e001665910cb9cf76cb6a0cda3f46c183
--- /dev/null
+++ b/anearoptimalsingleloopstochasticalgorithmforconvexfinitesumcoupledcompositionaloptimization/full.md
@@ -0,0 +1,1139 @@
+# A Near-Optimal Single-Loop Stochastic Algorithm for Convex Finite-Sum Coupled Compositional Optimization
+
+Bokun Wang $^{1}$ Tianbao Yang $^{1}$
+
+# Abstract
+
+This paper studies a class of convex Finite-sum Coupled Compositional Optimization (cFCCO) problems for empirical X-risk minimization with applications including group distributionally robust optimization (GDRO) and learning with imbalanced data. To better address these problems, we introduce an efficient single-loop primal-dual block-coordinate stochastic algorithm called ALEXR. The algorithm employs block-coordinate stochastic mirror ascent with extrapolation for the dual variable and stochastic proximal gradient descent updates for the primal variable. We establish the convergence rates of ALEXR in both convex and strongly convex cases under smoothness and non-smoothness conditions of involved functions, which not only improve the best rates in previous works on smooth cFCCO problems but also expand the realm of cFCCO for solving more challenging non-smooth problems such as the dual form of GDRO. Finally, we derive lower complexity bounds, demonstrating the (near-)optimality of ALEXR within a broad class of stochastic algorithms for cFCCO. Experimental results on GDRO and partial Area Under the ROC Curve (pAUC) maximization demonstrate the promising performance of our algorithm.
+
+# 1. Introduction
+
+We revisit the following regularized finite-sum coupled compositional optimization problem (Wang & Yang, 2022):
+
+$$
+\min _ {x \in \mathcal {X}} F (x), \quad F (x) := \frac {1}{n} \sum_ {i = 1} ^ {n} f _ {i} \left(g _ {i} (x)\right) + r (x), \tag {1}
+$$
+
+where $\mathcal{X} \subset \mathbb{R}^d$ is a convex and closed set, the inner function $g_i(x) = \mathbb{E}_{\zeta_i \sim \mathbb{P}_i}[g_i(x; \zeta_i)]$ is in expectation form, and the objective function $F(x)$ is convex.
+
+In this paper, we study a special class of convex FCCO problems, termed cFCCO, which has a specific structure: The inner function $g_{i}:\mathcal{X}\to \mathbb{R}^{m}$ is convex and accessible via stochastic oracles (typically a loss function), while the outer function $f_{i}:\mathbb{R}^{m}\rightarrow \mathbb{R}$ is a convex and deterministic simple function (transformation) that is monotonically non-decreasing. Besides, the regularizer $r$ is also convex. Although the above condition is sufficient but not necessary for the convexity of $F$ , fully exploiting it allows us to design a single-loop algorithm that achieves better complexity than previous algorithms and applies to non-smooth problems with convergence guarantees. Moreover, we can prove its (near-)optimality by establishing lower complexity bounds.
+
+Notably, the special structure of cFCCO arises in various machine learning applications, including group distributionally robust optimization (GDRO) (Sagawa et al., 2019; Soma et al., 2022; Zhang et al., 2023), sub-population fairness (Martinez et al., 2021), partial area under the ROC curve (pAUC) maximization (Zhu et al., 2022), and bipartite ranking (Rudin, 2009). We postpone detailed descriptions of some of these problems to Section 5 and Appendix B.
+
+While several algorithms have been developed to solve the convex FCCO problem in (1) with theoretical guarantees of global convergence (Wang & Yang, 2022; Jiang et al., 2022), these methods are limited by critical drawbacks: First, these algorithms only have convergence guarantees under the strong assumption that $f_{i}$ and $g_{i}$ are both smooth and Lipschitz continuous. Second, their convergence rates have poor dependence on the target optimization error $\epsilon$ , batch sizes, and, in the strongly convex case, the condition number. Third, these algorithms rely on nested inner loops where the number of iterations in each loop depends on problem-specific constants, increasing the difficulty of implementation.
+
+To address these limitations, we leverage the structure of the problem. Using the convex conjugate of $f_{i}$ denoted
+
+by $f_{i}^{*}$ , the cFCCO problem (1) can be reformulated into a convex-concave min-max problem:
+
+$$
+\min _ {x \in \mathcal {X}} \max _ {y \in \mathcal {Y}} \left\{\frac {1}{n} \sum_ {i = 1} ^ {n} \left[ g _ {i} (x) ^ {\top} y ^ {(i)} - f _ {i} ^ {*} (y ^ {(i)}) \right] + r (x) \right\}, \tag {2}
+$$
+
+where $y^{(i)} \in \mathcal{Y}_i \subseteq \mathbb{R}_+^m$ is the $i$ -th block of $y$ , and $\mathcal{Y} = \mathcal{Y}_1 \times \ldots \times \mathcal{Y}_n \subseteq \mathbb{R}_+^{nm}$ . This reformulation is motivated by state-of-the-art primal-dual methods for empirical risk minimization (ERM) (Alacaoglu et al., 2022) and the general convex-concave min-max optimization problem (Zhang et al., 2024). However, the problem in (2) presents unique challenges: (i) $g_i(x)$ may be neither linear nor deterministic, unlike that assumed in Alacaoglu et al. (2022); (ii) when $n$ is large, updating the entire dual variable $y$ in each iteration as in Zhang et al. (2024) becomes computationally prohibitive, motivating the block-coordinate dual update in our approach.
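As a concrete worked instance of the conjugate reformulation, take the CVaR-type outer function $f_i(s) = \frac{1}{\alpha}\max\{s, 0\}$ from the GDRO application in Section 5:

$$
f_i^*(v) = \sup_{s \in \mathbb{R}} \left\{ s v - \tfrac{1}{\alpha} \max\{s, 0\} \right\} = \begin{cases} 0, & v \in [0, 1/\alpha], \\ +\infty, & \text{otherwise}, \end{cases}
$$

so the $i$-th dual block ranges over $\mathcal{Y}_i = [0, 1/\alpha]$ and the inner maximization in (2) is linear in $y^{(i)}$ on that interval.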
+
+Our contributions can be summarized as follows:
+
+- We propose a primal-dual block-coordinate stochastic algorithm named ALEXR to efficiently solve the cFCCO problems, which only requires $O(1)$ batch size per iteration.
+- In both merely and strongly convex cases, the iteration complexities of ALEXR improve upon the iteration complexities in previous works (Wang & Yang, 2022; Jiang et al., 2022) on cFCCO problems with smooth $f_{i}$ and $g_{i}$ (See Table 1 for a detailed comparison). Besides, we also provide the convergence analysis of ALEXR for cFCCO problems with non-smooth $f_{i}$ and $g_{i}$ in convex and strongly convex cases.
+- For cFCCO problems with smooth and non-smooth $f_{i}$ , we prove lower complexity bounds for an abstract first-order update scheme, which covers our ALEXR and previous algorithms as special cases. These lower bounds demonstrate the near-optimality of our proposed algorithm.
+
+# 2. Related Work
+
+The cFCCO problem in (1) is related to several well-studied optimization problems. In Appendix A, we discuss the literature related to the min-max reformulation in (2).
+
+Convex Stochastic Compositional Optimization. Several papers have studied the convex stochastic compositional optimization (SCO) problem $\min_{x\in \mathcal{X}}F(x)\coloneqq$ $\mathbb{E}_{\xi}[f_{\xi}(\mathbb{E}_{\zeta}[g_{\zeta}(x)])]$ , where $\xi$ and $\zeta$ are mutually independent. When $F$ is merely convex and $f_{\xi}$ is smooth, the SCGD algorithm in Wang et al. (2017a) requires $O(\frac{1}{\epsilon^4})$ iterations to find an $x_{\mathrm{out}}$ s.t. $\mathbb{E}[F(x_{\mathrm{out}}) - \min_x F(x)]\leq \epsilon$ . When $F$ is $\mu$ -strongly convex with unique minimizer $x_*$ , SCGD requires $O(\frac{1}{\mu^2\epsilon^{1.5}})$ iterations to make $\frac{\mu}{2}\mathbb{E}\| x_{\mathrm{out}} - x_*\|_2^2\leq \epsilon$ . Further exploiting the smoothness of $g_{\zeta}$ , Wang et al. (2017b) proposed ASC-PG, which improves the convergence rate to
+
+$O\left(\frac{1}{\epsilon^{3.5}}\right)$ for merely convex SCO and $O\left(\frac{1}{\mu \epsilon^{1.25}}\right)$ for strongly convex SCO. When $f$ is convex and monotonically non-decreasing and $g$ is convex, Zhang & Lan (2020) reformulated the convex SCO problem as a min-max-max problem and proposed the stochastic sequential dual (SSD) method to obtain the optimal $O\left(\frac{1}{\epsilon^2}\right)$ rate in the merely convex case and $O\left(\frac{1}{\mu \epsilon}\right)$ rate in the $\mu$ -strongly convex and smooth case. They also showed that the $O\left(\frac{1}{\epsilon^2}\right)$ rate is optimal when $f_i$ is non-smooth, even if $F$ is strongly convex. However, these algorithms for SCO are inapplicable to cFCCO. In fact, FCCO introduces challenges beyond those in SCO: both the inner function $g_i$ and the distribution $\mathbb{P}_i$ in FCCO depend on the outer index $i$ , whereas in SCO $\xi$ and $\zeta$ are mutually independent and the inner function $\mathbb{E}_{\zeta}[g_{\zeta}(x)]$ does not depend on $\xi$ .
+
+Conditional stochastic optimization and FCCO. Hu et al. (2020) studied a more general class of problems called conditional stochastic optimization: $\min_{x\in \mathcal{X}}F(x)\coloneqq$ $\mathbb{E}_{\xi}[f_{\xi}(\mathbb{E}_{\zeta |\xi}[g_{\zeta}(x;\xi)])]$ . They proposed biased SGD (BSGD) with large batch sizes. For convex and smooth $F$ , BSGD requires $O(\frac{1}{\epsilon^2})$ iterations and a large batch size of $B = O(\frac{1}{\epsilon})$ per iteration to find an $\epsilon$ -accurate solution. For a $\mu$ -strongly convex $F$ , BSGD requires $O(\frac{1}{\mu\epsilon})$ iterations and a large batch size of $B = O(\frac{1}{\epsilon})$ per iteration to find an $\epsilon$ -accurate solution. For the FCCO problem with convex $F$ , Wang & Yang (2022) used the moving-average estimator and the restarting trick to find an $\epsilon$ -accurate solution with only $O(1)$ batch size per iteration. In particular, restarted SOX has an iteration complexity of $O(\frac{n}{\mu^2BS\epsilon})$ for the $\mu$ -strongly convex problem and $O(\frac{n}{BS\epsilon^3})$ for the merely convex problem, where $S$ is the outer batch size and $B$ is the inner batch size. Jiang et al. (2022) proposed a variance-reduced algorithm MSVR that has improved iteration complexities $O(\frac{n}{S\sqrt{B}\mu\epsilon})$ for the $\mu$ -strongly convex problem and $O(\frac{n}{S\sqrt{B}\epsilon^2})$ for the convex problem. However, the theoretical guarantees of MSVR have several limitations such as poor dependence on the batch size $B$ , reliance on restrictive assumptions, and suboptimality for the $\mu$ -strongly cFCCO problem with a smooth outer function $f_{i}$ (as we will demonstrate in Section 4).
+
+Applications. FCCO serves as the algorithmic framework for optimizing a broad range of risk functions coined as empirical X-risk minimization (Yang, 2022). It has been applied to many machine learning problems, including optimizing listwise losses for learning to rank (Qiu et al., 2022), optimizing partial area under the ROC curve (pAUC) for imbalanced data classification (Zhu et al., 2022), group DRO (Hu et al., 2023b) and optimizing global contrastive losses for self-supervised learning (Yuan et al., 2022; Qiu et al., 2023). The proposed algorithm ALEXR for cFCCO is applicable to pAUC maximization and group DRO.
+
+Table 1. Comparison of iteration complexities to achieve $\epsilon$ -optimal solution of (1) in terms of $\mathbb{E}[F(x_{\mathrm{out}}) - F(x_{*})] \leq \epsilon$ in the merely convex case and $\frac{\mu}{2}\mathbb{E}\| x_{\mathrm{out}} - x_{*}\|_{2}^{2} \leq \epsilon$ in the $\mu$ -strongly convex case, where $x_{\mathrm{out}}$ is the output of each algorithm. $\tilde{O}$ hides poly $\log(1/\epsilon)$ factors. $S$ denotes the size of a batch $S \subset [n]$ and $B$ denotes the size of batch $B_{i}$ sampled from $\mathbb{P}_{i}$ for each $i \in S$ . In the "Monotonicity" column, $\uparrow$ means the function is monotonically non-decreasing. "N/A" means not applicable or not available. The gray parts are implications of Theorems 2 and 3.
+
+| Method | Strongly Convex | Merely Convex | Inner Batch Size $B$ | Outer Batch Size $S$ | Loops | Smoothness | Monotonicity | Convexity† |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| BSGD (Hu et al., 2020) | $O\left(\frac{1}{\mu\epsilon}\right)$ | $O\left(\frac{1}{\epsilon^2}\right)$ | $O\left(\frac{1}{\epsilon}\right)$ | $O(1)$ | Single | $f_i, g_i$ | None | $F$ |
+| BSGD (Hu et al., 2020) |  | $O\left(\frac{1}{\epsilon^2}\right)$ |  | $O(1)$ | Single | $g_i$ | None | $F$ |
+| SOX-Boost (Wang & Yang, 2022) | $O\left(\frac{n}{\mu^2 BS\epsilon}\right)$ | $O\left(\frac{n}{BS\epsilon^3}\right)$ | $O(1)$ | $O(1)$ | Double | $f_i, g_i$ | None | $F$ |
+| SOX (Wang & Yang, 2023) | $O\left(\frac{n}{\mu S\epsilon}\right)$ | N/A* | $O(1)$ | $O(1)$ | Single | $f_i, g_i$ | $f_i\uparrow$ | $f_i, g_i, r$ |
+| MSVR (Jiang et al., 2022) | $O\left(\frac{n}{S\sqrt{B}\mu\epsilon}\right)$ | $O\left(\frac{n}{S\sqrt{B}\epsilon^2}\right)$ | $O(1)$ | $O(1)$ | Double | $f_i, g_i$ | None | $F$ |
+| ALEXR (This Work) | $O\left(\max\left\{\frac{1}{\mu S\epsilon}, \frac{1}{\mu B\epsilon}, \frac{n}{BS\epsilon}\right\}\right)$ (Theorem 1 (i)) | $O\left(\max\left\{\frac{1}{S\epsilon^2}, \frac{n}{BS\epsilon^2}\right\}\right)$ | $O(1)$ | $O(1)$ | Single | $f_i, g_i$ | $f_i\uparrow$ | $f_i, g_i, r$ |
+| ALEXR (This Work) | $O\left(\max\left\{\frac{1}{\mu\epsilon}, \frac{n}{BS\epsilon}\right\}\right)$ (Theorem 1 (ii)) | $O\left(\max\left\{\frac{1}{\epsilon^2}, \frac{n}{BS\epsilon^2}\right\}\right)$ | $O(1)$ | $O(1)$ | Single | $f_i$ | $f_i\uparrow$ | $f_i, g_i, r$ |
+| ALEXR (This Work) | $O\left(\max\left\{\frac{1}{S\epsilon^2}, \frac{n}{BS\epsilon^2}\right\}\right)^{\#}$ | $O\left(\max\left\{\frac{1}{S\epsilon^2}, \frac{n}{BS\epsilon^2}\right\}\right)$ (Theorem 3) | $O(1)$ | $O(1)$ | Single | $g_i$ | $f_i\uparrow$ | $f_i, g_i, r$ |
+| ALEXR (This Work) | $O\left(\max\left\{\frac{1}{\epsilon^2}, \frac{n}{BS\epsilon^2}\right\}\right)^{\#}$ | $O\left(\max\left\{\frac{1}{\epsilon^2}, \frac{n}{BS\epsilon^2}\right\}\right)$ (Theorem 2) | $O(1)$ | $O(1)$ | Single | None | $f_i\uparrow$ | $f_i, g_i, r$ |
+
+† The sufficient condition (convexity of $f_{i}, g_{i}, r$ and monotonicity of $f_{i}$ ) for the convexity of $F$ can be met in the applications of interest described in Section 5 and Appendix B.
+\* The analysis of the merely convex case in Wang & Yang (2023) is under a weaker convergence measure that cannot be converted to the objective gap.
+\# As shown in our lower complexity bound in Section 4, strong convexity does not yield a faster rate due to the compositional structure when the outer function $f_{i}$ is non-smooth; a similar result for convex stochastic compositional optimization has been established in Zhang & Lan (2020).
+
+# 3. Algorithm and Convergence Analysis
+
+Notations. For a vector $y \in \mathbb{R}^{nm}$ , we use $y^{(i)} \in \mathbb{R}^m$ to represent the $i$ -th coordinate (block) of $y$ , i.e., $y = (y^{(1)}, \ldots, y^{(n)})^\top$ . We denote the Bregman divergence associated with $\psi_i : \mathbb{R}^m \to \mathbb{R}$ for any $u, v \in \mathbb{R}^m$ as $U_{\psi_i}(u, v) = \psi_i(u) - \psi_i(v) - \partial \psi_i(v)^\top (u - v)$ and define $U_{\psi}(y_1, y_2) := \sum_{i=1}^n U_{\psi_i}(y_1^{(i)}, y_2^{(i)})$ for $y_1, y_2 \in \mathbb{R}^{nm}$ . For a function $g_i(x) = \mathbb{E}_{\zeta_i \sim \mathbb{P}_i}[g(x; \zeta_i)]$ , we define the stochastic estimator based on the mini-batch $\mathcal{B}_i$ as $g_i(x; \mathcal{B}_i) := \frac{1}{|\mathcal{B}_i|} \sum_{\zeta_i \in \mathcal{B}_i} g_i(x; \zeta_i)$ . Let $\mathcal{X}$ be a normed vector space with $\| \cdot \|_2$ . For each $i \in [n]$ , let $\mathcal{Y}_i \subset \mathbb{R}^m$ be a normed vector space with a general norm $\| \cdot \|$ and $\| \cdot \|_*$ be its dual norm. See Table 3 in the appendix for the full list of notations.
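As a sanity check of the Bregman divergence notation (a small illustrative sketch, not part of the algorithm): for $\psi_i = \frac{1}{2}\|\cdot\|_2^2$, $U_{\psi_i}$ is half the squared Euclidean distance, and for the negative entropy it is the KL divergence on probability vectors.

```python
import numpy as np

def bregman(psi, grad_psi, u, v):
    # U_psi(u, v) = psi(u) - psi(v) - <grad psi(v), u - v>
    return psi(u) - psi(v) - grad_psi(v) @ (u - v)

u = np.array([0.2, 0.5, 0.3])
v = np.array([0.1, 0.6, 0.3])

# psi = 0.5*||.||_2^2  =>  U_psi(u, v) = 0.5*||u - v||_2^2
sq = bregman(lambda z: 0.5 * z @ z, lambda z: z, u, v)
assert np.isclose(sq, 0.5 * np.sum((u - v) ** 2))

# psi = negative entropy  =>  U_psi(u, v) = KL(u || v) for probability vectors
negent = bregman(lambda z: np.sum(z * np.log(z)),
                 lambda z: np.log(z) + 1.0, u, v)
assert np.isclose(negent, np.sum(u * np.log(u / v)))
```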
+
+We make the following assumptions throughout the paper.
+
+Assumption 1. The domain $\mathcal{X} \subseteq \mathbb{R}^d$ in (1) is a convex and closed set. Besides, the regularization term $r$ is proper, lower-semicontinuous, and $\mu$ -convex on $\mathcal{X}$ with $\mu \geq 0$ .
+
+Assumption 2. $g_{i}$ is convex. Besides, there exists $C_g > 0$ such that $\| g_i(x) - g_i(x')\|_* \leq C_g\| x - x'\|_2, \forall x, x' \in \mathcal{X}$ .
+
+Assumption 3. $f_{i}:\mathbb{R}^{m}\to \mathbb{R}$ is convex. Besides, there exists $C_f > 0$ such that $|f_{i}(u) - f_{i}(u^{\prime})|\leq C_{f}\| u - u^{\prime}\|_{*}, \forall u,u^{\prime}\in \mathcal{Y}_{i}^{*}$ . If $g_{i}$ is nonlinear, we assume that $f_{i}$ is monotonically non-decreasing w.r.t. each coordinate of its input.
+
+Assumption 3 implies that $\| y^{(i)}\| \leq C_f$ for all $y^{(i)}\in \mathcal{Y}_i$ and $\mathcal{Y}_i\subseteq \mathbb{R}_+^m,\forall i\in [n]$ . Thus, (1) is equivalent to the convex-concave problem (2) with a convex and compact $\mathcal{Y} = \mathcal{Y}_1\times \ldots \times \mathcal{Y}_n$ . Note that the outer functions $f_{i}$ in all applications in Section 5 and Appendix B satisfy the assumption above.
+
+Although the smoothness of $f_{i}$ and $g_{i}$ are not necessary in our work, incorporating them leads to better convergence rates. We say that $g_{i}: \mathcal{X} \to \mathbb{R}^{m}$ is $L_{g}$ -smooth if it is differentiable and there exists $L_{g} > 0$ such that $\| g_{i}(x_{1}) - g_{i}(x_{2}) - \nabla g_{i}(x_{2})(x_{1} - x_{2}) \|_{*} \leq \frac{L_{g}}{2} \| x_{1} - x_{2} \|_{2}^{2}, \forall x_{1}, x_{2} \in \mathcal{X}$ ; Besides, we say that $f_{i}: \mathbb{R}^{m} \to \mathbb{R}$ is $L_{f}$ -smooth if it is differentiable and there exists $L_{f} > 0$ such that $|f_{i}(u_{1}) - f_{i}(u_{2}) - \langle \nabla f_{i}(u_{2}), u_{1} - u_{2} \rangle| \leq \frac{L_{f}}{2} \| u_{1} - u_{2} \|_{*}^{2}, \forall u_{1}, u_{2} \in \mathcal{Y}_{i}^{*}$ .
+
+Lastly, we assume that the variances of the zeroth-order and first-order stochastic oracles are bounded.
+
+Assumption 4. There exists finite $\sigma_0^2, \sigma_1^2, \delta^2$ such that
+
+$$
+\begin{array}{l} \mathbb {E} _ {\zeta_ {i}} \left\| g _ {i} (x) - g _ {i} (x; \zeta_ {i}) \right\| _ {*} ^ {2} \leq \sigma_ {0} ^ {2}, \\ \mathbb {E} _ {\zeta_ {i}} \| [ g _ {i} ^ {\prime} (x) ] ^ {\top} - [ g _ {i} ^ {\prime} (x; \zeta_ {i}) ] ^ {\top} \| _ {\mathrm {o p}} ^ {2} \leq \sigma_ {1} ^ {2}, \\ \frac {1}{n} \sum_ {j = 1} ^ {n} \| [ g _ {j} ^ {\prime} (x) ] ^ {\top} y ^ {(j)} - \frac {1}{n} \sum_ {i = 1} ^ {n} [ g _ {i} ^ {\prime} (x) ] ^ {\top} y ^ {(i)} \| _ {2} ^ {2} \leq \delta^ {2}, \\ \end{array}
+$$
+
+for any $x\in \mathcal{X}$ , $g_i'(x)\in \partial g_i(x)$ , and $y\in \mathcal{Y}$ .
+
+Under Assumptions 2, 3, the existence of $\delta^2$ is ensured, since $\frac{1}{n}\sum_{j=1}^{n}\|[g_j'(x)]^\top y^{(j)} - \frac{1}{n}\sum_{i=1}^{n}[g_i'(x)]^\top y^{(i)}\|_2^2 \leq C_f^2 C_g^2$ .
+
+# Algorithm 1 ALEXR
+
+1: Initialize: $x_0 \in \mathcal{X} \subseteq \mathbb{R}^d$ , $y_0 \in \mathcal{Y} \subset \mathbb{R}_+^{nm}$
+2: for $t = 0,1,\ldots ,T - 1$ do
+3: Sample a batch $\mathcal{S}_t\subset \{1,\dots ,n\}$ with $|\mathcal{S}_t| = S$
+4: for each $i\in \mathcal{S}_t$ do
+5: Sample batches $\mathcal{B}_t^{(i)}$ , $\tilde{\mathcal{B}}_t^{(i)}$ of size- $B$ from $\mathbb{P}_i$
+6: Compute stochastic estimator $\tilde{g}_t^{(i)} = g_i(x_t;\mathcal{B}_t^{(i)}) + \theta (g_i(x_t;\mathcal{B}_t^{(i)}) - g_i(x_{t - 1};\mathcal{B}_t^{(i)}))$
+7: Update the $i$ -th block of the dual variable $y_{t+1}^{(i)} = \arg \max_{v \in \mathcal{Y}_i} \{ v^\top \tilde{g}_t^{(i)} - f_i^*(v) - \tau U_{\psi_i}(v, y_t^{(i)}) \}$
+8: end for
+9: For each $i \notin S_t, y_{t+1}^{(i)} = y_t^{(i)}$
+10: Compute the stochastic gradient estimator $G_{t} = \frac{1}{S}\sum_{i\in S_{t}}[g_{i}'(x_{t};\tilde{\mathcal{B}}_{t}^{(i)})]^{\top}y_{t + 1}^{(i)}$ based on the stochastic partial gradient $g_{i}'(x_{t};\tilde{\mathcal{B}}_{t}^{(i)})\in \partial g_{i}(x_{t};\tilde{\mathcal{B}}_{t}^{(i)})$
+11: $x_{t + 1} = \arg \min_{x\in \mathcal{X}}\{\langle G_t,x\rangle +r(x) + \frac{\eta}{2}\| x - x_t\| _2^2\}$
+12: end for
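A minimal executable sketch of Algorithm 1 on a toy instance (all data and parameters below are illustrative, not from the paper): we take $m = 1$, quadratic $\psi_i$, and the CVaR-style outer function $f_i(s) = \max\{s, 0\}/\alpha$, for which the dual prox in line 7 reduces to clipping because $f_i^*$ is the indicator of $[0, 1/\alpha]$.

```python
import numpy as np

# Toy CVaR-style cFCCO instance: g_i((w, lam); zeta) = (a_zeta^T w - b_zeta)^2 - lam,
# f_i(s) = max{s, 0}/alpha, r(w, lam) = lam. All sizes/step sizes are illustrative.
rng = np.random.default_rng(0)
n, d, N_i = 8, 5, 50              # groups, dimension, samples per group
A = [rng.normal(size=(N_i, d)) for _ in range(n)]
b = [rng.normal(size=N_i) for _ in range(n)]
alpha, S, B, theta, tau, eta = 0.3, 4, 8, 1.0, 1.0, 20.0

def g_batch(i, w, lam, idx):      # stochastic estimate of g_i at (w, lam)
    res = A[i][idx] @ w - b[i][idx]
    return np.mean(res ** 2) - lam

def grad_g_batch(i, w, idx):      # stochastic (sub)gradient of g_i w.r.t. (w, lam)
    res = A[i][idx] @ w - b[i][idx]
    gw = 2.0 * A[i][idx].T @ res / len(idx)
    return gw, -1.0               # derivative of g_i w.r.t. lam is -1

w, lam = np.zeros(d), 0.0
w_prev, lam_prev = w.copy(), lam
y = np.zeros(n)                   # dual blocks, each in [0, 1/alpha]
for t in range(300):
    St = rng.choice(n, size=S, replace=False)       # line 3
    Gw, Glam = np.zeros(d), 0.0
    for i in St:
        idx = rng.choice(N_i, size=B, replace=False)    # line 5
        idx2 = rng.choice(N_i, size=B, replace=False)
        # Line 6: extrapolated estimator; line 7: prox = clipping since
        # f_i^*(v) = 0 on [0, 1/alpha] for f_i(s) = max{s, 0}/alpha.
        gt = g_batch(i, w, lam, idx)
        gt = gt + theta * (gt - g_batch(i, w_prev, lam_prev, idx))
        y[i] = np.clip(y[i] + gt / tau, 0.0, 1.0 / alpha)
        gw, gl = grad_g_batch(i, w, idx2)               # line 10
        Gw += y[i] * gw / S
        Glam += y[i] * gl / S
    # Line 11 with r(w, lam) = lam: the proximal step has a closed form.
    w_prev, lam_prev = w.copy(), lam
    w = w - Gw / eta
    lam = lam - (Glam + 1.0) / eta

assert np.all((0.0 <= y) & (y <= 1.0 / alpha))   # duals stay feasible
assert np.isfinite(w).all() and np.isfinite(lam)
```

The clipping step and the closed-form primal update are what make each iteration cheap: only $S$ dual blocks are touched, matching the $O(1)$ batch sizes in Table 1.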
+
+# 3.1. A Primal-Dual Block-Coordinate Algorithm
+
+We propose a stochastic algorithm, ALEXR (refer to Algorithm 1), to efficiently solve the cFCCO problem defined in (1) by leveraging its reformulation in (2). Due to the structure of (1), ALEXR begins each iteration by sampling a mini-batch $\mathcal{S}_t$ of size $S$ from $\{1,\dots ,n\}$ and, for each $i\in \mathcal{S}_t$ , sampling two i.i.d. mini-batches $\mathcal{B}_t^{(i)},\tilde{\mathcal{B}}_t^{(i)}$ of size $B$ from $\mathbb{P}_i$ .
+
+Since stochastic oracles $g_{i}(x;\zeta_{i})$ are only available for those blocks $i\in \mathcal{S}_t$ , ALEXR employs a block-coordinate stochastic update for the dual variable $y$ , which occurs between line 5 and line 9 in Algorithm 1. For a sampled block $i\in \mathcal{S}_t$ , the update of $y^{(i)}$ is based on the extrapolated stochastic gradient estimator $\tilde{g}_t^{(i)}$ of the linear coupling term $y^{(i)}g_{i}(x_{t})$ in line 6 with $\theta \in [0,1]$ , and a mirror-prox mapping w.r.t. some strongly convex distance-generating function $\psi_{i}$ . To ensure that the proximal mapping in line 7 of Algorithm 1 can be efficiently computed, it is crucial to carefully select $\psi_{i}$ :
+
+- For any smooth outer function $f_{i}$ , we can select $\psi_{i} = f_{i}^{*}$ . For $u_{t}^{(i)} \in \partial f_{i}^{*}(y_{t}^{(i)})$ , we can show that (see Lemma 1 in Appendix C):
+
+$$
+y _ {t + 1} ^ {(i)} = \nabla f _ {i} (u _ {t + 1} ^ {(i)}), \quad u _ {t + 1} ^ {(i)} = \frac {\tau u _ {t} ^ {(i)} + \tilde {g} _ {t} ^ {(i)}}{1 + \tau}, \forall i \in S _ {t} \tag {3}
+$$
+
+If $f_{i}$ is a Legendre-type (proper, closed, strictly convex, and essentially smooth) function, ALEXR has a primal-only implementation similar to SOX (Wang & Yang, 2022) and MSVR (Jiang et al., 2022). Specifically, we can derive the following equivalent update of the $u$ sequence:
+
+$$
+\begin{array}{l} y_t^{(i)} = \nabla f_i(u_t^{(i)}), \\ u_{t+1}^{(i)} = \left\{ \begin{array}{ll} \frac{\tau}{1+\tau} u_t^{(i)} + \frac{1}{1+\tau} \tilde{g}_t^{(i)}, & \text{if } i \in \mathcal{S}_t, \\ u_t^{(i)}, & \text{otherwise.} \end{array} \right. \tag{4} \end{array}
+$$
+
+- For a non-smooth outer function $f_{i}$ , we can choose $\psi_{i}$ to be the quadratic function $\psi_{i}(\cdot) = \frac{1}{2}\| \cdot \|_{2}^{2}$ . This choice requires that the proximal mapping of $f_{i}^{*}$ can be efficiently computed, a condition that holds for many simple functions (see Chapters 4 and 6 in Beck, 2017), e.g., the non-smooth function $f_{i}(\cdot) = \frac{1}{\alpha}\max \{\cdot ,0\}$ in the GDRO problem with the Conditional Value at Risk (CVaR) divergence and the pAUC maximization problem described in Section 5.
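As a concrete instance of the second option, the following is a minimal sketch (not the authors' code) of the dual proximal step in line 7 for the CVaR-type outer function $f_i(u) = \frac{1}{\alpha}\max\{u, 0\}$, whose conjugate $f_i^*$ is the indicator of $[0, 1/\alpha]$; with the quadratic $\psi_i$, the argmax reduces to a clipped ascent step. All variable names and numbers are illustrative.

```python
import numpy as np

# Line 7 of Algorithm 1 for f_i(u) = max{u, 0} / alpha, whose conjugate f_i^*
# is the indicator of [0, 1/alpha], with psi_i = 0.5 * ||.||_2^2.

def dual_prox_cvar(y, g_tilde, tau, alpha):
    # argmax_v { v * g_tilde - f_i^*(v) - (tau/2) * (v - y)^2 }
    #   = clip(y + g_tilde / tau, 0, 1/alpha)
    return np.clip(y + g_tilde / tau, 0.0, 1.0 / alpha)

y_t, g_tilde, tau, alpha = 0.3, 2.0, 0.5, 0.1
y_next = dual_prox_cvar(y_t, g_tilde, tau, alpha)
assert np.isclose(y_next, y_t + g_tilde / tau)       # interior: 4.3 < 1/alpha = 10
assert dual_prox_cvar(0.3, 20.0, 0.5, 0.1) == 10.0   # large g_tilde: clipped at 1/alpha
```

The unconstrained maximizer is $y + \tilde{g}/\tau$; projecting it onto $[0, 1/\alpha]$ is the entire cost of the dual step, which is why this choice of $\psi_i$ keeps the update cheap.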
+
+Then, ALEXR updates the primal variable $x$ via a stochastic proximal gradient descent step, where $G_{t}$ is a (sub)gradient estimator of the coupling term $\frac{1}{n}\sum_{i}y_{t + 1}^{(i)}g_{i}(x_{t})$ computed with an independent mini-batch $\tilde{\mathcal{B}}_t^{(i)}$ .
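To make the interplay of lines 3-11 concrete, below is a toy, self-contained instantiation of ALEXR. It is a sketch under hypothetical choices, not the paper's experiments: $f_i(u) = u^2/2$ (so $f_i^*(v) = v^2/2$), linear $g_i(x) = a_i^\top x$ with Gaussian oracle noise, $r = 0$, quadratic $\psi_i$, and arbitrary values of $\eta, \tau, \theta$; these choices make every update available in closed form.

```python
import numpy as np

# Toy instantiation of one ALEXR run (illustrative, not the paper's setup):
# f_i(u) = u^2/2 so f_i^*(v) = v^2/2, g_i(x) = a_i . x plus Gaussian noise,
# r = 0, psi_i = 0.5 * ||.||_2^2, X = R^d.

rng = np.random.default_rng(0)
n, d, S, T = 20, 5, 4, 3000
eta, tau, theta = 50.0, 0.5, 0.9        # arbitrary parameter choices
A = rng.standard_normal((n, d))

def g(i, x):
    """Stochastic oracle for the inner function value g_i(x; B)."""
    return A[i] @ x + 0.01 * rng.standard_normal()

x, x_prev = np.ones(d), np.ones(d)
y = np.zeros(n)
for t in range(T):
    St = rng.choice(n, size=S, replace=False)    # line 3: sample blocks
    G = np.zeros(d)
    for i in St:
        gt, gp = g(i, x), g(i, x_prev)
        g_tilde = gt + theta * (gt - gp)         # line 6: extrapolation
        # line 7 with f_i^*(v) = v^2/2 and quadratic psi_i:
        # argmax_v { v * g_tilde - v^2/2 - tau/2 * (v - y_i)^2 }
        y[i] = (g_tilde + tau * y[i]) / (1.0 + tau)
        G += A[i] * y[i] / S                     # line 10: grad of linear g_i is a_i
    x_prev, x = x, x - G / eta                   # line 11 with r = 0

# The toy objective (1/n) sum_i (a_i . x)^2 / 2 is minimized at x = 0,
# so the iterates should have drifted toward the origin.
print(np.linalg.norm(x))
```

Note that only the sampled blocks' dual coordinates move in each iteration, matching line 9, and the dual step here is exactly the closed form (3)/(4) since $f_i^*$ is quadratic.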
+
+# 3.2. Relation to Existing Algorithms
+
+Relation to SOX (Wang & Yang, 2022). By setting $\theta = 0$ , $\psi_{i} = f_{i}^{*}$ in ALEXR, the dual update and the gradient estimator become similar to those used in SOX. In particular, the update of $u_{t+1}^{(i)}$ in (3) becomes the moving average estimator, i.e., $u_{t+1}^{(i)} = (1 - \gamma) u_{t}^{(i)} + \gamma g_{i}(x_{t}; \mathcal{B}_{t}^{(i)})$ , where $\gamma = \frac{1}{1 + \tau}$ . Hence, the updates of ALEXR with $\theta = 0$ , $\psi_{i} = f_{i}^{*}$ reduce to SOX without gradient momentum, whose convergence is analyzed in Wang & Yang (2023) for strongly convex FCCO. However, establishing its convergence guarantee for the merely convex problem is still an open problem.
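The equivalence between update (3) with $\theta = 0$ and the SOX moving average can be checked numerically in one line; the scalars below are arbitrary illustrative values.

```python
# Numeric check (illustrative scalars) that the u-update in (3) with theta = 0
# coincides with the SOX moving average u_{t+1} = (1 - gamma) u_t + gamma g_t
# for gamma = 1 / (1 + tau).
tau = 3.0
gamma = 1.0 / (1.0 + tau)
u_t, g_t = 0.8, -0.2
u_alexr = (tau * u_t + g_t) / (1.0 + tau)    # update (3) with theta = 0
u_sox = (1.0 - gamma) * u_t + gamma * g_t    # SOX moving-average estimator
assert abs(u_alexr - u_sox) < 1e-12
```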
+
+Relation to MSVR (Jiang et al., 2022). By setting $\psi_{i} = f_{i}^{*}$ , there is only a subtle difference between MSVR and ALEXR, which gives ALEXR an advantage. In particular, the update of $u_{t+1}^{(i)}$ in (3) of ALEXR can be written as $u_{t+1}^{(i)} = (1 - \gamma) u_{t}^{(i)} + \gamma g_{i}(x_{t}; \mathcal{B}_{t}^{(i)}) + \gamma \theta(g_{i}(x_{t}; \mathcal{B}_{t}^{(i)}) - g_{i}(x_{t-1}; \mathcal{B}_{t}^{(i)}))$ with $\gamma = \frac{1}{1+\tau} < 1$ , $\forall i \in \mathcal{S}_{t}$ . This estimator is similar to the one used in MSVR, except that the scaling factor before the correction term $(g_{i}(x_{t}; \mathcal{B}_{t}^{(i)}) - g_{i}(x_{t-1}; \mathcal{B}_{t}^{(i)}))$ in MSVR is $\beta = \frac{n-S}{S(1-\gamma)} + 1 - \gamma$ , which can be much larger than 1. Notably, several existing works have reported better empirical performance using a $\gamma$ less than one (Jiang et al., 2022; Hu et al., 2023a;b), which is consistent with our setting and theory. Another difference is that MSVR uses the variance-reduction technique of Cutkosky & Orabona (2019) to compute the gradient estimator of the primal variable, which demands more memory and computation, yet still yields a worse oracle complexity than that of ALEXR for cFCCO.
+
+Relation to SAPD (Zhang et al., 2024). When $n = 1$ , the reformulation in (2) is a convex-concave saddle point problem, where SAPD is a representative stochastic algorithm. Both SAPD and ALEXR use an extrapolated estimator to update the dual variable $y$ . The key difference is that SAPD updates the entire $y$ without assuming block separability of the dual domain, whereas ALEXR leverages this property to update only $y^{(i)}$ for those sampled blocks $i \in \mathcal{S}_t$ . This design addresses the challenge of cFCCO: when $n$ is large, sampling from $\mathbb{P}_i$ and computing a stochastic gradient estimator for each $i \in [n]$ is computationally expensive. Although ALEXR can be viewed as a block-coordinate variant of SAPD, its convergence analysis introduces several new challenges that are not present in the analysis of SAPD: (i) the block-coordinate updates of ALEXR raise new difficulties, such as the dependence issue discussed in Section 3.3; (ii) ALEXR provides more flexibility to choose distance-generating functions $\psi_i$ other than the quadratic one used in SAPD for the dual step; (iii) ALEXR and its convergence guarantees also apply to non-smooth problems, whereas SAPD focuses on smooth problems.
+
+# 3.3. Convergence Analysis
+
+For the convenience of analyzing the block-coordinate updates of the dual variable $y$ , we define an auxiliary sequence:
+
+$$
+\bar {y} _ {t + 1} ^ {(i)} = \underset {v \in \mathcal {Y} _ {i}} {\arg \max } \left\{v \tilde {g} _ {t} ^ {(i)} - f _ {i} ^ {*} (v) - \tau U _ {\psi_ {i}} \left(v, y _ {t} ^ {(i)}\right) \right\}, \tag {5}
+$$
+
+where $\tilde{g}_t = (\tilde{g}_t^{(1)},\dots ,\tilde{g}_t^{(n)})^\top$ and (5) is defined for all $i\in [n]$ . Note that only $\tilde{g}_t^{(i)}$ for those blocks $i\in \mathcal{S}_t$ are computed in the $t$ -th iteration of Algorithm 1, while $\tilde{g}_t^{(i)}$ for those $i\notin \mathcal{S}_{t}$ are virtual. The reason we introduce the sequence $\{\bar{y}_t\}_{t\geq 0}$ is to decouple the dependence between $y_{t + 1}$ and $\mathcal{S}_{t}$ . Besides, for the options of $\psi_{i}$ listed in Section 3.1, we have $U_{f_i^*}(u,v)\geq \rho U_{\psi_i}(u,v)$ for some $\rho \geq 0$ and any $u,v\in \mathcal{Y}_i$ : for example, $\rho = 1$ for a smooth $f_{i}$ and $\psi_{i} = f_{i}^{*}$ , while $\rho = 0$ for a non-smooth $f_{i}$ .
+
+We define the objective function in (2) to be $L(x,y) \coloneqq \frac{1}{n}\sum_{i=1}^{n}[g_i(x)^\top y^{(i)} - f_i^*(y^{(i)})] + r(x)$ . After the $t$ -th iteration of ALEXR, for any $x \in \mathcal{X}, y \in \mathcal{Y}$ we can obtain
+
+$$
+\begin{array}{l} \frac{\eta + \mu}{2} \| x - x_{t+1} \|_2^2 + \frac{\tau + \rho}{n} U_{\psi}(y, \bar{y}_{t+1}) \leq \frac{\eta}{2} \| x - x_t \|_2^2 \\ \quad + \frac{\tau}{n} U_{\psi}(y, y_t) - (L(x_{t+1}, y) - L(x, \bar{y}_{t+1})) + R_t, \tag{6} \end{array}
+$$
+
+where $R_{t}$ captures the remaining terms in (15). Notably, the term $L(x_{t + 1},y) - L(x,\bar{y}_{t + 1})$ can be converted into the objective gap $F(x_{t + 1}) - F(x)$ in an ergodic sense. In a standard convergence analysis based on potential functions (Bansal & Gupta, 2019), all terms in the potential function are expected to be non-expansive after a single iteration. However, it is not immediately clear whether the shaded part in (6) is non-expansive, regardless of whether we choose $U_{\psi}(y,y_t)$ or $U_{\psi}(y,\bar{y}_t)$ as part of the potential function. It is worth noting that this issue does not arise in the analysis of min-max optimization algorithms without block-coordinate updates, such as that in Zhang et al. (2024), because $\bar{y}_{t + 1} = y_{t + 1}$ when the whole $y$ is updated.
+
+# 3.3.1. SMOOTH AND STRONGLY CONVEX CASE
+
+When the outer function $f_{i}$ is smooth, we show that ALEXR achieves the fast $O(\epsilon^{-1})$ rate for strongly convex cFCCO problems. Under this setting, the min-max problem in (2) is strongly-convex-strongly-concave (SCSC), and a unique saddle point $(x_{*},y_{*})$ exists, where $x_{*}$ is the unique minimizer of the original problem in (1). We define $\mathcal{G}_t$ to be the $\sigma$ -algebra generated by $\{\mathcal{B}_0,\mathcal{S}_0,\dots ,\mathcal{B}_{t - 1},\mathcal{S}_{t - 1},\mathcal{B}_t\}$ and $\mathcal{F}_t$ to be the $\sigma$ -algebra generated by $\{\mathcal{B}_0,\mathcal{S}_0,\dots ,\mathcal{B}_{t - 1},\mathcal{S}_{t - 1},\mathcal{B}_t,\mathcal{S}_t\}$ . Note that $\mathcal{G}_t\subset \mathcal{F}_t$ and $y_{t + 1}$ is $\mathcal{F}_t$ -measurable. Since $x_{*},y_{*}$ are independent of the randomness in the algorithm, we have
+
+$$
+\mathbb {E} \left[ U _ {\psi} \left(y _ {*}, y _ {t + 1}\right) \mid \mathcal {G} _ {t} \right] = \frac {S}{n} U _ {\psi} \left(y _ {*}, \bar {y} _ {t + 1}\right) + \frac {n - S}{n} U _ {\psi} \left(y _ {*}, y _ {t}\right).
+$$
+
+Plugging the equation above and $x = x_{*},y = y_{*}$ into (6) establishes the contraction needed for the potential-function-based convergence analysis. This leads to the following result, which holds for ALEXR with any strongly convex $\psi_{i}$ , including $\psi_i(\cdot) = \frac{1}{2}\| \cdot \| _2^2$ and $\psi_{i} = f_{i}^{*}$ for a smooth $f_{i}$ .
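The conditional-expectation identity above can be verified exhaustively on a small instance, taking the quadratic Bregman distance $U_\psi(u,v) = \frac{1}{2}\|u-v\|_2^2$, which is separable across blocks; the vectors below are random placeholders, not quantities from the paper.

```python
import numpy as np
from itertools import combinations

# Exhaustive check of E[U(y*, y_{t+1}) | G_t] = (S/n) U(y*, y_bar)
#                                             + ((n-S)/n) U(y*, y_t)
# with the quadratic Bregman distance. y_bar stands for \bar{y}_{t+1};
# on each size-S batch, y_{t+1} copies y_bar on sampled blocks and
# keeps y_t elsewhere.

rng = np.random.default_rng(0)
n, S = 6, 2
y_star, y_t, y_bar = rng.standard_normal((3, n))

def U(u, v):
    return 0.5 * np.sum((u - v) ** 2)

batches = list(combinations(range(n), S))
lhs = 0.0
for St in batches:
    y_next = y_t.copy()
    y_next[list(St)] = y_bar[list(St)]
    lhs += U(y_star, y_next) / len(batches)   # exact expectation over batches

rhs = (S / n) * U(y_star, y_bar) + ((n - S) / n) * U(y_star, y_t)
assert np.isclose(lhs, rhs)
```

The identity holds exactly because $U_\psi$ decomposes over blocks and each block is sampled with probability $S/n$.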
+
+Theorem 1. Suppose that Assumptions 1, 2, 3, 4 hold with $\mu > 0$ and $L_{f}$ -smooth outer function $f_{i}$ .
+
+- (i) If $g_i$ is smooth, ALEXR with $\eta = \frac{\mu\theta}{1 - \theta}$ , $\tau = \frac{S}{n(1 - \theta)}$ , and $\theta = 1 - O(\epsilon)$ achieves $\frac{\mu}{2}\mathbb{E}\| x_T - x_*\|_2^2 \leq \epsilon$ in $\tilde{O}(\max\{\frac{1}{\mu S}, \frac{1}{\mu B}, \frac{n}{BS}\} \epsilon^{-1})$ iterations;
+- (ii) If $g_i$ is non-smooth, ALEXR with the same setting of $\eta, \tau$ , and $\theta = 1 - O(\epsilon)$ leads to iteration complexity $\tilde{O}(\max\{\frac{1}{\mu}, \frac{n}{BS}\} \epsilon^{-1})$ .
+
+Remark 1. On strongly convex cFCCO with smooth $f_{i}$ and $g_{i}$ , ALEXR achieves the $\tilde{O}\left(\max \left\{\frac{1}{\mu S\epsilon},\frac{1}{\mu B\epsilon},\frac{n}{BS\epsilon}\right\}\right)$ iteration complexity, which improves upon the previously best-known $O\left(\frac{n}{\sqrt{BS}\mu\epsilon}\right)$ achieved by MSVR (Jiang et al., 2022). Besides, we also provide the oracle complexity of ALEXR when the inner function $g_{i}$ is non-smooth, which is absent in previous work.
+
+# 3.3.2. CONVEX CASE WITH POSSIBLY NON-SMOOTH $f_{i}$
+
+Now we shift our focus to the cFCCO problem with possibly non-smooth outer function $f_{i}$ . In this case, we require $\psi_{i} = \frac{1}{2}\| \cdot \|_{2}^{2}$ .
+
+To derive a bound on the objective gap $\mathbb{E}[F(\bar{x}_T) - F(x_*)]$ , where $\bar{x}_T = \frac{1}{T}\sum_{t=0}^{T-1}x_t$ , we plug $x = x_*$ and $y^{(i)} = \tilde{y}_T^{(i)} \in \arg \max_{v \in \mathcal{Y}_i}\{v^\top g_i(\bar{x}_T) - f_i^*(v)\} \subseteq \partial f_i(g_i(\bar{x}_T))$ into (6).
+
+Unfortunately, the technique outlined in Section 3.3.1 does not apply here because $\tilde{y}_T$ also depends on $\mathcal{S}_t$ . We address this issue by introducing multiple virtual sequences to transform the shaded part in (6) into telescoping sums of several potential functions of these virtual sequences (see Lemma 9), a technique we extend from Nemirovski et al. (2009); Juditsky et al. (2011); Alacaoglu et al. (2022).
+
+When $g_{i}$ is non-smooth, ALEXR achieves the same convergence rate for $\theta \in \{0,1\}$ , but choosing $\theta = 0$ saves $S$ inner function evaluations at $x_{t - 1}$ .
+
+Theorem 2. Suppose Assumptions 1, 2, 3, 4 hold and $g_{i}$ is non-smooth. ALEXR with $\psi_{i} = \frac{1}{2}\| \cdot \|_{2}^{2}, \theta = 0, \eta = O(\frac{1}{\epsilon})$ , and $\tau = O\left(\frac{1}{B\epsilon}\right)$ can make $\mathbb{E}[F(\bar{x}_T) - F(x_*)] \leq \epsilon$ in $O(\max \{1,\frac{\Omega_{\mathcal{Y}}^0}{BS}\} \epsilon^{-2})$ iterations, where $\Omega_{\mathcal{Y}}^{0} := \mathbb{E}[U_{\psi}(\tilde{y}_{T},y_{0})] = \sum_{i=1}^{n}\mathbb{E}[U_{\psi_{i}}(\tilde{y}_{T}^{(i)},y_{0}^{(i)})]$ .
+
+When $g_{i}$ is smooth, setting the parameter $\theta = 1$ leverages the extrapolation term and yields a better convergence rate.
+
+Theorem 3. Suppose Assumptions 1, 2, 3, 4 hold and $g_{i}$ is smooth. ALEXR with $\psi_{i} = \frac{1}{2}\| \cdot \|_{2}^{2}, \theta = 1, \eta = O(\frac{1}{\epsilon})$ , and $\tau = O\left(\frac{1}{B\epsilon}\right)$ can make $\mathbb{E}[F(\bar{x}_T) - F(x_*)] \leq \epsilon$ in $O(\max \{\frac{1}{S},\frac{\Omega_{\mathcal{Y}}^0}{BS}\} \epsilon^{-2})$ iterations, where $\Omega_{\mathcal{Y}}^{0} := \mathbb{E}[U_{\psi}(\tilde{y}_{T},y_{0})] = \sum_{i = 1}^{n}\mathbb{E}[U_{\psi_{i}}(\tilde{y}_{T}^{(i)},y_{0}^{(i)})]$ .
+
+Remark 2. The radius $\Omega_{\mathcal{Y}}^{0}$ is $O(n)$ in the worst case, but it can be much smaller than $O(n)$ when $\tilde{y}_T$ and $y_0$ exhibit some sparsity structure (an example is provided in Appendix F.1).
+
+We compare the above results with those of MSVR. For non-smooth $f_{i}$ , MSVR is not applicable, whereas Theorem 2 implies that ALEXR achieves the $O(\max \left\{\frac{1}{\epsilon^2}, \frac{n}{BS\epsilon^2}\right\})$ iteration complexity for the merely convex problem even when $f_{i}$ is non-smooth. Furthermore, Theorem 3 indicates that when both $f_{i}$ and $g_{i}$ are smooth, ALEXR achieves the $O(\max \left\{\frac{1}{S\epsilon^2}, \frac{n}{BS\epsilon^2}\right\})$ iteration complexity for the merely convex problem, improving upon the $O\left(\frac{n}{\sqrt{BS}\epsilon^2}\right)$ iteration complexity of the double-loop algorithm MSVR (Jiang et al., 2022).
+
+When $f_{i}$ is non-smooth, the strong convexity of the objective does not result in a better rate compared to the merely convex case, as we will demonstrate in Section 4.
+
+# 4. Lower Complexity Bounds
+
+In the previous section, we introduced the ALEXR algorithm and established the upper bounds of its iteration complexity and oracle complexity (i.e., the number of calls to stochastic oracles). To examine whether these bounds of ALEXR are (near-)optimal for the problem in (1), we derive lower bounds by constructing "hard" instances of (1) for the following abstract first-order update scheme, which subsumes ALEXR as well as previous algorithms (Zhang et al., 2024; Wang & Yang, 2022; Jiang et al., 2022) $^{2}$ .
+
+The abstract scheme starts with the initial spaces $\mathfrak{X}_0 = \mathfrak{G}_0 = \{\mathbf{0}_d\}$ , $\mathfrak{Y}_0 = \{\mathbf{0}_n\}$ , $\mathfrak{g}_0 = \{\mathbf{0}_m\}$ and progresses as follows in the $t$ -th iteration: First, it samples a batch $\mathcal{S}_t \subset [n]$ and $\zeta_t^{(i)}, \tilde{\zeta}_t^{(i)}$ i.i.d. from $\mathbb{P}_i$ . For those $i \in \mathcal{S}_t$ ,
+
+$$
+\mathfrak{g}_{t+1}^{(i)} = \mathfrak{g}_t^{(i)} + \operatorname{span} \left\{ g_i(\hat{x}; \zeta_t^{(i)}) \mid \hat{x} \in \mathfrak{X}_t \right\},
+$$
+
+$$
+\mathfrak{Y}_{t+1}^{(i)} = \mathfrak{Y}_t^{(i)} + \operatorname{span} \left\{ \mathrm{MP}\left(\hat{y}_i, \hat{g}_i\right) \mid \hat{y}_i \in \mathfrak{Y}_t^{(i)}, \hat{g}_i \in \mathfrak{g}_{t+1}^{(i)} \right\},
+$$
+
+where “+” refers to the Minkowski sum, $\mathfrak{g}_t^{(i)},\mathfrak{Y}_t^{(i)}$ are the $i$ -th slices of the spaces $\mathfrak{g}_t,\mathfrak{Y}_t$ , and $\mathrm{MP}(\hat{y}_i,\hat{g}_i)\coloneqq \arg \max_v\{v\hat{g}_i - f_i^* (v) - \tau U_{\psi_i}(v,\hat{y}_i)\}$ . For those $i\notin \mathcal{S}_t$ , the corresponding slices remain unchanged, i.e., $\mathfrak{g}_{t + 1}^{(i)} = \mathfrak{g}_t^{(i)},\mathfrak{Y}_{t + 1}^{(i)} = \mathfrak{Y}_t^{(i)}$ . The spaces $\mathfrak{G}_t,\mathfrak{X}_t$ are updated as
+
+$$
+\mathfrak{G}_{t+1} = \mathfrak{G}_t + \operatorname{span} \{ G(\hat{x}, \hat{y}) \mid \hat{x} \in \mathfrak{X}_t, \hat{y} \in \mathfrak{Y}_{t+1} \},
+$$
+
+$$
+\mathfrak{X}_{t+1} = \mathfrak{X}_t + \operatorname{span} \left\{ \mathrm{QP}(\hat{x}, \hat{G}) \mid \hat{x} \in \mathfrak{X}_t, \hat{G} \in \mathfrak{G}_{t+1} \right\},
+$$
+
+where we define $G(\hat{x},\hat{y})\coloneqq \frac{1}{S}\sum_{i\in \mathcal{S}_t}[\nabla g_i(\hat{x};\tilde{\zeta}_t^{(i)})]^{\top}\hat{y}^{(i)}$ and $\mathrm{QP}(\hat{x},\hat{G})\coloneqq \arg \min_x\{x^\top \hat{G} +r(x) + \frac{\eta}{2}\| x - \hat{x}\| _2^2\}$ .
+
+For the problem with smooth outer function $f_{i}$ , we construct a hard instance of (1) by setting $f_{i}$ to a variant of the Huber function and inner function $g_{i}$ to a linear function with some Bernoulli distributed noise. For the problem with non-smooth outer function $f_{i}$ , we construct a hard instance by replacing the smooth Huber function $f_{i}$ with a monotonically non-decreasing hinge function. Details of the constructions and the proof are provided in Appendix E.
+
+The construction of the noise and the hinge function $f_{i}$ for the non-smooth problem is adapted from Zhang & Lan (2020). Our contributions are twofold: First, we design an abstract scheme that supports block-coordinate updates and characterize how the optimal oracle complexity depends on the total number of blocks $n$ ; Second, we also construct a hard instance to prove the lower bound for the strongly convex cFCCO problem with a smooth outer function $f_{i}$ , which highlights the near-optimality of our ALEXR in this setting and its significant improvement over previous algorithms.
+
+Theorem 4. For the $\mu$ -strongly convex cFCCO problem in (1) with a smooth outer function $f_{i}$ , any algorithm within the abstract scheme described above requires at least $\Omega (\max \{\frac{S}{\mu}, n\} \epsilon^{-1})$ oracle calls to find an $\bar{x}$ such that $\frac{\mu}{2} \mathbb{E} \| \bar{x} - x_* \|^2_2 \leq \epsilon$ . Besides, for the cFCCO problem in (1) (whether merely convex or strongly convex) with a non-smooth outer function $f_{i}$ , any algorithm within the abstract scheme described above requires at least $\Omega (n \epsilon^{-2})$ oracle calls to find an $\bar{x}$ such that $\mathbb{E}[F(\bar{x}) - F(x_*)] \leq \epsilon$ .
+
+Theorem 4 demonstrates that ALEXR is near-optimal in both cases. Furthermore, it also shows that the upper bounds established in Theorem 1 and Theorem 3 are tight.
+
+# 5. Experiments
+
+In this section, we present two main applications of the cFCCO problem: Group Distributionally Robust Optimization (GDRO) and Partial AUC Maximization (pAUC) with a restricted True Positive Rate (TPR). We then evaluate the empirical performance of our proposed ALEXR algorithm against previous baselines in these applications. More applications are discussed in Appendix B while additional details and experimental results can be found in Appendix G.
+
+# 5.1. Group Distributionally Robust Optimization
+
+The Group Distributionally Robust Optimization (GDRO) framework aims to train machine learning models that are robust across predefined subgroups (Sagawa et al., 2019). Suppose that there are $n$ predefined groups and the data distribution of the $i$ -th group is $\mathbb{P}_i$ . The $\phi$ -divergence penalized GDRO can be formulated as
+
+$$
+\min _ {w} \max _ {q \in \Delta_ {n}} \left\{\sum_ {i = 1} ^ {n} \left(q ^ {(i)} R _ {i} (w) - \frac {\lambda}{n} \phi \left(n q ^ {(i)}\right)\right) \right\} + r (w), \tag {7}
+$$
+
+where $\Delta_{n}$ is the $(n - 1)$ -dimensional probability simplex, $w$ is the model parameter, $R_{i}(w) \coloneqq \mathbb{E}_{z \sim \mathbb{P}_{i}}[\ell(w;z)]$ is the risk of the $i$ -th group, and $\phi: \mathbb{R}_{+} \to \mathbb{R} \cup \{+\infty\}$ is a divergence function with $\phi(1) = 0$ .
+
+Prior work (Sagawa et al., 2019; Zhang et al., 2023) discarded the $\phi$ -divergence penalty, i.e., set $\lambda = 0$ in (7), and considered the problem $\min_{w} \max_{i \in [n]} R_i(w)$ , which minimizes the risk of the worst group. However, a model trained through worst-group risk minimization may be vacuous if the worst group is an outlier. Moreover, the group sizes may follow a long-tailed distribution such that multiple rare groups exist. To resolve these issues, we choose $\lambda > 0$ and consider the penalized GDRO problem with the CVaR divergence $\phi = \mathbb{I}_{[0,\alpha^{-1}]}$ or the $\chi^2$ -divergence $\phi(t) = \frac{1}{2}(t - 1)^2$ .
+
+The challenges of directly solving (7) using stochastic min-max algorithms lie in estimating the stochastic gradient w.r.t. $q$ and controlling its variance when $n$ is large (Zhang et al., 2023). Alternatively, we can transform the above problem into an equivalent one by duality (Levy et al., 2020):
+
+$$
+\min _ {w, c \in \mathbb {R}} \frac {\lambda}{n} \sum_ {i = 1} ^ {n} \phi^ {*} \left(\frac {R _ {i} (w) - c}{\lambda}\right) + c + r (w), \tag {8}
+$$
+
+where $\phi^{*}$ is monotonically non-decreasing, e.g., $\phi^{*}(u) = \frac{1}{\alpha} (u)_{+}$ for the CVaR divergence and $\phi^{*}(u) = \frac{1}{4} (u + 2)_{+}^{2} - 1$ for the $\chi^2$ -divergence. The dual formulation in (8) is recognized as a difficult open problem in Sagawa et al. (2019) due to the biased stochastic estimator (refer to footnote 4 in their paper). When $R_{i}(w)$ is convex, we can solve the problem in (8) by viewing it as a cFCCO problem with a convex outer function $f_{i}(\cdot) = \lambda \phi^{*}(\cdot)$ and an inner function $g_{i}(x) = (R_{i}(w) - c) / \lambda$ that is jointly convex in $x = (w,c)$ . In Appendix F, we compare the convergence rates and per-iteration costs of ALEXR with those of previous GDRO algorithms.
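The pieces of the dual formulation (8) can be sketched in a few lines; the conjugates $\phi^*$ below are the ones stated above, while the per-group risks and parameter values are toy placeholders, not from the paper's experiments.

```python
import numpy as np

# Sketch of the cFCCO reading of the dual GDRO problem (8), with toy risks.

def phi_star_cvar(u, alpha):        # CVaR divergence: phi^*(u) = (u)_+ / alpha
    return np.maximum(u, 0.0) / alpha

def phi_star_chi2(u):               # chi^2 divergence: phi^*(u) = (u+2)_+^2 / 4 - 1
    return np.maximum(u + 2.0, 0.0) ** 2 / 4.0 - 1.0

def dual_objective(R, c, lam, phi_star):
    """(8) for fixed group risks R = (R_1, ..., R_n):
    lam/n * sum_i phi^*((R_i - c)/lam) + c  (the r(w) term is dropped)."""
    return lam / len(R) * np.sum(phi_star((R - c) / lam)) + c

R = np.array([0.2, 0.5, 1.3, 0.4])  # toy per-group risks R_i(w), n = 4
lam, alpha = 0.5, 0.25              # alpha = 1/n: CVaR over the worst group
grid = np.linspace(-1.0, 2.0, 3001)
vals = [dual_objective(R, c, lam, lambda u: phi_star_cvar(u, alpha)) for c in grid]

# For the CVaR divergence, lam cancels in lam * phi^*((R - c)/lam), and with
# alpha = 1/n the optimal dual value recovers the worst-group risk max_i R_i.
assert np.isclose(min(vals), R.max())
assert phi_star_chi2(0.0) == 0.0    # consistent with phi(1) = 0
```

The grid search over $c$ stands in for the joint minimization over $(w, c)$ that ALEXR performs stochastically; it is convex in $c$, so the one-dimensional scan finds the optimum for the fixed toy risks.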
+
+First, we empirically compare our proposed ALEXR with baseline methods on the GDRO problem in (7) with the CVaR divergence for the binary classification task. We consider a linear model $w$ and the logistic loss $\ell(w;z)$ .
+
+Datasets. We perform experiments on two datasets: the tabular dataset Adult (Becker & Kohavi, 1996) and the image dataset CelebA (Liu et al., 2015). For the Adult dataset, we construct 83 groups according to features such as race, and the task is to predict income. For the CelebA dataset, we construct 160 groups based on binary attributes such as sex, and the task is to determine whether a person has blonde hair. Please see Appendix G.1 for more details.
+
+Baselines. We compare ALEXR with previous algorithms on the FCCO problem including BSGD (Hu et al., 2020), SOX (Wang & Yang, 2022), and MSVR (Jiang et al., 2022) $^3$ . Besides, we also compare ALEXR with previous algorithms for the GDRO problem, which include OOA (Sagawa et al., 2019) and SGD with up-weighting (SGD-UW) (Buda et al., 2018). OOA was originally proposed for the GDRO problem without a penalty term, and we extend it to the CVaR-penalized GDRO based on an efficient algorithm for projection onto the capped simplex (Lim & Wright, 2016), where we use the implementation in Blondel (2019). To show the benefit of GDRO, we also include SGD based on empirical risk minimization (ERM) as a baseline, which neglects the group information. We do not compare with some other GDRO algorithms (Zhang et al., 2023; Soma et al., 2022) that do not support group sampling or do not apply to the CVaR-penalized problem. Moreover, algorithms for distributionally robust optimization (DRO) (Levy et al., 2020; Meng & Gower, 2023) are not applicable to the GDRO problem due to the stochastic oracles of the per-group risk $R_{i}(w)$ . We execute all algorithms for 5 runs with different random seeds. For a fair comparison, each algorithm samples 64 data points in each iteration. For SGD, these data points are sampled from the entire training dataset, whereas for the other algorithms, we sample 8 groups and 8 data points per group. We tune the step sizes of all algorithms in the range $\{2,5,10\} \times 10^{\{-3,-2,-1\}}$ . For SOX and MSVR, we also tune the momentum parameter $\gamma$ in the range $\{0.1,0.5,0.9\}$ . For ALEXR, we choose the extrapolation parameter $\theta \in \{0.1,1.0\}$ and $\psi_{i}(\cdot) = \frac{1}{2} (\cdot)^{2}$ . For all algorithms, we set the weight decay parameter to 0.05 on the Adult dataset and 0.1 on the CelebA dataset.
+
+Figure 1. GDRO loss curves evaluated on the validation datasets during the training process with $\alpha = 0.1$ and 0.15.
+
+Table 2. Test accuracy $(\%)$ on the worst- $(\alpha n)$ groups with $\alpha = 0.1$ and 0.15. The best accuracy is highlighted in bold.
+
+| Methods | Adult $\alpha = 0.1$ | Adult $\alpha = 0.15$ | CelebA $\alpha = 0.1$ | CelebA $\alpha = 0.15$ | Mean |
+| --- | --- | --- | --- | --- | --- |
+| SGD | 0.71±0.20 | 1.87±0.25 | 2.75±0.08 | 4.89±0.10 | 2.56 |
+| SGD-UW | 23.70±1.01 | 26.26±1.06 | 73.70±0.13 | 74.18±0.13 | 49.46 |
+| OOA | 51.46±2.21 | 54.12±2.04 | 66.40±6.37 | 73.43±0.79 | 61.35 |
+| BSGD | 55.81±0.70 | **58.58±0.61** | 75.30±0.27 | 76.16±0.12 | 66.46 |
+| SOX | 56.34±1.15 | 58.36±0.44 | 75.04±0.20 | 76.10±0.30 | 66.46 |
+| MSVR | 47.78±1.06 | 49.49±0.95 | 75.34±0.28 | 76.17±0.09 | 62.20 |
+| ALEXR | **56.58±0.69** | 58.52±0.71 | **75.79±0.05** | **76.29±0.07** | **66.80** |
+
+Results. In Table 2, we report test accuracy for all algorithms on the worst- $(\alpha n)$ groups with $\alpha = 0.1$ and 0.15. Besides, we plot the validation loss curves for FCCO algorithms sharing the same objective function (8) in Figure 1. First, we notice that the vanilla SGD performs poorly on the worst- $(\alpha n)$ groups' data. While the up-weighting trick offers some improvement for SGD, its effectiveness still falls short of Group DRO algorithms. Among GDRO algorithms, our proposed ALEXR exhibits faster convergence compared to baseline methods. ALEXR also achieves superior test accuracy compared to baseline methods in most cases.
+
+# 5.2. Partial AUC Maximization with Restricted TPR
+
+The Area Under the ROC Curve (AUC) is a more informative metric than accuracy for assessing the performance of binary classifiers in the context of imbalanced data (Yang & Ying, 2022). In scenarios influenced by diagnostic or monetary considerations, the primary objective may be to maximize the partial AUC (pAUC) with a specified lower bound $\alpha$ for the true positive rate (TPR). As shown in Zhu et al. (2022), a surrogate objective for maximizing pAUC with restricted TPR is formulated as
+
+$$
+\min _ {w \in \mathbb {R} ^ {d}} \frac {1}{n _ {+} n _ {-}} \sum_ {a _ {i} \in \mathcal {S} _ {+} ^ {\uparrow} [ 1, n _ {+} (1 - \alpha) ]} \sum_ {a _ {j} \in \mathcal {S} _ {-}} L (w; a _ {i}, a _ {j}), \tag {9}
+$$
+
+Here $\mathcal{S}_{+}, \mathcal{S}_{-}$ are the sets of positive/negative data, $w$ refers to the model, and $L(w; a_i, a_j) = \ell(h_w(a_j) - h_w(a_i))$ represents a continuous pairwise surrogate loss, where $h_w(a_i)$ denotes the prediction score for data $a_{i}$ . Additionally, $\mathcal{S}_{+}^{\uparrow}[1,k]$ denotes the bottom- $k$ positive data based on the prediction scores. In particular, $\ell$ is a convex and monotonically non-decreasing function, ensuring the consistency of the surrogate objective (Gao & Zhou, 2015). Based on duality (Levy et al., 2020), the problem in (9) is equivalent to
+
+$$
+\min _ {w, s \in \mathbb {R}} \frac {1}{n _ {+} (1 - \alpha)} \sum_ {a _ {i} \in \mathcal {S} _ {+}} \left[ \frac {1}{n _ {-}} \sum_ {a _ {j} \in \mathcal {S} _ {-}} L (w; a _ {i}, a _ {j}) - s \right] _ {+} + s,
+$$
+
+which is a cFCCO problem with $f_{i} = (\cdot)_{+}$ and $g_{i}(w,s) = \frac{1}{n_{-}}\sum_{a_{j}\in \mathcal{S}_{-}}L(w;a_{i},a_{j}) - s$ jointly convex in $(w,s)$ .
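These components can be written out directly; the following sketch uses a squared-hinge pairwise loss and a linear scorer $h_w(a) = w^\top a$ as illustrative choices, with random toy data rather than the paper's datasets.

```python
import numpy as np

# Sketch of the cFCCO components of the dual pAUC objective above:
# outer f_i(u) = (u)_+ and inner g_i(w, s) = (1/n_-) sum_j L(w; a_i, a_j) - s,
# with a squared-hinge pairwise loss and a linear scorer h_w(a) = w . a.

rng = np.random.default_rng(0)
d, n_pos, n_neg, alpha = 4, 10, 50, 0.5
pos = rng.standard_normal((n_pos, d)) + 0.5   # positive examples a_i
neg = rng.standard_normal((n_neg, d)) - 0.5   # negative examples a_j
w, s = rng.standard_normal(d), 0.1

def g_i(i, w, s):
    """Inner function for the i-th positive: mean pairwise loss minus s."""
    margins = neg @ w - pos[i] @ w            # h_w(a_j) - h_w(a_i)
    return np.mean(np.maximum(1.0 + margins, 0.0) ** 2) - s

# Compositional objective: (1/(n_+ (1 - alpha))) * sum_i f_i(g_i(w, s)) + s.
F = sum(max(g_i(i, w, s), 0.0) for i in range(n_pos)) / (n_pos * (1 - alpha)) + s
assert F >= s     # each f_i(g_i) = (g_i)_+ is non-negative
```

Each evaluation of $g_i$ averages over negatives, which is exactly the inner expectation that ALEXR estimates with the mini-batches $\mathcal{B}_t^{(i)}$ in practice.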
+
+In our experiments, we consider a linear prediction model $w$ and two different lower bounds $\alpha$ on the TPR: 0.5 and 0.75.
+
+Baselines. Apart from BSGD, SOX, and MSVR, we also include SGD with over-sampling for the cross-entropy (CE) loss and the SOTA algorithm (Zhu et al., 2022) as baselines. In each iteration, each algorithm samples an equal number of positive and negative data points (16 for each), which is based on the convergence theory in Zhu et al. (2022).
+
+Datasets. We perform experiments on four datasets: Covtype, Cardiomegaly, Lung-mass, and Higgs. The Covtype and Higgs datasets are from the LibSVM repository (Chang & Lin, 2011). To create imbalanced datasets, we randomly remove $99.5\%$ of the positive data from Covtype and $99.9\%$ of the positive data from Higgs. For Covtype, we randomly allocate $75\%$ of the data for training and $25\%$ for validation. For Higgs, we randomly select 500,000 data points for validation and use the rest as training data. Cardiomegaly and Lung-mass are two imbalanced datasets that share the same collection of chest X-ray images but have different label annotations, both from the MedMNIST repository (Yang et al., 2023), where we use the default splits. We vectorize each $28 \times 28$ image in the Cardiomegaly/Lung-mass datasets as a data point. Statistics of the datasets are listed in Table 5 of the appendix.
+
+Figure 2. Partial AUC evaluated on the validation datasets during the training process under TPR $\geq 0.5$ and TPR $\geq 0.75$ .
+
+Results. In Figure 2, we compare the pAUC curves during training. First, the results suggest that optimizing the surrogate loss in (9) outperforms optimizing the CE loss for maximizing pAUC with a restricted TPR. Moreover, ALEXR demonstrates overall superior performance when compared to other baselines including the SOTA algorithm specifically designed for pAUC maximization.
+
+# 6. Conclusion and Discussion
+
+In this paper, we study a class of convex FCCO problems, called cFCCO, via its min-max reformulation (2). We propose a single-loop primal-dual block-coordinate stochastic algorithm called ALEXR, which achieves improved iteration complexities compared to previous works on both merely and strongly convex cFCCO problems with smooth $f_{i}$ and $g_{i}$ . We also establish the iteration complexities of ALEXR when either $f_{i}$ or $g_{i}$ is non-smooth. Furthermore, we present lower complexity bounds to show that the convergence rate of ALEXR is near-optimal among first-order stochastic methods for cFCCO problems. Finally, we note that it remains an open problem to prove similar complexities as in this work for cFCCO with concave outer functions $f_{i}$ such as the logarithmic function, which has broad applications in machine learning.
+
+# Acknowledgments
+
+We are deeply grateful to Guanghui Lan for his invaluable feedback on this paper. We are also thankful to Stephen J. Wright for bringing the analysis of the duality gap for PureCD to our attention. We thank Guanghui Wang for the initial discussion of the problem. We also thank the anonymous reviewers for their comments. BW and TY were partially supported by National Science Foundation Awards #2306572 and #2147253.
+
+# Impact Statement
+
+This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
+
+# References
+
+Agarwal, A., Wainwright, M. J., Bartlett, P., and Ravikumar, P. Information-theoretic lower bounds on the oracle complexity of convex optimization. Advances in Neural Information Processing Systems, 22, 2009.
+Alacaoglu, A., Fercoq, O., and Cevher, V. Random extrapolation for primal-dual coordinate descent. In International conference on machine learning, pp. 191-201. PMLR, 2020.
+Alacaoglu, A., Cevher, V., and Wright, S. J. On the complexity of a practical primal-dual coordinate method. arXiv preprint arXiv:2201.07684, 2022.
+Bansal, N. and Gupta, A. Potential-function proofs for
+
+gradient methods. Theory of Computing, 15(1):1-32, 2019.
+Beck, A. First-order methods in optimization. SIAM, 2017.
+Becker, B. and Kohavi, R. Adult. UCI Machine Learning Repository, 1996. DOI: https://doi.org/10.24432/C5XW20.
+Blondel, M. Python implementation of "structured prediction with projection oracles". https://github.com/mlblondel/projection-losses/blob/master/polytopes.py, 2019.
+Buda, M., Maki, A., and Mazurowski, M. A. A systematic study of the class imbalance problem in convolutional neural networks. Neural networks, 106:249-259, 2018.
+Chang, C.-C. and Lin, C.-J. Libsvm: a library for support vector machines. TIST, 2(3):27, 2011.
+Cutkosky, A. and Orabona, F. Momentum-based variance reduction in non-convex SGD. In Advances in Neural Information Processing Systems 32 (NeurIPS), pp. 15236-15245, 2019.
+Gao, W. and Zhou, Z.-H. On the consistency of AUC pairwise optimization. In International Joint Conferences on Artificial Intelligence (IJCAI), pp. 939-945, 2015.
+Hamedani, E. Y. and Aybat, N. S. A primal-dual algorithm with line search for general convex-concave saddle point problems. SIAM Journal on Optimization, 31(2):1299-1329, 2021.
+Hamedani, E. Y., Jalilzadeh, A., and Aybat, N. S. Randomized primal-dual methods with adaptive step sizes. In International Conference on Artificial Intelligence and Statistics, pp. 11185-11212. PMLR, 2023.
+Hsieh, Y.-G., Iutzeler, F., Malick, J., and Mertikopoulos, P. On the convergence of single-call stochastic extra-gradient methods. arXiv preprint arXiv:1908.08465, 2019.
+Hu, Q., Qiu, Z., Guo, Z., Zhang, L., and Yang, T. Blockwise stochastic variance-reduced methods with parallel speedup for multi-block bilevel optimization. In International Conference on Machine Learning, 2023a.
+Hu, Q., Zhu, D., and Yang, T. Non-smooth weakly-convex finite-sum coupled compositional optimization. In NeurIPS, 2023b.
+Hu, Q., Zhu, D., and Yang, T. Non-smooth weakly-convex finite-sum coupled compositional optimization. In Advances in Neural Information Processing Systems, 2023c.
+
+Hu, Y., Zhang, S., Chen, X., and He, N. Biased stochastic first-order methods for conditional stochastic optimization and applications in meta learning. Advances in Neural Information Processing Systems, 33, 2020.
+Jalilzadeh, A., Hamedani, E. Y., and Aybat, N. S. A doubly-randomized block-coordinate primal-dual method for large-scale saddle point problems. arXiv preprint arXiv:1907.03886, 2019.
+Jiang, W., Li, G., Wang, Y., Zhang, L., and Yang, T. Multi-block-single-probe variance reduced estimator for coupled compositional optimization. arXiv preprint arXiv:2207.08540, 2022.
+Juditsky, A., Nemirovski, A., and Tauvel, C. Solving variational inequalities with stochastic mirror-prox algorithm. Stochastic Systems, 1(1):17-58, 2011.
+Lan, G. First-order and stochastic optimization methods for machine learning, volume 1. Springer, 2020.
+Levy, D., Carmon, Y., Duchi, J. C., and Sidford, A. Large-scale methods for distributionally robust optimization. Advances in Neural Information Processing Systems, 33: 8847-8860, 2020.
+Lim, C. H. and Wright, S. J. Efficient Bregman projections onto the permutahedron and related polytopes. In Artificial Intelligence and Statistics, pp. 1205-1213. PMLR, 2016.
+Liu, Z., Luo, P., Wang, X., and Tang, X. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), December 2015.
+Martinez, N. L., Bertran, M. A., Papadaki, A., Rodrigues, M., and Sapiro, G. Blind pareto fairness and subgroup robustness. In International Conference on Machine Learning, pp. 7492-7501. PMLR, 2021.
+Meng, S. Y. and Gower, R. M. A model-based method for minimizing CVaR and beyond. In International Conference on Machine Learning, pp. 24436-24456. PMLR, 2023.
+Nemirovski, A., Juditsky, A., Lan, G., and Shapiro, A. Robust stochastic approximation approach to stochastic programming. SIAM Journal on optimization, 19(4):1574-1609, 2009.
+Nguyen, P. H., Nguyen, L., and van Dijk, M. Tight dimension independent lower bound on the expected convergence rate for diminishing step sizes in sgd. Advances in Neural Information Processing Systems, 32, 2019.
+Platt, J. C. Fast training of support vector machines using sequential minimal optimization. Advances in kernel methods, pp. 185-208, 1999.
+
+Qiu, Z., Hu, Q., Yuan, Z., Zhou, D., Zhang, L., and Yang, T. Not all semantics are created equal: Contrastive self-supervised learning with automatic temperature individualization. In International Conference on Machine Learning, Proceedings of Machine Learning Research, 2023.
+Qiu, Z.-H., Hu, Q., Zhong, Y., Zhang, L., and Yang, T. Large-scale stochastic optimization of NDCG surrogates for deep learning with provable convergence. In International Conference on Machine Learning, pp. 18122-18152. PMLR, 2022.
+Rudin, C. The p-norm push: A simple convex ranking algorithm that concentrates at the top of the list. Journal of Machine Learning Research, 10(Oct):2233-2271, 2009.
+Sagawa, S., Koh, P. W., Hashimoto, T. B., and Liang, P. Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. arXiv preprint arXiv:1911.08731, 2019.
+Soma, T., Gatmiry, K., and Jegelka, S. Optimal algorithms for group distributionally robust optimization and beyond. arXiv preprint arXiv:2212.13669, 2022.
+Wang, B. and Yang, T. Finite-sum coupled compositional stochastic optimization: Theory and applications. In International Conference on Machine Learning, pp. 23292-23317. PMLR, 2022.
+Wang, B. and Yang, T. (extended version) finite-sum coupled compositional stochastic optimization: Theory and applications. arXiv preprint arXiv:2202.12396, 2023.
+Wang, M., Fang, E. X., and Liu, H. Stochastic compositional gradient descent: algorithms for minimizing compositions of expected-value functions. Mathematical Programming, 161(1-2):419-449, 2017a.
+Wang, M., Liu, J., and Fang, E. X. Accelerating stochastic composition optimization. Journal of Machine Learning Research, 18:1-23, 2017b.
+Yan, Y., Xu, Y., Lin, Q., Liu, W., and Yang, T. Optimal epoch stochastic gradient descent ascent methods for min-max optimization. In Advances in Neural Information Processing Systems 33 (NeurIPS), 2020.
+Yang, J., Shi, R., Wei, D., Liu, Z., Zhao, L., Ke, B., Pfister, H., and Ni, B. MedMNIST v2 - a large-scale lightweight benchmark for 2D and 3D biomedical image classification. Scientific Data, 10(1):41, 2023.
+Yang, T. Algorithmic foundations of empirical x-risk minimization. arXiv preprint arXiv:2206.00439, 2022.
+Yang, T. and Ying, Y. AUC maximization in the era of big data and AI: A survey. ACM Computing Surveys, 55(8):1-37, 2022.
+
+Yuan, Z., Wu, Y., Qiu, Z.-H., Du, X., Zhang, L., Zhou, D., and Yang, T. Provable stochastic optimization for global contrastive learning: Small batch does not harm performance. In International Conference on Machine Learning, pp. 25760-25782. PMLR, 2022.
+Zhang, L., Zhao, P., Yang, T., and Zhou, Z.-H. Stochastic approximation approaches to group distributionally robust optimization. arXiv preprint arXiv:2302.09267, 2023.
+Zhang, X., Aybat, N. S., and Gurbuzbalaban, M. Robust accelerated primal-dual methods for computing saddle points. SIAM Journal on Optimization, 34(1):1097-1130, 2024.
+Zhang, Y. and Xiao, L. Stochastic primal-dual coordinate method for regularized empirical risk minimization. In ICML, pp. 353-361, 2015.
+Zhang, Z. and Lan, G. Optimal algorithms for convex nested stochastic composite optimization. arXiv preprint arXiv:2011.10076, 2020.
+Zhu, D., Li, G., Wang, B., Wu, X., and Yang, T. When AUC meets DRO: optimizing partial AUC for deep learning with non-convex convergence guarantee. In International Conference on Machine Learning, 2022.
+
+Table 3. Notations we use throughout the paper.
+
+| Symbol | Description | Reference |
+| --- | --- | --- |
+| **Basic** | | |
+| $d$ | Number of the model parameters | |
+| $n$ | Number of summands in cFCCO | (1) |
+| $\mathbb{R}_+$ | Set of non-negative real numbers | Below (2) |
+| $(x)_+$ | $\max\{x, 0\}$ | |
+| $[n]$ | Set $\{1, 2, \dots, n\}$ | |
+| $y^{(i)}$ | The $i$-th block of size $m$ in the vector $y \in \mathbb{R}^{nm}$ | |
+| $a \vee b$ | $\max(a, b)$ for $a, b \in \mathbb{R}$ | |
+| $a \wedge b$ | $\min(a, b)$ for $a, b \in \mathbb{R}$ | |
+| $a \propto b$ | There exist $c, C > 0$ such that $cb \leq a \leq Cb$ for $a, b > 0$ | |
+| $\mathbb{I}_E$ | $= 1$ if the event $E$ is true and $= 0$ otherwise | Appendix E |
+| **Standard** | | |
+| $f^*$ | The convex conjugate of a function $f$ | |
+| $\mu$ | The strong convexity constant | Assumption 1 |
+| $\psi_i$ | Strictly convex and differentiable $\psi_i: \mathbb{R}^m \to \mathbb{R}$ | |
+| $\mathcal{X}$ | Convex and closed domain of the model $x$ | (1) |
+| $\mathcal{Y}$ | The decomposable domain of the dual variable $y$, $\mathcal{Y} = \mathcal{Y}_1 \times \cdots \times \mathcal{Y}_n$, $\mathcal{Y}_i \subseteq \mathbb{R}^m_+$ | (2) |
+| $\mathcal{Y}_i$ | A normed vector space in $\mathbb{R}^m_+$ with norm $\|\cdot\|$ | (2) |
+| $\mathcal{Y}_i^*$ | Dual space of $\mathcal{Y}_i$ with norm $\|\cdot\|_*$ | |
+| $U_{\psi_i}(u, v)$ | Bregman divergence $\psi_i(u) - \psi_i(v) - \langle \nabla \psi_i(v), u - v \rangle$ for $u, v \in \mathbb{R}^m$ associated with $\psi_i$ | |
+| $U_{\psi}(y_1, y_2)$ | Defined as $\sum_{i=1}^{n} U_{\psi_i}(y_1^{(i)}, y_2^{(i)})$ for $y_1, y_2 \in \mathbb{R}^{nm}$ | |
+| $D_{\psi_i, \mathcal{Y}_i}$ | The diameter $[\max_{v \in \mathcal{Y}_i} \psi_i(v) - \min_{v \in \mathcal{Y}_i} \psi_i(v)]^{1/2}$ of a set $\mathcal{Y}_i$ w.r.t. $\psi_i$ | |
+| $D_{\mathcal{Y}}$ | Defined as $[\sum_{i=1}^{n} D_{\psi_i, \mathcal{Y}_i}^2]^{1/2}$ | |
+| $D_{\mathcal{X}}$ | $D_{\psi, \mathcal{X}}$ with $\psi = \frac{1}{2}\|\cdot\|_2^2$ | |
+| $\|T_i\|_{\mathrm{op}}$ | Operator norm $\sup_{x \in \mathcal{X}} \{\|T_i x\|_* / \|x\|_2\}$ for a linear operator $T_i: \mathcal{X} \to \mathcal{Y}_i^*$ | |
+| $\|T_i^*\|_{\mathrm{op}}$ | Operator norm of the adjoint operator $T_i^*: \mathcal{Y}_i \to \mathcal{X}$ | |
+| $\mathrm{span}(S)$ | Linear span of a set $S$ of vectors | |
+| $\Delta_n$ | The $(n-1)$-dimensional probability simplex $\Delta_n \subseteq \mathbb{R}^n_+$ | |
+| $C_g$ | Lipschitz constant of inner function $g_i$ | Assumption 2 |
+| $C_f$ | Lipschitz constant of outer function $f_i$ | Assumption 3 |
+| $\sigma_0^2, \sigma_1^2$ | Variance upper bounds of zeroth- and first-order oracles of $g_i$ | Assumption 4 |
+| $\delta^2$ | Variance upper bound of compositional stochastic gradient | Assumption 4 |
+| **Algorithm** | | |
+| $S_t$ | Batch $S_t \subset [n]$ of size $S$ sampled in the $t$-th iteration | Algorithm 1 |
+| $\mathcal{B}_t^{(i)}$ | Two i.i.d. batches of size $B$ sampled from $\mathbb{P}_i$ in the $t$-th iteration; batches $\{\mathcal{B}_t^{(i)}\}_{i \in S_t}$ are actually sampled in Algorithm 1, while $\{\mathcal{B}_t^{(i)}\}_{i \notin S_t}$ are virtual and only for analysis | Algorithm 1 |
+| $g_i(x; \mathcal{B}_t^{(i)})$ | Stochastic estimator $\frac{1}{B} \sum_{\zeta_i \in \mathcal{B}_t^{(i)}} g_i(x; \zeta_i)$ of $g_i(x)$ in (1) based on the mini-batch $\mathcal{B}_t^{(i)}$ sampled from $\mathbb{P}_i$ | |
+| $\eta, \tau, \theta$ | Hyperparameters of ALEXR | Algorithm 1 |
+| $\tilde{g}_t^{(i)}$ | Extrapolated stochastic estimator for the inner function value | Line 6 in Algorithm 1 |
+| **Analysis** | | |
+| $F(x)$ | Objective function of cFCCO | (1) |
+| $L(x, y)$ | Objective function of the min-max reformulation | (2) |
+| $x_*$ | A minimizer of $F(x)$ in (1) | |
+| $(x_*, y_*)$ | Unique saddle point of $L(x, y)$ in (2) when $f_i$ is smooth and $r$ is strongly convex | |
+| $\bar{y}_t$ | Dual auxiliary sequence defined for the convenience of convergence analysis | (5) |
+| $\hat{x}_T$ | Time-average primal iterate $\frac{1}{T} \sum_{t=0}^{T-1} x_{t+1}$ | |
+| $\hat{y}_T$ | $\hat{y}_T^{(i)} \in \arg\max_{v \in \mathcal{Y}_i} \{v^\top g_i(\hat{x}_T) - f_i^*(v)\} \subseteq \partial f_i(g_i(\hat{x}_T))$ | |
+| $\hat{y}_t$ | Virtual sequence defined in the proofs of Lemma 6 and Lemma 10 | Below (20) and (34) |
+| $\Gamma_t$ | Defined as $\frac{1}{n} \sum_{i=1}^{n} (g_i(x_t) - g_i(x_{t-1}))^\top (y^{(i)} - y_t^{(i)})$ | Lemma 6 |
+| $\Upsilon_t^x$ | Defined as $\frac{1}{2} \mathbb{E}\|x_* - x_t\|_2^2$ | Section D.1.2 |
+| $\Upsilon_t^y$ | Defined as $\frac{1}{S} \mathbb{E}\, U_{\psi}(y_*, y_t)$ | Section D.1.2 |
+| $\tilde{y}_t$ | Virtual sequence constructed in Lemma 9 | Below (6) |
+| $\check{y}_t, \tilde{y}_t$ | Virtual sequences constructed in Lemma 10 | Below (33) |
+| $\bar{y}_T$ | Time-average dual auxiliary iterate $\frac{1}{T} \sum_{t=0}^{T-1} \bar{y}_{t+1}$ | Below (39) |
+
+# A. Other Related Work
+
+The min-max reformulation of cFCCO in (2) is closely related to the following prior work.
+
+Convex-Concave Saddle Point (SP) Problem. The saddle point (SP) problem $\min_{x\in \mathcal{X}}\max_{y\in \mathcal{Y}}L(x,y)$ that is $\mu_{x}$ -convex in $x$ and $\mu_y$ -concave in $y$ ( $\mu_x,\mu_y\geq 0$ ) has been thoroughly studied. We refer to the SP problem with $\mu_x,\mu_y > 0$ as a strongly-convex-strongly-concave (SCSC) problem while those with $\mu_x,\mu_y = 0$ as a convex-concave (CC) problem. A saddle point $(x_{*},y_{*})$ , if it exists, satisfies the condition $L(x_{*},y)\leq L(x_{*},y_{*})\leq L(x,y_{*}),\forall (x,y)\in \mathcal{X}\times \mathcal{Y}$ . Besides, the SP problem is closely related to the more general monotone variational inequalities (VI), which involve finding a point $z_{*} = (x_{*},y_{*})$ such that $\langle \Phi (z_{*}),z - z_{*}\rangle \geq 0$ , $\Phi (z) = (\partial_xL(x,y), - \partial_yL(x,y)),\forall z\in \mathcal{X}\times \mathcal{Y}$ . To assess the optimality of any point $(x_{\mathrm{out}},y_{\mathrm{out}})\in \mathcal{X}\times \mathcal{Y}$ , we can employ the concept of the duality gap, defined as $\operatorname {Gap}(x_{\mathrm{out}},y_{\mathrm{out}})\coloneqq \max_{x,y}\{L(x_{\mathrm{out}},y) - L(x,y_{\mathrm{out}})\}$ , and for SCSC problems, we can also use $D(x_{\mathrm{out}},y_{\mathrm{out}})\coloneqq \frac{\mu_x}{2}\| x_{\mathrm{out}} - x_*\|_2^2 +\frac{\mu_y}{2}\| y_{\mathrm{out}} - y_*\|_2^2$ . The convergence rate is quantified by measuring the number of iterations required to find an $\epsilon$ -approximate saddle point or an $\epsilon$ -approximate solution to the VI, satisfying one of the following conditions: (i) $\operatorname {Gap}(x_{\mathrm{out}},y_{\mathrm{out}})\leq \epsilon$ ; (ii) $D(x_{\mathrm{out}},y_{\mathrm{out}})\leq \epsilon$ ; (iii) $\langle \Phi (z_{\mathrm{out}}),z_{\mathrm{out}} - z\rangle \leq \epsilon$ .
+
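As a minimal numerical illustration of the duality gap criterion (a toy example we add here; it is not part of the original development), consider the unconstrained SCSC problem $L(x,y) = \frac{\mu_x}{2}x^2 + xy - \frac{\mu_y}{2}y^2$ on $\mathbb{R}\times \mathbb{R}$, whose unique saddle point is $(0,0)$ and whose gap admits a closed form:

```python
# Toy SCSC problem: L(x, y) = (mu_x/2) x^2 + x*y - (mu_y/2) y^2 on R x R.
# The unique saddle point is (0, 0), and
#   Gap(x, y) = max_{y'} L(x, y') - min_{x'} L(x', y)
# can be evaluated in closed form (all constants below are assumed toys).
mu_x, mu_y = 2.0, 3.0

def L(x, y):
    return 0.5 * mu_x * x**2 + x * y - 0.5 * mu_y * y**2

def gap(x, y):
    best_y = x / mu_y    # maximizer of L(x, .) over y'
    best_x = -y / mu_x   # minimizer of L(., y) over x'
    return L(x, best_y) - L(best_x, y)

print(gap(0.0, 0.0))  # 0.0: the gap vanishes exactly at the saddle point
print(gap(1.0, 1.0))  # positive away from the saddle point
```

The gap is zero exactly at the saddle point and positive elsewhere, matching criterion (i) above.
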
+Accessing exact oracles such as $\nabla_xL$ and $\nabla_yL$ may not be feasible in many real-world scenarios. Instead, the available resources provide only unbiased stochastic estimators, denoted as $\tilde{\nabla}_xL$ and $\tilde{\nabla}_yL$ , with variances bounded by $\sigma^2$ . This limitation has prompted the development of numerous algorithms tailored for addressing the stochastic saddle point problem (SPP) and the more general stochastic variational inequalities (SVIs). For instance, the stochastic mirror descent (SMD) method (Nemirovski et al., 2009) achieves the optimal convergence rate of $O\left(\frac{1}{\epsilon^2}\right)$ for non-Lipschitz SVIs. For Lipschitz monotone SVIs, the stochastic mirror-prox (SMP) method (Juditsky et al., 2011) attains the optimal rate of $O\left(\frac{1}{\epsilon} + \frac{\sigma^2}{\epsilon^2}\right)$ . For SCSC and non-smooth SP problems, Yan et al. (2020) establish the $\tilde{O}\left(\frac{1}{\epsilon} + \frac{1}{\mu_x\epsilon} \vee \frac{1}{\mu_y\epsilon}\right)$ rate with probability $1 - p$ . Hsieh et al. (2019) propose a single-call stochastic extragradient (SSEG) method that achieves a rate of $O\left(\frac{1}{\epsilon} + \frac{\sigma^2}{\mu_x\epsilon} \vee \frac{\sigma^2}{\mu_y\epsilon}\right)$ for Lipschitz and strongly monotone SVIs. More recently, several works have devised stochastic algorithms for both the SPP and SVI problems, achieving (near-)optimal deterministic and stochastic convergence rates simultaneously. Zhang et al. (2024) introduce the SAPD algorithm, which reaches a convergence rate of $\tilde{O}\left(\frac{1}{\mu_x} \vee \frac{1}{\mu_y} + \frac{1}{\sqrt{\mu_x\mu_y}} + \frac{\sigma^2}{\mu_x\epsilon} \vee \frac{\sigma^2}{\mu_y\epsilon}\right)$ for the SCSC problem and $O\left(\frac{1}{\epsilon} + \frac{\sigma^2}{\epsilon^2}\right)$ for the CC problem. These algorithms cannot be directly applied to the min-max reformulation of cFCCO in (2) because the dual variable $y$ could be very high-dimensional, making it computationally infeasible to update the entire $y$ . This challenge motivates the block-coordinate stochastic update in our algorithm ALEXR.
+
+Coordinate Methods for the Block-Separable Deterministic SP Problem. A special class of bilinearly-coupled SP problem is in the form $\min_x\max_yL(x,y)\coloneqq \frac{1}{n}\sum_{i = 1}^n [y^{(i)}a_i^\top x - \phi_i(y^{(i)})] + r(x)$ , where $L(x,y)$ is block-separable w.r.t. the dual variable $y$ . One illustrative example is the primal-dual reformulation of the (regularized) empirical risk minimization (ERM) problem, denoted as $\min_xF(x)$ , where $F(x)$ is defined as $F(x)\coloneqq \frac{1}{n}\sum_{i = 1}^n\ell (a_i^\top x) + r(x)$ . This reformulation applies to data-label pairs $(a_i,b_i)_{i = 1}^n$ in the context of a linear model. Particularly in scenarios with a significantly large value of $n$ , the computational overhead of computing $\nabla_yL(x,y)$ and updating $y$ can become prohibitively expensive. In such cases, randomized coordinate methods offer a viable solution by reducing the per-iteration oracle cost from $O(n)$ to $O(1)$ . The SPDC method (Zhang & Xiao, 2015) leads to $\tilde{O}(n + \sqrt{\frac{n}{\mu_x\mu_y}})$ convergence rate to make $\mathbb{E}[D(\bar{x},\bar{y})]\leq \epsilon$ for the SCSC problem and $\tilde{O}\left(n + \frac{\sqrt{n}}{\epsilon}\right)$ rate to make $\mathbb{E}[F(\bar{x}) - F(x_*)]\leq \epsilon$ for the CC problem. Recently, Alacaoglu et al. (2022) extended the Pure-CD originally proposed in Alacaoglu et al. (2020) to incorporate importance sampling and exploit the potential sparsity in $A$ . For the CC problem with dense $A$ , Pure-CD not only achieves an improved rate of $O(n + \frac{\sqrt{n}}{\epsilon})$ to guarantee $\mathbb{E}[F(\bar{x}) - F(x_*)]\leq \epsilon$ but also attains a rate of $\tilde{O} (\frac{n}{\epsilon})$ to ensure $\mathbb{E}[\mathrm{Gap}(\bar{x},\bar{y})]\leq \epsilon$ . It is worth noting that $\mathbb{E}[\mathrm{Gap}(\bar{x},\bar{y})]\leq \epsilon$ serves as a sufficient but not necessary condition for $\mathbb{E}[F(\bar{x}) - F(x_*)]\leq \epsilon$ .
+
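The bilinear reformulation above hinges on the conjugate identity $\ell(a_i^\top x) = \max_{y}\{y\, a_i^\top x - \ell^*(y)\}$. A minimal numerical sketch of this identity (ours, using the textbook squared loss $\ell(z) = z^2/2$, whose conjugate is $\ell^*(y) = y^2/2$):

```python
import numpy as np

# The primal-dual ERM reformulation rests on the conjugate identity
#   l(a_i^T x) = max_y { y * (a_i^T x) - l*(y) }.
# We check it for the squared loss l(z) = z^2/2, whose conjugate is
# l*(y) = y^2/2 (a textbook pair; this snippet is illustrative only).
def loss(z):
    return 0.5 * z**2

def loss_conj(y):
    return 0.5 * y**2

z = 1.7  # stands in for a margin value a_i^T x
ys = np.linspace(-10.0, 10.0, 200001)  # dense grid for the inner maximization
inner_max = np.max(ys * z - loss_conj(ys))
print(abs(inner_max - loss(z)) < 1e-6)  # True: the two sides agree
```

The same identity, applied per example $i$, turns $F(x)$ into the block-separable saddle point objective above.
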
+In addition to addressing the bilinearly-coupled block-separable saddle point (SP) problem, Hamedani et al. (2023) have extended their focus to the more general convex-concave (CC) problem, defined as $L(x,y) = \Phi (x,y) - \phi (y) + \sum_{i = 1}^{m}h_{i}(x^{(i)})$ . Their work establishes a convergence rate of $O\left(\frac{m}{\epsilon}\right)$ for a randomized block-coordinate primal-dual method, ensuring that $\mathbb{E}[\mathrm{Gap}(\bar{x},\bar{y})]\leq \epsilon$ . Furthermore, Jalilzadeh et al. (2019) have delved into scenarios where $L(x,y)$ exhibits block-separability with respect to both $x$ and $y$ . In this context, $L(x,y)$ is defined as $L(x,y) = \Phi (x,y) - \sum_{i = 1}^{n}\phi_{i}(y^{(i)}) + \sum_{j = 1}^{m}h_{j}(x^{(j)})$ . They introduce a doubly-randomized block-coordinate method to address such problems. It is worth emphasizing that all the works mentioned in this section (Zhang & Xiao, 2015; Alacaoglu et al., 2020; 2022; Jalilzadeh et al., 2019) rely on the assumption of having access to the exact $\nabla_{x}\Phi (x,y)$ and $\nabla_{y}\Phi (x,y)$ . In contrast, our work addresses the more challenging problem where only stochastic oracles are available.
+
+# B. Other Applications of cFCCO
+
+In Section 5, we introduce two applications of cFCCO: Group Distributionally Robust Optimization (GDRO) and Partial AUC (pAUC) Maximization with a Restricted TPR. Here we provide more applications of cFCCO in machine learning.
+
+Robust Logistic Regression. Consider a collection of data-label pairs, denoted as $(a_{i},b_{i})_{i = 1}^{n}$ . We can formulate the robust logistic regression problem as $\min_{x\in \mathcal{X}}\frac{1}{n}\sum_{i = 1}^{n}\log (1 + \exp (b_i\mathbb{E}[\mathcal{A}(a_i)^\top x\mid a_i])) + r(x)$ . In this formulation, $\mathcal{A}(a_i)$ represents the perturbed data generated from an underlying distribution $\mathbb{P}_i$ . This is a special case of (1), where the functions $f_{i}(\cdot)$ are convex and monotonically non-decreasing given by $f_{i}(\cdot) = \log (1 + \exp (b_{i}\cdot))$ , and $g_{i}(x) = \mathbb{E}_{\mathcal{A}(a_{i})\sim \mathbb{P}_{i}}[\mathcal{A}(a_{i})^{\top}x]$ .
+
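A small Monte Carlo sketch of this construction (ours; the Gaussian perturbation model and all constants are assumed purely for illustration):

```python
import numpy as np

# Monte Carlo sketch (illustrative; the Gaussian perturbation model, data,
# and sample sizes below are assumed): robust logistic regression as a cFCCO
# instance with outer f_i(u) = log(1 + exp(b_i * u)) and affine inner
# g_i(x) = E[A(a_i)^T x], where A(a_i) = a_i + zero-mean noise.
rng = np.random.default_rng(0)
a_i, b_i = np.array([1.0, -2.0]), 1.0
x = np.array([0.5, 0.3])

def f_i(u):
    return np.log(1.0 + np.exp(b_i * u))

def g_i(x, n_samples=100000):
    # Monte Carlo estimate of E[(a_i + noise)^T x]; the exact value is a_i^T x
    noise = rng.normal(0.0, 0.1, size=(n_samples, a_i.size))
    return np.mean((a_i + noise) @ x)

exact = f_i(a_i @ x)   # composition evaluated at the exact inner value
approx = f_i(g_i(x))   # composition evaluated at the Monte Carlo estimate
print(abs(exact - approx) < 1e-2)  # True up to Monte Carlo error
```
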
+Bellman Residual Minimization. The task of approximating the value function, denoted as $V^{\pi}(s)$ , for each state $s$ under policy $\pi$ using a linear mapping can be expressed as $\min_{x\in \mathcal{X}}\sum_{s = 1}^{S}(\phi_s^\top x - \sum_{s'}\mathbb{P}_{s,s'}^{\pi}[r_{s,s'} + \gamma \cdot \phi_{s'}^\top x])^2$ . In this formulation, $\phi_s$ and $\phi_{s'}$ are feature vectors representing states $s$ and $s'$ , respectively. Additionally, $r_{s,s'}$ represents the random reward obtained during the transition from state $s$ to $s'$ , $\gamma < 1$ is the discount factor, $\pi$ denotes the policy, and $\mathbb{P}_{s,s'}^{\pi}$ represents the probability of transitioning from state $s$ to $s'$ under policy $\pi$ . This problem can be formulated as (1), where the functions $f_{s}(\cdot)$ are convex and given by $f_{s}(\cdot) = \frac{1}{S} (\cdot)^{2}$ , and the affine function $g_{s}(x) = \phi_{s}^{\top}x - \sum_{s'}\mathbb{P}_{s,s'}^{\pi}[r_{s,s'} + \gamma \cdot \phi_{s'}^\top x]$ .
+
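The construction can be sketched on a toy two-state MDP (all numbers below are assumed for illustration):

```python
import numpy as np

# Sketch on a toy 2-state MDP (all numbers assumed, for illustration):
# f_s(u) = u^2 / S and the affine inner function
#   g_s(x) = phi_s^T x - sum_{s'} P[s, s'] * (r[s, s'] + gamma * phi_{s'}^T x),
# so the Bellman residual objective is sum_s g_s(x)^2 = sum_s S * f_s(g_s(x)).
gamma = 0.9
Phi = np.array([[1.0, 0.0], [0.0, 1.0]])  # feature vectors phi_s as rows
P = np.array([[0.8, 0.2], [0.3, 0.7]])    # transition matrix P^pi
R = np.array([[1.0, 0.0], [0.0, 2.0]])    # rewards r_{s, s'}
S = Phi.shape[0]
x = np.array([0.4, -0.1])

def g(s, x):
    # expected Bellman backup residual for state s (affine in x)
    backup = sum(P[s, sp] * (R[s, sp] + gamma * Phi[sp] @ x) for sp in range(S))
    return Phi[s] @ x - backup

objective = sum(g(s, x)**2 for s in range(S))
print(objective >= 0)  # True: a sum of squared residuals
```
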
+Bipartite Ranking. Imbalanced data classification is usually tackled in the context of the bipartite ranking problem. There is often a desire to penalize those positive examples with lower scores. One approach is the $p$ -norm push, introduced by Rudin (2009). It formulates the problem as $\min_{x \in \mathcal{X}} \frac{1}{n_+} \sum_{a_i \in \mathcal{D}_+} \left( \frac{1}{n_-} \sum_{a_j \in \mathcal{D}_-} \ell(s_x(a_j) - s_x(a_i)) \right)^p + r(x), p \geq 1$ . Here, $\mathcal{D}_+$ and $\mathcal{D}_-$ represent positive and negative data sets. The function $s_x(a)$ denotes the ranking score of data point $a$ , which is determined by a linear model parameterized by $x$ . The loss function $\ell$ is non-negative, convex, and monotonically non-decreasing, for instance, $\ell(\cdot) = \exp(\cdot)$ . The $p$ -norm push method is a special case of (1), where the functions $f_i(\cdot)$ are convex and monotonically non-decreasing and given by $f_i(\cdot) = (\cdot)^p$ , and the convex function $g_i(x) = \frac{1}{n_-} \sum_{a_j \in \mathcal{D}_-} \ell(s_x(a_j) - s_x(a_i))$ . One popular approach for retrieval problems is maximizing the precision or recall at top $k$ positions (prec/rec@k). Yang (2022) has formulated the problem as $\min_{x \in \mathcal{X}} \frac{1}{n_+} \sum_{a_i \in \mathcal{D}_+} \ell_1(\sum_{a_j \in \mathcal{D}_+ \cup \mathcal{D}_-} \ell_2(s_x(a_j) - s_x(a_i) - k)) + r(x)$ , where $\ell_1, \ell_2$ are monotonically non-decreasing convex surrogate losses of the zero-one loss. Hence, maximizing precision or recall at top $k$ positions with a convex model $s_x(a)$ is covered by (1).
+
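A sketch of the $p$-norm push objective on toy scores (ours; the scores below stand in for $s_x(\cdot)$ under a fixed model $x$, and all numbers are assumed):

```python
import numpy as np

# Sketch of the p-norm push objective on toy data (illustrative only):
# inner g_i = (1/n_-) sum_{a_j in D_-} exp(s_x(a_j) - s_x(a_i)) over negatives,
# outer f_i(u) = u^p; both convex, with f_i non-decreasing on R_+.
rng = np.random.default_rng(1)
p = 2
pos = rng.normal(1.0, 1.0, size=5)  # scores of positive examples
neg = rng.normal(0.0, 1.0, size=8)  # scores of negative examples

def g(i):
    # exponential pairwise loss averaged over the negatives
    return np.mean(np.exp(neg - pos[i]))

objective = np.mean([g(i)**p for i in range(len(pos))])
print(objective > 0)  # True: the exp loss keeps every inner value positive
```
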
+Multi-Task GDRO. GDRO can be extended to the multi-task setting. Consider a scenario with $n$ tasks and $m$ groups. We represent the data distribution for the $i$ -th task and the $j$ -th group as $\mathbb{P}_{i,j}$ . Additionally, let $\ell(x;z)$ be the loss function associated with parameter $x$ on data point $z$ . The Multi-Task GDRO, with a regularization term $r$ , is formulated as $\min_{x \in \mathcal{X}} \frac{1}{n} \sum_{i=1}^{n} \max_{j \in [m]} \mathbb{E}[\ell(x;z_{ij})] + r(x)$ . In this formulation, the functions $f_i(\cdot)$ are defined as $f_i(g_i) = \max_{j \in [m]} (g_{ij})$ , and $g_{ij}(x) = \mathbb{E}[\ell(x;z_{ij})]$ , where $g_i(x) = [g_{i1}(x),\dots,g_{im}(x)]$ . Alternatively, we may consider the smooth $f_i(g_i) = \log \sum_{j \in [m]} \exp(g_{ij})$ . This problem is particularly relevant for the scenario featuring a substantial number $n$ of tasks, such as identity prediction in human faces, with a limited number $m$ of groups (e.g., lighting conditions).
+
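The smooth alternative $f_i(g_i) = \log \sum_{j \in [m]} \exp(g_{ij})$ upper-bounds the hard maximum by at most $\log m$, a standard log-sum-exp bound; a minimal numerical sketch (toy values assumed):

```python
import numpy as np

# The smooth alternative f_i(g_i) = logsumexp(g_i) upper-bounds the hard max
# f_i(g_i) = max_j g_ij by at most log(m) (standard bound; toy values assumed).
m = 4
g_i = np.array([0.2, 1.5, -0.3, 0.9])  # per-group expected losses for task i

hard_max = np.max(g_i)
smooth = hard_max + np.log(np.sum(np.exp(g_i - hard_max)))  # stable logsumexp

print(smooth >= hard_max)              # True
print(smooth <= hard_max + np.log(m))  # True
```
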
+# C. Basic Lemmas
+
+Lemma 1. Suppose that $y_0^{(i)} = f_i'(u_0^{(i)}) \in \partial f_i(u_0^{(i)})$ for some $u_0^{(i)} \in \mathbb{R}$ and $u_{t+1}^{(i)} = \begin{cases} \frac{\tau}{1 + \tau} u_t^{(i)} + \frac{1}{1 + \tau} \tilde{g}_t^{(i)}, & i \in S_t \\ u_t^{(i)}, & i \notin S_t. \end{cases}$ .
+
+Algorithm 1 with $\psi_{i} = f_{i}^{*}$ satisfies that $y_{t}^{(i)} = f_{i}'(u_{t}^{(i)})\in \partial f_{i}(u_{t}^{(i)})$ for all $i\in \{1,\ldots ,n\}$ and $t\geq 0$ .
+
+Proof. We prove it by induction. The base case follows from the premise. Assume that $y_{t}^{(i)} = f_{i}'(u_{t}^{(i)}) \in \partial f_{i}(u_{t}^{(i)})$ .
+
+- Case I ( $i \notin S_t$ ): Note that $y_{t+1}^{(i)} = y_t^{(i)}$ and $u_{t+1}^{(i)} = u_t^{(i)}$ . Thus, $y_{t+1}^{(i)} = f_i'(u_{t+1}^{(i)}) \in \partial f_i(u_{t+1}^{(i)})$ .
+- Case II ( $i \in S_t$ ): This part resembles Lemma 2 in Zhang & Lan (2020). Based on the update rule and the premise $y_t^{(i)} \in \partial f_i(u_t^{(i)})$ , we have
+
+$$
+\begin{array}{l} y _ {t + 1} ^ {(i)} = \arg \max _ {y ^ {(i)}} \left\{y ^ {(i)} \tilde {g} _ {t} ^ {(i)} - f _ {i} ^ {*} (y ^ {(i)}) - \tau \left(f _ {i} ^ {*} (y ^ {(i)}) - (f _ {i} ^ {*}) ^ {\prime} (y _ {t} ^ {(i)}) \cdot y ^ {(i)}\right) \right\} \\ = \arg \max _ {y ^ {(i)}} \left\{\left(\frac {1}{1 + \tau} \tilde {g} _ {t} ^ {(i)} + \frac {\tau}{1 + \tau} u _ {t} ^ {(i)}\right) \cdot y ^ {(i)} - f _ {i} ^ {*} (y ^ {(i)}) \right\} \in \partial f _ {i} \left(\frac {1}{1 + \tau} \tilde {g} _ {t} ^ {(i)} + \frac {\tau}{1 + \tau} u _ {t} ^ {(i)}\right) = \partial f _ {i} (u _ {t + 1} ^ {(i)}). \\ \end{array}
+$$
+
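Lemma 1 can be checked numerically for a concrete conjugate pair. A sketch (ours) with the single outer function $f(u) = u^2$, for which $f^*(y) = y^2/4$, $f'(u) = 2u$, and $(f^*)'(y) = y/2$; the dual prox step is solved by grid search so the check does not reuse the closed-form derivation from the proof:

```python
import numpy as np

# Numeric check of Lemma 1 for the conjugate pair f(u) = u^2, f*(y) = y^2/4,
# so f'(u) = 2u and (f*)'(y) = y/2 (tau, u_t, g_tilde are assumed toy values).
tau, u_t, g_tilde = 0.5, 1.0, 3.0
y_t = 2.0 * u_t  # y_t = f'(u_t)

def f_conj(y):
    return y**2 / 4.0

# Dual prox step with psi = f*, solved by grid search over y.
ys = np.linspace(-10.0, 10.0, 200001)
obj = ys * g_tilde - f_conj(ys) - tau * (f_conj(ys) - (y_t / 2.0) * ys)
y_next = ys[np.argmax(obj)]

# Averaged point u_{t+1}; Lemma 1 predicts y_{t+1} = f'(u_{t+1}) = 2 u_{t+1}.
u_next = tau / (1.0 + tau) * u_t + 1.0 / (1.0 + tau) * g_tilde
print(abs(y_next - 2.0 * u_next) < 1e-3)  # True
```
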
+The following lemma is well-known and similar ideas have been used in Nemirovski et al. (2009); Juditsky et al. (2011).
+
+Lemma 2. Consider a martingale difference sequence $\Delta_t$ adapted to $\mathcal{F}_t$ . Define a sequence $\{\hat{\pi}_t\}_t$ :
+
+$$
+\hat {\pi} _ {0} = 0, \quad \hat {\pi} _ {t + 1} = \underset {v} {\arg \min} \{\langle - \Delta_ {t}, v \rangle + \alpha U _ {\psi} (v, \hat {\pi} _ {t}) \},
+$$
+
+where we also assume that $\psi$ is $\mu_{\psi}$ -strongly convex w.r.t. $\| \cdot \| (\mu_{\psi} > 0)$ . For any $v$ (that possibly depends on $\Delta_t$ ) we have
+
+$$
+\mathbb {E} \left[ \langle \Delta_ {t}, v \rangle \right] \leq \mathbb {E} \left[ \alpha U _ {\psi} (v, \hat {\pi} _ {t}) - \alpha U _ {\psi} (v, \hat {\pi} _ {t + 1}) \right] + \frac {1}{2 \alpha \mu_ {\psi}} \mathbb {E} \left\| \Delta_ {t} \right\| _ {*} ^ {2}.
+$$
+
+Proof. Use the three-point inequality:
+
+$$
+\langle - \Delta_ {t}, \hat {\pi} _ {t + 1} - v \rangle \leq \alpha U _ {\psi} (v, \hat {\pi} _ {t}) - \alpha U _ {\psi} (v, \hat {\pi} _ {t + 1}) - \alpha U _ {\psi} (\hat {\pi} _ {t + 1}, \hat {\pi} _ {t}).
+$$
+
+Add $\langle -\Delta_t, \hat{\pi}_t - \hat{\pi}_{t+1} \rangle$ to both sides and use Young's inequality.
+
+$$
+\begin{array}{l} \langle - \Delta_ {t}, \hat {\pi} _ {t} - v \rangle \leq \alpha U _ {\psi} (v, \hat {\pi} _ {t}) - \alpha U _ {\psi} (v, \hat {\pi} _ {t + 1}) - \alpha U _ {\psi} (\hat {\pi} _ {t + 1}, \hat {\pi} _ {t}) + \langle \Delta_ {t}, \hat {\pi} _ {t + 1} - \hat {\pi} _ {t} \rangle \\ \leq \alpha U _ {\psi} (v, \hat {\pi} _ {t}) - \alpha U _ {\psi} (v, \hat {\pi} _ {t + 1}) - \alpha U _ {\psi} (\hat {\pi} _ {t + 1}, \hat {\pi} _ {t}) + \frac {\alpha \mu_ {\psi}}{2} \| \hat {\pi} _ {t + 1} - \hat {\pi} _ {t} \| ^ {2} + \frac {1}{2 \alpha \mu_ {\psi}} \| \Delta_ {t} \| _ {*} ^ {2}. \\ \end{array}
+$$
+
+Since $\psi$ is $\mu_{\psi}$ -strongly convex, we have $-U_{\psi}(\hat{\pi}_{t + 1},\hat{\pi}_t)\leq -\frac{\mu_{\psi}}{2}\| \hat{\pi}_{t + 1} - \hat{\pi}_t\|^2$ . Lastly, taking expectations and using that $\hat{\pi}_t$ is $\mathcal{F}_t$ -measurable, $\mathbb{E}_t[\langle \Delta_t,\hat{\pi}_t\rangle ] = 0$ .
+
+
+
+The following lemma combines Lemma 4 in Juditsky et al. (2011) and Lemma 7 in Zhang & Lan (2020).
+
+Lemma 3. Let $\Pi \subset \mathbb{R}^m$ be a non-empty closed and convex set and function $u(\pi)$ be $\mu$ -strongly convex on $\Pi$ w.r.t. $\|\cdot\|$ . Let $\hat{\pi}$ be generated via a prox-mapping with the argument $g + \delta$ , $\hat{\pi} \gets \arg \min_{\pi \in \Pi} \{\langle \pi, g + \delta - u'(\underline{\pi}) \rangle + u(\pi)\}$ for some $\underline{\pi} \in \Pi$ , where $\delta$ denotes a noise term with $\mathbb{E}[\delta] = 0$ and $\mathbb{E}[\|\delta\|_*^2] \leq \sigma_0^2$ . Then, for $\bar{\pi}$ generated via a prox-mapping with the argument $g$ , $\bar{\pi} \gets \arg \min_{\pi \in \Pi} \{\langle \pi, g - u'(\underline{\pi}) \rangle + u(\pi)\}$ , we have
+
+$$
+\left\| \hat {\pi} - \bar {\pi} \right\| \leq \left\| \delta \right\| _ {*} / \mu , \tag {10}
+$$
+
+$$
+\left| \mathbb {E} \langle \hat {\pi}, \delta \rangle \right| \leq \sigma_ {0} ^ {2} / \mu . \tag {11}
+$$
+
+For completeness, we present the proof of the lemma above. We do not claim any novelty here.
+
+Proof. By the optimality condition of prox-mapping, we have
+
+$$
+\langle u ^ {\prime} (\hat {\pi}) - u ^ {\prime} (\underline {{\pi}}) + g + \delta , \hat {\pi} - \pi \rangle \leq 0, \quad \forall \pi \in \Pi , \tag {12}
+$$
+
+$$
+\langle u ^ {\prime} (\bar {\pi}) - u ^ {\prime} (\underline {{\pi}}) + g, \bar {\pi} - \pi \rangle \leq 0, \quad \forall \pi \in \Pi . \tag {13}
+$$
+
+Choose $\pi = \bar{\pi}$ in (12) and $\pi = \hat{\pi}$ in (13). By combining (12) and (13), we have
+
+$$
+\left\| \delta \right\| _ {*} \left\| \hat {\pi} - \bar {\pi} \right\| \geq \langle \delta , \bar {\pi} - \hat {\pi} \rangle \geq \langle u ^ {\prime} (\hat {\pi}) - u ^ {\prime} (\bar {\pi}), \hat {\pi} - \bar {\pi} \rangle .
+$$
+
+Since $u$ is $\mu$ -strongly convex, we have $\langle u'(\hat{\pi}) - u'(\bar{\pi}), \hat{\pi} - \bar{\pi} \rangle \geq \mu \| \hat{\pi} - \bar{\pi} \|^2$ . Thus, $\| \hat{\pi} - \bar{\pi} \| \leq \| \delta \|_* / \mu$ .
+
+Moreover, the triangle inequality leads to $|\mathbb{E}\langle \hat{\pi},\delta \rangle |\leq |\mathbb{E}\langle \hat{\pi} -\bar{\pi},\delta \rangle | + |\mathbb{E}\langle \bar{\pi},\delta \rangle |.$ Note that $\mathbb{E}\langle \bar{\pi},\delta \rangle = 0$ . Then, the Cauchy-Schwarz inequality and (10) lead to
+
+$$
+\left| \mathbb {E} \left\langle \hat {\pi}, \delta \right\rangle \right| \leq \left| \mathbb {E} \left\langle \hat {\pi} - \bar {\pi}, \delta \right\rangle \right| \leq \mathbb {E} \left[ \| \hat {\pi} - \bar {\pi} \| \| \delta \| _ {*} \right] \leq \mathbb {E} \| \delta \| _ {*} ^ {2} / \mu \leq \sigma_ {0} ^ {2} / \mu .
+$$
+
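In the Euclidean case $u(\pi) = \frac{1}{2}\|\pi\|_2^2$ (so $\mu = 1$) with $\Pi$ a box, both prox-mappings reduce to projections and (10) becomes the nonexpansiveness of the Euclidean projection; a numerical sketch (ours, toy vectors assumed):

```python
import numpy as np

# Numeric check of (10) in the Euclidean case u(pi) = ||pi||^2/2 (mu = 1) with
# Pi = [0, 1]^m: both prox-mappings reduce to Euclidean projections, and (10)
# is the nonexpansiveness of the projection (all vectors are assumed toys).
rng = np.random.default_rng(2)
m = 5
proj = lambda v: np.clip(v, 0.0, 1.0)  # projection onto the box [0, 1]^m

pi_under = rng.uniform(0.0, 1.0, size=m)  # reference point (underlined pi)
g = rng.normal(size=m)                    # exact prox argument
delta = rng.normal(scale=0.1, size=m)     # zero-mean noise term

pi_hat = proj(pi_under - g - delta)  # prox-mapping with the noisy argument
pi_bar = proj(pi_under - g)          # prox-mapping with the exact argument

print(np.linalg.norm(pi_hat - pi_bar) <= np.linalg.norm(delta))  # True
```
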
+Next, we present a basic inequality about the mirror proximal update. Similar results have been widely used in the literature, e.g., Lemma 3.8 in Lan (2020) and Lemma 7.1 in Hamedani & Aybat (2021).
+
+Lemma 4. Suppose that the function $\phi : \mathcal{X} \to \mathbb{R}$ is defined on a convex closed domain $\mathcal{X}$ and $\phi$ is $\mu$ -convex ( $\mu \geq 0$ ) with respect to a prox-function $U_{\psi}(x, y) \coloneqq \psi(x) - \psi(y) - \langle \psi'(y), x - y \rangle$ for any $x, y \in \mathcal{X}$ with a generating function $\psi$ , i.e., $\phi(x) \geq \phi(y) + \langle \phi'(y), x - y \rangle + \mu U_{\psi}(x, y), \forall x, y \in \mathcal{X}$ . For $\hat{x} = \arg \min_{x \in \mathcal{X}} \{\phi(x) + \eta U_{\psi}(x, \underline{x})\}$ , we have
+
+$$
+\phi (\hat {x}) - \phi (x) \leq \eta U _ {\psi} (x, \underline {{x}}) - (\eta + \mu) U _ {\psi} (x, \hat {x}) - \eta U _ {\psi} (\hat {x}, \underline {{x}}), \quad \forall x \in \mathcal {X}. \tag {14}
+$$
+
+Proof. By the definition of the prox-function $U_{\psi}(x,y)$ , we have
+
+$$
+\begin{array}{l} U _ {\psi} (x, \underline {{x}}) - U _ {\psi} (x, \hat {x}) - U _ {\psi} (\hat {x}, \underline {{x}}) \\ = \psi (x) - \psi (\underline {{x}}) - \langle \psi^ {\prime} (\underline {{x}}), x - \underline {{x}} \rangle - \psi (x) + \psi (\hat {x}) + \langle \psi^ {\prime} (\hat {x}), x - \hat {x} \rangle - \psi (\hat {x}) + \psi (\underline {{x}}) + \langle \psi^ {\prime} (\underline {{x}}), \hat {x} - \underline {{x}} \rangle \\ = \langle \psi^ {\prime} (\hat {x}) - \psi^ {\prime} (\underline {{x}}), x - \hat {x} \rangle . \\ \end{array}
+$$
+
+By the strong convexity of $\phi$ with respect to $\psi$ , we have $\phi(x) - \phi(\hat{x}) \geq \langle \phi'(\hat{x}), x - \hat{x} \rangle + \mu U_{\psi}(x, \hat{x})$ . The optimality condition of the prox-mapping implies that $\langle \phi'(\hat{x}) + \eta (\psi'(\hat{x}) - \psi'(\underline{x})), x - \hat{x} \rangle \geq 0$ for any $x \in \mathcal{X}$ . Thus, we obtain $\langle \phi'(\hat{x}), x - \hat{x} \rangle \geq -\eta \langle \psi'(\hat{x}) - \psi'(\underline{x}), x - \hat{x} \rangle$ such that
+
+$$
+\begin{array}{l} \phi (x) - \phi (\hat {x}) \geq \left\langle \phi^ {\prime} (\hat {x}), x - \hat {x} \right\rangle + \mu U _ {\psi} (x, \hat {x}) \\ \geq \eta \langle \psi^ {\prime} (\underline {{x}}) - \psi^ {\prime} (\hat {x}), x - \hat {x} \rangle + \mu U _ {\psi} (x, \hat {x}) = - \eta U _ {\psi} (x, \underline {{x}}) + (\eta + \mu) U _ {\psi} (x, \hat {x}) + \eta U _ {\psi} (\hat {x}, \underline {{x}}). \\ \end{array}
+$$
+
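Lemma 4 can be checked numerically in the Euclidean case $\psi = \frac{1}{2}\|\cdot\|_2^2$, where $U_\psi(x,y) = \frac{1}{2}\|x - y\|_2^2$; for an unconstrained quadratic $\phi$ the bound (14) in fact holds with equality. A sketch (ours, toy constants assumed):

```python
import numpy as np

# Numeric check of (14) in the Euclidean case psi = ||.||^2/2, where
# U_psi(x, y) = ||x - y||^2/2, with the mu-convex quadratic
# phi(x) = (mu/2)||x - c||^2 and X = R^d (all constants are assumed toys).
# For this unconstrained quadratic, (14) holds with equality.
rng = np.random.default_rng(3)
d, mu, eta = 3, 2.0, 1.0
c = rng.normal(size=d)
x_under = rng.normal(size=d)

phi = lambda x: 0.5 * mu * np.sum((x - c)**2)
U = lambda x, y: 0.5 * np.sum((x - y)**2)

# Closed-form prox: minimizer of phi(x) + eta * U(x, x_under)
x_hat = (mu * c + eta * x_under) / (mu + eta)

x = rng.normal(size=d)  # arbitrary comparison point
lhs = phi(x_hat) - phi(x)
rhs = eta * U(x, x_under) - (eta + mu) * U(x, x_hat) - eta * U(x_hat, x_under)
print(abs(lhs - rhs) < 1e-9)  # True: equality in the quadratic case
```
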
+# D. Convergence Analysis
+
+The following lemma is the complete version of (6), which is the starting point for the convergence analysis of ALEXR. Recall that in all cases we have $U_{f_i^*}(u,v) \geq \rho U_{\psi_i}(u,v)$ for some $\rho \geq 0$ and any $u, v \in \mathcal{V}_i$ .
+
+Lemma 5. Under Assumption 1, the following holds for any $x \in \mathcal{X}, y \in \mathcal{Y}$ after the t-th iteration of Algorithm 1.
+
+$$
+\begin{array}{l} L \left(x _ {t + 1}, y\right) - L \left(x, \bar {y} _ {t + 1}\right) \tag {15} \\ \leq \frac {\tau}{n} U _ {\psi} (y, y _ {t}) - \frac {\tau + \rho}{n} U _ {\psi} (y, \bar {y} _ {t + 1}) - \frac {\tau}{n} U _ {\psi} (\bar {y} _ {t + 1}, y _ {t}) + \frac {1}{n} \sum_ {i = 1} ^ {n} \big (g _ {i} (x _ {t + 1}) - \tilde {g} _ {t} ^ {(i)} \big) ^ {\top} (y ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)}) + \frac {\eta}{2} \| x - x _ {t} \| _ {2} ^ {2} \\ - \frac {\eta + \mu}{2} \| x - x _ {t + 1} \| _ {2} ^ {2} - \frac {\eta}{2} \| x _ {t + 1} - x _ {t} \| _ {2} ^ {2} + \frac {1}{n} \sum_ {i = 1} ^ {n} (g _ {i} (x _ {t + 1}) - g _ {i} (x)) ^ {\top} \bar {y} _ {t + 1} ^ {(i)} - \langle G _ {t}, x _ {t + 1} - x \rangle . \\ \end{array}
+$$
+
+Remark 3. Note that in the $t$ -th iteration, Algorithm 1 only samples $\mathcal{B}_t^{(i)}$ and computes the extrapolated stochastic estimator $\tilde{g}_t^{(i)}$ for those $i\in S_t$ . For those $i\notin S_t$ , the stochastic estimators $\{\tilde{g}_t^{(i)}\}_{i\notin S_t}$ and the batches $\{\mathcal{B}_t^{(i)}\}_{i\notin S_t}$ are virtual and introduced solely for the convenience of analysis. They are not required in the actual execution of Algorithm 1.
+
+Proof. According to Lemma 4, the primal update rule implies that
+
+$$
+- \left\langle G _ {t}, x - x _ {t + 1} \right\rangle + r \left(x _ {t + 1}\right) - r (x) \leq \frac {\eta}{2} \| x - x _ {t} \| _ {2} ^ {2} - \frac {\eta + \mu}{2} \| x - x _ {t + 1} \| _ {2} ^ {2} - \frac {\eta}{2} \| x _ {t + 1} - x _ {t} \| _ {2} ^ {2}. \tag {16}
+$$
+
+Similarly, for all $i \in [n]$ the dual update rule implies that
+
+$$
+(y ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)}) ^ {\top} \tilde {g} _ {t} ^ {(i)} + f _ {i} ^ {*} (\bar {y} _ {t + 1} ^ {(i)}) - f _ {i} ^ {*} (y ^ {(i)}) \leq \tau U _ {\psi_ {i}} (y ^ {(i)}, y _ {t} ^ {(i)}) - (\tau + \rho) U _ {\psi_ {i}} (y ^ {(i)}, \bar {y} _ {t + 1} ^ {(i)}) - \tau U _ {\psi_ {i}} (\bar {y} _ {t + 1} ^ {(i)}, y _ {t} ^ {(i)}).
+$$
+
+Averaging this inequality over $i = 1,\dots ,n$ yields
+
+$$
+\frac {1}{n} \sum_ {i = 1} ^ {n} \left(\left(y ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)}\right) ^ {\top} \tilde {g} _ {t} ^ {(i)} + f _ {i} ^ {*} (\bar {y} _ {t + 1} ^ {(i)}) - f _ {i} ^ {*} (y ^ {(i)})\right) \leq \frac {\tau}{n} U _ {\psi} (y, y _ {t}) - \frac {\tau + \rho}{n} U _ {\psi} (y, \bar {y} _ {t + 1}) - \frac {\tau}{n} U _ {\psi} (\bar {y} _ {t + 1}, y _ {t}). \tag {17}
+$$
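+
+Here, following the stacked notation of the main text, we assume $U_{\psi}$ aggregates the blockwise divergences as
+
+$$
+U _ {\psi} (y, y^ {\prime}) \coloneqq \sum_ {i = 1} ^ {n} U _ {\psi_ {i}} (y ^ {(i)}, y^ {\prime (i)}),
+$$
+
+so that averaging the $n$ blockwise inequalities produces the $\frac{1}{n} U_{\psi}$ factors in (17).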
+
+By the definition of $L(x,y)$ in (2), we have
+
+$$
+\begin{array}{l} L (x _ {t + 1}, y) - L (x, \bar {y} _ {t + 1}) \\ = \frac {1}{n} \sum_ {i = 1} ^ {n} g _ {i} (x _ {t + 1}) ^ {\top} y ^ {(i)} - \frac {1}{n} \sum_ {i = 1} ^ {n} f _ {i} ^ {*} (y ^ {(i)}) + r (x _ {t + 1}) - \frac {1}{n} \sum_ {i = 1} ^ {n} g _ {i} (x) ^ {\top} \bar {y} _ {t + 1} ^ {(i)} + \frac {1}{n} \sum_ {i = 1} ^ {n} f _ {i} ^ {*} (\bar {y} _ {t + 1} ^ {(i)}) - r (x) \\ = \frac {1}{n} \sum_ {i = 1} ^ {n} \Big (g _ {i} (x _ {t + 1}) ^ {\top} (y ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)}) + f _ {i} ^ {*} (\bar {y} _ {t + 1} ^ {(i)}) - f _ {i} ^ {*} (y ^ {(i)}) + \big (g _ {i} (x _ {t + 1}) - g _ {i} (x) \big) ^ {\top} \bar {y} _ {t + 1} ^ {(i)} \Big) + r (x _ {t + 1}) - r (x). \\ \end{array}
+$$
+
+Combining the equation above with (16) and (17) yields
+
+$$
+\begin{array}{l} L \left(x _ {t + 1}, y\right) - L (x, \bar {y} _ {t + 1}) \\ \leq \frac {\tau}{n} U _ {\psi} (y, y _ {t}) - \frac {\tau + \rho}{n} U _ {\psi} (y, \bar {y} _ {t + 1}) - \frac {\tau}{n} U _ {\psi} (\bar {y} _ {t + 1}, y _ {t}) + \frac {1}{n} \sum_ {i = 1} ^ {n} (g _ {i} (x _ {t + 1}) - \tilde {g} _ {t} ^ {(i)}) ^ {\top} (y ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)}) + \frac {\eta}{2} \| x - x _ {t} \| _ {2} ^ {2} \\ - \frac {\eta + \mu}{2} \| x - x _ {t + 1} \| _ {2} ^ {2} - \frac {\eta}{2} \| x _ {t + 1} - x _ {t} \| _ {2} ^ {2} + \frac {1}{n} \sum_ {i = 1} ^ {n} \left(g _ {i} \left(x _ {t + 1}\right) - g _ {i} (x)\right) ^ {\top} \bar {y} _ {t + 1} ^ {(i)} - \langle G _ {t}, x _ {t + 1} - x \rangle . \tag {18} \\ \end{array}
+$$
+
+# D.1. Convergence Analysis of the Smooth and Strongly Convex Case (Section 3.3.1)
+
+First, we present several lemmas that upper-bound different terms in (15) with $(x,y) = (x_{*},y_{*})$ , where $(x_{*},y_{*})$ is the unique saddle point of the strongly-convex-strongly-concave objective $L(x,y)$ of (2) when (1) is strongly convex and $f_{i}$ is smooth. We present our result with a general $\mu_{\psi}$ -strongly convex distance-generating function $\psi_{i}$ . For example, we have $\mu_{\psi} = \frac{1}{L_f}$ for $L_{f}$ -smooth $f_{i}$ with $\psi_{i} = f_{i}^{*}$ , and $\mu_{\psi} = 1$ for non-smooth $f_{i}$ with $\psi_{i} = \frac{1}{2}\| \cdot \|_{2}^{2}$ .
+
+# D.1.1. SUPPORTING LEMMAS
+
+Lemma 6. Under Assumptions 2 and 4, the following inequality holds for Algorithm 1 with $\theta < 1$ and any $\lambda_2, \lambda_3 > 0$ .
+
+$$
+\begin{array}{l} \frac {1}{n} \sum_ {i = 1} ^ {n} \mathbb {E} \left(g _ {i} \left(x _ {t + 1}\right) - \tilde {g} _ {t} ^ {(i)}\right) ^ {\top} \left(y _ {*} ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)}\right) \tag {19} \\ \leq \Gamma_ {t + 1} - \theta \Gamma_ {t} + \frac {C _ {g} ^ {2} \| x _ {t + 1} - x _ {t} \| _ {2} ^ {2}}{2 \lambda_ {2}} + \frac {\theta C _ {g} ^ {2} \| x _ {t} - x _ {t - 1} \| _ {2} ^ {2}}{2 \lambda_ {3}} + \frac {(\lambda_ {2} + \lambda_ {3} \theta) U _ {\psi} (\bar {y} _ {t + 1} , y _ {t})}{\mu_ {\psi} n} + \frac {2 (1 + 2 \theta) \sigma_ {0} ^ {2}}{B \mu_ {\psi} (\rho + \tau)}, \\ \end{array}
+$$
+
+where $\Gamma_t \coloneqq \frac{1}{n} \sum_{i=1}^{n} (g_i(x_t) - g_i(x_{t-1}))^\top (y_*^{(i)} - y_t^{(i)})$ .
+
+Proof. The $\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}(g_i(x_{t+1}) - \tilde{g}_t^{(i)})^\top(y_*^{(i)} - \bar{y}_{t+1}^{(i)})$ term can be decomposed as
+
+$$
+\begin{array}{l} \diamondsuit = \frac {1}{n} \sum_ {i = 1} ^ {n} \left(g _ {i} \left(x _ {t + 1}\right) - \tilde {g} _ {t} ^ {(i)}\right) ^ {\top} \left(y _ {*} ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)}\right) \tag {20} \\ = \underbrace {\frac {1 + \theta}{n} \sum_ {i = 1} ^ {n} \left(g _ {i} \left(x _ {t}\right) - g _ {i} \left(x _ {t}; \mathcal {B} _ {t} ^ {(i)}\right)\right) ^ {\top} \left(y _ {*} ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)}\right)} _ {\mathrm {I}} + \underbrace {\frac {1}{n} \sum_ {i = 1} ^ {n} \left(g _ {i} \left(x _ {t + 1}\right) - g _ {i} \left(x _ {t}\right)\right) ^ {\top} \left(y _ {*} ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)}\right)} _ {\mathrm {I I}} \\ + \underbrace {\frac {\theta}{n} \sum_ {i = 1} ^ {n} (g _ {i} (x _ {t - 1}) - g _ {i} (x _ {t})) ^ {\top} (y _ {*} ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)})} _ {\mathrm {I I I}} + \underbrace {\frac {\theta}{n} \sum_ {i = 1} ^ {n} (g _ {i} (x _ {t - 1} ; \mathcal {B} _ {t} ^ {(i)}) - g _ {i} (x _ {t - 1})) ^ {\top} (y _ {*} ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)})} _ {\mathrm {I V}}. \\ \end{array}
+$$
+
+Taking conditional expectations of terms I and IV leads to $\mathbb{E}[(g_i(x_t) - g_i(x_t;\mathcal{B}_t^{(i)}))^\top y_*^{(i)}\mid \mathcal{F}_{t - 1}] = 0$ and $\mathbb{E}[(g_i(x_{t - 1}) - g_i(x_{t - 1};\mathcal{B}_t^{(i)}))^\top y_*^{(i)}\mid \mathcal{F}_{t - 1}] = 0$ . For all $i\in [n]$ , define $\dot{y}_{t + 1}^{(i)}\coloneqq \arg \max_{v\in \mathcal{V}_i}\{v^\top \bar{g}_t^{(i)} - f_i^* (v) - \tau U_{\psi_i}(v,y_t^{(i)})\}$ with $\bar{g}_t^{(i)}\coloneqq g_{i}(x_{t}) + \theta (g_{i}(x_{t}) - g_{i}(x_{t - 1}))$ . Note that $\dot{y}_{t + 1}^{(i)}$ is independent of $\mathcal{B}_t^{(i)}$ such that $\mathbb{E}[(g_i(x_t;\mathcal{B}_t^{(i)}) - g_i(x_t))^{\top}\dot{y}_{t + 1}^{(i)}\mid \mathcal{F}_{t - 1}] = 0$ . Then,
+
+$$
+\begin{array}{l} \mathbb {E} \left[ (g _ {i} (x _ {t}; \mathcal {B} _ {t} ^ {(i)}) - g _ {i} (x _ {t})) ^ {\top} \bar {y} _ {t + 1} ^ {(i)} \right] = \mathbb {E} \left[ (g _ {i} (x _ {t}; \mathcal {B} _ {t} ^ {(i)}) - g _ {i} (x _ {t})) ^ {\top} (\bar {y} _ {t + 1} ^ {(i)} - \dot {y} _ {t + 1} ^ {(i)}) \right] \\ \stackrel {\text {Lemma 3}} {\leq} \frac {1}{\mu_ {\psi} (\rho + \tau)} \mathbb {E} \| g _ {i} (x _ {t}) - g _ {i} (x _ {t}; \mathcal {B} _ {t} ^ {(i)}) \| _ {*} \| (1 + \theta) (g _ {i} (x _ {t}) - g _ {i} (x _ {t}; \mathcal {B} _ {t} ^ {(i)})) - \theta (g _ {i} (x _ {t - 1}) - g _ {i} (x _ {t - 1}; \mathcal {B} _ {t} ^ {(i)})) \| _ {*} \\ \leq \frac {(1 + \theta) \mathbb {E} \| g _ {i} (x _ {t}) - g _ {i} (x _ {t} ; \mathcal {B} _ {t} ^ {(i)}) \| _ {*} ^ {2}}{\mu_ {\psi} (\rho + \tau)} + \frac {\theta \mathbb {E} \| g _ {i} (x _ {t}) - g _ {i} (x _ {t} ; \mathcal {B} _ {t} ^ {(i)}) \| _ {*} \| g _ {i} (x _ {t - 1}) - g _ {i} (x _ {t - 1} ; \mathcal {B} _ {t} ^ {(i)}) \| _ {*}}{\mu_ {\psi} (\rho + \tau)} \\ \leq \frac {1 + 1.5 \theta}{\mu_ {\psi} (\rho + \tau)} \mathbb {E} \| g _ {i} (x _ {t}) - g _ {i} (x _ {t}; \mathcal {B} _ {t} ^ {(i)}) \| _ {*} ^ {2} + \frac {0.5 \theta}{\mu_ {\psi} (\rho + \tau)} \mathbb {E} \| g _ {i} (x _ {t - 1}) - g _ {i} (x _ {t - 1}; \mathcal {B} _ {t} ^ {(i)}) \| _ {*} ^ {2} \leq \frac {(1 + 2 \theta) \sigma_ {0} ^ {2}}{B \mu_ {\psi} (\rho + \tau)}, \\ \end{array}
+$$
+
+$$
+\mathbb {E} \left[ (g _ {i} (x _ {t - 1}; \mathcal {B} _ {t} ^ {(i)}) - g _ {i} (x _ {t - 1})) ^ {\top} \bar {y} _ {t + 1} ^ {(i)} \right] = \mathbb {E} \left[ (g _ {i} (x _ {t - 1}; \mathcal {B} _ {t} ^ {(i)}) - g _ {i} (x _ {t - 1})) ^ {\top} (\bar {y} _ {t + 1} ^ {(i)} - \dot {y} _ {t + 1} ^ {(i)}) \right] \leq \frac {(1 + 2 \theta) \sigma_ {0} ^ {2}}{B \mu_ {\psi} (\rho + \tau)}.
+$$
+
+Define $\Gamma_t := \frac{1}{n} \sum_{i=1}^{n} (g_i(x_t) - g_i(x_{t-1}))^\top (y_*^{(i)} - y_t^{(i)})$ . II + III in (20) can be rewritten as
+
+$$
+\begin{array}{l} \mathrm {I I} + \mathrm {I I I} = \frac {1}{n} \sum_ {i = 1} ^ {n} g _ {i} (x _ {t + 1}) ^ {\top} (y _ {*} ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)}) - \frac {1}{n} \sum_ {i = 1} ^ {n} g _ {i} (x _ {t}) ^ {\top} (y _ {*} ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)}) + \frac {\theta}{n} \sum_ {i = 1} ^ {n} (g _ {i} (x _ {t - 1}) - g _ {i} (x _ {t})) ^ {\top} (y _ {*} ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)}) \\ = \Gamma_ {t + 1} - \theta \Gamma_ {t} + \frac {1}{n} \sum_ {i = 1} ^ {n} (g _ {i} (x _ {t + 1}) - g _ {i} (x _ {t})) ^ {\top} (y _ {t + 1} ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)}) + \frac {\theta}{n} \sum_ {i = 1} ^ {n} (g _ {i} (x _ {t - 1}) - g _ {i} (x _ {t})) ^ {\top} (y _ {t} ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)}) \\ \leq \Gamma_ {t + 1} - \theta \Gamma_ {t} + \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} (x _ {t + 1}) - g _ {i} (x _ {t}) \| _ {*} \| y _ {t + 1} ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)} \| + \frac {\theta}{n} \sum_ {i = 1} ^ {n} \| g _ {i} (x _ {t - 1}) - g _ {i} (x _ {t}) \| _ {*} \| y _ {t} ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)} \| \\ \leq \Gamma_ {t + 1} - \theta \Gamma_ {t} + \frac {C _ {g} ^ {2} \| x _ {t + 1} - x _ {t} \| _ {2} ^ {2}}{2 \lambda_ {2}} + \frac {\theta C _ {g} ^ {2} \| x _ {t} - x _ {t - 1} \| _ {2} ^ {2}}{2 \lambda_ {3}} + \frac {(\lambda_ {2} + \lambda_ {3} \theta) U _ {\psi} (\bar {y} _ {t + 1} , y _ {t})}{\mu_ {\psi} n}. \\ \end{array}
+$$
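+
+The final inequality above combines Young's inequality with the $C_g$ -Lipschitz continuity of $g_i$ and the $\mu_{\psi}$ -strong convexity of $\psi_i$ ; as a sketch for one pair of terms (with $\lambda_2 > 0$ ):
+
+$$
+\| g _ {i} (x _ {t + 1}) - g _ {i} (x _ {t}) \| _ {*} \| y _ {t + 1} ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)} \| \leq \frac {C _ {g} ^ {2} \| x _ {t + 1} - x _ {t} \| _ {2} ^ {2}}{2 \lambda_ {2}} + \frac {\lambda_ {2}}{2} \| y _ {t + 1} ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)} \| ^ {2}, \qquad \frac {\mu_ {\psi}}{2} \| u - v \| ^ {2} \leq U _ {\psi_ {i}} (u, v),
+$$
+
+and analogously with $\lambda_3$ for the second sum.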
+
+
+
+Lemma 7. When $g_{i}$ is $L_{g}$ -smooth and Assumptions 1, 2, 3, 4 hold, the following holds for Algorithm 1.
+
+$$
+\frac {1}{n} \mathbb {E} \sum_ {i = 1} ^ {n} \left(g _ {i} \left(x _ {t + 1}\right) - g _ {i} \left(x _ {*}\right)\right) ^ {\top} \bar {y} _ {t + 1} ^ {(i)} - \mathbb {E} \left\langle G _ {t}, x _ {t + 1} - x _ {*} \right\rangle \leq \frac {\frac {C _ {f} ^ {2} \sigma_ {1} ^ {2}}{B} + \frac {\delta^ {2}}{S}}{\eta + \mu} + \frac {L _ {g} C _ {f}}{2} \| x _ {t + 1} - x _ {t} \| _ {2} ^ {2}. \tag {21}
+$$
+
+Proof. We define $\Delta_t \coloneqq \frac{1}{S} \sum_{i \in S_t} [\nabla g_i(x_t; \tilde{\mathcal{B}}_t^{(i)})]^\top y_{t+1}^{(i)} - \frac{1}{n} \sum_{i=1}^n [\nabla g_i(x_t)]^\top \bar{y}_{t+1}^{(i)}$ .
+
+$$
+\begin{array}{l} \frac {1}{n} \sum_ {i = 1} ^ {n} (g _ {i} (x _ {t + 1}) - g _ {i} (x _ {*})) ^ {\top} \bar {y} _ {t + 1} ^ {(i)} - \langle G _ {t}, x _ {t + 1} - x _ {*} \rangle \\ = \frac {1}{n} \sum_ {i = 1} ^ {n} (g _ {i} (x _ {t + 1}) - g _ {i} (x _ {t})) ^ {\top} \bar {y} _ {t + 1} ^ {(i)} + \frac {1}{n} \sum_ {i = 1} ^ {n} (g _ {i} (x _ {t}) - g _ {i} (x _ {*})) ^ {\top} \bar {y} _ {t + 1} ^ {(i)} + (\frac {1}{n} \sum_ {i = 1} ^ {n} [ \nabla g _ {i} (x _ {t}) ] ^ {\top} \bar {y} _ {t + 1} ^ {(i)} + \Delta_ {t}) ^ {\top} (x _ {*} - x _ {t + 1}) \\ \stackrel {\diamond} {\leq} \frac {1}{n} \sum_ {i = 1} ^ {n} (g _ {i} (x _ {t + 1}) - g _ {i} (x _ {t})) ^ {\top} \bar {y} _ {t + 1} ^ {(i)} + (\frac {1}{n} \sum_ {i = 1} ^ {n} [ \nabla g _ {i} (x _ {t}) ] ^ {\top} \bar {y} _ {t + 1} ^ {(i)}) ^ {\top} (x _ {t} - x _ {*}) + (\frac {1}{n} \sum_ {i = 1} ^ {n} [ \nabla g _ {i} (x _ {t}) ] ^ {\top} \bar {y} _ {t + 1} ^ {(i)} + \Delta_ {t}) ^ {\top} (x _ {*} - x _ {t + 1}) \\ = \frac {1}{n} \sum_ {i = 1} ^ {n} \left(g _ {i} \left(x _ {t + 1}\right) - g _ {i} \left(x _ {t}\right)\right) ^ {\top} \bar {y} _ {t + 1} ^ {(i)} + \left(\frac {1}{n} \sum_ {i = 1} ^ {n} \left[ \nabla g _ {i} \left(x _ {t}\right) \right] ^ {\top} \bar {y} _ {t + 1} ^ {(i)}\right) ^ {\top} \left(x _ {t} - x _ {t + 1}\right) + \left\langle \Delta_ {t}, x _ {*} - x _ {t + 1} \right\rangle , \tag {22} \\ \end{array}
+$$
+
+where $\diamond$ is due to the convexity of $g_{i}$ and $\mathcal{V}_i\subseteq \mathbb{R}_+^m$ . The first two terms in (22) can be bounded by the Lipschitz continuity of $f_{i}$ and $\nabla g_{i}$ :
+
+$$
+\begin{array}{l} \frac {1}{n} \sum_ {i = 1} ^ {n} (g _ {i} (x _ {t + 1}) - g _ {i} (x _ {t})) ^ {\top} \bar {y} _ {t + 1} ^ {(i)} + (\frac {1}{n} \sum_ {i = 1} ^ {n} [ \nabla g _ {i} (x _ {t}) ] ^ {\top} \bar {y} _ {t + 1} ^ {(i)}) ^ {\top} (x _ {t} - x _ {t + 1}) \\ = \frac {1}{n} \sum_ {i = 1} ^ {n} \left(g _ {i} \left(x _ {t + 1}\right) - g _ {i} \left(x _ {t}\right) - \nabla g _ {i} \left(x _ {t}\right) \left(x _ {t + 1} - x _ {t}\right)\right) ^ {\top} \bar {y} _ {t + 1} ^ {(i)} \\ \leq \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| \bar {y} _ {t + 1} ^ {(i)} \right\| \left\| g _ {i} (x _ {t + 1}) - g _ {i} (x _ {t}) - \nabla g _ {i} (x _ {t}) (x _ {t + 1} - x _ {t}) \right\| _ {*} \leq \frac {C _ {f}}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} (x _ {t + 1}) - g _ {i} (x _ {t}) - \nabla g _ {i} (x _ {t}) (x _ {t + 1} - x _ {t}) \right\| _ {*}. \\ \end{array}
+$$
+
+Due to the $L_{g}$ -smoothness of $g_{i}$ , we have
+
+$$
+\left\| g _ {i} \left(x _ {t + 1}\right) - g _ {i} \left(x _ {t}\right) - \nabla g _ {i} \left(x _ {t}\right) \left(x _ {t + 1} - x _ {t}\right) \right\| _ {*} \leq \frac {L _ {g}}{2} \left\| x _ {t + 1} - x _ {t} \right\| _ {2} ^ {2}.
+$$
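+
+This is the standard Taylor-remainder bound for $L_g$ -smooth maps; as a sketch,
+
+$$
+\left\| \int_ {0} ^ {1} \left(\nabla g _ {i} (x _ {t} + s (x _ {t + 1} - x _ {t})) - \nabla g _ {i} (x _ {t})\right) (x _ {t + 1} - x _ {t}) \, d s \right\| _ {*} \leq \int_ {0} ^ {1} L _ {g} s \| x _ {t + 1} - x _ {t} \| _ {2} ^ {2} \, d s = \frac {L _ {g}}{2} \| x _ {t + 1} - x _ {t} \| _ {2} ^ {2}.
+$$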
+
+Thus, the first two terms in (22) can be upper bounded by
+
+$$
+\frac {1}{n} \sum_ {i = 1} ^ {n} \left(g _ {i} \left(x _ {t + 1}\right) - g _ {i} \left(x _ {t}\right)\right) ^ {\top} \bar {y} _ {t + 1} ^ {(i)} + \left(\frac {1}{n} \sum_ {i = 1} ^ {n} \left[ \nabla g _ {i} \left(x _ {t}\right) \right] ^ {\top} \bar {y} _ {t + 1} ^ {(i)}\right) ^ {\top} \left(x _ {t} - x _ {t + 1}\right) \leq \frac {L _ {g} C _ {f}}{2} \| x _ {t + 1} - x _ {t} \| _ {2} ^ {2}. \tag {23}
+$$
+
+Besides, we have $\mathbb{E}[\langle \Delta_t,x_*\rangle \mid \mathcal{F}_{t - 1}] = 0$ . By the Lipschitz continuity of $f_{i}$ and the definition of the operator norm, we have $\| ([\nabla g_i(x_t)]^\top -[\nabla g_i(x_t;\tilde{\mathcal{B}}_t^{(i)})]^\top)\bar{y}_{t + 1}^{(i)}\| _2\leq \| [\nabla g_i(x_t)]^\top -[\nabla g_i(x_t;\tilde{\mathcal{B}}_t^{(i)})]^\top\|_{\mathrm{op}}\| \bar{y}_{t + 1}^{(i)}\| \leq C_f\| [\nabla g_i(x_t)]^\top - [\nabla g_i(x_t;\tilde{\mathcal{B}}_t^{(i)})]^{\top}\|_{\mathrm{op}}$ . According to Lemma 3 and Assumption 4, we can derive that
+
+$$
+- \mathbb {E} [ \langle x _ {t + 1}, \Delta_ {t} \rangle ] \leq \frac {\mathbb {E} \| \Delta_ {t} \| _ {2} ^ {2}}{\mu + \eta} \leq \frac {1}{\mu + \eta} \left(\frac {\delta^ {2}}{S} + \mathbb {E} \| \frac {1}{S} \sum_ {i \in S _ {t}} \left([ \nabla g _ {i} (x _ {t}) ] ^ {\top} - [ \nabla g _ {i} (x _ {t}; \tilde {B} _ {t} ^ {(i)}) ] ^ {\top}\right) \bar {y} _ {t + 1} ^ {(i)} \| _ {2} ^ {2}\right) \leq \frac {\frac {C _ {f} ^ {2} \sigma_ {1} ^ {2}}{B} + \frac {\delta^ {2}}{S}}{\mu + \eta}. \tag {24}
+$$
+
+Then, combining (22), (23) and (24) leads to
+
+$$
+\frac {1}{n} \mathbb {E} \sum_ {i = 1} ^ {n} (g _ {i} (x _ {t + 1}) - g _ {i} (x _ {*})) ^ {\top} \bar {y} _ {t + 1} ^ {(i)} - \mathbb {E} \left\langle G _ {t}, x _ {t + 1} - x _ {*} \right\rangle \leq \frac {\frac {C _ {f} ^ {2} \sigma_ {1} ^ {2}}{B} + \frac {\delta^ {2}}{S}}{\mu + \eta} + \frac {L _ {g} C _ {f}}{2} \| x _ {t + 1} - x _ {t} \| _ {2} ^ {2}.
+$$
+
+Lemma 8. When $g_{i}$ is non-smooth and Assumptions 1, 2, 3, 4 hold, the following holds for Algorithm 1.
+
+$$
+\frac {1}{n} \mathbb {E} \sum_ {i = 1} ^ {n} \left(g _ {i} \left(x _ {t + 1}\right) - g _ {i} \left(x _ {*}\right)\right) ^ {\top} \bar {y} _ {t + 1} ^ {(i)} - \mathbb {E} \left\langle G _ {t}, x _ {t + 1} - x _ {*} \right\rangle \leq \frac {\frac {C _ {f} ^ {2} \sigma_ {1} ^ {2}}{B} + \frac {\delta^ {2}}{S} + 4 C _ {f} ^ {2} C _ {g} ^ {2}}{\mu + \eta} + \frac {\eta + \mu}{4} \| x _ {t + 1} - x _ {t} \| _ {2} ^ {2}. \tag {25}
+$$
+
+Proof. Note that (22) and (24) still hold. Since $g_{i}$ is non-smooth, we need to bound the left-hand side of (23) in a different way. Based on the definition of the operator norm and the Lipschitz continuity of $g_{i}$ , we have $\| g_i'(x_t)(x_t - x_{t + 1})\|_*\leq \| g_i'(x_t)\|_{\mathrm{op}}\| x_t - x_{t + 1}\| _2\leq C_g\| x_t - x_{t + 1}\| _2$ such that
+
+$$
+\begin{array}{l} \frac {1}{n} \sum_ {i = 1} ^ {n} (g _ {i} (x _ {t + 1}) - g _ {i} (x _ {t})) ^ {\top} \bar {y} _ {t + 1} ^ {(i)} + \frac {1}{n} \sum_ {i = 1} ^ {n} ([ g _ {i} ^ {\prime} (x _ {t}) ] ^ {\top} \bar {y} _ {t + 1} ^ {(i)}) ^ {\top} (x _ {t} - x _ {t + 1}) \\ = \frac {1}{n} \sum_ {i = 1} ^ {n} \left(g _ {i} \left(x _ {t + 1}\right) - g _ {i} \left(x _ {t}\right)\right) ^ {\top} \bar {y} _ {t + 1} ^ {(i)} + \frac {1}{n} \sum_ {i = 1} ^ {n} \left(g _ {i} ^ {\prime} \left(x _ {t}\right) \left(x _ {t} - x _ {t + 1}\right)\right) ^ {\top} \bar {y} _ {t + 1} ^ {(i)} \\ \leq \frac {1}{n} \sum_ {i = 1} ^ {n} \| \bar {y} _ {t + 1} ^ {(i)} \| (\| g _ {i} (x _ {t + 1}) - g _ {i} (x _ {t}) \| _ {*} + \| g _ {i} ^ {\prime} (x _ {t}) (x _ {t} - x _ {t + 1}) \| _ {*}) \\ \leq 2 C _ {f} C _ {g} \| x _ {t + 1} - x _ {t} \| _ {2} \leq \frac {4 C _ {f} ^ {2} C _ {g} ^ {2}}{\eta + \mu} + \frac {\eta + \mu}{4} \| x _ {t + 1} - x _ {t} \| _ {2} ^ {2}, \tag {26} \\ \end{array}
+$$
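+
+The last step of (26) is an instance of Young's inequality $ab \leq \frac{a^2}{2c} + \frac{c b^2}{2}$ with $a = 2 C_f C_g$ , $b = \| x_{t+1} - x_t \|_2$ , and $c = \frac{\eta + \mu}{2}$ :
+
+$$
+2 C _ {f} C _ {g} \| x _ {t + 1} - x _ {t} \| _ {2} \leq \frac {4 C _ {f} ^ {2} C _ {g} ^ {2}}{\eta + \mu} + \frac {\eta + \mu}{4} \| x _ {t + 1} - x _ {t} \| _ {2} ^ {2}.
+$$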
+
+where $g_i'(x_t) \in \partial g_i(x_t)$ . Merging (22), (24), and (26) yields
+
+$$
+\frac {1}{n} \mathbb {E} \sum_ {i = 1} ^ {n} (g _ {i} (x _ {t + 1}) - g _ {i} (x _ {*})) ^ {\top} \bar {y} _ {t + 1} ^ {(i)} - \mathbb {E} \left\langle G _ {t}, x _ {t + 1} - x _ {*} \right\rangle \leq \frac {\frac {C _ {f} ^ {2} \sigma_ {1} ^ {2}}{B} + \frac {\delta^ {2}}{S} + 4 C _ {f} ^ {2} C _ {g} ^ {2}}{\mu + \eta} + \frac {\eta + \mu}{4} \| x _ {t + 1} - x _ {t} \| _ {2} ^ {2}.
+$$
+
+# D.1.2. PROOF OF THEOREM 1
+
+Proof. If $g_{i}$ is smooth, combining (15), (19), and (21) gives
+
+$$
+\begin{array}{l} \mathbb {E} \left[ L \left(x _ {t + 1}, y _ {*}\right) - L \left(x _ {*}, \bar {y} _ {t + 1}\right) \right] \leq \frac {\tau + \rho (1 - \frac {S}{n})}{S} \mathbb {E} \left[ U _ {\psi} \left(y _ {*}, y _ {t}\right) \right] - \frac {\tau + \rho}{S} \mathbb {E} \left[ U _ {\psi} \left(y _ {*}, y _ {t + 1}\right) \right] + \frac {\eta}{2} \mathbb {E} \| x _ {*} - x _ {t} \| _ {2} ^ {2} \\ - \frac {\eta + \mu}{2} \mathbb {E} \| x _ {*} - x _ {t + 1} \| _ {2} ^ {2} - \left(\frac {\tau}{n} - \frac {\lambda_ {2} + \lambda_ {3} \theta}{\mu_ {\psi} n}\right) \mathbb {E} [ U _ {\psi} (\bar {y} _ {t + 1}, y _ {t}) ] - \left(\frac {\eta}{2} - \frac {C _ {g} ^ {2}}{2 \lambda_ {2}} - \frac {L _ {g} C _ {f}}{2}\right) \mathbb {E} \| x _ {t + 1} - x _ {t} \| _ {2} ^ {2} \\ + \frac {\theta C _ {g} ^ {2}}{2 \lambda_ {3}} \mathbb {E} \| x _ {t} - x _ {t - 1} \| _ {2} ^ {2} + \mathbb {E} [ \Gamma_ {t + 1} - \theta \Gamma_ {t} ] + \frac {2 (1 + 2 \theta) \sigma_ {0} ^ {2}}{B \mu_ {\psi} (\rho + \tau)} + \frac {\frac {C _ {f} ^ {2} \sigma_ {1} ^ {2}}{B} + \frac {\delta^ {2}}{S}}{\eta + \mu}. \tag {27} \\ \end{array}
+$$
+
+Define $\Upsilon_t^x \coloneqq \frac{1}{2}\mathbb{E}\| x_* - x_t\|_2^2$ and $\Upsilon_t^y \coloneqq \frac{1}{S}\mathbb{E}[U_\psi(y_*, y_t)]$ . Note that $L(x_{t+1}, y_*) - L(x_*, \bar{y}_{t+1}) \geq 0$ . Multiplying both sides of (27) by $\theta^{-t}$ , telescoping from $t = 0$ to $T - 1$ , and adding $\eta \theta^{-T}\Upsilon_T^x$ to both sides gives
+
+$$
+\begin{array}{l} \eta \theta^ {- T} \Upsilon_ {T} ^ {x} \leq \sum_ {t = 0} ^ {T - 1} \theta^ {- t} ((\eta \Upsilon_ {t} ^ {x} + (\tau + \rho (1 - \frac {S}{n})) \Upsilon_ {t} ^ {y} - \theta \mathbb {E} \Gamma_ {t}) - ((\eta + \mu) \Upsilon_ {t + 1} ^ {x} + (\tau + \rho) \Upsilon_ {t + 1} ^ {y} - \mathbb {E} \Gamma_ {t + 1})) \\ + \eta \theta^ {- T} \Upsilon_ {T} ^ {x} + \left(\frac {2 (1 + 2 \theta) \sigma_ {0} ^ {2}}{\mu_ {\psi} B (\rho + \tau)} + \frac {\frac {C _ {f} ^ {2} \sigma_ {1} ^ {2}}{B} + \frac {\delta^ {2}}{S}}{\eta + \mu}\right) \sum_ {t = 0} ^ {T - 1} \theta^ {- t} - \sum_ {t = 0} ^ {T - 1} \theta^ {- t} \left(\frac {\tau}{n} - \frac {(\lambda_ {2} + \lambda_ {3} \theta)}{\mu_ {\psi} n}\right) \mathbb {E} [ U _ {\psi} (\bar {y} _ {t + 1}, y _ {t}) ] \\ - \sum_ {t = 0} ^ {T - 1} \theta^ {- t} \left(\frac {\eta}{2} - \frac {L _ {g} C _ {f}}{2} - \frac {C _ {g} ^ {2}}{2 \lambda_ {2}} - \frac {C _ {g} ^ {2}}{2 \lambda_ {3}}\right) \mathbb {E} \| x _ {t + 1} - x _ {t} \| _ {2} ^ {2}. \\ \end{array}
+$$
+
+Let $\eta = \frac{\mu\theta}{1 - \theta}$ such that $\theta = \frac{\eta}{\eta + \mu}$ , and $\tau = \frac{\rho S}{n(1 - \theta)} - \rho$ (where $\tau > 0$ if $\theta > 1 - \frac{S}{n}$ ) such that $\theta = \frac{\tau + \rho\left(1 - \frac{S}{n}\right)}{\tau + \rho}$ . Then,
+
+$$
+\begin{array}{l} \sum_ {t = 0} ^ {T - 1} \theta^ {- t} \left(\left(\eta \Upsilon_ {t} ^ {x} + (\tau + \rho (1 - \frac {S}{n})) \Upsilon_ {t} ^ {y} - \theta \mathbb {E} \Gamma_ {t}\right) - \left(\left(\eta + \mu\right) \Upsilon_ {t + 1} ^ {x} + (\tau + \rho) \Upsilon_ {t + 1} ^ {y} - \mathbb {E} \Gamma_ {t + 1}\right)\right) \\ = \eta \Upsilon_ {0} ^ {x} + (\tau + \rho (1 - \frac {S}{n})) \Upsilon_ {0} ^ {y} - \theta \mathbb {E} \Gamma_ {0} - \theta^ {- T + 1} ((\eta + \mu) \Upsilon_ {T} ^ {x} + (\tau + \rho) \Upsilon_ {T} ^ {y} - \mathbb {E} \Gamma_ {T}). \\ \end{array}
+$$
+
+By setting $x_{-1} = x_0$ , we have $\Gamma_0 = 0$ . Besides, we have $-\Gamma_T \leq \frac{1}{n} \sum_{i=1}^{n} \| g_i(x_T) - g_i(x_{T-1}) \|_* \| y_*^{(i)} - y_T^{(i)} \| \leq \frac{C_g}{n} \| x_T - x_{T-1} \|_2 \| y_* - y_T \|$ . Thus,
+
+$$
+\begin{array}{l} \eta \theta^ {- T} \Upsilon_ {T} ^ {x} \leq \eta \Upsilon_ {0} ^ {x} + (\tau + \rho (1 - \frac {S}{n})) \Upsilon_ {0} ^ {y} - \theta^ {- T + 1} ((\eta + \mu) \Upsilon_ {T} ^ {x} + (\tau + \rho) \Upsilon_ {T} ^ {y} - \frac {\eta}{\theta} \Upsilon_ {T} ^ {x} - \frac {C _ {g}}{n} \| x _ {T} - x _ {T - 1} \| _ {2} \| y _ {*} - y _ {T} \|) \\ - \sum_{t = 1}^{T - 1}\theta^{-t + 1}\underbrace{(((\eta + \mu)\Upsilon^{x}_{t + 1} + (\tau + \rho)\Upsilon^{y}_{t + 1} - \mathbb{E}\Gamma_{t + 1}) - (\frac{\eta}{\theta}\Upsilon^{x}_{t} + (\tau + \rho(1 - \frac{S}{n})) / \theta \, \Upsilon^{y}_{t} - \mathbb{E}\Gamma_{t}))}_{\heartsuit} \\ + \left(\frac {2 (1 + 2 \theta) \sigma_ {0} ^ {2}}{\mu_ {\psi} B (\rho + \tau)} + \frac {\frac {C _ {f} ^ {2} \sigma_ {1} ^ {2}}{B} + \frac {\delta^ {2}}{S}}{\eta + \mu}\right) \sum_ {t = 0} ^ {T - 1} \theta^ {- t} - \sum_ {t = 0} ^ {T - 1} \theta^ {- t} \underbrace {\left(\frac {\tau}{n} - \frac {\lambda_ {2} + \lambda_ {3} \theta}{\mu_ {\psi} n}\right)} _ {\heartsuit} \mathbb {E} [ U _ {\psi} (\bar {y} _ {t + 1}, y _ {t}) ] \\ - \sum_ {t = 0} ^ {T - 1} \theta^ {- t} \underbrace {\left(\frac {\eta}{2} - \frac {L _ {g} C _ {f}}{2} - \frac {C _ {g} ^ {2}}{2 \lambda_ {2}} - \frac {C _ {g} ^ {2}}{2 \lambda_ {3}}\right)} _ {\heartsuit} \mathbb {E} \| x _ {t + 1} - x _ {t} \| _ {2} ^ {2}. \tag {28} \\ \end{array}
+$$
+
+Note that $\eta +\mu -\frac{\eta}{\theta} = 0\Leftrightarrow \theta = \frac{\eta}{\eta + \mu}$ such that $(\eta +\mu)\Upsilon_T^x -\frac{\eta}{\theta}\Upsilon_T^x\geq 0$ and $\frac{C_g}{n}\| x_T - x_{T - 1}\| _2\| y_* - y_T\| \leq \frac{C_g^2}{2\lambda_2}\| x_T - x_{T - 1}\| _2^2 +\frac{\lambda_2}{2\mu_\psi n^2} U_\psi (y_*,y_T)$ . To make the $\heartsuit$ terms in (28) non-negative, we choose $\lambda_{2}\asymp \frac{C_{g}\sqrt{S\rho\mu_{\psi}}}{\sqrt{n\mu}}$ and $\lambda_3\asymp \frac{C_g\sqrt{S\rho\mu_\psi}}{\sqrt{n\mu}}$ while ensuring that
+
+$$
+1 / \tau \leq O \left(\frac {\sqrt {n \mu \mu_ {\psi}}}{C _ {g} \sqrt {S \rho}}\right), \quad 1 / \eta \leq O \left(\frac {\sqrt {S \rho \mu_ {\psi}}}{C _ {g} \sqrt {n \mu}} \wedge \frac {1}{L _ {g} C _ {f}}\right). \tag {29}
+$$
+
+Since $\tau = \frac{\rho S}{n(1 - \theta)} -\rho \Leftrightarrow \theta = \frac{\tau + \rho(1 - \frac{S}{n})}{\tau + \rho}$ , we have $\tau +\rho \left(1 - \frac{S}{n}\right) = \theta (\tau +\rho)$ and $(\tau +\rho)(1 - \theta) = \frac{\rho S}{n}$ .
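+
+As an illustrative numeric sanity check of these two identities (arbitrary parameter values, not part of the proof):
+
+```python
+# Check: with tau = rho*S/(n*(1 - theta)) - rho, the identities
+#   tau + rho*(1 - S/n) = theta*(tau + rho)  and  (tau + rho)*(1 - theta) = rho*S/n
+# hold whenever 1 - S/n < theta < 1 (so that tau > 0).
+n, S, rho, theta = 100, 10, 2.0, 0.95  # arbitrary values with theta > 1 - S/n
+tau = rho * S / (n * (1 - theta)) - rho
+assert tau > 0
+assert abs((tau + rho * (1 - S / n)) - theta * (tau + rho)) < 1e-9
+assert abs((tau + rho) * (1 - theta) - rho * S / n) < 1e-9
+```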
+
+$$
+\begin{array}{l} \mu \Upsilon_ {T} ^ {x} \leq \mu \theta^ {T} \Upsilon_ {0} ^ {x} + \frac {\left(\tau + \rho \left(1 - \frac {S}{n}\right)\right) (1 - \theta)}{\theta} \theta^ {T} \Upsilon_ {0} ^ {y} + \left(\frac {2 (1 + 2 \theta) \sigma_ {0} ^ {2}}{\mu_ {\psi} B (\rho + \tau)} + \frac {\frac {C _ {f} ^ {2} \sigma_ {1} ^ {2}}{B} + \frac {\delta^ {2}}{S}}{\eta + \mu}\right) \\ = \mu \theta^ {T} \Upsilon_ {0} ^ {x} + (\tau + \rho) (1 - \theta) \theta^ {T} \Upsilon_ {0} ^ {y} + \left(\frac {2 (1 + 2 \theta) \sigma_ {0} ^ {2}}{\mu_ {\psi} B (\rho + \tau)} + \frac {\frac {C _ {f} ^ {2} \sigma_ {1} ^ {2}}{B} + \frac {\delta^ {2}}{S}}{\eta + \mu}\right) \\ = \mu \theta^ {T} \Upsilon_ {0} ^ {x} + \frac {\rho S}{n} \theta^ {T} \Upsilon_ {0} ^ {y} + \left(\frac {2 (1 + 2 \theta) \sigma_ {0} ^ {2}}{\mu_ {\psi} B (\rho + \tau)} + \frac {\frac {C _ {f} ^ {2} \sigma_ {1} ^ {2}}{B} + \frac {\delta^ {2}}{S}}{\eta + \mu}\right). \\ \end{array}
+$$
+
+We select $\theta = 1 - O\left(\frac{S}{n}\wedge \frac{\mu}{L_gC_f}\wedge \sqrt{\frac{\mu\rho\mu_\psi S}{C_g^2n}}\wedge \frac{\mu_\psi B\rho S\epsilon}{\sigma_0^2n}\wedge \frac{B\mu\epsilon}{C_f^2\sigma_1^2}\wedge \frac{S\mu\epsilon}{\delta^2}\right)$ to make (29) hold and
+
+$$
+\frac {2 (1 + 2 \theta) \sigma_ {0} ^ {2}}{\mu_ {\psi} B (\rho + \tau)} + \frac {\frac {C _ {f} ^ {2} \sigma_ {1} ^ {2}}{B} + \frac {\delta^ {2}}{S}}{\eta + \mu} \leq \frac {2 (1 + 2 \theta) (1 - \theta) \sigma_ {0} ^ {2} n}{\mu_ {\psi} B \rho S} + \frac {(1 - \theta) \left(\frac {C _ {f} ^ {2} \sigma_ {1} ^ {2}}{B} + \frac {\delta^ {2}}{S}\right)}{\mu} \leq \epsilon .
+$$
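+
+The first inequality uses the parameter relations $(\tau + \rho)(1 - \theta) = \frac{\rho S}{n}$ and $\eta + \mu = \frac{\mu}{1 - \theta}$ (the latter following from $\eta = \frac{\mu \theta}{1 - \theta}$ ), i.e.,
+
+$$
+\frac {1}{\rho + \tau} = \frac {(1 - \theta) n}{\rho S}, \qquad \frac {1}{\eta + \mu} = \frac {1 - \theta}{\mu}.
+$$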
+
+Since $\frac{1}{L_f} = \mu_{\psi}\rho$ when $f_{i}$ is $L_{f}$ -smooth, the number of iterations needed by Algorithm 1 to make $\mu \Upsilon_T^x\leq \epsilon$ is
+
+$$
+T = \tilde {O} \left(\frac {n}{S} + \frac {L _ {g} C _ {f}}{\mu} + \frac {C _ {g} \sqrt {n L _ {f}}}{\sqrt {S} \mu} + \frac {n L _ {f} \sigma_ {0} ^ {2}}{B S \epsilon} + \frac {C _ {f} ^ {2} \sigma_ {1} ^ {2}}{\mu B \epsilon} + \frac {\delta^ {2}}{\mu S \epsilon}\right),
+$$
+
+where $\tilde{O}(\cdot)$ hides the $\mathrm{polylog}(1 / \epsilon)$ factor. In the case that $g_{i}$ is non-smooth, we utilize (25) instead of (21). Correspondingly, we replace the $\frac{L_gC_f}{2}$ term in (27) by $\frac{\eta + \mu}{4}$ . Additionally, there is an extra $\frac{4C_f^2C_g^2}{\eta + \mu}$ term on the right-hand side of (27). Following similar steps, the iteration complexity to make $\mu \Upsilon_T^x \leq \epsilon$ is
+
+$$
+T = \tilde {O} \left(\frac {n}{S} + \frac {C _ {g} \sqrt {n L _ {f}}}{\sqrt {S} \mu} + \frac {n L _ {f} \sigma_ {0} ^ {2}}{B S \epsilon} + \frac {C _ {f} ^ {2} \sigma_ {1} ^ {2}}{\mu B \epsilon} + \frac {\delta^ {2}}{\mu S \epsilon} + \frac {C _ {f} ^ {2} C _ {g} ^ {2}}{\mu \epsilon}\right).
+$$
+
+# D.2. Convergence Analysis of the Convex Case with Possibly Non-smooth $f_{i}$ (Section 3.3.2)
+
+As discussed in Section 3.1, we can choose $\psi_{i} = \frac{1}{2}\| \cdot \|_{2}^{2}$ for the cFCCO problem with non-smooth $f_{i}$ . We present our result with a general $\mu_{\psi}$ -strongly convex and $L_{\psi}$ -smooth distance-generating function $\psi_{i}$ , which subsumes the quadratic one. The following lemma extends Lemma A.2 in Alacaoglu et al. (2022) to mini-batch sampling and a general smooth and strongly convex distance-generating function $\psi_{i}$ .
+
+Lemma 9. The following holds for Algorithm 1 with $L_{\psi}$ -smooth and $\mu_{\psi}$ -strongly convex $\psi_i$ and any $\lambda_1 > 0$ .
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \frac {\tau}{n} \left(U _ {\psi} (y, y _ {t}) - U _ {\psi} (y, \bar {y} _ {t + 1})\right) - \frac {\tau}{n} U _ {\psi} (\bar {y} _ {t + 1}, y _ {t}) \right] \tag {30} \\ \leq \mathbb {E} \left[ \frac {\tau}{S} (U _ {\psi} (y, y _ {t}) - U _ {\psi} (y, y _ {t + 1})) + \frac {\tau \lambda_ {1}}{S} (U _ {\psi} (y, \hat {y} _ {t}) - U _ {\psi} (y, \hat {y} _ {t + 1})) \right] - \frac {\tau}{n} \left(1 - \frac {L _ {\psi} ^ {2}}{\lambda_ {1} \mu_ {\psi} ^ {2} S}\right) \mathbb {E} \left[ U _ {\psi} (\bar {y} _ {t + 1}, y _ {t}) \right], \\ \end{array}
+$$
+
+where $\hat{y}_{t + 1}^{(i)} = \arg \min_v\{-v^\top \Delta_t^{(i)} + \frac{n\lambda_1}{S} U_{\psi_i}(v,\hat{y}_t^{(i)})\}$ and $\Delta_t^{(i)}\coloneqq -\frac{n}{S}\nabla \psi_i(y_{t + 1}^{(i)}) + \nabla \psi_i(\bar{y}_{t + 1}^{(i)}) + \frac{n - S}{S}\nabla \psi_i(y_t^{(i)}).$
+
+Proof. First, we can make the following decomposition.
+
+$$
+\begin{array}{l} \frac {\tau}{n} U _ {\psi} (y, y _ {t}) - \frac {\tau}{n} U _ {\psi} (y, \bar {y} _ {t + 1}) - \frac {\tau}{n} U _ {\psi} (\bar {y} _ {t + 1}, y _ {t}) \tag {31} \\ = \frac {\tau}{S} U _ {\psi} (y, y _ {t}) - \frac {\tau}{S} U _ {\psi} (y, y _ {t + 1}) - \frac {\tau}{n} U _ {\psi} (\bar {y} _ {t + 1}, y _ {t}) + \frac {\tau}{S} U _ {\psi} (y, y _ {t + 1}) - \frac {\tau}{n} U _ {\psi} (y, \bar {y} _ {t + 1}) + \frac {(S - n) \tau}{n S} U _ {\psi} (y, y _ {t}). \\ \end{array}
+$$
+
+We rewrite the last three terms above as follows.
+
+$$
+\begin{array}{l} \frac {\tau}{S} U _ {\psi} (y, y _ {t + 1}) - \frac {\tau}{n} U _ {\psi} (y, \bar {y} _ {t + 1}) + \frac {(S - n) \tau}{n S} U _ {\psi} (y, y _ {t}) \\ = \frac {\tau}{S} \sum_ {i = 1} ^ {n} (\psi_ {i} (y ^ {(i)}) - \psi_ {i} (y _ {t + 1} ^ {(i)}) - (y ^ {(i)} - y _ {t + 1} ^ {(i)}) ^ {\top} \nabla \psi_ {i} (y _ {t + 1} ^ {(i)})) - \frac {\tau}{n} \sum_ {i = 1} ^ {n} (\psi_ {i} (y ^ {(i)}) - \psi_ {i} (\bar {y} _ {t + 1} ^ {(i)}) - (y ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)}) ^ {\top} \nabla \psi_ {i} (\bar {y} _ {t + 1} ^ {(i)})) \\ + \frac {(S - n) \tau}{n S} \sum_ {i = 1} ^ {n} (\psi_ {i} (y ^ {(i)}) - \psi_ {i} (y _ {t} ^ {(i)}) - (y ^ {(i)} - y _ {t} ^ {(i)}) ^ {\top} \nabla \psi_ {i} (y _ {t} ^ {(i)})) \\ = \frac{\tau}{n}\sum_{i = 1}^{n}\left(\psi_{i}(\bar{y}_{t + 1}^{(i)}) - \frac{n}{S}\psi_{i}(y_{t + 1}^{(i)}) + \frac{n - S}{S}\psi_{i}(y_{t}^{(i)})\right) + \underbrace{\frac{\tau}{n}\sum_{i = 1}^{n}(-\frac{n}{S}\nabla\psi_{i}(y_{t + 1}^{(i)}) + \nabla\psi_{i}(\bar{y}_{t + 1}^{(i)}) + \frac{n - S}{S}\nabla\psi_{i}(y_{t}^{(i)}))^{\top}y^{(i)}}_{\sharp} \\ + \frac {\tau}{S} \sum_ {i = 1} ^ {n} \left\langle \nabla \psi_ {i} (y _ {t + 1} ^ {(i)}), y _ {t + 1} ^ {(i)} \right\rangle - \frac {\tau}{n} \sum_ {i = 1} ^ {n} \left\langle \nabla \psi_ {i} (\bar {y} _ {t + 1} ^ {(i)}), \bar {y} _ {t + 1} ^ {(i)} \right\rangle + \frac {(S - n) \tau}{n S} \sum_ {i = 1} ^ {n} \left\langle \nabla \psi_ {i} (y _ {t} ^ {(i)}), y _ {t} ^ {(i)} \right\rangle . \\ \end{array}
+$$
+
+Note that both $\bar{y}_{t + 1}^{(i)}$ and $y_{t}^{(i)}$ are independent of $S_{t}$ such that
+
+$$
+\begin{array}{l} \mathbb {E} [ \psi_ {i} (y _ {t + 1} ^ {(i)}) \mid \mathcal {G} _ {t} ] = \frac {S}{n} \psi_ {i} (\bar {y} _ {t + 1} ^ {(i)}) + \frac {n - S}{n} \psi_ {i} (y _ {t} ^ {(i)}), \\ \mathbb {E} \left[ \left\langle \nabla \psi_ {i} (y _ {t + 1} ^ {(i)}), y _ {t + 1} ^ {(i)} \right\rangle \mid \mathcal {G} _ {t} \right] = \frac {S}{n} \left\langle \nabla \psi_ {i} (\bar {y} _ {t + 1} ^ {(i)}), \bar {y} _ {t + 1} ^ {(i)} \right\rangle + \frac {n - S}{n} \left\langle \nabla \psi_ {i} (y _ {t} ^ {(i)}), y _ {t} ^ {(i)} \right\rangle , \\ \mathbb {E} \left[ \nabla \psi_ {i} (y _ {t + 1} ^ {(i)}) \mid \mathcal {G} _ {t} \right] = \frac {S}{n} \nabla \psi_ {i} (\bar {y} _ {t + 1} ^ {(i)}) + \frac {n - S}{n} \nabla \psi_ {i} (y _ {t} ^ {(i)}). \\ \end{array}
+$$
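+
+These identities follow from the block-sampling rule: conditioned on $\mathcal{G}_t$ , each index satisfies $y_{t+1}^{(i)} = \bar{y}_{t+1}^{(i)}$ if $i \in S_t$ (which, assuming $S_t$ is a uniformly sampled size- $S$ subset as in Algorithm 1, happens with probability $\frac{S}{n}$ ) and $y_{t+1}^{(i)} = y_t^{(i)}$ otherwise, so for any deterministic function $h$ ,
+
+$$
+\mathbb {E} \left[ h (y _ {t + 1} ^ {(i)}) \mid \mathcal {G} _ {t} \right] = \frac {S}{n} h (\bar {y} _ {t + 1} ^ {(i)}) + \frac {n - S}{n} h (y _ {t} ^ {(i)}).
+$$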
+
+Apply Lemma 2 to $\sharp$ with $\Delta_t^{(i)} := -\frac{n}{S} \nabla \psi_i(y_{t+1}^{(i)}) + \nabla \psi_i(\bar{y}_{t+1}^{(i)}) + \frac{n-S}{S} \nabla \psi_i(y_t^{(i)})$ and $\hat{y}_{t+1}^{(i)} = \arg \min_v \{-v^\top \Delta_t^{(i)} + \alpha U_{\psi_i}(v, \hat{y}_t^{(i)})\}$ (with $\alpha > 0$ to be determined) to obtain
+
+$$
+\mathbb {E} \left[ \left\langle \Delta_ {t} ^ {(i)}, y ^ {(i)} \right\rangle \right] \leq \mathbb {E} \left[ \alpha U _ {\psi_ {i}} (y ^ {(i)}, \hat {y} _ {t} ^ {(i)}) - \alpha U _ {\psi_ {i}} (y ^ {(i)}, \hat {y} _ {t + 1} ^ {(i)}) \right] + \frac {1}{2 \mu_ {\psi} \alpha} \mathbb {E} \left[ \left\| \Delta_ {t} ^ {(i)} \right\| _ {*} ^ {2} \right].
+$$
+
+Summing both sides over $i$ from $1$ to $n$ and dividing by $n$ yields
+
+$$
+\mathbb {E} [ \sharp ] \leq \mathbb {E} \left[ \frac {\alpha \tau}{n} (U _ {\psi} (y, \hat {y} _ {t}) - U _ {\psi} (y, \hat {y} _ {t + 1})) \right] + \frac {\tau}{2 n \mu_ {\psi} \alpha} \mathbb {E} \left[ \sum_ {i = 1} ^ {n} \left\| \Delta_ {t} ^ {(i)} \right\| _ {*} ^ {2} \right].
+$$
+
+Note that $\mathbb{E}[(\nabla \psi_i(y_{t + 1}^{(i)}) - \nabla \psi_i(y_t^{(i)}))|\mathcal{G}_t] = \frac{S}{n} (\nabla \psi_i(\bar{y}_{t + 1}^{(i)}) - \nabla \psi_i(y_t^{(i)}))$ such that
+
+$$
+\mathbb {E} \left[ \| \Delta_ {t} ^ {(i)} \| _ {*} ^ {2} \right] = \mathbb {E} \left\| (\nabla \psi_ {i} (\bar {y} _ {t + 1} ^ {(i)}) - \nabla \psi_ {i} (y _ {t} ^ {(i)})) - \frac {n}{S} (\nabla \psi_ {i} (y _ {t + 1} ^ {(i)}) - \nabla \psi_ {i} (y _ {t} ^ {(i)})) \right\| _ {*} ^ {2} \leq \frac {n ^ {2}}{S ^ {2}} \mathbb {E} \left\| \nabla \psi_ {i} (y _ {t + 1} ^ {(i)}) - \nabla \psi_ {i} (y _ {t} ^ {(i)}) \right\| _ {*} ^ {2}.
+$$
+
+Thus, we have
+
+$$
+\mathbb {E} [ \sharp ] \leq \mathbb {E} \left[ \frac {\alpha \tau}{n} \left(U _ {\psi} (y, \hat {y} _ {t}) - U _ {\psi} (y, \hat {y} _ {t + 1})\right) \right] + \frac {\tau n}{2 \mu_ {\psi} \alpha S ^ {2}} \sum_ {i = 1} ^ {n} \mathbb {E} \left\| \nabla \psi_ {i} \left(y _ {t + 1} ^ {(i)}\right) - \nabla \psi_ {i} \left(y _ {t} ^ {(i)}\right) \right\| _ {*} ^ {2}. \tag {32}
+$$
+
+The last term above can be upper bounded as
+
+$$
+\frac {\tau n}{2 \mu_ {\psi} \alpha S ^ {2}} \sum_ {i = 1} ^ {n} \mathbb {E} \left\| \nabla \psi_ {i} (y _ {t + 1} ^ {(i)}) - \nabla \psi_ {i} (y _ {t} ^ {(i)}) \right\| _ {*} ^ {2} \leq \frac {\tau n L _ {\psi} ^ {2}}{2 \mu_ {\psi} \alpha S ^ {2}} \sum_ {i = 1} ^ {n} \mathbb {E} \left\| y _ {t + 1} ^ {(i)} - y _ {t} ^ {(i)} \right\| ^ {2}.
+$$
+
+Choose $\alpha = \frac{n\lambda_1}{S}$ for some $\lambda_{1} > 0$. According to (31) and (32) and $\mathbb{E}\big[\| y_{t + 1} - y_t\| ^2\mid \mathcal{G}_t\big] = \frac{S}{n}\| \bar{y}_{t + 1} - y_t\| ^2\leq \frac{2S}{n\mu_{\psi}} U_{\psi}(\bar{y}_{t + 1},y_t)$, we can finish the proof.
+
+# D.2.1. A SUPPORTING LEMMA
+
+Lemma 10. Under Assumptions 2 and 4, the following holds for Algorithm 1 with $\theta = 1$ and any $\lambda_2, \lambda_3, \lambda_4, \lambda_5 > 0$, $y \in \mathcal{Y}$:
+
+$$
+\begin{array}{l} \frac {1}{n} \sum_ {i = 1} ^ {n} \mathbb {E} \left(g _ {i} \left(x _ {t + 1}\right) - \tilde {g} _ {t} ^ {(i)}\right) ^ {\top} \left(y ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)}\right) \tag {33} \\ \leq \mathbb {E} [ \Gamma_ {t + 1} - \Gamma_ {t} ] + \frac {2 \lambda_ {2}}{n} \mathbb {E} [ U _ {\psi} (y, \hat {y} _ {t}) - U _ {\psi} (y, \hat {y} _ {t + 1}) ] + \frac {\lambda_ {5}}{n} \mathbb {E} [ U _ {\psi} (y, \check {y} _ {t}) - U _ {\psi} (y, \check {y} _ {t + 1}) ] \\ + \frac {(\lambda_ {3} + \lambda_ {4}) \mathbb {E} [ U _ {\psi} (\bar {y} _ {t + 1} , y _ {t}) ]}{\mu_ {\psi} n} + \frac {C _ {g} ^ {2} \mathbb {E} \| x _ {t + 1} - x _ {t} \| ^ {2}}{2 \lambda_ {3}} + \frac {C _ {g} ^ {2} \mathbb {E} \| x _ {t} - x _ {t - 1} \| ^ {2}}{2 \lambda_ {4}} + \frac {9 \sigma_ {0} ^ {2}}{\tau \mu_ {\psi} B} + \frac {\sigma_ {0} ^ {2}}{\lambda_ {2} \mu_ {\psi} B} + \frac {\sigma_ {0} ^ {2}}{2 \lambda_ {5} \mu_ {\psi} B}. \\ \end{array}
+$$
+
+where $\Gamma_t := \frac{1}{n} \sum_{i=1}^{n} (g_i(x_t) - g_i(x_{t-1}))^\top (y^{(i)} - y_t^{(i)})$, and $\{\hat{y}_t\}_{t \geq 0}$, $\{\check{y}_t\}_{t \geq 0}$ are virtual sequences constructed as $\hat{y}_{t+1}^{(i)} = \arg \min_{v \in \mathcal{Y}_i} \{v^\top(g_i(x_t; \mathcal{B}_t^{(i)}) - g_i(x_t)) + \lambda_2 U_{\psi_i}(v, \hat{y}_t^{(i)})\}$ and $\check{y}_{t+1}^{(i)} = \arg \min_{v \in \mathcal{Y}_i} \{v^\top(g_i(x_{t-1}; \mathcal{B}_t^{(i)}) - g_i(x_{t-1})) + \lambda_5 U_{\psi_i}(v, \check{y}_t^{(i)})\}$ for each $i \in [n]$.
+
+Proof. The term $\frac{1}{n}\sum_{i=1}^{n}(g_i(x_{t+1}) - \tilde{g}_t^{(i)})^\top(y^{(i)} - \bar{y}_{t+1}^{(i)})$ can be decomposed as
+
+$$
+\begin{array}{l} \frac {1}{n} \sum_ {i = 1} ^ {n} \left(g _ {i} \left(x _ {t + 1}\right) - \tilde {g} _ {t} ^ {(i)}\right) ^ {\top} \left(y ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)}\right) \tag {34} \\ = \underbrace {\frac {1 + \theta}{n} \sum_ {i = 1} ^ {n} (g _ {i} (x _ {t}) - g _ {i} (x _ {t} ; \mathcal {B} _ {t} ^ {(i)})) ^ {\top} (y ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)})} _ {\text {I}} + \underbrace {\frac {1}{n} \sum_ {i = 1} ^ {n} g _ {i} (x _ {t + 1}) ^ {\top} (y ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)}) - \frac {1}{n} \sum_ {i = 1} ^ {n} g _ {i} (x _ {t}) ^ {\top} (y ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)})} _ {\text {I I}} \\ + \underbrace {\frac {\theta}{n} \sum_ {i = 1} ^ {n} (g _ {i} (x _ {t - 1}) - g _ {i} (x _ {t})) ^ {\top} (y ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)})} _ {\text {I I I}} + \underbrace {\frac {\theta}{n} \sum_ {i = 1} ^ {n} (g _ {i} (x _ {t - 1} ; \mathcal {B} _ {t} ^ {(i)}) - g _ {i} (x _ {t - 1})) ^ {\top} (y ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)})} _ {\text {I V}}. \\ \end{array}
+$$
+
+Note that our ALEXR (Algorithm 1) only samples $\mathcal{B}_t^{(i)}$ for those $i\in S_t$ in the $t$-th iteration. For those $i\notin S_t$, the batches $\{\mathcal{B}_t^{(i)}\}_{i\notin S_t}$ are virtual and introduced solely for the convenience of analysis, and are not required in the actual execution of Algorithm 1. For each $i\in [n]$, define $\dot{y}_{t + 1}^{(i)}\coloneqq \arg \max_{v\in \mathcal{Y}_i}\{v^\top \bar{g}_t^{(i)} - f_i^* (v) - \tau U_{\psi_i}(v,y_t^{(i)})\}$ and $\bar{g}_t^{(i)}\coloneqq g_i(x_t) + \theta (g_i(x_t) - g_i(x_{t - 1}))$. We decompose the I term in (34) as
+
+$$
+\begin{array}{l} \mathrm {I} = \frac {1 + \theta}{n} \sum_ {i = 1} ^ {n} (g _ {i} (x _ {t}) - g _ {i} (x _ {t}; \mathcal {B} _ {t} ^ {(i)})) ^ {\top} (y ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)}) \\ = \frac {1 + \theta}{n} \sum_ {i = 1} ^ {n} \Big \{(g _ {i} (x _ {t}) - g _ {i} (x _ {t}; \mathcal {B} _ {t} ^ {(i)})) ^ {\top} (\dot {y} _ {t + 1} ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)}) + (g _ {i} (x _ {t}) - g _ {i} (x _ {t}; \mathcal {B} _ {t} ^ {(i)})) ^ {\top} y ^ {(i)} - (g _ {i} (x _ {t}) - g _ {i} (x _ {t}; \mathcal {B} _ {t} ^ {(i)})) ^ {\top} \dot {y} _ {t + 1} ^ {(i)} \Big \}. \\ \end{array}
+$$
+
+Since $f_{i}^{*} + \tau U_{\psi_{i}}(y^{(i)},y_{t}^{(i)})$ is $\tau \mu_{\psi}$-strongly convex, Lemma 3 implies that
+
+$$
+\begin{array}{l} \frac {1}{n} \mathbb {E} \sum_ {i = 1} ^ {n} (g _ {i} (x _ {t}) - g _ {i} (x _ {t}; \mathcal {B} _ {t} ^ {(i)})) ^ {\top} (\dot {y} _ {t + 1} ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)}) \leq \frac {1}{n} \sum_ {i = 1} ^ {n} \mathbb {E} \| g _ {i} (x _ {t}) - g _ {i} (x _ {t}; \mathcal {B} _ {t} ^ {(i)}) \| _ {*} \| \dot {y} _ {t + 1} ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)} \| \\ \leq \frac {1}{n \tau \mu_ {\psi}} \sum_ {i = 1} ^ {n} \mathbb {E} \left[ \| g _ {i} (x _ {t}) - g _ {i} (x _ {t}; \mathcal {B} _ {t} ^ {(i)}) \| _ {*} ((1 + \theta) \| g _ {i} (x _ {t}) - g _ {i} (x _ {t}; \mathcal {B} _ {t} ^ {(i)}) \| _ {*} + \theta \| g _ {i} (x _ {t - 1}) - g _ {i} (x _ {t - 1}; \mathcal {B} _ {t} ^ {(i)}) \| _ {*}) \right] \\ \leq \frac {1}{n \tau \mu_ {\psi}} \sum_ {i = 1} ^ {n} \mathbb {E} \left[ (1 + 1. 5 \theta) \| g _ {i} (x _ {t}) - g _ {i} (x _ {t}; \mathcal {B} _ {t} ^ {(i)}) \| _ {*} ^ {2} + 0. 5 \theta \| g _ {i} (x _ {t - 1}) - g _ {i} (x _ {t - 1}; \mathcal {B} _ {t} ^ {(i)}) \| _ {*} ^ {2} \right] \leq \frac {(1 + 2 \theta) \sigma_ {0} ^ {2}}{\tau B \mu_ {\psi}}. \\ \end{array}
+$$
+
+Apply Lemma 2 to the term $\frac{1}{n}\sum_{i=1}^{n}(g_i(x_t) - g_i(x_t;\mathcal{B}_t^{(i)}))^\top y^{(i)}$ . For any $\lambda_2 > 0$ and the auxiliary sequence $\{\hat{y}_t\}_{t\geq 0}$ constructed as $\hat{y}_{t+1}^{(i)} = \arg \min_{v\in \mathcal{Y}_i}\{v^\top (g_i(x_t;\mathcal{B}_t^{(i)}) - g_i(x_t)) + \lambda_2U_{\psi_i}(v,\hat{y}_t^{(i)})\}$ for each $i\in [n]$ , we have
+
+$$
+\mathbb {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} (g _ {i} (x _ {t}) - g _ {i} (x _ {t}; \mathcal {B} _ {t} ^ {(i)})) ^ {\top} y ^ {(i)} \right] \leq \frac {\lambda_ {2}}{n} \mathbb {E} [ U _ {\psi} (y, \hat {y} _ {t}) - U _ {\psi} (y, \hat {y} _ {t + 1}) ] + \frac {1}{2 \lambda_ {2} \mu_ {\psi} n} \sum_ {i = 1} ^ {n} \mathbb {E} \left\| g _ {i} (x _ {t}) - g _ {i} (x _ {t}; \mathcal {B} _ {t} ^ {(i)}) \right\| _ {*} ^ {2}.
+$$
+
+Lastly, $\mathbb{E}[(g_i(x_t) - g_i(x_t; \mathcal{B}_t^{(i)}))^\top \dot{y}_{t+1}^{(i)} \mid \mathcal{F}_{t-1}] = 0$ . Choose $\theta = 1$ . Then, the I term in (34) can be bounded as
+
+$$
+\mathbb {E} [ \mathrm {I} ] \leq \frac {2 \lambda_ {2}}{n} \mathbb {E} \left[ U _ {\psi} \left(y, \hat {y} _ {t}\right) - U _ {\psi} \left(y, \hat {y} _ {t + 1}\right) \right] + \frac {\sigma_ {0} ^ {2}}{\lambda_ {2} \mu_ {\psi} B} + \frac {6 \sigma_ {0} ^ {2}}{\tau \mu_ {\psi} B}. \tag {35}
+$$
+
+Define $\Gamma_t := \frac{1}{n} \sum_{i=1}^{n} (g_i(x_t) - g_i(x_{t-1}))^\top (y^{(i)} - y_t^{(i)})$ . For any $\lambda_3, \lambda_4 > 0$ , $\mathrm{II} + \mathrm{III}$ can be rewritten as
+
+$$
+\begin{array}{l} \mathrm {I I} + \mathrm {I I I} = \frac {1}{n} \sum_ {i = 1} ^ {n} g _ {i} (x _ {t + 1}) ^ {\top} (y ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)}) - \frac {1}{n} \sum_ {i = 1} ^ {n} g _ {i} (x _ {t}) ^ {\top} (y ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)}) + \frac {1}{n} \sum_ {i = 1} ^ {n} (g _ {i} (x _ {t - 1}) - g _ {i} (x _ {t})) ^ {\top} (y ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)}) \\ = \Gamma_ {t + 1} - \Gamma_ {t} + \frac {1}{n} \sum_ {i = 1} ^ {n} (g _ {i} (x _ {t + 1}) - g _ {i} (x _ {t})) ^ {\top} (y _ {t + 1} ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)}) + \frac {1}{n} \sum_ {i = 1} ^ {n} (g _ {i} (x _ {t - 1}) - g _ {i} (x _ {t})) ^ {\top} (y _ {t} ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)}) \\ \leq \Gamma_ {t + 1} - \Gamma_ {t} + \frac {C _ {g} ^ {2} \| x _ {t + 1} - x _ {t} \| _ {2} ^ {2}}{2 \lambda_ {3}} + \frac {\lambda_ {3} \| y _ {t + 1} - \bar {y} _ {t + 1} \| ^ {2}}{2 n} + \frac {C _ {g} ^ {2} \| x _ {t} - x _ {t - 1} \| _ {2} ^ {2}}{2 \lambda_ {4}} + \frac {\lambda_ {4} \| y _ {t} - \bar {y} _ {t + 1} \| ^ {2}}{2 n} \\ \end{array}
+$$
+
+Note that $y_{t + 1}^{(i)} = \bar{y}_{t + 1}^{(i)}$ if $i\in S_t$ and $y_{t + 1}^{(i)} = y_t^{(i)}$ otherwise. Then, $\| y_{t + 1} - \bar{y}_{t + 1}\| ^2\leq \| y_t - \bar{y}_{t + 1}\| ^2$ such that
+
+$$
+\mathrm {I I} + \mathrm {I I I} \leq \Gamma_ {t + 1} - \Gamma_ {t} + \frac {C _ {g} ^ {2} \| x _ {t + 1} - x _ {t} \| _ {2} ^ {2}}{2 \lambda_ {3}} + \frac {C _ {g} ^ {2} \| x _ {t} - x _ {t - 1} \| _ {2} ^ {2}}{2 \lambda_ {4}} + \frac {\left(\lambda_ {3} + \lambda_ {4}\right) U _ {\psi} (\bar {y} _ {t + 1} , y _ {t})}{\mu_ {\psi} n}. \tag {36}
+$$
+
+We decompose the IV term in (34) as
+
+$$
+\begin{array}{l} \mathrm {I V} = \frac {1}{n} \sum_ {i = 1} ^ {n} \left(g _ {i} \left(x _ {t - 1}; \mathcal {B} _ {t} ^ {(i)}\right) - g _ {i} \left(x _ {t - 1}\right)\right) ^ {\top} \left(y ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)}\right) \\ = \frac {1}{n} \sum_ {i = 1} ^ {n} \left\{\left(g _ {i} (x _ {t - 1}; \mathcal {B} _ {t} ^ {(i)}) - g _ {i} (x _ {t - 1})\right) ^ {\top} (\dot {y} _ {t + 1} ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)}) \right. \\ \left. \right.\left. + \left(g _ {i} \left(x _ {t - 1}; \mathcal {B} _ {t} ^ {(i)}\right) - g _ {i} \left(x _ {t - 1}\right)\right) ^ {\top} y ^ {(i)} - \left(g _ {i} \left(x _ {t - 1}; \mathcal {B} _ {t} ^ {(i)}\right) - g _ {i} \left(x _ {t - 1}\right)\right) ^ {\top} \dot {y} _ {t + 1} ^ {(i)} \right\}. \\ \end{array}
+$$
+
+By the Cauchy-Schwarz inequality, we have
+
+$$
+\frac {1}{n} \sum_ {i = 1} ^ {n} \mathbb {E} \left[ \left(g _ {i} (x _ {t - 1}; \mathcal {B} _ {t} ^ {(i)}) - g _ {i} (x _ {t - 1})\right) ^ {\top} (\dot {y} _ {t + 1} ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)}) \right] \leq \frac {1}{n} \sum_ {i = 1} ^ {n} \mathbb {E} \left[ \| g _ {i} (x _ {t - 1}; \mathcal {B} _ {t} ^ {(i)}) - g _ {i} (x _ {t - 1}) \| _ {*} \| \dot {y} _ {t + 1} ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)} \| \right].
+$$
+
+Since $f_{i}^{*}(y^{(i)}) + \tau U_{\psi_{i}}(y^{(i)},y_{t}^{(i)})$ is $\tau \mu_{\psi}$-strongly convex in $y^{(i)}$, Lemma 3 implies that
+
+$$
+\| \dot {y} _ {t + 1} ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)} \| \leq \frac {(1 + \theta) \| g _ {i} (x _ {t}) - g _ {i} (x _ {t} ; \mathcal {B} _ {t} ^ {(i)}) \| _ {*} + \theta \| g _ {i} (x _ {t - 1}) - g _ {i} (x _ {t - 1} ; \mathcal {B} _ {t} ^ {(i)}) \| _ {*}}{\tau \mu_ {\psi}}.
+$$
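+This stability bound can be checked numerically under assumed quadratic choices $f_i^*(v) = v^2/2$ and $\psi_i(v) = v^2/2$ (so $\mu_\psi = 1$ and $U_{\psi_i}(v, y) = (v - y)^2/2$), for which both maximizers have the closed form $(g + \tau y_t^{(i)})/(1 + \tau)$; all numeric values below are illustrative:
+
+```python
+# Under f^*(v) = v^2/2 and U(v, y) = (v - y)^2/2, the maximizer of
+#   v*g - f^*(v) - tau*U(v, y_prev)  is  (g + tau*y_prev) / (1 + tau),
+# so the exact and stochastic maximizers differ by |g_bar - g_tilde|/(1 + tau),
+# which is at most the Lemma-3 bound with tau*mu_psi = tau.
+tau, theta, y_prev = 0.8, 1.0, 0.25
+g_t, g_t_noisy = 1.3, 1.45            # g_i(x_t) and its minibatch estimate (illustrative)
+g_tm1, g_tm1_noisy = 1.1, 0.95        # same quantities at x_{t-1} (illustrative)
+
+g_bar = g_t + theta * (g_t - g_tm1)                      # exact extrapolated gradient
+g_tilde = g_t_noisy + theta * (g_t_noisy - g_tm1_noisy)  # stochastic counterpart
+
+prox = lambda g: (g + tau * y_prev) / (1.0 + tau)
+gap = abs(prox(g_bar) - prox(g_tilde))
+bound = ((1 + theta) * abs(g_t - g_t_noisy) + theta * abs(g_tm1 - g_tm1_noisy)) / tau
+assert gap <= bound
+```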
+
+Similar to (35), the following holds for any $\lambda_5 > 0$ and the auxiliary sequence $\{\check{y}_t\}_{t\geq 0}$ that is constructed as $\check{y}_{t + 1}^{(i)} = \arg \min_{v\in \mathcal{Y}_i}\{v^\top (g_i(x_{t - 1};\mathcal{B}_t^{(i)}) - g_i(x_{t - 1})) + \lambda_5 U_{\psi_i}(v,\check{y}_t^{(i)})\}$ for each $i\in [n]$:
+
+$$
+\frac {1}{n} \sum_ {i = 1} ^ {n} \mathbb {E} \left[ (g _ {i} (x _ {t - 1}; \mathcal {B} _ {t} ^ {(i)}) - g _ {i} (x _ {t - 1})) ^ {\top} y ^ {(i)} \right] \leq \frac {\lambda_ {5}}{n} \mathbb {E} [ U _ {\psi} (y, \check {y} _ {t}) - U _ {\psi} (y, \check {y} _ {t + 1}) ] + \frac {\sigma_ {0} ^ {2}}{2 \lambda_ {5} \mu_ {\psi} B}.
+$$
+
+Since $\frac{1}{n}\sum_{i = 1}^{n}\mathbb{E}[(g_i(x_{t - 1};\mathcal{B}_t^{(i)}) - g_i(x_{t - 1}))^\top \dot{y}_{t + 1}^{(i)}] = 0$, we obtain
+
+$$
+\mathbb {E} [ \mathrm {I V} ] \leq \frac {\lambda_ {5}}{n} \mathbb {E} \left[ U _ {\psi} \left(y, \check {y} _ {t}\right) - U _ {\psi} \left(y, \check {y} _ {t + 1}\right) \right] + \frac {\sigma_ {0} ^ {2}}{2 \lambda_ {5} \mu_ {\psi} B} + \frac {3 \sigma_ {0} ^ {2}}{\tau \mu_ {\psi} B}. \tag {37}
+$$
+
+Combining (35), (36), and (37) yields
+
+$$
+\begin{array}{l} \frac {1}{n} \sum_ {i = 1} ^ {n} \mathbb {E} (g _ {i} (x _ {t + 1}) - \tilde {g} _ {t} ^ {(i)}) ^ {\top} (y ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)}) \\ \leq \mathbb {E} [ \Gamma_ {t + 1} - \Gamma_ {t} ] + \frac {2 \lambda_ {2}}{n} \mathbb {E} [ U _ {\psi} (y, \hat {\boldsymbol {y}} _ {t}) - U _ {\psi} (y, \hat {\boldsymbol {y}} _ {t + 1}) ] + \frac {\lambda_ {5}}{n} \mathbb {E} [ U _ {\psi} (y, \check {y} _ {t}) - U _ {\psi} (y, \check {y} _ {t + 1}) ] \\ + \frac {(\lambda_ {3} + \lambda_ {4}) \mathbb {E} [ U _ {\psi} (\bar {y} _ {t + 1} , y _ {t}) ]}{\mu_ {\psi} n} + \frac {C _ {g} ^ {2} \mathbb {E} \| x _ {t + 1} - x _ {t} \| ^ {2}}{2 \lambda_ {3}} + \frac {C _ {g} ^ {2} \mathbb {E} \| x _ {t} - x _ {t - 1} \| ^ {2}}{2 \lambda_ {4}} + \frac {9 \sigma_ {0} ^ {2}}{\tau \mu_ {\psi} B} + \frac {\sigma_ {0} ^ {2}}{\lambda_ {2} \mu_ {\psi} B} + \frac {\sigma_ {0} ^ {2}}{2 \lambda_ {5} \mu_ {\psi} B}. \\ \end{array}
+$$
+
+
+
+# D.2.2. PROOF OF THEOREM 3
+
+Proof. If $g_{i}$ is smooth, we combine (15), (21), (30), and (33), and set $x = x_{*}$ and $x_0 = x_{-1}$.
+
+$$
+\begin{array}{l} \mathbb {E} \left[ L \left(x _ {t + 1}, y\right) - L \left(x _ {*}, \bar {y} _ {t + 1}\right) \right] \\ \leq \frac {\tau}{S} \mathbb {E} [ U _ {\psi} (y, y _ {t}) - U _ {\psi} (y, y _ {t + 1}) ] + \frac {\tau \lambda_ {1}}{S} \mathbb {E} [ U _ {\psi} (y, \hat {y} _ {t}) - U _ {\psi} (y, \hat {y} _ {t + 1}) ] + \frac {\eta}{2} \mathbb {E} \| x _ {*} - x _ {t} \| _ {2} ^ {2} - \frac {\eta}{2} \mathbb {E} \| x _ {*} - x _ {t + 1} \| _ {2} ^ {2} \\ + \mathbb {E} \left[ \Gamma_ {t + 1} - \Gamma_ {t} \right] + \frac {2 \lambda_ {2}}{n} \mathbb {E} \left[ U _ {\psi} \left(y, \hat {y} _ {t}\right) - U _ {\psi} \left(y, \hat {y} _ {t + 1}\right) \right] + \frac {\lambda_ {5}}{n} \mathbb {E} \left[ U _ {\psi} \left(y, \check {y} _ {t}\right) - U _ {\psi} \left(y, \check {y} _ {t + 1}\right) \right] \\ - \left(\frac {\tau}{n} - \frac {\tau L _ {\psi} ^ {2}}{n \lambda_ {1} \mu_ {\psi} ^ {2} S} - \frac {\lambda_ {3} + \lambda_ {4}}{\mu_ {\psi} n}\right) \mathbb {E} [ U _ {\psi} (\bar {y} _ {t + 1}, y _ {t}) ] - \left(\frac {\eta}{2} - \frac {C _ {g} ^ {2}}{2 \lambda_ {3}} - \frac {L _ {g} C _ {f}}{2}\right) \mathbb {E} \| x _ {t + 1} - x _ {t} \| _ {2} ^ {2} + \frac {C _ {g} ^ {2}}{2 \lambda_ {4}} \mathbb {E} \| x _ {t} - x _ {t - 1} \| _ {2} ^ {2} \\ + \frac {9 \sigma_ {0} ^ {2}}{\tau \mu_ {\psi} B} + \frac {\sigma_ {0} ^ {2}}{\lambda_ {2} \mu_ {\psi} B} + \frac {\sigma_ {0} ^ {2}}{2 \lambda_ {5} \mu_ {\psi} B} + \frac {C _ {f} ^ {2} \sigma_ {1} ^ {2}}{\eta B} + \frac {\delta^ {2}}{\eta S}. \tag {38} \\ \end{array}
+$$
+
+Telescoping the inequality above from $t = 0$ to $T - 1$ yields
+
+$$
+\begin{array}{l} \sum_ {t = 0} ^ {T - 1} \mathbb {E} \left[ L \left(x _ {t + 1}, y\right) - L \left(x _ {*}, \bar {y} _ {t + 1}\right) \right] \\ \leq \frac {\eta \mathbb {E} \| x _ {*} - x _ {0} \| _ {2} ^ {2}}{2} + \frac {\tau}{S} \mathbb {E} [ U _ {\psi} (y, y _ {0}) ] + \frac {\tau \lambda_ {1}}{S} \mathbb {E} [ U _ {\psi} (y, \hat {y} _ {0}) ] + \frac {2 \lambda_ {2}}{n} \mathbb {E} [ U _ {\psi} (y, \hat {y} _ {0}) ] + \frac {\lambda_ {5}}{n} \mathbb {E} [ U _ {\psi} (y, \check {y} _ {0}) ] \\ - \left(\frac {\tau}{n} - \frac {\tau L _ {\psi} ^ {2}}{n \lambda_ {1} \mu_ {\psi} ^ {2} S} - \frac {\lambda_ {3} + \lambda_ {4}}{\mu_ {\psi} n}\right) \sum_ {t = 0} ^ {T - 1} \mathbb {E} \left[ U _ {\psi} \left(\bar {y} _ {t + 1}, y _ {t}\right) \right] - \left(\frac {\eta}{2} - \frac {L _ {g} C _ {f}}{2} - \frac {C _ {g} ^ {2}}{2 \lambda_ {3}} - \frac {C _ {g} ^ {2}}{2 \lambda_ {4}}\right) \sum_ {t = 0} ^ {T - 1} \mathbb {E} \left\| x _ {t + 1} - x _ {t} \right\| ^ {2} \\ + \mathbb {E} [ \Gamma_ {T} ] - \frac {\tau}{S} \mathbb {E} [ U _ {\psi} (y, y _ {T}) ] + \left(\frac {C _ {f} ^ {2} \sigma_ {1} ^ {2}}{B \eta} + \frac {\delta^ {2}}{S \eta}\right) T + \frac {9 \sigma_ {0} ^ {2} T}{\tau \mu_ {\psi} B} + \frac {\sigma_ {0} ^ {2} T}{\lambda_ {2} \mu_ {\psi} B} + \frac {\sigma_ {0} ^ {2} T}{2 \lambda_ {5} \mu_ {\psi} B}. \\ \end{array}
+$$
+
+Note that $\Gamma_0 = 0$ , $\Gamma_T \leq \frac{1}{n} \sum_{i=1}^{n} \|g_i(x_T) - g_i(x_{T-1})\|_* \left\|y^{(i)} - y_T^{(i)}\right\| \leq \frac{C_g^2}{2\lambda_3} \|x_T - x_{T-1}\|_2^2 + \frac{\lambda_3}{2n\mu_\psi} U_\psi(y, y_T)$ . Choose $\lambda_1 \asymp \frac{L_\psi^2}{S\mu_\psi^2}$ , $\lambda_2 \asymp \frac{n\tau}{S}$ , $\lambda_3 \asymp \frac{C_g\sqrt{S}}{\sqrt{n}}$ , $\lambda_4 \asymp \frac{C_g\sqrt{S}}{\sqrt{n}}$ , $\lambda_5 \asymp \frac{n\tau}{S}$ , and let $1/\tau = O\left(\frac{\sqrt{n}\mu_\psi}{C_g\sqrt{S}}\right)$ and $1/\eta = O\left(\frac{\sqrt{S}}{C_g\sqrt{n}}\right)$ . Since $L(x, y)$ is convex in $x$ and linear in $y$ , we have
+
+$$
+\mathbb {E} \max _ {y} [ L (\bar {x} _ {T}, y) - L (x _ {*}, \bar {y} _ {T}) ] \leq \mathbb {E} \max _ {y} \frac {1}{T} \sum_ {t = 0} ^ {T - 1} [ L (x _ {t + 1}, y) - L (x _ {*}, \bar {y} _ {t + 1}) ], \tag {39}
+$$
+
+where $\bar{x}_T = \frac{1}{T}\sum_{t=0}^{T-1}x_{t+1}$ and $\bar{\bar{y}}_T = \frac{1}{T}\sum_{t=0}^{T-1}\bar{y}_{t+1}$. We now bound the left-hand side.
+
+$$
+L (\bar {x} _ {T}, y) - L (x _ {*}, \bar {y} _ {T}) = \frac {1}{n} \sum_ {i = 1} ^ {n} \left(y ^ {(i)} g _ {i} (\bar {x} _ {T}) - f _ {i} ^ {*} (y ^ {(i)})\right) + r (\bar {x} _ {T}) - \frac {1}{n} \sum_ {i = 1} ^ {n} \left(\bar {\bar {y}} _ {T} ^ {(i)} g _ {i} (x _ {*}) - f _ {i} ^ {*} (\bar {\bar {y}} _ {T} ^ {(i)})\right) - r (x _ {*})
+$$
+
+Choose $y^{(i)} = \tilde{y}_T^{(i)}\in \arg \max_v\{vg_i(\bar{x}_T) - f_i^* (v)\} \Leftrightarrow g_i(\bar{x}_T)\in \partial f_i^* (\tilde{y}_T^{(i)})\Leftrightarrow \tilde{y}_T^{(i)}\in \partial f_i(g_i(\bar{x}_T))$ such that $\tilde{y}_T^{(i)}g_i(\bar{x}_T) - f_i^* (\tilde{y}_T^{(i)}) = f_i(g_i(\bar{x}_T))$. By the Fenchel-Young inequality, $-\bar{\bar{y}}_T^{(i)}g_i(x_*) + f_i^* (\bar{\bar{y}}_T^{(i)})\geq -f_i(g_i(x_*))$. Thus, $\mathbb{E}[F(\bar{x}_T) - F(x_*)]\leq \mathbb{E}\max_y[L(\bar{x}_T,y) - L(x_*,\bar{y}_T)]$. Therefore, we can make $\mathbb{E}[F(\bar{x}_T) - F(x_*)]\leq \epsilon$ after $T = O\left(\frac{L_gC_fD_{\mathcal{X}}^2}{\epsilon} +\frac{\sqrt{n}C_gD_{\mathcal{X}}^2}{\sqrt{S}\epsilon} +\frac{C_g(1 + L_\psi^2 / (S\mu_\psi^2))D_{\mathcal{Y}}^2}{\mu_\psi\sqrt{nS}\epsilon} +\frac{D_{\mathcal{X}}^2\delta^2}{S\epsilon^2} +\frac{D_{\mathcal{X}}^2C_f^2\sigma_1^2}{B\epsilon^2} +\frac{\sigma_0^2(1 + L_\psi^2 / (S\mu_\psi^2))D_{\mathcal{Y}}^2}{\mu_\psi B S\epsilon^2}\right)$ iterations by setting $\theta = 1$, $\tau = O\left(\frac{\sqrt{S}C_g}{\mu_\psi\sqrt{n}}\vee \frac{\sigma_0^2}{\mu_\psi B\epsilon}\right)$, and $\eta = O\left(L_gC_f\vee \frac{\sqrt{n}C_g}{\sqrt{S}}\vee \frac{\delta^2}{S\epsilon}\vee \frac{C_f^2\sigma_1^2}{B\epsilon}\right)$.
+
+# D.2.3. PROOF OF THEOREM 2
+
+Proof. If $g_{i}$ is non-smooth, we can use ALEXR with $\theta = 0$, where $\tilde{g}_{t}^{(i)} = g_{i}(x_{t};\mathcal{B}_{t}^{(i)})$. Then, for any $\lambda_4 > 0$ we have
+
+$$
+\begin{array}{l} \frac {1}{n} \sum_ {i = 1} ^ {n} \mathbb {E} \left\langle g _ {i} (x _ {t + 1}) - \tilde {g} _ {t}, y ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)} \right\rangle = \frac {1}{n} \sum_ {i = 1} ^ {n} \mathbb {E} \left\langle g _ {i} (x _ {t + 1}) - g _ {i} (x _ {t}; \mathcal {B} _ {t} ^ {(i)}), y ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)} \right\rangle \\ = \frac {1}{n} \sum_ {i = 1} ^ {n} \mathbb {E} \left[ \left\langle g _ {i} (x _ {t + 1}) - g _ {i} (x _ {t}), y ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)} \right\rangle \right] + \frac {1}{n} \sum_ {i = 1} ^ {n} \mathbb {E} \left[ \left\langle g _ {i} (x _ {t}) - g _ {i} (x _ {t}; \mathcal {B} _ {t} ^ {(i)}), y ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)} \right\rangle \right] \\ \leq \frac {1}{n} \sum_ {i = 1} ^ {n} \mathbb {E} \left[ \| g _ {i} (x _ {t + 1}) - g _ {i} (x _ {t}) \| _ {*} \left\| y ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)} \right\| \right] + \frac {1}{n} \sum_ {i = 1} ^ {n} \mathbb {E} \left[ \left\langle g _ {i} (x _ {t}) - g _ {i} (x _ {t}; \mathcal {B} _ {t} ^ {(i)}), y ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)} \right\rangle \right] \\ \leq \frac {C _ {g} ^ {2} \mathbb {E} \left\| x _ {t + 1} - x _ {t} \right\| _ {2} ^ {2}}{2 \lambda_ {4}} + 2 \lambda_ {4} \frac {1}{n} \sum_ {i = 1} ^ {n} \mathbb {E} [ \| y ^ {(i)} \| ^ {2} + \| \bar {y} _ {t + 1} ^ {(i)} \| ^ {2} ] + \frac {1}{n} \sum_ {i = 1} ^ {n} \mathbb {E} \left[ \left\langle g _ {i} (x _ {t}) - g _ {i} (x _ {t}; \mathcal {B} _ {t} ^ {(i)}), y ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)} \right\rangle \right] \\ \leq \frac {C _ {g} ^ {2} \mathbb {E} \left[ \| x _ {t + 1} - x _ {t} \| _ {2} ^ {2} \right]}{2 \lambda_ {4}} + 4 \lambda_ {4} C _ {f} ^ {2} + \frac {1}{n} \sum_ {i = 1} ^ {n} \mathbb {E} \left[ \left\langle g _ {i} (x _ {t}) - g _ {i} (x _ {t}; \mathcal {B} _ {t} ^ {(i)}), y ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)} \right\rangle \right]. \\ \end{array}
+$$
+
+The last term above can be decomposed as
+
+$$
+\begin{array}{l} \frac {1}{n} \sum_ {i = 1} ^ {n} \mathbb {E} \left[ \left\langle g _ {i} \left(x _ {t}\right) - g _ {i} \left(x _ {t}; \mathcal {B} _ {t} ^ {(i)}\right), y ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)} \right\rangle \right] \\ = \frac {1}{n} \sum_ {i = 1} ^ {n} \mathbb {E} _ {t} \left[ \left\langle g _ {i} \left(x _ {t}\right) - g _ {i} \left(x _ {t}; \mathcal {B} _ {t} ^ {(i)}\right), y ^ {(i)} - y _ {t} ^ {(i)} \right\rangle \right] + \frac {1}{n} \sum_ {i = 1} ^ {n} \mathbb {E} \left[ \left\langle g _ {i} \left(x _ {t}\right) - g _ {i} \left(x _ {t}; \mathcal {B} _ {t} ^ {(i)}\right), y _ {t} ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)} \right\rangle \right]. \tag {40} \\ \end{array}
+$$
+
+Note that $\mathbb{E}\left[\left\langle g_i(x_t) - g_i(x_t;\mathcal{B}_t^{(i)}),y_t^{(i)}\right\rangle \mid \mathcal{F}_{t - 1}\right] = 0$. Besides, Lemma 9 implies that for some $\lambda_2 > 0$ and a sequence $\{\tilde{y}_t\}_{t \geq 0}$,
+
+$$
+\mathbb {E} \left[ \left\langle g _ {i} (x _ {t}) - g _ {i} (x _ {t}; \mathcal {B} _ {t} ^ {(i)}), y ^ {(i)} \right\rangle \right] \leq \mathbb {E} \left[ \lambda_ {2} U _ {\psi_ {i}} (y ^ {(i)}, \tilde {y} _ {t} ^ {(i)}) - \lambda_ {2} U _ {\psi_ {i}} (y ^ {(i)}, \tilde {y} _ {t + 1} ^ {(i)}) \right] + \frac {1}{2 \lambda_ {2} \mu_ {\psi}} \mathbb {E} \left\| g _ {i} (x _ {t}) - g _ {i} (x _ {t}; \mathcal {B} _ {t} ^ {(i)}) \right\| _ {*} ^ {2}.
+$$
+
+such that
+
+$$
+\frac {1}{n} \sum_ {i = 1} ^ {n} \mathbb {E} \left[ \left\langle g _ {i} (x _ {t}) - g _ {i} (x _ {t}; \mathcal {B} _ {t} ^ {(i)}), y ^ {(i)} \right\rangle \right] \leq \frac {\lambda_ {2}}{n} \mathbb {E} \left[ U _ {\psi} (y, \tilde {y} _ {t}) - U _ {\psi} (y, \tilde {y} _ {t + 1}) \right] + \frac {\sigma_ {0} ^ {2}}{2 \lambda_ {2} B \mu_ {\psi}}.
+$$
+
+For any $\lambda_3 > 0$ , the second term in (40) can be bounded as
+
+$$
+\begin{array}{l} \frac {1}{n} \sum_ {i = 1} ^ {n} \mathbb {E} \left[ \left\langle g _ {i} (x _ {t}) - g _ {i} (x _ {t}; \mathcal {B} _ {t} ^ {(i)}), y _ {t} ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)} \right\rangle \right] \leq \frac {\lambda_ {3}}{2 n} \sum_ {i = 1} ^ {n} \mathbb {E} \left[ \left\| g _ {i} (x _ {t}) - g _ {i} (x _ {t}; \mathcal {B} _ {t} ^ {(i)}) \right\| _ {*} ^ {2} \right] + \frac {\mathbb {E} \left[ \| y _ {t} - \bar {y} _ {t + 1} \| ^ {2} \right]}{2 \lambda_ {3} n} \\ \leq \frac {\lambda_ {3} \sigma_ {0} ^ {2}}{2 B} + \frac {\mathbb {E} \left[ U _ {\psi} \left(\bar {y} _ {t + 1} , y _ {t}\right) \right]}{\lambda_ {3} \mu_ {\psi} n}. \\ \end{array}
+$$
+
+Thus, we have
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\langle g _ {i} \left(x _ {t + 1}\right) - \tilde {g} _ {t}, y ^ {(i)} - \bar {y} _ {t + 1} ^ {(i)} \right\rangle \right] \leq \frac {C _ {g} ^ {2} \mathbb {E} \left[ \| x _ {t + 1} - x _ {t} \| _ {2} ^ {2} \right]}{2 \lambda_ {4}} + 4 \lambda_ {4} C _ {f} ^ {2} + \frac {\lambda_ {2}}{n} \mathbb {E} \left[ U _ {\psi} \left(y, \tilde {y} _ {t}\right) - U _ {\psi} \left(y, \tilde {y} _ {t + 1}\right) \right] \\ + \frac {\sigma_ {0} ^ {2}}{2 B \lambda_ {2} \mu_ {\psi}} + \frac {\lambda_ {3} \sigma_ {0} ^ {2}}{2 B} + \frac {\mathbb {E} \left[ U _ {\psi} (\bar {y} _ {t + 1} , y _ {t}) \right]}{2 \lambda_ {3} \mu_ {\psi} n}. \\ \end{array}
+$$
+
+The other parts are the same as in the proof of Theorem 3.
+
+# E. Proof of Theorem 4
+
+We consider a special instance of problem (1) that is separable over the coordinates $i$, with $\mathbb{P}_i = \mathbb{P}$:
+
+$$
+\min _ {x \in [ - D, D ] ^ {n}} F (x), \quad F (x) = \frac {1}{n} \left(\sum_ {i = 1} ^ {n} f \left(g _ {i} (x)\right) + \frac {\alpha}{2} \| x \| ^ {2}\right), \tag {41}
+$$
+
+$$
+g _ {i} (x) = \mathbb {E} _ {\zeta \sim \mathbb {P}} [ g _ {i} (x; \zeta) ], g _ {i} (x; \zeta) = x ^ {(i)} + \zeta ,
+$$
+
+where the additive noise $\zeta$ follows
+
+$$
+\zeta = \left\{ \begin{array}{l l} - \nu & \text {w.p. } 1 - p, \\ \nu (1 - p) / p & \text {w.p. } p, \end{array} \right. \quad \text {where } p := \frac {\nu^ {2}}{\sigma^ {2}} \in (0, 1).
+$$
+
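+The noise $\zeta$ is mean-zero with second moment $\nu^2(1 - p)/p = \sigma^2 - \nu^2 \leq \sigma^2$, which can be confirmed with exact rational arithmetic (the values of $\nu$ and $\sigma$ below are illustrative):
+
+```python
+from fractions import Fraction as F
+
+# Exact moments of the two-point noise zeta: mean zero, and second moment
+# nu^2 (1 - p) / p = sigma^2 - nu^2 <= sigma^2.
+nu, sigma = F(1, 2), F(2)           # illustrative values with p = nu^2/sigma^2 in (0, 1)
+p = nu ** 2 / sigma ** 2
+
+mean = (1 - p) * (-nu) + p * (nu * (1 - p) / p)
+second = (1 - p) * nu ** 2 + p * (nu * (1 - p) / p) ** 2
+assert mean == 0
+assert second == sigma ** 2 - nu ** 2
+```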
+We construct the hard problems for (i) smooth $f_{i}$ ; and (ii) non-smooth $f_{i}$ separately.
+
+(i) Smooth $f_{i}$ : First, consider the special instance in which each $f_{i}$ is the identity mapping, $\delta = 0$ (i.e., all $f_{i}$ 's are identical), and $\sigma_0 = 0$. Then the cFCCO problem in (1) reduces to a standard strongly convex minimization problem, and we can apply the information-theoretic lower bounds (Agarwal et al., 2009; Nguyen et al., 2019): any algorithm in the abstract scheme requires at least $\Omega\left(\frac{1}{\mu\epsilon}\right)$ iterations to find an $\bar{x}$ such that $\frac{\mu}{2}\mathbb{E}\|\bar{x} -x_{*}\|_{2}^{2}\leq \epsilon$ .
+
+
+Figure: (a) visualization of $f$ in (42); (b) the convex conjugate $f^{*}$ in (43) of $f$. Note that $f^{*}(y^{(i)}) = +\infty$ in the grey areas.
+
+Next, we construct another "hard" instance to derive the second half of the lower bound in this case. Consider the following strongly convex FCCO problem
+
+$$
+\begin{array}{l} \min _ {x \in \mathcal {X}} F (x) = \frac {1}{n} \sum_ {i = 1} ^ {n} f (g _ {i} (x)) + r (x), \\ f (u) = \left\{ \begin{array}{l l} (\nu - 1) u + \frac {1}{2} (\nu - 1) ^ {2} + \nu - 1 - \frac {\nu^ {2}}{2}, & u \in (- \infty , - 1) \\ \frac {1}{2} (u + \nu) ^ {2} - \frac {\nu^ {2}}{2}, & u \in [ - 1, 1 ] \\ (1 + \nu) u + \frac {1}{2} (1 + \nu) ^ {2} - 1 - \nu - \frac {\nu^ {2}}{2}, & u \in (1, \infty) \end{array} \right., \quad r (x) = \frac {1}{4 n} \| x \| _ {2} ^ {2} \tag {42} \\ \end{array}
+$$
+
+where $\mathcal{X} = [-1,1]^n$, and the outer function $f: \mathbb{R} \to \mathbb{R}$ is smooth and Lipschitz continuous for $\nu < 1$. As stated in Assumption 3, we do not require $f$ to be monotonically non-decreasing when $g_i$ is affine. Choose $\alpha = \frac{1}{2}$ in (41). Define $F_i(x^{(i)}) := f(g_i(x)) + \frac{1}{4}[x^{(i)}]^2$ such that $F(x) = \frac{1}{n}\sum_{i=1}^{n} F_i(x^{(i)})$. Thus, the problem $\min_x F(x)$ is equivalent to the problems $\min_{x^{(i)}} F_i(x^{(i)})$ over all coordinates $i \in [n]$. Since the problem is separable over the coordinates, we have $x_*^{(i)} = \arg \min_{x^{(i)} \in [-1,1]} F_i(x^{(i)})$ for $x_* = \arg \min_{x \in \mathcal{X}} F(x)$. Thus, we have $x_*^{(i)} = -\frac{2\nu}{3}$ and $F_i(x_*^{(i)}) = -\frac{\nu^2}{3}$. By the convex conjugate, for any $y^{(i)}\in \mathbb{R}$ we have
+
+$$
+\begin{array}{l} f ^ {*} (y ^ {(i)}) = \max \left\{\sup _ {u < - 1} \left\{u y ^ {(i)} - \left((\nu - 1) u + \frac {1}{2} (\nu - 1) ^ {2} + \nu - 1 - \frac {\nu^ {2}}{2}\right) \right\}, \sup _ {- 1 \leq u \leq 1} \left\{u y ^ {(i)} - \frac {1}{2} (u + \nu) ^ {2} + \frac {\nu^ {2}}{2} \right\}, \right. \\ \sup _ {u > 1} \left\{u y ^ {(i)} - \left((1 + \nu) u + \frac {1}{2} (1 + \nu) ^ {2} - 1 - \nu - \frac {\nu^ {2}}{2}\right) \right\} \Bigg \} \\ = \left\{ \begin{array}{l l} + \infty , & y ^ {(i)} \in (- \infty , \nu - 1) \cup (\nu + 1, \infty) \\ \frac {1}{2} (y ^ {(i)} - \nu) ^ {2}, & y ^ {(i)} \in [ \nu - 1, \nu + 1 ]. \end{array} \right. \tag {43} \\ \end{array}
+$$
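+As a numerical cross-check of (42) and (43) (not part of the proof), one can verify on a grid that $f^*(y^{(i)}) = \frac{1}{2}(y^{(i)} - \nu)^2$ on $[\nu - 1, \nu + 1]$ and that $F_i$ is minimized at $x_*^{(i)} = -\frac{2\nu}{3}$ with value $-\frac{\nu^2}{3}$; the value $\nu = 0.3$ is illustrative:
+
+```python
+# Grid check of the conjugate formula (43) and of the minimizer of F_i,
+# using g_i(x) = x^{(i)} (the noise zeta has mean zero).
+nu = 0.3  # illustrative, nu < 1
+
+def f(u):  # the piecewise outer function in (42)
+    if u < -1:
+        return (nu - 1) * u + 0.5 * (nu - 1) ** 2 + nu - 1 - nu ** 2 / 2
+    if u <= 1:
+        return 0.5 * (u + nu) ** 2 - nu ** 2 / 2
+    return (1 + nu) * u + 0.5 * (1 + nu) ** 2 - 1 - nu - nu ** 2 / 2
+
+# f^*(y) = sup_u { u*y - f(u) } should equal (y - nu)^2 / 2 on [nu-1, nu+1]
+us = [-3 + 6 * k / 60000 for k in range(60001)]
+for y in [nu - 1, nu - 0.5, nu, nu + 0.5, nu + 1]:
+    approx = max(u * y - f(u) for u in us)
+    assert abs(approx - 0.5 * (y - nu) ** 2) < 1e-4
+
+# F_i(x) = f(x) + x^2/4 on [-1, 1] is minimized at -2*nu/3 with value -nu^2/3
+F_i = lambda x: f(x) + x ** 2 / 4
+xs = [-1 + 2 * k / 20000 for k in range(20001)]
+x_star = min(xs, key=F_i)
+assert abs(x_star - (-2 * nu / 3)) < 1e-3
+assert abs(F_i(x_star) - (-nu ** 2 / 3)) < 1e-6
+```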
+
+Since $\mathbb{P}_i = \mathbb{P}$ in the "hard" problem (41), the abstract scheme only needs to sample shared $\zeta_t, \tilde{\zeta}_t \sim \mathbb{P}$ for all coordinates $i \in S_t$ in the $t$ -th iteration. For an $i \in [n]$ , suppose that $\mathfrak{g}_{\tau}^{(i)} = \{0\}$ or $\{-\nu\}$ , $\mathfrak{Y}_{\tau}^{(i)} = \{0\}$ , $\mathfrak{X}_{\tau}^{(i)} = \{0\}$ for all $\tau \leq t$ . Then,
+
+- If $i \notin S_t$ , the abstract scheme leads to
+
+$$
+\mathfrak {g} _ {t + 1} ^ {(i)} = \{0 \} \text { or } \{- \nu \}, \quad \mathfrak {Y} _ {t + 1} ^ {(i)} = \{0 \}, \quad \mathfrak {X} _ {t + 1} ^ {(i)} = \{0 \}.
+$$
+
+- If $i \in S_t$ and $\zeta_t = -\nu$ , the abstract scheme proceeds as
+
+$$
+\mathfrak {g} _ {t + 1} ^ {(i)} = \mathfrak {g} _ {t} ^ {(i)} + \operatorname {s p a n} \left\{\hat {x} ^ {(i)} + \zeta_ {t} \mid \hat {x} ^ {(i)} \in \mathfrak {X} _ {t} ^ {(i)} \right\},
+$$
+
+$$
+\mathfrak{Y}_{t + 1}^{(i)} = \mathfrak{Y}_{t}^{(i)} + \operatorname {span}\left\{\underset {y^{(i)}\in [\nu -1,\nu +1]}{\arg \max}\left\{y^{(i)}(\hat{g}^{(i)} + \nu) - \frac{1}{2} (y^{(i)})^{2} - \tau U_{\psi_{i}}(y^{(i)},\hat{y}^{(i)})\right\} |\hat{g}^{(i)}\in \mathfrak{g}_{t + 1}^{(i)},\hat{y}^{(i)}\in \mathfrak{Y}_{t}^{(i)}\right\} ,
+$$
+
+$$
+\mathfrak {X} _ {t + 1} ^ {(i)} = \mathfrak {X} _ {t} ^ {(i)} + \operatorname {s p a n} \left\{\underset {x ^ {(i)} \in [ - 1, 1 ]} {\arg \min} \left\{\frac {1}{S} \hat {y} ^ {(i)} x ^ {(i)} + \frac {1}{n} (x ^ {(i)}) ^ {2} + \frac {\eta}{2} (x ^ {(i)} - \hat {x} ^ {(i)}) ^ {2} \right\} \mid \hat {y} ^ {(i)} \in \mathfrak {Y} _ {t + 1} ^ {(i)}, \hat {x} ^ {(i)} \in \mathfrak {X} _ {t} ^ {(i)} \right\}.
+$$
+
+In this case, we can derive that $\mathfrak{g}_{t+1}^{(i)} = \{-\nu\}$ such that $y^{(i)}(\hat{g}^{(i)} + \nu) = 0$ for $\hat{g}^{(i)} \in \mathfrak{g}_{t+1}^{(i)}, \forall i \in S_t$. Since $\frac{1}{2}(y^{(i)})^2 + \tau U_{\psi_i}(y^{(i)}, \hat{y}^{(i)})$ is strongly convex in $y^{(i)}$ and non-negative, we have $\mathfrak{Y}_{t+1}^{(i)} = \{0\}$ for $i \in S_t$. Then, $\mathfrak{X}_{t+1}^{(i)} = \{0\}$ for $i \in S_t$. To sum up, given the event $\bigcap_{\tau=1}^{t} \{\mathfrak{g}_{\tau}^{(i)} = \{0\}$ or $\{-\nu\}$ , $\mathfrak{Y}_{\tau}^{(i)} = \{0\}$ , $\mathfrak{X}_{\tau}^{(i)} = \{0\}\}$ , we can ensure that $\{\mathfrak{g}_{t+1}^{(i)} = \{0\}$ or $\{-\nu\} \wedge \mathfrak{Y}_{t+1}^{(i)} = \{0\} \wedge \mathfrak{X}_{t+1}^{(i)} = \{0\}\}$ when one of the following mutually exclusive events happens:
+
+- Event I: $i\notin S_t$ ;
+- Event II: $i\in S_t$ and $\zeta_t = -\nu$ .
+
+Note that the random variable $\zeta_t$ is independent of $S_t$. Thus, the probability of the event $E_{t+1}^{(i)} := \{\mathfrak{g}_{t+1}^{(i)} = \{0\} \text{ or } \{-\nu\} \wedge \mathfrak{Y}_{t+1}^{(i)} = \{0\} \wedge \mathfrak{X}_{t+1}^{(i)} = \{0\}\}$ conditioned on $\bigcap_{\tau=1}^{t} E_{\tau}^{(i)}$ can be bounded as
+
+$$
+\begin{array}{l} \mathbb{P}\left[E_{t+1}^{(i)} \mid \bigcap_{\tau=1}^{t} E_{\tau}^{(i)}\right] = \mathbb{P}\left[\left\{\mathfrak{g}_{t+1}^{(i)} = \{0\} \text{ or } \{-\nu\} \wedge \mathfrak{Y}_{t+1}^{(i)} = \{0\} \wedge \mathfrak{X}_{t+1}^{(i)} = \{0\}\right\} \mid \bigcap_{\tau=1}^{t} E_{\tau}^{(i)}\right] \\ \geq \mathbb{P}\left[\{i \notin \mathcal{S}_{t}\}\right] + \mathbb{P}\left[\{i \in \mathcal{S}_{t}\} \wedge \{\zeta_{t} = -\nu\}\right] \\ = \mathbb{P}[\{i \notin \mathcal{S}_{t}\}] + \mathbb{P}[\{i \in \mathcal{S}_{t}\}]\, \mathbb{P}[\{\zeta_{t} = -\nu\}] = \left(1 - \frac{S}{n}\right) + \frac{S}{n}(1 - p) = 1 - \frac{Sp}{n}. \end{array}
+$$
+
+Since $\mathcal{S}_{t}$ and $\zeta_t$ across different iterations $t$ are mutually independent, we have
+
+$$
+\mathbb{P}\left[E_{T}^{(i)}\right] \geq \mathbb{P}\left[\bigcap_{t=0}^{T-1} E_{t+1}^{(i)}\right] = \prod_{t=0}^{T-1} \mathbb{P}\left[E_{t+1}^{(i)} \mid \bigcap_{\tau=1}^{t} E_{\tau}^{(i)}\right] = \left(1 - \frac{Sp}{n}\right)^{T} \geq 1 - \frac{TSp}{n}.
+$$
+
+Thus, letting $T < \frac{n}{4Sp}$ can make $\mathbb{P}[E_T^{(i)}] > \frac{1}{2}$ . Choose $\nu = 3\sqrt{2\epsilon}$ , and $\sigma = \sigma_0$ such that $p = \frac{\nu^2}{\sigma^2} = \frac{18\epsilon}{\sigma_0^2}$ . For any $i \in [n]$ and any output $\tilde{x}_T^{(i)} \in \mathfrak{X}_T^{(i)}$ , we have
+
+$$
+\begin{array}{l} \mathbb{E}[(\tilde{x}_{T}^{(i)} - x_{*}^{(i)})^{2}] = \mathbb{E}[\mathbb{I}_{E_{T}^{(i)}}(\tilde{x}_{T}^{(i)} - x_{*}^{(i)})^{2} + \mathbb{I}_{\overline{E_{T}^{(i)}}}(\tilde{x}_{T}^{(i)} - x_{*}^{(i)})^{2}] \geq \mathbb{E}[\mathbb{I}_{E_{T}^{(i)}}(\tilde{x}_{T}^{(i)} - x_{*}^{(i)})^{2}] \\ = \mathbb{E}\left[\mathbb{I}_{E_{T}^{(i)}}(x_{*}^{(i)})^{2}\right] = \mathbb{P}\left[E_{T}^{(i)}\right](x_{*}^{(i)})^{2} > \frac{2\nu^{2}}{9} = 4\epsilon. \end{array}
+$$
+
+Since the derivations above hold for arbitrary $i \in [n]$ and the $r(x)$ in (42) is $\frac{1}{2n}$-strongly convex (i.e., $\mu = \frac{1}{2n}$), we have
+
+$$
+\frac{\mu}{2}\mathbb{E}\|\tilde{x}_{T} - x_{*}\|_{2}^{2} = \frac{1}{4n}\mathbb{E}\|\tilde{x}_{T} - x_{*}\|_{2}^{2} = \frac{1}{4n}\sum_{i=1}^{n}\mathbb{E}[(\tilde{x}_{T}^{(i)} - x_{*}^{(i)})^{2}] > \epsilon.
+$$
+
+Thus, to find an output $\tilde{x}_T$ such that $\frac{\mu}{2}\mathbb{E}\left\| \tilde{x}_T - x_*\right\| _2^2\leq \epsilon$ , the abstract scheme requires at least $T\geq \frac{n}{4Sp} = \frac{n\sigma_0^2}{72S\epsilon}$ iterations.
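The probability and iteration-count arithmetic above is easy to sanity-check numerically. The snippet below uses arbitrary illustrative values of $n$, $S$, $p$ (not quantities from the paper) and verifies that any $T < \frac{n}{4Sp}$ keeps the failure probability above $1/2$:

```python
# Illustrative check of P[E_T] = (1 - Sp/n)^T >= 1 - T*S*p/n for T < n/(4Sp);
# n, S, p are arbitrary stand-in values.
n, S, p = 1000, 10, 0.05
T = int(n / (4 * S * p)) - 1       # any integer T strictly below n/(4Sp)
prob = (1 - S * p / n) ** T
assert prob >= 1 - T * S * p / n   # Bernoulli's inequality
assert prob > 0.5                  # so the bad event survives with prob. > 1/2
```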
+
+(ii) Non-smooth $f_{i}$ : We borrow the construction $f_{i}(\cdot) = \beta \max \{\cdot, -\nu\}$ from Zhang & Lan (2020). We define $F_{i}(x^{(i)}) := f_{i}(g_{i}(x)) + \frac{\alpha}{2}(x^{(i)})^{2} = \beta \max \{x^{(i)}, -\nu\} + \frac{\alpha}{2}(x^{(i)})^{2}$ such that $F(x) = \frac{1}{n}\sum_{i=1}^{n}F_{i}(x^{(i)})$ . Let the domain $\mathcal{X}$ be $[-2\nu, 2\nu]^{n}$ . Since the problem is separable over the coordinates, we have $x_{*}^{(i)} = \arg \min_{x^{(i)}\in [-2\nu, 2\nu]}F_{i}(x^{(i)}) = \arg \min_{x^{(i)}\in [-2\nu, 2\nu]} \left\{\beta \max \{x^{(i)}, -\nu\} + \frac{\alpha}{2}(x^{(i)})^{2}\right\}$ for $x_{*} = \arg \min_{x\in \mathcal{X}}F(x)$ . We have
+
+$$
+x_{*}^{(i)} = \left\{\begin{array}{ll} -\beta/\alpha & \text{if } \alpha > \beta/\nu \\ -\nu & \text{if } \alpha \in \frac{\beta}{\nu}[0,1] \end{array}\right., \quad F_{i}(x_{*}^{(i)}) \leq \left\{\begin{array}{ll} -\beta^{2}/(2\alpha) & \text{if } \alpha > \beta/\nu \\ -\beta\nu/2 & \text{if } \alpha \in \frac{\beta}{\nu}[0,1]. \end{array}\right.
+$$
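As a quick numerical sanity check of the two cases above, one can minimize $F_i$ on a fine grid over $[-2\nu, 2\nu]$; the values of $\alpha$, $\beta$, $\nu$ below are arbitrary illustrations covering both regimes:

```python
import numpy as np

def F_i(x, alpha, beta, nu):
    # F_i(x) = beta * max{x, -nu} + (alpha/2) * x^2
    return beta * np.maximum(x, -nu) + 0.5 * alpha * x**2

nu, beta = 1.0, 1.0
xs = np.linspace(-2 * nu, 2 * nu, 400001)     # fine grid over the domain
for alpha in (3.0, 0.5):                      # alpha > beta/nu, then alpha <= beta/nu
    x_star = xs[np.argmin(F_i(xs, alpha, beta, nu))]
    expected = -beta / alpha if alpha > beta / nu else -nu
    assert abs(x_star - expected) < 1e-4
```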
+
+Since $F_{i}(0) = 0$ , we can derive that $F_{i}(0) - F_{i}(x_{*}^{(i)}) \geq \frac{1}{2}\min \{\beta \nu, \beta^{2} / \alpha\}$ . By the convex conjugate, we have
+
+$$
+f(\hat{g}^{(i)}) = \max_{y^{(i)}\in [0,\beta]}\left\{y^{(i)}\hat{g}^{(i)} - \nu (\beta - y^{(i)})\right\}.
+$$
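This conjugate representation can be checked numerically: the maximand is linear in $y^{(i)}$, so the maximum is attained at an endpoint of $[0, \beta]$, giving $\beta\hat{g}^{(i)}$ when $\hat{g}^{(i)} \geq -\nu$ and $-\beta\nu$ otherwise (the constants below are arbitrary illustrations):

```python
import numpy as np

beta, nu = 2.0, 0.5
ys = np.linspace(0.0, beta, 2001)             # grid over the dual domain [0, beta]
for g in np.linspace(-2.0, 2.0, 81):
    conj = np.max(ys * g - nu * (beta - ys))  # max_y { y*g - nu*(beta - y) }
    assert abs(conj - beta * max(g, -nu)) < 1e-9
```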
+
+By the same reasoning as in the smooth $f_{i}$ case, the probability of the event $E_T^{(i)}\coloneqq \{\mathfrak{g}_T^{(i)} = \{0\}$ or $\{-\nu \} \wedge \mathfrak{Y}_T^{(i)} = \{0\} \wedge \mathfrak{X}_T^{(i)} = \{0\} \}$ can be lower bounded as
+
+$$
+\mathbb{P}[E_{T}^{(i)}] \geq \mathbb{P}\left[\bigcap_{t=0}^{T-1} E_{t+1}^{(i)}\right] = \prod_{t=0}^{T-1} \mathbb{P}\left[E_{t+1}^{(i)} \mid \bigcap_{\tau=1}^{t} E_{\tau}^{(i)}\right] = \left(1 - \frac{Sp}{n}\right)^{T} \geq 1 - \frac{TSp}{n}.
+$$
+
+Thus, letting $T < \frac{n}{4Sp}$ can make $\mathbb{P}[E_T^{(i)}] > \frac{1}{2}$ . Choose $\beta = C_f$ , $\nu = \frac{4\epsilon}{C_f}$ , and $\sigma = \sigma_0$ such that $p := \frac{\nu^2}{\sigma^2} = \frac{16\epsilon^2}{C_f^2\sigma_0^2}$ . For any $i \in [n]$ and any output $\tilde{x}_T^{(i)} \in \mathfrak{X}_T^{(i)}$ , we have
+
+$$
+\begin{array}{l} \mathbb {E} [ F _ {i} (\tilde {x} _ {T} ^ {(i)}) - F _ {i} (x _ {*} ^ {(i)}) ] = \mathbb {E} \left[ \mathbb {I} _ {E _ {T} ^ {(i)}} \left(F _ {i} (\tilde {x} _ {T} ^ {(i)}) - F _ {i} (x _ {*} ^ {(i)})\right) + \mathbb {I} _ {\overline {{E _ {T} ^ {(i)}}}} \left(F _ {i} (\tilde {x} _ {T} ^ {(i)}) - F _ {i} (x _ {*} ^ {(i)})\right) \right] \geq \mathbb {E} \left[ \mathbb {I} _ {E _ {T} ^ {(i)}} \left(F _ {i} (\tilde {x} _ {T} ^ {(i)}) - F _ {i} (x _ {*} ^ {(i)})\right) \right] \\ = \mathbb {E} \left[ \mathbb {I} _ {E _ {T} ^ {(i)}} \left(F _ {i} (0) - F _ {i} (x _ {*} ^ {(i)})\right) \right] = \mathbb {P} [ E _ {T} ^ {(i)} ] \left(F _ {i} (0) - F _ {i} (x _ {*} ^ {(i)})\right) > \min \{\beta \nu , \beta^ {2} / \alpha \} / 4 = \epsilon . \\ \end{array}
+$$
+
+Since the derivations above hold for arbitrary $i \in [n]$ , we can derive that
+
+$$
+\mathbb{E}[F(\tilde{x}_{T}) - F(x_{*})] = \frac{1}{n}\sum_{i=1}^{n}\mathbb{E}[F_{i}(\tilde{x}_{T}^{(i)}) - F_{i}(x_{*}^{(i)})] > \epsilon.
+$$
+
+Thus, to find an output $\tilde{x}_T$ such that $\mathbb{E}[F(\tilde{x}_T) - F(x_*)] \leq \epsilon$ , the abstract scheme requires at least $T \geq \frac{n}{4Sp} = \frac{nC_f^2\sigma_0^2}{64S\epsilon^2}$ iterations.
+
+# F. Application to GDRO with $\phi$ -divergence
+
+We discuss two examples of the GDRO problem with $\phi$ -divergence: CVaR divergence with a hyper-parameter $\alpha \in (0,1)$ and $\chi^2$ -divergence with a hyper-parameter $\lambda > 0$ . We compare ALEXR to the following baselines:
+
+- SMD (Nemirovski et al., 2009; Zhang et al., 2023): It can be applied to the GDRO problem in (7) with CVaR divergence, where the dual mirror step with the entropy distance-generating function can be efficiently solved by projection onto the permutahedron (Lim & Wright, 2016). Moreover, SMD can also be applied to the worst-group DRO problem (Sagawa et al., 2019) (i.e., $\lambda = 0$ in (7) or $\alpha = \frac{1}{n}$ in CVaR). The iteration complexity of SMD is $T = O\left(\frac{\log n}{\epsilon^2}\right)$ . Besides, it requires $O(n \log n)$ computational cost for performing the dual projection and $O(n)$ oracles in each iteration. Note that SMD cannot be applied to the GDRO problem in (7) with $\chi^2$ -divergence due to the non-linear penalty term.
+- OOA (Sagawa et al., 2019): This algorithm can be viewed as a variant of the SMD algorithm with the dual gradient estimator $[0, \dots, n\ell(w_t; z_t^{(i_t)}), \dots, 0]^\top$ for some $i_t \in [n]$ such that it only requires $O(1)$ oracles per iteration. But the dual
+
+Table 4. Comparison of iteration complexities, dual projection cost, and per-iteration #oracles for achieving an $\epsilon$ -optimal solution of the GDRO problem in (7) in terms of $\mathbb{E}[F(w_{\mathrm{out}}) - F(w_{*})]\leq \epsilon$ in the merely convex case and $\frac{\mu}{2}\mathbb{E}\| w_{\mathrm{out}} - w_{*}\|_{2}^{2}\leq \epsilon$ in the $\mu$ -strongly convex case, where $w_{\mathrm{out}}$ is the output of each algorithm. We hide constant quantities other than $n$ , the variances $\sigma_0^2$ , $\sigma_1^2$ , $\delta^2$ , and the batch sizes $B$ , $S$ . Besides, $\tilde{O}$ hides poly $\log(1/\epsilon)$ factors.
+
+| $\phi$-Divergence | Algorithm | Per-Iter #Oracles | Dual Proj. | Iteration Complexity |
+| --- | --- | --- | --- | --- |
+| CVaR | SMD | $O(n)$ | $O(n\log n)$ | $O\left(\frac{\log n}{\epsilon^2}\right)$ |
+| CVaR | OOA | $O(1)$ | $O(n\log n)$ | $O\left(\frac{n^2\log n}{\epsilon^2}\right)$ |
+| CVaR | ALEXR | $O(1)$ | $O(1)$ | $O\left(\frac{\sqrt{n}}{\alpha^2\sqrt{S}\epsilon} + \frac{1}{\alpha^2\epsilon^2} + \frac{\delta^2}{S\epsilon^2} + \frac{\sigma_1^2}{\alpha^2 B\epsilon^2} + \frac{\sigma_0^2\Omega_{\mathcal{Y}}^0}{BS\epsilon^2}\right)^{\dagger}$ |
+| $\chi^2$ | ALEXR (merely convex $r$) | $O(1)$ | $O(1)$ | $O\left(\frac{\sqrt{n}}{\lambda\sqrt{S}\epsilon} + \frac{1}{\lambda^2\epsilon^2} + \frac{\delta^2}{S\epsilon^2} + \frac{\sigma_1^2}{B\epsilon^2} + \frac{\sigma_0^2\Omega_{\mathcal{Y}}^0}{BS\epsilon^2}\right)$ |
+| $\chi^2$ | ALEXR ($\mu$-strongly convex $r$) | $O(1)$ | $O(1)$ | $\tilde{O}\left(\frac{\sqrt{n}}{\lambda\sqrt{S}\mu} + \frac{1}{\mu\lambda\epsilon} + \frac{n\sigma_0^2}{BS\epsilon} + \frac{\sigma_1^2}{\mu B\epsilon} + \frac{\delta^2}{\mu S\epsilon}\right)$ |
+
+† The worst-case estimate of $\Omega_{\mathcal{Y}}^{0}$ is $\frac{n}{2\alpha^2}$ , but it could be much smaller than $\frac{n}{2\alpha^2}$ in practice, as explained in Remark F.1.
+
+projection cost in each iteration is still $O(n \log n)$ . The iteration complexity of OOA is $T = O\left(\frac{n^2 \log n}{\epsilon^2}\right)$ , which is also independent of $\alpha$ . OOA is not applicable to the GDRO problem in (7) with $\chi^2$ -divergence either.
+
+We note that the NOL algorithm (Zhang et al., 2023), designed for the worst-group DRO problem, i.e., $\lambda = 0$ in (7), can achieve a $T = O\left(\frac{n\log n}{\epsilon^2}\right)$ iteration complexity with high probability using $O(1)$ oracles per iteration. However, this result cannot be extended to the GDRO problem with CVaR or $\chi^2$ -divergence, since their proof technique relies on tools for multi-armed bandits. Besides, Soma et al. (2022) also consider the GDRO problem with CVaR divergence, but their convergence analysis suffers from dependency issues, as pointed out by Zhang et al. (2023). Recently, Hu et al. (2023c) studied non-smooth weakly convex FCCO problems and proposed the SONX algorithm, which can be applied to solving GDRO with CVaR divergence. However, their algorithm does not leverage the convexity of the inner function and hence suffers from a worse complexity of $O\left(\frac{n}{S\sqrt{B\epsilon^6}}\right)$ .
+
+# F.1. GDRO with CVaR divergence
+
+GDRO with CVaR divergence can be formulated as (1) with $f_{i}(\cdot) = \alpha^{-1}(\cdot)_{+}$ , $\alpha \in (0,1)$ and $g_{i}(w,c) = R_{i}(w) - c$ such that $C_f = \frac{1}{\alpha}$ and $C_g = C_R + 1$ , where $C_R$ is the Lipschitz constant of $R_i$ . The dual update (7) of ALEXR with $\psi_i(\cdot) = \frac{1}{2} (\cdot)^2$ has the closed-form expression $y_{t + 1}^{(i)} = \left\{ \begin{array}{ll} \mathrm{Proj}_{[0,\alpha^{-1}]}[y_t^{(i)} + (1 / \tau)\tilde{g}_t^{(i)}], & i \in S_t \\ y_t^{(i)}, & i \notin S_t. \end{array} \right.$
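In code, this dual step is a projected update on the sampled coordinates only. A minimal sketch follows; the array shapes and variable names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def cvar_dual_update(y, g_tilde, sampled, alpha, tau):
    """One CVaR dual step: y^{(i)} <- Proj_[0, 1/alpha](y^{(i)} + g_tilde^{(i)}/tau)
    for sampled coordinates i; all other coordinates stay unchanged."""
    y_new = y.copy()
    y_new[sampled] = np.clip(y[sampled] + g_tilde / tau, 0.0, 1.0 / alpha)
    return y_new

y0 = np.zeros(5)
y1 = cvar_dual_update(y0, g_tilde=np.array([3.0, -1.0]), sampled=np.array([0, 2]),
                      alpha=0.5, tau=1.0)
assert np.allclose(y1, [2.0, 0.0, 0.0, 0.0, 0.0])   # clipped to [0, 1/alpha] = [0, 2]
```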
+
+The worst-case estimate of the $\Omega_{\mathcal{Y}}^{0}$ term in Theorem 3 is $\Omega_{\mathcal{Y}}^{0} \leq \frac{nC_{f}^{2}}{2} = \frac{n}{2\alpha^{2}}$ when $\psi_{i} = \frac{1}{2} (\cdot)^{2}$ . However, it could be much smaller than $\frac{n}{2\alpha^2}$ in practice since $\tilde{y}^{(i)} = 0$ for those coordinates $i$ that satisfy $R_{i}(\bar{w}) \leq \bar{c}$ , i.e., the ALEXR algorithm can benefit from the "sparsity" of $\tilde{y}^{(i)} \in \partial f_{i}(g_{i}(\bar{w},\bar{c}))$ , where $(\bar{w},\bar{c})$ is the output of the algorithm. In particular, when $(\bar{w},\bar{c})$ is close to the optimal solution, roughly $\alpha n$ groups satisfy $[R_i(\bar{w}) - \bar{c}]_+ > 0$ . As a result, $\Omega_{\mathcal{Y}}^{0} = \mathbb{E}[\sum_{i=1}^{n} U_{\psi_i}(\tilde{y}^{(i)},0)] \approx \frac{\alpha n C_{f}^{2}}{2} = \frac{n}{2\alpha}$ .
+
+# F.2. GDRO with $\chi^2$ -divergence
+
+GDRO with $\chi^2$ -divergence can be formulated as (1) with $f_{i}(\cdot) = \lambda \left(\frac{1}{4} (\cdot + 2)^{2}_{+} - 1\right)$ and $g_{i}(w, c) = (R_{i}(w) - c) / \lambda$ such that $C_f = \frac{\max\{B_R - \underline{c}, B_R + \overline{c}\}}{2}$ and $C_g = \frac{C_R + 1}{\lambda}$ , where $B_R := \max_w |R_i(w)|$ and a valid choice of $\underline{c}, \overline{c}$ is $\underline{c} = -\lambda, \overline{c} = B_R$ (see Appendix A.3 in Levy et al. 2020). In this case, the proximal mapping of $f_{i}^{*}(y^{(i)}) = \frac{\lambda}{2}(y^{(i)} / \lambda - 1)^{2}$ with $\psi_i(\cdot) = \frac{1}{2} (\cdot)^2$ can also be efficiently solved. We can also consider the GDRO problem with a convex regularization term $r(x)$ , choosing either $\psi_i = f_i^*$ or $\psi_i(\cdot) = \frac{1}{2} (\cdot)^2$ .
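Because $f_i^*$ is quadratic here, the dual step with $\psi_i(\cdot) = \frac{1}{2}(\cdot)^2$ reduces to a one-dimensional quadratic with a closed-form solution. The sketch below assumes the dual step takes the form $\max_{y \geq 0}\{y\hat{g} - f_i^*(y) - \frac{\tau}{2}(y - \hat{y})^2\}$ (the nonnegativity clip reflects the domain of $f_i^*$); it is an illustration of the computation, not necessarily the authors' exact update:

```python
def chi2_dual_prox(g_hat, y_hat, lam, tau):
    # Maximize y*g_hat - (lam/2)*(y/lam - 1)**2 - (tau/2)*(y - y_hat)**2 over y >= 0.
    # First-order condition: g_hat - (y/lam - 1) - tau*(y - y_hat) = 0.
    y = (g_hat + 1.0 + tau * y_hat) / (1.0 / lam + tau)
    return max(y, 0.0)

assert chi2_dual_prox(g_hat=1.0, y_hat=0.0, lam=1.0, tau=1.0) == 1.0
```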
+
+# F.3. Comparison with Baselines
+
+In Table 4, we compare ALEXR with the baseline algorithms OOA and SMD. Notably, although SMD has a better iteration complexity for CVaR divergence, it requires $O(n)$ oracles at each iteration. In contrast, ALEXR and OOA only require $O(1)$ oracles in each iteration. In the worst case, we have $\Omega_{\mathcal{Y}}^{0} = O(n / \alpha^{2})$ for CVaR-penalized GDRO, so ALEXR
+
+has a better complexity than OOA when $\frac{1}{\alpha} = o(\sqrt{n \log n})$ . In practice, we have $\Omega_{\mathcal{Y}}^{0} = O(n / \alpha)$ for CVaR-penalized GDRO, so ALEXR has a better complexity than OOA when $\frac{1}{\alpha} = o(n \log n)$ . In addition, OOA cannot enjoy parallel speedup with respect to the inner batch size $B$ due to its scaled dual gradient estimator. Moreover, we also provide the iteration complexity of ALEXR on the GDRO problem with $\chi^2$ -divergence, with or without a strongly convex regularizer.
+
+# G. More Details of Experiments
+
+All algorithms are implemented using the PyTorch framework. Experiments are conducted on a workstation with a 12th Gen Intel(R) Core(TM) i7-12700K CPU with 20 logical cores.
+
+# G.1. Group Distributionally Robust Optimization
+
+# G.1.1. DATA PREPROCESSING
+
+Adult dataset: We construct 83 groups for the Adult dataset according to income (">50K", "≤50K"), race ("white", "black", "other"), sex ("female", "male"), age ("≤30", "30-45", ">45"), relationship ("single", "not_single"), and education ("higher", "others"), where we discard groups with fewer than 50 data points. Following Platt (1999), we transform both continuous and categorical features into binary features, resulting in a 122-dimensional feature vector for each data point.
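For concreteness, this kind of bucketing and binarization can be sketched with pandas; the column names, bucket edges, and toy values below are hypothetical illustrations, not the exact preprocessing script:

```python
import pandas as pd

df = pd.DataFrame({
    "age": [25, 40, 55],
    "race": ["white", "black", "other"],
})
# Bucket the continuous feature, then one-hot encode everything into 0/1 features.
df["age_bin"] = pd.cut(df["age"], bins=[0, 30, 45, 200], labels=["<=30", "30-45", ">45"])
binary = pd.get_dummies(df[["age_bin", "race"]])
assert binary.shape == (3, 6)   # 3 age buckets + 3 race categories
```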
+
+CelebA dataset: We construct 160 groups for this dataset according to 4 binary attributes ("blond hair", "male", "mouth slightly open", "smiling") and 10 types of additive Gaussian noise (means from $-0.08$ to $0.1$ in steps of $0.02$ and variance $0.08$ ) applied to the images. Each image of the CelebA dataset is resized to $224 \times 224 \times 3$ , normalized, and center-cropped. Then, we extract 512-dim feature vectors for the preprocessed images from the last convolutional layer of a ResNet18 pre-trained on ImageNet.
+
+# G.1.2. ADDITIONAL RESULTS
+
+The first two columns of Figure 3 show the existence of rare groups in the datasets. The last two columns of Figure 3 demonstrate that the actual value of $\Omega_{\mathcal{Y}}^{0}$ is indeed much smaller than its worst-case estimate $\frac{n}{2\alpha^2}$ , which verifies the claims in the remark below Theorem 3 and Section F.1.
+
+
+Figure 3. Group populations and the computed values of $\Omega_{\mathcal{Y}}^{0}$ .
+# G.2. Partial AUC Maximization with Restricted TPR
+
+Table 5. Statistics of datasets used in the partial AUC maximization experiments. Here $n_{+}$ and $n_{-}$ refer to the numbers of positive and negative data in the train and validation splits.
+
+| Dataset | Train $n_+$ | Train $n_-$ | Validation $n_+$ | Validation $n_-$ |
+| --- | --- | --- | --- | --- |
+| Covtype | 889 | 178,587 | 252 | 59,573 |
+| Higgs | 4,676 | 4,172,030 | 582 | 499,418 |
+| Cardiomegaly | 1,950 | 76,518 | 240 | 10,979 |
+| Lung-mass | 3,988 | 74,480 | 625 | 10,594 |
\ No newline at end of file
diff --git a/anearoptimalsingleloopstochasticalgorithmforconvexfinitesumcoupledcompositionaloptimization/images.zip b/anearoptimalsingleloopstochasticalgorithmforconvexfinitesumcoupledcompositionaloptimization/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..a55cb15150623f18b39a5ad71cc474de411e1c72
--- /dev/null
+++ b/anearoptimalsingleloopstochasticalgorithmforconvexfinitesumcoupledcompositionaloptimization/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0cb8be75d0f9347b96a6bd9328828aa2ae0a0c90fba54aa249f3bc70f5b285c8
+size 2681951
diff --git a/anearoptimalsingleloopstochasticalgorithmforconvexfinitesumcoupledcompositionaloptimization/layout.json b/anearoptimalsingleloopstochasticalgorithmforconvexfinitesumcoupledcompositionaloptimization/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..2f40d3441ae58f633743028f6c459fe8871ecb42
--- /dev/null
+++ b/anearoptimalsingleloopstochasticalgorithmforconvexfinitesumcoupledcompositionaloptimization/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9c333102114e507338ec9b70fb6653d09fb8931600535240aac7175055090074
+size 1794811
diff --git a/anewapproachtobacktrackingcounterfactualexplanationsaunifiedcausalframeworkforefficientmodelinterpretability/c4e855c7-8c5a-4bac-9f5e-91c51a9f3248_content_list.json b/anewapproachtobacktrackingcounterfactualexplanationsaunifiedcausalframeworkforefficientmodelinterpretability/c4e855c7-8c5a-4bac-9f5e-91c51a9f3248_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..1c357cc25173ec1f54cbeeccf470b65acaab4e31
--- /dev/null
+++ b/anewapproachtobacktrackingcounterfactualexplanationsaunifiedcausalframeworkforefficientmodelinterpretability/c4e855c7-8c5a-4bac-9f5e-91c51a9f3248_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7b84fe7b3668bafb9cfd224f7b9db28cd6bd173cfea82bf9e1f13766efe55b4d
+size 116201
diff --git a/anewapproachtobacktrackingcounterfactualexplanationsaunifiedcausalframeworkforefficientmodelinterpretability/c4e855c7-8c5a-4bac-9f5e-91c51a9f3248_model.json b/anewapproachtobacktrackingcounterfactualexplanationsaunifiedcausalframeworkforefficientmodelinterpretability/c4e855c7-8c5a-4bac-9f5e-91c51a9f3248_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..c832bc204a4c67732ee8752e6061c2a0bb0d0249
--- /dev/null
+++ b/anewapproachtobacktrackingcounterfactualexplanationsaunifiedcausalframeworkforefficientmodelinterpretability/c4e855c7-8c5a-4bac-9f5e-91c51a9f3248_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8f3733d2299608f57ec7cdcb8a852a725e34a650441798463fc0a91c4bc44d20
+size 138489
diff --git a/anewapproachtobacktrackingcounterfactualexplanationsaunifiedcausalframeworkforefficientmodelinterpretability/c4e855c7-8c5a-4bac-9f5e-91c51a9f3248_origin.pdf b/anewapproachtobacktrackingcounterfactualexplanationsaunifiedcausalframeworkforefficientmodelinterpretability/c4e855c7-8c5a-4bac-9f5e-91c51a9f3248_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..1acf19aeb2b390a55e30deb276bba8e88a28272c
--- /dev/null
+++ b/anewapproachtobacktrackingcounterfactualexplanationsaunifiedcausalframeworkforefficientmodelinterpretability/c4e855c7-8c5a-4bac-9f5e-91c51a9f3248_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3f69eb029fb4272584de2284c4ea8e8bf5441fce8504d6aa87d80ee32914c7fd
+size 407966
diff --git a/anewapproachtobacktrackingcounterfactualexplanationsaunifiedcausalframeworkforefficientmodelinterpretability/full.md b/anewapproachtobacktrackingcounterfactualexplanationsaunifiedcausalframeworkforefficientmodelinterpretability/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..d8bc757259b38e8bcfb6b163158e24444a718403
--- /dev/null
+++ b/anewapproachtobacktrackingcounterfactualexplanationsaunifiedcausalframeworkforefficientmodelinterpretability/full.md
@@ -0,0 +1,578 @@
+# A New Approach to Backtracking Counterfactual Explanations: A Unified Causal Framework for Efficient Model Interpretability
+
+Pouria Fatemi $^{1,2}$ Ehsan Sharifian $^{3}$ Mohammad Hossein Yassaee $^{4}$
+
+# Abstract
+
+Counterfactual explanations enhance interpretability by identifying alternative inputs that produce different outputs, offering localized insights into model decisions. However, traditional methods often neglect causal relationships, leading to unrealistic examples. While newer approaches integrate causality, they are computationally expensive. To address these challenges, we propose an efficient method called BRACE based on backtracking counterfactuals that incorporates causal reasoning to generate actionable explanations. We first examine the limitations of existing methods and then introduce our novel approach and its features. We also explore the relationship between our method and previous techniques, demonstrating that it generalizes them in specific scenarios. Finally, experiments show that our method provides deeper insights into model outputs.
+
+# 1. Introduction
+
+Machine learning (ML) has become a core technology in areas such as healthcare, finance, and autonomous systems (Bhoi et al., 2024; Xie et al., 2024; Sancaktar et al., 2022). Although ML models are generally very effective, their limited interpretability is still a significant obstacle (Jethani et al., 2021). Understanding why a model generates a specific prediction is crucial for trust, fairness, and accountability (Miller, 2019; Zhang & Bareinboim, 2018; Von Kugelgen et al., 2022; Karimi et al., 2023). This need is especially clear in high-stakes domains like medical diagnosis or loan approval, where decisions can lead to serious consequences (Doshi-Velez & Kim, 2017).
+
+$^{1}$ Department of Mathematics, Technical University of Munich, Germany $^{2}$ Munich Center for Machine Learning, Germany $^{3}$ Department of Electrical Engineering, École Polytechnique Fédérale de Lausanne, Switzerland $^{4}$ Department of Electrical Engineering, Sharif University of Technology, Tehran, Iran. Correspondence to: Mohammad Hossein Yassaee.
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+Counterfactual explanations are a widely used tool for interpretability. They address two main questions:
+
+1. "Why did the model produce this outcome?"
+2. "What changes can lead to a different outcome?" (Karimi et al., 2022).
+
+These explanations offer localized insights by highlighting minimal modifications to input features that would alter the model's output (Wachter et al., 2017; Karimi et al., 2020a). For instance, in the context of loan applications, a counterfactual explanation could recommend increasing one's income or reducing debt to secure approval.
+
+Despite their benefits, traditional counterfactual methods often overlook causal relationships between features, which can lead to impractical or unrealistic suggestions (Slack et al., 2021). For example, advising someone to lower their income while increasing savings ignores the causal dependency between these factors. This limitation reduces the practical value of such explanations. Causal algorithmic recourse (Karimi et al., 2021) incorporates Interventional Counterfactuals (ICF) to produce more realistic outputs, but this approach is typically computationally expensive and difficult to scale.
+
+Backtracking Counterfactuals (BCF) (Von Kugelgen et al., 2023) present a new way to define counterfactuals in causal inference. We propose a new framework for generating counterfactual explanations using backtracking counterfactuals. Our method combines causal reasoning with computational efficiency, enabling it to produce actionable explanations at scale. The key contributions of our work are as follows:
+
+- We analyze the limitations of existing counterfactual methods, including their inability to handle causal dependencies and their high computational costs.
+- We introduce our novel method, BRACE: Backtracking Recourse and Actionable Counterfactual Explanations, that leverages backtracking counterfactuals to provide actionable and meaningful explanations.
+- We show that our new approach unifies existing methods in certain scenarios.
+- We demonstrate through experiments that our method provides better insights into model behavior.
+
+This paper is organized as follows. Section 2 introduces fundamental concepts in causal inference, counterfactual reasoning, and the problem definition. Section 3 reviews prior work on counterfactual explanations and interpretability. Section 4 details our proposed framework. Section 5 explores the relationship between backtracking and interventional counterfactuals, while Section 6 examines how our method connects to existing approaches. Section 7 discusses metric selection and our optimization method. We present experimental results in Section 8, followed by conclusions and future directions in Sections 9 and 10, respectively.
+
+# 2. Preliminaries and Problem Statement
+
+In this section, we review the notion of Structural Causal Models (SCMs) (Pearl, 2009), discuss interventional versus backtracking counterfactuals, and formally define the problem setting.
+
+# 2.1. Structural Causal Models (SCMs)
+
+An SCM $\mathcal{C} \coloneqq (\mathbf{S}, P_{\mathbf{U}})$ describes a set $\mathbf{S}$ of causal relationships among variables through structural equations:
+
+$$
+X _ {i} := f _ {i} \left(\mathbf {X} _ {\operatorname {p a} (i)}, U _ {i}\right), \quad i = 1, \dots , n, \tag {1}
+$$
+
+where $\mathbf{X}_{\mathrm{pa}(i)}$ are the parent variables (direct causes) of $X_{i}$ , and $U_{i}$ are independent noise terms sampled from a distribution $P_{\mathbf{U}}$ . These relationships are represented by a Directed Acyclic Graph (DAG) $G$ , which governs the observational distribution $P_{\mathbf{X}}^{\mathcal{C}}$ (Peters et al., 2017).
+
+The acyclic structure of $G$ ensures that each $X_{i}$ can be expressed as a deterministic function of $\mathbf{U}$ . This results in a unique mapping from $\mathbf{U}$ to $\mathbf{X}$ , denoted by:
+
+$$
+\mathbf {X} = \mathbf {F} (\mathbf {U}), \tag {2}
+$$
+
+commonly referred to as the reduced-form expression. The function $\mathbf{F}(.)$ translates the distribution of latent variables $\mathbf{U}$ into the distribution of observed variables $\mathbf{X}$ . We assume causal sufficiency, implying no hidden confounders are present.
+
+Additionally, we adopt a Bijective Generation Mechanism (Nasr-Esfahany et al., 2023), which assumes that $f_{i}(\mathbf{x}_{\mathrm{pa}(i)}, \cdot)$ is invertible for fixed $\mathbf{x}_{\mathrm{pa}(i)}$ . This ensures the existence of the inverse mapping $\mathbf{F}^{-1}(.)$ , allowing us to recover:
+
+$$
+\mathbf {U} = \mathbf {F} ^ {- 1} (\mathbf {X}). \tag {3}
+$$
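For intuition, a two-variable toy SCM (entirely illustrative, not from the paper) makes the reduced form and its inverse concrete:

```python
import numpy as np

# Toy linear SCM: X1 := U1,  X2 := 2*X1 + U2  (trivially bijective in U).
def F(u):
    x1 = u[0]
    x2 = 2 * x1 + u[1]
    return np.array([x1, x2])

def F_inv(x):
    # Noise recovery U = F^{-1}(X): invert each mechanism given the parents.
    return np.array([x[0], x[1] - 2 * x[0]])

u = np.array([0.5, -1.0])
x = F(u)
assert np.allclose(x, [0.5, 0.0])
assert np.allclose(F_inv(x), u)      # the bijective generation mechanism
```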
+
+# 2.2. Interventional and Backtracking Counterfactuals
+
+Let $\mathbf{x}$ be the observed value, and let $\mathbf{x}_{\mathcal{A}}^{\mathrm{CF}} = (x_{i}^{\mathrm{CF}}: i \in \mathcal{A})$ be an alternative set of values for a subset $\mathcal{A} \subseteq \{1,2,\dots,n\}$ . A full counterfactual vector $\mathbf{x}^{\mathrm{CF}} =$
+
+$(x_{1}^{\mathrm{CF}}, x_{2}^{\mathrm{CF}}, \ldots, x_{n}^{\mathrm{CF}})$ must agree with $\mathbf{x}_{\mathcal{A}}^{\mathrm{CF}}$ on all indices in $\mathcal{A}$ . Intuitively, $\mathbf{x}^{\mathrm{CF}}$ addresses the question: "What would the variables $\mathbf{X}$ have been if $\mathbf{X}_{\mathcal{A}}$ took the values $\mathbf{x}_{\mathcal{A}}^{\mathrm{CF}}$ instead of the observed values $\mathbf{x}_{\mathcal{A}}$ ?"
+
+We focus on two main ways to form such counterfactuals, both described by random variables $\mathbf{X}^{\mathrm{CF}}$ : the interventional approach and the backtracking approach. Below is a concise explanation of these two methods.
+
+Interventional Counterfactuals. In the interventional method, we force the antecedent $\mathbf{x}_{\mathcal{A}}^{\mathrm{CF}}$ by modifying the system's structural functions $\mathbf{S}$ to create a new set $\mathbf{S}^{\mathrm{CF}} = (f_1^{\mathrm{CF}}, f_2^{\mathrm{CF}}, \ldots, f_n^{\mathrm{CF}})$ . Specifically, we fix each $f_i^{\mathrm{CF}}$ to be $x_i^{\mathrm{CF}}$ for $i \in \mathcal{A}$ , while keeping $f_i^{\mathrm{CF}} = f_i$ for all $i \notin \mathcal{A}$ . This process is similar to making a direct change in the causal mechanism of the variables in $\mathcal{A}$ , referred to as a hard intervention.
+
+Backtracking Counterfactuals. By contrast, backtracking counterfactuals preserve the original structural assignments $\mathbf{S}$ and instead adjust the latent variables $\mathbf{U}$ . To enforce $\mathbf{x}_{\mathcal{A}}^{\mathrm{CF}} \neq \mathbf{x}_{\mathcal{A}}$ , we introduce a modified set of latent variables $\mathbf{U}^{\mathrm{CF}}$ . These are drawn from a backtracking conditional distribution $\mathbb{P}_B(\mathbf{U}^{\mathrm{CF}} \mid \mathbf{U})$ (Von Kugelgen et al., 2023), which controls how closely $\mathbf{U}^{\mathrm{CF}}$ resembles the original $\mathbf{U}$ . Once we obtain $\mathbf{U}^{\mathrm{CF}}$ , we derive the resulting distribution of $\mathbf{X}^{\mathrm{CF}}$ (given $\mathbf{x}$ and $\mathbf{x}_{\mathcal{A}}^{\mathrm{CF}}$ ) by marginalizing over all possible values of $\mathbf{U}^{\mathrm{CF}}$ .
+
+Both interventional and backtracking perspectives provide valuable insights into counterfactual reasoning but rely on distinct causal reasoning paradigms. Here, we only gave a brief overview of these two approaches. Their precise definitions appear in Appendix A.
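On a toy linear SCM ($X_1 := U_1$, $X_2 := 2X_1 + U_2$; purely illustrative, not from the paper), the two constructions can be contrasted for an antecedent on $X_2$, where they genuinely differ. The L2-closest-noise choice below is one concrete instance of the backtracking conditional $\mathbb{P}_B(\mathbf{U}^{\mathrm{CF}} \mid \mathbf{U})$:

```python
import numpy as np

u = np.array([0.5, -1.0])       # abducted factual noise; factual x = (0.5, 0.0)
x2_cf = 2.0                     # antecedent: "X2 takes the value 2 instead"

# Interventional: replace X2's mechanism by the constant x2_cf, keep U fixed.
# Upstream variables are untouched, so X1 keeps its factual value.
x_icf = np.array([u[0], x2_cf])

# Backtracking: keep the mechanisms, pick the L2-closest noise with F(u')_2 = x2_cf,
# i.e. project u onto the affine constraint 2*u1' + u2' = x2_cf.
a = np.array([2.0, 1.0])                         # constraint normal
u_cf = u + (x2_cf - a @ u) / (a @ a) * a
x_bcf = np.array([u_cf[0], 2 * u_cf[0] + u_cf[1]])

assert np.isclose(x_bcf[1], x2_cf)               # antecedent holds
assert not np.isclose(x_icf[0], x_bcf[0])        # X1 differs: 0.5 vs 1.3
```

Here backtracking changes $X_1$ as well, "explaining" the different $X_2$ through different upstream noise, while the intervention leaves $X_1$ untouched.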
+
+# 2.3. Problem Definition
+
+We examine a complex model (e.g., a deep neural network) designed for classification tasks. This model is represented as $h: \mathbb{R}^d \to \{0, 1, \dots, m\}$ , where for a given input $\mathbf{x}$ , the model predicts $h(\mathbf{x}) = y$ .
+
+The input $\mathbf{x}$ is assumed to follow an SCM $\mathcal{C} = (\mathbf{S}, P_{\mathbf{U}})$ , where the structural equations $\mathbf{S}$ are fully known. Formally, if $\mathbf{U}$ denotes the latent (noise) variables of the SCM, the input $\mathbf{X}$ is generated as $\mathbf{X} = \mathbf{F}(\mathbf{U})$ , with the function $\mathbf{F}(.)$ explicitly defined. The components of $\mathbf{U}$ are mutually independent and $\mathbf{F}(.)$ is invertible. The goal is to find a counterfactual input $\mathbf{x}^{\mathrm{CF}}$ that satisfies:
+
+1. $\mathbf{x}^{\mathrm{CF}}$ is similar to $\mathbf{x}$
+2. $h(\mathbf{x}^{\mathrm{CF}}) = y^{\mathrm{CF}}\neq y = h(\mathbf{x})$ , and
+3. the causal structure of the input variables is maintained.
+
+In essence, $y^{\mathrm{CF}}$ represents the desired outcome of the model, and the task is to determine the nearest plausible input that
+
+would produce this outcome. This problem definition sheds light on why the model predicted $y$ instead of $y^{\mathrm{CF}}$ and provides a localized understanding of the model's behavior around $\mathbf{x}$ .
+
+# 3. Related Work
+
+Methods for interpretability are generally classified into feature-based and example-based approaches (Molnar, 2020). Feature-based techniques attribute model predictions to input features, offering global or local interpretability. For instance, SHapley Additive exPlanations (SHAP) (Lundberg & Lee, 2017) decompose predictions into additive contributions of features. Extensions like Causal Shapley Values (Heskes et al., 2020) and Asymmetric Shapley Values (Frye et al., 2020) incorporate causal dependencies or relax symmetry assumptions, respectively, to enhance the interpretive granularity and address redundancies. Local surrogate models, such as Local Interpretable Model-Agnostic Explanations (LIME) (Ribeiro et al., 2016), provide localized, model-agnostic explanations by approximating the behavior of black-box models for individual predictions.
+
+Example-based methods focus on understanding models through data points. Prototypes and criticisms (Kim et al., 2016) identify representative and atypical samples, while Contrastive Explanations (Dhurandhar et al., 2018) highlight minimal features that sustain or alter predictions. Counterfactual explanations (Wachter et al., 2017), a prominent example-based approach, aim to find minimal modifications to input features that result in different model outputs. These methods are inherently model-agnostic, localized, and intuitive for decision support systems.
+
+In recent years, causality has played a growing role in interpretability. Causal Algorithmic Recourse (Karimi et al., 2021) generates actionable and realistic counterfactuals by respecting causal structures. This approach ensures the plausibility of counterfactuals by adhering to causal dependencies in the data. Subsequent research has extended this framework to address various challenges. For instance, Karimi et al. (2020b) weaken the assumption of fully known causal graphs and propose methods for algorithmic recourse when causal knowledge is incomplete. Similarly, Dominguez-Olmedo et al. (2022) focus on generating robust and stable algorithmic recourse by introducing cost functions tailored to ensure resilience against adversarial perturbations. Additionally, Janzing et al. (2020) refine feature attributions using causal insights, and Jung et al. (2022) and Wang et al. (2021) explore novel Shapley value formulations incorporating causality to create more meaningful interpretations.
+
+A novel direction involves backtracking counterfactual explanations (Von Kugelgen et al., 2023), which modify latent variables while preserving causal dependencies, thereby ensuring consistency with the structural causal model. This approach has been extended through practical algorithms, such as Deep Backtracking Explanations (Kladny et al., 2024), enabling computation of backtracking counterfactuals in high-dimensional settings.
+
+Our approach belongs to the category of example-based methods, focusing on counterfactual explanations. It is directly comparable to methods such as Counterfactual Explanations (Wachter et al., 2017), Causal Algorithmic Recourse (Karimi et al., 2021), Backtracking Counterfactual Explanations (Von Kugelgen et al., 2023), and Deep Backtracking Explanations (Kladny et al., 2024). Below, we briefly review and critique these methods.
+
+Counterfactual Explanations: The method in (Wachter et al., 2017) generates counterfactuals through the following optimization:
+
+$$
+\underset{\mathbf{x}^{\mathrm{CF}}}{\arg\min} \; d_X\left(\mathbf{x}^{\mathrm{CF}}, \mathbf{x}\right) \tag{4}
+$$
+
+$$
+\mathrm{s.t.} \quad h(\mathbf{x}^{\mathrm{CF}}) = y^{\mathrm{CF}}
+$$
+
+A key drawback of this approach is its failure to account for causal dependencies among input variables, often leading to counterfactuals that are unrealistic or infeasible. For example, in a loan approval scenario, it may suggest decreasing age while increasing education level, violating causal relationships. Although these counterfactuals minimize the distance to the original input, they offer little practical guidance for future improvements and fail to provide actionable insights.
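+
+The optimization in (4) is typically approximated by folding the constraint into a differentiable penalty and running gradient descent. The sketch below uses an assumed two-feature logistic classifier; the names `w`, `find_counterfactual`, and the penalty weight `beta` are illustrative, not the paper's setup:
+
+```python
+import numpy as np
+
+# Wachter-style counterfactual search (a sketch of Eq. 4): minimize
+# beta * (h(x_cf) - y_cf)^2 + ||x_cf - x||_2^2 by gradient descent.
+w, b = np.array([1.0, -2.0]), 0.5   # assumed classifier parameters
+
+def h(x):
+    return 1.0 / (1.0 + np.exp(-(x @ w + b)))   # P(y = 1 | x)
+
+def find_counterfactual(x, y_cf=1.0, beta=10.0, lr=0.05, steps=500):
+    x_cf = x.copy()
+    for _ in range(steps):
+        p = h(x_cf)
+        # gradient of beta*(p - y_cf)^2 + ||x_cf - x||_2^2 w.r.t. x_cf
+        grad = 2 * beta * (p - y_cf) * p * (1 - p) * w + 2 * (x_cf - x)
+        x_cf = x_cf - lr * grad
+    return x_cf
+
+x = np.array([-2.0, 1.0])       # factual point, h(x) < 0.5
+x_cf = find_counterfactual(x)   # crosses the decision boundary
+```
+
+Because nothing in this objective respects the causal structure, the recovered `x_cf` may change upstream and downstream features independently, which is exactly the infeasibility criticized above.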
+
+Causal Algorithmic Recourse: The method in (Karimi et al., 2021) addresses feasibility by optimizing the following:
+
+$$
+\underset{\mathcal{A}}{\arg\min} \quad \operatorname{cost}(\mathcal{A}; \mathbf{x})
+$$
+
+$$
+\mathrm{s.t.} \quad h\left(\mathbf{x}^{\mathrm{CF}}\right) = y^{\mathrm{CF}} \tag{5}
+$$
+
+$$
+\mathbf{x}^{\mathrm{CF}} = \mathbf{F}_{\mathcal{A}}\left(\mathbf{F}^{-1}(\mathbf{x})\right)
+$$
+
+Here, $\mathrm{cost}(.;\mathbf{x})$ measures the intervention cost, and $\mathbf{F}_{\mathcal{A}}(.)$ represents the causal functions after intervening on $\mathcal{A}$. While this method ensures actionable counterfactuals, it faces two challenges. First, the optimization is combinatorial, requiring a search over all subsets $\mathcal{A}$, which grows exponentially with the number $n$ of input variables ($2^n$ subsets). Second, the method relies on interventional counterfactuals, which are often criticized for lacking causal intuition (Dorr, 2016); backtracking counterfactuals are considered a better alternative.
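+
+To see the combinatorial cost concretely, the brute-force sketch below enumerates every non-empty intervention set $\mathcal{A}$ on a hypothetical three-variable linear SCM, with intervention values drawn from a coarse grid. All functions and numbers are our own toy assumptions:
+
+```python
+import itertools
+import numpy as np
+
+# Brute-force Causal Algorithmic Recourse (Eq. 5) on a toy SCM:
+# x1 = u1, x2 = 0.5*x1 + u2, x3 = x1 + x2 + u3.
+def F_inv(x):
+    return np.array([x[0], x[1] - 0.5 * x[0], x[2] - x[0] - x[1]])
+
+def F_do(u, theta):
+    """Action + prediction: children of intervened nodes are recomputed."""
+    x = np.empty(3)
+    x[0] = theta.get(0, u[0])
+    x[1] = theta.get(1, 0.5 * x[0] + u[1])
+    x[2] = theta.get(2, x[0] + x[1] + u[2])
+    return x
+
+h = lambda x: int(x.sum() > 0)   # toy classifier, target y_cf = 1
+
+x = np.array([-1.0, -0.5, -2.0])         # factual, h(x) == 0
+u = F_inv(x)                             # abduction
+grid = np.linspace(-3.0, 3.0, 25)
+best, best_cost = None, np.inf
+for r in range(1, 4):
+    for A in itertools.combinations(range(3), r):   # 2^n - 1 subsets
+        for vals in itertools.product(grid, repeat=r):
+            x_cf = F_do(u, dict(zip(A, vals)))
+            cost = np.abs(x_cf - x).sum()
+            if h(x_cf) == 1 and cost < best_cost:
+                best, best_cost = x_cf, cost
+```
+
+Even on three variables the search multiplies the subset enumeration by a grid over intervention values; BRACE replaces this with a single continuous optimization.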
+
+Backtracking Counterfactual Explanations: The method in (Von Kugelgen et al., 2023) formulates the problem as:
+
+$$
+\underset {\mathbf {x} ^ {\mathrm {C F}}} {\arg \max } \quad \mathbb {P} _ {B} \left(\mathbf {x} ^ {\mathrm {C F}} \mid y ^ {\mathrm {C F}}, \mathbf {x}, y\right), \tag {6}
+$$
+
+focusing on the backtracking conditional distribution $\mathbb{P}_B(\mathbf{U}^{\mathrm{CF}}\mid \mathbf{U})$ , which adjusts latent variables to produce counterfactuals. However, its main drawback is the dependence on $\mathbb{P}_B$ . Different choices of this distribution lead to varying counterfactuals, and selecting $\mathbb{P}_B$ is left to the user. Additionally, solving (6) becomes computationally challenging for complex $\mathbb{P}_B$ distributions, as we must integrate over all values of this distribution to compute backtracking counterfactuals (see Appendix A).
+
+Deep Backtracking Explanations: The method in (Kladny et al., 2024) refines backtracking counterfactuals using this optimization:
+
+$$
+\begin{array}{l} \underset {\mathbf {x} ^ {\mathrm {C F}}} {\arg \min } d _ {U} \left(\mathbf {F} ^ {- 1} \left(\mathbf {x} ^ {\mathrm {C F}}\right), \mathbf {F} ^ {- 1} (\mathbf {x})\right) \tag {7} \\ \mathrm {s . t .} \qquad h (\mathbf {x} ^ {\mathrm {C F}}) = y ^ {\mathrm {C F}} \\ \end{array}
+$$
+
+This method eliminates dependence on $\mathbb{P}_B(\mathbf{U}^{\mathrm{CF}}\mid \mathbf{U})$ by focusing on the latent space distance $d_U$ . However, it ignores proximity between $\mathbf{x}^{\mathrm{CF}}$ and $\mathbf{x}$ in the observed space. As a result, the generated counterfactuals may lack intuitive interpretability and fail to meet the original goal of being close to $\mathbf{x}$ .
+
+While these methods offer valuable insights, they have notable limitations. In the next section, we propose a new approach that addresses these issues and provides a more effective solution.
+
+# 4. Our method
+
+In this section, we propose our method called BRACE: Backtracking Recourse and Actionable Counterfactual Explanations. As discussed earlier, one of the main limitations of backtracking counterfactuals is their reliance on the conditional distribution $\mathbb{P}_B(\mathbf{U}^{\mathrm{CF}}\mid \mathbf{U})$. This dependency arises because the choice of $\mathbb{P}_B(\mathbf{U}^{\mathrm{CF}}\mid \mathbf{U})$ significantly influences the resulting counterfactuals, and its specification is left entirely to the user. Such a distribution is essential when a probabilistic representation of backtracking counterfactuals is required. However, in our scenario where $\mathbf{X} = \mathbf{F}(\mathbf{U})$ and $\mathbf{F}(.)$ is invertible, a simpler perspective can be adopted. Here, $\mathbf{U}$ can be treated as a deterministic vector, which simplifies the formulation considerably.
+
+When $\mathbf{U}$ is deterministic, we may treat $\mathbf{U}^{\mathrm{CF}}$ as another deterministic vector close to $\mathbf{U}$ , preserving the essence of backtracking without resorting to $\mathbb{P}_B(\mathbf{U}^{\mathrm{CF}}\mid \mathbf{U})$ . In interpretability tasks, one typically seeks an input $\mathbf{x}^{\mathrm{CF}}$ near $\mathbf{x}$ that remains faithful to causal constraints. Thus, viewing $\mathbf{x}^{\mathrm{CF}}$ as deterministic naturally aligns with this goal.
+
+Based on this reasoning, we propose our method, BRACE, with the following optimization problem:
+
+$$
+\begin{array}{l} \arg\min_{\mathbf{x}^{\mathrm{CF}}, \mathbf{u}^{\mathrm{CF}}} \; d_X\left(\mathbf{x}, \mathbf{x}^{\mathrm{CF}}\right) + \lambda d_U\left(\mathbf{u}, \mathbf{u}^{\mathrm{CF}}\right) \\ \text{s.t.} \quad h\left(\mathbf{x}^{\mathrm{CF}}\right) = y^{\mathrm{CF}}, \tag{8} \\ \mathbf{x}^{\mathrm{CF}} = \mathbf{F}(\mathbf{u}^{\mathrm{CF}}), \\ \mathbf{x} = \mathbf{F}(\mathbf{u}), \\ \end{array}
+$$
+
+which can also be expressed as:
+
+$$
+\begin{array}{l} \arg\min_{\mathbf{x}^{\mathrm{CF}}} \; d_X(\mathbf{x}, \mathbf{x}^{\mathrm{CF}}) + \lambda d_U(\mathbf{F}^{-1}(\mathbf{x}), \mathbf{F}^{-1}(\mathbf{x}^{\mathrm{CF}})) \tag{9} \\ \text{s.t.} \quad h(\mathbf{x}^{\mathrm{CF}}) = y^{\mathrm{CF}}, \end{array}
+$$
+
+or equivalently:
+
+$$
+\begin{array}{l} \arg\min_{\mathbf{u}^{\mathrm{CF}}} \; d_X(\mathbf{x}, \mathbf{F}(\mathbf{u}^{\mathrm{CF}})) + \lambda d_U(\mathbf{F}^{-1}(\mathbf{x}), \mathbf{u}^{\mathrm{CF}}) \tag{10} \\ \text{s.t.} \quad h\left(\mathbf{F}(\mathbf{u}^{\mathrm{CF}})\right) = y^{\mathrm{CF}}. \end{array}
+$$
+
+Intuitively, this optimization seeks the closest input $\mathbf{x}^{\mathrm{CF}}$ to $\mathbf{x}$ that achieves the desired output $y^{\mathrm{CF}}$ while preserving the causal relationships encoded in the input variables.
+
+In (8), the objective function includes two terms: $d_{X}(\mathbf{x},\mathbf{x}^{\mathrm{CF}})$ , which ensures the counterfactual input remains close to the observed input, and $d_{U}(\mathbf{u},\mathbf{u}^{\mathrm{CF}})$ , which ensures that the latent variables of the factual and counterfactual worlds are similar. The constraints enforce the desired counterfactual output $(h(\mathbf{x}^{\mathrm{CF}}) = y^{\mathrm{CF}})$ , causal consistency $(\mathbf{x}^{\mathrm{CF}} = \mathbf{F}(\mathbf{u}^{\mathrm{CF}}))$ , and the relationship between the observed input and the latent variables $(\mathbf{x} = \mathbf{F}(\mathbf{u}))$ .
+
+As $d_U(\mathbf{u},\mathbf{u}^{\mathrm{CF}})$ increases, the counterfactual latent variables $\mathbf{u}^{\mathrm{CF}}$ deviate further from the factual latent variables $\mathbf{u}$ , making the counterfactual less connected to the factual observation. The parameter $\lambda$ regulates the trade-off between maintaining proximity in the latent space and ensuring the counterfactual remains close to the original input.
+
+When $\lambda = 0$ , the proximity of latent variables is ignored, resulting in solutions that lack causal consistency and focus solely on minimizing the distance between $\mathbf{x}$ and $\mathbf{x}^{\mathrm{CF}}$ . Conversely, as $\lambda \to \infty$ , the optimization prioritizes minimizing $d_{U}\left(\mathbf{u},\mathbf{u}^{\mathrm{CF}}\right)$ , which ensures minimal deviation in the latent space but disregards proximity in the input space. The ideal solution balances these objectives, ensuring that the counterfactual is both causally consistent and close to the original input.
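+
+This trade-off can be illustrated numerically. Assume a toy two-variable linear SCM $x_1 = u_1$, $x_2 = x_1 + u_2$ with decision boundary $x_2 = 0$, so counterfactuals satisfy $x_2^{\mathrm{CF}} = 0$ and only the free coordinate $t = x_1^{\mathrm{CF}}$ must be chosen; everything below is our own sketch, not the paper's model:
+
+```python
+import numpy as np
+
+# Toy SCM: x1 = u1, x2 = x1 + u2; classifier boundary is x2 = 0.
+# Counterfactuals sit on x2_cf = 0; scan the free coordinate t = x1_cf.
+x = np.array([0.0, -1.0])
+u = np.array([x[0], x[1] - x[0]])            # u = F^{-1}(x)
+
+def objective(t, lam):
+    x_cf = np.array([t, 0.0])                # on the decision boundary
+    u_cf = np.array([t, -t])                 # F^{-1}(x_cf)
+    return np.abs(x_cf - x).sum() + lam * np.linalg.norm(u_cf - u)
+
+ts = np.linspace(-1.0, 2.0, 3001)
+t_small = ts[np.argmin([objective(t, 0.0) for t in ts])]    # lambda = 0
+t_large = ts[np.argmin([objective(t, 50.0) for t in ts])]   # large lambda
+```
+
+With $\lambda = 0$ only the observed feature $x_2$ changes (`t_small` stays at 0), while a large $\lambda$ also shifts $x_1$ (`t_large` near 0.5) so that the required latent change is shared across both noise terms.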
+
+# 5. Relation Between Backtracking and Interventional Counterfactuals
+
+Our causal model is represented as $\mathbf{X} = \mathbf{F}(\mathbf{U})$, where $\mathbf{F}(.)$ is an invertible function. Consequently, the distribution of the noise variables conditioned on $\mathbf{X} = \mathbf{x}$ becomes deterministic. Specifically, all the probability mass of the posterior distribution $\mathbb{P}_{\mathcal{C}}(\mathbf{U} \mid \mathbf{X} = \mathbf{x})$ is concentrated at $\mathbf{u} = \mathbf{F}^{-1}(\mathbf{x})$.
+
+As outlined in Section 2.2, backtracking counterfactuals aim to produce a desired counterfactual outcome by keeping the causal graph unchanged while minimally modifying the noise variables $\mathbf{U}$ after observing $\mathbf{x}$ . Although backtracking counterfactuals are typically defined using the backtracking conditional distribution $\mathbb{P}_B(\mathbf{U}^{\mathrm{CF}}\mid \mathbf{U})$ , when $\mathbf{F}(.)$ is invertible, the deterministic nature of $\mathbf{U}$ eliminates the necessity for statistical modeling. Instead, we directly analyze the connection between backtracking and interventional counterfactuals via their respective causal equations.
+
+Theorem 5.1. In structural causal models that adhere to the Bijective Generation Mechanism (i.e., $\mathbf{F}(.)$ is invertible), backtracking counterfactuals generalize interventional counterfactuals. Specifically, one of the solutions derived from the backtracking counterfactual formulation always coincides with the interventional counterfactual.
+
+Proof. Consider a counterfactual query involving a subset of variables $\mathbf{X}_{\mathcal{A}}^{\mathrm{CF}} = \mathbf{x}_{\mathcal{A}}^{*}$ . Under the Bijective Generation Mechanism, the posterior distribution $\mathbb{P}_{\mathcal{C}}(\mathbf{U} \mid \mathbf{X} = \mathbf{x})$ assigns probability one to $\mathbf{u} = \mathbf{F}^{-1}(\mathbf{x})$ , making $\mathbf{u}$ deterministic. The interventional counterfactuals for this query are defined by the following system of equations:
+
+$$
+\left\{ \begin{array}{l l} x _ {i} ^ {\mathrm {I C F}} = f _ {i} \left(\mathbf {x} _ {\operatorname {p a} (i)} ^ {\mathrm {I C F}}, u _ {i}\right), & \forall i \notin \mathcal {A}, \\ x _ {i} ^ {\mathrm {I C F}} = x _ {i} ^ {*}, & \forall i \in \mathcal {A}. \end{array} \right. \tag {11}
+$$
+
+In contrast, the backtracking counterfactuals are determined by:
+
+$$
+\left\{ \begin{array}{ll} x_i^{\mathrm{BCF}} = f_i\left(\mathbf{x}_{\mathrm{pa}(i)}^{\mathrm{BCF}}, u_i^{\mathrm{BCF}}\right), & \forall i \notin \mathcal{A}, \\ x_i^{\mathrm{BCF}} = f_i\left(\mathbf{x}_{\mathrm{pa}(i)}^{\mathrm{BCF}}, u_i^{\mathrm{BCF}}\right) = x_i^{*}, & \forall i \in \mathcal{A}. \end{array} \right. \tag{12}
+$$
+
+The key difference between (11) and (12) lies in the adjustment mechanism. Interventional counterfactuals modify the causal graph to enforce $\mathbf{X}_{\mathcal{A}}^{\mathrm{CF}} = \mathbf{x}_{\mathcal{A}}^{*}$ , whereas backtracking counterfactuals achieve the same result by adjusting the noise variables.
+
+Due to the DAG assumption in the causal graph, it is clear that equation (11) has a unique solution. After intervening on the set $\mathcal{A}$, the mutilated graph remains a DAG, from which the unique solution for $\mathbf{x}^{\mathrm{ICF}}$ can be derived by propagating values from the source nodes.
+
+Given that $\mathbf{F}(.)$ is invertible, we define
+
+$$
+\mathbf {u} _ {\mathcal {A}} ^ {\mathrm {I C F}} = \mathbf {F} ^ {- 1} \left(\mathbf {x} ^ {\mathrm {I C F}}\right). \tag {13}
+$$
+
+We can also rewrite equation (12) as $\mathbf{x}^{\mathrm{BCF}} = \mathbf{F}(\mathbf{u}^{\mathrm{BCF}})$. By definition, if we substitute $\mathbf{u}^{\mathrm{BCF}} = \mathbf{u}_{\mathcal{A}}^{\mathrm{ICF}}$ into the backtracking equations (12), we arrive at the same counterfactual solution, $\mathbf{x}^{\mathrm{ICF}}$. Therefore, interventional counterfactuals can be considered a specific case of backtracking counterfactuals.
+
+This reasoning can be generalized to any subset of variables $\mathcal{A}$ . For any counterfactual query involving $\mathcal{A}$ , we can construct $\mathbf{u}^{\mathrm{ICF}}$ in such a way that backtracking counterfactuals align with interventional counterfactuals.
+
+To the best of our knowledge, Theorem 5.1 is the first result that relates backtracking and interventional counterfactuals, and it holds independent significance. Theorem 5.1 demonstrates that when $\mathbf{F}(.)$ is invertible, backtracking counterfactuals inherently include interventional counterfactuals as a specific case. Furthermore, interventional counterfactuals, typically expressed as $\mathbf{x}^{\mathrm{ICF}} = \mathbf{F}_{\mathcal{A}}(\mathbf{u})$ , where $\mathbf{F}_{\mathcal{A}}(.)$ represents the structural equations post-intervention, can equivalently be reformulated as $\mathbf{x}^{\mathrm{ICF}} = \mathbf{F}(\mathbf{u}_{\mathcal{A}}^{\mathrm{ICF}})$ , bridging the gap between the two paradigms. By the construction (13) in Theorem 5.1, we can see $\mathbf{u}_{\mathcal{A}}^{\mathrm{ICF}} = \mathbf{F}^{-1}\left(\mathbf{F}_{\mathcal{A}}\left(\mathbf{F}^{-1}(\mathbf{x})\right)\right)$ .
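+
+The identity at the end of this argument is easy to verify numerically. The snippet below builds a minimal two-variable linear SCM (our own toy example), computes the interventional counterfactual for $do(X_1 = \theta)$, and checks that backtracking with $\mathbf{u}^{\mathrm{ICF}} = \mathbf{F}^{-1}(\mathbf{x}^{\mathrm{ICF}})$ reproduces it:
+
+```python
+import numpy as np
+
+# Two-variable linear SCM: x1 = u1, x2 = 2*x1 + u2 (toy example).
+def F(u):
+    return np.array([u[0], 2 * u[0] + u[1]])
+
+def F_inv(x):
+    return np.array([x[0], x[1] - 2 * x[0]])
+
+def F_do_x1(u, theta):
+    """do(X1 = theta); X2 is recomputed from its structural equation."""
+    x1 = theta
+    return np.array([x1, 2 * x1 + u[1]])
+
+x = np.array([1.0, 3.0])
+u = F_inv(x)                      # abduction: u = (1, 1)
+x_icf = F_do_x1(u, theta=-0.5)    # interventional counterfactual
+u_icf = F_inv(x_icf)              # the construction in Eq. (13)
+recovered = F(u_icf)              # backtracking with this noise vector
+```
+
+Here `recovered` coincides with `x_icf`, illustrating that the interventional counterfactual is one particular backtracking solution.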
+
+# 6. Connection Between Our Method and Previous Approaches
+
+Our method BRACE unifies other existing methods in certain scenarios. In our optimization problem (8), setting $\lambda = 0$ simplifies the problem to Counterfactual Explanations (4), where causal relationships are disregarded, and the objective becomes finding $\mathbf{x}^{\mathrm{CF}}$ that is closest to $\mathbf{x}$ while modifying the model output.
+
+When $\lambda \to \infty$ , (8) reduces to Deep Backtracking Explanations (7), which exclusively focuses on finding $\mathbf{u}^{\mathrm{CF}}$ closest to $\mathbf{u}$ without considering the proximity between $\mathbf{x}^{\mathrm{CF}}$ and $\mathbf{x}$ , while ensuring the model's output changes.
+
+Our solution (8) can also be interpreted as a special case of Backtracking Counterfactual Explanations (6). Specifically, it can be shown that employing the backtracking conditional distribution:
+
+$$
+\begin{array}{l} \mathbb {P} _ {B} \left(\mathbf {u} ^ {\mathrm {C F}} \mid \mathbf {u}\right) \propto \exp \left(- d _ {X} \left(\mathbf {F} (\mathbf {u}), \mathbf {F} \left(\mathbf {u} ^ {\mathrm {C F}}\right)\right) \right. \tag {14} \\ \left. - \lambda \cdot d _ {U} \left(\mathbf {u}, \mathbf {u} ^ {\mathrm {C F}}\right)\right) \\ \end{array}
+$$
+
+renders (8) equivalent to Backtracking Counterfactual Explanations (6). Detailed derivations are provided in Appendix B, leveraging the theoretical framework from (Von Kugelgen et al., 2023).
+
+While the connections to Counterfactual Explanations, Deep Backtracking Explanations, and Backtracking Counterfactual Explanations are established, a significant question remains: how does our solution (8) relate to Causal Algorithmic Recourse (5)? The following theorem provides an answer.
+
+Theorem 6.1. Assume that the distance functions $d_X(\cdot, \mathbf{x})$ and $d_U(\cdot, \mathbf{u})$ are convex, $\mathbf{F}(\cdot)$ and $h(\cdot)$ are linear functions, and the cost function is given as $\mathrm{cost}(\mathcal{A}; \mathbf{x}) = d_X(\mathbf{x}^{\mathrm{CF}}, \mathbf{x})$. Then, our method BRACE outperforms Causal Algorithmic Recourse. Specifically, for a fixed distance $\alpha$ in the latent space, $d_U(\mathbf{u}^{\mathrm{CF}}, \mathbf{u}) = \alpha$, there exists a $\lambda$ such that the solution of (8) yields a counterfactual $\mathbf{x}^{\mathrm{CF}}$ at least as close to the observed input $\mathbf{x}$ as the solution of (5).
+
+Proof. We start by reformulating Causal Algorithmic Recourse (5) into a form analogous to our proposed solution (8). Using Theorem 5.1, Causal Algorithmic Recourse (5) can be rewritten as:
+
+$$
+\arg \min _ {\mathbf {x} ^ {\mathrm {C F}}, \mathcal {A}} d _ {X} \left(\mathbf {x} ^ {\mathrm {C F}}, \mathbf {x}\right)
+$$
+
+s.t. $h(\mathbf{x}^{\mathrm{CF}}) = y^{\mathrm{CF}},$
+
+$$
+\mathbf {x} ^ {\mathrm {C F}} = \mathbf {F} \left(\mathbf {u} _ {\mathcal {A}} ^ {\mathrm {I C F}}\right), \tag {15}
+$$
+
+$$
+\mathbf {u} _ {\mathcal {A}} ^ {\mathrm {I C F}} = \mathbf {F} ^ {- 1} \left(\mathbf {F} _ {\mathcal {A}} (\mathbf {u})\right),
+$$
+
+$$
+\mathbf {x} = \mathbf {F} (\mathbf {u}).
+$$
+
+The main question is whether there exists a $\lambda$ such that the optimal $\mathbf{x}^{\mathrm{CF}}$ in our method (8) coincides with the optimal $\mathbf{x}^{\mathrm{CF}}$ in (15). Suppose the optimal solution to (15) is attained for $\mathbf{u}_{\mathcal{A}^*}^{\mathrm{ICF}}$ . Let $\alpha$ represent the distance between $\mathbf{u}_{\mathcal{A}^*}^{\mathrm{ICF}}$ and $\mathbf{u}$ :
+
+$$
+d _ {U} \left(\mathbf {u} _ {\mathcal {A} ^ {*}} ^ {\mathrm {I C F}}, \mathbf {u}\right) = \alpha . \tag {16}
+$$
+
+We now define the following optimization problem:
+
+$$
+\arg \min _ {\mathbf {x} ^ {\mathrm {C F}}, \mathbf {u} ^ {\mathrm {C F}}} d _ {X} \left(\mathbf {x} ^ {\mathrm {C F}}, \mathbf {x}\right)
+$$
+
+s.t. $h(\mathbf{x}^{\mathrm{CF}}) = y^{\mathrm{CF}},$
+
+$$
+\mathbf {x} ^ {\mathrm {C F}} = \mathbf {F} \left(\mathbf {u} ^ {\mathrm {C F}}\right), \tag {17}
+$$
+
+$$
+d _ {U} \left(\mathbf {u} ^ {\mathrm {C F}}, \mathbf {u}\right) = \alpha ,
+$$
+
+$$
+\mathbf {x} = \mathbf {F} (\mathbf {u}).
+$$
+
+Let the optimal solution to (15) be $\mathbf{x}^{* \mathrm{ICF}}$ , and the optimal solution to (17) be $\mathbf{x}^{* \mathrm{BCF}}$ . Then, it follows:
+
+$$
+d _ {X} \left(\mathbf {x} ^ {* \mathrm {B C F}}, \mathbf {x}\right) \leq d _ {X} \left(\mathbf {x} ^ {* \mathrm {I C F}}, \mathbf {x}\right). \tag {18}
+$$
+
+This inequality holds because $\mathbf{u}_{\mathcal{A}^*}^{\mathrm{ICF}}$ satisfies the constraint $d_U(\mathbf{u}^{\mathrm{CF}},\mathbf{u}) = \alpha$ , while other feasible values of $\mathbf{u}^{\mathrm{CF}}$ within the same constraint may reduce the objective $d_X(\mathbf{x}^{\mathrm{CF}},\mathbf{x})$ further. Hence, the Causal Algorithmic Recourse formulation (15) may not always yield the closest $\mathbf{x}^{\mathrm{CF}}$ to $\mathbf{x}$ among all $\mathbf{u}^{\mathrm{CF}}$ satisfying the distance constraint $\alpha$ from $\mathbf{u}$ .
+
+Next, we examine whether there exists a $\lambda$ such that the optimal solution of our method (8) aligns with the optimal solution of (17). In essence, we seek a $\lambda$ such that the optimal $\mathbf{u}^{\mathrm{CF}}$ from (8) satisfies the distance constraint $d_U\left(\mathbf{u}^{\mathrm{CF}},\mathbf{u}\right) = \alpha$.
+
+To approach this, consider the following vector optimization problem:
+
+$$
+\begin{array}{l} \arg\min_{\mathbf{x}^{\mathrm{CF}}, \mathbf{u}^{\mathrm{CF}}} \quad \left(d_X\left(\mathbf{x}, \mathbf{x}^{\mathrm{CF}}\right), d_U\left(\mathbf{u}, \mathbf{u}^{\mathrm{CF}}\right)\right) \\ \text{s.t.} \quad h\left(\mathbf{x}^{\mathrm{CF}}\right) = y^{\mathrm{CF}}, \tag{19} \\ \mathbf{x}^{\mathrm{CF}} = \mathbf{F}(\mathbf{u}^{\mathrm{CF}}), \\ \mathbf{x} = \mathbf{F}(\mathbf{u}). \\ \end{array}
+$$
+
+The optimization (19) simultaneously minimizes $d_X(\mathbf{x}, \mathbf{x}^{\mathrm{CF}})$ and $d_U(\mathbf{u}, \mathbf{u}^{\mathrm{CF}})$ . However, in certain cases, reducing one term may result in an increase in the other.
+
+To resolve this, we utilize the concept of Pareto optimality (Boyd & Vandenberghe, 2004). A well-known result for convex problems is that scalarizing the objective:
+
+$$
+d _ {X} \left(\mathbf {x}, \mathbf {x} ^ {\mathrm {C F}}\right) + \lambda d _ {U} \left(\mathbf {u}, \mathbf {u} ^ {\mathrm {C F}}\right) \tag {20}
+$$
+
+yields all Pareto-optimal solutions by varying $\lambda > 0$ . Specifically, every optimal solution of the scalarized optimization corresponds to a Pareto-optimal point of the vector optimization. Moreover, since the vector optimization problem (19) is convex (from the assumptions of the theorem), all Pareto-optimal points can be achieved.
+
+Returning to our optimization, note that the solution to (17) is a Pareto-optimal point of (19) because, under the constraint $d_U(\mathbf{u}^{\mathrm{CF}},\mathbf{u}) = \alpha$, it minimizes $d_X(\mathbf{x}^{\mathrm{CF}},\mathbf{x})$. Thus, the solution cannot be further improved along the $d_{X}(\mathbf{x}^{\mathrm{CF}},\mathbf{x})$ axis.
+
+Thus, by varying $\lambda$, it is possible to identify a $\lambda$ such that the solution of our method BRACE (8) matches the solution of (17), ensuring $d_U(\mathbf{u}^{\mathrm{CF}},\mathbf{u}) = \alpha$. Consequently, as demonstrated in (18), our proposed method yields an $\mathbf{x}^{\mathrm{CF}}$ that is closer to $\mathbf{x}$ compared to Causal Algorithmic Recourse, while preserving the fixed distance $\alpha$ between the latent variables $\mathbf{u}$ and $\mathbf{u}^{\mathrm{CF}}$.
+
+The convexity of the distance functions and the linearity of $\mathbf{F}(\cdot)$ and $h(\cdot)$ are assumed primarily to guarantee, via the convexity of the vector optimization problem (19), that the relevant Pareto-optimal point is attained by some $\lambda$. The core insight does not depend on these assumptions: even without convexity of $d_{X}(\cdot,\mathbf{x})$ and $d_{U}(\cdot,\mathbf{u})$ or linearity of $\mathbf{F}(\cdot)$ and $h(\cdot)$, there is always a solution among the Pareto-optimal points of (19) that outperforms Causal Algorithmic Recourse; whenever a suitable $\lambda$ yields $d_{U}\left(\mathbf{u}^{\mathrm{CF}},\mathbf{u}\right) = \alpha$, our method provides an $\mathbf{x}^{\mathrm{CF}}$ closer to $\mathbf{x}$ while preserving the fixed latent distance $\alpha$.
+
+It is worth noting that our method is substantially more efficient computationally than Causal Algorithmic Recourse, since that approach relies on a combinatorial optimization procedure. Moreover, our method also surpasses Backtracking Counterfactual Explanations in computational efficiency, especially when dealing with a complex distribution $\mathbb{P}_B$ .
+
+# 7. Metric Selection and Optimization Approach
+
+To solve the optimization problem in Eq. (8), it is essential to define the distance metrics $d_X(\mathbf{x}, \mathbf{x}^{\mathrm{CF}})$ and $d_U(\mathbf{u}, \mathbf{u}^{\mathrm{CF}})$ . For $d_X(.,.)$ , which measures proximity in the observed space, the $\ell_1$ norm is a natural choice as it minimizes the number of modified features, making the counterfactuals more interpretable and actionable:
+
+$$
+d _ {X} \left(\mathbf {x}, \mathbf {x} ^ {\mathrm {C F}}\right) = \left\| \mathbf {x} - \mathbf {x} ^ {\mathrm {C F}} \right\| _ {1}. \tag {21}
+$$
+
+For the latent space, $d_U(.,.)$ evaluates how plausible a counterfactual is relative to the original latent representation. The $\ell_2$ norm ensures smoothness and proximity:
+
+$$
+d _ {U} \left(\mathbf {u}, \mathbf {u} ^ {\mathrm {C F}}\right) = \left\| \mathbf {u} - \mathbf {u} ^ {\mathrm {C F}} \right\| _ {2}. \tag {22}
+$$
+
+Combining these metrics instantiates (8) concretely: the $\ell_1$ term promotes sparse, interpretable changes in the observed features, while the $\ell_2$ term keeps the counterfactual latent variables close to the factual ones.
+
+Solving the optimization problem in Eq. (8), particularly for complex models such as neural networks (Katz et al., 2017) or additive tree models (Ates et al., 2021), is generally NP-hard. Gradient-based methods are effective when both the objective and the constraints are differentiable. For example, the constraints can be integrated into the objective function as penalty terms:
+
+$$
+\begin{array}{l} \arg \min _ {\mathbf {x} ^ {\mathrm {C F}}} d _ {X} (\mathbf {x}, \mathbf {x} ^ {\mathrm {C F}}) + \lambda d _ {U} \left(\mathbf {F} ^ {- 1} (\mathbf {x}), \mathbf {F} ^ {- 1} \left(\mathbf {x} ^ {\mathrm {C F}}\right)\right) \tag {23} \\ + \beta \operatorname {L o s s} \left(h (\mathbf {x} ^ {\mathrm {C F}}), y ^ {\mathrm {C F}}\right), \\ \end{array}
+$$
+
+where $\mathrm{Loss}(.)$ is a common classification loss, such as cross-entropy. To approximate solutions in practice, $\beta$ is gradually increased until the counterfactual $\mathbf{x}^{\mathrm{CF}}$ satisfies the desired output class $y^{\mathrm{CF}}$ (Szegedy et al., 2013).
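+
+A minimal sketch of this penalty scheme, assuming a toy invertible SCM and logistic classifier (all parameters below are illustrative, and a crude finite-difference gradient keeps the code short):
+
+```python
+import numpy as np
+
+# Penalty formulation of Eq. (23): fold the output constraint into the
+# objective via a cross-entropy term and grow beta until the class flips.
+w, b = np.array([1.0, 1.0]), -0.5               # assumed logistic weights
+F_inv = lambda x: np.array([x[0], x[1] - x[0]]) # toy invertible SCM
+h = lambda x: 1.0 / (1.0 + np.exp(-(x @ w + b)))
+
+x = np.array([-1.0, -1.0])                      # factual, h(x) < 0.5
+lam = 1.0
+
+def loss(x_cf, beta):
+    d_x = np.abs(x_cf - x).sum()                    # ell_1 in X (Eq. 21)
+    d_u = np.linalg.norm(F_inv(x_cf) - F_inv(x))    # ell_2 in U (Eq. 22)
+    bce = -np.log(h(x_cf) + 1e-12)                  # loss toward y_cf = 1
+    return d_x + lam * d_u + beta * bce
+
+def descend(beta, lr=0.01, steps=2000, eps=1e-5):
+    x_cf = x.copy()
+    for _ in range(steps):
+        g = np.array([(loss(x_cf + eps * e, beta) - loss(x_cf - eps * e, beta))
+                      / (2 * eps) for e in np.eye(2)])
+        x_cf = x_cf - lr * g
+    return x_cf
+
+x_cf, beta = x.copy(), 0.5
+while h(x_cf) < 0.5 and beta < 1e6:             # increase beta until flip
+    beta *= 2
+    x_cf = descend(beta)
+```
+
+Each doubling of `beta` weighs the classification term more heavily until the counterfactual crosses the boundary, mirroring the schedule described above.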
+
+
+Figure 1. Causal graph of the bank's high-risk detection model. $X_{1}$ is gender, $X_{2}$ is age, $X_{3}$ is loan amount, and $X_{4}$ is repayment duration in months. The model's output $\hat{Y}$ indicates high or low risk for loan approval.
+
+Heuristic approaches also provide practical alternatives; for instance, shortest path searches in empirical graphs (Poyiadzi et al., 2020) or expanding-sphere searches (Laugel et al., 2017) offer approximate solutions in specific scenarios.
+
+# 8. Experimental Evaluation
+
+# 8.1. Simulation Setup
+
+To evaluate the proposed method, we adopt the experiment in the Causal Algorithmic Recourse paper (Karimi et al., 2021), as it serves as a critical baseline for comparison. Since the primary focus is on assessing the interpretability of the proposed method, we use a simple model for $h(\cdot)$ . This ensures that the exact solutions to the optimization problem can be computed and aligned with our intuitive understanding of the task.
+
+We consider a model $h(\cdot)$ designed to classify individuals as high- or low-risk for loan approval. The input vector $\mathbf{X}$ is assumed to follow the given causal structure:
+
+$$
+\begin{array}{l} X _ {1} := U _ {1}, \\ X _ {2} := U _ {2}, \tag {24} \\ X _ {3} := f _ {3} \left(X _ {1}, X _ {2}\right) + U _ {3}, \\ X _ {4} := f _ {4} \left(X _ {3}\right) + U _ {4}, \\ \end{array}
+$$
+
+where the system's output is given by $\hat{Y} = h(X_1, X_2, X_3, X_4)$ . Figure 1 illustrates the causal graph associated with the problem.
+
+For this simulation, the causal graph is assumed to be known, while the functions $f_{3}(\cdot), f_{4}(\cdot)$ , and $h(\cdot)$ are estimated using real-world data from the German Credit Dataset (Hofmann, 1994). We assume $f_{3}(\cdot)$ and $f_{4}(\cdot)$ are linear functions and $h(\cdot)$ is logistic regression. Following (Peters et al., 2017), the coefficients of the causal model can be derived using linear regression when the causal functions are linear.
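+
+This regression step can be sketched as follows. Since the German Credit data is not reproduced here, the snippet draws synthetic data from a known linear SCM with the same shape as (24) and recovers the coefficients of $f_{3}(\cdot)$ and $f_{4}(\cdot)$ by least squares; all coefficients are illustrative assumptions:
+
+```python
+import numpy as np
+
+# Synthetic stand-in for the regression step: generate data from a known
+# linear SCM shaped like Eq. (24), then recover f3 and f4 by least squares.
+rng = np.random.default_rng(0)
+n = 5000
+x1 = rng.integers(0, 2, n).astype(float)           # binary "gender"
+x2 = rng.normal(35.0, 10.0, n)                     # "age"
+x3 = 1.5 * x1 + 80.0 * x2 + rng.normal(0, 50, n)   # "loan amount"
+x4 = 0.01 * x3 + rng.normal(0, 2, n)               # "duration"
+
+# f3: regress X3 on its causal parents (X1, X2), plus an intercept
+A3 = np.column_stack([x1, x2, np.ones(n)])
+coef3, *_ = np.linalg.lstsq(A3, x3, rcond=None)
+
+# f4: regress X4 on its single parent X3
+A4 = np.column_stack([x3, np.ones(n)])
+coef4, *_ = np.linalg.lstsq(A4, x4, rcond=None)
+```
+
+Because each variable is regressed only on its causal parents, the fitted coefficients recover the structural coefficients, which is the property exploited in the experiment.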
+
+The feature $X_{1}$ (gender) is one-hot encoded during logistic regression for $h(\cdot)$ and is kept fixed when generating counterfactuals. Since $X_{1}$ is categorical, modifying it is avoided, as changing gender does not provide actionable insights. Additionally, all features are normalized by their standard deviations to improve performance.
+
+Table 1. Counterfactual solutions from different methods for an individual originally classified as high-risk (x = (female, 24, $4308, 48)). Each method modifies the features to flip the prediction to low-risk.
+
+| Method | Gender | Age | Loan amount | Duration |
+| --- | --- | --- | --- | --- |
+| Original (High-risk) | female | 24 | $4308 | 48 |
+| BRACE (Our Method, λ = 1) | female | 24 | $4087 | 33.0 |
+| BRACE (Our Method, λ = 1.2) | female | 24 | $3736 | 33.3 |
+| Counterfactual Explanations (Wachter et al., 2017) | female | 24 | $4308 | 32.8 |
+| Causal Algorithmic Recourse (Karimi et al., 2021) | female | 24 | $4308 | 32.8 |
+| Deep Backtracking Explanations (Kladny et al., 2024) | female | 27.2 | $2727 | 35.7 |
+
+# 8.2. Optimization Problem and Results
+
+We consider an individual with features $\mathbf{x} =$ (female, 24, $4308, 48) classified as high-risk by the model $h(\cdot)$ . Using the causal model and the learned functions $f_{3}(\cdot)$ and $f_{4}(\cdot)$ , we derive the latent representation $\mathbf{u}$ . The following optimization problem is then formulated:
+
+$$
+\underset{\mathbf{x}^{\mathrm{CF}}, \mathbf{u}^{\mathrm{CF}}}{\arg\min} \; \sum_{i=2}^{4} \frac{\left|x_i - x_i^{\mathrm{CF}}\right|}{\sigma_i} + \lambda \sqrt{\sum_{i=2}^{4} \frac{\left(u_i - u_i^{\mathrm{CF}}\right)^2}{\sigma_i^2}}
+$$
+
+$$
+\text{s.t.} \quad h(\mathbf{x}^{\mathrm{CF}}) = \text{low-risk}, \tag{25}
+$$
+
+$$
+\begin{array}{l} x_2^{\mathrm{CF}} = u_2^{\mathrm{CF}}, \\ x_3^{\mathrm{CF}} = f_3(x_1^{\mathrm{CF}}, x_2^{\mathrm{CF}}) + u_3^{\mathrm{CF}}, \\ x_4^{\mathrm{CF}} = f_4(x_3^{\mathrm{CF}}) + u_4^{\mathrm{CF}}. \\ \end{array}
+$$
+
+We solve the optimization problem in (25) with $\lambda = 1$ and $\lambda = 1.2$ . Table 1 reports $\mathbf{x}^{\mathrm{CF}}$ from our method and other prominent approaches for comparison.
+
+As shown in Table 1, both Counterfactual Explanations and Causal Algorithmic Recourse focus solely on reducing the repayment duration, which is not actionable for the user and fails to provide meaningful guidance for future improvements. In contrast, Deep Backtracking Explanations alters all features, leading to a significant departure from the original observation and reducing local interpretability. Our approach strikes a balance by adjusting both the loan amount and the repayment duration while maintaining sparsity and interpretability, offering a more intuitive and actionable explanation for the user.
+
+The user's initial features, $\mathbf{x} =$ (female, 24, $4308, 48), suggest that a 48-month repayment schedule matches the user's repayment capacity. If we adjust this feature vector solely by reducing the repayment duration, the repayment becomes significantly more challenging for the user, thereby making the explanation less actionable.
+
To put it quantitatively, repaying \$4308 over 48 months corresponds to a monthly payment of \$89.75. Any explanation that deviates considerably from this monthly rate is less actionable. Counterfactual Explanations and Causal Algorithmic Recourse yield a monthly repayment of about \$131.3 (i.e., \$4308 divided by 32.8 months). In contrast, our solution results in a monthly repayment of \$123.8 when $\lambda = 1$ (i.e., \$4087 divided by 33 months) and \$112.2 when $\lambda = 1.2$ (i.e., \$3736 divided by 33.3 months). Thus, while Counterfactual Explanations and Causal Algorithmic Recourse increase the monthly repayment by 46.3%, our approach leads to increases of 37.8% and 25.0% for $\lambda = 1$ and $\lambda = 1.2$, respectively, making our explanations more actionable for the user.
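The arithmetic above can be checked directly from the reported values (figures are rounded as in Table 1, so recomputed percentages may differ in the last digit):

```python
# Monthly-repayment arithmetic behind the actionability comparison.
baseline = 4308 / 48                    # factual monthly payment: $89.75

methods = {
    "CE / Causal Recourse": 4308 / 32.8,
    "BRACE (lambda = 1.0)": 4087 / 33.0,
    "BRACE (lambda = 1.2)": 3736 / 33.3,
}
for name, monthly in methods.items():
    increase = 100 * (monthly / baseline - 1)
    print(f"{name}: ${monthly:.1f}/month (+{increase:.1f}%)")
```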
+
Table 2 provides the counterfactual outcomes for another example with initial features $\mathbf{x} =$ (male, 27, \$14027, 60), where similar trends across methods are observed.
+
+# 8.3. Sensitivity Analysis
+
Estimating causal functions in the input graph often involves approximations, introducing potential noise. To evaluate the robustness of our method, we add zero-mean Gaussian noise with a standard deviation of 5 to the coefficients of $f_{3}(\cdot)$ and solve the optimization problem (25) with $\lambda = 1.2$ for an individual with features $\mathbf{x} =$ (female, 24, \$4308, 48). The resulting counterfactuals are:
+
+$$
\begin{array}{l} \mathbf{x}^{\mathrm{CF}} = (\text{female}, 24, \$3839, 33.2), \\ \mathbf{x}^{\mathrm{CF}} = (\text{female}, 24, \$3572, 33.5), \tag{26} \\ \mathbf{x}^{\mathrm{CF}} = (\text{female}, 24, \$3618, 33.4). \end{array}
+$$
+
+Despite significant noise, the results remain stable, with age unchanged and explanations staying sparse. This highlights the robustness of our method, showing that even approximate causal functions produce reliable and interpretable counterfactuals.
+
+Table 2. Counterfactual solutions from different methods for an individual originally classified as high-risk.
+
| Method | Gender | Age | Loan amount | Duration |
| --- | --- | --- | --- | --- |
| Original (High-risk) | male | 27 | \$14027 | 60 |
| BRACE (Our Method, λ = 1) | male | 27 | \$13686 | 36.9 |
| BRACE (Our Method, λ = 1.2) | male | 27 | \$13149 | 37.4 |
| Counterfactual Explanations (Wachter et al., 2017) | male | 27 | \$14027 | 36.6 |
| Causal Algorithmic Recourse (Karimi et al., 2021) | male | 27 | \$14027 | 36.6 |
| Deep Backtracking Explanations (Kladny et al., 2024) | male | 31.9 | \$11599 | 41.1 |
+
+# 9. Conclusion
+
In this work, we presented BRACE, a new framework for counterfactual explanations based on backtracking counterfactuals. Our approach overcomes the limitations of interventional counterfactuals by introducing an optimization problem that generates actionable and causally consistent explanations. By solving a single unified objective parameterized by $\lambda$, BRACE recovers four established paradigms: classical Counterfactual Explanations ($\lambda = 0$), Deep Backtracking Explanations ($\lambda \to \infty$), Backtracking Counterfactual Explanations (via a specific backtracking conditional distribution), and Causal Algorithmic Recourse (under a convexity assumption). Additionally, we demonstrated that our method is both easier to understand and more computationally efficient than causal algorithmic recourse. Through simulation experiments, we verified that the proposed method produces explanations that are more intuitive for users and more practical for real-world applications.
+
+# 10. Future Work
+
+This work opens several directions for further research:
+
+Relaxing Assumptions for Connection with Causal Algorithmic Recourse: Our approach depends on convexity assumptions to establish a connection with causal algorithmic recourse. Future work could investigate alternative conditions that do not require these assumptions, allowing the theory to be applied to non-linear and non-convex models frequently found in real-world applications.
+
Testing on Complex Models: This study focused on simpler models for $h(\cdot)$ to ensure an intuitive understanding of the task. A valuable next step is to test our method on more complex models, such as deep neural networks, and compare its performance with existing state-of-the-art methods. This would help demonstrate the method's effectiveness in handling challenging real-world scenarios.
+
Improving Backtracking Counterfactual Definitions: The current definition of backtracking counterfactuals does not ensure that the noise variables $\mathbf{U}^{\mathrm{CF}}$ in the counterfactual world remain mutually independent. In SCMs, this independence is important for maintaining the causal interpretation of the noise variables. Extending the definition to enforce this independence would enhance both the theoretical consistency and practical usefulness of backtracking counterfactuals, making them more aligned with core principles of causal reasoning.
+
+# Acknowledgements
+
+This research was conducted while Pouria Fatemi and Ehsan Sharifian were affiliated with Sharif University of Technology.
+
+# Impact Statement
+
+Our work advances the field of machine learning by proposing a framework for counterfactual explanations that enhances interpretability while ensuring causal consistency and actionability. This method unifies multiple existing approaches, including counterfactual explanations, deep backtracking explanations, causal algorithmic recourse, and backtracking counterfactual explanations, while improving computational efficiency.
+
+The potential impact of our work is most relevant to high-stakes applications such as finance and healthcare, where reliable and transparent decision-making is essential. By incorporating causal reasoning into counterfactual explanations, our approach contributes to making AI-driven decisions more interpretable and aligned with real-world constraints. While our method does not introduce direct ethical concerns, its application in automated decision-making should be carefully evaluated to ensure fair and responsible use.
+
+# References
+
Ates, E., Aksar, B., Leung, V. J., and Coskun, A. K. Counterfactual explanations for multivariate time series. In 2021 International Conference on Applied Artificial Intelligence (ICAPAI), pp. 1-8. IEEE, 2021.
+Bhoi, S., Lee, M. L., Hsu, W., and Tan, N. C. Refine: a fine-grained medication recommendation system using deep learning and personalized drug interaction modeling. Advances in Neural Information Processing Systems, 36, 2024.
+Boyd, S. P. and Vandenberghe, L. Convex optimization. Cambridge university press, 2004.
+Dhurandhar, A., Chen, P.-Y., Luss, R., Tu, C.-C., Ting, P., Shanmugam, K., and Das, P. Explanations based on the missing: Towards contrastive explanations with pertinent negatives. Advances in neural information processing systems, 31, 2018.
+Dominguez-Olmedo, R., Karimi, A. H., and Scholkopf, B. On the adversarial robustness of causal algorithmic recourse. In International Conference on Machine Learning, pp. 5324-5342. PMLR, 2022.
+Dorr, C. Against counterfactual miracles. The Philosophical Review, 125(2):241-286, 2016.
+Doshi-Velez, F. and Kim, B. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608, 2017.
+Frye, C., Rowat, C., and Feige, I. Asymmetric shapley values: incorporating causal knowledge into model-agnostic explainability. Advances in Neural Information Processing Systems, 33:1229-1239, 2020.
+Heskes, T., Sijben, E., Bucur, I. G., and Claassen, T. Causal shapley values: Exploiting causal knowledge to explain individual predictions of complex models. Advances in neural information processing systems, 33:4778-4789, 2020.
+Hofmann, H. Statlog (German Credit Data). UCI Machine Learning Repository, 1994. DOI: https://doi.org/10.24432/C5NC77.
+Janzing, D., Minorics, L., and Blöbaum, P. Feature relevance quantification in explainable ai: A causal problem. In International Conference on artificial intelligence and statistics, pp. 2907-2916. PMLR, 2020.
+Jethani, N., Sudarshan, M., Aphinyanaphongs, Y., and Ranganath, R. Have we learned to explain?: How interpretability methods can learn to encode predictions in their interpretations. In International Conference on Artificial Intelligence and Statistics, pp. 1459-1467. PMLR, 2021.
Jung, Y., Kasiviswanathan, S., Tian, J., Janzing, D., Blöbaum, P., and Bareinboim, E. On measuring causal contributions via do-interventions. In International Conference on Machine Learning, pp. 10476-10501. PMLR, 2022.
+Karimi, A.-H., Barthe, G., Balle, B., and Valera, I. Model-agnostic counterfactual explanations for consequential decisions. In International conference on artificial intelligence and statistics, pp. 895-905. PMLR, 2020a.
Karimi, A.-H., von Kügelgen, J., Schölkopf, B., and Valera, I. Algorithmic recourse under imperfect causal knowledge: a probabilistic approach. Advances in neural information processing systems, 33:265-277, 2020b.
+Karimi, A.-H., Schölkopf, B., and Valera, I. Algorithmic recourse: from counterfactual explanations to interventions. In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency, pp. 353-362, 2021.
+Karimi, A.-H., Barthe, G., Schölkopf, B., and Valera, I. A survey of algorithmic recourse: contrastive explanations and consequential recommendations. ACM Computing Surveys, 55(5):1-29, 2022.
+Karimi, A.-H., Muandet, K., Kornblith, S., Schölkopf, B., and Kim, B. On the relationship between explanation and prediction: A causal view. In XAI in Action: Past, Present, and Future Applications, 2023. URL https://openreview.net/forum?id=ag1CpSUjPS.
Katz, G., Barrett, C., Dill, D. L., Julian, K., and Kochenderfer, M. J. Reluplex: An efficient SMT solver for verifying deep neural networks. In Computer Aided Verification: 29th International Conference, CAV 2017, Heidelberg, Germany, July 24-28, 2017, Proceedings, Part I 30, pp. 97-117. Springer, 2017.
+Kim, B., Khanna, R., and Koyejo, O. O. Examples are not enough, learn to criticize! criticism for interpretability. Advances in neural information processing systems, 29, 2016.
Kladny, K.-R., von Kügelgen, J., Schölkopf, B., and Muehlebach, M. Deep backtracking counterfactuals for causally compliant explanations. Transactions on Machine Learning Research, 2024.
+Laugel, T., Lesot, M.-J., Marsala, C., Renard, X., and Detyniecki, M. Inverse classification for comparison-based interpretability in machine learning. arXiv preprint arXiv:1712.08443, 2017.
+Lundberg, S. M. and Lee, S.-I. A unified approach to interpreting model predictions. Advances in neural information processing systems, 30, 2017.
+Miller, T. Explanation in artificial intelligence: Insights from the social sciences. Artificial intelligence, 267:1-38, 2019.
+
Molnar, C. Interpretable machine learning. Lulu.com, 2020.
+Nasr-Esfahany, A., Alizadeh, M., and Shah, D. Counterfactual identifiability of bijective causal models. In International Conference on Machine Learning, pp. 25733-25754. PMLR, 2023.
+Pearl, J. Causality. Cambridge university press, 2009.
+Peters, J., Janzing, D., and Schölkopf, B. Elements of causal inference: foundations and learning algorithms. MIT press, 2017.
+Poyiadzi, R., Sokol, K., Santos-Rodriguez, R., De Bie, T., and Flach, P. Face: feasible and actionable counterfactual explanations. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pp. 344-350, 2020.
Ribeiro, M. T., Singh, S., and Guestrin, C. "Why should I trust you?" Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pp. 1135-1144, 2016.
+Sancaktar, C., Blaes, S., and Martius, G. Curious exploration via structured world models yields zero-shot object manipulation. Advances in Neural Information Processing Systems, 35:24170-24183, 2022.
+Slack, D., Hilgard, A., Lakkaraju, H., and Singh, S. Counterfactual explanations can be manipulated. Advances in neural information processing systems, 34:62-75, 2021.
+Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., and Fergus, R. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
Von Kügelgen, J., Karimi, A.-H., Bhatt, U., Valera, I., Weller, A., and Schölkopf, B. On the fairness of causal algorithmic recourse. In Proceedings of the AAAI conference on artificial intelligence, volume 36, pp. 9584-9594, 2022.
+Von Kügelgen, J., Mohamed, A., and Beckers, S. Backtracking counterfactuals. In Conference on Causal Learning and Reasoning, pp. 177-196. PMLR, 2023.
+Wachter, S., Mittelstadt, B., and Russell, C. Counterfactual explanations without opening the black box: Automated decisions and the gdpr. Harv. JL & Tech., 31:841, 2017.
+Wang, J., Wiens, J., and Lundberg, S. Shapley flow: A graph-based approach to interpreting model predictions. In International Conference on Artificial Intelligence and Statistics, pp. 721-729. PMLR, 2021.
+
+Xie, Q., Han, W., Zhang, X., Lai, Y., Peng, M., Lopez-Lira, A., and Huang, J. Pixiu: A comprehensive benchmark, instruction dataset and large language model for finance. Advances in Neural Information Processing Systems, 36, 2024.
Zhang, J. and Bareinboim, E. Fairness in decision-making—the causal explanation formula. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.
+
+# A. Formal Definition of Interventional and Backtracking Counterfactuals
+
+To formally understand the differences and computation processes underlying interventional and backtracking counterfactuals, we outline their respective definitions and procedural steps below.
+
+# A.1. Interventional Counterfactuals
+
+1. Abduction: Update the distribution of the noise variables $\mathbf{U}$ in the causal model from $P_{\mathbf{U}}$ to the posterior distribution $P_{\mathbf{U}|\mathbf{X} = \mathbf{x}}$ , using the observed factual data $\mathbf{x}$ .
+2. Action: Perform a hard intervention $do(X_{i} \coloneqq x_{i}^{\mathrm{CF}})$ for $i \in \mathcal{A}$ , modifying the structural equations of the causal model. Denote the modified structural equations as $\mathbf{S}^{\mathrm{CF}}$ , while retaining the original equations $f_{i}^{\mathrm{CF}} = f_{i}$ for $i \notin \mathcal{A}$ .
+3. Prediction: Using the updated causal model $\mathcal{C}^{\mathrm{CF}} = (\mathbf{S}^{\mathrm{CF}}, P_{\mathbf{U}|\mathbf{X} = \mathbf{x}})$ , compute the distribution over the desired counterfactual outcomes $\mathbf{Y}^{\mathrm{CF}}$ .
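The three steps can be illustrated on a toy linear additive-noise SCM; the model below is purely illustrative and not one used in the paper:

```python
import numpy as np

# Toy additive-noise SCM:
# X1 := U1,  X2 := 2*X1 + U2,  X3 := X2 - X1 + U3.
def forward(u, interventions=None):
    """Evaluate the structural equations, optionally with hard interventions."""
    interventions = interventions or {}
    x = np.empty(3)
    x[0] = interventions.get(0, u[0])
    x[1] = interventions.get(1, 2 * x[0] + u[1])
    x[2] = interventions.get(2, x[1] - x[0] + u[2])
    return x

x_factual = np.array([1.0, 3.0, 2.5])

# 1. Abduction: invert the structural equations to recover the noise.
u = np.array([x_factual[0],
              x_factual[1] - 2 * x_factual[0],
              x_factual[2] - x_factual[1] + x_factual[0]])

# 2. Action: hard intervention do(X1 := 2).
# 3. Prediction: push the recovered noise through the modified model.
x_cf = forward(u, interventions={0: 2.0})
print(x_cf)   # downstream X2, X3 respond to the intervention
```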
+
+# A.2. Backtracking Counterfactuals
+
+1. Cross-World Abduction: Update the joint distribution $\mathbb{P}_B(\mathbf{U}^{\mathrm{CF}},\mathbf{U}) = \mathbb{P}(\mathbf{U})\mathbb{P}_B(\mathbf{U}^{\mathrm{CF}}\mid \mathbf{U})$ using the variables $(\mathbf{x}_{\mathcal{A}}^{\mathrm{CF}},\mathbf{x})$ to obtain the posterior distribution $\mathbb{P}_B(\mathbf{U}^{\mathrm{CF}},\mathbf{U}\mid \mathbf{x}_{\mathcal{A}}^{\mathrm{CF}},\mathbf{x})$ :
+
+$$
+\mathbb {P} _ {B} \left(\mathbf {u} ^ {\mathrm {C F}}, \mathbf {u} \mid \mathbf {x} _ {\mathcal {A}} ^ {\mathrm {C F}}, \mathbf {x}\right) = \frac {\mathbb {P} _ {B} \left(\mathbf {u} ^ {\mathrm {C F}} , \mathbf {u}\right) 1 \left\{\mathbf {F} _ {\mathcal {A}} \left(\mathbf {u} ^ {\mathrm {C F}}\right) = \mathbf {x} _ {\mathcal {A}} ^ {\mathrm {C F}} \right\} 1 \left\{\mathbf {F} (\mathbf {u}) = \mathbf {x} \right\}}{\mathbb {P} _ {B} \left(\mathbf {x} _ {\mathcal {A}} ^ {\mathrm {C F}}, \mathbf {x}\right)}. \tag {27}
+$$
+
where
+
+$$
+\mathbb {P} _ {B} \left(\mathbf {x} _ {\mathcal {A}} ^ {\mathrm {C F}}, \mathbf {x}\right) = \int \mathbb {P} _ {B} \left(\mathbf {u} ^ {\mathrm {C F}}, \mathbf {u}\right) 1 \left\{\mathbf {F} _ {\mathcal {A}} \left(\mathbf {u} ^ {\mathrm {C F}}\right) = \mathbf {x} _ {\mathcal {A}} ^ {\mathrm {C F}} \right\} 1 \left\{\mathbf {F} (\mathbf {u}) = \mathbf {x} \right\} d \mathbf {u} d \mathbf {u} ^ {\mathrm {C F}}. \tag {28}
+$$
+
+Calculating $\mathbb{P}_B(\mathbf{x}_{\mathcal{A}}^{\mathrm{CF}},\mathbf{x})$ becomes computationally challenging for complex $\mathbb{P}_B(\mathbf{U}^{\mathrm{CF}},\mathbf{U})$ distributions, as we must integrate over all values of this distribution.
+
2. Marginalization: Marginalize over $\mathbf{U}$ to compute the posterior distribution $\mathbb{P}_B(\mathbf{U}^{\mathrm{CF}}\mid \mathbf{x}_{\mathcal{A}}^{\mathrm{CF}},\mathbf{x})$:
+
+$$
+\mathbb {P} _ {B} \left(\mathbf {u} ^ {\mathrm {C F}} \mid \mathbf {x} _ {\mathcal {A}} ^ {\mathrm {C F}}, \mathbf {x}\right) = \int \mathbb {P} _ {B} \left(\mathbf {u} ^ {\mathrm {C F}}, \mathbf {u} \mid \mathbf {x} _ {\mathcal {A}} ^ {\mathrm {C F}}, \mathbf {x}\right) d \mathbf {u}. \tag {29}
+$$
+
+3. Prediction: Using the updated causal graph with noise distribution $\mathbb{P}_B(\mathbf{U}^{\mathrm{CF}}\mid \mathbf{x}_A^{\mathrm{CF}},\mathbf{x})$ , compute the probability of the desired counterfactual event:
+
+$$
+\mathbb {P} _ {B} \left(\mathbf {y} ^ {\mathrm {C F}} \mid \mathbf {x} _ {\mathcal {A}} ^ {\mathrm {C F}}, \mathbf {x}\right) = \int \mathbb {P} _ {B} \left(\mathbf {u} ^ {\mathrm {C F}} \mid \mathbf {x} _ {\mathcal {A}} ^ {\mathrm {C F}}, \mathbf {x}\right) 1 \left\{\mathbf {F} \left(\mathbf {u} ^ {\mathrm {C F}}\right) = \mathbf {y} ^ {\mathrm {C F}} \right\} d \mathbf {u} ^ {\mathrm {C F}}. \tag {30}
+$$
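For a discrete SCM, the three steps above can be carried out exactly by enumeration. The toy binary model and the distance-based backtracking kernel below are illustrative choices (the kernel follows the form later used in (37)), not models from the paper:

```python
import itertools
import math

# Toy binary SCM: X1 := U1, X2 := X1 XOR U2, U1 ~ Bern(0.5), U2 ~ Bern(0.1).
def F(u):
    x1 = u[0]
    x2 = x1 ^ u[1]
    return (x1, x2)

p_u = {u: 0.5 * (0.9 if u[1] == 0 else 0.1)
       for u in itertools.product([0, 1], repeat=2)}

lam = 1.0
def hamming(a, b):
    return sum(ai != bi for ai, bi in zip(a, b))

def kernel(u, u_cf):
    # Distance-based backtracking conditional, in the spirit of (37).
    return math.exp(-hamming(F(u), F(u_cf)) - lam * hamming(u, u_cf))

x = (1, 1)        # factual observation
x2_cf = 0         # antecedent: X2^CF = 0, i.e. A = {2}

# Step 1, cross-world abduction (27)-(28): joint posterior by enumeration.
joint = {}
for u in p_u:
    for u_cf in p_u:
        if F(u) == x and F(u_cf)[1] == x2_cf:
            joint[(u, u_cf)] = p_u[u] * kernel(u, u_cf)
Z = sum(joint.values())
joint = {k: v / Z for k, v in joint.items()}

# Steps 2-3, marginalization (29) and prediction (30): law of X^CF.
p_x_cf = {}
for (u, u_cf), p in joint.items():
    x_cf = F(u_cf)
    p_x_cf[x_cf] = p_x_cf.get(x_cf, 0.0) + p
print(p_x_cf)     # mass concentrates on the counterfactual closest to x
```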
+
+# B. Relation of Our Solution to Backtracking Counterfactual Explanations
+
+We aim to demonstrate that our solution (8) can be connected to the backtracking counterfactual explanations framework presented in (Von Kugelgen et al., 2023), which is formulated as the optimization problem (6), by considering a specific choice of $\mathbb{P}_B(\mathbf{U}^{\mathrm{CF}}\mid \mathbf{U})$ . This connection is established by following the three steps of backtracking counterfactual computation:
+
1. Cross-World Abduction: Compute the posterior distribution of the latent variables in the causal graph. Given that the function $\mathbf{F}(\cdot)$ is invertible, we have:
+
+$$
\begin{array}{ll} \mathbb{P}_B\left(\mathbf{u}^{\mathrm{CF}}, \mathbf{u} \mid y^{\mathrm{CF}}, \mathbf{x}\right) = \mathbb{P}_B\left(\mathbf{u}^{\mathrm{CF}} \mid \mathbf{u}, y^{\mathrm{CF}}, \mathbf{x}\right) \mathbb{P}_B\left(\mathbf{u} \mid y^{\mathrm{CF}}, \mathbf{x}\right) & (31) \\ \qquad = \mathbb{P}_B\left(\mathbf{u}^{\mathrm{CF}} \mid \mathbf{u}, y^{\mathrm{CF}}\right) \mathbb{P}_B(\mathbf{u} \mid \mathbf{x}) & (32) \\ \qquad = \mathbb{P}_B\left(\mathbf{u}^{\mathrm{CF}} \mid \mathbf{u}, y^{\mathrm{CF}}\right) 1\left\{\mathbf{F}^{-1}(\mathbf{x}) = \mathbf{u}\right\}. & (33) \end{array}
+$$
+
+2. Marginalization: Compute the marginal posterior distribution over $\mathbf{u}^{\mathrm{CF}}$ . Since the entire probability mass is concentrated at the point $\mathbf{u} = \mathbf{F}^{-1}(\mathbf{x})$ , we have:
+
+$$
\begin{array}{ll} \mathbb{P}_B\left(\mathbf{u}^{\mathrm{CF}} \mid y^{\mathrm{CF}}, \mathbf{x}\right) = \mathbb{P}_B\left(\mathbf{u}^{\mathrm{CF}}, \mathbf{u} = \mathbf{F}^{-1}(\mathbf{x}) \mid y^{\mathrm{CF}}, \mathbf{x}\right) & (34) \\ \qquad = \mathbb{P}_B\left(\mathbf{u}^{\mathrm{CF}} \mid \mathbf{F}^{-1}(\mathbf{x}), y^{\mathrm{CF}}\right). & (35) \end{array}
+$$
+
3. Prediction: Compute the distribution of $\mathbf{X}^{\mathrm{CF}}\mid y^{\mathrm{CF}},\mathbf{x}$. Since $\mathbf{F}(\cdot)$ is deterministic, we have:
+
+$$
+\mathbf {u} ^ {\mathrm {C F}} \sim \mathbf {U} ^ {\mathrm {C F}} \mid \mathbf {F} ^ {- 1} (\mathbf {x}), y ^ {\mathrm {C F}}, \quad \mathbf {x} ^ {\mathrm {C F}} = \mathbf {F} \left(\mathbf {u} ^ {\mathrm {C F}}\right). \tag {36}
+$$
+
+Now, consider a specific choice for the backtracking conditional distribution:
+
+$$
+\mathbb {P} _ {B} \left(\mathbf {u} ^ {\mathrm {C F}} \mid \mathbf {u}\right) \propto \exp \left\{- d _ {X} \left(\mathbf {F} (\mathbf {u}), \mathbf {F} \left(\mathbf {u} ^ {\mathrm {C F}}\right)\right) - \lambda d _ {U} \left(\mathbf {u}, \mathbf {u} ^ {\mathrm {C F}}\right) \right\}. \tag {37}
+$$
+
+Substituting this into the posterior distribution of $\mathbf{U}^{\mathrm{CF}}\mid \mathbf{F}^{-1}(\mathbf{x}),y^{\mathrm{CF}}$ , we obtain:
+
+$$
\mathbf{U}^{\mathrm{CF}} \mid \mathbf{F}^{-1}(\mathbf{x}), y^{\mathrm{CF}} \propto \left\{ \begin{array}{ll} \exp\left\{- d_X\left(\mathbf{x}, \mathbf{F}\left(\mathbf{u}^{\mathrm{CF}}\right)\right) - \lambda d_U\left(\mathbf{F}^{-1}(\mathbf{x}), \mathbf{u}^{\mathrm{CF}}\right)\right\}, & \text{if } h\left(\mathbf{x}^{\mathrm{CF}}\right) = y^{\mathrm{CF}}, \\ 0, & \text{otherwise.} \end{array} \right. \tag{38}
+$$
+
+Taking the logarithm on both sides, we have:
+
+$$
\log \mathbb{P}_B\left(\mathbf{u}^{\mathrm{CF}} \mid \mathbf{F}^{-1}(\mathbf{x}), y^{\mathrm{CF}}\right) \propto \left\{ \begin{array}{ll} - d_X\left(\mathbf{x}, \mathbf{F}\left(\mathbf{u}^{\mathrm{CF}}\right)\right) - \lambda d_U\left(\mathbf{F}^{-1}(\mathbf{x}), \mathbf{u}^{\mathrm{CF}}\right), & \text{if } h\left(\mathbf{x}^{\mathrm{CF}}\right) = y^{\mathrm{CF}}, \\ -\infty, & \text{otherwise.} \end{array} \right. \tag{39}
+$$
+
+Thus, we have:
+
+$$
\begin{array}{l} \arg\max_{\mathbf{u}^{\mathrm{CF}}} \log \mathbb{P}_B\left(\mathbf{u}^{\mathrm{CF}} \mid \mathbf{F}^{-1}(\mathbf{x}), y^{\mathrm{CF}}\right) \equiv \arg\min_{\mathbf{u}^{\mathrm{CF}}} d_X\left(\mathbf{x}, \mathbf{F}(\mathbf{u}^{\mathrm{CF}})\right) + \lambda d_U\left(\mathbf{F}^{-1}(\mathbf{x}), \mathbf{u}^{\mathrm{CF}}\right), \\ \text{s.t.} \quad h\left(\mathbf{x}^{\mathrm{CF}}\right) = y^{\mathrm{CF}}. \tag{40} \end{array}
+$$
+
+As shown in (40), the optimization problem (10) aligns with backtracking counterfactual explanations (6). Therefore, our solution provides a valid interpretation based on backtracking counterfactuals.
\ No newline at end of file
diff --git a/anewapproachtobacktrackingcounterfactualexplanationsaunifiedcausalframeworkforefficientmodelinterpretability/images.zip b/anewapproachtobacktrackingcounterfactualexplanationsaunifiedcausalframeworkforefficientmodelinterpretability/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..97010f5f6617e28d4b623f51731957e4b7b7a5cc
--- /dev/null
+++ b/anewapproachtobacktrackingcounterfactualexplanationsaunifiedcausalframeworkforefficientmodelinterpretability/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:65a716b82b2b9b980a99c62321c72d1a67afb91393570c19e5c62784435a01cc
+size 427431
diff --git a/anewapproachtobacktrackingcounterfactualexplanationsaunifiedcausalframeworkforefficientmodelinterpretability/layout.json b/anewapproachtobacktrackingcounterfactualexplanationsaunifiedcausalframeworkforefficientmodelinterpretability/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..d96d0572ac98622690178520613c54b0dd5b893f
--- /dev/null
+++ b/anewapproachtobacktrackingcounterfactualexplanationsaunifiedcausalframeworkforefficientmodelinterpretability/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e5a8e5c18d5a4c63e812864de0c6538aa859a41be0619e294a42bd6f78ccdd1b
+size 689752
diff --git a/anewconcentrationinequalityforsamplingwithoutreplacementanditsapplicationfortransductivelearning/5c31f94d-a1a1-4076-9e0f-373d2dc59475_content_list.json b/anewconcentrationinequalityforsamplingwithoutreplacementanditsapplicationfortransductivelearning/5c31f94d-a1a1-4076-9e0f-373d2dc59475_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..b8d167d7834c3aba3f535c697e562e8b9b242aac
--- /dev/null
+++ b/anewconcentrationinequalityforsamplingwithoutreplacementanditsapplicationfortransductivelearning/5c31f94d-a1a1-4076-9e0f-373d2dc59475_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2672ec37759d8bdd092c21e87fc1b96cb7dd9d603f596839e0eb94b7147a4c55
+size 226701
diff --git a/anewconcentrationinequalityforsamplingwithoutreplacementanditsapplicationfortransductivelearning/5c31f94d-a1a1-4076-9e0f-373d2dc59475_model.json b/anewconcentrationinequalityforsamplingwithoutreplacementanditsapplicationfortransductivelearning/5c31f94d-a1a1-4076-9e0f-373d2dc59475_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..2f6433c5da66a59d4e188d237190939a7bb05e0c
--- /dev/null
+++ b/anewconcentrationinequalityforsamplingwithoutreplacementanditsapplicationfortransductivelearning/5c31f94d-a1a1-4076-9e0f-373d2dc59475_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:264310c9782d4b716eebdcb500bcbbe42d652fae2423bcb35a5e0ff1de26f8c4
+size 253818
diff --git a/anewconcentrationinequalityforsamplingwithoutreplacementanditsapplicationfortransductivelearning/5c31f94d-a1a1-4076-9e0f-373d2dc59475_origin.pdf b/anewconcentrationinequalityforsamplingwithoutreplacementanditsapplicationfortransductivelearning/5c31f94d-a1a1-4076-9e0f-373d2dc59475_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..0ae2def165c6216918103c85de24c134d17fc582
--- /dev/null
+++ b/anewconcentrationinequalityforsamplingwithoutreplacementanditsapplicationfortransductivelearning/5c31f94d-a1a1-4076-9e0f-373d2dc59475_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fe64a82906b4b5977ee5ff1998b54b661a2b4ee7f3051942359c22ed8ed7f2bb
+size 611819
diff --git a/anewconcentrationinequalityforsamplingwithoutreplacementanditsapplicationfortransductivelearning/full.md b/anewconcentrationinequalityforsamplingwithoutreplacementanditsapplicationfortransductivelearning/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..7db9dcf3313c9c886c64777ca51019222b2c2cec
--- /dev/null
+++ b/anewconcentrationinequalityforsamplingwithoutreplacementanditsapplicationfortransductivelearning/full.md
@@ -0,0 +1,1205 @@
+# A New Concentration Inequality for Sampling Without Replacement and Its Application for Transductive Learning
+
+Yingzhen Yang
+
+# Abstract
+
We introduce a new tool, Transductive Local Complexity (TLC), to analyze the generalization performance of transductive learning methods and to motivate new transductive learning algorithms. Our work extends the idea of the popular Local Rademacher Complexity (LRC) (Bartlett et al., 2005) to the transductive setting, with considerable and novel changes compared to the analysis of typical LRC methods in the inductive setting. While LRC has been widely used as a powerful tool in the analysis of inductive models, yielding sharp generalization bounds for classification and minimax rates for nonparametric regression, it has remained an open problem whether a localized Rademacher complexity based tool can be designed for transductive learning and yield a sharp bound consistent with the inductive excess risk bound of LRC (Bartlett et al., 2005). TLC gives an affirmative answer to this open problem. Similar to the development of LRC (Bartlett & Mendelson, 2003), we build TLC by first establishing a novel and sharp concentration inequality for the supremum of the empirical process measuring the gap between test and training loss in the setting of sampling uniformly without replacement. A peeling strategy and a new surrogate variance operator are then used to derive an excess risk bound in the transductive setting that is consistent with the classical LRC based excess risk bound in the inductive setting. As an application of TLC, we analyze the Transductive Kernel Learning (TKL) model and derive a sharper excess risk bound than the current state-of-the-art (Tolstikhin et al., 2014). As a result of independent interest, the concentration inequality for the test-train process is used to derive a sharp concentration inequality for the general supremum of an empirical process involving random variables sampled uniformly without replacement, with comparison to existing concentration inequalities.
+
+# 1. Introduction
+
We study transductive learning in this paper, where the learner has access to both labeled training data and unlabeled test data, and the task is to predict the labels of the test data. Obtaining a tight generalization bound for transductive learning is an important problem in statistical learning theory. Tools for inductive learning, such as Rademacher complexity and VC dimension, have been used for transductive learning, including empirical risk minimization, transductive regression, and transductive classification (Vapnik, 1982; 1998; Cortes & Mohri, 2006; El-Yaniv & Pechyony, 2009). On the other hand, it is important to employ a localized version of Rademacher complexity, such as Local Rademacher Complexity (LRC) (Bartlett et al., 2005), to obtain sharper generalization bounds for transductive learning, as in (Tolstikhin et al., 2014).
+
The classical work on LRC (Bartlett et al., 2005) presents the following sharp bound for the excess risk of the empirical risk minimizer $\hat{f}$ in inductive learning: for every $x > 0$, with probability at least $1 - \exp(-x)$,
+
$$
\text{Excess Risk of } \widehat{f} \leq \Theta\left\{\text{Fixed Point of the Sub-Root Functions for Certain Empirical Process} + \frac{x}{N}\right\}, \tag{1}
$$
+
+$^{1}$ School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ 85281, USA. Correspondence to: Yingzhen Yang .
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+where $\Theta$ only hides a constant factor, and $N$ is the size of the training data. Given the fact that LRC is capable of achieving various minimax rates for M-estimators in tasks such as nonparametric regression in the inductive regime, we propose to solve the following interesting and important question for LRC based transductive learning:
+
+Can we have a sharp LRC based generalization bound for the excess risk of transductive learning as that for the inductive setting?
+
The most relevant result addressing the above open problem, to the best of our knowledge, is presented in (Tolstikhin et al., 2014, Corollary 14), where the excess risk bound is given by the following inequality, which holds with high probability:
+
+$$
\text{Excess Risk of } \widehat{f} \leq \Theta\left(\frac{n}{u} r_m^* + \frac{n}{m} r_u^* + \frac{1}{m} + \frac{1}{u}\right). \tag{2}
+$$
+
Here $r_m^*$ and $r_u^*$ are the fixed points of upper bounds for certain empirical processes, and $m, u$ are the sizes of the training data and the test data. It is remarked that the above bound may diverge due to the undesirable factors of $n/m$ and $n/u$ in front of the fixed points. When $m$ or $u$ grows at a much slower rate than $n$ (with $n = u + m$), $n/m \cdot r_u^* + n/u \cdot r_m^*$ may not converge to 0. An example for standard transductive kernel learning is given in Section 4. As a result, there is a remarkable difference between the current state-of-the-art excess risk bound (2) in the transductive setting and the excess risk bound (1) in the inductive setting, where the latter always converges to 0 under standard learning models. We note that the excess risk bound in (Tolstikhin et al., 2014, Corollary 13) still diverges when $m = o(\sqrt{n})$ or $u = o(\sqrt{n})$ as $m, u \to \infty$.
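The divergence can be seen numerically. Assume, purely for illustration, that the fixed points scale as $r_k^* \asymp 1/\sqrt{k}$ in the sample size $k$ (a typical slow rate, not a result from the paper) and let $m = \sqrt{n}$; then the shape of bound (2) stays bounded away from 0 while the shape of bound (3) vanishes:

```python
import math

# Illustrative fixed-point rates: r*_k ~ 1/sqrt(k) (assumption).
for n in [10**4, 10**6, 10**8]:
    m = int(math.sqrt(n))          # training set grows much slower than n
    u_ = n - m                     # test set size
    r_m, r_u = 1 / math.sqrt(m), 1 / math.sqrt(u_)
    bound2 = (n / u_) * r_m + (n / m) * r_u   # shape of bound (2)
    bound3 = r_u + r_m                        # shape of bound (3)
    print(f"n={n:>9}: bound(2) ~ {bound2:.3f}   bound(3) ~ {bound3:.4f}")
```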
+
Our main result is the following sharp bound for the excess risk, holding with the same high probability as the inductive bound (1):
+
$$
\text{Excess Risk of } \widehat{f} \leq \Theta\left(r_u + r_m + r^* + \frac{1}{u} + \frac{1}{m} + \frac{x}{\min\{u, m\}}\right). \tag{3}
$$
+
+Here $r_m, r_u, r^*$ are the fixed points of upper bounds for certain empirical processes, all of which converge to 0 at a fast rate, as in the popular inductive learning models. As a result of the sharp excess risk bound (3), we give an affirmative answer to the above open problem.
+
+# 1.1. Summary of Main Results
+
+Our main results are summarized as follows. This summary also features a high-level description of the ideas we have developed to obtain the detailed technical results in Section 3.
+
+First, we present the first sharp bound for the excess risk of the empirical minimizer for transductive learning using a local complexity based method inspired by LRC (Bartlett et al., 2005), and this bound (3) is consistent with the existing sharp excess risk bound (1) for inductive learning. Two novel technical elements are proposed to establish this sharp bound: (1) Transductive Local Complexity (TLC), which yields a particularly sharp bound for transductive learning using the peeling strategy on the function class with a new surrogate variance operator; (2) a novel and sharp concentration inequality for the supremum of the empirical process given by the difference between the test loss and the training loss, that is, $\sup_{h\in \mathcal{H}}(\mathcal{U}_h^u -\mathcal{L}_h^m)$ , where $\mathcal{H}$ is a function class and $\mathcal{U}_h^u,\mathcal{L}_h^m$ are the test loss and the training loss associated with the predictor $h\in \mathcal{H}$ . We refer to this empirical process as the test-train process in the sequel. It is remarked that the existing local complexity based transductive learning method (Tolstikhin et al., 2014) is based on bounds for the supremum of the empirical process given by the difference between the training or test loss and the population loss, that is, $\sup_{h\in \mathcal{H}}\mathcal{U}_h^u -\mathcal{L}_n(h)$ or $\sup_{h\in \mathcal{H}}\mathcal{L}_h^m -\mathcal{L}_n(h)$ , where $\mathcal{L}_n(h)$ is the average loss of $h$ on the entire data. Our novel concentration inequality for the test-train process, presented in Theorem 3.1, is derived using new techniques: a novel and interesting property of the test-train process involving random variables sampled uniformly without replacement, and two applications of the exponential version of the Efron-Stein inequality ((Boucheron et al., 2003, Theorem 2)) to bound the variance of the test-train process. As an application of our sharp excess risk bound for generic transductive learning, we derive a sharp excess risk bound for transductive kernel learning in Theorem 4.1 in Section 4, which is sharper than the current state of the art (Tolstikhin et al., 2014).
+
+Second, as a result of independent interest, we derive a sharp concentration inequality for the supremum of a general empirical process involving random variables (RVs) sampled uniformly without replacement in Theorem 5.1 in Section 5. Our new concentration inequality is sharper than the two versions of the concentration inequality in (Tolstikhin et al., 2014), and this result is based on our new concentration inequality for the test-train process introduced above.
+
+It is worthwhile to mention that concentration inequalities for sampling without replacement have been actively studied in the literature (Bardenet & Maillard, 2015; Tolstikhin, 2017), including those on the multislice, which are based on modified log-Sobolev inequalities (Sambale & Sinulis, 2022). Compared to (Tolstikhin, 2017), our bound in Theorem 5.1 is sharper, as argued in Section 5. Furthermore, in contrast with our results, the supremum of empirical processes under sampling without replacement is not addressed in (Bardenet & Maillard, 2015).
+
+# 1.2. Notations
+
+We use bold letters for matrices and vectors, and regular lowercase letters for scalars throughout this paper. A bold letter with a single subscript indicates the corresponding column of a matrix, e.g., $\mathbf{A}_i$ is the $i$ -th column of matrix $\mathbf{A}$ , and a bold letter with two subscripts indicates the corresponding element of a matrix. We put an arrow on top of a letter with a subscript if it denotes a vector, e.g., $\vec{\mathbf{x}}_i$ denotes the $i$ -th training feature. We also use $\mathbf{Z}(i)$ to denote the $i$ -th element of a vector $\mathbf{Z}$ , and $\mathbf{Z}(i:j)$ denotes the vector formed by the elements of $\mathbf{Z}$ with indices between $i$ and $j$ inclusively. $\mathrm{Span}(\mathbf{A})$ is the column space of matrix $\mathbf{A}$ . $\|\cdot\|_F$ and $\|\cdot\|_p$ denote the Frobenius norm and the vector $\ell^p$ -norm or the matrix $p$ -norm. $\mathrm{Var}[\cdot]$ denotes the variance of a random variable. $\mathbf{I}_n$ is the $n\times n$ identity matrix. $\mathbb{I}_{\{E\}}$ is an indicator function which takes the value 1 if event $E$ happens, and 0 otherwise. The complement of a set $A$ is denoted by $\overline{A}$ , and $|A|$ is the cardinality of $A$ . $\mathrm{tr}(\cdot)$ is the trace of a matrix. We denote the unit sphere in $d$ -dimensional Euclidean space by $\mathbb{S}^{d-1} := \{\mathbf{x} : \mathbf{x} \in \mathbb{R}^d, \| \mathbf{x}\|_2 = 1\}$ . Let $L^2(\mathcal{X},\mu^{(P)})$ denote the space of square-integrable functions on $\mathbb{S}^{d-1}$ with probability measure $\mu^{(P)}$ , where the inner product and the norm are defined as $\langle f,g\rangle_{\mu^{(P)}} := \int_{\mathbb{S}^{d-1}}f(x)g(x)\mathrm{d}\mu^{(P)}(x)$ and $\|f\|_{\mu^{(P)}}^2 := \int_{\mathbb{S}^{d-1}}f^2(x)\mathrm{d}\mu^{(P)}(x) < \infty$ . $\mathbb{P}_{\mathcal{A}}$ is the orthogonal projection onto a linear space $\mathcal{A}$ , and $\mathcal{A}^\perp$ is the linear subspace orthogonal to $\mathcal{A}$ . $\langle \cdot,\cdot\rangle_{\mathcal{H}}$ and $\|\cdot\|_{\mathcal{H}}$ denote the inner product and the norm in the Hilbert space $\mathcal{H}$ . We write $a = \mathcal{O}(b)$ or $a \lesssim b$ if there exists a constant $C > 0$ such that $a \leq C b$ , and $\tilde{\mathcal{O}}$ indicates that there are specific requirements on the constants in the $\mathcal{O}$ notation. $a = o(b)$ and $a = \omega(b)$ indicate that $\lim |a / b| = 0$ and $\lim |a / b| = \infty$ , respectively. $a \asymp b$ or $a = \Theta(b)$ denotes that there exist constants $c_1,c_2 > 0$ such that $c_1b \leq a \leq c_2b$ . $\binom{m}{k}$ for $1 \leq k \leq m$ is the binomial coefficient, that is, the number of ways of selecting $k$ different objects from $m$ objects. $\mathbb{R}^+$ is the set of all non-negative real numbers, and $\mathbb{N}$ is the set of all natural numbers. We use the convention that $\sum_{i=p}^{q} = 0$ if $p > q$ or $q = 0$ . $[m:n]$ denotes all the natural numbers between $m$ and $n$ inclusively, and we abbreviate $[1:n]$ as $[n]$ .
+
+# 2. Problem Setup of Transductive Learning
+
+We consider a set $\mathbf{S}_{m + u} \coloneqq \left\{(\vec{\mathbf{x}}_i, y_i)\right\}_{i = 1}^{m + u}$ , where $y_{i}$ is the label for the point $\vec{\mathbf{x}}_i$ . Let $n = m + u$ , $\left\{\vec{\mathbf{x}}_i\right\}_{i = 1}^n \subseteq \mathcal{X} \subseteq \mathbb{R}^d$ , and $\{y_i\}_{i = 1}^n \subseteq \mathcal{Y} \subseteq \mathbb{R}$ , where $\mathcal{X}, \mathcal{Y}$ are the input and output spaces. The learner is provided with the (unlabeled) full sample $\mathbf{X}_n \coloneqq \left\{\vec{\mathbf{x}}_i\right\}_{i = 1}^n$ . Under the standard setting of transductive learning (El-Yaniv & Pechyony, 2009; Tolstikhin et al., 2014), the training features $\mathbf{X}_m$ of size $m$ are sampled uniformly from $\mathbf{X}_n$ without replacement, and the remaining features are the test features denoted by $\mathbf{X}_u = \mathbf{X}_n\setminus \mathbf{X}_m$ . In the next paragraph we specify the sampling process of $\mathbf{X}_u$ as a random subset of $\mathbf{X}_n$ of size $u$ sampled uniformly without replacement. It then follows by symmetry that $\mathbf{X}_m$ is sampled uniformly from $\mathbf{X}_n$ without replacement.
+
+Let $\mathbf{d} = [d_1, \ldots, d_u] \in \mathbb{N}^u$ be a random vector, where $\{d_i\}_{i=1}^u$ are $u$ independent random variables such that $d_i$ takes values in $[i:n]$ uniformly at random. Algorithm 1, which is adapted from (El-Yaniv & Pechyony, 2009) and deferred to the next subsection, specifies how to obtain $\mathbf{Z_d} = [\mathbf{Z_d}(1), \ldots, \mathbf{Z_d}(u)]^\top \in \mathbb{N}^u$ as the first $u$ elements of a uniformly distributed permutation of $[n]$ , so that $\mathbf{Z_d}$ contains the indices of $u$ test features sampled uniformly from $\mathbf{X}_n$ without replacement. For a vector $\mathbf{Z}$ , we use $\{\mathbf{Z}\}$ to denote the set containing all the elements of the vector $\mathbf{Z}$ regardless of the order of these elements in $\mathbf{Z}$ . Let $\overline{\mathbf{Z_d}} = [n] \setminus \{\mathbf{Z_d}\}$ be the set of indices not in $\{\mathbf{Z_d}\}$ . It has been verified in (El-Yaniv & Pechyony, 2009) that all the $u$ points in $\mathbf{X}_u := \left\{\vec{\mathbf{x}}_i\right\}_{i \in \{\mathbf{Z_d}\}}$ , which are selected by the indices in $\{\mathbf{Z_d}\}$ , form a subset of $\mathbf{X}_n$ chosen uniformly at random among all subsets of size $u$ , and $\mathbf{X}_u$ serves as the test features. As a result, $\mathbf{X}_m = \mathbf{X}_n \setminus \mathbf{X}_u = \left\{\vec{\mathbf{x}}_i\right\}_{i \in \overline{\mathbf{Z_d}}}$ are $m$ training features sampled uniformly from $\mathbf{X}_n$ without replacement. The training features together with their labels, $\{y_i\}_{i \in \overline{\mathbf{Z_d}}}$ , are given to the learner as a training set. We denote the labeled training set by $\mathbf{S}_m := \left\{\left(\vec{\mathbf{x}}_i, y_i\right)\right\}_{i \in \overline{\mathbf{Z_d}}}$ . $\mathbf{X}_u$ is also called the test set. The learner's goal is to predict the labels of the test points in $\mathbf{X}_u$ based on $\mathbf{S}_m \bigcup \mathbf{X}_u$ .
+
+This paper studies the sharp generalization bounds of transductive learning algorithms. We assume that all the points in the full sample $\mathbf{X}_n$ are distinct. Given a prediction function $f$ defined on $\mathcal{X}$ , we define the following loss functions. For simplicity of notations, we let $g(i) = g(\vec{\mathbf{x}}_i, y_i)$ or $g(i) = g(\vec{\mathbf{x}}_i)$ for a function $g$ defined on $\mathcal{X} \times \mathcal{Y}$ or $\mathcal{X}$ . We write $\ell \circ f$ as $\ell_f$ and let $\ell_f(i) = \ell(f(\vec{\mathbf{x}}_i), y_i)$ be the loss on the $i$ -th data point. Let $\mathcal{H}$ be a class of functions defined on $\mathcal{X} \times \mathcal{Y}$ . For any set $\mathcal{A} \subseteq [n]$ , we define $\mathcal{L}_h^{(m)}(\mathcal{A}) := 1/m \cdot \sum_{i \in \mathcal{A}} h(i)$ when $|\mathcal{A}| = m$ , and $\mathcal{U}_h^{(u)}(\mathcal{A}) := \frac{1}{u} \sum_{i \in \mathcal{A}} h(i)$ when $|\mathcal{A}| = u$ . The average loss and average squared loss associated with $h$ are defined as $\mathcal{L}_n(h) := \frac{1}{n} \sum_{i=1}^n h(i), T_n(h) := \frac{1}{n} \sum_{i=1}^n h^2(i)$ . When $h = \ell_f$ , $\mathcal{L}_h^{(m)}(\overline{\mathbf{Z}_{\mathbf{d}}})$ and $\mathcal{U}_h^{(u)}(\mathbf{Z}_{\mathbf{d}})$ are the training loss and the test loss of the prediction function $f$ . We have $\mathbb{E}_{\mathbf{d}}\left[\mathcal{U}_h^{(u)}(\mathbf{Z}_{\mathbf{d}})\right] = \mathbb{E}_{\mathbf{d}}\left[\mathcal{L}_h^{(m)}(\overline{\mathbf{Z}_{\mathbf{d}}})\right] = \mathcal{L}_n(h)$ .
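As a quick numerical sanity check of the identity $\mathbb{E}_{\mathbf{d}}\left[\mathcal{U}_h^{(u)}(\mathbf{Z}_{\mathbf{d}})\right] = \mathbb{E}_{\mathbf{d}}\left[\mathcal{L}_h^{(m)}(\overline{\mathbf{Z}_{\mathbf{d}}})\right] = \mathcal{L}_n(h)$, the following sketch averages the test and training losses over many uniform without-replacement splits; the toy losses $h(i) = i$ and the sizes $n, m, u$ are our own illustrative choices, not from the paper:

```python
import random

# Toy losses h(i) on a full sample of n = m + u points (illustrative values).
n, m, u = 10, 6, 4
h = [float(i) for i in range(1, n + 1)]  # h(i) = i for i = 1, ..., n
L_n = sum(h) / n                         # average loss over the full sample

# Draw the test index set uniformly without replacement many times and
# average the resulting test loss U_h^{(u)} and training loss L_h^{(m)}.
rng = random.Random(0)
trials = 50_000
U_avg = L_avg = 0.0
for _ in range(trials):
    test = set(rng.sample(range(n), u))
    U_avg += sum(h[i] for i in test) / u
    L_avg += sum(h[i] for i in range(n) if i not in test) / m
U_avg /= trials
L_avg /= trials

# Both empirical averages concentrate around L_n, matching the identity.
print(U_avg, L_avg, L_n)
```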
+
+# 2.1. Sampling Random Set Uniformly Without Replacement
+
+The sampling strategy in (El-Yaniv & Pechyony, 2009) is adopted to sample $u$ points from the full sample $\mathbf{X}_n$ uniformly at random among all subsets of size $u$ , which is described in Algorithm 1. Let $\mathbf{Z}_{\mathrm{d}}$ be the vector returned by Algorithm 1. Then $\{\mathbf{Z}_{\mathrm{d}}\}$ is the set of the indices of the test features, and $\overline{\mathbf{Z}}_{\mathrm{d}}$ is the set of the indices of the training features.
+
+Algorithm 1 The RANDPERM Algorithm in (El-Yaniv & Pechyony, 2009), which obtains $\mathbf{Z}_{\mathbf{d}} \in \mathbb{N}^{u}$ as the first $u$ elements of a uniformly distributed permutation of $[n]$ by sampling independent random variables $d_{1}, \ldots, d_{u}$ .
+
+1: $\mathbf{Z}_{\mathbf{d}} \gets \mathrm{RANDPERM}(u)$
+2: input: $u$
+3: initialize: $\mathbf{I} = [n]$ ; $\mathbf{d}$ , $\mathbf{Z}_{\mathbf{d}} \in \mathbb{N}^u$ are initialized as zero vectors
+4: for $i = 1,\dots ,u$ do
+5: Sample $d_{i}$ uniformly from $[i:n]$
+6: $\mathbf{d}(i) = d_i$ , $\mathbf{Z_d}(i) = \mathbf{I}(d_i)$
+7: Swap the values of $\mathbf{I}(i)$ and $\mathbf{I}(d_i)$
+8: end for
+9: return $\mathbf{Z_d}$
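A minimal Python sketch of Algorithm 1 (the function name `randperm` and the toy parameters are ours; indices are 1-based to match the notation above):

```python
import random

def randperm(n: int, u: int, rng: random.Random) -> list[int]:
    """Return Z_d: the first u entries of a uniformly distributed
    permutation of [n] = {1, ..., n}, following Algorithm 1."""
    I = list(range(1, n + 1))  # I = [n]
    Z_d = []
    for i in range(1, u + 1):
        d_i = rng.randint(i, n)            # sample d_i uniformly from [i : n]
        Z_d.append(I[d_i - 1])             # Z_d(i) = I(d_i)
        I[i - 1], I[d_i - 1] = I[d_i - 1], I[i - 1]  # swap I(i) and I(d_i)
    return Z_d

# {Z_d} gives the test indices; the complement gives the training indices.
rng = random.Random(0)
Z_d = randperm(n=6, u=3, rng=rng)
train = sorted(set(range(1, 7)) - set(Z_d))
print(Z_d, train)
```

Each call returns $u$ distinct indices, and by the uniformity of the underlying permutation every index lands in the test set with probability $u/n$.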
+
+# 2.2. Basic Definitions
+
+We now define basic notation for Transductive Complexity. Let $\mathbf{d}' = [d_1', \ldots, d_u']$ be an independent copy of $\mathbf{d}$ , and let $\mathbf{d}^{(i)} = [d_1, \ldots, d_{i-1}, d_i', d_{i+1}, \ldots, d_u]$ . We define the supremum of the empirical process of the gap between the test loss and the training loss as
+
+$$
+g (\mathbf {d}) := \sup _ {h \in \mathcal {H}} \left(\mathcal {U} _ {h} ^ {(u)} \left(\mathbf {Z} _ {\mathbf {d}}\right) - \mathcal {L} _ {h} ^ {(m)} \left(\overline {{\mathbf {Z} _ {\mathbf {d}}}}\right)\right), \tag {4}
+$$
+
+where $\mathcal{U}_h^{(u)}(\mathbf{Z_d}) = \frac{1}{u}\sum_{i\in \{\mathbf{Z_d}\}}h(i)$ is the test loss and $\mathcal{L}_h^{(m)}(\overline{\mathbf{Z_d}}) = \frac{1}{m}\sum_{i\in \{\overline{\mathbf{Z_d}}\}}h(i)$ is the training loss. $g(\mathbf{d})$ is also referred to as the test-train process. We next define Rademacher variables and the Transductive Complexity (TC), and relate TC to the conventional inductive Rademacher complexity.
+
+Definition 2.1 (Rademacher Variables). Let $\{\sigma_i\}_{i=1}^n$ be $n$ i.i.d. random variables such that $\operatorname*{Pr}[\sigma_i = 1] = \operatorname*{Pr}[\sigma_i = -1] = \frac{1}{2}$ , and they are defined as the Rademacher variables.
+
+The Transductive Complexity is defined below.
+
+Definition 2.2 (Transductive Complexity). The four types of Transductive Complexity (TC) of a function class $\mathcal{H}$ are
+
+defined as
+
+$$
+\mathfrak {R} _ {u} ^ {+} (\mathcal {H}) := \mathbb {E} _ {\mathbf {d}} \left[ \sup _ {h \in \mathcal {H}} R _ {u, \mathbf {d}} ^ {+} h \right], \mathfrak {R} _ {u} ^ {-} (\mathcal {H}) := \mathbb {E} _ {\mathbf {d}} \left[ \sup _ {h \in \mathcal {H}} R _ {u, \mathbf {d}} ^ {-} h \right],
+$$
+
+$$
+\Re_ {m} ^ {+} (\mathcal {H}) := \mathbb {E} _ {\mathbf {d}} \left[ \sup _ {h \in \mathcal {H}} R _ {m, \mathbf {d}} ^ {+} h \right], \Re_ {m} ^ {-} (\mathcal {H}) := \mathbb {E} _ {\mathbf {d}} \left[ \sup _ {h \in \mathcal {H}} R _ {m, \mathbf {d}} ^ {-} h \right], \tag {5}
+$$
+
+where $R_{u,\mathbf{d}}^{+}h\coloneqq \frac{1}{u}\sum_{i = 1}^{u}h(\mathbf{Z}_{\mathbf{d}}(i)) - \mathcal{L}_{n}(h)$ , $R_{m,\mathbf{d}}^{+}h\coloneqq \frac{1}{m}\sum_{i = 1}^{m}h(\overline{\mathbf{Z}_{\mathbf{d}}}(i)) - \mathcal{L}_{n}(h)$ , and $R_{u,\mathbf{d}}^{-}h \coloneqq -R_{u,\mathbf{d}}^{+}h$ , $R_{m,\mathbf{d}}^{-}h \coloneqq -R_{m,\mathbf{d}}^{+}h$ .
+
+We remark that the proposed Transductive Complexity (TC) is fundamentally different from the transductive version of the Rademacher complexity in (El-Yaniv & Pechyony, 2009, Definition 1) in the sense that our TC is defined on the random training set or test set, while the counterpart in (El-Yaniv & Pechyony, 2009, Definition 1) operates on the entire full sample.
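For intuition, $\mathfrak{R}_u^+(\mathcal{H})$ can be computed exactly for a tiny finite class by enumerating all size-$u$ test subsets (each equally likely to be $\{\mathbf{Z}_{\mathbf{d}}\}$), and approximated by Monte Carlo sampling; the class $\mathcal{H}$ of three functions and the sizes below are illustrative choices of ours:

```python
import itertools
import random

# Toy full sample of n = 6 points; each h in H is given by its values
# (h(1), ..., h(n)). Illustrative class, not from the paper.
n, u = 6, 3
H = [
    (1.0, 0.0, 1.0, 0.0, 1.0, 0.0),
    (0.5, 0.5, 0.5, 0.5, 0.5, 0.5),
    (0.0, 1.0, 0.0, 1.0, 0.0, 1.0),
]
L_n = [sum(h) / n for h in H]  # L_n(h) for each h

# Exact TC: every size-u subset of indices is equally likely to be {Z_d}.
subsets = list(itertools.combinations(range(n), u))
tc_exact = sum(
    max(sum(h[i] for i in S) / u - L for h, L in zip(H, L_n))
    for S in subsets
) / len(subsets)

# Monte Carlo estimate of the same expectation over d.
rng = random.Random(0)
trials = 50_000
tc_mc = 0.0
for _ in range(trials):
    S = rng.sample(range(n), u)
    tc_mc += max(sum(h[i] for i in S) / u - L for h, L in zip(H, L_n))
tc_mc /= trials

print(tc_exact, tc_mc)  # the two values should nearly agree
```

Note that the supremum is taken inside the expectation, so the TC is nonnegative here even though each individual deviation $R_{u,\mathbf{d}}^{+}h$ has mean zero.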
+
+Let $\mathbf{Y}^{(u)} = \{Y_1,\dots ,Y_u\}$ with each $Y_{i}$ sampled uniformly and independently from $[n]$ with replacement for all $i\in [u]$ . Similarly, let $\mathbf{Y}^{(m)} = \{Y_1,\ldots ,Y_m\}$ with each $Y_{i}$ sampled uniformly and independently from $[n]$ with replacement for all $i\in [m]$ . The following theorem relates the TC defined in Definition 2.2 to the usual inductive Rademacher complexity.
+
+Theorem 2.1. Let $\sigma = \{\sigma_i\}_{i=1}^{\max\{u,m\}}$ be i.i.d. Rademacher variables. Define $R_{\pmb{\sigma}, \mathbf{Y}^{(u)}}^{(\mathrm{ind})}h := \frac{1}{u}\sum_{i=1}^{u}\sigma_ih(Y_i)$ and $R_{\pmb{\sigma}, \mathbf{Y}^{(m)}}^{(\mathrm{ind})}h := \frac{1}{m}\sum_{i=1}^{m}\sigma_ih(Y_i)$ . Then
+
+$$
+\max \left\{\mathfrak {R} _ {u} ^ {+} (\mathcal {H}), \mathfrak {R} _ {u} ^ {-} (\mathcal {H}) \right\} \leq 2 \mathfrak {R} _ {u} ^ {\text {(i n d)}} (\mathcal {H}),
+$$
+
+$$
+\max \left\{\Re_ {m} ^ {+} (\mathcal {H}), \Re_ {m} ^ {-} (\mathcal {H}) \right\} \leq 2 \Re_ {m} ^ {(\mathrm {i n d})} (\mathcal {H}), \tag {6}
+$$
+
+where $\mathfrak{R}_u^{(\mathrm{ind})}(\mathcal{H})\coloneqq \mathbb{E}_{\mathbf{Y}^{(u)},\pmb{\sigma}}\left[\sup_{h\in \mathcal{H}}R_{\pmb{\sigma},\mathbf{Y}^{(u)}}^{\mathrm{(ind)}}h\right],$ $\mathfrak{R}_m^{(\mathrm{ind})}(\mathcal{H}):= \mathbb{E}_{\mathbf{Y}^{(m)},\pmb{\sigma}}\left[\sup_{h\in \mathcal{H}}R_{\pmb{\sigma},\mathbf{Y}^{(m)}}^{\mathrm{(ind)}}h\right].$
+
+Remark 2.2. $\mathfrak{R}_u^{(\mathrm{ind})}(\mathcal{H})$ and $\mathfrak{R}_m^{(\mathrm{ind})}(\mathcal{H})$ are the Rademacher complexities in the inductive setting. It is remarked that (6) indicates that the established symmetrization inequality for the inductive Rademacher complexity also holds for the transductive complexity defined in Definition 2.2. For simplicity of notation, if no confusion arises, we also write $\mathfrak{R}_u^{(\mathrm{ind})}(\mathcal{H}) = \mathbb{E}\left[\sup_{h\in \mathcal{H}}R_{\pmb{\sigma},\mathbf{Y}^{(u)}}^{(\mathrm{ind})}h\right]$ and $\mathfrak{R}_m^{(\mathrm{ind})}(\mathcal{H}) = \mathbb{E}\left[\sup_{h\in \mathcal{H}}R_{\pmb{\sigma},\mathbf{Y}^{(m)}}^{(\mathrm{ind})}h\right]$ .
+
+We define the sub-root function below, which will be extensively used for deriving sharp bounds based on transductive local complexity.
+
+Definition 2.3 (Sub-root function, (Bartlett et al., 2005, Definition 3.1)). A function $\psi \colon [0,\infty) \to [0,\infty)$ is sub-root if it is nonnegative and nondecreasing, and if $\frac{\psi(r)}{\sqrt{r}}$ is nonincreasing for $r > 0$ .
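As a standard example (our own illustration, not taken from this paper), $\psi(r) = \sqrt{ar} + b$ with $a, b > 0$ is sub-root, and its fixed point can be computed in closed form by substituting $s = \sqrt{r^*}$:

```latex
% psi(r) = sqrt(a r) + b is nonnegative and nondecreasing, and
% psi(r)/sqrt(r) = sqrt(a) + b/sqrt(r) is nonincreasing, so psi is sub-root.
\[
  r^* = \psi(r^*) = \sqrt{a r^*} + b
  \;\Longleftrightarrow\;
  s^2 - \sqrt{a}\, s - b = 0 \quad (s := \sqrt{r^*})
  \;\Longrightarrow\;
  r^* = \left( \frac{\sqrt{a} + \sqrt{a + 4b}}{2} \right)^{\!2},
\]
% recovering r^* = a in the special case b = 0.
```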
+
+# 3. TLC Excess Risk Bound for Generic Transductive Learning
+
+In this section, we first introduce our new concentration inequality for the test-train process as Theorem 3.1 in Section 3.1. We then apply Theorem 3.1 to obtain Theorem 3.2, which presents the bound for the test-train process involving the fixed points of certain sub-root functions as the upper bounds for the TC of localized function classes. Based on Theorem 3.2, the generalization bound and excess risk bound for generic transductive learning are presented in Theorem 3.5 and Theorem 3.6, respectively.
+
+# 3.1. Concentration Inequality for the Test-Train Process
+
+Let $\mathcal{H}$ be a class of functions defined on $\mathcal{X} \times \mathcal{Y}$ such that for any $h \in \mathcal{H}$ , $|h(i)| \leq H_0$ for all $i \in [n]$ , where $H_0$ is a positive number. For a technical reason we let $H_0 \geq 2\sqrt{2}$ throughout this paper, which is achieved by setting $H_0 = \max \left\{2\sqrt{2}, \max_{i \in [n]} |h(i)|\right\}$ . Unless otherwise noted, the function class $\mathcal{H}$ is assumed to be separable throughout this paper. Given the function class $\mathcal{H}$ , we define the function class $\mathcal{H}^2 \coloneqq \left\{h^2 \mid h \in \mathcal{H}\right\}$ as the "squared version" of $\mathcal{H}$ . We then have the following concentration inequality for the test-train process $g(\mathbf{d})$ . We consider two cases throughout this paper, that is, $m \gg u^2$ or $u \gg m^2$ .
+
+Theorem 3.1 (Concentration Inequality for the Test-Train Process (4)). Assume that there is a positive number $r > 0$ such that $\sup_{h\in \mathcal{H}}T_n(h^2)\leq r$ . Suppose that $m\gg u^2$ or $u\gg m^2$ . Then for every $x > 0$ , with probability at least $1 - \exp (-x) - (\min \{m,u\})^{2} / \max \{m,u\}$ over $\mathbf{d}$ ,
+
+$$
+\begin{array}{l} g (\mathbf {d}) \leq \mathbb {E} _ {\mathbf {d}} \left[ g (\mathbf {d}) \right] + 8 \sqrt {\frac {5 r x}{\min \left\{u , m \right\}}} \\ + 2 \sqrt {2} \inf _ {\alpha > 0} \left(\frac {\Re_ {\min \{u , m \}} ^ {+} (\mathcal {H} ^ {2})}{\alpha} + \frac {2 \alpha x}{\min \{u , m \}}\right) + \frac {8 H _ {0} ^ {2} x}{\min \{u , m \}}. \tag {7} \\ \end{array}
+$$
+
+Here $\Re_u^+(\cdot), \Re_m^+(\cdot)$ are the Transductive Complexity defined in (5), and $\mathcal{H}^2 = \{h^2 \mid h \in \mathcal{H}\}$ .
+
+Key Innovations in the Proof of Theorem 3.1. The proof of Theorem 3.1 is deferred to Section B.2 of the appendix, and it is based on a novel combinatorial property of the test-train process revealed in Lemma B.1 and Lemma B.2. This property is used to derive an upper bound for the variance $V_{+}(g)$ of $g(\mathbf{d})$ , and this upper bound involves another empirical process for the class $\mathcal{H}^2$ . The bound for the empirical process for $\mathcal{H}^2$ is derived with the exponential version of the Efron-Stein inequality ((Boucheron et al., 2003, Theorem 2)), and we use (Boucheron et al., 2003, Theorem 2) again along with the bound for $V_{+}(\cdot)$ to derive the sharp bound for $g(\mathbf{d})$ .
+
+# 3.2. The First Bound by Transductive Local Complexity
+
+Using Theorem 3.1 and the peeling strategy in the proof of (Bartlett et al., 2005, Theorem 3.3), we have the following bound for the test-train process involving the fixed points of sub-root functions as the upper bounds for the TC of localized function classes.
+
+Theorem 3.2. Suppose $K > 1$ is a fixed constant, and $\tilde{T}_n(h)\colon \mathcal{H}\to \mathbb{R}^+$ is a functional such that $T_{n}(h)\leq \tilde{T}_{n}(h)$ for all $h\in \mathcal{H}$ . Let $\psi_{u}$ be a sub-root function and let $r_u$ be the fixed point of $\psi_{u}$ . Let $\psi_{m}$ be another sub-root function and let $r_m$ be the fixed point of $\psi_{m}$ . Assume that for all $r\geq r_u$ ,
+
+$$
+\begin{array}{l} \psi_ {u} (r) \geq \max \left\{\mathbb {E} _ {\mathbf {d}} \left[ \sup _ {h \in \mathcal {H}, \tilde {T} _ {n} (h) \leq r} R _ {u, \mathbf {d}} ^ {+} h \right], \right. \\ \left. \mathbb {E} _ {\mathbf {d}} \left[ \sup _ {h \in \mathcal {H}, \tilde {T} _ {n} (h) \leq r} R _ {u, \mathbf {d}} ^ {+} h ^ {2} \right] \right\}, \tag {8} \\ \end{array}
+$$
+
+and for all $r\geq r_m$
+
+$$
+\begin{array}{l} \psi_ {m} (r) \geq \max \left\{\mathbb {E} _ {\mathbf {d}} \left[ \sup _ {h \in \mathcal {H}, \tilde {T} _ {n} (h) \leq r} R _ {m, \mathbf {d}} ^ {-} h \right], \right. \\ \left. \mathbb {E} _ {\mathbf {d}} \left[ \sup _ {h \in \mathcal {H}, \tilde {T} _ {n} (h) \leq r} R _ {m, \mathbf {d}} ^ {+} h ^ {2} \right] \right\}. \tag {9} \\ \end{array}
+$$
+
+Suppose that $m \gg u^2$ or $u \gg m^2$ . Then for every $x > 0$ , with probability at least $1 - \exp(-x) - (\min\{m, u\})^2 / \max\{m, u\}$ over $\mathbf{d}$ , for every $h \in \mathcal{H}$ ,
+
+$$
+\begin{array}{l} \mathcal {U} _ {h} ^ {(u)} (\mathbf {Z} _ {\mathbf {d}}) \leq \mathcal {L} _ {h} ^ {(m)} (\overline {{\mathbf {Z} _ {\mathbf {d}}}}) + \frac {\tilde {T} _ {n} (h)}{K} + c _ {0} (r _ {u} + r _ {m}) \\ + \frac {c _ {1} x}{\min \{m , u \}}, \tag {10} \\ \end{array}
+$$
+
+where $c_{0}, c_{1}$ are absolute positive constants depending on $K$ , and $c_{1}$ also depends on $H_{0}$ .
+
+Remark 3.3. $\tilde{T}_n(\cdot)$ is termed a surrogate variance operator, and it is an upper bound for the usual variance operator $T_n(\cdot)$ . $\psi_u(r)$ is the sub-root upper bound for the TC of a localized function class, $\{h\in \mathcal{H}\colon \tilde{T}_n(h)\leq r\}$ , in which every function $h$ has its functional value $\tilde{T}_n(h)$ bounded by $r$ . In this sense, $\psi_{u}(r)$ is the upper bound for the TC of a localized function class, so we attribute the results of Theorem 3.2 to Transductive Local Complexity (TLC). The same comments also apply to $\psi_m(r)$ .
+
+# 3.3. Sharp Excess Risk Bounds using Transductive Local Complexity (TLC) for Generic Transductive Learning
+
+We apply Theorem 3.2 to the transductive learning task introduced in Section 2 and derive a sharp bound for the excess risk. Suppose we have a function class $\mathcal{F}$ which contains all the prediction functions. We assume $0 \leq \ell_f(i) \leq L_0$ for all $f \in \mathcal{F}$ and all $i \in [n]$ throughout this paper, with $L_0 \geq 2\sqrt{2}$ . Given $\mathbf{d}$ , we define $\widehat{f}_{\mathbf{d},u} := \arg \min_{f \in \mathcal{F}} \mathcal{U}_{\ell_f}^{(u)}(\mathbf{Z}_{\mathbf{d}})$ as the oracle predictor with minimum loss on the test set $\mathbf{X}_u$ , and $\widehat{f}_{\mathbf{d},m} := \arg \min_{f \in \mathcal{F}} \mathcal{L}_{\ell_f}^{(m)}(\overline{\mathbf{Z}_{\mathbf{d}}})$ as the empirical minimizer, that is, the predictor with minimum loss on the training data $\mathbf{X}_m$ . The excess risk of $\widehat{f}_{\mathbf{d},m}$ is defined by
+
+$$
+\mathcal {E} \left(\widehat {f} _ {\mathbf {d}, m}\right) := \mathcal {U} _ {\ell_ {\widehat {f} _ {\mathbf {d}, m}}} ^ {(u)} \left(\mathbf {Z} _ {\mathbf {d}}\right) - \mathcal {U} _ {\ell_ {\widehat {f} _ {\mathbf {d}, u}}} ^ {(u)} \left(\mathbf {Z} _ {\mathbf {d}}\right). \tag {11}
+$$
+
+Furthermore, we define the function class $\Delta_{\mathcal{F}} := \{h \colon h = \ell_{f_1} - \ell_{f_2}, f_1, f_2 \in \mathcal{F}\}$ . For $h = \ell_{f_1} - \ell_{f_2} \in \Delta_{\mathcal{F}}$ , we define in (12) a novel surrogate variance operator as a functional $\tilde{T}_n(h) \colon \Delta_{\mathcal{F}} \to \mathbb{R}^+$ such that $T_n(h) \leq \tilde{T}_n(h)$ . As a result, we can apply Theorem 3.2 to the function class $\Delta_{\mathcal{F}}$ and obtain the following theorem, which states an upper bound for the test-train process over functions given by the difference of the losses of two predictors from $\mathcal{F}$ . We first introduce the following assumption, which is the standard assumption adopted by existing local complexity based methods for both inductive and transductive learning (Bartlett et al., 2005; Tolstikhin et al., 2014) to guarantee performance with loss function $\ell(\cdot, \cdot)$ .
+
+Assumption 1 (Main Assumption). (1) There is a function $f_{n}^{*} \in \mathcal{F}$ such that $\mathcal{L}_n(\ell_{f_n^*}) = \inf_{f\in \mathcal{F}}\mathcal{L}_n(\ell_f)$ .
+
+(2) There is a constant $B$ such that for any $h \in \Delta_{\mathcal{F}}^{*}$ , $T_{n}(h) \leq B\mathcal{L}_{n}(h)$ , where $\Delta_{\mathcal{F}}^{*} \coloneqq \left\{\ell_{f} - \ell_{f_{n}^{*}}: f \in \mathcal{F}\right\}$ .
+
+Remark 3.4. Assumption 1 is not restrictive; it is the standard assumption also used in (Tolstikhin et al., 2014). In addition, Assumption 1 (2) holds if the loss function $\ell(\cdot, \cdot)$ is Lipschitz continuous in its first argument and satisfies a uniform convexity condition, as is the case for $\ell(y', y) = (y' - y)^2$ .
+
+Applying Theorem 3.2 to the function class $\Delta_{\mathcal{F}}$ , we obtain the following theorem.
+
+Theorem 3.5. Suppose that Assumption 1 holds, and $K > 1$ is a fixed constant. For $h = \ell_{f_1} - \ell_{f_2} \in \Delta_{\mathcal{F}}$ with $f_1, f_2 \in \mathcal{F}$ , let
+
+$$
+\begin{array}{l} \tilde {T} _ {n} (h) := \\ \inf _ {f _ {1}, f _ {2} \in \mathcal {F}: \ell_ {f _ {1}} - \ell_ {f _ {2}} = h} 2 B \mathcal {L} _ {n} \left(\ell_ {f _ {1}} - \ell_ {f _ {n} ^ {*}}\right) + 2 B \mathcal {L} _ {n} \left(\ell_ {f _ {2}} - \ell_ {f _ {n} ^ {*}}\right). \tag {12} \\ \end{array}
+$$
+
+Let $\psi_{u}$ be a sub-root function and let $r_u$ be the fixed point of $\psi_{u}$ . Let $\psi_{m}$ be another sub-root function and let $r_m$ be the fixed point of $\psi_{m}$ . Assume that for all $r\geq r_u$ ,
+
+$$
+\begin{array}{l} \psi_ {u} (r) \geq \max \left\{\mathbb {E} _ {\mathbf {d}} \left[ \sup _ {h: h \in \Delta_ {\mathcal {F}}, \tilde {T} _ {n} (h) \leq r} R _ {u, \mathbf {d}} ^ {+} h \right], \right. \\ \left. \mathbb {E} _ {\mathbf {d}} \left[ \sup _ {h: h \in \Delta_ {\mathcal {F}}, \tilde {T} _ {n} (h) \leq r} R _ {u, \mathbf {d}} ^ {+} h ^ {2} \right] \right\}, \tag {13} \\ \end{array}
+$$
+
+and for all $r \geq r_m$ ,
+
+$$
+\begin{array}{l} \psi_ {m} (r) \geq \max \left\{\mathbb {E} _ {\mathbf {d}} \left[ \sup _ {h: h \in \Delta_ {\mathcal {F}}, \tilde {T} _ {n} (h) \leq r} R _ {m, \mathbf {d}} ^ {-} h \right], \right. \\ \left. \mathbb {E} _ {\mathbf {d}} \left[ \sup _ {h: h \in \Delta_ {\mathcal {F}}, \tilde {T} _ {n} (h) \leq r} R _ {m, \mathbf {d}} ^ {+} h ^ {2} \right] \right\}. \tag {14} \\ \end{array}
+$$
+
+Suppose that $m \gg u^2$ or $u \gg m^2$ . Then for every $x > 0$ , with probability at least $1 - \exp(-x) - (\min\{m, u\})^2 / \max\{m, u\}$ over $\mathbf{d}$ , for every $h \in \Delta_{\mathcal{F}}$ ,
+
+$$
+\begin{array}{l} \mathcal {U} _ {h} ^ {(u)} \left(\mathbf {Z} _ {\mathbf {d}}\right) \leq \mathcal {L} _ {h} ^ {(m)} \left(\overline {{\mathbf {Z} _ {\mathbf {d}}}}\right) + \frac {2 B}{K} \mathcal {L} _ {n} \left(\ell_ {f _ {1}} - \ell_ {f _ {n} ^ {*}}\right) \\ + \frac {2 B}{K} \mathcal {L} _ {n} \left(\ell_ {f _ {2}} - \ell_ {f _ {n} ^ {*}}\right) + c _ {0} \left(r _ {u} + r _ {m}\right) + \frac {c _ {1} x}{\min \{m , u \}}, \tag {15} \\ \end{array}
+$$
+
+where $c_{0}, c_{1}$ are absolute positive constants depending on $K$ , and $c_{1}$ also depends on $L_{0}$ .
+
+Combining Theorem 3.5 and Theorem B.11 in the appendix, we have the following excess risk bound for the empirical minimizer $\widehat{f}_{\mathbf{d},m}$ .
+
+Theorem 3.6. Suppose that Assumption 1 holds and $m \gg u^2$ or $u \gg m^2$ , $K > 1$ is a fixed constant. Then for every $x > 0$ , with probability at least $1 - 3\exp(-x) - 3(\min\{m, u\})^2 / \max\{m, u\}$ over $\mathbf{d}$ , the excess risk $\mathcal{E}(\widehat{f}_{\mathbf{d},m})$ satisfies
+
+$$
+\mathcal {E} \left(\widehat {f} _ {\mathbf {d}, m}\right) \leq c _ {0} \left(r _ {u} + r _ {m}\right) + \frac {4 B c _ {2} r ^ {*}}{K} + \frac {c _ {3} x}{\min \{m , u \}}, \tag {16}
+$$
+
+where $r^*$ is specified by Theorem B.11 in the appendix, $c_3 = c_1 + 4Bc_2 / K$ is a positive constant, $c_0, c_1, c_2$ are the positive constants in Theorem 3.5 and Theorem B.11 in the appendix.
+
+Proof. (16) follows by plugging the upper bounds (75) for $\mathcal{L}_n(\ell_{\widehat{f}_{\mathbf{d},u}} - \ell_{f_n^*})$ and $\mathcal{L}_n(\ell_{\widehat{f}_{\mathbf{d},m}} - \ell_{f_n^*})$ in Theorem B.11 to (15) in Theorem 3.5.
+
+We remark that (16) is consistent with the sharp excess risk bound (1) for inductive learning, with only constant factors on the fixed points $r_u$ and $r_m$ . In contrast, the existing local complexity based excess risk bound (2) for transductive learning involves the undesirable factors $n / u$ and $n / m$ , which can make the bound (2) diverge under standard learning models, such as Transductive Kernel Learning (TKL). In the next section, we apply the TLC based sharp excess risk bound (16) to TKL.
+
+# 4. TLC Excess Risk Bound for Transductive Kernel Learning
+
+We apply the results in Section 3.3 to obtain a risk bound for transductive kernel learning, Theorem 4.1, which is sharper than the current state of the art (Tolstikhin et al., 2014).
+
+Background on RKHS and Kernel Learning. Let $\mathcal{H}_K$ be the Reproducing Kernel Hilbert Space (RKHS) associated with $K$ , where $K: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ is a positive definite kernel defined on $\mathcal{X} \times \mathcal{X}$ , and we assume $\mathcal{X}$ is compact in this section. Let $\mathcal{H}_{\mathbf{X}_n} := \overline{\left\{\sum_{i=1}^n K(\cdot, \vec{\mathbf{x}}_i) \alpha_i \mid \{\alpha_i\}_{i=1}^n \subseteq \mathbb{R}\right\}}$ be the usual RKHS spanned by $\left\{K(\cdot, \vec{\mathbf{x}}_i)\right\}_{i=1}^n$ on the full sample $\mathbf{X}_n = \left\{\vec{\mathbf{x}}_i\right\}_{i=1}^n$ . Let the Gram matrix of $K$ over the full sample be $\mathbf{K} \in \mathbb{R}^{n \times n}$ with $\mathbf{K}_{ij} = K(\vec{\mathbf{x}}_i, \vec{\mathbf{x}}_j)$ for $i, j \in [n]$ , and let $\mathbf{K}_n := \frac{1}{n} \mathbf{K}$ . Let $\widehat{\lambda}_1 \geq \widehat{\lambda}_2 \geq \ldots \geq \widehat{\lambda}_n > 0$ be the eigenvalues of $\mathbf{K}_n$ , and $\max_{\mathbf{x} \in \mathcal{X}} K(\mathbf{x}, \mathbf{x}) = \tau_0^2 < \infty$ . We then have $\widehat{\lambda}_1 \leq \operatorname{tr}(\mathbf{K}_n) \leq \tau_0^2$ . For a positive number $\mu$ , define $\mathcal{H}_K(\mu) := \{f \in \mathcal{H}_K \mid \|f\|_{\mathcal{H}_K} \leq \mu\}$ ; we consider the function class $\mathcal{H}_{\mathbf{X}_n}(\mu) := \mathcal{H}_K(\mu) \cap \mathcal{H}_{\mathbf{X}_n}$ for TKL.
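A small numerical sketch of the chain $\widehat{\lambda}_1 \leq \operatorname{tr}(\mathbf{K}_n) \leq \tau_0^2$ (the Gaussian RBF kernel, the toy data, and the power-iteration helper are our own illustrative choices; for an RBF kernel $K(\mathbf{x},\mathbf{x}) = 1$, so $\tau_0^2 = 1$):

```python
import math
import random

# Toy full sample: n points in R^2 with RBF kernel K(x, y) = exp(-||x - y||^2).
rng = random.Random(0)
n = 8
X = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(n)]

def rbf(x, y):
    return math.exp(-((x[0] - y[0]) ** 2 + (x[1] - y[1]) ** 2))

# Normalized Gram matrix K_n = K / n; its trace is tr(K)/n = 1 here.
Kn = [[rbf(X[i], X[j]) / n for j in range(n)] for i in range(n)]
trace = sum(Kn[i][i] for i in range(n))

def top_eigenvalue(A, iters=500):
    """Largest eigenvalue of a symmetric PSD matrix by power iteration."""
    v = [1.0] * len(A)
    lam = 1.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(len(A))) for i in range(len(A))]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam

lam1 = top_eigenvalue(Kn)
# Check lambda_1 <= tr(K_n) <= tau_0^2 = 1 on this toy instance.
print(lam1, trace)
```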
+
+Results. The following assumption is standard in the analysis of LRC-based excess risk bounds and is also adopted in (Bartlett et al., 2005; Tolstikhin et al., 2014).
+
+Assumption 2. (1) The loss function $\ell(\cdot, \cdot)$ is $L$ -Lipschitz in its first argument, that is, $|\ell(f(\mathbf{x}), y) - \ell(f(\mathbf{x}'), y)| \leq L|f(\mathbf{x}) - f(\mathbf{x}')|$ for all $f \in \mathcal{F}$ .
+
+(2) There is a constant $B^{\prime}$ such that for any $f\in \mathcal{F}$ , $T_{n}(f - f_{n}^{*})\leq B^{\prime}\mathcal{L}_{n}\left(\ell_{f} - \ell_{f_{n}^{*}}\right)$ .
+
+It can be verified that Assumption 2 implies Assumption 1 (2), so that the former is stronger than the latter. We now let the empirical minimizer $\widehat{f}_{\mathbf{d},m}$ and the oracle predictor $\widehat{f}_{\mathbf{d},u}$ be defined using the function class $\mathcal{F} = \mathcal{H}_{\mathbf{X}_n}(\mu)$ . The following theorem states the sharp bound for the excess risk $\mathcal{E}(\widehat{f}_{\mathbf{d},m})$ for TKL based on Assumption 1 (1) and Assumption 2.
+
+Theorem 4.1. Suppose that Assumption 1 (1) and Assumption 2 hold. Suppose $K$ is a positive definite kernel on $\mathcal{X}\times \mathcal{X}$ . Suppose that for all $f\in \mathcal{H}_{\mathbf{X}_n}(\mu)$ , $0\leq \ell_{f}(i)\leq L_{0}$ for all $i\in [n]$ , and $L_0\geq 2\sqrt{2}$ . Suppose that $m\gg u^2$ or $u \gg m^2$ . Then for every $x > 0$ , with probability at least $1 - 3\exp(-x) - 3\left(\min\{m, u\}\right)^2 / \max\{m, u\}$ over $\mathbf{d}$ , we have the excess risk bound
+
+$$
+\mathcal {E} \left(\widehat {f} _ {\mathbf {d}, m}\right) \leq c _ {5} \left(\min _ {0 \leq Q \leq n} r (u, m, Q) + \frac {x}{\min \{m , u \}}\right), \tag {17}
+$$
+
+where
+
+$$
+r (u, m, Q) := Q \left(\frac {1}{u} + \frac {1}{m}\right) + \left(\sqrt {\frac {\sum_ {q = Q + 1} ^ {n} \widehat {\lambda} _ {q}}{u}} + \sqrt {\frac {\sum_ {q = Q + 1} ^ {n} \widehat {\lambda} _ {q}}{m}}\right),
+$$
+
+and $c_{5}$ is a positive constant depending only on $B^{\prime}, L_{0}, L, \mu$ .
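The minimization over $Q$ in (17) is straightforward given the spectrum of $\mathbf{K}_n$. The sketch below (a hypothetical helper written for illustration, not code from the paper) computes $\min_{0 \leq Q \leq n} r(u, m, Q)$ via tail sums of the eigenvalues:

```python
import numpy as np

def min_r(lam, u, m):
    """min over 0 <= Q <= n of r(u, m, Q) from Theorem 4.1, where lam holds the
    eigenvalues of K_n; returns the minimum value and the minimizing Q."""
    lam = np.sort(np.asarray(lam, dtype=float))[::-1]
    # tails[Q] = sum_{q = Q+1}^{n} lam_q, with tails[n] = 0
    tails = np.concatenate([np.cumsum(lam[::-1])[::-1], [0.0]])
    Q = np.arange(lam.size + 1)
    r = Q * (1.0 / u + 1.0 / m) + np.sqrt(tails / u) + np.sqrt(tails / m)
    return float(r.min()), int(r.argmin())

val, Q_star = min_r([0.5, 0.3, 0.2], u=100, m=100)
```

On this toy spectrum the minimum is attained at $Q = n = 3$, where the tail term vanishes and only the linear term $3 \cdot (1/100 + 1/100) = 0.06$ remains.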
+
+Comparison with current state-of-the-art. We now compare our excess risk bound (17) for TKL to the following excess risk bound obtained by the current state-of-the-art method for TKL (Tolstikhin et al., 2014), which is also based on the local complexity method for transductive learning. (Tolstikhin et al., 2014) show that, with high probability,
+
+$$
+\mathcal {E} \left(\widehat {f} _ {\mathbf {d}, m}\right) \leq \Theta \left(\frac {n}{u} r _ {m} ^ {*} + \frac {n}{m} r _ {u} ^ {*} + \frac {1}{m} + \frac {1}{u}\right),
+$$
+
+$$
+r _ {s} ^ {*} \leq \Theta \left(\min _ {0 \leq Q \leq s} \left(\frac {Q}{s} + \sqrt {\frac {\sum_ {q = Q + 1} ^ {n} \widehat {\lambda} _ {q}}{s}}\right)\right), \quad s = u \text { or } m. \tag {18}
+$$
+
+It is emphasized that both our excess risk bound (17) and (18) in (Tolstikhin et al., 2014) are derived under the same assumptions, Assumption 1 (1) and Assumption 2, with our result requiring the additional assumption that $u \gg m^2$ or $m \gg u^2$ . It can be observed that our bound (17) is free of the undesirable factors $n / u$ and $n / m$ in (18). Moreover, it is well known from standard results on the population and empirical Rademacher complexities (Bartlett et al., 2005; Mendelson, 2002) that when the full sample $\mathbf{X}_n$ is sampled uniformly from the unit sphere with $\mathcal{X} = \mathbb{S}^{d - 1}$ and the kernel $K$ is a dot-product kernel, then $\min_{0\leq Q\leq n}\left(\frac{Q}{n} +\sqrt{\frac{\sum_{q = Q + 1}^{n}\widehat{\lambda}_q}{n}}\right)\asymp n^{-2\alpha /(2\alpha +1)}$ with $\alpha > 1/2$ . In this well-studied case, the RHS of (18) diverges when $u = o\left(n^{1/(2\alpha + 1)}\right)$ or $m = o\left(n^{1/(2\alpha + 1)}\right)$ as $u, m \to \infty$ . On the other hand, our excess risk bound (17) always converges to 0 as $u, m \to \infty$ , because $r(u, m, Q) \leq \Theta(1/\sqrt{u} + 1/\sqrt{m})$ .
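This comparison can be illustrated numerically. The sketch below evaluates both bounds up to their $\Theta$-constants under a polynomial eigenvalue decay $\widehat{\lambda}_q = q^{-(2\alpha+1)}$ (a hypothetical spectrum chosen for illustration) with a small test set $u \ll n$; the $n/u$ factor inflates (18), while the TLC quantity in (17) stays small:

```python
import numpy as np

def tails(lam):
    # tails[Q] = sum_{q = Q+1}^{n} lam_q, with tails[n] = 0
    return np.concatenate([np.cumsum(lam[::-1])[::-1], [0.0]])

def r_tlc(lam, u, m):                 # the minimized quantity in our bound (17)
    Q, t = np.arange(lam.size + 1), tails(lam)
    return (Q * (1 / u + 1 / m) + np.sqrt(t / u) + np.sqrt(t / m)).min()

def r_star(lam, s):                   # the localized quantity r_s^* in (18)
    Q, t = np.arange(lam.size + 1), tails(lam)
    return (Q / s + np.sqrt(t / s)).min()

alpha, n, u = 1.0, 10_000, 20
m = n - u
lam = np.arange(1, n + 1, dtype=float) ** (-(2 * alpha + 1))
bound_18 = (n / u) * r_star(lam, m) + (n / m) * r_star(lam, u)
bound_17 = r_tlc(lam, u, m)
assert bound_17 < bound_18            # the n/u factor dominates (18)
```

Here both quantities are compared only up to absolute constants; on this spectrum the gap is already several-fold at $n = 10^4$.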
+
+# 5. Concentration Inequality for Supremum of Empirical Process Involving RVs Sampled Uniformly Without Replacement
+
+Suppose that $\mathcal{H}$ is a function class bounded by $H_0 \geq 2\sqrt{2}$ such that $\mathcal{L}_n(h) = 0$ for all $h \in \mathcal{H}$ . Let $g_u(\mathbf{d}) \coloneqq \sup_{h \in \mathcal{H}} \mathcal{U}_h^{(u)}(\mathbf{Z}_{\mathbf{d}})$ . The following theorem gives a sharp bound for such a general empirical process $g_u(\mathbf{d})$ . This result follows from the concentration inequality for the test-train process in Theorem 3.1.
+
+Theorem 5.1. Suppose $\sup_{h\in \mathcal{H}}T_n(h)\leq r$ , and let $\mathcal{H}^2 := \{h^2\colon h\in \mathcal{H}\}$ . Suppose that $m\gg u^2$ or $u\gg m^2$ . Then for every $x > 0$ , with probability at least $1 - \exp (-x) - (\min \{m,u\})^{2} / \max \{m,u\}$ over $\mathbf{d}$ ,
+
+$$
+\begin{array}{l} g _ {u} (\mathbf {d}) - \mathbb {E} _ {\mathbf {d}} \left[ g _ {u} (\mathbf {d}) \right] \lesssim \frac {m}{n} \Bigg(\sqrt {\frac {r x}{\min \{u , m \}}} \\ + \inf _ {\alpha > 0} \left(\frac {\Re_ {\min \{u , m \}} ^ {+} \left(\mathcal {H} ^ {2}\right)}{\alpha} + \frac {\alpha x}{\min \{u , m \}}\right) + \frac {x}{\min \{u , m \}} \Bigg). \tag {19} \end{array}
+$$
+
+Proof. (19) follows immediately from (66) in Lemma B.10 by noting that $\mathcal{L}_n(h) = 0$ .
+
+We now compare with the existing concentration inequalities for the supremum of empirical processes in (Tolstikhin et al., 2014). There are two versions of such inequalities in (Tolstikhin et al., 2014, Theorem 1), which are presented as follows. For the first version, with probability at least $1 - \exp(-t)$ ,
+
+$$
+g _ {u} (\mathbf {d}) - \mathbb {E} _ {\mathbf {d}} \left[ g _ {u} (\mathbf {d}) \right] \leq 2 \sqrt {\frac {2 n r t}{u ^ {2}}}. \tag {20}
+$$
+
+For the second version in (Tolstikhin et al., 2014, Theorem 2), with probability at least $1 - \exp (-t)$ ,
+
+$$
+\begin{array}{l} g _ {u} (\mathbf {d}) - \mathbb {E} _ {\mathbf {Y} ^ {(u)}} \left[ \bar {g} _ {u} \left(\mathbf {Y} ^ {(u)}\right) \right] \\ \leq \sqrt {\frac {2 (r + 2 \mathbb {E} _ {\mathbf {Y} ^ {(u)}} [ \bar {g} _ {u} (\mathbf {Y} ^ {(u)}) ]) t}{u}} + \frac {t}{3}, \tag {21} \\ \end{array}
+$$
+
+where $\bar{g}_u (\mathbf{Y}^{(u)})\coloneqq \sup_{h\in \mathcal{H}}\frac{1}{u}\cdot \sum_{i = 1}^{u}h(Y_i)$ is the supremum of the empirical process with the iid random variables $\mathbf{Y}^{(u)}$ . Because we are ultimately interested in the deviation of $g_{u}(\mathbf{d})$ from its own expectation, a bound on the gap between $\mathbb{E}_{\mathbf{Y}^{(u)}}\left[\bar{g}_u(\mathbf{Y}^{(u)})\right]$ and $\mathbb{E}_{\mathbf{d}}[g_u(\mathbf{d})]$ is needed; it is provided by (Tolstikhin et al., 2014, Theorem 3) as follows:
+
+$$
+0 \leq \mathbb {E} _ {\mathbf {Y} ^ {(u)}} \left[ \bar {g} _ {u} \left(\mathbf {Y} ^ {(u)}\right) \right] - \mathbb {E} _ {\mathbf {d}} \left[ g _ {u} (\mathbf {d}) \right] \leq \frac {2 m ^ {2}}{n}.
+$$
+
+It follows from (21) and the above inequality that for the second version,
+
+$$
+\begin{array}{l} g _ {u} (\mathbf {d}) - \mathbb {E} _ {\mathbf {d}} \left[ g _ {u} (\mathbf {d}) \right] \\ \leq 2 \sqrt {\frac {2 (r + 2 \mathbb {E} _ {\mathbf {Y} ^ {(u)}} [ \bar {g} _ {u} (\mathbf {Y} ^ {(u)}) ]) t}{u}} + \frac {t}{3} + \frac {2 m ^ {2}}{n}. \tag {22} \\ \end{array}
+$$
+
+As a result, the RHS of (20) diverges when $u = o(\sqrt{n})$ , and the RHS of (22) diverges when $m = \omega(\sqrt{n})$ , as $u, m \to \infty$ . In contrast, the RHS of our bound (19) converges to 0 under many standard learning models by noting that (1) $\Re_{\min \{u, m\}}^{+}(\mathcal{H}^2)$ can be bounded by the inductive Rademacher complexity of $\mathcal{H}^2$ , $\Re_{\min \{u, m\}}^{(\mathrm{ind})}(\mathcal{H}^2)$ , using Theorem 2.1; and (2) the inductive Rademacher complexity $\Re_{\min \{u, m\}}^{(\mathrm{ind})}(\mathcal{H}^2)$ usually converges to 0 at a fast rate, such as $O(\sqrt{1 / \min\{u, m\}})$ , for many standard learning models (Bartlett & Mendelson, 2003), when combined with the contraction property of the inductive Rademacher complexity (e.g., Theorem A.5).
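The divergence regime of (20) is easy to see numerically. A minimal sketch, with $r = t = 1$ as placeholder values not fixed by the text:

```python
import math

r, t = 1.0, 1.0                      # placeholder variance and tail parameters

def rhs_20(n, u):
    # RHS of (20): 2 * sqrt(2 n r t / u^2) = 2 * sqrt(2 n r t) / u
    return 2.0 * math.sqrt(2.0 * n * r * t) / u

small = rhs_20(10_000, int(10_000 ** 0.4))        # u = o(sqrt(n)) regime
large = rhs_20(1_000_000, int(1_000_000 ** 0.4))
assert large > small                 # the bound grows as n increases
```

With $u \asymp n^{0.4}$ the RHS scales like $n^{0.1}$, so (20) becomes vacuous even though the test set size $u$ grows.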
+
+# 6. Conclusion
+
+We present Transductive Local Complexity (TLC) to derive sharp excess risk bounds for transductive learning. TLC is based on our new concentration inequality for the supremum of the empirical process for the gap between the test loss and the training loss in the setting of sampling uniformly without replacement. Using a peeling strategy and a new surrogate variance operator, we derive a sharper excess risk bound than the current state-of-the-art for generic transductive learning with a bounded loss function. As a result of independent interest, a sharp concentration inequality for the general supremum of an empirical process involving random variables sampled uniformly without replacement is derived from the concentration inequality for the test-train process, with a comparison to the existing concentration inequalities.
+
+# Acknowledgments
+
+This work is supported by the 2023 Mayo Clinic and Arizona State University Alliance for Health Care Collaborative Research Seed Grant Program under Grant Award Number AWD00038846.
+
+# Impact Statement
+
+This paper presents work whose goal is to advance the theoretical understanding of sharp generalization capability of transductive learning.
+
+# References
+
+Bardenet, R. and Maillard, O.-A. Concentration inequalities for sampling without replacement. Bernoulli, 21(3): 1361-1385, 2015. ISSN 13507265, 15739759.
+Bartlett, P. L. and Mendelson, S. Rademacher and gaussian complexities: Risk bounds and structural results. J. Mach. Learn. Res., 3:463-482, March 2003.
+Bartlett, P. L., Bousquet, O., and Mendelson, S. Local rademacher complexities. Ann. Statist., 33(4):1497-1537, 08 2005.
+Boucheron, S., Lugosi, G., and Massart, P. Concentration inequalities using the entropy method. The Annals of Probability, 31(3):1583-1614, 2003.
+Cortes, C. and Mohri, M. On transductive regression. In Proceedings of the 19th International Conference on Neural Information Processing Systems, NIPS'06, pp. 305-312, Cambridge, MA, USA, 2006. MIT Press.
+El-Yaniv, R. and Pechyony, D. Transductive rademacher complexity and its applications. J. Artif. Int. Res., 35(1): 193-234, jun 2009.
+Gross, D. and Nesme, V. Note on sampling without replacing from a finite collection of matrices. CoRR, abs/1001.2738, 2010. URL http://arxiv.org/abs/1001.2738.
+Hoeffding, W. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58(301):13-30, 1963.
+Ledoux, M. and Talagrand, M. Probability in Banach Spaces: isoperimetry and processes. Springer, Berlin, May 1991.
+Massart, P. About the constants in talagrand's concentration inequalities for empirical processes. Annals of Probability, Apr 2000.
+Mendelson, S. Geometric parameters of kernel machines. In Kivinen, J. and Sloan, R. H. (eds.), Computational Learning Theory, 15th Annual Conference on Computational Learning Theory, COLT 2002, Sydney, Australia, July 8-10, 2002, Proceedings, volume 2375 of Lecture Notes in Computer Science, pp. 29-43. Springer, 2002.
+Sambale, H. and Sinulis, A. Concentration inequalities on the multislice and for sampling without replacement. Journal of Theoretical Probability, 35(4):2712-2737, Dec 2022. ISSN 1572-9230. doi: 10.1007/s10959-021-01139-9.
+
+Tolstikhin, I. O. Concentration inequalities for samples without replacement. Theory of Probability & Its Applications, 61(3):462-481, 2017. doi: 10.1137/S0040585X97T988277.
+Tolstikhin, I. O., Blanchard, G., and Kloft, M. Localized complexities for transductive learning. In Balcan, M., Feldman, V., and Szepesvári, C. (eds.), Proceedings of The 27th Conference on Learning Theory, COLT 2014, Barcelona, Spain, June 13-15, 2014, volume 35 of JMLR Workshop and Conference Proceedings, pp. 857-884. JMLR.org, 2014.
+Vapnik, V. Estimation of Dependencies Based on Empirical Data: Springer Series in Statistics (Springer Series in Statistics). Springer-Verlag, Berlin, Heidelberg, 1982. ISBN 0387907335.
+Vapnik, V. N. Statistical Learning Theory. Wiley-Interscience, 1998.
+Yang, Y. Improved generalization bound for transductive learning by transductive local complexity and its applications. arXiv preprint arXiv:2309.16858, 2025. URL https://arxiv.org/abs/2309.16858.
+
+We present the basic mathematical results required in our proofs in Section A, then present proofs of our results in this paper in Section B.
+
+# A. Mathematical Tools
+
+We introduce the basic concentration inequalities which we use to develop the main results of this paper. Let $X_{1}, X_{2}, \ldots, X_{n}$ be independent random variables taking values in a measurable space $\mathcal{X}$ , and let $X_{1}^{n}$ denote the vector of these $n$ random variables. Let $f\colon \mathcal{X}^{n} \to \mathbb{R}$ be a measurable function. We are concerned with the concentration of the random variable $Z = f(X_{1}, X_{2}, \ldots, X_{n})$ . Let $X_{1}', X_{2}', \ldots, X_{n}'$ denote independent copies of $X_{1}, X_{2}, \ldots, X_{n}$ , and write
+
+$$
+Z ^ {(i)} = f \left(X _ {1}, \dots , X _ {i - 1}, X _ {i} ^ {\prime}, X _ {i + 1}, \dots , X _ {n}\right).
+$$
+
+Theorem A.1. ((Boucheron et al., 2003, Theorem 2), the exponential version of the Efron-Stein inequality) For all $\theta > 0$ and $\lambda \in (0, 1 / \theta)$ ,
+
+$$
+\log \mathbb {E} \left[ \exp \left(\lambda \left(Z - \mathbb {E} [ Z ]\right)\right) \right] \leq \frac {\lambda \theta}{1 - \lambda \theta} \log \mathbb {E} \left[ \exp \left(\frac {\lambda V _ {+}}{\theta}\right) \right]. \tag {23}
+$$
+
+Theorem A.2. ((Boucheron et al., 2003, Theorem 5, Theorem 6)) Assume that there exist constants $a \geq 0$ and $b > 0$ such that
+
+$$
+V _ {+} (Z) := \mathbb {E} \left[ \sum_ {i = 1} ^ {n} \left(Z - Z ^ {(i)}\right) ^ {2} \mathbb {I} _ {\{Z > Z ^ {(i)} \}} \mid X _ {1} ^ {n} \right] \leq a Z + b.
+$$
+
+Then for any $\lambda \in (0,1 / a)$
+
+$$
+\log \mathbb {E} [ \exp (\lambda (Z - \mathbb {E} [ Z ])) ] \leq \frac {\lambda^ {2}}{1 - a \lambda} (a \mathbb {E} [ Z ] + b), \tag {24}
+$$
+
+and for all $t > 0$
+
+$$
+\Pr \left[ Z > \mathbb {E} [ Z ] + t \right] \leq \exp \left(\frac {- t ^ {2}}{4 a \mathbb {E} [ Z ] + 4 b + 2 a t}\right). \tag {25}
+$$
+
+Moreover, if
+
+$$
+V _ {-} (Z) := \mathbb {E} \left[ \sum_ {i = 1} ^ {n} \left(Z - Z ^ {(i)}\right) ^ {2} \mathbb {I} _ {\{Z < Z ^ {(i)} \}} \mid X _ {1} ^ {n} \right] \leq v (Z)
+$$
+
+holds for a nondecreasing function $v$ . Then, for all $t > 0$
+
+$$
+\Pr \left[ Z < \mathbb {E} [ Z ] - t \right] \leq \exp \left(\frac {- t ^ {2}}{4 \mathbb {E} [ v (Z) ]}\right). \tag {26}
+$$
+
+Remark A.3. While $a > 0$ in the original Theorem 5 of (Boucheron et al., 2003), one can use the same proof of this theorem to show that (25) holds for $a = 0$ with $b > 0$ . $V_{+}(Z)$ defined in this theorem is the "upper variance" of the random variable as a function of independent random variables $X_{1}, X_{2}, \ldots, X_{n}$ . In particular, $V_{+}(Z)$ measures the variance of $Z$ when $X_{i}$ is changed to another sample $X_{i}'$ for all $i \in [n]$ .
+
+Proposition A.4. (Logarithmic Sobolev inequality in (Boucheron et al., 2003, Proposition 10), a variant of that proposed by (Massart, 2000)) For all $\lambda \in \mathbb{R}$ ,
+
+$$
+\begin{array}{l} \lambda \mathbb {E} \left[ Z \exp (\lambda Z) \right] - \mathbb {E} \left[ \exp (\lambda Z) \right] \log \mathbb {E} \left[ Z \exp (\lambda Z) \right] \\ \leq \sum_ {i = 1} ^ {n} \mathbb {E} \left[ \exp (\lambda Z) \psi \left(- \lambda \left(Z - Z ^ {(i)}\right)\right) \mathbb {I} _ {\{Z > Z ^ {(i)} \}} \right], \tag {27} \\ \end{array}
+$$
+
+where $\psi (x)\coloneqq x(e^{x} - 1)$ .
+
+Theorem A.5 (Contraction Property of Inductive Rademacher Complexity (Ledoux & Talagrand, 1991)). Suppose $g$ is Lipschitz continuous with $|g(x) - g(y)| \leq L|x - y|$ . Then
+
+$$
+\mathfrak {R} _ {u} ^ {(\mathrm {i n d})} (g \circ \mathcal {H}) \leq L \mathfrak {R} _ {u} ^ {(\mathrm {i n d})} (\mathcal {H}), \quad \mathfrak {R} _ {m} ^ {(\mathrm {i n d})} (g \circ \mathcal {H}) \leq L \mathfrak {R} _ {m} ^ {(\mathrm {i n d})} (\mathcal {H}).
+$$
+
+Theorem A.6 ((Hoeffding, 1963, Theorem 4), (Gross & Nesme, 2010, Section D)). Let $\{X_i\}_{i=1}^n$ and $\{Y_i\}_{i=1}^n$ be sampled from a population $\{c_i\}_{i=1}^N \subseteq \mathcal{X} \subseteq \mathbb{R}^d$ without replacement and with replacement, respectively. Suppose $f$ is continuous and convex on $\mathcal{X}$ . Then
+
+$$
+\mathbb {E} \left[ f \left(\sum_ {i = 1} ^ {n} X _ {i}\right) \right] \leq \mathbb {E} \left[ f \left(\sum_ {i = 1} ^ {n} Y _ {i}\right) \right]. \tag {28}
+$$
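Theorem A.6 can be checked exactly on a toy population by enumerating all ordered draws; a minimal sketch with an arbitrary population and the convex choice $f(s) = s^2$:

```python
from itertools import permutations, product

c = [0.0, 1.0, 3.0, 6.0]     # a toy population
n = 2                        # number of draws
f = lambda s: s * s          # a convex function

# expectation of f(sum) over all ordered draws WITHOUT replacement
wo = [f(sum(p)) for p in permutations(c, n)]
# expectation of f(sum) over all iid draws WITH replacement
wr = [f(sum(p)) for p in product(c, repeat=n)]

E_wo, E_wr = sum(wo) / len(wo), sum(wr) / len(wr)
assert E_wo <= E_wr          # (28): sampling without replacement is dominated
```

Here $E_{wo} = 32$ and $E_{wr} = 35.5$: both schemes give the sum the same mean, sampling without replacement only reduces its spread, and convexity of $f$ turns that reduction into inequality (28).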
+
+# B. Proofs
+
+# B.1. Proof of Theorem 2.1
+
+Proof of Theorem 2.1. We prove the first bound in (6). We let $\mathbf{Y}^{(u)} = \{Y_1,\ldots ,Y_u\}$ be $u$ independent random variables with each $Y_{i}$ for $i\in [u]$ sampled uniformly from $[n]$ with replacement. Let $\mathbf{Y}^{(u)^{\prime}} = [Y_{1}^{\prime},\dots ,Y_{u}^{\prime}]$ be an independent copy of $\mathbf{Y}^{(u)}$ , and let $\sigma = \{\sigma_i\}_{i = 1}^{\max \{u,m\}}$ be iid Rademacher variables. Let $\mathcal{H}_0 = \left\{h_j^{(0)}\right\}_{j\geq 1}$ be a countable dense subset of $\mathcal{H}$ such that $\overline{\mathcal{H}_0} = \mathcal{H}$ . We define $c_{i} = \left[h_{j}^{(0)}(i) - \mathcal{L}_{n}(h_{j}^{(0)})\right]_{j\in [M]}\in \mathbb{R}^{M}$ for $i\in [n]$ , and let $\{Q_i\}_{i\in [u]}$ and $\{Q_i'\}_{i\in [u]}$ be sampled from $\{c_i\}_{i\in [n]}$ without replacement and with replacement, respectively. Then it follows from Theorem A.6 that $\mathbb{E}\left[f\left(\sum_{i = 1}^{u}Q_{i}\right)\right]\leq \mathbb{E}\left[f\left(\sum_{i = 1}^{u}Q_{i}'\right)\right]$ , which means that
+
+$$
+\mathbb {E} _ {\mathbf {d}} \left[ \max _ {j \in [ M ]} \left(\mathcal {U} _ {h _ {j} ^ {(0)}} ^ {(u)} (\mathbf {Z} _ {\mathbf {d}}) - \mathcal {L} _ {n} (h _ {j} ^ {(0)})\right) \right] \leq \mathbb {E} _ {\mathbf {Y} ^ {(u)}} \left[ \max _ {j \in [ M ]} \left(\frac {1}{u} \sum_ {i = 1} ^ {u} h _ {j} ^ {(0)} (Y _ {i}) - \mathcal {L} _ {n} (h _ {j} ^ {(0)})\right) \right],
+$$
+
+with $f(\mathbf{x}) = \max_{j\in [M]}\mathbf{x}_j$ being a convex function for $\mathbf{x} \in \mathbb{R}^M$ , due to the fact that $\{\mathbf{Z}_{\mathbf{d}}\}$ is a random set of size $u$ sampled uniformly from $[n]$ without replacement. We note that both sequences $\left\{\max_{j\in [M]}\left(\mathcal{U}_{h_j^{(0)}}^{(u)}(\mathbf{Z_d}) - \mathcal{L}_n(h_j^{(0)})\right)\right\}_{M\geq 1}$ and $\left\{\max_{j\in [M]}\left(\frac{1}{u}\sum_{i = 1}^{u}h_{j}^{(0)}(Y_{i}) - \mathcal{L}_{n}(h_{j}^{(0)})\right)\right\}_{M\geq 1}$ are nondecreasing in $M$ . Letting $M\to \infty$ , it then follows from Levi's monotone convergence theorem and the fact that the first element of each sequence is integrable that
+
+$$
+\mathbb {E} _ {\mathbf {d}} \left[ \sup _ {h \in \mathcal {H} _ {0}} \left(\mathcal {U} _ {h} ^ {(u)} (\mathbf {Z} _ {\mathbf {d}}) - \mathcal {L} _ {n} (h)\right) \right] \leq \mathbb {E} _ {\mathbf {Y} ^ {(u)}} \left[ \sup _ {h \in \mathcal {H} _ {0}} \left(\frac {1}{u} \sum_ {i = 1} ^ {u} h (Y _ {i}) - \mathcal {L} _ {n} (h)\right) \right].
+$$
+
+Because $\mathcal{H}_0$ is dense in $\mathcal{H}$ , we have
+
+$$
+\mathbb {E} _ {\mathbf {d}} \left[ \sup _ {h \in \mathcal {H}} \left(\mathcal {U} _ {h} ^ {(u)} \left(\mathbf {Z} _ {\mathbf {d}}\right) - \mathcal {L} _ {n} (h)\right) \right] \leq \mathbb {E} _ {\mathbf {Y} ^ {(u)}} \left[ \sup _ {h \in \mathcal {H}} \left(\frac {1}{u} \sum_ {i = 1} ^ {u} h \left(Y _ {i}\right) - \mathcal {L} _ {n} (h)\right) \right]. \tag {29}
+$$
+
+As a result, we have
+
+$$
+\begin{array}{l} \mathfrak {R} _ {u} ^ {+} (\mathcal {H}) = \mathbb {E} _ {\mathbf {d}} \left[ \sup _ {h \in \mathcal {H}} \left(\mathcal {U} _ {h} ^ {(u)} (\mathbf {Z} _ {\mathbf {d}}) - \mathcal {L} _ {n} (h)\right) \right] \\ \stackrel {1} {\leq} \mathbb {E} _ {\mathbf {Y} ^ {(u)}} \left[ \sup _ {h \in \mathcal {H}} \left(\frac {1}{u} \sum_ {i = 1} ^ {u} h (Y _ {i}) - \mathcal {L} _ {n} (h)\right) \right] \\ \stackrel {2} {=} \mathbb {E} _ {\mathbf {Y} ^ {(u)}} \left[ \sup _ {h \in \mathcal {H}} \left(\frac {1}{u} \sum_ {i = 1} ^ {u} h (Y _ {i}) - \mathbb {E} _ {\mathbf {Y} ^ {(u) ^ {\prime}}} \left[ \frac {1}{u} \sum_ {i = 1} ^ {u} h (Y _ {i} ^ {\prime}) \right]\right) \right] \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \stackrel {3} {\leq} \mathbb {E} _ {\mathbf {Y} ^ {(u)}, \mathbf {Y} ^ {(u) ^ {\prime}}} \left[ \frac {1}{u} \sup _ {h \in \mathcal {H}} \left(\sum_ {i = 1} ^ {u} h \left(Y _ {i}\right) - \sum_ {i = 1} ^ {u} h \left(Y _ {i} ^ {\prime}\right)\right) \right] \\ \stackrel {4} {=} \mathbb {E} _ {\mathbf {Y} ^ {(u)}, \mathbf {Y} ^ {(u) ^ {\prime}}, \pmb {\sigma}} \left[ \frac {1}{u} \sup _ {h \in \mathcal {H}} \left(\sum_ {i = 1} ^ {u} \sigma_ {i} \left(h (Y _ {i}) - h (Y _ {i} ^ {\prime})\right)\right) \right] \\ \leq \mathbb {E} _ {\mathbf {Y} ^ {(u)}, \sigma} \left[ \frac {1}{u} \sup _ {h \in \mathcal {H}} \sum_ {i = 1} ^ {u} \sigma_ {i} h (Y _ {i}) \right] + \mathbb {E} _ {\mathbf {Y} ^ {(u) ^ {\prime}}, \sigma} \left[ \frac {1}{u} \sup _ {h \in \mathcal {H}} \sum_ {i = 1} ^ {u} \sigma_ {i} h \left(Y _ {i} ^ {\prime}\right) \right] \\ = 2 \Re_ {u} ^ {(\mathrm {ind})} (\mathcal {H}). \tag {30} \\ \end{array}
+$$
+
+Here 1 follows from (29). 2 is due to $\mathbb{E}_{\mathbf{Y}^{(u)'}}\left[1 / u\cdot \sum_{i = 1}^{u}h(Y_i')\right] = \mathcal{L}_n(h)$ . 3 is due to Jensen's inequality, and 4 is due to the definition of the Rademacher variables. All the other bounds for $\Re_u^-(\mathcal{H})$ , $\Re_m^+(\mathcal{H})$ , and $\Re_{m}^{-}(\mathcal{H})$ in (6) can be proved in a similar manner.
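For a small finite class, the chain of inequalities above can be verified exactly by enumeration. The following toy example uses hypothetical function values (not tied to any particular loss) and computes both sides of (30) as exact expectations:

```python
from itertools import combinations, product

n, u = 4, 2
# a tiny function class; each h is its value vector on [n]
H = [(0.0, 1.0, 0.0, 1.0), (1.0, 0.0, 1.0, 0.0), (0.5, 0.5, -0.5, -0.5)]
L = [sum(h) / n for h in H]                          # full-sample means L_n(h)

# R_u^+: exact expectation over all u-subsets drawn without replacement
subsets = list(combinations(range(n), u))
R_plus = sum(max(sum(h[i] for i in S) / u - Lh for h, Lh in zip(H, L))
             for S in subsets) / len(subsets)

# inductive Rademacher complexity: exact expectation over iid indices and signs
R_ind = sum(max(sum(s * h[y] for s, y in zip(sig, Y)) for h in H) / u
            for Y in product(range(n), repeat=u)
            for sig in product((-1, 1), repeat=u)) / (n ** u * 2 ** u)

assert R_plus <= 2 * R_ind                           # the bound (30)
```

Both expectations are computed exactly over all subsets, index vectors, and sign patterns, so the factor-2 symmetrization bound is checked without Monte Carlo error.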
+
+
+
+# B.2. Concentration Inequality for Test-Train Process: Proof of Theorem 3.1
+
+Let $\mathcal{H}$ be the function class defined in Section 3 of the main paper. For all $h\in \mathcal{H}$ , we define
+
+$$
+E (h, \mathbf {d}, \mathbf {d} ^ {(i)}) := \mathcal {U} _ {h} ^ {(u)} (\mathbf {Z} _ {\mathbf {d}}) - \mathcal {L} _ {h} ^ {(m)} (\overline {{\mathbf {Z} _ {\mathbf {d}}}}) - \mathcal {U} _ {h} ^ {(u)} (\mathbf {Z} _ {\mathbf {d} ^ {(i)}}) + \mathcal {L} _ {h} ^ {(m)} (\overline {{\mathbf {Z} _ {\mathbf {d} ^ {(i)}}}})
+$$
+
+as the change of the test-train loss when $\mathbf{d}$ is changed to $\mathbf{d}^{(i)}$ . The following lemma gives the value of $E(h, \mathbf{d}, \mathbf{d}^{(i)})$ in four cases. It is based on the fact that there can be at most one pair of different elements in $\{\mathbf{Z}_{\mathbf{d}}\}$ and $\{\mathbf{Z}_{\mathbf{d}^{(i)}}\}$ .
+
+Lemma B.1. For any $h\in \mathcal{H}$ , there are four cases for the value of $E(h,\mathbf{d},\mathbf{d}^{(i)})$ for $i\in [u]$ .
+
+Case 1: $E(h, \mathbf{d}, \mathbf{d}^{(i)}) = \left(\frac{1}{u} + \frac{1}{m}\right) \left(h(\mathbf{Z}_{\mathbf{d}}(i)) - h(\mathbf{Z}_{\mathbf{d}^{(i)}}(q(i)))\right)$ ,
+
+$$
+\text {if } d _ {i} \neq d _ {i} ^ {\prime}, \; q (i) \leq u, \; p (i) > u,
+$$
+
+Case 2: $E(h,\mathbf{d},\mathbf{d}^{(i)}) = \left(\frac{1}{u} +\frac{1}{m}\right)\left(h(\mathbf{Z}_{\mathbf{d}}(p(i))) - h(\mathbf{Z}_{\mathbf{d}^{(i)}}(i))\right),$
+
+$$
+\text {if } d _ {i} \neq d _ {i} ^ {\prime}, \; p (i) \leq u, \; q (i) > u,
+$$
+
+Case 3: $E(h, \mathbf{d}, \mathbf{d}^{(i)}) = \left(\frac{1}{u} + \frac{1}{m}\right) \left(h(\mathbf{Z}_{\mathbf{d}}(i)) - h(\mathbf{Z}_{\mathbf{d}^{(i)}}(i))\right)$ ,
+
+$$
+\text {if } d _ {i} \neq d _ {i} ^ {\prime}, \; p (i) > u, \; q (i) > u,
+$$
+
+Case 4: $E(h, \mathbf{d}, \mathbf{d}^{(i)}) = 0$ ,
+
+$$
+\text {if } d _ {i} = d _ {i} ^ {\prime} \text { or } p (i), q (i) \leq u.
+$$
+
+Here
+
+$$
+q (i) := \min \left\{i ^ {\prime} \in [ i + 1, u ]: \mathbf {Z} _ {\mathbf {d} ^ {(i)}} \left(i ^ {\prime}\right) = i \right\}, \tag {31}
+$$
+
+$$
+p (i) := \min \left\{i ^ {\prime} \in [ i + 1, u ]: \mathbf {Z} _ {\mathbf {d}} \left(i ^ {\prime}\right) = i \right\}. \tag {32}
+$$
+
+In (31) and (32), we use the convention that the min over an empty set returns $+\infty$ .
+
+Proof. It can be checked by running Algorithm 1 that $\mathbf{Z}_{\mathbf{d}}$ and $\mathbf{Z}_{\mathbf{d}^{(i)}}$ can differ at most by one element. As a reminder, we let $\{\mathbf{Z}\}$ denote a set containing all the elements of a vector $\mathbf{Z}$ regardless of the orders of these elements in $\mathbf{Z}$ .
+
+By the definition in (31) and (32), when $p(i) \leq u$ , then $\mathbf{Z}_{\mathbf{d}^{(i)}}(p(i)) = \mathbf{Z}_{\mathbf{d}}(i)$ . When $q(i) \leq u$ , then $\mathbf{Z}_{\mathbf{d}}(q(i)) = \mathbf{Z}_{\mathbf{d}^{(i)}}(i)$ . To see this, when $p(i) \leq u$ , the element $\mathbf{Z}_{\mathbf{d}}(i)$ would be picked up at a location $i' = p(i) \in (i,u]$ in $\mathbf{Z}_{\mathbf{d}^{(i)}}$ . That is, $\mathbf{Z}_{\mathbf{d}^{(i)}}(p(i)) = \mathbf{Z}_{\mathbf{d}}(i)$ . When $q(i) \leq u$ , the element $\mathbf{Z}_{\mathbf{d}^{(i)}}(i)$ would be picked up at a location $i' = q(i) \in (i,u]$ in $\mathbf{Z}_{\mathbf{d}}$ . That is, $\mathbf{Z}_{\mathbf{d}}(q(i)) = \mathbf{Z}_{\mathbf{d}^{(i)}}(i)$ .
+
+As a result, when $d_i' = d_i$ , or $p(i), q(i) \leq u$ , then $\{\mathbf{Z}_{\mathbf{d}}\} = \{\mathbf{Z}_{\mathbf{d}^{(i)}}\}$ and $E(h, \mathbf{d}, \mathbf{d}^{(i)}) = 0$ , which proves Case 4. Otherwise, when $d_i' \neq d_i$ and exactly one element of $\{p(i), q(i)\}$ is not $\infty$ , the conditions of Case 1 or Case 2 hold. When the conditions of Case 2 hold, the pair of different elements in $\{\mathbf{Z}_{\mathbf{d}}\}$ and $\{\mathbf{Z}_{\mathbf{d}^{(i)}}\}$ is $\{\mathbf{Z}_{\mathbf{d}}(p(i)), \mathbf{Z}_{\mathbf{d}^{(i)}}(i)\}$ . When the conditions of Case 1 hold, the pair of different elements is $\{\mathbf{Z}_{\mathbf{d}}(i), \mathbf{Z}_{\mathbf{d}^{(i)}}(q(i))\}$ . It follows that the formulas in Case 1 and Case 2 hold.
+
+If $d_i' \neq d_i$ and both $p(i)$ and $q(i)$ are $\infty$ , then the only element of $\{\mathbf{Z}_{\mathbf{d}}\}$ not in $\{\mathbf{Z}_{\mathbf{d}^{(i)}}\}$ is $\mathbf{Z}_{\mathbf{d}}(i)$ , and the only element of $\{\mathbf{Z}_{\mathbf{d}^{(i)}}\}$ not in $\{\mathbf{Z}_{\mathbf{d}}\}$ is $\mathbf{Z}_{\mathbf{d}^{(i)}}(i)$ , so that Case 3 holds.
+
+The proof of Theorem 3.1 needs sampling $m$ elements from the full sample $\mathbf{X}_{n}$ uniformly without replacement as the training features. To this end, let $\tilde{\mathbf{d}} = \left[\tilde{d}_1,\dots ,\tilde{d}_m\right]\in \mathbb{N}^m$ be a random vector, where $\left\{\tilde{d}_i\right\}_{i = 1}^m$ are $m$ independent random variables such that $\tilde{d}_i$ takes values in $[i:n]$ uniformly at random. If we invoke the function RANDPERM in Algorithm 1 with the input changed from $u$ to $m$ , then $\mathbf{Z}_{\tilde{\mathbf{d}}} = \mathrm{RANDPERM}(m)$ consists of the first $m$ elements of a uniformly distributed permutation of $[n]$ . We use $\overline{\mathbf{Z}_{\tilde{\mathbf{d}}}} = [n]\setminus \{\mathbf{Z}_{\tilde{\mathbf{d}}}\}$ to denote the indices not in $\{\mathbf{Z}_{\tilde{\mathbf{d}}}\}$ . Similar to $\{\mathbf{Z}_{\mathbf{d}}\}$ introduced in Section 2, $\{\mathbf{Z}_{\tilde{\mathbf{d}}}\}$ is a random set of size $m$ sampled uniformly from $[n]$ without replacement. Let $\tilde{\mathbf{d}}' = [\tilde{d}_1',\dots ,\tilde{d}_m']$ be an independent copy of $\tilde{\mathbf{d}}$ , and $\tilde{\mathbf{d}}^{(i)} = [\tilde{d}_1,\dots ,\tilde{d}_{i - 1},\tilde{d}_i',\tilde{d}_{i + 1},\dots ,\tilde{d}_m]$ .
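Algorithm 1 (RANDPERM) is not reproduced in this excerpt; the following Python sketch is a plausible reconstruction consistent with the description above: each $d_i$ is uniform on $[i:n]$, and a Fisher-Yates-style sweep of swaps turns $\mathbf{d}$ into the first $m$ entries of a uniformly distributed permutation of $[n]$:

```python
import random

def randperm_prefix(n, m, rng):
    """First m entries of a uniformly random permutation of [1..n], driven by
    d with each d_i uniform on {i, ..., n} (a reconstruction of RANDPERM,
    which is not shown in this excerpt)."""
    z = list(range(1, n + 1))
    d = [rng.randint(i, n) for i in range(1, m + 1)]   # d_i ~ Unif[i:n]
    for i, di in enumerate(d, start=1):
        z[i - 1], z[di - 1] = z[di - 1], z[i - 1]      # swap positions i and d_i
    return z[:m], d

Z, d = randperm_prefix(10, 4, random.Random(0))
assert len(set(Z)) == 4 and all(1 <= v <= 10 for v in Z)   # distinct, in range
```

The complement $[n] \setminus \{\mathbf{Z}_{\tilde{\mathbf{d}}}\}$ then plays the role of $\overline{\mathbf{Z}_{\tilde{\mathbf{d}}}}$.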
+
+For all $h\in \mathcal{H}$ , we define
+
+$$
+\tilde {E} (h, \tilde {\mathbf {d}}, \tilde {\mathbf {d}} ^ {(i)}) := \mathcal {L} _ {h} ^ {(m)} \left(\mathbf {Z} _ {\tilde {\mathbf {d}}}\right) - \mathcal {U} _ {h} ^ {(u)} \left(\overline {{\mathbf {Z} _ {\tilde {\mathbf {d}}}}}\right) - \mathcal {L} _ {h} ^ {(m)} \left(\mathbf {Z} _ {\tilde {\mathbf {d}} ^ {(i)}}\right) + \mathcal {U} _ {h} ^ {(u)} \left(\overline {{\mathbf {Z} _ {\tilde {\mathbf {d}} ^ {(i)}}}}\right). \tag {33}
+$$
+
+Similar to the four cases in Lemma B.1, it follows by repeating the argument in the proof of Lemma B.1 that we have the following four cases for $\tilde{E}$ as stated in the following lemma.
+
+Lemma B.2. For any $h\in \mathcal{H}$ , there are four cases for the value of $\tilde{E} (h,\tilde{\mathbf{d}},\tilde{\mathbf{d}}^{(i)})$ for $i\in [m]$ .
+
+Case 1: $\tilde{E} (h,\tilde{\mathbf{d}},\tilde{\mathbf{d}}^{(i)}) = \left(\frac{1}{u} +\frac{1}{m}\right)\left(h(\mathbf{Z}_{\tilde{\mathbf{d}}}(i)) - h(\mathbf{Z}_{\tilde{\mathbf{d}}^{(i)}}(\tilde{q} (i)))\right),$
+
+$$
+\text {if } \tilde {d} _ {i} \neq \tilde {d} _ {i} ^ {\prime}, \; \tilde {q} (i) \leq m, \; \tilde {p} (i) > m,
+$$
+
+Case 2: $\tilde{E} (h,\tilde{\mathbf{d}},\tilde{\mathbf{d}}^{(i)}) = \left(\frac{1}{u} +\frac{1}{m}\right)\left(h(\mathbf{Z}_{\tilde{\mathbf{d}}}(\tilde{p} (i))) - h(\mathbf{Z}_{\tilde{\mathbf{d}}^{(i)}}(i))\right),$
+
+$$
+\text {if } \tilde {d} _ {i} \neq \tilde {d} _ {i} ^ {\prime}, \; \tilde {p} (i) \leq m, \; \tilde {q} (i) > m,
+$$
+
+Case 3: $\tilde{E} (h,\tilde{\mathbf{d}},\tilde{\mathbf{d}}^{(i)}) = \left(\frac{1}{u} +\frac{1}{m}\right)\left(h(\mathbf{Z}_{\tilde{\mathbf{d}}}(i)) - h(\mathbf{Z}_{\tilde{\mathbf{d}}^{(i)}}(i))\right),$
+
+$$
+\text {if } \tilde {d} _ {i} \neq \tilde {d} _ {i} ^ {\prime}, \; \tilde {p} (i) > m, \; \tilde {q} (i) > m,
+$$
+
+Case 4: $\tilde{E} (h,\tilde{\mathbf{d}},\tilde{\mathbf{d}}^{(i)}) = 0$ ,
+
+$$
+\text {if } \tilde {d} _ {i} = \tilde {d} _ {i} ^ {\prime} \text { or } \tilde {p} (i), \tilde {q} (i) \leq m,
+$$
+
+where
+
+$$
+\tilde {q} (i) := \min \left\{i ^ {\prime} \in [ i + 1, m ]: \mathbf {Z} _ {\tilde {\mathbf {d}} ^ {(i)}} \left(i ^ {\prime}\right) = i \right\}, \tag {34}
+$$
+
+$$
+\tilde {p} (i) := \min \left\{i ^ {\prime} \in [ i + 1, m ]: \mathbf {Z} _ {\tilde {\mathbf {d}}} \left(i ^ {\prime}\right) = i \right\}, \tag {35}
+$$
+
+and similar to (31)-(32) in Lemma B.1, we use the convention that the min over an empty set returns $+\infty$ .
+
+We need the concept of a chain associated with $\mathbf{d},\tilde{\mathbf{d}}$ , the following surrogate processes, and the following lemmas, Lemma B.3-Lemma B.6, before the proofs of Theorem B.7 and Theorem B.8.
+
+A set $\{j_1, \ldots, j_Q\} \subseteq [N]$ , where $j_1 < j_2 < \ldots < j_Q$ , is defined to be a chain in $[N]$ associated with $\mathbf{v}$ if $j_k = \mathbf{v}(j_{k-1})$ holds for all $k \in [2:Q]$ when $Q \geq 2$ . We define the event $\Omega$ as the event that there exists a chain $\{j_1, \ldots, j_{Q'}\}$ in $[u]$ associated with $\mathbf{d}$ with $2 \leq Q' \leq u$ . Similarly, let $\tilde{\Omega}$ denote the event that there is a chain $\{j_1, \ldots, j_{Q'}\}$ in $[m]$ associated with $\tilde{\mathbf{d}}$ with $2 \leq Q' \leq m$ . $\Omega(Q)$ , $\tilde{\Omega}(Q)$ are defined for chains of length not less than a general $Q$ in (Yang, 2025). It also follows from (Yang, 2025) that $\operatorname*{Pr}[\Omega] \leq u^2 / m$ when $m \gg u^2$ , and $\operatorname*{Pr}\left[\tilde{\Omega}\right] \leq m^2 / u$ when $u \gg m^2$ .
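On our reading of the definition, a chain of length at least two in $[u]$ exists precisely when some $i \in [u]$ has its swap target inside $(i, u]$, i.e. $i < d_i \leq u$. The sketch below (an illustrative reconstruction; the union bound $u^2/m$ is the one quoted from (Yang, 2025)) checks this characterization and the rarity of $\Omega$ empirically:

```python
import random

def in_omega(d, u):
    """Event Omega: a chain {j1 < j2} in [u] with j2 = d_{j1}, which happens
    exactly when some i in [u] has swap target d_i inside (i, u]."""
    return any(i < d[i - 1] <= u for i in range(1, u + 1))

assert in_omega([2, 2, 3], u=3)          # d_1 = 2 gives the chain {1, 2}
assert not in_omega([1, 2, 3], u=3)      # d_i = i: no chain exists

# empirical frequency of Omega in the regime m = n - u >> u^2
rng = random.Random(0)
n, u, trials = 10_000, 20, 2_000
hits = sum(in_omega([rng.randint(i, n) for i in range(1, u + 1)], u)
           for _ in range(trials))
assert hits / trials <= u * u / (n - u)  # consistent with Pr[Omega] <= u^2 / m
```

The empirical frequency is roughly $\sum_{i=1}^{u}(u-i)/(n-i+1) \approx u^2/(2n)$, comfortably below the stated bound $u^2/m$.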
+
+We define a surrogate process $\bar{t}_u(\mathbf{d},\mathcal{H}^{\prime})$ for a class of functions $\mathcal{H}'$ with ranges in $[0,H']$ ( $H' > 0$ ) as
+
+$$
+\bar {t} _ {u} (\mathbf {d}, \mathcal {H} ^ {\prime}) := \left\{ \begin{array}{l l} t _ {u} (\mathbf {d}, \mathcal {H} ^ {\prime}) & \mathbf {d} \notin \Omega , \\ - \sup _ {h \in \mathcal {H} ^ {\prime}} \mathcal {L} _ {n} (h) & \mathbf {d} \in \Omega . \end{array} \right. \tag {36}
+$$
+
+We also define a surrogate process $\bar{t}_m(\tilde{\mathbf{d}},\mathcal{H}^{\prime})$ for a class of functions $\mathcal{H}'$ with ranges in $[0,H']$ ( $H' > 0$ ) as
+
+$$
+\bar {t} _ {m} \left(\tilde {\mathbf {d}}, \mathcal {H} ^ {\prime}\right) := \left\{ \begin{array}{l l} t _ {m} \left(\tilde {\mathbf {d}}, \mathcal {H} ^ {\prime}\right) & \tilde {\mathbf {d}} \notin \tilde {\Omega}, \\ - \sup _ {h \in \mathcal {H} ^ {\prime}} \mathcal {L} _ {n} (h) & \tilde {\mathbf {d}} \in \tilde {\Omega}. \end{array} \right. \tag {37}
+$$
+
+Lemma B.3. For any $h \in \mathcal{H}$ , let $\mathcal{A}_{h,2} = \{i \in [u] : E(h, \mathbf{d}, \mathbf{d}^{(i)})$ satisfies Case 2 in Lemma B.1\} and $\tilde{\mathcal{A}}_{h,2} = \{i \in [m] : \tilde{E}(h, \tilde{\mathbf{d}}, \tilde{\mathbf{d}}^{(i)})$ satisfies Case 2 in Lemma B.2\} . Then we have
+
+$$
+\sum_ {i \in \mathcal {A} _ {h, 2}} \left(h \left(\mathbf {Z} _ {\mathbf {d}} (p (i))\right)\right) ^ {2} \leq \sum_ {i = 1} ^ {n} \left(h \left(\mathbf {Z} _ {\mathbf {d}} (i)\right)\right) ^ {2}, \quad \text {if } \mathbf {d} \notin \Omega . \tag {38}
+$$
+
+$$
+\sum_ {i \in \tilde {\mathcal {A}} _ {h, 2}} \left(h \left(\mathbf {Z} _ {\tilde {\mathbf {d}}} (\tilde {p} (i))\right)\right) ^ {2} \leq \sum_ {i = 1} ^ {n} \left(h \left(\mathbf {Z} _ {\tilde {\mathbf {d}}} (i)\right)\right) ^ {2}, \quad \text {if } \tilde {\mathbf {d}} \notin \tilde {\Omega}. \tag {39}
+$$
+
+Proof. This lemma follows from (Yang, 2025, Lemma B.6, Lemma B.11). $\square$
+
+Lemma B.4. For a function class $\mathcal{H}'$ , we define
+
+$$
+t _ {u} (\mathbf {d}, \mathcal {H} ^ {\prime}) := \sup _ {h \in \mathcal {H} ^ {\prime}} \left(\mathcal {U} _ {h} ^ {(u)} \left(\mathbf {Z} _ {\mathbf {d}}\right) - \mathcal {L} _ {n} (h)\right), \tag {40}
+$$
+
+$$
+t _ {m} \left(\tilde {\mathbf {d}}, \mathcal {H} ^ {\prime}\right) := \sup _ {h \in \mathcal {H} ^ {\prime}} \left(\mathcal {L} _ {h} ^ {(m)} \left(\mathbf {Z} _ {\tilde {\mathbf {d}}}\right) - \mathcal {L} _ {n} (h)\right). \tag {41}
+$$
+
+Then
+
+$$
+\mathbb {E} _ {\mathbf {d}} \left[ t _ {u} \left(\mathbf {d}, \mathcal {H} ^ {\prime}\right) \right] = \Re_ {u} ^ {+} \left(\mathcal {H} ^ {\prime}\right), \quad \mathbb {E} _ {\tilde {\mathbf {d}}} \left[ t _ {m} \left(\tilde {\mathbf {d}}, \mathcal {H} ^ {\prime}\right) \right] = \Re_ {m} ^ {+} \left(\mathcal {H} ^ {\prime}\right). \tag {42}
+$$
+
+Moreover, for $g(\mathbf{d})$ defined in (4), we have
+
+$$
+\mathbb {E} _ {\mathbf {d}} [ g (\mathbf {d}) ] \leq \Re_ {u} ^ {+} (\mathcal {H}) + \Re_ {m} ^ {-} (\mathcal {H}). \tag {43}
+$$
+
+Proof. $\mathbb{E}_{\mathbf{d}}[t_u(\mathbf{d}, \mathcal{H}^\prime)] = \Re_u^+ (\mathcal{H}^\prime)$ follows from the definition of Transductive Complexity in (5). We have
+
+$$
+\begin{array}{l} \mathbb {E} _ {\tilde {\mathbf {d}}} \left[ t _ {m} (\tilde {\mathbf {d}}, \mathcal {H} ^ {\prime}) \right] = \mathbb {E} _ {\tilde {\mathbf {d}}} \left[ \sup _ {h \in \mathcal {H} ^ {\prime}} \left(\mathcal {L} _ {h} ^ {(m)} (\mathbf {Z} _ {\tilde {\mathbf {d}}}) - \mathcal {L} _ {n} (h)\right) \right] \\ = \mathbb {E} _ {\mathbf {d}} \left[ \sup _ {h \in \mathcal {H} ^ {\prime}} \left(\mathcal {L} _ {h} ^ {(m)} (\overline {{\mathbf {Z} _ {\mathbf {d}}}}) - \mathcal {L} _ {n} (h)\right) \right] = \Re_ {m} ^ {+} (\mathcal {H} ^ {\prime}). \\ \end{array}
+$$
+
+where the second last equality is due to the fact that $\{\mathbf{Z}_{\tilde{\mathbf{d}}}\}$ and $\{\overline{\mathbf{Z_d}}\}$ have the same distribution, that is, they are sets of size $m$ sampled uniformly from $[n]$ without replacement. This proves (42).
+
+We now prove (43). We first have
+
+$$
+\begin{array}{l} \mathbb {E} _ {\mathbf {d}} \left[ g (\mathbf {d}) \right] = \mathbb {E} _ {\mathbf {d}} \left[ \sup _ {h \in \mathcal {H}} \left(\mathcal {U} _ {h} ^ {(u)} (\mathbf {Z} _ {\mathbf {d}}) - \mathcal {L} _ {h} ^ {(m)} (\overline {{\mathbf {Z} _ {\mathbf {d}}}})\right) \right] \\ = \mathbb {E} _ {\mathbf {d}} \left[ \sup _ {h \in \mathcal {H}} \left(\mathcal {U} _ {h} ^ {(u)} (\mathbf {Z} _ {\mathbf {d}}) - \mathcal {L} _ {n} (h) + \mathcal {L} _ {n} (h) - \mathcal {L} _ {h} ^ {(m)} (\overline {{\mathbf {Z} _ {\mathbf {d}}}})\right) \right] \\ \stackrel {{1}} {{\leq}} \underbrace {\mathbb {E} _ {\mathbf {d}} \left[ \sup _ {h \in \mathcal {H}} \left(\mathcal {U} _ {h} ^ {(u)} (\mathbf {Z} _ {\mathbf {d}}) - \mathcal {L} _ {n} (h)\right) \right]} _ {\mathfrak {R} _ {u} ^ {+} (\mathcal {H})} + \underbrace {\mathbb {E} _ {\mathbf {d}} \left[ \sup _ {h \in \mathcal {H}} \left(\mathcal {L} _ {n} (h) - \mathcal {L} _ {h} ^ {(m)} (\overline {{\mathbf {Z} _ {\mathbf {d}}}})\right) \right]} _ {\mathfrak {R} _ {m} ^ {-} (\mathcal {H})} \\ \end{array}
+$$
+
+$$
+= \Re_ {u} ^ {+} (\mathcal {H}) + \Re_ {m} ^ {-} (\mathcal {H})
+$$
+
+Here ① follows from the sub-additivity of the supremum.
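+The sub-additivity step ① can be spelled out as follows: for any functions $A, B$ on $\mathcal{H}$ ,
+
+$$
+\sup _ {h \in \mathcal {H}} \left(A (h) + B (h)\right) \leq \sup _ {h \in \mathcal {H}} A (h) + \sup _ {h \in \mathcal {H}} B (h),
+$$
+
+applied here with $A(h) = \mathcal{U}_h^{(u)}(\mathbf{Z}_{\mathbf{d}}) - \mathcal{L}_n(h)$ and $B(h) = \mathcal{L}_n(h) - \mathcal{L}_h^{(m)}(\overline{\mathbf{Z}_{\mathbf{d}}})$ , followed by the linearity of expectation.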
+
+Lemma B.5 ((Yang, 2025, Lemma B.8, Lemma B.13) with $Q = 2$ ). Let $\mathcal{H}'$ be a class of functions with ranges in $[0, H']$ , and let $t_u, \bar{t}_u$ be defined in (40) and (36), respectively. Suppose $\sup_{h \in \mathcal{H}'} \mathcal{L}_n(h) \leq r$ for $r > 0$ . Then
+
+$$
+\log \mathbb {E} _ {\mathbf {d}} \left[ \exp \left(\lambda \left(\bar {t} _ {u} (\mathbf {d}, \mathcal {H} ^ {\prime}) - \mathbb {E} _ {\mathbf {d}} \left[ \bar {t} _ {u} (\mathbf {d}, \mathcal {H} ^ {\prime}) \right]\right)\right) \right] \leq \frac {2 H ^ {\prime} \lambda^ {2} \left(\mathbb {E} _ {\mathbf {d}} \left[ t _ {u} (\mathbf {d} , \mathcal {H} ^ {\prime}) \right] + r\right)}{u - 2 H ^ {\prime} \lambda} \tag {44}
+$$
+
+holds for all $\lambda \in (0, u / (2H'))$ , where the surrogate process $\bar{t}_u$ is defined in (36). Similarly,
+
+$$
+\left. \right. \log \mathbb {E} _ {\tilde {\mathbf {d}}} \left[ \exp \left(\lambda \left(\bar {t} _ {m} (\tilde {\mathbf {d}}, \mathcal {H} ^ {\prime}) - \mathbb {E} _ {\tilde {\mathbf {d}}} \left[ \bar {t} _ {m} (\tilde {\mathbf {d}}, \mathcal {H} ^ {\prime}) \right]\right)\right)\right] \leq \frac {2 H ^ {\prime} \lambda^ {2} \left(\mathbb {E} _ {\tilde {\mathbf {d}}} \left[ t _ {m} (\tilde {\mathbf {d}} , \mathcal {H} ^ {\prime}) \right] + r\right)}{m - 2 H ^ {\prime} \lambda} \tag {45}
+$$
+
+holds for all $\lambda \in (0,m / (2H'))$ , where the surrogate process $\bar{t}_m$ is defined in (37).
+
+Lemma B.6 (Special case of (Yang, 2025, Lemma B.7, Lemma B.12) with $Q = 2$ ). For any $h \in \mathcal{H}$ , let
+
+$$
+\mathcal {A} _ {h, 1} = \{i \in [ u ]: \text {Case 1 defined in Lemma B.1 is satisfied} \},
+$$
+
+$$
+\tilde {\mathcal {A}} _ {h, 1} = \{i \in [ m ]: \text {Case 1 defined in Lemma B.2 is satisfied} \}.
+$$
+
+Then with probability at least $1 - \operatorname{Pr}[\Omega]$ , for all $h \in \mathcal{H}$ , we have
+
+$$
+\mathbb {E} \left[ \sum_ {i = 1} ^ {u} \left(h \left(\mathbf {Z} _ {\mathbf {d} ^ {(i)}} (i)\right)\right) ^ {2} \mid \mathbf {d} \right] \leq \frac {n u}{m} T _ {n} (h), \mathbb {E} \left[ \sum_ {i \in \mathcal {A} _ {h, 1}} \left(h \left(\mathbf {Z} _ {\mathbf {d} ^ {(i)}} (q (i))\right)\right) ^ {2} \mid \mathbf {d} \right] \leq \frac {2 n u}{m} T _ {n} (h), \tag {46}
+$$
+
+$$
+\mathbb {E} \left[ \sum_ {i = 1} ^ {m} \left(h \left(\mathbf {Z} _ {\tilde {\mathbf {d}} ^ {(i)}} (i)\right)\right) ^ {2} \mid \tilde {\mathbf {d}} \right] \leq \frac {n m}{u} T _ {n} (h), \mathbb {E} \left[ \sum_ {i \in \tilde {\mathcal {A}} _ {h, 1}} \left(h \left(\mathbf {Z} _ {\tilde {\mathbf {d}} ^ {(i)}} (\tilde {q} (i))\right)\right) ^ {2} \mid \tilde {\mathbf {d}} \right] \leq \frac {2 n m}{u} T _ {n} (h). \tag {47}
+$$
+
+The following theorem states the concentration inequality for the supremum of the test-train process $g(\mathbf{d})$ when $m \gg u^2$ .
+
+Theorem B.7. Suppose $\sup_{h \in \mathcal{H}} T_n(h^2) \leq r$ . If $m \gg u^2$ , then for all $x > 0$ , with probability at least $1 - \exp(-x) - \operatorname*{Pr}[\Omega]$ over $\mathbf{d}$ ,
+
+$$
+g (\mathbf {d}) \leq \mathbb {E} _ {\mathbf {d}} [ g (\mathbf {d}) ] + 8 \sqrt {\frac {5 r x}{u}} + 2 \sqrt {2} \inf _ {\alpha > 0} \left(\frac {\Re_ {u} ^ {+} \left(\mathcal {H} ^ {2}\right)}{\alpha} + \frac {2 \alpha x}{u}\right) + \frac {8 H _ {0} ^ {2} x}{u}. \tag {48}
+$$
+
+Proof. This theorem follows from the proof of (Yang, 2025, Theorem 5.1) with the special case that $Q = 2$ .
+
+The following theorem, Theorem B.8, states the concentration inequality for the supremum of the test-train process $g(\mathbf{d})$ when $u \gg m^2$ . Many technical details in the proof of Theorem B.7 are reused in the proof of Theorem B.8. The major difference is that we consider a different supremum of empirical process, $\sup_{h \in \mathcal{H}} \left( \mathcal{U}_h^{(u)}(\overline{\mathbf{Z}_{\tilde{\mathbf{d}}}}) - \mathcal{L}_h^{(m)}(\mathbf{Z}_{\tilde{\mathbf{d}}}) \right)$ , instead of $g(\mathbf{d})$ as in the proof of Theorem B.7, to handle the case that $u \geq m$ .
+
+Theorem B.8. Suppose $\sup_{h\in \mathcal{H}}T_n(h)\leq r$ . If $u\gg m^2$ , then for all $x > 0$ , with probability at least $1 - \exp (-x) - \operatorname *{Pr}\left[\tilde{\Omega}\right]$ over $\mathbf{d}$ ,
+
+$$
+g (\mathbf {d}) \leq \mathbb {E} [ g (\mathbf {d}) ] + 8 \sqrt {\frac {5 r x}{m}} + 2 \sqrt {2} \inf _ {\alpha > 0} \left(\frac {\Re_ {m} ^ {+} \left(\mathcal {H} ^ {2}\right)}{\alpha} + \frac {2 \alpha x}{m}\right) + \frac {8 H _ {0} ^ {2} x}{m}. \tag {49}
+$$
+
+Proof. This theorem follows from the proof of (Yang, 2025, Theorem 5.2) with the special case that $Q = 2$ .
+
+Proof of Theorem 3.1. (7) follows by combining the upper bound (48) in Theorem B.7 for the case that $m \gg u^2$ , and the upper bound (49) for the case that $u \gg m^2$ in Theorem B.8. In particular, we note that when $Q = 2$ , $\operatorname{Pr}[\Omega] \leq u^2 / m$ when $m \gg u^2$ , and $\operatorname{Pr}\left[\tilde{\Omega}\right] \leq m^2 / u$ when $u \gg m^2$ .
+
+# B.3. Proof of Theorem 3.2
+
+Below are the definitions and lemmas useful for the proof of Theorem 3.2.
+
+For $r > 0$ , define the function class
+
+$$
+\mathcal {H} ^ {(r)} = \left\{\frac {r}{w (h)} h: h \in \mathcal {H} \right\}, \tag {50}
+$$
+
+where $w(h)\coloneqq \min \left\{r\lambda^k:k\geq 0,r\lambda^k\geq \tilde{T}_n(h)\right\}$ with $\lambda >1$ .
+
+Define
+
+$$
+U _ {r} ^ {+} := \sup _ {s \in \mathcal {H} ^ {(r)}} \left(\mathcal {U} _ {s} ^ {(u)} \left(\mathbf {Z} _ {\mathbf {d}}\right) - \mathcal {L} _ {s} ^ {(m)} \left(\overline {{\mathbf {Z} _ {\mathbf {d}}}}\right)\right). \tag {51}
+$$
+
+Lemma B.9. Fix $\lambda > 1$ , $K > 1$ , and $r > 0$ . If $U_r^+ \leq \frac{r}{\lambda K}$ , then
+
+$$
+\mathcal {U} _ {h} ^ {(u)} \left(\mathbf {Z} _ {\mathbf {d}}\right) \leq \mathcal {L} _ {h} ^ {(m)} \left(\overline {{\mathbf {Z} _ {\mathbf {d}}}}\right) + \frac {r}{\lambda K} + \frac {\tilde {T} _ {n} (h)}{K}, \quad \forall h \in \mathcal {H}. \tag {52}
+$$
+
+Proof. If $\tilde{T}_n(h) \leq r$ , then $w(h) = r$ and $s = \frac{r}{w(h)} h = h$ . Therefore, $U_r^+ \leq \frac{r}{\lambda K}$ implies $\mathcal{U}_s^{(u)}(\mathbf{Z}_{\mathbf{d}}) - \mathcal{L}_s^{(m)}(\overline{\mathbf{Z}_{\mathbf{d}}}) \leq \frac{r}{\lambda K}$ , and (52) holds since $\tilde{T}_n(h) \geq 0$ for all $h \in \mathcal{H}$ .
+
+If $\tilde{T}_n(h) > r$ , then $w(h) = r\lambda^k$ with $\tilde{T}_n(h)\in (r\lambda^{k - 1},r\lambda^k ]$ . Again, it follows from $U_r^+ \leq \frac{r}{\lambda K}$ that
+
+$$
+\mathcal {U} _ {s} ^ {(u)} (\mathbf {Z} _ {\mathbf {d}}) - \mathcal {L} _ {s} ^ {(m)} (\overline {{\mathbf {Z} _ {\mathbf {d}}}}) \leq \frac {r}{\lambda K}, \quad s = \frac {h}{\lambda^ {k}},
+$$
+
+and, since $\mathcal{U}_s^{(u)}$ and $\mathcal{L}_s^{(m)}$ are linear in $s$ , multiplying both sides by $\lambda^k$ and using $\tilde{T}_n(h) > r\lambda^{k-1}$ , we have
+
+$$
+\mathcal {U} _ {h} ^ {(u)} (\mathbf {Z _ {d}}) - \mathcal {L} _ {h} ^ {(m)} (\overline {{\mathbf {Z _ {d}}}}) \leq \frac {r \lambda^ {k - 1}}{K} \leq \frac {\tilde {T} _ {n} (h)}{K},
+$$
+
+and (52) still holds.
+
+Proof of Theorem 3.2. Let $r$ be chosen such that $r \geq \max \{r_u, r_m\}$ . Let $s = \frac{r}{w(h)} h \in \mathcal{H}^{(r)}$ ; we claim that $T_n(s) \leq r$ . Indeed, if $\tilde{T}_n(h) \leq r$ , then $w(h) = r$ and $s = h$ , so $T_n(s) \leq \tilde{T}_n(h) \leq r$ . Otherwise, if $\tilde{T}_n(h) > r$ , then $s = \frac{h}{\lambda^k}$ where $k$ is such that $\tilde{T}_n(h) \in (r\lambda^{k-1}, r\lambda^k]$ , and $T_n(s) = \frac{T_n(h)}{\lambda^{2k}} \leq \frac{\tilde{T}_n(h)}{\lambda^{2k}} \leq \frac{r\lambda^k}{\lambda^{2k}} \leq r$ . Hence $T_n(s) \leq r$ for all $s \in \mathcal{H}^{(r)}$ .
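+The scaling identity $T_n(s) = T_n(h) / \lambda^{2k}$ used above is the quadratic homogeneity of $T_n$ : since $T_n(h) = \mathcal{L}_n(h^2)$ and $\mathcal{L}_n$ is linear,
+
+$$
+T _ {n} \left(\frac {h}{\lambda^ {k}}\right) = \mathcal {L} _ {n} \left(\frac {h ^ {2}}{\lambda^ {2 k}}\right) = \frac {1}{\lambda^ {2 k}} \mathcal {L} _ {n} \left(h ^ {2}\right) = \frac {T _ {n} (h)}{\lambda^ {2 k}}.
+$$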
+
+We first consider the case that $m \geq u$ . It follows from (43) in Lemma B.4 that $\mathbb{E}_{\mathbf{d}}[U_r^+] \leq \Re_u^+(\mathcal{H}^{(r)}) + \Re_m^-(\mathcal{H}^{(r)})$ . Applying (7) in Theorem 3.1 with $\alpha = 1$ to the function class $\mathcal{H}^{(r)}$ , then for all $x > 0$ , with probability at least $1 - e^{-x}$ ,
+
+$$
+U _ {r} ^ {+} \leq \Re_ {u} ^ {+} \left(\mathcal {H} ^ {(r)}\right) + \Theta \left(\Re_ {u} ^ {+} \left(\mathcal {H} _ {1} ^ {(r)}\right)\right) + \Re_ {m} ^ {-} \left(\mathcal {H} ^ {(r)}\right) + \Theta \left(\frac {x}{u}\right) + \Theta \left(\sqrt {\frac {r x}{u}}\right), \tag {53}
+$$
+
+where $\mathcal{H}_1^{(r)} = \left\{h^2\colon h\in \mathcal{H}^{(r)}\right\}$
+
+Define the function class $\mathcal{H}(x,y) \coloneqq \left\{h \in \mathcal{H} \colon x \leq \tilde{T}_n(h) \leq y\right\}$ . Let $T$ be the smallest integer such that $r\lambda^{T + 1} \geq T_0 \coloneqq \sup_{h \in \mathcal{H}} \tilde{T}_n(h)$ . If $T_0 = \infty$ , then set $T = \infty$ . We have
+
+$$
+\begin{array}{l} \mathfrak {R} _ {u} ^ {+} \left(\mathcal {H} ^ {(r)}\right) \leq \mathbb {E} \left[ \sup _ {h \in \mathcal {H} (0, r)} R _ {u, \mathbf {d}} ^ {+} h \right] + \mathbb {E} \left[ \sup _ {h \in \mathcal {H} (r, T _ {0})} \frac {r}{w (h)} R _ {u, \mathbf {d}} ^ {+} h \right] \\ \leq \mathbb {E} \left[ \sup _ {h \in \mathcal {H} (0, r)} R _ {u, \mathbf {d}} ^ {+} h \right] + \sum_ {t = 0} ^ {T} \mathbb {E} \left[ \sup _ {h \in \mathcal {H} (r \lambda^ {t}, r \lambda^ {t + 1})} \frac {r}{w (h)} R _ {u, \mathbf {d}} ^ {+} h \right] \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \stackrel {1} {\leq} \psi_ {u} (r) + \sum_ {t = 0} ^ {T} \lambda^ {- t} \psi_ {u} (r \lambda^ {t + 1}) \\ \stackrel {2} {\leq} \psi_ {u} (r) \left(1 + \lambda^ {1 / 2} \sum_ {t = 0} ^ {T} \lambda^ {- t / 2}\right). \tag {54} \\ \end{array}
+$$
+
+Here ① is due to $w(h)\geq r\lambda^t$ and $\mathbb{E}\left[\sup_{h:\tilde{T}_n(h)\leq r\lambda^{t + 1}}R_{u,\mathbf{d}}^+ h\right]\leq \psi_u(r\lambda^{t + 1})$ , and ② is due to the fact that the sub-root function $\psi_u$ satisfies $\psi_{u}(\alpha r)\leq \sqrt{\alpha}\psi_{u}(r)$ for $\alpha >1$ .
+
+Setting $\lambda = 4$ on the RHS of (54), we have
+
+$$
+\Re_ {u} ^ {+} \left(\mathcal {H} ^ {(r)}\right) \leq 5 \psi_ {u} (r) \leq 5 \sqrt {r r _ {u}}. \tag {55}
+$$
+
+The last inequality follows from $\psi_u(r) \leq \sqrt{\frac{r}{r_u}}\psi_u(r_u) = \sqrt{rr_u}$ because $r \geq r_u$ and $\psi_u(r_u) = r_u$ .
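+As a sanity check on the constant $5$ in (55): setting $\lambda = 4$ and letting $T \to \infty$ in (54),
+
+$$
+1 + \lambda^ {1 / 2} \sum_ {t = 0} ^ {\infty} \lambda^ {- t / 2} = 1 + 2 \sum_ {t = 0} ^ {\infty} 2 ^ {- t} = 1 + 2 \cdot \frac {1}{1 - 1 / 2} = 5.
+$$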
+
+Following a similar argument,
+
+$$
+\begin{array}{l} \mathfrak {R} _ {u} ^ {+} \left(\mathcal {H} _ {1} ^ {(r)}\right) \leq \mathbb {E} \left[ \sup _ {h \in \mathcal {H} (0, r)} R _ {u, \mathbf {d}} ^ {+} h ^ {2} \right] + \sum_ {t = 0} ^ {T} \mathbb {E} \left[ \sup _ {h \in \mathcal {H} (r \lambda^ {t}, r \lambda^ {t + 1})} \frac {r ^ {2}}{w (h) ^ {2}} R _ {u, \mathbf {d}} ^ {+} h ^ {2} \right] \\ \stackrel {③} {\leq} \psi_ {u} (r) + \sum_ {t = 0} ^ {T} \lambda^ {- 2 t} \psi_ {u} (r \lambda^ {t + 1}) \\ \stackrel {④} {\leq} \psi_ {u} (r) \left(1 + \lambda^ {1 / 2} \sum_ {t = 0} ^ {T} \lambda^ {- 3 t / 2}\right). \tag {56} \\ \end{array}
+$$
+
+Here ③ is due to $w(h)\geq r\lambda^t$ (so that $r^2 / w(h)^2 \leq \lambda^{-2t}$ ) and $\mathbb{E}\left[\sup_{h\colon \tilde{T}_n(h)\leq r\lambda^{t + 1}}R_{u,\mathbf{d}}^+ h^2\right]\leq \psi_u(r\lambda^{t + 1})$ , and ④ is due to the fact that the sub-root function $\psi_u$ satisfies $\psi_{u}(\alpha r)\leq \sqrt{\alpha}\psi_{u}(r)$ for $\alpha >1$ .
+
+Again, setting $\lambda = 4$ on the RHS of (56), we have
+
+$$
+\Re_ {u} ^ {+} \left(\mathcal {H} _ {1} ^ {(r)}\right) \leq \frac {2 3}{7} \psi_ {u} (r) \leq \frac {2 3}{7} \sqrt {r r _ {u}}. \tag {57}
+$$
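+The constant $23/7$ in (57) comes from the same evaluation applied to (56): with $\lambda = 4$ and $T \to \infty$ ,
+
+$$
+1 + \lambda^ {1 / 2} \sum_ {t = 0} ^ {\infty} \lambda^ {- 3 t / 2} = 1 + 2 \sum_ {t = 0} ^ {\infty} 8 ^ {- t} = 1 + 2 \cdot \frac {1}{1 - 1 / 8} = 1 + \frac {16}{7} = \frac {23}{7}.
+$$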
+
+Similar to the argument for $\Re_u^+(\mathcal{H}^{(r)})$ , since $r \geq r_m$ , we have
+
+$$
+\Re_ {m} ^ {-} \left(\mathcal {H} ^ {(r)}\right) \leq 5 \sqrt {r r _ {m}} \tag {58}
+$$
+
+It follows from (53), (55), (57), and (58) that
+
+$$
+U _ {r} ^ {+} \leq \Theta (\sqrt {r r _ {u}}) + \Theta (\sqrt {r r _ {m}}) + \Theta \left(\frac {x}{u}\right) + \Theta \left(\sqrt {\frac {r x}{u}}\right) := P (r). \tag {59}
+$$
+
+Let $r_0$ be the largest solution to $P(r) = \frac{r}{\lambda K}$ for the fixed $K > 1$ . We have
+
+$$
+r _ {0} \leq r _ {1} := \lambda \left(\lambda K ^ {2} \left(\Theta \left(\sqrt {r _ {u}}\right) + \Theta \left(\sqrt {r _ {m}}\right) + \Theta \left(\sqrt {\frac {x}{u}}\right)\right) ^ {2} + \frac {\Theta (K x)}{u}\right). \tag {60}
+$$
+
+Then $r_1 \geq \max \{r_u, r_m\}$ . Setting $r = r_1$ in (59), we have
+
+$$
+U _ {r _ {1}} ^ {+} \leq \frac {r _ {1}}{\lambda K} = \lambda K \left(\Theta (\sqrt {r _ {u}}) + \Theta (\sqrt {r _ {m}}) + \Theta \left(\sqrt {\frac {x}{u}}\right)\right) ^ {2} + \Theta \left(\frac {x}{u}\right).
+$$
+
+It follows from Lemma B.9 that
+
+$$
+\mathcal {U} _ {h} ^ {(u)} (\mathbf {Z} _ {\mathbf {d}}) \leq \mathcal {L} _ {h} ^ {(m)} (\overline {{\mathbf {Z} _ {\mathbf {d}}}}) + \frac {r _ {1}}{\lambda K} + \frac {\tilde {T} _ {n} (h)}{K}
+$$
+
+$$
+\begin{array}{l} = \mathcal {L} _ {h} ^ {(m)} (\overline {{\mathbf {Z _ {d}}}}) + \frac {\tilde {T} _ {n} (h)}{K} + \lambda K \left(\Theta (\sqrt {r _ {u}}) + \Theta (\sqrt {r _ {m}}) + \Theta \left(\sqrt {\frac {x}{u}}\right)\right) ^ {2} + \Theta \left(\frac {x}{u}\right) \\ \leq \mathcal {L} _ {h} ^ {(m)} \left(\overline {{\mathbf {Z} _ {\mathbf {d}}}}\right) + \frac {\tilde {T} _ {n} (h)}{K} + c _ {0} \left(r _ {u} + r _ {m}\right) + \frac {c _ {1} x}{u}, \quad \forall h \in \mathcal {H}. \tag {61} \\ \end{array}
+$$
+
+Regarding the other case that $u \geq m$ , we repeat the argument above and obtain the same upper bound as (61) with $m$ and $u$ swapped:
+
+$$
+\mathcal {U} _ {h} ^ {(u)} \left(\mathbf {Z} _ {\mathbf {d}}\right) \leq \mathcal {L} _ {h} ^ {(m)} \left(\overline {{\mathbf {Z} _ {\mathbf {d}}}}\right) + \frac {\tilde {T} _ {n} (h)}{K} + c _ {0} \left(r _ {u} + r _ {m}\right) + \frac {c _ {1} x}{m}, \quad \forall h \in \mathcal {H}. \tag {62}
+$$
+
+(10) then follows from (61) and (62).
+
+# B.4. Sharp TLC Excess Risk Bound for Generic Transductive Learning
+
+Proof of Theorem 3.5. We first check that $T_{n}(h) \leq \tilde{T}_{n}(h)$ . For any $h \in \mathcal{H}$ , by the definition of $\tilde{T}_n(h)$ in (12), for every $\varepsilon > 0$ , there exist $f_1, f_2 \in \mathcal{F}$ such that $h = \ell_{f_1} - \ell_{f_2}$ , and $2B\mathcal{L}_n(\ell_{f_1} - \ell_{f_n^*}) + 2B\mathcal{L}_n(\ell_{f_2} - \ell_{f_n^*}) < \tilde{T}_n(h) + \varepsilon$ . Therefore,
+
+$$
+\begin{array}{l} T _ {n} (h) = \mathcal {L} _ {n} \left(\left(\ell_ {f _ {1}} - \ell_ {f _ {2}}\right) ^ {2}\right) \\ \leq 2 T _ {n} \left(\ell_ {f _ {1}} - \ell_ {f _ {n} ^ {*}}\right) + 2 T _ {n} \left(\ell_ {f _ {2}} - \ell_ {f _ {n} ^ {*}}\right) \\ \leq 2 B \mathcal {L} _ {n} \left(\ell_ {f _ {1}} - \ell_ {f _ {n} ^ {*}}\right) + 2 B \mathcal {L} _ {n} \left(\ell_ {f _ {2}} - \ell_ {f _ {n} ^ {*}}\right) < \tilde {T} _ {n} (h) + \varepsilon , \\ \end{array}
+$$
+
+where the first inequality follows from the Cauchy-Schwarz inequality, and the second inequality is due to Assumption 1(2). It follows that $T_{n}(h) \leq \tilde{T}_{n}(h)$ .
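+Concretely, the first inequality is the elementary bound $(a + b)^2 \leq 2a^2 + 2b^2$ (itself a consequence of the Cauchy-Schwarz inequality), applied pointwise with $a = \ell_{f_1} - \ell_{f_n^*}$ and $b = \ell_{f_n^*} - \ell_{f_2}$ :
+
+$$
+\left(\ell_ {f _ {1}} - \ell_ {f _ {2}}\right) ^ {2} = \left(\left(\ell_ {f _ {1}} - \ell_ {f _ {n} ^ {*}}\right) + \left(\ell_ {f _ {n} ^ {*}} - \ell_ {f _ {2}}\right)\right) ^ {2} \leq 2 \left(\ell_ {f _ {1}} - \ell_ {f _ {n} ^ {*}}\right) ^ {2} + 2 \left(\ell_ {f _ {n} ^ {*}} - \ell_ {f _ {2}}\right) ^ {2},
+$$
+
+and averaging over the $n$ points via $\mathcal{L}_n$ gives the second line, using $T_n(g) = \mathcal{L}_n(g^2)$ and $T_n(-g) = T_n(g)$ .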
+
+As a result, we can apply Theorem 3.2 with $\tilde{T}_n(\cdot)$ defined in this theorem. Then (15) follows from Theorem 3.2.
+
+Lemma B.10. Suppose Assumption 1 holds and $m \gg u^2$ or $u \gg m^2$ . Define
+
+$$
+g _ {u} ^ {+} (\mathbf {d}) := \sup _ {h \in \mathcal {H}} \left(\mathcal {U} _ {h} ^ {(u)} \left(\mathbf {Z} _ {\mathbf {d}}\right) - \mathcal {L} _ {n} (h)\right), \tag {63}
+$$
+
+$$
+g _ {m} (\tilde {\mathbf {d}}) := \sup _ {h \in \mathcal {H}} \left(\mathcal {L} _ {n} (h) - \mathcal {L} _ {h} ^ {(m)} \left(\mathbf {Z} _ {\tilde {\mathbf {d}}}\right)\right), \tag {64}
+$$
+
+$$
+g _ {u} ^ {-} (\mathbf {d}) := \sup _ {h \in \mathcal {H}} \left(\mathcal {L} _ {n} (h) - \mathcal {U} _ {h} ^ {(u)} \left(\mathbf {Z} _ {\mathbf {d}}\right)\right). \tag {65}
+$$
+
+Suppose $\sup_{h\in \mathcal{H}}T_n(h)\leq r$ . Then for every $x > 0$ , with probability at least $1 - \exp (-x) - (\min \{m,u\})^2 /\max \{m,u\}$ over $\mathbf{d}$ ,
+
+$$
+g _ {u} ^ {+} (\mathbf {d}) - \mathbb {E} _ {\mathbf {d}} \left[ g _ {u} ^ {+} (\mathbf {d}) \right] \lesssim \frac {m}{n} \left(\sqrt {\frac {r x}{\min \{u , m \}}} + \inf _ {\alpha > 0} \left(\frac {\left[ \Re_ {\min \{u , m \}} ^ {+} \left(\mathcal {H} ^ {2}\right) \right] _ {+}}{\alpha} + \frac {\alpha x}{\min \{u , m \}}\right) + \frac {x}{\min \{u , m \}}\right), \tag {66}
+$$
+
+where $[\cdot ]_{+}\coloneqq \max \{\cdot ,0\}$ .
+
+Furthermore, let $\mathcal{H} = \Delta_{\mathcal{F}}^{*}$ , let $\psi_{u,m}$ be a sub-root function, and let $r^*$ be the fixed point of $\psi_{u,m}$ . Assume that for all $r \geq r^*$ ,
+
+$$
+\begin{array}{l} \psi_ {u, m} (r) \geq \max \left\{\mathbb {E} \left[ \sup _ {h: h \in \Delta_ {\mathcal {F}} ^ {*}, B \mathcal {L} _ {n} (h) \leq r} R _ {u, \mathbf {d}} ^ {-} h \right], \mathbb {E} \left[ \sup _ {h: h \in \Delta_ {\mathcal {F}} ^ {*}, B \mathcal {L} _ {n} (h) \leq r} R _ {m, \mathbf {d}} ^ {-} h \right], \right. \\ \left. \mathbb {E} \left[ \sup _ {h: h \in \Delta_ {\mathcal {F}} ^ {*}, B \mathcal {L} _ {n} (h) \leq r} R _ {\min \{u, m \}, \mathbf {d}} ^ {+} h ^ {2} \right] \right\}, \tag {67} \\ \end{array}
+$$
+
+Then for any fixed constant $K > 1$ , there exists a positive constant $\widehat{c}_2$ depending on $K$ and $L_0$ such that for every $x > 0$ , with probability at least $1 - \exp(-x) - \left( \min \{m, u\} \right)^2 / \max \{m, u\}$ over $\tilde{\mathbf{d}}$ ,
+
+$$
+g _ {m} (\tilde {\mathbf {d}}) \leq \frac {B \mathcal {L} _ {n} (h)}{K} + \widehat {c} _ {2} \left(r ^ {*} + \frac {x}{\min \{u , m \}}\right). \tag {68}
+$$
+
+Similarly, with probability at least $1 - \exp (-x) - (\min \{m,u\})^2 /\max \{m,u\}$ over $\mathbf{d}$
+
+$$
+g _ {u} ^ {-} (\mathbf {d}) \leq \frac {B \mathcal {L} _ {n} (h)}{K} + \widehat {c} _ {2} \left(r ^ {*} + \frac {x}{\min \{u , m \}}\right). \tag {69}
+$$
+
+Proof. We have $g_{u}^{+}(\mathbf{d}) = \frac{m}{n} g(\mathbf{d})$ , therefore,
+
+$$
+g _ {u} ^ {+} (\mathbf {d}) - \mathbb {E} _ {\mathbf {d}} \left[ g _ {u} ^ {+} (\mathbf {d}) \right] = \frac {m}{n} \left(g (\mathbf {d}) - \mathbb {E} _ {\mathbf {d}} \left[ g (\mathbf {d}) \right]\right),
+$$
+
+where $g$ is defined in (4). Then (66) follows from (7) in Theorem 3.1.
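+The identity $g_u^+(\mathbf{d}) = \frac{m}{n} g(\mathbf{d})$ can be sketched as follows, assuming (as elsewhere in this appendix) that $\mathcal{U}_h^{(u)}$ and $\mathcal{L}_h^{(m)}$ are averages over the $u$ test indices and the $m$ training indices, respectively, with $n = u + m$ , so that $n \mathcal{L}_n(h) = u\, \mathcal{U}_h^{(u)}(\mathbf{Z}_{\mathbf{d}}) + m\, \mathcal{L}_h^{(m)}(\overline{\mathbf{Z}_{\mathbf{d}}})$ . Then
+
+$$
+\mathcal {U} _ {h} ^ {(u)} \left(\mathbf {Z} _ {\mathbf {d}}\right) - \mathcal {L} _ {n} (h) = \left(1 - \frac {u}{n}\right) \mathcal {U} _ {h} ^ {(u)} \left(\mathbf {Z} _ {\mathbf {d}}\right) - \frac {m}{n} \mathcal {L} _ {h} ^ {(m)} \left(\overline {{\mathbf {Z} _ {\mathbf {d}}}}\right) = \frac {m}{n} \left(\mathcal {U} _ {h} ^ {(u)} \left(\mathbf {Z} _ {\mathbf {d}}\right) - \mathcal {L} _ {h} ^ {(m)} \left(\overline {{\mathbf {Z} _ {\mathbf {d}}}}\right)\right),
+$$
+
+and taking the supremum over $h \in \mathcal{H}$ preserves the positive factor $\frac{m}{n}$ .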
+
+It can be verified that
+
+$$
+g _ {m} (\tilde {\mathbf {d}}) \stackrel {\mathrm {dist}} {=} \frac {u}{n} \sup _ {h \in \mathcal {H}} \left(\mathcal {U} _ {h} ^ {(u)} (\mathbf {Z} _ {\mathbf {d}}) - \mathcal {L} _ {h} ^ {(m)} (\overline {{\mathbf {Z} _ {\mathbf {d}}}})\right) = \frac {u}{n} g (\mathbf {d}),
+$$
+
+where the first equality is due to the fact that $\{\mathbf{Z}_{\mathbf{d}}\}$ and $\{\overline{\mathbf{Z}_{\tilde{\mathbf{d}}}}\}$ are both sets of size $u$ sampled uniformly from $[n]$ without replacement. Here $\stackrel{\mathrm{dist}}{=}$ indicates that the random variables on both sides follow the same distribution. As a result, $\mathbb{E}_{\tilde{\mathbf{d}}} \left[ g_m(\tilde{\mathbf{d}}) \right] = u / n \cdot \mathbb{E}_{\mathbf{d}}[g(\mathbf{d})]$ and
+
+$$
+g _ {m} (\tilde {\mathbf {d}}) - \mathbb {E} _ {\tilde {\mathbf {d}}} \left[ g _ {m} (\tilde {\mathbf {d}}) \right] \stackrel {\mathrm {dist}} {=} \frac {u}{n} \left(g (\mathbf {d}) - \mathbb {E} _ {\mathbf {d}} \left[ g (\mathbf {d}) \right]\right),
+$$
+
+and it follows from (7) in Theorem 3.1 that
+
+$$
+g _ {m} (\tilde {\mathbf {d}}) - \mathbb {E} _ {\tilde {\mathbf {d}}} \left[ g _ {m} (\tilde {\mathbf {d}}) \right] \lesssim \sqrt {\frac {r x}{\min \{u , m \}}} + \inf _ {\alpha > 0} \left(\frac {\left[ \Re_ {\min \{u , m \}} ^ {+} \left(\mathcal {H} ^ {2}\right) \right] _ {+}}{\alpha} + \frac {\alpha x}{\min \{u , m \}}\right) + \frac {x}{\min \{u , m \}}. \tag {70}
+$$
+
+We further have
+
+$$
+g _ {u} ^ {-} (\mathbf {d}) = \frac {m}{n} \sup _ {h \in \mathcal {H}} \left(\mathcal {L} _ {h} ^ {(m)} \left(\overline {{\mathbf {Z} _ {\mathbf {d}}}}\right) - \mathcal {U} _ {h} ^ {(u)} \left(\mathbf {Z} _ {\mathbf {d}}\right)\right). \tag {71}
+$$
+
+Taking $\left\{\vec{\mathbf{x}}_i\right\}_{i\in \overline{\mathbf{Z_d}}}$ as the training features and $\left\{\vec{\mathbf{x}}_i\right\}_{i\in \mathbf{Z_d}}$ as the test features, we can repeat the proofs of Theorem B.7 and Theorem B.8 and obtain the following concentration inequality. For every $x > 0$ , with probability at least $1 - \exp (-x) - (\min \{m,u\})^2 /\max \{m,u\}$ over $\mathbf{d}$ ,
+
+$$
+\begin{array}{l} \sup _ {h \in \mathcal {H}} \left(\mathcal {L} _ {h} ^ {(m)} (\overline {{\mathbf {Z _ {d}}}}) - \mathcal {U} _ {h} ^ {(u)} (\mathbf {Z _ {d}})\right) - \mathbb {E} _ {\mathbf {d}} \left[ \sup _ {h \in \mathcal {H}} \left(\mathcal {L} _ {h} ^ {(m)} (\overline {{\mathbf {Z _ {d}}}}) - \mathcal {U} _ {h} ^ {(u)} (\mathbf {Z _ {d}})\right) \right] \\ \lesssim \sqrt {\frac {r x}{\min \{u , m \}}} + \inf _ {\alpha > 0} \left(\frac {\Re_ {\min \{u , m \}} ^ {+} \left(\mathcal {H} ^ {2}\right)}{\alpha} + \frac {\alpha x}{\min \{u , m \}}\right) + \frac {x}{\min \{u , m \}}. \tag {72} \\ \end{array}
+$$
+
+It follows from (71) and (72) that
+
+$$
+g _ {u} ^ {-} (\mathbf {d}) - \mathbb {E} _ {\mathbf {d}} \left[ g _ {u} ^ {-} (\mathbf {d}) \right] \lesssim \sqrt {\frac {r x}{\min \{u , m \}}} + \inf _ {\alpha > 0} \left(\frac {\left[ \Re_ {\min \{u , m \}} ^ {+} \left(\mathcal {H} ^ {2}\right) \right] _ {+}}{\alpha} + \frac {\alpha x}{\min \{u , m \}}\right) + \frac {x}{\min \{u , m \}}. \tag {73}
+$$
+
+Furthermore, we have $\mathbb{E}_{\tilde{\mathbf{d}}}\left[g_m(\tilde{\mathbf{d}})\right] = \Re_{m}^{-}(\mathcal{H})$ and $\mathbb{E}_{\mathbf{d}}[g_u^- (\mathbf{d})] = \Re_u^- (\mathcal{H}).$
+
+For any $h \in \mathcal{H} = \Delta_{\mathcal{F}}^{*}$ such that $h = \ell_f - \ell_{f_n^*}$ with $f \in \mathcal{F}$ , we set $\tilde{T}_n(h) = B\mathcal{L}_n(h)$ . It follows from Assumption 1(2) that $T_{n}(h) \leq \tilde{T}_{n}(h)$ for all $h \in \Delta_{\mathcal{F}}^{*}$ .
+
+We note that $\sup_{h: h \in \Delta_{\mathcal{F}}^{*}, B\mathcal{L}_{n}(h) \leq r} R_{\min\{u, m\}, \mathbf{d}}^{+} h^{2} \geq 0$ , $\sup_{h: h \in \Delta_{\mathcal{F}}^{*}, B\mathcal{L}_{n}(h) \leq r} R_{u, \mathbf{d}}^{-} h \geq 0$ and $\sup_{h: h \in \Delta_{\mathcal{F}}^{*}, B\mathcal{L}_{n}(h) \leq r} R_{m, \mathbf{d}}^{-} h \geq 0$ hold because $0 \in \Delta_{\mathcal{F}}^{*}$ . With $\psi_{u, m}$ given in this lemma, applying the argument in the proof of Theorem 3.2 to (70) and (73), we have
+
+$$
+g _ {m} (\tilde {\mathbf {d}}) \leq \frac {B \mathcal {L} _ {n} (h)}{K} + \widehat {c} _ {2} \left(r ^ {*} + \frac {x}{\min \{u , m \}}\right),
+$$
+
+$$
+g _ {u} ^ {-} (\mathbf {d}) \leq \frac {B \mathcal {L} _ {n} (h)}{K} + \widehat {c} _ {2} \left(r ^ {*} + \frac {x}{\min \{u , m \}}\right),
+$$
+
+where $K > 1$ is a fixed constant, and $\widehat{c}_2$ is a positive constant depending on $K$ and $L_0$ .
+
+Theorem B.11. Suppose that Assumption 1 holds and $m \gg u^2$ or $u \gg m^2$ . Let $\psi_{u,m}$ be a sub-root function and let $r^*$ be the fixed point of $\psi_{u,m}$ . Assume that for all $r \geq r^*$ ,
+
+$$
+\begin{array}{l} \psi_ {u, m} (r) \geq \max \left\{\mathbb {E} \left[ \sup _ {h: h \in \Delta_ {\mathcal {F}} ^ {*}, B \mathcal {L} _ {n} (h) \leq r} R _ {u, \mathbf {d}} ^ {-} h \right], \mathbb {E} \left[ \sup _ {h: h \in \Delta_ {\mathcal {F}} ^ {*}, B \mathcal {L} _ {n} (h) \leq r} R _ {m, \mathbf {d}} ^ {-} h \right], \right. \\ \left. \mathbb {E} \left[ \sup _ {h: h \in \Delta_ {\mathcal {F}} ^ {*}, B \mathcal {L} _ {n} (h) \leq r} R _ {\min \{u, m \}, \mathbf {d}} ^ {+} h ^ {2} \right] \right\}, \tag {74} \\ \end{array}
+$$
+
+Then for every $x > 0$ , with probability at least $1 - 2 \exp(-x) - 2 \left( \min \{m, u\} \right)^2 / \max \{m, u\}$ ,
+
+$$
+\mathcal {L} _ {n} \left(\ell_ {\widehat {f} _ {\mathbf {d}, u}} - \ell_ {f _ {n} ^ {*}}\right) \leq c _ {2} \left(r ^ {*} + \frac {x}{u}\right), \quad \mathcal {L} _ {n} \left(\ell_ {\widehat {f} _ {\mathbf {d}, m}} - \ell_ {f _ {n} ^ {*}}\right) \leq c _ {2} \left(r ^ {*} + \frac {x}{m}\right), \tag {75}
+$$
+
+where $c_{2}$ is a positive constant depending on $B$ and $L_{0}$ .
+
+Proof. Let $\mathcal{H} = \Delta_{\mathcal{F}}^{*}$ . It follows from (69) in Lemma B.10 that with high probability, for all $h \in \mathcal{H}$ ,
+
+$$
+\mathcal {L} _ {n} (h) - \mathcal {U} _ {h} ^ {(u)} (\mathbf {Z} _ {\mathbf {d}}) \leq \frac {B \mathcal {L} _ {n} (h)}{K} + \widehat {c} _ {2} \left(r ^ {*} + \frac {x}{\min \{u , m \}}\right)
+$$
+
+holds for a fixed constant $K > 1$ , and $\widehat{c}_2$ depends on $K$ and $L_0$ . We set $h = \ell_{\widehat{f}_{\mathbf{d},u}} - \ell_{f_n^*}$ in the above inequality, and note that $\mathcal{U}_h^{(u)}(\mathbf{Z_d}) = \mathcal{U}_{\widehat{f}_{\mathbf{d},u}}^{(u)}(\mathbf{Z_d}) - \mathcal{U}_{f_n^*}^{(u)}(\mathbf{Z_d}) \leq 0$ due to the optimality of $\widehat{f}_{\mathbf{d},u}$ . Let $K > B$ ; then the first upper bound in (75) is proved by the above inequality with constant $c_2 = \widehat{c}_2 / (1 - B / K)$ .
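+The rearrangement behind the constant $c_2$ can be made explicit: dropping the nonpositive term $\mathcal{U}_h^{(u)}(\mathbf{Z_d})$ from the left-hand side, the inequality above reads
+
+$$
+\mathcal {L} _ {n} (h) \leq \frac {B \mathcal {L} _ {n} (h)}{K} + \widehat {c} _ {2} \left(r ^ {*} + \frac {x}{\min \{u , m \}}\right), \quad \text {so} \quad \left(1 - \frac {B}{K}\right) \mathcal {L} _ {n} (h) \leq \widehat {c} _ {2} \left(r ^ {*} + \frac {x}{\min \{u , m \}}\right),
+$$
+
+and dividing by $1 - B/K > 0$ (valid since $K > B$ ) gives the bound with $c_2 = \widehat{c}_2 / (1 - B/K)$ .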
+
+Moreover, it follows from (68) in Lemma B.10 that with high probability, for all $h\in \mathcal{H}$
+
+$$
+\mathcal {L} _ {n} (h) - \mathcal {L} _ {h} ^ {(m)} (\overline {{\mathbf {Z _ {d}}}}) \leq \frac {B \mathcal {L} _ {n} (h)}{K} + \widehat {c} _ {2} \left(r ^ {*} + \frac {x}{\min \{u , m \}}\right),
+$$
+
+since $\{\mathbf{Z}_{\tilde{\mathbf{d}}}\}$ and $\{\overline{\mathbf{Z_d}}\}$ follow the same distribution: they are both random sets of size $m$ sampled uniformly from $[n]$ without replacement. We set $h = \ell_{\widehat{f}_{\mathbf{d},m}} - \ell_{f_n^*}$ in the above inequality, and note that $\mathcal{L}_h^{(m)}(\overline{\mathbf{Z_d}}) = \mathcal{L}_{\widehat{f}_{\mathbf{d},m}}^{(m)}(\overline{\mathbf{Z_d}}) - \mathcal{L}_{f_n^*}^{(m)}(\overline{\mathbf{Z_d}})\leq 0$ due to the optimality of $\widehat{f}_{\mathbf{d},m}$ . Let $K > B$ ; then the second upper bound in (75) is proved by the above inequality with the same constant $c_{2} = \widehat{c}_{2} / (1 - B / K)$ .
+
+# B.5. TLC Excess Risk Bound for Transductive Kernel Learning
+
+Before presenting the proof of Theorem 4.1, we introduce the definition of convex and symmetric sets below, as well as Lemma B.12, which lays the foundation for the proof of Theorem 4.1.
+
+Definition B.1 (Convex and Symmetric Set). A set $X$ is convex if $\alpha X + (1 - \alpha)X \subseteq X$ for all $\alpha \in [0,1]$ . $X$ is symmetric if $-X \subseteq X$ .
+
+Lemma B.12. Let $\mathcal{F} = \mathcal{H}_{\mathbf{X}_n}(\mu)$ . For every $r > 0$ ,
+
+$$
+\mathbb {E} _ {\mathbf {Y} ^ {(u)}, \boldsymbol {\sigma}} \left[ \sup _ {f \in \mathcal {F}: T _ {n} (f) \leq r} R _ {\boldsymbol {\sigma}, \mathbf {Y} ^ {(u)}} ^ {(\mathrm {ind})} f \right] \leq \tilde {\varphi} _ {u} (r), \tag {76}
+$$
+
+where
+
+$$
+\tilde {\varphi} _ {u} (r) := \min _ {Q: 0 \leq Q \leq n} \left(\sqrt {\frac {r Q}{u}} + \mu \sqrt {\frac {\sum_ {q = Q + 1} ^ {n} \widehat {\lambda} _ {q}}{u}}\right). \tag {77}
+$$
+
+Similarly, for every $r > 0$ ,
+
+$$
+\mathbb {E} _ {\mathbf {Y} ^ {(m)}, \boldsymbol {\sigma}} \left[ \sup _ {f \in \mathcal {F}: T _ {n} (f) \leq r} R _ {\boldsymbol {\sigma}, \mathbf {Y} ^ {(m)}} ^ {(\mathrm {ind})} f \right] \leq \tilde {\varphi} _ {m} (r), \tag {78}
+$$
+
+where
+
+$$
+\tilde {\varphi} _ {m} (r) := \min _ {Q: 0 \leq Q \leq n} \left(\sqrt {\frac {r Q}{m}} + \mu \sqrt {\frac {\sum_ {q = Q + 1} ^ {n} \widehat {\lambda} _ {q}}{m}}\right). \tag {79}
+$$
+
+Proof. We have
+
+$$
+R _ {\boldsymbol {\sigma}, \mathbf {Y} ^ {(u)}} ^ {(\mathrm {ind})} f = \frac {1}{u} \sum_ {i = 1} ^ {u} \sigma_ {i} f (\vec {\mathbf {x}} _ {Y _ {i}}) = \left\langle f, \frac {1}{u} \sum_ {i = 1} ^ {u} \sigma_ {i} K (\cdot , \vec {\mathbf {x}} _ {Y _ {i}}) \right\rangle_ {\mathcal {H} _ {K}}. \tag {80}
+$$
+
+Because $\{\Phi^{(k)}\}_{k\geq 1}$ is an orthonormal basis of $\mathcal{H}_K$ , for any $0\leq Q\leq n$ , we further express the inner product on the RHS of (80) as
+
+$$
+\left\langle f, \frac {1}{u} \sum_ {i = 1} ^ {u} \sigma_ {i} K (\cdot , \vec {\mathbf {x}} _ {Y _ {i}}) \right\rangle_ {\mathcal {H} _ {K}} = \left\langle \sum_ {q = 1} ^ {Q} \sqrt {\hat {\lambda} _ {q}} \left\langle f, \Phi_ {q} \right\rangle_ {\mathcal {H} _ {K}} \Phi_ {q}, v ^ {(Q)} \left(\mathbf {Y} ^ {(u)}, \boldsymbol {\sigma}\right) \right\rangle_ {\mathcal {H} _ {K}} + \left\langle \bar {f}, \bar {v} ^ {(Q)} \left(\mathbf {Y} ^ {(u)}, \boldsymbol {\sigma}\right) \right\rangle_ {\mathcal {H} _ {K}}, \tag {81}
+$$
+
+where
+
+$$
+\bar {f} = f - \sum_ {q = 1} ^ {Q} \langle f, \Phi_ {q} \rangle_ {\mathcal {H} _ {K}} \Phi_ {q},
+$$
+
+$$
+v ^ {(Q)} (\mathbf {Y} ^ {(u)}, \boldsymbol {\sigma}) := \frac {1}{u} \sum_ {q = 1} ^ {Q} \frac {1}{\sqrt {\widehat {\lambda} _ {q}}} \left\langle \sum_ {i = 1} ^ {u} \sigma_ {i} K (\cdot , \vec {\mathbf {x}} _ {Y _ {i}}), \Phi_ {q} \right\rangle_ {\mathcal {H} _ {K}} \Phi_ {q},
+$$
+
+$$
+\bar {v} ^ {(Q)} (\mathbf {Y} ^ {(u)}, \boldsymbol {\sigma}) := \frac {1}{u} \sum_ {q = Q + 1} ^ {n} \left\langle \sum_ {i = 1} ^ {u} \sigma_ {i} K (\cdot , \vec {\mathbf {x}} _ {Y _ {i}}), \Phi_ {q} \right\rangle_ {\mathcal {H} _ {K}} \Phi_ {q}.
+$$
+
+Define the operator $\widehat{T}_n\colon \mathcal{H}_K\to \mathcal{H}_K$ by $\widehat{T}_n f = 1 / n\cdot \sum_{i = 1}^n K(\cdot ,\vec{\mathbf{x}}_i)f(\vec{\mathbf{x}}_i)$ for any $f\in \mathcal{H}_K$ . It can be verified that $\Phi_q$ is the eigenfunction of $\widehat{T}_n$ with the corresponding eigenvalue $\widehat{\lambda}_q$ for $q\in [n]$ . We have
+
+$$
+\left\langle \widehat {T} _ {n} f, f \right\rangle_ {\mathcal {H} _ {K}} = \left\langle \frac {1}{n} \sum_ {i = 1} ^ {n} K (\cdot , \vec {\bf x} _ {i}) f (\vec {\bf x} _ {i}), f \right\rangle_ {\mathcal {H} _ {K}} = T _ {n} (f).
+$$
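
As a numerical sanity check of this eigen-structure, the following sketch (ours, assuming a Gaussian kernel on synthetic points) uses the standard correspondence between the empirical operator $\widehat{T}_n$ and the Gram matrix $K/n$ : the eigenvalues of $K/n$ are the $\widehat{\lambda}_q$ , and the eigenvector $u_q$ gives the eigenfunction values via $\Phi_q(\vec{\mathbf{x}}_j) = \sqrt{n\widehat{\lambda}_q}\, u_q[j]$ .

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
X = rng.normal(size=(n, 2))  # sample points x_1, ..., x_n

# Gaussian kernel Gram matrix K_ij = K(x_i, x_j)
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq_dists)

# Eigenvalues of K/n are the empirical eigenvalues lambda_hat_q; the
# eigenvector u_q gives Phi_q(x_j) = sqrt(n * lambda_hat_q) * u_q[j].
lam, U = np.linalg.eigh(K / n)
lam, U = lam[::-1], U[:, ::-1]    # eigh returns ascending order; sort descending
lam = np.clip(lam, 0.0, None)     # guard against tiny negative round-off
Phi_vals = np.sqrt(n * lam) * U   # column q holds Phi_q(x_1), ..., Phi_q(x_n)

# Check <T_hat_n Phi_q, Phi_q> = (1/n) sum_j Phi_q(x_j)^2 = lambda_hat_q
Tn_Phi = (Phi_vals ** 2).mean(axis=0)
assert np.allclose(Tn_Phi, lam, atol=1e-10)
```

The last assertion verifies exactly the identity $\langle \widehat{T}_n \Phi_q, \Phi_q\rangle = \widehat{\lambda}_q$ used throughout the proof.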
+
+As a result,
+
+$$
+\left\| \sum_ {q = 1} ^ {Q} \sqrt {\widehat {\lambda} _ {q}} \langle f, \Phi_ {q} \rangle_ {\mathcal {H} _ {K}} \Phi_ {q} \right\| _ {\mathcal {H} _ {K}} ^ {2} = \sum_ {q = 1} ^ {Q} \widehat {\lambda} _ {q} \langle f, \Phi_ {q} \rangle_ {\mathcal {H} _ {K}} ^ {2} \leq \sum_ {q = 1} ^ {n} \widehat {\lambda} _ {q} \langle f, \Phi_ {q} \rangle_ {\mathcal {H} _ {K}} ^ {2} = \langle \widehat {T} _ {n} f, f \rangle_ {\mathcal {H} _ {K}} = T _ {n} (f) \leq r, \tag {82}
+$$
+
+which holds for all $f$ such that $T_{n}(f) \leq r$ .
+
+Combining (80)-(82), we have
+
+$$
+\begin{array}{l} \mathbb {E} _ {\mathbf {Y} ^ {(u)}, \boldsymbol {\sigma}} \left[ \sup _ {f \in \mathcal {F}: T _ {n} (f) \leq r} R _ {\boldsymbol {\sigma}, \mathbf {Y} ^ {(u)}} ^ {(\mathrm{ind})} f \right] \\ \stackrel {①} {\leq} \sup _ {f \in \mathcal {F}: T _ {n} (f) \leq r} \left\| \sum_ {q = 1} ^ {Q} \sqrt {\widehat {\lambda} _ {q}} \langle f, \Phi_ {q} \rangle_ {\mathcal {H} _ {K}} \Phi_ {q} \right\| _ {\mathcal {H} _ {K}} \cdot \mathbb {E} _ {\mathbf {Y} ^ {(u)}, \boldsymbol {\sigma}} \left[ \left\| v ^ {(Q)} (\mathbf {Y} ^ {(u)}, \boldsymbol {\sigma}) \right\| _ {\mathcal {H} _ {K}} \right] \\ + \sup _ {f \in \mathcal {F}} \left\| \bar {f} \right\| _ {\mathcal {H} _ {K}} \cdot \mathbb {E} _ {\mathbf {Y} ^ {(u)}, \boldsymbol {\sigma}} \left[ \left\| \bar {v} ^ {(Q)} (\mathbf {Y} ^ {(u)}, \boldsymbol {\sigma}) \right\| _ {\mathcal {H} _ {K}} \right] \\ \leq \sqrt {r} \, \mathbb {E} _ {\mathbf {Y} ^ {(u)}, \boldsymbol {\sigma}} \left[ \left\| v ^ {(Q)} \left(\mathbf {Y} ^ {(u)}, \boldsymbol {\sigma}\right) \right\| _ {\mathcal {H} _ {K}} \right] + \mu \, \mathbb {E} _ {\mathbf {Y} ^ {(u)}, \boldsymbol {\sigma}} \left[ \left\| \bar {v} ^ {(Q)} \left(\mathbf {Y} ^ {(u)}, \boldsymbol {\sigma}\right) \right\| _ {\mathcal {H} _ {K}} \right], \tag {83} \\ \end{array}
+$$
+
+where ① is due to the Cauchy-Schwarz inequality.
+
+We have
+
+$$
+\begin{array}{l} \frac {1}{u} \mathbb {E} _ {\mathbf {Y} ^ {(u)}, \boldsymbol {\sigma}} \left[ \left\langle \sum_ {i = 1} ^ {u} \sigma_ {i} K (\cdot , \vec {\mathbf {x}} _ {Y _ {i}}), \Phi_ {q} \right\rangle_ {\mathcal {H} _ {K}} ^ {2} \right] \stackrel {①} {=} \frac {1}{u} \mathbb {E} _ {\mathbf {Y} ^ {(u)}} \left[ \sum_ {i = 1} ^ {u} \left\langle K (\cdot , \vec {\mathbf {x}} _ {Y _ {i}}), \Phi_ {q} \right\rangle_ {\mathcal {H} _ {K}} ^ {2} \right] \\ = \frac {1}{u} \mathbb {E} _ {\mathbf {Y} ^ {(u)}} \left[ \sum_ {i = 1} ^ {u} \Phi_ {q} (\vec {\mathbf {x}} _ {Y _ {i}}) ^ {2} \right] \\ = \frac {1}{n} \sum_ {i = 1} ^ {n} \Phi_ {q} ^ {2} (\vec {\mathbf {x}} _ {i}) \\ = \left\langle \widehat {T} _ {n} \Phi_ {q}, \Phi_ {q} \right\rangle_ {\mathcal {H} _ {K}} = \widehat {\lambda} _ {q}. \tag {84} \\ \end{array}
+$$
+
+Here ① holds because the Rademacher variables are independent with $\mathbb{E}[\sigma_i] = 0$ for all $i\in [u]$ , so the cross terms vanish in expectation. It follows from (84) that
+
+$$
+\begin{array}{l} \mathbb {E} _ {\mathbf {Y} ^ {(u)}, \boldsymbol {\sigma}} \left[ \left\| v ^ {(Q)} (\mathbf {Y} ^ {(u)}, \boldsymbol {\sigma}) \right\| _ {\mathcal {H} _ {K}} \right] = \frac {1}{\sqrt {u}} \mathbb {E} _ {\mathbf {Y} ^ {(u)}, \boldsymbol {\sigma}} \left[ \sqrt {\frac {1}{u} \sum_ {q = 1} ^ {Q} \frac {1}{\widehat {\lambda} _ {q}} \left\langle \sum_ {i = 1} ^ {u} \sigma_ {i} K (\cdot , \vec {\mathbf {x}} _ {Y _ {i}}) , \Phi_ {q} \right\rangle_ {\mathcal {H} _ {K}} ^ {2}} \right] \\ \stackrel {①} {\leq} \frac {1}{\sqrt {u}} \sqrt {\frac {1}{u} \mathbb {E} _ {\mathbf {Y} ^ {(u)}, \boldsymbol {\sigma}} \left[ \sum_ {q = 1} ^ {Q} \frac {1}{\widehat {\lambda} _ {q}} \left\langle \sum_ {i = 1} ^ {u} \sigma_ {i} K (\cdot , \vec {\mathbf {x}} _ {Y _ {i}}), \Phi_ {q} \right\rangle_ {\mathcal {H} _ {K}} ^ {2} \right]} \\ \stackrel {②} {=} \sqrt {\frac {Q}{u}}, \tag {85} \\ \end{array}
+$$
+
+where ① is due to Jensen's inequality and ② follows from (84). Similarly, we have
+
+$$
+\begin{array}{l} \mathbb {E} _ {\mathbf {Y} ^ {(u)}, \pmb {\sigma}} \left[ \left\| \bar {v} ^ {(Q)} (\mathbf {Y} ^ {(u)}, \pmb {\sigma}) \right\| _ {\mathcal {H} _ {K}} \right] = \frac {1}{\sqrt {u}} \mathbb {E} _ {\mathbf {Y} ^ {(u)}, \pmb {\sigma}} \left[ \sqrt {\frac {1}{u} \sum_ {q = Q + 1} ^ {n} \left\langle \sum_ {i = 1} ^ {u} \sigma_ {i} K (\cdot , \vec {\mathbf {x}} _ {Y _ {i}}) , \Phi_ {q} \right\rangle_ {\mathcal {H} _ {K}} ^ {2}} \right] \\ \leq \frac {1}{\sqrt {u}} \sqrt {\frac {1}{u} \mathbb {E} _ {\mathbf {Y} ^ {(u)} , \sigma} \left[ \sum_ {q = Q + 1} ^ {n} \left\langle \sum_ {i = 1} ^ {u} \sigma_ {i} K (\cdot , \vec {\mathbf {x}} _ {Y _ {i}}), \Phi_ {q} \right\rangle_ {\mathcal {H} _ {K}} ^ {2} \right]} \\ \end{array}
+$$
+
+$$
+= \sqrt {\frac {\sum_ {q = Q + 1} ^ {n} \widehat {\lambda} _ {q}}{u}}. \tag {86}
+$$
+
+It follows from (83), (85), and (86) that
+
+$$
+\mathbb {E} _ {\mathbf {Y} ^ {(u)}, \boldsymbol {\sigma}} \left[ \sup _ {f \in \mathcal {F}: T _ {n} (f) \leq r} \left\langle f, \frac {1}{u} \sum_ {i = 1} ^ {u} \sigma_ {i} K (\cdot , \vec {\mathbf {x}} _ {Y _ {i}}) \right\rangle_ {\mathcal {H} _ {K}} \right] \leq \min _ {Q: 0 \leq Q \leq n} \left(\sqrt {\frac {r Q}{u}} + \mu \sqrt {\frac {\sum_ {q = Q + 1} ^ {n} \widehat {\lambda} _ {q}}{u}}\right), \tag {87}
+$$
+
+which completes the proof of (76). The bound (78) follows from a similar argument.
+
+The following corollary will also be necessary for the proof of Theorem 4.1. It follows from Theorem 3.5 by choosing $\psi_{u},\psi_{m}$ as upper bounds for the corresponding inductive Rademacher complexities, which is permitted by Theorem 2.1.
+
+Corollary B.13. Under the same conditions as Theorem 3.5, for every $x > 0$ , with probability at least $1 - \exp(-x) - \left(\min\{m, u\}\right)^2 / \max\{m, u\}$ , (15) still holds if for all $r \geq r_u$ ,
+
+$$
+\psi_ {u} (r) \geq 2 \max \left\{\mathbb {E} \left[ \sup _ {h: h \in \Delta_ {\mathcal {F}}, \tilde {T} _ {n} (h) \leq r} R _ {\boldsymbol {\sigma}, \mathbf {Y} ^ {(u)}} ^ {(\mathrm{ind})} h \right], \mathbb {E} \left[ \sup _ {h: h \in \Delta_ {\mathcal {F}}, \tilde {T} _ {n} (h) \leq r} R _ {\boldsymbol {\sigma}, \mathbf {Y} ^ {(u)}} ^ {(\mathrm{ind})} h ^ {2} \right] \right\}, \tag {88}
+$$
+
+and for all $r \geq r_m$ ,
+
+$$
+\psi_ {m} (r) \geq 2 \max \left\{\mathbb {E} \left[ \sup _ {h: h \in \Delta_ {\mathcal {F}}, \tilde {T} _ {n} (h) \leq r} R _ {\boldsymbol {\sigma}, \mathbf {Y} ^ {(m)}} ^ {(\mathrm{ind})} h \right], \mathbb {E} \left[ \sup _ {h: h \in \Delta_ {\mathcal {F}}, \tilde {T} _ {n} (h) \leq r} R _ {\boldsymbol {\sigma}, \mathbf {Y} ^ {(m)}} ^ {(\mathrm{ind})} h ^ {2} \right] \right\}. \tag {89}
+$$
+
+Here $r_u, r_m$ are the fixed points of $\psi_u$ and $\psi_m$ respectively.
+
+Instead of proving Theorem 4.1 directly, we prove the following more detailed version.
+
+Theorem B.14. Suppose that Assumption 1 (1) and Assumption 2 hold. Suppose $K$ is a positive definite kernel on $\mathcal{X} \times \mathcal{X}$ . Suppose that for all $f \in \mathcal{H}_{\mathbf{X}_n}(\mu)$ , $0 \leq \ell_f(i) \leq L_0$ for all $i \in [n]$ , and $L_0 \geq 2\sqrt{2}$ . Suppose that $m \gg u^2$ or $u \gg m^2$ . Then for every $x > 0$ , with probability at least $1 - \exp(-x) - (\min\{m, u\})^2 / \max\{m, u\}$ over $\mathbf{d}$ ,
+
+$$
+\begin{array}{l} \mathcal {U} _ {h} ^ {(u)} (\mathbf {Z} _ {\mathbf {d}}) \leq \mathcal {L} _ {h} ^ {(m)} (\overline {{\mathbf {Z} _ {\mathbf {d}}}}) + \frac {2 L ^ {2} B ^ {\prime}}{K} \left(\mathcal {L} _ {n} \left(\ell_ {f _ {1}} - \ell_ {f _ {n} ^ {*}}\right) + \mathcal {L} _ {n} \left(\ell_ {f _ {2}} - \ell_ {f _ {n} ^ {*}}\right)\right) \\ + c _ {3} \min _ {0 \leq Q \leq n} r (u, m, Q) + \frac {c _ {1} x}{\min \{m , u \}}, \forall h \in \Delta_ {\mathcal {H} _ {\mathbf {X} _ {n}} (\mu)}, \tag {90} \\ \end{array}
+$$
+
+where $c_{3}$ is a positive constant depending on $B^{\prime},L_{0},L,\mu$ , and
+
+$$
+r (u, m, Q) := Q \left(\frac {1}{u} + \frac {1}{m}\right) + \left(\sqrt {\frac {\sum_ {q = Q + 1} ^ {n} \widehat {\lambda} _ {q}}{u}} + \sqrt {\frac {\sum_ {q = Q + 1} ^ {n} \widehat {\lambda} _ {q}}{m}}\right).
+$$
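
The minimization over $Q$ trades the "parametric" term $Q(1/u + 1/m)$ against the eigenvalue tail terms. A small script (with an illustrative polynomial eigenvalue decay chosen by us, not taken from the paper) makes the trade-off concrete:

```python
import numpy as np

def r_bound(u, m, lam):
    """min over 0 <= Q <= n of Q*(1/u + 1/m) + sqrt(tail/u) + sqrt(tail/m),
    where tail = sum_{q > Q} lam_q, as in r(u, m, Q)."""
    n = len(lam)
    # tails[Q] = sum_{q = Q+1}^{n} lam_q for Q = 0, ..., n
    tails = np.concatenate([np.cumsum(lam[::-1])[::-1], [0.0]])
    Q = np.arange(n + 1)
    vals = Q * (1 / u + 1 / m) + np.sqrt(tails / u) + np.sqrt(tails / m)
    return vals.min(), int(vals.argmin())

lam = 1.0 / np.arange(1, 201) ** 2     # illustrative eigenvalue decay lam_q = q^{-2}
best, Qstar = r_bound(u=500, m=2000, lam=lam)
print(Qstar, best)                     # optimal truncation level and bound value
```

Faster eigenvalue decay pushes the optimal $Q^*$ down and shrinks the overall bound, consistent with the role of the tail sum $\sum_{q>Q}\widehat{\lambda}_q$ above.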
+
+In particular, with probability at least $1 - 3\exp (-x) - 3(\min \{m,u\})^2 / \max \{m,u\}$ over $\mathbf{d}$ , taking $h = \ell_{\widehat{f}_{\mathbf{d},m}} - \ell_{\widehat{f}_{\mathbf{d},u}}$ in (90), we have the excess risk bound
+
+$$
+\mathcal {E} \left(\widehat {f} _ {\mathbf {d}, m}\right) \leq c _ {5} \left(\min _ {0 \leq Q \leq n} r (u, m, Q) + \frac {x}{\min \{m , u \}}\right), \tag {91}
+$$
+
+where $c_{5}$ is a positive constant depending on $B^{\prime},L_{0},L,\mu$ .
+
+Proof of Theorem B.14. It follows from Assumption 2 that for all $h \in \Delta_{\mathcal{F}}^{*}$ , $T_{n}(h) \leq B' L^{2} \mathcal{L}_{n}(h)$ . To see this, let $h = \ell_{f_1} - \ell_{f_n^*}$ with $f_1 \in \mathcal{F}$ . Then $T_{n}(h) = T_{n}(\ell_{f_1} - \ell_{f_n^*}) \leq L^{2} T_{n}(f_1 - f_n^*) \leq B' L^{2} \mathcal{L}_{n}(\ell_{f_1} - \ell_{f_n^*}) = B' L^{2} \mathcal{L}_{n}(h)$ . This inequality shows that Assumption 1 (2) holds with $B = B' L^{2}$ ; as a result, Assumption 1 holds.
+
+We now apply Theorem 3.5 and Corollary B.13 with the function class $\mathcal{F} = \mathcal{H}_{\mathbf{X}_n}(\mu)$ and $\tilde{T}_n(\cdot)$ defined in (12) with $B = B^{\prime}L^{2}$ . Let $h = \ell_{f_1} - \ell_{f_2} \in \Delta_{\mathcal{F}}$ with $f_1, f_2 \in \mathcal{F}$ satisfy $\tilde{T}_n(h) \leq r$ . By the definition of $\tilde{T}_n$ , we can choose $f_1, f_2$ so that $2B\mathcal{L}_n(\ell_{f_1} - \ell_{f_n^*}) + 2B\mathcal{L}_n(\ell_{f_2} - \ell_{f_n^*}) \leq r'$ for an arbitrary $r' > r$ ; for notational simplicity we set $r' = 1.1r$ . Let $\boldsymbol{\sigma} = \{\sigma_i\}_{i=1}^{\max\{u,m\}}$ be i.i.d. Rademacher variables. For $r > 0$ we have
+
+$$
+\begin{array}{l} 2 \mathbb {E} _ {\mathbf {Y} ^ {(u)}, \boldsymbol {\sigma}} \left[ \sup _ {h: h \in \Delta_ {\mathcal {F}}, \tilde {T} _ {n} (h) \leq r} R _ {\boldsymbol {\sigma}, \mathbf {Y} ^ {(u)}} ^ {(\mathrm{ind})} h \right] \\ \stackrel {①} {\leq} 2 \mathbb {E} _ {\mathbf {Y} ^ {(u)}, \boldsymbol {\sigma}} \left[ \sup _ {f _ {1}, f _ {2} \in \mathcal {F}: 2 B \mathcal {L} _ {n} (\ell_ {f _ {1}} - \ell_ {f _ {n} ^ {*}}) + 2 B \mathcal {L} _ {n} (\ell_ {f _ {2}} - \ell_ {f _ {n} ^ {*}}) \leq r ^ {\prime}} R _ {\boldsymbol {\sigma}, \mathbf {Y} ^ {(u)}} ^ {(\mathrm{ind})} (\ell_ {f _ {1}} - \ell_ {f _ {2}}) \right] \\ \leq 2 \mathbb {E} _ {\mathbf {Y} ^ {(u)}, \boldsymbol {\sigma}} \left[ \sup _ {f _ {1} \in \mathcal {F}: \mathcal {L} _ {n} \left(\ell_ {f _ {1}} - \ell_ {f _ {n} ^ {*}}\right) \leq 1.1 r / 2 B} R _ {\boldsymbol {\sigma}, \mathbf {Y} ^ {(u)}} ^ {(\mathrm{ind})} \left(\ell_ {f _ {1}} - \ell_ {f _ {n} ^ {*}}\right) \right] \\ + 2 \mathbb {E} _ {\mathbf {Y} ^ {(u)}, \boldsymbol {\sigma}} \left[ \sup _ {f _ {2} \in \mathcal {F}: \mathcal {L} _ {n} \left(\ell_ {f _ {2}} - \ell_ {f _ {n} ^ {*}}\right) \leq 1.1 r / 2 B} R _ {\boldsymbol {\sigma}, \mathbf {Y} ^ {(u)}} ^ {(\mathrm{ind})} \left(\ell_ {f _ {2}} - \ell_ {f _ {n} ^ {*}}\right) \right] \\ \stackrel {②} {\leq} 4 L \mathbb {E} _ {\mathbf {Y} ^ {(u)}, \boldsymbol {\sigma}} \left[ \sup _ {f \in \mathcal {F}: T _ {n} (f - f _ {n} ^ {*}) \leq r B _ {1} / 2 B} R _ {\boldsymbol {\sigma}, \mathbf {Y} ^ {(u)}} ^ {(\mathrm{ind})} (f - f _ {n} ^ {*}) \right] \\ \stackrel {③}{\leq} 8 L \mathbb{E}_{\mathbf{Y}^{(u)},\boldsymbol{\sigma}}\left[\sup_{f\in \mathcal{F}:T_{n}(f)\leq r B_{1} / 8 B}R_{\boldsymbol{\sigma},\mathbf{Y}^{(u)}}^{\mathrm{(ind)}}f\right] \\ \stackrel {④} {\leq} 8 L \tilde {\varphi} _ {u} \left(\frac {r B _ {1}}{8 B}\right). \tag {92} \\ \end{array}
+$$
+
+Here ① is due to the definition of $\tilde{T}_n$ . ② is due to the contraction property in Theorem A.5 and the fact that the loss function $\ell (\cdot ,\cdot)$ is $L$ -Lipschitz continuous, where $B_{1} := 1.1B^{\prime}$ . ③ follows by noting that $(f - f_n^*) / 2\in \mathcal{F}$ because $\mathcal{F}$ is symmetric and convex. ④ uses $\tilde{\varphi}_{u}$ defined in (77) in Lemma B.12.
+
+It follows from (92) that
+
+$$
+2 \mathbb {E} _ {\mathbf {Y} ^ {(u)}, \boldsymbol {\sigma}} \left[ \sup _ {h: h \in \Delta_ {\mathcal {F}}, \tilde {T} _ {n} (h) \leq r} R _ {\boldsymbol {\sigma}, \mathbf {Y} ^ {(u)}} ^ {(\mathrm{ind})} h \right] \leq 8 L \tilde {\varphi} _ {u} \left(\frac {r B _ {1}}{8 B}\right).
+$$
+
+By a similar argument and noting that $\left(\ell_{f_1} - \ell_{f_2}\right)^2\leq 2\left((\ell_{f_1} - \ell_{f_n^*})^2 +(\ell_{f_2} - \ell_{f_n^*})^2\right)$ , we have
+
+$$
+\begin{array}{l} 2\mathbb{E}_{\mathbf{Y}^{(u)},\boldsymbol{\sigma}}\left[\sup_{h: h\in \Delta_{\mathcal{F}},\tilde{T}_{n}(h)\leq r}R_{\boldsymbol{\sigma},\mathbf{Y}^{(u)}}^{\mathrm{(ind)}}h^{2}\right] \\ \leq 8\mathbb{E}_{\mathbf{Y}^{(u)},\boldsymbol{\sigma}}\left[\sup_{f\in \mathcal{F}: T_{n}(f - f_{n}^{*})\leq rB_{1} / 2B}R_{\boldsymbol{\sigma},\mathbf{Y}^{(u)}}^{\mathrm{(ind)}}\left(\ell_{f} - \ell_{f_{n}^{*}}\right)^{2}\right] \\ \stackrel {①}{\leq}16L_{0}\mathbb{E}_{\mathbf{Y}^{(u)},\boldsymbol{\sigma}}\left[ \sup_{f\in \mathcal{F}:T_{n}(f - f_{n}^{*})\leq rB_{1} / 2B}R_{\boldsymbol{\sigma},\mathbf{Y}^{(u)}}^{\mathrm{(ind)}}\left(\ell_{f} - \ell_{f_{n}^{*}}\right)\right] \\ \leq 32 L _ {0} L \tilde {\varphi} _ {u} \left(\frac {r B _ {1}}{8 B}\right), \tag {93} \\ \end{array}
+$$
+
+where $①$ is due to the contraction property in Theorem A.5 and the fact that $0 \leq \ell_{f}(i) \leq L_{0}$ for all $f \in \mathcal{F}$ and $i \in [n]$ . Define $\varphi_{u}(r) := \max \left\{8L\tilde{\varphi}_{u}\left(\frac{rB_{1}}{8B}\right), 32L_{0}L\tilde{\varphi}_{u}\left(\frac{rB_{1}}{8B}\right)\right\} = L^{\prime}\tilde{\varphi}_{u}\left(\frac{rB_{1}}{8B}\right)$ with $L^{\prime} := \max \{8L, 32L_{0}L\}$ . It can be verified directly from the definition that $\varphi_{u}$ is a sub-root function.
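
The fixed point of a sub-root function can be computed by simple fixed-point iteration, since $\varphi(r)/\sqrt{r}$ is nonincreasing and the positive fixed point is unique. A minimal sketch with illustrative constants (ours, not from the paper):

```python
import math

def fixed_point(phi, r0=1.0, tol=1e-12, max_iter=10_000):
    """Iterate r <- phi(r); for a sub-root phi (nonnegative, nondecreasing,
    phi(r)/sqrt(r) nonincreasing) this converges to the unique positive fixed point."""
    r = r0
    for _ in range(max_iter):
        r_new = phi(r)
        if abs(r_new - r) < tol:
            return r_new
        r = r_new
    return r

a, b = 2.0, 0.5
phi = lambda r: math.sqrt(a * r) + b      # a sub-root function, like phi_u above
r_star = fixed_point(phi)

# Closed form for comparison: with s = sqrt(r), s^2 - sqrt(a)*s - b = 0
s = (math.sqrt(a) + math.sqrt(a + 4 * b)) / 2
assert abs(r_star - s * s) < 1e-8
```

This is the same quadratic-solving step used for $r'$ in (95) below, just carried out numerically.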
+
+Similarly, we have
+
+$$
+2 \mathbb {E} _ {\mathbf {Y} ^ {(m)}, \boldsymbol {\sigma}} \left[ \sup _ {h: h \in \Delta_ {\mathcal {F}}, \tilde {T} _ {n} (h) \leq r} R _ {\boldsymbol {\sigma}, \mathbf {Y} ^ {(m)}} ^ {(\mathrm{ind})} h \right] \leq 8 L \tilde {\varphi} _ {m} \left(\frac {r B _ {1}}{8 B}\right),
+$$
+
+$$
+2 \mathbb {E} _ {\mathbf {Y} ^ {(m)}, \boldsymbol {\sigma}} \left[ \sup _ {h: h \in \Delta_ {\mathcal {F}}, \tilde {T} _ {n} (h) \leq r} R _ {\boldsymbol {\sigma}, \mathbf {Y} ^ {(m)}} ^ {(\mathrm{ind})} h ^ {2} \right] \leq 32 L _ {0} L \tilde {\varphi} _ {m} \left(\frac {r B _ {1}}{8 B}\right),
+$$
+
+and $\varphi_{m}(r) := \max \left\{8L\tilde{\varphi}_{m}\left(\frac{rB_{1}}{8B}\right), 32L_{0}L\tilde{\varphi}_{m}\left(\frac{rB_{1}}{8B}\right)\right\} = L^{\prime}\tilde{\varphi}_{m}\left(\frac{rB_{1}}{8B}\right)$ is also a sub-root function. Let $r_{u}, r_{m}$ be the fixed points of $\varphi_{u}$ and $\varphi_{m}$ , respectively. We define $\varphi(r) := \varphi_{u}(r) + \varphi_{m}(r)$ , which is again a sub-root function. Let $r$ be the fixed point of $\varphi$ ; then $r \geq \max \{r_{u}, r_{m}\}$ . Since both $\varphi_{u}$ and $\varphi_{m}$ are nondecreasing, we have
+
+$$
+r = \varphi (r) = \varphi_ {u} (r) + \varphi_ {m} (r) \geq \varphi_ {u} (r _ {u}) + \varphi_ {m} (r _ {m}) = r _ {u} + r _ {m}.
+$$
+
+It then follows from the above inequality and Corollary B.13 that, for all $h \in \Delta_{\mathcal{F}}$ we have
+
+$$
+\mathcal {U} _ {h} ^ {(u)} \left(\mathbf {Z} _ {\mathbf {d}}\right) \leq \mathcal {L} _ {h} ^ {(m)} \left(\overline {{\mathbf {Z} _ {\mathbf {d}}}}\right) + \frac {2 B}{K} \left(\mathcal {L} _ {n} \left(\ell_ {f _ {1}} - \ell_ {f _ {n} ^ {*}}\right) + \mathcal {L} _ {n} \left(\ell_ {f _ {2}} - \ell_ {f _ {n} ^ {*}}\right)\right) + c _ {0} r + \frac {c _ {1} x}{\min \{m , u \}}. \tag {94}
+$$
+
+Let $0 \leq r' \leq r$ . Then it follows from (Bartlett et al., 2005, Lemma 3.2) that $r' \leq \varphi(r')$ . Therefore, by the definition of $\tilde{\varphi}_u$ in (77) and $\tilde{\varphi}_m$ in (79), for every $0 \leq Q \leq n$ we have
+
+$$
+\frac {r ^ {\prime}}{L ^ {\prime}} \leq \sqrt {\frac {r ^ {\prime} B _ {1} Q}{8 B u}} + \sqrt {\frac {r ^ {\prime} B _ {1} Q}{8 B m}} + \mu \sqrt {\frac {\sum_ {q = Q + 1} ^ {n} \widehat {\lambda} _ {q}}{u}} + \mu \sqrt {\frac {\sum_ {q = Q + 1} ^ {n} \widehat {\lambda} _ {q}}{m}}.
+$$
+
+Solving the above quadratic inequality for $r'$ , we have
+
+$$
+r ^ {\prime} \leq \widehat {c} _ {3} Q \left(\frac {1}{u} + \frac {1}{m}\right) + \widehat {c} _ {3} \left(\sqrt {\frac {\sum_ {q = Q + 1} ^ {n} \widehat {\lambda} _ {q}}{u}} + \sqrt {\frac {\sum_ {q = Q + 1} ^ {n} \widehat {\lambda} _ {q}}{m}}\right) = \widehat {c} _ {3} r (u, m, Q), \tag {95}
+$$
+
+where $\widehat{c}_3$ is a positive constant depending on $B', L_0, L, \mu$ . (95) holds for every $0 \leq Q \leq n$ , so it follows from (94) and (95) that
+
+$$
+\begin{array}{l} \mathcal {U} _ {h} ^ {(u)} \left(\mathbf {Z} _ {\mathbf {d}}\right) \leq \mathcal {L} _ {h} ^ {(m)} \left(\overline {{\mathbf {Z} _ {\mathbf {d}}}}\right) + \frac {2 B}{K} \left(\mathcal {L} _ {n} \left(\ell_ {f _ {1}} - \ell_ {f _ {n} ^ {*}}\right) + \mathcal {L} _ {n} \left(\ell_ {f _ {2}} - \ell_ {f _ {n} ^ {*}}\right)\right) \\ + c _ {0} \widehat {c} _ {3} \min _ {0 \leq Q \leq n} \left(Q \left(\frac {1}{u} + \frac {1}{m}\right) + \left(\sqrt {\frac {\sum_ {q = Q + 1} ^ {n} \widehat {\lambda} _ {q}}{u}} + \sqrt {\frac {\sum_ {q = Q + 1} ^ {n} \widehat {\lambda} _ {q}}{m}}\right)\right) + \frac {c _ {1} x}{\min \{m , u \}}, \tag {96} \\ \end{array}
+$$
+
+which proves (90) with $c_{3} = c_{0}\widehat{c}_{3}$ .
+
+When $h = \ell_{\widehat{f}_{\mathbf{d},m}} - \ell_{\widehat{f}_{\mathbf{d},u}}$ , we can set $f_{1} = \widehat{f}_{\mathbf{d},m}$ and $f_{2} = \widehat{f}_{\mathbf{d},u}$ in (96).
+
+We now derive the upper bounds for $\mathcal{L}_n(\ell_{\widehat{f}_{\mathbf{d},u}} - \ell_{f_n^*})$ and $\mathcal{L}_n(\ell_{\widehat{f}_{\mathbf{d},m}} - \ell_{f_n^*})$ using Theorem B.11. To apply Theorem 2.1, we need to find a sub-root function $\psi_{u,m}$ such that
+
+$$
+\begin{array}{l} \psi_ {u, m} (r) \geq 2 \max \left\{\mathbb {E} \left[ \sup _ {h: h \in \Delta_ {\mathcal {F}} ^ {*}, B \mathcal {L} _ {n} (h) \leq r} R _ {\boldsymbol {\sigma}, \mathbf {Y} ^ {(u)}} ^ {(\mathrm{ind})} h \right], \mathbb {E} \left[ \sup _ {h: h \in \Delta_ {\mathcal {F}} ^ {*}, B \mathcal {L} _ {n} (h) \leq r} R _ {\boldsymbol {\sigma}, \mathbf {Y} ^ {(m)}} ^ {(\mathrm{ind})} h \right], \right. \\ \left. \mathbb{E}\left[\sup_{h: h\in \Delta^{*}_{\mathcal{F}},B\mathcal{L}_{n}(h)\leq r}R^{(\mathrm{ind})}_{\boldsymbol {\sigma},\mathbf{Y}^{(\min \{u,m\})}}h^{2}\right] \right\}. \\ \end{array}
+$$
+
+By repeating the argument in (92) and (93), we have
+
+$$
+\psi_ {u, m} (r) = \Theta \left(\min _ {0 \leq Q \leq n} \left(\sqrt {\frac {r Q}{u}} + \sqrt {\frac {r Q}{m}} + \mu \sqrt {\frac {\sum_ {q = Q + 1} ^ {n} \widehat {\lambda} _ {q}}{u}} + \mu \sqrt {\frac {\sum_ {q = Q + 1} ^ {n} \widehat {\lambda} _ {q}}{m}}\right)\right).
+$$
+
+Let $r^*$ be the fixed point of $\psi_{u,m}$ . Any $r' \leq r^*$ satisfies $r' \leq \Theta \left( \min_{0 \leq Q \leq n} r(u, m, Q) \right)$ . As a result, it follows from (75) in Theorem B.11 that, with probability at least $1 - 2 \exp(-x) - 2 \left( \min \{m, u\} \right)^2 / \max \{m, u\}$ ,
+
+$$
+\mathcal {L} _ {n} \left(\ell_ {\widehat {f} _ {\mathbf {d}, u}} - \ell_ {f _ {n} ^ {*}}\right) \leq c _ {2} \left(\Theta \left(\min _ {0 \leq Q \leq n} r (u, m, Q)\right) + \frac {x}{u}\right), \mathcal {L} _ {n} \left(\ell_ {\widehat {f} _ {\mathbf {d}, m}} - \ell_ {f _ {n} ^ {*}}\right) \leq c _ {2} \left(\Theta \left(\min _ {0 \leq Q \leq n} r (u, m, Q)\right) + \frac {x}{m}\right). \tag {97}
+$$
+
+We note that $\mathcal{L}^{(m)}_{\ell_{\widehat{f}_{\mathbf{d},m}} - \ell_{\widehat{f}_{\mathbf{d},u}}}(\overline{\mathbf{Z}_{\mathbf{d}}}) \leq 0$ due to the optimality of $\widehat{f}_{\mathbf{d},m}$ . Applying the upper bounds in (97) to (96) proves (91).
+
+
\ No newline at end of file
diff --git a/anewconcentrationinequalityforsamplingwithoutreplacementanditsapplicationfortransductivelearning/images.zip b/anewconcentrationinequalityforsamplingwithoutreplacementanditsapplicationfortransductivelearning/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..fc0afa6d1d6e8e280993bc2bc5a0a8f74b35a3ff
--- /dev/null
+++ b/anewconcentrationinequalityforsamplingwithoutreplacementanditsapplicationfortransductivelearning/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:814afa395b9c3ef26577e691e3c53628bd3cc4a6d9560a3d8136914d2956e8c5
+size 1557239
diff --git a/anewconcentrationinequalityforsamplingwithoutreplacementanditsapplicationfortransductivelearning/layout.json b/anewconcentrationinequalityforsamplingwithoutreplacementanditsapplicationfortransductivelearning/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..c327b1697bfc38fb9927ed6ec7cd6e612d55eeb4
--- /dev/null
+++ b/anewconcentrationinequalityforsamplingwithoutreplacementanditsapplicationfortransductivelearning/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7cf66c9e8ec0c68d8facb228e09196d80e09cf0504d122725e44b800ed04586f
+size 1572313
diff --git a/anonasymptoticconvergentanalysisforscoredbasedgraphgenerativemodelviaasystemofstochasticdifferentialequations/02fac83e-2543-4726-a03c-70d2e7d5ee13_content_list.json b/anonasymptoticconvergentanalysisforscoredbasedgraphgenerativemodelviaasystemofstochasticdifferentialequations/02fac83e-2543-4726-a03c-70d2e7d5ee13_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..63da82dd11d140bb34da6ee206dc2f0f8761dbf0
--- /dev/null
+++ b/anonasymptoticconvergentanalysisforscoredbasedgraphgenerativemodelviaasystemofstochasticdifferentialequations/02fac83e-2543-4726-a03c-70d2e7d5ee13_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:14e79280385469e5ce7bcc75de020cf39cfcbbdfdbd026024af1d4d738fc25ac
+size 317879
diff --git a/anonasymptoticconvergentanalysisforscoredbasedgraphgenerativemodelviaasystemofstochasticdifferentialequations/02fac83e-2543-4726-a03c-70d2e7d5ee13_model.json b/anonasymptoticconvergentanalysisforscoredbasedgraphgenerativemodelviaasystemofstochasticdifferentialequations/02fac83e-2543-4726-a03c-70d2e7d5ee13_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..54e12eaaecf1c795501f1d54555e04bc01485625
--- /dev/null
+++ b/anonasymptoticconvergentanalysisforscoredbasedgraphgenerativemodelviaasystemofstochasticdifferentialequations/02fac83e-2543-4726-a03c-70d2e7d5ee13_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0aeae0dab6f17db11a5bced0b7273f1d8a2bef0893d746a3f4d8f696b26c68af
+size 363561
diff --git a/anonasymptoticconvergentanalysisforscoredbasedgraphgenerativemodelviaasystemofstochasticdifferentialequations/02fac83e-2543-4726-a03c-70d2e7d5ee13_origin.pdf b/anonasymptoticconvergentanalysisforscoredbasedgraphgenerativemodelviaasystemofstochasticdifferentialequations/02fac83e-2543-4726-a03c-70d2e7d5ee13_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..6080a2c34309db73dc0d4f346d7829650cac67e1
--- /dev/null
+++ b/anonasymptoticconvergentanalysisforscoredbasedgraphgenerativemodelviaasystemofstochasticdifferentialequations/02fac83e-2543-4726-a03c-70d2e7d5ee13_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:58320621a5f3541958345ef0ef4ab0ad7d85443fbd2d9c58082e5a0b70e78a17
+size 629158
diff --git a/anonasymptoticconvergentanalysisforscoredbasedgraphgenerativemodelviaasystemofstochasticdifferentialequations/full.md b/anonasymptoticconvergentanalysisforscoredbasedgraphgenerativemodelviaasystemofstochasticdifferentialequations/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..a95c2d1e67618b159daf047fa54ffec0b4c495ce
--- /dev/null
+++ b/anonasymptoticconvergentanalysisforscoredbasedgraphgenerativemodelviaasystemofstochasticdifferentialequations/full.md
@@ -0,0 +1,1705 @@
+# A Non-Asymptotic Convergent Analysis for Scored-Based Graph Generative Model via a System of Stochastic Differential Equations
+
+Junwei Su1 Chuan Wu1
+
+# Abstract
+
+Score-based graph generative models (SGGMs) have proven effective in critical applications such as drug discovery and protein synthesis. However, their theoretical behavior, particularly regarding convergence, remains underexplored. Unlike common score-based generative models (SGMs), which are governed by a single stochastic differential equation (SDE), SGGMs involve a system of coupled SDEs. In SGGMs, the graph structure and node features are governed by separate but interdependent SDEs. This distinction makes existing convergence analyses from SGMs inapplicable for SGGMs. In this work, we present the first non-asymptotic convergence analysis for SGGMs, focusing on the convergence bound (the risk of generative error) across three key graph generation paradigms: (1) feature generation with a fixed graph structure, (2) graph structure generation with fixed node features, and (3) joint generation of both graph structure and node features. Our analysis reveals several unique factors specific to SGGMs (e.g., the topological properties of the graph structure) which affect the convergence bound. Additionally, we offer theoretical insights into the selection of hyperparameters (e.g., sampling steps and diffusion length) and advocate for techniques like normalization to improve convergence. To validate our theoretical findings, we conduct a controlled empirical study using synthetic graph models, and the results align with our theoretical predictions. This work deepens the theoretical understanding of SGGMs, demonstrates their applicability in critical domains, and provides practical guidance for designing effective models.
+
+# 1. Introduction
+
+Graph-structured data is ubiquitous across a wide range of domains, including social networks, biological systems, recommendation engines, and knowledge graphs (Newman, 2018). The graph generation problem involves creating new graphs that closely resemble real-world data, a task critical to many graph-based applications. In recent years, score-based graph generative models (SGGMs) (Niu et al., 2020; Jo et al., 2022; Vignac et al., 2022; Chen et al., 2023c) have emerged as a flexible and powerful approach, delivering state-of-the-art empirical performance in graph generation tasks. These models have shown significant impact in critical areas such as molecular generation (Gnaneshwar et al., 2022), protein design (Lee et al., 2023c), graph-based recommendation systems (Liu et al., 2024), and automated program generation (Zhu et al., 2022).
+
+SGGMs are a subclass of the broader family of score-based generative models (SGMs), also referred to as diffusion probabilistic models (Song et al., 2020a; Ho et al., 2020; Nichol & Dhariwal, 2021; Sohl-Dickstein et al., 2015; Song & Ermon, 2019; Song et al., 2020b; Li et al., 2024; Yang et al., 2024). SGMs generate data by learning a score function (see Sec. 3 for more details), which represents the gradient of the log-probability of the data distribution. These models conceptualize the data generation process as a diffusion process, where data is progressively corrupted by noise and then reconstructed through a reverse process. The forward process involves gradually adding noise over several steps, transforming the data into a simple distribution (e.g., Gaussian noise), and is governed by a stochastic differential equation (SDE) (Song et al., 2020b). In the reverse process, the model uses the learned score function to iteratively denoise the data, starting from random noise and guiding it step-by-step toward the original data distribution. To efficiently approximate this reverse process, a discrete sampling approach is employed, using numerical methods such as the Euler-Maruyama scheme or the exponential integrator (Chen et al., 2023a).
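
To make the forward/reverse mechanics concrete, here is a minimal sketch (ours, not from the paper) of reverse-time Euler-Maruyama sampling for a one-dimensional Ornstein-Uhlenbeck forward SDE, where the Gaussian marginals make the exact score available in closed form; in practice the score is replaced by a learned network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target data distribution: N(mu0, s0^2). Forward (noising) OU SDE:
#   dX = -X dt + sqrt(2) dW,  whose stationary law is N(0, 1).
mu0, s0 = 2.0, 0.5

def mean_var(t):
    """Mean and variance of the forward marginal p_t (Gaussian throughout)."""
    return mu0 * np.exp(-t), s0**2 * np.exp(-2 * t) + 1.0 - np.exp(-2 * t)

def score(x, t):
    """Exact score grad_x log p_t(x) of the Gaussian marginal."""
    m, v = mean_var(t)
    return -(x - m) / v

# Reverse-time SDE discretized with Euler-Maruyama from t = T down to 0:
#   x <- x - h * (f(x, t) - g^2 * score(x, t)) + g * sqrt(h) * z
T, steps, n = 5.0, 500, 200_000
h = T / steps
x = rng.normal(size=n)  # approximate prior N(0, 1) at time T
for k in range(steps):
    t = T - k * h
    x = x - h * (-x - 2.0 * score(x, t)) + np.sqrt(2.0 * h) * rng.normal(size=n)

print(x.mean(), x.std())  # should land close to mu0 = 2.0 and s0 = 0.5
```

The generated samples recover the target mean and standard deviation up to discretization and Monte Carlo error, illustrating why the number of sampling steps and the diffusion length $T$ appear as hyperparameters in the convergence discussion below.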
+
+Understanding the convergence behavior of SGGMs is both theoretically and practically crucial (Suh & Cheng, 2024; Huang et al., 2024; Chen et al., 2022a; 2024a; Lee et al., 2023b). Convergence bounds quantify the discrepancy between the generated graphs and the true data distribution, indicating whether the diffusion process will eventually produce samples that align with the desired target distribution. This is particularly important for SGGMs, as convergence guarantees the reliability and validity of the generated graphs, which are used in high-stakes applications like drug discovery. Furthermore, understanding convergence provides valuable insights into practical considerations, such as the selection of hyperparameters (e.g., sampling steps and diffusion length). Convergence analysis also helps identify sources of error and guides the development of more effective models.
+
+Gap in Existing Research. Despite the empirical success of SGGMs in high-stakes applications, their convergence behavior remains underexplored. Most existing research has primarily focused on SGMs derived from the image domain (Yeğin & Amasyah, 2024; Li et al., 2024; Wang et al., 2024; Chen et al., 2024b; Benton et al., 2023). There are two key differences between SGGMs and SGMs. First, while the generative process of SGMs is governed by a single SDE, SGGMs, due to the nature of graph data (which includes both graph structure and node features), involve a system of coupled SDEs (Jo et al., 2022; Niu et al., 2020). In SGGMs, the graph structure and node features are governed by separate but interdependent SDEs. Second, SGM convergence analyses often assume independence between data elements (e.g., pixels in images), while in SGGMs, the graph structure and node features are inherently coupled and interdependent (Deshpande et al., 2018). Thus, the formulation and convergence analysis of SGGMs must account for these relationships. As a result, existing convergence analyses for SGMs cannot be directly applied to SGGMs, highlighting the need for new theoretical formulations and analyses to understand and ensure their convergence.
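
Schematically, and in our notation rather than any specific paper's, such a coupled system can be written as

$$
\mathrm{d}\mathbf{X}_t = \mathbf{f}_{X}\left(\mathbf{X}_t, \mathbf{A}_t, t\right) \mathrm{d}t + g_{X}(t)\, \mathrm{d}\mathbf{W}_t^{X}, \qquad \mathrm{d}\mathbf{A}_t = \mathbf{f}_{A}\left(\mathbf{X}_t, \mathbf{A}_t, t\right) \mathrm{d}t + g_{A}(t)\, \mathrm{d}\mathbf{W}_t^{A},
$$

where $\mathbf{X}_t$ denotes the node features, $\mathbf{A}_t$ the graph structure, and $\mathbf{W}_t^{X}, \mathbf{W}_t^{A}$ are independent Wiener processes. Because each drift depends on both states, neither equation can be analyzed in isolation.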
+
+Problem Studied and Challenges. In this paper, we address this gap by extending the convergence analysis to SGGMs. Several key challenges arise in this extension: First, the interdependency between graph structure and node features requires a careful and nuanced formulation. Unlike the independent elements typically found in conventional SGMs, the graph structure and node features in SGGMs are deeply interconnected. Accurately modeling this interdependency is crucial for capturing the true generative process of SGGMs and deriving meaningful insights. Second, this interdependency creates entangled dynamics in the generative process. Changes in the graph structure directly affect the node features, and vice versa. This reciprocal influence complicates the convergence analysis, as conventional methods that assume independence are no longer applicable. The simultaneous evolution of both the graph structure and node features calls for the development of new tools and methodologies to analyze the convergence of the entire system effectively. Finally, it is essential to connect the factors and insights from this analysis with the practical SGGM framework. This includes not only understanding the theoretical results in the context of graph data but also leveraging these insights to guide model design decisions, such as hyperparameter tuning (e.g., sampling steps, diffusion length) and implementing regularization techniques. These challenges make the convergence analysis of SGGMs both novel and non-trivial.
+
Our Contributions and Results. We present a comprehensive formulation for SGGMs that captures the complex interdependencies between graph structure and node features, as well as the use of graph neural networks (GNNs). Based on this formulation, we provide a detailed non-asymptotic convergence analysis for three common graph generation paradigms: 1) node feature generation with a fixed graph structure (Theorem 4.1), 2) graph structure generation with fixed node features (Theorem 4.2), and 3) joint generation of both graph structure and node features (Theorem 4.3). Our analysis provides a detailed account of the factors influencing the convergence bounds of SGGMs. In addition, our results reveal several key insights and implications (Sec. 4) regarding the convergence behavior of SGGMs:
+
1. Graph size vs. feature dimensionality: There is a non-isotropic effect between the graph size (number of nodes) and the feature dimensionality. Specifically, increasing the size of the graph leads to a greater risk of generative error (i.e., larger convergence bounds) than increasing the dimensionality of node features. This finding helps explain why SGGMs are empirically effective for smaller graphs, even when the node features are complex (Jo et al., 2022).
+2. Topological properties of the graph structure: The topological properties of the graph significantly influence the convergence bounds of SGGMs. In particular, graphs with heterogeneous degree distributions—where some nodes have significantly more connections than others—tend to result in a larger generative error. Our findings suggest that SGGMs perform more reliably when generating graphs with more uniform degree distributions, as the risk of large error is lower in these graphs. This insight is critical for applications where the generated graphs need to closely match real-world graph structures.
+3. Impact of norms on feature matrices: Smaller norms in the feature matrices reduce the risk of generative error in the generated graphs, providing theoretical support for employing normalization techniques in SGGMs. By controlling the scale of the feature matrices, these techniques help tighten the convergence bounds, resulting in more stable and accurate graph generation.
+
+The empirical results from our controlled experiments using synthetic graphs are consistent with our theoretical predictions, thereby validating our analysis and conclusions.
+
These results enhance the theoretical understanding of SGGMs, confirm their applicability in high-stakes applications, and provide practical insights for designing and deploying more effective SGGMs.
+
+# 2. Related Work
+
Graph Generation Methods. The graph generation problem has a long history of study, with rule-based random graph models traditionally dominating the field (Barabási & Albert, 1999; Holland et al., 1983; Erdős et al., 1960; Newman et al., 2002). A prime example of such models is the Stochastic Block Model (SBM) (Holland et al., 1983), which is based on the observation that real-life graphs often consist of densely connected blocks of vertices exhibiting similar behaviors (Abbe, 2018; Newman et al., 2002; Newman & Girvan, 2004; Newman, 2006; Karrer & Newman, 2011; Cherifi et al., 2019; Su & Marbach, 2022). However, such rule-based models fail to capture the complex and nuanced distribution of graph-structured data observed in real-world problems (Russell & Norvig, 2016). As a result, the focus has shifted towards deep learning-based methods that can model more intricate graph properties (Zhu et al., 2022; You et al., 2018; Xie et al., 2021; Liao et al., 2019; Li et al., 2018; Jensen, 2019; Fu et al., 2021; De Cao & Kipf, 2018; Zang & Wang, 2020; Simonovsky & Komodakis, 2018). Among these, SGGMs have emerged as a promising approach, showing impressive empirical performance in graph generation (Niu et al., 2020; Jo et al., 2022; Vignac et al., 2022; Chen et al., 2023c). Due to the nature of graph data, the standard formulation of SGGMs involves a system of stochastic processes, either manifested as a Markov chain (discrete) (Chen et al., 2023c; Vignac et al., 2022) or an SDE (continuous) (Jo et al., 2022; Niu et al., 2020). In this paper, we focus on the SDE formulation, and point out that our analysis can be extended to the discrete version by substituting suitable theoretical tools (Sec. 6).
+
Convergence Analysis of SGMs. Theoretical studies on SGMs have garnered significant attention in recent years (De Bortoli et al., 2021; Zhang et al., 2024; Lee et al., 2022; 2023a; Wang et al., 2024; Chen et al., 2024b; Li et al., 2023; Benton et al., 2023; Chen et al., 2022a; 2023a;b). A central focus of these studies is examining the convergence behavior of these models, specifically how well they approximate the true data distribution. Existing research has shown that, under suitable smoothness and regularity assumptions on the score functions, SGMs provide provable guarantees for convergence, meaning that the generated samples increasingly resemble the true data distribution as the number of iterations increases (Chen et al., 2022a; 2023a;b). However, much of the existing work has been motivated by tasks such as image generation, where the generative process is governed by a single stochastic process. In contrast, SGGMs involve a system of dependent SDEs, with separate equations governing the graph structure and the node features, respectively. It remains unclear how the interconnected nature of graph data manifests in the convergence behavior of SGGMs. In this paper, we address this gap by providing a non-asymptotic convergence analysis for SGGMs, accounting for the distinctive challenges posed by graph data.
+
+# 3. Preliminaries and Problem Formulation
+
In this section, we introduce the formulation of SGGMs via a system of SDEs, along with the different graph generation paradigms.
+
Notation. We use bold upper- and lower-case letters to denote matrices and vectors, respectively. For two functions $f(x) \geq 0$ and $g(x) \geq 0$ , we write $f(x) \lesssim g(x)$ if $f(x) \leq c \cdot g(x)$ for some absolute constant $c > 0$ . For a given SDE, we use $x$ to denote the forward continuous process, $\bar{x}$ to denote the backward continuous process and $\hat{\bar{x}}$ to denote the backward approximated process. A graph $\mathcal{G}$ can be formally defined as a two-tuple $\mathcal{G} = (\mathbf{X}, \mathbf{A})$ , where $\mathbf{X} \in \mathbb{R}^{N \times F}$ is the node feature matrix and $\mathbf{A} \in \mathbb{R}^{N \times N}$ is the adjacency matrix representing the graph structure. Here, $N$ is the number of nodes in the graph and $F$ is the dimension of the node features. In addition, we use $\mathbb{N}(a, b)$ to denote the Gaussian distribution with mean $a$ and variance $b$ .
+
+# 3.1. SGGMs via System of SDEs.
+
+SGGMs consist of three main components: 1) a forward process, 2) a reverse process, and 3) a sampling process that approximates the reverse process to facilitate data generation. In the following sections, we provide an introduction to each of these components.
+
+# 3.1.1. FORWARD PROCESS.
+
+Formally, the forward diffusion process can be described by the trajectory of random variables $\{\mathcal{G}_t = (\mathbf{X}_t,\mathbf{A}_t)\}_{t\in [0,T]}$ in a fixed time horizon $[0,T]$ , where $\mathcal{G}_0 = (\mathbf{X}_0,\mathbf{A}_0)$ is sampled from the data distribution $\mathbb{P}(\mathcal{G}_0)$ (what we want to learn). This process is modelled using the following system of dependent SDEs:
+
+$$
+\mathrm {d} \mathbf {X} _ {t} = f _ {\mathbf {X}} (\mathcal {G} _ {t}, t) \mathrm {d} t + g _ {\mathbf {X}} (\mathcal {G} _ {t}, t) \mathrm {d} \mathbf {W} _ {\mathbf {X}},
+$$
+
+$$
+\mathrm {d} \mathbf {A} _ {t} = f _ {\mathbf {A}} \left(\mathcal {G} _ {t}, t\right) \mathrm {d} t + g _ {\mathbf {A}} \left(\mathcal {G} _ {t}, t\right) \mathrm {d} \mathbf {W} _ {\mathbf {A}}, \tag {3.1}
+$$
+
where the first equation is the forward SDE for the node features and the second equation is the forward SDE for the graph structure. $f_{\mathbf{X}}(.)$ and $f_{\mathbf{A}}(.)$ are drift coefficients, and $g_{\mathbf{X}}(.)$ and $g_{\mathbf{A}}(.)$ are scalar diffusion coefficients. $\mathbf{W}_{\mathbf{X}}, \mathbf{W}_{\mathbf{A}}$ are standard Wiener processes (also known as Brownian motion) acting in the node feature space and graph structure space, respectively. This diffusion process gradually adds noise to the initial graph samples, and at the terminal time $T$ , the sample $\mathcal{G}_T = (\mathbf{X}_T, \mathbf{A}_T)$ follows a simple convergent distribution (also referred to as the prior distribution), which we denote as $\Pi_{\mathbf{A}}$ and $\Pi_{\mathbf{X}}$ for the graph structure process and node feature process, respectively. In this paper, we focus on the commonly used standard Gaussian distribution (with zero mean and unit variance) as the convergent distribution.
+
+We focus our analysis on the following choices of the drift and diffusion coefficients:
+
+$$
+\begin{array}{l} f _ {\mathbf {X}} \left(\mathcal {G} _ {t}, t\right) = - 1 / 2 g _ {\mathbf {X}} \left(\mathcal {G} _ {t}, t\right) ^ {2} \mathbf {X} _ {t}, \\ f _ {\mathbf {A}} \left(\mathcal {G} _ {t}, t\right) = - 1 / 2 g _ {\mathbf {A}} \left(\mathcal {G} _ {t}, t\right) ^ {2} \mathbf {A} _ {t}, \\ g _ {\mathbf {X}} \left(\mathcal {G} _ {t}, t\right) = g _ {\mathbf {A}} \left(\mathcal {G} _ {t}, t\right) \equiv 1. \tag {3.2} \\ \end{array}
+$$
+
These choices match those in the original SGMs paper (Song et al., 2020a; Jo et al., 2022), and are commonly used for similar convergence analyses (Chen et al., 2023a). We emphasize that our analysis can be adapted for some other choices of linear drift terms and constant variance functions as well (we provide a further discussion in this regard in Appendix A).
+
+Under these choices of coefficients, the forward process becomes the Ornstein-Uhlenbeck (OU) process (Jacobsen, 1996), which has an explicit conditional density:
+
+$$
+\begin{array}{l} \mathbf {X} _ {t} | \mathbf {X} _ {0} \sim \mathbb {N} \left(e ^ {- 1 / 2 t} \mathbf {X} _ {0}, (1 - e ^ {- t}) \mathbf {I} _ {N \times F}\right), \\ \mathbf {A} _ {t} \mid \mathbf {A} _ {0} \sim \mathbb {N} \left(e ^ {- 1 / 2 t} \mathbf {A} _ {0}, \left(1 - e ^ {- t}\right) \mathbf {I} _ {N \times N}\right). \tag {3.3} \\ \end{array}
+$$
+
+Moreover, the OU process converges exponentially to the standard Gaussian distribution:
+
+$$
+\begin{array}{l} \operatorname {K L} \left(\mathbb {P} \left(\mathbf {X} _ {t}\right) \| \boldsymbol {\Pi} _ {\mathbf {X}}\right) \leq e ^ {- t} \operatorname {K L} \left(\mathbb {P} \left(\mathbf {X} _ {0}\right) \| \boldsymbol {\Pi} _ {\mathbf {X}}\right), \\ \mathrm {K L} (\mathbb {P} (\mathbf {A} _ {t}) \| \boldsymbol {\Pi} _ {\mathbf {A}}) \leq e ^ {- t} \mathrm {K L} (\mathbb {P} (\mathbf {A} _ {0}) \| \boldsymbol {\Pi} _ {\mathbf {A}}), \\ \end{array}
+$$
+
+where KL denotes the Kullback-Leibler (KL) divergence used to quantify the discrepancy of two distributions (see Appendix D for a further discussion).
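The exponential KL decay can be checked in closed form when $\mathbf{X}_0$ is itself Gaussian, since the OU marginal then remains Gaussian with mean $e^{-t/2}m_0$ and variance $e^{-t}v_0 + (1 - e^{-t})$. A small one-dimensional sanity check with illustrative values:

```python
import numpy as np

def kl_gauss_to_std(mu, var):
    # KL( N(mu, var) || N(0, 1) ) in one dimension
    return 0.5 * (var + mu**2 - 1.0 - np.log(var))

def ou_marginal(m0, v0, t):
    # OU marginal at time t for a Gaussian initial distribution N(m0, v0)
    return np.exp(-0.5 * t) * m0, np.exp(-t) * v0 + 1.0 - np.exp(-t)

kl0 = kl_gauss_to_std(2.0, 4.0)   # KL at t = 0 for N(2, 4)
decay_ok = all(
    kl_gauss_to_std(*ou_marginal(2.0, 4.0, t)) <= np.exp(-t) * kl0 + 1e-12
    for t in (0.1, 1.0, 5.0)
)
```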
+
+# 3.1.2. REVERSE PROCESS.
+
The reverse process aims to generate graph samples from the convergent distribution by reversing the forward process. It is likewise described by a system of SDEs, driven by the score function:
+
+$$
+\begin{array}{l} \mathrm {d} \bar {\mathbf {X}} _ {t} = \left[ 1 / 2 \bar {\mathbf {X}} _ {t} - \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) \right] \mathrm {d} t + \mathrm {d} \bar {\mathbf {W}} _ {\mathbf {X}}, \\ \mathrm {d} \bar {\mathbf {A}} _ {t} = \left[ 1 / 2 \bar {\mathbf {A}} _ {t} - \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t}) \right] \mathrm {d} t + \mathrm {d} \bar {\mathbf {W}} _ {\mathbf {A}}, \tag {3.4} \\ \end{array}
+$$
+
where $\bar{\mathbf{X}}_t, \bar{\mathbf{A}}_t, \bar{\mathbf{W}}_{\mathbf{X}}, \bar{\mathbf{W}}_{\mathbf{A}}$ are the respective time-reversed processes. $\nabla_{\mathbf{X}} \log \mathbb{P}(\mathcal{G}_t)$ and $\nabla_{\mathbf{A}} \log \mathbb{P}(\mathcal{G}_t)$ are the partial scores (partial gradients of the log-probability of the graph distribution) with respect to the node features and the graph structure, respectively.
+
+Training SGGMs. The partial score functions can be estimated by training time-dependent neural networks (also referred to as the score networks) $s_{\theta}(.)$ and $s_{\phi}(.)$ , so that
+
+$$
s _ {\boldsymbol {\theta}} (\mathcal {G} _ {t}, t) \approx \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}), \quad s _ {\boldsymbol {\phi}} (\mathcal {G} _ {t}, t) \approx \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t}),
+$$
+
+where $\theta$ and $\phi$ are learnable parameters. Since the drift coefficients of the forward diffusion process are linear, the transition distribution $\mathbb{P}(\mathcal{G}_t|\mathcal{G}_0)$ can be decomposed in terms of $\mathbf{X}_t$ and $\mathbf{A}_t$ , as follows:
+
+$$
+\mathbb {P} \left(\mathcal {G} _ {t} \mid \mathcal {G} _ {0}\right) = \mathbb {P} \left(\mathbf {X} _ {t} \mid \mathbf {X} _ {0}\right) \mathbb {P} \left(\mathbf {A} _ {t} \mid \mathbf {A} _ {0}\right).
+$$
+
Notably, it is easy to sample from the transition distributions of each component, $\mathbb{P}(\mathbf{X}_t|\mathbf{X}_0)$ and $\mathbb{P}(\mathbf{A}_t|\mathbf{A}_0)$ , as they are Gaussian distributions with mean and variance determined by the coefficients of the forward diffusion process given in Eq. 3.3. With the decomposition above, the training objectives of SGGMs generalize that of (Song et al., 2020b) to minimizing the estimation error of the partial scores on the given graph dataset:
+
+$$
+\begin{array}{l} \min _ {\boldsymbol {\theta}} \mathbb {E} _ {t} \left[ \mathbb {E} _ {\mathcal {G} _ {0}} \mathbb {E} _ {\mathcal {G} _ {t} | \mathcal {G} _ {0}} \left\| s _ {\boldsymbol {\theta}} \left(\mathcal {G} _ {t}, t\right) - \nabla_ {\mathbf {X}} \log \mathbb {P} \left(\mathbf {X} _ {t} \mid \mathbf {X} _ {0}\right) \right\| _ {2} ^ {2} \right], \\ \operatorname * {m i n} _ {\boldsymbol {\phi}} \mathbb {E} _ {t} \bigg [ \mathbb {E} _ {\mathcal {G} _ {0}} \mathbb {E} _ {\mathcal {G} _ {t} | \mathcal {G} _ {0}} \big \| s _ {\boldsymbol {\phi}} (\mathcal {G} _ {t}, t) - \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathbf {A} _ {t} | \mathbf {A} _ {0}) \big \| _ {2} ^ {2} \bigg ]. \\ \end{array}
+$$
+
+The expectations above can be efficiently computed using Monte Carlo estimation with samples $(t, \mathcal{G}_0, \mathcal{G}_t)$ . For completeness, we provide a more detailed derivation and discussion on the training objectives in Appendix A.
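As a sketch of this Monte Carlo estimation for the feature objective (an illustration, not the authors' implementation): for the OU transition in Eq. 3.3 the conditional score is available in closed form, $\nabla_{\mathbf{X}} \log \mathbb{P}(\mathbf{X}_t \mid \mathbf{X}_0) = -(\mathbf{X}_t - e^{-t/2}\mathbf{X}_0)/(1 - e^{-t})$, so the regression target can be computed exactly for each sampled pair $(t, \mathbf{X}_t)$:

```python
import numpy as np

def transition_score(Xt, X0, t):
    """Closed-form partial score of the Gaussian transition (Eq. 3.3):
    grad_X log P(X_t | X_0) = -(X_t - exp(-t/2) X_0) / (1 - exp(-t))."""
    return -(Xt - np.exp(-0.5 * t) * X0) / (1.0 - np.exp(-t))

def mc_loss(score_net, X0, rng, n_samples=16):
    """One Monte Carlo estimate of the feature objective: draw (t, X_t)
    from the forward process, then regress the network output onto the
    closed-form transition score."""
    total = 0.0
    for _ in range(n_samples):
        t = rng.uniform(0.01, 5.0)
        Xt = np.exp(-0.5 * t) * X0 \
            + np.sqrt(1.0 - np.exp(-t)) * rng.standard_normal(X0.shape)
        total += np.mean((score_net(Xt, t) - transition_score(Xt, X0, t)) ** 2)
    return total / n_samples

rng = np.random.default_rng(0)
X0 = rng.standard_normal((8, 3))
zero_net = lambda Xt, t: np.zeros_like(Xt)   # untrained placeholder network
loss = mc_loss(zero_net, X0, rng)
```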
+
Functional Form of Score and Score Networks. To capture and model the dependencies between graph structure and node features, GNNs, such as Graph Convolutional Networks (GCNs) (Kipf & Welling, 2017) and Graph Transformers (Dwivedi & Bresson, 2020), are used as the score networks for SGGMs. A defining characteristic of GNNs is their use of the graph structure to combine and update the representation of each node or edge. From a functional perspective, GNNs can be expressed as a function $\mathcal{F}(\mathcal{T}(\mathbf{A})\mathcal{T}'(\mathbf{X}))$, where $\mathcal{T}$ and $\mathcal{T}'$ represent transformations applied to the adjacency matrix $\mathbf{A}$ and the feature matrix $\mathbf{X}$, respectively. For simplicity and interpretability, in this paper, we focus on the case where $\mathcal{T}$ and $\mathcal{T}'$ are identity mappings, corresponding to a vanilla GCN without normalization. Our analysis and results can be extended to other transformations by replacing $\mathbf{A}$ , $\mathbf{X}$ with $\mathcal{T}(\mathbf{A})$ , $\mathcal{T}'(\mathbf{X})$ . In addition, we focus on a feasible setting where we assume certain regularity conditions on the data distribution, and where the score function aligns with the functional form of GNNs.
+
+Assumption 3.1. The data distributions for node features and graph structure are twice differentiable and have bounded second moments, i.e.,
+
+$$
H _ {\mathbf {X}} := \mathbb {E} \| \mathbf {X} \| ^ {2} < \infty , \quad H _ {\mathbf {A}} := \mathbb {E} \| \mathbf {A} \| ^ {2} < \infty .
+$$
+
+Furthermore, the score functions $\nabla \log \mathbb{P}_t$ are $L$ -Lipschitz and can be written as a function of the form $\mathcal{F}(\mathbf{A}_t\mathbf{X}_t)$ .
+
+The regularity conditions such as continuity, boundedness and smoothness of data distributions and score function are commonly employed in other similar studies (Benton et al., 2023; Chen et al., 2022a; 2023a;b). We note that the smoothness assumption could be relaxed in recent analysis (Chen et al., 2023a); however, such a relaxation would complicate the results. In this paper, we target more user-friendly results for better interpretability and connection with practical settings, and hence retain these standard assumptions.
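A minimal sketch of a score network with the functional form $\mathcal{F}(\mathbf{A}\mathbf{X})$ assumed above; the hidden width, the weight initialization, and the additive time conditioning are illustrative choices, not part of the paper's setup:

```python
import numpy as np

def gcn_score_net(A, X, t, W1, W2):
    """Score-network sketch of the form F(A X) from Assumption 3.1:
    one message-passing step A @ X, a nonlinearity, and a linear head.
    The scalar time t enters additively here (a simplifying assumption)."""
    H = np.tanh(A @ X @ W1 + t)   # propagate features over the graph
    return H @ W2                  # map back to an N x F partial score

rng = np.random.default_rng(0)
N, F, hidden = 5, 3, 8
A = (rng.random((N, N)) < 0.4).astype(float)
A = np.triu(A, 1)
A = A + A.T                        # symmetric adjacency, no self-loops
X = rng.standard_normal((N, F))
W1 = rng.standard_normal((F, hidden)) / np.sqrt(F)
W2 = rng.standard_normal((hidden, F)) / np.sqrt(hidden)
S = gcn_score_net(A, X, 0.5, W1, W2)   # same shape as X
```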
+
+# 3.1.3. SAMPLING PROCESS.
+
+A discrete-time approximation of the sampling dynamics in Eq. 3.4 is required for practical implementation. Let $0 = t_0 \leq t_1 \leq \dots \leq t_M = T$ be the discretization points. For the $k$ -th discretization step ( $1 \leq k \leq M$ ), we denote $\Delta t_k := t_k - t_{k-1}$ as the step size for the $k$ -th step. Let $t_k' = T - t_{M-k}$ be the corresponding discretization points in the reverse-process SDE. In addition, we make the following assumption on the learned score functions with respect to the discretization.
+
Assumption 3.2. For both score functions $s_{\theta}(.)$ and $s_{\phi}(.)$ , we assume that there exist constants $\epsilon_{\mathbf{X}}$ and $\epsilon_{\mathbf{A}}$ such that:
+
+$$
+\sum_ {i = 1} ^ {M} \frac {\Delta t _ {i}}{T} \mathbb {E} \| \nabla_ {\mathbf {X}} \log \mathbb {P} \left(\mathcal {G} _ {t _ {i}}\right) - s _ {\boldsymbol {\theta}} \left(\mathcal {G} _ {t _ {i}}, t _ {i}\right) \| ^ {2} \leq \epsilon_ {\mathbf {X}} ^ {2},
+$$
+
+$$
+\sum_ {i = 1} ^ {M} \frac {\Delta t _ {i}}{T} \mathbb {E} \left\| \nabla_ {\mathbf {A}} \log \mathbb {P} \left(\mathcal {G} _ {t _ {i}}\right) - s _ {\phi} \left(\mathcal {G} _ {t _ {i}}, t _ {i}\right) \right\| ^ {2} \leq \epsilon_ {\mathbf {A}} ^ {2}.
+$$
+
+Assumption 3.2 is commonly used in convergence analyses of diffusion models with general distributions (Chen et al., 2022a; Zhang et al., 2024; Chen et al., 2023a). We will further discuss in Sec. 6 how we can extend the analysis and relax this assumption to obtain a more precise (but less general) result, with additional modelling assumptions on the underlying distribution.
+
+We consider two types of discretization schemes that are widely used in existing works: the Euler-Maruyama scheme and the exponential integrator scheme.
+
Euler-Maruyama Scheme is a simple, first-order discretization method that approximates an SDE via a first-order Taylor expansion. For the approximated trajectories $\widehat{\mathbf{X}}_t$ , $\widehat{\mathbf{A}}_t$ under this discretization scheme, the progression rule at the $k$ -th step is given by:
+
+$$
+\widehat {\mathbf {X}} _ {t _ {k + 1} ^ {\prime}} = \widehat {\mathbf {X}} _ {t _ {k} ^ {\prime}} + \left[ - 1 / 2 \widehat {\mathbf {X}} _ {t _ {k} ^ {\prime}} - s _ {\boldsymbol {\theta}} \left(\widehat {\mathcal {G}} _ {t _ {k} ^ {\prime}}, t _ {k} ^ {\prime}\right) \right] \Delta t _ {k} + \Delta t _ {k} \bar {\mathbf {W}} _ {\mathbf {X}},
+$$
+
+$$
+\widehat {\mathbf {A}} _ {t _ {k + 1} ^ {\prime}} = \widehat {\mathbf {A}} _ {t _ {k} ^ {\prime}} + \left[ - 1 / 2 \widehat {\mathbf {A}} _ {t _ {k} ^ {\prime}} - s _ {\phi} \left(\widehat {\mathcal {G}} _ {t _ {k} ^ {\prime}}, t _ {k} ^ {\prime}\right) \right] \Delta t _ {k} + \Delta t _ {k} \bar {\mathbf {W}} _ {\mathbf {A}}, \tag {3.5}
+$$
+
where $\Delta \bar{\mathbf{W}}_{\mathbf{X}}$ and $\Delta \bar{\mathbf{W}}_{\mathbf{A}}$ represent the increments of the Wiener processes within the interval.
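One Euler-Maruyama step of the coupled system can be sketched as follows, following the signs as written in Eq. 3.5 and drawing the Wiener increments as Gaussians with standard deviation $\sqrt{\Delta t_k}$ (an illustrative sketch; `s_theta` and `s_phi` are stand-ins for trained score networks):

```python
import numpy as np

def em_reverse_step(X, A, s_theta, s_phi, t, dt, rng):
    """One Euler-Maruyama step of the coupled reverse system (Eq. 3.5).
    The Wiener increments are Gaussian with standard deviation sqrt(dt)."""
    dWx = np.sqrt(dt) * rng.standard_normal(X.shape)
    dWa = np.sqrt(dt) * rng.standard_normal(A.shape)
    X_next = X + (-0.5 * X - s_theta(X, A, t)) * dt + dWx
    A_next = A + (-0.5 * A - s_phi(X, A, t)) * dt + dWa
    return X_next, A_next

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
A = rng.standard_normal((5, 5))
s_theta = lambda X, A, t: np.zeros_like(X)   # placeholder score networks
s_phi = lambda X, A, t: np.zeros_like(A)
X1, A1 = em_reverse_step(X, A, s_theta, s_phi, t=1.0, dt=0.01, rng=rng)
```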
+
+Exponential Integrator Scheme provides a more accurate method for solving SDEs, especially when the system has a semi-linear structure. It discretizes the nonlinear terms while retaining the continuous dynamics arising from the linear terms. Specifically, the nonlinear term is discretized while the linear part evolves continuously according to:
+
+$$
+\begin{array}{l} \widehat {\mathbf {X}} _ {t _ {k + 1} ^ {\prime}} = e ^ {1 / 2 \Delta t _ {k}} \widehat {\mathbf {X}} _ {t _ {k} ^ {\prime}} + 2 \left(e ^ {1 / 2 \Delta t _ {k}} - 1\right) s _ {\boldsymbol {\theta}} \left(\widehat {\mathcal {G}} _ {t _ {k} ^ {\prime}}, t _ {k} ^ {\prime}\right) \\ + \sqrt {e ^ {\Delta t _ {k}} - 1} \xi_ {k}, \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \widehat {\mathbf {A}} _ {t _ {k + 1} ^ {\prime}} = e ^ {1 / 2 \Delta t _ {k}} \widehat {\mathbf {A}} _ {t _ {k} ^ {\prime}} + 2 \left(e ^ {1 / 2 \Delta t _ {k}} - 1\right) s _ {\phi} \left(\widehat {\mathcal {G}} _ {t _ {k} ^ {\prime}}, t _ {k} ^ {\prime}\right) \\ + \sqrt {e ^ {\Delta t _ {k}} - 1} \xi_ {k} ^ {\prime}. \tag {3.6} \\ \end{array}
+$$
+
where $\pmb{\xi}_k$ and $\pmb{\xi}_k^{\prime}$ are sampled from the standard Gaussian distribution.
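A sketch of one exponential-integrator step, following Eq. 3.6 as written, with stand-in callables `s_theta` and `s_phi` for the trained score networks (illustrative only):

```python
import numpy as np

def exp_int_reverse_step(X, A, s_theta, s_phi, t, dt, rng):
    """One exponential-integrator step (Eq. 3.6): the linear drift is
    integrated exactly over the step; only the score term is frozen."""
    scale = np.exp(0.5 * dt)
    noise_std = np.sqrt(np.exp(dt) - 1.0)
    X_next = scale * X + 2.0 * (scale - 1.0) * s_theta(X, A, t) \
        + noise_std * rng.standard_normal(X.shape)
    A_next = scale * A + 2.0 * (scale - 1.0) * s_phi(X, A, t) \
        + noise_std * rng.standard_normal(A.shape)
    return X_next, A_next

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
A = rng.standard_normal((5, 5))
s_theta = lambda X, A, t: np.zeros_like(X)   # placeholder score networks
s_phi = lambda X, A, t: np.zeros_like(A)
X1, A1 = exp_int_reverse_step(X, A, s_theta, s_phi, t=1.0, dt=0.01, rng=rng)
```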
+
+# 3.2. Graph Generation Paradigms
+
+For graph generation tasks, there are three distinct paradigms, each with its unique significance.
+
+Joint Generation of Graph Structure and Node Features. This paradigm aims to simultaneously generate both the graph structure and node features. It is particularly significant in biological applications, such as protein design (Vignac et al., 2022; Jo et al., 2022), where the graph structure represents the interactions between proteins (nodes), and the node features describe attributes like protein function, structure, or expression levels. Jointly generating both components ensures that the resulting graph is biologically plausible, accurately reflecting both the interaction patterns and functional properties of the proteins. The corresponding forward process is given by Eq. 3.1.
+
+Node Feature Generation with Fixed Graph Structure. In this paradigm, the goal is to generate node features while maintaining a fixed graph structure. This is particularly important in domains like molecular graph generation, where the connectivity of atoms is predefined, but their properties—such as chemical features—can vary (Sanchez-Lengeling et al., 2017; Simonovsky & Komodakis, 2018; Jin et al., 2018). By focusing on feature generation, this paradigm enables the exploration of diverse attribute combinations while maintaining a consistent structural backbone, facilitating the generation of realistic and varied instances for a given graph structure. In this setting, node features are generated based on a fixed graph structure. The corresponding forward process is:
+
+$$
+\begin{array}{l} \mathrm {d} \mathbf {X} _ {t} = 1 / 2 \mathbf {X} _ {t} \mathrm {d} t + \mathrm {d} \mathbf {W} _ {\mathbf {X}}, \\ \mathrm {d} \mathbf {A} _ {t} = 0, \quad \mathbf {A} _ {0} = \mathbf {A} ^ {*} \tag {3.7} \\ \end{array}
+$$
+
+where $\mathbf{A}^*$ is the fixed graph structure that remains constant during the feature generation process.
+
+Graph Structure Generation with Fixed Node Features. This paradigm focuses on generating the graph structure while keeping the node features fixed. It is particularly useful when the node features are known or predefined, but the relationships between the nodes (i.e., the graph structure) need to be learned or generated (Martínez et al., 2016; Zhang & Chen, 2018; Benson et al., 2016; Niu et al., 2020). For example, in social network analysis, the characteristics of individuals (e.g., age, location, interests) are known, but the interactions or connections between them need to be inferred. In this setting, the generating process focuses solely on the graph structure, as modeled by:
+
+$$
+\begin{array}{l} \mathrm {d} \mathbf {X} _ {t} = 0, \quad \mathbf {X} _ {0} = \mathbf {X} ^ {*} \\ \mathrm {d} \mathbf {A} _ {t} = 1 / 2 \mathbf {A} _ {t} \mathrm {d} t + \mathrm {d} \mathbf {W} _ {\mathbf {A}}, \tag {3.8} \\ \end{array}
+$$
+
where $\mathbf{X}^*$ is the node feature matrix that remains constant during the structure generation process.
+
+# 4. Main Results
+
We next present our main results on the convergence behavior of SGGMs across the three graph generation paradigms. We first state the convergence results for each paradigm, deferring the detailed discussion until all paradigms have been presented.
+
+# 4.1. Convergence Results
+
+Theorem 4.1 (Convergence Bound of Node Feature Generation with Fixed Structure). Consider the graph generation paradigm in Eq. 3.7. Under Assumptions 3.1 and 3.2 and supposing that the fixed graph structure $\mathbf{A}^*$ has a bounded norm, i.e., $\| \mathbf{A}^* \| ^2 \leq \sigma_{\mathbf{A}}^2$ , we have the following results:
+
+- For exponential integration scheme,
+
+$$
+\begin{array}{l} \operatorname {K L} \left(\mathbb {P} \left(\mathbf {X} _ {0}\right) \| \mathbb {P} \left(\widehat {\mathbf {X}} _ {0}\right)\right) \lesssim (H _ {\mathbf {X}} + N F) e ^ {- T} + T \epsilon_ {\mathbf {X}} ^ {2} \\ + N F L \left(\sigma_ {\mathbf {A}} ^ {2} L \sum_ {i = 1} ^ {M} \Delta t _ {i} ^ {2} + \sum_ {i = 1} ^ {M} \Delta t _ {i} ^ {3}\right). \tag {4.1} \\ \end{array}
+$$
+
+- For Euler-Maruyama scheme,
+
+$$
+\begin{array}{l} \operatorname {K L} \left(\mathbb {P} \left(\mathbf {X} _ {0}\right) \| \mathbb {P} \left(\widehat {\mathbf {X}} _ {0}\right)\right) \lesssim \left(H _ {\mathbf {X}} + N F\right) e ^ {- T} + T \epsilon_ {\mathbf {X}} ^ {2} \\ + \left(N F L ^ {2} \sigma_ {\mathbf {A}} ^ {2} + N F\right) \sum_ {i = 1} ^ {M} \Delta t _ {i} ^ {2} + \left(N F L + H _ {\mathbf {X}}\right) \sum_ {i = 1} ^ {M} \Delta t _ {i} ^ {3}. \tag {4.2} \\ \end{array}
+$$
+
Theorem 4.2 (Convergence Bound of Graph Structure Generation with Fixed Node Features). Consider the graph generation paradigm in Eq. 3.8. Under Assumptions 3.1 and 3.2 and supposing the fixed feature matrix $\mathbf{X}^*$ has a bounded norm, i.e., $\| \mathbf{X}^{*}\|^{2}\leq \sigma_{\mathbf{X}}^{2}$ , we have the following results:
+
+- For exponential integration scheme,
+
+$$
+\begin{array}{l} \mathrm {K L} (\mathbb {P} (\mathbf {A} _ {0}) \| \mathbb {P} (\widehat {\mathbf {A}} _ {0})) \lesssim (H _ {\mathbf {A}} + N ^ {2}) e ^ {- T} + T \epsilon_ {\mathbf {A}} ^ {2} \\ + N ^ {2} L \left(\sigma_ {\mathbf {X}} ^ {2} L \sum_ {i = 1} ^ {M} \Delta t _ {i} ^ {2} + \sum_ {i = 1} ^ {M} \Delta t _ {i} ^ {3}\right). \tag {4.3} \\ \end{array}
+$$
+
+- For Euler-Maruyama scheme,
+
+$$
+\begin{array}{l} \operatorname {K L} \left(\mathbb {P} \left(\mathbf {A} _ {0}\right) \| \mathbb {P} \left(\widehat {\mathbf {A}} _ {0}\right)\right) \lesssim \left(H _ {\mathbf {A}} + N ^ {2}\right) e ^ {- T} + T \epsilon_ {\mathbf {A}} ^ {2} \\ + \left(N ^ {2} L ^ {2} \sigma_ {\mathbf {X}} ^ {2} + N ^ {2}\right) \sum_ {i = 1} ^ {M} \Delta t _ {i} ^ {2} + \left(N ^ {2} L + H _ {\mathbf {X}}\right) \sum_ {i = 1} ^ {M} \Delta t _ {i} ^ {3}. \tag {4.4} \\ \end{array}
+$$
+
+Theorem 4.3 (Convergence Bound of Joint Generation of Graph Structure and Node Features). Consider the graph generation paradigm in Eq. 3.1. Under Assumptions 3.1 and 3.2, we have the following results:
+
+- For exponential integration scheme,
+
+$$
+\begin{array}{l} \operatorname {K L} \left(\mathbb {P} \left(\mathbf {X} _ {0}\right) \| \mathbb {P} \left(\widehat {\mathbf {X}} _ {0}\right)\right) \lesssim \left(H _ {\mathbf {X}} + N F\right) e ^ {- T} + T \epsilon_ {\mathbf {X}} ^ {2} \\ + N F L \left(L \sum_ {i = 1} ^ {M} \Delta t _ {i} ^ {2} + \sum_ {i = 1} ^ {M} \Delta t _ {i} ^ {3}\right), \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \operatorname {K L} \left(\mathbb {P} \left(\mathbf {A} _ {0}\right) \| \mathbb {P} \left(\widehat {\mathbf {A}} _ {0}\right)\right) \lesssim \left(H _ {\mathbf {A}} + N ^ {2}\right) e ^ {- T} + T \epsilon_ {\mathbf {A}} ^ {2} \\ + N ^ {2} L \left(L \sum_ {i = 1} ^ {M} \Delta t _ {i} ^ {2} + \sum_ {i = 1} ^ {M} \Delta t _ {i} ^ {3}\right). \tag {4.5} \\ \end{array}
+$$
+
+- For Euler-Maruyama scheme,
+
+$$
+\begin{array}{l} \operatorname {K L} \left(\mathbb {P} \left(\mathbf {X} _ {0}\right) \| \mathbb {P} \left(\widehat {\mathbf {X}} _ {0}\right)\right) \lesssim \left(H _ {\mathbf {X}} + N ^ {2}\right) e ^ {- T} + T \epsilon_ {\mathbf {X}} ^ {2} \\ + (N F L + N F) \sum_ {i = 1} ^ {M} \Delta t _ {i} ^ {2} + (N F L + H _ {\mathbf {X}}) \sum_ {i = 1} ^ {M} \Delta t _ {i} ^ {3}, \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \mathrm {K L} (\mathbb {P} (\mathbf {A} _ {0}) \| \mathbb {P} (\widehat {\mathbf {A}} _ {0})) \lesssim (H _ {\mathbf {A}} + N ^ {2}) e ^ {- T} + T \epsilon_ {\mathbf {A}} ^ {2} \\ + \left(N ^ {2} L + N ^ {2}\right) \sum_ {i = 1} ^ {M} \Delta t _ {i} ^ {2} + \left(N ^ {2} L + H _ {\mathbf {A}}\right) \sum_ {i = 1} ^ {M} \Delta t _ {i} ^ {3}. \tag {4.6} \\ \end{array}
+$$
+
+The proofs of Theorem 4.1, Theorem 4.2 and Theorem 4.3 can be found in the Appendix.
+
+# 4.2. Discussion and Insights.
+
+In this section, we provide a detailed discussion of the convergence results, highlighting several interesting insights and their implications.
+
Source of Error. From Theorems 4.1, 4.2, and 4.3, we identify three primary sources of generative errors in SGGMs: 1) Distance to the convergent distribution: represented by the first term, $(H_{\mathbf{A}} + N^{2})e^{-T}$ and $(H_{\mathbf{X}} + NF)e^{-T}$ , in the convergence bounds, this error quantifies how far the forward process has transformed the initial data distribution towards the convergent distribution. Interestingly, it depends on the data distribution only through its second moment, and is otherwise determined by the dimensionality and the length of the diffusion process; 2) Score estimation error: captured by the second term, $T\epsilon_{\mathbf{X}}^{2}$ and $T\epsilon_{\mathbf{A}}^{2}$ , in the convergence bounds, this error arises from inaccuracies in estimating the underlying score functions; 3) Discretization error: represented by the remaining terms in the convergence bounds, this error is induced by the discretization of the sampling scheme.
+
+Connection with Graph Data. The convergence bounds reveal several interesting connections between the properties of the graph data and the convergence behavior of the SGGM. The scale of graph data is determined by two factors: the size of the graph $N$ and the dimension of the features $F$ . Our convergence results show that these two factors affect convergence behavior in a non-isotropic manner. Specifically, we have the following remark:
+
+Remark 4.4. The performance of SGGMs deteriorates more significantly as the graph size increases compared to an increase in the feature dimensionality.
+
+This is evident from the convergence bounds, where the bounds grow quadratically with respect to graph size but only linearly with respect to feature dimensionality. This also explains why SGGMs perform well for smaller graphs, even when the features are complex (Jo et al., 2022).
+
+Additionally, the terms $\sigma_{\mathbf{A}}$ and $\sigma_{\mathbf{X}}$ , which represent bounds on the norms of the graph structure and feature matrices, are significant in the feature generation and structure generation paradigms respectively. Larger $\sigma_{\mathbf{A}}$ and $\sigma_{\mathbf{X}}$ lead to a larger risk of generative errors (i.e., larger convergence bound). Based on spectral graph theory, $\sigma_{\mathbf{A}}$ has an intrinsic connection with the graph structure. For example, $\sigma_{\mathbf{A}}$ is closely related to the maximum degree of the graph (Spielman, 2012):
+
+$$
+\sigma_ {\mathbf {A}} \leq \max \operatorname {d e g r e e} (\mathcal {G}).
+$$
+
+Based on this connection, we make the following remark:
+
+Remark 4.5. SGGMs are better at learning and generating graphs with uniform degree distributions (more regular-like) than graphs with heterogeneous degree distributions (e.g., power-law graphs).
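+The spectral bound above can be checked numerically. The sketch below (our illustration; the two test graphs are assumed, not the paper's benchmarks) verifies $\sigma_{\mathbf{A}} \leq \max\operatorname{degree}(\mathcal{G})$ on a 2-regular ring, where the bound is tight, and on a star graph, a stylized heterogeneous-degree case.
+
+```python
+# Numerical check of the spectral bound sigma_A <= max degree(G) on two
+# hand-built graphs (an illustrative choice, not the paper's benchmarks).
+import numpy as np
+
+def adjacency_ring(n):
+    """2-regular ring: node i connects to i-1 and i+1 (mod n)."""
+    A = np.zeros((n, n))
+    for i in range(n):
+        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
+    return A
+
+def adjacency_star(n):
+    """Star: node 0 connects to all others, so max degree is n - 1."""
+    A = np.zeros((n, n))
+    A[0, 1:] = A[1:, 0] = 1.0
+    return A
+
+for A in (adjacency_ring(20), adjacency_star(20)):
+    sigma = np.linalg.norm(A, 2)     # spectral norm of the adjacency
+    max_deg = A.sum(axis=1).max()    # maximum degree
+    assert sigma <= max_deg + 1e-9   # the spectral bound holds
+```
+
+For the ring the bound is met with equality ($\sigma_{\mathbf{A}} = 2$), while for the star $\sigma_{\mathbf{A}} = \sqrt{n-1}$ sits well below the hub degree $n-1$.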
+
+On the other hand, $\sigma_{\mathbf{X}}$ can be reduced with commonly used normalization techniques, leading to a smaller convergence bound. This insight yields the following remark:
+
+Remark 4.6. Applying the normalization technique to $\mathbf{X}$ in the feature generation paradigm can improve the convergence bound of SGGMs.
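+A minimal sketch of this remark (with an assumed synthetic feature matrix, not the paper's data): column-wise standardization of $\mathbf{X}$ shrinks its spectral norm $\sigma_{\mathbf{X}}$, and hence the feature-side bound.
+
+```python
+# Sketch: standardizing the feature matrix reduces its spectral norm.
+# The raw features (assumed setup) have a large mean, which dominates sigma_X.
+import numpy as np
+
+rng = np.random.default_rng(0)
+X = rng.normal(loc=10.0, scale=3.0, size=(50, 20))  # raw features
+sigma_raw = np.linalg.norm(X, 2)
+
+# Column-wise standardization: zero mean, unit variance per feature
+X_norm = (X - X.mean(axis=0)) / X.std(axis=0)
+sigma_norm = np.linalg.norm(X_norm, 2)
+
+assert sigma_norm < sigma_raw  # smaller norm bound after normalization
+```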
+
+Hyperparameters. Next, we discuss the implications of the convergence results for the hyperparameters involved in SGGM learning, particularly the length of the diffusion process $T$ and the sampling step $M$ . For clarity, we assume uniform discretization steps and focus on the joint generation paradigm with the exponential integrator scheme. The implications also extend to other paradigms and the Euler-Maruyama scheme (see Appendix D).
+
+Corollary 4.7. Suppose the discretization step is uniform, $\Delta t_i = T / M \leq 1$ for all $i \in \{1, \dots, M\}$. Then the convergence results of SGGMs under the exponential integrator scheme are given by
+
+$$
+\begin{array}{l} \operatorname {K L} (\mathbb {P} (\mathbf {X} _ {0}) \| \mathbb {P} (\widehat {\mathbf {X}} _ {T})) \lesssim \\ (H _ {\mathbf {X}} + N F) e ^ {- T} + T \epsilon_ {\mathbf {X}} ^ {2} + \frac {N F L ^ {2} T ^ {2}}{M}, \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \operatorname {K L} \left(\mathbb {P} \left(\mathbf {A} _ {0}\right) \| \mathbb {P} \left(\widehat {\mathbf {A}} _ {T}\right)\right) \lesssim \\ (H _ {\mathbf {A}} + N ^ {2}) e ^ {- T} + T \epsilon_ {\mathbf {A}} ^ {2} + \frac {N ^ {2} L ^ {2} T ^ {2}}{M}. \\ \end{array}
+$$
+
+Furthermore, taking
+
+$$
+\begin{array}{l} T = \max \left\{\log \left(\frac {H _ {\mathbf {X}} + N F}{\epsilon_ {\mathbf {X}} ^ {2}}\right), \log \left(\frac {H _ {\mathbf {A}} + N ^ {2}}{\epsilon_ {\mathbf {A}} ^ {2}}\right) \right\}, \\ M = \max \left\{\frac {N F L ^ {2} T ^ {2}}{\epsilon_ {\mathbf {X}} ^ {2}}, \frac {N ^ {2} L ^ {2} T ^ {2}}{\epsilon_ {\mathbf {A}} ^ {2}} \right\}, \\ \end{array}
+$$
+
+we have that the overall generative error of SGGMs is bounded by the score estimation errors, i.e.,
+
+$$
+\mathrm {K L} (\mathbb {P} (\mathbf {X} _ {0}) \| \mathbb {P} (\widehat {\mathbf {X}} _ {T})) \lesssim \epsilon_ {\mathbf {X}} ^ {2}, \quad \mathrm {K L} (\mathbb {P} (\mathbf {A} _ {0}) \| \mathbb {P} (\widehat {\mathbf {A}} _ {T})) \lesssim \epsilon_ {\mathbf {A}} ^ {2}.
+$$
+
+This corollary provides a cleaner formulation of the convergence bounds and offers guidance for choosing the hyperparameters. It makes the trade-off in the diffusion length $T$ evident: larger values of $T$ reduce the distance to the convergent distribution and improve convergence to the target distribution, but a larger number of sampling steps $M$ (which increases computational cost) is then required to control the discretization error.
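+As a concrete illustration (with assumed error targets and constants, not values from the paper), the prescriptions of Corollary 4.7 can be evaluated directly; note that $T$ grows only logarithmically in the data scale, while $M$ grows linearly in $NF$ and $N^2$.
+
+```python
+# Evaluate the hyper-parameter choices of Corollary 4.7 for an assumed
+# setting; all constants below are illustrative.
+import math
+
+N, F, L = 50, 50, 1.0     # graph size, feature dimension, Lipschitz constant
+eps_X2 = eps_A2 = 1e-2    # target score-estimation errors (assumed)
+H_X = H_A = 1.0           # second-moment constants (assumed)
+
+T = max(math.log((H_X + N * F) / eps_X2),
+        math.log((H_A + N**2) / eps_A2))
+M = math.ceil(max(N * F * L**2 * T**2 / eps_X2,
+                  N**2 * L**2 * T**2 / eps_A2))
+```
+
+The logarithmic growth of $T$ is what keeps the overall bound dominated by the score estimation errors, at the price of a large number of sampling steps $M$.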
+
+Sampling Scheme. When comparing the convergence bounds under the Euler-Maruyama scheme and the exponential integrator scheme (e.g., Eq. 4.1 vs. Eq. 4.2), we observe that the Euler-Maruyama scheme introduces additional error terms due to higher-order discretization errors. As a result, it yields a larger convergence bound than the exponential integrator scheme, so the exponential integrator scheme is theoretically expected to outperform the Euler-Maruyama scheme in graph generation tasks. This finding aligns with results for SGMs (Chen et al., 2023a).
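+The intuition behind the gap can be seen on the linear part of the drift alone. The toy sketch below (our illustration, a deterministic 1-D analogue rather than the paper's sampler) integrates $dx/dt = -x$: the exponential integrator treats the linear term exactly, while the Euler scheme accumulates discretization error at every step.
+
+```python
+# Toy 1-D comparison: Euler vs. exponential integrator on dx/dt = -x.
+import math
+
+x0, T, M = 1.0, 5.0, 50
+h = T / M
+exact = x0 * math.exp(-T)
+
+x_euler, x_expint = x0, x0
+for _ in range(M):
+    x_euler += h * (-x_euler)    # Euler step: x <- (1 - h) x
+    x_expint *= math.exp(-h)     # exponential integrator: exact on linear part
+
+err_euler = abs(x_euler - exact)
+err_expint = abs(x_expint - exact)
+assert err_expint < err_euler
+```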
+
+
+Figure 1. Performance of SGGMs. Panels: (a) Regular Graph; (b) Power-law Graph; (c) Graph Size and Structure; (d) Feature and Normalization. Figs. 1(a) and 1(b) are examples of regular and power-law graphs generated from our synthetic graph model. Fig. 1(c) plots the performance of SGGMs with respect to increasing graph size for regular and power-law graphs; the feature size in this experiment is fixed to 50. Fig. 1(d) plots the performance of SGGMs with respect to increasing feature size with and without normalization; the graph size is fixed to 50.
+
+# 5. Empirical Study
+
+We next present an empirical study to validate our theoretical results. Specifically, we answer the following questions:
+
+Q1: Is SGGM more effective at learning and generating regular graphs compared to power-law graphs in the feature generation paradigm?
+Q2: Does applying normalization techniques improve the convergence of SGGM in the feature generation paradigm?
+Q3: Does increasing graph size result in more significant performance degradation than increasing feature size?
+
+Experimental Setup. We conduct controlled experiments using synthetic graph models, consistent with other theoretical studies, to examine how the performance of SGGMs varies with graph size, graph structure (regular vs. power-law), feature size, and the application of normalization techniques. To generate power-law graphs, we employ the well-known Barabási-Albert model (Pósfai & Barabási, 2016). Node features for all experiments are drawn from a Gaussian distribution with a mean of 1 and a variance of 2 (to distinguish them from the convergent distribution). The SGGM is implemented with the hyperparameters specified in the theoretical analysis, using a diffusion length $T = 100$, $M = 500$ sampling steps, and a simple one-layer GCN as the score network. For simplicity, we use uniform discretization and the exponential integrator scheme for sampling. Each experiment generates 200 independent samples, which are split into training, validation, and test sets in a 6:2:2 ratio, and is repeated over five independent trials, with results averaged across trials to ensure statistical robustness. Further technical details of the experiments and implementation can be found in Appendix E.
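+For reference, preferential attachment in the spirit of the Barabási-Albert model can be sketched in a few lines (a simplified variant with one edge per arriving node; the implementation details here are our own assumptions, while the experiments use the standard model).
+
+```python
+# Minimal preferential-attachment sketch: each new node attaches to an
+# existing node with probability proportional to its current degree.
+import random
+
+def barabasi_albert_edges(n, seed=0):
+    random.seed(seed)
+    edges = [(0, 1)]      # seed graph: a single edge
+    targets = [0, 1]      # degree-weighted pool of endpoints
+    for new in range(2, n):
+        old = random.choice(targets)  # preferential attachment
+        edges.append((new, old))
+        targets += [new, old]
+    return edges
+
+edges = barabasi_albert_edges(200)
+degrees = {}
+for u, v in edges:
+    degrees[u] = degrees.get(u, 0) + 1
+    degrees[v] = degrees.get(v, 0) + 1
+```
+
+The resulting degree sequence is heavy-tailed: a few hub nodes collect far more edges than the mean degree of roughly 2, which is exactly the heterogeneity that Remark 4.5 associates with larger generative error.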
+
+Results. The experimental results, summarized in Fig. 1, provide affirmative answers to the questions (Q1-Q3) and validate our theoretical predictions. Fig. 1(c) shows the performance of SGGMs in the graph structure generation paradigm as the graph size increases. As predicted, SGGMs perform better on regular graphs than on power-law graphs. Fig. 1(d) illustrates the performance of SGGMs as the feature size increases. As predicted, applying normalization improves SGGMs' performance. Comparing the trends in Fig. 1(c) and Fig. 1(d), we observe that SGGMs' performance degrades more rapidly with increasing graph size than with increasing feature size.
+
+# 6. Concluding Discussions
+
+This paper presents a novel convergence analysis for SGGMs across three common graph generation paradigms. Our analysis identifies the primary sources of generative error in SGGMs and provides valuable insights into the factors unique to graph data that influence their convergence behavior. Specifically, we examine how graph size, structure, and feature dimensionality affect the convergence bound. Additionally, we offer practical recommendations for improving model performance, including the use of normalization and the selection of key hyperparameters, such as diffusion length and sampling step size. Our empirical study using synthetic graph data validates the theoretical predictions. Overall, this work advances the theoretical foundation of score-based generative models, confirms the applicability of SGGMs in critical applications, and provides actionable insights for their effective use in graph generation tasks.
+
+# 6.1. Future Works
+
+Learning Process. Our current analysis assumes a static outcome of the learning process, as specified in Assumption 3.2. However, in practice, the estimation (or learning) of score functions is subject to errors that accumulate over time as the score network learns from data. These errors are influenced by the specific behavior of the learning algorithm. Future work could incorporate the dynamics of the learning process in SGGMs, as explored in studies such as (Chen et al., 2022a;b; Benton et al., 2024; Zhu et al., 2023), to gain a more nuanced understanding of SGGMs' behavior. This would be especially valuable for addressing challenges like sample complexity in SGGMs and for providing insights into their empirical performance.
+
+More Precise Analysis with Synthetic Graph Model. The analysis presented in this paper does not assume any specific structure for the data distribution, making it applicable to general smooth and bounded distributions. However, to make the analysis more precise, one could impose additional structural assumptions on the underlying data distribution, similar to case studies with Gaussian mixture models in SGM research (Chen et al., 2024b; Shah et al., 2023). A natural extension for graph data would be to assume that the graph is generated from the contextual stochastic block model (Deshpande et al., 2018). Integrating this modeling choice could lead to more accurate convergence bounds and provide valuable insights that would require a more detailed, fine-grained analysis.
+
+# Impact Statement
+
+This paper presents work whose goal is to advance the understanding of score-based graph generation methods. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
+
+# Acknowledgement
+
+We would like to thank the anonymous reviewers and area chairs for their helpful comments. This work was supported in part by grants from Hong Kong RGC under the contracts 17207621, 17203522, and C7004-22G (CRF).
+
+# References
+
+Abbe, E. Community detection and stochastic block models: recent developments. Journal of Machine Learning Research, 18(177):1-86, 2018.
+Barabási, A.-L. and Albert, R. Emergence of scaling in random networks. Science, 286(5439):509-512, 1999.
+Benson, A. R., Gleich, D. F., and Leskovec, J. Higher-order organization of complex networks. Science, 353 (6295):163-166, July 2016. ISSN 1095-9203. doi: 10. 1126/science.aad9029. URL http://dx.doi.org/ 10.1126/science.aad9029.
+Benton, J., Deligiannidis, G., and Doucet, A. Error bounds for flow matching methods. arXiv preprint arXiv:2305.16860, 2023.
+Benton, J., Bortoli, V., Doucet, A., and Deligiannidis, G. Nearly d-linear convergence bounds for diffusion models via stochastic localization. 2024.
+
+Chen, H., Lee, H., and Lu, J. Improved analysis of score-based generative modeling: User-friendly bounds under minimal smoothness assumptions. In International Conference on Machine Learning, pp. 4735-4763. PMLR, 2023a.
+Chen, M., Huang, K., Zhao, T., and Wang, M. Score approximation, estimation and distribution recovery of diffusion models on low-dimensional data. In International Conference on Machine Learning, pp. 4672-4712. PMLR, 2023b.
+Chen, S., Chewi, S., Li, J., Li, Y., Salim, A., and Zhang, A. R. Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions. arXiv preprint arXiv:2209.11215, 2022a.
+Chen, S., Chewi, S., Lee, H., Li, Y., Lu, J., and Salim, A. The probability flow ode is provably fast. Advances in Neural Information Processing Systems, 36, 2024a.
+Chen, S., Kontonis, V., and Shah, K. Learning general gaussian mixtures with efficient score matching. arXiv preprint arXiv:2404.18893, 2024b.
+Chen, X., He, J., Han, X., and Liu, L.-P. Efficient and degree-guided graph generation via discrete diffusion modeling. arXiv preprint arXiv:2305.04111, 2023c.
+Chen, Y., Chewi, S., Salim, A., and Wibisono, A. Improved analysis for a proximal algorithm for sampling. In Conference on Learning Theory, pp. 2984-3014. PMLR, 2022b.
+Cherifi, H., Palla, G., Szymanski, B. K., and Lu, X. On community structure in complex networks: challenges and opportunities. Applied Network Science, 4(1):1-35, 2019.
+Chewi, S., Erdogdu, M. A., Li, M., Shen, R., and Zhang, M. S. Analysis of Langevin monte carlo from poincare to log-sobolev. Foundations of Computational Mathematics, pp. 1-51, 2024.
+De Bortoli, V., Thornton, J., Heng, J., and Doucet, A. Diffusion schrödinger bridge with applications to score-based generative modeling. Advances in Neural Information Processing Systems, 34:17695-17709, 2021.
+De Cao, N. and Kipf, T. Molgan: An implicit generative model for small molecular graphs. arXiv preprint arXiv:1805.11973, 2018.
+Deshpande, Y., Sen, S., Montanari, A., and Mossel, E. Contextual stochastic block models. Advances in Neural Information Processing Systems, 31, 2018.
+Dwivedi, V. P. and Bresson, X. A generalization of transformer networks to graphs. arXiv preprint arXiv:2012.09699, 2020.
+
+Erdős, P., Rényi, A., et al. On the evolution of random graphs. Publ. math. inst. hung. acad. sci, 5(1):17-60, 1960.
+Fu, T., Gao, W., Xiao, C., Yasonik, J., Coley, C. W., and Sun, J. Differentiable scaffolding tree for molecular optimization. arXiv preprint arXiv:2109.10469, 2021.
+Gnaneshwar, D., Ramsundar, B., Gandhi, D., Kurchin, R., and Viswanathan, V. Score-based generative models for molecule generation. arXiv preprint arXiv:2203.04698, 2022.
+Hagberg, A., Swart, P., and S Chult, D. Exploring network structure, dynamics, and function using networkx. Technical report, Los Alamos National Lab.(LANL), Los Alamos, NM (United States), 2008.
+Ho, J., Jain, A., and Abbeel, P. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33:6840-6851, 2020.
+Holland, P. W., Laskey, K. B., and Leinhardt, S. Stochastic blockmodels: First steps. Social networks, 5(2):109-137, 1983.
+Huang, D. Z., Huang, J., and Lin, Z. Convergence analysis of probability flow ode for score-based generative models, 2024. URL https://arxiv.org/abs/2404.09730.
+Jacobsen, M. Laplace and the origin of the Ornstein-Uhlenbeck process. Bernoulli, 2(3):271-286, 1996.
+Jensen, J. H. A graph-based genetic algorithm and generative model/monte carlo tree search for the exploration of chemical space. Chemical science, 10(12):3567-3572, 2019.
+Jin, W., Barzilay, R., and Jaakkola, T. Junction tree variational autoencoder for molecular graph generation. In International conference on machine learning, pp. 2323-2332. PMLR, 2018.
+Jo, J., Lee, S., and Hwang, S. J. Score-based generative modeling of graphs via the system of stochastic differential equations. In International Conference on Machine Learning, pp. 10362-10383. PMLR, 2022.
+Karrer, B. and Newman, M. E. Stochastic blockmodels and community structure in networks. Physical Review E, 83(1):016107, 2011.
+Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
+Kipf, T. N. and Welling, M. Semi-supervised classification with graph convolutional networks, 2017.
+
+Lee, H., Lu, J., and Tan, Y. Convergence for score-based generative modeling with polynomial complexity. Advances in Neural Information Processing Systems, 35: 22870-22882, 2022.
+Lee, H., Lu, J., and Tan, Y. Convergence of score-based generative modeling for general data distributions. In International Conference on Algorithmic Learning Theory, pp. 946-985. PMLR, 2023a.
+Lee, H., Lu, J., and Tan, Y. Convergence for score-based generative modeling with polynomial complexity, 2023b. URL https://arxiv.org/abs/2206.06227.
+Lee, J. S., Kim, J., and Kim, P. M. Score-based generative modeling for de novo protein design. Nature Computational Science, 3(5):382-392, 2023c.
+Li, G., Huang, Y., Efimov, T., Wei, Y., Chi, Y., and Chen, Y. Accelerating convergence of score-based diffusion models, provably. arXiv preprint arXiv:2403.03852, 2024.
+Li, P., Li, Z., Zhang, H., and Bian, J. On the generalization properties of diffusion models. Advances in Neural Information Processing Systems, 36:2097-2127, 2023.
+Li, Y., Vinyals, O., Dyer, C., Pascanu, R., and Battaglia, P. Learning deep generative models of graphs. arXiv preprint arXiv:1803.03324, 2018.
+Liao, R., Li, Y., Song, Y., Wang, S., Hamilton, W., Duvenaud, D. K., Urtasun, R., and Zemel, R. Efficient graph generation with graph recurrent attention networks. Advances in neural information processing systems, 32, 2019.
+Liu, C., Zhang, J., Wang, S., Fan, W., and Li, Q. Score-based generative diffusion models for social recommendations. arXiv preprint arXiv:2412.15579, 2024.
+Martínez, V., Berzal, F., and Cubero, J.-C. A survey of link prediction in complex networks. ACM computing surveys (CSUR), 49(4):1-33, 2016.
+Newman, M. Networks. Oxford University Press, 2018.
+Newman, M. E. Modularity and community structure in networks. Proceedings of the national academy of sciences, 103(23):8577-8582, 2006.
+Newman, M. E. and Girvan, M. Finding and evaluating community structure in networks. Physical review E, 69 (2):026113, 2004.
+Newman, M. E., Watts, D. J., and Strogatz, S. H. Random graph models of social networks. Proceedings of the national academy of sciences, 99(suppl_1):2566-2572, 2002.
+
+Nichol, A. Q. and Dhariwal, P. Improved denoising diffusion probabilistic models. In International conference on machine learning, pp. 8162-8171. PMLR, 2021.
+Niu, C., Song, Y., Song, J., Zhao, S., Grover, A., and Ermon, S. Permutation invariant graph generation via score-based generative modeling. In International Conference on Artificial Intelligence and Statistics, pp. 4474-4484. PMLR, 2020.
+Pósfai, M. and Barabási, A.-L. Network Science. Cambridge University Press, 2016.
+Revuz, D. and Yor, M. Continuous martingales and Brownian motion, volume 293. Springer Science & Business Media, 2013.
+Russell, S. J. and Norvig, P. Artificial intelligence: a modern approach. Pearson, 2016.
+Sanchez-Lengeling, B., Outeiral, C., Guimaraes, G. L., and Aspuru-Guzik, A. Optimizing distributions over molecular space: an objective-reinforced generative adversarial network for inverse-design chemistry (organic). 2017.
+Shah, K., Chen, S., and Klivans, A. Learning mixtures of gaussians using the ddpm objective. Advances in Neural Information Processing Systems, 36:19636-19649, 2023.
+Simonovsky, M. and Komodakis, N. Graphvae: Towards generation of small graphs using variational autoencoders. In Artificial Neural Networks and Machine Learning-ICANN 2018: 27th International Conference on Artificial Neural Networks, Rhodes, Greece, October 4-7, 2018, Proceedings, Part I 27, pp. 412-422. Springer, 2018.
+Sohl-Dickstein, J., Weiss, E., Maheswaranathan, N., and Ganguli, S. Deep unsupervised learning using nonequilibrium thermodynamics. In International conference on machine learning, pp. 2256-2265. PMLR, 2015.
+Song, J., Meng, C., and Ermon, S. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502, 2020a.
+Song, Y. and Ermon, S. Generative modeling by estimating gradients of the data distribution. Advances in neural information processing systems, 32, 2019.
+Song, Y., Sohl-Dickstein, J., Kingma, D. P., Kumar, A., Ermon, S., and Poole, B. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456, 2020b.
+Spielman, D. Spectral graph theory. Combinatorial scientific computing, 18:18, 2012.
+Su, J. and Marbach, P. Structure of core-periphery communities. In International Conference on Complex Networks and Their Applications, pp. 151-161. Springer, 2022.
+
+Suh, N. and Cheng, G. A survey on statistical theory of deep learning: Approximation, training dynamics, and generative models. Annual Review of Statistics and Its Application, 12, 2024.
+Trench, W. F. Introduction to real analysis. 2013.
+Vignac, C., Krawczuk, I., Siraudin, A., Wang, B., Cevher, V., and Frossard, P. Digress: Discrete denoising diffusion for graph generation. arXiv preprint arXiv:2209.14734, 2022.
+Vincent, P. A connection between score matching and denoising autoencoders. Neural computation, 23(7):1661-1674, 2011.
+Wang, P., Zhang, H., Zhang, Z., Chen, S., Ma, Y., and Qu, Q. Diffusion models learn low-dimensional distributions via subspace clustering. arXiv preprint arXiv:2409.02426, 2024.
+Xie, Y., Shi, C., Zhou, H., Yang, Y., Zhang, W., Yu, Y., and Li, L. Mars: Markov molecular sampling for multi-objective drug discovery. arXiv preprint arXiv:2103.10432, 2021.
+Yang, L., Zhang, Z., Song, Y., Hong, S., Xu, R., Zhao, Y., Zhang, W., Cui, B., and Yang, M.-H. Diffusion models: A comprehensive survey of methods and applications, 2024. URL https://arxiv.org/abs/2209.00796.
+Yegin, M. N. and Amasyah, M. F. Generative diffusion models: A survey of current theoretical developments. Neurocomputing, 608:128373, 2024.
+You, J., Liu, B., Ying, Z., Pande, V., and Leskovec, J. Graph convolutional policy network for goal-directed molecular graph generation. Advances in neural information processing systems, 31, 2018.
+Zang, C. and Wang, F. Moflow: an invertible flow model for generating molecular graphs. In Proceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining, pp. 617-626, 2020.
+Zhang, M. and Chen, Y. Link prediction based on graph neural networks. Advances in neural information processing systems, 31, 2018.
+Zhang, Z., Chen, Z., and Gu, Q. Convergence of score-based discrete diffusion models: A discrete-time analysis. arXiv preprint arXiv:2410.02321, 2024.
+Zhu, Y., Du, Y., Wang, Y., Xu, Y., Zhang, J., Liu, Q., and Wu, S. A survey on deep graph generation: Methods and applications. In Learning on Graphs Conference, pp. 47-1. PMLR, 2022.
+
+Zhu, Z., Locatello, F., and Cevher, V. Sample complexity bounds for score-matching: Causal discovery and generative modeling, 2023. URL https://arxiv.org/abs/2310.18123.
+
+# A. Problem Formulation Discussion
+
+In this appendix, we provide further discussion of our problem formulation, including the choice of hyper-parameters and the derivation of the training objectives.
+
+# A.1. Choice of Hyper-parameter
+
+For our analysis, we have chosen the following set of hyper-parameters:
+
+$$
+\begin{array}{l} f _ {\mathbf {X}} \left(\mathcal {G} _ {t}, t\right) = - 1 / 2 g _ {\mathbf {X}} \left(\mathcal {G} _ {t}, t\right) ^ {2} \mathbf {X} _ {t}, \\ f _ {\mathbf {A}} \left(\mathcal {G} _ {t}, t\right) = - 1 / 2 g _ {\mathbf {A}} \left(\mathcal {G} _ {t}, t\right) ^ {2} \mathbf {A} _ {t}, \\ g _ {\mathbf {X}} \left(\mathcal {G} _ {t}, t\right) = g _ {\mathbf {A}} \left(\mathcal {G} _ {t}, t\right) \equiv 1. \tag {A.1} \\ \end{array}
+$$
+
+As mentioned in the main paper, our analysis can be generalized to any linear drift functions $f_{\mathbf{X}}(\mathcal{G}_t,t)$ and $f_{\mathbf{A}}(\mathcal{G}_t,t)$ and other constant variance functions, so long as the underlying process remains an OU process and converges to the prior distribution (i.e., the convergent distribution).
+
+Without loss of generality, we can express this family of linear drift functions and constant variance functions as
+
+$$
+\begin{array}{l} f _ {\mathbf {X}} \left(\mathcal {G} _ {t}, t\right) = \xi g _ {\mathbf {X}} \left(\mathcal {G} _ {t}, t\right) ^ {2} \mathbf {X} _ {t}, \\ f _ {\mathbf {A}} \left(\mathcal {G} _ {t}, t\right) = \xi g _ {\mathbf {A}} \left(\mathcal {G} _ {t}, t\right) ^ {2} \mathbf {A} _ {t}, \\ g _ {\mathbf {X}} \left(\mathcal {G} _ {t}, t\right) = g _ {\mathbf {A}} \left(\mathcal {G} _ {t}, t\right) \equiv \kappa , \tag {A.2} \\ \end{array}
+$$
+
+where $\xi \in (-1,0)$ and $\kappa$ is some positive constant.
+
+The selection of a constant variance function does not result in any loss of generality. This is because altering the variance function can be viewed as a re-scaling of time, provided that the drift function does not explicitly depend on time. In other words, changing the variance function only affects the rate at which the diffusion process evolves, but this transformation is effectively equivalent to adjusting the temporal scale of the process. Consequently, the analysis remains valid even if the variance function is modified, as long as the time re-scaling is appropriately accounted for.
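+To make the re-scaling argument concrete, the substitution can be sketched as follows (our illustration, using the general parametrization of Eq. A.2 with constant $\kappa$):
+
+$$
+d \mathbf {X} _ {t} = \xi \kappa ^ {2} \mathbf {X} _ {t} \, d t + \kappa \, d \mathbf {B} _ {t} \quad \xrightarrow {\; s = \kappa ^ {2} t \;} \quad d \mathbf {X} _ {s} = \xi \mathbf {X} _ {s} \, d s + d \widetilde {\mathbf {B}} _ {s},
+$$
+
+where $\widetilde{\mathbf{B}}_s = \kappa \mathbf{B}_{s / \kappa^2}$ is again a standard Brownian motion by the scaling property $\mathbf{B}_{ct} \stackrel{d}{=} \sqrt{c}\, \mathbf{B}_t$. Hence any constant $\kappa$ merely re-scales the time axis of the $\kappa = 1$ process.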
+
+Changing the coefficient in the linear drift function amounts to a different conditional density for the forward process and a different score:
+
+$$
+\begin{array}{l} \mathbf {X} _ {t} | \mathbf {X} _ {0} \sim \mathcal {N} \left(e ^ {\xi t} \mathbf {X} _ {0}, \frac {1 - e ^ {2 \xi t}}{- 2 \xi} \mathbf {I} _ {N \times F}\right), \\ \mathbf {A} _ {t} | \mathbf {A} _ {0} \sim \mathcal {N} \left(e ^ {\xi t} \mathbf {A} _ {0}, \frac {1 - e ^ {2 \xi t}}{- 2 \xi} \mathbf {I} _ {N \times N}\right). \tag {A.3} \\ \end{array}
+$$
+
+Since all the theoretical tools we use for the analysis are grounded in the OU process and independent of the scale of the hyper-parameters, all the results can be applied immediately under the general formulation above. However, in the paper we strive for cleaner results and better interpretability, and therefore focus on the set of hyper-parameters specified above.
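+As a sanity check on the conditional density (a toy 1-D simulation under the standard choice $\xi = -1/2$, $\kappa = 1$, for which $\mathbf{X}_t \mid \mathbf{X}_0 \sim \mathcal{N}(e^{-t/2}\mathbf{X}_0, (1 - e^{-t})\mathbf{I})$; the simulation parameters are our own assumptions), the closed-form moments can be compared against a direct Euler-Maruyama simulation of the forward OU process:
+
+```python
+# Compare the closed-form conditional moments of the forward OU process
+# against a Monte Carlo simulation (toy 1-D setting, xi = -1/2, kappa = 1).
+import numpy as np
+
+rng = np.random.default_rng(0)
+x0, t, n_paths, n_steps = 3.0, 2.0, 50_000, 200
+h = t / n_steps
+
+x = np.full(n_paths, x0)
+for _ in range(n_steps):
+    x += -0.5 * x * h + np.sqrt(h) * rng.standard_normal(n_paths)
+
+mean_cf = np.exp(-t / 2) * x0      # closed-form conditional mean
+var_cf = 1 - np.exp(-t)            # closed-form conditional variance
+
+assert abs(x.mean() - mean_cf) < 0.05
+assert abs(x.var() - var_cf) < 0.05
+```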
+
+# A.2. Training Objective Derivation
+
+In this appendix, we present, for completeness, a derivation of the equations used in the problem formulation. A similar derivation can be found in (Jo et al., 2022).
+
+The partial score functions can be estimated by training the time-dependent score-based models $s_{\theta}(.)$ and $s_{\phi}(.)$ , so that
+
+$$
+s _ {\theta} (\mathcal {G} _ {t}, t) \approx \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}), s _ {\phi} (\mathcal {G} _ {t}, t) \approx \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t}).
+$$
+
+However, the objectives introduced in SGMs for estimating the score function are not directly applicable here, since the partial score functions are defined as the gradient with respect to each component, rather than the gradient with respect to the full data as in the conventional score function. The interdependence between the two diffusion processes, tied together by the partial scores, adds another layer of difficulty.
+
+To address this issue, a new objective for estimating the partial scores is needed. Intuitively, the score-based models should be trained to minimize the distance to the corresponding ground-truth partial scores. The following objectives generalize score matching (Song et al., 2020b) to the estimation of partial scores for the given graph dataset:
+
+$$
+\min _ {\theta} \mathbb {E} _ {t} \left[ \mathbb {E} _ {\mathcal {G} _ {0}} \mathbb {E} _ {\mathcal {G} _ {t} | \mathcal {G} _ {0}} \left\| s _ {\theta , t} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) \right\| _ {2} ^ {2} \right], \tag {A.4}
+$$
+
+$$
+\min _ {\phi} \mathbb {E} _ {t} \left[ \mathbb {E} _ {\mathcal {G} _ {0}} \mathbb {E} _ {\mathcal {G} _ {t} | \mathcal {G} _ {0}} \left\| s _ {\phi , t} \left(\mathcal {G} _ {t}\right) - \nabla_ {\mathbf {A}} \log \mathbb {P} \left(\mathcal {G} _ {t}\right) \right\| _ {2} ^ {2} \right], \tag {A.5}
+$$
+
+where $t$ is uniformly sampled from $[0,T]$ . The expectations are taken over samples $\mathcal{G}_0\sim p_{\mathrm{data}}$ and $\mathcal{G}_t\sim \mathbb{P}(\mathcal{G}_t|\mathcal{G}_0)$ , where $\mathbb{P}(\mathcal{G}_t|\mathcal{G}_0)$ denotes the transition distribution from 0 to $t$ induced by the forward diffusion process.
+
+Unfortunately, the objectives above are still not directly trainable, since the ground-truth partial scores are not analytically accessible in general. This is why we need the underlying process to be an OU process: we can leverage the known conditional density of the OU process for training.
+
+$$
+\min _ {\theta} \mathbb {E} _ {t} \left[ \mathbb {E} _ {\mathcal {G} _ {0}} \mathbb {E} _ {\mathcal {G} _ {t} | \mathcal {G} _ {0}} \left\| s _ {\theta , t} \left(\mathcal {G} _ {t}, t\right) - \nabla_ {\mathbf {X}} \log \mathbb {P} \left(\mathcal {G} _ {t} \mid \mathcal {G} _ {0}\right) \right\| _ {2} ^ {2} \right], \tag {A.6}
+$$
+
+$$
+\min _ {\phi} \mathbb {E} _ {t} \left[ \mathbb {E} _ {\mathcal {G} _ {0}} \mathbb {E} _ {\mathcal {G} _ {t} | \mathcal {G} _ {0}} \left\| s _ {\phi} \left(\mathcal {G} _ {t}, t\right) - \nabla_ {\mathbf {A}} \log \mathbb {P} \left(\mathcal {G} _ {t} \mid \mathcal {G} _ {0}\right) \right\| _ {2} ^ {2} \right]. \tag {A.7}
+$$
+
+Since the drift coefficient of the forward diffusion process is linear, the transition distribution $\mathbb{P}(\mathcal{G}_t|\mathcal{G}_0)$ can be separated in terms of $\mathbf{X}_t$ and $\mathbf{A}_t$ as follows:
+
+$$
+\mathbb {P} \left(\mathcal {G} _ {t} \mid \mathcal {G} _ {0}\right) = \mathbb {P} \left(\mathbf {X} _ {t} \mid \mathbf {X} _ {0}\right) \mathbb {P} \left(\mathbf {A} _ {t} \mid \mathbf {A} _ {0}\right). \tag {A.8}
+$$
+
+Notably, we can easily sample from the transition distributions of each component, $\mathbb{P}(\mathbf{X}_t|\mathbf{X}_0)$ and $\mathbb{P}(\mathbf{A}_t|\mathbf{A}_0)$ , as they are Gaussian distributions with mean and variance determined by the coefficients of the forward diffusion process. This leads to the following training objective:
+
+$$
+\min _ {\theta} \mathbb {E} _ {t} \left[ \mathbb {E} _ {\mathcal {G} _ {0}} \mathbb {E} _ {\mathcal {G} _ {t} | \mathcal {G} _ {0}} \left\| s _ {\boldsymbol {\theta}} \left(\mathcal {G} _ {t}, t\right) - \nabla_ {\mathbf {X}} \log \mathbb {P} \left(\mathbf {X} _ {t} \mid \mathbf {X} _ {0}\right) \right\| _ {2} ^ {2} \right], \tag {A.9}
+$$
+
+$$
+\min _ {\phi} \mathbb {E} _ {t} \left[ \mathbb {E} _ {\mathcal {G} _ {0}} \mathbb {E} _ {\mathcal {G} _ {t} | \mathcal {G} _ {0}} \left\| s _ {\phi} (\mathcal {G} _ {t}, t) - \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathbf {A} _ {t} | \mathbf {A} _ {0}) \right\| _ {2} ^ {2} \right]. \tag {A.10}
+$$
+
+The expectations in the equation above can be efficiently computed using the Monte Carlo estimate with the samples $(t, \mathcal{G}_0, \mathcal{G}_t)$ . Note that estimating the partial scores is not equivalent to estimating $\nabla_{\mathbf{X}} \log \mathbb{P}(\mathbf{X}_t)$ or $\nabla_{\mathbf{A}} \log \mathbb{P}(\mathbf{A}_t)$ , the main objective of previous score-based generative models, since estimating the partial scores requires capturing the dependency between $\mathbf{X}_t$ and $\mathbf{A}_t$ determined by the joint probability through time.
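+A minimal sketch of this Monte Carlo estimate for the node feature component (an assumed toy setup with $\xi = -1/2$, $\kappa = 1$; the data distribution, candidate score model, and sampling range for $t$ are our illustrative choices, not the paper's implementation): sample $(t, \mathcal{G}_0, \mathcal{G}_t)$, form the analytic conditional score of the OU transition, and average the squared error of a candidate score model.
+
+```python
+# Monte Carlo estimate of the denoising objective (in the style of Eq. A.9)
+# for the node feature component under the OU forward process, xi = -1/2.
+import numpy as np
+
+rng = np.random.default_rng(0)
+N, F, T, batch = 10, 5, 10.0, 256
+
+def mc_loss(score_model):
+    # t is sampled away from 0 for numerical stability (an assumed choice)
+    t = rng.uniform(0.1, T, size=(batch, 1, 1))
+    X0 = rng.normal(1.0, np.sqrt(2.0), size=(batch, N, F))   # data samples
+    z = rng.standard_normal((batch, N, F))
+    Xt = np.exp(-t / 2) * X0 + np.sqrt(1 - np.exp(-t)) * z   # forward marginal
+    target = -(Xt - np.exp(-t / 2) * X0) / (1 - np.exp(-t))  # conditional score
+    pred = score_model(Xt, t)
+    return np.mean(np.sum((pred - target) ** 2, axis=(1, 2)))
+
+# Candidate model: the score of the standard-normal convergent distribution.
+loss = mc_loss(lambda x, t: -x)
+assert np.isfinite(loss) and loss > 0
+```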
+
+# A.2.1. DERIVATION OF TRAINING OBJECTIVE A.4
+
+The original score matching objective can be written as follows:
+
+$$
+\mathbb {E} _ {\mathcal {G} _ {t}} \left[ \| s _ {\pmb {\theta}} (\mathcal {G} _ {t}, t) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) \| _ {2} ^ {2} \right] = \mathbb {E} _ {\mathcal {G} _ {t}} \left[ \| s _ {\pmb {\theta}} (\mathcal {G} _ {t}, t) \| _ {2} ^ {2} \right] - 2 \mathbb {E} _ {\mathcal {G} _ {t}} \left[ \langle s _ {\pmb {\theta}} (\mathcal {G} _ {t}, t), \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) \rangle \right] + C _ {1},
+$$
+
+where $C_1$ is a constant that does not depend on $\theta$. On the other hand, we have
+
+$$
+\mathbb {E} _ {\mathcal {G} _ {0}} \mathbb {E} _ {\mathcal {G} _ {t} | \mathcal {G} _ {0}} \left[ \| s _ {\pmb {\theta}} (\mathcal {G} _ {t}, t) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t} | \mathcal {G} _ {0}) \| _ {2} ^ {2} \right] = \mathbb {E} _ {\mathcal {G} _ {0}} \mathbb {E} _ {\mathcal {G} _ {t} | \mathcal {G} _ {0}} \left[ \| s _ {\pmb {\theta}} (\mathcal {G} _ {t}, t) \| _ {2} ^ {2} \right] - 2 \mathbb {E} _ {\mathcal {G} _ {0}} \mathbb {E} _ {\mathcal {G} _ {t} | \mathcal {G} _ {0}} \left[ \langle s _ {\pmb {\theta}} (\mathcal {G} _ {t}, t), \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t} | \mathcal {G} _ {0}) \rangle \right] + C _ {2}.
+$$
+
+For the second term, from the derivation in Appendix A.1 of (Jo et al., 2022), we have the following equivalence:
+
+$$
+\mathbb {E} _ {\mathcal {G} _ {t}} \left[ \langle s _ {\pmb {\theta}} (\mathcal {G} _ {t}, t), \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) \rangle \right] = \mathbb {E} _ {\mathcal {G} _ {0}} \mathbb {E} _ {\mathcal {G} _ {t} | \mathcal {G} _ {0}} \left[ \langle s _ {\pmb {\theta}} (\mathcal {G} _ {t}, t), \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t} | \mathcal {G} _ {0}) \rangle \right].
+$$
+
Since the constants $C_1$ and $C_2$ do not affect the optimization, we can conclude that the following two objectives are equivalent with respect to $\theta$ :
+
+$$
+\begin{array}{l} \mathbb {E} _ {\mathcal {G} _ {t}} \mathbb {E} _ {\mathcal {G} _ {t} | \mathcal {G} _ {0}} \left[ \| s _ {\boldsymbol {\theta}} (\mathcal {G} _ {t}, t) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t} | \mathcal {G} _ {0}) \| _ {2} ^ {2} \right] \\ \mathbb {E} _ {\mathcal {G} _ {t}} \left[ \left\| s _ {\boldsymbol {\theta}} (\mathcal {G} _ {t}, t) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) \right\| _ {2} ^ {2} \right] \\ \end{array}
+$$
+
+Similarly, computing the gradient with respect to $\mathbf{A}$ , we can show that the following two objectives are also equivalent with respect to $\phi$ :
+
+$$
+\begin{array}{l} \mathbb {E} _ {\mathcal {G} _ {t}} \mathbb {E} _ {\mathcal {G} _ {t} | \mathcal {G} _ {0}} \left[ \| s _ {\phi} (\mathcal {G} _ {t}, t) - \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t} | \mathcal {G} _ {0}) \| _ {2} ^ {2} \right] \\ \mathbb {E} _ {\mathcal {G} _ {t}} \left[ \| s _ {\phi} (\mathcal {G} _ {t}, t) - \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t}) \| _ {2} ^ {2} \right] \\ \end{array}
+$$
+
Now, it remains to show that $\nabla_{\mathbf{X}}\log \mathbb{P}(\mathcal{G}_t|\mathcal{G}_0)$ is equal to $\nabla_{\mathbf{X}}\log \mathbb{P}(\mathbf{X}_t|\mathbf{X}_0)$ . Using the chain rule and the fact that $\mathbf{A}_t$ does not depend on $\mathbf{X}_t$ , we get that
+
+$$
+\frac {\partial \log \mathbb {P} (\mathbf {A} _ {t} | \mathbf {A} _ {0})}{\partial (\mathbf {X} _ {t}) _ {i j}} = \mathrm {T r} \left[ \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathbf {A} _ {t} | \mathbf {A} _ {0}) \frac {\partial \mathbf {A} _ {t}}{\partial (\mathbf {X} _ {t}) _ {i j}} \right] = 0.
+$$
+
+With this result, we have that,
+
+$$
+\nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t} | \mathcal {G} _ {0}) = \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathbf {X} _ {t} | \mathbf {X} _ {0}) + \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathbf {A} _ {t} | \mathbf {A} _ {0}) = \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathbf {X} _ {t} | \mathbf {X} _ {0}).
+$$
+
+
+With a similar computation for $\mathbf{A}_t$ , we can also show that $\nabla_{\mathbf{A}}\log \mathbb{P}(\mathcal{G}_t|\mathcal{G}_0)$ is equal to $\nabla_{\mathbf{A}}\log \mathbb{P}(\mathbf{A}_t|\mathbf{A}_0)$ .
+
+# A.2.2. TRACTABLE TRAINING OBJECTIVE
+
In the previous section, we proved that the training objective used in our analysis is equivalent to that of common score-based generative models. It remains to show how to compute this objective with tractable quantities. To simplify notation, we define the following for the rest of the appendix, for any $0 \leq t \leq s \leq T$ :
+
+$$
\begin{array}{l} \sigma_ {t} ^ {2} := 1 - e ^ {- t}, \\ \alpha_ {t, s} := e ^ {- (s - t) / 2}, \\ \alpha_ {t} := \alpha_ {0, t}. \\ \end{array}
+$$
+
+Then, using the ideas in (Vincent, 2011), we get the following,
+
+$$
+\begin{array}{l} \mathbb {E} \| s _ {\boldsymbol {\theta}} \left(\mathcal {G} _ {t}, t\right) - \nabla_ {\mathbf {X}} \mathbb {P} \left(\mathcal {G} _ {t}\right) \| ^ {2} \\ = \mathbb {E} \| s _ {\boldsymbol {\theta}} (\mathcal {G} _ {t}, t) \| ^ {2} + \mathbb {E} \| \nabla_ {\mathbf {X}} \mathbb {P} (\mathcal {G} _ {t}) \| ^ {2} - 2 \mathbb {E} \langle s _ {\boldsymbol {\theta}} (\mathcal {G} _ {t}, t), \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) \rangle \\ = \mathbb {E} \| s _ {\boldsymbol {\theta}} (\mathcal {G} _ {t}, t) \| ^ {2} + \mathbb {E} \| \nabla_ {\mathbf {X}} \mathbb {P} (\mathcal {G} _ {t}) \| ^ {2} - 2 \mathbb {E} \nabla \cdot s _ {\boldsymbol {\theta}} (\mathcal {G} _ {t}, t) \\ = \mathbb {E} \| s _ {\boldsymbol {\theta}} (\mathcal {G} _ {t}, t) \| ^ {2} + \mathbb {E} \| \nabla_ {\mathbf {X}} \mathbb {P} (\mathcal {G} _ {t}) \| ^ {2} - 2 \mathbb {E} _ {\mathbb {P} (\mathcal {G} _ {0})} \mathbb {E} _ {\mathbb {P} (\mathcal {G} _ {t} | \mathcal {G} _ {0})} \nabla \cdot s _ {\boldsymbol {\theta}} (\mathcal {G} _ {t}, t) \\ = \mathbb {E} \| s _ {\boldsymbol {\theta}} (\mathcal {G} _ {t}, t) \| ^ {2} + \mathbb {E} \| \nabla_ {\mathbf {X}} \mathbb {P} (\mathcal {G} _ {t}) \| ^ {2} - 2 \mathbb {E} _ {\mathbb {P} (\mathcal {G} _ {0})} \mathbb {E} _ {\mathbb {P} (\mathcal {G} _ {t} | \mathcal {G} _ {0})} \langle \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t} | \mathcal {G} _ {0}), s _ {\boldsymbol {\theta}} (\mathcal {G} _ {t}, t) \rangle \\ = \mathbb {E} \| s _ {\boldsymbol {\theta}} (\mathcal {G} _ {t}, t) \| ^ {2} + \mathbb {E} \| \nabla_ {\mathbf {X}} \mathbb {P} (\mathcal {G} _ {t}) \| ^ {2} - 2 \mathbb {E} _ {\mathbb {P} (\mathcal {G} _ {0})} \mathbb {E} _ {\mathbb {P} (\mathcal {G} _ {t} | \mathcal {G} _ {0})} \langle \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathbf {X} _ {t} | \mathbf {X} _ {0}), s _ {\boldsymbol {\theta}} (\mathcal {G} _ {t}, t) \rangle \\ = \mathbb {E} \| s _ {\boldsymbol {\theta}} (\mathcal {G} _ {t}, 
t) \| ^ {2} + \mathbb {E} \| \nabla_ {\mathbf {X}} \mathbb {P} (\mathcal {G} _ {t}) \| ^ {2} - 2 \mathbb {E} _ {\mathbb {P} (\mathcal {G} _ {0})} \mathbb {E} _ {\mathbb {P} (\mathcal {G} _ {t} | \mathcal {G} _ {0})} \left\langle \frac {\mathbf {X} _ {t} - \alpha_ {t} \mathbf {X} _ {0}}{\sigma_ {t} ^ {2}}, s _ {\boldsymbol {\theta}} (\mathcal {G} _ {t}, t) \right\rangle \\ = \mathbb {E} \left\| s _ {\pmb {\theta}} (\mathcal {G} _ {t}, t) - \frac {\mathbf {X} _ {t} - \alpha_ {t} \mathbf {X} _ {0}}{\sigma_ {t} ^ {2}} \right\| ^ {2} + \mathbb {E} \| \nabla_ {\mathbf {X}} \mathbb {P} (\mathcal {G} _ {t}) \| ^ {2} - \frac {d}{\sigma_ {t} ^ {2}} \\ = \mathbb {E} \left\| s _ {\pmb {\theta}} (\mathcal {G} _ {t}, t) - \frac {\mathbf {X} _ {t} - \alpha_ {t} \mathbf {X} _ {0}}{\sigma_ {t} ^ {2}} \right\| ^ {2} + C _ {3} \\ \end{array}
+$$
+
+where $C_3$ is some constant independent of $\theta$ . Therefore, we can use the last equation as the training objective. Through a similar computation, we get that,
+
+$$
+\mathbb {E} \| s _ {\phi} (\mathcal {G} _ {t}, t) - \nabla_ {\mathbf {A}} \mathbb {P} (\mathcal {G} _ {t}) \| ^ {2} = \mathbb {E} \left\| s _ {\phi} (\mathcal {G} _ {t}, t) - \frac {\mathbf {A} _ {t} - \alpha_ {t} \mathbf {A} _ {0}}{\sigma_ {t} ^ {2}} \right\| ^ {2} + C _ {4}
+$$
+
+# A.2.3. DISCUSSION OF ASSUMPTION 3.2
+
In this section, we argue that Assumption 3.2 is reasonable and holds in practice. We can observe that
+
+$$
+\mathbb {E} \left\| \frac {\mathbf {X} - \alpha_ {t} \mathbf {X} _ {0}}{\sigma_ {t} ^ {2}} \right\| = \frac {1}{\sigma_ {t} ^ {2}}.
+$$
+
Therefore, up to dimensional constants, it is natural to expect the approximation error to scale as
+
+$$
+\mathbb {E} \left\| s _ {\pmb {\theta}} (\mathcal {G} _ {t}, t) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) \right\| ^ {2} \lesssim \frac {\delta_ {\mathbf {X}} ^ {2}}{\sigma_ {t} ^ {2}}.
+$$
+
for some $\delta_{\mathbf{X}} > 0$ . Furthermore, since $\sigma_{t_k}^2 \simeq \min \{t_k, 1\}$ , we have
+
+$$
+\frac {1}{T} \sum_ {i = 1} ^ {T} \Delta_ {t _ {i}} \mathbb {E} \| s _ {\pmb {\theta}} (\mathcal {G} _ {t}, t) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) \| ^ {2} \lesssim \frac {1}{T} \int_ {t _ {1}} ^ {T} \frac {\delta_ {\mathbf {X}} ^ {2}}{t \wedge 1} \mathrm {d} t \lesssim \delta_ {\mathbf {X}} ^ {2} \log (1 / t _ {1}).
+$$
+
+Through a symmetric argument, we can get a similar result for the structure process.
+
+# B. Useful Lemmas
+
In this appendix, we present a set of useful results that are going to be used in the proof of the main theorem. Proofs of some of these results can also be found in other diffusion convergence analyses (Chen et al., 2023a; 2022a; Zhu et al., 2023; Lee et al., 2023b; Gnaneshwar et al., 2022). We include proofs here for completeness.
+
To simplify notation, in the following we use $\mathbb{P}_t$ and $\mathbb{P}(\mathbf{X}_t)$ interchangeably to represent the density of $\mathbf{X}$ at time $t$ , and $\mathbb{P}_{t|t'}$ and $\mathbb{P}(\mathbf{X}_t|\mathbf{X}_{t'})$ interchangeably to represent the conditional density of $\mathbf{X}$ at time $t$ given its value at time $t'$ . In addition, we adopt the Frobenius norm as the matrix norm.
+
Lemma B.1. Given two Itô processes coupled through the same initial condition and driving noise as follows,
+
+$$
+\mathrm {d} \mathcal {X} _ {t} = f _ {1} (\mathcal {X} _ {t}, t) \mathrm {d} t + g (t) \mathrm {d} \mathbf {W} _ {t}, \quad \mathcal {X} _ {0} = \boldsymbol {\gamma},
+$$
+
+$$
+\mathrm {d} \mathcal {X} _ {t} ^ {\prime} = f _ {2} \left(\mathcal {X} _ {t} ^ {\prime}, t\right) \mathrm {d} t + g (t) \mathrm {d} \mathbf {W} _ {t}, \quad \mathcal {X} _ {0} ^ {\prime} = \boldsymbol {\gamma},
+$$
+
+where $f_{1}, f_{2}, g$ are continuous functions. Furthermore, suppose the two SDEs satisfy the following conditions,
+
+1. the two SDEs have unique solutions,
2. $\mathcal{X}_t, \mathcal{X}_t'$ admit densities $\mathbb{P}_t, \mathbb{Q}_t$ that are twice continuously differentiable in their arguments for $t > 0$ .
+
+Then, we denote the relative Fisher information between $\mathbb{P}_t$ and $\mathbb{Q}_t$ by
+
+$$
+J (\mathbb {P} _ {t} \| \mathbb {Q} _ {t}) = \int \mathbb {P} _ {t} (\mathcal {X}) \left\| \nabla \log \frac {\mathbb {P} _ {t}}{\mathbb {Q} _ {t}} \right\| ^ {2} \mathrm {d} \mathbf {X}.
+$$
+
+Then for any $t > 0$ , the time derivative of $\mathrm{KL}(\mathbb{P}_t\| \mathbb{Q}_t)$ is given by,
+
+$$
+\frac {\mathrm {d}}{\mathrm {d} t} \mathrm {K L} (\mathbb {P} _ {t} \| \mathbb {Q} _ {t}) = - g (t) ^ {2} J (\mathbb {P} _ {t} \| \mathbb {Q} _ {t}) + \mathbb {E} \left[ \left\langle f _ {1} (\mathcal {X} _ {t}, t) - f _ {2} (\mathcal {X} _ {t}, t), \nabla \log \frac {\mathbb {P} _ {t}}{\mathbb {Q} _ {t}} \right\rangle \right]
+$$
+
+Proof. By definition of KL divergence, we have that
+
+$$
+\mathrm {K L} (\mathbb {P} _ {t} \| \mathbb {Q} _ {t}) = \int \mathbb {P} _ {t} (\mathcal {X}) \log \left(\frac {\mathbb {P} _ {t} (\mathcal {X})}{\mathbb {Q} _ {t} (\mathcal {X})}\right) d \mathcal {X}.
+$$
+
Taking the time derivative of the expression above, and using that $\int \frac{\partial \mathbb{P}_t(\mathcal{X})}{\partial t} \mathrm{d}\mathcal{X} = 0$ , we obtain,
+
+$$
+\begin{array}{l} \frac {\partial}{\partial t} \mathrm {K L} (\mathbb {P} _ {t} \| \mathbb {Q} _ {t}) = \frac {\partial}{\partial t} \left[ \int \mathbb {P} _ {t} (\mathcal {X}) \log \left(\frac {\mathbb {P} _ {t} (\mathcal {X})}{\mathbb {Q} _ {t} (\mathcal {X})}\right) d \mathcal {X} \right] \\ = \int \frac {\partial \mathbb {P} _ {t} (\mathcal {X})}{\partial t} \log \left(\frac {\mathbb {P} _ {t} (\mathcal {X})}{\mathbb {Q} _ {t} (\mathcal {X})}\right) d \mathcal {X} - \int \frac {\mathbb {P} _ {t} (\mathcal {X})}{\mathbb {Q} _ {t} (\mathcal {X})} \frac {\partial \mathbb {Q} _ {t} (\mathcal {X})}{\partial t} d \mathcal {X} \\ \end{array}
+$$
+
Then, we can use the Fokker–Planck equation to obtain the time derivatives of $\mathbb{P}_t$ and $\mathbb{Q}_t$ , which are given by
+
+$$
+\frac {\partial \mathbb {P} _ {t} (\mathcal {X})}{\partial \mathcal {X}} = \nabla \cdot \left[ - f _ {1} (\mathcal {X}, t) \mathbb {P} _ {t} (\mathcal {X}) + \frac {g (t) ^ {2}}{2} \nabla \mathbb {P} _ {t} (\mathcal {X}) \right],
+$$
+
+$$
+\frac {\partial \mathbb {Q} _ {t} (\mathcal {X})}{\partial \mathcal {X}} = \nabla \cdot \left[ - f _ {2} (\mathcal {X}, t) \mathbb {Q} _ {t} (\mathcal {X}) + \frac {g (t) ^ {2}}{2} \nabla \mathbb {Q} _ {t} (\mathcal {X}) \right].
+$$
+
Substituting these time derivatives into the expression for $\frac{\partial}{\partial t}\mathrm{KL}(\mathbb{P}_t\| \mathbb{Q}_t)$ , we get that,
+
+$$
+\begin{array}{l} \int \frac {\partial \mathbb {P} _ {t} (\mathcal {X})}{\partial t} \log \left(\frac {\mathbb {P} _ {t} (\mathcal {X})}{\mathbb {Q} _ {t} (\mathcal {X})}\right) d \mathcal {X} = \int \nabla \cdot \left[ - f _ {1} (\mathcal {X}, t) \mathbb {P} _ {t} (\mathcal {X}) + \frac {g (t) ^ {2}}{2} \nabla \mathbb {P} _ {t} (\mathcal {X}) \right] \log \left(\frac {\mathbb {P} _ {t} (\mathcal {X})}{\mathbb {Q} _ {t} (\mathcal {X})}\right) d \mathcal {X} \\ = \int \left\langle \nabla \log \left(\frac {\mathbb {P} _ {t} (\mathcal {X})}{\mathbb {Q} _ {t} (\mathcal {X})}\right), \mathbb {P} _ {t} (\mathcal {X}) f _ {1} (\mathcal {X}, t) - \frac {g (t) ^ {2}}{2} \nabla \mathbb {P} _ {t} (\mathcal {X}) \right\rangle d \mathcal {X}, \\ = \int \mathbb {P} _ {t} (\mathcal {X}) \left\langle f _ {1} (\mathcal {X}, t), \nabla \log \left(\frac {\mathbb {P} _ {t} (\mathcal {X})}{\mathbb {Q} _ {t} (\mathcal {X})}\right) \right\rangle - \frac {g (t) ^ {2}}{2} \left\langle \nabla \log \left(\frac {\mathbb {P} _ {t} (\mathcal {X})}{\mathbb {Q} _ {t} (\mathcal {X})}\right), \nabla \mathbb {P} _ {t} (\mathcal {X}) \right\rangle d \mathcal {X}. \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \int \frac {\mathbb {P} _ {t} (\mathcal {X})}{\mathbb {Q} _ {t} (\mathcal {X})} \frac {\partial \mathbb {Q} _ {t} (\mathcal {X})}{\partial t} d \mathcal {X} = \int \frac {\mathbb {P} _ {t} (\mathcal {X})}{\mathbb {Q} _ {t} (\mathcal {X})} \nabla \cdot \left[ - f _ {2} (\mathcal {X}, t) \mathbb {Q} _ {t} (\mathcal {X}) + \frac {g (t) ^ {2}}{2} \nabla \mathbb {Q} _ {t} (\mathcal {X}) \right] d \mathcal {X}, \\ = \int \left\langle \nabla \frac {\mathbb {P} _ {t} (\mathcal {X})}{\mathbb {Q} _ {t} (\mathcal {X})}, f _ {2} (\mathcal {X}, t) \mathbb {Q} _ {t} (\mathcal {X}) - \frac {g (t) ^ {2}}{2} \nabla \mathbb {Q} _ {t} (\mathcal {X}) \right\rangle , \\ = \int \mathbb {Q} _ {t} (\mathcal {X}) \left\langle \nabla \frac {\mathbb {P} _ {t}}{\mathbb {Q} _ {t}}, f _ {2} (\mathcal {X}, t) \right\rangle - \frac {g (t) ^ {2}}{2} \left\langle \nabla \frac {\mathbb {P} _ {t} (\mathcal {X})}{\mathbb {Q} _ {t} (\mathcal {X})}, \nabla \mathbb {Q} _ {t} (\mathcal {X}) \right\rangle d \mathcal {X}. \\ \end{array}
+$$
+
+Combining the results above, we obtain,
+
+$$
+\begin{array}{l} \frac {\partial}{\partial t} \mathrm {K L} (\mathbb {P} _ {t} \| \mathbb {Q} _ {t}) = \frac {\partial}{\partial t} \left[ \int \mathbb {P} _ {t} (\mathcal {X}) \log \left(\frac {\mathbb {P} _ {t} (\mathcal {X})}{\mathbb {Q} _ {t} (\mathcal {X})}\right) d \mathcal {X} \right], \\ = \int \frac {\partial \mathbb {P} _ {t} (\mathcal {X})}{\partial t} \log \left(\frac {\mathbb {P} _ {t} (\mathcal {X})}{\mathbb {Q} _ {t} (\mathcal {X})}\right) d \mathcal {X} - \int \frac {\mathbb {P} _ {t} (\mathcal {X})}{\mathbb {Q} _ {t} (\mathcal {X})} \frac {\partial \mathbb {Q} _ {t} (\mathcal {X})}{\partial t} d \mathcal {X}, \\ = \int \mathbb {P} _ {t} (\mathcal {X}) \left\langle f _ {1} (\mathcal {X}, t), \nabla \log \left(\frac {\mathbb {P} _ {t} (\mathcal {X})}{\mathbb {Q} _ {t} (\mathcal {X})}\right) \right\rangle - \frac {g (t) ^ {2}}{2} \left\langle \nabla \log \left(\frac {\mathbb {P} _ {t} (\mathcal {X})}{\mathbb {Q} _ {t} (\mathcal {X})}\right), \nabla \mathbb {P} _ {t} (\mathcal {X}) \right\rangle d \mathcal {X} - \dots \\ \int \mathbb {Q} _ {t} (\mathcal {X}) \left\langle \nabla \frac {\mathbb {P} _ {t}}{\mathbb {Q} _ {t}}, f _ {2} (\mathcal {X}, t) \right\rangle + \frac {g (t) ^ {2}}{2} \left\langle \nabla \frac {\mathbb {P} _ {t} (\mathcal {X})}{\mathbb {Q} _ {t} (\mathcal {X})}, \nabla \mathbb {Q} _ {t} (\mathcal {X}) \right\rangle d \mathcal {X}, \\ = \frac {g (t) ^ {2}}{2} \left(\int \left\langle \nabla \frac {\mathbb {P} _ {t} (\mathcal {X})}{\mathbb {Q} _ {t} (\mathcal {X})}, \nabla \mathbb {Q} _ {t} (\mathcal {X}) \right\rangle - \left\langle \nabla \log \left(\frac {\mathbb {P} _ {t} (\mathcal {X})}{\mathbb {Q} _ {t} (\mathcal {X})}\right), \nabla \mathbb {P} _ {t} (\mathcal {X}) \right\rangle d \mathcal {X}\right) + \dots \\ \int \mathbb {P} _ {t} (\mathcal {X}) \left\langle f _ {1} (\mathcal {X}, t), \nabla \log \frac {\mathbb {P} _ {t} (\mathcal {X})}{\mathbb {Q} _ {t} (\mathcal {X})} \right\rangle - \mathbb {Q} _ {t} (\mathcal {X}) 
\left\langle \nabla \frac {\mathbb {P} _ {t} (\mathbf {X})}{\mathbb {Q} _ {t} (\mathbf {X})}, f _ {2} (\mathcal {X}, t) \right\rangle d \mathcal {X}, \\ \end{array}
+$$
+
+Notice that
+
+$$
\begin{array}{l}
\int \left\langle \nabla \frac {\mathbb {P} _ {t} (\mathcal {X})}{\mathbb {Q} _ {t} (\mathcal {X})}, \nabla \mathbb {Q} _ {t} (\mathcal {X}) \right\rangle - \left\langle \nabla \log \left(\frac {\mathbb {P} _ {t} (\mathcal {X})}{\mathbb {Q} _ {t} (\mathcal {X})}\right), \nabla \mathbb {P} _ {t} (\mathcal {X}) \right\rangle d \mathcal {X} \\
= \int \left\langle \frac {\mathbb {Q} _ {t} (\mathcal {X}) \nabla \mathbb {P} _ {t} (\mathcal {X}) - \mathbb {P} _ {t} (\mathcal {X}) \nabla \mathbb {Q} _ {t} (\mathcal {X})}{\mathbb {Q} _ {t} (\mathcal {X})}, \nabla \log \mathbb {Q} _ {t} (\mathcal {X}) \right\rangle - \mathbb {P} _ {t} (\mathcal {X}) \left\langle \nabla \log \frac {\mathbb {P} _ {t} (\mathcal {X})}{\mathbb {Q} _ {t} (\mathcal {X})}, \nabla \log \mathbb {P} _ {t} (\mathcal {X}) \right\rangle d \mathcal {X} \\
= \int \mathbb {P} _ {t} (\mathcal {X}) \left\langle \nabla \log \frac {\mathbb {P} _ {t} (\mathcal {X})}{\mathbb {Q} _ {t} (\mathcal {X})}, \nabla \log \mathbb {Q} _ {t} (\mathcal {X}) \right\rangle - \mathbb {P} _ {t} (\mathcal {X}) \left\langle \nabla \log \frac {\mathbb {P} _ {t} (\mathcal {X})}{\mathbb {Q} _ {t} (\mathcal {X})}, \nabla \log \mathbb {P} _ {t} (\mathcal {X}) \right\rangle d \mathcal {X} \\
= - J \left(\mathbb {P} _ {t} \| \mathbb {Q} _ {t}\right). \\
\end{array}
+$$
+
+In addition, we have
+
+$$
\begin{array}{l}
\int \mathbb {P} _ {t} (\mathcal {X}) \left\langle f _ {1} (\mathcal {X}, t), \nabla \log \frac {\mathbb {P} _ {t} (\mathcal {X})}{\mathbb {Q} _ {t} (\mathcal {X})} \right\rangle - \mathbb {Q} _ {t} (\mathcal {X}) \left\langle \nabla \frac {\mathbb {P} _ {t} (\mathcal {X})}{\mathbb {Q} _ {t} (\mathcal {X})}, f _ {2} (\mathcal {X}, t) \right\rangle d \mathcal {X} \\
= \int \mathbb {P} _ {t} (\mathcal {X}) \left\langle f _ {1} (\mathcal {X}, t), \nabla \log \frac {\mathbb {P} _ {t} (\mathcal {X})}{\mathbb {Q} _ {t} (\mathcal {X})} \right\rangle - \mathbb {P} _ {t} (\mathcal {X}) \left\langle \nabla \log \frac {\mathbb {P} _ {t} (\mathcal {X})}{\mathbb {Q} _ {t} (\mathcal {X})}, f _ {2} (\mathcal {X}, t) \right\rangle d \mathcal {X} \\
= \int \mathbb {P} _ {t} (\mathcal {X}) \left\langle \nabla \log \frac {\mathbb {P} _ {t} (\mathcal {X})}{\mathbb {Q} _ {t} (\mathcal {X})}, f _ {1} (\mathcal {X}, t) - f _ {2} (\mathcal {X}, t) \right\rangle d \mathcal {X} \\
= \mathbb {E} \left[ \left\langle f _ {1} (\mathcal {X} _ {t}, t) - f _ {2} (\mathcal {X} _ {t}, t), \nabla \log \frac {\mathbb {P} _ {t}}{\mathbb {Q} _ {t}} \right\rangle \right]. \\
\end{array}
+$$
+
+Combining all the results above, we get that,
+
+$$
\frac {\partial}{\partial t} \mathrm {K L} (\mathbb {P} _ {t} \| \mathbb {Q} _ {t}) = - \frac {g (t) ^ {2}}{2} J (\mathbb {P} _ {t} \| \mathbb {Q} _ {t}) + \mathbb {E} \left[ \left\langle f _ {1} (\mathcal {X} _ {t}, t) - f _ {2} (\mathcal {X} _ {t}, t), \nabla \log \frac {\mathbb {P} _ {t}}{\mathbb {Q} _ {t}} \right\rangle \right]
+$$
+
+This completes the proof.
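As an illustration of the lemma (a toy numerical check under assumed parameters, not part of the proof), we can compare both sides for two one-dimensional Ornstein–Uhlenbeck processes, whose marginals are Gaussian with closed-form moments; note the $g(t)^2/2$ factor produced by the Fokker–Planck computation above:

```python
import math

# Assumed toy instance: dX = -aX dt + dW and dX' = -bX' dt + dW, both started
# at gamma, so P_t = N(gamma e^{-at}, (1 - e^{-2at}) / (2a)) and similarly Q_t.
a, b, gamma = 1.0, 0.5, 1.0

def moments(c, t):
    return gamma * math.exp(-c * t), (1.0 - math.exp(-2.0 * c * t)) / (2.0 * c)

def kl(t):          # KL(P_t || Q_t) for 1-D Gaussians
    m1, v1 = moments(a, t); m2, v2 = moments(b, t)
    return 0.5 * (math.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0)

def fisher(t):      # J(P_t || Q_t) in closed form
    m1, v1 = moments(a, t); m2, v2 = moments(b, t)
    return v1 * (1.0 / v2 - 1.0 / v1) ** 2 + ((m1 - m2) / v2) ** 2

def cross(t):       # E_P[(f1 - f2)(X) d/dx log(P_t/Q_t)], with f1 - f2 = (b - a)x
    m1, v1 = moments(a, t); m2, v2 = moments(b, t)
    return (b - a) * (v1 * (1.0 / v2 - 1.0 / v1) + m1 * (m1 - m2) / v2)

t, h = 0.7, 1e-5
lhs = (kl(t + h) - kl(t - h)) / (2.0 * h)   # d/dt KL via central difference
rhs = -0.5 * fisher(t) + cross(t)           # g(t) = 1, hence the factor 1/2
assert abs(lhs - rhs) < 1e-5
```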
+
+The next two lemmas capture the properties of Gaussian perturbation in the forward process.
+
+Lemma B.2. Let $\mathbb{P}$ be a probability measure on $\mathbb{R}^{N\times M}$ . Consider the Gaussian perturbation of $\mathbb{P}$ that admits a density $\mathbb{P}_{\mu ,\sigma}(\mathbf{X})$ , where $\mathbf{X}\in \mathbb{R}^{N\times M}$ . Specifically, we define
+
+$$
+\mathbb {P} _ {\mu , \sigma} (\mathbf {X}) \propto \int_ {\mathbb {R} ^ {N \times M}} \exp \left(- \frac {\| \mathbf {X} - \mu \mathbf {Y} \| _ {F} ^ {2}}{2 \sigma^ {2}}\right) \mathrm {d} \mathbb {P} (\mathbf {Y})
+$$
+
+where $\| \cdot \| _F$ is the Frobenius norm. Let $\mathbb{P}_{\mu ,\sigma}(\mathbf{Y}|\mathbf{X})$ be the conditional probability measure given $\mathbf{X}$ , defined as
+
+$$
\mathrm {d} \mathbb {P} _ {\mu , \sigma} (\mathbf {Y} | \mathbf {X}) \propto \exp \left(- \frac {\| \mathbf {X} - \mu \mathbf {Y} \| _ {F} ^ {2}}{2 \sigma^ {2}}\right) \mathrm {d} \mathbb {P} (\mathbf {Y})
+$$
+
+If $\mathbb{P}$ admits a density in $C^1 (\mathbb{R}^{N\times M})$ , we have
+
+$$
+\nabla_ {\mathbf {X}} \log \mathbb {P} _ {\mu , \sigma} (\mathbf {X}) = \frac {1}{\mu} \mathbb {E} _ {\mathbb {P} _ {\mu , \sigma} (\mathbf {Y} | \mathbf {X})} [ \nabla_ {\mathbf {Y}} \log \mathbb {P} (\mathbf {Y}) ]
+$$
+
Proof. Using $\nabla_{\mathbf{X}} \exp\left(-\frac{\|\mathbf{X}-\mu\mathbf{Y}\|_F^2}{2\sigma^2}\right) = -\frac{1}{\mu}\nabla_{\mathbf{Y}} \exp\left(-\frac{\|\mathbf{X}-\mu\mathbf{Y}\|_F^2}{2\sigma^2}\right)$ and integration by parts in $\mathbf{Y}$ , we have

$$
\begin{array}{l}
\nabla_ {\mathbf {X}} \log \mathbb {P} _ {\mu , \sigma} (\mathbf {X}) = \frac {\int_ {\mathbb {R} ^ {N \times M}} \mathbb {P} (\mathbf {Y}) \nabla_ {\mathbf {X}} \left[ \exp \left(- \frac {\| \mathbf {X} - \mu \mathbf {Y} \| _ {F} ^ {2}}{2 \sigma^ {2}}\right) \right] \mathrm {d} \mathbf {Y}}{\int_ {\mathbb {R} ^ {N \times M}} \mathbb {P} (\mathbf {Y}) \exp \left(- \frac {\| \mathbf {X} - \mu \mathbf {Y} \| _ {F} ^ {2}}{2 \sigma^ {2}}\right) \mathrm {d} \mathbf {Y}} \\
= \frac {- \int_ {\mathbb {R} ^ {N \times M}} \mathbb {P} (\mathbf {Y}) \left[ \nabla_ {\mathbf {Y}} \exp \left(- \frac {\| \mathbf {X} - \mu \mathbf {Y} \| _ {F} ^ {2}}{2 \sigma^ {2}}\right) \right] \mathrm {d} \mathbf {Y}}{\mu \int_ {\mathbb {R} ^ {N \times M}} \mathbb {P} (\mathbf {Y}) \exp \left(- \frac {\| \mathbf {X} - \mu \mathbf {Y} \| _ {F} ^ {2}}{2 \sigma^ {2}}\right) \mathrm {d} \mathbf {Y}} \\
= \frac {\int_ {\mathbb {R} ^ {N \times M}} \nabla_ {\mathbf {Y}} \mathbb {P} (\mathbf {Y}) \exp \left(- \frac {\| \mathbf {X} - \mu \mathbf {Y} \| _ {F} ^ {2}}{2 \sigma^ {2}}\right) \mathrm {d} \mathbf {Y}}{\mu \int_ {\mathbb {R} ^ {N \times M}} \mathbb {P} (\mathbf {Y}) \exp \left(- \frac {\| \mathbf {X} - \mu \mathbf {Y} \| _ {F} ^ {2}}{2 \sigma^ {2}}\right) \mathrm {d} \mathbf {Y}} \\
= \frac {1}{\mu} \mathbb {E} _ {\mathbb {P} _ {\mu , \sigma} (\mathbf {Y} | \mathbf {X})} [ \nabla_ {\mathbf {Y}} \log \mathbb {P} (\mathbf {Y}) ]. \\
\end{array}
$$
+
+
+
Lemma B.3. For $0 \leq k \leq M - 1$ and time $t \in (t_k, t_{k + 1}]$ , consider the continuous and the discretely approximated reverse SDEs starting from $\gamma$ ,
+
+$$
+\mathrm {d} \bar {\mathbf {X}} _ {t} = \left[ 1 / 2 \bar {\mathbf {X}} _ {t} + \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) \right] \mathrm {d} t + \mathrm {d} \mathbf {W} _ {t}, \quad \bar {\mathbf {X}} _ {0} = \boldsymbol {\gamma},
+$$
+
+$$
+\mathrm {d} \widehat {\mathbf {X}} _ {t} = \left[ 1 / 2 \widehat {\mathbf {X}} _ {t} + s _ {\pmb {\theta}} (\widehat {\mathcal {G}} _ {t _ {k} ^ {\prime}}, t _ {k} ^ {\prime}) \right] \mathrm {d} t + \mathrm {d} \mathbf {W} _ {t}, \qquad \widehat {\mathbf {X}} _ {0} = \pmb {\gamma},
+$$
+
Let $\bar{\mathbb{P}}_{t|t_k}$ be the density of $\bar{\mathbf{X}}_t$ given $\bar{\mathbf{X}}_{t_k}$ and $\widehat{\mathbb{P}}_{t|t_k}$ be the density of $\widehat{\mathbf{X}}_t$ given $\widehat{\mathbf{X}}_{t_k}$ . Then we have that
+
1. For any $\gamma$ , the two processes above satisfy: 1) there is a unique solution; and 2) the density functions are twice continuously differentiable for $t > 0$ .
2. For a.e. $\gamma$ (with respect to the Lebesgue measure), we have that
+
+$$
+\lim _ {t \mapsto t _ {k} ^ {+}} \mathrm {K L} \left(\bar {\mathbb {P}} _ {t | t _ {k}} (\cdot | \boldsymbol {\gamma}) | | \widehat {\mathbb {P}} _ {t | t _ {k}} (\cdot | \boldsymbol {\gamma})\right) = 0
+$$
+
Proof. Let $\mathbb{P}_{[t_k', t]}$ and $\mathbb{Q}_{[t_k', t]}$ denote the path measures of $(\bar{\mathbf{X}}_s)_{t_k' \leq s \leq t}$ and $(\widehat{\mathbf{X}}_s)_{t_k' \leq s \leq t}$ , respectively. For any $\mathbf{Y} \in \mathbb{R}^{N \times M}$ , by the data-processing inequality we have
+
+$$
+\mathrm {K L} (\bar {\mathbb {P}} _ {t | t _ {k} ^ {\prime}} (. | \mathbf {Y}) \| \widehat {\mathbb {P}} _ {t | t _ {k} ^ {\prime}} (. | \mathbf {Y})) \leq \mathrm {K L} (\mathbb {P} _ {[ t _ {k} ^ {\prime}, t ]} (. | \mathbf {Y}) \| \mathbb {Q} _ {[ t _ {k} ^ {\prime}, t ]} (. | \mathbf {Y}).
+$$
+
+Thus, it suffices to show
+
+$$
+\lim _ {t \to t _ {k} ^ {\prime} +} \mathrm {K L} (\mathbb {P} _ {[ t _ {k} ^ {\prime}, t ]} (.. | \mathbf {Y}) \| \mathbb {Q} _ {[ t _ {k} ^ {\prime}, t ]} (.. | \mathbf {Y})) = 0,
+$$
+
for a.e. $\mathbf{Y} \in \mathbb{R}^{N \times M}$ . It is easy to check that the Novikov condition is satisfied under Assumptions 3.1 and 3.2. Therefore, we can apply the Girsanov change of measure (Revuz & Yor, 2013) to $\mathbb{P}_{[t_k', t]}(\cdot|\mathbf{Y})$ and $\mathbb{Q}_{[t_k', t]}(\cdot|\mathbf{Y})$ . Then, for the exponential integrator scheme, we have that
+
+$$
+\mathrm {K L} (\mathbb {P} _ {[ t _ {k} ^ {\prime}, t ]}. | \mathbf {Y}) \| \mathbb {Q} _ {[ t _ {k} ^ {\prime}, t ]}. (| \mathbf {Y})) = \mathbb {E} \left[ \int_ {t _ {k}} ^ {t} \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G}) - s _ {\pmb {\theta}} (\widehat {\mathcal {G}} _ {t _ {i} ^ {\prime}}, t _ {i} ^ {\prime}) \| ^ {2} | \bar {\mathbf {X}} _ {t _ {k}} = \pmb {\gamma} \right]
+$$
+
Again, by Assumption 3.2, $\| \nabla_{\mathbf{X}}\log \mathbb{P}(\mathcal{G}_s) - s_{\theta}(\widehat{\mathcal{G}}_{t_k'},t_k')\|^2$ is bounded, and therefore we can apply the dominated convergence theorem (Trench, 2013) and move the limit inside the expectation, i.e.,
+
+$$
+\begin{array}{l} \lim _ {t \rightarrow t _ {k} ^ {\prime}} \mathrm {K L} (\mathbb {P} _ {[ t _ {k} ^ {\prime}, t ]} (. | \mathbf {Y}) \| \mathbb {Q} _ {[ t _ {k} ^ {\prime}, t ]} (. | \mathbf {Y})) = \lim _ {t \rightarrow t _ {k} ^ {\prime}} \mathbb {E} \left[ \int_ {t _ {k}} ^ {t} \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G}) - s _ {\boldsymbol {\theta}} (\widehat {\mathcal {G}} _ {t _ {i} ^ {\prime}}, t _ {i} ^ {\prime}) \| ^ {2} | \bar {\mathbf {X}} _ {t _ {i}} = \boldsymbol {\gamma} \right] \\ = \mathbb {E} \left[ \lim _ {t \to t _ {k} ^ {\prime} +} \int_ {t _ {k}} ^ {t} \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G}) - s _ {\boldsymbol {\theta}} (\widehat {\mathcal {G}} _ {t _ {i} ^ {\prime}}, t _ {i} ^ {\prime}) \| ^ {2} | \bar {\mathbf {X}} _ {t _ {i}} = \boldsymbol {\gamma} \right] \\ = 0 \\ \end{array}
+$$
+
+This completes the proof for the exponential integrator scheme. Similarly for the Euler-Maruyama scheme, we have that
+
+$$
+\mathrm {K L} (\mathbb {P} _ {[ t _ {k} ^ {\prime}, t ]}. | \mathbf {Y}) \| \mathbb {Q} _ {[ t _ {k} ^ {\prime}, t ]}. | \mathbf {Y})) = \mathbb {E} \left[ \int_ {t _ {k}} ^ {t} \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) - s _ {\pmb {\theta}} (\widehat {\mathcal {G}} _ {t _ {i} ^ {\prime}}, t _ {i} ^ {\prime}) + 1 / 2 (\mathcal {G} _ {t} - \pmb {\gamma}) \| ^ {2} | \bar {\mathbf {X}} _ {t _ {i}} = \pmb {\gamma} \right].
+$$
+
Again, by Assumptions 3.1 and 3.2, we can conclude that $\| \nabla_{\mathbf{X}}\log \mathbb{P}(\mathcal{G}_s) - s_\theta (\widehat{\mathcal{G}}_{t_k'},t_k') + \frac{1}{2}(\mathbf{X}_s - \mathbf{Y})\|^2$ is bounded. Then, again by the dominated convergence theorem, we get that
+
+$$
+\begin{array}{l} \lim _ {t \to t _ {k} ^ {\prime} +} \mathrm {K L} (\mathbb {P} _ {[ t _ {k} ^ {\prime}, t ]} (. | \mathbf {Y}) \| \mathbb {Q} _ {[ t _ {k} ^ {\prime}, t ]} (. | \mathbf {Y})) = \mathbb {E} \left[ \int_ {t _ {k}} ^ {t} \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) - s _ {\pmb {\theta}} (\widehat {\mathcal {G}} _ {t _ {i} ^ {\prime}}, t _ {i} ^ {\prime}) + 1 / 2 (\mathcal {G} _ {t} - \pmb {\gamma}) \| ^ {2} | \bar {\mathbf {X}} _ {t _ {i}} = \pmb {\gamma} \right] \\ = \mathbb {E} \left[ \lim _ {t \to t _ {k} ^ {\prime}} \int_ {t _ {k}} ^ {t} \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) - s _ {\pmb {\theta}} (\widehat {\mathcal {G}} _ {t _ {i} ^ {\prime}}, t _ {i} ^ {\prime}) + 1 / 2 (\mathcal {G} _ {t} - \pmb {\gamma}) \| ^ {2} | \bar {\mathbf {X}} _ {t _ {i}} = \pmb {\gamma} \right] \\ = 0 \\ \end{array}
+$$
+
+This completes the proof.
+
+
+
+We have the following results for the decomposition of the convergence bound for the Euler-Maruyama scheme and the exponential integrator scheme. We emphasize that the result below is independent of the generation paradigms.
+
+Lemma B.4. For the exponential integrator scheme, we have that,
+
+$$
+\begin{array}{l} \operatorname {K L} \left(\mathbb {P} (\mathbf {X}) \| \mathbb {P} \left(\widehat {\mathbf {X}} _ {T}\right)\right) \lesssim \operatorname {K L} \left(\mathbb {P} \left(\mathbf {X} _ {T}\right) \| \boldsymbol {\Pi} _ {\mathbf {X}}\right) + \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \| s _ {\boldsymbol {\theta}} \left(\mathcal {G} _ {t}, t\right) - \nabla_ {\mathbf {X}} \log \mathbb {P} \left(\mathcal {G} _ {t _ {i}}\right) \| ^ {2} \mathrm {d} t \tag {B.1} \\ + \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \mathbb {E} \| \nabla_ {\mathbf {X}} \log \mathbb {P} _ {t} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t _ {i}}) \| ^ {2} \mathrm {d} t, \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \operatorname {K L} \left(\mathbb {P} (\mathbf {A}) \| \mathbb {P} \left(\widehat {\mathbf {A}} _ {T}\right)\right) \lesssim \operatorname {K L} \left(\mathbb {P} \left(\mathbf {A} _ {T}\right) \| \boldsymbol {\Pi} _ {\mathbf {A}}\right) + \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \left\| s _ {\phi} \left(\mathcal {G} _ {t}, t\right) - \nabla_ {\mathbf {A}} \log \mathbb {P} \left(\mathcal {G} _ {t _ {i}}\right) \right\| ^ {2} \mathrm {d} t \tag {B.2} \\ + \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \mathbb {E} \| \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t _ {i}}) \| ^ {2} \mathrm {d} t. \\ \end{array}
+$$
+
+For the Euler-Maruyama scheme, we have that,
+
+$$
+\begin{array}{l} \operatorname {K L} \left(\mathbb {P} (\mathbf {X}) \| \mathbb {P} \left(\widehat {\mathbf {X}} _ {T}\right)\right) \lesssim \operatorname {K L} \left(\mathbb {P} \left(\mathbf {X} _ {T}\right) \| \boldsymbol {\Pi} _ {\mathbf {X}}\right) + \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \left\| s _ {\boldsymbol {\theta}} \left(\mathcal {G} _ {t}, t\right) - \nabla_ {\mathbf {X}} \log \mathbb {P} \left(\mathcal {G} _ {t _ {i}}\right) \right\| ^ {2} d t \tag {B.3} \\ + \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \left(\mathbb {E} \| \nabla_ {\mathbf {X}} \log \mathbb {P} _ {t} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t _ {i} - 1}) \| ^ {2} + \mathbb {E} \| \mathbf {X} _ {t} - \mathbf {X} _ {t _ {i}} \| ^ {2}\right) \mathrm {d} t, \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \operatorname {K L} \left(\mathbb {P} (\mathbf {A}) \| \mathbb {P} \left(\widehat {\mathbf {A}} _ {T}\right)\right) \lesssim \operatorname {K L} \left(\mathbb {P} \left(\mathbf {A} _ {T}\right) \| \boldsymbol {\Pi} _ {\mathbf {A}}\right) + \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \left\| s _ {\phi} \left(\mathcal {G} _ {t}, t\right) - \nabla_ {\mathbf {A}} \log \mathbb {P} \left(\mathcal {G} _ {t _ {i}}\right) \right\| ^ {2} d t \tag {B.4} \\ + \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \left(\mathbb {E} \| \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t _ {i}}) \| ^ {2} + \mathbb {E} \| \mathbf {A} _ {t} - \mathbf {A} _ {t _ {i}} \| ^ {2}\right) \mathrm {d} t. \\ \end{array}
+$$
+
Proof. We start by deriving the expression for the feature process. Consider an arbitrary time interval $t_i < t \leq t_{i+1}$ , let $\bar{\mathbb{P}}_{t|t_i}$ denote the distribution of $\bar{\mathbf{X}}_t$ given $\bar{\mathbf{X}}_{t_i}$ , and let $\widehat{\mathbb{P}}_{t|t_i}$ denote the distribution of the discrete approximation $\widehat{\mathbf{X}}_t$ given $\bar{\mathbf{X}}_{t_i}$ .
+
+Then for any $\gamma \in \mathbb{R}^{N\times F}$ , by the chain rule of KL-divergence, we can get the following progression relation,
+
+$$
+\mathrm {K L} \left(\bar {\mathbb {P}} _ {t _ {i + 1} ^ {\prime}} | | \widehat {\mathbb {P}} _ {t _ {i + 1} ^ {\prime}}\right) \leq \mathbb {E} _ {\bar {\mathbb {P}} _ {t ^ {\prime}}} \mathrm {K L} \left(\bar {\mathbb {P}} _ {t _ {i + 1} ^ {\prime} \mid t _ {i} ^ {\prime}} (., \mid \boldsymbol {\gamma})\right) \| \widehat {\mathbb {P}} _ {t _ {i + 1} ^ {\prime} \mid t _ {i} ^ {\prime}} (., \mid \boldsymbol {\gamma})) + \mathrm {K L} \left(\bar {\mathbb {P}} _ {t _ {i} ^ {\prime}} | | \widehat {\mathbb {P}} _ {t _ {i} ^ {\prime}}\right). \tag {B.5}
+$$
+
+Equivalently, we have that
+
+$$
+\mathrm {K L} (\bar {\mathbb {P}} _ {t _ {i + 1} ^ {\prime}} \| \widehat {\mathbb {P}} _ {t _ {i + 1} ^ {\prime}}) - \mathrm {K L} (\bar {\mathbb {P}} _ {t _ {i} ^ {\prime}} \| \widehat {\mathbb {P}} _ {t _ {i} ^ {\prime}}) \leq \mathbb {E} _ {\bar {\mathbb {P}} _ {t _ {i} ^ {\prime}}} \mathrm {K L} (\bar {\mathbb {P}} _ {t _ {i + 1} ^ {\prime} \mid t _ {i} ^ {\prime}} (\cdot \mid \boldsymbol {\gamma}) \| \widehat {\mathbb {P}} _ {t _ {i + 1} ^ {\prime} \mid t _ {i} ^ {\prime}} (\cdot \mid \boldsymbol {\gamma})).
+$$
+
+We can observe that if we take a telescoping sum over $0 \leq i \leq M$ , most terms on the left-hand side cancel, leaving only $\mathrm{KL}(\mathbb{P}_T||\widehat{\mathbb{P}}_T)$ and $\mathrm{KL}(\mathbb{P}_0||\widehat{\mathbb{P}}_0)$ .
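The chain rule of KL divergence invoked above says that the KL between two joint laws equals the KL between the marginals plus the expected KL between the conditionals, and marginalizing can only decrease KL. A minimal numerical sketch of both facts, using hypothetical discrete toy distributions, is:

```python
import numpy as np

def kl(p, q):
    """KL divergence between two discrete distributions (flattened)."""
    p = np.asarray(p, float).ravel()
    q = np.asarray(q, float).ravel()
    return float(np.sum(p * np.log(p / q)))

# Two hypothetical joint distributions over (Y, X), shape (2, 3).
P = np.array([[0.10, 0.20, 0.10],
              [0.25, 0.15, 0.20]])
Q = np.array([[0.15, 0.15, 0.10],
              [0.20, 0.25, 0.15]])

# Chain rule: KL(joint) = KL(marginal of Y) + E_{P(Y)} KL(conditional of X | Y).
Py, Qy = P.sum(axis=1), Q.sum(axis=1)
lhs = kl(P, Q)
rhs = kl(Py, Qy) + sum(Py[y] * kl(P[y] / Py[y], Q[y] / Qy[y]) for y in range(2))
assert abs(lhs - rhs) < 1e-12

# Marginalizing out Y can only decrease KL, which yields the inequality form.
Px, Qx = P.sum(axis=0), Q.sum(axis=0)
assert kl(Px, Qx) <= lhs + 1e-12
```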
+
+Therefore, we can focus on the right-hand side and derive an expression for
+
+$$
+\mathbb {E} _ {\bar {\mathbb {P}} _ {t _ {i} ^ {\prime}}} \mathrm {K L} (\bar {\mathbb {P}} _ {t _ {i + 1} ^ {\prime} \mid t _ {i} ^ {\prime}} (\cdot \mid \boldsymbol {\gamma}) \| \widehat {\mathbb {P}} _ {t _ {i + 1} ^ {\prime} \mid t _ {i} ^ {\prime}} (\cdot \mid \boldsymbol {\gamma})).
+$$
+
+By Lemma B.3, the two processes satisfy the conditions of 1) a unique solution and 2) twice continuous differentiability. Then, by Lemma B.1, we have the following time-evolution relation for any $\gamma$ and $t > t_i$ :
+
+$$
+\begin{array}{l} \frac {\mathrm {d}}{\mathrm {d} t} \operatorname {K L} \left(\bar {\mathbb {P}} _ {t ^ {\prime} \mid t _ {i} ^ {\prime}} (\cdot \mid \boldsymbol {\gamma}) \| \widehat {\mathbb {P}} _ {t ^ {\prime} \mid t _ {i} ^ {\prime}} (\cdot \mid \boldsymbol {\gamma})\right) = - \frac {1}{2} \mathbb {E} _ {\bar {\mathbb {P}} _ {t ^ {\prime} \mid t _ {i} ^ {\prime}} (\mathbf {X} _ {t ^ {\prime}} \mid \boldsymbol {\gamma})} \left\| \nabla \log \frac {\bar {\mathbb {P}} _ {t ^ {\prime} \mid t _ {i} ^ {\prime}} (\mathbf {X} _ {t ^ {\prime}} \mid \boldsymbol {\gamma})}{\widehat {\mathbb {P}} _ {t ^ {\prime} \mid t _ {i} ^ {\prime}} (\mathbf {X} _ {t ^ {\prime}} \mid \boldsymbol {\gamma})} \right\| ^ {2} \\ + \mathbb {E} _ {\bar {\mathbb {P}} _ {t ^ {\prime} \mid t _ {i} ^ {\prime}} (\mathbf {X} _ {t ^ {\prime}} \mid \boldsymbol {\gamma})} \left[ \left\langle \nabla \log \bar {\mathbb {P}} _ {t ^ {\prime}} (\mathcal {G} _ {t ^ {\prime}}) - s _ {\boldsymbol {\theta}} (\mathcal {G} _ {t _ {i} ^ {\prime}}, t _ {i} ^ {\prime}) + \frac {1}{2} (\mathbf {X} _ {t ^ {\prime}} - \mathbf {X} _ {t _ {i} ^ {\prime}}), \nabla \log \frac {\bar {\mathbb {P}} _ {t ^ {\prime} \mid t _ {i} ^ {\prime}} (\mathbf {X} _ {t ^ {\prime}} \mid \boldsymbol {\gamma})}{\widehat {\mathbb {P}} _ {t ^ {\prime} \mid t _ {i} ^ {\prime}} (\mathbf {X} _ {t ^ {\prime}} \mid \boldsymbol {\gamma})} \right\rangle \right] . \\ \end{array}
+$$
+
+Using the fact that $\langle a,b\rangle \leq \frac{1}{2}\| a\| ^2 +\frac{1}{2}\| b\| ^2$ , we get that
+
+$$
+\begin{array}{l} \leq - \frac {1}{2} \mathbb {E} _ {\bar {\mathbb {P}} _ {t ^ {\prime} | t _ {i} ^ {\prime}} (\mathbf {X} _ {t ^ {\prime}} | \boldsymbol {\gamma})} \left\| \nabla \log \frac {\bar {\mathbb {P}} _ {t ^ {\prime} | t _ {i} ^ {\prime}} (\mathbf {X} _ {t ^ {\prime}} | \boldsymbol {\gamma})}{\widehat {\mathbb {P}} _ {t ^ {\prime} | t _ {i} ^ {\prime}} (\mathbf {X} _ {t ^ {\prime}} | \boldsymbol {\gamma})} \right\| ^ {2} \\ + \frac {1}{2} \mathbb {E} _ {\bar {\mathbb {P}} _ {t ^ {\prime} | t _ {i} ^ {\prime}} (\mathbf {X} _ {t ^ {\prime}} | \boldsymbol {\gamma})} \left\| \nabla \log \bar {\mathbb {P}} _ {t ^ {\prime}} (\mathbf {X} _ {t ^ {\prime}}) - s _ {\boldsymbol {\theta}} (\mathcal {G} _ {t _ {i} ^ {\prime}}, t _ {i} ^ {\prime}) + \frac {1}{2} (\mathbf {X} _ {t ^ {\prime}} - \mathbf {X} _ {t _ {i} ^ {\prime}}) \right\| ^ {2} \\ + \frac {1}{2} \mathbb {E} _ {\bar {\mathbb {P}} _ {t ^ {\prime} | t _ {i} ^ {\prime}} (\mathbf {X} _ {t ^ {\prime}} | \boldsymbol {\gamma})} \left\| \nabla \log \frac {\bar {\mathbb {P}} _ {t ^ {\prime} | t _ {i} ^ {\prime}} (\mathbf {X} _ {t ^ {\prime}} | \boldsymbol {\gamma})}{\widehat {\mathbb {P}} _ {t ^ {\prime} | t _ {i} ^ {\prime}} (\mathbf {X} _ {t ^ {\prime}} | \boldsymbol {\gamma})} \right\| ^ {2} \\ = \frac {1}{2} \mathbb {E} _ {\bar {\mathbb {P}} _ {t ^ {\prime} | t _ {i} ^ {\prime}} (\mathbf {X} _ {t ^ {\prime}} | \boldsymbol {\gamma})} \left\| \nabla \log \bar {\mathbb {P}} _ {t ^ {\prime}} (\mathbf {X} _ {t ^ {\prime}}) - s _ {\boldsymbol {\theta}} (\mathcal {G} _ {t _ {i} ^ {\prime}}, t _ {i} ^ {\prime}) + \frac {1}{2} (\mathbf {X} _ {t ^ {\prime}} - \mathbf {X} _ {t _ {i} ^ {\prime}}) \right\| ^ {2} \\ \end{array}
+$$
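The elementary inequality $\langle a,b\rangle \leq \frac{1}{2}\|a\|^2 + \frac{1}{2}\|b\|^2$ used in this cancellation is just $0 \leq \|a-b\|^2$ rearranged; a quick numerical spot-check on random vectors (a sketch, not part of the proof) is:

```python
import numpy as np

rng = np.random.default_rng(0)

# Spot-check <a, b> <= ||a||^2 / 2 + ||b||^2 / 2 on random vectors;
# the inequality is 0 <= ||a - b||^2 rearranged, so it holds with equality
# exactly when a == b.
for _ in range(1000):
    a, b = rng.normal(size=8), rng.normal(size=8)
    assert a @ b <= 0.5 * (a @ a) + 0.5 * (b @ b) + 1e-12
```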
+
+This means that we have,
+
+$$
+\begin{array}{l} \frac {\mathrm {d}}{\mathrm {d} t} \mathrm {K L} (\bar {\mathbb {P}} _ {t ^ {\prime} | t _ {i} ^ {\prime}} \| \widehat {\mathbb {P}} _ {t ^ {\prime} | t _ {i} ^ {\prime}} (. | \boldsymbol {\gamma})) \leq \frac {1}{2} \mathbb {E} _ {\bar {\mathbb {P}} _ {t ^ {\prime} | t _ {i} ^ {\prime}} (\mathbf {X} _ {t ^ {\prime}} | \boldsymbol {\gamma})} \left\| \nabla \log \bar {\mathbb {P}} _ {t ^ {\prime}} (\mathbf {X} _ {t ^ {\prime}}) - s _ {\boldsymbol {\theta}} (\mathcal {G} _ {t _ {i} ^ {\prime}}, t _ {i} ^ {\prime}) + \frac {1}{2} (\mathbf {X} _ {t ^ {\prime}} - \mathbf {X} _ {t _ {i} ^ {\prime}}) \right\| ^ {2} \\ \leq \frac {1}{2} \mathbb {E} _ {\bar {\mathbb {P}} _ {t | t _ {i}} (\mathbf {X} | \boldsymbol {\gamma})} \left(\left\| \nabla \log \bar {\mathbb {P}} _ {t ^ {\prime}} (\mathbf {X} _ {t ^ {\prime}}) - s _ {\boldsymbol {\theta}} (\mathcal {G} _ {t _ {i} ^ {\prime}}, t _ {i} ^ {\prime}) \right\| ^ {2} + \left\| \frac {1}{2} (\mathbf {X} _ {t ^ {\prime}} - \mathbf {X} _ {t _ {i} ^ {\prime}}) \right\| ^ {2}\right), \\ \leq \frac {1}{2} \mathbb {E} _ {\bar {\mathbb {P}} _ {t ^ {\prime} | t _ {i} ^ {\prime}} (\mathbf {X} _ {t ^ {\prime}} | \boldsymbol {\gamma})} \left\| \nabla \log \bar {\mathbb {P}} _ {t ^ {\prime}} (\mathbf {X} _ {t ^ {\prime}}) - s _ {\boldsymbol {\theta}} (\mathcal {G} _ {t _ {i} ^ {\prime}}, t _ {i} ^ {\prime}) \right\| ^ {2} + \frac {1}{2} \mathbb {E} _ {\bar {\mathbb {P}} _ {t ^ {\prime} | t _ {i} ^ {\prime}}} \left\| \frac {1}{2} (\mathbf {X} _ {t ^ {\prime}} - \mathbf {X} _ {t _ {i} ^ {\prime}}) \right\| ^ {2} \\ \end{array}
+$$
+
+Then, by Lemma B.3, for a.e. $\gamma$ , we have
+
+$$
+\lim _ {t ^ {\prime} \to t _ {i} ^ {\prime +}} \mathrm {K L} (\bar {\mathbb {P}} _ {t ^ {\prime} | t _ {i} ^ {\prime}} (\cdot | \boldsymbol {\gamma}) \| \widehat {\mathbb {P}} _ {t ^ {\prime} | t _ {i} ^ {\prime}} (\cdot | \boldsymbol {\gamma})) = 0.
+$$
+
+This means that $\bar{\mathbb{P}}_{t^{\prime}|t_i^{\prime}}(.|\gamma)$ and $\widehat{\mathbb{P}}_{t^{\prime}|t_i^{\prime}}(.|\gamma)$ become increasingly similar as the approximation interval becomes small. Then, by the derivation above and integrating from $t_i^\prime$ to $t_{i + 1}^{\prime}$ , we get that,
+
+$$
+\begin{array}{l} \operatorname {K L} \left(\bar {\mathbb {P}} _ {t _ {i + 1} ^ {\prime} | t _ {i} ^ {\prime}} (. | \boldsymbol {\gamma}) \| \widehat {\mathbb {P}} _ {t _ {i + 1} ^ {\prime} | t _ {i} ^ {\prime}} (. | \boldsymbol {\gamma})\right) \leq \\ \frac {1}{2} \int_ {t _ {i} ^ {\prime}} ^ {t _ {i + 1} ^ {\prime}} \mathbb {E} _ {\bar {\mathbb {P}} _ {t ^ {\prime} | t _ {i} ^ {\prime}} (\mathbf {X} _ {t ^ {\prime}} | \boldsymbol {\gamma})} \left\| \nabla \log \bar {\mathbb {P}} _ {t ^ {\prime}} (\mathbf {X} _ {t ^ {\prime}}) - s _ {\boldsymbol {\theta}} (\mathcal {G} _ {t _ {i} ^ {\prime}}, t _ {i} ^ {\prime}) \right\| ^ {2} + \frac {1}{2} \mathbb {E} _ {\bar {\mathbb {P}} _ {t ^ {\prime} | t _ {i} ^ {\prime}}} \left\| \frac {1}{2} (\mathbf {X} _ {t ^ {\prime}} - \mathbf {X} _ {t _ {i} ^ {\prime}}) \right\| ^ {2} \mathrm {d} t. \\ \end{array}
+$$
+
+Since $\bar{\mathbb{P}}_{t^{\prime}}(\mathbf{X}_{t^{\prime}})$ is absolutely continuous with respect to the Lebesgue measure, integrating both sides above with respect to $\bar{\mathbb{P}}_{t_i'}$ , we get,
+
+$$
+\begin{array}{l} \mathbb {E} _ {\bar {\mathbb {P}} _ {t _ {i} ^ {\prime}}} \mathrm {K L} (\bar {\mathbb {P}} _ {t _ {i + 1} ^ {\prime} | t _ {i} ^ {\prime}} (. | \boldsymbol {\gamma}) | | \widehat {\mathbb {P}} _ {t _ {i + 1} ^ {\prime} | t _ {i} ^ {\prime}} (. | \boldsymbol {\gamma})) \leq \\ \frac {1}{2} \int_ {t _ {i} ^ {\prime}} ^ {t _ {i + 1} ^ {\prime}} \mathbb {E} _ {\bar {\mathbb {P}} _ {t ^ {\prime}}} \left\| \nabla \log \bar {\mathbb {P}} _ {t ^ {\prime}} (\mathbf {X} _ {t ^ {\prime}}) - s _ {\pmb {\theta}} (\mathcal {G} _ {t _ {i} ^ {\prime}}, t _ {i} ^ {\prime}) \right\| ^ {2} + \frac {1}{2} \mathbb {E} _ {\bar {\mathbb {P}} _ {t ^ {\prime}} (\mathbf {X} _ {t ^ {\prime}})} \left\| \frac {1}{2} (\mathbf {X} _ {t ^ {\prime}} - \mathbf {X} _ {t _ {i} ^ {\prime}}) \right\| ^ {2} \mathrm {d} t \\ \end{array}
+$$
+
+Substituting the above result into the progression relation given in Eq. B.5, we get the following progression relation,
+
+$$
+\begin{array}{l} \mathrm {K L} \left(\bar {\mathbb {P}} _ {t _ {i + 1} ^ {\prime}} \| \widehat {\mathbb {P}} _ {t _ {i + 1} ^ {\prime}}\right) \leq \mathrm {K L} \left(\bar {\mathbb {P}} _ {t _ {i} ^ {\prime}} \| \widehat {\mathbb {P}} _ {t _ {i} ^ {\prime}}\right) + \\ \frac {1}{2} \int_ {t _ {i} ^ {\prime}} ^ {t _ {i + 1} ^ {\prime}} \mathbb {E} _ {\bar {\mathbb {P}} _ {t ^ {\prime}} (\mathbf {X} _ {t ^ {\prime}})} \left\| \nabla \log \bar {\mathbb {P}} _ {t ^ {\prime}} (\mathbf {X} _ {t ^ {\prime}}) - s _ {\boldsymbol {\theta}} (\mathcal {G} _ {t _ {i} ^ {\prime}}, t _ {i} ^ {\prime}) \right\| ^ {2} + \frac {1}{2} \mathbb {E} _ {\bar {\mathbb {P}} _ {t ^ {\prime}} (\mathbf {X} _ {t ^ {\prime}})} \left\| \frac {1}{2} (\mathbf {X} _ {t ^ {\prime}} - \mathbf {X} _ {t _ {i} ^ {\prime}}) \right\| ^ {2} \mathrm {d} t \\ \end{array}
+$$
+
+Then, summing the above iterative relation over $0 \leq i \leq M$ and replacing $\mathbb{P}_{t'} = \mathbb{P}_{T - t}$ , we can obtain,
+
+$$
+\begin{array}{l} \mathrm {K L} (\mathbb {P} _ {0} \| \widehat {\mathbb {P}} _ {T}) \leq \mathrm {K L} (\mathbb {P} _ {T} \| \boldsymbol {\Pi} _ {\mathbf {X}}) + \frac {1}{2} \sum_ {i = 0} ^ {M} \int_ {t _ {i}} ^ {t _ {i + 1}} \mathbb {E} _ {\bar {\mathbb {P}} _ {t} (\mathbf {X})} \left\| \nabla_ {\mathbf {X}} \log \bar {\mathbb {P}} (\mathcal {G} _ {t}) - s _ {\boldsymbol {\theta}} (\mathcal {G} _ {t _ {i + 1}}, t _ {i + 1}) \right\| ^ {2} + \frac {1}{2} \mathbb {E} _ {\bar {\mathbb {P}} _ {t} (\mathbf {X})} \left\| \frac {1}{2} (\mathbf {X} _ {t} - \mathbf {X} _ {t _ {i + 1}}) \right\| ^ {2} \mathrm {d} t \\ = \mathrm {K L} (\mathbb {P} _ {T} \| \boldsymbol {\Pi} _ {\mathbf {X}}) + \frac {1}{2} \sum_ {i = 0} ^ {M} \int_ {t _ {i}} ^ {t _ {i + 1}} \mathbb {E} _ {\bar {\mathbb {P}} _ {t} (\mathbf {X})} \left\| \nabla_ {\mathbf {X}} \log \bar {\mathbb {P}} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {X}} \log \bar {\mathbb {P}} (\mathcal {G} _ {t _ {i + 1}}) + \nabla_ {\mathbf {X}} \log \bar {\mathbb {P}} (\mathcal {G} _ {t _ {i + 1}}) - s _ {\boldsymbol {\theta}} (\mathcal {G} _ {t _ {i + 1}}, t _ {i + 1}) \right\| ^ {2} + \frac {1}{2} \mathbb {E} _ {\bar {\mathbb {P}} _ {t} (\mathbf {X})} \left\| \frac {1}{2} \left(\mathbf {X} _ {t} - \mathbf {X} _ {t _ {i + 1}}\right) \right\| ^ {2} \mathrm {d} t \\ \leq \mathrm {K L} (\mathbb {P} _ {T} \| \boldsymbol {\Pi} _ {\mathbf {X}}) + \sum_ {i = 0} ^ {M} \int_ {t _ {i}} ^ {t _ {i + 1}} \mathbb {E} _ {\bar {\mathbb {P}} _ {t} (\mathbf {X})} \left\| \nabla_ {\mathbf {X}} \log \bar {\mathbb {P}} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {X}} \log \bar {\mathbb {P}} (\mathcal {G} _ {t _ {i + 1}}) \right\| ^ {2} + \mathbb {E} _ {\bar {\mathbb {P}} _ {t} (\mathbf {X})} \left\| \nabla_ {\mathbf {X}} \log \bar {\mathbb {P}} (\mathcal {G} _ {t _ {i + 1}}) - s _ {\boldsymbol {\theta}} (\mathcal {G} _ {t _ {i + 1}}, t _ {i + 1}) \right\| ^ {2} + \frac {1}{2} \mathbb {E} _ {\bar {\mathbb {P}} _ {t} (\mathbf {X})} \left\| \frac {1}{2} \left(\mathbf {X} _ {t} - \mathbf {X} _ {t _ {i + 1}}\right) \right\| ^ {2} \mathrm {d} t. \\ \end{array}
+$$
+
+This completes the derivation of the Euler-Maruyama scheme for the feature matrix. The derivation for the structure matrix is symmetric.
+
+Furthermore, to obtain the derivation for the exponential integrator scheme, we only need to replace the time-derivative bound with,
+
+$$
+\frac {\mathrm {d}}{\mathrm {d} t} \mathrm {K L} (\bar {\mathbb {P}} _ {t | t _ {i}} \| \widehat {\mathbb {P}} _ {t | t _ {i}} (. | \boldsymbol {\gamma})) \leq \frac {1}{2} \mathbb {E} _ {\bar {\mathbb {P}} _ {t | t _ {i}} (\mathbf {X} | \boldsymbol {\gamma})} \left\| \nabla \log \bar {\mathbb {P}} _ {t} (\mathbf {X}) - s _ {\boldsymbol {\theta}} (\mathcal {G} _ {t _ {i}}, t _ {i}) \right\| ^ {2}.
+$$
+
+Then, going through the derivation above in a similar manner yields the corresponding bounds for the feature and structure matrices under the exponential integrator scheme.
+
+Lemma B.5. Under Assumption 3.1, for $T > 1$ , we have
+
+$$
+\begin{array}{l} \mathrm {K L} \left(\mathbb {P} _ {T} (\mathbf {X}) \| \boldsymbol {\Pi} _ {\mathbf {X}}\right) \leq \left(N F + H _ {\mathbf {X}}\right) e ^ {- T}, \\ \mathrm {K L} \left(\mathbb {P} _ {T} (\mathbf {A}) \| \boldsymbol {\Pi} _ {\mathbf {A}}\right) \leq \left(N ^ {2} + H _ {\mathbf {A}}\right) e ^ {- T}. \\ \end{array}
+$$
+
+Proof. First, we note that $f(x) = x \log x$ is a convex function for $x > 0$ . Then, for any $t > 0$ , we can use Jensen's inequality to bound the entropy of $\mathbb{P}_t$ :
+
+$$
+\begin{array}{l} \int \mathbb {P} _ {t} (\mathbf {X}) \log \mathbb {P} _ {t} (\mathbf {X}) \mathrm {d} \mathbf {X} = \int \left(\int \mathbb {P} _ {t | 0} (\mathbf {X} | \mathbf {Y}) \mathbb {P} (\mathbf {Y}) \mathrm {d} \mathbf {Y}\right) \log \left(\int \mathbb {P} _ {t | 0} (\mathbf {X} | \mathbf {Y}) \mathbb {P} (\mathbf {Y}) \mathrm {d} \mathbf {Y}\right) \mathrm {d} \mathbf {X}, \\ \leq \int \int \mathbb {P} _ {t | 0} (\mathbf {X} | \mathbf {Y}) \log \mathbb {P} _ {t | 0} (\mathbf {X} | \mathbf {Y}) \mathrm {d} \mathbb {P} (\mathbf {Y}) \mathrm {d} \mathbf {X}, \\ = \int \left(\int \mathbb {P} _ {t | 0} (\mathbf {X} | \mathbf {Y}) \log \mathbb {P} _ {t | 0} (\mathbf {X} | \mathbf {Y}) \mathrm {d} \mathbf {X}\right) \mathrm {d} \mathbb {P} (\mathbf {Y}). \\ \end{array}
+$$
+
+Since $\mathbf{X}_t|\mathbf{X}_0 = \mathbf{Y}\sim \mathbb{N}(\alpha_t \mathbf{Y},\sigma_t^2\mathbf{I})$ , we have
+
+$$
+\int \mathbb {P} _ {t | 0} (\mathbf {X} | \mathbf {Y}) \log \mathbb {P} _ {t | 0} (\mathbf {X} | \mathbf {Y}) \mathrm {d} \mathbf {X} = - \frac {N F}{2} \log \left(2 \pi \sigma_ {t} ^ {2}\right) - \frac {N F}{2}.
+$$
+
+Substituting this back into the derivation above, we have
+
+$$
+\int \mathbb {P} _ {t} (\mathbf {X}) \log \mathbb {P} _ {t} (\mathbf {X}) \mathrm {d} \mathbf {X} \leq - \frac {N F}{2} \log \left(2 \pi \sigma_ {t} ^ {2}\right) - \frac {N F}{2}. \tag {B.6}
+$$
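The quantity bounded in Eq. B.6 is the negative differential entropy of an $NF$ -dimensional Gaussian with covariance $\sigma_t^2\mathbf{I}$ . A Monte Carlo sanity check of the closed form, using hypothetical values for the dimension and $\sigma_t$ , can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)
d, sigma = 12, 0.7        # hypothetical dimension NF and noise scale sigma_t

# Monte Carlo estimate of E[log p(X)] for X ~ N(mu, sigma^2 I),
# i.e. the negative differential entropy of the Gaussian.
mu = rng.normal(size=d)
x = mu + sigma * rng.normal(size=(100_000, d))
log_p = (-0.5 * np.sum((x - mu) ** 2, axis=1) / sigma**2
         - 0.5 * d * np.log(2 * np.pi * sigma**2))
mc_estimate = log_p.mean()

# Closed form from the text: -(d/2) log(2 pi sigma^2) - d/2.
closed_form = -0.5 * d * np.log(2 * np.pi * sigma**2) - 0.5 * d
assert abs(mc_estimate - closed_form) < 0.05
```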
+
+Then, by the definition of KL divergence, we have that,
+
+$$
+\begin{array}{l} \mathrm {K L} (\mathbb {P} _ {t} \| \boldsymbol {\Pi}) = \int \mathbb {P} _ {t} (\mathbf {X}) \log \frac {\mathbb {P} _ {t} (\mathbf {X})}{\boldsymbol {\Pi} _ {\mathbf {X}}} \mathrm {d} \mathbf {X} \\ = \int \mathbb {P} _ {t} (\mathbf {X}) [ \log \mathbb {P} _ {t} (\mathbf {X}) - \log \boldsymbol {\Pi} _ {\mathbf {X}} ] d \mathbf {X} \\ \end{array}
+$$
+
+Substituting in the density of the standard Gaussian for $\Pi_{\mathbf{X}}$ , we get
+
+$$
+= \int \mathbb {P} _ {t} (\mathbf {X}) \log \mathbb {P} _ {t} (\mathbf {X}) \mathrm {d} \mathbf {X} + \mathbb {E} _ {\mathbb {P} _ {t}} \left[ \frac {\| \mathbf {X} \| ^ {2}}{2} + \frac {N F}{2} \log (2 \pi) \right]
+$$
+
+Substituting in the result of Eq. B.6 and rearranging the terms, we get
+
+$$
+\leq \frac {N F}{2} \log \sigma_ {t} ^ {- 2} + \frac {1}{2} (H _ {\mathbf {X}} - N F).
+$$
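For an isotropic Gaussian $\mathbb{P}_t = \mathbb{N}(0, \sigma_t^2\mathbf{I})$ , the KL decomposition above can be checked exactly against the closed-form KL to the standard Gaussian; the sketch below uses hypothetical values for the dimension and variance:

```python
import numpy as np

d, sigma2 = 8, 0.6   # hypothetical dimension NF and variance sigma_t^2

# Evaluate the decomposition from the text for P_t = N(0, sigma2 * I) against
# the standard Gaussian prior Pi = N(0, I):
#   KL(P_t || Pi) = int P_t log P_t + E[||X||^2 / 2] + (d/2) log(2 pi).
neg_entropy = -0.5 * d * np.log(2 * np.pi * sigma2) - 0.5 * d   # int P log P
moment_term = 0.5 * d * sigma2 + 0.5 * d * np.log(2 * np.pi)
kl_decomposed = neg_entropy + moment_term

# Closed-form KL between the two isotropic Gaussians agrees exactly.
kl_closed = 0.5 * d * (sigma2 - 1 - np.log(sigma2))
assert abs(kl_decomposed - kl_closed) < 1e-12
```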
+
+Since Langevin dynamics with a strongly log-concave stationary distribution converges exponentially fast, we can obtain
+
+$$
+\operatorname {K L} \left(\mathbb {P} _ {T} \| \boldsymbol {\Pi}\right) \leq e ^ {- (T - t)} \left[ \frac {N F}{2} \log \sigma_ {t} ^ {- 2} + \frac {1}{2} \left(H _ {\mathbf {X}} - N F\right) \right].
+$$
+
+Picking $t = \log 2$ , for which $e^t \log \left(\frac{1}{\sigma_t^2}\right) \lesssim 1$ , we obtain
+
+$$
+\leq e ^ {- T} (N F + H _ {\mathbf {X}}).
+$$
+
+The proof for the structure is symmetric, replacing the dimension $NF$ with $N^2$ and $H_{\mathbf{X}}$ with $H_{\mathbf{A}}$ .
+
+Lemma B.6. Suppose the step size $\Delta_{i}$ for the Euler-Maruyama scheme satisfies $\Delta_{i} \leq 1$ , then for $1 \leq i \leq M$ , we have
+
+$$
+\mathbb {E} \| \mathbf {X} _ {t} - \mathbf {X} _ {t _ {i}} \| ^ {2} \leq N F (t _ {i} - t) + H _ {\mathbf {X}} (t _ {i} - t) ^ {2}, \quad t _ {i - 1} \leq t \leq t _ {i},
+$$
+
+$$
+\mathbb {E} \| \mathbf {A} _ {t} - \mathbf {A} _ {t _ {i}} \| ^ {2} \leq N ^ {2} (t _ {i} - t) + H _ {\mathbf {A}} (t _ {i} - t) ^ {2}, \quad t _ {i - 1} \leq t \leq t _ {i}.
+$$
+
+Proof. We start with the feature matrix. By the definition of the forward process, we get that
+
+$$
+\begin{array}{l} \mathbb {E} \| \mathbf {X} _ {t} - \mathbf {X} _ {t _ {i}} \| ^ {2} = \mathbb {E} \left\| \int_ {t} ^ {t _ {i}} \frac {1}{2} \mathbf {X} _ {s} \mathrm {d} s - \int_ {t} ^ {t _ {i}} \mathrm {d} \mathbf {W} _ {s} \right\| ^ {2} \\ \lesssim \mathbb {E} \left\| \int_ {t} ^ {t _ {i}} \frac {1}{2} \mathbf {X} _ {s} \mathrm {d} s \right\| ^ {2} + \mathbb {E} \left\| \int_ {t} ^ {t _ {i}} \mathrm {d} \mathbf {W} _ {s} \right\| ^ {2} \\ \end{array}
+$$
+
+By the Cauchy-Schwarz inequality, we can get
+
+$$
+\lesssim (t _ {i} - t) \int_ {t} ^ {t _ {i}} \mathbb {E} \| \mathbf {X} _ {s} \| ^ {2} d s + N F (t _ {i} - t)
+$$
+
+In addition, we have the explicit expression for the conditional density $\mathbf{X}_s|\mathbf{X}_0$ given by,
+
+$$
+\mathbb {N} \left(e ^ {- s / 2} \mathbf {X} _ {0}, (1 - e ^ {- s}) \mathbf {I}\right).
+$$
+
+Then, based on this expression and Assumption 3.1, which bounds the second moment, we can get that
+
+$$
+\mathbb {E} \| \mathbf {X} _ {s} \| ^ {2} \leq H _ {\mathbf {X}} + N F.
+$$
+
+Substituting this back into the derivation above, we get
+
+$$
+\mathbb {E} \| \mathbf {X} _ {t} - \mathbf {X} _ {t _ {i}} \| ^ {2} \lesssim N F (t _ {i} - t) + (N F + H _ {\mathbf {X}}) (t _ {i} - t) ^ {2}.
+$$
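The conditional Gaussian law $\mathbb{N}(e^{-s/2}\mathbf{X}_0, (1-e^{-s})\mathbf{I})$ used above comes from solving the forward OU SDE $\mathrm{d}\mathbf{X}_t = -\frac{1}{2}\mathbf{X}_t\,\mathrm{d}t + \mathrm{d}\mathbf{W}_t$ . A Monte Carlo sketch (Euler-Maruyama with hypothetical dimensions and step sizes) recovering the conditional mean, the conditional variance, and the second-moment bound is:

```python
import numpy as np

rng = np.random.default_rng(7)
d, s = 6, 1.5                     # hypothetical dimension and time horizon
n_steps, n_paths = 600, 20_000
dt = s / n_steps

# Euler-Maruyama simulation of the forward OU process dX = -1/2 X dt + dW
# from a fixed (hypothetical) starting point X_0.
x0 = rng.normal(size=d)
x = np.tile(x0, (n_paths, 1))
for _ in range(n_steps):
    x = x - 0.5 * x * dt + np.sqrt(dt) * rng.normal(size=(n_paths, d))

# Conditional law from the text: X_s | X_0 ~ N(e^{-s/2} X_0, (1 - e^{-s}) I).
assert np.allclose(x.mean(axis=0), np.exp(-s / 2) * x0, atol=0.05)
assert np.allclose(x.var(axis=0), 1 - np.exp(-s), atol=0.05)

# Second-moment bound of the form E||X_s||^2 <= ||X_0||^2 + d used in the lemma.
assert np.mean(np.sum(x**2, axis=1)) <= x0 @ x0 + d + 0.1
```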
+
+Similarly, for the structure matrix, we have that,
+
+$$
+\begin{array}{l} \mathbb {E} \| \mathbf {A} _ {t} - \mathbf {A} _ {t _ {i}} \| ^ {2} = \mathbb {E} \left\| \int_ {t} ^ {t _ {i}} \frac {1}{2} \mathbf {A} _ {s} \mathrm {d} s - \int_ {t} ^ {t _ {i}} \mathrm {d} \mathbf {W} _ {s} \right\| ^ {2} \\ \lesssim \mathbb {E} \left\| \int_ {t} ^ {t _ {i}} \frac {1}{2} \mathbf {A} _ {s} \mathrm {d} s \right\| ^ {2} + \mathbb {E} \left\| \int_ {t} ^ {t _ {i}} \mathrm {d} \mathbf {W} _ {s} \right\| ^ {2} \\ \end{array}
+$$
+
+By the Cauchy-Schwarz inequality, we can get
+
+$$
+\lesssim (t _ {i} - t) \int_ {t} ^ {t _ {i}} \mathbb {E} \| \mathbf {A} _ {s} \| ^ {2} \mathrm {d} s + N ^ {2} (t _ {i} - t)
+$$
+
+In addition, we have the explicit expression for the conditional density $\mathbf{A}_s|\mathbf{A}_0$ given by,
+
+$$
+\mathbb {N} (e ^ {- s / 2} \mathbf {A} _ {0}, (1 - e ^ {- s}) \mathbf {I}).
+$$
+
+Then, based on this expression and Assumption 3.1, which bounds the second moment, we can get that
+
+$$
+\mathbb {E} \| \mathbf {A} _ {s} \| ^ {2} \leq H _ {\mathbf {A}} + N ^ {2}.
+$$
+
+Substituting this back into the derivation above, we get
+
+$$
+\mathbb {E} \| \mathbf {A} _ {t} - \mathbf {A} _ {t _ {i}} \| ^ {2} \lesssim N ^ {2} (t _ {i} - t) + (N ^ {2} + H _ {\mathbf {A}}) (t _ {i} - t) ^ {2}.
+$$
+
+
+
+Lemma B.7 (Chewi et al., 2024). Let $\mathbb{P}$ be a continuously differentiable probability density. Suppose $\nabla \log \mathbb{P}$ is $L$ -Lipschitz; then we have
+
+$$
+\mathbb {E} \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G}) \| ^ {2} \leq N F L,
+$$
+
+$$
+\mathbb {E} \| \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G}) \| ^ {2} \leq N ^ {2} L.
+$$
+
+Proof. Using integration by parts for feature $\mathbf{X}$ , we have:
+
+$$
+\begin{array}{l} \mathbb {E} \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G}) \| ^ {2} = \int \mathbb {P} (\mathbf {X}) \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G}) \| ^ {2} \mathrm {d} \mathbf {X} \\ = \int \left\langle \nabla \mathbb {P} (\mathbf {X}), \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G}) \right\rangle \mathrm {d} \mathbf {X}, \\ = - \int \mathbb {P} (\mathbf {X}) \Delta \log \mathbb {P} (\mathbf {X}) \mathrm {d} \mathbf {X} \\ \leq N F L. \\ \end{array}
+$$
+
+The derivation for the structure is similar.
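For a concrete instance of Lemma B.7, take $\mathbb{P} = \mathbb{N}(0, \sigma^2\mathbf{I})$ in $d = NF$ dimensions: the score $-\mathbf{x}/\sigma^2$ is $L$ -Lipschitz with $L = 1/\sigma^2$ , and $\mathbb{E}\|\nabla\log\mathbb{P}\|^2 = d/\sigma^2 = dL$ , so the bound is attained with equality. A Monte Carlo sketch with hypothetical values:

```python
import numpy as np

rng = np.random.default_rng(3)
d, sigma2 = 10, 0.5        # hypothetical dimension NF and variance

# For P = N(0, sigma2 * I), the score is x -> -x / sigma2, which is
# L-Lipschitz with L = 1 / sigma2; the lemma's bound E||score||^2 <= d * L
# holds with equality for this Gaussian.
L = 1.0 / sigma2
x = np.sqrt(sigma2) * rng.normal(size=(200_000, d))
mc_second_moment = np.mean(np.sum((x / sigma2) ** 2, axis=1))

assert abs(mc_second_moment - d * L) < 0.2
```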
+
+
+
+Lemma B.8. For any $0 \leq t \leq s \leq T$ , the forward process satisfies
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {s}) \| ^ {2} \right] \lesssim \\ \mathbb {E} \left[ \left\| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {X}} \log \mathbb {P} \left(\alpha_ {t, s} ^ {- 1} \mathcal {G} _ {s}\right) \right\| ^ {2} \right] + \mathbb {E} \left[ \left\| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) \right\| ^ {2} \right] \left(1 - \alpha_ {t, s} ^ {- 1}\right) ^ {2}. \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \| \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {s}) \| ^ {2} \right] \lesssim \\ \mathbb {E} \left[ \left\| \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {A}} \log \mathbb {P} \left(\alpha_ {t, s} ^ {- 1} \mathcal {G} _ {s}\right) \right\| ^ {2} \right] + \mathbb {E} \left[ \left\| \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t}) \right\| ^ {2} \right] \left(1 - \alpha_ {t, s} ^ {- 1}\right) ^ {2}. \\ \end{array}
+$$
+
+Proof. We start with proving for the feature $\mathbf{X}$ . By our choice of hyper-parameter, the forward process is a OU process with condition density:
+
+$$
+\begin{array}{l} \mathbf {X} _ {s} \mid \mathbf {X} _ {t} \sim \mathbb {N} \left(\alpha_ {t, s} \mathbf {X} _ {t}, \left(1 - \alpha_ {t, s} ^ {2}\right) \mathbf {I}\right), \\ \mathbf {A} _ {s} \mid \mathbf {A} _ {t} \sim \mathbb {N} \left(\alpha_ {t, s} \mathbf {A} _ {t}, \left(1 - \alpha_ {t, s} ^ {2}\right) \mathbf {I}\right). \\ \end{array}
+$$
+
+From Lemma B.2, we can rewrite $\nabla_{\mathbf{X}}\log \mathbb{P}(\mathcal{G}_s)$ as
+
+$$
+\nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {s}) = \alpha_ {t, s} ^ {- 1} \mathbb {E} _ {\mathbb {P} _ {t | s}} \nabla_ {\mathbf {X}} \log \mathbb {P} _ {t} (\mathcal {G}),
+$$
+
+where $\mathbb{P}_{t|s}$ is the conditional density of $\mathcal{G}_t$ given $\mathcal{G}_s$ . Thus the discretization error can be bounded by
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \| \nabla_ {\mathbf {X}} \log \mathbb {P} \left(\alpha_ {t, s} ^ {- 1} \mathcal {G} _ {s}\right) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {s}) \| ^ {2} \right] = \mathbb {E} _ {\mathbb {P} _ {s}} \left[ \| \alpha_ {t, s} ^ {- 1} \mathbb {E} _ {\mathbb {P} _ {t | s}} \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {X}} \log \mathbb {P} _ {t} \left(\alpha_ {t, s} ^ {- 1} \mathcal {G} _ {s}\right) \| ^ {2} \right] \\ \leq \mathbb {E} \left[ \| \alpha_ {t, s} ^ {- 1} \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\alpha_ {t, s} ^ {- 1} \mathcal {G} _ {s}) \| ^ {2} \right] \\ \leq 2 \left(1 - \alpha_ {t, s} ^ {- 1}\right) ^ {2} \mathbb {E} \left[ \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) \| ^ {2} \right] + \\ 2 \mathbb {E} \left[ \| \nabla_ {\mathbf {X}} \log \mathbb {P} _ {t} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\alpha_ {t, s} ^ {- 1} \mathcal {G} _ {s}) \| ^ {2} \right]. \\ \end{array}
+$$
+
+Therefore, splitting the error into the space-discretization and the time-discretization error, we have
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {X}} \log \mathbb {P} \left(\alpha_ {t, s} ^ {- 1} \mathcal {G} _ {s}\right) \| ^ {2} \right] \\ \leq 2 \mathbb {E} \left[ \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {X}} \log \mathbb {P} \left(\alpha_ {t, s} ^ {- 1} \mathcal {G} _ {s}\right) \| ^ {2} \right] + 2 \mathbb {E} \left[ \| \nabla_ {\mathbf {X}} \log \mathbb {P} \left(\alpha_ {t, s} ^ {- 1} \mathcal {G} _ {s}\right) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {s}) \| ^ {2} \right] \\ \leq 2 \left(1 - \alpha_ {t, s} ^ {- 1}\right) ^ {2} \mathbb {E} \left[ \left\| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) \right\| ^ {2} \right] + 4 \mathbb {E} \left[ \left\| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla \log \mathbb {P} \left(\alpha_ {t, s} ^ {- 1} \mathcal {G} _ {s}\right) \right\| ^ {2} \right] \\ \lesssim \mathbb {E} \left[ \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) \| ^ {2} \right] \left(1 - \alpha_ {t, s} ^ {- 1}\right) ^ {2} + \mathbb {E} \left[ \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\alpha_ {t, s} ^ {- 1} \mathcal {G} _ {s}) \| ^ {2} \right]. \\ \end{array}
+$$
+
+The proof for the structure matrix $\mathbf{A}$ is symmetric and therefore omitted here.
+
+
+
+Lemma B.9. For $t_{k - 1}\leq t\leq t_k$ , if $L\geq 1$ and $\Delta_{t_k}\leq 1$ , we have:
+
+$$
+\mathbb {E} \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t _ {k}}) \| ^ {2} \lesssim N F L ^ {2} (t _ {k} - t) + N F L (t _ {k} - t) ^ {2},
+$$
+
+$$
+\mathbb {E} \| \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t _ {k}}) \| ^ {2} \lesssim N ^ {2} L ^ {2} (t _ {k} - t) + N ^ {2} L (t _ {k} - t) ^ {2}.
+$$
+
+Proof. We start by proving the result for the feature $\mathbf{X}$ .
+
+By Lemma B.8, we have that
+
+$$
+\begin{array}{l} \mathbb {E} \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t _ {k}}) \| ^ {2} \\ \lesssim \mathbb {E} \left[ \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\alpha_ {t, t _ {k}} ^ {- 1} \mathcal {G} _ {t _ {k}}) \| ^ {2} \right] + \mathbb {E} \left[ \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) \| ^ {2} \right] \left(1 - \alpha_ {t, t _ {k}} ^ {- 1}\right) ^ {2}. \\ \end{array}
+$$
+
+Next, we tackle each term of the equation above. By Assumption 3.1, we have that,
+
+$$
+\begin{array}{l} \mathbb {E} \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\alpha_ {t, t _ {k}} ^ {- 1} \mathcal {G} _ {t _ {k}}) \| ^ {2} \leq N F L ^ {2} \mathbb {E} \| \mathbf {A} _ {t} \mathbf {X} _ {t} - \alpha_ {t, t _ {k}} ^ {- 2} \mathbf {A} _ {t} \mathbf {X} _ {t} \| ^ {2} \\ = N F L ^ {2} \left(e ^ {2 \left(t _ {k} - t\right)} - 1\right). \\ \end{array}
+$$
+
+Using the premise that $t_k - t \leq \Delta_k \leq 1$ :
+
+$$
+\mathbb {E} \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t _ {k}}) \| ^ {2} \lesssim N F L ^ {2} (t _ {k} - t)
+$$
+
+Then, by Lemma B.7, we have that
+
+$$
+\mathbb {E} \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) \| ^ {2} (1 - \alpha_ {t, t _ {k}} ^ {- 1}) ^ {2} \leq N F L (t _ {k} - t) ^ {2}
+$$
+
+Thus, we conclude that:
+
+$$
+\mathbb {E} \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t _ {k}}) \| ^ {2} \lesssim N F L ^ {2} (t _ {k} - t) + N F L (t _ {k} - t) ^ {2}
+$$
+
+Similarly, for the structure, by Lemma B.8 we have that
+
+$$
+\begin{array}{l} \mathbb {E} \| \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t _ {k}}) \| ^ {2} \\ \lesssim \mathbb {E} \left[ \| \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {A}} \log \mathbb {P} (\alpha_ {t, t _ {k}} ^ {- 1} \mathcal {G} _ {t _ {k}}) \| ^ {2} \right] + \mathbb {E} \left[ \| \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t}) \| ^ {2} \right] \left(1 - \alpha_ {t, t _ {k}} ^ {- 1}\right) ^ {2}. \\ \end{array}
+$$
+
+Next, we tackle each term of the equation above. By Assumption 3.1, we have that,
+
+$$
+\begin{array}{l} \mathbb {E} \| \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {A}} \log \mathbb {P} (\alpha_ {t, t _ {k}} ^ {- 1} \mathcal {G} _ {t _ {k}}) \| ^ {2} \leq N ^ {2} L ^ {2} \mathbb {E} \| \mathbf {A} _ {t} \mathbf {X} _ {t} - \alpha_ {t, t _ {k}} ^ {- 2} \mathbf {A} _ {t} \mathbf {X} _ {t} \| ^ {2} \\ = N ^ {2} L ^ {2} \left(e ^ {2 \left(t _ {k} - t\right)} - 1\right). \\ \end{array}
+$$
+
+Using the premise that $t_k - t \leq \Delta_k \leq 1$ :
+
+$$
+\mathbb {E} \| \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t _ {k}}) \| ^ {2} \lesssim N ^ {2} L ^ {2} (t _ {k} - t)
+$$
+
+Then, by Lemma B.7, we have that
+
+$$
+\mathbb {E} \| \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t}) \| ^ {2} (1 - \alpha_ {t, t _ {k}} ^ {- 1}) ^ {2} \leq N ^ {2} L (t _ {k} - t) ^ {2}
+$$
+
+Thus, we conclude that:
+
+$$
+\mathbb {E} \| \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t _ {k}}) \| ^ {2} \lesssim N ^ {2} L ^ {2} (t _ {k} - t) + N ^ {2} L (t _ {k} - t) ^ {2}.
+$$
+
+Lemma B.10. For the graph paradigm given by Eq. 3.7, under the premise $\| \mathbf{A}^* \|^2 \leq \sigma_{\mathbf{A}}^2$ , for $t_{k-1} \leq t \leq t_k$ , if $L \geq 1$ and $\Delta_{t_k} \leq 1$ , we have:
+
+$$
+\mathbb {E} \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t _ {k}}) \| ^ {2} \lesssim N F L ^ {2} (t _ {k} - t) \sigma_ {\mathbf {A}} ^ {2} + N F L (t _ {k} - t) ^ {2}.
+$$
+
+Proof. By Lemma B.8, we have that
+
+$$
+\begin{array}{l} \mathbb {E} \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t _ {k}}) \| ^ {2} \\ \lesssim \mathbb {E} \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\alpha_ {t, t _ {k}} ^ {- 1} \mathcal {G} _ {t _ {k}}) \| ^ {2} + \mathbb {E} \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) \| ^ {2} (1 - \alpha_ {t, t _ {k}} ^ {- 1}) ^ {2}. \\ \end{array}
+$$
+
+Next, we tackle each term of the equation above. By Assumption 3.1, we have that,
+
+$$
+\begin{array}{l} \mathbb {E} \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t _ {k}}) \| ^ {2} \\ = \mathbb {E} \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {X}} \log \mathbb {P} \left(\alpha_ {t, t _ {k}} ^ {- 1} \mathcal {G} _ {t}\right) \| ^ {2} \\ \leq N F L ^ {2} \mathbb {E} \| \mathbf {X} _ {t} - \alpha_ {t, t _ {k}} ^ {- 1} \mathbf {X} _ {t} \| ^ {2} \| \mathbf {A} ^ {*} \| ^ {2} \\ = N F L ^ {2} \left(e ^ {t _ {k} - t} - 1\right) \sigma_ {\mathbf {A}} ^ {2} \\ \end{array}
+$$
+
+Using the premise that $t_k - t \leq \Delta_k \leq 1$ :
+
+$$
+\mathbb {E} \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t _ {k}}) \| ^ {2} \lesssim N F L ^ {2} (t _ {k} - t) \sigma_ {\mathbf {A}} ^ {2}
+$$
+
+Then, by Lemma B.7, we have that
+
+$$
+\mathbb {E} \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) \| ^ {2} (1 - \alpha_ {t, k} ^ {- 1}) ^ {2} \leq N F L (t _ {k} - t) ^ {2}
+$$
+
+Thus, we conclude that:
+
+$$
+\mathbb {E} \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t _ {k}}) \| ^ {2} \lesssim N F L ^ {2} (t _ {k} - t) \sigma_ {\mathbf {A}} ^ {2} + N F L (t _ {k} - t) ^ {2}
+$$
+
+This completes the proof.
+
+Lemma B.11. For the graph generation paradigm given by Eq. 3.8, under the premise that $\| \mathbf{X}^{*}\| ^2\leq \sigma_{\mathbf{X}}^2$ , for $t_{k - 1}\leq t\leq t_k$ , if $L\geq 1$ and $\Delta_{t_k}\leq 1$ , we have:
+
+$$
+\mathbb {E} \| \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t _ {k}}) \| ^ {2} \lesssim N ^ {2} L ^ {2} (t _ {k} - t) \sigma_ {\mathbf {X}} ^ {2} + N ^ {2} L (t _ {k} - t) ^ {2},
+$$
+
+Proof. By Lemma B.8, we have that
+
+$$
+\begin{array}{l} \mathbb {E} \| \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t _ {k}}) \| ^ {2} \\ \lesssim \mathbb {E} \| \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {A}} \log \mathbb {P} (\alpha_ {t, t _ {k}} ^ {- 1} \mathcal {G} _ {t}) \| ^ {2} + \mathbb {E} \| \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t}) \| ^ {2} (1 - \alpha_ {t, k} ^ {- 1}) ^ {2}. \\ \end{array}
+$$
+
+Next, we tackle each term of the equation above. By Assumption 3.1, we have that,
+
+$$
+\begin{array}{l} \mathbb {E} \| \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t _ {k}}) \| ^ {2} \\ = \mathbb {E} \| \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {A}} \log \mathbb {P} (\alpha_ {t, t _ {k}} ^ {- 1} \mathcal {G} _ {t}) \| ^ {2} \\ \leq N ^ {2} L ^ {2} \mathbb {E} \| \mathbf {X} ^ {*} \| ^ {2} \mathbb {E} \| \mathbf {A} _ {t} - \alpha_ {t, t _ {k}} ^ {- 1} \mathbf {A} _ {t} \| ^ {2} \\ = N ^ {2} L ^ {2} \left(e ^ {t _ {k} - t} - 1\right) \sigma_ {\mathbf {X}} ^ {2} \\ \end{array}
+$$
+
+Using the premise that $t_k - t \leq \Delta_k \leq 1$ :
+
+$$
+\mathbb {E} \| \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t _ {k}}) \| ^ {2} \lesssim N ^ {2} L ^ {2} (t _ {k} - t) \sigma_ {\mathbf {X}} ^ {2}
+$$
+
+Then, by Lemma B.7, we have that
+
+$$
+\mathbb {E} \| \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t}) \| ^ {2} (1 - \alpha_ {t, k} ^ {- 1}) ^ {2} \leq N ^ {2} L (t _ {k} - t) ^ {2}
+$$
+
+Thus, we conclude that:
+
+$$
+\mathbb {E} \| \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t _ {k}}) \| ^ {2} \lesssim N ^ {2} L ^ {2} (t _ {k} - t) \sigma_ {\mathbf {X}} ^ {2} + N ^ {2} L (t _ {k} - t) ^ {2}
+$$
+
+This completes the proof.
+
+Lemma B.12. For $t_{k - 1}\leq t\leq t_k$ , if $L\geq 1$ and $\Delta_{t_k}\leq 1$ , we have:
+
+$$
+\mathbb {E} \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t _ {k}}) \| ^ {2} \lesssim N F L ^ {2} (t _ {k} - t) ^ {2},
+$$
+
+Proof. By Lemma B.8, we have that
+
+$$
+\begin{array}{l} \mathbb {E} \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t _ {k}}) \| ^ {2} \\ \lesssim \mathbb {E} \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\alpha_ {t, t _ {k}} ^ {- 1} \mathcal {G} _ {t}) \| ^ {2} + \mathbb {E} \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) \| ^ {2} (1 - \alpha_ {t, k} ^ {- 1}) ^ {2}. \\ \end{array}
+$$
+
+Next, we tackle each term of the equation above. By Assumption 3.1, we have that,
+
+$$
+\begin{array}{l} \mathbb {E} \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t _ {k}}) \| ^ {2} \\ = \mathbb {E} \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\alpha_ {t, t _ {k}} ^ {- 1} \mathcal {G} _ {t}) \| ^ {2} \\ \leq N F L ^ {2} \mathbb {E} \| \mathbf {X} _ {t} - \alpha_ {t, t _ {k}} ^ {- 1} \mathbf {X} _ {t} \| ^ {2} \mathbb {E} \| \mathbf {A} _ {t} - \alpha_ {t, t _ {k}} ^ {- 1} \mathbf {A} _ {t} \| ^ {2} \\ = N F L ^ {2} \left(e ^ {t _ {k} - t} - 1\right) ^ {2} \\ \end{array}
+$$
+
+Using the premise that $t_k - t \leq \Delta_k \leq 1$ :
+
+$$
+\mathbb {E} \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t _ {k}}) \| ^ {2} \lesssim N F L ^ {2} (t _ {k} - t) ^ {2}
+$$
+
+Then, by Lemma B.7, we have that
+
+$$
+\mathbb {E} \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) \| ^ {2} (1 - \alpha_ {t, k} ^ {- 1}) ^ {2} \leq N F L (t _ {k} - t) ^ {2}
+$$
+
+Thus, we conclude that:
+
+$$
+\mathbb {E} \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t _ {k}}) \| ^ {2} \lesssim N F L ^ {2} (t _ {k} - t) ^ {2} + N F L (t _ {k} - t) ^ {2} \lesssim N F L ^ {2} (t _ {k} - t) ^ {2}.
+$$
+
+Since $L \geq 1$ , the second term is absorbed into the first. This completes the proof.
+
+# C. Proof of Main Results
+
+In this section, we present the proof of the main results. The overall structure of the proof for each paradigm is similar: we first use Lemma B.4 to obtain a decomposition of the convergence bound, and then apply the intermediate results proved in the previous section to obtain the final expression.
+
+# C.1. Proof of Theorem 4.1
+
+In this section, we present the proof for Theorem 4.1.
+
+Proof of Theorem 4.1. Let $\mathbb{P}_0$ be the data distribution and $\widehat{\mathbb{P}}_T$ be the learned distribution obtained through the sampling scheme. We start by considering the exponential integrator scheme. By Lemma B.4, we can decompose the KL divergence between $\mathbb{P}_0$ and $\widehat{\mathbb{P}}_T$ as follows,
+
+$$
+\begin{array}{l} \operatorname {K L} \left(\mathbb {P} \left(\mathbf {X} _ {0}\right) \| \mathbb {P} \left(\widehat {\mathbf {X}} _ {T}\right)\right) \lesssim \operatorname {K L} \left(\mathbb {P} \left(\mathbf {X} _ {T}\right) \| \boldsymbol {\Pi} _ {\mathbf {X}}\right) + \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \left\| s _ {\boldsymbol {\theta}} \left(\mathcal {G} _ {t _ {i}}, t _ {i}\right) - \nabla_ {\mathbf {X}} \log \mathbb {P} \left(\mathcal {G} _ {t _ {i}}\right) \right\| ^ {2} d t \tag {C.1} \\ + \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \mathbb {E} \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t _ {i}}) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) \| ^ {2} \mathrm {d} t. \\ \end{array}
+$$
+
+Then, we tackle each term in the equation above.
+
+By Lemma B.5, we can immediately bound the first term as follows,
+
+$$
+\operatorname {K L} \left(\mathbb {P} _ {T} (\mathbf {X}) \| \boldsymbol {\Pi} _ {\mathbf {X}}\right) \lesssim (N F + H _ {\mathbf {X}}) e ^ {- T}.
+$$
+
+Next, we tackle the second term. By Assumption 3.2, we have that,
+
+$$
+\begin{array}{l} \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \| s _ {\boldsymbol {\theta}} (\mathcal {G} _ {t _ {i}}, t _ {i}) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t _ {i}}) \| ^ {2} \mathrm {d} t \\ = \sum_ {i = 1} ^ {M} \Delta_ {t _ {i}} \| s _ {\boldsymbol {\theta}} \left(\mathcal {G} _ {t _ {i}}, t _ {i}\right) - \nabla_ {\mathbf {X}} \log \mathbb {P} \left(\mathcal {G} _ {t _ {i}}\right) \| ^ {2} \\ \leq T \epsilon_ {\mathbf {X}} ^ {2} \\ \end{array}
+$$
+
+Therefore, it remains to tackle the last term. By Lemma B.10, we immediately get that
+
+$$
+\begin{array}{l} \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \mathbb {E} \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t _ {i}}) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) \| ^ {2} d t \\ \lesssim \sum_ {i = 1} ^ {M} N F L ^ {2} \sigma_ {\mathbf {A}} ^ {2} \int_ {t _ {i - 1}} ^ {t _ {i}} (t _ {i} - t) \mathrm {d} t + N F L \int_ {t _ {i - 1}} ^ {t _ {i}} (t _ {i} - t) ^ {2} \mathrm {d} t \\ \lesssim \sum_ {i = 1} ^ {M} N F L ^ {2} \sigma_ {\mathbf {A}} ^ {2} \Delta_ {t _ {i}} ^ {2} + N F L \Delta_ {t _ {i}} ^ {3} \\ \end{array}
+$$
+
+Combining all the results above, we get that,
+
+$$
+\begin{array}{l} \mathrm {K L} (\mathbb {P} (\mathbf {X} _ {0}) \| \widehat {\mathbb {P}} (\mathbf {X} _ {T})) \lesssim \mathrm {K L} (\mathbb {P} (\mathbf {X} _ {T}) \| \boldsymbol {\Pi} _ {\mathbf {X}}) + \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \| s _ {\boldsymbol {\theta}} (\mathcal {G} _ {t _ {i}}, t _ {i}) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t _ {i}}) \| ^ {2} \mathrm {d} t \\ + \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \mathbb {E} \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t _ {i}}) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) \| ^ {2} d t \\ \lesssim (N F + H _ {\mathbf {X}}) e ^ {- T} + T \epsilon_ {\mathbf {X}} ^ {2} + \sum_ {i = 1} ^ {M} N F L ^ {2} \sigma_ {\mathbf {A}} ^ {2} \Delta_ {t _ {i}} ^ {2} + N F L \Delta_ {t _ {i}} ^ {3} \\ \end{array}
+$$
+
+This completes the derivation for the exponential integrator scheme.
+
+Again, from Lemma B.4, we have that the decomposition of the convergence bound of the Euler-Maruyama scheme is given by,
+
+$$
+\begin{array}{l} \operatorname {K L} \left(\mathbb {P} \left(\mathbf {X} _ {0}\right) \| \mathbb {P} \left(\widehat {\mathbf {X}} _ {T}\right)\right) \lesssim \operatorname {K L} \left(\mathbb {P} \left(\mathbf {X} _ {T}\right) \| \boldsymbol {\Pi} _ {\mathbf {X}}\right) + \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \left\| s _ {\boldsymbol {\theta}} \left(\mathcal {G} _ {t _ {i}}, t _ {i}\right) - \nabla_ {\mathbf {X}} \log \mathbb {P} \left(\mathcal {G} _ {t _ {i}}\right) \right\| ^ {2} d t \tag {C.2} \\ + \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \left(\mathbb {E} \| \nabla_ {\mathbf {X}} \log \mathbb {P} _ {t} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t _ {i}}) \| ^ {2} + \mathbb {E} \| \mathbf {X} _ {t} - \mathbf {X} _ {t _ {i}} \| ^ {2}\right) d t, \\ \end{array}
+$$
+
+Comparing the decompositions of the Euler-Maruyama and exponential integrator schemes, we see that the only difference is the extra term
+
+$$
+\sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \mathbb {E} \| \mathbf {X} _ {t} - \mathbf {X} _ {t _ {i}} \| ^ {2} \mathrm {d} t
+$$
+
+For this, we can use Lemma B.6 and immediately get that
+
+$$
+\mathbb {E} \| \mathbf {X} _ {t} - \mathbf {X} _ {t _ {i}} \| ^ {2} \leq N F (t _ {i} - t) + H _ {\mathbf {X}} (t _ {i} - t) ^ {2}, \quad t _ {i - 1} \leq t \leq t _ {i}
+$$
+
+Substituting this result into the extra term, we have that,
+
+$$
+\begin{array}{l} \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \mathbb {E} \| \mathbf {X} _ {t} - \mathbf {X} _ {t _ {i}} \| ^ {2} \mathrm {d} t \\ \leq \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} N F (t _ {i} - t) + H _ {\mathbf {X}} (t _ {i} - t) ^ {2} \mathrm {d} t \\ \lesssim \sum_ {i = 1} ^ {M} N F \Delta_ {t _ {i}} ^ {2} + H _ {\mathbf {X}} \Delta_ {t _ {i}} ^ {3} \\ \end{array}
+$$
+
+Therefore, combining this result and the result of the exponential integrator scheme, we have,
+
+$$
+\begin{array}{l} \mathrm {K L} (\mathbb {P} (\mathbf {X} _ {0}) \| \widehat {\mathbb {P}} (\mathbf {X} _ {T})) \lesssim \mathrm {K L} (\mathbb {P} (\mathbf {X} _ {T}) \| \boldsymbol {\Pi} _ {\mathbf {X}}) + \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \| s _ {\boldsymbol {\theta}} (\mathcal {G} _ {t _ {i}}, t _ {i}) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t _ {i}}) \| ^ {2} \mathrm {d} t \\ + \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \mathbb {E} \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t _ {i}}) \| ^ {2} d t \\ \lesssim (N F + H _ {\mathbf {X}}) e ^ {- T} + T \epsilon_ {\mathbf {X}} ^ {2} + \sum_ {i = 1} ^ {M} N F L ^ {2} \sigma_ {\mathbf {A}} ^ {2} \Delta_ {t _ {i}} ^ {2} + N F L \Delta_ {t _ {i}} ^ {3} + \sum_ {i = 1} ^ {M} N F \Delta_ {t _ {i}} ^ {2} + H _ {\mathbf {X}} \Delta_ {t _ {i}} ^ {3} \\ \end{array}
+$$
+
+This completes the proof.
+
+# C.2. Proof of Theorem 4.2
+
+In this section, we present the proof for Theorem 4.2.
+
+Proof of Theorem 4.2. Let $\mathbb{P}_0$ be the data distribution and $\widehat{\mathbb{P}}_T$ be the learned distribution obtained through the sampling scheme. We start by considering the exponential integrator scheme. By Lemma B.4, we can decompose the KL divergence between $\mathbb{P}_0$ and $\widehat{\mathbb{P}}_T$ as follows,
+
+$$
+\begin{array}{l} \operatorname {K L} \left(\mathbb {P} \left(\mathbf {A} _ {0}\right) \| \mathbb {P} \left(\widehat {\mathbf {A}} _ {T}\right)\right) \lesssim \operatorname {K L} \left(\mathbb {P} \left(\mathbf {A} _ {T}\right) \| \boldsymbol {\Pi} _ {\mathbf {A}}\right) + \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \left\| s _ {\boldsymbol {\theta}} \left(\mathcal {G} _ {t _ {i}}, t _ {i}\right) - \nabla_ {\mathbf {A}} \log \mathbb {P} \left(\mathcal {G} _ {t _ {i}}\right) \right\| ^ {2} d t \tag {C.3} \\ + \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \mathbb {E} \| \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t _ {i}}) - \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t}) \| ^ {2} \mathrm {d} t. \\ \end{array}
+$$
+
+Then, we tackle each term in the equation above.
+
+By Lemma B.5, we can immediately bound the first term as follows,
+
+$$
+\mathrm {K L} (\mathbb {P} _ {T} (\mathbf {A}) \| \boldsymbol {\Pi} _ {\mathbf {A}}) \lesssim (N ^ {2} + H _ {\mathbf {A}}) e ^ {- T}.
+$$
+
+Next, we tackle the second term. By Assumption 3.2, we have that,
+
+$$
+\begin{array}{l} \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \left\| s _ {\phi} \left(\mathcal {G} _ {t _ {i}}, t _ {i}\right) - \nabla_ {\mathbf {A}} \log \mathbb {P} \left(\mathcal {G} _ {t _ {i}}\right) \right\| ^ {2} d t \\ = \sum_ {i = 1} ^ {M} \Delta_ {t _ {i}} \| s _ {\boldsymbol {\phi}} \left(\mathcal {G} _ {t _ {i}}, t _ {i}\right) - \nabla_ {\mathbf {A}} \log \mathbb {P} \left(\mathcal {G} _ {t _ {i}}\right) \| ^ {2} \\ \leq T \epsilon_ {\mathbf {A}} ^ {2} \\ \end{array}
+$$
+
+Therefore, it remains to tackle the last term. By Lemma B.11, we immediately get that
+
+$$
+\begin{array}{l} \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \mathbb {E} \| \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t _ {i}}) \| ^ {2} \mathrm {d} t \\ \lesssim \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} N ^ {2} L ^ {2} (t _ {i} - t) \sigma_ {\mathbf {X}} ^ {2} + N ^ {2} L (t _ {i} - t) ^ {2} \mathrm {d} t \\ \lesssim N ^ {2} L ^ {2} \sigma_ {\mathbf {X}} ^ {2} \sum_ {i = 1} ^ {M} \Delta_ {t _ {i}} ^ {2} + N ^ {2} L \sum_ {i = 1} ^ {M} \Delta_ {t _ {i}} ^ {3} \\ \end{array}
+$$
+
+Combining all the results above, we get that,
+
+$$
+\begin{array}{l} \mathrm {K L} (\mathbb {P} (\mathbf {A} _ {0}) \| \mathbb {P} (\widehat {\mathbf {A}} _ {T})) \lesssim \mathrm {K L} (\mathbb {P} (\mathbf {A} _ {T}) \| \boldsymbol {\Pi} _ {\mathbf {A}}) + \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \| s _ {\phi} (\mathcal {G} _ {t _ {i}}, t _ {i}) - \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t _ {i}}) \| ^ {2} \mathrm {d} t \\ + \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \mathbb {E} \| \nabla_ {\mathbf {A}} \log \mathbb {P} \left(\mathcal {G} _ {t _ {i}}\right) - \nabla_ {\mathbf {A}} \log \mathbb {P} \left(\mathcal {G} _ {t}\right) \| ^ {2} d t \\ \lesssim (N ^ {2} + H _ {\mathbf {A}}) e ^ {- T} + T \epsilon_ {\mathbf {A}} ^ {2} + N ^ {2} L ^ {2} \sigma_ {\mathbf {X}} ^ {2} \sum_ {i = 1} ^ {M} \Delta_ {t _ {i}} ^ {2} + N ^ {2} L \sum_ {i = 1} ^ {M} \Delta_ {t _ {i}} ^ {3} \\ = (N ^ {2} + H _ {\mathbf {A}}) e ^ {- T} + T \epsilon_ {\mathbf {A}} ^ {2} + N ^ {2} L \left(L \sigma_ {\mathbf {X}} ^ {2} \sum_ {i = 1} ^ {M} \Delta_ {t _ {i}} ^ {2} + \sum_ {i = 1} ^ {M} \Delta_ {t _ {i}} ^ {3}\right) \\ \end{array}
+$$
+
+This completes the derivation for the exponential integrator scheme.
+
+Again, from Lemma B.4, we have that the decomposition of the convergence bound of the Euler-Maruyama scheme is given by,
+
+$$
+\begin{array}{l} \operatorname {K L} \left(\mathbb {P} \left(\mathbf {A} _ {0}\right) \| \mathbb {P} \left(\widehat {\mathbf {A}} _ {T}\right)\right) \lesssim \operatorname {K L} \left(\mathbb {P} \left(\mathbf {A} _ {T}\right) \| \boldsymbol {\Pi} _ {\mathbf {A}}\right) + \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \left\| s _ {\phi} \left(\mathcal {G} _ {t _ {i}}, t _ {i}\right) - \nabla_ {\mathbf {A}} \log \mathbb {P} \left(\mathcal {G} _ {t _ {i}}\right) \right\| ^ {2} d t \tag {C.4} \\ + \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \left(\mathbb {E} \| \nabla_ {\mathbf {A}} \log \mathbb {P} _ {t} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t _ {i}}) \| ^ {2} + \mathbb {E} \| \mathbf {A} _ {t} - \mathbf {A} _ {t _ {i}} \| ^ {2}\right) d t, \\ \end{array}
+$$
+
+Comparing the decompositions of the Euler-Maruyama and exponential integrator schemes, we see that the only difference is the extra term
+
+$$
+\sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \mathbb {E} \| \mathbf {A} _ {t} - \mathbf {A} _ {t _ {i}} \| ^ {2} \mathrm {d} t
+$$
+
+For this, we can use Lemma B.6 and immediately get that
+
+$$
+\mathbb {E} \| \mathbf {A} _ {t} - \mathbf {A} _ {t _ {i}} \| ^ {2} \leq N ^ {2} (t _ {i} - t) + H _ {\mathbf {A}} (t _ {i} - t) ^ {2}, \quad t _ {i - 1} \leq t \leq t _ {i}
+$$
+
+Substituting this result into the extra term, we have that,
+
+$$
+\begin{array}{l} \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \mathbb {E} \| \mathbf {A} _ {t} - \mathbf {A} _ {t _ {i}} \| ^ {2} \mathrm {d} t \\ \leq \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} N ^ {2} \left(t _ {i} - t\right) + H _ {\mathbf {A}} \left(t _ {i} - t\right) ^ {2} \mathrm {d} t \\ \lesssim \sum_ {i = 1} ^ {M} N ^ {2} \Delta_ {t _ {i}} ^ {2} + H _ {\mathbf {A}} \Delta_ {t _ {i}} ^ {3} \\ \end{array}
+$$
+
+Therefore, combining this result and the result of the exponential integrator scheme, we have,
+
+$$
+\begin{array}{l} \mathrm {K L} (\mathbb {P} (\mathbf {A} _ {0}) \| \widehat {\mathbb {P}} (\mathbf {A} _ {T})) \lesssim \mathrm {K L} (\mathbb {P} (\mathbf {A} _ {T}) \| \boldsymbol {\Pi} _ {\mathbf {A}}) + \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \| s _ {\phi} (\mathcal {G} _ {t _ {i}}, t _ {i}) - \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t _ {i}}) \| ^ {2} \mathrm {d} t \\ + \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \mathbb {E} \| \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {A}} \log \mathbb {P} (\mathcal {G} _ {t _ {i}}) \| ^ {2} d t \\ \lesssim (N ^ {2} + H _ {\mathbf {A}}) e ^ {- T} + T \epsilon_ {\mathbf {A}} ^ {2} + N ^ {2} L ^ {2} \sigma_ {\mathbf {X}} ^ {2} \sum_ {i = 1} ^ {M} \Delta_ {t _ {i}} ^ {2} + N ^ {2} L \sum_ {i = 1} ^ {M} \Delta_ {t _ {i}} ^ {3} + \sum_ {i = 1} ^ {M} N ^ {2} \Delta_ {t _ {i}} ^ {2} + H _ {\mathbf {A}} \Delta_ {t _ {i}} ^ {3} \\ = (N ^ {2} + H _ {\mathbf {A}}) e ^ {- T} + T \epsilon_ {\mathbf {A}} ^ {2} + (N ^ {2} L ^ {2} \sigma_ {\mathbf {X}} ^ {2} + N ^ {2}) \sum_ {i = 1} ^ {M} \Delta_ {t _ {i}} ^ {2} + (N ^ {2} L + H _ {\mathbf {A}}) \sum_ {i = 1} ^ {M} \Delta_ {t _ {i}} ^ {3} \\ \end{array}
+$$
+
+This completes the proof.
+
+
+
+# C.3. Proof of Theorem 4.3
+
+In this section, we present the proof for Theorem 4.3.
+
+Proof of Theorem 4.3. We start by proving the result for the exponential integrator scheme.
+
+Let $\mathbb{P}_0$ be the data distribution and $\widehat{\mathbb{P}}_T$ be the learned distribution obtained through the sampling scheme. By Lemma B.4, we can decompose the KL divergence between $\mathbb{P}_0$ and $\widehat{\mathbb{P}}_T$ as follows,
+
+$$
+\begin{array}{l} \operatorname {K L} \left(\mathbb {P} \left(\mathbf {X} _ {0}\right) \| \mathbb {P} \left(\widehat {\mathbf {X}} _ {T}\right)\right) \lesssim \operatorname {K L} \left(\mathbb {P} \left(\mathbf {X} _ {T}\right) \| \boldsymbol {\Pi} _ {\mathbf {X}}\right) + \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \| s _ {\boldsymbol {\theta}} \left(\mathcal {G} _ {t _ {i}}, t _ {i}\right) - \nabla_ {\mathbf {X}} \log \mathbb {P} \left(\mathcal {G} _ {t _ {i}}\right) \| ^ {2} d t \tag {C.5} \\ + \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \mathbb {E} \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t _ {i}}) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) \| ^ {2} d t. \\ \end{array}
+$$
+
+Then, we tackle each term in the equation above.
+
+By Lemma B.5, we can immediately bound the first term as follows,
+
+$$
+\mathrm {K L} \left(\mathbb {P} _ {T} (\mathbf {X}) \| \boldsymbol {\Pi} _ {\mathbf {X}}\right) \lesssim (N F + H _ {\mathbf {X}}) e ^ {- T}.
+$$
+
+Next, we tackle the second term. By Assumption 3.2, we have that,
+
+$$
+\begin{array}{l} \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \| s _ {\boldsymbol {\theta}} \left(\mathcal {G} _ {t _ {i}}, t _ {i}\right) - \nabla_ {\mathbf {X}} \log \mathbb {P} \left(\mathcal {G} _ {t _ {i}}\right) \| ^ {2} d t \\ = \sum_ {i = 1} ^ {M} \Delta_ {t _ {i}} \| s _ {\boldsymbol {\theta}} \left(\mathcal {G} _ {t _ {i}}, t _ {i}\right) - \nabla_ {\mathbf {X}} \log \mathbb {P} \left(\mathcal {G} _ {t _ {i}}\right) \| ^ {2} \\ \leq T \epsilon_ {\mathbf {X}} ^ {2} \\ \end{array}
+$$
+
+Therefore, it remains to tackle the last term. By Lemma B.9, we immediately get that
+
+$$
+\begin{array}{l} \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \mathbb {E} \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t _ {i}}) \| ^ {2} d t \\ \lesssim \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} N F L ^ {2} \left(t _ {i} - t\right) + N F L \left(t _ {i} - t\right) ^ {2} \mathrm {d} t \\ \lesssim N F L ^ {2} \sum_ {i = 1} ^ {M} \Delta_ {t _ {i}} ^ {2} + N F L \sum_ {i = 1} ^ {M} \Delta_ {t _ {i}} ^ {3} \\ \end{array}
+$$
+
+Combining all the results above, we get that,
+
+$$
+\begin{array}{l} \operatorname {K L} \left(\mathbb {P} \left(\mathbf {X} _ {0}\right) \| \mathbb {P} \left(\widehat {\mathbf {X}} _ {T}\right)\right) \lesssim \operatorname {K L} \left(\mathbb {P} \left(\mathbf {X} _ {T}\right) \| \boldsymbol {\Pi} _ {\mathbf {X}}\right) + \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \left\| s _ {\boldsymbol {\theta}} \left(\mathcal {G} _ {t _ {i}}, t _ {i}\right) - \nabla_ {\mathbf {X}} \log \mathbb {P} \left(\mathcal {G} _ {t _ {i}}\right) \right\| ^ {2} d t \\ + \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \mathbb {E} \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t _ {i}}) \| ^ {2} d t \\ \lesssim (N F + H _ {\mathbf {X}}) e ^ {- T} + T \epsilon_ {\mathbf {X}} ^ {2} + N F L ^ {2} \sum_ {i = 1} ^ {M} \Delta_ {t _ {i}} ^ {2} + N F L \sum_ {i = 1} ^ {M} \Delta_ {t _ {i}} ^ {3} \\ = (N F + H _ {\mathbf {X}}) e ^ {- T} + T \epsilon_ {\mathbf {X}} ^ {2} + N F L \left(L \sum_ {i = 1} ^ {M} \Delta_ {t _ {i}} ^ {2} + \sum_ {i = 1} ^ {M} \Delta_ {t _ {i}} ^ {3}\right) \\ \end{array}
+$$
+
+This completes the derivation for the exponential integrator scheme.
+
+Again, from Lemma B.4, we have that the decomposition of the convergence bound of the Euler-Maruyama scheme is given by,
+
+$$
+\begin{array}{l} \operatorname {K L} \left(\mathbb {P} \left(\mathbf {X} _ {0}\right) \| \mathbb {P} \left(\widehat {\mathbf {X}} _ {T}\right)\right) \lesssim \operatorname {K L} \left(\mathbb {P} \left(\mathbf {X} _ {T}\right) \| \boldsymbol {\Pi} _ {\mathbf {X}}\right) + \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \left\| s _ {\boldsymbol {\theta}} \left(\mathcal {G} _ {t _ {i}}, t _ {i}\right) - \nabla_ {\mathbf {X}} \log \mathbb {P} \left(\mathcal {G} _ {t _ {i}}\right) \right\| ^ {2} d t \tag {C.6} \\ + \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \left(\mathbb {E} \| \nabla_ {\mathbf {X}} \log \mathbb {P} _ {t} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t _ {i}}) \| ^ {2} + \mathbb {E} \| \mathbf {X} _ {t} - \mathbf {X} _ {t _ {i}} \| ^ {2}\right) d t, \\ \end{array}
+$$
+
+Comparing the decompositions of the Euler-Maruyama and exponential integrator schemes, we see that the only difference is the extra term
+
+$$
+\sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \mathbb {E} \| \mathbf {X} _ {t} - \mathbf {X} _ {t _ {i}} \| ^ {2} \mathrm {d} t
+$$
+
+For this, we can use Lemma B.6 and immediately get that
+
+$$
+\mathbb {E} \| \mathbf {X} _ {t} - \mathbf {X} _ {t _ {i}} \| ^ {2} \leq N F (t _ {i} - t) + H _ {\mathbf {X}} (t _ {i} - t) ^ {2}, \quad t _ {i - 1} \leq t \leq t _ {i}
+$$
+
+Substituting this result into the extra term, we have that,
+
+$$
+\begin{array}{l} \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \mathbb {E} \| \mathbf {X} _ {t} - \mathbf {X} _ {t _ {i}} \| ^ {2} \mathrm {d} t \\ \leq \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} N F (t _ {i} - t) + H _ {\mathbf {X}} (t _ {i} - t) ^ {2} \mathrm {d} t \\ \lesssim \sum_ {i = 1} ^ {M} N F \Delta_ {t _ {i}} ^ {2} + H _ {\mathbf {X}} \Delta_ {t _ {i}} ^ {3} \\ \end{array}
+$$
+
+Therefore, combining this result and the result of the exponential integrator scheme, we have,
+
+$$
+\begin{array}{l} \mathrm {K L} (\mathbb {P} (\mathbf {X} _ {0}) \| \widehat {\mathbb {P}} (\mathbf {X} _ {T})) \lesssim \mathrm {K L} (\mathbb {P} (\mathbf {X} _ {T}) \| \boldsymbol {\Pi} _ {\mathbf {X}}) + \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \| s _ {\boldsymbol {\theta}} \left(\mathcal {G} _ {t _ {i}}, t _ {i}\right) - \nabla_ {\mathbf {X}} \log \mathbb {P} \left(\mathcal {G} _ {t _ {i}}\right) \| ^ {2} \mathrm {d} t \\ + \sum_ {i = 1} ^ {M} \int_ {t _ {i - 1}} ^ {t _ {i}} \mathbb {E} \| \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t}) - \nabla_ {\mathbf {X}} \log \mathbb {P} (\mathcal {G} _ {t _ {i}}) \| ^ {2} d t \\ \lesssim (N F + H _ {\mathbf {X}}) e ^ {- T} + T \epsilon_ {\mathbf {X}} ^ {2} + N F L ^ {2} \sum_ {i = 1} ^ {M} \Delta_ {t _ {i}} ^ {2} + N F L \sum_ {i = 1} ^ {M} \Delta_ {t _ {i}} ^ {3} + \sum_ {i = 1} ^ {M} N F \Delta_ {t _ {i}} ^ {2} + H _ {\mathbf {X}} \Delta_ {t _ {i}} ^ {3} \\ \end{array}
+$$
+
+This completes the proof.
+
+The derivation for the structure matrix $\mathbf{A}$ is similar and is therefore omitted here.
+
+# D. Additional Results
+
+In this appendix, we present additional results on the selection of hyper-parameters.
+
+Corollary D.1. Suppose the discretization step is uniform, $\Delta t_{i} = T / M\leq 1,\forall i\in \{1,\dots,M\}$ . Then the convergence results of SGGMs under the exponential integrator scheme are given by
+
+$$
+\mathrm {K L} (\mathbb {P} (\mathbf {X} _ {0}) \| \mathbb {P} (\widehat {\mathbf {X}} _ {T})) \lesssim (H _ {\mathbf {X}} + N F) e ^ {- T} + T \epsilon_ {\mathbf {X}} ^ {2} + \frac {N F L ^ {2} T ^ {2}}{M},
+$$
+
+$$
+\mathrm {K L} (\mathbb {P} (\mathbf {A} _ {0}) \| \mathbb {P} (\widehat {\mathbf {A}} _ {T})) \lesssim (H _ {\mathbf {A}} + N ^ {2}) e ^ {- T} + T \epsilon_ {\mathbf {A}} ^ {2} + \frac {N ^ {2} L ^ {2} T ^ {2}}{M}.
+$$
+
+Furthermore, taking
+
+$$
+T = \max \left\{\log \left(\frac {H _ {\mathbf {X}} + N F}{\epsilon_ {\mathbf {X}} ^ {2}}\right), \log \left(\frac {H _ {\mathbf {A}} + N ^ {2}}{\epsilon_ {\mathbf {A}} ^ {2}}\right) \right\},
+$$
+
+$$
+M = \max \left\{\frac {N F L ^ {2} T ^ {2}}{\epsilon_ {\bf X} ^ {2}}, \frac {N ^ {2} L ^ {2} T ^ {2}}{\epsilon_ {\bf A} ^ {2}} \right\},
+$$
+
+we have that the overall generative error of SGGMs is bounded by the score estimation errors, i.e.,
+
+$$
+\mathrm {K L} (\mathbb {P} (\mathbf {X} _ {0}) \| \mathbb {P} (\widehat {\mathbf {X}} _ {T})) \lesssim \epsilon_ {\mathbf {X}} ^ {2}, \quad \mathrm {K L} (\mathbb {P} (\mathbf {A} _ {0}) \| \mathbb {P} (\widehat {\mathbf {A}} _ {T})) \lesssim \epsilon_ {\mathbf {A}} ^ {2}.
+$$
+
+Proof. By Theorem 4.3, we have that,
+
+$$
+\begin{array}{l} \operatorname {K L} \left(\mathbb {P} \left(\mathbf {X} _ {0}\right) \| \mathbb {P} \left(\hat {\mathbf {X}} _ {T}\right)\right) \lesssim \left(H _ {\mathbf {X}} + N F\right) e ^ {- T} + T \epsilon_ {\mathbf {X}} ^ {2} \\ + N F L \left(L \sum_ {i = 1} ^ {M} \Delta t _ {i} ^ {2} + \sum_ {i = 1} ^ {M} \Delta t _ {i} ^ {3}\right), \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \operatorname {K L} \left(\mathbb {P} \left(\mathbf {A} _ {0}\right) \| \widehat {\mathbb {P}} _ {T} (\mathbf {A})\right) \lesssim \left(H _ {\mathbf {A}} + N ^ {2}\right) e ^ {- T} + T \epsilon_ {\mathbf {A}} ^ {2} \\ + N ^ {2} L \left(L \sum_ {i = 1} ^ {M} \Delta t _ {i} ^ {2} + \sum_ {i = 1} ^ {M} \Delta t _ {i} ^ {3}\right) \tag {D.1} \\ \end{array}
+$$
+
+By the premise, we have that $\Delta t_i \leq 1$ . This means that
+
+$$
+\Delta t _ {i} ^ {2} \geq \Delta t _ {i} ^ {3}
+$$
+
+This means that we can absorb the cubic term into the quadratic term at the cost of a larger constant. This leads to:
+
+$$
+\begin{array}{l} \operatorname {K L} \left(\mathbb {P} \left(\mathbf {X} _ {0}\right) \| \mathbb {P} \left(\hat {\mathbf {X}} _ {T}\right)\right) \lesssim \left(H _ {\mathbf {X}} + N F\right) e ^ {- T} + T \epsilon_ {\mathbf {X}} ^ {2} \\ + N F L \left(L \sum_ {i = 1} ^ {M} \Delta t _ {i} ^ {2}\right), \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \operatorname {K L} \left(\mathbb {P} \left(\mathbf {A} _ {0}\right) \| \widehat {\mathbb {P}} _ {T} (\mathbf {A})\right) \lesssim \left(H _ {\mathbf {A}} + N ^ {2}\right) e ^ {- T} + T \epsilon_ {\mathbf {A}} ^ {2} \\ + N ^ {2} L \left(L \sum_ {i = 1} ^ {M} \Delta t _ {i} ^ {2}\right) \tag {D.2} \\ \end{array}
+$$
+
+Then, replacing $\Delta t_{i}$ with $T / M$ , we get,
+
+$$
+\begin{array}{l} \operatorname {K L} \left(\mathbb {P} \left(\mathbf {X} _ {0}\right) \| \mathbb {P} \left(\hat {\mathbf {X}} _ {T}\right)\right) \lesssim \left(H _ {\mathbf {X}} + N F\right) e ^ {- T} + T \epsilon_ {\mathbf {X}} ^ {2} \\ + N F L \left(L T ^ {2} / M\right), \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \operatorname {K L} \left(\mathbb {P} \left(\mathbf {A} _ {0}\right) \| \widehat {\mathbb {P}} _ {T} (\mathbf {A})\right) \lesssim \left(H _ {\mathbf {A}} + N ^ {2}\right) e ^ {- T} + T \epsilon_ {\mathbf {A}} ^ {2} \\ + N ^ {2} L \left(L T ^ {2} / M\right) \tag {D.3} \\ \end{array}
+$$
+
+With some algebra, we arrive at
+
+$$
+\begin{array}{l} \mathrm {K L} (\mathbb {P} (\mathbf {X} _ {0}) \| \mathbb {P} (\widehat {\mathbf {X}} _ {T})) \lesssim (H _ {\mathbf {X}} + N F) e ^ {- T} + T \epsilon_ {\mathbf {X}} ^ {2} + \frac {N F L ^ {2} T ^ {2}}{M}, \\ \mathrm {K L} (\mathbb {P} (\mathbf {A} _ {0}) \| \mathbb {P} (\widehat {\mathbf {A}} _ {T})) \lesssim (H _ {\mathbf {A}} + N ^ {2}) e ^ {- T} + T \epsilon_ {\mathbf {A}} ^ {2} + \frac {N ^ {2} L ^ {2} T ^ {2}}{M}. \\ \end{array}
+$$
+
+Next, we substitute in our choice of $T$ and $M$ :
+
+$$
+\begin{array}{l} T = \max \left\{\log \left(\frac {H _ {\mathbf {X}} + N F}{\epsilon_ {\mathbf {X}} ^ {2}}\right), \log \left(\frac {H _ {\mathbf {A}} + N ^ {2}}{\epsilon_ {\mathbf {A}} ^ {2}}\right) \right\}, \\ M = \max \left\{\frac {N F L ^ {2} T ^ {2}}{\epsilon_ {\mathbf {X}} ^ {2}}, \frac {N ^ {2} L ^ {2} T ^ {2}}{\epsilon_ {\mathbf {A}} ^ {2}} \right\}, \\ \end{array}
+$$
+
+It is easy to check that the overall generative error of SGGMs is bounded by the score estimation errors, i.e.,
+
+$$
+\begin{array}{l} \operatorname {K L} \left(\mathbb {P} \left(\mathbf {X} _ {0}\right) \| \mathbb {P} \left(\widehat {\mathbf {X}} _ {T}\right)\right) \lesssim \epsilon_ {\mathbf {X}} ^ {2}, \\ \operatorname {K L} \left(\mathbb {P} \left(\mathbf {A} _ {0}\right) \| \mathbb {P} \left(\widehat {\mathbf {A}} _ {T}\right)\right) \lesssim \epsilon_ {\mathbf {A}} ^ {2}. \\ \end{array}
+$$
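To spell out the "easy to check" step for the $\mathbf{X}$ bound (the $\mathbf{A}$ bound is analogous, with $N^2$ in place of $NF$ and $H_{\mathbf{A}}$ in place of $H_{\mathbf{X}}$ ): the chosen $T$ and $M$ make the initialization and discretization terms at most $\epsilon_{\mathbf{X}}^2$ ,

$$
(H _ {\mathbf {X}} + N F) e ^ {- T} \leq (H _ {\mathbf {X}} + N F) \cdot \frac {\epsilon_ {\mathbf {X}} ^ {2}}{H _ {\mathbf {X}} + N F} = \epsilon_ {\mathbf {X}} ^ {2}, \qquad \frac {N F L ^ {2} T ^ {2}}{M} \leq \frac {N F L ^ {2} T ^ {2}}{N F L ^ {2} T ^ {2} / \epsilon_ {\mathbf {X}} ^ {2}} = \epsilon_ {\mathbf {X}} ^ {2},
$$

while the remaining term $T\epsilon_{\mathbf{X}}^{2}$ contributes only a logarithmic factor, which the $\lesssim$ notation absorbs.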
+
+
+
+Similarly, we have the following result for the Euler-Maruyama scheme; we omit the proof due to its similarity.
+
+Corollary D.2. Suppose the discretization step is uniform, $\Delta t_{i} = T / M\leq 1,\forall i\in \{1,\dots,M\}$ . Then the convergence results of SGGMs under the Euler-Maruyama scheme are given by
+
+$$
+\operatorname {K L} (\mathbb {P} (\mathbf {X} _ {0}) \| \mathbb {P} (\widehat {\mathbf {X}} _ {T})) \lesssim (H _ {\mathbf {X}} + N F) e ^ {- T} + T \epsilon_ {\mathbf {X}} ^ {2} + \frac {N F L ^ {2} T ^ {2}}{M},
+$$
+
+$$
+\mathrm {K L} (\mathbb {P} (\mathbf {A} _ {0}) \| \mathbb {P} (\widehat {\mathbf {A}} _ {T})) \lesssim (H _ {\mathbf {A}} + N ^ {2}) e ^ {- T} + T \epsilon_ {\mathbf {A}} ^ {2} + \frac {N ^ {2} L ^ {2} T ^ {2}}{M}.
+$$
+
+Furthermore, taking
+
+$$
+T = \max \left\{\log \left(\frac {H _ {\mathbf {X}} + N F}{\epsilon_ {\mathbf {X}} ^ {2}}\right), \log \left(\frac {H _ {\mathbf {A}} + N ^ {2}}{\epsilon_ {\mathbf {A}} ^ {2}}\right) \right\},
+$$
+
+$$
+M = \max \left\{\frac {N F L ^ {2} T ^ {2}}{\epsilon_ {\bf X} ^ {2}}, \frac {N ^ {2} L ^ {2} T ^ {2}}{\epsilon_ {\bf A} ^ {2}} \right\},
+$$
+
+we have that the overall generative error of SGGMs is bounded by the score estimation errors, i.e.,
+
+$$
+\operatorname {K L} (\mathbb {P} (\mathbf {X} _ {0}) \| \mathbb {P} (\widehat {\mathbf {X}} _ {T})) \lesssim \epsilon_ {\mathbf {X}} ^ {2}, \quad \operatorname {K L} (\mathbb {P} (\mathbf {A} _ {0}) \| \mathbb {P} (\widehat {\mathbf {A}} _ {T})) \lesssim \epsilon_ {\mathbf {A}} ^ {2}.
+$$
+
+# D.1. Bound with Other Measures
+
+In the context of generative models and their convergence properties, it is often useful to bound the discrepancy between two probability distributions $\mathbb{P}$ and $\mathbb{Q}$ using different metrics. Among the most commonly used divergence measures are the Kullback-Leibler (KL) divergence, the Wasserstein distance, and the total variation (TV) distance. In this paper, we adopt the KL divergence for the analysis because of its direct relation to the training objectives. In this section, we discuss how our KL-divergence results also immediately imply bounds in terms of the other two measures. We start by providing a brief introduction to each metric.
+
+The Kullback-Leibler (KL) divergence is defined as:
+
+$$
+\operatorname {K L} (\mathbb {P} \| \mathbb {Q}) = \mathbb {E} _ {\mathbb {P}} \left[ \log \frac {\mathrm {d} \mathbb {P}}{\mathrm {d} \mathbb {Q}} \right],
+$$
+
+It measures the expected logarithmic difference between the densities of $\mathbb{P}$ and $\mathbb{Q}$ . This divergence is widely used in variational inference and optimization because of its convenient properties, such as non-negativity and the fact that it is zero if and only if $\mathbb{P} = \mathbb{Q}$ . However, KL divergence is not symmetric and does not satisfy the triangle inequality, making it less suitable as a distance metric in certain contexts.
+
+In contrast, the Wasserstein distance is defined as:
+
+$$
+\mathrm {W} _ {p} (\mathbb {P}, \mathbb {Q}) = \inf _ {\gamma \in \Gamma (\mathbb {P}, \mathbb {Q})} \left(\mathbb {E} _ {(\mathbf {X}, \mathbf {Y}) \sim \gamma} \| \mathbf {X} - \mathbf {Y} \| ^ {p}\right) ^ {1 / p}.
+$$
+
+It measures the "cost" of transforming one distribution into another by considering the optimal transport plan $\gamma$ that minimizes this cost. It is a more intuitive and geometrically meaningful metric compared to KL divergence, especially in high-dimensional spaces. Wasserstein distance has the advantage of being symmetric and satisfying the triangle inequality, which makes it a true metric. When $p = 1$ , the Wasserstein distance is often referred to as the Earth Mover's Distance (EMD).
+
+Total variation (TV) distance is another useful metric defined as:
+
+$$
+\operatorname {T V} (\mathbb {P}, \mathbb {Q}) = \frac {1}{2} \int | \mathbb {P} (x) - \mathbb {Q} (x) | d x,
+$$
+
+which quantifies the maximum difference between the probabilities assigned to the same event by two distributions. TV distance is symmetric and satisfies the triangle inequality, making it a true distance metric. It is tightly connected to other divergences like KL divergence and can be bounded in terms of them.
+
+# D.1.1. RELATIONSHIP BETWEEN KL DIVERGENCE, WASSERSTEIN DISTANCE, AND TV DISTANCE
+
+It is known that the total variation distance and the Wasserstein distance can be bounded in terms of the KL divergence. Specifically, the following inequalities provide useful bounds:
+
+- KL Divergence and Total Variation Distance:
+
+$$
+\mathrm {K L} (\mathbb {P} \| \mathbb {Q}) \geq 2 \mathrm {T V} (\mathbb {P}, \mathbb {Q}) ^ {2}.
+$$
+
+The result above is known as Pinsker's inequality. It shows that the KL divergence is at least as large as the square of the TV distance, up to a constant factor.
+
+- Wasserstein Distance and Total Variation Distance:
+
+$$
+\mathrm {W} _ {1} (\mathbb {P}, \mathbb {Q}) \lesssim \mathrm {T V} (\mathbb {P}, \mathbb {Q}),
+$$
+
+where $\mathrm{W}_1(\mathbb{P},\mathbb{Q})$ is the Wasserstein distance with $p = 1$ . The relation above holds on bounded metric spaces. Then, combining it with Pinsker's inequality, we get that $\mathrm{W}_1(\mathbb{P},\mathbb{Q}) \lesssim \sqrt{\mathrm{KL}(\mathbb{P}\| \mathbb{Q})}$ .
+
+The inequalities above show that our KL-divergence bounds in this paper immediately imply bounds in the other two commonly used measures as well, at least under the stated conditions.
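As a numerical sanity check of these relationships for discrete distributions on the bounded space $[0,1]$ (the grid, distributions, and helper names below are illustrative, not from the paper):

```python
import math

def kl(p, q):
    """KL(P||Q) for discrete distributions given as probability vectors."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def tv(p, q):
    """Total variation distance: half the L1 distance between the vectors."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

def w1(p, q, xs):
    """1-D Wasserstein-1 via the CDF formula: integral of |F_P - F_Q|."""
    total, cp, cq = 0.0, 0.0, 0.0
    for i in range(len(xs) - 1):
        cp += p[i]
        cq += q[i]
        total += abs(cp - cq) * (xs[i + 1] - xs[i])
    return total

xs = [0.0, 0.5, 1.0]          # support inside [0, 1], so the diameter is 1
p = [0.5, 0.3, 0.2]
q = [0.2, 0.5, 0.3]

assert kl(p, q) >= 2 * tv(p, q) ** 2            # Pinsker's inequality
assert w1(p, q, xs) <= tv(p, q) * 1.0           # W1 <= diam * TV on bounded spaces
assert w1(p, q, xs) <= math.sqrt(kl(p, q) / 2)  # hence W1 <= sqrt(KL / 2)
```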
+
+# E. Experiment Details
+
+In this appendix, we provide additional details on our experimental study. This section elaborates on the testbed setup, graph generation model, the implementation of score-based graph generative models (SGGMs), and the hyperparameter search.
+
+# E.1. Testbed
+
+Our experiments were conducted on a Dell PowerEdge C4140 server. The key specifications of this server, relevant to our research, are as follows:
+
+CPU: Dual Intel Xeon Gold 6230 processors, each offering 20 cores and 40 threads.
+
+GPU: Four NVIDIA Tesla V100 SXM2 units, each equipped with 32GB of memory, with NVLink support.
+
+Memory: A total of 256GB RAM, distributed across eight 32GB RDIMM modules.
+
+Storage: Dual 1.92TB SSDs with a 6Gbps SATA interface.
+
+Networking: Dual 1Gbps NICs and a Mellanox ConnectX-5 EX Dual Port 40/100GbE QSFP28 Adapter with GPUDirect support.
+
+Operating System: Ubuntu 18.04 LTS.
+
+# E.2. Graph Generation Model
+
+For our experiments, we primarily utilize the NetworkX Python library (Hagberg et al., 2008). Specifically, we use the default implementations of regular graph generation and the Barabási-Albert model (Pósfai & Barabási, 2016) provided by NetworkX. The Barabási-Albert (BA) model is widely used to generate scale-free networks characterized by a power-law degree distribution. The core idea behind the BA model is preferential attachment, where new nodes are more likely to connect to existing nodes with higher degrees. This mechanism mirrors real-world networks, such as social networks, where popular individuals (nodes) tend to attract more connections.
+
+Procedure of the Barabási-Albert Model. The Barabási-Albert model generates a network through the following steps:
+
+1. Initialization: Start with a small connected network of $m_0$ nodes.
+2. Growth: Add one new node at a time. Each new node forms $m$ edges connecting it to $m$ existing nodes.
+
+3. Preferential Attachment: The probability that a new node connects to an existing node $i$ is proportional to the degree of node $i$ . Formally, the probability $P(i)$ that the new node connects to node $i$ is given by:
+
+$$
+P (i) = \frac {k _ {i}}{\sum_ {j} k _ {j}}
+$$
+
+where $k_{i}$ is the degree of node $i$ , and the sum is taken over all existing nodes.
+
+Hyperparameters. The BA model has two key hyperparameters:
+
+- $\mathbf{m}_0$ : The initial number of nodes in the network.
+- m : The number of edges each new node adds when it is introduced to the network. This parameter influences the density and structure of the resulting network.
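The procedure above can be sketched directly. The following is a hedged, minimal re-implementation of preferential attachment (not the NetworkX routine used in our experiments); the repeated-endpoint pool makes a uniform draw equivalent to sampling node $i$ with probability $k_i / \sum_j k_j$ :

```python
import random

def barabasi_albert(n, m, m0=None, seed=0):
    """Toy BA generator; returns adjacency as {node: set of neighbours}."""
    rng = random.Random(seed)
    m0 = m0 if m0 is not None else m + 1
    adj = {v: set() for v in range(n)}
    pool = []  # each node appears once per unit of degree
    for v in range(m0 - 1):  # initial connected seed: a path on m0 nodes
        adj[v].add(v + 1)
        adj[v + 1].add(v)
        pool += [v, v + 1]
    for new in range(m0, n):  # growth: one node, m edges at a time
        targets = set()
        while len(targets) < m:
            # uniform draw from pool == attach with probability k_i / sum_j k_j
            targets.add(rng.choice(pool))
        for t in targets:
            adj[new].add(t)
            adj[t].add(new)
            pool += [new, t]
    return adj

g = barabasi_albert(200, 2, m0=3)
# (m0 - 1) seed edges plus m edges per new node
assert sum(len(s) for s in g.values()) == 2 * ((3 - 1) + 2 * (200 - 3))
```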
+
+# E.3. Score Networks and Diffusion
+
+In the experiments, we use a simple one-layer Graph Convolutional Network (GCN) as the score network. However, to ensure sufficient capacity for learning, we use two-layer MLPs (multi-layer perceptrons) preceding the GCN layer. The hidden layer size is set to 500 for all experiments. For the diffusion process, we set the diffusion length $T = 50$ to ensure sufficient diffusion, and we use a uniform step size of 0.1 for the sampling scheme. For simplicity, our experiments focus on the Euler-Maruyama scheme, and early stopping is employed to prevent variance explosion near the end.
+
+# E.4. Hyperparameter Search for Learning Algorithm
+
+To train the SGGMs, we use the widely adopted Adam optimizer (Kingma & Ba, 2014). A simple parameter search is performed for the learning rate, testing values from the set [0.1, 0.01, 0.001, 0.0001]. The implementation of the Adam algorithm is provided by the PyTorch library.
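The sweep can be sketched as follows; the scalar Adam update and the toy quadratic objective are illustrative stand-ins for the actual PyTorch training loop, using Adam's common default moment coefficients:

```python
def adam_minimize(lr, steps=200):
    """Minimize the toy loss (theta - 3)^2 with a scalar Adam update."""
    theta, m, v = 0.0, 0.0, 0.0
    b1, b2, eps = 0.9, 0.999, 1e-8          # common Adam defaults
    for t in range(1, steps + 1):
        g = 2.0 * (theta - 3.0)             # gradient of the toy loss
        m = b1 * m + (1 - b1) * g           # first-moment estimate
        v = b2 * v + (1 - b2) * g * g       # second-moment estimate
        m_hat = m / (1 - b1 ** t)           # bias corrections
        v_hat = v / (1 - b2 ** t)
        theta -= lr * m_hat / (v_hat ** 0.5 + eps)
    return (theta - 3.0) ** 2               # final loss

grid = [0.1, 0.01, 0.001, 0.0001]           # the searched values
best_lr = min(grid, key=adam_minimize)      # keep the lr with the lowest loss
```

On this toy objective the smaller learning rates cannot travel far enough in the step budget, so the coarsest value wins; on real data the selection is of course made on validation loss.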
\ No newline at end of file
diff --git a/anonasymptoticconvergentanalysisforscoredbasedgraphgenerativemodelviaasystemofstochasticdifferentialequations/images.zip b/anonasymptoticconvergentanalysisforscoredbasedgraphgenerativemodelviaasystemofstochasticdifferentialequations/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..00030d76e6831b727b299fc61c4fbfc4c5f65ab3
--- /dev/null
+++ b/anonasymptoticconvergentanalysisforscoredbasedgraphgenerativemodelviaasystemofstochasticdifferentialequations/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:18803f37d367c80ab77c2929d96fa839ef5f6240f38230dfd1471cb5f0533389
+size 2679018
diff --git a/anonasymptoticconvergentanalysisforscoredbasedgraphgenerativemodelviaasystemofstochasticdifferentialequations/layout.json b/anonasymptoticconvergentanalysisforscoredbasedgraphgenerativemodelviaasystemofstochasticdifferentialequations/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..51c3105e3f8d3b4a69a6c70b9eb3ea3ece856d2a
--- /dev/null
+++ b/anonasymptoticconvergentanalysisforscoredbasedgraphgenerativemodelviaasystemofstochasticdifferentialequations/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2f969da11d677ba8b9dc2f3626f2a9caabc9762d35889e5b7d13296a6253ec7c
+size 1385246
diff --git a/anonisotropictimeseriesdiffusionmodelwithmovingaveragetransitions/4cd00291-d09c-4b5a-8784-84a2c2c4f316_content_list.json b/anonisotropictimeseriesdiffusionmodelwithmovingaveragetransitions/4cd00291-d09c-4b5a-8784-84a2c2c4f316_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..342e11d19b621b01c5b7b58bc63165d1dcbd3e94
--- /dev/null
+++ b/anonisotropictimeseriesdiffusionmodelwithmovingaveragetransitions/4cd00291-d09c-4b5a-8784-84a2c2c4f316_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ea3fadcc21be7e331fbd286b8b5fb0f76a662783bc05146ae21e09f97fa3840c
+size 155169
diff --git a/anonisotropictimeseriesdiffusionmodelwithmovingaveragetransitions/4cd00291-d09c-4b5a-8784-84a2c2c4f316_model.json b/anonisotropictimeseriesdiffusionmodelwithmovingaveragetransitions/4cd00291-d09c-4b5a-8784-84a2c2c4f316_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..4c7969de5b080166d67f3557792b312a13cd5dcd
--- /dev/null
+++ b/anonisotropictimeseriesdiffusionmodelwithmovingaveragetransitions/4cd00291-d09c-4b5a-8784-84a2c2c4f316_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:81f9c435baa70ced8d12cb92284ba96d77921ec7abe017de313221a51d717168
+size 175954
diff --git a/anonisotropictimeseriesdiffusionmodelwithmovingaveragetransitions/4cd00291-d09c-4b5a-8784-84a2c2c4f316_origin.pdf b/anonisotropictimeseriesdiffusionmodelwithmovingaveragetransitions/4cd00291-d09c-4b5a-8784-84a2c2c4f316_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..93064cc8b03372e1133f11004395c7a7ff28b548
--- /dev/null
+++ b/anonisotropictimeseriesdiffusionmodelwithmovingaveragetransitions/4cd00291-d09c-4b5a-8784-84a2c2c4f316_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5f4be33aaf5b0fc9b3dd42c2e1bd3a79048843c44b682969f15cb6d68ecd1cbb
+size 1731551
diff --git a/anonisotropictimeseriesdiffusionmodelwithmovingaveragetransitions/full.md b/anonisotropictimeseriesdiffusionmodelwithmovingaveragetransitions/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..ac547de8dca2b6bad6ab682fdb0131d10f646e7a
--- /dev/null
+++ b/anonisotropictimeseriesdiffusionmodelwithmovingaveragetransitions/full.md
@@ -0,0 +1,655 @@
+# A Non-isotropic Time Series Diffusion Model with Moving Average Transitions
+
+Chenxi Wang $^{1,2}$ Linxiao Yang $^{2}$ Zhixian Wang $^{1,2}$ Liang Sun $^{2}$ Yi Wang
+
+# Abstract
+
+Diffusion models, known for their generative ability, have recently been adapted to time series analysis. Most pioneering works rely on standard isotropic diffusion, treating each time step and the entire frequency spectrum identically. However, this may not be suitable for time series, which often have more informative low-frequency components. We empirically found that directly applying standard diffusion to time series may cause gradient contradictions during training, due to the rapid decrease of low-frequency information in the diffusion process. To this end, we propose a novel time series diffusion model, MA-TSD, which utilizes the moving average, a natural low-frequency filter, as the forward transition. Its backward process is accelerable like DDIM's and can further be viewed as time series super-resolution. Our experiments on various datasets demonstrate MA-TSD's superior performance on time series forecasting and super-resolution tasks.
+
+# 1. Introduction
+
+Time series data is widely adopted in the real world. Extensive examples include electricity consumption in power systems, stock prices in financial markets, traffic flows in transportation systems, etc. Over the past decade, remarkable time series models have been developed with versatile deep neural networks to perform various time series analyses (Wang et al., 2024).
+
+In recent years, the diffusion model (Ho et al., 2020) has emerged as a prominent generative model, showing remarkable performance in image and video synthesis. Such superiority in modeling complex data distributions has also driven the community to seek how to adapt it to time series, and thus empower
+
+$^{1}$ Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR $^{2}$ DAMO Academy, Alibaba Group, Hangzhou, China. Correspondence to: Yi Wang .
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+time series analysis (Yang et al., 2024). So far, pioneering works have accommodated the diffusion model for time series forecasting (Rasul et al., 2021; Shen & Kwok, 2023; Li et al., 2022), missing value imputation (Tashiro et al., 2021; Alcaraz & Strodthoff, 2023), uncertainty quantification (Li et al., 2024), and so on.
+
+Despite the initial success of these works, most of them still rely on the classical standard isotropic diffusion model, namely the Denoising Diffusion Probabilistic Model (DDPM) (Ho et al., 2020). It treats each time step independently and applies the same diffusion schedule. In the frequency domain, both low- and high-frequency components are also degraded identically (see Figure 1). However, low-frequency components are usually more informative than high-frequency ones in time series analysis (Xu et al., 2024). We found that decreasing the low-frequency components identically to the high-frequency ones during the diffusion process may cause a drastic reduction of the essential time series information. It may further lead to contradictions in the gradient directions of DDPM at different diffusion steps, impeding training convergence (see Section 3). Therefore, it is inappropriate to handle all frequencies with the same diffusion process, and the classical design of DDPM does not fully fit the inductive bias of time series data.
+
+To tackle such inequality in the frequency domain of time series, we utilize the moving average operation, a natural low-pass filter, to build a non-isotropic time series diffusion model, Moving Average Time Series Diffusion (MA-TSD). In the forward process, moving averages with small-to-large kernel sizes gradually coarsen the time series until only the zero-frequency component remains. The corresponding transition matrices are no longer diagonal as in standard diffusion models, and a dataset-based noise schedule is provided alongside. Similar to Denoising Diffusion Implicit Models (DDIM) (Song et al., 2021a), we give an accelerable backward process with a customized strategy to select backward steps. Naturally, with this coarse-to-fine philosophy, the backward process of MA-TSD can also be viewed as time series super-resolution. Empirically, we show on diverse datasets that MA-TSD achieves outstanding performance on time series forecasting and super-resolution tasks.
+
+Contributions: 1) We empirically disclosed the training
+
+
+Figure 1. DDPM versus our proposed moving average diffusion process. Visualization relies on reparameterization, i.e. $\pmb{x}_t = \pmb{K}_t\pmb{x}_0 + \beta_t\pmb{\epsilon}_t$ . Left: Comparison in time domain. Right: Comparison in the frequency domain. For illustration, $\pmb{\epsilon}_t$ is fixed.
+
+issue when directly applying DDPM to time series data, and explored the relationship between the gradient similarity at different diffusion steps and the change of frequency information. 2) We accordingly proposed a novel time series diffusion model with moving average as transition. The backward process can also be naturally considered as time series super-resolution. 3) We conducted extensive experiments to demonstrate our salient performances over existing DDPM-based diffusion models on time series-related tasks like time series forecasting and time series super-resolution.
+
+# 2. Background
+
+Given samples from a data distribution $q(\pmb{x}_0)$ , diffusion models are latent variable models of the form $p_{\theta}(\pmb{x}_0) = \int p_{\theta}(\pmb{x}_{0:T})d\pmb{x}_{1:T}$ , trying to approximate the unknown $q(\pmb{x}_0)$ . The joint distribution is usually modeled as a Markovian chain: $p(\pmb{x}_{0:T}) = p(\pmb{x}_T)\prod_{t = 1}^{T}p_{\theta}(\pmb{x}_{t - 1}|\pmb{x}_t)$ . The latent variables of the diffusion models lie in the same space as the original data, i.e. $\pmb{x}_t\in \mathbb{R}^L,\forall t\in \{0,1,\dots ,T\}$ . In our context, $\pmb{x}_0$ is a time series with $L$ time steps.
+
+The trainable parameters $\theta$ are optimized to minimize the negative variational lower bound of the log-likelihood on the data distribution $q(\pmb{x}_0)$ :
+
+$$
+\min _ {\theta} \mathcal {L} = \mathbb {E} _ {q \left(\boldsymbol {x} _ {0: T}\right)} \left[ \log q \left(\boldsymbol {x} _ {1: T} \mid \boldsymbol {x} _ {0}\right) - \log p _ {\theta} \left(\boldsymbol {x} _ {0: T}\right) \right], \tag {1}
+$$
+
+where the conditional joint $q(\pmb{x}_{1:T}|\pmb{x}_0)$ is the core of diffusion design. The designs of classical DDPM and DDIM will be introduced to pave the way for our proposed method.
+
+# 2.1. Denoising diffusion probabilistic model
+
+In DDPM, the forward process degrades the original data $\pmb{x}_0$ by gradually compressing it and adding Gaussian noise over $T \in \mathbb{N}^+$ diffusion steps, so that all the structure of the original data is lost, i.e. $q(\pmb{x}_T|\pmb{x}_0) \approx \mathcal{N}(\pmb{0},\pmb{I})$ . The
+
+whole process is modeled as a Markovian chain:
+
+$$
+q \left(\boldsymbol {x} _ {1: T} \mid \boldsymbol {x} _ {0}\right) := \prod_ {t = 1} ^ {T} q \left(\boldsymbol {x} _ {t} \mid \boldsymbol {x} _ {t - 1}\right), \tag {2}
+$$
+
+where the one-step transition is given by: $q(\pmb{x}_t|\pmb{x}_{t-1}) \coloneqq \mathcal{N}\left(\sqrt{\alpha_t}\pmb{x}_{t-1}, (1 - \alpha_t)\pmb{I}\right)$ . The coefficient $\alpha_t \in [0,1]$ monotonically decreases with $t$ . Through the property of Gaussian distribution, the transition from $\pmb{x}_0$ to $\pmb{x}_t$ can be derived:
+
+$$
+q \left(\boldsymbol {x} _ {t} \mid \boldsymbol {x} _ {0}\right) = \mathcal {N} \left(\sqrt {\bar {\alpha} _ {t}} \boldsymbol {x} _ {0}, (1 - \bar {\alpha} _ {t}) \boldsymbol {I}\right), \tag {3}
+$$
+
+with the transition coefficient $\bar{\alpha}_t = \prod_{i=1}^t \alpha_i$ . Through Bayes rule and the property of Markovian chain, Equation (2) can be reformulated as:
+
+$$
+q \left(\boldsymbol {x} _ {1: T} \mid \boldsymbol {x} _ {0}\right) = q \left(\boldsymbol {x} _ {T} \mid \boldsymbol {x} _ {0}\right) \prod_ {t = 2} ^ {T} q \left(\boldsymbol {x} _ {t - 1} \mid \boldsymbol {x} _ {t}, \boldsymbol {x} _ {0}\right), \tag {4}
+$$
+
+where the closed form of $q(\pmb{x}_{t-1}|\pmb{x}_t,\pmb{x}_0)$ can be analytically derived by Bayes rule. With the factorization of the backward $p_{\theta}(\pmb{x}_{0:T})$ and the forward $q(\pmb{x}_{1:T}|\pmb{x}_0)$ , the optimization target (Equation (1)) can be simplified. A large part of the simplified optimization target is $\min_{\theta}\mathbb{E}_{q(\pmb{x}_0,\pmb{x}_t)}[D_{\mathrm{KL}}(q(\pmb{x}_{t-1}|\pmb{x}_t,\pmb{x}_0)\| p_{\theta}(\pmb{x}_{t-1}|\pmb{x}_t))]$ . Therefore, the backward one-step transition $p_{\theta}(\pmb{x}_{t-1}|\pmb{x}_t)$ is modeled to approximate the true posterior:
+
+$$
+p _ {\theta} \left(\boldsymbol {x} _ {t - 1} \mid \boldsymbol {x} _ {t}\right) = q \left(\boldsymbol {x} _ {t - 1} \mid \boldsymbol {x} _ {t}, f _ {\theta} (\boldsymbol {x} _ {t}, t)\right), \tag {5}
+$$
+
+where $f_{\theta}(\boldsymbol{x}_t, t)$ is a trainable denoising neural network to estimate the $\boldsymbol{x}_0$ . The loss function can be consequently simplified as:
+
+$$
+\mathcal {L} = \mathbb {E} _ {\boldsymbol {x} _ {0} \sim q (\boldsymbol {x} _ {0}), t \sim [ 1, T ]} \left[ \| f _ {\theta} (\boldsymbol {x} _ {t}, t) - \boldsymbol {x} _ {0} \| _ {2} ^ {2} \right], \tag {6}
+$$
+
+with $\pmb{x}_t = \sqrt{\bar{\alpha}_t}\pmb{x}_0 + \sqrt{1 - \bar{\alpha}_t}\pmb{\epsilon},\pmb{\epsilon} \sim \mathcal{N}(\pmb{0},\pmb{I})$ via reparameterization. This means that the network $f_{\theta}$ takes in the noisy
+
+data $\boldsymbol{x}_t$ and the current diffusion step $t$ , and outputs the prediction of the clean data $\hat{\boldsymbol{x}}_0$ . Alternatively, the network $f_{\theta}$ can also be optimized to estimate the added noise $\epsilon_t$ , and then apply $\hat{\boldsymbol{x}}_0 = (\boldsymbol{x}_t - \sqrt{1 - \bar{\alpha}_t} f_\theta(\boldsymbol{x}_t, t)) / \sqrt{\bar{\alpha}_t}$ to obtain the prediction of clean data. After training, one can simply generate a synthetic data sample by iteratively denoising a $\boldsymbol{x}_T \sim p(\boldsymbol{x}_T)$ with Equation (5).
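The reparameterization and the recovery formula above can be checked numerically. In the sketch below, the linear $\beta$ schedule and the toy series are illustrative choices; it verifies that a perfect noise prediction recovers $\pmb{x}_0$ exactly:

```python
import math
import random

random.seed(0)
T = 1000
# illustrative linear beta schedule from 1e-4 to 0.02
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]
alpha_bar, prod = [], 1.0
for b in betas:
    prod *= 1.0 - b          # alpha_bar_t = prod of alpha_i = prod of (1 - beta_i)
    alpha_bar.append(prod)

x0 = [math.sin(2 * math.pi * i / 24) for i in range(24)]   # toy time series
t = 499
eps = [random.gauss(0, 1) for _ in x0]
ab = alpha_bar[t]
# forward reparameterization: x_t = sqrt(abar_t) x_0 + sqrt(1 - abar_t) eps
xt = [math.sqrt(ab) * x + math.sqrt(1 - ab) * e for x, e in zip(x0, eps)]
# if the network predicted eps exactly, the recovery formula returns x_0
x0_hat = [(x - math.sqrt(1 - ab) * e) / math.sqrt(ab) for x, e in zip(xt, eps)]
assert max(abs(a - b) for a, b in zip(x0, x0_hat)) < 1e-9
```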
+
+# 2.2. Denoising diffusion implicit model
+
+Based on DDPM, DDIM generalized the forward process to be non-Markovian and derived a new backward process. First, DDIM bypasses the DDPM design of $q(\pmb{x}_t|\pmb{x}_{t-1})$ , and directly considers the conditional joint $q(\pmb{x}_{1:T}|\pmb{x}_0)$ in the form of Equation (4), where $q(\pmb{x}_{t-1}|\pmb{x}_t, \pmb{x}_0)$ is specially designed to ensure that for all $t$ , the transition $q(\pmb{x}_t|\pmb{x}_0)$ always matches Equation (3). In other words, the marginal is satisfied: $\int q(\pmb{x}_{t-1}|\pmb{x}_t, \pmb{x}_0)q(\pmb{x}_t|\pmb{x}_0)d\pmb{x}_t = q(\pmb{x}_{t-1}|\pmb{x}_0)$ .
+
+Since the conditional joint $q(\pmb{x}_{1:T}|\pmb{x}_0)$ is defined in the same form as DDPM's, the optimization target of DDIM can be identically factorized, and the backward one-step transition is also chosen as Equation (5). In this way, DDIM shares the identical loss function of DDPM.
+
+For the backward process, DDIM offers an accelerable inference option. Specifically, given an ascending subsequence $\{t_i\}_{i=1}^{\tau}$ of $[1,\dots ,T]$ with $\tau \leq T$ , DDIM allows the following sampling scheme:
+
+$$
+\begin{array}{l} \pmb {x} _ {t _ {i - 1}} = \sqrt {\bar {\alpha} _ {t _ {i - 1}}} f _ {\theta} (\pmb {x} _ {t _ {i}}, t _ {i}) + \\ \sqrt {1 - \bar {\alpha} _ {t _ {i - 1}} - \eta_ {t _ {i}} ^ {2}} \cdot \frac {\boldsymbol {x} _ {t _ {i}} - \sqrt {\bar {\alpha} _ {t _ {i}}} f _ {\theta} (\boldsymbol {x} _ {t _ {i}} , t _ {i})}{\sqrt {1 - \bar {\alpha} _ {t _ {i}}}} + \eta_ {t _ {i}} \boldsymbol {\epsilon}, \\ \end{array}
+$$
+
+where $\{\eta_{t_i}\}$ are hyperparameters. DDIM can generate reasonable synthetic data within 50 steps for images. Besides, when $\eta_{t_i} = 0$ , also known as deterministic sampling, this scheme can be considered a numerical solution of a probability flow ordinary differential equation (ODE) (Song et al., 2021b).
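The consistency behind this scheme can be verified numerically: with $\eta_{t_i} = 0$ and a perfect predictor $f_{\theta}(\cdot) = \pmb{x}_0$ , one DDIM step from any $t_i$ lands exactly on the forward marginal at $t_{i-1}$ , however large the gap. The $\bar{\alpha}$ values and toy series below are illustrative:

```python
import math
import random

random.seed(1)
# illustrative alpha-bar values at three diffusion steps
abar = {1000: 4e-5, 500: 0.08, 100: 0.85}
x0 = [math.sin(i / 3.0) for i in range(16)]
eps = [random.gauss(0, 1) for _ in x0]

def forward(t):
    """Marginal of Equation (3) via reparameterization, with eps held fixed."""
    return [math.sqrt(abar[t]) * x + math.sqrt(1 - abar[t]) * e
            for x, e in zip(x0, eps)]

def ddim_step(xt, t, t_prev, f):
    """One deterministic (eta = 0) DDIM update, with f the predicted x0."""
    out = []
    for x_t, x0_hat in zip(xt, f):
        e_hat = (x_t - math.sqrt(abar[t]) * x0_hat) / math.sqrt(1 - abar[t])
        out.append(math.sqrt(abar[t_prev]) * x0_hat
                   + math.sqrt(1 - abar[t_prev]) * e_hat)
    return out

# a single step from t = 1000 reproduces the forward marginal at t = 500
x_500 = ddim_step(forward(1000), 1000, 500, x0)
assert max(abs(a - b) for a, b in zip(x_500, forward(500))) < 1e-9
```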
+
+# 2.3. Conditional diffusion for time series forecasting
+
+Time series forecasting can be viewed as a conditional generation task. Given a look-back window $\pmb{c} \in \mathbb{R}^{H}$ with $H$ time steps, we aim to predict the next $L$ steps, i.e. the target window $\pmb{x}_0$ . In other words, we are interested in the conditional distribution $q(\pmb{x}_0|\pmb{c})$ . To include the guidance of the look-back window in the diffusion model, we can add the condition at each transition (Shen & Kwok, 2023):
+
+$$
+p \left(\boldsymbol {x} _ {0: T} | \boldsymbol {c}\right) = p \left(\boldsymbol {x} _ {T}\right) \prod_ {t = 1} ^ {T} p _ {\theta} \left(\boldsymbol {x} _ {t - 1} \mid \boldsymbol {x} _ {t}, \boldsymbol {c}\right). \tag {7}
+$$
+
+Accordingly, the condition $\pmb{c}$ is handled as another feature input to the denoising network, $f_{\theta}(\pmb{x}_t,t,\pmb{c}) \approx \pmb{x}_0$ . Other
+
+types of time series analysis tasks can also be modeled in this way, for example super-resolution (conditional on low-resolution time series).
+
+# 3. Empirical Findings on Applying DDPM to Time Series
+
+In this section, we present a training issue we discovered when directly applying DDPM to time series (see Appendix A for detailed settings). As depicted in the left of Figure 2, the training process of DDPM on a typical time series dataset, Electricity, experienced large variations, even in the beginning stage. As introduced above, DDPM is optimized to denoise the corrupted time series at all diffusion steps (see Equation (6)), but we observed that the directions of the model gradients at different diffusion steps could contradict each other during training. When we specifically probed the $100^{\mathrm{th}}$ training step (see the left down of Figure 2), the gradients of roughly the first $25\%$ of diffusion steps show greater mutual similarity, while they can be opposite to those of the remaining $75\%$ of diffusion steps, and vice versa. Since each data sample is corrupted to varying degrees during DDPM training, the averaged gradient of each mini-batch can show considerable variation when the per-sample gradients are polarized. Consequently, this may lead to unstable optimization of DDPM on time series.
+
+To further analyze the reason for this phenomenon, we first explored the change of frequency information during the whole diffusion process (see the right up of Figure 2). We computed the spectral energy ratio between the low-frequency components and the high-frequency ones of the time series at each diffusion step. During the whole diffusion process, low-frequency energy was dominant at the early stage, and then steeply decreased until a turning point, after which the corrupted time series barely had salient low-frequency information, i.e., they became almost pure noise. Comparing this energy ratio change with the gradient similarity at different diffusion steps (see the right of Figure 2), we found that the shift of gradient directions is highly aligned with the turning point of the energy ratios. This implies that when DDPM is directly applied to time series, low-frequency information decays so steeply that there is a large discrepancy in the model's perception of the input data: the few early diffusion steps are informative, whereas the middle and late diffusion steps are nearly pure noise. Therefore, to prevent the drastic decline of the energy ratio, we expect a time series diffusion model to have a gradual diffusion process that keeps more low-frequency information.
+
+In the following sections, we will introduce our method. Based on the moving average, a natural low-pass filter, it keeps more essential time series information during diffusion, alleviates the gradient contradiction during training, and thus yields a less fluctuating training process (see the
+
+Figure 2. Gradient analysis on Electricity dataset. Left up: Training loss curves of DDPM and ours. Left down: Cosine similarity matrices of gradients w.r.t. the denoising network with different diffusion steps (total diffusion steps $T = 1000$ ). Right up: Energy ratio at different diffusion steps between the low-frequency components and the high-frequency ones. Right down: Cosine similarity matrices with different $T$ at $100^{\text{th}}$ training step.
+
+Figure 3. Our proposed MA-TSD. Dashed lines denote the forward process, while solid lines represent the backward process.
+
+left of Figure 2).
+
+# 4. Moving Average Time Series Diffusion
+
+In this section, we present our proposed MA-TSD, depicted by Figure 3. Time series will first be normalized to be zero-mean and unit-variance. Then, we utilize moving average to build the diffusion model. Finally, the generated time series will be denormalized for downstream tasks.
+
+# 4.1. Instance normalization
+
+Since the moving average operation does not modify the zero-frequency component of a time series, the final $p(\pmb{x}_T)$ will no longer be an easy-to-sample standard Gaussian distribution if we directly construct moving average diffusion on $\pmb{x}_0$ . Therefore, we apply instance normalization to each target time series sample $\pmb{x}_0 \in \mathbb{R}^L$ , and obtain the normalized $\pmb{z}_0 \in \mathbb{R}^L$ with zero mean and unit variance, specifically:
+
+$$
+\boldsymbol {z} _ {0} = \frac {\boldsymbol {x} _ {0} - \mu (\boldsymbol {x} _ {0})}{\sigma (\boldsymbol {x} _ {0})}, \tag {8}
+$$
+
+where $\mu (\cdot)$ and $\sigma (\cdot)$ denote the functions that return the mean and standard deviation of a time series, respectively. We therefore build the diffusion framework on the normalized time series. The denormalization strategies will be investigated in Section 4.4.
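Equation (8) and its inversion can be sketched as follows; the denormalization shown is the naive mean/std inversion (the actual strategies are investigated in Section 4.4):

```python
import math

def instance_norm(x):
    """Eq. (8): normalize one series to zero mean and unit variance."""
    mu = sum(x) / len(x)
    sd = math.sqrt(sum((v - mu) ** 2 for v in x) / len(x))
    z = [(v - mu) / sd for v in x]
    return z, mu, sd

def denorm(z, mu, sd):
    """Invert instance normalization with the stored statistics."""
    return [v * sd + mu for v in z]

x = [3.0, 5.0, 4.0, 8.0]
z, mu, sd = instance_norm(x)
assert abs(sum(z)) < 1e-9                                  # zero mean
assert abs(sum(v * v for v in z) / len(z) - 1.0) < 1e-9    # unit variance
assert max(abs(a - b) for a, b in zip(denorm(z, mu, sd), x)) < 1e-9
```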
+
+# 4.2. Non-isotropic forward process
+
+We now consider the following transition process for normalized time series, $q(\pmb{z}_t|\pmb{z}_0) \coloneqq \mathcal{N}(\pmb{z}_t; \pmb{K}_t\pmb{z}_0, \beta_t^2\pmb{I})$ , which can also be reparameterized to:
+
+$$
+\boldsymbol {z} _ {t} = \boldsymbol {K} _ {t} \boldsymbol {z} _ {0} + \beta_ {t} \boldsymbol {\epsilon} _ {t}, \boldsymbol {\epsilon} _ {t} \sim \mathcal {N} (\boldsymbol {0}, \boldsymbol {I}) \tag {9}
+$$
+
+with the transition matrix $\pmb{K}_t \in \mathbb{R}^{L \times L}$ , and the noise schedule $\beta_t \in \mathbb{R}$ . In the standard DDPM, the transition matrix is diagonal with identical entries, $\pmb{K}_t = \mathrm{diag}(\sqrt{\bar{\alpha}_t})$ , and the noise schedule is set accordingly to maintain the variance, $\beta_t = \sqrt{1 - \bar{\alpha}_t}$ .
+
+Transition matrix. In our design, we expect to utilize moving average to build our non-isotropic transition. First, let us consider non-overlapping moving average filters. The kernel sizes $\{k_i\}$ are naturally chosen as all the factors of the length of our target time series, given by:
+
+$$
+\left\{k _ {i} \right\} _ {1} ^ {n} = \left\{k _ {i} \in \mathbb {N} \mid 1 < k _ {i} \leq L, L \mod k _ {i} = 0 \right\}. \tag {10}
+$$
+
+Here, we sort the $\{k_i\}_{1}^n$ in ascending order, and use the index $i$ instead of $t$ to distinguish them from diffusion step indices for now.
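+The candidate kernel sizes of Equation (10) are simply the nontrivial factors of $L$; a minimal sketch:
+
+```python
+def kernel_sizes(L):
+    """Factors of L in (1, L], sorted ascending (Eq. 10)."""
+    return [k for k in range(2, L + 1) if L % k == 0]
+```
+
+For $L = 12$ this yields $\{2, 3, 4, 6, 12\}$; the gaps between consecutive factors motivate the interpolation in Equation (12).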
+
+The corresponding moving average kernels are denoted as $\{\hat{K}_i\}_{1}^{n}$ , and each kernel convolves the normalized time series, i.e. $\hat{K}_i * z_0$ . Such a convolution can be unrolled and reformulated as matrix multiplication for generality,
+
+namely $\hat{\pmb{K}}_i*\pmb {z}_0 = \bar{\pmb{K}}_i\pmb {z}_0$ , with $\bar{K}_i = \mathrm{Unroll}(\hat{K}_i)\in \mathbb{R}^{(L / k_i)\times L}$ . We can further interpolate $\bar{\pmb{K}}_i$ along the time step axis to make it square, keeping the shape of $\bar{K}_i\pmb {z}_0$ unchanged during transitions; the transition matrix for the $i^{\mathrm{th}}$ moving average kernel is thus given as:
+
+$$
+\boldsymbol {K} _ {i} ^ {\prime} = \operatorname {I n t e r p} \left(\operatorname {U n r o l l} \left(\hat {\boldsymbol {K}} _ {i}\right)\right) \in \mathbb {R} ^ {L \times L}. \tag {11}
+$$
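+To make Equation (11) concrete, the following NumPy sketch unrolls a non-overlapping moving average into a matrix and interpolates it to a square transition matrix. The paper leaves $\mathrm{Interp}(\cdot)$ unspecified, so linear interpolation is our assumption:
+
+```python
+import numpy as np
+
+def unroll(k, L):
+    """Non-overlapping moving average of size k as an (L/k, L) matrix."""
+    K = np.zeros((L // k, L))
+    for r in range(L // k):
+        K[r, r * k:(r + 1) * k] = 1.0 / k
+    return K
+
+def interp_to_square(K_bar, L):
+    """Interpolate along the time-step axis to an (L, L) matrix (Eq. 11).
+
+    Linear interpolation is an assumption; the paper leaves Interp open.
+    """
+    rows = K_bar.shape[0]
+    old_t = np.linspace(0.0, 1.0, rows)
+    new_t = np.linspace(0.0, 1.0, L)
+    cols = [np.interp(new_t, old_t, K_bar[:, j]) for j in range(L)]
+    return np.stack(cols, axis=1)  # rows = time index, cols = input index
+
+K_prime = interp_to_square(unroll(3, 6), 6)  # L = 6, kernel size 3
+```
+
+Because each original row averages to one and interpolation is linear, every row of the square matrix still sums to one, so it remains an averaging operator.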
+
+Though the moving average transition matrices are well defined, such a transition could have large jumps between adjacent kernel sizes, since the factors of the time series length can be non-consecutive integers. To extend this design to the continuous case (i.e. arbitrary diffusion steps), we further interpolate over $\{K_i'\}_{1}^{n}$ alongside the diffusion steps to obtain $T$ -step transition matrices $\{K_t\}_1^T$ for arbitrary $T$ :
+
+$$
+\left\{\boldsymbol {K} _ {t} \right\} _ {1} ^ {T} = \operatorname {I n t e r p} \left(\left\{\boldsymbol {K} _ {i} ^ {\prime} \right\} _ {1} ^ {n}\right), \tag {12}
+$$
+
+where $K_{t}$ is no longer a diagonal matrix like DDPM. A simple example of how we obtain the moving average transition is included in Appendix B.1.
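+A hedged sketch of Equation (12), again assuming linear interpolation between adjacent matrices:
+
+```python
+import numpy as np
+
+def interp_transitions(K_primes, T):
+    """Linearly interpolate n matrices {K'_i} to T transitions (Eq. 12)."""
+    Ks = np.asarray(K_primes, dtype=float)   # shape (n, L, L)
+    n = Ks.shape[0]
+    out = []
+    for p in np.linspace(0.0, n - 1.0, T):   # fractional index along {K'_i}
+        lo = int(np.floor(p))
+        hi = min(lo + 1, n - 1)
+        w = p - lo
+        out.append((1.0 - w) * Ks[lo] + w * Ks[hi])
+    return np.stack(out)                     # shape (T, L, L)
+```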
+
+Noise schedule. For the noise schedule, we follow the variance preserving principle in (Ho et al., 2020; Song et al., 2021b). The noise schedule of standard diffusion can be analytically designed to keep variances according to the transition coefficient $\sqrt{\bar{\alpha}_t}$ , i.e. $\beta_{t} = \sqrt{1 - \bar{\alpha}_{t}}$ , whereas the decrease of time series variance caused by the moving average varies across datasets. Therefore, we provide a dataset-based noise schedule to complement our forward process. Specifically, we first compute the average decrease ratio $\gamma_{t}$ over the whole time series dataset:
+
+$$
+\gamma_ {t} = \mathbb {E} _ {\boldsymbol {x} _ {0} \sim q (\boldsymbol {x} _ {0})} \left[ \frac {\sigma (\boldsymbol {K} _ {t} \boldsymbol {x} _ {0})}{\sigma (\boldsymbol {x} _ {0})} \right]. \tag {13}
+$$
+
+Then, we accordingly set $\beta_{t} = \sqrt{1 - \gamma_{t}^{2}}$ as our noise schedule to compensate for the variance decrease. At the last diffusion step, the kernel size of the moving average equals the time series length, and the corresponding $\gamma_{T} = 0, \beta_{T} = 1$ , which ensures $q(z_{T}|z_{0}) = \mathcal{N}(\mathbf{0}, \mathbf{I})$ . Note that though we build the diffusion model on the normalized time series, it makes no difference whether $\{\gamma_t\}, \{\beta_t\}$ are computed over the normalized or the original time series. The proof can be found in Appendix B.3.
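+The dataset-based schedule of Equation (13) can be estimated empirically; a sketch (function and variable names are ours):
+
+```python
+import numpy as np
+
+def noise_schedule(dataset, Ks):
+    """Dataset-based gamma_t (Eq. 13) and beta_t = sqrt(1 - gamma_t^2)."""
+    gammas = np.array([
+        np.mean([np.std(K_t @ x0) / np.std(x0) for x0 in dataset])
+        for K_t in Ks
+    ])
+    betas = np.sqrt(np.clip(1.0 - gammas ** 2, 0.0, None))
+    return gammas, betas
+
+# With the identity transition the variance is preserved: gamma = 1, beta = 0.
+dataset = [np.array([1.0, 3.0, 2.0, 4.0])]
+gammas, betas = noise_schedule(dataset, [np.eye(4)])
+```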
+
+Conditional joint distribution. Similar to DDIM, we directly define the following family of joint distributions: $q(\pmb{z}_{1:T}|\pmb{z}_0) \coloneqq q(\pmb{z}_T|\pmb{z}_0)\prod_{t=2}^T q(\pmb{z}_{t-1}|\pmb{z}_t,\pmb{z}_0)$ , where $q(\pmb{z}_T|\pmb{z}_0) = \mathcal{N}(\pmb{z}_T; \pmb{K}_T\pmb{z}_0, \beta_T^2\pmb{I})$ , and for $t \geq 2$ :
+
+$$
+q (\boldsymbol {z} _ {t - 1} | \boldsymbol {z} _ {t}, \boldsymbol {z} _ {0}) = \mathcal {N} \left(\boldsymbol {K} _ {t - 1} \boldsymbol {z} _ {0} + \frac {\sqrt {\beta_ {t - 1} ^ {2} - \eta_ {t} ^ {2}}}{\beta_ {t}} \left(\boldsymbol {z} _ {t} - \boldsymbol {K} _ {t} \boldsymbol {z} _ {0}\right), \eta_ {t} ^ {2} \boldsymbol {I}\right). \tag {14}
+$$
+
+The mean is chosen in order to guarantee that the modelling of the joint matches the marginal for all $t$ , i.e. $\int q(\pmb{z}_{t-1}|\pmb{z}_t,\pmb{z}_0)q(\pmb{z}_t|\pmb{z}_0)d\pmb{z}_t = q(\pmb{z}_{t-1}|\pmb{z}_0)$ . In other words, the choice of Equation (14) ensures $q(\pmb{z}_t|\pmb{z}_0) = \mathcal{N}(\pmb{z}_t;\pmb{K}_t\pmb{z}_0,\beta_t^2\pmb{I})$ for all $t$ . The proof of such choice is included in Appendix B.3.
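+This marginal-preserving property can also be checked numerically. The following sketch uses toy scalar transition matrices (our choice, for simplicity) and verifies that composing Equations (9) and (14) reproduces the marginal $\mathcal{N}(\pmb{K}_{t-1}\pmb{z}_0, \beta_{t-1}^2\pmb{I})$:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+L, N = 4, 200_000
+z0 = rng.standard_normal(L)
+k_t, k_tm1 = 0.5, 0.8          # toy scalar transitions, K_t = k_t * I
+b_t, b_tm1, eta = 0.9, 0.6, 0.3
+
+# z_t ~ q(z_t | z_0) via Eq. (9):
+z_t = k_t * z0 + b_t * rng.standard_normal((N, L))
+# z_{t-1} ~ q(z_{t-1} | z_t, z_0) via Eq. (14):
+coeff = np.sqrt(b_tm1 ** 2 - eta ** 2) / b_t
+z_tm1 = k_tm1 * z0 + coeff * (z_t - k_t * z0) + eta * rng.standard_normal((N, L))
+# Empirically, z_{t-1} ~ N(K_{t-1} z0, beta_{t-1}^2 I) up to Monte-Carlo error.
+```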
+
+# 4.3. Accelerable backward process
+
+Analogous to DDIM, we now define the backward process on the normalized time series as follows: $p_{\theta}(z_{t - 1}|z_t) = q(z_{t - 1}|z_t,f_\theta (z_t,t,c))$ , where the denoising network $f_{\theta}(z_{t},t,c)$ tries to predict the clean normalized time series. For generality, we include a possible condition $c$ as an input to the denoising network for conditional generation tasks, such as time series forecasting. For unconditional tasks, we simply set $c = \varnothing$ .
+
+For the normalized time series, we model $q(\pmb{z}_{1:T}|\pmb{z}_0)$ and $p_{\theta}(\pmb{z}_{t-1}|\pmb{z}_t)$ the same way as DDIM, so it is natural to derive a similar optimization target to Equation (6), but in the normalized space:
+
+$$
+\mathcal {L} _ {\boldsymbol {z}} = \mathbb {E} _ {\boldsymbol {z} _ {0}, \boldsymbol {c} \sim q (\boldsymbol {z} _ {0}, \boldsymbol {c}), t \sim [ 1, T ]} \left[ \| f _ {\theta} (\boldsymbol {z} _ {t}, t, \boldsymbol {c}) - \boldsymbol {z} _ {0} \| _ {2} ^ {2} \right], \tag {15}
+$$
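+A single stochastic estimate of $\mathcal{L}_{\pmb{z}}$ could be computed as follows, where `f_theta` stands in for the denoising network:
+
+```python
+import numpy as np
+
+def training_loss(f_theta, z0, c, Ks, betas, rng):
+    """One stochastic estimate of L_z (Eq. 15) for a single sample."""
+    T = len(Ks)
+    t = int(rng.integers(1, T + 1))            # t ~ Uniform{1, ..., T}
+    eps = rng.standard_normal(z0.shape)
+    z_t = Ks[t - 1] @ z0 + betas[t - 1] * eps  # forward sample, Eq. (9)
+    return float(np.mean((f_theta(z_t, t, c) - z0) ** 2))
+```
+
+A perfect denoiser that returns $\pmb{z}_0$ regardless of the noisy input attains zero loss, which is the sanity check for this sketch.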
+
+After training, we can also accelerate the backward process as we introduced in Section 2.2, given as:
+
+$$
+\boldsymbol {z} _ {t _ {i - 1}} = \boldsymbol {K} _ {t _ {i - 1}} f _ {\theta} \left(\boldsymbol {z} _ {t _ {i}}, t _ {i}, \boldsymbol {c}\right) + \frac {\sqrt {\beta_ {t _ {i - 1}} ^ {2} - \eta_ {t _ {i}} ^ {2}}}{\beta_ {t _ {i}}} \left(\boldsymbol {z} _ {t _ {i}} - \boldsymbol {K} _ {t _ {i}} f _ {\theta} \left(\boldsymbol {z} _ {t _ {i}}, t _ {i}, \boldsymbol {c}\right)\right) + \eta_ {t _ {i}} \boldsymbol {\epsilon}, \tag {16}
+$$
+
+where $\{t_i\}_1^\tau$ is an ascending sub-sequence of $[1,\dots ,T]$ . A detailed proof that the accelerated backward process does not essentially change the training objective can be found in (Song et al., 2021a).
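+One accelerated backward update of Equation (16) can be sketched as follows; the 1-based step indexing is our convention:
+
+```python
+import numpy as np
+
+def backward_step(z_t, t_cur, t_prev, f_theta, Ks, betas, etas, c=None, rng=None):
+    """One accelerated backward update from step t_cur to t_prev (Eq. 16).
+
+    Steps are 1-based; eta = 0 gives the deterministic (DDIM-style) update.
+    """
+    z0_hat = f_theta(z_t, t_cur, c)
+    K_prev, K_cur = Ks[t_prev - 1], Ks[t_cur - 1]
+    b_prev, b_cur = betas[t_prev - 1], betas[t_cur - 1]
+    eta = etas[t_cur - 1]
+    coeff = np.sqrt(max(b_prev ** 2 - eta ** 2, 0.0)) / b_cur
+    z_prev = K_prev @ z0_hat + coeff * (z_t - K_cur @ z0_hat)
+    if eta > 0 and rng is not None:
+        z_prev = z_prev + eta * rng.standard_normal(z_t.shape)
+    return z_prev
+```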
+
+Acceleration strategy. Though the backward process can be accelerated by selecting a subset of the total diffusion steps, how this subset is chosen may cause performance differences. Here, we offer a reasonable sampling strategy based on our moving average forward process. Specifically, recall that the diffusion transition matrices $\{\pmb{K}_t\}$ are obtained by interpolating the original moving average transition matrices $\{\pmb{K}_i^{\prime}\}$ . We shorten the backward process by finding those diffusion steps whose transition matrices are closest to the original $\{\pmb{K}_i^{\prime}\}$ . For the $i^{\text{th}}$ original matrix $\pmb{K}_i^{\prime}$ , we search by:
+
+$$
+t _ {i} ^ {*} = \arg \min _ {t} \| \boldsymbol {K} _ {t} - \boldsymbol {K} _ {i} ^ {\prime} \| _ {2} ^ {2}, \quad \text {s . t .} \quad \boldsymbol {K} _ {t} \in \left\{\boldsymbol {K} _ {t} \right\} _ {1} ^ {T}. \tag {17}
+$$
+
+Therefore, we can collect $n$ diffusion steps $\{t_i^*\}_{1}^{n}$ , corresponding to the original non-overlapping moving average kernels, as our accelerated backward steps. We call such a strategy factor-only backward, since the selected backward steps are only related to the factors of the length of the time series. When $\eta_t = 0, \forall t \in [1, \dots, T]$ , the backward process can also be viewed as a numerical solution to an ODE with Euler discretization. Further, if the function $\mathrm{Interp}(\cdot)$ in Equation (12) interpolates evenly between $K_i', K_{i-1}'$ , $\forall i \geq 2$ , the factor-only backward essentially solves that ODE with larger steps. Refer to Appendix B.3 for details.
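+The step search of Equation (17) reduces to a nearest-matrix lookup; a minimal sketch:
+
+```python
+import numpy as np
+
+def factor_only_steps(Ks, K_primes):
+    """For each original K'_i, find the closest K_t (Eq. 17); 1-based steps."""
+    return [
+        int(np.argmin([np.sum((K_t - Kp) ** 2) for K_t in Ks])) + 1
+        for Kp in K_primes
+    ]
+```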
+
+Backward as super-resolution. As the forward process gradually coarsens the time series, the backward process can naturally be utilized for super-resolution. Here, we do not have to include the low-resolution time series as a condition input to MA-TSD, since the framework itself demonstrates the multi-resolution property.
+
+To be specific, consider a coarse time series whose downscale rate is one of the factors of the original time series length. Utilizing the moving average transition matrix, we denote such a coarse time series as $\pmb{x}_i = \pmb{K}_i'\pmb{x}_0$ and the normalized one as $\pmb{z}_i$ . The expected super-resolution scale is then exactly $k_i$ . With Equation (17), we can locate this scale in the diffusion process and accordingly choose a subset of the total diffusion steps as $[1,\dots ,t_i^* ]$ . Then, we can start with $\pmb{z}_{t_i^*} = \pmb{z}_i$ and iteratively apply Equation (16) to obtain a super-resolution result of $\pmb{z}_i$ . Therefore, the backward process of MA-TSD can be viewed as super-resolution.
+
+Compared with diffusion-based models that take low-resolution inputs as conditions, the time complexity is decreased from $\mathcal{O}(nM)$ to $\mathcal{O}(M)$ , where $n$ is the number of scales and $\mathcal{O}(M)$ is the complexity of training.
+
+# 4.4. Denormalization
+
+Given the denoised $\hat{z}_0$ from Equation (16), we need to denormalize it to generate the final time series, i.e. $\hat{\pmb{x}}_0 = \hat{\pmb{z}}_0\cdot \hat{\pmb{\sigma}} +\hat{\pmb{\mu}}$ . The choice of $\hat{\mu},\hat{\sigma}$ differs depending on the downstream application of MA-TSD.
+
+Time series synthesis. For unconditional generation, we usually expect unlimited time series synthesis. Therefore, we can simply sample from the empirical distributions of time series means and standard deviations, $\hat{\mu} \sim q_{\mathrm{emp}}(\mu(\boldsymbol{x}_0)), \hat{\sigma} \sim q_{\mathrm{emp}}(\sigma(\boldsymbol{x}_0))$ . Given a time series dataset, we can easily obtain the empirical distributions and denormalize the generated series.
+
+Time series forecasting. For time series forecasting, the mean and standard deviation of the target window are vital for prediction accuracy (Kim et al., 2021; Qin et al., 2024), so it is improper to randomly sample from the empirical distribution for denormalization. Instead, we utilize the look-back window $\mathcal{C}$ to produce $\hat{\mu},\hat{\sigma}$ for the target window. To avoid training a separate statistics prediction network for $\hat{\mu},\hat{\sigma}$ , we optimize both the denoising model $f_{\theta}$ and the statistics prediction model $g_{\omega} = \{g_{\omega}^{\mu},g_{\omega}^{\sigma}\}$ with parameters $\omega$ , in a hybrid manner. In the appendix, Figure 7 shows how these two networks work together for time series forecasting. Specifically, we modify the loss function on the normalized data (Equation (15)) into a hybrid case:
+
+$$
+\mathcal {L} _ {\text {h y b r i d}} = \lambda_ {z} \mathcal {L} _ {z} + \lambda_ {\mu} \mathcal {L} _ {\mu} + \lambda_ {\sigma} \mathcal {L} _ {\sigma}, \tag {18}
+$$
+
+where we denote $\mathcal{L}_{\mu} = \mathbb{E}_{\boldsymbol{x}_0\sim q(\boldsymbol{x}_0)}\left[\| g_{\omega}^{\mu}(\boldsymbol {c}) - \mu (\boldsymbol {x}_0)\| _2^2\right]$ and $\mathcal{L}_{\sigma} = \mathbb{E}_{\boldsymbol{x}_0\sim q(\boldsymbol{x}_0)}\left[\| g_{\omega}^{\sigma}(\boldsymbol {c}) - \sigma (\boldsymbol {x}_0)\| _2^2\right]$ . The coefficients $\lambda_z,\lambda_\mu ,\lambda_\sigma$ are hyperparameters. We interpret $\mathcal{L}_z$ as learning the shape of the target time series, while $\mathcal{L}_{\mu}$ and $\mathcal{L}_{\sigma}$ target the statistics. Therefore, we can set $\hat{\mu} = g_{\omega}^{\mu}(\boldsymbol {c}),\hat{\sigma} = g_{\omega}^{\sigma}(\boldsymbol {c})$ to denormalize $z_0$ in the context of time series forecasting. Besides, when the coefficients $\lambda_z,\lambda_\mu ,\lambda_\sigma$ are set properly, we can prove that the hybrid loss (Equation (18)) is essentially an upper bound of the loss of conditional diffusion models without instance normalization. The detailed proof can be found in Appendix B.3.
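+The hybrid objective of Equation (18) is a simple weighted sum; a sketch with the denoising term computed elsewhere:
+
+```python
+def hybrid_loss(loss_z, mu_pred, sigma_pred, mu_true, sigma_true,
+                lam_z=1.0, lam_mu=1.0, lam_sigma=1.0):
+    """Hybrid objective (Eq. 18); the lambda weights are hyperparameters.
+
+    loss_z is the denoising term L_z of Eq. (15), computed separately.
+    """
+    loss_mu = (mu_pred - mu_true) ** 2
+    loss_sigma = (sigma_pred - sigma_true) ** 2
+    return lam_z * loss_z + lam_mu * loss_mu + lam_sigma * loss_sigma
+```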
+
+Time series super-resolution. For super-resolution, since the coarse time series $\pmb{x}_i$ obtained by moving average shares the same mean as the target time series, we can use $\hat{\mu} = \mu(\pmb{x}_i)$ . On the other hand, though the standard deviations are not shared, we can utilize the dataset-based noise schedule to re-scale $\sigma(\pmb{x}_i)$ as $\hat{\sigma} = \sigma(\pmb{x}_i) / \gamma_{t_i^*}$ , where $\gamma_{t_i^*}$ is the decrease ratio of the standard deviation at the diffusion step $t_i^*$ mentioned before.
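+The super-resolution denormalization can be sketched as follows (names are ours):
+
+```python
+import numpy as np
+
+def sr_denormalize(z0_hat, x_coarse, gamma_star):
+    """Denormalize a super-resolution output: the mean is inherited from the
+    coarse series; the std is re-scaled by gamma at step t_i^*."""
+    mu_hat = x_coarse.mean()
+    sigma_hat = x_coarse.std() / gamma_star
+    return z0_hat * sigma_hat + mu_hat
+```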
+
+# 5. Experiments
+
+In this section, we mainly focus on two important time series analysis tasks, forecasting and super-resolution. The standard time series synthesis task is included in Appendix C.1. An ablation study of our framework design is also included.
+
+# 5.1. Time series forecasting
+
+Datasets. We consider six real-world datasets with diverse temporal dynamics, commonly used by the community (Wang et al., 2024), namely Electricity, ETTh2, ETTm2, exchange, traffic, weather.
+
+Evaluation metrics. We assess time series forecasting using MSE (Mean Squared Error) for deterministic accuracy and CRPS (Continuous Ranked Probability Score) for probabilistic accuracy.
+
+**Benchmarks.** We compare our proposed MA-TSD with other diffusion-based time series forecasting models, including CSDI (Tashiro et al., 2021), SSSD (Alcaraz & Strodthoff, 2023), D3VAE (Li et al., 2022), TMDM (Li et al., 2024) and mr-diff (Shen et al., 2024). Details about the implementation and comparison to non-diffusion models are included in the Appendix C.2.
+
+Table 1. Average MSEs over prediction lengths $L = \{ {96},{192},{336},{720}\}$ . The best is bold and the second best is underlined.
+
+| METHOD | ELECTRICITY | ETTH2 | ETTM2 | EXCHANGE | TRAFFIC | WEATHER | RANK |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| CSDI | 0.4581 | 0.2571 | 2.1230 | 1.2557 | 0.4991 | 0.1938 | 3.50 |
+| SSSD | 1.0257 | 0.7201 | 0.8936 | 2.9004 | 1.9662 | 0.6905 | 5.17 |
+| D3VAE | 0.8450 | 1.3961 | 3.3449 | 2.1086 | 6.3583 | 1.5461 | 5.67 |
+| TMDM | 0.4071 | 0.2508 | 0.1789 | 0.7885 | 0.1805 | 0.2209 | 2.83 |
+| MR-DIFF | 0.5287 | 0.2172 | 0.1700 | 0.4801 | 0.2471 | 0.2078 | 2.67 |
+| MA-TSD | 0.3404 | 0.2121 | 0.1241 | 0.3718 | 0.1660 | 0.2074 | 1.17 |
+
+Table 2. Average CRPSs over prediction lengths $L = \{ {96},{192},{336},{720}\}$ . The best is bold and the second best is underlined.
+
+| METHOD | ELECTRICITY | ETTH2 | ETTM2 | EXCHANGE | TRAFFIC | WEATHER | RANK |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| CSDI | 0.1939 | 0.1638 | 0.4720 | 0.3028 | 0.1883 | 0.1261 | 3.33 |
+| SSSD | 0.3216 | 0.3108 | 0.3565 | 0.6743 | 0.4579 | 0.3129 | 5.17 |
+| D3VAE | 0.3111 | 0.4173 | 0.6497 | 0.5380 | 0.8314 | 0.4497 | 5.67 |
+| TMDM | 0.1881 | 0.1591 | 0.1253 | 0.2959 | 0.1076 | 0.1380 | 2.00 |
+| MR-DIFF | 0.2357 | 0.1647 | 0.1293 | 0.2117 | 0.1453 | 0.1506 | 3.17 |
+| MA-TSD | 0.1747 | 0.1567 | 0.1135 | 0.2178 | 0.1156 | 0.1501 | 1.67 |
+
+Results. As depicted in Table 1 and Table 2, the proposed MA-TSD generally outperforms the benchmark time series diffusion models, achieving the best or the second best position on 6/6 and 5/6 datasets, respectively. The improvement on the weather dataset is marginal, possibly because it is recorded at a higher resolution (Table 6): its spectrum contains both informative high-frequency components and stochastic noise, which the moving average process may suppress simultaneously, making accurate forecasting difficult. Refer to Appendix C.2 for visualization and Table 7 for the full results on each prediction length.
+
+# 5.2. Time series super-resolution
+
+Datasets. We consider three high-resolution real-world datasets with 5-minute resolution, MFRED, Wind, Solar. For each dataset, we test the models for the following 3 tasks, i.e. 5min-to-15min $(3\times)$ , 5min-to-30min $(6\times)$ , and 5min-to-60min $(12\times)$ .
+
+Evaluation metrics. We assess time series super-resolution by Consistency and Context-FID. The former measures the MSE between the low-resolution inputs and the down-scaled super-resolution outputs (Saharia et al., 2022), while the latter examines the quality of the super-resolution results compared to the real high-resolution time series (Jeha et al., 2022).
+
+**Benchmarks.** We compare ours with two diffusion models directly conditioned on low-resolution inputs. One is trained under the DDPM framework (Saharia et al., 2022), and the other is trained by flow matching with a variance-preserving path (Lipman et al., 2023), which we denote as FM-VP. The benchmark models are re-trained for each super-resolution scale, since the condition inputs change. Details about the implementation are included in Appendix C.3.
+
+Results. Table 3 records the results of time series super-resolution. The proposed MA-TSD generally exceeds the conditional DDPM and FM-VP in terms of both consistency and quality. Despite a slight inferiority in Context-FID on the Solar dataset at the largest scale, ours still outperforms the FM-VP model significantly in terms of consistency. Notably, MA-TSD performs SR naturally through its backward process instead of being retrained individually on each scale. Besides, our SR backward starts in the middle of the whole process, resulting in even fewer backward steps compared to benchmarks (Figure 12). Therefore, we believe that our method provides a better trade-off among computational overhead, SR quality, and consistency with the low-resolution input. Refer to Appendix C.3 for more visualization.
+
+# 5.3. Ablation study
+
+In this section, we evaluate the effectiveness of important components and designs of MA-TSD through unlimited time series synthesis on the aforementioned MFRED, Wind, and Solar datasets.
+
+Key modules. Two key designs that differ from the standard time series diffusion model are the moving average diffusion schedule and the instance normalization in our proposed MA-TSD. We compared the possible designs with/without these two components, as shown in Table 4. The variant without MA and IN, namely directly applying DDIM to time series data, exhibited the worst performance. Equipped with either IN or
+
+Table 3. Comparison on time series super-resolution. The best is bold and the second best is underlined.
+
+| SCALE | METHOD | MFRED CONSIST. | MFRED CONTEXT-FID | WIND CONSIST. | WIND CONTEXT-FID | SOLAR CONSIST. | SOLAR CONTEXT-FID |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 3 | MA-TSD | 0.0032 | 0.1047 | 0.0067 | 0.2863 | 0.0106 | 0.3491 |
+| 3 | DDPM | 0.0291 | 3.1028 | 0.0382 | 7.4843 | 0.0367 | 1.7197 |
+| 3 | FM-VP | 0.0328 | 1.3481 | 0.0636 | 4.2311 | 0.0523 | 0.7950 |
+| 6 | MA-TSD | 0.0037 | 0.1235 | 0.0098 | 1.0241 | 0.0115 | 0.6972 |
+| 6 | DDPM | 0.0214 | 3.0740 | 0.0343 | 7.9524 | 0.0260 | 2.2293 |
+| 6 | FM-VP | 0.0238 | 1.3381 | 0.0505 | 4.4683 | 0.0361 | 0.6846 |
+| 12 | MA-TSD | 0.0047 | 0.4358 | 0.0136 | 3.0567 | 0.0129 | 1.4135 |
+| 12 | DDPM | 0.0157 | 3.4044 | 0.0318 | 9.2504 | 0.0429 | 6.2596 |
+| 12 | FM-VP | 0.0156 | 1.5222 | 0.0350 | 4.8737 | 0.0249 | 0.8616 |
+
+Table 4. Context-FIDs of MA-TSDs with different components. MA: Moving Average schedule. IN: Instance Normalization
+
+| MA | IN | MFRED | WIND | SOLAR |
+| --- | --- | --- | --- | --- |
+| - | - | 29.7093 | 52.3499 | 43.6236 |
+| - | ✓ | 13.1704 | 44.6908 | 35.3704 |
+| ✓ | - | 4.9403 | 21.3606 | 8.0744 |
+| ✓ | ✓ | 2.6742 | 16.9026 | 8.0297 |
+
+MA, the model obtained considerable improvements, with MA contributing more significantly than IN, which implies the importance of MA in our framework design. Combining both modules achieved the best scores over all three datasets.
+
+Accelerated backward. For a given backward step budget $\tau \leq T = 100$ , we compare our factor-only backward strategy with 1) randomly selecting subsets from $[1, \dots, T]$ and 2) uniformly choosing a subset, i.e. $\text{linspace}(1, T, \tau)$ . Notably, given a budget $\tau$ , uniform sampling renders a fixed subset $\{t_i\}_1^\tau$ and hence a deterministic result.
+
+As shown in Figure 4, given a fixed ratio $\tau / T$ , randomly selecting the backward steps may cause large variances in model performance. Uniform sampling offered a better result than random sampling, but its marginal performance gain diminishes as $\tau / T$ grows. Compared to both random and uniform strategies, our factor-only strategy consistently offers effective results given the same $\tau / T$ budget, optimizing resources while maintaining the quality of the results. Especially on MFRED and Solar, whose periodical patterns are more significant, the factor-only strategy can even achieve performance competitive with the full-step backward, indicating that it captures the key transition steps and can be utilized to accelerate MA-TSD.
+
+
+Figure 4. Comparison among accelerated backward strategies.
+
+# 6. Related Works
+
+Diffusion models have been embraced by the time series analysis community for their advanced probability modeling ability. (Rasul et al., 2021) first combined autoregressive modeling with recurrent neural networks and the diffusion process for time series forecasting. Then (Shen & Kwok, 2023) proposed a non-autoregressive diffusion strategy for forecasting, improving both efficiency and accuracy. In addition, several works (Kollovieh et al., 2024; Alcaraz & Strodthoff, 2023; Tashiro et al., 2021) link time series forecasting and imputation by modeling both with a conditional generation design, proposing unified diffusion frameworks for the two tasks.
+
+Recently, the community began to fuse unique time series properties into diffusion models. (Fan et al., 2024) leveraged coarse time series data as guidance during the diffusion process, adding regularization terms to the loss function to constrain the backward process to be coarse-to-fine. (Shen et al., 2024) set several diffusion stages, where the previous stage generates coarse time series as condition input for the later stage to refine. (Liu et al., 2024) leveraged historical windows to retrieve the $k$ nearest samples as references guiding the diffusion model toward more accurate forecasts. Despite these recent designs tailored to time series data, they still rely on the DDPM process and hardly improve on the typical isotropic design to meet the characteristics of time series.
+
+Beyond time series diffusion models, some works have similarly investigated non-isotropic diffusion models for images. (Hoogeboom & Salimans, 2023; Rissanen et al., 2023) designed frequency-domain diffusion with Gaussian blurring as the transition. (Daras et al., 2023) generalized the transition to linear corruptions, giving examples of blurring and masking, while (Bansal et al., 2024) proposed a diffusion model with arbitrary degradation functions, for example snowification and animorphosis, but without noise. However, few of them obtained substantial improvements, let alone being further explored by the time series community.
+
+Regarding diffusion design, probably the most related work to ours is (Hoogeboom & Salimans, 2023), which shared a similar high-level idea of building the degradation process with low-pass filters (blurring in theirs, MA in ours). However, they still tried to fit low-pass filters into the traditional DDPM's Markovian process in the frequency domain, while we reformulated a non-Markovian design with a new backward process. Besides, our noise schedule is specially designed to be dataset-based, regarding the variance decrease caused by the filters on different time series data. Please refer to Appendix B.4 for more detailed information.
+
+# 7. Conclusion
+
+In this paper, we first revealed that directly applying standard DDPM to time series data may cause gradient contradictions, because of the rapid degradation of low-frequency information. A novel time series diffusion model, MA-TSD, is accordingly proposed, equipped with moving average forward transitions to keep more low-frequency information. Its backward process can be accelerated in a DDIM style and can further act as super-resolution. The experiments show that MA-TSD achieves superior performance over state-of-the-art time series diffusion models in terms of forecasting and super-resolution.
+
+# Software and Data
+
+Our code can be found at https://github.com/WillWang1113/Moving-Average-Diffusion.
+
+# Acknowledgements
+
+The work was supported in part by the Research Grants Council of the Hong Kong SAR (HKU 17200224), and in part by the Alibaba Group through Alibaba Research Intern Program.
+
+# Impact Statement
+
+This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none which we feel must be specifically highlighted here.
+
+# References
+
+Alcaraz, J. L. and Strodthoff, N. Diffusion-based time series imputation and forecasting with structured state space models. Transactions on Machine Learning Research, 2023. ISSN 2835-8856. URL https://openreview.net/forum?id=hHiIbk7ApW.
+Bansal, A., Borgnia, E., Chu, H.-M., Li, J., Kazemi, H., Huang, F., Goldblum, M., Geiping, J., and Goldstein, T. Cold diffusion: Inverting arbitrary image transforms without noise. Advances in Neural Information Processing Systems, 36, 2024.
+Daras, G., Delbracio, M., Talebi, H., Dimakis, A., and Milanfar, P. Soft diffusion: Score matching with general corruptions. Transactions on Machine Learning Research, 2023. ISSN 2835-8856. URL https://openreview.net/forum?id=W98rebX1Q.
+Fan, X., Wu, Y., Xu, C., Huang, Y., Liu, W., and Bian, J. MG-TSD: Multi-granularity time series diffusion models with guided learning process. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=CZiY6OLktd.
+Ho, J., Jain, A., and Abbeel, P. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33:6840-6851, 2020.
+Hoogeboom, E. and Salimans, T. Blurring diffusion models. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=OjDkC57x5sz.
+Jeha, P., Bohlke-Schneider, M., Mercado, P., Kapoor, S., Nirwan, R. S., Flunkert, V., Gasthaus, J., and Januschowski, T. Psa-gan: Progressive self attention gans for synthetic time series. In *The Tenth International Conference on Learning Representations*, 2022.
+Kim, T., Kim, J., Tae, Y., Park, C., Choi, J.-H., and Choo, J. Reversible instance normalization for accurate time-series forecasting against distribution shift. In International Conference on Learning Representations, 2021.
+Kollovieh, M., Ansari, A. F., Bohlke-Schneider, M., Zschiegner, J., Wang, H., and Wang, Y. B. Predict, refine, synthesize: Self-guiding diffusion models for probabilistic time series forecasting. Advances in Neural Information Processing Systems, 36, 2024.
+Li, Y., Lu, X., Wang, Y., and Dou, D. Generative time series forecasting with diffusion, denoise, and disentanglement. Advances in Neural Information Processing Systems, 35: 23009-23022, 2022.
+Li, Y., Chen, W., Hu, X., Chen, B., Zhou, M., et al. Transformer-modulated diffusion models for probabilistic multivariate time series forecasting. In The Twelfth International Conference on Learning Representations, 2024.
+Lipman, Y., Chen, R. T. Q., Ben-Hamu, H., Nickel, M., and Le, M. Flow matching for generative modeling. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=PqvMRDCJT9t.
+Liu, J., Yang, L., Li, H., and Hong, S. Retrieval-augmented diffusion models for time series forecasting. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. URL https://openreview.net/forum?id=dRJJt0Ji48.
+Liu, Y., Wu, H., Wang, J., and Long, M. Non-stationary transformers: Exploring the stationarity in time series forecasting. Advances in neural information processing systems, 35:9881-9893, 2022.
+Meinrenken, C. J., Rauschkolb, N., Abrol, S., Chakrabarty, T., Decalf, V. C., Hidey, C., McKeown, K., Mehmani, A., Modi, V., and Culligan, P. J. MFRED, 10 second interval real and reactive power for groups of 390 US apartments of varying size and vintage. Scientific Data, 7(1):375, 2020.
+Naiman, I., Berman, N., Pemper, I., Arbiv, I., Fadlon, G., and Azencot, O. Utilizing image transforms and diffusion models for generative modeling of short and long time series. Advances in Neural Information Processing Systems, 37:121699-121730, 2024a.
+Naiman, I., Erichson, N. B., Ren, P., Mahoney, M. W., and Azencot, O. Generative modeling of regular and irregular time series data via koopman VAEs. In The Twelfth International Conference on Learning Representations, 2024b. URL https://openreview.net/forum?id=eY7sLb0dVF.
+Nie, Y., Nguyen, N. H., Sinthong, P., and Kalagnanam, J. A time series is worth 64 words: Long-term forecasting with transformers. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=Jbdc0vTOcol.
+
+Peebles, W. and Xie, S. Scalable diffusion models with transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 4195-4205, 2023.
+Qin, D., Li, Y., Chen, W., Zhu, Z., Wen, Q., Sun, L., Pinson, P., and Wang, Y. Evolving multi-scale normalization for time series forecasting under distribution shifts. arXiv preprint arXiv:2409.19718, 2024.
+Rasul, K., Seward, C., Schuster, I., and Vollgraf, R. Autoregressive denoising diffusion models for multivariate probabilistic time series forecasting. In International Conference on Machine Learning, pp. 8857-8868. PMLR, 2021.
+Rissanen, S., Heinonen, M., and Solin, A. Generative modelling with inverse heat dissipation. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=4PJUBT9f201.
+Saharia, C., Ho, J., Chan, W., Salimans, T., Fleet, D. J., and Norouzi, M. Image super-resolution via iterative refinement. IEEE transactions on pattern analysis and machine intelligence, 45(4):4713-4726, 2022.
+Shen, L. and Kwok, J. Non-autoregressive conditional diffusion models for time series prediction. In International Conference on Machine Learning, pp. 31016-31029. PMLR, 2023.
+Shen, L., Chen, W., and Kwok, J. Multi-resolution diffusion models for time series forecasting. In The Twelfth International Conference on Learning Representations, 2024.
+Song, J., Meng, C., and Ermon, S. Denoising diffusion implicit models. In International Conference on Learning Representations, 2021a. URL https://openreview.net/forum?id=St1giarCHLP.
+Song, Y., Sohl-Dickstein, J., Kingma, D. P., Kumar, A., Ermon, S., and Poole, B. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations, 2021b. URL https://openreview.net/forum?id=PxTIG12RRHS.
+Tashiro, Y., Song, J., Song, Y., and Ermon, S. CSDI: Conditional score-based diffusion models for probabilistic time series imputation. Advances in Neural Information Processing Systems, 34:24804-24816, 2021.
+Wang, Y., Wu, H., Dong, J., Liu, Y., Long, M., and Wang, J. Deep time series models: A comprehensive survey and benchmark. arXiv preprint arXiv:2407.13278, 2024.
+
+Wu, H., Xu, J., Wang, J., and Long, M. Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting. Advances in neural information processing systems, 34:22419-22430, 2021.
+Xu, Z., Zeng, A., and Xu, Q. FITS: Modeling time series with $10k$ parameters. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=bWcvvZ3qMb.
+Yang, Y., Jin, M., Wen, H., Zhang, C., Liang, Y., Ma, L., Wang, Y., Liu, C., Yang, B., Xu, Z., et al. A survey on diffusion models for time series and spatio-temporal data. arXiv preprint arXiv:2404.18886, 2024.
+
+# A. Details of directly applying DDPM on time series
+
+As an example illustrating the potential glitch when DDPM is directly applied to time series data, we use the Electricity dataset, a typical time series dataset with complex patterns. We consider $L = 400$, and the spectral energy ratio $e$ is calculated as follows:
+
+$$
+e = \frac {\sum_ {i = 1} ^ {\zeta} \left| u _ {0} ^ {i} \right| ^ {2}}{\sum_ {i = \zeta + 1} ^ {N _ {\mathrm {N y q}}} \left| u _ {0} ^ {i} \right| ^ {2}}, \tag {19}
+$$
+
+where we denote the real fast Fourier transform (rFFT) of the time series $\pmb{x}_0$ as $\pmb{u}_0 = [u_0^1, u_0^2, \dots, u_0^{N_{\mathrm{Nyq}}}]$, and $u_0^i \in \mathbb{C}$ is the $i^{\text{th}}$ frequency component. The parameter $\zeta$ is the split index. We set $\zeta = \text{round}(0.2 \cdot N_{\mathrm{Nyq}})$ in this example, since the frequency components remain at a relatively low level beyond this point (see Figure 5).
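+
+A minimal sketch of this computation (our reading of Equation (19); the 0.2 split and NumPy's rFFT convention are the only assumptions):
+
+```python
+import numpy as np
+
+def spectral_energy_ratio(x, split=0.2):
+    """Spectral energy ratio e of Eq. (19): low-frequency energy divided by
+    high-frequency energy, with split index zeta = round(split * N_Nyq)."""
+    u = np.fft.rfft(x)                 # u_0^1, ..., u_0^{N_Nyq}
+    energy = np.abs(u) ** 2
+    zeta = round(split * len(u))
+    return energy[:zeta].sum() / energy[zeta:].sum()
+```
+
+For a smooth series of length $L = 400$ (e.g. a low-frequency sinusoid plus small noise), the ratio is far above 1, matching the observation in Figure 5 that most energy sits in the low-frequency bins.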
+
+
+Figure 5. Frequency energy of a data sample in the Electricity dataset. The x-axis is divided by $N_{\mathrm{Nyq}}$.
+
+During training, we fixed the random seed to guarantee the same model initialization and the same order of data samples. In this simple example, each epoch consists of 20 mini-batch updates with a batch size of 64, yielding 2000 training steps in total.
+
+# B. Details of MA-TSD
+
+# B.1. Transition matrix example
+
+As depicted in Figure 6, we consider the case of $L = 6$ as a simple example to illustrate how we obtain the square transition matrix $K_{t}$ with moving average.
+
+# B.2. Combination of denoising networks and conditioning networks
+
+The combination of the denoising networks and conditioning networks is shown in Figure 7. The Encoder embeds the noisy $\mathbf{z}_t$ and fuses it with the position embedding of $t$. In conditional generation, the condition $c$ is also encoded and fused in. The Decoder then outputs the prediction $\hat{\mathbf{z}}_0$, and the conditional decoder outputs $\hat{\sigma}, \hat{\mu}$ for denormalization, if needed. During inference, $\mathbf{z}_{t-1}$ is obtained from $\mathbf{z}_t$ and $\hat{\mathbf{z}}_0$.
+
+
+Figure 6. Example at $k_{1} = 2$ and $k_{2} = 3$. The convolution kernels are first unrolled into a matrix, which is then interpolated along the time steps to become square. Across the diffusion steps, the transition matrices are also interpolated.
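+
+A toy version of such a moving-average transition matrix can be written down directly; the truncated-and-renormalized boundary handling below is an illustrative assumption (the paper instead interpolates the unrolled kernels to obtain the square $K_t$):
+
+```python
+import numpy as np
+
+def ma_matrix(L, k):
+    """L x L matrix whose row i averages a k-wide window around step i.
+    Boundary windows are truncated and renormalized so each row sums to 1."""
+    K = np.zeros((L, L))
+    for i in range(L):
+        lo = max(0, i - (k - 1) // 2)
+        hi = min(L, i + k // 2 + 1)
+        K[i, lo:hi] = 1.0 / (hi - lo)
+    return K
+
+K = ma_matrix(6, 3)     # the k_2 = 3 case of Figure 6
+x = np.arange(6.0)
+smoothed = K @ x        # applying the transition is a single matrix multiplication
+```
+
+Because every row sums to one, $K$ preserves constant series, and applying it reduces the sample standard deviation, which is exactly the effect the decrease ratio $\gamma_t$ quantifies.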
+
+# B.3. Proofs and Derivations
+
+# B.3.1. DECREASE RATIO $\gamma_{t}$ CALCULATION
+
+We denote the decrease ratio calculated over $\pmb{z}_0$ as $\gamma_t^z$:
+
+$$
+\begin{array}{l} \gamma_ {t} ^ {z} = \mathbb {E} _ {\boldsymbol {z} _ {0} \sim q (\boldsymbol {z} _ {0})} \left[ \frac {\sigma (\boldsymbol {K} _ {t} \boldsymbol {z} _ {0})}{\sigma (\boldsymbol {z} _ {0})} \right] = \mathbb {E} _ {\boldsymbol {x} _ {0} \sim q (\boldsymbol {x} _ {0})} \left[ \frac {\sigma (\boldsymbol {K} _ {t} \frac {\boldsymbol {x} _ {0} - \mu (\boldsymbol {x} _ {0})}{\sigma (\boldsymbol {x} _ {0})})}{\sigma (\frac {\boldsymbol {x} _ {0} - \mu (\boldsymbol {x} _ {0})}{\sigma (\boldsymbol {x} _ {0})})} \right] = \mathbb {E} _ {\boldsymbol {x} _ {0} \sim q (\boldsymbol {x} _ {0})} \left[ \frac {\sigma (\boldsymbol {K} _ {t} (\boldsymbol {x} _ {0} - \mu (\boldsymbol {x} _ {0})))}{\sigma (\boldsymbol {x} _ {0} - \mu (\boldsymbol {x} _ {0}))} \right] \\ = \mathbb {E} _ {\boldsymbol {x} _ {0} \sim q (\boldsymbol {x} _ {0})} \left[ \frac {\sigma (\boldsymbol {K} _ {t} \boldsymbol {x} _ {0} - \mu (\boldsymbol {x} _ {0}))}{\sigma (\boldsymbol {x} _ {0} - \mu (\boldsymbol {x} _ {0}))} \right] = \mathbb {E} _ {\boldsymbol {x} _ {0} \sim q (\boldsymbol {x} _ {0})} \left[ \frac {\sigma (\boldsymbol {K} _ {t} \boldsymbol {x} _ {0})}{\sigma (\boldsymbol {x} _ {0})} \right] = \gamma_ {t}, \\ \end{array}
+$$
+
+where the first equality in the second line holds because moving average does not change the mean value of $\pmb{x}_0$, i.e., $K_{t}(\pmb{x}_{0} - \mu (\pmb{x}_{0})) = K_{t}\pmb{x}_{0} - \mu (\pmb{x}_{0})$, and the second equality in the second line holds because $\sigma (\pmb{x}_0 + \mathrm{const.}) = \sigma (\pmb{x}_0)$. Therefore, calculating $\gamma_t$ over $\pmb{z}_0$ is equivalent to calculating it over $\pmb{x}_0$.
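+
+The expectation defining $\gamma_t$ can be estimated by straightforward Monte Carlo; the circular 2-point moving average and Gaussian samples below are illustrative assumptions:
+
+```python
+import numpy as np
+
+def estimate_gamma(K, samples):
+    """Monte Carlo estimate of gamma_t = E[ sigma(K x0) / sigma(x0) ]."""
+    return float(np.mean([np.std(K @ x) / np.std(x) for x in samples]))
+
+L = 8
+K = np.zeros((L, L))
+for i in range(L):
+    K[i, i] = K[i, (i + 1) % L] = 0.5   # circular 2-point moving average
+
+rng = np.random.default_rng(0)
+gamma = estimate_gamma(K, rng.standard_normal((2000, L)))
+# For white noise, averaging adjacent pairs shrinks the std by roughly 1/sqrt(2)
+```
+
+By the derivation above, feeding in the instance-normalized $z_0$ instead of $x_0$ yields the same estimate.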
+
+# B.3.2. THE CHOICE OF $q(\pmb{z}_{t-1}|\pmb{z}_t, \pmb{z}_0)$
+
+Given the defined $q(\pmb{z}_{1:T}|\pmb{z}_0) \coloneqq q(\pmb{z}_T|\pmb{z}_0)\prod_{t=2}^T q(\pmb{z}_{t-1}|\pmb{z}_t, \pmb{z}_0)$, the defined $q(\pmb{z}_{t-1}|\pmb{z}_t, \pmb{z}_0)$ in Equation (14), and $q(\pmb{z}_T|\pmb{z}_0) \coloneqq \mathcal{N}(\pmb{z}_T; \pmb{K}_T\pmb{z}_0, \beta_T^2\pmb{I})$, we have $q(\pmb{z}_t|\pmb{z}_0) = \mathcal{N}(\pmb{z}_t; \pmb{K}_t\pmb{z}_0, \beta_t^2\pmb{I})$ for all $t$. The proof is similar to that of (Song et al., 2021a), with the transition generalized to $\pmb{K}_t\pmb{x}_0$.
+
+Proof. We prove the above statement by induction. Suppose that $q(\pmb{z}_t | \pmb{z}_0) = \mathcal{N}(\pmb{z}_t; \pmb{K}_t \pmb{z}_0, \beta_t^2 \pmb{I})$ holds for some $t \leq T$. If we can show that $q(\pmb{z}_{t-1} | \pmb{z}_0) = \mathcal{N}(\pmb{z}_{t-1}; \pmb{K}_{t-1} \pmb{z}_0, \beta_{t-1}^2 \pmb{I})$ then also holds, the claim follows by induction from $t = T$ down to $t = 1$, starting from the base case $q(\pmb{z}_T | \pmb{z}_0) := \mathcal{N}(\pmb{z}_T; \pmb{K}_T \pmb{z}_0, \beta_T^2 \pmb{I})$.
+
+First, via the marginalization, we have:
+
+$$
+q \left(\boldsymbol {z} _ {t - 1} \mid \boldsymbol {z} _ {0}\right) = \int_ {\boldsymbol {z} _ {t}} q \left(\boldsymbol {z} _ {t - 1} \mid \boldsymbol {z} _ {t}, \boldsymbol {z} _ {0}\right) q \left(\boldsymbol {z} _ {t} \mid \boldsymbol {z} _ {0}\right) d \boldsymbol {z} _ {t}. \tag {20}
+$$
+
+
+Figure 7. The denoising and statistics prediction models for time series forecasting. To include the unconditional generation $c = \varnothing$ , we make the dashed line for illustration. Pos. Emb.=Position Embedder, Cond. Enc.=Condition Encoder, Cond. Dec.=Condition Decoder.
+
+With the given $q(\pmb{z}_t|\pmb{z}_0) = \mathcal{N}(\pmb{z}_t; \pmb{K}_t\pmb{z}_0, \beta_t^2\pmb{I})$ and $q(\pmb{z}_{t-1}|\pmb{z}_t, \pmb{z}_0) = \mathcal{N}\left(\pmb{K}_{t-1}\pmb{z}_0 + \frac{\sqrt{\beta_{t-1}^2 - \eta_t^2}}{\beta_t} (\pmb{z}_t - \pmb{K}_t\pmb{z}_0), \eta_t^2\pmb{I}\right)$ , we can deduce that $q(\pmb{z}_{t-1}|\pmb{z}_0)$ is also Gaussian, with the mean $\mu_{z_{t-1}}$ and the variance $\Sigma_{z_{t-1}}$ :
+
+$$
+\mu_ {z _ {t - 1}} = \boldsymbol {K} _ {t - 1} \boldsymbol {z} _ {0} + \frac {\sqrt {\beta_ {t - 1} ^ {2} - \eta_ {t} ^ {2}}}{\beta_ {t}} \left(\boldsymbol {K} _ {t} \boldsymbol {z} _ {0} - \boldsymbol {K} _ {t} \boldsymbol {z} _ {0}\right) = \boldsymbol {K} _ {t - 1} \boldsymbol {z} _ {0}, \tag {21}
+$$
+
+$$
+\Sigma_ {z _ {t - 1}} = \left(\eta_ {t} ^ {2} + \beta_ {t} ^ {2} \cdot \frac {\beta_ {t - 1} ^ {2} - \eta_ {t} ^ {2}}{\beta_ {t} ^ {2}}\right) \pmb {I} = \beta_ {t - 1} ^ {2} \pmb {I}.
+$$
+
+Therefore, $q(\pmb{z}_{t-1}|\pmb{z}_0) = \mathcal{N}(\pmb{K}_{t-1}\pmb{z}_0, \beta_{t-1}^2\pmb{I})$ holds, and the induction proceeds.
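+
+The marginalization can be sanity-checked by simulation; the scalar toy values below stand in for $K_t z_0$, $\beta_t$, and $\eta_t$ (all assumptions):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+z0, K_t, K_tm1 = 1.5, 0.6, 0.8              # toy scalars in place of K_t z_0 etc.
+beta_t, beta_tm1, eta_t = 1.0, 0.7, 0.3
+n = 200_000
+
+# Sample z_t ~ q(z_t | z_0), then z_{t-1} ~ q(z_{t-1} | z_t, z_0)
+z_t = K_t * z0 + beta_t * rng.standard_normal(n)
+z_tm1 = (K_tm1 * z0
+         + np.sqrt(beta_tm1**2 - eta_t**2) / beta_t * (z_t - K_t * z0)
+         + eta_t * rng.standard_normal(n))
+
+# The marginal should be N(K_{t-1} z_0, beta_{t-1}^2), per Eq. (21) and the
+# variance computation below it
+mean_err = abs(z_tm1.mean() - K_tm1 * z0)
+std_err = abs(z_tm1.std() - beta_tm1)
+```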
+
+# B.3.3. BACKWARD ODE
+
+Let us first consider our backward process without acceleration, given by:
+
+$$
+\boldsymbol {z} _ {t - 1} = \boldsymbol {K} _ {t - 1} f _ {\theta} \left(\boldsymbol {z} _ {t}, t, \boldsymbol {c}\right) + \frac {\sqrt {\beta_ {t - 1} ^ {2} - \eta_ {t} ^ {2}}}{\beta_ {t}} \left(\boldsymbol {z} _ {t} - \boldsymbol {K} _ {t} f _ {\theta} \left(\boldsymbol {z} _ {t}, t, \boldsymbol {c}\right)\right) + \eta_ {t} \boldsymbol {\epsilon}. \tag {22}
+$$
+
+When $\eta_t = 0$ , we can have:
+
+$$
+\boldsymbol {z} _ {t - 1} = \boldsymbol {K} _ {t - 1} f _ {\theta} \left(\boldsymbol {z} _ {t}, t, \boldsymbol {c}\right) + \frac {\beta_ {t - 1}}{\beta_ {t}} \left(\boldsymbol {z} _ {t} - \boldsymbol {K} _ {t} f _ {\theta} \left(\boldsymbol {z} _ {t}, t, \boldsymbol {c}\right)\right) \tag {23}
+$$
+
+After reformulation, we further have:
+
+$$
+\frac {\boldsymbol {z} _ {t - 1}}{\beta_ {t - 1}} - \frac {\boldsymbol {z} _ {t}}{\beta_ {t}} = \left(\frac {\boldsymbol {K} _ {t - 1}}{\beta_ {t - 1}} - \frac {\boldsymbol {K} _ {t}}{\beta_ {t}}\right) f _ {\theta} (\boldsymbol {z} _ {t}, t, \boldsymbol {c}), \tag {24}
+$$
+
+which can be viewed as the Euler-method numerical solution, with discrete step $\Delta t = 1$, of the following ODE:
+
+$$
+d \left(\frac {\boldsymbol {z} _ {t}}{\beta_ {t}}\right) = f _ {\theta} ^ {\top} \left(\boldsymbol {z} _ {t}, t\right) d \left(\frac {\boldsymbol {K} _ {t} ^ {\top}}{\beta_ {t}}\right). \tag {25}
+$$
+
+Now, we consider the even interpolation between $\{K_i'\}_{i=1}^n$, where we anchor the original $K_i'$ and linearly inject $m$ intermediate matrices between $K_i'$ and $K_{i+1}'$. The collected diffusion step subset $\{t_i^*\}_{i=1}^n$ is then essentially $\{i(m+1)\}_{i=1}^n = \{m+1, 2(m+1), \dots, n(m+1)\}$. Therefore, the backward process over $\{t_i^*\}_{i=1}^n$ in this situation is simply the Euler method with discrete step $\Delta t = m + 1$.
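+
+For reference, a single deterministic backward update ($\eta_t = 0$, Equation (23)) is only a few lines; the predicted $\hat{z}_0$ argument stands in for the output of the trained network $f_\theta$:
+
+```python
+import numpy as np
+
+def backward_step(z_t, z0_hat, K_t, K_tm1, beta_t, beta_tm1):
+    """Deterministic (eta_t = 0) backward step of Eq. (23)."""
+    return K_tm1 @ z0_hat + (beta_tm1 / beta_t) * (z_t - K_t @ z0_hat)
+```
+
+With a perfect prediction $\hat{z}_0 = z_0$ and a noise-free $z_t = K_t z_0$, the step returns exactly $K_{t-1} z_0$; visiting only every $(m+1)$-th step implements the accelerated Euler scheme described above.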
+
+# B.3.4. HYBRID OPTIMIZATION
+
+The conditional diffusion loss function without instance normalization over $\pmb{x}_0$ can be written as:
+
+$$
+\mathcal {L} _ {\boldsymbol {x}} = \mathbb {E} _ {\boldsymbol {x} _ {0}, \boldsymbol {c}, t} \left[ \left\| h _ {\Theta} \left(\boldsymbol {x} _ {t}, t, \boldsymbol {c}\right) - \boldsymbol {x} _ {0} \right\| _ {2} ^ {2} \right], \tag {26}
+$$
+
+where we use $h_\Theta$ to distinguish from $f_{\theta}$ and $g_{\omega}$ mentioned previously.
+
+Further, we can express $\pmb{x}_0$ as $\pmb{x}_0 = \pmb{z}_0 \cdot \sigma(\pmb{x}_0) + \mu(\pmb{x}_0)$, and then parameterize the denoising network accordingly: $h_\Theta(\pmb{x}_t, t, \pmb{c}) = f_\theta(\pmb{z}_t, t, \pmb{c}) \cdot g_\omega^\sigma(\pmb{c}) + g_\omega^\mu(\pmb{c})$.
+
+For simplicity, we denote $l_{\pmb{x}} = \| h_{\Theta}(\pmb{x}_t,t,\pmb{c}) - \pmb{x}_0\| _2$ , and have:
+
+$$
+\begin{array}{l}
+l_{\boldsymbol{x}}^2 = \left\| h_{\Theta}(\boldsymbol{x}_t, t, \boldsymbol{c}) - \boldsymbol{x}_0 \right\|_2^2 \quad (27) \\
+= \left\| f_{\theta}(\boldsymbol{z}_t, t, \boldsymbol{c}) \cdot g_{\omega}^{\sigma}(\boldsymbol{c}) + g_{\omega}^{\mu}(\boldsymbol{c}) - \boldsymbol{z}_0 \cdot \sigma(\boldsymbol{x}_0) - \mu(\boldsymbol{x}_0) \right\|_2^2 \quad (28) \\
+\leq \left( \left\| f_{\theta}(\boldsymbol{z}_t, t, \boldsymbol{c}) \cdot g_{\omega}^{\sigma}(\boldsymbol{c}) - \boldsymbol{z}_0 \cdot \sigma(\boldsymbol{x}_0) \right\|_2 + \left\| g_{\omega}^{\mu}(\boldsymbol{c}) - \mu(\boldsymbol{x}_0) \right\|_2 \right)^2 \quad (29) \\
+= \left( \left\| f_{\theta}(\boldsymbol{z}_t, t, \boldsymbol{c}) \cdot g_{\omega}^{\sigma}(\boldsymbol{c}) - \boldsymbol{z}_0 \cdot \sigma(\boldsymbol{x}_0) + \boldsymbol{z}_0 \cdot g_{\omega}^{\sigma}(\boldsymbol{c}) - \boldsymbol{z}_0 \cdot g_{\omega}^{\sigma}(\boldsymbol{c}) \right\|_2 + l_{\mu} \right)^2 \quad (30) \\
+= \left( \left\| \boldsymbol{z}_0 \cdot \left( g_{\omega}^{\sigma}(\boldsymbol{c}) - \sigma(\boldsymbol{x}_0) \right) + g_{\omega}^{\sigma}(\boldsymbol{c}) \cdot \left( f_{\theta}(\boldsymbol{z}_t, t, \boldsymbol{c}) - \boldsymbol{z}_0 \right) \right\|_2 + l_{\mu} \right)^2 \quad (31) \\
+\leq \left( \left\| \boldsymbol{z}_0 \right\|_{\infty} \left\| g_{\omega}^{\sigma}(\boldsymbol{c}) - \sigma(\boldsymbol{x}_0) \right\|_2 + \left\| g_{\omega}^{\sigma}(\boldsymbol{c}) \right\|_2 \left\| f_{\theta}(\boldsymbol{z}_t, t, \boldsymbol{c}) - \boldsymbol{z}_0 \right\|_2 + l_{\mu} \right)^2 \quad \text{(triangle inequality)} \quad (32) \\
+= \left( \left\| \boldsymbol{z}_0 \right\|_{\infty} l_{\sigma} + \left\| g_{\omega}^{\sigma}(\boldsymbol{c}) \right\|_2 l_{\boldsymbol{z}} + l_{\mu} \right)^2 \quad (33) \\
+\leq \left( \left\| \boldsymbol{z}_0 \right\|_{\infty}^2 + \left\| g_{\omega}^{\sigma}(\boldsymbol{c}) \right\|_2^2 + 1 \right) \left( l_{\boldsymbol{z}}^2 + l_{\mu}^2 + l_{\sigma}^2 \right) \quad \text{(Cauchy-Schwarz)} \quad (34)
+\end{array}
+$$
+
+Further, we can bound $\| \pmb{z}_0 \|_{\infty}^2$ by its maximum value $z_{\max}$ over the whole dataset. For $g_{\omega}^{\sigma}(\pmb{c})$, which approximates the real standard deviation, $g_{\omega}^{\sigma}(\pmb{c}) \approx \sigma(\pmb{x}_0)$, the maximum $\sigma(\pmb{x}_0)$ over the whole training dataset $q(\pmb{x}_0)$ can be obtained, and we can limit the output of $g_{\omega}^{\sigma}$ so that $\| g_{\omega}^{\sigma}(\pmb{c}) \|_2^2 \leq \sigma_{\max}$. Therefore, we can bound $\left( \| \pmb{z}_0 \|_{\infty}^2 + \| g_{\omega}^{\sigma}(\pmb{c}) \|_2^2 + 1 \right) \leq (z_{\max} + \sigma_{\max} + 1) = \lambda$.
+
+Therefore, we have:
+
+$$
+\begin{array}{l}
+\mathcal{L}_{\boldsymbol{x}} = \mathbb{E}_{\boldsymbol{x}_0, \boldsymbol{c}, t}\left[ l_{\boldsymbol{x}}^2 \right] \quad (35) \\
+\leq \mathbb{E}_{\boldsymbol{x}_0, \boldsymbol{c}, t}\left[ \left( \left\| \boldsymbol{z}_0 \right\|_{\infty}^2 + \left\| g_{\omega}^{\sigma}(\boldsymbol{c}) \right\|_2^2 + 1 \right) \left( l_{\boldsymbol{z}}^2 + l_{\mu}^2 + l_{\sigma}^2 \right) \right] \quad (36) \\
+\leq \mathbb{E}_{\boldsymbol{x}_0, \boldsymbol{c}, t}\left[ \lambda \left( l_{\boldsymbol{z}}^2 + l_{\mu}^2 + l_{\sigma}^2 \right) \right]. \quad (37)
+\end{array}
+$$
+
+We compare the hybrid loss function:
+
+$$
+\begin{array}{l}
+\mathcal{L}_{\text{hybrid}} = \lambda_z \mathcal{L}_z + \lambda_\mu \mathcal{L}_\mu + \lambda_\sigma \mathcal{L}_\sigma \\
+= \mathbb{E}_{\boldsymbol{x}_0, \boldsymbol{c}, t}\left[ \lambda_z \left\| f_\theta(\boldsymbol{z}_t, t, \boldsymbol{c}) - \boldsymbol{z}_0 \right\|_2^2 + \lambda_\mu \left\| g_\omega^\mu(\boldsymbol{c}) - \mu(\boldsymbol{x}_0) \right\|_2^2 + \lambda_\sigma \left\| g_\omega^\sigma(\boldsymbol{c}) - \sigma(\boldsymbol{x}_0) \right\|_2^2 \right] \tag{38} \\
+= \mathbb{E}_{\boldsymbol{x}_0, \boldsymbol{c}, t}\left[ \lambda_z l_{\boldsymbol{z}}^2 + \lambda_\mu l_\mu^2 + \lambda_\sigma l_\sigma^2 \right]. \\
+\end{array}
+$$
+
+We can see that if $\lambda_z = \lambda_\mu = \lambda_\sigma = \lambda$, then $\mathcal{L}_{\pmb{x}} \leq \mathcal{L}_{\mathrm{hybrid}}$.
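+
+A plain-NumPy sketch of the objective in Equation (38) (in training these would be framework tensors with gradients; the per-sample mean/std normalization is our reading of the setup):
+
+```python
+import numpy as np
+
+def hybrid_loss(f_out, g_mu_out, g_sigma_out, x0,
+                lam_z=1.0, lam_mu=1.0, lam_sigma=1.0):
+    """lam_z ||f - z0||^2 + lam_mu ||g_mu - mu(x0)||^2 + lam_sigma ||g_sigma - sigma(x0)||^2."""
+    mu, sigma = x0.mean(), x0.std()
+    z0 = (x0 - mu) / sigma                   # instance-normalized target
+    return (lam_z * np.sum((f_out - z0) ** 2)
+            + lam_mu * (g_mu_out - mu) ** 2
+            + lam_sigma * (g_sigma_out - sigma) ** 2)
+```
+
+The loss vanishes exactly when the denoiser recovers $z_0$ and the conditioning networks recover the true mean and standard deviation.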
+
+# B.4. Differences from Blurring Diffusion Models (Hoogeboom & Salimans, 2023)
+
+From a high-level perspective, BDM and our method share a similar idea, i.e., building the degradation process with low-pass filters (blurring in BDM, moving average in ours). However, there are clear distinctions.
+
+Filtering space: We filter the data in the time domain by convolution (matrix multiplication), $q(\boldsymbol{x}_t \mid \boldsymbol{x}_0) = \mathcal{N}(\boldsymbol{K}_t \boldsymbol{x}_0, \beta_t^2 \boldsymbol{I})$, while BDM blurs the images in the frequency domain and transforms back to the pixel domain (using the convolution theorem), i.e., $q(\boldsymbol{x}_t \mid \boldsymbol{x}_0) = \mathcal{N}(\boldsymbol{V} \boldsymbol{\alpha}_t \boldsymbol{V}^\top \boldsymbol{x}_0, \sigma_t^2 \boldsymbol{I})$, where $\boldsymbol{V}^\top$ and $\boldsymbol{V}$ are the Discrete Cosine Transform (DCT) and inverse DCT (IDCT), and $\boldsymbol{\alpha}_t$ is a diagonal matrix representing the frequency response of the Gaussian blurring kernel, each entry $\alpha_t^i \in (0,1]$ of which is the coefficient of the $i^{\text{th}}$ frequency component. For low-pass filters and all $t$, $\alpha_t^i$ decreases with $i$ until (nearly) zero, suppressing high-frequency components.
+
+Markovian or not: Since $\pmb{\alpha}_{t}$ is diagonal, BDM proposed that a standard Markovian DDPM can be constructed for each frequency component. The one-step transition is accordingly defined as $q(\pmb{u}_t \mid \pmb{u}_{t-1}) = \mathcal{N}(\pmb{\alpha}_{t|t-1}\pmb{u}_{t-1}, \sigma_{t|t-1}^2\pmb{I})$,
+
+where $\pmb{u}_t = \pmb{V}^\top \pmb{x}_t$ is the frequency representation and $\alpha_{t|t-1} = \alpha_t / \alpha_{t-1}$. However, dividing by $\alpha_{t-1}$ can be numerically unstable in practice. As mentioned above, $\alpha_t^i$ can become (nearly) zero for larger $i$, so the division is unstable across the diffusion steps. Although a small epsilon can be added to keep the division stable, tiny errors in the high-frequency components are still greatly amplified through the iterative backward process, and the generated data may end up dominated by improperly amplified high-frequency components. Therefore, when designing a diffusion process with non-reversible low-pass filters, we believe it is improper to follow the Markovian assumption and define the required $q(\pmb{x}_t \mid \pmb{x}_{t-1})$. Instead, in our framework, faced with a similarly non-reversible MA, we bypass the definition of $q(\pmb{x}_t \mid \pmb{x}_{t-1})$, assume a non-Markovian $q(\pmb{x}_{1:T} \mid \pmb{x}_0)$ (in the DDIM style), and then carefully define $q(\pmb{x}_{t-1} \mid \pmb{x}_t, \pmb{x}_0)$ so that $q(\pmb{x}_t \mid \pmb{x}_0)$ holds for all $t$. Thus, Markovianity is another distinct difference between our method and BDM.
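+
+The amplification argument can be made concrete with a toy scalar example (the numbers are illustrative assumptions, not BDM's actual schedule):
+
+```python
+# A low-pass response that is nearly zero at some high frequency:
+alpha_hi = 1e-10
+u0_hi = 0.2                                    # true high-frequency component
+u_blurred = alpha_hi * u0_hi                   # forward blurring almost erases it
+noise = 1e-6                                   # tiny float/model error on top
+u0_recovered = (u_blurred + noise) / alpha_hi  # naive inversion divides by alpha
+amplification = abs(u0_recovered - u0_hi)      # = noise / alpha_hi, about 1e4
+```
+
+A 1e-6 perturbation becomes an error four orders of magnitude larger than the true component, which is why transitions that divide by near-zero filter responses are avoided in our framework.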
+
+Noise schedule: BDM designed $\pmb{\alpha}_{t} = a_{t}\pmb{d}_{t}$, where $\pmb{d}_t$ is the frequency response of the blurring kernel and $a_{t}\in [0,1]$ is an extra scalar decreasing in $t$, with the noise schedule $\sigma_t^2 = 1 - a_t^2$. In our framework, the noise schedule $\beta_{t}$ is dataset-based, chosen according to the variance decrease caused by MA (Equation (13)) on each dataset.
+
+In summary, despite the similar high-level idea, there are clear differences between BDM and our method. BDM fits into the standard Markovian DDPM framework, while we reformulate a framework specially adapted to moving average filters and time series.
+
+# C. Experiment details
+
+We run our experiments on a single NVIDIA GeForce RTX 4090 24GB GPU.
+
+# C.1. Synthesis
+
+We follow the setting of KoVAE (Naiman et al., 2024b) and ImagenTime (Naiman et al., 2024a) on three datasets, ETTh2, Exchange, and ECG (medical time series), i.e., $L = 24$. KoVAE is a SOTA VAE-based time series generative model, while ImagenTime is a diffusion-based one.
+
+We also include discriminative score (Disc. Score) and predictive score (Pred. Score) as additional metrics for evaluating the fidelity and usefulness of synthetic time series, as the following table shows.
+
+Table 5. Results on standard time series synthesis
+
+| Dataset | Model | Disc. Score | Pred. Score | Context-FID |
+| --- | --- | --- | --- | --- |
+| ETTh2 | KoVAE | 0.069 | 0.034 | 0.258 |
+| ETTh2 | ImagenTime | 0.053 | 0.054 | 0.118 |
+| ETTh2 | Ours | 0.044 | 0.026 | 0.075 |
+| Exchange | KoVAE | 0.137 | 0.038 | 1.520 |
+| Exchange | ImagenTime | 0.129 | 0.067 | 1.112 |
+| Exchange | Ours | 0.030 | 0.027 | 0.083 |
+| ECG | KoVAE | 0.459 | 0.081 | 1.206 |
+| ECG | ImagenTime | 0.400 | 0.079 | 1.223 |
+| ECG | Ours | 0.345 | 0.076 | 0.979 |
+
+We can see that compared to the SOTA time series generative models, our method still shows salient improvements in discriminative score and predictive score, illustrating the capability of generating high-fidelity and useful synthetic time series samples.
+
+# C.2. Forecasting
+
+The length of the look-back window is set to 96, and the target time series length is set to $L \in \{96, 192, 336, 720\}$. This setting is aligned with (Wang et al., 2024). Table 6 records the information of the forecasting datasets: ETT,
+
+Table 6. Forecasting datasets
+
+| Dataset | Resolution | Time steps | Description |
+| --- | --- | --- | --- |
+| Electricity | 1 hour | 26304 | Electricity consumption |
+| ETTh | 1 hour | 17420 | Oil temperature of power transformers |
+| ETTm | 15 min | 69680 | Oil temperature of power transformers |
+| Exchange | 1 day | 7588 | Panel data of exchange rates |
+| Traffic | 1 hour | 17544 | Traffic loads |
+| Weather | 10 min | 52695 | Meteorological indicators |
+
+Exchange, Traffic, and Weather.
+
+We set the batch size to 64, the learning rate to $2 \times 10^{-4}$, the number of training epochs to 100 with early stopping, and the number of diffusion steps to $T = 100$. To include the condition input, the condition encoder and decoder are both Multi-Layer Perceptrons (MLPs). The hyperparameters of the hybrid optimization are chosen as $\lambda_z = \lambda_\mu = \lambda_\sigma = 1$. The specific network architectures can be found in our source code.
+
+The diffusion models ran inference 100 times to calculate the metrics. For MSE, we averaged over the 100 runs to obtain deterministic forecasts, while for CRPS, we first calculated nine quantiles at $\{0.1, 0.2, \dots, 0.9\}$ and then approximated the CRPS.
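+
+One standard way to approximate CRPS from a finite set of quantiles is the average pinball (quantile) loss; the exact discretization used in our implementation may differ slightly, so this is a sketch:
+
+```python
+import numpy as np
+
+def crps_from_quantiles(y, q_preds, levels):
+    """CRPS(F, y) ~ (2 / K) * sum_k rho_{tau_k}(y - q_k), where
+    rho_tau(u) = u * (tau - 1{u < 0}) is the pinball loss."""
+    u = y - np.asarray(q_preds, dtype=float)
+    taus = np.asarray(levels, dtype=float)
+    pinball = u * (taus - (u < 0))
+    return 2.0 * float(pinball.sum()) / len(taus)
+
+levels = np.arange(1, 10) / 10    # the nine quantile levels {0.1, ..., 0.9}
+```
+
+As a sanity check, a point forecast that is off by a constant $c$ (all nine quantiles equal to $y + c$) yields a CRPS of $|c|$, consistent with CRPS reducing to absolute error for deterministic forecasts.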
+
+For each target length, benchmarks and our proposed model are trained with 5 different random initialization seeds, and the full results on each target length are reported in Table 7.
+
+Non-diffusion time series forecasting benchmarks. Although our paper focuses on improving time series diffusion models, we believe it is necessary to include SOTA non-diffusion time series forecasting methods as a reference. Therefore, we included Autoformer (Wu et al., 2021), Non-stationary Transformer (Liu et al., 2022), and PatchTST (Nie et al., 2023) to compare deterministic forecasting performance. We ran all models in the same setting mentioned above, i.e., $L \in \{96, 192, 336, 720\}$, and the MSEs averaged over all $L$ are reported in Table 8.
+
+Regarding overall performance, our model still ranks first among these benchmarks, though it is slightly inferior to PatchTST on ETTh2 and Weather.
+
+It should be noted that these SOTA architectures are particularly tailored for time series forecasting and well adapted to the benchmark datasets, while forecasting is only one downstream application of our proposed MA-TSD framework. Therefore, we believe there is great potential to incorporate these SOTA architectures into our MA-TSD framework to achieve better forecasting performance in future work.
+
+# C.3. Super-resolution
+
+For time series super-resolution, we set the length of the time series to $L = 576$, and likewise train with a batch size of 64, a learning rate of $2 \times 10^{-4}$, 100 training epochs with early stopping, and $T = 100$ diffusion steps.
+
+The information of the datasets, MFRED (Meinrenken et al., 2020), Wind, and Solar, is listed in Table 9.
+
+Table 9. Super-resolution datasets
+
+| Dataset | Resolution | Time steps | Description |
+| --- | --- | --- | --- |
+| MFRED | 5 min | 25908 | Household electricity load in Manhattan |
+| Wind | 5 min | 26496 | Power generation from Australian wind farms |
+| Solar | 5 min | 52992 | Power generation from Australian PV panels |
+
+Figure 8. Forecasting samples on $L = 96$
+
+Figure 9. Forecasting samples on $L = 192$
+
+Figure 10. Forecasting samples on $L = 336$
+
+Table 7. Forecasting performance measured in MSE and CRPS.
+
+| Dataset | L | MA-TSD MSE | MA-TSD CRPS | mrdiff MSE | mrdiff CRPS | TMDM MSE | TMDM CRPS | SSSD MSE | SSSD CRPS | D3VAE MSE | D3VAE CRPS | CSDI MSE | CSDI CRPS |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Electricity | 96 | 0.288±0.003 | 0.158±0.001 | 0.496±0.008 | 0.226±0.004 | 0.348±0.018 | 0.175±0.006 | 0.976±0.043 | 0.309±0.004 | 0.816±0.078 | 0.306±0.014 | 0.358±0.022 | 0.168±0.006 |
+| Electricity | 192 | 0.297±0.006 | 0.160±0.002 | 0.466±0.010 | 0.218±0.002 | 0.380±0.020 | 0.182±0.005 | 1.020±0.037 | 0.319±0.004 | 0.885±0.083 | 0.314±0.023 | 0.415±0.006 | 0.185±0.002 |
+| Electricity | 336 | 0.347±0.008 | 0.177±0.003 | 0.496±0.009 | 0.224±0.006 | 0.423±0.016 | 0.192±0.004 | 1.035±0.040 | 0.324±0.006 | 0.808±0.126 | 0.307±0.021 | 0.459±0.011 | 0.195±0.003 |
+| Electricity | 720 | 0.430±0.018 | 0.204±0.003 | 0.657±0.025 | 0.275±0.006 | 0.478±0.025 | 0.203±0.005 | 1.072±0.037 | 0.335±0.004 | 0.871±0.138 | 0.318±0.021 | 0.601±0.052 | 0.228±0.011 |
+| ETTh2 | 96 | 0.136±0.002 | 0.121±0.001 | 0.149±0.003 | 0.134±0.001 | 0.208±0.015 | 0.141±0.006 | 0.590±0.039 | 0.27±0.011 | 1.577±0.721 | 0.464±0.106 | 0.212±0.033 | 0.140±0.011 |
+| ETTh2 | 192 | 0.199±0.007 | 0.149±0.002 | 0.194±0.004 | 0.155±0.002 | 0.241±0.010 | 0.156±0.004 | 0.712±0.084 | 0.304±0.023 | 1.635±0.339 | 0.461±0.042 | 0.236±0.009 | 0.151±0.002 |
+| ETTh2 | 336 | 0.229±0.009 | 0.166±0.003 | 0.236±0.003 | 0.176±0.002 | 0.277±0.009 | 0.170±0.002 | 0.799±0.132 | 0.331±0.036 | 1.522±0.722 | 0.414±0.089 | 0.262±0.009 | 0.165±0.006 |
+| ETTh2 | 720 | 0.284±0.017 | 0.191±0.006 | 0.289±0.005 | 0.194±0.003 | 0.278±0.006 | 0.169±0.003 | 0.779±0.088 | 0.339±0.027 | 0.851±0.200 | 0.330±0.040 | 0.319±0.040 | 0.199±0.014 |
+| ETTm2 | 96 | 0.070±0.002 | 0.085±0.001 | 0.122±0.042 | 0.105±0.015 | 0.090±0.008 | 0.089±0.004 | 0.549±0.061 | 0.264±0.017 | 3.743±0.637 | 0.719±0.059 | 1.609±0.417 | 0.427±0.067 |
+| ETTm2 | 192 | 0.105±0.003 | 0.105±0.002 | 0.125±0.010 | 0.112±0.004 | 0.142±0.008 | 0.114±0.004 | 0.782±0.070 | 0.332±0.018 | 2.934±0.639 | 0.624±0.077 | 1.763±0.389 | 0.438±0.054 |
+| ETTm2 | 336 | 0.136±0.004 | 0.121±0.002 | 0.198±0.013 | 0.144±0.008 | 0.187±0.020 | 0.132±0.006 | 1.038±0.124 | 0.394±0.029 | 3.490±0.866 | 0.671±0.090 | 2.751±0.747 | 0.556±0.113 |
+| ETTm2 | 720 | 0.185±0.003 | 0.143±0.002 | 0.235±0.023 | 0.157±0.007 | 0.297±0.061 | 0.166±0.015 | 1.206±0.085 | 0.437±0.021 | 3.212±1.440 | 0.585±0.115 | 2.369±0.256 | 0.466±0.021 |
+| Exchange | 96 | 0.098±0.006 | 0.110±0.004 | 0.102±0.005 | 0.104±0.003 | 0.392±0.096 | 0.196±0.025 | 2.572±0.200 | 0.620±0.022 | 0.674±0.131 | 0.290±0.037 | 0.170±0.083 | 0.123±0.021 |
+| Exchange | 192 | 0.187±0.005 | 0.158±0.002 | 0.251±0.011 | 0.170±0.003 | 0.670±0.097 | 0.297±0.019 | 3.672±0.318 | 0.775±0.040 | 2.608±0.859 | 0.601±0.113 | 0.259±0.089 | 0.170±0.018 |
+| Exchange | 336 | 0.333±0.017 | 0.221±0.005 | 0.456±0.034 | 0.220±0.006 | 0.929±0.116 | 0.333±0.011 | 3.131±0.088 | 0.710±0.017 | 2.851±0.495 | 0.674±0.063 | 0.699±0.316 | 0.302±0.037 |
+| Exchange | 720 | 0.869±0.140 | 0.382±0.037 | 1.111±0.069 | 0.353±0.015 | 1.163±0.270 | 0.358±0.036 | 2.226±0.143 | 0.592±0.025 | 2.301±0.732 | 0.587±0.143 | 3.895±3.406 | 0.617±0.313 |
+| Traffic | 96 | 0.166±0.004 | 0.112±0.003 | 0.269±0.002 | 0.153±0.002 | 0.209±0.019 | 0.114±0.004 | 1.943±0.017 | 0.453±0.02 | 4.714±2.358 | 0.729±0.149 | 0.278±0.017 | 0.145±0.006 |
+| Traffic | 192 | 0.157±0.003 | 0.109±0.003 | 0.228±0.001 | 0.136±0.001 | 0.172±0.010 | 0.105±0.005 | 1.959±0.006 | 0.455±0.002 | 7.353±0.692 | 0.918±0.049 | 0.276±0.022 | 0.143±0.009 |
+| Traffic | 336 | 0.157±0.005 | 0.114±0.004 | 0.225±0.011 | 0.137±0.007 | 0.161±0.010 | 0.102±0.005 | 1.965±0.015 | 0.458±0.003 | 5.155±2.550 | 0.734±0.153 | 0.310±0.034 | 0.153±0.010 |
+| Traffic | 720 | 0.184±0.010 | 0.128±0.007 | 0.267±0.01 | 0.155±0.005 | 0.180±0.010 | 0.109±0.003 | 1.998±0.022 | 0.465±0.004 | 8.212±0.999 | 0.944±0.054 | 1.132±0.629 | 0.312±0.106 |
+| Weather | 96 | 0.096±0.001 | 0.102±0.002 | 0.108±0.006 | 0.109±0.003 | 0.109±0.011 | 0.100±0.006 | 0.511±0.029 | 0.265±0.010 | 1.447±0.138 | 0.448±0.021 | 0.095±0.002 | 0.089±0.001 |
+| Weather | 192 | 0.146±0.004 | 0.126±0.002 | 0.152±0.003 | 0.130±0.002 | 0.163±0.017 | 0.120±0.006 | 0.660±0.049 | 0.305±0.015 | 1.901±0.324 | 0.508±0.034 | 0.138±0.004 | 0.109±0.002 |
+| Weather | 336 | 0.225±0.006 | 0.161±0.005 | 0.221±0.006 | 0.155±0.002 | 0.250±0.022 | 0.149±0.007 | 0.759±0.059 | 0.331±0.015 | 1.600±0.313 | 0.457±0.039 | 0.204±0.004 | 0.133±0.001 |
+| Weather | 720 | 0.362±0.007 | 0.211±0.003 | 0.350±0.004 | 0.208±0.002 | 0.363±0.032 | 0.183±0.008 | 0.833±0.062 | 0.351±0.016 | 1.237±0.143 | 0.386±0.023 | 0.338±0.023 | 0.173±0.006 |
+
+Table 8. Average MSEs over prediction lengths $L = \{ {96},{192},{336},{720}\}$ .
+
+| Method | Electricity | ETTh2 | ETTm2 | Exchange | Traffic | Weather | Rank |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| Autoformer | 0.594 | 0.218 | 0.168 | 0.601 | 0.267 | 0.293 | 3.83 |
+| Non-stationary Transformer | 0.367 | 0.230 | 0.146 | 0.440 | 0.229 | 0.278 | 2.83 |
+| PatchTST | 0.412 | 0.202 | 0.122 | 0.500 | 0.179 | 0.189 | 1.83 |
+| MA-TSD | 0.340 | 0.212 | 0.124 | 0.372 | 0.166 | 0.207 | 1.50 |
+
+For MFRED and Solar, the original resolutions are 10 seconds and 1 minute, respectively; we resampled them to 5 minutes for alignment.
+
+We use DiT (Peebles & Xie, 2023) as the denoising network for both our proposed MA-TSD and the DDPM baseline. For consistency, we calculated the MSE between the low-resolution inputs and the down-scaled super-resolution outputs. For Context-FID, we trained an autoencoder for each training dataset individually, obtained the time series embeddings of the super-resolution outputs and of the real high-resolution data, and calculated the Fréchet distance between these two sets of embeddings.
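+
+Our reading of the consistency metric, as a sketch (non-overlapping average pooling as the down-scaling operator is an assumption):
+
+```python
+import numpy as np
+
+def consistency_mse(sr, lr, scale):
+    """MSE between the low-resolution input and the super-resolution output
+    down-scaled by non-overlapping average pooling with the given scale."""
+    sr = np.asarray(sr, dtype=float)
+    down = sr.reshape(-1, scale).mean(axis=1)
+    return float(np.mean((down - np.asarray(lr, dtype=float)) ** 2))
+```
+
+An output whose `scale`-sized blocks average back exactly to the input scores zero, so lower is better.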
+
+The comparison of inference steps at each scale is shown in Figure 12. Although we conduct experiments only on the $\{3,6,12\}$ scales, we can theoretically calculate the expected inference steps at other scales according to Equation (17).
+
+Visualization of super-resolution can be found in Figure 13, Figure 14 and Figure 15.
+
+Figure 11. Forecasting samples on $L = 720$
+
+Figure 12. Comparison of inference steps under different super-resolution scales.
+
+
+Figure 13. Super-resolution on MFRED dataset
+
+
+Figure 14. Super-resolution on Wind dataset
+
+
+Figure 15. Super-resolution on Solar dataset
\ No newline at end of file
diff --git a/anonisotropictimeseriesdiffusionmodelwithmovingaveragetransitions/images.zip b/anonisotropictimeseriesdiffusionmodelwithmovingaveragetransitions/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..47d55bfa511cce60e56e64d1fbf38fa2ddcbfa72
--- /dev/null
+++ b/anonisotropictimeseriesdiffusionmodelwithmovingaveragetransitions/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e688db9de56ac6b73af38c2513520c3713a64483303c76cdf66c00b2d8695caa
+size 1792502
diff --git a/anonisotropictimeseriesdiffusionmodelwithmovingaveragetransitions/layout.json b/anonisotropictimeseriesdiffusionmodelwithmovingaveragetransitions/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..98bc281de14e51b7874e5a3ea1ea19376c3f1e13
--- /dev/null
+++ b/anonisotropictimeseriesdiffusionmodelwithmovingaveragetransitions/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e7b47ecb15916171b0c8db0da3ecb28bd76bc3a6c45d9667353ae7cb6efd58f3
+size 830971
diff --git a/anovelcharacterizationofthepopulationareaundertheriskcoveragecurveaurcandratesoffinitesampleestimators/2c9c312d-ac8c-4d4f-96e4-227ab4d38537_content_list.json b/anovelcharacterizationofthepopulationareaundertheriskcoveragecurveaurcandratesoffinitesampleestimators/2c9c312d-ac8c-4d4f-96e4-227ab4d38537_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..c09726b71c005382f37004b9bb0e9068f33572c2
--- /dev/null
+++ b/anovelcharacterizationofthepopulationareaundertheriskcoveragecurveaurcandratesoffinitesampleestimators/2c9c312d-ac8c-4d4f-96e4-227ab4d38537_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5c80edee7307c9d86c26bc69973a0ed8592e8ee6554e86a0f498104cbde0b62f
+size 190426
diff --git a/anovelcharacterizationofthepopulationareaundertheriskcoveragecurveaurcandratesoffinitesampleestimators/2c9c312d-ac8c-4d4f-96e4-227ab4d38537_model.json b/anovelcharacterizationofthepopulationareaundertheriskcoveragecurveaurcandratesoffinitesampleestimators/2c9c312d-ac8c-4d4f-96e4-227ab4d38537_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..84f09be6db331834a8dfee8f720bc8e46f2a35e6
--- /dev/null
+++ b/anovelcharacterizationofthepopulationareaundertheriskcoveragecurveaurcandratesoffinitesampleestimators/2c9c312d-ac8c-4d4f-96e4-227ab4d38537_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:03235d75fa2870827a8d7fd5ce7ef9d206bd9801895a7d3cf1222f5974f0bcf6
+size 240397
diff --git a/anovelcharacterizationofthepopulationareaundertheriskcoveragecurveaurcandratesoffinitesampleestimators/2c9c312d-ac8c-4d4f-96e4-227ab4d38537_origin.pdf b/anovelcharacterizationofthepopulationareaundertheriskcoveragecurveaurcandratesoffinitesampleestimators/2c9c312d-ac8c-4d4f-96e4-227ab4d38537_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..735a64dd0663a9a31e86cb547ed8092f7b6d0d0e
--- /dev/null
+++ b/anovelcharacterizationofthepopulationareaundertheriskcoveragecurveaurcandratesoffinitesampleestimators/2c9c312d-ac8c-4d4f-96e4-227ab4d38537_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1b9326a8a880cbfc33d417e07cb8e84117ebc1731d9a8b6ded954029d8793b3d
+size 12789756
diff --git a/anovelcharacterizationofthepopulationareaundertheriskcoveragecurveaurcandratesoffinitesampleestimators/full.md b/anovelcharacterizationofthepopulationareaundertheriskcoveragecurveaurcandratesoffinitesampleestimators/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..df2901e0faa899bf4ef0be13bea412c82dd125dc
--- /dev/null
+++ b/anovelcharacterizationofthepopulationareaundertheriskcoveragecurveaurcandratesoffinitesampleestimators/full.md
@@ -0,0 +1,1154 @@
+# A Novel Characterization of the Population Area Under the Risk Coverage Curve (AURC) and Rates of Finite Sample Estimators
+
+Han Zhou1 Jordy Van Landeghem2 Teodora Popordanoska1 Matthew B. Blaschko1
+
+# Abstract
+
+Selective classifiers (SC) have been proposed for rank-based uncertainty thresholding, which could have applications in safety-critical areas such as medical diagnostics, autonomous driving, and the justice system. The Area Under the Risk-Coverage Curve (AURC) has emerged as the foremost evaluation metric for assessing the performance of SC systems. In this work, we present a formal statistical formulation of the population AURC and derive an equivalent expression that can be interpreted as a reweighted risk function. Through Monte Carlo methods, we derive empirical AURC plug-in estimators for finite sample scenarios. The weight estimators associated with these plug-in estimators are shown to be consistent, with low bias and tightly bounded mean squared error (MSE). The plug-in estimators are proven to converge at a rate of $\mathcal{O}(\sqrt{\ln(n) / n})$, demonstrating statistical consistency. We empirically validate the effectiveness of our estimators through experiments across multiple datasets, model architectures, and confidence score functions (CSFs), demonstrating consistency and effectiveness in fine-tuning AURC performance.
+
+# 1. Introduction
+
+In safety-critical scenarios such as autonomous driving, medical diagnostics, and the justice system (Berk et al., 2021; Leibig et al., 2022; Dvijotham et al., 2023; Franc et al., 2023; Groh et al., 2024), selective classifiers (SC) are promising for their ability to withhold predictions under conditions of uncertainty, thereby mitigating associated risks and enhancing the reliability of the models (Geifman
+
+1 Processing Speech and Images, Department of Electrical Engineering, KU Leuven, Belgium 2 Instabase, San Francisco, USA. Correspondence to: Han Zhou .
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+and El-Yaniv, 2017; Geifman et al., 2019; Ding et al., 2020; Galil et al., 2023). Specifically, these classifiers employ cost-based models (Chow, 1970; Cortes et al., 2016; Hendrickx et al., 2024) with a reject region to balance the risks of wrong predictions against the non-decision costs. The goal of an effective SC system is to minimize the expected misclassification costs—termed selective risk—while maximizing coverage, ensuring the model provides accurate predictions for as many instances as possible. This dual focus on selective risk and coverage motivates the development and evaluation of SC systems.
+
+Prominent evaluation metrics in SC systems, such as the area under the risk-coverage curve (AURC) and the normalized AURC (E-AURC) (Geifman et al., 2019), are widely used to assess model performance based on selective risk and coverage. While most studies interpret and improve (Ao et al., 2023; Traub et al., 2024) these metrics from the perspective of risk and coverage, relatively little attention has been given to directly optimizing SC models by treating AURC as a loss function. In addition, these metrics are typically computed empirically from the given datasets, making them susceptible to biases and variances, particularly in the context of a small finite sample rather than the underlying population. Franc et al. (2023) proposed the Selective Classifier Learning (SELE) loss as a lower bound of the empirical AURC, which is designed to optimize uncertainty scores by minimizing both the regression and SELE losses using batch training strategies. This approach only learns uncertainty scores on top of a pre-trained model within the selective classifier framework, and does not directly optimize the classifier itself based on the loss. Franc et al. (2023) motivate the SELE loss by the fact that it is "a close approximation of the AuRC and, at the same time, amenable to optimization." We demonstrate here, through analysis of the computational complexity and statistical properties of direct AURC estimation, that approximation by a lower bound is unnecessary and that both AURC and SELE are equally amenable to optimization.
+
+We establish a formal definition of AURC at the population level based on the underlying data distribution and derive an equivalent expression that explicitly represents it as a reweighted risk function, where the weights are determined solely by population rankings according to the CSFs. This formulation allows us to treat AURC as a loss function in a more theoretically grounded manner. Building upon these findings, we introduce two plug-in estimators with weight estimators derived from the Monte Carlo method. We show that both provide good estimation and come with theoretical guarantees. Specifically, we analyze the statistical properties of the weight estimators, including their MSE and bias, and establish the convergence rate of the plug-in estimators. Finally, we validate their efficacy through evaluations and fine-tuning experiments across various model architectures, CSFs, and datasets, demonstrating their practical advantages in AURC estimation.
+
+# 2. Related Work
+
+Evaluation Metrics: The Area Under the Risk Coverage curve (AURC) and its normalized counterpart Excess-AURC (E-AURC) (Geifman et al., 2019) are the most prevalent evaluation metrics for SC systems that compute the risk or the error with accepted predictions at different confidence thresholds. Furthermore, Cattelan and Silva (2024) have proposed a min-max scaled version of E-AURC, designed to maintain a monotonic relationship with AURC, thereby enhancing its consistency in performance assessment. However, Traub et al. (2024) argue that these metrics related to the selective risk, which only focus on the risk w.r.t. accepted predictions, do not suffice for a holistic assessment. To address this limitation, they developed the Area under the Generalized Risk Coverage curve (AUGRC), which quantifies the average risk of undetected failures across all predictions, thereby providing a comprehensive measure of system reliability. Despite these achievements, most studies directly employ the empirical AURC as a proxy for the population AURC, even in finite sample scenarios, without thoroughly examining the effectiveness of these estimators under such conditions. Franc et al. (2023) introduced the SELE score, a lower bound for AURC. However, their study did not explore important statistical properties of this estimator, such as its bias or MSE, when compared to the population AURC. In contrast to previous studies, our work focuses on formally defining the population AURC in terms of the underlying data distribution and offering a reliable approximation for it in finite sample settings. Our goal is to propose an effective estimator with theoretical guarantees and perform an empirical analysis to compare it with existing AURC estimators.
+
+Uncertainty estimation: There has been a large number of works (Geifman et al., 2019; Abdar et al., 2021; Zhu et al., 2023) that highlight the importance of confidence scoring and uncertainty quantification associated with predictions. In practice, commonly used CSFs fall into two main categories: ensemble approaches and post-hoc methods. Ensemble methods (Lakshminarayanan et al., 2017; Teye et al., 2018; Liu et al., 2023; Xia and Bouganis, 2023; Hou et al., 2023) require multiple forward passes to approximate the posterior predictive distribution, exemplified by Monte Carlo Dropout (MCD) techniques (Gal and Ghahramani, 2016). However, recent works (Cattelan and Silva, 2024; Xia and Bouganis, 2022) suggest that ensembles may not be crucial for enhancing uncertainty estimation but rather serve to improve predictions through a set of diverse classifiers. Thus, such methods are not considered further in this paper. In contrast, post-hoc estimators leverage the logits produced by the model to evaluate its performance. Popular methods include Maximum Softmax Probability (MSP) (Hendrycks and Gimpel, 2022), Maximum Logit Score (MaxLogit) (Hendrycks et al., 2022), Softmax Margin (Belghazi and Lopez-Paz, 2021), and Negative Entropy (Liu et al., 2020). Furthermore, Cattelan and Silva (2023) show that the maximum $p$-norm of the logits (MaxLogit-$p$Norm) can work better than MSP in uncertainty estimation for some models. Specifically, normalizing the logits with the $\ell_2$ norm has been shown to yield more distinct confidence scores, as evidenced in (Wei et al., 2022). Gomes et al. (2022) propose the Negative Gini Score, which utilizes the squared $\ell_2$ norm of the softmax probability. In this study, we examine the impact of post-hoc CSFs on the AURC, aiming to offer a thorough evaluation of AURC estimators in finite sample scenarios.
+
+# 3. Performance Evaluation of Selective Classifiers
+
+# 3.1. Problem Setting
+
+Let $\mathcal{X} \subseteq \mathbb{R}^d$ be the input space, $\mathcal{Y} \subseteq \{0,1\}^k$ be the label space, and $\mathrm{P}(x,y)$ be the unknown joint distribution over $\mathcal{X} \times \mathcal{Y}$ . We consider a classifier $f: \mathcal{X} \to \Delta^k$ , which maps to a $k$ -dimensional probability simplex, and a confidence scoring function (CSF) $g: \mathcal{X} \to [0,1]$ . The selective classification system $(f,g)$ at an input $x$ can then be described by
+
+$$
+(f, g)(x) := \begin{cases} f(x) & \text{if } g(x) \geq \tau, \\ \text{"abstain"} & \text{otherwise,} \end{cases} \tag{1}
+$$
+
+where "abstain" is triggered when $g(x)$ falls below a decision threshold $\tau \in \mathbb{R}$ . Given a loss function $\ell : \Delta^k \times \mathcal{Y} \to \mathbb{R}$ , the true risk of $f$ w.r.t. $\mathrm{P}(x, y)$ is $R(f) = \mathbb{E}_{\mathrm{P}(x, y)}[\ell(f(x), y)]$ . Given the finite sample dataset $D_n = \{(x_i, y_i)\}_{i=1}^n \subseteq (\mathcal{X} \times \mathcal{Y})$ sampled i.i.d. from $\mathrm{P}(x, y)$ , the true risk can be inferred from the empirical risk $\hat{R}(f) := \frac{1}{n} \sum_{i=1}^{n} \ell(f(x_i), y_i)$ . For practical purposes, we define the selection function $\tilde{g}$ as $\tilde{g}(x) = \mathbb{I}[g(x) \geq \tau]$ . The choice of $\tau$ depends on the specific scenario and the evaluation metric being used. It can either be a pre-defined constant or adapt dynamically based on the predicted uncertainty of the observations.
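As a concrete illustration, the abstention rule of Eq. (1) can be sketched in a few lines of NumPy. The classifier `f` and CSF `g` below are hypothetical stand-ins (a softmax over random logits and the maximum softmax probability), not part of the paper's setup:

```python
import numpy as np

def selective_predict(f, g, X, tau):
    """Selective classifier (f, g) of Eq. (1): predict argmax of f(x) when
    g(x) >= tau, otherwise abstain (encoded here as the label -1)."""
    probs = f(X)                   # (n, k) rows on the probability simplex
    conf = g(X)                    # (n,) confidence scores
    preds = probs.argmax(axis=1)
    preds[conf < tau] = -1         # "abstain"
    return preds

# Toy stand-ins: f is a softmax over the raw inputs; g is the max softmax probability.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
f = lambda Z: np.exp(Z) / np.exp(Z).sum(axis=1, keepdims=True)
g = lambda Z: f(Z).max(axis=1)
print(selective_predict(f, g, X, tau=0.5))
```

Raising `tau` enlarges the reject region; `tau = 0` accepts every input and recovers the plain classifier.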
+
+# 3.2. Evaluation Metrics
+
+One common way to assess the performance of selective classifiers is the risk-coverage curve (RC curve) (El-Yaniv et al., 2010), where coverage measures the probability mass of the input space that is not rejected in Eq. (1), denoted by $\mathbb{E}_{\mathrm{P}(x)}[\tilde{g} (x)]$ . The selective risk w.r.t. $\mathrm{P}(x,y)$ is then defined as
+
+$$
+R (f, \tilde {g}) := \frac {\mathbb {E} _ {\mathrm {P} (x , y)} [ \ell (f (x) , y) \tilde {g} (x) ]}{\mathbb {E} _ {\mathrm {P} (x)} [ \tilde {g} (x) ]}. \tag {2}
+$$
+
+$\ell$ is typically the $0/1$ error, making $R(f, \tilde{g})$ the selective error. As indicated by the equation above, risk and coverage are strongly dependent, where rejecting more examples reduces selective risk but also results in lower coverage. This relationship revealed by the curve motivates the development of more nuanced evaluation metrics for selective classifiers. Additionally, accuracy alone often fails in cases of class imbalance or pixel-level tasks (Ding et al., 2020), so evaluation metrics should accommodate different loss functions for a more comprehensive assessment.
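For a finite sample, both expectations in Eq. (2) reduce to simple averages over the accepted set. The following minimal sketch, with illustrative toy arrays of our own, computes one point of the RC curve:

```python
import numpy as np

def selective_risk_coverage(losses, conf, tau):
    """Empirical coverage and selective risk (Eq. (2)) at threshold tau.
    losses[i] = loss of f at (x_i, y_i); conf[i] = g(x_i)."""
    accept = conf >= tau                 # the selection function applied samplewise
    coverage = accept.mean()             # empirical coverage
    if coverage == 0.0:
        return 0.0, float("nan")         # selective risk undefined at zero coverage
    risk = losses[accept].mean()         # ratio of the two empirical expectations
    return coverage, risk

# Toy example with 0/1 losses concentrated at low confidence.
conf = np.array([0.9, 0.8, 0.7, 0.6, 0.5])
losses = np.array([0.0, 0.0, 0.0, 1.0, 1.0])
print(selective_risk_coverage(losses, conf, tau=0.65))   # this threshold rejects both errors
```

Sweeping `tau` over the observed confidence values traces out the empirical RC curve whose area the next subsection formalizes.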
+
+# 3.3. Equivalent Expressions of AURC
+
+Driven by the aforementioned considerations, the AURC metric (Geifman et al., 2019) is designed to offer a robust evaluation framework for classifiers by effectively capturing performance across varying rejection thresholds that are determined based on the distribution of samples within the population. The AURC is typically specified as an empirical quantity from a finite sample (Franc et al., 2023, Eq. (27)), from which we derive the population AURC as
+
+$$
+\operatorname{AURC}_{p}(f) = \mathbb{E}_{\tilde{x} \sim \mathrm{P}(x)} \frac{\mathbb{E}_{(x, y) \sim \mathrm{P}(x, y)} \ell(f(x), y) \mathbb{I}[g(x) \geq g(\tilde{x})]}{\mathbb{E}_{x^{\prime} \sim \mathrm{P}(x)} \mathbb{I}[g(x^{\prime}) \geq g(\tilde{x})]}. \tag{3}
+$$
+
+Noticing that the expectation in the numerator can be swapped with the expectation outside, the equation above can then be written as:
+
+$$
+\operatorname {A U R C} _ {p} (f) = \mathbb {E} _ {(x, y) \sim \mathrm {P} (x, y)} [ \alpha (x) \ell (f (x), y) ] \tag {4}
+$$
+
+where
+
+$$
+\alpha (x) = \mathbb {E} _ {\tilde {x} \sim \mathrm {P} (x)} \left(\frac {\mathbb {I} [ g (x) \geq g (\tilde {x}) ]}{\mathbb {E} _ {x ^ {\prime} \sim \mathrm {P} (x)} \mathbb {I} [ g \left(x ^ {\prime}\right) \geq g (\tilde {x}) ]}\right) \tag {5}
+$$
+
+This expression shows that the population AURC can be interpreted as the expectation of the risk function weighted by $\alpha(x)$, which accounts for the importance of each point. In order to better understand the population AURC, we study the behavior of the weight $\alpha(x)$ in Eq. (5). The following proposition provides an equivalent expression for $\alpha(x)$ .
+
+Proposition 3.1 (An equivalent expression of $\alpha(x)$ ). Define the function $G(x)$ as the cumulative distribution function (CDF) of the CSF $g(x)$ such that
+
+$$
+G (x) = \Pr \left(g \left(x ^ {\prime}\right) \leq g (x)\right) = \int \mathbb {I} [ g \left(x ^ {\prime}\right) \leq g (x) ] d P \left(x ^ {\prime}\right). \tag {6}
+$$
+
+Under this definition, the $\alpha(x)$ in Eq. (5) is equivalent to
+
+$$
+\alpha (x) = - \ln (1 - G (x)). \tag {7}
+$$
+
+Proof. Since the expectation in the denominator of Eq. (5) equals $1 - G(\tilde{x})$ , we have:
+
+$$
+\alpha (x) = \int_ {\tilde {x}} \frac {\mathbb {I} [ g (x) \geq g (\tilde {x}) ]}{1 - G (\tilde {x})} d \mathbf {P} (\tilde {x}).
+$$
+
+This implies that we are integrating over the domain $\tilde{x}$ where $g(\tilde{x})\leq g(x)$ . Hence, we can rewrite it as:
+
+$$
+\alpha (x) = \int_ {g (\tilde {x}) \leq g (x)} \frac {1}{1 - G (\tilde {x})} d \mathrm {P} (\tilde {x}).
+$$
+
+To proceed, note that $G(\tilde{x})$ is the CDF of $g(\tilde{x})$ and is monotonically increasing in $g(\tilde{x})$ , so we can reparameterize the integral in terms of $t = G(\tilde{x})$ . Since $G(\tilde{x})$ is a CDF, $t$ takes values between 0 and 1, and the integration can be rewritten as:
+
+$$
+\alpha(x) = \int_{G(\tilde{x}) \leq G(x)} \frac{1}{1 - G(\tilde{x})} d\mathrm{P}(\tilde{x}) = \int_{0}^{G(x)} \frac{1}{1 - t} \, dt.
+$$
+
+Now, this integral is straightforward to compute:
+
+$$
+\int_ {0} ^ {G (x)} \frac {1}{1 - t} d t = - \ln (1 - G (x)).
+$$
+
+Thus, we have derived the desired result:
+
+$$
+\alpha (x) = - \ln (1 - G (x)).
+$$
+
+Here $G(x)$ can be interpreted as the population rank percentile based on the CSF sorted in ascending order. This proposition motivates the following formulation, which can be considered equivalent to the population AURC in Eq. (3).
+
+Definition 3.2 (An equivalent expression of $\mathrm{AURC}_p$ ). Given $G(x)$ as the CDF of the random variable $g(x)$ , the population AURC in Eq. (3) is equivalent to:
+
+$$
+\operatorname{AURC}_{a}(f) = \int \alpha(x) \ell(f(x), y) \, dP(x, y) \tag{8}
+$$
+
+where $\alpha (x) = -\ln (1 - G(x))$ .
+
+We also provide empirical evidence supporting the equivalence in Appendix A.5. Notably, the following integral holds:
+
+$$
+\int_ {0} ^ {1} - \ln (1 - z) d z = 1. \tag {9}
+$$
+
+This result indicates that, in the limit of infinite data, the integral of $\alpha(x)$ computed using this formula converges to one. Consequently, the population AURC can be interpreted as a redistribution of the risk.
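The equivalence in Prop. 3.1 can also be checked numerically. Under the assumed toy setting $g(x) = x$ with $x \sim \mathcal{U}[0,1]$ (so that $G(x) = x$), a Monte Carlo evaluation of the nested expectation in Eq. (5) should reproduce $-\ln(1 - G(x))$:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 200_000
xt = np.sort(rng.uniform(size=m))      # i.i.d. draws of x-tilde; here g(x) = x, so G(x) = x

def alpha_mc(x):
    """Monte Carlo estimate of the nested expectation in Eq. (5)."""
    denom = (m - np.arange(m)) / m     # empirical denominator for each sorted x-tilde
    keep = xt <= x                     # indicator I[g(x) >= g(x-tilde)]
    return (keep / denom).mean()

for x in (0.3, 0.6, 0.9):
    print(x, alpha_mc(x), -np.log(1 - x))   # the last two columns agree closely
```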
+
+# 3.4. Plug-in Estimator of AURC
+
+Given a finite sample of size $n$ , namely $D_{n} = \{(x_{i},y_{i})\}_{i = 1}^{n}\subseteq (\mathcal{X}\times \mathcal{Y})$ , which is sampled i.i.d. from the joint probability distribution $\mathrm{P}(x,y)$ , following Eq. (3), the empirical AURC can be defined based on the sample as
+
+$$
+\widehat {\mathbf {A U R C}} _ {p} (f) = \frac {1}{n} \sum_ {j = 1} ^ {n} \frac {\frac {1}{n} \sum_ {i = 1} ^ {n} \ell \left(f \left(x _ {i}\right) , y _ {i}\right) \mathbb {I} [ g \left(x _ {i}\right) \geq g \left(x _ {j}\right) ]}{\frac {1}{n} \sum_ {k = 1} ^ {n} \mathbb {I} [ g \left(x _ {k}\right) \geq g \left(x _ {j}\right) ]}. \tag {10}
+$$
+
+This formulation represents the widely used AURC metric for evaluating SC systems. However, guarantees on its relationship to the population-level AURC have not been considered, even though relying on this empirical estimator may introduce error, particularly when assessing an SC system with a small sample size. The naive implementation of this estimator incurs a quadratic computational cost of $\mathcal{O}(n^2)$ due to the nested loops. However, some packages, e.g., torch-uncertainty, decrease this complexity to $\mathcal{O}(n\ln(n))$ by replacing redundant subset evaluations with a single sorting step followed by cumulative summation that efficiently computes error rates across all coverage levels. Here, we present a derivation of a method that achieves a computational complexity of $\mathcal{O}(n\ln(n))$ . By leveraging the approach used to transform Eq. (3) into Eq. (4), the empirical AURC can be reformulated as a plug-in estimator:
+
+$$
+\widehat {\mathrm {A U R C}} _ {p} (f) = \frac {1}{n} \sum_ {i = 1} ^ {n} \hat {\alpha} \left(x _ {i}\right) \ell \left(f \left(x _ {i}\right), y _ {i}\right) \tag {11}
+$$
+
+where
+
+$$
+\hat{\alpha}\left(x_{i}\right) = \frac{1}{n} \sum_{j=1}^{n} \frac{\mathbb{I}\left[g\left(x_{i}\right) \geq g\left(x_{j}\right)\right]}{\frac{1}{n} \sum_{k=1}^{n} \mathbb{I}\left[g\left(x_{k}\right) \geq g\left(x_{j}\right)\right]} = \sum_{j=1}^{r_{i}} \frac{1}{n-j+1} \tag{12}
+$$
+
+where $r_i$ denotes the rank of $x_i$ when the data is sorted in ascending order according to the CSF, such that a larger $r_i$ corresponds to a higher CSF value. For simplicity, we use $\hat{\alpha}_i$ as shorthand for $\hat{\alpha}(x_i)$ . This estimator is a consistent estimator of $\alpha_i$ for the population AURC, as directly established by the Continuous Mapping Theorem. Let $\mathrm{H}_n\coloneqq \sum_{k = 1}^n{\frac{1}{k}}$ denote the $n$ th harmonic number, and define the digamma function as $\psi (n)\coloneqq \frac{\Gamma'(n)}{\Gamma(n)}$ . The relationship between these two functions is given by $\mathrm{H}_n = \psi (n + 1) + \gamma$ , where $\gamma \approx 0.577$ is the Euler-Mascheroni constant. Setting $\mathrm{H}_0 = 0$ , we can express $\hat{\alpha}_i$ in terms of harmonic numbers or digamma functions:
+
+$$
+\hat {\alpha} _ {i} = \mathrm {H} _ {n} - \mathrm {H} _ {n - r _ {i}} = \psi (n + 1) - \psi (n - r _ {i} + 1), \tag {13}
+$$
+
+which enables efficient computation of the weight estimator. The computation cost of Eq. (11) is $\mathcal{O}(n\ln(n))$ due to the sorting operation required for rank computation. Additionally, for the finite sample case, we have
+
+$$
+\frac {1}{n} \sum_ {i = 1} ^ {n} \hat {\alpha} _ {i} = \frac {1}{n} \sum_ {i = 1} ^ {n} \left(H _ {n} - H _ {n - i}\right) = 1, \tag {14}
+$$
+
+indicating the plug-in estimator with this weight $\hat{\alpha}_i$ can be viewed as a redistribution of the risk. Each individual loss is weighted by $\frac{1}{n}\hat{\alpha}_i$ , which depends on the rank of the corresponding sample point.
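A minimal NumPy sketch of the resulting $\mathcal{O}(n\ln(n))$ estimator, cross-checked against a direct $\mathcal{O}(n^2)$ evaluation of Eq. (10); the variable names and toy data are ours, not from the paper:

```python
import numpy as np

def empirical_aurc(losses, conf):
    """Plug-in AURC of Eq. (11) in O(n log n): rank by confidence, then
    weight each loss by H_n - H_{n-r_i} as in Eq. (13)."""
    n = len(losses)
    r = np.empty(n, dtype=int)
    r[np.argsort(conf)] = np.arange(1, n + 1)       # ascending rank r_i
    H = np.concatenate(([0.0], np.cumsum(1.0 / np.arange(1, n + 1))))  # H_0..H_n
    return float(np.mean((H[n] - H[n - r]) * losses))

def empirical_aurc_naive(losses, conf):
    """Direct O(n^2) evaluation of Eq. (10), for cross-checking."""
    ge = conf[:, None] >= conf[None, :]             # ge[i, j] = I[g(x_i) >= g(x_j)]
    return float(np.mean((losses @ ge) / ge.sum(axis=0)))

rng = np.random.default_rng(0)
losses = rng.binomial(1, 0.3, size=500).astype(float)
conf = rng.uniform(size=500)                        # distinct scores: unambiguous ranks
print(np.isclose(empirical_aurc(losses, conf), empirical_aurc_naive(losses, conf)))  # True
```

Because the weights average to one (Eq. (14)), the estimator is a reweighting of the ordinary empirical risk and is differentiable in the losses, which is what makes it usable as a training objective.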
+
+Franc et al. (2023) gave another expression of the empirical AURC in Eq. (10) that can be interpreted as an arithmetic mean of the empirical selective risks corresponding to coverage values spread evenly over the interval $[0, 1]$ with step $\frac{1}{n}$ . In addition, they proposed the SELE score, which serves as a coarse lower bound for the empirical AURC. Details about this estimator are provided in Appendix A.2.
+
+# 3.5. Alternate Derivation of Plug-in Estimators via Monte Carlo
+
+In this section, we explore an alternative derivation of plug-in estimators using the Monte Carlo method. For our population $\mathrm{AURC}_a$ in Eq. (8), we aim to estimate this quantity using Monte Carlo integration. Since the cumulative distribution score $G(x)$ is unknown, we require an estimator for Eq. (7), which can be achieved by taking the conditional expectation
+
+$$
+\mathbb {E} \left[ - \ln \left(1 - G \left(x _ {i}\right)\right) \mid \left\{g \left(x _ {j}\right) \right\} _ {1 \leq j \leq n} \right]. \tag {15}
+$$
+
+Since $\{G(x_i)\}_{1\leq i\leq n}$ behave as i.i.d. samples from a uniform distribution $\mathcal{U}[0,1]$ when $x_{i}$ are i.i.d. from $P(x)$ , we sample i.i.d. $\{\beta_i\}_{1\leq i\leq n}\sim \mathcal{U}[0,1]$ and sort this set in ascending order. Let $r_i$ be the rank of $\beta_{i}$ and set $\alpha_{i} = -\ln (1 - \beta_{i})$ . Consequently, we can find a lower-variance estimate with the same bias by repeating the process and averaging the obtained $\alpha_{i}$ estimates. The $\beta_{i}$ are order statistics of the uniform distribution and have a known distribution (Jones, 2009, Section 2)
+
+$$
+\beta_ {i} \sim \operatorname {B e t a} \left(r _ {i}, n + 1 - r _ {i}\right) = \frac {\beta_ {i} ^ {r _ {i} - 1} \left(1 - \beta_ {i}\right) ^ {n - r _ {i}}}{\mathrm {B} \left(r _ {i} , n + 1 - r _ {i}\right)} \tag {16}
+$$
+
+where $\mathrm{B}$ denotes the beta function. Consequently, the limiting expectation of our estimator under repeated resampling of $\beta_{i}$ is
+
+$$
+\begin{array}{l} \hat {\alpha} _ {i} = \mathbb {E} _ {\beta_ {i}} [ - \ln (1 - \beta_ {i}) ] \\ = \int_ {0} ^ {1} - \ln (1 - x) \frac {x ^ {r _ {i} - 1} (1 - x) ^ {n - r _ {i}}}{\mathrm {B} \left(r _ {i} , n + 1 - r _ {i}\right)} d x \tag {17} \\ = H _ {n} - H _ {n - r _ {i}} \\ \end{array}
+$$
+
+which leads to the weight estimator in Eq. (13). This indicates that the plug-in estimator of Sec. 3.4 is in fact quite principled. Furthermore, we demonstrate the consistency of this weight estimator in Prop. A.4 and 3.6. In the above procedure, we have used $\hat{\alpha}_i = \mathbb{E}_{\beta_i}[-\ln(1 - \beta_i)]$ , but we can instead utilize the expectation of $\beta_i$ , leading to another weight estimator
+
+$$
+\hat {\alpha} _ {i} ^ {\prime} = - \ln \left(1 - \mathbb {E} _ {\beta_ {i}} [ \beta_ {i} ]\right) = - \ln \left(1 - \frac {r _ {i}}{n + 1}\right) \tag {18}
+$$
+
+where the last equation is due to the fact that the expectation of $\operatorname{Beta}(a, b)$ is $\frac{a}{a + b}$ . This estimator is consistent, as $\lim_{n\to \infty}\left(\frac{r_i}{n + 1}\right) = \beta_i$ . In addition, we have
+
+$$
+\frac {1}{n} \sum_ {i = 1} ^ {n} \hat {\alpha} _ {i} ^ {\prime} = - \frac {1}{n} \sum_ {i = 1} ^ {n} \ln \left(1 - \frac {r _ {i}}{n + 1}\right) < 1, \tag {19}
+$$
+
+but it approaches one as $n \to \infty$ . Since the function $m(t) = -\ln (1 - t)$ is convex, applying Jensen's inequality gives:
+
+$$
+- \ln \left(1 - \mathbb {E} _ {\beta_ {i}} \left[ \beta_ {i} \right]\right) \leq \mathbb {E} _ {\beta_ {i}} \left[ - \ln \left(1 - \beta_ {i}\right) \right], \tag {20}
+$$
+
+indicating that the first estimator $\hat{\alpha}_i$ upper bounds $\hat{\alpha}_i^\prime$ . Thus, the weight estimator $\hat{\alpha}_i^\prime$ leads to a consistent plug-in estimator that lower bounds the plug-in estimator based on $\hat{\alpha}_i$ . In the next section, we analyze the statistical properties of these two plug-in estimators, incorporating the weight estimators discussed above.
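Both facts are easy to verify by simulation: sorting i.i.d. uniforms produces the order statistics $\beta_{(r)} \sim \mathrm{Beta}(r, n+1-r)$ of Eq. (16), the Monte Carlo average of $-\ln(1-\beta_{(r)})$ matches $\mathrm{H}_n - \mathrm{H}_{n-r}$ from Eq. (17), and the Jensen gap of Eq. (20) holds rank by rank. The sample sizes below are arbitrary choices:

```python
import numpy as np

n, trials = 16, 200_000
rng = np.random.default_rng(0)
beta = np.sort(rng.uniform(size=(trials, n)), axis=1)  # column r-1 holds beta_(r) ~ Beta(r, n+1-r)

H = np.concatenate(([0.0], np.cumsum(1.0 / np.arange(1, n + 1))))
r = np.arange(1, n + 1)
alpha_hat = H[n] - H[n - r]                    # Eq. (17): E[-ln(1 - beta_(r))]
mc = -np.log(1.0 - beta).mean(axis=0)          # Monte Carlo estimate of the same expectation
print(np.abs(mc - alpha_hat).max())            # small

alpha_prime = -np.log(1.0 - r / (n + 1.0))     # Eq. (18): plug in E[beta_(r)] = r/(n+1)
print(bool(np.all(alpha_prime <= alpha_hat)))  # Jensen's inequality, Eq. (20)
print(alpha_prime.mean())                      # < 1, approaching 1 as n grows (Eq. (19))
```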
+
+# 3.6. Statistical Properties
+
+In this section, we show that empirical estimators of AURC are in general biased (Prop. 3.3, 3.4), but we also show consistency at a favorable convergence rate (Prop. 3.6) indicating the soundness of empirical estimators even at relatively small batch sizes. Given the fact that the weight estimator and the losses are typically not independent as they both depend on the model logits, it is difficult to directly derive the statistical properties of the plug-in estimators. Therefore, we begin by examining the properties of the weight estimators $\hat{\alpha}_i$ and $\hat{\alpha}_i^\prime$ based on finite samples. For a specific data pair $(x_{i},y_{i})\in (\mathcal{X},\mathcal{Y})$ with an unknown population rank percentile $\beta_{i}$ , we consider randomly sampling $n - 1$ i.i.d. samples repeatedly from the population. Our analysis focuses on assessing the bias and MSE of the weight estimators associated with this data pair.
+
+Proposition 3.3 (Bias of $\hat{\alpha}_i$ ). The bias of $\hat{\alpha}_i$ is given by
+
+$$
+\begin{array}{l} \operatorname{Bias}\left(\hat{\alpha}_{i} \mid G\left(x_{i}\right)=\beta_{i}\right) \tag{21} \\ = \mathrm{H}_{n}-\sum_{j=1}^{n} \mathrm{H}_{n-j} \mathrm{C}_{n-1}^{j-1} \beta_{i}^{j-1}\left(1-\beta_{i}\right)^{n-j}+\ln\left(1-\beta_{i}\right). \\ \end{array}
+$$
+
+Proof. The conditional expectation of $\hat{\alpha}_i$ associated with $(x_i, y_i)$ is given by
+
+$$
+\mathbb{E}\left[\hat{\alpha}_{i} \mid G(x_{i})=\beta_{i}\right]=\sum_{j=1}^{n}\left(\mathrm{H}_{n}-\mathrm{H}_{n-j}\right) \Pr\left(r_{i}=j \mid G(x_{i})=\beta_{i}\right).
+$$
+
+We notice that $\Pr(r_i = j \mid G(x_i) = \beta_i)$ follows the binomial distribution $\operatorname{Bin}(n-1, \beta_i)$ :
+
+$$
+\Pr\left(r_{i}=j \mid G(x_{i})=\beta_{i}\right)=\mathrm{C}_{n-1}^{j-1} \beta_{i}^{j-1}\left(1-\beta_{i}\right)^{n-j},
+$$
+
+because the probability of any i.i.d. sample being ranked below $x_{i}$ is $\beta_{i}$ , and there must be $(j - 1)$ of them for $x_i$ to be ranked $j$ th, while the remaining $(n - j)$ samples must be ranked higher, each with an independent probability of $(1 - \beta_{i})$ . Combining these results gives us:
+
+$$
+\begin{array}{l} \operatorname{Bias}\left(\hat{\alpha}_{i} \mid G(x_{i})=\beta_{i}\right)=\mathbb{E}\left[\hat{\alpha}_{i} \mid G(x_{i})=\beta_{i}\right]-\alpha_{i} \\ = \mathrm{H}_{n}-\sum_{j=1}^{n} \mathrm{H}_{n-j} \mathrm{C}_{n-1}^{j-1} \beta_{i}^{j-1}\left(1-\beta_{i}\right)^{n-j}+\ln\left(1-\beta_{i}\right), \\ \end{array}
+$$
+
+which concludes our proof.
+
+Proposition 3.4 (Bias of $\hat{\alpha}_i^{\prime}$ ). The bias of $\hat{\alpha}_i^{\prime}$ is given by
+
+$$
+\begin{array}{l} \operatorname{Bias}\left(\hat{\alpha}_{i}^{\prime} \mid G(x_{i})=\beta_{i}\right) \tag{22} \\ = -\sum_{j=1}^{n} \ln\left(1-\frac{j}{n+1}\right) \mathrm{C}_{n-1}^{j-1} \beta_{i}^{j-1}\left(1-\beta_{i}\right)^{n-j}+\ln\left(1-\beta_{i}\right). \\ \end{array}
+$$
+
+Proof. The expected value of $\hat{\alpha}_i^\prime$ associated with $(x_{i},y_{i})$ is given by
+
+$$
+\begin{array}{l} \mathbb{E}\left[\hat{\alpha}_{i}^{\prime} \mid G(x_{i})=\beta_{i}\right] \\ = -\sum_{j=1}^{n} \ln\left(1-\frac{j}{n+1}\right) \Pr(r_{i}=j \mid G(x_{i})=\beta_{i}). \\ \end{array}
+$$
+
+Since $\Pr(r_i = j \mid G(x_i) = \beta_i)$ follows $\mathrm{Bin}(n-1,\beta_i)$ , we obtain:
+
+$$
+\begin{array}{l} \operatorname{Bias}\left(\hat{\alpha}_{i}^{\prime} \mid G(x_{i})=\beta_{i}\right)=\mathbb{E}\left[\hat{\alpha}_{i}^{\prime} \mid G(x_{i})=\beta_{i}\right]-\alpha_{i} \\ = -\sum_{j=1}^{n} \ln\left(1-\frac{j}{n+1}\right) \mathrm{C}_{n-1}^{j-1} \beta_{i}^{j-1}\left(1-\beta_{i}\right)^{n-j}+\ln(1-\beta_{i}), \\ \end{array}
+$$
+
+which concludes our proof.
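The bias formulas can be evaluated directly and cross-checked by simulating the binomial rank model used in both proofs. The sketch below does this for $\hat{\alpha}_i$ (Prop. 3.3); the choices of $n$ and $\beta_i$ are arbitrary:

```python
import numpy as np
from math import comb, log

def bias_alpha_hat(n, beta):
    """Analytic bias of alpha-hat_i given G(x_i) = beta (Prop. 3.3)."""
    H = np.concatenate(([0.0], np.cumsum(1.0 / np.arange(1, n + 1))))
    expect = sum((H[n] - H[n - j]) * comb(n - 1, j - 1)
                 * beta ** (j - 1) * (1.0 - beta) ** (n - j)
                 for j in range(1, n + 1))
    return expect + log(1.0 - beta)          # bias = E[alpha-hat_i] - alpha_i

# Monte Carlo cross-check: rank of a fixed beta among n-1 fresh uniform draws.
rng = np.random.default_rng(0)
n, beta, trials = 32, 0.7, 100_000
ranks = 1 + (rng.uniform(size=(trials, n - 1)) < beta).sum(axis=1)  # 1 + Bin(n-1, beta)
H = np.concatenate(([0.0], np.cumsum(1.0 / np.arange(1, n + 1))))
mc_bias = (H[n] - H[n - ranks]).mean() + np.log(1.0 - beta)
print(bias_alpha_hat(n, beta), mc_bias)      # agree to Monte Carlo accuracy
```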
+
+For the above-mentioned weight estimators, the SELE weight estimator $\hat{\alpha}_i^{se}$ exhibits the largest bias compared with $\hat{\alpha}$ and $\hat{\alpha}^{\prime}$ , as indicated in Fig. 1. Due to this significant bias in weight estimation, SELE is not a reliable estimator of the population AURC.
+
+(Figure 1: panels (a) $n = 8$ and (b) $n = 128$; caption below.)
+
+Proposition 3.5 (MSE of $\hat{\alpha}_i$ ). The MSE of $\hat{\alpha}_i$ is
+
+$$
+\begin{array}{l} \mathrm{MSE}\left(\hat{\alpha}_{i}\right)=\psi^{\prime}(n+1-r_{i})-\psi^{\prime}(n+1) \\ \asymp \frac{\beta_{i}}{n(1-\beta_{i})+1}. \tag{23} \\ \end{array}
+$$
+
+Proof. From the result in Prop. 3.1, we calculate:
+
+$$
+\begin{array}{l} \operatorname {M S E} \left(\hat {\alpha} _ {i}\right) = \mathbb {E} _ {\beta_ {i}} \left[ \left(\hat {\alpha} _ {i} + \ln \left(1 - \beta_ {i}\right)\right) ^ {2} \right] \\ = \int_ {0} ^ {1} \left(\left(\mathbf {H} _ {n} - \mathbf {H} _ {n - r _ {i}}\right) + \ln \left(1 - \beta_ {i}\right)\right) ^ {2} d \mathbf {P} (\beta_ {i}) \\ = \underbrace {\int_ {0} ^ {1} \ln (1 - \beta_ {i}) ^ {2} d \mathbf {P} (\beta_ {i})} _ {:= \mathrm {M}} - \left(\mathbf {H} _ {n} - \mathbf {H} _ {n - r _ {i}}\right) ^ {2} \\ \end{array}
+$$
+
+where the second equality follows from $\int_0^1\ln (1 - \beta_i)d\mathbf{P}(\beta_i) = -(\mathrm{H}_n - \mathrm{H}_{n - r_i})$ , and $d\mathrm{P}(\beta_i)$ is taken to mean integration with respect to the measure induced by $\beta_{i}\sim \mathrm{Beta}(r_{i},n + 1 - r_{i})$ . Focusing on the remaining integral, we have the closed form:
+
+$$
+\mathbf {M} = \left(\mathbf {H} _ {n} - \mathbf {H} _ {n - r _ {i}}\right) ^ {2} + \boldsymbol {\psi} ^ {\prime} (n + 1 - r _ {i}) - \boldsymbol {\psi} ^ {\prime} (n + 1),
+$$
+
+and the result:
+
+$$
+\mathrm {M S E} (\hat {\alpha} _ {i}) = \psi^ {\prime} (n + 1 - r _ {i}) - \psi^ {\prime} (n + 1).
+$$
+
+This term involves the first derivative of the digamma function for which the inequality $\frac{1}{n} +\frac{1}{2n^2}\leq \psi '(n)\leq \frac{1}{n} +\frac{1}{n^2}$ is well known. Applying these inequalities, we obtain
+
+$$
+\begin{array}{l} \mathrm {M S E} (\hat {\alpha} _ {i}) \leq \frac {1}{n + 1 - r _ {i}} + \frac {1}{(n + 1 - r _ {i}) ^ {2}} - \frac {1}{n + 1} - \frac {1 / 2}{(n + 1) ^ {2}} \\ = \mathcal {O} \left(\frac {1}{n - r _ {i} + 1} - \frac {1}{n + 1}\right) = \mathcal {O} \left(\frac {\beta_ {i}}{n (1 - \beta_ {i}) + 1}\right). \\ \end{array}
+$$
+
+By analogous reasoning, a matching lower bound on the MSE can be determined, which yields the result
+
+$$
+\operatorname {M S E} \left(\hat {\alpha} _ {i}\right) \asymp \frac {\beta_ {i}}{n \left(1 - \beta_ {i}\right) + 1} \quad \forall \beta_ {i} \in (0, 1). \tag {24}
+$$
+
+This result is visualized in Fig. 2.
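The closed form $\psi'(n+1-r_i) - \psi'(n+1)$ is a finite sum, $\sum_{k=n+1-r_i}^{n} 1/k^2$, which makes a quick Monte Carlo sanity check easy; the values of $n$ and the ranks below are arbitrary choices:

```python
import numpy as np

n, trials = 32, 100_000
rng = np.random.default_rng(0)
beta = np.sort(rng.uniform(size=(trials, n)), axis=1)  # column r-1 holds beta_(r) ~ Beta(r, n+1-r)
H = np.concatenate(([0.0], np.cumsum(1.0 / np.arange(1, n + 1))))

for r in (4, 16, 28):
    alpha_hat = H[n] - H[n - r]                        # fixed weight estimate at rank r
    mse_mc = ((alpha_hat + np.log(1.0 - beta[:, r - 1])) ** 2).mean()
    mse_closed = np.sum(1.0 / np.arange(n + 1 - r, n + 1) ** 2.0)  # psi'(n+1-r) - psi'(n+1)
    print(r, mse_mc, mse_closed)                       # the last two columns agree closely
```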
+
+
+
+
+Figure 1. The bias of the weight estimators as a function of $\beta$ , based on the results in Propositions 3.3, 3.4 and A.2. The bias of these weight estimators is not equal to zero in general, being positive for smaller $\beta_{i}$ and negative for larger $\beta_{i}$ . As the sample size increases, the bias decreases significantly for larger $\beta_{i}$ . The complete plots for different sample sizes can be found in Appendix S8.
+Figure 2. The bound in Eq. (24) as a function of $n$ and $\beta_{i}$ .
+
+We also demonstrate in Appendix A.3 that the MSE of $\hat{\alpha}_i^\prime$ is tightly upper bounded by Eq. (24), though it remains larger than the MSE of $\hat{\alpha}_i$ .
+
+Proposition 3.6 (Convergence Rate of the plug-in estimators with $\hat{\alpha}_i$ or $\hat{\alpha}_i^{\prime}$ ). Assume that the loss function $\ell$ is square-integrable, i.e., $\int \ell^{2}(f(x),y)dP(x,y) < \infty$ . Then, the plug-in estimators with $\hat{\alpha}_i$ or $\hat{\alpha}_i^{\prime}$ as the weight estimator converge at a rate of $\mathcal{O}(\sqrt{\ln(n) / n})$ .
+
+Proof. We first analyze the difference between the plug-in estimator with $\hat{\alpha}_i$ and the population expected value:
+
+$$
+\frac {1}{n} \sum_ {i = 1} ^ {n} \hat {\alpha} _ {i} \ell (f (x _ {i}), y _ {i}) - \mathbb {E} [ \alpha \ell (f (x), y) ].
+$$
+
+This can be decomposed as the sum of the following two terms:
+
+$$
+\begin{array}{l} A = \frac {1}{n} \sum_ {i = 1} ^ {n} \left(\hat {\alpha} _ {i} - \alpha_ {i}\right) \ell \left(f \left(x _ {i}\right), y _ {i}\right) \\ B = \frac {1}{n} \sum_ {i = 1} ^ {n} \alpha_ {i} \ell (f (x _ {i}), y _ {i}) - \mathbb {E} [ \alpha \ell (f (x), y) ] \\ \end{array}
+$$
+
+where term $A$ captures the error caused by the bias in estimating $\alpha_{i}$ and term $B$ represents the error introduced by approximating the expected value $\mathbb{E}[\alpha \ell (f(x),y)]$ with the empirical average.
+
+Making use of the Cauchy-Schwarz inequality, we obtain the following result:
+
+$$
+\begin{array}{l} A ^ {2} \leq \left(\frac {1}{n} \sum_ {i = 1} ^ {n} (\hat {\alpha} _ {i} - \alpha_ {i}) ^ {2}\right) \left(\frac {1}{n} \sum_ {i = 1} ^ {n} \ell \left(f \left(x _ {i}\right), y _ {i}\right) ^ {2}\right) \tag {25} \\ = \left(\frac {1}{n} \sum_ {i = 1} ^ {n} \left(\hat {\alpha} _ {i} - \alpha_ {i}\right) ^ {2}\right) \mathbb {E} \left[ \ell (f (x), y) ^ {2} \right]. \\ \end{array}
+$$
+
+From Proposition A.4, $\frac{1}{n}\sum_{i = 1}^{n}\mathrm{MSE}(\hat{\alpha}_i)$ is bounded by $\mathcal{O}\left(\frac{\ln(n)}{n}\right)$ , which means
+
+$$
+\frac {1}{n} \sum_ {i = 1} ^ {n} \left(\hat {\alpha} _ {i} - \alpha_ {i}\right) ^ {2} = \mathcal {O} \left(\frac {\ln (n)}{n}\right). \tag {26}
+$$
+
+By combining this with Eq. (25) and the square-integrable assumption of the loss function $\ell$ , term A asymptotically converges at a rate $\mathcal{O}(\sqrt{\ln(n)/n})$ . Term B, corresponding to the Monte Carlo method, is well-known to converge at a rate $\mathcal{O}(n^{-1/2})$ (Caflisch, 1998), which is faster than $\mathcal{O}(\sqrt{\ln(n)/n})$ . Thus, our overall convergence rate is dominated by the rate derived for term A. Similarly, the same convergence rate applies to the estimator with $\hat{\alpha}_i'$ .
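+This rate can be probed with a small simulation (an illustrative sketch, not the paper's experimental code): assuming i.i.d. uniform confidence scores, the population rank percentile $\beta_i$ of a sample equals its own score, so the average squared weight error $\frac{1}{n}\sum_i(\hat{\alpha}_i - \alpha_i)^2$ can be compared directly against $\ln(n)/n$:
+
+```python
+import math, random
+
+def avg_sq_weight_error(n, rng):
+    # i.i.d. U(0,1) confidence scores: the population rank percentile
+    # beta_i of a sample then equals its own score value.
+    beta = sorted(rng.random() for _ in range(n))
+    H = [0.0]                               # harmonic numbers H_0 .. H_n
+    for k in range(1, n + 1):
+        H.append(H[-1] + 1.0 / k)
+    err = 0.0
+    for r, b in enumerate(beta, start=1):   # r = ascending rank by CSF
+        alpha = -math.log(1.0 - b)          # population weight alpha_i
+        alpha_hat = H[n] - H[n - r]         # plug-in weight estimate
+        err += (alpha_hat - alpha) ** 2
+    return err / n
+
+rng = random.Random(0)
+results = {n: sum(avg_sq_weight_error(n, rng) for _ in range(20)) / 20
+           for n in (100, 1000, 10000)}
+for n, v in results.items():
+    print(n, v, math.log(n) / n)            # empirical error vs. ln(n)/n
+```
+
+The ratio of the empirical error to $\ln(n)/n$ stays bounded as $n$ grows, consistent with Eq. (26).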
+
+# 4. Experiments
+
+Datasets. We use image datasets, namely CIFAR10/100 (Krizhevsky et al., 2009) and ImageNet (Deng et al., 2009), and a text dataset, Amazon Reviews (Ni et al., 2019). The Amazon dataset contains review text inputs paired with 1-out-of-5 star ratings as labels.
+
+Models. For experiments on CIFAR10/100, we report results on the VGG13, VGG16, and VGG19 (Simonyan and Zisserman, 2014) models with batch norm layers, WideResNet28x10 (Zagoruyko and Komodakis, 2016), and ResNet (He et al., 2016) models with different depths (20, 56, 110, 164). For each model architecture, we have 5 different models pre-trained on the CIFAR10/100 dataset. For experiments on the Amazon dataset, we use pre-trained transformer-based models: BERT (Kenton and Toutanova, 2019), RoBERTa (Liu et al., 2021a), DistilBERT (Sanh, 2019) (D-BERT), and DistilRoBERTa (D-RoBERTa) $^{3}$ . For experiments on the ImageNet dataset, we use pre-trained models from the timm (Wightman, 2019) package, including two vision transformer (ViT) (Dosovitskiy et al., 2020) variants, ViT-Small and ViT-Large, and two Swin transformer-based models (Liu et al., 2021b), Swin-Base and Swin-Tiny. All these models are configured with the standard image resolution of 224.
+
+Metrics. For our comparative analysis, we evaluate several metrics, including the population $\mathrm{AURC}_p$ and finite-sample estimators. The $\mathrm{AURC}_p$ is computed using Eq. (11) across the test set. In the finite-sample setting, we evaluate the plug-in estimators with $\hat{\alpha}$ or $\hat{\alpha}^{\prime}$ , the SELE score (Franc et al., 2023), and $2\times$ SELE as proposed by the original authors (see Appendix A.2 for a discussion). Beyond the $0/1$ loss, we also evaluate these metrics with the Cross-Entropy (CE) loss, which serves as a complementary measure for assessing the classifier's performance.
+
+Experimental setup. We evaluate the metrics using several pre-trained models on the test set, which is randomly divided into various batch sizes (8, 16, 32, ..., 1024). We use MSP as our confidence score function, compute the metrics for these batch samples, and subsequently calculate the mean and standard deviation of these finite sample estimators. The population $\mathrm{AURC}_p$ is computed across all samples in the test set. For the CIFAR10/100 datasets, we evaluate the mean and standard deviation of the Mean Absolute Error (MAE) across five distinct pre-trained models. For the ImageNet and Amazon datasets, we compute the mean, standard deviation, and MSE for different estimators of the pre-trained model across batch samples. $^4$
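+The protocol above can be sketched as follows. The data here are synthetic stand-ins for a model's CSF values and per-sample losses (not the actual pre-trained models), and the plug-in weights follow $\hat{\alpha}_i = H_n - H_{n - r_i}$ with $r_i$ the ascending rank under the confidence score:
+
+```python
+import math, random, statistics
+
+def aurc_plugin(scores, losses):
+    """Plug-in AURC estimator (1/n) * sum_i alpha_hat_i * loss_i, with
+    alpha_hat_i = H_n - H_{n - r_i} and r_i the ascending rank of
+    sample i under the confidence score (higher = more confident)."""
+    n = len(scores)
+    order = sorted(range(n), key=lambda i: scores[i])
+    H = [0.0]
+    for k in range(1, n + 1):
+        H.append(H[-1] + 1.0 / k)
+    total = 0.0
+    for rank, i in enumerate(order, start=1):
+        total += (H[n] - H[n - rank]) * losses[i]
+    return total / n
+
+# Synthetic stand-in for a test set: confident predictions are more
+# often correct, so the 0/1 loss is anti-correlated with the score.
+rng = random.Random(1)
+scores = [rng.random() for _ in range(4096)]
+losses = [0.0 if rng.random() < s else 1.0 for s in scores]
+
+# Split into batches and report the mean/std of the finite-sample estimator.
+batch = 128
+idx = list(range(len(scores)))
+rng.shuffle(idx)
+vals = [aurc_plugin([scores[i] for i in idx[j:j + batch]],
+                    [losses[i] for i in idx[j:j + batch]])
+        for j in range(0, len(idx), batch)]
+print(statistics.mean(vals), statistics.stdev(vals))
+```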
+
+
+(a) BERT (0/1) (b) BERT (CE)
+Figure 3. (Amazon) Finite sample estimators with 0/1 or CE loss. We utilize a pre-trained model and randomly divide the test set into batch samples of size $n$ . Subsequently, we compute the mean and std of various estimators applied to these batch samples.
+Measurement of the statistical properties of the estimators. From Fig. 3, it is observable that with $0/1$ loss (accuracy) and increasing sample size, the SELE score tends to underestimate the population $\mathrm{AURC}_p$ . Conversely, $2 \times \mathrm{SELE}$ tends to overestimate the population $\mathrm{AURC}_p$ . The plug-in estimator with $\hat{\alpha}'$ empirically serves as a lower bound for that with $\hat{\alpha}$ , supporting the correctness of our theoretical results. As the sample size grows, both estimators progressively converge to the population $\mathrm{AURC}_p$ . Similar trends can be observed for both the $0/1$ loss and the CE loss in Fig. S15-S19. Furthermore, a comparison between Fig. 3(a) and 3(b) reveals that using CE loss rather than $0/1$ loss results in a different magnitude of variance and bias in the estimators. The bias plots in Figures S9-S12 show similar findings.
+
+Table 1. Summary of population $\mathrm{AURC}_p$ (expressed as mean ± standard deviation, scaled by $10^{-2}$ ) of the test set for models fine-tuned with various loss functions. The $\mathrm{AURC}_p$ is calculated for each model architecture based on the fine-tuned results aggregated from five different seeds, each using the same pre-trained model.
+
+| Model | CIFAR10 CE | CIFAR10 SELE | CIFAR10 $\hat{\alpha}$ Est. | CIFAR10 $\hat{\alpha}^{\prime}$ Est. | CIFAR100 CE | CIFAR100 SELE | CIFAR100 $\hat{\alpha}$ Est. | CIFAR100 $\hat{\alpha}^{\prime}$ Est. |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| ResNet18 | 4.967±0.038 | 4.470±0.030 | 4.473±0.030 | 4.471±0.030 | 6.648±0.021 | 6.577±0.011 | 6.532±0.012 | 6.533±0.014 |
+| ResNet34 | 6.464±0.036 | 5.661±0.039 | 5.652±0.036 | 5.651±0.036 | 6.023±0.016 | 5.862±0.012 | 5.825±0.011 | 5.826±0.011 |
+| ResNet50 | 8.318±0.002 | 7.892±0.046 | 7.921±0.047 | 7.918±0.049 | 6.225±0.009 | 6.043±0.015 | 6.007±0.008 | 6.007±0.009 |
+| VGG16BN | 7.922±0.002 | 7.010±0.018 | 7.064±0.014 | 7.060±0.015 | 10.790±0.001 | 10.586±0.029 | 10.559±0.029 | 10.560±0.030 |
+| VGG19BN | 9.813±0.192 | 8.475±0.061 | 8.528±0.059 | 8.524±0.059 | 10.633±0.001 | 10.421±0.026 | 10.393±0.025 | 10.391±0.024 |
+| WideResNet28x10 | 4.137±0.046 | 3.867±0.049 | 3.864±0.049 | 3.863±0.049 | 5.912±0.652 | 5.607±0.707 | 5.836±0.652 | 5.607±0.707 |
+
+(a) VGG16BN (CIFAR10) (b) VGG16BN (CIFAR100)
+
+In Fig. 4, the MAE of the plug-in estimators consistently decreases as the sample size increases. However, the SELE score does not always exhibit this trend. Its performance lacks stability compared to the plug-in estimators and can even worsen as the sample size increases. Results shown in Fig. 5 also indicate a declining trend in the MSE of the plug-in estimators on the ImageNet dataset as the sample size increases, regardless of whether 0/1 or CE loss is used. This convergence is not reflected in the SELE scores, as expected. Similar MSE results are also observed across other model architectures for the CIFAR10/100 and Amazon datasets (see Fig. S20-S25). Although $\hat{\alpha}$ theoretically exhibits lower MSE in weight estimation compared to $\hat{\alpha}'$ , its corresponding plug-in estimator empirically achieves an even higher MSE than that of $\hat{\alpha}'$ .
+
+Influence of the CSFs. We also examine the impact of various CSFs on the estimators to provide a thorough evaluation of the metrics. Specifically, we consider MSP, Negative Entropy, MaxLogit, Softmax Margin, MaxLogit- $\ell_2$ norm, and Negative Gini Score, as outlined in Table S2. The sample size is set to 128. We report the results for the Amazon and ImageNet datasets in Fig. S26-S28. As indicated in Fig. 6, both plug-in estimators exhibit lower bias compared to other estimators across various CSFs. The $2\times$ SELE score is more likely to overestimate $\mathrm{AURC}_p$ , but this is not always
+
+
+Figure 4. (CIFAR10/100) MAE of different finite sample estimators evaluated with 0/1 loss. For each model architecture, we compute the mean and std of the MAE across five distinct pre-trained models. The MAE for each model is calculated using batch samples divided from the test set. More results can be found in Figs. S13-S14.
+
+(a) Swin-Base (0/1) (b) Swin-Base (CE)
+Figure 5. (ImageNet) MSE of finite sample estimators with 0/1 or CE loss. For each model architecture, we calculate the MSE of the estimators using a pre-trained model on batch samples derived from the test set.
+
+(a) BERT (0/1 loss) (b) BERT (CE loss)
+Figure 6. (Amazon) Finite sample estimators with different CSFs.
+
+the case, as shown in Fig. 6(b). The SELE score is substantially lower than the population $\mathrm{AURC}_p$ across various CSFs in our evaluations. We can also observe that compared to CSFs, these finite sample estimators are more sensitive to the choice of loss functions. When using $0/1$ loss, they display lower variance than CE loss.
+
+Training a selective classifier. We can fine-tune our pre-trained model using these finite sample estimators as a loss function. The MSP is employed as the CSF when applying the metrics. The models in Table 1 are fine-tuned on the training set using these estimators combined with the CE loss over 30 epochs, with a learning rate of $10^{-3}$ and a training batch size of 128. Additionally, we present the results for both the CE loss and the SELE score for the CIFAR10/100 datasets, as detailed in Table 1. As indicated by Table 1, training with these estimators can effectively optimize the $\mathrm{AURC}_p$ compared with the CE loss. Moreover, training with the SELE loss also accelerates the optimization of $\mathrm{AURC}_p$ compared with CE optimization.
+
+# 5. Conclusion and Future Work
+
+In this work, we revisit the definition of empirical AURC and propose the population AURC from a statistical perspective, along with an equivalent expression that can be interpreted as a reweighted risk function. Subsequently, we introduce a plug-in estimator for population AURC, characterized by a biased weight estimator. Additionally, we provide an alternative derivation of this and another plug-in estimator using the Monte Carlo method. We rigorously analyze the statistical properties of these Monte Carlo-derived weight estimators, including their bias, MSE, and consistency, and establish their convergence rate to be $\mathcal{O}(\sqrt{\ln(n) / n})$ . To validate our theoretical results, we evaluate the estimator across various state-of-the-art neural network models and widely-used datasets. Both plug-in estimators exhibit better performance compared to the SELE score. Finally, we have demonstrated that the combination of good statistical convergence and efficient computation makes them suitable training objectives for directly fine-tuning networks to minimize AURC.
+
+In this paper, our primary focus is on the estimation of AURC for a fixed model. For completeness, we discuss in the appendix the scenario in which a Bayesian model is considered, specifically when $f \sim \mathbb{P}(f|\mathcal{D})$ . We anticipate that these directions will inspire further research. Additionally, investigating the performance of estimators under distribution shift or in the context of imbalanced datasets represents a promising avenue for future work. We also encourage studies that adapt estimators of the form developed in this paper to these settings.
+
+# Impact Statement
+
+This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
+
+# Acknowledgement
+
+This research received funding from the Flemish Government (AI Research Program) and the Research Foundation - Flanders (FWO) through project number G0G2921N. HZ is supported by the China Scholarship Council. We acknowledge EuroHPC JU for awarding the projects EHPC-BEN-2024B10-050 and EHPC-BEN-2025B22-037 access to the EuroHPC supercomputer LEONARDO, hosted by CINECA (Italy) and the LEONARDO consortium.
+
+# References
+
+Abdar, M., Pourpanah, F., Hussain, S., Rezazadegan, D., Liu, L., Ghavamzadeh, M., Fieguth, P., Cao, X., Khosravi, A., Acharya, U. R., et al. (2021). A review of uncertainty quantification in deep learning: Techniques, applications and challenges. Information fusion, 76:243-297.
+Ao, S., Rüger, S., and Siddharthan, A. (2023). Empirical optimal risk to quantify model trustworthiness for failure detection. In CEUR Workshop Proceedings, volume 3505. CEUR-WS.
+Belghazi, M. I. and Lopez-Paz, D. (2021). What classifiers know what they don't? arXiv preprint arXiv:2107.06217.
+Berk, R., Heidari, H., Jabbari, S., Kearns, M., and Roth, A. (2021). Fairness in criminal justice risk assessments: The state of the art. *Sociological Methods & Research*, 50(1):3-44.
+Caflisch, R. E. (1998). Monte carlo and quasi-monte carlo methods. Acta numerica, 7:1-49.
+Cattelan, L. F. P. and Silva, D. (2023). On selective classification under distribution shift. In NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models.
+Cattelan, L. F. P. and Silva, D. (2024). How to fix a broken confidence estimator: Evaluating post-hoc methods for selective classification with deep neural networks. In The 40th Conference on Uncertainty in Artificial Intelligence.
+Chow, C. (1970). On optimum recognition error and reject tradeoff. IEEE Transactions on information theory, 16(1):41-46.
+Cortes, C., DeSalvo, G., and Mohri, M. (2016). Learning with rejection. In *Algorithmic Learning Theory: 27th International Conference*, ALT 2016, Bari, Italy, October 19-21, 2016, Proceedings 27, pages 67-82. Springer.
+Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. (2009). Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248-255. IEEE.
+Ding, Y., Liu, J., Xiong, J., and Shi, Y. (2020). Revisiting the evaluation of uncertainty estimation and its application to explore model complexity-uncertainty trade-off. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 4-5.
+Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer,
+
+M., Heigold, G., Gelly, S., et al. (2020). An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations.
+Dvijotham, K., Winkens, J., Barsbey, M., Ghaisas, S., Stanforth, R., Pawlowski, N., Strachan, P., Ahmed, Z., Azizi, S., Bachrach, Y., et al. (2023). Enhancing the reliability and accuracy of ai-enabled diagnosis via complementarity-driven deferral to clinicians. Nature Medicine, 29(7):1814-1820.
+El-Yaniv, R. et al. (2010). On the foundations of noise-free selective classification. Journal of Machine Learning Research, 11(5).
+Franc, V., Prusa, D., and Voracek, V. (2023). Optimal strategies for reject option classifiers. Journal of Machine Learning Research, 24(11):1-49.
+Gal, Y. and Ghahramani, Z. (2016). Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In international conference on machine learning, pages 1050-1059. PMLR.
+Galil, I., Dabbah, M., and El-Yaniv, R. (2023). What can we learn from the selective prediction and uncertainty estimation performance of 523 imagenet classifiers? In The Eleventh International Conference on Learning Representations.
+Geifman, Y. and El-Yaniv, R. (2017). Selective classification for deep neural networks. Advances in neural information processing systems, 30.
+Geifman, Y., Uziel, G., and El-Yaniv, R. (2019). Biasreduced uncertainty estimation for deep neural classifiers. In International Conference on Learning Representations.
+Gomes, E. D. C., Romanelli, M., Granese, F., and Piantanida, P. (2022). A simple training-free method for rejection option. URL https://openreview.net/forum?id=K1DdnjL6p7.
+Groh, M., Badri, O., Daneshjou, R., Koochek, A., Harris, C., Soenksen, L. R., Doraiswamy, P. M., and Picard, R. (2024). Deep learning-aided decision support for diagnosis of skin disease across skin tones. Nature Medicine, 30(2):573-583.
+He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778.
+Hendrickx, K., Perini, L., Van der Plas, D., Meert, W., and Davis, J. (2024). Machine learning with a reject option: A survey. *Machine Learning*, 113(5):3073-3110.
+
+Hendrycks, D., Basart, S., Mazeika, M., Zou, A., Kwon, J., Mostajabi, M., Steinhardt, J., and Song, D. (2022). Scaling out-of-distribution detection for real-world settings. In International Conference on Machine Learning, pages 8759-8773. PMLR.
+Hendrycks, D. and Gimpel, K. (2022). A baseline for detecting misclassified and out-of-distribution examples in neural networks. In International Conference on Learning Representations.
+Hou, B., Liu, Y., Qian, K., Andreas, J., Chang, S., and Zhang, Y. (2023). Decomposing uncertainty for large language models through input clarification ensembling. In *Forty-first International Conference on Machine Learning*.
+Jones, M. (2009). Kumaraswamy's distribution: A beta-type distribution with some tractability advantages. Statistical Methodology, 6(1):70-81.
+Kenton, J. D. M.-W. C. and Toutanova, L. K. (2019). Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT, pages 4171-4186.
+Krizhevsky, A., Hinton, G., et al. (2009). Learning multiple layers of features from tiny images. Toronto, ON, Canada.
+Lakshminarayanan, B., Pritzel, A., and Blundell, C. (2017). Simple and scalable predictive uncertainty estimation using deep ensembles. Advances in neural information processing systems, 30.
+Leibig, C., Brehmer, M., Bunk, S., Byng, D., Pinker, K., and Umutlu, L. (2022). Combining the strengths of radiologists and ai for breast cancer screening: a retrospective analysis. The Lancet Digital Health, 4(7):e507-e519.
+Liu, T., Cheng, J., and Tan, S. (2023). Spectral bayesian uncertainty for image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18166-18175.
+Liu, W., Wang, X., Owens, J., and Li, Y. (2020). Energy-based out-of-distribution detection. Advances in neural information processing systems, 33:21464-21475.
+Liu, Z., Lin, W., Shi, Y., and Zhao, J. (2021a). A robustly optimized bert pre-training approach with post-training. In China National Conference on Chinese Computational Linguistics, pages 471-484. Springer.
+Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., and Guo, B. (2021b). Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF international conference on computer vision, pages 10012-10022.
+
+Ni, J., Li, J., and McAuley, J. (2019). Justifying recommendations using distantly-labeled reviews and fine-grained aspects. In Proceedings of the 2019 conference on empirical methods in natural language processing and the 9th international joint conference on natural language processing (EMNLP-IJCNLP), pages 188-197.
+Sanh, V. (2019). Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. In Proceedings of Thirty-third Conference on Neural Information Processing Systems (NIPS2019).
+Simonyan, K. and Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
+Teye, M., Azizpour, H., and Smith, K. (2018). Bayesian uncertainty estimation for batch normalized deep networks. In International conference on machine learning, pages 4907-4916. PMLR.
+Traub, J., Bungert, T. J., Luth, C. T., Baumgartner, M., Maier-Hein, K. H., Maier-Hein, L., and Jaeger, P. F. (2024). Overcoming common flaws in the evaluation of selective classification systems. In The Thirty-eighth Annual Conference on Neural Information Processing Systems.
+Wei, H., Xie, R., Cheng, H., Feng, L., An, B., and Li, Y. (2022). Mitigating neural network overconfidence with logit normalization. In International conference on machine learning, pages 23631-23644. PMLR.
+Wightman, R. (2019). Pytorch image models. https://github.com/rwightman/pytorch-image-models.
+Xia, G. and Bouganis, C.-S. (2022). On the usefulness of deep ensemble diversity for out-of-distribution detection. arXiv preprint arXiv:2207.07517.
+Xia, G. and Bouganis, C.-S. (2023). Window-based early-exit cascades for uncertainty estimation: When deep ensembles are more efficient than single models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 17368-17380.
+Zagoruyko, S. and Komodakis, N. (2016). Wide residual networks. In Proceedings of the British Machine Vision Conference 2016, pages 87-1. British Machine Vision Association.
+Zhu, F., Zhang, X.-Y., Cheng, Z., and Liu, C.-L. (2023). Revisiting confidence estimation: Towards reliable failure prediction. IEEE Transactions on Pattern Analysis and Machine Intelligence.
+
+# A. Appendix
+
+# A.1. Additional Proofs
+
+Proposition A.1 (Consistency of $\hat{\alpha}_i$ ). Assume $\beta_i$ is the population rank percentile of the observation $(x_i, y_i)$ ranked by CSF. Under this definition, the estimator $\hat{\alpha}_i$ is consistent, converging to the limit
+
+$$
+\lim _ {n \to \infty} \hat {\alpha} _ {i} = - \ln (1 - \beta_ {i}).
+$$
+
+Proof. Given the sample size $n$ and sample rank $r_i$ , let us set $r_i = \beta_i' n$ for $\beta_i' \in (0, 1)$ and take the limit
+
+$$
+\begin{array}{l} \lim _ {n \to \infty} \left[ \mathbf {H} _ {n} - \mathbf {H} _ {n - \beta_ {i} ^ {\prime} n} \right] = \lim _ {n \to \infty} \left[ \psi (n + 1) - \psi \left(n - \beta_ {i} ^ {\prime} n + 1\right) \right] \\ = \lim _ {n \to \infty} \left[ \ln (n + 1) - \frac {1}{2 (n + 1)} - \ln \left(n - \beta_ {i} ^ {\prime} n + 1\right) + \frac {1}{2 \left(n - \beta_ {i} ^ {\prime} n + 1\right)} \right] \\ = \lim _ {n \to \infty} \left[ - \frac {1}{2 (n + 1)} + \frac {1}{2 \left(n - \beta_ {i} ^ {\prime} n + 1\right)} - \ln \left(1 - \beta_ {i} ^ {\prime}\right) + \frac {\beta_ {i} ^ {\prime}}{n + 1} \right] \tag {27} \\ = - \lim _ {n \to \infty} \ln \left(1 - \beta_ {i} ^ {\prime}\right) \\ = - \ln (1 - \beta_ {i}), \\ \end{array}
+$$
+
+where the second equality uses the asymptotic result that $\psi (n)\to \ln n - \frac{1}{2n}$ as $n\to \infty$ , and the final step follows since the empirical rank percentile $\beta_i^{\prime} = r_i / n$ converges to the population percentile $\beta_i$ .
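+As a quick numerical illustration of this limit (not part of the proof), one can compare $H_n - H_{n - r}$ at $r = \lfloor \beta n \rfloor$ with $-\ln(1 - \beta)$ for growing $n$:
+
+```python
+import math
+
+def harmonic(n):
+    return sum(1.0 / k for k in range(1, n + 1))
+
+beta = 0.7
+gaps = []
+for n in (100, 1000, 10000):
+    r = int(beta * n)                          # sample rank r = beta * n
+    est = harmonic(n) - harmonic(n - r)        # alpha_hat at that rank
+    gaps.append(abs(est - (-math.log(1 - beta))))
+    print(n, est)
+print("limit:", -math.log(1 - beta))           # -ln(1 - 0.7) = 1.2040...
+```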
+
+Proposition A.2 (Bias of $\hat{\alpha}_i^{se}$ ). The bias of the weight estimator $\hat{\alpha}_i^{se}$ corresponding to the SELE score is given by
+
+$$
+\operatorname{Bias}\left(\hat{\alpha}_i^{se} \mid G(x_i) = \beta_i\right) = \sum_{j = 1}^{n} \frac{j}{n} C_{n - 1}^{j - 1} \beta_i^{j - 1} \left(1 - \beta_i\right)^{n - j} + \ln\left(1 - \beta_i\right).
+$$
+
+Proof. From Sec. A.2, the expected $\hat{\alpha}_i^{se}$ associated with $(x_i,y_i)$ is given by
+
+$$
+\mathbb{E}\left[\hat{\alpha}_i^{se} \mid G(x_i) = \beta_i\right] = \sum_{j = 1}^{n} \frac{j}{n} \Pr\left(r_i = j \mid G(x_i) = \beta_i\right).
+$$
+
+Since $r_i - 1$ given $G(x_i) = \beta_i$ follows a binomial distribution $\mathrm{Bin}(n - 1, \beta_i)$ , we obtain:
+
+$$
+\operatorname{Bias}\left(\hat{\alpha}_i^{se} \mid G(x_i) = \beta_i\right) = \mathbb{E}\left[\hat{\alpha}_i^{se} \mid G(x_i) = \beta_i\right] - \alpha_i = \sum_{j = 1}^{n} \frac{j}{n} \mathrm{C}_{n - 1}^{j - 1} \beta_i^{j - 1} \left(1 - \beta_i\right)^{n - j} + \ln\left(1 - \beta_i\right), \tag {28}
+$$
+
+which concludes our proof.
+
+Proposition A.3 (MSE of $\hat{\alpha}_i^{\prime}$ ). The MSE of $\hat{\alpha}_i^{\prime}$ is given by
+
+$$
+\operatorname {M S E} \left(\hat {\alpha} _ {i} ^ {\prime}\right) = \psi^ {\prime} (n + 1 - r _ {i}) - \psi^ {\prime} (n + 1) + \left(\ln \left(1 - \frac {r _ {i}}{n + 1}\right) + H _ {n} - H _ {n - r _ {i}}\right) ^ {2} \asymp \frac {\beta_ {i}}{n \left(1 - \beta_ {i}\right) + 1}. \tag {29}
+$$
+
+Proof. From the result in Proposition 3.1, we calculate the MSE as follows:
+
+$$
+\mathrm {M S E} (\hat {\alpha} _ {i} ^ {\prime}) = \mathbb {E} _ {\beta_ {i}} \left[ (\hat {\alpha} _ {i} ^ {\prime} + \ln (1 - \beta_ {i})) ^ {2} \right],
+$$
+
+which becomes:
+
+$$
+\mathrm {M S E} (\hat {\alpha} _ {i} ^ {\prime}) = \int_ {0} ^ {1} \left(- \ln \left(1 - \frac {r _ {i}}{n + 1}\right) + \ln (1 - \beta_ {i})\right) ^ {2} d \mathbf {P} (\beta_ {i}).
+$$
+
+We can break this into two parts, denoted $M$ and $N$ :
+
+$$
+\mathrm {M S E} (\hat {\alpha} _ {i} ^ {\prime}) = M + N,
+$$
+
+where
+
+$$
+M = \int_ {0} ^ {1} \ln^ {2} (1 - \beta_ {i}) d P (\beta_ {i}) = \left(H _ {n} - H _ {n - r _ {i}}\right) ^ {2} + \psi^ {\prime} (n + 1 - r _ {i}) - \psi^ {\prime} (n + 1),
+$$
+
+and
+
+$$
+N = \ln^ {2} \left(1 - \frac {r _ {i}}{n + 1}\right) + 2 \ln \left(1 - \frac {r _ {i}}{n + 1}\right) \left(H _ {n} - H _ {n - r _ {i}}\right).
+$$
+
+Here, $d\mathbf{P}(\beta_i)$ refers to the integration with respect to the probability measure induced by $\beta_{i}\sim \mathrm{Beta}(r_{i},n + 1 - r_{i})$
+
+Now, by combining $M$ and $N$ , we obtain the MSE:
+
+$$
+\mathrm {M S E} \left(\hat {\alpha} _ {i} ^ {\prime}\right) = \psi^ {\prime} (n + 1 - r _ {i}) - \psi^ {\prime} (n + 1) + Q ^ {2},
+$$
+
+where
+
+$$
+Q = \ln \left(1 - \frac {r _ {i}}{n + 1}\right) + H _ {n} - H _ {n - r _ {i}}.
+$$
+
+Using the inequalities of the harmonic numbers, $\gamma +\ln (n)\leq \mathrm{H}_n\leq \ln (n + 1) + \gamma$ , we obtain the following bounds:
+
+$$
+0 \leq \mathrm {H} _ {n} - \mathrm {H} _ {n - r _ {i}} \leq \ln (n + 1) - \ln (n - r _ {i}).
+$$
+
+This leads to the upper bound for $Q$ :
+
+$$
+Q \leq \ln \left(1 + \frac {1}{n - r _ {i}}\right) \leq \frac {1}{n - r _ {i}},
+$$
+
+since it holds that $\ln (1 + x) \leq x$ for $x > -1$ . For the remaining term, we use the same approach as in Proposition 3.5 and obtain the following estimate:
+
+$$
+\psi^{\prime}(n + 1 - r_i) - \psi^{\prime}(n + 1) \asymp \frac{\beta_i}{n\left(1 - \beta_i\right) + 1}.
+$$
+
+Combining this with the upper bound for $Q$ , we obtain the final bound for the MSE:
+
+$$
+\operatorname {M S E} \left(\hat {\alpha} _ {i} ^ {\prime}\right) \asymp \frac {\beta_ {i}}{n \left(1 - \beta_ {i}\right) + 1} \quad \forall \beta_ {i} \in (0, 1) \tag {30}
+$$
+
+as the MSE is dominated by this remaining term.
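+The identity used for $M$ above can be checked by simulation. For integer arguments the trigamma difference reduces to the finite sum $\psi'(n+1-r_i) - \psi'(n+1) = \sum_{k=n+1-r_i}^{n} 1/k^2$, and Python's standard library can sample the Beta distribution directly (a sanity check with arbitrary illustrative values of $n$ and $r_i$):
+
+```python
+import math, random
+
+n, r = 50, 35
+H = lambda m: sum(1.0 / k for k in range(1, m + 1))   # harmonic number
+# psi'(n+1-r) - psi'(n+1) = sum_{k=n+1-r}^{n} 1/k^2 for integer arguments
+trigamma_diff = sum(1.0 / k**2 for k in range(n + 1 - r, n + 1))
+closed_form = (H(n) - H(n - r)) ** 2 + trigamma_diff
+
+# Monte Carlo estimate of E[ln^2(1 - beta)] with beta ~ Beta(r, n+1-r)
+rng = random.Random(0)
+mc = sum(math.log(1 - rng.betavariate(r, n + 1 - r)) ** 2
+         for _ in range(200_000)) / 200_000
+print(closed_form, mc)
+```
+
+The two values agree up to Monte Carlo noise, since the closed form is exactly the second moment of $\ln(1-\beta)$ under the Beta law.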
+
+Proposition A.4 $(\frac{1}{n}\sum_{i = 1}^{n}\mathrm{MSE}(\hat{\alpha}_i)$ or $\frac{1}{n}\sum_{i = 1}^{n}\mathrm{MSE}(\hat{\alpha}_i'))$ . The average of the sum of the MSEs of the weight estimators $\hat{\alpha}_i$ or $\hat{\alpha}_i^\prime$ is tightly bounded by $\mathcal{O}\left(\frac{\ln(n)}{n}\right)$
+
+Proof. From the result in Proposition 3.5, we derive:
+
+$$
+\begin{array}{l} \frac {1}{n} \sum_ {i = 1} ^ {n} \operatorname {M S E} (\hat {\alpha} _ {i}) \rightarrow \frac {1}{n} \sum_ {i = 1} ^ {n} \frac {\beta_ {i}}{n (1 - \beta_ {i}) + 1} \\ \rightarrow \int_ {0} ^ {1} \frac {\beta}{n (1 - \beta) + 1} d \beta . \tag {31} \\ = \frac {(n + 1) \ln (n + 1)}{n ^ {2}} - \frac {1}{n} \\ = \mathcal {O} \left(\frac {\ln (n)}{n}\right) \\ \end{array}
+$$
+
+The same result can be obtained for $\hat{\alpha}_i^{\prime}$ , thereby concluding our proof.
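+The closed form of the integral in Eq. (31) can be verified against simple midpoint quadrature (a numerical check, not part of the argument):
+
+```python
+import math
+
+def check(n, steps=200_000):
+    # Midpoint rule for \int_0^1 beta / (n(1 - beta) + 1) d(beta)
+    h = 1.0 / steps
+    num = sum(((k + 0.5) * h) / (n * (1.0 - (k + 0.5) * h) + 1.0)
+              for k in range(steps)) * h
+    closed = (n + 1) * math.log(n + 1) / n**2 - 1.0 / n
+    return num, closed
+
+results = [(n, *check(n)) for n in (10, 100, 1000)]
+for n, num, closed in results:
+    print(n, num, closed)   # numerical vs. closed form, O(ln(n)/n)
+```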
+
+
+
+# A.2. SELE score
+
+Franc et al. (2023) gave another expression of the empirical AURC in Eq. (10) that can be interpreted as an arithmetic mean of the empirical selective risks corresponding to the coverage spread evenly over the interval $[0, 1]$ with step $\frac{1}{n}$ . In addition, they proposed a coarse lower bound, referred to as the SELE score given by
+
+$$
+\Delta_ {\text {s e l e}} (f) = \frac {1}{n ^ {2}} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {n} \ell \left(f \left(x _ {i}\right), y _ {i}\right) \mathbb {I} [ g \left(x _ {i}\right) \geq g \left(x _ {j}\right) ], \tag {32}
+$$
+
+as an alternative to the empirical AURC. They claim that $2\Delta_{\mathrm{sele}}(f)$ is an upper bound on the empirical AURC, but below we demonstrate that this is not the case. A naive implementation of this metric requires $\mathcal{O}(n^{2})$ operations, but we can rewrite it in a form that can be computed in $\mathcal{O}(n\ln (n))$ using the same ranking trick as for the empirical AURC:
+
+$$
+\Delta_ {\text {s e l e}} (f) = \sum_ {i = 1} ^ {n} \frac {r _ {i}}{n ^ {2}} \ell \left(f \left(x _ {i}\right), y _ {i}\right). \tag {33}
+$$
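+That the $\mathcal{O}(n^2)$ double sum in Eq. (32) and the rank-based form in Eq. (33) coincide can be verified directly (a sketch with synthetic, tie-free scores and random 0/1 losses):
+
+```python
+import random
+
+def sele_naive(scores, losses):
+    # O(n^2): the indicator counts samples j with g(x_j) <= g(x_i)
+    n = len(scores)
+    return sum(losses[i] * sum(scores[i] >= scores[j] for j in range(n))
+               for i in range(n)) / n**2
+
+def sele_rank(scores, losses):
+    # O(n log n): r_i is the ascending rank of g(x_i) (no ties assumed)
+    n = len(scores)
+    order = sorted(range(n), key=lambda i: scores[i])
+    rank = {i: r for r, i in enumerate(order, start=1)}
+    return sum(rank[i] / n**2 * losses[i] for i in range(n))
+
+rng = random.Random(0)
+scores = [rng.random() for _ in range(200)]
+losses = [rng.choice([0.0, 1.0]) for _ in range(200)]
+a, b = sele_naive(scores, losses), sele_rank(scores, losses)
+print(a, b)   # identical up to floating-point summation order
+```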
+
+For this metric, $\hat{\alpha}_i^{se} = \frac{r_i}{n}$ serves as the estimate of $\alpha (\cdot)$ . For the SELE score in Eq. (33), we can show that $2\Delta_{\mathrm{sele}}(f)$ does not always serve as an upper bound for the empirical AURC, meaning that (Franc et al., 2023, Theorem 8) does not hold. We can find a simple counterexample such that the following inequality holds
+
+$$
+\widehat {\mathrm {A U R C}} _ {p} (f) > 2 \Delta_ {\text {s e l e}} (f). \tag {34}
+$$
+
+Given a dataset of 5 observations sorted according to the CSF, $\{x_i, y_i\}_{i=1}^5$ , it is possible to find a classifier $f$ such that $\ell(f(x_i), y_i) = 0$ for $i = 1, \dots, 4$ and $\ell(f(x_5), y_5) > 0$ . Then we would have:
+
+$$
+\hat{\alpha}_5 = H_5 - H_0 \approx 2.2833 > 2 \hat{\alpha}_5^{se} = 2, \tag {35}
+$$
+
+which leads to
+
+$$
+\widehat{\mathrm{AURC}}_p(f) = \frac{1}{5} \sum_{i = 1}^{5} \hat{\alpha}_i \ell\left(f(x_i), y_i\right) > \frac{1}{5} \sum_{i = 1}^{5} 2 \hat{\alpha}_i^{se} \ell\left(f(x_i), y_i\right) = 2 \Delta_{\text{sele}}(f). \tag {36}
+$$
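+This counterexample can be replayed numerically (a sketch; the plug-in estimator is taken as $\frac{1}{n}\sum_i \hat{\alpha}_i \ell_i$ with $\hat{\alpha}_i = H_n - H_{n - r_i}$, and the samples are already sorted by CSF):
+
+```python
+# n = 5 samples sorted ascending by confidence; only the most
+# confident one (rank 5) is misclassified.
+n = 5
+losses = [0.0, 0.0, 0.0, 0.0, 1.0]
+H = [0.0]
+for k in range(1, n + 1):
+    H.append(H[-1] + 1.0 / k)
+
+aurc_hat = sum((H[n] - H[n - r]) * losses[r - 1]
+               for r in range(1, n + 1)) / n
+sele = sum(r / n**2 * losses[r - 1] for r in range(1, n + 1))
+print(aurc_hat, 2 * sele)   # alpha_hat_5 = H_5, so AURC_hat exceeds 2*SELE
+```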
+
+In their proof of Theorem 8, $\frac{a_i}{b_i}$ is a non-decreasing sequence, so they cannot apply Lemma 15 to derive their result.
+
+# A.3. A Generalized Approach to Epistemic Risk for Bayesian Models
+
+In this work, we mainly focus on the estimation of AURC for a fixed model. However, when considering a Bayesian model with $f \sim \mathbb{P}(f|\mathcal{D})$ , a natural approach is to consider the following expectation:
+
+$$
+\mathbb {E} _ {f \sim \mathbb {P} (f | \mathcal {D})} [ \mathrm {A U R C} (f) ] = \mathbb {E} _ {f \sim \mathbb {P} (f | \mathcal {D})} \left[ \mathbb {E} _ {x} [ \mathcal {R} (f, g) \mid \tau = g (x) ] \right] \tag {37}
+$$
+
+where $\tau$ is the threshold value. This can be computed directly using our method with Monte Carlo sampling. By applying Fubini's Theorem, we can exchange the order of the two expectations, leading to
+
+$$
+\mathbb {E} _ {f \sim \mathbb {P} (f | \mathcal {D})} [ \operatorname {A U R C} (f) ] = \mathbb {E} _ {x} \left[ \mathbb {E} _ {f \sim \mathbb {P} (f | \mathcal {D})} [ \mathcal {R} (f, g) \mid \tau = g (x) ] \right]. \tag {38}
+$$
+
+Since $g(x)$ depends on $f(x)$ under the posterior distribution $\mathbb{P}(f|\mathcal{D})$ , and the predictions are made through a full Bayesian framework, this formulation allows the evaluation of AURC in a way analogous to the standard AURC for a fixed model. We can envision several ways to define potential quantities of interest based on model uncertainty. Many of these quantities could be potentially connected to AURC and epistemic risk, and exploring these relationships could open up valuable avenues for further investigation.
+
+# A.4. Confidence Score Functions (CSFs)
+
+The CSFs are generally defined as functions of the predicted probabilities $\mathbf{p}$ , obtained by passing the logits $\mathbf{z}$ produced by the model for the input $x$ through the softmax function $\sigma(\cdot)$ , i.e., $\mathbf{p} = \sigma(\mathbf{z}) \in \mathbb{R}^K$ . The specific forms of these CSFs are outlined as follows:
+
+Table S2. Commonly Used CSFs
+
+| Method | Equation |
+| --- | --- |
+| MSP | $g(\mathbf{z}) = \max_{i=1}^{K} p_i$ |
+| MaxLogit | $g(\mathbf{z}) = \max_{i=1}^{K} z_i$ |
+| Softmax Margin | $g(\mathbf{z}) = p_i - \max_{j \neq i} p_j$ with $i = \arg\max_{i=1}^{K} p_i$ |
+| Negative Entropy | $g(\mathbf{z}) = \sum_{i=1}^{K} p_i \log p_i$ |
+| MaxLogit- $\ell_p$ Norm | $g(\mathbf{z}) = \|\mathbf{z}\|_p$ |
+| Negative Gini Score | $g(\mathbf{z}) = -1 + \sum_{i=1}^{K} p_i^2$ |
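+Several of these CSFs can be computed directly from the logits (an illustrative sketch following the table's definitions, with natural logarithms):
+
+```python
+import math
+
+def softmax(z):
+    m = max(z)                       # shift for numerical stability
+    e = [math.exp(v - m) for v in z]
+    s = sum(e)
+    return [v / s for v in e]
+
+def msp(z):                          # maximum softmax probability
+    return max(softmax(z))
+
+def softmax_margin(z):               # top-1 minus runner-up probability
+    p = sorted(softmax(z), reverse=True)
+    return p[0] - p[1]
+
+def negative_entropy(z):             # sum_i p_i log p_i (higher = more confident)
+    return sum(p * math.log(p) for p in softmax(z) if p > 0)
+
+def negative_gini(z):                # -1 + sum_i p_i^2
+    return -1.0 + sum(p * p for p in softmax(z))
+
+z = [2.0, 0.5, -1.0]
+print(msp(z), softmax_margin(z), negative_entropy(z), negative_gini(z))
+```
+
+Each score is larger when the classifier is more confident, as required for a CSF used to rank samples in selective prediction.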
+
+
+Figure S6. (a) 0/1 Loss; (b) CE Loss.
+Figure S7. Comparison of $\mathrm{AURC}_a$ and $\mathrm{AURC}_p$ under different loss functions. The $\mathrm{AURC}_p$ and $\mathrm{AURC}_a$ are computed across the test set using Eqs. (4) and (8), respectively. Subfigure (a) shows the 0/1 loss, while subfigure (b) depicts the CE loss.
+
+# A.5. Empirical Comparison Between $\mathbf{AURC}_a$ and $\mathbf{AURC}_p$
+
+In Section 3.4, we demonstrate the theoretical equivalence of these two metrics; here, we provide an empirical validation of this equivalence. We evaluate the two population AURC metrics using either the 0/1 or the CE loss across 30 different models on the test sets of CIFAR10/100. We also assess these metrics for the previously mentioned models on the test sets of the Amazon and ImageNet datasets. The results are reported in Fig. S7, where the two population AURC metrics are shown to be identical to each other. We further assessed the results using a two-sided t-test, which yielded p-values of 0.9981 and 0.998 for the 0/1 and CE loss, respectively. These values indicate that we fail to reject the null hypothesis that the two metrics are identical.
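Eqs. (4) and (8) are defined in the main text and not reproduced here; as a self-contained analogue of the kind of equivalence being tested, the sketch below checks numerically that the empirical AURC can be written either as an average of cumulative selective risks or as a per-sample loss weighted by harmonic tail weights (an algebraic identity; all function names are illustrative):

```python
import numpy as np

def aurc_cumulative(losses, confidence):
    """Average of the cumulative selective risks over all coverage levels."""
    order = np.argsort(-confidence)              # most confident first
    l = losses[order]
    return float((np.cumsum(l) / np.arange(1, len(l) + 1)).mean())

def aurc_weighted(losses, confidence):
    """Same quantity as a weighted sum: the sample at confidence rank i
    receives weight (1/n) * sum_{k=i}^{n} 1/k."""
    n = len(losses)
    order = np.argsort(-confidence)
    ranks = np.empty(n, dtype=int)
    ranks[order] = np.arange(1, n + 1)
    harmonic_tail = np.cumsum((1.0 / np.arange(1, n + 1))[::-1])[::-1]
    weights = harmonic_tail[ranks - 1] / n
    return float(np.sum(weights * losses))

rng = np.random.default_rng(0)
losses = rng.random(500)                         # e.g. CE losses
confidence = rng.random(500)
a = aurc_cumulative(losses, confidence)
b = aurc_weighted(losses, confidence)
```

Exchanging the two summations, $\frac{1}{n}\sum_{k=1}^{n}\frac{1}{k}\sum_{i \leq k} \ell_{(i)} = \frac{1}{n}\sum_{i=1}^{n} \ell_{(i)} \sum_{k=i}^{n}\frac{1}{k}$, which is why the two routines agree.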
+
+# A.6. Figures
+
+
+Figure S8. The bias of the weights estimator as a function of $\beta$ for different sample sizes $n$ . The bias is computed based on the results in Props. 3.3, 3.4, and A.2. Panels (a)-(h) correspond to $n = 8, 16, 32, 64, 128, 256, 512, 1024$.
+
+
+
+
+
+
+
+
+
+
+Figure S9. (CIFAR10) Bias of different finite sample estimators evaluated with 0/1 or CE loss. For each model architecture, we utilize a pre-trained model and randomly divide the test set into batch samples of size $n$ . Subsequently, we compute the mean and standard deviation of the bias for various estimators applied to these batch samples. Panels: (a) PreResNet20 (0/1), (b) PreResNet110 (0/1), (c) PreResNet164 (0/1), (d) WideResNet28x10 (0/1), (e) PreResNet20 (CE), (f) PreResNet110 (CE), (g) PreResNet164 (CE), (h) WideResNet28x10 (CE), (i) PreResNet56 (0/1), (j) PreResNet56 (CE), (k) VGG16 (0/1), (l) VGG16 (CE).
+
+
+
+
+
+
+
+
+
+
+Figure S10. (CIFAR100) Bias of different finite sample estimators evaluated with 0/1 or CE loss. For each model architecture, we utilize a pre-trained model and randomly divide the test set into batch samples of size $n$ . Subsequently, we compute the mean and standard deviation of the bias for various estimators applied to these batch samples. Panels: (a) PreResNet20 (0/1), (b) PreResNet110 (0/1), (c) PreResNet164 (0/1), (d) WideResNet28x10 (0/1), (e) PreResNet20 (CE), (f) PreResNet110 (CE), (g) PreResNet164 (CE), (h) WideResNet28x10 (CE), (i) PreResNet56 (0/1), (j) PreResNet56 (CE), (k) VGG16 (0/1), (l) VGG16 (CE).
+
+
+
+
+
+
+
+
+Figure S11. (Amazon) Bias of different finite sample estimators evaluated with 0/1 or CE loss. For each model architecture, we utilize a pre-trained model and randomly divide the test set into batch samples of size $n$ . Subsequently, we compute the mean and standard deviation of the bias for various estimators applied to these batch samples. Panels: (a) BERT (0/1), (b) D-BERT (0/1), (c) RoBERTa (0/1), (d) D-RoBERTa (0/1), (e) BERT (CE), (f) D-BERT (CE), (g) RoBERTa (CE), (h) D-RoBERTa (CE).
+
+
+
+
+
+
+
+
+
+
+Figure S12. (ImageNet) Bias of different finite sample estimators evaluated with 0/1 or CE loss. For each model architecture, we utilize a pre-trained model and randomly divide the test set into batch samples of size $n$ . Subsequently, we compute the mean and standard deviation of the bias for various estimators applied to these batch samples. Panels: (a) ViT-Small (0/1), (b) ViT-Large (0/1), (c) Swin-Tiny (0/1), (d) Swin-Base (0/1), (e) ViT-Small (CE), (f) ViT-Large (CE), (g) Swin-Tiny (CE), (h) Swin-Base (CE).
+
+
+Figure S13. (CIFAR10) MAE of different finite sample estimators evaluated with 0/1 or CE loss. For each model architecture, we compute the mean and standard deviation of the MAE across five distinct pre-trained models. The MAE for each model is calculated using batch samples divided from the test set. Panels: (a) PreResNet20 (0/1), (b) PreResNet110 (0/1), (c) PreResNet164 (0/1), (d) WideResNet28x10 (0/1), (e) PreResNet20 (CE), (f) PreResNet110 (CE), (g) PreResNet164 (CE), (h) WideResNet28x10 (CE), (i) VGG16BN (CE), (j) PreResNet56 (CE), (k) PreResNet56 (0/1).
+
+
+Figure S14. (CIFAR100) MAE of different finite sample estimators evaluated with 0/1 or CE loss. For each model architecture, we compute the mean and standard deviation of the MAE across five distinct pre-trained models. The MAE for each model is calculated using batch samples divided from the test set. Panels: (a) PreResNet20 (0/1), (b) PreResNet110 (0/1), (c) PreResNet164 (0/1), (d) WideResNet28x10 (0/1), (e) PreResNet20 (CE), (f) PreResNet110 (CE), (g) PreResNet164 (CE), (h) WideResNet28x10 (CE), (i) VGG16BN (CE), (j) PreResNet56 (CE), (k) PreResNet56 (0/1).
+
+
+
+
+
+
+
+
+Figure S15. Finite sample estimators on the CIFAR10 dataset with 0/1 loss. We utilize a pre-trained model and randomly divide the test set into batch samples of size $n$ . Subsequently, we compute the mean and variance of various estimators applied to these batch samples. Additionally, the population $\mathrm{AURC}_p$ is computed across all samples in the test set. Panels: (a) VGG16BN, (b) PreResNet20, (c) PreResNet56, (d) PreResNet110, (e) PreResNet164, (f) WideResNet28x10.
+
+
+
+
+
+
+
+
+Figure S16. Finite sample estimators on the CIFAR10 dataset with CE loss. We utilize a pre-trained model and randomly divide the test set into batch samples of size $n$ . Subsequently, we compute the mean and variance of various estimators applied to these batch samples. Additionally, the population $\mathrm{AURC}_p$ is computed across all samples in the test set. Panels: (a) VGG16BN, (b) PreResNet20, (c) PreResNet56, (d) PreResNet110, (e) PreResNet164, (f) WideResNet28x10.
+
+
+
+
+
+
+
+
+Figure S17. Finite sample estimators on the CIFAR100 dataset with 0/1 loss. We utilize a pre-trained model and randomly divide the test set into batch samples of size $n$ . Subsequently, we compute the mean and variance of various estimators applied to these batch samples. Additionally, the population $\mathrm{AURC}_p$ is computed across all samples in the test set. Panels: (a) VGG16BN, (b) PreResNet20, (c) PreResNet56, (d) PreResNet110, (e) PreResNet164, (f) WideResNet28x10.
+
+
+Figure S18. (Amazon) Finite sample estimators with 0/1 or CE loss. We utilize a pre-trained model and randomly divide the test set into batch samples of size $n$ . Subsequently, we compute the mean and variance of various estimators applied to these batch samples. Additionally, the population $\mathrm{AURC}_p$ is computed across all samples in the test set. Panels: (a) D-BERT (0/1), (b) D-RoBERTa (0/1), (c) D-BERT (CE), (d) D-RoBERTa (CE).
+
+
+
+
+
+
+
+
+
+
+Figure S19. (ImageNet) Finite sample estimators with 0/1 or CE loss. We utilize a pre-trained model and randomly divide the test set into batch samples of size $n$ . Subsequently, we compute the mean and variance of various estimators applied to these batch samples. Additionally, the population $\mathrm{AURC}_p$ is computed across all samples in the test set. Panels: (a) ViT-Small (0/1), (b) ViT-Large (0/1), (c) Swin-Tiny (0/1), (d) Swin-Base (0/1), (e) ViT-Small (CE), (f) ViT-Large (CE), (g) Swin-Tiny (CE), (h) Swin-Base (CE).
+
+
+
+
+
+
+
+
+Figure S20. (CIFAR10) MSE of different finite sample estimators with 0/1 loss. For each model architecture, we calculate the MSE of the estimators using a pre-trained model on batch samples derived from the test set. Panels: (a) VGG16BN, (b) PreResNet20, (c) PreResNet56, (d) PreResNet110, (e) PreResNet164, (f) WideResNet28x10.
+
+
+
+
+Figure S21. (CIFAR10) MSE of different finite sample estimators with CE loss. For each model architecture, we calculate the MSE of the estimators using a pre-trained model on batch samples derived from the test set. Panels: (a) VGG16BN, (b) PreResNet20, (c) PreResNet56, (d) PreResNet110, (e) PreResNet164, (f) WideResNet28x10.
+
+
+
+
+
+
+
+
+Figure S22. (CIFAR100) MSE of different finite sample estimators with 0/1 loss. For each model architecture, we calculate the MSE of the estimators using a pre-trained model on batch samples derived from the test set. Panels: (a) VGG16BN, (b) PreResNet20, (c) PreResNet56, (d) PreResNet110, (e) PreResNet164, (f) WideResNet28x10.
+
+
+Figure S23. (CIFAR100) MSE of different finite sample estimators with CE loss. For each model architecture, we calculate the MSE of the estimators using a pre-trained model on batch samples derived from the test set. Panels: (a) VGG16BN, (b) PreResNet20, (c) PreResNet56, (d) PreResNet110, (e) PreResNet164, (f) WideResNet28x10.
+
+
+
+
+
+
+
+
+
+
+Figure S24. (Amazon) MSE of different finite sample estimators with 0/1 or CE loss. For each model architecture, we calculate the MSE of the estimators using a pre-trained model on batch samples derived from the test set. Panels: (a) BERT (0/1), (b) D-BERT (0/1), (c) RoBERTa (0/1), (d) D-RoBERTa (0/1), (e) BERT (CE), (f) D-BERT (CE), (g) RoBERTa (CE), (h) D-RoBERTa (CE).
+
+
+
+
+
+
+
+
+
+
+Figure S25. (ImageNet) MSE of finite sample estimators with 0/1 or CE loss. For each model architecture, we calculate the MSE of the estimators using a pre-trained model on batch samples derived from the test set. Panels: (a) ViT-Small (0/1), (b) Swin-Tiny (0/1), (c) ViT-Small (CE), (d) Swin-Tiny (CE), (e) ViT-Large (CE), (f) ViT-Large (0/1).
+
+
+
+
+
+
+
+
+Figure S26. (Amazon) Finite sample estimators that utilize 0/1 or CE loss with different CSFs. Panels: (a) D-BERT (0/1), (b) D-RoBERTa (0/1), (c) D-BERT (CE), (d) D-RoBERTa (CE), (e) RoBERTa (0/1), (f) RoBERTa (CE).
+Figure S27. (ImageNet) Finite sample estimators that utilize 0/1 loss with different CSFs. Panels: (a) ViT-Small, (b) ViT-Large, (c) Swin-Tiny, (d) Swin-Base.
+
+
+Figure S28. (ImageNet) Finite sample estimators that utilize CE loss with different CSFs. Panels: (a) ViT-Small, (b) ViT-Large, (c) Swin-Tiny, (d) Swin-Base.
\ No newline at end of file
diff --git a/anovelcharacterizationofthepopulationareaundertheriskcoveragecurveaurcandratesoffinitesampleestimators/images.zip b/anovelcharacterizationofthepopulationareaundertheriskcoveragecurveaurcandratesoffinitesampleestimators/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..eb97c334fe9ba2a7eea000924f9d4234622b76b2
--- /dev/null
+++ b/anovelcharacterizationofthepopulationareaundertheriskcoveragecurveaurcandratesoffinitesampleestimators/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8daa9c9bf7fa23d87386e47bde0ab8c791f3c64a868f2d4417b3a3649004a43d
+size 2412448
diff --git a/anovelcharacterizationofthepopulationareaundertheriskcoveragecurveaurcandratesoffinitesampleestimators/layout.json b/anovelcharacterizationofthepopulationareaundertheriskcoveragecurveaurcandratesoffinitesampleestimators/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..447745938cd5895d715b81ca1c26fc6655608120
--- /dev/null
+++ b/anovelcharacterizationofthepopulationareaundertheriskcoveragecurveaurcandratesoffinitesampleestimators/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bcece5a9143b879ef6319967cc25922ce8e1aae972a443f664761c14acad312f
+size 1307550
diff --git a/aparameterfreeandnearoptimalzerothorderalgorithmforstochasticconvexoptimization/3bf9ee67-913b-4ba1-97a1-7e51e1faef5b_content_list.json b/aparameterfreeandnearoptimalzerothorderalgorithmforstochasticconvexoptimization/3bf9ee67-913b-4ba1-97a1-7e51e1faef5b_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..55830cbedd8b8f29dc15c919e20e84aab9c18d4f
--- /dev/null
+++ b/aparameterfreeandnearoptimalzerothorderalgorithmforstochasticconvexoptimization/3bf9ee67-913b-4ba1-97a1-7e51e1faef5b_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:408000f83c5a9604f24187e539a9afbbf69161b8c91587579e5d0d4faa2b73ce
+size 194512
diff --git a/aparameterfreeandnearoptimalzerothorderalgorithmforstochasticconvexoptimization/3bf9ee67-913b-4ba1-97a1-7e51e1faef5b_model.json b/aparameterfreeandnearoptimalzerothorderalgorithmforstochasticconvexoptimization/3bf9ee67-913b-4ba1-97a1-7e51e1faef5b_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..d40a7abb70de9c2ad90e91f59b752932fc339036
--- /dev/null
+++ b/aparameterfreeandnearoptimalzerothorderalgorithmforstochasticconvexoptimization/3bf9ee67-913b-4ba1-97a1-7e51e1faef5b_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:225238524a2b614574adf19c7cead55093f51b239f40097211b2df51f0f33b64
+size 231577
diff --git a/aparameterfreeandnearoptimalzerothorderalgorithmforstochasticconvexoptimization/3bf9ee67-913b-4ba1-97a1-7e51e1faef5b_origin.pdf b/aparameterfreeandnearoptimalzerothorderalgorithmforstochasticconvexoptimization/3bf9ee67-913b-4ba1-97a1-7e51e1faef5b_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..a6f20d17f98432e53c9109a144305aaf94f6aabd
--- /dev/null
+++ b/aparameterfreeandnearoptimalzerothorderalgorithmforstochasticconvexoptimization/3bf9ee67-913b-4ba1-97a1-7e51e1faef5b_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:258dd057f7fd4cbde436ddb3eed83c4a9436039e32181c3624f7e7477007869b
+size 948584
diff --git a/aparameterfreeandnearoptimalzerothorderalgorithmforstochasticconvexoptimization/full.md b/aparameterfreeandnearoptimalzerothorderalgorithmforstochasticconvexoptimization/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..0a98a53015ad651b66e6ec528b63131f60d68ed1
--- /dev/null
+++ b/aparameterfreeandnearoptimalzerothorderalgorithmforstochasticconvexoptimization/full.md
@@ -0,0 +1,1154 @@
+# A Parameter-Free and Near-Optimal Zeroth-Order Algorithm for Stochastic Convex Optimization
+
+Kunjie Ren$^{1}$ Luo Luo$^{1\,2}$
+
+# Abstract
+
+This paper studies zeroth-order optimization for stochastic convex minimization problems. We propose a parameter-free stochastic zeroth-order method (POEM), which introduces a stepsize scheme based on the distance over finite difference and an adaptive smoothing parameter. Our theoretical analysis shows that POEM achieves near-optimal stochastic zeroth-order oracle complexity. Furthermore, numerical experiments demonstrate that POEM outperforms existing zeroth-order methods in practice.
+
+# 1. Introduction
+
+This paper studies the stochastic optimization problem
+
+$$
+\min _ {\mathbf {x} \in \mathcal {X}} f (\mathbf {x}) \triangleq \mathbb {E} _ {\boldsymbol {\xi} \sim \Xi} [ F (\mathbf {x}; \boldsymbol {\xi}) ] \tag {1}
+$$
+
+where the domain $\mathcal{X} \subseteq \mathbb{R}^d$ is a compact convex set, the random variable $\xi$ follows a distribution $\Xi$ , and the stochastic component function $F(\mathbf{x}; \boldsymbol{\xi})$ is convex and Lipschitz continuous in $\mathbf{x}$ over $\mathcal{X}$ for any given $\boldsymbol{\xi}$ . We focus on stochastic zeroth-order optimization for solving Problem (1), where the algorithm can only query stochastic function values. This setting is particularly relevant when accessing (stochastic) first-order information is expensive or infeasible. Such scenarios arise in various applications, including bandit optimization (Flaxman et al., 2004; Agarwal et al., 2010; Shamir, 2017), adversarial training (Goodfellow et al., 2014; Shaham et al., 2018), reinforcement learning (Balasubramanian & Ghadimi, 2018; Mania et al., 2018), and other black-box models (Liu et al., 2016; Ilyas et al., 2018).
+
+Finite difference methods are widely used in zeroth-order optimization, where they estimate the first-order information
+
+$^{1}$ School of Data Science, Fudan University, Shanghai, China $^{2}$ Shanghai Key Laboratory for Contemporary Applied Mathematics, Shanghai, China. Correspondence to: Luo Luo .
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+of the objective function via random directions (Kiefer & Wolfowitz, 1952; Ghadimi & Lan, 2013; Duchi et al., 2015; Nesterov & Spokoiny, 2017; Nazari et al., 2020; Gasnikov et al., 2022; Lin et al., 2022; Rando et al., 2024; Chen et al., 2023; Kornowski & Shamir, 2024). For stochastic convex problems with Lipschitz continuous components, Nesterov & Spokoiny (2017) first developed random search methods that achieve suboptimal convergence rates. Subsequently, Duchi et al. (2015) proposed a stochastic algorithm that constructs finite differences using two random sequences, improving the dependence on problem dimension compared to Nesterov & Spokoiny (2017). They also established a lower bound, demonstrating that their algorithm achieves near-optimal stochastic zeroth-order oracle (SZO) complexity. Building on this, Shamir (2017) introduced an algorithm using a single random sequence drawn from the uniform distribution over the unit ball, which is optimal and easy to implement. Finite difference methods have also been broadly applied to smooth optimization problems (Ghadimi & Lan, 2013; Nesterov & Spokoiny, 2017; Duchi et al., 2012b; Balasubramanian & Ghadimi, 2018), as well as to nonsmooth and nonconvex settings (Lin et al., 2022; Chen et al., 2023; Kornowski & Shamir, 2024). Despite these advances, existing zeroth-order optimization methods (Kiefer & Wolfowitz, 1952; Ghadimi & Lan, 2013; Duchi et al., 2015; Nesterov & Spokoiny, 2017; Nazari et al., 2020; Gasnikov et al., 2022; Lin et al., 2022; Rando et al., 2024) face several limitations. A key challenge is their high sensitivity to parameter settings. Achieving optimal convergence rates typically requires carefully tuned step sizes that depend on prior knowledge of problem properties (such as the Lipschitz constant) and the iteration budget. In addition, the smoothing parameter used in the finite difference often depends on the target accuracy or decays rapidly, which can lead to numerical instability in practice.
+
+We aim to develop adaptive stochastic optimization methods that remove the need for parameter tuning. Most existing work focuses on first-order methods. For example, Rolinek & Martius (2018); Vaswani et al. (2019); Paquette & Scheinberg (2020) introduced line search techniques for stochastic optimization. Tan et al. (2016); Berrada et al. (2020); Loizou et al. (2021); Wang et al. (2023) extended the Barzilai-Borwein (BB) step size (Barzilai & Borwein, 1988) and the
+
+Polyak step size (Polyak, 1987) to stochastic settings. For training deep neural networks, adaptive algorithms such as AdaGrad (Duchi et al., 2011), Adam (Kingma & Ba, 2014), and their variants (Tieleman, 2012; Zeiler, 2012; Shazeer & Stern, 2018; Wang et al., 2024; Zhang et al., 2025) exploited the specific problem structure and have achieved success in many applications. However, these methods still rely on appropriately chosen initialization parameters, which can significantly influence convergence behavior, both in theory and in practice.
+
+Ideally, we aim to design a parameter-free optimization method that achieves near-optimal convergence rates while requiring minimal knowledge of problem-specific properties (Streeter & McMahan, 2012; Defazio & Mishchenko, 2023; Lan et al., 2023; Li & Lan, 2023). Several parameter-free methods for stochastic convex optimization have been developed using online learning techniques (Luo & Schapire, 2015; Orabona & Pál, 2016; Cutkosky & Orabona, 2018; Bhaskara et al., 2020; Mhammedi & Koolen, 2020; Jacobsen & Cutkosky, 2022), though their implementations are often quite complex. In practice, Orabona & Tommasi (2017); Chen et al. (2022) applied coin-betting techniques within the classical stochastic gradient descent (SGD) framework, achieving strong empirical performance in training neural networks. Later, Carmon & Hinder (2022) showed that using a bisection step to adaptively determine the step size in SGD yields a parameter-free algorithm with near-optimal convergence rates. Building on this work, Ivgi et al. (2023a) proposed a parameter-free step size schedule called Distance over Gradients (DoG), which adjusts step sizes based on the distance from the initial point and the norm of stochastic gradients (You et al., 2017; Shazeer & Stern, 2018; Bernstein et al., 2020). DoG achieves near-optimal convergence rates and performs well in practice. However, all existing parameter-free methods are designed for first-order optimization. Extending these methods to the zeroth-order setting presents additional challenges, particularly the need to eliminate tuning for both the step size and the smoothing parameter, as well as to carefully control the dependence on the problem dimension in the convergence rates.
+
+In this paper, we propose a parameter-free stochastic zeroth-order method (POEM), which introduces a stepsize scheme based on the distance over finite difference and an adaptive smoothing parameter. For the stochastic convex optimization (1), we show that the initialization affects the convergence rates only by a logarithmic factor. We establish high-probability convergence guarantees, demonstrating that POEM achieves near-optimal SZO complexity. A comparison of POEM with related methods is presented in Table 1. We also study the problems with unbounded domains and show that an ideal parameter-free algorithm is impossible in such settings. Finally, we conduct numerical experiments to validate the practical effectiveness of POEM.
+
+# 2. Preliminaries
+
+In this section, we formalize the problem setting and introduce the smoothing technique in zeroth-order optimization.
+
+# 2.1. Notation and Assumptions
+
+Throughout this paper, we use $\|\cdot\|$ to denote the Euclidean norm. The unit ball is defined as $\mathbb{B}^d \triangleq \{\mathbf{u} \in \mathbb{R}^d : \|\mathbf{u}\| \leq 1\}$ and the unit sphere as $\mathbb{S}^{d-1} \triangleq \{\mathbf{v} \in \mathbb{R}^d : \|\mathbf{v}\| = 1\}$ . We denote by $\mathbb{U}(\mathbb{B}^d)$ and $\mathbb{U}(\mathbb{S}^{d-1})$ the uniform distributions on the unit ball and the unit sphere, respectively. Additionally, we use the notation $\tilde{\mathcal{O}}(\cdot)$ to suppress logarithmic factors.
+
+We make the following assumptions for Problem (1).
+
+Assumption 2.1. The domain $\mathcal{X} \subseteq \mathbb{R}^d$ is compact and convex. Furthermore, we denote the diameter of $\mathcal{X}$ by
+
+$$
+D _ {\mathcal {X}} \triangleq \max _ {\mathbf {x}, \mathbf {y} \in \mathcal {X}} \| \mathbf {x} - \mathbf {y} \| < \infty .
+$$
+
+Next, we define the Euclidean projection onto the domain.
+
+Definition 2.2. For any point $\mathbf{x} \in \mathbb{R}^d$ , the Euclidean projection onto the compact convex set $\mathcal{X} \subseteq \mathbb{R}^d$ is given by
+
+$$
+\Pi_{\mathcal{X}}(\mathbf{x})\triangleq \operatorname *{arg min}_{\mathbf{y}\in \mathcal{X}}\| \mathbf{x} - \mathbf{y}\| .
+$$
+
+Under Assumption 2.1, the objective function attains its minimum over the compact set $\mathcal{X}$ . Hence, we define the optimal solution to Problem (1) as follows.
+
+Definition 2.3. Let $\mathbf{x}_{\star} \in \mathcal{X}$ be an optimal solution to Problem (1) such that $f(\mathbf{x}_{\star}) = \min_{\mathbf{x} \in \mathcal{X}} f(\mathbf{x})$ .
+
+We aim for the iterative algorithm to find an approximate solution to Problem (1), which is defined as follows.
+
+Definition 2.4. A point $\hat{\mathbf{x}}$ is called an $\epsilon$ -suboptimal solution to Problem (1) if, for a given $\epsilon > 0$ , it satisfies
+
+$$
+f (\hat {\mathbf {x}}) - f \left(\mathbf {x} _ {\star}\right) \leq \epsilon .
+$$
+
+We also assume the stochastic component $F(\mathbf{x};\boldsymbol {\xi})$ is convex and Lipschitz continuous with respect to $\mathbf{x}$ .
+
+Assumption 2.5. The stochastic component $F(\mathbf{x};\boldsymbol {\xi})$ is convex in $\mathbf{x}$ for each fixed $\pmb{\xi}$ .
+
+Assumption 2.6. There exists a constant $L \geq 0$ such that for all $\mathbf{x}, \mathbf{y} \in \mathbb{R}^d$ , and any fixed $\xi$ , the following holds
+
+$$
+\left| F (\mathbf {x}; \boldsymbol {\xi}) - F (\mathbf {y}; \boldsymbol {\xi}) \right| \leq L \| \mathbf {x} - \mathbf {y} \|.
+$$
+
+We further assume that the algorithm for solving Problem (1) has access to a stochastic zeroth-order oracle that returns unbiased stochastic function value estimates at two points.
+
+Assumption 2.7. The stochastic zeroth-order oracle returns the stochastic evaluations $F(\mathbf{x};\pmb {\xi})$ and $F(\mathbf{y};\pmb {\xi})$ for given points $\mathbf{x}\in \mathbb{R}^d$ and $\mathbf{y}\in \mathbb{R}^{d}$ , such that $\mathbb{E}_{\pmb{\xi}}[F(\mathbf{x};\pmb {\xi})] = f(\mathbf{x})$ and $\mathbb{E}_{\pmb{\xi}}[F(\mathbf{y};\pmb {\xi})] = f(\mathbf{y})$ , where $\pmb{\xi}$ is drawn from $\Xi$ .
+
+Table 1. We present the SZO complexity, step size $\eta_t$ , and smoothing parameter $\mu_t$ for obtaining an $\epsilon$ -suboptimal solution to Problem (1), where $t$ denotes the iteration index, $T$ is the total iteration budget, and $\bar{r}_t$ and $G_t$ are defined in Algorithm 1.
+
+| Algorithms | Parameter-Free | SZO Complexity | $\eta_t$ | $\mu_t$ |
+| --- | --- | --- | --- | --- |
+| RSNSO$^{\dagger}$ (Nesterov & Spokoiny, 2017) | No | $\mathcal{O}(dL^2 s_0^2 / \epsilon^2)$ | $\frac{s_0}{dL\sqrt{T}}$ | $s_0 \sqrt{d/T}$ |
+| TPGE$^{\ddagger}$ (Duchi et al., 2015) | No | $\mathcal{O}(dL^2 D_{\mathcal{X}}^2 / \epsilon^2)$ | $\frac{D_{\mathcal{X}}}{L\sqrt{d \log(2d)\, t}}$ | $D_{\mathcal{X}} / t$ and $D_{\mathcal{X}} / (d^2 t^2)$ |
+| TPBCO (Shamir, 2017) | No | $\mathcal{O}(dL^2 D_{\mathcal{X}}^2 / \epsilon^2)$ | $\frac{D_{\mathcal{X}}}{L\sqrt{dT}}$ | $D_{\mathcal{X}} \sqrt{d/T}$ |
+| POEM (Theorem 4.9) | Yes | $\tilde{\mathcal{O}}(dL^2 D_{\mathcal{X}}^2 / \epsilon^2)$ | $\bar{r}_t / \sqrt{G_t}$ | $\bar{r}_t \sqrt{d/(t+1)}$ |
+| Lower bound (Duchi et al., 2015) | - | $\Omega(dL^2 D_{\mathcal{X}}^2 / \epsilon^2)$ | - | - |
+
+$\dagger$ The RSNSO (Nesterov & Spokoiny, 2017) does not require the assumption of a bounded domain, as its complexity depends on the distance between the initial point $\mathbf{x}_0$ and the solution $\mathbf{x}_{\star}$ , denoted by $s_0 = \|\mathbf{x}_0 - \mathbf{x}_{\star}\|$ , rather than the diameter $D_{\mathcal{X}}$ . We discuss the case without the bounded domain assumption in detail in Section 5.
+$\ddagger$ The TPGE method (Duchi et al., 2015) employs two sequences of stochastic finite differences, each with its own smoothing parameter.
+
+# 2.2. Randomized Smoothing
+
+Randomized smoothing is a widely used technique in zeroth-order optimization, which constructs a smooth surrogate of the objective function by applying perturbations along random directions (Duchi et al., 2012a; Gasnikov et al., 2022; Shamir, 2017; Yousefian et al., 2012; Nesterov & Spokoiny, 2017; Lin et al., 2022). In this work, we specifically focus on randomized smoothing based on the uniform distribution over the unit ball (Duchi et al., 2012a; Gasnikov et al., 2022; Shamir, 2017). Formally, we define the smooth surrogate of the objective function $f(\mathbf{x})$ as
+
+$$
+f _ {\mu} (\mathbf {x}) \triangleq \mathbb {E} _ {\mathbf {u} \sim \mathbb {U} (\mathbb {B} ^ {d})} [ f (\mathbf {x} + \mu \mathbf {u}) ],
+$$
+
+where $\mu > 0$ is the smoothing parameter. The following lemma establishes that the surrogate $f_{\mu}(\mathbf{x})$ preserves the convexity, and that the approximation error between $f(\mathbf{x})$ and $f_{\mu}(\mathbf{x})$ can be bounded in terms of $\mu$ (Shamir, 2017).
+
+Lemma 2.8 (Shamir (2017, Lemma 8)). Under Assumptions 2.5 and 2.6, the smooth surrogate $f_{\mu}(\mathbf{x})$ is convex and satisfies $|f_{\mu}(\mathbf{x}) - f(\mathbf{x})| \leq L\mu$ for all $\mathbf{x} \in \mathbb{R}^d$ .
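The bound of Lemma 2.8 can be checked numerically. The sketch below estimates $f_\mu(\mathbf{x})$ by Monte Carlo for the toy choice $f(\mathbf{x}) = \|\mathbf{x}\|_1$ (which is $\sqrt{d}$-Lipschitz with respect to the Euclidean norm) and verifies $f(\mathbf{x}) \leq f_\mu(\mathbf{x}) \leq f(\mathbf{x}) + L\mu$, where the first inequality follows from Jensen's inequality for convex $f$; the sampling scheme for $\mathbb{U}(\mathbb{B}^d)$ is the standard direction-times-radius construction.

```python
import numpy as np

rng = np.random.default_rng(0)
d, mu = 5, 0.3
L = np.sqrt(d)            # Lipschitz constant of ||.||_1 w.r.t. ||.||_2

def f(x):
    return np.abs(x).sum()

def sample_unit_ball(num, d, rng):
    """Uniform samples from B^d: direction on S^{d-1}, radius U^(1/d)."""
    v = rng.standard_normal((num, d))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    r = rng.random(num) ** (1.0 / d)
    return r[:, None] * v

x = rng.standard_normal(d)
u = sample_unit_ball(200_000, d, rng)
f_mu = np.abs(x + mu * u).sum(axis=1).mean()   # Monte Carlo estimate of f_mu(x)
```

The gap $f_\mu(\mathbf{x}) - f(\mathbf{x})$ is nonzero only because some coordinates of $\mathbf{x}$ lie within distance $\mu$ of the kink of $|\cdot|$, which makes the sandwich bound easy to see coordinate-wise.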
+
+The next lemma demonstrates that the surrogate $f_{\mu}(\mathbf{x})$ is differentiable, regardless of whether $f(\mathbf{x})$ is differentiable. Moreover, it shows that the gradient of $f_{\mu}(\mathbf{x})$ can be expressed in the form of the finite difference.
+
+Lemma 2.9 (Flaxman et al. (2004, Lemma 3.4)). For a continuous function $f: \mathbb{R}^d \to \mathbb{R}$ , the gradient of its smooth surrogate $f_{\mu}$ is given by
+
+$$
+\nabla f _ {\mu} (\mathbf {x}) = \mathbb {E} _ {\mathbf {v} \sim \mathbb {U} (\mathbb {S} ^ {d - 1})} \left[ \frac {d}{2 \mu} (f (\mathbf {x} + \mu \mathbf {v}) - f (\mathbf {x} - \mu \mathbf {v})) \mathbf {v} \right],
+$$
+
+where $\mathbf{x}\in \mathbb{R}^d$ and $\mu > 0$ .
+
+Based on Lemma 2.9, we define the stochastic finite difference as follows
+
+$$
+\mathbf {g} (\mathbf {x}, \mu ; \mathbf {v}, \boldsymbol {\xi}) \triangleq \frac {d}{2 \mu} (F (\mathbf {x} + \mu \mathbf {v}; \boldsymbol {\xi}) - F (\mathbf {x} - \mu \mathbf {v}; \boldsymbol {\xi})) \mathbf {v}, \tag {2}
+$$
+
+where $\mathbf{x} \in \mathcal{X}$ , $\mu > 0$ , $\mathbf{v} \sim \mathbb{U}(\mathbb{S}^{d-1})$ and $\xi \sim \Xi$ . Under Assumption 2.7, the function evaluation $F(\mathbf{x}; \boldsymbol{\xi})$ returned by the stochastic zeroth-order oracle is an unbiased estimator of $f(\mathbf{x})$ . Consequently, the stochastic finite difference $\mathbf{g}(\mathbf{x}, \mu; \mathbf{v}, \boldsymbol{\xi})$ is an unbiased estimator of $\nabla f_{\mu}(\mathbf{x})$ , that is,
+
+$$
+\mathbb {E} _ {\mathbf {v} \sim \mathbb {U} (\mathbb {S} ^ {d - 1}), \pmb {\xi} \sim \Xi} [ \mathbf {g} (\mathbf {x}, \mu ; \mathbf {v}, \pmb {\xi}) ] = \nabla f _ {\mu} (\mathbf {x}).
+$$
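For a quick sanity check of this unbiasedness (with a noiseless oracle, so the $\boldsymbol{\xi}$-dependence is suppressed), one can take the toy function $f(\mathbf{x}) = \|\mathbf{x}\|^2$, for which $\nabla f_\mu(\mathbf{x}) = 2\mathbf{x}$ exactly, and average the finite difference of equation (2) over many random directions:

```python
import numpy as np

rng = np.random.default_rng(1)
d, mu, num = 4, 0.1, 200_000
x = np.array([0.5, -0.3, 0.2, 0.1])

v = rng.standard_normal((num, d))
v /= np.linalg.norm(v, axis=1, keepdims=True)        # v ~ U(S^{d-1})

# g = d/(2 mu) * (f(x + mu v) - f(x - mu v)) * v  with f(x) = ||x||^2
diff = ((x + mu * v) ** 2).sum(axis=1) - ((x - mu * v) ** 2).sum(axis=1)
g = (d / (2 * mu)) * diff[:, None] * v
g_mean = g.mean(axis=0)                              # approaches 2x
```

For this quadratic, $f(\mathbf{x}+\mu\mathbf{v}) - f(\mathbf{x}-\mu\mathbf{v}) = 4\mu\,\mathbf{x}^\top\mathbf{v}$, so the estimator reduces to $2d(\mathbf{x}^\top\mathbf{v})\mathbf{v}$, whose expectation over the sphere is exactly $2\mathbf{x}$.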
+
+# 3. Parameter-Free Stochastic Zeroth-Order Optimization
+
+We propose POEM, a parameter-free stochastic zeroth-order method, as described in Algorithm 1. POEM is built on the framework of projected SGD, following the iterative scheme
+
+$$
+\mathbf {x} _ {t + 1} = \Pi_ {\mathcal {X}} \left(\mathbf {x} _ {t} - \eta_ {t} \mathbf {g} _ {t}\right), \tag {3}
+$$
+
+where $\eta_t > 0$ denotes the step size, and the finite difference
+
+$$
+\mathbf {g} _ {t} \triangleq \mathbf {g} \left(\mathbf {x} _ {t}, \mu_ {t}; \mathbf {v} _ {t}, \boldsymbol {\xi} _ {t}\right) \tag {4}
+$$
+
+is defined as in equation (2), with the smoothing parameter $\mu_t > 0$ , and random variables $\mathbf{v}_t \sim \mathbb{U}(\mathbb{S}^{d-1})$ and $\pmb{\xi}_t \sim \Xi$ .
+
+We aim to make both the step size $\eta_t$ and the smoothing parameter $\mu_t$ in equations (3) and (4) tuning-free, and still achieve near-optimal convergence rates. This is more challenging than existing stochastic parameter-free first-order
+
+# Algorithm 1 POEM
+
+1: Input: $\mathbf{x}_0\in \mathcal{X}$ , $r_{\epsilon}\in (0,D_{\mathcal{X}}]$ , $T\geq 1$
+2: $\bar{r}_{-1} = r_{\epsilon}$ , $G_{-1} = 0$
+3: for $t = 0, \dots, T - 1$ do
+4: $\bar{r}_t = \max \{\bar{r}_{t - 1},\| \mathbf{x}_t - \mathbf{x}_0\| \}$
+5: $\mu_{t} = \bar{r}_{t}\sqrt{\frac{d}{t + 1}}$
+6: $\mathbf{v}_t\sim \mathbb{U}(\mathbb{S}^{d - 1})$ , $\pmb {\xi}_t\sim \Xi$
+7: $\mathbf{g}_t = \frac{d}{2\mu_t} (F(\mathbf{x}_t + \mu_t\mathbf{v}_t;\pmb {\xi}_t) - F(\mathbf{x}_t - \mu_t\mathbf{v}_t;\pmb {\xi}_t))\mathbf{v}_t$
+8: $G_{t} = G_{t - 1} + \left\| \mathbf{g}_{t}\right\|^{2}$
+9: $\eta_{t} = \frac{\bar{r}_{t}}{\sqrt{G_{t}}}$
+10: $\mathbf{x}_{t + 1} = \Pi_{\mathcal{X}}(\mathbf{x}_t - \eta_t\mathbf{g}_t)$
+11: end for
+12: Output: $\bar{\mathbf{x}}_{\tau_T}$ where $\tau_T \triangleq \arg \max_{t \leq T} \sum_{k=0}^{t-1} \frac{\bar{r}_k}{\bar{r}_t}$
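The listing above can be sketched in runnable form, assuming the domain is a Euclidean ball (so the projection has a closed form) and, for simplicity, returning the $\bar{r}_k$-weighted average of the iterates (Eq. (7)) rather than the $\tau_T$-selected point. The toy oracle and all parameter values are illustrative.

```python
import numpy as np

def project_ball(x, radius):
    """Euclidean projection onto the ball {x : ||x|| <= radius}."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else (radius / nrm) * x

def poem(F, x0, r_eps, T, radius, rng):
    """Sketch of POEM (Algorithm 1) on a ball domain centered at 0.
    F(x, xi) is the stochastic oracle; the same draw xi is used for both
    evaluations, as Assumption 2.7 allows."""
    d = x0.size
    x, r_bar, G = x0.astype(float).copy(), r_eps, 0.0
    weighted_sum, total_weight = np.zeros(d), 0.0
    for t in range(T):
        r_bar = max(r_bar, np.linalg.norm(x - x0))           # line 4
        weighted_sum += r_bar * x
        total_weight += r_bar
        mu = r_bar * np.sqrt(d / (t + 1))                    # line 5
        v = rng.standard_normal(d)
        v /= np.linalg.norm(v)                               # v ~ U(S^{d-1})
        xi = rng.standard_normal(d)
        g = (d / (2 * mu)) * (F(x + mu * v, xi) - F(x - mu * v, xi)) * v
        G += g @ g                                           # line 8
        eta = r_bar / np.sqrt(G)                             # line 9
        x = project_ball(x - eta * g, radius)                # line 10
    return weighted_sum / total_weight

# Toy problem: f(x) = ||x - c||_1 with zero-mean noise in the oracle.
c = np.array([0.3, -0.2, 0.4])
F = lambda x, xi: np.abs(x - c).sum() + 0.05 * (xi @ x)
rng = np.random.default_rng(0)
x_hat = poem(F, x0=np.zeros(3), r_eps=1e-2, T=3000, radius=2.0, rng=rng)
```

With these settings the weighted average should land close to the minimizer $c$; note that $r_\epsilon$ only needs to be a small positive value in $(0, D_{\mathcal{X}}]$, mirroring the logarithmic sensitivity to initialization discussed in Section 4.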
+
+methods, which focus only on adapting the step size. Inspired by the strategy of DoG (Ivgi et al., 2023a), we schedule the step size $\eta_t$ based on the ratio between the distance from the initial point and the norm of the stochastic finite difference. Specifically, we define the cumulative squared gradient norm as $G_t \triangleq \sum_{k=0}^{t} \|\mathbf{g}_k\|^2$ , the distance to the initial point as $r_t \triangleq \|\mathbf{x}_t - \mathbf{x}_0\|$ , and the maximum distance as $\bar{r}_t \triangleq \max_{k \leq t} r_k \vee r_\epsilon$ , where $r_\epsilon > 0$ denotes the initial movement. We define the step size at the $t$ -th iteration as
+
+$$
+\eta_ {t} \triangleq \frac {\bar {r} _ {t}}{\sqrt {G _ {t}}}. \tag {5}
+$$
+
+For initialization, we require the movement $r_{\epsilon} \in (0, D_{\mathcal{X}}]$ . As we will show in Sections 4 and 6, the choice of $r_{\epsilon}$ influences the theoretical convergence rates only by a logarithmic term and has minimal impact on practical performance.
+
+Moreover, we define the smoothing parameter as
+
+$$
+\mu_ {t} \triangleq \bar {r} _ {t} \sqrt {\frac {d}{t + 1}}, \tag {6}
+$$
+
+which is adaptive and generally larger than those used in existing stochastic zeroth-order methods (Nesterov & Spokoiny, 2017; Shamir, 2017; Duchi et al., 2015; Ghadimi & Lan, 2013; Duchi et al., 2012b; Rando et al., 2024). For example, Nesterov & Spokoiny (2017); Shamir (2017) set $\mu_t = \mathcal{O}(\sqrt{d / T})$ in their analysis, depending on the total iteration budget $T$; Duchi et al. (2015) use $\mu_t = \mathcal{O}(1 / (dt)^2)$, which may be quite small in high-dimensional settings. Recall that the smoothing parameter $\mu_t$ appears in the denominator of the stochastic finite difference in equation (2). Consequently, a larger $\mu_t$ improves numerical stability.
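The schedule above can be sketched in a few dozen lines. The following is a minimal NumPy sketch, not the authors' implementation: `szo(x, xi)` stands for a stochastic zeroth-order oracle returning $F(\mathbf{x};\boldsymbol{\xi})$ with its randomness seeded by `xi` (shared between the two queries of one iteration), `project` is the Euclidean projection onto $\mathcal{X}$, and for simplicity the output index is restricted to $t \leq T - 1$.

```python
import numpy as np

def poem(szo, x0, r_eps, T, project=lambda x: x, seed=None):
    """Minimal sketch of POEM (Algorithm 1). `szo(x, xi)` is a stochastic
    zeroth-order oracle returning F(x; xi); the seed `xi` is shared between
    the two queries of one iteration (two-point feedback), and `project`
    is the Euclidean projection onto the feasible set X."""
    rng = np.random.default_rng(seed)
    d = x0.size
    x, r_bar, G = x0.astype(float).copy(), float(r_eps), 0.0
    xs, weights = [], []
    for t in range(T):
        r_bar = max(r_bar, float(np.linalg.norm(x - x0)))   # line 4
        mu = r_bar * np.sqrt(d / (t + 1))                   # line 5, eq. (6)
        v = rng.standard_normal(d)
        v /= np.linalg.norm(v)                              # uniform on the sphere
        xi = int(rng.integers(2**31))                       # oracle randomness
        g = d / (2 * mu) * (szo(x + mu * v, xi) - szo(x - mu * v, xi)) * v
        G += float(g @ g)                                   # line 8
        eta = r_bar / np.sqrt(max(G, 1e-12))                # line 9; floor guards a zero difference
        xs.append(x.copy())
        weights.append(r_bar)
        x = project(x - eta * g)                            # line 10
    # Output rule of line 12, restricted to t <= T - 1 for simplicity:
    w = np.array(weights)
    scores = np.concatenate(([0.0], np.cumsum(w)[:-1])) / w  # sum_{k<t} r_k / r_t
    tau = int(np.argmax(scores))
    return (w[:tau, None] * np.stack(xs[:tau])).sum(axis=0) / w[:tau].sum()
```

For instance, on a toy problem $f(\mathbf{x}) = \|\mathbf{x} - \mathbf{c}\|_1$ over the unit ball, the returned weighted average approaches the minimizer without any step-size tuning.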
+
+# 4. The Complexity Analysis
+
+In this section, we show that POEM (Algorithm 1) achieves near-optimal SZO complexity. The detailed proofs of the results presented here are provided in Appendix B.
+
+Our analysis focuses on the weighted average of the iterates generated by POEM, defined as
+
+$$
+\bar {\mathbf {x}} _ {t} \triangleq \frac {1}{\sum_ {k = 0} ^ {t - 1} \bar {r} _ {k}} \sum_ {k = 0} ^ {t - 1} \bar {r} _ {k} \mathbf {x} _ {k}. \tag {7}
+$$
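Since only the weighted sums matter, $\bar{\mathbf{x}}_t$ need not be recomputed from scratch at each iteration. The standard streaming update can be sketched as follows (an illustrative helper, not from the paper):

```python
import numpy as np

def weighted_average_stream(points, weights):
    """Maintain the running weighted average of eq. (7) in O(d) memory:
    after processing k pairs, `avg` equals sum_i w_i x_i / sum_i w_i."""
    avg, W = None, 0.0
    for x, w in zip(points, weights):
        W += w
        # Incremental update: W_new * avg_new = W_old * avg + w * x.
        avg = x.copy() if avg is None else avg + (w / W) * (x - avg)
    return avg
```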
+
+Since the objective function $f$ is convex (Assumption 2.5), we apply Jensen's inequality to bound the optimality gap
+
+$$
+f \left(\overline {{\mathbf {x}}} _ {t}\right) - f \left(\mathbf {x} _ {\star}\right) \leq \frac {1}{\sum_ {k = 0} ^ {t - 1} \bar {r} _ {k}} \sum_ {k = 0} ^ {t - 1} \bar {r} _ {k} \left(f \left(\mathbf {x} _ {k}\right) - f \left(\mathbf {x} _ {\star}\right)\right). \tag {8}
+$$
+
+By combining inequality (8) with Lemma 2.8, we obtain
+
+$$
+\begin{array}{l} f \left(\bar {\mathbf {x}} _ {t}\right) - f \left(\mathbf {x} _ {\star}\right) \\ \leq \frac {1}{\sum_ {k = 0} ^ {t - 1} \bar {r} _ {k}} \sum_ {k = 0} ^ {t - 1} \bar {r} _ {k} \left(f _ {\mu_ {k}} \left(\mathbf {x} _ {k}\right) - f _ {\mu_ {k}} \left(\mathbf {x} _ {\star}\right) + 2 L \mu_ {k}\right) \tag {9} \\ \leq \frac {1}{\sum_ {k = 0} ^ {t - 1} \bar {r} _ {k}} \sum_ {k = 0} ^ {t - 1} \bar {r} _ {k} \left(\left\langle \nabla f _ {\mu_ {k}} (\mathbf {x} _ {k}), \mathbf {x} _ {k} - \mathbf {x} _ {\star} \right\rangle + 2 L \mu_ {k}\right), \\ \end{array}
+$$
+
+where the last inequality follows from the convexity of $f_{\mu_k}$ . We decompose the sum in the final line of equation (9) into
+
+$$
+\underbrace{\sum_{k = 0}^{t - 1} \bar{r}_{k} \left\langle \mathbf{g}_{k}, \mathbf{x}_{k} - \mathbf{x}_{\star} \right\rangle}_{\text{the weighted regret}} + \underbrace{\sum_{k = 0}^{t - 1} \bar{r}_{k} \left\langle \boldsymbol{\Delta}_{k}, \mathbf{x}_{k} - \mathbf{x}_{\star} \right\rangle}_{\text{the noise from } \mathbf{g}_{k}} + \underbrace{\sum_{k = 0}^{t - 1} 2 L \bar{r}_{k} \mu_{k}}_{\text{the noise from } \mu_{k}}, \tag{10}
+$$
+
+where $\boldsymbol{\Delta}_{k} \triangleq \nabla f_{\mu_{k}}(\mathbf{x}_{k}) - \mathbf{g}_{k}$. The regret term is a standard component in the complexity analysis of stochastic zeroth-order and first-order methods (Shalev-Shwartz, 2012; Duchi et al., 2015; Balasubramanian & Ghadimi, 2018; Ivgi et al., 2023a). In the context of POEM, this term is scaled by the weights $\{\bar{r}_k\}_{k=0}^{t-1}$, requiring careful control. The noise from $\mathbf{g}_{k}$ arises from the discrepancy between the true gradient $\nabla f_{\mu_{k}}$ and its unbiased estimator $\mathbf{g}_{k}$. The noise from $\mu_{k}$ reflects the approximation error between the objective function $f$ and its smooth surrogate $f_{\mu_{k}}$. Notably, this noise does not appear in the analysis of first-order methods.
+
+We now present upper bounds for the three components in equation (10): the weighted regret term, the noise from $\mathbf{g}_k$, and the noise from $\mu_k$. To facilitate the analysis, we define
+
+$$
+s_{t} \triangleq \left\| \mathbf{x}_{t} - \mathbf{x}_{\star} \right\| \quad \text{and} \quad \bar{s}_{t} \triangleq \max_{k \leq t} s_{k}.
+$$
+
+The following lemma provides an upper bound on the weighted regret for SGD-type iterations. Notably, this result holds independently of the specific form of the gradient estimator or the choice of step size, and thus applies directly to POEM (Algorithm 1).
+
+Lemma 4.1 (Ivgi et al. (2023a, Lemma 3.4)). For the iteration scheme (3), the weighted regret satisfies
+
+$$
+\sum_ {k = 0} ^ {t - 1} \bar {r} _ {k} \langle \mathbf {g} _ {k}, \mathbf {x} _ {k} - \mathbf {x} _ {\star} \rangle \leq \bar {r} _ {t} \left(2 \bar {s} _ {t} + \bar {r} _ {t}\right) \sqrt {G _ {t - 1}}.
+$$
+
+To analyze the noise from $\mathbf{g}_k$ , we first establish upper bounds for $\| \mathbf{g}_t\|$ and its second moment $\mathbb{E}[\| \mathbf{g}_t\|^2]$ .
+
+Lemma 4.2 (Shamir (2017, Lemma 10)). Under Assumption 2.6, POEM (Algorithm 1) satisfies
+
+$$
+\| \mathbf{g}_{t} \| \leq L d \quad \text{and} \quad \mathbb{E}[\| \mathbf{g}_{t} \|^{2}] \leq c L^{2} d,
+$$
+
+where $c > 0$ is a numerical constant.
+
+Remark 4.3. This lemma provides $\mathcal{O}(d)$ upper bounds for both $\| \mathbf{g}_t\|$ and $\mathbb{E}[\| \mathbf{g}_t\|^2]$ , which are crucial for achieving optimal dependence on the dimension $d$ in the convergence rates. Unlike the original proof in Shamir (2017, Lemma 10), our analysis is based on the Euclidean norm and offers a more concise argument by avoiding the use of fourth-order moments of the gradient estimator (see Appendix B.1).
+
+Based on Lemma 4.2, we can provide an upper bound for the term $\| \pmb{\Delta}_k\| = \| \nabla f_{\mu_k}(\mathbf{x}_k) - \mathbf{g}_k\|$ . We then use a concentration inequality for martingale differences (Howard et al., 2021; Ivgi et al., 2023b) to control the noise from $\mathbf{g}_k$ .
+
+Lemma 4.4. Under Assumptions 2.6 and 2.7, for any $\delta \in (0,1)$ , POEM (Algorithm 1) satisfies
+
+$$
+\mathbb {P} \left(\exists t \leq T: \left| \sum_ {k = 0} ^ {t - 1} \bar {r} _ {k} \langle \boldsymbol {\Delta} _ {k}, \mathbf {x} _ {k} - \mathbf {x} _ {\star} \rangle \right| \geq b _ {t}\right) \leq \delta ,
+$$
+
+where we define $b_{t} \triangleq 8\bar{r}_{t - 1}\bar{s}_{t - 1}\sqrt{\theta_{t,\delta}G_{t - 1} + 4L^{2}d^{2}\theta_{t,\delta}^{2}}$ , and $\theta_{t,\delta} \triangleq \log (60\log (6t / \delta))$ .
+
+Next, we provide an upper bound for the noise from $\mu_{k}$ .
+
+Lemma 4.5. POEM (Algorithm 1) satisfies
+
+$$
+\sum_ {k = 0} ^ {t - 1} 2 L \bar {r} _ {k} \mu_ {k} \leq 4 L \bar {r} _ {t - 1} ^ {2} \sqrt {d t},
+$$
+
+Throughout, we write $\log_+(\cdot) \triangleq \log(\cdot) + 1$.
+
+Remark 4.6. The choice of the smoothing parameter in (6) implies that $\mu_{k} = \mathcal{O}(\sqrt{d / k})$ . Using this expression, we can bound the series by approximating it with an integral, which leads to the stated result in Lemma 4.5. A detailed proof is provided in Appendix B.3.
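The integral comparison in Remark 4.6 boils down to $\sum_{k=0}^{t-1} (k+1)^{-1/2} \leq 2\sqrt{t}$, since $\bar{r}_k \leq \bar{r}_{t-1}$ and $\mu_k = \bar{r}_k \sqrt{d/(k+1)}$. The following snippet (illustrative only) checks that elementary bound numerically:

```python
import math

# Check the integral comparison behind Lemma 4.5: for every t,
#   sum_{k=0}^{t-1} 1/sqrt(k+1) <= 2*sqrt(t),
# so that sum_k 2*L*rbar_k*mu_k <= 2*L*rbar_{t-1}^2*sqrt(d) * sum_k 1/sqrt(k+1)
#                               <= 4*L*rbar_{t-1}^2*sqrt(d*t).
partial = 0.0
for t in range(1, 2000):
    partial += 1.0 / math.sqrt(t)  # adds the k = t - 1 term
    assert partial <= 2.0 * math.sqrt(t)
```

The integral test in fact gives the slightly tighter bound $\sum_{k=1}^{t} k^{-1/2} \leq 2\sqrt{t} - 1$.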
+
+By combining Lemmas 4.1, 4.4, and 4.5 with equations (9) and (10), we obtain the following upper bound on the optimality function value gap.
+
+Proposition 4.7. Under Assumptions 2.5, 2.6 and 2.7, for any $\delta \in (0,1)$ and $t\in \mathbb{N}_+$ , POEM (Algorithm 1) satisfies
+
+$$
+f(\bar{\mathbf{x}}_{t}) - f(\mathbf{x}_{\star})\leq \frac{16\theta_{t,\delta}(\bar{r}_{t} + s_{0})(\sqrt{G_{t - 1}} + Ld + L\sqrt{d t})}{\sum_{k = 0}^{t - 1}\bar{r}_{k} / \bar{r}_{t}},
+$$
+
+with probability at least $1 - \delta$, where $\theta_{t,\delta} = \log (60\log (6t / \delta))$.
+
+We then consider a lower bound for $\sum_{k=0}^{t-1} \bar{r}_k / \bar{r}_t$ . To this end, we introduce the following key lemma.
+
+Lemma 4.8 (Ivgi et al. (2023a, Lemma 3.7)). Let $a_0, a_1, \ldots, a_T$ be a positive non-decreasing sequence, then
+
+$$
+\max _ {t \leq T} \sum_ {i < t} \frac {a _ {i}}{a _ {t}} \geq \frac {1}{\mathrm {e}} \left(\frac {T}{\log_ {+} (a _ {T} / a _ {0})} - 1\right),
+$$
+
+where $T\in \mathbb{N}_{+}$.
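Lemma 4.8 is easy to sanity-check numerically. The sketch below (illustrative only, with names of our choosing) verifies the inequality on random positive non-decreasing sequences, using $\log_+(x) = \log(x) + 1$:

```python
import math
import random

def weighted_ratio_lower_bound_check(a):
    """Spot-check Lemma 4.8 on a positive non-decreasing sequence a_0..a_T:
    max_{t<=T} sum_{i<t} a_i/a_t >= (1/e) * (T / log_+(a_T / a_0) - 1)."""
    T = len(a) - 1
    best = max(sum(a[i] for i in range(t)) / a[t] for t in range(T + 1))
    log_plus = math.log(a[T] / a[0]) + 1.0
    return best >= (1.0 / math.e) * (T / log_plus - 1.0)

rng = random.Random(0)
for _ in range(100):
    T = rng.randrange(1, 50)
    a = [rng.uniform(0.1, 1.0)]           # positive start
    for _ in range(T):
        a.append(a[-1] + rng.uniform(0.0, 1.0))  # non-decreasing growth
    assert weighted_ratio_lower_bound_check(a)
```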
+
+Since the sequence $\{\bar{r}_t\}_{t=0}^T$ is positive and non-decreasing, we can apply Lemma 4.8 with $a_t = \bar{r}_t$ . This yields
+
+$$
+\max _ {t \leq T} \sum_ {k = 0} ^ {t - 1} \frac {\bar {r} _ {k}}{\bar {r} _ {t}} = \sum_ {k = 0} ^ {\tau_ {T} - 1} \frac {\bar {r} _ {k}}{\bar {r} _ {\tau_ {T}}} \geq \Omega \left(\frac {T}{\log_ {+} \left(\bar {r} _ {\tau_ {T}} / r _ {\epsilon}\right)}\right), \tag {11}
+$$
+
+where $\tau_T \triangleq \arg \max_{1 \leq t \leq T} \sum_{k=0}^{t-1} \bar{r}_k / \bar{r}_t$ .
+
+Before stating the main result, we define the probability space $(\Omega_0,\mathcal{F}_0,\mathbb{P})$, where $\Omega_0$ denotes the sample space associated with POEM (Algorithm 1) for a given $\mathbf{x}_0$ and $r_{\epsilon}$; $\mathcal{F}_0$ is the sigma field generated by the random variable sequences $\{\mathbf{v}_t\}_{t = 0}^{T - 1}$ and $\{\pmb {\xi}_t\}_{t = 0}^{T - 1}$; and $\mathbb{P}$ is a probability measure defined on $\mathcal{F}_0$. Next, we define the event
+
+$$
+\Omega_ {\delta} \triangleq \left\{\omega \in \Omega_ {0}: \forall t \leq T, \left| \sum_ {k = 0} ^ {t - 1} \bar {r} _ {k} \langle \boldsymbol {\Delta} _ {k}, \mathbf {x} _ {k} - \mathbf {x} _ {\star} \rangle \right| < b _ {t} \right\}.
+$$
+
+By Lemma 4.4, given $\delta \in (0,1)$, we have $\mathbb{P}(\Omega_{\delta}) \geq 1 - \delta$. We then define the sigma field $\mathcal{F}_{\delta} \triangleq \{A : A \subset \Omega_{\delta}\} \cap \mathcal{F}_0$, which satisfies $\mathcal{F}_{\delta} \subset \mathcal{F}_0$. Furthermore, the Lipschitz continuity of $f$ and the boundedness of the domain $\mathcal{X}$ ensure that $\mathbb{E}|f(\bar{\mathbf{x}}_{\tau_T}) - f(\mathbf{x}_\star)| \leq LD_\mathcal{X} < \infty$. Therefore, the conditional expectation $\mathbb{E}[f(\bar{\mathbf{x}}_{\tau_T}) - f(\mathbf{x}_\star) | \mathcal{F}_{\delta}]$ exists and is unique (Durrett, 2019, Chapter 4.1).
+
+By combining Lemma 4.2, Proposition 4.7, and equation (11), we arrive at the main result.
+
+Theorem 4.9. Under Assumptions 2.1, 2.5, 2.6, and 2.7, for any $\delta \in (0,1)$ and $T \in \mathbb{N}_{+}$ , POEM (Algorithm 1) initialized with $\mathbf{x}_0 \in \mathcal{X}$ and $r_{\epsilon} \in (0, D_{\mathcal{X}}]$ satisfies
+
+$$
+\begin{array}{l} \mathbb {E} \left[ f \left(\bar {\mathbf {x}} _ {\tau_ {T}}\right) - f \left(\mathbf {x} _ {\star}\right) \mid \mathcal {F} _ {\delta} \right] \\ \leq \mathcal {O} \left(\left(\frac {d}{T} + \frac {\sqrt {d}}{\sqrt {T}}\right) \theta_ {T, \delta} L D _ {\mathcal {X}} \log_ {+} \left(\frac {D _ {\mathcal {X}}}{r _ {\epsilon}}\right)\right), \\ \end{array}
+$$
+
+with probability at least $1 - \delta$ , where $\theta_{T,\delta} \triangleq \log(60\log(6T / \delta))$ .
+
+By suppressing the logarithmic factors using the $\tilde{\mathcal{O}} (\cdot)$ notation, the SZO complexity for finding a conditionally expected $\epsilon$ -suboptimal solution with probability at least $1 - \delta$ is
+
+$$
+\tilde {\mathcal {O}} \left(\frac {d L ^ {2} D _ {\mathcal {X}} ^ {2}}{\epsilon^ {2}}\right).
+$$
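To make the arithmetic explicit: for $T \geq d$, the $\sqrt{d/T}$ term dominates the bound in Theorem 4.9, so, suppressing logarithmic factors, guaranteeing an $\epsilon$-suboptimal solution requires

$$
\sqrt{\frac{d}{T}}\, L D_{\mathcal{X}} \lesssim \epsilon \quad \Longleftrightarrow \quad T \gtrsim \frac{d L^{2} D_{\mathcal{X}}^{2}}{\epsilon^{2}},
$$

which is the stated oracle count.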
+
+The SZO complexity established in Theorem 4.9 matches the lower bound for stochastic zeroth-order optimization established by Duchi et al. (2015).
+
+Remark 4.10. As $\delta \to 0$ , $\Omega_{\delta} \to \Omega_0$ in probability and $\mathcal{F}_{\delta}$ approaches $\mathcal{F}_0$ , which implies $\mathbb{E}[f(\bar{\mathbf{x}}_{\tau_T}) - f(\mathbf{x}_\star) | \mathcal{F}_{\delta}]$ approaches $\mathbb{E}[f(\bar{\mathbf{x}}_{\tau_T}) - f(\mathbf{x}_\star) | \mathcal{F}_0] = f(\bar{\mathbf{x}}_{\tau_T}) - f(\mathbf{x}_\star)$ .
+
+# 5. Results for Unbounded Domains
+
+In this section, we extend our method to address the stochastic convex optimization problem in settings where the domain may be unbounded. To accommodate this, we relax Assumption 2.1 as follows.
+
+Assumption 5.1. The domain $\mathcal{X} \subseteq \mathbb{R}^d$ is closed and convex. Moreover, there exists a point $\mathbf{x}_{\star} \in \mathcal{X}$ such that $f(\mathbf{x}_{\star}) = \min_{\mathbf{x} \in \mathcal{X}} f(\mathbf{x})$ .
+
+Remark 5.2. In our analysis for the bounded domain (Theorem 4.9), the upper bound on $\mathbb{E}[f(\bar{\mathbf{x}}_{\tau_T}) - f(\mathbf{x}_\star) | \mathcal{F}_{\delta}]$ includes the term $\log_+(D_{\mathcal{X}} / r_\epsilon)$ , where $r_\epsilon \in (0, D_{\mathcal{X}}]$ . However, this term may become invalid after relaxing Assumption 2.1 to Assumption 5.1, as $D_{\mathcal{X}}$ could be infinite.
+
+In the remainder of this section, we first modify POEM by introducing an overestimate of the Lipschitz constant $L$ to address the problem without assuming a bounded domain. We then show that such estimation is unavoidable in the unbounded setting. The detailed proofs for the results presented in this section are deferred to Appendix C.
+
+We introduce the quantity
+
+$$
+G_{t}^{\prime} \triangleq 8^{4} \theta_{T, \delta} \log_{+}^{2}(t + 2) \left(G_{t - 1} + 16 \theta_{T, \delta} d^{2} \bar{L}^{2}\right), \tag{12}
+$$
+
+where $\theta_{T,\delta} = \log (60\log (6T / \delta))$, $G_{t} = \sum_{k = 0}^{t}\| \mathbf{g}_{k}\|^{2}$ as defined in Section 3, and $\bar{L}$ is an overestimate of the Lipschitz constant $L$, i.e., $\bar{L}\geq L$.
+
+For the unbounded domain, we modify POEM (Algorithm 1) by updating the step size and smoothing parameter as
+
+$$
+\eta_{t} = \frac{\bar{r}_{t}}{\sqrt{G_{t}^{\prime}}} \quad \text{and} \quad \mu_{t} = \frac{d \bar{r}_{t}}{(t + 1)^{2}}, \tag{13}
+$$
+
+where $r_t = \|\mathbf{x}_t - \mathbf{x}_0\|$ and $\bar{r}_t = \max_{k \leq t} r_k \vee r_\epsilon$ , following the notation in Section 3. We also define $G_{-1} = 0$ for equation (12) when $t = 0$ . Note that the parameters $T$ and $\delta$ only influence the logarithmic factor in $G_t'$ . Furthermore, the term $16\theta_{T,\delta}d^2\bar{L}^2$ in equation (12) becomes relatively insignificant compared to $G_{t-1}$ as $t$ grows large.
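As a concrete reading of equations (12) and (13), the following sketch (an illustrative helper whose names are ours, not the paper's) computes the modified step size and smoothing parameter from the running quantities, with $\log_+(x) = \log(x) + 1$:

```python
import math

def modified_schedule(t, r_bar, G_prev, d, L_bar, T, delta):
    """Sketch of the unbounded-domain parameter updates, eqs. (12)-(13):
    G'_t blends the cumulative gradient norm G_{t-1} with a term driven by
    the Lipschitz overestimate L_bar >= L, and mu_t decays like 1/(t+1)^2."""
    theta = math.log(60.0 * math.log(6.0 * T / delta))      # theta_{T, delta}
    G_prime = 8**4 * theta * (math.log(t + 2) + 1.0)**2 * (
        G_prev + 16.0 * theta * d**2 * L_bar**2)            # eq. (12)
    eta = r_bar / math.sqrt(G_prime)                        # eq. (13)
    mu = d * r_bar / (t + 1)**2                             # eq. (13)
    return eta, mu
```

Note that, unlike the bounded-domain schedule (6), here $\mu_t$ shrinks polynomially in $t$ regardless of the dimension budget.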
+
+We now provide the complexity analysis for the modified POEM. Unlike in the bounded domain setting, the quantity $\bar{r}_t$ cannot be simply controlled via $D_{\mathcal{X}}$ . Instead, our goal is to show that $\bar{r}_t = \mathcal{O}(s_0)$ , which implies that $\mathbf{x}_t$ remains close to both $\mathbf{x}_0$ and $\mathbf{x}_{\star}$ . Starting from the iteration update $\mathbf{x}_{k + 1} = \Pi_{\mathcal{X}}(\mathbf{x}_k - \eta_k\mathbf{g}_k)$ , we obtain the inequality
+
+$$
+\left\| \mathbf {x} _ {k + 1} - \mathbf {x} _ {\star} \right\| ^ {2} \leq \left\| \mathbf {x} _ {k} - \mathbf {x} _ {\star} - \eta_ {k} \mathbf {g} _ {k} \right\| ^ {2}.
+$$
+
+Rewriting this inequality using the definition of $s_k$ in Section 3, we have
+
+$$
+\begin{array}{l} s _ {k + 1} ^ {2} - s _ {k} ^ {2} \leq \eta_ {k} ^ {2} \| \mathbf {g} _ {k} \| ^ {2} + 2 \eta_ {k} \langle \boldsymbol {\Delta} _ {k}, \mathbf {x} _ {k} - \mathbf {x} _ {\star} \rangle \\ - 2 \eta_ {k} \left\langle \nabla f _ {\mu_ {k}} \left(\mathbf {x} _ {k}\right), \mathbf {x} _ {k} - \mathbf {x} _ {\star} \right\rangle , \\ \end{array}
+$$
+
+where $\pmb{\Delta}_k = \nabla f_{\mu_k}(\mathbf{x}_k) - \mathbf{g}_k$ . Summing the above inequality over $k = 0,1,\ldots ,t - 1$ , we obtain
+
+$$
+\begin{array}{l} s _ {t} ^ {2} - s _ {0} ^ {2} \leq \sum_ {k = 0} ^ {t - 1} \eta_ {k} ^ {2} \| \mathbf {g} _ {k} \| ^ {2} + 2 \sum_ {k = 0} ^ {t - 1} \eta_ {k} \left\langle \boldsymbol {\Delta} _ {k}, \mathbf {x} _ {k} - \mathbf {x} _ {\star} \right\rangle \tag {14} \\ + 2 \sum_ {k = 0} ^ {t - 1} \eta_ {k} \left\langle \nabla f _ {\mu_ {k}} \left(\mathbf {x} _ {k}\right), \mathbf {x} _ {\star} - \mathbf {x} _ {k} \right\rangle . \\ \end{array}
+$$
+
+Therefore, we can upper bound $\bar{r}_t$ by controlling each of the three terms on the right-hand side of inequality (14). Note that the last term in equation (14) does not appear in the analysis of first-order methods like DoG (Ivgi et al., 2023a). Following the analysis in Appendix C.1, we establish the following upper bound for $\bar{r}_t$.
+
+Proposition 5.3. For any $\delta \in (0,1)$ , POEM (Algorithm 1), with settings $\eta_t = \bar{r}_t / \sqrt{G_t'}$ , $\mu_t = d\bar{r}_t / (t + 1)^2$ , and $r_\epsilon \in (0,3s_0]$ , satisfies $\tilde{\mathbb{P}} (\bar{r}_T > 3s_0) \leq \delta$ .
+
+We consider the probability space $(\tilde{\Omega}_0,\tilde{\mathcal{F}}_0,\tilde{\mathbb{P}})$, where $\tilde{\Omega}_0$ is the sample space of Algorithm 1 under the modified settings used in Proposition 5.3, $\tilde{\mathcal{F}}_0$ is the sigma field generated by the random sequences $\{\mathbf{v}_t\}_{t = 0}^{T - 1}$ and $\{\pmb {\xi}_t\}_{t = 0}^{T - 1}$, and $\tilde{\mathbb{P}}$ is a probability measure defined on $\tilde{\mathcal{F}}_0$.
+
+Following the settings of Proposition 5.3, we define the set
+
+$$
+\tilde {\Omega} _ {\delta} \triangleq \left\{\omega \in \tilde {\Omega} _ {0}: \forall t \leq T, \left| \sum_ {k = 0} ^ {t - 1} \tilde {\eta} _ {k} \langle \pmb {\Delta} _ {k}, \mathbf {x} _ {k} - \mathbf {x} _ {\star} \rangle \right| \leq s _ {0} ^ {2} \right\},
+$$
+
+where $\tilde{\eta}_t\triangleq \eta_t\cdot \mathbb{I}(t < \zeta)$ and $\zeta \triangleq \min \{t\in \mathbb{N}\mid \bar{r}_t > 3s_0\}$.
+
+The derivation of Proposition 5.3 (see Appendix C.1) shows that if $r_{\epsilon} \leq 3s_0$ , then $\bar{r}_T \leq 3s_0$ for all $\omega \in \tilde{\Omega}_{\delta}$ . Moreover, it holds that $\tilde{\mathbb{P}}(\tilde{\Omega}_{\delta}) \geq 1 - \delta$ .
+
+Similar to Proposition 4.7, we can establish an upper bound on the optimality gap for the unbounded setting as follows (see Appendix C.2 for the proof).
+
+Proposition 5.4. Under Assumptions 2.5, 2.6 and 2.7, for any $\delta \in (0,1)$, POEM (Algorithm 1) with the modified settings from Proposition 5.3 satisfies
+
+$$
+f \left(\bar {\mathbf {x}} _ {t}\right) - f \left(\mathbf {x} _ {\star}\right) \leq \frac {2 0 \theta_ {t , \delta} \left(\bar {r} _ {t} + s _ {0}\right) \left(\sqrt {G _ {t - 1} ^ {\prime}} + L d\right)}{\sum_ {k = 0} ^ {t - 1} \bar {r} _ {k} / \bar {r} _ {t}}, \tag {15}
+$$
+
+with probability at least $1 - \delta$, where $\theta_{t,\delta} = \log (60\log (6t / \delta))$.
+
+Following the settings of Proposition 5.4, we define the set
+
+$$
+\hat {\Omega} _ {\delta} \triangleq \left\{\omega \in \tilde {\Omega} _ {0}: \forall t \leq T, \left| \sum_ {k = 0} ^ {t - 1} \bar {r} _ {k} \langle \boldsymbol {\Delta} _ {k}, \mathbf {x} _ {k} - \mathbf {x} _ {\star} \rangle \right| < b _ {t} \right\},
+$$
+
+where $b_{t} = 8\bar{r}_{t - 1}\bar{s}_{t - 1}\sqrt{\theta_{t,\delta}G_{t - 1} + 4L^{2}d^{2}\theta_{t,\delta}^{2}}$. The proof of Proposition 5.4 shows that inequality (15) holds for all $\omega \in \hat{\Omega}_{\delta}$. Moreover, we have $\tilde{\mathbb{P}} (\hat{\Omega}_\delta)\geq 1 - \delta$.
+
+Next, we define $\tilde{\mathcal{F}}_{\delta} \triangleq \{A : A \subset \tilde{\Omega}_{\delta} \cap \hat{\Omega}_{\delta}\} \cap \tilde{\mathcal{F}}_{0}$, which is a sub-sigma-field of $\tilde{\mathcal{F}}_{0}$. Since the probability space is constructed for an algorithm with finite $T$, we can conclude that the conditional expectation $\mathbb{E}[f(\bar{\mathbf{x}}_{\tau_T}) - f(\mathbf{x}_\star) | \tilde{\mathcal{F}}_{\delta}]$ exists and is unique, even if the domain $\mathcal{X}$ is unbounded (see Appendix C.3 for the detailed derivation).
+
+We now combine Propositions 5.3 and 5.4 to establish a convergence result without a bounded domain assumption.
+
+Theorem 5.5. Under Assumptions 2.5, 2.6, 2.7, and 5.1, for any $\delta \in (0,1/2)$, POEM (Algorithm 1) with the modified settings used in Proposition 5.3 satisfies
+
+$$
+\begin{array}{l} \mathbb {E} \left[ f \left(\bar {\mathbf {x}} _ {\tau_ {T}}\right) - f \left(\mathbf {x} _ {\star}\right) \mid \tilde {\mathcal {F}} _ {\delta} \right] \\ \leq \mathcal {O} \left(\left(\frac {d (L + \bar {L})}{T} + \frac {\sqrt {d} L}{\sqrt {T}}\right) \alpha_ {T, \delta} s _ {0} \log_ {+} \left(\frac {s _ {0}}{r _ {\epsilon}}\right)\right) \\ \end{array}
+$$
+
+with probability at least $1 - 2\delta$ , where $s_0 = \|\mathbf{x}_0 - \mathbf{x}_{\star}\|$ and $\alpha_{T,\delta} \triangleq \log_+(T + 1)\log (60\log (T / \delta))$ .
+
+By setting $\bar{L} = L$ , the upper bound on the conditional expectation shown in Theorem 5.5 simplifies to
+
+$$
+\mathcal {O} \bigg (\bigg (\frac {d}{T} + \frac {\sqrt {d}}{\sqrt {T}} \bigg) \alpha_ {T, \delta} L s _ {0} \log_ {+} \bigg (\frac {s _ {0}}{r _ {\epsilon}} \bigg) \bigg).
+$$
+
+This yields the SZO complexity of $\tilde{\mathcal{O}} (dL^2 s_0^2 /\epsilon^2)$ for finding an $\epsilon$ -suboptimal solution $\bar{\mathbf{x}}_{\tau_T}$ , which improves upon the $\mathcal{O}(d^{2}L^{2}s_{0}^{2} / \epsilon^{2})$ complexity established by Nesterov & Spokoiny (2017).
+
+However, the settings in Theorem 5.5 (also Proposition 5.3) require that $r_{\epsilon} \in (0, 3s_0]$, where $s_0 = \|\mathbf{x}_0 - \mathbf{x}_{\star}\|$ is unknown in practice. Furthermore, the first term in the upper bound of Theorem 5.5 depends linearly on $\bar{L}$. Ideally, we would like to design an algorithm that achieves an SZO complexity close to $\tilde{\mathcal{O}}(dL^2 s_0^2/\epsilon^2)$, with only logarithmic dependence on uncertain problem parameters such as $r_{\epsilon}$ and $\bar{L}$. Unfortunately, we show that such an ideal, fully parameter-free zeroth-order algorithm for stochastic convex optimization without a bounded domain assumption is provably unattainable.
+
+We assume that the stochastic zeroth-order algorithm $\mathcal{A}$ accepts valid estimates $\bar{L},\underline{L},\bar{s}$ and $\underline{s}$ such that $\underline{L}\leq L\leq \bar{L}$ and $\underline{s}\leq s_0\leq \bar{s}$ . Based on this setup, we establish the following lower bound on the function value gap for the stochastic convex optimization problem.
+
+Theorem 5.6. Let $\theta : \mathbb{R}^4 \to \mathbb{R}$ be any polylogarithmic function, let $d \in \mathbb{N}$, and let $\mathcal{A}$ be a stochastic zeroth-order algorithm satisfying Assumption 2.7, with valid estimates $\underline{L}, \bar{L}, \underline{s}$, and $\bar{s}$. Then, there exists an $L$-Lipschitz convex function $f: \mathbb{R}^d \to \mathbb{R}$ such that, for any initial point $\mathbf{x}_0 \in \mathbb{R}^d$ and any number of SZO calls $T \geq 2$, the algorithm $\mathcal{A}$ returns a point $\hat{\mathbf{x}}$ satisfying
+
+$$
+f (\hat {\mathbf {x}}) - f _ {\star} > \theta \left(\frac {\bar {L}}{\underline {{L}}}, \frac {\bar {s}}{\underline {{s}}}, T, d\right) \cdot \frac {\sqrt {d} L s _ {0}}{\sqrt {T}}
+$$
+
+with probability at least $1 / \mathrm{e}$.
+
+Remark 5.7. In a recent work, Khaled & Jin (2024) showed the impossibility of an ideal parameter-free algorithm for stochastic first-order optimization by constructing a hard instance in the one-dimensional setting. In contrast, the lower bound for zeroth-order optimization established in Theorem 5.6 must additionally consider the dependence on the dimension of the problem, highlighting a key distinction from the first-order case.
+
+# 6. Numerical Experiments
+
+This section presents numerical experiments to evaluate the empirical performance of POEM (Algorithm 1). We consider a stochastic optimization problem of the form
+
+$$
+\min _ {\mathbf {x} \in \mathcal {X}} f (\mathbf {x}) \triangleq \mathbb {E} _ {(\mathbf {a}, b)} [ F (\mathbf {x}; \mathbf {a}, b) ],
+$$
+
+where $F(\mathbf{x};\mathbf{a},b) = \max \{0, 1 - b\mathbf{a}^{\top}\mathbf{x}\}$ and $(\mathbf{a},b)\in \mathbb{R}^{d}\times \{\pm 1\}$ is uniformly sampled from a binary classification dataset $\{(\mathbf{a}_i,b_i)\}_{i = 1}^n$. The feasible set is defined as $\mathcal{X} = \{\mathbf{x}\in \mathbb{R}^d:\| \mathbf{x}\| \leq R\}$ with radius $R = 1$. We conduct experiments on benchmark datasets from Chang & Lin (2011), including "mushrooms" ($d = 112$, $n = 8124$), "a9a" ($d = 123$, $n = 32{,}561$), and "w8a" ($d = 300$, $n = 49{,}749$). For comparison, we consider two stochastic zeroth-order algorithms: the Two-Point Gradient Estimates (TPGE) method (Duchi et al., 2015) and the Two-Point Bandit Convex Optimization (TPBCO) method (Shamir, 2017).
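The experimental objective and feasible set can be sketched as follows; `make_hinge_szo` is an illustrative helper (not from the paper) that builds the stochastic zeroth-order oracle for the hinge loss and the projection onto the ball of radius $R$:

```python
import numpy as np

def make_hinge_szo(A, b, radius=1.0, rng=None):
    """Build a stochastic zeroth-order oracle for the hinge loss
    F(x; a, b) = max(0, 1 - b * a^T x) over a uniformly sampled example
    (a_i, b_i), plus the projection onto {x : ||x|| <= radius}."""
    rng = np.random.default_rng(rng)

    def szo(x):
        i = rng.integers(len(b))                  # uniform example (a_i, b_i)
        return float(max(0.0, 1.0 - b[i] * (A[i] @ x)))

    def project(x):
        n = float(np.linalg.norm(x))
        return x if n <= radius else (radius / n) * x

    return szo, project
```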
+
+Figure 1 shows the comparison of SZO complexity versus function values. For POEM, the initial movement is set to $r_{\epsilon} = 10^{-2}$. Baseline methods are evaluated under two configurations: theoretical parameter settings (TPGE-T, TPBCO-T) and well-tuned step sizes (TPGE-E, TPBCO-E). The results demonstrate that POEM converges faster than TPGE-T and TPBCO-T, while its performance is comparable to the well-tuned variants (TPGE-E, TPBCO-E).
+
+Figure 1. The comparison of the SZO complexity versus the function value during the iterations, on (a) mushrooms, (b) a9a, and (c) w8a.
+
+Figure 2. The comparison of the parameter settings ($r_{\epsilon}$ for POEM and $1/L$ for the other methods) against $f(\mathbf{x}_T)$, on (a) mushrooms, (b) a9a, and (c) w8a.
+
+Figure 3. The evolution of the step size with different $r_{\epsilon}$ for POEM, on (a) mushrooms, (b) a9a, and (c) w8a.
+
+We further study the practical impact of parameter settings. Specifically, we present the objective function value at the final iteration $(T = 10^{6})$ across all algorithms under different configurations. Figure 2 summarizes these results, where we tune the initial movement $r_{\epsilon}$ in POEM and the term $1 / L$ in baseline methods over the range $\{10^{-7}, 10^{-6}, \dots, 10^{2}\}$. It is clear that POEM exhibits greater stability across parameter settings compared to baseline methods. More importantly, when $r_{\epsilon} \leq R = 1$, the choice of $r_{\epsilon}$ has negligible impact on the function value. This supports our theoretical analysis (Theorem 4.9), where $r_{\epsilon}$ only influences the logarithmic term in the complexity bound if it does not exceed the domain diameter. Additionally, we track the evolution of step sizes in POEM under varying $r_{\epsilon}$. As shown in Figure 3, the step sizes converge to similar values across all configurations, highlighting the algorithm's adaptive nature.
+
+# 7. Conclusion
+
+In this paper, we propose POEM, a novel zeroth-order optimization algorithm for stochastic convex optimization. It can dynamically schedule both the step size and the smoothing parameter during iterations. We show that POEM achieves near-optimal stochastic zeroth-order oracle complexity for problems with bounded domains. Notably, its initialization only impacts convergence rates by a logarithmic factor. We further extend POEM to unbounded domains and derive a lower bound, which reveals that an ideal parameter-free algorithm is impossible in such settings. We also conduct numerical experiments to confirm the practical efficiency of POEM.
+
+In future work, we are interested in extending the ideas of POEM to broader applications, including zeroth-order optimization for minimax and bilevel problems. Another promising direction is the development of parameter-free zeroth-order methods for finite-sum optimization.
+
+# Acknowledgements
+
+This work is supported by the National Natural Science Foundation of China (No. 62206058), the Major Key Project of PCL under Grant PCL2024A06, and Shanghai Basic Research Program (23JC1401000).
+
+# Impact Statement
+
+This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
+
+# References
+
+Agarwal, A., Dekel, O., and Xiao, L. Optimal algorithms for online convex optimization with multi-point bandit feedback. In Conference on Learning Theory, pp. 28-40, 2010.
+Balasubramanian, K. and Ghadimi, S. Zeroth-order (non)-convex stochastic optimization via conditional gradient and gradient updates. In Advances in Neural Information Processing Systems, pp. 3459-3468, 2018.
+Barzilai, J. and Borwein, J. M. Two-point step size gradient methods. IMA Journal of Numerical Analysis, 8(1):141-148, 1988.
+Bernstein, J., Vahdat, A., Yue, Y., and Liu, M.-Y. On the distance between two neural networks and the stability of learning. In Advances in Neural Information Processing Systems, pp. 21370-21381, 2020.
+Berrada, L., Zisserman, A., and Kumar, M. P. Training neural networks for and by interpolation. In International Conference on Machine Learning, pp. 799-809, 2020.
+Bhaskara, A., Cutkosky, A., Kumar, R., and Purohit, M. Online learning with imperfect hints. In International Conference on Machine Learning, pp. 822-831, 2020.
+Carmon, Y. and Hinder, O. Making SGD parameter-free. In Conference on Learning Theory, pp. 2360-2389, 2022.
+Chang, C.-C. and Lin, C.-J. LIBSVM: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2(3):1-27, 2011.
+Chen, K., Langford, J., and Orabona, F. Better parameter-free stochastic optimization with ODE updates for coinbetting. In AAAI Conference on Artificial Intelligence, pp. 6239-6247, 2022.
+Chen, L., Xu, J., and Luo, L. Faster gradient-free algorithms for nonsmooth nonconvex stochastic optimization. In International Conference on Machine Learning, pp. 5219-5233, 2023.
+
+Cutkosky, A. and Orabona, F. Black-box reductions for parameter-free online learning in Banach spaces. In Conference on Learning Theory, pp. 1493-1529, 2018.
+Defazio, A. and Mishchenko, K. Learning-rate-free learning by D-adaptation. In International Conference on Machine Learning, pp. 7449-7479, 2023.
+Duchi, J., Hazan, E., and Singer, Y. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(61):2121-2159, 2011.
+Duchi, J. C., Bartlett, P. L., and Wainwright, M. J. Randomized smoothing for stochastic optimization. SIAM Journal on Optimization, 22(2):674-701, 2012a.
+Duchi, J. C., Jordan, M. I., Wainwright, M. J., and Wibisono, A. Finite sample convergence rates of zero-order stochastic optimization methods. Advances in Neural Information Processing Systems, pp. 1439-1447, 2012b.
+Duchi, J. C., Jordan, M. I., Wainwright, M. J., and Wibisono, A. Optimal rates for zero-order convex optimization: The power of two function evaluations. IEEE Transactions on Information Theory, 61(5):2788-2806, 2015.
+Durrett, R. Probability: theory and examples, volume 49. Cambridge University Press, 2019.
+Flaxman, A. D., Kalai, A. T., and McMahan, H. B. Online convex optimization in the bandit setting: gradient descent without a gradient. In ACM-SIAM Symposium on Discrete Algorithms, pp. 385-394, 2004.
+Gasnikov, A., Novitskii, A., Novitskii, V., Abdukhakimov, F., Kamzolov, D., Beznosikov, A., Takac, M., Dvurechensky, P., and Gu, B. The power of first-order smooth optimization for black-box non-smooth problems. In International Conference on Machine Learning, pp. 7241-7265, 2022.
+Ghadimi, S. and Lan, G. Stochastic first-and zeroth-order methods for nonconvex stochastic programming. SIAM Journal on Optimization, 23(4):2341-2368, 2013.
+Goodfellow, I. J., Shlens, J., and Szegedy, C. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
+Howard, S. R., Ramdas, A., McAuliffe, J., and Sekhon, J. Time-uniform, nonparametric, nonasymptotic confidence sequences. The Annals of Statistics, 49(2):1055-1080, 2021.
+Ilyas, A., Engstrom, L., and Madry, A. Prior convictions: Black-box adversarial attacks with bandits and priors. arXiv preprint arXiv:1807.07978, 2018.
+
+Ivgi, M., Hinder, O., and Carmon, Y. Dog is SGD's best friend: A parameter-free dynamic step size schedule. In International Conference on Machine Learning, pp. 14465-14499, 2023a.
+Ivgi, M., Hinder, O., and Carmon, Y. Dog is SGD's best friend: A parameter-free dynamic step size schedule. arXiv preprint arXiv:2402.07793, 2023b.
+Jacobsen, A. and Cutkosky, A. Parameter-free mirror descent. In Conference on Learning Theory, pp. 4160-4211, 2022.
+Khaled, A. and Jin, C. Tuning-free stochastic optimization. In International Conference on Machine Learning, pp. 23622-23661, 2024.
+Kiefer, J. and Wolfowitz, J. Stochastic estimation of the maximum of a regression function. The Annals of Mathematical Statistics, pp. 462-466, 1952.
+Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2014.
+Kornowski, G. and Shamir, O. An algorithm with optimal dimension-dependence for zero-order nonsmooth nonconvex stochastic optimization. Journal of Machine Learning Research, 25(122):1-14, 2024.
+Lan, G., Ouyang, Y., and Zhang, Z. Optimal and parameter-free gradient minimization methods for smooth optimization. arXiv preprint arXiv:2310.12139, 2023.
+Li, T. and Lan, G. A simple uniformly optimal method without line search for convex optimization. arXiv preprint arXiv:2310.10082, 2023.
+Lin, T., Zheng, Z., and Jordan, M. Gradient-free methods for deterministic and stochastic nonsmooth nonconvex optimization. In Advances in Neural Information Processing Systems, pp. 26160-26175, 2022.
+Liu, Y., Chen, X., Liu, C., and Song, D. Delving into transferable adversarial examples and black-box attacks. In International Conference on Learning Representations, 2016.
+Loizou, N., Vaswani, S., Laradji, I. H., and Lacoste-Julien, S. Stochastic Polyak step-size for SGD: An adaptive learning rate for fast convergence. In International Conference on Artificial Intelligence and Statistics, pp. 1306-1314, 2021.
+Luo, H. and Schapire, R. E. Achieving all with no parameters: AdaNormalHedge. In Conference on Learning Theory, pp. 1286-1304, 2015.
+
+Mania, H., Guy, A., and Recht, B. Simple random search provides a competitive approach to reinforcement learning. In Advances in Neural Information Processing Systems, pp. 1805-1814, 2018.
+Mhammedi, Z. and Koolen, W. M. Lipschitz and comparator-norm adaptivity in online learning. In Conference on Learning Theory, pp. 2858-2887, 2020.
+Nazari, P., Tarzanagh, D. A., and Michailidis, G. Adaptive first-and zeroth-order methods for weakly convex stochastic optimization problems. arXiv preprint arXiv:2005.09261, 2020.
+Nesterov, Y. and Spokoiny, V. Random gradient-free minimization of convex functions. Foundations of Computational Mathematics, 17(2):527-566, 2017.
+Orabona, F. and Pál, D. Coin betting and parameter-free online learning. In Advances in Neural Information Processing Systems, pp. 577-585, 2016.
+Orabona, F. and Tommasi, T. Training deep networks without learning rates through coin betting. In Advances in Neural Information Processing Systems, pp. 2157-2167, 2017.
+Paquette, C. and Scheinberg, K. A stochastic line search method with expected complexity analysis. SIAM Journal on Optimization, 30(1):349-376, 2020.
+Polyak, B. T. Introduction to optimization. Optimization Software, New York, 1987.
+Rando, M., Molinari, C., Rosasco, L., and Villa, S. An optimal structured zeroth-order algorithm for non-smooth optimization. In Advances in Neural Information Processing Systems, pp. 36738-36767, 2024.
+Rolinek, M. and Martius, G. L4: Practical loss-based step-size adaptation for deep learning. In Advances in Neural Information Processing Systems, pp. 6434-6444, 2018.
+Shaham, U., Yamada, Y., and Negahban, S. Understanding adversarial training: Increasing local stability of supervised models through robust optimization. Neurocomputing, 307:195-204, 2018.
+Shalev-Shwartz, S. Online learning and online convex optimization. Foundations and Trends® in Machine Learning, 4(2):107-194, 2012.
+Shamir, O. An optimal algorithm for bandit and zero-order convex optimization with two-point feedback. Journal of Machine Learning Research, 18(52):1-11, 2017.
+Shazeer, N. and Stern, M. Adafactor: Adaptive learning rates with sublinear memory cost. In International Conference on Machine Learning, pp. 4596-4604, 2018.
+
+Streeter, M. and McMahan, H. B. No-regret algorithms for unconstrained online convex optimization. In Advances in Neural Information Processing Systems, pp. 2402-2410, 2012.
+Tan, C., Ma, S., Dai, Y.-H., and Qian, Y. Barzilai-Borwein step size for stochastic gradient descent. In Advances in Neural Information Processing Systems, pp. 685-693, 2016.
+Tieleman, T. Lecture 6.5 - RMSProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural networks for machine learning, 4(2):26, 2012.
+Vaswani, S., Mishkin, A., Laradji, I., Schmidt, M., Gidel, G., and Lacoste-Julien, S. Painless stochastic gradient: Interpolation, line-search, and convergence rates. Advances in Neural Information Processing Systems, pp. 3732-3745, 2019.
+Wang, B., Zhang, Y., Zhang, H., Meng, Q., Sun, R., Ma, Z.-M., Liu, T.-Y., Luo, Z.-Q., and Chen, W. Provable adaptivity of Adam under non-uniform smoothness. In ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 2960-2969, 2024.
+Wang, X., Johansson, M., and Zhang, T. Generalized Polyak step size for first order optimization with momentum. In International Conference on Machine Learning, pp. 35836-35863, 2023.
+You, Y., Gitman, I., and Ginsburg, B. Large batch training of convolutional networks. arXiv preprint arXiv:1708.03888, 2017.
+Yousefian, F., Nedic, A., and Shanbhag, U. V. On stochastic gradient and subgradient methods with adaptive steplength sequences. Automatica, 48(1):56-67, 2012.
+Zeiler, M. D. ADADELTA: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.
+Zhang, Y., Chen, C., Li, Z., Ding, T., Wu, C., Ye, Y., Luo, Z.-Q., and Sun, R. Adam-mini: Use fewer learning rates to gain more. In International Conference on Learning Representations, 2025.
+
+# A. Some Basic Results
+
+We first present some basic lemmas.
+
+Lemma A.1 (Shamir (2017, Lemma 9)). Suppose $\mathbf{v} \sim \mathbb{U}(\mathbb{S}^{d-1})$ . Then, for any function $h: \mathbb{R}^d \to \mathbb{R}$ that is $L$ -Lipschitz with respect to the $\ell_2$ -norm, the following concentration inequality holds
+
+$$
+\mathbb {P} (| h (\mathbf {v}) - \mathbb {E} _ {\mathbf {v} \sim \mathbb {U} (\mathbb {S} ^ {d - 1})} [ h (\mathbf {v}) ] | \geq t) \leq 2 \exp \left(- \frac {c d t ^ {2}}{L ^ {2}}\right),
+$$
+
+where $c$ is a numerical constant.
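+
+As an illustrative numerical check of Lemma A.1 (not part of any proof), the following Python sketch samples $\mathbf{v} \sim \mathbb{U}(\mathbb{S}^{d-1})$ and verifies that an $L$-Lipschitz function of $\mathbf{v}$ fluctuates at the scale $L/\sqrt{d}$; the test function $h(\mathbf{v}) = L v_1$ is an arbitrary choice made only for this sketch.
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+d, L, n = 200, 3.0, 20_000
+
+# Sample v ~ U(S^{d-1}) by normalizing standard Gaussian vectors.
+v = rng.standard_normal((n, d))
+v /= np.linalg.norm(v, axis=1, keepdims=True)
+
+# h(v) = L * v_1 is L-Lipschitz w.r.t. the l2-norm and has mean zero.
+h = L * v[:, 0]
+
+# Lemma A.1 predicts sub-Gaussian fluctuations at scale L / sqrt(d);
+# for this particular h, Var(v_1) = 1/d exactly, so the ratio is close to 1.
+ratio = float(h.std() / (L / np.sqrt(d)))
+```
+
+The dimension dependence is the point: the fluctuation scale shrinks as $d$ grows, which is what later drives the $L^2 d$ (rather than $L^2 d^2$) second-moment bound in Lemma 4.2.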
+
+Lemma A.2 (Ivgi et al. (2023a, Lemma D.2)). Let $S$ be the set of non-negative and non-decreasing sequences. Let $c > 0$ and let $X_{t}$ be a martingale difference sequence adapted to $\mathcal{F}_t$ such that $|X_{t}|\leq c$ with probability 1 for all $t$ . Then, for all $\delta \in (0,1)$ and $\hat{X}_t\in \mathcal{F}_{t - 1}$ such that $|\hat{X}_t|\leq c$ with probability 1,
+
+$$
+\mathbb {P} \left(\exists t \leq T, \exists \{y _ {k} \} _ {k = 0} ^ {\infty} \in S: \left| \sum_ {k = 0} ^ {t - 1} y _ {k} X _ {k} \right| \geq b _ {t}\right) \leq \delta ,
+$$
+
+where $b_{t} \triangleq 8y_{t}\sqrt{\theta_{t,\delta}\sum_{k = 0}^{t - 1}(X_{k} - \hat{X}_{k})^{2} + c^{2}\theta_{t,\delta}^{2}}$ and $\theta_{t,\delta} \triangleq \log (60\log (6t / \delta))$ .
+
+Lemma A.3 (Ivgi et al. (2023a, Lemma C.3)). Let $a_{-1}, a_0, \ldots, a_t$ be a nondecreasing sequence of nonnegative numbers, then the following inequality holds
+
+$$
+\sum_ {k = 0} ^ {t} \frac {a _ {k} - a _ {k - 1}}{a _ {k} \log_ {+} ^ {2} (a _ {k} / a _ {- 1})} \leq 1.
+$$
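+
+A small numerical illustration of Lemma A.3 follows; the sketch assumes the convention $\log_+(x) = 1 + \log x$ for $x \geq 1$, under which the integral comparison $\int_1^\infty \frac{\mathrm{d}u}{u(1 + \log u)^2} = 1$ yields the stated bound.
+
+```python
+import numpy as np
+
+def log_plus(x):
+    # Assumed convention: log_+(x) = 1 + log(x) for x >= 1.
+    return 1.0 + np.log(x)
+
+rng = np.random.default_rng(1)
+
+# A nonnegative nondecreasing sequence a_0 <= ... <= a_t, with 0 < a_{-1} <= a_0.
+a = np.cumsum(rng.random(50) + 0.1)
+a_minus1 = 0.5 * a[0]
+a_prev = np.concatenate(([a_minus1], a[:-1]))  # a_{k-1} for k = 0, ..., t
+
+# Lemma A.3: the weighted telescoping sum never exceeds 1.
+total = float(np.sum((a - a_prev) / (a * log_plus(a / a_minus1) ** 2)))
+```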
+
+Lemma A.4 (Carmon & Hinder (2022, Corollary 1)). Let $c > 0$ and $X_{t}$ be a martingale difference sequence adapted to $\mathcal{F}_t$ such that $|X_t|\leq c$ with probability 1 for all $t$ . Then, for all $\delta \in (0,1)$ , and $\hat{X}_{t}\in \mathcal{F}_{t - 1}$ such that $|\hat{X}_t|\leq c$ with probability 1, the following inequality holds
+
+$$
+\mathbb {P} \bigg (\exists t \leq T: \bigg | \sum_ {k = 1} ^ {t} X _ {k} \bigg | > 4 \sqrt {\theta_ {t , \delta} \sum_ {k = 1} ^ {t} (X _ {k} - \hat {X} _ {k}) ^ {2} + c ^ {2} \theta_ {t , \delta} ^ {2}} \bigg) \leq \delta .
+$$
+
+# B. The Proofs for Section 4
+
+We provide detailed proofs for the results under the bounded domain assumption.
+
+# B.1. Proof of Lemma 4.2
+
+Proof of Lemma 4.2. For convenience, we omit the subscripts. Recall that the gradient estimator $\mathbf{g}$ is defined as
+
+$$
+\mathbf {g} (\mathbf {x}, \mu ; \mathbf {v}, \boldsymbol {\xi}) = \frac {d}{2 \mu} (F (\mathbf {x} + \mu \mathbf {v}; \boldsymbol {\xi}) - F (\mathbf {x} - \mu \mathbf {v}; \boldsymbol {\xi})) \mathbf {v}, \quad \text {where} \quad \mathbf {v} \sim \mathbb {U} (\mathbb {S} ^ {d - 1}).
+$$
+
+By Assumption 2.6, the function $F(\mathbf{x};\pmb {\xi})$ is almost surely $L$ -Lipschitz in $\mathbf{x}$ . Thus, we have
+
+$$
+\| \mathbf {g} \| = \frac {d}{2 \mu} | F (\mathbf {x} + \mu \mathbf {v}; \pmb {\xi}) - F (\mathbf {x} - \mu \mathbf {v}; \pmb {\xi}) | \| \mathbf {v} \| \leq L d \| \mathbf {v} \| ^ {2} = L d,
+$$
+
+where the last equality follows from the fact that $\| \mathbf{v}\| = 1$.
+
+Next, using the definition of $\mathbf{g}$ and $\| \mathbf{v}\| = 1$ again, we compute the second moment of $\mathbf{g}$ as
+
+$$
+\mathbb {E} _ {\mathbf {v} \sim \mathbb {U} (\mathbb {S} ^ {d - 1})} [ \| \mathbf {g} \| ^ {2} ] = \frac {d ^ {2}}{4 \mu^ {2}} \cdot \mathbb {E} _ {\mathbf {v} \sim \mathbb {U} (\mathbb {S} ^ {d - 1})} \left[ \left(F (\mathbf {x} + \mu \mathbf {v}; \boldsymbol {\xi}) - F (\mathbf {x} - \mu \mathbf {v}; \boldsymbol {\xi})\right) ^ {2} \right].
+$$
+
+For any $\alpha \in \mathbb{R}$ , we can rewrite this as
+
+$$
+\mathbb {E} _ {\mathbf {v} \sim \mathbb {U} (\mathbb {S} ^ {d - 1})} [ \| \mathbf {g} \| ^ {2} ] = \frac {d ^ {2}}{4 \mu^ {2}} \cdot \mathbb {E} _ {\mathbf {v} \sim \mathbb {U} (\mathbb {S} ^ {d - 1})} [ ((F (\mathbf {x} + \mu \mathbf {v}; \pmb {\xi}) - \alpha) - (F (\mathbf {x} - \mu \mathbf {v}; \pmb {\xi}) - \alpha)) ^ {2} ].
+$$
+
+Applying the inequality $(a - b)^2\leq 2a^2 +2b^2$ , we obtain
+
+$$
+\mathbb {E} _ {\mathbf {v} \sim \mathbb {U} (\mathbb {S} ^ {d - 1})} [ \| \mathbf {g} \| ^ {2} ] \leq \frac {d ^ {2}}{2 \mu^ {2}} \cdot (\mathbb {E} _ {\mathbf {v} \sim \mathbb {U} (\mathbb {S} ^ {d - 1})} [ (F (\mathbf {x} + \mu \mathbf {v}; \pmb {\xi}) - \alpha) ^ {2} ] + \mathbb {E} _ {\mathbf {v} \sim \mathbb {U} (\mathbb {S} ^ {d - 1})} [ (F (\mathbf {x} - \mu \mathbf {v}; \pmb {\xi}) - \alpha) ^ {2} ]).
+$$
+
+Since the distribution $\mathbf{v} \sim \mathbb{U}(\mathbb{S}^{d-1})$ is symmetric about the origin, the two terms on the right-hand side are equal. Thus,
+
+$$
+\mathbb {E} _ {\mathbf {v} \sim \mathbb {U} (\mathbb {S} ^ {d - 1})} [ \| \mathbf {g} \| ^ {2} ] \leq \frac {d ^ {2}}{\mu^ {2}} \cdot \mathbb {E} _ {\mathbf {v} \sim \mathbb {U} (\mathbb {S} ^ {d - 1})} [ (F (\mathbf {x} + \mu \mathbf {v}; \boldsymbol {\xi}) - \alpha) ^ {2} ]. \tag {16}
+$$
+
+Let $h(\mathbf{v}) \triangleq F(\mathbf{x} + \mu \mathbf{v}; \pmb{\xi})$ . Since $F(\mathbf{x}; \pmb{\xi})$ is $L$ -Lipschitz in $\mathbf{x}$ , it follows that $h(\mathbf{v})$ is $\mu L$ -Lipschitz in $\mathbf{v}$ . By Lemma A.1, the variance of $h(\mathbf{v})$ is bounded as follows
+
+$$
+\begin{array}{l} \mathbb {E} _ {\mathbf {v} \sim \mathbb {U} (\mathbb {S} ^ {d - 1})} [ (h (\mathbf {v}) - \mathbb {E} _ {\mathbf {v} \sim \mathbb {U} (\mathbb {S} ^ {d - 1})} [ h (\mathbf {v}) ]) ^ {2} ] = \int_ {0} ^ {\infty} \mathbb {P} ((h (\mathbf {v}) - \mathbb {E} _ {\mathbf {v} \sim \mathbb {U} (\mathbb {S} ^ {d - 1})} [ h (\mathbf {v}) ]) ^ {2} > t) \, \mathrm {d} t \\ = \int_ {0} ^ {\infty} \mathbb {P} \left(| h (\mathbf {v}) - \mathbb {E} _ {\mathbf {v} \sim \mathbb {U} (\mathbb {S} ^ {d - 1})} [ h (\mathbf {v}) ] | > \sqrt {t}\right) \mathrm {d} t \\ \leq \int_ {0} ^ {\infty} 2 \exp \left(- \frac {c d t}{\mu^ {2} L ^ {2}}\right) \mathrm {d} t = \frac {2 \mu^ {2} L ^ {2}}{c d}, \\ \end{array}
+$$
+
+where $c > 0$ is a numerical constant. The first equality follows from the identity (Durrett, 2019, Lemma 2.2.13), which states that if a random variable $Y \geq 0$ almost surely, then
+
+$$
+\mathbb {E} [ Y ] = \int_ {0} ^ {\infty} \mathbb {P} (Y > y) \mathrm {d} y.
+$$
+
+Setting $\alpha = \mathbb{E}_{\mathbf{v}\sim \mathbb{U}(\mathbb{S}^{d - 1})}[h(\mathbf{v})]$ , and combining the variance bound above with the inequality (16), we have
+
+$$
+\mathbb {E} _ {\mathbf {v} \sim \mathbb {U} (\mathbb {S} ^ {d - 1})} [ \| \mathbf {g} \| ^ {2} ] \leq \frac {d ^ {2}}{\mu^ {2}} \cdot \mathbb {E} _ {\mathbf {v} \sim \mathbb {U} (\mathbb {S} ^ {d - 1})} [ (h (\mathbf {v}) - \mathbb {E} _ {\mathbf {v} \sim \mathbb {U} (\mathbb {S} ^ {d - 1})} [ h (\mathbf {v}) ]) ^ {2} ] \leq \frac {2}{c} L ^ {2} d.
+$$
+
+Finally, we apply the law of total expectation to derive $\mathbb{E}[\| \mathbf{g}\| ^2 ] = \mathbb{E}[\mathbb{E}_{\mathbf{v}\sim \mathbb{U}(\mathbb{S}^{d - 1})}[\| \mathbf{g}\| ^2 ]] \leq 2L^2 d / c$.
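+
+To make the two bounds of this proof concrete, here is a numerical sketch of the two-point estimator; the noiseless test objective $F(\mathbf{x}) = L\|\mathbf{x}\|_2$ with $L = 2$ is an assumption made only for illustration. It checks the almost-sure bound $\|\mathbf{g}\| \leq Ld$ and that the second moment scales like $L^2 d$ rather than the naive $L^2 d^2$.
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(2)
+d, mu, n, L = 50, 1e-3, 5_000, 2.0
+x = rng.standard_normal(d)
+
+# Illustrative L-Lipschitz objective (the noise variable xi is omitted for simplicity).
+F = lambda z: L * np.linalg.norm(z)
+
+# Directions v ~ U(S^{d-1}).
+v = rng.standard_normal((n, d))
+v /= np.linalg.norm(v, axis=1, keepdims=True)
+
+# g = d/(2*mu) * (F(x + mu*v) - F(x - mu*v)) * v, so ||g|| = d/(2*mu) * |diff| since ||v|| = 1.
+diffs = np.array([F(x + mu * vi) - F(x - mu * vi) for vi in v])
+g_norms = np.abs(diffs) * d / (2.0 * mu)
+
+max_norm = float(g_norms.max())               # almost-sure bound: at most L * d
+second_moment = float((g_norms ** 2).mean())  # O(L^2 d), far below the naive (L * d)^2
+```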
+
+# B.2. Proof of Lemma 4.4
+
+Proof of Lemma 4.4. We begin by defining the filtration $\mathcal{F}_k\triangleq \sigma (\mathbf{v}_i,\pmb {\xi}_i,0\leq i\leq k)$ for $k\in \mathbb{N}$ and $\mathcal{F}_{-1}\triangleq \{\emptyset ,\Omega \}$ . Next, we introduce two stochastic processes $(X_{k},k\in \mathbb{N})$ and $(\hat{X}_k,k\in \mathbb{N})$ defined as
+
+$$
+X _ {k} \triangleq \frac {1}{\bar {s} _ {k}} \left\langle \boldsymbol {\Delta} _ {k}, \mathbf {x} _ {k} - \mathbf {x} _ {\star} \right\rangle \in \mathcal {F} _ {k} \quad \text {and} \quad \hat {X} _ {k} \triangleq \frac {1}{\bar {s} _ {k}} \left\langle \nabla f _ {\mu_ {k}} \left(\mathbf {x} _ {k}\right), \mathbf {x} _ {k} - \mathbf {x} _ {\star} \right\rangle \in \mathcal {F} _ {k - 1}, \tag {17}
+$$
+
+where $\pmb{\Delta}_k = \nabla f_{\mu_k}(\mathbf{x}_k) - \mathbf{g}_k$ . Thus, we derive that $(X_k, k \in \mathbb{N})$ is adapted to $(\mathcal{F}_k, k \in \mathbb{N})$ and $(\hat{X}_k, k \in \mathbb{N})$ is predictable with respect to $(\mathcal{F}_k, k \in \mathbb{N})$ . Moreover, since $\mathbf{g}_k$ is an unbiased estimator of $\nabla f_{\mu_k}(\mathbf{x}_k)$ conditioned on $\mathcal{F}_{k-1}$ , we have
+
+$$
+\mathbb {E} \left[ X _ {k} \mid \mathcal {F} _ {k - 1} \right] = \frac {1}{\bar {s} _ {k}} \cdot \mathbb {E} \left[ \left\langle \nabla f _ {\mu_ {k}} (\mathbf {x} _ {k}) - \mathbf {g} _ {k}, \mathbf {x} _ {k} - \mathbf {x} _ {\star} \right\rangle \mid \mathcal {F} _ {k - 1} \right] = 0,
+$$
+
+where we use the fact that $\bar{s}_k\in \mathcal{F}_{k - 1}$ . This implies that $(X_{k},\mathcal{F}_{k},k\in \mathbb{N})$ forms a martingale difference process.
+
+Next, applying Lemma 4.2, we know that $\| \mathbf{g}_k\| \leq Ld$ . Since $\mathbf{g}_k$ is an unbiased estimator of $\nabla f_{\mu_k}(\mathbf{x}_k)$ , we obtain
+
+$$
+\left\| \nabla f _ {\mu_ {k}} (\mathbf {x} _ {k}) \right\| = \left\| \mathbb {E} _ {\mathbf {v} _ {k}, \boldsymbol {\xi} _ {k}} [ \mathbf {g} _ {k} ] \right\| \leq \mathbb {E} _ {\mathbf {v} _ {k}, \boldsymbol {\xi} _ {k}} \left[ \| \mathbf {g} _ {k} \| \right] \leq L d.
+$$
+
+It follows that
+
+$$
+\left\| \boldsymbol {\Delta} _ {k} \right\| \leq \left\| \mathbf {g} _ {k} \right\| + \left\| \nabla f _ {\mu_ {k}} (\mathbf {x} _ {k}) \right\| \leq 2 L d.
+$$
+
+From equation (17) and the Cauchy-Schwarz inequality, we immediately have $|X_{k}| \leq \|\boldsymbol{\Delta}_{k}\| \leq 2Ld$ and $|\hat{X}_k| \leq \|\nabla f_{\mu_k}(\mathbf{x}_k)\| \leq Ld$ .
+
+Now, define the sequence $Y_{k} \triangleq \bar{r}_{k}\bar{s}_{k}$ for $k \in \mathbb{N}$ , which is non-negative and non-decreasing. Using the concentration inequality for martingale difference sequences from Lemma A.2, and letting $\delta \in (0,1)$ and $c = 2Ld$ , we obtain
+
+$$
+\begin{array}{l} \mathbb {P} \left(\exists t \leq T: \left| \sum_ {k = 0} ^ {t - 1} \bar {r} _ {k} \langle \boldsymbol {\Delta} _ {k}, \mathbf {x} _ {k} - \mathbf {x} _ {\star} \rangle \right| \geq b _ {t}\right) = \mathbb {P} \left(\exists t \leq T: \left| \sum_ {k = 0} ^ {t - 1} Y _ {k} X _ {k} \right| \geq b _ {t}\right) \\ \leq \mathbb {P} \left(\exists t \leq T, \exists \{y _ {k} \} _ {k = 0} ^ {\infty} \in S: \left| \sum_ {k = 0} ^ {t - 1} y _ {k} X _ {k} \right| \geq b _ {t}\right) \\ \leq \delta , \\ \end{array}
+$$
+
+where $b_{t} \triangleq 8\bar{r}_{t - 1}\bar{s}_{t - 1}\sqrt{\theta_{t,\delta}G_{t - 1} + 4L^{2}d^{2}\theta_{t,\delta}^{2}}$ , $\theta_{t,\delta} \triangleq \log (60\log (6t / \delta))$ , and $S$ is the set of non-negative and non-decreasing sequences.
+
+# B.3. Proof of Lemma 4.5
+
+Proof of Lemma 4.5. Define the partial sum as $S_{t} \triangleq \sum_{k=1}^{t} 1 / \sqrt{k}$ for $t \in \mathbb{N}_{+}$ . An upper bound for $S_{t}$ can be obtained via the following integral
+
+$$
+S _ {t} \leq 1 + \int_ {1} ^ {t} \frac {1}{\sqrt {x}} \mathrm {d} x = 2 \sqrt {t} - 1 \leq 2 \sqrt {t}.
+$$
+
+Using this bound, together with the definition of $\mu_{k}$ in equation (6), we can bound the noise as follows
+
+$$
+\sum_ {k = 0} ^ {t - 1} 2 L \bar {r} _ {k} \mu_ {k} \leq 2 L \bar {r} _ {t - 1} \sum_ {k = 0} ^ {t - 1} \mu_ {k} \leq 2 L \sqrt {d} \cdot \bar {r} _ {t - 1} ^ {2} S _ {t} = 4 L \bar {r} _ {t - 1} ^ {2} \sqrt {d t}.
+$$
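+
+The partial-sum bound used above is easy to verify numerically (an illustrative sketch, not part of the argument):
+
+```python
+import numpy as np
+
+# S_t = sum_{k=1}^t 1/sqrt(k) versus the integral bound 2*sqrt(t) - 1 <= 2*sqrt(t).
+t = np.arange(1, 2001)
+S = np.cumsum(1.0 / np.sqrt(t))
+# S_t - (2*sqrt(t) - 1) is never positive (equality holds at t = 1).
+max_gap = float((S - (2.0 * np.sqrt(t) - 1.0)).max())
+```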
+
+# B.4. Proof of Proposition 4.7
+
+Proof of Proposition 4.7. Combining Lemma 4.1, 4.4 and 4.5 with equations (9) and (10), we obtain that, with probability at least $1 - \delta$ , the upper bound for $f(\bar{\mathbf{x}}_t) - f(\mathbf{x}_{\star})$ is given by
+
+$$
+\frac {(2 \bar {s} _ {t} + \bar {r} _ {t}) \sqrt {G _ {t - 1}} + 8 \bar {s} _ {t} \sqrt {\theta_ {t , \delta} G _ {t - 1} + 4 L ^ {2} d ^ {2} \theta_ {t , \delta} ^ {2}} + 4 \bar {r} _ {t} L \sqrt {d t}}{\sum_ {k = 0} ^ {t - 1} \bar {r} _ {k} / \bar {r} _ {t}},
+$$
+
+where we use the fact that $\bar{r}_{t - 1}\leq \bar{r}_t$ and $\bar{s}_{t - 1}\leq \bar{s}_t$ . Applying the inequality $\sqrt{a^2 + b^2}\leq a + b$ for $a, b\geq 0$ , together with $\sqrt{\theta_{t,\delta}}\leq \theta_{t,\delta}$ (which holds since $\theta_{t,\delta}\geq 1$ ), we have
+
+$$
+\frac {(2 \bar {s} _ {t} + \bar {r} _ {t}) \sqrt {G _ {t - 1}} + 8 \bar {s} _ {t} (\theta_ {t , \delta} \sqrt {G _ {t - 1}} + 2 \theta_ {t , \delta} L d) + 4 \bar {r} _ {t} L \sqrt {d t}}{\sum_ {k = 0} ^ {t - 1} \bar {r} _ {k} / \bar {r} _ {t}}.
+$$
+
+Finally, using the triangle inequality $\bar{s}_t\leq \bar{r}_t + s_0$ , we obtain the bound
+
+$$
+1 6 \cdot \frac {\theta_ {t , \delta} (\bar {r} _ {t} + s _ {0}) (\sqrt {G _ {t - 1}} + L d + L \sqrt {d t})}{\sum_ {k = 0} ^ {t - 1} \bar {r} _ {k} / \bar {r} _ {t}}.
+$$
+
+# B.5. Proof of Theorem 4.9
+
+We begin by introducing several useful properties of conditional expectations.
+
+Lemma B.1. Let $(\Omega, \mathcal{F}_0, \mathbb{P})$ be a probability space, and let $X_1, X_2$ be two random variables defined on it. Suppose $\mathcal{F} \subset \mathcal{F}_0$ is a sub- $\sigma$ -algebra, and $\mathbb{E}[\cdot | \mathcal{F}]$ denotes the corresponding conditional expectation. If $X_1 \leq X_2$ on a set $B \in \mathcal{F}$ , then $\mathbb{E}[X_1 | \mathcal{F}] \leq \mathbb{E}[X_2 | \mathcal{F}]$ almost surely on $B$ .
+
+Proof of Lemma B.1. We follow the proof strategy of Durrett (2019, Theorem 4.1.2). For any $\epsilon > 0$ , define the event $A = \{\omega \in \Omega : \mathbb{E}[X_1|\mathcal{F}] - \mathbb{E}[X_2|\mathcal{F}] \geq \epsilon\}$ , which satisfies $A \in \mathcal{F}$ since both conditional expectations are $\mathcal{F}$ -measurable. Because $B \in \mathcal{F}$ , their intersection $A \cap B \in \mathcal{F}$ . Then, by the definition of conditional expectation, we have
+
+$$
+\int_ {A \cap B} \left(\mathbb {E} [ X _ {1} \mid \mathcal {F} ] - \mathbb {E} [ X _ {2} \mid \mathcal {F} ]\right) \mathrm {d} \mathbb {P} = \int_ {A \cap B} \left(X _ {1} - X _ {2}\right) \mathrm {d} \mathbb {P} \leq 0,
+$$
+
+where the inequality follows from the assumption $X_{1}\leq X_{2}$ on $B$ . On the other hand, by the definition of $A$ , we have
+
+$$
+\int_ {A \cap B} \left(\mathbb {E} [ X _ {1} \mid \mathcal {F} ] - \mathbb {E} [ X _ {2} \mid \mathcal {F} ]\right) \mathrm {d} \mathbb {P} \geq \int_ {A \cap B} \epsilon \, \mathrm {d} \mathbb {P} = \epsilon \cdot \mathbb {P} (A \cap B).
+$$
+
+Combining the two inequalities, we have $\mathbb{P}(A\cap B) = 0$ , which implies that
+
+$$
+\mathbb {P} (\omega \in B: \mathbb {E} [ X _ {1} \mid \mathcal {F} ] - \mathbb {E} [ X _ {2} \mid \mathcal {F} ] \geq \epsilon) = 0.
+$$
+
+Since this holds for all $\epsilon > 0$ , it follows that $\mathbb{E}[X_1 \mid \mathcal{F}] \leq \mathbb{E}[X_2 \mid \mathcal{F}]$ almost surely on $B$ .
+
+Lemma B.2 (Durrett (2019, Theorem 4.1.13)). If $\mathcal{F}_0\subset \mathcal{F}$ , then $\mathbb{E}[X\mid \mathcal{F}_0] = \mathbb{E}[\mathbb{E}[X\mid \mathcal{F}]\mid \mathcal{F}_0]$ .
+
+Lemma B.3 (Durrett (2019, Theorem 4.1.10)). Let $\phi$ be a convex function and $X$ be a random variable such that $\mathbb{E}|X| < \infty$ and $\mathbb{E}|\phi(X)| < \infty$ . Then,
+
+$$
+\phi (\mathbb {E} [ X \mid \mathcal {F} ]) \leq \mathbb {E} [ \phi (X) \mid \mathcal {F} ].
+$$
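+
+Lemma B.3 is stated for convex $\phi$ ; the proof of Theorem 4.9 below applies the reversed inequality to the concave square root, i.e. $\mathbb{E}[\sqrt{X}\mid \mathcal{F}]\leq \sqrt{\mathbb{E}[X\mid \mathcal{F}]}$ . A quick Monte Carlo illustration with the trivial $\sigma$ -algebra (the exponential distribution is an arbitrary choice for this sketch):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(3)
+x = rng.exponential(scale=3.0, size=200_000)  # a nonnegative random variable X with E[X] = 3
+
+lhs = float(np.sqrt(x).mean())  # E[sqrt(X)]
+rhs = float(np.sqrt(x.mean()))  # sqrt(E[X]); concavity of sqrt gives lhs <= rhs
+```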
+
+We are now ready to prove Theorem 4.9.
+
+Proof of Theorem 4.9. Recall the event
+
+$$
+\Omega_ {\delta} \triangleq \left\{\omega \in \Omega_ {0}: \forall t \leq T, \left| \sum_ {k = 0} ^ {t - 1} \bar {r} _ {k} \langle \boldsymbol {\Delta} _ {k}, \mathbf {x} _ {k} - \mathbf {x} _ {\star} \rangle \right| < b _ {t} \right\} \in \mathcal {F} _ {0},
+$$
+
+which satisfies $\mathbb{P}(\Omega_{\delta}) \geq 1 - \delta$ by Lemma 4.4. From Proposition 4.7, we know that for any $\omega \in \Omega_{\delta}$ and all $t \leq T$ , the following inequality holds
+
+$$
+f (\bar {\mathbf {x}} _ {t}) - f (\mathbf {x} _ {\star}) \leq 1 6 \cdot \frac {\theta_ {t , \delta} (\bar {r} _ {t} + s _ {0}) (\sqrt {G _ {t - 1}} + L d + L \sqrt {d t})}{\sum_ {k = 0} ^ {t - 1} \bar {r} _ {k} / \bar {r} _ {t}}.
+$$
+
+Combining this with equation (11), we obtain that for all $\omega \in \Omega_{\delta}$
+
+$$
+f (\bar {\mathbf {x}} _ {\tau_ {T}}) - f (\mathbf {x} _ {\star}) \leq c _ {0} \cdot \frac {\theta_ {\tau_ {T} , \delta} (\bar {r} _ {\tau_ {T}} + s _ {0}) (\sqrt {G _ {\tau_ {T} - 1}} + L d + L \sqrt {d \tau_ {T}})}{T} \log_ {+} \left(\frac {\bar {r} _ {\tau_ {T}}}{r _ {\epsilon}}\right),
+$$
+
+where $c_{0}$ is a constant and $\tau_{T} = \arg \max_{t\leq T}\sum_{k = 0}^{t - 1}\bar{r}_{k} / \bar{r}_{t}$ . Since $\theta_{t,\delta}$ , $\bar{r}_t$ , and $G_{t}$ are non-decreasing in $t$ and $\tau_T\leq T$ , we can simplify the bound
+
+$$
+f (\bar {\mathbf {x}} _ {\tau_ {T}}) - f (\mathbf {x} _ {\star}) \leq c _ {0} \cdot \frac {\theta_ {T , \delta} (\bar {r} _ {T} + s _ {0}) (\sqrt {G _ {T - 1}} + L d + L \sqrt {d T})}{T} \log_ {+} \left(\frac {\bar {r} _ {T}}{r _ {\epsilon}}\right).
+$$
+
+Noting that the diameter is $D_{\mathcal{X}}$ and $r_\epsilon \leq D_{\mathcal{X}}$ , we have $\bar{r}_T \leq D_{\mathcal{X}}$ . Substituting this into the above bound yields
+
+$$
+f (\bar {\mathbf {x}} _ {\tau_ {T}}) - f (\mathbf {x} _ {\star}) \leq 2 c _ {0} \cdot \frac {\theta_ {T , \delta} D _ {\mathcal {X}} (\sqrt {G _ {T - 1}} + L d + L \sqrt {d T})}{T} \log_ {+} \left(\frac {D _ {\mathcal {X}}}{r _ {\epsilon}}\right).
+$$
+
+Recall that $\mathcal{F}_{\delta} \triangleq \{A : A \subset \Omega_{\delta}\} \cap \mathcal{F}_0$ is a $\sigma$ -algebra satisfying $\mathcal{F}_{\delta} \subset \mathcal{F}_0$ . Moreover, we have $\Omega_{\delta} \in \mathcal{F}_{\delta}$ . Applying Lemma B.1, we obtain that for any $\omega \in \Omega_{\delta}$ ,
+
+$$
+\mathbb {E} \left[ f \left(\bar {\mathbf {x}} _ {\tau_ {T}}\right) - f \left(\mathbf {x} _ {\star}\right) \mid \mathcal {F} _ {\delta} \right] \leq 2 c _ {0} \cdot \frac {\theta_ {T , \delta} D _ {\mathcal {X}} \left(\mathbb {E} \left[ \sqrt {G _ {T - 1}} \mid \mathcal {F} _ {\delta} \right] + L d + L \sqrt {d T}\right)}{T} \log_ {+} \left(\frac {D _ {\mathcal {X}}}{r _ {\epsilon}}\right). \tag {18}
+$$
+
+Next, we bound the conditional expectation of $G_{T-1}$ using Lemma 4.2 and Lemma B.2 as follows
+
+$$
+\mathbb {E} \left[ G _ {T - 1} \mid \mathcal {F} _ {\delta} \right] = \mathbb {E} \left[ \mathbb {E} \left[ G _ {T - 1} \right] \mid \mathcal {F} _ {\delta} \right] = \sum_ {k = 0} ^ {T - 1} \mathbb {E} \left[ \mathbb {E} \left[ \| \mathbf {g} _ {k} \| ^ {2} \right] \mid \mathcal {F} _ {\delta} \right] \leq c L ^ {2} d T.
+$$
+
+Since the square root function is concave, applying Jensen's inequality (Lemma B.3) gives
+
+$$
+\mathbb {E} \left[ \sqrt {G _ {T - 1}} \mid \mathcal {F} _ {\delta} \right] \leq \sqrt {\mathbb {E} \left[ G _ {T - 1} \mid \mathcal {F} _ {\delta} \right]} \leq L \sqrt {c d T}.
+$$
+
+Substituting this back into equation (18), we conclude that, for any $\omega \in \Omega_{\delta}$ , the following inequality holds
+
+$$
+\mathbb {E} \left[ f \left(\bar {\mathbf {x}} _ {\tau_ {T}}\right) - f \left(\mathbf {x} _ {\star}\right) \mid \mathcal {F} _ {\delta} \right] \leq c _ {1} \left(\frac {d}{T} + \frac {\sqrt {d}}{\sqrt {T}}\right) \theta_ {T, \delta} L D _ {\mathcal {X}} \log_ {+} \left(\frac {D _ {\mathcal {X}}}{r _ {\epsilon}}\right), \tag {19}
+$$
+
+where $c_{1}$ is a constant. This implies that the upper bound (19) holds with probability at least $1 - \delta$ .
+
+# C. The Proofs for Section 5
+
+We provide detailed proofs for the results without assuming a bounded domain.
+
+# C.1. Proof of Proposition 5.3
+
+For simplicity, we define the following stopping time
+
+$$
+\zeta \triangleq \min \{t \in \mathbb {N} \mid \bar {r} _ {t} > 3 s _ {0} \}.
+$$
+
+Using this stopping time, we define a modified step size
+
+$$
+\tilde {\eta} _ {t} \triangleq \eta_ {t} \cdot \mathbb {I} (t < \zeta),
+$$
+
+where the indicator function $\mathbb{I}(t < \zeta)$ equals 1 if $t < \zeta$ , and 0 otherwise.
+
+Before proving the proposition, we first present and prove several supporting lemmas.
+
+Lemma C.1. Let $T \in \mathbb{N}^+$ . For any $t \leq T$ , the following inequality holds
+
+$$
+\sum_ {k = 0} ^ {t} \tilde {\eta} _ {k} ^ {2} \| \mathbf {g} _ {k} \| ^ {2} \leq \frac {s _ {0} ^ {2}}{2}.
+$$
+
+Proof of Lemma C.1. By the definition of $\tilde{\eta}_k$ and using the identity $\| \mathbf{g}_k\|^2 = G_k - G_{k-1}$ , we can bound the sum as follows
+
+$$
+\sum_ {k = 0} ^ {t} \tilde {\eta} _ {k} ^ {2} \| \mathbf {g} _ {k} \| ^ {2} \leq \sum_ {k = 0} ^ {\zeta - 1} \eta_ {k} ^ {2} \| \mathbf {g} _ {k} \| ^ {2} = \sum_ {k = 0} ^ {\zeta - 1} \frac {\bar {r} _ {k} ^ {2}}{G _ {k} ^ {\prime}} \cdot \| \mathbf {g} _ {k} \| ^ {2} = \sum_ {k = 0} ^ {\zeta - 1} \frac {\bar {r} _ {k} ^ {2} \left(G _ {k} - G _ {k - 1}\right)}{G _ {k} ^ {\prime}} \leq \bar {r} _ {\zeta - 1} ^ {2} \sum_ {k = 0} ^ {\zeta - 1} \frac {G _ {k} - G _ {k - 1}}{G _ {k} ^ {\prime}}, \tag {20}
+$$
+
+where we set $G_{-1} = 0$ . We now use a lower bound for $G_k'$
+
+$$
+G _ {k} ^ {\prime} \geq 8 ^ {4} \theta_ {T, \delta} (G _ {k - 1} + 2 d ^ {2} \bar {L} ^ {2}) \log_ {+} ^ {2} \left(\frac {(k + 1) d ^ {2} \bar {L} ^ {2} + d ^ {2} \bar {L} ^ {2}}{d ^ {2} \bar {L} ^ {2}}\right) \geq 8 ^ {4} \theta_ {T, \delta} (G _ {k} + d ^ {2} \bar {L} ^ {2}) \log_ {+} ^ {2} \left(\frac {G _ {k} + d ^ {2} \bar {L} ^ {2}}{d ^ {2} \bar {L} ^ {2}}\right),
+$$
+
+where the last inequality follows from $\| \mathbf{g}_k\| \leq Ld$ (see Lemma 4.2). Substituting this bound into (20), we obtain
+
+$$
+\sum_ {k = 0} ^ {t} \tilde {\eta} _ {k} ^ {2} \| \mathbf {g} _ {k} \| ^ {2} \leq \frac {\bar {r} _ {\zeta - 1} ^ {2}}{8 ^ {4} \theta_ {T , \delta}} \cdot \sum_ {k = 0} ^ {\zeta - 1} \frac {G _ {k} - G _ {k - 1}}{\left(G _ {k} + d ^ {2} \bar {L} ^ {2}\right) \log_ {+} ^ {2} \left(\frac {G _ {k} + d ^ {2} \bar {L} ^ {2}}{d ^ {2} \bar {L} ^ {2}}\right)} \leq \frac {\bar {r} _ {\zeta - 1} ^ {2}}{8 ^ {4} \theta_ {T , \delta}} \leq \frac {9 s _ {0} ^ {2}}{8 ^ {4} \theta_ {T , \delta}} \leq \frac {s _ {0} ^ {2}}{2}. \tag {21}
+$$
+
+The second inequality holds by applying Lemma A.3 with $a_{k} = G_{k} + d^{2}\bar{L}^{2}$ .
+
+Lemma C.2. For any $\delta \in (0,1)$ , the following inequality holds
+
+$$
+\mathbb {P} \bigg (\exists t \leq T: \left| \sum_ {k = 0} ^ {t - 1} \tilde {\eta} _ {k} \langle \pmb {\Delta} _ {k}, \mathbf {x} _ {k} - \mathbf {x} _ {\star} \rangle \right| > s _ {0} ^ {2} \bigg) \leq \delta .
+$$
+
+Proof of Lemma C.2. We consider the filtration $\mathcal{F}_k = \sigma(\mathbf{v}_i, \pmb{\xi}_i, 0 \leq i \leq k)$ for $k \in \mathbb{N}$ and $\mathcal{F}_{-1} = \{\emptyset, \Omega_0\}$ as defined in Appendix B.2. Note that $\tilde{\eta}_k \in \mathcal{F}_{k-1}$. Define the stochastic processes $(Z_k, k \in \mathbb{N})$ and $(\hat{Z}_k, k \in \mathbb{N})$ as
+
+$$
+Z _ {k} = \tilde {\eta} _ {k} \langle \boldsymbol {\Delta} _ {k}, \mathbf {x} _ {k} - \mathbf {x} _ {\star} \rangle \in \mathcal {F} _ {k} \quad \text {and} \quad \hat {Z} _ {k} = \tilde {\eta} _ {k} \langle \nabla f _ {\mu_ {k}} (\mathbf {x} _ {k}), \mathbf {x} _ {k} - \mathbf {x} _ {\star} \rangle \in \mathcal {F} _ {k - 1},
+$$
+
+where $\pmb{\Delta}_{k} = \nabla f_{\mu_{k}}(\mathbf{x}_{k}) - \mathbf{g}_{k}$ . By construction, we have
+
+$$
+\mathbb {E} \left[ Z _ {k} \mid \mathcal {F} _ {k - 1} \right] = \tilde {\eta} _ {k} \cdot \mathbb {E} \left[ \left\langle \nabla f _ {\mu_ {k}} \left(\mathbf {x} _ {k}\right) - \mathbf {g} _ {k}, \mathbf {x} _ {k} - \mathbf {x} _ {\star} \right\rangle \mid \mathcal {F} _ {k - 1} \right] = 0, \quad \text {where} \quad k \in \mathbb {N}.
+$$
+
+Thus, $(Z_{k},\mathcal{F}_{k},k\in \mathbb{N})$ is a martingale difference process.
+
+We now bound $|Z_k|$ . Using the fact that $\bar{s}_t \leq \bar{r}_t + s_0$ , we obtain
+
+$$
+| Z _ {k} | \leq \tilde {\eta} _ {k} s _ {k} \| \pmb {\Delta} _ {k} \| \leq \frac {\bar {r} _ {\zeta - 1} \bar {s} _ {\zeta - 1} \| \pmb {\Delta} _ {k} \|}{\sqrt {G _ {k} ^ {\prime}}} \leq \frac {1 2 s _ {0} ^ {2} \| \pmb {\Delta} _ {k} \|}{\sqrt {G _ {k} ^ {\prime}}}.
+$$
+
+From Appendix B.2, we have $\| \pmb{\Delta}_k\| \leq 2Ld$ . Moreover, we have $G_k^\prime \geq 16\cdot 8^4\theta_{T,\delta}^2 d^2\bar{L}^2$ . Therefore, we conclude
+
+$$
+| Z _ {k} | \leq \frac {6 s _ {0} ^ {2}}{8 ^ {2} \theta_ {T , \delta}}.
+$$
+
+The same upper bound also applies to $|\hat{Z}_k|$ . Now apply Lemma A.4 with $c = 6s_0^2 / (8^2\theta_{T,\delta})$ . This gives
+
+$$
+\mathbb {P} \left(\exists t \leq T: \left| \sum_ {k = 0} ^ {t - 1} Z _ {k} \right| > 4 \sqrt {\theta_ {t , \delta} \sum_ {k = 0} ^ {t - 1} \left(Z _ {k} - \hat {Z} _ {k}\right) ^ {2} + c ^ {2} \theta_ {t , \delta} ^ {2}}\right) \leq \delta . \tag {22}
+$$
+
+The upper bound for $\sum_{k=0}^{t-1}(Z_k - \hat{Z}_k)^2$ is given by
+
+$$
+\sum_ {k = 0} ^ {t - 1} \left(Z _ {k} - \hat {Z} _ {k}\right) ^ {2} = \sum_ {k = 0} ^ {t - 1} \tilde {\eta} _ {k} ^ {2} \left(\left\langle \mathbf {g} _ {k}, \mathbf {x} _ {k} - \mathbf {x} _ {\star} \right\rangle\right) ^ {2} \leq \sum_ {k = 0} ^ {t - 1} \tilde {\eta} _ {k} ^ {2} s _ {k} ^ {2} \| \mathbf {g} _ {k} \| ^ {2} \leq \bar {s} _ {\zeta - 1} ^ {2} \sum_ {k = 0} ^ {t - 1} \tilde {\eta} _ {k} ^ {2} \| \mathbf {g} _ {k} \| ^ {2} \leq \left(4 s _ {0}\right) ^ {2} \sum_ {k = 0} ^ {t - 1} \tilde {\eta} _ {k} ^ {2} \| \mathbf {g} _ {k} \| ^ {2} \leq \frac {1 2 ^ {2} s _ {0} ^ {4}}{8 ^ {4} \theta_ {T , \delta}}, \tag {23}
+$$
+
+where the third inequality follows from $\bar{s}_k\leq \bar{r}_k + s_0$ and the last follows from (21). Substituting (23) into (22), we get
+
+$$
+4 \sqrt {\theta_ {t , \delta} \sum_ {k = 0} ^ {t - 1} (Z _ {k} - \hat {Z} _ {k}) ^ {2} + c ^ {2} \theta_ {t , \delta} ^ {2}} \leq 4 \sqrt {\theta_ {t , \delta} \frac {1 2 ^ {2} s _ {0} ^ {4}}{8 ^ {4} \theta_ {T , \delta}} + \theta_ {t , \delta} ^ {2} \frac {6 ^ {2} s _ {0} ^ {4}}{8 ^ {4} \theta_ {T , \delta} ^ {2}}} \leq 4 \sqrt {\frac {1 2 ^ {2} s _ {0} ^ {4}}{8 ^ {4}} + \frac {6 ^ {2} s _ {0} ^ {4}}{8 ^ {4}}} \leq s _ {0} ^ {2}.
+$$
+
+Thus, we have
+
+$$
+\mathbb {P} \left(\exists t \leq T: \left| \sum_ {k = 0} ^ {t - 1} \tilde {\eta} _ {k} \langle \boldsymbol {\Delta} _ {k}, \mathbf {x} _ {k} - \mathbf {x} _ {\star} \rangle \right| > s _ {0} ^ {2}\right) \leq \delta .
+$$
+
+Lemma C.3. Let $T \in \mathbb{N}^+$ . For any $t \leq T$ , the following inequality holds
+
+$$
+\sum_ {k = 0} ^ {t} \tilde {\eta} _ {k} \langle \nabla f _ {\mu_ {k}} (\mathbf {x} _ {k}), \mathbf {x} _ {\star} - \mathbf {x} _ {k} \rangle \leq \frac {s _ {0} ^ {2}}{4}.
+$$
+
+Proof of Lemma C.3. By Lemma 2.8, we have
+
+$$
+\langle \nabla f _ {\mu_ {k}} (\mathbf {x} _ {k}), \mathbf {x} _ {\star} - \mathbf {x} _ {k} \rangle \leq f _ {\mu_ {k}} (\mathbf {x} _ {\star}) - f _ {\mu_ {k}} (\mathbf {x} _ {k}) \leq f (\mathbf {x} _ {\star}) - f (\mathbf {x} _ {k}) + 2 L \mu_ {k} \leq 2 L \mu_ {k},
+$$
+
+where the last inequality follows from the fact that $f(\mathbf{x}_{\star}) = \inf_{\mathbf{x} \in \mathcal{X}} f(\mathbf{x})$ . Thus, the summation can be bounded as follows
+
+$$
+\sum_ {k = 0} ^ {t} \tilde {\eta} _ {k} \langle \nabla f _ {\mu_ {k}} (\mathbf {x} _ {k}), \mathbf {x} _ {\star} - \mathbf {x} _ {k} \rangle = \sum_ {k = 0} ^ {\min (\zeta - 1, t)} \eta_ {k} \langle \nabla f _ {\mu_ {k}} (\mathbf {x} _ {k}), \mathbf {x} _ {\star} - \mathbf {x} _ {k} \rangle \leq 2 L \sum_ {k = 0} ^ {\min (\zeta - 1, t)} \eta_ {k} \mu_ {k}.
+$$
+
+Given that $\mu_k = d\bar{r}_k / (k + 1)^2$ and $G_k' \geq 16 \cdot 8^4 d^2\bar{L}^2$ , it follows that
+
+$$
+\sum_ {k = 0} ^ {t} \tilde {\eta} _ {k} \langle \nabla f _ {\mu_ {k}} (\mathbf {x} _ {k}), \mathbf {x} _ {\star} - \mathbf {x} _ {k} \rangle \leq \frac {2 L}{4 \cdot 8 ^ {2} \bar {L}} \sum_ {k = 0} ^ {\min (\zeta - 1, t)} \frac {\bar {r} _ {k} ^ {2}}{(k + 1) ^ {2}} \leq \frac {\bar {r} _ {\zeta - 1} ^ {2}}{2 \cdot 8 ^ {2}} \sum_ {k = 0} ^ {\min (\zeta - 1, t)} \frac {1}{(k + 1) ^ {2}} \leq \frac {3 \pi^ {2} s _ {0} ^ {2}}{4 \cdot 8 ^ {2}} \leq \frac {s _ {0} ^ {2}}{4},
+$$
+
+where the third inequality uses the fact that $\sum_{k=1}^{\infty} 1 / k^{2} = \pi^{2} / 6$ and $\bar{r}_{\zeta-1} \leq 3s_{0}$ .
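+
+The two numerical facts used in the last step can be checked directly (a quick sketch):
+
+```python
+import math
+
+# Partial sums of the Basel series increase to pi^2/6
+partial = sum(1.0 / k**2 for k in range(1, 100001))
+assert partial <= math.pi**2 / 6
+
+# The final constant 3*pi^2/(4*8^2) is indeed at most 1/4
+const = 3 * math.pi**2 / (4 * 8**2)
+print(const)  # about 0.1157
+assert const <= 0.25
+```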
+
+Based on Lemmas C.1, C.2, and C.3, we now establish Proposition 5.3.
+
+Proof of Proposition 5.3. Fix any $\delta >0$ , and define the event
+
+$$
+\tilde {\Omega} _ {\delta} \triangleq \left\{\omega \in \Omega_ {0}: \forall t \leq T, \left| \sum_ {k = 0} ^ {t - 1} \tilde {\eta} _ {k} \langle \pmb {\Delta} _ {k}, \mathbf {x} _ {k} - \mathbf {x} _ {\star} \rangle \right| \leq s _ {0} ^ {2} \right\}.
+$$
+
+By Lemma C.2, it holds that $\mathbb{P}(\tilde{\Omega}_{\delta})\geq 1 - \delta$.
+
+We now proceed by induction on $t$ to show that $\bar{r}_t\leq 3s_0$ for all $t\le T$ and any $\omega \in \tilde{\Omega}_{\delta}$ . For the base case, we have $\bar{r}_0 = r_\epsilon \leq 3s_0$ . For the induction step, we assume $\bar{r}_{t - 1}\leq 3s_0$ , which implies that $\zeta >t - 1$ . From equation (14), we have
+
+$$
+\begin{array}{l} s _ {t} ^ {2} - s _ {0} ^ {2} \leq \sum_ {k = 0} ^ {t - 1} \eta_ {k} ^ {2} \| \mathbf {g} _ {k} \| ^ {2} + 2 \sum_ {k = 0} ^ {t - 1} \eta_ {k} \langle \boldsymbol {\Delta} _ {k}, \mathbf {x} _ {k} - \mathbf {x} _ {\star} \rangle + 2 \sum_ {k = 0} ^ {t - 1} \eta_ {k} \langle \nabla f _ {\mu_ {k}} (\mathbf {x} _ {k}), \mathbf {x} _ {\star} - \mathbf {x} _ {k} \rangle \\ = \sum_ {k = 0} ^ {t - 1} \tilde {\eta} _ {k} ^ {2} \| \mathbf {g} _ {k} \| ^ {2} + 2 \sum_ {k = 0} ^ {t - 1} \tilde {\eta} _ {k} \langle \pmb {\Delta} _ {k}, \mathbf {x} _ {k} - \mathbf {x} _ {\star} \rangle + 2 \sum_ {k = 0} ^ {t - 1} \tilde {\eta} _ {k} \langle \nabla f _ {\mu_ {k}} (\mathbf {x} _ {k}), \mathbf {x} _ {\star} - \mathbf {x} _ {k} \rangle , \\ \end{array}
+$$
+
+where the equality holds since $\zeta > t - 1$ . Now, applying Lemmas C.1, C.2, and C.3, we obtain for any $\omega \in \tilde{\Omega}_{\delta}$
+
+$$
+s _ {t} ^ {2} - s _ {0} ^ {2} \leq \frac {1}{2} s _ {0} ^ {2} + 2 s _ {0} ^ {2} + 2 \cdot \frac {1}{4} s _ {0} ^ {2} = 3 s _ {0} ^ {2},
+$$
+
+which implies that $s_t \leq 2s_0$ . Hence, we have $r_t \leq s_t + s_0 = 3s_0$ . By induction, it follows that $\bar{r}_t = \max(\bar{r}_{t-1}, r_t) \leq 3s_0$ for all $t \leq T$ for any $\omega \in \tilde{\Omega}_{\delta}$ . Equivalently, we have $\mathbb{P}(\bar{r}_T > 3s_0) \leq \delta$ .
+
+# C.2. Proof of Proposition 5.4
+
+Proof of Proposition 5.4. Since $G_{t}^{\prime} \geq G_{t}$ , we can apply the result of Ivgi et al. (2023a, Lemma 3.4) with $G_{t}$ replaced by $G_{t}^{\prime}$ , which ensures that Lemma 4.1 still holds in our setting. Moreover, Lemma 4.4 remains valid. For Lemma 4.5, recall that $\mu_{t} = d\bar{r}_{t} / (t + 1)^{2}$ as given in equation (13). The error introduced by the smoothing parameter $\mu$ can be bounded as
+
+$$
+\sum_ {k = 0} ^ {t - 1} 2 L \bar {r} _ {k} \mu_ {k} = \sum_ {k = 0} ^ {t - 1} \frac {2 L d \bar {r} _ {k} ^ {2}}{(k + 1) ^ {2}} \leq 2 L d \bar {r} _ {t - 1} ^ {2} \sum_ {k = 0} ^ {t - 1} \frac {1}{(k + 1) ^ {2}} \leq 4 L d \bar {r} _ {t - 1} ^ {2},
+$$
+
+where the last inequality uses the fact that $\sum_{k=1}^{\infty} 1 / k^{2} = \pi^{2} / 6$ .
+
+Combining the modified lemmas and using equations (9) and (10), we obtain that, with probability at least $1 - \delta$ , the following upper bound on the optimality gap holds
+
+$$
+\frac {(2 \bar {s} _ {t} + \bar {r} _ {t}) \sqrt {G _ {t - 1} ^ {\prime}} + 8 \bar {s} _ {t} \sqrt {\theta_ {t , \delta} G _ {t - 1} + 4 L ^ {2} d ^ {2} \theta_ {t , \delta} ^ {2}} + 4 L d \bar {r} _ {t}}{\sum_ {k = 0} ^ {t - 1} \bar {r} _ {k} / \bar {r} _ {t}},
+$$
+
+where we use the fact that $\bar{r}_{t - 1}\leq \bar{r}_t$ and $\bar{s}_{t - 1}\leq \bar{s}_t$ . Applying the inequality $\sqrt{a^2 + b^2}\leq a + b$ , the gap simplifies to
+
+$$
+\frac {(2 \bar {s} _ {t} + \bar {r} _ {t}) \sqrt {G _ {t - 1} ^ {\prime}} + 8 \bar {s} _ {t} (\theta_ {t , \delta} \sqrt {G _ {t - 1}} + 2 \theta_ {t , \delta} L d) + 4 L d \bar {r} _ {t}}{\sum_ {k = 0} ^ {t - 1} \bar {r} _ {k} / \bar {r} _ {t}}.
+$$
+
+Finally, using the fact that $G_{t}^{\prime} \geq G_{t}$ and the triangle inequality $\bar{s}_t \leq \bar{r}_t + s_0$ , the gap becomes
+
+$$
+2 0 \cdot \frac {\theta_ {t , \delta} (\bar {r} _ {t} + s _ {0}) (\sqrt {G _ {t - 1} ^ {\prime}} + L d)}{\sum_ {k = 0} ^ {t - 1} \bar {r} _ {k} / \bar {r} _ {t}}.
+$$
+
+# C.3. Proof of Theorem 5.5
+
+We begin by establishing the existence and uniqueness of the conditional expectation $\mathbb{E}[f(\bar{\mathbf{x}}_{\tau_T}) - f(\mathbf{x}_\star) | \tilde{\mathcal{F}}_\delta]$ . According to Durrett (2019, Chapter 4.1), it suffices to verify that $\tilde{\mathcal{F}}_\delta \subset \tilde{\mathcal{F}}_0$ and $\mathbb{E}|f(\bar{\mathbf{x}}_{\tau_T}) - f(\mathbf{x}_\star)| < \infty$ . By definition, we recall that $\tilde{\mathcal{F}}_{\delta} = \{A : A \subset \tilde{\Omega}_{\delta} \cap \hat{\Omega}_{\delta}\} \cap \tilde{\mathcal{F}}_0$ , which implies that $\tilde{\mathcal{F}}_\delta \subset \tilde{\mathcal{F}}_0$ . To verify the integrability condition, we start from inequality (14), which states
+
+$$
+s _ {t} ^ {2} - s _ {0} ^ {2} \leq \sum_ {k = 0} ^ {t - 1} \eta_ {k} ^ {2} \| \mathbf {g} _ {k} \| ^ {2} + 2 \sum_ {k = 0} ^ {t - 1} \eta_ {k} \langle \pmb {\Delta} _ {k}, \mathbf {x} _ {k} - \mathbf {x} _ {\star} \rangle + 2 \sum_ {k = 0} ^ {t - 1} \eta_ {k} \langle \nabla f _ {\mu_ {k}} (\mathbf {x} _ {k}), \mathbf {x} _ {\star} - \mathbf {x} _ {k} \rangle ,
+$$
+
+for $t = 1,2,\ldots ,T$ . Applying the upper bounds $\| \mathbf{g}_k\| \leq Ld$ (Lemma 4.2), $\| \nabla f_{\mu_k}(\mathbf{x}_k)\| \leq Ld$ and $\| \pmb {\Delta}_k\| \leq 2Ld$ (Appendix B.2), we obtain
+
+$$
+\begin{array}{l} s _ {t} ^ {2} - s _ {0} ^ {2} \leq \sum_ {k = 0} ^ {t - 1} \eta_ {k} ^ {2} \| \mathbf {g} _ {k} \| ^ {2} + 2 \sum_ {k = 0} ^ {t - 1} \eta_ {k} \| \boldsymbol {\Delta} _ {k} \| \| \mathbf {x} _ {k} - \mathbf {x} _ {\star} \| + 2 \sum_ {k = 0} ^ {t - 1} \eta_ {k} \| \nabla f _ {\mu_ {k}} (\mathbf {x} _ {k}) \| \| \mathbf {x} _ {\star} - \mathbf {x} _ {k} \| \\ \leq L ^ {2} d ^ {2} \sum_ {k = 0} ^ {t - 1} \eta_ {k} ^ {2} + 6 L d \bar {s} _ {t - 1} \sum_ {k = 0} ^ {t - 1} \eta_ {k}. \\ \end{array}
+$$
+
+Since $\eta_{k} = \bar{r}_{k} / \sqrt{G_{k}^{\prime}}$ with $G_{k}^{\prime}\geq d^{2}L^{2}$ , it follows that
+
+$$
+s _ {t} ^ {2} - s _ {0} ^ {2} \leq T (\bar {r} _ {t - 1} ^ {2} + 6 \bar {r} _ {t - 1} \bar {s} _ {t - 1}).
+$$
+
+Given that $\bar{r}_0 = r_\epsilon \leq 3s_0 < \infty$ , it follows by induction that $\bar{s}_t < \infty$ and $\bar{r}_t < \infty$ for all $t \leq T$ . Finally, by the Lipschitz continuity of $f(\cdot)$ , we obtain
+
+$$
+| f (\bar {\mathbf {x}} _ {\tau_ {T}}) - f (\mathbf {x} _ {\star}) | \leq L \| \bar {\mathbf {x}} _ {\tau_ {T}} - \mathbf {x} _ {\star} \| \leq L \bar {s} _ {T} < \infty .
+$$
+
+Hence, $\mathbb{E}|f(\bar{\mathbf{x}}_{\tau_T}) - f(\mathbf{x}_{\star})| < \infty$ . Therefore, we conclude that the conditional expectation $\mathbb{E}[f(\bar{\mathbf{x}}_{\tau_T}) - f(\mathbf{x}_{\star})|\tilde{\mathcal{F}}_{\delta}]$ exists and is unique.
+
+Now, we provide the proof of Theorem 5.5.
+
+Proof of Theorem 5.5. Recall the definitions
+
+$$
+\tilde {\Omega} _ {\delta} = \left\{\omega \in \Omega_ {0}: \forall t \leq T, \left| \sum_ {k = 0} ^ {t - 1} \tilde {\eta} _ {k} \langle \pmb {\Delta} _ {k}, \mathbf {x} _ {k} - \mathbf {x} _ {\star} \rangle \right| \leq s _ {0} ^ {2} \right\},
+$$
+
+$$
+\hat {\Omega} _ {\delta} = \left\{\omega \in \Omega_ {0}: \forall t \leq T, \left| \sum_ {k = 0} ^ {t - 1} \bar {r} _ {k} \langle \pmb {\Delta} _ {k}, \mathbf {x} _ {k} - \mathbf {x} _ {\star} \rangle \right| < b _ {t} \right\},
+$$
+
+with $\tilde{\mathbb{P}} (\tilde{\Omega}_{\delta})\geq 1 - \delta$ and $\tilde{\mathbb{P}} (\hat{\Omega}_{\delta})\geq 1 - \delta$ . By Proposition 5.4, for any $\omega \in \tilde{\Omega}_{\delta}\cap \hat{\Omega}_{\delta}$ , we have
+
+$$
+f (\bar {\mathbf {x}} _ {t}) - f (\mathbf {x} _ {\star}) \leq 2 0 \cdot \frac {\theta_ {t , \delta} (\bar {r} _ {t} + s _ {0}) (\sqrt {G _ {t - 1} ^ {\prime}} + L d)}{\sum_ {k = 0} ^ {t - 1} \bar {r} _ {k} / \bar {r} _ {t}}, \quad t \leq T.
+$$
+
+Combining this with equation (11), we obtain that for any $\omega \in \tilde{\Omega}_{\delta} \cap \hat{\Omega}_{\delta}$ , the following inequality holds
+
+$$
+f (\bar {\mathbf {x}} _ {\tau_ {T}}) - f (\mathbf {x} _ {\star}) \leq c _ {0} \cdot \frac {\theta_ {\tau_ {T} , \delta} (\bar {r} _ {\tau_ {T}} + s _ {0}) (\sqrt {G _ {\tau_ {T} - 1} ^ {\prime}} + L d)}{T} \log_ {+} \left(\frac {\bar {r} _ {\tau_ {T}}}{r _ {\epsilon}}\right),
+$$
+
+where $c_{0}$ is a constant and $\tau_{T} = \arg \max_{t\leq T}\sum_{k = 0}^{t - 1}\bar{r}_{k} / \bar{r}_{t}$ . Since $\theta_{t,\delta}$ , $\bar{r}_t$ and $G_t^\prime$ are non-decreasing in $t$ and $\tau_T\leq T$ , we derive
+
+$$
+f \left(\bar {\mathbf {x}} _ {\tau_ {T}}\right) - f \left(\mathbf {x} _ {\star}\right) \leq c _ {0} \cdot \frac {\theta_ {T , \delta} \left(\bar {r} _ {T} + s _ {0}\right) \left(\sqrt {G _ {T - 1} ^ {\prime}} + L d\right)}{T} \log_ {+} \left(\frac {\bar {r} _ {T}}{r _ {\epsilon}}\right).
+$$
+
+Moreover, from Proposition 5.3, we know that $\bar{r}_T\leq 3s_0$ for any $\omega \in \tilde{\Omega}_{\delta}\cap \hat{\Omega}_{\delta}$ . Substituting this yields
+
+$$
+f (\bar {\mathbf {x}} _ {\tau_ {T}}) - f (\mathbf {x} _ {\star}) \leq 4 c _ {0} \cdot \frac {\theta_ {T , \delta} s _ {0} \left(\sqrt {G _ {T - 1} ^ {\prime}} + L d\right)}{T} \log_ {+} \left(\frac {3 s _ {0}}{r _ {\epsilon}}\right).
+$$
+
+Recall that $\tilde{\mathcal{F}}_{\delta} = \{A:A\subset \tilde{\Omega}_{\delta}\cap \hat{\Omega}_{\delta}\} \cap \tilde{\mathcal{F}}_0$ is a $\sigma$-field satisfying $\tilde{\mathcal{F}}_{\delta}\subset \tilde{\mathcal{F}}_{0}$ and $\tilde{\Omega}_{\delta}\cap \hat{\Omega}_{\delta}\in \tilde{\mathcal{F}}_{\delta}$ . Then, applying Lemma B.1, for any $\omega \in \tilde{\Omega}_{\delta}\cap \hat{\Omega}_{\delta}$ , we have
+
+$$
+\mathbb {E} \left[ f \left(\bar {\mathbf {x}} _ {\tau_ {T}}\right) - f \left(\mathbf {x} _ {\star}\right) \mid \tilde {\mathcal {F}} _ {\delta} \right] \leq 4 c _ {0} \cdot \frac {\theta_ {T , \delta} s _ {0} \left(\mathbb {E} \left[ \sqrt {G _ {T - 1} ^ {\prime}} \mid \tilde {\mathcal {F}} _ {\delta} \right] + L d\right)}{T} \log_ {+} \left(\frac {3 s _ {0}}{r _ {\epsilon}}\right). \tag {24}
+$$
+
+Using Lemma 4.2 and Lemma B.2, we can bound the conditional expectation of $G_{T - 1}^{\prime}$ as follows
+
+$$
+\begin{array}{l} \mathbb {E} \left[ G _ {T - 1} ^ {\prime} \mid \tilde {\mathcal {F}} _ {\delta} \right] = 8 ^ {4} \theta_ {T, \delta} \log_ {+} ^ {2} (T + 1) \left(\sum_ {k = 0} ^ {T - 1} \mathbb {E} \left[ \left\| \mathbf {g} _ {k} \right\| ^ {2} \mid \tilde {\mathcal {F}} _ {\delta} \right] + 16 \theta_ {T, \delta} d ^ {2} \bar {L} ^ {2}\right) \\ \leq 8 ^ {4} \theta_ {T, \delta} ^ {2} \log_ {+} ^ {2} (T + 1) (c L ^ {2} d T + 16 d ^ {2} \bar {L} ^ {2}). \\ \end{array}
+$$
+
+Since the square root function is concave, applying Jensen's inequality (Lemma B.3) gives
+
+$$
+\mathbb {E} [ \sqrt {G _ {T - 1} ^ {\prime}} \mid \tilde {\mathcal {F}} _ {\delta} ] \leq \sqrt {\mathbb {E} [ G _ {T - 1} ^ {\prime} \mid \tilde {\mathcal {F}} _ {\delta} ]} \leq 8 ^ {2} \theta_ {T, \delta} \log_ {+} (T + 1) \sqrt {c L ^ {2} d T + 16 d ^ {2} \bar {L} ^ {2}} \leq 8 ^ {2} \theta_ {T, \delta} \log_ {+} (T + 1) (\sqrt {c} L \sqrt {d T} + 4 d \bar {L}).
+$$
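+
+The last step is the elementary bound $\sqrt{a + b} \leq \sqrt{a} + \sqrt{b}$ for $a, b \geq 0$ , applied with $a = cL^2dT$ and $b = 16d^2\bar{L}^2$ ; a quick randomized check:
+
+```python
+import math
+import random
+
+random.seed(0)
+for _ in range(1000):
+    a = random.uniform(0.0, 1e6)  # plays the role of c*L^2*d*T
+    b = random.uniform(0.0, 1e6)  # plays the role of 16*d^2*Lbar^2
+    assert math.sqrt(a + b) <= math.sqrt(a) + math.sqrt(b) + 1e-9
+```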
+
+Substituting this back into equation (24), for any $\omega \in \tilde{\Omega}_{\delta} \cap \hat{\Omega}_{\delta}$ , we have
+
+$$
+\mathbb {E} \left[ f \left(\bar {\mathbf {x}} _ {\tau_ {T}}\right) - f \left(\mathbf {x} _ {\star}\right) \mid \tilde {\mathcal {F}} _ {\delta} \right] \leq c _ {1} \left(\frac {d}{T} \cdot (L + \bar {L}) + \frac {\sqrt {d}}{\sqrt {T}} \cdot L\right) \alpha_ {T, \delta} s _ {0} \log_ {+} \left(\frac {s _ {0}}{r _ {\epsilon}}\right), \tag {25}
+$$
+
+where $c_{1}$ is a constant and $\alpha_{T,\delta} = \theta_{T,\delta}\log_{+}(T + 1)$ . Moreover, we have
+
+$$
+\tilde {\mathbb {P}} (\tilde {\Omega} _ {\delta} \cap \hat {\Omega} _ {\delta}) \geq \tilde {\mathbb {P}} (\tilde {\Omega} _ {\delta}) + \tilde {\mathbb {P}} (\hat {\Omega} _ {\delta}) - 1 \geq 1 - 2 \delta .
+$$
+
+# C.4. Proof of Theorem 5.6
+
+Proof. For a given $T \geq 2$ , let $\xi \sim \Xi$ , where $\Xi$ is a Bernoulli distribution defined by
+
+$$
+\mathbb {P} (\boldsymbol {\xi} = 0) = 1 - \frac {1}{T} \quad \text {and} \quad \mathbb {P} (\boldsymbol {\xi} = 1) = \frac {1}{T}.
+$$
+
+We first define the function $f_{1}:\mathbb{R}^{d}\to \mathbb{R}$ as
+
+$$
+f _ {1} (\mathbf {x}) = L \| \mathbf {x} \| _ {1},
+$$
+
+where $\| \cdot \| _1$ is the $\ell_1$ -norm. The SZO for function $f_{1}$ is constructed such that for any $\mathbf{x} \in \mathbb{R}^d$ and $\mathbf{y} \in \mathbb{R}^d$ , the oracle returns evaluations $F_{1}(\mathbf{x};\pmb{\xi})$ and $F_{1}(\mathbf{y};\pmb{\xi})$ satisfying $F_{1}(\mathbf{x};\pmb{\xi}) = L\| \mathbf{x}\|_{1}$ and $F_{1}(\mathbf{y};\pmb{\xi}) = L\| \mathbf{y}\|_{1}$ for all $\pmb{\xi}$ drawn from $\Xi$ . This oracle clearly satisfies Assumption 2.7. Moreover, the function $F_{1}(\cdot ;\pmb{\xi})$ is convex and Lipschitz continuous with $L_{1} = L$ for all $\pmb{\xi}$ .
+
+Next, we define another function $f_{2}:\mathbb{R}^{d}\to \mathbb{R}$ as
+
+$$
+f _ {2} (\mathbf {x}) = L \| \mathbf {x} - \mathbf {u} \| _ {1},
+$$
+
+where $\mathbf{u} = (1 - 1 / T)\mathbf{1}_d\in \mathbb{R}^d$ . The SZO for function $f_{2}$ is constructed such that for any $\mathbf{x}\in \mathbb{R}^d$ and $\mathbf{y}\in \mathbb{R}^d$ , it returns the evaluations $F_{2}(\mathbf{x};\pmb {\xi})$ and $F_{2}(\mathbf{y};\pmb {\xi})$ satisfying
+
+$$
+F _ {2} (\mathbf {x}; 0) = L \| \mathbf {x} \| _ {1}, \quad F _ {2} (\mathbf {x}; 1) = T L \| \mathbf {x} - \mathbf {u} \| _ {1} - (T - 1) L \| \mathbf {x} \| _ {1},
+$$
+
+$$
+F _ {2} (\mathbf {y}; 0) = L \| \mathbf {y} \| _ {1}, \quad F _ {2} (\mathbf {y}; 1) = T L \| \mathbf {y} - \mathbf {u} \| _ {1} - (T - 1) L \| \mathbf {y} \| _ {1}.
+$$
+
+Thus, we have $\mathbb{E}_{\pmb{\xi} \sim \Xi}[F_2(\mathbf{z}; \pmb{\xi})] = f_2(\mathbf{z})$ for all $\mathbf{z} \in \mathbb{R}^d$ , which satisfies Assumption 2.7. Moreover, we can verify that both $F_2(\mathbf{z}; 0)$ and $F_2(\mathbf{z}; 1)$ are convex and Lipschitz continuous on $\mathbb{R}^d$ , with Lipschitz constant $L_2 = 2TL$ .
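+
+The oracle construction above can be sketched in code; the following minimal illustration (with arbitrary choices of $T$, $d$, $L$, and the query point) verifies the unbiasedness claim $\mathbb{E}_{\pmb{\xi} \sim \Xi}[F_2(\mathbf{z}; \pmb{\xi})] = f_2(\mathbf{z})$:
+
+```python
+T, d, L = 10, 3, 1.0
+u = [1.0 - 1.0 / T] * d  # the shifted minimizer u = (1 - 1/T) * 1_d
+
+def l1(z):
+    return sum(abs(v) for v in z)
+
+def sub(a, b):
+    return [x - y for x, y in zip(a, b)]
+
+def f2(z):
+    return L * l1(sub(z, u))
+
+def F2(z, xi):
+    # xi = 0 with probability 1 - 1/T, xi = 1 with probability 1/T
+    if xi == 0:
+        return L * l1(z)
+    return T * L * l1(sub(z, u)) - (T - 1) * L * l1(z)
+
+z = [0.3, 0.9, 0.5]  # an arbitrary query point
+# The (1 - 1/T) * L * ||z||_1 terms cancel, leaving exactly f2(z)
+mean = (1.0 - 1.0 / T) * F2(z, 0) + (1.0 / T) * F2(z, 1)
+assert abs(mean - f2(z)) < 1e-12
+```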
+
+We initialize the algorithm at $\mathbf{x}_0 = \mathbf{1}_d$ . The probability of obtaining identical information from both oracles $F_{1}$ and $F_{2}$ over $T$ oracle calls is given by
+
+$$
+p = \left(1 - \frac {1}{T}\right) ^ {T} \geq \frac {1}{4}
+$$
+
+for all $T \geq 2$ , since $(1 - 1/T)^T$ is increasing in $T$ and equals $1/4$ at $T = 2$ . This implies that, with probability at least $1/4$ , any SZO algorithm receives identical feedback and hence cannot distinguish $f_{1}(\cdot)$ from $f_{2}(\cdot)$ in $T$ SZO calls. Therefore, a near-optimal SZO algorithm $\mathcal{A}$ must achieve the nearly tight function value gaps for both functions with probability at least $1/4$ . In other words, algorithm $\mathcal{A}$ must output $\hat{\mathbf{x}}$ satisfying
+
+$$
+f _ {1} (\hat {\mathbf {x}}) - f _ {1} ^ {\star} \leq \theta_ {1} \left(\frac {\bar {L}}{\underline {{L}}}, \frac {\bar {s}}{\underline {{s}}}, T, d\right) \cdot \frac {\sqrt {d} L _ {1} \| \mathbf {x} _ {0} - \mathbf {x} _ {1 , *} \| _ {2}}{\sqrt {T}} \quad \text {and} \quad f _ {2} (\hat {\mathbf {x}}) - f _ {2} ^ {\star} \leq \theta_ {2} \left(\frac {\bar {L}}{\underline {{L}}}, \frac {\bar {s}}{\underline {{s}}}, T, d\right) \cdot \frac {\sqrt {d} L _ {2} \| \mathbf {x} _ {0} - \mathbf {x} _ {2 , *} \| _ {2}}{\sqrt {T}},
+$$
+
+where $\mathbf{x}_{1,\star} \triangleq \arg \min_{\mathbf{x} \in \mathbb{R}^d} f_1(\mathbf{x}) = \mathbf{0}$ , $\mathbf{x}_{2,\star} \triangleq \arg \min_{\mathbf{x} \in \mathbb{R}^d} f_2(\mathbf{x}) = \mathbf{u}$ , and $\theta_1, \theta_2: \mathbb{R}^4 \to \mathbb{R}$ are two polylogarithmic functions. Substituting $L_1 = L$ and $L_2 = 2TL$ , the corresponding bounds become
+
+$$
+\| \hat {\mathbf {x}} \| _ {1} \leq \theta_ {1} \left(\frac {\bar {L}}{\underline {{L}}}, \frac {\bar {s}}{\underline {{s}}}, T, d\right) \cdot \frac {d}{\sqrt {T}} \quad \text {and} \quad \| \hat {\mathbf {x}} - \mathbf {u} \| _ {1} \leq \theta_ {2} \left(\frac {\bar {L}}{\underline {{L}}}, \frac {\bar {s}}{\underline {{s}}}, T, d\right) \cdot \sqrt {d} \| \mathbf {1} _ {d} - \mathbf {u} \| _ {2} \sqrt {T}.
+$$
+
+Since $\mathbf{u} = (1 - 1 / T)\mathbf{1}_d$ , the point $\hat{\mathbf{x}}$ must satisfy
+
+$$
+\| \hat {\mathbf {x}} \| _ {1} \leq \theta_ {1} \left(\frac {\bar {L}}{\underline {{L}}}, \frac {\bar {s}}{\underline {{s}}}, T, d\right) \cdot \frac {d}{\sqrt {T}} \quad \text {and} \quad \| \hat {\mathbf {x}} \| _ {1} \geq d - \frac {d}{T} - \theta_ {2} \left(\frac {\bar {L}}{\underline {{L}}}, \frac {\bar {s}}{\underline {{s}}}, T, d\right) \cdot \frac {d}{\sqrt {T}}.
+$$
+
+Since the functions $\theta_{1}$ and $\theta_{2}$ are polylogarithmic, there exists a sufficiently large $T$ such that
+
+$$
+\theta_ {1} \left(\frac {\bar {L}}{\underline {{L}}}, \frac {\bar {s}}{\underline {{s}}}, T, d\right) \cdot \frac {d}{\sqrt {T}} < d - \frac {d}{T} - \theta_ {2} \left(\frac {\bar {L}}{\underline {{L}}}, \frac {\bar {s}}{\underline {{s}}}, T, d\right) \cdot \frac {d}{\sqrt {T}},
+$$
+
+which leads to a contradiction. Hence, we conclude that an ideal parameter-free stochastic zeroth-order algorithm as described in the theorem is impossible.
\ No newline at end of file
diff --git a/aparameterfreeandnearoptimalzerothorderalgorithmforstochasticconvexoptimization/images.zip b/aparameterfreeandnearoptimalzerothorderalgorithmforstochasticconvexoptimization/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..f2e248e986affce604ec1c609fe4ad20a91bbc92
--- /dev/null
+++ b/aparameterfreeandnearoptimalzerothorderalgorithmforstochasticconvexoptimization/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4f0fd21810938df70efad008922d27a03bb480075ff829cb9f598b13636f4b83
+size 1229792
diff --git a/aparameterfreeandnearoptimalzerothorderalgorithmforstochasticconvexoptimization/layout.json b/aparameterfreeandnearoptimalzerothorderalgorithmforstochasticconvexoptimization/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..bb740656159d607497b879a47474084daf79061e
--- /dev/null
+++ b/aparameterfreeandnearoptimalzerothorderalgorithmforstochasticconvexoptimization/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:eedde6f96972c10d31cb86a409d35de46ecb1afa64651ead1c8df11403db830f
+size 1271479
diff --git a/aparametriccontextualonlinelearningtheoryofbrokerage/b4eef314-e9c4-41ab-b3d9-62b881197e6c_content_list.json b/aparametriccontextualonlinelearningtheoryofbrokerage/b4eef314-e9c4-41ab-b3d9-62b881197e6c_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..766abe5549f67f89de7db317e7dafac8692e0718
--- /dev/null
+++ b/aparametriccontextualonlinelearningtheoryofbrokerage/b4eef314-e9c4-41ab-b3d9-62b881197e6c_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:85074f7b2446f2217c8c1ba64db334ff8a0520f06503cbedcdd8e516425ba959
+size 148629
diff --git a/aparametriccontextualonlinelearningtheoryofbrokerage/b4eef314-e9c4-41ab-b3d9-62b881197e6c_model.json b/aparametriccontextualonlinelearningtheoryofbrokerage/b4eef314-e9c4-41ab-b3d9-62b881197e6c_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..446faf4bfb5c1a025f1c94d3e63fecc6ee6d51b2
--- /dev/null
+++ b/aparametriccontextualonlinelearningtheoryofbrokerage/b4eef314-e9c4-41ab-b3d9-62b881197e6c_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b4fb17cdef33832f2934b7faaa91afff48a20166504c28478994a6976eb6e685
+size 181320
diff --git a/aparametriccontextualonlinelearningtheoryofbrokerage/b4eef314-e9c4-41ab-b3d9-62b881197e6c_origin.pdf b/aparametriccontextualonlinelearningtheoryofbrokerage/b4eef314-e9c4-41ab-b3d9-62b881197e6c_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..3b32057c3d7fc284b796d31204b828166496f736
--- /dev/null
+++ b/aparametriccontextualonlinelearningtheoryofbrokerage/b4eef314-e9c4-41ab-b3d9-62b881197e6c_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:69ef1129147458cb15fa683f19c628c45b22065477d063e70aade830cd65b950
+size 497201
diff --git a/aparametriccontextualonlinelearningtheoryofbrokerage/full.md b/aparametriccontextualonlinelearningtheoryofbrokerage/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..7653b8244044dbe4623446d9daefe3cc02f7e823
--- /dev/null
+++ b/aparametriccontextualonlinelearningtheoryofbrokerage/full.md
@@ -0,0 +1,624 @@
+# A Parametric Contextual Online Learning Theory of Brokerage
+
+François Bachoc1 Tommaso Cesari2 Roberto Colomboni3,4
+
+# Abstract
+
+We study the role of contextual information in the online learning problem of brokerage between traders. In this sequential problem, at each time step, two traders arrive with secret valuations about an asset they wish to trade. The learner (a broker) suggests a trading (or brokerage) price based on contextual data about the asset and the market conditions. Then, the traders reveal their willingness to buy or sell based on whether their valuations are higher or lower than the brokerage price. A trade occurs if one of the two traders decides to buy and the other to sell, i.e., if the broker's proposed price falls between the smallest and the largest of their two valuations. We design algorithms for this problem and prove optimal theoretical regret guarantees under various standard assumptions.
+
+# 1. Introduction
+
+Inspired by a recent stream of literature (Cesa-Bianchi et al., 2021; Azar et al., 2022; Cesa-Bianchi et al., 2024b; 2023; Bolić et al., 2024; Bernasconi et al., 2024; Bachoc et al., 2024), we approach the bilateral trade problem of brokerage between traders through the lens of online learning. When viewed from a regret minimization perspective, bilateral trade has been explored over rounds of seller/buyer interactions with a broker with no prior knowledge of their private valuations. Similarly to Bolić et al. (2024), we focus on the case where traders are willing to either buy or sell (possibly short; see Section 1.1), depending on whether their valuations for the asset being traded are above or below the brokerage price.
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+This setting is especially relevant for over-the-counter (OTC) markets. Serving as alternatives to conventional exchanges, OTC markets operate in a decentralized manner and are a vital part of the global financial system.1 In contrast to centralized exchanges, the lack of strict protocols and regulations delegates to brokers the responsibility of bridging the gap between buyers and sellers, who may not have direct access to one another. In addition to facilitating interactions between parties, brokers leverage their contextual knowledge and market insights to determine appropriate pricing for assets. By examining factors such as supply and demand, market trends, and other asset-specific information, brokers aim to propose prices that reflect the true market value of the asset being traded. This price discovery process is a crucial aspect of a broker's role, as it helps ensure efficient transactions by accounting for the unique circumstances surrounding each asset. Additionally, in many OTC markets, as in our setting, traders choose to either buy or sell depending on the contingent market conditions (Sherstyuk et al., 2020). This behavior is observed across a broad range of asset trades, including stocks, derivatives, art, collectibles, precious metals and minerals, energy commodities like gas and oil, and digital currencies (cryptocurrencies).
+
+We propose a contextual version of the online brokerage problem, that is of significant practical interest given that the broker often has access to meaningful information about the asset being traded and the surrounding market conditions before having to propose a trading price. This information might help the broker to propose more targeted trading prices by inferring (an approximation of) the current market value of the corresponding asset, and ignoring it could be extremely costly in terms of missing trading opportunities.
+
+Although an extensive amount of work has been done on non-contextual bilateral trade problems (including brokerage problems), the existing literature on the more realistic contextual versions of these problems is scarce (see Section 1.3). The main reason for the slower development of contextual results is the higher complexity of these settings and the impossibility of simply adapting non-contextual algorithmic ideas and analyses to their contextual counterparts. We aim to fill this gap in the online learning literature on bilateral trade to guide brokers in these contextual scenarios.
+
+# 1.1. Setting
+
+In the following, the elements of any Euclidean space are treated as column vectors and, for any real numbers $x, y$ , we denote their minimum by $x \wedge y$ and their maximum by $x \vee y$ .
+
+Online protocol. We study the following problem.
+
+At each time $t\in \mathbb{N}$
+
+- Two traders arrive with private valuations $V_{t}, W_{t} \in [0,1]$ about an asset they want to trade.
+- The broker observes a context $c_t \in [0,1]^d$ and proposes a trading price $P_t \in [0,1]$ .
+- The two bits $\mathbb{I}\{P_t\leq V_t\}$ and $\mathbb{I}\{P_t\leq W_t\}$ (i.e., the willingness of each trader to buy or sell) are revealed to the broker.
+- If the price $P_{t}$ lies between the lowest valuation $V_{t} \land W_{t}$ and highest valuation $V_{t} \lor W_{t}$ (meaning the trader with the minimum valuation is ready to sell at $P_{t}$ and the trader with the maximum valuation is eager to buy at $P_{t}$ ), the asset is bought by the trader with the highest valuation from the trader with the lowest valuation at the brokerage price $P_{t}$ .
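+
+One round of this protocol can be sketched as follows (the valuations and price are hypothetical, for illustration only):
+
+```python
+V_t, W_t = 0.62, 0.41            # traders' private valuations
+P_t = 0.50                       # broker's proposed price
+bits = (P_t <= V_t, P_t <= W_t)  # the only feedback the broker observes
+trade = min(V_t, W_t) <= P_t <= max(V_t, W_t)
+print(bits, trade)  # (True, False) True
+assert trade and bits == (True, False)
+```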
+
+Market value. At any time $t \in \mathbb{N}$ , the context $c_t$ is related to the traders' valuations $V_t, W_t$ via the hidden market value of the asset: a number $m_t \in [0,1]$ that satisfies the two assumptions below, which are assumed to hold throughout the whole paper.
+
+The first assumption (Assumption 1.1) states that an unknown linear relation exists between the unknown market value $m_{t}$ and the corresponding context $c_{t}$ the broker observes before proposing a trading price.
+
+Assumption 1.1 (Market values and contexts). There exists $\phi \in [0,1]^d$ , unknown to the broker, such that, for each $t \in \mathbb{N}$ it holds that $m_t = c_t^\top \phi$ .
+
+The second assumption accounts for variability due to personal preferences or individual needs of the traders by modeling traders' valuations as zero-mean perturbations of the market values.
+
+Assumption 1.2 (Market values and valuations). There exists an independent sequence of random variables
+
+$\xi_1, \zeta_1, \xi_2, \zeta_2, \ldots$ such that, for each $t \in \mathbb{N}$ , it holds that $\mathbb{E}[\xi_t] = 0 = \mathbb{E}[\zeta_t]$ and $V_t = m_t + \xi_t$ and $W_t = m_t + \zeta_t$ .3
+
+Contexts. We model the sequence of contexts $c_{1}, c_{2}, \ldots$ as a deterministic $[0, 1]^{d}$ -valued sequence (possibly generated by an adversarial environment with knowledge of the broker's algorithm) that is initially unknown but sequentially discovered by the broker. As a consequence, note that the sequence of market values $m_{1}, m_{2}, \ldots$ can change arbitrarily (and even adversarially) from one time step to the next.4
+
+Gain from trade and Regret. Consistently with the existing bilateral trade literature, the reward associated with each interaction is the sum of the net utilities of the traders, known as gain from trade. Formally, for any $p, v, w \in [0,1]$ , the utility of a price $p$ when the valuations of the traders are $v$ and $w$ is
+
+$$
+\begin{array}{l} \mathrm {g} (p, v, w) := (\underbrace {v \vee w - p} _ {\text {buyer's net gain}} + \underbrace {p - v \wedge w} _ {\text {seller's net gain}}) \mathbb {I} \{\underbrace {v \wedge w \leq p \leq v \vee w} _ {\text {a trade happens}} \} \\ = (v \vee w - v \wedge w) \mathbb {I} \{v \wedge w \leq p \leq v \vee w \}. \\ \end{array}
+$$
+
+The aim of the learner is to minimize the regret with respect to the best not-necessarily-linear function of the contexts, defined, for any time horizon $T \in \mathbb{N}$ , as
+
+$$
+R _ {T} := \sup _ {p ^ {\star}: [ 0, 1 ] ^ {d} \rightarrow [ 0, 1 ]} \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \left(\operatorname {G F T} _ {t} \left(p ^ {\star} \left(c _ {t}\right)\right) - \operatorname {G F T} _ {t} \left(P _ {t}\right)\right)\right],
+$$
+
+where we let $\mathrm{GFT}_t(p) \coloneqq \mathrm{g}(p, V_t, W_t)$ for all $p \in [0,1]$ , and the expectation is taken with respect to the randomness in $(\xi_t, \zeta_t)_{t \in \mathbb{N}}$ and, possibly, the internal randomization used to choose the trading prices $(P_t)_{t \in \mathbb{N}}$ .
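+
+The two expressions for $\mathrm{g}(p, v, w)$ agree because the buyer's and seller's net gains telescope to the spread $v \vee w - v \wedge w$ ; a quick check of this identity:
+
+```python
+import random
+
+def g(p, v, w):
+    lo, hi = min(v, w), max(v, w)
+    if lo <= p <= hi:
+        return (hi - p) + (p - lo)  # buyer's net gain + seller's net gain
+    return 0.0
+
+random.seed(1)
+for _ in range(1000):
+    p, v, w = random.random(), random.random(), random.random()
+    lo, hi = min(v, w), max(v, w)
+    expected = (hi - lo) if lo <= p <= hi else 0.0
+    assert abs(g(p, v, w) - expected) < 1e-12
+```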
+
+# 1.2. Challenges and contributions
+
+Under the assumption that the traders' valuations are unknown linear functions of $d$ -dimensional contexts perturbed by zero-mean noise with time-variable densities bounded by
+
+|  | Bounded density | General |
+| --- | --- | --- |
+| 2-bit feedback | $\sqrt{LdT}$ | $T$ |
+| Full feedback | $Ld\ln T$ | $T$ |
+
+Table 1. Summary of our results.
+
+some $L$ , we make the following contributions (see Table 1 for a summary).
+
+1. We prove a key structural result (Lemma 2.1) with two crucial consequences. First, Lemma 2.1 shows that posting the (unknown) market value as the trading price would maximize the expected gain from trade. $^6$ Second, it proves that the loss paid by posting a suboptimal price is at most quadratic in the distance from the market value.
+2. In our problem, the prices we post directly affect the two bits of information we retrieve (2-bit feedback). We note that this information is so scarce that it is not even enough to reconstruct bandit feedback. We solve this challenging exploration-exploitation dilemma by proposing an algorithm (Algorithm 1) that decides to either explore or exploit adaptively, based on the amount of contextual information gathered so far, and prove its optimality by showing a $\sqrt{LdT\ln T}$ regret upper bound (Theorem 3.1) and a matching (up to a $\sqrt{\ln T}$ factor) $\sqrt{LdT}$ lower bound (Theorem 3.2).
+3. To compare and contrast the impact of our realistic 2-bit feedback in online contextual brokerage, we investigate the rates that could be achieved if the traders' valuations were revealed at the end of any interaction (full feedback); for this problem, we prove that the optimal achievable rate is exponentially faster: of order $Ld\ln T$ , by proving matching regret upper and lower bounds (Theorems 4.1 and 4.2).
+4. Finally, we investigate the necessity of the bounded density assumption: by lifting this assumption, we show that the problem becomes unlearnable (Theorem 5.2), even under full feedback.
+
+We stress that, in all our results, the dependence on all relevant parameters is tight. In contrast, as we discuss in Section 1.3, most related works on bilateral trade obtain (at best) a matching dependence in the time horizon only.
+
+6 This implies, in particular, that in our contextual setting where market prices are functions of contexts, the benchmark in the regret definition is the total expected reward of the best arbitrary sequence of prices, a benchmark that would be unattainable in similar problems (like standard adversarial bandits). This is one of the many differences between contextual and non-contextual settings.
+
+# 1.3. Related Works
+
+Our work extends the recent research on online learning algorithms for bilateral trade, in particular, Cesa-Bianchi et al. (2021); Azar et al. (2022); Cesa-Bianchi et al. (2024b; 2023; 2024c); Bernasconi et al. (2024), for non-contextual problems where sellers and buyers have definite roles. The stochastic case, where sellers' and buyers' valuations are i.i.d. across time, is studied in Cesa-Bianchi et al. (2021; 2024b). They obtain a $\sqrt{T}$ regret rate in the full-feedback setting. For the two-bit feedback case, they prove a linear worst-case lower bound, but it turns out that a tight regret rate of $T^{2/3}$ is possible, by assuming independence and uniformly bounded density for the sellers' and buyers' valuations. The adversarial setting is the topic of several works. In the worst case, it is unlearnable, as shown in Cesa-Bianchi et al. (2021; 2024b). Nevertheless, more favorable results exist under various relaxations. Cesa-Bianchi et al. (2023; 2024c) consider the adversarial case where the adversary is forced to be smooth, i.e., the sellers' and buyers' valuation distributions are allowed to change adversarially over time, but these distributions admit uniformly bounded densities. In the full-feedback case, they prove a tight $\sqrt{T}$ regret rate. In the two-bit feedback case, while the problem is still unlearnable, they allow the learner to use weakly budget-balanced mechanisms, yielding a surprisingly sharp $T^{3/4}$ regret rate. We remark that in all the two-bit feedback upper bounds requiring a bounded density assumption discussed above, there are no corresponding lower bounds with a sharp dependence on the density bound. Azar et al. (2022) consider the $\alpha$ -regret objective, weaker than the regret. In the full-feedback case, they prove a tight 2-regret rate of $\sqrt{T}$ . 
+In the two-bit feedback case, while learning is impossible in general, they allow the learner to use weakly budget-balanced mechanisms, enabling them to recover a 2-regret of order $T^{3/4}$ . No matching lower bound is provided. Bernasconi et al. (2024) further relax the notion of weak budget-balance by proposing the notion of global budget-balance. Under global budget-balance, they provide a tight regret rate of $\sqrt{T}$ in the full-feedback case, and a regret rate of $T^{3/4}$ in the two-bit feedback case, without a matching lower bound.
+
+Gaucher et al. (2025) investigate a noisy linear contextual version of the bilateral trade problem, obtaining a tight regret bound (up to logarithmic factors) in the time horizon of order $T^{2/3}$ under 2-bit feedback, with a mismatching dependence on the dimension and on the bounded density parameter in the lower bound. Even though their algorithm can be adapted to our setting (via the reduction that sets the seller's valuation as the minimum and the buyer's as the maximum of the traders' valuations), their regret guarantees (which would anyway be worse than our $\sqrt{T}$ ) are lost because they require that, conditioned on the context, the seller is independent of the buyer; this fails in the reduction because, in general, the minimum of two random variables is not independent of the maximum of the same two random variables.
+
+The brokerage problem in online learning was introduced by Bolić et al. (2024) in a simpler i.i.d. and non-contextual setting. There, the authors study the non-contextual version of our trading problem with flexible sellers' and buyers' roles, with the further assumption that the sellers' and buyers' valuations form an i.i.d. sequence. Under the $M$ -bounded density assumption, they obtain tight $M \ln T$ and $\sqrt{MT}$ regret rates in the full-feedback and two-bit feedback settings, respectively. If the bounded density assumption is removed, they show that the learning rate degrades to $\sqrt{T}$ in the full-feedback case and the problem turns out to be unlearnable in the two-bit feedback case. We remark that, interestingly, under the bounded density assumption, we are able to achieve the same regret rates in the contextual version of this problem without requiring that traders share the same valuation distribution, while, without the bounded density assumption, the contextual problem is unlearnable even under full feedback.
+
+The non-contextual brokerage problem has also been recently studied with a different reward function aiming at maximizing the total volume of trades (Cesari & Colomboni, 2025).
+
+Our linear assumption appears commonly in the literature on digital markets, particularly in problems like pricing and auctions. In Cohen et al. (2016; 2020), the authors first address a deterministic setting, then a noisy one with known noise distribution, where they obtain a regret rate of order $T^{2/3}$ without presenting a lower bound. The deterministic case has also been investigated in Lobel et al. (2017; 2018); Leme & Schneider (2018; 2022); Liu et al. (2021).
+
+The case of noisy linear functions has been studied in Xu & Wang (2021); Badanidiyuru et al. (2023); Fan et al. (2024); Luo et al. (2024); Chen & Gallego (2021); Javanmard & Nazerzadeh (2019); Bu et al. (2022); Shah et al. (2019) with guarantees limited to parametric or semi-parametric noise settings, while the recent work of Tullii et al. (2024) has given the first near-tight $T^{2/3}$ analysis of the non-parametric noise case.
+
+The only work addressing contextual brokerage is Bachoc et al. (2025), which considers a non-parametric variant of our setting and derives tight (albeit considerably slower) regret bounds. Notably, their Theorem 5, combined with our Theorem 3.1 and Theorem 4.1, yields 2-regret guarantees against oracle policies that know the traders' valuations before setting prices.
+
+Another rich related field, explored in its many variants (Hanna et al., 2023; Slivkins et al., 2023; Leme et al., 2022; Foster et al., 2021; 2019; Zhou et al., 2019; Kirschner & Krause, 2019; Metevier et al., 2019; Foster & Krishnamurthy, 2018; Kannan et al., 2018; Oh & Iyengar, 2019; Hu et al., 2020; Neu & Olkhovskaya, 2020; Wei et al., 2020; Krishnamurthy et al., 2020; Luo et al., 2018; Krishnamurthy et al., 2021), is contextual linear bandits. In its standard form, at the beginning of each round, an action set is revealed to the learner, and the assumption is that the reward (which equals the feedback) is a linear function of the action selected from the action set. Instead, in our setting, the market price is a linear function of the context, while the rewards are linked to the price the learner posts by the non-linear gain from trade function. Moreover, in contrast to contextual bandits, in our 2-bit feedback model, the feedback differs from, and is not sufficient to compute, the reward of the action the learner selects at every round. For these reasons, existing theoretical results from contextual linear bandits do not directly apply to our problem. Nevertheless, techniques from contextual linear bandits are relevant to our problem, for instance, the use of the elliptical potential lemma (proof of Theorem 3.1).
+
+Before the online learning contributions above, a substantial body of literature addressed game-theoretic and best-approximation aspects of bilateral trade. We refer in particular to the landmark work of Myerson and Satterthwaite (Myerson & Satterthwaite, 1983), as well as Colini-Baldeschi et al. (2016; 2017); Blumrosen & Mizrahi (2016); Brustle et al. (2017); Colini-Baldeschi et al. (2020); Babaioff et al. (2020); Dütting et al. (2021); Deng et al. (2022); Kang et al. (2022); Archbold et al. (2023). We also refer to Cesa-Bianchi et al. (2024b) for an analysis of the references above.
+
+Finally, we point out that Amin et al. (2013); Golrezaei et al. (2019) address pricing problems related to brokerage and bilateral trade, and account for strategic aspects in this context.
+
+# 2. Structural and Technical Results
+
+We begin by presenting a structural result whose economic interpretation is as follows: even if the broker does not know the traders' valuation distributions, if these valuations can be modeled as zero-mean noisy perturbations with bounded densities of some market value, then the best price to post to maximize the expected gain from trade is precisely the (unknown and time-varying) market value, and the cost of posting a suboptimal price is at most quadratic in the distance from the market value.
+
+In particular, this generalizes a similar result appearing in Bolić et al. (2024), which can be applied only under the further assumption that, at any time step, the traders' valuations have the exact same distribution. We argue that this assumption might be overly strong and fail to capture real-life behavior: traders might have private preferences or contingent needs that are not known by the broker; they could be more or less volatile, have differently skewed opinions, have valuations with arbitrarily different tail behavior, etc. Instead, we merely assume that, at any time step, traders' valuations are, on average, equal to the market price (which is how market prices are essentially determined in real life) but allow for arbitrarily different (hidden and time-varying) distributions. This relaxation comes at the expense of a more subtle proof. The mathematical reason for the added difficulty is that, under the same-distribution assumption, many of the terms appearing in the proof simplify due to symmetries, while, in our case, a different approach is needed to recover all the properties required for our result (which we are able to obtain without introducing any new assumptions).
+
+This structural result is the key to unraveling the intricacies of the noisy contextual setting, and it is what ultimately allows us to obtain tight regret guarantees in all settings, distinguishing ours from similar contextual bilateral trade and pricing works.
+
+Lemma 2.1. Suppose that $V$ and $W$ are two $[0,1]$ -valued independent random variables, with possibly different densities bounded by some constant $L \geq 1$ , and such that $\mathbb{E}[V] = \mathbb{E}[W] =: m$ . Then, for each $p \in [0,1]$ , it holds that
+
+$$
+0 \leq \mathbb {E} \left[ \mathrm {g} (m, V, W) - \mathrm {g} (p, V, W) \right] \leq L | m - p | ^ {2}.
+$$
+
+Due to space constraints, we defer the technical proof of this lemma to Appendix A.1.
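Although the proof is deferred, the statement of Lemma 2.1 can be sanity-checked numerically. The sketch below is ours, not from the paper: it takes $V, W$ i.i.d. uniform on $[0,1]$ (so $L = 1$ and $m = 1/2$), assumes the gain-from-trade form $\mathrm{g}(p, v, w) = (\max(v,w) - \min(v,w))\,\mathbb{I}\{\min(v,w) \leq p \leq \max(v,w)\}$ (consistent with the arithmetic of Example 5.1), and checks both inequalities on a quadrature grid.

```python
import numpy as np

def g(p, v, w):
    """Gain from trade: a trade happens iff the price p lies between the two
    valuations, and the welfare generated is their distance (cf. Example 5.1)."""
    lo, hi = np.minimum(v, w), np.maximum(v, w)
    return (hi - lo) * ((lo <= p) & (p <= hi))

# Midpoint-rule quadrature of E[g(p, V, W)] for V, W i.i.d. Uniform[0, 1],
# a case with density bound L = 1 and market value m = E[V] = E[W] = 1/2.
n = 400
u = (np.arange(n) + 0.5) / n
V, W = np.meshgrid(u, u)

def expected_gft(p):
    return g(p, V, W).mean()

m, L = 0.5, 1.0
for p in np.linspace(0.0, 1.0, 11):
    gap = expected_gft(m) - expected_gft(p)
    # Lemma 2.1: 0 <= E[g(m,V,W)] - E[g(p,V,W)] <= L * |m - p|^2
    assert -1e-9 <= gap <= L * (m - p) ** 2 + 1e-2
```

In this uniform special case one can check analytically that $\mathbb{E}[\mathrm{g}(p, V, W)] = p(1 - p)$, so the loss of posting $p$ is exactly $(p - 1/2)^2$ and the quadratic bound of Lemma 2.1 is attained with equality.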
+
+As an immediate corollary of Lemma 2.1, we obtain the following important result that upper bounds the regret in terms of the sum of the squared distances between the prices the algorithm posts and the actual market values.
+
+Corollary 2.2. Consider the contextual brokerage problem introduced in Section 1.1. If the valuations admit densities bounded by a constant $L \geq 1$ , then, for any time horizon $T \in \mathbb{N}$ , we have
+
+$$
+\begin{array}{l} R _ {T} = \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \left(\mathrm {G F T} _ {t} \left(c _ {t} ^ {\top} \phi\right) - \mathrm {G F T} _ {t} (P _ {t})\right) \right] \\ \leq \sum_ {t = 1} ^ {T} 1 \wedge \left(L \mathbb {E} \left[ | P _ {t} - c _ {t} ^ {\top} \phi | ^ {2} \right]\right). \\ \end{array}
+$$
+
+Proof. Given that for each $t \in \mathbb{N}$ and each $p \in [0,1]$ it holds that $\mathrm{GFT}_t(p) \in [0,1]$ , we have $\sup_{p \in [0,1]} \mathbb{E}\big[\mathrm{GFT}_t(p) - \mathrm{GFT}_t(P_t)\big] \leq 1$ , and hence, recalling that $m_t = c_t^\top \phi$ and
+
+that $\mathbb{E}[V_t] = m_t = \mathbb{E}[W_t]$ , we also have, for each $T \in \mathbb{N}$
+
+$$
+\begin{array}{l} R _ {T} = \sup _ {p ^ {\star}} \sum_ {t = 1} ^ {T} 1 \wedge \left(\mathbb {E} \left[ \mathrm {g} \left(p ^ {\star} \left(c _ {t}\right), V _ {t}, W _ {t}\right) \right] - \mathbb {E} \left[ \mathrm {g} \left(P _ {t}, V _ {t}, W _ {t}\right) \right]\right) \\ \stackrel {(\circ)} {=} \sum_ {t = 1} ^ {T} 1 \wedge \left(\mathbb {E} \left[ \mathrm {g} \left(c _ {t} ^ {\top} \phi , V _ {t}, W _ {t}\right) \right] - \mathbb {E} \left[ \mathrm {g} \left(P _ {t}, V _ {t}, W _ {t}\right) \right]\right) \\ \stackrel {(*)} {=} \sum_ {t = 1} ^ {T} 1 \wedge \mathbb {E} \left[ \Big [ \mathbb {E} \left[ \mathrm {g} \left(c _ {t} ^ {\top} \phi , V _ {t}, W _ {t}\right) - \mathrm {g} (p, V _ {t}, W _ {t}) \right] \Big ] _ {p = P _ {t}} \right] \\ \stackrel {(\circ)} {\leq} \sum_ {t = 1} ^ {T} 1 \wedge \left(L \mathbb {E} \left[ \left| P _ {t} - c _ {t} ^ {\top} \phi \right| ^ {2} \right]\right), \\ \end{array}
+$$
+
+where the supremum in the first equality is over all functions $p^{\star} \colon [0,1]^{d} \to [0,1]$ , $(\circ)$ is a direct consequence of Lemma 2.1, and $(*)$ follows from the Freezing Lemma (Cesari & Colomboni, 2021, Lemma 8).
+
+We conclude this section by presenting the following technical lemma, which will be used in the analyses of our Algorithms 1 and 2 to control the behavior of the estimators we employ. Its proof is deferred to Appendix A.2.
+
+We write, for any $l \in \mathbb{N}$ , $\mathbf{1}_l$ for the $l$ -dimensional identity matrix. Also, for any positive definite matrix $A \in \mathbb{R}^{l \times l}$ , we define $\| \cdot \|_A : \mathbb{R}^l \to [0, \infty)$ , $v \mapsto \sqrt{v^\top A v}$ .
+
+Lemma 2.3. Let $s, l \in \mathbb{N}$ . Let $Z_{1}, \ldots, Z_{s}$ be an independent sequence of $[0,1]$ -valued random variables. Let $a_{1}, \ldots, a_{s} \in [0,1]^{l}$ . Let $\psi \in [0,1]^{l}$ . Suppose that, for each $r \in [s]$ it holds that $\mathbb{E}[Z_r] = a_r^\top \psi$ . Define $f_{s} := [a_{1} | \cdots | a_{s}]$ . Define $H_{s} := [Z_{1} | \cdots | Z_{s}]$ . Define $\hat{\psi}_{s} := (f_{s} f_{s}^\top + l^{-1} \mathbf{1}_{l})^{-1} f_{s} H_{s}^\top$ . Then, if $a \in [0,1]^{l}$ , we have that
+
+$$
+\mathbb {E} \left[ \left| a ^ {\top} \hat {\psi} _ {s} - a ^ {\top} \psi \right| ^ {2} \right] \leq \left\| \sqrt {2} a \right\| _ {\left(\sum_ {r = 1} ^ {s} a _ {r} a _ {r} ^ {\top} + l ^ {- 1} \mathbf {1} _ {l}\right) ^ {- 1}} ^ {2}.
+$$
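To make the estimator of Lemma 2.3 concrete, here is a small seeded Monte Carlo sketch of ours (the dimensions, the Bernoulli observations, and the scaling of $\psi$ are arbitrary choices of this sketch): it forms $\hat{\psi}_s = (f_s f_s^\top + l^{-1}\mathbf{1}_l)^{-1} f_s H_s^\top$ from observations with means $a_r^\top \psi$ and checks that the empirical mean-squared prediction error stays below the stated bound.

```python
import numpy as np

rng = np.random.default_rng(0)
l, s, reps = 3, 50, 4000
psi = rng.uniform(0, 1 / l, size=l)      # scaled so that a_r^T psi lies in [0, 1]
A = rng.uniform(0, 1, size=(l, s))       # columns a_1, ..., a_s in [0,1]^l (= f_s)
a = rng.uniform(0, 1, size=l)            # direction in which we predict
G = A @ A.T + np.eye(l) / l              # f_s f_s^T + l^{-1} 1_l
rhs = 2 * a @ np.linalg.solve(G, a)      # ||sqrt(2) a||^2 in the G^{-1} norm

errs = np.empty(reps)
for i in range(reps):
    Z = rng.binomial(1, A.T @ psi).astype(float)  # E[Z_r] = a_r^T psi
    psi_hat = np.linalg.solve(G, A @ Z)           # the estimator of Lemma 2.3
    errs[i] = (a @ psi_hat - a @ psi) ** 2

assert errs.mean() <= rhs                # empirical check of the lemma's bound
```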
+
+# 3. Learning in Contextual Brokerage
+
+In this section, we introduce an algorithm (Algorithm 1) for the contextual brokerage problem for which we prove regret guarantees of order $\widetilde{\mathcal{O}}\big(\sqrt{LdT}\big)$ . The key feature of the algorithm's design is a deterministic rule that decides to either explore or exploit based on the amount of information gathered along the various context directions (see the definition of $b_{t}$ on Line 4). When the algorithm explores, it posts a price drawn uniformly in $[0,1]$ to obtain an unbiased estimate of the current market value. When it exploits, it posts the scalar product of the context and the current estimate of the unknown weight vector $\phi$ built using the information retrieved during exploration rounds.
+
+Theorem 3.1. If the learner runs Algorithm 1 and the traders' valuations admit a density bounded by $L \geq 1$ , then, for any time horizon $T$ such that $LT \geq 2d\ln \bigl(1 + 2d(T - 1)\bigr)$ , it holds that $R_{T} \leq 1 + 6\sqrt{LdT\ln T}$ .
+
+# Algorithm 1
+
+1: Post $P_{1}$ uniformly at random in $[0,1]$ , and observe $D_{1} := \mathbb{I}\{P_{1} \leq V_{1}\}$
+2: Let $b_{1} \coloneqq 1$ , let $x_{1} \coloneqq [c_{1}]$ , let $Y_{1} \coloneqq [D_{1}]$ and compute $\hat{\phi}_{1} \coloneqq (x_{1}x_{1}^{\top} + d^{-1}\mathbf{1}_{d})^{-1}x_{1}Y_{1}^{\top}$
+3: for time $t = 2, 3, \ldots$ do
+
+4: Observe context $c_t$ and define $b_t \coloneqq \mathbb{I}\left\{\left\|\sqrt{2}c_t\right\|_{(x_{t-1}x_{t-1}^{\top} + d^{-1}\mathbf{1}_d)^{-1}} > \sqrt{\frac{2d\ln(1 + 2d(T - 1))}{LT}}\right\}$
+5: if $b_{t} = 1$ then
+6: Post $P_{t}$ uniformly at random in $[0,1]$ , and observe $D_{t} := \mathbb{I}\{P_{t} \leq V_{t}\}$
+7: Let $x_{t} \coloneqq [x_{t - 1} \mid c_{t}]$ , let $Y_{t} \coloneqq [Y_{t - 1} \mid D_{t}]$ , and compute $\hat{\phi}_{t} \coloneqq (x_{t}x_{t}^{\top} + d^{-1}\mathbf{1}_{d})^{-1}x_{t}Y_{t}^{\top}$
+8: else
+9: Post $P_{t}\coloneqq c_{t}^{\top}\hat{\phi}_{t - 1}$ and let $x_{t}\coloneqq x_{t - 1}$ , $Y_{t}\coloneqq Y_{t - 1}$ , and $\hat{\phi}_t\coloneqq \hat{\phi}_{t - 1}$
+10: end if
+11: end for
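The loop below is our illustrative re-implementation of Algorithm 1 on one synthetic instance; the weight vector, context range, and noise law are assumptions of this sketch, chosen so that valuations stay in $[0,1]$ with density bound $L = 2$. Exploration rounds feed a regularized least-squares estimate of $\phi$, and the rule of Line 4 eventually routes most rounds to exploitation.

```python
import numpy as np

rng = np.random.default_rng(1)
d, T, L = 2, 6000, 2.0
phi = np.array([0.35, 0.35])                   # unknown weights (this sketch's choice)
thresh = np.sqrt(2 * d * np.log(1 + 2 * d * (T - 1)) / (L * T))

XXt = np.zeros((d, d))                         # x_t x_t^T over exploration rounds
XY = np.zeros(d)                               # x_t Y_t^T over exploration rounds
phi_hat = np.zeros(d)
n_explore = 0
for t in range(T):
    c = rng.uniform(0.5, 0.95, size=d)         # context c_t
    m = c @ phi                                # market value, stays inside [0.35, 0.67]
    A_inv = np.linalg.inv(XXt + np.eye(d) / d)
    if t == 0 or np.sqrt(2 * c @ A_inv @ c) > thresh:   # the rule defining b_t
        P = rng.uniform(0.0, 1.0)              # exploration: uniform price
        V = m + rng.uniform(-0.25, 0.25)       # zero-mean noise with density = L = 2
        D = float(P <= V)                      # one bit of feedback; E[D] = c^T phi
        XXt += np.outer(c, c)
        XY += c * D
        phi_hat = np.linalg.solve(XXt + np.eye(d) / d, XY)
        n_explore += 1
    else:
        P = c @ phi_hat                        # exploitation: post the estimate

assert 0 < n_explore < T                       # the rule both explores and exploits
assert np.max(np.abs(phi_hat - phi)) < 0.25    # the estimate has converged reasonably
```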
+
+Proof. Without loss of generality we assume that $T \geq 2$ . Note that for any $t \in \mathbb{N}$ , if $b_{t} = 1$ , then
+
+$$
+\begin{array}{l} \mathbb {E} \left[ D _ {t} \right] = \mathbb {P} \left[ P _ {t} \leq V _ {t} \right] \\ = \int_ {0} ^ {1} \mathbb {P} [ u \leq V _ {t} ] \mathrm {d} u = \mathbb {E} [ V _ {t} ] = \mathbb {E} \left[ c _ {t} ^ {\top} \phi + \xi_ {t} \right] = c _ {t} ^ {\top} \phi . \tag {1} \\ \end{array}
+$$
+
+Now, fix $t \geq 2$. Let $s \coloneqq \sum_{i=1}^{t-1} b_i$ be the total number of exploration steps done before time step $t$. Define recursively time steps $\tau(1), \ldots, \tau(s)$ as follows: let $\tau(1) = 1$ and, for all $n \in [s-1]$, define $\tau(n+1) \coloneqq \min \{i \in [t-1] \mid i \geq \tau(n) + 1, b_i = 1\}$. Now, for each $n \in [s]$, define $Z_n \coloneqq D_{\tau(n)}$. Notice that $Z_1, \ldots, Z_s$ are well defined because for each $n \in [s]$ we have that $b_{\tau(n)} = 1$. Define $l \coloneqq d$. For each $n \in [s]$, define $a_n \coloneqq c_{\tau(n)}$. Let $\psi \coloneqq \phi$ and $a \coloneqq c_t$. Notice that, by Equation (1), for each $j \in [s]$ we have $\mathbb{E}[Z_j] = \mathbb{E}[D_{\tau(j)}] = c_{\tau(j)}^{\top} \phi = a_j^{\top} \psi$. Then, with the notation of Lemma 2.3, we can apply Lemma 2.3 to obtain
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \left| c _ {t} ^ {\top} \hat {\phi} _ {t - 1} - c _ {t} ^ {\top} \phi \right| ^ {2} \right] = \mathbb {E} \left[ \left| a ^ {\top} \hat {\psi} _ {s} - a ^ {\top} \psi \right| ^ {2} \right] \\ \leq \left\| \sqrt {2} a \right\| _ {\left(\sum_ {j = 1} ^ {s} a _ {j} a _ {j} ^ {\top} + l ^ {- 1} \mathbf {1} _ {l}\right) ^ {- 1}} ^ {2} \\ = \left\| \sqrt {2} c _ {t} \right\| _ {\left(\sum_ {n = 1} ^ {s} c _ {\tau (n)} c _ {\tau (n)} ^ {\top} + d ^ {- 1} \mathbf {1} _ {d}\right) ^ {- 1}} ^ {2} \\ = \left\| \sqrt {2} c _ {t} \right\| _ {\left(\sum_ {i = 1} ^ {t - 1} b _ {i} c _ {i} c _ {i} ^ {\top} + d ^ {- 1} \mathbf {1} _ {d}\right) ^ {- 1}} ^ {2} = \left\| \sqrt {2} c _ {t} \right\| _ {\left(x _ {t - 1} x _ {t - 1} ^ {\top} + d ^ {- 1} \mathbf {1} _ {d}\right) ^ {- 1}} ^ {2}, \\ \end{array}
+$$
+
+where the last step follows by definition of $x_{t - 1}$ .
+
+Since $t$ was chosen arbitrarily, we have, for each $t \in [T]$ with $t \geq 2$,
+
+$$
+\mathbb {E} \big [ | c _ {t} ^ {\top} \hat {\phi} _ {t - 1} - c _ {t} ^ {\top} \phi | ^ {2} \big ] \leq \left\| \sqrt {2} c _ {t} \right\| _ {\left(x _ {t - 1} x _ {t - 1} ^ {\top} + d ^ {- 1} \mathbf {1} _ {d}\right) ^ {- 1}} ^ {2}.
+$$
+
+Hence, leveraging Corollary 2.2 and the previous inequality, for any $T\in \mathbb{N}$ , we have that
+
+$$
+\begin{array}{l} R _ {T} \leq \sum_ {t = 1} ^ {T} 1 \wedge \left(L \mathbb {E} \left[ | P _ {t} - c _ {t} ^ {\top} \phi | ^ {2} \right]\right) \\ \leq \sum_ {t = 2} ^ {T} (1 - b _ {t}) L \mathbb {E} \left[ | c _ {t} ^ {\top} \hat {\phi} _ {t - 1} - c _ {t} ^ {\top} \phi | ^ {2} \right] + \sum_ {t = 1} ^ {T} b _ {t} \\ \leq L \sum_ {t = 2} ^ {T} (1 - b _ {t}) \| \sqrt {2} c _ {t} \| _ {\left(x _ {t - 1} x _ {t - 1} ^ {\top} + d ^ {- 1} \mathbf {1} _ {d}\right) ^ {- 1}} ^ {2} + \sum_ {t = 1} ^ {T} b _ {t} \\ \leq \sqrt {2 L d T \ln (1 + 2 d (T - 1))} + \sum_ {t = 1} ^ {T} b _ {t}, \\ \end{array}
+$$
+
+where in the last step we used the definition of the $b_{1}, \ldots, b_{T}$ . Now, given that $LT / \left(2d\ln \left(1 + 2d(T - 1)\right)\right) \geq 1$ , using the convention $0/0 = 0$ ,
+
+$$
+\begin{array}{l} \sum_ {t = 2} ^ {T} b _ {t} = \sum_ {t = 2} ^ {T} \frac {b _ {t} \left\| \sqrt {2} c _ {t} \right\| _ {\left(x _ {t - 1} x _ {t - 1} ^ {\top} + d ^ {- 1} \mathbf {1} _ {d}\right) ^ {- 1}} ^ {2}}{\left\| \sqrt {2} c _ {t} \right\| _ {\left(x _ {t - 1} x _ {t - 1} ^ {\top} + d ^ {- 1} \mathbf {1} _ {d}\right) ^ {- 1}} ^ {2}} \\ \leq \sqrt {\frac {L T}{2 d \ln (1 + 2 d (T - 1))}} \\ \cdot \sum_ {t = 2} ^ {T} 1 \wedge b _ {t} \left\| \sqrt {2} c _ {t} \right\| _ {\left(\sum_ {s = 1} ^ {t - 1} b _ {s} c _ {s} c _ {s} ^ {\top} + d ^ {- 1} \mathbf {1} _ {d}\right) ^ {- 1}} \\ \leq \sqrt {\frac {2 L T}{d \ln (1 + 2 d (T - 1))}} \\ \cdot \sum_ {t = 1} ^ {T - 1} 1 \wedge \left\| b _ {t + 1} c _ {t + 1} \right\| _ {\left(\sum_ {s = 1} ^ {t} \left(b _ {s} c _ {s}\right) \left(b _ {s} c _ {s}\right) ^ {\top} + d ^ {- 1} \mathbf {1} _ {d}\right) ^ {- 1}} ^ {2} \\ =: \left(*\right). \\ \end{array}
+$$
+
+Using the elliptical potential lemma (Lattimore & Szepesvári, 2020, Lemma 19.4), we obtain
+
+$$
+\begin{array}{l} \sum_ {t = 1} ^ {T} b _ {t} \leq 1 + (*) \\ \leq 1 + \sqrt {2 L T / \left(d \ln (1 + 2 d (T - 1))\right)} \cdot 2 d \ln (1 + 2 d (T - 1)) \\ = 1 + 2 \sqrt {2 L d T \ln (1 + 2 d (T - 1))}. \\ \end{array}
+$$
+
+Hence, if $d < T / 2$ , this implies that $R_{T} \leq 1 + 3\sqrt{2LdT\ln(1 + 2d(T - 1))} \leq 1 + 6\sqrt{LdT\ln T}$ . On the other hand, if $d \geq T / 2$ , then, since $L \geq 1$ , we obtain, again, $R_{T} \leq T \leq 1 + 6\sqrt{LdT\ln T}$ .
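The elliptical potential lemma invoked above can itself be checked numerically. In one standard form (Lattimore & Szepesvári, 2020, Lemma 19.4), $\sum_{t} \min\bigl(1, \|a_t\|^2_{V_{t-1}^{-1}}\bigr) \leq 2\ln(\det V_T / \det V_0)$ for any sequence $a_1, a_2, \ldots$; this is a deterministic inequality, which the following seeded sketch of ours verifies on random contexts:

```python
import numpy as np

rng = np.random.default_rng(3)
d, T = 4, 500
V = np.eye(d) / d                       # V_0 = d^{-1} 1_d, matching Algorithm 1
logdet0 = np.linalg.slogdet(V)[1]
lhs = 0.0
for _ in range(T):
    a = rng.uniform(0.0, 1.0, size=d)   # a "context" in [0,1]^d
    lhs += min(1.0, a @ np.linalg.solve(V, a))   # min(1, ||a||^2 in the V^{-1} norm)
    V = V + np.outer(a, a)
rhs = 2 * (np.linalg.slogdet(V)[1] - logdet0)
# the sum of clipped squared norms is at most twice the log-determinant ratio
assert lhs <= rhs
```

The inequality follows from the matrix determinant lemma, $\det V_t = \det V_{t-1}\,(1 + \|a_t\|^2_{V_{t-1}^{-1}})$, together with $\min(1, x) \leq 2\ln(1 + x)$ for $x \geq 0$.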
+
+We conclude this section by stating a matching (up to logarithmic terms) worst-case $\Omega\left(\sqrt{LdT}\right)$ regret lower bound for any algorithm, proving the optimality of Algorithm 1.
+
+At a high level, the proof of this result first builds a sequence of contexts by fixing, within each of $d$ blocks of $T / d$ consecutive time steps, a common element of the canonical basis of $\mathbb{R}^d$. Then, in each block, we use a non-contextual lower bound construction leading to a regret of at least $\sqrt{LT / d}$ per block, and conclude the proof by summing over blocks. For more details on the proof of this result, see Appendix C.2.
+
+Theorem 3.2. There exist two numerical constants $a, b > 0$ such that, for any $L \geq 2$ and any time horizon $T \geq \max(4, adL^3, 2d)$ , there exists a sequence of contexts $c_1, \ldots, c_T \in [0,1]^d$ such that, for any algorithm $\alpha$ for the contextual brokerage problem, there exists a vector $\phi \in [0,1]^d$ and two zero-mean independent sequences $(\xi_t)_{t \in [T]}$ and $(\zeta_t)_{t \in [T]}$ independent of each other such that, if we define $V_t := c_t^\top \phi + \xi_t$ and $W_t := c_t^\top \phi + \zeta_t$ , then for each $t \in [T]$ it holds that $c_t^\top \phi \in [0,1]$ , $V_t$ and $W_t$ are $[0,1]$ -valued random variables with density bounded by $L$ , and the regret of $\alpha$ on the sequence of traders' valuations $V_1, W_1, \ldots, V_T, W_T$ satisfies $R_T \geq b\sqrt{LdT}$ .
+
+We remark that the previous lower bound holds even for algorithms that have prior knowledge of the sequence of contexts $c_1, c_2, \ldots$ and that Theorem 3.1 shows that Algorithm 1 matches the optimal $\sqrt{LdT}$ rate (up to a $\sqrt{\ln T}$ factor) even without this a-priori knowledge.
+
+# 4. Full Feedback
+
+In this section, we discuss a "full feedback" version of the contextual brokerage problem to understand how the limited feedback the broker normally has access to impacts the regret. In this version, the valuations $V_{t}$ and $W_{t}$ are revealed at the end of each time step $t$ .
+
+For this problem, we modify Algorithm 1 in two ways to leverage the higher-quality feedback. First, the new algorithm never explores (it does not need to), i.e., $b_{t} \coloneqq 0$ for all $t$ . Second, the algorithm uses different (and better) unbiased estimators of $m_{t}$ in the columns of $Y_{t}$ : the valuations $V_{t}$ and $W_{t}$ . The resulting algorithm is Algorithm 2, for which we prove an optimal logarithmic worst-case regret: an exponential improvement with respect to what is achievable under the classic 2-bit feedback.
+
+# Algorithm 2
+
+1: Observe context $c_1$ , post $P_1 \coloneqq 1/2$ , and receive feedback $V_1, W_1$
+2: Let $x_{1} \coloneqq [c_{1} \mid c_{1}]$ , let $Y_{1} \coloneqq [V_{1} \mid W_{1}]$ , and compute $\hat{\phi}_{1} \coloneqq (x_{1}x_{1}^{\top} + d^{-1}\mathbf{1}_{d})^{-1}x_{1}Y_{1}^{\top}$
+3: for time $t = 2, 3, \ldots$ do
+4: Observe context $c_t$ , post $P_t \coloneqq c_t^\top \hat{\phi}_{t-1}$ , and receive feedback $V_t, W_t$
+5: Let $x_{t} \coloneqq [x_{t - 1} | c_{t} | c_{t}]$ , $Y_{t} \coloneqq [Y_{t - 1} | V_{t} | W_{t}]$ , and compute $\hat{\phi}_{t} \coloneqq (x_{t}x_{t}^{\top} + d^{-1}\mathbf{1}_{d})^{-1}x_{t}Y_{t}^{\top}$
+6: end for
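An illustrative seeded run of Algorithm 2 on a synthetic instance (the weight vector, context law, and noise are our choices for this sketch, kept small enough that the valuations stay in $[0,1]$); note that two columns are appended per round, since both valuations are observed:

```python
import numpy as np

rng = np.random.default_rng(4)
d, T = 3, 2000
phi = np.array([0.2, 0.3, 0.4])          # unknown weights (this sketch's choice)
XXt = np.eye(d) / d                      # x_t x_t^T plus the d^{-1} 1_d regularizer
XY = np.zeros(d)                         # x_t Y_t^T
phi_hat = np.zeros(d)
for t in range(T):
    c = rng.uniform(0.1, 1.0, size=d)    # context; market value m lies in [0.09, 0.9]
    m = c @ phi
    P = 0.5 if t == 0 else c @ phi_hat   # posted price
    V = m + rng.uniform(-0.05, 0.05)     # full feedback: both valuations revealed
    W = m + rng.uniform(-0.05, 0.05)
    XXt += 2 * np.outer(c, c)            # two columns [c_t | c_t] appended per round
    XY += c * (V + W)
    phi_hat = np.linalg.solve(XXt, XY)

assert np.max(np.abs(phi_hat - phi)) < 0.05   # converges without any exploration
```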
+
+Theorem 4.1. Consider the full-feedback version of the contextual brokerage problem. If the learner runs Algorithm 2 and the traders' valuations admit a density bounded by $L \geq 1$ , then, for any time horizon $T \in \mathbb{N}$ , it holds that $R_{T} \leq 1 + 4Ld\ln T$ .
+
+Due to space constraints, we defer the proof of this result to Appendix B.1.
+
+We conclude this section by stating a matching worst-case $\Omega(Ld\ln T)$ regret lower bound for any algorithm in the full-feedback case, proving the optimality of Algorithm 2.
+
+The result is proved similarly to Theorem 3.2: first, in each of $d$ blocks of $T / d$ consecutive time steps, we play a fixed context, namely a common element of the canonical basis of $\mathbb{R}^d$ . Then, in each block, we use a non-contextual lower bound construction leading to a regret of at least $L\ln (T / d)$ for the block, and conclude the proof by summing over blocks. For more details on the proof of this result, see Appendix C.1.
+
+Theorem 4.2. There exist two numerical constants $a, b > 0$ such that, for any $L \geq 2$ and any time horizon $T \geq \max(4, adL^5, 2d)$ , there exists a sequence of contexts $c_1, \ldots, c_T \in [0, 1]^d$ such that, for any algorithm $\alpha$ for the full-feedback version of the contextual brokerage problem, there exists a vector $\phi \in [0, 1]^d$ and two zero-mean independent sequences $(\xi_t)_{t \in [T]}$ and $(\zeta_t)_{t \in [T]}$ independent of each other, such that if we define $V_t \coloneqq c_t^\top \phi + \xi_t$ and $W_t \coloneqq c_t^\top \phi + \zeta_t$ , then for each $t \in [T]$ it holds that $c_t^\top \phi \in [0, 1]$ , $V_t$ and $W_t$ are $[0, 1]$ -valued random variables with density bounded by $L$ , and the regret of $\alpha$ on the sequence of traders' valuations $V_1, W_1, \ldots, V_T, W_T$ satisfies $R_T \geq bLd \ln T$ .
+
+We remark that the previous lower bound holds even for algorithms that have prior knowledge of the sequence of contexts $c_{1}, c_{2}, \ldots$ and that Theorem 4.1 shows that Algorithm 2 matches the optimal $Ld\ln T$ rate even without this a-priori knowledge.
+
+# 5. Beyond Bounded Densities
+
+In this final section, we investigate the general case where the traders' valuations are not assumed to have a bounded density. We begin with the following (perhaps counterintuitive) counterexample showing that, in general, posting the market value can be highly suboptimal if the goal is to maximize the gain from trade.
+
+Example 5.1. Let $V$ and $W$ be two independent uniform random variables on $\{0, \frac{1}{5}, 1\}$ and $m := \mathbb{E}[V] = \mathbb{E}[W] = 2/5$ . Then $\operatorname{argmax}_{p \in [0,1]} \mathbb{E}[\mathrm{g}(p, V, W)] = 1/5 \neq m$ , and $\mathbb{E}[\mathrm{g}(1/5, V, W)] - \mathbb{E}[\mathrm{g}(2/5, V, W)] = 2 \cdot \left(\frac{1}{5} - 0\right) \cdot \frac{1}{9} > 0$ .
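Since the support is finite, the claims of Example 5.1 can be verified by exact enumeration over the nine equally likely outcome pairs; the following is a sketch of ours using exact rational arithmetic:

```python
from fractions import Fraction as F
from itertools import product

support = [F(0), F(1, 5), F(1)]           # V and W are uniform on {0, 1/5, 1}

def g(p, v, w):
    lo, hi = min(v, w), max(v, w)
    return hi - lo if lo <= p <= hi else F(0)

def expected_gft(p):
    # average over the 9 equally likely (V, W) pairs
    return sum(g(p, v, w) for v, w in product(support, repeat=2)) / 9

m = sum(support) / 3                      # m = E[V] = E[W] = 2/5
assert m == F(2, 5)
assert expected_gft(F(1, 5)) - expected_gft(m) == F(2, 45)   # = 2*(1/5)*(1/9) > 0
assert all(expected_gft(p) <= expected_gft(F(1, 5)) for p in support + [m])
```

Intuitively, posting the market value $m = 2/5$ misses the trades between the valuations $0$ and $1/5$, which the lower price $1/5$ captures.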
+
+The phenomenon illustrated by the previous counterexample is the key to proving our final result: the unlearnability, in general, of the brokerage problem (even when full feedback is available). Specifically, we exploit the fact that, in general, the optimal price at time $t$ depends not only on the market value $m_t = c_t^\top \phi$ but also on properties of the adversarial and time-varying distributions of the perturbations $\xi_t$ and $\zeta_t$ , making it impossible to compete against the benchmark in our regret definition.
+
+Theorem 5.2. There exists a sequence of contexts $c_{1}, c_{2}, \dots \in [0,1]^{d}$ and a vector $\phi \in [0,1]^{d}$ such that, for any algorithm $\alpha$ for the full-feedback version of the contextual brokerage problem, there exists an independent sequence of zero-mean random variables $\xi_{1}, \zeta_{1}, \xi_{2}, \zeta_{2}, \ldots$ such that, if the valuations of the traders at time $t$ are $V_{t} = c_{t}^{\top}\phi + \xi_{t}$ and $W_{t} = c_{t}^{\top}\phi + \zeta_{t}$ , then $c_{t}^{\top}\phi \in [0,1]$ , $V_{t}, W_{t}$ are $[0,1]$ -valued random variables, and the regret of $\alpha$ on the sequence of traders' valuations $V_{1}, W_{1}, \ldots, V_{T}, W_{T}$ satisfies $R_{T} \geq \frac{1}{32} T$ .
+
+We remark that the previous unlearnability result holds even for algorithms that have prior knowledge of the sequence of contexts $c_{1}, c_{2}, \ldots$ and, strikingly, of the vector $\phi$ , that is, even for algorithms that know the entire sequence $m_{1}, m_{2}, \ldots$ of market prices in advance!
+
+Proof. Assume that $d \geq 2$ (for the case $d = 1$ , the following proof can be adapted straightforwardly by defining $\phi = 1$ and $c_{t} = 1/2 + \varepsilon_{t}$ , where $(\varepsilon_{t})$ is an arbitrarily small sequence of biases). Let $(a_{t})_{t \in \mathbb{N}}$ be a sequence of distinct elements in $[0,1]$ and, for all $t \in \mathbb{N}$ , let $c_{t} := (a_{t}, 1 - a_{t}, 0,0,\ldots,0)$ . Notice that $(c_{t})_{t \in \mathbb{N}}$ is a sequence of distinct elements in $[0,1]^{d}$ . Define $\phi := (1/2, 1/2, 0,0,\ldots,0)$ . Notice that for each $t \in \mathbb{N}$ it holds that $c_{t}^{\top} \phi = 1/2$ . Let $\varepsilon \in (0,1/16)$ . For any $\theta \in \{0,1\}$ , consider the following probability distribution
+
+$$
+\mu_ {\theta} := \left(\frac {1}{4} + (1 - 2 \theta) \varepsilon\right) \delta_ {- \frac {1}{2}} + \frac {1}{2} \delta_ {f (\theta)} + \left(\frac {1}{4} - (1 - 2 \theta) \varepsilon\right) \delta_ {\frac {1}{2}}
+$$
+
+where $f(\theta) \coloneqq 2(1 - \theta)\varepsilon - 2\theta\varepsilon$ and, for any $a \in \mathbb{R}$ , $\delta_{a}$ is the Dirac delta distribution centered at $a$ . Consider an independent family of random variables $(\xi_{t,\theta}, \zeta_{t,\theta})_{t \in \mathbb{N}, \theta \in \{0,1\}}$ such that, for any $t \in \mathbb{N}$ and any $\theta \in \{0,1\}$ , both $\xi_{t,\theta}$ and $\zeta_{t,\theta}$ are random variables with common distribution $\mu_{\theta}$ . Notice that for each $t \in \mathbb{N}$ and each $\theta \in \{0,1\}$ we have that $\mathbb{E}[\xi_{t,\theta}] = 0 = \mathbb{E}[\zeta_{t,\theta}]$ . Define, for each $t \in \mathbb{N}$ and each $\theta \in \{0,1\}$ , the random variables $V_{t,\theta} \coloneqq c_t^\top \phi + \xi_{t,\theta}$ and $W_{t,\theta} \coloneqq c_t^\top \phi + \zeta_{t,\theta}$ . Notice that these are $[0,1]$ -valued random variables and that $(V_{t,\theta}, W_{t,\theta})_{t \in \mathbb{N}, \theta \in \{0,1\}}$ is an independent family. Now, for each $\theta \in \{0,1\}$ and each $t \in \mathbb{N}$ , let $p \mapsto G_t^\theta(p) \coloneqq \mathrm{g}(p, V_{t,\theta}, W_{t,\theta})$ and
+
+$$
+p ^ {\#} (\theta) \in \operatorname {a r g m a x} _ {p \in [ 0, 1 ]} \mathbb {E} \bigl [ G _ {t} ^ {\theta} (p) \bigr ],
+$$
+
+which does exist because the function $[0,1] \to [0,1], p \mapsto \mathbb{E}\big[G_t^\theta(p)\big]$ is upper semicontinuous (this can be proved, e.g., as in Cesa-Bianchi et al. 2024b, Appendix B) and defined on a compact set. Furthermore, note that the previous definition is independent of $t$ because, for any $\theta \in \{0,1\}$ , the
+
+pairs $(V_{t_1,\theta},W_{t_1,\theta})$ and $(V_{t_2,\theta},W_{t_2,\theta})$ share the same distribution for every $t_1,t_2\in \mathbb{N}$ . Fix a learning algorithm for the full-feedback contextual brokerage problem, fix a time horizon $T\in \mathbb{N}$ , and notice that since the contexts $c_{1},c_{2},\ldots$ are all distinct, it follows that
+
+$$
+\begin{array}{l} \max _ {\theta_ {1}, \dots , \theta_ {T} \in \{0, 1 \} ^ {T}} \sup _ {p ^ {\star}: [ 0, 1 ] ^ {d} \to [ 0, 1 ]} \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \left(G _ {t} ^ {\theta_ {t}} \left(p ^ {\star} \left(c _ {t}\right)\right) - G _ {t} ^ {\theta_ {t}} \left(P _ {t}\right)\right) \right] \\ = \max _ {\theta_ {1}, \dots , \theta_ {T} \in \{0, 1 \} ^ {T}} \sum_ {t = 1} ^ {T} \left(\sup _ {p \in [ 0, 1 ]} \mathbb {E} \left[ G _ {t} ^ {\theta_ {t}} (p) \right] - \mathbb {E} \left[ G _ {t} ^ {\theta_ {t}} (P _ {t}) \right]\right) \\ = \max _ {\theta_ {1}, \dots , \theta_ {T} \in \{0, 1 \} ^ {T}} \sum_ {t = 1} ^ {T} \mathbb {E} \left[ G _ {t} ^ {\theta_ {t}} \left(p ^ {\#} \left(\theta_ {t}\right)\right) - G _ {t} ^ {\theta_ {t}} \left(P _ {t}\right) \right] =: (\#). \\ \end{array}
+$$
+
+Now, consider an i.i.d. family of Bernoulli random variables $(\Theta_{t})_{t\in \mathbb{N}}$ with parameter $1 / 2$ , independent of the whole family $(V_{t,\theta},W_{t,\theta})_{t\in \mathbb{N},\theta \in \{0,1\}}$ . We have that
+
+$$
+(\#) \geq \sum_ {t = 1} ^ {T} \left(\mathbb {E} \left[ G _ {t} ^ {\Theta_ {t}} \left(p ^ {\#} \left(\Theta_ {t}\right)\right) \right] - \mathbb {E} \left[ G _ {t} ^ {\Theta_ {t}} \left(P _ {t}\right) \right]\right) =: (\$).
+$$
+
+Now, for each $t \in [T]$ , we see that
+
+$$
+\begin{array}{l} \mathbb {E} \left[ G _ {t} ^ {\Theta_ {t}} \left(p ^ {\#} \left(\Theta_ {t}\right)\right) \right] = \mathbb {E} \left[ \mathbb {E} \left[ G _ {t} ^ {\Theta_ {t}} \left(p ^ {\#} \left(\Theta_ {t}\right)\right) \mid \Theta_ {t} \right] \right] \\ = \mathbb {E} \left[ \max _ {p \in [ 0, 1 ]} \mathbb {E} \left[ G _ {t} ^ {\Theta_ {t}} (p) \mid \Theta_ {t} \right] \right] \\ \end{array}
+$$
+
+and long but straightforward computations show that, for each $p\in [0,1]$ , it holds that the conditional expectation $\mathbb{E}\Big[G_t^{\Theta_t}(p)\mid \Theta_t\Big]$ is equal to
+
+$$
+\left\{ \begin{array}{l l} \frac {1}{4} + \varepsilon (1 - 2 \Theta_ {t}) & \text {i f} 0 \leq p < \frac {1}{2} - 2 \Theta_ {t} \varepsilon + 2 (1 - \Theta_ {t}) \varepsilon , \\ \frac {3}{8} + 2 \varepsilon^ {2} & \text {i f} p = \frac {1}{2} - 2 \Theta_ {t} \varepsilon + 2 (1 - \Theta_ {t}) \varepsilon , \\ \frac {1}{4} - \varepsilon (1 - 2 \Theta_ {t}) & \text {i f} \frac {1}{2} - 2 \Theta_ {t} \varepsilon + 2 (1 - \Theta_ {t}) \varepsilon < p \leq 1 , \end{array} \right.
+$$
+
+from which it follows that
+
+$$
+\max _ {p \in [ 0, 1 ]} \mathbb {E} \left[ G _ {t} ^ {\Theta_ {t}} (p) \mid \Theta_ {t} \right] = \frac {3}{8} + 2 \varepsilon^ {2}.
+$$
+
+On the other hand, for each $t \in [T]$ , leveraging the freezing lemma (Cesari & Colomboni, 2021, Lemma 8), we have that
+
+$$
+\begin{array}{l} \mathbb {E} \left[ G _ {t} ^ {\Theta_ {t}} (P _ {t}) \right] = \mathbb {E} \left[ \mathbb {E} \left[ G _ {t} ^ {\Theta_ {t}} (P _ {t}) \mid P _ {t} \right] \right] = \mathbb {E} \left[ \left[ \mathbb {E} \left[ G _ {t} ^ {\Theta_ {t}} (p) \right] \right] _ {p = P _ {t}} \right] \\ = \mathbb {E} \left[ \left[ \frac {1}{2} \mathbb {E} \big [ G _ {t} ^ {\Theta_ {t}} (p) \mid \Theta_ {t} = 0 \big ] + \frac {1}{2} \mathbb {E} \big [ G _ {t} ^ {\Theta_ {t}} (p) \mid \Theta_ {t} = 1 \big ] \right] _ {p = P _ {t}} \right] \\ \end{array}
+$$
+
and again, tedious but straightforward computations show that, for each $p\in [0,1]$ , it holds that
+
+$$
+\begin{array}{l} \frac {1}{2} \mathbb {E} \left[ G _ {t} ^ {\Theta_ {t}} (p) \mid \Theta_ {t} = 0 \right] + \frac {1}{2} \mathbb {E} \left[ G _ {t} ^ {\Theta_ {t}} (p) \mid \Theta_ {t} = 1 \right] \\ = \frac {1}{4} \left(\mathbb {I} \left\{p < \frac {1}{2} - 2 \varepsilon \right\} + \mathbb {I} \left\{\frac {1}{2} + 2 \varepsilon < p \right\}\right) \\ + \left(\frac {5}{1 6} + \frac {\varepsilon}{2} + \varepsilon^ {2}\right) \left(\mathbb {I} \left\{p = \frac {1}{2} - 2 \varepsilon \right\} + \mathbb {I} \left\{p = \frac {1}{2} + 2 \varepsilon \right\}\right) \\ + \left(\frac {1}{4} + \varepsilon\right) \mathbb {I} \left\{\frac {1}{2} - 2 \varepsilon < p < \frac {1}{2} + 2 \varepsilon \right\} \\ \leq \frac {5}{1 6} + \frac {\varepsilon}{2} + \varepsilon^ {2}. \\ \end{array}
+$$
+
We conclude that $(\$) \geq \frac{T}{16} + \left(\varepsilon^{2} - \frac{\varepsilon}{2}\right)T \geq \frac{T}{32}$, from which it follows that there exist $\theta_{1},\ldots ,\theta_{T}\in \{0,1\}$ such that
+
+$$
+\sup _ {p ^ {\star}: [ 0, 1 ] ^ {d} \rightarrow [ 0, 1 ]} \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \left(G _ {t} ^ {\theta_ {t}} \left(p ^ {\star} \left(c _ {t}\right)\right) - G _ {t} ^ {\theta_ {t}} \left(P _ {t}\right)\right)\right] \geq \frac {T}{3 2}.
+$$
+
+# 6. Conclusions
+
Motivated by the real-life desideratum of exploiting prior information on the traded assets, we investigated the noisy linear contextual online learning problem of brokerage between traders without predetermined seller/buyer roles. We provided a complete picture of this problem, with regret bounds that are tight (up to logarithmic factors) in all relevant parameters, in each of the proposed settings.
+
Our work stands on the classic interpretation of the market value of an asset as the average opinion of the market participants. An alternative perspective, which we leave open for future research, is one in which assets have an "inherent value" and traders' valuations arise as systematic biases or strategic deviations around this quantity. In this case, the inherent value would not be the average of the traders' valuations, and new techniques would be required to analyze this setting.
+
+Finally, we highlight that there are many other online learning problems in digital markets whose contextual version is still open, such as market making (Cesa-Bianchi et al., 2025), first-price auctions with unknown costs (Cesa-Bianchi et al., 2024a), trading-volume maximization (Cesari & Colomboni, 2025), and optimal taxation (Cesa-Bianchi et al., 2025).
+
+# Acknowledgements
+
+The work of FB was supported by the Project GAP (ANR-21-CE40-0007) of the French National Research Agency (ANR) and by the Chair UQPhysAI of the Toulouse ANITI AI Cluster. TC gratefully acknowledges the support of the University of Ottawa through grant GR002837 (Start-Up Funds) and that of the Natural Sciences and Engineering Research Council of Canada (NSERC) through grants RGPIN-2023-03688 (Discovery Grants Program) and DGECR2023-00208 (Discovery Grants Program, DGECR - Discovery Launch Supplement). RC is partially supported
+
+by the MUR PRIN grant 2022EKNE5K (Learning in Markets and Society), the FAIR (Future Artificial Intelligence Research) project, funded by the NextGenerationEU program within the PNRR-PE-AI scheme, the EU Horizon CL4-2022-HUMAN-02 research and innovation action under grant agreement 101120237, project ELIAS (European Lighthouse of AI for Sustainability).
+
+# Impact Statement
+
+This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
+
+# References
+
+Amin, K., Rostamizadeh, A., and Syed, U. Learning prices for repeated auctions with strategic buyers. Advances in neural information processing systems, 26, 2013.
+Archbold, T., de Keijzer, B., and Ventre, C. Non-obvious manipulability for single-parameter agents and bilateral trade. In Proceedings of the 2023 International Conference on Autonomous Agents and Multiagent Systems, pp. 2107-2115, USA, 2023. International Foundation for Autonomous Agents and Multiagent Systems.
+Azar, Y., Fiat, A., and Fusco, F. An alpha-regret analysis of adversarial bilateral trade. Advances in Neural Information Processing Systems, 35:1685-1697, 2022.
+Babaioff, M., Goldner, K., and Gonczarowski, Y. A. Bulow-Klemperer-style results for welfare maximization in two-sided markets. In Proceedings of the Thirty-First Annual ACM-SIAM Symposium on Discrete Algorithms, SODA '20, pp. 2452-2471, USA, 2020. Society for Industrial and Applied Mathematics.
+Bachoc, F., Cesa-Bianchi, N., Cesari, T., and Colomboni, R. Fair online bilateral trade. In Globerson, A., Mackey, L., Belgrave, D., Fan, A., Paquet, U., Tomczak, J., and Zhang, C. (eds.), Advances in Neural Information Processing Systems, volume 37, pp. 37241-37263. Curran Associates, Inc., 2024.
+Bachoc, F., Cesari, T., and Colomboni, R. A tight regret analysis of non-parametric repeated contextual brokerage. In Li, Y., Mandt, S., Agrawal, S., and Khan, E. (eds.), Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, Proceedings of Machine Learning Research. PMLR, 03-05 May 2025.
+Badanidiyuru, A., Feng, Z., and Guruganesh, G. Learning to bid in contextual first price auctions. In Proceedings of the ACM Web Conference 2023, pp. 3489-3497, 2023.
+
Bass, R. F. Real analysis for graduate students. CreateSpace Independent Publishing, USA, 2013.
+Bernasconi, M., Castiglioni, M., Celli, A., and Fusco, F. No-regret learning in bilateral trade via global budget balance. In Proceedings of the 56th Annual ACM Symposium on Theory of Computing, 2024.
Black, F. and Scholes, M. The pricing of options and corporate liabilities. Journal of Political Economy, 81(3): 637-654, 1973.
+Blumrosen, L. and Mizrahi, Y. Approximating gains-from-trade in bilateral trading. In Web and Internet Economics, WINE'16, volume 10123 of Lecture Notes in Computer Science, pp. 400-413, Germany, 2016. Springer.
+Bolic, N., Cesari, T., and Colomboni, R. An online learning theory of brokerage. In Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems, AAMAS '24, pp. 216-224, Richland, SC, 2024. International Foundation for Autonomous Agents and Multiagent Systems. ISBN 9798400704864.
+Brustle, J., Cai, Y., Wu, F., and Zhao, M. Approximating gains from trade in two-sided markets via simple mechanisms. In Proceedings of the 2017 ACM Conference on Economics and Computation, EC '17, pp. 589-590, New York, NY, USA, 2017. Association for Computing Machinery. ISBN 9781450345279.
+Bu, J., Simchi-Levi, D., and Wang, C. Context-based dynamic pricing with partially linear demand model. Advances in Neural Information Processing Systems, 35: 23780-23791, 2022.
+Cesa-Bianchi, N., Cesari, T. R., Colomboni, R., Fusco, F., and Leonardi, S. A regret analysis of bilateral trade. In Proceedings of the 22nd ACM Conference on Economics and Computation, pp. 289-309, USA, 2021. Association for Computing Machinery.
+Cesa-Bianchi, N., Cesari, T. R., Colomboni, R., Fusco, F., and Leonardi, S. Repeated bilateral trade against a smoothed adversary. In The Thirty Sixth Annual Conference on Learning Theory, pp. 1095-1130, USA, 2023. PMLR.
+Cesa-Bianchi, N., Cesari, T., Colomboni, R., Fusco, F., and Leonardi, S. The role of transparency in repeated first-price auctions with unknown valuations. In Proceedings of the 56th Annual ACM Symposium on Theory of Computing, STOC 2024, pp. 225-236. Association for Computing Machinery, 2024a. ISBN 9798400703836.
+Cesa-Bianchi, N., Cesari, T., Colomboni, R., Fusco, F., and Leonardi, S. Bilateral trade: A regret minimization perspective. Mathematics of Operations Research, 49(1): 171-203, 2024b.
+
+Cesa-Bianchi, N., Cesari, T., Colomboni, R., Fusco, F., and Leonardi, S. Regret analysis of bilateral trade with a smoothed adversary. Journal of Machine Learning Research, 25(234):1-36, 2024c.
+Cesa-Bianchi, N., Cesari, T., Colomboni, R., Foscari, L., and Pathak, V. Market making without regret. In Proceedings of Thirty Eighth Conference on Learning Theory, volume 291 of Proceedings of Machine Learning Research, pp. 799-837. PMLR, 2025.
+Cesari, T. and Colomboni, R. An online learning theory of trading-volume maximization. In The Thirteenth International Conference on Learning Representations, 2025.
+Cesari, T. R. and Colomboni, R. A nearest neighbor characterization of Lebesgue points in metric measure spaces. Mathematical Statistics and Learning, 3(1):71-112, 2021.
Cesa-Bianchi, N., Colomboni, R., and Kasy, M. Adaptive maximization of social welfare. Econometrica, 93(3): 1073-1104, 2025.
+Chen, N. and Gallego, G. Nonparametric pricing analytics with customer covariates. Operations Research, 69(3): 974-984, 2021.
+Cohen, M. C., Lobel, I., and Paes Leme, R. Feature-based dynamic pricing. In Proceedings of the 2016 ACM Conference on Economics and Computation, pp. 817-817, 2016.
+Cohen, M. C., Lobel, I., and Paes Leme, R. Feature-based dynamic pricing. Management Science, 66(11):4921-4943, 2020.
+Colini-Baldeschi, R., de Keijzer, B., Leonardi, S., and Turchetta, S. Approximately efficient double auctions with strong budget balance. In ACM-SIAM Symposium on Discrete Algorithms, SODA'16, pp. 1424-1443, USA, 2016. SIAM.
+Colini-Baldeschi, R., Goldberg, P. W., de Keijzer, B., Leonardi, S., and Turchetta, S. Fixed price approximability of the optimal gain from trade. In Web and Internet Economics, WINE'17, volume 10660 of Lecture Notes in Computer Science, pp. 146-160, Germany, 2017. Springer.
+Colini-Baldeschi, R., Goldberg, P. W., Keijzer, B. d., Leonardi, S., Roughgarden, T., and Turchetta, S. Approximately efficient two-sided combinatorial auctions. ACM Transactions on Economics and Computation (TEAC), 8 (1):1-29, 2020.
+Deng, Y., Mao, J., Sivan, B., and Wang, K. Approximately efficient bilateral trade. In STOC, pp. 718-721, Italy, 2022. ACM.
+
+Dütting, P., Fusco, F., Lazos, P., Leonardi, S., and Reifenhäuser, R. Efficient two-sided markets with limited information. In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing, STOC 2021, pp. 1452-1465, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450380539.
+Fan, J., Guo, Y., and Yu, M. Policy optimization using semiparametric models for dynamic pricing. Journal of the American Statistical Association, 119(545):552-564, 2024.
+Foster, D., Rakhlin, A., Simchi-Levi, D., and Xu, Y. Instance-dependent complexity of contextual bandits and reinforcement learning: A disagreement-based perspective. In Conference on Learning Theory, pp. 2059-2059. PMLR, 2021.
+Foster, D. J. and Krishnamurthy, A. Contextual bandits with surrogate losses: Margin bounds and efficient algorithms. Advances in Neural Information Processing Systems, 31, 2018.
+Foster, D. J., Krishnamurthy, A., and Luo, H. Model selection for contextual bandits. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
+Gaucher, S., Bernasconi, M., Castiglioni, M., Celli, A., and Perchet, V. Feature-based online bilateral trade. In The Thirteenth International Conference on Learning Representations, 2025.
+Golrezaei, N., Javanmard, A., and Mirrokni, V. Dynamic incentive-aware learning: Robust pricing in contextual auctions. Advances in Neural Information Processing Systems, 32, 2019.
+Hanna, O. A., Yang, L., and Fragouli, C. Contexts can be cheap: Solving stochastic contextual bandits with linear bandit algorithms. In The Thirty Sixth Annual Conference on Learning Theory, pp. 1791-1821. PMLR, 2023.
+Hu, Y., Kallus, N., and Mao, X. Smooth contextual bandits: Bridging the parametric and non-differentiable regret regimes. In Conference on Learning Theory, pp. 2007-2010. PMLR, 2020.
+Javanmard, A. and Nazerzadeh, H. Dynamic pricing in high-dimensions. Journal of Machine Learning Research, 20(9):1-49, 2019.
+Kang, Z. Y., Pernice, F., and Vondrak, J. Fixed-price approximations in bilateral trade. In Proceedings of the 2022 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 2964-2985, Alexandria, VA, USA, 2022. SIAM, Society for Industrial and Applied Mathematics.
+
+Kannan, S., Morgenstern, J. H., Roth, A., Waggoner, B., and Wu, Z. S. A smoothed analysis of the greedy algorithm for the linear contextual bandit problem. Advances in Neural Information Processing Systems, 31, 2018.
+Kirschner, J. and Krause, A. Stochastic bandits with context distributions. Advances in Neural Information Processing Systems, 32, 2019.
+Krishnamurthy, A., Langford, J., Slivkins, A., and Zhang, C. Contextual bandits with continuous actions: Smoothing, zooming, and adapting. Journal of Machine Learning Research, 21(137):1-45, 2020.
+Krishnamurthy, A., Lykouris, T., Podimata, C., and Schapire, R. Contextual search in the presence of irrational agents. In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing, pp. 910-918, 2021.
+Lattimore, T. and Szepesvári, C. Bandit algorithms. Cambridge University Press, 2020.
+Leme, R. P. and Schneider, J. Contextual search via intrinsic volumes. In 2018 IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS), pp. 268-282. IEEE, 2018.
+Leme, R. P. and Schneider, J. Contextual search via intrinsic volumes. SIAM Journal on Computing, 51(4):1096-1125, 2022.
+Leme, R. P., Podimata, C., and Schneider, J. Corruption-robust contextual search through density updates. In Conference on Learning Theory, pp. 3504–3505. PMLR, 2022.
+Liu, A., Leme, R. P., and Schneider, J. Optimal contextual pricing and extensions. In Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 1059-1078. SIAM, 2021.
+Lobel, I., Paes Leme, R., and Vladu, A. Multidimensional binary search for contextual decision-making. In Proceedings of the 2017 ACM Conference on Economics and Computation, pp. 585-585, 2017.
+Lobel, I., Leme, R. P., and Vladu, A. Multidimensional binary search for contextual decision-making. Operations Research, 66(5):1346-1361, 2018.
+Luo, H., Wei, C.-Y., Agarwal, A., and Langford, J. Efficient contextual bandits in non-stationary worlds. In Conference On Learning Theory, pp. 1739-1776. PMLR, 2018.
+Luo, Y., Sun, W. W., and Liu, Y. Distribution-free contextual dynamic pricing. Mathematics of Operations Research, 49(1):599-618, 2024.
+
+Metevier, B., Giguere, S., Brockman, S., Kobren, A., Brun, Y., Brunskill, E., and Thomas, P. S. Offline contextual bandits with high probability fairness guarantees. Advances in Neural Information Processing Systems, 32, 2019.
+Myerson, R. B. and Satterthwaite, M. A. Efficient mechanisms for bilateral trading. Journal of Economic Theory, 29(2):265-281, 1983.
+Neu, G. and Olkhovskaya, J. Efficient and robust algorithms for adversarial linear contextual bandits. In Conference on Learning Theory, pp. 3049-3068. PMLR, 2020.
+Oh, M.-h. and Iyengar, G. Thompson sampling for multinomial logit contextual bandits. Advances in Neural Information Processing Systems, 32, 2019.
+Shah, V., Johari, R., and Blanchet, J. Semi-parametric dynamic contextual pricing. Advances in Neural Information Processing Systems, 32, 2019.
+Sherstyuk, K., Phankitnirundorn, K., and Roberts, M. J. Randomized double auctions: gains from trade, trader roles, and price discovery. Experimental Economics, 24 (4):1-40, 2020.
+Slivkins, A., Sankararaman, K. A., and Foster, D. J. Contextual bandits with packing and covering constraints: A modular Lagrangian approach via regression. In The Thirty Sixth Annual Conference on Learning Theory, pp. 4633-4656. PMLR, 2023.
+Tullii, M., Gaucher, S., Merlis, N., and Perchet, V. Improved algorithms for contextual dynamic pricing. In Globerson, A., Mackey, L., Belgrave, D., Fan, A., Paquet, U., Tomczak, J., and Zhang, C. (eds.), Advances in Neural Information Processing Systems, volume 37, pp. 126088-126117. Curran Associates, Inc., 2024.
+Wei, C.-Y., Luo, H., and Agarwal, A. Taking a hint: How to leverage loss predictors in contextual bandits? In Conference on Learning Theory, pp. 3583-3634. PMLR, 2020.
+Weill, P.-O. The search theory of over-the-counter markets. Annual Review of Economics, 12:747-773, 2020.
Williams, D. Probability with martingales. Cambridge University Press, UK, 1991.
+www.bis.org. OTC derivatives statistics at end-June 2022. Bank for International Settlements, 2022. URL https://www.bis.org/publ/otc_hy2211.pdf.
+Xu, J. and Wang, Y.-X. Logarithmic regret in feature-based dynamic pricing. Advances in Neural Information Processing Systems, 34:13898-13910, 2021.
+
+Zhou, Z., Xu, R., and Blanchet, J. Learning in generalized linear contextual bandits with stochastic delays. Advances in Neural Information Processing Systems, 32, 2019.
+
+# A. Missing Proofs of Structural and Technical Results (Section 2)
+
+In this section, we provide the missing proofs of our structural and technical results.
+
+# A.1. Proof of Lemma 2.1
+
+We denote by $F$ (resp., $G$ ) the cumulative distribution function of $V$ (resp., $W$ ). For each $p \in [0,1]$ , from the Decomposition Lemma in (Cesa-Bianchi et al., 2024b, Lemma 1), it holds that
+
+$$
+\mathbb {E} \left[ (W - V) \mathbb {I} \{V \leq p \leq W \} \right] = F (p) \int_ {p} ^ {1} (1 - G (\lambda)) d \lambda + (1 - G (p)) \int_ {0} ^ {p} F (\lambda) d \lambda ,
+$$
+
+$$
+\mathbb {E} \left[ (V - W) \mathbb {I} \{W \leq p \leq V \} \right] = G (p) \int_ {p} ^ {1} (1 - F (\lambda)) d \lambda + (1 - F (p)) \int_ {0} ^ {p} G (\lambda) d \lambda .
+$$
+
+Hence, for each $p\in [0,1]$
+
+$$
+\begin{array}{l} \mathbb {E} \left[ (W - V) \mathbb {I} \{V \leq p \leq W \} \right] = F (p) \int_ {p} ^ {1} (1 - G (\lambda)) d \lambda + (1 - G (p)) \int_ {0} ^ {p} F (\lambda) d \lambda \\ = F (p) \left(m - \int_ {0} ^ {p} (1 - G (\lambda)) \mathrm {d} \lambda\right) + \int_ {0} ^ {p} F (\lambda) \mathrm {d} \lambda - G (p) \int_ {0} ^ {p} F (\lambda) \mathrm {d} \lambda \\ = \int_ {0} ^ {p} F (\lambda) d \lambda + (m - p) F (p) - p G (p) + G (p) \int_ {0} ^ {p} (1 - F (\lambda)) d \lambda + F (p) \int_ {0} ^ {p} G (\lambda) d \lambda \\ = \int_ {0} ^ {p} (F + G) (\lambda) d \lambda + (m - p) (F + G) (p) - G (p) \left(m - \int_ {0} ^ {p} (1 - F (\lambda)) d \lambda\right) + (F (p) - 1) \int_ {0} ^ {p} G (\lambda) d \lambda \\ = \int_ {0} ^ {p} (F + G) (\lambda) d \lambda + (m - p) (F + G) (p) - \left(G (p) \int_ {p} ^ {1} (1 - F (\lambda)) d \lambda + (1 - F (p)) \int_ {0} ^ {p} G (\lambda) d \lambda\right) \\ = \int_ {0} ^ {p} (F + G) (\lambda) d \lambda + (m - p) (F + G) (p) - \mathbb {E} \left[ (V - W) \mathbb {I} \{W \leq p \leq V \} \right]. \\ \end{array}
+$$
+
+Rearranging, it follows that, for each $p\in [0,1]$
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \mathrm {g} (p, V, W) \right] = \mathbb {E} \left[ (W - V) \mathbb {I} \{V \leq p \leq W \} \right] + \mathbb {E} \left[ (V - W) \mathbb {I} \{W \leq p \leq V \} \right] \\ = \int_ {0} ^ {p} (F + G) (\lambda) d \lambda + (m - p) (F + G) (p). \\ \end{array}
+$$
+
+Hence, for any $p \in [0,1]$ , it holds that
+
+$$
+\mathbb {E} \left[ \mathrm {g} (m, V, W) - \mathrm {g} (p, V, W) \right] = \int_ {p} ^ {m} \left(\left(F + G\right) (\lambda) - \left(F + G\right) (p)\right) \mathrm {d} \lambda \geq 0.
+$$
+
+Finally, since $F$ and $G$ are absolutely continuous with weak derivative bounded by $L$ , by the fundamental theorem of calculus (Bass, 2013, Theorem 14.16) it holds that, for $p \in [0,1]$ ,
+
+$$
+\mathbb {E} \big [ \mathrm {g} (m, V, W) - \mathrm {g} (p, V, W) \big ] = \int_ {p} ^ {m} \int_ {p} ^ {\lambda} \big (F ^ {\prime} + G ^ {\prime} \big) (\vartheta) \mathrm {d} \vartheta \mathrm {d} \lambda \leq 2 L \int_ {p} ^ {m} | \lambda - p | \mathrm {d} \lambda = L | m - p | ^ {2}.
+$$
+
+# A.2. Proof of Lemma 2.3
+
+By the bias-variance decomposition:
+
+$$
+\mathbb {E} \left[ \left| a ^ {\top} \hat {\psi} _ {s} - a ^ {\top} \psi \right| ^ {2} \right] = \left(\underbrace {\mathbb {E} \left[ a ^ {\top} \hat {\psi} _ {s} - a ^ {\top} \psi \right]} _ {\text {b i a s}}\right) ^ {2} + \underbrace {\operatorname {V a r} \left[ a ^ {\top} \hat {\psi} _ {s} \right]} _ {\text {v a r i a n c e}}.
+$$
+
Noting that $\mathbb{E}[H_s] = f_s^\top \psi$ , we have,
+
+$$
+\begin{array}{l} \mathbb {E} \left[ a ^ {\top} \hat {\psi} _ {t} - a ^ {\top} \psi \right] = a ^ {\top} \left(f _ {s} f _ {s} ^ {\top} + l ^ {- 1} \mathbf {1} _ {l}\right) ^ {- 1} f _ {s} f _ {s} ^ {\top} \psi \\ - a ^ {\top} \left(f _ {s} f _ {s} ^ {\top} + l ^ {- 1} \mathbf {1} _ {l}\right) ^ {- 1} \left(f _ {s} f _ {s} ^ {\top} \psi + l ^ {- 1} \psi\right) \\ = - a ^ {\top} \left(f _ {s} f _ {s} ^ {\top} + l ^ {- 1} \mathbf {1} _ {l}\right) ^ {- 1} l ^ {- 1} \psi =: (\circ), \\ \end{array}
+$$
+
and hence, by the Cauchy-Schwarz inequality applied to the scalar product $(\alpha ,\beta)\mapsto \alpha^{\top}(f_{s}f_{s}^{\top} + l^{-1}\mathbf{1}_{l})^{-1}\beta$ , by the fact that $(f_{s}f_{s}^{\top} + l^{-1}\mathbf{1}_{l})^{-1}\preceq (l^{-1}\mathbf{1}_{l})^{-1}$ (where, for any two symmetric matrices $A_{1},A_{2}$ , we say that $A_{1}\preceq A_{2}$ if and only if $A_{2} - A_{1}$ is positive semi-definite), and by the fact that $\| \psi \| _2^2\leq l$ , we can control the bias term as follows
+
+$$
+\begin{array}{l} \left(\mathbb {E} \left[ a ^ {\top} \hat {\psi} _ {s} - a ^ {\top} \psi \right]\right) ^ {2} = (\circ) ^ {2} \\ \leq a ^ {\top} \left(f _ {s} f _ {s} ^ {\top} + l ^ {- 1} \mathbf {1} _ {l}\right) ^ {- 1} a \cdot l ^ {- 1} \psi^ {\top} \left(f _ {s} f _ {s} ^ {\top} + l ^ {- 1} \mathbf {1} _ {l}\right) ^ {- 1} l ^ {- 1} \psi \\ \leq a ^ {\top} \left(f _ {s} f _ {s} ^ {\top} + l ^ {- 1} \mathbf {1} _ {l}\right) ^ {- 1} a \cdot l ^ {- 1} \psi^ {\top} \left(l ^ {- 1} \mathbf {1} _ {l}\right) ^ {- 1} l ^ {- 1} \psi \\ \leq a ^ {\top} \left(f _ {s} f _ {s} ^ {\top} + l ^ {- 1} \mathbf {1} _ {l}\right) ^ {- 1} a. \\ \end{array}
+$$
+
+Letting $\Delta_s$ be the $s \times s$ diagonal matrix with vector of diagonal elements given by $(\operatorname{Var}[Z_1], \ldots, \operatorname{Var}[Z_s])$ , we have
+
+$$
+\mathrm {V a r} \big [ a ^ {\top} \hat {\psi} _ {s} \big ] = a ^ {\top} \big (f _ {s} f _ {s} ^ {\top} + l ^ {- 1} \mathbf {1} _ {l} \big) ^ {- 1} \big (f _ {s} \Delta_ {s} f _ {s} ^ {\top} \big) \big (f _ {s} f _ {s} ^ {\top} + l ^ {- 1} \mathbf {1} _ {l} \big) ^ {- 1} a.
+$$
+
+Now, given that $Z_{1},\ldots ,Z_{s}$ are $[0,1]$ -valued, we have that $\Delta_s$ is diagonal with diagonal elements less than 1, and hence $f_{s}\Delta_{s}f_{s}^{\top}\preceq f_{s}f_{s}^{\top} + l^{-1}\mathbf{1}_{l}$ , which yields a control on the variance term as follows,
+
+$$
+\begin{array}{l} \operatorname {V a r} \left[ a ^ {\top} \hat {\psi} _ {s} \right] \leq a ^ {\top} \left(f _ {s} f _ {s} ^ {\top} + l ^ {- 1} \mathbf {1} _ {l}\right) ^ {- 1} \left(f _ {s} f _ {s} ^ {\top} + l ^ {- 1} \mathbf {1} _ {l}\right) \left(f _ {s} f _ {s} ^ {\top} + l ^ {- 1} \mathbf {1} _ {l}\right) ^ {- 1} a \\ = a ^ {\top} \left(f _ {s} f _ {s} ^ {\top} + l ^ {- 1} \mathbf {1} _ {l}\right) ^ {- 1} a. \\ \end{array}
+$$
+
+Putting everything together, we have
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \left| a ^ {\top} \hat {\psi} _ {s} - a ^ {\top} \psi \right| ^ {2} \right] \leq 2 a ^ {\top} \left(f _ {s} f _ {s} ^ {\top} + l ^ {- 1} \mathbf {1} _ {l}\right) ^ {- 1} a = 2 \left\| a \right\| _ {\left(f _ {s} f _ {s} ^ {\top} + l ^ {- 1} \mathbf {1} _ {l}\right) ^ {- 1}} ^ {2} \\ = 2 \left\| a \right\| ^ {2} (\sum_ {r = 1} ^ {s} a _ {r} a _ {r} ^ {\top} + l ^ {- 1} \mathbf {1} _ {l}) ^ {- 1} \\ = \left\| \sqrt {2} a \right\| _ {\left(\sum_ {r = 1} ^ {s} a _ {r} a _ {r} ^ {\top} + l ^ {- 1} \mathbf {1} _ {l}\right) ^ {- 1}} , \\ \end{array}
+$$
+
+where we recall that for any positive definite matrix $A \in \mathbb{R}^{l \times l}$ and each $u \in \mathbb{R}^l$ , we have defined $\| u \|_A \coloneqq \sqrt{u^\top A u}$ .
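Lemma 2.3's conclusion can be probed numerically with the regularized least-squares estimator $\hat{\psi}_s = (f_s f_s^\top + l^{-1}\mathbf{1}_l)^{-1} f_s H_s$ that the computations above manipulate. The following pure-Python simulation with $l = 2$ is ours, not part of the paper; the concrete numbers are arbitrary choices satisfying the lemma's assumptions ( $a_r \in [0,1]^l$ , $Z_r \in [0,1]$ , $\|\psi\|_2^2 \leq l$ ):

```python
import random

random.seed(7)
l, s = 2, 30
feats = [(random.random() / l, random.random() / l) for _ in range(s)]  # a_r in [0,1]^l
psi = (0.6, 0.8)                       # ||psi||_2^2 = 1 <= l
a = (0.3, 0.9)

# M = f_s f_s^T + l^{-1} 1_l = sum_r a_r a_r^T + l^{-1} 1_l, stored entrywise
m11 = sum(u * u for u, _ in feats) + 1 / l
m12 = sum(u * v for u, v in feats)
m22 = sum(v * v for _, v in feats) + 1 / l
det = m11 * m22 - m12 * m12

def Minv(w1, w2):
    """Apply M^{-1} to the vector (w1, w2) via the 2x2 adjugate."""
    return ((m22 * w1 - m12 * w2) / det, (m11 * w2 - m12 * w1) / det)

ia1, ia2 = Minv(*a)
bound = 2 * (a[0] * ia1 + a[1] * ia2)  # the lemma's bound: 2 a^T M^{-1} a

means = [u * psi[0] + v * psi[1] for u, v in feats]   # E[Z_r] = a_r^T psi in [0,1]
target = a[0] * psi[0] + a[1] * psi[1]                # a^T psi
reps, err2 = 30_000, 0.0
for _ in range(reps):
    Z = [1.0 if random.random() < q else 0.0 for q in means]  # Bernoulli(a_r^T psi)
    f1 = sum(u * z for (u, _), z in zip(feats, Z))            # f_s H_s
    f2 = sum(v * z for (_, v), z in zip(feats, Z))
    h1, h2 = Minv(f1, f2)                                     # psihat_s
    err2 += (a[0] * h1 + a[1] * h2 - target) ** 2
assert err2 / reps <= bound   # E|a^T psihat_s - a^T psi|^2 <= 2 a^T M^{-1} a
```

As expected, the empirical mean squared error sits well below the bound, since the factor 2 (bias plus variance, each bounded by $a^\top M^{-1} a$ ) is loose for this instance.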
+
+# B. Missing Upper Bound Proofs
+
+In this section, we provide all missing proofs of our regret upper bounds.
+
+# B.1. Proof of Theorem 4.1
+
+Fix any $t \geq 2$ . Now, for each $n \in [t-1]$ , define $Z_{2n-1} \coloneqq V_n$ and $Z_{2n} \coloneqq W_n$ . Define $l \coloneqq d$ and $s \coloneqq 2(t-1)$ . For each $n \in [t-1]$ , define $a_{2n-1} \coloneqq c_n \eqqcolon a_{2n}$ . Let $\psi \coloneqq \phi$ and $a \coloneqq c_t$ . Notice that, if $j \in [s]$ is odd, then $\mathbb{E}[Z_j] = \mathbb{E}\left[V_{\frac{j+1}{2}}\right] = c_{\frac{j+1}{2}}^\top \phi = a_j^\top \psi$ , while if $j \in [s]$ is even, then $\mathbb{E}[Z_j] = \mathbb{E}\left[W_{\frac{j}{2}}\right] = c_{\frac{j}{2}}^\top \phi = a_j^\top \psi$ . Hence, we can apply Lemma 2.3 to obtain
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \left| c _ {t} ^ {\top} \hat {\phi} _ {t - 1} - c _ {t} ^ {\top} \phi \right| ^ {2} \right] = \mathbb {E} \left[ \left| a ^ {\top} \hat {\psi} _ {s} - a ^ {\top} \psi \right| ^ {2} \right] \leq \left\| \sqrt {2} a \right\| _ {\left(\sum_ {j = 1} ^ {s} a _ {j} a _ {j} ^ {\top} + l ^ {- 1} \mathbf {1} _ {l}\right) ^ {- 1}} ^ {2} = \left\| \sqrt {2} c _ {t} \right\| _ {\left(2 \sum_ {n = 1} ^ {t - 1} c _ {n} c _ {n} ^ {\top} + d ^ {- 1} \mathbf {1} _ {d}\right) ^ {- 1}} \\ = \left\| \sqrt {2} c _ {t} \right\| ^ {2} \left(\sum_ {n = 1} ^ {t - 1} (\sqrt {2} c _ {n}) (\sqrt {2} c _ {n}) ^ {\intercal} + d ^ {- 1} \mathbf {1} _ {d}\right) ^ {- 1}. \\ \end{array}
+$$
+
+Hence, leveraging Corollary 2.2 and the previous inequality, for any $T \in \mathbb{N}$ , we have that
+
+$$
+\begin{array}{l} R _ {T} \leq \sum_ {t = 1} ^ {T} 1 \wedge \left(L \mathbb {E} \left[ | P _ {t} - c _ {t} ^ {\top} \phi | ^ {2} \right]\right) \leq 1 + \sum_ {t = 2} ^ {T} L \mathbb {E} \left[ | c _ {t} ^ {\top} \hat {\phi} _ {t - 1} - c _ {t} ^ {\top} \phi | ^ {2} \right] \leq 1 + \sum_ {t = 2} ^ {T} \left\| \sqrt {2} c _ {t} \right\| ^ {2} \binom {2} {\left(\sum_ {n = 1} ^ {t - 1} (\sqrt {2} c _ {n}) (\sqrt {2} c _ {n}) ^ {\top} + d ^ {- 1} \mathbf {1} _ {d}\right) ^ {- 1}} \\ \leq 1 + 2 L d \ln \left(\frac {d d ^ {- 1} + 2 d (T - 1)}{d d ^ {- 1}}\right) = 1 + 2 L d \ln \bigl (1 + 2 d (T - 1) \bigr) \leq 1 + 2 L d \ln (2 d T) \\ \end{array}
+$$
+
+where the first inequality of the second line follows from the elliptical potential lemma (Lattimore & Szepesvári, 2020, Lemma 19.4).
+
+If $d < T / 2$ , this implies that $R_{T} \leq 1 + 2Ld\ln (2dT) \leq 1 + 4Ld\ln T$ . If, instead, $d \geq T / 2$ , then, recalling that $L \geq 1$ , we obtain once again that $R_{T} \leq T \leq 1 + 4Ld\ln T$ , concluding the proof.
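The elliptical potential step can also be checked empirically: with $V_0 = d^{-1}\mathbf{1}_d$ and $V_t = V_{t-1} + (\sqrt{2}c_t)(\sqrt{2}c_t)^\top$ , the lemma gives $\sum_t 1 \wedge \|\sqrt{2}c_t\|_{V_{t-1}^{-1}}^2 \leq 2\ln(\det V_T / \det V_0) \leq 2d\ln(1 + 2dT)$ . The following sketch for $d = 2$ with random contexts is ours, not part of the paper (it uses the $2\times 2$ adjugate to avoid linear-algebra dependencies):

```python
import math
import random

random.seed(0)
d, T = 2, 300
a, b, c = 1 / d, 0.0, 1 / d            # V_0 = d^{-1} 1_d stored as [[a, b], [b, c]]
det0 = a * c - b * b
potential = 0.0
for _ in range(T):
    x1 = math.sqrt(2) * random.random()  # x_t = sqrt(2) c_t with c_t in [0,1]^d
    x2 = math.sqrt(2) * random.random()
    det = a * c - b * b
    q = (c * x1 * x1 - 2 * b * x1 * x2 + a * x2 * x2) / det  # x^T V_{t-1}^{-1} x
    potential += min(1.0, q)
    a, b, c = a + x1 * x1, b + x1 * x2, c + x2 * x2          # V_t = V_{t-1} + x x^T
detT = a * c - b * b
# elliptical potential lemma, followed by the trace bound used in the proof above
assert potential <= 2 * math.log(detT / det0) <= 2 * d * math.log(1 + 2 * d * T)
```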
+
+# C. Missing Lower Bound Proofs
+
+In this section, we provide all the missing proofs of our lower bounds, starting from that of the full-feedback setting.
+
+# C.1. Proof of Theorem 4.2
+
+The high-level idea of this proof is to build a reduction to a non-contextual full-feedback lower bound construction (see, e.g., the one appearing in Bolić et al. 2024, Theorem 5).
+
+Without loss of generality, we assume that $d$ divides $T$ . In fact, if we prove the theorem for this case, then, by leveraging that $T \geq 2d$ and $T \geq 4$ , the general case follows from
+
+$$
+R _ {T} \geq b L d \ln \left(\lfloor T / d \rfloor d\right) \geq \frac {b}{2} L d \ln T.
+$$
+
+Let $n \coloneqq T / d$ . Let $e_1, \ldots, e_d$ be the canonical basis of $\mathbb{R}^d$ . Define, for all $i \in [d]$ and $j \in [n]$ , the context $c_{j + (i - 1)n} \coloneqq e_i$ . We assume that these contexts are known to the learner in advance and, therefore, we can restrict the proof to deterministic algorithms without any loss of generality.
+
+Let $L \geq 2$ , $J_{L} \coloneqq \left[\frac{1}{2} - \frac{1}{14L}, \frac{1}{2} + \frac{1}{14L}\right]$ , $f \coloneqq \mathbb{I}_{[0, \frac{3}{7}]} + L\mathbb{I}_{J_{L}} + \mathbb{I}_{[\frac{4}{7}, 1]}$ , and, for any $\varepsilon \in [-1, 1]$ , $g_{\varepsilon} \coloneqq -\varepsilon\mathbb{I}_{[\frac{1}{7}, \frac{3}{14}]} + \varepsilon\mathbb{I}_{(\frac{3}{14}, \frac{2}{7}]}$ and $f_{\varepsilon} \coloneqq f + g_{\varepsilon}$ . For any $\varepsilon \in [-1, 1]$ , note that $0 \leq f_{\varepsilon} \leq L$ and $\int_0^1 f_{\varepsilon}(x) \, \mathrm{d}x = 1$ , hence $f_{\varepsilon}$ is a valid density on $[0, 1]$ bounded by $L$ . We will denote the corresponding probability measure by $\nu_{\varepsilon}$ , set $\bar{\nu}_{\varepsilon} \coloneqq \int_{[0,1]} x \, \mathrm{d}\nu_{\varepsilon}(x)$ , and notice that direct computations show that $\bar{\nu}_{\varepsilon} = \frac{1}{2} + \frac{\varepsilon}{196}$ . Consider for each $q \in [0, 1]$ , an i.i.d. sequence $(B_{q,t})_{t \in \mathbb{N}}$ of Bernoulli random variables of parameter $q$ , an i.i.d. sequence $(\tilde{B}_t)_{t \in \mathbb{N}}$ of Bernoulli random variables of parameter $1/7$ , an i.i.d. sequence $(U_t)_{t \in \mathbb{N}}$ of uniform random variables on $[0, 1]$ , and uniform random variables $E_1, \ldots, E_d$ on $[-\bar{\varepsilon}_L, \bar{\varepsilon}_L]$ , where $\bar{\varepsilon}_L \coloneqq \frac{7}{L}$ , such that $((B_{q,t})_{t \in \mathbb{N}}, q \in [0,1], (\tilde{B}_t)_{t \in \mathbb{N}}, (U_t)_{t \in \mathbb{N}}, E_1, \ldots, E_d)$ is an independent family. Let $\varphi: [0, 1] \to [0, 1]$ be such that, if $U$ is a uniform random variable on $[0, 1]$ , then the distribution of $\varphi(U)$ has density $\frac{7}{6} \cdot f \cdot \mathbb{I}_{[0,1] \setminus [1/7,2/7]}$ (which exists by the Skorokhod representation theorem (Williams, 1991, Section 17.3)). For each $\varepsilon \in [-1, 1]$ and $t \in \mathbb{N}$ , define
+
+$$
+G _ {\varepsilon , t} := \left(\frac {2 + U _ {t}}{1 4} \left(1 - B _ {\frac {1 + \varepsilon}{2}, t}\right) + \frac {3 + U _ {t}}{1 4} B _ {\frac {1 + \varepsilon}{2}, t}\right) \tilde {B} _ {t} + \varphi \left(U _ {t}\right) \left(1 - \tilde {B} _ {t}\right), \tag {2}
+$$
+
+$V_{\varepsilon ,t}\coloneqq G_{\varepsilon ,2t - 1}$, $W_{\varepsilon ,t}\coloneqq G_{\varepsilon ,2t}$, $\xi_{\varepsilon ,t}\coloneqq V_{\varepsilon ,t} - \bar{\nu}_{\varepsilon}$, and $\zeta_{\varepsilon ,t}\coloneqq W_{\varepsilon ,t} - \bar{\nu}_{\varepsilon}$. In the following, if $a_1,\ldots ,a_d$ is a sequence of elements, we will use the notation $a_{1:d}$ as a shorthand for $(a_{1},\dots ,a_{d})$. For each $\varepsilon_1,\dots ,\varepsilon_d\in [-1,1]$, each $i\in [d]$, and each $j\in [n]$, define the random variables $\xi_{j + (i - 1)n}^{\varepsilon_{1:d}}\coloneqq \xi_{\varepsilon_i,j + (i - 1)n}$ and $\zeta_{j + (i - 1)n}^{\varepsilon_{1:d}}\coloneqq \zeta_{\varepsilon_i,j + (i - 1)n}$. The family $\left(\xi_t^{\varepsilon_{1:d}},\zeta_t^{\varepsilon_{1:d}}\right)_{t\in [T],\varepsilon_{1:d}\in [-1,1]^d}$ is an independent family, independent of $(E_1,\ldots ,E_d)$, and for each $i\in [d]$ and each $j\in [n]$ it can be checked that the two random variables $\xi_{j + (i - 1)n}^{\varepsilon_{1:d}}$ and $\zeta_{j + (i - 1)n}^{\varepsilon_{1:d}}$ are zero mean, with $\xi_{j + (i - 1)n}^{\varepsilon_{1:d}} + \bar{\nu}_{\varepsilon_i}$ and $\zeta_{j + (i - 1)n}^{\varepsilon_{1:d}} + \bar{\nu}_{\varepsilon_i}$ sharing the common distribution $\nu_{\varepsilon_i}$. For each $\varepsilon_1,\dots ,\varepsilon_d\in [-1,1]$, let $\phi_{\varepsilon_{1:d}}\coloneqq (\bar{\nu}_{\varepsilon_1},\dots ,\bar{\nu}_{\varepsilon_d})$, and for each $i\in [d]$ and $j\in [n]$, let $V_{j + (i - 1)n}^{\varepsilon_{1:d}}\coloneqq c_{j + (i - 1)n}^{\top}\phi_{\varepsilon_{1:d}} + \xi_{j + (i - 1)n}^{\varepsilon_{1:d}}$ and $W_{j + (i - 1)n}^{\varepsilon_{1:d}}\coloneqq c_{j + (i - 1)n}^{\top}\phi_{\varepsilon_{1:d}} + \zeta_{j + (i - 1)n}^{\varepsilon_{1:d}}$. Note that these last two random variables are $[0,1]$-valued perturbations of $c_{j + (i - 1)n}^{\top}\phi_{\varepsilon_{1:d}}$ by zero-mean noise, with shared density given by $f_{\varepsilon_i}$, and hence bounded by $L$.
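The construction above can be sanity-checked numerically. The sketch below (our own illustration, with arbitrary choices of $L$ and $\varepsilon$; not part of the proof) verifies that $f_{\varepsilon}$ is a density with mean $\frac{1}{2} + \frac{\varepsilon}{196}$, and that a sampler with the law of $G_{\varepsilon,t}$ in (2) matches that mean; the Skorokhod map $\varphi$ is replaced by direct piecewise sampling of its target density, which produces the same law.

```python
import random

# Illustrative check (not part of the proof): (i) f_eps integrates to 1 with
# mean nu_bar_eps = 1/2 + eps/196; (ii) a sampler with the law of G_{eps,t}
# in (2) has that same mean. L and eps are arbitrary admissible choices.
L, eps = 2.0, 0.5
half = 1.0 / (14.0 * L)              # J_L = [1/2 - half, 1/2 + half]

def f_eps(x):
    v = 0.0
    if x <= 3.0 / 7.0: v += 1.0                       # indicator of [0, 3/7]
    if 0.5 - half <= x <= 0.5 + half: v += L          # spike L * 1_{J_L}
    if x >= 4.0 / 7.0: v += 1.0                       # indicator of [4/7, 1]
    if 1.0 / 7.0 <= x <= 3.0 / 14.0: v -= eps         # g_eps, left half
    elif 3.0 / 14.0 < x <= 2.0 / 7.0: v += eps        # g_eps, right half
    return v

# Midpoint rule; with L = 2 every breakpoint is a multiple of 1/28, so a grid
# aligned to 1/28 integrates the piecewise-constant density exactly.
N = 28 * 1000
mass = sum(f_eps((k + 0.5) / N) for k in range(N)) / N
mean = sum((k + 0.5) / N * f_eps((k + 0.5) / N) for k in range(N)) / N
assert abs(mass - 1.0) < 1e-9
assert abs(mean - (0.5 + eps / 196.0)) < 1e-9

# Sampler with the law of G_{eps,t}: instead of an explicit phi, the density
# (7/6) * f * 1_{[0,1] \ [1/7, 2/7]} is sampled piecewise (same law); its
# four blocks carry masses 1/6, 1/6, 1/6 and 1/2.
rng = random.Random(0)

def sample_G():
    u = rng.random()
    if rng.random() < 1.0 / 7.0:                       # tilde{B}_t = 1
        if rng.random() < (1.0 + eps) / 2.0:           # B_{(1+eps)/2, t} = 1
            return (3.0 + u) / 14.0
        return (2.0 + u) / 14.0
    r = rng.random()                                   # tilde{B}_t = 0
    if r < 1.0 / 6.0: return u / 7.0                   # [0, 1/7]
    if r < 2.0 / 6.0: return (2.0 + u) / 7.0           # [2/7, 3/7]
    if r < 3.0 / 6.0: return 0.5 - half + 2.0 * half * u   # spike region J_L
    return 4.0 / 7.0 + (3.0 / 7.0) * u                 # [4/7, 1]

m = 200_000
avg = sum(sample_G() for _ in range(m)) / m
assert abs(avg - (0.5 + eps / 196.0)) < 0.005
```
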
+
+We will show that any algorithm suffers the regret bound in the statement of the theorem when the sequence of evaluations is $V_{1}^{\varepsilon_{1:d}}$ , $W_{1}^{\varepsilon_{1:d}}$ , ..., $V_{T}^{\varepsilon_{1:d}}$ , $W_{T}^{\varepsilon_{1:d}}$ , for some $\varepsilon_{1}, \ldots, \varepsilon_{d} \in [-1,1]$ .
+
+Before doing so, we first introduce some notation. For any $\varepsilon_1,\ldots ,\varepsilon_d\in [-1,1]$, $p\in [0,1]$, and $t\in [T]$, let $\mathrm{GFT}_t^{\varepsilon_{1:d}}(p)\coloneqq \mathrm{g}(p,V_t^{\varepsilon_{1:d}},W_t^{\varepsilon_{1:d}})$ .
+
+By Lemma 2.1, we have, for all $\varepsilon_1,\ldots ,\varepsilon_d\in [-1,1]$, $i\in [d]$, $j\in [n]$, and $p\in [0,1]$,
+
+$$
+\mathbb {E} \left[ \mathrm {G F T} _ {j + (i - 1) n} ^ {\varepsilon_ {1: d}} (p) \right] = 2 \int_ {0} ^ {p} \int_ {0} ^ {\lambda} f _ {\varepsilon_ {i}} (s) \mathrm {d} s \mathrm {d} \lambda + 2 (\bar {\nu} _ {\varepsilon_ {i}} - p) \int_ {0} ^ {p} f _ {\varepsilon_ {i}} (s) \mathrm {d} s ,
+$$
+
+which, together with the fundamental theorem of calculus (Bass, 2013, Theorem 14.16), applied after noting that $p \mapsto \mathbb{E}\big[\mathrm{GFT}_{j + (i - 1)n}^{\varepsilon_{1:d}}(p)\big]$ is absolutely continuous with derivative given a.e. by $p \mapsto 2(\bar{\nu}_{\varepsilon_i} - p)f_{\varepsilon_i}(p)$, yields, for any $p \in J_L$,
+
+$$
+\mathbb {E} \left[ \mathrm {G F T} _ {j + (i - 1) n} ^ {\varepsilon_ {1: d}} \left(\bar {\nu} _ {\varepsilon_ {i}}\right) \right] - \mathbb {E} \left[ \mathrm {G F T} _ {j + (i - 1) n} ^ {\varepsilon_ {1: d}} (p) \right] = L \left| \bar {\nu} _ {\varepsilon_ {i}} - p \right| ^ {2}. \tag {3}
+$$
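Equation (3) can be verified numerically from the integral formula for $\mathbb{E}\big[\mathrm{GFT}(p)\big]$ displayed above (our own illustrative check, with arbitrary admissible constants chosen so that all breakpoints and test prices land on the quadrature grid; not part of the proof):

```python
# Numerical check of (3): with E[GFT(p)] = 2*Int_0^p Int_0^lam f(s) ds dlam
# + 2*(nu_bar - p)*Int_0^p f(s) ds, the regret of a price p in the spike
# region J_L is exactly L * |nu_bar - p|^2.
L, eps = 2.0, 0.35
half = 1.0 / (14.0 * L)              # J_L = [1/2 - half, 1/2 + half]
nu_bar = 0.5 + eps / 196.0           # on the grid since eps/196 = 1/560

def f_eps(x):
    v = 0.0
    if x <= 3.0 / 7.0: v += 1.0
    if 0.5 - half <= x <= 0.5 + half: v += L
    if x >= 4.0 / 7.0: v += 1.0
    if 1.0 / 7.0 <= x <= 3.0 / 14.0: v -= eps
    elif 3.0 / 14.0 < x <= 2.0 / 7.0: v += eps
    return v

N = 56_000                           # multiple of 28 and 560: grid hits all breakpoints
h = 1.0 / N
F = [0.0]                            # F[k] = integral of f_eps over [0, k h]
G = [0.0]                            # G[k] = integral of F over [0, k h]
for k in range(N):
    fm = f_eps((k + 0.5) * h)        # cell midpoint; exact for piecewise-constant f_eps
    G.append(G[-1] + h * F[-1] + 0.5 * h * h * fm)
    F.append(F[-1] + h * fm)

def expected_gft(p):                 # valid only for p on the grid
    k = round(p * N)
    return 2.0 * G[k] + 2.0 * (nu_bar - p) * F[k]

p = 0.5 - half                       # left edge of J_L, on the grid
lhs = expected_gft(nu_bar) - expected_gft(p)
rhs = L * (nu_bar - p) ** 2
assert abs(lhs - rhs) < 1e-9
```
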
+
+Note also that for all $\varepsilon_1,\ldots ,\varepsilon_d\in [-\bar{\varepsilon}_L,\bar{\varepsilon}_L],t\in [T]$ , and $p\in [0,1]\setminus J_L$ , a direct verification shows that
+
+$$
+\mathbb {E} \left[ \mathrm {G F T} _ {t} ^ {\varepsilon_ {1: d}} (p) \right] \leq \mathbb {E} \left[ \mathrm {G F T} _ {t} ^ {\varepsilon_ {1: d}} (1 / 2) \right]. \tag {4}
+$$
+
+Fix an arbitrary deterministic algorithm $(\alpha_{t})_{t\in [T]}$ for the full feedback setting, i.e. (given that the contexts $c_{1},\ldots ,c_{T}$ are fixed and declared ahead of time to the learner), a sequence of functions $\alpha_{t}\colon ([0,1]\times [0,1])^{t - 1}\to [0,1]$ mapping past feedback into prices (with the convention that $\alpha_{1}$ is just a number in $[0,1]$). For each $t\in [T]$, define $\tilde{\alpha}_{t}\colon ([0,1]\times [0,1])^{t - 1}\to J_{L}$ to be equal to $\alpha_{t}$ whenever $\alpha_{t}$ takes values in $J_{L}$, and equal to $1/2$ otherwise. Define $Z_{1}\coloneqq \frac{1 + E_{1}}{2},\dots,Z_{d}\coloneqq \frac{1 + E_{d}}{2}$ .
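The truncation step defining $\tilde{\alpha}_t$ can be written as a one-line projection (our own illustrative sketch, with `alpha` a hypothetical posted price):

```python
def truncate(alpha, L):
    # Map a price alpha in [0, 1] to tilde{alpha} in J_L: keep it if it is
    # already in J_L = [1/2 - 1/(14 L), 1/2 + 1/(14 L)], else replace by 1/2.
    half = 1.0 / (14.0 * L)
    return alpha if 0.5 - half <= alpha <= 0.5 + half else 0.5

assert truncate(0.5, 2.0) == 0.5
assert truncate(0.51, 2.0) == 0.51   # inside J_2 = [0.5 - 1/28, 0.5 + 1/28]
assert truncate(0.9, 2.0) == 0.5     # outside J_L, replaced by 1/2
```
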
+
+Now, note the following chain of (in)equalities:
+
+$$
+\begin{array}{l}
+\sup_{\varepsilon_{1:d}\in \left[-\bar{\varepsilon}_{L},\bar{\varepsilon}_{L}\right]^{d}}\sum_{i = 1}^{d}\sum_{j = 1}^{n}\mathbb{E}\Big[\mathrm{GFT}^{\varepsilon_{1:d}}_{j + (i - 1)n}\big(\bar{\nu}_{\varepsilon_{i}}\big) - \mathrm{GFT}^{\varepsilon_{1:d}}_{j + (i - 1)n}\big(\alpha_{j + (i - 1)n}\big(V_{1}^{\varepsilon_{1:d}},W_{1}^{\varepsilon_{1:d}},\ldots ,V_{j - 1 + (i - 1)n}^{\varepsilon_{1:d}},W_{j - 1 + (i - 1)n}^{\varepsilon_{1:d}}\big)\big)\Big] \\
+\stackrel{(4)}{\geq} \sup_{\varepsilon_{1:d}\in \left[-\bar{\varepsilon}_{L},\bar{\varepsilon}_{L}\right]^{d}}\sum_{i = 1}^{d}\sum_{j = 1}^{n}\mathbb{E}\Big[\mathrm{GFT}^{\varepsilon_{1:d}}_{j + (i - 1)n}\big(\bar{\nu}_{\varepsilon_{i}}\big) - \mathrm{GFT}^{\varepsilon_{1:d}}_{j + (i - 1)n}\big(\tilde{\alpha}_{j + (i - 1)n}\big(V_{1}^{\varepsilon_{1:d}},W_{1}^{\varepsilon_{1:d}},\ldots ,V_{j - 1 + (i - 1)n}^{\varepsilon_{1:d}},W_{j - 1 + (i - 1)n}^{\varepsilon_{1:d}}\big)\big)\Big] \\
+\stackrel{\spadesuit}{=} L\sup_{\varepsilon_{1:d}\in \left[-\bar{\varepsilon}_{L},\bar{\varepsilon}_{L}\right]^{d}}\sum_{i = 1}^{d}\sum_{j = 1}^{n}\mathbb{E}\Big[\big|\bar{\nu}_{\varepsilon_{i}} - \tilde{\alpha}_{j + (i - 1)n}\big(V_{1}^{\varepsilon_{1:d}},W_{1}^{\varepsilon_{1:d}},\ldots ,V_{j - 1 + (i - 1)n}^{\varepsilon_{1:d}},W_{j - 1 + (i - 1)n}^{\varepsilon_{1:d}}\big)\big|^{2}\Big] \\
+\geq L\sum_{i = 1}^{d}\sum_{j = 1}^{n}\mathbb{E}\Big[\big|\bar{\nu}_{E_{i}} - \tilde{\alpha}_{j + (i - 1)n}\big(V_{1}^{E_{1:d}},W_{1}^{E_{1:d}},\ldots ,V_{j - 1 + (i - 1)n}^{E_{1:d}},W_{j - 1 + (i - 1)n}^{E_{1:d}}\big)\big|^{2}\Big] \\
+\stackrel{\bullet}{\geq} L\sum_{i = 1}^{d}\sum_{j = 1}^{n}\mathbb{E}\Big[\big|\bar{\nu}_{E_{i}} - \mathbb{E}\big[\bar{\nu}_{E_{i}}\mid V_{1}^{E_{1:d}},W_{1}^{E_{1:d}},\ldots ,V_{j - 1 + (i - 1)n}^{E_{1:d}},W_{j - 1 + (i - 1)n}^{E_{1:d}}\big]\big|^{2}\Big] \\
+= \frac{L}{196^{2}}\sum_{i = 1}^{d}\sum_{j = 1}^{n}\mathbb{E}\Big[\big|E_{i} - \mathbb{E}\big[E_{i}\mid V_{1}^{E_{1:d}},W_{1}^{E_{1:d}},\ldots ,V_{j - 1 + (i - 1)n}^{E_{1:d}},W_{j - 1 + (i - 1)n}^{E_{1:d}}\big]\big|^{2}\Big] \\
+\stackrel{\heartsuit}{\geq} \frac{L}{196^{2}}\sum_{i = 1}^{d}\sum_{j = 1}^{n}\mathbb{E}\Big[\big|E_{i} - \mathbb{E}\big[E_{i}\mid B_{\frac{1 + E_{i}}{2},1 + 2(i - 1)n},\ldots ,B_{\frac{1 + E_{i}}{2},2(j - 1) + 2(i - 1)n}\big]\big|^{2}\Big] \\
+\stackrel{\clubsuit}{=} \frac{L}{196^{2}}\sum_{i = 1}^{d}\sum_{j = 1}^{n}\mathbb{E}\Big[\big|E_{i} - \mathbb{E}\big[E_{i}\mid B_{\frac{1 + E_{i}}{2},1},\ldots ,B_{\frac{1 + E_{i}}{2},2(j - 1)}\big]\big|^{2}\Big] \\
+= \frac{L}{98^{2}}\sum_{i = 1}^{d}\sum_{j = 1}^{n}\mathbb{E}\Big[\big|Z_{i} - \mathbb{E}\big[Z_{i}\mid B_{Z_{i},1},\ldots ,B_{Z_{i},2(j - 1)}\big]\big|^{2}\Big]
+\end{array}
+$$
+
+where $\spadesuit$ follows from (3) and the fact that $\tilde{\alpha}_{j + (i - 1)n}$ takes values in $J_L$; $\bullet$ from the fact that the minimizer of the $L^2(\mathbb{P})$-distance from $\bar{\nu}_{E_i}$ among the $\sigma \big(V_1^{E_{1:d}},W_1^{E_{1:d}},\ldots ,V_{j - 1 + (i - 1)n}^{E_{1:d}},W_{j - 1 + (i - 1)n}^{E_{1:d}}\big)$-measurable random variables is $\mathbb{E}\big[\bar{\nu}_{E_i}\mid V_1^{E_{1:d}},W_1^{E_{1:d}},\ldots ,V_{j - 1 + (i - 1)n}^{E_{1:d}},W_{j - 1 + (i - 1)n}^{E_{1:d}}\big]$ (see, e.g., (Williams, 1991, Section 9.4)); $\heartsuit$ from the fact that, by Equation (2) and the independence of $E_{i}$ from $\big((B_{q,t})_{t\in \mathbb{N}},q\in [0,1],(\tilde{B}_t)_{t\in \mathbb{N}},(U_t)_{t\in \mathbb{N}}\big)$, the conditional expectation $\mathbb{E}\big[E_i\mid V_1^{E_{1:d}},W_1^{E_{1:d}},\ldots ,V_{j - 1 + (i - 1)n}^{E_{1:d}},W_{j - 1 + (i - 1)n}^{E_{1:d}}\big]$ is a measurable function of $B_{\frac{1 + E_i}{2},1 + 2(i - 1)n},\ldots ,B_{\frac{1 + E_i}{2},2(j - 1) + 2(i - 1)n}$, together with the same observation about the minimization of the $L^2(\mathbb{P})$-distance made for $\bullet$; and $\clubsuit$ from the fact that the sequence $\left(B_{\frac{1 + E_i}{2},t}\right)_{t\in \mathbb{N}}$ is i.i.d.
+
+Finally, the general term of this last sum is the expected squared distance between the random parameter (drawn uniformly over $[(1 - \bar{\varepsilon}_L) / 2, (1 + \bar{\varepsilon}_L) / 2]$) of an i.i.d. sequence of Bernoulli random variables and the conditional expectation of this parameter given $2(j - 1)$ independent realizations of these Bernoullis. A probabilistic argument shows that there exist two universal constants $\tilde{a}, \tilde{b} > 0$ such that, for all $j \geq \tilde{b} L^4$ and each $i \in [d]$ ,
+
+$$
+\mathbb {E} \left[ \left| Z _ {i} - \mathbb {E} \left[ Z _ {i} \mid B _ {Z _ {i}, 1}, \dots , B _ {Z _ {i}, 2 (j - 1)} \right] \right| ^ {2} \right] \geq \tilde {a} \frac {1}{j - 1}. \tag {5}
+$$
+
+At a high level, this is because, on an event of probability $\Omega(1)$, if $j$ is large enough, the conditional expectation $\mathbb{E}[Z_i \mid B_{Z_i,1}, \ldots, B_{Z_i,2(j-1)}]$ is very close to the empirical average $\frac{1}{2(j-1)} \sum_{s=1}^{2(j-1)} B_{Z_i,s}$ , whose expected squared distance from $Z_i$ is $\Omega(1/(j-1))$ . For a formal proof of (5) with explicit constants, we refer the reader to Bolić et al. (2024, Appendix B of the extended arXiv version). Summing over $i \in [d]$ and $j \in [n]$ , we obtain that there exist $\varepsilon_1, \ldots, \varepsilon_d \in [-1,1]$ such that
+
+$$
+\begin{array}{l} \sum_ {i = 1} ^ {d} \sum_ {j = 1} ^ {n} \mathbb {E} \left[ \mathrm {G F T} _ {j + (i - 1) n} ^ {\varepsilon_ {1: d}} \left(\bar {\nu} _ {\varepsilon_ {i}}\right) - \mathrm {G F T} _ {j + (i - 1) n} ^ {\varepsilon_ {1: d}} \left(\tilde {\alpha} _ {j + (i - 1) n} \left(V _ {1} ^ {\varepsilon_ {1: d}}, W _ {1} ^ {\varepsilon_ {1: d}}, \dots , V _ {j - 1 + (i - 1) n} ^ {\varepsilon_ {1: d}}, W _ {j - 1 + (i - 1) n} ^ {\varepsilon_ {1: d}}\right)\right) \right] \\ = \Omega (L d \ln n) = \Omega (L d \ln T). \\ \end{array}
+$$
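Both the $\Theta(1/j)$ per-round Bayes risk in (5) and the logarithmic epoch total can be illustrated numerically. The sketch below is our own illustration: the interval width is an arbitrary stand-in for $\bar{\varepsilon}_L$, the threshold is a stand-in for $\tilde{b}L^4$, and the constants $\tilde{a},\tilde{b}$ are not computed.

```python
import math

# Part 1: Bayes risk E[(Z - E[Z | B_1..B_k])^2] for Z uniform on [lo, hi],
# observed through k i.i.d. Bernoulli(Z) draws; it decays like c / k.
e = 0.5                               # stand-in for bar{eps}_L = 7/L
lo, hi = (1 - e) / 2, (1 + e) / 2

def bayes_risk(k, m=1000):
    # midpoint quadrature over the prior interval; sum over success counts s
    zs = [lo + (hi - lo) * (i + 0.5) / m for i in range(m)]
    risk = 0.0
    for s in range(k + 1):
        c = math.comb(k, s)
        lik = [c * z**s * (1.0 - z)**(k - s) for z in zs]          # P(s | z)
        post_mean = sum(l * z for l, z in zip(lik, zs)) / sum(lik)
        risk += sum(l * (z - post_mean)**2 for l, z in zip(lik, zs)) / m
    return risk

r100, r200 = bayes_risk(100), bayes_risk(200)
assert 1.6 < r100 / r200 < 2.4        # risk roughly halves as k doubles
assert r200 > 1e-4                    # stays bounded below by a constant / k

# Part 2: summing a per-round bound a/(j - 1) over an epoch of length n gives
# Theta(ln n), which (times L d) yields the Omega(L d ln T) rate, as n = T/d.
n = 10_000
total = sum(1.0 / (j - 1) for j in range(2, n + 1))
assert abs(total - math.log(n)) < 1.0 # harmonic sum = ln n + O(1)
```
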
+
+# C.2. Proof of Theorem 3.2
+
+The high-level idea of this proof is to build a reduction to a non-contextual lower bound construction (see, e.g., the one appearing in Bolić et al. 2024, Theorem 3).
+
+Fix $L\geq 2$ and $T\in \mathbb{N}$.
+
+We will use the very same notation as in the proof of Theorem 4.2. In particular, the contexts $c_{1},\ldots ,c_{T}$ are the same as before and declared ahead of time to the learner. We will show that, for each algorithm for contextual brokerage with 2-bit feedback, if $R_T^{\varepsilon_{1:d}}$ is the regret of the algorithm at time horizon $T$ when the traders' valuations are $V_{1}^{\varepsilon_{1:d}},W_{1}^{\varepsilon_{1:d}},\dots,V_{T}^{\varepsilon_{1:d}},W_{T}^{\varepsilon_{1:d}}$ , then $\max_{\sigma_{1:d}\in \{-1,1\} ^d}R_T^{(\sigma_1\varepsilon ,\dots,\sigma_d\varepsilon)} = \Omega \big(\sqrt{dLT}\big)$ whenever $\varepsilon = \Theta \big((LT / d)^{-1 / 4}\big)$ and $T = \Omega (dL^3)$.
+
+Note that for all $\varepsilon_{1:d} \in [-1, 1]^d$ , $i \in [d]$ , $j \in [n]$ , and $p < \frac{1}{2}$ , if $\varepsilon_i > 0$ , then a direct verification shows that
+
+$$
+\mathbb {E} \left[ \mathrm {G F T} _ {j + (i - 1) n} ^ {\varepsilon_ {1: d}} (1 / 2) \right] \geq \mathbb {E} \left[ \mathrm {G F T} _ {j + (i - 1) n} ^ {\varepsilon_ {1: d}} (p) \right]. \tag {6}
+$$
+
+Similarly, for all $\varepsilon_{1:d} \in [-1, 1]^d$ , $i \in [d]$ , $j \in [n]$ , and $p > \frac{1}{2}$ , if $\varepsilon_i < 0$ , then
+
+$$
+\mathbb {E} \left[ \mathrm {G F T} _ {j + (i - 1) n} ^ {\varepsilon_ {1: d}} (1 / 2) \right] \geq \mathbb {E} \left[ \mathrm {G F T} _ {j + (i - 1) n} ^ {\varepsilon_ {1: d}} (p) \right]. \tag {7}
+$$
+
+Furthermore, a direct verification shows that, for each $\varepsilon_{1:d} \in [-1, 1]^d$ and $t \in [T]$ ,
+
+$$
+\max _ {p \in [ 0, 1 ]} \mathbb {E} \left[ \mathrm {G F T} _ {t} ^ {\varepsilon_ {1: d}} (p) \right] - \max _ {p \in [ \frac {1}{7}, \frac {2}{7} ]} \mathbb {E} \left[ \mathrm {G F T} _ {t} ^ {\varepsilon_ {1: d}} (p) \right] \geq \frac {1}{5 0} = \Omega (1). \tag {8}
+$$
+
+Now, assume that $T \geq dL^3 / 14^4$ so that, defining $\varepsilon \coloneqq (LT / d)^{-1 / 4}$ , for any $\sigma_{1:d} \in \{-1,1\}^d$ , any $i \in [d]$ , and any $j \in [n]$ , the maximizer of the expected gain from trade $p \mapsto \mathbb{E}\big[\mathrm{GFT}_{j + (i - 1)n}^{(\sigma_1\varepsilon,\dots,\sigma_d\varepsilon)}(p)\big]$ is at $\frac{1}{2} + \frac{\sigma_i\varepsilon}{196}$ and hence belongs to the spike region $J_L$ . If $\sigma_i = 1$ (resp., $\sigma_i = -1$ ), the optimal price for the rounds $1 + (i - 1)n, \dots, in$ belongs to the region $\left(\frac{1}{2}, \frac{1}{2} + \frac{1}{14L}\right)$ (resp., $\left[\frac{1}{2} - \frac{1}{14L}, \frac{1}{2}\right)$ ). By posting prices in the wrong region $\left[0, \frac{1}{2}\right]$ (resp., $\left[\frac{1}{2}, 1\right]$ ) in the $\sigma_i = 1$ (resp., $\sigma_i = -1$ ) case, the learner incurs an $\Omega(L\varepsilon^2) = \Omega\big(\sqrt{Ld/T}\big)$ instantaneous regret by (3) and (6) (resp., (3) and (7)). Hence, to avoid suffering $\Omega\big(\sqrt{Ld/T} \cdot n\big) = \Omega\big(\sqrt{LT/d}\big)$ regret in the rounds $1 + (i - 1)n, \dots, in$ , the algorithm would have to detect the sign of $\sigma_i$ and play accordingly. We now show that even this strategy cannot improve the regret of the algorithm (by more than a constant factor), because of the cost of determining the sign of $\sigma_i$ with the available feedback. Since for any $i \in [d]$ and $j \in [n]$ the feedback received from the two traders at time $j + (i - 1)n$ after posting a price $p$ is $\mathbb{I}\{p \leq V_{j+(i-1)n}^{(\sigma_1\varepsilon,\dots,\sigma_d\varepsilon)}\}$ and $\mathbb{I}\{p \leq W_{j+(i-1)n}^{(\sigma_1\varepsilon,\dots,\sigma_d\varepsilon)}\}$ , the only way to obtain information about (the sign of) $\sigma_i$ is to post in the costly ($\Omega(1)$ instantaneous regret, by Equation (8)) sub-optimal region $[\frac{1}{7}, \frac{2}{7}]$ . However, posting prices in the region $[\frac{1}{7}, \frac{2}{7}]$ at time $j + (i - 1)n$ cannot give more information about $\sigma_i$ than the information carried by $V_{j+(i-1)n}^{(\sigma_1\varepsilon,\dots,\sigma_d\varepsilon)}$ and $W_{j+(i-1)n}^{(\sigma_1\varepsilon,\dots,\sigma_d\varepsilon)}$ , which, in turn, cannot give more information about $\sigma_i$ than the information carried by the two Bernoullis $B_{\frac{1+\sigma_i\varepsilon}{2},2(j+(i-1)n)-1}$ and $B_{\frac{1+\sigma_i\varepsilon}{2},2(j+(i-1)n)}$ . Since it is only during the rounds $1 + (i - 1)n, \dots, in$ that information about the sign of $\sigma_i$ can be extracted, and since (via an information-theoretic argument) distinguishing the sign of $\sigma_i$ from i.i.d. Bernoulli random variables of parameter $\frac{1+\sigma_i\varepsilon}{2}$ requires $\Omega(1/\varepsilon^2) = \Omega\big(\sqrt{LT/d}\big)$ samples, the algorithm is forced to post at least $\Omega\big(\sqrt{LT/d}\big)$ prices in the costly region $[\frac{1}{7}, \frac{2}{7}]$ during these rounds, suffering a regret of $\Omega\big(\sqrt{LT/d}\big) \cdot \Omega(1) = \Omega\big(\sqrt{LT/d}\big)$ . Putting everything together, regardless of the strategy, every algorithm pays at least $\Omega\big(\sqrt{LT/d}\big)$ regret in each epoch $1 + (i - 1)n, \dots, in$ for every $i \in [d]$ , resulting in an overall regret of $\Omega\big(\sqrt{LT/d}\big) \cdot d = \Omega\big(\sqrt{dLT}\big)$ .
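For concreteness, the rate algebra used in this argument can be sanity-checked numerically (our own illustration; the values of $L$, $T$, $d$ are arbitrary):

```python
# Sanity check of the rate algebra: with eps = (L T / d)^(-1/4),
#   per-round regret of the wrong region:   L * eps^2     = sqrt(L d / T)
#   summed over one epoch of n = T/d rounds:                sqrt(L T / d)
#   samples needed to detect sign(sigma_i): 1 / eps^2     = sqrt(L T / d)
#   total over the d epochs:                                sqrt(d L T)
L, T, d = 14, 10**6, 10
eps = (L * T / d) ** (-0.25)
n = T / d

assert abs(L * eps**2 - (L * d / T) ** 0.5) < 1e-12
assert abs(L * eps**2 * n - (L * T / d) ** 0.5) < 1e-9
assert abs(1 / eps**2 - (L * T / d) ** 0.5) < 1e-9
assert abs(d * (L * T / d) ** 0.5 - (d * L * T) ** 0.5) < 1e-6
```
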
\ No newline at end of file
diff --git a/aparametriccontextualonlinelearningtheoryofbrokerage/images.zip b/aparametriccontextualonlinelearningtheoryofbrokerage/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..4a44992ec386bf7e33da6cbdbfd313036b9aea40
--- /dev/null
+++ b/aparametriccontextualonlinelearningtheoryofbrokerage/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:87936251d5b9a27bbbde54f63eb5f7c20aa07fddf4ef6afd7d5f87857560b865
+size 688286
diff --git a/aparametriccontextualonlinelearningtheoryofbrokerage/layout.json b/aparametriccontextualonlinelearningtheoryofbrokerage/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..8d03db631f1c7ff59daec5d38708a9df59a85c82
--- /dev/null
+++ b/aparametriccontextualonlinelearningtheoryofbrokerage/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f848c773854b0c69a05def6eeabbcb48e35284969d5c903ac7f7160889688236
+size 1029497
diff --git a/apeerreviewlookonmultimodalclusteringaninformationbottleneckrealizationmethod/e1305251-c20e-4e11-8efe-b39595d0d365_content_list.json b/apeerreviewlookonmultimodalclusteringaninformationbottleneckrealizationmethod/e1305251-c20e-4e11-8efe-b39595d0d365_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..ccba7307b77ba47f2308c04944fcb1cdc0eb7306
--- /dev/null
+++ b/apeerreviewlookonmultimodalclusteringaninformationbottleneckrealizationmethod/e1305251-c20e-4e11-8efe-b39595d0d365_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:96bbf7b136f18c767ce2d84ff1269205c3c82769b54b56e7c058c608f25bf780
+size 120009
diff --git a/apeerreviewlookonmultimodalclusteringaninformationbottleneckrealizationmethod/e1305251-c20e-4e11-8efe-b39595d0d365_model.json b/apeerreviewlookonmultimodalclusteringaninformationbottleneckrealizationmethod/e1305251-c20e-4e11-8efe-b39595d0d365_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..b9a66e1ff66108df56c02db337d16e57f1b4f3be
--- /dev/null
+++ b/apeerreviewlookonmultimodalclusteringaninformationbottleneckrealizationmethod/e1305251-c20e-4e11-8efe-b39595d0d365_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c4b1c64b8638f0da110305d99785cb31967e79a6654ca2fc69c3825b80c9dc80
+size 148821
diff --git a/apeerreviewlookonmultimodalclusteringaninformationbottleneckrealizationmethod/e1305251-c20e-4e11-8efe-b39595d0d365_origin.pdf b/apeerreviewlookonmultimodalclusteringaninformationbottleneckrealizationmethod/e1305251-c20e-4e11-8efe-b39595d0d365_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..533fe2c6749ed4ce59816401e45b608eb1057cd6
--- /dev/null
+++ b/apeerreviewlookonmultimodalclusteringaninformationbottleneckrealizationmethod/e1305251-c20e-4e11-8efe-b39595d0d365_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a92707c9b3a5ae0c84481cfd666ac2965af2d776700e397236d1573e4f953dc4
+size 2424110
diff --git a/apeerreviewlookonmultimodalclusteringaninformationbottleneckrealizationmethod/full.md b/apeerreviewlookonmultimodalclusteringaninformationbottleneckrealizationmethod/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..c953aaa47680e261f4bc88020d8752f71daa67e5
--- /dev/null
+++ b/apeerreviewlookonmultimodalclusteringaninformationbottleneckrealizationmethod/full.md
@@ -0,0 +1,540 @@
+# A Peer-review Look on Multi-modal Clustering: An Information Bottleneck Realization Method
+
+Zhengzheng Lou $^{1}$ , Hang Xue $^{1}$ , Chaoyang Zhang $^{1}$ , Shizhe Hu $^{1}$
+
+# Abstract
+
+Despite their superior capability in complementary information exploration and consistent clustering structure learning, most current weight-based multi-modal clustering methods still suffer from three limitations: 1) lack of trustworthiness in the learned weights; 2) isolated view weight learning; 3) extra weight parameters. Motivated by the peer-review mechanism in academia, we give a new peer-review look on the multi-modal clustering problem and propose to iteratively treat one modality as the "author" and the remaining modalities as "reviewers" so as to reach a peer-review score for each modality. This essentially explores the underlying relationships among modalities. To improve trustworthiness, we further design a new trustworthy score with a self-supervised working mechanism. Following that, we propose a novel Peer-review Trustworthy Information Bottleneck (PTIB) method for weighted multi-modal clustering, where both of the above scores are simultaneously taken into account for accurate and parameter-free modality weight learning. Extensive experiments on eight multi-modal datasets suggest that PTIB can outperform state-of-the-art multi-modal clustering methods.
+
+# 1. Introduction
+
+Learning a consistent clustering result from different modalities by fully mining the underlying modality correlations is the essence of multi-modal clustering (MMC) (Raya et al., 2024). In terms of the basic method, existing MMC methods are generally based on $k$ -means, canonical correlation analysis, spectral clustering, matrix factorization, the information bottleneck, and deep learning. From
+
+$^{1}$ School of Computer Science and Artificial Intelligence, Zhengzhou University, Zhengzhou, China. Correspondence to: Shizhe Hu.
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+the perspective of strategies, current MMCs can be classified into weighted MMC (Wang et al., 2020a), shared feature subspace learning based MMC (Li et al., 2022), multi-modal consensus clustering (Liu et al., 2021; Liang et al., 2024), multi-modal co-clustering (Kumar et al., 2011; Zhang et al., 2024), multi-modal subspace clustering (Li et al., 2022), and tensor representation learning based MMC (He & Atia, 2022; Gu et al., 2024). With the rapid development of various kinds of methods, MMC has been successfully applied into many practical applications, such as object recognition (Lou et al., 2013), human action recognition (Hu et al., 2020) and coherent groups detection in crowd scenes (Wang et al., 2020b).
+
+Related Works. Among the above methods, weighted MMCs have exhibited remarkable clustering performance over the past decade, mainly owing to their superior capability in discovering complementary relationships and learning consistent clustering structures. Among typical examples, Xu et al. (2016) incorporated feature selection into weighted MMC to handle high-dimensional data clustering, so that discriminative representations and accurate clustering results can be learned jointly. To address the challenging problem of large-scale data clustering, Zhang et al. (2019) proposed an efficient binary MMC method that collaboratively conducts discrete representation learning and binary clustering assignment discovery. Different from most weighted MMCs, Zhao et al. (2020) designed a cluster-weight learning based method to capture relationships at a finer granularity than modality-weighted methods. For the difficult parameter learning issue, Wang et al. (2020a) focused on automatically tuning the weight parameters to obtain a refined unified graph fusion matrix. More recently, Zhou & Shen (2020) leveraged adversarial learning and an attention mechanism to align the latent feature distributions and quantify the importance of modalities, respectively. Xia et al. (2023b) introduced an adaptive fusion layer to adaptively sense the importance of modalities. Hu et al. (2025) introduced a simple fusion mechanism that dynamically updates modality-specific weights via backpropagation, similar to parameter optimization. Chen et al. (2023) explored the higher-order relations across modalities by designing a low-rank tensor based proximity learning method for the MMC problem.
+
+Figure 1. The upper box indicates the general peer-review process in academia, while the lower box shows the peer-review look on multi-modal clustering, where each modality can be either an "author" or a "reviewer".
+
+Limitations. Examining the above closely related weighted MMC works, we observe three main limitations. First, all existing weighted MMCs focus only on learning the modality weights by different means, ignoring whether the learned weights are trustworthy. Second, the weight learning mechanisms of most weighted MMCs are isolated and essentially rely on "single-modal information", e.g., $w^{i} = \frac{1}{F^{i}}$ , where $w^{i}$ and $F^{i}$ denote the modality weight and the objective function value of the $i$ -th modality, respectively. Thus, the modality correlations are not fully explored and exploited, which may degrade the clustering performance. Third, most weighted MMCs require one or more parameters for controlling the weight distribution, and these parameters are usually difficult to tune by hand without any prior knowledge.
+
+Motivation. In academia, the peer-review mechanism is adopted to mutually assess the contribution of a specific work to the community, discuss its significance, and provide communication to peers. Similarly, different modalities jointly contribute to the multi-modal clustering task, making it necessary to assess their individual contributions. Inspired by this, we naturally give a peer-review look on the multi-modal clustering problem. As illustrated in Figure 1, in the general peer-review process, a person may be an author or a reviewer. When a person acts as an author, the work is assigned by the Editor-in-Chief (EIC) / Associate Editor (AE) to different reviewers, who provide recommendations and suggestions. The peer-review mechanism facilitates the improvement and refinement of work, ensuring high-quality and reliable publications. From the "peer-review" look on multi-modal clustering, one modality can either be an "author" or a "reviewer". The "reviewer" modalities review the work of the "author" modality and produce feedback review scores that evaluate the contribution
+
+or quality of the "author" modality. By mimicking this interesting mechanism in Figure 1, we can employ the feedback review scores to effectively integrate the complementary discriminative information across modalities so as to promote multi-modal clustering performance.
+
+Contribution. In this paper, we propose to address the multi-modal clustering problem from this new peer-review perspective. It lets the remaining modalities act as "reviewers" of one "author" modality and produces peer-review scores for each modality. Additionally, a corresponding trustworthy score is designed to measure the reliability of the different "reviewer" modalities, thus ensuring the trustworthiness of the peer-review scores learned from the "peer-review" process. Given the remarkable performance of the popular information bottleneck theory (Tishby et al., 1999) in multi-modal learning (Gao et al., 2007; Lou et al., 2013; Hu et al., 2022), we propose a Peer-review Trustworthy Information Bottleneck (PTIB) method for solving the weighted multi-modal clustering problem. The clustering result obtained in each iteration is used to update the trustworthy score of each "reviewer" modality, working in a self-supervised manner. Thus, a more reasonable trustworthy evaluation of the peer-review score of each modality is reached. The peer-review scores given by the "reviewer" modalities and the corresponding trustworthy scores evaluating these "reviewer" modalities are combined to determine the importance or contribution (expressed as modality weights in this paper) of each "author" modality. Moreover, we solve the optimization problem of PTIB with an effective $k$ -means-like algorithm. Experiments on eight multi-modal datasets demonstrate the superiority and effectiveness of the proposed method.
+
+The major contributions are summarized as follows:
+
+- A novel peer-review trustworthy information bottleneck (PTIB) method is proposed for multi-modal clustering, which performs by jointly learning the peer-review scores on different modalities and the trustworthy scores for quantifying the trustworthiness of the peer-review scores.
+- We give a new peer-review look on the multi-modal clustering problem, and thus design a peer-review score for evaluating the quality of each modality.
+- A corresponding trustworthy score is newly designed to evaluate the trustworthiness of peer-review score, ensuring the reliability of multi-modal peer-review.
+- Extensive experiments on eight multi-modal datasets demonstrate the superiority and effectiveness of the proposed PTIB.
+
+Relations Between Multi-modal and Multi-view Clustering. Generally, the aim of multi-modal clustering and
+
+multi-view clustering is similar, especially in integrating different sources of information to improve clustering performance. The differences between them are as follows: multi-view learning focuses on diverse feature representations of the same object, while multi-modal learning deals with the complex relationships between heterogeneous modalities, which are more complicated to handle. In practical applications, overlaps between them may exist (e.g., multi-modal data can also be considered as generalized multi-view data). However, technical solutions should be selected based on data characteristics (feature homogeneity and semantic consistency) to ensure methodological compatibility. In this paper, we use the more general term multi-modal clustering instead of multi-view clustering.
+
+# 2. Revisit: Information Bottleneck
+
+The information bottleneck (IB) principle (Tishby et al., 1999) originated from rate-distortion theory; for its details, we refer the reader to our survey (Hu et al., 2024b).
+
+For the clustering problem, IB treats clustering as a data compression process: it attempts to learn an optimal compressed representation $T$ of $X$ while maximally preserving the information relevant to the variable $Y$ . It is formally described as
+
+$$
+R (D) = \min _ {\{p (t | x): I (T; Y) \geq D \}} I (T; X), \tag {1}
+$$
+
+where the compact representation $T$ compresses the source variable $X$ while maximally capturing the information relevant to the variable $Y$ ; $p(t|x)$ indicates the probability of data point $x$ being assigned to the $t$ -th cluster; and $I(T;X)$ and $I(T;Y)$ denote the mutual information between the compressed variable $T$ and the variables $X$ and $Y$ , respectively.
+
+By introducing a positive multiplier $\beta$ , we obtain the following Lagrangian formulation of the IB method
+
+$$
+\mathcal {L} _ {\min } [ p (t | x) ] = I (T; X) - \beta I (T; Y), \tag {2}
+$$
+
+where $\beta \in (0, +\infty)$ is a Lagrange multiplier that serves as a trade-off parameter between data compression and relevant-information preservation. A formal iterative solution of Eq. (2) is given by
+
+$$
+p (t | x) = \frac {p (t)}{Z (x , \beta)} e ^ {- \beta D _ {K L} [ p (y | x) | | p (y | t) ]}, \tag {3}
+$$
+
+where $p(t) = \sum_{x} p(t|x)p(x)$ , $Z(x, \beta)$ is a normalization function, $p(y|t) = \frac{1}{p(t)}\sum_{x} p(t|x)p(x,y)$ , and $D_{KL}$ (Cover & Thomas, 2006) is the Kullback-Leibler divergence.
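The fixed-point iteration in Eq. (3) can be sketched in a few lines. The following is a minimal self-contained illustration on a synthetic joint distribution; it is not the paper's PTIB algorithm, and the sizes and $\beta$ are arbitrary choices:

```python
import math, random

# Minimal sketch of the self-consistent IB iteration in Eq. (3):
#   p(t|x) proportional to p(t) * exp(-beta * KL(p(y|x) || p(y|t))),
# with p(t) and p(y|t) recomputed from the current soft assignment.
random.seed(0)
nx, ny, nt, beta = 12, 6, 3, 20.0

# a synthetic joint distribution p(x, y)
pxy = [[random.random() for _ in range(ny)] for _ in range(nx)]
z = sum(map(sum, pxy))
pxy = [[v / z for v in row] for row in pxy]
px = [sum(row) for row in pxy]
py_x = [[v / px[i] for v in row] for i, row in enumerate(pxy)]

def kl(p, q):
    # Kullback-Leibler divergence D_KL(p || q)
    return sum(a * math.log(a / b) for a, b in zip(p, q) if a > 0)

# random soft initialization of p(t|x)
pt_x = [[random.random() for _ in range(nt)] for _ in range(nx)]
pt_x = [[v / sum(row) for v in row] for row in pt_x]

for _ in range(100):
    pt = [sum(pt_x[i][t] * px[i] for i in range(nx)) for t in range(nt)]
    # p(y|t) = (1 / p(t)) * sum_x p(t|x) p(x, y)
    py_t = [[sum(pt_x[i][t] * pxy[i][y] for i in range(nx)) / pt[t]
             for y in range(ny)] for t in range(nt)]
    for i in range(nx):
        logits = [math.log(pt[t]) - beta * kl(py_x[i], py_t[t])
                  for t in range(nt)]
        m = max(logits)
        w = [math.exp(l - m) for l in logits]   # 1/Z(x, beta) via normalization
        s = sum(w)
        pt_x[i] = [v / s for v in w]

assert all(abs(sum(row) - 1.0) < 1e-9 for row in pt_x)
assert all(v >= 0 for row in pt_x for v in row)
```
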
+
+In recent years, IB theory has been widely used in various multi-modal clustering tasks. For example, Federici et al. (2020) propose a multi-modal IB method that can identify non-shared information between two modalities. Yan et al. (2024) propose a multi-modal IB method that uses the shared representation of multiple modalities to eliminate the private information of a single modality. However, since the modality-private information is eliminated as much as possible during data compression, this method explores only the shared information of the modalities and does not take advantage of the complex relationships between them. Hu et al. (2024a) learn embeddings in two distinct feature spaces, reconstruct semantic information in parallel, and further use IB theory to reduce representation noise; however, its final clustering result is obtained by directly averaging the local clusters from the high-dimensional modal features.
+
+Different from the existing multi-modal clustering methods based on IB theory, the proposed method considers the complex relationships between modalities: the designed multi-modal peer-review process reasonably scores the contribution of each modality, and its trustworthiness is ensured in a self-supervised manner.
+
+# 3. The Proposed Method
+
+In this section, we first give the problem formulation and overall framework, and then present the details of the PTIB method.
+
+Problem Formulation. Given the random variable $X = \{x_{1}, x_{2}, \ldots, x_{n}\}$ denoting the set of $n$ data samples from the dataset $\mathcal{X}$ , we have the random variables $\{Y^{j}\}_{j=1}^{m}$ denoting the data feature representations of $m$ different modalities, where $Y^{j} \in \mathbb{R}^{n \times d^{j}}$ indicates that the feature dimension of the $j$ -th modality is $d^{j}$ . Then, we obtain the corresponding co-occurrence matrices $\{X, Y^{j}\}_{j=1}^{m}$ and their joint probability distributions $\{p(X, Y^{j})\}_{j=1}^{m}$ by adopting the popular Bag-of-Words model (Fei-Fei & Perona, 2005).
+
+Overall Framework. As illustrated in Figure 2, PTIB first captures the local clustering structure of each modality, i.e., $\{T^1, T^2, \dots, T^m\}$ , with the IB method, and then obtains the peer-review scores for each "author" modality from the evaluations of the remaining "reviewer" modalities. Meanwhile, the final clustering result is adopted to assess the trustworthiness of each modality in a self-supervised fashion. With the peer-review scores $\{\mu^1, \mu^2, \dots, \mu^m\}$ and trustworthy scores $\{\sigma^1, \sigma^2, \dots, \sigma^m\}$ for each modality, more accurate modality weights and an improved clustering result are learned in each iteration.
+
+Regarding the trustworthiness of multiple modalities, almost all existing methods focus on trustworthy multi-modal classification (Han et al., 2021; 2023; Zheng et al., 2023; Zou et al., 2023). Han et al. (2023) introduce the variational Dirichlet distribution to characterize the distribution of the class probabilities, parameterized with evidence from different views and integrated with the Dempster-Shafer theory, thus promoting both classification reliability and robustness. Zheng
+
+
+Figure 2. The framework of PTIB. Each modality is treated as an “author” or “reviewer”, with its local clustering result serving as the “work” or “criteria”. A peer review is conducted in the multi-modal setting to obtain the peer-review scores. Additionally, the final clustering result, playing the role of “EIC/AE”, quantifies the reliability of each “reviewer” modality to obtain the trustworthy scores. Both scores are taken into account for modality weight learning, leading to more accurate weights and an improved clustering result through iteration.
+
+et al. (2023) propose a trustworthy multi-modal classification network via multi-level confidence learning, which integrates both feature-level and label-level confidence learning. Zou et al. (2023) introduce a transparent fusion strategy based on modality confidence estimation to track information variation within different modalities for dynamic fusion. Different from them, the proposed method aims to guarantee the trustworthiness of the learned modality weights in a self-supervised manner. To the best of our knowledge, none of the existing weighted MMCs employ a trustworthy strategy in the weight learning process.
+
+# 3.1. Peer-review Score
+
+Here, we give a new peer-review view of multi-modal clustering, drawing on the advantages of this mechanism to learn modality weights through inter-modality interaction.
+
+In a general peer-review process, specific review criteria are often used to assess and score the author's work. By mimicking this, we obtain the peer-review score in a similar manner under the multi-modal peer-review process. First of all, it is necessary to establish the peer-review criteria for the different reviewer modalities. The feature of each modality contains unique characteristics, but the differing feature dimensions make it hard to quantify their discrepancy. Therefore, we instead adopt the local clustering result of the reviewer modality as the review criteria. The author's work is then also represented by its local clustering result, which can be directly compared with the review
+
+
+Figure 3. The local clustering results from the modality 1 and modality 2 are regarded as the criteria and the work, respectively. The peer-review score is the mutual information between them.
+
+criteria. The peer-review score depends on how similar the work is to the review criteria. Naturally, mutual information is a good general metric for quantifying this similarity. For clarity, we give a two-modal example in Figure 3. To be more accurate, we adopt the normalized version of mutual information and formally define it as follows
+
+$$
+\mu_ {i} ^ {k} = \frac {2 \times I \left(T ^ {i} , T ^ {k}\right)}{H \left(T ^ {i}\right) + H \left(T ^ {k}\right)} \tag {4}
+$$
+
+where $\mu_i^k$ is the peer-review score given by the reviewer modality $V^i$ to the author modality $V^k$ , $T^i$ and $T^k$ are the local clustering results obtained from the reviewer modality $V^i$ and the author modality $V^k$ , respectively. $H(\bullet)$ denotes the entropy of one variable. Naturally, all of the peer-review scores of author modality $V^k$ can be represented as a vector
+
+$$
+\mu^ {k} = \left\{\mu_ {1} ^ {k}, \dots , \mu_ {i} ^ {k}, \dots , \mu_ {m} ^ {k} \right\}, i \neq k. \tag {5}
+$$
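+As an illustrative sketch (our own code, not the authors'), the peer-review scores of Eqs. (4)-(5) can be computed directly from hard local clustering assignments:
+
+```python
+import numpy as np
+
+def nmi(a, b):
+    """Normalized mutual information 2*I(A;B)/(H(A)+H(B)) between two
+    hard cluster assignments given as non-negative integer label arrays (Eq. 4)."""
+    a, b = np.asarray(a), np.asarray(b)
+    joint = np.zeros((a.max() + 1, b.max() + 1))
+    for i, j in zip(a, b):
+        joint[i, j] += 1
+    joint /= len(a)
+    pa, pb = joint.sum(axis=1), joint.sum(axis=0)
+    nz = joint > 0
+    mi = np.sum(joint[nz] * np.log(joint[nz] / (pa[:, None] * pb[None, :])[nz]))
+    h = lambda p: -np.sum(p[p > 0] * np.log(p[p > 0]))
+    return 2 * mi / (h(pa) + h(pb))
+
+def peer_review_scores(local_results, k):
+    """Eq. (5): scores given to 'author' modality k by every 'reviewer' modality i != k."""
+    return {i: nmi(T_i, local_results[k])
+            for i, T_i in enumerate(local_results) if i != k}
+```
+
+For example, a reviewer whose local clustering matches the author's exactly (up to label permutation) assigns score 1, while a statistically independent one assigns score 0.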
+
+Remark. There are many ways to attain the final peer-review scores; we select normalized mutual information as a typical metric. Additionally, although the peer-review score provides a quantitative tool for exploring modality correlations, it has limitations. Due to the varying quality of the modalities, a low-quality "reviewer" modality may give an inaccurate peer-review score to a relatively high-quality "author" modality. This may lead to a wrong evaluation of the importance of modalities and eventually harm the clustering performance. Hence, it is imperative to analyze and evaluate the trustworthiness of the peer-review scores given by the reviewer modalities to ensure their reliability.
+
+# 3.2. Trustworthy Score
+
+In this part, we propose to regard the final clustering assignment as the "EIC / AE", which is then used to evaluate the trustworthiness of the peer-review scores in each iteration in a self-supervised fashion. The final clustering assignment improves as the iterations proceed, leading to increasingly accurate trustworthy scores.
+
+Clustering divides samples into different clusters according to their features, which is similar to a two-layer decision tree: the input sample set is the root node, and each leaf node corresponds to one specific cluster of the reviewer modality. We therefore assess the trustworthiness of the reviewer modality by measuring cluster uncertainty, as defined below, much like evaluating the quality of a decision tree by measuring leaf purity.
+
+Definition 3.1 (Major/Minor Category). Given a multimodal dataset, when the local clustering result of a modality is supervised by the final clustering result, the category of the correctly assigned samples in a cluster of that local clustering result is called the major category, and the set of categories of the incorrectly assigned samples in it is called the minor categories.
+
+Definition 3.2 (Cluster Uncertainty). For one reviewer modality, if the probability of major category in a cluster is $p$ , then the cluster uncertainty can be measured by the following information entropy
+
+$$
+H (p) = - p \log_ {2} p - (1 - p) \log_ {2} (1 - p). \tag {6}
+$$
+
+The final clustering result cannot judge the local clustering result with complete accuracy, unlike the true labels. We can only expect the two to be as similar as possible, that is, the probability of the major category to be as large as possible. In a self-supervised scenario, the mixing of minor categories introduces interference when judging the information of the current cluster, so we use only $p$ and $1 - p$ to represent the cluster uncertainty. The following theorem proves this claim; its proof is given in Appendix A.1.
+
+Theorem 3.3. For an arbitrary cluster of a reviewer modality, the more mixed the minor categories are, the higher the
+
+
+Figure 4. A typical example of the "symmetry crisis" issue that the poor and better clusters may have the same cluster uncertainty.
+
+
+Figure 5. The solution of handling the "symmetry crisis" problem.
+
+entropy of the cluster becomes, resulting in high cluster uncertainty and interference in information judgment.
+
+Eq. (6) leads to an issue in which a poor cluster and a better cluster obtain the same value of Eq. (6); we call this the "symmetry crisis". A typical example is illustrated in Figure 4. To address this crisis, we assume that a cluster with more than half of its samples incorrectly assigned (i.e., less than half in the major category) is of low quality, and we directly set its cluster uncertainty to the maximum value, as shown in Figure 5. Finally, based on Definition 3.2, we define the following concept of cluster distortion to evaluate the quality of the reviewer modality.
+
+Definition 3.4 (Cluster Distortion). Let $U = \{u_1, u_2, \ldots, u_{|U|}\}$ denote the set of clusters in each reviewer modality and $C = \{c_1, c_2, \ldots, c_{|U|}\}$ denote the set of clusters in the final clustering result in each iteration, then the cluster distortion from $U$ to $C$ is given by
+
+$$
+d \left(u _ {q}, c _ {*}\right) = \left\{ \begin{array}{l l} 1, & \text {if } 0 \leq p < \frac {1}{2}, \\ H (p), & \text {if } \frac {1}{2} \leq p \leq 1, \end{array} \right. \tag {7}
+$$
+
+where $u_{q}$ ( $1 \leq q \leq |U|$ ) denotes an arbitrary cluster in the cluster set $U$ , $c_{*}$ denotes the major category of each cluster $u_{q}$ in $U$ , and $p = \frac{1}{|u_q|} |u_q \cap c_*|$ . Note that $d(u_{q},c_{*}) = 1$ indicates that a cluster dominated by incorrectly assigned samples has the largest distortion.
+
+Based on the distortion measurement Eq. (7) with respect to a cluster, the distortion between $U$ and $C$ is defined as follows
+
+$$
+D (U, C) = \frac {1}{| U |} \sum_ {q = 1} ^ {| U |} d \left(u _ {q}, c _ {*}\right). \tag {8}
+$$
+
+The above cluster distortion measures how much the local clustering result of a single reviewer modality deviates from the final clustering result, similar to the evaluation of reviewers by the EIC / AE in the peer-review mechanism. Notably, this approach is naturally consistent with self-supervised learning. Generally, the smaller the cluster distortion, the more reliable the reviewer modality. Formally, the trustworthy score is defined by
+
+$$
+\sigma_ {i} ^ {f} = \frac {1}{D \left(T ^ {i} , T ^ {f}\right)}, \tag {9}
+$$
+
+where $\sigma_i^f$ is the trustworthy score of the reviewer modality $V^i$ , $T^i$ is the local clustering result of the reviewer modality $V^i$ , and $T^f$ is the final clustering result. Similarly, all the trustworthy scores of the reviewer modalities reviewing the author modality $V^k$ can be represented as a vector
+
+$$
+\sigma^ {k} = \left\{\sigma_ {1} ^ {f}, \dots , \sigma_ {i} ^ {f}, \dots , \sigma_ {m} ^ {f} \right\}, i \neq k. \tag {10}
+$$
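+A minimal sketch of the trustworthy score pipeline of Eqs. (6)-(9), written by us for illustration (the helper names are assumptions), is:
+
+```python
+import numpy as np
+from collections import Counter
+
+def binary_entropy(p):
+    # H(p) of Eq. (6), with the convention 0 * log2(0) = 0
+    return -sum(q * np.log2(q) for q in (p, 1 - p) if q > 0)
+
+def trustworthy_score(local, final, eps=1e-12):
+    """Eq. (9): sigma = 1 / D(U, C), where D (Eq. 8) averages the
+    per-cluster distortion of Eq. (7) over the reviewer's clusters."""
+    local, final = np.asarray(local), np.asarray(final)
+    distortions = []
+    for u in np.unique(local):
+        members = final[local == u]
+        # p = share of the major category (most frequent final cluster) in u
+        p = Counter(members.tolist()).most_common(1)[0][1] / len(members)
+        # Eq. (7): maximal distortion when p < 1/2, binary entropy otherwise
+        distortions.append(1.0 if p < 0.5 else binary_entropy(p))
+    return 1.0 / (np.mean(distortions) + eps)  # eps guards perfect agreement
+```
+
+A reviewer modality whose local clusters agree perfectly with the final result has zero distortion and hence a very large trustworthy score, while a reviewer at chance level gets a score near 1.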
+
+# 3.3. Modality Weight Learning
+
+By jointly considering the peer-review and trustworthy score, we obtain the final modality weight with the inner product as follows
+
+$$
+w ^ {k} = \mu^ {k} \bullet \sigma^ {k} = \sum_ {i = 1, i \neq k} ^ {m} \mu_ {i} ^ {k} \cdot \sigma_ {i} ^ {f}, m > 2. \tag {11}
+$$
+
+Note that Eq. (11) is only applicable when there are more than two modalities. Next, we discuss in detail the circumstance where only two modalities are given.
+
+Two-modal Prejudice Processing. In academic peer review, unreliable reviewers may show an unreasonable dislike of, or preference for, the author's work. Similarly, in the multimodal setting, reviewer modalities tend to have modality prejudice. If there are more than two modalities, the prejudice is eliminated by the multiple reviewers and their trustworthy scores during modality weight learning. However, this no longer works when only two modalities are given.
+
+With only two modalities, one modality will inevitably be better or worse than the other, so they may have different trustworthy scores. As shown in Figure 6(a), with a single reviewer the modality prejudice cannot be eliminated, leading to an unreasonable modality weight. Similar to the scenario in which the EIC / AE makes the final decision by directly adopting his/her own review instead of the comments of an unreliable reviewer, we learn the modality weight by directly leveraging the trustworthy score given by the final clustering result to the author modality, as shown in Figure 6(b). In this case, the peer-review scores have no effect, as they take the same value due to the symmetry of the scoring function.
+
+
+(a)
+
+
+(b)
+
+
+Figure 6. Example of two-modal prejudice processing. (a) It is unreasonable that high-quality modality 1, with a high trustworthy score, learns a small weight. (b) $T^{f}$ gives the final modality weights by directly adopting its own review.
+
+Based on the above, we summarize the final modality weight learning as follows
+
+$$
+w ^ {k} = \left\{ \begin{array}{l l} \sum_ {i = 1, i \neq k} ^ {m} \mu_ {i} ^ {k} \cdot \sigma_ {k} ^ {f}, & \text {if } m = 2, \\ \sum_ {i = 1, i \neq k} ^ {m} \mu_ {i} ^ {k} \cdot \sigma_ {i} ^ {f}, & \text {if } m > 2. \end{array} \right. \tag {12}
+$$
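+The weight rule of Eq. (12), including the two-modal special case, can be sketched as follows (our own illustrative code; `mu[i, k]` is the score from reviewer $i$ to author $k$ and `sigma[i]` is $\sigma_i^f$ ):
+
+```python
+import numpy as np
+
+def modality_weights(mu, sigma):
+    """Eq. (12): weight of each 'author' modality k.
+    For m == 2 the author's own trustworthy score sigma[k] replaces the
+    single reviewer's score (two-modal prejudice processing)."""
+    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
+    m = len(sigma)
+    w = np.empty(m)
+    for k in range(m):
+        reviewers = [i for i in range(m) if i != k]
+        trust = (lambda i: sigma[k]) if m == 2 else (lambda i: sigma[i])
+        w[k] = sum(mu[i, k] * trust(i) for i in reviewers)
+    return w
+```
+
+Since the NMI score of Eq. (4) is symmetric, in the two-modal case $\mu_2^1 = \mu_1^2$ and the weight ratio $w^1 / w^2 = \sigma_1^f / \sigma_2^f$ , so only the trustworthy scores differentiate the two modalities, matching the discussion above.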
+
+# 3.4. The Objective Function
+
+Finally, with the above modality weight learning mechanism, we have the objective function with the popular information bottleneck realization as follows
+
+$$
+\mathcal {F} _ {\max } [ p (t | x) ] = \sum_ {i = 1} ^ {m} w ^ {i} \cdot \left[ I (T; Y ^ {i}) - \beta^ {- 1} I (T; X) \right], \tag {13}
+$$
+
+where $\beta \in (0, +\infty)$ balances information compression and preservation. Note that $\beta = +\infty$ corresponds to extreme compression, in which case the method is parameter-free.
+
+The advantages and possible weaknesses of the proposed method are discussed in Appendix B.
+
+# 3.5. Optimization Method
+
+We solve the optimization problem of PTIB with an effective sequential $k$ -means-like draw-and-merger algorithm (Lou et al., 2013; Hu et al., 2020; 2022), in which each sample is sequentially drawn from its old cluster and assigned to an optimal new cluster that minimizes the merger cost, thereby maximizing the objective function. The procedure is as follows:
+
+Weight Initialization. Modality weights are initialized with the initial peer-review and trustworthy scores.
+
+Random Clustering. The initial input, i.e., $X$ , is randomly partitioned to $|T|$ data clusters.
+
+Draw. Each sample $x$ is sequentially drawn from its "old" cluster $t^{old}$ of the $i$ -th modality, which is then taken as a separate cluster $\{x\}$ . Now, it leads to $|T| + 1$ data clusters.
+
+Merger. To restore the number of clusters to $|T|$ , a "new" cluster $t^{new}$ is selected from the existing data clusters, namely the one that incurs the minimal merger cost for the separate cluster $\{x\}$ . This ensures that the separate cluster $\{x\}$ is merged into the optimal candidate data cluster.
+
+Appendix A.2 shows a formal definition of the above "merger" and "merger cost" process. Appendix A.3 shows the details of algorithm and its computational complexity.
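+Under stated assumptions (a generic `merger_cost` stands in for the IB merger cost of Appendix A.2, which we do not reproduce here), the sequential draw-and-merger loop can be sketched as:
+
+```python
+import random
+
+def draw_and_merge(samples, n_clusters, merger_cost, n_sweeps=10, seed=0):
+    """Sequential draw-and-merger sketch: each sample is drawn from its
+    cluster and re-merged into the cluster of minimal merger cost.
+    merger_cost(x, cluster) is a placeholder for the true IB cost."""
+    rng = random.Random(seed)
+    # Random Clustering: random initial partition into n_clusters groups
+    assign = {x: rng.randrange(n_clusters) for x in samples}
+    for _ in range(n_sweeps):
+        changed = False
+        for x in samples:
+            old = assign.pop(x)          # Draw: remove x from its old cluster
+            clusters = [[y for y, t in assign.items() if t == c]
+                        for c in range(n_clusters)]
+            # Merger: put {x} into the cluster with minimal merger cost
+            new = min(range(n_clusters), key=lambda c: merger_cost(x, clusters[c]))
+            assign[x] = new
+            changed |= (new != old)
+        if not changed:                  # a full sweep changed nothing: converged
+            break
+    return assign
+```
+
+With the true merger cost, each merger never decreases the objective of Eq. (13), so the procedure converges to a local optimum.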
+
+# 4. Experiments
+
+# 4.1. Experiments Setup
+
+Eight multi-modal datasets are used, including 20NG, COIL20, Event, Soccer, 17Flowers, 75Flowers, COIL100 and MMI. The brief information of them is summarized in Table 1 and their details are shown in the Appendix C.1.
+
+We compare with 4 traditional single-modal clustering methods, including KM, Ncuts, KM-ALL and Ncuts-ALL, and 13 state-of-the-art multi-modal clustering methods, including MVIB, Co(reg), MfIB, RMSC, LMSC, MLAN, GMC, DMIB, FPMVS-CAG, MCMLE, TBGL, TIM and SMVAGC-SF. Their details are given in Appendix C.2.
+
+For all the compared methods, we adopt the parameter settings from their papers and report the best clustering results under the optimal setting on each dataset. For the proposed method, we search the parameters over the settings given in the parameter analysis section below. We run each experiment 10 times and report the average results and standard deviations in terms of Accuracy (Acc) and Normalized Mutual Information (NMI). All the compared methods and the proposed method are run in the same experimental environment: a desktop computer with the Windows 10 operating system, 32 GB RAM, and MATLAB 2021a.
+
+Due to limited space, the t-SNE visualization analysis is shown in Appendix C.3.
+
+Table 1. Brief Information of the Datasets
+
+| Dataset | Type | # Modalities | # Samples | # Clusters |
+| --- | --- | --- | --- | --- |
+| 20NG | Text | 3 | 500 | 5 |
+| COIL20 | Image | 3 | 1440 | 20 |
+| Event | Image | 3 | 1579 | 8 |
+| Soccer | Image | 3 | 280 | 7 |
+| 17Flowers | Image | 3 | 1360 | 17 |
+| 75Flowers | Image | 2 | 5514 | 75 |
+| COIL100 | Image | 2 | 7200 | 100 |
+| MMI | Video | 2 | 1760 | 22 |
+
+# 4.2. Results and Analysis
+
+We present the clustering results compared with state-of-the-art methods on the 8 multi-modal datasets in terms of Acc and NMI in Tables 2 and 3. From both tables, we make the following observations.
+
+Single-modal VS All-modal. Intuitively, concatenating the features from different modalities may improve the clustering quality in comparison to the single-modal clustering. However, from both tables, the clustering results degrade significantly on the multi-modal COIL20 and Soccer datasets. For instance, compared to Ncuts method on COIL20 dataset, the Acc and NMI values of Ncuts-All decrease by $28.55\%$ and $26.08\%$ , respectively. This clearly shows the instability of the all-modal methods and also reveals the necessity of multi-modal clustering.
+
+Single/all-modal VS Compared multi-modal. Overall, the compared MMCs beat the single/all-modal methods on most multi-modal datasets. Additionally, the second best results are always reached by a compared multi-modal method on every dataset. However, on some datasets certain MMCs are inferior to the single/all-modal methods, probably because they tend to be trapped in locally optimal solutions, leading to unsatisfactory results.
+
+Single/all-modal VS Ours. The proposed method outperforms the single/all-modal methods by a large margin on all the involved multi-modal datasets. For a notable example, our PTIB obtains an improvement of $57\%$ and $42.07\%$ in terms of Acc and NMI compared to the best values of single/all-modal methods on 20NG dataset. This phenomenon clearly demonstrates the superiority of the proposed method.
+
+Compared multi-modal VS Ours. Among the compared multi-modal IB-based methods, the second best results are reached twice, on the 17Flowers and MMI datasets. This is mainly because mutual information gives the IB-based methods a strong ability to quantify variable correlations, which the remaining compared methods lack. The compared multi-modal non-IB-based methods achieve the most second-best clustering
+
+Table 2. Clustering Results (+/-Standard Deviation) on First Four Datasets with SOTA Methods (● denotes the best result, ○ denotes the second best.)
+
+| Method | 20NG Acc | 20NG NMI | COIL20 Acc | COIL20 NMI | Event Acc | Event NMI | Soccer Acc | Soccer NMI |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| KM | 22.28±1.48 | 4.45±2.32 | 53.06±3.20 | 65.06±2.12 | 33.93±4.10 | 19.84±2.69 | 25.82±5.06 | 18.70±7.97 |
+| Ncuts (TPAMI'00) | 42.80±2.40 | 27.65±2.01 | 74.69±1.30 | 84.01±0.54 | 34.10±1.28 | 14.97±0.40 | 48.21±1.14 | 45.02±2.21 |
+| KM-All | 21.46±0.68 | 1.76±0.65 | 46.14±6.58 | 60.70±4.51 | 28.85±2.29 | 11.37±2.10 | 22.46±3.94 | 8.14±3.59 |
+| Ncuts-All (TPAMI'00) | 71.20±0.17 | 57.23±0.10 | 46.14±0.52 | 57.93±0.23 | 35.06±0.69 | 20.11±0.85 | 39.75±0.94 | 34.04±0.57 |
+| MVIB (DASFAA'07) | 94.22±1.37 | 83.21±3.18 | 61.74±10.51 | 73.65±6.63 | 40.02±2.04 | 23.71±1.56 | 35.79±3.96 | 21.42±4.25 |
+| Co(reg) (NeurIPS'11) | 20.02±0.62 | 3.15±0.54 | 64.33±1.68 | 83.79±0.45 | 38.58±0.92 | 24.30±0.55 | 24.13±0.53 | 11.43±0.39 |
+| MfIB (IJCAI'13) | 93.76±2.89 | 85.11±4.54 | 83.81±4.29 | 92.39±1.97 | 48.58±1.50 | 33.41±1.35 | 53.64±2.76 | 49.74±3.44 |
+| RMSC (AAAI'14) | 37.26±0.91 | 15.70±0.84 | 65.43±3.31 | 79.16±2.35 | 36.58±1.26 | 21.02±0.88 | 28.96±1.90 | 12.16±2.18 |
+| LMSC (CVPR'17) | 96.16±0.57 | 88.37±1.54 | 71.94±2.72 | 82.18±2.37 | 43.92±2.84 | 27.53±2.58 | 31.25±6.53 | 15.85±8.71 |
+| MLAN (TIP'18) | 96.40±0.11 | 89.18±0.17 | 87.22±2.30 | 94.35±1.10 | 19.90±0.72 | 6.66±0.80 | 28.21±0.01 | 21.27±0.17 |
+| GMC (TKDE'20) | 98.20±0.00 | 93.92±0.00 | 60.90±0.00 | 84.67±0.00 | 18.11±0.00 | 10.74±0.00 | 29.29±0.00 | 25.82±0.00 |
+| DMIB (TCYB'22) | 98.30±0.14 | 97.56±0.49 | 65.90±4.03 | 77.70±2.46 | 49.80±3.02 | 32.97±2.38 | 54.07±3.67 | 50.68±2.23 |
+| FPMVS-CAG (TIP'22) | 73.80±0.00 | 59.23±0.00 | 69.17±0.00 | 85.11±0.00 | 48.89±0.00 | 31.99±0.00 | 50.14±0.00 | 49.56±0.00 |
+| MCMLE (TPAMI'22) | 77.40±0.00 | 69.96±0.00 | 85.83±0.00 | 93.48±0.00 | 44.46±0.00 | 30.24±0.00 | 56.07±0.00 | 50.06±0.00 |
+| TBGL (TPAMI'23) | 89.11±0.00 | 83.45±0.00 | 86.10±0.00 | 92.41±0.00 | 42.84±0.00 | 28.40±0.00 | 54.39±0.00 | 49.78±0.00 |
+| TIM (TIP'23) | 99.40±0.00 | 98.08±0.00 | 56.70±4.08 | 71.39±0.29 | 54.60±2.50 | 36.86±1.75 | 48.93±0.51 | 41.42±4.09 |
+| SMVAGC-SF (TIP'24) | 86.07±6.40 | 72.61±3.59 | 75.66±5.10 | 89.43±2.11 | 54.76±1.27 | 36.97±0.65 | 45.14±1.56 | 29.61±1.85 |
+| PTIB | 99.80±0.00 | 99.30±0.00 | 93.33±0.00 | 96.46±0.00 | 60.24±0.16 | 45.36±0.28 | 62.86±0.17 | 53.23±0.16 |
+| Improve (● VS ○) | 0.40 (↑) | 1.22 (↑) | 6.11 (↑) | 2.11 (↑) | 5.48 (↑) | 8.39 (↑) | 6.79 (↑) | 2.55 (↑) |
+
+Table 3. Clustering Results (+/-Standard Deviation) on Remaining Four Datasets with SOTA Methods (● denotes the best result, ○ denotes the second best.)
+
+| Method | 17Flowers Acc | 17Flowers NMI | 75Flowers Acc | 75Flowers NMI | COIL100 Acc | COIL100 NMI | MMI Acc | MMI NMI |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| KM | 22.41±1.67 | 24.31±1.14 | 19.48±0.85 | 35.21±0.75 | 27.96±1.78 | 58.13±1.52 | 26.89±2.95 | 44.15±1.60 |
+| Ncuts (TPAMI'00) | 27.71±0.72 | 26.43±0.40 | 24.80±0.58 | 41.50±0.19 | 40.97±1.28 | 58.52±0.59 | 38.43±0.47 | 53.17±0.43 |
+| KM-All | 17.63±1.27 | 13.55±1.86 | 21.13±0.88 | 32.57±0.71 | 29.25±1.57 | 50.55±2.15 | 27.11±1.81 | 38.76±1.59 |
+| Ncuts-All (TPAMI'00) | 28.77±0.63 | 26.31±0.27 | 27.41±0.31 | 42.41±0.21 | 48.63±0.97 | 64.74±0.56 | 40.53±1.52 | 52.77±0.62 |
+| MVIB (DASFAA'07) | 21.32±1.05 | 18.28±1.48 | 18.49±0.61 | 33.05±0.45 | 46.71±2.30 | 70.29±1.10 | 44.95±2.60 | 54.65±1.49 |
+| Co(reg) (NeurIPS'11) | 26.28±0.49 | 27.12±0.20 | 28.16±0.36 | 44.95±0.09 | 48.35±0.44 | 70.86±0.15 | 34.72±0.53 | 51.31±0.22 |
+| MfIB (IJCAI'13) | 38.52±2.03 | 37.24±1.40 | 24.57±0.32 | 40.79±0.37 | 50.52±0.08 | 72.81±0.46 | 40.14±2.09 | 52.50±1.69 |
+| RMSC (AAAI'14) | 19.70±0.66 | 17.86±0.38 | 26.42±0.97 | 42.95±0.30 | 46.32±0.28 | 69.33±0.45 | 30.28±1.05 | 43.94±0.89 |
+| LMSC (CVPR'17) | 33.29±2.29 | 31.49±1.60 | 24.58±0.90 | 42.50±0.59 | 48.76±1.45 | 66.74±0.85 | 40.17±1.88 | 51.62±1.29 |
+| MLAN (TIP'18) | 24.32±1.91 | 22.21±1.24 | 25.58±0.53 | 34.16±1.15 | 45.05±0.41 | 59.55±0.53 | 38.15±0.05 | 52.68±0.04 |
+| GMC (TKDE'20) | 6.76±0.00 | 4.78±0.00 | 18.52±0.00 | 30.96±0.00 | 38.86±0.00 | 67.55±0.00 | 35.60±0.00 | 55.65±0.00 |
+| DMIB (TCYB'22) | 35.48±6.04 | 32.56±5.47 | 26.72±1.13 | 43.13±0.79 | 50.33±1.88 | 72.57±0.87 | 41.10±2.65 | 52.96±2.10 |
+| FPMVS-CAG (TIP'22) | 30.51±0.00 | 27.27±0.00 | 23.83±0.00 | 38.24±0.00 | 45.03±0.00 | 70.58±0.00 | 36.77±0.00 | 51.03±0.00 |
+| MCMLE (TPAMI'22) | 32.13±0.00 | 32.11±0.00 | 28.76±0.00 | 47.03±0.00 | 50.47±0.00 | 74.59±0.00 | 42.04±0.00 | 52.97±0.00 |
+| TBGL (TPAMI'23) | 31.07±0.00 | 32.46±0.00 | 26.52±0.00 | 47.09±0.00 | 51.66±0.00 | 67.82±0.00 | 43.15±0.00 | 53.27±0.00 |
+| TIM (TIP'23) | 32.98±3.28 | 29.36±3.60 | 21.83±0.60 | 26.23±1.24 | 51.43±1.72 | 74.98±0.70 | 28.98±1.57 | 39.56±2.96 |
+| SMVAGC-SF (TIP'24) | 42.41±2.07 | 36.40±1.43 | 31.92±0.63 | 46.89±0.22 | 56.78±1.93 | 76.78±0.55 | 40.93±2.14 | 53.03±1.25 |
+| PTIB | 45.29±0.05 | 42.49±0.08 | 35.73±0.36 | 51.91±0.20 | 61.17±0.23 | 82.86±0.19 | 48.30±0.80 | 60.39±0.49 |
+| Improve (● VS ○) | 2.88 (↑) | 5.25 (↑) | 3.81 (↑) | 4.82 (↑) | 4.39 (↑) | 6.08 (↑) | 3.35 (↑) | 4.74 (↑) |
+
+results, but still underperform the proposed method.
+
+Improvement Analysis. Compared with TIM, MLAN, SMVAGC-SF, MCMLE and MVIB, which obtained the second best results, the proposed method attains notable improvements of about $0.4\%$ , $6.11\%$ , $5.48\%$ , $6.79\%$ , $2.88\%$ , $3.81\%$ , $4.39\%$ and $3.35\%$ on the eight multi-modal datasets in terms of Acc, respectively. For weighted MMCs, the trustworthiness of the learned modality weights is significant for ensuring better clustering performance. The proposed method explicitly considers this vital factor by jointly incorporating the peer-review and trustworthy scores.
+
+# 4.3. Parameter Analysis
+
+There is only one parameter $\beta$ in the proposed PTIB method, with candidate settings [10, 50, 100, 500, 700, 900, 1000]. We then conduct extensive experiments on all the datasets with the different settings to investigate parameter sensitivity, and show the clustering performance in terms of both Acc and NMI in Figure 7.
+
+From the figure, we can clearly obtain the best parameter settings corresponding to the optimal clustering results for different datasets, which are [10, 50, 100, 500, 700, 900, 1000], [500, 700, 900, 1000], [50], [50, 100], [700, 900, 1000],
+
+
+
+
+
+
+
+
+
+
+Figure 7. Parameter study of our PTIB method on eight multi-modal datasets. Note that the values in the horizontal axis indicate the seven parameters [10, 50, 100, 500, 700, 900, 1000].
+
+
+
+
+
+
+
+Table 4. Clustering Results (+/-Standard Deviation) on multimodal Datasets with Parameter-free Version
+
+| Datasets | PTIB Acc | PTIB NMI | Parameter-free PTIB Acc | Parameter-free PTIB NMI | Versus Margin Acc | Versus Margin NMI |
+| --- | --- | --- | --- | --- | --- | --- |
+| 20NG | 99.80±0.00 | 99.30±0.00 | 99.80±0.00 | 99.30±0.01 | 0.00 | 0.00 |
+| COIL20 | 93.33±0.00 | 96.46±0.00 | 86.46±0.00 | 93.80±0.00 | -6.87 | -2.66 |
+| Event | 60.24±0.16 | 45.36±0.28 | 59.01±0.62 | 44.39±0.50 | -1.23 | -0.97 |
+| Soccer | 62.86±0.17 | 53.23±0.16 | 59.64±0.00 | 51.65±0.01 | -3.22 | -1.58 |
+| 17Flowers | 45.29±0.05 | 42.49±0.08 | 42.74±1.38 | 40.92±0.82 | -2.55 | -1.57 |
+| 75Flowers | 35.73±0.36 | 51.91±0.20 | 34.57±0.36 | 51.23±0.24 | -1.16 | -0.68 |
+| COIL100 | 61.17±0.23 | 82.86±0.19 | 59.93±0.61 | 82.24±0.30 | -1.24 | -0.62 |
+| MMI | 48.30±0.80 | 60.39±0.49 | 44.26±0.01 | 58.38±0.00 | -4.04 | -2.01 |
+
+[500], [50], and [100], respectively. It is observed that there is more than one best parameter value for most datasets, such as 20NG, COIL20, and 17Flowers. Additionally, for all the datasets, better results are obtained approximately within the range [500, 700, 900, 1000], which gives a wide range for practical parameter selection. All these observations reveal that parameter tuning in practice is not a heavy burden, and they also show the great potential of the proposed method for real-world applications.
+
+# 4.4. Potential for Parameter-free Version
+
+To further investigate the potential of the proposed method in practical applications, we set the parameter $\beta$ to $+\infty$ , leading to an elegant parameter-free version of our PTIB, formulated as $\mathcal{F}_{\max}[p(t|x)] = \sum_{i=1}^{m} w^i \cdot I(T;Y^i)$ . We then conduct a series of experiments on the eight multimodal datasets, as shown in Table 4.
+
+We also add a new column, versus margin, computed as the value of the parameter-free version minus that of the full PTIB method, to compare the difference between them. From
+
+the table, we observe that the versus margin is roughly $-2\%$ in most cases, and reaches about $-4\%$ on only a few datasets (e.g., $-6.87\%$ on the COIL20 dataset in terms of Acc). This demonstrates that the proposed method has great potential for real-world applications, especially in fields where parameters can hardly be tuned without prior knowledge.
+
+# 5. Conclusion
+
+Motivated by the peer-review mechanism in academia, in this article we propose a novel peer-review trustworthy information bottleneck (PTIB) method for the weighted multi-modal clustering problem. PTIB measures the modality correlations by iteratively learning the peer-review score of each modality, with each modality alternately playing the roles of "author" and "reviewer". Additionally, we design a trustworthy score to improve the reliability of the learned peer-review scores, leading to more accurate modality weights. Extensive experiments on 8 datasets reveal the superiority and effectiveness of the proposed PTIB. The proposed method also has some possible weaknesses. It is designed for fully aligned and complete multi-modal clustering, where no data samples across modalities are unaligned, missing, or damaged. It also requires the number of clusters in advance, like almost all existing multi-modal clustering methods. In the future, we will focus on addressing these weaknesses of PTIB and applying it to more real-world applications. Moreover, the "peer-review" idea may be considered for other problems, such as federated learning.
+
+# Acknowledgements
+
+The authors thank anonymous reviewers for their constructive comments. This work was supported by National Natural Science Foundation of China under Grant 62206254, Henan Province Outstanding Youth Science Fund Program under Grant 252300421223 and China Postdoctoral Science Foundation under Grant 2024T170843 and 2023M743186.
+
+# Impact Statement
+
+This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none which we feel must be specifically highlighted here.
+
+# References
+
+Bay, H., Tuytelaars, T., and Gool, L. V. SURF: speeded up robust features. In ECCV, pp. 404-417, 2006.
+Chen, M., Wang, C., and Lai, J. Low-rank tensor based proximity learning for multi-view clustering. IEEE TKDE, 35 (5):5076-5090, 2023.
+Cover, T. M. and Thomas, J. A. Elements of information theory. Wiley, 2006.
+Federici, M., Dutta, A., Forre, P., Kushman, N., and Akata, Z. Learning robust representations via multi-view information bottleneck. In ICLR, 2020.
+Fei-Fei, L. and Perona, P. A bayesian hierarchical model for learning natural scene categories. In CVPR, pp. 524-531, 2005.
+Gao, Y., Gu, S., Li, J., and Liao, Z. The multi-view information bottleneck clustering. In DASFAA, pp. 912-917, 2007.
+Gu, Z., Li, Z., and Feng, S. EDISON: enhanced dictionary-induced tensorized incomplete multi-view clustering with gaussian error rank minimization. In ICML, 2024.
+Han, Z., Zhang, C., Fu, H., and Zhou, J. T. Trusted multiview classification. In ICLR, 2021.
+Han, Z., Zhang, C., Fu, H., and Zhou, J. T. Trusted multiview classification with dynamic evidential fusion. IEEE TPAMI, 45(2):2551-2566, 2023.
+He, Y. and Atia, G. K. Multi-mode tensor space clustering based on low-tensor-rank representation. In AAAI, pp. 6893-6901, 2022.
+Hu, J., Yang, C., Huang, K., Wang, H., Peng, B., and Li, T. Information bottleneck fusion for deep multi-view clustering. KBS, 289:111551, 2024a.
+
+Hu, S., Yan, X., and Ye, Y. Joint specific and correlated information exploration for multi-view action clustering. Inf. Sci., 524:148-164, 2020.
+Hu, S., Shi, Z., and Ye, Y. DMIB: dual-correlated multivariate information bottleneck for multiview clustering. IEEE Trans. Cybern., 52(6):4260-4274, 2022.
+Hu, S., Lou, Z., Yan, X., and Ye, Y. A survey on information bottleneck. IEEE TPAMI, 46(8):5325-5344, 2024b.
+Hu, S., Fan, J., Zou, G., and Ye, Y. Multi-aspect self-guided deep information bottleneck for multi-modal clustering. In AAAI, pp. 17314-17322, 2025.
+Huang, Z., Ren, Y., Pu, X., Huang, S., Xu, Z., and He, L. Self-supervised graph attention networks for deep weighted multi-view clustering. In AAAI, pp. 7936-7943, 2023.
+Khan, F. S., van de Weijer, J., and Vanrell, M. Top-down color attention for object recognition. In ICCV, pp. 979-986, 2009.
+Kumar, A., Rai, P., and Daumé III, H. Co-regularized multi-view spectral clustering. In NeurIPS, pp. 1413–1421, 2011.
+Li, Z., Tang, C., Liu, X., Zheng, X., Zhang, W., and Zhu, E. Consensus graph learning for multi-view clustering. IEEE TMM, 24:2461-2472, 2022.
+Liang, W., Zhu, E., Yu, S., Xu, H., Zhu, X., and Liu, X. Scalable multiple kernel clustering: Learning clustering structure from expectation. In ICML, 2024.
+Lin, J. Divergence measures based on the Shannon entropy. IEEE Trans. Inf. Theory, 37(1):145-151, 1991.
+Liu, X., Liu, L., Liao, Q., Wang, S., Zhang, Y., Tu, W., Tang, C., Liu, J., and Zhu, E. One pass late fusion multi-view clustering. In ICML, pp. 6850-6859, 2021.
+Lou, Z., Ye, Y., and Yan, X. The multi-feature information bottleneck with application to unsupervised image categorization. In IJCAI, pp. 1508-1515, 2013.
+Lowe, D. G. Distinctive image features from scale-invariant keypoints. IJCV, 60(2):91-110, 2004.
+Nie, F., Cai, G., Li, J., and Li, X. Auto-weighted multiview learning for image clustering and semi-supervised classification. IEEE TIP, 27(3):1501-1511, 2018.
+Raya, S., Orabi, M., Afyouni, I., and Aghbari, Z. A. Multimodal data clustering using deep learning: A systematic review. Neurocomputing, 607:128348, 2024.
+Shi, J. and Malik, J. Normalized cuts and image segmentation. IEEE TPAMI, 22(8):888-905, 2000.
+
+Tishby, N., Pereira, F. C., and Bialek, W. The information bottleneck method. In Proc. Annual Allerton Conf. Communication, Control, and Computing, pp. 368-377, 1999.
+Wang, H., Yang, Y., and Liu, B. GMC: graph-based multiview clustering. IEEE TKDE, 32(6):1116-1129, 2020a.
+Wang, Q., Chen, M., Nie, F., and Li, X. Detecting coherent groups in crowd scenes by multiview clustering. IEEE TPAMI, 42(1):46-58, 2020b.
+Wang, S., Liu, X., Zhu, X., Zhang, P., Zhang, Y., Gao, F., and Zhu, E. Fast parameter-free multi-view subspace clustering with consensus anchor guidance. IEEE TIP, 31:556-568, 2022.
+Wang, S., Liu, X., Liu, S., Tu, W., and Zhu, E. Scalable and structural multi-view graph clustering with adaptive anchor fusion. IEEE TIP, 33:4627-4639, 2024.
+Wen, J., Zhang, Z., Fei, L., Zhang, B., Xu, Y., Zhang, Z., and Li, J. A survey on incomplete multiview clustering. IEEE Trans. Syst. Man Cybern. Syst., 53(2):1136-1149, 2023.
+Wen, J., Xu, G., Tang, Z., Wang, W., Fei, L., and Xu, Y. Graph regularized and feature aware matrix factorization for robust incomplete multi-view clustering. IEEE TCSVT, 34(5):3728-3741, 2024.
+Wolf, L., Hassner, T., and Taigman, Y. Descriptor based methods in the wild. In Real-Life Images workshop at the Eur. Conf. Computer Vis., 2008.
+Xia, R., Pan, Y., Du, L., and Yin, J. Robust multi-view spectral clustering via low-rank and sparse decomposition. In AAAI, pp. 2149-2155, 2014.
+Xia, W., Gao, Q., Wang, Q., Gao, X., Ding, C., and Tao, D. Tensorized bipartite graph learning for multi-view clustering. IEEE TPAMI, 45(4):5187-5202, 2023a.
+Xia, W., Wang, T., Gao, Q., Yang, M., and Gao, X. Graph embedding contrastive multi-modal representation learning for clustering. IEEE TIP, 32:1170-1183, 2023b.
+Xu, J., Ren, Y., Tang, H., Yang, Z., Pan, L., Yang, Y., Pu, X., Yu, P. S., and He, L. Self-supervised discriminative feature learning for deep multi-view clustering. IEEE TKDE, 35(7):7470-7482, 2023.
+Xu, Y., Wang, C., and Lai, J. Weighted multi-view clustering with feature selection. PR, 53:25-35, 2016.
+Yan, X., Mao, Y., Ye, Y., and Yu, H. Cross-modal clustering with deep correlated information bottleneck method. IEEE TNNLS, 35(10):13508-13522, 2024.
+
+Zhang, C., Hu, Q., Fu, H., Zhu, P., and Cao, X. Latent multi-view subspace clustering. In CVPR, pp. 4333-4341, 2017.
+Zhang, C., Lou, Z., Zhou, Q., and Hu, S. Multi-view clustering via triplex information maximization. IEEE TIP, 32:4299-4313, 2023.
+Zhang, C., Xue, H., Nie, K., Wu, X., Lou, Z., Yang, S., Zhou, Q., and Hu, S. Nice to meet images with big clusters and features: A cluster-weighted multi-modal co-clustering method. IPM, 61(5):103735, 2024.
+Zhang, Z., Liu, L., Shen, F., Shen, H. T., and Shao, L. Binary multi-view clustering. IEEE TPAMI, 41(7):1774-1782, 2019.
+Zhao, Q., Zong, L., Zhang, X., Liu, X., and Yu, H. Multiview clustering via clusterwise weights learning. KBS, 193:105459, 2020.
+Zheng, X., Tang, C., Wan, Z., Hu, C., and Zhang, W. Multi-level confidence learning for trustworthy multimodal classification. In AAAI, pp. 11381-11389, 2023.
+Zhong, G. and Pun, C. Improved normalized cut for multiview clustering. IEEE TPAMI, 44(12):10244-10251, 2022.
+Zhou, R. and Shen, Y. End-to-end adversarial-attention network for multi-modal clustering. In CVPR, pp. 14607-14616, 2020.
+Zou, X., Tang, C., Zheng, X., Li, Z., He, X., An, S., and Liu, X. DPNET: dynamic poly-attention network for trustworthy multi-modal classification. In ACM MM, pp. 3550-3559, 2023.
+
+# A. Theorem Proof and Optimization
+
+# A.1. Proof of Theorem 3.3
+
+Proof. Given a dataset whose ground truth contains $\theta$ categories and a specific clustering result, consider a certain cluster: let $a$ denote the probability of its major category, so that the probabilities of the minor categories sum to $1 - a$, yielding the first probability distribution $\{a, 1 - a\}$. The minor categories are assumed to follow an arbitrary probability distribution $B = \{b_1, b_2, \ldots, b_{\theta - 1}\}$, i.e., $\sum_j b_j = 1$, yielding the second probability distribution $\{a, b_1(1 - a), b_2(1 - a), \ldots, b_{\theta - 1}(1 - a)\}$. Subtracting the entropy of the first distribution from that of the second gives
+
+$$
+\begin{aligned} &H_2\left(\{a, b_1(1-a), b_2(1-a), \ldots, b_{\theta-1}(1-a)\}\right) - H_1\left(\{a, 1-a\}\right) \\ &= -a\log_2 a - \sum_{j=1}^{\theta-1} b_j(1-a)\log_2\left[b_j(1-a)\right] + a\log_2 a + (1-a)\log_2(1-a) \\ &= (1-a)\log_2(1-a) - \sum_{j=1}^{\theta-1} b_j(1-a)\left[\log_2 b_j + \log_2(1-a)\right] \\ &= (1-a)\log_2(1-a) - (1-a)\sum_{j=1}^{\theta-1} b_j\log_2 b_j - (1-a)\log_2(1-a) \\ &= (1-a)\left[-\sum_{j=1}^{\theta-1} b_j\log_2 b_j\right] \\ &= (1-a)\,H(B) \end{aligned} \tag{14}
+$$
+
+Since $1 - a \geq 0$ and $0 \leq H(B) \leq \log_2(\theta - 1)$, we have $0 \leq H_2 - H_1 \leq (1 - a)\log_2(\theta - 1)$, which clearly shows that the more mixed the minor categories are, the higher the entropy becomes, resulting in higher uncertainty of the cluster. In fact, $H(B)$ is the noisy and uncertain information conveyed by the minor categories, which interferes with the judgment of useful information.
+
+This proves the theorem.
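The identity proved above is easy to check numerically; the sketch below uses arbitrary illustrative values of $a$ and $B$ (not taken from the paper):

```python
import math

def entropy(p):
    """Shannon entropy (base 2) of a discrete distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

a = 0.6                     # probability of the major category
B = [0.5, 0.3, 0.2]         # arbitrary minor-category distribution (theta - 1 = 3)

H1 = entropy([a, 1 - a])
H2 = entropy([a] + [b * (1 - a) for b in B])

# The identity proved above: H2 - H1 = (1 - a) H(B) ...
assert abs((H2 - H1) - (1 - a) * entropy(B)) < 1e-12
# ... and the bound 0 <= H2 - H1 <= (1 - a) log2(theta - 1)
assert 0.0 <= H2 - H1 <= (1 - a) * math.log2(len(B)) + 1e-12
```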
+
+# A.2. The Formal Definition of the Optimization Process
+
+Proposition A.1. [Merger] (Hu et al., 2020) If the separate cluster $\{x\}$ is merged into a specific cluster $t$ in the $i$-th modality, a new merged cluster, denoted $\hat{t}$, is obtained. This process is formulated as follows:
+
+$$
+\left\{ \begin{array}{l} p(\hat{t}) = p(x) + p(t) \\ p\left(y^{i} | \hat{t}\right) = \pi_{1} \cdot p\left(y^{i} | x\right) + \pi_{2} \cdot p\left(y^{i} | t\right) \end{array} \right. \tag{15}
+$$
+
+where $p(y^i | x)$ indicates the feature conditional distribution of the $i$ -th modality, $p(y^i | t)$ indicates the cluster centroid of the $i$ -th modality, and the merger function $\Pi = \{\pi_1, \pi_2\} = \{\frac{p(x)}{p(\hat{t})}, \frac{p(t)}{p(\hat{t})}\}$ .
+
+Proof. For the detailed proof, refer to Proposition 1 in (Hu et al., 2020).
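
As a quick illustration of Eq. (15), the merged prior and centroid can be computed directly; the numeric values below are hypothetical toy inputs:

```python
# Hypothetical toy values illustrating the merger of Eq. (15)
p_x, p_t = 0.1, 0.3                        # priors of {x} and t
p_y_given_x = [0.7, 0.2, 0.1]              # feature conditional of {x}
p_y_given_t = [0.2, 0.5, 0.3]              # centroid of cluster t

p_t_hat = p_x + p_t                        # p(t_hat) = p(x) + p(t)
pi1, pi2 = p_x / p_t_hat, p_t / p_t_hat    # merger function Pi
p_y_given_t_hat = [pi1 * a + pi2 * b
                   for a, b in zip(p_y_given_x, p_y_given_t)]

# The merged conditional remains a valid probability distribution
assert abs(sum(p_y_given_t_hat) - 1.0) < 1e-12
```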
+
+To maximize the objective in Eq. (13), we select the optimal cluster $t^{new}$ for each merger, ensuring that the "merger cost", i.e., the change in the value of Eq. (13), is always minimal. This is formulated by
+
+$$
+t^{\mathrm{new}} = \arg \min \left(\Delta \mathcal{F}_{\max}\right) = \arg \min \left(\mathcal{F}_{\max}^{\mathrm{bef}} - \mathcal{F}_{\max}^{\mathrm{aft}}\right), \tag{16}
+$$
+
+where $\mathcal{F}_{max}^{bef}$ and $\mathcal{F}_{max}^{aft}$ indicate the values of Eq. (13) before and after the merger process, respectively.
+
+The above formulation significantly affects whether a good or a bad new cluster $t^{new}$ is selected. The merger cost is therefore characterized by the following theorem.
+
+Algorithm 1 The Proposed PTIB
+1: Input: $m$ joint distributions $\{p(X,Y^i)\}_{i=1}^m$ , the number of clusters $|T|$ , the balance parameter $\beta$ .
+2: Output: Final clustering result $p(t|x)$ .
+3: Modality Weight Initialization: Compute the initial modality weights with initial peer-review and trustworthy score;
+4: Random Clustering: $T \gets$ Random partition of $\mathcal{X}$ into $|T|$ clusters;
+5: repeat
+6: for all $x \in \mathcal{X}$ do
+7: Draw: Draw $x$ from the "old" cluster $t^{old}$ to become a separate cluster $\{x\}$ ;
+8: Merger: Select a "new" cluster $t^{new}$ for the separate cluster $\{x\}$ to merge corresponding to the minimal merger cost in Theorem A.2;
+9: end for
+10: Update the trustworthy score using the clustering result in each iteration;
+11: Update the weight for each modality;
+12: until Samples in different clusters remain unchanged or a fixed number of iterations.
+
+Theorem A.2. [Merger Cost] (Hu et al., 2020) Given two clusters $\{x\}$ and $t$ , we have the merger cost as
+
+$$
+\Delta \mathcal{F}_{\max}(\{x\}, t) = p(\hat{t}) \cdot \operatorname{dist}(\{x\}, t), \tag{17}
+$$
+
+where
+
+$$
+\operatorname{dist}(\{x\}, t) = \sum_{i=1}^{m} w^{i} \left[ JS_{\Pi}\left[p(y^{i}|x), p(y^{i}|t)\right] - \beta^{-1} JS_{\Pi}\left[p(x|\{x\}), p(x|t)\right] \right],
+$$
+
+where $JS$ is the Jensen-Shannon divergence (Lin, 1991).
+
+Proof. For the detailed proof, refer to Theorem 1 in (Hu et al., 2020).
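
The weighted Jensen-Shannon terms in Eq. (17) take only a few lines to compute; the sketch below (with illustrative distributions and weights) shows the $JS_{\Pi}$ divergence used in the merger cost:

```python
import math

def kl_div(p, q):
    """KL divergence in bits; assumes q[i] > 0 wherever p[i] > 0."""
    return sum(a * math.log2(a / b) for a, b in zip(p, q) if a > 0)

def js_pi(p, q, pi1, pi2):
    """Weighted Jensen-Shannon divergence JS_Pi[p, q], with weights
    Pi = {pi1, pi2} given by the merger function of Proposition A.1."""
    mix = [pi1 * a + pi2 * b for a, b in zip(p, q)]
    return pi1 * kl_div(p, mix) + pi2 * kl_div(q, mix)

p = [0.7, 0.2, 0.1]
q = [0.2, 0.5, 0.3]
assert js_pi(p, p, 0.5, 0.5) == 0.0      # vanishes for identical inputs
assert js_pi(p, q, 0.25, 0.75) > 0.0     # non-negative in general
```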
+
+# A.3. Algorithm and Computational Complexity
+
+Algorithm 1 details the optimization process; we now analyze its computational complexity. At step 8 of Algorithm 1, given a sample $x$, the merger cost $\Delta \mathcal{F}_{max}(\{x\}, t)$ is computed for every "new" cluster $t$ to find the minimal one, which takes $O(|T||X|(|Y^1| + |Y^2| + \cdots + |Y^m|))$. Repeating until the samples in different clusters remain unchanged, or for a fixed number of iterations, takes $O(r|T||X|(|Y^1| + |Y^2| + \cdots + |Y^m|))$ in total, where $r$ is the number of repetitions. Since the number of clusters is generally treated as a constant, the overall computational complexity is $O(r|X|(|Y^1| + |Y^2| + \cdots + |Y^m|))$.
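
The draw-and-merge loop of Algorithm 1 can be sketched as follows. This is a simplified illustration: the modality weight and trustworthy score updates (steps 3, 10-11) are omitted, and `merger_cost` is an assumed callable standing in for Eq. (17):

```python
import random

def sequential_ib(X, n_clusters, merger_cost, max_iter=100):
    """Draw-and-merge optimization sketch of Algorithm 1 (weight updates
    omitted); merger_cost(x, cluster) is assumed to implement Eq. (17)."""
    clusters = {t: set() for t in range(n_clusters)}
    for x in X:                               # step 4: random partition
        clusters[random.randrange(n_clusters)].add(x)

    for _ in range(max_iter):                 # steps 5-12
        changed = False
        for x in X:
            old = next(t for t, members in clusters.items() if x in members)
            clusters[old].discard(x)          # step 7: draw x out
            new = min(clusters, key=lambda t: merger_cost(x, clusters[t]))
            clusters[new].add(x)              # step 8: minimal-cost merger
            changed |= (new != old)
        if not changed:                       # step 12: assignments stable
            break
    return clusters
```

With a toy one-dimensional cost such as the distance of `x` to a cluster's mean, the loop partitions well-separated points into well-separated clusters.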
+
+# B. Discussion
+
+In this section, we first discuss the differences and advantages of the proposed method relative to existing multi-modal clustering methods, and then analyze its weaknesses.
+
+The main strengths of the PTIB method in comparison with existing methods are as follows.
+
+- Trustworthy weight learning. To our knowledge, none of the existing weighted MMCs employ a trustworthy strategy in the weight learning process, which may lead to inaccurate modality weights. Unlike them, we learn trustworthy modality weights in an iterative optimization process.
+- Correlation quantization based learning. In this paper, we focus on quantizing modality correlations using mutual information, whereas most related existing methods rely on per-modality self-evaluation for weight learning, e.g., using the objective function value of each modality for complementary information learning.
+- Parameter-free weight learning. Most weighted MMCs adopt one or more regularization parameters to control the learned weight distribution, and these parameters are difficult to tune in practice. In contrast, the weight learning process in this paper is completely parameter-free and requires no prior knowledge.
+
+
+Figure 8. Some typical images and videos from the COIL20, Soccer, 17Flowers and MMI datasets.
+
+- Self-supervision mechanism. A self-supervised learning mechanism (Xu et al., 2023) is incorporated into the proposed method to guide the modality weight learning. In this way, the clustering structure learning and the weight learning mutually benefit from each other.
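
A minimal sketch of mutual-information-based modality weighting is given below. The helper names and the normalization rule are illustrative assumptions; the paper's actual peer-review and trustworthy scores are defined in the main text:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) in bits, estimated from paired discrete labels."""
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def modality_weights(consensus, modality_assignments):
    """Weight each modality by the mutual information between its own
    assignment and the consensus clustering, normalized to sum to one."""
    scores = [mutual_information(consensus, m) for m in modality_assignments]
    total = sum(scores)
    return ([s / total for s in scores] if total > 0
            else [1.0 / len(scores)] * len(scores))

# A modality agreeing with the consensus outweighs an uninformative one.
w = modality_weights([0, 0, 1, 1], [[0, 0, 1, 1], [0, 1, 0, 1]])
assert w[0] > w[1] and abs(sum(w) - 1.0) < 1e-12
```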
+
+Some possible weaknesses of the proposed method are revealed in the following.
+
+- Fully aligned multi-modal clustering. The proposed method works under the assumption that each data sample is fully aligned across modalities. For example, in multi-feature image data, the first sample described with one kind of feature, e.g., shape, must be aligned with the same sample described with another feature, e.g., color.
+- Complete multi-modal clustering. The proposed method can only solve the complete multi-modal clustering problem where none of the data samples across modalities are missing or damaged. Incomplete multi-modal clustering (Wen et al., 2023; 2024) or its self-supervised version (Huang et al., 2023) has attracted lots of attention and is worth considering in the future.
+- Given number of clusters in advance. Like almost all the existing multi-modal clustering methods, the proposed method requires the number of clusters of the dataset in advance, which may limit its wide applications in completely unknown areas.
+
+# C. More Experimental Details
+
+# C.1. The Detail of Datasets
+
+Here, we provide detailed descriptions of the datasets used in the experimental part of this paper; some exemplar samples from the image and video data are shown in Figure 8.
+
+20NG dataset is composed of 500 newsgroup documents extracted from the 20 Newsgroup dataset. Three different pre-processing methods provide the modalities for each document.
+
+COIL20 dataset has 1440 images of 20 objects, where each object has 72 images taken at $5^{\circ}$ intervals over its $360^{\circ}$ horizontal rotation. Three features are adopted for shape, color and texture representation, i.e., SIFT (Lowe, 2004), Color Attention (Khan et al., 2009), and TPLBP (Wolf et al., 2008), respectively. Each feature represents a single modality.
+
+Event dataset contains eight kinds of sports event classes with 1579 images. The features are the same as those of the COIL20 dataset.
+
+Soccer dataset contains 280 images of 7 soccer teams collected from websites. Features extracted in three ways, i.e., SIFT (Lowe, 2004), Color Attention (Khan et al., 2009), and TPLBP (Wolf et al., 2008), are used as three modalities.
+
+17Flowers dataset consists of 1360 images of flowers belonging to 17 different classes, each of which has 80 images. Three kinds of features, i.e., SURF (Bay et al., 2006), Color Attention (Khan et al., 2009), and TPLBP (Wolf et al., 2008), are used as three modalities. This dataset is challenging for clustering because there are large variations within each class and close similarities across classes.
+
+75Flowers dataset is selected from the 102 Flowers dataset. The images contain large variations in posture and lighting. This dataset contains two modalities, obtained by using SIFT (Lowe, 2004) and Color Attention (Khan et al., 2009) as shape and color feature extractors, respectively.
+
+COIL100 dataset consists of images from 100 objects, where each object has 72 images. We use two features extracted by SIFT (Lowe, 2004) and SURF (Bay et al., 2006) as two modalities.
+
+MMI dataset has 1760 samples of 22 challenging multi-modal (RGB, depth and skeleton) interactive human actions captured from two views (front and side), collected in cluttered scenes. Note that we adopt the RGB action videos with two modalities for experiments.
+
+# C.2. The Detail of Comparison Methods
+
+1. KM, Ncuts (Shi & Malik, 2000): KM ( $k$ -means) and Ncuts are traditional single-modal clustering methods, and the best results are reported among different modalities for each dataset.
+2. KM-All, Ncuts-All: Both of the methods are built by applying their single-modal version on the multi-modal datasets with concatenated features.
+3. MVIB (Gao et al., 2007): It is the first multi-view IB method, proposed to address document clustering problems on web data; it works by designing a compatible constraint to ensure the consistency among view assignments.
+4. Co(reg) (Kumar et al., 2011): It co-regularizes the data clustering hypotheses among views to learn consistent assignments based on a spectral model.
+5. MfIB (Lou et al., 2013): It is a weighted multi-feature IB method designed for unsupervised image classification, where the weights are set manually.
+6. RMsc (Xia et al., 2014): It solves the noisy multi-view clustering problem by designing a robust spectral method.
+7. LMSC (Zhang et al., 2017): It learns latent shared representations among views to make the feature subspace more robust and accurate.
+8. MLAN (Nie et al., 2018): It jointly learns the local structure and clustering assignments, and then automatically tunes the view weights without using parameters.
+9. GMC (Wang et al., 2020a): It is a graph-based weighted multi-view clustering method by automatically tuning the algorithm parameters.
+10. DMIB (Hu et al., 2022): It jointly takes into account dual view correlations, i.e., cross-feature and cross-cluster correlations, for multi-view clustering based on IB theory.
+11. FPMVS-CAG (Wang et al., 2022): It deals with the multi-view subspace clustering problem by a fast parameter-free method with the guidance of selected consensus anchors.
+12. MCMLE (Zhong & Pun, 2022): It improves the traditional Ncuts method for multi-view clustering by Laplacian embedding to learn a shared binary assignment matrix among different modalities.
+
+
+Figure 9. t-SNE visualization results on the 20NG (views 1-3), COIL20 (views 1-3) and MMI (views 1-2) datasets.
+
+13. TBGL (Xia et al., 2023a): It focuses on learning tensorized bipartite graphs for clustering multi-view datasets by simultaneously considering the intra/inter-view similarities.
+14. TIM (Zhang et al., 2023): It is an information-theoretical method for solving the multi-view clustering problem, and works by following three principles, i.e., contained, complementary and compatible principle.
+15. SMVAGC-SF (Wang et al., 2024): It jointly optimizes anchor graph construction and graph alignment, and adaptively fuses multiple anchor graphs with different magnitudes to improve the quality of multi-view clustering.
+
+# C.3. t-SNE Visualization Analysis
+
+To further illustrate the learned clustering structure, we show the $t$-SNE visualizations of the clustering results in Figure 9 on three typical multi-modal datasets, i.e., 20NG, COIL20 and MMI. From this figure, it is observed that the visualizations of most modalities of the involved datasets exhibit a relatively compact and well-separated clustering structure. As a typical example, for modalities 2 and 3 of the COIL20 dataset, the data clusters with different colors are quite clear: data samples in most clusters are densely distributed, while samples from different clusters are far apart.
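
Such visualizations can be reproduced with scikit-learn's t-SNE; the sketch below uses synthetic stand-in data (in practice, `X` would hold one modality's features and `labels` the learned assignment $p(t|x)$):

```python
# Synthetic stand-in data for a t-SNE visualization of clustering results
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.5, size=(30, 16)) for c in (0.0, 5.0)])
labels = np.repeat([0, 1], 30)              # cluster assignments

emb = TSNE(n_components=2, perplexity=10.0, init="random",
           random_state=0).fit_transform(X)
assert emb.shape == (60, 2)
# Plot with, e.g., matplotlib: plt.scatter(emb[:, 0], emb[:, 1], c=labels)
```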
\ No newline at end of file
diff --git a/apeerreviewlookonmultimodalclusteringaninformationbottleneckrealizationmethod/images.zip b/apeerreviewlookonmultimodalclusteringaninformationbottleneckrealizationmethod/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..4ffe00f3cf38519a9b8e90173190da02b5eb7316
--- /dev/null
+++ b/apeerreviewlookonmultimodalclusteringaninformationbottleneckrealizationmethod/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7404f05283d56dfe573d93aac131475cd4a097e9a61bfd5ebf93745aae937299
+size 1002210
diff --git a/apeerreviewlookonmultimodalclusteringaninformationbottleneckrealizationmethod/layout.json b/apeerreviewlookonmultimodalclusteringaninformationbottleneckrealizationmethod/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..c4273516396cfe7e16b2161570b667e97ea13702
--- /dev/null
+++ b/apeerreviewlookonmultimodalclusteringaninformationbottleneckrealizationmethod/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:133796db91dee0099e6bc954283dae7c53e7815556d8cb00fcf3597991f6713e
+size 645908
diff --git a/aphysicsaugmenteddeeplearningframeworkforclassifyingsinglemoleculeforcespectroscopydata/f036bef0-1eee-4181-8666-2706c8fcec58_content_list.json b/aphysicsaugmenteddeeplearningframeworkforclassifyingsinglemoleculeforcespectroscopydata/f036bef0-1eee-4181-8666-2706c8fcec58_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..afe2b4e6e5eb6339d335cdefca3602c41c337542
--- /dev/null
+++ b/aphysicsaugmenteddeeplearningframeworkforclassifyingsinglemoleculeforcespectroscopydata/f036bef0-1eee-4181-8666-2706c8fcec58_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:44eb2c82c4ab768463736684f2e5f152241df8f14d2decb0e7a4127b30d4a40d
+size 163809
diff --git a/aphysicsaugmenteddeeplearningframeworkforclassifyingsinglemoleculeforcespectroscopydata/f036bef0-1eee-4181-8666-2706c8fcec58_model.json b/aphysicsaugmenteddeeplearningframeworkforclassifyingsinglemoleculeforcespectroscopydata/f036bef0-1eee-4181-8666-2706c8fcec58_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..8547f9d89eeff7d7a55a7adf50e5abc8b08ed4ed
--- /dev/null
+++ b/aphysicsaugmenteddeeplearningframeworkforclassifyingsinglemoleculeforcespectroscopydata/f036bef0-1eee-4181-8666-2706c8fcec58_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d75dbd79b073cb1e4bcee785c574bec457b4121320edb6dd0c9d7540cc2d0684
+size 194316
diff --git a/aphysicsaugmenteddeeplearningframeworkforclassifyingsinglemoleculeforcespectroscopydata/f036bef0-1eee-4181-8666-2706c8fcec58_origin.pdf b/aphysicsaugmenteddeeplearningframeworkforclassifyingsinglemoleculeforcespectroscopydata/f036bef0-1eee-4181-8666-2706c8fcec58_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..bcfb39accad103cda08e2300da827b7c14d1aeb2
--- /dev/null
+++ b/aphysicsaugmenteddeeplearningframeworkforclassifyingsinglemoleculeforcespectroscopydata/f036bef0-1eee-4181-8666-2706c8fcec58_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e4a76ec1607a56835b9105162aba90aadcda3446946c05949fac88c14a51c1df
+size 2965511
diff --git a/aphysicsaugmenteddeeplearningframeworkforclassifyingsinglemoleculeforcespectroscopydata/full.md b/aphysicsaugmenteddeeplearningframeworkforclassifyingsinglemoleculeforcespectroscopydata/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..2f8853a5c6410cabe71ca21b554c8bbc028fc2f8
--- /dev/null
+++ b/aphysicsaugmenteddeeplearningframeworkforclassifyingsinglemoleculeforcespectroscopydata/full.md
@@ -0,0 +1,664 @@
+# A Physics-Augmented Deep Learning Framework for Classifying Single Molecule Force Spectroscopy Data
+
+Cailong Hua $^{1}$ Sivaraman Rajaganapathy $^{2}$ Rebecca A. Slick $^{3}$ Joseph Vavra $^{3}$ Joseph M. Muretta $^{3}$ James M. Ervasti $^{3}$ Murti V. Salapaka $^{1}$
+
+# Abstract
+
+Deciphering protein folding and unfolding pathways under tension is essential for deepening our understanding of fundamental biological mechanisms. Such insights hold the promise of developing treatments for a range of debilitating and fatal conditions, including muscular disorders like Duchenne Muscular Dystrophy and neurodegenerative diseases such as Parkinson's disease. Single molecule force spectroscopy (SMFS) is a powerful technique for investigating forces involved in protein domains folding and unfolding. However, SMFS trials often involve multiple protein molecules, necessitating filtering to isolate measurements from single-molecule trials. Currently, manual visual inspection is the primary method for classifying single-molecule data; a process that is both time-consuming and requires significant expertise. Here, we both apply state-of-the-art machine learning models and present a novel deep learning model tailored to SMFS data. The proposed model employs a dual-branch fusion strategy; one branch integrates the physics of protein molecules, and the other operates independently of physical constraints. This model automates the isolation of single-molecule measurements, significantly enhancing data processing efficiency. To train and validate our approach, we developed a physics-based Monte Carlo engine to simulate force spectroscopy datasets, including trials
+
+$^{1}$ Department of Electrical and Computer Engineering, University of Minnesota - Twin Cities, Minneapolis, MN 55455
+ $^{2}$ Department of Artificial Intelligence and Informatics, Mayo Clinic, Rochester, MN 55905
+$^{3}$ Department of Biochemistry, Molecular Biology and Biophysics, University of Minnesota - Twin Cities, Minneapolis, MN 55455. Correspondence to: Cailong Hua, Murti V. Salapaka.
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+involving single molecules, multiple molecules, and no molecules. Our model achieves state-of-the-art performance, outperforming five baseline methods on both simulated and experimental datasets. It attains nearly $100\%$ accuracy across all simulated datasets and an average accuracy of $79.6 \pm 5.2\%$ on experimental datasets, using only $\sim 30$ training samples, surpassing baseline methods by $11.4\%$ . Notably, even without expert annotations on experimental data, the model achieves an average accuracy of $72.0 \pm 5.9\%$ when pre-trained on corresponding simulated datasets. With our deep learning approach, the time required to extract meaningful statistics from single-molecule SMFS trials is reduced from a day to under an hour. This work results in SMFS experimental datasets from four important protein molecules crucial to many biological pathways. To support further research, we have made our datasets publicly available and provided a Python-based toolbox (https://github.com/SalapakaLab-SIMBioSys/SMFS-Identification).
+
+# 1. Introduction
+
+Many biological processes depend on controlling mechanical forces achieved via the folding and unfolding of domains in molecules like titin (Rief et al., 1997; 1998; Oberhauser et al., 2001), dystrophin and its homologue utrophin (Rajaganapathy et al., 2019; Ramirez et al., 2023), neurotoxic proteins (Hervás et al., 2012), and extracellular matrix protein tenascin (Oberhauser et al., 1998). For example, dystrophin and utrophin work as molecular shock absorbers that limit myofiber membrane damage when undergoing reversible unfolding upon muscle stretching and contraction (Ervasti, 2007). Evidently, studying mechanical properties of single molecules can provide vital insights into mechanisms of debilitating diseases. Instruments such as optical tweezers (Ashkin et al., 1986) and atomic force microscopes (AFMs) (Binnig et al., 1986) have enabled single molecule force spectroscopy (SMFS), where molecular forces in the femto- to nano-Newton range, over sub-nanometer to micrometer distances, can be measured and studied. In an SMFS experiment, measurements from a force probe are recorded while it is made to interact with molecules of interest. Since the size of a typical force probe is orders of magnitude greater than a typical bio-molecule, the probe may come in contact with one or more molecules. Measurements obtained from interactions involving more than one molecule confound the interpretation of results and are a significant challenge in characterizing the behavior of single molecules. How to identify and isolate measurements and data that result from a single molecule is an important objective for SMFS.
+
+The use of chemical functionalization of probes, along with molecular fingerprints has emerged as an approach for identifying single molecule trials (Yang et al., 2020a). Chemical functionalization modifies the surface of the force probe and substrate to enable site-specific attachment of molecules, where fingerprints are well-characterized molecules that yield distinct unfolding patterns. Here fingerprints in the trials can be leveraged to discern single molecule trials from trials that result from multiple molecules. However, surface chemical functionalization is time-consuming, often taking at least 6 hours (Zimmermann et al., 2010), and demands careful handling and practice. Moreover, advanced filtering techniques, informed by an understanding of all molecules involved in the complex, are essential for effectively identifying single molecule trials (Yang et al., 2020a).
+
+In contrast, conducting experiments without functionalizing probes or introducing fingerprints into the native molecule has significant advantages. Probes without functionalization are easier and less expensive to manufacture, and biomolecules without fingerprints engineered into their structure are easier to synthesize. Moreover, there is greater confidence that the experimental data characterizes the unaltered native bio-molecule without any confounding effects introduced by fingerprints. Despite these advantages, when there are no fingerprints, distinguishing data that result from single molecules and multiple molecules is more challenging. Currently, the prevalent accepted method for distinguishing the data is based on visual inspection, which is a time-consuming process that demands a high level of expertise (Bornschlögl & Rief, 2011; Lyubchenko, 2018). Additionally, trials need to be collected from a large number of experiments, not only to ensure statistical confidence but also because the molecule concentration is generally lowered to minimize the possibility of involving multiple molecules (Ramirez et al., 2023; Oberhauser et al., 2001). These factors hinder obtaining precise statistics of single-molecule trials without functionalization and the generation of a large, annotated dataset suitable for training deep learning models.
+
+A typical traditional workflow for SMFS experiments involves purifying protein molecules from expression systems (either bacterial or insect), preparing samples with the target protein molecules, and setting up the AFM to automatically perform multiple pulls to obtain numerous SMFS trials. Subsequent to obtaining trial data, an expert manually filters which curves correspond to single molecules. Each session typically involves approximately 2,000–5,000 trials, requiring the expert to meticulously sift through the data to filter out curves that are not from single molecules. This process can take between 12 and 24 hours. To ensure reliable assessment, the molecule is expected to be expressed at least thrice, with each expression resulting in multiple sessions, making the cumulative time investment significant. This article aims to reduce this time to less than one hour.
+
+In this work, we introduce state-of-the-art machine learning models and present a novel deep learning model that augments the unfolding physics of protein molecules to accurately classify SMFS data into three classes: 1) no molecule, 2) single molecule, and 3) multiple molecules. The model employs a dual-branch fusion strategy, one branch incorporating the physics of protein molecules and the other operating independently of physical constraints. To train and validate our approach, we present the first publicly accessible datasets, both simulated and experimental, obtained from non-specific pulling of multi-domain molecules: titin, utrophin, and dystrophin (Hua et al., 2024). The simulated datasets, comprising SMFS trials that originate from a single molecule, multiple molecules, or no molecules, are created with a novel physics-based Monte Carlo engine. Extensive evaluations on these datasets, in comparison with five state-of-the-art baseline models, demonstrate the efficacy of our proposed model. Specifically, our model achieves nearly $100\%$ accuracy on the simulated datasets, outperforming baselines by $6\%$. When testing on experimental datasets, our model achieves average accuracies of $79.6 \pm 5.2\%$, surpassing baselines by $11.4\%$, with only approximately 30 training samples. Even without expert annotations on experimental data, our model can still achieve average accuracies of $72.0 \pm 5.9\%$ when pre-trained with corresponding simulated datasets. With this deep learning approach, the time required to extract meaningful statistics from single-molecule SMFS trials is reduced from a day to under an hour. It is expected that this pilot effort will stimulate significant activity, with an associated impact of ML methods on the understanding of protein folding and unfolding that emphasizes mechanical properties.
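
For context, physics-based SMFS simulators commonly model the force-extension behavior of a stretched polymer with the worm-like chain (WLC) model. The Marko-Siggia interpolation below is a standard textbook form, shown here only as an illustrative sketch of the kind of physics such an engine encodes, not necessarily the exact model used in this paper:

```python
def wlc_force(x, Lc, Lp, kT=4.114):
    """Marko-Siggia worm-like chain interpolation: tensile force in pN at
    extension x (nm), for contour length Lc (nm), persistence length Lp (nm),
    and thermal energy kT (pN*nm; ~4.11 at room temperature)."""
    r = x / Lc
    return (kT / Lp) * (0.25 / (1.0 - r) ** 2 - 0.25 + r)

# Force diverges as the extension approaches the contour length, producing
# the characteristic saw-tooth rise between domain unfolding events.
assert wlc_force(0.0, 100.0, 0.4) == 0.0
assert wlc_force(10.0, 100.0, 0.4) < wlc_force(90.0, 100.0, 0.4)
```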
+
+# 2. Related work
+
+Single molecule classification A 1D convolutional neural network trained using a triplet loss function (Hoffer & Ailon, 2015) was utilized to classify force curves into single, multiple, or no molecule classes, with reported accuracy ranging from $65 - 70\%$ (Waite et al., 2023). Moreover, a machine learning workflow was proposed to iteratively classify different unfolding pathways of single-molecule curves (Doffini et al., 2023). However, both of these datasets were collected with chemically functionalized probes and fingerprints. The chemical functionalization process depends on the specific molecule and thus cannot be made agnostic to the molecule under investigation. Moreover, in the first dataset, each single-molecule curve contains only one unfolding event (Waite et al., 2023), simplifying the classification problem; the second dataset comprises images rather than time series data (Doffini et al., 2023), which introduces redundancy given that force curves are inherently time series data. There are currently no time series datasets available that come from non-specific pulling of multi-domain protein molecules, even though most naturally occurring protein molecules have multiple domains. Here, we construct such datasets for classification purposes.
+
+Figure 1. Illustration of AFM-based SMFS. (a) Schematic showing the desirable case of a single protein molecule with four folded domains under tension between the tip of the AFM cantilever and the substrate. The deflection $d$ and the separation between the cantilever and the substrate $z$ are measured. The tensile force on the protein molecule is computed from the deflection $d$. (b) Depictions of possible scenarios, categorized into three classes: (1) no molecule present between the tip and substrate, (2-3) a single molecule or a section of a single molecule present between the tip and the substrate, and (4-6) multiple molecules or sections of multiple molecules between the tip and the substrate. (c-e) Example force curves representative of the three different classes, with blue circles highlighting unfolding events.
+
+Time series classification (TSC) Hundreds of time series classification (TSC) algorithms, including both non-deep learning methods (Bagnall et al., 2017) and deep learning methods (Fawaz et al., 2019; Wang et al., 2017), exist in the prior art. Although more than 80 different datasets from the University of California, Riverside (UCR) time series classification repository (Dau et al., 2019) have been evaluated with these methods, none of these datasets include SMFS data. Non-deep learning methods become computationally intensive and impractical to execute on large-scale datasets (Bagnall et al., 2017; Fawaz et al., 2019). In this article, we therefore focus on deep learning methods to classify our SMFS datasets.
+
+# 3. Problem formulation
+
+We first describe an atomic force microscope (AFM) based SMFS setup. Here, a microcantilever with a sharp tip is pressed against a substrate on which the protein molecules under study are deposited. Under applied force, parts of one or more protein molecules attach non-specifically to the cantilever tip, characterized by a stochastic adhesion event (Leite et al., 2012). Upon retraction of the cantilever from the surface, sections of the protein molecules between the tip of the cantilever and the substrate experience a tensile force. The record of the force $F$ experienced by the cantilever (and therefore the molecule) versus molecule extension $x$ is known as a force curve, as depicted in Figure 1c-e. If only one protein molecule is present between the cantilever tip and substrate, the force curve unveils important mechanical properties of the protein molecule. We illustrate such a scenario in Figure 1a, where a single protein molecule with four folded domains is attached between the substrate and the tip of the cantilever. As the cantilever retracts, the molecule experiences mechanical tension, eventually causing a folded domain to unfold. The applied force drops abruptly when a domain unfolds, as highlighted by the blue circles in Figure 1d. This process continues until either all domains are unfolded or the connection between the cantilever and the substrate is broken (Rief et al., 1997), producing the saw-tooth pattern of the force curve (Figure 1d).
+
+In practice, the force curves can be categorized into one of three classes: 1) No molecule, where no molecule is present between the tip and substrate; 2) Single molecule, where only a single molecule or a section of a single molecule is present; or 3) Multiple molecules, where multiple molecules or sections of multiple molecules are present between the tip and substrate. Example experimental force curves corresponding to the three classes are depicted in Figures 1c, 1d, and 1e, respectively. The force curves originating from multiple molecules typically exhibit larger unfolding forces than those from a single molecule (Figure 1e) and contain a mixture of unfolding events that cannot be traced back to a specific protein molecule (Fig. 1b (4-6)), confounding useful interpretation. Hence, excluding force curves with no molecule or multiple molecules is necessary to obtain accurate and interpretable data from SMFS. The identification of single-molecule force curves is challenging for a number of reasons: 1) a large number of force curves (2,000–5,000) must be collected in a single experiment, since protein molecule capture success rates are kept at $1 - 5\%$ (Oberhauser et al., 2001; Ramirez et al., 2023); 2) the study of a specific protein molecule involves at least three replications for confidence in the results; 3) the force curves are corrupted by instrument measurement noise and the intrinsic thermal noise of the molecules; and 4) the protein molecules under investigation often have no prior characterization, which makes the adjudication time-consuming and difficult even for domain experts.
+
+The $i$-th force curve, of length $T^{(i)}$, consists of force data $\mathcal{F}^{(i)} = [F_1^{(i)}, F_2^{(i)}, \ldots, F_{T^{(i)}}^{(i)}]$ and extension data $\mathcal{X}^{(i)} = [X_1^{(i)}, X_2^{(i)}, \ldots, X_{T^{(i)}}^{(i)}]$. Each force curve $(\mathcal{F}^{(i)}, \mathcal{X}^{(i)})$ is associated with a class label $Y^{(i)} \in \{0, 1, 2\}$, corresponding to one of the three classes. Given a dataset of $n$ samples, $\mathcal{D} = \left[(\mathcal{F}^{(1)},\mathcal{X}^{(1)},Y^{(1)}),\ldots ,(\mathcal{F}^{(n)},\mathcal{X}^{(n)},Y^{(n)})\right]$, the goal is to design an effective deep learning model capable of accurately predicting the class label of a force curve.
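+The data interface implied by this formulation can be sketched as a minimal Python container (an illustrative sketch, not the paper's code; the class and field names are our own):
+
+```python
+from dataclasses import dataclass
+from typing import List
+
+# Hypothetical container mirroring the problem formulation: the i-th force
+# curve holds force data F^(i) and extension data X^(i) of equal length T^(i),
+# and a class label Y^(i) in {0: no molecule, 1: single, 2: multiple}.
+@dataclass
+class ForceCurve:
+    force: List[float]      # F^(i), e.g. in piconewtons
+    extension: List[float]  # X^(i), e.g. in nanometres
+    label: int              # Y^(i) in {0, 1, 2}
+
+    def __post_init__(self):
+        assert len(self.force) == len(self.extension), "F and X must share length T"
+        assert self.label in (0, 1, 2)
+
+# A dataset D is simply a list of labelled curves of varying length T^(i).
+curve = ForceCurve(force=[0.0, 50.0, 10.0], extension=[0.0, 5.0, 10.0], label=1)
+dataset = [curve]
+```
+
+Note that curves may have different lengths $T^{(i)}$, which is why the datasets in Table 1 report length ranges rather than a single value.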
+
+# 4. Proposed model
+
+We introduce Polymer Elastic Models Neural Networks (PemNN), a novel deep learning model designed to classify force curves as originating from no molecule, a single molecule, or multiple molecules. PemNN contains two branches: the force trace branch, which uses the force data $\mathcal{F}^{(i)}$, and the physics-based branch, which incorporates polymer elastic models (Section 4.1) using both extension and force data. Features extracted from these branches are fused using either an early fusion (depicted in Figure 2) or a late fusion strategy (see Section 4.2).
+
+Both branches pass through a convolutional block, comprising a 1-dimensional convolutional layer, a batch normalization layer (Ioffe & Szegedy, 2015), and a Rectified Linear Unit (ReLU) (Nair & Hinton, 2010) activation layer. Convolutional layers have demonstrated compelling performance and efficiency in time series classification (Wang et al., 2017; Fawaz et al., 2019; Karim et al., 2019; Pham et al., 2022; Zhang et al., 2020; Zheng et al., 2016; Foumani et al., 2021). Features from the first convolutional block of both branches are then fused with one of four methods detailed in Section 4.2. The fused output passes through two additional convolutional blocks, and a Global Average Pooling (GAP) layer is applied to reduce parameters by averaging across the time dimension (Wang et al., 2017; Fawaz et al., 2019).
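+As an illustration of one such convolutional block, the following numpy-free sketch applies a 1-D convolution, batch normalization (in its inference form, without the learnable scale and shift), and ReLU to a single-channel sequence; it is a toy version of the block described above, not the trained model:
+
+```python
+import math
+
+# 1-D convolution of a single-channel sequence with a single kernel
+# (valid padding: output length is len(x) - len(kernel) + 1).
+def conv1d(x, kernel):
+    k = len(kernel)
+    return [sum(kernel[j] * x[t + j] for j in range(k)) for t in range(len(x) - k + 1)]
+
+# Batch normalization in inference form: standardize to zero mean, unit variance.
+def batch_norm(x, eps=1e-5):
+    mean = sum(x) / len(x)
+    var = sum((v - mean) ** 2 for v in x) / len(x)
+    return [(v - mean) / math.sqrt(var + eps) for v in x]
+
+def relu(x):
+    return [max(0.0, v) for v in x]
+
+# One convolutional block: Conv1D -> BatchNorm -> ReLU.
+def conv_block(x, kernel):
+    return relu(batch_norm(conv1d(x, kernel)))
+
+out = conv_block([0.0, 1.0, 0.0, -1.0, 0.0, 1.0], kernel=[0.5, 0.5])
+```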
+
+To further enhance temporal encoding, a long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997; Sundermeyer et al., 2012) layer is applied to each branch, followed by a dropout layer (Srivastava et al., 2014) to mitigate overfitting (Karim et al., 2019). Earlier studies have shown that augmenting convolutional layers with LSTM significantly improves performance in time series classification with only a modest increase in the number of parameters (Karim et al., 2019; Zhang et al., 2020; Hewamalage et al., 2021).
+
+The outputs of the GAP and LSTM layers are concatenated and fed into a fully connected layer with three neurons, corresponding to the three classes: 1) no molecule, 2) single molecule, and 3) multiple molecules. A softmax activation function is used in the final fully connected layer, and the model is trained with categorical cross-entropy loss:
+
+$$
+L \left(\mathcal {F} ^ {(i)}, \mathcal {X} ^ {(i)}\right) = - \sum_ {j = 1} ^ {3} Y _ {j} ^ {(i)} \log \left(\hat {Y} _ {j} ^ {(i)}\right), \tag {1}
+$$
+
+where $L(\mathcal{F}^{(i)},\mathcal{X}^{(i)})$ represents the loss for classifying the force curve $(\mathcal{F}^{(i)},\mathcal{X}^{(i)})$. Here, $Y_{j}^{(i)}$ is the label for class $j$ of the $i$-th force curve ($Y_{j}^{(i)} \in \{0,1\}$), and $\hat{Y}_{j}^{(i)}$ is the predicted probability for class $j$ of the $i$-th force curve output by the neural network.
+
+Figure 2. The overall architecture of the Polymer Elastic Models Neural Networks (PemNN) comprises a physics-based branch and a force trace branch, illustrated with the early fusion approach (the late fusion approach is shown in Figure 8 in the Appendix).
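+A minimal numerical sketch of the softmax output and the categorical cross-entropy loss of Eq. (1), with made-up logits for the three classes:
+
+```python
+import math
+
+# Numerically stable softmax over the three class logits.
+def softmax(logits):
+    m = max(logits)
+    exps = [math.exp(v - m) for v in logits]
+    s = sum(exps)
+    return [e / s for e in exps]
+
+# Eq. (1): L = -sum_j Y_j log(Y_hat_j), with Y one-hot over the three classes.
+def cross_entropy(one_hot_label, probs, eps=1e-12):
+    return -sum(y * math.log(p + eps) for y, p in zip(one_hot_label, probs))
+
+probs = softmax([2.0, 0.5, -1.0])       # made-up logits
+loss = cross_entropy([0, 1, 0], probs)  # true class: single molecule
+```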
+
+# 4.1. Polymer elastic models
+
+Polymers (proteins included) exhibit entropic elasticity that is well described by the worm-like chain (WLC) model (Bustamante et al., 1994) given by:
+
+$$
+F = \frac {k _ {B} T}{L _ {p}} \left[ \frac {1}{4 \left(1 - \frac {x}{L _ {c}}\right) ^ {2}} - \frac {1}{4} + \frac {x}{L _ {c}} \right], \tag {2}
+$$
+
+where $F$ and $x$ represent the force and extension, respectively, $k_{B}$ is the Boltzmann constant, $T$ is the temperature, $L_{c}$ is the contour length, and $L_{p}$ is the persistence length. The contour length $L_{c}$ represents the maximal physically possible extension, and the persistence length $L_{p}$ quantifies the bending stiffness of the polymer. Other widely used polymer elastic models are described in Appendix A.1.
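+Eq. (2) can be evaluated directly. The following sketch computes the WLC force; the persistence length of 0.4 nm and contour length of 100 nm are illustrative assumptions for the example, not values from this work:
+
+```python
+KB = 1.380649e-23  # Boltzmann constant [J/K]
+
+def wlc_force(x, Lp, Lc, T=300.0):
+    """WLC force [N] of Eq. (2) for extension x [m], persistence length Lp [m],
+    contour length Lc [m], and temperature T [K]."""
+    assert 0.0 <= x < Lc, "the WLC force diverges as x approaches Lc"
+    r = x / Lc
+    return (KB * T / Lp) * (0.25 / (1.0 - r) ** 2 - 0.25 + r)
+
+# E.g. a chain with Lp = 0.4 nm and Lc = 100 nm stretched to 90 nm
+# yields a force of roughly 0.27 nN.
+F = wlc_force(90e-9, Lp=0.4e-9, Lc=100e-9)
+```
+
+The steep rise as $x \to L_c$ is what produces the characteristic shape of each tooth in the saw-tooth force curve.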
+
+In the physics-based branch, the contour length $L_{c_p^{(i)}}$ is estimated for each force-extension pair $\left(F_p^{(i)},\mathcal{X}_p^{(i)}\right)$ using the WLC model with a fixed persistence length $L_{p}$ . A subsequent filtering step selects $P$ samples with $L_{c_p^{(i)}}\in [0,M]$ , where $M$ is the filter threshold. If the number of qualified data points is less than $P$ , sampling is performed with replacement.
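+One way to implement this estimation step, assuming the fixed-$L_p$ WLC model of Eq. (2), is to invert the model by bisection on the reduced extension $r = x/L_c$, in which the WLC force is monotone increasing; the filtering and with-replacement resampling follow. This is a hedged sketch with illustrative names and thresholds, not the paper's implementation:
+
+```python
+import random
+
+KB = 1.380649e-23  # Boltzmann constant [J/K]
+
+def estimate_lc(force, x, Lp, T=300.0, iters=80):
+    """Estimate the contour length Lc for one (force, extension) pair by
+    bisection on r = x / Lc, using the WLC model with fixed Lp."""
+    target = force * Lp / (KB * T)                 # reduced WLC force
+    g = lambda r: 0.25 / (1.0 - r) ** 2 - 0.25 + r  # monotone on (0, 1)
+    lo, hi = 1e-12, 1.0 - 1e-9
+    for _ in range(iters):
+        mid = 0.5 * (lo + hi)
+        if g(mid) < target:
+            lo = mid
+        else:
+            hi = mid
+    return x / (0.5 * (lo + hi))
+
+def filter_contour_lengths(pairs, Lp, M, P, seed=0):
+    """Keep estimated Lc values in [0, M]; if fewer than P qualify,
+    resample with replacement up to P points."""
+    rng = random.Random(seed)
+    kept = [lc for lc in (estimate_lc(F, x, Lp) for F, x in pairs) if lc <= M]
+    if 0 < len(kept) < P:
+        kept = kept + [rng.choice(kept) for _ in range(P - len(kept))]
+    return kept
+```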
+
+# 4.2. Fusion module
+
+The early fusion approach contains four methods: (i) Early-Sum: sums convolutional feature maps; (ii) Early-Max: selects the element-wise maximum of the convolutional feature maps; (iii) Early-Wavg: computes a weighted sum of convolutional feature maps with learnable weights; and (iv) Early-Conv: concatenates convolutional feature maps along the filter dimension, followed by a convolution with a kernel size of 1.
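+The four early-fusion methods can be sketched on two toy feature maps (filters × time); the Early-Wavg weight and the Early-Conv kernel-size-1 weights are learnable in the model but fixed here for demonstration:
+
+```python
+# A and B are convolutional feature maps of shape (filters, time).
+def early_sum(A, B):
+    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
+
+def early_max(A, B):
+    return [[max(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
+
+def early_wavg(A, B, w=0.7):  # w is learnable in the model
+    return [[w * a + (1 - w) * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
+
+def early_conv(A, B, kernels):
+    # Concatenate along the filter axis, then apply a kernel-size-1 convolution:
+    # each output filter is a weighted sum across the stacked input filters.
+    stacked = A + B
+    return [[sum(w * row[t] for w, row in zip(k, stacked))
+             for t in range(len(stacked[0]))] for k in kernels]
+
+A = [[1.0, 2.0], [3.0, 4.0]]
+B = [[0.5, 0.5], [1.0, 1.0]]
+fused = early_conv(A, B, kernels=[[1.0, 0.0, 1.0, 0.0]])  # one output filter
+```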
+
+With the late fusion method (depicted in Figure 8 in the Appendix), features from the first convolutional block of each branch are independently processed through two additional convolutional blocks, with shared parameters. GAP layers are applied to the outputs of both branches, and their results are fused using one of the following methods (see Appendix B): (i) Late-Concat: directly concatenates the outputs of the GAP layers; and (ii) Late-Gating (Liu et al., 2021): concatenates the GAP layer outputs with learnable weights.
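+A corresponding sketch of the two late-fusion methods on toy GAP outputs (one scalar per filter); the gating weights are learnable in the model but fixed here:
+
+```python
+# g1 and g2 are GAP outputs of the two branches (one scalar per filter).
+def late_concat(g1, g2):
+    return g1 + g2  # plain concatenation of the two feature vectors
+
+def late_gating(g1, g2, w1=0.8, w2=0.2):  # w1, w2 are learnable in the model
+    return [w1 * v for v in g1] + [w2 * v for v in g2]
+
+g = late_concat([0.1, 0.4], [0.9])
+```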
+
+# 5. Datasets
+
+Datasets from four different protein molecules are employed. Of these four, only Titin I27O (Athena Enzyme Systems™) is an engineered protein molecule; composed of eight repeats of the Ig 27 domain, it serves as a reference protein for calibrating and validating our methods. The other three protein molecules derive from two natural proteins with considerable variations in their sequence and structure: dystrophin and utrophin. Dystrophin is a protein molecule expressed primarily at the muscle cell membrane, or sarcolemma, in striated muscle tissue. Deficiencies of this protein molecule lead to severe muscle wasting disorders such as Duchenne muscular dystrophy (DMD), a fatal disease occurring in 1 out of 4000 male births (Mendell et al., 2012). Structurally, dystrophin is composed of four major domains: an amino-terminal (NT) actin-binding domain (ABD1), a large central rod domain with 24 triple helical spectrin-like repeats (SLRs) interspersed with 4 hinge domains, including a second actin-binding domain (ABD2), a cysteine-rich domain, and a carboxy-terminal (CT) domain (Figure 3a). Utrophin (Figure 3b) is a fetal homologue of dystrophin and is under active investigation as a dystrophin replacement therapy for DMD. We include dystrophin and utrophin fragments encoding the NT through SLR 3 domains, referred to as DysN-R3 (Fig. 3c) and UtrN-R3 (Fig. 3d), respectively. Previous studies have demonstrated that the mechanical properties of UtrN-R3 are influenced by the expression system used, such as insect or bacterial cells (Ramirez et al., 2023; Hua et al., 2024). Consequently, we further categorize UtrN-R3 into insect UtrN-R3 and bact UtrN-R3 to reflect these variations. In summary, our four real protein molecules are: Titin I27O, bact UtrN-R3, insect UtrN-R3, and DysN-R3. Additionally, we include the Discoidin Domain Receptors (DDRs) dataset (Appendix D.1) (Waite et al., 2023) to compare the performance of different deep learning methods.
+
+Table 1. Details of experimental datasets
+
+| Dataset | Number of curves per class | Lengths | Different days | Pulling speeds [nm/s] |
+| --- | --- | --- | --- | --- |
+| DDRs (Waite et al., 2023) | [102, 102, 136] | 400 | NA | 2000 |
+| Titin I27O | [181, 164, 191] | 736-4859 | 5 | 500, 1000, 2000 |
+| bact UtrN-R3 (Hua et al., 2024) | [181, 181, 178] | 957-9974 | 10 | 500, 1000, 2000, 5000 |
+| insect UtrN-R3 (Hua et al., 2024) | [175, 166, 200] | 1777-8890 | 11 | 500, 1000, 2000 |
+| DysN-R3 (Hua et al., 2024) | [191, 185, 177] | 1659-4814 | 6 | 500, 1000, 2000 |
+
+
+Figure 3. Diagrams of dystrophin and utrophin. (a) Full-length dystrophin. (b) Full-length utrophin. (c) UtrN-R3 construct. (d) DysN-R3 construct. The ovals are spectrin-like repeats (SLRs); the diamonds are hinge domains; NT is the N terminus; CT is the C terminus; CR is a cysteine-rich domain; ABD1 & 2 are actin-binding domains; DgBD is the dystroglycan binding domain. Figure courtesy of a previous study (Ramirez et al., 2023).
+
+For each protein molecule, we have two datasets: a simulation dataset generated with our physics-based Monte Carlo simulation engine described in Appendix A, and an experimental dataset obtained from physical experiments conducted via AFM on real protein molecule samples. The simulation dataset, comprising 600 force curves, is generated for three classes: Class 0 has no protein molecule between the substrate and cantilever tip, Class 1 has one protein molecule, and Class 2 has two protein molecules attached. Different protein molecules are distinguished by the model parameters listed in Table 2. For the experimental data, as previously described in (Rajaganapathy et al., 2019; Ramirez et al., 2023; Hua et al., 2024), force curves were collected on different days and at various pulling speeds, as outlined in Table 1. As a result, the lengths of the force curves differ, and substantial variations exist even within the same class. Annotation is performed through visual inspection of the unfolding force curves.
+
+# 6. Results and Discussion
+
+The performance of our proposed model, PemNN, is evaluated in this section. We outline the evaluation setup in Section 6.1, followed by a comparison of PemNN's classification performance against baseline models in Section 6.2. Next, we analyze the functionality of the force trace branch and physics-based branch in Section 6.3. Finally, we assess performance under limited training data, a scenario frequently encountered in SMFS applications, in Section 6.4, and apply it to AFM data analysis through a comparison to non-machine learning methods in Section 6.5.
+
+# 6.1. Evaluation setup
+
+We evaluate our proposed model, PemNN, against five baselines: 1) Triplet network (Triplet) (Waite et al., 2023): a model designed for analyzing SMFS specific-pulling data. 2) Fully convolutional neural networks (FCN) and 3) residual networks (ResNet): two of the highest-performing deep neural networks on the UCR time series classification archive (Fawaz et al., 2019). 4) InceptionTime: the current state-of-the-art deep learning model on the UCR archive (Ismail Fawaz et al., 2020). 5) LSTMFCN: a model that outperforms FCN and ResNet on the UCR time series classification archive and demonstrates robust performance on multivariate time series classification (Karim et al., 2018; 2019). Further details about these baselines are provided in Appendix C.1.
+
+The baselines were implemented using default parameters from sktime (Löning et al., 2019), a Python framework for machine learning with time series data. For PemNN, the default configuration incorporates the WLC model, LSTM layers in both branches, and the Early-Conv fusion method. An ablation study of PemNN's architecture is presented in Appendix D.2, and additional implementation details are provided in Appendix C.2.
+
+For each dataset, $20\%$ of the data was used for training and the remaining $80\%$ was reserved for testing, since adjudication is a time-consuming task that requires significant expertise. Stratified resampling was applied to maintain class distributions, and each train-test split was seeded for reproducibility. Identical resamples were applied to all models within a single run, and performance was evaluated using the overall accuracy on the test data across five runs.
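+The split procedure can be sketched as a seeded, stratified 20/80 train/test partition (illustrative stdlib code, not the paper's implementation):
+
+```python
+import random
+from collections import defaultdict
+
+def stratified_split(labels, train_frac=0.2, seed=0):
+    """Seeded stratified split: each class contributes train_frac of its
+    samples to the training set, preserving class proportions."""
+    rng = random.Random(seed)
+    by_class = defaultdict(list)
+    for idx, y in enumerate(labels):
+        by_class[y].append(idx)
+    train, test = [], []
+    for y, idxs in by_class.items():
+        rng.shuffle(idxs)
+        k = round(train_frac * len(idxs))
+        train += idxs[:k]
+        test += idxs[k:]
+    return sorted(train), sorted(test)
+
+# 50 curves per class: 20% (10 per class) for training, 80% for testing.
+labels = [0] * 50 + [1] * 50 + [2] * 50
+train_idx, test_idx = stratified_split(labels, train_frac=0.2, seed=42)
+```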
+
+# 6.2. PemNN classification performance
+
+For each dataset, we evaluated all models on the testing data and ranked them by mean classification accuracy over five runs, assigning a rank of 1 to the most accurate model and 6 to the least accurate. The average ranking is then computed across all datasets, including both simulated and experimental testing sets for all protein molecules. These average rankings are summarized in the critical difference diagram (Demšar, 2006) presented in Figure 4. PemNN achieves the lowest average rank of 1.4167, indicating that our model is more accurate than the baseline models.
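+The ranking procedure can be sketched as follows, with made-up accuracies standing in for the measured ones (ties are ignored here for brevity; standard practice averages tied ranks):
+
+```python
+def average_ranks(acc_by_dataset):
+    """Rank models per dataset by accuracy (1 = best), then average
+    the ranks across datasets. Ties are not handled in this sketch."""
+    models = list(next(iter(acc_by_dataset.values())))
+    totals = {m: 0.0 for m in models}
+    for accs in acc_by_dataset.values():
+        ordered = sorted(models, key=lambda m: accs[m], reverse=True)
+        for rank, m in enumerate(ordered, start=1):
+            totals[m] += rank
+    n = len(acc_by_dataset)
+    return {m: t / n for m, t in totals.items()}
+
+# Made-up placeholder accuracies, not values from the paper.
+ranks = average_ranks({
+    "sim":  {"PemNN": 0.99, "FCN": 0.93, "ResNet": 0.95},
+    "expt": {"PemNN": 0.80, "FCN": 0.70, "ResNet": 0.68},
+})
+```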
+
+
+Figure 4. Critical difference diagram of different deep learning models across the simulated and experimental testing sets of all protein molecules based on average accuracies. The most accurate model is assigned a rank of 1, with a thick horizontal line representing a group of classifiers that do not exhibit statistically significant differences in accuracy.
+
+For statistical analysis, we employed the Wilcoxon signed-rank test with Holm correction as the post-hoc test following the Friedman test (Fawaz et al., 2019; Demšar, 2006). In Figure 4, thick horizontal lines represent groups of models that are not significantly different in terms of classification accuracy. Thus we conclude that PemNN is significantly more accurate than all baseline models. Among the baselines, LSTMFCN has the lowest rank, with its performance statistically similar to ResNet and FCN, but significantly different from Triplet and InceptionTime.
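+The Holm step-down correction applied to the pairwise Wilcoxon p-values can be sketched as follows (the p-values below are made-up placeholders, not values from the paper):
+
+```python
+def holm_reject(p_values, alpha=0.05):
+    """Holm step-down correction: test p-values in ascending order against
+    alpha/(m - k); once one test fails, all remaining hypotheses are retained."""
+    m = len(p_values)
+    order = sorted(range(m), key=lambda i: p_values[i])
+    reject = [False] * m
+    for k, i in enumerate(order):
+        if p_values[i] <= alpha / (m - k):
+            reject[i] = True
+        else:
+            break  # larger p-values cannot be rejected either
+    return reject
+
+# Three made-up pairwise p-values at family-wise level alpha = 0.05:
+# 0.001 <= 0.05/3 is rejected; 0.03 > 0.05/2 stops the procedure.
+flags = holm_reject([0.001, 0.04, 0.03], alpha=0.05)
+```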
+
+# 6.3. Functionality of two branches
+
+This section evaluates the functionality of the force trace and physics-based branches in PemNN using simulated datasets with known ground truths.
+
+Physics-based branch functionality The importance of the physics-based branch, which integrates polymer elastic models, was evaluated by comparing PemNN (using both the force trace and physics-based branches) to baselines that only utilized the force trace branch. As shown in Figure 5a, PemNN achieves near-perfect accuracy $(98.9 \pm 1.9\%)$ across the four simulated datasets, significantly outperforming the baselines. The largest performance gap is on the Titin I27O simulated dataset, where PemNN outperformed baselines by $15\%$; the smallest gap occurs on the insect UtrN-R3 dataset, with PemNN maintaining a $6\%$ higher accuracy.
+
+Figure 5. a) The average accuracy of all models across the four simulated datasets, with error bars indicating the standard deviations over five runs. b) The average accuracy across the four simulated datasets over five runs as the contour length threshold varies, depicted in a radar chart where longer radii indicate higher accuracy. c) The average accuracy across the four simulated datasets over five runs as the persistence length changes.
+
+Force trace branch functionality Baselines were provided with the same input data as the physics-based branch to assess the contribution of the force trace branch. This was analyzed under two scenarios: changing the contour length threshold in the filter, and varying the persistence length $L_{p}$ in the WLC model. Radar charts were employed to visualize model performance, with models represented by different transparent colors and larger radii indicating better accuracy. In Figure 5b, the contour length threshold is changed from 1,000 to 1,024,000 nm in multiples of four, with "inf" denoting no threshold. PemNN maintains near-perfect accuracy for thresholds between 1,000 and 64,000 nm, a range in which some baselines, such as LSTMFCN and ResNet, also perform well. However, as the threshold increases, baseline performance drops sharply. In contrast, PemNN maintains robust performance, outperforming the baselines by at least $7\%$ at a threshold of $1,024,000 \mathrm{~nm}$, with this margin growing to $25\%$ when no threshold is applied. In Figure 5c, the persistence length $L_{p}$ is varied logarithmically from 0.3 to $30,000 \mathrm{pm}$. Both PemNN and the baselines achieve near-perfect accuracy for $L_{p}$ values between 30 and $30,000 \mathrm{pm}$. However, at $L_{p} = 0.3 \mathrm{pm}$ and $L_{p} = 3 \mathrm{pm}$, baseline performance drops significantly, whereas PemNN maintains at least $3\%$ and $10\%$ higher accuracy, respectively. A more detailed discussion is provided in Appendix D.3. In conclusion, the physics-based branch enhances performance by incorporating polymer elastic models, while the force trace branch increases robustness, ensuring reliable classification even when the physics models are corrupted by parameter errors. By combining the strengths of both branches, PemNN consistently outperforms baselines in SMFS classification tasks, demonstrating its effectiveness under various conditions.
+
+# 6.4. Performance with limited data
+
+In this section, we focus on experimental datasets and analyze model performance under limited training data. The testing data is fixed at $80\%$ of the entire dataset, while the deep learning models are trained using varying proportions of the training dataset. When the training proportion ranges from $5\%$ to $20\%$ (approximately 30 to 120 samples), PemNN consistently outperforms baselines, achieving accuracies of $79.6\%$ and $85.2\%$ at training proportions of $5\%$ and $20\%$, respectively (Figure 6). Among the baselines, LSTMFCN performs the best but remains $11.4\%$ and $2.9\%$ less accurate than PemNN at training proportions of $5\%$ and $20\%$, respectively.
+
+
+Figure 6. The performance of models trained with different proportions of experimental datasets (training proportion), with error bars indicating standard deviations across all experimental datasets over five runs.
+
+At a training proportion of $0\%$, no experimental data is used during training. Instead, a physics-guided pretraining strategy (see Appendix E) is used to eliminate the need for experimental labeling and mitigate human bias. In this approach, deep learning models are pre-trained on simulated datasets generated using physics-based models and subsequently evaluated on the corresponding experimental datasets. The pretraining strategy achieves performance comparable to models trained directly on experimental data, with an average accuracy of $72.0\%$, despite not utilizing any experimental data during training. In our physics-based protein molecule unfolding model, we assume every protein domain behaves identically. However, many protein molecules, including utrophin and dystrophin, have folded domains that differ significantly from each other. Despite being trained on simulation data using a single double-well potential model for the domains, PemNN demonstrates the capability to classify the number of protein molecules involved in experiments with protein molecules exhibiting heterogeneous domains (see Appendix E for details).
+
+# 6.5. Application to SMFS data analysis
+
+Here, we apply both machine learning and non-machine learning methods to Titin I27O data collected from a one-day experiment ($\sim 3000$ curves) (see Appendix D.5 for utrophin and dystrophin). Titin I27O is a well-calibrated protein molecule with a most probable unfolding force of $204 \pm 26 \mathrm{pN}$. In Figure 7, RawData includes all unfolding events. The Heuristic method, a non-machine learning approach, filters data with the WLC model (Rajaganapathy et al., 2019; Ramirez et al., 2023) and requires manual adjustments by experts to fine-tune parameters. PemNN was trained on our Titin I27O dataset and subsequently tested on the raw data to extract curves originating from a single molecule. Notably, PemNN takes less than an hour to compute statistics, while the Heuristic method takes several hours, and RawData can take up to a full day.
+
+
+Figure 7. Application to SMFS data analysis of Titin I27O: RawData as the baseline method, Heuristic as the non-machine learning method, and PemNN as our proposed model.
+
+PemNN achieves a most probable unfolding force of 206.68 pN, closely aligning with the expected value $(204\pm 26~\mathrm{pN})$.
+
+In contrast, the Heuristic and RawData methods yield most probable unfolding forces of $217.41~\mathrm{pN}$ and $192.21~\mathrm{pN}$, respectively (Table 5). Furthermore, the inclusion of data not originating from single-molecule events results in broader force distributions. We quantified the sharpness of these distributions using the interquartile range (IQR), as listed in Table 5. Our method, PemNN, achieved an IQR of 52.63, only a quarter of the IQR observed for the Heuristic or RawData methods. These results highlight that PemNN effectively analyzes AFM data, accurately capturing key statistical features while filtering out the confounding contributions of multiple molecules.
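+The two summary statistics used here, the most probable unfolding force (the mode of a force histogram) and the IQR, can be sketched as follows, with made-up placeholder forces in pN:
+
+```python
+def iqr(values):
+    """Interquartile range via linearly interpolated quartiles."""
+    s = sorted(values)
+    def quantile(q):
+        pos = q * (len(s) - 1)
+        lo = int(pos)
+        frac = pos - lo
+        return s[lo] + frac * (s[min(lo + 1, len(s) - 1)] - s[lo])
+    return quantile(0.75) - quantile(0.25)
+
+def histogram_mode(values, bin_width=10.0):
+    """Most probable value: the centre of the fullest histogram bin."""
+    counts = {}
+    for v in values:
+        b = int(v // bin_width)
+        counts[b] = counts.get(b, 0) + 1
+    best = max(counts, key=counts.get)
+    return (best + 0.5) * bin_width
+
+# Made-up unfolding forces [pN], not measured values.
+forces = [195.0, 201.0, 204.0, 206.0, 209.0, 230.0, 250.0]
+```
+
+A narrower IQR after filtering indicates a sharper unfolding-force distribution, which is the effect reported for PemNN above.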
+
+# 7. Conclusions
+
+Single-molecule force spectroscopy (SMFS) data of protein molecules are time and resource intensive to collect. Currently, manual visual inspection remains the primary method for classifying force curves resulting from single molecules. This process typically requires a full day per experimental iteration of a specific molecule, with multiple iterations being standard practice in the field. These factors make it challenging to obtain precise statistics of single molecular force curves and to generate a large, annotated dataset appropriate for training deep learning models.
+
+We developed a novel deep learning model that fuses a physics-model-based branch with a branch that does not employ physics, to efficiently classify SMFS data as originating from no molecules, single molecules, or multiple molecules. We also applied state-of-the-art machine learning models (including ResNet, FCN, InceptionTime, and LSTMFCN) to SMFS data. Experimental datasets obtained from four molecules (Titin I27O, bact UtrN-R3, insect UtrN-R3, and DysN-R3) are used to train and test the deep learning models. A Monte Carlo engine based on the physics of proteins is developed and employed to provide simulated data. The presented approach achieves superior performance compared to state-of-the-art baseline methods on all simulated and experimental datasets. Remarkably, the model retains strong performance on experimental data even when trained solely on simulated data. Furthermore, the model surpasses non-machine learning approaches in SMFS data analysis, demonstrating its effectiveness and reliability while reducing processing time from a day to under an hour.
+
+# Acknowledgements
+
+This project was supported by funding from NIH (5R01AR042423).
+
+# Impact Statement
+
+Our deep learning model automates the classification of single-molecule force spectroscopy (SMFS) data, effectively filtering out non-admissible data from experiments involving multiple molecules (as opposed to a single molecule) without the need for expert knowledge. It also reduces the reliance on manual inspection, paving the way for faster and more consistent analysis of SMFS data. We expect the impact on SMFS-related research to be considerable, as laborious and tedious visual inspection can be automated. Additionally, this work provides SMFS experimental datasets from four important protein molecules, crucial to many biological pathways, which are made publicly available to support further research.
+
+# References
+
+Ashkin, A., Dziedzic, J. M., Bjorkholm, J. E., and Chu, S. Observation of a single-beam gradient force optical trap for dielectric particles. Optics letters, 11(5):288-290, 1986.
+Bagnall, A., Lines, J., Bostrom, A., Large, J., and Keogh, E. The great time series classification bake off: a review and experimental evaluation of recent algorithmic advances. Data Mining and Knowledge Discovery, 31(3):606-660, 2017.
+Binnig, G., Quate, C. F., and Gerber, C. Atomic Force Microscope. Physical Review Letters, 56(9):930-933, 1986.
+Bornschlögl, T. and Rief, M. Single-Molecule Protein Unfolding and Refolding Using Atomic Force Microscopy. In Peterman, E. J. G. and Wuite, G. J. L. (eds.), Single Molecule Analysis, volume 783, pp. 233-250. Springer, 2011.
+Bustamante, C., Marko, J. F., Siggia, E. D., and Smith, S. Entropic Elasticity of λ-Phage DNA. Science, 265(5178): 1599-1600, 1994.
+Chen, Y., Xie, H., and Shin, H. Multi-layer fusion techniques using a CNN for multispectral pedestrian detection. IET Computer Vision, 12(8):1179-1187, 2018.
+Dau, H. A., Bagnall, A., Kamgar, K., Yeh, C.-C. M., Zhu, Y., Gharghabi, S., Ratanamahatana, C. A., and Keogh, E. The UCR time series archive. IEEE/CAA Journal of Automatica Sinica, 6(6):1293-1305, 2019.
+Demšar, J. Statistical comparisons of classifiers over multiple data sets. Journal of Machine learning research, 7 (Jan):1-30, 2006.
+Doffini, V., Liu, H., Liu, Z., and Nash, M. A. Iterative Machine Learning for Classification and Discovery of
+
+Single-Molecule Unfolding Trajectories from Force Spectroscopy Data. Nano Letters, 23(22):10406-10413, 2023.
+Dudko, O. K., Hummer, G., and Szabo, A. Intrinsic rates and activation free energies from single-molecule pulling experiments. Physical Review Letters, 96(10):108101, 2006.
+Dudko, O. K., Hummer, G., and Szabo, A. Theory, analysis, and interpretation of single-molecule force spectroscopy experiments. Proceedings of the National Academy of Sciences USA, 105(41):15755-15760, 2008.
+Ervasti, J. M. Dystrophin, its interactions with other proteins, and implications for muscular dystrophy. Biochimica et Biophysica Acta (BBA)-Molecular Basis of Disease, 1772(2):108-117, 2007.
+Fawaz, H. I., Forestier, G., Weber, J., Idoumghar, L., and Muller, P.-A. Deep learning for time series classification: a review. Data Mining and Knowledge Discovery, 33(4): 917-963, 2019.
+Foumani, S. N. M., Tan, C. W., and Salehi, M. Disjoint-cnn for multivariate time series classification. In 2021 International Conference on Data Mining Workshops (ICDMW), pp. 760-769. IEEE, 2021.
+He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016.
+Hervás, R., Oroz, J., Galera-Prat, A., Goñi, O., Valbuena, A., Vera, A. M., Gómez-Sicilia, À., Losada-Urzaíz, F., Uversky, V. N., Menéndez, M., Laurents, D. V., Bruix, M., and Carrión-Vázquez, M. Common Features at the Start of the Neurodegeneration Cascade. PLoS Biology, 10(5): e1001335, 2012.
+Hewamalage, H., Bergmeir, C., and Bandara, K. Recurrent Neural Networks for Time Series Forecasting: Current Status and Future Directions. International Journal of Forecasting, 37(1):388-427, 2021.
+Hochreiter, S. and Schmidhuber, J. Long short-term memory. Neural computation, 9(8):1735-1780, 1997.
+Hoffer, E. and Ailon, N. Deep metric learning using triplet network. In Similarity-based pattern recognition: third international workshop, SIMBAD 2015, Copenhagen, Denmark, October 12-14, 2015. Proceedings 3, pp. 84-92. Springer, 2015.
+Hua, C., Slick, R. A., Vavra, J., Muretta, J. M., Ervasti, J. M., and Salapaka, M. V. Two operational modes of atomic force microscopy reveal similar mechanical properties for homologous regions of dystrophin and utrophin. bioRxiv, 2024.
+
+Huang, L., Zhang, C., and Zhang, H. Self-adaptive training: beyond empirical risk minimization. Advances in neural information processing systems, 33:19365-19376, 2020.
+Ioffe, S. and Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International conference on machine learning, pp. 448-456. pmlr, 2015.
+Ismail Fawaz, H., Lucas, B., Forestier, G., Pelletier, C., Schmidt, D. F., Weber, J., Webb, G. I., Idoumghar, L., Muller, P.-A., and Petitjean, F. InceptionTime: Finding AlexNet for time series classification. Data Mining and Knowledge Discovery, 34(6):1936-1962, 2020.
+Israelachvili, J. N. Intermolecular and surface forces. Academic press, 2011.
+Jobst, M. A., Schoeler, C., Malinowska, K., and Nash, M. A. Investigating Receptor-ligand Systems of the Cellulosome with AFM-based Single-molecule Force Spectroscopy. Journal of Visualized Experiments, (82):50950, 2013.
+Karim, F., Majumdar, S., Darabi, H., and Chen, S. LSTM Fully Convolutional Networks for Time Series Classification. IEEE Access, 6:1662-1669, 2018.
+Karim, F., Majumdar, S., Darabi, H., and Harford, S. Multivariate LSTM-FCNs for time series classification. Neural Networks, 116:237-245, 2019.
+King, W. T., Su, M., and Yang, G. Monte Carlo simulation of mechanical unfolding of proteins based on a simple two-state model. International Journal of Biological Macromolecules, 46(2):159-166, 2010.
+Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
+Leite, F. L., Bueno, C. C., Da Róz, A. L., Ziemath, E. C., and Oliveira, O. N. Theoretical Models for Surface Forces and Adhesion and Their Measurement Using Atomic Force Microscopy. International Journal of Molecular Sciences, 13(12):12773-12856, 2012.
+Liu, M., Ren, S., Ma, S., Jiao, J., Chen, Y., Wang, Z., and Song, W. Gated transformer networks for multivariate time series classification. arXiv preprint arXiv:2103.14438, 2021.
+Liu, Z., Liu, H., Vera, A. M., Bernardi, R. C., Tinnefeld, P., and Nash, M. A. High force catch bond mechanism of bacterial adhesion in the human gut. Nature Communications, 11(1):4321, 2020.
+Livadaru, L., Netz, R. R., and Kreuzer, H. J. Stretching Response of Discrete Semiflexible Polymers. Macromolecules, 36(10):3732-3744, 2003.
+
+Lyubchenko, Y. L. Nanoscale Imaging: Methods and Protocols. Springer, 2018.
+Löning, M., Bagnall, A., Ganesh, S., Kazakov, V., Lines, J., and Király, F. J. sktime: A unified interface for machine learning with time series. arXiv preprint arXiv:1909.07872, 2019.
+Mendell, J. R., Shilling, C., Leslie, N. D., Flanigan, K. M., al-Dahhak, R., Gastier-Foster, J., Kneile, K., Dunn, D. M., Duval, B., and Aoyagi, A. Evidence-based path to newborn screening for Duchenne muscular dystrophy. Annals of neurology, 71(3):304-313, 2012.
+Nair, V. and Hinton, G. E. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th international conference on machine learning (ICML-10), pp. 807-814, 2010.
+Oberhauser, A. F., Marszalek, P. E., Erickson, H. P., and Fernandez, J. M. The molecular elasticity of the extracellular matrix protein tenascin. Nature, 393(6681):181-185, 1998.
+Oberhauser, A. F., Hansma, P. K., Carrion-Vazquez, M., and Fernandez, J. M. Stepwise unfolding of titin under force-clamp atomic force microscopy. Proceedings of the National Academy of Sciences, 98(2):468-472, 2001.
+Ortiz, C. and Hadzioannou, G. Entropic Elasticity of Single Polymer Chains of Poly(methacrylic acid) Measured by Atomic Force Microscopy. Macromolecules, 32(3):780-787, 1999.
+Otten, M., Ott, W., Jobst, M. A., Milles, L. F., Verdorfer, T., Pippig, D. A., Nash, M. A., and Gaub, H. E. From genes to protein mechanics on a chip. Nature Methods, 11(11): 1127-1130, 2014.
+Pham, T.-A., Lee, J.-H., and Park, C.-S. MST-VAE: MultiScale Temporal Variational Autoencoder for Anomaly Detection in Multivariate Time Series. Applied Sciences, 12(19):10078, 2022.
+Puchner, E. M., Franzen, G., Gautel, M., and Gaub, H. E. Comparing Proteins by Their Unfolding Pattern. Biophysical Journal, 95(1):426-434, 2008.
+Rajaganapathy, S., McCourt, J. L., Ghosal, S., Lindsay, A., McCourt, P. M., Lowe, D. A., Ervasti, J. M., and Salapaka, M. V. Distinct mechanical properties in homologous spectrin-like repeats of utrophin. Sci. Rep., 9(1):5210, 2019.
+Ramirez, M. P., Rajaganapathy, S., Hagerty, A. R., Hua, C., Baxter, G. C., Vavra, J., Gordon, W. R., Muretta, J. M.,
+
+Salapaka, M. V., and Ervasti, J. M. Phosphorylation alters the mechanical stiffness of a model fragment of the dystrophin homologue utrophin. Journal of Biological Chemistry, 299(2):102847, 2023.
+Rief, M., Gautel, M., Oesterhelt, F., Fernandez, J. M., and Gaub, H. E. Reversible Unfolding of Individual Titin Immunoglobulin Domains by AFM. Science, 276(5315): 1109-1112, 1997.
+Rief, M., Gautel, M., Schemmel, A., and Gaub, H. E. The Mechanical Stability of Immunoglobulin and Fibronectin III Domains in the Muscle Protein Titin Measured by Atomic Force Microscopy. Biophysical Journal, 75(6): 3008-3014, 1998.
+Rief, M., Pascual, J., Saraste, M., and Gaub, H. E. Single molecule force spectroscopy of spectrin repeats: low unfolding forces in helix bundles. Journal of Molecular Biology, 286(2):553-561, 1999.
+Schoeler, C., Malinowska, K. H., Bernardi, R. C., Milles, L. F., Jobst, M. A., Durner, E., Ott, W., Fried, D. B., Bayer, E. A., Schulten, K., Gaub, H. E., and Nash, M. A. Ultrastable cellulosome-adhesion complex tightens under load. Nature Communications, 5(1):5635, 2014.
+Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929-1958, 2014.
+Summers, C. and Dinneen, M. J. Improved mixed-example data augmentation. In 2019 IEEE winter conference on applications of computer vision (WACV), pp. 1262-1270. IEEE, 2019.
+Sundermeyer, M., Schlüter, R., and Ney, H. LSTM neural networks for language modeling. In Interspeech 2012, pp. 194-197. ISCA, 2012.
+Tavenard, R., Faouzi, J., Vandewiele, G., Divo, F., Androz, G., Holtz, C., Payne, M., Yurchak, R., Rußwurm, M., Kolar, K., and Woods, E. Tslearn, A Machine Learning Toolkit for Time Series Data. Journal of Machine Learning Research, 21(118):1-6, 2020.
+Tokozume, Y., Ushiku, Y., and Harada, T. Learning from between-class examples for deep sound recognition. arXiv preprint arXiv:1711.10282, 2017.
+Tokozume, Y., Ushiku, Y., and Harada, T. Between-Class Learning for Image Classification. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5486-5494, Salt Lake City, UT, 2018. IEEE.
+Waite, J. R., Tan, S. Y., Saha, H., Sarkar, S., and Sarkar, A. Few-shot deep learning for AFM force curve characterization of single-molecule interactions. Patterns, 4(1): 100672, 2023.
+
+Wang, L., Zhang, J., Liu, Y., Mi, J., and Zhang, J. Multimodal Medical Image Fusion Based on Gabor Representation Combination of Multi-CNN and Fuzzy Neural Network. IEEE Access, 9:67634-67647, 2021.
+Wang, Z., Yan, W., and Oates, T. Time series classification from scratch with deep neural networks: A strong baseline. In 2017 International joint conference on neural networks (IJCNN), pp. 1578-1585. IEEE, 2017.
+Yang, B., Liu, H., Liu, Z., Doenen, R., and Nash, M. A. Influence of Fluorination on Single-Molecule Unfolding and Rupture Pathways of a Mechanostable Protein Adhesion Complex. Nano Letters, 20(12):8940-8950, 2020a.
+Yang, B., Liu, Z., Liu, H., and Nash, M. A. Next Generation Methods for Single-Molecule Force Spectroscopy on Polyproteins and Receptor-Ligand Complexes. Frontiers in Molecular Biosciences, 7:85, 2020b.
+Zhang, S., Qian, H., Liu, Z., Ju, H., Lu, Z., Zhang, H., Chi, L., and Cui, S. Towards Unveiling the Exact Molecular Structure of Amorphous Red Phosphorus by Single-Molecule Studies. Angewandte Chemie International Edition, 58(6):1659-1663, 2019.
+Zhang, X., Gao, Y., Lin, J., and Lu, C.-T. TapNet: Multivariate Time Series Classification with Attentional Prototypical Network. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04):6845-6852, 2020.
+Zheng, Y., Liu, Q., Chen, E., Ge, Y., and Zhao, J. L. Exploiting multi-channels deep convolutional neural networks for multivariate time series classification. Frontiers of Computer Science, 10(1):96-112, 2016.
+Zimmermann, J. L., Nicolaus, T., Neuert, G., and Blank, K. Thiol-based, site-specific and covalent immobilization of biomolecules for single-molecule experiments. Nature Protocols, 5(6):975-985, 2010.
+
+# A. Simulating protein unfolding
+
+Monte Carlo simulation-based methods are widely used in SMFS studies, yielding results that closely align with experimental data (Liu et al., 2020; King et al., 2010). However, prior simulation frameworks are restricted to the idealized case of force curves arising from a single molecule. To build a comprehensive training dataset, we also incorporate the realistic cases of force curves generated by no molecule and by multiple molecules. Additionally, we model cases where only partial sections of molecules are present, as well as stochastic adhesion and detachment events of the cantilever to the protein, to better approximate experimental data.
+
+Algorithm 1 Monte Carlo Simulation
+Require: $v, L_c^{(i)}, L_p^{(i)}, \Delta L_c^{(i)}, \Delta L_p^{(i)}, k_0, \Delta x^\ddagger, \Delta G^\ddagger, N, D^{(i)}$
+Initialization: $z \gets 0$ , $U^{(i)} \gets 0$
+1: for $t \gets 0: \Delta t: T$ do
+2: $z \gets z + v \Delta t$
+3: Solve (3) for extension $x$
+4: for $i \gets 1: 1: N$ do
+5: Calculate $F_{WLC}^{(i)}$ using (4)
+6: Compute $k_{off}(F_{WLC}^{(i)})$ using (13)
+7: Compute $P_u^{(i)}(F_{WLC}^{(i)})$ using (5)
+8: Draw $\eta^{(i)} \sim \mathcal{U}_{[0,1]}$
+9: if $\eta^{(i)} < P_u^{(i)}(F_{WLC}^{(i)})$ then
+10: $L_c^{(i)} \gets L_c^{(i)} + \Delta L_c^{(i)}, L_p^{(i)} \gets L_p^{(i)} + \Delta L_p^{(i)}$
+11: $U^{(i)} \gets U^{(i)} + 1$
+12: end if
+13: Draw $\eta_d^{(i)} \sim \mathcal{U}_{[0,1]}$
+14: if $U^{(i)} == D^{(i)}$ and $\eta_d^{(i)} < P_d(F_{WLC}^{(i)})$ then
+15: $L_c^{(i)} \gets L_c^{(i)} + C_{Lc}, L_p^{(i)} \gets L_p^{(i)} + C_{Lp}$
+16: end if
+17: end for
+18: end for
+
+For the simulations, $N$ proteins are considered, with the $i$ -th protein having $D^{(i)}$ folded domains attached between the substrate and the force probe. The base of the cantilever probe is moved away at a constant speed $v$ ; the position $z$ of the cantilever base is initialized at zero and updated every $\Delta t$ seconds. The protein extension $x$ is determined by solving the equation,
+
+$$
+k _ {c} d = \sum_ {i = 1} ^ {N} F _ {W L C} ^ {(i)} \left(x, L _ {c} ^ {(i)}, L _ {p} ^ {(i)}\right), \tag {3}
+$$
+
+where $k_{c}$ is the spring constant of the cantilever, and $d$ is the deflection (Fig 1a). Here, $F_{WLC}^{(i)}(x,L_c^{(i)},L_p^{(i)})$ is the worm-like chain (WLC) model that relates the applied force to the extension of the $i$ -th protein (see Appendix A.1), given by (Rief et al., 1999),
+
+$$
+F _ {W L C} ^ {(i)} \left(x, L _ {c} ^ {(i)}, L _ {p} ^ {(i)}\right) := \frac {k _ {B} T}{L _ {p} ^ {(i)}} \left[ \frac {1}{4 \left(1 - \frac {x}{L _ {c} ^ {(i)}}\right) ^ {2}} - \frac {1}{4} + \frac {x}{L _ {c} ^ {(i)}} \right], \tag {4}
+$$
+
+where $k_{B}$ is the Boltzmann constant, $T$ is the temperature, and $L_{c}^{(i)}$ and $L_{p}^{(i)}$ are the contour length and the persistence length of the $i$ -th protein, respectively. For the $i$ -th protein, the probability of a domain unfolding during the time interval $\Delta t$ is given by
+
+$$
+P _ {u} ^ {(i)} \left(F _ {W L C} ^ {(i)}\right) = \left(D ^ {(i)} - U ^ {(i)}\right) \left(1 - e ^ {- k _ {o f f} \left(F _ {W L C} ^ {(i)}\right) \Delta t}\right), \tag {5}
+$$
+
+where $U^{(i)}$ is the number of unfolded domains, initially set to zero, and $k_{off}(F_{WLC}^{(i)})$ is the transition rate, which can be determined with the Dudko-Hummer-Szabo model (Dudko et al., 2008) (see Appendix A.2).
+
+To determine unfolding events of the $i$ -th protein, a random number $\eta^{(i)}$ is drawn uniformly from $[0,1]$ and compared to the unfolding probability $P_{u}^{(i)}(F_{WLC}^{(i)})$ . If the random number exceeds $P_{u}^{(i)}(F_{WLC}^{(i)})$ , no unfolding event is triggered and the simulation advances to the next time step by adding the interval $\Delta t$ . Otherwise, one of the domains unfolds, incrementing the number of unfolded domains $U^{(i)}$ by 1; the contour length and persistence length are then updated by adding the increments $\Delta L_{c}^{(i)}$ and $\Delta L_{p}^{(i)}$ , respectively, and the simulation continues toward the next unfolding event while folded domains remain.
+
+Once all domains of the $i$ -th protein have unfolded, the protein detaches from either the cantilever tip or the substrate according to the detachment probability,
+
+$$
+P _ {d} (F) = \left\{ \begin{array}{l l} C _ {d} & F \geq F _ {t d} \\ 0 & F < F _ {t d} \end{array} , \right. \tag {6}
+$$
+
+where $C_d$ is a constant and $F_{td}$ is a random threshold sampled from a Gaussian distribution, denoting the force at which detachment begins. Upon detachment of the $i$ -th protein, its WLC force is driven to zero by adding large constants $C_{Lc}$ and $C_{Lp}$ to the contour length $L_c^{(i)}$ and persistence length $L_p^{(i)}$ , respectively. To better replicate experimental force curves, we add Gaussian noise to the WLC force immediately after its calculation at line 5 of Algorithm 1. The adhesion force (see Appendix A.3) is added at the end of the simulation process.
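+As a minimal illustration, the per-protein decision at lines 5-12 of Algorithm 1 can be sketched as follows. All parameter values below are illustrative, and the force-dependent rate $k_{off}$ is supplied externally rather than computed from the DHS model:
+
+```python
+import numpy as np
+
+KBT = 4.11e-21  # thermal energy k_B*T at room temperature, in joules
+
+def wlc_force(x, Lc, Lp):
+    """Worm-like chain force of Eq. (4) for an extension x < Lc."""
+    s = x / Lc
+    return (KBT / Lp) * (0.25 / (1.0 - s) ** 2 - 0.25 + s)
+
+def p_unfold(k_off, folded, dt):
+    """Eq. (5): probability that one of `folded` domains unfolds within dt."""
+    return folded * (1.0 - np.exp(-k_off * dt))
+
+def mc_step(x, Lc, Lp, k_off, folded, dLc, dLp, dt, rng):
+    """One Monte Carlo unfolding decision for a single protein."""
+    F = wlc_force(x, Lc, Lp)
+    if rng.uniform() < p_unfold(k_off, folded, dt):
+        # a domain unfolds: lengthen the chain and count the event
+        Lc, Lp, folded = Lc + dLc, Lp + dLp, folded - 1
+    return F, Lc, Lp, folded
+
+rng = np.random.default_rng(0)
+# a very large rate forces an unfolding event in this step (illustrative values)
+F, Lc, Lp, folded = mc_step(50e-9, 100e-9, 0.3e-9, 1e9, 8, 28e-9, 0.0, 1e-4, rng)
+```
+
+At each step the WLC force is evaluated at the current chain parameters, and the uniform draw against $P_u^{(i)}$ decides whether the contour and persistence lengths are incremented.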
+
+# A.1. Polymer elastic models
+
+Various polymer elasticity models have been developed to model the force-extension $(F - x)$ behavior. The widely used models are the worm-like chain (WLC) model (Bustamante et al., 1994), the freely jointed chain (FJC) model (Ortiz & Hadzioannou, 1999), and the freely rotating chain (FRC) model (Livadaru et al., 2003).
+
+The force-extension relationship for each model is given by the following equations:
+
+The WLC model:
+
+$$
+F = \frac {k _ {B} T}{L _ {p}} \left[ \frac {1}{4 \left(1 - \frac {x}{L _ {c}}\right) ^ {2}} - \frac {1}{4} + \frac {x}{L _ {c}} \right], \tag {7}
+$$
+
+where $k_{B}$ is the Boltzmann constant, $T$ is temperature, $L_{c}$ is the contour length, and $L_{p}$ is the persistence length.
+
+The FJC model:
+
+$$
+\frac {x}{L _ {c}} = \coth \left(\frac {F L _ {k}}{k _ {B} T}\right) - \frac {k _ {B} T}{F L _ {k}}, \tag {8}
+$$
+
+where $L_{k}$ is the Kuhn length.
+
+The FRC model:
+
+$$
+\frac {x}{L _ {c}} = 1 - \frac {k _ {B} T}{2 F L _ {b}}, \tag {9}
+$$
+
+where $L_{b}$ is the bond length.
+
+In these models, the contour length $L_{c}$ is a robust statistical parameter representing the maximal physically possible extension in a given folding state. Puchner et al. (2008) first proposed solving for the contour length $L_{c}$ at each data point, rather than fitting each loading region with a polymer model. This approach has been widely adopted in SMFS studies (Liu et al., 2020; Yang et al., 2020b; Jobst et al., 2013; Otten et al., 2014; Zhang et al., 2019; Schoeler et al., 2014). Given the physically relevant constraints ( $L_{c} > 0, x > 0, F > 0, x < L_{c}$ ) and fixed values for the persistence length $L_{p}$ , the Kuhn length $L_{k}$ , and the bond length $L_{b}$ , the contour length $L_{c}$ can be calculated using the following equations for the three polymer elasticity models:
+
+The WLC model (Jobst et al., 2013):
+
+$$
+L _ {c} ^ {W L C} = \frac {x}{6 u} \left(3 + 4 u + \frac {9 - 3 u + 4 u ^ {2}}{g (u)} + g (u)\right), \tag {10}
+$$
+
+where $u = \frac{FL_p}{k_BT}$ and $g(u) = \left(27 - \frac{27}{2} u + 36u^2 - 8u^3 + \frac{3\sqrt{3}}{2}\sqrt{-u^2[(4u - 3)^3 - 108]}\right)^{1/3}$ .
+
+The FJC model:
+
+$$
+L _ {c} ^ {F J C} = \frac {x}{\coth \left(\frac {F L _ {k}}{k _ {B} T}\right) - \frac {k _ {B} T}{F L _ {k}}}. \tag {11}
+$$
+
+The FRC model, assuming bonds are connected at a fixed angle $\gamma$ (Livadaru et al., 2003; Liu et al., 2020):
+
+$$
+L _ {c} ^ {F R C} = \left\{ \begin{array}{l l} \frac {3 x k _ {B} T}{F a} & \text {for } \frac {F L _ {b}}{k _ {B} T} < \frac {L _ {b}}{p} \\ \frac {x}{1 - \left(\frac {4 F p}{k _ {B} T}\right) ^ {- 1 / 2}} & \text {for } \frac {L _ {b}}{p} < \frac {F L _ {b}}{k _ {B} T} < \frac {p}{L _ {b}}, \\ \frac {x}{1 - \left(\frac {2 F L _ {b}}{k _ {B} T}\right) ^ {- 1}} & \text {for } \frac {p}{L _ {b}} < \frac {F L _ {b}}{k _ {B} T} \end{array} \right. \tag {12}
+$$
+
+where $a = L_{b}\frac{1 + \cos\gamma}{(1 - \cos\gamma)\cos(\gamma / 2)}$ is the Kuhn length and $p = L_{b}\frac{\cos(\gamma / 2)}{|\ln(\cos\gamma)|}$ is the persistence length.
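+As a concrete check, the closed form of Eq. (10) inverts the WLC relation of Eq. (7) point by point. A sketch in SI units (valid while the square-root argument stays non-negative, i.e. for moderate $u$):
+
+```python
+import numpy as np
+
+KBT = 4.11e-21  # k_B*T at room temperature, in joules
+
+def wlc_force(x, Lc, Lp):
+    """WLC force of Eq. (7)."""
+    s = x / Lc
+    return (KBT / Lp) * (0.25 / (1.0 - s) ** 2 - 0.25 + s)
+
+def contour_length_wlc(F, x, Lp):
+    """Closed-form contour length of Eq. (10), one value per (F, x) data point."""
+    u = F * Lp / KBT
+    g = (27.0 - 13.5 * u + 36.0 * u**2 - 8.0 * u**3
+         + 1.5 * np.sqrt(3.0) * np.sqrt(-u**2 * ((4.0 * u - 3.0) ** 3 - 108.0))) ** (1.0 / 3.0)
+    return x / (6.0 * u) * (3.0 + 4.0 * u + (9.0 - 3.0 * u + 4.0 * u**2) / g + g)
+
+# round trip: the force at (x, Lc_true) maps back to Lc_true
+Lc_true, Lp, x = 100e-9, 0.3e-9, 50e-9
+Lc_rec = contour_length_wlc(wlc_force(x, Lc_true, Lp), x, Lp)
+```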
+
+# A.2. The Dudko-Hummer-Szabo model
+
+The Dudko-Hummer-Szabo (DHS) model (Dudko et al., 2008) provided an expression to find the force-dependent transition rate $k_{off}(F)$ by,
+
+$$
+k _ {o f f} (F) = k _ {0} \left(1 - \frac {\nu F \Delta x ^ {\ddagger}}{\Delta G ^ {\ddagger}}\right) ^ {\frac {1}{\nu} - 1} e ^ {\beta \Delta G ^ {\ddagger} \left[ 1 - \left(1 - \frac {\nu F \Delta x ^ {\ddagger}}{\Delta G ^ {\ddagger}}\right) ^ {1 / \nu} \right]}, \tag {13}
+$$
+
+where $k_{0}$ is the intrinsic transition rate, $\Delta x^{\ddagger}$ is the distance to the energy barrier, $\Delta G^{\ddagger}$ is the energy barrier height, $\beta = \frac{1}{k_{B}T}$ , and $\nu = 1/2$ or $2/3$ , representing a cusp-like or linear-cubic free-energy landscape, respectively. Here $k_{0}, \Delta x^{\ddagger}$ , and $\Delta G^{\ddagger}$ are defined in the absence of external force. The parameters for titin I27 are reported by Dudko et al. (2006), while those for UtrN-R3 and DysN-R3 are reported by Hua et al. (2024).
+
+Table 2. The DHS model parameters
+
+| Molecules (DHS, $\nu = 1/2$) | $\ln(k_0)$ | $\Delta x^\ddagger$ [nm] | $\Delta G^\ddagger$ [$k_B T$] |
+| --- | --- | --- | --- |
+| Titin I27 (Dudko et al., 2006) | -9.21 | 0.40 | 20.00 |
+| Bact UtrN-R3 (Hua et al., 2024) | -6.50 | 0.85 | 14.50 |
+| Bact DysN-R3 (Hua et al., 2024) | -4.50 | 0.60 | 11.50 |
+| Insect UtrN-R3 (Hua et al., 2024) | -2.50 | 0.41 | 9.80 |
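+With the tabulated values, Eq. (13) can be evaluated directly. A sketch working in pN·nm units, where $k_B T \approx 4.11$ pN·nm at room temperature, assuming the cusp-like landscape $\nu = 1/2$:
+
+```python
+import numpy as np
+
+KBT = 4.11  # k_B*T in pN*nm at room temperature
+
+def k_off_dhs(F_pN, ln_k0, dx_nm, dG_kBT, nu=0.5):
+    """DHS transition rate of Eq. (13); F in pN, dx in nm, dG in units of k_B*T."""
+    arg = 1.0 - nu * F_pN * dx_nm / (dG_kBT * KBT)
+    return np.exp(ln_k0) * arg ** (1.0 / nu - 1.0) \
+        * np.exp(dG_kBT * (1.0 - arg ** (1.0 / nu)))
+
+# titin I27 parameters from Table 2
+rate_0 = k_off_dhs(0.0, -9.21, 0.40, 20.00)      # at zero force this equals k0 = exp(-9.21)
+rate_100 = k_off_dhs(100.0, -9.21, 0.40, 20.00)  # the rate rises steeply with force
+```
+
+The exponential amplification with force is what concentrates unfolding events near the force peaks of the sawtooth pattern.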
+
+# A.3. Adhesion force model
+
+The adhesive force can comprise various components, such as van der Waals, capillary, and chemical forces, which depend on environmental conditions such as roughness, interaction angles, and wetness (Leite et al., 2012; Israelachvili, 2011). However, quantifying these conditions is challenging, and they can vary significantly between experiments. Consequently, we adopt a straightforward yet versatile model of the adhesive force rather than a more intricate approach,
+
+$$
+F _ {a} (t) = \left\{ \begin{array}{l l} \frac {t}{t _ {1}} F _ {a d} & 0 < t < t _ {1} \\ \frac {t _ {2} - t}{t _ {2} - t _ {1}} F _ {a d} & t _ {1} \leq t < t _ {2}, \\ 0 & \text {else} \end{array} \right. \tag {14}
+$$
+
+where the adhesive force increases linearly over the interval $[0, t_1]$ , reaching the adhesive force threshold $F_{ad}$ at $t_1$ ; the adhesion between the cantilever tip and the substrate then begins to break at $t_1$ and vanishes at $t_2$ . The vanishing phase $[t_1, t_2]$ should be much faster than the adhesive phase $[0, t_1]$ , a common choice being to set $t_2$ close to $t_1$ , for example, $t_2 = 1.1t_1$ . To introduce stochasticity and enhance generality, we take both $F_{ad}$ and $t_1$ to be Gaussian random variables with user-specified mean and standard deviation.
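+A sketch of Eq. (14), with the stochastic parameters drawn as described (the means and standard deviations below are placeholders):
+
+```python
+import numpy as np
+
+def adhesion_force(t, t1, t2, F_ad):
+    """Piecewise-linear adhesion of Eq. (14): ramp up on [0, t1], ramp down on [t1, t2]."""
+    if 0.0 < t < t1:
+        return F_ad * t / t1
+    if t1 <= t < t2:
+        return F_ad * (t2 - t) / (t2 - t1)
+    return 0.0
+
+rng = np.random.default_rng(0)
+F_ad = rng.normal(loc=100.0, scale=10.0)      # adhesion threshold in pN (placeholder mean/std)
+t1 = abs(rng.normal(loc=0.01, scale=0.001))   # seconds (placeholder mean/std)
+t2 = 1.1 * t1  # vanishing phase kept much shorter than the adhesive phase
+```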
+
+
+Figure 8. Overall architecture of the PemNN model with late fusion strategy. The data augmentation block is not enabled by default but is utilized through the physics-guided pretraining strategy described in Appendix E.
+
+# B. PemNN fusion methods
+
+PemNN contains two branches: the force trace branch $\mathcal{T}$ and the physics-based branch $\mathcal{P}$ . To uniquely identify a block, we use the notation $(\mathcal{C},pos)$ , where $\mathcal{C} \in \{\mathcal{P},\mathcal{T},\mathcal{PT}\}$ specifies the channel, with $\mathcal{PT}$ representing the fusion of the two branches, and $pos \in \{pre - module, CNN1, BN1, RELU1, CNN2, fusion, GAP, \ldots\}$ indicates the block name. For example, $(\mathcal{T},CNN1)$ identifies the first convolutional layer in the force trace branch. We use $f_{\mathcal{C},pos}$ to denote the output of a block. PemNN supports two fusion strategies, late fusion and early fusion, which differ in the dimensionality of the features being fused.
+
+The early-fusion module is applied to fuse the outputs of layers $(\mathcal{T},RELU1)$ and $(\mathcal{P},RELU1)$ (Figure 2). We discuss four methods as follows:
+
+- Early-Sum:
+
+$$
+f _ {E a r l y - f u s i o n} \left(\mathcal {F} ^ {(i)}, \mathcal {X} ^ {(i)}\right) = f _ {\mathcal {T}, R E L U 1} \left(\mathcal {F} ^ {(i)}\right) + f _ {\mathcal {P}, R E L U 1} \left(\mathcal {F} ^ {(i)}, \mathcal {X} ^ {(i)}\right) \tag {15}
+$$
+
+which sums convolutional feature maps across branches.
+
+- Early-Max:
+
+$$
+f _ {E a r l y - f u s i o n} \left(\mathcal {F} ^ {(i)}, \mathcal {X} ^ {(i)}\right) = \max \left\{f _ {\mathcal {T}, R E L U 1} \left(\mathcal {F} ^ {(i)}\right), f _ {\mathcal {P}, R E L U 1} \left(\mathcal {F} ^ {(i)}, \mathcal {X} ^ {(i)}\right) \right\} \tag {16}
+$$
+
+where the maximum value at each element is selected.
+
+- Early-Wavg:
+
+$$
+f _ {E a r l y - f u s i o n} \left(\mathcal {F} ^ {(i)}, \mathcal {X} ^ {(i)}\right) = \alpha_ {\mathcal {T}} f _ {\mathcal {T}, R E L U 1} \left(\mathcal {F} ^ {(i)}\right) + \alpha_ {\mathcal {P}} f _ {\mathcal {P}, R E L U 1} \left(\mathcal {F} ^ {(i)}, \mathcal {X} ^ {(i)}\right) \quad \text {s.t.}\ \alpha_ {\mathcal {T}} + \alpha_ {\mathcal {P}} = 1, \tag {17}
+$$
+
+which computes a weighted sum of feature maps with coefficients $\alpha_{\mathcal{T}},\alpha_{\mathcal{P}}$ learned during optimization.
+
+- Early-Conv: The three methods described above require consistent output shapes for the layers $(\mathcal{T},RELU1)$ and $(\mathcal{P},RELU1)$ . The $(i,j)$ element of $f_{Early - fusion}(\mathcal{F}^{(i)},\mathcal{X}^{(i)})$ is derived solely from the corresponding $(i,j)$ elements of $f_{\mathcal{T},RELU1}(\mathcal{F}^{(i)})$ and $f_{\mathcal{P},RELU1}(\mathcal{F}^{(i)},\mathcal{X}^{(i)})$ , without any information exchange across branches. To exchange information across branches, we propose using convolution with a set of filters $\omega^{Early - conv}$ of length 1:
+
+$$
+f _ {E a r l y - f u s i o n} (\mathcal {F} ^ {(i)}, \mathcal {X} ^ {(i)}) = \left(f _ {\mathcal {T}, R E L U 1} (\mathcal {F} ^ {(i)}) \| f _ {\mathcal {P}, R E L U 1} (\mathcal {F} ^ {(i)}, \mathcal {X} ^ {(i)})\right) * \omega^ {E a r l y - c o n v}
+$$
+
+where $\ast$ is the convolution operator, and the vertical concatenating operator $\parallel$ is defined as $v_{i}\parallel v_{j} = (v_{i0},v_{i1},\ldots ,v_{in},v_{j0},\ldots v_{jn})^{T}$ given $v_{i} = (v_{i0},v_{i1},\dots,v_{in})^{T}$ and $v_{j} = (v_{j0},v_{j1},\dots,v_{jn})^{T}$ .
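+The four early-fusion variants reduce to simple array operations on the two branches' feature maps. A NumPy sketch, where the feature shapes and the length-1 filter bank $\omega$ are illustrative:
+
+```python
+import numpy as np
+
+def early_sum(fT, fP):            # Eq. (15): element-wise sum
+    return fT + fP
+
+def early_max(fT, fP):            # Eq. (16): element-wise maximum
+    return np.maximum(fT, fP)
+
+def early_wavg(fT, fP, alpha_T):  # Eq. (17): convex combination, alpha_T + alpha_P = 1
+    return alpha_T * fT + (1.0 - alpha_T) * fP
+
+def early_conv(fT, fP, w):
+    """Early-Conv: concatenate along the channel axis, then mix information across
+    branches with length-1 filters. fT, fP: (channels, length); w: (out_channels, 2*channels)."""
+    cat = np.concatenate([fT, fP], axis=0)  # (2*channels, length)
+    return w @ cat                          # a kernel of length 1 is a per-position matrix product
+
+fT = np.ones((4, 10))        # force trace feature map (illustrative)
+fP = 2.0 * np.ones((4, 10))  # physics-based feature map (illustrative)
+fused = early_conv(fT, fP, np.ones((3, 8)))  # shape (3, 10)
+```
+
+Note that only Early-Conv mixes channels across branches; the other three operate strictly element-wise.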
+
+In the late-fusion strategy, the fusion module is applied after the GAP layers, as depicted in Figure 8. Two methods are considered:
+
+- Late-Concat:
+
+$$
+f _ {L a t e - f u s i o n} \left(\mathcal {F} ^ {(i)}, \mathcal {X} ^ {(i)}\right) = f _ {\mathcal {T}, G A P} \left(\mathcal {F} ^ {(i)}\right) \| f _ {\mathcal {P}, G A P} \left(\mathcal {F} ^ {(i)}, \mathcal {X} ^ {(i)}\right)
+$$
+
+where GAP layers are concatenated directly.
+
+- Late-Gating:
+
+$$
+f _ {L a t e - f u s i o n} \left(\mathcal {F} ^ {(i)}, \mathcal {X} ^ {(i)}\right) = \left(g _ {\mathcal {T}} f _ {\mathcal {T}, G A P} \left(\mathcal {F} ^ {(i)}\right)\right) \| \left(g _ {\mathcal {P}} f _ {\mathcal {P}, G A P} \left(\mathcal {F} ^ {(i)}, \mathcal {X} ^ {(i)}\right)\right)
+$$
+
+where GAP layers are concatenated with learnable weights. The gating weights $g_{\mathcal{T}}, g_{\mathcal{P}}$ are calculated as
+
+$$
+\left(g _ {\mathcal {T}}, g _ {\mathcal {P}}\right) = \sigma \left(W \cdot \left(f _ {\mathcal {T}, G A P} \left(\mathcal {F} ^ {(i)}\right) \| f _ {\mathcal {P}, G A P} \left(\mathcal {F} ^ {(i)}, \mathcal {X} ^ {(i)}\right)\right) + b\right),
+$$
+
+where $W$ and $b$ are the weights and bias of a fully connected layer and $\sigma(\cdot)$ is the softmax activation function.
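+A NumPy sketch of the Late-Gating computation, with the fully connected weights $W$ , $b$ and the feature sizes chosen for illustration:
+
+```python
+import numpy as np
+
+def softmax(z):
+    e = np.exp(z - np.max(z))
+    return e / e.sum()
+
+def late_gating(feat_T, feat_P, W, b):
+    """Scalar gates (g_T, g_P) from a softmax over a linear map of the concatenated
+    GAP features; each branch is scaled by its gate before concatenation."""
+    cat = np.concatenate([feat_T, feat_P])
+    g_T, g_P = softmax(W @ cat + b)
+    return np.concatenate([g_T * feat_T, g_P * feat_P]), (g_T, g_P)
+
+feat_T = np.ones(4)          # GAP output of the force trace branch (illustrative)
+feat_P = 2.0 * np.ones(4)    # GAP output of the physics-based branch (illustrative)
+W, b = np.zeros((2, 8)), np.zeros(2)  # untrained weights give equal gates of 0.5
+fused, gates = late_gating(feat_T, feat_P, W, b)
+```
+
+During training, $W$ and $b$ are learned, so the gates adaptively weight whichever branch is more informative for a given input.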
+
+In multivariate time series classification, the widely used approach is late fusion, which directly concatenates features extracted from Global Average Pooling (GAP) layers. For example, Karim et al. (2019) concatenated the GAP layer of a convolutional block with the output of an LSTM layer; Zheng et al. (2016) concatenated GAP layers of convolutional blocks originating from different input branches; Zhang et al. (2020) integrated both LSTM and GAP features from different branches; and Liu et al. (2021) not only concatenated features from two transformer encoders but also introduced a gating mechanism to assign weights to these features. Unlike late fusion, early fusion combines convolutional feature maps from multiple branches. Examples include Wang et al. (2021), who employed an element-wise maximum operator for feature fusion in image classification; Chen et al. (2018), who summed feature maps for pedestrian detection; and Pham et al. (2022), who concatenated feature maps extracted with different kernel sizes for anomaly detection.
+
+# C. Baselines and implementation
+
+# C.1. Baselines
+
+Triplet The triplet network (Hoffer & Ailon, 2015) takes three samples: the anchor $x$ , a positive sample $x^{+}$ , and a negative sample $x^{-}$ . Here, $x$ is the sample under classification, $x^{+}$ comes from the same class as $x$ , and $x^{-}$ comes from a different class. All three samples are passed through the same network architecture, with shared weights, to learn representations in the embedding space. The triplet loss is employed,
+
+$$
+L _ {\text {t r i p l e t}} = \max \left\{0, \left| \left| \operatorname {N e t} \left(x ^ {+}\right) - \operatorname {N e t} (x) \right| \right| _ {2} ^ {2} - \left| \left| \operatorname {N e t} \left(x ^ {-}\right) - \operatorname {N e t} (x) \right| \right| _ {2} ^ {2} + m \right\}, \tag {18}
+$$
+
+where $||\cdot||_2$ denotes the $L_{2}$ norm, and $m$ is the margin parameter that controls the separation between positive and negative samples in the embedding space. The objective is to minimize the distance between the anchor and the positive sample while maximizing the distance between the anchor and the negative sample. The resulting embeddings are then fed into the MLP described earlier to learn classification. The embedding network adopts the ResNet structure, which is elaborated on below.
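+A minimal NumPy sketch of the triplet loss in Eq. (18); the embeddings and margin below are illustrative:
+
+```python
+import numpy as np
+
+def triplet_loss(anchor, positive, negative, margin=1.0):
+    """Eq. (18): hinge on squared L2 distances in the embedding space."""
+    d_pos = np.sum((positive - anchor) ** 2)  # pull the positive toward the anchor
+    d_neg = np.sum((negative - anchor) ** 2)  # push the negative away
+    return max(0.0, d_pos - d_neg + margin)
+
+# a well-separated triplet incurs zero loss; a flipped one is penalized
+good = triplet_loss(np.zeros(2), np.zeros(2), np.array([2.0, 0.0]))
+bad = triplet_loss(np.zeros(2), np.array([2.0, 0.0]), np.zeros(2))
+```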
+
+FCN The fully convolutional network (FCN) (Wang et al., 2017) is composed of three convolutional blocks followed by a Global Average Pooling (GAP) layer and a softmax layer. Each convolutional block includes a 1D convolutional layer, batch normalization, and a Rectified Linear Unit (ReLU) activation. The output of the final convolutional block is passed through the GAP layer and then a fully connected softmax layer. A stride of 1 with zero padding preserves the length of the time series. The numbers of filters and the kernel sizes of the three convolutional layers are $\{128, 256, 128\}$ and $\{8, 5, 3\}$ , respectively.
+
+ResNet ResNet (He et al., 2016) extends neural networks to very deep structures by incorporating shortcut connections, which let gradients flow directly through the network and ease the training of deeper models. Our Residual Network (ResNet) comprises three residual blocks followed by a GAP layer and a softmax layer. Each residual block contains three convolutional blocks similar to those in the FCN architecture. The output of the last convolutional block is added to the input of the residual block before proceeding to the next layer. The kernel sizes of the three convolutional layers in each residual block match the FCN architecture, $\{8,5,3\}$ . The number of filters is constant within a residual block, with $\{64,128,128\}$ filters for the three residual blocks, respectively.
+
+InceptionTime InceptionTime (Ismail Fawaz et al., 2020) ensembles the predictions of several InceptionNet models, each with different initializations, to overcome the high variance across different runs. The InceptionNet model consists of two residual blocks, followed by a global average pooling (GAP) layer and a softmax layer. Within each residual block are three Inception modules. Each Inception module begins with a bottleneck layer that reduces dimensionality while preserving the original sequence length through a kernel size and stride of 1. Subsequently, 1D convolutions with varying kernel sizes are applied to the bottleneck output, capturing patterns across different scales. Additionally, a max pooling layer, followed by another bottleneck layer, is applied to the original time series.
+
+LSTMFCN LSTMFCN (Karim et al., 2019) augments convolutional layers with an LSTM layer and a squeeze-and-excitation block to generate feature maps, followed by a softmax layer for time series classification. The three convolutional layers in LSTMFCN are configured with $\{128, 256, 128\}$ filters and kernel sizes of $\{8, 5, 3\}$ , respectively. The LSTM layer has a hidden dimension of 8, followed by a dropout layer with a rate of 0.8.
+
+# C.2. Implementation details
+
+In PemNN, the CNN blocks contain $\{128, 256, 128\}$ filters with kernel sizes of $\{8, 5, 3\}$ , respectively; the LSTM layers are configured with a hidden dimension of 8, followed by a dropout layer with a rate of 0.8 to mitigate overfitting. The contour length threshold is set to $1000\mathrm{~nm}$ ; by default, the persistence length is $L_{p} = 300\mathrm{~pm}$ , the Kuhn length $L_{k} = 300\mathrm{~pm}$ , and the bond length $L_{b} = 150\mathrm{~pm}$ . The batch size is 16 for all models except InceptionTime, which uses 64, and the number of training epochs is kept at 200 for all models. All models are trained with Adam (Kingma, 2014) with learning rate $0.001$ , $\beta_{1} = 0.9$ , $\beta_{2} = 0.999$ , and $\epsilon = 10^{-8}$ . The best-performing model, selected by the lowest training loss, is then evaluated on the testing data. All models are trained on an Apple M1 Pro with a 10-core CPU and a 16-core GPU.
+
+# C.3. Data preprocessing
+
+In both single-molecule and multiple-molecule cases, the force curve typically consists of two parts, separated by the detachment event. Our interest lies in the region before detachment, where unfolding events occur. Hence, the force curves are trimmed to retain the region before detachment by identifying the detachment point. This trimming process is illustrated in the transition from Figure 9a to Figure 9b. In the force trace branch, data is resampled using linear interpolation to a default of 400 points, followed by min-max normalization, $x_{\text{scaled}} = \frac{x - x_{\text{min}}}{x_{\text{max}} - x_{\text{min}}}$ , where $x_{\text{min}}$ and $x_{\text{max}}$ are the minimum and maximum values of the data $x$ . This ensures the models focus on learning shapes rather than magnitudes, as shown in the transition from Figure 9b to Figure 9c. In the physics-based branch, 400 samples are extracted from the trimmed force curve by default, and the contour length $L_{c}$ is calculated using the polymer elastic models, as illustrated in Figure 9d. We use tools from the publicly available time series library Tslearn (Tavenard et al., 2020) to implement the two preprocessing steps: resampling and normalization.
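+The two preprocessing steps in the force trace branch amount to linear-interpolation resampling followed by min-max scaling. The pipeline uses Tslearn for this, but a plain NumPy sketch suffices to illustrate the transformation:
+
+```python
+import numpy as np
+
+def preprocess_trace(force, n_points=400):
+    """Resample a trimmed force curve to a fixed length via linear interpolation,
+    then min-max normalize so the model learns shape rather than magnitude."""
+    t_old = np.linspace(0.0, 1.0, num=len(force))
+    t_new = np.linspace(0.0, 1.0, num=n_points)
+    resampled = np.interp(t_new, t_old, force)
+    fmin, fmax = resampled.min(), resampled.max()
+    return (resampled - fmin) / (fmax - fmin)
+
+scaled = preprocess_trace(np.array([5.0, 9.0, 3.0, 7.0]))  # length 400, values in [0, 1]
+```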
+
+
+Figure 9. Data preprocessing example. (a) The original force curve. (b) The force curve after trimming. (c) The force data after normalization and resampling in the force trace branch. (d) The force curve with calculated $L_{c}$ in the physics-based branch.
+
+
+
+
+
+
+
+# D. More Results
+
+# D.1. Results of the Discoidin Domain Receptors (DDRs) dataset
+
+The DDRs dataset contains unfolding curves of interaction forces between DDRs and their ligand, collagen, measured by AFM at the single molecule level (Waite et al., 2023). The dataset contains three classes: 1) no molecule, 2) single molecule, and 3) multiple molecules. Force curve collection was conducted at a single pulling speed, and the dataset is standardized so that all samples have equal length. However, our method, PemNN, is not applicable because extension data is not provided. Instead, we compare the performance of the remaining deep learning methods.
+
+All deep learning models, except the Triplet model, were trained using $80\%$ of the total dataset as the training dataset and tested on the remaining data. Their performance is compared to that of the Triplet model reported in (Waite et al., 2023), as presented in Table 3. The dataset was pre-processed as proposed in (Waite et al., 2023), which involves applying a numerical first-order derivative, followed by a moving average filter with a window size of 13, and finally min-max normalization. Among the models, InceptionTime demonstrated the best performance, achieving approximately $20\%$ higher overall accuracy and F1 score compared to the Triplet model.
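+
+The three preprocessing steps described above can be sketched as follows (a hedged illustration: the exact edge handling of the moving-average filter in (Waite et al., 2023) is an assumption here, using a 'valid' convolution):
+
+```python
+import numpy as np
+
+def preprocess_ddr(curve, window=13):
+    """Sketch of the DDRs preprocessing: first-order numerical derivative,
+    moving-average smoothing (window 13), then min-max normalization."""
+    deriv = np.diff(curve)
+    kernel = np.ones(window) / window
+    smoothed = np.convolve(deriv, kernel, mode="valid")
+    lo, hi = smoothed.min(), smoothed.max()
+    return (smoothed - lo) / (hi - lo)
+```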
+
+Table 3. Overall accuracy, class accuracy, and F1-score on the DDRs dataset for Triplet, FCN, ResNet, LSTMFCN, and InceptionTime, presented in the format of average (standard deviation) over 5 runs. Among these models, InceptionTime demonstrates superior performance compared to the others.
+
+| MODELS | OVERALL ACCURACY (%) | CLASS 0 ACC. (%) | CLASS 1 ACC. (%) | CLASS 2 ACC. (%) | F1-SCORE (%) |
+| --- | --- | --- | --- | --- | --- |
+| TRIPLET (WAITE ET AL., 2023) | 66.70 | 73.30 | 63.30 | 63.30 | 66.70 |
+| FCN | 77.10 (5.15) | 81.22 (4.96) | 77.49 (13.59) | 73.48 (10.06) | 77.30 (5.32) |
+| RESNET | 79.03 (4.84) | 81.90 (10.52) | 83.97 (4.74) | 70.92 (7.44) | 79.02 (4.88) |
+| LSTMFCN | 75.81 (3.42) | 70.91 (2.49) | 88.24 (4.16) | 71.30 (9.02) | 76.17 (3.32) |
+| INCEPTIONTIME | 85.48 (1.14) | 97.14 (2.61) | 84.00 (8.94) | 72.50 (12.96) | 85.26 (1.02) |
+
+# D.2. Ablation study of model architecture
+
+In this section, we explore the architectures of our model, PemNN, in greater detail to provide insights into optimizing model designs for SMFS classification. We examine 1) different fusion strategies, 2) the inclusion or exclusion of LSTM layers, and 3) different polymer elastic models.
+
+
+Figure 10. Ablation study of PemNN architecture. a) Critical diagram comparing various fusion strategies, where 'E-' and 'L-' denote early fusion and late fusion strategies, respectively. b) Critical diagram evaluating the incorporation of LSTM layers, with 'F1P1' indicating LSTM layers included in both the force trace and physics-based branches. c) Critical diagram of different polymer elastic models. d) Performance of PemNN across datasets with three distinct polymer elastic models.
+
+
+
+Fusion Strategies Figure 10a presents the results of various fusion strategies. Our model, PemNN, consistently outperforms the baselines regardless of the fusion strategy used. Moreover, all early fusion methods exhibit superior performance compared to late fusion methods. For late fusion methods, the GAP layers from the last convolutional blocks are concatenated, leading to significant information loss since each element in the GAP layer represents a single channel from the convolutional block. In contrast, early fusion methods directly integrate branches from the first convolutional blocks, preserving more information. Among the early fusion strategies, the Early-Conv strategy stands out, likely due to its use of filters to effectively process information across branches.
+
+LSTM Layers Figure 10b shows the impact of incorporating LSTM layers into the two branches of PemNN. Regardless of the specific use of LSTM layers, PemNN consistently outperforms the baselines, with all models showing statistically significant improvements over the baselines, except for LSTMFCN. Including LSTM layers in both the trace and physics-based branches achieves the best overall performance, with an average ranking of 2.4167.
+
+Different polymer elastic models Regardless of the chosen polymer elastic model, PemNN consistently outperforms the baselines, as shown in Figure 10c. Furthermore, employing the FRC and FJC models improves performance compared to the WLC model. Figure 10d presents the performance of the three polymer elastic models across simulated and experimental datasets using a radar chart. Even though the WLC model is used to generate the simulated datasets, PemNN with the FRC and FJC models performs comparably, with differences in averaged accuracy remaining within $3\%$ . For real experimental datasets, PemNN maintains this robustness, with accuracy differences across the three models limited to within $3\%$ for all four protein molecules.
+
+These results highlight the consistent performance of PemNN in classification tasks, regardless of the fusion method, the incorporation of LSTM layers, or the choice of polymer elastic model, and its ability to consistently outperform baselines.
+
+# D.3. Closer look at force trace branch functionality
+
+In Section 6.3, we demonstrate that our model, PemNN, is robust to variations in physical parameters, specifically the contour length threshold and persistence length. Here we provide a detailed discussion.
+
+
+Figure 11. Contour length calculations using the WLC model with varying persistence lengths $(L_{p})$ , visualized with a simulated force curve of Titin I27O.
+
+Contour length threshold Contour lengths typically span the micrometer scale. However, factors such as signal noise or deviations from the polymer elastic models can lead to exaggerated contour length values, potentially degrading model performance. Despite these challenges, PemNN exhibits robust performance even when datasets include such extreme values.
+
+Persistence length The persistence length $L_{p}$ quantifies the bending stiffness of a polymer. When $L_{p}$ is underestimated, the polymer appears excessively flexible, causing the contour length $L_{c}$ to be overestimated. Conversely, when $L_{p}$ is overestimated, the polymer appears stiffer, and $L_{c}$ approaches the extension $x$ . Figure 11 illustrates the relationship between $L_{p}$ and $L_{c}$ using the WLC model applied to a force curve from the Titin I27O dataset. At $L_{p} = 300$ , the contour length $L_{c}$ remains consistent within unfolding events, indicating a correctly chosen $L_{p}$ . An underestimated $L_{p}$ (0.3-30) results in large and inconsistent $L_{c}$ , leading to performance degradation. When $L_{p}$ is overestimated (3000, 30000), $L_{c}$ becomes linear in $x$ , converging toward the identity function (red line in the inset of Figure 11).
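+
+To make the $L_{p}$ - $L_{c}$ relationship concrete, a minimal sketch of the per-point contour length calculation is shown below. The Marko-Siggia WLC interpolation formula is assumed here, since the appendix does not specify which WLC variant is used; units are pN and nm, with $k_{B}T \approx 4.114$ pN·nm at room temperature:
+
+```python
+KBT = 4.114  # thermal energy at ~298 K, in pN*nm
+
+def wlc_force(x, lc, lp):
+    """Marko-Siggia WLC interpolation: force (pN) at extension x
+    for contour length lc and persistence length lp."""
+    r = x / lc
+    return (KBT / lp) * (0.25 / (1.0 - r) ** 2 - 0.25 + r)
+
+def contour_length(force, x, lp):
+    """Invert the WLC model by bisection: the model force is monotonically
+    decreasing in Lc (it diverges as Lc -> x+ and tends to 0 as Lc -> inf),
+    so the bracket (x, x*1e6) always contains the root for force > 0."""
+    lo, hi = x * (1.0 + 1e-9), x * 1e6
+    for _ in range(200):
+        mid = 0.5 * (lo + hi)
+        if wlc_force(x, mid, lp) > force:
+            lo = mid  # model force too high -> true Lc is larger
+        else:
+            hi = mid
+    return 0.5 * (lo + hi)
+```
+
+An underestimated `lp` inflates the recovered `lc`, while an overestimated `lp` drives `lc` toward `x`, matching the trend in Figure 11.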
+
+# D.4. Performance under varying train/test splits
+
+We evaluated our model and baseline methods using a range of train/test split ratios on experimental datasets. Table 4 reports the mean accuracy (with standard deviation in parentheses) across all experimental datasets over five independent runs. Our model consistently outperforms baseline methods, particularly under low-data conditions, achieving at least $11.4\%$ higher average accuracy when trained on only $5\%$ of the data. As the training data increases, our approach maintains superior accuracy while exhibiting smaller standard deviations.
+
+Table 4. Model performance on experimental datasets with varying train/test split ratios. Values represent mean accuracy (%) with standard deviations over five runs in parentheses.
+
+| MODEL \ TRAIN/TEST RATIO | 5/80 | 40/60 | 60/40 | 80/20 |
+| --- | --- | --- | --- | --- |
+| PEMNN | 79.61 (5.24) | 86.82 (4.86) | 88.50 (5.09) | 88.54 (5.68) |
+| LSTMFCN | 68.26 (12.12) | 85.12 (7.49) | 85.51 (6.87) | 87.66 (7.66) |
+| RESNET | 37.47 (6.33) | 81.01 (9.13) | 84.62 (7.13) | 84.47 (9.26) |
+| FCN | 35.88 (7.37) | 75.50 (8.92) | 75.48 (13.15) | 79.43 (10.18) |
+| TRIPLET | 55.58 (10.17) | 66.78 (12.06) | 69.15 (12.26) | 66.92 (11.45) |
+| INCEPTIONTIME | 35.32 (3.97) | 80.78 (12.78) | 81.35 (9.07) | 84.76 (11.58) |
+
+# D.5. Application to SMFS data analysis
+
+In Section 6.5, we examined the unfolding force distributions of Titin I27O using three methods: RawData as the baseline, Heuristic as a non-machine learning approach, and PemNN, our proposed model. Expanding on this analysis, we provide additional details for utrophin and dystrophin molecules, as illustrated in Figure 12.
+
+Unfolding force distributions are visualized using violin plots, where black stars indicate the values with the highest probability. The violins depict data distributions with kernel density estimation (shown as black lines on either side), and the width of each curve represents the relative frequency of data points. For each protein molecule, Titin I27O, bact UtrN-R3, insect UtrN-R3, and DysN-R3, the unfolding distributions generated by PemNN are significantly more concentrated than those from RawData and Heuristic, as quantified by the interquartile range (IQR) listed in Table 5. Furthermore, the most probable forces obtained with PemNN differ from those achieved by the Heuristic method by no more than $10\mathrm{pN}$ . These results highlight that PemNN effectively analyzes AFM data, accurately capturing key statistical features while filtering out confounding factors from multiple molecules.
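+
+A minimal sketch of the two summary statistics used here; the histogram peak below stands in for the kernel-density mode used in the paper, so values will differ slightly:
+
+```python
+import numpy as np
+
+def force_stats(forces, bins=50):
+    """Most probable unfolding force (histogram peak, a simple stand-in
+    for the KDE mode) and interquartile range of a force sample."""
+    hist, edges = np.histogram(forces, bins=bins)
+    k = int(np.argmax(hist))
+    most_probable = 0.5 * (edges[k] + edges[k + 1])  # center of peak bin
+    q1, q3 = np.percentile(forces, [25, 75])
+    return most_probable, q3 - q1
+```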
+
+Table 5. The most probable forces (in pN), along with the interquartile range (IQR) in parentheses, for all four protein molecules analyzed using different methods during AFM data analysis.
+
+| METHOD | TITIN I27O | BACT UTRN-R3 | INSECT UTRN-R3 | DYSN-R3 |
+| --- | --- | --- | --- | --- |
+| RAWDATA | 192.21 (234.52) | 60.50 (63.22) | 81.41 (85.29) | 63.76 (53.18) |
+| HEURISTIC | 217.41 (270.46) | 72.84 (66.55) | 90.58 (80.57) | 74.38 (53.10) |
+| PEMNN | 206.68 (52.63) | 70.40 (36.97) | 84.66 (39.98) | 72.54 (26.79) |
+
+
+Figure 12. Application of PemNN to AFM data analysis: RawData as the baseline method, Heuristic as the non-machine-learning method, and PemNN as our proposed model. Unfolding force distributions are depicted using violin plots, where black stars signify the positions of values with the highest probability, with values detailed in Table 5. The violins illustrate data distributions using kernel density estimation, shown as black lines on each side, with the width of each curve indicating the relative frequency of data points.
+
+# E. Physics-guided pretraining strategy
+
+Given the challenge of constructing a large, well-annotated dataset from SMFS experimental data, and to eliminate experimental labeling and human bias, we employed a physics-guided pretraining strategy. In this approach, deep learning models are pre-trained on simulated datasets generated using physics-based models and subsequently evaluated on corresponding experimental datasets.
+
+# E.1. Pretraining deep learning models with the physics of protein unfolding
+
+By utilizing simulation data, we effectively incorporate the underlying physics of protein unfolding into our analysis. Our simulation (Appendix A) is carried out using the WLC model. The WLC model encapsulates the physics of protein unfolding, accurately describing the entropic spring-like behavior of the protein between two unfolding events, which is also corroborated by experimental data. Subsequently, we test the performance of these pre-trained deep learning models using experimental data.
+
+Data augmentation via linear combinations of examples from different classes has been shown to be effective in image classification (Summers & Dinneen, 2019; Tokozume et al., 2018; Huang et al., 2020) and sound recognition (Tokozume et al., 2017). Here, we incorporate $M \in \mathbb{N}$ reference data into the force trace branch of PemNN to augment the force data under classification (Figure 8). The reference data are randomly sampled simulated force curves from the training dataset. Each reference datum $\mathcal{F}_j^{(i)}$ is combined with the force data $\mathcal{F}^{(i)}$ via the difference $(\mathcal{F}^{(i)} - \mathcal{F}_j^{(i)})$ , added as an additional channel, resulting in a total of $M + 1$ channels. The force data $\mathcal{F}^{(i)}$ can be either a simulated or an experimental force curve undergoing classification. This augmented input $[\mathcal{F}^{(i)},\mathcal{F}^{(i)} - \mathcal{F}_1^{(i)},\dots ,\mathcal{F}^{(i)} - \mathcal{F}_M^{(i)}]^T$ , comprising the force data and the reference data, is then passed through the force trace branch.
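+
+The augmentation above reduces to stacking difference channels; a minimal sketch (the channels-first tensor layout is an assumption):
+
+```python
+import numpy as np
+
+def augment_with_references(force, refs):
+    """Build the (M+1)-channel input [F, F - F_1, ..., F - F_M] from a
+    force curve and M reference curves (all resampled to equal length)."""
+    channels = [force] + [force - r for r in refs]
+    return np.stack(channels, axis=0)
+```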
+
+# E.2. Physics-guided pretraining strategy performance
+
+We first evaluate the data augmentation block discussed in Appendix E.1 by empirically testing three methods for selecting reference data:
+
+- random balanced: Reference data are randomly selected while maintaining an equal number of samples from each class for every input data.
+- random unbalanced: Reference data are randomly selected without class balance for every input data.
+
+
+Figure 13. Pretraining strategy results with PemNN. a) Critical difference diagram comparing different data augmentation methods. b) The physics-guided pretraining strategy (train size $= 0\%$ ) slightly underperforms compared to direct training with experimental data (train sizes $= 5\%$ to $20\%$ ).
+
+- prefixed balanced: Reference data are fixed for all input data and are selected equally from each class.
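+
+A sketch of the "random balanced" criterion above (a hypothetical helper, not the paper's code; it assumes $M$ is divisible by the number of classes, and the random variants re-draw references for every input):
+
+```python
+import numpy as np
+
+def sample_references_balanced(train_x, train_y, m, rng=None):
+    """Draw m reference curves from the training set with an equal
+    number from each class."""
+    if rng is None:
+        rng = np.random.default_rng()
+    classes = np.unique(train_y)
+    per_class = m // len(classes)
+    idx = np.concatenate([
+        rng.choice(np.flatnonzero(train_y == c), size=per_class, replace=False)
+        for c in classes
+    ])
+    return train_x[idx]
+```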
+
+Figure 13a presents the critical diagram comparing these selection criteria for different numbers of reference data $M$ . Both the random balanced and random unbalanced criteria achieve higher rankings than the prefixed balanced criterion and the case where no reference data is used. Among these, the random balanced approach with $M = 6$ achieves the highest ranking and is adopted for subsequent evaluations.
+
+The physics-guided pretraining strategy demonstrates performance comparable to cases where the model is directly trained using experimental data, despite not utilizing any experimental data during its training. Specifically, the physics-guided pretraining strategy with PemNN achieves an average accuracy of $72.0\%$ and an average ROC-AUC of 0.88 without involving any experimental data (Figure 13). These metrics are only $13\%$ and 0.07 lower, respectively, than when $20\%$ of the experimental data is incorporated during training.
+
+# E.3. Similarities between simulated and experimental data
+
+We compared unfolding forces from experimental and simulated data based on their most probable values and interquartile ranges (IQR) (see Table 6). The most probable values closely match, with differences ranging from $1\mathrm{pN}$ (Titin I27O) to $\sim 10\mathrm{pN}$ , or about $10\%$ (Bact UtrN-R3, Insect UtrN-R3, DysN-R3), indicating reasonable simulation accuracy. The IQR discrepancy likely arises from using a single double-well potential for all domains in the simulations. We note a key insight from our study: neural networks pre-trained on simulated data with homogeneous domains can effectively classify the number of proteins involved in experiments with heterogeneous protein domains.
+
+Table 6. The most probable forces (in pN), along with the interquartile range (IQR) in parentheses, for both experimental and simulated data across all four proteins.
+
+| DATA | TITIN I27O | BACT UTRN-R3 | INSECT UTRN-R3 | DYSN-R3 |
+| --- | --- | --- | --- | --- |
+| EXPERIMENTAL | 216.35 (50.17) | 81.58 (64.07) | 89.45 (70.86) | 91.25 (65.64) |
+| SIMULATED | 217.75 (36.17) | 85.38 (22.28) | 96.65 (40.20) | 79.75 (25.09) |
+
+# E.4. Non-homogeneity introduces more challenges
+
+The ROC curves (Figure 14) for PemNN were generated using the One-vs-Rest strategy, where a given class is regarded as the positive class and the remaining classes are pooled as the negative class. For Titin I27O, the ROC-AUC remains consistently high, above 0.85 in all cases, regardless of whether the training data is experimental or simulated (Figure 14a, Table 7). For both utrophin and dystrophin, the no-molecule class consistently performs well whether or not experimental data is used for training. However, significant improvements are observed for the single-molecule and multiple-molecule classes when experimental data is used for training instead of simulation data, with the ROC-AUC of the single-molecule class increasing by at least 0.2 for insect UtrN-R3, bact UtrN-R3, and DysN-R3 (Figure 14b-d, Table 7). This suggests notable differences between simulation and experimental data for both utrophin and dystrophin.
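+
+The One-vs-Rest reduction can be sketched as follows; this is a generic illustration using the Mann-Whitney formulation of ROC-AUC, not the paper's implementation:
+
+```python
+import numpy as np
+
+def binary_auc(y_true, scores):
+    """ROC-AUC via the Mann-Whitney statistic: the probability that a
+    random positive outranks a random negative (ties count as 1/2)."""
+    pos, neg = scores[y_true == 1], scores[y_true == 0]
+    greater = (pos[:, None] > neg[None, :]).sum()
+    ties = (pos[:, None] == neg[None, :]).sum()
+    return (greater + 0.5 * ties) / (len(pos) * len(neg))
+
+def ovr_roc_auc(y_true, y_prob):
+    """One-vs-Rest ROC-AUC per class: class k is the positive class and
+    the remaining classes are pooled as the negative class."""
+    return [binary_auc((y_true == k).astype(int), y_prob[:, k])
+            for k in range(y_prob.shape[1])]
+```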
+
+Figure 14. ROC curves from a single run for four datasets, (a) Titin I27O, (b) Insect UtrN-R3, (c) Bact UtrN-R3 and (d) DysN-R3, using PemNN. ROC curves are plotted using the One-vs-Rest strategy, with the no molecule, single molecule, and multiple molecules classes depicted in yellow, red, and blue, respectively. Solid lines represent results trained with simulation data, while dashed lines indicate results trained with experimental data. These ROC curves are generated using $80\%$ of experimental data, with the corresponding ROC-AUC in Table 7.
+
+
+The ROC curves allow us to adjust the optimal probability threshold. We can achieve high precision while accepting low sensitivity to ensure the reliability of single molecule data statistics. The probability threshold can be chosen to tune for a minimum sensitivity or a minimum specificity. For example, we selected thresholds such that, on average across the four experimental datasets, $93\%$ of positive single molecule identifications are correct. The method can thus effectively filter data so that inferences are drawn from true single molecule experiments. Alternatively, we can aim for high sensitivity while accepting low precision to avoid losing data, given the scarcity and high cost of producing SMFS data (Appendix E.5).
+
+We further investigated possible reasons for the superior performance on Titin I27O compared to insect UtrN-R3, bact UtrN-R3, and DysN-R3. Titin I27O has identical domains, whereas utrophin and dystrophin have heterogeneous domains. The simulation model we use assumes identical folded domains within a protein; simulating a molecule with heterogeneous domains would require model parameters for each domain, which are not available for insect UtrN-R3, bact UtrN-R3, and DysN-R3. Thus, we hypothesize that when information about the molecular domains is missing from the simulation model, as is the case for molecules with heterogeneous domains, using experimental data in training provides substantial performance improvements.
+
+Table 7. ROC-AUC of ROC curves in Figure 14.
+
+| DATASET | NO MOLECULE (SIM.) | SINGLE MOLECULE (SIM.) | MULTIPLE MOLECULES (SIM.) | NO MOLECULE (EXP.) | SINGLE MOLECULE (EXP.) | MULTIPLE MOLECULES (EXP.) |
+| --- | --- | --- | --- | --- | --- | --- |
+| TITIN I27O | 0.98 | 0.86 | 0.90 | 1.00 | 0.95 | 0.96 |
+| INSECT UTRN-R3 | 0.92 | 0.61 | 0.75 | 0.99 | 0.81 | 0.84 |
+| BACT UTRN-R3 | 0.97 | 0.67 | 0.82 | 1.00 | 0.92 | 0.93 |
+| DYSN-R3 | 0.97 | 0.61 | 0.77 | 1.00 | 0.88 | 0.90 |
+
+(SIM.) denotes trained with simulation data; (EXP.) denotes trained with experimental data.
+
+# E.5. Adjusting probability thresholds
+
+The primary goal is to identify single molecule force curves, so we simplify the problem from multi-class to binary classification, focusing on distinguishing force curves from single molecules versus those from no molecule and multiple molecules. By leveraging the ROC curve, which illustrates the true positive rates (TPR) against the false positive rates (FPR) by varying the probability threshold $t_p$ , we can choose the optimal probability thresholds $t_{op}$ to classify these binary classes. The optimal probability threshold $t_{op}$ is determined by maximizing the difference between TPR and FPR with a weight $\alpha \in \mathbb{R}$ on FPR,
+
+$$
+t_{op} = \operatorname*{argmax}_{t_{p}} \left(\mathrm{TPR} - \alpha \cdot \mathrm{FPR}\right). \tag{19}
+$$
+
+Subsequently, optimal thresholds with varying $\alpha$ are applied to the binary classification task. The results, compared to the original threshold, are presented in Figure 15 for ResNet. The models are trained on simulation data and then evaluated on experimental data.
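+
+Equation (19) amounts to a search over candidate thresholds; a minimal sketch (the uniform threshold grid is an assumption):
+
+```python
+import numpy as np
+
+def optimal_threshold(y_true, scores, alpha=1.0, n_grid=101):
+    """Pick the probability threshold maximizing TPR - alpha*FPR (Eq. 19).
+    A larger alpha penalizes false positives more, favoring precision."""
+    thresholds = np.linspace(0.0, 1.0, n_grid)
+    n_pos = max(int((y_true == 1).sum()), 1)
+    n_neg = max(int((y_true == 0).sum()), 1)
+    best_t, best_val = thresholds[0], -np.inf
+    for t in thresholds:
+        pred = scores >= t
+        tpr = (pred & (y_true == 1)).sum() / n_pos
+        fpr = (pred & (y_true == 0)).sum() / n_neg
+        if tpr - alpha * fpr > best_val:
+            best_t, best_val = t, tpr - alpha * fpr
+    return best_t
+```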
+
+
+Figure 15. Performance of PemNN with varying probability thresholds for all four experimental datasets. The x-axis represents values of $\alpha$ , with 'w/o Threshold' indicating no threshold is used.
+
+A larger weight $\alpha$ places greater emphasis on false positives, thereby increasing precision. For example, the precision of the single-molecule class improves to 0.93 when $\alpha = 10$ . However, this improvement comes with a trade-off in recall. This adjustment enhances the reliability of single molecule data statistics, as we can be more confident that no-molecule and multiple-molecule force curves are not misclassified as single-molecule. Conversely, if the preference is to accept some false positives rather than exclude any true positives, because the data are expensive and scarce, $\alpha = 0.2$ would be the choice to achieve high recall, albeit with lower precision.
+
+# E.6. Discussion and limitations
+
+A significant insight of this study is that classification of force curves into the no-molecule, single-molecule, and multiple-molecule classes (for proteins with heterogeneous domains) can be performed with training based solely on simulation data, in which a single double-well potential model of the domains is employed. This implies that the force curves contain enough additional information to discriminate between no, single, and multiple proteins being pulled. We emphasize that neural networks trained on simulated data with homogeneous domains are capable of classifying the number of proteins involved in experiments on protein molecules with heterogeneous domains.
+
+We identify the following limitations of the physics-guided pretraining strategy. First, the simulation algorithm relies on energy landscape parameters, which are not straightforward to obtain and typically necessitate experiments involving multiple pulling speeds or constant forces. Transforming our physics-guided pretraining strategy to be agnostic to the energy landscape parameters is beyond the scope of this study. Second, in our physics-based protein unfolding model, we assume every protein domain behaves identically. However, many proteins, including utrophin and dystrophin, have folded domains that differ significantly from each other. Developing methods that can extract the individual energy landscapes of the dissimilar domains in a protein is likely to aid in identifying single molecule force curves arising from proteins with dissimilar domains. The development of techniques to extract distinct energy landscapes from a small amount of experimental data should be investigated in the future.
\ No newline at end of file
diff --git a/aphysicsaugmenteddeeplearningframeworkforclassifyingsinglemoleculeforcespectroscopydata/images.zip b/aphysicsaugmenteddeeplearningframeworkforclassifyingsinglemoleculeforcespectroscopydata/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..62c0c0f78c0294546bcd160c8fcf2f6543e671cd
--- /dev/null
+++ b/aphysicsaugmenteddeeplearningframeworkforclassifyingsinglemoleculeforcespectroscopydata/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:af3d6c45492048c7796c12d5bbecfc6bded23c9cb5fbd537df395a8aac842d95
+size 1134396
diff --git a/aphysicsaugmenteddeeplearningframeworkforclassifyingsinglemoleculeforcespectroscopydata/layout.json b/aphysicsaugmenteddeeplearningframeworkforclassifyingsinglemoleculeforcespectroscopydata/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..2284bc71670331c040c8ec1127ce80331f66dd96
--- /dev/null
+++ b/aphysicsaugmenteddeeplearningframeworkforclassifyingsinglemoleculeforcespectroscopydata/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f01df555791e2429680f594d108765865ffe4b1d675fdffd3dbe1913e2f91779
+size 874182
diff --git a/aphysicsinformedmachinelearningframeworkforsafeandoptimalcontrolofautonomoussystems/0f225a9b-e6d1-4cb3-9faf-fd31bcd057b3_content_list.json b/aphysicsinformedmachinelearningframeworkforsafeandoptimalcontrolofautonomoussystems/0f225a9b-e6d1-4cb3-9faf-fd31bcd057b3_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..c195a1153a0397ac1f3cf9654b9ec61afc06c892
--- /dev/null
+++ b/aphysicsinformedmachinelearningframeworkforsafeandoptimalcontrolofautonomoussystems/0f225a9b-e6d1-4cb3-9faf-fd31bcd057b3_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:83b105395c8d66e152852761338020850e107079ae35e7fd1af8e261ef1fe1f4
+size 152785
diff --git a/aphysicsinformedmachinelearningframeworkforsafeandoptimalcontrolofautonomoussystems/0f225a9b-e6d1-4cb3-9faf-fd31bcd057b3_model.json b/aphysicsinformedmachinelearningframeworkforsafeandoptimalcontrolofautonomoussystems/0f225a9b-e6d1-4cb3-9faf-fd31bcd057b3_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..6b513e27a653d04398aee8b49b646bca1e600035
--- /dev/null
+++ b/aphysicsinformedmachinelearningframeworkforsafeandoptimalcontrolofautonomoussystems/0f225a9b-e6d1-4cb3-9faf-fd31bcd057b3_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bf24ac98a5abf74315a911aab100f3642b2532857030c6335e0233aab721f8ef
+size 180668
diff --git a/aphysicsinformedmachinelearningframeworkforsafeandoptimalcontrolofautonomoussystems/0f225a9b-e6d1-4cb3-9faf-fd31bcd057b3_origin.pdf b/aphysicsinformedmachinelearningframeworkforsafeandoptimalcontrolofautonomoussystems/0f225a9b-e6d1-4cb3-9faf-fd31bcd057b3_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..4f9d0ba82e11176669b636d273fde0dc13e140cc
--- /dev/null
+++ b/aphysicsinformedmachinelearningframeworkforsafeandoptimalcontrolofautonomoussystems/0f225a9b-e6d1-4cb3-9faf-fd31bcd057b3_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4471a3ed5deff7bafa22a41b209c28757d67904a1a46a76b9e4d97b81809e17f
+size 2146506
diff --git a/aphysicsinformedmachinelearningframeworkforsafeandoptimalcontrolofautonomoussystems/full.md b/aphysicsinformedmachinelearningframeworkforsafeandoptimalcontrolofautonomoussystems/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..4f55631a3c06a92e8467914b4e9d33b8612aaf21
--- /dev/null
+++ b/aphysicsinformedmachinelearningframeworkforsafeandoptimalcontrolofautonomoussystems/full.md
@@ -0,0 +1,804 @@
+# A Physics-Informed Machine Learning Framework for Safe and Optimal Control of Autonomous Systems
+
+Manan Tayal $^{*1}$ Aditya Singh $^{*1}$ Shishir Kolathaya $^{1}$ Somil Bansal $^{2}$
+
+# Abstract
+
+As autonomous systems become more ubiquitous in daily life, ensuring high performance with guaranteed safety is crucial. However, safety and performance could be competing objectives, which makes their co-optimization difficult. Learning-based methods, such as Constrained Reinforcement Learning (CRL), achieve strong performance but lack formal safety guarantees due to safety being enforced as soft constraints, limiting their use in safety-critical settings. Conversely, formal methods such as Hamilton-Jacobi (HJ) Reachability Analysis and Control Barrier Functions (CBFs) provide rigorous safety assurances but often neglect performance, resulting in overly conservative controllers. To bridge this gap, we formulate the co-optimization of safety and performance as a state-constrained optimal control problem, where performance objectives are encoded via a cost function and safety requirements are imposed as state constraints. We demonstrate that the resultant value function satisfies a Hamilton-Jacobi-Bellman (HJB) equation, which we approximate efficiently using a novel physics-informed machine learning framework. In addition, we introduce a conformal prediction-based verification strategy to quantify the learning errors, recovering a high-confidence safety value function, along with a probabilistic error bound on performance degradation. Through several case studies, we demonstrate the efficacy of the proposed framework in enabling scalable learning of safe and performant controllers for complex, high-dimensional autonomous systems.
+
+# 1. Introduction
+
+Autonomous systems are becoming increasingly prevalent across various domains, from self-driving vehicles and robotic automation to aerospace and industrial applications. Designing control algorithms for these systems involves balancing two fundamental objectives: performance and safety. Ensuring high performance is essential for achieving efficiency and task objectives under practical constraints, such as fuel limitations or time restrictions. For instance, a warehouse humanoid robot navigating to a destination must optimize its route for efficiency. At the same time, safety remains paramount to prevent catastrophic accidents or system failures. These two objectives, however, often conflict, making it challenging to develop control strategies that achieve both effectively.
+
+A variety of data-driven approaches have been explored to integrate safety considerations into control synthesis. Constrained Reinforcement Learning (CRL) methods (Altman, 1999; Achiam et al., 2017) employ constrained optimization techniques to co-optimize safety and performance where performance is encoded as a reward function and safety is formulated as a constraint. These methods often incorporate safety constraints into the objective function, leading to only a soft imposition of the safety constraints. Moreover, such formulations typically minimize cumulative constraint violations rather than enforcing strict safety at all times, which can result in unsafe behaviors.
+
+Another class of methods involves safety filtering (Hsu et al., 2024), which ensures constraint satisfaction by modifying control outputs in real-time. Methods such as Control Barrier Function (CBF)-based quadratic programs (QP) (Ames et al., 2017) and Hamilton-Jacobi (HJ) Reachability filters (Borquez et al., 2024; Wabersich et al., 2023) act as corrective layers on top of a (potentially unsafe) nominal controller, making minimal interventions to enforce safety constraints. However, because these safety filters operate independently of the underlying performance-driven controller, they often lead to myopic and suboptimal decisions. Alternatively, online optimization-based methods, such as Model Predictive Control (MPC) (García et al., 1989; Grüne et al., 2017) and Model Predictive Path Integral (MPPI) (Williams et al., 2018; Streichenberg et al., 2023), can naturally integrate safety constraints while optimizing for a performance objective. These methods approximate infinite-horizon optimal control problems (OCPs) with a receding-horizon framework, enabling dynamic re-planning. While effective, solving constrained OCPs online remains computationally expensive, limiting their applicability for high-frequency control applications. The challenge is further exacerbated when dealing with nonlinear dynamics and nonconvex (safety) constraints, limiting the feasibility of these methods for ensuring safety and optimality for real-world systems.
+
+A more rigorous approach to addressing the trade-off between performance and safety is to formulate the problem as a state-constrained optimal control problem (SC-OCP), where safety is explicitly encoded as a hard constraint, while performance is expressed through a reward (or cost) function. While theoretically sound, characterizing the solutions of SC-OCPs is challenging unless certain controllability conditions hold (Soner, 1986). To address these challenges, (Altarovici et al., 2013) proposed an epigraph-based formulation, which characterizes the value function of an SC-OCP by computing its epigraph using dynamic programming, resulting in a Hamilton-Jacobi-Bellman Partial Differential Equation (HJB-PDE). The SC-OCP value function as well as the optimized policy are then recovered from this epigraph. However, dynamic programming suffers from the curse of dimensionality, making it impractical for high-dimensional systems with traditional numerical solvers (Mitchell, 2004; Wang et al., 2024). Furthermore, the epigraph formulation itself increases the problem's dimensionality, exacerbating computational complexity further. Many techniques for speeding up the computation of solutions to the HJB PDE put restrictions on the type of system allowed (Chow et al., 2017). However, solving the HJB PDE for general nonlinear systems remains a key challenge.
+
+Recent advances in Deep Learning have enabled the development of physics-informed machine learning approaches (Raissi et al., 2017; 2019b) for solving partial differential equations (PDEs) with neural networks. These methods have demonstrated notable effectiveness in addressing high-dimensional PDEs while ensuring that the learned solutions adhere to the governing physical laws. In particular, DeepReach (Bansal & Tomlin, 2021) proposes a framework for solving Hamilton-Jacobi-Bellman (HJB) PDEs in safety-critical settings using physics-informed machine learning. However, its exclusive focus on safety neglects performance considerations, resulting in overly conservative control strategies.
+
+In this work, we propose a novel algorithmic approach to co-optimize safety and performance for high-dimensional autonomous systems. Specifically, we formulate the problem as an SC-OCP and leverage the epigraph formulation in (Altarovici et al., 2013). To efficiently solve this epigraph formulation, we use physics-informed machine learning (Raissi et al., 2019a; Li et al., 2022) to learn a solution to the resultant HJB-PDE by minimizing PDE residuals. This enables us to efficiently scale epigraph computation to higher-dimensional autonomous systems, leading to safe and performant policies. To summarize, our main contributions are as follows:
+
+- We propose a novel Physics-Informed Machine Learning (PIML) framework to learn policies that co-optimize safety and performance for high-dimensional autonomous systems.
+- We introduce a conformal prediction-based safety verification strategy that provides high-confidence probabilistic safety guarantees for the learned policy, reducing the impact of learning errors on safety.
+- We propose a performance quantification framework that leverages conformal prediction to provide high-confidence probabilistic error bounds on performance degradation.
+- Across three case studies, we showcase the effectiveness of our proposed method in jointly optimizing safety and performance, while scaling to complex, high-dimensional systems.
+
+# 2. Problem Setup
+
+Consider a nonlinear dynamical system characterized by the state $x \in \mathcal{X} \subseteq \mathbb{R}^n$ and control input $u \in \mathcal{U} \subseteq \mathbb{R}^m$ , governed by the dynamics $\dot{x}(t) = f(x(t), u(t))$ , where the function $f: \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^n$ is locally Lipschitz continuous. In this work, we assume that the dynamics model $f$ is known; however, it can also be learned from data if unavailable.
+
+We are given a failure set $\mathcal{F} \subseteq \mathcal{X}$ that represents the set of unsafe states for the system (e.g., obstacles for an autonomous ground robot). The system's performance is quantified by the cost function $C(t, x, \mathbf{u})$ , given by:
+
+$$
+C (t, x (t), \mathbf {u}) = \int_ {s = t} ^ {T} l (x (s)) d s + \phi (x (T)), \tag {1}
+$$
+
+where $l: \mathcal{X} \to \mathbb{R}_{\geq 0}$ and $\phi: \mathcal{X} \to \mathbb{R}_{\geq 0}$ are Lipschitz continuous and non-negative functions, representing the running cost over the time horizon $[t, T)$ and the terminal cost at time $T$ , respectively. $\mathbf{u}: [t, T) \to \mathcal{U}$ is the control signal applied to the system. Using this premise, we define the main objective of this paper:
+
+Objective 1. We aim to synthesize an optimal policy $\pi^{*}:[t,T)\times \mathcal{X}\to \mathcal{U}$ that minimizes the cost function $C$ while ensuring that the system remains outside the failure set $\mathcal{F}$ at all times.
+
+# 2.1. State-Constrained Optimal Control Problem
+
+To achieve the stated objective, the first step is to encode the safety constraint via a function $g: \mathbb{R}^n \to \mathbb{R}$ such that $\mathcal{F} := \{x \in \mathcal{X} \mid g(x) > 0\}$ . Using this notation, the objective can be formulated as the following State-Constrained Optimal Control Problem (SC-OCP) to compute the value function $V$ :
+
+$$
+\begin{array}{l} V (t, x (t)) = \min _ {\mathbf {u}} \int_ {t} ^ {T} l (x (s)) d s + \phi (x (T)) \\ \text {s . t .} \dot {x} = f (x, u), \end{array} \tag {2}
+$$
+
+$$
+g (x (s)) \leq 0 \quad \forall s \in [ t, T ]
+$$
+
+This SC-OCP enhances the system's performance by minimizing the cost, while maintaining system safety through the state constraint, $g(x) \leq 0$ , ensuring that the system avoids the failure set, $\mathcal{F}$ . Thus, the policy, $\pi^{*}$ , derived from the solution of this SC-OCP co-optimizes safety and performance.
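+As a concrete illustration of the objects in (2), the following sketch (illustrative Python; the helper name and the left-Riemann discretization are our choices, not part of the paper) evaluates the cost $C$ and the constraint $\max_s g(x(s))$ along a sampled trajectory:
+
+```python
+def cost_and_constraint(xs, dt, l, phi, g):
+    """Evaluate the SC-OCP objective and state constraint of (2) on a
+    sampled trajectory xs = [x(t_0), ..., x(t_K)] with uniform step dt.
+
+    Returns (C, g_max): C approximates the integral of the running cost l
+    plus the terminal cost phi (left Riemann sum); the trajectory satisfies
+    the state constraint iff g_max <= 0."""
+    C = sum(l(x) for x in xs[:-1]) * dt + phi(xs[-1])
+    g_max = max(g(x) for x in xs)
+    return C, g_max
+
+# Toy 1D instance: the state slides from 1.0 to 0.0; the failure set is
+# F = {x : x < -0.5}, encoded as g(x) = -0.5 - x (positive inside F).
+xs = [1.0 - 0.1 * k for k in range(11)]
+C, g_max = cost_and_constraint(xs, dt=0.1,
+                               l=lambda x: x * x,     # running cost
+                               phi=lambda x: abs(x),  # terminal cost
+                               g=lambda x: -0.5 - x)  # safety constraint
+assert g_max <= 0  # the trajectory never enters the failure set
+```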
+
+# 2.2. Epigraph Reformulation
+
+Directly solving the SC-OCP in (2) presents significant challenges due to the presence of (hard) state constraints. To address this issue, we reformulate the problem in its epigraph form (Boyd & Vandenberghe, 2004), which transforms the constrained optimization into a more tractable two-stage optimization problem. This reformulation allows us to efficiently obtain a solution to the SC-OCP in (2). The resulting formulation is given by:
+
+$$
+V (t, x (t)) = \min _ {z \in \mathbb {R} ^ {+}} z \tag {3}
+$$
+
+$$
+\begin{array}{l} \text {s . t .} \hat {V} (t, x, z) \leq 0, \end{array}
+$$
+
+where $z$ is a non-negative auxiliary optimization variable, and $\hat{V}$ represents the auxiliary value function. Here, $\hat{V}$ is defined as (Altarovici et al., 2013):
+
+$$
+\hat {V} (t, x (t), z) = \min _ {\mathbf {u}} \max \left\{C (t, x (t), \mathbf {u}) - z, \max _ {s \in [ t, T ]} g (x (s)) \right\}. \tag {4}
+$$
+
+Note that if $\hat{V}(t,x,z) < 0$ , it implies that $g(x(s)) < 0$ for all $s \in [t,T]$ . In other words, the system must be outside the failure set at all times; therefore, the system is guaranteed to be safe whenever $\hat{V}(t,x,z) < 0$ .
+
+In this reformulated problem, state constraints are effectively eliminated, enabling the use of dynamic programming to characterize the value function, as we explain later in this section. Intuitively, the optimal $z$, denoted $z^{*}$, can be thought of as the minimum permissible cost the policy can incur without compromising on safety. From Equation (3), it can be inferred that if $z > z^{*}$, the safety constraint dominates in the max term, resulting in a conservative policy. Conversely, if $z < z^{*}$, the performance objective takes precedence, leading to a potentially aggressive policy that might compromise safety.
+
+Furthermore, to facilitate solving the epigraph reformulation, $z$ can be treated as a state variable, with its dynamics given by $\dot{z}(t) = -l(x(t))$ . This implies that as the trajectory progresses over time, the minimum permissible cost, $z$ , decreases by the step cost $l(x)$ at each time step. This allows us to define an augmented system that evolves according to the following dynamics:
+
+$$
+\dot {\hat {x}} = \hat {f} (\hat {x}, u) := \left[ \begin{array}{c} f (x, u) \\ - l (x) \end{array} \right], \tag {5}
+$$
+
+where $\hat{x} \coloneqq [x,z]^T$ represents the augmented state. With the augmented state representation and under assumptions A1-A4 of (Altarovici et al., 2013), the auxiliary value function $\hat{V}(t,x(t),z(t))$ is a unique continuous viscosity solution satisfying the following Hamilton-Jacobi-Bellman (HJB) PDE:
+
+$$
+\min \left(- \partial_ {t} \hat {V} - \min _ {\mathbf {u}} \langle \nabla_ {\hat {x}} \hat {V} (t, \hat {x}), \hat {f} (\hat {x}, u) \rangle , \hat {V} - g (x)\right) = 0, \tag {6}
+$$
+
+$\forall t \in [0, T)$ and $\hat{x} \in \mathcal{X} \times \mathbb{R}$ , where $\langle \cdot, \cdot \rangle$ denotes the dot product of vectors. The boundary condition for the PDE is given by:
+
+$$
+\hat {V} (T, \hat {x}) = \max (\phi (x (T)) - z, g (x)), \quad \hat {x} \in \mathcal {X} \times \mathbb {R}. \tag {7}
+$$
+
+Note that by a slight abuse of notation, we have replaced the arguments $x,z$ of $\hat{V}$ with the augmented state $\hat{x}$.
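+To build intuition for the augmented state, the sketch below (illustrative Python with Euler integration; all names are our own) rolls out the dynamics (5) and checks numerically that $z$ decreases by exactly the running cost incurred so far, i.e., $z(T) = z(t) - \int_t^T l(x(s))\,ds$:
+
+```python
+def rollout_augmented(x0, z0, u_seq, f, l, dt):
+    """Euler rollout of the augmented dynamics (5):
+    xdot = f(x, u), zdot = -l(x).
+    Returns the final state, the final budget z, and the accumulated
+    running cost."""
+    x, z, run_cost = x0, z0, 0.0
+    for u in u_seq:
+        lx = l(x)
+        x = x + dt * f(x, u)   # system dynamics
+        z = z - dt * lx        # budget shrinks by the step cost
+        run_cost += dt * lx
+    return x, z, run_cost
+
+# Toy scalar system: xdot = u, running cost l(x) = x^2, initial budget 5.
+x, z, run_cost = rollout_augmented(x0=1.0, z0=5.0, u_seq=[-1.0] * 100,
+                                   f=lambda x, u: u,
+                                   l=lambda x: x * x, dt=0.01)
+# z has been depleted by (numerically) exactly the cost incurred so far:
+assert abs((5.0 - z) - run_cost) < 1e-9
+```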
+
+# 3. Methodology
+
+To solve the SC-OCP in Equation (2), we aim to compute the optimal value function $V$ , which minimizes the cost while ensuring system safety. In this section, we outline a structured approach: first, we learn the auxiliary value function $\hat{V}$ using a physics-informed machine learning framework. Then, we apply a conformal prediction-based method to verify safety and correct for potential learning errors in $\hat{V}$ . The final value function $V$ is obtained from the safety-corrected $\hat{V}$ using the epigraph formulation in (3). Lastly, we assess the performance of $V$ through a second conformal prediction procedure. Figure 1 gives an overview of the proposed approach. The following subsections provide a detailed explanation of each step, beginning with the methodology for learning $\hat{V}$ .
+
+# 3.1. Training the Auxiliary Value Function $(\hat{V})$
+
+Figure 1. Overview of the proposed approach: The methodology is organized into four steps. The first step involves training the auxiliary value function, $\hat{V}_{\theta}$, using a physics-informed machine learning framework. The second step applies a conformal prediction approach for safety verification of the learned $\hat{V}_{\theta}$. In the third step, the final value function $V_{\theta}$ and the optimal safe and performant policy $\pi_{\theta}$ are inferred. The fourth step quantifies the performance of $V_{\theta}$ through a second conformal prediction procedure.
+
+The auxiliary value function, $\hat{V}$, satisfies the HJB-PDE in Equation (6), as discussed in Section 2.2. Traditionally, numerical methods are used to solve the HJB-PDE over a grid representation of the state space (Mitchell, 2004; Schmerling, 2021), where time and spatial derivatives are approximated numerically. While grid-based methods are accurate for low-dimensional problems, they struggle with the curse of dimensionality – their computational complexity increases exponentially with the number of states – limiting their use in high-dimensional systems. To address this, we adopt a physics-informed machine learning framework, inspired by (Bansal & Tomlin, 2021), which has proven effective for high-dimensional reachability problems.
+
+The solution of the HJB-PDE inherently evolves backward in time, as the value function at time $t$ is determined by its value at $t + \Delta t$ . To facilitate neural network training, we use a curriculum learning strategy, progressively expanding the time sampling interval from the terminal time $[T, T]$ to the full time horizon $[0, T]$ . This approach allows the neural network to first accurately learn the value function from the terminal boundary conditions, subsequently propagating the solution backward in time by leveraging the structure of the HJB-PDE.
+
+Specifically, the auxiliary value function is approximated by a neural network, $\hat{V}_{\theta}$ , where $\theta$ denotes the trainable parameters of the network. Training samples, $(t_k,x_k,z_k)_{k = 1}^N$ , are randomly drawn from the state space based on the curriculum training scheme. The proposed learning framework utilizes a loss function that enforces two primary objectives: (i) compliance with the PDE in (6), using the PDE residual error given by:
+
+$$
+\begin{array}{r} \mathcal {L} _ {p d e} (t _ {k}, \hat {x} _ {k} | \theta) = \| \min \left\{- \partial_ {t} \hat {V} _ {\theta} (t _ {k}, \hat {x} _ {k}) - H (t _ {k}, \hat {x} _ {k}), \right. \\ \left. \hat {V} _ {\theta} (t _ {k}, \hat {x} _ {k}) - g (x _ {k}) \right\} \|, \end{array} \tag {8}
+$$
+
+where $H(t,\hat{x}) = \min_{u\in \mathcal{U}}\langle \nabla \hat{V}_{\theta}(t,\hat{x}),\hat{f} (\hat{x},u)\rangle$ and (ii) satisfaction of the boundary condition in (7), using boundary condition loss, given by:
+
+$$
+\begin{array}{r l} \mathcal {L} _ {b c} (t _ {k}, \hat {x} _ {k} | \theta) & = \left\| \max (\phi (x _ {k}) - z _ {k}, g (x _ {k})) - \right. \\ \left. \hat {V} _ {\theta} (t _ {k}, \hat {x} _ {k}) \right\| \mathbb {1} (t _ {k} = T). \end{array} \tag {9}
+$$
+
+These terms are balanced by a trade-off parameter $\lambda$ , leading to the overall loss function:
+
+$$
+\mathcal {L} \left(t _ {k}, \hat {x} _ {k} | \theta\right) = \mathcal {L} _ {p d e} \left(t _ {k}, \hat {x} _ {k} | \theta\right) + \lambda \mathcal {L} _ {b c} \left(t _ {k}, \hat {x} _ {k} | \theta\right) \tag {10}
+$$
+
+Furthermore, we use the adaptive loss re-balancing scheme proposed in (Wang et al., 2021) to reduce the impact of $\lambda$ on the learned value function. Minimizing the overall loss function provides a self-supervised learning mechanism to approximate the auxiliary value function.
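+The residual (8) can be probed pointwise without any learning machinery. In the sketch below (illustrative Python; the paper differentiates a neural network with automatic differentiation, which we replace here by central finite differences and a finite control grid), `pde_residual` evaluates the loss at a single collocation point:
+
+```python
+def pde_residual(Vhat, t, xh, f_aug, g, controls, h=1e-4):
+    """Pointwise residual (8) of the HJB-PDE (6) for a candidate value
+    function Vhat(t, xh) -> float, where xh is the augmented state [x, z].
+    f_aug(xh, u) -> list gives the augmented dynamics (5); the Hamiltonian
+    minimum over u is approximated on the finite set `controls`, and all
+    derivatives use central finite differences with step h."""
+    dVdt = (Vhat(t + h, xh) - Vhat(t - h, xh)) / (2 * h)
+    grad = []
+    for i in range(len(xh)):
+        xp, xm = list(xh), list(xh)
+        xp[i] += h
+        xm[i] -= h
+        grad.append((Vhat(t, xp) - Vhat(t, xm)) / (2 * h))
+    H = min(sum(gi * fi for gi, fi in zip(grad, f_aug(xh, u)))
+            for u in controls)
+    # Loss (8) is the magnitude of the min of the two PDE branches.
+    return abs(min(-dVdt - H, Vhat(t, xh) - g(xh)))
+
+# Probe an arbitrary candidate Vhat for a toy system xdot = u, l(x) = x^2:
+res = pde_residual(
+    Vhat=lambda t, xh: xh[0] - xh[1],         # arbitrary candidate value
+    t=0.5, xh=[1.0, 2.0],
+    f_aug=lambda xh, u: [u, -xh[0] ** 2],     # augmented dynamics (5)
+    g=lambda xh: xh[0] - 10.0,                # constraint on the x-component
+    controls=[-1.0, 0.0, 1.0])
+assert res >= 0.0  # the loss (8) is a magnitude
+```
+
+Training drives this residual (summed with the boundary loss (9)) toward zero over sampled collocation points.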
+
+# 3.2. Safety Verification
+
+The learned auxiliary value function, $\hat{V}_{\theta}$ , induces a policy, $\hat{\pi}_{\theta}$ , that minimizes the Hamiltonian term $H(t,\hat{x})$ in the HJB-PDE. The policy is given by:
+
+$$
+\hat {\pi} _ {\theta} (t, \hat {x}) = \arg \min _ {u \in \mathcal {U}} \langle \nabla \hat {V} _ {\theta} (t, \hat {x}), \hat {f} (\hat {x}, u) \rangle . \tag {11}
+$$
+
+The rollout cost corresponding to this policy is defined as:
+
+$$
+\hat {V} _ {\hat {\pi} _ {\theta}} (t, \hat {x}) = \max \left\{C (t, x (t), \mathbf {u}) - z, \max _ {s \in [ t, T ]} g (x (s)) \right\} \Big | _ {\mathbf {u} = \hat {\pi} _ {\theta}} \tag {12}
+$$
+
+Ideally, the rollout cost from a given state under $\hat{\pi}_{\theta}$ should match the value of the auxiliary value function at that state.
+
+Algorithm 1 Safety Verification using Conformal Prediction
+Require: $S, N_s, \beta_s, \epsilon_s, \hat{V}_{\theta}(\hat{x}, 0), \hat{V}_{\hat{\pi}_{\theta}}(\hat{x}, 0), M$ (number of candidate $\delta$-levels to search)
+1: $D_0 \gets$ Sample $N_s$ IID states from $S_{\delta=0}$
+2: $\delta_0 \gets \min_{\hat{x}_j \in D_0} \{\hat{V}_{\theta}(0, \hat{x}_j): \hat{V}_{\hat{\pi}_{\theta}}(0, \hat{x}_j) \geq 0\}$
+3: $\epsilon_0 \gets (14)$ (using $\alpha_{\delta=0}$ )
+4: $\Delta \gets$ Ordered list of $M$ uniform samples from $[\delta_0, 0]$
+5: for $i = 0, 1, \ldots, M-1$ do
+6: while $\epsilon_i \leq \epsilon_s$ do
+7: $\delta_i \gets \Delta_i$
+8: Update $\alpha_{\delta_i}$ from $\delta_i$
+9: $\epsilon_i \gets (14)$ (using $\alpha_{\delta_i}$ )
+10: end while
+11: end for
+12: return $\delta \gets \delta_i$
+
+However, due to learning inaccuracies, discrepancies can arise. This becomes critical when a state, $\hat{x}_i$ , is deemed safe by the auxiliary value function $(\hat{V}_{\theta}(t,\hat{x})\leq 0)$ but is unsafe under the induced policy $(\hat{V}_{\hat{\pi}_{\theta}}(t,\hat{x}) > 0)$ . To address this, we introduce a uniform value function correction margin, $\delta$ , which guarantees that the sub- $\delta$ level set of the auxiliary value function remains safe under the induced policy. Mathematically, the optimal $\delta$ ( $\delta^{*}$ ) can be expressed as:
+
+$$
+\delta^ {*} := \min _ {\hat {x} \in \mathcal {X}} \left\{\hat {V} _ {\theta} (0, \hat {x}): \hat {V} _ {\hat {\pi} _ {\theta}} (0, \hat {x}) \geq 0 \right\} \tag {13}
+$$
+
+Intuitively, $\delta^{*}$ identifies the tightest level of the value function that separates safe states under $\hat{\pi}_{\theta}$ from unsafe ones. Hence, any initial state within the sub- $\delta^{*}$ level set is guaranteed to be safe under the induced policy, $\hat{\pi}_{\theta}$ . However, calculating $\delta^{*}$ exactly requires infinitely many state-space points. To overcome this, we adopt a conformal-prediction-based approach to approximate $\delta^{*}$ using a finite number of samples, providing a probabilistic safety guarantee. The following theorem formalizes our approach:
+
+Theorem 3.1 (Safety Verification Using Conformal Prediction). Let $S_{\delta}$ be the set of states satisfying $\hat{V}_{\theta}(0, \hat{x}) \leq \delta$ , and let $(0, \hat{x}_i)_{i=1,\dots,N_s}$ be $N_s$ i.i.d. samples from $S_{\delta}$ . Define $\alpha_{\delta}$ as the safety error rate among these $N_s$ samples for a given $\delta$ level. Select a safety violation parameter $\epsilon_s \in (0,1)$ and a confidence parameter $\beta_s \in (0,1)$ such that:
+
+$$
+\sum_ {i = 0} ^ {l - 1} \binom {N _ {s}} {i} \epsilon_ {s} ^ {i} (1 - \epsilon_ {s}) ^ {N _ {s} - i} \leq \beta_ {s}, \tag {14}
+$$
+
+where $l = \lfloor (N_s + 1)\alpha_\delta \rfloor$. Then, with probability at least $1 - \beta_{s}$, the following holds:
+
+$$
+\underset {\hat {x} \in \mathcal {S} _ {\delta}} {\mathbb {P}} \left(\hat {V} \left(0, \hat {x}\right) \leq 0\right) \geq 1 - \epsilon_ {s}. \tag {15}
+$$
+
+The proof is available in Appendix A.1. The safety error rate $\alpha_{\delta}$ is defined as the fraction of samples satisfying $\hat{V}_{\theta} \leq \delta$ and $\hat{V}_{\hat{\pi}_{\theta}} \geq 0$ out of the total $N_{s}$ samples.
+
+Algorithm 2 Performance Quantification using Conformal Prediction
+Require: $S^{*}, N_{p}, \beta_{p}, \epsilon_{p}, V_{\theta}(x, 0), V_{\pi_{\theta}}(x, 0)$
+1: $D \gets$ Sample $N_{p}$ IID states from $S^{*}$
+2: for $i = 0, 1, \dots, N_p - 1$ do
+3: $P_{i} \gets p_{i}(0, D)$
+4: end for
+5: $P \gets P$ sorted in decreasing order
+6: $\alpha_{p} \gets \frac{1}{N_{p} + 1}$, $\psi_0 \gets P_0$, $\epsilon_0 \gets (18)$
+7: for $i = 0, 1, \ldots, N_p - 1$ do
+8: while $\epsilon_i \leq \epsilon_p$ do
+9: $\alpha_{p} \gets \frac{i + 1}{N_{p} + 1}$, $\psi_{i} \gets P_{i}$, $\epsilon_{i} \gets (18)$
+10: end while
+11: end for
+12: return $\psi \gets \psi_{i}$
+
+Algorithm 1 presents the steps to calculate $\delta$ using the approach proposed in this theorem.
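+Since the pseudocode above is terse, one plausible reading of it, operating on precomputed value/rollout samples, is sketched below (illustrative Python, not the paper's code; the grid sweep, the strict sub-level test, and the helper names `tail_ok` and `verify_delta` are our assumptions):
+
+```python
+from math import comb, floor
+
+def tail_ok(N, alpha, eps, beta):
+    """Condition (14) of Theorem 3.1 with l = floor((N + 1) * alpha)."""
+    l = floor((N + 1) * alpha)
+    return sum(comb(N, i) * eps**i * (1 - eps)**(N - i)
+               for i in range(l)) <= beta
+
+def verify_delta(v_theta, v_pi, eps_s, beta_s, M=20):
+    """Sweep candidate levels over [delta_0, 0], where delta_0 estimates
+    (13) on the samples, and return the largest delta whose empirical
+    violation rate alpha_delta still passes the binomial condition (14).
+
+    v_theta[i]: learned value  Vhat_theta(0, xhat_i)
+    v_pi[i]:    rollout value under the induced policy at xhat_i"""
+    unsafe = [vt for vt, vp in zip(v_theta, v_pi) if vp >= 0]
+    delta0 = min(unsafe) if unsafe else 0.0
+    best = delta0
+    for k in range(M + 1):
+        delta = delta0 + (0.0 - delta0) * k / M
+        # Strict sub-delta level set, matching the minimum in (13).
+        inside = [(vt, vp) for vt, vp in zip(v_theta, v_pi) if vt < delta]
+        if not inside:
+            continue
+        alpha = sum(1 for _, vp in inside if vp >= 0) / len(inside)
+        if tail_ok(len(inside), alpha, eps_s, beta_s):
+            best = max(best, delta)
+    return best
+
+# Four clearly safe samples plus one boundary state that is unsafe under
+# the induced policy: the sweep settles on delta = -0.05.
+delta = verify_delta([-0.5, -0.4, -0.3, -0.2, -0.05],
+                     [-1.0, -1.0, -1.0, -1.0, 0.1],
+                     eps_s=0.001, beta_s=1e-10)
+assert abs(delta + 0.05) < 1e-12
+```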
+
+# 3.3. Obtaining Safe and Performant Value Function and Policy from $\hat{V}_{\theta}$
+
+Using the $\delta$ -level estimate from Algorithm (1), we can finally obtain the safe and performant value function, $V_{\theta}(t,x)$ by solving the following epigraph optimization problem:
+
+$$
+V _ {\theta} (t, x) = \min _ {z \in \mathbb {R} ^ {+}} z \tag {16}
+$$
+
+$$
+\begin{array}{l} \text {s . t .} \hat {V} _ {\theta} (t, x, z) \leq \delta . \end{array}
+$$
+
+Note that $V_{\theta}(t,x)$ is trivially $\infty$ for states where $\hat{V}_{\theta}(t,x,z) > \delta$ for all $z \in \mathbb{R}^{+}$, since such states are unsafe and hence the constraint in (16) cannot be satisfied.
+
+In practice, we solve this optimization problem using a binary search over $z$. The resulting optimal state-feedback control policy, $\pi_{\theta} : [t,T) \times \mathcal{X} \to \mathcal{U}$, satisfying Objective 1, is given by:
+
+$$
+\pi_ {\theta} (t, x) = \arg \min _ {u} \langle \nabla \hat {V} _ {\theta} (t, \hat {x} ^ {*}), \hat {f} (\hat {x} ^ {*}, u) \rangle , \tag {17}
+$$
+
+where $\hat{x}^*$ is the augmented state associated with the optimal $z^{*}$ obtained by solving (16), i.e., $\hat{x}^{*} = [x,z^{*}]^{T}$ . Intuitively, we can expect $\pi_{\theta}$ to learn behaviors that best tradeoff the safety and performance of the system.
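+The binary search exploits that $\hat{V}_{\theta}(t,x,z)$ is non-increasing in $z$ (a larger budget $z$ only decreases the $C - z$ term in (4)), so the feasible set of (16) is a half-line $[z^{*}, \infty)$. A minimal sketch (illustrative Python; `z_max` is an assumed upper bound on the achievable cost):
+
+```python
+def value_from_epigraph(Vhat, t, x, delta, z_max=100.0, tol=1e-6):
+    """Solve (16) by bisection. Since Vhat(t, x, z) is non-increasing in z,
+    the feasible budgets {z : Vhat <= delta} form an interval [z*, inf);
+    we search for its left endpoint. Returns float('inf') when even z_max
+    is infeasible, i.e. the state is deemed unsafe."""
+    if Vhat(t, x, z_max) > delta:
+        return float('inf')
+    lo, hi = 0.0, z_max
+    while hi - lo > tol:
+        mid = 0.5 * (lo + hi)
+        if Vhat(t, x, mid) <= delta:
+            hi = mid   # mid is feasible; the optimum is at or below mid
+        else:
+            lo = mid   # mid is infeasible; the optimum lies above mid
+    return hi
+
+# Toy auxiliary value Vhat = 3 - z (cost 3, no active safety term):
+# the smallest feasible budget, and hence the value V, is z* = 3.
+z_star = value_from_epigraph(lambda t, x, z: 3.0 - z, 0.0, None, delta=0.0)
+assert abs(z_star - 3.0) < 1e-4
+```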
+
+# 3.4. Performance Quantification
+
+In general, learning inaccuracies in the auxiliary value function $\hat{V}_{\theta}$ may lead to errors in the value function $V_{\theta}$. These errors, in turn, can lead to performance degradation under policy $\pi_{\theta}$.
+
+Figure 2. This figure presents a comparative study between all the methods based on our evaluation metrics. The top plot illustrates the mean percentage increase in cumulative cost relative to our method for each baseline, demonstrating that our approach consistently incurs lower costs, with the gap widening as system complexity grows. The bottom plot depicts the safety rates, showing that our method maintains a $100\%$ safety rate, while baselines that encourage safety rather than enforcing it (like MPPI and C-SAC) achieve lower rates. MPPI-CBF also attains $100\%$ safety but at the expense of performance. Overall, our method uniquely balances both safety and performance, whereas the baselines compromise on at least one aspect.
+
+To quantify this degradation, we propose a conformal prediction-based performance quantification method that provides a probabilistic upper bound on the error between the value function and the value obtained from the induced policy. The following theorem formalizes our approach:
+
+Theorem 3.2 (Performance Quantification Using Conformal Prediction). Suppose $S^*$ denotes the set of safe states satisfying $V_{\theta}(0,x) < \infty$ (or equivalently $\hat{V}_{\theta}(0,\hat{x}^{*}) < \delta$ ) and $(0,x_{i})_{i = 1,\dots ,N_{p}}$ are $N_{p}$ i.i.d. samples from $S^*$ . For a user-specified level $\alpha_{p}$ , let $\psi$ be the $\frac{\lceil(N_p + 1)(1 - \alpha_p)\rceil}{N_p}$-th quantile of the scores $(p_i := \frac{|V_\theta(0,x_i) - V_{\pi_\theta}(0,x_i)|}{C_{max}})_{i = 1,\dots ,N_p}$ on the $N_{p}$ state samples. Select a violation parameter $\epsilon_{p}\in (0,1)$ and a confidence parameter $\beta_{p}\in (0,1)$ such that:
+
+$$
+\sum_ {i = 0} ^ {l - 1} \binom {N _ {p}} {i} \epsilon_ {p} ^ {i} (1 - \epsilon_ {p}) ^ {N _ {p} - i} \leq \beta_ {p} \tag {18}
+$$
+
+where $l = \lfloor (N_p + 1)\alpha_p\rfloor$. Then, with probability at least $1 - \beta_{p}$, the following holds:
+
+$$
+\underset {x \in \mathcal {S} ^ {*}} {\mathbb {P}} \left(\frac {\left| V _ {\theta} \left(0 , x\right) - V _ {\pi_ {\theta}} \left(0 , x\right) \right|}{C _ {m a x}} \leq \psi\right) \geq 1 - \epsilon_ {p}, \tag {19}
+$$
+
+where $C_{max}$ is a normalizing factor and denotes the maximum possible cost that could be incurred for any $x \in S^{*}$ .
+
+The proof is available in Appendix A.2. Note that $C_{max}$ can be easily obtained by computing an upper bound on the cost function $C(t, x(t), \mathbf{u})$ over all $x \in S^{*}$.
+
+Intuitively, the performance of the resultant policy is the best when the $\psi$ value approaches 0, while the worst performance occurs at $\psi = 1$ . Algorithm 2 presents the steps to calculate $\psi$ using the approach proposed in this theorem.
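+The quantile in Theorem 3.2 is the standard split-conformal one. A minimal sketch (illustrative Python; names are our own) computes $\psi$ from precomputed value/rollout pairs:
+
+```python
+from math import ceil
+
+def conformal_psi(v_theta, v_pi, c_max, alpha_p):
+    """Compute psi of Theorem 3.2: the ceil((N_p + 1)(1 - alpha_p))-th
+    smallest normalized score p_i = |V_theta - V_pi| / C_max.
+    When the required rank exceeds N_p we clamp to the largest score."""
+    scores = sorted(abs(a - b) / c_max for a, b in zip(v_theta, v_pi))
+    k = ceil((len(scores) + 1) * (1 - alpha_p))
+    return scores[min(k, len(scores)) - 1]
+
+# With 9 scores 0.1, ..., 0.9 and alpha_p = 0.5: k = ceil(10 * 0.5) = 5,
+# so psi is the 5th smallest score, 0.5.
+vals = [i / 10 for i in range(1, 10)]
+psi = conformal_psi(vals, [0.0] * 9, c_max=1.0, alpha_p=0.5)
+assert abs(psi - 0.5) < 1e-12
+```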
+
+# 4. Experiments
+
+The objective of this paper is to demonstrate the co-optimization of performance and safety. To achieve this, we evaluate the proposed method and compare it with the baselines using three metrics: (1) Cumulative Cost: This metric represents the total cost $\int_0^T l(x(s))ds + \phi (x(T))$ , accumulated by a policy over the safe trajectories. (2) Safety Rate: This metric is defined as the percentage of trajectories that remain safe, i.e., never enter the failure region $\mathcal{F}$ at any point in time. (3) Computation Time: This metric compares the offline and online computation times of our method and the baselines.
+
+Baselines: We consider two categories of baselines: the first set of methods aims to enhance system performance (i.e., minimize the cumulative cost) while encouraging safety, encompassing Lagrangian-based constrained reinforcement learning (CRL) algorithms such as SAC-Lagrangian (SAC-Lag) and PPO-Lagrangian (PPO-Lag) (Ray et al., 2019; Ji et al., 2024), as well as the Model Predictive Path Integral (MPPI) (Williams et al., 2018) algorithm. The second category prioritizes safety, potentially at the cost of performance. This includes Constrained Policy Optimization (CPO) (Achiam et al., 2017) and safety filtering techniques such as Control Barrier Function (CBF)-based quadratic programs (QP) (Ames et al., 2017) that modify a nominal, potentially unsafe controller to satisfy the safety constraint.
+
+# 4.1. Efficient and Safe Boat Navigation
+
+Figure 3. Trajectories from two distinct initial states are shown, with dark grey circles representing obstacles and the green dot indicating the goal at $[1.5,0]^T$. Notably, our method is the only one that successfully approaches the goal while adhering to safety constraints.
+
+In our first experiment, we consider a 2D autonomous boat navigation problem, where a boat with coordinates $(x_{b},y_{b})$ navigates a river with state-dependent drift to reach an island. The boat must avoid two circular boulders (obstacles) of different radii, which corresponds to the safety constraint in the system (see Fig. 3). The cost function penalizes the distance to the goal. The system state, $x$, evolves according to the dynamics:
+
+$$
+x = \left[ x _ {b}, y _ {b} \right], \quad \dot {x} = \left[ u _ {1} + 2 - 0. 5 y _ {b} ^ {2}, u _ {2} \right] \tag {20}
+$$
+
+where $[u_1, u_2]$ are the bounded control inputs in the $x_b$ and $y_b$ directions, constrained by the control space $\mathcal{U} = \{[u_1, u_2] \in \mathbb{R}^2 \mid \|[u_1, u_2]\| \leq 1\}$. The term $2 - 0.5y_b^2$ introduces a state-dependent drift, complicating the control task as the actions must counteract the drift while ensuring safety, which is challenging under bounded control inputs. The rest of the details about the experiment setup can be found in Appendix B.1.
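+For concreteness, the dynamics (20) and the unit-norm control constraint can be simulated in a few lines (illustrative Python; Euler integration and the step size are our choices). The drift reaches 2 at mid-river ($y_b = 0$) while $\|u\| \leq 1$, so no admissible control can hold the boat in place there:
+
+```python
+import math
+
+def boat_step(x, u, dt=0.05):
+    """One Euler step of the boat dynamics (20). The control is first
+    projected onto the unit ball U = {u : ||u|| <= 1}; the drift term
+    2 - 0.5 * y_b^2 pushes the boat downstream fastest at mid-river."""
+    norm = math.hypot(u[0], u[1])
+    if norm > 1.0:
+        u = (u[0] / norm, u[1] / norm)
+    xb, yb = x
+    return (xb + dt * (u[0] + 2.0 - 0.5 * yb**2), yb + dt * u[1])
+
+# With zero control at y_b = 0, the drift alone carries the boat
+# 2 units/s downstream: after 1 s it has moved from x_b = 0 to x_b = 2.
+state = (0.0, 0.0)
+for _ in range(20):            # 20 steps of 0.05 s
+    state = boat_step(state, (0.0, 0.0))
+assert abs(state[0] - 2.0) < 1e-9 and state[1] == 0.0
+```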
+
+Safety Guarantees and Performance Quantification: We use $N_{s} = 300K$ and $N_{p} = 300K$ samples for thorough verification, ensuring dense state space sampling. For this experiment, we set $\epsilon_{s} = 0.001$ and $\beta_{s} = 10^{-10}$, resulting in a $\delta$-level of 0. This implies that, with $1 - 10^{-10}$ confidence, any state with $\hat{V}_{\theta}(t,x,z) \leq 0$ is safe with at least $99.9\%$ probability. For performance quantification, we set $\epsilon_{p} = 0.01$ and $\beta_{p} = 10^{-10}$, leading to a $\psi$-level of 0.136. This ensures, with $1 - 10^{-10}$ confidence, that any state in $S^{*}$ has a normalized error between the predicted value and the policy value of less than 0.136 with $99\%$ probability. Low $\delta$ and $\psi$ values with high confidence indicate that the learned policy closely approximates the optimal policy and successfully co-optimizes safety and performance.
+
+Baselines: This being a 2-dimensional system, we compare our method with the ground truth value function computed by solving the HJB-PDE numerically using the Level Set Toolbox (Mitchell, 2004) (results in Appendix B.1.1). Additional baselines include: (1) MPPI, a sample-based path-planning algorithm with safety as soft constraints, (2) MPPI-NCBF, where safety is enforced using a Neural CBF-based QP with MPPI as the nominal controller (Dawson et al., 2022; Tayal et al., 2024b), and (3) Constrained RL methods like SAC-Lag, PPO-Lag, and CPO.
+
+Figure 4. Trajectories from two distinct initial states are depicted, with dark grey circles representing obstacles and purple trajectories indicating the evader's path, with arrows showing its direction of motion. Our method successfully tracks the evader while avoiding collisions, whereas all other methods either fail to maintain safety, struggle to track the evader, or both.
+
+Comparative Analysis: Figure 3 shows that our method effectively reaches the goal while avoiding obstacles, even when starting close to them. In contrast, MPPI and CRL-based policies fail to maintain safety, while MPPI-NCBF ensures safety but performs poorly (leading to very slow trajectories). Figure 2 highlights that our method outperforms all others. SAC-Lag attains a mean cost that is $7.5\%$ higher than ours, while exhibiting the lowest safety rate at $76\%$ . The remaining CRL methods display comparable trends, highlighting their inability to jointly optimize for safety and performance. MPPI, with a more competitive safety rate of $89\%$ , performs poorly with a $32.67\%$ higher mean cost. MPPI-NCBF achieves $100\%$ safety but performs significantly worse, with a $50.72\%$ higher mean cost. Additionally, CBF-based controllers sometimes violate control bounds, limiting their applicability. This demonstrates that our method balances safety and performance, unlike others that compromise on one aspect. Moreover, the $100\%$ safety rate of our method aligns closely with the at least $99.9\%$ safety level that we expect from our proposed verification strategy, providing empirical validation of the safety assurances.
+
+# 4.2. Pursuer Vehicle tracking a moving Evader
+
+Figure 5. Snapshots of multi-agent navigation trajectories at different times using the proposed method. Agents are represented as circles with radius $R$, indicating the minimum safe distance they must maintain from each other. Smaller dots mark their respective goals. The trajectories show that agents proactively maintain long-horizon safety by adjusting their paths to avoid close encounters, rather than enforcing safety reactively, which could lead to suboptimal behaviors. Finally, the agents reach their respective goals within the specified time horizon.
+
+In our second experiment, we consider an acceleration-driven pursuer vehicle, tracking a moving evader while avoiding five circular obstacles (see Fig. 4). This experiment involves an 8-dimensional system, with the state $x$ defined as $x = [x_p, y_p, v, \Theta, x_e, y_e, v_{xe}, v_{ye}]^T$, where $x_p, y_p, v, \Theta$ represent the coordinates, linear velocity, and orientation of the pursuer vehicle, respectively, and $x_e, y_e, v_{xe}, v_{ye}$ represent the coordinates and linear velocities of the evader vehicle. The pursuer vehicle is controlled by linear acceleration $(u_1)$ and angular velocity $(u_2)$. The control space is $\mathcal{U} = \{[u_1, u_2] \in [-2, 2]^2\}$. The complexity of this system stems from the dynamic nature of the goal, along with the challenge of ensuring safety in a cluttered environment, which in itself is a difficult safety problem. More details about the experiment setup are in Appendix B.2.
+
+Similar to the previous experiment, we set $N_{s} = N_{p} = 300k$. We choose $\epsilon_{s} = 0.01$ and $\beta_{s} = 10^{-10}$, yielding a $\delta$-level of $-0.04$ and a safety level of $99\%$ on the auxiliary value function. For performance, we set $\epsilon_{p} = 0.01$ and $\beta_{p} = 10^{-10}$, leading to a $\psi$-level of 0.137. These values indicate that the learned policy maintains high safety with low performance degradation in this cluttered environment.
+
+Baselines: As in the previous experiment, we employ MPPI and CRL methods (SAC-Lag, PPO-Lag, and CPO). For safety filtering, we utilize a QP based on the collision cone CBF (C3BF) (Goswami et al., 2024), chosen for its effectiveness in managing acceleration-driven systems.
+
+Comparative Analysis: Figure 4 shows that our method effectively tracks the moving evader while avoiding obstacles, even when starting close to them. In contrast, other methods have limitations: MPPI and CRL methods attempt to follow the evader but fail to maintain their pace, violating safety constraints, while MPPI-C3BF sacrifices performance to maintain safety. Figure 2 highlights our method's superior performance in balancing safety and performance. MPPI achieves the best performance among the baselines but with an $18\%$ higher mean cost and only a $72\%$ safety rate. MPPI-NCBF ensures $100\%$ safety but has a $42\%$ higher mean cost. SAC-Lag underperforms both in safety ($66\%$ safety rate) and performance ($101\%$ higher mean cost). A similar trend is evident across all other CRL methods, indicating their difficulty in co-optimizing safety and performance in high-dimensional, complex systems.
+
+# 4.3. Multi-Agent Navigation
+
+In our third experiment, we consider a multi-agent setting where each of the 5 agents, represented by $x_{i} = [x_{a_{i}}, y_{a_{i}}, x_{g_{i}}, y_{g_{i}}]$ , tries to reach its goal while avoiding collisions with others. $(x_{a_{i}}, y_{a_{i}})$ denote the position of the $i$ th agent, while $(x_{g_{i}}, y_{g_{i}})$ represent the goal locations for that agent. The system is 20-dimensional, with each agent controlled by its $x$ and $y$ velocities. The control space for each agent is $\mathcal{U}_{i} = \{[v_{x_{i}}, v_{y_{i}}] \mid ||[v_{x_{i}}, v_{y_{i}}]|| \leq 1\}$ . The complexity of this system stems from the interactions and potential conflicts between agents as they attempt to reach their goals while avoiding collisions. The rest of the details about the experiment setup can be found in Appendix B.3.
+
+Safety Guarantees and Performance Quantification: We set $N_{s} = N_{p} = 300k$ , $\epsilon_{s} = 0.001$ , and $\beta_{s} = 10^{-10}$ , resulting in a $\delta$ -level of -0.09 with safety assurance of $99.9\%$ for the auxiliary value function. For performance quantification, we set $\epsilon_{p} = 0.01$ and $\beta_{p} = 10^{-10}$ , leading to a $\psi$ -level of 0.068. It is evident that the $\delta$ and $\psi$ values remain very low with high confidence, highlighting the effectiveness of our method in co-optimizing safety and performance for high-dimensional, multi-agent systems.
+
+Baselines: As in the previous experiments, we use MPPI, SAC-Lag, PPO-Lag, CPO, and MPPI-NCBF as our baselines.
+
+Comparative Analysis: Figure 5 shows that our method ensures long-horizon safety while enabling all agents to reach their goals without collisions. In contrast, the baseline methods either exhibit overly conservative behavior or fail to maintain safety, leading to collisions, as detailed in Appendix B.3.1. Figure 2 demonstrates the superior performance of our approach, with MPPI, MPPI-NCBF, and SAC-Lag showing mean percentage cost increases of $148\%$ , $192\%$ , and $164\%$ , respectively. Although MPPI and MPPI-NCBF achieve competitive safety rates of $90\%$ and $100\%$ , their significant performance degradation highlights their inability to balance safety and performance in complex systems. MPPI's subpar performance stems from its reliance on locally optimal solutions in a finite data regime, leading to several deadlocks along the way and overall suboptimal trajectories over a long horizon. Furthermore, CRL methods struggle with both safety and performance, further demonstrating their limitations in handling increasing system complexity and dimensionality. These results confirm our method's ability to co-optimize safety and performance in high-dimensional systems, demonstrating its scalability. Additionally, the safety guarantees hold in the test samples, validating the scalability of our safety verification framework for multi-agent systems.
+
+Figure 6. This figure presents a comparative analysis of all methods based on online and offline computation time evaluated on the same computing machine. The top plot illustrates the offline computation time for our method and the baselines. Since our method and SAC-Lag involve training value functions, they incur higher offline computation costs, whereas MPPI-based methods require no offline training. The bottom plot depicts the online computation time, demonstrating that our method and SAC-Lag have minimal online computation requirements, whereas MPPI-based methods exhibit significantly higher online computational costs.
+
+# 4.4. Computation Time Analysis
+
+Figure 6 presents a comparative analysis of the offline and online computation times for our method against the baselines. While traditional grid-based methods suffer from an exponentially scaling computational complexity (and are completely intractable for the 8D Evader Chasing and 20D Multi-Agent case studies), the proposed method scales much better with the system dimensionality. For example, the computation time increases only minimally from the 2D system to the 8D system, thanks to neural network parallelization. Similarly, the computation time increases sublinearly from the 8D to the 20D system. This scalability is a key advantage of the proposed approach. We finally note that while offline training requires time, our method achieves real-time inference speeds, with the optimal policy computed in just 2 ms across all systems, making the approach highly suitable for real robotic systems.
+
+# 5. Conclusion and Future Work
+
+In this work, we introduced a physics-informed machine learning framework for co-optimizing safety and performance in autonomous systems. By formulating the problem as a state-constrained optimal control problem (SC-OCP) and leveraging an epigraph-based approach, we enabled scalable computation of safety-aware policies. Our method integrates conformal prediction-based safety verification to ensure high-confidence safety guarantees while maintaining optimal performance. Through multiple case studies, we demonstrated the effectiveness and scalability of our approach in high-dimensional systems. In the future, we will explore methods for rapid adaptation of the learned policies in light of new information about the system dynamics, environments, or safety constraints. We will also apply our method to other high-dimensional autonomous systems and systems with unknown dynamics.
+
+# Acknowledgements
+
+Manan is supported by the Prime Minister's Research Fellowship (PMRF), Government of India. This work is partially supported by the AI & Robotics Technology Park (ARTPARK) at IISc, the DARPA Assured Neuro Symbolic Learning and Reasoning (ANSR) program, and the NSF CAREER award (2240163).
+
+# Impact Statement
+
+This paper presents an approach to co-optimize safety and performance in autonomous systems using Physics-Informed Machine Learning. This framework advances the deployment of scalable, provably safe, and high-performance controllers for complex, high-dimensional autonomous systems, with potential applications in robotics and autonomous driving.
+
+# References
+
+Achiam, J., Held, D., Tamar, A., and Abbeel, P. Constrained policy optimization. In International conference on machine learning, pp. 22-31. PMLR, 2017.
+Altarovici, A., Bokanowski, O., and Zidani, H. A general hamilton-jacobi framework for non-linear state-constrained control problems. *ESAIM: Control, Optimisation and Calculus of Variations*, 19(2):337-357, 2013.
+Altman, E. Constrained Markov Decision Processes. Stochastic Modeling Series. Taylor & Francis, 1999. ISBN 9780849303821. URL https://books.google.co.in/books?id=3X9S1NM2iOgC.
+Ames, A. D., Xu, X., Grizzle, J. W., and Tabuada, P. Control barrier function based quadratic programs for safety critical systems. IEEE Transactions on Automatic Control, 62(8):3861-3876, 2017. doi: 10.1109/tac.2016.2638961.
+Angelopoulos, A. N. and Bates, S. A gentle introduction to conformal prediction and distribution-free uncertainty quantification, 2022. URL https://arxiv.org/abs/2107.07511.
+Bansal, S. and Tomlin, C. J. Deepreach: A deep learning approach to high-dimensional reachability. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pp. 1817-1824, 2021. doi: 10.1109/ICRA48506.2021.9561949.
+Borquez, J., Chakraborty, K., Wang, H., and Bansal, S. On safety and liveness filtering using hamilton-jacobi reachability analysis. IEEE Transactions on Robotics, 40: 4235-4251, 2024. doi: 10.1109/TRO.2024.3454470.
+Boyd, S. and Vandenberghe, L. Convex optimization. Cambridge university press, 2004.
+Chakrabarty, A., Jha, D. K., Buzzard, G. T., Wang, Y., and Vamvoudakis, K. G. Safe approximate dynamic programming via kernelized lipschitz estimation. IEEE Transactions on Neural Networks and Learning Systems, 32(1): 405-419, 2021. doi: 10.1109/TNNLS.2020.2978805.
+
+Chen, J., Du, R., and Wu, K. A comparison study of deep galerkin method and deep ritz method for elliptic problems with different boundary conditions. arXiv preprint arXiv:2005.04554, 2020.
+Chow, Y. T., Darbon, J., Osher, S., and Yin, W. Algorithm for overcoming the curse of dimensionality for time-dependent non-convex hamilton-jacobi equations arising from optimal control and differential games problems. Journal of Scientific Computing, 73:617-643, 2017.
+Darbon, J. and Osher, S. Algorithms for overcoming the curse of dimensionality for certain hamilton-jacobi equations arising in control theory and elsewhere. Research in the Mathematical Sciences, 3(1):19, 2016.
+Dawson, C., Qin, Z., Gao, S., and Fan, C. Safe nonlinear control using robust neural lyapunov-barrier functions. In Conference on Robot Learning, pp. 1724-1735. PMLR, 2022.
+Fotiadis, F. and Vamvoudakis, K. G. A physics-informed neural networks framework to solve the infinite-horizon optimal control problem. In 2023 62nd IEEE Conference on Decision and Control (CDC), pp. 6014-6019, 2023. doi: 10.1109/CDC49753.2023.10383404.
+García, C. E., Prett, D. M., and Morari, M. Model predictive control: Theory and practice—a survey. Automatica, 25(3):335-348, 1989. ISSN 0005-1098. doi: https://doi.org/10.1016/0005-1098(89)90002-2. URL https://www.sciencedirect.com/science/article/pii/0005109889900022.
+Goswami, B. G., Tayal, M., Rajgopal, K., Jagtap, P., and Kolathaya, S. Collision cone control barrier functions: Experimental validation on ugvs for kinematic obstacle avoidance. In 2024 American Control Conference (ACC), pp. 325-331, 2024. doi: 10.23919/ACC60939.2024.10644338.
+Grüne, L., Pannek, J., Grüne, L., and Pannek, J. Nonlinear model predictive control. Springer, 2017.
+Hsu, K.-C., Hu, H., and Fisac, J. F. The safety filter: A unified view of safety-critical control in autonomous systems. Annual Review of Control, Robotics, and Autonomous Systems, 7(Volume 7, 2024):47-72, 2024. ISSN 2573-5144. doi: https://doi.org/10.1146/annurev-control-071723-102940. URL https://www.annualreviews.org/content/journals/10.1146/annurev-control-071723-102940.
+Ji, J., Zhou, J., Zhang, B., Dai, J., Pan, X., Sun, R., Huang, W., Geng, Y., Liu, M., and Yang, Y. Omnisafe: An infrastructure for accelerating safe reinforcement learning research. Journal of Machine Learning
+
+Research, 25(285):1-6, 2024. URL http://jmlr.org/papers/v25/23-0681.html.
+Li, Z., Zheng, H., Kovachki, N. B., Jin, D., Chen, H., Liu, B., Stuart, A., Azizzadenesheli, K., and Anandkumar, A. Physics-informed neural operator for learning partial differential equations, 2022. URL https://openreview.net/forum?id=dtYnHcmQKeM.
+Mitchell, I. A toolbox of level set methods. http://www.cs.ubc.ca/mitchell/ToolboxLS/toolboxLS.pdf, 2004.
+Olver, F. W. J., Daalhuis, A. B. O., Lozier, D. W., Schneider, B. I., Boisvert, R. F., Clark, C. W., Miller, B. R., Saunders, B. V., Cohl, H. S., and McClain, M. A. (eds.). NIST Digital Library of Mathematical Functions. National Institute of Standards and Technology, 2023. URL https://dlmf.nist.gov/. Release 1.1.11 of 2023-09-15.
+Raissi, M., Perdikaris, P., and Karniadakis, G. E. Physics informed deep learning (part i & ii): Data-driven solutions of nonlinear partial differential equations, 2017.
+Raissi, M., Perdikaris, P., and Karniadakis, G. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686-707, 2019a. ISSN 0021-9991. doi: https://doi.org/10.1016/j.jcp.2018.10.045. URL https://www.sciencedirect.com/science/article/pii/S0021999118307125.
+Raissi, M., Perdikaris, P., and Karniadakis, G. E. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational physics, 378:686-707, 2019b.
+Ray, A., Achiam, J., and Amodei, D. Benchmarking safe exploration in deep reinforcement learning, 2019. URL https://cdn.openai.com/safexp-short.pdf.
+Schmerling, E. hj_reachability: Hamilton-Jacobi reachability analysis in JAX. https://github.com/StanfordASL/hj_reachability, 2021.
+Singh, A., Feng, Z., and Bansal, S. Exact imposition of safety boundary conditions in neural reachable tubes. In 2025 IEEE International Conference on Robotics and Automation (ICRA), 2025. URL https://arxiv.org/abs/2404.00814.
+So, O., Ge, C., and Fan, C. Solving minimum-cost reach avoid using reinforcement learning. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. URL https://openreview.net/forum?id=jzngdJQ21Y.
+
+Soner, H. M. Optimal control with state-space constraint i. SIAM Journal on Control and Optimization, 24(3): 552-561, 1986. doi: 10.1137/0324032. URL https://doi.org/10.1137/0324032.
+Streichenberg, L., Trevisan, E., Chung, J. J., Siegwart, R., and Alonso-Mora, J. Multi-agent path integral control for interaction-aware motion planning in urban canals. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pp. 1379-1385, 2023. doi: 10.1109/ICRA48891.2023.10161511.
+Tayal, M., Singh, A., Jagtap, P., and Kolathaya, S. Semi-supervised safe visuomotor policy synthesis using barrier certificates. arXiv preprint arXiv:2409.12616, 2024a.
+Tayal, M., Zhang, H., Jagtap, P., Clark, A., and Kolathaya, S. Learning a formally verified control barrier function in stochastic environment. In Conference on Decision and Control (CDC). IEEE, 2024b.
+Vovk, V. Conditional validity of inductive conformal predictors, 2012. URL https://arxiv.org/abs/1209.2673.
+Wabersich, K. P., Taylor, A. J., Choi, J. J., Sreenath, K., Tomlin, C. J., Ames, A. D., and Zeilinger, M. N. Data-driven safety filters: Hamilton-jacobi reachability, control barrier functions, and predictive methods for uncertain systems. IEEE Control Systems Magazine, 43(5):137-177, 2023. doi: 10.1109/MCS.2023.3291885.
+Wang, H., Dhande, A., and Bansal, S. Cooptimizing safety and performance with a control-constrained formulation. IEEE Control Systems Letters, 8:2739-2744, 2024. doi: 10.1109/LCSYS.2024.3511429.
+Wang, S., Teng, Y., and Perdikaris, P. Understanding and mitigating gradient flow pathologies in physics-informed neural networks. SIAM Journal on Scientific Computing, 43(5):A3055-A3081, 2021.
+Williams, G., Drews, P., Goldfain, B., Rehg, J. M., and Theodorou, E. A. Information-theoretic model predictive control: Theory and applications to autonomous driving. IEEE Transactions on Robotics, 34(6):1603-1622, 2018. doi: 10.1109/TRO.2018.2865891.
+
+# Contents
+
+A. Notations
+A.1 Proof of Theorem (3.1)
+A.2 Proof of Theorem (3.2)
+A.3 Relationship between $\alpha, \beta,$ and $\epsilon$
+B. Additional Details of the Systems in the Experiments
+B.1 Efficient and Safe Boat Navigation
+B.2 Pursuer vehicle tracking an evader
+B.3 Multi-Agent Navigation
+C. Implementation Details of the Algorithms
+C.1 Experimentation Hardware
+C.2 Hyperparameters for the Proposed Algorithm
+C.3 Hyperparameters for MPPI
+C.4 Hyperparameters for SAC-Lag
+C.5 Hyperparameters for PPO-Lag
+C.6 Hyperparameters for CPO
+
+# A. Proofs
+
+# A.1. Proof of Theorem (3.1)
+
+Theorem 3.1 (Safety Verification Using Conformal Prediction) Let $S_{\delta}$ be the set of states satisfying $\hat{V}_{\theta}(0,\hat{x}) \leq \delta$ , and let $(0,\hat{x}_i)_{i = 1,\dots ,N_s}$ be $N_{s}$ i.i.d. samples from $S_{\delta}$ . Define $\alpha_{\delta}$ as the safety error rate among these $N_{s}$ samples for a given $\delta$ level. Select a safety violation parameter $\epsilon_{s} \in (0,1)$ and a confidence parameter $\beta_{s} \in (0,1)$ such that:
+
+$$
+\sum_ {i = 0} ^ {l - 1} \binom {N _ {s}} {i} \epsilon_ {s} ^ {i} (1 - \epsilon_ {s}) ^ {N _ {s} - i} \leq \beta_ {s},
+$$
+
+where $l = \lfloor (N_s + 1)\alpha_\delta \rfloor$ . Then, with probability at least $1 - \beta_{s}$ , the following holds:
+
+$$
+\underset {\hat {x} \in \mathcal {S} _ {\delta}} {\mathbb {P}} \left(\hat {V} (0, \hat {x} _ {i}) \leq 0\right) \geq 1 - \epsilon_ {s}.
+$$
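+
+The tail condition above is just the Binomial CDF evaluated at $l - 1$. As a minimal numerical sketch (assuming only the theorem's $N_{s}$, $\alpha_{\delta}$, and $\beta_{s}$; the bisection tolerance is an arbitrary choice), the smallest admissible $\epsilon_{s}$ can be found by bisection, working in log space so that sample counts as large as the $300k$ used in the experiments do not overflow:
+
+```python
+import math
+
+def log_binom_cdf(k, n, p):
+    """log P[Bin(n, p) <= k], summed stably in log space via lgamma."""
+    if k < 0:
+        return float("-inf")
+    terms = [
+        math.lgamma(n + 1) - math.lgamma(i + 1) - math.lgamma(n - i + 1)
+        + i * math.log(p) + (n - i) * math.log1p(-p)
+        for i in range(k + 1)
+    ]
+    m = max(terms)
+    return m + math.log(sum(math.exp(t - m) for t in terms))
+
+def min_epsilon(n_s, alpha, beta, tol=1e-8):
+    """Smallest eps with sum_{i=0}^{l-1} C(n_s, i) eps^i (1 - eps)^(n_s - i) <= beta,
+    where l = floor((n_s + 1) * alpha). The binomial CDF decreases in eps, so bisect."""
+    l = math.floor((n_s + 1) * alpha)
+    lo, hi = 1e-12, 1.0 - 1e-12
+    while hi - lo > tol:
+        mid = 0.5 * (lo + hi)
+        if log_binom_cdf(l - 1, n_s, mid) <= math.log(beta):
+            hi = mid
+        else:
+            lo = mid
+    return hi
+```
+
+For example, `min_epsilon(300_000, 0.001, 1e-10)` gives the tightest safety violation level certifiable from $300k$ samples at confidence $1 - 10^{-10}$.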
+
+Proof. Before we proceed with the proof of the Theorem (3.1), let us look at the following lemma which describes split conformal prediction:
+
+Lemma 1 (Split Conformal Prediction (Angelopoulos & Bates, 2022)). Consider a set of independent and identically distributed (i.i.d.) calibration data, denoted as $\{(X_i,Y_i)\}_{i=1}^n$ , along with a new test point $(X_{\mathrm{test}},Y_{\mathrm{test}})$ sampled independently from the same distribution. Define a score function $s(x,y) \in \mathbb{R}$ , where higher scores indicate poorer alignment between $x$ and $y$ . Compute the calibration scores $s_1 = s(X_1,Y_1),\ldots,s_n = s(X_n,Y_n)$ . For a user-defined confidence level $1 - \alpha$ , let $\hat{q}$ represent the $\lceil (n + 1)(1 - \alpha) \rceil / n$ quantile of these scores. Construct the prediction set for the test input $X_{\mathrm{test}}$ as:
+
+$$
+\mathcal {C} (X _ {\text {t e s t}}) = \{y: s (X _ {\text {t e s t}}, y) \leq \hat {q} \}.
+$$
+
+Assuming exchangeability, the prediction set $\mathcal{C}(X_{\mathrm{test}})$ guarantees the marginal coverage property:
+
+$$
+\mathbb {P} \left(Y _ {\text {t e s t}} \in \mathcal {C} \left(X _ {\text {t e s t}}\right)\right) \geq 1 - \alpha .
+$$
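+
+As a toy numerical sketch of Lemma 1 (the predictor `f`, the noise level, and the sample sizes are illustrative assumptions, not from the paper):
+
+```python
+import math
+import random
+
+def conformal_quantile(scores, alpha):
+    """q_hat: the ceil((n + 1) * (1 - alpha)) / n empirical quantile of the scores."""
+    n = len(scores)
+    rank = min(math.ceil((n + 1) * (1 - alpha)), n)  # 1-indexed order statistic
+    return sorted(scores)[rank - 1]
+
+random.seed(0)
+f = lambda x: 2.0 * x  # placeholder point predictor
+
+calib = []
+for _ in range(999):
+    x = random.random()
+    calib.append((x, 2.0 * x + random.gauss(0.0, 0.1)))  # y = f(x) + noise
+
+scores = [abs(y - f(x)) for x, y in calib]  # score s(x, y): absolute residual
+q_hat = conformal_quantile(scores, alpha=0.1)
+# Prediction set for a new input x_test: {y : |y - f(x_test)| <= q_hat};
+# by the lemma it covers y_test with probability at least 0.9 (marginally).
+```
+
+With 999 calibration points and $\alpha = 0.1$, $\hat{q}$ is the 900th smallest score, and fresh test points drawn from the same distribution fall inside the prediction set roughly $90\%$ of the time.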
+
+Following the Lemma 1, we employ a conformal scoring function for safety verification, defined as:
+
+$$
+s (X) = \hat {V} _ {\hat {\pi} _ {\theta}} (0, \hat {x}), \quad \forall \hat {x} \in \mathcal {S} _ {\delta},
+$$
+
+where $S_{\delta}$ denotes the set of states satisfying $\hat{V}_{\theta}(0,\hat{x})\leq \delta$ and the score function measures the alignment between the induced safe policy and the auxiliary value function.
+
+Next, we sample $N_{s}$ states from the safe set $\mathcal{S}_{\delta}$ and compute conformal scores for all sampled states. For a user-defined error rate $\alpha \in [0,1]$ , let $\hat{q}$ denote the $\frac{\lceil (N_s + 1)(1 - \alpha)\rceil}{N_s}$ th quantile of the conformal scores. According to (Vovk, 2012), the following property holds:
+
+$$
+\underset {\hat {x} \in \mathcal {S} _ {\delta}} {\mathbb {P}} \left(\hat {V} _ {\hat {\pi} _ {\theta}} (0, \hat {x} _ {i}) \leq \hat {q}\right) \sim \operatorname {Beta} \left(N _ {s} - l + 1, l\right), \tag {21}
+$$
+
+where $l = \lfloor (N_s + 1)\alpha \rfloor$
+
+Define $E_{s}$ as:
+
+$$
+E _ {s} := \underset {\hat {x} \in \mathcal {S} _ {\delta}} {\mathbb {P}} \left(\hat {V} _ {\hat {\pi} _ {\theta}} (0, \hat {x} _ {i}) \leq \hat {q}\right).
+$$
+
+Here, $E_{s}$ is a Beta-distributed random variable. Using properties of cumulative distribution functions (CDF), we assert that $E_{s} \geq 1 - \epsilon_{s}$ with confidence $1 - \beta_{s}$ if the following condition is satisfied:
+
+$$
+I _ {1 - \epsilon_ {s}} (N _ {s} - l + 1, l) \leq \beta_ {s}, \tag {22}
+$$
+
+where $I_{x}(a,b)$ is the regularized incomplete Beta function and also serves as the CDF of the Beta distribution. It is defined as:
+
+$$
+I _ {x} (a, b) = \frac {1}{B (a , b)} \int_ {0} ^ {x} t ^ {a - 1} (1 - t) ^ {b - 1} d t,
+$$
+
+where $B(a,b)$ is the Beta function. From (Olver et al., 2023)(8.17.5), it can be shown that $I_{1 - x}(n - k, k + 1) = \sum_{i = 0}^{k}\binom{n}{i}x^{i}(1 - x)^{n - i}$ .
+
+Then (22) can be rewritten as:
+
+$$
+\sum_ {i = 0} ^ {l - 1} \binom {N _ {s}} {i} \epsilon_ {s} ^ {i} (1 - \epsilon_ {s}) ^ {N _ {s} - i} \leq \beta_ {s}, \tag {23}
+$$
+
+Thus, if Equation (23) holds, we can say with probability $1 - \beta_{s}$ that:
+
+$$
+\underset {\hat {x} \in \mathcal {S} _ {\delta}} {\mathbb {P}} \left(\hat {V} _ {\hat {\pi} _ {\theta}} \left(0, \hat {x} _ {i}\right) \leq \hat {q}\right) \geq 1 - \epsilon_ {s}. \tag {24}
+$$
+
+Now, let $k$ denote the number of allowable safety violations. Thus, the safety error rate is given by $\alpha_{\delta} = \frac{k + 1}{N_s + 1}$ . Let $\hat{q}$ represent the $\frac{\lceil (N_s + 1)(1 - \alpha_{\delta})\rceil}{N_s}$ th quantile of the conformal scores. Since $k$ denotes the number of samples for which the conformal score is positive, this quantile corresponds to the largest non-positive score amongst the sampled states, which implies that $\hat{q} \leq 0$ . From this and Equation (24), we can conclude with probability $1 - \beta_s$ that:
+
+$$
+\underset {\hat {x} \in \mathcal {S} _ {\delta}} {\mathbb {P}} \left(\hat {V} _ {\hat {\pi} _ {\theta}} (0, \hat {x} _ {i}) \leq 0\right) \geq 1 - \epsilon_ {s}.
+$$
+
+From Equation (4), it can be inferred that $\hat{V}(0,\hat{x}) \leq \hat{V}_{\hat{\pi}_\theta}(0,\hat{x})$ for all $\hat{x}$ . Hence, with probability $1 - \beta_{s}$ , the following holds:
+
+$$
+\underset {\hat {x} \in \mathcal {S} _ {\delta}} {\mathbb {P}} \left(\hat {V} (0, \hat {x} _ {i}) \leq 0\right) \geq 1 - \epsilon_ {s}.
+$$
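+
+The counting step at the end of the proof can be sketched directly: count the positive conformal scores among the $N_{s}$ samples to obtain $\alpha_{\delta}$, and read off $\hat{q}$ as the largest non-positive score (the scores below are synthetic placeholders, not from the paper's experiments):
+
+```python
+def safety_certificate(scores):
+    """From conformal scores on N_s sampled states, return (alpha_delta, q_hat):
+    alpha_delta = (k + 1) / (N_s + 1), with k the number of positive scores
+    (i.e., allowable safety violations), and q_hat the largest non-positive
+    score, or None if every sampled state violates."""
+    n = len(scores)
+    k = sum(s > 0 for s in scores)
+    alpha_delta = (k + 1) / (n + 1)
+    nonpos = [s for s in scores if s <= 0]
+    q_hat = max(nonpos) if nonpos else None
+    return alpha_delta, q_hat
+
+# Synthetic example: 3 safe states, 1 violating state.
+alpha_delta, q_hat = safety_certificate([-0.5, -0.2, -0.1, 0.3])
+# alpha_delta = 2/5 = 0.4 and q_hat = -0.1 <= 0, as argued in the proof.
+```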
+
+# A.2. Proof of Theorem (3.2)
+
+Theorem 3.2 (Performance Quantification Using Conformal Prediction) Suppose $S^*$ denotes the safe states satisfying $V_{\theta}(0,x) < \infty$ (or equivalently $\hat{V}_{\theta}(0,\hat{x}^{*}) < \delta$ ) and $(0,x_{i})_{i = 1,\dots ,N_{p}}$ are $N_{p}$ i.i.d. samples from $S^*$ . For a user-specified level $\alpha_{p}$ , let $\psi$ be the $\frac{\lceil(N_p + 1)(1 - \alpha_p)\rceil}{N_p}$ th quantile of the scores $\left(p_i\coloneqq \frac{|V_\theta(0,x_i) - V_{\pi_\theta}(0,x_i)|}{C_{max}}\right)_{i = 1,\dots,N_p}$ on the $N_{p}$ state samples. Select a violation parameter $\epsilon_{p}\in (0,1)$ and a confidence parameter $\beta_{p}\in (0,1)$ such that:
+
+$$
+\sum_ {i = 0} ^ {l - 1} \binom {N _ {p}} {i} \epsilon_ {p} ^ {i} (1 - \epsilon_ {p}) ^ {N _ {p} - i} \leq \beta_ {p}
+$$
+
+where, $l = \lfloor (N_p + 1)\alpha_p\rfloor$ . Then, the following holds, with probability $1 - \beta_{p}$ :
+
+$$
+\underset {x \in \mathcal {S} ^ {*}} {\mathbb {P}} \left(\frac {| V _ {\theta} (0 , x _ {i}) - V _ {\pi_ {\theta}} (0 , x _ {i}) |}{C _ {m a x}} \leq \psi\right) \geq 1 - \epsilon_ {p}.
+$$
+
+where $C_{max}$ is a normalizing factor and denotes the maximum possible cost that could be incurred for any $x \in S^{*}$ .
+
+Proof. To quantify the performance loss, we employ a conformal scoring function defined as:
+
+$$
+p (x) := \frac {\left| V _ {\theta} (0 , x _ {i}) - V _ {\pi_ {\theta}} (0 , x _ {i}) \right|}{C _ {m a x}}, \forall x \in \mathcal {S} ^ {*}
+$$
+
+where the score function measures the alignment between the induced optimal policy and the value function.
+
+Next, we sample $N_{p}$ states from the state space $\mathcal{S}^*$ and compute conformal scores for all sampled states. For a user-defined error rate $\alpha_{p} \in [0,1]$ , let $\psi$ denote the $\frac{\lceil(N_p + 1)(1 - \alpha_p)\rceil}{N_p}$ th quantile of the conformal scores. According to (Vovk, 2012), the following property holds:
+
+$$
+\underset {x \in \mathcal {S} ^ {*}} {\mathbb {P}} \left(\frac {| V _ {\theta} (0 , x _ {i}) - V _ {\pi_ {\theta}} (0 , x _ {i}) |}{C _ {m a x}} \leq \psi\right) \sim \operatorname {Beta} (N _ {p} - l + 1, l),
+$$
+
+where $l = \lfloor (N_p + 1)\alpha_p\rfloor$
+
+Define $E_{p}$ as:
+
+$$
+E _ {p} := \underset {x \in \mathcal {S} ^ {*}} {\mathbb {P}} \left(\frac {| V _ {\theta} (0 , x _ {i}) - V _ {\pi_ {\theta}} (0 , x _ {i}) |}{C _ {m a x}} \leq \psi\right).
+$$
+
+Here, $E_{p}$ is a Beta-distributed random variable. Using properties of CDF, we assert that $E_{p} \geq 1 - \epsilon_{p}$ with confidence $1 - \beta_{p}$ if the following condition is satisfied:
+
+$$
+I _ {1 - \epsilon_ {p}} \left(N _ {p} - l + 1, l\right) \leq \beta_ {p}, \tag {25}
+$$
+
+where $I_{x}(a,b)$ is the regularized incomplete Beta function. From (Olver et al., 2023)(8.17.5), it can be shown that $I_{1 - x}(n - k, k + 1) = \sum_{i = 0}^{k}\binom{n}{i}x^{i}(1 - x)^{n - i}$ . Hence, Equation (25) can be equivalently stated as:
+
+$$
+\sum_ {i = 0} ^ {l - 1} \binom {N _ {p}} {i} \epsilon_ {p} ^ {i} (1 - \epsilon_ {p}) ^ {N _ {p} - i} \leq \beta_ {p} \tag {26}
+$$
+
+Thus, if Equation (26) holds, we can conclude with probability $1 - \beta_{p}$ that:
+
+$$
+\underset {x \in \mathcal {S} ^ {*}} {\mathbb {P}} \left(\frac {| V _ {\theta} (0 , x _ {i}) - V _ {\pi_ {\theta}} (0 , x _ {i}) |}{C _ {m a x}} \leq \psi\right) \geq 1 - \epsilon_ {p}.
+$$
+
+# A.3. Relationship between $\alpha, \beta$ , and $\epsilon$
+
+
+Figure 7. This figure shows the $\alpha-\epsilon$ plots for different numbers of verification samples, $N$ , and different values of $\beta$ .
+
+
+
+
+
+The work (Vovk, 2012) states that a smaller number of samples leads to greater fluctuations in the conformal prediction calibration, meaning that if we redraw $N$ samples and repeat the conformal prediction process, we might get a different calibration result. This variance decreases as $N$ increases. Similarly, in our work, a small $N$ means that the value correction term $\delta$ might fluctuate each time the verification algorithm is executed. Therefore, to ensure a stable estimate of $\delta$ , it is desirable to select a sufficiently large value of $N$ .
+
+Figure 7 presents the $\alpha - \epsilon$ plots for varying numbers of verification samples $N$ and different values of $\beta$ . From the figure, we observe that as $N$ increases, the effect of $\beta$ diminishes, and the curve approaches the $\alpha = \epsilon$ line. Ideally, the user-specified safety error rate $(\alpha)$ should closely match the safety violation parameter $(\epsilon)$ while maintaining high confidence $(1 - \beta$ close to 1). Thus, selecting a larger $N$ enables a smaller $\beta$ while ensuring the alignment of $\alpha$ and $\epsilon$ . Conversely, if $N$ is small, one must either compromise on the confidence parameter $\beta$ or accept that $\alpha$ will be lower than $\epsilon$ , resulting in a more conservative upper bound on the safety rate.
+
+# B. Additional Details of the Systems in the Experiments
+
+In this section, we provide more details about the systems used in the experiments of Section 4.
+
+# B.1. Efficient and Safe Boat Navigation
+
+The state $\boldsymbol{x}$ of the 2D Boat system is $\boldsymbol {x} = [x_1,x_2]^T$ , where $x_{1}$ and $x_{2}$ are the $x$ and $y$ coordinates of the boat, respectively. We define the step cost at each step, $l(t,x)$ , as the distance from the goal, given by:
+
+$$
+l (t, x) := \left\| x - (1. 5, 0) ^ {T} \right\|
+$$
+
+The cost function $C(t, x(t))$ is defined as:
+
+$$
+C (t, x (t), \mathbf {u}) = \int_ {t} ^ {T} l (t, x (t)) d t + \phi (x (T)) \tag {27}
+$$
+
+where $T$ is the time horizon (2s in our experiment), $l(t,x(t)) = ||x(t) - (1.5,0)^T||$ represents the running cost, and $\phi (x(T)) = ||x(T) - (1.5,0)^T||$ is the terminal cost. Minimizing this cost drives the boat toward the island.
+
+Consequently, the (augmented) dynamics of the 2D Boat system are:
+
+$$
+\dot {x _ {1}} = u _ {1} + 2 - 0. 5 x _ {2} ^ {2}
+$$
+
+$$
+\dot {x} _ {2} = u _ {2}
+$$
+
+$$
+\dot {z} = - l (t, x)
+$$
+
+where $u_{1}, u_{2}$ represent the velocity controls in the $x_{1}$ and $x_{2}$ directions, respectively, with $u_{1}^{2} + u_{2}^{2} \leq 1$ , and $2 - 0.5x_{2}^{2}$ specifies the current drift along the $x_{1}$ -axis.
+
+The safety constraints are formulated as:
+
+$$
+g (x) := \max \left(0. 4 - \| x - (- 0. 5, 0. 5) ^ {T} \|, 0. 5 - \| x - (- 1. 0, - 1. 2) ^ {T} \|\right) \tag {28}
+$$
+
+where $g(x) > 0$ indicates that the boat is inside a boulder, thereby ensuring that the super-level set of $g(x)$ defines the failure region.
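+
+Under the dynamics and constraints above, a forward-Euler rollout illustrates how the cost and the failure function are evaluated along a trajectory; the goal-pointing controller and the step size are illustrative assumptions, not the paper's learned policy:
+
+```python
+import math
+
+GOAL = (1.5, 0.0)  # goal/island location from the cost definition above
+
+def g_boat(x1, x2):
+    """Failure function (28): positive iff the boat is inside either boulder."""
+    d1 = 0.4 - math.hypot(x1 + 0.5, x2 - 0.5)   # boulder at (-0.5, 0.5), r = 0.4
+    d2 = 0.5 - math.hypot(x1 + 1.0, x2 + 1.2)   # boulder at (-1.0, -1.2), r = 0.5
+    return max(d1, d2)
+
+def rollout(x1, x2, T=2.0, dt=0.01):
+    """Euler-integrate the boat under a naive goal-pointing controller;
+    return (accumulated cost, max of g along the trajectory)."""
+    cost, g_max = 0.0, g_boat(x1, x2)
+    for _ in range(int(T / dt)):
+        dx, dy = GOAL[0] - x1, GOAL[1] - x2
+        norm = math.hypot(dx, dy) or 1.0
+        u1, u2 = dx / norm, dy / norm            # unit-speed control, u1^2 + u2^2 <= 1
+        x1 += (u1 + 2.0 - 0.5 * x2 ** 2) * dt    # drift 2 - 0.5 * x2^2 along x1
+        x2 += u2 * dt
+        cost += math.hypot(x1 - GOAL[0], x2 - GOAL[1]) * dt   # running cost l
+        g_max = max(g_max, g_boat(x1, x2))
+    cost += math.hypot(x1 - GOAL[0], x2 - GOAL[1])            # terminal cost phi
+    return cost, g_max
+```
+
+Starting at the origin, for instance, the drift carries the boat rightward away from both boulders, so $g$ stays negative along the whole rollout.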
+
+# B.1.1. GROUND TRUTH COMPARISON
+
+We compute the Ground Truth value function using the Level-Set Toolbox (Mitchell, 2004) and use it as a benchmark in our comparative analysis. To facilitate demonstration, unsafe states are assigned a high value of 20 instead of $\infty$ . The value function in this problem ranges from 0 to 14.76.
+
+As illustrated in Figure 8, the value function obtained using our method closely approximates the ground truth value function. Notably, the unsafe region (highlighted in yellow) remains identical in both cases, confirming the safety of the learned value function. Furthermore, the mean squared error (MSE) between the two value functions is 0.36, which is relatively low given the broad range of possible values.
+
+It is also worth mentioning that computing a high-fidelity ground truth value function on a $210 \times 210 \times 210$ grid using the Level Set Toolbox requires approximately 390 minutes. In contrast, our proposed approach learns the value function in 122 minutes, achieving a substantial speedup. This demonstrates that even for systems with a relatively low-dimensional state space, our method efficiently recovers an accurate value function significantly faster than grid-based solvers.
+
+# B.1.2. EMPIRICAL VALIDATION OF THE $\alpha$ - $\beta$ - $\epsilon$ RELATIONSHIP AND CALCULATION OF SAFETY LEVELS
+
+We conducted an experiment to empirically validate the theoretical relationship between $\alpha$ , $\beta$ , and $\epsilon$ . The results are presented in Figure 9. The figure visualizes the relationship between theoretical and empirical safety metrics across varying levels of $\delta$ , and includes the following elements:
+
+- Safety error rate ( $\alpha$ , purple line) as a function of different $\delta$ levels. Computed on the calibration dataset as $\alpha = \frac{k + 1}{N_s + 1}$ , where $k$ is the number of allowable safety violations and $N_s$ is the number of calibration samples.
+- Theoretical safety violation probability ( $\epsilon$ , orange line) as a function of $\delta$ . Derived using the theoretical relation in Equation (14).
+
+
+Figure 8. Heatmap of the value function for the ground truth (left) and our method (right). The yellow region represents the unsafe area. Our method successfully captures most of the safe set, indicating that it is not overly conservative while completely recovering the unsafe regions.
+
+
+
+- Empirical safety violation probability (green points) as a function of $\delta$ . Computed by sampling 3M initial states from the $\delta$ -sublevel set of the learned value function, simulating rollouts, and measuring the observed safety violation rate. This serves as a practical estimate of system safety.
+
+For this experiment, we set $N = 300k$ and $\beta = 10^{-10}$ . As shown in the figure, the empirical violation rate remains consistently below the theoretical bound $(\epsilon)$ across all values of $\delta$ . This demonstrates that our method provides conservative and valid safety guarantees, confirming the soundness of the theoretical relationship in practice.
+
+Additionally, from the $\delta$ vs $\epsilon$ plot, we can observe that the $\delta$ level approaches 0 as the $\epsilon$ values approach the chosen safety level of 0.001. Hence, we say that the sub-level set of the auxiliary value function, $\hat{V}(t, \hat{x})$ is safe with a probability of $1 - 0.001 = 0.999$ .
+
+# B.2. Pursuer vehicle tracking an evader
+
+The state $x$ of a ground vehicle (pursuer) tracking a moving evader is $x = [x_{p},y_{p},v,\Theta ,x_{e},y_{e},v_{xe},v_{ye}]^{T}$ , where $x_{p},y_{p},v,\Theta$ are the position, linear velocity, and orientation of the pursuer, respectively, and $x_{e},y_{e},v_{xe},v_{ye}$ are the position and linear velocities of the evader. We define the step cost at each step, $l(t,x)$ , as the distance between the pursuer and the evader, given by:
+
+$$
+l (t, x) := \left\| \left(x _ {p} (t), y _ {p} (t)\right) ^ {T} - \left(x _ {e} (t), y _ {e} (t)\right) ^ {T} \right\|
+$$
+
+and the terminal cost is $\phi (x(T)) = ||(x_p(T),y_p(T))^T -(x_e(T),y_e(T))^T ||$ . The cost function $C(t,x(t))$ is defined as:
+
+$$
+C (t, x (t), \mathbf {u}) = \int_ {t} ^ {T} l (t, x (t)) d t + \phi (x (T)) \tag {29}
+$$
+
+where $T$ is the time horizon (1s in this experiment). Minimizing this cost aims to drive the pursuer toward the evader. Consequently, the (augmented) dynamics of the system is as follows:
+
+$$
+\dot {x _ {p}} = v \cos (\Theta), \quad \dot {y _ {p}} = v \sin (\Theta), \quad \dot {v} = u _ {1}, \quad \dot {\Theta} = u _ {2},
+$$
+
+$$
+\dot {x _ {e}} = v _ {x e}, \quad \dot {y _ {e}} = v _ {y e}, \quad \dot {v _ {x e}} = 0, \quad \dot {v _ {y e}} = 0, \quad \dot {z} = - l (t, x),
+$$
+
+
+Figure 9. This figure presents a comparative analysis of the relationships between $\epsilon -\delta$ , $\alpha -\delta$ , and empirical safety violation rate- $\delta$ . As observed, the empirical safety violation consistently remains below the theoretical bound, thereby supporting our theoretical guarantees. Furthermore, as $\epsilon$ decreases, the corresponding $\delta$ approaches zero, indicating that the learned value function incurs negligible safety violations.
+
+where $u_{1}$ represents the linear acceleration control and $u_{2}$ represents angular velocity control.
+
+The safety constraints are defined as:
+
+$$
+g (x) := \max \left(0.2 - \left\| x - (0.5, 0.5) ^ {T} \right\|, \, 0.2 - \left\| x - (-0.5, 0.5) ^ {T} \right\|, \, 0.2 - \left\| x - (-0.5, -0.5) ^ {T} \right\|, \, 0.2 - \left\| x - (0.5, -0.5) ^ {T} \right\|, \, 0.2 - \left\| x - (0.0, 0.0) ^ {T} \right\| \right)
+$$
+
+which represents 5 obstacles of radius 0.2 units each.
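+
+A single Euler step of these augmented dynamics, together with the obstacle constraint evaluated at the pursuer's position, can be sketched as follows (the step size is an illustrative assumption):
+
+```python
+import math
+
+OBSTACLES = [(0.5, 0.5), (-0.5, 0.5), (-0.5, -0.5), (0.5, -0.5), (0.0, 0.0)]
+
+def g_obstacles(xp, yp):
+    """Failure function: positive iff the pursuer is inside one of the
+    5 obstacles of radius 0.2."""
+    return max(0.2 - math.hypot(xp - cx, yp - cy) for cx, cy in OBSTACLES)
+
+def step(state, u1, u2, dt=0.01):
+    """One forward-Euler step of the augmented pursuer-evader dynamics.
+    State ordering follows the text: [xp, yp, v, th, xe, ye, vxe, vye, z]."""
+    xp, yp, v, th, xe, ye, vxe, vye, z = state
+    l = math.hypot(xp - xe, yp - ye)  # running cost: pursuer-evader distance
+    return (xp + v * math.cos(th) * dt,
+            yp + v * math.sin(th) * dt,
+            v + u1 * dt,               # linear acceleration control
+            th + u2 * dt,              # angular velocity control
+            xe + vxe * dt,             # evader: constant-velocity motion
+            ye + vye * dt,
+            vxe, vye,
+            z - l * dt)                # augmented cost state
+```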
+
+# B.3. Multi-Agent Navigation
+
+We consider a multi-agent setting with 5 agents, where each agent $i$ , with state $x_{i} = [x_{a_{i}}, y_{a_{i}}, x_{g_{i}}, y_{g_{i}}]$ , tries to reach its goal while avoiding collisions with others. $(x_{a_{i}}, y_{a_{i}})$ denote the position of the $i$ th agent, while $(x_{g_{i}}, y_{g_{i}})$ represent the goal locations for that agent. We define the step cost at each step, $l(t, x(t))$ , as the mean distance of each agent from its respective goal, given by:
+
+$$
+l(t, x(t)) := \frac{1}{5} \sum_{i=1}^{5} \left\| (x_{a_i}(t), y_{a_i}(t))^T - (x_{g_i}(t), y_{g_i}(t))^T \right\|
+$$
+
+The cost function $C(t, x(t), \mathbf{u})$ is defined as:
+
+$$
+C(t, x(t), \mathbf{u}) := \int_{t}^{T} l(t, x(t)) \, dt + \phi(x(T)) \tag{30}
+$$
+
+where $T$ is the time horizon (2s in this experiment). Minimizing this cost aims to drive each agent towards its goal. Consequently, the (augmented) dynamics of the system are as follows:
+
+$$
+\dot{x}_{a_i} = u_{1i}, \quad \dot{y}_{a_i} = u_{2i}, \quad \dot{x}_{g_i} = 0, \quad \dot{y}_{g_i} = 0, \quad \forall i \in \{1, 2, 3, 4, 5\},
+$$
+
+$$
+\dot{z} = -l(t, x)
+$$
+
+where $u_{1i}, u_{2i}$ represent the velocity controls of agent $i$. The safety constraints are defined as:
+
+$$
+g(x(t)) := \max_{i, j \in \{1, \dots, 5\}, \, i \neq j} \left( R - \left\| (x_{a_i}, y_{a_i})^T - (x_{a_j}, y_{a_j})^T \right\| \right) \tag{31}
+$$
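The step cost in the equation above and the pairwise collision constraint in Eq. (31) can be sketched as follows. Note that the text does not give a numerical value for the collision radius $R$; the value below is an illustrative assumption.

```python
import numpy as np

# Sketch of the multi-agent step cost l(t, x) and the pairwise
# safety constraint g(x) of Eq. (31). R = 0.2 is an assumed value.
R = 0.2

def step_cost(agent_pos, goal_pos):
    # mean distance of the agents from their respective goals
    return np.mean(np.linalg.norm(agent_pos - goal_pos, axis=1))

def g(agent_pos):
    # max over distinct pairs (i, j) of R - ||p_i - p_j||;
    # positive means two agents are closer than R (collision)
    n = len(agent_pos)
    worst = -np.inf
    for i in range(n):
        for j in range(n):
            if i != j:
                d = np.linalg.norm(agent_pos[i] - agent_pos[j])
                worst = max(worst, R - d)
    return worst

agents = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
goals = np.zeros((5, 2))
cost = step_cost(agents, goals)
```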
+
+# B.3.1. COMPARISON OF MULTI-AGENT NAVIGATION WITH BASELINES
+
+
+Figure 10. Snapshots of multi-agent navigation trajectories at different time instances using MPPI. The trajectories indicate that the agents adopt a highly conservative strategy to prevent collisions. Consequently, this leads to a reduction in performance, as the agents end up very far from their respective goals.
+
+
+
+
+
+
+Figure 11. Snapshots of multi-agent navigation trajectories at different time instances using MPPI-NCBF. The observed trajectories demonstrate suboptimal behavior similar to that of the MPPI policy. Consequently, this results in high-performance costs, indicating its inability to effectively co-optimize safety and performance.
+
+
+
+
+
+Figures 10, 11, and 12 illustrate the trajectories obtained by the baseline methods for the Multi-Agent Navigation problem. It can be observed that the trajectories obtained by MPPI and MPPI-SF are highly conservative, implying that these methods prioritize safety to mitigate potential conflicts among agents. In contrast, the policy derived from SAC-Lag fails to maintain safety, resulting in agent collisions. This indicates that as system complexity increases, the baseline methods tend to prioritize either safety or performance, leading to suboptimal behavior and safety violations. Conversely, the proposed approach effectively co-optimizes safety and performance, even in complex high-dimensional settings, achieving superior performance while ensuring safety. The visualization of the trajectories can be found on the project website1 .
+
+
+Figure 12. Snapshots of multi-agent navigation trajectories at different time instances using SAC-Lag. The trajectories indicate that agents demonstrate less conservative behavior compared to MPPI and MPPI-NCBF, but they lead to collisions. These safety violations are critical and cannot be disregarded, further highlighting the limitations of the baseline methods in simultaneously optimizing safety and performance.
+
+
+
+
+
+# C. Implementation Details of the Algorithms
+
+This section provides an in-depth overview of our algorithm and baseline implementations, including hyperparameter configurations and the cost/reward functions used in the baselines across all experiments.
+
+# C.1. Experimentation Hardware
+
+All experiments were conducted on a system equipped with an 11th Gen Intel Core i9-11900K @ 3.50GHz × 16 CPU, 128GB RAM, and an NVIDIA GeForce RTX 4090 GPU for training.
+
+# C.2. Hyperparameters for the Proposed Algorithm
+
+We maintained the same training settings across all experiments, as detailed below:
+
+| Hyperparameter | Value |
+| --- | --- |
+| Network Architecture | Multi-Layer Perceptron (MLP) |
+| Number of Hidden Layers | 3 |
+| Activation Function | Sine function |
+| Hidden Layer Size | 256 neurons per layer |
+| Optimizer | Adam |
+| Learning Rate | $2 \times 10^{-5}$ |
+| *Boat Navigation*: Number of Training Points | 65000 |
+| *Boat Navigation*: Number of Pre-Training Epochs | 50K |
+| *Boat Navigation*: Number of Training Epochs | 200K |
+| *Pursuer Vehicle Tracking Evader*: Number of Training Points | 65000 |
+| *Pursuer Vehicle Tracking Evader*: Number of Pre-Training Epochs | 60K |
+| *Pursuer Vehicle Tracking Evader*: Number of Training Epochs | 300K |
+| *Multi-Agent Navigation*: Number of Training Points | 65000 |
+| *Multi-Agent Navigation*: Number of Pre-Training Epochs | 60K |
+| *Multi-Agent Navigation*: Number of Training Epochs | 400K |
+
+Table 1. Hyperparameters for the proposed algorithm
+
+# C.3. MPPI based baselines
+
+For all the experiments we consider the MPPI cost term as follows:
+
+$$
+C_{MPPI} = C(t, x(t), \mathbf{u}) + \lambda \max(g(x), 0) \tag{32}
+$$
+
+where $\lambda$ is the trade-off parameter and $C(t, x(t), \mathbf{u})$, $g(x)$ are the cost and safety functions defined in Appendix B. Table 2 lists the hyperparameters used for MPPI in all experiments:
+
+| Hyperparameter | Value |
+| --- | --- |
+| Trade-off parameter ($\lambda$) | 100 |
+| Planning Horizon | 20 |
+| Softmax Lambda | 200 |
+| No. of Rollouts | 8000 |
+
+Table 2. Hyperparameters for MPPI Baselines
+
+# C.4. SAC-Lag hyperparameters
+
+For all the experiments, we consider the reward term as follows:
+
+$$
+R_{SAC\text{-}Lag} = -C(t, x(t), \mathbf{u}) - \mathbb{I}_{g(x) > 0} \times 100 + \mathbb{I}_{l(t, x(t)) < 0.1} \times 100 \tag{33}
+$$
+
+where $C(t, x(t), \mathbf{u})$, $g(x)$ are the cost and safety functions defined in Appendix B. Table 3 lists the hyperparameters used for SAC in all experiments.
+
+| Parameter | Value |
+| --- | --- |
+| Policy Architecture | Multi-Layer Perceptron (MLP) |
+| Learning Rate | $3 \times 10^{-4}$ |
+| Buffer Size | 1,000,000 |
+| Learning Starts | 10,000 |
+| Batch Size | 256 |
+| Target Network Update Rate ($\tau$) | 0.005 |
+| Discount Factor ($\gamma$) | 0.99 |
+| *Boat Navigation*: Number of Training Steps | 1,000,000 |
+| *Pursuer Vehicle Tracking Evader*: Number of Training Steps | 2,500,000 |
+| *Multi-Agent Navigation*: Number of Training Steps | 1,000,000 |
+
+Table 3. General Hyperparameters of SAC in our experiments
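The shaped reward in Eq. (33) combines the negated task cost, a penalty of 100 whenever the safety constraint is violated ($g(x) > 0$), and a bonus of 100 when the step cost drops below 0.1 (i.e. near the goal). A minimal sketch of this reward shaping:

```python
# Sketch of the indicator-based reward shaping in Eq. (33):
# negated cost, a -100 safety-violation penalty, a +100 near-goal bonus.
def shaped_reward(cost, g_val, step_cost):
    reward = -cost
    if g_val > 0:        # safety constraint violated
        reward -= 100.0
    if step_cost < 0.1:  # close to the goal
        reward += 100.0
    return reward

r_safe = shaped_reward(cost=2.0, g_val=-0.05, step_cost=0.5)  # -> -2.0
r_viol = shaped_reward(cost=2.0, g_val=0.1, step_cost=0.5)    # -> -102.0
```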
+
+# C.5. PPO-Lag hyperparameters
+
+For all the experiments, we consider the reward term as follows:
+
+$$
+R_{PPO\text{-}Lag} = -C(t, x(t), \mathbf{u}) - \mathbb{I}_{g(x) > 0} \times 100 + \mathbb{I}_{l(t, x(t)) < 0.1} \times 100 \tag{34}
+$$
+
+where $C(t, x(t), \mathbf{u})$, $g(x)$ are the cost and safety functions defined in Appendix B. Table 4 lists the hyperparameters used for PPO in all experiments.
+
+| Parameter | Value |
+| --- | --- |
+| Policy Architecture | Multi-Layer Perceptron (MLP) |
+| Learning Rate | $3 \times 10^{-4}$ |
+| Buffer Size | 1,000,000 |
+| Learning Starts | 10,000 |
+| Batch Size | 256 |
+| Target Network Update Rate ($\tau$) | 0.005 |
+| Discount Factor ($\gamma$) | 0.99 |
+| *Boat Navigation*: Number of Training Steps | 1,000,000 |
+| *Pursuer Vehicle Tracking Evader*: Number of Training Steps | 2,500,000 |
+| *Multi-Agent Navigation*: Number of Training Steps | 1,000,000 |
+
+Table 4. General Hyperparameters of PPO in our experiments
+
+# C.6. CPO hyperparameters
+
+For all the experiments, we consider the reward term as follows:
+
+$$
+R_{CPO} = -C(t, x(t), \mathbf{u}) + \mathbb{I}_{l(t, x(t)) < 0.1} \times 100 \tag{35}
+$$
+
+where $C(t, x(t), \mathbf{u})$, $g(x)$ are the cost and safety functions defined in Appendix B. For the CPO implementation, we used the training settings from (Chen et al., 2020). Table 5 lists the hyperparameters used for CPO in all experiments.
+
+| Parameter | Value |
+| --- | --- |
+| Policy Architecture | Multi-Layer Perceptron (MLP) |
+| Batch Size | 128 |
+| Target KL Divergence | 0.01 |
+| Entropy Coefficient | 0.0 |
+| Reward Discount Factor ($\gamma$) | 0.99 |
+| Cost Discount Factor ($\gamma_c$) | 0.99 |
+| GAE Lambda ($\lambda$) | 0.95 |
+| Cost GAE Lambda ($\lambda_c$) | 0.95 |
+| Critic Norm Coefficient | 0.001 |
+| Penalty Coefficient | 0.0 |
+| Conjugate Gradient Damping | 0.1 |
+| Conjugate Gradient Iterations | 15 |
+| Actor Hidden Sizes | [256, 256] |
+| Critic Hidden Sizes | [256, 256] |
+| Critic Learning Rate | 0.001 |
+| *Boat Navigation*: Number of Training Steps | 10,000,000 |
+| *Pursuer Vehicle Tracking Evader*: Number of Training Steps | 25,000,000 |
+| *Multi-Agent Navigation*: Number of Training Steps | 10,000,000 |
+
+Table 5. CPO Hyperparameters from OmniSafe Configuration used for our experiments
\ No newline at end of file
diff --git a/aphysicsinformedmachinelearningframeworkforsafeandoptimalcontrolofautonomoussystems/images.zip b/aphysicsinformedmachinelearningframeworkforsafeandoptimalcontrolofautonomoussystems/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..b478fade8f1b8c711d0d0d76617de52f0e976532
--- /dev/null
+++ b/aphysicsinformedmachinelearningframeworkforsafeandoptimalcontrolofautonomoussystems/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:166cca4f89fade53a3ebb2cb69b56ca0b924e238cbf46814a3e86ff73bbe37a7
+size 1006715
diff --git a/aphysicsinformedmachinelearningframeworkforsafeandoptimalcontrolofautonomoussystems/layout.json b/aphysicsinformedmachinelearningframeworkforsafeandoptimalcontrolofautonomoussystems/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..c4b4d0b4c90f02ee00db8473efaeac78ca99864f
--- /dev/null
+++ b/aphysicsinformedmachinelearningframeworkforsafeandoptimalcontrolofautonomoussystems/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f1a3cfdc011be6947fc98d48920b8fdb9636b93fd9e2cd1b80f2e1595e929009
+size 995491
diff --git a/apsroaunifiedstrategylearningmethodwithadvantagemetricfornormalformgames/3b98f420-11b4-4b96-92b3-288673cc8d3f_content_list.json b/apsroaunifiedstrategylearningmethodwithadvantagemetricfornormalformgames/3b98f420-11b4-4b96-92b3-288673cc8d3f_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..9fdd062ba8c670c4e8ae88390ee6ed80f0263677
--- /dev/null
+++ b/apsroaunifiedstrategylearningmethodwithadvantagemetricfornormalformgames/3b98f420-11b4-4b96-92b3-288673cc8d3f_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:27b8d633d3d1b37bed81e5c2534f53492fa61b175f2aa4e1361a3fd7966e6e84
+size 209505
diff --git a/apsroaunifiedstrategylearningmethodwithadvantagemetricfornormalformgames/3b98f420-11b4-4b96-92b3-288673cc8d3f_model.json b/apsroaunifiedstrategylearningmethodwithadvantagemetricfornormalformgames/3b98f420-11b4-4b96-92b3-288673cc8d3f_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..da3a8be83c7d67a6a4c8d0c3c8d2e60e5b961486
--- /dev/null
+++ b/apsroaunifiedstrategylearningmethodwithadvantagemetricfornormalformgames/3b98f420-11b4-4b96-92b3-288673cc8d3f_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4adb6c524ce213b109724933b07b0ba5f5e78849f30fb1ddfdabd16775afb907
+size 249825
diff --git a/apsroaunifiedstrategylearningmethodwithadvantagemetricfornormalformgames/3b98f420-11b4-4b96-92b3-288673cc8d3f_origin.pdf b/apsroaunifiedstrategylearningmethodwithadvantagemetricfornormalformgames/3b98f420-11b4-4b96-92b3-288673cc8d3f_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..cb0b4e80f485ab94b3cfd5d703bfdbd4f39d9388
--- /dev/null
+++ b/apsroaunifiedstrategylearningmethodwithadvantagemetricfornormalformgames/3b98f420-11b4-4b96-92b3-288673cc8d3f_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:116d3d77cde4871c41f0466b631594f807cf91f7579ab9613edb6aa167cfc9f0
+size 1781937
diff --git a/apsroaunifiedstrategylearningmethodwithadvantagemetricfornormalformgames/full.md b/apsroaunifiedstrategylearningmethodwithadvantagemetricfornormalformgames/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..f549fad5d67ebe459f1635dcf6dea6797d758637
--- /dev/null
+++ b/apsroaunifiedstrategylearningmethodwithadvantagemetricfornormalformgames/full.md
@@ -0,0 +1,1096 @@
+# A-PSRO: A Unified Strategy Learning Method with Advantage Metric for Normal-form Games
+
+Yudong Hu1 Haoran Li1 Congying Han1 Tiande Guo1 Mingqiang Li2 Bonan Li1
+
+# Abstract
+
+Solving the Nash equilibrium in normal-form games with large-scale strategy spaces presents significant challenges. Open-ended learning frameworks, such as PSRO and its variants, have emerged as effective solutions. However, these methods often lack an efficient metric for evaluating strategy improvement, which limits their effectiveness in approximating equilibria. In this paper, we introduce a novel evaluative metric called Advantage, which possesses desirable properties inherently connected to the Nash equilibrium, ensuring that each strategy update approaches equilibrium. Building upon this, we propose the Advantage Policy Space Response Oracle (A-PSRO), an innovative unified open-ended learning framework applicable to both zero-sum and general-sum games. A-PSRO leverages the Advantage as a refined evaluation metric, leading to a consistent learning objective for agents in normal-form games. Experiments showcase that A-PSRO significantly reduces exploitability in zero-sum games and improves rewards in general-sum games, outperforming existing algorithms and validating its practical effectiveness.
+
+# 1. Introduction
+
+The Nash equilibrium in normal-form games, encompassing both zero-sum and general-sum scenarios, is a fundamental concept for modeling the behavior of rational, utility-maximizing agents. By approximating these equilibria, agents that outperform humans have been developed in various domains, including chess (Silver et al., 2018), poker (Brown & Sandholm, 2019), and real-time strategy (RTS)
+
+1 University of Chinese Academy of Sciences, Beijing, China
+2 Information Science Academy, China Electronics Technology Group Corporation, Beijing, China. Correspondence to: Congying Han <hancy@ucas.ac.cn>.
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+games (Vinyals et al., 2019; Berner et al., 2019). However, achieving equilibrium becomes increasingly challenging in large-scale games with complex strategy spaces (Hernandez-Leal et al., 2017). The development of a unified and efficient equilibrium solver remains a challenge.
+
+The Policy Space Response Oracle (PSRO) offers an efficient open-ended strategy learning framework (Lanctot et al., 2017). Due to its scalability, numerous subsequent works have developed various PSRO variants to enhance the efficiency of solving specific games. In zero-sum games, strategies that emphasize diversity have proven effective for learning Nash equilibrium (Balduzzi et al., 2019). Methods such as UDF-PSRO (Liu et al., 2021), UDM-PSRO (Liu et al., 2022) and PSD-PSRO (Yao et al., 2023), which are based on diversity modeling, are among the most efficient equilibrium solvers for large-scale zero-sum games. However, while increasing diversity improves exploration efficiency, it often results in inefficient strategy improvement due to the lack of proper guidance. To address this issue, our work introduces a novel approach by integrating the concept of Advantage as an independent evaluative metric for strategies. We establish a mathematical equivalence between advantage maximization and Nash equilibrium, thereby supporting the use of advantage as a strategic learning objective within the PSRO framework.
+
+In general-sum games, Nash equilibria consist of multiple joint strategies, each associated with different rewards (Foerster et al., 2018). This contrasts with symmetric zero-sum games, where the Minimax property ensures that all Nash equilibria yield identical rewards (Li et al., 2019). Previous efforts to improve the PSRO algorithm in general-sum games have primarily focused on enhancing the efficiency of equilibrium solving (Zhang et al., 2021), often neglecting the differences in rewards between equilibria. However, recent research has highlighted that different objectives in strategy learning can lead to distinct equilibria (Willi et al., 2022). By adopting an appropriate objective, agents can achieve equilibrium strategies that also maximize rewards (Hu et al., 2022). In this work, we present a method to enhance strategy rewards using the advantage function. This approach enables our algorithm to achieve higher rewards compared to other PSRO methods.
+
+
+(a) Zero-sum game geometrical structure
+
+
+(b) General-sum game geometrical structure
+Figure 1. The geometrical structure examples of zero-sum games and general-sum games. Figure (a) shows the structure of a zero-sum game with both transitive and cyclic dimensions. The direction of the strategy gradient refers to the expected updates for a strategy that maximizes the reward. Figure (b) shows the structure of a general-sum game with multiple equilibria. The independent learning process of the agents leads to the update of the strategy in the direction indicated by the arrow.
+
+In summary, we introduce A-PSRO, an improved equilibrium solver for large-scale normal-form games that leverages the Advantage function. In symmetric zero-sum games, the Advantage function exhibits favorable properties, enabling agents to approach the Nash equilibrium deterministically. By incorporating the Advantage function into existing diversity-based approaches, we achieve significant improvements in learning Nash equilibrium strategies. In general-sum games, although the Advantage function is non-convex, its local maxima correspond to equilibria with varying rewards. By exploring strategies near the global optimum with the objective of maximizing the Advantage function, our algorithm converges to equilibria with higher rewards. We also provide methods for both exact and approximate computation of the Advantage function, allowing A-PSRO to integrate seamlessly into existing PSRO frameworks.
+
+We conducted experiments on various games to evaluate our algorithm. In zero-sum games, the strategies derived using the A-PSRO algorithm are significantly closer to equilibrium. In general-sum games, the A-PSRO algorithm enables agents to learn strategies that achieve globally optimal rewards, avoiding entrapment in locally optimal equilibria. These results underscore the effectiveness of the A-PSRO algorithm as a unified framework for solving equilibrium.
+
+# 2. Related Work and Background
+
+In this paper, we focus on normal-form games with finite dimensions, typically represented by three key elements denoted as $(\mathcal{N},\mathcal{A},\mathcal{U})$. Here, $\mathcal{N}$ represents the players in the game, $\mathcal{A}$ denotes the action (pure strategy) space of the players, and $\mathcal{U}$ refers to the utility function. In such games, agents generally adopt strategies $\pi$ rather than directly choosing actions $a\in \mathcal{A}$. $\pi$ is defined as a probability distribution over actions: $\pi = (p_{1},p_{2},\dots ,p_{|\mathcal{A}|})$, $\sum p_i = 1$, where $p_i$ represents the probability of choosing action $a_i$. To clearly distinguish between different strategies, we use $\pi_i^t$ to denote the $t$-th strategy of player $i$.
+
+The Nash equilibrium (NE) characterizes a stable state, where no agent can increase its reward by unilaterally altering its strategy. For the joint strategy $(\pi_1,\dots ,\pi_n)$ , it is an NE when $\forall i\in \{1,\dots ,n\}$ , $\pi_{i}$ is the best response (BR) to the strategies of other agents: $\forall \pi_i^{\prime}$ , $U_{i}(\pi_{i}^{\prime},\pi_{-i})\leq$ $U_{i}(\pi_{i},\pi_{-i})$ ( $U_{i}$ denotes the expected reward of agent $i$ , $\pi_{-i}$ represents the joint strategy except for agent $i$ ).
+
+Exploitability is defined as the distance of joint strategy $(\pi_{i},\pi_{-i})$ and the Nash Equilibrium:
+
+$$
+\mathcal{E}(\pi_i, \pi_{-i}) = \sum_{k=1}^{n} \left[ \max_{\pi_k^*} U_k(\pi_k^*, \pi_{-k}) - U_k(\pi_k, \pi_{-k}) \right].
+$$
+
+If the exploitability of a joint strategy $(\pi_i, \pi_{-i})$ is 0, it is a Nash equilibrium.
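For a finite two-player matrix game, the definition above can be computed directly, since a best response can always be found among pure strategies (rows and columns of the payoff matrices). A sketch using Rock-Paper-Scissors, whose unique Nash equilibrium is uniform play:

```python
import numpy as np

# Sketch: exploitability of a joint strategy (pi1, pi2) in a finite
# two-player matrix game with payoff matrices U1 (row player) and U2.
def exploitability(pi1, pi2, U1, U2):
    br1 = np.max(U1 @ pi2)  # player 1's best pure-strategy payoff vs pi2
    br2 = np.max(pi1 @ U2)  # player 2's best pure-strategy payoff vs pi1
    return (br1 - pi1 @ U1 @ pi2) + (br2 - pi1 @ U2 @ pi2)

# Rock-Paper-Scissors: uniform play is the Nash equilibrium
U1 = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])
U2 = -U1
uniform = np.ones(3) / 3
rock = np.array([1.0, 0.0, 0.0])
```

Here `exploitability(uniform, uniform, U1, U2)` is zero, while the pure strategy `rock` against uniform play is exploitable.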
+
+# 2.1. Symmetric Zero-sum Games with Transitive Dimension and Cyclic Dimension
+
+Symmetric zero-sum games with two players $(i,j)$ are among the most studied game forms because their model is consistent with many real-world scenarios (Zhang & Sandholm, 2022; Sokota et al., 2023). In such games, the joint strategy of the agents is $(\pi_i,\pi_j)$ , with their rewards defined as $U_{i}(\pi_{i},\pi_{j}) = -U_{j}(\pi_{i},\pi_{j})$ . The symmetric property implies that both agents share the same strategy space $\Pi$ , and it holds that $U_{i}(\pi_{i}^{1},\pi_{j}^{2}) = U_{j}(\pi_{i}^{2},\pi_{j}^{1})$ .
+
+Previous studies have shown that the geometric structure of symmetric zero-sum games resembles a spinning top, consisting of both transitive and cyclic dimensions (illustrated in Figure 1(a)) (Czarnecki et al., 2020). The transitive dimension characterizes the absolute strengths between strategies.
+
+A game is considered transitive if there exists an evaluation function for the strength of a strategy, denoted as $f_{v}(\pi)$ . In strategic interactions, the strategy with a higher evaluation function always yields a higher reward.
+
+$$
+U_i(\pi_i^1, \pi_j^2) = f_v(\pi_i^1) - f_v(\pi_j^2) = -U_j(\pi_i^1, \pi_j^2).
+$$
+
+The cyclic dimension indicates the presence of mutual restraint among strategies, similar to the dynamics observed in Rock-Scissor-Paper (RSP). In a game with only cyclic dimension, for any strategy $\pi_i^1$ in the strategy space $\Pi$ , its expectation of reward when facing other strategies is 0:
+
+$$
+\int_{\pi_j^0 \in \Pi} U_i(\pi_i^1, \pi_j^0) \, d\pi_j^0 = 0.
+$$
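As an illustrative numerical check of this condition, Rock-Paper-Scissors is purely cyclic: every row of its payoff matrix sums to zero, so any mixed strategy earns expected payoff zero against a uniformly drawn opponent.

```python
import numpy as np

# Quick check of the cyclic-dimension condition on Rock-Paper-Scissors:
# each row of U sums to zero, so pi @ U @ uniform = 0 for every pi.
U = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])
uniform = np.ones(3) / 3
rng = np.random.default_rng(0)
random_strategies = rng.dirichlet(np.ones(3), size=5)  # random mixed strategies
payoffs = random_strategies @ U @ uniform              # all (numerically) zero
```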
+
+Real-world games typically exhibit both transitive and cyclic dimensions, making it impractical to evaluate strategies directly using the equations above. Gradient-based strategy updates may become trapped in the cyclic dimensions, leading to slow convergence towards the Nash equilibrium along the transitive dimension.
+
+# 2.2. General-sum Games with Multiple Equilibria
+
+Unlike zero-sum games, general-sum games typically feature multiple Nash equilibria with varying rewards (Tang et al., 2021), as illustrated in Figure 1(b). Previous studies have often focused on specific types of equilibria, such as Pareto-optimal equilibria (Sen et al., 2003), MENE (Maximum Entropy Nash Equilibrium) (Balduzzi et al., 2018), and optimal PSNE (Pure Strategy Nash Equilibrium) (Nguyen et al., 2023).
+
+In the theory of learning in games, the update rule for each iteration determines the learned equilibrium. Recent studies have focused on improving equilibrium rewards while ensuring convergence to an equilibrium (Foerster et al., 2018; Letcher et al., 2019). It has been demonstrated that providing appropriate guidance to agents can effectively enhance the utility of the learning process (Hu et al., 2022).
+
+# 2.3. Open-ended Learning Framework
+
+Prior research has introduced various methods for solving Nash equilibria, including WOLF (Win or Learn Fast) (Bowling & Veloso, 2001) and AWESOME (Adapt When Everybody is Stationary, Otherwise Move to Equilibrium) (Conitzer & Sandholm, 2003). The most widely used algorithm is fictitious play, favored for its simplicity of execution (Fudenberg & Levine, 1995). There are also algorithms based on adaptive game playing (Freund & Schapire, 1999). It is worth noting that due to the complexity of general-sum game structures, these algorithms typically exhibit guaranteed convergence only in zero-sum games.
+
+The PSRO algorithm presents an effective approach to solving Nash equilibrium in games with large-scale strategy spaces. Inspired by the Double Oracle algorithm (McMahan et al., 2003; Bošanský et al., 2016), PSRO maintains a population of strategies for each agent. The initial strategy population is generated randomly as $\mathcal{P}_i = (\pi_i^1, \dots, \pi_i^t)$. In each iteration, the empirical game matrix for agent $i$ is calculated as $\mathcal{M}_i = \mathcal{P}_i \times U_i \times \mathcal{P}_{-i}$. By adopting fictitious play to solve the Nash equilibrium of the meta-game, with payoff matrices $(\mathcal{M}_i, \mathcal{M}_{-i})$, we can derive the meta-equilibrium for the agents: $(\theta_i, \theta_{-i})$. Then, agent $i$ searches for a new strategy $\pi_i^{t+1}$, usually the best response to the meta-equilibrium of the opponent, $\mathrm{BR}(\theta_{-i})$.
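The loop just described can be sketched for a symmetric zero-sum matrix game. This is a minimal illustration, not the paper's implementation: fictitious play serves as the meta-solver and the oracle returns a pure best response to the opponent's meta-equilibrium mixture.

```python
import numpy as np

# Sketch of one PSRO iteration for a symmetric zero-sum matrix game U.
def fictitious_play(M, iters=2000):
    # approximate NE of the symmetric zero-sum meta-game M by averaging
    # empirical best responses
    counts = np.zeros(len(M))
    counts[0] = 1.0
    for _ in range(iters):
        avg = counts / counts.sum()
        counts[int(np.argmax(M @ avg))] += 1.0
    return counts / counts.sum()

def psro_iteration(population, U):
    # empirical meta-game payoffs between all population members
    M = np.array([[p @ U @ q for q in population] for p in population])
    theta = fictitious_play(M)  # meta-equilibrium weights over the population
    opponent = np.sum([w * p for w, p in zip(theta, population)], axis=0)
    br = np.zeros(len(U))
    br[int(np.argmax(U @ opponent))] = 1.0  # pure best-response oracle
    return theta, br

# Rock-Paper-Scissors, starting from a population containing only "rock":
# PSRO discovers "paper", then "scissors"
U = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])
pop = [np.array([1.0, 0.0, 0.0])]
for _ in range(2):
    theta, new_strategy = psro_iteration(pop, U)
    pop.append(new_strategy)
```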
+
+Improvements to the PSRO algorithm primarily involve incorporating new meta-game solvers or adopting diverse objectives to guide the generation of new strategies. Additionally, there are open-ended algorithms that explore alternative equilibrium concepts, such as the $\alpha$ -Rank equilibrium ( $\alpha$ -PSRO) (Muller et al., 2020), the correlated equilibrium (JPSRO) (Marris et al., 2021) and coarse correlated equilibrium (CCE) (Liu et al., 2024).
+
+In this paper, we focus on refining the process of strategy exploration of PSRO framework. Previous works have typically enhanced strategy generation by increasing diversity. Several methods exist for measuring diversity, including Expected Cardinality (EC) (Perez-Nieves et al., 2021), Behavioral Diversity (BD), Response Diversity (RD) (Liu et al., 2021) and Policy Space Diversity (PSD) (Yao et al., 2023).
+
+# 3. From Exploitability to Advantage Function
+
+We first consider symmetric zero-sum games, where both agents share the same strategy space $\Pi = \{\pi^1,\pi^2,\dots \}$ . From the symmetry, we have the following property:
+
+Theorem 3.1. In symmetric zero-sum games, if the joint strategy $(\pi_i^1,\pi_j^2)$ is a Nash equilibrium, then $(\pi_i^1,\pi_j^1)$ and $(\pi_i^2,\pi_j^2)$ are also Nash equilibria.
+
+For a strategy $\pi$ in a zero-sum game, its best response $\mathrm{BR}(\pi)$ is usually a set containing many strategies. However, we have the following property:
+
+Theorem 3.2. For any two-player game, when the strategy of the other player is fixed (denoted as $\pi_j$), there always exists a pure strategy $a_i \in \mathcal{A}$ such that $a_i \in \mathrm{BR}(\pi_j)$. In particular, in zero-sum games, $U_i(\pi_i, \pi_j)$ is the same for all $\pi_j \in \mathrm{BR}(\pi_i)$.
+
+Then we have:
+
+$$
+\begin{aligned} \mathcal{E}(\pi_i^1, \pi_j^2) &= \max_{\pi_i'} U_i(\pi_i', \pi_j^2) - U_i(\pi_i^1, \pi_j^2) + \max_{\pi_j''} U_j(\pi_i^1, \pi_j'') - U_j(\pi_i^1, \pi_j^2) \\ &= U_i\left(\mathrm{BR}(\pi_j^2) \cap \mathcal{A}, \pi_j^2\right) + U_j\left(\pi_i^1, \mathrm{BR}(\pi_i^1) \cap \mathcal{A}\right) \\ &= -U_j\left(\mathrm{BR}(\pi_j^2) \cap \mathcal{A}, \pi_j^2\right) - U_i\left(\pi_i^1, \mathrm{BR}(\pi_i^1) \cap \mathcal{A}\right). \end{aligned}
+$$
+
+According to Theorem 3.1, the symmetry in most 2p0s games allows us to define the distance between a single strategy and equilibrium, similar to the exploitability. If we consider $\mathrm{BR}(\pi_i^1)$ as a function of $\pi_i^1$ , then the value of $U_{i}(\pi_{i}^{1},\mathrm{BR}(\pi_{i}^{1})\cap \mathcal{A})$ is determined only by $\pi_i^1$ .
+
+Definition 3.3. In two-player zero-sum games, we define the advantage function as:
+
+$$
+\mathcal{V}_i(\pi_i) = U_i(\pi_i, a_j^0), \quad a_j^0 \in \mathrm{BR}(\pi_i) \cap \mathcal{A}_j.
+$$
+
+From Theorem 3.2, we can see that this definition is well-posed, because the choice of best response in $\mathrm{BR}(\pi_i)$ does not affect the value of $\mathcal{V}_i$. Therefore, we also write $\mathcal{V}_i(\pi_i) = U_i(\pi_i,\mathrm{BR}(\pi_i)\cap \mathcal{A}_j)$ to denote the advantage function.
+
+Theorem 3.4. In two-player zero-sum games,
+
+- $\mathcal{E}(\pi_i, \pi_j) = -(\mathcal{V}_i(\pi_i) + \mathcal{V}_j(\pi_j))$ .
+- $\mathcal{V}_i(\pi_i)$ is Lipschitz continuous about $\pi_i$ , and $-\mathcal{V}_i(\pi_i)$ is a convex function about $\pi_i$ .
+- If the game is symmetric, $\forall \pi_i$ , $\mathcal{V}_i(\pi_i) \leq 0$ . The joint strategy $(\pi_i^0, \pi_j^0)$ is a Nash equilibrium if and only if $\mathcal{V}_i(\pi_i^0) = \mathcal{V}_j(\pi_j^0) = 0$ . In games with only transitive dimension, $\mathcal{V}_i(\pi_i) > \mathcal{V}_j(\pi_j)$ implies $U_i(\pi_i, \pi_j) > 0$ .
+
+From Theorem 3.4, we observe that the advantage function has a unique local and global maximum of 0 in symmetric zero-sum games, indicating that the corresponding strategy is a Nash equilibrium. This finding suggests that improving the advantage of strategies guides the learning process towards convergence at the Nash equilibrium. Additionally, the advantage function can be computed within the pure strategy space $\mathcal{A}$ .
+
+# 4. Advantage Policy Space Response Oracle
+
+In this section, we introduce the A-PSRO framework and its theoretical properties for zero-sum and general-sum games.
+
+# 4.1. A-PSRO for Symmetric Zero-sum Games
+
+In classic PSRO algorithms, new strategies added to the population $\Pi_{i} = \{\pi_{i}^{1},\dots ,\pi_{i}^{t}\}$ are typically derived through best response, with the opponent strategy fixed as the meta-Nash strategy:
+
+$$
+\pi_i^{t+1} \in \mathrm{BR}(\theta_j), \quad \text{where } (\theta_i, \theta_j) = \mathrm{NE}(\mathcal{M}_i, \mathcal{M}_j).
+$$
+
+Best response-based updates may stagnate within cyclic dimensions, causing the PSRO algorithm to converge slowly in non-transitive games. To address this, diversity strategy algorithms offer an improvement over the classic PSRO by increasing the probability of discovering novel strategies in the transitive dimension. For example, DPP-PSRO (Perez-Nieves et al., 2021) incorporates Expected Cardinality (EC) as a regularizer to generate new strategies. However, new strategies generated by diversity algorithms are stochastic, which means they cannot deterministically approach equilibrium.
+
+From Theorem 3.4, we can see that increasing the advantage of strategy will approach the Nash equilibrium. Since $-\mathcal{V}_i(\pi_i)$ is convex, we design the A-PSRO to introduce advantage as the objective of strategy learning.
+
+For the population-based strategy update approach in PSRO, we define $\mathcal{V}_i(\pi_i\mid \mathcal{P}_j) = U_i(\pi_i,\mathrm{BR}(\pi_i\mid \mathcal{P}_j))$ , where
+
+$$
+\mathrm{BR}(\pi_i \mid \mathcal{P}_j) = \operatorname{argmax}_{\pi_j \in \mathcal{P}_j} U_j(\pi_i, \pi_j).
+$$
+
+Theorem 4.1. In symmetric zero-sum games, given the population $\mathcal{P}_i = \mathcal{P}_j = \{\pi_i^1,\dots ,\pi_i^t\}$ , $\forall \pi_{i}^{k}\in \mathcal{P}_{i}$ , we have $\mathcal{V}_i(\pi_i^k)\leq \mathcal{V}_i(\theta_i\mid \mathcal{P}_i)$ . Here, $\theta_{i}$ is the equilibrium of the meta-game corresponding to the population $\mathcal{P}_i$ .
+
+Note that $\mathcal{V}_i(\pi_i^k) \leq \mathcal{V}_i(\theta_i)$ does not always hold (example given in Supplementary Material). However, we have:
+
+$$
+\forall \pi_i^k \in \mathcal{P}_i, \; \mathcal{V}_i(\pi_i^k) \leq \mathcal{V}_i(\theta_i), \quad \text{when } \mathrm{hull}(\mathcal{P}_i) = \Pi,
+$$
+
+where $\mathrm{hull}(\mathcal{P}_i)$ is the convex hull of population. Theorem 4.1 indicates that the equilibrium of the meta-game approximately maximizes the advantage of the current population.
+
+We aim to search for a new strategy with a deterministic increase in the advantage of population. We have the following property of the advantage in population iterations.
+
+Theorem 4.2. Given the meta-equilibrium strategy $\theta_{i}$ , if $\mathcal{V}_i(\theta_i) < 0$ , there exists $\Delta \pi_{i} \in \mathcal{A}$ and $\delta > 0$ satisfying:
+
+$$
+\forall 0 < d < \delta , \quad \mathcal {V} _ {i} \left((1 - d) \cdot \theta_ {i} + d \cdot \Delta \pi_ {i}\right) > \mathcal {V} _ {i} (\theta_ {i}).
+$$
+
+Theorem 4.2 indicates that if the meta-equilibrium $\theta_{i}$ of the current population is not a Nash equilibrium, we can find a strategy closer to the Nash equilibrium in its neighborhood. Furthermore, according to optimization theory, since the advantage function is convex, this strategy update guarantees to approach the Nash equilibrium at a sublinear rate.
+
+Next, we explain how to improve PSRO's strategy exploration process by using the advantage function as a regularization term. A-PSRO differs from other algorithms only when adding new strategies $\pi_i^{t+1}$ to the current population $\mathcal{P}_i$ , and we refer to this process as LA (LookAhead). Given step size $d$ , the new strategy generated is:
+
+$$
+\begin{array}{l} \pi_i^{t+1} = (1 - d) \cdot \theta_i + d \cdot \Delta \pi_i, \\ \Delta \pi_i = \operatorname*{argmax}_{\Delta \pi \in \mathcal{A}} \mathcal{V}_i((1 - d) \cdot \theta_i + d \cdot \Delta \pi). \end{array}
+$$
+
+For finite-dimensional zero-sum games, the advantage function can be computed through matrix multiplication:
+
+$$
+\mathcal{V}_i\left(\pi_i\right) = \min_{a_j \in \mathcal{A}_j} \pi_i \times U_i \times a_j,
+$$
+
+making its computational complexity comparable to that of the best response.
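Concretely, for a finite symmetric zero-sum game the advantage is the minimum entry of the row vector $\pi_i \times U_i$. The following is an illustrative numpy sketch (our own, with the rock-paper-scissors payoff matrix as an assumption):

```python
import numpy as np

def advantage(pi_i, U_i):
    """V_i(pi_i) = min over opponent pure actions a_j of pi_i^T U_i e_{a_j}."""
    return float(np.min(pi_i @ U_i))

# Rock-paper-scissors payoff matrix for player i.
U_i = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])
v_uniform = advantage(np.ones(3) / 3, U_i)       # 0.0: uniform is the NE
v_rock = advantage(np.array([1., 0., 0.]), U_i)  # -1.0: Rock loses to Paper
```

Here the advantage is non-positive and is maximized (at 0) exactly by the equilibrium strategy, illustrating why ascending the advantage approaches the Nash equilibrium.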
+
+# 4.2. A-PSRO for Two-player General-sum Games
+
+To the best of our knowledge, ensuring convergence in general-sum games remains a challenging problem. Most algorithms can only converge within specific game structures. However, since the advantage function is directly tied to the reward function, it can guide agents to learn strategies that achieve higher rewards. When multiple equilibria are learnable within a game, A-PSRO outperforms general algorithms by identifying equilibria with higher joint rewards.
+
+In a two-player general-sum game, an action $a_i^1$ is called dominated if there exists $a_i^k \in \mathcal{A}_i \setminus \{a_i^1\}$ such that $U_i(a_i^1, \pi_j) \leq U_i(a_i^k, \pi_j)$ for all $\pi_j \in \Pi_j$. It is usually assumed that dominated actions can be removed from the game. Therefore, we define the simplified game:
+
+Definition 4.3. We call a game a simplified game if there does not exist dominated pure strategy $a_{i}$ for any player $i$ .
+
+For an arbitrary general-sum game, a corresponding simplified game can be obtained by removing dominated actions.
+
+When extending Definition 3.3 to general-sum games, different choices of $\mathrm{BR}(\pi_i)$ will lead to inconsistent advantage functions. Fortunately, in simplified games, we have the following property:
+
+Theorem 4.4. In two-player simplified games, $\forall \pi_{i}$ , for any $a_{j}^{l} \in \mathrm{BR}(\pi_{i}) \cap \mathcal{A}_{j}$ and $\forall \delta > 0$ , there always exists $\pi_{i}'$ which satisfies $|\pi_{i}' - \pi_{i}| < \delta$ and $\mathrm{BR}(\pi_{i}') \cap \mathcal{A}_{j} = \{a_{j}^{l}\}$ .
+
+This theorem shows that for almost all strategies, their advantage can be defined through a unique best response. To maintain consistency, when multiple best responses from opponents exist, we select the one that maximizes the agent's own reward to define the advantage function:
+
+Definition 4.5. In two-player simplified games, we define $\mathcal{V}_i(\pi_i) = \max_{a_j} U_i(\pi_i, a_j)$ , where $a_j \in \mathrm{BR}(\pi_i) \cap \mathcal{A}_j$ .
+
+This definition is therefore always well defined. As in zero-sum games, the advantage function has the following properties in simplified general-sum games:
+
+Theorem 4.6. In two-player simplified games,
+
+- $\forall i, \mathcal{V}_i(\pi_i)$ is Lipschitz continuous.
+- We assume that the joint strategy $(\pi_i, \pi_j)$ is a Nash equilibrium. If $\mathrm{BR}(\pi_i) \cap \mathcal{A}_j$ has a unique element, then $\mathcal{V}_i(\pi_i)$ is a local maximum.
+- Under the same assumption, if $(\pi_i^1,\pi_j^2)$ and $(\pi_i^3,\pi_j^4)$ are both NEs, then $(\pi_i^1,\pi_j^2)$ Pareto dominates $(\pi_i^3,\pi_j^4)$ if and only if $\mathcal{V}_i(\pi_i^1)\geq \mathcal{V}_i(\pi_i^3)$ and $\mathcal{V}_j(\pi_j^2)\geq \mathcal{V}_j(\pi_j^4)$ .
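As a concrete instance of the third bullet, consider a symmetric Stag Hunt (the payoff values below are our own illustrative choice, not taken from the paper): the advantage of Definition 4.5 ranks the game's two pure Nash equilibria in the Pareto order.

```python
import numpy as np

# Symmetric Stag Hunt for player i (rows) vs. player j (columns);
# payoffs are illustrative. By symmetry, U_j[a_i, a_j] = U_i[a_j, a_i].
#                 j: Stag  Hare
U_i = np.array([[4., 1.],      # i: Stag
                [3., 3.]])     # i: Hare
U_j = U_i.T

def advantage(pi_i, U_i, U_j):
    """Definition 4.5: i's best payoff among j's pure best responses."""
    opp_payoffs = pi_i @ U_j                     # j's payoff per pure a_j
    br_set = np.flatnonzero(np.isclose(opp_payoffs, opp_payoffs.max()))
    return max(float(pi_i @ U_i[:, a_j]) for a_j in br_set)

v_stag = advantage(np.array([1., 0.]), U_i, U_j)  # 4.0
v_hare = advantage(np.array([0., 1.]), U_i, U_j)  # 3.0
```

Both (Stag, Stag) and (Hare, Hare) are Nash equilibria of this game; the advantages evaluate to 4 and 3 respectively, matching the Pareto ordering of the two equilibria stated in Theorem 4.6.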
+
+In general-sum games, the advantage function is nonconvex, which means that strategy gradient algorithms only converge to local maxima. However, when computing the meta-equilibrium $(\theta_i,\theta_j)$ within the population-based PSRO algorithm, we prove that there exists a space in which the strategy converges to the global optimum.
+
+Theorem 4.7. In two-player simplified games, the current population for agent $i$ is $\mathcal{P}_i = \{\pi_i^1, \dots, \pi_i^t\}$ . $\theta_i$ is the global maximum point of the advantage $\mathcal{V}_i$ in $\mathrm{hull}(\mathcal{P}_i)$ . Then there must exist a non-zero measure set $\mathcal{D}' \subset \mathrm{hull}(\mathcal{P}_i)$ which satisfies that if $\theta_i'$ is a local maximum point of the advantage $\mathcal{V}_i$ in $\mathcal{D}'$ , then $\mathcal{V}_i(\theta_i') = \mathcal{V}_i(\theta_i)$ .
+
+Theorems 4.6 and 4.7 establish that in general-sum games, there exists a non-zero measure set near the optimal equilibrium where the population of strategies converges towards that equilibrium. Strategies close to equilibria with optimal rewards tend to have higher advantage values due to the Lipschitz continuity of the advantage function.
+
+In A-PSRO, we adopt a strategy exploration approach designed to increase the probability of discovering strategies with higher advantage:
+
+$$
+\pi_ {i} ^ {t + 1} = (1 - d) \cdot \theta_ {i} + d \cdot \Delta \pi_ {i},
+$$
+
+$$
+\theta_{i} = \operatorname *{argmax}_{(\theta^{\prime}_{i},\theta^{\prime}_{j})\in \Theta}\mathcal{V}_{i}(\theta^{\prime}_{i}), \Theta = \bigcup_{\pi_{i,j}\in \mathrm{hull}(\mathcal{P}_{i,j})}\mathcal{O}(\mathcal{P}_{i},\mathcal{P}_{j}\mid \pi_{i,j}).
+$$
+
+Here, $d$ is the fixed step size, and the calculation of $\Delta \pi_{i}$ is the same as in zero-sum games. $\mathcal{O}(\mathcal{P}_i,\mathcal{P}_j\mid \pi_{i,j})$ represents the meta-equilibrium obtained through a fictitious play oracle with $(\pi_i,\pi_j)$ as initial strategies.
+
+# 4.3. A-PSRO for Large-scale Games
+
+The PSRO framework has also been widely adopted for solving large-scale extensive-form games due to its compatibility with neural network-based implementations. Since this paper primarily focuses on the theoretical properties of the advantage function and its applications in normal-form games, related studies will be presented in future work. Regarding the extension of A-PSRO to other settings, such as sequential decision-making, we have conducted some theoretical analysis and experiments. Here, we present several feasible extensions.
+
+Empirical Games Solver. The research in (Czarnecki et al., 2020) indicates that pure strategies with a wide range of skills extracted from large-scale extensive-form games (such as StarCraft) can also define a normal-form game. The strategies obtained by solving the empirical game are also important for many problems. There are several works on empirical games, such as (Walsh et al., 2004). The A-PSRO algorithm presented in this paper can be directly applied to efficiently solve such empirical games. The solutions can serve as effective approximations of Nash equilibria for large-scale extensive-form games.
+
+Figure 2. The exploitability of the joint strategy learned by agents in various zero-sum games is depicted. Panels: (a) AlphaStar; (b) Transitive game; (c) Elo game + noise=0.1; (d) Elo game + noise=0.5; (e) Elo game + noise=1.0; (f) Simplified Go game. The reduction in exploitability through population iterations can serve as an indicator of the effectiveness in approximating the Nash equilibrium.
+
+Figure 3. The advantage distribution of strategies. Panels: (a) AlphaStar; (b) Transitive game; (c) Elo game + noise=1.0; (d) Simplified Go game. Lighter colored regions indicate strategies with higher advantage.
+
+Neural Network Approximation. When PSRO is applied to solving large-scale extensive-form games, it typically employs RL methods such as policy gradient to explore new strategies. To introduce regularization based on the advantage function, we need to design neural network approximations of the advantage function.
+
+We sample a set of strategies based on the empirical game as a predefined best response set $\mathcal{A}$. Then we pre-train a best response predictor $\mathrm{BR}^*(\cdot, \gamma)$ with parameter $\gamma$, which can be implemented analogously to a classification task. For any $\pi_i$, we generate the label as the opponent's first pure best response to $\pi_i$ in the predefined set:
+
+$$
+\mathrm{BR}^{**}(\pi_i) = \left(c_1, \dots, c_{l-1}, c_l, c_{l+1}, \dots, c_{|\mathcal{A}_j|}\right),
+$$
+
+$$
+c_l = \begin{cases} 1, & \text{if } l = \min \left\{ l' : a_j^{l'} \in \operatorname*{argmax}_{a_j \in \mathcal{A}_j} U_j\left(\pi_i, a_j\right) \right\}, \\ 0, & \text{otherwise}. \end{cases}
+$$
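Label generation can be sketched as follows (an illustrative numpy version, not the authors' implementation); `np.argmax` returns the first maximizer, which implements the lowest-index tie-breaking:

```python
import numpy as np

def br_label(pi_i, U_j):
    """One-hot label BR**(pi_i): the lowest-index pure best response
    of the opponent to pi_i.  U_j is the [n_i, n_j] opponent payoff matrix.
    """
    opp_payoffs = pi_i @ U_j              # opponent's payoff per pure action
    label = np.zeros(U_j.shape[1])
    label[np.argmax(opp_payoffs)] = 1.0   # argmax returns the first maximizer
    return label

# Rock-paper-scissors: the opponent's best response to Rock is Paper.
U_i = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])
label = br_label(np.array([1., 0., 0.]), U_i.T)  # one-hot at Paper
```

These one-hot labels are then the targets for the cross-entropy training of $\mathrm{BR}^*(\cdot, \gamma)$ described below.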
+
+After training the predictor parameter $\gamma$ with the cross entropy loss function $-\mathrm{BR}^{**}(\pi_i)\cdot \log \mathrm{BR}^* (\pi_i,\gamma)$, the approximated advantage function $\mathcal{V}_{i}^{*}(\pi_{i})$ can be computed for each strategy $\pi_{i}$ using the predicted opponent's best response $\mathrm{BR}^{*}(\pi_{i},\gamma)$. This allows us to train the advantage predictor $\hat{\mathcal{V}}_i(\pi_i,\gamma ')$ with the MSE loss. Based on the advantage predictor, we can calculate $\nabla_{\pi_i}\hat{\mathcal{V}}_i(\pi_i,\gamma ')$ with fixed $\gamma^\prime$ to generate the advantage regularization for strategy updating. We demonstrate through the following theorem that when the approximation satisfies certain conditions, it can still effectively guarantee the global effectiveness of strategy exploration.
+
+Theorem 4.8. If $|\nabla_{\pi_i}\hat{\mathcal{V}}_i(\pi_i) - \nabla_{\pi_i}\mathcal{V}_i(\pi_i)|\leq \frac{1}{3} |\nabla_{\pi_i}\mathcal{V}_i(\pi_i)|$ , the strategy generated by the A-PSRO exploration process will converge to equilibrium strategy with the sublinear convergence rate in symmetric zero-sum games.
+
+
+Figure 4. The joint reward of the agent system in general-sum games. Panels: (a) Advanced-Staghunt; (b) Advanced-RSP; (c) Randomly Generated Games. The Staghunt game and the RSP game are repeated 10 times and averaged for plotting. Randomly generated games contain 100 games with the same reward distribution.
+
+Figure 5. The advantage distribution of strategies in different two-player general-sum games. Panels: (a) Advanced-Staghunt; (b) Advanced-RSP; (c) Randomly Generated Games.
+
+Expansion in Multi-player Games. In $n$ -player games ( $n > 2$ ), the advantage function cannot be defined by the best response. We define the advantage function as:
+
+$$
+\mathcal {V} _ {i} \left(\pi_ {i}\right) = \max U _ {i} \left(\pi_ {i}, \mu \left(\pi_ {i}\right)\right), \mu \left(\pi_ {i}\right) = \mathcal {O} \left(\Pi_ {- i} \mid \pi_ {i}\right).
+$$
+
+$\mu(\pi_i)$ is a joint strategy of the players other than $i$, defined as the equilibrium of the $(n-1)$-player subgame induced when player $i$'s strategy is fixed to $\pi_i$; it is computed by an equilibrium oracle $\mathcal{O}$.
+
+In order to efficiently approximate the advantage function, we define an optimistic equilibrium oracle similar to (Basilico et al., 2020; Wang et al., 2022):
+
+$$
+\mu\left(\pi_i\right) = \operatorname*{argmax}_{\pi_{-i}} U_i\left(\pi_i, \pi_{-i}\right), \quad \pi_{-i} = \mathrm{NE}\left(U_{-i} \mid \pi_i\right).
+$$
+
+Here, $\mathrm{NE}(U_{-i} \mid \pi_i)$ represents a Nash equilibrium of the subgame with $\pi_i$ fixed. We give an approximation of the optimistic equilibrium oracle by simplifying it to a two-player game: we view the other agents as a single agent $\{-i\}$ with action space $\mathcal{A}_{-i}$, and approximate the advantage as
+
+$$
+\hat{\mathcal{V}}_i\left(\pi_i\right) = \max_{\pi_{-i}} U_i\left(\pi_i, \pi_{-i}\right), \quad \pi_{-i} \in \mathrm{BR}\left(\pi_i\right) \cap \mathcal{A}_{-i}.
+$$
+
+Based on the above method, the calculation of the advantage function in multiplayer games can also be implemented with a pre-trained predictor, similar to the two-player setting.
+
+# 5. Experiment Results and Discussion
+
+We evaluate the performance of A-PSRO in multiple game environments. We select the state-of-the-art game solvers as baselines, including PSRO (Lanctot et al., 2017), Pipeline-PSRO (P-PSRO) (McAleer et al., 2020), DPP-PSRO (Perez-Nieves et al., 2021), UDF-PSRO (Liu et al., 2021) and PSD-PSRO (Yao et al., 2023). To ensure a fair comparison, all other components of the PSRO framework are kept unchanged, with strategy exploration being the only aspect that differs among the algorithms.
+
+# 5.1. Experiments in Symmetric Zero-sum Games
+
+In symmetric zero-sum games, we test A-PSRO with both LookAhead and diversity modules on complex real-world games. The environments we chose for testing are the normal-form games used in previous PSRO studies (Czarnecki et al., 2020; Liu et al., 2022).
+
+In Figure 2, we show the results in typical zero-sum games. Additional experiments are presented in the Supplementary Material. From Figure 2, we can see that our method achieves a notable reduction in exploitability across all game environments, sometimes by several orders of magnitude. A-PSRO without the diversity module outperforms A-PSRO in the Transitive game. The reason is that the diversity module is designed to navigate the constraints of non-transitive structure, and its impact is limited in games with strong transitive dimensions. The effectiveness of A-PSRO without the LookAhead module declines significantly in all the games, which indicates that our LookAhead module greatly contributes to approximating Nash equilibria in all games.
+
+Figure 6. The exploitability of zero-sum multiagent games and joint reward of general-sum multiagent games. Panels: (a) Exploitability of Zero-sum Games; (b) Joint Reward of General-sum Games. Each algorithm is tested in 4 randomly generated identically distributed game environments, and the averages are plotted.
+
+Compared to diversity-based algorithms, A-PSRO exhibits higher exploitability during the early stages, primarily due to subgames failing to fully cover the entire strategy space. This observation underscores the importance of incorporating diversity exploration into the learning process. When combined with diversity exploration, A-PSRO achieves a stable and rapid reduction in exploitability, even in scenarios where other algorithms experience stagnation.
+
+In Figure 3, we show the distribution of advantages in different games. We use the non-linear dimensionality reduction method t-SNE (t-Distributed Stochastic Neighbor Embedding) (Van der Maaten & Hinton, 2008) to map the strategies into two dimensions while maintaining adjacency between strategies. In games with a strong transitive dimension (AlphaStar, Transitive game), the advantage function exhibits rapid changes around the Nash equilibrium. Conversely, in games with a strong cyclic dimension, where the advantage function changes slowly, the diversity module becomes crucial for learning the Nash equilibrium.
+
+We also compare the running time of A-PSRO with other PSRO algorithms. The experimental results and settings are given in the Supplementary Material for detailed analysis.
+
+# 5.2. Experiments in Two-player General-sum Games
+
+It is worth noting that the compared algorithms do not guarantee convergence in all general-sum games. To ensure a fair comparison, we extended the zero-sum game structure to a general-sum game with multiple equilibria and verified that all algorithms successfully converged to equilibrium under this setting. The detailed game structure is given in the Supplementary Material.
+
+In Figures 4(a) and 4(b), we present the training results of the algorithms in the aforementioned games. Figure 4 demonstrates that A-PSRO consistently learns the optimal equilibrium strategy, whereas other algorithms acquire different equilibria and often stagnate in suboptimal equilibria.
+
+We further conduct experiments in randomly generated games, and the results are depicted in Figure 4(c). In our experiments, A-PSRO also attains the highest reward.
+
+We depict the distribution of advantage in the aforementioned games in Figure 5. It is evident from the figure that the advantage of general-sum games is non-convex. Algorithms based on the strategy gradient are likely to converge to different equilibrium strategies from various initial points.
+
+In general-sum games, A-PSRO requires exploring multiple equilibrium oracles to identify higher-reward strategies. Compared to other algorithms, this inevitably leads to increased computational complexity. We will address this limitation in future work.
+
+# 5.3. Experiments in Multi-player Games
+
+For multi-player games, we test our method in both zero-sum and general-sum games. We randomly generated a series of identically distributed games as test environments.
+
+Figure 6(a) illustrates the distance of the learned strategies from the Nash equilibrium for different algorithms in multiplayer zero-sum games. As shown in the figure, A-PSRO effectively explores strategies with higher advantage and converges toward the Nash equilibrium, whereas other algorithms exhibit slower convergence.
+
+Figure 6(b) presents the joint rewards of the algorithms during the training process of multi-player general-sum games. The results align with Figure 4(c), indicating that A-PSRO consistently learns the equilibrium strategy with the highest joint reward. The parameter setting and experimental details are given in the Supplementary Material.
+
+
+Figure 7. The running time of different algorithms. Panels: (a) Strategy exploration time; (b) Meta game solver time. The left figure shows the cumulative time spent by different algorithms during a single strategy exploration given a meta-equilibrium. The right figure shows the cumulative time spent by different algorithms in solving the equilibrium of the meta-game.
+
+# 5.4. Comparison of Computational Complexity
+
+In this section, we compare the computational complexity of A-PSRO and diversity-based PSRO algorithms.
+
+Assume that the payoff $U$ is an $[n, n]$ matrix, and the populations $\mathcal{P}_i$ and $\mathcal{P}_j$ are $[p, n]$ matrices. The current meta-equilibrium $\pi$ is an $[n, 1]$ vector, and the update step size is $d$.
+
+Taking the classic EC diversity metric as an example:
+
+$$
+\mathrm{EC}\left(\mathcal{P}_i \mid \mathcal{P}_j\right) := \mathrm{Tr}\left(\mathbf{I} - (\mathcal{L} + \mathbf{I})^{-1}\right),
+$$
+
+$$
+\mathcal{L} = \mathcal{M}_i \mathcal{M}_i^T, \quad \mathcal{M}_i = \mathcal{P}_i \times U_i \times \mathcal{P}_j^T.
+$$
+
+Its computational complexity is $O(pn^2 + p^2 n + p^3)$. The strategy exploration process requires evaluating each update direction in the pure strategy space to find the one that maximizes diversity. Thus the actual computational complexity is $n \times O(pn^2 + p^2 n + p^3) = O(pn^3 + p^2 n^2 + p^3 n)$.
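A direct numpy sketch of the EC computation (our own illustration, not the reference implementation; note that $\mathcal{M}_i$ must be $[p, p]$, so the second population enters transposed):

```python
import numpy as np

def expected_cardinality(P_i, P_j, U_i):
    """EC(P_i | P_j) = Tr(I - (L + I)^{-1}), L = M M^T, M = P_i U_i P_j^T.

    P_i, P_j: [p, n] populations (rows are mixed strategies); U_i: [n, n].
    Cost: O(p n^2 + p^2 n) for M, plus O(p^3) for the inverse and trace.
    """
    M = P_i @ U_i @ P_j.T                  # [p, p] interaction matrix
    L = M @ M.T
    I = np.eye(L.shape[0])
    return float(np.trace(I - np.linalg.inv(L + I)))

# Example: the three pure rock-paper-scissors strategies as both populations.
U_rps = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])
ec = expected_cardinality(np.eye(3), np.eye(3), U_rps)  # 1.5
```

For this population, $\mathcal{L}$ has eigenvalues $\{0, 3, 3\}$, so the EC evaluates to $0 + 3/4 + 3/4 = 1.5$.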
+
+Regarding the computation of the advantage function, the following presents an implementation method we use in our code. First, repeat $\pi$ $n$ times to derive an $[n, n]$ matrix $Q$ whose rows all equal $\pi^T$; the LookAhead update direction can then be obtained through:
+
+$$
+\operatorname*{argmax}_{k}\ \min_{a_j}\left(\left[ Q \cdot (1 - d) + I \cdot d \right] \times U \right)_{k, a_j}.
+$$
+
+This process has a computational complexity of $O(n^{3})$ , which is independent of the population size, and lower than the complexity of diversity-based exploration.
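This vectorized computation can be reconstructed in numpy as follows (our reconstruction, not the authors' exact code; variable names are ours):

```python
import numpy as np

def lookahead_direction(pi, U, d):
    """Evaluate all pure LookAhead directions with one [n,n] x [n,n] product.

    Row k of C is the candidate strategy (1-d)*pi + d*e_k; the min over
    columns of C @ U gives each candidate's advantage, and the argmax
    picks the best direction.  Cost O(n^3), independent of population size.
    """
    n = U.shape[0]
    Q = np.tile(pi, (n, 1))                  # [n, n]: every row equals pi
    C = Q * (1.0 - d) + np.eye(n) * d        # candidate updated strategies
    k = int(np.argmax((C @ U).min(axis=1)))  # maximize the advantage
    return k, C[k]

# Rock-paper-scissors from the pure strategy Rock: the best direction
# mixes in Scissors, raising the advantage from -1 to -0.8.
U_rps = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])
k, new_pi = lookahead_direction(np.array([1., 0., 0.]), U_rps, d=0.1)
```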
+
+In our experiments, the time-consuming modules include meta-game solving, diversity-based strategy exploration, and non-diversity-based strategy exploration. Among them, the experimental code only differs in the last module between A-PSRO and other algorithms.
+
+From Figure 7(a), we can see that if only the LookAhead module is used (ours without diversity), the time spent on strategy exploration in A-PSRO increases almost linearly. From other algorithms (which perform diversity exploration with a certain probability), it can be observed that diversity exploration leads to a nonlinear increase in the time per iteration. This suggests that using the advantage function as an evaluation metric does not introduce more computational complexity compared to diversity metrics.
+
+From Figure 7(b) and empirical analysis, it can be observed that the solving time of the meta-game with fictitious play grows exponentially with the population size. A-PSRO has the longest runtime, indicating that A-PSRO has the largest population size during training. Considering that with the pipeline improvement (McAleer et al., 2020) the PSRO algorithm does not expand the population at every iteration but only adds new strategies when the existing ones converge (see Algorithm 2 in the appendix for details), this demonstrates that A-PSRO's strategy exploration quickly improves the existing strategies in the population to optimal.
+
+Comparison with Traditional Game Solver. The above results also demonstrate the advantages of population-based equilibrium solving algorithms over traditional approaches. Compared to exact solution algorithms such as linear programming, PSRO-type algorithms can obtain approximate solutions at lower computational cost. If we consider an exploitability level of $10^{-2}$ to be sufficiently close to equilibrium, A-PSRO requires fewer than 50 iterations to achieve this, with a total runtime less than 1 minute, which is significantly lower than the time required for exact solutions.
+
+On the populations obtained from A-PSRO, fictitious play only requires about $10^{3}$ iterations to reach exploitability $10^{-4}$. In contrast, it takes about $10^{4}$ iterations when using fictitious play directly. Since the process of solving meta-equilibria based on fictitious play is the most time-consuming component in the PSRO framework, A-PSRO accelerates the convergence speed of the PSRO framework toward Nash equilibria through efficient strategy exploration.
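The fictitious play meta-solver discussed throughout can be sketched for symmetric zero-sum matrix games as follows (a minimal self-play variant, our own illustration rather than the paper's solver):

```python
import numpy as np

def fictitious_play(U, iters=2000):
    """Self-play fictitious play on an antisymmetric [n, n] payoff matrix U.

    Each step best-responds to the empirical average of past play; for
    zero-sum games the average strategy converges to a Nash equilibrium.
    """
    counts = np.ones(U.shape[0])           # uniform prior over actions
    for _ in range(iters):
        avg = counts / counts.sum()
        counts[np.argmax(U @ avg)] += 1.0  # best response to the average
    return counts / counts.sum()

def exploitability(pi, U):
    """Both players' best-response gains when both play pi (antisymmetric U)."""
    return float(np.max(U @ pi) + np.max(-(pi @ U)))

U_rps = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])
meta_eq = fictitious_play(U_rps)           # approaches (1/3, 1/3, 1/3)
```

In the PSRO framework, the payoff matrix here would be the meta-game payoff restricted to the current population, which is why smaller or better-seeded populations solve much faster.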
+
+# 6. Conclusion
+
+In this paper, we introduce A-PSRO, a unified open-ended framework for learning equilibrium strategies. We propose the advantage function as an evaluation metric for the strategy. The advantage function exhibits favorable properties, such as convexity and Lipschitz continuity. Leveraging the advantage function, A-PSRO effectively enhances the objective of strategy exploration during population expansion. In zero-sum games, A-PSRO can deterministically approach Nash equilibrium strategies during iterations, significantly reducing the exploitability of learned strategies. Moreover, in general-sum games with multiple equilibria, A-PSRO maximizes rewards during the learning of Nash equilibria. Experimental results demonstrate the robust generalization capabilities of A-PSRO as an open-ended framework in large-scale environments, highlighting its potential to advance equilibrium theory in multiagent systems.
+
+# Acknowledgements
+
+This paper is supported by the National Key R&D Program of China (2021YFA1000403) and the National Natural Science Foundation of China (No. 12431012).
+
+# Impact Statement
+
+This paper presents work whose goal is to advance the PSRO framework for solving equilibrium strategies. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
+
+# References
+
+Balduzzi, D., Tuyls, K., Perolat, J., and Graepel, T. Reevaluating evaluation. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, volume 31, pp. 3272-3283, 2018.
+Balduzzi, D., Garnelo, M., Bachrach, Y., Czarnecki, W., Perolat, J., Jaderberg, M., and Graepel, T. Open-ended learning in symmetric zero-sum games. In Proceedings of the 36th International Conference on Machine Learning, volume 97, pp. 434-443, 2019.
+Basilico, N., Coniglio, S., Gatti, N., and Marchesi, A. Bilevel programming methods for computing single-leader-multi-follower equilibria in normal-form and poly-matrix games. EURO Journal on Computational Optimization, 8(1):3-31, 2020.
+Berner, C., Brockman, G., Chan, B., Cheung, V., Debiak, P., Dennison, C., Farhi, D., Fischer, Q., Hashme, S., Hesse, C., Jozefowicz, R., Gray, S., Olsson, C., Pachocki, J., Petrov, M., de Oliveira Pinto, H. P., Raiman, J., Salimans, T., Schlatter, J., Schneider, J., Sidor, S., Sutskever, I., Tang, J., Wolski, F., and Zhang, S. Dota 2 with large scale deep reinforcement learning. CoRR, abs/1912.06680, 2019. URL http://arxiv.org/abs/1912.06680.
+Bowling, M. H. and Veloso, M. M. Rational and convergent learning in stochastic games. In Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence, pp. 1021-1026, 2001.
+Bošansky, B., Lisy, V., Lanctot, M., Čermák, J., and Winands, M. H. Algorithms for computing strategies in two-player simultaneous move games. Artificial Intelligence, 237:1-40, 2016.
+Brown, N. and Sandholm, T. Superhuman ai for multiplayer poker. Science, 365(6456):885-890, 2019.
+
+Conitzer, V. and Sandholm, T. AWESOME: A general multiagent learning algorithm that converges in self-play and learns a best response against stationary opponents. In Machine Learning, Proceedings of the Twentieth International Conference, pp. 83-90, 2003.
+Czarnecki, W. M., Gidel, G., Tracey, B., Tuyls, K., Omidshafiei, S., Balduzzi, D., and Jaderberg, M. Real world games look like spinning tops. In Proceedings of the 34th International Conference on Neural Information Processing Systems, volume 33, pp. 17443-17454, 2020.
+Foerster, J., Chen, R. Y., Al-Shedivat, M., Whiteson, S., Abbeel, P., and Mordatch, I. Learning with opponent-learning awareness. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS '18, pp. 122-130, 2018.
+Freund, Y. and Schapire, R. E. Adaptive game playing using multiplicative weights. Games and Economic Behavior, 29(1-2):79-103, 1999.
+Fudenberg, D. and Levine, D. K. Consistency and cautious fictitious play. Journal of Economic Dynamics and Control, 19(5-7):1065-1089, 1995.
+Hernandez-Leal, P., Kaisers, M., Baarslag, T., and de Cote, E. M. A survey of learning in multiagent environments: Dealing with non-stationarity. CoRR, abs/1707.09183, 2017. URL http://arxiv.org/abs/1707.09183.
+Hu, Y., Han, C., Li, H., and Guo, T. Modeling opponent learning in multiagent repeated games. Applied Intelligence, 53(13):17194-17210, 2022.
+Lanctot, M., Zambaldi, V., Gruslys, A., Lazaridou, A., Tuyls, K., Pérolat, J., Silver, D., and Graepel, T. A unified game-theoretic approach to multiagent reinforcement learning. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pp. 4193-4206, 2017.
+Langley, P. Crafting papers on machine learning. In Langley, P. (ed.), Proceedings of the 17th International Conference on Machine Learning (ICML 2000), pp. 1207-1216, Stanford, CA, 2000. Morgan Kaufmann.
+Letcher, A., Foerster, J. N., Balduzzi, D., Rocktäschel, T., and Whiteson, S. Stable opponent shaping in differentiable games. In 7th International Conference on Learning Representations., pp. 1-20, 2019.
+Li, S., Wu, Y., Cui, X., Dong, H., Fang, F., and Russell, S. Robust multi-agent reinforcement learning via minimax deep deterministic policy gradient. In Proceedings of the AAAI conference on artificial intelligence, volume 33, pp. 4213-4220, 2019.
+
+Liu, S., Harris, L., Lanctot, M., Piliouras, G., Leibo, J. Z., and Heess, N. Neural population learning beyond symmetric zero-sum games. In Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems, AAMAS '24, pp. 1247-1255, Richland, SC, 2024. International Foundation for Autonomous Agents and Multiagent Systems. ISBN 9798400704864.
+Liu, X., Jia, H., Wen, Y., Hu, Y., Chen, Y., Fan, C., Hu, Z., and Yang, Y. Towards unifying behavioral and response diversity for open-ended learning in zero-sum games. In Advances in Neural Information Processing Systems, volume 34, pp. 941-952, 2021.
+Liu, Z., Yu, C., Yang, Y., Sun, P., Wu, Z., and Li, Y. A unified diversity measure for multiagent reinforcement learning. In Advances in Neural Information Processing Systems, volume 35, pp. 10339-10352, 2022.
+Marris, L., Muller, P., Lanctot, M., Tuyls, K., and Graepel, T. Multi-agent training beyond zero-sum with correlated equilibrium meta-solvers. In Proceedings of the 38th International Conference on Machine Learning, volume 139, pp. 7480-7491, 2021.
+McAleer, S., Lanier, J., Fox, R., and Baldi, P. Pipeline psro: A scalable approach for finding approximate nash equilibria in large games. In Proceedings of the 34th International Conference on Neural Information Processing Systems, volume 33, pp. 20238-20248, 2020.
+McMahan, H. B., Gordon, G. J., and Blum, A. Planning in the presence of cost functions controlled by an adversary. In Proceedings of the Twentieth International Conference on International Conference on Machine Learning, pp. 536-543, 2003.
+Muller, P., Omidshafiei, S., Rowland, M., Tuyls, K., Pérolat, J., Liu, S., Hennes, D., Harris, L., Lanctot, M., Hughes, E., Wang, Z., Lever, G., Heess, N., Graepel, T., and Munos, R. A generalized training approach for multiagent learning. In 8th International Conference on Learning Representations, pp. 1-35, 2020.
+Nguyen, D., White, L., and Nguyen, H. Social optimum equilibrium selection for distributed multi-agent optimization, 2023. URL https://arxiv.org/abs/2307.13242.
+Perez-Nieves, N., Yang, Y., Slumbers, O., Mguni, D. H., Wen, Y., and Wang, J. Modelling behavioural diversity for learning in open-ended games. In Proceedings of the 38th International Conference on Machine Learning, volume 139, pp. 8514-8524, 18-24 Jul 2021.
+
+Sen, S., Airiau, S., and Mukherjee, R. Towards a pareto-optimal solution in general-sum games. In Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems, pp. 153-160, 2003.
+Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., Lanctot, M., Sifre, L., Kumaran, D., Graepel, T., Lillicrap, T., Simonyan, K., and Hassabis, D. A general reinforcement learning algorithm that masters chess, shogi, and go through self-play. Science, 362 (6419):1140-1144, 2018.
+Sokota, S., D'Orazio, R., Kolter, J. Z., Loizou, N., Lanctot, M., Mitliagkas, I., Brown, N., and Kroer, C. A unified approach to reinforcement learning, quantal response equilibria, and two-player zero-sum games. In The Eleventh International Conference on Learning Representations, pp. 1-41, 2023.
+Tang, Z., Yu, C., Chen, B., Xu, H., Wang, X., Fang, F., Du, S. S., Wang, Y., and Wu, Y. Discovering diverse multiagent strategic behavior via reward randomization. In International Conference on Learning Representations, 2021.
Van der Maaten, L. and Hinton, G. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(86):2579-2605, 2008.
Vinyals, O., Babuschkin, I., Czarnecki, W. M., Mathieu, M., Dudzik, A., Chung, J., Choi, D. H., Powell, R., Ewalds, T., Georgiev, P., Oh, J., Horgan, D., Kroiss, M., Danihelka, I., Huang, A., Sifre, L., Cai, T., Agapiou, J. P., Jaderberg, M., Vezhnevets, A. S., Leblond, R., Pohlen, T., Dalibard, V., Budden, D., Sulsky, Y., Molloy, J., Paine, T. L., Gulcehre, C., Wang, Z., Pfaff, T., Wu, Y., Ring, R., Yogatama, D., Wünsch, D., McKinney, K., Smith, O., Schaul, T., Lillicrap, T. P., Kavukcuoglu, K., Hassabis, D., Apps, C., and Silver, D. Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature, pp. 1-5, 2019.
Walsh, W. E., Parkes, D. C., and Das, R. Choosing samples to compute heuristic-strategy Nash equilibrium. In Faratin, P., Parkes, D. C., Rodríguez-Aguilar, J. A., and Walsh, W. E. (eds.), Agent-Mediated Electronic Commerce V. Designing Mechanisms and Systems, pp. 109-123, Berlin, Heidelberg, 2004. Springer Berlin Heidelberg. ISBN 978-3-540-25947-3.
Wang, K., Xu, L., Perrault, A., Reiter, M. K., and Tambe, M. Coordinating followers to reach better equilibria: End-to-end gradient descent for Stackelberg games. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pp. 5219-5227, 2022.
+
+Willi, T., Letcher, A. H., Treutlein, J., and Foerster, J. COLA: Consistent learning with opponent-learning awareness. In Proceedings of the 39th International Conference on Machine Learning, volume 162, pp. 23804-23831, 2022.
+Yao, J., Liu, W., Fu, H., Yang, Y., McAleer, S., Fu, Q., and Yang, W. Policy space diversity for non-transitive games. In Proceedings of the 37th International Conference on Neural Information Processing Systems, NIPS '23, pp. 67771-67793, Red Hook, NY, USA, 2023. Curran Associates Inc.
+Zhang, B. H. and Sandholm, T. Team correlated equilibria in zero-sum extensive-form games via tree decompositions. Proceedings of the AAAI Conference on Artificial Intelligence, 36(5):5252-5259, Jun. 2022.
+Zhang, Y., An, B., and Cerny, J. Computing ex ante coordinated team-maxmin equilibria in zero-sum multiplayer extensive-form games. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 5813-5821, 2021.
+
+# A. Omitted Proofs
+
+# A.1. Proof of Theorem 3.1
+
Theorem. In symmetric zero-sum games, if the joint strategy $(\pi_i^1,\pi_j^2)$ is a Nash equilibrium, then $(\pi_i^1,\pi_j^1)$ and $(\pi_i^2,\pi_j^2)$ are both Nash equilibria.
+
Proof. From the definition, the fact that the joint strategy $(\pi_i^1,\pi_j^2)$ is a Nash equilibrium implies that the exploitability $\mathcal{E}(\pi_i^1,\pi_j^2) = 0$. Then we have:
+
+$$
+\mathcal {E} \left(\pi_ {i} ^ {1}, \pi_ {j} ^ {2}\right) = \max _ {\pi_ {i} ^ {\prime}} \left[ U _ {i} \left(\pi_ {i} ^ {\prime}, \pi_ {j} ^ {2}\right) - U _ {i} \left(\pi_ {i} ^ {1}, \pi_ {j} ^ {2}\right) \right] + \max _ {\pi_ {j} ^ {\prime}} \left[ U _ {j} \left(\pi_ {i} ^ {1}, \pi_ {j} ^ {\prime}\right) - U _ {j} \left(\pi_ {i} ^ {1}, \pi_ {j} ^ {2}\right) \right] = 0. \tag {1}
+$$
+
+This indicates that:
+
+$$
+\max _ {\pi_ {i} ^ {\prime}} \left[ U _ {i} \left(\pi_ {i} ^ {\prime}, \pi_ {j} ^ {2}\right) - U _ {i} \left(\pi_ {i} ^ {1}, \pi_ {j} ^ {2}\right) \right] = \max _ {\pi_ {j} ^ {\prime}} \left[ U _ {j} \left(\pi_ {i} ^ {1}, \pi_ {j} ^ {\prime}\right) - U _ {j} \left(\pi_ {i} ^ {1}, \pi_ {j} ^ {2}\right) \right] = 0. \tag {2}
+$$
+
Then we prove that $U_{i}(\pi_{i}^{1},\pi_{j}^{2}) = U_{j}(\pi_{i}^{1},\pi_{j}^{2}) = 0$. Suppose, toward a contradiction, that the rewards of the agents are not both 0. Since the game is zero-sum, we may assume without loss of generality that:
+
+$$
+U _ {i} \left(\pi_ {i} ^ {1}, \pi_ {j} ^ {2}\right) > U _ {j} \left(\pi_ {i} ^ {1}, \pi_ {j} ^ {2}\right). \tag {3}
+$$
+
Since the game is symmetric and zero-sum, both players receive the same payoff when they play the same strategy, hence:
+
+$$
+U _ {i} \left(\pi_ {i} ^ {1}, \pi_ {j} ^ {1}\right) = U _ {j} \left(\pi_ {i} ^ {1}, \pi_ {j} ^ {1}\right) = 0. \tag {4}
+$$
+
Since the game is zero-sum, Equation (3) gives $U_j(\pi_i^1,\pi_j^2) < 0 = U_j(\pi_i^1,\pi_j^1)$, so deviating to $\pi_j^1$ strictly improves player $j$'s payoff. This indicates that:
+
+$$
+\max _ {\pi_ {j} ^ {\prime}} \left[ U _ {j} \left(\pi_ {i} ^ {1}, \pi_ {j} ^ {\prime}\right) - U _ {j} \left(\pi_ {i} ^ {1}, \pi_ {j} ^ {2}\right) \right] \neq 0, \tag {5}
+$$
+
which contradicts Equation (2). Therefore, the rewards of both agents are 0.
+
From the equations above we have that $U_{j}(\pi_{i}^{1},\pi_{j}^{1}) = U_{j}(\pi_{i}^{1},\pi_{j}^{2}) = 0$. Since $\pi_j^2$ is a best response to $\pi_i^1$, we can see that $\pi_j^1$ is also a best response to $\pi_i^1$; combined with the symmetry of the game, this indicates that $(\pi_i^1,\pi_j^1)$ and $(\pi_i^2,\pi_j^2)$ are both Nash equilibria.
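The exploitability decomposition of Equation (1) can be checked numerically; the sketch below is an illustrative assumption (rock-paper-scissors, whose unique symmetric Nash equilibrium is the uniform strategy), not part of the original proof.

```python
import numpy as np

# Row player's payoff matrix for rock-paper-scissors (symmetric zero-sum: U = -U.T).
U = np.array([[0., -1., 1.],
              [1., 0., -1.],
              [-1., 1., 0.]])

def exploitability(U, pi_i, pi_j):
    """E(pi_i, pi_j): the sum of both players' best-response gaps, as in Equation (1)."""
    u_i = pi_i @ U @ pi_j                  # player i's expected payoff
    gap_i = np.max(U @ pi_j) - u_i         # player i's best deviation gain
    gap_j = np.max(-(pi_i @ U)) - (-u_i)   # player j's best deviation gain (payoffs -U)
    return gap_i + gap_j

uniform = np.ones(3) / 3
rock = np.array([1., 0., 0.])
assert abs(exploitability(U, uniform, uniform)) < 1e-12  # zero exactly at the NE
assert exploitability(U, rock, rock) > 0                 # positive off-equilibrium
```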
+
+# A.2. Proof of Theorem 3.2
+
Theorem. For any two-player game, when the strategy of the other player is fixed (denoted as $\pi_j$), there always exists a pure strategy $a_i \in \mathcal{A}$ which satisfies $a_i \in \mathrm{BR}(\pi_j)$. In particular, in zero-sum games, $U_i(\pi_i, \pi_j)$ is always the same for all $\pi_j \in \mathrm{BR}(\pi_i)$.
+
Proof. We assume that the strategy $\pi_i^* = (p_1, \dots, p_{|\mathcal{A}|})$ is a best response to $\pi_j$. Then we have:
+
+$$
+U _ {i} \left(\pi_ {i} ^ {*}, \pi_ {j}\right) = p _ {1} U _ {i} \left(a _ {1}, \pi_ {j}\right) + \dots + p _ {| \mathcal {A} |} U _ {i} \left(a _ {| \mathcal {A} |}, \pi_ {j}\right) \leq \max _ {l \in \{1, \dots , | \mathcal {A} | \}} U _ {i} \left(a _ {l}, \pi_ {j}\right), \tag {6}
+$$
+
which implies that the maximizing pure strategy $a_{l}\in \mathcal{A}$ is also a best response to $\pi_j$.
+
In zero-sum games, we assume that the strategy of player $i$ is fixed as $\pi_{i}$. Then we have:
+
+$$
\operatorname{BR}(\pi_i) = \operatorname{argmax}_{\pi_j} U_j(\pi_i, \pi_j). \tag{7}
+$$
+
For all $\pi_j \in \mathrm{BR}(\pi_i)$, player $j$'s reward attains the same maximal value, which we denote $U_{j}(\pi_{i},\pi_{j}) = U^{0}$. Since the game is zero-sum, the reward of player $i$ is $U_{i}(\pi_{i},\pi_{j}) = -U^{0}$. This implies that in zero-sum games, $U_{i}(\pi_{i},\pi_{j})$ is always the same for all $\pi_j\in \mathrm{BR}(\pi_i)$.
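Both claims can be verified numerically; the snippet below (an illustrative sketch using rock-paper-scissors) fixes $\pi_i$ at the uniform strategy, where all three pure actions of the opponent are best responses, and checks that player $i$'s payoff is the same against each of them.

```python
import numpy as np

U = np.array([[0., -1., 1.],
              [1., 0., -1.],
              [-1., 1., 0.]])   # row player's payoffs; the column player receives -U

pi_i = np.ones(3) / 3            # fixed strategy of player i (uniform)

payoffs_j = -(pi_i @ U)          # player j's payoff for each pure action
br = np.flatnonzero(np.isclose(payoffs_j, payoffs_j.max()))
assert br.size >= 1              # a pure best response always exists
assert set(br) == {0, 1, 2}      # against uniform, every pure action is a BR

# In a zero-sum game, player i's payoff is identical against every best
# response of player j, including any mixture over the pure best responses.
br_vals = (pi_i @ U)[br]
assert np.allclose(br_vals, br_vals[0])
```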
+
+# A.3. Proof of Theorem 3.4
+
+Theorem. In two-player zero-sum games,
+
+- $\mathcal{E}(\pi_i, \pi_j) = -(\mathcal{V}_i(\pi_i) + \mathcal{V}_j(\pi_j))$ .
+- $\mathcal{V}_i(\pi_i)$ is Lipschitz continuous about $\pi_i$ , and $-\mathcal{V}_i(\pi_i)$ is a convex function about $\pi_i$ .
+
- If the game is symmetric, $\forall \pi_i, \mathcal{V}_i(\pi_i) \leq 0$. The joint strategy $(\pi_i^0, \pi_j^0)$ is a Nash equilibrium if and only if $\mathcal{V}_i(\pi_i^0) = \mathcal{V}_j(\pi_j^0) = 0$. In games with only a transitive dimension, $\mathcal{V}_i(\pi_i) > \mathcal{V}_j(\pi_j)$ implies $U_i(\pi_i, \pi_j) > 0$.
+
Proof. - From the definition, we can directly see that $\mathcal{E}(\pi_i,\pi_j) = -(\mathcal{V}_i(\pi_i) + \mathcal{V}_j(\pi_j))$. We define the domain of strategies $D = \{(p_{1},p_{2},\dots ,p_{|\mathcal{A}|}) \mid p_k\geq 0, \sum_{k = 1}^{|\mathcal{A}|}p_k = 1\}$, where $|\mathcal{A}|$ is the size of the action space $\mathcal{A} = \{a_1,\dots ,a_{|\mathcal{A}|}\}$.
+
- In order to prove that $-\mathcal{V}_i(\pi_i)$ is a convex function of $\pi_i$, we need to prove that for all $\pi_i^1, \pi_i^2 \in D$ and $c \in (0, 1)$, we have $-\mathcal{V}_i[(1 - c)\pi_i^1 + c\pi_i^2] \leq -(1 - c)\mathcal{V}_i(\pi_i^1) - c\mathcal{V}_i(\pi_i^2)$.
+
+We assume that $\pi_i^1 = (p_1^1, \dots, p_{|\mathcal{A}|}^1)$ and $\pi_i^2 = (p_1^2, \dots, p_{|\mathcal{A}|}^2)$ . Then we have:
+
+$$
\begin{aligned}
\mathcal{V}_i[(1-c)\pi_i^1 + c\pi_i^2] &= U_i[(1-c)\pi_i^1 + c\pi_i^2, a_0], \quad a_0 \in \mathcal{A} \cap \mathrm{BR}[(1-c)\pi_i^1 + c\pi_i^2] \\
&= (1-c) U_i(\pi_i^1, a_0) + c U_i(\pi_i^2, a_0) \\
&\geq (1-c) U_i(\pi_i^1, a_0^1) + c U_i(\pi_i^2, a_0^2), \quad a_0^t \in \mathcal{A} \cap \mathrm{BR}(\pi_i^t),\ t \in \{1,2\} \\
&= (1-c)\mathcal{V}_i(\pi_i^1) + c\mathcal{V}_i(\pi_i^2).
\end{aligned} \tag{8}
+$$
+
This implies that the negation of the advantage function, $-\mathcal{V}_i(\pi_i)$, is convex in $\pi_{i}$.
+
Then we prove that $\mathcal{V}_i(\pi_i)$ is Lipschitz continuous about $\pi_i$. We assume that $\pi_i = (p_1, \dots, p_{|\mathcal{A}|})$ and $\pi_i' = \pi_i + \Delta \pi_i = (p_1 + \Delta p_1, \dots, p_{|\mathcal{A}|} + \Delta p_{|\mathcal{A}|})$, which satisfies $\sum_{k=1}^{|\mathcal{A}|} \Delta p_k = 0$, and $a_0' \in \mathcal{A} \cap \mathrm{BR}(\pi_i + \Delta \pi_i)$, $a_0 \in \mathcal{A} \cap \mathrm{BR}(\pi_i)$.
+
+$$
\begin{aligned}
&\mathcal{V}_i(p_1 + \Delta p_1, \dots, p_{|\mathcal{A}|} + \Delta p_{|\mathcal{A}|}) - \mathcal{V}_i(p_1, \dots, p_{|\mathcal{A}|}) \\
&= U_i(\pi_i + \Delta\pi_i, a_0') - U_i(\pi_i, a_0) \\
&= (p_1 + \Delta p_1) U_i(a_1, a_0') + \dots + (p_{|\mathcal{A}|} + \Delta p_{|\mathcal{A}|}) U_i(a_{|\mathcal{A}|}, a_0') - p_1 U_i(a_1, a_0) - \dots - p_{|\mathcal{A}|} U_i(a_{|\mathcal{A}|}, a_0) \\
&= A_1 \Delta p_1 + \dots + A_{|\mathcal{A}|} \Delta p_{|\mathcal{A}|} + \left[ U_i(\pi_i, a_0') - U_i(\pi_i, a_0) \right],
\end{aligned} \tag{9}
+$$
+
where $A_{1},\dots ,A_{|\mathcal{A}|}$ are constants ($A_k = U_i(a_k, a_0')$).
+
Then we will prove that there exists $M$ such that for all $\delta > 0$ the following conclusion holds:
+
+$$
\left| U_i(\pi_i, a_0') - U_i(\pi_i, a_0) \right| < \delta M, \quad \text{when } \max_k |\Delta p_k| \leq \delta. \tag{10}
+$$
+
First, we consider the case $a_0' \in \mathcal{A} \cap \mathrm{BR}(\pi_i)$; this indicates that $|U_i(\pi_i, a_0') - U_i(\pi_i, a_0)| = 0$.
+
Next, we consider the case $a_0^\prime \notin \mathcal{A}\cap \mathrm{BR}(\pi_i)$, which means that:
+
+$$
U_i(\pi_i, a_0) < U_i(\pi_i, a_0'). \tag{11}
+$$
+
We prove that $U_{i}(\pi_{i},a)$ is Lipschitz continuous with respect to $\pi_{i}$: for all $\pi_{i}$ and all $a_{m}$, we have:
+
+$$
\begin{aligned}
U_i(\pi_i + \Delta\pi_i, a_m) - U_i(\pi_i, a_m) &= \sum_{k=1}^{|\mathcal{A}|} (p_k + \Delta p_k) U_i(a_k, a_m) - \sum_{k=1}^{|\mathcal{A}|} p_k U_i(a_k, a_m) \\
&= \Delta p_1 U_i(a_1, a_m) + \dots + \Delta p_{|\mathcal{A}|} U_i(a_{|\mathcal{A}|}, a_m) \\
&\leq \delta M, \quad \text{where } \delta = \max_k |\Delta p_k| \text{ and } M = \max_{a^* \in \mathcal{A}} \sum_{k=1}^{|\mathcal{A}|} \left| U_i(a_k, a^*) \right|.
\end{aligned} \tag{12}
+$$
+
Since we are considering games with a finite action space, $M$ is finite. Then we have:
+
+$$
+U _ {i} \left(\pi_ {i}, a _ {0}\right) + \delta M \geq U _ {i} \left(\pi_ {i} ^ {\prime}, a _ {0}\right) \geq U _ {i} \left(\pi_ {i} ^ {\prime}, a _ {0} ^ {\prime}\right) \geq U _ {i} \left(\pi_ {i}, a _ {0} ^ {\prime}\right) - \delta M. \tag {13}
+$$
+
Applying Equation (12) twice yields Equation (13). This indicates that $0 \leq U_{i}(\pi_{i}, a_{0}') - U_{i}(\pi_{i}, a_{0}) \leq 2\delta M$. According to Equation (9), we have
+
+$$
+\mathcal {V} _ {i} \left(p _ {1} + \Delta p _ {1}, \dots , p _ {| \mathcal {A} |} + \Delta p _ {| \mathcal {A} |}\right) - \mathcal {V} _ {i} \left(p _ {1}, \dots , p _ {| \mathcal {A} |}\right) \leq \max _ {k} A _ {k} \cdot \delta \cdot | \mathcal {A} | + 2 \delta M \tag {14}
+$$
+
This means that $\mathcal{V}_i(\pi_i)$ is Lipschitz continuous about $\pi_{i}$.
+
- From Theorem 3.1 we can directly see that if the game is symmetric, $\mathcal{V}_i(\pi_i) \leq 0$. The joint strategy $(\pi_i^1, \pi_i^1)$ is a Nash equilibrium if and only if $\mathcal{V}_i(\pi_i^1) = 0$.
+
In games with only a transitive dimension, the best response is the same strategy $\pi_j^0$ for all strategies $\pi_i$. For any $\pi_i^1, \pi_i^2$ with $\mathcal{V}_i(\pi_i^1) > \mathcal{V}_i(\pi_i^2)$, we then have:
+
+$$
+\mathcal {E} \left(\pi_ {i} ^ {1}, \pi_ {j} ^ {0}\right) = - \mathcal {V} _ {i} \left(\pi_ {i} ^ {1}\right) < - \mathcal {V} _ {i} \left(\pi_ {i} ^ {2}\right) = \mathcal {E} \left(\pi_ {i} ^ {2}, \pi_ {j} ^ {0}\right) \tag {15}
+$$
+
So $\pi_i^1$ is closer to the Nash equilibrium than $\pi_i^2$ in the transitive dimension, which means that $U_{i}(\pi_{i}^{1},\pi_{j}^{2}) > 0$.
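The analytic properties above can also be probed numerically. In a symmetric zero-sum matrix game, $\mathcal{V}_i(\pi_i) = \min_{a_j} U_i(\pi_i, a_j)$ (the opponent may best-respond with a pure action, by Theorem 3.2), so $\mathcal{V}_i$ is a minimum of linear functions. The sketch below is an illustrative assumption (rock-paper-scissors): it samples random strategy pairs and checks the convex-combination inequality and a Lipschitz bound.

```python
import numpy as np

U = np.array([[0., -1., 1.],
              [1., 0., -1.],
              [-1., 1., 0.]])   # row player's payoffs (symmetric zero-sum)

def V(pi):
    """Advantage V_i(pi) = U_i(pi, BR(pi)): the opponent picks the pure action
    minimizing player i's payoff."""
    return np.min(pi @ U)

rng = np.random.default_rng(1)
M = np.abs(U).sum()              # a crude Lipschitz constant, as in the proof's bound
for _ in range(1000):
    p1, p2 = rng.dirichlet(np.ones(3)), rng.dirichlet(np.ones(3))
    c = rng.uniform()
    # V is concave (equivalently, -V is convex)
    assert V((1 - c) * p1 + c * p2) >= (1 - c) * V(p1) + c * V(p2) - 1e-12
    # Lipschitz continuity: |V(p1) - V(p2)| <= M * max_k |p1_k - p2_k|
    assert abs(V(p1) - V(p2)) <= M * np.abs(p1 - p2).max() + 1e-12
```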
+
+
+
+# A.4. Proof of Theorem 4.1
+
+Theorem. In symmetric zero-sum games, given the population $\mathcal{P}_i = \mathcal{P}_j = \{\pi_i^1,\dots ,\pi_i^t\}$ , $\forall \pi_i^k\in \mathcal{P}_i$ , we have $\mathcal{V}_i(\pi_i^k)\leq \mathcal{V}_i(\theta_i\mid \mathcal{P}_i)$ . Here, $\theta_{i}$ is the equilibrium of the meta-game corresponding to the population $\mathcal{P}_i$ .
+
Proof. The population $\mathcal{P}_i$ can be viewed as a subgame. Applying Theorem 3.4 to this subgame, we obtain the following property:
+
+$$
+\forall \pi_ {i} ^ {k} \in \mathcal {P} _ {i}, \mathcal {V} _ {i} \left(\pi_ {i} ^ {k} \mid \mathcal {P} _ {i}\right) \leq \mathcal {V} _ {i} \left(\theta_ {i} \mid \mathcal {P} _ {i}\right). \tag {16}
+$$
+
Then we have the following derivation:
+
+$$
\begin{aligned}
\mathcal{V}_i(\pi_i^k) &= U_i(\pi_i^k, \mathrm{BR}(\pi_i^k)) = -U_i(\mathrm{BR}(\pi_i^k), \pi_i^k) \\
&\leq -U_i(\mathrm{BR}^*(\pi_i^k), \pi_i^k) = U_i(\pi_i^k, \mathrm{BR}^*(\pi_i^k)), \quad \text{where } \mathrm{BR}^*(\pi_i^k) = \operatorname{argmin}_{\pi' \in \mathcal{P}_i} U_i(\pi_i^k, \pi') \\
&\leq U_i(\theta_i, \mathrm{BR}^*(\theta_i)) \\
&= \mathcal{V}_i(\theta_i \mid \mathcal{P}_i).
\end{aligned} \tag{17}
+$$
+
From this theorem, we can see that $\mathcal{V}_i(\theta_i\mid \mathcal{P}_i)$ is an upper bound for $\mathcal{V}_i(\pi_i^k)$. This suggests that exploring new strategies in the neighborhood of $\theta_{i}$ increases the probability of improving the advantage of the population.
+
Here we give an example in which $\mathcal{V}_i(\pi_i^k) \geq \mathcal{V}_i(\theta_i)$.
+
|       | $a_1$      | $a_2$      | $a_3$      | $a_4$      | $a_5$      | $a_6$      |
| ----- | ---------- | ---------- | ---------- | ---------- | ---------- | ---------- |
| $a_1$ | (0,0)      | (1,-1)     | (-1,1)     | (-0.1,0.1) | (0.9,-0.9) | (-1.1,1.1) |
| $a_2$ | (-1,1)     | (0,0)      | (1,-1)     | (-1.1,1.1) | (-0.1,0.1) | (0.9,-0.9) |
| $a_3$ | (1,-1)     | (-1,1)     | (0,0)      | (0.9,-0.9) | (-1.1,1.1) | (-0.1,0.1) |
| $a_4$ | (0.1,-0.1) | (1.1,-1.1) | (-0.9,0.9) | (0,0)      | (1,-1)     | (-1,1)     |
| $a_5$ | (-0.9,0.9) | (0.1,-0.1) | (1.1,-1.1) | (-1,1)     | (0,0)      | (1,-1)     |
| $a_6$ | (1.1,-1.1) | (-0.9,0.9) | (0.1,-0.1) | (1,-1)     | (-1,1)     | (0,0)      |
+
In this game, assuming that the current population is $\mathcal{P}_i = \mathcal{P}_j = \{a_1,a_5\}$, we have $\mathcal{V}_i(a_5) = -1$. However, the meta-equilibrium of the restricted game is $\theta_i = a_1$, and $\mathcal{V}_i(\theta_i) = \mathcal{V}_i(a_1) = -1.1 < -1$.
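This calculation can be verified directly. In the illustrative snippet below, the matrix encodes the row player's payoffs from the table, with the $a_6$ row taken so that $U = -U^{\top}$ holds exactly, consistent with the stated values $\mathcal{V}_i(a_1) = -1.1$ and $\mathcal{V}_i(a_5) = -1$.

```python
import numpy as np

# Row player's payoffs in the 6x6 example (a_1..a_6 index rows and columns).
U = np.array([
    [ 0.0,  1.0, -1.0, -0.1,  0.9, -1.1],
    [-1.0,  0.0,  1.0, -1.1, -0.1,  0.9],
    [ 1.0, -1.0,  0.0,  0.9, -1.1, -0.1],
    [ 0.1,  1.1, -0.9,  0.0,  1.0, -1.0],
    [-0.9,  0.1,  1.1, -1.0,  0.0,  1.0],
    [ 1.1, -0.9,  0.1,  1.0, -1.0,  0.0],
])
assert np.allclose(U, -U.T)          # sanity check: symmetric zero-sum

def V(pi):                           # advantage against a best-responding opponent
    return np.min(pi @ U)

a1, a5 = np.eye(6)[0], np.eye(6)[4]
assert U[0, 4] > 0                   # a1 beats a5, so theta_i = a1 in the meta-game
assert np.isclose(V(a5), -1.0)
assert np.isclose(V(a1), -1.1)       # V(theta_i) = -1.1 < -1 = V(a5)
```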
+
+# A.5. Proof of Theorem 4.2
+
+Theorem. Given the meta-equilibrium strategy $\theta_{i}$ , if $\mathcal{V}_i(\theta_i) < 0$ , there exists $\Delta \pi_{i} \in \mathcal{A}$ and $\delta > 0$ satisfying:
+
+$$
+\forall 0 < d < \delta , \quad \mathcal {V} _ {i} \left((1 - d) \cdot \theta_ {i} + d \cdot \Delta \pi_ {i}\right) > \mathcal {V} _ {i} (\theta_ {i}).
+$$
+
Proof. From Theorem 3.4, the negation of the advantage function, $-\mathcal{V}_i(\pi_i)$, is convex in $\pi_i$. Since $\mathcal{V}_i(\theta_i) < 0 = \max_{\pi_i} \mathcal{V}_i(\pi_i)$, the convexity of $-\mathcal{V}_i$ yields a direction of ascent for $\mathcal{V}_i$ within the strategy domain $\mathcal{D} = \{(p_1, \dots, p_{|\mathcal{A}|}) \mid p_m \geq 0, \sum p_m = 1\}$:
+
+$$
+\exists \delta^ {\prime}, \pi^ {\prime} \in \mathcal {D}, \forall 0 < d < \delta^ {\prime}, \quad \mathcal {V} _ {i} ((1 - d) \cdot \theta_ {i} + d \cdot \pi^ {\prime}) > \mathcal {V} _ {i} (\theta_ {i}). \tag {18}
+$$
+
Since the strategy domain $\mathcal{D}$ is the convex hull of the pure strategy space $\mathcal{A}$, $\pi' \in \mathcal{D}$ can be expressed as a convex combination of the elements of $\mathcal{A}$. This means that there exists $\Delta \pi_i \in \mathcal{A}$ satisfying $\langle \pi', \Delta \pi_i \rangle > 0$. We define $\delta = \frac{|\langle \pi', \Delta \pi_i \rangle|}{|\Delta \pi_i| \cdot |\pi'|} \cdot \delta'$. Then we have:
+
+$$
+\forall 0 < d < \delta , \quad \mathcal {V} _ {i} ((1 - d) \cdot \theta_ {i} + d \cdot \Delta \pi_ {i}) > \mathcal {V} _ {i} (\theta_ {i}). \tag {19}
+$$
+
+
+
+# A.6. Proof of Theorem 4.4
+
+Theorem. In two-player simplified games, $\forall \pi_{i}$ , for any $a_{j}^{l} \in \mathrm{argmax}_{a_{j} \in \mathrm{BR}(\pi_{i}) \cap \mathcal{A}_{j}} U_{i}(\pi_{i}, a_{j})$ and $\forall \delta > 0$ , there always exists $\pi_{i}'$ which satisfies $|\pi_{i}' - \pi_{i}| < \delta$ and $\mathrm{BR}(\pi_{i}') \cap \mathcal{A}_{j} = \{a_{j}^{l}\}$ .
+
Proof. We assume that $\pi_i = (p_1, \dots, p_{|\mathcal{A}_i|})$. Since we search for $\pi_i'$ in a neighborhood of $\pi_i$, without loss of generality we assume $p_t > 0$ for all $t \in \{1, \dots, |\mathcal{A}_i|\}$. Since the game is simplified, the pure strategy $a_j^l$ is not dominated. Therefore, there exists $m \in \{1, \dots, |\mathcal{A}_i|\}$ such that for all $a_j' \neq a_j^l$, $a_j' \cdot U_j \cdot a_i^m < a_j^l \cdot U_j \cdot a_i^m$. We choose $\pi_i' = (\frac{1 - (1 + \delta) \cdot p_m}{1 - p_m} p_1, \dots, (1 + \delta) p_m, \dots, \frac{1 - (1 + \delta) \cdot p_m}{1 - p_m} p_{|\mathcal{A}_i|})$; it is clear that $|\pi_i' - \pi_i| < \delta$. We then prove that for all $a_j' \neq a_j^l$ with $a_j' \in \mathrm{BR}(\pi_i)$, we have $U_j(\pi_i', a_j^l) > U_j(\pi_i', a_j')$. It is clear that
+
+$$
+U _ {j} \left(\pi_ {i}, a _ {j} ^ {l}\right) - U _ {j} \left(\pi_ {i}, a _ {j} ^ {\prime}\right) = p _ {1} \left(a _ {j} ^ {l} \cdot U _ {j} \cdot a _ {i} ^ {1} - a _ {j} ^ {\prime} \cdot U _ {j} \cdot a _ {i} ^ {1}\right) + \dots + p _ {| \mathcal {A} _ {i} |} \left(a _ {j} ^ {l} \cdot U _ {j} \cdot a _ {i} ^ {| \mathcal {A} _ {i} |} - a _ {j} ^ {\prime} \cdot U _ {j} \cdot a _ {i} ^ {| \mathcal {A} _ {i} |}\right) = 0 \tag {20}
+$$
+
+Then we have
+
+$$
\begin{aligned}
U_j(\pi_i', a_j^l) - U_j(\pi_i', a_j') &= \frac{1 - (1+\delta) p_m}{1 - p_m} p_1 \left( a_j^l \cdot U_j \cdot a_i^1 - a_j' \cdot U_j \cdot a_i^1 \right) + \dots + (1+\delta) p_m \left( a_j^l \cdot U_j \cdot a_i^m - a_j' \cdot U_j \cdot a_i^m \right) \\
&\quad + \dots + \frac{1 - (1+\delta) p_m}{1 - p_m} p_{|\mathcal{A}_i|} \left( a_j^l \cdot U_j \cdot a_i^{|\mathcal{A}_i|} - a_j' \cdot U_j \cdot a_i^{|\mathcal{A}_i|} \right) \\
&= \delta \cdot \frac{p_m}{1 - p_m} \left( a_j^l \cdot U_j \cdot a_i^m - a_j' \cdot U_j \cdot a_i^m \right) > 0.
\end{aligned} \tag{21}
+$$
+
This indicates that $\mathrm{BR}(\pi_i^{\prime})\cap \mathcal{A}_j = \{a_j^l\}$.
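The perturbation used in the proof can be illustrated on rock-paper-scissors (an assumed example, not from the paper): against the uniform strategy every pure action is a best response, but scaling up the weight of one action and renormalizing collapses the best-response set to a single action.

```python
import numpy as np

U = np.array([[0., -1., 1.],
              [1., 0., -1.],
              [-1., 1., 0.]])   # row player's payoffs

def pure_best_responses(pi_i):
    payoffs_j = -(pi_i @ U)     # the opponent's payoff for each pure action
    return set(np.flatnonzero(np.isclose(payoffs_j, payoffs_j.max())))

uniform = np.ones(3) / 3
assert pure_best_responses(uniform) == {0, 1, 2}   # every pure action is a BR

# Perturb as in the proof: scale the weight of one action (rock, index 0)
# by (1 + delta) and renormalize; the best response becomes unique.
delta = 1e-3
pi = uniform.copy()
pi[0] *= 1 + delta
pi /= pi.sum()
assert pure_best_responses(pi) == {1}   # paper is now the unique best response
```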
+
+
+
+# A.7. Proof of Theorem 4.6
+
+Theorem. In two-player simplified games,
+
+- $\forall i, \mathcal{V}_i(\pi_i)$ is Lipschitz continuous.
- We assume that the joint strategy $(\pi_i, \pi_j)$ is a Nash equilibrium. If $\mathrm{BR}(\pi_i) \cap \mathcal{A}_j$ contains a unique element, then $\mathcal{V}_i(\pi_i)$ is a local maximum.
+- Under the same assumption, if $(\pi_i^1,\pi_j^2)$ and $(\pi_i^3,\pi_j^4)$ are both NEs, then $(\pi_i^1,\pi_j^2)$ Pareto dominates $(\pi_i^3,\pi_j^4)$ if and only if $\mathcal{V}_i(\pi_i^1)\geq \mathcal{V}_i(\pi_i^3)$ and $\mathcal{V}_j(\pi_j^2)\geq \mathcal{V}_j(\pi_j^4)$ .
+
Proof. - In Theorem 3.4, the proof of the Lipschitz continuity of the advantage function does not require the game to be zero-sum. Therefore, we can similarly prove that $\mathcal{V}_i(\pi_i)$ is Lipschitz continuous in two-player general-sum games.
+
+- If the joint strategy $(\pi_i, \pi_j)$ is a Nash equilibrium, we have that $\pi_j \in \mathrm{BR}(\pi_i)$ . We assume that:
+
+$$
+\operatorname {B R} \left(\pi_ {i}\right) \cap \mathcal {A} _ {j} = \left\{a _ {j} ^ {0} \right\}, \tag {22}
+$$
+
+which means that:
+
+$$
+\forall a _ {j} ^ {k} \neq a _ {j} ^ {0}, U _ {j} \left(\pi_ {i}, a _ {j} ^ {k}\right) < U _ {j} \left(\pi_ {i}, a _ {j} ^ {0}\right). \tag {23}
+$$
+
From the proof of Theorem 3.4 we have that for all $\pi_i$ and all $a$, $U_i(\pi_i, a)$ is Lipschitz continuous about $\pi_i$. Then there must exist $\delta > 0$ which satisfies:
+
+$$
+\forall \pi_ {i} ^ {\prime} \in B _ {\delta} (\pi_ {i}), \left(\operatorname {B R} \left(\pi_ {i} ^ {\prime}\right) \cap \mathcal {A} _ {j}\right) \subseteq \left(\operatorname {B R} \left(\pi_ {i}\right) \cap \mathcal {A} _ {j}\right), \tag {24}
+$$
+
where $B_{\delta}(\pi_i)$ is the open ball of radius $\delta$ centered at $\pi_i$. From Theorem 3.2, we then have $\{a_j^0\} = \mathrm{BR}(\pi_i') \cap \mathcal{A}_j$, and therefore:
+
+$$
\begin{aligned}
\mathcal{V}_i(\pi_i) &= U_i(\pi_i, \pi_j) = U_i(\pi_i, a_j^0) \\
&\geq U_i(\pi_i', \pi_j) = U_i(\pi_i', a_j^0) \quad \text{(because } (\pi_i, \pi_j) \text{ is a Nash equilibrium)} \\
&= \mathcal{V}_i(\pi_i').
\end{aligned} \tag{25}
+$$
+
By assumption, the element of $\mathrm{BR}(\pi_i)\cap \mathcal{A}_j$ is unique. Since $(\pi_{i},\pi_{j})$ is a Nash equilibrium, $\pi_j$ is a best response to $\pi_{i}$. This indicates that $\pi_j = a_j^0$ ($\pi_{j}$ is a convex combination of the elements of $\mathrm{BR}(\pi_i)\cap \mathcal{A}_j$). Therefore the equality $U_i(\pi_i', \pi_j) = U_i(\pi_i', a_j^0)$ holds. This illustrates that $\mathcal{V}_i(\pi_i)$ is a local maximum of the advantage function $\mathcal{V}_i$.
+
- Under the assumption that $\mathrm{BR}(\pi_i) \cap \mathcal{A}_j$ is a singleton, we have $\mathcal{V}_i(\pi_i^1) = U_i(\pi_i^1, \pi_j^2)$ and $\mathcal{V}_j(\pi_j^2) = U_j(\pi_i^1, \pi_j^2)$. Therefore, $(\pi_i^1, \pi_j^2)$ Pareto dominating $(\pi_i^3, \pi_j^4)$ is equivalent to $U_i(\pi_i^1, \pi_j^2) \geq U_i(\pi_i^3, \pi_j^4)$ and $U_j(\pi_i^1, \pi_j^2) \geq U_j(\pi_i^3, \pi_j^4)$. Since $(\pi_i^1, \pi_j^2)$ and $(\pi_i^3, \pi_j^4)$ are both Nash equilibria, this holds if and only if $\mathcal{V}_i(\pi_i^1) \geq \mathcal{V}_i(\pi_i^3)$ and $\mathcal{V}_j(\pi_j^2) \geq \mathcal{V}_j(\pi_j^4)$.
+
+
+
+# A.8. Proof of Theorem 4.7
+
Theorem. In two-player simplified games, let the current population for agent $i$ be $\mathcal{P}_i = \{\pi_i^1,\dots ,\pi_i^t\}$, and let $\theta_{i}$ be the global maximum point of the advantage $\mathcal{V}_{i}$ in $\mathrm{hull}(\mathcal{P}_i)$. Then there must exist a non-zero measure set $\mathcal{D}'\subset \mathrm{hull}(\mathcal{P}_i)$ such that if $\theta_{i}^{\prime}$ is a local maximum point of the advantage $\mathcal{V}_{i}$ in $\mathcal{D}'$, then $\mathcal{V}_i(\theta_i') = \mathcal{V}_i(\theta_i)$.
+
+Proof. We assume that the strategy of the player $i$ is $\pi_i = (p_1, \dots, p_{|\mathcal{A}_i|})$ . For $k \in \{1, \dots, |\mathcal{A}_j|\}$ , we define $g_k(\pi_i) = U_j(\pi_i, a_j^k)$ , where $a_j^k \in \mathcal{A}_j$ . Then we have:
+
+$$
+g _ {k} \left(p _ {1}, \dots , p _ {\left| \mathcal {A} _ {i} \right|}\right) = p _ {1} U _ {j} \left(a _ {i} ^ {1}, a _ {j} ^ {k}\right) + \dots + p _ {\left| \mathcal {A} _ {i} \right|} U _ {j} \left(a _ {i} ^ {\left| \mathcal {A} _ {i} \right|}, a _ {j} ^ {k}\right). \tag {26}
+$$
+
If the elements in the set $\mathrm{BR}(\pi_i) \cap \mathcal{A}_j$ are not unique, there must exist $k$ and $k'$ satisfying $g_k(\pi_i) = g_{k'}(\pi_i)$. Since the game is simplified, $\left(U_j(a_i^1, a_j^k), \dots, U_j(a_i^{|\mathcal{A}_i|}, a_j^k)\right)$ and $\left(U_j(a_i^1, a_j^{k'}), \dots, U_j(a_i^{|\mathcal{A}_i|}, a_j^{k'})\right)$ are linearly independent vectors. This indicates that the set of $\pi_i$ satisfying $g_k(\pi_i) = g_{k'}(\pi_i)$ has zero measure and is non-dense in the domain $\Pi_i$.
+
+We define
+
+$$
D^0 = \left\{ \pi_i \mid \pi_i \in \operatorname{hull}(\mathcal{P}_i),\ \left( \operatorname{BR}(\pi_i) \cap \mathcal{A}_j \right) \text{ is a singleton set} \right\}. \tag{27}
+$$
+
Since the set of $\pi_i$ for which there exist $k$ and $k'$ with $g_k(\pi_i) = g_{k'}(\pi_i)$ is non-dense in the domain $\Pi_i$, we consider its projection onto $\mathrm{hull}(\mathcal{P}_i)$. It follows that either $D^0$ is an empty set (which means that the set where different functions $g$ intersect covers the plane of $\mathrm{hull}(\mathcal{P}_i)$), or $(\mathrm{hull}(\mathcal{P}_i) \setminus D^0)$ is a non-dense set.
+
If $D^0$ is an empty set, then for all $\pi_i\in \mathrm{hull}(\mathcal{P}_i)$, $g_{k}(\pi_{i}) = g_{k^{\prime}}(\pi_{i})$. Since the game is simplified, for all $\pi_i\in \mathrm{hull}(\mathcal{P}_i)$, $U_{i}(\pi_{i},a_{j}^{k}) = U_{i}(\pi_{i},a_{j}^{k^{\prime}})$. Thus, we can remove $a_j^{k'}$ from $\mathcal{A}_j$ without affecting the calculation of $\mathcal{V}_{i}(\pi_{i})$: when $D^0$ is empty, $g_{k}$ and $g_{k^{\prime}}$ coincide on $\mathrm{hull}(\mathcal{P}_i)$, all operations in this theorem are performed on $\mathrm{hull}(\mathcal{P}_i)$, and both functions correspond to the same $U_{j}$, so by the definition of $\mathcal{V}_{i}$ it suffices to keep the action corresponding to the larger $U_{i}$ and remove the other.
+
If $(\mathrm{hull}(\mathcal{P}_i)\setminus D^0)$ is a non-dense set, we consider separately whether $\theta_{i}\in D^{0}$. If $\theta_{i}\in D^{0}$, there must exist a non-zero measure set $\mathcal{D}^1\subseteq \mathrm{hull}(\mathcal{P}_i)$ which satisfies:
+
+$$
+\forall \pi_ {i} \in \mathcal {D} ^ {1}, \operatorname {B R} \left(\pi_ {i}\right) \cap \mathcal {A} _ {j} = \operatorname {B R} \left(\theta_ {i}\right) \cap \mathcal {A} _ {j}. \tag {28}
+$$
+
This indicates that $\mathcal{V}_i(\pi_i)$ is a linear function of $\pi_i$ on $\mathcal{D}^1$. Since $\theta_i$ is the global maximum of this linear function, there must exist a non-zero measure set $\mathcal{D}' \subseteq \mathrm{hull}(\mathcal{P}_i)$ such that if $\theta_i'$ is a local maximum in $\mathcal{D}'$, then $\mathcal{V}_i(\theta_i') = \mathcal{V}_i(\theta_i)$.
+
+If $\theta_{i}\in (\mathrm{hull}(\mathcal{P}_{i})\setminus D^{0})$ , we assume that:
+
+$$
+\operatorname {a r g m a x} _ {k} g _ {k} \left(\theta_ {i}\right) = \{1, \dots , l \}. \tag {29}
+$$
+
Due to the Lipschitz continuity of $U_{j}$, there must exist $d > 0$ which satisfies:
+
+$$
\forall \Delta\pi_i, \quad \operatorname{BR}\left( \theta_i + \frac{\Delta\pi_i}{|\Delta\pi_i|} \cdot d \right) \cap \mathcal{A}_j \subseteq \left\{ a_j^1, \dots, a_j^l \right\}. \tag{30}
+$$
+
Since each $g_{k}(\pi_{i})$ is a hyperplane corresponding to a linear function, along the ray $\theta_{i} + \frac{\Delta\pi_{i}}{|\Delta\pi_{i}|}\cdot \delta$, $0 < \delta < d$, the value of $g_{k}(\pi_{i})$ is either maximal everywhere or non-maximal everywhere, for each $k$. Thus we can assume that there is a unique pure-strategy best response $a_j^k$, $k\in \{1,\dots ,l\}$, for the strategies on that ray.
+
Thus, the open ball $B_{d}(\theta_{i})$ can be divided into at most $|\mathcal{A}_j|$ regions $D^{k}$, $k \in \{1, \dots, |\mathcal{A}_{j}|\}$. In every region $D^{k}$, $\mathcal{V}_{i}(\pi_{i})$ is a linear function of $\pi_{i}$. Since $\theta_{i}$ is the global maximum of this linear function, there must exist a non-zero measure set $\mathcal{D}' \subseteq B_{d}(\theta_{i})$ such that if $\theta_{i}'$ is a local maximum in $\mathcal{D}'$, then $\mathcal{V}_{i}(\theta_{i}') = \mathcal{V}_{i}(\theta_{i})$.
+
+# A.9. Proof of Theorem 4.8
+
+Theorem. Assuming that $|\nabla_{\pi_i}\hat{\mathcal{V}}_i(\pi_i) - \nabla_{\pi_i}\mathcal{V}_i(\pi_i)| \leq \frac{1}{3} |\nabla_{\pi_i}\mathcal{V}_i(\pi_i)|$ , the algorithm will converge to equilibrium with sublinear convergence rate in symmetric zero-sum games.
+
Proof. We use $x$ to denote $\pi_i$ and $f(x)$ to denote $-\mathcal{V}_i(\pi_i)$, and we use $f^*$ to denote the global minimum of the function $f$. According to Theorem 3.4, $f(x)$ is Lipschitz continuous and convex in $x$. Assuming in addition that $f$ is $M$-smooth (its gradient is $M$-Lipschitz), for any $\eta$ we have:
+
+$$
\begin{aligned}
f(x - \eta \nabla \hat{f}(x)) &\leq f(x) - \eta \nabla \hat{f}(x)^T \nabla f(x) + \frac{M}{2} \left| \eta \nabla \hat{f}(x) \right|^2 \\
&\leq f(x) - \frac{2\eta}{3} |\nabla f(x)|^2 + \frac{M}{2} \left| \frac{4\eta}{3} \nabla f(x) \right|^2 \\
&= f(x) - \left( \frac{2\eta}{3} - \frac{8M\eta^2}{9} \right) |\nabla f(x)|^2.
\end{aligned} \tag{31}
+$$
+
+By assuming that $\eta \leq \frac{9}{48M}$ , we have
+
+$$
+\frac {2 \eta}{3} - \frac {8 M \eta^ {2}}{9} \geq \frac {\eta}{2} \tag {32}
+$$
+
+Then we have:
+
+$$
+f (x - \eta \nabla \hat {f} (x)) \leq f (x) - \frac {\eta}{2} | \nabla f (x) | ^ {2} \tag {33}
+$$
+
+We denote $\bar{x} = x - \eta \nabla \hat{f}(x)$ , then we have:
+
+$$
\begin{aligned}
f(\bar{x}) &\leq f(x) - \frac{\eta}{2} |\nabla f(x)|^2 \\
&\leq f^* + \nabla f(x)^T (x - x^*) - \frac{\eta}{2} |\nabla f(x)|^2 \\
&= f^* + \frac{1}{2\eta} \left( \|x - x^*\|^2 - \|x - x^* - \eta \nabla f(x)\|^2 \right) \\
&= f^* + \frac{1}{2\eta} \left( \|x - x^*\|^2 - \|\bar{x} - x^*\|^2 \right).
\end{aligned} \tag{34}
+$$
+
Denoting by $x^{i}$ the iterate after $i$ steps and summing this inequality over iterations $i = 1, \dots, k$, the sum telescopes:

+$$
+\begin{array}{l} \sum_ {i = 1} ^ {k} \left(f (x ^ {i}) - f ^ {*}\right) \leqslant \frac {1}{2 \eta} \sum_ {i = 1} ^ {k} \left(\left\| x ^ {i - 1} - x ^ {*} \right\| ^ {2} - \left\| x ^ {i} - x ^ {*} \right\| ^ {2}\right) \\ = \frac {1}{2 \eta} \left(\left\| x ^ {0} - x ^ {*} \right\| ^ {2} - \left\| x ^ {k} - x ^ {*} \right\| ^ {2}\right) \tag {35} \\ \leqslant \frac {1}{2 \eta} \left\| x ^ {0} - x ^ {*} \right\| ^ {2}. \\ \end{array}
+$$
+
Since $f(x^{i})$ is non-increasing, we have:
+
+$$
+f \left(x ^ {k}\right) - f ^ {*} \leqslant \frac {1}{k} \sum_ {i = 1} ^ {k} \left(f \left(x ^ {i}\right) - f ^ {*}\right) \leqslant \frac {1}{2 k \eta} \left\| x ^ {0} - x ^ {*} \right\| ^ {2} \tag {36}
+$$
+
The convergence rate is $\mathcal{O}\left(\frac{1}{k}\right)$. This indicates that the approximate gradient-based algorithm converges to the equilibrium at a sublinear rate.
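The rate can be illustrated numerically. The sketch below is an illustrative assumption (a convex quadratic standing in for $-\mathcal{V}_i$, with smoothness constant $M = 1$): it runs gradient descent with a gradient error of relative size at most $1/3$ and checks the $\mathcal{O}(1/k)$ bound of Equation (36).

```python
import numpy as np

# Gradient descent with approximate gradients whose relative error is at most
# 1/3, on the convex quadratic f(x) = ||x||^2 / 2 (so M = 1 and f* = 0).
rng = np.random.default_rng(0)
f = lambda x: 0.5 * np.dot(x, x)
x = np.array([4.0, -3.0])
x0_sq = np.dot(x, x)                   # ||x^0 - x*||^2, with x* = 0
eta = 3.0 / (16.0 * 1.0)               # eta <= 9/(48 M) = 3/(16 M)

vals = []
for _ in range(200):
    g = x                              # exact gradient of f
    noise = rng.normal(size=2)         # rescale so |noise| <= |g| / 3
    noise *= rng.uniform() * (np.linalg.norm(g) / 3) / max(np.linalg.norm(noise), 1e-12)
    x = x - eta * (g + noise)          # step along the approximate gradient
    vals.append(f(x))

# Check f(x^k) - f* <= ||x^0 - x*||^2 / (2 k eta), as in Equation (36).
for k, v in enumerate(vals, start=1):
    assert v <= x0_sq / (2 * k * eta) + 1e-9
assert vals[-1] < 1e-6                 # the iterates approach the minimizer
```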
+
+
+
+# B. Algorithm Introduction and the Pseudo-Code
+
+# B.1. Fictitious Play
+
In the fictitious play algorithm with two players, strategies are randomly initialized as $(\pi_i^0,\pi_j^0)$. In iteration $t$, each agent selects the best response to the average strategy of its opponent:
+
+$$
+\pi_ {i} ^ {t + 1} = \operatorname {B R} \left(\bar {\pi} _ {j} ^ {t}\right), \quad \bar {\pi} _ {j} ^ {t} = \frac {1}{t} \sum_ {k = 1} ^ {t} \pi_ {j} ^ {k}. \tag {37}
+$$
+
Fictitious play has convergence guarantees in simple settings such as two-player zero-sum games. However, it has the disadvantage that convergence can be very slow in games with large strategy spaces.
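A minimal implementation of Equation (37) on rock-paper-scissors (an illustrative sketch; the payoff matrix and iteration budget are assumptions) shows the empirical average strategies approaching the uniform Nash equilibrium:

```python
import numpy as np

U = np.array([[0., -1., 1.],
              [1., 0., -1.],
              [-1., 1., 0.]])   # row player's payoffs in rock-paper-scissors

def fictitious_play(U, iters=50_000):
    counts_i = np.ones(U.shape[0])     # cumulative play counts (uniform prior)
    counts_j = np.ones(U.shape[1])
    for _ in range(iters):
        avg_i = counts_i / counts_i.sum()
        avg_j = counts_j / counts_j.sum()
        counts_i[np.argmax(U @ avg_j)] += 1      # BR to the opponent's average
        counts_j[np.argmax(-(avg_i @ U))] += 1   # the column player's payoff is -U
    return counts_i / counts_i.sum(), counts_j / counts_j.sum()

avg_i, avg_j = fictitious_play(U)
assert np.allclose(avg_i, 1/3, atol=0.1)   # averages approach the uniform NE
assert np.allclose(avg_j, 1/3, atol=0.1)
```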
+
+# B.2. Classic PSRO Algorithm
+
+Algorithm 1 Policy-Space Response Oracles
Input: initial policy populations for all players $\mathcal{P}$. Compute the expected utilities $U^{\mathcal{P}}$ for each joint policy $\pi \in \mathcal{P}$. Initialize meta-strategies $\theta_{i} = \mathrm{UNIFORM}(\mathcal{P}_{i})$.
+1: while iters $e$ in $\{1,2,\dots\}$ do
+2: for player $i \in \{1,\dots,n\}$ do
+3: for many episodes do
+4: Sample $\pi_{-i} \sim \theta_{-i}$
+5: Train oracle $\pi_{i}'$ over $\mathcal{O}(\pi_{i}',\pi_{-i})$
+6: end for
+7: $\mathcal{P}_{i} = \mathcal{P}_{i} \cup \{\pi_{i}'\}$
+8: end for
+9: Compute missing entries in $U^{\mathcal{P}}$ from $\mathcal{P}$
+10: Compute a meta-strategy $\theta$ from $U^{\mathcal{P}}$
+11: end while
+Output: Current solution strategy $\theta_{i}$ for player $i$ .
+
Pseudo-code of the classic PSRO algorithm is given in Algorithm 1, where UNIFORM denotes the uniform distribution over the population. The two main components of the algorithm are the exploration of the new strategy $\pi_i^{\prime}$ and the computation of the meta-strategy $\theta$. In this paper, we focus on improving the PSRO framework from the perspective of new-strategy exploration. We use a meta-strategy solver with exactly the same parameters as the other PSRO algorithms in our comparison experiments (Perez-Nieves et al., 2021; Liu et al., 2021).
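The control flow of Algorithm 1 can be condensed into a short sketch. This is an illustrative assumption, not the paper's implementation: the oracle is an exact best response on a matrix game, and a uniform placeholder stands in for the meta-game solver, which makes this instance reduce to fictitious play.

```python
import numpy as np

U = np.array([[0., -1., 1.],
              [1., 0., -1.],
              [-1., 1., 0.]])   # row player's payoffs in the full underlying game

def psro(U, iters=50):
    pop_i, pop_j = [0], [0]            # populations of pure strategies (action indices)
    for _ in range(iters):
        # Meta-strategy step (simplified to UNIFORM over the populations; the
        # full algorithm would solve the restricted meta-game built from U^P).
        mix_i = np.bincount(pop_i, minlength=U.shape[0]) / len(pop_i)
        mix_j = np.bincount(pop_j, minlength=U.shape[1]) / len(pop_j)
        # Oracle step: an exact best response to the opponent's meta-strategy.
        pop_i.append(int(np.argmax(U @ mix_j)))
        pop_j.append(int(np.argmax(-(mix_i @ U))))
    return pop_i, pop_j

pop_i, pop_j = psro(U)
assert set(pop_i) == {0, 1, 2}   # the oracle eventually discovers every pure strategy
```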
+
+# B.3. A-PSRO for Solving Zero-Sum Games
+
In this section, we provide the algorithm for generating new strategies; the rest of the framework is the same as in other PSRO algorithms. We assume that the current population is $(\mathcal{P}_i,\mathcal{P}_j)$ , where $\mathcal{P}_i = \{\pi_i^1,\dots ,\pi_i^t\}$ . Since A-PSRO primarily improves the strategy exploration process, we outline how strategies are enhanced through exploration within a single PSRO iteration. The other components of the algorithm remain consistent with the PSRO algorithms compared in this study.
+
It is worth noting that PSRO variants typically prioritize updating existing strategies; new strategies are generated randomly only when the existing ones fail to improve (see the DPP-PSRO or PSD-PSRO algorithms for details). Similarly, in our algorithm the agents first attempt to update the last strategy $\pi_i^t$ . The new strategy $\pi_i^{t + 1}$ is generated and incorporated into the population only if the update process does not improve the utility. As LookAhead updates the strategy along the transitive dimension, we set its learning rate below $\| \theta_{i}\| _{\infty}$ to prevent stagnation as the strategy approaches the Nash equilibrium.
+
+Algorithm 2 Strategy exploration process of A-PSRO in zero-sum games
+Input: Population $(\mathcal{P}_i,\mathcal{P}_j)$ , meta-Nash equilibrium $(\theta_{i},\theta_{j})$ , strategy to be updated of the agent $\pi_i^t$
+Parameter: diversity weight $\lambda_{d}$ , learning rate $l_{r}$ , improvement bound $c_{m}$
+1: Randomly generate $d_r\sim \mathbf{U}[0,1]$
+2: if $d_r\leq \lambda_d$ then
+3: $\Delta \pi = \mathrm{argmax}_{\Delta \pi \in \mathcal{A}}[\mathrm{EC}(\mathcal{P}_i\setminus \{\pi_i^t\} \cup \{(1 - l_r)\cdot \pi_i^t +l_r\cdot \Delta \pi \} \mid \mathcal{P}_j)]$
+4: $\pi_i^* = (1 - l_r)\cdot \pi_i^t +l_r\cdot \Delta \pi .$
+5: else
6: Randomly generate $l_{r}\sim \mathbf{U}[0,\min (l_{r},\| \theta_{i}\| _{\infty})]$
+7: $\Delta \pi = \mathrm{argmax}_{\Delta \pi \in \mathcal{A}}\mathcal{V}_i[(1 - l_r)\cdot \theta_i + l_r\cdot \Delta \pi ]$
+8: $\pi_i^* = (1 - l_r)\cdot \theta_i + l_r\cdot \Delta \pi .$
+9: end if
+10: if $\frac{\pi_i^*\times\mathcal{U}_i\times\theta_j}{\pi_i^t\times\mathcal{U}_i\times\theta_j} -1\geq c_m$ then
+11: $\pi_i^t = \pi_i^*$
+12: else
13: $\pi_i^t = \pi_i^*$ . Then randomly generate $\pi_i^{t + 1}$ and set $\mathcal{P}_i = \mathcal{P}_i\cup \{\pi_i^{t + 1}\}$ . (The randomly generated strategy $\pi_i^{t + 1}$ will be updated in the next iteration, which is equivalent to adding an explored strategy.)
+14: end if
15: return $\mathcal{P}_{i}$
Output: $\mathcal{P}_{i}$
+
+In Algorithm 2, we combine the diversity module and our LookAhead module. The EC (expected cardinality) function is the diversity measure used in (Perez-Nieves et al., 2021):
+
+$$
+\operatorname {E C} \left(\mathcal {P} _ {i} \mid \mathcal {P} _ {j}\right) := \operatorname {T r} \left(\mathbf {I} - \left(\mathcal {L} + \mathbf {I}\right) ^ {- 1}\right) \tag {38}
+$$
+
+$$
+\mathcal {L} = \mathcal {M} _ {i} \mathcal {M} _ {i} ^ {T}, \mathcal {M} _ {i} = \mathcal {P} _ {i} \times U _ {i} \times \mathcal {P} _ {j}.
+$$
+
Here $\mathrm{Tr}$ denotes the trace of a matrix. In all zero-sum game experiments, we control the proportion of the diversity and LookAhead modules with a single parameter $\lambda_{d}$ . Our experimental results show that A-PSRO achieves good convergence in games with different transitive and cyclic structures.
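The EC measure in Eq. (38) translates directly into NumPy (a sketch; `expected_cardinality` is an illustrative name, and the input is the meta-payoff matrix $\mathcal{M}_i$ with one row per strategy in player $i$'s population):

```python
import numpy as np

def expected_cardinality(meta_payoffs):
    """EC diversity (Eq. 38): Tr(I - (L + I)^{-1}) with L = M M^T,
    where M = P_i x U_i x P_j is the population meta-payoff matrix."""
    M = np.asarray(meta_payoffs, dtype=float)
    L = M @ M.T
    I = np.eye(L.shape[0])
    return np.trace(I - np.linalg.inv(L + I))

# Two orthogonal payoff rows score higher (more diverse) than two
# identical rows, which is exactly the behaviour the diversity module
# exploits when choosing the update direction on line 3 of Algorithm 2.
ec_diverse = expected_cardinality([[1.0, 0.0], [0.0, 1.0]])   # 1.0
ec_duplicate = expected_cardinality([[1.0, 0.0], [1.0, 0.0]]) # 2/3
```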
+
+# B.4. A-PSRO for Solving Two-Player General-Sum Games
+
In experiments with general-sum games, we find that the diversity module does not help improve the reward during strategy learning. Therefore, the strategy exploration process of A-PSRO contains only the LookAhead module. The pseudo-code of A-PSRO is given in Algorithm 3, where the meta-solver of the oracle $\mathcal{O}(\mathcal{P}_i,\mathcal{P}_j\mid \pi_{i,j}^k)$ is fictitious play with 1000 iterations. In our experiments, we set the number of repeats $k = 10$ . The rest of the A-PSRO algorithm in general-sum games is consistent with the zero-sum case.
+
+# B.5. A-PSRO for Solving Multi-Player Games
+
The main modification in applying the A-PSRO algorithm to multi-player games is the computation of the advantage function. Unlike two-player games, where the best response BR can be used directly, computing the advantage of a strategy $\pi_{i}$ requires the oracle $\mathcal{O}(\Pi_{-i}\mid\pi_i)$ . In multi-player games, we adopt the joint best response as an approximation to the optimistic equilibrium. The pseudo-code for calculating the advantage in A-PSRO is given in Algorithm 4. Apart from the computation of the advantage function, A-PSRO in multi-player games is consistent with the two-player case.
+
+Algorithm 3 Strategy exploration process of A-PSRO in general-sum games
Input: Population $(\mathcal{P}_i,\mathcal{P}_j)$ , meta-Nash equilibrium $(\theta_{i},\theta_{j})$ , strategy to be updated of the agent $\pi_i^t$ .
Parameter: learning rate $l_{r}$ , improvement bound $c_{m}$ .
+
+1: for repeats $k$ in $\{1,2,\dots\}$ do
+2: $(\pi_i^k,\pi_j^k) = \mathrm{UNIFORM}[\mathrm{hull}(\mathcal{P}_{i,j})]$
+3: $(\theta_i^k,\theta_j^k) = \mathcal{O}(\mathcal{P}_i,\mathcal{P}_j\mid \pi_{i,j}^k)$
+4: end for
+5: $\theta_{i} = \operatorname{argmax}_{k}\mathcal{V}_{i}(\theta_{i}^{k})$
6: Randomly generate $l_r \sim \mathbf{U}[0, \min(l_r, \| \theta_i \| _{\infty})]$ .
+7: $\Delta \pi = \operatorname{argmax}_{\Delta \pi \in \mathcal{A}} \mathcal{V}_i[(1 - l_r) \cdot \theta_i + l_r \cdot \Delta \pi]$ .
+8: $\pi_i^* = (1 - l_r) \cdot \theta_i + l_r \cdot \Delta \pi$ .
+9: if $\frac{\pi_i^* \times \mathcal{U}_i \times \theta_j}{\pi_i^t \times \mathcal{U}_i \times \theta_j} - 1 \geq c_m$ then
+10: $\pi_i^t = \pi_i^*$
+11: else
+12: $\pi_i^t = \pi_i^*$ . Then randomly generate $\pi_i^{t + 1}$ , $\mathcal{P}_i = \mathcal{P}_i\cup \{\pi_i^{t + 1}\}$ .
+13: end if
+14: return $\mathcal{P}_i$
+
+Output: $\mathcal{P}_i$
+
+Algorithm 4 Calculation of the advantage function in multi-player games
Input: Strategy of the player $\pi_{i}$ , initialized utilities $U_{i}^{0} = -M, U_{-i}^{0} = -M$ .
1: The index set of the other agents is $\{-i\} = \{1, \dots, k\}$
2: The joint pure strategy space of the other agents is $\mathcal{A}_{-i} = \mathcal{A}_{1} \times \dots \times \mathcal{A}_{k}$
3: for $(a_{1}^{m}, \dots, a_{k}^{m}) \in \mathcal{A}_{-i}$ do
4: if $U_{-i}(\pi_{i}, \pi_{-i} = (a_{1}^{m}, \dots, a_{k}^{m})) > U_{-i}^{0}$ then
5: $U_{i}^{0} = U_{i}(\pi_{i}, \pi_{-i}), U_{-i}^{0} = U_{-i}(\pi_{i}, \pi_{-i})$
6: else if $U_{-i}(\pi_{i}, \pi_{-i} = (a_{1}^{m}, \dots, a_{k}^{m})) = U_{-i}^{0}$ then
7: if $U_{i}(\pi_{i}, \pi_{-i} = (a_{1}^{m}, \dots, a_{k}^{m})) > U_{i}^{0}$ then
8: $U_{i}^{0} = U_{i}(\pi_{i}, \pi_{-i})$
9: end if
10: end if
11: end for
12: $\mathcal{V}_{i}(\pi_{i}) = U_{i}^{0}$
Output: $\mathcal{V}_{i}(\pi_{i})$
+
| Setting | Value | Description |
| --- | --- | --- |
| nb_iters | 200 | Training iterations |
| metaSolver | fictitious play | Meta-solver method |
| meta_iter | 1000 | Iterations for meta-solver |
| improvement_threshold $c_m$ | 0.03 | Convergence criterion |
| learning_rate | 0.5 | Default learning rate |
| num_learners | 4 | Number of strategies updated in each iteration |
| num_repeats | 10 | Number of repetitions per experiment |
| lr | 0.5 | Default step size |
| $\lambda_d$ | 0.5 | Diversity weight |
+
+Table 1. Parameter setting for experiments in Zero-sum games.
+
+# C. Experiment Details and Additional Experiment Results
+
+# C.1. A-PSRO for Solving Zero-Sum Games
+
The parameter settings for zero-sum games are given in Table 1. All experiments in this paper were run on CPU (Intel Core i9-10900KF @ 3.70GHz) and can be performed under both Windows and Linux.
+
In our setup, each experiment is repeated 4 times and the results are averaged for plotting. Within each experiment, the population is initialized randomly, and the meta-game is solved in the same manner for all algorithms. The number of learners for all algorithms except classic PSRO is set to 4, meaning that four strategies within the population are updated in each iteration. To compare the learning efficiency of the different algorithms more accurately, we gradually increase the number of meta-solver iterations: it starts at 1000 and increases by 500 after every 20 training steps.
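The meta-solver iteration schedule can be written as a one-line helper (the exact behaviour at step boundaries is our assumption; the paper states the schedule informally):

```python
def meta_solver_iters(training_step, base=1000, bump=500, every=20):
    """Meta-solver iteration budget: starts at `base` and grows by
    `bump` after every `every` training steps."""
    return base + bump * (training_step // every)
```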
+
Our experiments on symmetric zero-sum games are conducted in the environments used in previous papers on PSRO algorithms; detailed descriptions of these game environments can be found in (Czarnecki et al., 2020; Liu et al., 2022). Taking AlphaStar as an example, it is derived from StarCraft, an experimental environment commonly used in multi-agent reinforcement learning. By extracting meta-strategies from the large-scale extensive-form game StarCraft, we obtain a symmetric zero-sum normal-form game, AlphaStar, of dimension $888 \times 888$ . The other normal-form game environments are obtained similarly by extracting meta-strategies from real-world games (Go, Kuhn Poker, etc.).
+
Additional experiment results are shown in Figure 8. According to previous work (Czarnecki et al., 2020), the games $(8(\mathrm{c}),8(\mathrm{e}),8(\mathrm{f}),8(\mathrm{g}),8(\mathrm{h}),8(\mathrm{n}))$ have strong transitive structures. Figure 8 shows that in these games, adopting only the LA (LookAhead) module, with the objective of maximizing the advantage function, is effective in reducing exploitability. In games with cyclic structures, the diversity module is necessary for learning the Nash equilibrium. A-PSRO, which combines the LA and diversity modules, achieves the best results across all environments. In the stochastic Disc game (8(d)), which has almost no transitive dimension, all algorithms obtain the same convergence results.
+
Figure 9 shows the advantage distributions of these games. Although the payoff matrices of different games may differ greatly, the games can have similar advantage distributions, and games with the same advantage distribution exhibit similar strategy convergence processes when PSRO algorithms are applied, e.g. 9(c), 9(g), 9(n). Although the advantage function does not fully characterize the transitive dimension in zero-sum games, we believe it resembles the geometric visualization of the transitive and cyclic dimensions in previous work (Czarnecki et al., 2020).
+
+# C.2. A-PSRO for Solving Two-Player General-Sum Games
+
The parameter settings for general-sum games are given in Table 2. The hardware and system setup are the same as for zero-sum games. As in the zero-sum case, the number of meta-solver iterations starts at 1000 and increases by 500 after every 20 training steps.
+
The StagHunt game is a commonly used general-sum environment for testing whether algorithms can learn the optimal Nash equilibrium (Tang et al., 2021). The payoff matrix of the traditional StagHunt game is given in Table 3. In the StagHunt game, both (U, L) and (D, R) are Nash equilibria; to reach the joint strategy with higher reward, the agents must cooperate during learning.

| Setting | Value | Description |
| --- | --- | --- |
| nb_iters | 100 | Training iterations |
| metaSolver | fictitious play | Meta-solver method |
| meta_iter | 1000 | Iterations for meta-solver |
| num_oracle_repeats $k$ | 10 | Repetitions of the inner loop for strategy exploration |
| distribution_type | normal | Gaussian distribution |
| distribution_mean | 0 | Mean value of the distribution |
| distribution_var | 20 | Variance of the distribution |
| improvement_threshold | 0.03 | Convergence criterion |
| learning_rate | 0.5 | Default learning rate |
| num_learners | 4 | Number of strategies updated in each iteration |
| num_repeats | 100 | Number of repetitions per experiment |

Table 2. Parameter setting for experiments in General-sum games.
+
|   | L | R |
| --- | --- | --- |
| U | 30, 30 | -10, -10 |
| D | -10, -10 | 20, 20 |
+
+Table 3. Traditional StagHunt game.
+
|   | $a_1$ | $a_2$ | $\cdots$ | $a_i$ | $\cdots$ | $a_n$ |
| --- | --- | --- | --- | --- | --- | --- |
| $a_1$ | $U_1$ | $-u$ | $\cdots$ | $-u$ | $\cdots$ | $-u$ |
| $a_2$ | $-u$ | $u$ | $\cdots$ | $-u$ | $\cdots$ | $u$ |
| $\vdots$ | $\vdots$ | $\vdots$ |  | $\vdots$ |  | $\vdots$ |
| $a_i$ | $-u$ | $-u$ | $\cdots$ | $U_i$ | $\cdots$ | $-u$ |
| $\vdots$ | $\vdots$ | $\vdots$ |  | $\vdots$ |  | $\vdots$ |
| $a_n$ | $-u$ | $u$ | $\cdots$ | $-u$ | $\cdots$ | $u$ |
+
+Table 4. Advanced StagHunt game.
+
In this paper, we extend the traditional StagHunt structure to a large-scale general-sum game. The payoff matrix of Advanced-StagHunt is given in Table 4. In Advanced-StagHunt, each agent has a pure strategy space $\{a_1,\dots,a_n\}$ , where $\{a_1,a_i,\dots\}$ corresponds to the Nash equilibrium strategies resulting from cooperation. In Table 4, $U_{i}$ denotes the reward corresponding to the cooperative strategy in StagHunt, drawn from the uniform distribution $\mathbf{U}[1,2]$ . To determine whether A-PSRO deterministically converges to the optimal Nash equilibrium, we set one of the $U_{i}$ to 2.
+
For the rest of the Advanced-StagHunt payoff matrix, we fill the reward of each joint strategy from the uniform distribution $u = \mathbf{U}[0, 0.8]$ . Hence, besides the cooperative equilibria of the form $(a_i, a_i)$ , the Advanced-StagHunt game contains many inefficient Nash equilibria.
+
In our experiments, we set $n = 100$ with 5 cooperative equilibria of reward $U_{i} \sim \mathbf{U}[1,2]$ . Each PSRO algorithm was run 10 times and the results were averaged for presentation.
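A possible generator for such a payoff matrix is sketched below. It is a sketch under stated assumptions: it returns a single shared payoff matrix (simplifying the symmetric cooperative structure), and the placement of the cooperative strategies and the random seed are our own choices, not the paper's.

```python
import numpy as np

def advanced_staghunt(n=100, n_coop=5, seed=0):
    """Advanced-StagHunt-style payoff: off-equilibrium entries drawn
    from U[0, 0.8], cooperative diagonal entries (a_i, a_i) drawn from
    U[1, 2], with one of them fixed to the optimum 2.

    Returns the payoff matrix and the cooperative strategy indices.
    """
    rng = np.random.default_rng(seed)
    U = rng.uniform(0.0, 0.8, size=(n, n))          # inefficient region
    coop = rng.choice(n, size=n_coop, replace=False)
    U[coop, coop] = rng.uniform(1.0, 2.0, size=n_coop)
    U[coop[0], coop[0]] = 2.0                       # known optimum
    return U, coop
```

The gap between the cooperative rewards (at least 1) and the background rewards (at most 0.8) is what makes the cooperative diagonal entries strict equilibria while leaving many inefficient equilibria elsewhere.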
+
From the experiment results in Figure 4(a), we can see that most PSRO algorithms stagnate in the inefficient Nash equilibria of Advanced-StagHunt. This is because strategies whose gradient points to a cooperative Nash equilibrium occupy only a small proportion of the full strategy space. For the agent to learn the optimal Nash equilibrium strategy, a reward-related objective is necessary. A-PSRO, based on the advantage function, effectively learns the optimal Nash equilibrium strategy, indicating that a strategy exploration objective built on the advantage can improve rewards when solving general-sum games.
+
The optimal Nash equilibrium in the Advanced-StagHunt game is a pure strategy equilibrium. To test the effectiveness of A-PSRO in games where the optimal equilibrium is a mixed strategy equilibrium, we design a large-scale general-sum game, Advanced-RSP, whose structure is similar to the traditional game Rock-Paper-Scissors. We first design the general-sum RSP structure $U_{RSP}$ , whose payoff matrix is given in Table 5. In $U_{RSP}$ , $\epsilon$ is a random number drawn from the uniform distribution $\mathbf{U}[0,100]$ . This general-sum game has a mixed Nash equilibrium similar to that of the traditional RSP.
+
Based on $U_{RSP}$ , we design the large-scale general-sum game Advanced-RSP, whose payoff matrix is given in Table 6. For the rest of the Advanced-RSP payoff matrix, we fill the reward of each joint strategy from the uniform distribution $u = \mathbf{U}[0,100]$ . Each subgame corresponding to the joint strategies $(R_i, S_i, P_i)$ yields a mixed Nash equilibrium; there are also other equilibria with inefficient rewards.
+
In our experiment, we set $n = 1000$ and $i = 10$ . Each PSRO algorithm was run 10 times and the results were averaged for presentation. The results are shown in Figure 4(b): A-PSRO learns the optimal mixed equilibrium strategy, while all other PSRO algorithms stagnate in inefficient Nash equilibria.
+
|   | R | S | P |
| --- | --- | --- | --- |
| R | 100 | 180+$\epsilon$ | $\epsilon$ |
| S | $\epsilon$ | 100 | 180+$\epsilon$ |
| P | 180+$\epsilon$ | $\epsilon$ | 100 |
+
+Table 5. General-sum RSP structure ${U}_{RSP}$ .
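The $U_{RSP}$ block of Table 5 can be instantiated as follows (a sketch; whether every cell draws an independent $\epsilon$ is left open in the text, so a single draw per block is used here as an assumption):

```python
import numpy as np

def u_rsp(seed=0):
    """General-sum RSP block (Table 5): diagonal 100, with 180 + eps
    and eps on the two off-diagonals, eps ~ U[0, 100]."""
    rng = np.random.default_rng(seed)
    eps = rng.uniform(0.0, 100.0)
    return np.array([
        [100.0,       180.0 + eps, eps],
        [eps,         100.0,       180.0 + eps],
        [180.0 + eps, eps,         100.0],
    ])
```

Whatever the value of $\epsilon$, each row dominates one option and is dominated by another, preserving the cyclic structure of traditional Rock-Paper-Scissors.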
+
|   | $a_1$ | $\cdots$ | $R_1\,S_1\,P_1$ | $\cdots$ | $a_j$ | $\cdots$ | $R_i\,S_i\,P_i$ | $\cdots$ | $a_n$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $a_1$ | $u$ | $\cdots$ | $-u$ | $\cdots$ | $u$ | $\cdots$ | $-u$ | $\cdots$ | $u$ |
| $R_1\,S_1\,P_1$ | $-u$ | $\cdots$ | $U_{RSP}$ | $\cdots$ | $-u$ | $\cdots$ | $-u$ | $\cdots$ | $-u$ |
| $a_j$ | $u$ | $\cdots$ | $-u$ | $\cdots$ | $u$ | $\cdots$ | $-u$ | $\cdots$ | $u$ |
| $R_i\,S_i\,P_i$ | $-u$ | $\cdots$ | $-u$ | $\cdots$ | $-u$ | $\cdots$ | $U_{RSP}$ | $\cdots$ | $-u$ |
| $a_n$ | $u$ | $\cdots$ | $-u$ | $\cdots$ | $u$ | $\cdots$ | $-u$ | $\cdots$ | $u$ |
+
+Table 6. Advanced RSP game.
+
We also perform experiments on randomly generated games with the same reward distribution. The dimension of the randomly generated games is $1000 \times 1000$ , and each element of the payoff matrix is drawn from a normal distribution with mean $\mu = 0$ and variance $\sigma^2 = 20$ . To test convergence, we run the PSRO algorithms on 100 independently generated game environments and average the results. As shown in Figure 4(c), A-PSRO achieves the optimal reward of the joint strategy.
+
+Since A-PSRO requires multiple equilibrium oracles in general-sum games to explore strategies with higher rewards, its runtime increases significantly compared to existing algorithms. In future work, we will explore ways to simplify this process.
+
| Setting | Value | Description |
| --- | --- | --- |
| players | 3 | Number of agents in the game |
| nb_iters | 50 | Training iterations |
| metaSolver | fictitious play | Meta-solver method |
| meta_iter | 10000 | Iterations for meta-solver |
| distribution_type | normal | Gaussian distribution |
| distribution_mean | 0 | Mean value of the distribution |
| distribution_var | 20 | Variance of the distribution |
| improvement_threshold | 0.03 | Convergence criterion |
| learning_rate | 0.5 | Default learning rate |
| num_learners | 4 | Number of strategies updated in each iteration |
| num_repeats | 4 | Number of repetitions per experiment |
| $\lambda_d$ | 0.5 | Diversity weight |
+
+Table 7. Parameter setting for experiments in multi-player games.
+
+# C.3. A-PSRO for Solving Multi-Player Games
+
The parameter settings for multi-player games are given in Table 7. The hardware and system setup are the same as for zero-sum games. In our multi-player game experiments, we adopt randomly generated games with the same reward distribution.
+
In multi-player zero-sum games, we use randomly generated symmetric games with dimension $20 \times 20 \times 20$ , because we found in our experiments that reducing exploitability in larger-scale games incurs very high computational cost. We believe a comparison with other algorithms at this size is sufficient to demonstrate effectiveness. When generating these games, we added constraints to avoid strong pure strategies, which substantially increases the difficulty of strategy learning.
+
For multi-player general-sum games, we use a structure similar to the Advanced-StagHunt game with dimension $10 \times 10 \times 10$ , and set the reward of the optimal equilibrium strategy to 90.
+
+# D. Code and Dataset
+
+We provide part of the code necessary for the full operation of A-PSRO in the supplementary materials. Once the paper is accepted, we will upload the complete A-PSRO code along with the game data used for testing.
+
+
Figure 8. The exploitability of the joint strategy learned by agents in various zero-sum games. The reduction in exploitability through population iterations serves as an indicator of how well the Nash equilibrium is approximated. Panels: (a) AlphaStar, (b) Blotto, (c) Elo game, (d) Disc game, (e) Transitive game, (f) Triangular game, (g) Random game of skill, (h) Elo game + noise = 0.1, (i) Elo game + noise = 0.5, (j) Elo game + noise = 1.0, (k) Tic tac toe, (l) Kuhn poker, (m) Connect four, (n) 3-move parity game, (o) Go (size=3), (p) Go (size=4).
+
+
Figure 9. The advantage distribution of strategies in various zero-sum games. Lighter colored regions indicate strategies with higher advantage. Panels: (a) AlphaStar, (b) Blotto, (c) Elo game, (d) Disc game, (e) Transitive game, (f) Triangular game, (g) Random game of skill, (h) Elo game + noise = 0.1, (i) Elo game + noise = 0.5, (j) Elo game + noise = 1.0, (k) Tic tac toe, (l) Kuhn poker, (m) Connect four, (n) 3-move parity game, (o) Go (size=3), (p) Go (size=4).
\ No newline at end of file
diff --git a/apsroaunifiedstrategylearningmethodwithadvantagemetricfornormalformgames/images.zip b/apsroaunifiedstrategylearningmethodwithadvantagemetricfornormalformgames/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..bd1d2bc7473260eb5da508765cd375e20fd79101
--- /dev/null
+++ b/apsroaunifiedstrategylearningmethodwithadvantagemetricfornormalformgames/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ef4e9cbd6971726c241f7c59c969ff33226b470b3ff563738ac195039e4e5e20
+size 1592817
diff --git a/apsroaunifiedstrategylearningmethodwithadvantagemetricfornormalformgames/layout.json b/apsroaunifiedstrategylearningmethodwithadvantagemetricfornormalformgames/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..ae78b609c4449c6099b81ce86b4c003fdb17b467
--- /dev/null
+++ b/apsroaunifiedstrategylearningmethodwithadvantagemetricfornormalformgames/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e2b20d62e14d8810695c174dd2e51c449266905665d1abb8e17c661f0b3ee529
+size 1382025
diff --git a/areasoningbasedapproachtocrypticcrosswordcluesolving/34048b91-5172-4115-8182-f1b23417524b_content_list.json b/areasoningbasedapproachtocrypticcrosswordcluesolving/34048b91-5172-4115-8182-f1b23417524b_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..596936985cf1662dd9a6098a62f31e7a37d1dcc1
--- /dev/null
+++ b/areasoningbasedapproachtocrypticcrosswordcluesolving/34048b91-5172-4115-8182-f1b23417524b_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5f524d48baa0efc1fbf0d1d7237b7183a730183558b1801832ff731a770401cf
+size 128044
diff --git a/areasoningbasedapproachtocrypticcrosswordcluesolving/34048b91-5172-4115-8182-f1b23417524b_model.json b/areasoningbasedapproachtocrypticcrosswordcluesolving/34048b91-5172-4115-8182-f1b23417524b_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..426dd03117b94d07a12270ed663f14babad91dc7
--- /dev/null
+++ b/areasoningbasedapproachtocrypticcrosswordcluesolving/34048b91-5172-4115-8182-f1b23417524b_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ef3f0e6004d2e56fc9add529ab659d67bcda36b2d20530bdaebd9b88a540ee27
+size 158846
diff --git a/areasoningbasedapproachtocrypticcrosswordcluesolving/34048b91-5172-4115-8182-f1b23417524b_origin.pdf b/areasoningbasedapproachtocrypticcrosswordcluesolving/34048b91-5172-4115-8182-f1b23417524b_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..81b9ee133cd3d1a4e8da8786c3900ad64ae65277
--- /dev/null
+++ b/areasoningbasedapproachtocrypticcrosswordcluesolving/34048b91-5172-4115-8182-f1b23417524b_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:32115a1aef3ce9e4c0d62569a0432c2929f61199b0234eed097a8cd03d0ec47f
+size 656436
diff --git a/areasoningbasedapproachtocrypticcrosswordcluesolving/full.md b/areasoningbasedapproachtocrypticcrosswordcluesolving/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..31a6c5b49ce038f77550d9ca8b3c635bce652de9
--- /dev/null
+++ b/areasoningbasedapproachtocrypticcrosswordcluesolving/full.md
@@ -0,0 +1,738 @@
+# A Reasoning-Based Approach to Cryptic Crossword Clue Solving
+
+Martin Andrews1 Sam Witteveen1
+
+# Abstract
+
+Cryptic crossword clues are challenging language tasks for which new test sets are released daily by major newspapers on a global basis. Each cryptic clue contains both the definition of the answer to be placed in the crossword grid (in common with regular crosswords), and 'wordplay' that proves that the answer is correct (i.e. a human solver can be confident that an answer is correct without needing crossing words as confirmation). This work describes an LLM-based reasoning system built from open-licensed components that solves cryptic clues by (i) hypothesising answers; (ii) proposing wordplay explanations; and (iii) using a verifier system that operates on codified reasoning steps. Overall, this system establishes a new state-of-the-art performance on the challenging Cryptonite dataset of clues from The Times and The Telegraph newspapers in the UK. Because each proved solution is expressed in Python, interpretable wordplay reasoning for proven answers is available for inspection.
+
+# 1. Introduction
+
+There has been significant work in reasoning in the fields of mathematics (Jiang et al., 2023; Yang et al., 2023; Trinh et al., 2024) and code generation (Ni et al., 2023; Ridnik et al., 2024) which benefit from having strong verifiers to validate their answers. This work tackles the relatively under-studied reasoning task of cryptic crossword solving, which has the following qualities:
+
+- Thousands of people find Cryptic Crosswords a satisfying intellectual challenge on a daily basis. Solving these puzzles requires understanding multi-layered language constructs, blending logic, wordplay, and contextual nuance. This provides a unique challenge for evaluating and improving LLMs' capabilities in NLU and reasoning.
+
+
+Figure 1. Proving process: answer candidate $\rightarrow$ wordplay $\rightarrow$ LLM formalisation
+
+- There are decades of solved puzzles (each one containing over 20 clues) from multiple major newspapers available, and new puzzles are published daily. This contrasts with (for instance) IMO/AIME problems, where there is a much lower number of novel problems available.
+- The method in this work explicitly reveals the reasoning (i.e. validated wordplay) required to solve each problem. By construction, there is one 'true' reasoning path, although it might be expressed in different ways by different solvers.
+
+# 1.1. An example Cryptic Clue
+
+As a concrete example, and to clarify the terminology used, consider the following moderately complex cryptic clue1 : "Cut up over politician on the French case (7)".
+
+Solvers must parse the clue carefully to separate the definition (which acts as a regular crossword clue) and the supporting wordplay that can be used to arrive at the same answer from two directions. Arriving at the same answer by two paths constitutes the necessary proof that the correct answer has been found. See Figure 2a for a visual depiction of the reasoning involved.
+
+In this work, we take our cue from the effectiveness of provers coupled with verifiers for mathematical reasoning tasks (Jiang et al., 2023). We tackle the cryptic crossword clue solving task using an LLM to (i) suggest answer candidates; (ii) create informal proofs (i.e. coming up with wordplay); and (iii) perform a formalisation process (which rewrites the wordplay logic in Python). The proposed solutions (expressed as executable Python) are then checked for validity.
+
+# 1.2. Contributions
+
+The following are the main contributions of this work:
+
+- An open-license system for reasoning over Cryptic clues - our pipeline (illustrated in Figure 1) enables 9B-scale local models to achieve state-of-the-art results on the Cryptonite dataset.
- Local models for cryptic clue tasks - We show how local LLMs can be fine-tuned to produce answer candidates and wordplay suggestions, and then prompted to perform wordplay formalisation. Following an approach akin to mathematical statement formalisation, but where fewer than 10 examples of 'good proofs' are available, our novel pipeline was specifically engineered to avoid 'reasoning steps' becoming stuck in dead ends.
+- Python domain-specific verifier - Using the output of the formaliser, the verifier presented here deconstructs the Python AST, so that it can evaluate each assert statement on a line-by-line basis. We believe that this is somewhat novel, since it enables the verifier to not only indicate whether the proof is valid overall, but also point to specific failures (used to regenerate failed formalisations) on all proof lines simultaneously.
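The line-by-line treatment of assert statements can be sketched by splitting the module AST into individual statements and executing them in order, recording each assert's outcome instead of stopping at the first failure (an illustrative sketch, not the released verifier; the example proof and its answer are hypothetical):

```python
import ast

def check_proof(source):
    """Evaluate a Python wordplay proof statement by statement,
    returning {line number: True/False} for every top-level assert."""
    tree = ast.parse(source)
    env = {}
    results = {}
    for node in tree.body:
        # Compile each top-level statement on its own so failures can
        # be attributed to a specific source line.
        unit = ast.Module(body=[node], type_ignores=[])
        code = compile(unit, "<proof>", "exec")
        if isinstance(node, ast.Assert):
            try:
                exec(code, env)
                results[node.lineno] = True
            except AssertionError:
                results[node.lineno] = False
        else:
            exec(code, env)   # definitions and intermediate wordplay steps
    return results

proof = """answer = 'OVERLAP'  # hypothetical example clue answer
assert len(answer) == 7
assert answer.startswith('X')
"""
# check_proof(proof) -> {2: True, 3: False}
```

Reporting every failing line at once is what lets the pipeline regenerate only the broken parts of a formalisation rather than discarding the whole proof.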
+
+To promote further study in this area, all code for training the models, the formaliser and domain-specific verifier is made publicly available.
+
+# 2. Related Work
+
+# 2.1. Regular Crosswords
+
+Non-cryptic ("regular") crosswords are known throughout the world, and are the predominant type found in newspapers in the U.S.A. One key difference from cryptic crosswords is that individual regular crossword clues are generally not 'standalone' - there may be a number of different answers that fit the given clue. The key to solving regular crosswords is thus the interaction between answers (i.e. the crossing-words), which allows for planning/backtracking to enable solving rates in the high $90\%$ range (Wallace et al., 2022).
+
+This work, in contrast, focuses on the solving of clues on a standalone basis, which requires elements of reasoning through the wordplay present in cryptic clues.
+
+# 2.2. Cryptic Crosswords
+
+In an 800 participant research study into the backgrounds of cryptic crossword solvers (Friedlander & Fine, 2016), the following observation was made about the skills required to solve these linguistic/reasoning puzzles:
+
+"... cryptic crossword skill therefore appears to be bound up with code-cracking and problem-solving skills of a quasi-algebraic nature. Conversely, lexical ability, although no doubt valuable, does not appear to be a critical discriminator of high expertise among elite solvers."
+
+Cryptic crosswords have received surprisingly little attention from the machine learning community, despite being a notable language-oriented reasoning puzzle with global appeal. One possible reason is that cryptic crosswords are much less common in the United States than 'regular crosswords'. See Anthony & Goodliffe (2024) and Webb (2024) for inspiring demonstrations of experts solving cryptic crosswords in real-time.
+
+The benchmark dataset used by this work is Cryptonite (Efrat et al., 2021) - a large-scale dataset of Cryptic Crossword clues from The Times and The Telegraph (major UK newspapers). The dataset contains 523,000 naturally sourced clues (published between 2001 and 2020), with the train, validation and testing splits being chosen so that a given answer can only appear in one of the splits.
+
While the dataset made available in Rozner et al. (2021) is also of interest, its clues are limited to those from the Guardian newspaper, and Connor (2024) notes in the Guardian's own blog "The Times hosts an annual crossword-solving competition and it remains, until such time as the Guardian has its own version, the gold standard." Moreover, that dataset's smaller collection of clues (142,000) lacks orientation markings ('across/down'), which are required to make sense of some wordplay.
+
+For a more in-depth discussion of the decision to focus on the Cryptonite dataset (and not perform testing on the Guardian dataset), please see Appendix A.3. In summary, while the 'Init' split presented in Rozner et al. (2021) has attractive properties (explored there, and in other works), this work specifically targets the reasoning side of cryptic clues, which involves fine-tuning models on Cryptonite (including Wordplay examples with carefully matched train/val/test splits). This precludes us from doing the same kind of multi-dataset comparisons found elsewhere.
+
+# 2.2.1. RULE-BASED SOLVERS
+
Figure 2. Clue solving illustrations. Answers are in green, definitions in blue (dashed frame), wordplays in orange, and indicators in purple. Further textual examples can be found in Appendix A.1

Williams & Woodhead (1979) is an early example of attempting to devise a formal language for describing cryptic clues. However, the linguistic elements of the clues tend to thwart a strictly formal approach.
+
+A more flexible rule-based solver with a manually-crafted probabilistic grammar was introduced in Deits (2015; 2022). Building on the assumption that a clue can usually be split into wordplay and definition, the solver tries to find the most probable parse such that the wordplay yields a semantically-similar result to the definition. The logical form of this DSL approach is very appealing. However, it appears limited to solving clues where the wordplay is somewhat simple (due to the combinatorial explosion of possibilities created by longer/more complex clues).
+
+The goal of this work is to use the flexibility of LLMs to enable a far wider range of clues to be attempted, with the aid of a formaliser/verifier to check the solutions.
+
+# 2.2.2. LLM-BASED SOLVERS
+
+Cryptonite is a challenging task for LLMs: Efrat et al. (2021) reported that fine-tuning T5-Large (a 770M encoder-decoder model) on Cryptonite's 470k cryptic clue training set achieved only $7.6\%$ test set accuracy, slightly below the $8.6\%$ accuracy of the rule-based clue solver of Deits (2022). Interestingly, prior to 2024, even large-scale Language Models scored very poorly on cryptic clues, likely due to (i) the misleading surface reading of the clues; (ii) the obliqueness of the definitions; and (iii) the reasoning steps required to prove the answer correct based on the wordplay.
+
+Recent works, such as Sadallah et al. (2024) and Saha et al. (2024), tackle cryptic crosswords with more up-to-date local models and commercial LLMs. Saha et al. (2024) reports results with 5- and 10-Shot prompting (without fine-tuning the models), but also includes a wide-ranging study of the capabilities of models for crosswords in general. We include experiments that bring the relevant baselines up-to-date, and
+
+also touch on their illuminating Partial Correctness Metrics (which are relevant when attempting full grids, which is not the main focus here).
+
+In this work, building on the groundwork of Andrews & Witteveen (2024) and Andrews & Witteveen (2025), we use a pipeline of 9B-scale LMs to produce answer candidates and wordplay suggestions, followed by a third LM to formalise each proposed solution using Python code and then rewrite/update the solutions based on feedback from a purpose-built verifier. In our results, we focus on the 'pure' Cryptonite benchmark: accuracy is judged on a Top-1 basis (with the model's single answer being marked wholly correct or not), with no crossing letters being given. Framed as a reasoning task, if the model 'understands' the cryptic solution properly, the answer will be wholly correct - there should be no partial marks.
+
+# 2.3. Code & reasoning
+
+To compensate for LLM approximate generation of logical reasoning, techniques like PAL (Gao et al., 2023) exploit LLMs' facility for writing code to create verifiable reasoning chains. An important influence on this work was also the Draft, Sketch, and Prove framework (Jiang et al., 2023) which uses an LLM to draft and create proofs that are then verified formally.
+
+As with the prior LM autoformalisation work of Ye et al. (2023), we chose Python as the intermediate language into which the natural language statement was formalised. In our case, rather than using an external prover, our system formalises its proofs directly in Python, using callable functions such as is_anagram(). This light-DSL approach was essential for our NLP task, since we found (for instance) that LLMs have trouble recognising whether two sequences
+
+```txt
+"publication": "FT",
+"setter": "falcon", "author": "teacow",
+"num": 16, "ad": "D", "pattern": "8",
+"clue": "{Offer} of support also broadcast",
+"wordplay": "PROP (support) + (ALSO) * (*broadcast)",
+"answer": "PROPOSAL"
+```
+
+Figure 3. An example from the Wordplay dataset (in this wordplay, (*) is an anagram indicator). This clue's solution is diagrammed in Figure 2b.
+
+of letters are anagrams of each other.
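+Sequence-level checks like this are trivial in code, which motivates the external-function DSL. A minimal sketch of what an `is_anagram()` implementation could look like (only the function name comes from the paper; the body is assumed):
+
+```python
+def is_anagram(a: str, b: str) -> bool:
+    """True if the letters of `a` rearrange exactly into those of `b`
+    (ignoring spaces and case) - a check LLMs often get wrong token-by-token."""
+    canonical = lambda s: sorted(s.replace(" ", "").upper())
+    return canonical(a) == canonical(b)
+```
+
+For example, this confirms the anagram step in Figure 3's wordplay, where ALSO is rearranged into OSAL.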
+
+In contrast with the tool-integrated reasoning framework Gou et al. (2024), where an LLM for mathematical problem-solving was fine-tuned on 16,000 examples of formalisation, we found that our light-DSL was able to be used by LLMs based on its in-context description alone. For full prompting details, please see Appendix A.5.4.
+
+Informed by the evolution from AlphaCode (Li et al., 2022), in which huge numbers of programs are generated and filtered in order to find a valid solution, to AlphaCodium (Ridnik et al., 2024), in which solutions are iterated upon using far less computation, this work uses a verifier that can feed back 'hints' to the formalising LLM, making the task of re-writing nearly-valid proofs easier.
+
+# 2.4. Wordplay dataset
+
+The Wordplay dataset (Andrews, 2024) - an example from which is given in Figure 3 - consists of data gathered from websites where cryptic crossword enthusiasts post solutions on a daily basis for each of the major publications. Each completed puzzle is annotated by an individual, identifiable author/solver, who lists the approximately 20 clues with their definition, wordplay and answer fields. Note that each solver can choose their own 'standard' for writing out the wordplay, leading to significant variation in wordplay annotation styles (even across time for an individual solver). The Wordplay dataset deliberately follows the train, validation, and test splits defined by Cryptonite.
+
+# 3. Methods
+
+The overall system described in this work is illustrated in Figure 1, and the code is available under an Apache 2 license. The order of operations for the pipeline was chosen based on watching human solvers - who report going through the following steps: (a) attempting to parse the clue in a number of ways, trying to isolate the definition from the wordplay; (b) seeing which parts of the wordplay they
+
+are most confident about; (c) 'having a hunch' of the final answer; and (d) gaining a full understanding of how a clue's wordplay works (such that the function of every element can be explained) as proof of the overall process.
+
+Concretely, the system starts with the clue, and generates 20 answer candidates. For each unique candidate, the next step is to generate 10 guesses at wordplay to justify the answer. Each of these wordplay hypotheses is then 'formalised' as a Python program in a particular form, which can then be verified by executing it (with several retries in case of failure). Successful executions are taken as proof that the original answer was correct.
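+The order of operations above can be sketched as plain control flow (a hedged sketch: `gen_answers`, `gen_wordplays`, `formalise` and `verify` are stand-ins for the fine-tuned Gemma2 generators and the formaliser/verifier described below; the names and structure are illustrative, not the released code):
+
+```python
+from collections import Counter
+
+def solve_clue(clue, gen_answers, gen_wordplays, formalise, verify, max_rewrites=2):
+    # Step 1: sample 20 answer candidates; keep frequencies for the fallback answer
+    candidates = gen_answers(clue, n=20)
+    fallback = Counter(candidates).most_common(1)[0][0]
+    for answer in dict.fromkeys(candidates):  # unique candidates, in order
+        # Step 2: 10 wordplay hypotheses per unique candidate
+        for wordplay in gen_wordplays(clue, answer, n=10):
+            proof = formalise(clue, answer, wordplay)
+            # Step 3: verify, allowing a bounded number of hinted re-writes
+            for _ in range(max_rewrites + 1):
+                ok, hints = verify(proof)
+                if ok:
+                    return answer  # a validated proof: answer taken as correct
+                proof = formalise(clue, answer, wordplay, hints=hints)
+    return fallback  # no proof validated: most frequent candidate wins
+```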
+
+Observations of the behaviour of GPT-4 using Chain-of-Thought prompts (Wei et al., 2022) suggest that even very capable models tend to fixate early during the reasoning process, and are only rarely capable of completely re-hypothesising. These LLMs also frequently become caught up with the literal 'surface' meaning of the clue, which is often (deliberately, on the part of the setter) misleading. Organising our system's pipeline to hypothesise candidate answers as the first step (so that the models must try to fit the reasoning to the answer, with varying degrees of success) bakes re-hypothesisation into the process.
+
+# 3.1. Candidate answer generation
+
+Our first step to solving a given cryptic clue is to generate multiple answer candidates from the original clue, pattern and ad (across/down) fields. For this task, we fine-tuned a Gemma2 9B base model (Gemma Team & Google DeepMind, 2024) using the LoRA (Hu et al., 2022) implementation provided by the unsloth package (unsloth.ai, 2024). The model was trained for 1 epoch on the Cryptonite training set of approximately 470,000 examples.
+
+For each clue being evaluated, we generate 20 valid answer candidates: candidates that did not match the pattern were immediately rejected and regenerated, and those not contained in the crossword word list (Beresford, 2000) were filtered out. The number of candidates was chosen to balance generation cost against the likelihood of the correct answer appearing in the candidate list - see Figure 7 for a cumulative frequency analysis. The list of candidates was then grouped so that the frequency of each answer could be found, enabling statistics to be collected.
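+The pattern check and word-list filter can be sketched as follows, assuming the pattern field is a comma- or hyphen-separated enumeration of word lengths (e.g. '8' or '3,5') and that multi-word answers are separated by spaces or hyphens; the helper names are illustrative, not the paper's code:
+
+```python
+import re
+
+def matches_pattern(answer: str, pattern: str) -> bool:
+    """Check a candidate against an enumeration such as '8', '5,4' or '3-4'."""
+    lengths = [int(n) for n in re.split(r"[,-]", pattern)]
+    words = re.split(r"[ -]", answer)
+    return [len(w) for w in words] == lengths
+
+def filter_candidates(candidates, pattern, word_list):
+    """Reject candidates failing the pattern or absent from the word list."""
+    return [c for c in candidates
+            if matches_pattern(c, pattern)
+            and c.replace(" ", "").replace("-", "").upper() in word_list]
+```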
+
+# 3.2. Generation of definition and wordplay suggestions
+
+To train the wordplay suggestion model, which translates each answer candidate into multiple definition
+
+
+Figure 5. External functions available via In-Context Learning
+
+and wordplay suggestions, we make use of the Wordplay dataset of Andrews (2024). For this task, we fine-tuned another Gemma2 9B base model using LoRA. The model was trained on 4 epochs on a set of approximately 16,800 examples (consisting of solution explanations of puzzles from The Times and The Financial Times from selected authors in the Wordplay dataset).
+
+# 3.3. Python formalisation
+
+Rather than create a dataset with many examples of formalisation, here we use in-context prompting with fewer than 10 examples of the formalisation style required. In preliminary work, we concluded that the available Gemini-Flash LLM was not capable of using a (novel) cryptic crossword domain specific language ("DSL") through in-context learning with so few examples. In contrast, we found that the LLM could be prompted to produce Python code with relative ease, so the approach taken was to frame a declarative-style-DSL as Python function calls within assert statements. The LLM was found to be able to reliably produce syntactically correct Python, and use the 'external functions' that had been described (as illustrated in Figure 5) to form logical sequences of declarations, which could then be parsed line-by-line by manipulating the Python abstract syntax tree ("AST"). An example of the Python DSL being generated by the formalisation LLM is given in Figure 4, with the workings of the clue solution being illustrated in Figure 2c.
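+Since each proof is a flat sequence of assert statements, collecting them for line-by-line checking needs only the standard `ast` module; a minimal sketch (the example proof string is invented, in the style of Figure 4, and is only parsed here, never executed):
+
+```python
+import ast
+
+def extract_assertions(src: str):
+    """Return the asserted expressions from a formalised proof, as source text."""
+    return [ast.unparse(node.test)
+            for node in ast.walk(ast.parse(src))
+            if isinstance(node, ast.Assert)]
+
+# Invented example in the style of the formaliser's output
+example_proof = '''
+def proof(answer="PROPOSAL"):
+    assert is_synonym("offer", answer)
+    assert is_anagram("ALSO", "OSAL")
+'''
+```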
+
+To formalise wordplay into Python 'proofs' of the correctness of solutions, we used Google's Gemini-Flash-1.5-001 LLM (a pinned model version) during development. This model was initially chosen instead of a frontier-tier model since the formalisation task should not require much inventiveness/reasoning: the actual required steps are already present in the wordplay, the task is merely to translate to Python. To determine whether the choice of Gemini-Flash was
+
+
+Figure 4. Python proving: answer candidate $\rightarrow$ wordplay $\rightarrow$ LLM formalisation
+
+a limiting factor, we subsequently tested an unmodified Gemma2-9B-it model on the same task.
+
+In terms of the DSL itself, the back-end to the is_synonym and is_homophone functions consists of calls to simple language models. The action_type function performs a nearest-neighbour match against a list of indicator words, and the is_abbreviation function performs a look-up against a list of abbreviations - both sourced from Deits (2022). For string manipulation actions (such as 'REVERSE'), the LLM formaliser itself was capable of producing correct string manipulation expressions unaided.
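+The look-up functions are straightforward; a toy sketch of `is_abbreviation` with a few invented entries (the actual list used by the system is sourced from Deits (2022)):
+
+```python
+# Invented sample entries: crossword abbreviations keyed by the phrase they stand for
+ABBREVIATIONS = {
+    "doctor": {"DR", "MO", "GP"},
+    "quiet": {"P", "SH"},
+    "river": {"R"},
+}
+
+def is_abbreviation(phrase: str, letters: str) -> bool:
+    """True if `letters` is a recognised crossword abbreviation for `phrase`."""
+    return letters.upper() in ABBREVIATIONS.get(phrase.lower(), set())
+```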
+
+# 3.4. In-Context Learning
+
+To produce Python code that could be sent to the prover, the LLM was prompted in an In-Context Learning ("ICL") manner. This consisted of the following parts:
+
+1. Cryptic crossword rubric to explain to the LLM what the principles were behind the fields such as clue, definition, wordplay, etc.
+2. 20-shot examples clue $\rightarrow$ wordplay
+3. The 'external functions' rubric shown in Figure 5
+4. Few-shot wordplay $\rightarrow$ Python formalisations (6 examples given)
+5. The actual clue, answer, definition and wordplay being formalised
+
+Gemini-Flash did not appear to be particularly sensitive to the prompting style used, except in the 'handover step' (between problem description and model generation), where several trials were needed to consistently obtain the final function definition in the required format. Further details of all the ICL prompts are given in Appendix A.5. For the Gemma2-9B-it formalisation runs, the same prompts were used unchanged (with no other tuning/training). In addition, a further Gemma2-9B model was trained on 448 valid Gemini-created proofs of ground-truth Wordplay examples.
+
+
+Figure 6. Illustrative AssertionError responses (with hinting) from the verifier
+
+
+Figure 7. Statistics of answer candidate list, as more candidates generated
+
+
+
+# 3.5. Proof Verification with Hinting
+
+The system's verifier must decide whether a given formalisation is valid, and report any errors found to iteratively improve the Python code as feedback to the LLM formaliser in a cycle, as seen in Self-Debug (Chen et al., 2024), and AlphaCodium (Ridnik et al., 2024). Examples of assertion failures, with constructive hinting, are shown in Figure 6.
+
+This cycle is repeated until a formalisation is validated (zero assertion failures, considered a 'SUCCEED' with the answer having been proved), or max_rewrites=2 is reached. If no Python formalisation can be validated, then the fallback answer is used (defined as being the most frequent answer amongst the original candidates produced in the first stage of solving the clue).
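+The verify-and-hint cycle turns each failed execution into feedback; a minimal sketch of the verifier core (the actual verifier analyses the code line-by-line and constructs richer hints, as illustrated in Figure 6 - the bare `exec` here is purely illustrative):
+
+```python
+def run_verifier(proof_src: str, dsl_functions: dict):
+    """Execute a formalised proof: zero assertion failures is a 'SUCCEED',
+    otherwise the AssertionError message becomes the hint for the re-write."""
+    try:
+        exec(proof_src, dict(dsl_functions))  # fresh namespace per attempt
+        return True, None
+    except AssertionError as e:
+        return False, str(e) or "assertion failed"
+```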
+
+# 3.6. Partial Correctness Metrics
+
+One interesting direction explored in Saha et al. (2024) was the performance of LLMs on cryptic clues if some of the letters were known (as would be the case if an entire grid were being solved). The conditions examined were with $25\%$ , $50\%$ and $70\%$ of letters 'known'. Based on observations working with the proposed system for single clues (and more general experience of crossword solving), we approached this problem in two ways.
+
+For the $25\%$ level of letters 'known', it was a simple matter to run the current system with candidate answers that did not match the known letters filtered out. For the higher levels of known letters, we instead used the FastText embedding method of Mikolov et al. (2018) to find the nearest-neighbour answer within The UK Advanced Cryptics Dictionary (Beresford, 2000), comparing against the embedding of the raw clue phrase itself.
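+For the filtering variant, matching candidates against known letters is a simple positional check; a sketch, assuming unknown cells are marked with underscores (the marker convention is our own, not from the paper):
+
+```python
+def matches_known(candidate: str, known: str) -> bool:
+    """True if `candidate` is consistent with the partially-filled grid entry
+    `known`, where '_' marks an unknown cell (e.g. 'H_R_N' for HERON)."""
+    c = candidate.replace(" ", "").upper()
+    return len(c) == len(known) and all(
+        k == "_" or k == ch for k, ch in zip(known.upper(), c))
+```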
+
+The 'Partial Correctness' results, since they are not the core thrust of this work but interesting in their own right, are given in Appendix A.5.7.
+
+# 4. Experiments
+
+# 4.1. Gemma2 9B answer candidate generation
+
+During the initial experimental phases of fine-tuning local models for the answer generation task it was discovered that -base models scored more highly than -it models. This might be explained by observing that instruction fine-tuning may (to some extent) penalise off-the-wall answers, which may be essential for our task. In addition, we also observed that while the Top-1 candidate from a model generating with a temperature $t = 0.5$ had high accuracy, it was beneficial to run candidate generation with $t = 1.0$ (even though the Top-1 accuracy was lower in this case) - since having a wider spread of answer candidates was useful for our pipeline overall.
+
+Figure 7a shows that the probability of the gold answer being among the candidates produced is (unsurprisingly) monotonically increasing in the number of independent samples. It also shows that this process is not yet asymptotically limited, although slowing down with increasing $n$ .
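+The statistic behind Figure 7a can be computed directly from the per-clue candidate lists; a minimal sketch (names illustrative):
+
+```python
+def coverage_curve(sample_lists, gold_answers):
+    """For n = 1..max, the fraction of clues whose gold answer appears
+    among the first n sampled candidates (the Figure 7a-style curve)."""
+    max_n = max(len(samples) for samples in sample_lists)
+    return [
+        sum(gold in samples[:n] for samples, gold in zip(sample_lists, gold_answers))
+        / len(gold_answers)
+        for n in range(1, max_n + 1)
+    ]
+```
+
+The curve is monotonically non-decreasing by construction, since each prefix of the candidate list contains the previous one.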
+
+Figure 7b shows that choosing the highest-frequency answer candidate can be a very effective strategy. However, there is a clear limit to this idea: There is a significant probability that cryptic crossword answers are in the long tail of potential answers. Indeed, intentionally creating misleading clue 'surface readings' is a hallmark of good cryptic clue setting.
+
+# 4.2. Gemma2 9B wordplay candidate generation
+
+Since wordplay is so flexible, it is difficult to evaluate it for accuracy against other examples (without, say, a large LLM to evaluate the differences). However, good wordplay should result in good formalisations, so evaluation is available on an end-to-end basis.
+
+One key assumption in the system proposed here is that a correct answer should lead to interpretable wordplay, whereas an incorrect answer candidate should give rise to unformalisable/unverifiable wordplay. The following typical example illustrates how the correct answer leads readily to correct wordplay (the workings of this clue
+
+are illustrated in Figure 2d), whereas trials with an incorrect answer candidate (which was, in fact, the most frequent candidate for this clue) give clearly unverifiable wordplay:
+
+```txt
+clue: "wader woman has on (5)"
+definition: "{wader} woman has on"
+
+# correct answer
+answer: "HERON"
+wordplay: "woman (HER) has on (ON)"
+
+# incorrect answer - 3 trials shown
+answer: "EGRET"
+wordplay: "woman (HER) on top of REG - another word for on, as in 'do you have the heating on?'"
+wordplay: "EG (woman has) + RET (on)"
+wordplay: "woman (HER) has on/around (EG) - a wader bird"
+```
+
+# 4.3. Cryptonite Results (Top-1 exact match)
+
+In this work, we focus our testing on using the Cryptonite dataset of Efrat et al. (2021) as a benchmark, with the Top-1 exact-match results shown in Table 1. As in Saha et al. (2024), due to computational constraints, we performed sampling of the validation and test sets, using fewer than the full 26k examples available. The standard deviation of these figures is $\approx \pm 1.5\%$ at 1000 samples, and $\approx \pm 3.3\%$ at 200. To determine whether the systems presented here 'beat' GPT-4o, we performed a Bayesian Item Response Theory test (Fox, 2010) to estimate the probability that our results outperformed the GPT-4o (over the same samples).
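+The quoted sampling errors are consistent with the binomial standard error $\sqrt{p(1-p)/n}$ for accuracies near $30\%$; a quick check:
+
+```python
+import math
+
+def sampling_std(p: float, n: int) -> float:
+    """Standard error of a Top-1 accuracy estimate from n sampled clues."""
+    return math.sqrt(p * (1 - p) / n)
+
+# Near p = 0.3: roughly 0.0145 (1.5%) at n = 1000, and 0.032 (3.2%) at n = 200
+```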
+
+The 5-Shot results in Table 1 show that:
+
+- GPT-4o (2024-11-25) gives stronger results than those of GPT-4-Turbo (2024-04-09) given in Saha et al. (2024) - so this is an updated baseline;
+- The updated GPT-4o results show surprisingly strong performance on the validation split (unfortunately, the composition of this commercial model's training data is unknown);
+- Gemini-1.5-Flash-001 (which was used in development of the formaliser) is not particularly good at solving cryptic clues in itself;
+- The Gemma2-9B model gets a large uplift from finetuning on the Cryptonite training set (compare the 5-Shot figures to the later Gemma2-9B FT ones).
+
+The Gemma2-9B FT accuracy figures are for the first result returned by the fine-tuned Gemma2 model. In contrast, the Gemma2-9B freq accuracy figures are for the most common (i.e. highest frequency) result among the Gemma2 answer candidates (for which 20 samples were generated
+
+for each clue). These voting-based results would have exceeded prior state-of-the-art results for open-licensed models on their own.
+
+Going beyond single models, the Gemini-Flash Formaliser demonstrates Top-1 exact-match performance of $32.5\%$ for the Cryptonite Test set, establishing a new state-of-the-art result against the updated baselines (the Bayesian IRT results are that Gemini-Flash has a probability of $92\%$ of being actually better than GPT-4o).
+
+Moreover, the results of the non-fine-tuned Gemma2-9B-it Formaliser also (marginally) beat the previous state-of-the-art results - which is perhaps an even stronger statement about the capabilities of the system described here, since in this case Gemma2-9B models have been used throughout the solving process, showing that it is possible to achieve very competitive cryptic crossword solving results through reasoning with open-licensed models. The Bayesian IRT results are that the Gemma-9B FT model has a probability of $81\%$ of being actually better than GPT-4o on Hard clues, $57\%$ overall.
+
+The formaliser results are (surprisingly) relatively worse for Quick clues. This seems to be related to the fact that the agreement/frequency-based Gemma2 freq model is very strong on these clues, so any 'contribution' from the formalising/verification procedure is likely to overrule a good baseline result, due to erroneous verification of 'proofs' that are not valid.
+
+# 4.4. Ablations
+
+The lines in Table 1 marked ' (AB)' are ablations. Both utilise the measurement of average logprob of the output tokens given by the relevant model.
+
+The first ('logprob answer') shows the results of using the candidate answer generation Gemma2-9B FT model from above, with the candidate answer being chosen from the list of 20 possibilities according to highest logprob. Since answers are typically very short, this method is similar to the frequency-based selection model.
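+The selection rule for this ablation is a one-liner; a sketch, with `scored_candidates` standing in for the 20 sampled answers paired with their average token logprobs (an assumed data shape, not the paper's code):
+
+```python
+def pick_by_logprob(scored_candidates):
+    """Ablation selection: the candidate whose generation had the
+    highest average token logprob under the generating model."""
+    return max(scored_candidates, key=lambda pair: pair[1])[0]
+```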
+
+The second ('logprob wordplay') shows the results of evaluating the Gemma2-9B FT model that generates wordplay hypotheses, and choosing an answer based on the highest logprob according to that generating model. Somewhat unexpectedly, this was not as effective as might be assumed from the generated wordplay seen in Section 4.2 - where the wordplay for wrong answers looks absurd. Examining samples of the wordplay most favoured by pure logprob order, it seems that the generating LLM finds simply-worded but completely fictitious wordplay quite likely.
+
+Both of these ablations demonstrate that the formalisation and verification steps are essential components in our system - they cannot be shortcut by a 'dumb ranker' in the pipeline.
+
+Table 1. Cryptonite results: Standard splits, Top-1 answer accuracy rate
+
+| Model | samples | Validation Overall | Validation Quick | Validation Hard | Test Overall | Test Quick | Test Hard |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| Rule-based (*) | 26k | 8.3% | - | - | 8.6% | 13.5% | 5.8% |
+| T5-large (770M) FT (*) | 26k | 7.4% | - | - | 7.6% | 12.8% | 3.4% |
+| Gemma2-9B-it 5-shot | 1000 | 5.7% | 11.5% | 5.2% | 4.5% | 10.5% | 4.0% |
+| Gemini-Flash 5-shot | 1000 | 6.6% | 12.5% | 6.1% | 6.5% | 11.8% | 6.1% |
+| GPT-4o 5-shot | 1000 | 29.8% | 45.0% | 28.5% | 27.6% | 47.4% | 26.0% |
+| Gemma2-9B FT | 1000 | 21.7% | 28.8% | 21.1% | 15.9% | 38.2% | 14.1% |
+| Gemma2-9B freq (#=20) | 1000 | 26.6% | 31.3% | 26.2% | 25.5% | 55.3% | 23.1% |
+| (AB) logprob answer | 500 | 23.9% | 35.9% | 22.9% | 22.7% | 55.3% | 20.1% |
+| (AB) logprob wordplay | 200 | 21.0% | 15.4% | 21.4% | 20.5% | 46.7% | 18.4% |
+| Gemini-Flash Formaliser | 200 | 28.0% | 23.1% | 28.3% | 32.5% | 46.7% | 31.4% |
+| Gemma2 9B-it Formaliser | 200 | 26.0% | 23.1% | 26.2% | 29.0% | 46.7% | 27.6% |
+| Gemma2 9B-FT Formaliser | 200 | 27.0% | 23.1% | 27.3% | 29.5% | 53.3% | 27.6% |
+
+Rows (*) are as reported in Efrat et al. (2021); the Hard columns are for the non-Quick clues.
+
+# 4.5. Qualitative Error Analysis
+
+In addition to the numerical results presented in Table 1, we note the following qualitative aspects of our system's performance:
+
+- The headline success rate is bounded above by the initial candidate answer generation process. If the system cannot guess the answer in its top-k ( $k = 20$ here), the remaining process is doomed. As shown in Figure 7a, even with higher top-k, this puts an upper bound on performance that is well below $100\%$ correct. Having better candidate answer generation would be beneficial - and this would feed directly through to our verification process
+- While the proprietary models may output the correct final answer, it is often the case that their 'reasoning process' makes no logical sense (indicating, perhaps, that they have memorised clue/answer pairs). In contrast, our method does give us useful human-interpretable reasoning for each solution
+- A significant source of false negatives is the is_synonym function, which relies on a sequence of steps: first we attempt a look-up in an open-source thesaurus, then in a dataset of 'regular crossword answers', with the final fall-back being to ask an LLM whether the given phrases are synonyms. While the first two steps may vote positively (for easy matches), it is common in cryptic clues for the definition and the answer to be more distantly related than in regular crosswords. For instance, in Appendix A.1.7, we have the true answer UNDERMINED being defined by 'damaged'. This would likely be too distant to be reasonable for a regular crossword, but the strength of the wordplay (the answer being literally given in the clue) is confirmation enough to satisfy solvers. Setting this 'synonym distance hurdle' is an ongoing challenge.
+
+# 4.6. Known Limitations of the System
+
+While the verifier implemented for this work is effective, it does not completely cover the following potential 'shortcuts' in the Python functions it analyses:
+
+- The entire Python function might consist of comments, so that nothing could trigger an assert. This has been partly countered by requiring the Python code to include at least 2 assert statements
+- The Python function contains conditional execution, routing around assert statements
+- Occasionally, the hint `assert XYZ failed` results in the re-write `assert XYZ == False`, which is clearly not productive
+- The proof may be logically disconnected, with left-hand-side terms not being supported / justified by right-hand-side terms in other lines of the code
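+The first two of these shortcuts can be countered mechanically with the same AST tooling the verifier already uses; a sketch (the exact checks in the released code may differ):
+
+```python
+import ast
+
+def proof_is_wellformed(src: str, min_asserts: int = 2) -> bool:
+    """Reject degenerate 'proofs': require a minimum number of assert
+    statements, and forbid branching that could route around them."""
+    nodes = list(ast.walk(ast.parse(src)))
+    n_asserts = sum(isinstance(n, ast.Assert) for n in nodes)
+    has_branching = any(isinstance(n, (ast.If, ast.IfExp, ast.Try, ast.While))
+                        for n in nodes)
+    return n_asserts >= min_asserts and not has_branching
+```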
+
+These issues do not appear insurmountable, given time and effort. It should be noted that since the formalising LLM
+
+is only being used In-Context there is little chance that the above issues are being systematically abused (which would almost certainly happen if there was learning-in-the-loop in a Reinforcement Learning setting).
+
+# 5. Conclusions
+
+The authors recognize the domain-specificity of cryptic crossword solving. However, we believe that it serves as a rigorous and complex test-bed for reasoning, requiring multi-faceted language understanding and logic. While the specific DSL developed here is tailored to crosswords, the underlying principles of our approach – decomposition, formalization, and verification with feedback – are intended to be more broadly applicable to other reasoning tasks. Cryptic crosswords, with their clearly defined rules and solutions, allow for precise evaluation and iterative refinement of these principles.
+
+Our results have validated our overall approach to codification of the cryptic crossword problem domain: Generating answer candidates and wordplay suggestions followed by production of code via an LLM-based formalisation process, verification using Python code analysis tools, and iterative prompting of the LLM formaliser proved quite effective. Generating multiple candidate answers, followed by multiple wordplay samples, can be framed as inference-time computation (using 9B models) rather than using a large proprietary model. Due to the verification element, our system can benefit directly from additional test-time computation. Beyond numerical improvements, a key contribution is the verifiable reasoning process itself, offering interpretability not available from black-box models.
+
+We were happy to discover that our development work using the Gemini-Flash LLM as a formaliser was directly transferable to the open-licensed Gemma2-it model in the same role, with little loss of performance, enabling the whole pipeline to be run locally. The weakest link in the chain was, predictably, getting the 'Aha' of wordplay creation to work - humans can still generate wordplay that is beyond the capabilities of current models.
+
+The authors sincerely hope that this work sparks interest in the cryptic crossword domain, which presents an array of interesting and challenging reasoning problems on which fruitful research can take place, even for those with limited computation budgets.
+
+# 5.1. Further Work
+
+Although we haven't explicitly tested generalizability to other NLP tasks, we believe the developed techniques – particularly formal verification and iterative refinement – offer valuable insights for improving LLM reasoning in complex NLU scenarios. Future research could fruitfully explore
+
+applying these techniques to other reasoning-intensive NLP tasks.
+
+Around the time of the initial submission of this work, Guo et al. (2025) publicly disclosed a practical framework for learning to reason using Reinforcement Learning with an outcome-only reward scheme, which opens up a whole new avenue for investigation. While the OpenAI o1 models had previously displayed reasoning traces, these were not considered in Table 1: partly due to cost/API considerations, but also because the authors strongly feel that proprietary black-box methods have limited research value.
+
+Going forward, a Reinforcement Learning approach would clearly be very interesting to apply to the Cryptic Crossword domain, since these NLP reasoning problems are rather different from the mathematical proof / programming challenge tasks that are typically tackled. Section 4.6 highlights a potential issue with our verification approach when combined with RL, since there is a clear opportunity for RL reward hacking unless the verifier is made 'bullet-proof'.
+
+We look forward to exploring the Cryptic Crossword reasoning task - there is a wide range of avenues available.
+
+# Acknowledgements
+
+Support for this research was provided by the Google AI Developer Programs team, including access to the Gemini models and GPUs on Google Cloud Platform.
+
+The authors thank the ICML reviewers for their time and valuable feedback.
+
+# Impact Statement
+
+# Societal impact
+
+There are many current cryptic crossword enthusiasts who would potentially not welcome AI-enabled solvers 'taking over' their favourite pastime. In particular, when taken further, this line of work could be disruptive to public leaderboards that rank people by the time taken to solve puzzles $100\%$ correctly.
+
+More generally, this paper presents work whose goal is to advance the field of Machine Learning. The techniques developed in this work extend work done in other domains for machine reasoning to a broader field that includes NLP/NLU tasks. That being said, we do not feel that there are significant societal consequences of our work that require specific highlighting.
+
+# Potential bias in favour of native English speakers
+
+While the English language has a high capacity for ambiguity and wordplay, making this type of crossword possible, cryptic crosswords also exist in other languages (Wikipedia contributors, 2024). In addition, although solving cryptic clues may be very difficult (even for native English speakers), understanding the answer from given wordplay is much simpler.
+
+# Reproducibility
+
+# Datasets and Code
+
+The following outline the efforts that have been made to ensure reproducibility:
+
+- Datasets - Resources such as dictionaries used, and the Cryptonite and Wordplay datasets are available online, via the sources referenced in the main text.
+- System Code & LLM Prompts - Python code for the complete end-to-end system is available under an Apache 2 license at https://github.com/mdda/cryptic-crossword-reasoning-verifier. The prompts required for LLM formalisation are also included in the Appendix.
+- Models used / training - The models referenced in this work are available with open weights (the Gemma2 models), or via API (the Gemini-Flash model, with a pinned version number). The training procedures are outlined in the text, and therefore have some degree of reproducibility. The fine-tuned Gemma2 models will be made available on publication.
+
+# Computational requirements
+
+The Gemini LLM was accessed via API, and the total spend to create the results in this paper was less than $100 USD, with each prompt round-trip taking around 5 seconds. Fine-tuning the Gemma2 9B model took around 24 hours for a full Cryptonite training run, and 8 hours for the Wordplay dataset runs; thus, the single-GPU model runs totalled less than $50 USD.
+
+# References
+
+Andrews, M. Wordplay Dataset repository. https://github.com/mdda/cryptic-wordplay, 2024.
+Andrews, M. and Witteveen, S. Proving that cryptic crossword clue answers are correct. In ICML 2024 Workshop on LLMs and Cognition, 2024.
+Andrews, M. and Witteveen, S. Generating code to verify cryptic crossword reasoning. In ICLR 2025 Workshop on Deep Learning for Code, 2025.
+Anthony, S. and Goodliffe, M. Cracking the Cryptic (17-May-2024). https://youtu.be/vudt7LlUX00?t=124, 2024.
+Beresford, J. R. The UK Advanced Cryptics Dictionary. Technical report, published online, 2000. https://cfajohnson.com/wordfinder/.
+Chen, X., Lin, M., Scharli, N., and Zhou, D. Teaching large language models to self-debug. In The Twelfth International Conference on Learning Representations, 2024.
+Connor, A. Devious humour and painful puns: Will the cryptic crossword remain the last thing AI can't conquer? Guardian UK Crossword blog, 2024.
+Deits, R. rdeits/cryptics code repository. https://github.com/rdeits/cryptics, 2015.
+Deits, R. CrypticCrosswords.jl code repository. https://github.com/rdeits/CrypticCrosswords.jl, 2022.
+Efrat, A., Shaham, U., Kilman, D., and Levy, O. Cryptonite: A cryptic crossword benchmark for extreme ambiguity in language. In Moens, M.-F., Huang, X., Specia, L., and Yih, S. W.-t. (eds.), Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 4186-4192, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.emnlp-main.344. URL https://aclanthology.org/2021.emnlp-main.344.
+Fox, J.-P. Bayesian Item Response Modeling. Statistics for Social and Behavioral Sciences. Springer New York, 2010. ISBN 978-1-4419-0741-7. doi: 10.1007/978-1-4419-0742-4.
+Friedlander, K. J. and Fine, P. A. The grounded expertise components approach in the novel area of cryptic crossword solving. Frontiers in Psychology, 7, 2016. ISSN 1664-1078. doi: 10.3389/fpsyg.2016.00567. URL https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2016.00567.
+
+Gao, L., Madaan, A., Zhou, S., Alon, U., Liu, P., Yang, Y., Callan, J., and Neubig, G. PAL: Program-aided language models. In Proceedings of the 40th International Conference on Machine Learning, pp. 10764–10799, 2023.
+Gemma Team and Google DeepMind. Gemma 2: Improving open language models at a practical size, 2024. URL https://arxiv.org/abs/2408.00118.
+Gou, Z., Shao, Z., Gong, Y., Yang, Y., Huang, M., Duan, N., Chen, W., et al. ToRA: A tool-integrated reasoning agent for mathematical problem solving. In The Twelfth International Conference on Learning Representations, 2024.
+Guo, D., Yang, D., Zhang, H., Song, J., Zhang, R., Xu, R., Zhu, Q., Ma, S., Wang, P., Bi, X., et al. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.
+Hu, E. J., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L., Chen, W., et al. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations, 2022.
+Jiang, A. Q., Welleck, S., Zhou, J. P., Lacroix, T., Liu, J., Li, W., Jamnik, M., Lample, G., and Wu, Y. Draft, sketch, and prove: Guiding formal theorem provers with informal proofs. In The Eleventh International Conference on Learning Representations, 2023.
+Li, Y., Choi, D., Chung, J., Kushner, N., Schrittwieser, J., Leblond, R., Eccles, T., Keeling, J., Gimeno, F., Dal Lago, A., Hubert, T., Choy, P., de Masson d'Autume, C., Babuschkin, I., Chen, X., Huang, P.-S., Welbl, J., Gowal, S., Cherepanov, A., Molloy, J., Mankowitz, D. J., Sutherland Robson, E., Kohli, P., de Freitas, N., Kavukcuoglu, K., and Vinyals, O. Competition-level code generation with AlphaCode. Science, 378(6624): 1092-1097, December 2022. ISSN 1095-9203. doi: 10.1126/science.abq1158.
+Macnutt, D. S. Ximenes on the art of the crossword. Methuen, 1966.
+Mikolov, T., Grave, E., Bojanowski, P., Puhrsch, C., and Joulin, A. Advances in pre-training distributed word representations. In Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018), 2018.
+Ni, A., Iyer, S., Radev, D., Stoyanov, V., Yih, W.-t., Wang, S., and Lin, X. V. Lever: Learning to verify language-to-code generation with execution. In International Conference on Machine Learning, pp. 26106-26128. PMLR, 2023.
+
+Ridnik, T., Kredo, D., and Friedman, I. Code generation with AlphaCodium: From prompt engineering to flow engineering. arXiv preprint arXiv:2401.08500, 2024.
+Rozner, J., Potts, C., and Mahowald, K. Decrypting cryptic crosswords: Semantically complex wordplay puzzles as a target for NLP. In Advances in Neural Information Processing Systems, volume 34, pp. 11409-11421, 2021.
+Sadallah, A., Kotova, D., and Kochmar, E. Are LLMs good cryptic crossword solvers? arXiv preprint arXiv:2403.12094, 2024.
+Saha, S., Chakraborty, S., Saha, S., and Garain, U. Language models are crossword solvers. arXiv preprint arXiv:2406.09043, 2024.
+Trinh, T., Wu, Y., Le, Q., He, H., and Luong, T. Solving Olympiad geometry without human demonstrations. Nature, 2024. doi: 10.1038/s41586-023-06747-5.
+unsloth.ai. Unsloth code repo. https://github.com/unslothai/unsloth, 2024.
+Wallace, E., Tomlin, N., Xu, A., Yang, K., Pathak, E., Ginsberg, M., and Klein, D. Automated crossword solving. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 3073-3085, 2022.
+Webb, D. Epic crossword battle: Expert vs. Times cryptic puzzle #29029. https://youtu.be/N5p4TqdjsHs, 2024.
+Wei, J., Wang, X., Schuurmans, D., Bosma, M., Xia, F., Chi, E., Le, Q. V., Zhou, D., et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824-24837, 2022.
+Wikipedia. Cryptic crossword — Wikipedia, the free encyclopedia. https://en.wikipedia.org/w/index.php?title=Cryptic_crossword&oldid=1228427465, 2024. [Online; accessed 1-July-2024].
+Wikipedia contributors. Cryptic crossword - regional variation. https://en.wikipedia.org/wiki/Cryptic_crossword#Regional_variation, 2024.
+Williams, P. and Woodhead, D. Computer assisted analysis of cryptic crosswords. The Computer Journal, 22(1): 67-70, 1979.
+Yang, K., Swope, A., Gu, A., Chalamala, R., Song, P., Yu, S., Godil, S., Prenger, R. J., and Anandkumar, A. LeanDojo: Theorem proving with retrieval-augmented language models. Advances in Neural Information Processing Systems, 36:21573-21612, 2023.
+Ye, X., Chen, Q., Dillig, I., and Durrett, G. SatLM: Satisfiability-aided language models using declarative prompting. In Proceedings of NeurIPS, 2023.
+
+# A. Appendix
+
+# A.1. Cryptic Crossword Background
+
+The following borrows extensively from the description on Wikipedia (2024) (kudos to the authors there), to which we have added wordplay annotations in a notation typical of the FifteenSquare.com website (and in the Wordplay dataset used in this work).
+
+# A.1.1. BASICS
+
+A cryptic clue leads to its answer only if it is read in the right way. What the clue appears to say when read normally (the surface reading) is usually a distraction with nothing to do with the solution. The challenge is to find the way of reading the clue that leads to the solution.
+
+A typical clue consists of two parts:
+
+- The straight or definition. This is in essence the same as any non-cryptic crossword clue: a synonym for the answer. It usually exactly matches the part of speech, tense, and number of the answer, and usually appears at the start or end of a clue. For our annotations, the span that encompasses the definition is highlighted using curly braces.
+- The cryptic, subsidiary indication or wordplay. This gives the solver some instructions on how to get to the answer in another (less literal) way. The wordplay parts of clues can be obscure, especially to a newcomer, but they tend to utilise standard rules and conventions which become more familiar with practice.
+
+Sometimes the two parts of the clue are joined with a link word or phrase such as 'from', 'gives' or 'could be'. One of the tasks of the solver is to find the boundary between the definition and the wordplay, and insert a mental pause there when reading the clue cryptically.
+
+We list below several of the important styles of wordplay that are commonly used, each with an annotated example. For a more comprehensive list, along with an outline of the 'Ximenean principles', please see Wikipedia (2024).
+
+# A.1.2. ANAGRAMS
+
+An anagram is a rearrangement of a certain section of the clue to form the answer. This is usually indicated by a codeword which indicates change, movement, breakage or something otherwise amiss. For example:
+
+```txt
+clue: Chaperone shredded corset (6)
+definition: {Chaperone} shredded corset
+answer: ESCORT
+wordplay: (corset)* (*shredded)
+```
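
The anagram relationship above can be checked mechanically: two strings are anagrams exactly when their letter multisets agree. A minimal sketch of such a check (illustrative only, not the solver's actual implementation):

```python
def is_anagram(letters: str, word: str) -> bool:
    # Compare letter multisets, ignoring case and spaces.
    canonical = lambda s: sorted(s.replace(" ", "").upper())
    return canonical(letters) == canonical(word)

print(is_anagram("corset", "ESCORT"))  # CORSET rearranges to ESCORT
```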
+
+# A.1.3. CHARADE
+
+In a charade, the answer is formed by joining individually clued words to make a larger word (namely, the answer). For example:
+
+```txt
+clue: Outlaw leader managing money (7)
+definition: Outlaw leader {managing money}
+answer: BANKING
+wordplay: BAN (outlaw) + KING (leader)
+```
+
+# A.1.4. CONTAINERS
+
+A container or insertion clue puts one set of letters inside another. For example (also starting to add a little more indirection):
+
+```txt
+clue: Utter nothing when there's wickedness about (5)
+definition: {utter} nothing when there's wickedness about
+answer: VOICE
+wordplay: O (nothing) with VICE (wickedness) around it (about)
+```
+
+# A.1.5. DELETIONS
+
+Deletion is a wordplay mechanism which removes some letters of a word to create a shorter word. For example:
+
+```txt
+clue: Bird is cowardly, about to fly away (5)
+definition: {Bird} is cowardly, about to fly away
+answer: RAVEN
+wordplay: [c]RAVEN (cowardly) - 'C' (i.e. circa, about) (-fly away)
+```
+
+# A.1.6. DOUBLE DEFINITION
+
+A clue may, rather than having a definition part and a wordplay part, have two definition parts. For example:
+
+```txt
+clue: Not seeing window covering (5)
+definition: {Not seeing} {window covering}
+answer: BLIND
+wordplay: Double Definition (DD)
+```
+
+# A.1.7. HIDDEN WORDS
+
+With hidden word clues, the solution itself is written within the clue – either as part of a longer word or across more than one word. For example:
+
+```txt
+clue: Found ermine, deer hides damaged (10)
+definition: Found ermine, deer hides {damaged}
+answer: UNDERMINED
+wordplay: [fo]UND ERMINE D[eer] (hides)
+```
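
Hidden-word clues are also mechanical to verify: strip the non-letters from the clue and search for the answer as a substring. A short sketch (the function name is illustrative, not from the paper's code):

```python
def contains_hidden(clue: str, answer: str) -> bool:
    # Concatenate the clue's letters so the answer can span word
    # boundaries, as in "fo|UND ERMINE D|eer" hiding UNDERMINED.
    letters = "".join(ch for ch in clue.upper() if ch.isalpha())
    return answer.upper() in letters

print(contains_hidden("Found ermine, deer hides damaged", "UNDERMINED"))
```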
+
+# A.1.8. HOMOPHONES
+
+Homophones are words that sound the same but have different meanings, such as 'night' and 'knight'. Homophone clues always have an indicator word or phrase that has to do with being spoken or heard. For example:
+
+```txt
+clue: We hear twins shave (4)
+definition: We hear twins {shave}
+answer: PARE
+wordplay: "pair" (twins, "we hear")
+```
+
+# A.1.9. REVERSALS
+
+A word that gets turned around to make another is a reversal. For example:
+
+```txt
+clue: Returned beer fit for a king (5)
+definition: Returned beer {fit for a king}
+answer: REGAL
+wordplay: (LAGER)< (beer, returned)
+```
+
+The following shows an example input for the wordplay annotation task, and the corresponding response:
+
+```txt
+Input:
+  clue: "musical and ballet, oddly, that can be avoided"
+  answer: EVITABLE ~ evitable
+
+Response:
+  definition: musical and ballet, oddly, {that can be avoided}
+  wordplay: EVITA (musical) + B[a]L[l]E[t] (ballet, odd letters)
+```
+
+# A.5. In-Context Learning Prompts for the Gemini LLM
+
+The Gemini LLM is prompted in-context with the concatenation of the following sections:
+
+- Cryptic Crossword overview
+- Many-shot wordplay examples
+- Declaration of 'external' Python functions
+- 6-shot formalisation demonstration
+- Actual problem statement (for continuation as a Python proof)
+- After a verification failure: Error messages for the generated proof, with hints if available, and request to improve iteratively
+
+The sections of the prompt are described more fully below; note that care was taken to ensure that the chosen terminology was used consistently throughout.
+
+# A.5.1. CRYPTIC CROSSWORD PREAMBLE
+
+The following is the rubric and wordplay preamble given to the Gemini LLM:
+
+```txt
+A Cryptic crossword question involves using the words in \
+the given clue to yield an answer that matches the letter pattern.
+The clue will provide a definition of the answer, as well as \
+some 'wordplay' that can also be used to confirm the answer.
+Expert question solvers write informal 'proofs' using a \
+particular format.
+
+For the definition, the original clue is annotated with '{}'
+to denote where the definition is to be found.
+For the wordplay, the following conventions are loosely used:
+* The answer is assembled from the letters in CAPS
+* Words in brackets show the origins of letters in CAPS,
+  often being synonyms, or short forms
+* Action words are annotated as illustrated:
+  + (ETO N)* (*mad = anagram-signifier) = TONE
+  + (FO OR)< (<back = reversal-signifier) = ROOF
+```
+
+# A.5.3. DECLARATION OF EXTERNAL PYTHON FUNCTIONS
+
+The following 'external' Python function declarations are provided to the LLM, for use in the proofs it generates:
+
+```python
+def is_synonym(phrase: str, test_synonym: str, pattern: str = '') -> bool:
+  # Determines whether 'test_synonym' is a reasonable synonym for 'phrase',
+  # with letters optionally matching 'pattern'
+def is_abbreviation(phrase: str, test_abbreviation: str) -> bool:
+  # Determines whether 'test_abbreviation' is
+  # a valid abbreviation or short form for 'phrase'
+def action_type(phrase: str, action: Action) -> bool:
+  # Determines whether 'phrase' might signify the given 'action'
+def is_anagram(letters: str, word: str) -> bool:
+  # Determines whether 'word' can be formed from 'letters' (i.e. an anagram)
+def is_homophone(phrase: str, test_homophone: str) -> bool:
+  # Determines whether 'test_homophone' sounds like 'phrase'
+```
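
These functions are implemented outside the LLM, backed by dictionaries, word lists and abbreviation tables. As an illustration only, the two purely mechanical checks can be sketched as follows; the `Action` members shown and the tiny signifier table are hypothetical stand-ins for the real resources:

```python
from enum import Enum

class Action(Enum):
    ANAGRAM = "anagram"
    HOMOPHONE = "homophone"
    REMOVE_FIRST = "remove_first"
    IS_OUTSIDE = "is_outside"

# Hypothetical signifier table; a real verifier would use far larger lists.
SIGNIFIERS = {
    Action.ANAGRAM: {"shredded", "damaged", "treatment", "crazy"},
    Action.REMOVE_FIRST: {"decapitated", "headless"},
}

def action_type(phrase: str, action: Action) -> bool:
    # True if any word of 'phrase' is a known signifier for 'action'.
    return bool(set(phrase.lower().split()) & SIGNIFIERS.get(action, set()))

def is_anagram(letters: str, word: str) -> bool:
    # 'word' is an anagram of 'letters' iff the sorted letters match.
    return sorted(letters.upper()) == sorted(word.upper())

print(action_type("treatment", Action.ANAGRAM), is_anagram("MEDICAL", "DECIMAL"))
```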
+
+# A.5.4. FEW-SHOT FORMALISATION EXAMPLES
+
+The following are 3 (out of 6) of the few-shot formalisation examples given before the final test-case prompt:
+
+```txt
+The following are examples of simple functions that prove that \
+each puzzle solution is correct:
+```
+
+```python
+def proof(answer="ONCE",
+          clue="head decapitated long ago", pattern='4'):
+    '''
+    definition: head decapitated {long ago}
+    wordplay: [b]ONCE (head decapitated = remove first letter of BONCE)
+    '''
+    assert is_synonym("head", "BONCE")
+    assert action_type("decapitated", Action.REMOVE_FIRST) \
+        and "BONCE"[1:] == "ONCE"
+    assert is_synonym("long ago", "ONCE", pattern='4')
+proof()
+```
+
+```python
+def proof(answer="DECIMAL",
+          clue="the point of medical treatment", pattern='7'):
+    '''
+    definition: {the point} of medical treatment
+    wordplay: (MEDICAL)* (*treatment = anagram)
+    '''
+    assert is_synonym("the point", "DECIMAL", pattern='7')
+    assert action_type("treatment", Action.ANAGRAM)
+    assert is_anagram("MEDICAL", "DECIMAL")
+proof()
+```
+
+```python
+def proof(answer="SUPERMARKET",
+          clue="fat bags for every brand that's a big seller", pattern='11'):
+    '''
+    definition: fat bags for every brand that's {a big seller}
+    wordplay: SUET (fat) (bags = goes outside) of \
+              (PER (for every) + MARK (brand))
+    '''
+    assert is_synonym("fat", "SUET")
+    assert action_type("bags", Action.IS_OUTSIDE)
+    assert "SUET" == "SU" + "ET"
+    assert is_abbreviation("for every", "PER")
+    assert is_synonym("brand", "MARK")
+    assert "SU" + "PER" + "MARK" + "ET" == "SUPERMARKET"
+    assert is_synonym("a big seller", "SUPERMARKET", pattern='11')
+proof()
+```
+
+# A.5.5. FORMALISATION INSTRUCTION
+
+The following instruction is given before the final 'test-case' prompt illustrated in Figure 4:
+
+```txt
+Please complete the following in a similar manner, and return the whole function:
+```
+
+```python
+def proof(answer=...
+```
+
+# A.5.6. PROOF VERIFICATION WITH HINTING
+
+Examples of assertion failures, with constructive hinting, are shown:
+
+```txt
+AssertionError: assert: is_abbreviation('an Artist', 'RA'): 'an Artist' does not have a valid abbreviation; 'RA' is an abbreviation for : artist, artillery, Royal Artillery, gunners, painter
+AssertionError: assert action_type('goes crazy', Action.ANAGRAM): 'goes crazy' itself does not suggest Action.ANAGRAM, but 'crazy' does
+AssertionError: assert action_type('worked', Action.HOMOPHONE): 'worked' does not suggest Action.HOMOPHONE, but maybe Action.ANAGRAM
+# Please re-implement the SOLUTION above \ (altering both the docstring and the python code as required), \ taking care to fix each of the problems identified, \ and return the whole function:
+```
+
+```python
+def proof(answer=...
+```
+
+Once a generated proof runs with zero assertion failures, it is considered a success (up to 2 re-write iterations are allowed; more than that is considered an overall failure to prove the answer).
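
The iterate-until-verified loop described above can be sketched as follows; `generate_proof` and `run_proof` are hypothetical stand-ins for the LLM round-trip and the sandboxed Python execution of the proof, respectively:

```python
def prove_with_retries(clue, answer, generate_proof, run_proof, max_rewrites=2):
    # Generate a proof, execute it, and feed any assertion errors
    # (plus hints) back for re-writing; give up after 'max_rewrites'.
    feedback = None
    for _ in range(max_rewrites + 1):
        proof_code = generate_proof(clue, answer, feedback)
        ok, feedback = run_proof(proof_code)
        if ok:
            return True   # proof verified with zero assertion failures
    return False          # overall failure to prove the answer
```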
+
+Table 2. Partial Correctness Metrics Results
+
+| Model | known% | samples | Val Overall | Val Quick | Val Hard | Test Overall | Test Quick | Test Hard |
+|---|---|---|---|---|---|---|---|---|
+| GPT-4T ('Init') | 25% | - | - | - | - | 33.7% | - | - |
+| GPT-4T ('Init') | 50% | - | - | - | - | 52.9% | - | - |
+| GPT-4T ('Init') | 70% | - | - | - | - | 76.3% | - | - |
+| Gemini-Flash | 25% | 200 | 37.0% | 38.5% | 36.9% | 45.5% | 66.7% | 43.8% |
+| Gemma2-9B-it | 25% | 200 | 37.5% | 38.5% | 37.4% | 44.0% | 66.7% | 42.2% |
+| FastText k=1 NN | 25% | 200 | 15.5% | 15.4% | 15.5% | 21.0% | 33.3% | 20.0% |
+| FastText k=1 NN | 50% | 200 | 52.5% | 38.5% | 53.5% | 62.0% | 46.7% | 63.2% |
+| FastText k=1 NN | 70% | 200 | 79.0% | 61.5% | 80.2% | 81.0% | 100.0% | 79.5% |
+
+# A.5.7. PARTIAL CORRECTNESS METRICS RESULTS
+
+Our results in Table 2, which corresponds to the Exploiting Partially Filled Grids section in Saha et al. (2024), are based on running models on Cryptonite splits, rather than their 'Init' splits. However, the percentage differences observed are likely large enough to outweigh the shift between the two datasets.
+
+The GPT-4T rows in Table 2 are as reported in Saha et al. (2024), and apply to the 'Init' test dataset (i.e. different from our Cryptonite numbers, but still comparable figures in terms of what is being shown here).
+
+The Gemini/Gemma2 rows show the effect of simply filtering the output of our Gemma2-9B fine-tuned candidate answer proposal model, based on a random letter pattern using the same formulation as Saha et al. (2024), and then using the rest of our pipeline. Here, our approach beats the previously reported GPT-4T results, and is itself limited by our first-stage Gemma2 model's 'Top-20' candidate answers containing the correct answer only around $45\%$ of the time.
+
+Our FastText $k = 1$ NN approach is clearly very powerful - particularly considering that it does not involve any large model, merely a brute-force search. This only works because the number of known letters for these rows is so high - indeed the $70\%$ level would not be allowed in crosswords that obey the Ximenean guidelines of Macnutt (1966).
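
The brute-force step amounts to filtering a word list by the known-letter pattern (and then ranking the survivors, e.g. by FastText similarity to the definition). The filtering can be sketched with a regular expression; the tiny word list and function name here are illustrative only:

```python
import re

def pattern_candidates(pattern: str, wordlist):
    # '_' marks an unknown letter, e.g. 'BL_ND' has 80% of letters known.
    regex = re.compile(pattern.replace("_", "[A-Z]"))
    return [w for w in wordlist if regex.fullmatch(w)]

WORDS = ["BLIND", "BLAND", "BRAND", "REGAL", "BANKING"]
print(pattern_candidates("BL_ND", WORDS))  # ['BLIND', 'BLAND']
```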
+
+If solving complete grids were the target of our research, we would certainly incorporate this kind of solution and overlay the reasoning component to choose from the short-list output (rather than just selecting the first entry, as here). Note also that the performance is bounded above, because the wordlist is not exhaustive - we determined that $7.0\%$ of the gold answers (on the Cryptonite test set) do not appear in the list. This may be less of an issue with the 'Init' dataset, since its wordlist is likely more restricted.
diff --git a/arecipeforcausalgraphregressionconfoundingeffectsrevisited/full.md b/arecipeforcausalgraphregressionconfoundingeffectsrevisited/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..5f2b0109d871903f79835b10641f9090298b34de
--- /dev/null
+++ b/arecipeforcausalgraphregressionconfoundingeffectsrevisited/full.md
@@ -0,0 +1,486 @@
+# A Recipe for Causal Graph Regression: Confounding Effects Revisited
+
+Yujia Yin\* Tianyi Qu$^{23}$ Zihao Wang$^{4}$ Yifan Chen$^{1}$
+
+# Abstract
+
+Through recognizing causal subgraphs, causal graph learning (CGL) has risen to be a promising approach for improving the generalizability of graph neural networks under out-of-distribution (OOD) scenarios. However, the empirical successes of CGL techniques are mostly exemplified in classification settings, while regression tasks, a more challenging setting in graph learning, are overlooked. We thus devote this work to tackling causal graph regression (CGR); to this end we reshape the processing of confounding effects in existing CGL studies, which mainly deal with classification. Specifically, we reflect on the predictive power of confounders in graph-level regression, and generalize classification-specific causal intervention techniques to regression through a lens of contrastive learning. Extensive experiments on graph OOD benchmarks validate the efficacy of our proposals for CGR. The model implementation and the code are provided on https://github.com/causal-graph/CGR.
+
+# 1. Introduction
+
+Causal graph learning (CGL) (Lin et al., 2021) holds particular importance due to its relevance in fields such as drug discovery (Qiao et al., 2024) and climate modeling (Zhao et al., 2024). However, previous CGL studies focus on classification settings. Some of them cannot be directly extended to regression tasks, such as property prediction (Rollins et al., 2024), traffic flow forecasting (Li et al., 2021), and credit risk scoring (Ma et al., 2024), because the transition from finite to infinite support makes discrete labels unavailable. Graphs thus cannot be informatively grouped. A systematic understanding of how CGL techniques should be adapted to graph-level regression is still lacking.
+
+\*Equal contribution. $^{1}$Hong Kong Baptist University $^{2}$SF Tech $^{3}$Zhejiang University $^{4}$Hong Kong University of Science and Technology. Correspondence to: Tianyi Qu, Yifan Chen.
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+
+Figure 1. Structural causal model (SCM) for graph regression. $G$: full graph; $C$: causal subgraph; $S$: confounding subgraph; $Y$: response.
+
+The core methodology of causal learning involves the identification and differentiation of causal features from confounding ones. As shown in Figure 1, causal features $C$ are those directly deciding responses $Y$ , whereas confounding features $S$ (shorthand for "spurious") solely present spurious correlations. Therefore, understanding how causal features (as well as confounding features) and responses interact plays a central role in practical designing of causal learning methods. From this perspective, causal graph regression (CGR) warrants specialized handling since the interaction between features and responses therein is significantly different from classification. Furthermore, regression is in general a more challenging task than classification, and techniques working for classification, Perceptron (Rosenblatt, 1958) for example, may not apply to regression.
+
+Specifically in CGL, the identification of causal subgraphs is seemingly transferable since this step, explicitly or implicitly, relies on the calculation of mutual information and is compatible with both settings (c.f. Section 3.2). However, the empirical performance of this vanilla adaptation on regression tasks is dwarfed by empirical risk minimization (Vapnik, 1991, ERM) w.r.t. least squares loss (see the results in Sections 5.3 and 5.4).
+
+To crack CGR, we revisit the processing of confounding effects, which conceptually constitutes causal graph learning along with causal subgraph identification as shown in Figure 1. Existing CGL methods, such as CAL (Sui et al., 2022) and DisC (Fan et al., 2022), are built on a strong assumption that confounding subgraphs contain strictly no predictive power. We reflect on this assumption and speculate it is hardly practical due to the contradiction with real-world observations: in molecular property prediction, for example, molecular weight is noncausal to toxicity while does exhibit strong correlations.
+
+In this work, we develop an enhanced graph information bottleneck (GIB) loss function, which no longer takes the strong assumption. Moreover, some confounding effect processing techniques, such as backdoor adjustment (Sui et al., 2022; 2024) and counterfactual reasoning (Guo et al., 2025), heavily rely on discrete label information and cannot be adapted to regression at all. We follow the principle of those methods and generalize it from class separation to instance discrimination; the discrimination principle aligns with the philosophy of contrastive learning (CL) and CL techniques are therefore leveraged to tackle CGR in our proposal.
+
+Following the intuition, we develop a new framework for causal graph regression, which spotlights the confounding effects within. In summary, our contributions are as follows:
+
+- To the best of our knowledge, we are the first to explicitly consider the predictive role of confounding features in graph regression tasks, a critical yet overlooked aspect in graph OOD generalization.
+- We introduce a new causal intervention approach that generates random graph representations by leveraging a contrastive learning loss to enhance causal representation, outperforming label-dependent methods.
+- Extensive experiments on OOD benchmarks demonstrate that our method significantly improves generalization in graph regression tasks.
+
+# 2. Related Work
+
+Out-of-distribution (OOD) challenges in graph learning have drawn significant attention, particularly in methods aiming to disentangle causal and confounding factors (Ma, 2024). Existing approaches can be broadly categorized into invariant learning (Wu et al., 2022a), causal modeling (Sui et al., 2024), and stable learning (Li et al., 2022).
+
+Invariant learning focuses on identifying features that remain stable across different environments, filtering out spurious correlations in the process. While not explicitly grounded in causal reasoning, prior studies (Wang & Veitch, 2022; Mitrovic et al., 2020) have highlighted its inherent connection to causality. Methods in invariant learning, such as CIGA (Chen et al., 2022), GSAT (Miao et al., 2022), and GALA (Chen et al., 2024), aim to learn invariant representations by isolating causal components.
+
+However, these approaches are typically designed for classification tasks, limiting their out-of-distribution (OOD) generalization capability in regression settings. Post-hoc methods, such as PGExplainer (Luo et al., 2020) and Reg-Explainer (Zhang et al., 2023), attempt to discover invariant subgraphs after training. However, these methods fail to equip the model with the ability to learn invariant representations during the training process.
+
+Causal modeling leverages structural causal models (SCMs) to improve the performance of graph neural networks (GNNs) on out-of-distribution (OOD) data. These approaches incorporate various traditional causal inference techniques, such as backdoor adjustment (e.g., CAL (Sui et al., 2022), CAL+ (Sui et al., 2024)), frontdoor adjustment (e.g., DSE (Wu et al., 2022c)), instrumental variables (e.g., RCGRL (Gao et al., 2023)), and counterfactual reasoning (e.g., DisC (Fan et al., 2022)). By simulating causal interventions through supervised training, these methods aim to achieve OOD generalization. However, they often disregard the predictive potential of confounding features, which hinders effective disentanglement. Moreover, the supervised loss functions tailored for classification tasks are not easily adaptable to regression problems, as the inherent complexity of regression introduces additional challenges.
+
+Stable learning aims to ensure consistent performance across environments by reweighting samples or balancing covariate distributions. For example, StableGNN (Fan et al., 2023) employs a regularizer to reduce the influence of confounding variables. However, such methods often rely on heuristic reweighting strategies, which may not fully disentangle causal from confounding factors.
+
+In addition to graph-based approaches, traditional machine learning methods have also explored causality in regression tasks. For instance, Pleiss et al. (2019) observed that causal features tend to concentrate in a low-dimensional subspace, whereas non-causal features are more randomly distributed. Similarly, Amini et al. (2020) proposed a framework for learning continuous targets by placing an evidence prior on a Gaussian likelihood function and training a non-Bayesian neural network to infer the hyperparameters of the evidence distribution. These methods highlight the potential of leveraging causal insights for improved regression performance.
+
+# 3. Preliminaries and Notations
+
+Throughout this paper, we denote a graph $G$ as $(A, X)$. Here, $A \in \{0, 1\}^{n \times n}$ is the adjacency matrix indicating connectivity among $n$ nodes ($A_{ij} = 1$ if nodes $i$ and $j$ are connected, otherwise 0); $X \in \mathbb{R}^{n \times d}$ is the node feature matrix, where each row $X_i$ represents the $d$-dimensional feature vector of node $i$. The regression task in graph learning is to learn a function $f: G \mapsto y$, where $y \in \mathbb{R}$ denotes the response for the graph $G$.
+
+# 3.1. Causal Graph Learning
+
+In causal graph learning, a graph $G$ can be split into a causal subgraph $C$ and a confounding subgraph $S$. This process is non-trivial and our proposed paradigm will hinge on the output of this process. We follow the definition in Sui et al. (2022) and first introduce the construction of the causal subgraph $C$:
+
+$$
+C := \left(\boldsymbol {M} _ {\text {e d g e}} \odot \boldsymbol {A}, \boldsymbol {M} _ {\text {n o d e}} \cdot \boldsymbol {X}\right), \tag {1}
+$$
+
+where the mask matrix $M_{\mathrm{edge}} \in [0,1]^{n \times n}$ and the diagonal matrix $M_{\mathrm{node}}$ (whose diagonal elements are in [0, 1]) will filter out the non-causal nodes and edges. The confounding subgraph is then the "complement": $S \coloneqq G - C$ .
+
+In our framework, these masks $M_{\mathrm{edge}}$ and $M_{\mathrm{node}}$ are not predefined. Instead, they are learnable soft masks, generated by MLPs conditioned on the representations of $G$ . The parameters of these MLPs are optimized end-to-end as part of the overall model training, enabling the model to autonomously learn how to construct $C$ and $S$ . Further architectural details are provided in Appendix B and illustrated in Figure 2.
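+As a concrete sketch of Equation (1) and the soft-mask construction above, the NumPy snippet below treats the mask logits as given (hypothetical stand-ins for the MLP/attention outputs; the mask-generation networks themselves are elided) and splits $G$ into $C$ and its complement $S$:
+
+```python
+import numpy as np
+
+def extract_subgraphs(A, X, edge_logits, node_logits):
+    """Sketch of Eq. (1): split G = (A, X) into a causal subgraph C and a
+    confounding subgraph S via soft masks. `edge_logits`/`node_logits` are
+    hypothetical placeholders for the learned mask-generation outputs."""
+    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
+    M_edge = sigmoid(edge_logits)            # n x n, entries in [0, 1]
+    M_node = np.diag(sigmoid(node_logits))   # diagonal, entries in [0, 1]
+    A_c, X_c = M_edge * A, M_node @ X        # causal subgraph C
+    # confounding subgraph S as the "complement" G - C
+    A_s, X_s = (1.0 - M_edge) * A, (np.eye(len(A)) - M_node) @ X
+    return (A_c, X_c), (A_s, X_s)
+
+A = np.array([[0, 1], [1, 0]], dtype=float)
+X = np.ones((2, 3))
+(C_A, C_X), (S_A, S_X) = extract_subgraphs(A, X, np.zeros((2, 2)), np.zeros(2))
+assert np.allclose(C_A + S_A, A)  # masked pieces recompose the original graph
+```
+
+By construction the two masked adjacencies (and feature matrices) sum back to the original graph, matching the complement definition $S := G - C$.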
+
+Notably, mutual information plays an essential role in CGL. We introduce its calculation, exemplified by the mutual information between the hidden embeddings (learned by a graph neural network) of the causal subgraphs and of the original graphs:
+
+$$
+I (C; G) := \mathbb {E} _ {C, G} [ \log p (C \mid G) / p (C) ], \tag {2}
+$$
+
+where we follow the convention in the CGL literature and abuse notation, letting $C, G$ denote random variables following the underlying distribution of the embedding pairs $H_{c,i}$'s and $H_{g,i}$'s. In particular, those hidden embeddings are assumed Gaussian, so the joint distribution can be well-estimated from sample embedding pairs. We refer interested readers to Miao et al. (2022, Appendix A) for more details. Moreover, the computation/approximation of the mutual information terms is a crucial component of causal graph learning, yet remains under-explored for CGR; we dissect the computation of our proposed terms in Section 4.2 by deriving variational bounds.
+
+# 3.2. Graph Information Bottleneck
+
+The information bottleneck (IB) principle (Tishby et al., 2000; Tishby & Zaslavsky, 2015) aims to balance the trade-off between preserving the information necessary for prediction and discarding irrelevant redundancy. Specifically, IB suggests maximizing $I(Z;Y)$ while minimizing $I(Z;X)$, where $Z$ is the compressed representation, $X$ is the input, and $Y$ is the response.
+
+Graph information bottleneck (GIB) (Wu et al., 2020) extends the IB principle to graph-structured data, facilitating the identification of subgraphs that are most relevant for predicting graph-level responses. By minimizing the mutual information $I(C; G)$ between the extracted causal subgraph $C$ and the original graph $G$, GIB reduces redundant information. However, GIB alone does not guarantee the extraction of a purely causal subgraph, as isolating causal effects requires additional interventions (Miao et al., 2022; Chen et al., 2022).
+
+Formally, the GIB objective is expressed as:
+
+$$
+- I (C; Y) + \alpha I (C; G), \tag {3}
+$$
+
+where $I(C;Y)$ quantifies the predictive information retained by $C$ (and thus should be maximized). $I(C;G)$ serves as a regularizer to exclude irrelevant details from the original graph; the parameter $\alpha$ controls the trade-off between information preservation and compression.
+
+# 3.3. Causal Intervention in GNNs
+
+We borrow the structural causal model (SCM) diagram in Figure 1 to illustrate the causal intervention techniques. As shown in Figure 1, the graph $G$ decides both the causal subgraph $C$ and the confounding subgraph $S$ , and the former $C$ affects the prediction of response $Y$ . In more detail,
+
+- $C \gets G \to S$ : Graph data $G$ encodes both $C$ , which directly impacts $Y$ , and $S$ , which introduces spurious correlations.
+- $S \to C \to Y$: besides the direct effect $C \to Y$, the confounder $S$ can influence the prediction of $Y$ indirectly along the backdoor path $S \to C \to Y$.
+
+In causal inference, the confounder $S$ induces spurious correlations, preventing the discovery of the underlying causality. To address this issue, backdoor adjustment methods focus on the interventional effect $P(Y|\mathrm{do}(C))$, and suggest estimating it by stratifying over $S$ and calculating the conditional distribution $P(Y|C,S)$ (Pearl, 2014; Sui et al., 2024).
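+Concretely, for a discrete stratification of the confounder, the backdoor adjustment formula reads:
+
+$$
+P(Y \mid \mathrm{do}(C)) = \sum_{s} P(Y \mid C, S = s)\, P(S = s),
+$$
+
+which averages $P(Y \mid C, S)$ over the marginal distribution of $S$ rather than over $P(S \mid C)$, thereby blocking the spurious dependence introduced by the confounder.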
+
+# 4. Revisiting Confounding Effects for CGR
+
+In this section, we present a causal graph regression paradigm that integrates an enhanced graph information bottleneck (GIB) objective with causal discovery, reshaping the processing of confounding effects in CGL.
+
+# 4.1. Overview
+
+We first provide an overview of how graph inputs are turned into regression outputs. As shown in Figure 2, we follow the framework of Sui et al. (2024) and first encode graph embeddings $H_{g,i}$ 's using a GNN-based encoder. Attention modules are then adopted to generate soft masks for extracting causal and confounding subgraphs (c.f. Equation (1)). These subgraphs are processed through two GNN modules ( $\mathcal{G}_c$ and $\mathcal{G}_s$ ) with shared parameters to extract causal ( $H_{c,i}$ 's) and confounding ( $H_{s,i}$ 's) representations, which are passed through distinct readout layers for regression.
+
+The optimization features an enhanced graph information bottleneck (GIB) loss $L_{\mathrm{GIB}}$ , comprising the causal part $L_{c}$ and the confounding part $L_{s}$ , to disentangle causal signals
+
+
+Figure 2. Given a mini-batch of graphs, (1) the GNN encoder computes the graph embeddings $H_{g}$ , and an attention layer generates soft masks to extract causal and confounding subgraphs. (2) GNN $\mathcal{G}_c$ processes the causal subgraph $C$ , generates its representation $H_{c}$ , and employs readout to predict responses; it is optimized with causal subgraph loss $L_{c}$ . (3) GNN $\mathcal{G}_s$ , sharing parameters with $\mathcal{G}_c$ , processes the confounding subgraph $S$ , generates $H_{s}$ , and applies readout for prediction; it is optimized with confounding subgraph loss $L_{s}$ . (4) For causal intervention, contrastive learning guides the process. Given a graph $H_{g,i}$ , the positive sample is a mixed graph $H_{\mathrm{mix},ij}$ from random addition, while any other graph $H_{g,k}$ serves as the negative sample. The causal intervention loss $L_{\mathrm{CI}}$ is used accordingly.
+
+(c.f. Section 4.2). Also, counterfactual samples $(H_{\mathrm{mix},ij})$ are generated by randomly injecting confounding representations into causal ones; unsupervised learning is then performed, guided by contrastive-learning-based causal intervention loss $L_{\mathrm{CI}}$ (c.f. Section 4.3). More implementation details of the overall framework are deferred to Appendix B.
+
+# 4.2. Enhanced GIB Objective
+
+CGL adopts the GIB objective to extract subgraphs that retain essential predictive information while excluding redundant components (Zhang et al., 2023), which aligns with the disentanglement of the causal subgraph $C$ and the confounding subgraph $S$ in CGL. The original GIB formulation assumes the confounding subgraph $S$ is pure noise and cannot predict the response $Y$ (Chen et al., 2022); however, as discussed in Section 1, $S$ may still contain information that is predictive of the response $Y$. In its current form, the GIB framework overlooks this aspect, causing the model to allocate all $Y$-relevant information to $C$ and to potentially lose meaningful content.
+
+This limitation leads to incomplete causal disentanglement, which impacts the generalization of models to out-of-distribution (OOD) settings. To overcome this issue, we propose an enhanced GIB loss function that takes the predictive roles of both $C$ and $S$ into consideration. By introducing mutual information terms on $S$ during optimization, we avoid overburdening $C$ with all relevant information, and consequently enable a more precise disentanglement.
+
+Overall, our enhanced GIB objective is defined as follows:
+
+$$
+- I (C; Y) + \alpha I (C; G) - \beta I (S; Y), \tag {4}
+$$
+
+which formally extends the original GIB objective by introducing a confounder-related term $I(S;Y)$, weighted by a parameter $\beta$, to capture the predictive capacity of $S$. In particular, we intentionally exclude an $I(S;G)$ term because, in the SCM diagram of Figure 1, $S$ primarily introduces shortcuts rather than directly encoding causality; overly imposing structural regularization on $S$ can disrupt disentanglement and lead to suboptimal separation between $C$ and $S$. Notably, the conceptual objective (4) cannot be computed directly in practice; we devote the remainder of this subsection to its practical computation for CGR.
+
+Variational bounds for approximating $I(C;G)$. The mutual information $I(C;G)$ is mathematically defined based on the marginal distribution $p(C) = \sum_{G}p(C|G)p(G)$. Since $p(C)$ is intractable, a variational distribution $q(C)$ is introduced and induces an upper bound:
+
+$$
+I(C; G) \leq \mathbb{E}_{p(G)} \left[ \mathrm{KL}\big(p(C \mid G) \,\|\, q(C)\big) \right]. \tag{5}
+$$
+
+To efficiently compute the KL divergence in Equation (5), we follow the literature (Chechik et al., 2003; Kingma et al., 2013) and assume that $p(C \mid G)$ and $q(C)$ are multivariate Gaussian distributions:
+
+$$
+p (C \mid G) = \mathcal {N} \left(\mu_ {\phi} (G), \Sigma_ {\phi} (G)\right), \quad q (C) = \mathcal {N} (0, I), \tag {6}
+$$
+
+where $\mu_{\phi}(G)$ and $\Sigma_{\phi}(G)$ are the mean vector and covariance matrix estimated by GNNs. To simplify computation and stabilize training, we further assume $\Sigma_{\phi}(G)$ is an identity matrix, removing the need to learn covariance parameters. This simplification is not only practical but also theoretically justified, as any full-rank covariance can be whitened without loss of generality (Chechik et al., 2003, Appendix A). KL $(p(C \mid G) \| q(C))$ then reduces to:
+
+$$
+\frac{1}{2}\left[\operatorname{tr}\left(\Sigma_{\phi}(G)\right) + \left\|\mu_{\phi}(G)\right\|^{2} - d - \log\det\Sigma_{\phi}(G)\right] = \frac{1}{2}\left\|\mu_{\phi}(G)\right\|^{2}, \tag{7}
+$$
+
+where $d$ is the dimensionality of $C$ . Further substituting Equation (7) into Equation (5), we obtain an upper bound for $I(C;G)$ :
+
+$$
+I (C; G) \leq \frac {1}{2} \mathbb {E} _ {p (G)} \left[ \| \mu_ {\phi} (G) \| ^ {2} \right], \tag {8}
+$$
+
+which serves as an easy-to-compute proxy for $I(C;G)$ .
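+As a quick numerical check of the reduction in Equation (7), the full Gaussian KL formula collapses to $\frac{1}{2}\|\mu_{\phi}(G)\|^2$ once the covariance is fixed to the identity. A minimal NumPy sketch, with $\mu_{\phi}(G)$ passed in as a given vector:
+
+```python
+import numpy as np
+
+def kl_gauss_identity(mu):
+    """KL( N(mu, I) || N(0, I) ), evaluated via the full formula of Eq. (7).
+    With Sigma = I, the trace term equals d and the log-det term vanishes,
+    so the result is ||mu||^2 / 2."""
+    d = mu.shape[-1]
+    Sigma = np.eye(d)
+    return 0.5 * (np.trace(Sigma) + mu @ mu - d
+                  - np.log(np.linalg.det(Sigma)))
+
+mu = np.array([3.0, 4.0])
+assert np.isclose(kl_gauss_identity(mu), 0.5 * np.sum(mu**2))  # 12.5
+```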
+
+Variational bounds for approximating $I(C;Y), I(S;Y)$ . We first recall $I(C;Y)$ mathematically reads:
+
+$$
+I (C; Y) = H (Y) - H (Y \mid C), \tag {9}
+$$
+
+where $H(Y)$ denotes the entropy of $Y$ , representing the overall uncertainty in the target variable. Since $H(Y)$ remains constant, maximizing $I(C;Y)$ reduces to minimizing the conditional entropy $H(Y\mid C)$ , given by:
+
+$$
+H (Y \mid C) = - \mathbb {E} _ {C, Y} [ \log p (Y \mid C) ]. \tag {10}
+$$
+
+The computation of $H(Y \mid C)$ is supposed to hinge on the hidden embeddings $H_{c,i}$ 's produced by a GNN $\mathcal{G}_c$ (see Section 4.1); we model the conditional distribution $p(Y \mid H_c)$ as a Gaussian distribution:
+
+$$
+p (Y \mid H _ {c}) = \mathcal {N} (Y; \mu_ {(c)}, \sigma_ {(c)} ^ {2}), \tag {11}
+$$
+
+where $\mu_{(c)}$ and $\sigma_{(c)}^2$ represent the scalar conditional mean and variance of $Y$ (estimated by networks) given a causal subgraph representation $H_{c}$ . The probability density function for this Gaussian is:
+
+$$
+p (Y \mid H _ {c}) = \frac {1}{\sqrt {2 \pi \sigma_ {(c)} ^ {2}}} \exp \left(- \frac {\left(Y - \mu_ {(c)}\right) ^ {2}}{2 \sigma_ {(c)} ^ {2}}\right). \tag {12}
+$$
+
+Substituting Equation (12) into Equation (10), we can further approximate $H(Y \mid C)$ through empirical data:
+
+$$
+\frac {1}{N} \sum_ {i = 1} ^ {N} \left[ \frac {\left(Y _ {i} - \mu_ {(c) , i}\right) ^ {2}}{2 \sigma_ {(c) , i} ^ {2}} + \frac {1}{2} \log \left(2 \pi \sigma_ {(c), i} ^ {2}\right) \right], \tag {13}
+$$
+
+where $N$ represents sample size, $Y_{i}$ is the target response for the $i$ -th sample, and $\mu_{(c),i}$ and $\sigma_{(c),i}^2$ are the corresponding mean and variance of $Y$ given $H_{c,i}$ .
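+The empirical estimate in Equation (13) is simply an averaged Gaussian negative log-likelihood. A minimal sketch, with the per-sample means and variances passed in as hypothetical network outputs:
+
+```python
+import numpy as np
+
+def conditional_entropy_estimate(y, mu_c, sigma2_c):
+    """Empirical estimate of H(Y|C) from Eq. (13): the average Gaussian
+    negative log-likelihood over N samples, given predicted per-sample
+    conditional means `mu_c` and variances `sigma2_c`."""
+    return np.mean((y - mu_c) ** 2 / (2.0 * sigma2_c)
+                   + 0.5 * np.log(2.0 * np.pi * sigma2_c))
+
+y = np.array([1.0, 2.0, 3.0])
+h = conditional_entropy_estimate(y, y.copy(), np.ones(3))
+# with perfect means and unit variance, only the constant term remains
+assert np.isclose(h, 0.5 * np.log(2.0 * np.pi))
+```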
+
+If a constant conditional variance (i.e., $\sigma_{(c)}^2 = 1$) is assumed, a choice adopted for stability that aligns with the approaches of Nix & Weigend (1994) and Yu et al. (2024), then $I(C;Y)$ (equivalently, $-H(Y\mid C)$ up to a constant) reduces to a negative least-squares loss:
+
+$$
+- \frac{1}{N} \sum_{i=1}^{N} \left[ \frac{\left(Y_i - \mu_{(c),i}\right)^2}{2 \sigma_{(c),i}^2} + \frac{1}{2} \log \left(2 \pi \sigma_{(c),i}^2\right) \right] \propto -\frac{1}{N} \sum_{i=1}^{N} \left(Y_i - \mu_{(c),i}\right)^2, \tag{14}
+$$
+
+which we take as the causal subgraph objective $L_{\mathrm{CP}}$.
+
+Similarly, the mutual information $I(S;Y)$ can induce the confounding subgraph objective
+
+$$
+L_{\mathrm{SP}} \propto -\frac{1}{N} \sum_{i=1}^{N} \left(Y_i - \mu_{(s),i}\right)^2. \tag{15}
+$$
+
+Empirically, we employ two independent readout layers to compute the causal and confounding subgraph mean $\mu_{(c),i}$ 's and $\mu_{(s),i}$ 's.
+
+In summary, our enhanced GIB objective can be decomposed into two distinct loss components: the causal subgraph loss $L_{c}(G,C,Y) = -I(C;Y) + \alpha I(C;G)$ and the confounding subgraph loss $L_{s}(S,Y) = -I(S;Y)$ . The complete enhanced GIB objective we propose is:
+
+$$
+L_{\mathrm{GIB}} = L_{c} + \beta L_{s} = -I(C; Y) + \alpha I(C; G) - \beta I(S; Y),
+$$
+
+and in practice we use $-L_{\mathrm{CP}} + \alpha \mathbb{E}_{p(G)}\left[\| \mu_{\phi}(G)\| ^2\right] - \beta L_{\mathrm{SP}}$.
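+Putting the pieces together, the practical objective can be sketched in plain NumPy as follows. The readout means $\mu_{(c),i}$, $\mu_{(s),i}$ and the bound statistics $\mu_{\phi}(G)$ are assumed to come from the networks, and the $\alpha$, $\beta$ values here are placeholders; with unit variances, $-L_{\mathrm{CP}}$ and $-L_{\mathrm{SP}}$ reduce to mean-squared errors:
+
+```python
+import numpy as np
+
+def enhanced_gib_loss(y, mu_c, mu_s, mu_phi, alpha=0.5, beta=0.5):
+    """Practical enhanced GIB objective (a sketch under the unit-variance
+    assumption). `mu_c`/`mu_s` are the causal/confounding readout means,
+    `mu_phi` holds per-graph mean vectors for the I(C;G) bound of Eq. (8)."""
+    mse_c = np.mean((y - mu_c) ** 2)              # from -L_CP, Eq. (14)
+    mse_s = np.mean((y - mu_s) ** 2)              # from -L_SP, Eq. (15)
+    reg = np.mean(np.sum(mu_phi ** 2, axis=-1))   # I(C;G) proxy, Eq. (8)
+    return mse_c + alpha * reg + beta * mse_s
+
+y = np.array([1.0, 2.0])
+loss = enhanced_gib_loss(y, y.copy(), np.zeros(2), np.zeros((2, 4)))
+assert np.isclose(loss, 1.25)  # 0 + 0 + 0.5 * mean([1, 4])
+```
+
+Minimizing this loss simultaneously fits the causal readout, regularizes the $I(C;G)$ proxy, and retains the confounder's predictive signal rather than suppressing it.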
+
+# 4.3. Causal Intervention
+
+To further strengthen causal learning in CGR, we introduce a causal intervention loss and reshape the processing of confounding effects therein. In general, our approach injects randomness at the graph level by randomly pairing confounding subgraphs with target causal subgraphs from the entire dataset. By generating counterfactual graph representations through the random combination of these subgraphs, we effectively implement causal intervention.
+
+This strategy can be understood as an implicit realization of backdoor adjustment (Pearl, 2014) in the representation space. In existing research on graph classification tasks (Fan et al., 2022; Sui et al., 2024), causal intervention is typically modeled by predicting $P(Y|C,S)$ through intervened graphs, adjusting for causal effects by comparing predictive distributions under different confounding conditions. However, in regression tasks, $Y$ is a continuous variable, and directly modeling $P(Y|C,S)$ becomes significantly more challenging. To overcome this, we follow the spirit of contrastive learning to remove the reliance on explicit labels.
+
+In more detail, following Sui et al. (2022), we use a random addition method to pair the confounding subgraph with the target causal subgraph, which gives $H_{\mathrm{mix}}$ :
+
+$$
+H_{\mathrm{mix},ij} = H_{c,i} + H_{s,j}. \tag{16}
+$$
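+At the batch level, the random pairing in Equation (16) can be implemented by permuting the confounding embeddings. A minimal NumPy sketch, with the embeddings assumed precomputed:
+
+```python
+import numpy as np
+
+def mix_counterfactuals(H_c, H_s, rng):
+    """Eq. (16) in batch form: pair each causal embedding H_{c,i} with a
+    randomly drawn confounding embedding H_{s,j} by permuting the batch,
+    producing counterfactual representations H_mix."""
+    j = rng.permutation(len(H_s))   # random confounder index j for each i
+    return H_c + H_s[j], j
+
+rng = np.random.default_rng(0)
+H_c = np.arange(6.0).reshape(3, 2)
+H_s = np.ones((3, 2))
+H_mix, j = mix_counterfactuals(H_c, H_s, rng)
+# with identical confounder rows, every mixture is H_c shifted by one
+assert np.allclose(H_mix, H_c + 1.0)
+```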
+
+Comparing the predictions of $H_{\mathrm{mix}}$ with the original graph's labels, as shown in Sui et al. (2022), can inadvertently force the mixed graph to discard all confounding effects, thereby nullifying the intended causal disentanglement.
+
+To mitigate this issue, we suggest learning causal representations through contrastive learning. Specifically, the causal subgraph, when combined with different confounding subgraphs, should consistently produce mixed graph representations that are aligned with the original graph representation. This formulation enables the model to learn causal subgraphs that are invariant across varying confounders, and prevents the causal subgraphs from collapsing into non-informative ones.
+
+To achieve this, we propose a causal intervention loss guided by contrastive learning. Specifically, the method aligns the representation of the original graph with that of its corresponding random mixture graph, while simultaneously ensuring that representations of unrelated graphs remain distinct. In implementation, drawing inspiration from the InfoNCE loss (Oord et al., 2018), we treat $H_{g}$ and $H_{\mathrm{mix}}$ derived from the same causal subgraph as positive pairs, and $H_{g}$ with representations of other graphs within the batch as negative pairs. Formally, the mixed-graph contrastive loss is defined as:
+
+$$
+L_{\mathrm{CI}} = -\frac{1}{B} \sum_{i=1}^{B} \log \frac{\exp\left(\operatorname{sim}\left(H_{g,i}, H_{\mathrm{mix},ij}\right)\right)}{\sum_{k=1, k \neq i}^{B} \exp\left(\operatorname{sim}\left(H_{g,i}, H_{g,k}\right)\right)}, \tag{17}
+$$
+
+where $B$ is the batch size, $\operatorname{sim}(\cdot,\cdot)$ denotes a similarity measure between embeddings, $H_{\mathrm{mix},ij}$ is the representation of the mixed graph combining the $i$-th causal subgraph and the $j$-th confounding subgraph, and $H_{g,i}$ is the representation of the original graph.
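+As an illustrative sketch (assuming cosine similarity for the similarity function, one common choice for InfoNCE-style losses), Equation (17) can be computed per batch as follows:
+
+```python
+import numpy as np
+
+def causal_intervention_loss(H_g, H_mix):
+    """Sketch of Eq. (17): the similarity between each original graph
+    embedding H_{g,i} and its mixed counterpart forms the positive pair;
+    similarities to the other graphs in the batch form the negatives."""
+    def cos(a, b):  # cosine similarity (assumed choice for sim)
+        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
+    B = len(H_g)
+    loss = 0.0
+    for i in range(B):
+        pos = np.exp(cos(H_g[i], H_mix[i]))
+        neg = sum(np.exp(cos(H_g[i], H_g[k])) for k in range(B) if k != i)
+        loss += -np.log(pos / neg)
+    return loss / B
+
+# orthogonal embeddings with perfectly aligned mixtures:
+# pos = e^1 and neg = 2 * e^0, so each term is -(1 - log 2)
+H_g = np.eye(3)
+assert np.isclose(causal_intervention_loss(H_g, H_g.copy()), np.log(2) - 1)
+```
+
+Minimizing this loss pulls each mixture toward its source graph while pushing unrelated graph representations apart, independent of any regression label.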
+
+Remark 4.1. The ultimate loss used in our paradigm is a simple combination of the GIB objective and the causal intervention loss: $L = L_{\mathrm{GIB}} + \lambda L_{\mathrm{CI}}$ .
+
+# 5. Experiments
+
+In this section, we evaluate the prediction performance and OOD generalization ability of our method, comparing it comprehensively with existing models to demonstrate its superior generalization on regression tasks. We briefly introduce the datasets, baselines, and experimental settings below.
+
+# 5.1. Datasets
+
+GOOD-ZINC. GOOD-ZINC is a regression task in the GOOD benchmark (Gui et al., 2022), which aims to test the out-of-distribution performance of real-world molecular property regression datasets from the ZINC database (Gómez-Bombarelli et al., 2018). The input is a molecular graph containing up to 38 heavy atoms, and the task is to predict the restricted solubility of the molecule (Jin et al., 2018; Kusner et al., 2017). GOOD-ZINC includes four specific OOD types: Scaffold-Covariate, Scaffold-Concept, Size-Covariate, and Size-Concept. Scaffold OOD involves changes in molecular structures, while Size OOD varies graph size. Each can manifest as Covariate Shift $(P(X)$ changes, $P(Y|X)$ remains stable) or Concept Shift (spurious correlations in training break in testing).
+
+ReactionOOD-SOOD. In addition to the GOOD benchmark, we also use three structural OOD (S-OOD) datasets from the ReactionOOD benchmark (Wang et al., 2023), namely Cycloaddition (Stuyver et al., 2023), E2&S$_{\mathrm{N}}$2 (von Rudorff et al., 2020), and RDB7 (Spiekermann et al., 2022), which are designed to probe information outside the structural distribution of molecular reactions. Cycloaddition and RDB7 have two domains: Total Atom Number (where the total number of atoms in a reaction exceeds the training range) and First Reactant Scaffold (where the first reactant has a molecular scaffold unseen in training). The E2&S$_{\mathrm{N}}$2 dataset contains reactions with molecules whose scaffolds cannot be properly defined, which prevents the scaffold from serving as a domain index for this dataset. The definitions of the Covariate and Concept shifts in ReactionOOD are consistent with those in GOOD.
+
+# 5.2. Baselines and Setup
+
+As our framework is general and aims to address distribution shifts, we compare it against several baseline methods.
+
+Table 1. OOD generalization performance on GOOD-ZINC dataset, with boldface being the best and underline being the runner-up.
+
+| METHOD | SCAFFOLD COV. (ID) | SCAFFOLD COV. (OOD) | SCAFFOLD CON. (ID) | SCAFFOLD CON. (OOD) | SIZE COV. (ID) | SIZE COV. (OOD) | SIZE CON. (ID) | SIZE CON. (OOD) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| ERM | 0.1188±0.0030 | 0.1660±0.0093 | 0.1174±0.0013 | 0.1248±0.0018 | 0.1222±0.0061 | 0.2331±0.0169 | 0.1304±0.0010 | 0.1406±0.0002 |
+| IRM | 0.1258±0.0033 | 0.2313±0.0243 | 0.1176±0.0052 | 0.1245±0.0062 | 0.1217±0.0014 | 0.5840±0.0039 | 0.1331±0.0045 | 0.1338±0.0011 |
+| VREX | 0.0978±0.0016 | 0.1561±0.0021 | 0.1928±0.0021 | 0.1271±0.0020 | 0.1841±0.0009 | 0.2276±0.0005 | 0.1206±0.0008 | 0.1289±0.0039 |
+| MIXUP | 0.1348±0.0025 | 0.2157±0.0098 | 0.1192±0.0026 | 0.1296±0.0049 | 0.1431±0.0070 | 0.2573±0.0042 | 0.1625±0.0121 | 0.1660±0.0063 |
+| DANN | 0.1152±0.0021 | 0.1734±0.0005 | 0.1284±0.0031 | 0.1289±0.0020 | 0.1053±0.0081 | 0.2254±0.0140 | 0.1227±0.0008 | 0.1271±0.0039 |
+| CORAL | 0.1252±0.0043 | 0.1734±0.0034 | 0.1173±0.0029 | 0.1260±0.0024 | 0.1164±0.0004 | 0.2243±0.0147 | 0.1246±0.0062 | 0.1270±0.0020 |
+| CIGA | 0.1568±0.0034 | 0.2986±0.0041 | 0.1926±0.0120 | 0.2415±0.0115 | 0.1500±0.0001 | 0.6102±0.0148 | 0.3560±0.0160 | 0.3240±0.0451 |
+| DIR | 0.2483±0.0056 | 0.3650±0.0032 | 0.2510±0.0001 | 0.2619±0.0076 | 0.2515±0.0529 | 0.4224±0.0679 | 0.4831±0.0823 | 0.3630±0.0872 |
+| GSAT | <u>0.0890±0.0031</u> | <u>0.1419±0.0043</u> | <u>0.0928±0.0029</u> | <u>0.0999±0.0029</u> | <u>0.0876±0.0032</u> | <u>0.2112±0.0033</u> | <u>0.1002±0.0013</u> | <u>0.1043±0.0001</u> |
+| OURS | **0.0514±0.0061** | **0.1046±0.0007** | **0.0659±0.0041** | **0.0518±0.0007** | **0.0466±0.0034** | **0.1484±0.0033** | **0.0577±0.0008** | **0.0580±0.0004** |
+
+Empirical Risk Minimization (ERM) (Vapnik, 1991) serves as a non-OOD baseline for comparison with OOD methods. We consider both Euclidean and graph-based state-of-the-art OOD approaches: (1) Euclidean OOD methods include IRM (Arjovsky et al., 2019), VREx (Krueger et al., 2021), GroupDRO (Sagawa et al., 2019), DANN (Ganin et al., 2016), Coral (Sun & Saenko, 2016), and Mixup (Zhang, 2017); (2) Graph OOD methods include CIGA (Chen et al., 2022), GSAT (Miao et al., 2022), and DIR (Wu et al., 2022b).
+
+For a fair comparison, all methods are implemented with consistent architectures and hyperparameters, ensuring that performance differences arise solely from the method itself. To provide reliable results, each experiment is repeated three times with different random seeds, and we report the mean and standard error of the results. Detailed settings and hyperparameter configurations are described in Appendix A.4.
+
+# 5.3. Results of GOOD
+
+As shown in Table 1, our proposed method achieves SOTA performance on GOOD-ZINC, consistently outperforming all baseline methods across both domains (Scaffold and Size) and under different distribution shifts (Covariate and Concept). Specifically, in terms of Mean Absolute Error (MAE), our method demonstrates significant improvements in both in-distribution (ID) and out-of-distribution (OOD) settings.
+
+For instance, in the Scaffold domain under the Covariate shift, our method achieves an MAE of $0.0514 \pm 0.0061$ (ID) and $0.1046 \pm 0.0007$ (OOD), outperforming GSAT, the next-best method, by $42.2\%$ in ID and $26.3\%$ in OOD performance. Similarly, under the Concept shift, our method achieves $0.0659 \pm 0.0041$ (ID) and $0.0518 \pm 0.0007$ (OOD), representing improvements of $29.0\%$ and $48.1\%$ , respectively, over GSAT.
+
+In the Size domain, our method also achieves remarkable results. Under the Covariate shift, it achieves an MAE of $0.0466 \pm 0.0034$ (ID) and $0.1484 \pm 0.0033$ (OOD), which translate to $46.8\%$ lower ID error and $29.7\%$ lower OOD error compared to GSAT. Similarly, under the Concept shift, our approach yields an MAE of $0.0577 \pm 0.0008$ (ID) and $0.0580 \pm 0.0004$ (OOD), improving upon GSAT by $42.4\%$ and $44.4\%$, respectively.
+
+In addition to achieving lower MAE values, our method exhibits significantly reduced variances compared to other approaches, highlighting its stability under diverse conditions. These findings confirm the strong generalization capability of our method across different domains and types of distributional shifts.
+
+# 5.4. Results of ReactionOOD
+
+Table 2 and Table 3 highlight the robust generalization ability of our method across multiple datasets and evaluation settings, as measured by RMSE. Our method achieves the best OOD performance in 6 out of 10 cases and ranks second in 2 cases. Notably, in cases where another method outperforms ours, the performance gap is within a small margin.
+
+For instance, in the Cycloaddition dataset, under the Total Atom Number domain with a concept shift, our method achieves an OOD RMSE of $5.53 \pm 0.12$, outperforming all baseline methods. While some non-causal baselines (e.g., Coral in this specific setting, achieving an ID RMSE of $4.10 \pm 0.05$ versus our $4.41 \pm 0.22$) may achieve better ID performance by exploiting spurious but predictive features, such approaches can become less reliable under OOD conditions (e.g., Coral's OOD RMSE degrades to $5.74 \pm 0.04$). In contrast, our method's focus on identifying and removing these spurious features contributes to its stable and superior OOD performance. Even in other Cycloaddition cases where ours ranks second, such as the same domain with a covariate shift, the OOD RMSE $(4.42 \pm 0.24)$ is only 0.06 away from the best-performing method $(4.36 \pm 0.15)$.
+
+In RDB7, a smaller dataset within ReactionOOD where causal inference can be more difficult, our method achieves the lowest OOD RMSE $(15.73 \pm 0.37)$ under the concept shift, reflecting its principled focus on true causal features, which leads to better OOD generalization and stability. Even though causal methods generally face challenges on smaller datasets (Guo et al., 2020), our approach consistently outperforms other causal intervention baselines such as CIGA in all RDB7 settings. In the E2&S$_{\mathrm{N}}$2 dataset, our method delivers the best OOD RMSE $(4.83 \pm 0.10)$ under the covariate shift and achieves highly competitive results under the concept shift $(5.03 \pm 0.09)$.
+
+Table 2. OOD generalization performance (RMSE) on the Cycloaddition and RDB7 datasets. SCAFFOLD = First Reactant Scaffold domain; ATOM = Total Atom Number domain.
+
+| DATASET | METHOD | SCAFFOLD COV. (ID) | SCAFFOLD COV. (OOD) | SCAFFOLD CON. (ID) | SCAFFOLD CON. (OOD) | ATOM COV. (ID) | ATOM COV. (OOD) | ATOM CON. (ID) | ATOM CON. (OOD) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| CYCLOADDITION | ERM | 4.38±0.04 | 4.80±0.38 | 4.79±0.03 | 5.60±0.02 | 3.77±0.01 | 4.36±0.15 | 4.22±0.04 | 5.69±0.03 |
+| CYCLOADDITION | IRM | 15.30±0.05 | 21.16±0.01 | 17.55±0.03 | 18.64±0.25 | 17.53±0.17 | 17.44±0.14 | 23.14±0.02 | 22.56±0.01 |
+| CYCLOADDITION | VREX | 5.54±0.02 | 6.69±0.48 | 5.02±0.05 | 6.14±0.09 | 4.79±0.03 | 5.22±0.06 | 4.92±0.14 | 6.39±0.04 |
+| CYCLOADDITION | MIXUP | 4.51±0.04 | 5.24±0.83 | 4.90±0.01 | 5.90±0.05 | 3.90±0.13 | 4.53±0.03 | 4.11±0.09 | 5.93±0.13 |
+| CYCLOADDITION | DANN | 4.42±0.03 | 4.68±0.12 | 4.81±0.01 | 5.75±0.06 | 3.87±0.05 | 4.65±0.10 | 4.18±0.02 | 5.68±0.10 |
+| CYCLOADDITION | CORAL | 4.36±0.07 | 4.95±0.30 | 4.82±0.03 | 5.72±0.16 | 4.39±0.59 | 5.05±0.48 | 4.10±0.05 | 5.74±0.04 |
+| CYCLOADDITION | CIGA | 5.26±0.04 | 5.67±0.04 | 5.30±0.29 | 5.64±0.03 | 4.93±0.05 | 6.62±1.09 | 5.03±0.09 | 6.21±0.06 |
+| CYCLOADDITION | DIR | 4.94±0.02 | 5.31±0.79 | 5.85±0.20 | 6.30±0.38 | 5.52±0.03 | 6.86±0.05 | 5.21±0.12 | 7.09±0.03 |
+| CYCLOADDITION | GSAT | 4.42±0.05 | 4.63±0.05 | 4.87±0.01 | 5.69±0.01 | 3.81±0.01 | 4.56±0.01 | 4.12±0.04 | 5.64±0.11 |
+| CYCLOADDITION | OURS | 4.57±0.13 | 4.22±0.09 | 4.53±0.04 | 5.37±0.05 | 4.06±0.01 | 4.42±0.24 | 4.41±0.22 | 5.53±0.12 |
+| RDB7 | ERM | 10.28±0.05 | 22.95±0.90 | 11.38±0.08 | 14.81±0.05 | 10.86±0.01 | 7.66±0.55 | 11.28±0.15 | 15.79±0.24 |
+| RDB7 | IRM | 59.87±0.02 | 76.51±0.46 | 65.72±0.13 | 63.03±0.13 | 63.55±0.02 | 69.06±0.37 | 81.14±0.02 | 46.84±0.42 |
+| RDB7 | VREX | 16.62±0.18 | 21.89±0.02 | 14.62±0.04 | 18.28±0.09 | 14.60±0.01 | 13.84±0.07 | 34.66±1.56 | 32.59±3.28 |
+| RDB7 | MIXUP | 10.76±0.07 | 23.49±0.09 | 11.89±0.05 | 15.64±0.10 | 11.13±0.02 | 10.78±0.17 | 11.66±0.04 | 17.21±0.28 |
+| RDB7 | DANN | 10.28±0.05 | 23.54±0.07 | 11.28±0.01 | 14.93±0.05 | 10.77±0.22 | 8.29±0.10 | 11.34±0.05 | 16.28±0.15 |
+| RDB7 | CORAL | 10.30±0.12 | 22.19±0.63 | 11.12±0.03 | 14.81±0.06 | 10.61±0.01 | 8.04±0.14 | 11.33±0.08 | 16.13±0.08 |
+| RDB7 | CIGA | 14.97±0.75 | 30.08±0.84 | 18.68±1.94 | 21.35±1.34 | 16.48±0.69 | 19.12±1.85 | 20.58±1.54 | 18.53±1.30 |
+| RDB7 | DIR | 14.34±0.68 | 26.99±0.49 | 17.13±1.76 | 20.18±1.86 | 14.03±2.06 | 15.01±0.98 | 13.52±0.51 | 16.60±1.09 |
+| RDB7 | GSAT | 10.52±0.04 | 23.45±0.11 | 11.26±0.25 | 14.85±0.12 | 10.80±0.01 | 8.66±0.10 | 11.58±0.03 | 16.08±0.41 |
+| RDB7 | OURS | 10.12±0.08 | 23.11±0.46 | 11.26±0.02 | 14.94±0.25 | 10.51±0.08 | 6.84±0.32 | 11.46±0.06 | 15.73±0.37 |
+
+Table 3. OOD generalization performance (RMSE) on the E2&S$_{\mathrm{N}}$2 dataset.
+
+| METHOD | COVARIATE (ID) | COVARIATE (OOD) | CONCEPT (ID) | CONCEPT (OOD) |
+| --- | --- | --- | --- | --- |
+| ERM | 4.45±0.04 | 5.47±0.27 | 4.87±0.02 | 5.04±0.02 |
+| IRM | 11.61±0.18 | 21.54±1.07 | 20.95±0.02 | 17.57±0.03 |
+| VREX | 4.58±0.02 | 5.48±0.13 | 10.75±1.54 | 8.77±2.31 |
+| MIXUP | 4.55±0.09 | 5.55±0.01 | 4.69±0.08 | 5.11±0.01 |
+| DANN | 4.51±0.06 | 5.38±0.04 | 4.48±0.10 | 5.04±0.02 |
+| CORAL | 4.44±0.11 | 5.68±0.20 | 4.54±0.02 | 4.97±0.07 |
+| CIGA | 5.05±0.35 | 6.57±0.52 | 4.65±0.26 | 5.39±0.47 |
+| DIR | 5.61±0.26 | 6.59±0.31 | 6.56±0.34 | 6.29±0.11 |
+| GSAT | 4.55±0.01 | 5.69±0.05 | 4.55±0.09 | 5.04±0.03 |
+| OURS | 4.40±0.03 | 4.83±0.10 | 4.53±0.12 | 5.03±0.09 |
+
+As noted in OOD-GNN (Tajwar et al., 2021), no method consistently performs best on every dataset, owing to varying distribution shifts and inductive biases. Our approach is designed under weaker assumptions that do not require spurious features to be non-predictive, and thus aims to tackle a wider range of real-world distribution shifts.
+
+# 5.5. Effectiveness of Ours in Classification Task
+
+To validate the generality and effectiveness of our proposed losses, $L_{\mathrm{GIB}}$ and $L_{\mathrm{CI}}$, we conduct ablation studies on the GOOD-Motif dataset under the size domain setting. The results, evaluated in terms of accuracy, are reported on the OOD dataset, as shown in Figure 3. The ablation study on $L_{\mathrm{GIB}}$ examines our hypothesis that confounders possess certain predictive power; this experiment therefore excludes the causal intervention loss $L_{\mathrm{CI}}$. Conversely, the ablation study on $L_{\mathrm{CI}}$ evaluates whether the contrastive-learning-driven causal intervention loss can independently achieve strong OOD performance; in this experiment, we accordingly do not model the predictive power of confounding factors.
+
+Predictive power of confounding subgraphs. The left panel compares minimizing confounding subgraph prediction alone versus introducing constraints to model their predictive ability. The results show that ignoring the predictive role of confounding subgraphs leads to incomplete disentanglement and weaker OOD generalization, demonstrating that accounting for their influence is crucial.
+
+Effectiveness of contrastive learning. The right panel compares using predictions from randomly generated counterfactual graphs as causal intervention loss versus our proposed contrastive learning loss. The results show that our contrastive learning approach, initially validated in regression tasks, is equally effective in classification tasks, highlighting its general applicability.
+
+These studies confirm the importance of explicitly modeling confounding subgraphs and the robustness of our contrastive learning loss for OOD generalization. More experimental results are provided in Appendix A.5.
+
+
+Figure 3. Ablation study on confounder predictive power (left) and causal intervention methods (right) for OOD generalization on GOOD-Motif.
+
+
+
+# 6. Conclusion
+
+In this work, we propose a recipe for causal graph regression by reshaping the processing of confounding effects in existing, classification-specific CGL techniques. In particular, we develop an enhanced graph information bottleneck (GIB) loss that highlights the impact of confounding effects and consequently benefits the recognition of causal subgraphs. Moreover, we revisit the causal intervention technique, which randomly combines causal subgraphs with confounders from the same class (label) to eliminate confounding effects. Adapting this technique to regression requires removing the dependence on label information; to this end, we analyze the principle of causal intervention and connect it with an unsupervised contrastive learning loss. Experimental results on graph OOD benchmarks demonstrate the effectiveness of the proposed techniques in improving the generalizability of graph regression models.
+
+# Acknowledgements
+
+We sincerely thank the Area Chair and the anonymous reviewers for their valuable feedback and constructive suggestions, which helped improve this work. The authors acknowledge funding from Research Grants Council (RGC) under grant 22303424 and GuangDong Basic and Applied Basic Research Foundation under grant 2025A1515010259.
+
+# Impact Statement
+
+This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none which we feel must be specifically highlighted here.
+
+# References
+
+Amini, A., Schwarting, W., Soleimany, A., and Rus, D. Deep evidential regression. Advances in neural information processing systems, 33:14927-14937, 2020.
+Arjovsky, M., Bottou, L., Gulrajani, I., and Lopez-Paz, D. Invariant risk minimization. arXiv preprint arXiv:1907.02893, 2019.
+Chechik, G., Globerson, A., Tishby, N., and Weiss, Y. Information bottleneck for gaussian variables. Advances in Neural Information Processing Systems, 16, 2003.
+Chen, Y., Zhang, Y., Bian, Y., Yang, H., Kaili, M., Xie, B., Liu, T., Han, B., and Cheng, J. Learning causally invariant representations for out-of-distribution generalization on graphs. Advances in Neural Information Processing Systems, 35:22131-22148, 2022.
+Chen, Y., Bian, Y., Zhou, K., Xie, B., Han, B., and Cheng, J. Does invariant graph learning via environment augmentation learn invariance? Advances in Neural Information Processing Systems, 36, 2024.
+Fan, S., Wang, X., Mo, Y., Shi, C., and Tang, J. Debiasing graph neural networks via learning disentangled causal substructure. Advances in Neural Information Processing Systems, 35:24934-24946, 2022.
+Fan, S., Wang, X., Shi, C., Cui, P., and Wang, B. Generalizing graph neural networks on out-of-distribution graphs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023.
+Ganin, Y., Ustinova, E., Ajakan, H., Germain, P., Larochelle, H., Laviolette, F., March, M., and Lempitsky, V. Domain-adversarial training of neural networks. Journal of machine learning research, 17(59):1-35, 2016.
+
+Gao, H., Li, J., Qiang, W., Si, L., Xu, B., Zheng, C., and Sun, F. Robust causal graph representation learning against confounding effects. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pp. 7624-7632, 2023.
+Gómez-Bombarelli, R., Wei, J. N., Duvenaud, D., Hernández-Lobato, J. M., Sánchez-Lengeling, B., Sheberla, D., Aguilera-Iparraguirre, J., Hirzel, T. D., Adams, R. P., and Aspuru-Guzik, A. Automatic chemical design using a data-driven continuous representation of molecules. ACS central science, 4(2):268-276, 2018.
+Gui, S., Li, X., Wang, L., and Ji, S. Good: A graph out-of-distribution benchmark. Advances in Neural Information Processing Systems, 35:2059-2073, 2022.
+Guo, R., Cheng, L., Li, J., Hahn, P. R., and Liu, H. A survey of learning causality with data: Problems and methods. ACM Computing Surveys (CSUR), 53(4):1-37, 2020.
+Guo, Z., Wu, Z., Xiao, T., Aggarwal, C., Liu, H., and Wang, S. Counterfactual learning on graphs: A survey. Machine Intelligence Research, 22(1):17-59, 2025.
+Jin, W., Barzilay, R., and Jaakkola, T. Junction tree variational autoencoder for molecular graph generation. In International conference on machine learning, pp. 2323-2332. PMLR, 2018.
+Kingma, D. P. and Welling, M. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
+Krueger, D., Caballero, E., Jacobsen, J.-H., Zhang, A., Binas, J., Zhang, D., Le Priol, R., and Courville, A. Out-of-distribution generalization via risk extrapolation (rex). In International conference on machine learning, pp. 5815-5826. PMLR, 2021.
+Kusner, M. J., Paige, B., and Hernández-Lobato, J. M. Grammar variational autoencoder. In International conference on machine learning, pp. 1945-1954. PMLR, 2017.
+Li, G., Knoop, V. L., and Van Lint, H. Multistep traffic forecasting by dynamic graph convolution: Interpretations of real-time spatial correlations. Transportation Research Part C: Emerging Technologies, 128:103185, 2021.
+Li, H., Wang, X., Zhang, Z., and Zhu, W. Ood-gnn: Out-of-distribution generalized graph neural network. IEEE Transactions on Knowledge and Data Engineering, 35 (7):7328-7340, 2022.
+Lin, W., Lan, H., and Li, B. Generative causal explanations for graph neural networks. In International Conference on Machine Learning, pp. 6666-6679. PMLR, 2021.
+
+Luo, D., Cheng, W., Xu, D., Yu, W., Zong, B., Chen, H., and Zhang, X. Parameterized explainer for graph neural network. Advances in neural information processing systems, 33:19620-19631, 2020.
+Ma, F., Li, H., and Ilyas, M. Utilizing reinforcement learning and causal graph networks to address the intricate dynamics in financial risk prediction. International Journal of Information Technologies and Systems Approach (IJITSA), 17(1):1-19, 2024.
+Ma, J. A survey of out-of-distribution generalization for graph machine learning from a causal view. arXiv preprint arXiv:2409.09858, 2024.
+Miao, S., Liu, M., and Li, P. Interpretable and generalizable graph learning via stochastic attention mechanism. In International Conference on Machine Learning, pp. 15524-15543. PMLR, 2022.
+Mitrovic, J., McWilliams, B., Walker, J., Buesing, L., and Blundell, C. Representation learning via invariant causal mechanisms. arXiv preprint arXiv:2010.07922, 2020.
+Nix, D. and Weigend, A. Learning local error bars for nonlinear regression. Advances in neural information processing systems, 7, 1994.
+Oord, A. v. d., Li, Y., and Vinyals, O. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.
+Pearl, J. Interpretation and identification of causal mediation. Psychological methods, 19(4):459, 2014.
+Pleiss, G., Souza, A., Kim, J., Li, B., and Weinberger, K. Q. Neural network out-of-distribution detection for regression tasks. 2019.
+Qiao, G., Wang, G., and Li, Y. Causal enhanced drug-target interaction prediction based on graph generation and multi-source information fusion. Bioinformatics, 40 (10):btae570, 2024.
+Rollins, Z. A., Cheng, A. C., and Metwally, E. Molprop: Molecular property prediction with multimodal language and graph fusion. Journal of Cheminformatics, 16(1):56, 2024.
+Rosenblatt, F. The perceptron: a probabilistic model for information storage and organization in the brain. Psychological review, 65(6):386, 1958.
+Sagawa, S., Koh, P. W., Hashimoto, T. B., and Liang, P. Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. arXiv preprint arXiv:1911.08731, 2019.
+
+Spiekermann, K., Pattanaik, L., and Green, W. H. High accuracy barrier heights, enthalpies, and rate coefficients for chemical reactions. Scientific Data, 9(1):417, 2022.
+Stuyver, T., Jorner, K., and Coley, C. W. Reaction profiles for quantum chemistry-computed $[3 + 2]$ cycloaddition reactions. Scientific Data, 10(1):66, 2023.
+Sui, Y., Wang, X., Wu, J., Lin, M., He, X., and Chua, T.-S. Causal attention for interpretable and generalizable graph classification. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 1696-1705, 2022.
+Sui, Y., Mao, W., Wang, S., Wang, X., Wu, J., He, X., and Chua, T.-S. Enhancing out-of-distribution generalization on graphs via causal attention learning. ACM Transactions on Knowledge Discovery from Data, 18(5):1-24, 2024.
+Sun, B. and Saenko, K. Deep coral: Correlation alignment for deep domain adaptation. In Computer Vision-ECCV 2016 Workshops: Amsterdam, The Netherlands, October 8-10 and 15-16, 2016, Proceedings, Part III 14, pp. 443-450. Springer, 2016.
+Tajwar, F., Kumar, A., Xie, S. M., and Liang, P. No true state-of-the-art? ood detection methods are inconsistent across datasets. arXiv preprint arXiv:2109.05554, 2021.
+Tishby, N. and Zaslavsky, N. Deep learning and the information bottleneck principle. In 2015 IEEE Information Theory Workshop (ITW), pp. 1-5. IEEE, 2015.
+Tishby, N., Pereira, F. C., and Bialek, W. The information bottleneck method. arXiv preprint physics/0004057, 2000.
+Vapnik, V. Principles of risk minimization for learning theory. Advances in neural information processing systems, 4, 1991.
+von Rudorff, G. F., Heinen, S. N., Bragato, M., and von Lilienfeld, O. A. Thousands of reactants and transition states for competing E2 and SN2 reactions. Machine Learning: Science and Technology, 1(4):045026, 2020.
+Wang, Z. and Veitch, V. A unified causal view of domain invariant representation learning. 2022.
+Wang, Z., Chen, Y., Duan, Y., Li, W., Han, B., Cheng, J., and Tong, H. Towards out-of-distribution generalizable predictions of chemical kinetics properties. arXiv preprint arXiv:2310.03152, 2023.
+Wu, Q., Zhang, H., Yan, J., and Wipf, D. Handling distribution shifts on graphs: An invariance perspective. arXiv preprint arXiv:2202.02466, 2022a.
+
+Wu, T., Ren, H., Li, P., and Leskovec, J. Graph information bottleneck. Advances in Neural Information Processing Systems, 33:20437-20448, 2020.
+Wu, Y.-X., Wang, X., Zhang, A., He, X., and Chua, T.-S. Discovering invariant rationales for graph neural networks. arXiv preprint arXiv:2201.12872, 2022b.
+Wu, Y.-X., Wang, X., Zhang, A., Hu, X., Feng, F., He, X., and Chua, T.-S. Deconfounding to explanation evaluation in graph neural networks. arXiv preprint arXiv:2201.08802, 2022c.
+Yu, S., Yu, X., Løkse, S., Jenssen, R., and Principe, J. C. Cauchy-schwarz divergence information bottleneck for regression. In ICLR, 2024.
+Zhang, H., Cisse, M., Dauphin, Y. N., and Lopez-Paz, D. mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412, 2017.
+Zhang, J., Chen, Z., Mei, H., Luo, D., and Wei, H. Regexplainer: Generating explanations for graph neural networks in regression task. arXiv preprint arXiv:2307.07840, 2023.
+Zhao, S., Prapas, I., Karasante, I., Xiong, Z., Papoutsis, I., Camps-Valls, G., and Zhu, X. X. Causal graph neural networks for wildfire danger prediction. arXiv preprint arXiv:2403.08414, 2024.
+
+# A. Supplementary Experiments
+
+# A.1. GOOD Benchmark
+
+The Graph Out-Of-Distribution (GOOD) benchmark is the most comprehensive and authoritative benchmark for assessing the OOD generalization of graph learning models. It includes 11 datasets, covering six graph-level and five node-level tasks, with 51 dataset splits across covariate shift, concept shift, and no-shift scenarios. Among them, nine datasets focus on classification (binary and multi-class), one (GOOD-ZINC) on regression, and one (GOOD-PCBA) on multi-objective binary classification. GOOD is the first benchmark to incorporate both covariate and concept shifts within the same domain, enabling controlled comparisons. It evaluates 10 state-of-the-art OOD methods, including four tailored for graphs, resulting in 510 dataset-model combinations. As a result, GOOD provides a systematic and rigorous framework for benchmarking OOD generalization in graph learning.
+
+# A.2. ReactionOOD Benchmark
+
+The ReactionOOD benchmark is a specialized out-of-distribution (OOD) evaluation framework designed to systematically assess the generalization capabilities of machine learning models in predicting the kinetic properties of chemical reactions. It introduces three distinct levels of OOD shifts—structural, conditional, and mechanistic—and comprises six datasets, all formulated as regression tasks. Structural OOD (S-OOD) examines variations in reactant structures, including shifts based on total atomic count (E2 & SN2) and reactant scaffolds (RDB7, Cycloaddition). Conditional OOD (C-OOD) investigates the effect of environmental conditions on kinetic properties, considering shifts in temperature (RMG Lib. T) and combined temperature-pressure settings (RMG Lib. TP). Mechanistic OOD (M-OOD) explores the impact of different reaction mechanisms (RMG Family) on kinetic property predictions.
+
+# A.3. GOOD-ZINC Dataset Details
+
+Table 4 presents the number of graphs/nodes in different dataset splits for the GOOD-ZINC dataset. The dataset is analyzed under three types of distribution shifts: covariate, concept, and no shift. Each row represents the number of graphs/nodes in training, in-distribution (ID) validation, ID test, out-of-distribution (OOD) validation, and OOD test sets. The no-shift scenario serves as a baseline with no distributional difference between training and test sets.
+
+Table 4. Details of GOOD-ZINC dataset.
+
+| Dataset | Shift | Train | ID validation | ID test | OOD validation | OOD test |
+| --- | --- | --- | --- | --- | --- | --- |
+| GOOD-ZINC | covariate | 149674 | 24945 | 24945 | 24945 | 24946 |
+| | concept | 101867 | 21828 | 21828 | 43539 | 60393 |
+| | no shift | 149673 | 49891 | 49891 | - | - |
+
+# A.4. Experimental Settings
+
+We use the GOOD-ZINC dataset from the GOOD benchmark and the S-OOD tasks from ReactionOOD, excluding the other OOD tasks from ReactionOOD as they are still under maintenance. Our baseline results on ReactionOOD have been acknowledged by the original authors. We use a three-layer GIN as the backbone model with 300 hidden dimensions, applied consistently to both our model and the baselines. Each model is trained for 300 epochs, with the learning rate adjusted by the cosine annealing strategy: the initial learning rate is set to 0.001, with a minimum value of $10^{-8}$. For our model, all tunable hyperparameters in the loss function $L$ are set to 0.5.
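
The schedule above can be sketched as follows. The exact formula is an assumption on our part (mirroring the standard cosine annealing schedule, e.g. PyTorch's `CosineAnnealingLR`), since only the initial and minimum rates are stated:

```python
import math

def cosine_annealing_lr(epoch, total_epochs=300, lr_init=1e-3, lr_min=1e-8):
    """Cosine-annealed learning rate: decays from lr_init to lr_min
    over total_epochs following half a cosine wave."""
    cos_term = (1 + math.cos(math.pi * epoch / total_epochs)) / 2
    return lr_min + (lr_init - lr_min) * cos_term

# Starts at the initial rate and ends at the minimum.
assert abs(cosine_annealing_lr(0) - 1e-3) < 1e-12
assert abs(cosine_annealing_lr(300) - 1e-8) < 1e-12
```
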
+
+# A.5. Ablation Studies
+
+Effectiveness Analysis To evaluate the effectiveness of the proposed loss functions $L_{GIB}$ and $L_{CI}$ in improving the model's OOD generalization, we conduct a series of ablation studies across four OOD datasets: ZINC, Cycloaddition, E2SN2, and RDB7. Ours w/o BO serves as the baseline model, in which both loss functions are removed and only the $l_{1}$ loss of the causal subgraph readout layer is used for optimization. Ours w/o GIB ablates $L_{GIB}$, eliminating the constraint on confounding subgraphs to assess the impact of removing confounder control on generalization. Conversely, Ours w/o CI removes $L_{CI}$ while keeping $L_{GIB}$, allowing us to examine the contribution of the causal intervention loss to OOD generalization. Ours represents the complete model, incorporating both loss functions for optimization. Notably, ZINC is
+
+
+Figure 4. The comparison of different components.
+
+evaluated using MAE, while the other datasets adopt RMSE as the evaluation metric. Given that the ZINC results are small (on the order of $0.0\mathrm{x}$), we scale them by a factor of 10 in Figure 4 for better visualization and comparison.
+
+The results reveal several key insights. The full model (green) consistently achieves the lowest RMSE across all datasets, demonstrating the effectiveness of jointly applying both the enhanced GIB loss and the CI loss. Removing both components (yellow) leads to the worst performance, confirming that both components are essential. Between the two losses, removing CI (orange) generally causes a larger degradation than removing GIB (blue), suggesting that CI plays a more dominant role. On E2SN2, however, GIB contributes more significantly. These results indicate that GIB and CI provide complementary benefits, and that using both yields the best OOD generalization.
+
+Parameter Sensitivity Analysis In this experiment, we analyze the sensitivity of the loss-function hyperparameters on the Cycloaddition dataset, focusing on two key components of our proposed loss function: the hyperparameter $\lambda$ for the causal intervention term and $\alpha$, $\beta$ for the confounding constraint term. The results in Figure 5
+
+
+Figure 5. Parameter sensitivity.
+
+indicate that there is no clear trend for $\alpha$: performance neither consistently improves nor degrades as it varies. For $\beta$, which balances the GIB loss, RMSE gradually increases when it is too large, especially in scaffold-covariate settings, suggesting an optimal range around 0.3-0.6. The parameter $\lambda$, which controls the causal intervention loss, has the strongest impact: a suitable interval (0.2-0.4) consistently leads to lower RMSE, while an overly large or small $\lambda$ causes performance degradation, especially in the size-concept setting. This demonstrates the importance of carefully tuning $\lambda$ to achieve effective OOD generalization.
+
+# B. Framework Details
+
+Given a GNN-based encoder $f(\cdot)$ and a graph $G_{i} = (A_{i},X_{i})$ , the graph representation is computed as:
+
+$$
+H _ {g, i} = f \left(A _ {i}, X _ {i}\right), \tag {18}
+$$
+
+Then, to estimate attention scores, inspired by (Sui et al., 2022), we utilize separate MLPs for nodes and edges. The node attention scores, which can be seen as node-level soft masks, are computed as:
+
+$$
+\mathrm{M}_{\text{node}}, \bar{\mathrm{M}}_{\text{node}} = \sigma\left(\mathrm{MLP}_{\text{node}}\left(H_{g,i}\right)\right), \tag{19}
+$$
+
+where $\sigma$ denotes the softmax operation applied across attention dimensions. Similarly, edge-level soft masks are determined by concatenating node embeddings from connected edges, followed by an edge-specific MLP:
+
+$$
+\mathrm{M}_{\text{edge}}, \bar{\mathrm{M}}_{\text{edge}} = \sigma\left(\mathrm{MLP}_{\text{edge}}\left(\left[H_{g,i}[\mathrm{row}], H_{g,i}[\mathrm{col}]\right]\right)\right), \tag{20}
+$$
+
+These soft masks serve as weighting mechanisms, allowing the model to focus on the most relevant nodes and edges while maintaining differentiability.
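
A minimal numpy sketch of Eq. (19), in which a single linear layer stands in for $\mathrm{MLP}_{\text{node}}$ and all names are hypothetical; the edge masks of Eq. (20) are computed analogously from concatenated endpoint embeddings:

```python
import numpy as np

def soft_masks(H, W):
    """Complementary soft masks via a 2-way softmax.

    H: (n, d) node embeddings; W: (d, 2) weights of a one-layer stand-in
    for MLP_node. Returns (M, M_bar) with M + M_bar = 1 element-wise, so
    each node is softly assigned to the causal vs. confounding subgraph."""
    logits = H @ W                                 # (n, 2) scores
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    e = np.exp(logits)
    probs = e / e.sum(axis=1, keepdims=True)       # softmax across the 2 dims
    return probs[:, 0], probs[:, 1]

rng = np.random.default_rng(0)
M, M_bar = soft_masks(rng.normal(size=(5, 8)), rng.normal(size=(8, 2)))
assert np.allclose(M + M_bar, 1.0)
```
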
+
+Next, we decompose the initial graph into causal and confounding attended subgraphs:
+
+$$
+C_{i} = \left\{A_{i} \odot M_{\text{edge}},\, X_{i} \odot M_{\text{node}}\right\}, \tag{21}
+$$
+
+$$
+S_{i} = \left\{A_{i} \odot \bar{M}_{\text{edge}},\, X_{i} \odot \bar{M}_{\text{node}}\right\}. \tag{22}
+$$
+
+To encode these subgraphs, $C_i$ and $S_i$ are processed through a pair of GNNs with shared parameters, extracting causal and confounding representations $H_c$ and $H_s$ , respectively. Finally, the representations of the two subgraphs are respectively used to obtain the predictions of the regression task through the corresponding readout layers.
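
A toy sketch of the decomposition in Eqs. (21)-(22); for brevity the edge mask is derived from the node mask here, whereas the framework uses a separate edge MLP:

```python
import numpy as np

# Element-wise masking of adjacency and features into causal (C_i) and
# confounding (S_i) attended parts. A, X, and the masks are toy values.
A = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])
X = np.ones((3, 2))

M_node = np.array([[0.9], [0.8], [0.1]])   # node-level soft mask
M_edge = np.minimum(M_node, M_node.T)      # hypothetical edge mask for brevity
A_causal, X_causal = A * M_edge, X * M_node              # C_i
A_conf, X_conf = A * (1 - M_edge), X * (1 - M_node)      # S_i

# Because the masks are complementary, the two parts recombine to the input.
assert np.allclose(A_causal + A_conf, A)
assert np.allclose(X_causal + X_conf, X)
```
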
+
+# C. Variational Bounds for the GIB Objective
+
+The mutual information $I(C;G)$ quantifies the dependency between $C$ and $G$ and is defined as:
+
+$$
+I (C; G) = \mathbb {E} _ {p (C, G)} \left[ \log \frac {p (C \mid G)}{p (C)} \right]. \tag {23}
+$$
+
+However, computing the marginal distribution $p(C) = \sum_{G} p(C \mid G)p(G)$ is intractable. To overcome this challenge, we approximate $p(C)$ with a variational distribution $q(C)$. Substituting $q(C)$ into Eq. (23), we reformulate $I(C;G)$ as:
+
+$$
+I (C; G) = \mathbb {E} _ {p (C, G)} \left[ \log \frac {p (C \mid G)}{q (C)} \right] - \mathrm {K L} \big (p (C) \| q (C) \big). \tag {24}
+$$
+
+The KL divergence term $\mathrm{KL}\big(p(C)\| q(C)\big)$ is non-negative, providing an upper bound for $I(C;G)$ :
+
+$$
+I (C; G) \leq \mathbb {E} _ {p (G)} \left[ \mathrm {K L} \big (p (C \mid G) \| q (C) \big) \right]. \tag {25}
+$$
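
As a quick numerical sanity check (our addition, not from the paper) of Eqs. (23)-(25), the following sketch verifies on a random discrete joint distribution that the variational bound holds for an arbitrary $q(C)$ and is tight at $q(C) = p(C)$:

```python
import numpy as np

rng = np.random.default_rng(1)
p_joint = rng.random((4, 3))
p_joint /= p_joint.sum()                  # joint p(C, G), |C| = 4, |G| = 3
p_G = p_joint.sum(axis=0)                 # marginal p(G)
p_C = p_joint.sum(axis=1)                 # marginal p(C)
p_C_given_G = p_joint / p_G               # column g holds p(C | G = g)

def kl(p, q):
    """KL divergence between two discrete distributions."""
    return np.sum(p * np.log(p / q))

# True mutual information: I(C;G) = E_{p(G)}[ KL(p(C|G) || p(C)) ].
I = sum(p_G[g] * kl(p_C_given_G[:, g], p_C) for g in range(3))

# Variational upper bound of Eq. (25) with an arbitrary q(C).
q_C = rng.random(4)
q_C /= q_C.sum()
bound = sum(p_G[g] * kl(p_C_given_G[:, g], q_C) for g in range(3))

assert bound >= I - 1e-12        # the bound holds for any q(C)
assert abs(kl(p_C, p_C)) < 1e-12 # and is tight when q(C) = p(C)
```
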
\ No newline at end of file
diff --git a/arecipeforcausalgraphregressionconfoundingeffectsrevisited/images.zip b/arecipeforcausalgraphregressionconfoundingeffectsrevisited/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..1f6af035814014de13dd2c1dac4aeb6baddf9d42
--- /dev/null
+++ b/arecipeforcausalgraphregressionconfoundingeffectsrevisited/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:41797f018a2a0c50ace574ee4f74179e4b3db8f6ee5d3a59dcff9722b79115da
+size 758150
diff --git a/arecipeforcausalgraphregressionconfoundingeffectsrevisited/layout.json b/arecipeforcausalgraphregressionconfoundingeffectsrevisited/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..2e7758876268b83df11dbc03bd0f38c7f83dd8ce
--- /dev/null
+++ b/arecipeforcausalgraphregressionconfoundingeffectsrevisited/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c0278a51e09ffcc51ddc15bd0f980800163619bdd2bc0f1eb3890faaa9efbb3a
+size 625926
diff --git a/areductionframeworkfordistributionallyrobustreinforcementlearningunderaveragereward/ac267d89-b6d3-4bbd-a1be-23969c95d530_content_list.json b/areductionframeworkfordistributionallyrobustreinforcementlearningunderaveragereward/ac267d89-b6d3-4bbd-a1be-23969c95d530_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..8e5a486121ac901ed97774e72d5af5dcde54bb37
--- /dev/null
+++ b/areductionframeworkfordistributionallyrobustreinforcementlearningunderaveragereward/ac267d89-b6d3-4bbd-a1be-23969c95d530_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e86bcdb9149e6732cb4e684c8bee0d090fc83536d50aff179f45fe8c55821ab8
+size 337400
diff --git a/areductionframeworkfordistributionallyrobustreinforcementlearningunderaveragereward/ac267d89-b6d3-4bbd-a1be-23969c95d530_model.json b/areductionframeworkfordistributionallyrobustreinforcementlearningunderaveragereward/ac267d89-b6d3-4bbd-a1be-23969c95d530_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..305f0b626d4588dc4d6d28ac7c34ca24a9fa66dd
--- /dev/null
+++ b/areductionframeworkfordistributionallyrobustreinforcementlearningunderaveragereward/ac267d89-b6d3-4bbd-a1be-23969c95d530_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:33603b23dbf6116c6784259fb6ff1e2cfa38d86750075ea949bf89284bf03198
+size 393027
diff --git a/areductionframeworkfordistributionallyrobustreinforcementlearningunderaveragereward/ac267d89-b6d3-4bbd-a1be-23969c95d530_origin.pdf b/areductionframeworkfordistributionallyrobustreinforcementlearningunderaveragereward/ac267d89-b6d3-4bbd-a1be-23969c95d530_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..da1b3f42bf58a637e42bb1bba92fd16e7eb87cfa
--- /dev/null
+++ b/areductionframeworkfordistributionallyrobustreinforcementlearningunderaveragereward/ac267d89-b6d3-4bbd-a1be-23969c95d530_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:947dc6b3fcbabb214808fb613be32dc36337be560f55ddee047a6d1084508152
+size 1862085
diff --git a/areductionframeworkfordistributionallyrobustreinforcementlearningunderaveragereward/full.md b/areductionframeworkfordistributionallyrobustreinforcementlearningunderaveragereward/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..dbe7044cbfc190520a6d7d2d00c3cefbfc9e9b0c
--- /dev/null
+++ b/areductionframeworkfordistributionallyrobustreinforcementlearningunderaveragereward/full.md
@@ -0,0 +1,1830 @@
+# A Reduction Framework for Distributionally Robust Reinforcement Learning under Average Reward
+
+Zachary Roch $^{1}$ George Atia $^{1,2}$ Yue Wang $^{1,2}$
+
+# Abstract
+
+Robust reinforcement learning (RL) under the average reward criterion, which seeks to optimize long-term system performance in uncertain environments, remains a largely unexplored area. To address this challenge, we propose a reduction-based framework that transforms robust average reward optimization into the more extensively studied robust discounted reward optimization by employing a specific discount factor. Our framework provides two key advantages. Data Efficiency: We design a model-based reduction algorithm that achieves near-optimal sample complexity, enabling efficient identification of optimal robust policies; Scalability: By bypassing the inherent challenges of scaling up average reward optimization, our framework facilitates the design of scalable, convergent algorithms for robust average reward optimization leveraging function approximation. Our algorithmic design, supported by theoretical and empirical analyses, provides a concrete solution to robust average reward RL with the first data efficiency and scalability guarantees, highlighting the framework's potential to optimize long-term performance under model uncertainty in practical problems.
+
+# 1. Introduction
+
+Reinforcement Learning (RL) aims to optimize an agent's performance by identifying a policy that maximizes cumulative rewards based on a specified criterion while interacting with an environment. Despite its remarkable success in applications such as synthetic control problems, board games
+
+$^{1}$ Department of Electrical and Computer Engineering, $^{2}$ Department of Computer Science, University of Central Florida, Orlando, Florida. Correspondence to: Zachary Roch, George Atia, Yue Wang.
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+(Silver et al., 2016; Zha et al., 2021), and video games (Wei et al., 2022; Liu et al., 2022a), RL often experiences significant performance degradation in practical settings. This phenomenon, known as the Sim-to-Real gap, arises from discrepancies between the training environment and the deployment environment. In simulation-based applications like games, the training and deployment environments are typically identical and vanilla RL performs well. However, in real-world scenarios, differences such as modeling errors, perturbations, partial observability, and potential adversarial attacks introduce model mismatches between the training and deployment environments, resulting in suboptimal policies and poor performance outcomes, undermining the reliability of RL in practical applications.
+
+To address this issue, a framework of robust RL was introduced (Bagnell et al., 2001; Nilim & El Ghaoui, 2004; Iyengar, 2005). It deviates from vanilla RL by considering a set of environment transition dynamics instead of a fixed one, and its goal is to optimize performance under the worst-case scenario across these models, which provides performance guarantees across all uncertain environments within the defined uncertainty set, making the policy more robust to model mismatches and more generalizable.
+
+On the other hand, different reward criteria in (robust) RL can result in substantially distinct problem settings. Among these, the discounted reward criterion is the most extensively studied. While it offers elegant mathematical properties, its focus on short-term rewards can lead to suboptimal long-term performance due to the exponential decay of rewards. In practical applications such as queuing control, supply chain inventory management, and communication networks (Kober et al., 2013), however, evaluating policies based on their long-term average performance becomes crucial. This highlights the importance of optimizing the long-term average reward in environments with uncertainty, motivating our focus on robust RL under average reward in this paper.
+
+Despite its practical importance, robust RL under average reward is generally more complex than its discounted counterpart due to its dependence on the limiting behavior of the underlying stochastic processes, and is hence understudied. Recent work (Wang et al., 2023e; Grand-Clement et al., 2023) further emphasizes its inherent challenges, including the fact that the Bellman operator is not a contraction, the high dimensionality of the solution space, and the difficulties in relaxing underlying assumptions.
+
+To address these challenges, a natural approach is to draw on insights from the extensive studies of robust RL for discounted rewards as an intermediate step. This idea has been validated in (Wang et al., 2023d; Grand-Clement et al., 2023), which show that, under certain assumptions, the performance of discounted robust RL asymptotically converges to that of average reward as the discount factor approaches 1. While this convergence highlights the potential of using discounted robust RL to study the average reward setting, there remains a lack of results demonstrating its practical applicability. Key questions regarding its efficiency, effectiveness, and scalability remain unanswered, leaving gaps in understanding its implementation and real-world impact.
+
+In this paper, we explore this approach in greater depth and propose a reduction-based framework for concrete algorithm implementation. This framework facilitates the use of various robust discounted RL algorithms to address robust average reward RL problems. To assess the practicality of our framework, we evaluate its performance across two key dimensions: data efficiency and scalability. Our contributions are summarized as follows.
+
+Reduction of robust average reward RL to discounted one: Under a standard assumption (Assumption 3.1), we propose a reduction-based framework that shows how robust average reward optimization under model uncertainty can be equivalently addressed through robust discounted RL with a specific discount factor. While prior work has explored asymptotic convergence, no practical guidance has been offered on selecting a reduction discount factor to guarantee the optimality of the resulting policy. Our framework provides a concrete choice of the reduction discount factor, ensuring that the robust policy learned for the discounted reward is also optimal under the average reward criterion. This universal framework deepens the understanding of the fundamental connections between average and discounted rewards while enabling robust average reward optimization.
+
+Design of data-efficient reduction algorithms: Building on our reduction framework, we present the first model-based algorithm for robust RL under average reward, applicable to various uncertainty set models, with a thorough sample complexity analysis. We study the total number of samples required to learn an $\epsilon$ -optimal robust policy for the average reward criterion under different uncertainty set structures. Specifically, we provide detailed analyses for total variation, $\chi^2$ divergence, and Kullback-Leibler divergence uncertainty sets, demonstrating that our reduction algorithms achieve near-optimal sample complexity. These results highlight the practical potential of our framework in data-intensive settings, offering the first finite-sample complexity characterization of robust RL with average reward.
+
+Design of scalable reduction algorithms: To further validate the practical applicability of our framework, we adapt it to design scalable algorithms for robust average reward RL. After identifying key challenges in scaling up robust average reward RL, we show that our reduction framework circumvents these difficulties, enabling the design of efficient algorithms for large-scale problems. We evaluate our algorithms in large-scale MuJoCo environments, showcasing the capability of our framework to optimize long-term rewards under model uncertainty in complex systems. These results underscore the potential of our framework to efficiently solve large-scale, real-world problems.
+
+# 2. Preliminaries and Problem Formulation
+
+Discounted reward MDPs. A discounted reward Markovian decision process (DMDP) $(S, \mathcal{A}, P, r, \gamma)$ is specified by: a state space $S$, an action space $\mathcal{A}$, a transition kernel $P = \{\mathsf{P}_s^a \in \Delta(S), a \in \mathcal{A}, s \in S\}$, where $\mathsf{P}_s^a$ is the distribution of the next state over $S$ upon taking action $a$ in state $s$ (with $\mathsf{P}_{s,s'}^a$ denoting the probability of transitioning to $s'$ ), a reward function $r: S \times \mathcal{A} \to [0,1]$, and a discount factor $\gamma \in [0,1)$. At each time step $t$, the agent at state $s_t$ takes an action $a_t$, the environment then transitions to the next state $s_{t+1}$ according to $\mathsf{P}_{s_t}^{a_t}$, and produces a reward signal $r_t = r(s_t, a_t)$ to the agent.
+
+A stationary policy $\pi : S \to \Delta(\mathcal{A})$ is a distribution over $\mathcal{A}$ for any given state $s$ . The agent follows the policy by taking action subject to the distribution $\pi(s)$ . The cumulative reward of a stationary policy $\pi$ starting from $s \in S$ for DMDPs is measured by the discounted value function: $V_{\gamma, \mathrm{P}}^{\pi}(s) \triangleq \mathbb{E}_{\pi, \mathrm{P}}\left[\sum_{t=0}^{\infty} \gamma^{t} r_{t} \mid S_{0} = s\right]$ .
+
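For a finite DMDP and a fixed (non-robust) kernel, this discounted value function has the closed-form policy-evaluation solution $V = (I - \gamma P_\pi)^{-1} r_\pi$; a minimal sketch, with a toy kernel and rewards of our own:

```python
import numpy as np

def discounted_value(P_pi, r_pi, gamma):
    """Policy evaluation for a fixed policy in a finite DMDP:
    V = r_pi + gamma * P_pi V  =>  V = (I - gamma * P_pi)^{-1} r_pi.
    P_pi: (n, n) state transition matrix under pi; r_pi: (n,) rewards."""
    n = len(r_pi)
    return np.linalg.solve(np.eye(n) - gamma * P_pi, r_pi)

# Two-state toy chain with constant reward 1: V(s) = 1 / (1 - gamma) = 10.
P = np.array([[0.9, 0.1], [0.2, 0.8]])
V = discounted_value(P, np.ones(2), gamma=0.9)
assert np.allclose(V, 10.0)
```
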
+Average reward MDPs. Unlike DMDPs, average reward MDPs (AMDPs) do not discount the rewards over time and instead measure the cumulative reward by considering the behavior of the underlying Markov process under the steady-state distribution. Specifically, the average reward (or the gain) of a policy $\pi$ starting from $s \in S$ is
+
+$$
+g _ {\mathsf {P}} ^ {\pi} (s) \triangleq \lim _ {n \rightarrow \infty} \mathbb {E} _ {\pi , \mathsf {P}} \left[ \frac {1}{n} \sum_ {t = 0} ^ {n - 1} r _ {t} | S _ {0} = s \right]. \tag {1}
+$$
+
+It is also useful to define the following relative value function or bias for an AMDP:
+
+$$
+h _ {\mathsf {P}} ^ {\pi} (s) \triangleq \mathbb {E} _ {\pi , \mathsf {P}} \left[ \sum_ {t = 0} ^ {\infty} \left(r _ {t} - g _ {\mathsf {P}} ^ {\pi}\right) | S _ {0} = s \right], \tag {2}
+$$
+
+which is the cumulative difference over time between the immediate reward and the average reward.
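
When the chain induced by $\pi$ has a unique stationary distribution, the gain in Eq. (1) equals that distribution dotted with the per-state rewards; a minimal sketch (the toy chain is our own):

```python
import numpy as np

def average_reward(P_pi, r_pi):
    """Gain of a policy in a unichain AMDP: the stationary distribution
    mu of P_pi dotted with the rewards r_pi."""
    n = P_pi.shape[0]
    # Stationary distribution: solve mu^T P = mu^T with sum(mu) = 1.
    A = np.vstack([P_pi.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    mu, *_ = np.linalg.lstsq(A, b, rcond=None)
    return mu @ r_pi

# Toy chain whose stationary distribution is (0.5, 0.5): gain = 0.5.
P = np.array([[0.5, 0.5], [0.5, 0.5]])
g = average_reward(P, np.array([0.0, 1.0]))
assert abs(g - 0.5) < 1e-8
```
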
+
+Robust MDPs. In robust MDPs, the transition kernel is not fixed but, instead, belongs to a designated uncertainty set denoted as $\mathcal{P}$ . Following an action, the environment undergoes a transition to the next state based on an arbitrary transition kernel $\mathsf{P} \in \mathcal{P}$ . We specifically concentrate on the $(s,a)$ -rectangular uncertainty set (Nilim & El Ghaoui, 2004; Iyengar, 2005), where $\mathcal{P} = \bigotimes_{s,a} \mathcal{P}_s^a$ , with $\mathcal{P}_s^a \subseteq \Delta(\mathcal{S})$ defined independently over all state-action pairs.
+
+Robust MDPs aim to optimize the worst-case performance over the uncertainty set. The robust discounted value function of a policy $\pi$ is defined as the worst-case discounted value function over all possible transition kernels:
+
+$$
+V _ {\gamma , \mathcal {P}} ^ {\pi} (s) \triangleq \min _ {\kappa \in \bigotimes_ {t \geq 0} \mathcal {P}} \mathbb {E} _ {\pi , \kappa} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t} r _ {t} | S _ {0} = s \right], \tag {3}
+$$
+
+where $\kappa = (\mathsf{P}_0,\mathsf{P}_1\dots)\in \bigotimes_{t\geq 0}\mathcal{P}$ . The discounted robust value functions are shown to be the unique solution to the robust discounted Bellman equation (Iyengar, 2005):
+
+$$
+V (s) = \sum_ {a} \pi (a | s) \left(r (s, a) + \gamma \sigma_ {\mathcal {P} _ {s} ^ {a}} (V)\right), \tag {4}
+$$
+
+where $\sigma_{\mathcal{P}_s^a}(V) \triangleq \min_{\mathsf{P} \in \mathcal{P}_s^a} \mathsf{P} V$ is the support function of $V$ on the uncertainty set $\mathcal{P}_s^a$ .
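+For intuition, when $\mathcal{P}_s^a$ is a total-variation ball of radius $R$ around a nominal row $p$ , the support function $\sigma_{\mathcal{P}_s^a}(V) = \min_{q} qV$ admits a simple greedy solution: shift up to $R$ probability mass from the states where $V$ is largest onto the state where $V$ is smallest. A sketch, assuming the $\frac{1}{2}\ell_1$ convention for total variation (illustrative, not the paper's implementation):
+
+```python
+import numpy as np
+
+def support_tv(p, V, R):
+    """min q@V over {q in simplex : 0.5 * ||q - p||_1 <= R}.
+
+    Greedy transport: move up to R probability mass from the states
+    where V is largest onto the state where V is smallest.
+    """
+    q = p.astype(float).copy()
+    lo = int(np.argmin(V))                 # cheapest state to land mass on
+    budget = R
+    for s in np.argsort(V)[::-1]:          # most expensive states first
+        if s == lo or budget <= 0:
+            break
+        move = min(q[s], budget)
+        q[s] -= move
+        q[lo] += move
+        budget -= move
+    return q @ V
+
+p = np.array([0.25, 0.25, 0.5])
+V = np.array([0.0, 1.0, 2.0])
+print(support_tv(p, V, 0.2))   # 0.85: 0.2 mass moved from V=2 to V=0
+```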
+
+In scenarios where the long-term performance under model uncertainty is concerned, we focus on the following worst-case average reward:
+
+$$
+g _ {\mathcal {P}} ^ {\pi} (s) \triangleq \min _ {\kappa \in \bigotimes_ {t \geq 0} \mathcal {P}} \lim _ {n \rightarrow \infty} \mathbb {E} _ {\pi , \kappa} \left[ \frac {1}{n} \sum_ {t = 0} ^ {n - 1} r _ {t} | S _ {0} = s \right], \tag {5}
+$$
+
+which we refer to as the robust average reward. The robust AMDP aims to find an optimal policy w.r.t. this criterion, that is, $\pi^{*} \triangleq \arg \max_{\pi \in \Pi} g_{\mathcal{P}}^{\pi}(s)$ , for any $s \in S$ .
+
+In (Wang et al., 2023d), it is shown that the robust discounted value functions converge to the robust average reward w.r.t. the same MDP as the discount factor approaches 1:
+
+$$
+\lim _ {\gamma \rightarrow 1} (1 - \gamma) V _ {\gamma , \mathcal {P}} ^ {\pi} = g _ {\mathcal {P}} ^ {\pi}. \tag {6}
+$$
+
+Hence, the robust AMDP can be approximately solved through the corresponding robust DMDP with a sufficiently large discount factor, known as the reduction method. However, it remains unclear how to select a discount factor that ensures near-optimality under the average reward, leaving the adaptation of the reduction method open.
+
+In this paper, our goal is to develop a concrete reduction framework and design algorithms for optimizing the robust average reward, and to demonstrate the practical applicability of our framework.
+
+# 3. Reduction Framework for Robust AMDPs
+
+Our framework aims to reduce the robust average reward problem to a robust discounted reward one, leveraging well-developed algorithms in the latter space. The convergence result (6) and the existence of a robust Blackwell optimal policy (Wang et al., 2023d; Grand-Clément & Petrik, 2024) (a policy that optimizes the robust discounted reward for any $\gamma > \gamma_{bw}$ ) further inspire us to reduce a robust AMDP to a robust DMDP with a sufficiently large discount factor. However, existing results focus on asymptotic convergence, leaving the choice of discount factor for a desired level of accuracy unresolved. In this section, we study the relationship between the two robust MDPs and determine a specific reduction discount factor.
+
+We first adopt the following compactness and unichain assumption, which is commonly used in robust AMDPs.
+
+Assumption 3.1. For any $s \in S, a \in \mathcal{A}$ , the uncertainty set $\mathcal{P}_s^a$ is a compact subset of $\Delta(S)$ . Moreover, any deterministic policy $\pi$ and any kernel $\mathsf{P} \in \mathcal{P}$ induce a unichain Markov process$^3$ .
+
+Due to the Heine-Borel theorem (Dugac, 1989), the first part of Assumption 3.1 is satisfied whenever the uncertainty set is closed, since it is always bounded. We remark that many standard uncertainty sets satisfy this assumption, e.g., those defined by $\epsilon$ -contamination (Huber, 1965), finite interval (Tewari & Bartlett, 2007), total variation (Rahimian et al., 2022) and KL-divergence (Hu & Hong, 2013).
+
+The second part of Assumption 3.1 imposes additional structure on the underlying MDP, an assumption commonly used in non-robust AMDP studies due to their inherent complexity (e.g., Puterman, 1994; Wan et al., 2021; Zhang & Ross, 2021; Lan, 2020; Zhang et al., 2021b). For robust AMDPs, the unichain assumption ensures the solvability of the average reward robust Bellman equation (Wang et al., 2023d;e), which plays an essential role in our analysis. Under this assumption, the stationary distribution $\eta_{\mathsf{P}}^{\pi}$ always exists and does not depend on the initial state, and the average reward is identical for all starting states (Bertsekas, 2011), i.e., $g_{\mathsf{P}}^{\pi}(s_1) = g_{\mathsf{P}}^{\pi}(s_2), \forall s_1, s_2 \in S$ .
+
+Inspired by non-robust AMDP studies (Wang et al., 2022; Zurek & Chen, 2023), we further extend the concept of the optimal bias span (Bartlett & Tewari, 2012) therein to the robust setting.
+
+Definition 3.2. For a robust AMDP $(\mathcal{S},\mathcal{A},\mathcal{P},r)$ , its robust optimal bias span is defined as
+
+$$
+\mathcal {H} \triangleq \max _ {\mathsf {P} \in \mathcal {P}} \mathbf {S p} \left(h _ {\mathsf {P}} ^ {\pi^ {*}}\right), \tag {7}
+$$
+
+where $h_{\mathsf{P}}^{\pi^{*}}$ is the relative value function as in (2), and $\mathbf{Sp}(V) \triangleq \max_{i} V(i) - \min_{i} V(i)$ is the Span semi-norm.
+
+Remark 3.3. We assume $\mathcal{H}$ is known, which can be viewed as a robust extension of the common assumption that the non-robust span is known in non-robust AMDP studies (Wang et al., 2022; Zurek & Chen, 2023; Zhang & Xie, 2023; Wang et al., 2023a). Our framework and results remain valid if $\mathcal{H}$ is replaced with any upper bound on $\mathcal{H}$ , such as the corresponding robust extensions of the mixing time or the diameter of a non-robust MDP (Wang et al., 2022). Additional discussion on estimating $\mathcal{H}$ is provided in Section 10. Specifically, we can derive an upper bound on $\mathcal{H}$ for robust MDPs with additional structure.
+
+Next, we present our reduction framework.
+
+Theorem 3.4. (Reduction Framework) For any $\epsilon > 0$ , set $\gamma := 1 - \frac{\epsilon}{\mathcal{H}}$ ; then any $\epsilon_{\gamma}$ -optimal policy$^5$ $\hat{\pi}_{\gamma}$ for the robust DMDP $(S, \mathcal{A}, \mathcal{P}, r, \gamma)$ is also an $\mathcal{O}(\epsilon)$ -optimal policy for the corresponding robust AMDP $(S, \mathcal{A}, \mathcal{P}, r)$ :
+
+$$
+g _ {\mathcal {P}} ^ {\pi^ {*}} - g _ {\mathcal {P}} ^ {\hat {\pi} _ {\gamma}} \leq \left(8 + \frac {5 \epsilon_ {\gamma}}{\mathcal {H}}\right) \epsilon .
+$$
+
+While we defer the full proof to the appendix, the intuition behind our reduction framework relies on bounding the convergence error of the robust discounted value function to the robust average reward. As shown in Lemma 11.1, under Assumption 3.1 with $\gamma \in (0,1)$ and for any stationary $\pi \in \Pi$ , $\|g_{\mathcal{P}}^{\pi} - (1 - \gamma)V_{\gamma,\mathcal{P}}^{\pi}\|_{\infty} \leq \mathbf{Sp}\big((1 - \gamma)V_{\gamma,\mathcal{P}}^{\pi}\big)$ . We then show that the $\epsilon_{\gamma}$ -optimal policy $\hat{\pi}_{\gamma}$ satisfies $\mathbf{Sp}\big((1 - \gamma)V_{\gamma,\mathcal{P}}^{\hat{\pi}_{\gamma}}\big) \leq \epsilon$ . Similarly, we show $\mathbf{Sp}\big((1 - \gamma)V_{\gamma,\mathcal{P}}^{\pi^{*}}\big) \leq \epsilon$ , and we combine these two bounds with Lemma 11.1 to derive the bound in Theorem 3.4.
+
+Theorem 3.4 provides a concrete choice for the reduction discount factor, ensuring that the robust DMDP and the robust AMDP share the same near-optimal policy. Our framework thus allows any algorithm designed for robust DMDPs to be directly applied to solve robust AMDPs, with theoretical performance guarantees, effectively bypassing the challenges of robust AMDPs by transforming them into the well-studied DMDP domain.
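+The resulting recipe can be sketched end to end: set $\gamma = 1 - \epsilon/\mathcal{H}$ and run standard robust value iteration on the induced robust DMDP. The toy implementation below uses a TV uncertainty set with a greedy support-function solver and illustrative numbers; it is a sketch of the reduction idea, not the paper's code:
+
+```python
+import numpy as np
+
+def support_tv(p, V, R):
+    """min q@V over {q in simplex : 0.5 * ||q - p||_1 <= R} (greedy)."""
+    q = p.astype(float).copy()
+    lo = int(np.argmin(V))
+    budget = R
+    for s in np.argsort(V)[::-1]:          # highest-V states first
+        if s == lo or budget <= 0:
+            break
+        move = min(q[s], budget)
+        q[s] -= move
+        q[lo] += move
+        budget -= move
+    return q @ V
+
+def robust_vi(P, r, R, H, eps, iters=2000):
+    """Reduction sketch: gamma = 1 - eps/H, then robust value iteration."""
+    S, A, _ = P.shape
+    gamma = 1.0 - eps / H                  # reduction discount factor
+    V = np.zeros(S)
+    for _ in range(iters):
+        Q = np.array([[r[s, a] + gamma * support_tv(P[s, a], V, R)
+                       for a in range(A)] for s in range(S)])
+        V_new = Q.max(axis=1)
+        done = np.max(np.abs(V_new - V)) < 1e-10
+        V = V_new
+        if done:
+            break
+    return gamma, V, Q.argmax(axis=1)      # greedy (deterministic) policy
+
+# Toy instance with illustrative numbers.
+P = np.array([[[0.9, 0.1], [0.2, 0.8]],
+              [[0.6, 0.4], [0.1, 0.9]]])
+r = np.array([[1.0, 0.3], [0.2, 0.8]])
+gamma, V, policy = robust_vi(P, r, R=0.2, H=2.0, eps=0.1)
+print(gamma, policy)
+```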
+
+In the following sections, we investigate the applicability of our reduction framework from two key perspectives: data efficiency and scalability. These demonstrate the framework's ability to optimize long-term performance in data-intensive and large-scale practical scenarios.
+
+# 4. Sample Complexity for Robust RL under Average Reward
+
+In this section, we study the data efficiency of our reduction framework for robust RL under average reward, to characterize the total number of samples required to identify an $\epsilon$ -optimal policy $\pi$ through our reduction framework.
+
+We first present a general result on the sample complexity of robust AMDP reduction. Specifically, leveraging Theorem 3.4, the sample complexity of robust AMDP algorithms aligns with that of robust DMDP algorithms with a specific discount factor. This result is formally stated as follows:
+
+Theorem 4.1. Consider any algorithm $\mathcal{Y}$ optimizing the robust discounted reward. Denote the sample complexity of $\mathcal{Y}$ to identify an $\epsilon_{\gamma}$ -optimal policy (w.r.t. the discounted reward) by $\mathcal{N}(\mathcal{S},\mathcal{A},\mathcal{P},\gamma ,\epsilon_{\gamma})$ . Then, we can identify an $\epsilon$ -optimal policy for the robust average reward through reduction and algorithm $\mathcal{Y}$ , with a sample complexity of
+
+$$
+\mathcal {N} \left(\mathcal {S}, \mathcal {A}, \mathcal {P}, 1 - \frac {\epsilon}{\mathcal {H}}, \mathcal {H}\right). \tag {8}
+$$
+
+This result holds universally, regardless of the uncertainty set models or algorithms used for robust DMDPs. More importantly, it provides a basis for studying sample complexity and data requirements for optimizing the average reward under model uncertainty. In the following, we analyze the sample complexity of robust AMDPs under different uncertainty sets, offering a concrete understanding of their data efficiency and our framework.
+
+Remark 4.2. Although Theorem 4.1 holds for general uncertainty sets, existing sample complexity studies of robust RL focus on the 'ball-structured' uncertainty sets:
+
+$$
+\mathcal {P} _ {s} ^ {a} = \{q \in \Delta (\mathcal {S}): D (q | | \mathrm {P} _ {s} ^ {a}) \leq R \}, \tag {9}
+$$
+
+where $\mathsf{P}_s^a$ is the nominal kernel, $R$ is the uncertainty radius indicating the uncertainty level, and $D$ is some distribution distance measure or divergence function. Hence, we similarly focus on uncertainty sets with this structure.
+
+In our subsequent analysis, we focus on the generative model setting, where the agent can arbitrarily generate samples following the nominal kernel P. This setting has been widely adopted for sample complexity analysis under both non-robust RL (Agarwal et al., 2020; Li et al., 2020; Zurek & Chen, 2023) and robust RL (Shi et al., 2023; Panaganti & Kalathil, 2022; Wang et al., 2023e). While robust RL has also been explored in other settings, such as offline (Shi & Chi, 2022; Blanchet et al., 2024; Wang et al., 2024b; Panaganti et al., 2022; Liu & Xu, 2024; Wang et al., 2024a) and online (Lu et al., 2024) scenarios, and sample complexity results for robust average reward under these settings can be directly derived from Theorem 4.1, we concentrate on the generative setting, allowing us to focus on the challenges of the robust average reward framework itself rather than the complexities of data collection under restricted settings.
+
+We develop a model-based reduction meta-algorithm for robust AMDPs. Specifically, after generating the data, we construct an estimate of the nominal kernel, and build an empirical uncertainty set centered around it with the same $D$ and $R$ in (9). We then solve a DMDP with a specific discount factor to identify a near-optimal policy. Our algorithm is presented in Algorithm 1.
+
+Algorithm 1 Model-based algorithm for robust AMDPs
+
+1: Input: $N$ nominal samples $\{(s,a,s_i')\}_{i = 1}^N$ with $s_i'\sim \mathsf{P}_s^a$ for each $(s,a)$ pair, uncertainty level $R$ , robust bias span $\mathcal{H}$ , and accuracy level $\epsilon$
+2: Initialization: $Q \gets 0$
+3: Estimate transition model $\hat{\mathsf{P}}_{s,s'}^a = \frac{1}{N}\sum_{i=1}^N\mathbf{1}_{\{s_i' = s'\}}$ for each $(s,a)$
+4: Construct empirical uncertainty set $\hat{\mathcal{P}}$ centered at $\hat{\mathsf{P}}$ : $\hat{\mathcal{P}}_s^a = \{q \in \Delta(\mathcal{S}) : D(q||\hat{\mathsf{P}}_s^a) \leq R\}$
+5: Set $\gamma \gets 1 - \frac{\epsilon}{\mathcal{H}}, \epsilon_{\gamma} \gets \mathcal{H}$
+6: Obtain an $\epsilon_{\gamma}$ -optimal policy $\hat{\pi}_{\gamma}$ for the robust DMDP $(S, \mathcal{A}, \hat{\mathcal{P}}, r, \gamma)$ with value iteration
+7: Output: $\hat{\pi}_{\gamma}$
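+Line 3 of Algorithm 1 is the standard empirical-frequency estimate under the generative model. A quick numerical sketch with a hypothetical nominal kernel (not from the paper):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+S, A, N = 4, 2, 5000
+P = rng.dirichlet(np.ones(S), size=(S, A))   # hypothetical nominal kernel
+
+# Generative model: draw N next-state samples for every (s, a) pair,
+# then estimate P_hat by empirical frequencies (Line 3 of Algorithm 1).
+P_hat = np.zeros((S, A, S))
+for s in range(S):
+    for a in range(A):
+        samples = rng.choice(S, size=N, p=P[s, a])
+        counts = np.bincount(samples, minlength=S)
+        P_hat[s, a] = counts / N
+
+print(np.max(np.abs(P_hat - P)))   # entrywise estimation error
+```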
+
+Remark 4.3. In Line 6 of Algorithm 1, we need to identify an $\epsilon_{\gamma}$ -optimal policy for robust DMDPs, which can be done through robust value/policy iteration (Panaganti & Kalathil, 2022; Yang et al., 2021; Shi et al., 2023). For commonly used uncertainty sets, e.g., when $D$ is total variation or $\chi^2$ divergence, the algorithms can be implemented with polynomial computational complexity (Iyengar, 2005) and exponentially fast convergence rate.
+
+Next, we present the sample complexity of Algorithm 1 for the robust AMDP in the following theorem.
+
+Theorem 4.4. Consider an uncertainty set defined by total variation (TV) or $\chi^2$ divergence (CS). Let $C$ be some universal constant. If the total number of samples satisfies
+
+$$
+\begin{array}{l} N S A \geq \frac {C S A \min \left\{\frac {1}{R}, \mathcal {H} ^ {1 + \mathbb {I} _ {\{R < \frac {1}{\mathcal {H}}\}}} \right\} \log \left(\frac {S A N}{\delta}\right)}{\epsilon^ {2}}, \quad (T V) \\ N S A \geq \frac {C S A \mathcal {H} ^ {2} (1 + R) \log \left(\frac {S A N}{\delta}\right)}{\epsilon^ {2}}, \quad (C S) \end{array}
+$$
+
+then, with probability at least $1 - 4\delta$ , $\hat{\pi}_{\gamma}$ is $\epsilon$ -optimal under the robust average reward.
+
+The result shows that Algorithm 1 requires at most $\tilde{\mathcal{O}}\left(\frac{SA\mathcal{H}^2}{\epsilon^2}\right)$ samples to identify an $\epsilon$ -optimal policy for both uncertainty sets. We note that the minimax lower bound on the sample complexity for non-robust AMDPs is $\tilde{\Omega}\left(\frac{SAH}{\epsilon^2}\right)$ , with $H$ being the non-robust optimal span (Zurek & Chen, 2023; Wang et al., 2022; Jin & Sidford, 2021). Thus, our results are near-optimal under these uncertainty sets, aligning with the lower bound in terms of $S, A, \epsilon$ , with an additional dependence on $\mathcal{H}$ . Notably, this is the first sample complexity analysis for robust RL under average reward, offering insights into data requirements for long-term reward optimization under model uncertainty. Furthermore, our framework demonstrates strong data efficiency, requiring nearly minimal samples to optimize the long-term reward, underscoring its potential in data-intensive scenarios.
+
+Remark 4.5. Theorem 4.4 is not a direct combination of Theorem 4.1 with the existing sample complexity of robust DMDPs (Panaganti & Kalathil, 2022; Shi et al., 2023). Specifically, the existing sample complexity of a robust DMDP with a TV set is $\tilde{\mathcal{O}}\left(\frac{SA}{(1 - \gamma)^2R\epsilon_\gamma^2}\right)$ , and $\tilde{\mathcal{O}}\left(\frac{SA}{(1 - \gamma)^4\epsilon_\gamma^2}\right)$ for the CS set. By setting $\epsilon_{\gamma} = \mathcal{H}$ and $\gamma = 1 - \frac{\epsilon}{\mathcal{H}}$ , the resulting average reward complexity is $\tilde{\mathcal{O}}\left(\frac{SA}{\epsilon^2R}\right)$ and $\tilde{\mathcal{O}}\left(\frac{SA\mathcal{H}^2}{\epsilon^4}\right)$ , respectively. These higher complexities are due to the higher dependence on $(1 - \gamma)$ in the DMDP bounds, which becomes $\epsilon$ -order in the average reward setting through our framework. To achieve tighter results, we need to further tighten the complexity results for robust DMDPs. Specifically, we show that with a reward perturbation technique (Li et al., 2020; Wang et al., 2022; Zurek & Chen, 2023) and a more careful analysis of the connection between robust DMDPs and AMDPs, we can reduce both sample complexities to $\tilde{\mathcal{O}}\left(\frac{SA\mathcal{H}^2}{(1 - \gamma)^2\epsilon_\gamma^2}\right)$ , which further yields the near-optimal complexity in Theorem 4.4.
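+The substitution in Remark 4.5 can be verified with a few lines of arithmetic: with $\epsilon_{\gamma} = \mathcal{H}$ and $1 - \gamma = \epsilon / \mathcal{H}$ , the TV-set DMDP bound $\tilde{\mathcal{O}}\big(\frac{SA}{(1-\gamma)^2 R \epsilon_\gamma^2}\big)$ collapses to $\tilde{\mathcal{O}}\big(\frac{SA}{\epsilon^2 R}\big)$ . A numerical check (constants and log factors dropped; the numbers are purely illustrative):
+
+```python
+# Substituting eps_gamma = H and 1 - gamma = eps / H into the TV-set
+# DMDP bound SA / ((1 - gamma)^2 * R * eps_gamma^2); constants and log
+# factors are dropped, and the numbers are purely illustrative.
+S, A, H, R, eps = 10.0, 4.0, 5.0, 0.2, 0.01
+one_minus_gamma = eps / H            # reduction discount factor choice
+eps_gamma = H                        # accuracy passed to the DMDP solver
+dmdp_bound = S * A / (one_minus_gamma ** 2 * R * eps_gamma ** 2)
+amdp_bound = S * A / (eps ** 2 * R)
+print(dmdp_bound, amdp_bound)        # the two bounds coincide
+```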
+
+We can further obtain sample complexity for optimizing robust AMDPs for other types of uncertainty sets, by combining Theorem 4.1 with existing results for DMDPs. For instance, combining with (Panaganti & Kalathil, 2022), Algorithm 1 requires $\mathcal{O}\left(\frac{S^2A\mathcal{H}^2\log\left(\frac{SAN}{\delta}\right)}{\epsilon^4}\exp \left(\frac{\mathcal{H}}{\epsilon}\right)\right)$ samples to identify an $\epsilon$ -optimal policy under the Kullback-Leibler (KL) divergence. The results in (Clavier et al., 2023) imply a sample complexity of $\mathcal{O}\left(\frac{SA\mathcal{H}^2\log\left(\frac{SAN}{\delta}\right)}{\epsilon^4}\right)$ for the $l_{p}$ -normed uncertainty set. Our framework thus provides the first concrete sample complexity characterization for robust AMDPs, establishing a foundation for studying robust long-term performance optimization.
+
+Remark 4.6. As demonstrated in prior studies on non-robust AMDPs (Zurek & Chen, 2023; Wang et al., 2022; 2023b), tightening the complexity bounds for non-robust DMDPs leads to optimal AMDP complexity in the corresponding reduction framework. We therefore attribute our sub-optimal sample complexity to the loose bounds of robust DMDPs rather than limitations in our reduction framework. Refining the complexity analysis of robust DMDPs to align with the optimal bounds for robust AMDPs is an important direction for future research.
+
+# 5. Scalable Robust RL for Average Reward
+
+In this section, we explore scalable approaches for optimizing the average reward under model uncertainty, aiming to facilitate long-term performance optimization in practical, large-scale problems. Specifically, we show that our framework overcomes the major challenges in solving large-scale robust AMDPs and further enables us to design scalable algorithms for the robust average reward, greatly enhancing the scalability and applicability of our methods.
+
+When the problem scale is large, function approximation (FA) techniques have been extensively studied. FA methods approximate the value function by some low-dimensional function class, $\mathcal{F} = \{f_{\theta}(s):\theta \in \Theta \subseteq \mathbb{R}^d,d\ll S\}$ , seeking some $\theta^{*}\in \Theta$ such that $V(s)\approx f_{\theta^{*}}(s),\forall s$ . Two commonly studied function classes for FA are linear functions and neural networks (Cai et al., 2019; Bhatnagar et al., 2009; Wai et al., 2019). We focus on linear FA to illustrate the scalability of our framework, but our method extends directly to neural networks and other function classes.
+
+Linear FA is based on a set of feature functions $\{\phi : S \to \mathbb{R}^d\}$ . In robust DMDPs, the robust value function is approximated using a linear function: $V_{\theta}(s) \triangleq \phi(s)^{\top} \theta \approx V_{\gamma, \mathcal{P}}^{*}(s)$ , where $\theta \in \mathbb{R}^d$ is some weight vector. Despite extensive studies on linear FA in robust DMDPs (Tamar et al., 2014; Xu & Mannor, 2010; Wang & Zou, 2021; Zhou et al., 2024; Roy et al., 2017; Badrinath & Kalathil, 2021), it remains largely understudied for (robust) AMDPs. In the non-robust average reward policy evaluation problem, we aim to approximate the bias $h_{\mathsf{P}}^{\pi}$ to estimate $g_{\mathsf{P}}^{\pi}$ , by approximating the solution to the non-robust Bellman equation
+
+$$
+h (s) = \sum_ {a} \pi (a | s) \left(r (s, a) - g + \mathsf {P} _ {s} ^ {a} h\right). \tag {10}
+$$
+
+However, two major challenges hinder the study of linear FA for average reward MDPs. The first is that (10) admits non-unique solutions (Puterman, 1994; Wan et al., 2021; Wan & Sutton, 2022). Specifically, besides the average reward and relative value function pair $(g_{\mathsf{P}}^{\pi},h_{\mathsf{P}}^{\pi})$ , any pair$^6$ $(g_{\mathsf{P}}^{\pi},h_{\mathsf{P}}^{\pi} + ce)$ with $c\in \mathbb{R}$ is also a solution to (10). This implies that the weight vector may not be unique, leading to a divergent algorithm. A common approach to address this issue is to impose additional assumptions on the feature functions (Zhang et al., 2021a; Tsitsiklis & Van Roy, 1999; Konda & Tsitsiklis, 1999; Yu & Bertsekas, 2009), ensuring that the all-one vector $e$ does not lie within the span of $\{\phi (s)\}$ , i.e., $e \neq \Phi \theta, \forall \theta$ . Under this assumption, there exists a unique $\theta$ that minimizes the approximation loss. We note that the robust average reward also encounters this issue, but the higher dimensionality of its solution space (compared to the one-dimensional solution space in the non-robust setting) (Wang et al., 2024c) may result in more restrictive assumptions, making it even more challenging to directly apply linear FA to the robust average reward setting.
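+The condition $e \neq \Phi \theta, \forall \theta$ can be checked numerically from the least-squares residual of projecting $e$ onto the column span of $\Phi$ . A sketch with hypothetical feature matrices:
+
+```python
+import numpy as np
+
+# Feature matrix Phi (S x d): rows are phi(s)^T. Numbers are illustrative.
+S, d = 5, 2
+rng = np.random.default_rng(1)
+Phi_bad = np.hstack([np.ones((S, 1)), rng.standard_normal((S, 1))])  # contains e
+Phi_good = rng.standard_normal((S, d))                               # generic features
+
+def e_in_span(Phi, tol=1e-8):
+    """Check whether the all-one vector lies in the column span of Phi."""
+    e = np.ones(Phi.shape[0])
+    theta, *_ = np.linalg.lstsq(Phi, e, rcond=None)
+    return bool(np.linalg.norm(Phi @ theta - e) < tol)
+
+print(e_in_span(Phi_bad), e_in_span(Phi_good))   # True False
+```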
+
+Another challenge arises from unstable convergence under the average reward. Even in the tabular setting, algorithm design and convergence analysis remain limited due to the instability caused by the Span semi-norm multi-step contraction of the Bellman operator (Puterman, 1994), compared to the norm contraction in the discounted setting. Existing studies address this issue either by introducing additional offset functions to stabilize convergence (Wan et al., 2021; Wan & Sutton, 2022; Puterman, 1994; Bertsekas, 2011) or by only ensuring convergence to a solution set (Zhang et al., 2021b). As demonstrated in (Wang et al., 2023d;e), similar challenges persist in the robust average reward setting, and addressing them is expected to involve even greater complexity.
+
+Given these two issues, directly applying FA techniques to the robust average reward is much more challenging. However, our reduction framework simplifies this by transforming the complex robust average reward problem into the more manageable discounted reward setting. This bypasses the aforementioned difficulties and facilitates the design of scalable algorithms. As an immediate application, we extend our reduction framework to the robust natural actor-critic (NAC) algorithm with linear FA (Zhou et al., 2024), resulting in a robust NAC algorithm for robust AMDPs, detailed in Algorithm 2. Algorithm 2 consists of two key steps: (i) a robust TD step to update the weight vector $\theta$ for value approximation, and (ii) an actor step to update the policy$^7$ , neither of which suffers from the issues mentioned above.
+
+Note that the feature vectors in Algorithm 2 are predefined, e.g., by tile coding (Sutton, 1995), Fourier Basis (Konidaris et al., 2011), or randomly generated (Ghavamzadeh et al., 2010), by the learner prior to training.
+
+We further characterize the convergence of Algorithm 2 (see Theorem 14.2 for the formal theorem statement).
+
+Theorem 5.1. (Informal) Under some additional standard assumptions, set $T = \tilde{\mathcal{O}}\left(\frac{\mathcal{H}}{\epsilon}\right)$ in Algorithm 2, then $\pi_{w_T}$ is an $\mathcal{O}(\epsilon + \epsilon_c)$ -optimal policy, where $\epsilon_c$ is the critic error due to the representation power of the linear function class. The total sample complexity is $\tilde{\mathcal{O}}(\mathcal{H}^2 \epsilon^{-3})$ .
+
+# Algorithm 2 Reduction Robust Natural Actor-Critic
+
+1: Input: $T, \mathcal{H}$ , base functions $\{\phi\}$
+2: Initialization: $\theta_0$ for value function approximation and $w_{0}$ for policy parametrization
+3: $\gamma \gets 1 - \frac{\epsilon}{\mathcal{H}}$
+4: for $t = 0,1,\ldots ,T - 1$ do
+5: Robust critic updates $\theta_t$ with Algorithm 3
+6: Robust natural actor updates $w_{t+1}$ with Algorithm 4
+7: end for
+8: Output: $w_{T}$
+
+Our results demonstrate that, without requiring any additional assumptions on the base functions, our framework enables the optimization of the robust average reward with function approximation via a stable, convergent algorithm, underscoring its scalability. This represents the first solution for large-scale robust long-term reward optimization, offering both convergence and performance guarantees.
+
+Remark 5.2. In addition to function approximation, designing model-free algorithms for robust AMDPs can also improve scalability. Our framework also facilitates model-free algorithm design, which we discuss in detail in Section 15.
+
+# 6. Numerical Experiments
+
+We first assess the effectiveness and efficiency of our framework in tabular settings, aiming to verify that with the reduction discount factor in Theorem 3.4, optimizing the robust DMDP approximately optimizes the robust AMDP.
+
+Our experiments are conducted on the Garnet problem (Archibald et al., 1995), where the nominal transition kernel and reward functions are randomly generated from Gaussian distributions. We consider a Garnet problem with 20 states and 8 actions, constructing uncertainty sets using Total Variation (TV) and Chi-Square (CS) divergences, both with radius $R = 0.2$ . Additional results for other problems are provided in Section 9.
+
+To verify the effectiveness of our framework, we optimize robust DMDPs with different discount factors and plot the robust average reward $g_{\mathcal{P}}^{\pi_{\gamma}}$ of the learned policy $\pi_{\gamma}$ using Algorithm 1 from (Wang et al., 2023d). As a baseline, we plot the optimal robust average reward $g_{\mathcal{P}}^{*}$ , obtained via Algorithm 2 from (Wang et al., 2023d). Additionally, we estimate $\mathcal{H}$ using Algorithm 2 from (Wang et al., 2023e) and compute the corresponding reduction factor for different accuracy levels $\epsilon$ , as described in Theorem 3.4. The results, shown in Figure 1, indicate that the robust average reward can be approximately optimized through a robust DMDP with a sufficiently large discount factor. Furthermore, the robust average reward corresponding to each $\epsilon$ falls within the prescribed accuracy level, confirming the effectiveness of our framework and the validity of the reduction factor.
+
+
+Figure 1. Effectiveness under the Garnet problem.
+
+
+
+Next, we evaluate the data efficiency of our model-based framework under the same settings by optimizing empirical robust DMDPs with a reduction factor corresponding to $\epsilon = 0.01$ across different dataset sizes. The results, presented in Figure 2, show that our model-based algorithm converges to the optimal policy as the dataset size $N$ increases and achieves near-optimal performance with a limited amount of data. These findings highlight the data efficiency of our framework, demonstrating its ability to optimize long-term rewards with fewer samples. Since the computational cost of finding the worst-case performance (i.e., robust policy evaluation) for the TV and CS uncertainty sets is $O\big(S\log (S)\big)$ (Iyengar, 2005) in the tabular setting, this evaluation remains tractable.
+
+
+Figure 2. Efficiency under the Garnet problem.
+
+
+
+We then demonstrate the scalability of our framework in large-scale problems by showing that robust average reward optimization can be equivalently formulated as a robust DMDP with a sufficiently large reduction factor. Specifically, we implement Algorithm 2 in the MuJoCo simulation environments (Todorov et al., 2012) for robust DMDPs with varying discount factors and evaluate the average reward performance under model uncertainty. Implementation details are provided in Section 14.
+
+Our experiments are conducted in two continuous large-scale environments: Walker2d-v3 and Hopper-v3. We first train our algorithms in the nominal environments to learn the optimal policy for the given robust discounted reward. To estimate the robust average reward, we deploy the learned policies in perturbed environments, where at each evaluation epoch, we randomly sample parameters within the perturbation interval and apply them to each joint. The average reward is then computed over 30 independently perturbed environments.
+
+
+Figure 3. Scalability under Walker2d-v3.
+
+
+Figure 5. Neural network approximation under Walker2d-v3.
+
+
+Figure 4. Scalability under Hopper-v3.
+
+
+
+As noted in Remark 3.3, we assume knowledge of $\mathcal{H}$ ; in practice, however, this is rarely the case. In lieu of a pre-obtained $\mathcal{H}$ , we can equivalently use any upper bound on $\mathcal{H}$ , and our results still hold. In our experiments on Walker2d-v3 and Hopper-v3, we opted to estimate the diameter, as it is easily obtained. For each of the independently perturbed environments, we sampled 1,000 random trajectories and recorded how many steps each trajectory took to terminate. We then recorded the largest step count in each of the 30 environments and averaged these values to obtain an estimate of the diameter. The results, presented in Figures 3 and 4, show that as the discount factor increases, robust DMDP optimization aligns more closely with robust average reward optimization. In both environments, performance stabilizes when $\gamma$ is large, further validating the effectiveness of our framework. Moreover, our results demonstrate that, when combined with function approximation techniques, our framework effectively scales to large problems.
+
+We emphasize that our reduction framework is independent of the specific discounted algorithm used, and is thus not limited to linear approximation. To verify this claim, we present additional experiments using neural network approximation in Figures 5 and 6. As the results show, our reduction framework remains valid with neural network approximation, and it is more robust than the non-robust reduction method. We present additional results in Section 14.
+
+# 7. Related Work
+
+# 7.1. Comparison with Prior Art
+
+Figure 6. Neural network approximation under Hopper-v3.
+
+In this section, we compare our results with the most related work (Grand-Clément & Petrik, 2024), which also explores the reduction of robust AMDPs to DMDPs. However, it focuses on bounding the Blackwell discount factor. The Blackwell discount factor, denoted as $\gamma_{bw}$ , is defined such that the optimal robust policy for the DMDP with discount factor $\gamma_{bw}$ is also optimal for any robust DMDP with discount factor $\gamma \geq \gamma_{bw}$ . This implies that any optimal policy for the robust $\gamma_{bw}$ -DMDP also optimizes the robust AMDP.
+
+With this definition, a robust AMDP can similarly be reduced to a robust DMDP with discount factor $\gamma_{bw}$ . To enable this reduction, (Grand-Clément & Petrik, 2024) derives an upper bound on $\gamma_{bw}$ . While this reduction-based framework shares some similarity with ours, our method offers several advantages.
+
+First, the results in (Grand-Clément & Petrik, 2024) apply only to robust MDPs that meet two conditions: (1) the uncertainty sets are defined using $l_{1}$ - or $l_{\infty}$ -norms; and (2) the nominal kernels are rational, i.e., $\mathsf{P}_{s,s'}^a = \frac{n_{s,a}}{m_{s,a}}$ for some $n_{s,a}, m_{s,a} \in \mathbb{N}$ . In contrast, we only require the uncertainty set to be compact and the Markov chain induced by each kernel in it to be unichain. The restrictions in their work stem from the reliance on separation bounds of algebraic numbers for rational polynomials in their proofs, whereas we develop a more detailed structural characterization of robust AMDPs, allowing us to obtain more general results. More importantly, the resulting sample complexity from (Grand-Clément & Petrik, 2024) is less favorable than ours. The robust Blackwell discount factor is bounded by $\gamma_{bw} \leq 1 - \frac{C}{S^S m^{S^2}}$ , where $m$ is the minimal denominator of the nominal kernel. Using this result in our reduction framework leads to an exponentially large sample complexity, making the results impractical. We note that Blackwell optimality can be overly stringent for merely solving a robust AMDP, and hence it results in significantly worse sample complexity compared to ours.
+
+# 7.2. Other Related Work
+
Robust AMDPs. Studies of robust AMDPs are quite limited. Model-based robust AMDPs were first studied in (Tewari & Bartlett, 2007) for a specific finite-interval uncertainty set, which was further extended to more general models in recent works including (Wang et al., 2023d; Grand-Clement et al., 2023). A game-based method is also proposed in (Chatterjee et al., 2023). These works reveal the fundamental structure of robust AMDPs, illustrating their connections to robust DMDPs. However, all of them are model-based with only asymptotic convergence guarantees, whereas we develop a sample complexity analysis.
+
Robust DMDPs. Robust DMDPs were studied in (Iyengar, 2005; Nilim & El Ghaoui, 2004; Bagnell et al., 2001; Wiesemann et al., 2013; Lim et al., 2013), where the uncertainty set is assumed to be fully known. This inspired model-based methods for robust MDPs, where the learner first estimates a model and then solves the estimated model using robust dynamic programming (Zhang et al., 2021c; Panaganti & Kalathil, 2022; Yang et al., 2021; Shi et al., 2023). These studies were also extended to the more practical model-free setting (Roy et al., 2017; Badrinath & Kalathil, 2021; Wang & Zou, 2021; 2022; Liu et al., 2022b; Zhou et al., 2021; Goyal & Grand-Clement, 2018; Kaufman & Schaefer, 2013; Ho et al., 2018; 2021; Wang et al., 2024c). Our work shows that the sample complexity of solving robust AMDPs can be reduced to that of solving robust DMDPs, enabling us to leverage the extensive prior work on the discounted setting.
+
Non-robust AMDPs. Early contributions to non-robust AMDPs involve fundamental characterizations of the problem and the development of model-based methods (Puterman, 1994; Bertsekas, 2011). Recently, model-free methods in the tabular setting, e.g., (Abounadi et al., 2001; Wan et al., 2021; Wan & Sutton, 2022), have been developed and shown to converge to the optimal average reward. The sample complexity of non-robust AMDPs has been a recent focus (Wang et al., 2022; Zhang & Xie, 2023). Among them, similar reduction-based methods are considered in, e.g., (Wang et al., 2022; 2023b; Zurek & Chen, 2023), achieving the optimal complexity. Extending such frameworks to robust settings is notably challenging due to the inherent complexity of the robust average reward setting, stemming from the non-linearity of the Bellman operator and a more complicated high-dimensional solution space for the robust Bellman equation (Wang et al., 2023e).
+
+# 8. Conclusion
+
In this work, we studied the fundamental connection between robust AMDPs and DMDPs. We revealed that obtaining an optimal policy for the robust average reward is equivalent to achieving a near-optimal policy under the discounted reward with a specific reduction discount factor. Based on this, we constructed a reduction-based framework that solves robust RL with average reward effectively. Our framework is adaptable to any method or oracle and is versatile across uncertainty sets. It offers two key benefits, data efficiency and scalability, as illustrated by our design of both tabular and function-approximation algorithms, along with their sample complexity analyses and experimental performance. Our results represent the first concrete solutions for robust RL in the average reward setting under the mild unichain assumption, advancing the understanding of optimizing long-term RL performance under model uncertainty.
+
+# Acknowledgments
+
This work was supported by DARPA under Agreement No. HR0011-24-9-0427 and NSF under Award CCF-2106339. We also thank the reviewers of this work for their insightful comments during the revision of this paper.
+
+# Impact Statement
+
The work contained in this paper aims to advance the field of robust reinforcement learning. As we provide a general, theory-based framework with many potential applications across different fields, we do not feel the need to delve into specific potential impacts here.
+
+# References
+
+Abounadi, J., Bertsekas, D., and Borkar, V. S. Learning algorithms for Markov decision processes with average cost. SIAM Journal on Control and Optimization, 40(3): 681-698, 2001.
+Agarwal, A., Kakade, S., and Yang, L. F. Model-based reinforcement learning with a generative model is minimax optimal. In Conference on Learning Theory, pp. 67-83. PMLR, 2020.
+Archibald, T., McKinnon, K., and Thomas, L. On the generation of Markov decision processes. Journal of the Operational Research Society, 46(3):354-361, 1995.
+Badrinath, K. P. and Kalathil, D. Robust reinforcement learning using least squares policy iteration with provable performance guarantees. In Proc. International Conference on Machine Learning (ICML), pp. 511-520. PMLR, 2021.
+Bagnell, J. A., Ng, A. Y., and Schneider, J. G. Solving uncertain Markov decision processes. Carnegie Mellon University, Technical Report, 2001.
+
+Bartlett, P. L. and Tewari, A. Regal: A regularization based algorithm for reinforcement learning in weakly communicating mdps. arXiv preprint arXiv:1205.2661, 2012.
+Bertsekas, D. P. Dynamic Programming and Optimal Control 3rd edition, volume II. Belmont, MA: Athena Scientific, 2011.
+Bhatnagar, S., Precup, D., Silver, D., Sutton, R. S., Maei, H., and Szepesvári, C. Convergent temporal-difference learning with arbitrary smooth function approximation. In Proc. Advances in Neural Information Processing Systems (NIPS), volume 22, pp. 1204-1212, 2009.
+Blanchet, J., Lu, M., Zhang, T., and Zhong, H. Double pessimism is provably efficient for distributionally robust offline reinforcement learning: Generic algorithm and robust partial coverage. Advances in Neural Information Processing Systems, 36, 2024.
+Blanchet, J. H. and Glynn, P. W. Unbiased Monte Carlo for optimization and functions of expectations via multi-level randomization. In 2015 Winter Simulation Conference (WSC), pp. 3656-3667. IEEE, 2015.
+Blanchet, J. H., Glynn, P. W., and Pei, Y. Unbiased multilevel Monte Carlo: Stochastic optimization, steady-state simulation, quantiles, and other applications. arXiv preprint arXiv:1904.09929, 2019.
+Cai, Q., Yang, Z., Lee, J. D., and Wang, Z. Neural temporal-difference learning converges to global optima. In Proc. Advances in Neural Information Processing Systems (NeurIPS), pp. 11312-11322, 2019.
Chatterjee, K., Goharshady, E. K., Karrabi, M., Novotný, P., and Žikelić, Đ. Solving long-run average reward robust MDPs via stochastic games. arXiv preprint arXiv:2312.13912, 2023.
+Clavier, P., Pennec, E. L., and Geist, M. Towards minimax optimality of model-based robust reinforcement learning. arXiv preprint arXiv:2302.05372, 2023.
+Dugac, P. Sur la correspondance de borel et le théorème de dirichlet-heine-weierstrass-borel-schoenflies-lebesgue. Archives internationales d'histoire des sciences, 39(122): 69-110, 1989.
+Ghavamzadeh, M., Lazaric, A., Maillard, O., and Munos, R. LSTD with random projections. In Proc. Advances in Neural Information Processing Systems (NIPS), 2010.
+Goyal, V. and Grand-Clement, J. Robust Markov decision process: Beyond rectangularity. arXiv preprint arXiv:1811.00215, 2018.
+
Grand-Clément, J. and Petrik, M. Reducing Blackwell and average optimality to discounted MDPs via the Blackwell discount factor. Advances in Neural Information Processing Systems, 36, 2024.
Grand-Clement, J., Petrik, M., and Vieille, N. Beyond discounted returns: Robust Markov decision processes with average and Blackwell optimality. arXiv preprint arXiv:2312.03618, 2023.
+Ho, C. P., Petrik, M., and Wiesemann, W. Fast Bellman updates for robust MDPs. In Proc. International Conference on Machine Learning (ICML), pp. 1979-1988. PMLR, 2018.
+Ho, C. P., Petrik, M., and Wiesemann, W. Partial policy iteration for L1-robust Markov decision processes. Journal of Machine Learning Research, 22(275):1-46, 2021.
+Hu, Z. and Hong, L. J. Kullback-Leibler divergence constrained distributionally robust optimization. Available at Optimization Online, pp. 1695-1724, 2013.
+Huber, P. J. A robust version of the probability ratio test. Ann. Math. Statist., 36:1753-1758, 1965.
+Iyengar, G. N. Robust dynamic programming. Mathematics of Operations Research, 30(2):257-280, 2005.
+Jin, Y. and Sidford, A. Towards tight bounds on the sample complexity of average-reward mdps. In International Conference on Machine Learning, pp. 5055-5064. PMLR, 2021.
+Kaufman, D. L. and Schaefer, A. J. Robust modified policy iteration. INFORMS Journal on Computing, 25(3):396-410, 2013.
+Kober, J., Bagnell, J. A., and Peters, J. Reinforcement learning in robotics: A survey. The International Journal of Robotics Research, 32(11):1238-1274, 2013.
+Konda, V. and Tsitsiklis, J. Actor-critic algorithms. Advances in neural information processing systems, 12, 1999.
+Konidaris, G., Osentoski, S., and Thomas, P. Value function approximation in reinforcement learning using the fourier basis. In Proceedings of the AAAI conference on artificial intelligence, volume 25, pp. 380-385, 2011.
+Lan, G. First-order and Stochastic Optimization Methods for Machine Learning. Springer Nature, 2020.
+Levin, D. A. and Peres, Y. Markov chains and mixing times, volume 107. American Mathematical Soc., 2017.
+
+Li, G., Wei, Y., Chi, Y., Gu, Y., and Chen, Y. Breaking the sample size barrier in model-based reinforcement learning with a generative model. Advances in neural information processing systems, 33:12861-12872, 2020.
+Lim, S. H., Xu, H., and Mannor, S. Reinforcement learning in robust Markov decision processes. In Proc. Advances in Neural Information Processing Systems (NIPS), pp. 701-709, 2013.
+Liu, R.-Z., Pang, Z.-J., Meng, Z.-Y., Wang, W., Yu, Y., and Lu, T. On efficient reinforcement learning for full-length game of starcraft ii. Journal of Artificial Intelligence Research, 75:213-260, 2022a.
+Liu, Z. and Xu, P. Minimax optimal and computationally efficient algorithms for distributionally robust offline reinforcement learning. arXiv preprint arXiv:2403.09621, 2024.
+Liu, Z., Bai, Q., Blanchet, J., Dong, P., Xu, W., Zhou, Z., and Zhou, Z. Distributionally robust $Q$ -learning. In Proc. International Conference on Machine Learning (ICML), pp. 13623-13643. PMLR, 2022b.
+Lu, M., Zhong, H., Zhang, T., and Blanchet, J. Distributionally robust reinforcement learning with interactive data collection: Fundamental hardness and near-optimal algorithm. arXiv preprint arXiv:2404.03578, 2024.
+Müller, A. Integral probability metrics and their generating classes of functions. Advances in applied probability, 29 (2):429-443, 1997.
+Nilim, A. and El Ghaoui, L. Robustness in Markov decision problems with uncertain transition matrices. In Proc. Advances in Neural Information Processing Systems (NIPS), pp. 839-846, 2004.
+Panaganti, K. and Kalathil, D. Sample complexity of robust reinforcement learning with a generative model. arXiv preprint arXiv:2112.01506, 2021.
+Panaganti, K. and Kalathil, D. Sample complexity of robust reinforcement learning with a generative model. In International Conference on Artificial Intelligence and Statistics, pp. 9582-9602. PMLR, 2022.
+Panaganti, K., Xu, Z., Kalathil, D., and Ghavamzadeh, M. Robust reinforcement learning using offline data. arXiv preprint arXiv:2208.05129, 2022.
Puterman, M. L. Markov decision processes: Discrete stochastic dynamic programming. John Wiley & Sons, 1994.
+Rahimian, H., Bayraksan, G., and De-Mello, T. H. Effective scenarios in multistage distributionally robust optimization with a focus on total variation distance. SIAM Journal on Optimization, 32(3):1698-1727, 2022.
+
+Roy, A., Xu, H., and Pokutta, S. Reinforcement learning under model mismatch. In Proc. Advances in Neural Information Processing Systems (NIPS), pp. 3046-3055, 2017.
+Shi, L. and Chi, Y. Distributionally robust model-based offline reinforcement learning with near-optimal sample complexity. arXiv preprint arXiv:2208.05767, 2022.
+Shi, L., Li, G., Wei, Y., Chen, Y., Geist, M., and Chi, Y. The curious price of distributional robustness in reinforcement learning with a generative model. arXiv preprint arXiv:2305.16589, 2023.
Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016.
Sobel, M. J. The variance of discounted Markov decision processes. Journal of Applied Probability, 19(4):794-802, 1982.
+Sutton, R. S. Generalization in reinforcement learning: Successful examples using sparse coarse coding. Advances in neural information processing systems, 8, 1995.
+Tamar, A., Mannor, S., and Xu, H. Scaling up robust MDPs using function approximation. In Proc. International Conference on Machine Learning (ICML), pp. 181-189. PMLR, 2014.
+Tewari, A. and Bartlett, P. L. Bounded parameter Markov decision processes with average reward criterion. In International Conference on Computational Learning Theory, pp. 263-277. Springer, 2007.
+Todorov, E., Erez, T., and Tassa, Y. Mujoco: A physics engine for model-based control. In 2012 IEEE/RSJ international conference on intelligent robots and systems, pp. 5026-5033. IEEE, 2012.
+Tsitsiklis, J. N. and Van Roy, B. Average cost temporal-difference learning. Automatica, 35(11):1799-1808, 1999.
+Vershynin, R. High-dimensional probability: An introduction with applications in data science, volume 47. Cambridge university press, 2018.
+Wai, H.-T., Hong, M., Yang, Z., Wang, Z., and Tang, K. Variance reduced policy evaluation with smooth function approximation. In Proc. Advances in Neural Information Processing Systems (NeurIPS), volume 32, pp. 5784-5795, 2019.
+
+Wan, Y. and Sutton, R. S. On convergence of average-reward off-policy control algorithms in weakly-communicating MDPs. arXiv preprint arXiv:2209.15141, 2022.
+Wan, Y., Naik, A., and Sutton, R. S. Learning and planning in average-reward Markov decision processes. In Proc. International Conference on Machine Learning (ICML), pp. 10653-10662. PMLR, 2021.
+Wang, G. and Wang, T. Unbiased multilevel Monte Carlo methods for intractable distributions: Mlmc meets mcmc. arXiv preprint arXiv:2204.04808, 2022.
+Wang, H., Shi, L., and Chi, Y. Sample complexity of offline distributionally robust linear markov decision processes. arXiv preprint arXiv:2403.12946, 2024a.
+Wang, J., Wang, M., and Yang, L. F. Near sample-optimal reduction-based policy learning for average reward mdp. arXiv preprint arXiv:2212.00603, 2022.
+Wang, S., Blanchet, J., and Glynn, P. Optimal sample complexity for average reward Markov decision processes. arXiv preprint arXiv:2310.08833, 2023a.
+Wang, S., Blanchet, J., and Glynn, P. Optimal sample complexity of reinforcement learning for mixing discounted Markov decision processes. arXiv preprint arXiv:2302.07477, 2023b.
+Wang, S., Si, N., Blanchet, J., and Zhou, Z. A finite sample complexity bound for distributionally robust $q$ -learning. In International Conference on Artificial Intelligence and Statistics, pp. 3370-3398. PMLR, 2023c.
+Wang, Y. and Zou, S. Online robust reinforcement learning with model uncertainty. In Proc. Advances in Neural Information Processing Systems (NeurIPS), 2021.
+Wang, Y. and Zou, S. Policy gradient method for robust reinforcement learning. In Proc. International Conference on Machine Learning (ICML), volume 162, pp. 23484-23526. PMLR, 2022.
+Wang, Y., Velasquez, A., Atia, G., Prater-Bennette, A., and Zou, S. Robust average-reward Markov decision processes. In Proc. Conference on Artificial Intelligence (AAAI), 2023d.
+Wang, Y., Velasquez, A., Atia, G. K., Prater-Bennette, A., and Zou, S. Model-free robust average-reward reinforcement learning. In International Conference on Machine Learning, pp. 36431-36469. PMLR, 2023e.
+Wang, Y., Sun, Z., and Zou, S. A unified principle of pessimism for offline reinforcement learning under model mismatch. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024b.
+
+Wang, Y., Zou, S., and Wang, Y. Model-free robust reinforcement learning with sample complexity analysis. In The 40th Conference on Uncertainty in Artificial Intelligence, 2024c.
+Wei, H., Chen, J., Ji, X., Qin, H., Deng, M., Li, S., Wang, L., Zhang, W., Yu, Y., Linc, L., et al. Honor of kings arena: an environment for generalization in competitive reinforcement learning. Advances in Neural Information Processing Systems, 35:11881-11892, 2022.
+Wiesemann, W., Kuhn, D., and Rustem, B. Robust Markov decision processes. Mathematics of Operations Research, 38(1):153-183, 2013.
+Xu, H. and Mannor, S. Distributionally robust Markov decision processes. In Proc. Advances in Neural Information Processing Systems (NIPS), pp. 2505-2513, 2010.
+Yang, W., Zhang, L., and Zhang, Z. Towards theoretical understandings of robust Markov decision processes: Sample complexity and asymptotics. arXiv preprint arXiv:2105.03863, 2021.
+Yu, H. and Bertsekas, D. P. Convergence results for some temporal difference methods based on least squares. IEEE Transactions on Automatic Control, 54(7):1515-1531, 2009.
+Zha, D., Xie, J., Ma, W., Zhang, S., Lian, X., Hu, X., and Liu, J. Douzero: Mastering doudizhu with self-play deep reinforcement learning. In international conference on machine learning, pp. 12333-12344. PMLR, 2021.
+Zhang, H., Chen, H., Xiao, C., Li, B., Liu, M., Boning, D., and Hsieh, C.-J. Robust deep reinforcement learning against adversarial perturbations on state observations. Advances in Neural Information Processing Systems, 33: 21024-21037, 2020.
+Zhang, S., Wan, Y., Sutton, R. S., and Whiteson, S. Average-reward off-policy policy evaluation with function approximation. In Proc. International Conference on Machine Learning (ICML), pp. 12578-12588. PMLR, 2021a.
+Zhang, S., Zhang, Z., and Maguluri, S. T. Finite sample analysis of average-reward TD learning and $Q$ -learning. In Proc. Advances in Neural Information Processing Systems (NeurIPS), volume 34, pp. 1230-1242, 2021b.
+Zhang, X., Chen, Y., Zhu, X., and Sun, W. Robust policy gradient against strong data corruption. arXiv preprint arXiv:2102.05800, 2021c.
+Zhang, Y. and Ross, K. W. On-policy deep reinforcement learning for the average-reward criterion. In Proc. International Conference on Machine Learning (ICML), pp. 12535-12545. PMLR, 2021.
+
+Zhang, Z. and Xie, Q. Sharper model-free reinforcement learning for average-reward Markov decision processes. In The Thirty Sixth Annual Conference on Learning Theory, pp. 5476-5477. PMLR, 2023.
+Zhou, R., Liu, T., Cheng, M., Kalathil, D., Kumar, P., and Tian, C. Natural actor-critic for robust reinforcement learning with function approximation. Advances in neural information processing systems, 36, 2024.
+Zhou, Z., Bai, Q., Zhou, Z., Qiu, L., Blanchet, J., and Glynn, P. Finite-sample regret bound for distributionally robust offline tabular reinforcement learning. In Proc. International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 3331-3339. PMLR, 2021.
+Zurek, M. and Chen, Y. Span-based optimal sample complexity for average reward mdps. arXiv preprint arXiv:2311.13469, 2023.
+
+# Appendix
+
+# 9. Additional Experiments for Tabular Settings
+
Building upon the results in Section 6, we verify our theoretical findings with concrete experimental validation using the true transition kernel as well as the empirical kernel, shown in Figures 7 and 8, respectively. In each setting, both the nominal transition kernel and the reward function are generated via a normal distribution for 20 states and 8 actions. Given this, we construct the uncertainty set with either TV or $\chi^2$ divergence, both with radius $R = 0.2$ . For each fixed value of $\gamma$ , we estimate $\pi_{\gamma}$ for the associated robust DMDP using robust value iteration (Iyengar, 2005). We then apply Algorithm 1 of (Wang et al., 2023d) to this policy to obtain its robust average reward $g_{\mathcal{P}}^{\pi_{\gamma}}$ . Additionally, we plot the optimal robust average reward computed with Algorithm 2 of (Wang et al., 2023d). Iterating through discount factors, our results show that as $\gamma \rightarrow 1$ , the policies estimated from the corresponding robust DMDPs yield an increasingly higher robust average reward, converging to the optimal policy for the robust AMDP. To further solidify this point, for arbitrarily chosen values of $\epsilon$ , we can estimate the robust optimal bias span $\mathcal{H}$ using the method of (Wang et al., 2023e). With these values, we obtain the reduction discount factor necessary to recover an optimal policy of the robust AMDP.
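
As a minimal sketch of this pipeline, the snippet below replaces the TV/$\chi^2$ uncertainty sets with a small finite set of perturbed kernels (a simplification we introduce for brevity; the sizes and seeds are ours, not the paper's) and runs robust value iteration to extract $\pi_{\gamma}$, then evaluates its worst-case average reward over the same set:

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 6, 3, 0.99

# Nominal kernel plus two perturbations: a small finite stand-in for
# the TV / chi^2 uncertainty sets used in the paper's experiments.
def random_kernel():
    P = rng.random((S, A, S))
    return P / P.sum(axis=2, keepdims=True)

P_nom = random_kernel()
kernels = [P_nom] + [0.8 * P_nom + 0.2 * random_kernel() for _ in range(2)]
r = rng.random((S, A))

# Robust value iteration: the inner min runs over the finite kernel set.
V = np.zeros(S)
for _ in range(3000):
    Q = r + gamma * np.min([P @ V for P in kernels], axis=0)
    V = Q.max(axis=1)
pi = Q.argmax(axis=1)  # the policy pi_gamma

# Robust average reward of pi_gamma: worst case over the kernel set,
# via the stationary distribution of each induced chain.
def avg_reward(P):
    P_pi = P[np.arange(S), pi]                 # S x S chain under pi
    eta = np.linalg.matrix_power(P_pi, 10_000)[0]
    return float(eta @ r[np.arange(S), pi])

g_robust = min(avg_reward(P) for P in kernels)
print(round(g_robust, 3))
```

For the TV and $\chi^2$ balls of the paper, the inner minimum would instead be computed by the corresponding support function; the finite set keeps the sketch self-contained.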
+
+
Figure 7. Convergence of true kernel for TV (left) and $\chi^2$-divergence (right).

Figure 8. Convergence of empirical kernel for TV (left) and $\chi^2$-divergence (right).

We also compare other methods to our reduction framework. Due to the novelty of our work, we compare against two baselines for robust AMDPs with asymptotic convergence guarantees, robust value iteration (RVI) (Wang et al., 2023d) and robust relative value iteration (RRVI) (Wang et al., 2023e), on the tabular Garnet problem. We set the reduction discount factor to 0.99 in our framework (corresponding to $\epsilon = 0.001$ ). As shown in Figure 9, our method obtains a better policy within the same number of steps, thus achieving state-of-the-art performance in robust average reward optimization.
+
+
+Figure 9. Comparison with AMDP baselines
+
+# 10. Discussion on $\mathcal{H}$
+
+For general uncertainty sets, we can obtain an upper bound of $\mathcal{H}$ with an additional assumption.
+
Lemma 10.1. Assume there exist a positive integer $n$ and a constant $\rho \in (0,1)$ , such that
+
+$$
+\left(\mathsf {P} ^ {n}\right) _ {s, x} \geq \rho , \forall \mathsf {P} \in \mathcal {P}, (s, x) \in \mathcal {S} \times \mathcal {S}. \tag {11}
+$$
+
+Then,
+
+$$
\mathbf {Sp} \left(h _ {\mathsf {P}} ^ {\pi}\right) \leq \frac {1}{1 - \rho} \cdot \frac {1}{1 - (1 - \rho) ^ {1 / n}}. \tag {12}
+$$
+
+Proof. Without loss of generality, we prove the case $n = 1$ . The results in other cases can be derived in a similar way. First, it holds that
+
+$$
+\mathrm {P} _ {x, y} ^ {\pi} \geq \rho \eta_ {\mathrm {P}} (y), \tag {13}
+$$
+
where $\mathsf{P}_{\cdot ,\cdot}^{\pi}$ is the transition kernel induced by $\pi$ and $\eta_{\mathsf{P}}$ is its stationary distribution; (13) follows from (11) with $n = 1$ since $\eta_{\mathsf{P}}(y) \leq 1$ . Let $\mathsf{P}^{*}$ denote the matrix whose rows all equal $\eta_{\mathsf{P}}$ , and define a stochastic matrix $Q$ through the equation
+
+$$
+\mathrm {P} = \rho \mathrm {P} ^ {*} + (1 - \rho) Q. \tag {14}
+$$
+
+We claim that
+
+$$
+\left(\mathrm {P}\right) ^ {k} = \left(1 - \left(1 - \rho\right) ^ {k}\right) \mathrm {P} ^ {*} + \left(1 - \rho\right) ^ {k} Q ^ {k}. \tag {15}
+$$
+
+To show this, we use induction. Clearly (15) holds when $k = 1$ . Assume that (15) holds when $k = n$ :
+
+$$
+(\mathsf {P}) ^ {n} = (1 - (1 - \rho) ^ {n}) \mathsf {P} ^ {*} + (1 - \rho) ^ {n} Q ^ {n}. \tag {16}
+$$
+
+Then,
+
+$$
\begin{array}{l} (\mathsf {P}) ^ {1 + n} = (1 - (1 - \rho) ^ {n}) \mathsf {P} ^ {*} \mathsf {P} + (1 - \rho) ^ {n} Q ^ {n} \mathsf {P} \\ = (1 - (1 - \rho) ^ {n}) \mathsf {P} ^ {*} + (1 - \rho) ^ {n} Q ^ {n} (\rho \mathsf {P} ^ {*} + (1 - \rho) Q) \\ = (1 - (1 - \rho) ^ {n + 1}) \mathsf {P} ^ {*} + (1 - \rho) ^ {n + 1} Q ^ {n + 1}, \tag {17} \\ \end{array}
+$$
+
+which proves the claim (15). Thus, rearranging terms implies
+
+$$
+\left\| \mathsf {P} ^ {k} - \mathsf {P} ^ {*} \right\| \leq (1 - \rho) ^ {k} \| Q ^ {k} - \mathsf {P} ^ {*} \| \leq \frac {1}{1 - \rho} (1 - \rho) ^ {k}. \tag {18}
+$$
+
+Then, it holds that
+
+$$
\left\| h _ {\mathsf {P}} ^ {\pi} \right\| = \left\| \lim _ {T \rightarrow \infty} \sum_ {t = 0} ^ {T} \left(\mathsf {P} ^ {t} - \mathsf {P} ^ {*}\right) r \right\| \leq \sum_ {t = 0} ^ {\infty} \left\| \mathsf {P} ^ {t} - \mathsf {P} ^ {*} \right\| \left\| r \right\| \leq \sum_ {t = 0} ^ {\infty} (1 - \rho) ^ {t - 1} \leq \frac {1}{\rho (1 - \rho)}, \tag {19}
+$$
+
+which further implies that
+
+$$
+\mathbf {S p} \left(h _ {\mathsf {P}} ^ {\pi}\right) \leq 2 \| h _ {\mathsf {P}} ^ {\pi} \| \leq \frac {2}{\rho (1 - \rho)}. \tag {20}
+$$
+
+
+
+Remark 10.2. As a sufficient condition for the assumption in Lemma 10.1, it is commonly assumed that the Markov chains induced by any deterministic policy and transition kernels in the uncertainty set are aperiodic and irreducible (Levin & Peres, 2017). Since the aperiodicity can be ensured by applying the aperiodicity transformation (Puterman, 1994), the only additional assumption required is the irreducibility of the induced Markov chains.
+
Remark 10.3. Another well-adopted assumption that implies an upper bound on $\mathcal{H}$ is uniform geometric ergodicity. Namely, there exist constants $m > 0$ and $\rho \in (0,1)$ , such that for every deterministic policy and kernel,

$$
\left\| \left(\mathsf {P} ^ {\pi}\right) ^ {n} - \eta^ {\pi} \right\| \leq m \rho^ {n}. \tag {21}
$$
+
+In this case, it holds that
+
+$$
+\mathcal {H} \leq \max _ {\mathsf {P} \in \mathcal {P}} \mathbf {S p} \left(\lim _ {T \rightarrow \infty} \sum_ {i = 1} ^ {T - 1} \left(\left(P ^ {\pi}\right) ^ {i} - \eta^ {\pi}\right) r\right) \leq \frac {2 m}{1 - \rho}. \tag {22}
+$$
+
+# 11. Proof of Theorem 3.4
+
+We first prove some important lemmas.
+
+Lemma 11.1. Under Assumption 3.1, for any $\gamma \in (0,1)$ and any stationary policy $\pi$ , it holds that
+
+$$
+\left\| g _ {\mathcal {P}} ^ {\pi} - (1 - \gamma) V _ {\gamma , \mathcal {P}} ^ {\pi} \right\| _ {\infty} \leq \mathbf {S p} ((1 - \gamma) V _ {\gamma , \mathcal {P}} ^ {\pi}).
+$$
+
+Proof. Recalling the definition of $g_{\mathsf{P}}^{\pi}$ in (1), we have that for any kernel $\mathsf{P}$ and policy $\pi$
+
+$$
+g _ {\mathsf {P}} ^ {\pi} = \mathsf {P} _ {\pi} ^ {*} r _ {\pi},
+$$
+
+where $r_{\pi}$ is the induced immediate reward function: $r_{\pi}(s) = \mathbb{E}[r(s,a)|a\sim \pi (s)]$ , and $\mathsf{P}_{\pi}^{*}$ is the Cesàro limit (Puterman, 1994) of the transition kernel $\mathsf{P}$ that follows the policy $\pi$ .
+
+On the other hand, we have that
+
+$$
+V _ {\gamma , \mathrm {P}} ^ {\pi} = (I - \gamma \mathrm {P} _ {\pi}) ^ {- 1} r _ {\pi}. \tag {23}
+$$
+
+Note that:
+
+$$
+\mathrm {P} _ {\pi} ^ {*} (I - \gamma \mathrm {P} _ {\pi}) = \mathrm {P} _ {\pi} ^ {*} - \gamma \mathrm {P} _ {\pi} ^ {*} = (1 - \gamma) \mathrm {P} _ {\pi} ^ {*}, \tag {24}
+$$
+
+where the above equation is from the fact that for every policy $\pi$ , $\mathsf{P}_{\pi}\mathsf{P}_{\pi}^{*} = \mathsf{P}_{\pi}^{*}\mathsf{P}_{\pi} = \mathsf{P}_{\pi}^{*}\mathsf{P}_{\pi}^{*} = \mathsf{P}_{\pi}^{*}$ . Thus,
+
+$$
+\mathrm {P} _ {\pi} ^ {*} = (1 - \gamma) \mathrm {P} _ {\pi} ^ {*} (I - \gamma \mathrm {P} _ {\pi}) ^ {- 1}. \tag {25}
+$$
+
+Hence,
+
+$$
+g _ {\mathrm {P}} ^ {\pi} = \mathrm {P} _ {\pi} ^ {*} r _ {\pi} = (1 - \gamma) \mathrm {P} _ {\pi} ^ {*} (I - \gamma \mathrm {P} _ {\pi}) ^ {- 1} r _ {\pi} = \mathrm {P} _ {\pi} ^ {*} \cdot (1 - \gamma) V _ {\gamma , \mathrm {P}} ^ {\pi}. \tag {26}
+$$
+
+Here, the last equation is from (23).
+
Since the chain induced by $\pi$ under $\mathsf{P}$ is a unichain, every row of $\mathsf{P}_{\pi}^{*}$ equals its unique stationary distribution; hence $g_{\mathsf{P}}^{\pi}(s)$ is constant in $s$ and can be bounded as:
+
+$$
+\min _ {s} (1 - \gamma) V _ {\gamma , \mathrm {P}} ^ {\pi} (s) \leq g _ {\mathrm {P}} ^ {\pi} \leq \max _ {s} (1 - \gamma) V _ {\gamma , \mathrm {P}} ^ {\pi} (s). \tag {27}
+$$
+
+Taking $\min_{\mathsf{P}\in \mathcal{P}}$ on both sides of inequality (27) implies that
+
+$$
+\min _ {\mathsf {P} \in \mathcal {P}} \min _ {s} (1 - \gamma) V _ {\gamma , \mathsf {P}} ^ {\pi} (s) \leq \min _ {\mathsf {P} \in \mathcal {P}} g _ {\mathsf {P}} ^ {\pi} = g _ {\mathcal {P}} ^ {\pi}. \tag {28}
+$$
+
By interchanging the order of the two min operators, we have that
+
+$$
\min _ {s} (1 - \gamma) \min _ {\mathsf {P} \in \mathcal {P}} V _ {\gamma , \mathsf {P}} ^ {\pi} (s) = \min _ {s} (1 - \gamma) V _ {\gamma , \mathcal {P}} ^ {\pi} (s) \leq g _ {\mathcal {P}} ^ {\pi}. \tag {29}
+$$
+
+On the other hand, we have
+
+$$
+g _ {\mathcal {P}} ^ {\pi} = \min _ {\mathrm {P} \in \mathcal {P}} g _ {\mathrm {P}} ^ {\pi} = \min _ {\mathrm {P} \in \mathcal {P}} \left[ \mathrm {P} _ {\pi} ^ {*} \cdot (1 - \gamma) V _ {\gamma , \mathrm {P}} ^ {\pi} \right]. \tag {30}
+$$
+
We denote by $Q_{\gamma} \in \mathcal{P}$ the worst-case transition kernel attaining $V_{\gamma, \mathcal{P}}^{\pi}$ , i.e., $V_{\gamma, Q_{\gamma}}^{\pi} = V_{\gamma, \mathcal{P}}^{\pi}$ . Then,
+
+$$
+\begin{array}{l} g _ {\mathcal {P}} ^ {\pi} \leq Q _ {\gamma} ^ {*} (1 - \gamma) V _ {\gamma , Q _ {\gamma}} ^ {\pi} (31) \\ = Q _ {\gamma} ^ {*} (1 - \gamma) V _ {\gamma , \mathcal {P}} ^ {\pi} (32) \\ \leq \max _ {s} (1 - \gamma) V _ {\gamma , \mathcal {P}} ^ {\pi} (s), (33) \\ \end{array}
+$$
+
where (31) is from (30), (32) is from the definition of $Q_{\gamma}$ , and (33) is because each row of $Q_{\gamma}^{*}$ is a probability distribution, so every entry of $Q_{\gamma}^{*}(1 - \gamma)V_{\gamma, Q_{\gamma}}^{\pi}$ is at most $\max_s(1 - \gamma)V_{\gamma, Q_{\gamma}}^{\pi}(s)$ .
+
+Combining (29) and (33), we get
+
+$$
+\min _ {s} (1 - \gamma) V _ {\gamma , \mathcal {P}} ^ {\pi} (s) \leq g _ {\mathcal {P}} ^ {\pi} \leq \max _ {s} (1 - \gamma) V _ {\gamma , \mathcal {P}} ^ {\pi} (s). \tag {34}
+$$
+
+This implies that
+
+$$
+\left\| g _ {\mathcal {P}} ^ {\pi} - (1 - \gamma) V _ {\gamma , \mathcal {P}} ^ {\pi} \right\| _ {\infty} \leq \mathbf {S p} ((1 - \gamma) V _ {\gamma , \mathcal {P}} ^ {\pi}), \tag {35}
+$$
+
+which completes the proof.
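
For a singleton uncertainty set, Lemma 11.1 reduces to the classical relation between average and discounted rewards, which can be checked directly: $(1-\gamma)V_{\gamma,\mathsf{P}}^{\pi}$ always sandwiches $g_{\mathsf{P}}^{\pi}$ as in (27), and the gap shrinks as $\gamma \to 1$. A minimal numerical sketch (our illustration, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(2)
S = 5

M = rng.random((S, S))
P = M / M.sum(axis=1, keepdims=True)        # induced chain P_pi
r = rng.random(S)                           # induced reward r_pi

eta = np.linalg.matrix_power(P, 10_000)[0]  # stationary distribution
g = float(eta @ r)                          # average reward, constant in s

errs, sandwiched = [], []
for gamma in (0.9, 0.99, 0.999):
    V = np.linalg.solve(np.eye(S) - gamma * P, r)   # discounted value, (23)
    w = (1 - gamma) * V
    sandwiched.append(w.min() - 1e-9 <= g <= w.max() + 1e-9)  # sandwich (27)
    errs.append(float(np.abs(g - w).max()))
print(sandwiched, [round(e, 5) for e in errs])      # error shrinks with gamma
```

The error $\|g - (1-\gamma)V\|_\infty$ is bounded by the span $\mathbf{Sp}((1-\gamma)V)$, matching (35).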
+
Lemma 11.2. Under Assumption 3.1, for any $\epsilon \in [0,\mathcal{H})$ , if we set $\gamma := 1 - \frac{\epsilon}{\mathcal{H}}$ , where $\mathcal{H} = \max_{\mathsf{P}\in \mathcal{P}}\mathbf{Sp}(h_{\mathsf{P}}^{\pi^{*}})$ , then
+
+$$
+\mathbf {S p} ((1 - \gamma) V _ {\gamma , \mathcal {P}} ^ {\pi_ {\gamma}}) \leq \epsilon .
+$$
+
+Proof. Since the MDP is assumed to be a unichain, both $g_{\mathsf{P}}^{\pi^{*}}$ and $g_{\mathcal{P}}^{\pi^{*}}$ are constant. Moreover, $g_{\mathcal{P}}^{\pi^{*}}$ and $h_{\mathcal{P}}^{\pi^{*}}$ satisfy the robust Bellman optimality equation (Wang et al., 2023e):
+
+$$
+\left(g _ {\mathcal {P}} ^ {\pi^ {*}} + h _ {\mathcal {P}} ^ {\pi^ {*}}\right) (s) = \max _ {a} \left\{r (s, a) + \sigma_ {\mathcal {P} _ {s} ^ {a}} \left(h _ {\mathcal {P}} ^ {\pi^ {*}}\right) \right\}, \tag {36}
+$$
+
+where $\sigma_{\mathcal{P}_s^a}(V) \triangleq \min_{p \in \mathcal{P}_s^a} p^\top V$ .
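
For intuition, the support function admits a simple greedy solution when the uncertainty set is a TV ball. The sketch below is our illustration, assuming the convention $\mathcal{P}_s^a = \{p : \frac{1}{2}\|p - \hat{p}\|_1 \leq R\}$ (the paper's exact radius convention may differ): the minimizer moves up to $R$ probability mass from the highest-value states onto the lowest-value state.

```python
import numpy as np

def sigma_tv(p_hat, v, R):
    """min p^T v over {p in the simplex : 0.5 * ||p - p_hat||_1 <= R},
    computed greedily: move up to R mass from the highest-value states
    onto the lowest-value state."""
    p = p_hat.astype(float).copy()
    j = int(np.argmin(v))                   # destination: lowest value
    budget = R
    for i in np.argsort(v)[::-1]:           # sources: largest value first
        if budget <= 0 or i == j:
            continue
        t = min(p[i], budget)               # cannot remove more than p[i]
        p[i] -= t
        p[j] += t
        budget -= t
    return float(p @ v), p

rng = np.random.default_rng(3)
p_hat = rng.dirichlet(np.ones(6))
v = rng.random(6)
val, p_worst = sigma_tv(p_hat, v, 0.2)
print(val <= p_hat @ v)                     # worst case never exceeds nominal
```

This greedy construction solves the underlying linear program exactly, which is why robust value iteration with TV balls avoids a generic LP solver in the inner loop.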
+
+It is also known that for any $\gamma \in [0,1)$ , the optimal robust discounted value function $V_{\gamma ,\mathcal{P}}^{\pi_{\gamma}}$ satisfies the discounted Bellman equation:
+
+$$
+V _ {\gamma , \mathcal {P}} ^ {\pi_ {\gamma}} (s) = \max _ {a} \left\{r (s, a) + \gamma \sigma_ {\mathcal {P} _ {s} ^ {a}} \left(V _ {\gamma , \mathcal {P}} ^ {\pi_ {\gamma}}\right) \right\}. \tag {37}
+$$
+
+Next, we aim to rewrite the robust discounted Bellman equation to obtain a form similar to the Bellman equation for the average reward setting. First, we define
+
+$$
+h _ {\gamma , \mathcal {P}} ^ {\pi_ {\gamma}, \pi^ {*}} = V _ {\gamma , \mathcal {P}} ^ {\pi_ {\gamma}} - \frac {1}{1 - \gamma} g _ {\mathcal {P}} ^ {\pi^ {*}}. \tag {38}
+$$
+
+Note the fact that $h_{\gamma, \mathcal{P}}^{\pi_{\gamma}, \pi^{*}}$ and $V_{\gamma, \mathcal{P}}^{\pi_{\gamma}}$ have the same span because $g_{\mathcal{P}}^{\pi^{*}}$ is a constant:
+
+$$
\mathbf {Sp} \left(h _ {\gamma , \mathcal {P}} ^ {\pi_ {\gamma}, \pi^ {*}}\right) = \mathbf {Sp} \left(V _ {\gamma , \mathcal {P}} ^ {\pi_ {\gamma}}\right). \tag {39}
+$$
+
+Now, we substitute (38) into (37) to obtain an equation similar to (36):
+
+$$
+\begin{array}{l} V _ {\gamma , \mathcal {P}} ^ {\pi_ {\gamma}} (s) = \left(h _ {\gamma , \mathcal {P}} ^ {\pi_ {\gamma}, \pi^ {*}} + \frac {1}{1 - \gamma} g _ {\mathcal {P}} ^ {\pi^ {*}}\right) (s) = \max _ {a} \left\{r (s, a) + \gamma \sigma_ {\mathcal {P} _ {s} ^ {a}} \left(h _ {\gamma , \mathcal {P}} ^ {\pi_ {\gamma}, \pi^ {*}} + \frac {1}{1 - \gamma} g _ {\mathcal {P}} ^ {\pi^ {*}}\right) \right\} \\ = \max _ {a} \left\{r (s, a) + \gamma \sigma_ {\mathcal {P} _ {s} ^ {a}} \left(h _ {\gamma , \mathcal {P}} ^ {\pi_ {\gamma}, \pi^ {*}}\right) + \frac {\gamma}{1 - \gamma} g _ {\mathcal {P}} ^ {\pi^ {*}} \right\}, \tag {40} \\ \end{array}
+$$
+
where (40) holds because $\frac{\gamma}{1 - \gamma} g_{\mathcal{P}}^{\pi^{*}}$ is a constant and can be taken out of the support function. The above equation can be written as
+
+$$
+\left(h _ {\gamma , \mathcal {P}} ^ {\pi_ {\gamma}, \pi^ {*}} + \frac {1}{1 - \gamma} g _ {\mathcal {P}} ^ {\pi^ {*}}\right) (s) = \max _ {a} \left\{r (s, a) + \gamma \sigma_ {\mathcal {P} _ {s} ^ {a}} \left(h _ {\gamma , \mathcal {P}} ^ {\pi_ {\gamma}, \pi^ {*}}\right) \right\} + \frac {\gamma}{1 - \gamma} g _ {\mathcal {P}} ^ {\pi^ {*}}. \tag {41}
+$$
+
This implies (subtracting $\frac{\gamma}{1-\gamma} g_{\mathcal{P}}^{\pi^{*}}$ from both sides and using $\frac{1}{1-\gamma} - \frac{\gamma}{1-\gamma} = 1$) that
+
+$$
+\left(h _ {\gamma , \mathcal {P}} ^ {\pi_ {\gamma}, \pi^ {*}} + g _ {\mathcal {P}} ^ {\pi^ {*}}\right) (s) = \max _ {a} \left\{r (s, a) + \gamma \sigma_ {\mathcal {P} _ {s} ^ {a}} \left(h _ {\gamma , \mathcal {P}} ^ {\pi_ {\gamma}, \pi^ {*}}\right) \right\}. \tag {42}
+$$
+
Combining (36) and (42), for all $s\in S$ we have
+
+$$
\begin{array}{l} \left| h _ {\mathcal {P}} ^ {\pi^ {*}} (s) - h _ {\gamma , \mathcal {P}} ^ {\pi_ {\gamma}, \pi^ {*}} (s) \right| = \left| \max _ {a} \left\{r (s, a) + \sigma_ {\mathcal {P} _ {s} ^ {a}} \left(h _ {\mathcal {P}} ^ {\pi^ {*}}\right) \right\} - \max _ {a} \left\{r (s, a) + \gamma \sigma_ {\mathcal {P} _ {s} ^ {a}} \left(h _ {\gamma , \mathcal {P}} ^ {\pi_ {\gamma}, \pi^ {*}}\right) \right\} \right| \tag {43} \\ \leq \max _ {a} \left| \sigma_ {\mathcal {P} _ {s} ^ {a}} \left(h _ {\mathcal {P}} ^ {\pi^ {*}}\right) - \gamma \sigma_ {\mathcal {P} _ {s} ^ {a}} \left(h _ {\gamma , \mathcal {P}} ^ {\pi_ {\gamma}, \pi^ {*}}\right) \right| \tag {44} \\ = \max _ {a} \left| (1 - \gamma) \sigma_ {\mathcal {P} _ {s} ^ {a}} \left(h _ {\mathcal {P}} ^ {\pi^ {*}}\right) + \gamma \sigma_ {\mathcal {P} _ {s} ^ {a}} \left(h _ {\mathcal {P}} ^ {\pi^ {*}}\right) - \gamma \sigma_ {\mathcal {P} _ {s} ^ {a}} \left(h _ {\gamma , \mathcal {P}} ^ {\pi_ {\gamma}, \pi^ {*}}\right) \right| \tag {45} \\ \leq \max _ {a} \left| \gamma \sigma_ {\mathcal {P} _ {s} ^ {a}} \left(h _ {\mathcal {P}} ^ {\pi^ {*}}\right) - \gamma \sigma_ {\mathcal {P} _ {s} ^ {a}} \left(h _ {\gamma , \mathcal {P}} ^ {\pi_ {\gamma}, \pi^ {*}}\right) \right| + \max _ {a} \left| (1 - \gamma) \sigma_ {\mathcal {P} _ {s} ^ {a}} \left(h _ {\mathcal {P}} ^ {\pi^ {*}}\right) \right|. \tag {46} \\ \end{array}
+$$
+
Since $|\sigma_{\mathcal{P}_s^a}(V) - \sigma_{\mathcal{P}_s^a}(W)| \leq \| V - W\|_\infty$ and $|\sigma_{\mathcal{P}_s^a}(V)| \leq \| V\|_\infty$ , we have:
+
+$$
\left| h _ {\mathcal {P}} ^ {\pi^ {*}} (s) - h _ {\gamma , \mathcal {P}} ^ {\pi_ {\gamma}, \pi^ {*}} (s) \right| \leq \gamma \| h _ {\mathcal {P}} ^ {\pi^ {*}} - h _ {\gamma , \mathcal {P}} ^ {\pi_ {\gamma}, \pi^ {*}} \| _ {\infty} + (1 - \gamma) \| h _ {\mathcal {P}} ^ {\pi^ {*}} \| _ {\infty}. \tag {47}
+$$
+
+Thus, it follows that,
+
+$$
+\left\| h _ {\mathcal {P}} ^ {\pi^ {*}} - h _ {\gamma , \mathcal {P}} ^ {\pi_ {\gamma}, \pi^ {*}} \right\| _ {\infty} \leq \left\| h _ {\mathcal {P}} ^ {\pi^ {*}} \right\| _ {\infty}. \tag {48}
+$$
+
+Now, we combine (39) and (48):
+
+$$
+\mathbf {S p} \left(V _ {\gamma , \mathcal {P}} ^ {\pi_ {\gamma}}\right) = \mathbf {S p} \left(h _ {\gamma , \mathcal {P}} ^ {\pi_ {\gamma}, \pi^ {*}}\right) \leq 2 \| h _ {\gamma , \mathcal {P}} ^ {\pi_ {\gamma}, \pi^ {*}} \| _ {\infty} \leq 4 \| h _ {\mathcal {P}} ^ {\pi^ {*}} \| _ {\infty}. \tag {49}
+$$
+
From Theorem 3.1 of (Wang et al., 2023e), for any policy $\pi$ , there exists a transition kernel $\mathsf{P}_V \in \mathcal{P}$ such that $h_{\mathcal{P}}^{\pi} = h_{\mathsf{P}_V}^{\pi} + ce$ for some $c \in \mathbb{R}$ , where $e$ denotes the all-ones vector $(1,1,\dots,1) \in \mathbb{R}^{|S|}$ . Hence, we have that
+
+$$
+\mathbf {S p} \left(V _ {\gamma , \mathcal {P}} ^ {\pi_ {\gamma}}\right) \leq 4 \| h _ {\mathcal {P}} ^ {\pi^ {*}} \| _ {\infty} = 4 \| h _ {\mathcal {P} _ {V}} ^ {\pi^ {*}} \| _ {\infty}, \tag {50}
+$$
+
+and
+
+$$
\mathbf {S p} \left(V _ {\gamma , \mathcal {P}} ^ {\pi_ {\gamma}}\right) \leq 4 \| h _ {\mathsf {P} _ {V}} ^ {\pi^ {*}} \| _ {\infty} \leq 4 \mathbf {S p} \left(h _ {\mathsf {P} _ {V}} ^ {\pi^ {*}}\right) \leq 4 \mathcal {H}, \tag {51}
+$$
+
+which completes the proof.
+
Combining the above two results, we directly obtain the following corollary.
+
+Corollary 11.3. Under Assumption 3.1, for any $\gamma \in (0,1)$ and the policy $\pi_{\gamma}$ , it holds that
+
+$$
+\| \frac {g _ {\mathcal {P}} ^ {\pi_ {\gamma}}}{1 - \gamma} - V _ {\gamma , \mathcal {P}} ^ {\pi_ {\gamma}} \| _ {\infty} \leq \mathcal {H}.
+$$
+
Lemma 11.4. Under Assumption 3.1, for any $\epsilon \in [0,\mathcal{H})$ , if we set $\gamma \coloneqq 1 - \frac{\epsilon}{\mathcal{H}}$ , then $\mathbf{Sp}((1 - \gamma)V_{\gamma ,\mathcal{P}}^{\pi^*})\leq 4\epsilon$ .
+
Proof. First, we use the definition of the finite-horizon reward function $V_{T,\mathsf{P}}^{\pi}(s) \triangleq \mathbb{E}_{\pi,\mathsf{P}}\left[\sum_{t=0}^{T-1}r_{t}|S_{0}=s\right]$ and the alternative definition of the bias $h_{\mathsf{P}}^{\pi}(s) \triangleq \lim_{T \to \infty}\left[V_{T,\mathsf{P}}^{\pi}-Tg_{\mathsf{P}}^{\pi}\right]$ . Note that:
+
+$$
\begin{array}{l} V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} = \min _ {\mathsf {P} \in \mathcal {P}} V _ {\gamma , \mathsf {P}} ^ {\pi^ {*}} \tag {52} \\ = \min _ {\mathsf {P} \in \mathcal {P}} \left(\lim _ {T \rightarrow \infty} \sum_ {t = 0} ^ {T - 1} \gamma^ {t} \mathsf {P} _ {\pi^ {*}} ^ {t} r _ {\pi^ {*}}\right) \tag {53} \\ = \min _ {\mathsf {P} \in \mathcal {P}} \left(\lim _ {T \rightarrow \infty} V _ {T, \mathsf {P}} ^ {\pi^ {*}} - (1 - \gamma) \sum_ {t = 1} ^ {T - 1} \gamma^ {t - 1} \mathsf {P} _ {\pi^ {*}} ^ {t} V _ {T - t, \mathsf {P}} ^ {\pi^ {*}}\right). \tag {54} \\ \end{array}
+$$
+
Recall the worst-case transition kernel $Q_{\gamma} \in \mathcal{P}$ :
+
+$$
+V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} = V _ {\gamma , Q _ {\gamma}} ^ {\pi^ {*}} = \lim _ {T \rightarrow \infty} V _ {T, Q _ {\gamma}} ^ {\pi^ {*}} - (1 - \gamma) \sum_ {t = 1} ^ {T - 1} \gamma^ {t - 1} Q _ {\gamma , \pi^ {*}} ^ {t} V _ {T - t, Q _ {\gamma}} ^ {\pi^ {*}}. \tag {55}
+$$
+
+We have:
+
+$$
\mathbf {S p} \left(V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}}\right) = \mathbf {S p} \left(V _ {\gamma , Q _ {\gamma}} ^ {\pi^ {*}}\right) \leq \limsup _ {T \rightarrow \infty} \left(\mathbf {S p} \left(V _ {T, Q _ {\gamma}} ^ {\pi^ {*}}\right) + (1 - \gamma) \sum_ {t = 1} ^ {T - 1} \gamma^ {t - 1} \mathbf {S p} \left(V _ {T - t, Q _ {\gamma}} ^ {\pi^ {*}}\right)\right) \tag {56}
+$$
+
+Now the objective is to find an upper bound for $\mathbf{Sp}(V_{T,Q_{\gamma}}^{\pi^{*}})$ . From the definition of $h_{Q_{\gamma}}^{\pi}$ , we have
+
+$$
\begin{array}{l} h _ {Q _ {\gamma}} ^ {\pi} = \lim _ {t \rightarrow \infty} \left(V _ {t, Q _ {\gamma}} ^ {\pi} - t g _ {Q _ {\gamma}} ^ {\pi}\right) \tag {57} \\ = \lim _ {N \rightarrow \infty} \frac {1}{N} \sum_ {t = 1} ^ {N} \left(V _ {t, Q _ {\gamma}} ^ {\pi} - t g _ {Q _ {\gamma}} ^ {\pi}\right) \tag {58} \\ = \lim _ {N \rightarrow \infty} \frac {1}{N} \sum_ {t = T + 1} ^ {N} \left(V _ {t, Q _ {\gamma}} ^ {\pi} - t g _ {Q _ {\gamma}} ^ {\pi}\right). \tag {59} \\ \end{array}
+$$
+
+Thus, we have
+
+$$
\begin{array}{l} h _ {Q _ {\gamma}} ^ {\pi} = \lim _ {N \to \infty} \frac {1}{N} \sum_ {t = T + 1} ^ {N} \left[ \left(V _ {T, Q _ {\gamma}} ^ {\pi} - T g _ {Q _ {\gamma}} ^ {\pi}\right) + Q _ {\gamma , \pi} ^ {T} V _ {t - T, Q _ {\gamma}} ^ {\pi} - (t - T) g _ {Q _ {\gamma}} ^ {\pi} \right] \\ = \left(V _ {T, Q _ {\gamma}} ^ {\pi} - T g _ {Q _ {\gamma}} ^ {\pi}\right) + Q _ {\gamma , \pi} ^ {T} \lim _ {N \rightarrow \infty} \frac {1}{N} \sum_ {t = T + 1} ^ {N} \left(V _ {t - T, Q _ {\gamma}} ^ {\pi} - (t - T) g _ {Q _ {\gamma}} ^ {\pi}\right) \tag {60} \\ = V _ {T, Q _ {\gamma}} ^ {\pi} - T g _ {Q _ {\gamma}} ^ {\pi} + Q _ {\gamma , \pi} ^ {T} h _ {Q _ {\gamma}} ^ {\pi}. \\ \end{array}
+$$
+
+It follows that
+
+$$
+V _ {T, Q _ {\gamma}} ^ {\pi} = T g _ {Q _ {\gamma}} ^ {\pi} + h _ {Q _ {\gamma}} ^ {\pi} - Q _ {\gamma , \pi} ^ {T} h _ {Q _ {\gamma}} ^ {\pi}. \tag {61}
+$$
+
+Thus, we have
+
+$$
+\mathbf {S p} \left(V _ {T, Q _ {\gamma}} ^ {\pi}\right) \leq 2 \mathbf {S p} \left(h _ {Q _ {\gamma}} ^ {\pi}\right). \tag {62}
+$$
+
+Note that $\mathbf{Sp}(h_{\mathsf{P}}^{\pi})$ is continuous in $\mathsf{P}$ and $\mathcal{P}$ is a compact set, thus $\mathbf{Sp}(h_{\mathsf{P}}^{\pi}) \leq \mathcal{H}, \forall \mathsf{P} \in \mathcal{P}$ . Now we can combine the result from (56) and (62):
+
+$$
\begin{array}{l} \mathbf {S p} \left(V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}}\right) = \mathbf {S p} \left(V _ {\gamma , Q _ {\gamma}} ^ {\pi^ {*}}\right) \leq \limsup _ {T \rightarrow \infty} \left(\mathbf {S p} \left(V _ {T, Q _ {\gamma}} ^ {\pi^ {*}}\right) + (1 - \gamma) \sum_ {t = 1} ^ {T - 1} \gamma^ {t - 1} \mathbf {S p} \left(V _ {T - t, Q _ {\gamma}} ^ {\pi^ {*}}\right)\right) \tag {63} \\ \leq 2 \mathbf {S p} \left(h _ {Q _ {\gamma}} ^ {\pi^ {*}}\right) \left(1 + (1 - \gamma) \sum_ {t = 1} ^ {\infty} \gamma^ {t - 1}\right) \tag {64} \\ = 4 \mathbf {S p} \left(h _ {Q _ {\gamma}} ^ {\pi^ {*}}\right). \tag {65} \\ \end{array}
+$$
+
+Thus,
+
+$$
+\mathbf {S p} \left(V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}}\right) \leq 4 \mathcal {H}. \tag {66}
+$$
+
+
+
As a direct corollary, we obtain the following.
+
+Corollary 11.5. Under Assumption 3.1, for any $\gamma \in (0,1)$ , it holds that
+
+$$
+\left\| \frac {g _ {\mathcal {P}} ^ {\pi^ {*}}}{1 - \gamma} - V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} \right\| _ {\infty} \leq 4 \mathcal {H}.
+$$
+
+Proof. From Lemma 11.1 and Equation (66), it holds that
+
+$$
+\left\| g _ {\mathcal {P}} ^ {\pi^ {*}} - (1 - \gamma) V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} \right\| _ {\infty} \leq \mathbf {S p} ((1 - \gamma) V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}}) \leq 4 (1 - \gamma) \mathcal {H},
+$$
+
Dividing both sides by $1 - \gamma$ completes the proof.
+
+
+
+We are now ready to prove the main theorem.
+
Theorem 11.6. (Restatement of Theorem 3.4) Under Assumption 3.1, for any $\epsilon \in (0,\mathcal{H}]$ , if we set $\gamma \coloneqq 1 - \frac{\epsilon}{\mathcal{H}}$ , then any $\epsilon_{\gamma}$ -optimal policy $\hat{\pi}_{\gamma}$ for the robust $\gamma$ -DMDP is also an $\mathcal{O}(\epsilon)$ -optimal policy for the robust AMDP, i.e.,
+
+$$
g _ {\mathcal {P}} ^ {\pi^ {*}} - g _ {\mathcal {P}} ^ {\hat {\pi} _ {\gamma}} \leq \left(8 + \frac {5 \epsilon_ {\gamma}}{\mathcal {H}}\right) \epsilon . \tag {67}
+$$
+
+Specifically, an $\mathcal{O}(\mathcal{H})$ -optimal robust policy for the robust DMDP is an $\mathcal{O}(\epsilon)$ -optimal robust policy under the average reward.
+
+Proof. Under Assumption 3.1, for any $\epsilon \in (0,\mathcal{H}]$ and any $\delta \in (0,1]$ , we consider $\gamma = 1 - \frac{\epsilon}{\mathcal{H}}$ . Suppose $\pi_{\gamma}$ is an optimal policy of the robust DMDP and $\hat{\pi}_{\gamma}$ is an $\epsilon_{\gamma}$ -optimal policy in the robust DMDP. Then,
+
+$$
+\left\| \left(1 - \gamma\right) V _ {\gamma , \mathcal {P}} ^ {\pi_ {\gamma}} - (1 - \gamma) V _ {\gamma , \mathcal {P}} ^ {\hat {\pi} _ {\gamma}} \right\| _ {\infty} \leq (1 - \gamma) \epsilon_ {\gamma}. \tag {68}
+$$
+
+Considering (35), (51), and (66), we have that:
+
+$$
+\begin{array}{l} \mathbf {S p} ((1 - \gamma) V _ {\gamma , \mathcal {P}} ^ {\hat {\pi} _ {\gamma}}) \leq \mathbf {S p} ((1 - \gamma) V _ {\gamma , \mathcal {P}} ^ {\pi_ {\gamma}}) + 2 \| (1 - \gamma) V _ {\gamma , \mathcal {P}} ^ {\pi_ {\gamma}} - (1 - \gamma) V _ {\gamma , \mathcal {P}} ^ {\hat {\pi} _ {\gamma}} \| _ {\infty} \\ \leq 4 \epsilon + 2 (1 - \gamma) \epsilon_ {\gamma}. \tag {69} \\ \end{array}
+$$
+
Moreover, by the optimality of $\pi_{\gamma}$ ,
+
+$$
+V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} \leq V _ {\gamma , \mathcal {P}} ^ {\pi_ {\gamma}}, \tag {70}
+$$
+
+recalling that $\pi^{*}$ is an optimal policy of the robust AMDP.
+
Combining Lemma 11.1, (68), (69), and (70), we have:
+
+$$
+g _ {\mathcal {P}} ^ {\pi^ {*}} \leq (1 - \gamma) V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} + 4 \epsilon + 2 (1 - \gamma) \epsilon_ {\gamma} \tag {71}
+$$
+
+$$
+\leq (1 - \gamma) V _ {\gamma , \mathcal {P}} ^ {\pi_ {\gamma}} + 4 \epsilon + 2 (1 - \gamma) \epsilon_ {\gamma} \tag {72}
+$$
+
+$$
+\leq (1 - \gamma) V _ {\gamma , \mathcal {P}} ^ {\hat {\pi} _ {\gamma}} + 4 \epsilon + 3 (1 - \gamma) \epsilon_ {\gamma} \tag {73}
+$$
+
+$$
+\leq g _ {\mathcal {P}} ^ {\hat {\pi} _ {\gamma}} + 8 \epsilon + 5 (1 - \gamma) \epsilon_ {\gamma}, \tag {74}
+$$
+
+which completes the proof.
+
+
+
+# 12. Proof of Theorem 4.4 Part 1
+
We first note that the result for the case $R \geq \frac{1}{\mathcal{H}}$ can be obtained directly from the existing result in (Shi et al., 2023). Specifically, it is shown in (Shi et al., 2023) that learning an $\epsilon_{\gamma}$ -optimal policy for a $\gamma$ -discounted robust MDP requires a sample size of
+
+$$
+\mathcal {N} (\mathcal {S}, \mathcal {A}, \hat {\mathcal {P}}, \gamma , \epsilon_ {\gamma}) = \frac {C S A \log \frac {c S A}{\delta}}{(1 - \gamma) ^ {2} \max \{R , 1 - \gamma \} \epsilon_ {\gamma} ^ {2}}. \tag {75}
+$$
+
Thus, by setting $\gamma = 1 - \frac{\epsilon}{\mathcal{H}}$ and $\epsilon_{\gamma} = \mathcal{H}$ as in Theorem 4.1, we have that the complexity of learning an $\epsilon$ -optimal policy for the robust average-reward MDP is
+
+$$
+\frac {C S A \log \frac {c S A}{\delta}}{(1 - \gamma) ^ {2} \max \{R , 1 - \gamma \} \epsilon_ {\gamma} ^ {2}} = \frac {C S A \log \frac {c S A}{\delta}}{\epsilon^ {2} R} \leq \frac {C S A \mathcal {H} \log \frac {c S A}{\delta}}{\epsilon^ {2}}. \tag {76}
+$$
+
We hence mainly focus on the case $R \leq \frac{1}{\mathcal{H}}$ . In the following proof, we write $\epsilon_{\gamma}$ simply as $\epsilon$ .
+
We first introduce some notation. Let $\hat{\mathsf{P}}$ and $\hat{\mathcal{P}}$ be the estimated nominal kernel and the estimated uncertainty set. For any policy $\pi$ , let $\tilde{V}_{\gamma,\hat{\mathcal{P}}}^{\pi}$ , $V_{\gamma,\hat{\mathcal{P}}}^{\pi}$ , and $V_{\gamma,\mathcal{P}}^{\pi}$ be the robust value functions w.r.t. the perturbed reward and estimated uncertainty set, the unperturbed reward and estimated uncertainty set, and the unperturbed reward and true uncertainty set, respectively. The optimal robust policy w.r.t. $\tilde{V}_{\hat{\mathcal{P}}}^{\pi}$ is denoted by $\tilde{\pi}^*$ , and the corresponding optimal robust value functions are denoted by $\tilde{V}_{\gamma,\hat{\mathcal{P}}}^*$ and $\tilde{Q}_{\gamma,\hat{\mathcal{P}}}^*$ .
+
We consider a general policy $\pi$ , and we denote the worst-case kernels of any vector $V$ under $\hat{\mathcal{P}}$ and $\mathcal{P}$ by $\hat{\mathsf{P}}_{w}^{\pi, V}$ and $\mathsf{P}_{w}^{\pi, V}$ , respectively. Then, it holds that
+
+$$
+\begin{array}{l} V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi} - V _ {\gamma , \mathcal {P}} ^ {\pi} = r _ {\pi} + \gamma \hat {\mathsf {P}} _ {w} ^ {\pi , \hat {V}} V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi} - \left(r _ {\pi} + \gamma \mathsf {P} _ {w} ^ {\pi , V} V _ {\gamma , \mathcal {P}} ^ {\pi}\right) \\ = \left(\gamma \hat {\mathsf {P}} _ {w} ^ {\pi , \widehat {V}} V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi} - \gamma \mathsf {P} _ {w} ^ {\pi , \widehat {V}} V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi}\right) + \left(\gamma \mathsf {P} _ {w} ^ {\pi , \widehat {V}} V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi} - \gamma \mathsf {P} _ {w} ^ {\pi , V} V _ {\gamma , \mathcal {P}} ^ {\pi}\right) \\ \stackrel {\mathrm {(i)}} {\leq} \gamma \Big (\mathsf {P} _ {w} ^ {\pi , V} V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi} - \mathsf {P} _ {w} ^ {\pi , V} V _ {\gamma , \mathcal {P}} ^ {\pi} \Big) + \Big (\gamma \hat {\mathsf {P}} _ {w} ^ {\pi , \widehat {V}} V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi} - \gamma \mathsf {P} _ {w} ^ {\pi , \widehat {V}} V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi} \Big), \\ \end{array}
+$$
+
+where (i) holds by observing that
+
+$$
+\mathsf {P} _ {w} ^ {\pi , \widehat {V}} V _ {\gamma , \widehat {\mathcal {P}}} ^ {\pi} \leq \mathsf {P} _ {w} ^ {\pi , V} V _ {\gamma , \widehat {\mathcal {P}}} ^ {\pi}
+$$
+
+due to the worst-case kernel. Rearranging terms leads to
+
+$$
+V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi} - V _ {\gamma , \mathcal {P}} ^ {\pi} \leq \gamma (I - \gamma \mathrm {P} _ {w} ^ {\pi , V}) ^ {- 1} (\hat {\mathrm {P}} _ {w} ^ {\pi , \widehat {V}} V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi} - \mathrm {P} _ {w} ^ {\pi , \widehat {V}} V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi}). \tag {77}
+$$
+
+Similarly, we can also deduce
+
+$$
+\begin{array}{l} V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi} - V _ {\gamma , \mathcal {P}} ^ {\pi} = r _ {\pi} + \gamma \hat {\mathsf {P}} _ {w} ^ {\pi , \hat {V}} V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi} - \left(r _ {\pi} + \gamma \mathsf {P} _ {w} ^ {\pi , V} V _ {\gamma , \mathcal {P}} ^ {\pi}\right) \\ = \left(\gamma \hat {\mathrm {P}} _ {w} ^ {\pi , \widehat {V}} V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi} - \gamma \mathrm {P} _ {w} ^ {\pi , \widehat {V}} V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi}\right) + \left(\gamma \mathrm {P} _ {w} ^ {\pi , \widehat {V}} V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi} - \gamma \mathrm {P} _ {w} ^ {\pi , V} V _ {\gamma , \mathcal {P}} ^ {\pi}\right) \\ \geq \gamma \left(P _ {w} ^ {\pi , \widehat {V}} V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi} - P _ {w} ^ {\pi , \widehat {V}} V _ {\gamma , \mathcal {P}} ^ {\pi}\right) + \left(\gamma \hat {P} _ {w} ^ {\pi , \widehat {V}} V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi} - \gamma P _ {w} ^ {\pi , \widehat {V}} V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi}\right) \\ \geq \gamma \left(I - \gamma \mathrm {P} _ {w} ^ {\pi , \widehat {V}}\right) ^ {- 1} \left(\hat {\mathrm {P}} _ {w} ^ {\pi , \widehat {V}} V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi} - \mathrm {P} _ {w} ^ {\pi , \widehat {V}} V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi}\right). \tag {78} \\ \end{array}
+$$
+
+Combining (77) and (78), we arrive at
+
+$$
\begin{array}{l} \left\| V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi} - V _ {\gamma , \mathcal {P}} ^ {\pi} \right\| _ {\infty} \leq \gamma \max \left\{\left\| \left(I - \gamma \mathsf {P} _ {w} ^ {\pi , V}\right) ^ {- 1} \left(\hat {\mathsf {P}} _ {w} ^ {\pi , \widehat {V}} V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi} - \mathsf {P} _ {w} ^ {\pi , \widehat {V}} V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi}\right) \right\| _ {\infty}, \right. \\ \left. \left\| \left(I - \gamma \mathsf {P} _ {w} ^ {\pi , \widehat {V}}\right) ^ {- 1} \left(\hat {\mathsf {P}} _ {w} ^ {\pi , \widehat {V}} V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi} - \mathsf {P} _ {w} ^ {\pi , \widehat {V}} V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi}\right) \right\| _ {\infty} \right\}. \tag {79} \\ \end{array}
+$$
+
+By decomposing the error in a symmetric way, we can similarly obtain
+
+$$
\begin{array}{l} \left\| V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi} - V _ {\gamma , \mathcal {P}} ^ {\pi} \right\| _ {\infty} \leq \gamma \max \left\{\left\| \left(I - \gamma \hat {\mathsf {P}} _ {w} ^ {\pi , \hat {V}}\right) ^ {- 1} \left(\hat {\mathsf {P}} _ {w} ^ {\pi , V} V _ {\gamma , \mathcal {P}} ^ {\pi} - \mathsf {P} _ {w} ^ {\pi , V} V _ {\gamma , \mathcal {P}} ^ {\pi}\right) \right\| _ {\infty}, \right. \\ \left. \left\| \left(I - \gamma \hat {\mathsf {P}} _ {w} ^ {\pi , V}\right) ^ {- 1} \left(\hat {\mathsf {P}} _ {w} ^ {\pi , V} V _ {\gamma , \mathcal {P}} ^ {\pi} - \mathsf {P} _ {w} ^ {\pi , V} V _ {\gamma , \mathcal {P}} ^ {\pi}\right) \right\| _ {\infty} \right\}. \tag {80} \\ \end{array}
+$$
+
## 12.1. Part A: $\| \tilde{V}_{\gamma ,\hat{\mathcal{P}}}^{\pi^{*}} - V_{\gamma ,\mathcal{P}}^{\pi^{*}}\|$
+
+Consider $|\tilde{V}_{\gamma,\hat{\mathcal{P}}}^{\pi^{*}}(s) - V_{\gamma,\hat{\mathcal{P}}}^{\pi^{*}}(s)|$ for some state $s$ . If $\tilde{V}_{\gamma,\hat{\mathcal{P}}}^{\pi^{*}}(s) - V_{\gamma,\hat{\mathcal{P}}}^{\pi^{*}}(s) > 0$ , then
+
+$$
+\begin{array}{l} | \tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {\pi^ {*}} (s) - V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi^ {*}} (s) | = \tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {\pi^ {*}} (s) - V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi^ {*}} (s) \\ = \left(I - \gamma \tilde {\mathrm {P}} _ {w} ^ {\pi^ {*}}\right) ^ {- 1} \tilde {r} ^ {\pi^ {*}} - \left(I - \gamma \mathrm {P} _ {w} ^ {\pi^ {*}}\right) ^ {- 1} r ^ {\pi^ {*}} \\ \leq \left(I - \gamma \mathsf {P} _ {w} ^ {\pi^ {*}}\right) ^ {- 1} \tilde {r} ^ {\pi^ {*}} - \left(I - \gamma \mathsf {P} _ {w} ^ {\pi^ {*}}\right) ^ {- 1} r ^ {\pi^ {*}} \\ \leq \frac {\epsilon}{6}, \tag {81} \\ \end{array}
+$$
+
where $\tilde{\mathsf{P}}_w^{\pi^*}$ and $\mathsf{P}_w^{\pi^*}$ are the corresponding worst-case transition kernels.
+
+A similar result can also be obtained for the other case, hence it holds that
+
+$$
\left\| \tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {\pi^ {*}} - V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi^ {*}} \right\| \leq \frac {\epsilon}{6}. \tag {82}
+$$
+
+We thus have that
+
+$$
+\left\| \tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {\pi^ {*}} - V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} \right\| \leq \left\| \tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {\pi^ {*}} - V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi^ {*}} \right\| + \left\| V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi^ {*}} - V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} \right\| \leq \frac {\epsilon}{6} + \left\| V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi^ {*}} - V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} \right\|, \tag {83}
+$$
+
and it suffices to study the second term $\| V_{\gamma, \hat{\mathcal{P}}}^{\pi^{*}} - V_{\gamma, \mathcal{P}}^{\pi^{*}}\|$ . Applying (80) further implies that
+
+$$
\begin{array}{l} \left\| V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi^ {*}} - V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} \right\| \leq \gamma \max \left\{\left\| \left(I - \gamma \hat {\mathsf {P}} _ {w} ^ {\pi^ {*}, \widehat {V}}\right) ^ {- 1} \left(\hat {\mathsf {P}} _ {w} ^ {\pi^ {*}, V} V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} - \mathsf {P} _ {w} ^ {\pi^ {*}, V} V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}}\right) \right\| _ {\infty}, \right. \\ \left. \left\| \left(I - \gamma \hat {\mathsf {P}} _ {w} ^ {\pi^ {*}, V}\right) ^ {- 1} \left(\hat {\mathsf {P}} _ {w} ^ {\pi^ {*}, V} V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} - \mathsf {P} _ {w} ^ {\pi^ {*}, V} V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}}\right) \right\| _ {\infty} \right\}. \tag {84} \\ \end{array}
+$$
+
We first consider the case where the maximum in (84) is attained by the first term, i.e., it equals $\left\| \left(I - \gamma \hat{\mathsf{P}}_{w}^{\pi^{*},\widehat{V}}\right)^{-1}\left(\hat{\mathsf{P}}_{w}^{\pi^{*},V}V_{\gamma ,\mathcal{P}}^{\pi^{*}} - \mathsf{P}_{w}^{\pi^{*},V}V_{\gamma ,\mathcal{P}}^{\pi^{*}}\right)\right\|_{\infty}$ .
+
+We first apply the following lemma.
+
+Lemma 12.1. (Lemma 11 of (Shi et al., 2023)) Consider any $\delta \in (0,1)$ . Setting $N \geq \log \left(\frac{18SAN}{\delta}\right)$ , with probability at least $1 - \delta$ , one has
+
+$$
+\left| \hat {\mathrm {P}} _ {w} ^ {\pi^ {*}, V} V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} - \mathrm {P} _ {w} ^ {\pi^ {*}, V} V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} \right| \leq 2 \sqrt {\frac {\log (\frac {1 8 S A N}{\delta})}{N}} \sqrt {\operatorname {V a r} _ {\mathrm {P} ^ {\pi^ {*}}} \left(V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}}\right)} + \frac {\log (\frac {1 8 S A N}{\delta})}{N (1 - \gamma)} 1. \tag {85}
+$$
+
+Thus, it holds that
+
+$$
+\begin{array}{l} \left\| \left(I - \gamma \hat {\mathsf {P}} _ {w} ^ {\pi^ {*}, \hat {V}}\right) ^ {- 1} \left(\hat {\mathsf {P}} _ {w} ^ {\pi^ {*}, V} V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} - \mathsf {P} _ {w} ^ {\pi^ {*}, V} V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}}\right) \right\| _ {\infty} \\ \leq 2 \sqrt {\frac {\log (\frac {1 8 S A N}{\delta})}{N}} \Big \| (I - \gamma \hat {\mathsf {P}} _ {w} ^ {\pi^ {*}}, \hat {V}) ^ {- 1} \sqrt {\operatorname {V a r} _ {\mathsf {P} ^ {\pi^ {*}}} (V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}})} \Big \| _ {\infty} + \frac {\log (\frac {1 8 S A N}{\delta})}{N (1 - \gamma) ^ {2}} \\ \leq \underbrace {2 \sqrt {\frac {\log (\frac {1 8 S A N}{\delta})}{N}} \left\| (I - \gamma \hat {\mathsf {P}} _ {w} ^ {\pi^ {*} , \hat {V}}) ^ {- 1} \sqrt {\operatorname {V a r} _ {\hat {\mathsf {P}} _ {w} ^ {\pi^ {*} , \hat {V}}} (V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi^ {*}})} \right\| _ {\infty}} _ {A 1} \\ + \underbrace {2 \sqrt {\frac {\log (\frac {1 8 S A N}{\delta})}{N}} \left\| (I - \gamma \hat {\mathsf {P}} _ {w} ^ {\pi^ {*}, \hat {V}}) ^ {- 1} \sqrt {\operatorname {V a r} _ {\hat {\mathsf {P}} _ {w} ^ {\pi^ {*} , \hat {V}}} (V _ {\gamma , \mathcal {P}} ^ {\pi^ {*} - V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi^ {*}}})} \right\| _ {\infty}} _ {A 2} \\ + \underbrace {2 \sqrt {\frac {\log (\frac {1 8 S A N}{\delta})}{N}} \left\| (I - \gamma \hat {\mathrm {P}} _ {w} ^ {\pi^ {*}, \hat {V}}) ^ {- 1} \sqrt {\left| \operatorname {V a r} _ {\hat {\mathrm {P}} _ {w} ^ {\pi^ {*}} , \hat {V}} (V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}}) - \operatorname {V a r} _ {\hat {\mathrm {P}} ^ {\pi^ {*}}} (V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}}) \right|} \right\| _ {\infty}} _ {A 3} \\ + \underbrace {2 \sqrt {\frac {\log \left(\frac {1 8 S A N}{\delta}\right)}{N}} \left\| \left(I - \gamma \hat {\mathrm {P}} _ {w} ^ {\pi^ {*}} , \hat {V}\right) ^ {- 1} \left(\sqrt {\operatorname {V a r} _ {\mathrm {P} ^ {\pi^ {*}}} \left(V _ {\gamma , \mathcal {P}} ^ {\pi^ 
{*}}\right)} - \sqrt {\operatorname {V a r} _ {\hat {\mathrm {P}} ^ {\pi^ {*}}} \left(V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}}\right)}\right) \right\| _ {\infty}} _ {A 4} + \frac {\log \left(\frac {1 8 S A N}{\delta}\right)}{N (1 - \gamma) ^ {2}}. \tag {86} \\ \end{array}
+$$
+
Term A1. We note that $\mathsf{Q} \triangleq \hat{\mathsf{P}}_w^{\pi^*,\hat{V}} = \arg \min_{\mathsf{P} \in \hat{\mathcal{P}}} \mathsf{P} V_{\gamma, \hat{\mathcal{P}}}^{\pi^*}$ , thus we have that
+
+$$
+V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi^ {*}} = r ^ {\pi^ {*}} + \gamma \sigma_ {\hat {\mathcal {P}}} ^ {\pi^ {*}} \left(V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi^ {*}}\right) = r ^ {\pi^ {*}} + \gamma \mathrm {Q} ^ {\pi^ {*}} V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi^ {*}}; \tag {87}
+$$
+
+On the other hand, we have that
+
+$$
+V _ {\gamma , \mathrm {Q}} ^ {\pi^ {*}} = r ^ {\pi^ {*}} + \gamma \mathrm {Q} ^ {\pi^ {*}} V _ {\gamma , \mathrm {Q}} ^ {\pi^ {*}}, \tag {88}
+$$
+
hence both $V_{\gamma, \hat{\mathcal{P}}}^{\pi^{*}}$ and $V_{\gamma, \mathsf{Q}}^{\pi^{*}}$ are fixed points of the Bellman operator w.r.t. $\mathsf{Q}$ , which is a $\gamma$ -contraction; this implies they are identical: $V_{\gamma, \hat{\mathcal{P}}}^{\pi^{*}} = V_{\gamma, \mathsf{Q}}^{\pi^{*}}$ . Thus, the term $\left(I - \gamma \hat{\mathsf{P}}_{w}^{\pi^{*}, \hat{V}}\right)^{-1} \sqrt{\operatorname{Var}_{\hat{\mathsf{P}}_{w}^{\pi^{*}, \hat{V}}}(V_{\gamma, \hat{\mathcal{P}}}^{\pi^{*}})}$ can be rewritten as
+
+$$
\left(I - \gamma \hat {\mathsf {P}} _ {w} ^ {\pi^ {*}, \hat {V}}\right) ^ {- 1} \sqrt {\operatorname {V a r} _ {\hat {\mathsf {P}} _ {w} ^ {\pi^ {*}, \hat {V}}} \left(V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi^ {*}}\right)} = \left(I - \gamma \mathsf {Q} ^ {\pi^ {*}}\right) ^ {- 1} \sqrt {\operatorname {V a r} _ {\mathsf {Q} ^ {\pi^ {*}}} \left(V _ {\gamma , \mathsf {Q}} ^ {\pi^ {*}}\right)}, \tag {89}
+$$
+
+and the term $A1$ can be rewritten as
+
+$$
+A 1 = 2 \sqrt {\frac {\log (\frac {1 8 S A N}{\delta})}{N}} \left\| (I - \gamma Q ^ {\pi^ {*}}) ^ {- 1} \sqrt {\operatorname {V a r} _ {Q ^ {\pi^ {*}}} \left(V _ {\gamma , Q} ^ {\pi^ {*}}\right)} \right\| _ {\infty}. \tag {90}
+$$
+
+We then apply the following lemma to bound $A1$ , which is the main result for the complexity improvement.
+
+Lemma 12.2. For any policy $\pi$ , if $N \geq \mathcal{O}\left(\frac{\log\frac{SA}{(1 - \gamma)\delta\epsilon}}{1 - \gamma}\right)$ , it holds with probability at least $1 - \delta$ that
+
+$$
+\left\| \left(I - \gamma \mathrm {Q} ^ {\pi^ {*}}\right) ^ {- 1} \sqrt {\operatorname {V a r} _ {\mathrm {Q} ^ {\pi^ {*}}} \left(V _ {\gamma , \mathrm {Q}} ^ {\pi^ {*}}\right)} \right\| _ {\infty} \leq \sqrt {\frac {c _ {1} \mathcal {H}}{(1 - \gamma) ^ {2}}}. \tag {91}
+$$
+
+The proof of this lemma can be derived similarly to the ones in (Zurek & Chen, 2023). For completeness, we provide its proof in Section 12.4.
+
+Thus, it holds that
+
+$$
+A 1 \leq \sqrt {\frac {2 c _ {1} \log (\frac {1 8 S A N}{\delta}) \mathcal {H}}{(1 - \gamma) ^ {2} N}}. \tag {92}
+$$
+
+Term A2. It holds that
+
+$$
+\begin{array}{l} A 2 = 2 \sqrt {\frac {\log (\frac {1 8 S A N}{\delta})}{N}} \Big \| (I - \gamma \hat {\mathsf {P}} _ {w} ^ {\pi , \hat {V}}) ^ {- 1} \sqrt {\operatorname {V a r} _ {\hat {\mathsf {P}} _ {w} ^ {\pi , \hat {V}}} (V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} - V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi^ {*}})} \Big \| _ {\infty} \\ \leq 2 \sqrt {\frac {\log (\frac {1 8 S A N}{\delta})}{N (1 - \gamma) ^ {2}}} \left\| V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} - V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi^ {*}} \right\| _ {\infty}. \tag {93} \\ \end{array}
+$$
+
+Term A3. It holds that
+
+$$
\begin{array}{l} \left(I - \gamma \hat {\mathsf {P}} _ {w} ^ {\pi , \hat {V}}\right) ^ {- 1} \sqrt {\left| \operatorname {V a r} _ {\hat {\mathsf {P}} _ {w} ^ {\pi , \hat {V}}} \left(V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}}\right) - \operatorname {V a r} _ {\hat {\mathsf {P}} ^ {\pi^ {*}}} \left(V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}}\right) \right|} \\ \leq \left(I - \gamma \hat {\mathsf {P}} _ {w} ^ {\pi , \hat {V}}\right) ^ {- 1} \sqrt {\left\| \operatorname {V a r} _ {\hat {\mathsf {P}} _ {w} ^ {\pi , \hat {V}}} \left(V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}}\right) - \operatorname {V a r} _ {\hat {\mathsf {P}}} \left(V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}}\right) \right\| _ {\infty}}\, e. \tag {94} \\ \end{array}
+$$
+
+Note that both $\hat{\mathsf{P}}_w^{\pi ,\hat{V}},\hat{\mathsf{P}}$ belong to the uncertainty set $\hat{\mathcal{P}}$ , hence $\| \hat{\mathsf{P}}_{w}^{\pi ,\hat{V}} - \hat{\mathsf{P}}\| _1\leq 2R$ , which further implies that
+
+$$
\begin{array}{l} \left| \operatorname {V a r} _ {\hat {\mathsf {P}} _ {w} ^ {\pi , \hat {V}}} \left(V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}}\right) - \operatorname {V a r} _ {\hat {\mathsf {P}}} \left(V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}}\right) \right| _ {s, a} \\ = \left| \operatorname {V a r} _ {\hat {\mathsf {P}} _ {w} ^ {\pi , \hat {V}}} \left(V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} - \frac {1}{1 - \gamma} g _ {\mathcal {P}} ^ {\pi^ {*}}\right) - \operatorname {V a r} _ {\hat {\mathsf {P}}} \left(V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} - \frac {1}{1 - \gamma} g _ {\mathcal {P}} ^ {\pi^ {*}}\right) \right| _ {s, a} \\ \leq \left\| \hat {\mathsf {P}} _ {w} ^ {\pi , \hat {V}} - \hat {\mathsf {P}} \right\| _ {1} \left\| V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} - \frac {1}{1 - \gamma} g _ {\mathcal {P}} ^ {\pi^ {*}} \right\| _ {\infty} ^ {2} \\ \leq 2 R \mathcal {H} ^ {2}, \tag {95} \\ \end{array}
+$$
+
+where the last inequality is from Lemma 11.1 and Lemma 11.2.
+
+Since $R \leq \frac{1}{\mathcal{H}}$ , it holds that
+
+$$
A 3 \leq 2 \sqrt {\frac {\log (\frac {1 8 S A N}{\delta})}{N}} \left\| \left(I - \gamma \hat {\mathsf {P}} _ {w} ^ {\pi , \hat {V}}\right) ^ {- 1} \sqrt {2 \mathcal {H}}\, e \right\| _ {\infty} \leq 2 \sqrt {\frac {\log (\frac {1 8 S A N}{\delta}) \mathcal {H}}{N (1 - \gamma) ^ {2}}}. \tag {96}
+$$
+
+Term A4. We directly apply Lemma 6 of (Panaganti & Kalathil, 2022) and Lemma 11 of (Shi et al., 2023), and it implies that
+
+$$
+A 4 \leq \frac {4 \log (\frac {1 8 S A N}{\delta})}{N (1 - \gamma) ^ {2}}. \tag {97}
+$$
+
Plugging (92), (93), (96), and (97) into (86), we have that
+
+$$
+\begin{array}{l} \left\| V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi^ {*}} - V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} \right\| \\ \leq \left\| \left(I - \gamma \hat {\mathsf {P}} _ {w} ^ {\pi , \hat {V}}\right) ^ {- 1} \left(\hat {\mathsf {P}} _ {w} ^ {\pi , \hat {V}} V _ {\gamma , \mathcal {P}} ^ {\pi} - \mathsf {P} _ {w} ^ {\pi , V} V _ {\gamma , \mathcal {P}} ^ {\pi}\right) \right\| _ {\infty} \\ \leq \frac {\log (\frac {1 8 S A N}{\delta})}{N (1 - \gamma) ^ {2}} + \sqrt {\frac {2 c _ {1} \log (\frac {1 8 S A N}{\delta}) \mathcal {H}}{(1 - \gamma) ^ {2} N}} + 2 \sqrt {\frac {\log (\frac {1 8 S A N}{\delta})}{N (1 - \gamma) ^ {2}}} \left\| V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} - V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi^ {*}} \right\| _ {\infty} + 2 \sqrt {\frac {\log (\frac {1 8 S A N}{\delta}) \mathcal {H}}{N (1 - \gamma) ^ {2}}} + \frac {4 \log (\frac {1 8 S A N}{\delta})}{N (1 - \gamma) ^ {2}}. \tag {98} \\ \end{array}
+$$
+
We note that if we set $N \geq \frac{32\log(\frac{18SAN}{\delta})}{(1 - \gamma)^2}$ , it holds that
+
+$$
+\left\| V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi^ {*}} - V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} \right\| \leq \frac {C _ {1} \log (\frac {1 8 S A N}{\delta})}{N (1 - \gamma) ^ {2}} + \sqrt {\frac {2 C _ {2} \log (\frac {1 8 S A N}{\delta}) \mathcal {H}}{(1 - \gamma) ^ {2} N}}, \tag {99}
+$$
+
which completes the bound on the first term in (80).
+
+To bound the second term in (80), following eq (69) in (Shi et al., 2023), we have that
+
+$$
+\begin{array}{l} \left\| \left(I - \gamma \hat {\mathbf {P}} _ {w} ^ {\pi^ {*}, V}\right) ^ {- 1} \left(\hat {\mathbf {P}} _ {w} ^ {\pi^ {*}, V} V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} - \mathbf {P} _ {w} ^ {\pi^ {*}, V} V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}}\right) \right\| _ {\infty} \\ \leq 2 \sqrt {\frac {\log (\frac {1 8 S A N}{\delta})}{N}} \left\| \left(I - \gamma \hat {\mathrm {P}} _ {w} ^ {\pi^ {*}, V}\right) ^ {- 1} \sqrt {\operatorname {V a r} _ {\mathrm {P} ^ {\pi^ {*}}} \left(V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}}\right)} \right\| _ {\infty} + \frac {\log (\frac {1 8 S A N}{\delta})}{N (1 - \gamma) ^ {2}}. \tag {100} \\ \end{array}
+$$
+
+Now applying Lemma 12.10,
+
+$$
+\left\| \left(I - \gamma \hat {\mathsf {P}} _ {w} ^ {\pi , V}\right) ^ {- 1} \left(\hat {\mathsf {P}} _ {w} ^ {\pi , V} V _ {\gamma , \mathcal {P}} ^ {\pi} - \mathsf {P} _ {w} ^ {\pi , V} V _ {\gamma , \mathcal {P}} ^ {\pi}\right) \right\| _ {\infty} \leq \frac {C _ {3} \log (\frac {1 8 S A N}{\delta})}{N (1 - \gamma) ^ {2}} + \sqrt {\frac {2 C _ {4} \log (\frac {1 8 S A N}{\delta}) \mathcal {H} ^ {2}}{(1 - \gamma) ^ {2} N}}. \tag {101}
+$$
+
+We hence obtain the bound on $\| \tilde{V}_{\gamma ,\hat{\mathcal{P}}}^{\pi^{*}} - V_{\gamma ,\mathcal{P}}^{\pi^{*}}\|$ as follows:
+
+$$
+\left\| \tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {\pi^ {*}} - V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} \right\| \leq \frac {a _ {1} \log (\frac {1 8 S A N}{\delta})}{N (1 - \gamma) ^ {2}} + \sqrt {\frac {a _ {2} \log (\frac {1 8 S A N}{\delta}) \mathcal {H} ^ {2}}{(1 - \gamma) ^ {2} N}} + \frac {\epsilon}{6}, \tag {102}
+$$
+
when $N\geq \frac{C\log(\frac{18SAN}{\delta})}{(1 - \gamma)^2}$ .
+
+# 12.2. Part B: $\| \tilde{V}_{\gamma ,\hat{\mathcal{P}}}^{\hat{\pi}} - V_{\gamma ,\mathcal{P}}^{\hat{\pi}}\|$
+
+Similarly, we have that
+
+$$
+\begin{array}{l} \left\| \tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}} - V _ {\gamma , \mathcal {P}} ^ {\hat {\pi}} \right\| \leq \left\| \tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}} - \tilde {V} _ {\gamma , \mathcal {P}} ^ {\hat {\pi}} \right\| + \left\| \tilde {V} _ {\gamma , \mathcal {P}} ^ {\hat {\pi}} - V _ {\gamma , \mathcal {P}} ^ {\hat {\pi}} \right\| \\ \leq \frac {\epsilon}{6} + \| \tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}} - \tilde {V} _ {\gamma , \mathcal {P}} ^ {\hat {\pi}} \|, \tag {103} \\ \end{array}
+$$
+
+hence it suffices to bound the term $\| \tilde{V}_{\gamma ,\hat{\mathcal{P}}}^{\hat{\pi}} - \tilde{V}_{\gamma ,\mathcal{P}}^{\hat{\pi}}\|$ . By setting $\pi = \hat{\pi}$ in (79), we have that
+
+$$
\begin{array}{l} \left\| \tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}} - \tilde {V} _ {\gamma , \mathcal {P}} ^ {\hat {\pi}} \right\| _ {\infty} \leq \gamma \max \left\{ \left\| \left(I - \gamma \mathsf {P} _ {w} ^ {\hat {\pi}, \hat {\tilde {V}}}\right) ^ {- 1} \left(\hat {\mathsf {P}} _ {w} ^ {\hat {\pi}, \hat {\tilde {V}}} \tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}} - \mathsf {P} _ {w} ^ {\hat {\pi}, \hat {\tilde {V}}} \tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}}\right) \right\| _ {\infty}, \right. \\ \left. \left\| \left(I - \gamma \mathsf {P} _ {w} ^ {\hat {\pi}, \tilde {V}}\right) ^ {- 1} \left(\hat {\mathsf {P}} _ {w} ^ {\hat {\pi}, \tilde {V}} \tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}} - \mathsf {P} _ {w} ^ {\hat {\pi}, \tilde {V}} \tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}}\right) \right\| _ {\infty} \right\}. \tag {104} \end{array}
+$$
+
We first bound the first term $\left\| \left(I - \gamma \mathsf{P}_{w}^{\hat{\pi},\hat{\tilde{V}}}\right)^{-1}\left(\hat{\mathsf{P}}_{w}^{\hat{\pi},\hat{\tilde{V}}}\tilde{V}_{\gamma ,\hat{\mathcal{P}}}^{\hat{\pi}} - \mathsf{P}_{w}^{\hat{\pi},\hat{\tilde{V}}}\tilde{V}_{\gamma ,\hat{\mathcal{P}}}^{\hat{\pi}}\right)\right\|_{\infty}$ . To simplify notation, we write $\mathsf{P}_{w}^{\hat{\pi},\hat{V}}$ for $\mathsf{P}_{w}^{\hat{\pi},\hat{\tilde{V}}}$ and $\hat{\mathsf{P}}_{w}^{\hat{\pi},\hat{V}}$ for $\hat{\mathsf{P}}_{w}^{\hat{\pi},\hat{\tilde{V}}}$ in the sequel.
+
To this end, we introduce the following separation events:
+
+$$
+\hat {\Omega} _ {\omega} \triangleq \left\{\tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {*} (s) - \max _ {a \neq \hat {\pi} ^ {*} (s)} \tilde {Q} _ {\gamma , \hat {\mathcal {P}}} ^ {*} (s, a) \geq \omega , \forall s \in \mathcal {S} \right\}, \tag {105}
+$$
+
+$$
+\Omega_ {\omega} \triangleq \left\{\tilde {V} _ {\gamma , \mathcal {P}} ^ {*} (s) - \max _ {a \neq \pi^ {*} (s)} \tilde {Q} _ {\gamma , \mathcal {P}} ^ {*} (s, a) \geq \omega , \forall s \in \mathcal {S} \right\}. \tag {106}
+$$
+
These events indicate that there is a positive gap between the value of the optimal action and that of any other action, i.e., there is no tie among the optimal robust value functions. This further implies that the optimal policies $\hat{\pi}^*$ and $\pi^*$ are unique. As we shall show in Lemma 12.3, with a carefully chosen threshold, such events occur with high probability.
+
+Lemma 12.3. Set $\omega = \frac{\xi\delta(1 - \gamma)}{3SA^2}$ , then both (105) and (106) occur with probability at least $1 - \delta$ .
+
We then combine Lemma 14 of (Shi et al., 2023) and Lemma 9 of (Li et al., 2020) to show the following result, which allows us to decouple the dependence between $\hat{\pi}$ and the other terms.
+
+Lemma 12.4. Consider any $\delta \in (0,1)$ . Taking $N \geq \mathcal{O}\left(\frac{\log\left(\frac{54SAN^2}{(1 - \gamma)\delta}\right)}{1 - \gamma}\right)$ , with probability at least $1 - 2\delta$ , events (105) and (106) occur, and it holds that
+
+$$
+\left| \hat {\mathsf {P}} _ {w} ^ {\hat {\pi}, \widehat {V}} \tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}} - \mathsf {P} _ {w} ^ {\hat {\pi}, \widehat {V}} \tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}} \right| \leq 2 \sqrt {\frac {\log \left(\frac {5 4 S A N ^ {2}}{(1 - \gamma) \delta}\right)}{N}} \sqrt {\operatorname {V a r} _ {\mathsf {P} _ {s , a}} (\tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}})} 1 + \frac {8 \log \left(\frac {5 4 S A N ^ {2}}{(1 - \gamma) \delta}\right)}{N (1 - \gamma)} 1. \tag {107}
+$$
+
+With Lemma 12.4 in hand, we have
+
+$$
\begin{array}{l} \left(I - \gamma \mathsf {P} _ {w} ^ {\hat {\pi}, \hat {V}}\right) ^ {- 1} \left(\hat {\mathsf {P}} _ {w} ^ {\hat {\pi}, \hat {V}} \tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}} - \mathsf {P} _ {w} ^ {\hat {\pi}, \hat {V}} \tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}}\right) \\ \stackrel {\mathrm {(i)}} {\leq} \left(I - \gamma \mathsf {P} _ {w} ^ {\hat {\pi}, \hat {V}}\right) ^ {- 1} \left| \hat {\mathsf {P}} _ {w} ^ {\hat {\pi}, \hat {V}} \tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}} - \mathsf {P} _ {w} ^ {\hat {\pi}, \hat {V}} \tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}} \right| \\ \leq 2 \sqrt {\frac {\log \left(\frac {5 4 S A N ^ {2}}{(1 - \gamma) \delta}\right)}{N}} \left(I - \gamma \mathsf {P} _ {w} ^ {\hat {\pi}, \hat {V}}\right) ^ {- 1} \sqrt {\operatorname {Var} _ {\mathsf {P} ^ {\hat {\pi}}} \left(\tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}}\right)} + \left(\frac {8 \log \left(\frac {5 4 S A N ^ {2}}{(1 - \gamma) \delta}\right)}{N (1 - \gamma) ^ {2}}\right) \mathbf {1} \\ \stackrel {\mathrm {(ii)}} {\leq} \left(\frac {8 \log \left(\frac {5 4 S A N ^ {2}}{(1 - \gamma) \delta}\right)}{N (1 - \gamma) ^ {2}}\right) \mathbf {1} + \underbrace {2 \sqrt {\frac {\log \left(\frac {5 4 S A N ^ {2}}{(1 - \gamma) \delta}\right)}{N}} \left(I - \gamma \mathsf {P} _ {w} ^ {\hat {\pi}, \hat {V}}\right) ^ {- 1} \sqrt {\operatorname {Var} _ {\mathsf {P} _ {w} ^ {\hat {\pi}, \hat {V}}} \left(\tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}}\right)}} _ {=: B _ {1}} \\ + \underbrace {2 \sqrt {\frac {\log \left(\frac {5 4 S A N ^ {2}}{(1 - \gamma) \delta}\right)}{N}} \left(I - \gamma \mathsf {P} _ {w} ^ {\hat {\pi}, \hat {V}}\right) ^ {- 1} \sqrt {\left| \operatorname {Var} _ {\mathsf {P} ^ {\hat {\pi}}} \left(\tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}}\right) - \operatorname {Var} _ {\mathsf {P} _ {w} ^ {\hat {\pi}, \hat {V}}} \left(\tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}}\right) \right|}} _ {=: B _ {2}}, \tag {108} \\ \end{array}
+$$
+
where (i) and (ii) hold by the fact that each row of $(1 - \gamma)\left(I - \gamma \mathsf{P}_{w}^{\widehat{\pi},\widehat{V}}\right)^{-1}$ is a probability vector in $\Delta (\mathcal{S})$ .
+
+Term B1. Similar to term A1, term B1 is equivalent to $2\sqrt{\frac{\log(\frac{54SAN^2}{(1 - \gamma)\delta})}{N}} (I - \gamma P)^{-1}\sqrt{\mathrm{Var}_P(V_{\gamma,\mathsf{P}})}$ with $\mathsf{P} = \mathsf{P}_{w}^{\widehat{\pi},\widehat{V}}$ . Specifically, $\hat{\pi}$ can be viewed as the optimal policy for $\gamma$ and the empirical uncertainty set. Thus, applying Corollary 11.3 implies that
+
+$$
+\left\| \tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}} - \frac {\tilde {g} _ {\hat {\mathcal {P}}} ^ {*}}{1 - \gamma} \right\| \leq \mathcal {H}, \tag {109}
+$$
+
+and hence
+
+$$
+B 1 \leq \sqrt {\frac {2 d _ {1} \log (\frac {1 8 S A N}{\delta}) \mathcal {H} ^ {2}}{(1 - \gamma) ^ {2} N}}. \tag {110}
+$$
+
Term B2. Similar to term A3, term B2 can be bounded by noting that $R \leq \frac{1}{\mathcal{H}}$ :
+
+$$
+B 2 \leq 2 \sqrt {\frac {\log (\frac {1 8 S A N}{\delta}) \mathcal {H} ^ {2}}{N (1 - \gamma) ^ {2}}}. \tag {111}
+$$
+
Combining both bounds, we have that when $N \geq \frac{C \log \frac{SAN}{\delta}}{(1 - \gamma)^2}$ ,
+
+$$
+\left(I - \gamma \mathrm {P} _ {w} ^ {\hat {\pi}, \widehat {V}}\right) ^ {- 1} \left(\hat {\mathrm {P}} _ {w} ^ {\hat {\pi}, \widehat {V}} \tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}} - \mathrm {P} _ {w} ^ {\hat {\pi}, \widehat {V}} \tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}}\right) \leq \frac {D _ {1} \log (\frac {1 8 S A N}{\delta})}{N (1 - \gamma) ^ {2}} + \sqrt {\frac {2 D _ {2} \log (\frac {1 8 S A N}{\delta}) \mathcal {H} ^ {2}}{(1 - \gamma) ^ {2} N}} \tag {112}
+$$
+
+with probability at least $1 - \delta$ . Similarly, we can get the bound on the second term of (104), which finally implies that
+
+$$
+\left\| \tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}} - \tilde {V} _ {\gamma , \mathcal {P}} ^ {\hat {\pi}} \right\| _ {\infty} \leq \frac {D _ {1} \log (\frac {1 8 S A N}{\delta})}{N (1 - \gamma) ^ {2}} + \sqrt {\frac {2 D _ {2} \log (\frac {1 8 S A N}{\delta}) \mathcal {H} ^ {2}}{(1 - \gamma) ^ {2} N}}. \tag {113}
+$$
+
+# 12.3. Summing Up the Results
+
Combining the bounds obtained from Part A and Part B, it holds with probability at least $1 - 4\delta$ that
+
+$$
+V _ {\gamma , \mathcal {P}} ^ {*} - V _ {\gamma , \mathcal {P}} ^ {\hat {\pi}} \leq m _ {1} \epsilon + m _ {2} \frac {\log (\frac {1 8 S A N}{\delta})}{N (1 - \gamma) ^ {2}} + m _ {3} \sqrt {\frac {\log (\frac {1 8 S A N}{\delta}) \mathcal {H} ^ {2}}{(1 - \gamma) ^ {2} N}}, \tag {114}
+$$
+
when $N\geq \frac{C\log(\frac{18SAN}{\delta})}{(1 - \gamma)^2}$ .
+
Thus, to achieve an $\epsilon$ -optimal policy, we require a total number of samples of
+
+$$
+N S A = \frac {C S A \log \left(\frac {1 8 S A N}{\delta}\right) \mathcal {H} ^ {2}}{(1 - \gamma) ^ {2} \epsilon^ {2}} + \frac {C S A \log \left(\frac {1 8 S A N}{\delta}\right)}{(1 - \gamma) ^ {2}}, \tag {115}
+$$
+
for some constant $C$ . Then setting $\epsilon = \mathcal{H}(1 - \gamma)$ , i.e., $1 - \gamma = \frac{\epsilon}{\mathcal{H}}$ , implies that
+
+$$
+N S A \geq \frac {C S A \mathcal {H} ^ {2} \log \frac {S A N}{\delta}}{\epsilon^ {2}} \tag {116}
+$$
+
+samples are required to find an $\epsilon$ -optimal policy for robust average reward.
+
+# 12.4. Proofs of Lemmas
+
+Lemma 12.5. (Lemma 6 of (Zurek & Chen, 2023)) For any deterministic stationary policy $\pi$ , we have
+
+$$
+\gamma \left\| (I - \gamma P ^ {\pi}) ^ {- 1} \sqrt {\mathbf {V a r} _ {P ^ {\pi}} \left[ V _ {\gamma , P} ^ {\pi} \right]} \right\| _ {\infty} \leq \sqrt {\frac {2}{1 - \gamma}} \sqrt {\left\| \mathbf {V a r} _ {P ^ {\pi}} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t} R _ {t} \right] \right\| _ {\infty}} \tag {117}
+$$
+
+Proof. The following variance Bellman equation holds from (Sobel, 1982):
+
+$$
+\mathbf {V a r} _ {\mathrm {P} ^ {\pi}} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t} R _ {t} \right] = \gamma^ {2} \mathbf {V a r} _ {\mathrm {P} ^ {\pi}} \left[ V _ {\gamma , \mathrm {P}} ^ {\pi} \right] + \gamma^ {2} \mathrm {P} ^ {\pi} \mathbf {V a r} _ {\mathrm {P} ^ {\pi}} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t} R _ {t} \right]. \tag {118}
+$$
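The variance Bellman equation (118) can be sanity-checked numerically. The sketch below uses a toy two-state Markov reward process (the transition matrix, reward vector, and discount factor are arbitrary illustrative choices, not part of the proof) and computes the variance of the discounted return in two ways: through the recursion (118) and through second moments.

```python
# Sanity check of the variance Bellman equation (118) on a toy 2-state
# Markov reward process. P, r, gamma are arbitrary illustrative choices.
gamma = 0.9
P = [[0.7, 0.3], [0.4, 0.6]]   # policy-induced transition matrix P^pi
r = [1.0, 0.2]                 # deterministic per-state reward

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def fixed_point(update, iters=5000):
    # iterate a gamma-contraction to (numerical) convergence
    x = [0.0, 0.0]
    for _ in range(iters):
        x = update(x)
    return x

# discounted value function: V = r + gamma * P V
V = fixed_point(lambda v: [r[i] + gamma * pv for i, pv in enumerate(matvec(P, v))])
PV = matvec(P, V)
# next-state value variance: Var_P(V)(s) = P(V^2)(s) - (PV)(s)^2
VarV = [matvec(P, [x * x for x in V])[i] - PV[i] ** 2 for i in range(2)]

# Route 1: solve eq. (118), sigma2 = gamma^2 Var_P(V) + gamma^2 P sigma2
s2_bellman = fixed_point(
    lambda s2: [gamma ** 2 * (VarV[i] + pv) for i, pv in enumerate(matvec(P, s2))])

# Route 2: second moment M(s) = E[G^2 | s] of the return G = r(s) + gamma G',
# via M = r^2 + 2 gamma r (P V) + gamma^2 P M, then Var[G] = M - V^2
M = fixed_point(
    lambda m: [r[i] ** 2 + 2 * gamma * r[i] * PV[i] + gamma ** 2 * pm
               for i, pm in enumerate(matvec(P, m))])
s2_moment = [M[i] - V[i] ** 2 for i in range(2)]

assert all(abs(a - b) < 1e-8 for a, b in zip(s2_bellman, s2_moment))
```

The agreement of the two routes reflects the law of total variance applied to $G = r(s) + \gamma G'$, which is exactly how (118) is derived.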
+
On the other hand, since each row of $(1 - \gamma)(I - \gamma \mathsf{P}^{\pi})^{-1}$ is a probability vector, Jensen's inequality implies that
+
+$$
+\left| (1 - \gamma) e _ {s} ^ {\top} (I - \gamma \mathsf {P} ^ {\pi}) ^ {- 1} \sqrt {\mathbf {V a r} _ {\mathsf {P} ^ {\pi}} \left[ V _ {\gamma , \mathsf {P}} ^ {\pi} \right]} \right| \leq \sqrt {\left| (1 - \gamma) e _ {s} ^ {\top} (I - \gamma \mathsf {P} ^ {\pi}) ^ {- 1} \mathbf {V a r} _ {\mathsf {P} ^ {\pi}} \left[ V _ {\gamma , \mathsf {P}} ^ {\pi} \right] \right|}.
+$$
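The elementwise bound above is an instance of Jensen's inequality, since each row of $(1-\gamma)(I - \gamma \mathsf{P}^{\pi})^{-1}$ is a probability vector. A quick numeric check on randomly generated data (the stochastic matrix and vector below are illustrative choices, not part of the proof):

```python
import math
import random

# Check: (1-gamma)(I - gamma P)^{-1} sqrt(v) <= sqrt((1-gamma)(I - gamma P)^{-1} v)
# elementwise, for any row-stochastic P and nonnegative v (Jensen's inequality).
random.seed(0)
gamma, n = 0.9, 4
P = []
for _ in range(n):
    row = [random.random() for _ in range(n)]
    s = sum(row)
    P.append([x / s for x in row])       # row-stochastic matrix
v = [random.random() for _ in range(n)]  # nonnegative vector

def resolvent(vec, iters=2000):
    # (I - gamma P)^{-1} vec via the Neumann fixed point x = vec + gamma P x
    x = [0.0] * n
    for _ in range(iters):
        x = [vec[i] + gamma * sum(P[i][j] * x[j] for j in range(n))
             for i in range(n)]
    return x

lhs = [(1 - gamma) * y for y in resolvent([math.sqrt(t) for t in v])]
rhs = [math.sqrt((1 - gamma) * t) for t in resolvent(v)]
assert all(a <= b + 1e-9 for a, b in zip(lhs, rhs))
```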
+
Denoting $v = \mathbf{Var}_{\mathsf{P}^{\pi}}\left[V_{\gamma ,\mathsf{P}}^{\pi}\right]$ , we then have that
+
+$$
\begin{array}{l} \gamma \left\| (I - \gamma \mathsf {P} ^ {\pi}) ^ {- 1} \sqrt {v} \right\| _ {\infty} = \frac {\gamma}{1 - \gamma} \left\| (1 - \gamma) (I - \gamma \mathsf {P} ^ {\pi}) ^ {- 1} \sqrt {v} \right\| _ {\infty} \quad (119) \\ \leq \frac {\gamma}{1 - \gamma} \sqrt {\left\| (1 - \gamma) (I - \gamma \mathsf {P} ^ {\pi}) ^ {- 1} v \right\| _ {\infty}} \quad (120) \\ = \frac {\gamma}{\sqrt {1 - \gamma}} \sqrt {\left\| \left(I - \gamma \mathsf {P} ^ {\pi}\right) ^ {- 1} v \right\| _ {\infty}}. \quad (121) \end{array}
+$$
+
+Moreover,
+
+$$
+\begin{array}{l} \left\| \left(I - \gamma \mathrm {P} ^ {\pi}\right) ^ {- 1} v \right\| _ {\infty} = \left\| \left(I - \gamma \mathrm {P} ^ {\pi}\right) ^ {- 1} \left(I - \gamma^ {2} \mathrm {P} ^ {\pi}\right) \left(I - \gamma^ {2} \mathrm {P} ^ {\pi}\right) ^ {- 1} v \right\| _ {\infty} (122) \\ = \left\| (I - \gamma \mathrm {P} ^ {\pi}) ^ {- 1} ((1 - \gamma) I + \gamma (I - \gamma \mathrm {P} ^ {\pi})) (I - \gamma^ {2} \mathrm {P} ^ {\pi}) ^ {- 1} v \right\| _ {\infty} (123) \\ = \left\| \left((1 - \gamma) (I - \gamma \mathrm {P} ^ {\pi}) ^ {- 1} + \gamma I\right) (I - \gamma^ {2} \mathrm {P} ^ {\pi}) ^ {- 1} v \right\| _ {\infty} (124) \\ \leq \left\| (1 - \gamma) (I - \gamma \mathrm {P} ^ {\pi}) ^ {- 1} (I - \gamma^ {2} \mathrm {P} ^ {\pi}) ^ {- 1} v \right\| _ {\infty} + \gamma \left\| (I - \gamma^ {2} \mathrm {P} ^ {\pi}) ^ {- 1} v \right\| _ {\infty} (125) \\ \leq (1 - \gamma) \left\|\left(I - \gamma P ^ {\pi}\right) ^ {- 1} \right\| _ {\infty \rightarrow \infty} \left\|\left(I - \gamma^ {2} P ^ {\pi}\right) ^ {- 1} v \right\| _ {\infty} + \gamma \left\|\left(I - \gamma^ {2} P ^ {\pi}\right) ^ {- 1} v \right\| _ {\infty} (126) \\ \leq (1 + \gamma) \| \left(I - \gamma^ {2} \mathrm {P} ^ {\pi}\right) ^ {- 1} v \| _ {\infty} (127) \\ \leq 2 \left\| \left(I - \gamma^ {2} \mathrm {P} ^ {\pi}\right) ^ {- 1} v \right\| _ {\infty}. (128) \\ \end{array}
+$$
+
+Combining them with the variance Bellman equation (118), it holds that
+
+$$
+\gamma \left\| (I - \gamma P ^ {\pi}) ^ {- 1} \sqrt {v} \right\| _ {\infty} \leq \gamma \frac {1}{\sqrt {1 - \gamma}} \sqrt {2 \left\| (I - \gamma^ {2} P ^ {\pi}) ^ {- 1} v \right\| _ {\infty}} \leq \sqrt {\frac {2}{1 - \gamma}} \sqrt {\left\| \mathbf {V a r} _ {P ^ {\pi}} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t} R _ {t} \right] \right\| _ {\infty}}. \tag {129}
+$$
+
+Lemma 12.6. (Lemma 7 of (Zurek & Chen, 2023)) For any integer $T \geq 1$ , for any deterministic stationary policy $\pi$ , we have
+
+$$
+\left\| \boldsymbol {V a r} _ {\mathsf {P} ^ {\pi}} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t} R _ {t} \right] \right\| _ {\infty} \leq \frac {\left\| \boldsymbol {V a r} _ {\mathsf {P} ^ {\pi}} \left[ \sum_ {t = 0} ^ {T - 1} \gamma^ {t} R _ {t} + \gamma^ {T} V _ {\gamma} ^ {\pi} (S _ {T}) \right] \right\| _ {\infty}}{1 - \gamma^ {2 T}}.
+$$
+
+Lemma 12.7. (Lemma 8 of (Zurek & Chen, 2023)) If $\gamma \geq 1 - \frac{1}{\mathcal{H}}$ for some integer $\mathcal{H} \geq 1$ , then
+
+$$
+\frac {1 - \gamma^ {2 \mathcal {H}}}{1 - \gamma} \geq \left(1 - \frac {1}{e ^ {2}}\right) \mathcal {H} \geq \frac {4}{5} \mathcal {H}.
+$$
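Lemma 12.7 is a purely scalar inequality, so it can be checked numerically. The sketch below (with arbitrary test values of $\mathcal{H}$ and $\gamma$) uses the fact that $\frac{1-\gamma^{2\mathcal{H}}}{1-\gamma} = \sum_{k=0}^{2\mathcal{H}-1}\gamma^k$ is increasing in $\gamma$, so $\gamma = 1 - \frac{1}{\mathcal{H}}$ is the worst case in the admissible range:

```python
import math

# Numeric check of Lemma 12.7 for a range of H and several admissible gamma.
assert 1.0 - math.exp(-2.0) >= 4.0 / 5.0   # the second inequality of the lemma

for H in range(1, 200):
    # gamma = 1 - 1/H is the worst case; larger gamma only increases the ratio
    for gamma in (1.0 - 1.0 / H, 1.0 - 0.5 / H, 1.0 - 0.1 / H):
        ratio = (1.0 - gamma ** (2 * H)) / (1.0 - gamma)
        assert ratio >= (1.0 - math.exp(-2.0)) * H - 1e-9
```

The underlying reason the check passes is $(1 - \frac{1}{\mathcal{H}})^{2\mathcal{H}} \leq e^{-2}$, which follows from $1 - x \leq e^{-x}$.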
+
+Lemma 12.8. Letting $\pi^{*}$ be the optimal policy for the robust DMDP $(S, \mathcal{A}, \gamma, r, \mathcal{P})$ , we have
+
+$$
+\left\| \boldsymbol {V a r} _ {\mathsf {P} \pi^ {*}} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t} R _ {t} \right] \right\| _ {\infty} \leq 5 \frac {\mathcal {H}}{1 - \gamma}.
+$$
+
+Proof. By using Lemma 12.6, it suffices to bound $\left\| \mathbf{Var}_{\mathsf{P}^{\pi^{*}}}\left[\sum_{t = 0}^{\mathcal{H} - 1}\gamma^{t}R_{t} + \gamma^{\mathcal{H}}V_{\gamma ,\mathsf{P}}^{\pi^{*}}(S_{\mathcal{H}})\right]\right\|_{\infty}$ .
+
Fixing a state $s_0 \in \mathcal{S}$ ,
+
+$$
+\begin{array}{l} \mathbf {V a r} _ {\mathsf {P} _ {s _ {0}} ^ {\pi^ {*}}} \left[ \sum_ {t = 0} ^ {\mathcal {H} - 1} \gamma^ {t} R _ {t} + \gamma^ {\mathcal {H}} V _ {\gamma , \mathsf {P}} ^ {\pi^ {*}} (S _ {\mathcal {H}}) \right] = \mathbf {V a r} _ {\mathsf {P} _ {s _ {0}} ^ {\pi^ {*}}} \left[ \sum_ {t = 0} ^ {\mathcal {H} - 1} \gamma^ {t} R _ {t} + \gamma^ {\mathcal {H}} \left(V _ {\gamma , \mathsf {P}} ^ {\pi^ {*}} (S _ {\mathcal {H}}) - \frac {1}{1 - \gamma} g _ {\mathsf {P}} ^ {\pi^ {*}}\right) \right] \\ \leq \mathbb {E} _ {\mathsf {P} _ {s _ {0}} ^ {\pi^ {*}}} \left| \sum_ {t = 0} ^ {\mathcal {H} - 1} \gamma^ {t} R _ {t} + \gamma^ {\mathcal {H}} \left(V _ {\gamma , \mathsf {P}} ^ {\pi^ {*}} (S _ {\mathcal {H}}) - \frac {1}{1 - \gamma} g _ {\mathsf {P}} ^ {\pi^ {*}}\right) \right| ^ {2} \\ \leq 2 \mathbb {E} _ {\mathsf {P} _ {s _ {0}} ^ {\pi^ {*}}} \left| \sum_ {t = 0} ^ {\mathcal {H} - 1} \gamma^ {t} R _ {t} \right| ^ {2} + 2 \mathbb {E} _ {\mathsf {p} _ {s _ {0}} ^ {\pi^ {*}}} \left| \gamma^ {\mathcal {H}} \left(V _ {\gamma , \mathsf {P}} ^ {\pi^ {*}} (S _ {\mathcal {H}}) - \frac {1}{1 - \gamma} g _ {\mathsf {P}} ^ {\pi^ {*}}\right) \right| ^ {2} \\ \leq 2 \mathcal {H} ^ {2} + 2 \sup _ {s} \left(V _ {\gamma , \mathsf {P}} ^ {\pi^ {*}} (s) - \frac {1}{1 - \gamma} g _ {\mathsf {P}} ^ {\pi^ {*}}\right) ^ {2} \\ \leq 4 \mathcal {H} ^ {2}, \\ \end{array}
+$$
+
+where the last inequality can be similarly derived as Lemma 11.1. We thus have that
+
+$$
\left\| \mathbf {Var} _ {\mathsf {P} ^ {\pi ^ {*}}} \left[ \sum_ {t = 0} ^ {\infty} \gamma ^ {t} R _ {t} \right] \right\| _ {\infty} \leq \frac {4 \mathcal {H} ^ {2}}{1 - \gamma ^ {2 \mathcal {H}}}.
+$$
+
Together with Lemma 12.7, which gives $1 - \gamma^{2\mathcal{H}} \geq \frac{4}{5}\mathcal{H}(1 - \gamma)$ and hence $\frac{4\mathcal{H}^2}{1 - \gamma^{2\mathcal{H}}} \leq \frac{5\mathcal{H}}{1 - \gamma}$, this completes the proof.
+
+Lemma 12.9. (Lemma 12.2) For any policy $\pi$ , if $N \geq \mathcal{O}\left(\frac{\log\frac{SA}{(1 - \gamma)\delta\epsilon}}{1 - \gamma}\right)$ , it holds with probability at least $1 - \delta$ that
+
+$$
+\left\| \left(I - \gamma \mathrm {Q} ^ {\pi^ {*}}\right) ^ {- 1} \sqrt {\operatorname {V a r} _ {\mathrm {Q} ^ {\pi^ {*}}} \left(V _ {\gamma , \mathrm {Q}} ^ {\pi^ {*}}\right)} \right\| _ {\infty} \leq \sqrt {\frac {c _ {1} \mathcal {H}}{(1 - \gamma) ^ {2}}}. \tag {130}
+$$
+
+Proof. We prove a more general result: for any kernel $\mathsf{P}$ , it holds that
+
+$$
+\left\| \left(I - \gamma \mathrm {P} ^ {\pi^ {*}}\right) ^ {- 1} \sqrt {\operatorname {V a r} _ {\mathrm {P} ^ {\pi^ {*}}} \left(V _ {\gamma , \mathrm {P}} ^ {\pi^ {*}}\right)} \right\| _ {\infty} \leq \sqrt {\frac {c _ {1} \mathcal {H}}{(1 - \gamma) ^ {2}}}. \tag {131}
+$$
+
+By Lemma 12.5, we have that
+
+$$
+\begin{array}{l} \left. \left\| \left(I - \gamma \mathrm {P} ^ {\pi^ {*}}\right) ^ {- 1} \sqrt {\operatorname {V a r} _ {\mathrm {P} ^ {\pi^ {*}}} \left(V _ {\gamma , \mathrm {P}} ^ {\pi^ {*}}\right)} \right\| _ {\infty} \right. \\ \leq \sqrt {\frac {2}{1 - \gamma}} \sqrt {\left\| \mathbf {V a r} _ {\mathsf {P} ^ {\pi^ {*}}} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t} R _ {t} \right] \right\| _ {\infty}} \\ \stackrel {(a)} {\leq} \sqrt {\frac {2}{1 - \gamma}} \sqrt {5 \frac {\mathcal {H}}{1 - \gamma}} \\ = \sqrt {\frac {1 0 \mathcal {H}}{(1 - \gamma) ^ {2}}}, \tag {132} \\ \end{array}
+$$
+
+where $(a)$ is due to Lemma 12.8.
+
+Lemma 12.10. For any transition kernels $q_{1}$ and $q_{2}$ , it holds that
+
+$$
+\left\| \left(I - \gamma q _ {1} ^ {\pi^ {*}}\right) ^ {- 1} \sqrt {\operatorname {V a r} _ {q _ {2} ^ {\pi^ {*}}} \left(V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}}\right)} \right\| _ {\infty} \leq \sqrt {\frac {c _ {1} \mathcal {H} ^ {2}}{(1 - \gamma) ^ {2}}}. \tag {133}
+$$
+
+Proof. Note that $\mathbf{Var}_q(V) = \mathbf{Var}_q(V - ke)$ for any $k$ and $e = (1, \dots, 1)$ . Moreover, from Corollary 11.5, it holds that
+
+$$
+\left\| V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} - \frac {1}{1 - \gamma} g _ {\mathcal {P}} ^ {\pi^ {*}} \right\| \leq 4 \mathcal {H}. \tag {134}
+$$
+
+Thus
+
+$$
+\operatorname {V a r} _ {q _ {2} ^ {\pi^ {*}}} \left(V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}}\right) = \operatorname {V a r} _ {q _ {2} ^ {\pi^ {*}}} \left(V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} - \frac {1}{1 - \gamma} g _ {\mathcal {P}} ^ {\pi^ {*}}\right) \leq \left\| V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} - \frac {1}{1 - \gamma} g _ {\mathcal {P}} ^ {\pi^ {*}} \right\| ^ {2} \leq 1 6 \mathcal {H} ^ {2}. \tag {135}
+$$
+
Combining (135) with $\left\|\left(I - \gamma q_{1}^{\pi^{*}}\right)^{-1}\right\|_{\infty \rightarrow \infty} \leq \frac{1}{1 - \gamma}$ yields (133), which completes the proof.
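The two elementary facts driving Lemma 12.10, shift invariance of the variance and the bound $\operatorname{Var}_q(V) \leq \|V\|_\infty^2$, can be sanity-checked on random data (the distribution, vector, and shift below are arbitrary illustrative choices):

```python
import random

# Check: Var_q(V) = Var_q(V - k e) for any scalar k, and Var_q(X) <= ||X||_inf^2.
random.seed(1)
n = 5
w = [random.random() for _ in range(n)]
q = [x / sum(w) for x in w]                      # a probability vector
V = [random.uniform(-3.0, 7.0) for _ in range(n)]
k = 2.71                                         # arbitrary shift

def var(q, V):
    m = sum(qi * vi for qi, vi in zip(q, V))     # mean under q
    return sum(qi * (vi - m) ** 2 for qi, vi in zip(q, V))

shifted = [v - k for v in V]
assert abs(var(q, V) - var(q, shifted)) < 1e-9   # shift invariance
bound = max(abs(v) for v in shifted) ** 2        # ||V - k e||_inf^2
assert var(q, shifted) <= bound + 1e-9           # variance <= sup-norm squared
```

In the lemma, the shift is $k = \frac{1}{1-\gamma}g_{\mathcal{P}}^{\pi^*}$ and the sup-norm bound $4\mathcal{H}$ comes from Corollary 11.5, giving the constant $16\mathcal{H}^2$ in (135).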
+
+Lemma 12.11. (Lemma 12.3) Set $\omega = \frac{\xi\delta(1 - \gamma)}{3SA^2}$ , then both (105) and (106) occur with probability at least $1 - \delta$ .
+
+Proof. The proof is similar for both events, hence we only present the proof for (106). We show a more general result, namely, with probability at least $1 - \delta$ , for any $s$ and $a_1 \neq a_2$ ,
+
+$$
+\left| Q _ {\gamma , \mathcal {P}} ^ {*} (s, a _ {1}) - Q _ {\gamma , \mathcal {P}} ^ {*} (s, a _ {2}) \right| > \frac {\xi \delta (1 - \gamma)}{3 S A ^ {2}}. \tag {136}
+$$
+
+We further introduce the following notation:
+
+$$
+r _ {\tau} (s, a _ {1}) = \tau , \tag {137}
+$$
+
+$$
+r _ {\tau} \left(s ^ {\prime}, a ^ {\prime}\right) = \tilde {r} \left(s ^ {\prime}, a ^ {\prime}\right), \forall \left(s ^ {\prime}, a ^ {\prime}\right) \neq (s, a _ {1}). \tag {138}
+$$
+
We denote the optimal robust value functions and the optimal policy w.r.t. $r_{\tau}$ as $Q_{\tau}^{*}$ , $V_{\tau}^{*}$ , and $\pi_{\tau}^{*}$ .
+
+We first prove the following claim: there exists some $\tau'$ , such that
+
+$$
\pi_ {\tau} ^ {*} (s) \neq a _ {1}, \text { for all } \tau < \tau^ {\prime}, \tag {139}
+$$
+
+$$
\pi_ {\tau} ^ {*} (s) = a _ {1}, \text { for all } \tau > \tau^ {\prime}. \tag {140}
+$$
+
+Define
+
+$$
+\tau^ {\prime} = \sup \left\{u: \pi_ {\tau} ^ {*} (s) \neq a _ {1}, \forall \tau < u \right\}, \tag {141}
+$$
+
then it suffices to show (140) for this choice of $\tau'$ , which follows exactly as the proof of eq. (95) in (Li et al., 2020). We then prove the lemma as follows.
+
+First, define the following sets:
+
+$$
+I _ {0, \omega} \triangleq \left\{\tau : \left| Q _ {\tau} ^ {*} (s, a _ {1}) - Q _ {\tau} ^ {*} (s, a _ {2}) \right| < \omega \right\}, \tag {142}
+$$
+
+$$
+I _ {1, \omega} \triangleq \left\{\tau : \tau < \tau^ {\prime}, \left| Q _ {\tau} ^ {*} (s, a _ {1}) - Q _ {\tau} ^ {*} (s, a _ {2}) \right| < \omega \right\}, \tag {143}
+$$
+
+$$
+I _ {2, \omega} \triangleq \left\{\tau : \tau \geq \tau^ {\prime}, \left| Q _ {\tau} ^ {*} (s, a _ {1}) - Q _ {\tau} ^ {*} (s, a _ {2}) \right| < \omega \right\}. \tag {144}
+$$
+
Clearly, $I_{0,\omega} = I_{1,\omega} \cup I_{2,\omega}$ , and we will show that the Lebesgue measure of these sets is small.
+
+Step 1. For $\tau \in I_{1,\omega}$ , note that $V_{\tau}^{*}$ does not depend on $\tau$ , since $\pi_{\tau}^{*}(s) \neq a_{1}$ , and $\tau$ is never active when calculating $V_{\tau}^{*}$ . Thus, the robust Bellman equation becomes
+
+$$
+Q _ {\tau} ^ {*} (s, a _ {1}) = \tau + \gamma \sigma_ {\mathcal {P} _ {s} ^ {a _ {1}}} \left(V _ {\tau} ^ {*}\right),
+$$
+
+$$
+Q ^ {*} (s, a _ {2}) = \tilde {r} (s, a _ {2}) + \gamma \sigma_ {\mathcal {P} _ {s} ^ {a _ {2}}} \left(V _ {\tau} ^ {*}\right). \tag {145}
+$$
+
+Thus, it holds that
+
+$$
+I _ {1, \omega} \subset \left\{\tau : \left| \tau + \gamma \sigma_ {\mathcal {P} _ {s} ^ {a _ {1}}} \left(V _ {\tau} ^ {*}\right) - Q ^ {*} (s, a _ {2}) \right| < \omega \right\}. \tag {146}
+$$
+
Since both terms $\gamma \sigma_{\mathcal{P}_s^{a_1}}(V_\tau^*)$ and $Q^{*}(s,a_{2})$ are independent of $\tau$ , the Lebesgue measure of $I_{1,\omega}$ is at most $2\omega$ .
+
+Step 2. We now consider $I_{2,\omega}$ . First, note that
+
+$$
+\begin{array}{l} 0 \leq Q _ {\tau_ {2}} ^ {*} - Q _ {\tau_ {1}} ^ {*} \leq r _ {\tau_ {2}} - r _ {\tau_ {1}} + \gamma (\sigma (V _ {\tau_ {2}} ^ {*}) - \sigma (V _ {\tau_ {1}} ^ {*})) \\ \leq r _ {\tau_ {2}} - r _ {\tau_ {1}} + \gamma \| V _ {\tau_ {2}} ^ {*} - V _ {\tau_ {1}} ^ {*} \|, \tag {147} \\ \end{array}
+$$
+
for any $\tau_{2} > \tau_{1} > \tau^{\prime}$ , which follows from the 1-Lipschitz property of the support functions. Moreover, for any $(x,b)\neq (s,a_1)$ , since $r_{\tau_2}(x,b) = r_{\tau_1}(x,b)$ , it holds that
+
+$$
+0 \leq Q _ {\tau_ {2}} ^ {*} (x, b) - Q _ {\tau_ {1}} ^ {*} (x, b) \leq \gamma \| V _ {\tau_ {2}} ^ {*} - V _ {\tau_ {1}} ^ {*} \|. \tag {148}
+$$
+
+On the other hand, note that
+
+$$
+0 \leq V _ {\tau_ {2}} ^ {*} - V _ {\tau_ {1}} ^ {*} = \max _ {a} Q _ {\tau_ {2}} ^ {*} - \max _ {a} Q _ {\tau_ {1}} ^ {*} \leq \| Q _ {\tau_ {2}} ^ {*} - Q _ {\tau_ {1}} ^ {*} \|, \tag {149}
+$$
+
+and thus
+
+$$
+\left\| V _ {\tau_ {2}} ^ {*} - V _ {\tau_ {1}} ^ {*} \right\| \leq \left\| Q _ {\tau_ {2}} ^ {*} - Q _ {\tau_ {1}} ^ {*} \right\|. \tag {150}
+$$
+
+Note that (148) implies that
+
+$$
+Q _ {\tau_ {2}} ^ {*} (x, b) - Q _ {\tau_ {1}} ^ {*} (x, b) \leq \gamma \| V _ {\tau_ {2}} ^ {*} - V _ {\tau_ {1}} ^ {*} \| < \| V _ {\tau_ {2}} ^ {*} - V _ {\tau_ {1}} ^ {*} \|, \forall (x, b) \neq (s, a _ {1}), \tag {151}
+$$
+
+thus
+
+$$
+\left\| Q _ {\tau_ {2}} ^ {*} - Q _ {\tau_ {1}} ^ {*} \right\| = \left| Q _ {\tau_ {2}} ^ {*} (s, a _ {1}) - Q _ {\tau_ {1}} ^ {*} (s, a _ {1}) \right| \geq \left\| V _ {\tau_ {2}} ^ {*} - V _ {\tau_ {1}} ^ {*} \right\|. \tag {152}
+$$
+
Since $\tau_1, \tau_2 \geq \tau'$ , we have $V_{\tau_2}^*(s) = Q_{\tau_2}^*(s, a_1)$ and $V_{\tau_1}^*(s) = Q_{\tau_1}^*(s, a_1)$ , and we further have that
+
+$$
+V _ {\tau_ {2}} ^ {*} (s) - V _ {\tau_ {1}} ^ {*} (s) = Q _ {\tau_ {2}} ^ {*} (s, a _ {1}) - Q _ {\tau_ {1}} ^ {*} (s, a _ {1}) \geq \| V _ {\tau_ {2}} ^ {*} - V _ {\tau_ {1}} ^ {*} \|, \tag {153}
+$$
+
+and hence
+
+$$
+\left\| V _ {\tau_ {2}} ^ {*} - V _ {\tau_ {1}} ^ {*} \right\| = Q _ {\tau_ {2}} ^ {*} (s, a _ {1}) - Q _ {\tau_ {1}} ^ {*} (s, a _ {1}). \tag {154}
+$$
+
+Now from the robust Bellman equation, it holds that
+
+$$
\begin{array}{l} Q _ {\tau_ {2}} ^ {*} (s, a _ {1}) - Q _ {\tau_ {1}} ^ {*} (s, a _ {1}) \\ = \left\| V _ {\tau_ {2}} ^ {*} - V _ {\tau_ {1}} ^ {*} \right\| \\ = r _ {\tau_ {2}} (s, a _ {1}) - r _ {\tau_ {1}} (s, a _ {1}) + \gamma \left(\sigma_ {\mathcal {P} _ {s} ^ {a _ {1}}} \left(V _ {\tau_ {2}} ^ {*}\right) - \sigma_ {\mathcal {P} _ {s} ^ {a _ {1}}} \left(V _ {\tau_ {1}} ^ {*}\right)\right) \\ \geq r _ {\tau_ {2}} (s, a _ {1}) - r _ {\tau_ {1}} (s, a _ {1}), \tag {155} \end{array}
+$$
+
+due to the monotonicity properties of the support functions.
+
+We note that (148) and (155) exactly match eqs. (99) and (102) in (Li et al., 2020), and hence the rest of the proof follows similarly.
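The 1-Lipschitz property used in (147) and the monotonicity of the support function used in (155) can both be sanity-checked numerically. The sketch below replaces the uncertainty set with a finite set of random distributions, an illustrative surrogate: the infimum of linear functionals $q \mapsto qV$ over any set of distributions is 1-Lipschitz in the sup norm and monotone, so the finite case captures the mechanism.

```python
import random

# Support function over a finite surrogate uncertainty set of distributions.
random.seed(2)
n, m = 4, 6

def rand_dist():
    w = [random.random() for _ in range(n)]
    s = sum(w)
    return [x / s for x in w]

U = [rand_dist() for _ in range(m)]   # finite surrogate for the uncertainty set

def sigma(V):
    # worst-case expectation of V over the (finite) uncertainty set
    return min(sum(qi * vi for qi, vi in zip(q, V)) for q in U)

for _ in range(100):
    V1 = [random.uniform(-5.0, 5.0) for _ in range(n)]
    V2 = [random.uniform(-5.0, 5.0) for _ in range(n)]
    gap = max(abs(a - b) for a, b in zip(V1, V2))
    assert abs(sigma(V1) - sigma(V2)) <= gap + 1e-9   # 1-Lipschitz in sup norm
    V3 = [v + random.random() for v in V1]            # V3 >= V1 elementwise
    assert sigma(V3) >= sigma(V1) - 1e-9              # monotonicity
```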
+
+Lemma 12.12. (Lemma 12.4) Consider any $\delta \in (0,1)$ . Taking $N \geq \mathcal{O}\left(\frac{\log\left(\frac{54SAN^2}{(1 - \gamma)\delta}\right)}{1 - \gamma}\right)$ , with probability at least $1 - 2\delta$ , events (105) and (106) occur, and it holds that
+
+$$
+\left| \hat {\mathsf {P}} _ {w} ^ {\hat {\pi}, \hat {V}} \tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}} - \mathsf {P} _ {w} ^ {\hat {\pi}, \hat {V}} \tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}} \right| \leq 2 \sqrt {\frac {\log \left(\frac {5 4 S A N ^ {2}}{(1 - \gamma) \delta}\right)}{N}} \sqrt {\operatorname {V a r} _ {\mathsf {P} _ {s , a}} (\tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}})} 1 + \frac {8 \log \left(\frac {5 4 S A N ^ {2}}{(1 - \gamma) \delta}\right)}{N (1 - \gamma)} 1. \tag {156}
+$$
+
Proof. The proof follows similarly to Lemma 14 of (Shi et al., 2023), only replacing $r(s, a)$ therein with $\mathbb{E}[\tilde{r}(s, a)]$ . For any $(s, a)$ , by duality we have that
+
+$$
+\left| \left(\hat {\mathsf {P}} _ {w} ^ {\hat {\pi}, \hat {V}}\right) _ {s, a} \tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}} - \left(\mathsf {P} _ {w} ^ {\hat {\pi}, \hat {V}}\right) _ {s, a} \tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}} \right| \leq \max _ {\alpha \in \left[ \min _ {s} \tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}} (s), \max _ {s} \tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}} (s) \right]} \left| \left(\mathsf {P} _ {s, a} - \hat {\mathsf {P}} _ {s, a}\right) \left[ \tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}} \right] _ {\alpha} \right|. \tag {157}
+$$
+
+Construction of auxiliary RMDPs with deterministic empirical nominal transitions. Recall that we target the empirical infinite-horizon robust MDP with the nominal transition kernel $\hat{\mathbb{P}}$ . We define the nominal transition kernel and reward function as $P^{s,u}$ and $r^{s,u}$ , which are expressed as follows
+
+$$
\left\{ \begin{array}{l l} P ^ {s, u} \left(s ^ {\prime} \mid s, a\right) = \mathbf {1} \left(s ^ {\prime} = s\right) & \text {for all } \left(s ^ {\prime}, a\right) \in \mathcal {S} \times \mathcal {A}, \\ P ^ {s, u} (\cdot \mid \widetilde {s}, a) = \hat {\mathrm {P}} (\cdot \mid \widetilde {s}, a) & \text {for all } (\widetilde {s}, a) \in \mathcal {S} \times \mathcal {A} \text { and } \widetilde {s} \neq s, \end{array} \right. \tag {158}
+$$
+
+and
+
+$$
\left\{ \begin{array}{l l} r ^ {s, u} (s, a) = u & \text {for all } a \in \mathcal {A}, \\ r ^ {s, u} (\widetilde {s}, a) = \mathbb {E} [ \tilde {r} (\widetilde {s}, a) ] & \text {for all } (\widetilde {s}, a) \in \mathcal {S} \times \mathcal {A} \text { and } \widetilde {s} \neq s. \end{array} \right. \tag {159}
+$$
+
+Correspondingly, the associated robust Bellman operator is then
+
+$$
\forall (\tilde {s}, a) \in \mathcal {S} \times \mathcal {A}: \quad \mathbf {T} _ {s, u} (Q) (\tilde {s}, a) = r ^ {s, u} (\tilde {s}, a) + \gamma \inf _ {\mathsf {P} \in \mathcal {P} \left(P _ {\tilde {s}, a} ^ {s, u}\right)} \mathsf {P} V, \quad \text {with } V (\tilde {s}) = \max _ {a} Q (\tilde {s}, a). \tag {160}
+$$
+
Fixed-point equivalence. Recall that $\tilde{Q}_{\gamma, \hat{\mathcal{P}}}^{\hat{\pi}}$ is the unique fixed point of the robust Bellman operator $\mathbf{T}(\cdot)$, with the corresponding robust value $\tilde{V}_{\gamma, \hat{\mathcal{P}}}^{\hat{\pi}}$. We assert that the robust value function $(\tilde{V}_{\gamma, \hat{\mathcal{P}}}^{\hat{\pi}})_{s, u^*}$ obtained from the fixed point of $\mathbf{T}_{s, u}(\cdot)$ coincides with $\tilde{V}_{\gamma, \hat{\mathcal{P}}}^{\hat{\pi}}$, provided that we choose $u$ in the following manner:
+
+$$
u ^ {*} = u ^ {*} (s) = \tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}} (s) - \gamma \inf _ {\mathsf {P} \in \mathcal {P} \left(e _ {s}\right)} \mathsf {P} \tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}}, \tag {161}
+$$
+
where $e_s$ is the $s$-th standard basis vector in $\mathbb{R}^S$. To verify this, we break the argument into two cases.
+
+- For state $s$ : One has for any $a \in \mathcal{A}$ :
+
+$$
\begin{aligned} r^{s,u^{*}}(s,a) + \gamma \inf_{\mathsf{P}\in \mathcal{P}(P_{s,a}^{s,u^{*}})}\mathsf{P}\tilde{V}_{\gamma ,\hat{\mathcal{P}}}^{\hat{\pi}} &= u^{*} + \gamma \inf_{\mathsf{P}\in \mathcal{P}(e_{s})}\mathsf{P}\tilde{V}_{\gamma ,\hat{\mathcal{P}}}^{\hat{\pi}} \\ &= \tilde{V}_{\gamma ,\hat{\mathcal{P}}}^{\hat{\pi}}(s) - \gamma \inf_{\mathsf{P}\in \mathcal{P}(e_{s})}\mathsf{P}\tilde{V}_{\gamma ,\hat{\mathcal{P}}}^{\hat{\pi}} + \gamma \inf_{\mathsf{P}\in \mathcal{P}(e_{s})}\mathsf{P}\tilde{V}_{\gamma ,\hat{\mathcal{P}}}^{\hat{\pi}} = \tilde{V}_{\gamma ,\hat{\mathcal{P}}}^{\hat{\pi}}(s), \end{aligned} \tag {162}
+$$
+
+where the first equality follows from the definition of $P_{s,a}^{s,u^*}$ in (158), and the second equality follows from plugging in the definition of $u^*$ in (161).
+
+- For state $s' \neq s$ : It is easily verified that for all $a \in \mathcal{A}$ ,
+
+$$
\begin{aligned} r^{s,u^{*}}(s^{\prime},a) + \gamma \inf_{\mathsf{P}\in \mathcal{P}(P_{s^{\prime},a}^{s,u^{*}})}\mathsf{P}\tilde{V}_{\gamma ,\hat{\mathcal{P}}}^{\hat{\pi}} &= r(s^{\prime},a) + \gamma \inf_{\mathsf{P}\in \mathcal{P}(\hat{\mathsf{P}}_{s^{\prime},a})}\mathsf{P}\tilde{V}_{\gamma ,\hat{\mathcal{P}}}^{\hat{\pi}} \\ &= \mathbf{T}\left(\tilde{Q}_{\gamma ,\hat{\mathcal{P}}}^{\hat{\pi}}\right)(s^{\prime},a) = \tilde{Q}_{\gamma ,\hat{\mathcal{P}}}^{\hat{\pi}}(s^{\prime},a), \end{aligned} \tag {163}
+$$
+
+where the first equality follows from the definitions in (159) and (158), and the last line arises from the definition of the robust Bellman operator, and that $\tilde{Q}_{\gamma,\hat{\mathcal{P}}}^{\hat{\pi}}$ is the fixed point of $\mathbf{T}(\cdot)$ .
+
+Combining the facts in the above two cases, we establish that there exists a fixed point $(\tilde{Q}_{\gamma ,\hat{\mathcal{P}}}^{\hat{\pi}})_{s,u^{*}}$ of the operator $\mathbf{T}_{s,u^{*}}(\cdot)$ by taking
+
+$$
\left\{ \begin{array}{l l} \left(\tilde {Q} _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}}\right) _ {s, u ^ {*}} (s, a) = \tilde {V} _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}} (s) & \text {for all } a \in \mathcal {A}, \\ \left(\tilde {Q} _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}}\right) _ {s, u ^ {*}} \left(s ^ {\prime}, a\right) = \tilde {Q} _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}} \left(s ^ {\prime}, a\right) & \text {for all } s ^ {\prime} \neq s \text { and } a \in \mathcal {A}. \end{array} \right. \tag {164}
+$$
+
+Consequently, we confirm the existence of a fixed point of the operator $\mathbf{T}_{s,u^*}(\cdot)$ . In addition, its corresponding value function $(\tilde{V}_{\gamma,\hat{\mathcal{P}}}^{\hat{\pi}})_{s,u^*}$ also coincides with $\tilde{V}_{\gamma,\hat{\mathcal{P}}}^{\hat{\pi}}$ .
+
This equivalence exactly matches Steps 1 and 2 in Lemma 14 of (Shi et al., 2023), and hence the remaining part follows directly.
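The construction (158)-(161) can be sanity-checked numerically in the special case of singleton uncertainty sets, where $\mathcal{P}(e_s) = \{e_s\}$ and the robust value reduces to the standard optimal value. The sketch below (our own illustration; `value_iteration` is a hypothetical helper) makes state $s$ absorbing with reward $u^*$ and confirms that the optimal values coincide:

```python
import numpy as np

def value_iteration(P, r, gamma, iters=2000):
    """Standard optimal value iteration (singleton uncertainty sets)."""
    V = np.zeros(P.shape[0])
    for _ in range(iters):
        V = (r + gamma * P @ V).max(axis=1)
    return V

rng = np.random.default_rng(0)
S, A, gamma = 4, 3, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))   # nominal kernel, shape (S, A, S)
r = rng.uniform(size=(S, A))

V = value_iteration(P, r, gamma)
s = 2
u_star = V[s] - gamma * V[s]       # (161): inf over P(e_s) = {e_s} gives e_s^T V = V[s]

P_aux, r_aux = P.copy(), r.copy()
P_aux[s] = 0.0
P_aux[s, :, s] = 1.0               # (158): state s becomes absorbing
r_aux[s] = u_star                  # (159): reward u* at state s
V_aux = value_iteration(P_aux, r_aux, gamma)

assert np.allclose(V, V_aux, atol=1e-6)   # fixed points coincide, as asserted above
```

Uniqueness of the fixed point of the (contractive) auxiliary operator is what forces the two value functions to agree once the two-case check above goes through.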
+
+# 13. Proof of Theorem 4.4 Part 2
+
The proof of Theorem 4.4, Part 2, follows a similar structure. We note that it suffices to show that
+
+$$
\left\| V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} - V _ {\gamma , \mathcal {P}} ^ {\hat {\pi}} \right\| _ {\infty} \leq 16 \sqrt {\frac {2 (1 + R) \mathcal {H} ^ {2} \log \left(\frac {36 S A N ^ {2}}{\delta}\right)}{(1 - \gamma) ^ {2} N}}. \tag {165}
+$$
+
+In order to control the performance gap $\left\| V_{\gamma, \mathcal{P}}^{\pi^{*}} - V_{\gamma, \mathcal{P}}^{\hat{\pi}} \right\|_{\infty}$ , note that
+
+$$
V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} - V _ {\gamma , \mathcal {P}} ^ {\hat {\pi}} \leq V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} - V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi^ {*}} + V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi^ {*}} - V _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}} + V _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}} - V _ {\gamma , \mathcal {P}} ^ {\hat {\pi}} \leq V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} - V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi^ {*}} + V _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}} - V _ {\gamma , \mathcal {P}} ^ {\hat {\pi}}. \tag {166}
+$$
+
It is hence sufficient to bound the two terms on the RHS; the middle term $V_{\gamma ,\hat{\mathcal{P}}}^{\pi^{*}} - V_{\gamma ,\hat{\mathcal{P}}}^{\hat{\pi}}$ was dropped in the last step since it is nonpositive, $\hat{\pi}$ being optimal for the empirical robust MDP.
+
+Part A: $\left\| V_{\gamma ,\hat{\mathcal{P}}}^{\pi^{*}} - V_{\gamma ,\mathcal{P}}^{\pi^{*}}\right\|_{\infty}$ . Towards this, recall the bound in (80):
+
+$$
+\begin{array}{l} \left\| V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi^ {*}} - V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} \right\| _ {\infty} \leq \gamma \max \left\{\left\| \left(I - \gamma \hat {\mathsf {P}} _ {w} ^ {\pi^ {*}, \widehat {V}}\right) ^ {- 1} \left(\hat {\mathsf {P}} _ {w} ^ {\pi^ {*}, V} V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} - \mathsf {P} _ {w} ^ {\pi^ {*}, V} V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}}\right) \right\| _ {\infty}, \right. \\ \left. \left\| \left(I - \gamma \hat {\mathsf {P}} _ {w} ^ {\pi^ {*}, V}\right) ^ {- 1} \left(\hat {\mathsf {P}} _ {w} ^ {\pi^ {*}, V} V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} - \mathsf {P} _ {w} ^ {\pi^ {*}, V} V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}}\right) \right\| _ {\infty} \right\}. \tag {167} \\ \end{array}
+$$
+
+To control the main term $\hat{\mathsf{P}}_{w}^{\pi^{\star},V}V_{\gamma ,\mathcal{P}}^{\pi^{*}} - \mathsf{P}_{w}^{\pi^{\star},V}V_{\gamma ,\mathcal{P}}^{\pi^{*}}$ in (167), we first introduce the following lemma.
+
+Lemma 13.1. For any $\delta \in (0,1)$ and any fixed policy $\pi$ , one has with probability at least $1 - \delta$
+
+$$
+\left\| \hat {\mathsf {P}} _ {w} ^ {\pi , V} V _ {\gamma , \mathcal {P}} ^ {\pi} - \mathsf {P} _ {w} ^ {\pi , V} V _ {\gamma , \mathcal {P}} ^ {\pi} \right\| _ {\infty} \leq 4 \sqrt {\frac {2 (1 + R) \mathcal {H} ^ {2} \log (\frac {2 4 S A N}{\delta})}{N}}. \tag {168}
+$$
+
+Applying Lemma 13.1 by taking $\pi = \pi^{\star}$ gives
+
+$$
+\left\| \hat {\mathrm {P}} _ {w} ^ {\pi^ {*}, V} V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} - \mathrm {P} _ {w} ^ {\pi^ {*}, V} V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} \right\| _ {\infty} \leq 4 \sqrt {\frac {2 (1 + R) \mathcal {H} ^ {2} \log (\frac {2 4 S A N}{\delta})}{N}}, \tag {169}
+$$
+
+which directly leads to
+
+$$
+\begin{array}{l} \left\| \left(I - \gamma \hat {\mathsf {P}} _ {w} ^ {\pi^ {*}, \widehat {V}}\right) ^ {- 1} \left(\hat {\mathsf {P}} _ {w} ^ {\pi^ {*}, V} V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} - \mathsf {P} _ {w} ^ {\pi^ {*}, V} V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}}\right) \right\| _ {\infty} \\ \leq \left\| \hat {\mathrm {P}} _ {w} ^ {\pi^ {*}, V} V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} - \mathrm {P} _ {w} ^ {\pi^ {*}, V} V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} \right\| _ {\infty} \cdot \left\| \left(I - \gamma \hat {\mathrm {P}} _ {w} ^ {\pi^ {*}, \widehat {V}}\right) ^ {- 1} 1 \right\| _ {\infty} \leq 4 \sqrt {\frac {2 (1 + R) \mathcal {H} ^ {2} \log (\frac {2 4 S A N}{\delta})}{(1 - \gamma) ^ {2} N}}. \tag {170} \\ \end{array}
+$$
+
+Similarly, we have
+
+$$
+\left\| \left(I - \gamma \hat {\mathsf {P}} _ {w} ^ {\pi^ {*}, V}\right) ^ {- 1} \left(\hat {\mathsf {P}} _ {w} ^ {\pi^ {*}, V} V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} - \mathsf {P} _ {w} ^ {\pi^ {*}, V} V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}}\right) \right\| _ {\infty} \leq 4 \sqrt {\frac {2 (1 + R) \mathcal {H} ^ {2} \log (\frac {2 4 S A N}{\delta})}{(1 - \gamma) ^ {2} N}}. \tag {171}
+$$
+
+Inserting (170) and (171) back to (167) yields
+
+$$
+\left\| V _ {\gamma , \hat {\mathcal {P}}} ^ {\pi^ {*}} - V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} \right\| _ {\infty} \leq 4 \sqrt {\frac {2 (1 + R) \mathcal {H} ^ {2} \log \left(\frac {2 4 S A N}{\delta}\right)}{(1 - \gamma) ^ {2} N}}. \tag {172}
+$$
+
+Part B: controlling $\left\| V_{\gamma ,\hat{\mathcal{P}}}^{\hat{\pi}} - V_{\gamma ,\mathcal{P}}^{\hat{\pi}}\right\|_{\infty}$ . Similarly, we have that
+
+$$
\begin{aligned} \left\| V _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}} - V _ {\gamma , \mathcal {P}} ^ {\hat {\pi}} \right\| _ {\infty} \leq \gamma \max \Bigg\{ & \left\| \left(I - \gamma \mathsf {P} _ {w} ^ {\hat {\pi}, V}\right) ^ {- 1} \left(\hat {\mathsf {P}} _ {w} ^ {\hat {\pi}, \widehat {V}} V _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}} - \mathsf {P} _ {w} ^ {\hat {\pi}, \widehat {V}} V _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}}\right) \right\| _ {\infty}, \\ & \left\| \left(I - \gamma \mathsf {P} _ {w} ^ {\hat {\pi}, \widehat {V}}\right) ^ {- 1} \left(\hat {\mathsf {P}} _ {w} ^ {\hat {\pi}, \widehat {V}} V _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}} - \mathsf {P} _ {w} ^ {\hat {\pi}, \widehat {V}} V _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}}\right) \right\| _ {\infty} \Bigg\}. \end{aligned} \tag {173}
+$$
+
We introduce the following lemma, which controls $\hat{\mathsf{P}}_{w}^{\hat{\pi},\hat{V}}V_{\gamma ,\hat{\mathcal{P}}}^{\hat{\pi}} - \mathsf{P}_{w}^{\hat{\pi},\hat{V}}V_{\gamma ,\hat{\mathcal{P}}}^{\hat{\pi}}$ in (173):
+
+Lemma 13.2. With probability at least $1 - \delta$ , one has
+
+$$
\left\| \hat {\mathsf {P}} _ {w} ^ {\hat {\pi}, \hat {V}} V _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}} - \mathsf {P} _ {w} ^ {\hat {\pi}, \hat {V}} V _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}} \right\| _ {\infty} \leq 12 \sqrt {\frac {2 (1 + R) \mathcal {H} ^ {2} \log (\frac {36 S A N ^ {2}}{\delta})}{N}}. \tag {174}
+$$
+
+Repeating the arguments from (169) to (172) yields
+
+$$
+\left\| V _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}} - V _ {\gamma , \mathcal {P}} ^ {\hat {\pi}} \right\| _ {\infty} \leq 1 2 \sqrt {\frac {2 (1 + R) \mathcal {H} ^ {2} \log \left(\frac {3 6 S A N ^ {2}}{\delta}\right)}{(1 - \gamma) ^ {2} N}}. \tag {175}
+$$
+
+Finally, combining all bounds together implies that
+
+$$
\begin{aligned} \left\| V _ {\gamma , \mathcal {P}} ^ {\pi^ {*}} - V _ {\gamma , \mathcal {P}} ^ {\hat {\pi}} \right\| _ {\infty} &\leq 4 \sqrt {\frac {2 (1 + R) \mathcal {H} ^ {2} \log (\frac {24 S A N}{\delta})}{(1 - \gamma) ^ {2} N}} + 12 \sqrt {\frac {2 (1 + R) \mathcal {H} ^ {2} \log (\frac {36 S A N ^ {2}}{\delta})}{(1 - \gamma) ^ {2} N}} \\ &\leq 16 \sqrt {\frac {2 (1 + R) \mathcal {H} ^ {2} \log \left(\frac {36 S A N ^ {2}}{\delta}\right)}{(1 - \gamma) ^ {2} N}}. \end{aligned} \tag {176}
+$$
+
+# 13.1. Proofs of Lemmas
+
+Lemma 13.3. (Lemma 13.1) For any $\delta \in (0,1)$ and any fixed policy $\pi$ , one has with probability at least $1 - \delta$
+
+$$
+\left\| \hat {\mathsf {P}} _ {w} ^ {\pi , V} V _ {\gamma , \mathcal {P}} ^ {\pi} - \mathsf {P} _ {w} ^ {\pi , V} V _ {\gamma , \mathcal {P}} ^ {\pi} \right\| _ {\infty} \leq 4 \sqrt {\frac {2 (1 + R) \mathcal {H} ^ {2} \log (\frac {2 4 S A N}{\delta})}{N}}. \tag {177}
+$$
+
Proof. Step 1: controlling the point-wise concentration. Consider any fixed policy $\pi$ and the corresponding robust value vector $V \triangleq V_{\gamma, \mathcal{P}}^{\pi} - \frac{g_{\mathcal{P}}^{\pi}}{1 - \gamma}$ (independent of $\hat{\mathsf{P}}$). We note that $\|V\|_{\infty} \leq \mathcal{H}$, as shown before. By the duality of CS sets (Shi et al., 2023), it holds that
+
+$$
\begin{aligned} & \left| \left(\hat{\mathsf{P}}_{w}^{\pi ,V}\right)_{s,a} V_{\gamma ,\mathcal{P}}^{\pi} - \left(\mathsf{P}_{w}^{\pi ,V}\right)_{s,a} V_{\gamma ,\mathcal{P}}^{\pi} \right| \\ &= \left| \max_{\alpha \in [\min_{s} V(s), \max_{s} V(s)]} \left\{ \mathsf{P}_{s,a}[V]_{\alpha} - \sqrt{R\operatorname{Var}_{\mathsf{P}_{s,a}}([V]_{\alpha})} \right\} - \max_{\alpha \in [\min_{s} V(s), \max_{s} V(s)]} \left\{ \hat{\mathsf{P}}_{s,a}[V]_{\alpha} - \sqrt{R\operatorname{Var}_{\hat{\mathsf{P}}_{s,a}}([V]_{\alpha})} \right\} \right| \\ &\leq \max_{\alpha \in [\min_{s} V(s), \max_{s} V(s)]} \left| \left(\mathsf{P}_{s,a} - \hat{\mathsf{P}}_{s,a}\right)[V]_{\alpha} + \sqrt{R\operatorname{Var}_{\hat{\mathsf{P}}_{s,a}}([V]_{\alpha})} - \sqrt{R\operatorname{Var}_{\mathsf{P}_{s,a}}([V]_{\alpha})} \right| \\ &\leq \max_{\alpha \in [\min_{s} V(s), \max_{s} V(s)]} \left| \left(\mathsf{P}_{s,a} - \hat{\mathsf{P}}_{s,a}\right)[V]_{\alpha} \right| + \max_{\alpha \in [\min_{s} V(s), \max_{s} V(s)]} \sqrt{R} \left| \sqrt{\operatorname{Var}_{\hat{\mathsf{P}}_{s,a}}([V]_{\alpha})} - \sqrt{\operatorname{Var}_{\mathsf{P}_{s,a}}([V]_{\alpha})} \right|, \end{aligned} \tag {178}
+$$
+
+where the first inequality follows by the maximum operator being 1-Lipschitz, and the second inequality follows from the triangle inequality.
+
+The first term in (178) can be directly bounded through an $\epsilon$ -net technique and Hoeffding's inequality, which implies that with probability at least $1 - \delta$ ,
+
+$$
\max _ {\alpha \in \left[ \min _ {s} V (s), \max _ {s} V (s) \right]} \left| \left(\mathsf {P} _ {s, a} - \hat {\mathsf {P}} _ {s, a}\right) [ V ] _ {\alpha} \right| \leq 2 \sqrt {\frac {\mathcal {H} ^ {2} \log (\frac {2 S A N}{\delta})}{N}}, \tag {179}
$$

holds for all $(s,a)$.
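To make the $\epsilon$-net step concrete: with the clipping convention $[V]_{\alpha} := \min\{V, \alpha\}$ (the convention assumed here, following the CS-set analysis), the supremum over $\alpha$ can be approximated by a maximum over a finite grid. A minimal sketch, with function names of our own choosing:

```python
import numpy as np

def clip_value(V, alpha):
    # [V]_alpha: entrywise clipping of V at level alpha (assumed convention)
    return np.minimum(V, alpha)

def max_deviation_over_net(p, p_hat, V, eps):
    """Finite-grid proxy for max_alpha |(P_{s,a} - P_hat_{s,a}) [V]_alpha|
    appearing in (179), with alpha ranging over an eps-net of [min V, max V]."""
    net = np.arange(V.min(), V.max() + eps, eps)
    return max(abs((p - p_hat) @ clip_value(V, a)) for a in net)
```

Since $\alpha \mapsto [V]_{\alpha}$ is 1-Lipschitz in $\alpha$ (entrywise), the grid maximum is within $2\epsilon$ of the true supremum, which is exactly why the continuum supremum in (179) can be paid for with only a $\log |\mathcal{N}_{\epsilon}|$ factor.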
+
Step 2: controlling the second term in (178). Consider a fixed $\alpha \in [0, \frac{1}{1 - \gamma}]$. Applying Lemma 6 of (Panaganti & Kalathil, 2021) with $\| [V]_{\alpha}\|_{\infty} \leq \mathcal{H}$, we get that
+
+$$
+\left| \sqrt {\operatorname {V a r} _ {\hat {\mathrm {P}} _ {s , a}} ([ V ] _ {\alpha})} - \sqrt {\operatorname {V a r} _ {\mathrm {P} _ {s , a}} ([ V ] _ {\alpha})} \right| \leq \sqrt {\frac {2 \log (\frac {2}{\delta}) \mathcal {H}}{N}} \tag {180}
+$$
+
holds with probability at least $1 - \delta$. We then introduce the following lemma, whose proof can be derived similarly to that of Lemma 18 of (Shi et al., 2023).
+
+Lemma 13.4. For any $V$ obeying $\| V\|_{\infty}\leq \mathcal{H}$ , the function $J_{s,a}(\alpha ,V)\coloneqq \left|\sqrt{\operatorname{Var}_{\hat{\mathsf{P}}_{s,a}}([V]_{\alpha})} -\sqrt{\operatorname{Var}_{\mathsf{P}_{s,a}}([V]_{\alpha})}\right|$ w.r.t. $\alpha$ obeys
+
+$$
+| J _ {s, a} (\alpha_ {1}, V) - J _ {s, a} (\alpha_ {2}, V) | \leq 4 \sqrt {| \alpha_ {1} - \alpha_ {2} | \mathcal {H}}.
+$$
+
+We then construct an $\epsilon$ -net $\mathcal{N}$ over $[0, \mathcal{H}]$ with size $N_{n} \leq 3\epsilon \mathcal{H}$ (Vershynin, 2018), so that with probability at least $1 - \frac{\delta}{SA}$ , it holds that for any $(s, a)$ ,
+
+$$
\begin{aligned} & \max_{\alpha \in [\min_{s} V(s), \max_{s} V(s)]} \left| \sqrt{\operatorname{Var}_{\hat{\mathsf{P}}_{s,a}}([V]_{\alpha})} - \sqrt{\operatorname{Var}_{\mathsf{P}_{s,a}}([V]_{\alpha})} \right| \\ &\leq \max_{\alpha \in [0, 1/(1-\gamma)]} \left| \sqrt{\operatorname{Var}_{\hat{\mathsf{P}}_{s,a}}([V]_{\alpha})} - \sqrt{\operatorname{Var}_{\mathsf{P}_{s,a}}([V]_{\alpha})} \right| \\ &\stackrel{\text{(i)}}{\leq} 4\sqrt{\frac{\epsilon}{1-\gamma}} + \sup_{\alpha \in \mathcal{N}_{\epsilon}} \left| \sqrt{\operatorname{Var}_{\hat{\mathsf{P}}_{s,a}}([V]_{\alpha})} - \sqrt{\operatorname{Var}_{\mathsf{P}_{s,a}}([V]_{\alpha})} \right| \\ &\stackrel{\text{(ii)}}{\leq} 4\sqrt{\frac{\epsilon}{1-\gamma}} + \sqrt{\frac{2\mathcal{H}^{2}\log(\frac{2SA|\mathcal{N}_{\epsilon}|}{\delta})}{N}} \\ &\stackrel{\text{(iii)}}{\leq} 2\sqrt{\frac{2\mathcal{H}^{2}\log(\frac{2SA|\mathcal{N}_{\epsilon}|}{\delta})}{N}} \\ &\leq 2\sqrt{\frac{2\mathcal{H}^{2}\log(\frac{24SAN}{\delta})}{N}}, \end{aligned} \tag {181}
+$$
+
where (i) holds by the property of the $\epsilon$-net, (ii) follows from (180) together with a union bound over $\mathcal{N}_{\epsilon}$, and (iii) follows from taking $\epsilon = \frac{\mathcal{H}\log\left(\frac{2SA|\mathcal{N}_{\epsilon}|}{\delta}\right)}{8N}$. Inserting (179) and (181) back into (178) and taking the union bound over $(s,a)$, we have with probability at least $1 - \delta$,
+
+$$
\begin{aligned} \left| (\hat{\mathsf{P}}_{w}^{\pi ,V})_{s,a} V - (\mathsf{P}_{w}^{\pi ,V})_{s,a} V \right| &\leq \max_{\alpha \in [\min_{s} V(s), \max_{s} V(s)]} \left| (\mathsf{P}_{s,a} - \hat{\mathsf{P}}_{s,a})[V]_{\alpha} \right| \\ &\quad + \max_{\alpha \in [\min_{s} V(s), \max_{s} V(s)]} \left| \sqrt{R\operatorname{Var}_{\hat{\mathsf{P}}_{s,a}}([V]_{\alpha})} - \sqrt{R\operatorname{Var}_{\mathsf{P}_{s,a}}([V]_{\alpha})} \right| \\ &\leq 4\sqrt{\frac{2(1+R)\mathcal{H}^{2}\log(\frac{24SAN}{\delta})}{N}}. \end{aligned}
+$$
+
Finally, we complete the proof by recalling the matrix form below:

$$
\left\| \hat {\mathsf {P}} _ {w} ^ {\pi , V} V _ {\gamma , \mathcal {P}} ^ {\pi} - \mathsf {P} _ {w} ^ {\pi , V} V _ {\gamma , \mathcal {P}} ^ {\pi} \right\| _ {\infty} \leq \max _ {(s, a)} \left| (\hat {\mathsf {P}} _ {w} ^ {\pi , V}) _ {s, a} V - (\mathsf {P} _ {w} ^ {\pi , V}) _ {s, a} V \right| \leq 4 \sqrt {\frac {2 (1 + R) \mathcal {H} ^ {2} \log (\frac {24 S A N}{\delta})}{N}}.
+$$
+
+Lemma 13.5. (Lemma 13.2) With probability at least $1 - \delta$ , one has
+
+$$
+\left\| \hat {\mathsf {P}} _ {w} ^ {\hat {\pi}, \hat {V}} V _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}} - \mathsf {P} _ {w} ^ {\hat {\pi}, \hat {V}} V _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}} \right\| _ {\infty} \leq 1 2 \sqrt {\frac {2 (1 + R) \mathcal {H} ^ {2} \log (\frac {3 6 S A N ^ {2}}{\delta})}{N}}. \tag {182}
+$$
+
Proof. For any $(s, a)$, following the same arguments as in (178) yields
+
+$$
\begin{aligned} & \left| (\hat{\mathsf{P}}_{w}^{\hat{\pi},\hat{V}})_{s,a} V_{\gamma ,\hat{\mathcal{P}}}^{\hat{\pi}} - (\mathsf{P}_{w}^{\hat{\pi},\hat{V}})_{s,a} V_{\gamma ,\hat{\mathcal{P}}}^{\hat{\pi}} \right| \\ &= \left| (\hat{\mathsf{P}}_{w}^{\hat{\pi},\hat{V}})_{s,a} \big(V_{\gamma ,\hat{\mathcal{P}}}^{\hat{\pi}} - g_{\hat{\mathcal{P}}}^{\hat{\pi}}\big) - (\mathsf{P}_{w}^{\hat{\pi},\hat{V}})_{s,a} \big(V_{\gamma ,\hat{\mathcal{P}}}^{\hat{\pi}} - g_{\hat{\mathcal{P}}}^{\hat{\pi}}\big) \right| \\ &\leq \max_{\alpha \in \left[\min_{s} V_{\gamma ,\hat{\mathcal{P}}}^{\hat{\pi}}(s) - g_{\hat{\mathcal{P}}}^{\hat{\pi}},\, \max_{s} V_{\gamma ,\hat{\mathcal{P}}}^{\hat{\pi}}(s) - g_{\hat{\mathcal{P}}}^{\hat{\pi}}\right]} \left| \left(\mathsf{P}_{s,a} - \hat{\mathsf{P}}_{s,a}\right)\left[V_{\gamma ,\hat{\mathcal{P}}}^{\hat{\pi}} - g_{\hat{\mathcal{P}}}^{\hat{\pi}}\right]_{\alpha} \right| \\ &\quad + \max_{\alpha \in \left[\min_{s} V_{\gamma ,\hat{\mathcal{P}}}^{\hat{\pi}}(s) - g_{\hat{\mathcal{P}}}^{\hat{\pi}},\, \max_{s} V_{\gamma ,\hat{\mathcal{P}}}^{\hat{\pi}}(s) - g_{\hat{\mathcal{P}}}^{\hat{\pi}}\right]} \sqrt{R}\left| \sqrt{\operatorname{Var}_{\hat{\mathsf{P}}_{s,a}}\left(\left[V_{\gamma ,\hat{\mathcal{P}}}^{\hat{\pi}} - g_{\hat{\mathcal{P}}}^{\hat{\pi}}\right]_{\alpha}\right)} - \sqrt{\operatorname{Var}_{\mathsf{P}_{s,a}}\left(\left[V_{\gamma ,\hat{\mathcal{P}}}^{\hat{\pi}} - g_{\hat{\mathcal{P}}}^{\hat{\pi}}\right]_{\alpha}\right)} \right|. \end{aligned} \tag {183}
+$$
+
+The first term in (183) can be bounded through Hoeffding's inequality as
+
+$$
\begin{aligned} & \max_{\alpha \in \left[\min_{s} V_{\gamma ,\hat{\mathcal{P}}}^{\hat{\pi}}(s) - g_{\hat{\mathcal{P}}}^{\hat{\pi}},\, \max_{s} V_{\gamma ,\hat{\mathcal{P}}}^{\hat{\pi}}(s) - g_{\hat{\mathcal{P}}}^{\hat{\pi}}\right]} \left| \left(\mathsf{P}_{s,a} - \hat{\mathsf{P}}_{s,a}\right)\left[V_{\gamma ,\hat{\mathcal{P}}}^{\hat{\pi}} - g_{\hat{\mathcal{P}}}^{\hat{\pi}}\right]_{\alpha} \right| \\ &\leq \max_{\alpha \in [-\mathcal{H}, \mathcal{H}]} \left| \left(\mathsf{P}_{s,a} - \hat{\mathsf{P}}_{s,a}\right)\left[V_{\gamma ,\hat{\mathcal{P}}}^{\hat{\pi}} - g_{\hat{\mathcal{P}}}^{\hat{\pi}}\right]_{\alpha} \right| \\ &\leq 4\sqrt{\mathcal{H}^{2}\frac{\log\left(\frac{3SAN^{3/2}}{(1-\gamma)\delta}\right)}{N}}. \end{aligned} \tag {184}
+$$
+
We then consider the second term of (183). Towards this, we can construct an auxiliary robust MDP $(\mathcal{S},\mathcal{A},\mathcal{P}^{s,u},r^{s,u},\gamma)$ as in Section D.2.2 of (Shi et al., 2023), so that $V_{\gamma ,\hat{\mathcal{P}}}^{\hat{\pi}} - g_{\hat{\mathcal{P}}}^{\hat{\pi}} = V_{s,u^{*}} - g_{s,u^{*}}$ for some $u^{*}\in [-\mathcal{H},\mathcal{H}]$. We then construct an $\epsilon$-net $\mathcal{N}$ over $[- \mathcal{H},\mathcal{H}]$, so that $|u^{*} - u|\leq \epsilon$ for some $u\in \mathcal{N}$. Following (Shi et al., 2023), it holds that
+
+$$
\begin{aligned} & \max_{\alpha \in \left[\min_{s} V_{\gamma ,\hat{\mathcal{P}}}^{\hat{\pi}}(s) - g_{\hat{\mathcal{P}}}^{\hat{\pi}},\, \max_{s} V_{\gamma ,\hat{\mathcal{P}}}^{\hat{\pi}}(s) - g_{\hat{\mathcal{P}}}^{\hat{\pi}}\right]} \left| \sqrt{\operatorname{Var}_{\hat{\mathsf{P}}_{s,a}}\left(\left[V_{\gamma ,\hat{\mathcal{P}}}^{\hat{\pi}} - g_{\hat{\mathcal{P}}}^{\hat{\pi}}\right]_{\alpha}\right)} - \sqrt{\operatorname{Var}_{\mathsf{P}_{s,a}}\left(\left[V_{\gamma ,\hat{\mathcal{P}}}^{\hat{\pi}} - g_{\hat{\mathcal{P}}}^{\hat{\pi}}\right]_{\alpha}\right)} \right| \\ &\leq 6\sqrt{\frac{2R\mathcal{H}^{2}\log\left(\frac{36SAN^{2}|\mathcal{N}|}{\delta}\right)}{N}}, \end{aligned} \tag {185}
+$$
+
with probability at least $1 - \delta$.

Inserting (184) and (185) back into (183), we have that with probability at least $1 - \delta$,
+
+$$
+\left\| \hat {\mathsf {P}} _ {w} ^ {\hat {\pi}, \hat {V}} V _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}} - \mathsf {P} _ {w} ^ {\hat {\pi}, \hat {V}} V _ {\gamma , \hat {\mathcal {P}}} ^ {\hat {\pi}} \right\| _ {\infty} \leq 1 2 \sqrt {\frac {2 (1 + R) \mathcal {H} ^ {2} \log (\frac {3 6 S A N ^ {2}}{\delta})}{N}}. \tag {186}
+$$
+
+# 14. Function Approximation for Robust AMDPs
+
The algorithm design is almost identical to that of (Zhou et al., 2024). For completeness, we provide a brief discussion.
+
Following (Zhou et al., 2024), we consider uncertainty sets defined via the integral probability metric (IPM). Given some function class $\mathcal{F} \subset \mathbb{R}^S$ including the zero function, the IPM is defined by $d_{\mathcal{F}}(p,q) \coloneqq \sup_{f \in \mathcal{F}} \{ p^\top f - q^\top f \} \geq 0$ (Müller, 1997). Many metrics, such as the Kantorovich metric and total variation, are special cases of the IPM under different function classes (Müller, 1997). The IPM uncertainty set is defined as $\mathcal{P}_s^a = \{ q : d_{\mathcal{F}}(q, \mathsf{P}_s^a) \leq R \}$.
+
We also consider the linear function class, as in (Zhou et al., 2024). Denote by $\Phi \in \mathbb{R}^{S \times d}$ the feature matrix with rows $\phi(s)^{\top}$; we set

$$
\mathcal {F} := \{ s \mapsto \phi (s) ^ {\top} \xi : \xi \in \mathbb {R} ^ {d}, \| \xi \| \leq 1 \}. \tag {187}
$$

Without loss of generality, assume that $\Phi$ has full column rank and that the first coordinate of $\phi(s)$ is 1 for every $s$.
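For this linear class, the IPM admits a closed form: by Cauchy-Schwarz, $d_{\mathcal{F}}(p,q) = \sup_{\|\xi\|\leq 1}(p-q)^{\top}\Phi\xi = \|\Phi^{\top}(p-q)\|_{2}$. A minimal numerical sketch (function names are ours):

```python
import numpy as np

def ipm_linear(p, q, Phi):
    """IPM for F = {s -> phi(s)^T xi : ||xi||_2 <= 1}:
    sup_{||xi|| <= 1} (p - q)^T Phi xi = ||Phi^T (p - q)||_2."""
    return np.linalg.norm(Phi.T @ (p - q))

rng = np.random.default_rng(1)
Phi = rng.normal(size=(4, 2))
Phi[:, 0] = 1.0                      # first feature coordinate set to 1, per the WLOG above
p, q = rng.dirichlet(np.ones(4)), rng.dirichlet(np.ones(4))
d = ipm_linear(p, q, Phi)
for _ in range(100):                 # no feasible xi can exceed the closed form
    xi = rng.normal(size=2)
    xi /= max(1.0, np.linalg.norm(xi))
    assert (p - q) @ Phi @ xi <= d + 1e-9
```

The supremum is attained at $\xi^* = \Phi^{\top}(p-q)/\|\Phi^{\top}(p-q)\|_2$, which is what makes IPM uncertainty sets computationally convenient under linear function approximation.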
+
We then detail the algorithm design of the two update steps in Algorithm 2.
+
+# Algorithm 3 Robust Linear Temporal Difference (RLTD)
+
+1: Input: $\pi, K$
+2: Initialization: $\theta_0, s_0$
+3: for $k = 0,1,\ldots ,K - 1$ do
+4: Sample $a_{k} \sim \pi(\cdot | s_{k})$ , $y_{k+1}$ according to $\mathsf{P}_{s_{k}, a_{k}}$ , and $s_{k+1}$ from $y_{k+1}$
5: Update $\theta_{k + 1} = \theta_k + \alpha_k\phi (s_k)\left[(\hat{\mathbf{T}}^\pi V_{\theta_k})(s_k,a_k,y_{k + 1}) - \phi (s_k)^\top \theta_k\right]$
+6: end for
+7: Output: $\theta_{K}$
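For concreteness, the RLTD loop can be sketched as follows; `sample` and `robust_target` are hypothetical stand-ins for the sampling step and the empirical robust Bellman target $(\hat{\mathbf{T}}^{\pi}V_{\theta})(s,a,y)$, whose exact form depends on the uncertainty set:

```python
import numpy as np

def rltd(phi, sample, robust_target, theta0, K, alpha, s0=0):
    """Sketch of Algorithm 3 (RLTD). phi(s): feature map; sample(s) returns
    (a, y, s_next) following pi and the nominal kernel; robust_target(theta,
    s, a, y) stands in for the empirical robust Bellman target (assumed given)."""
    theta, s = np.array(theta0, dtype=float), s0
    for _ in range(K):
        a, y, s_next = sample(s)
        target = robust_target(theta, s, a, y)
        # linear TD update: theta <- theta + alpha * phi(s) * (target - phi(s)^T theta)
        theta = theta + alpha * phi(s) * (target - phi(s) @ theta)
        s = s_next
    return theta
```

With a constant target, the iteration contracts geometrically toward it, which gives a quick check that the update is implemented correctly.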
+
+We adopt the following standard assumption from robust RL studies.
+
+# Algorithm 4 Robust Q-Natural Policy Gradient (RQNPG)
+
+1: Input: $\theta, \eta, w, N$
+2: Initialization: $u_0, s_0$
+3: for $n = 0,1,\ldots ,N - 1$ do
4: Sample $a_{n} \sim \pi_{w}(\cdot | s_{n})$, $y_{n+1}$ according to $\mathsf{P}_{s_{n}, a_{n}}$, and determine $s_{n+1}$ from $y_{n+1}$
+5: Update $u_{n + 1} = u_n + \zeta_n\phi (s_n,a_n)\left[(\hat{\mathbf{T}}^\pi V_\theta)(s_n,a_n,y_{n + 1}) - \phi (s_n,a_n)^\top u_n\right]$
+6: end for
+7: Output: $w + \eta u_N$
+
+Assumption 14.1. There exists $\beta < 1$ such that
+
+$$
\gamma q _ {s, s ^ {\prime}} ^ {a} \leq \beta \mathsf {P} _ {s, s ^ {\prime}} ^ {a}, \quad \forall q _ {s} ^ {a} \in \mathcal {P} _ {s} ^ {a}. \tag {188}
+$$
+
+This assumption is widely adopted in function approximation studies in robust RL, e.g., (Tamar et al., 2014; Xu & Mannor, 2010; Zhou et al., 2024), to ensure the solvability of function approximation.
+
+We then provide the formal statement of Theorem 5.1.
+
+Theorem 14.2. Set geometrically increasing step sizes $\eta^t \geq \frac{S^2\mathcal{H}^2}{\epsilon S\mathcal{H} - \epsilon^2}\eta^{t-1}$ for each $t = 1, 2, \ldots, T$ . Set $N, K = \mathcal{O}(\mathcal{H}\epsilon^{-2})$ in Algorithm 3 and Algorithm 4, and $T \geq \frac{\mathcal{H}\log\mathcal{H}}{\epsilon}$ , then
+
+$$
+g _ {\mathcal {P}} ^ {*} - \mathbb {E} [ g _ {\mathcal {P}} ^ {\pi_ {T}} ] \leq C \left(1 - \frac {\epsilon}{\mathcal {H} S}\right) ^ {T - 1} + \frac {M \mathcal {H} ^ {2}}{\epsilon^ {2}} \epsilon_ {e} ^ {2},
+$$
+
+where $C$ is some constant, and $\epsilon_{e} = \epsilon_{stat} + \epsilon_{bias}$ measures the approximation error.
+
+Proof. To apply Theorem 1 of (Zhou et al., 2024), it is assumed that for initial state distribution $\rho$ , there exists $M$ such that $\sup_{\kappa \in \mathcal{P}}\| \frac{d_{\rho}^{*,\kappa}}{\rho}\|_{\infty}\leq M < \infty$ . We note that since $g_{\mathcal{P}}^{\pi}$ is a constant and does not depend on the initial distribution, we can simply set uniform initial distribution $\rho = (\frac{1}{S},\dots,\frac{1}{S})$ , in which case $M = S$ .
+
+By applying Theorem 1 of (Zhou et al., 2024) with $\gamma = 1 - \frac{\epsilon}{\mathcal{H}}$ , it holds that
+
+$$
+V _ {\gamma , \mathcal {P}} ^ {*} - \mathbb {E} \left[ V _ {\gamma , \mathcal {P}} ^ {\pi_ {T}} \right] \leq \left(1 - \frac {\epsilon}{\mathcal {H} S}\right) ^ {T - 1} C + \frac {S \mathcal {H} ^ {2}}{\epsilon} \epsilon_ {e} ^ {2}. \tag {189}
+$$
+
+Note that $\epsilon_{e} = \tilde{O}\left(\frac{1}{\sqrt{K}} + \frac{1}{\sqrt{N}}\right)$ when omitting the critic error, thus setting $N, K = \mathcal{O}(S\mathcal{H}\epsilon^{-2})$ implies that
+
+$$
+V _ {\gamma , \mathcal {P}} ^ {*} - \mathbb {E} \left[ V _ {\gamma , \mathcal {P}} ^ {\pi_ {T}} \right] \leq \left(1 - \frac {\epsilon}{\mathcal {H} S}\right) ^ {T - 1} C + S \mathcal {H}. \tag {190}
+$$
+
+Note that if we set $T$ large enough so that $\left(1 - \frac{\epsilon}{\mathcal{H}S}\right)^{T - 1} \leq \mathcal{H}$ , which can be satisfied if $T \geq \frac{\mathcal{H} \log \mathcal{H}}{\epsilon}$ , then $\pi_T$ is an $\mathcal{H}$ -optimal policy for the robust DMDP. Combining the above with Theorem 3.4 completes the proof.
+
+# 14.1. Details for Experiments
+
To expand on our results in Section 6, we provide additional detail here. In our experiments, for different discount factors $\gamma$, we run the corresponding RNAC algorithm (Zhou et al., 2024) to learn a robust policy, and then we estimate the robust average reward of the learned policy by calculating the average reward under environment perturbations. Experiments were performed in two custom perturbed MuJoCo environments, Walker2d-v3 and Hopper-v3, using the integral probability metric (IPM) uncertainty set (Zhou et al., 2024). We evaluate the performance of our algorithm at different perturbation levels, and the results, including one standard deviation from the mean, are shown in Figure 10. These experimental results align with our theoretical results, and thus verify the scalability of combining our framework with function approximation.
+
Figure 10. Scalability under Walker2d-v3.

To show that our framework does not incur high computational costs across varying reduction factors, we report the execution time of our method under these factors. As can be seen in Tables 1 and 2, even with a very large value of $\gamma$, the execution time is similar, showing that our method is also computationally efficient. As in Section 6, we provide additional results comparing our robust reduction method with the non-robust reduction method in Figures 11 and 12, for neural network approximation in the Walker2d-v3 and Hopper-v3 environments, respectively. Additionally, while the focus of this work is on distributional robustness, we hypothesize that our reduction framework should also work under the adversarial robustness formulation. Motivated by this idea and inspired by (Zhang et al., 2020), we conducted a preliminary experiment on (discounted) adversarial robust RL in the Humanoid-v4 environment using increasing values of $\gamma$. As our result in Figure 13 shows, the reward under attack increases as $\gamma$ increases, indicating the potential to develop a similar reduction framework for this setting.
+
+Table 1. Computational efficiency under the Walker2d-v3 environment.
+
| Phase | $\gamma = 0.9$ | $\gamma = 0.99$ | $\gamma = 0.999$ | $\gamma = 0.9999$ | $\gamma = 0.99998$ | $\gamma = 0.999980448383733$ |
| :-- | --: | --: | --: | --: | --: | --: |
| Training | 603.92 | 630.72 | 690.33 | 706.32 | 711.70 | 676.36 |
| Evaluation | 5.93 | 25.60 | 10.03 | 9.16 | 10.44 | 6.05 |
+
+Table 2. Computational efficiency under the Hopper-v3 environment.
+
| Phase | $\gamma = 0.9$ | $\gamma = 0.99$ | $\gamma = 0.999$ | $\gamma = 0.9999$ | $\gamma = 0.99998$ | $\gamma = 0.9999925319260162$ |
| :-- | --: | --: | --: | --: | --: | --: |
| Training | 579.41 | 634.98 | 678.29 | 682.53 | 686.92 | 683.56 |
| Evaluation | 6.42 | 37.80 | 7.15 | 5.80 | 7.10 | 6.84 |
+
+
+Figure 11. Neural network approximation under Walker2d-v3.
+
+
+
+
+Figure 12. Neural network approximation under Hopper-v3.
+
+
+
+
+Figure 13. Preliminary discounted adversarial robust RL under Humanoid-v4.
+
+# 15. Model-Free RL for Robust AMDPs
+
+# 15.1. Multi-Level Monte Carlo Robust Q-Learning
+
One potential way to improve scalability is to design model-free algorithms. In contrast to the model-based method in the previous section, model-free methods do not store the transition kernels and aim to learn the optimal policy directly. To illustrate the applicability of our framework and ensure its scalability, we develop two model-free algorithms that can be applied efficiently to large-scale problems. First, we introduce an algorithm that combines our reduction framework with the multi-level Monte Carlo (MLMC) technique (Liu et al., 2022b; Wang et al., 2023e; Blanchet & Glynn, 2015; Blanchet et al., 2019; Wang & Wang, 2022), and provide a theoretical analysis of its sample complexity. We further design a mini-batch model-free robust Q-learning algorithm for robust AMDPs in Section 15.2. Neither algorithm requires model estimation or storage, making them scalable for robust RL with average reward.
+
+The MLMC approach is widely used to construct an unbiased estimator of the robust Bellman operator, which is challenging due to its non-linear dependence on the nominal kernel $^9$ (Wang et al., 2023e). It relies on a geometric distribution with parameter $\Psi \in (0,1)$ , and requires $2^{N + 1}$ samples at each step, with $N \sim \mathbf{Geom}(\Psi)$ , to construct an unbiased estimator of $\sigma_{\mathcal{P}_s^a}(V)$ .
+
+By adapting the MLMC estimator with our reduction-based framework, we propose the MLMC robust Q-learning algorithm for robust AMDPs, given in Algorithm 5.
+
+Our algorithm is hence the first model-free one for robust RL under average reward, along with finite sample analysis. These results demonstrate the broad applicability of our reduction-based framework, enabling the direct integration of any model-free algorithm designed for robust DMDPs to yield algorithms for robust AMDPs with sample complexity guarantees.
+
+Algorithm 5 MLMC robust Q-learning for robust AMDPs
+1: Input: A generative model of $(S, \mathcal{A}, \mathsf{P}, r)$ , uncertainty radius $\sigma$ , robust bias span $\mathcal{H}$ , accuracy $\epsilon$ , $\Psi = 0.5$ , threshold $N_{m}$
+2: Initialization: $Q_{1} \gets 0$ , $s_{0}$
+3: $\gamma \gets 1 - \frac{\epsilon}{\mathcal{H}}$ , $T \gets \frac{1}{\mathcal{H}^{2}(1 - \gamma)^{5}}$
+4: for $t < T$ do
+5: for all $s \in S$ , $a \in \mathcal{A}$ do
+6: $V(s) \gets \max_{a} Q(s, a)$
+7: Sample $N \sim \mathrm{Geom}(\Psi)$
+8: $N \gets \min \{N_{m}, N\}$
+9: Sample $2^{N + 1}$ samples following $\mathsf{P}_{s}^{a}$
+10: Obtain the multi-level estimator according to (191)
+11: $Q(s, a) \gets r(s, a) + \gamma \hat{\sigma}_{\mathcal{P}_{s}^{a}}(V)$
+12: end for
+13: end for
+14: $\hat{\pi}_{\gamma}(s) \gets \arg \max_{a} Q(s, a), \forall s$
+15: Output: $\hat{\pi}_{\gamma}$
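+As a concrete illustration, the following Python sketch implements the loop of Algorithm 5 for a tabular problem. The callable `mlmc_estimate` (which should return the threshold-MLMC estimate from (191)) is an assumption of this sketch, as is the simple $1/(t+1)$ step size; Thm. 15.1 below prescribes a specific step size for the formal guarantees.

```python
import numpy as np

def mlmc_robust_q_learning(reward, mlmc_estimate, span_h, eps, steps=None):
    """Sketch of Algorithm 5 (synchronous updates via a generative model).

    reward: (S, A) array of r(s, a); mlmc_estimate(s, a, v) is an assumed
    callable returning the threshold-MLMC estimate of sigma_{P_s^a}(V).
    """
    n_states, n_actions = reward.shape
    gamma = 1.0 - eps / span_h              # discount chosen by the reduction
    t_max = steps if steps is not None else int(
        np.ceil(1.0 / (span_h ** 2 * (1.0 - gamma) ** 5)))
    q = np.zeros((n_states, n_actions))
    for t in range(t_max):
        v = q.max(axis=1)                   # V(s) = max_a Q(s, a)
        beta = 1.0 / (t + 1)                # simple averaging step size (an
                                            # assumption; see Thm. 15.1 for the
                                            # step size used in the analysis)
        for s in range(n_states):
            for a in range(n_actions):
                target = reward[s, a] + gamma * mlmc_estimate(s, a, v)
                q[s, a] = (1 - beta) * q[s, a] + beta * target
    return q.argmax(axis=1)                 # greedy policy pi_hat_gamma
```

+Plugging in an exact support function with uncertainty radius zero recovers ordinary synchronous Q-learning on the discounted surrogate with $\gamma = 1 - \epsilon / \mathcal{H}$.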
+
+As a result, our framework is scalable to large-scale problems while maintaining high data efficiency.
+
+The MLMC operator is based on a geometric distribution with parameter $\Psi \in (0,1)$ . For any $s,a$ , we first generate a number $N$ from a geometric distribution with parameter $\Psi$ . Then, we take action $a$ at state $s$ for $2^{N+1}$ times, and observe $r(s,a)$ and the subsequent states $\{s_i'\}, i = 1,\ldots,2^{N+1}$ . We divide these $2^{N+1}$ samples into two groups: samples with odd indices, and samples with even indices. We then individually calculate the empirical distribution of $s'$ using the even-index samples, the odd-index samples, all the samples, and the first sample: $\hat{\mathsf{P}}_{s,N+1}^{a,E} = \frac{1}{2^N}\sum_{i=1}^{2^N}\mathbf{1}_{s_{2i}'}$ , $\hat{\mathsf{P}}_{s,N+1}^{a,O} = \frac{1}{2^N}\sum_{i=1}^{2^N}\mathbf{1}_{s_{2i-1}'}$ , $\hat{\mathsf{P}}_{s,N+1}^{a} = \frac{1}{2^{N+1}}\sum_{i=1}^{2^{N+1}}\mathbf{1}_{s_i'}$ , and $\hat{\mathsf{P}}_{s,N+1}^{a,1} = \mathbf{1}_{s_1'}$ . Then, we use these estimated transition kernels as nominal kernels to construct four estimated uncertainty sets (with the same uncertainty radius): $\hat{\mathcal{P}}_{s,N+1}^{a,E},\hat{\mathcal{P}}_{s,N+1}^{a,O},\hat{\mathcal{P}}_{s,N+1}^{a},\hat{\mathcal{P}}_{s,N+1}^{a,1}$ . The multi-level estimator is then defined as
+
+$$
+\hat {\sigma} _ {\mathcal {P} _ {s} ^ {a}} (V) \triangleq \sigma_ {\hat {\mathcal {P}} _ {s, N + 1} ^ {a, 1}} (V) + \frac {\Delta_ {N} (V)}{p _ {N}}, \tag {191}
+$$
+
+where $p_N = \Psi (1 - \Psi)^N$ and
+
+$$
+\Delta_ {N} (V) \triangleq \sigma_ {\hat {\mathcal {P}} _ {s, N + 1} ^ {a}} (V) - \frac {\sigma_ {\hat {\mathcal {P}} _ {s , N + 1} ^ {a , E}} (V) + \sigma_ {\hat {\mathcal {P}} _ {s , N + 1} ^ {a , O}} (V)}{2}.
+$$
+
+Our threshold-MLMC estimator is constructed as follows
+
+$$
+\hat {\sigma} _ {\mathcal {P} _ {s} ^ {a}} (V) \triangleq \sigma_ {\hat {\mathcal {P}} _ {s, \min \{N _ {m} + 1, N + 1 \}} ^ {a, 1}} (V) + \frac {\Delta_ {\min \{N , N _ {m} \}} (V)}{p _ {\min \{N , N _ {m} \}}}.
+$$
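+In code, one draw of the threshold-MLMC estimator can be sketched as follows. The generative-model sampler `sample_next_state` and the support-function evaluator `support_fn` (which computes $\sigma_{\hat{\mathcal{P}}}(V)$ for a given nominal kernel) are assumed callables, not part of the paper's formal construction.

```python
import numpy as np

def mlmc_support_estimate(sample_next_state, support_fn, n_states,
                          psi=0.5, n_max=20, rng=None):
    """One draw of the threshold-MLMC estimator of sigma_{P_s^a}(V).

    sample_next_state() draws one next-state index from P_s^a via the
    generative model; support_fn(p_hat, v) evaluates the support function
    over the uncertainty set built around the nominal kernel p_hat.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = min(int(rng.geometric(psi)) - 1, n_max)  # N ~ Geom(psi), truncated at N_m
    m = 2 ** (n + 1)
    draws = np.array([sample_next_state() for _ in range(m)])

    def empirical(idx):                          # empirical next-state distribution
        p = np.zeros(n_states)
        np.add.at(p, draws[idx], 1.0)
        return p / len(idx)

    p_even = empirical(np.arange(1, m, 2))       # even 1-based indices
    p_odd = empirical(np.arange(0, m, 2))        # odd 1-based indices
    p_all = empirical(np.arange(m))              # all 2^{N+1} samples
    p_first = np.zeros(n_states)
    p_first[draws[0]] = 1.0                      # first sample only

    p_n = psi * (1.0 - psi) ** n                 # P(N = n), importance correction

    def estimate(v):
        delta = support_fn(p_all, v) - 0.5 * (support_fn(p_even, v)
                                              + support_fn(p_odd, v))
        return support_fn(p_first, v) + delta / p_n
    return estimate
```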
+
+The formal statements on the sample complexity are as follows. The proofs are straightforward by combining our framework with the results in (Wang et al., 2024c).
+
+Theorem 15.1. (1). For the TV-defined uncertainty set, set $N_{m} = \frac{2\log T}{\log 2}$ and the step size as $\beta_{t} = \frac{2\log T}{(1 - \gamma)T}$ . Then, the output of Algorithm 5 is an $\epsilon$ -optimal policy for the robust AMDP if
+
+$$
+N \geq \widetilde {\mathcal {O}} \left(\frac {S A \mathcal {H} ^ {3}}{\epsilon^ {5}}\right). \tag {192}
+$$
+
+(2). For the Chi-Square-divergence-defined uncertainty set, set $N_{m} = \frac{2\log T}{\log 2}$ and step size as $\beta_{t} = \frac{2\log T}{(1 - \gamma)T}$ . Then the output of Algorithm 5 is an $\epsilon$ -optimal policy for the robust AMDP if
+
+$$
+N \geq \widetilde {\mathcal {O}} \left(\frac {S A \mathcal {H} ^ {3}}{\epsilon^ {5}}\right). \tag {193}
+$$
+
+(3). For the KL-divergence-defined uncertainty set, set
+
+$$
+N _ {m} = \max \left\{\frac {2 \log T}{\log 2}, \frac {\log (1 + p _ {\wedge} ^ {2} \log (2 S) \log T)}{\log 2} \right\},
+$$
+
+where $p_{\wedge}$ is the minimal positive entry of the nominal kernel $\mathsf{P}$ . Then, the output of Algorithm 5 is an $\epsilon$ -optimal policy for the robust AMDP if
+
+$$
+N \geq \widetilde {\mathcal {O}} \left(\frac {S A \mathcal {H} ^ {3}}{p _ {\wedge} ^ {2} \epsilon^ {5}}\right). \tag {194}
+$$
+
+# 15.2. Mini-Batch Robust Q-Learning with Variance Reduction
+
+In this section, we present a model-free mini-batch Q-learning algorithm. The algorithm is derived from the robust DMDP algorithm (Wang et al., 2023c), which employs a variance reduction technique to improve the sample complexity. When combined with our framework, we can also achieve improved sample complexity for robust AMDPs. The details can be found in Algorithm 6.
+
+Algorithm 6 Mini-batch robust Q-learning for robust AMDPs
+1: Input: A generative model of $(S, \mathcal{A}, \mathsf{P}, r)$ , uncertainty radius $\sigma$ , robust bias span $\mathcal{H}$ , accuracy $\epsilon$ , batch size $n$ , and behavior policy $\pi$
+2: Initialization: $Q_{1} \gets 0$ , $s_{0}$
+3: Set $\gamma \gets 1 - \frac{\epsilon}{\mathcal{H}}$
+4: for $t < T$ do
+5: $V_{t}(s) \gets \max_{a} Q_{t}(s, a), \forall s$
+6: Sample $a_{t} \sim \pi(\cdot | s_{t})$
+7: Sample $n$ samples from $\mathsf{P}_{s_{t}}^{a_{t}}$ and obtain empirical uncertainty set $\hat{\mathcal{P}}_{s_{t}}^{a_{t}}$
+8: $\lambda_{t} \gets \frac{1}{1 + (1 - \gamma)t}$
+9: $Q_{t+1}(s_{t}, a_{t}) \gets (1 - \lambda_{t}) Q_{t}(s_{t}, a_{t}) + \lambda_{t}(r(s_{t}, a_{t}) + \gamma \sigma_{\hat{\mathcal{P}}_{s_{t}}^{a_{t}}} (V_{t}))$
+10: Sample the next state $s_{t+1} \sim \mathsf{P}_{s_{t}}^{a_{t}}$ from the generative model
+11: end for
+12: $\hat{\pi}_{\gamma}(s) \gets \arg \max_{a} Q_{T}(s, a), \forall s$
+13: Output: $\hat{\pi}_{\gamma}$
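+A minimal tabular sketch of Algorithm 6 in Python is given below. The generative-model sampler `env_step`, the support-function evaluator `support_fn`, and the uniform behavior policy are our assumptions, and for simplicity the sketch reuses the last batch sample as the next state $s_{t+1}$.

```python
import numpy as np

def minibatch_robust_q_learning(env_step, support_fn, n_states, n_actions,
                                reward, span_h, eps, t_max, batch_n, seed=0):
    """Sketch of Algorithm 6 with a uniform behavior policy."""
    rng = np.random.default_rng(seed)
    gamma = 1.0 - eps / span_h                   # discount from the reduction
    q = np.zeros((n_states, n_actions))
    s = 0                                        # initial state s_0
    for t in range(t_max):
        v = q.max(axis=1)                        # V_t(s) = max_a Q_t(s, a)
        a = int(rng.integers(n_actions))         # a_t ~ pi(.|s_t), uniform here
        batch = env_step(s, a, batch_n)          # n samples from P_{s_t}^{a_t}
        p_hat = np.bincount(batch, minlength=n_states) / batch_n
        lam = 1.0 / (1.0 + (1.0 - gamma) * t)    # step size lambda_t
        q[s, a] = (1 - lam) * q[s, a] + lam * (
            reward[s, a] + gamma * support_fn(p_hat, v))
        s = int(batch[-1])                       # next state (simplification)
    return q.argmax(axis=1)
```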
+
+We then derive the sample complexity of the mini-batch robust Q-learning algorithm, as stated in the following theorem.
+
+Theorem 15.2. Consider the uncertainty set defined by the KL divergence. If we set$^{10}$
+
+$$
+T = \tilde {\mathcal {O}} \left(\frac {S A \mathcal {H} ^ {2}}{\epsilon^ {2}}\right), n = \tilde {\mathcal {O}} \left(\frac {\mathcal {H}}{\epsilon}\right) \tag {195}
+$$
+
+in Algorithm 6, then the output policy $\hat{\pi}_{\gamma}$ of Algorithm 6 is an $\epsilon$ -optimal policy for robust AMDP.
+
+The result shows that the mini-batch robust Q-learning requires $\tilde{\mathcal{O}}\left(\frac{SA\mathcal{H}^3}{\epsilon^3}\right)$ samples to obtain an $\epsilon$ -optimal policy for the robust AMDP. This improves upon the sample complexity of our MLMC robust Q-learning algorithm and represents the state-of-the-art model-free sample complexity for robust AMDPs.
+
+# 15.3. Numerical Experiments
+
+To exemplify the scalability of our framework for robust AMDPs, we present empirical evidence of the convergence of our model-free MLMC robust $Q$ -learning algorithm in Figure 14. We first constructed the uncertainty set using total variation. Then, for each $\gamma$ value to be tested, we generated a number $N$ from a geometric distribution with parameter $\Psi \in (0,1)$ and took action $a$ at state $s$ for $2^{N + 1}$ times in order to learn the optimal policy for the robust DMDP. Once we obtained the $\epsilon_{\gamma}$ -optimal policy $\hat{\pi}_{\gamma}$ , we applied Algorithm 1 of Wang et al. (2023d) to evaluate its robust average reward. This experiment was independently repeated 5 times; Figure 14 plots the mean of the estimated robust average reward for each $\gamma$ , along with one standard deviation above and below the mean. We then ran Algorithm 2 of Wang et al. (2023d) to find the optimal robust average reward. Consistent with our theoretical results, which combine our reduction-based framework with the multi-level Monte Carlo (MLMC) technique (Liu et al., 2022b; Wang et al., 2023e; Blanchet & Glynn, 2015; Blanchet et al., 2019; Wang & Wang, 2022), the experiment shows that our model-free algorithm converges to the optimal robust average reward.
+
+
+Figure 14. Model-Free Experimental Results Under Total Variation
\ No newline at end of file
diff --git a/areductionframeworkfordistributionallyrobustreinforcementlearningunderaveragereward/images.zip b/areductionframeworkfordistributionallyrobustreinforcementlearningunderaveragereward/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..a51fc041a6ccca9b6d9abd05a1b0774cdd2df096
--- /dev/null
+++ b/areductionframeworkfordistributionallyrobustreinforcementlearningunderaveragereward/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6f90b954b9b2291329e3734447d62c868807fbb6603a11e1a6047b72f1178256
+size 2566921
diff --git a/areductionframeworkfordistributionallyrobustreinforcementlearningunderaveragereward/layout.json b/areductionframeworkfordistributionallyrobustreinforcementlearningunderaveragereward/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..5ded61ff6633dc694ce3c5d3d5e7985d9de8dee9
--- /dev/null
+++ b/areductionframeworkfordistributionallyrobustreinforcementlearningunderaveragereward/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:32807848ea5fe62a539800b9d5023726c94e9652098677e862764a7313c0d960
+size 1825363
diff --git a/areductionsapproachtorisksensitivereinforcementlearningwithoptimizedcertaintyequivalents/fc472fd8-814c-4be0-aacc-e85f67a4b20e_content_list.json b/areductionsapproachtorisksensitivereinforcementlearningwithoptimizedcertaintyequivalents/fc472fd8-814c-4be0-aacc-e85f67a4b20e_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..786589df5eca26e77ee3c0f7b922c7536d3a1630
--- /dev/null
+++ b/areductionsapproachtorisksensitivereinforcementlearningwithoptimizedcertaintyequivalents/fc472fd8-814c-4be0-aacc-e85f67a4b20e_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4b33c335939e9ad43c540afd380e82899346c64e8f2d7a738411db3c88af80ed
+size 203146
diff --git a/areductionsapproachtorisksensitivereinforcementlearningwithoptimizedcertaintyequivalents/fc472fd8-814c-4be0-aacc-e85f67a4b20e_model.json b/areductionsapproachtorisksensitivereinforcementlearningwithoptimizedcertaintyequivalents/fc472fd8-814c-4be0-aacc-e85f67a4b20e_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..a78b65e04840a50a857ee037382989ef8c838a7f
--- /dev/null
+++ b/areductionsapproachtorisksensitivereinforcementlearningwithoptimizedcertaintyequivalents/fc472fd8-814c-4be0-aacc-e85f67a4b20e_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d9f40882f113069c56fe79ffe32a13d4a18a17bb07798857bf1cbcc3e7fbb3c8
+size 250081
diff --git a/areductionsapproachtorisksensitivereinforcementlearningwithoptimizedcertaintyequivalents/fc472fd8-814c-4be0-aacc-e85f67a4b20e_origin.pdf b/areductionsapproachtorisksensitivereinforcementlearningwithoptimizedcertaintyequivalents/fc472fd8-814c-4be0-aacc-e85f67a4b20e_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..fb5d0bc3ac5e2178d28a3bc11f954da304d4bd6e
--- /dev/null
+++ b/areductionsapproachtorisksensitivereinforcementlearningwithoptimizedcertaintyequivalents/fc472fd8-814c-4be0-aacc-e85f67a4b20e_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:265be511ebf3817d98b14e181e14a4c13c461513c03fcb753179af350221bcdb
+size 701446
diff --git a/areductionsapproachtorisksensitivereinforcementlearningwithoptimizedcertaintyequivalents/full.md b/areductionsapproachtorisksensitivereinforcementlearningwithoptimizedcertaintyequivalents/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..27cfb41884baa7bb6dabece26bd0a611f038a1dd
--- /dev/null
+++ b/areductionsapproachtorisksensitivereinforcementlearningwithoptimizedcertaintyequivalents/full.md
@@ -0,0 +1,985 @@
+# A Reductions Approach to Risk-Sensitive Reinforcement Learning with Optimized Certainty Equivalents
+
+Kaiwen Wang $^{12}$ Dawen Liang $^{2}$ Nathan Kallus $^{12}$ Wen Sun $^{1}$
+
+# Abstract
+
+We study risk-sensitive RL where the goal is to learn a history-dependent policy that optimizes some risk measure of cumulative rewards. We consider a family of risks called the optimized certainty equivalents (OCE), which captures important risk measures such as conditional value-at-risk (CVaR), entropic risk and Markowitz's mean-variance. In this setting, we propose two meta-algorithms: one grounded in optimism and another based on policy gradients, both of which can leverage the broad suite of risk-neutral RL algorithms in an augmented Markov Decision Process (MDP). Via a reductions approach, we leverage theory for risk-neutral RL to establish novel OCE bounds in complex, rich-observation MDPs. For the optimism-based algorithm, we prove bounds that generalize prior results in CVaR RL and that provide the first risk-sensitive bounds for exogenous block MDPs. For the gradient-based algorithm, we establish both monotone improvement and global convergence guarantees under a discrete reward assumption. Finally, we empirically show that our algorithms learn the optimal history-dependent policy in a proof-of-concept MDP, where all Markovian policies provably fail.
+
+# 1. Introduction
+
+In reinforcement learning (RL), the classical objective by which we measure how good a policy is is the expected cumulative reward along the trajectory (Sutton, 2004; Sutton and Barto, 2018). However, the mean reward of a choice (the choice of policy in RL being an example of such a choice) is a risk-neutral objective that is often observed to be inconsistent with human preferences, causing Allais paradoxes (Allais, 1990; Kahneman and Tversky, 2013) and calling for
+
+$^1$ Cornell Tech $^2$ Netflix Research. Correspondence to: Kaiwen Wang. Work done as Netflix intern.
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+alternative risk-sensitive objectives (Yaari, 1987; Bowling et al., 2023). This is especially true in safety-critical settings (Artzner et al., 1999; Coronato et al., 2020; Wang et al., 2023b). In risk-sensitive RL (RSRL), the objective is some risk measure of cumulative rewards under a policy rather than the mean cumulative rewards (Howard and Matheson, 1972; Artzner et al., 1999).
+
+In this paper, we propose a framework for solving RSRL with any static risk measure that can be expressed as an optimized certainty equivalent (OCE) (Ben-Tal and Teboulle, 2007). Specifically, for a utility function $u: \mathbb{R} \to [-\infty, \infty)$ , the static OCE of a policy $\pi$ is defined as
+
+$$
+\mathrm {O C E} _ {u} (\pi) := \max _ {b \in \mathbb {R}} \left\{b + \mathbb {E} _ {\pi} \left[ u \left(\sum_ {h = 1} ^ {H} r _ {h} - b\right) \right] \right\}, \tag {1}
+$$
+
+where the expectation is w.r.t. the random cumulative reward $\sum_{h=1}^{H} r_h$ under trajectories from $\pi$ . The OCE captures several important risks for different choices of $u$ . For example, with the hinge utility $u(t) = \min(t / \tau, 0)$ , $\mathrm{OCE}_u$ becomes CVaR at level $\tau \in (0, 1]$ , which measures the mean outcome among the worst $\tau$ -fraction of cases (Rockafellar and Uryasev, 2000). Moreover, OCE with the quadratic utility recovers Markowitz's mean-variance (Markowitz, 1952) and the exponential utility recovers entropic risk (Follmer and Schied, 2011). We present a primer on OCE and more examples in App. B. We remark that another risk-sensitive objective is the iterated risk, which captures risk at each time step and is orthogonal to our static risk objective (as we explain in related works Sec. 1.1).
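+To make the definition concrete, the following sketch estimates $\mathrm{OCE}_u$ from Monte-Carlo samples of the cumulative reward by a grid search over the budget $b$. The grid search over $[0,1]$ is our simplification (Eq. (1) maximizes over all of $\mathbb{R}$, but for returns in $[0,1]$ the maximizer can be taken in $[0,1]$, as used later in the proof of Thm. 2.1).

```python
import numpy as np

def oce_from_samples(returns, utility, grid=None):
    """Monte-Carlo OCE: max over b of { b + mean(u(Z - b)) }, via grid search."""
    returns = np.asarray(returns, dtype=float)
    grid = np.linspace(0.0, 1.0, 1001) if grid is None else grid
    return max(b + utility(returns - b).mean() for b in grid)

def hinge(tau):
    """Hinge utility u(t) = min(t / tau, 0), so OCE_u = CVaR at level tau."""
    return lambda t: np.minimum(t / tau, 0.0)
```

+For instance, for returns uniformly distributed on $\{0, 1\}$, the hinge utility at $\tau = 0.5$ yields the mean of the worst half of outcomes, and $\tau = 1$ recovers the ordinary mean.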
+
+A key technical challenge in OCE RL is that optimal policies are in general non-Markovian (i.e., history-dependent), which renders this more complex than risk-neutral RL where a Markovian optimal policy always exists. In prior works, a promising solution is to augment the MDP with a new state that tracks the cumulative rewards so far, which has been studied for CVaR and spectral risks (Bäuerle and Ott, 2011; Bäuerle and Glauner, 2021; Bastani et al., 2022). In this paper, we show that the augmented MDP (AugMDP) can be generalized with an OCE-based reward function to solve OCE RL with Markovian augmented policies (Sec. 2.1). We then propose two meta-algorithms that reduce OCE RL to risk-neutral RL by using the AugMDP as a bridge.
+
+First, in Sec. 3, we propose a meta-algorithm based on optimistic RL oracles, and we prove that its OCE regret can be bounded by the RL oracle's regret within the AugMDP up to a factor depending on $u$ (Thm. 3.2). We show that this approach can be instantiated with many RL oracles including UCB-VI for tabular MDPs (Azar et al., 2017) and Rep-UCB (Uehara et al., 2022) for low-rank MDPs, generalizing prior results in CVaR RL (Wang et al., 2023a; Zhao et al., 2024) to the OCE setting. While prior works in RSRL only considered model-based algorithms, we show that our approach is also amenable to the model-free optimistic RL oracle called GOLF (Jin et al., 2021). This establishes the first risk-sensitive RL bounds for exogenous block MDPs (Efroni et al., 2022; Xie et al., 2023).
+
+Next, in Sec. 4, we propose a gradient-based meta-algorithm that uses policy gradient oracles and prove two types of guarantees: (1) convergence to the optimal OCE policy (Thm. 4.2), and (2) monotone improvement in a risk lower bound (Thm. 4.4), a novel property in RSRL theory. We instantiate our method with the natural policy gradient (NPG) oracle (Kakade, 2001) and prove these two guarantees under the standard conditions of policy optimization (Agarwal et al., 2021), plus a discrete reward assumption. Finally, our numerical simulation validates that our meta-algorithms can indeed learn the optimal OCE policy in a simple MDP where all Markovian policies (e.g., Dabney et al., 2018; Lim and Malik, 2022) must have sub-optimal performance. This validates our thesis that history-dependent, non-Markovian policies are required for OCE optimality.
+
+In sum, our main contributions are the following:
+
+1. We propose an optimistic meta-algorithm for OCE RL that generalizes all prior works in CVaR to the more general OCE setting. We also apply GOLF as a novel oracle and prove the first risk-sensitive bounds in the challenging exogenous block MDP. (Sec. 3)
+2. We propose a gradient-based meta-algorithm for OCE RL that enjoys both local improvement and global convergence. These are the first finite-sample bounds for gradient-based RSRL algorithms. (Sec. 4)
+3. In a numerical simulation, we show that our meta-algorithms indeed optimally solve OCE RL in a proof-of-concept MDP where Markovian policies provably have sub-optimal performance. (Sec. 5)
+
+# 1.1. Related Works
+
+From the theoretical side, prior works with risk-sensitive regret or PAC bounds are based on optimistic, model-based oracles for CVaR or entropic risk. For CVaR, Bastani et al. (2022); Wang et al. (2023a) proposed UCB-based algorithms for tabular MDPs by using the AugMDP proposed by Bäuerle and Ott (2011). Zhao et al. (2024) then extended this approach to low-rank MDPs using the Rep-UCB oracle
+
+(Uehara et al., 2022) in the AugMDP. For entropic risk, Fei et al. (2020; 2021) proposed UCB-based algorithms for tabular MDPs based on exponential Bellman equations. These prior works are limited to model-based optimistic oracles and largely do not extend beyond tabular MDPs, while our work provides a general framework for applying both model-free and model-based oracles to rich-observation OCE RL, including the challenging exogenous block MDP (Efroni et al., 2022). Moreover, these prior works are limited to CVaR or entropic risk, while our work provides a unifying framework for the broad class of OCE risk measures.
+
+Chen et al. (2024) derives regret bounds for static Lipschitz risks under function approximation which is more related to our setting. However, their optimistic algorithm requires bounded eluder dimension or witness rank, which does not easily apply to the challenging exogenous block MDP setting. Indeed, the Bellman eluder dimension can grow with the size of the exogenous state space which is exponentially large or infinite (Xie et al., 2023). In contrast, we employ a coverability argument in the augmented MDP to obtain the first PAC bound for exogenous block MDP in risk-sensitive RL. Moreover, our work also studies other methods such as policy gradients, which are complementary to optimistic algorithms.
+
+There are also several prior works on gradient-based RSRL algorithms that prove local and asymptotic convergence (Chow and Ghavamzadeh, 2014; Tamar et al., 2015a;b; Chow et al., 2018; Greenberg et al., 2022). These guarantees are not ideal since (1) any local optima can be much worse than the global optima and (2) asymptotics do not quantify the rate of convergence which could be very slow. In contrast, we fill this gap by proving non-asymptotic finite-sample bounds for the global convergence of gradient-based algorithms, by leveraging recent advances in risk-neutral policy gradient theory (Agarwal et al., 2021; Xiao, 2022). Finally, we also prove monotone improvement which is a novel property for RSRL to the best of our knowledge.
+
+From the empirical side, a popular framework for RSRL is distributional RL (DistRL), which has demonstrated state-of-the-art results in both online (Bellemare et al., 2017; Dabney et al., 2018; Keramati et al., 2020; Ma et al., 2020) and offline RL (Urpí et al., 2021; Ma et al., 2021). At a high level, these algorithms learn the reward-to-go distribution and then greedily act to optimize the risk of the learned distribution at each step. Unfortunately, this can diverge for some MDPs since it is not consistent with any Bellman equations. To address this, Lim and Malik (2022) proposed to apply DistRL in an AugMDP, but their algorithm still relies on the strong assumption that the optimal policy is Markov, which is false in our counterexample (Sec. 5).
+
+Another solution is Trajectory Q-Learning (TQL) (Zhou et al., 2023), which learns history-dependent policies and distributional $Q$ -functions. While learning history-dependent policies ensures that TQL converges to the optimal RSRL policy, the learning process can be quite costly in sample complexity, computation and memory since it essentially ignores the Markov property of MDPs. Instead, our approach searches over the minimal sufficient policy set, the Markovian policies in the AugMDP, to optimally solve OCE RL while also maintaining statistical and computational efficiency. Finally, TQL requires DistRL to learn the reward-to-go distribution, while our framework supports both regression-based and distribution-based methods for learning the AugMDP value function.
+
+Iterated risk (a.k.a. dynamic risk). Another risk-sensitive RL objective is the iterated risk $\rho(r_1 + \rho(r_2 + \rho(r_3 + \ldots))$ , where $\rho$ is some risk measure. The iterated risk, a.k.a. dynamic risk, captures risk of the rewards-to-go at each time step and has been studied by many works (Ruszczynski, 2010; Du et al., 2023; Lam et al., 2023; Xu et al., 2023; Rigter et al., 2023; Liang and Luo, 2024). Dynamic risk also has close relations to distributionally robust MDPs, where an adversary can perturb the transition kernel at each time step (Wiesemann et al., 2013; Kumar et al., 2023; Bennett et al., 2024). The main benefit of iterated risk is that there exist Bellman equations and an optimal Markovian policy like in the classic RL setting. However, iterated risk may be overly risky in certain cases and overly conservative in other cases, and is also less interpretable due to the nested structure (Lim and Malik, 2022). In contrast, our static risk $\rho(r_1 + r_2 + r_3 + \ldots)$ models the trajectory-level risk rather than at the step-level and is more interpretable. We highlight that static and dynamic risk are orthogonal objectives and the objective is ultimately the user's choice.
+
+# 2. Preliminaries
+
+We consider an MDP with state space $S$ , action space $\mathcal{A}$ , horizon $H$ , transition kernels $P_{h}:\mathcal{S}\times \mathcal{A}\to \Delta (\mathcal{S})$ and conditional reward distributions $R_{h}:\mathcal{S}\times \mathcal{A}\rightarrow \Delta ([0,1])$ , where $\Delta (S)$ denotes the set of distributions on the set $S$ . A history-dependent policy $\pi = (\pi_h)_{h\in [H]}$ interacts with the MDP in the following way: start from an initial state $s_1$ , then, for each step $h = 1,2,\ldots ,H$ , sample an action $a_{h}\sim \pi_{h}(s_{h},\mathcal{H}_{h})$ , collect a reward $r_h\sim R_h(s_h,a_h)$ , observe the next state $s_{h + 1}\sim P_h(s_h,a_h)$ . Here, $\mathcal{H}_h = \{(s_i,a_i,r_i)\}_{i\in [h - 1]}$ denotes the history so far. We denote the cumulative reward distribution under $\pi$ as $Z(\pi)\coloneqq \sum_{h = 1}^{H}r_{h}$ , where randomness arises from both the MDP and the policy $\pi$ . We also assume that $Z(\pi)\in [0,1]$ almost surely for all policies (Jiang and Agarwal, 2018). The goal of OCE RL is to learn a history-dependent policy that maximizes the OCE objective:
+
+$$
+\mathrm {O C E} _ {u} ^ {\star} := \max _ {\pi \in \Pi_ {\mathrm {H D}}} \mathrm {O C E} _ {u} (\pi), \tag {2}
+$$
+
+where $\Pi_{\mathrm{HD}}$ denotes the set of history-dependent policies.
+
+We focus on the online setting, where at each round $k = 1,2,\ldots ,K$ , the learner outputs a (history-dependent) policy $\pi^k\in \Pi_{\mathsf{HD}}$ . The learner's goal is to minimize the total sub-optimality across $K$ rounds:
+
+$$
+\operatorname {R e g} _ {\mathrm {O C E}} (K) := \sum_ {k = 1} ^ {K} \operatorname {O C E} _ {u} ^ {\star} - \operatorname {O C E} _ {u} (\pi^ {k}).
+$$
+
+Probably approximately correct (PAC) bounds are upper bounds on $\operatorname{Reg}_{\mathrm{OCE}}(K)$ that hold with high probability over the randomness of the learner. In particular, PAC bounds imply that at least one policy has high OCE, i.e., $\min_{k\in [K]}\{\mathrm{OCE}_u^\star -\mathrm{OCE}_u(\pi^k)\} \leq \frac{1}{K}\operatorname{Reg}_{\mathrm{OCE}}(K)$ . If the learner only executes policy $\pi^k$ at round $k$ , then $\operatorname{Reg}_{\mathrm{OCE}}(K)$ is also called a regret bound.
+
+# 2.1. Augmented MDP for OCE
+
+The key challenge in RSRL with static risk is that the optimal policies are history-dependent, as there are no Bellman-like equations. A promising solution from prior works is to augment the MDP with a scalar state, which has been studied for CVaR and spectral risks (Bäuerle and Ott, 2011; Bäuerle and Glauner, 2021) – in this augmented state space, there does exist a Markovian optimal policy, bypassing the need to maintain history-dependent policies. In this section, we describe the augmented MDP (AugMDP) and extend it to the OCE setting, generalizing prior constructions.
+
+The AugMDP has states $(s_h, b_h) \in S_{\mathrm{aug}} := S \times [-1, 1]$ , where $s_h$ is the original MDP state and $b_h$ is a new scalar state called the budget, which intuitively tracks the cumulative rewards so far. The initial budget $b_1 \in [0, 1]$ is chosen by the learning algorithm, and the budget transitions via $b_{h+1} = b_h - r_h$ , where $r_h$ is the reward at step $h$ . The reward of the AugMDP is defined as $r_{\mathrm{aug}}^h(s, b, a) = 0$ for $h < H$ and $r_{\mathrm{aug}}^H(s, b, a) = \mathbb{E}_{r \sim R_H(s, a)}[u(r - b)]$ ; we let $V_u^{\max} = \max_{c \in [-1, 1]} |u(c)|$ denote the scale of $r_{\mathrm{aug}}$ . The augmented reward encodes the utility $u$ and is critical for the following optimality theorem.
+
+Theorem 2.1. There exists an initial budget $b_1^\star \in [0,1]$ s.t. the optimal risk-neutral $\pi_{\mathrm{aug}}^\star$ in the AugMDP with initial budget $b_1^\star$ achieves optimal OCE in the original MDP.
+
+Since the optimal risk-neutral policy $\pi_{\mathrm{aug}}^{\star}$ is Markovian and deterministic, this theorem crucially mitigates the need to maintain history-dependent policies. We note that $(\pi_{\mathrm{aug}}^{\star}, b_1^{\star})$ is a special history-dependent policy in the original MDP. Thus, Thm. 2.1 proves that the set of $(\pi_{\mathrm{aug}}, b_1)$ , where $\pi_{\mathrm{aug}}$ is a Markovian augmented policy and $b_1$ is an initial budget, is a special class of history-dependent policies that is sufficient for OCE optimality. Moreover, a natural implementation of Thm. 2.1 is to solve risk-neutral RL in the AugMDP, which is the key idea behind our algorithms. We remark that our AugMDP construction generalizes the CVaR AugMDP from (Bäuerle and Ott, 2011) to any OCE risk.
+
+For the AugMDP, we also define its quality function $Q_{\mathrm{aug}}^{\pi,h}(s_h,b_h,a_h) = \mathbb{E}_\pi [u(\sum_{t=h}^H r_t - b_h) \mid s_h,a_h,b_h]$ and value function $V_{\mathrm{aug}}^{\pi,h}(s_h,b_h) = \mathbb{E}_\pi [Q_{\mathrm{aug}}^{\pi,h}(s_h,b_h,a_h) \mid s_h,b_h]$ . Also, let $V_{\mathrm{aug}}^{\star,h}(s_h,b_h) = \max_{\pi \in \Pi_{\mathrm{HD}}} V_{\mathrm{aug}}^{\pi,h}(s_h,b_h)$ be the optimal value function of the AugMDP.
+
+Proof of Thm. 2.1. The main step is to write $\mathrm{OCE}_u^\star$ in terms of the optimal value of the AugMDP:
+
+$$
+\begin{array}{l} \mathrm{OCE}_u^{\star} \stackrel{(i)}{=} \max_{\pi \in \Pi_{\mathrm{HD}}} \max_{b \in [0,1]} \{b + \mathbb{E}[u(Z(\pi) - b)]\} \\ = \max_{b \in [0,1]} \left\{b + \max_{\pi \in \Pi_{\mathrm{HD}}} \mathbb{E}\left[u(Z(\pi) - b)\right]\right\} \\ \stackrel{(ii)}{=} \max_{b \in [0,1]} \{b + \max_{\pi \in \Pi_{\mathrm{HD}}} V_{\mathrm{aug}}^{\pi,1}(s_1, b)\} \\ \stackrel{(iii)}{=} \max_{b \in [0,1]} \{b + V_{\mathrm{aug}}^{\star,1}(s_1, b)\}. \tag{3} \end{array}
+$$
+
+Step (i) uses OCE's definition in Eq. (1) and $Z(\pi) \in [0,1]$ w.p. 1, so the $\max_b$ can be taken over $[0,1]$ . Step (ii) uses the definition of $b_h$ and $r_{\mathrm{aug}}$ to deduce that $\mathbb{E}_{\pi}[u(\sum_h r_h - b_1)] = \mathbb{E}_{\pi}[u(r_H - b_H) \mid b_1] = \mathbb{E}_{\pi}[r_{\mathrm{aug}}^H(s_H, b_H, a_H) \mid b_1] = V_{\mathrm{aug}}^{\pi,1}(s_1, b_1)$ for any $\pi$ and $b_1$ . Step (iii) uses the classical RL result that the optimal risk-neutral policy $\pi_{\mathrm{aug}}^{\star}$ achieves the maximum value $V_{\mathrm{aug}}^{\star}$ over history-dependent policies (Puterman, 2014). Thus, letting $b_1^{\star} = \arg \max_{b \in [0,1]} \{b + V_{\mathrm{aug}}^{\star,1}(s_1, b)\}$ , we have:
+
+$$
+\begin{array}{l} \mathrm{OCE}_u(\pi_{\mathrm{aug}}^{\star}, b_1^{\star}) \\ = \max_{b \in [0,1]} \{b + \mathbb{E}_{\pi_{\mathrm{aug}}^{\star}, b_1^{\star}}[u(Z(\pi_{\mathrm{aug}}^{\star}, b_1^{\star}) - b)]\} \\ \geq b_1^{\star} + \mathbb{E}_{\pi_{\mathrm{aug}}^{\star}, b_1^{\star}}[u(Z(\pi_{\mathrm{aug}}^{\star}, b_1^{\star}) - b_1^{\star})] \\ = b_1^{\star} + V_{\mathrm{aug}}^{\star,1}\left(s_1, b_1^{\star}\right) = \mathrm{OCE}_u^{\star}. \end{array}
+$$
+
+This proves that $(\pi_{\mathrm{aug}}^{\star}, b_1^{\star})$ achieves the optimal $\mathrm{OCE}_u^{\star}$ .
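+Eq. (3) also suggests a direct recipe for choosing the initial budget once an estimate of $V_{\mathrm{aug}}^{\star,1}$ is available. A grid-search sketch is below, where `v_star_fn` stands in for the learned optimal value function and the finite grid replaces the exact maximum over $[0,1]$ (both are assumptions of this sketch).

```python
import numpy as np

def select_initial_budget(v_star_fn, s1, grid_size=101):
    """Pick b1 = argmax over b of { b + V*_aug(s1, b) }, per Eq. (3).

    v_star_fn(s, b) returns the (learned) optimal AugMDP value at (s, b).
    """
    grid = np.linspace(0.0, 1.0, grid_size)
    objective = np.array([b + v_star_fn(s1, b) for b in grid])
    return grid[int(np.argmax(objective))]
```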
+
+Remark: the AugMDP is easy to simulate. The transition function for the augmented state $b_{h}$ is simply $b_{h + 1} = b_{h} - r_{h}$ , which is known and deterministic. This ensures that simulating the AugMDP is computationally efficient. This is also important for proving bounds in the AugMDP. For example, model-based algorithms do not need to learn the AugMDP transitions (since it is known), so they generally inherit the same bounds as in the original MDP.
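+The remark above can be made concrete with a thin wrapper. The base environment's `reset()`/`step(a)` interface (returning `(s, r, done)`) is an assumption of this sketch.

```python
class AugMDP:
    """Wrap a base MDP as the OCE augmented MDP of Sec. 2.1.

    States are pairs (s, b); the budget follows the known, deterministic
    transition b_{h+1} = b_h - r_h, and the reward is zero except at step H,
    where it equals u(r_H - b_H).
    """

    def __init__(self, base_env, utility, horizon, b1):
        self.env, self.u, self.H, self.b1 = base_env, utility, horizon, b1

    def reset(self):
        self.h = 1
        self.b = self.b1                    # initial budget chosen by the learner
        return (self.env.reset(), self.b)

    def step(self, action):
        s, r, done = self.env.step(action)
        r_aug = self.u(r - self.b) if self.h == self.H else 0.0
        self.b -= r                         # known deterministic budget update
        self.h += 1
        return (s, self.b), r_aug, done
```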
+
+# 3. Meta-Algorithm with Optimism
+
+In this section, we propose a meta-algorithm for OCE RL based on optimistic RL oracles. Our framework generalizes existing works in CVaR RL to more OCE risk measures and to more complex MDPs such as exogenous block MDPs. At a high level, the meta-algorithm implements the intuition from Thm. 2.1 and has two parts: (1) we apply an RL oracle in the AugMDP to learn $\pi_{\mathrm{aug}}^{\star}$ , and (2) then select the right initial budget $b_{1}$ . We start by formally defining optimistic oracles, which are RL algorithms that can explore using optimistic value functions:
+
+Definition 3.1 (Optimistic oracle). At each round $k = 1,2,\ldots ,K$, the oracle OPTALG first outputs an optimistic value function $\widehat{V}_{1,k}(\cdot)$, then sees an initial state $s_{1,k}$, and finally outputs a policy $\pi^k$ which is rolled out from $s_{1,k}$ to collect data. Two conditions must be satisfied:
+
+1. (Optimism) $\widehat{V}_{1,k}(s_1)\geq V_1^{\star}(s_1) - \varepsilon_k^{\mathrm{opt}}$ for all initial states $s_1$ , where $\varepsilon_{k}^{\mathrm{opt}}$ are slack variables;
+2. (Bounded Regret) $\sum_{k=1}^{K} (\widehat{V}_{1,k}(s_{1,k}) - V_1^{\pi^k}(s_{1,k})) \leq V^{\max} \operatorname{Reg}_{\mathrm{Opt}}(K)$, where $V^{\max}$ is the scale of cumulative rewards and $\operatorname{Reg}_{\mathrm{Opt}}(K)$ bounds the total suboptimality gap across $K$ rounds. For this to be useful, $\operatorname{Reg}_{\mathrm{Opt}}(K)$ should be sublinear in $K$, e.g., $\mathcal{O}(\sqrt{K})$.
+
+We remark that the conditions of OPTALG can be satisfied in the AugMDP by modifying standard optimistic algorithms such as UCB-VI, Rep-UCB and GOLF. For model-based oracles (e.g., UCB-VI, Rep-UCB), the augmented transition kernel for $b$ is known and deterministic and thus introduces no extra statistical or computational complexity for model learning in the AugMDP. For model-free oracles (e.g., GOLF), we need to be more careful and redefine important concepts such as completeness and coverability in the AugMDP. We provide detailed algorithms and proofs for satisfying OPTALG in App. D.
+
+Given an oracle OPTALG, our optimistic meta-algorithm (Alg. 1) proceeds as follows: at each round $k = 1,2,\dots ,K$ , we query OPTALG in the AugMDP and obtain an optimistic value function $\widehat{V}_{1,k}(\cdot)$ . Then, we compute the initial budget by solving:
+
+$$
+\widehat {b} _ {k} \leftarrow \arg \max _ {b _ {1} \in [ 0, 1 ]} \left\{b _ {1} + \widehat {V} _ {1, k} \left(s _ {1}, b _ {1}\right) \right\}. \tag {4}
+$$
+
+Finally, we give the initial state $(s_1, \widehat{b}_k)$ to OPTALG, receive policy $\pi^k$ , and proceed to the next round. We now state our main result for Alg. 1.
+
+Theorem 3.2. Assuming that OPTALG satisfies Def. 3.1, running Alg. 1 for $K$ rounds ensures that:
+
+$$
+\sum_{k=1}^{K} \mathrm{OCE}_u^{\star} - \mathrm{OCE}_u\left(\pi^k, \widehat{b}_k\right) \leq V_u^{\max} \operatorname{Reg}_{\mathrm{Opt}}(K) + \sum_{k=1}^{K} \varepsilon_k^{\mathrm{opt}}.
+$$
+
+This theorem is a deterministic statement which bounds Alg. 1's OCE regret by the oracle's AugMDP regret, up to a scaling of $V_{u}^{\max}$. Here, $V_{u}^{\max}$ intuitively measures the statistical hardness of estimating $\mathrm{OCE}_u$ relative to $\mathbb{E}$, which has $V_{\mathbb{E}}^{\max} = 1$. For instance, $V_{\mathrm{CVaR}_{\tau}}^{\max} = \tau^{-1}$. Another interpretation of $V_{u}^{\max}$ is as the Lipschitz constant of the risk measure (Liang and Luo, 2024; Chen et al., 2024). Please see App. B for more examples. We now instantiate this result with three optimistic oracles from the RL literature.
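+To see the $\mathrm{CVaR}_{\tau}$ scaling concretely: CVaR is the OCE with utility $u(x) = -\tau^{-1}(-x)_+$ (cf. the CVaR dual form in Sec. 5), which satisfies
+
+$$
+|u(x) - u(y)| = \tau^{-1}\left|(-x)_+ - (-y)_+\right| \leq \tau^{-1}|x - y|,
+$$
+
+so the augmented rewards $u(r_H - b_H)$, and hence the value scale $V_u^{\max}$, grow by a factor of $\tau^{-1}$ relative to the risk-neutral case.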
+
+# Algorithm 1 Meta-algorithm for optimistic oracles
+
+1: Input: number of rounds $K$ , optimistic oracle OPTALG satisfying Def. 3.1.
+2: for round $k = 1,2,\ldots ,K$ do
+3: Query OPTALG in AugMDP for value func. $\widehat{V}_{1,k}(\cdot)$ .
+4: Compute initial budget $\widehat{b}_k$ via Eq. (4).
+5: Give initial state $(s_1, \widehat{b}_k)$ to OPTALG and receive $\pi^k$ .
+6: Collect a trajectory with $\pi^k$ starting from $(s_1, \widehat{b}_k)$ .
+7: end for
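+In code, the loop of Alg. 1 reads roughly as follows; the oracle interface (`optimistic_value`, `get_policy`, `update`) is an illustrative assumption, and Eq. (4) is approximated on a uniform grid over $[0,1]$:
+
+```python
import numpy as np

def oce_meta_algorithm(optalg, rollout, K, n_grid=101):
    """Sketch of Alg. 1 with a hypothetical oracle interface."""
    budgets = np.linspace(0.0, 1.0, n_grid)        # candidate b_1 for Eq. (4)
    policies = []
    for _ in range(K):
        v_hat = optalg.optimistic_value()          # line 3: optimistic V-hat_{1,k}
        scores = budgets + np.array([v_hat(b) for b in budgets])
        b_k = float(budgets[np.argmax(scores)])    # line 4: Eq. (4) on the grid
        pi_k = optalg.get_policy(b_k)              # line 5
        optalg.update(rollout(pi_k, b_k))          # line 6: collect a trajectory
        policies.append((pi_k, b_k))
    return policies
+```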
+
+First, UCB-VI is a model-based algorithm for tabular MDPs with near-minimax-optimal dependence on $|\mathcal{S}|, |\mathcal{A}|, K$ (Azar et al., 2017). While the AugMDP is not tabular due to the real-valued augmented state $b_h$, the augmented state has a known and deterministic transition $(b_{h+1} = b_h - r_h)$; hence, UCB-VI only needs to learn the original MDP's transition model when operating in the AugMDP. Indeed, by adapting the proof of Wang et al. (2023a) in Thm. D.1, we prove that UCB-VI satisfies Def. 3.1's conditions in the AugMDP with $\varepsilon_k^{\mathrm{opt}} = 0$ and $\mathrm{Reg}_{\mathrm{Opt}}(K) \leq \widetilde{\mathcal{O}}(\sqrt{|\mathcal{S}||\mathcal{A}|HK})$. Combined with Thm. 3.2, this immediately implies an OCE regret bound of $\widetilde{\mathcal{O}}(V_u^{\max}\sqrt{|\mathcal{S}||\mathcal{A}|HK})$, which is also near-minimax-optimal in $|\mathcal{S}|$, $|\mathcal{A}|$, $K$ for tabular MDPs. Moreover, in App. D.4, we obtain matching rates for CVaR using second-order oracles, confirming that our upper bound is sharp.
+
+The second oracle we discuss is Rep-UCB (Uehara et al., 2022), a model-based optimistic algorithm for low-rank MDPs (defined in Def. D.2), which are MDPs with a rank $d$ transition kernel and potentially infinite state space (Agarwal et al., 2020); low-rank MDPs capture tabular MDPs and linear MDPs. Again, the original MDP being low-rank does not necessarily imply that the augmented MDP is low-rank. Fortunately, as before, the augmented state $b_{h}$ has a known transition function, and thus Rep-UCB only needs to learn the low-rank transitions of the original MDP, even when running in the AugMDP. Indeed, Zhao et al. (2024) proved that Rep-UCB enjoys the same PAC bound in the AugMDP as in the original low-rank MDP. We adapt their proof in Thm. D.4 and show that Rep-UCB satisfies Def. 3.1's conditions in the AugMDP with $\mathrm{Reg}_{\mathrm{Opt}}(K) \leq \widetilde{\mathcal{O}}(H^3|\mathcal{A}|d^2\sqrt{K})$ and $\sum_{k} \varepsilon_{k}^{\mathrm{opt}} \leq \mathrm{Reg}_{\mathrm{Opt}}(K)$. Together with Thm. 3.2, we thus have an OCE PAC bound of $\widetilde{\mathcal{O}}(V_u^{\max}H^3|\mathcal{A}|d^2\sqrt{K})$, generalizing Zhao et al. (2024) to any OCE risk measure.
+
+We have presented two examples with model-based oracles, whose bounds naturally lift to the AugMDP since the augmented state $b_h$ has a known and deterministic transition. In general, model-based oracles enjoy the same bounds in the AugMDP as in the original MDP, making them convenient oracles for our meta-algorithm. Next, we discuss a model-free oracle to solve the challenging exogenous block MDP, where it is not tractable to learn the transition model.
+
+# 3.1. Bounds for exogenous block MDPs
+
+We now present a third optimistic oracle called GOLF (Jin et al., 2021) and prove the first risk-sensitive bound in the challenging exogenous block MDP (Ex-BMDP) problem, due to Efroni et al. (2022).
+
+Definition 3.3. An Ex-BMDP has latent states $(z_h^{\mathrm{en}}, z_h^{\mathrm{ex}}) \in \mathcal{Z}_h^{\mathrm{en}} \times \mathcal{Z}_h^{\mathrm{ex}}$ , where $\mathcal{Z}^{\mathrm{en}}$ is the endogenous part which is tabular, and $\mathcal{Z}^{\mathrm{ex}}$ is the exogenous part which is arbitrarily large. The endogenous latent transition conditions on the action $z_{h+1}^{\mathrm{en}} \sim P_h^{\mathrm{en}}(z_h^{\mathrm{en}}, a_h)$ , while the exogenous does not $z_{h+1}^{\mathrm{ex}} \sim P_h^{\mathrm{ex}}(z_h^{\mathrm{ex}})$ . The observation state is emitted from the latents $s_h \sim o_h(z_h^{\mathrm{en}}, z_h^{\mathrm{ex}})$ and, importantly, there exists a unique function $\phi_h^\star$ such that $\phi_h^\star(s_h) = (z_h^{\mathrm{en}}, z_h^{\mathrm{ex}})$ recovers the latents from the observation.
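+To make the factorization concrete, one Ex-BMDP transition can be sketched as follows, with `P_en`, `P_ex`, and `emit` as illustrative stand-ins for the kernels in Def. 3.3:
+
+```python
def exbmdp_step(z_en, z_ex, a, P_en, P_ex, emit):
    """One Ex-BMDP transition per Def. 3.3 (sketch)."""
    z_en_next = P_en(z_en, a)            # endogenous latent: conditions on the action
    z_ex_next = P_ex(z_ex)               # exogenous latent: ignores the action
    s_next = emit(z_en_next, z_ex_next)  # observation emitted from both latents
    return z_en_next, z_ex_next, s_next
+```
+
+Only the (tabular) endogenous part, together with the decoder $\phi^\star$, matters for control; the exogenous part evolves on its own.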
+
+GOLF is a value-based (model-free) RL algorithm that achieves optimism by optimizing over a version space of $Q$-functions (Jin et al., 2021). The version space consists of candidate $Q$-functions from a function class $\mathcal{F} = (\mathcal{F}_h)_{h \in [H]}$ with $\mathcal{F}_h \subset \{\mathcal{S}_{\mathrm{aug}} \times \mathcal{A} \to \mathbb{R}\}$, whose Bellman backup is consistent with the data up to statistical noise. Value-based methods have shown promise in important applications such as LLM post-training (Mudgal et al., 2024; Zhou et al., 2025; Wang et al., 2025b). Due to space constraints, we defer the formal definition of GOLF in the AugMDP to App. D.3.
+
+The standard condition for Bellman-consistent algorithms like GOLF is completeness, which ensures that Bellman backups are well-behaved (Tsitsiklis and Van Roy, 1996; Munos and Szepesvári, 2008). Since we apply GOLF in the AugMDP, we posit completeness w.r.t. the AugMDP Bellman operator, denoted $\mathcal{T}_{\mathrm{aug}}^{\star,h}$.
+
+Assumption 3.4. $\mathcal{T}_{\mathrm{aug}}^{\star,h}f_{h+1}\in\mathcal{F}_h$ for all $f_{h+1}\in\mathcal{F}_{h+1}$ .
+
+Another condition we require is discrete rewards (Assump. 3.5), which we use for bounding coverability in the AugMDP (Xie et al., 2023). Many safety-critical RL problems have discrete rewards, e.g., in sepsis treatment, reward is the patient's discharge status (Johnson et al., 2016).
+
+Assumption 3.5. We have a finite set $\mathcal{B} \subset [0,1]$ such that $Z(\pi) \in \mathcal{B}$ w.p. 1 for all $\pi$ , where $Z(\pi)$ is the cumulative reward of roll-outs from $\pi$ .
+
+This assumption can be satisfied by discretizing rewards as is common in distributional RL (Bellemare et al., 2017; Imani et al., 2024; Ayoub et al., 2024; Wang et al., 2025a) and in theory (Bastani et al., 2022; Wang et al., 2023a).
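+As a sketch, discretization can be as simple as rounding each reward down to a grid of width $\epsilon$ (the function below is illustrative); since per-step rewards then lie on the grid and $Z(\pi) \in [0,1]$, Assump. 3.5 holds with $|\mathcal{B}| = O(1/\epsilon)$:
+
+```python
import math

def discretize(r, eps):
    """Round a reward in [0, 1] down to the grid {0, eps, 2*eps, ...} (sketch).
    Cumulative rewards then also lie on this grid, so |B| = O(1/eps)."""
    return math.floor(r / eps) * eps
+```
+
+The rounding bias is at most $\epsilon$ per step, i.e., at most $H\epsilon$ per trajectory.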
+
+We now state our main result for Ex-BMDPs.
+
+Theorem 3.6. For $\delta \in (0,1)$ , under Assumps. 3.4 and 3.5, running Alg. 1 with OPTALG as GOLF (Alg. 5 in App. D.3) ensures that w.p. $1 - \delta$ ,
+
+$$
+\begin{array}{l} \sum_ {k = 1} ^ {K} \mathrm {O C E} _ {u} ^ {\star} - \mathrm {O C E} _ {u} \left(\pi^ {k}, \widehat {b} _ {k}\right) \\ \leq \widetilde {\mathcal {O}} \left(V _ {u} ^ {\max } H \sqrt {| \mathcal {B} | | \mathcal {Z} ^ {\mathrm {e n}} | | \mathcal {A} | K \log (| \mathcal {F} | / \delta)}\right). \\ \end{array}
+$$
+
+Notably, the bound scales with the endogenous state size $|\mathcal{Z}^{\mathrm{en}}|$ and the function class's complexity $\log (|\mathcal{F}|)$; indeed, all salient problem parameters match the original risk-neutral bound from Xie et al. (2023). The completeness assumption may be weakened to realizability if we have local simulator access (Mhammedi et al., 2024), but is otherwise unavoidable even in risk-neutral RL (Jin et al., 2021; Xie et al., 2023). Thus, Thm. 3.6 demonstrates the versatility of our meta-algorithm to solve challenging MDPs that go beyond tabular and low-rank.
+
+To prove Thm. 3.6, we need to show that GOLF satisfies the OPTALG conditions in Def. 3.1 with $\mathrm{Reg}_{\mathrm{Opt}}(K) \leq \widetilde{\mathcal{O}}(H\sqrt{|\mathcal{B}||\mathcal{Z}^{\mathrm{en}}||\mathcal{A}|K})$ and $\varepsilon_k^{\mathrm{opt}} = 0$. Due to completeness, we can invoke GOLF's coverability regret bound from Xie et al. (2023). Then, we prove that the AugMDP coverability is bounded by $|\mathcal{B}||\mathcal{Z}^{\mathrm{en}}||\mathcal{A}|$ in Ex-BMDPs via a change-of-measure argument. Thm. 3.6 then follows immediately from Thm. 3.2. Please see App. D.3 for the full proof.
+
+Remark on Discrete Rewards. It is possible to relax Assump. 3.5 by rounding rewards to fixed bins spaced $\epsilon$ apart. This discretization of width $\epsilon$ introduces an additional regret of at most $O(V_u^{\max}K\epsilon)$ and creates $|\mathcal{B}| = O(1 / \epsilon)$ (Wang et al., 2023a). Thus, if an algorithm has a regret bound of $O(\sqrt{|\mathcal{B}|K})$ , then applying this discretization would yield a regret bound of $O(V_u^{\max}\sqrt{K / \epsilon} +V_u^{\max}K\epsilon)$ . Then, we can choose $\epsilon = \Theta (K^{-1 / 3})$ to yield a regret bound of $O(V_u^{\max}K^{2 / 3})$ which is sublinear. Thus, even when rewards are continuous, we can avoid Assump. 3.5 by discretizing rewards and paying a regret with rate $O(K^{2 / 3})$ instead of the minimax-optimal rate of $O(K^{1 / 2})$ .
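+To see why $\epsilon = \Theta(K^{-1/3})$ balances the two terms, equate them:
+
+$$
+\sqrt{K / \epsilon} = K\epsilon \iff \epsilon^{3} = K^{-1} \iff \epsilon = K^{-1/3},
+$$
+
+at which point both terms are of order $K \cdot K^{-1/3} = K^{2/3}$, yielding the stated $O(V_u^{\max} K^{2/3})$ regret.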
+
+# 3.2. Proof for Main Reduction (Thm. 3.2)
+
+Proof of Thm. 3.2. First, we upper bound $\mathrm{OCE}_u^{\star}$ :
+
+$$
+\begin{array}{l}
+\mathrm{OCE}_u^{\star} \stackrel{(i)}{=} b_1^{\star} + V_{\mathrm{aug}}^{\star,1}\left(s_1, b_1^{\star}\right) \\
+\stackrel{(ii)}{\leq} b_1^{\star} + \widehat{V}_{1,k}(s_1, b_1^{\star}) + \varepsilon_k^{\mathrm{opt}} \\
+\stackrel{(iii)}{\leq} \widehat{b}_k + \widehat{V}_{1,k}(s_1, \widehat{b}_k) + \varepsilon_k^{\mathrm{opt}},
+\end{array}
+$$
+
+where (i) is by Sec. 2.1, (ii) is by optimism of the oracle (first condition of Def. 3.1), and (iii) is by the definition of $\widehat{b}_k$ via Eq. (4). Second, we lower bound $\mathrm{OCE}_u(\pi^k,\widehat{b}_k)$ :
+
+$$
+\begin{array}{l}
+\mathrm{OCE}_u\left(\pi^k, \widehat{b}_k\right) = \max_{b \in [0,1]} \left\{b + \mathbb{E}\left[u\left(Z\left(\pi^k, \widehat{b}_k\right) - b\right)\right]\right\} \\
+\geq \widehat{b}_k + \mathbb{E}\left[u\left(Z\left(\pi^k, \widehat{b}_k\right) - \widehat{b}_k\right)\right] \\
+= \widehat{b}_k + V_{\mathrm{aug}}^{\pi^k,1}(s_1, \widehat{b}_k).
+\end{array}
+$$
+
+Thus, $\sum_{k}\mathrm{OCE}_{u}^{\star} - \mathrm{OCE}_{u}(\pi^{k},\widehat{b}_{k})$ is bounded by:
+
+$$
+\begin{array}{l}
+\leq \sum_k \widehat{V}_{1,k}\left(s_1, \widehat{b}_k\right) - V_{\mathrm{aug}}^{\pi^k,1}\left(s_1, \widehat{b}_k\right) + \varepsilon_k^{\mathrm{opt}} \\
+\leq V_u^{\max} \operatorname{Reg}_{\mathrm{Opt}}(K) + \sum_k \varepsilon_k^{\mathrm{opt}},
+\end{array}
+$$
+
+where the last inequality is due to the oracle's regret bound (second condition of Def. 3.1).
+
+Computational Complexity of Alg. 1. There are two sources of computational cost. The first cost is OPTALG: of the examples we discussed, UCB-VI and Rep-UCB are computationally and oracle efficient, while GOLF is inefficient due to its version-space optimism (Dann et al., 2018). The second cost is computing $\widehat{b}_k$: solving Eq. (4) exactly is difficult since the objective is non-convex. However, it can be efficiently approximated by projecting rewards to a discrete grid with spacing $\iota = \mathcal{O}(1 / K)$, so each round's Eq. (4) can be solved in $\mathcal{O}(K)$ time. Notably, the approximation error per round is at most $H\iota$, so the error across $K$ rounds is at most $\mathcal{O}(H\iota \cdot K) = \mathcal{O}(H)$, a lower-order term (Wang et al., 2023a, App. H). Thus, provided that the oracle is efficient, Alg. 1 is both computationally and statistically efficient for OCE RL.
+
+# Algorithm 2 Meta-algorithm for PO oracles
+
+1: Input: number of rounds $K$, initial budget set $\mathcal{B}$ (Assump. 3.5), oracle POALG (Def. 4.1).
+2: Let $d_1^{\mathrm{aug}} = (\delta(s_1), \mathrm{Unif}(\mathcal{B}))$ be the init. augmented state distribution. Initialize POALG with $d_1^{\mathrm{aug}}$.
+3: for round $k = 1,2,\ldots ,K$ do
+4: Obtain policy $\pi^k$ and value estimate $\widehat{V}_1^{\pi^k}$ from running POALG in the AugMDP.
+5: Compute the initial budget $\widehat{b}_k$ with Eq. (5).
+6: Give init. augmented state $(s_1, \widehat{b}_k)$ to POALG.
+7: end for
+
+# 4. Meta-Algorithm with Policy Optimization
+
+In this section, we propose a meta-algorithm that uses standard policy optimization (PO) oracles and enjoys a new type of guarantee, called local improvement, which ensures that the policy's performance monotonically improves at each round. First, we formalize the conditions needed for the risk-neutral PO oracle, which we remark are satisfied by gradient-based algorithms such as REINFORCE, NPG and PPO (Kakade, 2001; Agarwal et al., 2021; Grudzien et al., 2022).
+
+Definition 4.1 (PO Oracle). Let $d_{1}$ denote an exploratory initial state distribution. At each round $k = 1,2,\dots ,K$ , the oracle POALG outputs a policy $\pi^k$ and its value estimate $\widehat{V}_1^{\pi^k}(\cdot)$ such that $\mathbb{E}_{s_1\sim d_1}|V_1^{\pi^k}(s_1) - \widehat{V}_1^{\pi^k}(s_1)|\leq \varepsilon_k^{\mathrm{po}}$ . Then, there are two conditions on $\{\pi^k\}_{k\in [K]}$ 's performance:
+
+1. (Approx. Improvement) $V_{1}^{\pi^{k + 1}}(d_{1}) \geq V_{1}^{\pi^{k}}(d_{1}) - \varepsilon_{k}^{\mathrm{po}}$, where $V(d_{1}) \coloneqq \mathbb{E}_{s_{1} \sim d_{1}} V(s_{1})$ and $\varepsilon_{k}^{\mathrm{po}}$ are slack variables;
+2. (Global Convergence) $\sum_{k=1}^{K}\left(V_{1}^{\star}(d_{1})-V_{1}^{\pi^{k}}(d_{1})\right) \leq V^{\max}\operatorname{Reg}_{\mathrm{PO}}(K)$, where $\operatorname{Reg}_{\mathrm{PO}}(K)$ bounds the total suboptimality gap of POALG across $K$ rounds.
+
+Since this oracle will be applied in the AugMDP, the $V^{\pi^k}$ term will be set to $V_{\mathrm{aug}}^{\pi^k}$. Moreover, we remark that the small estimation error condition $\mathbb{E}_{s_1\sim d_1}|V_1^{\pi^k}(s_1) - \widehat{V}_1^{\pi^k}(s_1)|\leq \varepsilon_k^{\mathrm{po}}$ can be satisfied via off-policy evaluation (OPE) using the data collected in prior rounds (Munos and Szepesvári, 2008; Chang et al., 2022; Kallus and Uehara, 2020). Given access to a simulator, we can also estimate the value by collecting online Monte Carlo roll-outs.
+
+Given the oracle POALG, our meta-algorithm (Alg. 2) proceeds as follows: at each round $k = 1, 2, \ldots, K$ , we query POALG with the initial budget distribution Unif( $\mathcal{B}$ ), where $\mathcal{B}$ is the set of possible initial budgets that we assume to be discrete (Assump. 3.5). We require Assump. 3.5 because PO oracles, unlike optimistic ones, do not strategically explore and thus rely on an exploratory initial state distribution even in risk-neutral RL (Agarwal et al., 2021). Then, POALG returns a policy $\pi^k$ and an estimate of its value $\widehat{V}_1^{\pi^k}(\cdot)$ , with which we compute the initial budget $\widehat{b}_k$ via
+
+$$
+\widehat{b}_k \leftarrow \arg \max_{b_1 \in \mathcal{B}} \left\{b_1 + \widehat{V}_1^{\pi^k}(s_1, b_1)\right\}. \tag{5}
+$$
+
+This is different from the optimistic version (Eq. (4)) since the maximization is restricted to $\mathcal{B}$ . We now state our main results for Alg. 2, starting with global convergence.
+
+Theorem 4.2 (Global Convergence). Under Assump. 3.5 and assuming POALG satisfies the global convergence criterion of Def. 4.1, running Alg. 2 ensures that:
+
+$$
+\begin{array}{l} \sum_ {k = 1} ^ {K} \mathrm {O C E} _ {u} ^ {\star} - \mathrm {O C E} _ {u} \left(\pi^ {k}, \widehat {b} _ {k}\right) \\ \leq | \mathcal {B} | \left(V _ {u} ^ {\max } \operatorname {R e g} _ {\mathrm {P O}} (K) + 2 \sum_ {k} \varepsilon_ {k} ^ {\mathrm {p o}}\right). \\ \end{array}
+$$
+
+This theorem bounds Alg. 2's sub-optimality by the PO oracle's sub-optimality, up to a factor of $|\mathcal{B}|V_u^{\max}$ , plus the value estimation errors. Thus, if POALG has small value estimation errors and converges to the optimal policy, then Alg. 2 converges to the optimal OCE policy. We highlight that Thm. 4.2 is a finite-sample PAC bound whereas prior guarantees for risk-sensitive policy gradients have been asymptotic (Tamar et al., 2015a;b).
+
+Next, toward stating the local improvement result, we introduce the risk lower bound (RLB) and its per-round value $\mathrm{RLB}^{(k)} \coloneqq \mathrm{RLB}(\pi^k)$, where:
+
+$$
+\operatorname {R L B} (\pi) := \max _ {b _ {1} \in \mathcal {B}} \left\{b _ {1} + V _ {\mathrm {a u g}} ^ {\pi , 1} \left(s _ {1}, b _ {1}\right) \right\}. \tag {6}
+$$
+
+Notice that the lower bound is tight at $\pi^{\star}$ : $\mathrm{RLB}(\pi^{\star}) = \mathrm{OCE}_u^{\star}$ , due to Eq. (3). In general, RLB approximately lower bounds the true OCE of $(\pi^k,\widehat{b}_k)$ , as the following lemma proves.
+
+Lemma 4.3 (RLB). The $\mathrm{RLB}^{(k)}$ approximately lower bounds the true OCE of $\pi^k$ with initial budget $\widehat{b}_k$ :
+
+$$
+\mathrm {R L B} ^ {(k)} - \mathrm {O C E} _ {u} \left(\pi^ {k}, \widehat {b} _ {k}\right) \leq 2 | \mathcal {B} | \varepsilon_ {k} ^ {\mathrm {p o}}. \tag {7}
+$$
+
+Moreover, the lower bound is tight on average:
+
+$$
+\sum_ {k} \mathrm {O C E} _ {u} \left(\pi^ {k}, \widehat {b} _ {k}\right) - \mathrm {R L B} ^ {(k)} \leq | \mathcal {B} | V _ {u} ^ {\max } \operatorname {R e g} _ {\mathrm {P O}} (K).
+$$
+
+We now state the approximate improvement guarantee.
+
+Theorem 4.4 (Local Improvement). Under Assump. 3.5 and assuming POALG satisfies the approximate improvement criterion of Def. 4.1, running Alg. 2 ensures that:
+
+$$
+\forall k \in [ K ]: \mathrm {R L B} ^ {(k + 1)} \geq \mathrm {R L B} ^ {(k)} - | \mathcal {B} | \varepsilon_ {k} ^ {\mathrm {p o}}.
+$$
+
+This theorem shows that Alg. 2's policies are approximately improving at each step, up to an error $|\mathcal{B}|\varepsilon_k^{\mathrm{po}}$ . Hence, if the value estimation error from POALG is small, then the new policy's RLB cannot be much worse than the current. While local improvement is known for risk-neutral RL (Agarwal et al., 2021), to the best of our knowledge, Thm. 4.4 is the first analog in risk-sensitive RL. In sum, Thm. 4.2 and Thm. 4.4 provide complementary global and local guarantees for OCE RL, stated in a finite-sample manner.
+
+
+Figure 1. A simple MDP where the optimal CVaR policy is history-dependent. Each policy's cumulative reward distribution is shown in the table below.
+
+| Action at $s_2$ | Cumulative-Reward Dist. | $\mathrm{CVaR}_{0.25}$ |
+| --- | --- | --- |
+| $a_1$ (Markov) | $\{(0, 1/8), (1, 1/8), (1.5, 3/8), (2.5, 3/8)\}$ | 0.5 |
+| $a_2$ (Markov) | $\{(0.5, 1/2), (1.5, 1/2)\}$ | 0.5 |
+| $a_1$ if $r_1 = 0$; $a_2$ if $r_1 = 1$ | $\{(0, 1/8), (1.5, 7/8)\}$ | 0.75 |
+
+Table 1. The cumulative reward distribution of Markovian policies (rows 1-2) and the optimal policy (row 3). For the distribution, $\{(v_i,p_i)\}_{i}$ denotes a random variable that takes value $v_{i}$ w.p. $p_i$ s.t. $\sum_{i}p_{i} = 1$ . The optimal CVaR policy has $\mathrm{CVaR}_{0.25}^{\star} = 0.75$ while both Markovian policies have $\mathrm{CVaR}_{0.25} = 0.5$ .
+
+# 4.1. Case Study: Natural Policy Gradient
+
+| OCE | Optimistic Alg (Alg. 1) | PO Alg (Alg. 2) | Best Markovian | Markovian Optimal? |
+| --- | --- | --- | --- | --- |
+| $\mathbb{E}[X] - \mathrm{Var}(X)$ | $1.07 \pm 0.01$ | $1.06 \pm 0.01$ | 0.95 | ✗ |
+| $\mathbb{E}[X] - 2\,\mathrm{Var}(X)$ | $0.81 \pm 0.01$ | $0.75 \pm 0.08$ | 0.5 | ✗ |
+| $\mathrm{Entr}_{-1.0}(X)$ | $1.25 \pm 0.01$ | $1.23 \pm 0.03$ | 1.25 | ✓ |
+| $\mathrm{Entr}_{-2.0}(X)$ | $0.90 \pm 0.02$ | $0.90 \pm 0.01$ | 0.91 | ✓ |
+| $\mathrm{CVaR}_{0.25}(X)$ | $0.75 \pm 0.02$ | $0.71 \pm 0.08$ | 0.5 | ✗ |
+| $\mathrm{CVaR}_{0.5}(X)$ | $1.12 \pm 0.03$ | $1.12 \pm 0.03$ | 1.0 | ✗ |
+
+Table 2. We benchmark our optimistic Alg. 1 with UCB-VI as OPTALG and our PO Alg. 2 with NPG as POALG against the best Markovian policy for various OCEs. We repeat the experiment 10 times and report $95\%$ confidence intervals for the average performance.
+
+As an example POALG, we consider natural policy gradients (NPG; Kakade, 2001), whose core ideas underpin TRPO and PPO (Schulman et al., 2015; 2017). Let $\pi_h^\theta$ denote an augmented policy with parameters $\theta_h$. Then, the NPG update is given by
+
+$$
+\theta_h^{k+1} = \theta_h^k + \eta F_{h,k}^{\dagger} \nabla_{\theta_h} V_{\mathrm{aug}}^{\pi^k}, \tag{8}
+$$
+
+where $F_{h,k} = \mathbb{E}_{\pi^k}\big[\nabla_\theta \log \pi_h^k (a_h\mid s_h,b_h)\,\nabla_\theta \log \pi_h^k (a_h\mid s_h,b_h)^\top \big]$ is the Fisher information matrix and $F_{h,k}^{\dagger}$ is its pseudo-inverse. A special class of policies are softmax policies: $\pi_h^\theta (a\mid s,b)\propto \exp (\theta_{s,b,a}^h)$ with $\theta_{s,b,a}^{h}\in \mathbb{R}$. Under the softmax parameterization, the NPG update in Eq. (8) is equivalent to soft policy iteration (Kakade, 2001):
+
+$$
+\pi^ {k + 1} (a \mid s, b) \propto \pi^ {k} (a \mid s, b) \exp (\eta \cdot Q _ {\mathrm {a u g}} ^ {\pi^ {k}} (s, b, a)).
+$$
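+At a fixed augmented state $(s, b)$, the soft policy iteration update can be sketched as follows (a tabular illustration in log space for numerical stability; the $Q$-estimate is assumed given):
+
+```python
import numpy as np

def soft_policy_iteration(pi, q, eta):
    """pi^{k+1}(a|s,b) proportional to pi^k(a|s,b) * exp(eta * Q_aug(s,b,a))."""
    pi = np.asarray(pi, dtype=float)
    q = np.asarray(q, dtype=float)
    logits = np.log(pi) + eta * q
    logits -= logits.max()            # shift logits for numerical stability
    new_pi = np.exp(logits)
    return new_pi / new_pi.sum()      # renormalize to a distribution
+```
+
+With $\eta = 0$ the policy is unchanged, while large $\eta$ approaches the greedy policy on $Q$.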
+
+We now show that NPG in the AugMDP satisfies both conditions in Def. 4.1. First, for local improvement, Agarwal et al. (2021, Lemma 5.2) implies that $\mathbb{E}_{b\sim \mathrm{Unif}(\mathcal{B})}[V_{\mathrm{aug}}^{\pi^{k + 1}}(s_1,b) - V_{\mathrm{aug}}^{\pi^k}(s_1,b)]\geq 0$, which uses the fact that the initial budget distribution is $\mathrm{Unif}(\mathcal{B})$. Then, for global convergence, Agarwal et al. (2021, Theorem 5.3) proves that $\operatorname{Reg}_{\mathrm{PO}}(K)\leq \mathcal{O}(H)$, provided we set an appropriate learning rate $\eta = H\log |\mathcal{A}|$. Thus, NPG satisfies Def. 4.1, so we can apply both Thm. 4.2 and Thm. 4.4. To the best of our knowledge, these are the first non-asymptotic guarantees for gradient-based risk-sensitive RL. In App. E.1, we extend this analysis further to smooth policy parameterizations and unknown $Q_{\mathrm{aug}}$ functions, using compatible function approximation à la Agarwal et al. (2021); Xiao (2022).
+
+# 5. Simulation Experiments
+
+We describe a numerical simulation to demonstrate the importance of learning history-dependent policies for OCE RL and to empirically evaluate our algorithms. Our code can be found at https://github.com/kiwenw/ocelr.
+
+Setting up synthetic MDP. The proof-of-concept MDP is shown in Figure 1 and has two states. At $s_1$, all actions yield a random reward $r_1 \sim \mathrm{Ber}(0.5)$ and transition to $s_2$. At $s_2$, the first action $a_1$ gives a random reward $r_2 \mid s_2, a_1 \sim 1.5 \cdot \mathrm{Ber}(0.75)$, while the other action $a_2$ gives a deterministic reward $r_2 \mid s_2, a_2 = 0.5$. The trajectory ends after $s_2$.
+
+The MDP is designed so that the optimal CVaR policy is non-Markovian. Due to the MDP's simplicity, we can compute the optimal CVaR and the Markovian policies' CVaR in closed form, which we list in Table 1. Specifically, the optimal action at $s_2$ depends on the random outcome of the first reward $r_1$ : if $r_1 = 0$ then it should be risky and pick $a_1$ ; if $r_1 = 1$ then it should be conservative and pick $a_2$ . This optimal policy achieves $\mathrm{CVaR}_{0.25}^{\star} = 0.75$ , while Markovian policies, which do not react to $r_1$ , can only achieve $\mathrm{CVaR}_{0.25} = 0.5$ . This reinforces our thesis that history-dependent policies are required for optimality.
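+As a sanity check, the distributions in Table 1 can be reproduced by enumerating the (at most four) trajectories of this MDP; the policy encoding below is an illustrative convenience:
+
+```python
from collections import defaultdict

def return_dist(act_at_s2):
    """Cumulative-reward distribution of the Figure 1 MDP.
    act_at_s2 maps the realized first reward r1 to 'a1' or 'a2'."""
    dist = defaultdict(float)
    for r1, p1 in [(0.0, 0.5), (1.0, 0.5)]:        # r1 ~ Ber(0.5)
        if act_at_s2(r1) == "a1":
            outcomes = [(1.5, 0.75), (0.0, 0.25)]  # r2 ~ 1.5 * Ber(0.75)
        else:
            outcomes = [(0.5, 1.0)]                # r2 = 0.5 deterministically
        for r2, p2 in outcomes:
            dist[r1 + r2] += p1 * p2
    return dict(dist)
+```
+
+For the history-dependent policy ($a_1$ if $r_1 = 0$, else $a_2$), this returns $\{0.0: 1/8,\ 1.5: 7/8\}$, matching row 3 of Table 1.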
+
+Experiment with tabular policies. We now apply our meta-algorithms to the synthetic MDP. For our optimistic meta-algorithm (Alg. 1), we use UCB-VI (Azar et al., 2017) as the oracle OPTALG. For our gradient-based meta-algorithm (Alg. 2), we use NPG as the oracle POALG. We evaluate our algorithms and compute the best Markovian policies for six OCEs: two CVaRs, two entropic risks and two mean-variances, where recall CVaR and entropic risks are defined as:
+
+$$
+\operatorname {C V a R} _ {\tau} (X) := \max _ {b \in \mathbb {R}} \left\{b - \tau^ {- 1} \mathbb {E} [ (b - X) _ {+} ] \right\},
+$$
+
+$$
+\operatorname {E n t r} _ {\beta} (X) := \frac {1}{\beta} \ln \mathbb {E} [ \exp (\beta X) ].
+$$
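+Both risk measures can be evaluated in closed form for the discrete return distributions in Table 1; a small sketch follows (for discrete $X$, the CVaR dual objective is concave and piecewise linear, so its maximum is attained at an atom of $X$):
+
+```python
import math

def cvar(values, probs, tau):
    """CVaR_tau(X) = max_b { b - E[(b - X)_+] / tau } for a discrete X.
    The dual objective is concave and piecewise linear, so it suffices
    to evaluate b at the atoms of X."""
    return max(
        b - sum(p * max(b - x, 0.0) for x, p in zip(values, probs)) / tau
        for b in values
    )

def entr(values, probs, beta):
    """Entropic risk Entr_beta(X) = (1/beta) ln E[exp(beta X)]."""
    return math.log(sum(p * math.exp(beta * x) for x, p in zip(values, probs))) / beta
+```
+
+For the optimal distribution $\{(0, 1/8), (1.5, 7/8)\}$ from Table 1, `cvar` with $\tau = 0.25$ returns $0.75$, matching $\mathrm{CVaR}_{0.25}^{\star}$.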
+
+The results are in Table 2, and we see that our algorithms are consistently better than the best Markovian policy. We remark that entropic risk is a special case where the optimal policy is Markovian (Fei et al., 2020; 2021); nonetheless, our algorithms still consistently learn the optimal policy, despite the AugMDP not being necessary for entropic risk.
+
+Experiment with neural network policies. To test the versatility of our PO algorithm (Alg. 2), we also evaluate three deep RL oracles: PPO with forward KL (Schulman et al., 2017), PPO with backward KL (Hsu et al., 2020), and REINFORCE (Sutton et al., 1999). Using neural networks to approximate the policy and $Q$ functions, we train all three oracles to maximize $\mathrm{CVaR}_{0.25}$ in the synthetic MDP.
+
+We plot the learning curves in Figure 2, where we see that Alg. 2 consistently converges to the optimal $\mathrm{CVaR}_{0.25}^{\star}$ value of 0.75 for all three oracles.
+
+
+Figure 2. Learning curves for Alg. 2 with three oracles: REINFORCE and PPO with fwd & bwd KL. We repeat runs five times and report $95\%$ confidence intervals for the mean performance.
+
+An interesting trend is that Alg. 2 tends to first plateau at a Markovian policy at $\sim 2k$ steps, and then eventually converge to the optimal non-Markovian policy at $\sim 8k$ steps, suggesting that Markovian policies are perhaps easier-to-learn local optima in the AugMDP. Nonetheless, our algorithm consistently converges to the optimal policy given enough gradient steps. In sum, our experiments demonstrate the importance of history-dependent policies in risk-sensitive RL and the effectiveness of our AugMDP algorithms. We report all hyperparameters and training details in App. C.
+
+# 6. Conclusion
+
+In this paper, we proposed two meta-algorithms for RSRL with the static OCE risk. These meta-algorithms provide a general reduction from OCE RL to risk-neutral RL in the augmented MDP framework. First, we proposed an optimistic meta-algorithm (Alg. 1) that generalizes all prior bounds in CVaR RL to the OCE setting when UCB-VI and Rep-UCB are used as the optimistic oracle. Moreover, using GOLF as the optimistic oracle, we proved the first risk-sensitive regret bounds for exogenous block MDPs. Second, we proposed a gradient-based meta-algorithm (Alg. 2) that enjoys both global convergence and local improvement. With this framework, we deduced the first finite-sample bounds for policy gradients in risk-sensitive RL. Finally, we evaluated our algorithms in a proof-of-concept MDP, where they consistently learned the optimal policy and outperformed the best Markovian policy. A promising direction for future work is to scale our ideas to more complex tasks such as robotics or finetuning LLMs (Ouyang et al., 2022). Given the plurality of values and risks, it is also promising to apply our ideas to design algorithms for risk-sensitive pluralistic alignment (Sorensen et al., 2024; Wang et al., 2024a).
+
+Acknowledgements. This work was supported by a Google PhD fellowship and grants NSF IIS-2154711, NSF CAREER 2339395, IIS-1846210, and DARPA LANCER: LeArning Network CybERagents.
+
+# Impact Statement
+
+This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none which we feel must be specifically highlighted here.
+
+# References
+
+Alekh Agarwal, Sham Kakade, Akshay Krishnamurthy, and Wen Sun. FLAMBE: Structural complexity and representation learning of low rank MDPs. Advances in Neural Information Processing Systems, 33:20095-20107, 2020.
+Alekh Agarwal, Sham M Kakade, Jason D Lee, and Gaurav Mahajan. On the theory of policy gradient methods: Optimality, approximation, and distribution shift. The Journal of Machine Learning Research, 22(1):4431-4506, 2021.
+Maurice Allais. Allais paradox. In Utility and probability, pages 3-9. Springer, 1990.
+Philippe Artzner, Freddy Delbaen, Jean-Marc Eber, and David Heath. Coherent measures of risk. Mathematical finance, 9(3):203-228, 1999.
+Alex Ayoub, Kaiwen Wang, Vincent Liu, Samuel Robertson, James McInerney, Dawen Liang, Nathan Kallus, and Csaba Szepesvári. Switching the loss reduces the cost in batch reinforcement learning. In *Forty-first International Conference on Machine Learning*, 2024.
+Mohammad Gheshlaghi Azar, Ian Osband, and Rémi Munos. Minimax regret bounds for reinforcement learning. In International Conference on Machine Learning, pages 263-272. PMLR, 2017.
+Osbert Bastani, Yecheng Jason Ma, Estelle Shen, and Wanqiao Xu. Regret bounds for risk-sensitive reinforcement learning. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, Advances in Neural Information Processing Systems, 2022. URL https://openreview.net/forum?id=yJEUDfzsTX7.
+Nicole Bäuerle and Alexander Glauner. Minimizing spectral risk measures applied to Markov decision processes. Mathematical Methods of Operations Research, 94(1): 35-69, 2021.
+Nicole Bäuerle and Jonathan Ott. Markov decision processes with average-value-at-risk criteria. Mathematical Methods of Operations Research, 74(3):361-379, 2011.
+Marc G Bellemare, Will Dabney, and Rémi Munos. A distributional perspective on reinforcement learning. In International Conference on Machine Learning, pages 449-458. PMLR, 2017.
+
+Aharon Ben-Tal and Marc Teboulle. An old-new concept of convex risk measures: The optimized certainty equivalent. Mathematical Finance, 17(3):449-476, 2007.
+Andrew Bennett, Nathan Kallus, Miruna Oprescu, Wen Sun, and Kaiwen Wang. Efficient and sharp off-policy evaluation in robust markov decision processes. Advances in Neural Information Processing Systems, 2024.
+Michael Bowling, John D Martin, David Abel, and Will Dabney. Settling the reward hypothesis. In International Conference on Machine Learning, pages 3003-3020. PMLR, 2023.
+Jonathan Chang, Kaiwen Wang, Nathan Kallus, and Wen Sun. Learning bellman complete representations for offline policy evaluation. In International Conference on Machine Learning, pages 2938-2971. PMLR, 2022.
+Yu Chen, XiangCheng Zhang, Siwei Wang, and Longbo Huang. Provable risk-sensitive distributional reinforcement learning with general function approximation. In *Forty-first International Conference on Machine Learning*, 2024. URL https://openreview.net/forum?id=0xmfExPqFf.
+Yinlam Chow and Mohammad Ghavamzadeh. Algorithms for cvar optimization in mdps. Advances in neural information processing systems, 27, 2014.
+Yinlam Chow, Mohammad Ghavamzadeh, Lucas Janson, and Marco Pavone. Risk-constrained reinforcement learning with percentile risk criteria. Journal of Machine Learning Research, 18(167):1-51, 2018.
+Antonio Coronato, Muddasar Naeem, Giuseppe De Pietro, and Giovanni Paragliola. Reinforcement learning for intelligent healthcare applications: A survey. Artificial intelligence in medicine, 109:101964, 2020.
+Will Dabney, Georg Ostrovski, David Silver, and Rémi Munos. Implicit quantile networks for distributional reinforcement learning. In International conference on machine learning, pages 1096-1105. PMLR, 2018.
Christoph Dann, Nan Jiang, Akshay Krishnamurthy, Alekh Agarwal, John Langford, and Robert E Schapire. On oracle-efficient PAC RL with rich observations. Advances in neural information processing systems, 31, 2018.
Yihan Du, Siwei Wang, and Longbo Huang. Provably efficient risk-sensitive reinforcement learning: Iterated CVaR and worst path. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=Yn0xg-kHNW-.
Yonathan Efroni, Dipendra Misra, Akshay Krishnamurthy, Alekh Agarwal, and John Langford. Provably filtering exogenous distractors using multistep inverse dynamics. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=RQLLzMCefQu.
+Yingjie Fei, Zhuoran Yang, Yudong Chen, Zhaoran Wang, and Qiaomin Xie. Risk-sensitive reinforcement learning: Near-optimal risk-sample tradeoff in regret. Advances in Neural Information Processing Systems, 33:22384-22395, 2020.
+Yingjie Fei, Zhuoran Yang, Yudong Chen, and Zhaoran Wang. Exponential bellman equation and improved regret bounds for risk-sensitive reinforcement learning. Advances in Neural Information Processing Systems, 34: 20436-20446, 2021.
+Hans Föllmer and Alexander Schied. Stochastic finance: an introduction in discrete time. Walter de Gruyter, 2011.
+Ido Greenberg, Yinlam Chow, Mohammad Ghavamzadeh, and Shie Mannor. Efficient risk-averse reinforcement learning. Advances in Neural Information Processing Systems, 35:32639-32652, 2022.
+Jakub Grudzien, Christian A Schroeder De Witt, and Jakob Foerster. Mirror learning: A unifying framework of policy optimisation. In International Conference on Machine Learning, pages 7825-7844. PMLR, 2022.
+Ronald A Howard and James E Matheson. Risk-sensitive markov decision processes. Management science, 18(7): 356-369, 1972.
+Chloe Ching-Yun Hsu, Celestine Mendler-Dünner, and Moritz Hardt. Revisiting design choices in proximal policy optimization. arXiv preprint arXiv:2009.10897, 2020.
+Audrey Huang, Jinglin Chen, and Nan Jiang. Reinforcement learning in low-rank mdps with density features. In International Conference on Machine Learning, pages 13710-13752. PMLR, 2023.
+Ehsan Imani, Kai Luedemann, Sam Scholnick-Hughes, Esraa Elelimy, and Martha White. Investigating the histogram loss in regression. arXiv preprint arXiv:2402.13425, 2024.
+Nan Jiang and Alekh Agarwal. Open problem: The dependence of sample complexity lower bounds on planning horizon. In Conference On Learning Theory, pages 3395-3398. PMLR, 2018.
+Chi Jin, Qinghua Liu, and Sobhan Miryoosefi. Bellman eluder dimension: New rich classes of rl problems, and sample-efficient algorithms. Advances in neural information processing systems, 34:13406-13418, 2021.
+
+Alistair EW Johnson, Tom J Pollard, Lu Shen, Li-wei H Lehman, Mengling Feng, Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. Mimic-iii, a freely accessible critical care database. Scientific data, 3(1):1-9, 2016.
+Daniel Kahneman and Amos Tversky. Prospect theory: An analysis of decision under risk. In Handbook of the fundamentals of financial decision making: Part I, pages 99-127. World Scientific, 2013.
+Sham M Kakade. A natural policy gradient. Advances in neural information processing systems, 14, 2001.
+Nathan Kallus and Masatoshi Uehara. Double reinforcement learning for efficient off-policy evaluation in markov decision processes. Journal of Machine Learning Research, 21(167):1-63, 2020.
+Ramtin Keramati, Christoph Dann, Alex Tamkin, and Emma Brunskill. Being optimistic to be conservative: Quickly learning a cvar policy. In Proceedings of the AAAI conference on artificial intelligence, 2020.
+Navdeep Kumar, Esther Derman, Matthieu Geist, Kfir Yehuda Levy, and Shie Mannor. Policy gradient for rectangular robust markov decision processes. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id=NLpXRrijpa6.
Thanh Lam, Arun Verma, Bryan Kian Hsiang Low, and Patrick Jaillet. Risk-aware reinforcement learning with coherent risk measures and non-linear function approximation. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=-RwZ0Vybbj.
+Hao Liang and Zhiquan Luo. Regret bounds for risk-sensitive reinforcement learning with lipschitz dynamic risk measures. In International Conference on Artificial Intelligence and Statistics, pages 1774-1782. PMLR, 2024.
+Shiau Hong Lim and Ilyas Malik. Distributional reinforcement learning for risk-sensitive policies. Advances in Neural Information Processing Systems, 35:30977-30989, 2022.
+Xiaoteng Ma, Li Xia, Zhengyuan Zhou, Jun Yang, and Qianchuan Zhao. Dsac: Distributional soft actor critic for risk-sensitive reinforcement learning. arXiv preprint arXiv:2004.14547, 2020.
+Yecheng Ma, Dinesh Jayaraman, and Osbert Bastani. Conservative offline distributional reinforcement learning. Advances in Neural Information Processing Systems, 34: 19235-19247, 2021.
+
Harry Markowitz. Portfolio selection. The Journal of Finance, 7(1):77-91, 1952.
+Zakaria Mhammedi, Dylan J Foster, and Alexander Rakhlin. The power of resets in online reinforcement learning. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. URL https://openreview.net/forum?id=7sACcaOmGi.
+Sidharth Mudgal, Jong Lee, Harish Ganapathy, Yaguang Li, Tao Wang, Yanping Huang, Zhifeng Chen, Heng-Tze Cheng, Michael Collins, Trevor Strohman, et al. Controlled decoding from language models. In International Conference on Machine Learning, pages 36486-36503. PMLR, 2024.
+Rémi Munos and Csaba Szepesvári. Finite-time bounds for fitted value iteration. Journal of Machine Learning Research, 9(5), 2008.
+Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in neural information processing systems, 35:27730-27744, 2022.
+Fabio Pardo, Arash Tavakoli, Vitaly Levdik, and Petar Kormushev. Time limits in reinforcement learning. In International Conference on Machine Learning, pages 4045-4054. PMLR, 2018.
+Martin L Puterman. Markov decision processes: discrete stochastic dynamic programming. John Wiley & Sons, 2014.
+Marc Rigter, Bruno Lacerda, and Nick Hawes. One risk to rule them all: A risk-sensitive perspective on model-based offline reinforcement learning. Advances in Neural Information Processing Systems, 36, 2023.
+R Tyrrell Rockafellar and Stanislav Uryasev. Optimization of conditional value-at-risk. Journal of risk, 2:21-42, 2000.
+Andrzej Ruszczyński. Risk-averse dynamic programming for markov decision processes. Mathematical programming, 125:235-261, 2010.
+John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In International conference on machine learning, pages 1889-1897. PMLR, 2015.
+John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
+
Taylor Sorensen, Jared Moore, Jillian Fisher, Mitchell L Gordon, Niloofar Mireshghallah, Christopher Michael Rytting, Andre Ye, Liwei Jiang, Ximing Lu, Nouha Dziri, et al. Position: A roadmap to pluralistic alignment. In International Conference on Machine Learning, 2024.
Richard S. Sutton. The reward hypothesis. http://incompleteideas.net/rlai.cs.ualberta.ca/RLAI/rewardhypothesis.html, 2004.
+Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT press, 2018.
+Richard S Sutton, David McAllester, Satinder Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. Advances in neural information processing systems, 12, 1999.
+Aviv Tamar, Yinlam Chow, Mohammad Ghavamzadeh, and Shie Mannor. Policy gradient for coherent risk measures. Advances in neural information processing systems, 28, 2015a.
+Aviv Tamar, Yonatan Glassner, and Shie Mannor. Optimizing the cvar via sampling. In Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015b.
+John Tsitsiklis and Benjamin Van Roy. Analysis of temporal-difference learning with function approximation. Advances in neural information processing systems, 9, 1996.
+Masatoshi Uehara, Xuezhou Zhang, and Wen Sun. Representation learning for online and offline RL in low-rank MDPs. In ICLR, 2022. URL https://openreview.net/forum?id=J4iSIR9fhY0.
+Núria Armengol Urpi, Sebastian Curi, and Andreas Krause. Risk-averse offline reinforcement learning. In International Conference on Learning Representations, 2021.
+Kaiwen Wang, Nathan Kallus, and Wen Sun. Near-minimax-optimal risk-sensitive reinforcement learning with cvar. International Conference on Machine Learning, 2023a.
+Kaiwen Wang, Junxiong Wang, Yueying Li, Nathan Kallus, Immanuel Trummer, and Wen Sun. Joingym: An efficient query optimization environment for reinforcement learning. arXiv preprint arXiv:2307.11704, 2023b.
+Kaiwen Wang, Rahul Kidambi, Ryan Sullivan, Alekh Agarwal, Christoph Dann, Andrea Michi, Marco Gelmi, Yunxuan Li, Raghav Gupta, Avinava Dubey, et al. Conditional language policy: A general framework for steerable multi-objective finetuning. Findings of Empirical Methods in Natural Language Processing, 2024a.
+
+Kaiwen Wang, Owen Oertell, Alekh Agarwal, Nathan Kallus, and Wen Sun. More benefits of being distributional: Second-order bounds for reinforcement learning. In *Forty-first International Conference on Machine Learning*, 2024b. URL https://openreview.net/forum?id=kZBCFQe1Ej.
+Kaiwen Wang, Nathan Kallus, and Wen Sun. The central role of the loss function in reinforcement learning. Statistical Science, 2025a.
+Kaiwen Wang, Jin Peng Zhou, Jonathan Chang, Zhaolin Gao, Nathan Kallus, Kianté Brantley, and Wen Sun. Value-guided search for efficient chain-of-thought reasoning. arXiv preprint arXiv:2505.17373, 2025b.
+Zhiyong Wang, Dongruo Zhou, John Lui, and Wen Sun. Model-based rl as a minimalist approach to horizon-free and second-order bounds. arXiv preprint arXiv:2408.08994, 2024c.
+Wolfram Wiesemann, Daniel Kuhn, and Berç Rustem. Robust markov decision processes. Mathematics of Operations Research, 38(1):153-183, 2013.
+Lin Xiao. On the convergence rates of policy gradient methods. Journal of Machine Learning Research, 23 (282):1-36, 2022.
+Tengyang Xie, Dylan J Foster, Yu Bai, Nan Jiang, and Sham M. Kakade. The role of coverage in online reinforcement learning. In *The Eleventh International Conference on Learning Representations*, 2023. URL https://openreview.net/forum?id=LQIjzPdDt3q.
+Wenhao Xu, Xuefeng Gao, and Xuedong He. Regret bounds for markov decision processes with recursive optimized certainty equivalents. International Conference on Machine Learning, 2023.
+Menahem E Yaari. The dual theory of choice under risk. *Econometrica: Journal of the Econometric Society*, pages 95-115, 1987.
+Andrea Zanette and Emma Brunskill. Tighter problem-dependent regret bounds in reinforcement learning without domain knowledge using value function bounds. In International Conference on Machine Learning, pages 7304-7312. PMLR, 2019.
+Yulai Zhao, Wenhao Zhan, Xiaoyan Hu, Ho fung Leung, Farzan Farnia, Wen Sun, and Jason D. Lee. Provably efficient CVar RL in low-rank MDPs. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=9x6yrFAPnx.
+
Jin Peng Zhou, Kaiwen Wang, Jonathan Chang, Zhaolin Gao, Nathan Kallus, Kilian Q Weinberger, Kianté Brantley, and Wen Sun. Q#: Provably optimal distributional RL for LLM post-training. arXiv preprint arXiv:2502.20548, 2025.
+Ruiwen Zhou, Minghuan Liu, Kan Ren, Xufang Luo, Weinan Zhang, and Dongsheng Li. Is risk-sensitive reinforcement learning properly resolved? arXiv preprint arXiv:2307.00547, 2023.
+
+# Appendices
+
+# A. List of Notations
+
+Table 3. List of notations used in the paper.
+
| Notation | Description |
| --- | --- |
| $\mathcal{S}, \mathcal{A}$ | State and action spaces. |
| $H$ | Time horizon. |
| $\Delta(S)$ | The set of distributions supported by the set $S$. |
| $\Pi_{\mathrm{Markov}}$ | Class of Markovian policies that act only based on the current state. |
| $\Pi_{\mathrm{HD}}$ | Class of history-dependent policies. |
| $\Pi_{\mathrm{aug}}, \Pi_{\mathrm{aug}}^{\mathrm{HD}}$ | Classes of Markovian and history-dependent policies in the AugMDP. |
| $Q_{\mathrm{aug}}^{\pi}, V_{\mathrm{aug}}^{\pi}$ | $Q$ and value functions of $\pi$ in the AugMDP. |
| $Q_{\mathrm{aug}}^{\star,h}, V_{\mathrm{aug}}^{\star,h}$ | Optimal $Q$ and value functions in the AugMDP. |
| $(\pi, b)$ | Run the policy $\pi$ from an initial state $(s_1, b)$ in the AugMDP. |
| $Z(\pi)$ | The cumulative reward distribution of policy $\pi$. |
| $\mathrm{OCE}_u(\pi)$ | Optimized certainty equivalent with utility $u$ for $Z(\pi)$ (see App. B for a primer on OCE). |
| $\mathrm{OCE}_u^{\star}$ | Optimal OCE of cumulative rewards by a history-dependent policy, i.e., $\max_{\pi \in \Pi_{\mathrm{HD}}} \mathrm{OCE}_u(\pi)$. |
| $\mathrm{RLB}(k)$ | The risk lower bound at round $k$ for policy optimization algorithms (defined in Eq. (6)). |
| $V_u^{\max}$ | Scale of the utility $u$, defined as $V_u^{\max} := \max_{c \in [-1,1]} \lvert u(c) \rvert$. |
+
+# B. Primer on Optimized Certainty Equivalents (OCE)
+
This section is a short primer on optimized certainty equivalents (OCE), which capture many important risk measures including conditional value-at-risk (CVaR) (Rockafellar and Uryasev, 2000), entropic risk (Föllmer and Schied, 2011) and Markowitz's mean-variance (Markowitz, 1952). For a utility function $u: \mathbb{R} \rightarrow [-\infty, \infty)$ , the OCE of a random variable $X$ is defined as:
+
+$$
+\mathrm {O C E} _ {u} (X) := \max _ {b \in \mathbb {R}} \{b + \mathbb {E} [ u (X - b) ] \}.
+$$
+
+In the paper, we focused on the case where $X$ is the cumulative reward distribution of some policy $\pi$ , and the goal is to learn the policy that maximizes the OCE. While the OCE is well-defined for any utility function, there are common regularity conditions proposed by Ben-Tal and Teboulle (2007) that ensure the OCE is well-behaved. These conditions are:
+
+[R1] $u$ is proper meaning that its domain is non-empty, i.e., $\mathrm{dom}(u) := \{t \in \mathbb{R} : u(t) > -\infty\} \neq \emptyset$ ;
+[R2] $u$ is closed meaning that its hypograph $\mathrm{hyp}(u)\coloneqq \{(t,r)\in \mathbb{R}\times \mathbb{R}:r\leq u(t)\}$ is a closed set;
+[R3] $u$ is concave meaning that its hypograph is convex;
+[R4] $u$ is non-decreasing;
[R5] $u(0) = 0$ and $1 \in \partial u(0)$ , where $\partial u(x) = \{g \in \mathbb{R} : u(z) \leq u(x) + g(z - x), \forall z \in \mathbb{R}\}$ is the subdifferential of the concave function $u$.
+
+Ben-Tal and Teboulle (2007, Theorem 2.1) showed that, if $u$ satisfies R1-R5, then $\mathrm{OCE}_u$ enjoys the following properties:
+
+[P1] Translation invariance: $\mathrm{OCE}_u(X + c) = \mathrm{OCE}_u(X) + c$ for constant $c\in \mathbb{R}$
+[P2] Monotonicity: if $X(\omega)\leq Y(\omega),\forall \omega \in \Omega$ , then $\mathrm{OCE}_u(X)\leq \mathrm{OCE}_u(Y)$
+[P3] Concavity: for any $X, Y \in L_{\infty}, \lambda \in [0,1]$ , we have $\mathrm{OCE}_u(\lambda X + (1 - \lambda)Y) \geq \lambda \mathrm{OCE}_u(X) + (1 - \lambda) \mathrm{OCE}_u(Y)$ ;
+[P4] Consistency: $\mathrm{OCE}_u(0) = 0$
+
+Translation invariance (P1) and consistency (P4) are intuitive since a deterministic payoff of $c$ should have a value of $c$ . Monotonicity (P2) states that if $X$ is dominated by $Y$ , then the risk of $X$ should be no greater than the risk of $Y$ . Concavity (P3) implies that $-\mathrm{OCE}_u$ is a convex risk measure. However, we remark that these properties do not imply that $\mathrm{OCE}_u$ is a coherent risk measure (Artzner et al., 1999), which would require sub-additivity and positive homogeneity. Indeed, convexity is a relaxation of sub-additivity and positive homogeneity, which are not always satisfied by $\mathrm{OCE}_u$ even if $u$ satisfies R1-R5. For example, the entropic risk is an OCE with an exponential utility function satisfying R1-R5, but it is not a coherent risk measure.
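The entropic-risk example above can be checked numerically. The following is our own small sketch (not from the paper) showing that entropic risk violates positive homogeneity, and hence cannot be coherent:

```python
import numpy as np

# Entropic risk Entr_beta(X) = (1/beta) log E[exp(beta X)] with beta < 0 (risk-averse).
def entropic(x, beta):
    return np.log(np.mean(np.exp(beta * x))) / beta

x = np.array([0.0, 1.0])          # fair coin paying 0 or 1 with equal probability
beta = -1.0
scaled = entropic(2.0 * x, beta)  # risk of the doubled payoff
homog = 2.0 * entropic(x, beta)   # what positive homogeneity would predict
# scaled != homog: entropic risk is not positively homogeneous, so not coherent
```

Both quantities stay below the mean payoff (risk aversion), yet they disagree, which is exactly the failure of positive homogeneity.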
+
+In Table 4 below, we provide important examples of OCE, which includes (1) expectation, (2) entropic risk, (3) CVaR and (4) Markowitz's mean-variance. Recall that CVaR and Entropic risk are defined by:
+
+$$
+\mathrm {C V a R} _ {\tau} (X) := \max _ {b \in \mathbb {R}} \{b - \tau^ {- 1} \mathbb {E} [ (b - X) _ {+} ] \}, \qquad \mathrm {E n t r} _ {\beta} (X) := \frac {1}{\beta} \log \mathbb {E} \exp (\beta X).
+$$
+
| Risk Name | Parameter | Utility function $u$ | $V_u^{\max}$ |
| --- | --- | --- | --- |
| Mean ($\mathbb{E}X$) | None | $u(t) = t$ | $1$ |
| Cond. Value-at-Risk ($\mathrm{CVaR}_\tau(X)$) | $\tau \in (0, 1]$ | $u(t) = -\tau^{-1}(-t)_+$ | $\tau^{-1}$ |
| Entropic Risk ($\mathrm{Entr}_\beta(X)$) | $\beta \in (-\infty, 0)$ | $u(t) = \frac{1}{\beta}(\exp(\beta t) - 1)$ | $\frac{1}{\lvert\beta\rvert}(\exp(\lvert\beta\rvert) - 1)$ |
| Mean-Variance ($\mathbb{E}X - c \cdot \mathrm{Var}(X)$) | $c > 0$ | $u(t) = t - ct^2$ if $t \leq \frac{1}{2c}$, else $u(t) = \frac{1}{4c}$ | $1 + c$ |
| Mean-CVaR$_\tau$ ($\kappa_1 \mathbb{E}X + (1 - \kappa_1) \cdot \mathrm{CVaR}_\tau(X)$) | $\kappa_1 \in [0, 1]$ | $u(t) = \kappa_1(t)_+ - \kappa_2(-t)_+$, where $\kappa_2 = \tau^{-1}(1 - \kappa_1) + \kappa_1$ | $\kappa_2$ |
+
+Table 4. Examples of OCE risk measures with their corresponding $u$ and ${V}_{u}^{\max }$ .
+
Table 4 also lists each OCE's $V_{u}^{\max}$, which, recall, is defined as $V_{u}^{\max} := \max_{c \in [-1,1]} |u(c)|$ and roughly measures the statistical hardness of learning $\mathrm{OCE}_u$ relative to $\mathbb{E}$ (which has $V_{u}^{\max} = 1$ ). We remark that the OCE is a Lipschitz risk measure (Liang and Luo, 2024; Chen et al., 2024) with Lipschitz constant $V_{u}^{\max}$, so $V_{u}^{\max}$ can also be interpreted in this way.
+
The first four rows of Table 4 are well-known risk measures and are known to be OCEs. The final row, $\mathrm{Mean\text{-}CVaR}_{\tau}$, is a lesser-known example of an OCE, but it is perhaps more relevant in practice since it captures both the average and tail risks together. We now prove that $\mathrm{Mean\text{-}CVaR}_{\tau}$ is an OCE with a piecewise linear utility.
+
+Theorem B.1. Let $0 \leq \kappa_{1} < 1 < \kappa_{2}$ and consider the piece-wise linear utility:
+
+$$
+u _ {\kappa_ {1}, \kappa_ {2}} (t) = \kappa_ {1} (t) _ {+} - \kappa_ {2} (- t) _ {+}.
+$$
+
+where we denote $(y)_{+} = \max (y,0)$ . Then, with $\tau = \frac{1 - \kappa_1}{\kappa_2 - \kappa_1}$ , we have that
+
+$$
+\mathrm {O C E} _ {u _ {\kappa_ {1}, \kappa_ {2}}} (X) = \kappa_ {1} \mathbb {E} X + (1 - \kappa_ {1}) \mathrm {C V a R} _ {\tau} (X).
+$$
+
+Proof. By Ben-Tal and Teboulle (2007, Example 2.3),
+
+$$
+\mathrm {O C E} _ {u _ {\kappa_ {1}, \kappa_ {2}}} (X) = \max _ {b \in [ 0, 1 ]} \left\{b + \kappa_ {1} \mathbb {E} \left[ (X - b) _ {+} \right] - \kappa_ {2} \mathbb {E} \left[ (b - X) _ {+} \right] \right\}. \tag {9}
+$$
+
+Moreover, the optimal dual variable is $b^{\star} = F^{\dagger}((1 - \kappa_{1}) / (\kappa_{2} - \kappa_{1}))$ , where $F^{\dagger}(t) = \inf \{x\mid F(x)\geq t\}$ is the quantile function and $F$ is the CDF of $X$ . Expanding Eq. (9) and using the fact that $(t)_{+} - (-t)_{+} = t$ , we get
+
+$$
+\begin{array}{l} \mathrm {O C E} _ {u _ {\kappa_ {1}, \kappa_ {2}}} (X) = \kappa_ {1} \mathbb {E} X + \max _ {b \in [ 0, 1 ]} \left\{(1 - \kappa_ {1}) b - (\kappa_ {2} - \kappa_ {1}) \mathbb {E} [ (b - X) _ {+} ] \right\} \\ = \kappa_ {1} \mathbb {E} X + (1 - \kappa_ {1}) \max _ {b \in [ 0, 1 ]} \left\{b - \frac {\kappa_ {2} - \kappa_ {1}}{1 - \kappa_ {1}} \mathbb {E} [ (b - X) _ {+} ] \right\} \\ = \kappa_ {1} \mathbb {E} X + (1 - \kappa_ {1}) \operatorname {C V a R} _ {\tau} (X), \\ \end{array}
+$$
+
+where $\tau = \frac{1 - \kappa_1}{\kappa_2 - \kappa_1}$ . This completes the proof.
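Theorem B.1 is easy to sanity-check numerically. Below is our own Monte Carlo sketch (constants chosen by us for illustration) comparing the OCE of the piecewise linear utility against $\kappa_1 \mathbb{E}X + (1-\kappa_1)\mathrm{CVaR}_\tau(X)$ on a uniform sample:

```python
import numpy as np

# Check Theorem B.1 with kappa_1 = 0.5, kappa_2 = 2, so tau = (1 - k1)/(k2 - k1) = 1/3.
k1, k2 = 0.5, 2.0
tau = (1.0 - k1) / (k2 - k1)
u = lambda t: k1 * np.maximum(t, 0.0) - k2 * np.maximum(-t, 0.0)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 1.0, size=200_000))

# LHS: OCE with the piecewise linear utility, maximized over a grid of b in [0, 1]
# (sufficient here since X is supported on [0, 1]).
lhs = max(b + np.mean(u(x - b)) for b in np.linspace(0.0, 1.0, 1001))

# RHS: kappa_1 * E[X] + (1 - kappa_1) * CVaR_tau(X), with CVaR estimated as the
# mean of the lowest tau-fraction of the sorted samples.
rhs = k1 * np.mean(x) + (1.0 - k1) * np.mean(x[: int(tau * len(x))])
```

Both sides agree up to Monte Carlo and grid error, matching the identity in the theorem.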
+
+
+
+# C. More Details on Experimental Setup
+
+# C.1. Network Architecture and Input Features
+
We parameterize both the augmented $Q$ -values and the softmax policy using multilayer perceptrons (MLPs) with two hidden layers of dimension 64. For the policy network, we one-hot encode the budget $b_{h}$ before passing it into the network. This transformation does not introduce approximation error since rewards in the MDP are discrete. We observed that when $b_{h}$ was fed in directly as a real number, policy training was less stable and often diverged. In contrast, the $Q$ -network was more robust to how $b_{h}$ is featurized, likely because the augmented value function is Lipschitz and monotonic in $b_{h}$ ; unlike the policy network, the $Q$ -network therefore receives $b_{h}$ as a direct input without encoding. Finally, since the task is finite-horizon, we incorporate the time step $h$ into the state representation, following Pardo et al. (2018).
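The one-hot budget featurization can be sketched as follows. This is a hypothetical illustration (the grid size and helper name are our assumptions, not from the paper): when per-step rewards lie on a fixed grid, the running budget $b_h$ also stays on that grid, so a one-hot encoding over the grid points introduces no approximation error.

```python
import numpy as np

# Hypothetical sketch: map a budget b in [0, 1] to a one-hot vector over
# n_bins evenly spaced grid points {0, 1/(n_bins-1), ..., 1}. Exact when
# all reachable budgets lie on this grid (discrete rewards).
def one_hot_budget(b, n_bins):
    idx = int(round(b * (n_bins - 1)))  # index of the nearest grid point
    vec = np.zeros(n_bins, dtype=np.float32)
    vec[idx] = 1.0
    return vec

# e.g. budgets on the grid {0, 0.25, 0.5, 0.75, 1}
feat = one_hot_budget(0.75, n_bins=5)
```

In the full pipeline, this vector would be concatenated with the state features (and the time step $h$) before being passed to the policy MLP.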
+
+# C.2. Regularization
+
Log-barrier regularization played a crucial role in stabilizing the optimization and preventing it from getting stuck due to vanishing gradients. Indeed, for softmax policies, gradients tend to vanish as the policy becomes more deterministic. To mitigate this issue, we applied a regularization weight of 0.1, which prevented the policy from becoming too deterministic. We remark that policy gradients with log-barrier regularization also enjoy theoretical guarantees (Agarwal et al., 2021, Theorem 12), making this a valid POALG for our PO algorithm.
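The vanishing-gradient intuition can be made concrete with a two-action softmax policy. This is our own sketch (the barrier form $\lambda \cdot \frac{1}{A}\sum_a \log \pi_\theta(a)$ is one standard choice, matching the Agarwal et al. (2021) analysis, and the constants are illustrative):

```python
import numpy as np

# For a softmax policy pi_theta over A actions, the gradient of the barrier term
#   lam * (1/A) * sum_a log pi_theta(a)
# with respect to logit theta_a works out to lam * (1/A - pi_theta(a)): it pushes
# probability mass back toward neglected actions and does not vanish even when
# the policy is nearly deterministic.
def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def barrier_grad(theta, lam=0.1):
    pi = softmax(theta)
    return lam * (1.0 / len(theta) - pi)

g = barrier_grad(np.array([10.0, 0.0]))  # near-deterministic policy
# g[0] < 0 and g[1] > 0: the barrier pulls the policy away from determinism
```

By contrast, the vanilla policy-gradient term at such a point is exponentially small, which is exactly the instability the regularizer mitigates.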
+
+# C.3. Hyperparameter Settings
+
| Component | Value / Description |
| --- | --- |
| Policy Network | Softmax policy with MLP with two hidden layers of dimension 64 |
| Value Network | MLP with two hidden layers of dimension 64 |
| Budget Encoding (Policy) | One-hot |
| Budget Encoding ($Q$-network) | Direct input |
| Optimizer | Adam with $\beta_1 = 0.9$, $\beta_2 = 0.999$ |
| Batch Size | 256 |
| Learning Rate | $5 \times 10^{-3}$ |
| GAE $\lambda$ | 0.95 |
| PPO KL weight | 0.1 |
| Regularization Weight | 0.1 |
+
+Table 5. Hyperparameter settings used in our experiments.
+
+# D. Proofs for Optimistic Meta-Algorithm
+
+In this section, we study three illustrative examples of optimistic oracles that satisfy Def. 3.1 in the AugMDP. In particular, all these oracles can be used with our meta-algorithm Alg. 1 to form an optimistic OCE algorithm with the regret / PAC bounds in Thm. 3.2. The three oracles we consider are UCB-VI (Azar et al., 2017), Rep-UCB (Uehara et al., 2022) and GOLF (Jin et al., 2021).
+
First, UCB-VI is a model-based algorithm based on optimistic value iteration with exploration bonuses and achieves minimax-optimal regret in tabular MDPs (Azar et al., 2017). Second, Rep-UCB is a model-based algorithm based on elliptical exploration bonuses and achieves PAC bounds in low-rank MDPs (Uehara et al., 2022). While the AugMDP may initially seem more complex, since it is no longer tabular or low-rank, the regret bounds of these algorithms actually transfer easily to the AugMDP when the underlying MDP (i.e., the MDP being augmented) is tabular or low-rank. The intuition is that the AugMDP does not introduce any new unknowns to the model: the augmented transition and reward functions are known and deterministic, as highlighted in Sec. 2.1. Hence, the model that needs to be learned is the same in the original MDP and the AugMDP, and the prior regret bounds naturally translate to the AugMDP, which we formalize in the following subsections. In particular, we will see that prior works such as (Wang et al., 2023a; Zhao et al., 2024) have already implicitly proven that UCB-VI and Rep-UCB satisfy Def. 3.1 in the CVaR AugMDP.
+
The third optimistic oracle we consider is GOLF, a model-free algorithm based on version-space optimism that achieves regret bounds in exogenous block MDPs (Xie et al., 2023). Unlike the model-based algorithms, extending GOLF to the AugMDP requires more care since the value functions being learned also take the augmented state $b$ as input; that is, the value function class is more complex in the AugMDP. To handle this added complexity, we posit a discreteness assumption in Assump. 3.5. Under this premise, we prove that the previous GOLF analysis based on coverability can be extended to the AugMDP. Consequently, we derive the first OCE regret bounds in exogenous block MDPs (Efroni et al., 2022).
+
+Finally, we conclude this section by showing how second-order bounds for the oracle can lead to tight and optimal regret for $\mathrm{CVaR}_{\tau}$ in tabular MDPs, recovering the main result of (Wang et al., 2023a). In particular, we observe that the key idea of the complex argument of (Wang et al., 2023a) is simply access to an oracle with second-order regret. We believe this observation can lead to tighter RSRL bounds, especially since second-order bounds have recently been possible in much more general MDPs via distributional RL (Wang et al., 2024b; 2025a; 2024c).
+
+# Algorithm 3 Optimistic Oracle: UCB-VI (Azar et al., 2017)
+
+1: Input: Number of rounds $K$ , failure probability $\delta$ .
+2: for round $k = 1,2,\ldots ,K$ do
+3: Compute counts and empirical transition estimate,
+
+$$
+N _ {k} (s, a, s ^ {\prime}) = \sum_ {h = 1} ^ {H} \sum_ {i = 1} ^ {k - 1} \mathbb {I} \left[ \left(s _ {h, i}, a _ {h, i}, s _ {h + 1, i}\right) = (s, a, s ^ {\prime}) \right],
+$$
+
+$$
+N _ {k} (s, a) = 1 \vee \sum_ {s ^ {\prime} \in \mathcal {S}} N _ {k} (s, a, s ^ {\prime}), \widehat {P} _ {k} (s ^ {\prime} \mid s, a) = \frac {N _ {k} (s , a , s ^ {\prime})}{N _ {k} (s , a)},
+$$
+
+4: For all $s \in S, b \in [0,1]$ , set $\widehat{V}_{H + 1,k}(s,b) = u(-b)$ .
+5: for $h = H, H - 1, \ldots, 1$ do
+6: For all $s, b, a$ ,
+
+$$
+\widehat {Q} _ {h, k} (s, b, a) = \widehat {P} _ {k} (s, a) ^ {\top} \mathbb {E} _ {r _ {h} \sim R (s, a)} \left[ \widehat {V} _ {h + 1, k} (\cdot , b - r _ {h}) \right] + \sqrt {\frac {\log (H S A K / \delta)}{N _ {k} (s , a)}},
+$$
+
+$$
+\pi_ {h} ^ {k} (s, b) = \underset {a} {\arg \max} \widehat {Q} _ {h, k} (s, b, a), \qquad \widehat {V} _ {h, k} (s, b) = \min \Bigl \{\widehat {Q} _ {h, k} (s, b, \pi_ {h} ^ {k} (s, b)), V ^ {\max} \Bigr \}.
+$$
+
+7: end for
+8: Output optimistic value function $\widehat{V}_{1,k}(s_1,\cdot)$ .
+9:Receive adversarial $\widehat{b}_k$
+10: Collect $\{(s_{h,k},a_{h,k},r_{h,k})\}_{h\in [H]}$ by executing $\pi^k$ starting from $(s_1,\widehat{b}_k)$ in AugMDP.
+11: end for
+
+# D.1. Example 1: UCB-VI
+
+UCB-VI (Azar et al., 2017) is a model-based algorithm for tabular MDPs based on value iteration with exploration bonuses. Alg. 3 formalizes the UCB-VI algorithm in the AugMDP. The following theorem recovers the results from Wang et al. (2023a, Theorem 5.2), who focused on the more restricted CVaR RL setting.
+
+Theorem D.1. For any $\delta \in (0,1)$ , running Alg. 3 enjoys the following w.p. $1 - \delta$ :
+
+1. (Optimism) $\widehat{V}_{1,k}(s_1,b_1)\geq V_{\mathsf{aug}}^{\star ,1}(s_1,b_1)$ for all $k\in [K],b_{1}\in [0,1]$
+2. (Regret) The regret in the AugMDP is at most,
+
+$$
+\sum_ {k = 1} ^ {K} \widehat {V} _ {1, k} (s _ {1}, \widehat {b} _ {k}) - V _ {\mathrm {a u g}} ^ {\pi^ {k}, 1} (s _ {1}, \widehat {b} _ {k}) \leq \widetilde {\mathcal {O}} (V _ {u} ^ {\max} (\sqrt {S A H K \log (1 / \delta)} + S ^ {2} A H)).
+$$
+
where $\tilde{\mathcal{O}} (\cdot)$ ignores terms logarithmic in $S,A,H,K$.
+
Proof Sketch. Thm. D.1 can be proved by following the same argument as in Wang et al. (2023a, Appendix G.3), except that the CVaR utility $\tau^{-1}\min(t,0)$ is replaced by a generic OCE utility $u$. In particular, optimism is ensured by their Lemma G.3 and Equation (Bon$\star$). The regret bound is ensured by the proof of their Theorem 5.2 on page 35. Thus, by applying the argument of Wang et al. (2023a), we can show that UCB-VI satisfies the oracle conditions in Def. 3.1.
+
+Therefore, Thm. D.1 shows that UCB-VI satisfies the oracle conditions Def. 3.1 in the AugMDP with $\varepsilon_{k}^{\mathrm{opt}} = 0$ and $\operatorname{Reg}_{\mathrm{Opt}}(K) \leq \tilde{\mathcal{O}}(\sqrt{SAHK\log(1/\delta)} + S^{2}AH)$ .
+
+# Algorithm 4 Optimistic Oracle: Rep-UCB (Uehara et al., 2022)
+
+1: Input: No. episodes $K$ , Model Classes $(\Phi, \Upsilon)$ .
+2: Initialize $\mathcal{D}_h^{(0)} = \widetilde{\mathcal{D}}_h^{(0)} = \emptyset$ for all $h\in [H]$
+3: for round $k = 1,2,\ldots ,K$ do
+4: Learn the model via maximum likelihood estimation (MLE): for all $h \in [H]$ :
+
+$$
\phi_{h}^{(k)}, \mu_{h}^{(k)} := \operatorname*{arg\,max}_{\phi \in \Phi, \mu \in \Upsilon} \sum_{(s_{h,i}, a_{h,i}, s_{h+1,i}) \in \mathcal{D}_{h}^{(k-1)} \cup \widetilde{\mathcal{D}}_{h}^{(k-1)}} \log \phi(s_{h,i}, a_{h,i})^{\top} \mu(s_{h+1,i})
+$$
+
+5: Define the empirical covariance $\Sigma_{h,k} = \sum_{s,a\in \mathcal{D}_h^{(k - 1)}}\phi_h^{(k)}(s,a)(\phi_h^{(k)}(s,a))^{\top} + \lambda_kI.$
+6: Define the bonus, $b_{h,k}(s,a) = \min(\alpha_k\|\phi_h^{(k)}(s,a)\|_{\Sigma_{h,k}^{-1}},2)$ if $h < H$ . Otherwise $b_{H,k} = 0$ .
7: Let $\widehat{Q}_{h,k}(s,b,a)$ denote the $Q$-function of the AugMDP with reward $r_{\mathrm{aug}}^{h,k}(s,b,a) + b_{h,k}(s,a)$ and transitions $\mathcal{P}_h^{(k)}(s'\mid s,a) = \phi_h^{(k)}(s,a)^\top \mu_h^{(k)}(s')$. Let $\widehat{V}_{h,k}(s,b) \coloneqq \max_a\widehat{Q}_{h,k}(s,b,a)$ and $\pi_h^k(s,b) \coloneqq \arg \max_{a\in \mathcal{A}}\widehat{Q}_{h,k}(s,b,a)$. This can be achieved via a planning oracle (Zhao et al., 2024).
8: Output optimistic value function $\widehat{V}_{1,k}(s_1,\cdot)$.
9: Receive adversarial $\widehat{b}_k$.
10: For each $h \in [H]$ , execute $\pi^k$ starting from $(s_1, \widehat{b}_k)$ in AugMDP until $s_h, b_h$ and take two uniform actions. That is, $a_h \sim \mathrm{Unif}(\mathcal{A})$, observe $s_{h+1} \sim P_h^\star(s_h, a_h)$, and take $a_{h+1} \sim \mathrm{Unif}(\mathcal{A})$, observe $s_{h+2} \sim P_{h+1}^\star(s_{h+1}, a_{h+1})$. Then, add to datasets: $\mathcal{D}_h^{(k)} \gets \mathcal{D}_h^{(k-1)} \cup \{(s_h, a_h, s_{h+1})\}$ and $\widetilde{\mathcal{D}}_{h+1}^{(k)} \gets \widetilde{\mathcal{D}}_{h+1}^{(k)} \cup \{(s_{h+1}, a_{h+1}, s_{h+2})\}$.
+
+11: end for
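The elliptical bonus in lines 5-6 of Alg. 4 can be sketched as follows. This is our own illustration with synthetic features (dimensions and constants are placeholders, not the paper's tuned values):

```python
import numpy as np

# Sketch of Alg. 4, lines 5-6: the regularized empirical covariance of the
# learned features and the clipped elliptical bonus
#   b(s, a) = min(alpha * ||phi(s, a)||_{Sigma^{-1}}, 2).
def elliptical_bonus(phi_data, phi_query, alpha=1.0, lam=1.0):
    d = phi_data.shape[1]
    Sigma = phi_data.T @ phi_data + lam * np.eye(d)      # empirical covariance + lam * I
    Sigma_inv = np.linalg.inv(Sigma)
    # Quadratic-form norms ||phi||_{Sigma^{-1}} for each query feature vector
    norms = np.sqrt(np.einsum("id,de,ie->i", phi_query, Sigma_inv, phi_query))
    return np.minimum(alpha * norms, 2.0)

rng = np.random.default_rng(0)
phi_data = rng.normal(size=(500, 4))   # features of previously visited (s, a) pairs
phi_query = rng.normal(size=(3, 4))    # features of candidate (s, a) pairs
b_many = elliptical_bonus(phi_data, phi_query)
b_few = elliptical_bonus(phi_data[:5], phi_query)  # less data => larger bonus
```

Since adding more rows to `phi_data` only grows $\Sigma$ in the PSD order, the bonus is monotonically non-increasing in the amount of data, which is what drives exploration toward poorly covered feature directions.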
+
+# D.2. Example 2: Rep-UCB
+
+Rep-UCB (Uehara et al., 2022) is a model-based algorithm for low-rank MDPs based on elliptical bonuses. We first recall the definition of low-rank MDP (Agarwal et al., 2020).
+
Definition D.2 (Low-Rank MDP). An MDP has rank $d$ if its transitions have a low-rank decomposition $P_{h}(s^{\prime}\mid s,a) = \phi_{h}^{\star}(s,a)^{\top}\mu_{h}^{\star}(s^{\prime})$ where $\phi_h^\star (s,a),\mu_h^\star (s')\in \mathbb{R}^d$ are unknown features that satisfy $\sup_{s,a}\| \phi_h^\star (s,a)\| _2\leq 1$ and $\| \int g(s^{\prime})\mathrm{d}\mu_h^\star (s^{\prime})\| \leq \| g\|_{\infty}\sqrt{d}$ for all $g:\mathcal{S}\to \mathbb{R}$.
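A toy instance of the low-rank structure can be built explicitly. The mixture construction below is our own illustrative example (not from the paper): taking $\phi(s,a)$ on the $d$-simplex and each $\mu_j$ a distribution over next states guarantees $\phi(s,a)^\top \mu(\cdot)$ is a valid transition kernel.

```python
import numpy as np

# Toy rank-d transition kernel P(s' | s, a) = phi(s, a)^T mu(s').
rng = np.random.default_rng(0)
S, A, d = 6, 3, 2
phi = rng.dirichlet(np.ones(d), size=(S, A))  # (S, A, d): each phi(s, a) on the simplex
mu = rng.dirichlet(np.ones(S), size=d)        # (d, S): d base next-state distributions

# P[s, a, s'] = sum_j phi_j(s, a) * mu_j(s'): a convex mixture of distributions,
# hence each row is itself a valid probability distribution.
P = np.einsum("sad,dt->sat", phi, mu)
```

Simplex vectors also satisfy $\|\phi(s,a)\|_2 \leq \|\phi(s,a)\|_1 = 1$, so the norm condition in Def. D.2 holds for free in this construction.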
+
+Rep-UCB utilizes a model class $(\Phi, \Upsilon)$ to learn the low-rank transition $\phi_h^\star(s, a)^\top \mu_h^\star(s')$ .
+
+Assumption D.3 (Realizability). $\phi_h^\star \in \Phi$ and $\mu_h^\star \in \Upsilon$ for all $h\in [H]$ .
+
+Rep-UCB as an oracle in the AugMDP is presented in Alg. 4. Our theory recovers the results of Zhao et al. (2024), who focused on the more restricted CVaR setting.
+
+Theorem D.4. Under Def. D.2 and Assump. D.3, for any $\delta \in (0,1)$ , Alg. 4 enjoys the following w.p. $1 - \delta$ :
+
+1. (Optimism) For all $k \in [K]$ and $b_1, b_2, \ldots, b_k$ , we have $V_{\mathrm{aug}}^{\star, 1}(s_1, b_1) - \widehat{V}_{1,k}(s_1, b_1) \leq \mathcal{O}(H\sqrt{AL / k})$ where $L = \log(|\Phi| |\Upsilon| HK / \delta)$ ;
+2. (Total sub-optimality gap)
+
+$$
+\sum_{k = 1}^{K} \widehat{V}_{1,k}(s_1, \widehat{b}_k) - V_{\mathrm{aug}}^{\pi^k, 1}(s_1, \widehat{b}_k) \leq \mathcal{O}\big(V_u^{\max} H^3 A d^2 \sqrt{KL}\big).
+$$
+
+Proof Sketch. Thm. D.4 can be proved by following the same argument in Zhao et al. (2024), except that the CVaR utility $\tau^{-1}\min(t,0)$ is replaced by any generic OCE utility $u$ . In particular, optimism is ensured by Lemma C.2. The total sub-optimality gap bound was proven in Lemma C.1, where any $\tau^{-1}(t - R)^+$ can be replaced by $u(R - t)$ . Thus, by applying the argument in (Zhao et al., 2024), we can show that Rep-UCB satisfies the oracle conditions in Def. 3.1.
+
+Therefore, Thm. D.4 shows that Rep-UCB satisfies the oracle conditions Def. 3.1 in the AugMDP with $\mathrm{Reg}_{\mathrm{Opt}}(K) \leq \mathcal{O}(H^3 Ad^2 \sqrt{KL})$ and $\sum_{k} \varepsilon_k^{\mathrm{opt}} \leq \mathcal{O}(H \sqrt{AKL})$ .
+
+Algorithm 5 Optimistic Oracle: GOLF (Jin et al., 2021)
+1: Input: No. rounds $K$ , Function class $\mathcal{F} = \mathcal{F}_1 \times \dots \times \mathcal{F}_H$ , threshold $\beta$ .
+2: Initialize $\mathcal{D}_h^{(0)} = \emptyset$ for all $h$ , and $\mathcal{F}^{(0)} = \mathcal{F}$ .
+3: for round $k = 1, 2, \ldots, K$ do
+4: For each $b_1 \in [0, 1]$ , define $\widehat{Q}_k(\cdot; b_1) := \arg \max_{f \in \mathcal{F}^{(k-1)}} \max_{a \in \mathcal{A}} f_1(s_1, b_1, a)$ .
+5: Output optimistic value functions $\widehat{V}_k(s, b; b_1) = \max_{a \in \mathcal{A}} \widehat{Q}_k(s, b, a; b_1)$ .
+6: Receive adversarial $\widehat{b}_k$ .
+7: Define policy $\pi_h^k(a | s, b) := \arg \max_{a \in \mathcal{A}} \widehat{Q}_{h,k}(s, b, a; \widehat{b}_k)$ .
+8: Execute $\pi^k$ from $\widehat{b}_k$ in the AugMDP and collect a trajectory $(s_h^{(k)}, b_h^{(k)}, a_h^{(k)}, r_h^{(k)})_{h \in [H], k \in [K]}$ .
+9: Update dataset $\mathcal{D}_h^{(k)} = \mathcal{D}_h^{(k-1)} \cup \{(s_h^{(k)}, b_h^{(k)}, a_h^{(k)}, r_h^{(k)}, s_{h+1}^{(k)}, b_{h+1}^{(k)})\}$ .
+10: Compute confidence set:
+$$
+\mathcal{F}^{(k)} \gets \left\{f \in \mathcal{F}: \mathcal{L}_h^{(k)}(f_h, f_{h+1}) \leq \min_{g \in \mathcal{F}_h} \mathcal{L}_h^{(k)}(g, f_{h+1}) + \beta, \forall h \in [H]\right\},
+$$
+where $\mathcal{L}_h^{(k)}(g, f') = \sum_{i \leq k} (g(s_h^{(i)}, b_h^{(i)}, a_h^{(i)}) - r_h^{(i)} - \max_{a \in \mathcal{A}} f'(s_{h+1}^{(i)}, b_{h+1}^{(i)}, a))^2$ .
+
+11: end for
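+
+The squared TD-error and the version-space filter of line 10 can be sketched in a small tabular setting. The candidate class `F` and the data below are toy stand-ins for $\mathcal{F}$ and the collected dataset, not the paper's construction.
+
+```python
+import numpy as np
+
+def td_loss(f_h, f_next, data):
+    """Squared TD-error: sum_i (f_h(x_i, a_i) - r_i - max_a f_next(x'_i, a))^2,
+    where x collapses the augmented state (s, b) into a single index."""
+    loss = 0.0
+    for (x, a, r, x_next) in data:
+        loss += (f_h[x, a] - r - f_next[x_next].max()) ** 2
+    return loss
+
+def confidence_set(F, data, beta):
+    """Keep f = (f_h, f_next) whose loss is within beta of the best fit
+    against its own targets (the filter in line 10, for a single h)."""
+    kept = []
+    for f_h, f_next in F:
+        best = min(td_loss(g_h, f_next, data) for g_h, _ in F)
+        if td_loss(f_h, f_next, data) <= best + beta:
+            kept.append((f_h, f_next))
+    return kept
+
+rng = np.random.default_rng(2)
+F = [(rng.normal(size=(3, 2)), rng.normal(size=(3, 2))) for _ in range(5)]
+data = [(0, 1, 0.5, 2), (1, 0, 0.0, 0), (2, 1, 1.0, 1)]
+assert len(confidence_set(F, data, beta=1e3)) == len(F)  # a loose threshold keeps all
+assert len(confidence_set(F, data, beta=0.0)) <= len(F)
+```
+
+Shrinking $\beta$ tightens the version space; the theory picks $\beta = \Theta(\log(KH|\mathcal{F}|/\delta))$.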
+
+# D.3. Example 3: GOLF
+
+GOLF (Jin et al., 2021) is a model-free algorithm that establishes optimism by optimizing over a version space of candidate $Q$ -functions. In this section, we show that GOLF's regret in the AugMDP can be bounded under a discreteness premise on the cumulative rewards (cf. Assump. 3.5).
+
+We consider a function class $\mathcal{F} = \mathcal{F}_1 \times \dots \times \mathcal{F}_H$ with elements $f = (f_1, \ldots, f_H) \in \mathcal{F}$ such that $f_h : \mathcal{S} \times [0,1] \times \mathcal{A} \to [-V_u^{\max}, V_u^{\max}]$ . By convention, we set $f_{H+1}(s, b, a) = u(-b)$ .
+
+GOLF constructs the version space by selecting a subset of $\mathcal{F}$ such that the TD-error is small for all $h$ . This is formalized in Alg. 5 with the squared TD-error. To ensure the regression at each step can succeed, GOLF requires Bellman completeness (Jin et al., 2021; Xie et al., 2023). To state Bellman completeness, we first define the Bellman optimality operator in the AugMDP: for a function $f:\mathcal{S}\times [0,1]\times \mathcal{A}\to \mathbb{R}$ and any $h < H$ :
+
+$$
+\mathcal {T} _ {\mathrm {a u g}} ^ {\star , h} f (s _ {h}, b _ {h}, a _ {h}) = \mathbb {E} _ {s _ {h + 1} \sim P _ {h} (s _ {h}, a _ {h}), r _ {h} \sim R _ {h} (s _ {h}, a _ {h})} [ \max _ {a ^ {\prime} \in \mathcal {A}} f (s _ {h + 1}, b _ {h} - r _ {h}, a ^ {\prime}) ],
+$$
+
+$$
+\mathcal {T} _ {\mathrm {a u g}} ^ {\star , H} f _ {H + 1} (s _ {H}, b _ {H}, a _ {H}) = \mathbb {E} _ {r _ {H} \sim R _ {H} (s _ {H}, a _ {H})} [ u (r _ {H} - b _ {H}) ].
+$$
+
+Assumption D.5 (AugMDP Bellman Completeness). $\mathcal{T}_{\mathrm{aug}}^{\star,h}f_{h+1} \in \mathcal{F}_h$ for all $f_{h+1} \in \mathcal{F}_{h+1}, h \in [H]$ .
+
+To prove regret bounds for exogenous block MDPs, we recall the definition of coverability from Xie et al. (2023). Coverability is a complexity measure that captures the minimum possible concentrability coefficient in the MDP:
+
+$$
+\mathsf {C o v} := \min _ {\mu_ {h} \in \Delta (\mathcal {S} \times \mathcal {A})} \max _ {\pi \in \Pi_ {\mathrm {M a r k o v}}, h \in [ H ]} \left\| \frac {d _ {h} ^ {\pi}}{\mu_ {h}} \right\| _ {\infty},
+$$
+
+where $\Pi_{\text{Markov}}$ is the set of policies in the original MDP. We extend this notion to the AugMDP:
+
+$$
+\mathsf {C o v} _ {\mathsf {a u g}} := \min _ {\mu_ {h} \in \Delta (\mathcal {S} \times [ 0, 1 ] \times \mathcal {A})} \max _ {\pi \in \Pi_ {\mathsf {a u g}}, h \in [ H ], b _ {1} \in [ 0, 1 ]} \left\| \frac {d _ {h} ^ {\pi , b _ {1}}}{\mu_ {h}} \right\| _ {\infty}.
+$$
+
+We now state the AugMDP regret bound for GOLF.
+
+Theorem D.6. Under Assump. D.5, for any $\delta \in (0,1)$ , running Alg. 5 with $\beta = \Theta (\log (KH|\mathcal{F}| / \delta))$ enjoys the following w.p. $1 - \delta$ :
+
+1. (Optimism) $\widehat{V}_{1,k}(s_1,b_1;b_1)\geq V_{\mathrm{aug}}^{\star ,1}(s_1,b_1)$ for all $k\in [K], b_{1}\in [0,1]$ ;
+
+2. (Regret bound)
+
+$$
+\sum_ {k = 1} ^ {K} \widehat {V} _ {1, k} (s _ {1}, \widehat {b} _ {k}; \widehat {b} _ {k}) - V _ {\mathsf {a u g}} ^ {\pi^ {k}, 1} (s _ {1}, \widehat {b} _ {k}) \leq \mathcal {O} \big (V _ {u} ^ {\max } H \sqrt {\mathsf {C o v} _ {\mathsf {a u g}} \beta K \log (K)} \big).
+$$
+
+Proof Sketch. Thm. D.6 is a direct consequence of Jin et al. (2021); Xie et al. (2023), since all we have done is rewrite everything (algorithms, assumptions and theorems) in the AugMDP notation. For example, optimism is proved by Jin et al. (2021, Section 4.2, Page 9). Moreover, the regret bound is proved by Xie et al. (2023, Theorem 1). $\square$
+
+Therefore, Thm. D.6 shows that GOLF satisfies the oracle conditions Def. 3.1 in the AugMDP with $\varepsilon_{k}^{\mathrm{opt}} = 0$ and $\mathrm{Reg}_{\mathrm{Opt}}(K) \leq \mathcal{O}(V_u^{\max} H \sqrt{\mathsf{Cov}_{\mathrm{aug}} \beta K \log(K)})$ .
+
+The main question now is whether we can bound $\mathsf{Cov}_{\mathrm{aug}}$ by the original coverability $\mathsf{Cov}$ , which we know is bounded for low-rank MDPs (Def. D.2) and exogenous block MDPs (Def. 3.3). We now show that if cumulative rewards live in a discrete set $\mathcal{B}$ (cf. Assump. 3.5), we can bound $\mathsf{Cov}_{\mathrm{aug}} \leq |\mathcal{B}|\mathsf{Cov}$ by a simple importance sampling argument.
+
+Lemma D.7. Under Assump. 3.5, we have
+
+$$
+\mathsf{Cov}_{\mathrm{aug}} \leq |\mathcal{B}|\, \mathsf{Cov}.
+$$
+
+Proof.
+
+$$
+\begin{array}{l}
+\mathsf{Cov}_{\mathrm{aug}} \stackrel{(i)}{=} \max_{h \in [H]} \sum_{(s,b,a) \in \mathcal{S} \times [0,1] \times \mathcal{A}} \max_{\pi \in \Pi_{\mathrm{aug}}, b_1 \in [0,1]} d_h^{\pi, b_1}(s, b, a) \\
+\stackrel{(ii)}{\leq} \max_{h \in [H]} \sum_{(s,b,a) \in \mathcal{S} \times [0,1] \times \mathcal{A}} \max_{\pi \in \Pi_{\mathrm{aug}}, b_1 \in [0,1]} d_h^{\pi, b_1}(s, a) \\
+\stackrel{(iii)}{\leq} \max_{h \in [H]} \sum_{(s,b,a) \in \mathcal{S} \times [0,1] \times \mathcal{A}} \max_{\pi \in \Pi_{\mathrm{HD}}} d_h^{\pi}(s, a) \\
+\stackrel{(iv)}{=} \max_{h \in [H]} \sum_{(s,b,a) \in \mathcal{S} \times [0,1] \times \mathcal{A}} \max_{\pi \in \Pi_{\mathrm{Markov}}} d_h^{\pi}(s, a) \\
+\stackrel{(v)}{=} |\mathcal{B}| \max_{h \in [H]} \sum_{(s,a) \in \mathcal{S} \times \mathcal{A}} \max_{\pi \in \Pi_{\mathrm{Markov}}} d_h^{\pi}(s, a) \\
+\stackrel{(vi)}{=} |\mathcal{B}|\, \mathsf{Cov},
+\end{array}
+$$
+
+where (i) is by coverability's equivalence to cumulative reachability (Xie et al., 2023, Lemma 3); (ii) is by $d^{\pi ,b_1}(s,b,a)\leq d^{\pi ,b_1}(s,a)$ since $\mathcal{B}$ is discrete; (iii) is since $\pi ,b_{1}$ can be viewed as a policy in the original MDP; (iv) is by the Markov optimality theorem for risk-neutral RL: $\max_{\pi \in \Pi_{\mathrm{HD}}}d_h^\pi (s',a')$ is equivalent to standard RL with the reward $r_h(s,a) = \mathbb{I}[(s,a) = (s',a')]$ and zero otherwise, and we know that Markovian policies are optimal for risk-neutral RL; (v) is by discreteness of $\mathcal{B}$ to collect common terms; and (vi) is again by Xie et al. (2023, Lemma 3) in the original MDP.
+
+We can now state our OCE regret guarantees for low-rank MDPs and exogenous block MDPs. Note these are the first risk-sensitive regret bounds for these rich-observation MDPs.
+
+Theorem D.8. In an exogenous block MDP (Def. 3.3), under Assump. 3.5, we have
+
+$$
+\mathsf{Cov}_{\mathrm{aug}} \leq |\mathcal{B}|\, |\mathcal{Z}^{\mathrm{en}}|\, |\mathcal{A}|.
+$$
+
+Therefore, under Assump. D.5, running Alg. 1 with the GOLF oracle (cf. Alg. 5) enjoys the regret bound:
+
+$$
+\sum_{k = 1}^{K} \mathrm{OCE}_u^{\star} - \mathrm{OCE}_u\left(\pi^k, \widehat{b}_k\right) \leq \mathcal{O}\left(V_u^{\max} H \sqrt{|\mathcal{B}|\, |\mathcal{Z}^{\mathrm{en}}|\, |\mathcal{A}|\, \beta K \log(K)}\right).
+$$
+
+Proof. To prove the first statement, recall that $\mathsf{Cov} \leq |\mathcal{Z}^{\mathrm{en}}||\mathcal{A}|$ , as was proved by Xie et al. (2023, Proposition 5). Thus, Lemma D.7 implies that $\mathsf{Cov}_{\mathrm{aug}} \leq |\mathcal{B}| \cdot |\mathcal{Z}^{\mathrm{en}}||\mathcal{A}|$ . To prove the second statement, simply combine our meta-algorithm guarantee in Thm. 3.2 with the GOLF oracle guarantee in Thm. D.6.
+
+Theorem D.9. In a low-rank MDP with rank $d$ (Def. D.2), under Assump. 3.5, we have
+
+$$
+\mathsf{Cov}_{\mathrm{aug}} \leq |\mathcal{B}|\, d\, |\mathcal{A}|.
+$$
+
+Therefore, under Assump. D.5, running Alg. 1 with the GOLF oracle (cf. Alg. 5) enjoys the regret bound:
+
+$$
+\sum_ {k = 1} ^ {K} \mathrm {O C E} _ {u} ^ {\star} - \mathrm {O C E} _ {u} \left(\pi^ {k}, \widehat {b} _ {k}\right) \leq \mathcal {O} \left(V _ {u} ^ {\max } H \sqrt {| \mathcal {B} | d | \mathcal {A} | \beta K \log (K)}\right).
+$$
+
+Proof. To prove the first statement, recall that $\mathsf{Cov} \leq d|\mathcal{A}|$ , as was proved by Huang et al. (2023, Proposition 3). Thus, Lemma D.7 implies that $\mathsf{Cov}_{\mathrm{aug}} \leq |\mathcal{B}| \cdot d|\mathcal{A}|$ . To prove the second statement, simply combine our meta-algorithm guarantee in Thm. 3.2 with the GOLF oracle guarantee in Thm. D.6.
+
+Note the above is the first risk-sensitive regret bound for low-rank MDPs. The previous Rep-UCB result (Zhao et al., 2024, or Thm. D.4) is technically not a regret bound, since the data collection policy takes uniform exploratory actions, i.e., Rep-UCB can only yield PAC bounds.
+
+# D.4. Tight bounds for CVaR via an oracle with second-order regret
+
+Recall that Thm. 3.2 combined with UCB-VI (Thm. D.1) gave an OCE regret bound of $\mathcal{O}(V_u^{\max}\sqrt{SAHK})$ . Specializing to CVaR, we have $\mathcal{O}(\tau^{-1}\sqrt{SAHK})$ . A weakness in this bound is that it is not tight in $\tau$ , since the minimax-optimal rate for CVaR RL actually scales with $\tau^{-1/2}$ (Wang et al., 2023a, Theorem 3.1).
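+
+Specializing the OCE to CVaR uses the utility $u(t) = \tau^{-1}\min(t,0)$, so $\mathrm{CVaR}_\tau(Z) = \max_b \{b + \tau^{-1}\mathbb{E}[\min(Z - b, 0)]\}$. The sketch below checks this budget-maximization form on toy uniform returns; the sample size and grid are arbitrary choices.
+
+```python
+import numpy as np
+
+def cvar_oce(z, tau, grid):
+    """CVaR via the OCE form: max_b  b + E[min(z - b, 0)] / tau."""
+    return max(b + np.minimum(z - b, 0.0).mean() / tau for b in grid)
+
+rng = np.random.default_rng(3)
+z = rng.uniform(size=100_000)  # toy returns in [0, 1]
+tau = 0.1
+est = cvar_oce(z, tau, np.linspace(0.0, 1.0, 1001))
+# For Uniform(0, 1), the lower-tail CVaR at level tau is tau / 2:
+assert abs(est - tau / 2) < 0.01
+```
+
+The maximizing budget is the $\tau$-quantile of the returns, which matches the role played by $b_k^\star$ in the proof of Thm. D.12.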
+
+In this section, we address this issue by assuming that the oracle has a second-order regret. Second-order regret, a.k.a. variance-dependent regret, is an instance-dependent regret bound that scales with the variance of returns throughout $K$ episodes, and is strictly stronger than the standard $\sqrt{K}$ minimax-optimal regret. Specifically, an oracle with second-order regret takes the following form.
+
+Assumption D.10 (Second-Order Oracle). $\operatorname{Reg}_{\mathrm{Opt}}(K) \leq \sqrt{C_1 \sum_k \operatorname{Var}(Z(\pi^k, \widehat{b}_k))} + C_2$ for constants $C_1, C_2$ .
+
+For example, optimistic model-based algorithms such as UCB-VI with Bernstein bonus (Azar et al., 2017; Zanette and Brunskill, 2019) can achieve second-order bounds in tabular MDPs. Indeed, this property of UCB-VI was used to prove the tight $\tau^{-1/2}$ rate for CVaR RL in Wang et al. (2023a); our following result generalizes this argument. Recently, distributional RL has been used to also prove second-order bounds in much more general MDPs, such as low-rank MDPs (Wang et al., 2024b; 2025a).
+
+We also adopt the continuously distributed returns assumption of Wang et al. (2023a).
+
+Assumption D.11 (Continuously Distributed Returns). For all policies $\pi \in \Pi_{\mathsf{HD}}$ , the cumulative reward $Z(\pi)$ is continuously distributed with density lower-bounded by $p_{\mathrm{min}}$ .
+
+The following result has a sharp $\tau^{-1/2}$ dependence and generalizes the minimax-optimal results of Wang et al. (2023a).
+
+Theorem D.12 ( $\tau^{-1/2}$ -regret for CVaR). Under Assumps. D.10 and D.11, Alg. 1 enjoys
+
+$$
+\mathrm {R e g} _ {\mathrm {C V a R} _ {\tau}} (K) \leq 4 \tau^ {- 1 / 2} \sqrt {C _ {1} K} + 4 \tau^ {- 1} (C _ {1} p _ {\mathrm {m i n}} ^ {- 1} + C _ {2}).
+$$
+
+We note that the density lower bound $p_{\mathrm{min}}$ only scales a lower-order term, which is independent of $K$ . The proof uses the fact that the AugMDP return variance is at most $\tau$ under the true initial budget $b_k^\star = \arg \max_{b \in [0,1]} \{b + V_{\mathrm{aug}}^{\pi^k}(s_1, b)\}$ . Then, under Assump. D.11, the approximation $(\widehat{b}_k - b_k^\star)^2$ can be related to $\mathrm{Reg}_{\mathrm{Opt}}$ , which leads to a self-bounding inequality that solves to $\mathrm{Reg}_{\mathrm{Opt}}(K) \leq 2\sqrt{C_1 K \tau} + 2 C_1 p_{\mathrm{min}}^{-1} + 2 C_2$ . Compared to Wang et al. (2023a, Theorem 5.5), our result is more general (it applies beyond tabular MDPs) and Thm. D.12 also sharpens lower-order terms ( $p_{\mathrm{min}}$ is not multiplied with $K^{1/4}$ ).
+
+Proof. First, we want to show: under continuous returns (Assump. D.11), we have
+
+$$
+\operatorname {R e g} _ {\mathrm {O p t}} (K) \leq 2 \sqrt {C _ {1} K \tau} + 2 C _ {1} p _ {\min } ^ {- 1} + 2 C _ {2}. \tag {10}
+$$
+
+Recall that for CVaR, the normalized reward in the AugMDP is precisely $(\widehat{b}_k - Z(\pi^k,\widehat{b}_k))_+$ . Let $b_{k}^{\star}$ denote the true $\tau$ -quantile of $Z(\pi^k,\widehat{b}_k)$ . Then,
+
+$$
+\begin{array}{l}
+\operatorname{Reg}_{\mathrm{Opt}}(K) \\
+\stackrel{(i)}{\leq} \sqrt{C_1 \sum_k \operatorname{Var}\big((b_k^{\star} - Z(\pi^k, \widehat{b}_k))_+\big)} + \sqrt{C_1 \sum_k \operatorname{Var}\big((\widehat{b}_k - Z(\pi^k, \widehat{b}_k))_+ - (b_k^{\star} - Z(\pi^k, \widehat{b}_k))_+\big)} + C_2 \\
+\stackrel{(ii)}{\leq} \sqrt{C_1 K \tau} + \sqrt{C_1 \sum_k \operatorname{Var}\big((\widehat{b}_k - Z(\pi^k, \widehat{b}_k))_+ - (b_k^{\star} - Z(\pi^k, \widehat{b}_k))_+\big)} + C_2 \\
+\stackrel{(iii)}{\leq} \sqrt{C_1 K \tau} + \sqrt{C_1 \sum_k (\widehat{b}_k - b_k^{\star})^2} + C_2 \\
+\stackrel{(iv)}{\leq} \sqrt{C_1 K \tau} + \sqrt{2 C_1 p_{\mathrm{min}}^{-1} \operatorname{Reg}_{\mathrm{Opt}}(K)} + C_2 \\
+\leq \sqrt{C_1 K \tau} + C_1 p_{\mathrm{min}}^{-1} + \frac{1}{2} \operatorname{Reg}_{\mathrm{Opt}}(K) + C_2. \tag{AM-GM}
+\end{array}
+$$
+
+(i) holds since $\sqrt{\operatorname{Var}(X + Y)} \leq \sqrt{\operatorname{Var}(X)} + \sqrt{\operatorname{Var}(Y)}$ for any random variables $X, Y$ . (ii) holds since $\operatorname{Var}(X_{+}) \leq \mathbb{E}[X^{2}\mathbb{I}[X \geq 0]] \leq \operatorname*{Pr}(X \geq 0)$ for any bounded $X \in [-1,1]$ and $b_{k}^{\star}$ is the true $\tau$ -quantile. (iii) holds since $(\widehat{b}_{k} - Z_{k})_{+} - (b_{k}^{\star} - Z_{k})_{+} \leq |\widehat{b}_{k} - b_{k}^{\star}|$ almost surely. (iv) holds by (Wang et al., 2023a, Lemma G.10): it states that under Assump. D.11, the choice of $\widehat{b}_{k}$ ensures:
+
+$$
+\left(\widehat {b} _ {k} - b _ {k} ^ {\star}\right) ^ {2} \leq 2 p _ {\min } ^ {- 1} \left(\widehat {V} _ {1, k} \left(s _ {1}, \widehat {b} _ {k}\right) - V _ {\mathrm {a u g}} ^ {\pi^ {k}, 1} \left(s _ {1}, \widehat {b} _ {k}\right)\right).
+$$
+
+Rearranging $\mathrm{Reg}_{\mathrm{Opt}}(K)$ implies the desired Eq. (10). Therefore, we can conclude the proof by applying Thm. 3.2 and using $V_{u}^{\max} = \tau^{-1}$ and Eq. (10):
+
+$$
+\operatorname{Reg}_{\mathrm{CVaR}_{\tau}}(K) \leq 4 \tau^{-1/2} \sqrt{C_1 K} + 4 \tau^{-1} (C_1 p_{\mathrm{min}}^{-1} + C_2).
+$$
+
+# E. Proofs for Policy Optimization Meta-Algorithm
+
+We begin by proving a stronger global convergence guarantee than the one stated in the main paper.
+
+Theorem E.1 (Strong Global Convergence). Under Assump. 3.5 and Def. 4.1, we have
+
+$$
+\sum_ {k = 1} ^ {K} \mathrm {O C E} _ {u} ^ {\star} - \operatorname {R L B} ^ {(k)} \leq | \mathcal {B} | V _ {u} ^ {\max } \operatorname {R e g} _ {\mathrm {P O}} (K).
+$$
+
+Proof.
+
+$$
+\begin{array}{l}
+\sum_{k = 1}^{K} \mathrm{OCE}_u^{\star} - \mathrm{RLB}^{(k)} = \sum_{k = 1}^{K} \left\{b_1^{\star} + V_{\mathrm{aug}}^{\star, 1}\left(s_1, b_1^{\star}\right)\right\} - \max_{b \in [0,1]} \left\{b + V_{\mathrm{aug}}^{\pi^k, 1}\left(s_1, b\right)\right\} \\
+\leq \sum_{k = 1}^{K} V_{\mathrm{aug}}^{\star, 1}(s_1, b_1^{\star}) - V_{\mathrm{aug}}^{\pi^k, 1}(s_1, b_1^{\star}) \\
+\leq \sum_{k = 1}^{K} |\mathcal{B}| \frac{1}{|\mathcal{B}|} \sum_{b \in \mathcal{B}} \left(V_{\mathrm{aug}}^{\star, 1}(s_1, b) - V_{\mathrm{aug}}^{\pi^k, 1}(s_1, b)\right) \\
+\leq |\mathcal{B}| \operatorname{Reg}_{\mathrm{PO}}(K),
+\end{array}
+$$
+
+where the last step uses the global convergence premise of POALG in the AugMDP.
+
+This is stronger than the global convergence in the main paper (cf. Thm. 4.2), since the RLB approximately lower bounds $\mathrm{OCE}_u(Z(\pi^k,\widehat{b}_k))$ , as we prove next. Therefore, Thm. 4.2 is a direct consequence of Thm. E.1 and the following RLB lemma (Eq. (7)).
+
+Lemma 4.3 (RLB). The $\mathrm{RLB}^{(k)}$ approximately lower bounds the true OCE of $\pi^k$ with initial budget $\widehat{b}_k$ :
+
+$$
+\mathrm {R L B} ^ {(k)} - \mathrm {O C E} _ {u} \left(\pi^ {k}, \widehat {b} _ {k}\right) \leq 2 | \mathcal {B} | \varepsilon_ {k} ^ {\mathrm {p o}}. \tag {7}
+$$
+
+Moreover, the lower bound is tight on average:
+
+$$
+\sum_ {k} \mathrm {O C E} _ {u} \left(\pi^ {k}, \widehat {b} _ {k}\right) - \mathrm {R L B} ^ {(k)} \leq | \mathcal {B} | V _ {u} ^ {\max } \operatorname {R e g} _ {\mathrm {P O}} (K).
+$$
+
+Proof. First, we prove Eq. (7). By importance sampling and the value estimate guarantee of Def. 4.1: for all $b_{1} \in \mathcal{B}$ , we have $|V_{1}^{\pi^{k}}(s_{1},b_{1}) - \widehat{V}_{1}^{\pi^{k}}(s_{1},b_{1})| \leq |\mathcal{B}|\varepsilon_{k}^{\mathrm{po}} =: \varepsilon_{k}'$ . Thus,
+
+$$
+\begin{array}{l}
+\mathrm{OCE}_u\left(\pi^k, \widehat{b}_k\right) = \max_{b \in \mathcal{B}} \left\{b + \mathbb{E}\left[u\left(Z(\pi^k, \widehat{b}_k) - b\right)\right]\right\} \\
+\geq \widehat{b}_k + \mathbb{E}\left[u\left(Z(\pi^k, \widehat{b}_k) - \widehat{b}_k\right)\right] \quad (\widehat{b}_k \in \mathcal{B}, \text{ by Eq. (5)}) \\
+= \widehat{b}_k + V_{\mathrm{aug}}^{\pi^k, 1}\left(s_1, \widehat{b}_k\right) \quad (\text{def. of } V^{\pi^k}) \\
+\stackrel{(i)}{\geq} \widehat{b}_k + \widehat{V}_1^{\pi^k}\left(s_1, \widehat{b}_k\right) - \varepsilon_k' \\
+= \max_{b_1 \in \mathcal{B}} \left\{b_1 + \widehat{V}_1^{\pi^k}\left(s_1, b_1\right)\right\} - \varepsilon_k' \quad (\text{def. of } \widehat{b}_k, \text{ by Eq. (5)}) \\
+\stackrel{(ii)}{\geq} \max_{b_1 \in \mathcal{B}} \left\{b_1 + V_{\mathrm{aug}}^{\pi^k, 1}(s_1, b_1)\right\} - 2\varepsilon_k' \\
+= \mathrm{RLB}^{(k)} - 2\varepsilon_k', \quad (\text{def. of RLB, by Eq. (6)})
+\end{array}
+$$
+
+where the inequalities (i,ii) are due to the value estimate guarantee. This finishes the proof of Eq. (7).
+
+To prove the second statement, simply apply the stronger global convergence result Thm. E.1 with the fact that $\mathrm{OCE}_u(Z(\pi^k,\widehat{b}_k))\leq \mathrm{OCE}_u^*$ .
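+
+Numerically, $\mathrm{RLB}^{(k)}$ is a one-dimensional maximization of $b + \widehat{V}_1^{\pi^k}(s_1, b)$ over the discrete budget set. Below is a minimal Monte Carlo sketch with the CVaR utility, where an empirical average over toy sampled returns plays the role of the value estimates $\widehat{V}_1^{\pi^k}$ ; the return distribution and grid are arbitrary choices.
+
+```python
+import numpy as np
+
+def rlb(returns, budgets, u):
+    """RLB-style lower bound: max over b in B of b + empirical E[u(Z - b)]."""
+    return max(b + u(returns - b).mean() for b in budgets)
+
+tau = 0.25
+u = lambda t: np.minimum(t, 0.0) / tau   # CVaR utility u(t) = min(t, 0) / tau
+rng = np.random.default_rng(4)
+returns = rng.uniform(size=50_000)       # toy policy returns in [0, 1]
+B = np.linspace(0.0, 1.0, 101)           # discretized budget set, as in Assump. 3.5
+lb = rlb(returns, B, u)
+# For Uniform(0, 1) returns, the true OCE (CVaR at level 0.25) is 0.125:
+assert abs(lb - 0.125) < 0.01
+```
+
+With exact value estimates the maximization recovers the OCE itself; Lemma 4.3 quantifies the slack introduced by the estimation error $\varepsilon_k'$.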
+
+Theorem 4.4 (Local Improvement). Under Assump. 3.5 and assuming POALG satisfies the approximate improvement criterion of Def. 4.1, running Alg. 2 ensures that:
+
+$$
+\forall k \in [ K ]: \mathrm {R L B} ^ {(k + 1)} \geq \mathrm {R L B} ^ {(k)} - | \mathcal {B} | \varepsilon_ {k} ^ {\mathrm {p o}}.
+$$
+
+Proof. Let $b_{k}^{\star} = \arg \max_{b\in \mathcal{B}}\{b + V_{\mathrm{aug}}^{\pi^{k},1}(s_{1},b)\}$ . Then,
+
+$$
+\begin{array}{l}
+\mathrm{RLB}^{(k)} - \mathrm{RLB}^{(k+1)} = \left\{b_k^{\star} + V_{\mathrm{aug}}^{\pi^k, 1}\left(s_1, b_k^{\star}\right)\right\} - \max_{b \in [0,1]} \left\{b + V_{\mathrm{aug}}^{\pi^{k+1}, 1}\left(s_1, b\right)\right\} \\
+\leq V_{\mathrm{aug}}^{\pi^k, 1}\left(s_1, b_k^{\star}\right) - V_{\mathrm{aug}}^{\pi^{k+1}, 1}\left(s_1, b_k^{\star}\right) \\
+\leq |\mathcal{B}| \frac{1}{|\mathcal{B}|} \sum_{b \in \mathcal{B}} \left(V_{\mathrm{aug}}^{\pi^k, 1}(s_1, b) - V_{\mathrm{aug}}^{\pi^{k+1}, 1}(s_1, b)\right) \\
+\leq |\mathcal{B}| \varepsilon_k^{\mathrm{po}},
+\end{array}
+$$
+
+where the last step uses the local improvement premise of POALG in the AugMDP.
+
+# E.1. Primer on Natural Policy Gradient in Finite-Horizon MDPs
+
+In this section, we provide a primer on natural policy gradient (NPG) for finite-horizon MDPs and its guarantees, supplementing the case study (Sec. 4.1) of the main paper. Our analysis is based on the infinite-horizon analysis of Agarwal et al. (2021). Let $d_{1}$ denote a fixed initial state distribution. For a policy $\pi$ , let $Q_{h}^{\pi}(s_{h},a_{h}) = \mathbb{E}_{\pi}[r_{h} + r_{h + 1} + \dots +r_{H}\mid s_{h},a_{h}]$ denote the $Q$ -function, $V_{h}^{\pi}(s_{h}) = \mathbb{E}_{a_{h}\sim \pi (s_{h})}[Q_{h}^{\pi}(s_{h},a_{h})]$ denote the value function, and $A_{h}^{\pi}(s,a) = Q_{h}^{\pi}(s,a) - V_{h}^{\pi}(s)$ denote the advantage function.
+
+We consider policies $\pi^{\theta} = (\pi_1^{\theta},\dots,\pi_H^{\theta})$ parameterized by weight vectors $\theta = (\theta_{1},\ldots ,\theta_{H})$ where $\theta_h\in \mathbb{R}^m$ . Initialize $\theta_h^{(0)} = \mathbf{0}$ for all $h\in [H]$ . Consider the generic update rule for $k = 1,2,\ldots ,K$ : for all $h$
+
+$$
+\theta_ {h} ^ {(k + 1)} = \theta_ {h} ^ {(k)} + \eta w _ {h} ^ {(k)},
+$$
+
+where each $w_h^{(k)} \in \mathbb{R}^m$ has $\ell_2$ -norm at most $W$ . Let $\pi^k$ denote the policy parameterized by $\theta^{(k)}$ . Then, we have the following lemma that bounds the sub-optimality of the update rule.
+
+Lemma E.2. Suppose the log-probability parameterization $\theta_h \mapsto \log \pi_{\theta_h}(a \mid s)$ is $\beta$ -smooth for all $h, s, a$ . Then, setting $\eta = \sqrt{\frac{2\log(A)}{\beta KW^2}}$ , we have
+
+$$
+\sum_ {k = 1} ^ {K} \mathbb {E} _ {s _ {1} \sim d _ {1}} [ V _ {1} ^ {\star} (s _ {1}) - V _ {1} ^ {\pi^ {k}} (s _ {1}) ] \leq H W \sqrt {2 \beta K \log (A)} + \sum_ {h, k} \mathrm {e r r} _ {h, k},
+$$
+
+where $\mathrm{err}_{h,k} := \mathbb{E}_{s_1 \sim d_1, \pi^{\star}}[A_h^{\pi^k}(s_h, a_h) - w_h^{(k)} \cdot \nabla_{\theta_h} \log \pi_{\theta_h^{(k)}}(a_h \mid s_h)]$ .
+
+Proof. First, notice that the smoothness assumption implies that for all $h, s, a$ ,
+
+$$
+\begin{array}{l}
+\log \pi_h^{k+1}(a \mid s) - \log \pi_h^k(a \mid s) \geq (\theta_h^{(k+1)} - \theta_h^{(k)}) \cdot \nabla_{\theta_h} \log \pi_{\theta_h^{(k)}}(a \mid s) - \frac{\beta}{2} \|\theta_h^{(k+1)} - \theta_h^{(k)}\|_2^2 \\
+\geq \eta w_h^{(k)} \cdot \nabla_{\theta_h} \log \pi_{\theta_h^{(k)}}(a \mid s) - \frac{\beta \eta^2 W^2}{2}. \quad (\text{by the update rule and } \|w_h^{(k)}\|_2 \leq W)
+\end{array}
+$$
+
+Rearranging, we have
+
+$$
+w _ {h} ^ {(k)} \cdot \nabla_ {\theta_ {h}} \log \pi_ {\theta_ {h} ^ {(k)}} (a \mid s) \leq \eta^ {- 1} \log \left(\pi_ {h} ^ {k + 1} (a \mid s) / \pi_ {h} ^ {k} (a \mid s)\right) + \frac {\beta \eta W ^ {2}}{2}. \tag {11}
+$$
+
+Then, we can bound the sub-optimality at the $k$ -th round by:
+
+$$
+\begin{array}{l}
+\mathbb{E}_{s_1 \sim d_1}\left[V_1^{\star}(s_1) - V_1^{\pi^k}(s_1)\right] \\
+= \sum_{h = 1}^{H} \mathbb{E}_{s_1 \sim d_1, \pi^{\star}}\left[A_h^{\pi^k}\left(s_h, a_h\right)\right] \quad (\text{performance difference lemma}) \\
+= \sum_h \left(\operatorname{err}_{h,k} + \mathbb{E}_{s_1 \sim d_1, \pi^{\star}}\left[w_h^{(k)} \cdot \nabla_{\theta_h} \log \pi_{\theta_h^{(k)}}(a_h \mid s_h)\right]\right) \\
+\leq \sum_h \left(\operatorname{err}_{h,k} + \eta^{-1} \mathbb{E}_{s_1 \sim d_1, \pi^{\star}}\left[\log\left(\pi_h^{k+1}\left(a_h \mid s_h\right) / \pi_h^k\left(a_h \mid s_h\right)\right)\right] + \frac{\beta \eta W^2}{2}\right) \quad (\text{by Eq. (11)}) \\
+= \sum_h \left(\operatorname{err}_{h,k} + \eta^{-1} \mathbb{E}_{s_1 \sim d_1, \pi^{\star}}\left[D_{KL}(\pi_h^{\star}(s_h) \parallel \pi_h^k(s_h)) - D_{KL}(\pi_h^{\star}(s_h) \parallel \pi_h^{k+1}(s_h))\right] + \frac{\beta \eta W^2}{2}\right),
+\end{array}
+$$
+
+where $D_{KL}$ is the KL-divergence.
+
+Finally, summing over $k$ implies the final result by telescoping:
+
+$$
+\begin{array}{l}
+\sum_{k = 1}^{K} \mathbb{E}_{s_1 \sim d_1}\left[V_1^{\star}(s_1) - V_1^{\pi^k}(s_1)\right] \\
+\leq \frac{\beta \eta H W^2 K}{2} + \eta^{-1} \sum_h \mathbb{E}_{s_1 \sim d_1, \pi^{\star}}\left[D_{KL}\left(\pi_h^{\star}(s_h) \parallel \pi_h^1(s_h)\right) - D_{KL}\left(\pi_h^{\star}(s_h) \parallel \pi_h^{K+1}(s_h)\right)\right] + \sum_{h,k} \operatorname{err}_{h,k} \\
+\leq \frac{\beta \eta H W^2 K}{2} + \eta^{-1} H \log(A) + \sum_{h,k} \operatorname{err}_{h,k} \quad (\pi^{(1)} \text{ is uniform}) \\
+= 2\sqrt{\beta H^2 W^2 K \log(A) / 2} + \sum_{h,k} \operatorname{err}_{h,k}. \quad (\text{choice of } \eta)
+\end{array}
+$$
+
+This finishes the proof.
+
+A natural idea is to set $w_{h}^{(k)}$ to minimize the $\mathrm{err}_{h,k}$ error terms, which exactly motivates the NPG update. Specifically, the (idealized) NPG update vector is defined as:
+
+$$
+\begin{array}{l}
+\widetilde{w}_h^{(k)} = \arg\min_{\|w\|_2 \leq W} L_h(w; \theta^{(k)}, d_h^{\pi^k}), \\
+\text{where } L_h\left(w_h; \theta', \nu_h\right) := \mathbb{E}_{s_h, a_h \sim \nu_h}\left[\left(A_h^{\pi_{\theta'}}\left(s_h, a_h\right) - w_h \cdot \nabla \log \pi_{\theta_h'}\left(a_h \mid s_h\right)\right)^2\right],
+\end{array}
+$$
+
+for $w_h \in \mathbb{R}^m$ and $\nu_h \in \Delta(\mathcal{S} \times \mathcal{A})$ . In other words, the idealized NPG update vector minimizes the squared error term while having Euclidean norm at most $W$ . Since the true mean is unknown, we approximate the above using samples. Thus, the actual NPG update we consider minimizes the empirical error
+
+$$
+w _ {h} ^ {(k)} := \underset {\| w \| _ {2} \leq W} {\arg \min } \widehat {L} _ {h} \left(w; \theta^ {(k)}, \left\{s _ {h, i}, a _ {h, i} \right\} _ {i \in [ N ]}\right), \tag {12}
+$$
+
+where $\widehat{L}_h(w_h;\theta ',\{s_{h,i},a_{h,i}\}_{i\in [N]}) = \frac{1}{N}\sum_i(A_h^{\pi_{\theta '}}(s_{h,i},a_{h,i}) - w_h\cdot \nabla \log \pi_{\theta_h'}(a_{h,i}\mid s_{h,i}))^2$ and $s_{h,i},a_{h,i}\sim d_h^{\pi^k}$ are $N$ i.i.d. samples from roll-outs.
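+
+The constrained regression in Eq. (12) is a convex least-squares problem over the $\ell_2$ ball, so it can be solved by projected gradient descent. The sketch below is a simplified stand-in: the random features, the realizable advantage vector, and the step counts are all illustrative assumptions, not the paper's setup.
+
+```python
+import numpy as np
+
+def npg_update_vector(grads, advs, W, steps=500, lr=0.1):
+    """Approximate argmin_{||w||_2 <= W} (1/N) sum_i (A_i - w . g_i)^2
+    by projected gradient descent on the (convex) empirical loss."""
+    N, m = grads.shape
+    w = np.zeros(m)
+    for _ in range(steps):
+        grad = (2.0 / N) * grads.T @ (grads @ w - advs)  # gradient of the loss
+        w -= lr * grad
+        norm = np.linalg.norm(w)
+        if norm > W:                                     # project onto the l2 ball
+            w *= W / norm
+    return w
+
+rng = np.random.default_rng(5)
+grads = rng.normal(size=(200, 3))  # nabla log pi features at sampled (s_h, a_h)
+w_true = np.array([0.5, -0.3, 0.2])
+advs = grads @ w_true              # advantages realizable by w_true
+w = npg_update_vector(grads, advs, W=1.0)
+assert np.linalg.norm(w) <= 1.0 + 1e-9
+assert np.allclose(w, w_true, atol=1e-2)
+```
+
+When the advantages are realizable inside the ball, the solver recovers the minimizer exactly; in general, the gap between the empirical and idealized solutions is what the excess-risk term $\varepsilon_{\mathrm{stat}}$ of Assump. E.3 controls.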
+
+We now bound the sub-optimality of the NPG update. We define two sources of error. First, there is the statistical estimation error from using the empirical loss $\widehat{L}_h$ instead of the true $L_{h}$ . This error is expected to converge at a $O(1 / \sqrt{N})$ rate or faster, where $N$ is the number of samples per round. Second, there is a transfer / approximation error that measures the performance of the best population vector $\widetilde{w}_h^{(k)}$ under the optimal policy's visitations $d_h^{\pi^{\star}}$ . This quantity is small if the training policies used for data collection cover the visitations of the optimal policy $\pi^{\star}$ . The estimation error must also be transferred to the distribution induced by $\pi^{\star}$ ; this is handled via the relative condition number, a weaker measure of coverage than $\ell_{\infty}$ density-ratio bounds.
+
+The following results are based on the infinite-horizon MDP analysis of Agarwal et al. (2021, Section 6.3).
+
+Assumption E.3. The update vectors generated by NPG satisfy for all $k \in [K], h \in [H]$ :
+
+1. (Excess risk) The statistical estimation error is bounded by $\varepsilon_{\mathrm{stat}}$
+
+$$
+L_h\left(w_h^{(k)}; \theta^{(k)}, d_h^{\pi^k}\right) - L_h\left(\widetilde{w}_h^{(k)}; \theta^{(k)}, d_h^{\pi^k}\right) \leq \varepsilon_{\mathrm{stat}}.
+$$
+
+2. (Coverage) The relative condition number of the covariance under $d_h^{\pi^k}$ and $d_h^{\pi^*}$ is bounded by $\kappa$ :
+
+$$
+\sup _ {w \in \mathbb {R} ^ {d}} \frac {w ^ {\top} \Sigma_ {h , k , \pi^ {\star}} w}{w ^ {\top} \Sigma_ {h , k , \pi^ {k}} w} \leq \kappa ,
+$$
+
+where $\Sigma_{h,k,\pi'} \coloneqq \mathbb{E}_{\pi'}[\nabla_{\theta_h} \log \pi_h^k(a_h \mid s_h)(\nabla_{\theta_h} \log \pi_h^k(a_h \mid s_h))^{\top}]$ .
+
+3. (Transfer Error) Assume the transfer error is bounded by $\varepsilon_{\mathrm{bias}}$ :
+
+$$
+L_h\left(\widetilde{w}_h^{(k)}; \theta^{(k)}, d_h^{\pi^{\star}}\right) \leq \varepsilon_{\mathrm{bias}}.
+$$
+
+Theorem E.4 (Agnostic NPG). Under Assump. E.3, running the NPG update in Eq. (12) enjoys:
+
+$$
+\sum_{k = 1}^{K} \mathbb{E}_{s_1 \sim d_1}\left[V_1^{\star}(s_1) - V_1^{\pi^k}(s_1)\right] \leq H W \sqrt{2 \beta K \log(A)} + H K \sqrt{\varepsilon_{\mathrm{bias}}} + H K \sqrt{\kappa\, \varepsilon_{\mathrm{stat}}}.
+$$
+
+This is a finite-horizon analog of Agarwal et al. (2021, Theorem 6.2), where their effective horizon $1 / (1 - \gamma)$ is replaced with the horizon $H$ .
+
+Proof. The proof focuses on decomposing the error term in Lemma E.2.
+
+$$
+\begin{array}{l}
+\operatorname{err}_{h,k} = \mathbb{E}_{s_1 \sim d_1, \pi^{\star}}\left[A_h^{\pi^k}(s_h, a_h) - w_h^{(k)} \cdot \nabla_{\theta_h} \log \pi_{\theta_h^{(k)}}(a_h \mid s_h)\right] \\
+= \underbrace{\mathbb{E}_{s_1 \sim d_1, \pi^{\star}}\left[A_h^{\pi^k}(s_h, a_h) - \widetilde{w}_h^{(k)} \cdot \nabla_{\theta_h} \log \pi_{\theta_h^{(k)}}(a_h \mid s_h)\right]}_{\text{approx. error}} + \underbrace{\left(\widetilde{w}_h^{(k)} - w_h^{(k)}\right) \cdot \mathbb{E}_{s_1 \sim d_1, \pi^{\star}}\left[\nabla_{\theta_h} \log \pi_{\theta_h^{(k)}}(a_h \mid s_h)\right]}_{\text{est. error}}.
+\end{array}
+$$
+
+Let $d_h^{\star} \coloneqq d_h^{\pi^{\star}}$ denote the $h$-th visitation distribution obtained by rolling in $\pi^{\star}$ from $d_1$. The approximation error can be handled by:
+
+$$
+\begin{aligned} & \mathbb{E}_{s_1 \sim d_1, \pi^{\star}} \left[ A_h^{\pi^k}(s_h, a_h) - \widetilde{w}_h^{(k)} \cdot \nabla_{\theta_h} \log \pi_{\theta_h^{(k)}}(a_h \mid s_h) \right] \\ &\leq \sqrt{\mathbb{E}_{s_1 \sim d_1, \pi^{\star}} \left[ \left( A_h^{\pi^k}(s_h, a_h) - \widetilde{w}_h^{(k)} \cdot \nabla_{\theta_h} \log \pi_{\theta_h^{(k)}}(a_h \mid s_h) \right)^2 \right]} = \sqrt{L_h \left(\widetilde{w}_h^{(k)}; \theta^{(k)}, d_h^{\star}\right)}. \end{aligned}
+$$
+
+The estimation error can be handled by:
+
+$$
+\left| \left(\widetilde {w} _ {h} ^ {(k)} - w _ {h} ^ {(k)}\right) \cdot \mathbb {E} _ {s _ {1} \sim d _ {1}, \pi^ {\star}} \left[ \nabla_ {\theta_ {h}} \log \pi_ {\theta_ {h} ^ {(k)}} \left(a _ {h} \mid s _ {h}\right) \right] \right| \leq \| \widetilde {w} _ {h} ^ {(k)} - w _ {h} ^ {(k)} \| _ {\Sigma_ {h, k, \pi^ {\star}}}.
+$$
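For completeness, this estimation-error step is Jensen's inequality followed by the definition of the $\Sigma$-weighted norm: abbreviating $v \coloneqq \widetilde{w}_h^{(k)} - w_h^{(k)}$ and $g \coloneqq \nabla_{\theta_h} \log \pi_{\theta_h^{(k)}}(a_h \mid s_h)$,

$$
\left| v \cdot \mathbb{E}_{s_1 \sim d_1, \pi^{\star}} [g] \right| \leq \sqrt{\mathbb{E}_{s_1 \sim d_1, \pi^{\star}} \left[ \left(v^{\top} g\right)^2 \right]} = \sqrt{v^{\top} \Sigma_{h,k,\pi^{\star}} v} \eqqcolon \| v \|_{\Sigma_{h,k,\pi^{\star}}}.
$$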
+
+Then by definition of the relative condition number $\kappa_{h,k} = \left\| \left(\Sigma_{h,k,\pi^{(k)}}\right)^{-1 / 2}\Sigma_{h,k,\pi^{\star}}\left(\Sigma_{h,k,\pi^{(k)}}\right)^{-1 / 2}\right\| _2$ , we have
+
+$$
+\begin{aligned} \left\| \widetilde{w}_h^{(k)} - w_h^{(k)} \right\|_{\Sigma_{h,k,\pi^{\star}}} &\leq \sqrt{\kappa_{h,k}} \left\| \widetilde{w}_h^{(k)} - w_h^{(k)} \right\|_{\Sigma_{h,k,\pi^{k}}} \\ &= \sqrt{\kappa_{h,k} \left( L_h \left(w_h^{(k)}; \theta^{(k)}, d_h^{\pi^k}\right) - L_h \left(\widetilde{w}_h^{(k)}; \theta^{(k)}, d_h^{\pi^k}\right) \right)}. \end{aligned}
+$$
+
+Therefore, we have shown that
+
+$$
+\begin{aligned} \operatorname{err}_{h,k} &\leq \sqrt{L_h \left(\widetilde{w}_h^{(k)}; \theta^{(k)}, d_h^{\star}\right)} + \sqrt{\kappa_{h,k} \left( L_h \left(w_h^{(k)}; \theta^{(k)}, d_h^{\pi^k}\right) - L_h \left(\widetilde{w}_h^{(k)}; \theta^{(k)}, d_h^{\pi^k}\right) \right)} \\ &\leq \sqrt{\varepsilon_{\mathrm{bias}}} + \sqrt{\kappa \varepsilon_{\mathrm{stat}}}, \end{aligned}
+$$
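Summing the per-step bound over $h \in [H]$ and $k \in [K]$ and plugging it into the regret decomposition of Lemma E.2 (assumed here, consistently with the statement of Theorem E.4, to contribute the mirror-descent term $HW\sqrt{2\beta K \log(A)}$) gives

$$
\sum_{k=1}^{K} \mathbb{E}_{s_1 \sim d_1} \left[ V_1^{\star}(s_1) - V_1^{\pi^k}(s_1) \right] \leq H W \sqrt{2 \beta K \log(A)} + H K \left( \sqrt{\varepsilon_{\mathrm{bias}}} + \sqrt{\kappa \varepsilon_{\mathrm{stat}}} \right),
$$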
+
+which concludes the proof.
\ No newline at end of file
diff --git a/areductionsapproachtorisksensitivereinforcementlearningwithoptimizedcertaintyequivalents/images.zip b/areductionsapproachtorisksensitivereinforcementlearningwithoptimizedcertaintyequivalents/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..4355dfd1e55bd5afa0abfb27ee32a995c9a950c8
--- /dev/null
+++ b/areductionsapproachtorisksensitivereinforcementlearningwithoptimizedcertaintyequivalents/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f54e55578b778c4fcea6bfb8c60f207267fdc2eec186279b66a5cd7b7ac3139d
+size 1077962
diff --git a/areductionsapproachtorisksensitivereinforcementlearningwithoptimizedcertaintyequivalents/layout.json b/areductionsapproachtorisksensitivereinforcementlearningwithoptimizedcertaintyequivalents/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..d64dfd41188f6928d45aa9a3b28354fefb0e7966
--- /dev/null
+++ b/areductionsapproachtorisksensitivereinforcementlearningwithoptimizedcertaintyequivalents/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ac5fe1708ffab3e3e314206541afe3727dada61874eca6b134a5ce2be07536d2
+size 1275791
diff --git a/arescalinginvariantlipschitzboundbasedonpathmetricsformodernrelunetworkparameterizations/d8efca03-2368-4433-b561-acf341f6de2a_content_list.json b/arescalinginvariantlipschitzboundbasedonpathmetricsformodernrelunetworkparameterizations/d8efca03-2368-4433-b561-acf341f6de2a_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..4c7aaf6b60cf341783d6532aea9a95e74c7a3ce4
--- /dev/null
+++ b/arescalinginvariantlipschitzboundbasedonpathmetricsformodernrelunetworkparameterizations/d8efca03-2368-4433-b561-acf341f6de2a_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cd70955bfc50829ae3dafb9525a5d58b1cd0c125a595b5f50e2a9803cf8e0afc
+size 224697
diff --git a/arescalinginvariantlipschitzboundbasedonpathmetricsformodernrelunetworkparameterizations/d8efca03-2368-4433-b561-acf341f6de2a_model.json b/arescalinginvariantlipschitzboundbasedonpathmetricsformodernrelunetworkparameterizations/d8efca03-2368-4433-b561-acf341f6de2a_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..285e4cd058c1e299f95af23e3eab1eb78a575071
--- /dev/null
+++ b/arescalinginvariantlipschitzboundbasedonpathmetricsformodernrelunetworkparameterizations/d8efca03-2368-4433-b561-acf341f6de2a_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:92553fdee2e4f04d58c9762ea0a83e2ca03274ac8cedddf1f872276916f9efb4
+size 265820
diff --git a/arescalinginvariantlipschitzboundbasedonpathmetricsformodernrelunetworkparameterizations/d8efca03-2368-4433-b561-acf341f6de2a_origin.pdf b/arescalinginvariantlipschitzboundbasedonpathmetricsformodernrelunetworkparameterizations/d8efca03-2368-4433-b561-acf341f6de2a_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..a04b0ec18a46f28030675bb7a488f0e2617b914d
--- /dev/null
+++ b/arescalinginvariantlipschitzboundbasedonpathmetricsformodernrelunetworkparameterizations/d8efca03-2368-4433-b561-acf341f6de2a_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:59d745b767b66d64c70ccdd6286e764fbd2958e51aee22ee4aa0a42f2c3aeb55
+size 1307404
diff --git a/arescalinginvariantlipschitzboundbasedonpathmetricsformodernrelunetworkparameterizations/full.md b/arescalinginvariantlipschitzboundbasedonpathmetricsformodernrelunetworkparameterizations/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..4bce2e39bdcfe56d4379f9570eef86f60f923dd4
--- /dev/null
+++ b/arescalinginvariantlipschitzboundbasedonpathmetricsformodernrelunetworkparameterizations/full.md
@@ -0,0 +1,1074 @@
+# A Rescaling-Invariant Lipschitz Bound Based on Path-Metrics for Modern ReLU Network Parameterizations
+
+Antoine Gonon $^{1,2}$ Nicolas Brisebarre $^{3}$ Elisa Riccietti $^{1}$ Rémi Gribonval $^{4}$
+
+# Abstract
+
+Robustness with respect to weight perturbations underpins guarantees for generalization, pruning and quantization. Existing guarantees rely on Lipschitz bounds in parameter space, cover only plain feed-forward MLPs, and break under the ubiquitous neuron-wise rescaling symmetry of ReLU networks. We prove a new Lipschitz inequality expressed through the $\ell^1$ -path-metric of the weights. The bound is (i) rescaling-invariant by construction and (ii) applies to any ReLU-DAG architecture with any combination of convolutions, skip connections, pooling, and frozen (inference-time) batch-normalization —thus encompassing ResNets, U-Nets, VGG-style CNNs, and more. By respecting the network's natural symmetries, the new bound strictly sharpens prior parameter-space bounds and can be computed in two forward passes. To illustrate its utility, we derive from it a symmetry-aware pruning criterion and show—through a proof-of-concept experiment on a ResNet-18 trained on ImageNet—that its pruning performance matches that of classical magnitude pruning, while becoming totally immune to arbitrary neuron-wise rescalings.
+
+# 1. Introduction
+
+$^{1}$ ENS de Lyon, CNRS, Inria, Université Claude Bernard Lyon 1, LIP, UMR 5668, 69342, Lyon cedex 07, France $^{2}$ Institute of Mathematics, EPFL, Lausanne, Switzerland $^{3}$ CNRS, ENS de Lyon, Inria, Université Claude Bernard Lyon 1, LIP, UMR 5668, 69342, Lyon cedex 07, France $^{4}$ Inria, CNRS, ENS de Lyon, Université Claude Bernard Lyon 1, LIP, UMR 5668, 69342, Lyon cedex 07, France. Correspondence to: Antoine Gonon .
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+An important challenge about neural networks is to upper bound as tightly as possible the distances between the so-called realizations (i.e., the functions implemented by the considered network) $R_{\theta}, R_{\theta'}$ with parameters $\theta, \theta'$ when evaluated at an input vector $x$, in terms of a (pseudo-)distance $d(\theta, \theta')$ and a constant $C_x$:
+
+$$
+\left\| R _ {\theta} (x) - R _ {\theta^ {\prime}} (x) \right\| _ {1} \leqslant C _ {x} d \left(\theta , \theta^ {\prime}\right). \tag {1}
+$$
+
+This controls the robustness of the function $R_{\theta}$ with respect to changes in the parameters $\theta$ , which can be crucially leveraged to derive generalization bounds (Neyshabur et al., 2018) or theoretical guarantees about pruning or quantization algorithms (Gonon et al., 2023). Yet, to the best of our knowledge, such bounds remain relatively little explored in the literature, and existing ones are expressed with $\ell^p$ metrics on parameters (Gonon et al., 2023; Neyshabur et al., 2018; Berner et al., 2020). For example, such a bound is known (Gonon et al., 2023, Theorem III.1 with $p = \infty$ and $q = 1$ ) with
+
+$$
+d \left(\theta , \theta^ {\prime}\right) := \| \theta - \theta^ {\prime} \| _ {\infty}, \tag {2}
+$$
+
+$$
+C _ {x} := (W \| x \| _ {\infty} + 1) W L ^ {2} R ^ {L - 1},
+$$
+
+in the case of a layered fully-connected neural network $R_{\theta}(x) = M_L \mathrm{ReLU}(M_{L-1} \dots \mathrm{ReLU}(M_1 x))$ with $L$ layers, maximal width $W$ , and with weight matrices $M_{\ell}$ having some operator norm bounded by $R$ . Moreover, these known bounds are not satisfying for at least two reasons:
+
+- they are not invariant under neuron-wise rescalings of the parameters $\theta$ that leave unchanged its realization $R_{\theta}$ . As we will show, this implies that numerical evaluations of such bounds can be arbitrarily large;
+- they only hold for simple fully-connected models organized in layers, but not for modern networks that include pooling, skip connections, etc.
+
+To circumvent these issues, we leverage the so-called path-lifting, a tool that has recently emerged (Stock & Gribonval, 2023; Bona-Pellissier et al., 2022; Marcotte et al., 2023; Gonon et al., 2024a) in the theoretical analysis of modern neural networks with positively homogeneous activations.
+
+Main contribution. We introduce a natural (rescaling-invariant) metric based on the path-lifting, and show that it indeed yields a rescaling-invariant upper bound for the distance of two realizations of a network. Specifically, denoting $\Phi (\theta)$ the path-lifting (a finite-dimensional vector whose
+
+Table 1: The path-lifting provides an intermediate space between parameters and function spaces.
+
+| | $\theta$ (parameters space) | $\Phi(\theta)$ (path-lifting space) | $R_\theta$ (function space) |
+| --- | --- | --- | --- |
+| | what we end up analyzing | what we should analyze? | what we want to analyze |
+| dim $< \infty$? | ✓ | ✓ | ✗ |
+| rescaling-invariant? | ✗ | ✓ | ✓ |
+| relation to $R_\theta$ | locally polynomial | locally linear | — |
+
+definition will be recalled in Section 3) of the network parameters $\theta$ , we establish (Theorem 4.1) that for any input $x$ , and network parameters $\theta, \theta'$ with the same entrywise signs:
+
+$$
+\begin{array}{l} \left\| R _ {\theta} (x) - R _ {\theta^ {\prime}} (x) \right\| _ {1} \\ \leqslant \max \left(\| x \| _ {\infty}, 1\right) \| \Phi (\theta) - \Phi \left(\theta^ {\prime}\right) \| _ {1}. \tag {3} \\ \end{array}
+$$
+
+We call $d(\theta, \theta') \coloneqq \|\Phi(\theta) - \Phi(\theta')\|_1$ the $\ell^1$ -path-metric, by analogy with the so-called $\ell^1$ -path-norm $\|\Phi(\theta)\|_1$ , see e.g. (Neyshabur et al., 2015; Barron & Klusowski, 2019; Gonon et al., 2024a). Of course, since the $\ell^1$ -norm is the largest $\ell^q$ -norm ( $q \geqslant 1$ ), this also implies the same inequality for any $\ell^q$ -norm on the left-hand side. Besides being intrinsically rescaling-invariant, Inequality (3) holds for the very same general neural network model as in Gonon et al. (2024a) that encompasses pooling, skip connections and so on. This solves the two problems mentioned above and improves on Equation (2). Finally, we show that, under conditions that hold in practical pruning and quantization scenarios, the path-metric is easy to compute in two forward passes, and we provide the corresponding pytorch implementation.
+
+Our main theoretical finding, Inequality (3), together with the known properties of $\Phi$ (Gonon et al., 2024a) confirms that the path-lifting $\Phi$ provides an intermediate space between the parameter space and the function space, that shares some advantages of both, see Table 1.
+
+Plan. Section 2 places our contribution in context. Section 3 recalls the path-lifting framework of Gonon et al. (2024a) and the notational tools we will use. Section 4 presents our central result—a rescaling-invariant Lipschitz bound expressed through the $\ell^1$ -path-metric (Theorem 4.1)—and explains how it sharpens existing bounds and can be computed in two forward passes. Finally, Section 5 shows how the new bound yields a symmetry-aware pruning criterion, shown to match in a proof-of-concept experiment the accuracy of magnitude pruning, while becoming totally immune to neuron-wise rescalings.
+
+# 2. Related Work
+
+Understanding how small weight changes affect a network's output is crucial, e.g., for pruning, quantization, or generalization error control. We review these three uses of parameter-space Lipschitz bounds in Section 2.1, and then highlight in Section 2.2 how our new, rescaling-invariant bound (Theorem 4.1) interfaces with recent notions of scale-invariant sharpness.
+
+# 2.1. Parameter-Space Lipschitz Bounds in Practice
+
+Parameter-space Lipschitz (or "perturbation" / "sensitivity") bounds already underpin several practical guarantees, but prior results are restricted to plain MLPs and ignore rescaling symmetry.
+
+(i) Pruning. Provable pruning schemes quantify how much the output drifts when weights are set to zero. Liebenwein et al. (2020) and Baykal et al. (2019) derive such guarantees from layer-wise—but rescaling-dependent—Lipschitz constants, and the same mechanism underlies Theorem 5.4 of Baykal et al. (2022). Our Theorem 4.1 offers an architecture-agnostic, symmetry-aware alternative; Section 5 illustrates this on a ResNet-18.
+(ii) Quantization. Bounding the error induced by weight rounding likewise depends on how the network reacts to small parameter perturbations. Gonon et al. (2023) provide such bounds for fully-connected nets, while Zhang et al. (2023) and Lybrand & Saab (2021) control the error at the neuron level. Extending those guarantees to CNNs, ResNets or U-Nets requires a global, symmetry-invariant Lipschitz constant—precisely what we provide in Theorem 4.1.
+(iii) Generalization via covering numbers. Several compression-style analyses (e.g., Arora et al., 2018; Bartlett et al., 2017; Schnoor et al., 2021) follow two steps: (1) a parameter-space Lipschitz bound shows that the $\varepsilon$-ball around a weight vector $\theta$ maps into an $\varepsilon'$-ball around its realization $R_{\theta}$ in function space, yielding an upper bound on the corresponding function-space covering number; (2) this covering bound is plugged into Dudley's entropy integral to obtain a Rademacher-complexity (and thus generalization) bound. Because our Lipschitz constant is rescaling-invariant and holds for modern DAG networks, the same pipeline runs without restricting to MLPs and without the looseness introduced when one first factors out rescaling symmetries.
+
+Across pruning, quantization and generalization, two limitations of previous parameter-space bounds—lack of rescaling-invariance and restriction to plain MLPs—are precisely the issues addressed by Theorem 4.1.
+
+# 2.2. Relation to Scale-Invariant Sharpness
+
+Sharpness metrics (Tsuzuku et al., 2020; Rangamani et al., 2021; Kwon et al., 2021; Wen et al., 2023; Andriushchenko et al., 2023) measure how much the loss increases under parameter perturbations, often normalizing those perturbations to remove rescaling dependencies. Our perspective is complementary: we directly bound the output change $\|R_{\theta}(x) - R_{\theta'}(x)\|_1$ , independent of any loss or data distribution. As shown in Section 4.4, whenever the loss $\mathcal{L}(\hat{y}, y)$ is Lipschitz in its first argument (e.g., cross-entropy or MSE on a compact domain), Theorem 4.1 yields an immediate upper bound on several scale-invariant sharpness definitions, thus providing a loss-agnostic control over the same perturbation neighborhoods.
+
+# 3. ReLU DAGs, Invariances, and Path-Lifting
+
+The neural network model we consider generalizes and unifies several models from the literature, including those from Neyshabur et al. (2015); Kawaguchi et al. (2017); DeVore et al. (2021); Bona-Pellissier et al. (2022); Stock & Gribonval (2023), as detailed in Gonon et al. (2024a, Definition 2.2). This model allows for any Directed Acyclic Graph (DAG) structure that combines standard layers—max-pooling, average-pooling, skip connections, convolution, and (inference-time / frozen form) batch normalization—thereby covering modern networks such as ResNets, VGGs, AlexNet, and many others. The complete formal definition appears in Appendix A. $^{1}$
+
+
+Figure 1: A network with the same ingredients as a ResNet.
+
+# 3.1. Rescaling Symmetries.
+
+All network parameters (weights and biases) are gathered in a parameter vector $\theta$ , and we denote $R_{\theta}(x)$ the output of the network when evaluated at input $x$ (the function $x \mapsto R_{\theta}(x)$ is the so-called realization of the network with parameters $\theta$ ). Due to positive-homogeneity of the ReLU function $t \to \mathrm{ReLU}(t) \coloneqq \max(0, t)$ , in the simple case of a single neuron with no bias we have $R_{\theta}(x) = v \max(0, \langle u, x \rangle)$ with $\theta = (u, v)$ , and for any $\lambda > 0$ , the "rescaled" parameter $\tilde{\theta} = (\lambda u, \frac{v}{\lambda})$ implements the same function $R_{\tilde{\theta}} = R_{\theta}$ . A similar rescaling-invariance property holds for the general model of (Stock & Gribonval, 2023; Gonon, 2024) leading to the notion of rescaling-equivalent parameters, denoted $\tilde{\theta} \sim \theta$ , which still satisfy $R_{\tilde{\theta}} = R_{\theta}$ .
+
+Need for rescaling-invariant Lipschitz bounds. Consider our initial problem of finding a pseudo-metric $d(\theta, \theta')$ and a constant $C_x$ for any input $x$ , for which (1) holds. The left hand-side of (1) is invariant under rescaling-symmetries: if $\tilde{\theta} \sim \theta$ then $\| R_{\tilde{\theta}}(x) - R_{\theta'}(x)\|_1 = \| R_\theta(x) - R_{\theta'}(x)\|_1$ . However, when $d(\cdot, \cdot)$ is based on a standard $\ell^p$ norm, the right hand-side of (1) is not invariant, and in fact $\sup_{\tilde{\theta} \sim \theta} \| \tilde{\theta} - \theta' \|_p = +\infty$ , so the bound can in fact be arbitrarily pessimistic:
+
+$$
+\sup _ {\tilde {\theta} \sim \theta} \frac {d (\tilde {\theta} , \theta^ {\prime})}{\| R _ {\tilde {\theta}} (x) - R _ {\theta^ {\prime}} (x) \| _ {1}} = \infty .
+$$
+
+Although in general one could make a bound such as (1) invariant by considering the infimum
+
+$$
+\inf_ {\tilde {\theta} \sim \theta , \tilde {\theta} ^ {\prime} \sim \theta^ {\prime}} d (\tilde {\theta}, \tilde {\theta} ^ {\prime}),
+$$
+
+this infimum may be difficult to compute in practice. Therefore, a "good" bound should ideally be both invariant under rescaling symmetries and easy to compute. Invariance to rescaling symmetries is precisely the motivation for the introduction of the path-lifting.
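This blow-up is easy to see numerically on the single-neuron example above. The sketch below (NumPy, with arbitrary illustrative values) rescales $\theta = (u, v)$ to $(\lambda u, v/\lambda)$: the realization and the path-lifting $\Phi(\theta) = u\,v$ are untouched, while the $\ell^\infty$ distance to a fixed $\theta'$ diverges with $\lambda$.

```python
import numpy as np

def relu_neuron(u, v, x):
    """Single bias-free ReLU neuron: R_theta(x) = v * max(0, <u, x>)."""
    return v * max(0.0, float(np.dot(u, x)))

u, v = np.array([1.0, 2.0]), 3.0       # theta = (u, v)
u2, v2 = np.array([0.5, 1.0]), -1.0    # a fixed reference theta'
x = np.array([2.0, 1.0])

for lam in [1.0, 10.0, 1000.0]:
    u_r, v_r = lam * u, v / lam        # rescaled theta, same function
    assert np.isclose(relu_neuron(u_r, v_r, x), relu_neuron(u, v, x))
    assert np.allclose(u_r * v_r, u * v)   # path-lifting Phi = u*v invariant
    dist_inf = max(np.abs(u_r - u2).max(), abs(v_r - v2))
    print(f"lambda={lam:g}  ||theta_tilde - theta'||_inf = {dist_inf:g}")
```

Any $\ell^p$ distance behaves the same way, which is why the bound (1) with $d = \|\cdot - \cdot\|_p$ can be made arbitrarily pessimistic without changing either function.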
+
+# 3.2. Path-Lifting $\Phi$ and Path-Activation Matrix $A$
+
+Background. The path-lifting map $\Phi$ and its associated $\ell^1$ -path-norm were introduced to equip ReLU networks with a coordinate system that is invariant under neuron-wise rescaling. This construction has enabled advances in identifiability (Stock & Gribonval, 2023; Bona-Pellissier et al., 2022), analysis of training dynamics (Marcotte et al., 2023), input-space Lipschitz bounds (Gonon et al., 2024a), and (PAC-Bayes and Rademacher) generalization guarantees (Neyshabur et al., 2015; Gonon et al., 2024a).
+
+This paper does not redefine the path-lifting but leverages it to derive, for the first time, a rescaling-invariant parameter-space Lipschitz bound that holds for general DAG-ReLU architectures.
+
+Definitions (informal). Given network parameters $\theta$ and an input $x$ , we consider two objects from Gonon et al. (2024a, Definition A.1): the path-lifting vector $\Phi(\theta)$ and the path-activation matrix $A(\theta, x)$ . Below we give a simplified description sufficient for understanding our main results; full definitions are deferred to Appendix A.
+
+
+Figure 2: The path-lifting coordinate $\Phi_p(\theta)$ for the path $p = u \to v \to w$ is the product of the weights along that path: $\Phi_p(\theta) = \theta^{u \to v} \theta^{v \to w}$ .
+
+The vector $\Phi (\theta)\in \mathbb{R}^{\mathcal{P}}$ is indexed by the set $\mathcal{P}$ of paths in the network, i.e., sequences of neurons from an input to an output. For each path, its coordinate in $\Phi (\theta)$ is simply the product of the weights along that path (ignoring nonlinearities). For example, if $p = u\rightarrow v\rightarrow w$ is a path starting from an input neuron $u$ and ending at an output neuron $w$ , and if $\theta^{a\to b}$ denotes the weight on edge $a\rightarrow b$ , then $\Phi_p(\theta) = \theta^{u\to v}\theta^{v\to w}$ , as illustrated in Figure 2.
+
+The path-activation matrix $A(\theta, x) \in \{0, 1\}^{\mathcal{P} \times d_{\mathrm{in}}}$ encodes the information about non-linearities, storing which paths are active (i.e., all ReLUs along them are on) for a given input $x$ . Entry $A_{p,u}(\theta, x) = 1$ if path $p$ starts at input coordinate $u$ and all neurons along $p$ are activated.
+
+In networks with biases, additional paths starting from hidden neurons are included in $\mathcal{P}$ , and $A(\theta, x)$ is extended to $\{0, 1\}^{\mathcal{P} \times (d_{\mathrm{in}} + 1)}$ to include bias contributions.
+
+Key properties of $(\Phi, A)$ . These two objects enjoy the following critical features:
+
+- $\Phi(\theta)$ is a vector of monomials in the weights.
+- $A(\theta, x)$ is a binary matrix, piecewise-constant as a function of $(\theta, x)$ .
+- Both $\Phi(\theta)$ and $A(\theta, x)$ are rescaling-invariant: if $\tilde{\theta} \sim \theta$ (i.e., $\theta$ and $\tilde{\theta}$ only differ by neuron-wise rescaling, leaving $R_{\theta} = R_{\tilde{\theta}}$ unchanged), then $\Phi(\tilde{\theta}) = \Phi(\theta)$ and $A(\tilde{\theta}, x) = A(\theta, x)$ for all $x$ (Gonon, 2024, Theorem 2.4.1).
+- The network output can be recovered directly from these quantities. For scalar-valued outputs $R_{\theta}(x)$ :
+
+$$
+R _ {\theta} (x) = \left\langle \Phi (\theta), A (\theta , x) \binom {x} {1} \right\rangle , \tag {4}
+$$
+
+and a similar form holds for vector-valued networks (Gonon et al., 2024a, Theorem A.1).
+
+Example (one-hidden-layer network). Consider a one-hidden-layer ReLU network without bias, with parameters
+
+$\theta = (u_{1},\dots ,u_{k},v_{1},\dots ,v_{k})$ , where $u_{i}\in \mathbb{R}^{d_{\mathrm{in}}}$ , $v_{i}\in \mathbb{R}^{d_{\mathrm{out}}}$ , and realization
+
+$$
+R _ {\theta} (x) = \sum_ {i = 1} ^ {k} \max (0, \langle x, u _ {i} \rangle) v _ {i} \in \mathbb {R} ^ {d _ {\text {o u t}}}.
+$$
+
+Then, the path-lifting is
+
+$$
+\Phi (\theta) = \left(u _ {i} v _ {i} ^ {\top}\right) _ {i \in \{1, \dots , k \}} \in \mathbb {R} ^ {k d _ {\mathrm {i n}} d _ {\mathrm {o u t}}}.
+$$
+
+The path-activation matrix is
+
+$$
+A (\theta , x) = \mathbf {I} _ {d _ {\mathrm {i n}}} \otimes \left(\mathbb {1} _ {\langle x, u _ {i} \rangle > 0}\right) _ {i = 1} ^ {k} \otimes \mathbf {1} _ {d _ {\mathrm {o u t}}},
+$$
+
+concatenated with a zero column (no biases here). Here, $\mathbf{I}_d$ is the $d\times d$ identity matrix, and $\mathbf{1}_d$ (resp. $\mathbf{0}_d$ ) is the vector of ones (resp. zeros) of size $d$ .
+
+It is straightforward to verify that both $\Phi (\theta)$ and $A(\theta ,x)$ remain unchanged under the neuron-wise rescaling $\theta \mapsto \lambda \diamond \theta$ , defined by $(v_{i},u_{i})\mapsto (\frac{1}{\lambda_{i}} v_{i},\lambda_{i}u_{i})$ for any $\lambda \in (\mathbb{R}_{>0})^k$ . This transformation leaves the function $R_{\theta}$ unchanged, i.e., $R_{\theta} = R_{\lambda \diamond \theta}$ (Gonon et al., 2024a).
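These invariances, and the reconstruction identity (4), can be checked numerically on this example. The sketch below (NumPy, with small dimensions chosen for illustration) verifies that a neuron-wise rescaling changes neither $R_\theta(x)$ nor $\Phi(\theta) = (u_i v_i^\top)_i$, and that the output is recovered from the path-lifting coordinates once the activations $\mathbb{1}_{\langle x, u_i\rangle > 0}$ are fixed.

```python
import numpy as np

rng = np.random.default_rng(0)
k, d_in, d_out = 4, 3, 2
U = rng.normal(size=(k, d_in))    # rows u_i: incoming weights
V = rng.normal(size=(k, d_out))   # rows v_i: outgoing weights
x = rng.normal(size=d_in)

def realization(U, V, x):
    """R_theta(x) = sum_i max(0, <x, u_i>) v_i."""
    return np.maximum(U @ x, 0.0) @ V

def path_lifting(U, V):
    """Phi(theta) = (u_i v_i^T)_{i=1..k}, flattened into one vector."""
    return np.stack([np.outer(U[i], V[i]) for i in range(len(U))]).ravel()

# Neuron-wise rescaling: R_theta and Phi(theta) are both unchanged.
lam = rng.uniform(0.5, 2.0, size=k)
U_r, V_r = lam[:, None] * U, V / lam[:, None]
assert np.allclose(realization(U_r, V_r, x), realization(U, V, x))
assert np.allclose(path_lifting(U_r, V_r), path_lifting(U, V))

# Identity (4), one-hidden-layer form: with activations a_i = 1_{<x,u_i> > 0},
# the output is linear in the path-lifting coordinates u_i v_i^T.
a = (U @ x > 0).astype(float)
recon = sum(a[i] * (x @ np.outer(U[i], V[i])) for i in range(k))
assert np.allclose(recon, realization(U, V, x))
```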
+
+# 4. A Rescaling Invariant Lipschitz Bound
+
+Our main result, Theorem 4.1, is a Lipschitz bound with respect to the parameters of the network, as opposed to widespread Lipschitz bounds with respect to the inputs. It precisely proves that (1) holds with a rescaling-invariant pseudo-distance (called the $\ell^1$ -path metric) defined via $\Phi$ as $d(\theta, \theta') \coloneqq \|\Phi(\theta) - \Phi(\theta')\|_1$ and $C_x = \max(\|x\|_{\infty}, 1)$ .
+
+Theorem 4.1. Consider a ReLU DAG neural network, corresponding to an arbitrary DAG network with max-pool etc. as in Section 3, see Figure 1 for an illustration and Definition A.2 in the appendix for a precise definition. Consider parameters vectors $\theta, \theta'$ . If for every coordinate $i$ , it holds $\theta_i \theta_i' \geqslant 0$ , then for every input $x$ :
+
+$$
+\begin{array}{l} \left\| R _ {\theta} (x) - R _ {\theta^ {\prime}} (x) \right\| _ {1} \\ \leqslant \max (\| x \| _ {\infty}, 1) \| \Phi (\theta) - \Phi \left(\theta^ {\prime}\right) \| _ {1}. \tag {5} \\ \end{array}
+$$
+
+Moreover, for every such neural network architecture, there are non-negative parameters $\theta \neq \theta^{\prime}$ and a non-negative input $x$ such that Inequality (5) is an equality.
+
+Since $\| \cdot \|_q \leqslant \| \cdot \|_1$ for any $q \geq 1$ , Inequality (5) implies the same bound with the $\ell^q$ -norm on the left hand-side.
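On the one-hidden-layer example of Section 3.2, where $\Phi$ is explicit, Inequality (5) can be sanity-checked numerically (a sketch, not a substitute for the proof): draw pairs $\theta, \theta'$ with identical entrywise signs and compare both sides.

```python
import numpy as np

rng = np.random.default_rng(1)
k, d_in, d_out = 5, 4, 3

def realization(U, V, x):
    """R_theta(x) = sum_i max(0, <x, u_i>) v_i for the bias-free net."""
    return np.maximum(U @ x, 0.0) @ V

def phi(U, V):
    """Explicit path-lifting: one coordinate u_ij * v_il per path."""
    return np.stack([np.outer(U[i], V[i]) for i in range(len(U))]).ravel()

for _ in range(100):
    # theta and theta' with identical entrywise signs (same-sign assumption)
    sU = rng.choice([-1.0, 1.0], size=(k, d_in))
    sV = rng.choice([-1.0, 1.0], size=(k, d_out))
    U, Up = sU * rng.uniform(size=(k, d_in)), sU * rng.uniform(size=(k, d_in))
    V, Vp = sV * rng.uniform(size=(k, d_out)), sV * rng.uniform(size=(k, d_out))
    x = rng.normal(size=d_in)
    lhs = np.abs(realization(U, V, x) - realization(Up, Vp, x)).sum()
    rhs = max(np.abs(x).max(), 1.0) * np.abs(phi(U, V) - phi(Up, Vp)).sum()
    assert lhs <= rhs + 1e-9   # Inequality (5)
```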
+
+We sketch the proof in Section 4.5. The complete proof is in Appendix B - we actually prove something slightly stronger, but we stick here to Inequality (5) for simplicity.
+
+As discussed in Section 2.1, the parameter-space Lipschitz bound (5), like any such bound, can be incorporated into various pipelines—either to establish theoretical guarantees or to guide practical methods (e.g., algorithms that
+
+minimize these bounds), with applications to pruning, quantization, or generalization. In Section 5, we will focus on pruning. Regarding generalization, let us briefly note that this bound can be used to derive a Rademacher complexity bound for the class of functions $\mathcal{F} \coloneqq \{R_{\theta}, \| \Phi(\theta)\|_1 \leqslant r\} = \bigcup_{\text{signs } s} \{R_{\theta}, \| \Phi(\theta)\|_1 \leqslant r, \operatorname{sgn}(\theta) = s\}$ . To bound this complexity, Dudley's integral reduces the task to bounding the covering numbers of each fixed-sign sub-ball $\{R_{\theta}, \| \Phi(\theta)\|_1 \leqslant r, \operatorname{sgn}(\theta) = s\}$ . The inequality (5) enables exactly this, by linking the covering numbers of these function classes to those of the corresponding finite-dimensional sets $\{\Phi(\theta) : \operatorname{sgn}(\theta) = s, \| \Phi(\theta)\|_1 \leqslant r\}$ . A full derivation of this approach can be found in Theorem 4.3.1 of (Gonon, 2024). That said, the resulting (Rademacher) generalization bounds are typically looser—by a factor of roughly $\sqrt{\# \mathrm{params}}$ —than those of Gonon et al. (2024a), who also leverage the path-norm but through a more refined analysis.
+
+In the rest of this section, we discuss the assumptions of the theorem, the practical computation of the bound and the positioning with respect to previously established Lipschitz bounds.
+
+# 4.1. Why the same-sign assumption is necessary
+
+A hard impossibility (new contribution). Let us highlight that the condition $\theta_{i}\theta_{i}^{\prime}\geqslant 0\ \forall i$ in Theorem 4.1 is not a technical convenience. We exhibit in Figure 6 (Appendix B) a minimalistic ReLU network for which no finite constant $C_x$ can satisfy (1) once two weights change sign. By prepending and appending arbitrary sub-networks to that minimal counter-example, one gets families where all but two edges keep their sign, yet the same divergence occurs. This impossibility shows that every rescaling-invariant parameter-space Lipschitz bound based on the path-lifting must, at a minimum, control sign changes. We are not aware of a prior formal statement of this theoretical impossibility.
+
+Practical relevance. Many real-world workflows preserve the signs: pruning, uniform quantization, and sufficiently small SGD steps all do; moreover, any non-zero $\theta$ admits an $\ell_{\infty}$ ball on which signs are fixed. When occasional flips do occur, Theorem 4.1 remains useful as a local building block: one may apply it on each fixed-sign quadrant individually and then glue the results established on each quadrant together—exactly the strategy mentioned for covering-number generalization proofs (see the discussion after Theorem 4.1).
+
+# 4.2. Approximation and exact computation of $\ell^1$ -path-metrics
+
+Since $\Phi (\theta)$ is a vector of combinatorial dimension (it is indexed by paths), it would be intractable to compute the $\ell^1$ -path-metric $\| \Phi (\theta) - \Phi (\theta^{\prime})\|_{1}$ by direct computation of the vector $\Phi (\theta) - \Phi (\theta^{\prime})$ . In this section we investigate efficient and rescaling-invariant approximations of the $\ell^1$ -path-metric that turn out to yield exact implementations in cases of practical interest.
+
+A key fact on which the approach is built is that the $\ell^1$ -path-norm can be computed in one forward pass (Gonon et al., 2024a). Since, by the lower triangle inequality, we have
+
+$$
+\left| \left\| \Phi (\theta) \right\| _ {1} - \left\| \Phi \left(\theta^ {\prime}\right) \right\| _ {1} \right| \leqslant \left\| \Phi (\theta) - \Phi \left(\theta^ {\prime}\right) \right\| _ {1}, \tag {6}
+$$
+
+the left-hand side of (6) serves as an approximation that can be computed in two forward passes of the network$^{2}$ .
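For a bias-free layered network this one-forward-pass recipe is particularly simple: replacing every weight by its absolute value and propagating the all-ones input gives $\|\Phi(\theta)\|_1 = \mathbf{1}^\top |M_L| \cdots |M_1| \mathbf{1}$. The sketch below (NumPy, assuming a plain MLP without biases) cross-checks this against a brute-force enumeration of paths, then evaluates the lower bound (6).

```python
import numpy as np

def l1_path_norm(weights, d_in):
    """||Phi(theta)||_1 of a bias-free MLP theta = (M_1, ..., M_L):
    one 'forward pass' of the all-ones input through the |M_l|."""
    z = np.ones(d_in)
    for M in weights:
        z = np.abs(M) @ z   # ReLU is the identity on non-negative values
    return float(z.sum())

rng = np.random.default_rng(2)
M1, M2 = rng.normal(size=(3, 2)), rng.normal(size=(1, 3))

# Cross-check against the brute-force sum over all 6 paths.
fast = l1_path_norm([M1, M2], d_in=2)
brute = sum(abs(M2[0, h] * M1[h, j]) for h in range(3) for j in range(2))
assert np.isclose(fast, brute)

# Two-forward-pass lower bound (6) on the path-metric; here theta' scales the
# first layer by 0.9, so Phi(theta') = 0.9*Phi(theta) and the bound is tight.
lower = abs(l1_path_norm([M1, M2], 2) - l1_path_norm([0.9 * M1, M2], 2))
assert np.isclose(lower, 0.1 * fast)
```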
+
+As we now show, this is an exact evaluation of the $\ell^1$ -path-metric under practical assumptions, complemented by a rescaling-invariant upper bound (cf. Inequality (8) below).
+
+Lemma 4.2. Inequality (6) is an equality as soon as $|\Phi(\theta)| \geqslant |\Phi(\theta')|$ coordinatewise: in this case we have
+
+$$
+\left\| \Phi (\theta) - \Phi \left(\theta^ {\prime}\right) \right\| _ {1} = \left\| \Phi (\theta) \right\| _ {1} - \left\| \Phi \left(\theta^ {\prime}\right) \right\| _ {1}. \tag {7}
+$$
+
+Proof. For vectors $a, b$ with $a_i b_i \geqslant 0$ and $|a_i| \geqslant |b_i|$ for every $i$ (here $a = \Phi(\theta)$ and $b = \Phi(\theta')$ have matching signs under the same-sign assumption on $\theta, \theta'$ ), we have
+
+$$
+\left\| a \right\| _ {1} - \left\| b \right\| _ {1} = \sum_ {i} \left(\left| a _ {i} \right| - \left| b _ {i} \right|\right) = \sum_ {i} \left| a _ {i} - b _ {i} \right| = \left\| a - b \right\| _ {1}.
+$$
+
+
+
+An important scenario where $|\Phi(\theta)| \geqslant |\Phi(\theta')|$ indeed holds is when $|\theta| \geqslant |\theta'|$ coordinatewise. The latter is true in at least two significant situations: when $\theta'$ is obtained from $\theta$ by pruning, or through quantization provided that rounding is done either systematically towards zero or systematically away from zero.
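The pruning case is easy to verify on a small two-layer example where all paths can be enumerated (an illustrative sketch in NumPy): zeroing weights gives $|\theta'| \leqslant |\theta|$ coordinatewise, every path coordinate of $\Phi(\theta')$ is either unchanged or zeroed, and the path-metric collapses to a difference of two path-norms as in (7).

```python
import numpy as np

def phi_two_layer(M1, M2):
    """Explicit path-lifting of a bias-free 2-layer net: one coordinate
    M2[o, h] * M1[h, j] per path j -> h -> o."""
    return np.array([M2[o, h] * M1[h, j]
                     for o in range(M2.shape[0])
                     for h in range(M1.shape[0])
                     for j in range(M1.shape[1])])

rng = np.random.default_rng(3)
M1, M2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))
M1p = M1 * (rng.uniform(size=M1.shape) > 0.5)   # prune first-layer weights

# Each path coordinate of Phi(theta') is either unchanged or set to zero,
# so the path-metric equals the difference of the two path-norms, Eq. (7).
metric = np.abs(phi_two_layer(M1, M2) - phi_two_layer(M1p, M2)).sum()
diff_of_norms = (np.abs(phi_two_layer(M1, M2)).sum()
                 - np.abs(phi_two_layer(M1p, M2)).sum())
assert np.isclose(metric, diff_of_norms)
```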
+
+Note that $|\theta |\geqslant |\theta^{\prime}|$ is not the only situation where $|\Phi (\theta)|\geqslant |\Phi (\theta^{\prime})|$ . For instance, due to the rescaling-invariance of $\Phi (\cdot)$ , if $\tilde{\theta}$ is rescaling-equivalent to $\theta$ , the coordinatewise inequality $|\Phi (\tilde{\theta})|\geqslant |\Phi (\theta^{\prime})|$ remains valid, even though in general such a $\tilde{\theta}$ no longer satisfies $|\tilde{\theta} |\geqslant |\theta^{\prime}|$ coordinatewise.
+
+Even outside such practical scenarios, the $\ell^1$ -path-metric satisfies a rescaling-invariant upper bound.
+
+Lemma 4.3 (Informal version of Lemma F.3). Consider a DAG ReLU network with $L$ layers and width $W$ . For any parameter $\theta$ , denote by $\mathrm{N}(\theta)$ its normalized version, deduced from $\theta$ by applying rescaling-symmetries so that each neuron (except output neurons) has the norm of its vector of incoming weights equal to 1. It holds for all parameters $\theta, \theta'$ :
+
+$$
+\begin{array}{l} \| \Phi (\theta) - \Phi (\theta^ {\prime}) \| _ {1} \\ \leqslant \left(W ^ {2} + \min \left(\| \Phi (\theta) \| _ {1}, \| \Phi \left(\theta^ {\prime}\right) \| _ {1}\right) \cdot L W\right) \| \mathrm {N} (\theta) - \mathrm {N} \left(\theta^ {\prime}\right) \| _ {\infty}. \tag {8} \\ \end{array}
+$$
+
+The proof is in Appendix F. In all the cases of interest we consider, the lower bound (6) is exact as a consequence of Lemma 4.2. We leave it to future work to compare the lower bound with the upper bound of Lemma 4.3 in specific cases where the lower bound is inexact.
+
+
+Figure 3: Illustration of the proof of Theorem 4.1, see Section 4.5 for an explanation.
+
+# 4.3. Improvement over previous Lipschitz bounds
+
+Inequality (5) improves on the Lipschitz bound (1) specified with Equation (2), as the next result shows.
+
+Lemma 4.4. Consider a simple layered fully-connected neural network architecture with $L \geqslant 1$ layers, corresponding to functions $R_{\theta}(x) = M_L \operatorname{ReLU}(M_{L-1} \ldots \operatorname{ReLU}(M_1 x))$ with each $M_{\ell}$ denoting a matrix, and parameters $\theta = (M_1, \ldots, M_L)$ . For a matrix $M$ , denote by $\| M \|_{1,\infty}$ the maximum $\ell^1$ -norm of a row of $M$ . Consider $R \geqslant 1$ and define the set $\Theta$ of parameters $\theta = (M_1, \ldots, M_L)$ such that $\| M_{\ell} \|_{1,\infty} \leqslant R$ for every $\ell \in [[1, L]]$ . Then, for all parameters $\theta, \theta' \in \Theta$ ,
+
+$$
+\left\| \Phi (\theta) - \Phi \left(\theta^ {\prime}\right) \right\| _ {1} \leqslant L W ^ {2} R ^ {L - 1} \| \theta - \theta^ {\prime} \| _ {\infty}. \tag {9}
+$$
+
+Moreover, the right-hand side can be arbitrarily worse than the $\ell^1$ -pseudo-metric on the left-hand side: over all rescaling-equivalent parameters $\tilde{\theta} \sim \theta$ , it holds
+
+$$
+\sup _ {\tilde {\theta} \sim \theta} \frac {\| \tilde {\theta} - \theta^ {\prime} \| _ {\infty}}{\| \Phi (\tilde {\theta}) - \Phi (\theta^ {\prime}) \| _ {1}} = \infty .
+$$
+
+The proof of Lemma 4.4 is in Inequality (23) in Appendix G.
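+To make the last claim of Lemma 4.4 concrete, here is a minimal numerical sketch on a hypothetical 1-1-1 ReLU network (one input, one hidden and one output neuron, no biases), where the path-lifting reduces to the single product $m_1 m_2$ :
+
+```python
+# On R_theta(x) = m2 * relu(m1 * x), the path-lifting coordinate m1 * m2 is
+# invariant under the rescaling (m1, m2) -> (lam * m1, m2 / lam), so the
+# path-metric gap stays constant while the parameter distance blows up.
+
+def path_lifting(m1, m2):
+    return m1 * m2  # product of weights along the single input-output path
+
+theta = (1.0, 1.0)
+theta_prime = (0.5, 0.5)
+
+for lam in (1.0, 10.0, 1000.0):
+    m1, m2 = lam * theta[0], theta[1] / lam   # rescaling-equivalent to theta
+    phi_gap = abs(path_lifting(m1, m2) - path_lifting(*theta_prime))
+    param_gap = max(abs(m1 - theta_prime[0]), abs(m2 - theta_prime[1]))
+    # phi_gap stays (numerically) at 0.75, while param_gap grows without bound
+```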
+
+The invariant Lipschitz bound (5) combined with (9) yields a (non-invariant) bound on $\| R_{\theta}(x) - R_{\theta'}(x)\| _1$ :
+
+$$
+\max (\| x \| _ {\infty}, 1) L W ^ {2} R ^ {L - 1} \| \theta - \theta^ {\prime} \| _ {\infty}.
+$$
+
+In comparison the generic bound (1) specified with (2) reads
+
+$$
+(W \| x \| _ {\infty} + 1) W L ^ {2} R ^ {L - 1} \| \theta - \theta^ {\prime} \| _ {\infty}.
+$$
+
+As soon as $\| x \|_{\infty} \geqslant 1$ the latter is a looser bound than the former.
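+As a sanity check of this comparison, the two bounds can be evaluated for illustrative (hypothetical) values of $W$ , $L$ , $R$ and $\|x\|_\infty$ :
+
+```python
+# Compare max(||x||_inf, 1) * L * W^2 * R^(L-1)   (bound from (5) + (9))
+# with (W * ||x||_inf + 1) * W * L^2 * R^(L-1)    (generic bound (1) + (2)).
+# Values are illustrative only; both are factors of ||theta - theta'||_inf.
+
+W, L, R = 100, 5, 1.5
+
+for x_inf in (1.0, 2.0, 10.0):
+    path_bound = max(x_inf, 1.0) * L * W**2 * R**(L - 1)
+    generic_bound = (W * x_inf + 1) * W * L**2 * R**(L - 1)
+    assert generic_bound >= path_bound   # the generic bound is looser
+```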
+
+# 4.4. Implication for scale-invariant sharpness
+
+Let $\ell : \mathbb{R}^{d_{\mathrm{out}}} \times \mathbb{R}^{d_{\mathrm{out}}} \to \mathbb{R}_{+}$ be $\kappa$ -Lipschitz in its first argument with respect to the $\ell^1$ -norm, and assume a data distribution $\mathcal{D}$ over $(x,y)$ . For any parameter $\theta$ and perturbation radius $\rho > 0$ , consider the scale-adaptive worst-case sharpness (see Definition 2 in Kwon et al. (2021), or Equation 1 in Andriushchenko et al. (2023)):
+
+$$
+\begin{array}{l} \operatorname {Sharp} _ {\rho} (\theta) := \\ \sup _ {\| \delta \odot | \theta | ^ {- 1} \| _ {p} \leqslant \rho} \mathbb {E} _ {(x, y) \sim \mathcal {D}} \Big (\ell (R _ {\theta + \delta} (x), y) - \ell (R _ {\theta} (x), y) \Big) \\ \end{array}
+$$
+
+Lemma 4.5. For every $\rho \in (0,1)$ and every $\theta$ ,
+
+$$
+\begin{array}{l} \operatorname {Sharp} _ {\rho} (\theta) \leqslant \kappa \, \mathbb {E} _ {x \sim \mathcal {D} _ {x}} \left[ \max \left(\| x \| _ {\infty}, 1\right) \right] \\ \quad \times \sup_{\| \delta \odot |\theta |^{-1}\|_{p}\leqslant \rho}\| \Phi (\theta +\delta) - \Phi (\theta)\|_{1}. \\ \end{array}
+$$
+
+Proof. Lipschitzness of the loss yields $\ell(R_{\theta + \delta}(x), y) - \ell(R_{\theta}(x), y) \leqslant \kappa \|R_{\theta + \delta}(x) - R_{\theta}(x)\|_1$ . The condition $\|\delta \odot |\theta|^{-1}\|_p \leqslant \rho$ implies $\|\delta \odot |\theta|^{-1}\|_\infty \leqslant \rho$ . Thus $|\delta_i| \leqslant |\theta_i| \rho < |\theta_i|$ for every coordinate $i$ , hence $\operatorname{sgn}(\theta_i + \delta_i) = \operatorname{sgn}(\theta_i)$ . Therefore Theorem 4.1 applies and gives $\|R_{\theta + \delta}(x) - R_{\theta}(x)\|_1 \leqslant \max(\|x\|_\infty, 1)\|\Phi(\theta + \delta) - \Phi(\theta)\|_1$ .
+
+Lemma 4.5 shows that our path-metric controls the scale-adaptive sharpness notions used, e.g., in (Kwon et al., 2021) and (Andriushchenko et al., 2023).
+
+# 4.5. Proof sketch of Theorem 4.1 (full proof in Appendix B)
+
+Given an input $x$ , the proof of Theorem 4.1 consists in defining a trajectory $t \in [0,1] \mapsto \theta(t) \in \Theta$ (red curve in Figure 3) that starts at $\theta$ , ends at $\theta'$ , and has finitely many breakpoints $0 = t_0 < t_1 < \dots < t_m = 1$ such that the path-activations $A(\theta(t), x)$ are constant on the open intervals $t \in (t_k, t_{k+1})$ . Each breakpoint corresponds to a value where the activation of at least one path (hence of at least one neuron) changes in the neighborhood of $\theta(t)$ . For instance, in the left part of Figure 3, the straight green line (resp. quadratic green curve) corresponds to a change of activation of a ReLU neuron (for a given input $x$ to the network) in the first (resp. second) layer.
+
+With such a trajectory, given the key property (4), each quantity $|R_{\theta(t_k)}(x) - R_{\theta(t_{k+1})}(x)|$ can be controlled in terms of $\| \Phi(\theta(t_k)) - \Phi(\theta(t_{k+1})) \|_1$ , and if the path is "nice enough", then this control can be extended globally from $t_0$ to $t_m$ .
+
+There are two obstacles: 1) proving that there are finitely many breakpoints $t_k$ as above (think of $t \mapsto t^{n+2} \sin(1/t)$ , which is $n$ -times continuously differentiable but still crosses zero an infinite number of times around $t = 0$ ), and 2) proving that the length $\sum_{k=0}^{m-1} \|\Phi(\theta(t_k)) - \Phi(\theta(t_{k+1}))\|_1$ of the broken line with vertices $\Phi(\theta(t_k))$ (dashed line on the right part of Figure 3) is bounded from above by $\|\Phi(\theta) - \Phi(\theta')\|_1$ times a reasonable factor. Trajectories satisfying these two properties are called "admissible" trajectories.
+
+The first property is true as soon as the trajectory $t \mapsto \theta(t)$ is smooth enough (analytic, say). For this, we notably exploit that the output of a ReLU neuron in the $d$ -th layer of a layered fully-connected network is a piecewise polynomial function of the parameters $\theta$ of degree at most $d$ (Gonon et al., 2024a, consequence of Lemma A.1), (Bona-Pellissier et al., 2022, consequence of Propositions 1 and 2). The second property is true with factor one thanks to a monotonicity property of the chosen trajectory.
+
+The core of the proof consists in exhibiting a trajectory with these two properties. To the best of our knowledge, the proof of Inequality (3) is the first to practically leverage the idea of "adequately navigating" through the different regions in $\theta$ where the network is polynomial by respecting the geometry induced by $\Phi$ , see Figure 3 for an illustration.
+
+# 5. Rescaling-Invariant Pruning
+
+We exploit Inequality (3) to design a pruning rule that is both effective and invariant to neuron-wise rescaling. Instead of ranking weights by their magnitude, we rank them by their $\ell^1$ -path-metric contribution. In a proof-of-concept on a ResNet-18 trained on ImageNet-1k under the lottery-ticket "rewind-and-fine-tune" schedule (Frankle et al., 2020), we show that this path-magnitude rule achieves the same accuracy as classical magnitude pruning while being immune to arbitrary rescalings.
+
+# 5.1. Pruning: a quick overview
+
+Pruning typically involves ranking weights by a chosen criterion and removing (setting to zero) those deemed less important (Han et al., 2016). Early criteria considered either weight magnitudes (Hanson & Pratt, 1988; Han et al., 2016) or the loss's sensitivity to each weight (LeCun et al., 1989; Hassibi & Stork, 1992). Building on these foundations, more sophisticated pruning methods have emerged, often formulated as complex optimization problems solved via advanced algorithms. For example, consider the entrywise loss-sensitivity criterion of (LeCun et al., 1989). In principle, all the costs should be recomputed after each pruning decision, since removing one weight affects the costs of the others. A whole literature focuses on turning the cost of (LeCun et al., 1989) into an algorithm that takes into account these global dependencies (Singh & Alistarh, 2020; Yu et al., 2022; Benbaki et al., 2023). This line of work recently culminated in CHITA (Benbaki et al., 2023), a pruning approach that scales up to millions of parameters through substantial engineering effort.
+
+Here, we introduce a path-magnitude cost defined for each individual weight but depending on the global configuration of the weights. Just like the sensitivity-based costs of (LeCun et al., 1989), these costs should in principle be re-computed after each pruning decision. While taking these global dependencies into account is expected to provide better performance, it is also expected to require a substantial engineering effort, similar to what has been done in (Singh & Alistarh, 2020; Yu et al., 2022; Benbaki et al., 2023), which is beyond the scope of this paper. Our goal here is more modest: we aim at providing a simple proof-of-concept to show the promise of the path-lifting for rescaling-invariant pruning.
+
+Notion of pruned parameter. Considering a neural network architecture given by a graph $G$ , we use the shorthand $\mathbb{R}^G$ to denote the corresponding set of parameters (see Definition A.2 for a precise definition). By definition, a pruned version $\theta'$ of $\theta \in \mathbb{R}^G$ is a "Hadamard" product $\theta' = s \odot \theta$ , where $s \in \{0,1\}^G$ and $\|s\|_0$ is "small". We denote $\mathbf{1}_G \in \mathbb{R}^G$ the vector filled with ones, $e_i \in \mathbb{R}^G$ the $i$ -th canonical vector, $s_i := \mathbf{1}_G - e_i$ , and introduce the specialized notation $\theta_{-i} := s_i \odot \theta$ for the vector where a single entry (the weight of an edge or the bias of a hidden or output neuron) of $\theta$ , indexed by $i$ , is set to zero.
+
+# 5.2. Proposed rescaling-invariant pruning criterion
+
+The starting point of the proposed pruning criterion is that, given any $\theta$ , the pair $\theta, \theta'$ with $\theta' := s \odot \theta$ satisfies the assumptions of Theorem 4.1, hence for all input $x$ we have $|R_{\theta}(x) - R_{\theta'}(x)| \leqslant \| \Phi(\theta) - \Phi(\theta') \|_1 \max(1, \| x \|_{\infty})$ . Specializing this observation to the case where a single entry (the weight of an edge, or the bias of hidden or output neuron indexed by $i$ ) of $\theta$ is pruned (i.e., $\theta' = \theta_{-i}$ ) suggests the following definition, which will serve as a pruning criterion:
+
+Definition 5.1. We denote
+
+$$
+\operatorname {Path-Mag} (\theta , i) := \left\| \Phi (\theta) - \Phi \left(\theta_ {- i}\right) \right\| _ {1}. \tag {10}
+$$
+
+This measures the contribution to the path-norm of all paths $p$ containing entry $i$ : when $i \notin p$ we have $\Phi_p(\theta_{-i}) = \Phi_p(\theta)$ , while otherwise $\Phi_p(\theta_{-i}) = 0$ . Since $\theta$ and $\theta_{-i}$ satisfy the assumptions of Lemma 4.2 we have
+
+$$
+\begin{array}{l} \operatorname {Path-Mag} (\theta , i) = \| \Phi (\theta) \| _ {1} - \| \Phi (\theta_ {- i}) \| _ {1} \quad (11) \\ \quad = \sum_ {p \in \mathcal {P}} | \Phi_ {p} (\theta) | - \sum_ {p \in \mathcal {P}: i \notin p} | \Phi_ {p} (\theta) | \\ \quad = \sum_ {p \in \mathcal {P}: i \in p} | \Phi_ {p} (\theta) |. \quad (12) \\ \end{array}
+$$
+
+Table 2: Comparison of pruning criteria across key properties. Being data-specific or loss-specific can be both a strength (leveraging the training loss and data for more accurate pruning) and a limitation (requiring access to additional information). Being rescaling-invariant ensures the pruning mask is unaffected by neuron-wise weight rescaling.
+
+| Criterion | Rescaling-Invariant | Error bound | Data-Specific | Loss-Specific | Efficient to Compute | Versatile^a |
+| --- | --- | --- | --- | --- | --- | --- |
+| Magnitude | No | Yes – (1)-(2) | No | No | Yes | Yes |
+| Loss-Sensitivity (Taylor Expansion) | Yes in theory, not in practice^c | No | Yes | Yes | Depends^b | Yes |
+| Path-Magnitude | Yes | Yes – (13) | No | No | Yes | Yes |
+
+^a Can be used to design greedy approaches (including $\ell^0$ -based methods) and supports both structured and unstructured pruning.
+^b Depends on how higher-order derivatives of the loss are taken into account. E.g., using only the diagonal of the Hessian can be relatively quick, but computing the full Hessian is infeasible for large networks. See Table 3 for experiments.
+^c See Equation (16) in Appendix E for invariance in theory, and end of Appendix E for non-invariance in practice.
+
+In light of (5), to limit the impact of pruning on the perturbation of the initial function $R_{\theta}$ , it is natural to choose a coordinate $i$ of $\theta$ leading to a small value of this criterion.
+
+Lemma 5.2. Path-Mag enjoys the following properties:
+
+- rescaling-invariance: for each $\theta \in \mathbb{R}^G$ and index $i$ , $\mathrm{Path - Mag}(\theta ,i) = \mathrm{Path - Mag}(\tilde{\theta},i)$ for every rescaling-equivalent parameters $\tilde{\theta}\sim \theta$ ;
+- error bound: denote $s \coloneqq 1_G - \sum_{i \in I} e_i$ where $I$ is the set of indices of the entries of $\theta \in \mathbb{R}^G$ to be pruned. We have
+
+$$
+\begin{array}{l} \left| R _ {\theta} (x) - R _ {s \odot \theta} (x) \right| \\ \leqslant \left(\sum_ {i \in I} \operatorname {P a t h - M a g} (\theta , i)\right) \max (1, \| x \| _ {\infty}). \tag {13} \\ \end{array}
+$$
+
+- computation with only two forward passes: using Equation (11) and the fact that $\|\Phi(\cdot)\|_1$ is computable in one forward pass (Gonon et al., 2024a).
+- efficient joint computation for all entries: we have
+
+$$
+\left(\operatorname {Path-Mag} (\theta , i)\right) _ {i} = \theta \odot \nabla_ {\theta} \| \Phi (\theta) \| _ {1} \tag {14}
+$$
+
+that enables computation via auto-differentiation.
+
+The proof is given in Appendix C. We summarize these properties in Table 2.
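+To illustrate the computational claims of Lemma 5.2, the following sketch evaluates Path-Mag on a hypothetical bias-free two-layer network, using the fact recalled above (from Gonon et al. (2024a)) that $\|\Phi(\theta)\|_1$ is computable by a forward pass with absolute weights; identities (11) and (12) are checked by direct path enumeration:
+
+```python
+# For a bias-free network R(x) = M2 @ relu(M1 @ x), ||Phi(theta)||_1 is the
+# sum over input->hidden->output paths of the product of absolute weights,
+# i.e. ones^T |M2| |M1| ones. Path-Mag(theta, i) then follows from (11).
+
+M1 = [[2.0, -1.0], [0.5, 3.0]]   # hidden x input (hypothetical weights)
+M2 = [[1.0, -4.0]]               # output x hidden
+
+def path_norm(M1, M2):
+    return sum(abs(M2[0][h]) * abs(M1[h][i])
+               for h in range(len(M1)) for i in range(len(M1[0])))
+
+def path_mag(h, i):
+    pruned = [row[:] for row in M1]
+    pruned[h][i] = 0.0            # theta_{-i}: one entry set to zero
+    return path_norm(M1, M2) - path_norm(pruned, M2)   # identity (11)
+
+# (12): the paths through entry M1[h][i] are those entering hidden h from input i
+assert abs(path_mag(0, 1) - abs(M2[0][0]) * abs(M1[0][1])) < 1e-12
+assert abs(path_mag(1, 0) - abs(M2[0][1]) * abs(M1[1][0])) < 1e-12
+```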
+
+# 5.3. Considered (basic) path-magnitude pruning method
+
+Equipped with Path-Mag, a basic rescaling-invariant pruning approach is to minimize the upper-bound (13). This is achieved via simple reverse hard thresholding:
+
+1. Score all weights. The entire vector $(\mathrm{Path - Mag}(\theta ,i))_i$ can be produced in one reverse-mode autograd pass via Eq. (14).
+2. Prune. Zero-out the weights with the smallest scores.
+
+To the best of our knowledge, this is the first practical network pruning method that is both invariant under rescaling symmetries and endowed with guarantees such as (13) on modern networks.
+
+Table 3: Run-time (in milliseconds) to score all weights. Time of a forward pass included for reference. Entries in "OBD" and "Forward" columns show times for batch-sizes 1 and 128 (e.g., "13-60" means 13 ms at batch size 1 vs. 60 ms at batch size 128). See Appendix E for details.
+
+| Network | Forward | Mag | OBD | Path-Mag |
+| --- | --- | --- | --- | --- |
+| AlexNet | 1.7–133 | 0.5 | 13–60 | 14 |
+| VGG16 | 2.3–198 | 1.4 | 31–675 | 61 |
+| ResNet18 | 3.6–142 | 3.2 | 51–155 | 32 |
+
+While Table 3 shows that path-magnitude pruning is computationally feasible, we must also verify that when injected in usual pruning pipelines, it yields acceptable accuracies.
+
+# 5.4. Proof-of-concept study
+
+As a simple proof-of-concept, we prune a dense ResNet-18 trained on ImageNet-1k.
+
+Setup. Dense ResNet-18 on ImageNet-1k, standard training hyper-parameters, lottery-ticket "rewind-and-fine-tune" schedule (Frankle et al., 2021). We benchmark three pruning criteria: (i) magnitude, (ii) magnitude after a random neuron-wise rescaling, (iii) our path-magnitude. See Appendix D for details.
+
+Results. Table 4 reports top-1 accuracy after fine-tuning when pruning either 40, 60 or $80\%$ of the weights. Path-magnitude matches magnitude pruning on the un-rescaled network and completely eliminates the $5 - 50\%$ accuracy drop
+
+Table 4: Top-1 ImageNet accuracy (\%) on ResNet-18 after one-shot pruning, rewind, and 85-epoch fine-tune. Original accuracy: $67.7\%$ . Three pruning levels shown; more in Appendix D.
+
+| Pruning level | 40% | 60% | 80% |
+| --- | --- | --- | --- |
+| Path-magnitude | 68.6 | 67.9 | 66.0 |
+| Magnitude | 68.8 | 68.2 | 66.5 |
+| Magnitude (rescaled) | 63.1 | 57.5 | 15.8 |
+
+incurred when magnitude is applied after rescaling. Figure 4 shows the full training trajectory at $40\%$ sparsity.
+
+Runtime. Path-magnitude scores for all weights are computed in 32 ms (Table 3), comparable to a single forward pass (see Appendix E for details).
+
+These results confirm that rescaling invariance is not just cosmetic: it prevents large accuracy losses under benign weight re-scalings while keeping the computational cost low. A broader comparison with structured and iterative methods such as CHITA is left for future work.
+
+
+Figure 4: Top-1 accuracy during fine-tuning at $40\%$ sparsity. Path-magnitude overlaps exactly with itself after random neuron-wise rescaling, while magnitude pruning degrades.
+
+# 5.5. Discussion and possible future extension
+
+The cost $\text{Path-Mag}(\theta, i)$ is defined per weight, but its value for a given weight indexed by $i$ also depends on the other weights. Therefore, one could hope to achieve better pruning properties if, once a weight is pruned, the path-magnitude costs of the remaining weights were updated. This is reminiscent of the loss-sensitivity cost (LeCun et al., 1989) that associates to each weight $i$ (a surrogate of) the difference $\ell(\theta_{-i}) - \ell(\theta)$ , where $\ell$ is a given loss function. The challenge is similar in both cases: how to account for global dependencies between the pruning costs associated to each individual weight? In this direction, a whole literature has developed techniques attempting to globally minimize (a surrogate of) $\ell(s \odot \theta) - \ell(\theta)$ over the (combinatorial) choice of a support $s$ satisfying an $\ell^0$ -constraint. Such approaches have been scaled up to millions of parameters in (Benbaki et al., 2023) by combining a handful of clever algorithmic designs. Similar iterative or greedy strategies could be explored to tackle the (seemingly) combinatorial $\ell^0$ -optimization problem of minimizing $\| \Phi(s \odot \theta) - \Phi(\theta) \|_1$ .
+
+# 6. Conclusion
+
+We introduced a new Lipschitz bound on the distance between two neural network realizations, leveraging the path-lifting framework of Gonon et al. (2024a). By formulating this distance in terms of the $\ell^1$ -path-metric, our result applies to a broad class of modern ReLU networks—including ones like ResNets or AlphaGo—and crucially overcomes the arbitrary pessimism arising in non-invariant parameter-based bounds. Beyond providing a theoretical guarantee, we also argued that this metric can be computed efficiently in practical scenarios such as pruning and quantization.
+
+We then demonstrated how to apply path-lifting to pruning: the path-magnitude criterion defines a rescaling-invariant measure of the overall contribution of a weight. In a proof-of-concept on a ResNet-18 trained on ImageNet, path-magnitude pruning yields an accuracy on par with standard magnitude pruning. This connects the theoretical notion of path-lifting to a practical goal: making pruning decisions that cannot be undermined by mere neuron-wise rescaling.
+
+This work raises several directions for future research. First, a natural challenge is to establish sharper versions of our core result (Theorem 4.1), typically with metrics still based on the path-lifting but using $\ell^p$ -norms with $p > 1$ , or by deriving functional bounds in expectation (over a given probability distribution of inputs).
+
+Second, more advanced iterative algorithms, akin to second-order pruning techniques, might benefit from path-lifting as a fundamental building block, improving upon the simple one-pass approach used in our proof-of-concept while retaining invariance properties (see Section 5.5).
+
+Finally, although our main theorem improves existing Lipschitz bounds and extends them to a wide range of network architectures, the potential applications of the path-lifting perspective—and its invariance under rescaling—are far from exhausted. Quantization and generalization, in particular, are two important areas where the present findings might stimulate further developments on metrics that offer both theoretical grounding and compelling practical properties.
+
+# Acknowledgements
+
+This work was supported in part by the AllegroAssai project ANR-19-CHIA-0009 and the NuSCAP project ANR-20-CE48-0014 of the French Agence Nationale de la Recherche, and by the SHARP project ANR-23-PEIA-0008 in the context of the France 2030 program.
+
+The authors thank the Blaise Pascal Center for the computational means. It uses the SIDUS solution (Quemener & Corvellec, 2013) developed by Emmanuel Quemener.
+
+# Impact Statement
+
+This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
+
+# References
+
+Andriushchenko, M., Croce, F., Müller, M., Hein, M., and Flammarion, N. A modern look at the relationship between sharpness and generalization. In Krause, A., Brunskill, E., Cho, K., Engelhardt, B., Sabato, S., and Scarlett, J. (eds.), International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pp. 840-902. PMLR, 2023. URL https://proceedings.mlr.press/v202/andriushchenko23a.html.
+Arora, S., Ge, R., Neyshabur, B., and Zhang, Y. Stronger generalization bounds for deep nets via a compression approach. In Dy, J. G. and Krause, A. (eds.), Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pp. 254-263. PMLR, 2018. URL http://proceedings.mlr.press/v80/arora18b.html.
+Barron, A. R. and Klusowski, J. M. Complexity, statistical risk, and metric entropy of deep nets using total path variation. CoRR, abs/1902.00800, 2019. URL http://arxiv.org/abs/1902.00800.
+Bartlett, P. L., Foster, D. J., and Telgarsky, M. Spectrally-normalized margin bounds for neural networks. In Guyon, I., von Luxburg, U., Bengio, S., Wallach, H. M., Fergus, R., Vishwanathan, S. V. N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pp. 6240-6249, 2017.
+Baykal, C., Liebenwein, L., Gilitschenski, I., Feldman,
+
+D., and Rus, D. Data-dependent coresets for compressing neural networks with applications to generalization bounds. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019. URL https://openreview.net/forum?id=HJfwJ2A5KX.
+Baykal, C., Liebenwein, L., Gilitschenski, I., Feldman, D., and Rus, D. Sensitivity-informed provable pruning of neural networks. SIAM Journal on Mathematics of Data Science, 4(1):26-45, 2022. doi: 10.1137/20M1383239. URL https://doi.org/10.1137/20M1383239.
+Bekas, C., Kokiopoulou, E., and Saad, Y. An estimator for the diagonal of a matrix. Applied Numerical Mathematics, 57(11):1214-1229, 2007. ISSN 0168-9274. doi: https://doi.org/10.1016/j.apnum.2007.01.003. URL https://www.sciencedirect.com/science/article/pii/S0168927407000244. Numerical Algorithms, Parallelism and Applications (2).
+Benbaki, R., Chen, W., Meng, X., Hazimeh, H., Ponomareva, N., Zhao, Z., and Mazumder, R. Fast as CHITA: neural network pruning with combinatorial optimization. In Krause, A., Brunskill, E., Cho, K., Engelhardt, B., Sabato, S., and Scarlett, J. (eds.), International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pp. 2031-2049. PMLR, 2023. URL https://proceedings.mlr.press/v202/benbaki23a.html.
+Berner, J., Grohs, P., and Jentzen, A. Analysis of the generalization error: Empirical risk minimization over deep artificial neural networks overcomes the curse of dimensionality in the numerical approximation of black-scholes partial differential equations. SIAM J. Math. Data Sci., 2(3):631-657, 2020. doi: 10.1137/19M125649X. URL https://doi.org/10.1137/19M125649X.
+Bona-Pellissier, J., Malgouyres, F., and Bachoc, F. Local identifiability of deep relu neural networks: the theory. In Koyejo, S., Mohamed, S., Agarwal, A., Belgrave, D., Cho, K., and Oh, A. (eds.), Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022, 2022.
+Cormen, T. H., Leiserson, C. E., Rivest, R. L., and Stein, C. Introduction to Algorithms, 3rd Edition. MIT Press, 2009. ISBN 978-0-262-03384-8. URL http://mitpress.mit.edu/books/introduction-algorithms.
+DeVore, R. A., Hanin, B., and Petrova, G. Neural network approximation. Acta Numer., 30:327-444, 2021. doi:
+
+10.1017/S0962492921000052. URL https://doi.org/10.1017/S0962492921000052.
+Frankle, J., Schwab, D. J., and Morcos, A. S. The early phase of neural network training. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020. URL https://openreview.net/forum?id=Hk11iRNFwS.
+Frankle, J., Dziugaite, G. K., Roy, D. M., and Carbin, M. Pruning neural networks at initialization: Why are we missing the mark? In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. URL https://openreview.net/forum?id=Ig-VyQc-MLK.
+Gonon, A. Harnessing symmetries for modern deep learning challenges: a path-lifting perspective. Theses, Ecole normale supérieure de Lyon - ENS LYON, November 2024. URL https://theses.hal.science/tel-04784426.
+Gonon, A., Brisebarre, N., Gribonval, R., and Riccietti, E. Approximation speed of quantized versus unquantized relu neural networks and beyond. IEEE Trans. Inf. Theory, 69(6):3960-3977, 2023. doi: 10.1109/TIT.2023.3240360. URL https://doi.org/10.1109/TIT.2023.3240360.
+Gonon, A., Brisebarre, N., Riccietti, E., and Gribonval, R. A path-norm toolkit for modern networks: consequences, promises and challenges. In International Conference on Learning Representations, ICLR 2024 Spotlight, Vienna, Austria, May 7-11. OpenReview.net, 2024a. URL https://openreview.net/pdf?id=hiHZVUIYik.
+Gonon, A., Brisebarre, N., Riccietti, E., and Gribonval, R. Code for reproducible research - A path-norm toolkit for modern networks: consequences, promises and challenges, March 2024b. URL https://hal.science/hal-04498597. It is the code tagged with v1.0.0 at https://github.com/agonon/pathnorm_toolkit, and any updates will be available directly on that git repository.
+Gonon, A., Brisebarre, N., Riccietti, E., and Gribonval, R. Code for reproducible research - A rescaling-invariant Lipschitz bound using path-metrics, May 2025. Deposited on HAL and Software Heritage. Updates will be available directly at https://github.com/agonon/pathnorm_toolkit.
+Han, S., Mao, H., and Dally, W. J. Deep compression: Compressing deep neural network with pruning, trained
+
+quantization and huffman coding. In Bengio, Y. and LeCun, Y. (eds.), 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, 2016. URL http://arxiv.org/abs/1510.00149.
+Hanson, S. J. and Pratt, L. Y. Comparing biases for minimal network construction with back-propagation. In Touretzky, D. S. (ed.), Advances in Neural Information Processing Systems 1, [NIPS Conference, Denver, Colorado, USA, 1988], pp. 177-185. Morgan Kaufmann, 1988.
+Hassibi, B. and Stork, D. G. Second order derivatives for network pruning: Optimal brain surgeon. In Hanson, S. J., Cowan, J. D., and Giles, C. L. (eds.), Advances in Neural Information Processing Systems 5, [NIPS Conference, Denver, Colorado, USA, November 30 - December 3, 1992], pp. 164-171. Morgan Kaufmann, 1992.
+He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pp. 770-778. IEEE Computer Society, 2016. doi: 10.1109/CVPR.2016.90. URL https://doi.org/10.1109/CVPR.2016.90.
+Kawaguchi, K., Kaelbling, L. P., and Bengio, Y. Generalization in deep learning. CoRR, abs/1710.05468, 2017. URL http://arxiv.org/abs/1710.05468.
+Kwon, J., Kim, J., Park, H., and Choi, I. K. ASAM: adaptive sharpness-aware minimization for scale-invariant learning of deep neural networks. In Meila, M. and Zhang, T. (eds.), Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pp. 5905-5914. PMLR, 2021. URL http://proceedings.mlr.press/v139/kwon21b.html.
+LeCun, Y., Denker, J. S., and Solla, S. A. Optimal brain damage. In Touretzky, D. S. (ed.), Advances in Neural Information Processing Systems 2, [NIPS Conference, Denver, Colorado, USA, November 27-30, 1989], pp. 598-605. Morgan Kaufmann, 1989. URL http://papers.nips.cc/paper/250-optimal-brain-damage.
+Liebenwein, L., Baykal, C., Lang, H., Feldman, D., and Rus, D. Provable filter pruning for efficient neural networks. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020. URL https://openreview.net/forum?id=BJxkOlSYDH.
+
+Lybrand, E. and Saab, R. A greedy algorithm for quantizing neural networks. J. Mach. Learn. Res., 22:156:1-156:38, 2021. URL https://jmlr.org/papers/v22/20-1233.html.
+Marcotte, S., Gribonval, R., and Peyre, G. Abide by the law and follow the flow: Conservation laws for gradient flows. CoRR, abs/2307.00144, 2023. doi: 10.48550/arXiv.2307.00144. URL https://doi.org/10.48550/arXiv.2307.00144.
+Neyshabur, B., Tomioka, R., and Srebro, N. Norm-based capacity control in neural networks. In Grünwald, P., Hazan, E., and Kale, S. (eds.), Proceedings of The 28th Conference on Learning Theory, COLT 2015, Paris, France, July 3-6, 2015, volume 40 of JMLR Workshop and Conference Proceedings, pp. 1376-1401. JMLR.org, 2015. URL http://proceedings.mlr.press/v40/Neyshabur15.html.
+Neyshabur, B., Bhojanapalli, S., and Srebro, N. A PAC-Bayesian approach to spectrally-normalized margin bounds for neural networks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net, 2018. URL https://openreview.net/forum?id=Skz_WfbCZ.
+Quemener, E. and Corvellec, M. SIDUS—the solution for extreme deduplication of an operating system. Linux Journal, 2013.
+Rangamani, A., Nguyen, N. H., Kumar, A., Phan, D. T., Chin, S. P., and Tran, T. D. A scale invariant measure of flatness for deep network minima. In IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2021, Toronto, ON, Canada, June 6-11, 2021, pp. 1680-1684. IEEE, 2021. doi: 10.1109/ICASSP39728.2021.9413771. URL https://doi.org/10.1109/ICASSP39728.2021.9413771.
+Schnoor, E., Behboodi, A., and Rauhut, H. Generalization error bounds for iterative recovery algorithms unfolded as neural networks. CoRR, abs/2112.04364, 2021. URL https://arxiv.org/abs/2112.04364.
+Singh, S. P. and Alistarh, D. Woodfisher: Efficient second-order approximation for neural network compression. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., and Lin, H. (eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.
+Stock, P. and Gribonval, R. An embedding of ReLU networks and an analysis of their identifiability. Constr. Approx., 57(2):853-899, 2023. ISSN 0176-4276,1432-0940. doi: 10.1007/s00365-022-09578-1. URL https://doi.org/10.1007/s00365-022-09578-1.
+Tsuzuku, Y., Sato, I., and Sugiyama, M. Normalized flat minima: Exploring scale invariant definition of flat minima for neural networks using PAC-Bayesian analysis. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pp. 9636-9647. PMLR, 2020. URL http://proceedings.mlr.press/v119/tsuzoku20a.html.
+Wen, K., Ma, T., and Li, Z. How does sharpness-aware minimization minimize sharpness? In International Conference on Learning Representations (ICLR), 2023. URL https://openreview.net/forum?id=5spDgWmpY6x.
+Yu, X., Serra, T., Ramalingam, S., and Zhe, S. The combinatorial brain surgeon: Pruning weights that cancel one another in neural networks. In Chaudhuri, K., Jegelka, S., Song, L., Szepesvari, C., Niu, G., and Sabato, S. (eds.), International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pp. 25668-25683. PMLR, 2022. URL https://proceedings.mlr.press/v162/yu22f.html.
+Zhang, J., Zhou, Y., and Saab, R. Post-training quantization for neural networks with provable guarantees. SIAM J. Math. Data Sci., 5(2):373-399, 2023. doi: 10.1137/22M1511709. URL https://doi.org/10.1137/22m1511709.
+
+# Appendices
+
+# A. Path-lifting, activations, and a fixed incidence matrix
+
+We recall the construction of Gonon et al. (2024a), but instead of considering the path-activation matrix $A(\theta, x)$ as in Gonon et al. (2024a), we introduce two new objects $A$ and $a(\theta, x)$ that lead to mathematically equivalent formulas but to a lighter proof of Theorem 4.1:
+
+- the path-activation vector $a(\theta, x)$ , and
+- a fixed incidence matrix $A$ that depends only on the DAG architecture, never on $\theta$ or $x$ .
+
+# A.1. Network architecture
+
+Definition A.1 (ReLU and $k$ -max-pooling activation functions). The ReLU function is defined as $\mathrm{ReLU}(x) := x \mathbb{1}_{x \geqslant 0}$ for $x \in \mathbb{R}$ . The $k$ -max-pooling function $k$ -pool $(x) := x_{(k)}$ returns the $k$ -th largest coordinate of $x \in \mathbb{R}^d$ .
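These two activation functions can be sketched in a few lines of plain Python (an illustration of Definition A.1, not the paper's code; function names are ours):

```python
def relu(x):
    """ReLU(x) = x * 1_{x >= 0}."""
    return x if x > 0 else 0.0

def k_pool(xs, k):
    """k-max-pooling: the k-th largest coordinate of xs (k = 1 is ordinary max-pooling)."""
    return sorted(xs, reverse=True)[k - 1]

print(relu(-3.0), relu(2.5))        # 0.0 2.5
print(k_pool([4.0, 1.0, 3.0], 2))   # 3.0
```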
+
+Definition A.2 (DAG-ReLU neural network (Gonon et al., 2024a)). Consider a Directed Acyclic Graph (DAG) $G = (N, E)$ with edges $E$ , and vertices $N$ called neurons. For a neuron $v$ , the sets $\operatorname{ant}(v), \operatorname{succ}(v)$ of antecedents and successors of $v$ are $\operatorname{ant}(v) := \{u \in N, u \to v \in E\}$ , $\operatorname{succ}(v) := \{u \in N, v \to u \in E\}$ . Neurons with no antecedents (resp. no successors) are called input (resp. output) neurons, and their set is denoted $N_{\mathrm{in}}$ (resp. $N_{\mathrm{out}}$ ). Neurons in $N \setminus (N_{\mathrm{in}} \cup N_{\mathrm{out}})$ are called hidden neurons. Input and output dimensions are respectively $d_{\mathrm{in}} := |N_{\mathrm{in}}|$ and $d_{\mathrm{out}} := |N_{\mathrm{out}}|$ .
+
+- A ReLU neural network architecture is a tuple $(G, (\rho_v)_{v \in N \setminus N_{\mathrm{in}}})$ composed of a DAG $G = (N, E)$ with attributes $\rho_v \in \{\mathrm{id}, \mathrm{ReLU}\} \cup \{k\text{-pool}, k \in \mathbb{N}_{>0}\}$ for $v \in N \setminus (N_{\mathrm{out}} \cup N_{\mathrm{in}})$ and $\rho_v = \mathrm{id}$ for $v \in N_{\mathrm{out}}$. We will again denote the tuple $(G, (\rho_v)_{v \in N \setminus N_{\mathrm{in}}})$ by $G$, and it will be clear from context whether the results depend only on $G = (N, E)$ or also on its attributes. Define $N_\rho := \{v \in N, \rho_v = \rho\}$ for an activation $\rho$, and $N_{*\text{-pool}} := \cup_{k \in \mathbb{N}_{>0}} N_{k\text{-pool}}$. A neuron in $N_{*\text{-pool}}$ is called a *-max-pooling neuron. For $v \in N_{*\text{-pool}}$, its kernel size is defined as $|\operatorname{ant}(v)|$.
+- Parameters associated with this architecture are vectors $\theta \in \mathbb{R}^G \coloneqq \mathbb{R}^{E \cup N \setminus N_{\mathrm{in}}}$. We call bias $b_v \coloneqq \theta_v$ the coordinate associated with a neuron $v$ (input neurons have no bias), and denote $\theta^{u \to v}$ the weight associated with an edge $u \to v \in E$. We will often denote $\theta^{\rightarrow v} \coloneqq (\theta^{u \rightarrow v})_{u \in \operatorname{ant}(v)}$ and $\theta^{v \rightarrow} \coloneqq (\theta^{v \rightarrow u})_{u \in \operatorname{succ}(v)}$.
+- The realization of a neural network with parameters $\theta \in \mathbb{R}^G$ is the function $R_{\theta}^{G}:\mathbb{R}^{N_{\mathrm{in}}}\to \mathbb{R}^{N_{\mathrm{out}}}$ (simply denoted $R_{\theta}$ when $G$ is clear from the context) defined for every input $x\in \mathbb{R}^{N_{\mathrm{in}}}$ as
+
+$$
+R_{\theta}(x) := \left(v(\theta, x)\right)_{v \in N_{\mathrm{out}}},
+$$
+
+where we use the same symbol $v$ to denote a neuron $v \in N$ and the associated function $v(\theta, x)$ , defined as $v(\theta, x) \coloneqq x_v$ for an input neuron $v$ , and defined by induction otherwise
+
+$$
+v(\theta, x) := \left\{ \begin{array}{ll} \rho_{v}\Big(b_{v} + \sum_{u \in \operatorname{ant}(v)} u(\theta, x)\, \theta^{u \to v}\Big) & \text{if } \rho_{v} = \mathrm{ReLU} \text{ or } \rho_{v} = \mathrm{id}, \\ k\text{-pool}\Big(\big(b_{v} + u(\theta, x)\, \theta^{u \to v}\big)_{u \in \operatorname{ant}(v)}\Big) & \text{if } \rho_{v} = k\text{-pool}. \end{array} \right.
+$$
+
+# A.2. Paths and the path-lifting
+
+Definition A.3 (Paths and depth in a DAG (Gonon et al., 2024a)). Consider a DAG $G = (N, E)$ as in Definition A.2. A path of $G$ is any sequence of neurons $v_0, \ldots, v_d$ such that each $v_i \to v_{i+1}$ is an edge in $G$ . Such a path is denoted $p = v_0 \to \ldots \to v_d$ . This includes paths reduced to a single $v \in N$ , denoted $p = v$ . The length of a path is $\text{length}(p) = d$ (the number of edges). We will denote $p_\ell := v_\ell$ the $\ell$ -th neuron for a general $\ell \in \{0, \ldots, \text{length}(p)\}$ and use the shorthand $p_{\text{end}} = v_{\text{length}(p)}$ for the last neuron. The depth of the graph $G$ is the maximum length over all of its paths. If $v_{d+1} \in \operatorname{succ}(p_{\text{end}})$ then $p \to v_{d+1}$ denotes the path $v_0 \to \ldots \to v_d \to v_{d+1}$ . We denote by $\mathcal{P}^G$ (or simply $\mathcal{P}$ ) the set of paths ending at an output neuron of $G$ .
+
+Definition A.4 (Sub-graph ending at a given neuron). Given a neuron $v$ of a DAG $G$ , we denote $G^{\rightarrow v}$ the graph deduced from $G$ by keeping only the largest subgraph with the same inputs as $G$ and with $v$ as a single output: every neuron $u$ with no path to reach $v$ through the edges of $G$ is removed, as well as all its incoming and outgoing edges. We will use the shorthand $\mathcal{P}^{\rightarrow v} := \mathcal{P}^{G^{\rightarrow v}}$ to denote the set of paths in $G$ ending at $v$ .
+
+Definition A.5 (Path-lifting $\Phi (\theta)$ ). Consider a DAG-ReLU neural network $G$ as in Definition A.2 and parameters $\theta \in \mathbb{R}^{G}$ associated with $G$ . For $p\in \mathcal{P}$ , define
+
+$$
+\Phi_{p}(\theta) := \left\{ \begin{array}{ll} \prod_{\ell = 1}^{\operatorname{length}(p)} \theta^{v_{\ell-1} \to v_{\ell}} & \text{if } p_0 \in N_{\mathrm{in}}, \\ b_{p_0} \prod_{\ell = 1}^{\operatorname{length}(p)} \theta^{v_{\ell-1} \to v_{\ell}} & \text{otherwise}, \end{array} \right.
+$$
+
+where an empty product is equal to 1 by convention. The path-lifting $\Phi^G (\theta)$ of $\theta$ is
+
+$$
+\Phi^ {G} (\theta) := \left(\Phi_ {p} (\theta)\right) _ {p \in \mathcal {P} ^ {G}}.
+$$
+
+This is often denoted $\Phi$ when the graph $G$ is clear from the context. We will use the shorthand $\Phi^{\rightarrow v} \coloneqq \Phi^{G^{\rightarrow v}}$ to denote the path-lifting associated with $G^{\rightarrow v}$ (Definition A.4).
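Path enumeration and the coordinates $\Phi_p(\theta)$ of Definition A.5 can be sketched as follows (a plain-Python illustration with hypothetical names; the DAG is given by its antecedent map):

```python
def paths_to(v, ant):
    """All paths (as neuron lists) ending at neuron v, in a DAG given by
    ant: neuron -> list of antecedents. Includes the single-neuron path [v]."""
    out = [[v]]
    for u in ant.get(v, []):
        out += [p + [v] for p in paths_to(u, ant)]
    return out

def phi(path, weights, biases, inputs):
    """Phi_p(theta): product of the edge weights along p, times the bias of
    the start neuron when p does not start at an input (Definition A.5)."""
    prod = 1.0
    for u, v in zip(path, path[1:]):
        prod *= weights[(u, v)]
    if path[0] not in inputs:
        prod *= biases[path[0]]  # empty products are 1 by convention
    return prod

# Chain x -> h -> o with weights 2 and 3, biases b_h = 1, b_o = 0.5:
ant = {"o": ["h"], "h": ["x"]}
w = {("x", "h"): 2.0, ("h", "o"): 3.0}
b = {"h": 1.0, "o": 0.5}
P = paths_to("o", ant)  # paths ending at the output: [o], [h,o], [x,h,o]
print(sorted(phi(p, w, b, {"x"}) for p in P))  # [0.5, 3.0, 6.0]
```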
+
+# A.3. Path-activation vector and fixed incidence matrix
+
+Definition A.6 (Activation of edges, neurons, and paths). Given $\theta, x$ , the activation of an edge $u \to v$ is $a_{u \to v}(\theta, x) := 1$ if $v$ is identity, $\mathbb{1}_{v(\theta, x) > 0}$ if $v$ is ReLU, and for $k$ -max-pool it is 1 only for the (lexicographically) first antecedent achieving the $k$ -th maximum. For a neuron $v$ set $a_v(\theta, x) := 1$ if $v$ is input, identity, or max-pool, and $\mathbb{1}_{v(\theta, x) > 0}$ if $v$ is ReLU. For a path $p = v_0 \to \dots \to v_d$ define
+
+$$
+a _ {p} (\theta , x) := a _ {v _ {0}} (\theta , x) \prod_ {\ell = 1} ^ {d} a _ {v _ {\ell - 1} \rightarrow v _ {\ell}} (\theta , x) \in \{0, 1 \}.
+$$
+
+The path-activation vector is $a(\theta ,x)\coloneqq (a_p(\theta ,x))_{p\in \mathcal{P}}\in \{0,1\}^{\mathcal{P}}$.
+
+Definition A.7 (Fixed incidence matrix $A$ ). Consider a new symbol $v_{\mathrm{bias}}$ that is not used to denote neurons. Instead of considering as in (Gonon et al., 2024a) the path-activations matrix $\mathbf{A}(\theta, x) \in \mathbb{R}^{\mathcal{P} \times (N_{\mathrm{in}} \cup \{v_{\mathrm{bias}}\})}$ whose coordinates are indexed by paths $p \in \mathcal{P}$ and neurons $u \in N_{\mathrm{in}} \cup \{v_{\mathrm{bias}}\}$ and are given by
+
+$$
+(\boldsymbol{A}(\theta, x))_{p, u} := \left\{ \begin{array}{ll} a_p(\theta, x)\, \mathbb{1}_{p_0 = u} & \text{if } u \in N_{\mathrm{in}}, \\ a_p(\theta, x)\, \mathbb{1}_{p_0 \notin N_{\mathrm{in}}} & \text{otherwise, when } u = v_{\mathrm{bias}}, \end{array} \right.
+$$
+
+we define a fixed incidence matrix $A$ , which corresponds to the all-activated case in the definition of $A(\theta, x)$ above, and which maps input neurons to the path they belong to:
+
+$$
+A_{p, u} := \left\{ \begin{array}{ll} 1 & \text{if } u \in N_{\mathrm{in}} \text{ and } p_0 = u, \\ 1 & \text{if } u = v_{\mathrm{bias}} \text{ and } p_0 \notin N_{\mathrm{in}}, \\ 0 & \text{otherwise}, \end{array} \right.
+$$
+
+so $A\in \{0,1\}^{\mathcal{P}\times (|N_{\mathrm{in}}| + 1)}$ depends only on the graph.
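Since each row of $A$ has exactly one nonzero entry, building it from a path list is immediate. A plain-Python sketch (illustrative encoding, not the paper's code):

```python
def incidence_matrix(paths, input_neurons):
    """Fixed incidence matrix A of Definition A.7: rows indexed by paths,
    columns by input_neurons followed by the extra symbol v_bias. Each row
    has a single 1, in the column of the path's start neuron when it is an
    input neuron, and in the bias column otherwise."""
    cols = list(input_neurons) + ["v_bias"]
    A = []
    for p in paths:
        row = [0] * len(cols)
        start = p[0]
        col = cols.index(start) if start in input_neurons else len(cols) - 1
        row[col] = 1
        A.append(row)
    return A

# Paths of the chain x -> h -> o: one starts at the input x, two do not.
paths = [["x", "h", "o"], ["h", "o"], ["o"]]
print(incidence_matrix(paths, ["x"]))  # [[1, 0], [0, 1], [0, 1]]
```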
+
+# A.4. Key inner-product identity
+
+With our new notations, Equation (4) (corresponding to Theorem A.1 in (Gonon et al., 2024a)) can be rewritten as:
+
+$$
+R_{\theta}(x) = \left\langle \underbrace{\Phi(\theta) \odot a(\theta, x)}_{\text{path weights}},\ \underbrace{A}_{\text{fixed incidence}} \binom{x}{1} \right\rangle. \tag{4'}
+$$
+
+# B. Proof of Theorem 4.1
+
+In this section, we prove a slightly stronger version of Theorem 4.1. We do not state this stronger version in the main body as it requires having in mind the definition of the path-lifting $\Phi$ , recalled in Definition A.5, to understand the following notations. For parameters $\theta$ , we will denote $\Phi^I(\theta)$ (resp. $\Phi^H(\theta)$ ) the sub-vector of $\Phi(\theta)$ corresponding to the coordinates associated with paths starting from an input (resp. hidden) neuron. Thus, $\Phi(\theta)$ is the concatenation of $\Phi^I(\theta)$ and $\Phi^H(\theta)$ .
+
+
+Figure 5: The coordinate of the path-lifting $\Phi$ associated with the path $p = v_{1} \to v_{2} \to v_{3}$ is $\Phi_{p}(\theta) = \theta^{v_{1} \to v_{2}}\theta^{v_{2} \to v_{3}}$ since it starts from an input neuron (Definition A.5). The path $p' = w_{1} \to w_{2} \to w_{3}$, on the other hand, starts from a hidden neuron (in $N \setminus (N_{\mathrm{in}} \cup N_{\mathrm{out}})$), so the bias of $w_{1}$ must also be taken into account: $\Phi_{p'}(\theta) = b_{w_1}\theta^{w_1 \to w_2}\theta^{w_2 \to w_3}$. As specified in Definition A.7, the columns of the incidence matrix $A$ are indexed by $N_{\mathrm{in}} \cup \{v_{\mathrm{bias}}\}$ and its rows are indexed by $\mathcal{P} = \mathcal{P}_I \cup \mathcal{P}_H$, with $\mathcal{P}_I$ the set of paths in $\mathcal{P}$ starting from an input neuron, and $\mathcal{P}_H$ the set of paths starting from a hidden neuron.
+
+$$
+\begin{array}{r l r} & & \overbrace {\frac {N _ {\mathrm {i n}}}{v _ {1}}} ^ {N _ {\mathrm {i n}}} v _ {\mathrm {b i a s}} \\ A & = & \mathcal {P} _ {I} \left\{ \begin{array}{l} p \end{array} \left( \begin{array}{c c c c c c c c} & \dots & & & & & \\ 0 & \dots & 0 & 1 & 0 & \dots & 0 & 0 \\ & & \dots & & & & & \\ & & & \ddots & & & & \\ \vdots & & & & & & \vdots \\ 0 & & & & & & 1 \\ & & & \vdots \end{array} \right) \right. \\ & & \mathcal {P} _ {H} \left\{p ^ {\prime} \right. \\ & & \end{array}
+$$
+
+Theorem B.1. Consider a ReLU neural network as in Definition A.2, with output dimension equal to one. Consider associated parameters $\theta, \theta'$ . If for every coordinate $i$ , $\theta_i$ and $\theta_i'$ have the same signs or at least one of them is zero ( $\theta_i\theta_i' \geqslant 0$ ), we have for every input $x$ :
+
+$$
+\left| R _ {\theta} (x) - R _ {\theta^ {\prime}} (x) \right| \leqslant \| x \| _ {\infty} \| \Phi^ {I} (\theta) - \Phi^ {I} \left(\theta^ {\prime}\right) \| _ {1} + \| \Phi^ {H} (\theta) - \Phi^ {H} \left(\theta^ {\prime}\right) \| _ {1}. \tag {15}
+$$
+
+Moreover, for every neural network architecture, there are non-negative parameters $\theta \neq \theta^{\prime}$ and a non-negative input $x$ such that Inequality (15) is an equality.
+
+Theorem B.1 is intentionally stated with scalar output to avoid imposing a specific norm on the outputs; readers can naturally extend it to the vector-valued setting using the norm most relevant to their application. As an example, we derive the next corollary for $\ell^q$ -norms on the outputs, which corresponds to the Theorem 4.1 given in the text body (except for the equality case, which is also an easy consequence of the equality case of Inequality (15)).
+
+Corollary B.2. Consider an exponent $q \in [1, \infty)$ and a ReLU neural network as in Definition A.2. Consider associated parameters $\theta, \theta'$ . If for every coordinate $i$ , it holds $\theta_i \theta_i' \geqslant 0$ , then for every input $x \in \mathbb{R}^{d_{in}}$ :
+
+$$
+\left\| R _ {\theta} (x) - R _ {\theta^ {\prime}} (x) \right\| _ {q} \leqslant \max (\| x \| _ {\infty}, 1) \| \Phi (\theta) - \Phi \left(\theta^ {\prime}\right) \| _ {1}.
+$$
+
+Proof of Corollary B.2. By definition of the model, it holds:
+
+$$
+\| R_{\theta}(x) - R_{\theta^{\prime}}(x) \|_{q}^{q} = \sum_{v \in N_{\mathrm{out}}} | v(\theta, x) - v(\theta^{\prime}, x) |^{q}.
+$$
+
+Recall that $\Phi^{\rightarrow v}$ is the path-lifting associated with the sub-graph $G^{\rightarrow v}$ (Definition A.5). By Theorem B.1, it holds:
+
+$$
+| v (\theta , x) - v \left(\theta^ {\prime}, x\right) | ^ {q} \leqslant \max \left(\left\| x \right\| _ {\infty} ^ {q}, 1\right) \| \Phi^ {\rightarrow v} (\theta) - \Phi^ {\rightarrow v} \left(\theta^ {\prime}\right) \| _ {1} ^ {q}.
+$$
+
+Since $\Phi (\theta) = (\Phi^{\rightarrow v}(\theta))_{v\in N_{\mathrm{out}}}$ , this implies:
+
+$$
+\left\| R _ {\theta} (x) - R _ {\theta^ {\prime}} (x) \right\| _ {q} ^ {q} \leqslant \max (\left\| x \right\| _ {\infty} ^ {q}, 1) \| \Phi (\theta) - \Phi (\theta^ {\prime}) \| _ {1} ^ {q}.
+$$
+
+
+Figure 6: Counter-example showing that the conclusion of Theorem 4.1 does not hold when the parameters have opposite signs. If the hidden neurons are ReLU neurons, the left network implements $R_{\theta}(x) = \mathrm{ReLU}(x)$ (with $\theta = (1, 1)^T$) and the right network implements $R_{\theta'}(x) = -\mathrm{ReLU}(-x)$ (with $\theta' = (-1, -1)^T$). Inequality (5) does not hold since there is a single path and the product of the weights along this path is equal to one in both cases, so that $\Phi(\theta) = \Phi(\theta') = 1$ (cf. Section 3), while these two functions are nonzero and have disjoint supports.
+
+
+
+Proof of Theorem B.1. A geometric illustration of the spirit of the proof is given in Figure 3, as detailed in the figure caption.
+
+Step 1 - Reduction to non-zero coordinates. Since both sides of (15) are continuous in $(\theta, \theta')$ , without loss of generality it is enough to prove it for weight vectors $\theta, \theta'$ with no zero entries.
+
+Step 2 - Proof for parameters leading to the same activations $a(\theta, x) = a(\theta', x)$ . If the two parameters activate exactly the same paths on $x$ , then $(4')$ yields
+
+$$
+| R _ {\theta} (x) - R _ {\theta^ {\prime}} (x) | = \left| \langle (\Phi (\theta) - \Phi (\theta^ {\prime})) \odot a (\theta , x), A \left( \begin{array}{c} x \\ 1 \end{array} \right) \rangle \right| \leqslant \| (\Phi (\theta) - \Phi (\theta^ {\prime})) \odot a (\theta , x) \| _ {1} \| A \left( \begin{array}{c} x \\ 1 \end{array} \right) \| _ {\infty}.
+$$
+
+Because $A$ is binary with at most one "1" per row, $\| A\binom{x}{1}\| _\infty\leqslant \| \binom{x}{1}\| _\infty = \max(\|x\|_\infty, 1)$. Moreover, $a(\theta ,x)$ is a binary vector so $\| (\Phi (\theta) - \Phi (\theta^{\prime}))\odot a(\theta ,x)\| _1\leqslant \| \Phi (\theta) - \Phi (\theta^{\prime})\| _1$. This gives the bound of Theorem 4.1 in the simple case where $a(\theta ,x) = a(\theta^{\prime},x)$:
+
+$$
+\left| R _ {\theta} (x) - R _ {\theta^ {\prime}} (x) \right| \leqslant \max (\left\| x \right\| _ {\infty}, 1) \left\| \Phi (\theta) - \Phi \left(\theta^ {\prime}\right) \right\| _ {1}.
+$$
+
+To prove the slightly stronger bound appearing in Theorem B.1, first split the paths depending on whether they start at an input neuron or at a hidden neuron:
+
+$$
+\left| \left\langle (\Phi (\theta) - \Phi (\theta^ {\prime})) \odot a (\theta , x), A \binom {x} {1} \right\rangle \right| \leqslant \left| \left\langle (\Phi^ {I} (\theta) - \Phi^ {I} (\theta^ {\prime})) \odot a ^ {I} (\theta , x), A ^ {I} x \right\rangle \right| + \left| \left\langle (\Phi^ {H} (\theta) - \Phi^ {H} (\theta^ {\prime})) \odot a ^ {H} (\theta , x), A ^ {H} \binom {x} {1} \right\rangle \right|
+$$
+
+and then apply the same argument on each part to get:
+
+$$
+\left| R _ {\theta} (x) - R _ {\theta^ {\prime}} (x) \right| \leqslant \| x \| _ {\infty} \| \Phi^ {I} (\theta) - \Phi^ {I} \left(\theta^ {\prime}\right) \| _ {1} + \| \Phi^ {H} (\theta) - \Phi^ {H} \left(\theta^ {\prime}\right) \| _ {1}.
+$$
+
+Step 3 - A bound for a trajectory with finitely many break-points. Let $t \mapsto \theta(t)$ be any continuous curve from $\theta$ to $\theta'$ such that the activation vector $a(\theta(t), x)$ is constant on finitely many intervals $(t_k, t_{k+1})$ with $t_k < t_{k+1}$ and $[0,1] = \cup_{k=0}^{m-1} [t_k, t_{k+1}]$. Applying Step 2 on every interval, and summing, gives
+
+$$
+\left| R _ {\theta} (x) - R _ {\theta^ {\prime}} (x) \right| \leqslant \| x \| _ {\infty} \sum_ {k} \left\| \Phi^ {I} \left(\theta \left(t _ {k + 1}\right)\right) - \Phi^ {I} \left(\theta \left(t _ {k}\right)\right) \right\| _ {1} + \sum_ {k} \left\| \Phi^ {H} \left(\theta \left(t _ {k + 1}\right)\right) - \Phi^ {H} \left(\theta \left(t _ {k}\right)\right) \right\| _ {1}. \tag {A.2}
+$$
+
+Step 4 - Construction of a monotone path in log-space. For each coordinate index $i$ of the vector $\theta$ define
+
+$$
+\theta_ {i} (t) = \operatorname {s g n} \left(\theta_ {i}\right) \left| \theta_ {i} \right| ^ {1 - t} \left| \theta_ {i} ^ {\prime} \right| ^ {t}, \quad t \in [ 0, 1 ]. \tag {A.3}
+$$
+
+This trajectory $t \to \theta(t)$ is well-defined since by Step 1 we assumed without loss of generality that the coordinates of $\theta$ and $\theta'$ are non-zero. Moreover, since $\operatorname{sgn}(\theta) = \operatorname{sgn}(\theta')$ , this trajectory goes from $\theta$ to $\theta'$ and we can use (A.2) provided that this path has only finitely many break-points.
+
+For every path $p$ , the scalar function $t \mapsto \Phi_p(\theta(t)) = |\Phi_p(\theta)|^{1 - t} |\Phi_p(\theta')|^t$ is monotone, so developing the $\ell^1$ -norms in (A.2) yields sums that telescope exactly:
+
+$$
+\sum_{k} \| \Phi^{I}(\theta(t_{k+1})) - \Phi^{I}(\theta(t_{k})) \|_{1} = \| \Phi^{I}(\theta) - \Phi^{I}(\theta^{\prime}) \|_{1}, \quad \text{and similarly for } \Phi^{H}.
+$$
+
+Thus Theorem B.1 follows from (A.2) provided the path has only finitely many break-points.
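The monotonicity-and-telescoping argument of Step 4 can be checked numerically: along the geometric trajectory (A.3), each coordinate $t \mapsto \Phi_p(\theta(t))$ is monotone, so the $\ell^1$-increments over any partition of $[0,1]$ sum exactly to the total variation. A small sketch for one (hypothetical) path-coordinate with matching nonzero signs:

```python
def geom(a, b, t):
    """Geometric interpolation (A.3) for one coordinate, assuming
    sgn(a) = sgn(b) != 0: sign * |a|^(1-t) * |b|^t."""
    sign = 1.0 if a > 0 else -1.0
    return sign * abs(a) ** (1 - t) * abs(b) ** t

# One path-coordinate going from 2.0 to 16.0, partitioned into 4 intervals:
a, b = 2.0, 16.0
ts = [0.0, 0.25, 0.5, 0.75, 1.0]
increments = sum(abs(geom(a, b, ts[k + 1]) - geom(a, b, ts[k]))
                 for k in range(len(ts) - 1))
print(increments, abs(a - b))  # telescopes: both approximately 14.0
```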
+
+Step 5 – Proving the existence of finitely many break-points (technical). Each coordinate in (A.3) is an analytic function of $t$ , and the activation of a ReLU or max-pool neuron evaluated on an analytic input can only change at isolated roots. On the compact interval [0, 1] there can be only finitely many such roots. Lemma B.3 formalizes this argument and completes the proof of (15).
+
+To prove Theorem B.1, it remains to prove the claim about the equality case: we must find $\theta \neq \theta'$ and an input $x$ such that the inequality is actually an equality.
+
+Sharpness of the bound (equality cases) in Theorem B.1. Consider an input neuron $v_{0}$ and a path $p = v_{0} \to v_{1} \to \dots \to v_{d}$. Define two parameter vectors that differ only on that path:
+
+$$
+\theta^{v_{\ell} \rightarrow v_{\ell+1}} = a > 0, \quad (\theta^{\prime})^{v_{\ell} \rightarrow v_{\ell+1}} = b > 0, \quad \ell = 0, \dots, d - 1,
+$$
+
+and set every other coordinate of $\theta, \theta'$ to 0.
+
+Choose the input $x$ with $x_{v_0} > 0$ and all other coordinates equal to 0. Because the signal propagates solely along $p$ ,
+
+$$
+R _ {\theta} (x) = a ^ {d} x _ {v _ {0}}, \qquad R _ {\theta^ {\prime}} (x) = b ^ {d} x _ {v _ {0}}.
+$$
+
+For the path-lifting, only the coordinate $\Phi_p$ changes, hence
+
+$$
+\left\| \Phi^ {I} (\theta) - \Phi^ {I} \left(\theta^ {\prime}\right) \right\| _ {1} = \left| a ^ {d} - b ^ {d} \right|, \quad \left\| \Phi^ {H} (\theta) - \Phi^ {H} \left(\theta^ {\prime}\right) \right\| _ {1} = 0.
+$$
+
+Since $\| x\|_{\infty} = x_{v_0}$ , inequality (15) is an equality:
+
+$$
+\left| a ^ {d} - b ^ {d} \right| x _ {v _ {0}} = \| x \| _ {\infty} \| \Phi^ {I} (\theta) - \Phi^ {I} \left(\theta^ {\prime}\right) \| _ {1} + \| \Phi^ {H} (\theta) - \Phi^ {H} \left(\theta^ {\prime}\right) \| _ {1}.
+$$
+
+Thus the bound of Theorem B.1 cannot be improved in general.
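This equality case is easy to verify numerically on a chain network of depth $d$ (an illustrative plain-Python sanity check; with positive weights and input, the ReLUs never clip):

```python
def chain_output(weight, depth, x0):
    """Forward pass of a chain of `depth` edges with equal positive weight
    and no biases; ReLU is inactive for positive signals."""
    v = x0
    for _ in range(depth):
        v = max(weight * v, 0.0)
    return v

a, b, d, x0 = 1.5, 0.5, 3, 2.0
lhs = abs(chain_output(a, d, x0) - chain_output(b, d, x0))
# Right-hand side of (15): ||x||_inf * |a^d - b^d| + 0 (no hidden-start paths).
rhs = x0 * abs(a ** d - b ** d)
print(lhs, rhs)  # equal: 6.5 6.5
```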
+
+Lemma B.3. Fix $n \in \mathbb{N}$ inputs $x_{1},\ldots ,x_{n} \in \mathbb{R}^{d_{in}}$ and two parameter vectors $\theta, \theta'$ with no zero coordinates. Let $\theta(t)$ be the geometric trajectory (A.3). There are finitely many points $0 = t_{0} < t_{1} < \dots < t_{m} = 1$ such that for every input $x_{i}$ the path-activation vector $a(\theta(t), x_{i})$ is constant on each open interval $(t_{k}, t_{k+1})$ .
+
+Proof of Lemma B.3. Step 1 - reduce to a single input. If a finite breakpoint set works for each $x_{i}$ individually, their union works for all inputs. We therefore fix one arbitrary input $x$ .
+
+Step 2 - property to prove for each neuron. For a neuron $v$ define
+
+$$
+\mathbf{P}(v): \left\{ \begin{array}{l} \text{there exist finitely many breakpoints } 0 = t_0 < t_1 < \cdots < t_m = 1 \text{ such that:} \\ \quad t \mapsto v(\theta(t), x) \text{ is analytic on every } [t_k, t_{k+1}], \\ \quad t \mapsto a_v(\theta(t), x) \text{ and } t \mapsto a_{u \to v}(\theta(t), x) \text{ for all } u \in \operatorname{ant}(v) \text{ are constant on } (t_k, t_{k+1}). \end{array} \right.
+$$
+
+If $\mathbf{P}(v)$ holds for every neuron $v$ , the union of their breakpoints gives finitely many intervals on which all edge and path activations are frozen, completing the lemma.
+
+Step 3 - prove $\mathbf{P}(v)$ by topological induction. We perform induction on a topological sorting (Cormen et al., 2009, Section 22.4) of the underlying DAG. We start with input neurons $v$: by Definition A.2, these are the neurons without antecedents, so they are the first to appear in a topological sorting.
+
+Initialization: Input neurons. $v(\theta, x) = x_v$ does not depend on $\theta$ , hence $a_v(\cdot, x) \equiv 1$ ; $\mathbf{P}(v)$ holds with $m = 1$ .
+
+Induction: Now consider a neuron $v \notin N_{\mathrm{in}}$ and assume $\mathbf{P}(u)$ to hold for every neuron $u$ coming before $v$ in the topological sorting. There are finitely many breakpoints $0 = t_0 < t_1 < \dots < t_m = 1$ such that for every $u \in \operatorname{ant}(v)$ and every $k$ , the map $t \in [t_k, t_{k+1}] \mapsto u(\theta(t), x)$ is analytic. We distinguish three cases depending on the activation function of the neuron $v$ :
+
+(i) Identity neuron. $v(\theta(t), x) = b_v + \sum_{u \in \mathrm{ant}(v)} u(\theta(t), x) \theta_{u \to v}(t)$ is analytic on the same intervals $[t_k, t_{k+1}]$ because each factor is analytic; $a_v \equiv 1$ . Thus $\mathbf{P}(v)$ inherits the finite breakpoint set of its antecedents.
+(ii) ReLU neuron. The pre-activation $\mathrm{pre}_v(t)\coloneqq b_v + \sum_{u\in \mathrm{ant}(v)}u(\theta (t),x)\theta_{u\to v}(t)$ is analytic on each $[t_k,t_{k + 1}]$ by induction. Either $\mathrm{pre}_v$ is identically zero, in which case $a_{v}\equiv 0$ , or its zero set is finite (as an analytic function on a compact domain), so the sign of $\mathrm{pre}_v$ (and therefore $a_{v}$ and each $a_{u\rightarrow v})$ is constant between consecutive zeros. Hence $\mathbf{P}(v)$ holds.
+(iii) $k$-max-pool neuron. The output of $v$ is the $k$-th largest component of $\mathrm{pre}_v \coloneqq \left(b_v + u(\theta(t), x)\,\theta_{u \to v}(t)\right)_{u \in \operatorname{ant}(v)}$. Each coordinate of $\mathrm{pre}_v$ is analytic on each $[t_k, t_{k+1}]$ by induction. Two coordinates can swap order only at isolated $t$ where their analytic difference vanishes, so the ranking (and thus the selected $k$-th value) changes only finitely many times. Thus $\mathbf{P}(v)$ holds.
+
+By topological induction $\mathbf{P}(v)$ is true for every neuron. The argument in Step 2 then gives the desired global breakpoint set.
+
+# C. Proof of Lemma 5.2
+
+Rescaling-invariance is a direct consequence of the known properties of the path-lifting $\Phi$ (Gonon et al., 2024a).
+
+In the case of a singleton $I = \{i\}$, as already mentioned, (13) simply follows from (5) and the definition of Path-Mag. When $|I| \geqslant 2$, consider any enumeration $i_j$, $1 \leqslant j \leqslant |I|$, of the elements of $I$, and let $s_j := \mathbf{1}_G - \sum_{\ell=1}^j e_{i_\ell} = \mathbf{1}_G - \mathbf{1}_{\cup_{\ell=1}^j \{i_\ell\}}$ (as well as $s_0 := \mathbf{1}_G$): since the pair $(\theta, s \odot \theta)$ as well as the pairs $(s_{j-1} \odot \theta, s_j \odot \theta)$ satisfy the assumptions of Lemma 4.2, and
+
+$s_j\odot s_{j - 1} = s_j$ we have
+
+$$
+\begin{aligned} \| \Phi(\theta) - \Phi(s \odot \theta) \|_1 &= \| \Phi(\theta) \|_1 - \| \Phi(s \odot \theta) \|_1 \\ &= \sum_{j=1}^{|I|} \left( \| \Phi(s_{j-1} \odot \theta) \|_1 - \| \Phi(s_j \odot \theta) \|_1 \right) \\ &\overset{(7)}{=} \sum_{j=1}^{|I|} \| \Phi(s_{j-1} \odot \theta) - \Phi(s_j \odot (s_{j-1} \odot \theta)) \|_1 \\ &\overset{(10)}{=} \sum_{j=1}^{|I|} \operatorname{Path-Mag}(s_{j-1} \odot \theta, i_j) \\ &\overset{(12)}{\leqslant} \sum_{j=1}^{|I|} \operatorname{Path-Mag}(\theta, i_j). \end{aligned}
+$$
+
+Finally, to establish (14), observe that for each path $p$ we have $|\Phi_p(\theta)| = \prod_{j\in p}|\theta_j|$ so, for each $i\in p$ (NB: $i$ can index either an edge in the path or the first neuron of $p$ when $p$ starts from a hidden or output neuron, in which case $\theta_i$ is the associated bias) it holds
+
+$$
+\frac {\partial}{\partial \theta_ {i}} | \Phi_ {p} (\theta) | = \operatorname {s g n} \left(\theta_ {i}\right) \prod_ {j \in p, j \neq i} | \theta_ {j} |.
+$$
+
+Because $\operatorname{sgn}(\theta_i)\theta_i = |\theta_i|$ we get that, when $i\in p$
+
+$$
+\theta_ {i} \cdot \frac {\partial}{\partial \theta_ {i}} | \Phi_ {p} (\theta) | = | \Phi_ {p} (\theta) |.
+$$
+
+Summing over all paths for a given index $i$ shows that
+
+$$
+\begin{aligned} \left(\theta \odot \nabla_{\theta} \| \Phi(\theta) \|_1\right)_i &= \theta_i \frac{\partial}{\partial \theta_i} \sum_{p \in \mathcal{P}} | \Phi_p(\theta) | \\ &= \theta_i \sum_{p \in \mathcal{P}: i \in p} \frac{\partial}{\partial \theta_i} | \Phi_p(\theta) | \\ &= \sum_{p \in \mathcal{P}: i \in p} | \Phi_p(\theta) | \\ &= \operatorname{Path-Mag}(\theta, i). \end{aligned}
+$$
+
+# D. Proof-of-concept: accuracy of path-magnitude pruning
+
+To provide a proof-of-concept of the utility of the main Lipschitz bound in Theorem 4.1 for pruning, we implement the following "prune and finetune" procedure:
+
+1. train: we train a dense network,
+2. rescale (optional): we apply a random rescale to the trained weights (this includes biases),
+3. prune: we prune the resulting network,
+4. rewind: we rewind the weights to their value after a few initial epochs (standard in the lottery ticket literature to enhance performance (Frankle et al., 2020)),
+5. finetune: we retrain the pruned network, with the pruned weights frozen to zero and the others initialized from their rewound values.
+
+Doing so to prune $p = 40\%$ of the weights of a ResNet18 trained on ImageNet-1k at once, we observe that (Figure 4):
+
+- without random rescale (plain lines), the test accuracy obtained at the end is similar for both magnitude pruning and path-magnitude pruning;
+- with random rescale (dotted lines; the one associated with path-magnitude pruning is invisible as it coincides with the corresponding plain line), magnitude pruning suffers a large drop of top-1 test accuracy. This is not the case for path-magnitude pruning, whose rescaling invariance makes the whole process insensitive to such rescalings.
+
+We observe similar results when pruning between $p = 10\%$ and $p = 80\%$ of the weights at once, see Table 5.
+
+Table 5: Extended version of Table 4. Top-1 accuracy after pruning, optional rescale, rewind and retrain, as a function of the pruning level. $(*) =$ results valid with as well as without rescaling, as path-magnitude pruning is invariant to rescaling.
+
| Pruning level | none | 10% | 20% | 40% | 60% | 80% |
| --- | --- | --- | --- | --- | --- | --- |
| Path-Magnitude (*) | 67.7% | 68.6 | 68.8 | 68.6 | 67.9 | 66.0 |
| Magnitude w/o Random Rescale | | 69.0 | 69.0 | 68.8 | 68.2 | 66.5 |
| Magnitude w/ Random Rescale | | 68.8 | 68.7 | 63.1 | 57.5 | 15.8 |
+
+We now give details on each stage of the procedure.
+
+1. Train. We train a dense ResNet18 (He et al., 2016) on ImageNet-1k, using $99\%$ of the 1,281,167 images of the training set for training and the other $1\%$ for validation. We use SGD for 90 epochs, learning rate 0.1, weight-decay 0.0001, batch size 1024, classical ImageNet data normalization, and a multi-step scheduler where the learning rate is divided by 10 at epochs 30, 60 and 80. Among the 90 epochs, the one with maximum validation top-1 accuracy is taken as the final epoch. Running the 90 epochs took about 18 hours on a single A100-40GB GPU.
+
+2. Random rescaling. Consider a pair of consecutive convolutional layers in the same basic block of the ResNet18 architecture, for instance the ones of the first basic block: model.layer1[0].conv1 and model.layer1[0].conv2 in PyTorch, with model being the ResNet18. Denote by $C$ the number of output channels of the first convolutional layer, which is also the number of input channels of the second one. For each channel $c \in [[1, C]]$, we choose uniformly at random a rescaling factor $\lambda \in \{1, 128, 4096\}$, multiply the output channel $c$ of the first convolutional layer by $\lambda$, and divide the input channel $c$ of the second convolutional layer by $\lambda$. In order to preserve the input-output relationship, we also multiply by $\lambda$ the running mean and the bias of the batch normalization layer in between (model.layer1[0].bn1 in the previous example). Here is an illustrative Python snippet (to be applied to the correct layer weights as described above):
+
+```python
+import numpy as np
+
+factors = np.array([1, 128, 4096])
+out_channels1, _, _, _ = weights_conv1.shape
+for out in range(out_channels1):
+    # Pick a random rescaling factor for this channel.
+    factor = np.random.choice(factors)
+    # Scale the outgoing channel of the first convolution...
+    weights_conv1[out, :, :, :] *= factor
+    # ...and undo it on the matching input channel of the second one.
+    weights_conv2[:, out, :, :] /= factor
+    # Keep the intermediate batch norm consistent with the rescaled activations.
+    running_mean[out] *= factor
+    bias[out] *= factor
+```
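+
+As a sanity check, the effect of such a rescaling can be verified numerically. Below is a minimal numpy sketch of our own (two fully connected layers and no batch normalization, a simplification of the convolutional case above) confirming that channel-wise rescaling leaves the composed function unchanged:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+C = 8  # number of hidden channels shared by the two layers
+W1 = rng.standard_normal((C, 5))  # first layer: 5 inputs -> C channels
+W2 = rng.standard_normal((3, C))  # second layer: C channels -> 3 outputs
+x = rng.standard_normal(5)
+
+before = W2 @ (W1 @ x)
+
+# Rescale each hidden channel c by a random factor, as in the procedure above.
+factors = rng.choice([1.0, 128.0, 4096.0], size=C)
+W1 = W1 * factors[:, None]  # multiply output channel c of the first layer
+W2 = W2 / factors[None, :]  # divide input channel c of the second layer
+
+after = W2 @ (W1 @ x)
+assert np.allclose(before, after)
+```
+
+With a ReLU between the two layers, the same cancellation holds, since the factors are positive and hence commute with the activation.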
+
+3. Pruning. At the end of the training phase, we globally prune (i.e. set to zero) $p\%$ of the remaining weights in all the convolutional layers plus the final fully connected layer.
+4. Rewinding. We save the mask and rewind the weights to their values after the first 5 epochs of the dense network, and train for 85 remaining epochs. This exactly corresponds to the hyperparameters and pruning algorithm of the lottery ticket literature (Frankle et al., 2021).
+5. Finetune. This is done under the same conditions as the training phase.
+
+# E. Computational cost: comparing pruning criteria
+
+This section details how the results of Table 3 were obtained.
+
+# E.1. Hardware and software
+
+All experiments were performed on an NVIDIA A100-PCIE-40GB GPU, with an Intel(R) Xeon(R) Silver 4215R CPU @ 3.20GHz. We used PyTorch (version 2.2, with CUDA 12.1 and cuDNN 8.9 enabled) to implement model loading, inference, and custom pruning-cost computation. All timings were taken using the torch.utils.benchmark module, synchronizing the GPU to ensure accurate measurement of wall-clock time.
+
+# E.2. Benchmarked code
+
+Single forward pass. We fed a tensor torch.randn(B, 3, 224, 224) to each model (batch size $B = 1$ or $B = 128$, $224 \times 224$ RGB image).
+
+Path-magnitude scores. We followed the recipe given in (14): $(\mathrm{Path\text{-}Mag}(\theta ,i))_i = \theta \odot \nabla_\theta \| \Phi (\theta)\| _1$ . To do so, we computed the path-norm $\| \Phi (\theta)\| _1$ using the function get_path_norm that we released online at github.com/agonon/pathnorm_toolkit (Gonon et al., 2024b), and we simply added one line to auto-differentiate the computation and multiply the result pointwise with the parameters $\theta$. Thus, our code (see (Gonon et al., 2025) and github.com/agonon/pathnorm_toolkit for updates) has the following structure:
+
+- it starts by replacing max-pooling neurons by summation neurons, or equivalently max-pooling layers by convolutional layers (following the recipe given in (Gonon et al., 2024a) to compute correctly the path-norm),
+- it replaces each weight by its absolute value,
+- it does a forward pass to compute the path-norm,
+- here we added auto-differentiation (backwarding the path-norm computations), and pointwise multiplication with original weights,
+- and it finally restores the original max-pooling layers and weight values, recovering the original network.
+
+Table 3 reports the time to do all this.
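+
+For intuition, the recipe can be reproduced end-to-end on a toy case. The numpy sketch below is our own simplification (a bias-free two-layer network with no max-pooling, and an analytic gradient in place of autodiff): it computes $\|\Phi(\theta)\|_1$ as a forward pass on absolute weights with an all-ones input, forms $\theta \odot \nabla_\theta \|\Phi(\theta)\|_1$, and checks the result against brute-force path enumeration:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(1)
+W1 = rng.standard_normal((4, 3))  # layer 1: 3 inputs -> 4 hidden neurons
+W2 = rng.standard_normal((2, 4))  # layer 2: 4 hidden -> 2 outputs
+A1, A2 = np.abs(W1), np.abs(W2)
+
+# Path-norm: forward pass of the absolute-value network on an all-ones input.
+path_norm = np.sum(A2 @ (A1 @ np.ones(3)))
+
+# theta * (gradient of the path-norm), written analytically for this case:
+# d path_norm / d W1[i, j] = sign(W1[i, j]) * sum_k A2[k, i],
+# and theta * sign(theta) = |theta|.
+score1 = A1 * A2.sum(axis=0)[:, None]     # scores of the layer-1 weights
+score2 = A2 * (A1 @ np.ones(3))[None, :]  # scores of the layer-2 weights
+
+# Brute-force check: the score of a weight is the sum, over all paths through
+# it, of the product of the absolute weights along the path.
+brute1 = np.array([[sum(A1[i, j] * A2[k, i] for k in range(2))
+                    for j in range(3)] for i in range(4)])
+assert np.allclose(score1, brute1)
+# Every path crosses exactly one layer-1 weight, so the scores sum to the path-norm.
+assert np.isclose(score1.sum(), path_norm)
+```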
+
+Magnitude scores. Our implementation takes a torch model as input and does a simple loop over all the model's parameters:
+
+- to check if these are the parameters of a torch.nn.Linear or torch.nn.Conv2d module,
+- if this is the case, it adds to a list the absolute values of these weights.
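+
+In framework-agnostic form, this scoring, followed by the global thresholding used in the pruning stage, can be sketched in a few lines of numpy (our own illustration, not the released code; `global_magnitude_masks` is a hypothetical helper name):
+
+```python
+import numpy as np
+
+def global_magnitude_masks(weight_arrays, p):
+    """Keep-masks that zero out the fraction p of globally smallest |weights|."""
+    scores = np.concatenate([np.abs(w).ravel() for w in weight_arrays])
+    threshold = np.quantile(scores, p)  # p in [0, 1]
+    return [np.abs(w) > threshold for w in weight_arrays]
+
+rng = np.random.default_rng(2)
+weights = [rng.standard_normal((4, 4)), rng.standard_normal((8, 2))]
+masks = global_magnitude_masks(weights, 0.5)
+kept = sum(int(m.sum()) for m in masks)
+assert kept == 16  # half of the 32 weights survive
+```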
+
+Loss-sensitivity scores (LeCun et al., 1989). In the Optimal Brain Damage (OBD) framework introduced in (LeCun et al., 1989), each weight $\theta_{i}$ in the network is assigned a score approximating the expected increase in loss if $\theta_{i}$ were pruned (set to zero). The score of $\theta_{i}$ is defined by:
+
+$$
+\mathrm{OBD}(\theta, i) = \frac{1}{2} h_{ii} \theta_i^2,
+$$
+
+where $h_{ii}$ is the diagonal entry of the Hessian matrix $H = \nabla^2\ell$ of the empirical loss
+
+$$
+\ell (\theta) = \sum_ {k = 1} ^ {n} \ell \left(R _ {\theta} \left(x _ {k}\right), y _ {k}\right)
+$$
+
+with respect to the parameters $\theta$. As we could not locate a proof of the rescaling-invariance of OBD, we give a short proof below, before discussing its numerical computation.
+
+Rescaling-invariance. Denote $D = \mathrm{diag}(\lambda_i)$ a diagonal rescaling matrix such that for each $\theta$ the parameters $\theta' \coloneqq D\theta$ are rescaling-equivalent to $\theta$ . This implies that $R_{\theta}(x_k) = R_{D\theta}(x_k)$ for each training sample $x_k$ and every $\theta$ , hence
+
+$\ell (\theta) = \ell (D\theta)$ for every $\theta$ . Simple calculus then yields equality of the Jacobians $\partial \ell (\theta) = \partial \ell (D\theta)D$ , i.e., since $D$ is symmetric, taking the transpose
+
+$$
+\nabla \ell (\theta) = D \nabla \ell (D \theta), \quad \forall \theta ,
+$$
+
+that is to say $\nabla \ell (\cdot) = D\nabla \ell (D\cdot)$ . Differentiating once more yields
+
+$$
+H (\theta) = \nabla^ {2} \ell (\theta) = \partial [ \nabla \ell ] (\theta) = \partial [ D \nabla \ell (D \cdot) ] (\theta) = D \partial [ \nabla \ell (D \cdot) ] (\theta) = D \partial [ \nabla \ell (\cdot) ] (D \theta) D = D H (D \theta) D.
+$$
+
+Extracting the $i$ -th diagonal entry yields $h_{ii}(\theta) = \lambda_i^2 h_{ii}(D\theta)$ (and more generally $h_{ij}(\theta) = \lambda_i\lambda_jh_{ij}(D\theta)$ ), hence
+
+$$
+\mathrm{OBD}(D\theta, i) = \frac{1}{2} h_{ii}(D\theta) ((D\theta)_i)^2 = \frac{1}{2} h_{ii}(D\theta) (\lambda_i \theta_i)^2 = \frac{1}{2} [h_{ii}(D\theta) \lambda_i^2] \theta_i^2 = \frac{1}{2} h_{ii}(\theta) \theta_i^2 = \mathrm{OBD}(\theta, i). \tag{16}
+$$
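+
+The invariance (16) can also be observed numerically. The sketch below is our own toy example (not from the paper's experiments): it uses the loss $\ell(\theta) = (\theta_1\theta_2 - 1)^2$, which is invariant under $D = \mathrm{diag}(\lambda, 1/\lambda)$, and a finite-difference Hessian diagonal:
+
+```python
+import numpy as np
+
+def loss(theta):
+    # Toy rescaling-invariant loss: it only depends on theta_1 * theta_2,
+    # so loss(theta) == loss(D @ theta) for D = diag(lam, 1/lam).
+    return (theta[0] * theta[1] - 1.0) ** 2
+
+def hessian_diag(theta, eps=1e-3):
+    # Central finite differences for the diagonal of the Hessian
+    # (exact up to rounding here: the loss is quadratic in each coordinate).
+    d = np.zeros_like(theta)
+    for i in range(len(theta)):
+        e = np.zeros_like(theta)
+        e[i] = eps
+        d[i] = (loss(theta + e) - 2 * loss(theta) + loss(theta - e)) / eps**2
+    return d
+
+def obd(theta):
+    return 0.5 * hessian_diag(theta) * theta**2
+
+theta = np.array([0.7, -1.3])
+lam = 128.0
+D = np.array([lam, 1.0 / lam])  # rescaling-equivalent parameters D * theta
+assert np.allclose(obd(theta), obd(D * theta), rtol=1e-4)
+```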
+
+Computation. Computing the full Hessian matrix $H$ exactly would be prohibitive for large networks. Instead, a well-known variant of Hutchinson's trick (Bekas et al., 2007) is that its diagonal can be computed as
+
+$$
+\operatorname{diag}(H) = \mathbb{E}_v \left[ (Hv) \odot v \right]
+$$
+
+where the expectation is over Rademacher vectors $v$ (i.i.d. uniform $v_{i} \in \{-1,1\}$ ) and where $\odot$ denotes pointwise multiplication. In practice, we approximate it as follows:
+
+- draw a single vector $v$ as above,
+- compute the Hessian-vector product $Hv$ using the "reverse-over-forward" higher-order autodiff in PyTorch's torch.func API,
+- deduce the estimate $\mathrm{diag}(H) \simeq (Hv) \odot v \eqqcolon u$ ,
+- finally estimate $\mathrm{OBD} \simeq \frac{1}{2} u \odot \theta \odot \theta = \frac{1}{2} (Hv) \odot v \odot \theta \odot \theta$ .
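+
+The estimator behind the first three bullets can be checked in isolation. The numpy sketch below is our own illustration (an explicit small symmetric matrix stands in for the Hessian-vector products): averaging $(Hv) \odot v$ over all Rademacher vectors recovers $\operatorname{diag}(H)$ exactly, while a single draw, as used in practice, is an unbiased but noisy estimate:
+
+```python
+import itertools
+import numpy as np
+
+rng = np.random.default_rng(3)
+A = rng.standard_normal((4, 4))
+H = A + A.T  # symmetric stand-in for the Hessian
+
+# Average (Hv) * v over every Rademacher vector v in {-1, +1}^4;
+# the cross terms h_ij * v_i * v_j (i != j) cancel exactly.
+estimates = [(H @ np.array(v)) * np.array(v)
+             for v in itertools.product([-1.0, 1.0], repeat=4)]
+assert np.allclose(np.mean(estimates, axis=0), np.diag(H))
+
+# A single draw, as used in our experiments, is unbiased but noisy.
+v = rng.choice([-1.0, 1.0], size=4)
+single_estimate = (H @ v) * v
+```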
+
+The cost of all this depends on the size of the batch on which the loss is computed, as the cost of the Hessian-vector product $Hv$ depends on it. Table 3 reports the milliseconds required for this entire procedure on batch sizes of 1 and 128, listing the corresponding values as $x - y$.
+
+This approximation is not rescaling-invariant in general. Indeed, we have
+
+$$
+\begin{array}{l} \left(\left(H v\right) \odot v \odot \theta \odot \theta\right) _ {i} = \left(H v\right) _ {i} \cdot v _ {i} \cdot \theta_ {i} ^ {2} = \left(\sum_ {j} h _ {i j} (\theta) v _ {j}\right) v _ {i} \theta_ {i} ^ {2} \\ \underset {(1 6)} {=} \left(\sum_ {j} \lambda_ {i} \lambda_ {j} h _ {i j} (D \theta) v _ {j}\right) v _ {i} \theta_ {i} ^ {2} = \left(\sum_ {j} \lambda_ {j} h _ {i j} (D \theta) v _ {j}\right) v _ {i} \lambda_ {i} \theta_ {i} ^ {2} \\ \end{array}
+$$
+
+which would be the same as the estimate made for $D\theta$ if and only if it were equal to
+
+$$
+\left(\sum_ {j} h _ {i j} (D \theta) v _ {j}\right) v _ {i} \lambda_ {i} ^ {2} \theta_ {i} ^ {2}.
+$$
+
+There is no reason for this to happen (and it did not happen in any of our experiments). For instance, take $v_{i} = \theta_{i} = 1$ for every $i$: we would need $\sum_{j}\lambda_{j}h_{ij}(D\theta) = \lambda_{i}\sum_{j}h_{ij}(D\theta)$ for every $i$, which is the same as saying that $\lambda$ is an eigenvector of $H(D\theta)$ with eigenvalue 1.
+
+# F. Lipschitz property of $\Phi$ : proof of Lemma 4.3
+
+We first establish Lipschitz properties of $\theta \mapsto \Phi(\theta)$ . Combined with the main result of this paper, Theorem 4.1, or with Corollary B.2, they establish a Lipschitz property of $\theta \mapsto R_{\theta}(x)$ for each $x$ , and of the functional map $\theta \mapsto R_{\theta}(\cdot)$ in the uniform norm on any bounded domain. This is complementary to the Lipschitz property of $x \mapsto R_{\theta}(x)$ studied elsewhere in the literature, see e.g. (Gonon et al., 2024a).
+
+Lemma F.1. Consider $q \in [1, \infty)$ , parameters $\theta$ and $\theta'$ , and a neuron $v$ . Then, it holds:
+
+$$
+\begin{array}{l} \left\| \Phi^{\rightarrow v}(\theta) - \Phi^{\rightarrow v}(\theta^{\prime}) \right\|_q^q \\ \leqslant \max_{p \in \mathcal{P}^{\rightarrow v}} \sum_{\ell = 1}^{\mathrm{length}(p)} \left( \prod_{k = \ell + 1}^{\mathrm{length}(p)} \| \theta^{\rightarrow p_k} \|_q^q \right) \left( | b_{p_\ell} - b^{\prime}_{p_\ell} |^q + \| \theta^{\rightarrow p_\ell} - (\theta^{\prime})^{\rightarrow p_\ell} \|_q^q \max_{u \in \operatorname{ant}(p_\ell)} \| \Phi^{\rightarrow u}(\theta^{\prime}) \|_q^q \right) \tag{17} \end{array}
+$$
+
+with the convention that an empty sum and product are respectively equal to zero and one.
+
+Note that when all the paths in $\mathcal{P} \to v$ have the same length $L$ , Inequality (17) is homogeneous: multiplying both $\theta$ and $\theta'$ coordinate-wise by a scalar $\lambda$ scales both sides of the equations by $\lambda^L$ .
+
+Proof. The proof of Inequality (17) goes by induction on a topological sorting of the graph. The first neurons of the sorting are the neurons without antecedents, i.e., the input neurons by definition. Consider an input neuron $v$. By Definition A.5, $\Phi^{\rightarrow v}(\cdot) = \Phi_v(\cdot) = 1$ so the left-hand side is zero. On the right-hand side, there is only a single choice for a path ending at $v$: the path $p = v$ that starts and ends at $v$. Thus $\mathrm{length}(p) = 0$, and the maximum is zero (empty sum). This proves Inequality (17) for input neurons.
+
+Consider a neuron $v \notin N_{\mathrm{in}}$ and assume that Inequality (17) holds for every neuron before $v$ in the considered topological sorting. Recall that, by definition, $\Phi^{\rightarrow v}$ is the path-lifting of $G^{\rightarrow v}$ (see Definition A.5). The paths in $G^{\rightarrow v}$ are $p = v$, and the paths going through antecedents of $v$ ( $v$ has antecedents since it is not an input neuron). So we have $\Phi^{\rightarrow v}(\theta) = \left( \begin{array}{c} (\Phi^{\rightarrow u} \times \theta^{u \rightarrow v})_{u \in \mathrm{ant}(v)} \\ b_v \end{array} \right)$, where we again recall that $\Phi^{\rightarrow u}(\cdot) = 1$ for input neurons $u$, and $b_u = 0$ for *-max-pooling neurons. Thus we have:
+
+$$
+\begin{array}{l} \left\| \Phi^{\rightarrow v}(\theta) - \Phi^{\rightarrow v}(\theta^{\prime}) \right\|_q^q \\ = | b_v - b^{\prime}_v |^q + \sum_{u \in \operatorname{ant}(v)} \| \Phi^{\rightarrow u}(\theta) \times \theta^{u \rightarrow v} - \Phi^{\rightarrow u}(\theta^{\prime}) \times (\theta^{\prime})^{u \rightarrow v} \|_q^q \\ \leqslant | b_v - b^{\prime}_v |^q + \sum_{u \in \operatorname{ant}(v)} \left( \| \Phi^{\rightarrow u}(\theta) - \Phi^{\rightarrow u}(\theta^{\prime}) \|_q^q \, | \theta^{u \rightarrow v} |^q + \| \Phi^{\rightarrow u}(\theta^{\prime}) \|_q^q \, | \theta^{u \rightarrow v} - (\theta^{\prime})^{u \rightarrow v} |^q \right) \\ \leqslant | b_v - b^{\prime}_v |^q + \| \theta^{\rightarrow v} \|_q^q \max_{u \in \operatorname{ant}(v)} \| \Phi^{\rightarrow u}(\theta) - \Phi^{\rightarrow u}(\theta^{\prime}) \|_q^q + \| \theta^{\rightarrow v} - (\theta^{\prime})^{\rightarrow v} \|_q^q \max_{u \in \operatorname{ant}(v)} \| \Phi^{\rightarrow u}(\theta^{\prime}) \|_q^q. \end{array}
+$$
+
+Using the induction hypothesis (Inequality (17)) on the antecedents of $v$, and observing that $p \in \mathcal{P}^{\rightarrow v}$ if, and only if, there exist $u \in \operatorname{ant}(v)$ and $r \in \mathcal{P}^{\rightarrow u}$ such that $p = r \rightarrow v$, this gives:
+
+$$
+\begin{array}{l} \| \Phi^{\rightarrow v}(\theta) - \Phi^{\rightarrow v}(\theta^{\prime}) \|_q^q \leqslant | b_v - b^{\prime}_v |^q + \| \theta^{\rightarrow v} - (\theta^{\prime})^{\rightarrow v} \|_q^q \max_{u \in \operatorname{ant}(v)} \| \Phi^{\rightarrow u}(\theta^{\prime}) \|_q^q \\ + \| \theta^{\rightarrow v} \|_q^q \max_{u \in \operatorname{ant}(v)} \max_{r \in \mathcal{P}^{\rightarrow u}} \sum_{\ell = 1}^{\mathrm{length}(r)} \left( \prod_{k = \ell + 1}^{\mathrm{length}(r)} \| \theta^{\rightarrow r_k} \|_q^q \right) \left( | b_{r_\ell} - b^{\prime}_{r_\ell} |^q + \| \theta^{\rightarrow r_\ell} - (\theta^{\prime})^{\rightarrow r_\ell} \|_q^q \max_{w \in \operatorname{ant}(r_\ell)} \| \Phi^{\rightarrow w}(\theta^{\prime}) \|_q^q \right) \\ = | b_v - b^{\prime}_v |^q + \| \theta^{\rightarrow v} - (\theta^{\prime})^{\rightarrow v} \|_q^q \max_{u \in \operatorname{ant}(v)} \| \Phi^{\rightarrow u}(\theta^{\prime}) \|_q^q \\ + \max_{p \in \mathcal{P}^{\rightarrow v}} \sum_{\ell = 1}^{\mathrm{length}(p) - 1} \left( \prod_{k = \ell + 1}^{\mathrm{length}(p)} \| \theta^{\rightarrow p_k} \|_q^q \right) \left( | b_{p_\ell} - b^{\prime}_{p_\ell} |^q + \| \theta^{\rightarrow p_\ell} - (\theta^{\prime})^{\rightarrow p_\ell} \|_q^q \max_{w \in \operatorname{ant}(p_\ell)} \| \Phi^{\rightarrow w}(\theta^{\prime}) \|_q^q \right) \\ = \max_{p \in \mathcal{P}^{\rightarrow v}} \sum_{\ell = 1}^{\mathrm{length}(p)} \left( \prod_{k = \ell + 1}^{\mathrm{length}(p)} \| \theta^{\rightarrow p_k} \|_q^q \right) \left( | b_{p_\ell} - b^{\prime}_{p_\ell} |^q + \| \theta^{\rightarrow p_\ell} - (\theta^{\prime})^{\rightarrow p_\ell} \|_q^q \max_{w \in \operatorname{ant}(p_\ell)} \| \Phi^{\rightarrow w}(\theta^{\prime}) \|_q^q \right). \end{array}
+$$
+
+This proves Inequality (17) for $v$ and concludes the induction.
+
+In the sequel it will be useful to restrict the analysis to normalized parameters, defined as parameters $\tilde{\theta}$ such that $\left\| \left( \begin{array}{c} \tilde{\theta}^{\rightarrow v} \\ \tilde{b}_v \end{array} \right) \right\|_1 \in \{0, 1\}$ for every $v \in N \setminus (N_{\mathrm{out}} \cup N_{\mathrm{in}})$. Thanks to the rescaling-invariance of ReLU neural network parameterizations, Algorithm 1 in Gonon et al. (2024a) allows to rescale any parameters $\theta$ into a normalized version $\tilde{\theta}$ such that $R_{\tilde{\theta}} = R_{\theta}$ and $\Phi (\theta) = \Phi (\tilde{\theta})$ (Gonon et al., 2024a, Lemma B.2). This implies the next simpler results for normalized parameters.
+
+Theorem F.2. Consider $q \in [1, \infty)$ . For every normalized parameters $\theta, \theta'$ obtained as the output of Algorithm 1 in Gonon et al. (2024a), it holds:
+
+$$
+\begin{array}{l} \| \Phi(\theta) - \Phi(\theta^{\prime}) \|_q^q \leqslant \sum_{v \in N_{\mathrm{out}} \setminus N_{\mathrm{in}}} | b_v - b^{\prime}_v |^q + \| \theta^{\rightarrow v} - (\theta^{\prime})^{\rightarrow v} \|_q^q \\ + \min \left( \| \Phi(\theta) \|_q^q, \| \Phi(\theta^{\prime}) \|_q^q \right) \max_{p \in \mathcal{P}: p_{\mathrm{end}} \notin N_{\mathrm{in}}} \sum_{\ell = 1}^{\mathrm{length}(p) - 1} \left( | b_{p_\ell} - b^{\prime}_{p_\ell} |^q + \| \theta^{\rightarrow p_\ell} - (\theta^{\prime})^{\rightarrow p_\ell} \|_q^q \right). \tag{18} \end{array}
+$$
+
+Denote by $\mathbb{N}(\theta)$ the normalized version of $\theta$ , obtained as the output of Algorithm 1 in Gonon et al. (2024a). It can be checked that if $\theta = \mathbb{N}(\tilde{\theta})$ and $\theta' = \mathbb{N}(\tilde{\theta}')$ , and if all the paths have the same lengths $L$ , then multiplying both $\tilde{\theta}$ and $\tilde{\theta}'$ coordinate-wise by a scalar $\lambda$ does not change their normalized versions $\theta$ and $\theta'$ , except for the biases and the incoming weights of all output neurons that are scaled by $\lambda^L$ . As a consequence, Inequality (18) is homogeneous: both path-liftings on the left-hand-side and the right-hand-side are multiplied by $\lambda^L$ , and so is the sum over $v \in N_{\mathrm{out}} \setminus N_{\mathrm{in}}$ in the right-hand-side, while the maximum over $p$ is unchanged since it only involves normalized coordinates that do not change.
+
+For networks used in practice, it holds $N_{\mathrm{out}} \cap N_{\mathrm{in}} = \emptyset$ so that $N_{\mathrm{out}} \setminus N_{\mathrm{in}}$ is just $N_{\mathrm{out}}$ , but the above theorem also covers the somewhat pathological case of DAG architectures $G$ where one or more input neurons are also output neurons.
+
+Proof of Theorem F.2. Since $\Phi (\theta) = (\Phi^{\rightarrow v}(\theta))_{v\in N_{\mathrm{out}}}$ , it holds
+
+$$
+\| \Phi (\theta) - \Phi (\theta^ {\prime}) \| _ {q} ^ {q} = \sum_ {v \in N _ {\text {o u t}}} \| \Phi^ {\rightarrow v} (\theta) - \Phi^ {\rightarrow v} (\theta^ {\prime}) \| _ {q} ^ {q}.
+$$
+
+By Definition A.5, it holds for every input neuron $v$ : $\Phi^{\rightarrow v}(\cdot) = 1$ . Thus, the sum can be taken over $v \in N_{\mathrm{out}} \setminus N_{\mathrm{in}}$ :
+
+$$
+\| \Phi (\theta) - \Phi (\theta^{\prime})\|_{q}^{q} = \sum_{v\in N_{\text{out}}\setminus N_{\text{in}}}\| \Phi^{\rightarrow v}(\theta) - \Phi^{\rightarrow v}(\theta^{\prime})\|_{q}^{q}.
+$$
+
+Besides, observe that many norms appearing in Inequality (17) are at most one for normalized parameters. Indeed, for such parameters it holds for every $u \in N \setminus (N_{\mathrm{in}} \cup N_{\mathrm{out}})$ : $\| \theta^{\rightarrow u} \|_q^q \leqslant 1$ (Gonon et al., 2024a, Lemma B.2). As a consequence, for $p \in \mathcal{P}$ and any $\ell \in [0, \text{length}(p) - 1]$ we have:
+
+$$
+\prod_{k = \ell + 1}^{\mathrm{length}(p)} \| \theta^{\rightarrow p_k} \|_q^q = \left( \prod_{k = \ell + 1}^{\mathrm{length}(p) - 1} \underbrace{\| \theta^{\rightarrow p_k} \|_q^q}_{\leqslant 1} \right) \| \theta^{\rightarrow p_{\mathrm{end}}} \|_q^q \leqslant \| \theta^{\rightarrow p_{\mathrm{end}}} \|_q^q.
+$$
+
+Moreover, for normalized parameters $\theta$ and $u\notin N_{\mathrm{out}}$ , it also holds $\| \Phi^{\rightarrow u}(\theta)\| _q^q\leqslant 1$ (Gonon et al., 2024a, Lemma B.3). Thus, Inequality (17) implies for any $v\in N_{\mathrm{out}}$ , and any normalized parameters $\theta$ and $\theta^\prime$ :
+
+$$
+\begin{array}{l} \left\| \Phi^{\rightarrow v}(\theta) - \Phi^{\rightarrow v}(\theta^{\prime}) \right\|_q^q \\ \leqslant | b_v - b^{\prime}_v |^q + \| \theta^{\rightarrow v} - (\theta^{\prime})^{\rightarrow v} \|_q^q + \| \theta^{\rightarrow v} \|_q^q \max_{p \in \mathcal{P}^{\rightarrow v}} \sum_{\ell = 1}^{\mathrm{length}(p) - 1} \left( | b_{p_\ell} - b^{\prime}_{p_\ell} |^q + \| \theta^{\rightarrow p_\ell} - (\theta^{\prime})^{\rightarrow p_\ell} \|_q^q \right). \end{array}
+$$
+
+Thus, we get:
+
+$$
+\begin{array}{l} \| \Phi(\theta) - \Phi(\theta^{\prime}) \|_q^q \\ = \sum_{v \in N_{\mathrm{out}} \setminus N_{\mathrm{in}}} \| \Phi^{\rightarrow v}(\theta) - \Phi^{\rightarrow v}(\theta^{\prime}) \|_q^q \\ \leqslant \sum_{v \in N_{\mathrm{out}} \setminus N_{\mathrm{in}}} \left( | b_v - b^{\prime}_v |^q + \| \theta^{\rightarrow v} - (\theta^{\prime})^{\rightarrow v} \|_q^q \right) \\ + \sum_{v \in N_{\mathrm{out}} \setminus N_{\mathrm{in}}} \| \theta^{\rightarrow v} \|_q^q \max_{p \in \mathcal{P}^{\rightarrow v}} \sum_{\ell = 1}^{\mathrm{length}(p) - 1} \left( | b_{p_\ell} - b^{\prime}_{p_\ell} |^q + \| \theta^{\rightarrow p_\ell} - (\theta^{\prime})^{\rightarrow p_\ell} \|_q^q \right) \\ \leqslant \sum_{v \in N_{\mathrm{out}} \setminus N_{\mathrm{in}}} \left( | b_v - b^{\prime}_v |^q + \| \theta^{\rightarrow v} - (\theta^{\prime})^{\rightarrow v} \|_q^q \right) \\ + \left( \sum_{v \in N_{\mathrm{out}} \setminus N_{\mathrm{in}}} \| \theta^{\rightarrow v} \|_q^q \right) \max_{p \in \mathcal{P}: p_{\mathrm{end}} \notin N_{\mathrm{in}}} \sum_{\ell = 1}^{\mathrm{length}(p) - 1} \left( | b_{p_\ell} - b^{\prime}_{p_\ell} |^q + \| \theta^{\rightarrow p_\ell} - (\theta^{\prime})^{\rightarrow p_\ell} \|_q^q \right). \end{array}
+$$
+
+It remains to use that $\sum_{v\in N_{\mathrm{out}}\setminus N_{\mathrm{in}}}\| \theta^{\rightarrow v}\| _q^q\leqslant \| \Phi (\theta)\| _q^q$ for normalized parameters $\theta$ (Gonon et al., 2024a, Theorem B.1, case of equality) to conclude that:
+
+$$
+\begin{array}{l} \| \Phi(\theta) - \Phi(\theta^{\prime}) \|_q^q \leqslant \sum_{v \in N_{\mathrm{out}} \setminus N_{\mathrm{in}}} \left( | b_v - b^{\prime}_v |^q + \| \theta^{\rightarrow v} - (\theta^{\prime})^{\rightarrow v} \|_q^q \right) \\ + \| \Phi(\theta) \|_q^q \max_{p \in \mathcal{P}: p_{\mathrm{end}} \notin N_{\mathrm{in}}} \sum_{\ell = 1}^{\mathrm{length}(p) - 1} \left( | b_{p_\ell} - b^{\prime}_{p_\ell} |^q + \| \theta^{\rightarrow p_\ell} - (\theta^{\prime})^{\rightarrow p_\ell} \|_q^q \right). \end{array}
+$$
+
+The factor $\| \Phi (\theta)\| _q^q$ in the last inequality can be replaced by $\min \left(\| \Phi (\theta)\| _q^q,\| \Phi (\theta ')\| _q^q\right)$ by repeating the proof with $\theta$ and $\theta^\prime$ exchanged (everything else is invariant under this exchange).
+
+Lemma F.3. Consider a DAG ReLU network with $L \coloneqq D - 1$, where the depth $D$ is $\max_{\text{path } p \in \mathcal{P}} \text{length}(p)$ and the width is $W = \max(d_{\text{out}}, \max_{\text{neuron } v \in N} |\text{ant}(v)|)$, with $\text{ant}(v)$ the set of antecedents of $v$ in the DAG. Denote by $\mathrm{N}(\theta)$ the normalized version of $\theta$, obtained as the output of Algorithm 1 in (Gonon et al., 2024a) with $q = 1$, i.e., obtained from $\theta$ by rescaling neurons from the input to the output layer, ensuring that, on all layers except the last one, every neuron has a vector of incoming weights with norm equal to one. It holds for every $\theta, \theta'$ and every $q \in [1, \infty)$:
+
+$$
+\left\| \Phi (\theta) - \Phi \left(\theta^ {\prime}\right) \right\| _ {q} ^ {q} \leqslant \left(W ^ {2} + \min \left(\left\| \Phi (\theta) \right\| _ {q} ^ {q}, \left\| \Phi \left(\theta^ {\prime}\right) \right\| _ {q} ^ {q}\right) \cdot L W\right) \| \theta - \theta^ {\prime} \| _ {\infty} ^ {q}
+$$
+
+Lemma 4.3 corresponds to Lemma F.3 with $q = 1$ .
+
+Proof of Lemma F.3. Lemma B.1 of (Gonon et al., 2024a) guarantees that $\Phi(\mathbb{N}(\theta)) = \Phi(\theta)$ for every $\theta$ . In particular,
+
+$$
+\left\| \Phi (\theta) - \Phi \left(\theta^ {\prime}\right) \right\| _ {1} = \left\| \Phi \left(\mathrm {N} (\theta)\right) - \Phi \left(\mathrm {N} \left(\theta^ {\prime}\right)\right) \right\| _ {1}
+$$
+
+so it is enough to prove Lemma F.3 for normalized parameters; we may and will thus assume $\theta = \mathrm{N}(\theta)$ and $\theta^{\prime} = \mathrm{N}(\theta^{\prime})$. Denote $\bar{\theta}^{\rightarrow v}\coloneqq (\theta^{\rightarrow v},b_v)$. With this notation, (18) implies, for normalized parameters $\theta ,\theta^{\prime}$:
+
+$$
+\begin{array}{l} \| \Phi(\theta) - \Phi(\theta^{\prime}) \|_q^q \leqslant \sum_{v \in N_{\mathrm{out}} \setminus N_{\mathrm{in}}} \| \bar{\theta}^{\rightarrow v} - (\bar{\theta}^{\prime})^{\rightarrow v} \|_q^q + \min(\| \Phi(\theta) \|_q^q, \| \Phi(\theta^{\prime}) \|_q^q) \cdot \max_{p \in \mathcal{P}: p_{\mathrm{end}} \notin N_{\mathrm{in}}} \sum_{\ell = 1}^{\mathrm{length}(p) - 1} \| \bar{\theta}^{\rightarrow p_\ell} - (\bar{\theta}^{\prime})^{\rightarrow p_\ell} \|_q^q \\ \leqslant \sum_{v \in N_{\mathrm{out}} \setminus N_{\mathrm{in}}} | \operatorname{ant}(v) | \cdot \| \bar{\theta}^{\rightarrow v} - (\bar{\theta}^{\prime})^{\rightarrow v} \|_{\infty}^q \\ + \min(\| \Phi(\theta) \|_q^q, \| \Phi(\theta^{\prime}) \|_q^q) \cdot \max_{p \in \mathcal{P}: p_{\mathrm{end}} \notin N_{\mathrm{in}}} \sum_{\ell = 1}^{\mathrm{length}(p) - 1} | \operatorname{ant}(p_\ell) | \cdot \| \bar{\theta}^{\rightarrow p_\ell} - (\bar{\theta}^{\prime})^{\rightarrow p_\ell} \|_{\infty}^q \\ \leqslant \left( \sum_{v \in N_{\mathrm{out}} \setminus N_{\mathrm{in}}} | \operatorname{ant}(v) | + \min(\| \Phi(\theta) \|_q^q, \| \Phi(\theta^{\prime}) \|_q^q) \cdot \max_{p \in \mathcal{P}: p_{\mathrm{end}} \notin N_{\mathrm{in}}} \left( \sum_{\ell = 1}^{\mathrm{length}(p) - 1} | \operatorname{ant}(p_\ell) | \right) \right) \| \theta - \theta^{\prime} \|_{\infty}^q \end{array}
+$$
+
+The maximum length of a path is $D = L + 1$ . Moreover $W \geqslant d_{\mathrm{out}} = |N_{\mathrm{out}}|$ and $W \geqslant |\operatorname{ant}(v)|$ for every neuron, so this yields
+
+$$
+\| \Phi(\theta) - \Phi(\theta^{\prime}) \|_q^q \leqslant \left( W^2 + \min(\| \Phi(\theta) \|_q^q, \| \Phi(\theta^{\prime}) \|_q^q) \cdot L W \right) \| \theta - \theta^{\prime} \|_{\infty}^q.
+$$
+
+
+
+# G. Recovering a known bound with Theorem 4.1
+
+It is already known in the literature that for every input $x$ and every parameters $\theta, \theta'$ (even with different signs) of a layered fully-connected neural network with $L$ affine layers and $L + 1$ layers of neurons, $N_0 = N_{\mathrm{in}}, \dots, N_L = N_{\mathrm{out}}$ , width $W := \max_{0 \leqslant \ell \leqslant L} |N_\ell|$ , and each matrix having some operator norm bounded by $R \geqslant 1$ , it holds (Gonon et al., 2023, Theorem III.1 with $p = q = \infty$ and $D = \|x\|_{\infty}$ ) (Neyshabur et al., 2018; Berner et al., 2020):
+
+$$
+\left\| R _ {\theta} (x) - R _ {\theta^ {\prime}} (x) \right\| _ {1} \leqslant \left(W \| x \| _ {\infty} + 1\right) W L ^ {2} R ^ {L - 1} \| \theta - \theta^ {\prime} \| _ {\infty}.
+$$
+
+Can it be retrieved from Theorem 4.1? The next corollary almost recovers it: with $W \max(\|x\|_{\infty}, 1)$ instead of $W\|x\|_{\infty} + 1$, and $2L$ instead of $L^2$. This is better as soon as there are $L \geqslant 2$ layers and the input satisfies $\|x\|_{\infty} \geqslant 1$.
+
+Corollary G.1. (Gonon et al., 2023, Theorem III.1) Consider a simple layered fully-connected neural network architecture with $L \geqslant 1$ layers, corresponding to functions $R_{\theta}(x) = M_L \mathrm{ReLU}(M_{L-1} \ldots \mathrm{ReLU}(M_1 x))$ with each $M_{\ell}$ denoting a matrix, and parameters $\theta = (M_1, \ldots, M_L)$ . For a matrix $M$ , denote by $\| M \|_{1,\infty}$ the maximum $\ell^1$ -norm of a row of $M$ . Consider $R \geqslant 1$ and define the set $\Theta$ of parameters $\theta = (M_1, \ldots, M_L)$ such that $\| M_{\ell} \|_{1,\infty} \leqslant R$ for every $\ell \in [1, L]$ . Then, for every parameters $\theta, \theta' \in \Theta$ , and every input $x$ :
+
+$$
+\left\| R _ {\theta} (x) - R _ {\theta^ {\prime}} (x) \right\| _ {1} \leqslant \max (\left\| x \right\| _ {\infty}, 1) 2 L W ^ {2} R ^ {L - 1} \| \theta - \theta^ {\prime} \| _ {\infty}.
+$$
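+
+As an illustration (our own numeric check on a random small network, not part of the paper's experiments), the bound of Corollary G.1 can be instantiated directly:
+
+```python
+import numpy as np
+
+def relu_net(mats, x):
+    # R_theta(x) = M_L ReLU(M_{L-1} ... ReLU(M_1 x))
+    h = x
+    for M in mats[:-1]:
+        h = np.maximum(M @ h, 0.0)
+    return mats[-1] @ h
+
+rng = np.random.default_rng(4)
+L, W = 3, 5
+theta = [rng.standard_normal((W, W)) for _ in range(L)]
+theta_prime = [M + 0.01 * rng.standard_normal(M.shape) for M in theta]
+x = rng.standard_normal(W)
+
+# R >= 1 bounds the maximum l1-norm of a row over the matrices of both parameters.
+R = max(1.0, max(np.abs(M).sum(axis=1).max() for M in theta + theta_prime))
+dist = max(np.abs(M - Mp).max() for M, Mp in zip(theta, theta_prime))
+
+lhs = np.abs(relu_net(theta, x) - relu_net(theta_prime, x)).sum()
+rhs = max(np.abs(x).max(), 1.0) * 2 * L * W**2 * R**(L - 1) * dist
+assert lhs <= rhs
+```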
+
+Proof. For every neuron $v$ , define $f(v) \coloneqq \ell$ such that neuron $v$ belongs to the output neurons of matrix $M_{\ell}$ (i.e., of layer $\ell$ ). By Lemma F.1 with $q = 1$ , we have for every neuron $v$
+
+$$
+\begin{array}{l} \left\| \Phi^{\rightarrow v}(\theta) - \Phi^{\rightarrow v}(\theta^{\prime}) \right\|_1 \\ \leqslant \max_{p \in \mathcal{P}^{\rightarrow v}} \sum_{\ell = 1}^{\mathrm{length}(p)} \left( \prod_{k = \ell + 1}^{\mathrm{length}(p)} \underbrace{\left\| \theta^{\rightarrow p_k} \right\|_1}_{\leqslant \left\| M_{f(p_k)} \right\|_{1, \infty}} \right) \left( \underbrace{\left| b_{p_\ell} - b^{\prime}_{p_\ell} \right|}_{= 0 \text{ (no biases)}} + \underbrace{\left\| \theta^{\rightarrow p_\ell} - (\theta^{\prime})^{\rightarrow p_\ell} \right\|_1}_{\leqslant | \operatorname{ant}(p_\ell) | \, \| \theta - \theta^{\prime} \|_{\infty} \leqslant W \| \theta - \theta^{\prime} \|_{\infty}} \max_{u \in \operatorname{ant}(p_\ell)} \| \Phi^{\rightarrow u}(\theta^{\prime}) \|_1 \right) \tag{19} \\ \leqslant W \| \theta - \theta^{\prime} \|_{\infty} \max_{p \in \mathcal{P}^{\rightarrow v}} \sum_{\ell = 1}^{\mathrm{length}(p)} R^{\mathrm{length}(p) - \ell} \max_{u \in \operatorname{ant}(p_\ell)} \| \Phi^{\rightarrow u}(\theta^{\prime}) \|_1 \tag{20} \end{array}
+$$
+
+with the convention that an empty sum and product are respectively equal to zero and one. Consider $\theta' = 0$ . It holds $\| \Phi^{\rightarrow u}(\theta') \|_1 = 0$ for every $u \notin N_{\mathrm{in}}$ , and $\| \Phi^{\rightarrow u}(\theta') \|_1 = 1$ for input neurons $u$ (Definition A.5). Therefore, we have:
+
+$$
+\max_{u \in \operatorname{ant}(p_\ell)} \| \Phi^{\rightarrow u}(\theta^{\prime}) \|_1 = \mathbb{1}_{\operatorname{ant}(p_\ell) \cap N_{\mathrm{in}} \neq \emptyset} = \mathbb{1}_{\ell = 1 \text{ and } p_0 \in N_{\mathrm{in}}}. \tag{21}
+$$
+
+Specializing Inequality (19) to $\theta' = 0$ and using Equation (21) yields
+
+$$
+\begin{array}{l} \| \Phi^{\rightarrow v}(\theta) \|_1 \leqslant \max_{p \in \mathcal{P}^{\rightarrow v}} \sum_{\ell = 1}^{\mathrm{length}(p)} \left( \prod_{k = \ell + 1}^{\mathrm{length}(p)} R \right) \underbrace{\| \theta^{\rightarrow p_\ell} \|_1}_{\leqslant \| M_{f(p_\ell)} \|_{1, \infty}} \underbrace{\max_{u \in \operatorname{ant}(p_\ell)} \| \Phi^{\rightarrow u}(\theta^{\prime}) \|_1}_{= \mathbb{1}_{\ell = 1 \text{ and } p_0 \in N_{\mathrm{in}}}} \\ \leqslant \max_{p \in \mathcal{P}^{\rightarrow v}: p_0 \in N_{\mathrm{in}}} R^{\mathrm{length}(p)}. \tag{22} \end{array}
+$$
+
+Since the network is layered, every neuron $u \in \operatorname{ant}(p_{\ell})$ is on the $(\ell - 1)$-th layer, and every $p' \in \mathcal{P}^{\rightarrow u}$ is of length $\ell - 1$, hence, using Inequality (20) and (22) applied to $\theta'$ and $u$, we deduce:
+
+$$
+\begin{array}{l} \| \Phi^{\rightarrow v}(\theta) - \Phi^{\rightarrow v}(\theta^{\prime}) \|_1 \leqslant W \| \theta - \theta^{\prime} \|_{\infty} \max_{p \in \mathcal{P}^{\rightarrow v}} \sum_{\ell = 1}^{\mathrm{length}(p)} R^{\mathrm{length}(p) - \ell} \underbrace{\max_{u \in \operatorname{ant}(p_\ell)} \max_{p^{\prime} \in \mathcal{P}^{\rightarrow u}: p^{\prime}_0 \in N_{\mathrm{in}}} R^{\mathrm{length}(p^{\prime})}}_{\leqslant R^{\ell - 1}} \\ \leqslant W \| \theta - \theta^{\prime} \|_{\infty} \max_{p \in \mathcal{P}^{\rightarrow v}} \underbrace{\sum_{\ell = 1}^{\mathrm{length}(p)} R^{\mathrm{length}(p) - 1}}_{\leqslant L R^{L - 1}} \\ \leqslant L W R^{L - 1} \| \theta - \theta^{\prime} \|_{\infty}. \end{array}
+$$
+
+We get:
+
+$$
+\begin{array}{l} \| \Phi(\theta) - \Phi(\theta^{\prime}) \|_1 = \sum_{v \in N_{\mathrm{out}} \setminus N_{\mathrm{in}}} \| \Phi^{\rightarrow v}(\theta) - \Phi^{\rightarrow v}(\theta^{\prime}) \|_1 \\ \leqslant | N_{\mathrm{out}} \setminus N_{\mathrm{in}} | \cdot L W R^{L - 1} \| \theta - \theta^{\prime} \|_{\infty} \\ \leqslant L W^2 R^{L - 1} \| \theta - \theta^{\prime} \|_{\infty}. \tag{23} \end{array}
+$$
+
+Using Corollary B.2 with $q = 1$ , we deduce that as soon as $\theta, \theta'$ satisfy $\theta_i \theta_i' \geqslant 0$ for every parameter coordinate $i$ , then for every input $x$ :
+
+$$
+\left\| R _ {\theta} (x) - R _ {\theta^ {\prime}} (x) \right\| _ {1} \leqslant \max (\| x \| _ {\infty}, 1) L W ^ {2} R ^ {L - 1} \| \theta - \theta^ {\prime} \| _ {\infty}. \tag {24}
+$$
+
+Now, consider general parameters $\theta$ and $\theta^{\prime}$ . Define $\theta^{\mathrm{inter}}$ to be such that for every parameter coordinate $i$ ,
+
+$$
+\theta_{i}^{\mathrm{inter}} = \left\{ \begin{array}{ll} \theta_{i}' & \text{if } \theta_{i} \theta_{i}' \geqslant 0, \\ 0 & \text{otherwise.} \end{array} \right.
+$$
+
+By definition, it holds for every parameter coordinate $i$ : $\theta_i^{\mathrm{inter}}\theta_i \geqslant 0$ and $\theta_i^{\mathrm{inter}}\theta_i' \geqslant 0$ so we can apply Inequality (24) to the pairs $(\theta, \theta^{\mathrm{inter}})$ and $(\theta^{\mathrm{inter}}, \theta')$ to get:
+
+$$
+\begin{array}{l} \left\| R_{\theta}(x) - R_{\theta'}(x) \right\|_{1} \leqslant \left\| R_{\theta}(x) - R_{\theta^{\mathrm{inter}}}(x) \right\|_{1} + \left\| R_{\theta^{\mathrm{inter}}}(x) - R_{\theta'}(x) \right\|_{1} \\ \leqslant \max (\|x\|_{\infty}, 1) L W^{2} R^{L - 1} \left( \|\theta - \theta^{\mathrm{inter}}\|_{\infty} + \|\theta^{\mathrm{inter}} - \theta'\|_{\infty} \right). \end{array}
+$$
+
+It remains to see that $\| \theta -\theta^{\mathrm{inter}}\|_{\infty} + \| \theta^{\mathrm{inter}} - \theta^{\prime}\|_{\infty} \leqslant 2\| \theta -\theta^{\prime}\|_{\infty}$. Consider a parameter coordinate $i$.
+
+If $\theta_{i}\theta_{i}^{\prime}\geqslant 0$ then $\theta_{i}^{\mathrm{inter}} = \theta_{i}^{\prime}$ and:
+
+$$
+\left| \theta_{i} - \theta_{i}' \right| = \left| \theta_{i} - \theta_{i}^{\mathrm{inter}} \right| + \left| \theta_{i}^{\mathrm{inter}} - \theta_{i}' \right|.
+$$
+
+Otherwise, $\theta_{i}^{\mathrm{inter}} = 0$ and:
+
+$$
+\begin{array}{l} \left| \theta_{i} - \theta_{i}' \right| = \left| \theta_{i} \right| + \left| \theta_{i}' \right| \\ = \left| \theta_{i} - \theta_{i}^{\mathrm{inter}} \right| + \left| \theta_{i}^{\mathrm{inter}} - \theta_{i}' \right|. \end{array}
+$$
+
+This implies $\| \theta - \theta^{\mathrm{inter}} \|_{\infty} = \max_i |\theta_i - \theta_i^{\mathrm{inter}}| \leqslant \max_i |\theta_i - \theta_i^{\mathrm{inter}}| + |\theta_i^{\mathrm{inter}} - \theta_i'| = \| \theta - \theta' \|_{\infty}$ and similarly $\| \theta^{\mathrm{inter}} - \theta' \|_{\infty} \leqslant \| \theta - \theta' \|_{\infty}$ . This yields the desired result:
+
+$$
+\left\| R _ {\theta} (x) - R _ {\theta^ {\prime}} (x) \right\| _ {1} \leqslant \max (\left\| x \right\| _ {\infty}, 1) 2 L W ^ {2} R ^ {L - 1} \left\| \theta - \theta^ {\prime} \right\| _ {\infty}.
+$$
\ No newline at end of file
diff --git a/arescalinginvariantlipschitzboundbasedonpathmetricsformodernrelunetworkparameterizations/images.zip b/arescalinginvariantlipschitzboundbasedonpathmetricsformodernrelunetworkparameterizations/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..dea196da71e1feead1116db2f09a7ad90741f44d
--- /dev/null
+++ b/arescalinginvariantlipschitzboundbasedonpathmetricsformodernrelunetworkparameterizations/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d1258c12b09da3311b658316785785186a9567dadb58bf87973bb21a9cc90b74
+size 1151812
diff --git a/arescalinginvariantlipschitzboundbasedonpathmetricsformodernrelunetworkparameterizations/layout.json b/arescalinginvariantlipschitzboundbasedonpathmetricsformodernrelunetworkparameterizations/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..e40ff7ff0648a808659bf2c4c0d8bef48b00f73b
--- /dev/null
+++ b/arescalinginvariantlipschitzboundbasedonpathmetricsformodernrelunetworkparameterizations/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ead1b401535b3a915c3d9986288150ba750b8e5546da1f59ddfaa1f6c80a9ab3
+size 1543034
diff --git a/arsadaptiverewardscalingformultitaskreinforcementlearning/eb4660e4-8d00-4302-8940-840c8da3757a_content_list.json b/arsadaptiverewardscalingformultitaskreinforcementlearning/eb4660e4-8d00-4302-8940-840c8da3757a_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..59ab8301a08d1d471c920b1217ebb267c039170e
--- /dev/null
+++ b/arsadaptiverewardscalingformultitaskreinforcementlearning/eb4660e4-8d00-4302-8940-840c8da3757a_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1d5251c5409e36da73532b5850e55ca7bffd1f0488e5a7e82b1127c41fc6bfa3
+size 127380
diff --git a/arsadaptiverewardscalingformultitaskreinforcementlearning/eb4660e4-8d00-4302-8940-840c8da3757a_model.json b/arsadaptiverewardscalingformultitaskreinforcementlearning/eb4660e4-8d00-4302-8940-840c8da3757a_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..1c256cf25c79915ec4d3f4e2bd445f6c3328faaa
--- /dev/null
+++ b/arsadaptiverewardscalingformultitaskreinforcementlearning/eb4660e4-8d00-4302-8940-840c8da3757a_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9433d07ec006ad9cc5503589e2c31e7125d617e179db0387f26cf55183dfe8e6
+size 157666
diff --git a/arsadaptiverewardscalingformultitaskreinforcementlearning/eb4660e4-8d00-4302-8940-840c8da3757a_origin.pdf b/arsadaptiverewardscalingformultitaskreinforcementlearning/eb4660e4-8d00-4302-8940-840c8da3757a_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..a7921025498a18c323c59460b0549394267f0437
--- /dev/null
+++ b/arsadaptiverewardscalingformultitaskreinforcementlearning/eb4660e4-8d00-4302-8940-840c8da3757a_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a634570c4b25cbf59e7deb6054264506d7c47c953a965fd5a53e730d270f77bd
+size 1637052
diff --git a/arsadaptiverewardscalingformultitaskreinforcementlearning/full.md b/arsadaptiverewardscalingformultitaskreinforcementlearning/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..76276ab016f4dd920cb2d3ede6431ebef29c616c
--- /dev/null
+++ b/arsadaptiverewardscalingformultitaskreinforcementlearning/full.md
@@ -0,0 +1,544 @@
+# ARS: Adaptive Reward Scaling for Multi-Task Reinforcement Learning
+
+Myungsik Cho1 Jongeui Park1 Jeonghye Kim1 Youngchul Sung1
+
+# Abstract
+
+Multi-task reinforcement learning (RL) encounters significant challenges due to varying task complexities and differing reward distributions across tasks. To address these issues, in this paper, we propose Adaptive Reward Scaling (ARS), a novel framework that dynamically adjusts reward magnitudes and leverages a periodic network reset mechanism. ARS introduces a history-based reward scaling strategy that ensures balanced reward distributions across tasks, enabling stable and efficient training. The reset mechanism complements this approach by mitigating overfitting and ensuring robust convergence. Empirical evaluations on the Meta-World benchmark demonstrate that ARS significantly outperforms baseline methods, achieving superior performance on challenging tasks while maintaining overall learning efficiency. These results validate ARS's effectiveness in tackling diverse multi-task RL problems, paving the way for scalable solutions in complex real-world applications.
+
+# 1. Introduction
+
+In recent years, the field of deep reinforcement learning (RL) has achieved remarkable success in addressing complex control problems, such as mastering Atari games (Hafner et al., 2021; Kapturowski et al., 2023; Schwarzer et al., 2023) and advancing locomotion control (Haarnoja et al., 2018b; Fujimoto et al., 2018; Cetin et al., 2022). Despite these achievements, the field remains limited by a task-specific paradigm that demands substantial data and computational resources to train separate policies for each task. Generalizing a single policy to perform effectively across multiple tasks poses a significant challenge, particularly in domains such as robotic control (Kaufmann et al., 2023; Tang et al., 2024), where mastering diverse skills is essential.
+
+$^{1}$ School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Korea. Correspondence to: Youngchul Sung .
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+Multi-task reinforcement learning (multi-task RL) (Wilson et al., 2007; Zeng et al., 2018; Yang et al., 2020; Sun et al., 2022; Hendawy et al., 2024; Cho et al., 2024) enables a single policy network to handle multiple tasks with shared parameters (Caruana, 1997), improving data efficiency. However, it faces the challenge of negative transfer, where learning one task disrupts others, destabilizing training and limiting scalability (Sun et al., 2020).
+
+In addition to negative transfer, this paper introduces another critical issue in multi-task RL: the significant variation in reward scales across tasks, which can adversely affect overall performance. To explore this issue, we examine the impact in multi-task scenarios of the reward scaling approach studied in single-task settings (Henderson et al., 2018; Wu et al., 2018), which applies a constant scaling factor to rewards. We observed that the varying magnitudes of rewards across tasks cause biases during training, resulting in overfitting on tasks with highly amplified rewards and a decline in average performance across all tasks.
+
+To address these challenges, we propose Adaptive Reward Scaling (ARS), a novel framework that dynamically adjusts reward scaling factors, ensuring balanced reward magnitudes across diverse tasks. ARS is built upon two core components: (1) adaptive reward scaling that adjusts scaling factors by analyzing the distribution of rewards within each task's experience replay buffer (Mnih et al., 2015), and (2) periodic resetting of network parameters to prevent overfitting and improve convergence. This approach ensures that challenging tasks receive adequate emphasis on their rewards, preventing overfitting to simpler tasks with higher rewards and enhancing overall performance. Experiments conducted on the Meta-World benchmark (Yu et al., 2019), which includes 50 robotic manipulation tasks, demonstrate that ARS significantly outperforms baseline methods. Additionally, ARS integrates seamlessly into existing off-policy algorithms with minimal modifications, making it a practical and effective solution for real-world applications.
+
+The primary contributions of this work are:
+
+- The introduction of reward scale variation as a challenge in multi-task RL, demonstrating how fixed reward scaling can lead to biased training, overfitting, and overall performance degradation.
+
+- A novel framework, an adaptive reward scaling mechanism paired with reset strategies, which robustly handles varying reward distributions and enhances the learning process for challenging tasks.
+- Empirical validation on the Meta-World benchmark, demonstrating superior performance and providing detailed insights into the mechanisms driving ARS's effectiveness.
+
+# 2. Preliminaries
+
+# 2.1. Multi-Task Reinforcement Learning
+
+An RL problem is typically represented as a Markov decision process (MDP), defined as $\mathcal{M} = (\mathcal{S},\mathcal{A},\mathcal{P},r,\rho,\gamma)$. Here, $\mathcal{S}$ represents the state space, $\mathcal{A}$ is the action space, $\mathcal{P}:\mathcal{S}\times \mathcal{A}\times \mathcal{S}\rightarrow \mathbb{R}^{+}$ is the transition probability, $r:\mathcal{S}\times \mathcal{A}\to \mathbb{R}$ is the reward function, $\rho :\mathcal{S}\rightarrow \mathbb{R}^{+}$ is the distribution over initial states, and $\gamma \in [0,1)$ is the discount factor. At each time step $t$, the agent observes the state $s_t\in \mathcal{S}$ and selects an action $a_{t}\in \mathcal{A}$ according to the policy $\pi (a_{t}\mid s_{t})$. The environment returns a reward $r_t = r(s_{t},a_{t})$ and transitions to the next state $s_{t + 1}$ according to the transition probability distribution $\mathcal{P}(s_{t + 1}\mid s_t,a_t)$. In single-task RL, the goal is to maximize the expected sum of discounted rewards over a horizon $H$:
+
+$$
+J (\pi) = \mathbb {E} _ {\tau \sim \rho_ {\pi}} \left[ \sum_ {t = 1} ^ {H} \gamma^ {t - 1} r _ {t} \right]. \tag {1}
+$$
+
+In multi-task RL, the focus shifts to optimizing a policy that achieves high performance across a broad range of tasks. Specifically, multi-task RL contains a set of tasks $\mathcal{C} = \{\mathcal{T}_i\}_{i=1}^N$ and a distribution $p(\mathcal{T})$ over these tasks, where each task $\mathcal{T}_i$ is represented by an MDP $\mathcal{M}_i = (\mathcal{S}, \mathcal{A}, \mathcal{P}_i, r_i, \rho_i, \gamma, H)$ . The tasks share common state and action spaces but differ in reward functions, transition dynamics, and initial state distributions. Assuming $p(\mathcal{T})$ is uniform, the goal is to learn a shared policy $\pi$ that maximizes the average return over all tasks. Formally, the objective is:
+
+$$
+\max _ {\pi} \mathbb {E} _ {\mathcal {T} \sim p (\mathcal {T})} [ J (\pi , \mathcal {T}) ], \tag {2}
+$$
+
+where $J(\pi, \mathcal{T})$ denotes the expected sum of discounted rewards for task $\mathcal{T}$ under policy $\pi$ .
+
+In this work, we use a shared policy $\pi_{\theta}(a|s,z)$ , which incorporates a task representation $z$ alongside the state $s$ . We represent each task using a one-hot encoding. To train the policy, we adopt Soft Actor-Critic (SAC) (Haarnoja et al., 2018b) as the underlying algorithm.
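The task-conditioned input described above can be sketched in a few lines (a plain-Python sketch; the dimensions are illustrative, not from the paper):

```python
def one_hot(task_id, num_tasks):
    """One-hot task representation z for task T_i."""
    z = [0.0] * num_tasks
    z[task_id] = 1.0
    return z

def policy_input(state, task_id, num_tasks):
    """Concatenate the state s with the one-hot task encoding z,
    as fed to the shared policy pi_theta(a | s, z)."""
    return list(state) + one_hot(task_id, num_tasks)

x = policy_input([0.2, -0.1], task_id=3, num_tasks=10)
# len(x) == 12: 2 state dimensions + 10 one-hot task dimensions
```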
+
+# 2.2. Replay Buffer
+
+In off-policy RL, the replay buffer (Mnih et al., 2015) improves sample efficiency and training stability. Denoted
+
+as $\mathcal{D}$ , it stores transitions $(s_t, a_t, r_t, s_{t+1})$ collected from the agent's interactions, which are reused during training to stabilize updates to the policy or value function.
+
+To extend this concept to multi-task RL, each task $\mathcal{T}_i$ in the task set $\mathcal{C}$ is assigned a distinct replay buffer $\mathcal{D}_i$ . This design isolates experiences for each task, enabling the agent to adapt effectively to the unique dynamics and reward structures of individual tasks.
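A minimal sketch of this per-task buffer layout (class and method names are ours, not from the paper's code); the per-task mean reward exposed here is exactly the statistic the history-based scaling rule of Section 4 relies on:

```python
import random
from collections import deque

class MultiTaskReplay:
    """One bounded FIFO buffer D_i per task T_i."""
    def __init__(self, num_tasks, capacity=100_000):
        self.buffers = [deque(maxlen=capacity) for _ in range(num_tasks)]

    def add(self, task_id, transition):
        # transition = (s, a, r, s_next)
        self.buffers[task_id].append(transition)

    def sample(self, task_id, batch_size):
        return random.sample(list(self.buffers[task_id]), batch_size)

    def mean_reward(self, task_id):
        buf = self.buffers[task_id]
        return sum(t[2] for t in buf) / len(buf)
```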
+
+# 2.3. Reward Scaling
+
+Reward scaling (Henderson et al., 2018; Wu et al., 2018) is a common preprocessing technique in RL that adjusts reward magnitudes to enhance learning stability and convergence. In single-task RL, the reward signal $r_t$ is transformed using a scaling factor $c^{\mathrm{rew}}$ , helping to mitigate the impact of inconsistent reward magnitudes and enabling more stable updates to the value function or policy.
+
+When applied to multi-task RL, the need for reward scaling becomes even more pronounced due to the variation in reward distributions across tasks. To address these differences, task-specific reward scaling is implemented, where each task's rewards are adjusted using unique scaling factors $\{c_i^{\mathrm{rew}}\}_{i = 1}^N$ . This approach ensures consistency in reward magnitudes across tasks, supporting stable learning.
+
+# 2.4. Parameter Resetting
+
+In single-task deep RL, early learning often leads to overfitting to initial experiences, a phenomenon known as primacy bias, as highlighted by Nikishin et al. (2022). To mitigate this, Nikishin et al. (2022) proposed a method that periodically resets the parameters of the RL agent while preserving the replay buffer. This method leverages the replay buffer's stored experiences to enable the agent to recover quickly from the reset, bypassing the primacy bias. Although resets temporarily reduce performance, the agent rapidly regains its capabilities by focusing on high-quality trajectories acquired later in the learning process, resulting in significant improvements across various environments.
+
+In multi-task deep RL, resetting deep networks offers additional benefits. Beyond counteracting primacy bias, it also addresses biases toward tasks learned early in training. By redistributing focus across tasks, the reset mechanism promotes more balanced learning and enhances performance across the entire task set (Cho et al., 2024).
+
+# 3. Motivation: Reward Scaling in Deep RL
+
+This section introduces a motivating example highlighting the limitations of conventional deep multi-task RL, which assigns an equal reward scale $\{c_i^{\mathrm{rew}} = 1.0\}_{i=1}^{N}$ to all target tasks. The challenge becomes particularly evident when the target tasks have significantly different reward distributions.
+
+# 3.1. Reward Scaling in Single-Task RL
+
+Previous works (Henderson et al., 2018; Wu et al., 2018) demonstrate the effectiveness of reward scaling in single-task RL. To investigate its impact within the Meta-World benchmark (Yu et al., 2019), a widely used environment for evaluating multi-task RL, we conducted independent experiments on four challenging tasks: 'push,' 'pick-place,' 'shelf-place,' and 'basketball.' These experiments utilized the Soft Actor-Critic (SAC) algorithm (Haarnoja et al., 2018b), comparing performance across two reward scaling factors, {1.0, 100.0}, where a reward scaling factor of 1.0 corresponds to the naive SAC method.
+
+As shown in Figure 1, the reward scaling method outperforms the non-scaling approach and ensures successful task completion, demonstrating its effectiveness in single-task RL.
+
+# 3.2. Challenges of Reward Scale in Multi-Task RL
+
+In the previous subsection, we observed that the reward scaling approach effectively supports the training of successful policies in Meta-World. Here, we examine the impact of the reward scaling method on a multi-task RL agent.
+
+# 3.2.1. FAILURES IN CHALLENGING TASKS WITHOUT THE REWARD SCALING APPROACH
+
+We considered the MT10 benchmark, consisting of 10 manipulation tasks in Meta-World, and trained the agent with the SAC-MT algorithm, using one-hot encoding for task identification. Figure 2 (blue) shows the final success ratio per task with SAC-MT. Notably, tasks with IDs 1, 2, and 7 ('push,' 'pick-place,' and 'peg-insert-side') failed to train successfully.
+
+Figure 1. Learning curves of four tasks: 'push,' 'pick-place,' 'shelf-place,' and 'basketball.' With a reward scaling factor of 100, all tasks succeed, whereas without reward scaling, all tasks fail.
+
+Figure 2. Comparison of the final success rate per task in the MT10 benchmark among SAC-MT, SAC-MT with reward scaling applied to all tasks, and SAC-MT with reward scaling applied only to the challenging tasks: 'push,' 'pick-place,' and 'peg-insert-side.'
+
+In order to investigate the failure in these challenging tasks with the SAC-MT algorithm, we focused on the magnitude of the initial reward for each task in the MT10 benchmark. We observed that the reward scale for certain challenging tasks is substantially lower compared to others, differing by a factor of $10 \sim 100$ . This imbalance negatively affected the training of the Q-network for these tasks, as shown in Figure 3. Specifically, the Q-values for these tasks were negative, even though the rewards from Meta-World are always positive. This highlights the challenges of training the Q-network when reward scales are not sufficiently high. More learning curves of Q-values are provided in Appendix D.1.
+
+# 3.2.2. FIXED REWARD SCALING IN MULTI-TASK RL
+
+To further investigate the effects of reward scaling in multi-task RL, we conducted two additional experiments with SAC-MT. In the first, a reward scaling factor of 100 was applied uniformly across all tasks, while in the second, the scaling was selectively applied only to the challenging tasks ('push,' 'pick-place,' and 'peg-insert-side'), leaving the other tasks unscaled. Figure 2 shows the final success ratios achieved with SAC-MT when incorporating reward scaling. Orange represents the scenario where scaling was applied to all tasks, and green denotes the selective scaling approach. In the uniform scaling experiment, Task ID 1 showed progress, but Task IDs 2 and 7 remained unresolved. In contrast, with selective scaling applied only to challenging tasks, Task IDs 1, 2, and 7 were successfully trained.
+
+
+Figure 3. Learning curves of Q-values for two tasks: 'push' and 'pick-place'.
+
+However, this approach caused many other tasks to fail during training, reducing the overall average performance.
+
+This observation highlights a key challenge in multi-task RL: careful reward scaling is essential for handling tasks with diverse reward distributions. Improper scaling can result in overfitting on tasks with highly amplified rewards, failure to learn others, and overall performance decline. To address this challenge, the remainder of this paper introduces a novel approach to effectively adjust reward scaling in multi-task RL, aiming to achieve better overall performance.
+
+# 4. Adaptive Reward Scaling for Multi-Task Reinforcement Learning
+
+To address the issue raised in the previous section regarding the effective handling of varying reward distributions in multi-task RL, we propose the Adaptive Reward Scaling (ARS) algorithm. This approach is specifically designed to address imbalances in reward scales across tasks in multi-task RL, thereby improving overall learning efficiency. The ARS framework comprises two key components: a reset mechanism and a dynamic reward scaling strategy, which work together to optimize training performance.
+
+- History-Based Reward Scaling Strategy: At the core of ARS is a reward scaling strategy that adjusts reward scales for tasks with lower reward magnitudes, ensuring uniform reward scaling across tasks. This approach leverages a metric derived from the distribution of rewards within each task's experience replay buffer, enabling real-time and adaptive reward scaling.
+- Reset Mechanism: ARS integrates a reset mechanism to mitigate biases toward the tasks with highly amplified rewards. Through periodic reinitialization of network parameters while maintaining the replay buffer, this approach enhances adaptability and promotes improved performance across diverse tasks.
+
+The overall structure of the proposed ARS framework is presented in Appendix F. ARS employs two networks: a
+
+# Algorithm 1 Adaptive Reward Scaling (ARS)
+
+1: Initialize policy network $\pi_{\theta}$ , $Q$ -value network $Q_{\psi}$
+2: Initialize replay buffer $\mathcal{D}_i$ for each task $\mathcal{T}_i\in \mathcal{C}$
+3: for $t = 1,2,\ldots ,T_{init}$ do
+4: for all $\mathcal{T}_i\in \mathcal{C}$ do
+5: Interact with the environment of $\mathcal{T}_i$ with a random policy and store data in $\mathcal{D}_i$
+6: end for
+7: end for
+8: Initialize the reward scaling factors $\{c_i^{rew}\}_{i = 1}^N$ using (6)
+9: for $t = T_{init} + 1, T_{init} + 2, \ldots,$ do
+10: for all $\mathcal{T}_i\in \mathcal{C}$ do
+11: Interact with the environment of $\mathcal{T}_i$ with $\pi_{\theta}$ and store data in $\mathcal{D}_i$
+12: end for
+13: Update $\theta$ and $\psi$ , using the data in $\{\mathcal{D}_i\}_{i=1}^N$ and the scaling factors $\{c_i^{rew}\}_{i=1}^N$
+14: if $t\% T_{\mathrm{reset}} = = 0$ then
+15: Update $\{c_i^{rew}\}_{i = 1}^N$ using (6)
+16: Randomly reinitialize $\theta$ and $\psi$
+17: end if
+18: end for
+
+policy distribution $\pi_{\theta}$ and a state-action value function $Q_{\psi}$. Each task $\mathcal{T}_i$ in the task set $\mathcal{C}$ is assigned a separate buffer $\mathcal{D}_i$, maintained independently to store task-specific interactions. The goal is to maximize the sum of each task's objective, i.e., $\sum_{\mathcal{T}_i\in \mathcal{C}}J(\pi_{\theta},\mathcal{T}_i)$. The key components of ARS are further detailed in the following subsections.
+
+# 4.1. History-Based Reward Scaling Strategy
+
+From the observations in the previous section, assigning higher reward scaling factors to complex tasks is necessary. However, using a fixed reward scaling approach is inadequate as shown in Figure 2, as the reward magnitude for tasks with a high scaling factor progressively increases during training, leading to biases toward those tasks. To address this, the reward for each task $\mathcal{T}_i$ should be adaptively scaled.
+
+This raises the question: "How should the scaling factor for each task $\mathcal{T}_i$ be determined in real-time?" To address this, we examine the training objective in standard off-policy multi-task RL implementations. Given a task set $\mathcal{C} = \{\mathcal{T}_i\}_{i=1}^N$ and the corresponding experience replay buffers $\{\mathcal{D}_i\}_{i=1}^N$ , the agent is trained using the following objectives:
+
+$$
+\ell^ {\pi} (\theta) = \frac {1}{| \mathcal {C} |} \sum_ {\mathcal {T} _ {i} \in \mathcal {C}} \mathbb {E} _ {s \sim \mathcal {D} _ {i}} \left[ \ell^ {\pi} (\theta ; s) \right], \tag {3}
+$$
+
+$$
+\ell^ {Q} (\psi) = \frac {1}{| \mathcal {C} |} \sum_ {\mathcal {T} _ {i} \in \mathcal {C}} \mathbb {E} _ {(s, a) \sim \mathcal {D} _ {i}} \left[ \ell^ {Q} \left(\psi ; s, a, r, s ^ {\prime}\right) \right], \tag {4}
+$$
+
+where $\ell^{\pi}(\theta ;s)$ and $\ell^Q (\psi ;s,a,r,s')$ are the per-sample actor and critic loss functions, respectively.
+
+In this objective, the reward scaling factor only influences critic training. For the SAC algorithm, the per-sample critic loss for $(s,a,r,s')\in \mathcal{D}_i$ is expressed as
+
+$$
+\begin{array}{l} \ell^{Q}(\psi; s, a, r, s') = \\ \left[ r + \gamma \left( Q_{\hat{\psi}}\left(s', \pi_{\theta}(s')\right) - \alpha \log \pi_{\theta}(s') \right) - Q_{\psi}(s, a) \right]^{2} \tag{5} \end{array}
+$$
+
+Critic training, therefore, can be viewed as a regression problem, where the targets vary, and the mean reward in the experience replay buffer $\mathcal{D}_i$ plays a crucial role in determining the magnitude of the critic values.
+
+Building on these observations, we define the reward scaling factor $c_i^{\mathrm{rew}}$ for each task $\mathcal{T}_i$ as the following equalizer rule:
+
+$$
+c_{i}^{\mathrm{rew}} = \frac{\max\left(\{\bar{r}_{1}, \dots, \bar{r}_{N}\}\right)}{\bar{r}_{i}} \tag{6}
+$$
+
+where $\bar{r}_i$ is the mean reward for task $\mathcal{T}_i$ within its replay buffer $\mathcal{D}_i$ for $i = 1,\dots ,N$ . This formulation ensures consistent reward magnitudes across tasks, enabling stable critic learning without bias towards easy tasks. Using this adaptive reward scaling factor $c_{i}^{\mathrm{rew}}$ , the critic loss for $(s,a,r,s^{\prime})\in \mathcal{D}_i$ is modified as follows:
+
+$$
+\begin{array}{l} \ell^{Q}(\psi; s, a, r, s') = \\ \left[ c_{i}^{\mathrm{rew}} r + \gamma \left( Q_{\hat{\psi}}\left(s', \pi_{\theta}(s')\right) - \alpha \log \pi_{\theta}(s') \right) - Q_{\psi}(s, a) \right]^{2} \tag{7} \end{array}
+$$
+
+# 4.2. Reset Mechanism
+
+A key issue with the naive adaptive reward scaling approach is that the rewards stored in the experience replay buffer fluctuate throughout training due to the changing reward scaling factor. This variability can destabilize critic training, as the regression target in Equation (7) changes frequently. Additionally, it may introduce biases toward tasks with highly amplified reward scaling factors, ultimately leading to suboptimal performance.
+
+To address this issue, we adopt the reset mechanism proposed by Nikishin et al. (2022), which involves resetting the policy parameters $\theta$ and $Q$ -value function parameters $\psi$ while preserving the experience replay buffers. Furthermore, we adjust the reward scaling factor $c_{i}^{\mathrm{rew}}$ only prior to resets, ensuring stable critic training throughout the process.
+
+To conclude, our 'History-Based Reward Scaling' strategy adaptively scales rewards across tasks of varying complexities, preventing bias toward simpler tasks with larger raw reward magnitudes. However, dynamically changing reward scales can destabilize critic network training. To address this, we employ a Reset Mechanism, which resets the policy and critic networks whenever the reward scales
+
+are updated, while retaining all historical data in the replay buffers. This approach maintains stable critic learning throughout training, effectively balancing performance across a diverse set of tasks. The main contribution of our work is the novel reward scaling framework; thus, any multi-task RL method can be used for practical implementation. The overall process of the proposed method is summarized in Algorithm 1.
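The interplay of the two components, as laid out in Algorithm 1, can be sketched as a generic outer loop (the callables here are placeholders for the actual SAC networks, data collection, and Equation (6) statistics; this is a structural sketch, not the paper's implementation):

```python
def ars_outer_loop(num_steps, t_reset, reinit_params, update, collect, compute_scales):
    """ARS outer loop: c_i^rew is recomputed only at reset boundaries,
    keeping critic targets stable in between; replay buffers survive resets."""
    params = reinit_params()
    scales = compute_scales()          # initialize {c_i^rew} from warm-up data
    resets = 0
    for t in range(1, num_steps + 1):
        collect()                      # one environment interaction per task
        params = update(params, scales)
        if t % t_reset == 0:
            scales = compute_scales()  # adjust c_i^rew just before the reset
            params = reinit_params()   # reinitialize theta, psi; buffers kept
            resets += 1
    return params, resets
```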
+
+# 4.3. Enhancements
+
+Our ARS framework builds upon the SAC-MT algorithm, the simplest baseline for multi-task RL. In the main results, we refer to this combination as ARS. Beyond the default ARS setup, we apply layer normalization (Ba et al., 2016), a well-known technique for stabilizing learning in deep RL (Ball et al., 2023), to both the input and hidden layers of the critic network, and denote this variant as ARS-LN. Building on ARS-LN, we further incorporate Low-Rank Adaptation (LoRA) (Hu et al., 2022) to introduce task-specific parameterization during the later stages of training; specifically, after $75\%$ and $83.3\%$ of the total training steps for MT10 and MT50, respectively. We denote this variant as ARS-LoRA. For efficient adaptation, we use a rank of $r = 8$ for MT10 and $r = 16$ for MT50. Further implementation details regarding the integration of LoRA into ARS are provided in Appendix B.
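For readers unfamiliar with LoRA, the underlying idea is a frozen base weight plus a trainable low-rank update; a generic pure-Python sketch of that idea (not the paper's implementation; helper names and dimensions are ours):

```python
def matmul(A, B):
    """Naive matrix product for small nested-list matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def madd(A, B, scale=1.0):
    """Entrywise A + scale * B."""
    return [[a + scale * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def lora_forward(x, W, A, B, scaling=1.0):
    """y = x @ (W + scaling * (A @ B)): frozen base weight W (d_in x d_out)
    plus a trainable rank-r update A (d_in x r) @ B (r x d_out)."""
    return matmul(x, madd(W, matmul(A, B), scaling))

# With one factor zero-initialized, the adapter starts as an exact no-op.
d_in, d_out, r = 3, 2, 1
W = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
A = [[0.0] for _ in range(d_in)]   # d_in x r, zeros
B = [[0.5, -0.5]]                  # r x d_out
x = [[1.0, 2.0, 3.0]]
assert lora_forward(x, W, A, B) == matmul(x, W)  # [[4.0, 5.0]]
```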
+
+# 5. Experiments
+
+**Benchmarks.** To evaluate the effectiveness of the proposed method on various tasks, we conducted experiments using the Meta-World benchmark (Yu et al., 2019), which includes 50 distinct robotic control tasks involving a Sawyer arm in the MuJoCo environment (Todorov et al., 2012). Our experiments used two setups: MT10 and MT50, which consist of 10 and 50 manipulation tasks, respectively. A detailed description of the benchmarks is provided in Appendix A.
+
+**Baselines.** We compared the proposed method against the following baseline approaches: 1) SAC with Multi-Task (SAC-MT): A shared policy that uses a one-hot task identification encoding along with the current state as input. 2) SAC with Multi-Task Multi-Head (SAC-MT-MH) (Yu et al., 2019): Similar to SAC-MT but incorporates an independent final layer (multi-head) in the policy network for each task. 3) SAC with Soft Modularization (SAC-Soft-Modular) (Yang et al., 2020): A policy architecture utilizing multiple modules with a soft modularization technique that determines a routing strategy for each task. 4) Gradient Surgery for Multi-Task Learning (PCGrad) (Yu et al., 2020): This method addresses conflicting gradients in multi-task learning by projecting gradients onto a shared subspace. 5) Parameter-Compositional Multi-Task Reinforcement Learning (PaCo) (Sun et al., 2022): A multi-task RL approach
+
+Table 1. Comparison of average and per-task success rates (%) on the Meta-World MT10 benchmark. Refer to Appendix A for task names corresponding to each task ID.
+
+| Algorithm | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | Average |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| SAC-MT | 98±2.4 | 0±0.0 | 0±0.0 | 100±0.0 | 100±0.0 | 100±0.0 | 96±2.0 | 0±0.0 | 100±0.0 | 100±0.0 | 69.4±0.8 |
+| SAC-MT-MH | 100±0.0 | 28±21.8 | 0±0.0 | 100±0.0 | 98±2.5 | 100±0.0 | 100±0.0 | 46±21.8 | 100±0.0 | 100±0.0 | 77.2±11.9 |
+| Soft Modular | 100±0.0 | 32±14.9 | 0±0.0 | 100±0.0 | 100±0.0 | 100±0.0 | 100±0.0 | 12±14.9 | 100±0.0 | 100±0.0 | 74.4±10.5 |
+| PCGrad | 94±3.7 | 0±0.0 | 0±0.0 | 100±0.0 | 100±0.0 | 100±0.0 | 100±0.0 | 54±39.9 | 100±0.0 | 100±0.0 | 74.8±13.7 |
+| PaCo | 100±0.0 | 44±25.2 | 0±0.0 | 100±0.0 | 100±0.0 | 100±0.0 | 100±0.0 | 80±40 | 100±0.0 | 100±0.0 | 82.4±14.2 |
+| MOORE | 100±0.0 | 92.5±7.1 | 0±0.0 | 100±0.0 | 100±0.0 | 100±0.0 | 100±0.0 | 67.5±46.5 | 100±0.0 | 100±0.0 | 86.0±4.8 |
+| SMT | 96±3.7 | 62±17.9 | 34±13.8 | 100±0.0 | 100±0.0 | 100±0.0 | 100±0.0 | 76±31.9 | 100±0.0 | 100±0.0 | 86.8±8.6 |
+| ARS (Ours) | 92.5±11.7 | 98.8±3.5 | 86.3±15.1 | 98.8±3.5 | 100±0.0 | 100±0.0 | 100±0.0 | 96.3±7.4 | 100±0.0 | 100±0.0 | 97.3±2.1 |
+| ARS-LN (Ours) | 93.8±7.4 | 98.8±3.5 | 93.8±9.2 | 100±0.0 | 100±0.0 | 100±0.0 | 100±0.0 | 100±0.0 | 100±0.0 | 100±0.0 | 98.6±1.5 |
+| ARS-LoRA (Ours) | 100±0.0 | 97.5±7.1 | 98.8±3.5 | 100±0.0 | 100±0.0 | 100±0.0 | 100±0.0 | 98.8±3.5 | 100±0.0 | 100±0.0 | 99.5±0.8 |
+
+that utilizes parameter compositionality by constructing task-specific parameters from a common pool of shared base parameters. 6) Multi-Task Reinforcement Learning with Mixture of Orthogonal Experts (MOORE) (Hendawy et al., 2024): A neural network architecture utilizing a mixture of experts whose representations are orthogonalized via the Gram-Schmidt process. 7) Scheduled Multi-Task Training for Multi-Task RL (SMT) (Cho et al., 2024): A framework designed to mitigate simplicity bias using a "Hard Tasks First" scheduling scheme combined with a reset mechanism.
+
+Training settings. All methods are trained using 20 million samples in the MT10 benchmark and 100 million samples in the MT50 benchmark. Policy evaluation is based on the success ratio across all tasks, where the success ratio for a specific task is determined by averaging its success rate over 10 episodes with different sampled goals. We report the mean performance along with the standard deviations of policies trained across 8 different random seeds, as summarized in Tables 1 and 2. Details of the hyperparameters used in the experiments can be found in Appendix C.
+
+# 5.1. Results on Meta-World
+
+The results for two benchmarks, MT10 and MT50, are presented in Table 1 and Table 2, respectively. These findings highlight the effectiveness of the ARS algorithm's innovative strategy for multi-task RL, especially its adaptive reward scaling and reset mechanism. Together, these components drive notable performance improvements across a wide range of tasks in the Meta-World benchmark.
+
+Results on MT10. As shown in Table 1, the three variants of our ARS framework achieve outstanding performance, attaining average success rates of $97.3\%$ , $98.6\%$ , and $99.5\%$ , respectively, the highest among all compared methods. Furthermore, ARS-LoRA achieves the best overall performance with a $99.5\%$
+
+Table 2. Comparison of average bottom- $k$ success rates on the Meta-World MT50 benchmark.
+
+| Algorithm | k=10 | k=20 | k=30 | k=40 | k=50 |
+| --- | --- | --- | --- | --- | --- |
+| SAC-MT | 0.0±0.0 | 0.0±0.0 | 14.1±3.1 | 34.5±2.5 | 47.6±2.0 |
+| SAC-MT-MH | 0.0±0.0 | 0.0±0.0 | 14.1±1.7 | 34.0±2.0 | 47.2±1.7 |
+| Soft Modular | 0.0±0.0 | 1.8±3.7 | 23.7±12.3 | 42.6±9.5 | 54.1±7.6 |
+| PCGrad | 0.0±0.0 | 0.0±0.0 | 21.0±12.9 | 39.9±10.7 | 51.9±8.5 |
+| PaCo | 0.0±0.0 | 4.6±8.2 | 26.1±15.0 | 44.6±11.2 | 55.6±9.1 |
+| MOORE | 0.0±0.0 | 15.9±7.0 | 41.2±5.0 | 55.3±3.9 | 64.2±3.1 |
+| SMT | 0.0±0.0 | 8.0±8.9 | 26.8±13.1 | 45.0±9.9 | 56.0±8.0 |
+| ARS (Ours) | 9.1±6.6 | 26.3±6.1 | 45.3±4.7 | 57.5±3.6 | 65.9±2.9 |
+| ARS-LN (Ours) | 21.0±11.4 | 49.1±6.5 | 64.3±6.2 | 72.9±4.7 | 78.3±3.7 |
+| ARS-LoRA (Ours) | 29.3±7.7 | 58.4±4.5 | 72.0±3.0 | 79.0±2.2 | 83.2±1.8 |
+
+success rate, successfully solving every task with at least $97\%$ accuracy. This marks a significant advancement over existing methods and a major breakthrough as the first method to solve the MT10 benchmark through multi-task RL training from scratch. Notably, the ARS framework performs exceptionally well on challenging tasks such as task IDs 1 ('push'), 2 ('pick-place'), and 7 ('peg-insert-side'). These results highlight the effectiveness of ARS's adaptive reward scaling combined with the reset mechanism, which emphasizes difficult tasks and accelerates progress during the early stages of training.
+
+Results on MT50. The MT50 results in Table 2 highlight the consistent effectiveness of ARS, which outperforms other methods, particularly in the average bottom- $k$ success ratios, as suggested by Cho et al. (2024). In particular, in the most challenging subset of tasks (bottom-10), where all baseline methods fail, ARS variants achieve average success rates of $9.1\%$ , $21.0\%$ , and $29.3\%$ , respectively—demonstrating a clear advantage on difficult tasks. Moreover, the use of Layer Normalization and LoRA fine-tuning remarkably improves performance from $65.9\%$ to
+
+Table 3. Results of incorporating the ARS framework into existing multi-task RL methods.
+
+| Benchmark | ARS | SAC-MT | PCGrad | Soft Modular | MOORE |
+| --- | --- | --- | --- | --- | --- |
+| MT10 | w/ ARS | 97.3±2.1 | 95.9±2.6 | 98.8±1.3 | 98.0±0.4 |
+| MT10 | w/o ARS | 69.4±0.8 | 74.8±13.7 | 74.4±10.5 | 86.0±4.8 |
+| MT50 | w/ ARS | 65.9±2.9 | 72.4±2.6 | 75.5±3.6 | 84.6±1.6 |
+| MT50 | w/o ARS | 47.6±2.0 | 51.9±8.5 | 54.1±7.6 | 64.2±3.1 |
+
+$78.3\%$ and $83.2\%$ , respectively, in terms of average success rate across all tasks. This marks a significant advancement over existing methods and a major milestone as the first instance of successfully solving diverse tasks in the MT50 benchmark through multi-task RL training from scratch. Note that although the performance of MOORE (Hendawy et al., 2024) appears competitive with that of our basic ARS, the comparison is not fair: MOORE uses approximately $4\times$ more parameters than our model. To examine the effects of network capacity, we conducted an ablation study on the number of parameters in Section 5.4. With a larger network, ARS-LN achieves an $88.9\%$ average success rate on the MT50 benchmark. Notably, ARS-LN solves all tasks with a success rate of at least $45\%$ except for two, assembly and disassemble, resulting in an average success rate of $92.3\%$ across the remaining 48 tasks.
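+The bottom-$k$ metric reported in Table 2 averages the final success ratios of the $k$ worst-performing tasks. A minimal sketch (the function name is illustrative, not from the paper):
+
+```python
+def bottom_k_success(success_ratios, k):
+    """Average success ratio over the k worst-performing tasks."""
+    worst_k = sorted(success_ratios)[:k]  # ascending: hardest tasks first
+    return sum(worst_k) / k
+
+# Example: average of the two hardest among five hypothetical tasks
+print(bottom_k_success([0.0, 0.2, 0.9, 1.0, 1.0], k=2))  # → 0.1
+```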
+
+To further investigate the advantages of mastering diverse tasks, we analyze the Effective Solvable Task Ratio (ESTR) on the MT50 benchmark. The ESTR is defined as:
+
+$$
+\mathrm{ESTR} = \frac{\left| \left\{ \mathcal{T}_i \in \mathcal{C} \mid \operatorname{EvalFinalSR}\left(\mathcal{T}_i\right) \geq \delta \right\} \right|}{N} \tag{8}
+$$
+
+where $\delta$ is the success ratio threshold and $\operatorname{EvalFinalSR}(\mathcal{T}_i)$ denotes the final average success ratio for task $\mathcal{T}_i$ . A higher ESTR indicates that the agent successfully solves more tasks. We compute the ESTR for threshold values $\delta \in \{0.1, 0.3, 0.5, 0.7\}$ : a task is classified as solvable if its average success ratio is at least $\delta$ , so increasing $\delta$ tightens the criterion and the ESTR decreases accordingly.
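+Written as code, Eq. 8 is a simple counting rule over the per-task final success ratios (names are illustrative):
+
+```python
+def estr(final_success_ratios, delta):
+    """Effective Solvable Task Ratio (Eq. 8): the fraction of tasks whose
+    final average success ratio reaches the threshold delta."""
+    n = len(final_success_ratios)
+    solved = sum(1 for sr in final_success_ratios if sr >= delta)
+    return solved / n
+
+# Four hypothetical tasks evaluated at the thresholds used in Figure 4
+ratios = [0.05, 0.35, 0.6, 0.95]
+print([estr(ratios, d) for d in (0.1, 0.3, 0.5, 0.7)])  # → [0.75, 0.75, 0.5, 0.25]
+```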
+
+As shown in the ESTR results in Figure 4, our ARS framework demonstrates exceptional performance across all threshold values $\delta \in \{0.1, 0.3, 0.5, 0.7\}$ , indicating that ARS successfully solves more tasks than other baselines. Notably, ARS significantly outperforms competing methods as $\delta$ decreases, highlighting its ability to address a broader range of tasks, including those with lower success ratios. In contrast, the baseline methods tend to focus on only a subset of tasks, failing to tackle more challenging ones effectively.
+
+
+Figure 4. Comparison of the Effective Solvable Task Ratio (ESTR) with respect to the threshold $\delta$ in the MT50 benchmark.
+
+# 5.2. Analysis of the Reward Scaling Approach
+
+In this section, we assess the effectiveness of the proposed reward scaling method by analyzing changes in the reward scaling factor $c_{i}^{\mathrm{rew}}$ for each task $\mathcal{T}_i$ . Figure 5 shows the learning curve of the reward scale factors $\{c_i^{\mathrm{rew}}\}$ on a logarithmic scale for the MT10 benchmark. Initially, the agent assigns higher scaling factors to the three difficult tasks 'push,' 'pick-place,' and 'peg-insert-side' due to their low initial reward magnitudes. Over time, these factors gradually become more uniform. This indicates that as tasks with high scaling factors are successfully learned, their mean rewards increase, leading to a reduction in the scaling factors.
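+The exact update rule (Eq. 6) is defined earlier in the paper and not reproduced here; as a rough illustration of the dynamic described above, a mean-equalizing rule of the following form assigns larger factors to tasks with low mean rewards and shrinks them as learning raises the mean. The function name and the choice of the largest per-task mean as the normalization target are assumptions for illustration only:
+
+```python
+def reward_scale_factors(mean_rewards, eps=1e-8):
+    """Illustrative mean-equalizing rule (not the paper's exact Eq. 6):
+    scale each task so its mean reward matches the largest per-task mean,
+    so harder (low-reward) tasks such as 'pick-place' get larger factors."""
+    target = max(mean_rewards)
+    return [target / (m + eps) for m in mean_rewards]
+
+# Hypothetical per-task mean rewards drawn from the replay buffer
+print(reward_scale_factors([10.0, 1.0, 100.0]))  # → roughly [10.0, 100.0, 1.0]
+```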
+
+# 5.3. Integration of the ARS Framework with Off-Policy Multi-Task RL Methods
+
+The proposed ARS framework is a universal solution that can be applied to any off-policy multi-task reinforcement learning (RL) method. To evaluate its applicability, we incorporated the dynamic reward scaling and reset mechanisms of ARS into various off-policy multi-task approaches, including PCGrad, Soft Modular, and MOORE.
+
+Table 3 presents the performance of these algorithms combined with the ARS framework (training curves are provided in Appendix D.3). The results demonstrate that the ARS framework consistently improves all evaluated multi-task RL algorithms on both the MT10 and MT50 benchmarks, highlighting its broad compatibility and effectiveness. In particular, the integration of ARS boosts the average success rate by at least $20\%$ across all off-policy MTRL methods on MT50. Notably, MOORE (Hendawy et al., 2024) integrated with ARS achieves the highest average success rate of $84.6\%$ on the MT50 benchmark, a performance level never
+
+
+Figure 5. Learning curve of the reward scale factor $c_i^{\mathrm{rew}}$ on a logarithmic scale for each task $\mathcal{T}_i$ within the MT10 benchmark. Complex tasks such as 'pick-place,' 'push,' and 'peg-insert-side' have larger reward scaling factors compared to others.
+
+before reached by other multi-task RL approaches.
+
+# 5.4. Ablation Studies
+
+First, we conducted ablation studies to evaluate the two key components of ARS: (1) the adaptive reward scaling scheme and (2) the reset mechanism. To assess their contributions, we performed experiments in which each component was omitted individually, denoted by "w/o." Specifically, 'ARS w/o reward scaling' corresponds to the naive resetting framework proposed by Nikishin et al. (2022).
+
+Table 4 shows the average success ratio for the task set configurations in the MT10 benchmark (training curves are provided in Appendix D.2). Here, $C_{\text{easy}}$ denotes the set of seven tasks with the highest success ratios, while the remaining tasks constitute the difficult set, $C_{\text{difficult}}$ . Removing the reset mechanism or the reward scaling results in performance drops of $27.5\%$ and $27.0\%$ , respectively. The absence of the reset mechanism significantly reduces performance on the easy task set, underscoring its importance for stable training. On the other hand, reward scaling notably enhances performance on the difficult task set, demonstrating its effectiveness in addressing varying reward distributions and ensuring balanced learning across tasks. These observations highlight the significance of both components to the effectiveness of the ARS framework.
+
+Next, to verify the effectiveness of the proposed reward scaling method, we conducted experiments using alternative scaling approaches including mean reward, standard deviation of reward, and normalization. The scaled rewards based on standard deviation $(\hat{r}_i^\sigma)$ and normalization $(\hat{r}_i^{\mathrm{normal}})$ for each task $\mathcal{T}_i$ are defined as $\hat{r}_i^\sigma = \frac{r}{\sigma(r_i)}$ and $\hat{r}_i^{\mathrm{normal}} = \frac{r - \bar{r}_i}{\sigma(r_i)}$ ,
+
+Table 4. Ablation study on the components of ARS.
+
+| Task set | ARS w/o reset | ARS w/o reward scaling | ARS |
+| --- | --- | --- | --- |
+| $C_{\text{easy}}$ | 85.9±11.0 | 100±0.0 | 99.6±1.0 |
+| $C_{\text{difficult}}$ | 32.1±15.6 | 0.8±1.5 | 91.7±7.3 |
+| $C$ | 69.8±8.1 | 70.3±0.5 | 97.3±2.1 |
+
+Table 5. Ablation study on the reward scaling method.
+
+| Task set | Normalization | Standard Deviation | Mean Reward (Ours) |
+| --- | --- | --- | --- |
+| $C_{\text{easy}}$ | 85.9±26.0 | 99.1±2.0 | 99.6±1.0 |
+| $C_{\text{difficult}}$ | 75.0±33.3 | 82.9±22.0 | 91.7±7.3 |
+| $C$ | 82.4±27.9 | 94.3±7.1 | 97.3±2.1 |
+
+respectively. Table 5 presents the average success ratio for each scaling metric. The results show that scaling based on the mean reward consistently outperforms the other approaches, highlighting the effectiveness of our method.
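+The three scaling variants compared in Table 5 can be sketched as follows. The mean-reward branch is a simplified stand-in for our rule (the exact form is given by Eq. 6 earlier in the paper), and the helper name is an assumption:
+
+```python
+import statistics
+
+def scale_reward(r, task_reward_history, method):
+    """Reward scaling variants compared in Table 5 (sketch).
+    Statistics are computed from the task's reward history, which in
+    ARS would come from the replay buffer."""
+    mean = statistics.fmean(task_reward_history)
+    std = statistics.pstdev(task_reward_history)
+    if method == "mean":        # ours (simplified): divide by the mean reward
+        return r / mean
+    if method == "std":         # r / sigma(r_i)
+        return r / std
+    if method == "normalize":   # (r - mean(r_i)) / sigma(r_i)
+        return (r - mean) / std
+    raise ValueError(f"unknown method: {method}")
+
+history = [2.0, 4.0, 6.0]  # hypothetical reward history: mean 4, std ~1.63
+print(scale_reward(5.0, history, "mean"))  # → 1.25
+```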
+
+Finally, we conducted an ablation study to investigate the effect of network capacity. While the MOORE (Hendawy et al., 2024) model integrated with ARS achieves the highest performance on the MT50 benchmark among multi-task methods, a key contributing factor is its larger network size. Each network in MOORE contains approximately 2.0M parameters, whereas our main result in Table 2 uses only 0.5M. To assess the impact of model size, we increased the hidden unit size in our architecture to 512, 800, and 1024. The configuration with 800 hidden units has a parameter count comparable to that of the MOORE model. As shown in Table 6, the performance of ARS improves consistently with increased capacity, whereas SAC-MT (w/o ARS) does not. This highlights ARS as a scalable and effective method for leveraging larger models through reset and dynamic reward scaling. Remarkably, ARS-LN with 1024 hidden units achieves the highest average success rate of $88.9\%$ across all evaluated MTRL methods. Furthermore, despite having similar network capacity, ARS-LN with 800 hidden units outperforms the MOORE model integrated with ARS. This performance gain demonstrates the effectiveness of the layer normalization technique, even though its adoption is significantly simpler compared to the complex model design of MOORE. Additional ablation studies are provided in Appendix E.
+
+# 6. Related Works
+
+Multi-task RL. Multi-task learning has become a key area in machine learning, focusing on algorithms that perform well across various tasks. In the context of RL, it aims
+
+Table 6. Ablation study on the network capacity.
+
+| Algorithm | [400, 400, 400, 400] | [512, 512, 512, 512] | [800, 800, 800, 800] | [1024, 1024, 1024, 1024] |
+| --- | --- | --- | --- | --- |
+| ARS-LN | 78.3±3.7 | 84.2±1.7 | 88.0±2.9 | 88.9±2.6 |
+| ARS | 65.9±2.9 | 73.6±4.4 | 75.1±3.8 | 78.7±2.9 |
+| SAC-MT | 47.6±2.0 | 48.8±2.6 | 47.2±1.7 | 45.5±2.4 |
+
+to develop models capable of handling diverse tasks (Wilson et al., 2007; Pinto & Gupta, 2017; Zeng et al., 2018; Hausman et al., 2018).
+
+Various approaches for multi-task RL. Addressing negative transfer is crucial for the success of multi-task RL, and various strategies have been developed to tackle this issue. (i) Distillation methods leverage policy distillation to merge knowledge from multiple tasks into a unified model but often require task-specific networks, increasing resource demands (Rusu et al., 2016a; Parisotto et al., 2016; Teh et al., 2017). (ii) Modular networks assign distinct modules to tasks with task-specific routing, enabling strategic parameter sharing to reduce interference and negative transfer (Rusu et al., 2016b; Devin et al., 2017; Andreas et al., 2017; Haarnoja et al., 2018a; Yang et al., 2020; Sun et al., 2022; Hendawy et al., 2024). (iii) Gradient-based approaches analyze task gradients to address conflicts but face challenges due to gradient noise and variability (Zhang & Yeung, 2013; Chen et al., 2018; Hu et al., 2019; Yu et al., 2020). (iv) Explicit policy-sharing methods share behaviors or policies across tasks to obtain good samples from different tasks, enabling knowledge transfer without sharing parameters (Zhang et al., 2023; He et al., 2024). (v) Task scheduling methods use a scheduling framework to prioritize and train more effective tasks earlier in the training process (Cho et al., 2024).
+
+In contrast, our method focuses on the reward function. The varying reward distributions make training challenging, so we introduce an adaptive reward scaling scheme to enable stable and efficient training in multi-task RL.
+
+Resetting deep RL agents. In deep RL, primacy bias, or overfitting to early experiences, has been addressed through periodic resets of agent parameters while retaining the replay buffer (Nikishin et al., 2022). Subsequent advancements include reset-frequency modulation (D'Oro et al., 2023), ensemble-based resets (Kim et al., 2023), and refined mechanisms achieving human-level Atari performance (Schwarzer et al., 2023).
+
+Our method extends these ideas to a multi-task RL setting, where resets not only address primacy bias but also mitigate biases toward tasks learned earlier in training.
+
+Reward scaling in deep RL. Early studies showed that reward scaling is one of the key factors for stabilizing training: Henderson et al. (2018) demonstrated that SAC and DDPG are sensitive to this choice, while Wu et al. (2018) proposed Adaptive Network Scaling (ANS), which adjusts a global factor based on return statistics. In multi-task RL, PopArt (Hessel et al., 2019) introduced scale-invariant updates by normalizing critic targets through an adaptive affine transformation, enabling training across 57 Atari games with different reward scales. However, PopArt only normalizes the value head and overlooks the broader impact of reward scaling on deep RL training, which remains critical for solving difficult tasks.
+
+Our ARS differs in that it equalizes the mean reward across tasks using a simple, parameter-free rule based on replay buffer statistics (Eq. 6), automatically assigning higher scaling factors to harder tasks. As demonstrated in Sections 5.1-5.2 and E.3, this lightweight framework consistently outperforms PopArt-style and other normalization methods on MT10 and MT50.
+
+# 7. Conclusion
+
+In this work, we introduced Adaptive Reward Scaling (ARS), a novel framework designed to tackle the critical challenges caused by varying reward distributions. By employing a history-based reward scaling strategy, ARS dynamically adjusts reward magnitudes to balance training focus across diverse tasks. The integration of a reset mechanism further enhances ARS by mitigating biases introduced by early learned tasks, ensuring improved adaptability and convergence. Together, these innovations enable ARS to achieve state-of-the-art performance on challenging benchmarks such as Meta-World, demonstrating its effectiveness in handling diverse and complex task sets. Future work could extend ARS to other multi-task RL algorithms and more diverse domains, and refine its reward scaling and reset strategies for greater efficiency. Investigating alternative approaches to task representation and value normalization, alongside scaling ARS to multi-agent or hierarchical RL frameworks, presents exciting directions for further research. By addressing key limitations in multi-task RL, ARS contributes significantly to advancing the field, paving the way for more scalable and robust solutions to real-world problems.
+
+# Impact Statement
+
+A major challenge in reinforcement learning (RL) is generalization, where a policy trained for one task often struggles to perform effectively on different tasks. Addressing this issue is crucial for training policies that can handle multiple related tasks effectively. In this paper, we introduced a novel reward scaling framework for multi-task RL that improves overall performance. Our approach contributes to the advancement of RL in real-world applications requiring the ability to solve multiple similar tasks.
+
+# Acknowledgments
+
+This work was supported in part by Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.RS-2022-II220124, Development of Artificial Intelligence Technology for Self-Improving Competency-Aware Learning Capabilities, $50\%$ ) and in part by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.RS-2022-II220469, Development of Core Technologies for Task-oriented Reinforcement Learning for Commercialization of Autonomous Drones, $50\%$ )
+
+# References
+
+Andreas, J., Klein, D., and Levine, S. Modular multitask reinforcement learning with policy sketches. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pp. 166-175. PMLR, 2017.
+Ba, L. J., Kiros, J. R., and Hinton, G. E. Layer normalization. CoRR, abs/1607.06450, 2016.
+Ball, P. J., Smith, L. M., Kostrikov, I., and Levine, S. Efficient online reinforcement learning with offline data. In Krause, A., Brunskill, E., Cho, K., Engelhardt, B., Sabato, S., and Scarlett, J. (eds.), International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pp. 1577-1594. PMLR, 2023.
+Caruana, R. Multitask learning. Mach. Learn., 28(1):41-75, 1997.
+Cetin, E., Ball, P. J., Roberts, S. J., and Celiktutan, O. Stabilizing off-policy deep reinforcement learning from pixels. In Chaudhuri, K., Jegelka, S., Song, L., Szepesvári, C., Niu, G., and Sabato, S. (eds.), International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings
+
+of Machine Learning Research, pp. 2784-2810. PMLR, 2022.
+Chen, Z., Badrinarayanan, V., Lee, C., and Rabinovich, A. Gradnorm: Gradient normalization for adaptive loss balancing in deep multitask networks. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pp. 793-802. PMLR, 2018.
+Cho, M., Park, J., Lee, S., and Sung, Y. Hard tasks first: Multi-task reinforcement learning through task scheduling. In *Forty-first International Conference on Machine Learning*, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net, 2024.
+Devin, C., Gupta, A., Darrell, T., Abbeel, P., and Levine, S. Learning modular neural network policies for multi-task and multi-robot transfer. In 2017 IEEE International Conference on Robotics and Automation, ICRA 2017, Singapore, Singapore, May 29 - June 3, 2017, pp. 2169-2176. IEEE, 2017.
+D'Oro, P., Schwarzer, M., Nikishin, E., Bacon, P., Bellemare, M. G., and Courville, A. C. Sample-efficient reinforcement learning by breaking the replay ratio barrier. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023.
+Fujimoto, S., van Hoof, H., and Meger, D. Addressing function approximation error in actor-critic methods. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pp. 1582-1591. PMLR, 2018.
+Haarnoja, T., Pong, V., Zhou, A., Dalal, M., Abbeel, P., and Levine, S. Composable deep reinforcement learning for robotic manipulation. In 2018 IEEE International Conference on Robotics and Automation, ICRA 2018, Brisbane, Australia, May 21-25, 2018, pp. 6244-6251. IEEE, 2018a.
+Haarnoja, T., Zhou, A., Abbeel, P., and Levine, S. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pp. 1856-1865. PMLR, 2018b.
+Hafner, D., Lillicrap, T. P., Norouzi, M., and Ba, J. Mastering atari with discrete world models. In 9th International Conference on Learning Representations, ICLR 2021,
+
+Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021.
+Hausman, K., Springenberg, J. T., Wang, Z., Heess, N., and Riedmiller, M. A. Learning an embedding space for transferable robot skills. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net, 2018.
+He, J., Li, K., Zang, Y., Fu, H., Fu, Q., Xing, J., and Cheng, J. Efficient multi-task reinforcement learning with cross-task policy guidance. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.
+Hendawy, A., Peters, J., and D'Eramo, C. Multi-task reinforcement learning with mixture of orthogonal experts. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net, 2024.
+Henderson, P., Islam, R., Bachman, P., Pineau, J., Precup, D., and Meger, D. Deep reinforcement learning that matters. In McIlraith, S. A. and Weinberger, K. Q. (eds.), Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pp. 3207-3214. AAAI Press, 2018. doi: 10.1609/AAAI.V32I1.11694.
+Hessel, M., Soyer, H., Espeholt, L., Czarnecki, W., Schmitt, S., and van Hasselt, H. Multi-task deep reinforcement learning with popart. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pp. 3796-3803. AAAI Press, 2019. doi: 10.1609/AAAI.V33I01.33013796.
+Hu, E. J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L., and Chen, W. Lora: Low-rank adaptation of large language models. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022.
+Hu, H., Dey, D., Hebert, M., and Bagnell, J. A. Learning anytime predictions in neural networks via adaptive loss balancing. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu,
+
+Hawaii, USA, January 27 - February 1, 2019, pp. 3812-3821. AAAI Press, 2019.
+Kapturowski, S., Campos, V., Jiang, R., Rakicevic, N., van Hasselt, H., Blundell, C., and Badia, A. P. Human-level atari 200x faster. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023.
+Kaufmann, E., Bauersfeld, L., Loquercio, A., Müller, M., Koltun, V., and Scaramuzza, D. Champion-level drone racing using deep reinforcement learning. Nat., 620(7976):982-987, 2023. doi: 10.1038/S41586-023-06419-4.
+Kim, W., Shin, Y., Park, J., and Sung, Y. Sample-efficient and safe deep reinforcement learning via reset deep ensemble agents. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10-16, 2023, 2023.
+Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. In Bengio, Y. and LeCun, Y. (eds.), 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015.
+Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M. A., Fidjeland, A., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., and Hassabis, D. Human-level control through deep reinforcement learning. Nat., 518(7540):529-533, 2015.
+Nikishin, E., Schwarzer, M., D'Oro, P., Bacon, P., and Courville, A. C. The primacy bias in deep reinforcement learning. In International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pp. 16828-16847. PMLR, 2022.
+Parisotto, E., Ba, L. J., and Salakhutdinov, R. Actor-mimic: Deep multitask and transfer reinforcement learning. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, 2016.
+Pinto, L. and Gupta, A. Learning to push by grasping: Using multiple tasks for effective learning. In 2017 IEEE International Conference on Robotics and Automation, ICRA 2017, Singapore, Singapore, May 29 - June 3, 2017, pp. 2161-2168. IEEE, 2017.
+Rusu, A. A., Colmenarejo, S. G., Gülcehre, C., Desjardins, G., Kirkpatrick, J., Pascanu, R., Mnih, V., Kavukcuoglu,
+
+K., and Hadsell, R. Policy distillation. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, 2016a.
+Rusu, A. A., Rabinowitz, N. C., Desjardins, G., Soyer, H., Kirkpatrick, J., Kavukcuoglu, K., Pascanu, R., and Hadsell, R. Progressive neural networks. CoRR, abs/1606.04671, 2016b.
+Schwarzer, M., Ceron, J. S. O., Courville, A., Bellemare, M. G., Agarwal, R., and Castro, P. S. Bigger, better, faster: Human-level atari with human-level efficiency. In International Conference on Machine Learning, pp. 30365-30380. PMLR, 2023.
+Sun, L., Zhang, H., Xu, W., and Tomizuka, M. Paco: Parameter-compositional multi-task reinforcement learning. Advances in Neural Information Processing Systems, 35:21495-21507, 2022.
+Sun, X., Panda, R., Feris, R., and Saenko, K. Adashare: Learning what to share for efficient deep multi-task learning. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.
+Tang, C., Abbatematteo, B., Hu, J., Chandra, R., Martin-Martín, R., and Stone, P. Deep reinforcement learning for robotics: A survey of real-world successes. Annual Review of Control, Robotics, and Autonomous Systems, 8, 2024.
+Teh, Y. W., Bapst, V., Czarnecki, W. M., Quan, J., Kirkpatrick, J., Hadsell, R., Heess, N., and Pascanu, R. Distral: Robust multitask reinforcement learning. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pp. 4496-4506, 2017.
+Todorov, E., Erez, T., and Tassa, Y. Mujoco: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2012, Vilamoura, Algarve, Portugal, October 7-12, 2012, pp. 5026-5033. IEEE, 2012.
+Wilson, A., Fern, A., Ray, S., and Tadepalli, P. Multi-task reinforcement learning: a hierarchical bayesian approach. In Machine Learning, Proceedings of the Twenty-Fourth International Conference (ICML 2007), Corvallis, Oregon, USA, June 20-24, 2007, volume 227 of ACM International Conference Proceeding Series, pp. 1015-1022. ACM, 2007.
+
+Wu, Y.-H., Sun, F.-Y., Chang, Y.-Y., and Lin, S.-D. Ans: adaptive network scaling for deep rectifier reinforcement learning models. arXiv preprint arXiv:1809.02112, 2018.
+Yang, R., Xu, H., Wu, Y., and Wang, X. Multi-task reinforcement learning with soft modularization. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.
+Yu, T., Quillen, D., He, Z., Julian, R., Hausman, K., Finn, C., and Levine, S. Meta-world: A benchmark and evaluation for multi-task and meta reinforcement learning. In 3rd Annual Conference on Robot Learning, CoRL 2019, Osaka, Japan, October 30 - November 1, 2019, Proceedings, volume 100 of Proceedings of Machine Learning Research, pp. 1094-1100. PMLR, 2019.
+Yu, T., Kumar, S., Gupta, A., Levine, S., Hausman, K., and Finn, C. Gradient surgery for multi-task learning. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.
+Zeng, A., Song, S., Welker, S., Lee, J., Rodriguez, A., and Funkhouser, T. A. Learning synergies between pushing and grasping with self-supervised deep reinforcement learning. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2018, Madrid, Spain, October 1-5, 2018, pp. 4238-4245. IEEE, 2018.
+Zhang, G., Jain, A., Hwang, I., Sun, S., and Lim, J. J. Efficient multi-task reinforcement learning via selective behavior sharing. CoRR, abs/2302.00671, 2023. doi: 10.48550/ARXIV.2302.00671.
+Zhang, Y. and Yeung, D. A regularization approach to learning task relationships in multitask learning. ACM Trans. Knowl. Discov. Data, 8(3):12:1-12:31, 2013.
+
+# A. MT10 and MT50 Environments
+
+To evaluate multi-task reinforcement learning methods, we adopt the robotic manipulation tasks provided by the Meta-World benchmark suite (Yu et al., 2019). Meta-World contains 50 distinct tasks and defines two standard benchmarks for multi-task RL: MT10 and MT50, comprising 10 and 50 tasks, respectively. The MT10 benchmark consists of the following 10 tasks with their corresponding task ID numbers: (1) reach, (2) push, (3) pick-place, (4) door-open, (5) drawer-open, (6) drawer-close, (7) button-press-topdown, (8) peg-insert-side, (9) window-open, and (10) window-close.
+
+Performance is measured using the success rate per task. An episode is considered successful if the agent completes the task at least once during the episode, following common evaluation protocols in prior work (Yu et al., 2020; Haarnoja et al., 2018b; Sun et al., 2022; Hendawy et al., 2024; Cho et al., 2024). In addition, we adopt modified versions of MT10 and MT50 as our default evaluation setting, where each task is trained with randomly sampled goal positions and evaluated across 10 randomized goal configurations, consistent with prior studies (Yang et al., 2020; Sun et al., 2022; Hendawy et al., 2024).
+
+# B. Low-Rank Adaptation for Multi-Task RL
+
+Low-Rank Adaptation (LoRA) (Hu et al., 2022) injects a learnable, low-rank update into a frozen weight matrix so that only a small number of additional parameters are trained while the original backbone remains intact. In our experiments we attach LoRA adapters to the actor and critic multilayer perceptrons (MLPs) used in ARS-LN.
+
+Parameterisation. For any linear layer with weights $W_0 \in \mathbb{R}^{d_{\mathrm{out}} \times d_{\mathrm{in}}}$ we introduce two trainable matrices $A \in \mathbb{R}^{d_{\mathrm{out}} \times r}$ and $B \in \mathbb{R}^{r \times d_{\mathrm{in}}}$ of rank $r \ll \min(d_{\mathrm{out}}, d_{\mathrm{in}})$ . The effective weight becomes
+
+$$
+W = W _ {0} + \alpha \frac {A B}{r}, \tag {9}
+$$
+
+where $\alpha$ is a scaling constant set to the LoRA rank (8 for MT10, 16 for MT50). Gradients flow only through $A$ and $B$ ; $W_0$ is frozen throughout.
+
+Adapter placement. We wrap the input projection and all hidden linear layers of both the actor and critic with LoRA. The final output heads are left untouched to preserve output variance accumulated during pre-LoRA training.
+
+Activation schedule. To avoid destabilizing early training, LoRA adapters are disabled (i.e. $A = B = 0$ ) for the first $0.75 \times T$ training steps on MT10 and $0.833 \times T$ steps on MT50, where $T$ is the total number of gradient updates reported in Table 8. From that point onwards the adapters are enabled and trained jointly with the actor and critic optimizer.
+
+Parameter overhead. The additional learnable parameters introduced by LoRA are $r(d_{\mathrm{out}} + d_{\mathrm{in}})$ per adapted layer. With $r = 8$ (MT10) and $r = 16$ (MT50) this overhead amounts to less than $3\%$ of the original network size, providing efficient task-specific specialization without significant memory or compute cost.
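To make the parameterisation concrete, here is a minimal, dependency-free Python sketch of Eq. (9); the class name `LoRALinear` and its methods are illustrative assumptions, not the paper's implementation. It also mirrors the activation schedule above: with $B = 0$ the adapter is a no-op, so the effective weight equals the frozen $W_0$.

```python
import random

class LoRALinear:
    """Sketch of Eq. (9): W = W0 + alpha * (A @ B) / r, with W0 frozen and
    only the low-rank factors A (d_out x r) and B (r x d_in) trainable."""

    def __init__(self, w0, r, alpha, seed=0):
        rng = random.Random(seed)
        self.w0 = w0                                   # frozen base weights
        self.r, self.alpha = r, alpha
        d_out, d_in = len(w0), len(w0[0])
        # Common LoRA init: A small random, B zero, so the adapter starts as a no-op.
        self.A = [[rng.gauss(0.0, 0.01) for _ in range(r)] for _ in range(d_out)]
        self.B = [[0.0] * d_in for _ in range(r)]

    def weight(self):
        # Effective weight W = W0 + (alpha / r) * A @ B.
        scale = self.alpha / self.r
        d_out, d_in = len(self.w0), len(self.w0[0])
        return [[self.w0[i][j] + scale *
                 sum(self.A[i][k] * self.B[k][j] for k in range(self.r))
                 for j in range(d_in)] for i in range(d_out)]

    def extra_params(self):
        # Parameter overhead per adapted layer: r * (d_out + d_in).
        return self.r * (len(self.w0) + len(self.w0[0]))
```

The `extra_params` count matches the stated overhead of $r(d_{\mathrm{out}} + d_{\mathrm{in}})$ per adapted layer.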
+
+# C. Hyperparameters
+
+In this section, we provide the hyperparameters for ARS used in the MT10 and MT50 experiments in Table 7, along with the general hyperparameters shared across ARS and the baselines in Table 8. Specifically, for the Soft Modular method, we adopt the deep structure proposed in (Yang et al., 2020), consisting of 4 layers with 4 modules and a hidden unit size of 128.
+
+Table 7. ARS specific hyperparameters.
+
+| Hyperparameter | MT10 | MT50 |
+| --- | --- | --- |
+| number of resets ($n_{\mathrm{reset}}$) | 4 | 6 |
+
+Table 8. General Multi-task RL hyperparameters.
+
+| Hyperparameter | MT10 | MT50 |
+| --- | --- | --- |
+| training steps | $2 \times 10^7$ | $1 \times 10^8$ |
+| number of resets ($n_{\mathrm{reset}}$) | 4 | 6 |
+| replay buffer size per task | $1 \times 10^6$ | $5 \times 10^5$ |
+| episode length | 500 | 500 |
+| optimizer | Adam (Kingma & Ba, 2015) | Adam (Kingma & Ba, 2015) |
+| batch size per task | 100 | 100 |
+| learning rate (all networks) | 3e-4 | 3e-4 |
+| activation for critic | Tanh | Tanh |
+| activation for actor | ReLU | ReLU |
+| discount factor ($\gamma$) | 0.99 | 0.99 |
+| MLP hidden layer size | [400, 400, 400, 400] | [400, 400, 400, 400] |
+| target network update period | 1 | 1 |
+| tau ($\tau$) | 5e-3 | 5e-3 |
+
+# D. Additional Results
+
+# D.1. Learning Curves of Q-Values with the SAC-MT Algorithm
+
+In addition to Figure 3 in Section 3.2, we plot additional learning curves of Q-values for all tasks in the MT10 benchmark. As shown in Figure 6, the training of the Q-network struggles with the more challenging tasks ('push,' 'pick-place,' and 'peg-insert-side'), whose Q-values remain negative.
+
+Figure 6. Learning curves of Q-values with SAC-MT for all tasks in the MT10 benchmark: (a) reach, (b) push, (c) pick-place, (d) door-open, (e) drawer-open, (f) drawer-close, (g) button-press-topdown, (h) peg-insert-side, (i) window-open, (j) window-close.
+
+# D.2. Learning Curves of Average Success Ratios on MT10 Based on Key ARS Components
+
+Figures 7 and 8 show the average success rates of ARS and its two variants (without reward scaling and without reset) in MT10 experiments. 'ARS w/o reset' exhibits greater variation and instability due to biases from amplified rewards, whereas 'ARS w/o reward scaling' is more stable but underperforms on challenging tasks, such as 'push,' 'pick-place,' and 'peg-insert-side,' due to low reward magnitudes, as shown in Figure 7. The full ARS outperforms both variants, achieving improved stability and success rates, with an average success rate of at least 0.93 across all random seeds.
+
+Figure 7. Learning curves of the success ratio for each task in the MT10 benchmark, based on the key ARS components: (1) the adaptive reward scaling scheme and (2) the reset mechanism. Panels: (a) reach, (b) push, (c) pick-place, (d) door-open, (e) drawer-open, (f) drawer-close, (g) button-press-topdown, (h) peg-insert-side, (i) window-open, (j) window-close.
+
+Figure 8. Learning curves of average success ratios across tasks in the MT10 benchmark, based on the key ARS components: (1) the adaptive reward scaling scheme and (2) the reset mechanism.
+
+# D.3. Learning Curves of Average Success Ratios on MT50
+
+Figure 9 shows that all off-policy multi-task RL methods incorporating our ARS framework surpass the previous state-of-the-art (SOTA) performance (Cho et al., 2024) on the MT50 benchmark, indicated by the dotted line. Even the simplest method, SAC-MT, achieves exceptional results, emphasizing the remarkable effectiveness of the proposed framework. Furthermore, our approach achieves at least the previous SOTA performance using only half the samples, i.e., 50 million.
+
+Figure 10 shows that the performance of ARS improves with capacity, whereas SAC-MT (w/o ARS) does not. This confirms ARS as an effective approach for scaling model size and improving performance via reset and dynamic reward scaling.
+
+
+Figure 9. Learning curves of average success ratios across tasks in the MT50 benchmark using off-policy multi-task RL methods incorporating the ARS framework. The dashed line indicates the previous state-of-the-art performance (Cho et al., 2024).
+
+
+Figure 10. Learning curves of average success ratios across tasks in the MT50 benchmark based on the network capacity.
+
+# E. Additional Ablation Studies
+
+# E.1. Ablation Study on the Hyperparameter $n_{\text{reset}}$
+
+Incorporating our ARS framework into other off-policy multi-task RL algorithms introduces a unique hyperparameter: the number of resets ( $n_{\mathrm{reset}}$ ). To evaluate its impact, we conducted ablation studies on $n_{\mathrm{reset}}$ . Figure 11 shows the average success ratios for different $n_{\mathrm{reset}}$ values. The results show that performance remains robust across various values, though lower $n_{\mathrm{reset}}$ generally achieves better performance. This trend is likely due to the reduced frequency of model updates with higher $n_{\mathrm{reset}}$ , resulting in insufficient updates.
+
+
+Figure 11. Comparison of the final success rate per task in the MT10 benchmark based on the number of resets ( $n_{\mathrm{reset}}$ ).
+
+# E.2. Ablation Study on the Presence of Reset Mechanisms in Actor and Critic Networks
+
+In this subsection, we conduct an ablation study to analyze the impact of reset mechanisms in the actor and critic networks. Table 9 shows the average success ratio based on the reset strategy, showing the advantage of resetting both the actor and critic networks.
+
+Table 9. Ablation study on the Reset Mechanism
+
+| Benchmark | No Reset | Reset Critic Only | Reset Actor & Critic |
+| --- | --- | --- | --- |
+| MT10 | 69.4 ± 0.8 | 96.3 ± 2.3 | 97.3 ± 2.1 |
+
+# E.3. Comparison with PopArt
+
+Prior work, PopArt (Hessel et al., 2019), identifies the issue of varying reward scales and addresses it by introducing scale-invariant updates, which normalize critic targets using an adaptive affine layer. To evaluate the effectiveness of our proposed adaptive reward scaling (ARS) method against PopArt's scale-invariant updates, we integrate the latter into the SAC-MT algorithm and compare it with the ARS framework. We vary the update frequency of the scale-invariant updates over 1, 10, 100, 500. As shown in Table 10, our ARS consistently outperforms PopArt on the MT10 benchmark across all frequencies, demonstrating the effectiveness of the ARS framework.
+
+Table 10. Results of average ratio $(\%)$ on Meta-World MT10 for PopArt Variants
+
+| Benchmark | PopArt (freq. 1) | PopArt (freq. 10) | PopArt (freq. 100) | PopArt (freq. 500) | ARS |
+| --- | --- | --- | --- | --- | --- |
+| MT10 | 67.1 ± 12.8 | 73.3 ± 7.6 | 75.0 ± 6.3 | 71.5 ± 7.8 | 97.3 ± 2.1 |
+
+# E.4. Ablation Study on the Horizon Length
+
+In the default setup of the MetaWorld-v2 (Yu et al., 2019) environments, the horizon length is set to 500, which was used consistently across all our experiments. However, we noticed that the experiments reported in the PaCo and MOORE papers (Sun et al., 2022; Hendawy et al., 2024) were conducted using a horizon length of 150, making a direct comparison between the reported results and our ARS method inappropriate. To address this, we reported the results of PaCo and MOORE with the horizon length set to 500 in the main results (Table 1 & 2 in Section 5.1).
+
+In addition to the results with a horizon length of 500, we compare the proposed method, ARS-LN, with two baselines (PaCo and MOORE) on both the MT10 and MT50 benchmarks using a shorter horizon length of 150. As shown in Table 11, ARS-LN significantly outperforms both baselines, achieving the highest average success rates of $98.4\%$ on MT10 and $91.0\%$ on MT50, respectively. Notably, the scaling law observed with increased network capacity (as described in Section 5.4) still holds in the horizon-150 setting. Moreover, ARS-LN even achieves slightly higher performance with a horizon length of 150 compared to the 500-horizon setting.
+
+Table 11. Comparisons of average success ratio (%) on the Meta-World MT10 and MT50 benchmarks with horizon length 150
+
+| Benchmark | ARS-LN [400, 400, 400, 400] | ARS-LN [800, 800, 800, 800] | ARS-LN [1024, 1024, 1024, 1024] | PaCo | MOORE |
+| --- | --- | --- | --- | --- | --- |
+| MT10 | 98.4 ± 1.5 | - | - | 85.4 ± 4.5 | 88.7 ± 5.6 |
+| MT50 | 83.3 ± 4.3 | 89.8 ± 1.6 | 91.0 ± 2.4 | 57.3 ± 1.3 | 72.9 ± 3.3 |
+
+# F. Overview of the ARS framework
+
+This section provides an overview of the proposed ARS framework. The orange box highlights its key components, including history-based reward scaling, supported by the reset mechanism (green box). It also illustrates that ARS can be seamlessly integrated into any off-policy multi-task RL algorithm with minimal effort.
+
+
+Figure 12. Overview of the ARS framework. The orange box illustrates the history-based reward scaling, supported by a reset mechanism.
\ No newline at end of file
diff --git a/arsadaptiverewardscalingformultitaskreinforcementlearning/images.zip b/arsadaptiverewardscalingformultitaskreinforcementlearning/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..ac7ee6f2e9b1f3ecf1413642147e94afc538a70d
--- /dev/null
+++ b/arsadaptiverewardscalingformultitaskreinforcementlearning/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:caa801ff0c41550814177097497d4f2d92d8b0c9250c02631274e2dda2a1c69b
+size 962156
diff --git a/arsadaptiverewardscalingformultitaskreinforcementlearning/layout.json b/arsadaptiverewardscalingformultitaskreinforcementlearning/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..be26ce792947269fa68145f9ed45dae9c87c48c0
--- /dev/null
+++ b/arsadaptiverewardscalingformultitaskreinforcementlearning/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4585afd95f365afe595c4920d2c427f0f6d8e586085605fdeb90ce402ac62e1e
+size 684847
diff --git a/asampleefficientconditionalindependencetestinthepresenceofdiscretization/09ccb8c1-d998-4275-b443-57abdd19b821_content_list.json b/asampleefficientconditionalindependencetestinthepresenceofdiscretization/09ccb8c1-d998-4275-b443-57abdd19b821_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..c887cdb4165fb28ce0c2f1a01bc6127da9b77bfd
--- /dev/null
+++ b/asampleefficientconditionalindependencetestinthepresenceofdiscretization/09ccb8c1-d998-4275-b443-57abdd19b821_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3bd4e5d16740466fab79c6927084e2e19744ec11b6625b00ce52c1a03cb72f22
+size 195341
diff --git a/asampleefficientconditionalindependencetestinthepresenceofdiscretization/09ccb8c1-d998-4275-b443-57abdd19b821_model.json b/asampleefficientconditionalindependencetestinthepresenceofdiscretization/09ccb8c1-d998-4275-b443-57abdd19b821_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..e2ee174d5a4bd32c816e200f2f0f1d49ee8a0d65
--- /dev/null
+++ b/asampleefficientconditionalindependencetestinthepresenceofdiscretization/09ccb8c1-d998-4275-b443-57abdd19b821_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:780768ccbc370edc44e20750158be3cbe9fedad8daa4aa1c6c819c2da1ba3c63
+size 239472
diff --git a/asampleefficientconditionalindependencetestinthepresenceofdiscretization/09ccb8c1-d998-4275-b443-57abdd19b821_origin.pdf b/asampleefficientconditionalindependencetestinthepresenceofdiscretization/09ccb8c1-d998-4275-b443-57abdd19b821_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..a7d8223e50d1f27653a96cfdbb9d6ae2ed2715cb
--- /dev/null
+++ b/asampleefficientconditionalindependencetestinthepresenceofdiscretization/09ccb8c1-d998-4275-b443-57abdd19b821_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:56dc994e4ba84a5a473b170123c521a16cb242606465a384474fe4c5bea77a71
+size 19024450
diff --git a/asampleefficientconditionalindependencetestinthepresenceofdiscretization/full.md b/asampleefficientconditionalindependencetestinthepresenceofdiscretization/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..7952053eb8bc39aaa337f64041f4079e84766960
--- /dev/null
+++ b/asampleefficientconditionalindependencetestinthepresenceofdiscretization/full.md
@@ -0,0 +1,1045 @@
+# A Sample-Efficient Conditional Independence Test in the Presence of Discretization
+
+Boyang Sun1 Yu Yao2 Xinshuai Dong3 Zongfang Liu1 Tongliang Liu2 Yumou Qiu4 Kun Zhang3
+
+# Abstract
+
+In many real-world scenarios, variables of interest are often represented as discretized values due to measurement limitations. Applying Conditional Independence (CI) tests directly to such discretized data, however, can lead to incorrect conclusions. To address this, recent advancements have sought to infer the correct CI relationship between the latent variables by binarizing the observed data. However, this process inevitably results in a loss of information, which degrades the test's performance. Motivated by this, this paper introduces a sample-efficient CI test that does not rely on the binarization process. We find that the independence relationships of latent continuous variables can be established by addressing an over-identifying restriction problem with the Generalized Method of Moments (GMM). Based on this insight, we derive an appropriate test statistic and establish its asymptotic distribution, correctly reflecting CI, by leveraging node-wise regression. Theoretical findings and empirical results across various datasets demonstrate the superiority and effectiveness of our proposed test. Our code implementation is provided in https://github.com/boyangaaaa/DCT.
+
+# 1. Introduction
+
+Conditional independence tests for discrete variables are fundamental in statistical analysis and widely applied across various disciplines. Traditional methods including the chi-squared test (F.R.S., 2009), the G-test (likelihood ratio test) (McDonald, 2009), and measures based on conditional mutual information (Kubkowski et al., 2021) are well established and extensively used. However, a critical yet often
+
+$^{1}$ Mohamed bin Zayed University of Artificial Intelligence $^{2}$ Sydney AI Centre, The University of Sydney $^{3}$ Carnegie Mellon University $^{4}$ Peking University. Correspondence to: Yumou Qiu, Kun Zhang.
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+Figure 1. Illustration of data generative processes using causal graphical models: (a) fork, (b) and (c) chain. The discretization process maps latent continuous variables (white nodes) to observable discrete variables (gray nodes), denoted with a tilde $(\sim)$.
+
+overlooked issue is whether the analyzed variables are truly discrete or if they are inherently continuous but appear discrete due to measurement limitations.
+
+In many real-world applications, data collection methods impose unavoidable discretization. That is, continuous variables are artificially binned into categories because of constraints in measurement precision. This phenomenon occurs across various fields, including finance (Changsheng & Yongfeng, 2012; Damodaran, 2012), psychology (Mossman et al., 2017; Johnson et al., 2019), and recommendation systems (Sparling & Sen, 2011; Dooms et al., 2013). In these domains, inherently continuous variables (such as stock prices, cognitive ability scores, and user preferences) are frequently transformed into discrete scales, often leading to biases in statistical inference.
+
+When discretization occurs, traditional CI tests can fail to capture the true CI relationship. As shown in Figure 1, where we illustrate the discretization process with a causal graphical model (Pearl, 2000), for any CI test unaware of discretization, although the intent is to test the CI of the latent continuous variables $X_{1}, X_{3}$ given $X_{2}$, what is actually being tested is their discretized counterparts $\tilde{X}_{1}, \tilde{X}_{3}$ given $\tilde{X}_{2}$. According to the faithfulness assumption (Spirtes et al., 2000), we can infer that $X_{1} \perp X_{3} | \{X_{2}\}$ while $\tilde{X}_{1} \not\perp \tilde{X}_{3} | \{\tilde{X}_{2}\}$. This mismatch between the latent continuous variables and their discretized counterparts causes traditional CI tests, when applied to discretized observations, to draw incorrect conclusions about the true CI relationships of the variables of interest.
+
+Recent work, the Discretization-Aware CI Test (DCT) (Sun et al., 2024), has attempted to address this issue by establishing the correct relationship between discretized data and latent variables through binarization of the observed data. While this approach facilitates more accurate CI testing by simplifying the data structure, it inherently leads to a loss of information. The reduction of data to binary form can significantly degrade the performance of CI tests, especially in settings with small sample sizes, where the preservation of information is crucial for reliable statistical inference.
+
+Motivated by the limitations of existing methodologies, this paper aims to introduce a sample-efficient CI test that circumvents the need for binarization, thereby preserving the full richness of the data. Our approach leverages the Generalized Method of Moments (GMM) to address the overidentifying restrictions problem, enabling the estimation of covariance of latent continuous variables without sacrificing information. We then adopt the strategy of DCT to derive test statistics and their asymptotic distribution for CI testing, utilizing nodewise regression (Callot et al., 2019). The paper seeks to contribute as follows:
+
+- We propose Discretization-Aware CI Test with GMM (DCT-GMM), a novel CI test tailored for discretization.
+- We provide a theoretical analysis proving that DCT-GMM is consistent and has lower variance than DCT, making it more sample-efficient (Ziegel, 2002).
+- We empirically demonstrate DCT-GMM's effectiveness and superiority over state-of-the-art CI tests, particularly in small-sample regimes.
+
+# 2. Related Work
+
+Conditional Independence Test Testing for CI is a fundamental concept in statistics, with linear Gaussian models traditionally dominating due to their simplicity and interpretability. These models assume linear dependencies and Gaussian noise, providing closed-form solutions for testing through metrics like partial correlation (Yuan & Lin, 2007; Peterson et al., 2015; Mohan et al., 2012; Ren et al., 2015). However, the linear Gaussian assumption restricts their generality. Recent CI testing advancements leverage kernel methods for nonlinear continuous relationships (Fukumizu et al., 2004). Methods like KCI (Zhang et al., 2012) and RCI (Strobl et al., 2019) analyze partial associations, while KCIP (Doran et al., 2014) employs sample permutations to simulate CI. For discrete variables, $G^2$ (Aliferis et al., 2010) and conditional mutual information (Zhang et al., 2010) are standard tests. A recent advance on permutation-based rank tests, MPRT (Dong et al., 2025), can also be used to test vanishing partial correlation in the presence of discretization.
+
+Prior work DCT (Sun et al., 2024) takes the first step towards a CI test designed specifically for the discretization scenario.
+
+Their approach can be decomposed into three steps: 1. the calculation of the estimated covariance $\hat{\Sigma}$, based on the property that the proportion of both observed variables exceeding their means reflects the underlying covariance, solved using a single equation; 2. showing that the deviation of the covariance matrix, $\hat{\Sigma} - \Sigma^{*}$, follows a multivariate normal distribution, utilizing the Z-estimator (Vaart, 1998); 3. showing that the deviation of the precision matrix, $\hat{\Omega} - \Omega^{*}$, also follows a multivariate normal distribution, utilizing node-wise regression (Callot et al., 2019). However, despite having multiple solvable equations from the discretized observations, only one parameter of interest exists per variable pair, leading to an over-identification issue. Efficiently utilizing all available information is the key challenge, motivating us to explore the usage of GMM.
+
+Generalized Method of Moments GMM (Newey, 2007; Hansen, 1982) is a statistical estimation technique offering a principled solution to the over-identification problem. Suppose $\pmb{\theta}$ denotes a $p\times 1$ parameter vector and $x^i$ denotes the observation of a data sample, where $i\in (1,\dots ,n)$ is the index. Let $f_{i}(\pmb {\theta}) = f(x^{i},\pmb {\theta})$ be an $m\times 1$ vector of functions. For the true parameter $\pmb{\theta}^{*}$ , we have
+
+$$
+\mathbb {E} \left[ f _ {i} \left(\boldsymbol {\theta} ^ {*}\right) \right] = \mathbf {0}.
+$$
+
+Let $\hat{g} (\pmb {\theta}) = \frac{1}{n}\sum_{i = 1}^{n}f_{i}(\pmb {\theta})$ denote the sample average of the $f_{i}(\pmb {\theta})$ , and let $\mathbf{A}$ be an $m\times m$ positive semi-definite weighting matrix. The GMM estimator is given by
+
+$$
+\hat {\boldsymbol {\theta}} = \operatorname * {a r g m i n} _ {\boldsymbol {\theta}} \quad \hat {g} (\boldsymbol {\theta}) ^ {T} \mathbf {A} \hat {g} (\boldsymbol {\theta}),
+$$
+
+providing a framework for valid inference (Newey, 2007). In our case, by properly designing the moment functions and selecting the parameter of interest, GMM can efficiently fulfill the estimation objective while addressing the over-identification issue.
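As a toy illustration of how GMM handles over-identification (two moment conditions, one parameter), the sketch below estimates the mean of $N(\theta, 1)$ data from the moments $\mathbb{E}[X - \theta] = 0$ and $\mathbb{E}[X^2 - \theta^2 - 1] = 0$, with identity weighting $\mathbf{A} = \mathbf{I}$ and a simple grid search; this is an assumed pedagogical example, not the estimator developed in this paper.

```python
import random

def gmm_estimate(xs, grid):
    # Over-identified toy GMM: for data assumed N(theta, 1) we have two
    # moment conditions, E[X - theta] = 0 and E[X^2 - theta^2 - 1] = 0,
    # but a single parameter theta. With A = I, the GMM estimator
    # minimizes g_hat(theta)^T g_hat(theta) over the candidate grid.
    n = len(xs)
    m1 = sum(xs) / n
    m2 = sum(x * x for x in xs) / n

    def objective(t):
        g = (m1 - t, m2 - t * t - 1.0)   # sample moment vector g_hat(t)
        return g[0] ** 2 + g[1] ** 2

    return min(grid, key=objective)

rng = random.Random(1)
data = [rng.gauss(2.0, 1.0) for _ in range(5000)]
theta_hat = gmm_estimate(data, [i / 100 for i in range(0, 401)])
```

Both moments inform the same parameter, so the quadratic form combines them rather than discarding either one.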
+
+# 3. DCT-GMM: Discretization-Aware CI Test with GMM
+
+Notation Throughout this work, we use $X_{j}$ to denote the $j$ -th component of the vector of variables $\mathbf{X} = (X_{1},\ldots ,X_{p})$ with finite observations $\{x_j^1,\dots,x_j^n\}$ . We denote the sample mean given $n$ samples by $\mathbb{E}_n[X_j] = \frac{1}{n}\sum_{i = 1}^{n}x_j^i$ while its true expectation is $\mathbb{E}[X_j]$ . Similarly, the empirical probability is represented by $\mathbb{P}_n$ , and the true probability by $\mathbb{P}$ . For a parameter $\alpha$ , its true value is $\alpha^{*}$ and its estimation is $\hat{\alpha}$ . For a matrix $\mathbf{X}$ , $\mathbf{X}^{-T}$ denotes the transpose of its inverse. We use $\mathbf{X}_{-j}$ to represent all other columns of $\mathbf{X}$ without $X_{j}$ . Similarly, $\mathbf{X}_{-j - j}$ is the submatrix of $\mathbf{X}$ without $j$ th column and $j$ th row, and the $\mathbf{X}_{-jj}$ is the vector of $j$ th column without $j$ th row. For a full notation table, please refer to Appendix A.1.
+
+Problem Setting In this paper, we adopt the same nonparanormal model as DCT (Sun et al., 2024). Specifically,
+
+we consider a set of independent identically distributed (i.i.d.) $p$ -dimensional random discrete variables, denoted as $\tilde{\boldsymbol{X}} = (\tilde{X}_1,\tilde{X}_2,\dots ,\tilde{X}_j,\dots ,\tilde{X}_p)$ . For each discrete variable $\tilde{X}_j$ with finite observations $\{\tilde{x}_j^1,\ldots ,\tilde{x}_j^n\}$ , there exists a corresponding latent Gaussian variable $X_{j}$ . The transformation from $X_{j}$ to $\tilde{X}_{j}$ is governed by an unknown monotone nonlinear function $g_{j}$ and a thresholding function $f_{j}$ . The function $f_{j}\circ g_{j}:\mathcal{X}\to \tilde{\mathcal{X}}$ maps the continuous domain of $X_{j}$ onto the discrete domain $\tilde{\mathcal{X}}_j$ . Specifically, for each variable $X_{j}$ , there exists a finite constant vector $\mathbf{d}_j = [d_{j,1},\dots,d_{j,M - 1}]$ characterized by strictly increasing elements such that
+
+$$
+\tilde {X} _ {j} = f _ {j} \left(g _ {j} \left(X _ {j}\right)\right) = \left\{ \begin{array}{l l} 1 & g _ {j} \left(X _ {j}\right) < d _ {j, 1}, \\ m & d _ {j, m - 1} < g _ {j} \left(X _ {j}\right) < d _ {j, m}, \\ M & g _ {j} \left(X _ {j}\right) > d _ {j, M - 1}. \end{array} \right. \tag {1}
+$$
+
+Equivalently, we can conclude $\tilde{X}_j = m$ , for $g_j^{-1}(d_{j,m-1}) < X_j < g_j^{-1}(d_{j,m})$ , where $m$ is an integer ranging from 1 to $M$ . That is, there exists a finite constant vector $\mathbf{c}_j = [g_j^{-1}(d_{j,1}), \ldots, g_j^{-1}(d_{j,M-1})]$ acting as the "discretization boundary" that partitions $X_j$ into $M$ categories. We refer to $M$ as the cardinality of the discrete variable $\tilde{X}_j$ . Without loss of generality, we assume $\mathbf{X} \sim N(\mathbf{0}, \mathbf{\Sigma})$ with $\boldsymbol{\Sigma} = (\sigma_{j_1 j_2})$ and $\sigma_{jj} = 1$ . That is, we assume the original continuous variables $\mathbf{X}$ follow a multivariate normal distribution with zero mean and unit variance on the diagonal of $\boldsymbol{\Sigma}$ . We provide a detailed discussion regarding the rationality of this assumption and why it incurs no loss of generality in Appendix B.
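Under this model, the thresholding in Eq. (1) amounts to locating a continuous value among the boundaries $\mathbf{c}_j$; a small illustrative Python helper (the name `discretize` is ours):

```python
from bisect import bisect_left

def discretize(x, cuts):
    # Eq. (1): map a continuous value x to a category m in {1, ..., M}
    # given strictly increasing boundaries cuts = [c_1, ..., c_{M-1}].
    return bisect_left(cuts, x) + 1
```

For example, with boundaries `[-1.0, 1.0]` the real line is partitioned into three categories.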
+
+Objective We aim to develop a CI test to infer the correct CI relationship between the latent continuous variables $\mathbf{X} = (X_{1},\ldots ,X_{p})$ , which are the variables of interest, given only their discretized observations $\tilde{\mathbf{X}}$ . By assuming the original continuous variables are linear Gaussian, our objective directly transfers to deducing the statistical inference of the covariance matrix $\boldsymbol {\Sigma} = (\sigma_{j_1j_2})$ for the independence test and the precision matrix $\Omega = \Sigma^{-1} = (\omega_{jk})$ for the CI test (Baba et al., 2004). Specifically, the covariance $\sigma_{j_1j_2} = 0$ indicates that $X_{j_1}\perp X_{j_2}$ , and the precision coefficient $\omega_{jk} = 0$ indicates that $X_{j}\perp X_{k}|X_{-\{jk\}}$ , where $X_{-\{jk\}}$ represents all other variables in $\mathbf{X}$ except $X_{j}$ and $X_{k}$ . Technically, we are interested in two key tasks:
+
+- Estimation: Obtain $\hat{\sigma}_{j_1j_2}$ and $\hat{\omega}_{jk}$ serving as the estimation of the corresponding true parameters with only discretized observations available.
+- Inference: Derive the distribution $\hat{\sigma}_{j_1j_2} - \sigma_{j_1j_2}^*$ for independence test and $\hat{\omega}_{jk} - \omega_{jk}^*$ for CI test.
+
+In the subsequent section, we develop our theoretical framework through three key steps. First, we demonstrate that for any pair of continuous variables $X_{j_1}$ and $X_{j_2}$ with their corresponding discretized observations $\tilde{X}_{j_1}$ and $\tilde{X}_{j_2}$ , we can
+
+effectively construct both the estimator $\hat{\sigma}_{j_1j_2}$ and characterize the distribution of $\hat{\sigma}_{j_1j_2} - \sigma_{j_1j_2}^*$ using GMM. Second, we establish that the nodewise regression parameter $\beta_{j,k}$ serves as an effective surrogate for the precision matrix element $\omega_{jk}$ . Finally, we show the asymptotic normal distribution of $\hat{\beta}_{j,k} - \beta_{j,k}^*$ by analyzing its relationship with the component distributions of $\hat{\sigma}_{j_1j_2} - \sigma_{j_1j_2}^*$ .
+
+# 3.1. GMM for covariance estimation and inference
+
+Estimating discretization boundaries The discretization scheme maps the $X_{j}$ onto a finite set of discrete values according to the discretization boundaries, maintaining the ordinal relationship of the original continuous variable while reducing its resolution. For ease of notation, we denote the augmented discretization boundary $\mathbf{c}_j^* := [c_{j,0}^*, c_{j,1}^*, \ldots, c_{j,M-1}^*, c_{j,M}^*] = [-\infty, g_j^{-1}(d_{j,1}), \ldots, g_j^{-1}(d_{j,M-1}), +\infty]$ . We further denote $\Phi(\cdot)$ as the cumulative distribution function (cdf) of the standard normal distribution. Our available observation consists of binned discrete values. Since $X_{j} \sim N(0,1)$ according to the assumption, we conclude that $\mathbb{P}(\tilde{X}_j = m) = \mathbb{P}(c_{j,m-1}^* < X_j < c_{j,m}^*)$ . That is, the probability of observing a discrete value corresponds to the probability of the original continuous variable falling into a particular region. Although the true probability is not directly accessible, it can be estimated by calculating the sample proportion of the observations within each bin. Specifically, we can obtain the estimation of the discretization boundaries
+
+$$
+\hat {c} _ {j, m} = \Phi^ {- 1} \left(\sum_ {k = 1} ^ {m} \hat {\tau} _ {j, k}\right), \tag {2}
+$$
+
+where $\hat{\tau}_{j,k}$ is the empirical probability defined as $\hat{\tau}_{j,k} := \mathbb{P}_n(\tilde{X}_j = k) = \frac{1}{n}\sum_{i=1}^{n}\mathbb{1}(\tilde{x}_j^i = k)$ , serving as the estimation of the true probability $\tau_{j,k} := \mathbb{P}(\tilde{X}_j = k)$ . The indicator function $\mathbb{1}(condition)$ is 1 if the condition holds true, 0 otherwise. This formulation provides a closed-form solution for estimating the discretization boundaries from observed discrete data.
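Eq. (2) can be computed directly from the discrete observations with the standard normal quantile function; below is a minimal sketch (the helper name `boundary_estimates` is ours), checked on data discretized at the true boundary 0:

```python
import random
from statistics import NormalDist

def boundary_estimates(samples, M):
    # Eq. (2): c_hat_{j,m} = Phi^{-1}(sum_{k <= m} tau_hat_{j,k}), where
    # tau_hat_{j,k} is the empirical proportion of category k in n samples.
    n = len(samples)
    counts = [0] * (M + 1)
    for s in samples:
        counts[s] += 1
    cum, cuts = 0.0, []
    for m in range(1, M):                  # interior boundaries c_1 .. c_{M-1}
        cum += counts[m] / n
        cuts.append(NormalDist().inv_cdf(cum))
    return cuts

# Discretize N(0, 1) draws at the true boundary 0 into two categories.
rng = random.Random(0)
obs = [1 if rng.gauss(0, 1) < 0.0 else 2 for _ in range(20_000)]
c_hat = boundary_estimates(obs, 2)
```

With 20,000 samples the recovered boundary is close to the true value 0, as the closed form suggests.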
+
+Estimate covariance through single equation The challenge lies in estimating the latent covariance $\sigma_{j_1j_2}$ with discretized values. For a pair of continuous variables $X_{j_1}$ and $X_{j_2}$ , the discretization scheme essentially creates a "grid". Each cell represents a specific combination of discretized values for both variables. When we count how many samples fall into one cell, the sample proportion within each cell provides an empirical estimate of the joint probability density, which can be expressed as $\mathbb{P}_n(\tilde{X}_{j_1} = m,\tilde{X}_{j_2} = k)$ , serving as the estimation of the true probability $\mathbb{P}(c_{j_1,m - 1}^* < X_{j_1} < c_{j_1,m}^*,c_{j_2,k - 1}^* < X_{j_2} < c_{j_2,k}^*)$ .
+
+According to our assumption that the latent variables follow a multivariate normal distribution, the true probability
+
+above is given by $\Phi (c_{j_1,m - 1}^*,c_{j_1,m}^*,c_{j_2,k - 1}^*,c_{j_2,k}^*;\sigma_{j_1j_2}^*)$ which is the cdf of a bivariate normal distribution with the true covariance $\sigma_{j_1j_2}^*$ integrated over the rectangular region defined by $[c_{j_1,m - 1}^*,c_{j_1,m}^* ]\times [c_{j_2,k - 1}^*,c_{j_2,k}^* ]$ . For a specific form of the function, please refer to Appendix A.2.
+
+For notational convenience, we define $\hat{\tau}_{j_1j_2,mk} := \mathbb{P}_n(\tilde{X}_{j_1} = m,\tilde{X}_{j_2} = k)$ as the empirical joint probability, and $\tau_{j_1j_2,mk} := \mathbb{P}(\tilde{X}_{j_1} = m,\tilde{X}_{j_2} = k)$ as the true probability. We use $\hat{\tau}_{j_1j_2,mk}^i = \mathbb{1}(\tilde{x}_{j_1}^i = m,\tilde{x}_{j_2}^i = k)$ as the indicator of sample $i$ . The empirical cell density can be easily computed from the observation as $\hat{\tau}_{j_1j_2,mk} = \frac{1}{n}\sum_{i=1}^{n}\hat{\tau}_{j_1j_2,mk}^i$ . The estimated covariance $\hat{\sigma}_{j_1j_2}$ can then be obtained by solving following equation:
+
+$$
+\hat {\tau} _ {j _ {1} j _ {2}, m k} = \Phi \left(\hat {c} _ {j _ {1}, m - 1}, \hat {c} _ {j _ {1}, m}, \hat {c} _ {j _ {2}, k - 1}, \hat {c} _ {j _ {2}, k}; \sigma_ {j _ {1} j _ {2}}\right), \tag {3}
+$$
+
+where $\hat{c}$ can be computed using Eq. (2). We call the equation above a "bridge equation": it provides a direct route to recovering the underlying covariance using only the discretized observations.
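+
+A single bridge equation can be solved by one-dimensional root finding. The sketch below (our own illustration, not the authors' code) uses a corner cell of the grid, whose rectangle probability reduces to the bivariate normal CDF and is strictly increasing in $\sigma$, and checks the solution against the closed form $\Phi(0,0;\rho) = \tfrac{1}{4} + \arcsin(\rho)/(2\pi)$:
+
+```python
+import numpy as np
+from scipy.stats import multivariate_normal
+from scipy.optimize import brentq
+
+def corner_cell_prob(b1, b2, sigma):
+    """P(X1 < b1, X2 < b2): the probability of a corner cell of the grid,
+    i.e. the bivariate normal CDF with correlation sigma."""
+    return multivariate_normal.cdf([b1, b2], cov=[[1.0, sigma], [sigma, 1.0]])
+
+def solve_bridge(tau_hat, b1, b2):
+    """Solve one bridge equation (Eq. (3)) for sigma by root finding;
+    the CDF is strictly increasing in sigma, so brentq applies."""
+    return brentq(lambda s: corner_cell_prob(b1, b2, s) - tau_hat, -0.99, 0.99)
+
+# sanity check against the closed form Phi2(0, 0; rho) = 1/4 + arcsin(rho)/(2*pi)
+true_p = 0.25 + np.arcsin(0.4) / (2 * np.pi)
+sigma_hat = solve_bridge(true_p, 0.0, 0.0)
+print(sigma_hat)   # ≈ 0.4
+```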
+
+However, this formulation presents an overidentification challenge. For any pair of discrete variables $\tilde{X}_{j_1}$ and $\tilde{X}_{j_2}$ with cardinalities $M$ and $K$ respectively, we obtain $M\times K$ distinct cells, each corresponding to its own equation. This results in an overdetermined system with $M\times K$ equations but only one parameter of interest, $\sigma_{j_1j_2}$. This overidentification exposes a key limitation of DCT (Sun et al., 2024): it utilizes only a single equation despite the availability of multiple informative constraints. In the following section, we demonstrate how GMM acts as a principled framework for efficiently leveraging all available information from these multiple equations, thereby offering a more precise solution.
+
+Move from a single equation to multiple equations For a pair of variables $\tilde{X}_{j_1}$ and $\tilde{X}_{j_2}$ with cardinalities $M$ and $K$ respectively, we define the parameters of interest $\pmb{\theta} = (\sigma_{j_1j_2},\mathbf{c}_{j_1},\mathbf{c}_{j_2})\in \mathbb{R}^{M + K - 1}$. Let $f_{i}(\pmb {\theta}) = f(\tilde{x}_{j_{1}}^{i},\tilde{x}_{j_{2}}^{i},\pmb {\theta})\in \mathbb{R}^{MK}$ be the moment function, which takes the form:
+
+$$
+f _ {i} (\boldsymbol {\theta}) = \left( \begin{array}{c} \hat {\tau} _ {j _ {1} j _ {2}, 1 1} ^ {i} - \Phi \left(c _ {j _ {1}, 0}, c _ {j _ {1}, 1}, c _ {j _ {2}, 0}, c _ {j _ {2}, 1}; \sigma_ {j _ {1} j _ {2}}\right) \\ \vdots \\ \hat {\tau} _ {j _ {1} j _ {2}, M K} ^ {i} - \Phi \left(c _ {j _ {1}, M - 1}, c _ {j _ {1}, M}, c _ {j _ {2}, K - 1}, c _ {j _ {2}, K}; \sigma_ {j _ {1} j _ {2}}\right) \end{array} \right). \tag {4}
+$$
+
+For the true parameter $\pmb{\theta}^{*}$, the population moment condition satisfies $\mathbb{E}[f_i(\pmb{\theta}^*)] = 0$; the detailed derivation of this condition can be found in Appendix F.1. Let the sample analogue of the moment condition be $\hat{g} (\pmb {\theta}) = \frac{1}{n}\sum_{i = 1}^{n}f_{i}(\pmb {\theta})$. Given a positive semi-definite weighting matrix $\mathbf{A}\in \mathbb{R}^{MK\times MK}$, the GMM estimator is given by
+
+$$
+\hat {\boldsymbol {\theta}} = \arg \min _ {\boldsymbol {\theta}} \quad \hat {g} (\boldsymbol {\theta}) ^ {T} \mathbf {A} \hat {g} (\boldsymbol {\theta}). \tag {5}
+$$
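+
+The estimator in Eq. (5) can be sketched numerically. The following is a simplified illustration of our own (assuming two 3-level variables and, for brevity, plugging in the Eq. (2) boundary estimates rather than optimizing over them jointly):
+
+```python
+import numpy as np
+from scipy.stats import multivariate_normal, norm
+from scipy.optimize import minimize_scalar
+
+def rect_prob(a1, b1, a2, b2, s):
+    """P(a1 < X1 < b1, a2 < X2 < b2) for a standard bivariate normal with
+    correlation s, via inclusion-exclusion on the joint CDF."""
+    F = lambda x, y: multivariate_normal.cdf([x, y], cov=[[1.0, s], [s, 1.0]])
+    return F(b1, b2) - F(a1, b2) - F(b1, a2) + F(a1, a2)
+
+def g_hat(sigma, c1, c2, tau):
+    """Stacked sample moments of Eq. (4) for two 3-level variables; tau is
+    the 3x3 matrix of empirical cell probabilities. +/-8 stands in for
+    +/-infinity (the normal tail mass beyond is negligible)."""
+    b1, b2 = [-8.0, *c1, 8.0], [-8.0, *c2, 8.0]
+    return np.array([tau[m, k] - rect_prob(b1[m], b1[m + 1], b2[k], b2[k + 1], sigma)
+                     for m in range(3) for k in range(3)])
+
+def gmm_sigma(tau, A):
+    """Minimize the Eq. (5) objective over sigma alone; for brevity the
+    Eq. (2) boundary estimates are plugged in, whereas the full estimator
+    optimizes jointly over theta = (sigma, c_1, c_2)."""
+    c1 = norm.ppf(np.cumsum(tau.sum(axis=1))[:2])   # Eq. (2) boundaries
+    c2 = norm.ppf(np.cumsum(tau.sum(axis=0))[:2])
+    def obj(s):
+        g = g_hat(s, c1, c2, tau)
+        return g @ A @ g
+    return minimize_scalar(obj, bounds=(-0.95, 0.95), method="bounded").x
+
+# recover sigma = 0.3 from discretized draws of a latent bivariate normal
+rng = np.random.default_rng(1)
+X = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.3], [0.3, 1.0]], size=5000)
+D1, D2 = np.digitize(X[:, 0], [-0.4, 0.6]), np.digitize(X[:, 1], [-0.4, 0.6])
+tau = np.array([[np.mean((D1 == m) & (D2 == k)) for k in range(3)]
+                for m in range(3)])
+sigma_hat = gmm_sigma(tau, np.eye(9))   # identity weighting (one-step GMM)
+print(sigma_hat)   # close to 0.3
+```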
+
+This formulation leverages all $M \times K$ moment functions simultaneously to obtain the estimate $\hat{\pmb{\theta}}$, efficiently utilizing the available information in the discretized observations. The estimated covariance $\hat{\sigma}_{j_1j_2}$ is simply the first element of $\hat{\pmb{\theta}}$. The next question is how to characterize the distribution of $\hat{\sigma}_{j_1j_2} - \sigma_{j_1j_2}^*$. To this end, we propose the following theorem:
+
+Theorem 3.1. Under the null hypothesis that the original continuous variables satisfy $X_{j_1} \perp X_{j_2}$, with the moment function $f_i(\pmb{\theta})$ defined as in Eq. (4), as the number of samples $n \to +\infty$, the estimator $\hat{\sigma}_{j_1j_2}$ is asymptotically normally distributed:
+
+$$
+\sqrt {n} \left(\hat {\sigma} _ {j _ {1} j _ {2}} - \sigma_ {j _ {1} j _ {2}} ^ {*}\right) = - \frac {1}{\sqrt{n}} \sum_ {i = 1} ^ {n} \left[ \left(\hat {\mathbf {G}} ^ {T} \mathbf {A} \hat {\mathbf {G}}\right) ^ {- 1} \hat {\mathbf {G}} ^ {T} \mathbf {A} f _ {i} \left(\boldsymbol {\theta} ^ {*}\right) \right] _ {1}, \tag {6}
+$$
+
+which converges in distribution to $N(0,\mathbf{V}_{11})$, where $\mathbf{V} = (\mathbf{G}^T\mathbf{A}\mathbf{G})^{-1}\mathbf{G}^T\mathbf{A}\mathbf{S}\mathbf{A}\mathbf{G}(\mathbf{G}^T\mathbf{A}\mathbf{G})^{-1}$ and $\mathbf{V}_{11}$ is its $(1,1)$ entry,
+- $\mathbf{G} = \mathbb{E}\left[\frac{\partial f_i(\boldsymbol{\theta}^*)}{\partial\boldsymbol{\theta}^*}\right]$ is the expectation of the Jacobian matrix of the moment function at true parameter $\boldsymbol{\theta}^*$ ,
+- $\hat{\mathbf{G}} = \mathbb{E}_n\left[\frac{\partial f_i(\hat{\pmb{\theta}})}{\partial\hat{\pmb{\theta}}}\right]$ is the sample average of the Jacobian matrix of the moment function at the estimated parameter $\hat{\pmb{\theta}}$,
+- $\mathbf{S} = \mathbb{E}[f_i(\pmb{\theta}^*) f_i(\pmb{\theta}^*)^T]$ is the covariance matrix of the moment function $f_i(\pmb{\theta}^*)$ .
+
+The detailed derivation can be found in Appendix F.2. Since we never have access to the true parameters $\pmb{\theta}^{*}$ or the true expectation $\mathbb{E}[\cdot]$, in practice we plug in their estimates $\mathbb{E}_n\left[\frac{\partial f_i(\hat{\pmb{\theta}})}{\partial\pmb{\theta}}\right]$ and $\mathbb{E}_n[f_i(\hat{\pmb{\theta}})f_i(\hat{\pmb{\theta}})^T]$ to calculate the variance of the asymptotic distribution. For the weighting matrix $\mathbf{A}$, any positive semi-definite matrix is theoretically applicable in the theorem above; a common choice is the identity matrix.
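+
+With plug-in estimates of $\mathbf{G}$ and $\mathbf{S}$ and a chosen $\mathbf{A}$, the asymptotic covariance of Theorem 3.1 is a direct matrix computation. A minimal sketch with stand-in matrices (shapes only, not estimates from real data):
+
+```python
+import numpy as np
+
+def sandwich_variance(G, A, S):
+    """Plug-in asymptotic covariance of Theorem 3.1:
+    V = (G^T A G)^{-1} G^T A S A G (G^T A G)^{-1}."""
+    GAG_inv = np.linalg.inv(G.T @ A @ G)
+    return GAG_inv @ G.T @ A @ S @ A @ G @ GAG_inv
+
+# toy shapes: MK = 9 moment functions, M + K - 1 = 5 parameters
+rng = np.random.default_rng(0)
+G = rng.standard_normal((9, 5))          # stand-in for the Jacobian G_hat
+R = rng.standard_normal((9, 9))
+S = R @ R.T + 1e-3 * np.eye(9)           # a valid (PSD) moment covariance S_hat
+V = sandwich_variance(G, np.eye(9), S)   # A = identity (one-step GMM)
+print(V[0, 0])   # V_11: asymptotic variance of sqrt(n)(sigma_hat - sigma*)
+```
+
+Note that with $\mathbf{A} = \mathbf{S}^{-1}$ the sandwich collapses to $(\mathbf{G}^T\mathbf{S}^{-1}\mathbf{G})^{-1}$, which is the efficient choice discussed next.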
+
+Two-step GMM Clearly, the choice of weighting matrix $\mathbf{A}$ plays an important role in determining the statistical properties of the GMM estimator. Specifically, $\mathbf{A}$ directly influences the variance of the distribution of $\hat{\sigma}_{j_1j_2} - \sigma_{j_1j_2}^*$. The question is how to choose $\mathbf{A}$ to minimize the asymptotic variance of the estimator. According to the classical theory of GMM (Hansen, 1982), we have the following lemma:
+
+Lemma 3.2. Suppose the choice of $\mathbf{A} \xrightarrow{p} \mathbf{S}^{-1}$ , where $\mathbf{S} = \mathbb{E}[f_i(\boldsymbol{\theta}^*) f_i(\boldsymbol{\theta}^*)^T]$ is the covariance matrix of $f_i(\boldsymbol{\theta}^*)$ for the true parameters $\boldsymbol{\theta}^*$ , then
+
+$$
+\sqrt {n} \left(\hat {\sigma} _ {j _ {1} j _ {2}} - \sigma_ {j _ {1} j _ {2}} ^ {*}\right) = - \frac {1}{\sqrt{n}} \sum_ {i = 1} ^ {n} \left[ \left(\hat {\mathbf {G}} ^ {T} \mathbf {A} \hat {\mathbf {G}}\right) ^ {- 1} \hat {\mathbf {G}} ^ {T} \mathbf {A} f _ {i} \left(\boldsymbol {\theta} ^ {*}\right) \right] _ {1}, \tag {7}
+$$
+
+will converge in distribution to $N(0, \mathbf{V}_{11})$, where $\mathbf{V} = (\mathbf{G}^T \mathbf{S}^{-1} \mathbf{G})^{-1}$ is strictly smaller, in the positive semi-definite sense, than the asymptotic covariance matrix of the one-step GMM estimator given in Theorem 3.1. Here, $\mathbf{G}$ and $\hat{\mathbf{G}}$ have the same definitions as in Theorem 3.1.
+
+The detailed derivation is provided in Appendix F.3. In practice, the procedure begins by estimating the parameter of interest using a predefined weighting matrix, such as the identity matrix. Next, the covariance of the moment functions at this preliminary estimate is used to construct the optimal weighting matrix, with which the final GMM estimator efficiently re-estimates the parameters. This two-step GMM approach is a well-established technique for achieving asymptotic efficiency, and its superiority is empirically validated in Section 4.
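+
+The two-step recipe itself is model-agnostic. The sketch below applies it to a hypothetical over-identified toy model of our own (the mean of a unit-variance normal, with two moment conditions), purely to illustrate the mechanics:
+
+```python
+import numpy as np
+from scipy.optimize import minimize_scalar
+
+def two_step_gmm(x):
+    """Two-step GMM on a toy over-identified model: x_i ~ N(theta, 1) with
+    moments f_i(theta) = (x_i - theta, x_i^2 - theta^2 - 1). Step 1 uses
+    the identity weight; step 2 re-estimates with A = S_hat^{-1}."""
+    def f(theta):                                  # n x 2 matrix of moments
+        return np.column_stack([x - theta, x**2 - theta**2 - 1.0])
+    def estimate(A):
+        def obj(t):
+            g = f(t).mean(axis=0)
+            return g @ A @ g
+        # search over positive theta only, so the toy objective is unimodal
+        return minimize_scalar(obj, bounds=(0.0, 10.0), method="bounded").x
+    theta1 = estimate(np.eye(2))                   # step 1: identity weighting
+    S_hat = f(theta1).T @ f(theta1) / len(x)       # E_n[f f^T] at the step-1 fit
+    return estimate(np.linalg.inv(S_hat))          # step 2: optimal weighting
+
+rng = np.random.default_rng(2)
+x = rng.normal(1.5, 1.0, size=4000)
+theta_hat = two_step_gmm(x)
+print(theta_hat)   # close to 1.5
+```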
+
+The estimated covariance $\hat{\sigma}_{j_1j_2}$ serves as an indicator of unconditional independence. Following the framework proposed by Sun et al. (2024), we build a CI test on top of it utilizing nodewise regression.
+
+# 3.2. Nodewise regression for constructing CI test
+
+In this subsection, we follow (Sun et al., 2024) and leverage the nodewise regression to derive the CI test. For completeness, we present the main results here and refer to the original paper for a detailed treatment. The practical implementation is also discussed at the end of this subsection.
+
+By assuming that $X$ follows a multivariate normal distribution, constructing a CI test is equivalent to (1) computing $\hat{\omega}_{jk}$ from the discretized observations, and (2) characterizing the distribution of $\hat{\omega}_{jk} - \omega_{jk}^{*}$. Targeting both tasks, we leverage nodewise regression, which shows that:
+
+- The regression parameter $\beta_{j,k}$ can serve as an effective surrogate for testing the null hypothesis that $X_{j} \perp X_{k} | X_{-\{jk\}}$ , i.e., $\omega_{jk} = 0$ .
+- The quantity $\hat{\beta}_{j,k} - \beta_{j,k}^{*}$ can be expressed as a linear combination of $\hat{\sigma}_{j_1j_2} - \sigma_{j_1j_2}^*$, which allows us to derive its distribution and thus facilitates the CI test.
+
+The following lemma establishes the key properties of nodewise regression that support this approach.
+
+Lemma 3.3 (Nodewise Regression Properties). (Sun et al., 2024) Let $\mathbf{X} = (X_{1},\ldots ,X_{p})\sim N(0,\boldsymbol{\Sigma})$ be a $p$-dimensional multivariate normal variable with covariance matrix $\boldsymbol{\Sigma}$ and precision matrix $\boldsymbol{\Omega} = \boldsymbol{\Sigma}^{-1} = (\omega_{jk})_{1\leq j,k\leq p}$. For any $j\in \{1,\dots ,p\}$, consider the nodewise regression in which $X_{j}$ is regressed on all other variables:
+
+$$
+X _ {j} = \sum_ {k \neq j} X _ {k} \beta_ {j, k} + \epsilon_ {j},
+$$
+
+where $\beta_{j,k}$ is the regression coefficient of $X_{k}$ in predicting $X_{j}$ , $\beta_{j} = (\beta_{j,k})_{k\neq j}\in \mathbb{R}^{p - 1}$ is the vector of all coefficients, and $\epsilon_{j}$ is the residual term. Then the following relationships hold:
+
+$$
+\begin{aligned}
+\beta_{j,k} &= -\frac{\omega_{jk}}{\omega_{jj}}, \quad j \neq k, \\
+\beta_{j} &= \boldsymbol{\Sigma}_{-j-j}^{-1}\boldsymbol{\Sigma}_{-jj} \in \mathbb{R}^{p-1}.
+\end{aligned} \tag{8}
+$$
+
+The derivation can be found in Appendix F.4.1. The first row of Eq. (8) indicates that $\beta_{j,k}$ is a scaled version of $\omega_{jk}$. Since $\omega_{jj}$ can never be zero due to the positive definiteness of $\boldsymbol{\Omega}$, testing whether $\beta_{j,k} = 0$ is exactly equivalent to testing $\omega_{jk} = 0$; thus, $\beta_{j,k}$ serves as an effective surrogate for $\omega_{jk}$. Our focus now shifts to calculating $\hat{\beta}_{j,k}$ and deriving the distribution of $\hat{\beta}_{j,k} - \beta_{j,k}^{*}$.
+
+We further note that the second row of Eq. (8) establishes a consistent relationship between $\beta_{j}$ and the covariance matrix $\boldsymbol{\Sigma}$. We can therefore estimate it as $\hat{\beta}_{j} = (\hat{\beta}_{j,k})_{k\neq j} = \hat{\boldsymbol{\Sigma}}_{-j - j}^{-1}\hat{\boldsymbol{\Sigma}}_{-jj}$, where the estimated covariance terms are obtained by solving Eq. (5).
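+
+Both identities in Eq. (8) can be checked numerically on any positive-definite covariance matrix; a minimal sketch:
+
+```python
+import numpy as np
+
+# Numerical check of Lemma 3.3: the nodewise coefficients equal both
+# Sigma_{-j-j}^{-1} Sigma_{-jj} and -omega_{jk} / omega_{jj}.
+rng = np.random.default_rng(3)
+p = 5
+L = rng.standard_normal((p, p))
+Sigma = L @ L.T + p * np.eye(p)      # positive-definite covariance
+Omega = np.linalg.inv(Sigma)         # precision matrix
+
+j = 2
+mask = np.arange(p) != j
+beta_j = np.linalg.solve(Sigma[np.ix_(mask, mask)], Sigma[mask, j])
+beta_from_omega = -Omega[mask, j] / Omega[j, j]
+print(np.allclose(beta_j, beta_from_omega))   # True
+```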
+
+Statistical Inference for $\beta_{j,k}$ Nodewise regression shifts the parameter of interest from $\omega_{jk}$ to $\beta_{j,k}$. With the estimation of $\beta_{j,k}$ effectively solved, the next question is how to characterize the distribution of $\hat{\beta}_{j,k} - \beta_{j,k}^{*}$. Fortunately, we have already established the asymptotic distribution of $\hat{\sigma}_{j_1j_2} - \sigma_{j_1j_2}^*$. Therefore, if we can express $\hat{\beta}_j - \beta_j^*$ as a linear combination of the terms $\hat{\sigma}_{j_1j_2} - \sigma_{j_1j_2}^*$, the problem is readily solved, as $\hat{\beta}_j - \beta_j^*$ will then be a linear combination of dependent, asymptotically normal random variables. The underlying relationship is as follows:
+
+$$
+\hat {\beta} _ {j} - \beta_ {j} ^ {*} = - \hat {\Sigma} _ {- j - j} ^ {- 1} \left(\left(\hat {\Sigma} _ {- j - j} - \Sigma_ {- j - j} ^ {*}\right) \beta_ {j} ^ {*} - \left(\hat {\Sigma} _ {- j j} - \Sigma_ {- j j} ^ {*}\right)\right). \tag {9}
+$$
+
+The derivation of this result is provided in Appendix F.4.2. For notational convenience, we express the difference between the estimated and true covariances as:
+
+$$
+\hat {\sigma} _ {j _ {1} j _ {2}} - \sigma_ {j _ {1} j _ {2}} ^ {*} = \frac {1}{n} \sum_ {i = 1} ^ {n} \xi_ {j _ {1} j _ {2}} ^ {i}, \tag {10}
+$$
+
+where the specific form of $\xi_{j_1j_2}^i$ is given in Theorem 3.1 and Lemma 3.2. We further write $\hat{\Sigma}_{-j - j} - \Sigma_{-j - j} = \frac{1}{n}\sum_{i = 1}^{n}\Xi_{-j - j}^i$ and $\hat{\Sigma}_{-jj} - \Sigma_{-jj} = \frac{1}{n}\sum_{i = 1}^{n}\Xi_{-jj}^i$, where $\Xi_{-j-j}^i$ and $\Xi_{-jj}^i$ are the matrix forms of $\xi_{j_1j_2}^i$.
+
+Conditional Independence Test We apply the following theorem for the CI test, with the proof provided in Appendix F.4.2 for completeness:
+
+Theorem 3.4. (Sun et al., 2024) Under the null hypothesis that $X_{j}$ and $X_{k}$ are conditionally independent given a set of variables $\mathbf{X}_{-\{jk\}}$, i.e., $\beta_{j,k} = 0$, the statistic
+
+$$
+\hat {\beta} _ {j, k} = \left(\hat {\Sigma} _ {- j - j} ^ {- 1} \hat {\Sigma} _ {- j j}\right) _ {[ k ]}, \tag {11}
+$$
+
+where $[k]$ denotes the element corresponding to the variable $X_{k}$ in $\hat{\Sigma}_{-j - j}^{-1}\hat{\Sigma}_{-jj}$ , has the asymptotic distribution:
+
+$$
+\sqrt{n}\left(\hat{\beta}_{j,k} - \beta_{j,k}^{*}\right) \stackrel{d}{\to} N(0, \mathbf{V}),
+$$
+
+where
+
+- $\mathbf{V} = \boldsymbol{a}^{[k]T}\left(\frac{1}{n}\sum_{i=1}^{n}\mathrm{vec}(\mathbf{B}^{i})\,\mathrm{vec}(\mathbf{B}^{i})^{T}\right)\boldsymbol{a}^{[k]}$,
+- $\mathbf{B}^{i} = \begin{bmatrix} \boldsymbol{\Xi}_{-jj}^{i\,T} \\ \boldsymbol{\Xi}_{-j-j}^{i} \end{bmatrix}$, and $\tilde{\beta}_{j}$ is $\beta_{j}^{*}$ with its component $\beta_{j,k}^{*}$ set to $0$,
+- $\boldsymbol{a}^{[k]} = \begin{bmatrix} -(\hat{\boldsymbol{\Sigma}}_{-j-j}^{-1})_{[k],:}^{T} \\ \mathrm{vec}\left((\hat{\boldsymbol{\Sigma}}_{-j-j}^{-1})_{[k],:}^{T}\tilde{\boldsymbol{\beta}}_{j}^{T}\right) \end{bmatrix}$, where $\mathrm{vec}(\cdot)$ denotes the row-wise vectorization of a matrix and $(\hat{\boldsymbol{\Sigma}}_{-j-j}^{-1})_{[k],:}$ denotes the row of $\hat{\boldsymbol{\Sigma}}_{-j-j}^{-1}$ corresponding to $X_{k}$.
+
+Figure 2. Comparison of Type I and Type II error (1 − power) for discretized observations. DCT-GMM_one uses one-step GMM with $\mathbf{A}$ set to the identity matrix, and DCT-GMM_two uses two-step GMM with $\mathbf{A}$ set to the inverse of the sample covariance of the moment functions.
+
+Practical implementation for DCT-GMM In the practical implementation, we estimate the regression parameter $\hat{\beta}_j$ and set $\hat{\beta}_{j,k} = 0$ as a substitute for $\tilde{\beta}_j$ to calculate the variance and conduct the CI test. Specifically, we derive $\hat{\beta}_{j,k}$ using Eq. (11), where the estimated covariance terms are calculated via GMM. As discussed in Section 3.1, we consider two GMM estimators: the naive GMM (also called one-step GMM, i.e., without a carefully designed $\mathbf{A}$) and the two-step GMM. In this paper, we empirically validate the effectiveness of both, where $\mathbf{A}$ is chosen as the identity matrix for the one-step GMM and as the inverse of $\mathbb{E}_n[f_i(\hat{\pmb{\theta}})f_i(\hat{\pmb{\theta}})^T]$ for the two-step GMM, following Lemma 3.2. The pseudo-code of both approaches is provided in Appendix C.
+
+Under the null hypothesis of conditional independence $(\beta_{j,k} = 0)$, we substitute the calculated $\hat{\beta}_{j,k}$ into the distribution defined in Theorem 3.4 to obtain the p-value. Statistical inference follows the standard hypothesis testing approach: if the p-value is less than the predefined significance level $\alpha$ (typically 0.05), we conclude that the tested pair is conditionally dependent; if the p-value exceeds $\alpha$, we fail to reject the null hypothesis and conclude that the tested pair is conditionally independent.
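+
+The final decision reduces to a standard z-test. A sketch of the p-value computation, with illustrative numbers of our own (not results from the paper):
+
+```python
+import numpy as np
+from scipy.stats import norm
+
+def ci_test_pvalue(beta_hat_jk, v_hat, n):
+    """Two-sided p-value for H0: beta_{j,k} = 0, using the asymptotic
+    normality sqrt(n) * (beta_hat - beta*) -> N(0, V) of Theorem 3.4.
+    v_hat is the plug-in estimate of V (argument names are illustrative)."""
+    z = np.sqrt(n) * beta_hat_jk / np.sqrt(v_hat)
+    return 2.0 * norm.sf(abs(z))
+
+p_val = ci_test_pvalue(0.01, 0.5, 2000)
+print(p_val > 0.05)   # True: fail to reject, conclude conditional independence
+```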
+
+# 3.3. Comparison with DCT
+
+One of the motivations behind DCT-GMM is to address the insufficient data utilization in DCT. This naturally raises the question: is DCT-GMM necessarily superior to DCT? To answer this question, we propose the following theorem:
+
+Theorem 3.5. (Informal) When $n \to +\infty$ , by constructing the moment functions properly, the DCT-GMM with two-step GMM has a lower variance than DCT.
+
+The formal theorem and detailed derivation can be found in Appendix G. Intuitively, if we construct the moment functions exactly as in DCT, we recover an estimator with the same variance. The GMM framework, however, allows the introduction of additional moment functions, thereby reducing the variance. We empirically validate the theorem in Section 4.3.
+
+# 4. Experiment
+
+We apply the proposed DCT-GMM test to synthetic datasets to evaluate its performance against baselines including DCT (Sun et al., 2024), the Fisher-z test (Fisher, 1921), and the Chi-square test (F.R.S., 2009). Specifically, we investigate its Type I and Type II errors in different scenarios and its application to causal discovery. Experiments investigating its performance on denser graphs and its effectiveness on real-world datasets can be found in Appendix E.
+
+# 4.1. On the Effect of the Cardinality of Conditioning Set and the Sample Size
+
+We conducted an experimental study to examine the behavior of Type I and Type II error probabilities under two distinct experimental designs. The first design explores the impact of sample size variation, specifically testing sample sizes $n = (200, 500, 1000, 2000)$ while maintaining a single conditioning variable, denoted $D = 1$. In the second design, we fixed the sample size at $n = 2000$ and systematically varied the number of conditioning variables $D = (1, \dots, 5)$. We assume that all variables in the conditioning set are influential and affect the CI relations of the tested pair. Each experimental configuration was replicated 2,000 times to ensure robust statistical analysis. We use $X$ and $Y$ to denote the tested pair and $\mathbf{Z}$ to denote the variables being conditioned on.
+
+To assess the accuracy of the derived asymptotic null distribution, we evaluated whether the Type I error probability aligns with the predetermined significance level $\alpha = 0.05$. We first generate $\mathbf{Z}$ as an independent multivariate normal distribution whose mean and variance are randomly sampled from a uniform distribution $U(0,1)$. We then generate corresponding $X$ and $Y$ using $\mathbf{Z}$, structured as $\sum_{i=1}^{D} a_i Z_i + E_i$ (for the first scenario, $D = 1$), where $a_i$ is a scalar sampled from a standard normal distribution and $E_i$ follows a standard normal distribution. This ensures that $X \perp Y \mid \mathbf{Z}$. The data are then discretized into three levels, with random boundaries set based on the support of each variable, producing the discretized observations $\tilde{X}, \tilde{Y}$, and $\tilde{\mathbf{Z}}$. The first two columns of Figure 2 show the resulting Type I error at a significance level of $\alpha = 0.05$.
+
+Figure 3. Experimental result of skeleton discovery on synthetic data for (a) fixed sample size $n = 2000$ with changing number of nodes $p = (4,6,8,12)$, and (b) fixed number of nodes $p = 10$ with changing sample size $n = (100,500,1000,2000)$. Compared methods: Chi-square, DCT-GMM_one, DCT-GMM_two, Fisher-z, DCT, and Fisherz_oracle (the Fisher-z test applied to the original continuous data). We evaluate $F_{1}$ ($\uparrow$), Precision ($\uparrow$), Recall ($\uparrow$), and SHD ($\downarrow$).
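+
+The null-case generation described above can be sketched as follows (our own minimal reproduction of the described protocol, not the authors' code):
+
+```python
+import numpy as np
+
+# Sketch of the null-case (Type I) data generation with D = 1:
+# Z -> X and Z -> Y, so X and Y are independent given Z.
+rng = np.random.default_rng(4)
+n, D = 2000, 1
+mu, var = rng.uniform(0, 1, D), rng.uniform(0, 1, D)
+Z = mu + np.sqrt(var) * rng.standard_normal((n, D))
+a, b = rng.standard_normal(D), rng.standard_normal(D)
+X = Z @ a + rng.standard_normal(n)
+Y = Z @ b + rng.standard_normal(n)
+
+def discretize(v, rng):
+    """Map to three levels with random boundaries inside the support."""
+    cuts = np.sort(rng.uniform(v.min(), v.max(), size=2))
+    return np.digitize(v, cuts)
+
+X_t, Y_t = discretize(X, rng), discretize(Y, rng)
+Z_t = np.column_stack([discretize(Z[:, d], rng) for d in range(D)])
+print(X_t.shape, np.unique(X_t))
+```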
+
+A robust statistical test should minimize the Type II error, thereby maximizing statistical power. To evaluate the power of the proposed DCT-GMM, we first generate $X$ and $Y$ as independent pairs following a normal distribution, with mean and variance randomly sampled from a uniform distribution $U(0,1)$ . We then generate the conditioning variable $\mathbf{Z}$ as $Z_{i} = a_{i}X + b_{i}Y + E_{i}$ , where $a_{i}$ and $b_{i}$ are scalars randomly drawn from a standard normal distribution, and $E_{i}$ follows a standard normal distribution, i.e., $X \not\perp Y \mid \mathbf{Z}$ . The same discretization approach is applied here. The last two columns of Figure 2 illustrate the Type II error rates for both varying sample sizes and changing cardinalities of the conditioning set scenarios.
+
+According to the first row of Figure 2, DCT-GMM (both one-step and two-step) and DCT show superior performance in maintaining the Type I error close to the significance level across all sample sizes and conditioning-set cardinalities, while the other baselines, which do not account for discretization, exhibit significantly higher Type I errors. As the sample size increases, both the Chi-square and Fisher-z tests tend to yield even larger Type I errors, because these methods in fact test whether $\tilde{X} \perp \tilde{Y} \mid \tilde{\mathbf{Z}}$; more samples only reinforce the incorrect conclusion. Additionally, DCT-GMM demonstrates significantly higher power than DCT, particularly for small sample sizes. This advantage arises from DCT-GMM's ability to utilize more information by solving multiple equations, highlighting its superiority and effectiveness.
+
+# 4.2. Application in Causal Discovery
+
+CI testing plays a pivotal role in causal discovery, which aims to uncover causal relationships from observational data. Two fundamental assumptions—faithfulness and the causal Markov condition—allow causal structures, represented by a Directed Acyclic Graph (DAG) $\mathcal{G}$ , to be inferred from statistical independence relations. Based on these principles, constraint-based methods like the PC algorithm (Spirtes et al., 2000) recover graph structures through CI testing. However, discretization compromises the reliability of CI tests, leading to incorrect dependence assertions and distorting the inferred DAG.
+
+To validate the effectiveness of DCT-GMM, we follow the setting of Sun et al. (2024) and apply the PC algorithm with different CI testing methods on a synthetic dataset. Specifically, the true DAG is generated using the Bipartite Pairing (BP) model (Asratian et al., 1998), with weights drawn from a uniform distribution $U(1,3)$ and noise following a standard normal distribution. The number of edges in the DAG is one fewer than the number of nodes. While this graph is relatively sparse, the main focus of DCT-GMM is to correct conditional independencies incorrectly judged as dependence due to discretization; as the number of edges increases, such cases of true conditional independence become rarer. We also investigate the performance of DCT-GMM in denser graphs, detailed in Appendix E.1.
+
+Figure 4. Comparison of variance and MSE of the estimated covariance using DCT and DCT-GMM: (a) $\sigma_{xy} = 0$; (b) $\sigma_{xy} = 0.5$.
+
+The continuous data is then discretized into three levels, with boundaries randomly generated according to each variable's support. The experiment is divided into two cases: In the first, we fix the sample size at $n = 2000$ while varying the number of nodes $p = (4,6,8,12)$ . In the second, we fix the number of nodes at $p = 10$ and explore sample sizes of $n = (100,500,1000,2000)$ .
+
+We compare DCT-GMM (both variants) against the Fisher-z test (Fisher, 1921), the Chi-square test (F.R.S., 2009), and the previous work DCT (Sun et al., 2024) applied to discretized data. Additionally, we apply the Fisher-z test to the original continuous data as a theoretical upper bound. Since the PC algorithm can only identify the causal graph up to a Completed Partially Directed Acyclic Graph (CPDAG), we apply the same orientation rule from (Dor & Tarsi, 1992), implemented in (Chandler Squires, 2018), to convert the returned CPDAG to a DAG for easier comparison. For each setting, we run 10 graph instances with different seeds and report the mean and standard deviation of F1-score, precision, recall, and Structural Hamming Distance (SHD) in Figure 3 for skeleton discovery and in Figure 5 (Appendix D) for DAG comparison.
+
+Experimental results show that DCT-GMM consistently outperforms DCT across all metrics, particularly recall, aligning with its higher statistical power demonstrated in Section 4.1. Moreover, DCT-GMM significantly outperforms the Chi-square and Fisher-z tests, especially at large sample sizes, where the performance of these traditional tests deteriorates as the sample size increases. This phenomenon underscores the importance of discretization-aware CI tests: for tests unaware of discretization, increasing the sample size only reinforces incorrect conclusions with greater confidence. The lower recall observed in DCT-GMM and DCT relative to the other baselines is expected, as those baselines tend to misinterpret conditional independence as dependence, leading to denser inferred graph structures.
+
+# 4.3. Empirical Comparison of DCT and DCT-GMM
+
+To validate the superiority of DCT-GMM over DCT, we conducted experiments to examine the variance and Mean Square Error (MSE) of their respective covariance estimators. Both methods adopt the same nodewise regression framework to transition from $\hat{\sigma}_{j_1j_2} - \sigma_{j_1j_2}^*$ to $\hat{\beta}_{j,k} - \beta_{j,k}^*$ . Therefore, evaluating the covariance estimator directly reflects the performance of the CI test.
+
+For a given pair of variables $X$ and $Y$, we denote the covariance estimated by DCT as $\hat{\sigma}_{XY}^{D}$ and that by DCT-GMM as $\hat{\sigma}_{XY}^{G}$. We empirically assess the stability and accuracy of these estimators by comparing their variance and MSE across varying sample sizes $n = (100, 200, 500, 1000, 2000)$, under scenarios where the true covariance $\sigma_{XY} \in \{0, 0.5\}$ and the discretization level is fixed at $M = 3$. Figure 4 shows the results.
+
+As shown in Figure 4, DCT-GMM consistently outperforms DCT. Specifically, the variance of $\hat{\sigma}_{XY}^{G}$ is consistently lower than that of $\hat{\sigma}_{XY}^{D}$ across all evaluated sample sizes. Additionally, the MSE of $\hat{\sigma}_{XY}^{G}$ is smaller, indicating a more accurate estimation. These experimental results validate the superiority and improved sample efficiency of DCT-GMM, providing support for Theorem 3.5.
+
+# 5. Conclusion
+
+In this paper, we propose DCT-GMM, a novel sample-efficient CI test to address challenges in CI testing with discretized data. By formulating parameter estimation as an overidentification problem and leveraging GMM, DCT-GMM surpasses existing methods, achieving lower estimation variance and greater statistical power, particularly in small-sample scenarios. Its proven effectiveness in causal discovery highlights its practical utility, bridging the gap between discretized observations and latent variable relationships in both synthetic and real-world datasets.
+
+# Impact Statement
+
+This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
+
+# References
+
+Aliferis, C. F., Statnikov, A., Tsamardinos, I., Mani, S., and Koutsoukos, X. D. Local causal and markov blanket induction for causal discovery and feature selection for classification part i: algorithms and empirical evaluation. Journal of Machine Learning Research, 11(1), 2010.
+Asratian, A. S., Denley, T. M., and Häggkvist, R. Bipartite graphs and their applications, volume 131. Cambridge university press, 1998.
+Baba, K., Shibata, R., and Sibuya, M. Partial correlation and conditional correlation as measures of conditional independence. Australian & New Zealand Journal of Statistics, 46(4):657-664, 2004.
+Callot, L., Caner, M., Ulasan, E., and Ozlem Onder, A. A nodewise regression approach to estimating large portfolios, 2019.
+Chandler Squires. _causaldag: creation, manipulation, and learning of causal models_, 2018. URL https://github.com/uhlerlab/causaldag.
+Changsheng, H. and Yongfeng, W. Investor sentiment and assets valuation. Systems Engineering Procedia, 3:166-171, 2012.
+Damodaran, A. Investment valuation: Tools and techniques for determining the value of any asset, volume 666. John Wiley & Sons, 2012.
+Dong, X., Huang, B., Ng, I., Song, X., Zheng, Y., Jin, S., Legaspi, R., Spirtes, P., and Zhang, K. A versatile causal discovery framework to allow causally-related hidden variables. In ICLR, 2024a.
+Dong, X., Ng, I., Huang, B., Sun, Y., Jin, S., Legaspi, R., Spirtes, P., and Zhang, K. On the parameter identifiability of partially observed linear causal models. In NeurIPS, 2024b.
+Dong, X., Ng, I., Sun, B., Dai, H., Hao, G.-Y., Fan, S., Spirtes, P., Qiu, Y., and Zhang, K. Permutation-based rank test in the presence of discretization and application in causal discovery with mixed data. In ICML, 2025.
+Dooms, S., De Pessemier, T., and Martens, L. Movietweetings: a movie rating dataset collected from twitter. In Workshop on Crowdsourcing and human computation
+
+for recommender systems, CrowdRec at RecSys, volume 2013, pp. 43, 2013.
+Dor, D. and Tarsi, M. A simple algorithm to construct a consistent extension of a partially oriented graph. 1992. URL https://api.semanticscholar.org/CorpusID:122949140.
+Doran, G., Muandet, K., Zhang, K., and Scholkopf, B. A permutation-based kernel conditional independence test. In UAI, pp. 132-141, 2014.
+Fisher, R. A. On the "Probable Error" of a Coefficient of Correlation Deduced from a Small Sample. *Metron*, 1: 3-32, 1921.
+F.R.S., K. P. X. On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling. Philosophical Magazine Series 1, 50:157-175, 2009. URL https://api.semanticscholar.org/CorpusID:121472089.
+Fukumizu, K., Bach, F. R., and Jordan, M. I. Dimensionality reduction for supervised learning with reproducing kernel hilbert spaces. Journal of Machine Learning Research, 5 (Jan):73-99, 2004.
+Hansen, L. P. Large sample properties of generalized method of moments estimators. *Econometrica*, 50(4): 1029-1054, 1982. ISSN 00129682, 14680262. URL http://www.jstor.org/stable/1912775.
+Johnson, S. U., Ulvenes, P. G., Øktedalen, T., and Hoffart, A. Psychometric properties of the general anxiety disorder 7-item (gad-7) scale in a heterogeneous psychiatric sample. Frontiers in psychology, 10:449461, 2019.
+Kubkowski, M., Mielniczuk, J., and Teisseyre, P. How to gain on power: Novel conditional independence tests based on short expansion of conditional mutual information. Journal of Machine Learning Research, 22(62):1-57, 2021. URL http://jmlr.org/papers/v22/19-600.html.
+McDonald, J. H. Handbook of biological statistics, volume 2. sparky house publishing Baltimore, MD, 2009.
+Mohan, K., Chung, M., Han, S., Witten, D., Lee, S.-I., and Fazel, M. Structured learning of gaussian graphical models. Advances in neural information processing systems, 25, 2012.
+Mossman, S. A., Luft, M. J., Schroeder, H. K., Varney, S. T., Fleck, D. E., Barzman, D. H., Gilman, R., DelBello, M. P., and Strawn, J. R. The generalized anxiety disorder 7-item (gad-7) scale in adolescents with generalized
+
+anxiety disorder: signal detection and validation. Annals of clinical psychiatry: official journal of the American Academy of Clinical Psychiatrists, 29(4):227, 2017.
+Newey, W. K. Generalized method of moments. Access through internet: https://ocw.mit.edu/courses/economics/14-386-new-econometric-methods-spring-2007/readings/ngmm07.pdf, 2007.
+Pearl, J. Causality: Models, Reasoning, and Inference. Cambridge University Press, 2000. ISBN 0521773628. URL http://www.worldcat.org/isbn/0521773628.
+Peterson, C., Stingo, F. C., and Vannucci, M. Bayesian inference of multiple gaussian graphical models. Journal of the American Statistical Association, 110(509):159-174, 2015.
+Ren, Z., Sun, T., Zhang, C.-H., and Zhou, H. H. Asymptotic normality and optimalities in estimation of large gaussian graphical models. 2015.
+Sparling, E. I. and Sen, S. Rating: how difficult is it? In Proceedings of the fifth ACM conference on Recommender systems, pp. 149-156, 2011.
+Spirtes, P., Glymour, C., and Scheines, R. Causation, Prediction, and Search. MIT press, 2nd edition, 2000.
+Strobl, E. V., Zhang, K., and Visweswaran, S. Approximate kernel-based conditional independence tests for fast nonparametric causal discovery. Journal of Causal Inference, 7(1):20180017, 2019.
+Sun, B., Yao, Y., Hao, H., Qiu, Y., and Zhang, K. A conditional independence test in the presence of discretization, 2024. URL https://arxiv.org/abs/2404.17644.
+van der Vaart, A. W. M- and Z-Estimators, pp. 41-84. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 1998. doi: 10.1017/CBO9780511802256.006.
+Yuan, M. and Lin, Y. Model selection and estimation in the Gaussian graphical model. Biometrika, 94(1):19-35, 2007.
+Zhang, K., Peters, J., Janzing, D., and Schölkopf, B. Kernel-based conditional independence test and application in causal discovery. arXiv preprint arXiv:1202.3775, 2012.
+Zhang, Y., Zhang, Z., Liu, K., and Qian, G. An improved IAMB algorithm for Markov blanket discovery. J. Comput., 5(11):1755-1761, 2010.
+Ziegel, E. R. Statistical inference, 2002.
+
+# Appendix for
+
+# "A Sample-Efficient Conditional Independence Test in the Presence of Discretization"
+
+# A. Notation Table and Function Form
+
+# A.1. Notation Table
+
+| Category | Notation | Description |
+| --- | --- | --- |
+| Number and Indices | $n$ | Number of samples |
+| | $p$ | Number of variables |
+| | $j_1, j_2, j, k$ | Indices of variables, $j_1, j_2, j, k \in \{1, \dots, p\}$ |
+| Random Variables | $\mathbf{X}$ | A vector of Gaussian variables |
+| | $\tilde{\mathbf{X}}$ | The discretized counterpart of $\mathbf{X}$ |
+| | $X_j$ | $j$-th component of $\mathbf{X}$ |
+| | $\mathbf{X}_{-\{jk\}}$ | All other variables of $\mathbf{X}$ with $X_j$ and $X_k$ removed |
+| | $\boldsymbol{\Sigma}$ | Covariance matrix of $\mathbf{X}$ |
+| | $\boldsymbol{\Sigma}_{-j-j}$ | Submatrix of $\boldsymbol{\Sigma}$ with the $j$-th row and $j$-th column removed |
+| | $\boldsymbol{\Sigma}_{-jj}$ | $j$-th column of $\boldsymbol{\Sigma}$ with the $j$-th row removed |
+| | $\boldsymbol{\Omega}$ | Precision matrix of $\mathbf{X}$, equal to $\boldsymbol{\Sigma}^{-1}$ |
+| | $\sigma_{j_1j_2}$ | Covariance between $X_{j_1}$ and $X_{j_2}$ |
+| | $\omega_{jk}$ | Precision coefficient, the $(j, k)$ entry of $\boldsymbol{\Omega}$ |
+| | $c_j$ | The discretization boundaries mapping $X_j$ to $\tilde{X}_j$ |
+| | $c_{j,k}$ | One component of $c_j$ |
+| | $x_j^i$ | $i$-th sample of $X_j$ |
+| | $\tilde{x}_j^i$ | $i$-th sample of $\tilde{X}_j$ |
+| | $\tau_{j,k}$ | True probability that $\tilde{X}_j$ takes the value $k$ |
+| | $\tau_{j_1j_2,mk}$ | True probability that $\tilde{X}_{j_1}$ takes the value $m$ and $\tilde{X}_{j_2}$ takes the value $k$ |
+| | $\beta_{j,k}$ | Regression coefficient of $X_k$ in predicting $X_j$ |
+| | $\beta_j$ | Vector of all coefficients regressing $X_j$ |
+| | $\xi_{j_1j_2}^i$ | Influence function component: the influence of the $i$-th observation on the covariance estimation error |
+| | $\Xi^i$ | Matrix form of $\xi^i$ |
+| Estimation of Variables | $\hat{\sigma}_{j_1j_2}$ | Estimated covariance of $X_{j_1}$ and $X_{j_2}$ |
+| | $\hat{\boldsymbol{\Sigma}}$ | Estimated covariance matrix, the matrix form of $\hat{\sigma}_{j_1j_2}$ |
+| | $\hat{\omega}_{jk}$ | Estimate of $\omega_{jk}$ |
+| | $\hat{c}_{j,k}$ | Estimate of $c_{j,k}$, calculated using Equation (2) |
+| | $\hat{\tau}_{j,k}$ | Estimate of $\tau_{j,k}$: the sample probability that $\tilde{X}_j$ equals $k$ |
+| | $\hat{\tau}_{j_1j_2,mk}$ | Estimate of $\tau_{j_1j_2,mk}$: the sample probability that $\tilde{X}_{j_1}$ equals $m$ and $\tilde{X}_{j_2}$ equals $k$ |
+| | $\hat{\beta}_j$ | Estimate of $\beta_j$, calculated as $\hat{\boldsymbol{\Sigma}}_{-j-j}^{-1} \hat{\boldsymbol{\Sigma}}_{-jj}$ |
+| Functions and Operators | $\mathbb{P}$ | True probability |
+| | $\mathbb{P}_n$ | Sample probability |
+| | $\mathbb{E}[Z]$ | Expectation of a random variable $Z$ |
+| | $\mathbb{E}_n[Z]$ | Sample mean of a random variable $Z$ over $n$ samples |
+| | $\mathbb{1}$ | Indicator function: 1 if the condition is true, 0 otherwise |
+| | $\Phi(z)$ | Cumulative distribution function of a standard normal distribution, integrated from $-\infty$ to $z$ |
+| | $\Phi(a, b, c, d; \sigma_{j_1j_2})$ | Cumulative distribution function of a bivariate normal distribution with covariance $\sigma_{j_1j_2}$, integrated over the rectangular region $[a, b] \times [c, d]$ |
+| Notations of GMM | $f_i(\boldsymbol{\theta})$ | Moment function defined in Equation (4) |
+| | $\hat{g}(\boldsymbol{\theta})$ | Sample mean of the moment functions given $n$ samples: $\hat{g}(\boldsymbol{\theta}) = \frac{1}{n} \sum_{i=1}^{n} f_i(\boldsymbol{\theta})$ |
+| | $\mathbf{A}$ | Weighting matrix of GMM |
+| | $\mathbf{G}$ | Expectation of the Jacobian of the moment function: $\mathbf{G} = \mathbb{E}[\partial f_i(\boldsymbol{\theta}^*) / \partial \boldsymbol{\theta}]$ |
+| | $\mathbf{S}$ | Covariance matrix of the moment function: $\mathbf{S} = \mathbb{E}[f_i(\boldsymbol{\theta}^*) f_i(\boldsymbol{\theta}^*)^T]$ |
+
+# A.2. Cumulative Distribution Function of Bivariate Normal Distribution
+
+The probability density function of a bivariate normal distribution with random variables $X_{j_1}, X_{j_2}$ , mean $\mathbf{0}$ , unit variance and covariance $\sigma_{j_1j_2}$ is given by:
+
+$$
+\phi \left(x_{j_1}, x_{j_2}; \sigma_{j_1 j_2}\right) = \frac{1}{2 \pi \sqrt{1 - \sigma_{j_1 j_2}^2}} \exp \left(- \frac{x_{j_1}^2 - 2 \sigma_{j_1 j_2} x_{j_1} x_{j_2} + x_{j_2}^2}{2 \left(1 - \sigma_{j_1 j_2}^2\right)}\right). \tag{12}
+$$
+
+The CDF of the bivariate normal distribution with covariance $\sigma_{j_1j_2}^*$, integrated over the rectangular region $[c_{j_1,m-1}^*, c_{j_1,m}^*] \times [c_{j_2,k-1}^*, c_{j_2,k}^*]$, is
+
+$$
+\Phi \left(c _ {j _ {1}, m - 1} ^ {*}, c _ {j _ {1}, m} ^ {*}, c _ {j _ {2}, k - 1} ^ {*}, c _ {j _ {2}, k} ^ {*}; \sigma_ {j _ {1} j _ {2}} ^ {*}\right) = \int_ {c _ {j _ {1}, m - 1} ^ {*}} ^ {c _ {j _ {1}, m} ^ {*}} \int_ {c _ {j _ {2}, k - 1} ^ {*}} ^ {c _ {j _ {2}, k} ^ {*}} \phi \left(x _ {j _ {1}}, x _ {j _ {2}}; \sigma_ {j _ {1} j _ {2}} ^ {*}\right) d x _ {j _ {1}} d x _ {j _ {2}}. \tag {13}
+$$
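Equation (13) generally has no closed form, but it is straightforward to evaluate numerically by inclusion-exclusion on the joint CDF. The sketch below is illustrative (the helper name `bivariate_rect_prob` is ours, not the paper's) and assumes SciPy's bivariate normal CDF:

```python
from scipy.stats import multivariate_normal, norm

def bivariate_rect_prob(a, b, c, d, sigma):
    """P(a < X_{j1} < b, c < X_{j2} < d) for a standard bivariate normal
    with correlation `sigma`, via inclusion-exclusion on the joint CDF."""
    mvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, sigma], [sigma, 1.0]])
    F = lambda x, y: mvn.cdf([x, y])
    return F(b, d) - F(a, d) - F(b, c) + F(a, c)

# With sigma = 0 the two coordinates are independent, so the rectangle
# probability factorizes into a product of univariate normal masses.
p = bivariate_rect_prob(-1.0, 1.0, -1.0, 1.0, 0.0)
```

The independence case gives a convenient correctness check, since the factorized probability is computable from the univariate CDF alone.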
+
+# B. Discussion of Assumptions
+
+**Rationality of the assumption.** A primary limitation of this work lies in the assumption of a multivariate normal distribution for the latent continuous variables, which, to some extent, restricts its generality. However, it is important to emphasize the inherent challenge of proposing a valid conditional independence test in a discretization scenario without relying on appropriate assumptions.
+
+To accurately infer conditional independence relationships within this framework, three key components are required:
+
+1. A meaningful statistic, capable of capturing the conditional independence among latent variables.
+2. A consistent estimator: the statistic must be computable solely from discretized observations.
+3. Statistical inference: the null distribution of the statistic must be derivable.
+
+The discretization drastically reduces the available information, making these components less straightforward to implement than in scenarios where all variables are directly observable.
+
+Without a parametric assumption, deriving a meaningful statistic is already challenging, let alone performing its statistical inference. We adopt the framework of Sun et al. (2024), relying on the property that, under a parametric assumption, the covariance of the original latent variables is computable, and that for Gaussian variables the covariance matrix encodes the independence and conditional independence relations among the variables.
+
+**W.l.o.g. of the assumption.** One question that may intrigue readers is why the assumption of zero mean and unit variance can be made without loss of generality. The answer is straightforward: we can always adjust the discretization function and its boundaries to produce equivalent results for models with non-zero means and varying variances.
+
+To illustrate, consider an intuitive example. Suppose we observe a discrete variable $\tilde{X}_j$ with $n$ samples, where half are labeled as "ones" and the other half as "twos." This discrete distribution could correspond to multiple continuous variables. For instance, a continuous variable $X_j \sim N(0,1)$ with a discretization boundary at 0 and another variable $X'_j \sim N(1,2)$ with a discretization boundary at 1 would yield exactly the same discretized observations $\tilde{X}_j$ . Thus, the framework presented in this paper supports mapping the same discretized observations to multivariate normal distributions with any mean and variance, i.e., the zero mean and unit variance assumption is without loss of generality.
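The two latent models in the example above can be checked to induce identical discrete distributions; a minimal numerical check (a sketch under the stated means, variances, and boundaries):

```python
import numpy as np
from scipy.stats import norm

# Model 1: X_j ~ N(0, 1) with discretization boundary at 0.
p_one_std = norm.cdf(0.0, loc=0.0, scale=1.0)

# Model 2: X'_j ~ N(1, 2) (variance 2, so scale sqrt(2)) with boundary at 1.
p_one_shifted = norm.cdf(1.0, loc=1.0, scale=np.sqrt(2.0))

# Both models put probability 1/2 on the label "one", so the discretized
# observations are distributionally indistinguishable.
```

Since each boundary sits exactly at its model's mean, both probabilities are exactly 0.5, matching the half-"ones", half-"twos" sample described above.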
+
+# C. Pseudo Code
+
+# Algorithm 1: One-step DCT-GMM
+
+1: Require:
+
+- Observed data matrix $\tilde{\mathbf{X}}^{\prime} \in \mathbb{R}^{n \times d}$
+- Tested pair indices $j_{1}, j_{2}$ with $j_{1} \neq j_{2}$
+- Conditioning set $\mathbf{C} \subseteq \{1, \dots, d\} \setminus \{j_1, j_2\}$
+- Significance level $\alpha$
+
+2: Rearrange Data Matrix
+
+$$
+\tilde{\boldsymbol{X}} = \left[ \tilde{\boldsymbol{X}}^{\prime}[:, j_1], \tilde{\boldsymbol{X}}^{\prime}[:, j_2], \tilde{\boldsymbol{X}}^{\prime}[:, \mathbf{C}] \right] \in \mathbb{R}^{n \times p}, \quad \text{where } p = 2 + |\mathbf{C}|
+$$
+
+3: Initialize Covariance Matrix
+
+$$
+\hat{\boldsymbol{\Sigma}} \leftarrow \mathbf{I}_p \quad (\text{identity matrix of size } p \times p)
+$$
+
+4: for $q \gets 1$ to $p$ do
+5: for $k \gets q + 1$ to $p$ do
+6: Obtain the cardinality of $\tilde{\mathbf{X}}[:, q]$ as $Q$
+7: Obtain the cardinality of $\tilde{\mathbf{X}}[:, k]$ as $K$
+8: Set the naive weighting matrix $A \gets \mathbf{I}_{QK}$
+9: Compute the covariance $\hat{\sigma}_{qk}$ by minimizing Equation (5)
+10: Update covariance matrix:
+
+$$
+\hat{\Sigma}[q, k] \leftarrow \hat{\sigma}_{qk}, \quad \hat{\Sigma}[k, q] \leftarrow \hat{\sigma}_{qk} \quad (\text{ensuring symmetry})
+$$
+
+11: end for
+12: end for
+13: Extract Submatrices ($j_1$ and $j_2$ correspond to the first and second columns of $\tilde{X}$ due to the rearrangement)
+
+- Let $\hat{\Sigma}_{-1-1} \in \mathbb{R}^{(p-1) \times (p-1)} \gets$ the submatrix of $\hat{\Sigma}$ with the 1st row and 1st column removed
+- Let $\hat{\Sigma}_{-11} \in \mathbb{R}^{p - 1}$ be the 1st column of $\hat{\Sigma}$ with first row removed
+
+14: Compute Test Statistics
+
+$$
+\hat {\beta} _ {1, 2} \leftarrow \hat {\Sigma} _ {- 1 - 1} ^ {- 1} \hat {\Sigma} _ {- 1 1}
+$$
+
+15: Formulate Null Distribution
+
+$\Phi(z) \gets$ Cumulative distribution function of the Normal Distribution defined in Thm. 3.4
+
+16: Calculate p-value
+
+$$
+p\text{-value} \leftarrow 2 \cdot \left(1 - \Phi\left(\left| \hat{\beta}_{1,2} \right|\right)\right)
+$$
+
+17: Make Decision
+18: if $p$ -value $> \alpha$ then
+19: Conclude: $X_{j_1} \perp X_{j_2} \mid X_{\mathbf{C}}$
+20: else
+21: Conclude: $X_{j_1} \not\perp X_{j_2} \mid X_{\mathbf{C}}$
+22: end if
+23: Return The conditional independence decision
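For intuition, the inner loop of steps 6-10 can be sketched for a binary-binary pair with identity weighting. Everything below — the simulated data, the recovery of boundaries from the marginals via the inverse normal CDF, and the reduction to a single informative cell probability — is an illustrative reconstruction under the paper's Gaussian assumption, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import multivariate_normal, norm

rng = np.random.default_rng(0)
n, true_sigma = 50_000, 0.5

# Latent standard bivariate normal, discretized at boundary 0.
X = rng.multivariate_normal([0, 0], [[1, true_sigma], [true_sigma, 1]], size=n)
Xd = (X > 0).astype(int)  # binary observations

# Boundaries recovered from the marginals via the inverse normal CDF.
c1 = norm.ppf(np.mean(Xd[:, 0] == 0))
c2 = norm.ppf(np.mean(Xd[:, 1] == 0))

# Empirical joint cell probability P(Xd1 = 0, Xd2 = 0).
tau_00 = np.mean((Xd[:, 0] == 0) & (Xd[:, 1] == 0))

def objective(sigma):
    # Squared moment: empirical cell probability minus the model
    # probability implied by the bivariate normal CDF at (c1, c2).
    model = multivariate_normal([0, 0], [[1, sigma], [sigma, 1]]).cdf([c1, c2])
    return (tau_00 - model) ** 2

sigma_hat = minimize_scalar(objective, bounds=(-0.99, 0.99), method="bounded").x
```

With the identity weighting this corresponds to the one-step estimator; the two-step variant would re-solve the same problem after updating the weighting matrix.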
+
+# Algorithm 2: Two-step DCT-GMM
+
+1: Require:
+
+- Observed data matrix $\tilde{\mathbf{X}}^{\prime} \in \mathbb{R}^{n \times d}$
+- Tested pair indices $j_{1}, j_{2}$ with $j_{1} \neq j_{2}$
+- Conditioning set $\mathbf{C} \subseteq \{1, \dots, d\} \setminus \{j_1, j_2\}$
+- Significance level $\alpha$
+
+2: Rearrange Data Matrix
+
+$$
+\tilde{\boldsymbol{X}} = \left[ \tilde{\boldsymbol{X}}^{\prime}[:, j_1], \tilde{\boldsymbol{X}}^{\prime}[:, j_2], \tilde{\boldsymbol{X}}^{\prime}[:, \mathbf{C}] \right] \in \mathbb{R}^{n \times p}, \quad \text{where } p = 2 + |\mathbf{C}|
+$$
+
+3: Initialize Covariance Matrix
+
+$$
+\hat{\boldsymbol{\Sigma}} \leftarrow \mathbf{I}_p \quad (\text{identity matrix of size } p \times p)
+$$
+
+4: for $q \gets 1$ to $p$ do
+
+5: for $k \gets q + 1$ to $p$ do
+6: Obtain the cardinality of $\tilde{\mathbf{X}}[:, q]$ as $Q$
+7: Obtain the cardinality of $\tilde{\mathbf{X}}[:, k]$ as $K$
+8: Set the naive weighting matrix $A \gets \mathbf{I}_{QK}$
+9: Obtain estimated parameters $\hat{\boldsymbol{\theta}}$ by minimizing Equation (5)
+10: Set the weighting matrix $A \gets \left(\mathbb{E}_n[f_i(\hat{\boldsymbol{\theta}}) f_i(\hat{\boldsymbol{\theta}})^T]\right)^{-1}$, where $f_i(\boldsymbol{\theta})$ is defined in Equation (4), so that $A \xrightarrow{p} \mathbf{S}^{-1}$ as required by Lemma 3.2
+11: Re-solve Equation (5) with the updated $A$ to obtain the estimated covariance $\hat{\sigma}_{qk}$
+12: Update covariance matrix:
+
+$$
+\hat{\Sigma}[q, k] \leftarrow \hat{\sigma}_{qk}, \quad \hat{\Sigma}[k, q] \leftarrow \hat{\sigma}_{qk} \quad (\text{ensuring symmetry})
+$$
+
+13: end for
+14: end for
+15: Extract Submatrices ($j_1$ and $j_2$ correspond to the first and second columns of $\tilde{X}$ due to the rearrangement)
+
+- Let $\hat{\Sigma}_{-1-1} \in \mathbb{R}^{(p-1) \times (p-1)} \gets$ the submatrix of $\hat{\Sigma}$ with the 1st row and 1st column removed
+- Let $\hat{\Sigma}_{-11} \in \mathbb{R}^{p - 1}$ be the 1st column of $\hat{\Sigma}$ with first row removed
+
+16: Compute Test Statistics
+
+$$
+\hat {\beta} _ {1, 2} \leftarrow \hat {\Sigma} _ {- 1 - 1} ^ {- 1} \hat {\Sigma} _ {- 1 1}
+$$
+
+17: Formulate Null Distribution
+
+$\Phi(z) \gets$ Cumulative distribution function of the Normal Distribution defined in Thm. 3.4
+
+18: Calculate p-value
+
+$$
+p\text{-value} \leftarrow 2 \cdot \left(1 - \Phi\left(\left| \hat{\beta}_{1,2} \right|\right)\right)
+$$
+
+19: Make Decision
+20: if $p$ -value $> \alpha$ then
+21: Conclude: $X_{j_1} \perp X_{j_2} \mid X_{\mathbf{C}}$
+22: else
+23: Conclude: $X_{j_1} \not\perp X_{j_2} \mid X_{\mathbf{C}}$
+24: end if
+25: Return The conditional independence decision
+
+# D. Figure of Main Experiments: Causal Discovery
+
+(a) Fixed samples $n = 2000$, varying number of nodes $p = (4, 6, 8, 12)$.
+
+(b) Fixed nodes $p = 10$, varying sample size $n = (100, 500, 1000, 2000)$.
+
+Figure 5. Experimental results of DAG discovery on synthetic data for a varying number of nodes (a) and a varying sample size (b). Methods compared: Chi-square, DCT-GMM_one, DCT-GMM_two, Fisher-z, DCT, and Fisherz_oracle, where Fisherz_oracle is the Fisher-z test applied to the original continuous data. We evaluate $F_1$ (↑), Precision (↑), Recall (↑) and SHD (↓).
+
+
+
+# E. Additional Experiments
+
+# E.1. Denser Graph
+
+DCT-GMM is most effective in cases where discretization causes true conditional independencies to be incorrectly identified as dependencies. Its performance is therefore particularly strong in sparse graph settings, where true conditional independence relationships are abundant. However, to comprehensively evaluate a test's statistical power, i.e., its ability to correctly identify true conditional dependencies, it is crucial to examine its performance in dense graph scenarios. To this end, we conduct experiments with $p = 10$ nodes and $n = 2000$ samples, varying the number of edges over $(p + 2, p + 4, p + 6, p + 8)$. The underlying continuous data follows a multivariate Gaussian distribution, with the true DAG $\mathcal{G}$ generated using the BP model. We perform 10 independent trials with different random seeds and present both skeleton discovery and DAG reconstruction results in Fig. 6.
+
+Experimental results show that DCT-GMM continues to outperform other baselines in terms of precision and SHD. As the number of edges increases, the advantage of discretization-aware CI tests (DCT-GMM and DCT) gradually diminishes due to the decreasing prevalence of conditional independence cases. Notably, DCT-GMM maintains superior recall, consistent with the findings from the main causal discovery experiment.
+
+Figure 6. Experimental comparison of causal discovery on synthetic datasets for denser graphs with $p = 10$, $n = 2000$ and the number of edges varying over $p + 2, p + 4, p + 6, p + 8$. Methods compared: Chi-square, DCT-GMM_one, DCT-GMM_two, Fisher-z, DCT, and Fisherz_oracle. We evaluate $F_1$ (↑), Precision (↑), Recall (↑) and SHD (↓) on both skeleton and DAG.
+# E.2. Real-world Experiment
+
+
+Figure 7. PC algorithm applied to the real-world dataset with the Fisher-z test, Chi-square test, DCT, and DCT-GMM at different significance levels: (a) $\alpha = 10^{-3}$, (b) $\alpha = 10^{-8}$. Each panel shows the graph over [I worry about things], [I often feel blue], and [I seldom feel blue]. Red edges are those found by the other baselines but removed by DCT-GMM.
+
+To validate the effectiveness of DCT-GMM, we conduct experiments on the Big Five Personality dataset, where each variable takes 5 discrete values representing agreement levels (1 = Disagree to 5 = Agree). For example, "N3=1" indicates "I disagree that I worry about things". This setting aligns well with DCT-GMM, as agreement levels are inherently continuous but observed as discrete categories. This dataset has been closely examined by Dong et al. (2024a) and Dong et al. (2024b), yet neither work addresses the discretization problem. We focus on three variables: [N3: I worry about things], [N10: I often feel blue], and [N4: I seldom feel blue]. Using the PC algorithm for causal discovery, we compare DCT-GMM with the Chi-square and Fisher-z tests. Results are shown in Fig. 7.
+
+Experimental results validate both the effectiveness and superiority of DCT-GMM. Notably, both discretization-aware CI tests (DCT and DCT-GMM) successfully remove the edge between $N3$ and $N4$ , whereas other baselines fail. The inferred graph directly aligns with our motivating causal graph illustrated in Figure 1. Furthermore, DCT-GMM demonstrates a stronger ability to capture conditional independence relationships. Increasing the significance level $\alpha$ generally makes CI tests more prone to inferring conditional dependence. While DCT fails at $\alpha = 10^{-3}$ , DCT-GMM remains robust, correctly identifying that $N3 \perp N4 \mid N10$ .
+
+# F. Proof and Derivations
+
+# F.1. Proof of Moment Condition
+
+In this part, we show that $\mathbb{E}[f_i(\boldsymbol{\theta}^*)] = \mathbf{0}$. For the moment functions $f_i(\boldsymbol{\theta})$ defined in Equation (4), evaluated at the optimal parameters $\boldsymbol{\theta} = \boldsymbol{\theta}^*$, we have the specific form:
+
+$$
+f_i(\boldsymbol{\theta}^*) = \left( \begin{array}{c} \hat{\tau}_{j_1 j_2, 11}^{i} - \Phi(c_{j_1, 0}^*, c_{j_1, 1}^*, c_{j_2, 0}^*, c_{j_2, 1}^*; \sigma_{j_1 j_2}^*) \\ \vdots \\ \hat{\tau}_{j_1 j_2, MK}^{i} - \Phi(c_{j_1, M-1}^*, c_{j_1, M}^*, c_{j_2, K-1}^*, c_{j_2, K}^*; \sigma_{j_1 j_2}^*) \end{array} \right).
+$$
+
+For any $m \in \{1, \ldots, M\}$ and $k \in \{1, \ldots, K\}$, the CDF term $\Phi(c_{j_1, m-1}^*, c_{j_1, m}^*, c_{j_2, k-1}^*, c_{j_2, k}^*; \sigma_{j_1 j_2}^*)$ is the probability mass of this bivariate normal distribution over the region $[c_{j_1, m-1}^*, c_{j_1, m}^*] \times [c_{j_2, k-1}^*, c_{j_2, k}^*]$. In probability terms, this corresponds to
+
+$$
+\mathbb {P} \left(c _ {j _ {1}, m - 1} ^ {*} < X _ {j _ {1}} < c _ {j _ {1}, m} ^ {*}, c _ {j _ {2}, k - 1} ^ {*} < X _ {j _ {2}} < c _ {j _ {2}, k} ^ {*}\right).
+$$
+
+For its corresponding counterpart of the discrete domain, the relation holds
+
+$$
+\Phi \left(c _ {j _ {1}, m - 1} ^ {*}, c _ {j _ {1}, m} ^ {*}, c _ {j _ {2}, k - 1} ^ {*}, c _ {j _ {2}, k} ^ {*}; \sigma_ {j _ {1} j _ {2}} ^ {*}\right) = \mathbb {P} \left(\tilde {X} _ {j _ {1}} = m, \tilde {X} _ {j _ {2}} = k\right).
+$$
+
+Recall the definition $\hat{\tau}_{j_1j_2,mk}^i = \mathbb{1}(\tilde{x}_{j_1}^i = m, \tilde{x}_{j_2}^i = k)$, the indicator function for sample $i$. Its expectation equals the corresponding probability:
+
+$$
+\mathbb {E} [ \hat {\tau} _ {j _ {1} j _ {2}, m k} ^ {i} ] = \mathbb {P} (\tilde {X} _ {j _ {1}} = m, \tilde {X} _ {j _ {2}} = k).
+$$
+
+Since $\Phi(\cdot; \sigma_{j_1j_2}^*)$ is a constant with respect to the sample, we can take expectations over $f_i(\boldsymbol{\theta}^*)$ term-wise:
+
+$$
+\mathbb{E}[f_i(\boldsymbol{\theta}^*)] = \left( \begin{array}{c} \mathbb{E}[\hat{\tau}_{j_1 j_2, 11}^{i}] - \Phi(c_{j_1, 0}^*, c_{j_1, 1}^*, c_{j_2, 0}^*, c_{j_2, 1}^*; \sigma_{j_1 j_2}^*) \\ \vdots \\ \mathbb{E}[\hat{\tau}_{j_1 j_2, MK}^{i}] - \Phi(c_{j_1, M-1}^*, c_{j_1, M}^*, c_{j_2, K-1}^*, c_{j_2, K}^*; \sigma_{j_1 j_2}^*) \end{array} \right)
+$$
+
+Since, for every $(m, k)$, both $\mathbb{E}[\hat{\tau}_{j_1j_2,mk}^i]$ and $\Phi(c_{j_1, m-1}^*, c_{j_1, m}^*, c_{j_2, k-1}^*, c_{j_2, k}^*; \sigma_{j_1j_2}^*)$ equal $\mathbb{P}(\tilde{X}_{j_1} = m, \tilde{X}_{j_2} = k)$, each entry evaluates to zero, giving:
+
+$$
+\mathbb {E} \left[ f _ {i} \left(\boldsymbol {\theta} ^ {*}\right) \right] = \mathbf {0}.
+$$
+
+This concludes the proof.
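The moment condition can be sanity-checked by simulation: at the true parameters, the empirical cell frequencies match the model CDF terms up to sampling noise, so the sample mean of each component of $f_i(\boldsymbol{\theta}^*)$ is near zero. A sketch for a binary discretization with an assumed true covariance of 0.6 and boundaries at 0:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
n, sigma_star, c_star = 100_000, 0.6, 0.0  # assumed true covariance and boundary

X = rng.multivariate_normal([0, 0], [[1, sigma_star], [sigma_star, 1]], size=n)
Xd = (X > c_star).astype(int)  # binary discretized observations

# Model probability of the (0, 0) cell via the bivariate normal CDF.
cdf = multivariate_normal([0, 0], [[1, sigma_star], [sigma_star, 1]]).cdf([c_star, c_star])
p0 = 0.5  # P(X_j <= 0) for a standard normal with boundary at 0

# Model probabilities of all four cells by inclusion-exclusion.
model = {(0, 0): cdf, (0, 1): p0 - cdf, (1, 0): p0 - cdf, (1, 1): 1 - 2 * p0 + cdf}

# Sample mean of each moment component: empirical frequency minus model CDF.
g_bar = {mk: np.mean((Xd[:, 0] == mk[0]) & (Xd[:, 1] == mk[1])) - pr
         for mk, pr in model.items()}
```

Each entry of `g_bar` should shrink toward zero at the usual $1/\sqrt{n}$ rate, in line with $\mathbb{E}[f_i(\boldsymbol{\theta}^*)] = \mathbf{0}$.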
+
+# F.2. Proof of Theorem 3.1
+
+In this part, we give the detailed derivation of Theorem 3.1. Recalling the GMM objective defined in Equation (5), we minimize
+
+$$
+\hat {J} (\boldsymbol {\theta}) = \hat {g} (\boldsymbol {\theta}) ^ {T} A \hat {g} (\boldsymbol {\theta}),
+$$
+
+where $\hat{g}(\boldsymbol{\theta}) \in \mathbb{R}^{MK}$ is the sample mean of the moment functions and $\boldsymbol{\theta} \in \mathbb{R}^{M + K - 1}$ is the parameter vector of interest. At the estimate $\boldsymbol{\theta} = \hat{\boldsymbol{\theta}}$, we define the Jacobian matrix
+
+$$
+\hat {\mathbf {G}} = \frac {\partial \hat {g} (\hat {\boldsymbol {\theta}})}{\partial \hat {\boldsymbol {\theta}}} \in \mathbb {R} ^ {M K \times (M + K - 1)}.
+$$
+
+Using the chain rule, we have
+
+$$
+\frac {\partial \hat {J} (\hat {\boldsymbol {\theta}})}{\partial \hat {\boldsymbol {\theta}}} = 2 \mathbf {\hat {G}} ^ {T} \mathbf {A} \hat {g} (\hat {\boldsymbol {\theta}}).
+$$
+
+We note that $\hat{\boldsymbol{\theta}}$ is the minimizer of $\hat{J}(\boldsymbol{\theta})$, so the gradient must vanish there:
+
+$$
+2 \hat{\mathbf{G}}^T \mathbf{A} \hat{g}(\hat{\boldsymbol{\theta}}) = \mathbf{0}. \tag{14}
+$$
+
+Applying a first-order Taylor expansion around $\hat{\boldsymbol{\theta}}$, we have
+
+$$
+\hat {g} \left(\boldsymbol {\theta} ^ {*}\right) = \hat {g} (\hat {\boldsymbol {\theta}}) + \hat {\mathbf {G}} \left(\boldsymbol {\theta} ^ {*} - \hat {\boldsymbol {\theta}}\right) + \dots ,
+$$
+
+where second- and higher-order terms are omitted. Rearranging the equation above, we have
+
+$$
+\hat {g} (\hat {\boldsymbol {\theta}}) = \hat {g} \left(\boldsymbol {\theta} ^ {*}\right) - \hat {\mathbf {G}} \left(\boldsymbol {\theta} ^ {*} - \hat {\boldsymbol {\theta}}\right). \tag {15}
+$$
+
+Substituting Equation (15) into the first-order condition Equation (14) yields
+
+$$
+2 \hat {\mathbf {G}} ^ {T} \mathbf {A} \left(\hat {g} \left(\boldsymbol {\theta} ^ {*}\right) - \hat {\mathbf {G}} \left(\boldsymbol {\theta} ^ {*} - \hat {\boldsymbol {\theta}}\right)\right) = \mathbf {0}. \tag {16}
+$$
+
+Simplifying and rearranging terms, we obtain the difference between the estimator and the true parameter:
+
+$$
+\begin{array}{l} \hat {\boldsymbol {\theta}} - \boldsymbol {\theta} ^ {*} = - (\hat {\mathbf {G}} ^ {T} \mathbf {A} \hat {\mathbf {G}}) ^ {- 1} \hat {\mathbf {G}} ^ {T} \mathbf {A} \hat {g} (\boldsymbol {\theta} ^ {*}) \\ = - \frac {1}{n} \sum_ {i = 1} ^ {n} \left(\hat {\mathbf {G}} ^ {T} \mathbf {A} \hat {\mathbf {G}}\right) ^ {- 1} \hat {\mathbf {G}} ^ {T} \mathbf {A} f _ {i} \left(\boldsymbol {\theta} ^ {*}\right). \\ \end{array}
+$$
+
+According to the Central Limit Theorem, as $n \to +\infty$, the scaled sample average of the moment functions is asymptotically normal, with mean zero by the moment condition:
+
+$$
+\frac{1}{\sqrt{n}} \sum_{i = 1}^{n} f_i(\boldsymbol{\theta}^*) \stackrel{d}{\rightarrow} N\left(\mathbf{0}, \mathbb{E}[f_i(\boldsymbol{\theta}^*) f_i(\boldsymbol{\theta}^*)^T]\right).
+$$
+
+We note that the Jacobian term
+
+$$
+\hat {\mathbf {G}} = \frac {\partial \hat {g} (\hat {\boldsymbol {\theta}})}{\partial \hat {\boldsymbol {\theta}}} = \mathbb {E} _ {n} \left[ \frac {\partial f _ {i} (\hat {\boldsymbol {\theta}})}{\partial \hat {\boldsymbol {\theta}}} \right]
+$$
+
+since the sample terms do not depend on the parameter $\boldsymbol{\theta}$, so differentiation and sample averaging commute. According to the Law of Large Numbers, as $n \to +\infty$, the estimated parameter $\hat{\boldsymbol{\theta}} \xrightarrow{p} \boldsymbol{\theta}^*$. Thus, the Jacobian
+
+$$
+\hat {\mathbf {G}} \stackrel {p} {\rightarrow} \mathbf {G} := \mathbb {E} [ \frac {\partial f _ {i} (\boldsymbol {\theta} ^ {*})}{\partial \boldsymbol {\theta} ^ {*}} ].
+$$
+
+Let $\mathbf{S} := \mathbb{E}[f_i(\boldsymbol{\theta}^*) f_i(\boldsymbol{\theta}^*)^T]$ for notational simplicity. By Slutsky's theorem, we have
+
+$$
+\sqrt {n} \left(\hat {\boldsymbol {\theta}} - \boldsymbol {\theta} ^ {*}\right) \stackrel {d} {\rightarrow} N \left(\mathbf {0}, \left(\mathbf {G} ^ {T} \mathbf {A} \mathbf {G}\right) ^ {- 1} \mathbf {G} ^ {T} \mathbf {A S A G} \left(\mathbf {G} ^ {T} \mathbf {A} \mathbf {G}\right) ^ {- 1}\right). \tag {17}
+$$
+
+Since $\hat{\sigma}_{j_1j_2}$ is simply the first element of $\hat{\boldsymbol{\theta}}$, we conclude that
+
+$$
+\sqrt {n} \left(\hat {\sigma} _ {j _ {1} j _ {2}} - \sigma_ {j _ {1} j _ {2}}\right) \xrightarrow {d} N \left(0, \left[ \left(\mathbf {G} ^ {T} \mathbf {A} \mathbf {G}\right) ^ {- 1} \mathbf {G} ^ {T} \mathbf {A S A G} \left(\mathbf {G} ^ {T} \mathbf {A G}\right) ^ {- 1} \right] _ {1 1}\right), \tag {18}
+$$
+
+which concludes the proof.
+
+# F.3. Proof of Lemma 3.2
+
+In this part, we give the detailed derivation of Lemma 3.2. Our proof has two parts: we first derive the specific form the variance takes when $\mathbf{A} \xrightarrow{p} \mathbf{S}^{-1}$; we then establish its optimality by showing $(\mathbf{G}^T \mathbf{A} \mathbf{G})^{-1} \mathbf{G}^T \mathbf{A} \mathbf{S} \mathbf{A} \mathbf{G} (\mathbf{G}^T \mathbf{A} \mathbf{G})^{-1} \succeq (\mathbf{G}^T \mathbf{S}^{-1} \mathbf{G})^{-1}$.
+
+When $n \to +\infty$ , the positive semi-definite weighting matrix $\mathbf{A}$ converges to the $\mathbf{S}^{-1}$ , the variance of the original asymptotical defined in Theorem 3.1, will be written as:
+
+$$
+\begin{array}{l} (\mathbf {G} ^ {T} \mathbf {A} \mathbf {G}) ^ {- 1} \mathbf {G} ^ {T} \mathbf {A} \mathbf {S} \mathbf {A} \mathbf {G} (\mathbf {G} ^ {T} \mathbf {A} \mathbf {G}) ^ {- 1} = (\mathbf {G} ^ {T} \mathbf {S} ^ {- 1} \mathbf {G}) ^ {- 1} \mathbf {G} ^ {T} \mathbf {S} ^ {- 1} \mathbf {S} \mathbf {S} ^ {- 1} \mathbf {G} (\mathbf {G} ^ {T} \mathbf {S} ^ {- 1} \mathbf {G}) ^ {- 1} \\ = \left(\mathbf {G} ^ {T} \mathbf {S} ^ {- 1} \mathbf {G}\right) ^ {- 1} \mathbf {G} ^ {T} \mathbf {S} ^ {- 1} \mathbf {G} \left(\mathbf {G} ^ {T} \mathbf {S} ^ {- 1} \mathbf {G}\right) ^ {- 1} \tag {19} \\ = (\mathbf {G} ^ {T} \mathbf {S} ^ {- 1} \mathbf {G}) ^ {- 1}. \\ \end{array}
+$$
+
+That is,
+
+$$
+\sqrt {n} \left(\hat {\boldsymbol {\theta}} - \boldsymbol {\theta} ^ {*}\right) \stackrel {d} {\rightarrow} N \left(\mathbf {0}, \left(\mathbf {G} ^ {T} \mathbf {S} ^ {- 1} \mathbf {G}\right) ^ {- 1}\right). \tag {20}
+$$
+
+Since $\hat{\sigma}_{j_1j_2}$ is simply the first element of $\hat{\boldsymbol{\theta}}$, we conclude that
+
+$$
+\sqrt {n} \left(\hat {\sigma} _ {j _ {1} j _ {2}} - \sigma_ {j _ {1} j _ {2}}\right) \xrightarrow {d} N \left(0, \left[ \left(\mathbf {G} ^ {T} \mathbf {S} ^ {- 1} \mathbf {G}\right) ^ {- 1} \right] _ {1 1}\right), \tag {21}
+$$
+
+which concludes the first part of the proof. We now turn to the second part.
+
+First, we factor $\mathbf{S} = \mathbf{C}\mathbf{C}^T$, where $\mathbf{C} \in \mathbb{R}^{MK \times MK}$ is non-singular. Second, we let
+
+$$
+\mathbf{H} = \left(\mathbf{G}^T \mathbf{A} \mathbf{G}\right)^{-1} \mathbf{G}^T \mathbf{A} \mathbf{C} - \left(\mathbf{G}^T \mathbf{S}^{-1} \mathbf{G}\right)^{-1} \mathbf{G}^T \mathbf{C}^{-T}.
+$$
+
+Third, we note that
+
+$$
+\mathbf {H C} ^ {- 1} \mathbf {G} = \mathbf {0}.
+$$
+
+Fourth, we verify that
+
+$$
+\left(\mathbf {G} ^ {T} \mathbf {A} \mathbf {G}\right) ^ {- 1} \mathbf {G} ^ {T} \mathbf {A} \mathbf {S} \mathbf {A} \mathbf {G} \left(\mathbf {G} ^ {T} \mathbf {A} \mathbf {G}\right) ^ {- 1} = \mathbf {H} \mathbf {H} ^ {T} + \left(\mathbf {G} ^ {T} \mathbf {S} ^ {- 1} \mathbf {G}\right) ^ {- 1}.
+$$
+
+Since $\mathbf{H}\mathbf{H}^T$ is positive semi-definite, $(\mathbf{G}^T \mathbf{S}^{-1} \mathbf{G})^{-1}$ is a lower bound (in the positive semi-definite order) of $(\mathbf{G}^T \mathbf{A} \mathbf{G})^{-1} \mathbf{G}^T \mathbf{A} \mathbf{S} \mathbf{A} \mathbf{G} (\mathbf{G}^T \mathbf{A} \mathbf{G})^{-1}$, which concludes the proof.
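The positive semi-definite ordering can be verified numerically on random instances: for any admissible weighting matrix $\mathbf{A}$, the sandwich variance minus $(\mathbf{G}^T \mathbf{S}^{-1} \mathbf{G})^{-1}$ should have no negative eigenvalues, and with $\mathbf{A} = \mathbf{S}^{-1}$ the two coincide. A sketch with randomly generated matrices (the dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
mk, q = 8, 3  # illustrative dimensions: MK moments, M + K - 1 parameters

G = rng.standard_normal((mk, q))
B = rng.standard_normal((mk, mk))
S = B @ B.T + mk * np.eye(mk)  # random symmetric positive definite S
C = rng.standard_normal((mk, mk))
A = C @ C.T + mk * np.eye(mk)  # random admissible weighting matrix

def sandwich(G, A, S):
    """Asymptotic variance (G^T A G)^{-1} G^T A S A G (G^T A G)^{-1}."""
    M = np.linalg.inv(G.T @ A @ G)
    return M @ G.T @ A @ S @ A @ G @ M

V_A = sandwich(G, A, S)                            # generic weighting
V_opt = np.linalg.inv(G.T @ np.linalg.inv(S) @ G)  # efficient weighting

# Lemma 3.2: V_A - V_opt must be positive semi-definite.
eigs = np.linalg.eigvalsh(V_A - V_opt)
```

Repeating the draw with other seeds gives the same qualitative result, which is exactly what the $\mathbf{H}\mathbf{H}^T$ decomposition above guarantees.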
+
+# F.4. Proof of Thm. 3.4
+
+We note that the following proof is reproduced from Sun et al. (2024); we include it here for completeness.
+
+# F.4.1. DERIVATION OF EQUATION 8
+
+Consider the latent continuous variables $\mathbf{X} = (X_1, \dots, X_p) \sim N(\mathbf{0}, \boldsymbol{\Sigma})$ and perform the nodewise regression
+
+$$
+X _ {j} = \boldsymbol {X} _ {- j} \beta_ {j} + \epsilon_ {j}, \tag {22}
+$$
+
+where $\mathbf{X}_{-j}$ is $\mathbf{X}$ with $X_j$ removed. We can partition the covariance $\boldsymbol{\Sigma}$ and the precision matrix $\boldsymbol{\Omega} = \boldsymbol{\Sigma}^{-1}$ into blocks corresponding to the predictors $\mathbf{X}_{-j}$ and the outcome variable $X_j$ of our regression:
+
+$$
+\boldsymbol {\Sigma} = \left( \begin{array}{l l} \boldsymbol {\Sigma} _ {j j} & \boldsymbol {\Sigma} _ {j - j} \\ \boldsymbol {\Sigma} _ {- j j} & \boldsymbol {\Sigma} _ {- j - j} \end{array} \right) \quad \boldsymbol {\Omega} = \left( \begin{array}{l l} \boldsymbol {\Omega} _ {j j} & \boldsymbol {\Omega} _ {j - j} \\ \boldsymbol {\Omega} _ {- j j} & \boldsymbol {\Omega} _ {- j - j} \end{array} \right). \tag {23}
+$$
+
+As in ordinary linear regression, the population regression coefficient (the limit of the least-squares estimate as $n \to \infty$) is
+
+$$
+\beta_j = \boldsymbol{\Sigma}_{-j-j}^{-1} \boldsymbol{\Sigma}_{-jj}. \tag{24}
+$$
+
+Recall the inversion formula for a block matrix:
+
+$$
+\left[ \begin{array}{l l} A & B \\ C & D \end{array} \right] ^ {- 1} = \left[ \begin{array}{c c} (A - B D ^ {- 1} C) ^ {- 1} & - (A - B D ^ {- 1} C) ^ {- 1} B D ^ {- 1} \\ - D ^ {- 1} C (A - B D ^ {- 1} C) ^ {- 1} & D ^ {- 1} + D ^ {- 1} C (A - B D ^ {- 1} C) ^ {- 1} B D ^ {- 1} \end{array} \right]. \tag {25}
+$$
+
+If $A$ and $D$ are invertible, we have
+
+$$
+\left[ \begin{array}{c c} A & B \\ C & D \end{array} \right] ^ {- 1} = \left[ \begin{array}{c c} (A - B D ^ {- 1} C) ^ {- 1} & 0 \\ 0 & (D - C A ^ {- 1} B) ^ {- 1} \end{array} \right] \left[ \begin{array}{c c} I & - B D ^ {- 1} \\ - C A ^ {- 1} & I \end{array} \right]. \tag {26}
+$$
+
+Thus, we can get:
+
+$$
+\boldsymbol {\Omega} _ {j j} = \left(\boldsymbol {\Sigma} _ {j j} - \boldsymbol {\Sigma} _ {j - j} \boldsymbol {\Sigma} _ {- j - j} ^ {- 1} \boldsymbol {\Sigma} _ {- j j}\right) ^ {- 1};
+$$
+
+$$
+\boldsymbol {\Omega} _ {j - j} = - \left(\boldsymbol {\Sigma} _ {j j} - \boldsymbol {\Sigma} _ {j - j} \boldsymbol {\Sigma} _ {- j - j} ^ {- 1} \boldsymbol {\Sigma} _ {- j j}\right) ^ {- 1} \boldsymbol {\Sigma} _ {j - j} \left(\boldsymbol {\Sigma} _ {- j - j}\right) ^ {- 1}. \tag {27}
+$$
+
+Moving one step further:
+
+$$
+- \boldsymbol {\Omega} _ {j j} ^ {- 1} \boldsymbol {\Omega} _ {j - j} = \boldsymbol {\Sigma} _ {j - j} \left(\boldsymbol {\Sigma} _ {- j - j}\right) ^ {- 1}. \tag {28}
+$$
+
+Taking the transpose of both sides, and using that $\boldsymbol{\Omega}$ is symmetric so that $\boldsymbol{\Omega}_{-jj} = \boldsymbol{\Omega}_{j-j}^T$, we have
+
+$$
+- \boldsymbol {\Omega} _ {j j} ^ {- 1} \boldsymbol {\Omega} _ {- j j} = \boldsymbol {\Sigma} _ {- j - j} ^ {- 1} \boldsymbol {\Sigma} _ {- j j} = \beta_ {j}. \tag {29}
+$$
+
+We note that testing $\boldsymbol{\Omega}_{-jj} = 0$ is equivalent to testing $\beta_j = 0$, since $\boldsymbol{\Omega}_{jj}$ is always nonzero. The vector $\boldsymbol{\Omega}_{-jj}$ captures the CI of $X_j$ with the other variables. Since $\boldsymbol{\Omega}_{jj}$ is a single scalar, we can get
+
+$$
+\beta_ {j, k} = - \frac {\omega_ {j k}}{\omega_ {j j}} \tag {30}
+$$
+
+capturing the CI relationship between $X_j$ and $X_k$, conditioning on all other variables.
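The chain of identities above, culminating in Equation (30), can be verified directly on a random covariance matrix; a quick sketch:

```python
import numpy as np

rng = np.random.default_rng(4)
p, j = 5, 0  # regress X_j on the remaining variables
L = rng.standard_normal((p, p))
Sigma = L @ L.T + p * np.eye(p)  # random positive definite covariance
Omega = np.linalg.inv(Sigma)     # precision matrix

mask = np.arange(p) != j
# Population regression coefficients: beta_j = Sigma_{-j-j}^{-1} Sigma_{-jj}.
beta_j = np.linalg.solve(Sigma[np.ix_(mask, mask)], Sigma[mask, j])

# Equation (30): beta_{j,k} = -omega_{jk} / omega_{jj}.
beta_from_precision = -Omega[mask, j] / Omega[j, j]
```

The two vectors agree to machine precision, confirming that zero precision entries correspond to zero regression coefficients.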
+
+# F.4.2. DETAILED DERIVATION OF INFERENCE FOR $\beta_{j}$
+
+Nodewise regression allows us to use the regression parameter $\beta_j$ as a surrogate for $\boldsymbol{\Omega}_{-jj}$. The problem now reduces to constructing inference for $\beta_j$, specifically deriving the distribution of $\hat{\beta}_j - \beta_j$. The overarching idea is that we already know the distribution of $\hat{\sigma}_{j_1j_2} - \sigma_{j_1j_2}$, and that there is a deterministic relationship between $\beta_j$ and $\boldsymbol{\Sigma}$. Consequently, we can express $\hat{\beta}_j - \beta_j$ as a composite of the terms $\hat{\sigma}_{j_1j_2} - \sigma_{j_1j_2}$ to establish such inference. Specifically, we have
+
+$$
+\begin{array}{l} \hat {\beta} _ {j} - \beta_ {j} = \hat {\Sigma} _ {- j - j} ^ {- 1} \hat {\Sigma} _ {- j j} - \Sigma_ {- j - j} ^ {- 1} \Sigma_ {- j j} \\ = \hat {\boldsymbol {\Sigma}} _ {- j - j} ^ {- 1} \left(\hat {\boldsymbol {\Sigma}} _ {- j j} - \hat {\boldsymbol {\Sigma}} _ {- j - j} \boldsymbol {\Sigma} _ {- j - j} ^ {- 1} \boldsymbol {\Sigma} _ {- j j}\right) \\ = - \hat {\boldsymbol {\Sigma}} _ {- j - j} ^ {- 1} \left(\hat {\boldsymbol {\Sigma}} _ {- j - j} \beta_ {j} - \boldsymbol {\Sigma} _ {- j - j} \beta_ {j} + \boldsymbol {\Sigma} _ {- j - j} \beta_ {j} - \hat {\boldsymbol {\Sigma}} _ {- j j}\right) \tag {31} \\ = - \hat {\boldsymbol {\Sigma}} _ {- j - j} ^ {- 1} \left((\hat {\boldsymbol {\Sigma}} _ {- j - j} - \boldsymbol {\Sigma} _ {- j - j}) \beta_ {j} - (\hat {\boldsymbol {\Sigma}} _ {- j j} - \boldsymbol {\Sigma} _ {- j j})\right), \\ \end{array}
+$$
+
+where each entry of the matrices $(\hat{\Sigma}_{-j-j} - \Sigma_{-j-j})$ and $(\hat{\Sigma}_{-jj} - \Sigma_{-jj})$ is the difference between an estimated covariance and the corresponding true covariance.
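+The decomposition above is an exact algebraic identity, which the following simulation sketch (an arbitrary covariance matrix and sample size of our own choosing) verifies for a sample covariance estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, j = 4, 2000, 0
others = [k for k in range(p) if k != j]

# Hypothetical true covariance and a sample estimate of it.
A = rng.standard_normal((p, p))
Sigma = A @ A.T + p * np.eye(p)  # positive definite by construction
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
Sigma_hat = np.cov(X, rowvar=False, bias=True)

S, S_hat = Sigma[np.ix_(others, others)], Sigma_hat[np.ix_(others, others)]
s, s_hat = Sigma[others, j], Sigma_hat[others, j]

beta = np.linalg.solve(S, s)
beta_hat = np.linalg.solve(S_hat, s_hat)

# Last line of (31): beta_hat - beta = -S_hat^{-1}((S_hat - S) beta - (s_hat - s))
lhs = beta_hat - beta
rhs = -np.linalg.solve(S_hat, (S_hat - S) @ beta - (s_hat - s))
assert np.allclose(lhs, rhs)
```

Because (31) is an identity rather than an approximation, the two sides match to floating-point precision for any sample.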
+
+Suppose that we want to test the CI of the variable $X_{1}$ with the other variables, i.e., $j = 1$. Then
+
+$$
+\begin{aligned} \hat{\boldsymbol{\Sigma}}_{-1-1} - \boldsymbol{\Sigma}_{-1-1} &= \begin{bmatrix} \hat{\sigma}_{22} & \dots & \hat{\sigma}_{2p} \\ \vdots & \ddots & \vdots \\ \hat{\sigma}_{p2} & \dots & \hat{\sigma}_{pp} \end{bmatrix} - \begin{bmatrix} \sigma_{22} & \dots & \sigma_{2p} \\ \vdots & \ddots & \vdots \\ \sigma_{p2} & \dots & \sigma_{pp} \end{bmatrix} \quad (32) \\ &:= \frac{1}{n} \sum_{i=1}^{n} \begin{bmatrix} \xi_{22}^{i} & \dots & \xi_{2p}^{i} \\ \vdots & \ddots & \vdots \\ \xi_{p2}^{i} & \dots & \xi_{pp}^{i} \end{bmatrix}, \quad (33) \end{aligned}
+$$
+
+where $\{\xi_{j_1,j_2}^i\}$ are i.i.d. random variables whose specific forms are defined in Theorem 3.1 for one-step GMM and in Lemma 3.2 for two-step GMM, respectively. Putting these together:
+
+$$
+\hat {\beta} _ {1} - \beta_ {1} = \left[ \begin{array}{c} \hat {\beta} _ {1, 2} - \beta_ {1, 2} \\ \hat {\beta} _ {1, 3} - \beta_ {1, 3} \\ \dots \\ \hat {\beta} _ {1, p} - \beta_ {1, p} \end{array} \right] = - \hat {\boldsymbol {\Sigma}} _ {- 1 - 1} ^ {- 1} \frac {1}{n} \sum_ {i = 1} ^ {n} \left(\left[ \begin{array}{c c c c} \xi_ {2 2} ^ {i} & \xi_ {2 3} ^ {i} & \dots & \xi_ {2, p} ^ {i} \\ \xi_ {3 2} ^ {i} & \xi_ {3 3} ^ {i} & \dots & \xi_ {3 p} ^ {i} \\ \dots & \dots & \dots & \dots \\ \xi_ {p 2} ^ {i} & \xi_ {p 3} ^ {i} & \dots & \xi_ {p p} ^ {i} \end{array} \right] \left[ \begin{array}{c} \beta_ {1, 2} \\ \beta_ {1, 3} \\ \dots \\ \beta_ {1, p} \end{array} \right] - \left[ \begin{array}{c} \xi_ {2 1} ^ {i} \\ \xi_ {3 1} ^ {i} \\ \dots \\ \xi_ {p 1} ^ {i} \end{array} \right]\right). \tag {34}
+$$
+
+As $\frac{1}{n}\sum_{i=1}^{n}\xi_{j_1j_2}^i$ is asymptotically normal, the whole vector $\hat{\beta}_1 - \beta_1$ is a linear combination of Gaussian random variables. However, we cannot simply take a linear combination of the individual variances, as the components are dependent on each other. For example, if $Y_1, Y_2$ are dependent and we want to find $Var(aY_1 + bY_2)$, we should have
+
+$$
+\operatorname {V a r} \left(a Y _ {1} + b Y _ {2}\right) = \left[ \begin{array}{l l} a & b \end{array} \right] \left[ \begin{array}{c c} \operatorname {V a r} \left(Y _ {1}\right) & \operatorname {C o v} \left(Y _ {1}, Y _ {2}\right) \\ \operatorname {C o v} \left(Y _ {1}, Y _ {2}\right) & \operatorname {V a r} \left(Y _ {2}\right) \end{array} \right] \left[ \begin{array}{l} a \\ b \end{array} \right]. \tag {35}
+$$
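+The quadratic form in (35) can be checked by simulation; the sketch below uses arbitrary constants $a, b$ and an arbitrary covariance matrix of our own choosing:

```python
import numpy as np

# Hypothetical constants and covariance for two dependent Gaussians.
a, b = 2.0, -1.5
cov = np.array([[1.0, 0.4],
                [0.4, 2.0]])

# Theoretical variance via the quadratic form in (35):
# a^2 Var(Y1) + 2ab Cov(Y1, Y2) + b^2 Var(Y2) = 4 - 2.4 + 4.5 = 6.1
theory = np.array([a, b]) @ cov @ np.array([a, b])

# Monte Carlo check.
rng = np.random.default_rng(1)
Y = rng.multivariate_normal([0.0, 0.0], cov, size=200_000)
empirical = np.var(a * Y[:, 0] + b * Y[:, 1])

assert abs(empirical - theory) < 0.1
```

The off-diagonal covariance term is exactly what a naive sum of variances would miss.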
+
+Now, suppose we are interested in the distribution of $\hat{\beta}_{1,2} - \beta_{1,2}$. We have
+
+$$
+\hat{\beta}_{1,2} - \beta_{1,2} = -\frac{1}{n} \sum_{i=1}^{n} \left(\hat{\boldsymbol{\Sigma}}_{-1-1}^{-1}\right)_{[2],:} \left(\begin{bmatrix} \xi_{2,2}^{i} & \xi_{2,3}^{i} & \dots & \xi_{2,p}^{i} \\ \xi_{3,2}^{i} & \xi_{3,3}^{i} & \dots & \xi_{3,p}^{i} \\ \vdots & \vdots & \ddots & \vdots \\ \xi_{p,2}^{i} & \xi_{p,3}^{i} & \dots & \xi_{p,p}^{i} \end{bmatrix} \begin{bmatrix} \beta_{1,2} \\ \beta_{1,3} \\ \vdots \\ \beta_{1,p} \end{bmatrix} - \begin{bmatrix} \xi_{2,1}^{i} \\ \xi_{3,1}^{i} \\ \vdots \\ \xi_{p,1}^{i} \end{bmatrix}\right), \tag{36}
+$$
+
+where $(\hat{\Sigma}_{-1-1}^{-1})_{[2],:}$ is the row of $\hat{\Sigma}_{-1-1}^{-1}$ corresponding to $X_{2}$ ($[2]$ denotes the index of the variable; e.g., $(\hat{\Sigma}_{-1,-1}^{-1})_{[2],:}$ is the first row of $\hat{\Sigma}_{-1,-1}^{-1}$, since the row of the first variable has been removed). For ease of notation, we define
+
+$$
+\boldsymbol{Y}^{i} := \boldsymbol{\Xi}_{-1,-1}^{i} = \begin{bmatrix} \xi_{2,2}^{i} & \xi_{2,3}^{i} & \dots & \xi_{2,p}^{i} \\ \xi_{3,2}^{i} & \xi_{3,3}^{i} & \dots & \xi_{3,p}^{i} \\ \vdots & \vdots & \ddots & \vdots \\ \xi_{p,2}^{i} & \xi_{p,3}^{i} & \dots & \xi_{p,p}^{i} \end{bmatrix} \in \mathbb{R}^{(p-1) \times (p-1)}, \quad \boldsymbol{v}^{i} := \boldsymbol{\Xi}_{-1,1}^{i} = \begin{bmatrix} \xi_{2,1}^{i} \\ \xi_{3,1}^{i} \\ \vdots \\ \xi_{p,1}^{i} \end{bmatrix} \in \mathbb{R}^{p-1}, \tag{37}
+$$
+
+$$
+\boldsymbol {u} := (\hat {\boldsymbol {\Sigma}} _ {- 1, - 1} ^ {- 1}) _ {[ 2 ],:} ^ {T} \in \mathbb {R} ^ {p - 1} \qquad \boldsymbol {w} := \left[ \begin{array}{c} \beta_ {1, 2} \\ \beta_ {1, 3} \\ \dots \\ \beta_ {1, p} \end{array} \right] \in \mathbb {R} ^ {p - 1}.
+$$
+
+We can rewrite the equation as
+
+$$
+\hat{\beta}_{1,2} - \beta_{1,2} = -\frac{1}{n} \sum_{i=1}^{n} \boldsymbol{u}^{T} \left(\boldsymbol{Y}^{i} \boldsymbol{w} - \boldsymbol{v}^{i}\right).
+$$
+
+We note that $\mathbf{Y}^i$ and $\mathbf{v}^i$ are random variables, while $\mathbf{u}$ and $\mathbf{w}$ are constants (just like $a$ and $b$ in the example $aY_1 + bY_2$). We further let $m = p - 1$ to simplify the notation. We can thus write the equation above in vector form:
+
+$$
+\begin{aligned} \hat{\beta}_{1,2} - \beta_{1,2} &= -\frac{1}{n} \sum_{i=1}^{n} \left[ u_1, \ldots, u_m, u_1 w_1, u_1 w_2, \ldots, u_m w_m \right] \begin{bmatrix} -v_1^i \\ \vdots \\ -v_m^i \\ Y_{11}^i \\ Y_{12}^i \\ \vdots \\ Y_{mm}^i \end{bmatrix} \\ &= -\frac{1}{n} \sum_{i=1}^{n} [\boldsymbol{u}^T, \operatorname{vec}(\boldsymbol{u}\boldsymbol{w}^T)^T] \begin{bmatrix} -\boldsymbol{v}^i \\ \operatorname{vec}(\boldsymbol{Y}^i) \end{bmatrix}, \end{aligned}
+$$
+
+where $u_{k}$ is the $k$-th element of the vector $\mathbf{u}$, $Y_{jk}^{i}$ is the entry in the $j$-th row and $k$-th column of the matrix $\mathbf{Y}^{i}$, and $\operatorname{vec}$ denotes the row-wise vectorization of a matrix, e.g.,
+
+$$
+\operatorname{vec}(\boldsymbol{Y}^{i}) = \begin{bmatrix} Y_{11} \\ Y_{12} \\ Y_{13} \\ \vdots \\ Y_{mm} \end{bmatrix} \in \mathbb{R}^{m^2}.
+$$
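+The vectorization step relies on the identity $\boldsymbol{u}^T \boldsymbol{Y} \boldsymbol{w} = \operatorname{vec}(\boldsymbol{u}\boldsymbol{w}^T)^T \operatorname{vec}(\boldsymbol{Y})$ for row-wise $\operatorname{vec}$; a quick numerical check with random values of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(2)
m = 3
u, w = rng.standard_normal(m), rng.standard_normal(m)
Y = rng.standard_normal((m, m))
v = rng.standard_normal(m)

# Row-wise vectorization is C-order reshape in NumPy.
vecY = Y.reshape(-1)

# u^T (Y w - v) = [u^T, vec(u w^T)^T] @ [-v; vec(Y)]
lhs = u @ (Y @ w - v)
rhs = np.concatenate([u, np.outer(u, w).reshape(-1)]) @ np.concatenate([-v, vecY])
assert np.isclose(lhs, rhs)
```

This is what lets the dependent sum be written as a single fixed coefficient vector times one random vector per sample.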
+
+Similarly to Equation (35), the variance is calculated as
+
+$$
+\operatorname{Var}\left(\sqrt{n}\left(\hat{\beta}_{1,2} - \beta_{1,2}\right)\right) = [\boldsymbol{u}^T, \operatorname{vec}(\boldsymbol{u}\boldsymbol{w}^T)^T] \, \frac{1}{n} \sum_{i=1}^{n} \begin{bmatrix} -\boldsymbol{v}^i \\ \operatorname{vec}(\boldsymbol{Y}^i) \end{bmatrix} \begin{bmatrix} -\boldsymbol{v}^i \\ \operatorname{vec}(\boldsymbol{Y}^i) \end{bmatrix}^{T} \begin{bmatrix} \boldsymbol{u} \\ \operatorname{vec}(\boldsymbol{u}\boldsymbol{w}^T) \end{bmatrix}.
+$$
+
+Now we return to the notation of $\xi$ and $\Sigma$. Under the null hypothesis that $X_{1} \perp X_{2} \mid X_{\text{others}}$, we have $\beta_{1,2} = 0$. We thus use $\tilde{\beta}_{1}$ to denote $\beta_{1}$ with $\beta_{1,2}$ set to $0$. Let
+
+$$
+\boldsymbol{B}_{-1}^{i} = \begin{pmatrix} \xi_{21}^{i} & \xi_{31}^{i} & \ldots & \xi_{p1}^{i} \\ \xi_{22}^{i} & \xi_{23}^{i} & \ldots & \xi_{2p}^{i} \\ \xi_{32}^{i} & \xi_{33}^{i} & \ldots & \xi_{3p}^{i} \\ \vdots & \vdots & \ddots & \vdots \\ \xi_{p2}^{i} & \xi_{p3}^{i} & \ldots & \xi_{pp}^{i} \end{pmatrix} = \begin{bmatrix} (\boldsymbol{\Xi}_{-11}^{i})^{T} \\ \boldsymbol{\Xi}_{-1-1}^{i} \end{bmatrix},
+$$
+
+and
+
+$$
+\boldsymbol{a}^{[2]} = \begin{bmatrix} -(\hat{\boldsymbol{\Sigma}}_{-1,-1}^{-1})_{[2],:}^{T} \\ \operatorname{vec}\left((\hat{\boldsymbol{\Sigma}}_{-1,-1}^{-1})_{[2],:}^{T} \tilde{\boldsymbol{\beta}}_{1}^{T}\right) \end{bmatrix}.
+$$
+
+Similarly to (35), the variance is calculated as
+
+$$
+\operatorname{Var}\left(\sqrt{n}(\hat{\beta}_{1,2} - \beta_{1,2})\right) = \boldsymbol{a}^{[2]T} \frac{1}{n} \sum_{i=1}^{n} \operatorname{vec}(\boldsymbol{B}_{-1}^{i}) \operatorname{vec}(\boldsymbol{B}_{-1}^{i})^{T} \boldsymbol{a}^{[2]}.
+$$
+
+Replacing the indices $1, 2$ with general indices $j, k$, the distribution of $\hat{\beta}_{j,k} - \beta_{j,k}$ is
+
+$$
+\hat{\beta}_{j,k} - \beta_{j,k} \xrightarrow{d} N\left(0, \; \boldsymbol{a}^{[k]T} \frac{1}{n^2} \sum_{i=1}^{n} \operatorname{vec}\left(\boldsymbol{B}_{-j}^{i}\right) \operatorname{vec}\left(\boldsymbol{B}_{-j}^{i}\right)^{T} \boldsymbol{a}^{[k]}\right).
+$$
+
+In practice, we can plug in the estimate of $\beta_{j}$ to estimate the distribution of interest and perform the CI test under the null hypothesis $\beta_{j,k} = 0$.
+
+# G. Formal Claim of Theorem 3.5 and Derivation
+
+In this section, we demonstrate the theoretical advantage of DCT-GMM over DCT. Specifically, the variance of $\hat{\beta}_{j,k} - \beta_{j,k}^{*}$ obtained using DCT-GMM is consistently lower than that of DCT. DCT-GMM and DCT adopt exactly the same strategy to transition from $\hat{\sigma}_{j_1j_2} - \sigma_{j_1j_2}^*$ to $\hat{\beta}_{j,k} - \beta_{j,k}^{*}$, which is simply a linear combination of the terms $\hat{\sigma}_{j_1j_2} - \sigma_{j_1j_2}^*$; hence, a reduction in the variance of $\hat{\sigma}_{j_1j_2} - \sigma_{j_1j_2}^*$ directly translates into a reduction in the variance of $\hat{\beta}_{j,k} - \beta_{j,k}^{*}$. Thus, our task is to prove that the variance of the covariance estimator using DCT, denoted $\mathrm{Var}_{\mathrm{DCT}}(\hat{\sigma}_{j_1j_2} - \sigma_{j_1j_2}^*)$, is consistently greater than that of two-step DCT-GMM, denoted $\mathrm{Var}_{\mathrm{GMM}}(\hat{\sigma}_{j_1j_2} - \sigma_{j_1j_2}^*)$.
+
+The proof is organized into two parts:
+
+1. Review of Variance Derivation of DCT: We first review the derivation of $\mathrm{Var}_{\mathrm{DCT}}(\hat{\sigma}_{j_1j_2} - \sigma_{j_1j_2}^*)$.
+2. Moment Function Selection: Next, we show that with appropriate moment functions, $\mathrm{Var}_{\mathrm{DCT}}(\hat{\sigma}_{j_1j_2} - \sigma_{j_1j_2}^*)$ equals $\mathrm{Var}_{\mathrm{GMM}}(\hat{\sigma}_{j_1j_2} - \sigma_{j_1j_2}^*)$. We then directly use the property of GMM that incorporating valid moment functions leads to lower variance, which concludes the proof.
+
+# G.1. Review of Variance Derivation of DCT
+
+We begin with the derivation of $\mathrm{Var}_{\mathrm{DCT}}(\hat{\sigma}_{j_1j_2} - \sigma_{j_1j_2}^*)$, with a particular focus on the discrete case. For a discretized observed variable pair $\tilde{X}_{j_1}$ and $\tilde{X}_{j_2}$, DCT implicitly treats them as a pair of binary variables. Recalling the definitions in DCT, the parameters of interest are $\theta = (\sigma_{j_1j_2},h_{j_1},h_{j_2})$, with the function
+
+$$
+g (\boldsymbol {\theta}) = \frac {1}{n} \sum_ {i = 1} ^ {n} f _ {i} (\boldsymbol {\theta}) = \left( \begin{array}{c} \hat {\tau} _ {j _ {1} j _ {2}} ^ {i} - \bar {\Phi} \left(h _ {j _ {1}}, h _ {j _ {2}}; \sigma_ {j _ {1} j _ {2}}\right) \\ \hat {\tau} _ {j _ {1}} ^ {i} - \bar {\Phi} \left(h _ {j _ {1}}\right) \\ \hat {\tau} _ {j _ {2}} ^ {i} - \bar {\Phi} \left(h _ {j _ {2}}\right) \end{array} \right). \tag {38}
+$$
+
+For the true parameters $\pmb{\theta}^{*} = (\sigma_{j_{1}j_{2}}^{*},h_{j_{1}}^{*},h_{j_{2}}^{*})$ , we have
+
+$$
+g \left(\boldsymbol {\theta} ^ {*}\right) = \frac {1}{n} \sum_ {i = 1} ^ {n} f _ {i} \left(\boldsymbol {\theta} ^ {*}\right) = \left( \begin{array}{c} \hat {\tau} _ {j _ {1} j _ {2}} ^ {i} - \bar {\Phi} \left(h _ {j _ {1}} ^ {*}, h _ {j _ {2}} ^ {*}; \sigma_ {j _ {1} j _ {2}} ^ {*}\right) \\ \hat {\tau} _ {j _ {1}} ^ {i} - \bar {\Phi} \left(h _ {j _ {1}} ^ {*}\right) \\ \hat {\tau} _ {j _ {2}} ^ {i} - \bar {\Phi} \left(h _ {j _ {2}} ^ {*}\right) \end{array} \right), \tag {39}
+$$
+
+and the function of estimated parameters
+
+$$
+g (\hat {\boldsymbol {\theta}}) = \frac {1}{n} \sum_ {i = 1} ^ {n} f _ {i} (\hat {\boldsymbol {\theta}}) = \left( \begin{array}{c} \hat {\tau} _ {j _ {1} j _ {2}} ^ {i} - \bar {\Phi} \left(\hat {h} _ {j _ {1}}, \hat {h} _ {j _ {2}}; \hat {\sigma} _ {j _ {1} j _ {2}}\right) \\ \hat {\tau} _ {j _ {1}} ^ {i} - \bar {\Phi} \left(\hat {h} _ {j _ {1}}\right) \\ \hat {\tau} _ {j _ {2}} ^ {i} - \bar {\Phi} \left(\hat {h} _ {j _ {2}}\right) \end{array} \right) = \mathbf {0}, \tag {40}
+$$
+
+where
+
+- $\hat{\tau}_{j_1j_2}^i = \mathbb{1}(\tilde{x}_{j_1}^i >\mathbb{E}_n[\tilde{X}_{j_1}],\tilde{x}_{j_2}^i >\mathbb{E}_n[\tilde{X}_{j_2}])$ serving as the estimation of $\tau_{j_1j_2}^i = \mathbb{1}(\tilde{x}_{j_1}^i >\mathbb{E}[\tilde{X}_{j_1}],\tilde{x}_{j_2}^i >\mathbb{E}[\tilde{X}_{j_2}])$
+- $\hat{\tau}_{j_1}^i = \mathbb{1}(\tilde{x}_{j_1}^i >\mathbb{E}_n[\tilde{X}_{j_1}])$ serving as the estimation of $\tau_{j_1}^i = \mathbb{1}(\tilde{x}_{j_1}^i >\mathbb{E}[\tilde{X}_{j_1}])$
+- $\Phi(x, y; z) = \int_{-\infty}^{x} \int_{-\infty}^{y} \frac{1}{2\pi\sqrt{1 - z^2}} \exp\left(-\frac{u_1^2 - 2zu_1u_2 + u_2^2}{2(1 - z^2)}\right) du_1 du_2$ is the cumulative distribution function of a bivariate normal distribution.
+- $\bar{\Phi}(x) = 1 - \Phi(x)$ , $\bar{\Phi}(x, y; z) = 1 - \Phi(x, y; z)$
+- $\Phi(x) = \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi}} \exp \left(-\frac{u^2}{2}\right) du$ is the cumulative distribution function of a standard normal distribution.
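+A minimal simulation sketch of these quantities (assuming NumPy and SciPy are available; the thresholds and latent correlation below are arbitrary choices of our own) confirms that the empirical tail frequencies match the model probabilities at the true parameters, i.e., $E[f_i(\theta^*)] = 0$:

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

# Hypothetical true thresholds and latent correlation (illustrative values).
h1, h2, sigma = 0.3, -0.2, 0.5

# By symmetry, Phi_bar(h1, h2; sigma) = P(Z1 > h1, Z2 > h2) = Phi(-h1, -h2; sigma).
Phi_bar_12 = multivariate_normal.cdf([-h1, -h2], mean=[0.0, 0.0],
                                     cov=[[1.0, sigma], [sigma, 1.0]])

# Simulate the latent bivariate normal and compare sample moments to the model.
rng = np.random.default_rng(0)
Z = rng.multivariate_normal([0.0, 0.0], [[1.0, sigma], [sigma, 1.0]], size=200_000)
tau_12 = np.mean((Z[:, 0] > h1) & (Z[:, 1] > h2))  # sample mean of tau^i_{j1 j2}
tau_1 = np.mean(Z[:, 0] > h1)                      # sample mean of tau^i_{j1}

assert abs(tau_12 - Phi_bar_12) < 0.01
assert abs(tau_1 - norm.sf(h1)) < 0.01
```

Here the latent variables are observed directly only because this is a simulation; in DCT only the binarized indicators are available.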
+
+Our objective is to construct the distribution of $\hat{\sigma}_{j_1j_2} - \sigma_{j_1j_2}$, or equivalently $\hat{\pmb{\theta}} - \pmb{\theta}^*$. By leveraging a Taylor expansion, we can construct the following equation
+
+$$
+g (\hat {\boldsymbol {\theta}}) = g \left(\boldsymbol {\theta} ^ {*}\right) + \frac {\partial g \left(\boldsymbol {\theta} ^ {*}\right)}{\partial \boldsymbol {\theta} ^ {*}} \left(\hat {\boldsymbol {\theta}} - \boldsymbol {\theta} ^ {*}\right) + \dots \tag {41}
+$$
+
+where $\frac{\partial g(\pmb{\theta}^*)}{\partial\pmb{\theta}^*}$ is the Jacobian matrix of the function $g$ at $\pmb{\theta}^*$; second- and higher-order terms are omitted here. Rearranging terms, since $g(\hat{\pmb{\theta}})$ equals zero, we have
+
+$$
+\hat{\boldsymbol{\theta}} - \boldsymbol{\theta}^{*} = -\left(\frac{\partial g\left(\boldsymbol{\theta}^{*}\right)}{\partial \boldsymbol{\theta}^{*}}\right)^{-1} g\left(\boldsymbol{\theta}^{*}\right), \tag{42}
+$$
+
+if the Jacobian is invertible, which always holds in this framework. Expressing $g(\pmb{\theta}^{*})$ in vector form, we have
+
+$$
+\hat{\boldsymbol{\theta}} - \boldsymbol{\theta}^{*} = -\left(\frac{\partial g\left(\boldsymbol{\theta}^{*}\right)}{\partial \boldsymbol{\theta}^{*}}\right)^{-1} \frac{1}{n} \sum_{i=1}^{n} \left( \begin{array}{c} \hat{\tau}_{j_1 j_2}^{i} - \bar{\Phi}\left(h_{j_1}^{*}, h_{j_2}^{*}; \sigma_{j_1 j_2}^{*}\right) \\ \hat{\tau}_{j_1}^{i} - \bar{\Phi}\left(h_{j_1}^{*}\right) \\ \hat{\tau}_{j_2}^{i} - \bar{\Phi}\left(h_{j_2}^{*}\right) \end{array} \right). \tag{43}
+$$
+
+When $n \to +\infty$, the Jacobian matrix $\frac{\partial g(\pmb{\theta}^*)}{\partial \pmb{\theta}^*}$ converges to $E[\frac{\partial f_i(\pmb{\theta}^*)}{\partial \pmb{\theta}^*}]$. It is noteworthy that $E[f_i(\pmb{\theta}^*)] = 0$ by definition. By leveraging the central limit theorem, we have
+
+$$
+n \rightarrow + \infty , \quad \frac {1}{n} \sum_ {i = 1} ^ {n} f _ {i} \left(\boldsymbol {\theta} ^ {*}\right) \sim N \left(\mathbf {0}, \frac {1}{n} E \left[ f _ {i} \left(\boldsymbol {\theta} ^ {*}\right) f _ {i} \left(\boldsymbol {\theta} ^ {*}\right) ^ {T} \right]\right). \tag {44}
+$$
+
+Thus, we have
+
+$$
+\hat {\boldsymbol {\theta}} - \boldsymbol {\theta} ^ {*} \sim N \left(\mathbf {0}, \frac {1}{n} E \left[ \frac {\partial f _ {i} \left(\boldsymbol {\theta} ^ {*}\right)}{\partial \boldsymbol {\theta} ^ {*}} \right] ^ {- 1} E \left[ f _ {i} \left(\boldsymbol {\theta} ^ {*}\right) f _ {i} \left(\boldsymbol {\theta} ^ {*}\right) ^ {T} \right] E \left[ \frac {\partial f _ {i} \left(\boldsymbol {\theta} ^ {*}\right)}{\partial \boldsymbol {\theta} ^ {*}} \right] ^ {- T}\right) \tag {45}
+$$
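+To illustrate the sandwich formula (45) in the simplest possible setting, the sketch below (a scalar example of our own, assuming SciPy is available) uses the single moment $f_i(h) = \mathbb{1}(z_i > h) - \bar{\Phi}(h)$, whose solution is $\hat{h} = \Phi^{-1}(1 - \bar{\tau})$, and checks by Monte Carlo that $\mathrm{Var}(\hat{h})$ matches $G^{-1} S G^{-T}/n$ with $G = \phi(h)$ and $S = \bar{\Phi}(h)(1 - \bar{\Phi}(h))$:

```python
import numpy as np
from scipy.stats import norm

# Arbitrary true threshold, sample size, and replication count (our choices).
h_true, n, reps = 0.5, 2000, 400
rng = np.random.default_rng(5)

# Solve the empirical moment condition in each replication:
# (1/n) sum_i [1(z_i > h) - Phi_bar(h)] = 0  =>  h_hat = Phi^{-1}(1 - tau_bar).
h_hats = np.empty(reps)
for r in range(reps):
    z = rng.standard_normal(n)
    h_hats[r] = norm.ppf(1.0 - np.mean(z > h_true))

# Scalar sandwich variance (45): G = phi(h), S = Phi_bar(h)(1 - Phi_bar(h)).
Pb = norm.sf(h_true)
sandwich = Pb * (1.0 - Pb) / (norm.pdf(h_true) ** 2 * n)

assert abs(np.var(h_hats) / sandwich - 1.0) < 0.3
```

The same mechanics apply to the three-dimensional $\theta$ above, with matrix-valued $G$ and $S$.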
+
+# G.2. Moment Function Selection and Additional Moment Functions
+
+We note that this derivation process closely parallels the one using GMM. Intuitively, if the moment functions of GMM are the same as Equation (38), we obtain a similar distribution. We now provide the formal statement of Theorem 3.5:
+
+Theorem G.1. A GMM estimator whose moment functions include those of Equation (38) as a subset, together with the additional moment functions defined in Equation 4, has strictly lower variance than DCT, whose variance is given in (45).
+
+We now provide the proof of the theorem above. With the appropriate moment functions, the variance of $\hat{\sigma}_{j_1j_2} - \sigma_{j_1j_2}$ obtained using two-step DCT-GMM, $Var_{GMM}(\hat{\sigma}_{j_1j_2} - \sigma_{j_1j_2})$, is exactly the same as that of DCT. Specifically, for the parameters of interest $\theta = (\sigma_{j_1j_2}, h_{j_1}, h_{j_2})$, we define the moment functions to be the same as Equation (38). That is, we solve the following minimization problem:
+
+$$
+\hat {\boldsymbol {\theta}} = \arg \min _ {\boldsymbol {\theta}} g (\boldsymbol {\theta}) ^ {T} \mathbf {A} g (\boldsymbol {\theta}), \tag {46}
+$$
+
+with the moment condition $E[f_i(\pmb{\theta}^*)] = \mathbf{0}$ satisfied for the true parameters $\pmb{\theta}^*$ . According to Lemma 3.2,
+
+$$
+\hat{\boldsymbol{\theta}} - \boldsymbol{\theta}^{*} \sim N\left(\mathbf{0}, \frac{1}{n}\left(\mathbf{G}^{T} \mathbf{A} \mathbf{G}\right)^{-1}\right) \tag{47}
+$$
+
+for the two-step estimation where
+
+- $\mathbf{G} = E\left[\frac{\partial f_{i}\left(\boldsymbol{\theta}^{*}\right)}{\partial\boldsymbol{\theta}^{*}}\right]$
+- $\mathbf{S} = E[f_i(\pmb{\theta}^*)f_i(\pmb{\theta}^*)^T]$
+- $\mathbf{A} = \mathbf{S}^{-1}$
+
+Since $\mathbf{G}$ is invertible, we can rewrite
+
+$$
+\left(\mathbf {G} ^ {T} \mathbf {A} \mathbf {G}\right) ^ {- 1} = \mathbf {G} ^ {- 1} \mathbf {S} \mathbf {G} ^ {- T}, \tag {48}
+$$
+
+which is the same as the variance in Equation (45). That is, $Var_{GMM}(\hat{\sigma}_{j_1j_2} - \sigma_{j_1j_2}) = Var_{DCT}(\hat{\sigma}_{j_1j_2} - \sigma_{j_1j_2})$. However, GMM accommodates additional moment functions (solvable equations, as in Equation 3). Based on the property of GMM (Newey, 2007), adding valid moment functions (the moment functions defined in Equation 4) generally reduces the variance of the parameter estimates, which concludes the proof.
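+Equality (48) can be checked numerically; the sketch below (a random invertible $\mathbf{G}$ and positive-definite $\mathbf{S}$ of our own choosing) confirms that with $\mathbf{A} = \mathbf{S}^{-1}$ and a square invertible $\mathbf{G}$, the two-step GMM variance factor equals the DCT sandwich factor:

```python
import numpy as np

# Random invertible G and positive-definite S (arbitrary illustrative values).
rng = np.random.default_rng(4)
k = 3
G = rng.standard_normal((k, k)) + 3.0 * np.eye(k)  # shifted to keep it invertible
L = rng.standard_normal((k, k))
S = L @ L.T + np.eye(k)                            # PD moment covariance
A = np.linalg.inv(S)                               # optimal two-step weighting

lhs = np.linalg.inv(G.T @ A @ G)                   # efficient-GMM variance factor (47)
rhs = np.linalg.inv(G) @ S @ np.linalg.inv(G).T    # DCT sandwich factor (45)

assert np.allclose(lhs, rhs)
```

With extra (non-square) moment rows appended to $\mathbf{G}$, the equality becomes the "at least as efficient" inequality invoked above.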
\ No newline at end of file
diff --git a/asampleefficientconditionalindependencetestinthepresenceofdiscretization/images.zip b/asampleefficientconditionalindependencetestinthepresenceofdiscretization/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..66f528c7681ccdee293be0b9cc569bd175f4512e
--- /dev/null
+++ b/asampleefficientconditionalindependencetestinthepresenceofdiscretization/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:627c7826a2908b8f4224ffd7af5bd99b3ef30258bda66ddaab52068b6d0cbf70
+size 1399955
diff --git a/asampleefficientconditionalindependencetestinthepresenceofdiscretization/layout.json b/asampleefficientconditionalindependencetestinthepresenceofdiscretization/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..0aaf78bb4570587494f0fdecfabff4be0ef80d6c
--- /dev/null
+++ b/asampleefficientconditionalindependencetestinthepresenceofdiscretization/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e761f0399d9c68a1c73a2e985f4e32b1b4912c72fee17c2ea5a4a06627a0a842
+size 1295291
diff --git a/aselectivelearningmethodfortemporalgraphcontinuallearning/9906d436-0642-4828-881c-3bb469109bbb_content_list.json b/aselectivelearningmethodfortemporalgraphcontinuallearning/9906d436-0642-4828-881c-3bb469109bbb_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..1f57e0451b889f17e235c537f7d845992b8b4343
--- /dev/null
+++ b/aselectivelearningmethodfortemporalgraphcontinuallearning/9906d436-0642-4828-881c-3bb469109bbb_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b530691eebfa515bca7ada6c5dc0b1aeaf1faaebbb879785f9e7eb0731d1ee33
+size 152435
diff --git a/aselectivelearningmethodfortemporalgraphcontinuallearning/9906d436-0642-4828-881c-3bb469109bbb_model.json b/aselectivelearningmethodfortemporalgraphcontinuallearning/9906d436-0642-4828-881c-3bb469109bbb_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..c17e2a7b38618e8fbd355821b2affff200db538b
--- /dev/null
+++ b/aselectivelearningmethodfortemporalgraphcontinuallearning/9906d436-0642-4828-881c-3bb469109bbb_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1b9c0c5a6bc4f97d68d15534dd9dde84afd06dfcb99ed20eae05e61dfbd2c3ab
+size 182027
diff --git a/aselectivelearningmethodfortemporalgraphcontinuallearning/9906d436-0642-4828-881c-3bb469109bbb_origin.pdf b/aselectivelearningmethodfortemporalgraphcontinuallearning/9906d436-0642-4828-881c-3bb469109bbb_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..eaa0f014bd96d95f36546310667983ccd9f0e0dc
--- /dev/null
+++ b/aselectivelearningmethodfortemporalgraphcontinuallearning/9906d436-0642-4828-881c-3bb469109bbb_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e668a6b36bca2bc96f7cb28525f33b49215094a3e483e07ce92eb1c95397c2e3
+size 941478
diff --git a/aselectivelearningmethodfortemporalgraphcontinuallearning/full.md b/aselectivelearningmethodfortemporalgraphcontinuallearning/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..e4acc7f1ff106b8f370ed9b60acabea8f2685f8a
--- /dev/null
+++ b/aselectivelearningmethodfortemporalgraphcontinuallearning/full.md
@@ -0,0 +1,570 @@
+# A Selective Learning Method for Temporal Graph Continual Learning
+
+Hanmo LIU $^{1,2}$ Shimin DI $^{3}$ Haoyang LI $^{4}$ Xun JIAN $^{5}$ Yue WANG $^{6}$ Lei CHEN $^{2,1}$
+
+# Abstract
+
+Node classification is a key task in temporal graph learning (TGL). Real-life temporal graphs often introduce new node classes over time, but existing TGL methods assume a fixed set of classes. This assumption brings limitations, as updating models with full data is costly, while focusing only on new classes results in forgetting old ones. Graph continual learning (GCL) methods mitigate forgetting using old-class subsets but fail to account for their evolution. We define this novel problem as temporal graph continual learning (TGCL), which focuses on efficiently maintaining up-to-date knowledge of old classes. To tackle TGCL, we propose a selective learning framework that substitutes the old-class data with its subsets, named Learning Towards the Future (LTF). We derive an upper bound on the error caused by such replacement and transform it into objectives for selecting and learning subsets that minimize classification error while preserving the distribution of the full old-class data. Experiments on three real-world datasets validate the effectiveness of LTF on TGCL.
+
+# 1. Introduction
+
+Temporal graphs are essential data structures for real-world applications, such as social networks (Baumgartner et al., 2020) and online shopping (Ni et al., 2019). In temporal graphs, the edges and/or nodes change over time, with these additions or deletions captured as a sequence of events (Yang et al., 2023; de Barros et al., 2023). In recent years, various temporal graph learning (TGL) methods have been developed to extract insights from temporal graphs by incorporating temporal-neighbor information into node embeddings (Kumar et al., 2019; Rossi et al., 2020; Xu et al., 2020; Cong et al., 2023; Wen & Fang, 2022; Li & Chen, 2023; Li et al., 2023). The current approaches for processing both structural and temporal information in these graphs utilize a range of model architectures, including message-passing mechanisms (Rossi et al., 2020; Xu et al., 2020; Wen & Fang, 2022), multi-layer perceptrons (MLPs) (Cong et al., 2023; Gardner & Dorling, 1998), and transformers (Yu et al., 2023; Vaswani et al., 2017). A key application of TGL methods is node classification, which is a critical task in the analysis of temporal graphs (Rossi et al., 2020; Xu et al., 2020; Yu et al., 2023). For example, in social networks, TGL methods classify normal and malicious users based on their interactions.
+
+$^{1}$ Hong Kong University of Science and Technology, China $^{2}$ Hong Kong University of Science and Technology (Guangzhou), China $^{3}$ Southeast University, China $^{4}$ Hong Kong Polytechnic University, China $^{5}$ Northwestern Polytechnical University, China $^{6}$ Shenzhen Institute of Computing Sciences, China. Correspondence to: Shimin DI .
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+Figure 1: The differences in temporal graph learning (TGL), graph continual learning (GCL) and temporal graph continual learning (TGCL). At a new period, TGL assumes no data of new classes appear, while GCL assumes static old-class data. TGCL holds neither of these assumptions, thus is more suitable to real-life temporal graphs.
+
+While TGL methods are effective at classifying nodes, they face a significant limitation: they assume a static set of node classes, which does not reflect the dynamic reality of these environments. This static assumption is illustrated in Fig. 1, where node classes under the TGL setting remain unchanged over time. However, real-world scenarios often exhibit an open class setting, where new classes frequently emerge. For example, in social networks, new malicious behaviours continually arise, introducing new node classes into the system (Feng et al., 2023). Due to the fixed class set assumption, adapting current TGL methods to this open class setting presents efficiency and effectiveness challenges. Updating the model for all classes becomes inefficient as the temporal graph grows, while fine-tuning the model for only new classes risks catastrophic forgetting (French & Chater, 2002; Parisi et al., 2019; Masana et al., 2023) of older classes, particularly when their data distribution diverges from past instances.
+
+To address the issue of forgetting when fine-tuning TGL models, continual learning (Parisi et al., 2019; Masana et al., 2023) provides a promising solution. Recently, several graph continual learning (GCL) methods have been proposed to preserve old-class knowledge by either regularizing model parameters associated with previous classes (Liu et al., 2021) or replaying representative subsets of old-class data (Kim et al., 2022; Zhou & Cao, 2021; Chen et al., 2021; Wang et al., 2022; Zhang et al., 2022; Feng et al., 2023). However, existing GCL methods struggle to handle open-class temporal graphs. The major limitation is that they assume the seen data will remain static in the future, as shown in Fig. 1. Such an assumption contradicts the dynamic nature of temporal graphs, making the model outdated for future temporal graphs.
+
+These limitations in current TGL and GCL methods make updating models for open-class temporal graphs a challenging problem, which we define as temporal graph continual learning (TGCL). The challenge in TGCL is how to maintain both the effectiveness and recency of old-class knowledge while ensuring high efficiency. To address this, we propose a selective learning method that identifies and learns representative subsets of old-class data in new temporal graphs, named Learning Towards the Future (LTF). While subset selection is a common approach, detailed analysis of how well these subsets represent the full dataset remains limited, especially when learning from the entire dataset is impractical. To address this, we derive an upper bound on the error introduced by approximating the full dataset with a subset. We transform the upper bound into a subset selection objective to minimize this error. Additionally, guided by the upper bound, we design a regularization loss that aligns the embedding distribution of the selected subset with that of the full dataset to pursue better performance. Our contribution can be summarized as follows:
+
+- We are among the first to investigate how to effectively and efficiently update a model on temporal graphs with emerging new-class data and evolving old-class data, which we term the temporal graph continual learning (TGCL) problem.
+- Selecting representative old-class subsets is crucial for addressing the TGCL problem. To achieve this, we define a selection objective that minimizes the upper-bound error on the old classes.
+
+- The knowledge from the subsets is hard to generalize to the full old-class data. We solve this problem by designing an efficient loss that aligns the distributions of the subset and the full data.
+- We conduct extensive experiments on real-world web data, Yelp, Reddit, and Amazon. The results show that our method is effective while ensuring high efficiency.
+
+# 2. Background
+
+Many real-world scenarios are modeled as temporal graphs (Yang et al., 2023; de Barros et al., 2023; Kazemi et al., 2020), such as social networks and online shopping networks. In this paper, the temporal graph $G = (V,E,T,Y) \sim \mathcal{G}$ is a set of nodes $V$ with labels $Y$ connected by time-stamped events $E$ happening among $V$ within the time period $T$ . $G$ follows the distribution $\mathcal{G}$ . Each event $e = \{u_t,v_t,t\} \in E$ is an interaction (edge) between two nodes $u_{t},v_{t} \in V$ at time $t \in T$ . $G$ can be equivalently expressed as $E$ . Each node $v_{t} \in V$ is related with time $t$ and has a time-dependent feature $\mathbf{x}_t$ .
+
+Suppose the temporal graph has evolved for $N$ time periods $\{T_1, T_2, \dots, T_N\}$ , and the corresponding temporal graphs are noted as $\{G_1, G_2, \dots, G_N\}$ which follow the distributions $\{\mathcal{G}_1, \mathcal{G}_2, \dots, \mathcal{G}_N\}$ . Each period has a new set of classes $\{Y_1, Y_2, \dots, Y_N\}$ , where $Y_i \cap Y_j = \emptyset, \forall i, j \leq N$ and $i \neq j$ . For simplicity, we note the old classes at $T_N$ as $Y_{old} = \cup \{Y_n\}_{n < N}$ . Corresponding to the node classes, $G_N$ can be separated into $G_N^{new} = (V_N^{new}, E_N^{new}, T_N, Y_N)$ and $G_N^{old} = (V_N^{old}, E_N^{old}, T_N, Y_{old})$ , where $G_N = G_N^{old} \cup G_N^{new}$ , $V_N^{old} \cap V_N^{new} = \emptyset$ and $E_N^{old} \cap E_N^{new} \neq \emptyset$ . $E_N^{old}$ and $E_N^{new}$ are overlapping at the events connecting between $V_N^{old}$ and $V_N^{new}$ . It is worth noting that $G_N^{old}$ has the same set of classes as $G_{N-1}$ , but the data distribution is different due to the temporal evolution. The illustration on relationships among $G_{N-1}$ , $G_N^{old}$ and $G_N^{new}$ and the notation summary are presented in Appendix A.
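+As a concrete (and purely illustrative) rendering of this notation, a temporal graph can be stored as its event list $E$, with each period $T_n$ selecting a time slice; all names and values below are our own sketch, not the paper's implementation:

```python
from dataclasses import dataclass

@dataclass
class Event:
    u: int      # source node u_t
    v: int      # target node v_t
    t: float    # timestamp t

# G can be equivalently expressed as its event list E.
E = [Event(0, 1, 0.5), Event(1, 2, 1.2), Event(0, 2, 3.4)]
periods = [(0.0, 2.0), (2.0, 4.0)]  # T_1, T_2 partition the timeline

def events_in_period(events, lo, hi):
    """Events whose timestamps fall in the half-open period [lo, hi)."""
    return [e for e in events if lo <= e.t < hi]

assert len(events_in_period(E, *periods[0])) == 2  # G_1 has two events
assert len(events_in_period(E, *periods[1])) == 1  # G_2 has one event
```

Splitting $G_N$ into $G_N^{old}$ and $G_N^{new}$ would additionally filter events by the class labels of their endpoint nodes.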
+
+# 2.1. Temporal Graph Learning
+
+In recent years, many temporal graph learning methods have been proposed (Parisi et al., 2019; Masana et al., 2023) to extract knowledge from temporal graphs. Under the fixed class set assumption, at a new period $T_N$, TGL methods for the node classification task aim to minimize the classification error of the model $h$ on $G_N^{\text{old}}$, i.e., the part of $G_N$ containing only $Y_{old}$. Suppose that $h$ is a binary classification hypothesis (model) from a hypothesis space $\mathcal{H}$ with finite VC-dimension; the TGL objective is formulated as:
+
+$$
+\tilde {h} _ {N} = \arg \min _ {h \in \mathcal {H}} \epsilon (h | \mathcal {G} _ {N} ^ {\text {o l d}}), \tag {1}
+$$
+
+where $\epsilon(h|\mathcal{G}) \coloneqq \mathbb{E}_{v_t \in \mathcal{G}}[h(v_t) \neq f(v_t)]$ is the expected classification error of $h$ on any distribution $\mathcal{G}$ , and $f(\cdot)$ is an unknown deterministic function that gives the ground truth classification on each $v_t$ .
+
+Early TGL works (Wu et al., 2021; Skarding et al., 2021) aggregate the events of temporal graphs into a sequence of snapshots, which loses fine-grained continuous-time information. Thus, recent TGL methods preserve events as the basic training instances (Yang et al., 2023). As a pioneer, JODIE (Kumar et al., 2019) processes and updates the embedding of each node based on its related events using a recurrent neural network (Alom et al., 2019). CTDNE (Nguyen et al., 2018) and CAW (Wang et al., 2021a) aggregate information through random walks over the events. TGAT (Xu et al., 2020), TGN (Rossi et al., 2020) and TREND (Wen & Fang, 2022) apply a GNN-like message-passing mechanism to capture temporal and structural information at the same time. More recently, there are also works using multi-layer perceptrons (Cong et al., 2023) or transformers (Yu et al., 2023) to model temporal graphs. Beyond structural designs, temporal learning techniques such as the temporal point process (Trivedi et al., 2019; Wen & Fang, 2022) have also been integrated into TGL methods to better capture the temporal dynamics.
+
+# 2.2. Graph Continual Learning
+
+As new classes continuously emerge in real-life temporal graphs, how to efficiently learn new classes without forgetting the old knowledge (French & Chater, 2002) becomes an important problem. For Euclidean data like images, the forgetting issue has been addressed by many continual learning methods (Parisi et al., 2019; Masana et al., 2023). The common approaches include regularizing the model parameters, replaying subsets of the old data, or adapting the model architecture. Recently, some GCL methods have connected continual learning to dynamic graphs (Tian et al., 2024). The objective of the GCL problem is:
+
+$$
+\tilde{h}_{N} = \arg\min_{h \in \mathcal{H}} \epsilon(h|\mathcal{G}_{N}^{new}) + \epsilon(h|\mathcal{G}_{N-1}), \tag{2}
+$$
+
+where $\mathcal{G}_N^{new}$ and $\mathcal{G}_{N - 1}$ are the distributions of $G_N^{new}$ and $G_{N - 1}$ ; the former term is for learning new-class knowledge while the latter is for maintaining old-class knowledge. To reduce the cost of learning $G_{N - 1}$ , most GCL methods use subsets of $G_{N - 1}$ to approximate its error, where the subsets are selected by node influence (Zhou & Cao, 2021) or structural dependency (Chen et al., 2021; Kim et al., 2022; Zhang et al., 2022), or generated via auxiliary models (Wang et al., 2022). TWP (Liu et al., 2021) takes a different approach by preventing the parameters important for classification and message-passing from being modified. SSRM (Su et al., 2023a) aligns the distributions between old- and new-class data for better performance. However, these methods primarily target graph snapshots and are less effective for event-based temporal graphs (Feng et al., 2023). This gap is first addressed by OTGNet (Feng et al., 2023), which proposes to replay important and diverse triads (Zhou et al., 2018) and learn class-agnostic embeddings.
+
+# 3. Methodology
+
+The TGCL problem takes a more realistic temporal graph setting that considers both the appearance of new classes and the evolution of old-class data. At a new period $T_N$ , TGCL requires the model $h$ to learn the new classes from $G_N^{new}$ while maintaining the old-class knowledge from $G_N^{old}$ :
+
+$$
+\tilde{h}_{N} = \underset{h \in \mathcal{H}}{\arg\min}\ \epsilon(h|\mathcal{G}_{N}) = \underset{h \in \mathcal{H}}{\arg\min}\ \epsilon(h|\mathcal{G}_{N}^{new}) + \epsilon(h|\mathcal{G}_{N}^{old}). \tag{3}
+$$
+
+Compared with the TGL problem at Eq. (1), the TGCL problem additionally minimizes $\epsilon(h|\mathcal{G}_N^{new})$ . Besides, TGCL maintains more recent old-class knowledge from $G_N^{old}$ , differing from $G_{N-1}$ in the GCL problem at Eq. (2).
+
+In this work, we focus on how to achieve both effectiveness and efficiency in minimizing $\epsilon(h|\mathcal{G}_N^{old})$ , and directly minimize $\epsilon(h|\mathcal{G}_N^{new})$ as most continual learning works do. We follow a common strategy by selecting and learning a subset $G_N^{sub}$ of $G_N^{old}$ . To obtain an optimal performance, we first derive an upper bound on the error introduced by approximating $G_N^{old}$ with $G_N^{sub}$ in Sec.3.1. We then transform this theoretical bound into a tractable optimization problem to facilitate subset selection in Sec.3.2. Lastly, this error is further minimized during learning by aligning the distribution of $G_N^{sub}$ with $G_N^{old}$ , as detailed in Sec.3.3. The framework of LTF is illustrated in Fig.2.
+
+# 3.1. Classification Error Upper-bound
+
+A small classification error $\epsilon(h|\mathcal{G}_N^{old})$ on $G_N^{old}$ is essential for the model effectiveness. Selecting and learning $G_N^{sub} \subset G_N^{old}$ assumes that minimizing $\epsilon(h|\mathcal{G}_N^{sub})$ will also minimize $\epsilon(h|\mathcal{G}_N^{old})$ . While heuristics can help align these errors, a theoretical analysis connecting them is lacking. We address this gap using domain adaptation theory (Redko et al., 2020a; Ben-David et al., 2010) and show that $\epsilon(h|\mathcal{G}_N^{sub})$ can approximate $\epsilon(h|\mathcal{G}_N^{old})$ with upper-bounded additional error.
+
+Based on Lemma 3 from (Ben-David et al., 2010) (see Appendix C), the classification disagreement of any two models $h$ and $h'$ on any two data distributions is upper-bounded by the discrepancy between those two distributions. Then, the upper-bound on the classification error of $\mathcal{G}_N^{old}$ can be derived as the following theorem:
+
+Figure 2: The selective learning framework of LTF on old-class data. From $G_N^{old}$ , $G_N^{sub}$ is greedily selected by having the lowest classification error $j_{cls}(\cdot)$ and distribution discrepancy $j_{MMD}(\cdot)$ , while $G_N^{sim}$ is greedily selected only by the lowest $j_{MMD}(\cdot)$ . Afterwards, $G_N^{sub}$ is learned by minimizing the classification error and aligning the distribution with $G_N^{sim}$ .
+
+Theorem 3.1. Let $\mathcal{G}_N^{old}$ , $\mathcal{G}_N^{sub}$ be the distributions of $G_N^{old}$ and $G_N^{sub}$ . Let $h\in \mathcal{H}$ be a function in the hypothesis space $\mathcal{H}$ and $\tilde{h}_N^{sub}$ be the function optimized on $\mathcal{G}_N^{sub}$ . The classification error on $\mathcal{G}_N^{old}$ then has the following upper bound:
+
+$$
+\begin{aligned} \min_{h \in \mathcal{H}} \epsilon(h|\mathcal{G}_{N}^{old}) \leq \min_{h, \mathcal{G}_{N}^{sub}} &\ \epsilon(\tilde{h}_{N}^{sub}|\mathcal{G}_{N}^{old}) \\ &+ \frac{1}{2} d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{G}_{N}^{old}, \mathcal{G}_{N}^{sub}) + \epsilon(h, \tilde{h}_{N}^{sub}|\mathcal{G}_{N}^{sub}), \end{aligned} \tag{4}
+$$
+
+where $d_{\mathcal{H}\Delta \mathcal{H}}(\mathcal{G}_a,\mathcal{G}_b) = 2\sup_{h\in \mathcal{H}\Delta \mathcal{H}}|Pr_{v_t\sim \mathcal{G}_a}[h(v_t) = 1] - Pr_{v_t\sim \mathcal{G}_b}[h(v_t) = 1]|$ measures the $\mathcal{H}\Delta \mathcal{H}$ divergence between the distributions $\mathcal{G}_a$ and $\mathcal{G}_b$ , and $\epsilon (h,h^{\prime}|\mathcal{D})\coloneqq \mathbb{E}_{x\in \mathcal{D}}[h(x)\neq h^{\prime}(x)]$ is the expected prediction differences of $h$ and $h^\prime$ on $\mathcal{D}$ .
+
+The proof is given in Appendix C. Following Theorem 3.1, there are three criteria to ensure that $h$ achieves a lower error on $G_N^{old}$ by finding a suitable $G_N^{sub}$ :
+
+- Small error $\epsilon(\tilde{h}_N^{sub}|\mathcal{G}_N^{old})$ indicates that $\tilde{h}_N^{sub}$ learned on subset $G_N^{sub}$ is also predictive on the entire old data $G_N^{old}$ ;
+- Small distribution difference $d_{\mathcal{H}\Delta \mathcal{H}}(\mathcal{G}_N^{old},\mathcal{G}_N^{sub})$ indicates that the subset $G_N^{sub}$ is diverse enough to represent the entire old class data $G_N^{old}$ ;
+- Small error $\epsilon(h, \tilde{h}_N^{sub} | \mathcal{G}_N^{sub})$ indicates that $h$ classifies $\mathcal{G}_N^{sub}$ similarly as $\tilde{h}_N^{sub}$ , which guarantees that the knowledge of $\tilde{h}_N^{sub}$ can be transferred to $h$ .
+
+By relaxing $\epsilon (h|\mathcal{G}_N^{old})$ to the upper bound in Eq. (4), we convert the original problem into selecting subset $G_N^{sub}$ (Sec. 3.2) and updating the TGCL model $h$ (Sec.3.3).
+
+# 3.2. Subset Selection
+
+Theorem 3.1 specifies three criteria for selecting $G_N^{sub}$ from $G_N^{old}$ . Since $h$ is unknown during selection, the third criterion is omitted in the following analysis. The remaining two criteria for a representative $G_N^{sub}$ are: 1) a low $\epsilon(\tilde{h}_N^{sub}|\mathcal{G}_N^{old})$ , reflecting strong predictive performance on $G_N^{old}$ , and 2) a low $d_{\mathcal{H}\Delta \mathcal{H}}(\mathcal{G}_N^{old},\mathcal{G}_N^{sub})$ , ensuring close alignment with the distribution of $G_N^{old}$ . Next, we introduce how we transform these two criteria into a tractable optimization problem for selecting a representative $G_N^{sub}$ .
+
+Base Algorithm. In minimizing $\epsilon (\tilde{h}_N^{sub}|\mathcal{G}_N^{old})$ , it is infeasible to obtain $\tilde{h}_N^{sub}$ from each possible $\mathcal{G}_N^{sub}$ for selection. Therefore we approximate $\tilde{h}_N^{sub}$ with $\tilde{h}_{N - 1}$ that is optimized on $\mathcal{G}_{N - 1}$ . This is a reasonable approximation, as $\tilde{h}_{N - 1}$ is readily available and $G_{N - 1}$ closely resembles the full data $G_N^{old}$ by sharing the same class set $Y_{old}$ and being temporally proximate. With $\tilde{h}_N^{sub}$ substituted by $\tilde{h}_{N - 1}$ , based on the inequality in (Ben-David et al., 2010), $\epsilon (\tilde{h}_N^{sub}|\mathcal{G}_N^{old})$ is expanded to: $\epsilon (\tilde{h}_N^{sub}|\mathcal{G}_N^{old})\leq \epsilon (\tilde{h}_N^{sub},\tilde{h}_{N - 1}|\mathcal{G}_N^{old}) + \epsilon (\tilde{h}_{N - 1}|\mathcal{G}_N^{old})$ . As $\epsilon (\tilde{h}_{N - 1}|\mathcal{G}_N^{old})$ is unrelated to the subset, we focus on minimizing $\epsilon (\tilde{h}_N^{sub},\tilde{h}_{N - 1}|\mathcal{G}_N^{old})$ , which requires aligning $\tilde{h}_N^{sub}$ with $\tilde{h}_{N - 1}$ . Because $\tilde{h}_N^{sub}$ depends on $G_N^{sub}$ , we select $G_N^{sub}$ that minimizes $\epsilon (\tilde{h}_{N - 1}|\mathcal{G}_N^{sub})$ to match the behaviors of $\tilde{h}_N^{sub}$ and $\tilde{h}_{N - 1}$ . This simplifies our selection problem to:
+
+$$
+\tilde{\mathcal{G}}_{N}^{sub} = \arg\min_{\mathcal{G}_{N}^{sub}} \epsilon(\tilde{h}_{N-1}|\mathcal{G}_{N}^{sub}) + d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{G}_{N}^{old}, \mathcal{G}_{N}^{sub}). \tag{5}
+$$
+
+Due to the limited observations available in reality, we transform the distribution-level error $\epsilon (\tilde{h}_{N - 1}|\mathcal{G}_N^{sub})$ into the empirical error $\hat{\epsilon} (\cdot)$ on the finite subset $G_N^{sub}$ :
+
+$$
+\hat{\epsilon}\left(\tilde{h}_{N-1} \mid G_{N}^{sub}\right) = \frac{1}{\left|G_{N}^{sub}\right|} \sum_{(v_{t}, y) \in G_{N}^{sub}} l_{cls}\left(\tilde{h}_{N-1}(v_{t}), y\right), \tag{6}
+$$
+
+where $l_{cls}(\cdot)$ is a classification loss such as the mean squared error. Similarly, we estimate $d_{\mathcal{H}\Delta \mathcal{H}}(\mathcal{G}_N^{old},\mathcal{G}_N^{sub})$ by the squared maximum mean discrepancy (MMD) (Gretton et al., 2006) on the finite sets $G_N^{\mathrm{old}}$ and $G_N^{\mathrm{sub}}$ :
+
+$$
+\begin{aligned} \hat{d}_{MMD}^{2}(G_{a}, G_{b}) ={} & \frac{1}{|G_{a}|^{2}} \sum_{v_{t}, u_{t} \in G_{a}} k(v_{t}, u_{t}) - \frac{2}{|G_{a}||G_{b}|} \sum_{v_{t} \in G_{a}, u_{t} \in G_{b}} k(v_{t}, u_{t}) \\ & + \frac{1}{|G_{b}|^{2}} \sum_{v_{t}, u_{t} \in G_{b}} k(v_{t}, u_{t}), \end{aligned} \tag{7}
+$$
+
+where $k(\cdot ,\cdot)$ is the kernel function, and $G_{a}$ and $G_{b}$ are adopted to simplify notation. To evaluate the kernel values, we follow the common practice of replacing the raw data with their embeddings (Su et al., 2023b; Shi & Wang, 2023; Redko et al., 2020b), which are extracted by $\tilde{h}_{N-1}$ and denoted as $\hat{d}_{MMD}^{2}(G_{N}^{sub}, G_{N}^{old}|\tilde{h}_{N-1})$ .
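+
+The estimator in Eq. (7) is straightforward to compute on embedding matrices; a minimal NumPy sketch (the RBF kernel and bandwidth below are illustrative choices, not the paper's tuned settings):
+
+```python
+import numpy as np
+
+def rbf_kernel(X, Y, gamma=1.0):
+    # k(x, y) = exp(-gamma * ||x - y||^2), computed pairwise
+    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
+    return np.exp(-gamma * d2)
+
+def mmd2_hat(Ga, Gb, gamma=1.0):
+    # biased empirical squared MMD of Eq. (7) on two embedding sets
+    return (rbf_kernel(Ga, Ga, gamma).mean()
+            - 2.0 * rbf_kernel(Ga, Gb, gamma).mean()
+            + rbf_kernel(Gb, Gb, gamma).mean())
+
+rng = np.random.default_rng(0)
+Ga = rng.normal(size=(50, 8))
+print(round(mmd2_hat(Ga, Ga), 6))    # identical sets → 0.0
+print(mmd2_hat(Ga, Ga + 3.0) > 0.0)  # clearly shifted set → True
+```
+
+Identical inputs give exactly zero, while a shifted copy of the same embeddings yields a strictly positive discrepancy, matching the intuition behind the selection criterion.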
+
+With the above estimations, the selection objective of $G_{N}^{sub}$ in Eq. (5) is transformed into:
+
+$$
+\begin{aligned} \tilde{G}_{N}^{sub} = \arg\min_{|G_{N}^{sub}| \leq m,\, G_{N}^{sub} \subset G_{N}^{old}} &\ \alpha \hat{\epsilon}\left(\tilde{h}_{N-1} \mid G_{N}^{sub}\right) \\ &+ \hat{d}_{MMD}^{2}(G_{N}^{old}, G_{N}^{sub} | \tilde{h}_{N-1}), \end{aligned} \tag{8}
+$$
+
+where $\alpha$ is a weight hyper-parameter and $m$ is the memory budget for $G_N^{sub}$ . A larger $m$ brings a better estimation but also increases the computation complexity.
+
+Greedy Algorithm. Directly optimizing the selection objective in Eq. (8) is infeasible due to its factorial time complexity. However, it can be proven to be a monotone submodular function, allowing greedy optimization with a guaranteed approximation to the optimal solution.
+
+The first term $\hat{\epsilon} (\tilde{h}_{N - 1}|G_N^{sub})$ is linear in the classification error of each instance, thus it is directly monotone submodular. Following the proof in (Kim et al., 2016), $\hat{d}_{MMD}^2 (G_N^{old},G_N^{sub}|\tilde{h}_{N - 1})$ is also a monotone submodular function with respect to $G_{N}^{sub}$ provided $k(v_{t},u_{t})$ satisfies $0\leq k(v_{t},u_{t})\leq k(v_{t},v_{t}) / (|G_{N}^{old}|^{3} - 2|G_{N}^{old}|^{2} - 2|G_{N}^{old}| - 3)$ for all $v_{t},u_{t}\in G_{N}^{old}$ , $v_{t}\neq u_{t}$ . This requirement is met with a properly parameterized kernel, and we use the Radial Basis Function kernel (Scholkopf et al., 1997) in practice. Thus, Eq. (8), as a sum of two monotone submodular functions, is itself monotone submodular based on the theory in (Cook et al., 2011). Consequently, as per (Nemhauser et al., 1978), Eq. (8) can be efficiently approximated by a greedy algorithm, achieving an approximation ratio of $(1 - 1/e)$ relative to the optimal solution.
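+
+The $(1 - 1/e)$ guarantee can be illustrated on a toy monotone submodular function. The sketch below uses max-coverage (a classic monotone submodular objective) with hypothetical sets, purely to demonstrate the Nemhauser et al. (1978) bound; it is not the paper's selection objective:
+
+```python
+from itertools import combinations
+
+# Hypothetical ground sets; coverage(S) = |union of the chosen sets|
+# is monotone submodular, so greedy is within (1 - 1/e) of optimal.
+sets = {
+    "A": {1, 2, 3},
+    "B": {3, 4},
+    "C": {4, 5, 6},
+    "D": {1, 6},
+    "E": {2, 5},
+}
+
+def coverage(chosen):
+    return len(set().union(*(sets[s] for s in chosen))) if chosen else 0
+
+def greedy(budget):
+    chosen = []
+    for _ in range(budget):
+        best = max(sets.keys() - set(chosen),
+                   key=lambda s: coverage(chosen + [s]))
+        chosen.append(best)
+    return chosen
+
+budget = 2
+opt = max(coverage(list(c)) for c in combinations(sets, budget))
+grd = coverage(greedy(budget))
+print(grd, opt)  # greedy value is within (1 - 1/e) of the optimum
+```
+
+On this toy instance greedy actually reaches the optimum of 6 covered elements; in general only the $(1 - 1/e)$ fraction is guaranteed.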
+
+To implement the greedy algorithm, we derive the witness function $j(\cdot)$ to evaluate how adding one node to $G_N^{sub}$ affects the value of Eq.(8). The function $j(\cdot)$ is a summation of two separate witness functions $j_{cls}(\cdot)$ and $j_{MMD}(\cdot)$ , corresponding to $\hat{\epsilon} (\tilde{h}_{N - 1}|G_N^{sub})$ and $\hat{d}_{MMD}^2 (G_N^{old},G_N^{sub})$ . Since the classification error is a summation over node-wise losses defined in Eq.(6), $j_{cls}(v_t) = l_{cls}(v_t,y|\tilde{h}_{N - 1})$ . On the other hand, $j_{MMD}(\cdot)$ can be derived from Eq. (7) as:
+
+$$
+\begin{aligned} j_{MMD}(v_{t}) ={} & \frac{2}{|G_{N}^{sub}|} \sum_{u_{t} \in G_{N}^{sub}} k\left(v_{t}, u_{t} \mid \tilde{h}_{N-1}\right) \\ & - \frac{2}{|G_{N}^{old}|} \sum_{u_{t} \in G_{N}^{old}} k(v_{t}, u_{t} | \tilde{h}_{N-1}), \end{aligned} \tag{9}
+$$
+
+where $k(v_{t},u_{t}|h)$ denotes the kernel calculated on the node embeddings extracted from the model $h$ . Then, the overall witness function $j(\cdot)$ is expressed as:
+
+$$
+j(v_{t}) = \alpha j_{cls}(v_{t}) + j_{MMD}(v_{t}),
+$$
+
+where $\alpha$ is the same as in Eq. (8). Afterwards, we greedily select the nodes with the smallest $j(\cdot)$ from $G_N^{old}$ to $G_N^{sub}$ until the buffer is full.
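+
+The greedy loop can be sketched as follows. The embeddings, per-node losses, and RBF bandwidth are placeholder inputs standing in for quantities extracted by $\tilde{h}_{N-1}$ , and the treatment of the empty initial subset (contributing zero to the first term of Eq. (9)) is an assumption of this sketch:
+
+```python
+import numpy as np
+
+def greedy_subset(emb, cls_loss, m, alpha=1.0, gamma=1.0):
+    """Greedily pick m indices minimizing j(v) = alpha*j_cls(v) + j_MMD(v),
+    following Eqs. (8)-(9). emb: (n, d) embeddings of the old-class pool;
+    cls_loss: (n,) per-node classification losses under the old model."""
+    d2 = ((emb[:, None, :] - emb[None, :, :]) ** 2).sum(-1)
+    K = np.exp(-gamma * d2)                 # RBF kernel over the pool
+    n = len(emb)
+    pool_term = (2.0 / n) * K.sum(axis=1)   # second term of Eq. (9)
+    selected = []
+    for _ in range(m):
+        j_best, v_best = np.inf, None
+        for v in range(n):
+            if v in selected:
+                continue
+            # first term of Eq. (9) over the current subset (0 if empty)
+            sub_term = (2.0 / len(selected)) * K[v, selected].sum() if selected else 0.0
+            j = alpha * cls_loss[v] + sub_term - pool_term[v]
+            if j < j_best:
+                j_best, v_best = j, v
+        selected.append(v_best)
+    return selected
+
+rng = np.random.default_rng(1)
+emb = rng.normal(size=(30, 4))
+losses = rng.random(30)
+picked = greedy_subset(emb, losses, m=5)
+print(picked)
+```
+
+Each round adds the candidate with the smallest witness value, mirroring the selection of the smallest $j(\cdot)$ until the buffer of size $m$ is full.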
+
+Cost Reduction. During greedy selection, estimating $\hat{d}_{MMD}^{2}(G_{N}^{old}, G_{N}^{sub}|\tilde{h}_{N-1})$ has a high computational complexity of $O((|G_{N}^{old}| + |G_{N}^{sub}|)^{2})$ , which limits its application to large data sets. Thus, we propose to evenly partition $G_{N}^{old}$ into groups of size $p$ , $|G_{N}^{old}| > p \gg m$ , resulting in $W = \lceil |G_{N}^{old}| / p \rceil$ partitions. We then greedily select $1/W$ of $G_{N}^{sub}$ from each partition and join them as the final subset. The complexity of selecting $G_{N}^{sub}$ from each partition is reduced to $O((|G_{N}^{old}| + |G_{N}^{sub}|)^{2}/W^{2})$ .
+
+Based on the triangle inequality of $d_{\mathcal{H}\Delta \mathcal{H}}(\cdot ,\cdot)$ (Gretton et al., 2006), this partition procedure enlarges the second term of Eq. (8) to $d_{\mathcal{H}\Delta \mathcal{H}}(\mathcal{G}_N^{old},\mathcal{G}_N^{sub})\leq d_{\mathcal{H}\Delta \mathcal{H}}(\mathcal{G}_N^{old},\mathcal{G}_{N,w}^{old}) + d_{\mathcal{H}\Delta \mathcal{H}}(\mathcal{G}_{N,w}^{old},\mathcal{G}_{N,w}^{sub})$ for each partition $G_{N,w}^{old}\sim \mathcal{G}_{N,w}^{old}$ . To reduce this additional error, $\mathcal{G}_{N,w}^{old}$ should be similar to $\mathcal{G}_N^{old}$ . As each partition can remain large enough for the subset selection, a random partition satisfies this requirement well.
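+
+The partition-then-select procedure can be sketched as follows. For self-containment, the per-partition selector here is simplified to sorting by precomputed witness scores; in the actual method each partition would run the greedy witness-function selection:
+
+```python
+import numpy as np
+
+def partitioned_select(scores, m, p, seed=0):
+    """Randomly split the pool into W = ceil(n/p) partitions and draw an
+    equal share of the budget m from each; scores stand in for the
+    witness values j(.) (lower is better)."""
+    n = len(scores)
+    rng = np.random.default_rng(seed)
+    perm = rng.permutation(n)
+    W = -(-n // p)                      # ceil(n / p) partitions
+    share = -(-m // W)                  # ceil(m / W) picks per partition
+    chosen = []
+    for group in np.array_split(perm, W):
+        order = group[np.argsort(scores[group])]
+        chosen.extend(order[:share].tolist())
+    return chosen[:m]
+
+scores = np.arange(100, dtype=float)
+picked = partitioned_select(scores, m=10, p=25)
+print(len(picked))  # → 10, drawn evenly across 4 random partitions
+```
+
+Because each partition is a random slice of the pool, its empirical distribution stays close to that of the full pool, which is exactly the condition the triangle-inequality argument above requires.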
+
+# 3.3. Model Optimization
+
+After selecting an optimal subset in Sec. 3.2, we transform Eq. (4) into a concrete learning objective to train an effective model $h$ with $G_N^{sub}$ . Because $\tilde{h}_N^{sub}$ is determined once the subset is selected and $\mathcal{G}_N^{old}$ is fixed, the first term $\epsilon(\tilde{h}_N^{sub} | \mathcal{G}_N^{old})$ of Eq. (4) cannot be further optimized and is omitted. We minimize $d_{\mathcal{H}\Delta \mathcal{H}}(\mathcal{G}_N^{old}, \mathcal{G}_N^{sub})$ by drawing together the embedding distributions of the two data sets extracted by $h$ (Ben-David et al., 2010; Su et al., 2023b; Shi & Wang, 2023; Redko et al., 2020b), i.e., by minimizing $d_{\mathcal{H}\Delta \mathcal{H}}(\mathcal{G}_N^{old}, \mathcal{G}_N^{sub} | h)$ . By replacing the population terms with their estimations, the objective of learning $G_N^{sub}$ is $\hat{\epsilon}(h, \tilde{h}_N^{sub} | G_N^{sub}) + \hat{d}_{MMD}(G_N^{old}, G_N^{sub} | h)$ .
+
+As we avoid learning the full $G_N^{old}$ for efficiency reasons, we substitute $G_N^{old}$ with a similarly distributed subset $G_N^{sim} \subset G_N^{old}$ , $\mathcal{G}_N^{sim} \approx \mathcal{G}_N^{old}$ . To satisfy this requirement, $G_N^{sim}$ is selected by solely minimizing its distribution discrepancy with $G_N^{old}$ :
+
+$$
+\tilde{G}_{N}^{sim} = \arg \min_{\substack{|G_{N}^{sim}|\leq m^{\prime}\\ G_{N}^{sim}\subset G_{N}^{old}}}\hat{d}_{MMD}(G_{N}^{sim},G_{N}^{old}|\tilde{h}_{N - 1}), \tag{10}
+$$
+
+where $m^{\prime}$ is the memory budget for $G_N^{sim}$ . A larger $m^{\prime}$ improves the estimation but also increases the computation complexity. $G_N^{sim}$ can be selected by greedily finding the nodes with the lowest $j_{MMD}(\cdot)$ defined in Eq. (9). We further reduce the selection cost of $G_N^{sim}$ by data partitioning,
+
+Table 1: Data statistics
+
+| Data Set | Yelp | Reddit | Amazon |
+| --- | --- | --- | --- |
+| # Nodes | 19,918 | 13,106 | 84,605 |
+| # Events | 2,321,186 | 310,231 | 875,563 |
+| # Timespan / Period | 1 year | 20 days | 24 days |
+| # Periods | 5 | 3 | 3 |
+| # Classes / Period | 3 | 5 | 3 |
+| # Total Classes | 15 | 15 | 9 |
+
+similar to $G_N^{sub}$ . After substituting $G_N^{old}$ with $G_N^{sim}$ , the learning objective is transformed to:
+
+$$
+l_{old}\left(G_{N}^{sim}, G_{N}^{sub} | h\right) = \hat{\epsilon}(h \mid G_{N}^{sub}) + \hat{d}_{MMD}\left(G_{N}^{sim}, G_{N}^{sub} | h\right), \tag{11}
+$$
+
+where $\hat{\epsilon}(h, \tilde{h}_N^{sub}|G_N^{sub})$ is written as $\hat{\epsilon}(h|G_N^{sub})$ since both terms are equivalent in making $h$ perform like $\tilde{h}_N^{sub}$ .
+
+In practice, the complexity of calculating $\hat{d}_{MMD}^{2}(G_{N}^{sim}, G_{N}^{sub}|h)$ is $O((|G_{N}^{sim}| + |G_{N}^{sub}|)^{2})$ , which is much higher than the $O(|G_{N}^{sub}|)$ of the classification error calculation. Therefore, we further simplify its form to improve efficiency. To ensure that $G_{N}^{sim}$ serves as the target distribution of $G_{N}^{sub}$ during optimization, we stop the gradients of $G_{N}^{sim}$ from being back-propagated. Following this stop-gradient design, the first and third terms in the $\hat{d}_{MMD}^{2}(\cdot)$ definition of Eq. (7) are omitted, since $G_{N}^{sim}$ is not learned and the self-comparison within $G_{N}^{sub}$ is meaningless. After this simplification, the complexity is reduced from $O((|G_{N}^{sim}| + |G_{N}^{sub}|)^{2})$ to $O(|G_{N}^{sim}||G_{N}^{sub}|)$ , and $\hat{d}_{MMD}^{2}(G_{N}^{sim}, G_{N}^{sub}|h)$ is optimized by:
+
+$$
+l_{dst}(G_{N}^{sub},G_{N}^{sim}|h) = -s\sum_{\substack{v_{t}\in G_{N}^{sub}\\ u_{t}\in G_{N}^{sim}}}k(v_{t},\mathrm{sg}(u_{t})|h),
+$$
+
+where $s = 2 / (|G_N^{sub}| \cdot |G_N^{sim}|)$ and $\mathrm{sg}(\cdot)$ means stopping the gradients from back-propagation.
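+
+Numerically, the simplified loss is just the negated cross term of Eq. (7). The NumPy sketch below verifies this relation on random embeddings; since the stop-gradient only matters under autograd, it is represented here by simply treating the $G_N^{sim}$ embeddings as constants:
+
+```python
+import numpy as np
+
+def rbf(X, Y, gamma=1.0):
+    # pairwise RBF kernel matrix
+    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
+    return np.exp(-gamma * d2)
+
+def l_dst(sub, sim, gamma=1.0):
+    # -s * sum_{v in sub, u in sim} k(v, sg(u)); sim is a constant here
+    s = 2.0 / (len(sub) * len(sim))
+    return -s * rbf(sub, sim, gamma).sum()
+
+rng = np.random.default_rng(0)
+sub, sim = rng.normal(size=(20, 4)), rng.normal(size=(30, 4))
+full_mmd2 = rbf(sub, sub).mean() - 2 * rbf(sub, sim).mean() + rbf(sim, sim).mean()
+# l_dst equals the full MMD^2 with the two self-comparison terms dropped
+print(np.isclose(l_dst(sub, sim),
+                 full_mmd2 - rbf(sub, sub).mean() - rbf(sim, sim).mean()))  # → True
+```
+
+Dropping the two quadratic self-terms is what reduces the cost from $O((|G_{N}^{sim}| + |G_{N}^{sub}|)^{2})$ to the $O(|G_{N}^{sim}||G_{N}^{sub}|)$ cross term alone.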
+
+Finally, by including the objective of learning $G_N^{new}$ , our total objective of updating $h$ at $T_N$ is:
+
+$$
+l_{tot} = \hat{\epsilon}(h|G_{N}^{new}) + \hat{\epsilon}(h|G_{N}^{sub}) + \beta l_{dst}(G_{N}^{sub}, G_{N}^{sim}|h),
+$$
+
+where $\beta$ is the hyper-parameter weighting the distribution regularization importance. The pseudo-code of our overall framework is presented in Algorithm 1 at Appendix D.
+
+# 4. Experiments
+
+# 4.1. Experiment Setup
+
+Data Set. We evaluate our method using three real-world datasets: Yelp (dat), Reddit (Baumgartner et al., 2020), and Amazon (Ni et al., 2019). Yelp, a business-to-business temporal graph from 2014 to 2019, treats businesses in the same category as nodes of the same class, with user interactions creating events. The graph is divided into five periods, each representing a year, with three new business categories introduced each year. Reddit, a post-to-post temporal graph, treats subreddit topics as classes and posts as nodes. User comments create events, with every 24 days forming a period and five new subreddits introduced each period. Amazon is constructed similarly to Yelp, with 20-day periods and three new business categories per period. The temporal graph construction mechanism is similar to OTGNet (Feng et al., 2023), but adapted to our unique problem definition. Dataset statistics are summarized in Tab. 1, with additional details in Appendix G.
+
+Backbone Model. As LTF is agnostic to TGL model designs, we select the classic model TGAT (Xu et al., 2020) and the state-of-the-art model DyGFormer (Yu et al., 2023) as our backbone models. TGAT uses the self-attention mechanism to aggregate temporal neighbor information and embed the nodes. DyGFormer applies a transformer structure and uses structural encodings to embed the nodes.
+
+Baselines. First, we select three classic continual learning models that are adaptable to the TGCL problem, which are EWC (Kirkpatrick et al., 2016), LwF (Li & Hoiem, 2018) and iCaRL (Rebuffi et al., 2017). EWC and LwF use regularization losses to prevent forgetting the old class knowledge while not using the old class data. iCaRL selects the representative old class data based on the closeness to the mean of the embeddings. For the GCL methods, we select the replay-based methods ER (Zhou & Cao, 2021), SSM (Zhang et al., 2022), OTGNet (Feng et al., 2023), and URCL (Miao et al., 2024). Besides, the naive baselines of learning the full $G_{N}$ (Joint) and learning only the $G_{N}^{new}$ (Finetune) are included. Joint is the upper-bound for performance with the lowest efficiency, while Finetune is the opposite.
+
+Evaluation Metric. The average precision (AP) and average forgetting (AF) on each set of classes within a period are used to evaluate the model performance. In $G_{n}$ , there are $n$ sets of classes $\{Y_{1},\dots ,Y_{n}\}$ , and the model's precision on each of them is $P_{n,i},\forall i\leq n$ . Then AP at period $T_{n}$ is calculated as $AP_{n}\coloneqq \frac{1}{n}\sum_{i = 1}^{n}P_{n,i}$ . To evaluate the forgetting issue at $T_{n}$ , we use the precision difference between the current method and Joint $(P_{n,i}^{jnt})$ as the forgetting score for $Y_{i}$ , which is $F_{n,i}\coloneqq P_{n,i}^{jnt} - P_{n,i}$ . Then the AF at period $T_{n}$ is calculated as $AF_{n}\coloneqq \frac{1}{n - 1}\sum_{i = 1}^{n - 1}F_{n,i}$ . For simplicity, we omit the subscript $N$ for $AP_N$ and $AF_N$ as they reflect the final performance. A higher AP is better, while a lower AF is better. To evaluate the efficiency, the average training time per epoch (abbreviated as Time) at the $N$-th period's update is recorded, as this period accumulates the most data and is the most time-consuming.
+
+Table 2: The comparison between LTF and the baseline methods. The best and second best results are noted in Bold and Underline. Joint and Finetune are excluded from the notations.
+
+TGAT backbone:
+
+| Method | Yelp AP↑ | Yelp AF↓ | Yelp Time↓ | Reddit AP↑ | Reddit AF↓ | Reddit Time↓ | Amazon AP↑ | Amazon AF↓ | Amazon Time↓ |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Joint | 0.0810 | — | 58.37 | 0.1378 | — | 50.50 | 0.1477 | — | 128.71 |
+| Finetune | 0.0141 | 0.0843 | 9.11 | 0.0312 | 0.1550 | 14.93 | 0.0340 | 0.1408 | 65.81 |
+| LwF | 0.0209 | 0.0620 | 13.90 | 0.0439 | 0.1091 | 23.67 | 0.0303 | 0.1024 | 102.92 |
+| EWC | 0.0443 | 0.0408 | 9.19 | 0.0467 | 0.1384 | 14.95 | 0.0524 | 0.1152 | 68.37 |
+| iCaRL | 0.0607 | 0.0198 | 11.57 | 0.0602 | 0.0860 | 19.22 | 0.0699 | 0.0794 | 70.03 |
+| ER | 0.0521 | 0.0332 | 11.63 | 0.0622 | 0.0783 | 19.07 | 0.0799 | 0.0617 | 69.06 |
+| SSM | 0.0552 | 0.0232 | 11.82 | 0.0308 | 0.1203 | 82.99 | 0.0723 | 0.0912 | 145.27 |
+| OTGNet* | 0.0648 | 0.0236 | 316.15 | 0.0868 | 0.0518 | 49.42 | 0.1031 | 0.0459 | 709.49 |
+| URCL | 0.0562 | 0.0303 | 11.57 | 0.0726 | 0.0649 | 20.13 | 0.0915 | 0.0431 | 70.32 |
+| LTF | 0.0682 | 0.0195 | 25.05 | 0.0871 | 0.0474 | 39.16 | 0.1110 | 0.0165 | 72.94 |
+
+DyGFormer backbone:
+
+| Method | Yelp AP↑ | Yelp AF↓ | Yelp Time↓ | Reddit AP↑ | Reddit AF↓ | Reddit Time↓ | Amazon AP↑ | Amazon AF↓ | Amazon Time↓ |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Joint | 0.0813 | — | 95.11 | 0.1256 | — | 70.64 | 0.1500 | — | 177.38 |
+| Finetune | 0.0172 | 0.0800 | 14.43 | 0.0360 | 0.1433 | 20.58 | 0.0551 | 0.1517 | 88.34 |
+| LwF | 0.0399 | 0.0386 | 26.03 | 0.0469 | 0.0944 | 37.53 | 0.0763 | 0.0856 | 155.03 |
+| EWC | 0.0601 | 0.0295 | 14.24 | 0.0521 | 0.1046 | 20.20 | 0.1005 | 0.0832 | 89.32 |
+| iCaRL | 0.0558 | 0.0214 | 18.31 | 0.0917 | 0.0248 | 26.34 | 0.0945 | 0.0775 | 92.36 |
+| ER | 0.0546 | 0.0276 | 18.49 | 0.0771 | 0.0386 | 26.65 | 0.1026 | 0.0650 | 92.11 |
+| SSM | 0.0560 | 0.0235 | 18.27 | 0.0723 | 0.0641 | 26.09 | 0.1063 | 0.0568 | 92.15 |
+| OTGNet* | — | — | — | — | — | — | — | — | — |
+| URCL | 0.0584 | 0.0216 | 20.13 | 0.0902 | 0.0284 | 27.58 | 0.1089 | 0.0566 | 93.43 |
+| LTF | 0.0681 | 0.0096 | 51.80 | 0.1134 | 0.0081 | 58.56 | 0.1253 | 0.0383 | 101.06 |
+
+Figure 3: The average precision (AP) of LTF and the baselines at each period based on TGAT.
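+
+The AP and AF computations described above reduce to a few lines; a sketch with hypothetical precision values:
+
+```python
+def avg_precision(P_n):
+    # AP_n = (1/n) * sum_i P_{n,i}
+    return sum(P_n) / len(P_n)
+
+def avg_forgetting(P_n, P_jnt):
+    # AF_n = (1/(n-1)) * sum_{i<n} (P^jnt_{n,i} - P_{n,i}),
+    # averaged over the n-1 previously seen class sets
+    n = len(P_n)
+    return sum(P_jnt[i] - P_n[i] for i in range(n - 1)) / (n - 1)
+
+# hypothetical precisions over n = 3 class sets at period T_3
+P_n = [0.05, 0.07, 0.12]
+P_jnt = [0.08, 0.09, 0.12]
+print(round(avg_precision(P_n), 4))          # → 0.08
+print(round(avg_forgetting(P_n, P_jnt), 4))  # → 0.025
+```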
+
+Implementation Details. The experiments are run on an Nvidia A30 GPU. The implementation of the backbone models follows the code provided by DyGLib (Yu et al., 2023). TGAT contains two layers, and each layer has 4 attention heads. DyGFormer contains two layers, and each layer has 4 attention heads. The number of temporal neighbors is 10 and they are selected based on their freshness. We use a single 2-layer perceptron to classify all the nodes of the period, which is known as the class-incremental setting in continual learning. For all data sets, the dropout rate is 0.4, the learning rate is 0.00001, the number of training epochs for each period is 100, and the batch size is 600. Early stopping is applied when the validation AP does not improve for 20 epochs. For the selection-based methods, 1000 events are selected for each class at each period of Reddit and Amazon, and 500 for Yelp. Additionally, for LTF, the size of $G_N^{sim}$ is set to 500 for all data sets, and the data are partitioned to have around 10000 samples in each part. The reported results are averaged over 3 random seeds. For all data sets, each period is split into $80\%$ training, $10\%$ validation, and $10\%$ test. The testing data are not seen in training and validation.
+
+# 4.2. Main Experiments
+
+The overall comparison of LTF with the other baselines is shown in Tab. 2, with performance trends in Fig. 3 and Fig. 4. OTGNet modifies TGAT with a unique structure, making adaptation to DyGFormer non-trivial. Naive approaches like Joint and Finetune face efficiency and effectiveness issues: Joint requires long training times, while Finetune performs worse than the other baselines. Existing continual learning methods partially address TGCL, achieving higher APs (lower AFs) than Finetune with significantly lower time costs than Joint. Regularization-based methods are generally weaker than selection-based methods, highlighting the importance of data for updating old knowledge. However, a performance gap remains compared to Joint. OTGNet improves subset selection by ensuring importance and diversity, outperforming the other baselines but suffering from high time complexity. LTF, with its theoretical guarantees, achieves better performance than OTGNet with lower time costs. On DyGFormer, LTF outperforms all baselines across all datasets. Full results with standard deviations are provided in Appendix I.
+
+Figure 4: The average forgetting (AF) of LTF and the baselines at each period based on TGAT.
+
+# 4.3. Ablation Study
+
+Selection and Regularization Components. In Tab. 3, we evaluate the impact of each LTF component. The key terms of our selection objective in Eq. (8) are the error $\hat{\epsilon} (\cdot |\cdot)$ and the distribution discrepancy $\hat{d}_{MMD}^2 (\cdot ,\cdot)$ , represented by Err. and Dist. respectively. We also analyze the effect of adding $l_{dst}(\cdot)$ to the training objective. The first two lines of Tab. 3 show that neither selection component alone is sufficient to find an effective subset. Yelp and Reddit rely more on distribution similarity, while Amazon benefits from lower error. Combining both improves performance across all datasets and backbones. After selecting the most effective data, incorporating $l_{dst}(\cdot)$ further enhances performance. The additional optimization time is less significant for denser graphs, especially in Amazon, where new class data dominates. Full results are provided in Appendix J.
+
+Table 3: Ablation study on the selecting and learning components of LTF. The applied components are noted with Y. The best and second best results are noted in Bold and Underline.
+
+TGAT backbone:
+
+| Err. | Dist. | $l_{dst}(\cdot)$ | Yelp AP↑ | Yelp AF↓ | Yelp Time↓ | Reddit AP↑ | Reddit AF↓ | Reddit Time↓ | Amazon AP↑ | Amazon AF↓ | Amazon Time↓ |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Y |  |  | 0.0438 | 0.0439 | — | 0.0579 | 0.0736 | — | 0.1063 | 0.0185 | — |
+|  | Y |  | 0.0565 | 0.0322 | 11.79 | 0.0640 | 0.0695 | 19.28 | 0.0592 | 0.0684 | 67.77 |
+| Y | Y |  | 0.0654 | 0.0215 | — | 0.0866 | 0.0447 | — | 0.1004 | 0.0078 | — |
+| Y | Y | Y | 0.0682 | 0.0195 | 25.05 | 0.0871 | 0.0474 | 39.16 | 0.1110 | 0.0165 | 72.94 |
+
+DyGFormer backbone:
+
+| Err. | Dist. | $l_{dst}(\cdot)$ | Yelp AP↑ | Yelp AF↓ | Yelp Time↓ | Reddit AP↑ | Reddit AF↓ | Reddit Time↓ | Amazon AP↑ | Amazon AF↓ | Amazon Time↓ |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Y |  |  | 0.0467 | 0.0309 | — | 0.0807 | 0.0407 | — | 0.1203 | 0.0456 | — |
+|  | Y |  | 0.0543 | 0.0266 | 18.51 | 0.0863 | 0.0350 | 27.12 | 0.1161 | 0.0465 | 90.18 |
+| Y | Y |  | 0.0618 | 0.0155 | — | 0.0939 | 0.0272 | — | 0.1231 | 0.0366 | — |
+| Y | Y | Y | 0.0681 | 0.0096 | 51.80 | 0.1134 | 0.0081 | 58.56 | 0.1253 | 0.0383 | 101.06 |
+
+Figure 5: Sensitivity on the key hyper-parameters based on TGAT.
+
+Table 4: Ablation study on the partition methods when selecting the subsets. The reported values are AP.
+
+| Partition Method | TGAT Yelp | TGAT Reddit | TGAT Amazon | DyGFormer Yelp | DyGFormer Reddit | DyGFormer Amazon |
+| --- | --- | --- | --- | --- | --- | --- |
+| Kmeans | 0.0598 | 0.0754 | 0.0929 | 0.0569 | 0.0912 | 0.1104 |
+| Hierarchical | 0.0613 | 0.0792 | 0.0897 | 0.0621 | 0.0987 | 0.1091 |
+| Random | 0.0682 | 0.0871 | 0.1110 | 0.0681 | 0.1134 | 0.1253 |
+
+Data Partition Approaches. We reduce selection complexity by partitioning the old-class data and prove that an optimal partition should preserve the distribution. Tab. 4 examines different partition methods. Selecting subsets without partitioning exceeds the GPU memory limit (24G on Nvidia A30), making the performance untrackable. Among the intuitive methods, random partitioning is the most effective, as k-means or hierarchical clustering alters the data distribution of each partition, conflicting with the theorem's requirements. A study on the impact of partition size is included in Appendix K.
+
+# 4.4. Sensitivity Analysis
+
+In Fig. 5, we evaluate the sensitivity of our method over the essential hyper-parameters on TGAT. The results on DyGFormer are in Appendix L. The empirical analyses are listed as follows, which apply to both backbones:
+
+- The impact of $\alpha$ . $\alpha$ balances error and distribution in selecting $G_N^{sub}$ in Eq. (8). Across values [0.25, 0.5, 1, 2, 4], Yelp and Reddit favor smaller weights, while Amazon prefers larger ones, aligning with the ablation study. Optimal performance occurs around $\alpha = 1$ for all datasets, confirming the importance of both error and distribution in subset selection.
+- The impact of $\beta$ . $\beta$ controls the weight of $l_{dst}(\cdot)$ during subset learning. Despite favoring distribution in selection, Yelp and Reddit require a low $\beta$ for regularization, while Amazon needs a higher value. This suggests that distribution is crucial for generalizing subset knowledge, with $l_{dst}(\cdot)$ enhancing its effect during learning.
+- Size $m$ of $G_N^{sub}$ . As $G_N^{sub}$ is the major carrier of the old class knowledge, its size $m$ is an important factor for the performance. Following the setup of the main experiment, the memory size is set to [250, 500, 750] for Yelp, and [500, 1000, 1500] for Reddit and Amazon. With the increase of memory size, the performance continuously improves, which is consistent with the intuition.
+- Size $m'$ of $G_N^{sim}$ . $m'$ affects the quality of the distribution approximation in Eq. 10. It can be seen that Amazon requires a larger size, while 500 is sufficient for Yelp and Reddit. This is because the distribution of Amazon is more complex and requires a larger sample size to approximate.
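+The role of $\alpha$ above can be illustrated with a toy ranking sketch. This is a hypothetical stand-in for the Eq. (8) objective (whose exact form is defined in the main text): candidates are scored by a weighted sum of an error term and a distribution-gap term, and the $m$ best-scoring ones are kept.
+
+```python
+import numpy as np
+
+def select_by_score(err, dist_gap, alpha, m):
+    """Toy stand-in for an Eq. (8)-style objective: rank candidates by
+    err + alpha * dist_gap and keep the m smallest combined scores."""
+    score = err + alpha * dist_gap
+    return np.argsort(score)[:m]
+```
+
+With a small $\alpha$ the error term dominates the ranking (the regime Yelp and Reddit favor), while a large $\alpha$ lets the distribution term dominate (the regime Amazon favors).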
+
+In addition to the effectiveness study, we also evaluated the efficiency-performance tradeoff by varying $m$ and $m'$ in Tab. 5. Results show that increasing either $m$ or $m'$ consistently improves average precision (AP), at the cost of longer training time. This trend is expected due to the $O(mm')$ complexity introduced by the regularization loss.
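+The $O(mm')$ term comes from the pairwise kernel evaluations between the $m$ samples of $G_N^{sub}$ and the $m'$ samples of $G_N^{sim}$. A minimal sketch, assuming an RBF-kernel squared-MMD estimate (consistent with the $d_{MMD}$ notation, though the exact form of $l_{dst}$ is defined in the main text):
+
+```python
+import numpy as np
+
+def rbf(a, b, gamma=1.0):
+    # pairwise RBF kernel matrix: exp(-gamma * ||a_i - b_j||^2)
+    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
+    return np.exp(-gamma * d2)
+
+def mmd2(x, y, gamma=1.0):
+    """Biased squared-MMD estimate; the cross term rbf(x, y) alone
+    already costs O(m * m') kernel evaluations."""
+    return rbf(x, x, gamma).mean() + rbf(y, y, gamma).mean() - 2 * rbf(x, y, gamma).mean()
+```
+
+The estimate is zero when the two samples coincide and grows as their distributions drift apart, which is why larger $m$ and $m'$ give a better-regularized but slower update.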
+
+Notably, the full LTF model (with both subgraph selection and regularization) achieves the highest accuracy among all continual learning baselines, but also incurs additional runtime due to regularization. However, since our selection and regularization modules are decoupled, the regularization can be disabled when efficiency is a priority. In such cases, the selection-only variant ($m' = 0$, i.e., No Reg.) still outperforms existing replay-based methods with comparable runtime, offering a flexible trade-off between performance and efficiency.
+
+Table 5: Sensitivity analysis on the efficiency-performance tradeoff by varying $m$ and ${m}^{\prime }$ on the Yelp dataset and DyGFormer backbone.
+
+| Setting | AP↑ | Time (s)↓ |
+|---|---|---|
+| **Varying $m$ (size of $G_N^{sub}$)** | | |
+| $m = 250$ | 0.0434 | 34.99 |
+| $m = 500$ | 0.0681 | 54.30 |
+| $m = 750$ | 0.0713 | 72.31 |
+| **Varying $m'$ (size of $G_N^{sim}$)** | | |
+| Best Baseline | 0.0601 | 14.24 |
+| $m' = 0$ (No Reg.) | 0.0618 | 18.51 |
+| $m' = 250$ | 0.0624 | 36.70 |
+| $m' = 500$ | 0.0681 | 54.30 |
+| $m' = 750$ | 0.0693 | 71.88 |
+
+# 4.5. Case Studies
+
+In this section, we include experiments on specific questions about our problem, including the necessity of TGNNs for solving the proposed problem and how more complex datasets affect the performance of LTF.
+
+The necessity of TGNNs. As node embeddings are the key medium for selecting data and classifying nodes, we study here whether the embeddings alone can support the whole training process, without using the topological structures. As shown in Tab. 6, the MLP backbone, which uses only the node embeddings, fails to achieve good performance even under the joint setting. This shows that the topological structures are essential for the TGCL problem, and TGNNs are necessary to solve it.
+
+Table 6: The comparison of AP across different backbone models and TGCL methods.
+
+| Method | MLP | TGAT | DyGFormer |
+|---|---|---|---|
+| Joint | 0.0184 | 0.1477 | 0.1500 |
+| Finetune | 0.0160 | 0.0340 | 0.0551 |
+| LTF | 0.0171 | 0.1110 | 0.1253 |
+
+More Complex Datasets. To evaluate the scalability and robustness of our approach under more challenging conditions, we construct two new large-scale benchmarks: Reddit-Large, with more data updates, and Reddit-Long, with longer period durations:
+
+- Reddit-Large comprises 344,630 nodes, 4,962,297 edges, and spans 16 time periods, with 2 novel classes introduced per period, totaling 32 classes.
+- Reddit-Long consists of 558,486 nodes, 5,323,230 edges, and spans 4 time periods, each covering 180 days. In each period, 6 new classes are introduced, resulting in a total of 24 classes evenly added over time.
+
+We evaluate several baseline methods alongside our proposed LTF framework. As shown in Table 7, despite the increased difficulty in data updates and durations, LTF maintains competitive performance in terms of both predictive accuracy and runtime, outperforming other continual learning baselines such as Finetune and iCaRL when paired with the TGAT backbone.
+
+Table 7: Performance comparison across Reddit-Large and Reddit-Long datasets.
+
+| Method | Reddit-Large AP↑ | Reddit-Large Time↓ | Reddit-Long AP↑ | Reddit-Long Time↓ |
+|---|---|---|---|---|
+| Joint-TGAT | 0.02042 | 107.73 | 0.0734 | 174.02 |
+| Finetune-TGAT | 0.00237 | 6.37 | 0.0113 | 54.16 |
+| iCaRL-TGAT | 0.00747 | 14.71 | 0.0354 | 60.03 |
+| LTF-TGAT | 0.01043 | 37.21 | 0.0499 | 110.35 |
+
+# 5. Conclusion
+
+This paper introduces a novel challenge of updating models in temporal graphs with open-class dynamics, termed temporal graph continual learning (TGCL). Unlike existing problems, TGCL necessitates adapting to both emerging new-class data and evolving old-class data, requiring model updates to be both effective and efficient. Our proposed Learning Towards the Future (LTF) method addresses TGCL by selectively learning from representative subsets of old classes, a strategy substantiated by theoretical analysis. Experiments on real-life datasets show that LTF effectively mitigates forgetting with minimal additional cost.
+
+# Acknowledgements
+
+Lei Chen's work is partially supported by National Key Research and Development Program of China Grant No. 2023YFF0725100, National Science Foundation of China (NSFC) under Grant No. U22B2060, Guangdong-Hong Kong Technology Innovation Joint Funding Scheme Project No. 2024A0505040012, the Hong Kong RGC GRF Project 16213620, RIF Project R6020-19, AOE Project AoE/E-603/18, Theme-based project TRS T41-603/20R, CRF Project C2004-21G, Guangdong Province Science and Technology Plan Project 2023A0505030011, Guangzhou municipality big data intelligence key lab, 2023A03J0012, Hong Kong ITC ITF grants MHX/078/21 and PRP/004/22FX, Zhujiang scholar program 2021JC02X170, Microsoft Research Asia Collaborative Research Grant, HKUST-Webank joint research lab and 2023 HKUST Shenzhen-Hong Kong Collaborative Innovation Institute Green Sustainability Special Fund, from Shui On Xintiandi and the InnoSpace GBA. Xun Jian's work is partially supported by the Fundamental Research Funds for the Central Universities No. D5000240167.
+
+# Impact Statement
+
+This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
+
+# References
+
+URL https://www.yelp.com/dataset.
+Alom, M. Z., Taha, T. M., Yakopcic, C., Westberg, S., Sidike, P., Nasrin, M. S., Hasan, M., Essen, B. C. V., Awwal, A. A. S., and Asari, V. K. A state-of-the-art survey on deep learning theory and architectures. Electronics, 2019. URL https://api.semanticscholar.org/CorpusID:115606413.
+Baumgartner, J., Zannettou, S., Keegan, B., Squire, M., and Blackburn, J. The pushshift reddit dataset. In Choudhury, M. D., Chunara, R., Culotta, A., and Welles, B. F. (eds.), Proceedings of the Fourteenth International AAAI Conference on Web and Social Media, ICWSM 2020, Held Virtually, Original Venue: Atlanta, Georgia, USA, June 8-11, 2020, pp. 830-839. AAAI Press, 2020. URL https://ojs.aaai.org/index.php/ICWSM/article/view/7347.
+Ben-David, S., Blitzer, J., Crammer, K., Kulesza, A., Pereira, F., and Vaughan, J. W. A theory of learning from different domains. Mach. Learn., 79(1-2):151-175, 2010. doi: 10.1007/S10994-009-5152-4. URL https://doi.org/10.1007/s10994-009-5152-4.
+Chen, X., Wang, J., and Xie, K. Trafficstream: A streaming traffic flow forecasting framework based on graph neural networks and continual learning. In Zhou, Z. (ed.), Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI 2021, Virtual Event / Montreal, Canada, 19-27 August 2021, pp. 3620-3626. ijcai.org, 2021. doi: 10.24963/IJCAI.2021/498. URL https://doi.org/10.24963/ijcai.2021/498.
+Cong, W., Zhang, S., Kang, J., Yuan, B., Wu, H., Zhou, X., Tong, H., and Mahdavi, M. Do we really need complicated model architectures for temporal networks? In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023. URL https://openreview.net/pdf?id=ayPPc0SyLv1.
+Cook, W., Cunningham, W., Pulleyblank, W., and Schrijver, A. Combinatorial Optimization. Wiley Series in Discrete Mathematics and Optimization. Wiley, 2011. ISBN 9781118031391. URL https://books.google.com.sg/books?id=tarLTNwM3gEC.
+de Barros, C. D. T., Mendonça, M. R. F., Vieira, A. B., and Ziviani, A. A survey on embedding dynamic graphs. ACM Comput. Surv., 55(2):10:1-10:37, 2023. doi: 10.1145/3483595. URL https://doi.org/10.1145/3483595.
+Di, S. and Chen, L. Message function search for knowledge graph embedding. In Ding, Y., Tang, J., Sequeda, J. F., Aroyo, L., Castillo, C., and Houben, G. (eds.), Proceedings of the ACM Web Conference 2023, WWW 2023, Austin, TX, USA, 30 April 2023 - 4 May 2023, pp. 2633-2644. ACM, 2023. doi: 10.1145/3543507.3583546. URL https://doi.org/10.1145/3543507.3583546.
+Di, S., Yao, Q., and Chen, L. Searching to sparsify tensor decomposition for n-ary relational data. In Proceedings of the Web Conference 2021, WWW '21, pp. 4043-4054, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450383127. doi: 10.1145/3442381.3449853. URL https://doi.org/10.1145/3442381.3449853.
+Di, S., Zhang, Y., Yao, Q., Zhou, X., and Chen, L. Efficient latent-based scoring function search for n-ary relational knowledge bases. ACM Trans. Knowl. Discov. Data, 19(2):34:1-34:26, 2025. doi: 10.1145/3707644. URL https://doi.org/10.1145/3707644.
+
+Feng, K., Li, C., Zhang, X., and Zhou, J. Towards open temporal graph neural networks. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023. URL https://openreview.net/pdf?id=N9Pk5iSCzAn.
+French, R. M. and Chater, N. Using noise to compute error surfaces in connectionist networks: A novel means of reducing catastrophic forgetting. Neural Comput., 14(7):1755-1769, 2002. doi: 10.1162/08997660260028700. URL https://doi.org/10.1162/08997660260028700.
+Gao, S., Li, Y., Zhang, X., Shen, Y., Shao, Y., and Chen, L. SIMPLE: efficient temporal graph neural network training at scale with dynamic data placement. Proc. ACM Manag. Data, 2(3):174, 2024. doi: 10.1145/3654977. URL https://doi.org/10.1145/3654977.
+García-Durán, A., Dumancic, S., and Niepert, M. Learning sequence encoders for temporal knowledge graph completion. In Riloff, E., Chiang, D., Hockenmaier, J., and Tsujii, J. (eds.), Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pp. 4816-4821. Association for Computational Linguistics, 2018. URL https://aclanthology.org/D18-1516/.
+Gardner, M. W. and Dorling, S. Artificial neural networks (the multilayer perceptron)—a review of applications in the atmospheric sciences. Atmospheric environment, 32 (14-15):2627-2636, 1998.
+Gretton, A., Borgwardt, K. M., Rasch, M. J., Scholkopf, B., and Smola, A. J. A kernel method for the two-sample-problem. In Scholkopf, B., Platt, J. C., and Hofmann, T. (eds.), Advances in Neural Information Processing Systems 19, Proceedings of the Twentieth Annual Conference on Neural Information Processing Systems, Vancouver, British Columbia, Canada, December 4-7, 2006, pp. 513-520. MIT Press, 2006. URL https://proceedings.neurips.cc/paper/2006/hash/e9fb2eda3d9c55a0d89c98d6c54b5b3e-Abstract.html.
+Hamilton, W. L., Ying, Z., and Leskovec, J. Inductive representation learning on large graphs. In Guyon, I., von Luxburg, U., Bengio, S., Wallach, H. M., Fergus, R., Vishwanathan, S. V. N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pp. 1024-1034, 2017.
+
+Jin, G., Liang, Y., Fang, Y., Shao, Z., Huang, J., Zhang, J., and Zheng, Y. Spatio-temporal graph neural networks for predictive learning in urban computing: A survey. IEEE Transactions on Knowledge and Data Engineering, 2023.
+Kazemi, S. M., Goel, R., Jain, K., Kobyzev, I., Sethi, A., Forsyth, P., and Poupart, P. Representation learning for dynamic graphs: a survey. J. Mach. Learn. Res., 21(1), jan 2020. ISSN 1532-4435.
+Kim, B., Koyejo, O., and Khanna, R. Examples are not enough, learn to criticize! criticism for interpretability. In Lee, D. D., Sugiyama, M., von Luxburg, U., Guyon, I., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pp. 2280-2288, 2016.
+Kim, S., Yun, S., and Kang, J. Dygrain: An incremental learning framework for dynamic graphs. In Raedt, L. D. (ed.), Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI 2022, Vienna, Austria, 23-29 July 2022, pp. 3157-3163. ijcai.org, 2022. doi: 10.24963/IJCAI.2022/438. URL https://doi.org/10.24963/ijcai.2022/438.
+Kirkpatrick, J., Pascanu, R., Rabinowitz, N. C., Veness, J., Desjardins, G., Rusu, A. A., Milan, K., Quan, J., Ramalho, T., Grabska-Barwinska, A., Hassabis, D., Clopath, C., Kumaran, D., and Hadsell, R. Overcoming catastrophic forgetting in neural networks. CoRR, abs/1612.00796, 2016.
+Kumar, S., Zhang, X., and Leskovec, J. Predicting dynamic embedding trajectory in temporal interaction networks. In Teredesai, A., Kumar, V., Li, Y., Rosales, R., Terzi, E., and Karypis, G. (eds.), Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD 2019, Anchorage, AK, USA, August 4-8, 2019, pp. 1269-1278. ACM, 2019. doi: 10.1145/3292500.3330895. URL https://doi.org/10.1145/3292500.3330895.
+Li, H. and Chen, L. Early: Efficient and reliable graph neural network for dynamic graphs. Proc. ACM Manag. Data, 1(2), jun 2023. doi: 10.1145/3589308. URL https://doi.org/10.1145/3589308.
+Li, Y., Shen, Y., Chen, L., and Yuan, M. Zebra: When temporal graph neural networks meet temporal personalized pagerank. Proc. VLDB Endow., 16(6):1332-1345, 2023. doi: 10.14778/3583140.3583150. URL https://www.vldb.org/pvldb/vol16/p1332-li.pdf.
+
+Li, Z. and Hoiem, D. Learning without forgetting. IEEE Trans. Pattern Anal. Mach. Intell., 40(12):2935-2947, 2018. doi: 10.1109/TPAMI.2017.2773081. URL https://doi.org/10.1109/TPAMI.2017.2773081.
+Li, Z., Jin, X., Li, W., Guan, S., Guo, J., Shen, H., Wang, Y., and Cheng, X. Temporal knowledge graph reasoning based on evolutionary representation learning. In Diaz, F., Shah, C., Suel, T., Castells, P., Jones, R., and Sakai, T. (eds.), SIGIR '21: The 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, Canada, July 11-15, 2021, pp. 408-417. ACM, 2021. doi: 10.1145/3404835.3462963. URL https://doi.org/10.1145/3404835.3462963.
+Liu, H., Yang, Y., and Wang, X. Overcoming catastrophic forgetting in graph neural networks. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pp. 8653-8661. AAAI Press, 2021. doi: 10.1609/AAAI.V35I10.17049. URL https://doi.org/10.1609/aaai.v35i10.17049.
+Liu, H., Di, S., and Chen, L. Incremental tabular learning on heterogeneous feature space. Proc. ACM Manag. Data, 1(1):18:1-18:18, 2023. doi: 10.1145/3588698. URL https://doi.org/10.1145/3588698.
+Liu, H., Di, S., Li, H., Li, S., Chen, L., and Zhou, X. Effective data selection and replay for unsupervised continual learning. In 40th IEEE International Conference on Data Engineering, ICDE 2024, Utrecht, The Netherlands, May 13-16, 2024, pp. 1449-1463. IEEE, 2024. doi: 10.1109/ICDE60146.2024.00119. URL https://doi.org/10.1109/ICDE60146.2024.00119.
+Masana, M., Liu, X., Twardowski, B., Menta, M., Bagdanov, A. D., and van de Weijer, J. Class-incremental learning: Survey and performance evaluation on image classification. IEEE Trans. Pattern Anal. Mach. Intell., 45(5):5513-5533, 2023. doi: 10.1109/TPAMI.2022.3213473. URL https://doi.org/10.1109/TPAMI.2022.3213473.
+Miao, H., Zhao, Y., Guo, C., Yang, B., Zheng, K., Huang, F., Xie, J., and Jensen, C. S. A unified replay-based continuous learning framework for spatio-temporal prediction on streaming data, 2024. URL https://arxiv.org/abs/2404.14999.
+Nemhauser, G. L., Wolsey, L. A., and Fisher, M. L. An analysis of approximations for maximizing submodular set functions-I. Mathematical programming, 14:265-294, 1978.
+Nguyen, G. H., Lee, J. B., Rossi, R. A., Ahmed, N. K., Koh, E., and Kim, S. Continuous-time dynamic network embeddings. In Champin, P., Gandon, F., Lalmas, M., and Ipeirotis, P. G. (eds.), Companion of the The Web Conference 2018 on The Web Conference 2018, WWW 2018, Lyon, France, April 23-27, 2018, pp. 969-976. ACM, 2018. doi: 10.1145/3184558.3191526. URL https://doi.org/10.1145/3184558.3191526.
+Ni, J., Li, J., and McAuley, J. Justifying recommendations using distantly-labeled reviews and fine-grained aspects. In Inui, K., Jiang, J., Ng, V., and Wan, X. (eds.), Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 188-197, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1018. URL https://aclanthology.org/D19-1018.
+Parisi, G. I., Kemker, R., Part, J. L., Kanan, C., and Wermter, S. Continual lifelong learning with neural networks: A review. Neural Networks, 113:54-71, 2019. doi: 10.1016/J.NEUNET.2019.01.012. URL https://doi.org/10.1016/j.neunet.2019.01.012.
+Rebuffi, S., Kolesnikov, A., Sperl, G., and Lampert, C. H. icarl: Incremental classifier and representation learning. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pp. 5533-5542. IEEE Computer Society, 2017. doi: 10.1109/CVPR.2017.587. URL https://doi.org/10.1109/CVPR.2017.587.
+Redko, I., Morvant, E., Habrard, A., Sebban, M., and Bennani, Y. A survey on domain adaptation theory. CoRR, abs/2004.11829, 2020a. URL https://arxiv.org/abs/2004.11829.
+Redko, I., Morvant, E., Habrard, A., Sebban, M., and Bennani, Y. A survey on domain adaptation theory. CoRR, abs/2004.11829, 2020b. URL https://arxiv.org/abs/2004.11829.
+Rossi, E., Chamberlain, B., Frasca, F., Eynard, D., Monti, F., and Bronstein, M. M. Temporal graph networks for deep learning on dynamic graphs. CoRR, abs/2006.10637, 2020. URL https://arxiv.org/abs/2006.10637.
+Schölkopf, B., Sung, K. K., Burges, C. J. C., Girosi, F., Niyogi, P., Poggio, T. A., and Vapnik, V. Comparing support vector machines with Gaussian kernels to radial basis function classifiers. IEEE Trans. Signal Process., 45(11):2758-2765, 1997. doi: 10.1109/78.650102. URL https://doi.org/10.1109/78.650102.
+Shi, H. and Wang, H. A unified approach to domain incremental learning with memory: Theory and algorithm. CoRR, abs/2310.12244, 2023. doi: 10.48550/ARXIV.2310.12244. URL https://doi.org/10.48550/arXiv.2310.12244.
+Skarding, J., Gabrys, B., and Musial, K. Foundations and modeling of dynamic networks using dynamic graph neural networks: A survey. IEEE Access, 9: 79143-79168, 2021. doi: 10.1109/ACCESS.2021.3082932. URL https://doi.org/10.1109/ACCESS.2021.3082932.
+Su, J., Zou, D., Zhang, Z., and Wu, C. Towards robust graph incremental learning on evolving graphs. In Proceedings of the 40th International Conference on Machine Learning, ICML'23. JMLR.org, 2023a.
+Su, J., Zou, D., Zhang, Z., and Wu, C. Towards robust graph incremental learning on evolving graphs. In Krause, A., Brunskill, E., Cho, K., Engelhardt, B., Sabato, S., and Scarlett, J. (eds.), International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pp. 32728-32748. PMLR, 2023b. URL https://proceedings.mlr.press/v202/su23a.html.
+Tian, Z., Zhang, D., and Dai, H. Continual learning on graphs: A survey. CoRR, abs/2402.06330, 2024. doi: 10.48550/ARXIV.2402.06330. URL https://doi.org/10.48550/arXiv.2402.06330.
+Trivedi, R., Farajtabar, M., Biswal, P., and Zha, H. Dyrep: Learning representations over dynamic graphs. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=HyePrhR5KX.
+Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. Attention is all you need. In NeurIPS, pp. 5998-6008, 2017.
+Wang, J., Zhu, W., Song, G., and Wang, L. Streaming graph neural networks with generative replay. In Zhang, A. and Rangwala, H. (eds.), KDD '22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, August 14 - 18, 2022, pp. 1878-1888. ACM, 2022. doi: 10.1145/3534678.3539336. URL https://doi.org/10.1145/3534678.3539336.
+
+Wang, Y., Chang, Y., Liu, Y., Leskovec, J., and Li, P. Inductive representation learning in temporal networks via causal anonymous walks. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021a. URL https://openreview.net/forum?id=KYPz4YsCPj.
+Wang, Z., Di, S., and Chen, L. Autogel: An automated graph neural network with explicit link information. In Ranzato, M., Beygelzimer, A., Dauphin, Y. N., Liang, P., and Vaughan, J. W. (eds.), Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pp. 24509-24522, 2021b.
+Wen, Z. and Fang, Y. TREND: temporal event and node dynamics for graph representation learning. In Laforest, F., Troncy, R., Simperl, E., Agarwal, D., Gionis, A., Herman, I., and Medini, L. (eds.), WWW '22: The ACM Web Conference 2022, Virtual Event, Lyon, France, April 25 - 29, 2022, pp. 1159-1169. ACM, 2022. doi: 10.1145/3485447.3512164. URL https://doi.org/10.1145/3485447.3512164.
+Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., and Yu, P. S. A comprehensive survey on graph neural networks. IEEE Trans. Neural Networks Learn. Syst., 32(1):4-24, 2021. doi: 10.1109/TNNLS.2020.2978386. URL https://doi.org/10.1109/TNNLS.2020.2978386.
+Xu, D., Ruan, C., Körpeoglu, E., Kumar, S., and Achan, K. Inductive representation learning on temporal graphs. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020. URL https://openreview.net/forum?id=rJeW1yHYwH.
+Yan, X., Song, T., Jiao, Y., He, J., Wang, J., Li, R., and Chu, W. Spatio-temporal hypergraph learning for next POI recommendation. In Chen, H., Duh, W. E., Huang, H., Kato, M. P., Mothe, J., and Poblete, B. (eds.), Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2023, Taipei, Taiwan, July 23-27, 2023, pp. 403-412. ACM, 2023. doi: 10.1145/3539618.3591770. URL https://doi.org/10.1145/3539618.3591770.
+Yang, L., Adam, S., and Chatelain, C. Dynamic graph representation learning with neural networks: A survey. CoRR, abs/2304.05729, 2023. doi: 10.48550/ARXIV.2304.05729. URL https://doi.org/10.48550/arXiv.2304.05729.
+Yu, L., Sun, L., Du, B., and Lv, W. Towards better dynamic graph learning: New architecture and unified library. In Oh, A., Naumann, T., Globerson, A., Saenko, K., Hardt, M., and Levine, S. (eds.), Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023, 2023.
+Zhang, X., Song, D., and Tao, D. Sparsified subgraph memory for continual graph representation learning. In 2022 IEEE International Conference on Data Mining (ICDM), pp. 1335-1340, 2022. doi: 10.1109/ICDM54844.2022.00177.
+Zhou, F. and Cao, C. Overcoming catastrophic forgetting in graph neural networks with experience replay. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pp. 4714-4722. AAAI Press, 2021. doi: 10.1609/AAAI.V35I5.16602. URL https://doi.org/10.1609/aaai.v35i5.16602.
+Zhou, H., Zheng, D., Nisa, I., Ioannidis, V., Song, X., and Karypis, G. Tgl: a general framework for temporal gnn training on billion-scale graphs. Proc. VLDB Endow., 15(8):1572-1580, apr 2022. ISSN 2150-8097. doi: 10.14778/3529337.3529342. URL https://doi.org/10.14778/3529337.3529342.
+Zhou, L., Yang, Y., Ren, X., Wu, F., and Zhuang, Y. Dynamic network embedding by modeling triadic closure process. In McIlraith, S. A. and Weinberger, K. Q. (eds.), Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pp. 571-578. AAAI Press, 2018. doi: 10.1609/AAAI.V32I1.11257. URL https://doi.org/10.1609/aaai.v32i1.11257.
+Zhou, Z., Huang, Q., Yang, K., Wang, K., Wang, X., Zhang, Y., Liang, Y., and Wang, Y. Maintaining the status quo: Capturing invariant relations for OOD spatiotemporal learning. In Singh, A. K., Sun, Y., Akoglu, L., Gunopulos, D., Yan, X., Kumar, R., Ozcan, F., and Ye, J. (eds.), Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD 2023, Long Beach, CA, USA, August 6-10, 2023, pp. 3603-3614. ACM, 2023. doi: 10.1145/3580305.3599421. URL https://doi.org/10.1145/3580305.3599421.
+
+Zhou, Z., Huang, Q., Wang, B., Hou, J., Yang, K., Liang, Y., and Wang, Y. Coms2t: A complementary spatiotemporal learning system for data-adaptive model evolution. CoRR, abs/2403.01738, 2024. doi: 10.48550/ARXIV.2403.01738. URL https://doi.org/10.48550/arXiv.2403.01738.
+
+# A. Important Notations
+
+The important notations used in the paper are summarized in Tab. 8 below. The relationships among $G_{N-1}$ , $G_N^{\text{old}}$ and $G_N^{\text{new}}$ are illustrated in Fig. 6.
+
+Table 8: Important Notations
+
+| Notation | Meaning |
+|---|---|
+| $G \sim \mathcal{G}$ | Temporal graph $G$ that follows the distribution $\mathcal{G}$ |
+| $V, E, T, Y$ | Nodes $V$, events $E$, time period $T$ and class set $Y$ of $G$ |
+| $e = (u_t, v_t, t)$ | An event $e \in E$ that links nodes $u_t, v_t \in V$ at $t \in T$ |
+| $v_t, x_t$ | The node $v$ and its feature $x$ at time $t$ |
+| $T_N, T_n$ | The latest period $N$ and a past period $n < N$ |
+| $h \in \mathcal{H}$ | Model $h$ from hypothesis space $\mathcal{H}$ |
+| $\epsilon(\cdot), \hat{\epsilon}(\cdot)$ | Classification error on a distribution and on a finite set |
+| $d_{\mathcal{H}\Delta\mathcal{H}}(\cdot)$ | Discrepancy between two distributions |
+| $d_{MMD}(\cdot)$ | Estimated Maximum Mean Discrepancy |
+
+
+Figure 6: An illustration on the relationships among $G_{N-1}$ , $G_N^{old}$ and $G_N^{new}$ . $G_N^{old}$ and $G_N^{new}$ DO NOT overlap over nodes, but DO share the events connecting old and new class nodes.
+
+# B. Additional Discussion on Related Works
+
+Besides the related works in TGL discussed in the main content, we provide additional discussions on other TGL variants in this section. Firstly, the efficiency issue in TGL is addressed by redesigning the training framework (Zhou et al., 2022; Gao et al., 2024), sampling the representative nodes (Li & Chen, 2023), and integrating the random walk with temporal graph neural networks (Li et al., 2023).
+
+Beyond simple temporal graphs, research has also explored temporal hyper-graphs (Yan et al., 2023), spatio-temporal graphs (Jin et al., 2023), and temporal knowledge graphs (Li et al., 2021; García-Durán et al., 2018). However, these methods typically assume a fixed set of node or entity labels and still encounter the forgetting issue when adapting to new classes, making them foundational but limited backbones for studying the TGCL problem.
+
+There are also works addressing the OOD generalization problem in temporal graphs (Zhou et al., 2023; 2024), which address a fundamentally different problem from ours. These works aim to improve model performance on test datasets that have distributions differing from the training datasets.
+
+In contrast, we focus on the continual learning problem that tackles the challenge of selecting new data subsets to efficiently fine-tune the past model, ensuring its effectiveness and preventing forgetting at the new period.
+
+Some recent continual learning works have also applied domain adaptation theory, namely UDIL (Shi & Wang, 2023) and SSRM (Su et al., 2023a). However, both differ substantially from ours. UDIL focuses on automatically finding the most suitable hyperparameters for the losses to best balance model stability and plasticity when learning new datasets. SSRM directly minimizes the distribution discrepancy between old and new data. Our work takes an orthogonal direction by selecting the most representative subset from the old-class data. Besides, our TGCL problem differs from previous continual learning settings in considering the evolving old-class data.
+
+# C. Proofs of Theorems
+
+We first introduce and prove the important Lemma C.1, which was originally proposed in (Ben-David et al., 2010):
+
+Lemma C.1. For any hypothesis $h$ , $h' \in \mathcal{H}$ and any two different data distributions $\mathcal{D}, \mathcal{D}'$ ,
+
+$$
+\left| \epsilon \left(h, h ^ {\prime} \mid \mathcal {D}\right) - \epsilon \left(h, h ^ {\prime} \mid \mathcal {D} ^ {\prime}\right) \right| \leq \frac {1}{2} d _ {\mathcal {H} \Delta \mathcal {H}} \left(\mathcal {D}, \mathcal {D} ^ {\prime}\right), \tag {12}
+$$
+
+where $\epsilon (h,h^{\prime}|\mathcal{D}):= \mathbb{E}_{x\in \mathcal{D}}[h(x)\neq h^{\prime}(x)]$ is the expected prediction differences of $h$ and $h^\prime$ on $\mathcal{D}$ , and
+
+$$
+\begin{aligned}
+d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}, \mathcal{D}') := 2 \sup_{h, h' \in \mathcal{H}} \big| &\Pr_{x \in \mathcal{D}}[h(x) \neq h'(x)] \\
+&- \Pr_{x \in \mathcal{D}'}[h(x) \neq h'(x)] \big|
+\end{aligned}
+$$
+
+is the discrepancy between the two distributions.
+
+Proof. By definition, we have
+
+$$
+\begin{aligned}
+d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}, \mathcal{D}') &= 2 \sup_{h, h' \in \mathcal{H}} \left| \Pr_{x \in \mathcal{D}}[h(x) \neq h'(x)] - \Pr_{x \in \mathcal{D}'}[h(x) \neq h'(x)] \right| \\
+&= 2 \sup_{h, h' \in \mathcal{H}} \left| \epsilon(h, h' \mid \mathcal{D}) - \epsilon(h, h' \mid \mathcal{D}') \right| \\
+&\geq 2 \left| \epsilon(h, h' \mid \mathcal{D}) - \epsilon(h, h' \mid \mathcal{D}') \right|. \tag{13}
+\end{aligned}
+$$
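+To make the quantities in Lemma C.1 concrete, they can be estimated empirically; a minimal sketch with a toy hypothesis class of threshold classifiers (illustrative only: the final check holds by construction, since the empirical $d_{\mathcal{H}\Delta\mathcal{H}}$ is defined as twice the supremum of the disagreement gaps):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+# two finite samples standing in for the distributions D and D'
+D  = rng.normal(0.0, 1.0, 500)
+Dp = rng.normal(1.0, 1.0, 500)
+# a small hypothesis class of threshold classifiers
+H = [lambda x, t=t: x > t for t in np.linspace(-2, 2, 9)]
+
+def eps(h, hp, X):
+    # empirical disagreement of h and h' on the sample X
+    return float(np.mean(h(X) != hp(X)))
+
+# empirical d_{HΔH}: twice the largest disagreement gap over hypothesis pairs
+d = 2 * max(abs(eps(h, hp, D) - eps(h, hp, Dp)) for h in H for hp in H)
+
+# Lemma C.1 on the samples: every pair's gap is bounded by d / 2
+assert all(abs(eps(h, hp, D) - eps(h, hp, Dp)) <= d / 2 + 1e-12
+           for h in H for hp in H)
+```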
+
+
+
+Based on Lemma C.1, Theorem 3.1 can be proved as follows.
+
+Theorem 3.1. Let $\mathcal{G}_N^{old}$ , $\mathcal{G}_N^{sub}$ be the distributions of $G_N^{old}$ and $G_N^{sub}$ . Let $h\in \mathcal{H}$ be a function in the hypothesis space $\mathcal{H}$ and $\tilde{h}_N^{sub}$ be the function optimized on $\mathcal{G}_N^{sub}$ . The classification error on $\mathcal{G}_N^{old}$ then has the following upper bound:
+
+$$
+\begin{aligned}
+\min_{h \in \mathcal{H}} \epsilon(h \mid \mathcal{G}_N^{old}) \leq \min_{h, \mathcal{G}_N^{sub}} \Big[ &\epsilon(\tilde{h}_N^{sub} \mid \mathcal{G}_N^{old}) + \frac{1}{2} d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{G}_N^{old}, \mathcal{G}_N^{sub}) \\
+&+ \epsilon(h, \tilde{h}_N^{sub} \mid \mathcal{G}_N^{sub}) \Big].
+\end{aligned}
+$$
+
+Algorithm 1 Pseudo Code
+1: Input: $G_N^{old}, G_N^{new}, \tilde{h}_{N-1}$
+2: $G_N^{sub} \gets \{\}, G_N^{sim} \gets \{\}$
+3: Partition $G_N^{old}$ into parts with sizes $p$ , resulting in $W = \lceil |G_N^{old}| / p \rceil$ parts $\{G_{N,w}^{old}\}_{w=1}^W$
+4: for $G_{N,w}^{old} \in \{G_{N,w}^{old}\}_{w=1}^W$ do
+5: Select $\tilde{G}_{N,w}^{sub}$ of size $m/W$ by optimizing Eq.(8)
+6: Select $\tilde{G}_{N,w}^{sim}$ of size $m/W$ by optimizing Eq.(10)
+7: $G_N^{sub} \gets G_N^{sub} \cup \tilde{G}_{N,w}^{sub}, G_N^{sim} \gets G_N^{sim} \cup \tilde{G}_{N,w}^{sim}$
+8: end for
+9: $\tilde{h}_N = \arg \min_{h \in \mathcal{H}} \hat{\epsilon}(h|G_N^{new}) + \hat{\epsilon}(h|G_N^{sub}) + \beta l_{dst}(G_N^{sim}, G_N^{sub}|h)$
+10: return $\tilde{h}_N$
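The partition-and-select loop of Algorithm 1 can be sketched structurally as follows. This is only an illustration, not the paper's implementation: `select_by_error` and `select_by_distribution` are hypothetical stand-ins for the optimizations of Eq.(8) and Eq.(10), toy nodes carry scalar errors and embeddings, and the final training step (line 9) is omitted.

```python
import random

def partition(nodes, p):
    # Line 3: split G_N^old into W = ceil(|G|/p) random parts of size <= p.
    random.shuffle(nodes)
    return [nodes[i:i + p] for i in range(0, len(nodes), p)]

def select_by_error(part, k):
    # Hypothetical stand-in for Eq.(8): keep the k highest-error nodes.
    return sorted(part, key=lambda n: n["err"], reverse=True)[:k]

def select_by_distribution(part, k):
    # Hypothetical stand-in for Eq.(10): keep the k nodes closest to the
    # part's mean embedding, as a crude distribution-matching proxy.
    mean = sum(n["emb"] for n in part) / len(part)
    return sorted(part, key=lambda n: abs(n["emb"] - mean))[:k]

def ltf_select(g_old, p, m):
    parts = partition(list(g_old), p)
    k = max(1, m // len(parts))          # m/W samples from each part
    g_sub, g_sim = [], []
    for part in parts:                   # lines 4-8 of Algorithm 1
        g_sub += select_by_error(part, k)
        g_sim += select_by_distribution(part, k)
    return g_sub, g_sim

random.seed(0)
nodes = [{"id": i, "err": random.random(), "emb": random.random()} for i in range(40)]
g_sub, g_sim = ltf_select(nodes, p=10, m=8)
print(len(g_sub), len(g_sim))  # -> 8 8
```

Because each part is processed independently, the per-part selections can run in parallel, which is what the complexity analysis in Appendix F relies on.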
+
+Proof. From the triangle inequality in (Ben-David et al., 2010) and Lemma C.1,
+
+$$
+\begin{aligned} \epsilon(h|\mathcal{G}_N^{old}) &\leq \epsilon(h,\tilde{h}_N^{sub}|\mathcal{G}_N^{old}) + \epsilon(\tilde{h}_N^{sub}|\mathcal{G}_N^{old}) \\ &= \epsilon(\tilde{h}_N^{sub}|\mathcal{G}_N^{old}) + \epsilon(h,\tilde{h}_N^{sub}|\mathcal{G}_N^{old}) - \epsilon(h,\tilde{h}_N^{sub}|\mathcal{G}_N^{sub}) + \epsilon(h,\tilde{h}_N^{sub}|\mathcal{G}_N^{sub}) \\ &\leq \epsilon(\tilde{h}_N^{sub}|\mathcal{G}_N^{old}) + \left| \epsilon(h,\tilde{h}_N^{sub}|\mathcal{G}_N^{old}) - \epsilon(h,\tilde{h}_N^{sub}|\mathcal{G}_N^{sub}) \right| + \epsilon(h,\tilde{h}_N^{sub}|\mathcal{G}_N^{sub}) \\ &\leq \epsilon(\tilde{h}_N^{sub}|\mathcal{G}_N^{old}) + \frac{1}{2} d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{G}_N^{old},\mathcal{G}_N^{sub}) + \epsilon(h,\tilde{h}_N^{sub}|\mathcal{G}_N^{sub}). \end{aligned}
+$$
+
+As this inequality holds for all $h \in \mathcal{H}$ and all choices of $G_N^{sub}$, the minimum of the left-hand side is always less than the minimum of the right-hand side.
+
+# D. Pseudo Code of LTF
+
+The pseudo code of the Learning Towards the Future (LTF) method is presented in Algorithm 1 below.
+
+# E. Definition of RBF Kernel
+
+Definition (Radial Basis Function Kernel (Schölkopf et al., 1997)). Consider the radial basis function kernel $\mathbb{K}$ with entries $k_{i,j} = k(x_i,x_j) = \exp (-\gamma ||x_i - x_j||)$ evaluated on a sample set $X$ with non-duplicated points, i.e., $x_{i}\neq x_{j}$ for all $x_{i},x_{j}\in X$ with $i \neq j$. The off-diagonal kernel entries $k_{i,j}$, $i\neq j$, decrease monotonically as $\gamma$ increases.
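The monotonicity claim is easy to check numerically. The snippet below uses the kernel exactly as written above, $\exp(-\gamma\|x_i-x_j\|)$ (the more common RBF variant squares the distance, but the monotonicity in $\gamma$ holds either way, since the exponent is strictly negative for distinct points); the sample points and $\gamma$ grid are illustrative.

```python
import math

def rbf(x, y, gamma):
    # Kernel entry as in the definition: exp(-gamma * ||x - y||).
    return math.exp(-gamma * abs(x - y))

x_i, x_j = 0.2, 0.9   # distinct points, so ||x_i - x_j|| > 0
entries = [rbf(x_i, x_j, g) for g in (0.5, 1.0, 2.0, 4.0)]

# Off-diagonal entries shrink monotonically as gamma grows.
assert all(a > b for a, b in zip(entries, entries[1:]))
```

Diagonal entries, by contrast, stay at $\exp(0)=1$ for every $\gamma$, which is why only the off-diagonal entries are affected.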
+
+# F. Complexity Analysis
+
+Denote the sizes of $G_N^{old}$, $G_N^{sub}$ and $G_N^{sim}$ as $r$, $m$ and $m'$, respectively, and recall that $G_N^{old}$ is partitioned into $W$ groups. The time complexity is analyzed as follows:
+
+Selection: For each partition, we first need $O(r / W)$ to obtain the errors and embeddings of all nodes. When selecting $G_N^{sub}$ from each partition, it takes $O(m / W)$ for loss estimation and $O(rm / W^2)$ for distribution estimation, which finally gives $O(m / W + rm / W^2)$. For $G_N^{sim}$, we only need $O(rm' / W^2)$ for distribution estimation. Because different partitions can be processed in parallel, the overall selection complexity is $O((r + m) / W + r(m + m') / W^2)$.
+
+Learning: When learning $G_N^{sub}$ , the complexity for error is $O(m)$ and that for distribution alignment is $O(mm')$ . When learning $G_N^{new}$ , the complexity is $O(|G_N^{new}|)$ . So the overall learning complexity is $O(mm' + |G_N^{new}|)$ .
+
+# G. Data Set Details
+
+Yelp Yelp is a business review dataset that contains a large number of reviews of different businesses. To build the Yelp dataset, we regard the businesses as the nodes of the temporal graph and use the business categories as the class labels. From 2015 to 2019, we take each of the five years as one period of the temporal graph. Reviews from the same user within a month create events among the businesses they evaluate. For each period, we select the three largest categories as the new classes and include the corresponding businesses in the temporal graphs from then on. We extract word embeddings from the reviews of each business with GloVe-200d and average these 200-dimensional embeddings to obtain the initial node features.
+
+Reddit Reddit is an online forum dataset that contains a large number of posts and comments on different topics. To build the Reddit dataset, we regard the posts as the nodes of the temporal graph, following the paradigm of (Hamilton et al., 2017), and use the post topics as the class labels. We take January 1st, 2017 as the start date and construct temporal graphs with a period of 20 days. Comments from the same user within 5 days create events among the posts they comment on. Three periods of temporal graphs are created. For each period, we select the 5 topics with roughly even numbers of posts as the new classes. We extract word embeddings from the comments of each post with GloVe-200d and average these 200-dimensional embeddings to obtain the initial node features.
+
+Amazon Amazon is a product review dataset that contains a large number of reviews of different products. To build the Amazon dataset, we regard the products as the nodes of the temporal graph and use the product categories as the class labels. Starting from January 1st, 2016, we take every 24 days as one period of the temporal graph. Reviews from the same user within 5 days create events among the products they review. Three periods of temporal graphs are created. For each period, we select the
+
+Table 9: The performance of different methods on the TGAT backbone. The reported values are the mean and standard deviation of AP, AF and Time.
+
+| Method | AP↑ (Yelp) | AF↓ (Yelp) | Time↓ (Yelp) | AP↑ (Reddit) | AF↓ (Reddit) | Time↓ (Reddit) | AP↑ (Amazon) | AF↓ (Amazon) | Time↓ (Amazon) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Joint | 0.0810±0.0033 | — | 58.37±0.60 | 0.1378±0.0031 | — | 50.50±0.50 | 0.1477±0.0014 | — | 128.71±1.40 |
+| Finetune | 0.0141±0.0045 | 0.0843±0.0000 | 9.11±0.10 | 0.0312±0.0051 | 0.1550±0.0000 | 14.93±0.14 | 0.0340±0.0213 | 0.1408±0.0000 | 65.81±2.60 |
+| LwF | 0.0209±0.0050 | 0.0620±0.0028 | 13.90±0.22 | 0.0439±0.0057 | 0.1091±0.0046 | 23.67±0.44 | 0.0303±0.0097 | 0.1024±0.0105 | 102.92±3.92 |
+| EWC | 0.0443±0.0142 | 0.0408±0.0178 | 9.19±0.15 | 0.0467±0.0063 | 0.1384±0.0136 | 14.95±0.14 | 0.0524±0.0292 | 0.1152±0.0363 | 68.37±3.69 |
+| iCaRL | 0.0607±0.0035 | 0.0198±0.0066 | 11.57±0.14 | 0.0602±0.0076 | 0.0860±0.0274 | 19.22±0.21 | 0.0699±0.0127 | 0.0794±0.0145 | 70.03±0.48 |
+| ER | 0.0521±0.0098 | 0.0332±0.0108 | 11.63±0.15 | 0.0622±0.0146 | 0.0783±0.0157 | 19.07±0.13 | 0.0799±0.0148 | 0.0617±0.0293 | 69.06±3.11 |
+| SSM | 0.0552±0.0070 | 0.0232±0.0110 | 11.82±0.21 | 0.0308±0.0068 | 0.1203±0.0143 | 82.99±2.23 | 0.0723±0.0164 | 0.0912±0.0076 | 145.27±3.78 |
+| OTGNet* | 0.0648±0.0120 | 0.0236±0.0139 | 316.15±17.13 | 0.0868±0.0071 | 0.0518±0.0013 | 49.42±5.55 | 0.1031±0.0259 | 0.0459±0.0232 | 709.49±42.81 |
+| URCL | 0.0562±0.0091 | 0.0303±0.0085 | 11.57±0.13 | 0.0726±0.0140 | 0.0649±0.0170 | 20.13±0.55 | 0.0915±0.0065 | 0.0431±0.0130 | 70.32±2.19 |
+| LTF | 0.0682±0.0108 | 0.0195±0.0130 | 25.05±0.37 | 0.0871±0.0052 | 0.0474±0.0097 | 39.16±0.62 | 0.1110±0.0018 | 0.0165±0.0041 | 72.94±2.17 |
+
+Table 10: The performance of different methods on the DyGFormer backbone. The reported values are the mean and standard deviation of AP, AF and Time.
+
+| Method | AP↑ (Yelp) | AF↓ (Yelp) | Time↓ (Yelp) | AP↑ (Reddit) | AF↓ (Reddit) | Time↓ (Reddit) | AP↑ (Amazon) | AF↓ (Amazon) | Time↓ (Amazon) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Joint | 0.0813±0.0038 | — | 95.11±2.04 | 0.1256±0.0140 | — | 70.64±1.22 | 0.1500±0.0054 | — | 177.38±4.27 |
+| Finetune | 0.0172±0.0008 | 0.0800±0.0000 | 14.43±0.40 | 0.0360±0.0008 | 0.1433±0.0000 | 20.58±0.39 | 0.0551±0.0154 | 0.1517±0.0258 | 88.34±0.85 |
+| LwF | 0.0399±0.0079 | 0.0386±0.0087 | 26.03±0.37 | 0.0469±0.0107 | 0.0944±0.0112 | 37.53±0.90 | 0.0763±0.0207 | 0.0856±0.0189 | 155.03±2.31 |
+| EWC | 0.0601±0.0074 | 0.0295±0.0112 | 14.24±0.17 | 0.0521±0.0190 | 0.1046±0.0198 | 20.20±0.14 | 0.1005±0.0105 | 0.0832±0.0135 | 89.32±1.15 |
+| iCaRL | 0.0558±0.0095 | 0.0214±0.0156 | 18.31±0.29 | 0.0917±0.0095 | 0.0248±0.0125 | 26.34±0.24 | 0.0945±0.0255 | 0.0775±0.0396 | 92.36±1.03 |
+| ER | 0.0546±0.0066 | 0.0276±0.0094 | 18.49±0.85 | 0.0771±0.0172 | 0.0386±0.0170 | 26.65±0.30 | 0.1026±0.0029 | 0.0650±0.0057 | 92.11±0.97 |
+| SSM | 0.0560±0.0009 | 0.0235±0.0044 | 18.27±0.29 | 0.0723±0.0246 | 0.0641±0.0310 | 26.09±0.59 | 0.1063±0.0197 | 0.0568±0.0170 | 92.15±1.48 |
+| OTGNet* | — | — | — | — | — | — | — | — | — |
+| URCL | 0.0584±0.0064 | 0.0216±0.0083 | 20.13±0.87 | 0.0902±0.0112 | 0.0284±0.0182 | 27.58±1.33 | 0.1089±0.0110 | 0.0566±0.0112 | 93.43±2.45 |
+| LTF | 0.0681±0.0064 | 0.0096±0.0073 | 51.80±0.75 | 0.1134±0.0089 | 0.0081±0.0120 | 58.56±1.17 | 0.1253±0.0139 | 0.0383±0.0121 | 101.06±3.50 |
+
+
+Figure 7: The average precision (AP) of LTF and the baselines at each period, based on DyGFormer.
+
+
+Figure 8: The average forgetting (AF) of LTF and the baselines at each period, based on DyGFormer.
+
+three products with roughly even numbers of reviews as the new classes. We extract word embeddings from the reviews of each product with GloVe-200d and average these 200-dimensional embeddings to obtain the initial node features.
+
+# H. Selection on the Hyper-parameters
+
+The hyperparameters related to the backbone models are selected within the reported range of DyGFormer (Yu et al., 2023) based on prior experience; we do not conduct further tuning of the backbone performance. The hyperparameters of our method are selected by grid search. The search ranges and results for important hyperparameters are reported in Sec. 4.4, with standard deviations reported as well. The hyperparameters of the baselines are selected by grid search as well; their names and values are: weight of the regularization loss (LwF, EWC): [0.1, 0.5, 1, 2]; size of the exemplar set (iCaRL, ER, SSM, OTGNet): 1000 for Reddit and Amazon and 500 for Yelp, the same as ours for a fair comparison; number of maintained neighbors (SSM): [5, 10, 20]. The other hyperparameters of OTGNet (Feng et al., 2023) are searched in the same ranges as reported in the original paper. The search is performed with the hyperopt package using 10 iterations.
+
+# I. Full Results on Main Experiments
+
+In this section, we present the results, including standard deviations, for Tab. 2 in Sec. 4.2. The detailed results for TGAT are shown in Tab. 9, and those for DyGFormer are provided in Tab. 10. The results demonstrate that LTF not only performs well across all three datasets but also has a relatively low standard deviation, indicating the stability of the method. The standard deviation for Finetune drops below four decimal places on most datasets because its forgetting issue is severe: by the end of the increments, the model consistently forgets most of the old knowledge.
+
+Besides, the per-period performances of all methods based on DyGFormer are shown in Fig. 7 and Fig. 8. Because OTGNet is not compatible with DyGFormer, we exclude it from these figures. The results show that LTF consistently outperforms the other methods in terms of AP and AF across all periods.
+
+# J. Full Results on Ablation Study
+
+The results of the ablation study in Tab. 3 with the standard deviation are shown in Tab. 12 and Tab. 13. The results demonstrate that the proposed LTF consistently outperforms the baselines across all datasets and metrics. The standard deviation of LTF is relatively low, indicating the stability of the method.
+
+# K. Additional Study on Partition Number
+
+In order for random partitioning to preserve the original embedding distribution, the size of each partition should exceed a threshold. Statistically, based on the Dvoretzky-Kiefer-Wolfowitz (DKW) inequality, each partition should have 1152 samples to guarantee that a randomly sampled subset approximates the population distribution with $95\%$ confidence and 0.04 approximation error. Compared with the size of our dataset (60k samples for each old class within a period in Amazon), this threshold is significantly smaller and easily satisfied. In our experiment, we randomly partition the dataset into parts containing 6k samples each, which is sufficient to represent the original embedding distribution.
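The threshold above can be reproduced from the DKW inequality, $\Pr(\sup_x |F_n(x)-F(x)| > \epsilon) \leq 2e^{-2n\epsilon^2}$, assuming the quoted figures correspond to solving $2e^{-2n\epsilon^2} \leq \alpha$ with $\alpha = 0.05$ and $\epsilon = 0.04$; the result matches the ~1152-sample threshold up to rounding.

```python
import math

def dkw_sample_bound(eps, alpha):
    # Smallest n with 2*exp(-2*n*eps^2) <= alpha,
    # i.e. n >= ln(2/alpha) / (2*eps^2).
    return math.ceil(math.log(2 / alpha) / (2 * eps ** 2))

n = dkw_sample_bound(eps=0.04, alpha=0.05)
print(n)  # -> 1153 (ln(40)/0.0032 ≈ 1152.77, so ~1152-1153 samples)
```

With 6k samples per partition, as used in the experiment, the bound is met with a wide margin.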
+
+We further study the impact of different partition sizes based on the DyGFormer backbone and the Amazon dataset; the AP results are presented in Tab. 11. Performance decreases as the number of partitions increases, primarily because fewer samples in each partition result in higher selection errors. On the other hand, random partitioning consistently outperforms the clustering-based methods, which is consistent with our analysis that preserving the original distribution is important for effective selection.
+
+# L. Sensitivity Analysis on DyGFormer
+
+The additional sensitivity analysis results on DyGFormer are shown in Fig. 9. The same conclusions as in Sec. 4.4 can be drawn from this set of results. This further validates the
+
+Table 11: Comparison of Different Partition Sizes.
+
+| Partition Size | 10000 | 5000 | 2500 |
+| --- | --- | --- | --- |
+| k-Means | 0.1104 | 0.1055 | 0.0994 |
+| Hierarchical | 0.1091 | 0.1063 | 0.1002 |
+| Random | 0.1253 | 0.1147 | 0.1027 |
+
+robustness of LTF in addressing TGCL.
+
+# M. Future Directions
+
+While this work focuses on node classification, similar challenges in integrating newly introduced, differently-distributed data are prevalent in other temporal graph tasks, such as link prediction (Di et al., 2021; Wang et al., 2021b; Di & Chen, 2023; Di et al., 2025) with new user profiles or content categories in social networks. By establishing a robust approach for handling open-class dynamics, our framework lays essential groundwork for future research. Additionally, selecting data under various scenarios (Liu et al., 2024) has been a recent trend, and it is worth exploring how to extend our method to these scenarios.
+
+Table 12: Ablation study on the selecting and learning components of LTF based on TGAT with standard deviations. The applied components are noted with Y. The best and second best results are noted in Bold and Underline.
+
+| Err. | Dist. | $l_{dst}(\cdot)$ | AP↑ (Yelp) | AF↓ (Yelp) | Time↓ (Yelp) | AP↑ (Reddit) | AF↓ (Reddit) | Time↓ (Reddit) | AP↑ (Amazon) | AF↓ (Amazon) | Time↓ (Amazon) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Y |  |  | 0.0438±0.0117 | 0.0439±0.0136 |  | 0.0579±0.0060 | 0.0736±0.0064 |  | 0.1063±0.0016 | 0.0185±0.0033 |  |
+|  | Y |  | 0.0565±0.0093 | 0.0322±0.0101 | 11.79±1.08 | 0.0640±0.0054 | 0.0695±0.0073 | 19.28±0.77 | 0.0592±0.0019 | 0.0684±0.0041 | 67.77±2.05 |
+| Y | Y |  | 0.0654±0.0104 | 0.0215±0.0110 |  | 0.0866±0.0048 | 0.0447±0.0067 |  | 0.1004±0.0023 | 0.0078±0.0032 |  |
+| Y | Y | Y | 0.0682±0.0108 | 0.0195±0.0130 | 25.05±0.37 | 0.0871±0.0052 | 0.0474±0.0097 | 39.16±0.62 | 0.1110±0.0018 | 0.0165±0.0041 | 72.94±2.17 |
+
+Table 13: Ablation study on the selecting and learning components of LTF based on DyGFormer with standard deviations. The applied components are noted with Y. The best and second best results are noted in Bold and Underline.
+
+| Err. | Dist. | $l_{dst}(\cdot)$ | AP↑ (Yelp) | AF↓ (Yelp) | Time↓ (Yelp) | AP↑ (Reddit) | AF↓ (Reddit) | Time↓ (Reddit) | AP↑ (Amazon) | AF↓ (Amazon) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Y |  |  | 0.0467±0.0073 | 0.0309±0.0079 |  | 0.0807±0.0084 | 0.0407±0.0220 |  | 0.1203±0.0172 | 0.0456±0.0131 |
+|  | Y |  | 0.0543±0.0056 | 0.0266±0.0076 | 18.51±1.13 | 0.0863±0.0080 | 0.0350±0.0221 | 27.12±2.02 | 0.1161±0.0167 | 0.0465±0.0126 |
+| Y | Y |  | 0.0618±0.0060 | 0.0155±0.0082 |  | 0.0939±0.0083 | 0.0272±0.0119 |  | 0.1231±0.0108 | 0.0366±0.0117 |
+| Y | Y | Y | 0.0681±0.0064 | 0.0096±0.0073 | 51.80±0.75 | 0.1134±0.0089 | 0.0081±0.0120 | 58.56±1.17 | 0.1253±0.0139 | 0.0383±0.0121 |
+
+
+Figure 9: Sensitivity on the key hyper-parameters based on DyGFormer.
+
+
+
+
\ No newline at end of file
diff --git a/aselectivelearningmethodfortemporalgraphcontinuallearning/images.zip b/aselectivelearningmethodfortemporalgraphcontinuallearning/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..a4828fdcf31836f14d12d1cc17b6c350e975b95b
--- /dev/null
+++ b/aselectivelearningmethodfortemporalgraphcontinuallearning/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:09c578b877b70430a939f34428d6a74ff1fcce5c9c7e43c3d58c196904c09fc1
+size 1183251
diff --git a/aselectivelearningmethodfortemporalgraphcontinuallearning/layout.json b/aselectivelearningmethodfortemporalgraphcontinuallearning/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..50d05386ff3bc9848b960a30387c8ea2da600c3d
--- /dev/null
+++ b/aselectivelearningmethodfortemporalgraphcontinuallearning/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f9f06ae52e1b091f15fc83ccdcdf69d271b160ca353f1ab2550a6956365ba8bb
+size 889537
diff --git a/asharperglobalconvergenceanalysisforaveragerewardreinforcementlearningviaanactorcriticapproach/bd76b4df-8f75-475f-a821-53f540ae0fdd_content_list.json b/asharperglobalconvergenceanalysisforaveragerewardreinforcementlearningviaanactorcriticapproach/bd76b4df-8f75-475f-a821-53f540ae0fdd_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..23f40d1207ffde641921f8acfcfd9b9fedb8db6d
--- /dev/null
+++ b/asharperglobalconvergenceanalysisforaveragerewardreinforcementlearningviaanactorcriticapproach/bd76b4df-8f75-475f-a821-53f540ae0fdd_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:428289d9b537fbb11daceacd773e71776c35b249fa02c9ec8e0e72bbf323e28e
+size 209030
diff --git a/asharperglobalconvergenceanalysisforaveragerewardreinforcementlearningviaanactorcriticapproach/bd76b4df-8f75-475f-a821-53f540ae0fdd_model.json b/asharperglobalconvergenceanalysisforaveragerewardreinforcementlearningviaanactorcriticapproach/bd76b4df-8f75-475f-a821-53f540ae0fdd_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..9bf59b51f5f43d3db6218014733a0e9ae84dd8e3
--- /dev/null
+++ b/asharperglobalconvergenceanalysisforaveragerewardreinforcementlearningviaanactorcriticapproach/bd76b4df-8f75-475f-a821-53f540ae0fdd_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:41d1ae0b3416fabe995f0daa98914e7dac1869d3c4eccdc18f44cf8f1466682f
+size 243531
diff --git a/asharperglobalconvergenceanalysisforaveragerewardreinforcementlearningviaanactorcriticapproach/bd76b4df-8f75-475f-a821-53f540ae0fdd_origin.pdf b/asharperglobalconvergenceanalysisforaveragerewardreinforcementlearningviaanactorcriticapproach/bd76b4df-8f75-475f-a821-53f540ae0fdd_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..af8c0598481617bc05d485e447f551c21d119a81
--- /dev/null
+++ b/asharperglobalconvergenceanalysisforaveragerewardreinforcementlearningviaanactorcriticapproach/bd76b4df-8f75-475f-a821-53f540ae0fdd_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1ed0a429976176333cc9e862093391e55c02aa82a0ecdc12c3e07f1ab5309b21
+size 525261
diff --git a/asharperglobalconvergenceanalysisforaveragerewardreinforcementlearningviaanactorcriticapproach/full.md b/asharperglobalconvergenceanalysisforaveragerewardreinforcementlearningviaanactorcriticapproach/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..a7508c2f4e882005024de245fc5bd7ad4b9f77d5
--- /dev/null
+++ b/asharperglobalconvergenceanalysisforaveragerewardreinforcementlearningviaanactorcriticapproach/full.md
@@ -0,0 +1,1063 @@
+# A Sharper Global Convergence Analysis for Average Reward Reinforcement Learning via an Actor-Critic Approach
+
+Swetha Ganesh $^{1,2}$ Washim Uddin Mondal $^{3}$ Vaneet Aggarwal
+
+# Abstract
+
+This work examines average-reward reinforcement learning with general policy parametrization. Existing state-of-the-art (SOTA) guarantees for this problem are either suboptimal or hindered by several challenges, including poor scalability with respect to the size of the state-action space, high iteration complexity, and a significant dependence on knowledge of mixing times and hitting times. To address these limitations, we propose a Multi-level Monte Carlo-based Natural Actor-Critic (MLMC-NAC) algorithm. Our work is the first to achieve a global convergence rate of $\tilde{\mathcal{O}}(1/\sqrt{T})$ for average-reward Markov Decision Processes (MDPs) (where $T$ is the horizon length), using an Actor-Critic approach. Moreover, the convergence rate does not scale with the size of the state space, making the approach applicable even to infinite state spaces.
+
+# 1. Introduction
+
+Reinforcement Learning (RL) is a framework where an agent interacts with a Markovian environment and maximizes the total reward it receives. The temporal dependence of the state transitions makes the problem of RL much more challenging than ordinary stochastic optimization, where data are selected in an independent and identically distributed (i.i.d.) manner. RL problems are typically analyzed via three setups: episodic, discounted reward with an infinite horizon, and average reward with an infinite horizon. The average reward framework is particularly significant for real-world applications, including robotics (Gonzalez et al., 2023), transportation (Al-Abbasi et al., 2019), communication networks (Agarwal et al., 2022), and healthcare (Tamboli et al., 2024). Model-based algorithms (Jaksch et al., 2010; Agrawal & Jia, 2017; Agarwal & Aggarwal, 2023) that learn the state transition kernel from Markovian trajectories are well-known approaches for solving RL problems. However, these methods are typically limited to small state spaces. Policy Gradient (PG) methods, a cornerstone of RL, offer a model-free alternative that naturally supports function approximation (FA), making them well-suited for large state-action spaces. When the size of the state-action space, $SA$, is large or infinite, the framework of FA (also known as general parameterization) indexes the candidate policies by a $d$-dimensional parameter, $\theta$, where $d \ll SA$. Recently, some works have established global convergence guarantees for the average-reward setting with general policy parametrization, which we discuss below.
+
+$^{1}$ School of Industrial Engineering, Purdue University, West Lafayette, USA $^{2}$ Department of Computer Science and Automation, Indian Institute of Science, Bengaluru, India $^{3}$ Department of Electrical Engineering, Indian Institute of Technology, Kanpur, India. Correspondence to: Vaneet Aggarwal .
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+There are two main approaches in this context. The first is the direct PG method, where value functions are estimated directly from sampled trajectories (Bai et al., 2024; Ganesh et al., 2025b). The second is the Temporal Difference (TD)-based policy evaluation approach, commonly known as the Actor-Critic (AC) method (Patel et al., 2024; Wang et al., 2024). Currently, the order-optimal $\tilde{\mathcal{O}}(T^{-1/2})$ convergence rate result, where $T$ is the horizon length, exists only for the direct PG method (Ganesh et al., 2025b). However, these methods face key limitations, including poor scalability to large state-action spaces and a strong reliance on precise knowledge of mixing and hitting times to decorrelate samples, an assumption that is often impractical. In contrast, existing AC algorithms circumvent these issues but are generally more challenging to analyze. More specifically, AC methods employ a TD-based critic to estimate the value function, which helps in reducing the variance. However, this reduction comes at the cost of introducing a bias, in addition to the inherent bias arising due to Markovian sampling. Direct PG methods using Markovian sampling leverage knowledge of mixing time to obtain nearly i.i.d. samples, whereas this would not suffice for AC methods due to the additional bias from the critic. As a result, the state-of-the-art bound for AC methods is $\tilde{\mathcal{O}}(1/T^{1/4})$, which is suboptimal. This raises the following question:
+
+| Algorithm | Infinite States and Actions? | Global Convergence | Model-Free? | Policy Parametrization |
+| --- | --- | --- | --- | --- |
+| UCRL2 (Jaksch et al., 2010) | No | $\tilde{\mathcal{O}}(1/\sqrt{T})$ | No | Tabular |
+| PSRL (Agrawal & Jia, 2017) | No | $\tilde{\mathcal{O}}(1/\sqrt{T})$ | No | Tabular |
+| MDP-OOMD (Wei et al., 2020)$^{(1)}$ | No | $\tilde{\mathcal{O}}(1/\sqrt{T})$ | Yes | Tabular |
+| PPGAE (Bai et al., 2024) | No | $\tilde{\mathcal{O}}(1/T^{1/4})$ | Yes | General |
+| PHPG (Ganesh et al., 2025b) | No | $\tilde{\mathcal{O}}(1/\sqrt{T})$ | Yes | General |
+| NAC-CFA (Wang et al., 2024) | Yes | $\tilde{\mathcal{O}}(1/T^{1/4})$ | Yes | General |
+| MAC (Patel et al., 2024) | Yes | $\tilde{\mathcal{O}}(1/T^{1/4})$ | Yes | General |
+| MLMC-NAC (Algorithm 1) | Yes | $\tilde{\mathcal{O}}(1/\sqrt{T})$ | Yes | General |
+
+Table 1. Summary of the key results on global convergence guarantees for average reward reinforcement learning. (1) This work also analyzes another algorithm using the more general weakly-communicating MDP assumption while achieving a higher rate of $\tilde{\mathcal{O}}\left( {1/{T}^{1/3}}\right)$ .
+
+Is it possible to achieve a state-action space size independent global convergence rate of $\tilde{\mathcal{O}}\left(T^{-1/2}\right)$ for general parameterized policies in average reward infinite-horizon MDPs, using a practical, Actor-Critic approach?
+
+Our Contribution: This work answers this question affirmatively. In particular, we introduce a Multi-level Monte Carlo-based Natural Actor-Critic (MLMC-NAC) algorithm that comprises two major components. The first component, referred to as the Natural Policy Gradient (NPG) subroutine, obtains an approximate NPG direction which is used to update the policy parameter. One sub-task of obtaining the NPG is estimating the advantage function. This is done via the second component, known as the Critic subroutine that achieves its desired target via Temporal Difference (TD) learning. Both NPG and critic subroutines apply MLMC-based gradient estimators that eliminate the use of the mixing time in the algorithm. We establish that MLMC-NAC achieves a global convergence rate of $\tilde{\mathcal{O}}(T^{-1/2})$ which is optimal in the order of the horizon length $T$ . The key contributions in this work are summarized as follows:
+
+- While existing AC analyses often use relatively loose bounds, we refine the analysis to achieve sharper results. Our first step towards this is to show that the global convergence error is bounded by terms proportional to the bias and second-order error in NPG estimation (Lemma 1). Since the critic updates underlie the NPG subroutine, the NPG estimation errors are inherently linked to critic estimation errors.
+- In prior AC works (Wang et al., 2024; Patel et al., 2024), the global convergence bound includes the critic error, $\mathbb{E}\| \xi_t - \xi^*\|$ , where $\xi_{t}$ is the critic estimate at time $t$ and $\xi^{*}$ is the true value. Instead, using Lemma 1 and Theorem 3, our analysis refines this term to $\| \mathbb{E}[\xi_t] - \xi^*\|$ , which can be significantly smaller than the previous estimate.
+- Bounding $\| \mathbb{E}[\xi_t] - \xi^*\|$ still remains challenging due to Markovian noise. The critic update can be interpreted as a linear recursion with Markovian noise. Under i.i.d. noise, this term decays exponentially, but with Markovian noise it can remain constant (Nagaraj et al., 2020). Nagaraj et al. (2020) mitigate this by using one sample every $t_{\mathrm{mix}}$ steps; instead, we leverage MLMC to reduce the bias.
+
+- In Theorem 2, we establish a convergence rate for a generic stochastic linear recursion. Given that both the NPG and critic updates can be viewed as a stochastic linear update, this forms a basis for Theorems 3 and 4.
+- Theorem 1 proves the first $\tilde{\mathcal{O}}(T^{-1/2})$ global convergence result for AC methods.
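The MLMC gradient estimator referred to above (in the form popularized by Dorfman & Levy (2022), which the paper cites) draws a random level $J$ with $\Pr(J=j)=2^{-j}$ and returns $g^{0} + 2^{J}(g^{J}-g^{J-1})$ whenever $J$ does not exceed the maximum level, where $g^{j}$ is an estimate averaged over $2^{j}$ Markovian samples. The sketch below is illustrative only (the level values `g` are made-up placeholders, not gradients from an actual MDP); it verifies the telescoping identity that makes the estimator's expectation equal the deepest, least-biased level.

```python
import random

def mlmc_estimator(level_means, sample_level=None):
    # One MLMC draw: g = g_0 + 2^J (g_J - g_{J-1}) if J <= j_max, else g_0.
    # level_means[j] stands in for an estimate averaged over 2^j samples.
    j_max = len(level_means) - 1
    if sample_level is None:
        J = 1
        while random.random() < 0.5:   # J ~ Geometric(1/2): P(J = j) = 2^-j
            J += 1
    else:
        J = sample_level               # deterministic level, for checking
    g = level_means[0]
    if J <= j_max:
        g += 2 ** J * (level_means[J] - level_means[J - 1])
    return g

# Expectation over J telescopes to the deepest level:
# E[g] = g_0 + sum_j 2^-j * 2^j (g_j - g_{j-1}) = g_{j_max}.
g = [1.0, 0.6, 0.55, 0.52, 0.51]       # made-up biases shrinking with level
expected = g[0] + sum(2 ** -j * 2 ** j * (g[j] - g[j - 1]) for j in range(1, len(g)))
assert abs(expected - g[-1]) < 1e-12
```

The expected cost is only $\sum_j 2^{-j} \cdot 2^j = O(j_{\max})$ samples per draw, which is why the deepest level's low bias comes nearly for free and no mixing-time knowledge is needed inside the algorithm.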
+
+Related works: We will discuss the relevant works in two key areas as stated below. Our discussion primarily focuses on works that employ general parameterized policies. A summary of relevant works is available in Table 1.
+
+Discussion on Practicality on direct PG methods: In direct PG methods, value function estimates are nearly unbiased but suffer from high variance, which scales with the size of the action space (Wei et al., 2020; Bai et al., 2024; Ganesh et al., 2025b). Furthermore, the convergence results depend on the hitting time, which is at least the size of the state space, making the algorithm inapplicable to large or infinite state spaces. Finally, the implementation of these algorithms requires precise knowledge of mixing and hitting times to decorrelate samples, which can be impractical. In contrast, our algorithm leverages Multi-Level Monte Carlo (MLMC) to mitigate bias arising from Markovian sampling.
+
+Other recent PG-based works include (Kumar et al., 2025), which studies the tabular policy parameterization under the assumption of exact gradient access and establishes a convergence rate of $\tilde{\mathcal{O}}(1/T)$ . However, this assumption sidesteps a key challenge in PG-based methods, efficient estimation of the policy gradient, which remains a significant bottleneck in more practical settings. Another related work is (Murthy et al., 2023), which studies robust average-reward Markov decision processes (MDPs) and establishes convergence guarantees for dynamic programming approaches.
+
+Average Reward RL with Actor-Critic approaches: The
+
+authors of (Chen & Zhao, 2023; Panda & Bhatnagar, 2025; Suttle et al., 2023) provided local convergence guarantees for AC-based approaches. Recently, global convergence for AC methods in the average reward setup has been studied in (Wang et al., 2024; Patel et al., 2024), where a global sample complexity of $\tilde{\mathcal{O}}(T^{-1/4})$ is shown$^{1}$.
+
+We note that (Suttle et al., 2023; Patel et al., 2024) use the Multi-Level Monte Carlo (MLMC)-based AC algorithm combined with AdaGrad (Duchi et al., 2011), inspired by SGD-related work in (Dorfman & Levy, 2022). Unfortunately, none of these studies leads to an optimal global convergence rate, which is the goal of our work. We also note that the current state-of-the-art global convergence rate for AC methods in discounted MDPs is $\mathcal{O}(T^{-1/3})$ (Xu et al., 2020; Gaur et al., 2024), and the approaches proposed in this work have the potential to be applied in that setting.
+
+# 2. Setup
+
+In this paper, we explore an infinite horizon reinforcement learning problem with an average reward criterion, modeled by a Markov Decision Process (MDP) represented as a tuple $\mathcal{M} = (\mathcal{S},\mathcal{A},r,P,\rho)$ . Here $\mathcal{S}$ indicates the state space, $\mathcal{A}$ defines the action space with a size of $A$ , $r:\mathcal{S}\times \mathcal{A}\to [0,1]$ represents the reward function, $P:\mathcal{S}\times \mathcal{A}\to \Delta (\mathcal{S})$ defines the state transition function, where $\Delta (\mathcal{S})$ denotes the probability simplex over $\mathcal{S}$ , and $\rho \in \Delta (\mathcal{S})$ signifies the initial distribution of states. A (stationary) policy $\pi : S\rightarrow \Delta (\mathcal{A})$ determines the distribution of the action to be taken given the current state. It induces the following state transition $P^{\pi}:S\to \Delta (\mathcal{S})$ given as $P^{\pi}(s,s^{\prime}) = \sum_{a\in \mathcal{A}}P(s^{\prime}|s,a)\pi (a|s),\forall s,s^{\prime}\in \mathcal{S}$ . Observe that for any policy $\pi$ , the sequence of states yielded by the MDP forms a Markov chain. We assume the following throughout the paper.
+
+Assumption 1. The Markov chain induced by every policy $\pi$ , $\{s_t\}_{t \geq 0}$ , is ergodic.
+
+Before proceeding further, we point out that we consider a parameterized class of policies $\Pi$ , which consists of all policies $\pi_{\theta}$ such that $\theta \in \Theta$ , where $\Theta \subset \mathbb{R}^{\mathrm{d}}$ . It is well-established that if $\mathcal{M}$ is ergodic, then $\forall \theta \in \Theta$ , there exists a unique stationary $\rho$ -independent distribution, denoted as $d^{\pi_{\theta}} \in \Delta(S)$ , defined as follows.
+
+$$
+d^{\pi_{\theta}}(s) = \lim_{T \to \infty} \mathbb{E}_{\pi_{\theta}} \left[ \frac{1}{T} \sum_{t=0}^{T} \mathbf{1}(s_t = s) \mid s_0 \sim \rho \right] \tag{1}
+$$
+
+where $\mathbb{E}_{\pi_{\theta}}$ denotes the expectation over the distribution of all $\pi_{\theta}$ -induced trajectories and $\mathbf{1}(\cdot)$ is an indicator function. The above distribution also obeys $(P^{\pi_{\theta}})^{\top}d^{\pi_{\theta}} = d^{\pi_{\theta}}$ . With this notation in place, we define the mixing time of an MDP.
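The fixed-point property $(P^{\pi_\theta})^{\top} d^{\pi_\theta} = d^{\pi_\theta}$ can be checked numerically. The two-state transition kernel below is an invented toy example (any ergodic $P^{\pi}$ works); power iteration on $(P^{\pi})^{\top}$ recovers the stationary distribution.

```python
import numpy as np

# Toy two-state chain induced by some fixed policy:
# P_pi[s, s'] = sum_a P(s'|s, a) * pi(a|s).
P_pi = np.array([[0.9, 0.1],
                 [0.4, 0.6]])

# Power iteration: repeatedly apply (P^pi)^T until the distribution converges.
d = np.array([1.0, 0.0])
for _ in range(200):
    d = P_pi.T @ d

# Fixed-point property from the text: (P^pi)^T d = d.
assert np.allclose(P_pi.T @ d, d)
print(d)  # -> approximately [0.8 0.2]
```

Under Assumption 1 the chain is ergodic, so the limit is independent of the initial distribution, mirroring the $\rho$-independence of $d^{\pi_\theta}$ stated above.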
+
+Definition 1. The mixing time of an MDP $\mathcal{M}$ with respect to a policy parameter $\theta$ is defined as
+
+$$
+t _ {\operatorname {m i x}} ^ {\theta} := \min \left\{t \geq 1 \left| \| (P ^ {\pi_ {\theta}}) ^ {t} (s, \cdot) - d ^ {\pi_ {\theta}} \| _ {\mathrm {T V}} \leq \frac {1}{4}, \forall s \in \mathcal {S} \right. \right\}
+$$
+
+where $\| \cdot \|_{\mathrm{TV}}$ denotes the total variation distance. We define $t_{\mathrm{mix}} \coloneqq \sup_{\theta \in \Theta} t_{\mathrm{mix}}^{\theta}$ as the overall mixing time. This paper assumes $t_{\mathrm{mix}}$ to be finite.
+
+The mixing time of an MDP measures how quickly the MDP approaches its stationary distribution when the same policy is executed repeatedly. In the average reward setting, we aim to find a policy $\pi_{\theta}$ that maximizes the long-term average reward defined below.
+
+$$
+J ^ {\pi_ {\theta}} := \lim _ {T \rightarrow \infty} \mathbb {E} _ {\pi_ {\theta}} \left[ \frac {1}{T} \sum_ {t = 0} ^ {T} r \left(s _ {t}, a _ {t}\right) \mid s _ {0} \sim \rho \right] \tag {2}
+$$
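+
+To make Definition 1 and the objective (2) concrete, the following sketch computes $d^{\pi_{\theta}}$ , $t_{\mathrm{mix}}^{\theta}$ , and the long-run average reward for a hypothetical two-state chain induced by a fixed policy (the transition matrix and per-state rewards are made-up stand-ins; numpy assumed).
+
+```python
+import numpy as np
+
+# Hypothetical two-state chain P^{pi_theta}(s, s') induced by a fixed policy,
+# with a per-state stand-in for the reward r(s, a).
+P = np.array([[0.9, 0.1],
+              [0.2, 0.8]])
+r = np.array([0.0, 1.0])
+
+# Stationary distribution: the left eigenvector of P with eigenvalue 1,
+# i.e., the solution of (P^{pi_theta})^T d = d.
+evals, evecs = np.linalg.eig(P.T)
+d = np.real(evecs[:, np.argmax(np.real(evals))])
+d = d / d.sum()
+
+def tv(p, q):
+    # Total variation distance between two distributions.
+    return 0.5 * np.abs(p - q).sum()
+
+# Mixing time: smallest t with max_s ||(P^{pi_theta})^t(s, .) - d||_TV <= 1/4.
+t_mix, Pt = 1, P.copy()
+while max(tv(Pt[s], d) for s in range(len(d))) > 0.25:
+    Pt = Pt @ P
+    t_mix += 1
+
+J = d @ r  # long-run average reward, cf. (2)
+```
+
+For this chain, $d^{\pi_{\theta}} = (2/3, 1/3)$ , so the average reward evaluates to $1/3$ .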
+
+For simplicity, we denote $J(\theta) = J^{\pi_{\theta}}$ . This paper uses an actor-critic approach to optimize $J$ . Before proceeding further, we would like to introduce a few important terms. The action-value $(Q)$ function corresponding to the policy $\pi_{\theta}$ is defined as
+
+$$
+Q ^ {\pi_ {\theta}} (s, a) = \mathbb {E} _ {\pi_ {\theta}} \left[ \sum_ {t = 0} ^ {\infty} \left\{r \left(s _ {t}, a _ {t}\right) - J (\theta) \right\} \mid s _ {0} = s, a _ {0} = a \right] \tag {3}
+$$
+
+We can further define the state value function as
+
+$$
+V ^ {\pi_ {\theta}} (s) = \mathbb {E} _ {a \sim \pi_ {\theta} (\cdot | s)} [ Q ^ {\pi_ {\theta}} (s, a) ] \tag {4}
+$$
+
+Bellman's equation, thus, takes the following form (Puterman, 2014)
+
+$$
+Q ^ {\pi_ {\theta}} (s, a) = r (s, a) - J (\theta) + \mathbb {E} \left[ V ^ {\pi_ {\theta}} \left(s ^ {\prime}\right) \right], \tag {5}
+$$
+
+where the expectation is over $s' \sim P(\cdot | s, a)$ . We define the advantage as $A^{\pi_{\theta}}(s, a) \triangleq Q^{\pi_{\theta}}(s, a) - V^{\pi_{\theta}}(s)$ . With the notations in place, we express below the well-known policy gradient theorem established by (Sutton et al., 1999).
+
+$$
+\nabla_ {\theta} J (\theta) = \mathbb {E} _ {(s, a) \sim \nu^ {\pi_ {\theta}}} \left[ A ^ {\pi_ {\theta}} (s, a) \nabla_ {\theta} \log \pi_ {\theta} (a | s) \right] \tag {6}
+$$
+
+where $\nu^{\pi_{\theta}}(s,a) = d^{\pi_{\theta}}(s)\pi_{\theta}(a|s)$ . Policy Gradient (PG) algorithms maximize the average reward by updating $\theta$ along the policy gradient $\nabla_{\theta}J(\theta)$ . In contrast, Natural Policy Gradient (NPG) methods update $\theta$ along the NPG direction $\omega_{\theta}^{*}$ where
+
+$$
+\omega_ {\theta} ^ {*} = F (\theta) ^ {\dagger} \nabla_ {\theta} J (\theta), \tag {7}
+$$
+
+where $\dagger$ denotes the Moore-Penrose pseudoinverse and $F(\theta)$ is the Fisher information matrix defined as
+
+$$
+F (\theta) = \mathbb {E} _ {(s, a) \sim \nu^ {\pi_ {\theta}}} \left[ \nabla_ {\theta} \log \pi_ {\theta} (a | s) \otimes \nabla_ {\theta} \log \pi_ {\theta} (a | s) \right] \tag {8}
+$$
+
+where $\otimes$ symbolizes the outer product. The preconditioner $F(\theta)$ accounts for the sensitivity of the parameterized policy to changes in $\theta$ , thereby preventing overshooting or overly slow updates of $\theta$ . Note that $\omega_{\theta}^{*}$ can be written as the minimizer of the function $L_{\nu^{\pi_{\theta}}}(\cdot ,\theta)$ where
+
+$$
+L _ {\nu^ {\pi_ {\theta}}} (\omega , \theta) = \frac {1}{2} \mathbb {E} _ {(s, a) \sim \nu^ {\pi_ {\theta}}} \left[ \left(A ^ {\pi_ {\theta}} (s, a) - \omega^ {\top} \nabla_ {\theta} \log \pi_ {\theta} (a | s)\right) ^ {2} \right] \tag {9}
+$$
+
+for all $\omega \in \mathbb{R}^{\mathrm{d}}$ . This is essentially a convex optimization problem that can be solved iteratively using a gradient-based method. Invoking (6), one can show that
+
+$$
+\nabla_ {\omega} L _ {\nu^ {\pi_ {\theta}}} (\omega , \theta) = F (\theta) \omega - \nabla_ {\theta} J (\theta) \tag {10}
+$$
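+
+Relations (7)-(10) admit a quick numerical sanity check: the minimizer of the quadratic (9), computed by ordinary least squares, coincides with $F(\theta)^{\dagger}\nabla_{\theta}J(\theta)$ . A minimal sketch in which synthetic Gaussian score vectors and advantages stand in for real policy data (numpy assumed):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+d, n = 3, 5000
+
+# Synthetic stand-ins: rows of G play the role of the score vectors
+# grad log pi(a_i | s_i), and A plays the role of the advantages.
+G = rng.normal(size=(n, d))
+A = rng.normal(size=n)
+
+F = G.T @ G / n                           # empirical Fisher matrix, cf. (8)
+grad_J = G.T @ A / n                      # empirical policy gradient, cf. (6)
+omega_star = np.linalg.pinv(F) @ grad_J   # NPG direction, cf. (7)
+
+# Direct minimizer of the least-squares objective (9):
+omega_ls, *_ = np.linalg.lstsq(G, A, rcond=None)
+```
+
+The two solutions agree up to numerical precision, which is exactly the statement that the NPG direction minimizes (9).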
+
+Note that $\nabla_{\omega}L_{\nu^{\pi_{\theta}}}(\omega ,\theta)$ is not exactly computable since the transition function $P$ , and hence the stationary distribution $d^{\pi_{\theta}}$ and the advantage function $A^{\pi_{\theta}}(\cdot ,\cdot)$ , are unknown in most practical cases. Recall that $A^{\pi_{\theta}}(s,a) = Q^{\pi_{\theta}}(s,a) - V^{\pi_{\theta}}(s)$ . Moreover, Bellman's equation (5) states that $Q^{\pi_{\theta}}(s,a)$ is determined by $J(\theta)$ and $V^{\pi_{\theta}}$ . Notice that $J(\theta)$ can be written as a solution to the following optimization problem.
+
+$$
+\min _ {\eta \in \mathbb {R}} R (\theta , \eta) := \frac {1}{2} \sum_ {s \in \mathcal {S}} \sum_ {a \in \mathcal {A}} \nu^ {\pi_ {\theta}} (s, a) \left\{\eta - r (s, a) \right\} ^ {2} \tag {11}
+$$
+
+The above formulation allows us to compute $J(\theta)$ in a gradient-based iterative manner. In particular,
+
+$$
+\nabla_{\eta} R (\theta , \eta) = \sum_{s \in \mathcal{S}} \sum_{a \in \mathcal{A}} \nu^{\pi_{\theta}} (s, a) \left\{\eta - r (s, a) \right\} \tag{12}
+$$
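+
+Because (11) is a one-dimensional quadratic, the stochastic version of the gradient step (12) is simply a running average of observed rewards; a tiny sketch with made-up reward draws standing in for $r(s_t,a_t)$ (numpy assumed):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(3)
+eta, lr = 0.0, 0.01
+
+# Made-up reward draws standing in for r(s_t, a_t) under nu^{pi_theta}.
+for r in rng.uniform(0.3, 0.7, size=5000):
+    eta -= lr * (eta - r)   # single-sample gradient of (11), cf. (12)
+
+# eta now tracks the mean reward of the draws (approximately 0.5 here).
+```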
+
+To facilitate the estimation of the advantage function, we assume that $V^{\pi_{\theta}}(\cdot)$ can be approximated by a critic function $\hat{V} (\zeta_{\theta},\cdot) = (\zeta_{\theta})^{\top}\phi (\cdot)$ , where $\zeta_{\theta}\in \mathbb{R}^{m}$ denotes a solution to the following optimization problem and $\phi (s)\in \mathbb{R}^{m}$ with $\| \phi (s)\| \leq 1$ is a feature vector, $\forall s\in \mathcal{S}$ .
+
+$$
+\min _ {\zeta \in \mathbb {R} ^ {m}} E (\theta , \zeta) := \frac {1}{2} \sum_ {s \in \mathcal {S}} d ^ {\pi_ {\theta}} (s) \left(V ^ {\pi_ {\theta}} (s) - \hat {V} (\zeta , s)\right) ^ {2} \tag {13}
+$$
+
+The above formulation paves a way to compute $\zeta_{\theta}$ via a gradient-based iterative procedure. Note the following.
+
+$$
+\begin{array}{l} \nabla_ {\zeta} E (\theta , \zeta) \\ = \sum_ {s \in \mathcal {S}} \sum_ {a \in \mathcal {A}} \nu^ {\pi_ {\theta}} (s, a) \left\{\zeta^ {\top} \phi (s) - Q ^ {\pi_ {\theta}} (s, a) \right\} \phi (s) \tag {14} \\ \end{array}
+$$
+
+The iterative updates of $\theta$ , $\eta$ , and $\zeta$ , along their associated gradients provided in (10), (12), and (14) form the basis of our algorithm stated next.
+
+Algorithm 1 Multi-level Monte Carlo-based Natural Actor-Critic (MLMC-NAC)
+
+1: Input: Initial parameters $\theta_0, \{\omega_H^k\}$ , and $\{\xi_0^k\}$ , policy update stepsize $\alpha$ , parameters for NPG update, $\gamma$ , parameters for critic update, $\beta$ , $c_{\beta}$ , initial state $s_0 \sim \rho$ , outer loop size $K$ , inner loop size $H$ , $T_{\max}$
+2: Initialization: $T \gets 0$
+3: for $k = 0,1,\dots ,K - 1$ do
+4: for $h = 0,1,\dots ,H - 1$ do {Average reward and critic estimation}
+
+5: $s_0^{kh} \gets s_0, Q_{kh} \sim \operatorname{Geom}(1/2)$
+6: $\bar{Q}_{kh} \gets 2^{Q_{kh}}$ if $2^{Q_{kh}} \leq T_{\max}$ else $\bar{Q}_{kh} \gets 0$
+7: for $t = 0,\dots ,\bar{Q}_{kh} - 1$ do
+
+8: Take action $a_{t}^{kh} \sim \pi_{\theta_{k}}(\cdot | s_{t}^{kh})$ .
+9: Collect next state $s_{t+1}^{kh} \sim P(\cdot | s_t^{kh}, a_t^{kh})$
+10: Receive reward $r(s_t^{kh},a_t^{kh})$
+11: end for
+12: $T_{kh}\gets \bar{Q}_{kh},s_0\gets s_{T_{kh}}^{kh}$
+13: Update $\xi_h^k$ using (16) and (25)
+14: $T\gets T + T_{kh}$
+15: end for
+16: $\xi_{k}\gets \xi_{H}^{k}$
+17: for $h = H, H + 1, \dots, 2H - 1$ do {Natural Policy Gradient (NPG) estimation}
+18: $s_0^{kh} \gets s_0, Q_{kh} \sim \operatorname{Geom}(1/2)$
+19: $\bar{Q}_{kh} \gets 2^{Q_{kh}}$ if $2^{Q_{kh}} \leq T_{\max}$ else $\bar{Q}_{kh} \gets 0$
+20: for $t = 0,\dots ,\bar{Q}_{kh} - 1$ do
+21: Take action $a_{t}^{kh}\sim \pi_{\theta_{k}}(\cdot |s_{t}^{kh})$
+22: Collect next state $s_{t+1}^{kh} \sim P(\cdot | s_t^{kh}, a_t^{kh})$
+23: Receive reward $r(s_t^{kh},a_t^{kh})$
+24: end for
+25: $T_{kh}\gets \bar{Q}_{kh},s_0\gets s_{T_{kh}}^{kh}$
+26: Update $\omega_h^k$ using (17) and (21).
+27: $T\gets T + T_{kh}$
+28: end for
+29: $\omega_{k}\gets \omega_{2H}^{k},\theta_{k + 1}\gets \theta_{k} + \alpha \omega_{k}$ {Policy update}
+30: end for
+
+# 3. Proposed Algorithm
+
+We propose a Multi-level Monte Carlo-based Natural Actor-Critic (MLMC-NAC) algorithm (Algorithm 1) that runs $K$ epochs (also called outer loops). The $k$ th loop obtains $\xi_{k} = [\eta_{k},\zeta_{k}]^{\top}$ where $\eta_{k}$ denotes an estimate of the average reward $J(\theta_k)$ , and $\zeta_{k}$ is an estimate of the critic parameter, $\zeta_{\theta_k}$ . These estimates are then used to compute the approximate NPG, $\omega_{k}$ , which is applied to update the policy parameter $\theta_{k}$ .
+
+$$
+\theta_ {k + 1} = \theta_ {k} + \alpha \omega_ {k} \tag {15}
+$$
+
+where $\alpha$ is a learning rate. The estimate $\xi_{k}$ is obtained in $H$ inner-loop steps. In particular, $\forall h\in \{0,\dots ,H - 1\}$ , we apply the following updates starting from an arbitrary $\xi_0^k$ and finally assign $\xi_k = \xi_H^k$ .
+
+$$
+\begin{array}{l} \xi_{h + 1}^{k} = \xi_{h}^{k} - \beta \mathbf{v}_{h} \left(\theta_{k}, \xi_{h}^{k}\right), \quad \text{where} \\ \mathbf{v}_{h} \left(\theta_{k}, \xi_{h}^{k}\right) = \left[ c_{\beta} \hat{\nabla}_{\eta} R \left(\theta_{k}, \eta_{h}^{k}\right), \hat{\nabla}_{\zeta} E \left(\theta_{k}, \xi_{h}^{k}\right) \right]^{\top} \tag{16} \\ \end{array}
+$$
+
+where $c_{\beta}$ is a constant, and $\beta$ is a learning rate. Observe that $\hat{\nabla}_{\zeta}E(\theta_k,\xi_h^k)$ depends on $\xi_h^k = [\eta_h^k,\zeta_h^k ]^\top$ due to the presence of the $Q$ -function in (14), while $\hat{\nabla}_{\eta}R(\theta_k,\eta_h^k)$ depends only on $\eta_h^k$ (details given later).
+
+The approximate NPG $\omega_{k}$ is also obtained in $H$ inner-loop steps by starting from an arbitrary $\omega_{H}^{k}$ , recursively applying the stochastic gradient descent (SGD) update stated below $\forall h\in \{H,\dots ,2H - 1\}$ , and assigning $\omega_{k} = \omega_{2H}^{k}$ .
+
+$$
+\omega_ {h + 1} ^ {k} = \omega_ {h} ^ {k} - \gamma \hat {\nabla} _ {\omega} L _ {\nu^ {\pi_ {\theta_ {k}}}} \left(\omega_ {h} ^ {k}, \theta_ {k}, \xi_ {k}\right) \tag {17}
+$$
+
+where $\gamma$ is a learning rate, and $\hat{\nabla}_{\omega}L_{\nu^{\pi_{\theta_k}}}\left(\omega_h^k,\theta_k,\xi_k\right)$ symbolizes an estimate of $\nabla_{\omega}L_{\nu^{\pi_{\theta_k}}}\left(\omega_h^k,\theta_k\right)$ . We would like to clarify that the above gradient estimate depends on $\xi_{k}$ because of the presence of the advantage function in the expression of the policy gradient, whose estimation requires both $\eta_{k}$ and $\zeta_{k}$ (details given later). The procedure for computing the gradient estimates used in (16) and (17) is stated below.
+
+Gradient Estimation via MLMC: Consider a $\pi_{\theta_k}$ -induced trajectory $\mathcal{T}_{kh} = \{(s_t^{kh},a_t^{kh},s_{t + 1}^{kh})\}_{t = 0}^{l_{kh} - 1}$ whose length is given as $l_{kh} = 2^{Q_{kh}}$ where $Q_{kh}\sim \mathrm{Geom}(\frac{1}{2})$ . We can write the $Q$ estimate as below $\forall t\in \{0,\dots ,l_{kh} - 1\}$ using Bellman's equation (5).
+
+$$
+\hat {Q} ^ {\pi_ {\theta_ {k}}} \left(\xi_ {k}, z _ {t} ^ {k h}\right) = r \left(s _ {t} ^ {k h}, a _ {t} ^ {k h}\right) - \eta_ {k} + \zeta_ {k} ^ {\top} \phi \left(s _ {t + 1} ^ {k h}\right) \tag {18}
+$$
+
+where $z_{t}^{kh} \coloneqq (s_{t}^{kh}, a_{t}^{kh}, s_{t+1}^{kh})$ . This leads to the advantage estimate (also called the temporal difference error) as follows $\forall t \in \{0, \dots, l_{kh} - 1\}$ .
+
+$$
+\begin{array}{l} \hat {A} ^ {\pi_ {\theta_ {k}}} \left(\xi_ {k}, z _ {t} ^ {k h}\right) = r \left(s _ {t} ^ {k h}, a _ {t} ^ {k h}\right) - \eta_ {k} \\ + \zeta_ {k} ^ {\top} \left[ \phi \left(s _ {t + 1} ^ {k h}\right) - \phi \left(s _ {t} ^ {k h}\right) \right] \tag {19} \\ \end{array}
+$$
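+
+Note that (19) is exactly (18) minus the critic's value estimate $\hat{V}(\zeta_k, s_t^{kh}) = \zeta_k^{\top}\phi(s_t^{kh})$ , mirroring $A^{\pi_{\theta}} = Q^{\pi_{\theta}} - V^{\pi_{\theta}}$ . A minimal sketch (numpy assumed; the features and parameters are arbitrary stand-ins):
+
+```python
+import numpy as np
+
+def q_hat(r, eta, zeta, phi_next):
+    # Q estimate (18): r(s, a) - eta + zeta^T phi(s')
+    return r - eta + zeta @ phi_next
+
+def adv_hat(r, eta, zeta, phi_s, phi_next):
+    # Advantage / TD-error estimate (19)
+    return r - eta + zeta @ (phi_next - phi_s)
+
+rng = np.random.default_rng(4)
+zeta, phi_s, phi_next = rng.normal(size=(3, 8))  # arbitrary stand-ins
+r, eta = 0.7, 0.4
+
+adv = adv_hat(r, eta, zeta, phi_s, phi_next)
+v = zeta @ phi_s   # critic value estimate V_hat(zeta, s)
+```
+
+By construction, `adv` equals `q_hat(...) - v` for any transition.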
+
+Define the following quantity $\forall t\in \{0,\dots ,l_{kh} - 1\}$
+
+$$
+\mathbf{u}_{t}^{kh} = \hat{A}_{\mathbf{u}} \left(\theta_{k}, z_{t}^{kh}\right) \omega_{h}^{k} - \hat{b}_{\mathbf{u}} \left(\theta_{k}, \xi_{k}, z_{t}^{kh}\right), \quad \text{where} \tag{20}
+$$
+
+$$
+\hat {A} _ {\mathbf {u}} \left(\theta_ {k}, z _ {t} ^ {k h}\right) = \nabla_ {\theta} \log \pi_ {\theta_ {k}} \left(a _ {t} ^ {k h} \mid s _ {t} ^ {k h}\right) \otimes \nabla_ {\theta} \log \pi_ {\theta_ {k}} \left(a _ {t} ^ {k h} \mid s _ {t} ^ {k h}\right)
+$$
+
+$$
+\hat {b} _ {\mathbf {u}} \left(\theta_ {k}, \xi_ {k}, z _ {t} ^ {k h}\right) = \hat {A} ^ {\pi_ {\theta_ {k}}} \left(\xi_ {k}, z _ {t} ^ {k h}\right) \nabla_ {\theta} \log \pi_ {\theta_ {k}} \left(a _ {t} ^ {k h} \mid s _ {t} ^ {k h}\right)
+$$
+
+The term $\mathbf{u}_t^{kh}$ can be thought of as a crude estimate of $\nabla_{\omega}L_{\nu^{\pi_{\theta_k}}}(\omega_h^k,\theta_k)$ , obtained using a single transition $z_t^{kh}$ . One naive way to refine this estimate is to calculate an empirical average of $\{\mathbf{u}_t^{kh}\}_{t = 0}^{l_{kh} - 1}$ . In this work, however, we resort to the MLMC method where the final estimate is given as follows.
+
+$$
+\begin{array}{l} \hat{\nabla}_{\omega} L_{\nu^{\pi_{\theta_{k}}}} \left(\omega_{h}^{k}, \theta_{k}, \xi_{k}\right) \\ = U_{0} + \left\{ \begin{array}{ll} 2^{Q} \left(U^{Q} - U^{Q - 1}\right), & \text{if } 2^{Q} \leq T_{\max} \\ 0, & \text{otherwise} \end{array} \right. \tag{21} \\ = \hat{A}_{\mathbf{u}, k, h}^{\mathrm{MLMC}} (\theta_{k}) \, \omega_{h}^{k} - \hat{b}_{\mathbf{u}, k, h}^{\mathrm{MLMC}} (\theta_{k}, \xi_{k}) \\ \end{array}
+$$
+
+where $U^{j} = \frac{1}{2^{j}}\sum_{t = 1}^{2^{j}}\mathbf{u}_{t}^{kh}$ for $j\in \{Q - 1,Q\}$ , $T_{\mathrm{max}}\geq 2$ is a parameter, and $\hat{A}_{\mathbf{u},k,h}^{\mathrm{MLMC}}(\theta_k)$ and $\hat{b}_{\mathbf{u},k,h}^{\mathrm{MLMC}}(\theta_k,\xi_k)$ denote the analogous MLMC-based estimates built from the samples $\{\hat{A}_{\mathbf{u}}(\theta_k,z_t^{kh})\}_{t = 0}^{l_{kh} - 1}$ and $\{\hat{b}_{\mathbf{u}}(\theta_k,\xi_k,z_t^{kh})\}_{t = 0}^{l_{kh} - 1}$ , respectively.
+
+The advantage of MLMC is that it generates the same order of bias as the empirical average of $T_{\mathrm{max}}$ samples while requiring only $\mathcal{O}(\log T_{\mathrm{max}})$ samples on average.
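+
+The mechanics of (21) and (25) can be illustrated on i.i.d. draws in place of Markovian trajectory data; a minimal sketch of the truncated-MLMC mean estimator (the target distribution and all constants are made up; numpy assumed):
+
+```python
+import numpy as np
+
+def mlmc_mean(sample, T_max, rng):
+    """Estimate E[sample()] in the style of (21)/(25)."""
+    Q = rng.geometric(0.5)                # P(Q = j) = 2^{-j}, j = 1, 2, ...
+    n = 2 ** Q if 2 ** Q <= T_max else 1
+    xs = np.array([sample() for _ in range(n)])
+    est = xs[0]                           # coarsest level U^0 (one sample)
+    if 2 ** Q <= T_max:
+        level = lambda j: xs[: 2 ** j].mean()   # U^j
+        est += 2 ** Q * (level(Q) - level(Q - 1))
+    return est, n                         # estimate and samples consumed
+
+rng = np.random.default_rng(5)
+runs = [mlmc_mean(lambda: rng.normal(2.0, 1.0), 2 ** 10, rng)
+        for _ in range(20000)]
+ests, counts = zip(*runs)
+```
+
+Averaged over many runs, the estimates concentrate around the true mean while the average number of samples consumed stays near $\log_2 T_{\max}$ , far below $T_{\max}$ .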
+
+Using a similar method, we will now obtain an estimate of $\mathbf{v}_h(\theta_k,\xi_h^k)$ . Following our earlier notations, we denote $\mathcal{T}_{kh} = \{(s_t^{kh},a_t^{kh})\}_{t = 0}^{l_{kh} - 1}$ as a $\pi_{\theta_k}$ -induced trajectory of length $l_{kh} = 2^{Q_{kh}}$ , where $Q_{kh}\sim \mathrm{Geom}(\frac{1}{2})$ . Notice the terms stated below $\forall t\in \{0,\dots ,l_{kh} - 1\}$ .
+
+$$
+\begin{array}{l} \mathbf {v} _ {t} ^ {k h} = \left[ \begin{array}{c} c _ {\beta} \left\{\eta_ {h} ^ {k} - r \left(s _ {t} ^ {k h}, a _ {t} ^ {k h}\right) \right\} \\ \left\{\left(\zeta_ {h} ^ {k}\right) ^ {\top} \phi \left(s _ {t} ^ {k h}\right) - \hat {Q} ^ {\pi_ {\theta_ {k}}} \left(\xi_ {h} ^ {k}, z _ {t} ^ {k h}\right) \right\} \phi \left(s _ {t} ^ {k h}\right) \end{array} \right] \tag {22} \\ = \hat {A} _ {\mathbf {v}} \left(z _ {t} ^ {k h}\right) \xi_ {h} ^ {k} - \hat {b} _ {\mathbf {v}} \left(z _ {t} ^ {k h}\right) \\ \end{array}
+$$
+
+where $z_{t}^{kh}\coloneqq (s_{t}^{kh},a_{t}^{kh},s_{t + 1}^{kh})$ , $\hat{Q}^{\pi_{\theta_k}}(\xi_h^k,z_t^{kh})$ is given by (18), and $\hat{A}_{\mathbf{v}}(z_t^{kh}),\hat{b}_{\mathbf{v}}(z_t^{kh})$ are defined as
+
+$$
+\hat {A} _ {\mathbf {v}} \left(z _ {t} ^ {k h}\right) = \left[ \begin{array}{c c} c _ {\beta} & 0 \\ \phi \left(s _ {t} ^ {k h}\right) & \phi \left(s _ {t} ^ {k h}\right) \left[ \phi \left(s _ {t} ^ {k h}\right) - \phi \left(s _ {t + 1} ^ {k h}\right) \right] ^ {\top} \end{array} \right], \tag {23}
+$$
+
+$$
+\hat {b} _ {\mathbf {v}} \left(z _ {t} ^ {k h}\right) = \left[ \begin{array}{c} c _ {\beta} r \left(s _ {t} ^ {k h}, a _ {t} ^ {k h}\right) \\ r \left(s _ {t} ^ {k h}, a _ {t} ^ {k h}\right) \phi \left(s _ {t} ^ {k h}\right) \end{array} \right] \tag {24}
+$$
+
+Based on (12) and (14), the term $\mathbf{v}_t^{kh}$ can be thought of as a crude approximation of $\mathbf{v}_h(\theta_k, \xi_h^k)$ obtained using a single transition, $z_t^{kh}$ . The final estimate is
+
+$$
+\begin{array}{l} \mathbf{v}_{h} (\theta_{k}, \xi_{h}^{k}) = V_{0} + \left\{ \begin{array}{ll} 2^{Q} \left(V^{Q} - V^{Q - 1}\right), & \text{if } 2^{Q} \leq T_{\max} \\ 0, & \text{otherwise} \end{array} \right. \\ = \hat{A}_{\mathbf{v}, k, h}^{\mathrm{MLMC}} \, \xi_{h}^{k} - \hat{b}_{\mathbf{v}, k, h}^{\mathrm{MLMC}} \tag{25} \\ \end{array}
+$$
+
+where $V^{j} \coloneqq 2^{-j}\sum_{t = 1}^{2^{j}}\mathbf{v}_{t}^{kh}, j \in \{Q - 1, Q\}$ . Moreover, $\hat{A}_{\mathbf{v},k,h}^{\mathrm{MLMC}}$ and $\hat{b}_{\mathbf{v},k,h}^{\mathrm{MLMC}}$ symbolize MLMC-based estimates of $\{\hat{A}_{\mathbf{v}}(z_t^{kh})\}_{t = 0}^{l_{kh} - 1}$ and $\{\hat{b}_{\mathbf{v}}(z_t^{kh})\}_{t = 0}^{l_{kh} - 1}$ respectively.
+
+A few remarks are in order. Although the MLMC-based estimates achieve the same order of bias as the empirical average with a lower average sample requirement, their variance is larger. Much of the existing literature reduces the impact of this increased variance via AdaGrad-based parameter updates. Although such methods typically work well for general non-convex optimization problems, they do not exploit the inherent structure of strongly convex optimization problems and are therefore sub-optimal both for the NPG-finding subroutine and for the average reward and critic updates. In this paper, we instead resort to a version of the standard SGD. By judiciously choosing the learning parameters, we prove that it is possible to achieve the optimal rate without invoking AdaGrad-type updates. Moreover, although our gradient estimates suffer from bias due to the inherent error present in the critic approximation, our novel analysis suitably handles these issues.
+
+# 4. Main Results
+
+We first state some relevant assumptions. Let $A_{\mathbf{v}}(\theta) \coloneqq \mathbb{E}_{\theta}\left[\hat{A}_{\mathbf{v}}(z)\right]$ and $b_{\mathbf{v}}(\theta) \coloneqq \mathbb{E}_{\theta}\left[\hat{b}_{\mathbf{v}}(z)\right]$ , where $\hat{A}_{\mathbf{v}}(z)$ and $\hat{b}_{\mathbf{v}}(z)$ are described in (23) and (24), and the expectation $\mathbb{E}_{\theta}$ is computed over the distribution of $z = (s, a, s')$ with $(s, a) \sim \nu^{\pi_{\theta}}$ , $s' \sim P(\cdot | s, a)$ . For an arbitrary policy parameter $\theta$ , we denote $\xi_{\theta}^{*} = [A_{\mathbf{v}}(\theta)]^{\dagger} b_{\mathbf{v}}(\theta) = [\eta_{\theta}^{*}, \zeta_{\theta}^{*}]^{\top}$ . Using these notations, we state below some assumptions related to the critic analysis.
+
+Assumption 2. We assume that the critic approximation error $\epsilon_{\mathrm{app}}$ (defined below) is finite.
+
+$$
+\epsilon_ {\mathrm {a p p}} = \sup _ {\theta} E (\theta , \zeta_ {\theta} ^ {*}) \tag {26}
+$$
+
+Assumption 3. There exists $\lambda >0$ such that $\forall \theta$
+
+$$
+\left. \mathbb {E} _ {\theta} \left[ \phi (s) \left(\phi (s) - \phi \left(s ^ {\prime}\right)\right) ^ {\top} \right] \succcurlyeq \lambda I \right. \tag {27}
+$$
+
+where $\succcurlyeq$ denotes the positive semidefinite ordering and the expectation $\mathbb{E}_{\theta}$ is taken over $s \sim d^{\pi_{\theta}}$ , $s' \sim P^{\pi_{\theta}}(s, \cdot)$ .
+
+Both Assumptions 2 and 3 are frequently employed in the analysis of actor-critic methods (Suttle et al., 2023; Patel et al., 2024; Wu et al., 2020; Panda & Bhatnagar, 2025). Assumption 2 intuitively relates to the quality of the feature mapping where $\epsilon_{\mathrm{app}}$ measures the quality. Well-designed feature maps may lead to small or even zero $\epsilon_{\mathrm{app}}$ , whereas poorly designed features result in higher errors. Assumption 3 is essential for guaranteeing the uniqueness of the solution to the minimization problem (13). Assumption 3 also follows when the set of policy parameters, $\Theta$ is compact and $e \notin W_{\phi}$ , where $e$ is the vector of all ones and $W_{\phi}$ is the space spanned by the feature vectors. To see this, note that if $e \notin W_{\phi}$ , there exists $\lambda_{\theta}$ for every policy $\pi_{\theta}$ such that $\mathbb{E}_{\theta}[\phi(s)(\phi(s) - \phi(s'))^{\top}] \succcurlyeq \lambda_{\theta}I$ (Zhang et al., 2021b). Since $\Theta$ is compact, setting $\lambda = \inf_{\theta \in \Theta} \lambda_{\theta} > 0$ satisfies Assumption 3.
+
+For large enough $c_{\beta}$ , Assumption 3 implies that $A_{\mathbf{v}}(\theta) - (\lambda /2)I$ is positive definite (refer to Lemma 8). It also implies that $A_{\mathbf{v}}(\theta)$ is invertible. We will now state some assumptions related to the policy parameterization.
+
+Assumption 4. For any $\theta$ , the transferred compatible function approximation error, $L_{\nu^{\pi^{*}}}\left(\omega_{\theta}^{*};\theta\right)$ , satisfies the following inequality.
+
+$$
+L _ {\nu^ {\pi^ {*}}} \left(\omega_ {\theta} ^ {*}; \theta\right) \leq \epsilon_ {\mathrm {b i a s}}
+$$
+
+where $\omega_{\theta}^{*}$ denotes the exact NPG direction at $\theta$ defined by (7), $\pi^{*}$ indicates the optimal policy, and the function $L_{\nu^{\pi^{*}}}(\cdot ,\cdot)$ is given by (9).
+
+Assumption 5. For all $\theta, \theta_1, \theta_2$ and $(s, a) \in \mathcal{S} \times \mathcal{A}$ , the following statements hold.
+
+$(a)\| \nabla_{\theta}\log \pi_{\theta}(a|s)\| \leq G_1$
+(b) $\| \nabla_{\theta}\log \pi_{\theta_1}(a|s) - \nabla_{\theta}\log \pi_{\theta_2}(a|s)\| \leq G_2\| \theta_1 - \theta_2\|$
+
+Assumption 6 (Fisher non-degenerate policy). There exists a constant $\mu >0$ such that $F(\theta) - \mu I_{d}$ is positive semidefinite, where $I_{d}$ denotes the $d \times d$ identity matrix.
+
+Comments on Assumptions 4-6: We would like to highlight that all these assumptions are commonly found in PG literature (Liu et al., 2020; Agarwal et al., 2021; Papini et al., 2018; Xu et al., 2019; Fatkhullin et al., 2023). We elaborate more on these assumptions below.
+
+The term $\epsilon_{\mathrm{bias}}$ captures the expressivity of the parameterized policy class. If, for example, the policy class is complete such as in the case of softmax parametrization, $\epsilon_{\mathrm{bias}} = 0$ (Agarwal et al., 2021). However, for restricted parametrization which may not contain all stochastic policies, we have $\epsilon_{\mathrm{bias}} > 0$ . It is known that $\epsilon_{\mathrm{bias}}$ is insignificant for rich neural parametrization (Wang et al., 2019). Note that Assumption 5 requires that the score function is bounded and Lipschitz continuous. This assumption is widely used in the analysis of PG-based methods (Liu et al., 2020; Agarwal et al., 2021; Papini et al., 2018; Xu et al., 2019; Fatkhullin et al., 2023). Assumption 6 requires that the eigenvalues of the Fisher information matrix can be bounded from below and is commonly used in obtaining global complexity bounds for PG-based procedures (Liu et al., 2020; Zhang et al., 2021a; Bai et al., 2022; Fatkhullin et al., 2023). Assumptions 5-6 were shown to hold for various examples recently including Gaussian policies with linearly parameterized means and certain neural parametrizations (Liu et al., 2020; Fatkhullin et al., 2023).
+
+With the relevant assumptions in place, we are now ready to state our main result.
+
+Theorem 1. Consider Algorithm 1 with $K = \Theta(\sqrt{T})$ , $H = \Theta(\sqrt{T} / \log(T))$ . Let Assumptions 1-6 hold and $J$ be $L$ -smooth. There exists a choice of parameters such that the following holds for sufficiently large $T$ .
+
+$$
+\begin{array}{l} J ^ {*} - \frac {1}{K} \sum_ {k = 0} ^ {K - 1} \mathbb {E} [ J (\theta_ {k}) ] \leq \mathcal {O} \left(\sqrt {\epsilon_ {\mathrm {a p p}}} + \sqrt {\epsilon_ {\mathrm {b i a s}}}\right) \\ + \tilde {\mathcal {O}} \left(t _ {\operatorname {m i x}} ^ {3} T ^ {- 1 / 2}\right) \tag {28} \\ \end{array}
+$$
+
+where $J^{*}$ is the optimal value of $J(\cdot)$ .
+
+The values of the learning parameters used in the above theorem can be found in Appendix H. Note that the above bound of $\tilde{\mathcal{O}}(1/\sqrt{T})$ is a significant improvement over the state-of-the-art bounds of $\tilde{\mathcal{O}}(1/T^{1/4})$ in the average reward general parameterization setting (Bai et al., 2024; Patel et al., 2024). Moreover, our bounds do not depend on the size of the action space or the hitting time, unlike those in (Bai et al., 2024). Although (Patel et al., 2024) provides bounds with $\mathcal{O}(\sqrt{t_{\mathrm{mix}}})$ dependence, these bounds, unfortunately, depend on the projection radius of the critic updates, $R_{\omega}$ , which can be large and scale with $t_{\mathrm{mix}}$ (Wei et al., 2020). In contrast, our algorithm does not use such projection operators and therefore does not scale with $R_{\omega}$ .
+
+Our analysis assumes $L$ -smoothness of the average reward objective $J$ , a standard assumption in the PG literature. In the average-reward setting, smoothness is typically assumed, either explicitly or implicitly via Lipschitz continuity of the value function or by appealing to discounted-setting bounds (Chen & Zhao, 2023; Suttle et al., 2023; Patel et al., 2024; Ganesh et al., 2025a; Wang et al., 2024; Bai et al., 2024; Panda & Bhatnagar, 2025). A recent result (Ganesh et al., 2025b, Theorem 3) establishes a smoothness-type result under ergodicity in the average-reward, infinite-horizon setting, which could be used in our analysis but is omitted here to streamline the presentation. Algorithm 1 assumes knowledge of $L$ to set the policy learning rate, and the smoothness upper bound in the cited work depends on $t_{\mathrm{mix}}$ . However, the dependence on $t_{\mathrm{mix}}$ in Algorithm 1 is much weaker than in existing direct policy gradient methods, which require samples to be spaced $\tilde{\mathcal{O}}(t_{\mathrm{mix}})$ apart at each iteration.
+
+# 5. Proof Outline
+
+# 5.1. Policy update analysis
+
+Lemma 1. Consider any policy update rule of the form
+
+$$
+\theta_ {k + 1} = \theta_ {k} + \alpha \omega_ {k}. \tag {29}
+$$
+
+If Assumptions 4 and 5 hold, then the following inequality is satisfied for any $K$ .
+
+$$
+\begin{array}{l} J ^ {*} - \frac {1}{K} \sum_ {k = 0} ^ {K - 1} \mathbb {E} [ J (\theta_ {k}) ] \leq \sqrt {\epsilon_ {\mathrm {b i a s}}} \\ + \frac {G _ {1}}{K} \sum_ {k = 0} ^ {K - 1} \mathbb {E} \left\| \left(\mathbb {E} _ {k} [ \omega_ {k} ] - \omega_ {k} ^ {*}\right) \right\| + \frac {\alpha G _ {2}}{2 K} \sum_ {k = 0} ^ {K - 1} \mathbb {E} \left\| \omega_ {k} \right\| ^ {2} \\ + \frac {1}{\alpha K} \mathbb {E} _ {s \sim d ^ {\pi^ {*}}} \left[ \mathrm {K L} \left(\pi^ {*} (\cdot | s) \| \pi_ {\theta_ {0}} (\cdot | s)\right) \right] \tag {30} \\ \end{array}
+$$
+
+where $\mathrm{KL}(\cdot \| \cdot)$ is the Kullback-Leibler divergence, $\omega_{k}^{*}$ is the NPG direction $F(\theta_k)^{-1}\nabla J(\theta_k)$ , $\pi^{*}$ is the optimal policy,
+
+$J^{*}$ is the optimal value of the function $J(\cdot)$ , and $\mathbb{E}_k[\cdot]$ denotes conditional expectation given the history up to epoch $k$ .
+
+Note that the last term in (30) is $\mathcal{O}(1 / K)$ . The term $\mathbb{E}\left\| \omega_k\right\|^2$ can be further decomposed as
+
+$$
+\begin{array}{l} \mathbb {E} \left\| \omega_ {k} \right\| ^ {2} \leq 2 \mathbb {E} \left\| \omega_ {k} - \omega_ {k} ^ {*} \right\| ^ {2} + 2 \mathbb {E} \left\| \omega_ {k} ^ {*} \right\| ^ {2} \\ \stackrel {(a)} {\leq} 2 \mathbb {E} \| \omega_ {k} - \omega_ {k} ^ {*} \| ^ {2} + \frac {2}{\mu^ {2}} \mathbb {E} \| \nabla_ {\theta} J (\theta_ {k}) \| ^ {2} \tag {31} \\ \end{array}
+$$
+
+where $(a)$ follows from Assumption 6 and the definition that $\omega_{k}^{*} = F(\theta_{k})^{-1}\nabla_{\theta}J(\theta_{k})$ . Further, it can be proven that for the choice of $\alpha$ used in Theorem 1, we have
+
+$$
+\begin{array}{l} \frac {1}{\mu^ {2} K} \left(\sum_ {k = 0} ^ {K - 1} \| \nabla_ {\theta} J \left(\theta_ {k}\right) \| ^ {2}\right) \leq \frac {3 2 L G _ {1} ^ {4}}{\mu^ {4} K} \tag {32} \\ + \left(\frac {2 G _ {1} ^ {4}}{\mu^ {2}} + 1\right) \left(\frac {1}{K} \sum_ {k = 0} ^ {K - 1} \| \omega_ {k} - \omega_ {k} ^ {*} \| ^ {2}\right) \\ \end{array}
+$$
+
+Evidently, one can obtain a global convergence bound by bounding the terms $\mathbb{E}\left\| \omega_k - \omega_k^*\right\|^2$ and $\mathbb{E}\left\| \mathbb{E}_k[\omega_k] - \omega_k^*\right\|$ , which are the second-order error and the bias of the NPG estimator $\omega_{k}$ , respectively. In the next subsections, we briefly describe how to obtain these bounds.
+
+# 5.2. Analysis of a General Linear Recursion
+
+Observe that the NPG finding subroutine (17) and the update of the critic parameter and the average reward (16) can be written in the following form for a given $k$ .
+
+$$
+x _ {h + 1} = x _ {h} - \bar {\beta} \left(\hat {P} _ {h} x _ {h} - \hat {q} _ {h}\right) \tag {33}
+$$
+
+where $\hat{P}_h, \hat{q}_h$ are MLMC-based estimates of the matrix $P \in \mathbb{R}^{n \times n}$ and the vector $q \in \mathbb{R}^n$ , respectively, and $h \in \{0, \dots, H - 1\}$ . Assume that the following bounds hold $\forall h$ .
+
+$$
+\begin{array}{l} \mathbb {E} _ {h} \left[ \left\| \hat {P} _ {h} - P \right\| ^ {2} \right] \leq \sigma_ {P} ^ {2}, \left\| \mathbb {E} _ {h} \left[ \hat {P} _ {h} \right] - P \right\| ^ {2} \leq \delta_ {P} ^ {2}, \\ \mathbb {E} _ {h} \left[ \| \hat {q} _ {h} - q \| ^ {2} \right] \leq \sigma_ {q} ^ {2}, \| \mathbb {E} _ {h} [ \hat {q} _ {h} ] - q \| ^ {2} \leq \delta_ {q} ^ {2}, \tag {34} \\ \end{array}
+$$
+
+and $\| \mathbb{E}\left[\hat{q}_h\right] - q\| ^2\leq \bar{\delta}_q^2$ , where $\mathbb{E}_h$ denotes the conditional expectation given the history up to step $h$ . Since $\mathbb{E}[\hat{q}_h] = \mathbb{E}[\mathbb{E}_h[\hat{q}_h]]$ , we have $\bar{\delta}_q^2 \leq \delta_q^2$ by Jensen's inequality. Additionally, assume that
+
+$$
+0 \prec \lambda_{P} I \preccurlyeq P, \quad \| P \| \leq \Lambda_{P}, \quad \text{and} \quad \| q \| \leq \Lambda_{q} \tag{35}
+$$
+
+The condition $\lambda_P > 0$ implies that $P$ is invertible. The goal of recursion (33) is to approximate the quantity $x^{*} = P^{-1}q$ . We have the following result.
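+
+A minimal sketch of recursion (33) on a synthetic system, with unbiased noisy draws playing the role of $\hat{P}_h,\hat{q}_h$ and the step size $\bar{\beta} = \frac{2\log H}{\lambda_P H}$ from Theorem 2 (the matrix, vector, and noise scales are made up; numpy assumed):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(2)
+n, H = 3, 20000
+
+# Hypothetical target system satisfying (35): symmetric P with P >= lambda_P I.
+P = np.array([[2.0, 0.3, 0.0],
+              [0.3, 1.5, 0.2],
+              [0.0, 0.2, 1.8]])
+q = np.array([1.0, -1.0, 0.5])
+x_star = np.linalg.solve(P, q)          # the target x* = P^{-1} q
+
+lam_P = np.linalg.eigvalsh(P).min()
+beta = 2 * np.log(H) / (lam_P * H)      # step size from Theorem 2
+
+x = np.zeros(n)
+for _ in range(H):
+    # Unbiased noisy draws standing in for the estimates in (34).
+    P_hat = P + 0.1 * rng.normal(size=(n, n))
+    q_hat = q + 0.1 * rng.normal(size=n)
+    x = x - beta * (P_hat @ x - q_hat)  # recursion (33)
+```
+
+After $H$ steps the iterate lands within the noise floor of $x^{*}$ , consistent with the $\tilde{\mathcal{O}}(R_0/H + R_1)$ flavor of the bound above.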
+
+Theorem 2. Consider the recursion (33). Assume that conditions (34) and (35) hold. Also, let $\delta_P \leq \lambda_P / 8$ and $\bar{\beta} = \frac{2\log H}{\lambda_P H}$ . The following relation holds whenever $H$ is sufficiently large.
+
+$$
+\mathbb {E} \left[ \| x _ {H} - x ^ {*} \| ^ {2} \right] \leq \frac {\mathbb {E} \left[ \| x _ {0} - x ^ {*} \| ^ {2} \right]}{H ^ {2}} + \tilde {\mathcal {O}} \left(\frac {R _ {0}}{H} + R _ {1}\right)
+$$
+
+where $R_0 = \lambda_P^{-4}\Lambda_q^2\sigma_P^2 +\lambda_P^{-2}\sigma_q^2$ , $R_{1} = \lambda_{P}^{-2}\big[\delta_{P}^{2}\lambda_{P}^{-2}\Lambda_{q}^{2} + \delta_{q}^{2}\big]$ , and $\tilde{\mathcal{O}} (\cdot)$ hides logarithmic factors of $H$ . Moreover,
+
+$$
+\begin{array}{l} \| \mathbb {E} [ x _ {H} ] - x ^ {*} \| ^ {2} \leq \frac {\| \mathbb {E} [ x _ {0} ] - x ^ {*} \| ^ {2}}{H ^ {2}} + \mathcal {O} (\bar {R} _ {1}) \\ + \mathcal {O} \left(\lambda_ {P} ^ {- 2} \delta_ {P} ^ {2} \left\{\mathbb {E} \left[ \left\| x _ {0} - x ^ {*} \right\| ^ {2} \right] + \mathcal {O} \left(R _ {0} + R _ {1}\right) \right\}\right) \\ \end{array}
+$$
+
+where $\bar{R}_1 = \lambda_P^{-2}\big[\delta_P^2\lambda_P^{-2}\Lambda_q^2 +\bar{\delta}_q^2\big]$
+
+We shall now use Theorem 2 to characterize the estimation errors in the NPG-finding subroutine and average reward and critic updates.
+
+# 5.3. Analysis of NPG-Finding Subroutine
+
+In the NPG-finding subroutine, the goal is to compute $\omega_{k}^{*} = [F(\theta_{k})]^{-1}\nabla_{\theta}J(\theta_{k})$ . An estimate of $F(\theta_{k})$ is given by $\hat{A}_{\mathbf{u},k,h}^{\mathrm{MLMC}}(\theta_k)$ , and that of the policy gradient $\nabla_{\theta}J(\theta_k)$ by $\hat{b}_{\mathbf{u},k,h}^{\mathrm{MLMC}}(\theta_k,\xi_k)$ (see (21)). One can establish the following inequalities by invoking the properties of the MLMC estimates.
+
+Lemma 2. Fix an instant $k$ of the outer loop in Algorithm 1. Given $(\theta_k, \xi_k)$ , the MLMC estimates defined in (21) satisfy the following bounds $\forall h \in \{H, \dots, 2H - 1\}$ provided the assumptions in Theorem 1 hold.
+
+(a) $\left\| \mathbb{E}_{k,h}\left[\hat{A}_{\mathbf{u},k,h}^{\mathrm{MLMC}}(\theta_k)\right] - F(\theta_k)\right\|^2 \leq \mathcal{O}\left(G_1^4 t_{\mathrm{mix}}T_{\mathrm{max}}^{-1}\right)$
+
+(b) $\mathbb{E}_{k,h}\left[\left\| \hat{A}_{\mathbf{u},k,h}^{\mathrm{MLMC}}(\theta_k) - F(\theta_k)\right\| ^2\right] \leq \mathcal{O}\left(G_1^4 t_{\mathrm{mix}}\log T_{\mathrm{max}}\right)$
+
+(c) $\left\| \mathbb{E}_{k,h}\left[\hat{b}_{\mathbf{u},k,h}^{\mathrm{MLMC}}(\theta_k,\xi_k)\right] - \nabla_\theta J(\theta_k)\right\|^2 \leq \mathcal{O}\left(\sigma_{\mathbf{u},k}^2 t_{\mathrm{mix}}T_{\mathrm{max}}^{-1} + \delta_{\mathbf{u},k}^2\right)$
+
+(d) $\mathbb{E}_{k,h}\left[\left\| \hat{b}_{\mathbf{u},k,h}^{\mathrm{MLMC}}(\theta_k,\xi_k) - \nabla_\theta J(\theta_k)\right\| ^2\right] \leq \mathcal{O}\left(\sigma_{\mathbf{u},k}^2 t_{\mathrm{mix}}\log T_{\mathrm{max}} + \delta_{\mathbf{u},k}^2\right)$
+
+where $\mathbb{E}_{k,h}$ denotes the conditional expectation given the history up to inner-loop step $h$ (within the $k$th outer-loop iteration), $\sigma_{\mathbf{u},k}^2 = \mathcal{O}\left(G_1^2\|\xi_k\|^2\right)$ and
+
+$$
+\delta_ {\mathbf {u}, k} ^ {2} = \mathcal {O} \left(G _ {1} ^ {2} \left\| \xi_ {k} - \xi_ {k} ^ {*} \right\| ^ {2} + G _ {1} ^ {2} \epsilon_ {\mathrm {a p p}}\right)
+$$
+
+where $\xi_k^* \coloneqq \xi_{\theta_k}^* = [A_{\mathbf{v}}(\theta_k)]^{-1} b_{\mathbf{v}}(\theta_k)$ and $\mathbb{E}_k$ denotes the conditional expectation given the history up to outer-loop iteration $k$ . Moreover, given $\theta_k$ , we also have
+
+(e) $\left\| \mathbb{E}_k\left[\hat{b}_{\mathbf{u},k,h}^{\mathrm{MLMC}}(\theta_k,\xi_k)\right] - \nabla_\theta J(\theta_k)\right\|^2 \leq \mathcal{O}\left(\bar{\sigma}_{\mathbf{u},k}^2 t_{\mathrm{mix}}T_{\mathrm{max}}^{-1} + \bar{\delta}_{\mathbf{u},k}^2\right)$
+
+where $\bar{\sigma}_{\mathbf{u},k}^2 = \mathcal{O}(G_1^2\mathbb{E}_k\| \xi_k\|^2)$ , and $\bar{\delta}_{\mathbf{u},k}^2$ is given as
+
+$$
+\bar {\delta} _ {\mathbf {u}, k} ^ {2} = \mathcal {O} \left(G _ {1} ^ {2} \left\| \mathbb {E} _ {k} [ \xi_ {k} ] - \xi_ {k} ^ {*} \right\| ^ {2} + G _ {1} ^ {2} \epsilon_ {\mathrm {a p p}}\right)
+$$
+
+Combining Lemma 2 and Theorem 2, we arrive at the following results.
+
+Theorem 3. Consider the NPG-finding recursion (17) with $\gamma = \frac{2\log H}{\mu H}$ and $T_{\mathrm{max}} = H^2$ . If all assumptions in Theorem 1 hold, then for sufficiently large $c_{\beta}$ and $H$ ,
+
+$$
+\begin{array}{l} \mathbb {E} _ {k} \left[ \| \omega_ {k} - \omega_ {k} ^ {*} \| ^ {2} \right] \leq \frac {1}{H ^ {2}} \| \omega_ {H} ^ {k} - \omega_ {k} ^ {*} \| ^ {2} + \tilde {\mathcal {O}} \left(\frac {G _ {1} ^ {6} t _ {\mathrm {mix}} ^ {3}}{\mu ^ {4} H}\right) \\ \quad + \tilde {\mathcal {O}} \left(\frac {G _ {1} ^ {2} c _ {\beta} ^ {2} t _ {\mathrm {mix}}}{\mu ^ {2} \lambda ^ {2} H}\right) + \mu ^ {- 2} G _ {1} ^ {2} \mathcal {O} \left(\mathbb {E} _ {k} \| \xi_ {k} - \xi_ {k} ^ {*} \| ^ {2} + \epsilon_ {\mathrm {app}}\right) \\ \end{array}
+$$
+
+Additionally, we have
+
+$$
+\begin{array}{l} \left\| \mathbb {E} _ {k} \left[ \omega_ {k} \right] - \omega_ {k} ^ {*} \right\| ^ {2} \leq \tilde {\mathcal {O}} \left(\frac {G _ {1} ^ {4} t _ {\operatorname* {m i x}}}{\mu^ {2} H ^ {2}} \left\| \omega_ {H} ^ {k} - \omega_ {k} ^ {*} \right\| ^ {2}\right) \\ + \mu^ {- 2} G _ {1} ^ {2} \mathcal {O} \left(\| \mathbb {E} _ {k} [ \xi_ {k} ] - \xi_ {k} ^ {*} \| ^ {2} + \epsilon_ {\mathrm {a p p}}\right) \\ + \tilde {\mathcal {O}} \left(\frac {G _ {1} ^ {6} t _ {\operatorname* {m i x}} ^ {2}}{\mu^ {4} H ^ {2}} \mathbb {E} _ {k} \left[ \| \xi_ {k} - \xi_ {k} ^ {*} \| ^ {2} \right]\right) \\ + \tilde {\mathcal {O}} \left(\frac {G _ {1} ^ {4} t _ {\text {m i x}}}{\mu^ {2} H ^ {2}} \left\{\mu^ {- 4} G _ {1} ^ {6} t _ {\text {m i x}} ^ {3} + \mu^ {- 2} \lambda^ {- 2} G _ {1} ^ {2} c _ {\beta} ^ {2} t _ {\text {m i x}} \right\}\right) \\ \end{array}
+$$
+
+# 5.4. Critic and Average Reward Analysis
+
+The goal of the recursion (16) is to compute the term $\xi_{k}^{*} = [A_{\mathbf{v}}(\theta_{k})]^{-1}b_{\mathbf{v}}(\theta_{k})$ . An estimate of $A_{\mathbf{v}}(\theta_k)$ is given by $\hat{A}_{\mathbf{v},k,h}^{\mathrm{MLMC}}$ while that of $b_{\mathbf{v}}(\theta_k)$ is given by $\hat{b}_{\mathbf{v},k,h}^{\mathrm{MLMC}}$ (see (25)). Similar to Lemma 2, we have the following result.
+
+Lemma 3. Given the parameter $\theta_{k}$ , the MLMC estimates defined in (25) obey the following bounds provided the assumptions in Theorem 1 hold.
+
+(a) $\left\| \mathbb{E}_{k,h}\left[\hat{A}_{\mathbf{v},k,h}^{\mathrm{MLMC}}\right] - A_{\mathbf{v}}(\theta_k)\right\| ^2\leq \mathcal{O}\left(c_\beta^2 t_{\mathrm{mix}}T_{\mathrm{max}}^{-1}\right)$
+(b) $\mathbb{E}_{k,h}\left[\left\| \hat{A}_{\mathbf{v},k,h}^{\mathrm{MLMC}} - A_{\mathbf{v}}(\theta_k)\right\| ^2\right]\leq \mathcal{O}\left(c_\beta^2 t_{\mathrm{mix}}\log T_{\mathrm{max}}\right)$
+(c) $\left\| \mathbb{E}_{k,h}\left[\hat{b}_{\mathbf{v},k,h}^{\mathrm{MLMC}}\right] - b_{\mathbf{v}}(\theta_k)\right\|^2 \leq \mathcal{O}\left(c_\beta^2 t_{\mathrm{mix}}T_{\mathrm{max}}^{-1}\right)$
+(d) $\mathbb{E}_{k,h}\left[\left\| \hat{b}_{\mathbf{v},k,h}^{\mathrm{MLMC}} - b_{\mathbf{v}}(\theta_k)\right\| ^2\right]\leq \mathcal{O}\left(c_\beta^2 t_{\mathrm{mix}}\log T_{\mathrm{max}}\right)$
+
+where $h \in \{0,1,\dots ,H - 1\}$ and $\mathbb{E}_{k,h}$ is interpreted in the same way as in Lemma 2.
+
+Lemma 3 and Theorem 2 lead to the following.
+
+Theorem 4. Consider the recursion (25). Let $T_{\max} = H^2$ , $\beta = \frac{4\log H}{\lambda H}$ . If all assumptions of Theorem 1 hold, then the following is true for sufficiently large $c_{\beta}, H$ .
+
+$$
+\mathbb {E} _ {k} \left[ \| \xi_ {k} - \xi_ {k} ^ {*} \| ^ {2} \right] \leq \frac {1}{H ^ {2}} \| \xi_ {0} ^ {k} - \xi_ {k} ^ {*} \| ^ {2} + \tilde {\mathcal {O}} \left(\frac {c _ {\beta} ^ {4} t _ {\mathrm {m i x}}}{\lambda^ {4} H}\right),
+$$
+
+$$
+\| \mathbb {E} _ {k} [ \xi_ {k} ] - \xi_ {k} ^ {*} \| ^ {2} \leq \mathcal {O} \left(\frac {c _ {\beta} ^ {2} t _ {\mathrm {m i x}}}{\lambda^ {2} H ^ {2}} \left\| \xi_ {0} ^ {k} - \xi_ {k} ^ {*} \right\| ^ {2} + \frac {c _ {\beta} ^ {6} t _ {\mathrm {m i x}} ^ {2}}{\lambda^ {6} H ^ {2}}\right)
+$$
+
+Combining Lemma 1 with Theorems 3 and 4, we establish Theorem 1. We emphasize the importance of the term $\bar{\delta}_q^2$ in Theorem 2: a naive analysis would replace $\bar{\delta}_q^2$ with the larger quantity $\delta_q^2$ , and this degradation would yield a convergence rate of $\tilde{\mathcal{O}}(T^{-1/3})$ as opposed to our current bound of $\tilde{\mathcal{O}}(T^{-1/2})$ . Finally, we note that our convergence bound does not depend on $|\mathcal{S}|$ , which enables its application to MDPs with large state spaces as long as $t_{\mathrm{mix}}$ is finite.
+
+# 6. Conclusions
+
+This work presents the Multi-Level Monte Carlo-based Natural Actor-Critic (MLMC-NAC) algorithm for average-reward reinforcement learning. The proposed method achieves an order-optimal global convergence rate of $\tilde{\mathcal{O}}(1/\sqrt{T})$ , significantly improving upon state-of-the-art results in this domain, particularly for actor-critic approaches with general policy parametrization.
+
+Building on this line of work, Xu et al. (2025) investigated the impact of constraints. Our analysis considers a linear critic, a limitation that has been relaxed to neural critics in discounted settings (Gaur et al., 2024; Ganesh et al., 2025a). However, extending this relaxation to the average reward setting remains an open problem.
+
+# Acknowledgement
+
+This work was supported by the Anusandhan National Research Foundation (ANRF), India, through the Overseas Visiting Doctoral Fellowship; the Office of Naval Research under grant N00014-23-1-2532; and Cisco Systems, Inc.
+
+# Impact Statement
+
+The goal of this paper is to advance the current understanding of the field of Machine Learning. We do not see any potential societal consequences of our work.
+
+# References
+
+Agarwal, A., Kakade, S. M., Lee, J. D., and Mahajan, G. On the theory of policy gradient methods: Optimality, approximation, and distribution shift. The Journal of Machine Learning Research, 22(1):4431-4506, 2021.
+Agarwal, M. and Aggarwal, V. Reinforcement learning for joint optimization of multiple rewards. Journal of Machine Learning Research, 24(49):1-41, 2023.
+Agarwal, M., Bai, Q., and Aggarwal, V. Concave utility reinforcement learning with zero-constraint violations. Transactions on Machine Learning Research, 2022.
+Agrawal, S. and Jia, R. Optimistic posterior sampling for reinforcement learning: worst-case regret bounds. Advances in Neural Information Processing Systems, 30, 2017.
+Al-Abbasi, A. O., Ghosh, A., and Aggarwal, V. Deeppool: Distributed model-free algorithm for ride-sharing using deep reinforcement learning. IEEE Transactions on Intelligent Transportation Systems, 20(12):4714-4727, 2019.
+Bai, Q., Bedi, A. S., Agarwal, M., Koppel, A., and Aggarwal, V. Achieving zero constraint violation for constrained reinforcement learning via primal-dual approach. Proceedings of the AAAI Conference on Artificial Intelligence, 36:3682-3689, Jun. 2022.
+Bai, Q., Mondal, W. U., and Aggarwal, V. Regret analysis of policy gradient algorithm for infinite horizon average reward markov decision processes. In AAAI Conference on Artificial Intelligence, 2024.
+Beznosikov, A., Samsonov, S., Sheshukova, M., Gasnikov, A., Naumov, A., and Moulines, E. First order methods with markovian noise: from acceleration to variational inequalities. In Thirty-seventh Conference on Neural Information Processing Systems, 2023.
+Chen, X. and Zhao, L. Finite-time analysis of single-timescale actor-critic. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id=jh3UNSQK01.
+Dorfman, R. and Levy, K. Y. Adapting to mixing time in stochastic optimization with Markovian data. In Proceedings of the 39th International Conference on Machine Learning, Jul 2022.
+Duchi, J., Hazan, E., and Singer, Y. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(7), 2011.
+Fatkhullin, I., Barakat, A., Kireeva, A., and He, N. Stochastic policy gradient methods: Improved sample complexity for fisher-non-degenerate policies. In International Conference on Machine Learning, pp. 9827-9869. PMLR, 2023.
+
+Ganesh, S., Chen, J., Mondal, W. U., and Aggarwal, V. Order-optimal global convergence for actor-critic with general policy and neural critic parametrization. In Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI), July 2025a.
+Ganesh, S., Mondal, W. U., and Aggarwal, V. Order-optimal regret with novel policy gradient approaches in infinite horizon average reward MDPs. In The 28th International Conference on Artificial Intelligence and Statistics, 2025b. URL https://openreview.net/forum?id=ZJwMfQ6W9P.
+Gaur, M., Bedi, A. S., Wang, D., and Aggarwal, V. Closing the gap: Achieving global convergence (last iterate) of actor-critic under markovian sampling with neural network parametrization. In International Conference on Machine Learning, 2024.
+Gonzalez, G., Balakuntala, M., Agarwal, M., Low, T., Knoth, B., Kirkpatrick, A. W., McKee, J., Hager, G., Aggarwal, V., Xue, Y., Voyles, R., and Wachs, J. Asap: A semi-autonomous precise system for telesurgery during communication delays. IEEE Transactions on Medical Robotics and Bionics, 5(1):66-78, 2023.
+Jaksch, T., Ortner, R., and Auer, P. Near-optimal regret bounds for reinforcement learning. The Journal of Machine Learning Research, 11:1563-1600, 2010.
+Kumar, N., Murthy, Y., Shufaro, I., Levy, K. Y., Srikant, R., and Mannor, S. Global convergence of policy gradient in average reward MDPs. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=2PRpcmJecX.
+Liu, Y., Zhang, K., Basar, T., and Yin, W. An improved analysis of (variance-reduced) policy gradient and natural policy gradient methods. Advances in Neural Information Processing Systems, 33:7624-7636, 2020.
+Mondal, W. U. and Aggarwal, V. Improved sample complexity analysis of natural policy gradient algorithm with general parameterization for infinite horizon discounted reward markov decision processes. In International Conference on Artificial Intelligence and Statistics, pp. 3097-3105. PMLR, 2024.
+Murthy, Y., Moharrami, M., and Srikant, R. Modified policy iteration for exponential cost risk sensitive mdps. In Learning for Dynamics and Control Conference, pp. 395-406. PMLR, 2023.
+Nagaraj, D., Wu, X., Bresler, G., Jain, P., and Netrapalli, P. Least squares regression with markovian data: Fundamental limits and algorithms. In Advances in Neural Information Processing Systems, volume 33, pp. 16666-16676, 2020.
+Panda, P. and Bhatnagar, S. Two-timescale critic-actor for average reward mdps with function approximation. Proceedings of the AAAI Conference on Artificial Intelligence, 39(19):19813-19820, Apr. 2025.
+Papini, M., Binaghi, D., Canonaco, G., Pirotta, M., and Restelli, M. Stochastic variance-reduced policy gradient. In International conference on machine learning, pp. 4026-4035, 2018.
+Patel, B., Suttle, W. A., Koppel, A., Aggarwal, V., Sadler, B. M., Bedi, A. S., and Manocha, D. Global optimality without mixing time oracles in average-reward rl via multi-level actor-critic. In International Conference on Machine Learning, 2024.
+Puterman, M. L. Markov decision processes: discrete stochastic dynamic programming. John Wiley & Sons, 2014.
+Suttle, W. A., Bedi, A., Patel, B., Sadler, B. M., Koppel, A., and Manocha, D. Beyond exponentially fast mixing in average-reward reinforcement learning via multi-level monte carlo actor-critic. In International Conference on Machine Learning, pp. 33240-33267, 2023.
+Sutton, R. S., McAllester, D., Singh, S., and Mansour, Y. Policy gradient methods for reinforcement learning with function approximation. Advances in neural information processing systems, 12, 1999.
+Tamboli, D., Chen, J., Jotheeswaran, K. P., Yu, D., and Aggarwal, V. Reinforced sequential decision-making for sepsis treatment: The posnegdm framework with mortality classifier and transformer. IEEE Journal of Biomedical and Health Informatics, 2024.
+Wang, L., Cai, Q., Yang, Z., and Wang, Z. Neural policy gradient methods: Global optimality and rates of convergence. In International Conference on Learning Representations, 2019.
+Wang, Y., Wang, Y., Zhou, Y., and Zou, S. Non-asymptotic analysis for single-loop (natural) actor-critic with compatible function approximation. In International Conference on Machine Learning, 2024.
+Wei, C.-Y., Jahromi, M. J., Luo, H., Sharma, H., and Jain, R. Model-free reinforcement learning in infinite-horizon average-reward markov decision processes. In International conference on machine learning, pp. 10170-10180. PMLR, 2020.
+Wu, Y. F., Zhang, W., Xu, P., and Gu, Q. A finite-time analysis of two time-scale actor-critic methods. Advances in Neural Information Processing Systems, 33:17617-17628, 2020.
+Xu, P., Gao, F., and Gu, Q. Sample efficient policy gradient methods with recursive variance reduction. In International Conference on Learning Representations, 2019.
+Xu, T., Wang, Z., and Liang, Y. Improving sample complexity bounds for (natural) actor-critic algorithms. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., and Lin, H. (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 4358-4369. Curran Associates, Inc., 2020.
+Xu, Y., Ganesh, S., Mondal, W. U., Bai, Q., and Aggarwal, V. Global convergence for average reward constrained mdps with primal-dual actor critic algorithm. arXiv preprint arXiv:2505.15138, 2025.
+Zhang, J., Ni, C., Yu, Z., Szepesvari, C., and Wang, M. On the convergence and sample efficiency of variance-reduced policy gradient method. In Beygelzimer, A., Dauphin, Y., Liang, P., and Vaughan, J. W. (eds.), Advances in Neural Information Processing Systems, 2021a.
+Zhang, S., Zhang, Z., and Maguluri, S. T. Finite sample analysis of average-reward td learning and q-learning. In Advances in Neural Information Processing Systems, volume 34, pp. 1230-1242, 2021b.
+
+# A. Proof of Lemma 1
+
+The proof of the lemma is inspired by the analysis in (Mondal & Aggarwal, 2024). The major distinction is that the bound derived in (Mondal & Aggarwal, 2024) applies to the discounted reward setting, whereas our derivation pertains to the average-reward case. We begin by stating a useful lemma.
+
+Lemma 4 (Lemma 4, (Bai et al., 2024)). The performance difference between any two policies $\pi_{\theta}$ and $\pi_{\theta'}$ satisfies
+
+$$
+J (\theta) - J \left(\theta^ {\prime}\right) = \mathbb {E} _ {s \sim d ^ {\pi_ {\theta}}} \mathbb {E} _ {a \sim \pi_ {\theta} (\cdot | s)} \left[ A ^ {\pi_ {\theta^ {\prime}}} (s, a) \right]. \tag {36}
+$$
+
+Continuing with the proof, we have:
+
+$$
+\begin{array}{l} \mathbb {E} _ {s \sim d ^ {\pi^ {*}}} \left[ \mathrm {K L} \left(\pi^ {*} (\cdot | s) \| \pi_ {\theta_ {k}} (\cdot | s)\right) - \mathrm {K L} \left(\pi^ {*} (\cdot | s) \| \pi_ {\theta_ {k + 1}} (\cdot | s)\right) \right] \\ = \mathbb {E} _ {s \sim d ^ {\pi^ {*}}} \mathbb {E} _ {a \sim \pi^ {*} (\cdot | s)} \left[ \log \frac {\pi_ {\theta_ {k + 1}} (a | s)}{\pi_ {\theta_ {k}} (a | s)} \right] \\ \stackrel {(a)} {\geq} \mathbb {E} _ {s \sim d ^ {\pi^ {*}}} \mathbb {E} _ {a \sim \pi^ {*} (\cdot | s)} [ \nabla_ {\theta} \log \pi_ {\theta_ {k}} (a | s) \cdot (\theta_ {k + 1} - \theta_ {k}) ] - \frac {G _ {2}}{2} \| \theta_ {k + 1} - \theta_ {k} \| ^ {2} \\ = \alpha \mathbb {E} _ {s \sim d ^ {\pi^ {*}}} \mathbb {E} _ {a \sim \pi^ {*} (\cdot | s)} [ \nabla_ {\theta} \log \pi_ {\theta_ {k}} (a | s) \cdot \omega_ {k} ] - \frac {G _ {2} \alpha^ {2}}{2} \| \omega_ {k} \| ^ {2} \\ = \alpha \mathbb {E} _ {s \sim d ^ {\pi^ {*}}} \mathbb {E} _ {a \sim \pi^ {*} (\cdot | s)} [ \nabla_ {\theta} \log \pi_ {\theta_ {k}} (a | s) \cdot \omega_ {k} ^ {*} ] + \alpha \mathbb {E} _ {s \sim d ^ {\pi^ {*}}} \mathbb {E} _ {a \sim \pi^ {*} (\cdot | s)} [ \nabla_ {\theta} \log \pi_ {\theta_ {k}} (a | s) \cdot (\omega_ {k} - \omega_ {k} ^ {*}) ] - \frac {G _ {2} \alpha^ {2}}{2} \| \omega_ {k} \| ^ {2} \\ = \alpha \left[ J ^ {*} - J \left(\theta_ {k}\right) \right] + \alpha \mathbb {E} _ {s \sim d ^ {\pi^ {*}}} \mathbb {E} _ {a \sim \pi^ {*} (\cdot | s)} \left[ \nabla_ {\theta} \log \pi_ {\theta_ {k}} (a | s) \cdot \omega_ {k} ^ {*} \right] - \alpha \left[ J ^ {*} - J \left(\theta_ {k}\right) \right] \\ + \alpha \mathbb {E} _ {s \sim d ^ {\pi^ {*}}} \mathbb {E} _ {a \sim \pi^ {*} (\cdot | s)} [ \nabla_ {\theta} \log \pi_ {\theta_ {k}} (a | s) \cdot (\omega_ {k} - \omega_ {k} ^ {*}) ] - \frac {G _ {2} \alpha^ {2}}{2} \| \omega_ {k} \| ^ {2} \\ \stackrel {(b)} {=} \alpha \left[ J ^ {*} - J \left(\theta_ {k}\right) \right] + \alpha \mathbb {E} _ {s \sim d ^ {\pi^ {*}}} \mathbb {E} 
_ {a \sim \pi^ {*} (\cdot | s)} \left[ \nabla_ {\theta} \log \pi_ {\theta_ {k}} (a | s) \cdot \omega_ {k} ^ {*} - A ^ {\pi_ {\theta_ {k}}} (s, a) \right] \\ + \alpha \mathbb {E} _ {s \sim d ^ {\pi^ {*}}} \mathbb {E} _ {a \sim \pi^ {*} (\cdot | s)} [ \nabla_ {\theta} \log \pi_ {\theta_ {k}} (a | s) \cdot (\omega_ {k} - \omega_ {k} ^ {*}) ] - \frac {G _ {2} \alpha^ {2}}{2} \| \omega_ {k} \| ^ {2} \\ \stackrel {(c)} {\geq} \alpha [ J ^ {*} - J (\theta_ {k}) ] - \alpha \sqrt {\mathbb {E} _ {s \sim d ^ {\pi^ {*}}} \mathbb {E} _ {a \sim \pi^ {*} (\cdot | s)} \left[ \left(\nabla_ {\theta} \log \pi_ {\theta_ {k}} (a | s) \cdot \omega_ {k} ^ {*} - A ^ {\pi_ {\theta_ {k}}} (s , a)\right) ^ {2} \right]} \\ + \alpha \mathbb {E} _ {s \sim d ^ {\pi^ {*}}} \mathbb {E} _ {a \sim \pi^ {*} (\cdot | s)} [ \nabla_ {\theta} \log \pi_ {\theta_ {k}} (a | s) \cdot (\omega_ {k} - \omega_ {k} ^ {*}) ] - \frac {G _ {2} \alpha^ {2}}{2} \| \omega_ {k} \| ^ {2} \\ \stackrel {(d)} {\geq} \alpha [ J ^ {*} - J (\theta_ {k}) ] - \alpha \sqrt {\epsilon_ {\mathrm {b i a s}}} + \alpha \mathbb {E} _ {s \sim d ^ {\pi^ {*}}} \mathbb {E} _ {a \sim \pi^ {*} (\cdot | s)} [ \nabla_ {\theta} \log \pi_ {\theta_ {k}} (a | s) \cdot (\omega_ {k} - \omega_ {k} ^ {*}) ] - \frac {G _ {2} \alpha^ {2}}{2} \| \omega_ {k} \| ^ {2}. \\ \end{array}
+$$
+
+Here $(a)$ and $(b)$ follow from Assumption 5(b) and Lemma 4, respectively. Inequality $(c)$ results from the convexity of the function $f(x) = x^{2}$ . Lastly, $(d)$ is a consequence of Assumption 4. By taking expectations on both sides, we derive:
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \mathbb {E} _ {s \sim d ^ {\pi^ {*}}} \left[ \mathrm {K L} \left(\pi^ {*} (\cdot | s) \| \pi_ {\theta_ {k}} (\cdot | s)\right) - \mathrm {K L} \left(\pi^ {*} (\cdot | s) \| \pi_ {\theta_ {k + 1}} (\cdot | s)\right) \right] \right] \\ \geq \alpha \left[ J ^ {*} - \mathbb {E} \left[ J \left(\theta_ {k}\right) \right] \right] - \alpha \sqrt {\epsilon_ {\text {b i a s}}} \\ + \alpha \mathbb {E} \left[ \mathbb {E} _ {s \sim d ^ {\pi^ {*}}} \mathbb {E} _ {a \sim \pi^ {*} (\cdot | s)} [ \nabla_ {\theta} \log \pi_ {\theta_ {k}} (a | s) \cdot (\mathbb {E} _ {k} [ \omega_ {k} ] - \omega_ {k} ^ {*}) ] \right] - \frac {G _ {2} \alpha^ {2}}{2} \mathbb {E} \left[ \| \omega_ {k} \| ^ {2} \right] \\ \geq \alpha \left[ J ^ {*} - \mathbb {E} \left[ J \left(\theta_ {k}\right) \right] \right] - \alpha \sqrt {\epsilon_ {\text {b i a s}}} \tag {37} \\ - \alpha \mathbb {E} \left[ \mathbb {E} _ {s \sim d ^ {\pi^ {*}}} \mathbb {E} _ {a \sim \pi^ {*} (\cdot | s)} [ \| \nabla_ {\theta} \log \pi_ {\theta_ {k}} (a | s) \| \| \mathbb {E} _ {k} [ \omega_ {k} ] - \omega_ {k} ^ {*} \| ] \right] - \frac {G _ {2} \alpha^ {2}}{2} \mathbb {E} \left[ \| \omega_ {k} \| ^ {2} \right] \\ \end{array}
+$$
+
+$$
+\stackrel {(a)} {\geq} \alpha [ J ^ {*} - \mathbb {E} [ J (\theta_ {k}) ] ] - \alpha \sqrt {\epsilon_ {\mathrm {b i a s}}} - \alpha G _ {1} \mathbb {E} \| (\mathbb {E} _ {k} [ \omega_ {k} ] - \omega_ {k} ^ {*}) \| - \frac {G _ {2} \alpha^ {2}}{2} \mathbb {E} [ \| \omega_ {k} \| ^ {2} ]
+$$
+
+where $(a)$ follows from Assumption 5(a). Rearranging the terms, we get,
+
+$$
+\begin{array}{l} J ^ {*} - \mathbb {E} [ J (\theta_ {k}) ] \leq \sqrt {\epsilon_ {\text {b i a s}}} + G _ {1} \mathbb {E} \| \left(\mathbb {E} _ {k} \left[ \omega_ {k} \right] - \omega_ {k} ^ {*}\right) \| + \frac {G _ {2} \alpha}{2} \mathbb {E} \| \omega_ {k} \| ^ {2} \tag {38} \\ + \frac {1}{\alpha} \mathbb {E} \left[ \mathbb {E} _ {s \sim d ^ {\pi^ {*}}} \left[ \mathrm {K L} \left(\pi^ {*} (\cdot | s) \| \pi_ {\theta_ {k}} (\cdot | s)\right) - \mathrm {K L} \left(\pi^ {*} (\cdot | s) \| \pi_ {\theta_ {k + 1}} (\cdot | s)\right) \right] \right] \\ \end{array}
+$$
+
+Summing the above inequality over $k = 0$ to $K - 1$ , using the non-negativity of the KL divergence, and dividing the resulting expression by $K$ , we obtain the final result.
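+Explicitly, summing (38) over $k = 0, \dots, K - 1$ , the KL terms telescope; dropping the non-positive term $-\mathrm{KL}(\pi^*\| \pi_{\theta_K})$ and dividing by $K$ gives
+
+$$
+\frac{1}{K}\sum_{k=0}^{K-1}\left(J^* - \mathbb{E}\left[J(\theta_k)\right]\right) \leq \sqrt{\epsilon_{\mathrm{bias}}} + \frac{G_1}{K}\sum_{k=0}^{K-1}\mathbb{E}\left\| \mathbb{E}_k[\omega_k] - \omega_k^*\right\| + \frac{G_2\alpha}{2K}\sum_{k=0}^{K-1}\mathbb{E}\left[\|\omega_k\|^2\right] + \frac{1}{\alpha K}\mathbb{E}\left[\mathbb{E}_{s\sim d^{\pi^*}}\left[\mathrm{KL}\left(\pi^*(\cdot|s)\| \pi_{\theta_0}(\cdot|s)\right)\right]\right]
+$$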
+
+# B. Proof of Theorem 2
+
+Let $g_{h} = \hat{P}_{h}x_{h} - \hat{q}_{h}$ . To prove the first statement, observe the following relations.
+
+$$
+\begin{array}{l} \left\| x _ {h + 1} - x ^ {*} \right\| ^ {2} = \left\| x _ {h} - \bar {\beta} g _ {h} - x ^ {*} \right\| ^ {2} \\ = \left\| x _ {h} - x ^ {*} \right\| ^ {2} - 2 \bar {\beta} \langle x _ {h} - x ^ {*}, g _ {h} \rangle + \bar {\beta} ^ {2} \| g _ {h} \| ^ {2} \\ = \| x _ {h} - x ^ {*} \| ^ {2} - 2 \bar {\beta} \langle x _ {h} - x ^ {*}, P (x _ {h} - x ^ {*}) \rangle - 2 \bar {\beta} \langle x _ {h} - x ^ {*}, g _ {h} - P (x _ {h} - x ^ {*}) \rangle + \bar {\beta} ^ {2} \| g _ {h} \| ^ {2} \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \stackrel {(a)} {\leq} \| x _ {h} - x ^ {*} \| ^ {2} - 2 \bar {\beta} \lambda_ {P} \| x _ {h} - x ^ {*} \| ^ {2} - 2 \bar {\beta} \langle x _ {h} - x ^ {*}, g _ {h} - P (x _ {h} - x ^ {*}) \rangle \\ + 2 \bar {\beta} ^ {2} \| g _ {h} - P (x _ {h} - x ^ {*}) \| ^ {2} + 2 \bar {\beta} ^ {2} \| P (x _ {h} - x ^ {*}) \| ^ {2} \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \stackrel {(b)} {\leq} \left\| x _ {h} - x ^ {*} \right\| ^ {2} - 2 \bar {\beta} \lambda_ {P} \left\| x _ {h} - x ^ {*} \right\| ^ {2} - 2 \bar {\beta} \langle x _ {h} - x ^ {*}, g _ {h} - P \left(x _ {h} - x ^ {*}\right) \rangle \\ + 2 \bar {\beta} ^ {2} \| g _ {h} - P (x _ {h} - x ^ {*}) \| ^ {2} + 2 \Lambda_ {P} ^ {2} \bar {\beta} ^ {2} \| x _ {h} - x ^ {*} \| ^ {2} \\ \end{array}
+$$
+
+where $(a)$ and $(b)$ follow from $\lambda_P I \preccurlyeq P$ and $\|P\| \leq \Lambda_P$ . Taking conditional expectation $\mathbb{E}_h$ on both sides, we obtain
+
+$$
+\begin{array}{l} \mathbb {E} _ {h} \left[ \| x _ {h + 1} - x ^ {*} \| ^ {2} \right] \leq \left(1 - 2 \bar {\beta} \lambda_ {P} + 2 \Lambda_ {P} ^ {2} \bar {\beta} ^ {2}\right) \| x _ {h} - x ^ {*} \| ^ {2} - 2 \bar {\beta} \langle x _ {h} - x ^ {*}, \mathbb {E} _ {h} [ g _ {h} - P (x _ {h} - x ^ {*}) ] \rangle \\ + 2 \bar {\beta} ^ {2} \mathbb {E} _ {h} \| g _ {h} - P \left(x _ {h} - x ^ {*}\right) \| ^ {2} \tag {39} \\ \end{array}
+$$
+
+Observe that the third term in (39) can be bounded as follows.
+
+$$
+\begin{array}{l} \left\| g _ {h} - P (x _ {h} - x ^ {*}) \right\| ^ {2} = \left\| (\hat {P} _ {h} - P) (x _ {h} - x ^ {*}) + (\hat {P} _ {h} - P) x ^ {*} + (q - \hat {q} _ {h}) \right\| ^ {2} \\ \leq 3 \| \hat {P} _ {h} - P \| ^ {2} \| x _ {h} - x ^ {*} \| ^ {2} + 3 \| \hat {P} _ {h} - P \| ^ {2} \| x ^ {*} \| ^ {2} + 3 \| q - \hat {q} _ {h} \| ^ {2} \\ \leq 3 \| \hat {P} _ {h} - P \| ^ {2} \| x _ {h} - x ^ {*} \| ^ {2} + 3 \lambda_ {P} ^ {- 2} \Lambda_ {q} ^ {2} \| \hat {P} _ {h} - P \| ^ {2} + 3 \| q - \hat {q} _ {h} \| ^ {2} \\ \end{array}
+$$
+
+where the last inequality follows from $\| x^{*}\|^{2} = \left\| P^{-1}q\right\|^{2}\leq \lambda_{P}^{-2}\Lambda_{q}^{2}$ . Taking expectation yields
+
+$$
+\begin{array}{l} \mathbb {E} _ {h} \left\| g _ {h} - P \left(x _ {h} - x ^ {*}\right) \right\| ^ {2} \leq 3 \mathbb {E} _ {h} \left\| \hat {P} _ {h} - P \right\| ^ {2} \left\| x _ {h} - x ^ {*} \right\| ^ {2} + 3 \lambda_ {P} ^ {- 2} \Lambda_ {q} ^ {2} \mathbb {E} _ {h} \left\| \hat {P} _ {h} - P \right\| ^ {2} + 3 \mathbb {E} _ {h} \left\| \hat {q} _ {h} - q \right\| ^ {2} \\ \leq 3 \sigma_ {P} ^ {2} \| x _ {h} - x ^ {*} \| ^ {2} + 3 \lambda_ {P} ^ {- 2} \Lambda_ {q} ^ {2} \sigma_ {P} ^ {2} + 3 \sigma_ {q} ^ {2} \tag {40} \\ \end{array}
+$$
+
+The second term in (39) can be bounded as
+
+$$
+\begin{array}{l} - \left\langle x _ {h} - x ^ {*}, \mathbb {E} _ {h} \left[ g _ {h} - P \left(x _ {h} - x ^ {*}\right) \right] \right\rangle \leq \frac {\lambda_ {P}}{4} \left\| x _ {h} - x ^ {*} \right\| ^ {2} + \frac {1}{\lambda_ {P}} \left\| \mathbb {E} _ {h} \left[ g _ {h} - P \left(x _ {h} - x ^ {*}\right) \right] \right\| ^ {2} \\ \leq \frac {\lambda_ {P}}{4} \| x _ {h} - x ^ {*} \| ^ {2} + \frac {1}{\lambda_ {P}} \left\| \left\{\mathbb {E} _ {h} [ \hat {P} _ {h} ] - P \right\} x _ {h} + \left\{q - \mathbb {E} _ {h} [ \hat {q} _ {h} ] \right\} \right\| ^ {2} \\ \leq \frac {\lambda_ {P}}{4} \| x _ {h} - x ^ {*} \| ^ {2} + \frac {2 \delta_ {P} ^ {2} \| x _ {h} \| ^ {2} + 2 \delta_ {q} ^ {2}}{\lambda_ {P}} \\ \leq \frac {\lambda_ {P}}{4} \| x _ {h} - x ^ {*} \| ^ {2} + \frac {4 \delta_ {P} ^ {2} \| x _ {h} - x ^ {*} \| ^ {2} + 4 \delta_ {P} ^ {2} \lambda_ {P} ^ {- 2} \Lambda_ {q} ^ {2} + 2 \delta_ {q} ^ {2}}{\lambda_ {P}} \tag {41} \\ \end{array}
+$$
+
+where the last inequality follows from $\| x^{*}\|^{2} = \| P^{-1}q\|^{2}\leq \lambda_{P}^{-2}\Lambda_{q}^{2}$ . Substituting the above bounds in (39),
+
+$$
+\begin{array}{l} \mathbb {E} _ {h} \left[ \| x _ {h + 1} - x ^ {*} \| ^ {2} \right] \leq \left(1 - \frac {3 \bar {\beta} \lambda_ {P}}{2} + \frac {8 \bar {\beta} \delta_ {P} ^ {2}}{\lambda_ {P}} + 6 \bar {\beta} ^ {2} \sigma_ {P} ^ {2} + 2 \bar {\beta} ^ {2} \Lambda_ {P} ^ {2}\right) \| x _ {h} - x ^ {*} \| ^ {2} + \frac {4 \bar {\beta}}{\lambda_ {P}} \left[ 2 \delta_ {P} ^ {2} \lambda_ {P} ^ {- 2} \Lambda_ {q} ^ {2} + \delta_ {q} ^ {2} \right] \\ + 6 \bar {\beta} ^ {2} \left[ \lambda_ {P} ^ {- 2} \Lambda_ {q} ^ {2} \sigma_ {P} ^ {2} + \sigma_ {q} ^ {2} \right] \\ \end{array}
+$$
+
+For $\delta_P\leq \lambda_P / 8$ and $\bar{\beta}\leq \lambda_P / [4(6\sigma_P^2 +2\Lambda_P^2)]$ , the above inequality simplifies to the following.
+
+$$
+\mathbb {E} _ {h} \big [ \| x _ {h + 1} - x ^ {*} \| ^ {2} \big ] \leq \big (1 - \bar {\beta} \lambda_ {P} \big) \| x _ {h} - x ^ {*} \| ^ {2} + \frac {4 \bar {\beta}}{\lambda_ {P}} \left[ 2 \delta_ {P} ^ {2} \lambda_ {P} ^ {- 2} \Lambda_ {q} ^ {2} + \delta_ {q} ^ {2} \right] + 6 \bar {\beta} ^ {2} \left[ \lambda_ {P} ^ {- 2} \Lambda_ {q} ^ {2} \sigma_ {P} ^ {2} + \sigma_ {q} ^ {2} \right]
+$$
+
+Taking expectation on both sides and unrolling the recursion yields
+
+$$
+\begin{array}{l} \mathbb {E} [ \| x _ {H} - x ^ {*} \| ^ {2} ] \\ \leq \left(1 - \bar {\beta} \lambda_ {P}\right) ^ {H} \mathbb {E} \| x _ {0} - x ^ {*} \| ^ {2} + \sum_ {h = 0} ^ {H - 1} \left(1 - \bar {\beta} \lambda_ {P}\right) ^ {h} \left\{\frac {4 \bar {\beta}}{\lambda_ {P}} \left[ 2 \delta_ {P} ^ {2} \lambda_ {P} ^ {- 2} \Lambda_ {q} ^ {2} + \delta_ {q} ^ {2} \right] + 6 \bar {\beta} ^ {2} \left[ \lambda_ {P} ^ {- 2} \Lambda_ {q} ^ {2} \sigma_ {P} ^ {2} + \sigma_ {q} ^ {2} \right] \right\} \\ \leq \exp \left(- H \bar {\beta} \lambda_ {P}\right) \mathbb {E} \| x _ {0} - x ^ {*} \| ^ {2} + \frac {1}{\bar {\beta} \lambda_ {P}} \left\{\frac {4 \bar {\beta}}{\lambda_ {P}} \left[ 2 \delta_ {P} ^ {2} \lambda_ {P} ^ {- 2} \Lambda_ {q} ^ {2} + \delta_ {q} ^ {2} \right] + 6 \bar {\beta} ^ {2} \left[ \lambda_ {P} ^ {- 2} \Lambda_ {q} ^ {2} \sigma_ {P} ^ {2} + \sigma_ {q} ^ {2} \right] \right\} \\ = \exp \left(- H \bar {\beta} \lambda_ {P}\right) \mathbb {E} \| x _ {0} - x ^ {*} \| ^ {2} + \left\{4 \lambda_ {P} ^ {- 2} \left[ 2 \delta_ {P} ^ {2} \lambda_ {P} ^ {- 2} \Lambda_ {q} ^ {2} + \delta_ {q} ^ {2} \right] + 6 \bar {\beta} \lambda_ {P} ^ {- 1} \left[ \lambda_ {P} ^ {- 2} \Lambda_ {q} ^ {2} \sigma_ {P} ^ {2} + \sigma_ {q} ^ {2} \right] \right\} \\ \end{array}
+$$
+
+Set $\bar{\beta} = \frac{2\log H}{\lambda_P H}$ to obtain the following result.
+
+$$
+\mathbb {E} \left[ \| x _ {H} - x ^ {*} \| ^ {2} \right] \leq \frac {1}{H ^ {2}} \mathbb {E} \left[ \| x _ {0} - x ^ {*} \| ^ {2} \right] + \mathcal {O} \left(\frac {\log H}{H} \underbrace {\left\{\lambda_ {P} ^ {- 4} \Lambda_ {q} ^ {2} \sigma_ {P} ^ {2} + \lambda_ {P} ^ {- 2} \sigma_ {q} ^ {2} \right\}} _ {R _ {0}} + \underbrace {\lambda_ {P} ^ {- 2} \left[ \delta_ {P} ^ {2} \lambda_ {P} ^ {- 2} \Lambda_ {q} ^ {2} + \delta_ {q} ^ {2} \right]} _ {R _ {1}}\right) \tag {42}
+$$
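+For completeness, the substitution that produces (42) is the elementary computation
+
+$$
+\exp \left(- H \bar {\beta} \lambda_ {P}\right) = \exp \left(- H \cdot \frac {2 \log H}{\lambda_ {P} H} \cdot \lambda_ {P}\right) = \exp (- 2 \log H) = \frac {1}{H ^ {2}}, \qquad 6 \bar {\beta} \lambda_ {P} ^ {- 1} \left[ \lambda_ {P} ^ {- 2} \Lambda_ {q} ^ {2} \sigma_ {P} ^ {2} + \sigma_ {q} ^ {2} \right] = \frac {12 \log H}{H} R _ {0}.
+$$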
+
+Note that, for consistency, we must have $\log H / H\leq \lambda_P^2 /[8(6\sigma_P^2 +2\Lambda_P^2)]$ . To prove the second statement, observe that we have the following recursion.
+
+$$
+\begin{array}{l} \left\| \mathbb {E} \left[ x _ {h + 1} \right] - x ^ {*} \right\| ^ {2} = \left\| \mathbb {E} \left[ x _ {h} \right] - \bar {\beta} \mathbb {E} \left[ g _ {h} \right] - x ^ {*} \right\| ^ {2} \\ = \| \mathbb {E} [ x _ {h} ] - x ^ {*} \| ^ {2} - 2 \bar {\beta} \langle \mathbb {E} [ x _ {h} ] - x ^ {*}, \mathbb {E} [ g _ {h} ] \rangle + \bar {\beta} ^ {2} \| \mathbb {E} [ g _ {h} ] \| ^ {2} \\ = \| \mathbb {E} [ x _ {h} ] - x ^ {*} \| ^ {2} - 2 \bar {\beta} \langle \mathbb {E} [ x _ {h} ] - x ^ {*}, P (\mathbb {E} [ x _ {h} ] - x ^ {*}) \rangle - 2 \bar {\beta} \langle \mathbb {E} [ x _ {h} ] - x ^ {*}, \mathbb {E} [ g _ {h} ] - P (\mathbb {E} [ x _ {h} ] - x ^ {*}) \rangle + \bar {\beta} ^ {2} \| \mathbb {E} [ g _ {h} ] \| ^ {2} \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \stackrel {(a)} {\leq} \| \mathbb {E} [ x _ {h} ] - x ^ {*} \| ^ {2} - 2 \bar {\beta} \lambda_ {P} \| \mathbb {E} [ x _ {h} ] - x ^ {*} \| ^ {2} - 2 \bar {\beta} \langle \mathbb {E} [ x _ {h} ] - x ^ {*}, \mathbb {E} [ g _ {h} ] - P (\mathbb {E} [ x _ {h} ] - x ^ {*}) \rangle \\ + 2 \bar {\beta} ^ {2} \| \mathbb {E} [ g _ {h} ] - P (\mathbb {E} [ x _ {h} ] - x ^ {*}) \| ^ {2} + 2 \bar {\beta} ^ {2} \| P (\mathbb {E} [ x _ {h} ] - x ^ {*}) \| ^ {2} \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \stackrel {(b)} {\leq} \| \mathbb {E} [ x _ {h} ] - x ^ {*} \| ^ {2} - 2 \bar {\beta} \lambda_ {P} \| \mathbb {E} [ x _ {h} ] - x ^ {*} \| ^ {2} - 2 \bar {\beta} \langle \mathbb {E} [ x _ {h} ] - x ^ {*}, \mathbb {E} [ g _ {h} ] - P (\mathbb {E} [ x _ {h} ] - x ^ {*}) \rangle \\ + 2 \bar {\beta} ^ {2} \| \mathbb {E} [ g _ {h} ] - P (\mathbb {E} [ x _ {h} ] - x ^ {*}) \| ^ {2} + 2 \Lambda_ {P} ^ {2} \bar {\beta} ^ {2} \| \mathbb {E} [ x _ {h} ] - x ^ {*} \| ^ {2} \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \leq \left(1 - 2 \bar {\beta} \lambda_ {P} + 2 \Lambda_ {P} ^ {2} \bar {\beta} ^ {2}\right) \| \mathbb {E} [ x _ {h} ] - x ^ {*} \| ^ {2} - 2 \bar {\beta} \langle \mathbb {E} [ x _ {h} ] - x ^ {*}, \mathbb {E} [ g _ {h} ] - P (\mathbb {E} [ x _ {h} ] - x ^ {*}) \rangle \\ + 2 \bar {\beta} ^ {2} \left\| \mathbb {E} \left[ g _ {h} \right] - P \left(\mathbb {E} \left[ x _ {h} \right] - x ^ {*}\right) \right\| ^ {2} \tag {43} \\ \end{array}
+$$
+
+where $(a)$ and $(b)$ follow from $\lambda_P I \preccurlyeq P$ and $\|P\| \leq \Lambda_P$ . The third term in the last line of (43) can be bounded as follows.
+
+$$
+\begin{array}{l} \left\| \mathbb {E} \left[ g _ {h} \right] - P \left(\mathbb {E} \left[ x _ {h} \right] - x ^ {*}\right)\right\| ^ {2} = \left\| \mathbb {E} \left[\left(\hat {P} _ {h} - P\right)\left(x _ {h} - x ^ {*}\right)\right] + \left(\mathbb {E} [ \hat {P} _ {h} ] - P\right) x ^ {*} + \left(q - \mathbb {E} [ \hat {q} _ {h} ]\right)\right\| ^ {2} \\ \leq 3 \mathbb {E} \left[ \| \mathbb {E} _ {h} [ \hat {P} _ {h} ] - P \| ^ {2} \| x _ {h} - x ^ {*} \| ^ {2} \right] + 3 \mathbb {E} \left[ \| \mathbb {E} _ {h} [ \hat {P} _ {h} ] - P \| ^ {2} \right] \| x ^ {*} \| ^ {2} + 3 \| q - \mathbb {E} [ \hat {q} _ {h} ] \| ^ {2} \\ \leq 3 \delta_ {P} ^ {2} \mathbb {E} \left[ \left\| x _ {h} - x ^ {*} \right\| ^ {2} \right] + 3 \lambda_ {P} ^ {- 2} \Lambda_ {q} ^ {2} \delta_ {P} ^ {2} + 3 \bar {\delta} _ {q} ^ {2} \\ \stackrel {(a)} {\leq} 3 \delta_ {P} ^ {2} \left\{\mathbb {E} \left[ \| x _ {0} - x ^ {*} \| ^ {2} \right] + \mathcal {O} \left(R _ {0} + R _ {1}\right) \right\} + 3 \lambda_ {P} ^ {- 2} \Lambda_ {q} ^ {2} \delta_ {P} ^ {2} + 3 \bar {\delta} _ {q} ^ {2} \\ \end{array}
+$$
+
+where $(a)$ follows from (42). The second term in the last line of (43) can be bounded as follows.
+
+$$
+\begin{array}{l} - \langle \mathbb {E} [ x _ {h} ] - x ^ {*}, \mathbb {E} [ g _ {h} ] - P (\mathbb {E} [ x _ {h} ] - x ^ {*}) \rangle \\ \leq \frac {\lambda_ {P}}{4} \| \mathbb {E} [ x _ {h} ] - x ^ {*} \| ^ {2} + \frac {1}{\lambda_ {P}} \| \mathbb {E} [ g _ {h} ] - P (\mathbb {E} [ x _ {h} ] - x ^ {*}) \| ^ {2} \\ \leq \frac {\lambda_ {P}}{4} \| \mathbb {E} [ x _ {h} ] - x ^ {*} \| ^ {2} + \frac {3}{\lambda_ {P}} \left[ \delta_ {P} ^ {2} \left\{\mathbb {E} \left[ \| x _ {0} - x ^ {*} \| ^ {2} \right] + \mathcal {O} (R _ {0} + R _ {1}) \right\} + \lambda_ {P} ^ {- 2} \Lambda_ {q} ^ {2} \delta_ {P} ^ {2} + \bar {\delta} _ {q} ^ {2} \right] \\ \end{array}
+$$
+
+Substituting the above bounds in (43), we obtain the following recursion.
+
+$$
+\begin{array}{l} \left\| \mathbb {E} \left[ x _ {h + 1} \right] - x ^ {*} \right\| ^ {2} \leq \left(1 - \frac {3 \bar {\beta} \lambda_ {P}}{2} + 2 \Lambda_ {P} ^ {2} \bar {\beta} ^ {2}\right) \| \mathbb {E} [ x _ {h} ] - x ^ {*} \| ^ {2} \\ + 6 \bar {\beta} \left(\bar {\beta} + \frac {1}{\lambda_ {P}}\right) \left[ \delta_ {P} ^ {2} \left\{\mathbb {E} \left[ \| x _ {0} - x ^ {*} \| ^ {2} \right] + \mathcal {O} (R _ {0} + R _ {1}) \right\} + \lambda_ {P} ^ {- 2} \Lambda_ {q} ^ {2} \delta_ {P} ^ {2} + \bar {\delta} _ {q} ^ {2} \right] \\ \end{array}
+$$
+
+If $\bar{\beta} < \lambda_P / (4\Lambda_P^2)$, the above bound implies the following.
+
+$$
+\begin{array}{l} \left\| \mathbb {E} \left[ x _ {h + 1} \right] - x ^ {*} \right\| ^ {2} \leq \left(1 - \bar {\beta} \lambda_ {P}\right) \left\| \mathbb {E} \left[ x _ {h} \right] - x ^ {*} \right\| ^ {2} \\ + 6 \bar {\beta} \left(\bar {\beta} + \frac {1}{\lambda_ {P}}\right) \left[ \delta_ {P} ^ {2} \left\{\mathbb {E} \left[ \| x _ {0} - x ^ {*} \| ^ {2} \right] + \mathcal {O} (R _ {0} + R _ {1}) \right\} + \lambda_ {P} ^ {- 2} \Lambda_ {q} ^ {2} \delta_ {P} ^ {2} + \bar {\delta} _ {q} ^ {2} \right] \\ \end{array}
+$$
+
+Unrolling the above recursion, we obtain
+
+$$
+\begin{array}{l} \left\| \mathbb {E} \left[ x _ {H} \right] - x ^ {*} \right\| ^ {2} \leq \left(1 - \bar {\beta} \lambda_ {P}\right) ^ {H} \left\| \mathbb {E} \left[ x _ {0} \right] - x ^ {*} \right\| ^ {2} \\ + \sum_ {h = 0} ^ {H - 1} 6 \left(1 - \bar {\beta} \lambda_ {P}\right) ^ {h} \bar {\beta} \left(\bar {\beta} + \frac {1}{\lambda_ {P}}\right) \left[ \delta_ {P} ^ {2} \left\{\mathbb {E} \left[ \| x _ {0} - x ^ {*} \| ^ {2} \right] + \mathcal {O} \left(R _ {0} + R _ {1}\right) \right\} + \lambda_ {P} ^ {- 2} \Lambda_ {q} ^ {2} \delta_ {P} ^ {2} + \bar {\delta} _ {q} ^ {2} \right] \\ \leq \exp \left(- H \bar {\beta} \lambda_ {P}\right) \| \mathbb {E} [ x _ {0} ] - x ^ {*} \| ^ {2} + \frac {6}{\lambda_ {P}} \left(\bar {\beta} + \frac {1}{\lambda_ {P}}\right) \left[ \delta_ {P} ^ {2} \left\{\mathbb {E} \left[ \| x _ {0} - x ^ {*} \| ^ {2} \right] + \mathcal {O} \left(R _ {0} + R _ {1}\right) \right\} + \lambda_ {P} ^ {- 2} \Lambda_ {q} ^ {2} \delta_ {P} ^ {2} + \bar {\delta} _ {q} ^ {2} \right] \\ \end{array}
+$$
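+The two elementary facts used in this unrolling, $(1 - \bar{\beta}\lambda_P)^H \leq \exp(-H\bar{\beta}\lambda_P)$ and $\sum_{h=0}^{H-1}(1 - \bar{\beta}\lambda_P)^h \leq 1/(\bar{\beta}\lambda_P)$, hold for any contraction factor in $(0,1)$. The short standalone check below (illustrative Python, not part of the paper's code) verifies both on a grid.
+
+```python
+import math
+
+# Verify (1 - x)^H <= exp(-H x) and sum_{h < H} (1 - x)^h <= 1/x for x in (0, 1);
+# these are the only two steps needed to pass from the unrolled sum to the
+# exp(-H * beta * lambda_P) + 1/(beta * lambda_P) form used above.
+H = 500
+for x in [1e-3, 1e-2, 0.1, 0.5, 0.9]:
+    assert (1 - x) ** H <= math.exp(-H * x) + 1e-12
+    geometric_sum = sum((1 - x) ** h for h in range(H))
+    assert geometric_sum <= 1 / x + 1e-9
+print("bounds verified")
+```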
+
+Substituting $\bar{\beta} = 2\log H / (\lambda_{P}H)$ , we finally arrive at the following result.
+
+$$
+\| \mathbb {E} [ x _ {H} ] - x ^ {*} \| ^ {2} \leq \frac {1}{H ^ {2}} \| \mathbb {E} [ x _ {0} ] - x ^ {*} \| ^ {2} + 6 \left(1 + \frac {2 \log H}{H}\right) \left[ \lambda_ {P} ^ {- 2} \delta_ {P} ^ {2} \Big \{\mathbb {E} \left[ \| x _ {0} - x ^ {*} \| ^ {2} \right] + \mathcal {O} (R _ {0} + R _ {1}) \Big \} + \bar {R} _ {1} \right]
+$$
+
+where $\bar{R}_1 = \lambda_P^{-2}\big[\delta_P^2\lambda_P^{-2}\Lambda_q^2 +\bar{\delta}_q^2\big]$ . This concludes the proof.
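+As a purely illustrative sanity check of the bound just derived (the toy matrix $P$, the bias level, and the noise level below are arbitrary choices, not quantities from the paper), one can simulate the recursion $x_{h+1} = x_h - \bar{\beta} g_h$ with a biased, noisy linear gradient and the step size $\bar{\beta} = 2\log H/(\lambda_P H)$:
+
+```python
+import numpy as np
+
+# Toy 2-D simulation: the iterates contract toward x* up to a floor set by
+# the gradient bias (delta) and noise (sigma), mirroring the terms of (42).
+rng = np.random.default_rng(0)
+P = np.array([[2.0, 0.3], [0.3, 1.5]])   # positive definite, so lambda_P > 0
+x_star = np.array([1.0, -1.0])
+q = P @ x_star                           # chosen so that P x* = q
+
+H = 2000
+lam_P = float(np.linalg.eigvalsh(P).min())
+beta = 2.0 * np.log(H) / (lam_P * H)     # the step size chosen in the text
+
+x = np.zeros(2)
+err_0 = float(np.sum((x - x_star) ** 2))
+for _ in range(H):
+    g = P @ x - q + 0.01 + 0.1 * rng.standard_normal(2)  # bias 0.01, noise 0.1
+    x = x - beta * g
+err_H = float(np.sum((x - x_star) ** 2))
+print(f"initial error {err_0:.3f} -> final error {err_H:.6f}")
+```
+
+With this step size the contraction factor is $\exp(-2\log H) = H^{-2}$, so after $H$ steps the initial-condition term is negligible and the remaining error is dominated by the bias-and-noise floor.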
+
+# C. Properties of the MLMC Estimates
+
+This section provides some guarantees on the error of the MLMC estimator. This is similar to the results available in (Dorfman & Levy, 2022; Suttle et al., 2023; Beznosikov et al., 2023), although (Dorfman & Levy, 2022; Beznosikov et al., 2023) consider the case of unbiased estimates while our results deal with biased estimates.
+
+Lemma 5. Consider a time-homogeneous, ergodic Markov chain $(Z_{t})_{t\geq 0}$ with a unique stationary distribution $d_Z$ and a mixing time $t_\mathrm{mix}$ . Assume that $\nabla F(x)$ is an arbitrary gradient and $\nabla F(x,Z)$ denotes an estimate of $\nabla F(x)$ . Let $\| \mathbb{E}_{d_Z}[\nabla F(x,Z)] - \nabla F(x)\|^2\leq \delta^2$ and $\| \nabla F(x,Z_t) - \mathbb{E}_{d_Z}[\nabla F(x,Z)]\|^2\leq \sigma^2$ for all $t\geq 0$ . If $Q\sim \operatorname {Geom}(1 / 2)$ , then the following MLMC estimator
+
+$$
+g _ {\mathrm {MLMC}} = g ^ {0} + \left\{ \begin{array}{l l} 2 ^ {Q} (g ^ {Q} - g ^ {Q - 1}), & \text {if } 2 ^ {Q} \leq T _ {\max} \\ 0, & \text {otherwise} \end{array} \right. \quad \text {where } g ^ {j} = 2 ^ {- j} \sum_ {t = 1} ^ {2 ^ {j}} \nabla F (x, Z _ {t}) \tag {44}
+$$
+
+satisfies the inequalities stated below.
+
+(a) $\mathbb{E}[g_{\mathrm{MLMC}}] = \mathbb{E}[g^{\lfloor \log_2 T_{\max}\rfloor}]$
+(b) $\mathbb{E}[\| \nabla F(x) - g_{\mathrm{MLMC}}\| ^2 ]\leq \mathcal{O}\left(\sigma^2 t_{\mathrm{mix}}\log_2T_{\mathrm{max}} + \delta^2\right)$
+(c) $\| \nabla F(x) - \mathbb{E}[g_{\mathrm{MLMC}}] \|^{2} \leq \mathcal{O}\left(\sigma^{2} t_{\mathrm{mix}} T_{\mathrm{max}}^{-1} + \delta^{2}\right)$
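+For concreteness, here is a minimal sketch of the estimator (44); the `grad` oracle and `next_state` chain kernel are hypothetical caller-supplied placeholders, and coupling $g^{j-1}$ to the first half of the same length-$2^Q$ trajectory is one common choice, not necessarily the paper's exact construction.
+
+```python
+import numpy as np
+
+def mlmc_gradient(grad, next_state, x, z, T_max, rng):
+    """One draw of the MLMC estimator in (44).
+
+    grad(x, z) returns a gradient estimate at chain state z, and
+    next_state(z) advances the Markov chain; both are placeholders.
+    """
+    Q = int(rng.geometric(0.5))              # P(Q = j) = 2^{-j}, j = 1, 2, ...
+    z = next_state(z)
+    samples = [np.asarray(grad(x, z), dtype=float)]
+    g0 = samples[0]                          # g^0: a single chain sample
+    if 2 ** Q > T_max:                       # truncation branch of (44)
+        return g0, z
+    for _ in range(2 ** Q - 1):              # extend the trajectory to 2^Q samples
+        z = next_state(z)
+        samples.append(np.asarray(grad(x, z), dtype=float))
+    traj = np.stack(samples)
+    g_Q = traj.mean(axis=0)                  # g^Q: average of all 2^Q samples
+    g_Qm1 = traj[: 2 ** (Q - 1)].mean(axis=0)  # g^{Q-1}: first half
+    return g0 + (2 ** Q) * (g_Q - g_Qm1), z
+
+# With a noiseless oracle every g^j coincides, so the correction term vanishes
+# and the estimator returns the plain gradient regardless of the drawn Q.
+rng = np.random.default_rng(1)
+g, _ = mlmc_gradient(lambda x, z: np.array([2.0 * x, 0.0]),
+                     lambda z: z + 1, 3.0, 0, T_max=64, rng=rng)
+print(g)   # [6. 0.]
+```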
+
+Before proceeding to the proof, we state a useful lemma.
+
+Lemma 6 (Lemma 1, (Beznosikov et al., 2023)). Consider the same setup as in Lemma 5. The following inequality holds.
+
+$$
+\mathbb {E} \left[ \left\| \frac {1}{N} \sum_ {i = 1} ^ {N} \nabla F (x, Z _ {i}) - \mathbb {E} _ {d _ {Z}} [ \nabla F (x, Z) ] \right\| ^ {2} \right] \leq \frac {C _ {1} t _ {\mathrm {mix}}}{N} \sigma^ {2} \tag {45}
+$$
+
+where $N$ is the number of samples, $C_1 = 16\left(1 + \frac{1}{\ln^24}\right)$ , and the expectation is over the distribution of $\{Z_i\}_{i=1}^N$ emanating from an arbitrary initial distribution.
+
+Proof of Lemma 5. The statement $(a)$ can be proven as follows.
+
+$$
+\begin{array}{l} \mathbb {E} \left[ g _ {\mathrm {MLMC}} \right] = \mathbb {E} \left[ g ^ {0} \right] + \sum_ {j = 1} ^ {\lfloor \log_ {2} T _ {\max} \rfloor} \Pr \left\{Q = j \right\} \cdot 2 ^ {j} \mathbb {E} \left[ g ^ {j} - g ^ {j - 1} \right] \\ = \mathbb {E} [ g ^ {0} ] + \sum_ {j = 1} ^ {\lfloor \log_ {2} T _ {\max} \rfloor} \mathbb {E} [ g ^ {j} - g ^ {j - 1} ] = \mathbb {E} [ g ^ {\lfloor \log_ {2} T _ {\max} \rfloor} ] \\ \end{array}
+$$
+
+For the proof of $(b)$ , notice that
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \| \mathbb {E} _ {d _ {Z}} [ \nabla F (x, Z) ] - g _ {\mathrm {MLMC}} \| ^ {2} \right] \\ \leq 2 \mathbb {E} \left[ \left\| \mathbb {E} _ {d _ {Z}} [ \nabla F (x, Z) ] - g ^ {0} \right\| ^ {2} \right] + 2 \mathbb {E} \left[ \left\| g _ {\mathrm {MLMC}} - g ^ {0} \right\| ^ {2} \right] \\ = 2 \mathbb {E} \left[ \left\| \mathbb {E} _ {d _ {Z}} [ \nabla F (x, Z) ] - g ^ {0} \right\| ^ {2} \right] + 2 \sum_ {j = 1} ^ {\lfloor \log_ {2} T _ {\max} \rfloor} \Pr \{Q = j \} \cdot 4 ^ {j} \mathbb {E} \left[ \left\| g ^ {j} - g ^ {j - 1} \right\| ^ {2} \right] \\ = 2 \mathbb {E} \left[ \left\| \mathbb {E} _ {d _ {Z}} [ \nabla F (x, Z) ] - g ^ {0} \right\| ^ {2} \right] + 2 \sum_ {j = 1} ^ {\lfloor \log_ {2} T _ {\max} \rfloor} 2 ^ {j} \mathbb {E} \left[ \left\| g ^ {j} - g ^ {j - 1} \right\| ^ {2} \right] \\ \leq 2 \mathbb {E} \left[ \left\| \mathbb {E} _ {d _ {Z}} [ \nabla F (x, Z) ] - g ^ {0} \right\| ^ {2} \right] + 4 \sum_ {j = 1} ^ {\lfloor \log_ {2} T _ {\max} \rfloor} 2 ^ {j} \left(\mathbb {E} \left[ \left\| \mathbb {E} _ {d _ {Z}} [ \nabla F (x, Z) ] - g ^ {j - 1} \right\| ^ {2} \right] + \mathbb {E} \left[ \left\| g ^ {j} - \mathbb {E} _ {d _ {Z}} [ \nabla F (x, Z) ] \right\| ^ {2} \right]\right) \\ \stackrel {(a)} {\leq} C _ {1} t _ {\mathrm {mix}} \sigma^ {2} \left[ 2 + 4 \sum_ {j = 1} ^ {\lfloor \log_ {2} T _ {\max} \rfloor} 2 ^ {j} \left(\frac {1}{2 ^ {j - 1}} + \frac {1}{2 ^ {j}}\right) \right] = \mathcal {O} \left(\sigma^ {2} t _ {\mathrm {mix}} \log_ {2} T _ {\max}\right) \\ \end{array}
+$$
+
+where $(a)$ follows from Lemma 6. Using this result, we obtain the following.
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \| \nabla F (x) - g _ {\mathrm {MLMC}} \| ^ {2} \right] \leq 2 \mathbb {E} \left[ \| \nabla F (x) - \mathbb {E} _ {d _ {Z}} [ \nabla F (x, Z) ] \| ^ {2} \right] + 2 \mathbb {E} \left[ \| \mathbb {E} _ {d _ {Z}} [ \nabla F (x, Z) ] - g _ {\mathrm {MLMC}} \| ^ {2} \right] \\ \leq \mathcal {O} \left(\sigma^ {2} t _ {\mathrm {mix}} \log_ {2} T _ {\max} + \delta^ {2}\right) \\ \end{array}
+$$
+
+This completes the proof of statement (b). For part (c), we have
+
+$$
+\begin{array}{l} \left\| \nabla F (x) - \mathbb {E} \left[ g _ {\mathrm {MLMC}} \right] \right\| ^ {2} \leq 2 \left\| \nabla F (x) - \mathbb {E} _ {d _ {Z}} [ \nabla F (x, Z) ] \right\| ^ {2} + 2 \left\| \mathbb {E} _ {d _ {Z}} [ \nabla F (x, Z) ] - \mathbb {E} [ g _ {\mathrm {MLMC}} ] \right\| ^ {2} \\ \leq 2 \delta^ {2} + 2 \left\| \mathbb {E} _ {d _ {Z}} [ \nabla F (x, Z) ] - \mathbb {E} \left[ g ^ {\lfloor \log_ {2} T _ {\max} \rfloor} \right] \right\| ^ {2} \overset {(a)} {\leq} 2 \delta^ {2} + \frac {2 C _ {1} t _ {\mathrm {mix}}}{T _ {\max}} \sigma^ {2} \tag {46} \\ \end{array}
+$$
+
+where $(a)$ follows from Lemma 6. This concludes the proof of Lemma 5.
+
+# D. Proof of Lemma 2
+
+Fix an outer loop instant $k$ and an inner loop instant $h\in \{H,\dots ,2H - 1\}$ . Recall the definition of $\hat{A}_{\mathbf{u}}(\theta_k,\cdot)$ from (20). The following inequalities hold for any $\theta_{k}$ and $z_{t}^{kh}\in \mathcal{S}\times \mathcal{A}\times \mathcal{S}$ .
+
+$$
+\mathbb {E} _ {\theta_ {k}} \left[ \hat {A} _ {\mathbf {u}} (\theta_ {k}, z) \right] \stackrel {(a)} {=} F (\theta_ {k}), \text { and } \left\| \hat {A} _ {\mathbf {u}} (\theta_ {k}, z _ {t} ^ {k h}) - \mathbb {E} _ {\theta_ {k}} \left[ \hat {A} _ {\mathbf {u}} (\theta_ {k}, z) \right] \right\| ^ {2} \stackrel {(b)} {\leq} 2 G _ {1} ^ {4}
+$$
+
+where $\mathbb{E}_{\theta_k}$ denotes the expectation over the distribution of $z = (s,a,s')$ where $(s,a)\sim \nu^{\pi_{\theta_k}},s'\sim P(\cdot |s,a)$ . The equality $(a)$ follows from the definition of the Fisher matrix, and $(b)$ is a consequence of Assumption 5. Statements (a) and (b), therefore, directly follow from Lemma 5.
+
+To prove the other statements, recall the definition of $\hat{b}_{\mathbf{u}}(\theta_k,\xi_k,\cdot)$ from (20). Observe the following relations for arbitrary $\theta_{k}, \xi_{k}$ .
+
+$$
+\begin{array}{l} \mathbb {E} _ {\boldsymbol {\theta} _ {k}} \left[ \hat {b} _ {\mathbf {u}} (\boldsymbol {\theta} _ {k}, \boldsymbol {\xi} _ {k}, z) \right] - \nabla_ {\boldsymbol {\theta}} J (\boldsymbol {\theta} _ {k}) = \mathbb {E} _ {\boldsymbol {\theta} _ {k}} \left[ \left\{r (s, a) - \eta_ {k} + \langle \phi (s ^ {\prime}) - \phi (s), \zeta_ {k} \rangle \right\} \nabla_ {\boldsymbol {\theta} _ {k}} \log \pi_ {\boldsymbol {\theta} _ {k}} (a | s) \right] - \nabla_ {\boldsymbol {\theta}} J (\boldsymbol {\theta} _ {k}) \\ \stackrel {(a)} {=} \underbrace {\mathbb {E} _ {\theta_ {k}} \left[ \left\{\eta_ {k} ^ {*} - \eta_ {k} + \langle \phi (s ^ {\prime}) - \phi (s) , \zeta_ {k} - \zeta_ {k} ^ {*} \rangle \right\} \nabla_ {\theta_ {k}} \log \pi_ {\theta_ {k}} (a | s) \right]} _ {T _ {0}} \\ + \underbrace {\mathbb {E} _ {\theta_ {k}} \left[ \left\{V ^ {\pi_ {\theta_ {k}}} (s) - \langle \phi (s) , \zeta_ {k} ^ {*} \rangle + \langle \phi (s ^ {\prime}) , \zeta_ {k} ^ {*} \rangle - V ^ {\pi_ {\theta_ {k}}} (s ^ {\prime}) \right\} \nabla_ {\theta_ {k}} \log \pi_ {\theta_ {k}} (a | s) \right]} _ {T _ {1}} \\ + \underbrace {\mathbb {E} _ {\theta_ {k}} \left[ \left\{V ^ {\pi_ {\theta_ {k}}} (s ^ {\prime}) - \eta_ {k} ^ {*} + r (s , a) - V ^ {\pi_ {\theta_ {k}}} (s) \right\} \nabla_ {\theta_ {k}} \log \pi_ {\theta_ {k}} (a | s) \right] - \nabla_ {\theta} J (\theta_ {k})} _ {T _ {2}} \\ \end{array}
+$$
+
+In (a), we have used the notation that $\xi_k^* = [\eta_k^*,\zeta_k^* ]^\top$ . Observe that
+
+$$
+\left\| T _ {0} \right\| ^ {2} \stackrel {(a)} {=} \mathcal {O} \left(G _ {1} ^ {2} \left\| \xi_ {k} - \xi_ {k} ^ {*} \right\| ^ {2}\right), \left\| T _ {1} \right\| ^ {2} \stackrel {(b)} {=} \mathcal {O} \left(G _ {1} ^ {2} \epsilon_ {\mathrm {app}}\right), \text { and } T _ {2} \stackrel {(c)} {=} 0 \tag {47}
+$$
+
+where $(a)$ follows from Assumption 5 and the boundedness of the feature map $\phi$ , while $(b)$ is a consequence of Assumptions 5 and 2. Finally, $(c)$ is an application of Bellman's equation. We get,
+
+$$
+\left\| \mathbb {E} _ {\theta_ {k}} \left[ \hat {b} _ {\mathbf {u}} \left(\theta_ {k}, \xi_ {k}, z\right) \right] - \nabla_ {\theta} J \left(\theta_ {k}\right) \right\| ^ {2} \leq \delta_ {\mathbf {u}, k} ^ {2} = \mathcal {O} \left(G _ {1} ^ {2} \| \xi_ {k} - \xi_ {k} ^ {*} \| ^ {2} + G _ {1} ^ {2} \epsilon_ {\mathrm {app}}\right) \tag {48}
+$$
+
+Moreover, observe that, for arbitrary $z_{t}^{kh}\in \mathcal{S}\times \mathcal{A}\times \mathcal{S}$
+
+$$
+\left\| \hat {b} _ {\mathbf {u}} \left(\theta_ {k}, \xi_ {k}, z _ {t} ^ {k h}\right) - \mathbb {E} _ {\theta_ {k}} \left[ \hat {b} _ {\mathbf {u}} \left(\theta_ {k}, \xi_ {k}, z\right) \right] \right\| ^ {2} \leq \sigma_ {\mathbf {u}, k} ^ {2} = \mathcal {O} \left(G _ {1} ^ {2} \| \xi_ {k} \| ^ {2}\right) \tag {49}
+$$
+
+where the inequality follows from Assumption 5 and the boundedness of the feature map $\phi$ . We can, therefore, conclude statements (c) and (d) by applying (48) and (49) in Lemma 5. To prove statement (e), note that
+
+$$
+\mathbb {E} _ {\boldsymbol {\theta} _ {k}} \left[ \mathbb {E} _ {k} \left[ \hat {b} _ {\mathbf {u}} (\boldsymbol {\theta} _ {k}, \boldsymbol {\xi} _ {k}, z) \right] \right] - \nabla_ {\boldsymbol {\theta}} J (\boldsymbol {\theta} _ {k}) = \mathbb {E} _ {\boldsymbol {\theta} _ {k}} \left[ \left\{r (s, a) - \mathbb {E} _ {k} [ \eta_ {k} ] + \langle \phi (s ^ {\prime}) - \phi (s), \mathbb {E} _ {k} [ \zeta_ {k} ] \rangle \right\} \nabla_ {\boldsymbol {\theta} _ {k}} \log \pi_ {\boldsymbol {\theta} _ {k}} (a | s) \right] - \nabla_ {\boldsymbol {\theta}} J (\boldsymbol {\theta} _ {k})
+$$
+
+$$
+\stackrel {(a)} {=} \underbrace {\mathbb {E} _ {\theta_ {k}} \left[ \left\{\eta_ {k} ^ {*} - \mathbb {E} _ {k} [ \eta_ {k} ] + \langle \phi (s ^ {\prime}) - \phi (s) , \mathbb {E} _ {k} [ \zeta_ {k} ] - \zeta_ {k} ^ {*} \rangle \right\} \nabla_ {\theta_ {k}} \log \pi_ {\theta_ {k}} (a | s) \right]} _ {T _ {0}}
+$$
+
+$$
++ \underbrace {\mathbb {E} _ {\theta_ {k}} \left[ \left\{V ^ {\pi_ {\theta_ {k}}} (s) - \langle \phi (s) , \zeta_ {k} ^ {*} \rangle + \langle \phi (s ^ {\prime}) , \zeta_ {k} ^ {*} \rangle - V ^ {\pi_ {\theta_ {k}}} (s ^ {\prime}) \right\} \nabla_ {\theta_ {k}} \log \pi_ {\theta_ {k}} (a | s) \right]} _ {T _ {1}}
+$$
+
+$$
++ \underbrace {\mathbb {E} _ {\theta_ {k}} \left[ \left\{V ^ {\pi_ {\theta_ {k}}} (s ^ {\prime}) - \eta_ {k} ^ {*} + r (s , a) - V ^ {\pi_ {\theta_ {k}}} (s) \right\} \nabla_ {\theta_ {k}} \log \pi_ {\theta_ {k}} (a | s) \right] - \nabla_ {\theta} J (\theta_ {k})} _ {T _ {2}}
+$$
+
+Observe the following bounds.
+
+$$
+\left\| T _ {0} \right\| ^ {2} \stackrel {(a)} {=} \mathcal {O} \left(G _ {1} ^ {2} \left\| \mathbb {E} _ {k} [ \xi_ {k} ] - \xi_ {k} ^ {*} \right\| ^ {2}\right), \left\| T _ {1} \right\| ^ {2} \stackrel {(b)} {=} \mathcal {O} \left(G _ {1} ^ {2} \epsilon_ {\mathrm {app}}\right), \text { and } T _ {2} \stackrel {(c)} {=} 0 \tag {50}
+$$
+
+where $(a)$ follows from Assumption 5 and the boundedness of the feature map $\phi$ , while $(b)$ is a consequence of Assumptions 5 and 2. Finally, $(c)$ is an application of Bellman's equation. We get,
+
+$$
+\left\| \mathbb {E} _ {\theta_ {k}} \left[ \mathbb {E} _ {k} \left[ \hat {b} _ {\mathbf {u}} \left(\theta_ {k}, \xi_ {k}, z\right) \right] \right] - \nabla_ {\theta} J (\theta_ {k}) \right\| ^ {2} \leq \bar {\delta} _ {\mathbf {u}, k} ^ {2} = \mathcal {O} \left(G _ {1} ^ {2} \| \mathbb {E} _ {k} [ \xi_ {k} ] - \xi_ {k} ^ {*} \| ^ {2} + G _ {1} ^ {2} \epsilon_ {\mathrm {app}}\right) \tag {51}
+$$
+
+Using the above bound, we deduce the following.
+
+$$
+\begin{array}{l} \left\| \mathbb {E} _ {k} \left[ \hat {b} _ {\mathbf {u}, k, h} ^ {\mathrm {MLMC}} \left(\theta_ {k}, \xi_ {k}\right) \right] - \nabla_ {\theta} J (\theta_ {k}) \right\| ^ {2} \\ \leq 2 \left\| \mathbb {E} _ {k} \left[ \hat {b} _ {\mathbf {u}, k, h} ^ {\mathrm {MLMC}} \left(\theta_ {k}, \xi_ {k}\right) \right] - \mathbb {E} _ {\theta_ {k}} \left[ \mathbb {E} _ {k} \left[ \hat {b} _ {\mathbf {u}} \left(\theta_ {k}, \xi_ {k}, z\right) \right] \right] \right\| ^ {2} + 2 \left\| \mathbb {E} _ {\theta_ {k}} \left[ \mathbb {E} _ {k} \left[ \hat {b} _ {\mathbf {u}} \left(\theta_ {k}, \xi_ {k}, z\right) \right] \right] - \nabla_ {\theta} J (\theta_ {k}) \right\| ^ {2} \\ \stackrel {(a)} {\leq} 2 \mathbb {E} _ {k} \left\| \mathbb {E} _ {k, h} \left[ \hat {b} _ {\mathbf {u}, k, h} ^ {\mathrm {MLMC}} (\theta_ {k}, \xi_ {k}) \right] - \mathbb {E} _ {\theta_ {k}} \left[ \hat {b} _ {\mathbf {u}} (\theta_ {k}, \xi_ {k}, z) \right] \right\| ^ {2} + \mathcal {O} \left(\bar {\delta} _ {\mathbf {u}, k} ^ {2}\right) \stackrel {(b)} {\leq} \mathcal {O} \left(t _ {\mathrm {mix}} T _ {\max} ^ {- 1} \bar {\sigma} _ {\mathbf {u}, k} ^ {2} + \bar {\delta} _ {\mathbf {u}, k} ^ {2}\right) \\ \end{array}
+$$
+
+where $(a)$ follows from Jensen's inequality and (51). Moreover, $(b)$ follows from Lemmas 5(a) and 6, and the definition of $\bar{\sigma}_{\mathbf{u},k}^2$ . This concludes the proof of Lemma 2.
+
+# E. Proof of Lemma 3
+
+Recall the definitions of $\hat{A}_{\mathbf{v}}(\cdot)$ and $\hat{b}_{\mathbf{v}}(\cdot)$ given in (23) and (24) respectively. Note that the following equalities hold for any $\theta_{k}$ .
+
+$$
+\mathbb {E} _ {\theta_ {k}} \left[ \hat {A} _ {\mathbf {v}} (z) \right] = A _ {\mathbf {v}} \left(\theta_ {k}\right), \text { and } \mathbb {E} _ {\theta_ {k}} \left[ \hat {b} _ {\mathbf {v}} (z) \right] = b _ {\mathbf {v}} \left(\theta_ {k}\right) \tag {52}
+$$
+
+where $\mathbb{E}_{\theta_k}$ denotes the expectation over the distribution of $z = (s,a,s')$ where $(s,a)\sim \nu^{\pi_{\theta_k}}$ , $s' \sim P(\cdot | s, a)$ . Also, for any $z = (s,a,s') \in S \times \mathcal{A} \times S$ , we have the following.
+
+$$
+\left\| \hat {A} _ {\mathbf {v}} (z) \right\| \leq | c _ {\beta} | + \| \phi (s) \| + \left\| \phi (s) \left(\phi (s) - \phi \left(s ^ {\prime}\right)\right) ^ {\top} \right\| \overset {(a)} {\leq} c _ {\beta} + 3 = \mathcal {O} \left(c _ {\beta}\right), \tag {53}
+$$
+
+$$
+\left\| \hat {b} _ {\mathbf {v}} (z) \right\| \leq | c _ {\beta} r (s, a) | + \| r (s, a) \phi (s) \| \stackrel {(b)} {\leq} c _ {\beta} + 1 = \mathcal {O} \left(c _ {\beta}\right) \tag {54}
+$$
+
+where $(a)$ , $(b)$ hold since $|r(s,a)|\leq 1$ and $\| \phi (s)\| \leq 1,\forall (s,a)\in S\times \mathcal{A}$ . Hence, for any $z_{t}^{kh}\in S\times \mathcal{A}\times S$ , we have
+
+$$
+\left\| \hat {A} _ {\mathbf {v}} \left(z _ {t} ^ {k h}\right) - \mathbb {E} _ {\theta_ {k}} \left[ \hat {A} _ {\mathbf {v}} (z) \right] \right\| ^ {2} \leq \mathcal {O} \left(c _ {\beta} ^ {2}\right), \text { and } \left\| \hat {b} _ {\mathbf {v}} \left(z _ {t} ^ {k h}\right) - \mathbb {E} _ {\theta_ {k}} \left[ \hat {b} _ {\mathbf {v}} (z) \right] \right\| ^ {2} \leq \mathcal {O} \left(c _ {\beta} ^ {2}\right)
+$$
+
+Combining the above results with Lemma 5 establishes the result.
+
+# F. Proof of Theorem 3
+
+We first state an important result regarding ergodic MDPs.
+
+Lemma 7 (Lemma 14, (Wei et al., 2020)). For any ergodic MDP with mixing time $t_{\mathrm{mix}}$ , the following holds for any policy $\pi$ .
+
+$$
+\left| A ^ {\pi} (s, a) \right| = \mathcal {O} \left(t _ {\mathrm {mix}}\right), \forall (s, a)
+$$
+
+It follows from Assumptions 5 and 6, and Lemma 7 that
+
+$$
+\mu I \preccurlyeq F (\theta), \| F (\theta) \| \leq G _ {1} ^ {2}, \text { and } \| \nabla_ {\theta} J (\theta) \| \leq \mathcal {O} \left(G _ {1} t _ {\mathrm {mix}}\right) \tag {55}
+$$
+
+where $\theta$ is any arbitrary policy parameter. Combining the above results with Lemma 2 and invoking Theorem 2, we arrive at the following.
+
+$$
+\begin{array}{l} \mathbb {E} _ {k} \left[ \left\| \omega_ {k} - \omega_ {k} ^ {*} \right\| ^ {2} \right] \leq \frac {1}{H ^ {2}} \left\| \omega_ {H} ^ {k} - \omega_ {k} ^ {*} \right\| ^ {2} + \tilde {\mathcal {O}} \left(\frac {R _ {0}}{H} + R _ {1}\right), \\ \left\| \mathbb {E} _ {k} \left[ \omega_ {k} \right] - \omega_ {k} ^ {*} \right\| ^ {2} \leq \frac {1}{H ^ {2}} \left\| \omega_ {H} ^ {k} - \omega_ {k} ^ {*} \right\| ^ {2} + \mathcal {O} (\bar {R} _ {1}) + \mathcal {O} \left(\frac {G _ {1} ^ {4} t _ {\mathrm {mix}}}{\mu^ {2} H ^ {2}} \left\{\left\| \omega_ {H} ^ {k} - \omega_ {k} ^ {*} \right\| ^ {2} + \tilde {\mathcal {O}} (R _ {0} + R _ {1}) \right\}\right) \\ \end{array}
+$$
+
+where the terms $R_0, R_1, \bar{R}_1$ are defined as follows.
+
+$$
+R _ {0} = \tilde {\mathcal {O}} \left(\mu^ {- 4} G _ {1} ^ {6} t _ {\mathrm {mix}} ^ {3} + \mu^ {- 2} G _ {1} ^ {2} t _ {\mathrm {mix}} \mathbb {E} _ {k} \left[ \| \xi_ {k} \| ^ {2} \right] + \mu^ {- 2} G _ {1} ^ {2} \mathbb {E} _ {k} \left[ \| \xi_ {k} - \xi_ {k} ^ {*} \| ^ {2} \right] + \mu^ {- 2} G _ {1} ^ {2} \epsilon_ {\mathrm {app}}\right),
+$$
+
+$$
+R _ {1} = \mathcal {O} \left(H ^ {- 2} \mu^ {- 4} G _ {1} ^ {6} t _ {\mathrm {mix}} ^ {3} + H ^ {- 2} \mu^ {- 2} G _ {1} ^ {2} t _ {\mathrm {mix}} \mathbb {E} _ {k} \left[ \| \xi_ {k} \| ^ {2} \right] + \mu^ {- 2} G _ {1} ^ {2} \mathbb {E} _ {k} \left[ \| \xi_ {k} - \xi_ {k} ^ {*} \| ^ {2} \right] + \mu^ {- 2} G _ {1} ^ {2} \epsilon_ {\mathrm {app}}\right)
+$$
+
+$$
+\bar {R} _ {1} = \mathcal {O} \left(H ^ {- 2} \mu^ {- 4} G _ {1} ^ {6} t _ {\mathrm {mix}} ^ {3} + H ^ {- 2} \mu^ {- 2} G _ {1} ^ {2} t _ {\mathrm {mix}} \mathbb {E} _ {k} \left[ \| \xi_ {k} \| ^ {2} \right] + \mu^ {- 2} G _ {1} ^ {2} \| \mathbb {E} _ {k} [ \xi_ {k} ] - \xi_ {k} ^ {*} \| ^ {2} + \mu^ {- 2} G _ {1} ^ {2} \epsilon_ {\mathrm {app}}\right)
+$$
+
+Moreover, note that
+
+$$
+\mathbb {E} _ {k} \left[ \| \xi_ {k} \| ^ {2} \right] \leq 2 \mathbb {E} _ {k} \left[ \| \xi_ {k} - \xi_ {k} ^ {*} \| ^ {2} \right] + 2 \mathbb {E} _ {k} \left[ \| \xi_ {k} ^ {*} \| ^ {2} \right] \stackrel {(a)} {\leq} \mathcal {O} \left(\mathbb {E} _ {k} \left[ \| \xi_ {k} - \xi_ {k} ^ {*} \| ^ {2} \right] + \lambda^ {- 2} c _ {\beta} ^ {2}\right)
+$$
+
+where $(a)$ follows from (57) for sufficiently large $c_{\beta}$ and the definition that $\xi_k^* = [A_{\mathbf{v}}(\theta_k)]^{-1}b_{\mathbf{v}}(\theta_k)$ . Hence,
+
+$$
+\begin{array}{l} \mathbb {E} _ {k} \left[ \left\| \omega_ {k} - \omega_ {k} ^ {*} \right\| ^ {2} \right] \leq \frac {1}{H ^ {2}} \left\| \omega_ {H} ^ {k} - \omega_ {k} ^ {*} \right\| ^ {2} + \tilde {\mathcal {O}} \left(\frac {1}{H} \left\{\mu^ {- 4} G _ {1} ^ {6} t _ {\mathrm {mix}} ^ {3} + \mu^ {- 2} \lambda^ {- 2} G _ {1} ^ {2} c _ {\beta} ^ {2} t _ {\mathrm {mix}} \right\}\right) \\ + \tilde {\mathcal {O}} \left(\mu^ {- 2} G _ {1} ^ {2} \mathbb {E} _ {k} \left[ \| \xi_ {k} - \xi_ {k} ^ {*} \| ^ {2} \right] + \mu^ {- 2} G _ {1} ^ {2} \epsilon_ {\mathrm {app}}\right), \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \left\| \mathbb {E} _ {k} [ \omega_ {k} ] - \omega_ {k} ^ {*} \right\| ^ {2} \leq \mathcal {O} \left(\mu^ {- 2} G _ {1} ^ {2} \left\| \mathbb {E} _ {k} [ \xi_ {k} ] - \xi_ {k} ^ {*} \right\| ^ {2} + \mu^ {- 2} G _ {1} ^ {2} \epsilon_ {\mathrm {app}}\right) \\ + \tilde {\mathcal {O}} \left(\frac {G _ {1} ^ {4} t _ {\mathrm {mix}}}{\mu^ {2} H ^ {2}} \left\{\left\| \omega_ {H} ^ {k} - \omega_ {k} ^ {*} \right\| ^ {2} + \mu^ {- 2} G _ {1} ^ {2} t _ {\mathrm {mix}} \mathbb {E} _ {k} \left[ \left\| \xi_ {k} - \xi_ {k} ^ {*} \right\| ^ {2} \right] + \mu^ {- 4} G _ {1} ^ {6} t _ {\mathrm {mix}} ^ {3} + \mu^ {- 2} \lambda^ {- 2} G _ {1} ^ {2} c _ {\beta} ^ {2} t _ {\mathrm {mix}} \right\}\right) \\ \end{array}
+$$
+
+This concludes the proof.
+
+# G. Proof of Theorem 4
+
+We start with an important result on $A_{\mathbf{v}}(\theta)$ .
+
+Lemma 8. For a large enough $c_{\beta}$ , Assumption 3 implies that $A_{\mathbf{v}}(\theta) \succeq (\lambda /2)I$ where $I$ is an identity matrix of appropriate dimension and $\theta$ is an arbitrary policy parameter.
+
+Proof of Lemma 8. Recall that $A_{\mathbf{v}}(\theta) = \mathbb{E}_{\theta}[\hat{A}_{\mathbf{v}}(z)]$ where $\mathbb{E}_{\theta}$ denotes expectation over the distribution of $z = (s,a,s^{\prime})$ where $(s,a)\sim \nu^{\pi_{\theta}},s^{\prime}\sim P(\cdot |s,a)$ . Hence, for any $\xi = [\eta ,\zeta ]$ , we have
+
+$$
+\begin{array}{l} \xi^ {\top} A _ {\mathbf {v}} (\theta) \xi = c _ {\beta} \eta^ {2} + \eta \zeta^ {\top} \mathbb {E} _ {\theta} [ \phi (s) ] + \zeta^ {\top} \mathbb {E} _ {\theta} [ \phi (s) [ \phi (s) - \phi (s ^ {\prime}) ] ^ {\top} ] \zeta \\ \stackrel {(a)} {\geq} c _ {\beta} \eta^ {2} - | \eta | \| \zeta \| + \lambda \| \zeta \| ^ {2} \tag {56} \\ \geq \| \xi \| ^ {2} \left\{\min _ {u \in [ 0, 1 ]} c _ {\beta} u - \sqrt {u (1 - u)} + \lambda (1 - u) \right\} \overset {(b)} {\geq} (\lambda / 2) \| \xi \| ^ {2} \\ \end{array}
+$$
+
+where $(a)$ is a consequence of Assumption 3 and the fact that $\| \phi (s)\| \leq 1, \forall s\in S$ . Finally, $(b)$ is satisfied when $c_{\beta}\geq \lambda +\sqrt{\frac{1}{\lambda^2} - 1}$ . This concludes the proof of Lemma 8.
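+Step (b) can also be checked numerically. The snippet below (a standalone illustrative check, not part of the proof) evaluates the minimand on a grid for a few values of $\lambda$ with $c_\beta = \lambda + \sqrt{1/\lambda^2 - 1}$ and confirms the $\lambda/2$ lower bound.
+
+```python
+import math
+
+# Grid check of step (b): with c_beta = lambda + sqrt(1/lambda^2 - 1),
+#   min_{u in [0,1]} c_beta*u - sqrt(u*(1-u)) + lambda*(1-u) >= lambda / 2.
+for lam in [0.2, 0.5, 0.9]:
+    c_beta = lam + math.sqrt(1.0 / lam**2 - 1.0)
+    N = 100_000
+    m = min(c_beta * (i / N) - math.sqrt((i / N) * (1 - i / N)) + lam * (1 - i / N)
+            for i in range(N + 1))
+    assert m >= lam / 2, (lam, m)
+print("lambda/2 lower bound verified on the grid")
+```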
+
+Combining Lemma 8 with (52), (53), and (54), we can, therefore, conclude that the following inequalities hold for arbitrary $\theta_{k}$ whenever $c_{\beta} \geq \lambda + \sqrt{\frac{1}{\lambda^{2}} - 1}$ .
+
+$$
+\frac {\lambda}{2} \leq \| A _ {\mathbf {v}} (\theta_ {k}) \| \leq \mathcal {O} (c _ {\beta}), \text { and } \| b _ {\mathbf {v}} (\theta_ {k}) \| \leq \mathcal {O} (c _ {\beta}) \tag {57}
+$$
+
+Utilizing the above result with Lemma 3 and invoking Theorem 2, we arrive at the following.
+
+$$
+\begin{array}{l} \mathbb {E} _ {k} \left[ \left\| \xi_ {k} - \xi_ {k} ^ {*} \right\| ^ {2} \right] \leq \frac {1}{H ^ {2}} \left\| \xi_ {0} ^ {k} - \xi_ {k} ^ {*} \right\| ^ {2} + \tilde {\mathcal {O}} \left(\frac {R _ {0}}{H} + R _ {1}\right), \\ \mathbb {E} _ {k} \left[ \| \mathbb {E} _ {k} [ \xi_ {k} ] - \xi_ {k} ^ {*} \| ^ {2} \right] \leq \frac {1}{H ^ {2}} \left\| \xi_ {0} ^ {k} - \xi_ {k} ^ {*} \right\| ^ {2} + \mathcal {O} (\bar {R} _ {1}) + \mathcal {O} \left(\frac {c _ {\beta} ^ {2} t _ {\mathrm {mix}}}{\lambda^ {2} H ^ {2}} \left\{\left\| \xi_ {0} ^ {k} - \xi_ {k} ^ {*} \right\| ^ {2} + \mathcal {O} (R _ {0} + R _ {1}) \right\}\right) \\ \end{array}
+$$
+
+where the terms $R_0,R_1,\bar{R}_1$ are defined as follows.
+
+$$
+R _ {0} = \tilde {\mathcal {O}} \left(\lambda^ {- 4} c _ {\beta} ^ {4} t _ {\mathrm {mix}} + \lambda^ {- 2} c _ {\beta} ^ {2} t _ {\mathrm {mix}}\right) = \tilde {\mathcal {O}} \left(\lambda^ {- 4} c _ {\beta} ^ {4} t _ {\mathrm {mix}}\right),
+$$
+
+$$
+R _ {1} = \mathcal {O} \left(H ^ {- 2} \lambda^ {- 4} c _ {\beta} ^ {4} t _ {\mathrm {mix}} + H ^ {- 2} \lambda^ {- 2} c _ {\beta} ^ {2} t _ {\mathrm {mix}}\right) = \mathcal {O} \left(H ^ {- 2} \lambda^ {- 4} c _ {\beta} ^ {4} t _ {\mathrm {mix}}\right)
+$$
+
+$$
+\bar {R} _ {1} = \mathcal {O} \left(H ^ {- 2} \lambda^ {- 4} c _ {\beta} ^ {4} t _ {\mathrm {mix}} + H ^ {- 2} \lambda^ {- 2} c _ {\beta} ^ {2} t _ {\mathrm {mix}}\right) = \mathcal {O} \left(H ^ {- 2} \lambda^ {- 4} c _ {\beta} ^ {4} t _ {\mathrm {mix}}\right)
+$$
+
+Hence, we have the following results.
+
+$$
+\mathbb {E} _ {k} \left[ \| \xi_ {k} - \xi_ {k} ^ {*} \| ^ {2} \right] \leq \frac {1}{H ^ {2}} \left\| \xi_ {0} ^ {k} - \xi_ {k} ^ {*} \right\| ^ {2} + \tilde {\mathcal {O}} \left(\frac {c _ {\beta} ^ {4} t _ {\mathrm {mix}}}{\lambda^ {4} H}\right),
+$$
+
+$$
+\left\| \mathbb {E} _ {k} \left[ \xi_ {k} \right] - \xi_ {k} ^ {*} \right\| ^ {2} \leq \mathcal {O} \left(\frac {c _ {\beta} ^ {2} t _ {\mathrm {mix}}}{\lambda^ {2} H ^ {2}} \left\| \xi_ {0} ^ {k} - \xi_ {k} ^ {*} \right\| ^ {2}\right) + \mathcal {O} \left(\frac {c _ {\beta} ^ {6} t _ {\mathrm {mix}} ^ {2}}{\lambda^ {6} H ^ {2}}\right)
+$$
+
+This concludes the proof of Theorem 4.
+
+# H. Proof of Theorem 1
+
+Recall that the optimality gap of any update of the form $\theta_{k + 1} = \theta_k + \alpha \omega_k$ can be bounded as
+
+$$
+\begin{array}{l} J ^ {*} - \frac {1}{K} \sum_ {k = 0} ^ {K - 1} \mathbb {E} [ J (\theta_ {k}) ] \leq \sqrt {\epsilon_ {\mathrm {bias}}} + \frac {G _ {1}}{K} \sum_ {k = 0} ^ {K - 1} \mathbb {E} \| \mathbb {E} _ {k} [ \omega_ {k} ] - \omega_ {k} ^ {*} \| + \frac {\alpha G _ {2}}{K} \sum_ {k = 0} ^ {K - 1} \mathbb {E} \| \omega_ {k} - \omega_ {k} ^ {*} \| ^ {2} \tag {58} \\ + \frac {\alpha \mu^ {- 2}}{K} \sum_ {k = 0} ^ {K - 1} \mathbb {E} \| \nabla_ {\theta} J (\theta_ {k}) \| ^ {2} + \frac {1}{\alpha K} \mathbb {E} _ {s \sim d ^ {\pi^ {*}}} [ \mathrm {K L} (\pi^ {*} (\cdot | s) \| \pi_ {\theta_ {0}} (\cdot | s)) ]. \\ \end{array}
+$$
+
+We shall now derive a bound for $\frac{1}{K}\sum_{k = 0}^{K - 1}\| \nabla_{\theta}J(\theta_k)\|^2$ . Given that the function $J$ is $L$ -smooth, we obtain:
+
+$$
+J (\theta_ {k + 1}) \geq J (\theta_ {k}) + \langle \nabla_ {\theta} J (\theta_ {k}), \theta_ {k + 1} - \theta_ {k} \rangle - \frac {L}{2} \| \theta_ {k + 1} - \theta_ {k} \| ^ {2}
+$$
+
+$$
+\begin{array}{l} = J (\theta_ {k}) + \alpha \left\langle \nabla_ {\theta} J (\theta_ {k}), \omega_ {k} \right\rangle - \frac {\alpha^ {2} L}{2} \| \omega_ {k} \| ^ {2} \\ = J (\theta_ {k}) + \alpha \left\langle \nabla_ {\theta} J (\theta_ {k}), \omega_ {k} ^ {*} \right\rangle + \alpha \left\langle \nabla_ {\theta} J (\theta_ {k}), \omega_ {k} - \omega_ {k} ^ {*} \right\rangle - \frac {\alpha^ {2} L}{2} \| \omega_ {k} - \omega_ {k} ^ {*} + \omega_ {k} ^ {*} \| ^ {2} \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \stackrel {(a)} {\geq} J \left(\theta_ {k}\right) + \alpha \left\langle \nabla_ {\theta} J \left(\theta_ {k}\right), F \left(\theta_ {k}\right) ^ {- 1} \nabla_ {\theta} J \left(\theta_ {k}\right) \right\rangle + \alpha \left\langle \nabla_ {\theta} J \left(\theta_ {k}\right), \omega_ {k} - \omega_ {k} ^ {*} \right\rangle \\ - \alpha^ {2} L \| \omega_ {k} - \omega_ {k} ^ {*} \| ^ {2} - \alpha^ {2} L \| \omega_ {k} ^ {*} \| ^ {2} \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \stackrel {(b)} {\geq} J (\theta_ {k}) + \frac {\alpha}{G _ {1} ^ {2}} \| \nabla_ {\theta} J (\theta_ {k}) \| ^ {2} + \alpha \langle \nabla_ {\theta} J (\theta_ {k}), \omega_ {k} - \omega_ {k} ^ {*} \rangle - \alpha^ {2} L \| \omega_ {k} - \omega_ {k} ^ {*} \| ^ {2} - \alpha^ {2} L \| \omega_ {k} ^ {*} \| ^ {2} \\ = J \left(\theta_ {k}\right) + \frac {\alpha}{2 G _ {1} ^ {2}} \| \nabla_ {\theta} J \left(\theta_ {k}\right) \| ^ {2} + \frac {\alpha}{2 G _ {1} ^ {2}} \left[ \| \nabla_ {\theta} J \left(\theta_ {k}\right) \| ^ {2} + 2 G _ {1} ^ {2} \left\langle \nabla_ {\theta} J \left(\theta_ {k}\right), \omega_ {k} - \omega_ {k} ^ {*} \right\rangle + G _ {1} ^ {4} \| \omega_ {k} - \omega_ {k} ^ {*} \| ^ {2} \right] \tag {59} \\ - \left(\frac {\alpha G _ {1} ^ {2}}{2} + \alpha^ {2} L\right) \| \omega_ {k} - \omega_ {k} ^ {*} \| ^ {2} - \alpha^ {2} L \| \omega_ {k} ^ {*} \| ^ {2} \\ \end{array}
+$$
+
+$$
+\begin{array}{l} = J (\theta_ {k}) + \frac {\alpha}{2 G _ {1} ^ {2}} \| \nabla_ {\theta} J (\theta_ {k}) \| ^ {2} + \frac {\alpha}{2 G _ {1} ^ {2}} \| \nabla_ {\theta} J (\theta_ {k}) + G _ {1} ^ {2} (\omega_ {k} - \omega_ {k} ^ {*}) \| ^ {2} - \left(\frac {\alpha G _ {1} ^ {2}}{2} + \alpha^ {2} L\right) \| \omega_ {k} - \omega_ {k} ^ {*} \| ^ {2} \\ - \alpha^ {2} L \| \omega_ {k} ^ {*} \| ^ {2} \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \geq J \left(\theta_ {k}\right) + \frac {\alpha}{2 G _ {1} ^ {2}} \left\| \nabla_ {\theta} J \left(\theta_ {k}\right) \right\| ^ {2} - \left(\frac {\alpha G _ {1} ^ {2}}{2} + \alpha^ {2} L\right) \left\| \omega_ {k} - \omega_ {k} ^ {*} \right\| ^ {2} - \alpha^ {2} L \left\| F \left(\theta_ {k}\right) ^ {- 1} \nabla_ {\theta} J \left(\theta_ {k}\right) \right\| ^ {2} \\ \stackrel {(c)} {\geq} J (\theta_ {k}) + \left(\frac {\alpha}{2 G _ {1} ^ {2}} - \frac {\alpha^ {2} L}{\mu^ {2}}\right) \| \nabla_ {\theta} J (\theta_ {k}) \| ^ {2} - \left(\frac {\alpha G _ {1} ^ {2}}{2} + \alpha^ {2} L\right) \| \omega_ {k} - \omega_ {k} ^ {*} \| ^ {2} \\ \end{array}
+$$
+
+where $(a)$ uses the Cauchy-Schwarz inequality and the definition $\omega_{k}^{*} = F(\theta_{k})^{-1}\nabla_{\theta}J(\theta_{k})$. Relations $(b)$ and $(c)$ follow from Assumptions 5(a) and 6, respectively. Summing the above inequality over $k\in \{0,\dots ,K - 1\}$, rearranging the terms, and substituting $\alpha = \frac{\mu^2}{4G_1^2L}$, we obtain
+
+$$
+\begin{array}{l} \frac {\mu^ {2}}{1 6 G _ {1} ^ {4} L} \left(\frac {1}{K} \sum_ {k = 0} ^ {K - 1} \| \nabla_ {\theta} J \left(\theta_ {k}\right) \| ^ {2}\right) \leq \frac {J \left(\theta_ {K}\right) - J \left(\theta_ {0}\right)}{K} + \left(\frac {\mu^ {2}}{8 L} + \frac {\mu^ {4}}{1 6 G _ {1} ^ {4} L}\right) \left(\frac {1}{K} \sum_ {k = 0} ^ {K - 1} \| \omega_ {k} - \omega_ {k} ^ {*} \| ^ {2}\right) \tag {60} \\ \stackrel {(a)} {\leq} \frac {2}{K} + \left(\frac {\mu^ {2}}{8 L} + \frac {\mu^ {4}}{1 6 G _ {1} ^ {4} L}\right) \left(\frac {1}{K} \sum_ {k = 0} ^ {K - 1} \| \omega_ {k} - \omega_ {k} ^ {*} \| ^ {2}\right) \\ \end{array}
+$$
+
+where $(a)$ uses the fact that $|J(\cdot)| \leq 1$, so that $J(\theta_K) - J(\theta_0) \leq 2$. Inequality (60) can be simplified as follows.
+
+$$
+\frac {\mu^ {- 2}}{K} \left(\sum_ {k = 0} ^ {K - 1} \| \nabla_ {\theta} J \left(\theta_ {k}\right) \| ^ {2}\right) \leq \frac {3 2 L G _ {1} ^ {4}}{\mu^ {4} K} + \left(\frac {2 G _ {1} ^ {4}}{\mu^ {2}} + 1\right) \left(\frac {1}{K} \sum_ {k = 0} ^ {K - 1} \| \omega_ {k} - \omega_ {k} ^ {*} \| ^ {2}\right) \tag {61}
+$$
+
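+As a quick check on the constants in (60): with $\alpha = \frac{\mu^{2}}{4 G_{1}^{2} L}$, the coefficient of the gradient term evaluates to
+
+$$
+\frac{\alpha}{2 G_{1}^{2}} - \frac{\alpha^{2} L}{\mu^{2}} = \frac{\mu^{2}}{8 G_{1}^{4} L} - \frac{\mu^{2}}{16 G_{1}^{4} L} = \frac{\mu^{2}}{16 G_{1}^{4} L},
+$$
+
+which is precisely the prefactor on the left-hand side of (60).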
+Now all that is left is to bound $\mathbb{E}\left[\| \omega_k - \omega_k^*\| ^2\right]$ and $\| \mathbb{E}_k[\omega_k] - \omega_k^*\|$ . Assume $\xi_0^k = 0, \forall k$ . From Theorem 4, we have
+
+$$
+\mathbb {E} _ {k} \left[ \| \xi_ {k} - \xi_ {k} ^ {*} \| ^ {2} \right] \leq \frac {1}{H ^ {2}} \| \xi_ {k} ^ {*} \| ^ {2} + \tilde {\mathcal {O}} \left(\frac {c _ {\beta} ^ {4} t _ {\operatorname* {m i x}}}{\lambda^ {4} H}\right) \stackrel {(a)} {=} \tilde {\mathcal {O}} \left(\frac {c _ {\beta} ^ {4} t _ {\operatorname* {m i x}}}{\lambda^ {4} H}\right), \tag {62}
+$$
+
+$$
+\left\| \mathbb {E} _ {k} \left[ \xi_ {k} \right] - \xi_ {k} ^ {*} \right\| ^ {2} \leq \mathcal {O} \left(\frac {c _ {\beta} ^ {2} t _ {\operatorname* {m i x}}}{\lambda^ {2} H ^ {2}} \| \xi_ {k} ^ {*} \| ^ {2}\right) + \mathcal {O} \left(\frac {c _ {\beta} ^ {6} t _ {\operatorname* {m i x}} ^ {2}}{\lambda^ {6} H ^ {2}}\right) \stackrel {(b)} {=} \mathcal {O} \left(\frac {c _ {\beta} ^ {6} t _ {\operatorname* {m i x}} ^ {2}}{\lambda^ {6} H ^ {2}}\right) \tag {63}
+$$
+
+Relations $(a)$ and $(b)$ follow from the fact that $\| \xi_k^*\|^2 = \left\| [A_{\mathbf{v}}(\theta_k)]^{-1}b_{\mathbf{v}}(\theta_k)\right\|^2 \leq \mathcal{O}\left(\lambda^{-2}c_\beta^2\right)$, where the last inequality is a consequence of (57). Assume $\omega_H^k = 0, \forall k$. We then have the following from Theorem 3.
+
+$$
+\begin{array}{l} \mathbb {E} _ {k} \left[ \| \omega_ {k} - \omega_ {k} ^ {*} \| ^ {2} \right] \leq \frac {1}{H ^ {2}} \| \omega_ {k} ^ {*} \| ^ {2} + \tilde {\mathcal {O}} \left(\frac {1}{H} \left\{\mu^ {- 4} G _ {1} ^ {6} t _ {\mathrm {m i x}} ^ {3} + \mu^ {- 2} \lambda^ {- 2} G _ {1} ^ {2} c _ {\beta} ^ {2} t _ {\mathrm {m i x}} \right\}\right) + \mu^ {- 2} G _ {1} ^ {2} \tilde {\mathcal {O}} \left(\mathbb {E} _ {k} \left[ \| \xi_ {k} - \xi_ {k} ^ {*} \| ^ {2} \right] + \epsilon_ {\mathrm {a p p}}\right) \\ \stackrel {(a)} {\leq} \mathcal {O} \left(\frac {G _ {1} ^ {2} t _ {\mathrm {m i x}} ^ {2}}{\mu^ {2} H ^ {2}}\right) + \tilde {\mathcal {O}} \left(\frac {1}{H} \left\{\mu^ {- 4} G _ {1} ^ {6} t _ {\mathrm {m i x}} ^ {3} + \mu^ {- 2} \lambda^ {- 2} G _ {1} ^ {2} c _ {\beta} ^ {2} t _ {\mathrm {m i x}} \right\}\right) + \frac {G _ {1} ^ {2}}{\mu^ {2}} \tilde {\mathcal {O}} \left(\frac {c _ {\beta} ^ {4} t _ {\mathrm {m i x}}}{\lambda^ {4} H} + \epsilon_ {\mathrm {a p p}}\right) \\ \stackrel {(b)} {\leq} \tilde {\mathcal {O}} \left(\frac {1}{H} \left\{\mu^ {- 4} G _ {1} ^ {6} t _ {\mathrm {m i x}} ^ {3} + \mu^ {- 2} \lambda^ {- 4} G _ {1} ^ {2} c _ {\beta} ^ {4} t _ {\mathrm {m i x}} \right\}\right) + \frac {G _ {1} ^ {2}}{\mu^ {2}} \mathcal {O} (\epsilon_ {\mathrm {a p p}}) \tag {64} \\ \end{array}
+$$
+
+Inequality $(a)$ utilizes the fact that $\| \omega_k^*\|^2 = \left\| F(\theta_k)^\dagger \nabla_\theta J(\theta_k)\right\|^2 \leq \mathcal{O}\left(\mu^{-2}G_1^2 t_{\mathrm{mix}}^2\right)$, where the last inequality follows from Assumptions 5 and 6 and Lemma 7. We also apply (62) to prove $(a)$, whereas $(b)$ is established by retaining only the dominant terms. Theorem 3 also states that
+
+$$
+\begin{array}{l} \left\| \mathbb {E} _ {k} [ \omega_ {k} ] - \omega_ {k} ^ {*} \right\| ^ {2} \leq \mathcal {O} \left(\mu^ {- 2} G _ {1} ^ {2} \left\| \mathbb {E} _ {k} [ \xi_ {k} ] - \xi_ {k} ^ {*} \right\| ^ {2} + \mu^ {- 2} G _ {1} ^ {2} \epsilon_ {\mathrm {a p p}}\right) \\ + \mathcal {O} \left(\frac {G _ {1} ^ {4} t _ {\operatorname* {m i x}}}{\mu^ {2} H ^ {2}} \left\{\| \omega_ {k} ^ {*} \| ^ {2} + \mu^ {- 2} G _ {1} ^ {2} t _ {\operatorname* {m i x}} \mathbb {E} _ {k} \left[ \| \xi_ {k} - \xi_ {k} ^ {*} \| ^ {2} \right] + \mu^ {- 4} G _ {1} ^ {6} t _ {\operatorname* {m i x}} ^ {3} + \mu^ {- 2} \lambda^ {- 2} G _ {1} ^ {2} c _ {\beta} ^ {2} t _ {\operatorname* {m i x}} \right\}\right) \\ \stackrel {(a)} {\leq} \tilde {\mathcal {O}} \left(\frac {G _ {1} ^ {2} c _ {\beta} ^ {6} t _ {\mathrm {m i x}} ^ {2}}{\mu^ {2} \lambda^ {6} H ^ {2}} + \frac {G _ {1} ^ {2}}{\mu^ {2}} \epsilon_ {\mathrm {a p p}}\right) + \tilde {\mathcal {O}} \left(\frac {1}{H ^ {2}} \left\{\mu^ {- 4} G _ {1} ^ {6} t _ {\mathrm {m i x}} ^ {3} + \frac {G _ {1} ^ {6} c _ {\beta} ^ {4} t _ {\mathrm {m i x}} ^ {3}}{\mu^ {4} \lambda^ {4} H} + \mu^ {- 6} G _ {1} ^ {1 0} t _ {\mathrm {m i x}} ^ {4} + \mu^ {- 4} \lambda^ {- 2} G _ {1} ^ {6} c _ {\beta} ^ {2} t _ {\mathrm {m i x}} ^ {2} \right\}\right) \\ \stackrel {(b)} {\leq} \tilde {\mathcal {O}} \left(\frac {1}{H ^ {2}} \left\{\mu^ {- 6} G _ {1} ^ {1 0} t _ {\text{mix}} ^ {4} + \mu^ {- 2} \lambda^ {- 6} G _ {1} ^ {2} c _ {\beta} ^ {6} t _ {\text{mix}} ^ {2} + \mu^ {- 4} \lambda^ {- 2} G _ {1} ^ {6} c _ {\beta} ^ {2} t _ {\text{mix}} ^ {2} \right\}\right) + \frac {G _ {1} ^ {2}}{\mu^ {2}} \mathcal {O} (\epsilon_ {\mathrm {a p p}}) \tag {65} \\ \end{array}
+$$
+
+where $(a)$ is a consequence of (62), (63), and the upper bound $\| \omega_k^*\|^2\leq \mathcal{O}\left(\mu^{-2}G_1^2 t_{\mathrm{mix}}^2\right)$ derived earlier. Inequality $(b)$ retains only the dominant terms. Combining (58), (61), (64), and (65), we arrive at the following.
+
+$$
+\begin{array}{l} J ^ {*} - \frac {1}{K} \sum_ {k = 0} ^ {K - 1} \mathbb {E} [ J (\theta_ {k}) ] \leq \sqrt {\epsilon_ {\mathrm {b i a s}}} + \tilde {\mathcal {O}} \left(\frac {1}{H} \left\{\mu^ {- 3} G _ {1} ^ {6} t _ {\mathrm {m i x}} ^ {2} + \mu^ {- 1} \lambda^ {- 3} G _ {1} ^ {2} c _ {\beta} ^ {3} t _ {\mathrm {m i x}} + \mu^ {- 2} \lambda^ {- 1} G _ {1} ^ {4} c _ {\beta} t _ {\mathrm {m i x}} \right\}\right) + \frac {G _ {1} ^ {2}}{\mu} \mathcal {O} \left(\sqrt {\epsilon_ {\mathrm {a p p}}}\right) \\ + \frac {1}{L} \left(G _ {1} ^ {2} + \frac {\mu^ {2} G _ {2}}{G _ {1} ^ {2}}\right) \left[ \tilde {\mathcal {O}} \left(\frac {1}{H} \left\{\mu^ {- 4} G _ {1} ^ {6} t _ {\mathrm {m i x}} ^ {3} + \mu^ {- 2} \lambda^ {- 4} G _ {1} ^ {2} c _ {\beta} ^ {4} t _ {\mathrm {m i x}} \right\}\right) + \frac {G _ {1} ^ {2}}{\mu^ {2}} \mathcal {O} (\epsilon_ {\mathrm {a p p}}) \right] + \mathcal {O} \left(\frac {G _ {1} ^ {2} L}{\mu^ {2} K}\right) \tag {66} \\ \end{array}
+$$
+
+We get the desired result by substituting the values of $H, K$ as stated in the theorem. We emphasize that the $G_{1}^{2}$ factor multiplying the $\sqrt{\epsilon_{\mathrm{app}}}$ term is a standard component of actor-critic results with a linear critic (Suttle et al., 2023; Patel et al., 2024). However, this factor is often left implicit in previous works, whereas we state it explicitly here.
\ No newline at end of file
diff --git a/asharperglobalconvergenceanalysisforaveragerewardreinforcementlearningviaanactorcriticapproach/images.zip b/asharperglobalconvergenceanalysisforaveragerewardreinforcementlearningviaanactorcriticapproach/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..b75385ad6f99467e8e1e9ca96f1fe19a5daa11fa
--- /dev/null
+++ b/asharperglobalconvergenceanalysisforaveragerewardreinforcementlearningviaanactorcriticapproach/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1f4713e7c61931d9deba0391ab5fad764334bf79f32652d1d93461b3ad9dac88
+size 1869773
diff --git a/asharperglobalconvergenceanalysisforaveragerewardreinforcementlearningviaanactorcriticapproach/layout.json b/asharperglobalconvergenceanalysisforaveragerewardreinforcementlearningviaanactorcriticapproach/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..737ec1facc1d2c351c5ab87e87e2132a65a7616b
--- /dev/null
+++ b/asharperglobalconvergenceanalysisforaveragerewardreinforcementlearningviaanactorcriticapproach/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5322a0fa3136e864535e33dbb46aec872a47b5b7b4ec31533f1b748f4c161b97
+size 1154830
diff --git a/asimplemodelofinferencescalinglaws/57332da1-ebfd-4de5-9c79-2bac060fb56a_content_list.json b/asimplemodelofinferencescalinglaws/57332da1-ebfd-4de5-9c79-2bac060fb56a_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..e8fa9cdeb49a9f7a6db193367d7334071c1a0287
--- /dev/null
+++ b/asimplemodelofinferencescalinglaws/57332da1-ebfd-4de5-9c79-2bac060fb56a_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:157998e15badad68fe40da500aebe9982e2e13544f399f95869c7a050f92c20e
+size 96367
diff --git a/asimplemodelofinferencescalinglaws/57332da1-ebfd-4de5-9c79-2bac060fb56a_model.json b/asimplemodelofinferencescalinglaws/57332da1-ebfd-4de5-9c79-2bac060fb56a_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..957d0cbbd2b47b44a86ed8e8275cee90820bd32f
--- /dev/null
+++ b/asimplemodelofinferencescalinglaws/57332da1-ebfd-4de5-9c79-2bac060fb56a_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:766097a78ae1a7306520d6815cfd28af273cb32880fb82daed8f0b0c6d8e21e2
+size 121625
diff --git a/asimplemodelofinferencescalinglaws/57332da1-ebfd-4de5-9c79-2bac060fb56a_origin.pdf b/asimplemodelofinferencescalinglaws/57332da1-ebfd-4de5-9c79-2bac060fb56a_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..ac83e94ebd31de15402d487186a3df6a01c248fe
--- /dev/null
+++ b/asimplemodelofinferencescalinglaws/57332da1-ebfd-4de5-9c79-2bac060fb56a_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8de1e7c1c957be55ecd6972cb128e3eb9bcd99aaf4dfccf6cc8f9c720a0d64b2
+size 4429128
diff --git a/asimplemodelofinferencescalinglaws/full.md b/asimplemodelofinferencescalinglaws/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..c00bfb399a7511162cc6c21a6e5b06298c7271d6
--- /dev/null
+++ b/asimplemodelofinferencescalinglaws/full.md
@@ -0,0 +1,476 @@
+# A Simple Model of Inference Scaling Laws
+
+Noam Levi
+
+# Abstract
+
+Neural scaling laws have garnered significant interest due to their ability to predict model performance as a function of increasing parameters, data, and compute. In this work, we propose a simple statistical ansatz based on memorization to study scaling laws in the context of inference: specifically, how performance improves with multiple inference attempts. We explore the coverage, or pass@k metric, which measures the chance of success over repeated attempts and provide a motivation for the observed functional form of the inference scaling behavior of the coverage in large language models (LLMs) on reasoning tasks. We then define an "inference loss", which exhibits a power law decay as the number of trials increases, and connect this result with prompting costs. We further test the universality of our construction by conducting experiments on a simple generative model, and find that our predictions are in agreement with the empirical coverage curves in a controlled setting. Our simple framework sets the ground for incorporating inference scaling with other known scaling laws.
+
+# 1. Introduction
+
+Advancements in deep learning have demonstrated that the performance of neural networks scales predictably as a function of model size, dataset size, and computational resources (Hestness et al., 2017; Kaplan et al., 2020a; Rosenfeld et al., 2020; Henighan et al., 2020a). These trends, known colloquially as neural scaling laws, have motivated research into understanding how scaling influences model performance in a wide range of domains, but in particular Large Language Models (LLMs) (Brown et al., 2020; Hoffmann et al., 2022).
+
+1 École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland. Correspondence to: Noam Levi.
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+While the scaling laws literature focuses on test performance induced by pre-training, scaling during inference—the process by which a trained model makes predictions on new data—has received less attention.
+
+Recent works have shown empirically that LLMs can gain substantial benefits from repeated prompts to perform better on difficult tasks such as coding and formal proofs, where verification of the correct answer can be done (Brown et al., 2024; Snell et al., 2024; Bansal et al., 2024). These works demonstrate that the performance of weaker models can be amplified without further training by repeating inference trials, giving rise to a natural question:
+
+Can we interpret, or predict the inference scaling behavior of a model with repeated attempts?
+
+To answer this question, we propose a simple toy model that isolates the inference scaling laws which dictate how certain performance metrics improve as a function of the number of inference attempts.
+
+Inspired by the work of Hutter (2021), which introduced a model to study scaling behavior for memorization and generalization capabilities, we devise a simple setting to capture the effect of repeated inference attempts, focusing on the coverage metric, also known as $\text{pass} @ k$ .
+
+In this work, we present analytical predictions for coverage from a probabilistic perspective and demonstrate how inference improves with the number of repeated trials in a predictable way, which matches the observed behavior in (Brown et al., 2024) and (Snell et al., 2024).
+
+We use two different approaches to obtain the predicted pass@k, and highlight the connection between coverage and total inference cost. Additionally, we define a simple "inference loss", similar to the familiar test loss, but allowing for repeated trials, and predict its scaling.
+
+Our main predictions are verified by empirical results on mathematical reasoning tasks for several LLMs, following Brown et al. (2024).
+
+Lastly, we test the universality of our theory on an entirely different generative model. We train a Variational Autoencoder (VAE) (Kingma and Welling, 2022) to generate reconstructions of its training data by sampling from a latent
+
+space with an associated temperature. We find that the same behavior persists for both LLMs and the VAE setup, in spite of the vast differences in models and tasks.
+
+Given that our results are isolated from the effects of other neural scaling laws, they could be incorporated into a broader exploration to find the optimal train/inference point. In particular, we hope this work sets the ground for exploring the optimal trade-off between training and inference attempts, such that the total performance is improved while cost is minimized.
+
+The rest of the paper is organized as follows: In Section 3, we discuss our main setup, analogizing large models to perfect memorizers. We explain how this model can lead to the empirical pass@k curves for LLMs in Section 4, and provide an interpretation for the parameters of the model as defining an effective perceived difficulty for the dataset. We further connect these parameters with compute costs in Section 4.3. In Section 5, we reaffirm our results using a controlled simple generative model. We conclude in Section 6 and discuss future directions.
+
+# 2. Related Work
+
+Neural Scaling Laws: Scaling laws for neural networks have been extensively studied in recent years. Empirical research has shown that error rates decrease predictably as a function of increased data, parameters, and compute, following power-law relationships. Notable contributions in this space include work by Kaplan et al. (2020b), who demonstrated consistent scaling behavior across language models, and Henighan et al. (2020b), who extended these ideas to multimodal models, as well as Cabannes et al. (2024); Maloney et al. (2022); Bordelon et al. (2020); Spigler et al. (2020); Caponnetto and De Vito (2007); Steinwart et al. (2009); Fischer and Steinwart (2020); Cui et al. (2021); Levi and Oz (2023; 2024); Nam et al. (2024) who studied scaling laws for solvable yet sufficiently complex models, ranging from generalized linear regression on random feature models to kernel ridge regression, connecting them to results from random matrix theory and the underlying properties of the data.
+
+While most scaling laws focus on training, the study of inference scaling remains under-explored. Our work strives to fill this gap by studying how performance improves with repeated inference attempts.
+
+Inference and reasoning: Recent advancements in inference scaling and reasoning within LLMs have been significantly influenced by techniques such as Chain-of-Thought (CoT) prompting and Tree-of-Thought (ToT) reasoning. CoT prompting, as explored by (Wei et al., 2022), enables models to generate intermediate reasoning steps, enhancing their problem-solving capabilities. Building upon this,
+
+(Yao et al., 2023) introduced ToT reasoning, which scales the CoT approach by structuring reasoning paths as trees, allowing for exploration of multiple reasoning paths simultaneously. These methods draw inspiration from earlier works like AlphaGo (Silver et al., 2016) and AlphaZero (Silver et al., 2017), which demonstrated the effectiveness of self-play and guided self-trajectory training in complex decision-making tasks. Additionally, the concept of self-consistency, as discussed by (Wang et al., 2022), employs majority voting over multiple reasoning paths to improve answer accuracy. These collective efforts aim to enhance the reasoning abilities of LLMs by refining their inference processes and scaling their reasoning capabilities, potentially leading to scaling laws for performance (Wu et al., 2024). However, the way these inference processes scale with the number of inference attempts is poorly understood.
+
+# 3. Memorizing Ansatz
+
+In the following, we first briefly review the simplest model which produces the known data scaling law prediction by appealing to a memorizing construction, then consider our proposed model for repeated inference attempts.
+
+The Hutter model (Hutter, 2021) is a probabilistic framework originally introduced to study the scaling behavior of learning curves, focusing on the relationship between memorization and generalization. It assumes a model which perfectly memorizes features during training, allowing it to correctly match sets of features and labels $\{i,y_i\}$ , such that only unseen features can incur an error. The set of features is assumed to follow a Zipf power law decay with parameter $\alpha$ , where the probability of encountering feature $i$ is $\theta_{i} \propto i^{-1 - \alpha}$ and decreases with its rank. For $n$ training samples, the expected single feature error $E_{i}$ is
+
+$$
+\mathcal {E} _ {n} = \mathbb {E} [ E _ {i} ] = \sum_ {i = 1} ^ {\infty} \theta_ {i} (1 - \theta_ {i}) ^ {n} \approx n ^ {- \beta}, \quad \beta = \frac {\alpha}{1 + \alpha}. \tag {1}
+$$
+
+Equation (1) captures the average likelihood of encountering and labeling the feature incorrectly after $n$ training samples. Similar to this model, we will adopt the idea of memorization as a surrogate for training, but will depart from the finite training set assumption as a basis for test errors.
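A short numerical sketch (our own illustration; the truncation at 200,000 features and the choice $\alpha = 1$ are arbitrary) confirms the power-law decay of the error in Equation (1):

```python
# Expected error of a perfect memorizer under a Zipf feature law,
# theta_i ∝ i^(-(1+alpha)), evaluated by direct summation of Eq. (1).
import math

def hutter_error(n: int, alpha: float, n_features: int = 200_000) -> float:
    """E_n = sum_i theta_i * (1 - theta_i)^n over a truncated feature set."""
    weights = [i ** (-(1.0 + alpha)) for i in range(1, n_features + 1)]
    z = sum(weights)
    return sum((w / z) * (1.0 - w / z) ** n for w in weights)

alpha = 1.0
# Fit the decay exponent from two sample sizes: E_n ≈ C * n^(-beta).
e1, e2 = hutter_error(1_000, alpha), hutter_error(10_000, alpha)
beta_fit = math.log(e1 / e2) / math.log(10.0)
print(f"fitted decay exponent: {beta_fit:.2f}")  # close to 0.5 for alpha = 1
```

The fitted exponent is insensitive to the truncation once the cutoff is far above the memorization scale set by $n$.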
+
+# 3.1. Perfect Sample Memorization, Imperfect Predictions
+
+In contrast to the Hutter model, we focus on a scenario where all samples up to the model capacity $n_c$ have been memorized, hence there is no notion of test error coming from unseen data. Instead, failure during inference or data generation could arise from the need to follow a sequence of steps to reach the correct answer.
+
+Figure 1. Pass@k and failure distribution curves for various LLMs on difficult tasks, against theoretical scaling predictions. Left: The relationship between pass@k and the number of samples for several coding and maths tasks for different models, as described in (Brown et al., 2024), compared with the analytical predictions presented in Equations (7) and (13). The solid curves are data, while the dashed curves are the predictions from Equation (7), where $\alpha$ and $\beta$ correspond to the concentration of easy and hard problems, respectively. The dotted curves are the results of Equation (13). The functional form in both cases captures well the LLM pass@k curves for various models, by adjusting $\alpha$, $\beta$ or $p$, $\kappa$. Right: The $\mathrm{Beta}(\alpha, \beta)$ distributions for the failure probabilities are shown for the different models. We can see that most of the questions are "difficult", while the left tail behavior informs us regarding the rate of improvement with additional trials on the "easier" samples. For example, the right panel demonstrates that Pythia-2.8B perceives the MATH dataset as having a larger density of difficult questions (PDF concentrated at $p_i$ close to 1), while Llama3-8B finds the same dataset easier.
+
+Concretely, we consider a joint model of memory $M$ and inference $I$. The memory is $M \in \mathcal{M} \coloneqq \mathcal{X} \to \mathcal{Y}$, where the samples are identical to the labels $\{x_i\}_{i=1}^n = \{y_i\}_{i=1}^n$, and corresponds to a model which simply learns to memorize its training data. The inference model $I \in \mathcal{H} \coloneqq \mathbb{N} \to M$ takes an input index $i$, which serves as a proxy for a particular prompt or task, and should produce the associated memorized label (sample) $y_i$ by recalling it from the memory $M$. The inference model is taken to be imperfect in its retrieval, and subject to some error $\epsilon$, it makes predictions
+
+$$
+I (i) = \left\{ \begin{array}{c c} y _ {i} + \epsilon , & \text{with probability } p _ {i}, \\ y _ {i}, & \text{with probability } 1 - p _ {i}, \end{array} \right. \tag {2}
+$$
+
+for any sample $i = 1, \dots, n_c$. For samples outside the model capacity, the prediction is always wrong: $I(i) \neq y_i$.
+
+At this point, we only accept perfect model "answers", such that the performance of the model on a single sample is measured simply by
+
+$$
+A (i) = \mathbf {1} _ {\{y _ {i} \}} (I (i)), \tag {3}
+$$
+
+where $\mathbf{1}_{\mathcal{V}}(x)$ denotes the indicator function, equal to 1 if $x\in \mathcal{V}$ and 0 otherwise.
+
+We start from the general case, where each sample has an unknown failure probability $p_i \in [0,1]$ at inference, and we are interested in the probability of at least one successful generation of a sample over $k$ attempts. This is in line with the type of reasoning tasks studied by (Snell et al., 2024; Brown et al., 2024), where the performance of a model requires only one correct answer, rather than every answer to be correct.
+
+The rest of our analysis relies on the following assumptions:
+
+Assumption 3.1. For every sample $i$ , we have access to a perfect verification method, which can determine if there exists a correct generated answer during inference $I(i) = y_{i}$ , among $k$ possible candidates $\{I_1(i),\ldots ,I_k(i)\}$ .
+
+Assumption 3.2. Inference attempts $\{I_1(i),\dots,I_k(i)\}$ are independent and identically distributed (i.i.d.) random variables.
+
+We note that Assumption 3.1 sets aside the important topic of the quality of answer verification. In tasks such as coding and formal proofs, automatic verification is often possible, as a candidate solution can be quickly identified to be correct by proof checkers (Zheng et al., 2021). Likewise, unit tests can be used to verify the correctness of candidate solutions to coding tasks (Jimenez et al., 2024). How the following analysis changes if we relax this assumption to the case of an imperfect verification method is an interesting question that we postpone to future studies.
+
+Under Assumption 3.2, the probability of the model failing on all $k$ trials for a sample within its capacity with failure probability $p_i$ is simply
+
+$$
+\mathbb {P} (k \text{ failures on sample } i) = \prod_ {t = 1} ^ {k} (1 - A _ {t} (i)) = p _ {i} ^ {k}, \tag {4}
+$$
+
+where $t$ is the trial index. Therefore, the probability of at least one success in $k$ trials averaged over the entire dataset of size $n$ , known as $\mathbf{pass}@\mathbf{k}$ , is given by
+
+$$
+\begin{array}{l} \operatorname {p a s s} @ \mathrm {k} = \frac {n _ {c}}{n} \times \left(1 - \frac {1}{n _ {c}} \sum_ {i = 1} ^ {n _ {c}} \prod_ {t = 1} ^ {k} \left(1 - A _ {t} (i)\right)\right) \tag {5} \\ = \mathcal {A} \times \left(1 - \frac {1}{n _ {c}} \sum_ {i = 1} ^ {n _ {c}} p _ {i} ^ {k}\right), \\ \end{array}
+$$
+
+where we define the fraction of samples the model can store as $\mathcal{A} = n_c / n\in [0,1]$ , describing the maximal potential pass@k for a given model. This will be the metric we use to describe inference accuracy for the rest of this work.
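The definition in Equation (5) is straightforward to simulate. The sketch below (the failure probabilities and the choice $\mathcal{A} = 1$ are illustrative choices of ours) checks the closed expression against a direct Monte Carlo simulation of $k$ i.i.d. attempts per sample:

```python
# Check pass@k of Eq. (5) against a direct simulation of repeated attempts.
import random

random.seed(0)

def pass_at_k_exact(ps, k, frac_stored=1.0):
    """pass@k = A * (1 - mean_i p_i^k) for per-sample failure probs p_i."""
    return frac_stored * (1.0 - sum(p ** k for p in ps) / len(ps))

def pass_at_k_mc(ps, k, trials=2_000):
    """A sample passes if any of k independent attempts succeeds."""
    hits = 0
    for _ in range(trials):
        p = random.choice(ps)                    # draw a random sample
        hits += any(random.random() >= p for _ in range(k))
    return hits / trials

ps = [0.99, 0.9, 0.5, 0.1]                       # a mix of hard and easy samples
for k in (1, 10, 100):
    print(k, round(pass_at_k_exact(ps, k), 3), round(pass_at_k_mc(ps, k), 3))
```

Even with a few very hard samples ($p_i$ close to 1), coverage keeps climbing with $k$, which is the qualitative behavior studied in the rest of this section.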
+
+# 4. Power Law Distributions Predict Inference Scaling for LLMs
+
+The setup described in Section 3 is completely specified by the distribution of failure probabilities $p_i$ . To construct the failure distribution, we assume that different samples may have different inference complexity levels, incorporating some "easy" and some "difficult" samples with respect to the inference model.
+
+One way to model the different complexities is to appeal to the Beta distribution. We treat the failure probability itself, $p = p_{i}$, as a random variable drawn across samples from $p \sim \mathrm{Beta}(\alpha, \beta)$, whose probability density function (PDF) is given explicitly by
+
+$$
+\operatorname {B e t a} (\alpha , \beta ; p) = \frac {p ^ {- 1 + \alpha} (1 - p) ^ {- 1 + \beta}}{B (\alpha , \beta)}, \tag {6}
+$$
+
+where $B(\alpha, \beta)$ is the Euler beta function.
+
+The $\mathrm{Beta}(\alpha, \beta)$ distribution parameters are $\alpha \in \mathbb{R}^{+}$ , which controls the amount of "easier" problems in the sample dataset, where smaller $\alpha$ pushes the distribution mass towards zero, while $\beta \in \mathbb{R}^{+}$ dictates how often we encounter "harder" problems. Namely, a lower $\beta$ parameter increases the distribution mass towards the right tail (high failure probabilities).
+
+We can therefore compute the pass@k metric by simply averaging over the failure distributions as
+
+$$
+\begin{array}{l} \operatorname{pass}@\mathrm{k} = \mathcal {A} \times \left(1 - \frac {1}{n} \sum_ {i = 1} ^ {n} p _ {i} ^ {k}\right) \approx \mathcal {A} \times \left(1 - \langle p ^ {k} \rangle\right) \\ = \mathcal {A} \times \left(1 - \int_ {0} ^ {1} d p \, p ^ {k} \frac {p ^ {- 1 + \alpha} (1 - p) ^ {- 1 + \beta}}{B (\alpha , \beta)}\right) \\ = \mathcal {A} \times \left(1 - \frac {\Gamma (\beta) \Gamma (k + \alpha)}{B (\alpha , \beta) \Gamma (k + \alpha + \beta)}\right), \tag {7} \\ \end{array}
+$$
+
+where $\Gamma (z)$ is the Euler gamma function, and $\langle \cdot \rangle$ indicates averaging over the $p$ distribution.
+
+Figure 2. Inference attempts loss $\mathcal{L}_{\mathrm{inference}}(k)$ for repeated attempts on mathematics tasks. The inference loss as a function of trials for the LLM experiments in (Brown et al., 2024).
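The Beta average in Equation (7) can be cross-checked numerically. In the sketch below (the values of $\alpha$, $\beta$, and $k$ are arbitrary test points of ours, not fitted parameters), the closed form is compared against direct sampling of $p \sim \mathrm{Beta}(\alpha, \beta)$:

```python
# Verify E[p^k] = Gamma(beta) Gamma(k+alpha) / (B(alpha,beta) Gamma(k+alpha+beta))
# for p ~ Beta(alpha, beta), as used in Eq. (7).
import math
import random

def mean_pk_closed_form(a: float, b: float, k: int) -> float:
    # log-space evaluation to avoid overflow in the Gamma functions
    log_beta_fn = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp(math.lgamma(b) + math.lgamma(k + a)
                    - log_beta_fn - math.lgamma(k + a + b))

def mean_pk_monte_carlo(a: float, b: float, k: int, n: int = 200_000) -> float:
    rng = random.Random(1)
    return sum(rng.betavariate(a, b) ** k for _ in range(n)) / n

a, b, k = 0.5, 0.3, 50
print(mean_pk_closed_form(a, b, k), mean_pk_monte_carlo(a, b, k))
```

pass@k then follows as $\mathcal{A}\,(1 - \mathbb{E}[p^k])$.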
+
+To test these predictions, we utilize the reported pass@k results of (Brown et al., 2024), which evaluated Gemma-2B (Team et al., 2024), Llama3-8B (AI@Meta, 2024) and Pythia-2.8B (Biderman et al., 2023) on mathematical and coding tasks. Here, we take the results for the MATH dataset, which consists of difficult math word problems (Chen et al., 2024), where 128 random problems from the test set were chosen for evaluation.
+
+In Figure 1, we show that the functional form of pass@k given in Equation (7) is a good approximation for the empirical pass@k curves for the LLMs evaluated on mathematical tasks, as reported in Brown et al. (2024).
+
+At large $k$ values, we find a power-law decay of the effective average failure probability
+
+$$
+\operatorname{pass@k} \underset{k \rightarrow \infty}{\approx} \mathcal{A} \times \left(1 - \frac{\Gamma(\beta)\, k^{-\beta}}{B(\alpha, \beta)}\right), \tag{8}
+$$
+
+which is a common feature in neural scaling laws (Bahri et al., 2024). If we then define the inference loss $\mathcal{L}_{\mathrm{inference}}(k)$ as the expectation of the error with respect to the sample distribution, our results for pass@k correspond to
+
+$$
+\begin{aligned} \mathcal{L}_{\mathrm{inference}}(k) &\equiv \mathbb{E}(\text{Error in } k \text{ trials}) = \mathbb{E}\left(\mathcal{A} \times p^{k}\right) \\ &\approx \mathcal{A} \times \frac{\Gamma(\beta) \Gamma(k + \alpha)}{B(\alpha, \beta) \Gamma(k + \alpha + \beta)} \\ &\underset{k \rightarrow \infty}{\approx} \mathcal{A} \times \frac{\Gamma(\beta)\, k^{-\beta}}{B(\alpha, \beta)}. \end{aligned} \tag{9}
+$$
+
+This result implies that the model test loss with repeated inference steps will decrease mainly depending on the value of the exponent $\beta$ . Intuitively, it means that for a fixed $\alpha$ parameter, the harder the questions appear to the model, the more inference attempts are required to reach a low loss. We illustrate this behavior in Figure 3.
+
+
+Figure 3. Inference attempts loss $\mathcal{L}_{\mathrm{inference}}(k)$ for repeated attempts on the memorizing model for different parameter choices. Inference loss for different $\beta$ and $k$ values. Different colors indicate inference loss values at fixed $\alpha = 5$ (left) and at fixed $k = 10^4$ (right), illustrating the behavior of Equation (9).
+
+
+
+# 4.1. Interpretability
+
+The result in Equation (9) not only allows one to predict a threshold for inference in order to get an increase in model performance on difficult problems, but could also offer some interpretation for the difficulty of tasks with respect to the trained model.
+
+We can gain some insights regarding the real-world models by fitting the pass@k metric according to Equation (7) to the ones given in (Brown et al., 2024), and attempt to interpret the properties of the test data from the parameters $\alpha$ and $\beta$ . In particular, note that the ratio $\frac{\alpha}{\alpha + \beta}$ gives the mean of the Beta distribution, which represents the average failure probability across the samples. If $\alpha > \beta$ , the mean failure probability is high (i.e., most samples are harder). On the other hand, if $\alpha < \beta$ , the mean failure probability is low, implying that most samples are easy. Furthermore, the denominator $\alpha + \beta$ governs the concentration of the distribution, where a large $\alpha + \beta$ means that the failure probabilities $p_i$ are more tightly clustered around the mean (more homogeneity in difficulty).
+
+From Figure 1, we can see that the typical value of the "harder" problem parameter is $\beta \sim 0.35$ , while the tail parameter lies in $2 < \alpha < 20$ . This implies that most of the problems in the datasets used to measure the pass@k curves were indeed difficult as perceived by the model, while the existence of a left tail implies that the easy samples are covered quickly, within fewer than 100 trials. This is reflected in the right panel of Figure 1.
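To make this interpretation concrete, the following sketch (with representative values in the ranges read off Figure 1, not exact fits) evaluates the mean failure probability and the left-tail mass of "easy" samples implied by the Beta parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
# Representative fitted values (assumed for illustration): beta ~ 0.35, alpha in (2, 20)
alpha, beta_ = 5.0, 0.35

mean_fail = alpha / (alpha + beta_)            # mean of Beta(alpha, beta): ~0.93, i.e. most samples are hard
samples = rng.beta(alpha, beta_, size=1_000_000)
frac_easy = float((samples < 0.5).mean())      # left-tail mass of "easy" samples (p < 0.5)
```

With these values the mean failure probability is above 0.9, while only a small left-tail fraction corresponds to easy samples, consistent with the discussion above.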
+
+An additional point of interest would be connecting the observed empirical pass@k curves with the unknown failure probability distribution of the different models. This can be done by noting that the average $\langle p^k\rangle$ , performed in Equation (7), can be thought of as a Laplace transform from the failure probability variable $\sigma = \log (1 / p)$ to the trials space $k$ :
+
+$$
+\begin{aligned} \tilde{f}(k) = \left\langle p^{k} \right\rangle &= \int_{0}^{\infty} d\sigma\, e^{-\sigma k} \frac{e^{-\alpha \sigma} \left(1 - e^{-\sigma}\right)^{\beta - 1}}{B(\alpha, \beta)} \\ &= \int_{0}^{\infty} d\sigma\, e^{-\sigma k} f(\sigma). \end{aligned} \tag{10}
+$$
+
+This interpretation implies that it is possible to derive the probability distribution function of the samples in terms of their perceived difficulty by performing the inverse transform. In particular, given an empirical pass@k metric obtained for a given model, the inverse transform on $\tilde{f}(k) = (\mathcal{A} - \operatorname{pass@k}) / \mathcal{A}$ will yield the perceived difficulty PDF. Potentially, such a procedure can be used to identify "difficult" and "easy" questions and construct improved fine-tuning algorithms by choosing training samples biased towards the "difficult" but attainable tasks.
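As a sanity check on the Laplace-transform picture (a sketch under the same Beta model, with illustrative parameters), the $\sigma$-space integral of Equation (10) can be compared numerically against the direct moment $\langle p^k\rangle = B(k+\alpha,\beta)/B(\alpha,\beta)$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta as euler_beta

alpha, beta_, k = 2.0, 0.35, 50  # illustrative parameters

def f_sigma(s):
    # Difficulty density f(sigma) induced by Beta(alpha, beta) under sigma = log(1/p)
    return np.exp(-alpha * s) * (1.0 - np.exp(-s)) ** (beta_ - 1.0) / euler_beta(alpha, beta_)

# Split the integral at s = 1 to handle the integrable singularity at s -> 0
laplace = (quad(lambda s: np.exp(-s * k) * f_sigma(s), 0.0, 1.0)[0]
           + quad(lambda s: np.exp(-s * k) * f_sigma(s), 1.0, np.inf)[0])

direct = euler_beta(k + alpha, beta_) / euler_beta(alpha, beta_)  # <p^k> in closed form
```

The two values agree to numerical precision, confirming that the $\sigma$-space density is the correct transform pair of the empirical $\tilde{f}(k)$.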
+
+# 4.2. Correlated Trials and Effective $k$ Approach
+
+In the previous section, we showed that samples drawn i.i.d. from a distribution of varying difficulty can effectively describe the pass@k metric for memorizing models. Here, we take the converse approach: we do not assume that samples are coupled through their failure rate distribution, but instead that the trials themselves are correlated.
+
+One can conjecture that dependencies between trials arise due to the internal model structure and the data itself. To capture these correlations, we suggest a model where the correlation between trials decays as a power law, implying that successive trials become less independent as we increase the number of trials.
+
+Figure 4. Pass@k as a function of total inference cost for Llama-3-8B MATH (Oracle Verifier). Left and Center: The pass@k metric as a function of the total inference cost and the number of FLOPS per token $F$ or the number of prompt/decode tokens $N_{p} = N_{d}$ , on log-log axes. There is a clear trade-off against total inference cost whenever one of the parameters is kept fixed, in a way predictable from Equation (16). Right: A slice of the contour plots for fixed $N_{p} = N_{d}$ , varying the number of FLOPS per token. The parameters chosen for these figures are fitted from Equation (7) applied to the data taken from (Brown et al., 2024).
+
+
+In order to incorporate the correlation between trials, we define the notion of an effective number of independent trials, denoted $k_{\mathrm{eff}}$ . This adjusts the original $k$ to account for the decay in trial independence. The correlation between trials is modeled via a power-law decay in the eigenvalues of the correlation matrix, such that the effective number of independent trials is given by
+
+$$
+k_{\mathrm{eff}} = \sum_{i = 1}^{k} i^{-\kappa} = H_{k}(\kappa), \quad \kappa \in \mathbb{R}_{\geq 0}, \tag{11}
+$$
+
+where $\kappa$ is the power-law exponent governing how quickly the correlation between trials decays, $H_{k}(\kappa)$ is the generalized harmonic number, and $\zeta (\kappa)$ , appearing below, is the Riemann zeta function. In the large inference trial limit, Equation (11) asymptotes to
+
+$$
+k_{\mathrm{eff}} \underset{k \rightarrow \infty}{\approx} \left(\frac{1}{2} - \frac{k}{\kappa - 1}\right) k^{-\kappa} + \zeta (\kappa). \tag{12}
+$$
+
+Thus, the probability of at least one success in $k$ trials, incorporating correlations, becomes
+
+$$
+\begin{aligned} \operatorname{pass@k} &= \mathcal{A} \times \left(1 - \frac{1}{n} \sum_{i = 1}^{n} p_{i}^{k_{\mathrm{eff}}}\right) = \mathcal{A} \times \left(1 - p^{H_{k}(\kappa)}\right) \\ &\approx \mathcal{A} \times \left(1 - p^{\left(\frac{1}{2} - \frac{k}{\kappa - 1}\right) k^{-\kappa} + \zeta(\kappa)}\right), \end{aligned} \tag{13}
+$$
+
+where $p = p_{i}$ is the error probability of every sample, and $k_{\mathrm{eff}}$ accounts for correlations between trials. The result of Equation (13) is shown in Figure 1 as the dotted curves, which approximate the LLM behavior well for $k \gg 1$ . We stress that this approach should be taken as an effective description, which nevertheless manages to accurately capture the same behavior as sample correlations.
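The effective-$k$ quantities are easy to evaluate directly. The following sketch (with an illustrative $\kappa$ and $p$, not fitted values) compares the exact harmonic number of Equation (11) with the asymptotic form of Equation (12), and evaluates the correlated pass@k of Equation (13) for a single failure probability:

```python
import numpy as np

def riemann_zeta(s, M=1_000_000):
    # Euler-Maclaurin approximation of the Riemann zeta function for s > 1
    i = np.arange(1, M + 1, dtype=float)
    return float(np.sum(i ** -s) + M ** (1 - s) / (s - 1) - 0.5 * M ** -s)

kappa, k = 1.5, 10_000  # illustrative correlation exponent and trial count

k_eff_exact = float(np.sum(np.arange(1, k + 1, dtype=float) ** -kappa))        # Eq. (11)
k_eff_asym = (0.5 - k / (kappa - 1)) * k ** -kappa + riemann_zeta(kappa)       # Eq. (12)

# Correlated-trials pass@k for a single failure probability p (with A = 1), Eq. (13)
p = 0.99
pass_at_k = 1.0 - p ** k_eff_exact
```

Because $k_{\mathrm{eff}}$ saturates near $\zeta(\kappa)$, repeated trials yield sharply diminishing returns once correlations are strong.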
+
+# 4.3. Connection to Compute Scaling
+
+Here, we would like to translate our results from the attempts variable to the inference cost. A natural proxy for the inference cost may be the number of required Floating Point Operations Per Second (FLOPS)$^{1}$ . For concreteness, we adopt the total inference cost formula suggested in (Brown et al., 2024), given by
+
+$$
+\mathcal {C} = N _ {p} \times F + N _ {d} \times F \times k, \tag {14}
+$$
+
+where $\mathcal{C}$ is the total inference cost, $N_{p}, N_{d}$ are the number of prompt and decode tokens, respectively, and $F$ is the number of FLOPS per token.
+
+We can convert some of our pass@k results to this metric by taking the large $k$ limit of Equation (7), giving
+
+$$
+k = \left(\frac{(\mathcal{A} - \operatorname{pass@k})\, B(\alpha, \beta)}{\mathcal{A}\, \Gamma(\beta)}\right)^{-1/\beta}, \tag{15}
+$$
+
+hence the resulting coverage is simply
+
+$$
+\operatorname{Coverage}(\mathcal{C}) \approx \mathcal{A} \cdot \left(1 - \frac{\Gamma(\beta)}{B(\alpha, \beta)} \left(\frac{\bar{\mathcal{C}} - N_{p}}{N_{d}}\right)^{-\beta}\right), \tag{16}
+$$
+
+where $\bar{\mathcal{C}}\equiv \mathcal{C} / F$ is the normalized total inference cost. Keeping the uncontrollable parameters, such as the number of FLOPS per token, fixed, we see from Figure 4 that it is worthwhile to reduce the number of prompting and decoding tokens or to increase the number of attempts, up to the minimal amount of total inference compute required to reach a target coverage. The numbers in Figure 4 are chosen to mimic the results found in (Brown et al., 2024), and are only meant to give an illustration of the functional behavior of Equation (16).
+
+
+Figure 5. Visualization of the task described in Section 5. Here, a VAE is tasked with generating samples from its training data, where a "failure" occurs when the reconstruction error falls above a certain threshold $\epsilon$ .
+
+
+Figure 6. Results for the VAE reconstruction task, compared with semi-analytical predictions. Left: The pass@k metric as a function of the number of attempts $k$ , for different threshold values, with temperature $T = 1.1$ . The curves have been normalized to asymptote at 1 for visual clarity. Right: The reconstruction error behavior across multiple trials, indicated by different colors. The errors obey a quasi power-law behavior.
+
+Alternatively, we can phrase Equation (16) in terms of the inference loss as
+
+$$
+\mathcal{L}_{\mathrm{inference}}(\mathcal{C}) \approx \mathcal{A} \times \frac{\Gamma(\beta)}{B(\alpha, \beta)} \left(\frac{\bar{\mathcal{C}} - N_{p}}{N_{d}}\right)^{-\beta}, \tag{17}
+$$
+
+which demonstrates the power law decay of the inference loss with total inference cost, depending on the value of $\beta$ .
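Equation (16) is straightforward to evaluate. The sketch below (with made-up cost numbers, not the fitted values behind Figure 4) shows the predicted coverage rising with the total inference budget when the per-token cost and token counts are held fixed:

```python
import math

def coverage(C, F, N_p, N_d, alpha, beta_, A=1.0):
    """Large-k coverage as a function of total inference cost, Equation (16)."""
    C_bar = C / F                  # normalized cost
    k = (C_bar - N_p) / N_d        # implied number of attempts, from Equation (14)
    log_B = math.lgamma(alpha) + math.lgamma(beta_) - math.lgamma(alpha + beta_)
    prefactor = math.exp(math.lgamma(beta_) - log_B)   # Gamma(beta) / B(alpha, beta)
    return A * (1.0 - prefactor * k ** (-beta_))

# Illustrative numbers (assumed): 1e9 FLOPS per token, N_p = N_d = 512 tokens
cov_lo = coverage(C=1e13, F=1e9, N_p=512, N_d=512, alpha=5.0, beta_=0.35)
cov_hi = coverage(C=1e15, F=1e9, N_p=512, N_d=512, alpha=5.0, beta_=0.35)
```

Increasing the budget by two orders of magnitude moves the predicted coverage substantially, with the approach to $\mathcal{A}$ governed by the exponent $\beta$.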
+
+# 5. Experiments on a Simple Generative Model
+
+To further validate some of our analytical understanding in a controllable setting, we perform a series of experiments in which we train a simple generative model to reconstruct images taken from Fashion-MNIST (Xiao et al., 2017). Our goal is to connect the theoretical memorizing model and the behavior of more complex generative models by attempting to accurately reconstruct "memorized" examples, heuristically shown in Figure 5.
+
+To do this, we train a VAE with a temperature parameter to study how errors propagate over multiple trials and to compare empirical pass@k with theoretical predictions under correlated trials. We refer to this as the VAE reconstruction task.
+
+To quantify the error probability of the model over multiple samples, we define the error per sample using the norm of the difference between the reconstructed and original image:
+
+$$
+\operatorname{error}(i) = \frac{\left\| \hat{y}_{i} - y_{i} \right\|}{\left\| y_{i} \right\|}. \tag{18}
+$$
+
+Here, $y_{i}$ represents the original image, and $\hat{y}_i$ is the reconstruction. This per-sample error metric allows us to define success or failure at the sample level, where a trial is considered successful if the reconstruction error falls below a threshold $\epsilon$ .
+
+To empirically calculate pass@k, we sample multiple reconstructions from the VAE for each input sample. For each sample, we conduct $k$ trials and check whether at least one trial resulted in a reconstruction with error less than some chosen threshold value $\epsilon \in [0,1]$ . Pass@k is then computed as the fraction of samples for which the model succeeded in reconstructing at least once in $k$ trials.
+
+
+Figure 7. Correlation matrix for errors across different trials for the VAE reconstruction task. Top: The eigenvalues of the correlation matrix follow a power law decay with the number of trials. Bottom: Visualization of the correlation matrix itself shows clear correlations between different trials for $k = 1000$ .
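The empirical pass@k computation reduces to a few lines; the sketch below uses synthetic stand-in errors (the actual experiment uses VAE reconstruction errors):

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_trials, eps = 128, 1000, 0.5

# Synthetic stand-in for per-trial reconstruction errors: errors[i, t] for sample i, trial t
errors = rng.lognormal(mean=0.0, sigma=0.5, size=(n_samples, n_trials))

success = errors < eps                        # trial-level success indicator
passed_by_k = np.cumsum(success, axis=1) > 0  # has sample i passed within the first k trials?
pass_at_k = passed_by_k.mean(axis=0)          # fraction of samples covered at each k
```

By construction the resulting curve is nondecreasing in $k$, as any empirical pass@k must be.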
+
+In Figure 6, we show the pass@k results for the VAE reconstruction task, for different threshold $\epsilon$ values (left), as well as the reconstruction error distribution for multiple attempts (right). The theoretical predictions for the pass@k curves are shown for the effective $k$ approach, given in Equation (13), and approximate the VAE results well. Here, $p_i$ is computed empirically by taking the distribution at the maximal $k$ and $\kappa$ is taken from the correlation matrix in Figure 7.
+
+To complete the picture, we empirically confirm that the assumption of independent trials is indeed violated, as trials are effectively correlated. To capture this effect, we compute the correlation matrix for the errors across trials $\epsilon_{kk'}$ as
+
+$$
+\epsilon_{k k'} = \frac{1}{n} \sum_{i} \operatorname{error}_{i, k} \times \operatorname{error}_{i, k'}. \tag{19}
+$$
+
+The eigenvalues of the correlation matrix decay as a power law, suggesting that the effective number of independent trials diminishes as $k$ increases, as depicted in Figure 7 (top).
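The trial-correlation diagnostic of Equation (19) can be sketched as follows, again with synthetic errors that share a per-sample component so that trials are correlated by construction:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 256, 50

# Synthetic errors: a shared per-sample component makes different trials correlated
base = rng.random(n)
errors = 0.7 * base[:, None] + 0.3 * rng.random((n, k))

corr = errors.T @ errors / n               # Equation (19): eps_{kk'}, a k x k Gram matrix
eigvals = np.linalg.eigvalsh(corr)[::-1]   # eigenvalues sorted in descending order
```

In the VAE experiment, the decay of these eigenvalues fixes the exponent $\kappa$ used in the effective-$k$ prediction of Equation (13).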
+
+# 6. Conclusions
+
+In this paper, we have proposed a simple statistical explanation for a so-called inference scaling law, which describes the scaling of the coverage (pass@k) metric with the number of repeated attempts for generative models. We presented two possible models which lead to inference scaling: one based on introducing a sample-space distribution of "easy" and "difficult" problems, and the other on an effective Zipf-like correlation structure between trials. Using these simple constructions, we were able to derive analytical predictions for the pass@k metric, as well as the test loss as a function of repeated attempts, which we dubbed the inference loss. We then verified our predictions empirically, both through previous experimental results for LLMs and for a simple generative VAE construction.
+
+We stress that the merit of our construction is in its simplicity, and there are many other models that can give rise to the same functional behavior. We view this as a positive rather than a negative, since it means that this simple model captures a universal behavior, which should not depend much on the modeling itself. For instance, another way to arrive at a similar scaling law would be to choose a different modeling of the failure distribution, based perhaps on program length, introducing the notion of a distribution of program lengths corresponding to different samples, similar to Ringel and de Bem (2018). In the end, this type of construction will have a similar interpretation in terms of task complexity with respect to the model.
+
+We believe our toy model offers a simple yet effective phenomenological framework for understanding how inference quality improves with more opportunities to predict correctly. Future work could extend this framework to more complex models, including applying similar methodology as (Maloney et al., 2022) to generalized linear regression, kernel regression and neural networks, and investigate how it interacts with existing scaling laws based on model size and training data.
+
+# Acknowledgements
+
+We thank Yohai Bar-Sinai, Alon Beck, Itay Lavie, Nadav Outmezguine, Zohar Ringel and Antonio Sclocchi for fruitful discussions. The work of NL is supported by the EPFL AI4science program.
+
+# Impact Statement
+
+This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, in particular due to the large model sizes considered in this work, but we do not feel there are specific aspects of this work with broader impacts beyond the considerations relevant to all large machine learning models.
+
+# References
+
+Joel Hestness, Sharan Narang, Newsha Ardalani, Gregory Diamos, Heewoo Jun, Hassan Kianinejad, Md Mostofa Ali Patwary, Yang Yang, and Yanqi Zhou. Deep learning scaling is predictable, empirically. arXiv preprint arXiv:1712.00409, 2017.
+Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020a.
+Jonathan S Rosenfeld, Amir Rosenfeld, Yonatan Belinkov, and Nir Shavit. A constructive prediction of the generalization error across scales. In International Conference on Learning Representations, 2020.
+Tom Henighan, Jared Kaplan, Mor Katz, Mark Chen, Christopher Hesse, Jacob Jackson, Heewoo Jun, Tom B Brown, Prafulla Dhariwal, Scott Gray, et al. Scaling laws for autoregressive generative modeling. arXiv preprint arXiv:2010.14701, 2020a.
+Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020.
+Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. An empirical analysis of compute-optimal large language model training. In Advances in Neural Information Processing Systems, volume 35, pages 30016-30030, 2022.
+
+Bradley Brown, Jordan Juravsky, Ryan Ehrlich, Ronald Clark, Quoc V. Le, Christopher Ré, and Azalia Mirhoseini. Large language monkeys: Scaling inference compute with repeated sampling, 2024. URL https://arxiv.org/abs/2407.21787.
+Charlie Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. Scaling llm test-time compute optimally can be more effective than scaling model parameters, 2024. URL https://arxiv.org/abs/2408.03314.
+Hritik Bansal, Arian Hosseini, Rishabh Agarwal, Vinh Q. Tran, and Mehran Kazemi. Smaller, weaker, yet better: Training llm reasoners via compute-optimal sampling, 2024. URL https://api.semanticscholar.org/CorpusID:272146630.
+Marcus Hutter. Learning curve theory, 2021. URL https://arxiv.org/abs/2102.04074.
+Diederik P Kingma and Max Welling. Auto-encoding variational bayes, 2022. URL https://arxiv.org/abs/1312.6114.
+Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models, 2020b. URL https://arxiv.org/abs/2001.08361.
+Tom Henighan, Jared Kaplan, Mor Katz, Mark Chen, Christopher Hesse, Jacob Jackson, Heewoo Jun, Tom B. Brown, Prafulla Dhariwal, Scott Gray, Chris Hallacy, Benjamin Mann, Alec Radford, Aditya Ramesh, Nick Ryder, Daniel M. Ziegler, John Schulman, Dario Amodei, and Sam McCandlish. Scaling laws for autoregressive generative modeling, 2020b. URL https://arxiv.org/abs/2010.14701.
+Vivien Cabannes, Elvis Dohmatob, and Alberto Bietti. Scaling laws for associative memories, 2024. URL https://arxiv.org/abs/2310.02984.
+Alexander Maloney, Daniel A Roberts, and James Sully. A solvable model of neural scaling laws. arXiv preprint arXiv:2210.16859, 2022.
+Blake Bordelon, Abdulkadir Canatar, and Cengiz Pehlevan. Spectrum dependent learning curves in kernel regression and wide neural networks. In International Conference on Machine Learning, pages 1024-1034. PMLR, 2020.
+Stefano Spigler, Mario Geiger, and Matthieu Wyart. Asymptotic learning curves of kernel methods: empirical data versus teacher-student paradigm. Journal of Statistical Mechanics: Theory and Experiment, (12):124001, 2020.
+
+Andrea Caponnetto and Ernesto De Vito. Optimal rates for the regularized least-squares algorithm. Foundations of Computational Mathematics, 7:331-368, 2007.
+Ingo Steinwart, Don R Hush, Clint Scovel, et al. Optimal rates for regularized least squares regression. In $COLT$ , pages 79-93, 2009.
+Simon Fischer and Ingo Steinwart. Sobolev norm learning rates for regularized least-squares algorithms. The Journal of Machine Learning Research, 21(1):8464-8501, 2020.
+Hugo Cui, Bruno Loureiro, Florent Krzakala, and Lenka Zdeborova. Generalization error rates in kernel regression: The crossover from the noiseless to noisy regime. In Advances in Neural Information Processing Systems, volume 34, pages 10131-10143, 2021.
+Noam Levi and Yaron Oz. The universal statistical structure and scaling laws of chaos and turbulence, 2023. URL https://arxiv.org/abs/2311.01358.
+Noam Levi and Yaron Oz. The underlying scaling laws and universal statistical structure of complex datasets, 2024. URL https://arxiv.org/abs/2306.14975.
+Yoonsoo Nam, Nayara Fonseca, Seok Hyeong Lee, Chris Mingard, and Ard A. Louis. An exactly solvable model for emergence and scaling laws in the multitask sparse parity problem, 2024. URL https://arxiv.org/abs/2404.17563.
+Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models. arXiv [cs.CL], pages 24824-24837, 27 January 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/9d5609613524ecf4f15af0f7b31abca4-Paper.pdf.
+Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. arXiv [cs.CL], 17 May 2023. URL http://arxiv.org/abs/2305.10601.
+David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484-489, 28 January 2016. URL https://www.nature.com/articles/nature16961.
+David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, and Demis Hassabis. Mastering chess and shogi by self-play with a general reinforcement learning algorithm. arXiv [cs.AI], 5 December 2017. URL http://arxiv.org/abs/1712.01815.
+Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. arXiv [cs.CL], 21 March 2022. URL http://arxiv.org/abs/2203.11171.
+Yangzhen Wu, Zhiqing Sun, Shanda Li, Sean Welleck, and Yiming Yang. Inference scaling laws: An empirical analysis of compute-optimal inference for problem-solving with language models. arXiv [cs.AI], 1 August 2024. URL http://arxiv.org/abs/2408.00724.
+Kunhao Zheng, Jesse Michael Han, and Stanislas Polu. Minif2f: a cross-system benchmark for formal olympiad-level mathematics. CoRR, abs/2109.00110, 2021. URL https://arxiv.org/abs/2109.00110.
+Carlos E Jimenez, John Yang, Alexander Wettig, et al. Swebench: Can language models resolve real-world github issues? arXiv preprint arXiv:2310.06770, 2024.
+Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya Pathak, Laurent Sifre, Morgane Riviere, Mihir Sanjay Kale, Juliette Love, Pouya Tafti, Léonard Hussenot, Pier Giuseppe Sessa, Aakanksha Chowdhery, Adam Roberts, Aditya Barua, Alex Botev, Alex Castro-Ros, Ambrose Slone, Amélie Héliou, Andrea Tacchetti, Anna Bulanova, Antonia Paterson, Beth Tsai, Bobak Shahriari, Charline Le Lan, Christopher A. Choquette-Choo, Clément Crepy, Daniel Cer, Daphne Ippolito, David Reid, Elena Buchatskaya, Eric Ni, Eric Noland, Geng Yan, George Tucker, George-Christian Muraru, Grigory Rozhdestvenskiy, Henryk Michalewski, Ian Tenney, Ivan Grishchenko, Jacob Austin, James Keeling, Jane Labanowski, Jean-Baptiste Lespiau, Jeff Stanway, Jenny Brennan, Jeremy Chen, Johan Ferret, Justin Chiu, Justin Mao-Jones, Katherine Lee, Kathy Yu, Katie Millican, Lars Lowe Sjoesund, Lisa Lee, Lucas Dixon, Machel Reid, Maciej Mikula, Mateo Wirth, Michael Sharman, Nikolai Chinaev, Nithum Thain, Olivier Bachem, Oscar Chang, Oscar Wahltinez, Paige Bailey, Paul Michel, Petko Yotov, Rahma Chaabouni, Ramona Comanescu, Reena Jana, Rohan Anil, Ross McIlroy, Ruibo Liu, Ryan Mullins, Samuel L Smith, Sebastian Borgeaud, Sertan Girgin, Sholto Douglas, Shree Pandya, Siamak Shakeri, Soham De, Ted Klimenko, Tom Hennigan, Vlad Feinberg, Wojciech Stokowiec, Yu hui Chen, Zafarali Ahmed, Zhitao Gong, Tris Warkentin, Ludovic Peran, Minh Giang, Clément Farabet, Oriol Vinyals, Jeff Dean, Koray Kavukcuoglu, Demis Hassabis, Zoubin Ghahramani, Douglas Eck, Joelle Barral, Fernando Pereira, Eli Collins, Armand Joulin, Noah Fiedel, Evan Senter, Alek Andreev, and Kathleen Kenealy. Gemma: Open models based on gemini research and technology, 2024. URL https://arxiv.org/abs/2403.08295.
+AI@Meta. Meta llama 3, 2024. URL https://llama.meta.com/llama3/.
+Stella Biderman, Hailey Schoelkopf, Quentin Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, and Oskar van der Wal. Pythia: A suite for analyzing large language models across training and scaling, 2023. URL https://arxiv.org/abs/2304.01373.
+Guoxin Chen, Minpeng Liao, Chengxi Li, and Kai Fan. Alphamath almost zero: Process supervision without process, 2024. URL https://arxiv.org/abs/2405.03553.
+Yasaman Bahri, Ethan Dyer, Jared Kaplan, Jaehoon Lee, and Utkarsh Sharma. Explaining neural scaling laws. Proceedings of the National Academy of Sciences, 121 (27), June 2024. ISSN 1091-6490. doi: 10.1073/pnas.2311878121. URL http://dx.doi.org/10.1073/pnas.2311878121.
+Mostafa Dehghani, Anurag Arnab, Lucas Beyer, Ashish Vaswani, and Yi Tay. The efficiency misnomer. CoRR, abs/2110.12894, 2021. URL https://arxiv.org/abs/2110.12894.
+Jordan Juravsky, Bradley Brown, Ryan Ehrlich, Daniel Y. Fu, Christopher Ré, and Azalia Mirhoseini. Hydragen: High-throughput llm inference with shared prefixes. ArXiv, abs/2402.05099, 2024. URL https://api.semanticscholar.org/CorpusID:267523163.
+Lianmin Zheng, Liangsheng Yin, Zhiqiang Xie, Jeff Huang, Chuyue Sun, Cody Hao Yu, Shiyi Cao, Christos Kozyrakis, Ion Stoica, Joseph E. Gonzalez, Clark Barrett, and Ying Sheng. Efficiently programming large language models using sglang, 2023.
+Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms, 2017. URL https://arxiv.org/abs/1708.07747.
+Zohar Ringel and Rodrigo de Bem. Critical percolation as a framework to analyze the training of deep networks, 2018. URL https://arxiv.org/abs/1802.02154.
+Li Deng. The mnist database of handwritten digit images for machine learning research. IEEE Signal Processing Magazine, 29(6):141-142, 2012.
+Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization, 2017. URL https://arxiv.org/abs/1412.6980.
+
+# A. Experimental Setup
+
+
+Figure 8. Examples of original and reconstructed samples from the VAE reconstruction task.
+
+Here, we specify the experimental procedure used for the analysis in Section 5.
+
+We utilize a Variational Autoencoder (VAE) with the following architectural details:
+
+- Input dimension: $28 \times 28 = 784$ , corresponding to the flattened pixel values of the Fashion-MNIST dataset.
+- Hidden dimension: 400.
+- Latent dimension: 20, controlling the bottleneck for information in the latent space.
+- Decoder: The decoder reconstructs the original input through two fully connected layers, outputting a 784-dimensional vector followed by a sigmoid activation to ensure pixel values remain between 0 and 1.
+- Temperature parameter: A temperature parameter $T = 1.1$ is applied during the reparameterization step to control the variance of the latent variables, allowing us to model uncertainty in the latent space more effectively.
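A minimal sketch of the temperature-scaled reparameterization step (in NumPy rather than the training framework; placing $T$ as a multiplier on the latent noise scale is our assumption about the setup):

```python
import numpy as np

def reparameterize(mu, logvar, T=1.1, rng=None):
    """Sample z = mu + T * sigma * eps with eps ~ N(0, I).
    The temperature T scales the latent noise (assumed placement)."""
    if rng is None:
        rng = np.random.default_rng(0)
    std = np.exp(0.5 * np.asarray(logvar))
    eps = rng.standard_normal(np.shape(mu))
    return np.asarray(mu) + T * std * eps

z = reparameterize(np.zeros(20), np.zeros(20), T=1.1)  # one latent draw at T = 1.1
```

Setting $T > 1$ widens the latent distribution at sampling time, which is what makes repeated reconstruction attempts produce genuinely different outcomes.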
+
+The VAE was trained on the first 400 samples from the Fashion MNIST dataset. The loss function combines binary cross-entropy for reconstruction and the Kullback-Leibler divergence to regularize the latent variables. We ran the training for 1000 epochs using the Adam optimizer with a learning rate of $1 \times 10^{-3}$ .
+
+To provide qualitative insights into the model's performance, we visualize several input samples from the Fashion-MNIST dataset along with their corresponding reconstructions in Figure 8. This allows us to inspect both successful and failed reconstructions, and examine the types of errors the model makes.
+
+# B. Example of a Memorizing and Inferring Model
+
+This appendix details the experimental setup used to simulate a neural network as a "memory" system with sample-specific retrieval probabilities, and its evaluation using a pass@k metric.
+
+# B.1. Dataset and Preprocessing
+
+We utilized a subset of the MNIST dataset (Deng, 2012).
+
+- Subset Size: We selected the first $N_{\text{samples}} = 1000$ images from the MNIST training set.
+- Unique Class Assignment: Each of these $N_{\text{samples}}$ images was treated as belonging to its own unique class. Thus, the classification task for the memory model involved $N_{\text{samples}}$ distinct classes. The label for the $i$ -th sample in the subset was simply $i$ .
+- Image Preprocessing: MNIST images, originally $28 \times 28$ pixels, were flattened into vectors of size $D_{in} = 784$ . Pixel values were normalized using the standard MNIST mean (0.1307) and standard deviation (0.3081).
+
+
+Figure 9. Training curves for the "memory" model in Appendix B.
+
+
+
+# B.2. Memory Model
+
+The "memory" component was implemented as a simple Multi-Layer Perceptron (MLP).
+
+- Architecture: The MLP consisted of:
+
+1. An input layer accepting $D_{in} = 784$ features.
+2. A first hidden layer with 256 neurons, followed by a ReLU activation function.
+3. A second hidden layer with 128 neurons, followed by a ReLU activation function.
+4. An output layer with $N_{\text{samples}}$ neurons (one for each unique class), producing logits.
+
+- Training Objective: The model was trained to perform classification, mapping each unique input sample to its assigned unique class index. The goal was for the model to effectively memorize its training data.
+
+- Training Parameters:
+
+- Loss Function: Cross-Entropy Loss $(\mathcal{L}_{CE})$
+- Optimizer: Adam optimizer (Kingma and Ba, 2017).
+- Learning Rate: $\eta = 0.001$ .
+- Epochs: The model was trained for $E = 50$ epochs.
+- Batch Size: $B = 32$ .
+
+- Training Monitoring: Training loss and accuracy on the training set (which comprised all $N_{\text{samples}}$ unique items) were monitored per epoch. The expectation was for the model to achieve near-perfect accuracy, indicating successful memorization, as shown in Figure 9.
+
+# B.3. Probabilistic Retrieval Process with Sample-Specific Difficulty
+
+To simulate a retrieval process where samples have inherent difficulties, we assigned a success probability to each sample. This approach differs from directly adding noise to the model's output logits.
+
+- Sample Failure Probability Assignment: For each of the $N_{\text{samples}}$ unique training samples $s$ , an intrinsic probability of failing recall in a single trial, $p_s$ , was drawn from a Beta distribution:
+
+$$
+p_s \sim \mathrm{Beta}(\alpha_S, \beta_S)
+$$
+
+The parameters for this Beta distribution were set to $\alpha_{S} = 0.3$ and $\beta_{S} = 0.5$ . This ensures that some samples are inherently "easier" (lower $p_{s}$ ) or "harder" (higher $p_{s}$ ) to retrieve. This $p_{s}$ is fixed for sample $s$ across all its retrieval trials, as shown in Figure 10.
+
+
+Figure 10. Failure probability distribution for the inference model in Appendix B.
+
+
+Figure 11. Pass@k curve for the inference model in Appendix B.
+
+- Simulating a Retrieval Trial: For a given sample $s$ with its associated failure probability $p_s$ , a single retrieval trial was simulated as follows:
+
+1. A random number $u$ was drawn from a uniform distribution, $u\sim \mathcal{U}(0,1)$
+2. If $u > p_{s}$ , the trial was considered a "successful recall." In this event, it was assumed that the memory model would correctly identify the sample's true class as its top-1 prediction.
+3. If $u \leq p_s$ , the trial was considered a "failed recall."
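+A minimal sketch of this retrieval-trial simulation, using numpy's Beta and uniform samplers:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(42)
+N_SAMPLES = 1000
+ALPHA_S, BETA_S = 0.3, 0.5
+
+# Per-sample failure probabilities p_s, drawn once and fixed across all trials.
+p_fail = rng.beta(ALPHA_S, BETA_S, size=N_SAMPLES)
+
+def retrieval_trial(s: int) -> bool:
+    """Simulate one retrieval trial for sample s: success iff u > p_s."""
+    u = rng.uniform(0.0, 1.0)
+    return u > p_fail[s]
+
+# On a successful trial, the memory model's top-1 prediction is assumed correct.
+success = retrieval_trial(0)
+```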
+
+# B.4. Evaluation Metric: Pass@k
+
+The performance of the probabilistic retrieval process was evaluated using the Pass@k metric.
+
+- Definition: For a given sample $s$ , Pass@k evaluates whether the true class of $s$ is successfully recalled (as defined above) in at least one of $k$ independent retrieval trials.
+- Calculation:
+
+1. For each sample $s$ in the dataset:
+
+- Up to $K_{max} = 10^{4}$ independent retrieval trials were simulated.
+- If the first successful recall occurred at trial $t$ , the sample was marked as passing for every evaluated $k' \geq t$ (up to $K_{max}$ ), and the simulation for sample $s$ was then stopped.
+
+2. The overall Pass@k rate for a given $k$ is the fraction of the $N_{\text{samples}}$ samples that passed at $k$ trials.
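+Because the trials are independent with a fixed $p_s$ , the expected Pass@k admits the closed form $1 - p_s^k$ per sample, which gives a cheap way to sketch the curve. This is a simplification of the trial-by-trial simulation described above (it computes the expectation rather than one stochastic realization):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(42)
+p_fail = rng.beta(0.3, 0.5, size=1000)  # fixed per-sample failure probabilities
+
+def expected_pass_at_k(p_fail: np.ndarray, k: int) -> float:
+    """Expected Pass@k: a sample passes iff at least one of k i.i.d. trials
+    succeeds, which happens with probability 1 - p_s**k; average over samples."""
+    return float(np.mean(1.0 - p_fail**k))
+
+# Expected Pass@k at a few values of k up to K_max = 10**4.
+curve = {k: expected_pass_at_k(p_fail, k) for k in (1, 10, 100, 1000, 10000)}
+```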
+
+The results are shown in Figure 11 and are consistent with the phenomenology described in the main text.
\ No newline at end of file
diff --git a/asimplemodelofinferencescalinglaws/images.zip b/asimplemodelofinferencescalinglaws/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..0306a141d33f1404aefd75feeeec0f4edef1e93d
--- /dev/null
+++ b/asimplemodelofinferencescalinglaws/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f1d17ee9dc28e1f471eda1b50484f51a385f9dfa24d5e97185795d7c2980d758
+size 662864
diff --git a/asimplemodelofinferencescalinglaws/layout.json b/asimplemodelofinferencescalinglaws/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..ad0c9e15b11ba6ec5c817da9cc607cf78fc71b22
--- /dev/null
+++ b/asimplemodelofinferencescalinglaws/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8890070588c94b18fe5c266bba2580ae0df9d42b7bfe205a62f3301c1b65a8bd
+size 574304
diff --git a/asquarepeginasquareholemetaexpertforlongtailedsemisupervisedlearning/4caf38c3-8e03-431f-a291-9277f0b5cb78_content_list.json b/asquarepeginasquareholemetaexpertforlongtailedsemisupervisedlearning/4caf38c3-8e03-431f-a291-9277f0b5cb78_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..6af8f11d55231a554a7a328c2878bb3a8d1e8a44
--- /dev/null
+++ b/asquarepeginasquareholemetaexpertforlongtailedsemisupervisedlearning/4caf38c3-8e03-431f-a291-9277f0b5cb78_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8e175c46449a3ed3ac17e21ea4b5787b6c642dd48f5d43b53323028f71198a96
+size 121658
diff --git a/asquarepeginasquareholemetaexpertforlongtailedsemisupervisedlearning/4caf38c3-8e03-431f-a291-9277f0b5cb78_model.json b/asquarepeginasquareholemetaexpertforlongtailedsemisupervisedlearning/4caf38c3-8e03-431f-a291-9277f0b5cb78_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..f71e39a1907769d49895d80c5b0eabfbd9a5570b
--- /dev/null
+++ b/asquarepeginasquareholemetaexpertforlongtailedsemisupervisedlearning/4caf38c3-8e03-431f-a291-9277f0b5cb78_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5994065223f59beb3110af72e7097f590915208a7c2f52547b0e1481441e56fb
+size 147950
diff --git a/asquarepeginasquareholemetaexpertforlongtailedsemisupervisedlearning/4caf38c3-8e03-431f-a291-9277f0b5cb78_origin.pdf b/asquarepeginasquareholemetaexpertforlongtailedsemisupervisedlearning/4caf38c3-8e03-431f-a291-9277f0b5cb78_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..6d55d8d806c1916fdea04b6286ca97b832a4bcb6
--- /dev/null
+++ b/asquarepeginasquareholemetaexpertforlongtailedsemisupervisedlearning/4caf38c3-8e03-431f-a291-9277f0b5cb78_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fa45c2246819482693613f4e2c614d5e58568b8f41457b37e7f973386ce74403
+size 577413
diff --git a/asquarepeginasquareholemetaexpertforlongtailedsemisupervisedlearning/full.md b/asquarepeginasquareholemetaexpertforlongtailedsemisupervisedlearning/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..f6a6fb669c504036be4b8d0dca7e8174272088c2
--- /dev/null
+++ b/asquarepeginasquareholemetaexpertforlongtailedsemisupervisedlearning/full.md
@@ -0,0 +1,448 @@
+# A Square Peg in a Square Hole: Meta-Expert for Long-Tailed Semi-Supervised Learning
+
+Yaxin Hou$^{1}$ Yuheng Jia$^{1,2}$
+
+# Abstract
+
+This paper studies long-tailed semi-supervised learning (LTSSL) with distribution mismatch, where the class distribution of the labeled training data follows a long-tailed distribution and mismatches with that of the unlabeled training data. Most existing methods introduce auxiliary classifiers (experts) to model various unlabeled data distributions and produce pseudo-labels, but the expertise of the various experts is not fully utilized. We observe that different experts are good at predicting different intervals of samples, e.g., the long-tailed expert is skilled at samples located in the head interval and the uniform expert excels at samples located in the medium interval. Therefore, we propose a dynamic expert assignment module that can estimate the class membership (i.e., head, medium, or tail class) of samples, and dynamically assigns a suitable expert to each sample based on the estimated membership to produce high-quality pseudo-labels in the training phase and predictions in the testing phase. We also theoretically reveal that integrating different experts' strengths leads to a smaller generalization error bound. Moreover, we find that deeper features are more biased toward the head class but more discriminative, while shallower features are less biased but also less discriminative. We therefore propose a multi-depth feature fusion module that utilizes features at different depths to mitigate the model bias. Our method demonstrates its effectiveness through comprehensive experiments on the CIFAR-10-LT, STL-10-LT, and SVHN-LT datasets across various settings.
+
+$^{1}$ School of Computer Science and Engineering, Southeast University, Nanjing, China. $^{2}$ Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, Nanjing, China. Correspondence to: Yuheng Jia .
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+Table 1. Accuracy (\%) of testing set under three different unlabeled data distributions with varying experts. In CPE (Ma et al., 2024), $E_{2}$ denotes uniform expert, while $E_{1}$ and $E_{3}$ denote long-tailed and inverse long-tailed experts, respectively. Our proposed method and Upper-E use the proposed dynamic expert assignment (DEA) module and the ground-truth class membership to select a specific expert, respectively. The dataset is CIFAR-10-LT with imbalance ratio $\gamma_{l} = 200$ . $\dagger$ indicates our proposed method using the ground-truth class membership to select a specific expert.
+
+| Distribution | Expert | Head | Medium | Tail | Overall |
+| --- | --- | --- | --- | --- | --- |
+| Consistent | $E_1$ | 94.67 | 74.10 | 38.73 | 69.66 |
+| | $E_2$ | 87.23 | 77.30 | 71.60 | 78.57 |
+| | $E_3$ | 3.57 | 72.35 | 68.47 | 50.55 |
+| | Ours | 89.13 | 79.52 | 77.07 | 81.67 |
+| | Upper-E | 94.67 | 77.30 | 68.47 | 79.86 |
+| | Upper-E$^{\dagger}$ | 94.17 | 77.53 | 85.93 | 85.04 |
+| Uniform | $E_1$ | 93.47 | 74.73 | 66.83 | 77.98 |
+| | $E_2$ | 86.93 | 77.60 | 87.83 | 83.47 |
+| | $E_3$ | 0.53 | 74.55 | 90.20 | 57.04 |
+| | Ours | 89.40 | 78.35 | 86.00 | 83.96 |
+| | Upper-E | 93.47 | 77.60 | 90.20 | 86.14 |
+| | Upper-E$^{\dagger}$ | 93.40 | 78.40 | 90.43 | 86.51 |
+| Inverse | $E_1$ | 93.56 | 74.25 | 75.87 | 80.53 |
+| | $E_2$ | 83.90 | 78.58 | 92.67 | 84.40 |
+| | $E_3$ | 57.00 | 77.43 | 95.10 | 76.60 |
+| | Ours | 88.60 | 78.90 | 92.03 | 85.75 |
+| | Upper-E | 93.56 | 78.58 | 95.10 | 88.03 |
+| | Upper-E$^{\dagger}$ | 92.20 | 80.05 | 96.40 | 88.60 |
+
+# 1. Introduction
+
+Over the last decade, extensive high-quality labeled data have improved the performance of deep neural networks (DNNs). However, in specialized domains such as medical diagnosis (Yuan et al., 2023; Zhang et al., 2023b), the scarcity and imbalance of labeled data can be a significant challenge due to the high costs associated with data collection or annotation (Chen et al., 2024). To solve this issue, semi-supervised learning (SSL) (Sohn et al., 2020; Berthelot et al., 2020; Zhang et al., 2021) has been proposed and become a popular method to utilize the easier and cheaper acquired unlabeled data to improve the performance of DNN models. Its core idea is generating pseudo-labels for unlabeled data and selecting high-confidence ones to train the model together with the labeled data, so as to obtain a better
+
+
+(a) Consistent
+
+
+(b) Uniform
+Figure 1. Comparison of F1 score $(\%)$ of pseudo-label predictions between CPE (Ma et al., 2024) and our proposed method under three different cases. Upper-E denotes the F1 score $(\%)$ of CPE with ground-truth class membership. The dataset is CIFAR-10-LT with imbalance ratio $\gamma_{l} = 200$ . Our proposed method can generate pseudo-labels with higher F1 scores than CPE on all the cases, indicating its effectiveness in the utilization of unlabeled samples.
+
+
+(c) Inverse
+
+model than using labeled data only. However, traditional SSL usually assumes that the class distributions of the labeled and unlabeled data are balanced and consistent, which is easily violated in real-world applications. Specifically, data typically exhibit a long-tailed distribution, and the class distribution of unlabeled data is not always the same as that of the labeled data, i.e., unlabeled data may exhibit any one of the long-tailed, uniform, or inverse long-tailed distribution, which further exacerbates the difficulty of model training (Zhang et al., 2023c). This problem is known as long-tailed semi-supervised learning (LTSSL).
+
+A motivating example. In the medical field, when collecting clinical data, we may obtain a long-tailed dataset from hospitals, i.e., many common disease cases (head classes) accompanied by very few rare disease cases (tail classes). However, the clinical data collected from a wide range of populations is unlabeled and characterized by an abundance of non-diseased individuals and a scarcity of diseased individuals, especially those with rare diseases. Thus, the unlabeled data distribution is mismatched with the labeled data distribution.
+
+To mitigate the model bias arising from long-tailed distribution, long-tailed learning techniques, such as logit adjustment (Menon et al., 2021) and re-sampling (Xu et al., 2022), have been utilized in LTSSL to produce unbiased and high-quality pseudo-labels. Despite their effectiveness, they still cannot effectively solve the model bias resulting from distribution mismatch between labeled and unlabeled training data. Recently, ACR (Wei & Gan, 2023) proposes to handle the mismatched unlabeled data with various distributions by incorporating adaptive logit adjustment. While this method is effective, it relies on pseudo-labels generated by a single classifier (expert), limiting its performance. In response to this limitation, CPE (Ma et al., 2024) suggests training three classifiers (experts) to handle unlabeled data across various class distributions. However, CPE still suffers from the following two drawbacks. First, it employs three experts to
+
+generate pseudo-labels simultaneously in the training phase, which may introduce more error pseudo-labels. Second, in the testing phase, it only employs the uniform expert for prediction, ignoring the different characteristics of three experts.
+
+Motivation: We argue that each expert has its strength and weakness, e.g., the long-tailed expert is good at handling the samples in the head classes but not the samples in the medium and tail classes, while the uniform expert excels in medium classes but not in tail and head classes, and we should use different experts to process samples from different intervals, i.e., "a square peg in a square hole". To check this assumption, we first assume the class membership (i.e., head, medium, or tail class) of each sample is known. Then, we construct the Upper-E, which uses the long-tailed expert in CPE to produce pseudo-labels and predictions for head class samples in the training and testing phase, respectively, and the uniform and inverse long-tailed experts for medium and tail class samples. As shown in Fig. 1 and Table 1, this strategy can largely improve the quality of pseudo-labels in the training phase, and the prediction accuracy in the testing phase across all cases, indicating that each expert has its expertise. This observation motivates us to design a module to accurately estimate the class membership of each sample to realize the prior of "a square peg in a square hole".
+
+To this end, in this paper, we propose a flexible and end-to-end LTSSL algorithm, namely Meta-Expert. To estimate the class membership of each sample, we propose a Dynamic Expert Assignment (DEA) module, which takes features from the encoder and logits from the three experts as input, to produce the probability (soft class membership) of assigning a specific expert to each sample. Subsequently, based on a multi-information fusion mechanism (Peng et al., 2023; 2022), we integrate the expertise of these three experts according to the estimated probabilities to construct an aggregator. The aggregator ensures the long-tailed expert dominates pseudo-label generation in the training phase and
+
+Table 2. Comparison of accuracy (\%) on testing set using three different depth features and the corresponding performance gap (\%) between head and tail classes. We apply the K-means clustering on different depth features to produce the results, and we can clearly see that the feature is more biased towards the head class but more discriminative as the depth increases.
+
+| Feature depth | Overall | Head | Medium | Tail | Gap |
+| --- | --- | --- | --- | --- | --- |
+| Shallow | 30.30 | 26.00 | 35.50 | 27.67 | 1.67 |
+| Middle | 38.10 | 36.67 | 41.25 | 35.33 | 1.33 |
+| Deep | 71.00 | 84.67 | 77.25 | 49.00 | 35.67 |
+
+prediction in the testing phase when the sample belongs to the head class, and the uniform and inverse long-tailed experts dominate when the sample belongs to the medium and tail classes, respectively. As shown in Fig. 1, the proposed method can produce significantly higher-quality pseudo-labels than CPE in the training phase. And as shown in Table 1, the proposed method can produce significantly higher prediction accuracies than CPE (i.e., employing the uniform expert for prediction) in the testing phase. Note that the proposed Meta-Expert integrates the logits from the three experts in a soft manner, pushing different experts to learn better. Thus, in several cases, it even outperforms CPE with the ground-truth class membership in pseudo-label generation in the training phase (Upper-E in Fig. 1) and prediction in the testing phase (Upper-E in Table 1).
+
+The proposed Meta-Expert can produce better pseudo-labels and predictions. More importantly, our theoretical analysis confirms that integrating different experts' expertise reduces the model's generalization error, thereby enhancing its overall performance. However, the model is still naturally biased towards the head classes due to the scarcity of tail class samples. Fortunately, as shown in Table 2, we observe that shallow features are relatively balanced although less discriminative, while deep features improve the discriminative ability but are less balanced. This phenomenon aligns with the known behavior of deep networks: shallow layers capture local patterns while deep layers learn global semantics. For long-tailed learning, since head and tail classes may share similar local patterns, shallow features exhibit balanced discriminability across classes. Meanwhile, deep layers predominantly encode head class semantics due to their overwhelming sample dominance, thus biasing predictions toward head classes. Motivated by this observation, we further propose a Multi-depth Feature Fusion (MFF) module to mitigate the model bias towards the head class by fusing features across different depths to achieve a representation that is both balanced and discriminative, which also echoes the wisdom of the proverb, "a square peg in a square hole".
+
+# In summary, our contributions are as follows:
+
+1. We demonstrate that the pseudo-label quality and prediction accuracy can be notably improved by incorporating the
+
+expertise of different experts. Motivated by this empirical finding, we propose the Dynamic Expert Assignment (DEA) module to assign experts to different samples based on their specific expertise.
+
+2. We further theoretically show that effectively leveraging different experts' strengths leads to a lower generalization error bound.
+3. We are the first to discover that shallow features are less biased than deep ones, and propose the Multi-depth Feature Fusion (MFF) module to help mitigate the model bias towards the head class.
+4. We reach the new state-of-the-art (SOTA) performances on the popular LTSSL benchmarks under various settings.
+
+# 2. Related Work
+
+Semi-supervised learning (SSL) uses both labeled and unlabeled training data to obtain a better model than using labeled training data only. Recent SSL methods are mostly based on consistency regularization, pseudo-labeling, or both. Consistency regularization methods (Miyato et al., 2019) are based on the manifold or smoothness assumption and apply consistency constraints to the final loss function. Pseudo-labeling methods (Chen et al., 2018) produce pseudo-labels for unlabeled training data according to the model's high-confidence predictions and then use them to assist the model training. As a representative method combining both of these techniques, FixMatch (Sohn et al., 2020) encourages similar predictions between weak and strong augmentation views of an image to improve the model's performance and robustness. Afterward, many variants of FixMatch have been proposed, such as FlexMatch (Zhang et al., 2021), FlatMatch (Huang et al., 2023), SoftMatch (Chen et al., 2023), FreeMatch (Wang et al., 2023), $(\mathrm{FL})^2$ (Lee et al., 2024), and WiseOpen (Yang et al., 2024). Despite the superior performance of the above methods, they cannot effectively handle the case where labeled data exhibit a long-tailed distribution.
+
+Long-tailed semi-supervised learning (LTSSL) has gained increased attention due to its high relevance to real-world applications. It takes both the long-tailed distribution in long-tailed learning (LTL) and the limited labeled training data in SSL into consideration, which makes it more realistic and challenging. Existing LTSSL methods primarily improve the model performance by introducing LTL techniques (Li & Jia, 2025; Jia et al., 2024; Zhang et al., 2023a) to the off-the-shelf SSL methods like FixMatch (Sohn et al., 2020). For instance, ABC (Lee et al., 2021), CReST (Wei et al., 2021), BMB (Peng et al., 2025), and RECD (Park et al., 2024) sample more tail class samples to balance training bias towards the head class. SAW (Lai et al., 2022) introduces the class learning difficulty based weight to the consistency loss
+
+to enhance the model's robustness, INPL (Yu et al., 2023) proposes to select unlabeled data by the in-distribution probability, CDMAD (Lee & Kim, 2024) proposes to refine pseudo-labels by the estimated classifier bias, CoSSL (Fan et al., 2022) introduces feature enhancement strategies to refine classifier learning, and ACR (Wei & Gan, 2023) proposes to incorporate adaptive logit adjustment to handle unlabeled training data across various class distributions. Very recently, CPE (Ma et al., 2024) proposes to train multiple classifiers (experts) to handle unlabeled data across various distributions and further enhances the pseudo-label quality through an ensemble strategy. Although effective, it lacks a comprehensive strategy to utilize the expertise of each expert in pseudo-label generation and unseen sample prediction, leading to sub-optimal performance.
+
+More related literature and discussions are detailed in Appendix B.
+
+# 3. Method
+
+# 3.1. Problem Statement
+
+In long-tailed semi-supervised learning (LTSSL), we have a labeled training dataset $\mathcal{D}_l = \{x_i^l,y_i^l\}_{i = 1}^N$ of size $N$ and an unlabeled training dataset $\mathcal{D}_u = \{x_j^u\}_{j = 1}^M$ of size $M$ , where $\mathcal{D}_l$ and $\mathcal{D}_u$ share the same feature and label space, $x_{j}^{u}$ is the $j^{th}$ unlabeled sample, $x_{i}^{l}$ is the $i^{th}$ labeled sample with a ground-truth label $y_{i}^{l}\in \{1,\dots ,C\}$ , and $C$ is the number of classes. Let $N_{c}$ denote the number of samples in the $c^{th}$ class of the labeled dataset; we assume that the $C$ classes are sorted in descending order, i.e., $N_{1} > N_{2} > \dots > N_{C}$ , so the imbalance ratio can be denoted as $\gamma_l = N_1 / N_C$ . As labels are inaccessible for the unlabeled dataset, we denote the number of samples in its $c^{th}$ class by $M_{c}$ , and define its imbalance ratio $\gamma_{u}$ in the same way as for the labeled dataset, for theoretical illustration only. In this paper, we follow the previous works (Wei & Gan, 2023; Ma et al., 2024) and consider three cases of unlabeled data distribution, i.e., consistent, uniform, and inverse. Specifically, i) for the consistent setting, we have $M_{1} > M_{2} > \dots > M_{C}$ and $\gamma_u = \gamma_l$ ; ii) for the uniform setting, we have $M_{1} = M_{2} = \dots = M_{C}$ and $\gamma_u = 1$ ; iii) for the inverse setting, we have $M_{1} < M_{2} < \dots < M_{C}$ and $\gamma_u = 1 / \gamma_l$ . The goal of LTSSL is to learn a classifier $F:\mathbb{R}^d \to \{1,\dots ,C\}$ , parameterized by $\theta$ , on $\mathcal{D}_l$ and $\mathcal{D}_u$ that generalizes well on all classes.
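+As an illustration of the three settings, the following sketch builds class counts for $C = 10$ under the exponential class-size profile commonly used by CIFAR-10-LT-style benchmarks; the profile and the totals here are assumptions for illustration, not part of the definitions above:
+
+```python
+import numpy as np
+
+def class_counts(n1: int, gamma: float, C: int = 10) -> np.ndarray:
+    """Class sizes N_c = n1 * gamma**(-(c-1)/(C-1)), so N_1 / N_C = gamma."""
+    c = np.arange(C)
+    return np.round(n1 * gamma ** (-c / (C - 1))).astype(int)
+
+N = class_counts(n1=1000, gamma=200.0)     # labeled data: long-tailed, gamma_l = 200
+M_consistent = class_counts(3000, 200.0)   # gamma_u = gamma_l (same shape)
+M_uniform = class_counts(3000, 1.0)        # gamma_u = 1 (all classes equal)
+M_inverse = class_counts(3000, 1 / 200.0)  # gamma_u = 1 / gamma_l (tail-heavy)
+```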
+
+# 3.2. Proposed Framework
+
+Multi-expert based LTSSL. As the class distribution of the unlabeled training data may be inconsistent with that of the labeled ones, we follow the previous work (Ma et al., 2024) to train three experts to handle the unlabeled training data across various class distributions, i.e., long-tailed, uniform, and inverse long-tailed distributions. Specifically,
+
+we attach two auxiliary classifiers on a typical SSL method like FixMatch (Sohn et al., 2020), and train each classifier (expert) with a specific logit adjustment intensity to realize that the first (long-tailed) expert is skilled in long-tailed distribution, and the second (uniform) and third (inverse long-tailed) experts are skilled in uniform and inverse ones, respectively. Similar to FixMatch, the loss $L_{base}$ for the base LTSSL includes a supervised classification loss on the labeled data and an unsupervised consistency regularization loss on the unlabeled data, i.e.,
+
+$$
+L_{\text{base}} = \sum_{k=1}^{Q} \frac{1}{B_l} \sum_{i=1}^{B_l} \ell\left(E_k\left(g\left(x_i^l\right)\right) + \tau_k \log \pi,\, y_i\right) + \sum_{k=1}^{Q} \frac{1}{B_u} \sum_{j=1}^{B_u} \ell\left(E_k\left(g\left(x_j^u\right)\right),\, \hat{y}_{j,k}\right) \mathbb{I}, \tag{1}
+$$
+
+where $Q = 3$ denotes the number of classifiers (experts), $B_{l}$ and $B_{u}$ denote the batch sizes of labeled and unlabeled data, respectively, $\ell$ denotes the cross-entropy loss, $x_{i}^{l}$ and $x_{j}^{u}$ denote the $i^{th}$ labeled and $j^{th}$ unlabeled samples in the current batch, respectively, $g$ denotes the encoder, $\pi$ denotes the label frequency of the labeled data, $E_{k}$ denotes the expert trained by cross-entropy loss with a specific logit adjustment intensity $\tau_{k}$ , $\hat{y}_{j,k}$ denotes the pseudo-label predicted by $E_{k}$ on the $j^{th}$ unlabeled sample in the current batch, and $\mathbb{I}$ denotes a binary sample mask that selects samples with confidence larger than the threshold $t$ . The first term in Eq. 1 is the supervised classification loss on the labeled data, and the second term defines the unsupervised consistency regularization loss on the unlabeled data.
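+A sketch of the per-expert logit adjustment used in the labeled term of Eq. 1: expert $k$'s logits are shifted by $\tau_k \log \pi$ before the cross-entropy loss. The intensity `tau_k` and the toy label frequencies below are illustrative, not values from the paper:
+
+```python
+import numpy as np
+
+def adjusted_cross_entropy(logits, y, pi, tau_k):
+    """Cross-entropy on logits + tau_k * log(pi), averaged over the batch."""
+    z = logits + tau_k * np.log(pi)           # (B, C) adjusted logits
+    z = z - z.max(axis=1, keepdims=True)      # numerical stability
+    log_prob = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
+    return float(-log_prob[np.arange(len(y)), y].mean())
+
+rng = np.random.default_rng(0)
+pi = np.array([0.5, 0.3, 0.2])                # label frequencies of the labeled data
+loss = adjusted_cross_entropy(rng.standard_normal((4, 3)),
+                              np.array([0, 1, 2, 0]), pi, tau_k=2.0)
+```
+
+Training one expert per target distribution then amounts to choosing a different intensity $\tau_k$ for each of the $Q$ classification heads.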
+
+Dynamic expert assignment. As shown in Fig. 1 and Table 1, we observe that each expert has its strength and weakness, i.e., long-tailed expert is skilled in handling head class samples but not medium and tail class samples, while the uniform and inverse long-tailed experts are skilled in handling medium and tail class samples, respectively. Based on this observation, we propose to gather the strengths of different experts. To this end, we first propose a dynamic expert assignment (DEA) module to estimate the class membership (i.e., head, medium, or tail class) of each sample.
+
+As shown in Fig. 2, the DEA module adopts a multilayer perceptron (MLP) architecture and takes the feature from the encoder and logits from the three experts as input and outputs the soft class membership for each sample, i.e., $w = DEA([v, z_1, z_2, z_3])$ , where $w = [w_1, w_2, w_3]$ denotes the probability of assigning each expert to produce pseudo-label and make prediction, $v$ and $z_k|_{k=1}^3$ denote the feature and logit generated by the encoder $g$ and expert $E_k|_{k=1}^3$ on the sample $x$ , respectively. The parameters of the DEA module can be inferred by minimizing the following DEA loss $L_{dea}$ ,
+
+$$
+L_{dea} = \frac{1}{B_l} \sum_{i=1}^{B_l} \ell\left(w_i^l, s_i\right) + \frac{1}{B_u} \sum_{j=1}^{B_u} \ell\left(w_j^u, \hat{s}_j\right) \mathbb{I}, \tag{2}
+$$
+
+where $w_{i}^{l}$ and $w_{j}^{u}$ denote the probabilities of assigning each
+
+
+Figure 2. Overview of our Meta-Expert algorithm. Meta-Expert leverages a DEA module to adaptively assign a suitable expert to each sample to generate pseudo-labels in the training phase and make predictions in the testing phase, which is crucial for integrating the expertise of different experts and constructing the aggregator. $E_{1}$ , $E_{2}$ and $E_{3}$ denote the long-tailed, uniform, and inverse long-tailed experts, respectively, $\tau_{k}$ denotes the logit adjustment intensity for expert $E_{k}$ ( $k \in \{1,2,3\}$ ), and the “+” sign denotes adding different features.
+
+expert to the $i^{th}$ labeled and $j^{th}$ unlabeled samples in the current batch, respectively, $s_i$ denotes the ground-truth class membership of the $i^{th}$ labeled sample in the current batch, and $\hat{s}_j$ denotes the pseudo class membership of the $j^{th}$ unlabeled sample in the current batch.
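+A minimal sketch of the DEA module's input/output contract: it consumes the fused feature $v$ together with the three experts' logits and emits soft class-membership weights $w$ over the $Q$ experts. The hidden width and random weights are placeholders; the real module is trained end-to-end with $L_{dea}$:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+D_V, C, Q = 128, 10, 3   # feature dim, #classes, #experts (assumed sizes)
+
+# One-hidden-layer MLP standing in for the DEA module.
+W_in = rng.standard_normal((D_V + Q * C, 64)) * 0.05
+W_out = rng.standard_normal((64, Q)) * 0.05
+
+def dea(v: np.ndarray, z1: np.ndarray, z2: np.ndarray, z3: np.ndarray) -> np.ndarray:
+    x = np.concatenate([v, z1, z2, z3])   # input [v, z_1, z_2, z_3]
+    h = np.maximum(x @ W_in, 0.0)         # MLP hidden layer + ReLU
+    logits = h @ W_out
+    e = np.exp(logits - logits.max())
+    return e / e.sum()                    # w = [w_1, w_2, w_3], sums to 1
+
+w = dea(rng.standard_normal(D_V), *[rng.standard_normal(C) for _ in range(3)])
+```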
+
+Aggregator. Subsequently, we construct an aggregator, which integrates the expertise of the three experts through a weighted summation of their respective logits based on the class membership $w$ estimated by the DEA module, i.e.,
+
+$$
+y_m = \sigma\left(\sum_{k=1}^{Q} w_k z_k\right), \tag{3}
+$$
+
+where $y_{m}$ denotes the soft prediction produced by the aggregator and $\sigma(\cdot)$ denotes the softmax function. As the membership estimated by the DEA module reflects the class membership of each sample, the aggregator in Eq. 3 ensures that the long-tailed expert dominates the pseudo-label generation in the training phase and the prediction in the testing phase when the sample belongs to the head class, and the uniform and inverse long-tailed experts dominate when the sample belongs to the medium and tail classes, respectively.
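+The aggregation in Eq. 3 is just a membership-weighted sum of expert logits followed by a softmax; a small numeric sketch with hypothetical weights and logits:
+
+```python
+import numpy as np
+
+def aggregate(w: np.ndarray, z: np.ndarray) -> np.ndarray:
+    """Eq. 3: softmax over the membership-weighted sum of expert logits.
+    w: (Q,) weights from the DEA module; z: (Q, C) logits from the Q experts."""
+    s = (w[:, None] * z).sum(axis=0)   # weighted sum of logits
+    e = np.exp(s - s.max())
+    return e / e.sum()                 # soft prediction y_m
+
+# A head-class sample: the DEA weights let the long-tailed expert E_1 dominate.
+w = np.array([0.8, 0.15, 0.05])
+z = np.array([[4.0, 0.0, 0.0],         # E_1 strongly predicts class 0
+              [1.0, 1.0, 1.0],         # E_2 is uncertain
+              [0.0, 0.0, 2.0]])        # E_3 leans toward a tail class
+y_m = aggregate(w, z)                  # class 0 receives the highest probability
+```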
+
+Then, the META loss $L_{meta}$ for optimizing the overall network parameters based on the output of the aggregator is formulated as:
+
+$$
+L_{meta} = \frac{1}{B_l} \sum_{i=1}^{B_l} \ell\left(y_{m,i}^l, y_i\right) + \frac{1}{B_u} \sum_{j=1}^{B_u} \ell\left(y_{m,j}^u, \hat{y}_j\right) \mathbb{I}, \tag{4}
+$$
+
+where $y_{m,i}^{l}$ and $y_{m,j}^{u}$ denote the soft prediction produced by the aggregator on the $i^{th}$ labeled and $j^{th}$ unlabeled samples
+
+in the current batch, respectively, and $\hat{y}_j$ denotes the pseudo-label produced by the aggregator on the $j^{th}$ unlabeled sample in the current batch. As shown in Fig. 1 and Table 1, our proposed method integrates the expertise of all experts and generates high-quality pseudo-labels and predictions in the training and testing phases, respectively.
+
+Multi-depth feature fusion. Although the proposed method is effective in integrating the expertise of each expert, the model is still naturally biased towards the head class due to the scarcity of tail class samples. Fortunately, as shown in Table 2, we observe that different depth features have different bias intensities, i.e., shallow features are relatively balanced although less discriminative, and deep features are more discriminative and more biased. Such a phenomenon motivates us to use multiple depth features to learn a representation with a good trade-off between bias and discriminative ability.
+
+Specifically, we propose a multi-depth feature fusion (MFF) module to fuse different depth features. As shown in Fig. 2, the MFF module involves several MLP layers, takes different depth features from the encoder as input, and outputs the fusion feature $v$ , i.e., $MFF(v_{1},v_{2},v_{3})\longmapsto v$ , where $v_{1},v_{2}$ and $v_{3}$ denote the shallow, medium, and deep features, respectively. Specifically, for the shallow features $v_{1}$ and medium ones $v_{2}$ , we first align their dimensions using an MLP and add them together. The resulting feature is then combined with the next depth's feature in the same way. Besides the addition operation, we can also concatenate different depth features in the MFF
+
+module, which is investigated in Sec. 4.4. The parameters of the MFF module are updated by the end-to-end training.
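+The addition variant of the MFF module can be sketched as follows; the feature dimensions are illustrative, and single linear projections stand in for the MLP alignment layers described above:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+D1, D2, D3 = 32, 64, 128   # shallow / medium / deep feature dims (assumed)
+
+# Projections that align a shallower feature to the next depth's dimension.
+P12 = rng.standard_normal((D1, D2)) * 0.1   # aligns v1 to the dim of v2
+P23 = rng.standard_normal((D2, D3)) * 0.1   # aligns the v1+v2 result to the dim of v3
+
+def mff(v1: np.ndarray, v2: np.ndarray, v3: np.ndarray) -> np.ndarray:
+    """Progressively align-and-add features from shallow to deep."""
+    v12 = v1 @ P12 + v2      # align shallow to medium, then add
+    return v12 @ P23 + v3    # align the result to deep, then add -> fused v
+
+v = mff(rng.standard_normal(D1), rng.standard_normal(D2), rng.standard_normal(D3))
+```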
+
+# 3.3. Model Training and Prediction
+
+In the training phase, we warm up the model for eighteen epochs with the base loss $L_{base}$ in Eq. 1. Then, we take the fusion feature $v$ from the MFF module and the logit $z_{k}$ from the expert $E_{k}$ ( $k \in \{1,2,3\}$ ) to calculate the DEA loss $L_{dea}$ in Eq. 2 and the META loss $L_{meta}$ in Eq. 4, which together with the base loss $L_{base}$ jointly optimize the model. The overall loss $L_{overall}$ is formulated as:
+
+$$
+L_{\text{overall}} = L_{\text{base}} + L_{\text{dea}} + L_{\text{meta}}. \tag{5}
+$$
+
+The whole framework of our method is shown in Fig. 2. The pseudo-code is summarized in Alg. 1. After training, we obtain the predicted label of an unseen sample $x^{*}$ by selecting the label index with the highest confidence in $y_{m}$ in Eq. 3. Our code is made available.
+
+# 3.4. Theoretical Analysis
+
+We provide the generalization error bound for our Meta-Expert to analyze the factors that affect the model's generalization ability. Before providing the main results, we first define the true risk with respect to the classification model $f(x; \theta)$ as follows:
+
+$$
+R(f) = \mathbb{E}_{(x, y)} [\ell(f(x), y)]. \tag{6}
+$$
+
+Definition 1. We aim to learn a good classification model by minimizing the empirical risk $\widehat{R}(f) = \widehat{R}_l(f) + \widehat{R}_u(f)$ , where $\widehat{R}_l(f) = \frac{1}{N} \sum_{i=1}^{N} \ell(f(x_i), y_i)$ and $\widehat{R}_u(f) = \frac{1}{M} \sum_{j=1}^{M} \ell(f(x_j), y_j)$ denote the empirical risk on labeled and unlabeled training data, respectively. In SSL, we cannot minimize the empirical risk $\widehat{R}_u(f)$ directly, since the ground-truth labels of unlabeled training data are inaccessible. Therefore, we train the model with $\widehat{R}_u'(f) = \frac{1}{\hat{M}} \sum_{j=1}^{\hat{M}} \ell(f(x_j), \hat{y}_j)$ , where $\hat{M}$ denotes the number of selected high-confidence unlabeled samples and $\hat{y}_j$ denotes the pseudo-label of unlabeled sample $x_j$ .
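As a toy illustration of this decomposition, the snippet below evaluates the labeled risk $\widehat{R}_l(f)$ and the pseudo-labeled risk $\widehat{R}_u'(f)$ with a 0-1 loss; `predict` and `zero_one_loss` are illustrative stand-ins for $f(x;\theta)$ and $\ell$, and the pseudo-labeled list plays the role of the $\hat{M}$ selected high-confidence samples.

```python
def zero_one_loss(pred, label):
    """0-1 loss: 1 for a misclassification, 0 otherwise."""
    return 0.0 if pred == label else 1.0

def ssl_empirical_risk(labeled, pseudo_labeled, predict):
    """SSL empirical risk: labeled risk plus risk on pseudo-labeled samples.

    labeled:        list of (x, y) with ground-truth labels (N samples)
    pseudo_labeled: list of (x, y_hat) with pseudo-labels (M_hat samples)
    predict:        the classifier, a stand-in for f(x; theta)
    """
    r_l = sum(zero_one_loss(predict(x), y) for x, y in labeled) / len(labeled)
    r_u = sum(zero_one_loss(predict(x), y_hat)
              for x, y_hat in pseudo_labeled) / len(pseudo_labeled)
    return r_l + r_u
```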
+
+Let $\ell_{SSL} = \frac{1}{N}\sum_{i = 1}^{N}\ell (f(x_i),y_i) + \frac{1}{\hat{M}}\sum_{j = 1}^{\hat{M}}\ell (f(x_j),\hat{y}_j)$ be the loss for SSL, where $\hat{M}$ denotes the number of selected high-confidence unlabeled samples. Let the function space $\mathcal{H}_y$ for the label $y\in \{1,\dots ,C\}$ be $\{h:x\longmapsto f_y(x)|f\in$ $\mathcal{F}\}$ , where $f_{y}(x)$ denotes the predicted probability of the $y^{th}$ class. Let $\mathcal{R}_O(\mathcal{H}_y)$ be the expected Rademacher complexity (Mohri et al., 2018) of $\mathcal{H}_y$ with $O = N + M$ training samples (including $N$ labeled and $M$ unlabeled training samples). Let $\hat{f} = \mathrm{argmin}_{f\in \mathcal{F}}\widehat{R} (f)$ be the empirical risk minimizer, and $f^{*} = \mathrm{argmin}_{f\in \mathcal{F}}R(f)$ be the true risk minimizer. Then we have the following theorem.
+
+# Algorithm 1 Training Process of the Proposed Method
+
+1: Input: Labeled training dataset $\mathcal{D}_l$ , unlabeled training dataset $\mathcal{D}_u$ , hyper-parameter $t$ .
+2: Output: Encoder $g$ , long-tailed expert $E_{1}$ , uniform expert $E_{2}$ , inverse long-tailed expert $E_{3}$ , and parameters of DEA module $W_{dea}$ and MFF module $W_{mff}$ .
+3: Initialize the parameters of $g$ , $E_1$ , $E_2$ , $E_3$ , $W_{dea}$ and $W_{mff}$ randomly.
+4: for epoch $= 1,2,\ldots$ do
+5: for batch $= 1,2,\dots$ do
+6: Get expert $E_{k}$ prediction $z_{k}$ on a batch of data $B$ ;
+7: Calculate the base loss by Eq. 1;
+8: if warm up ended then
+9: Calculate the loss for DEA module by Eq. 2;
+10: Integrate the expertise of different experts by Eq. 3;
+11: Calculate the loss for optimizing the overall network parameters based on the integrated expertise by Eq. 4;
+12: Obtain the overall loss by Eq. 5;
+13: end if
+14: Update network parameters via gradient descent;
+15: end for
+16: end for
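The control flow of Alg. 1 can be rendered as a Python skeleton; the three losses (Eqs. 1, 2, and 4) are abstracted into stub callables and `update` stands in for the gradient-descent step, so all function names here are illustrative only.

```python
def train(num_epochs, batches_per_epoch, warmup_epochs,
          base_loss, dea_loss, meta_loss, update):
    """Skeletal rendering of Alg. 1: warm up on the base loss, then add
    the DEA and META losses and optimize the overall objective (Eq. 5)."""
    for epoch in range(1, num_epochs + 1):
        for batch in range(batches_per_epoch):
            loss = base_loss(epoch, batch)        # base loss, Eq. 1
            if epoch > warmup_epochs:             # after warm-up ends
                loss += dea_loss(epoch, batch)    # DEA loss, Eq. 2
                loss += meta_loss(epoch, batch)   # META loss, Eq. 4
            update(loss)                          # gradient-descent step
```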
+
+Theorem 1 (Generalization Error Bound). Suppose that the loss function $\ell(f(x), y)$ is $\rho$ -Lipschitz with respect to $f(x)$ for all $y \in \{1, \ldots, C\}$ and upper-bounded by $U$ . Given the class membership $\eta \in \{1, \ldots, Q\}$ and overall pseudo-labeling error $\epsilon > 0$ , if $\frac{1}{M} \sum_{j=1}^{M} \sum_{k=1}^{Q} \mathbb{I}(\eta_{j,k} = 1)\, \mathbb{I}(f_k(x_j) > t)\, \mathbb{I}(\hat{y}_j \neq y_j) \leq \epsilon$ , then for any $\delta > 0$ , with probability at least $1 - \delta$ , we have:
+
+$$
+R(\hat{f}) - R(f^{*}) \leq 2U\epsilon + 4\sqrt{2}\rho \sum_{y=1}^{C} \mathcal{R}_{O}\left(\mathcal{H}_{y}\right) + 2U\sqrt{\frac{\log\frac{2}{\delta}}{2O}}. \tag{7}
+$$
+
+The proof of Theorem 1 is provided in Appendix D. It can be observed that the generalization performance of $\hat{f}$ mainly depends on two factors, i.e., the overall pseudo-labeling error $\epsilon$ and the number of training samples $O$ . As $O \to \infty$ and $\epsilon \to 0$ , Theorem 1 shows that the empirical risk minimizer $\hat{f}$ approaches the true risk minimizer $f^{*}$ . In CPE, the overall pseudo-labeling error is defined as $\epsilon_{CPE} = \frac{1}{Q^2} \sum_{i=1}^{Q} \sum_{j=1}^{Q} \epsilon_{i,j}$ , where $\epsilon_{i,j}$ denotes the pseudo-labeling error of the $i^{th}$ expert on the unlabeled samples with class membership $\eta_j = 1$ . The counterpart for our Meta-Expert is defined as $\epsilon_{Ours} = \frac{1}{Q} \sum_{i=1}^{Q} \sum_{j=1}^{Q} \mathbb{I}_{i=j} \epsilon_{i,j}$ , where $\mathbb{I}_{i=j}$ denotes a binary expert mask that assigns each expert to select the high-confidence unlabeled samples located in its skilled interval. As illustrated in Table 3, compared with CPE in the consistent case, with the class membership estimated by the DEA module, our
+
+Table 3. Pseudo-labeling error rate (i.e., $\epsilon$ ) (\%) and utilization ratio (i.e., $\hat{M}/M$ ) (\%) under three different unlabeled data distributions with varying experts. In CPE (Ma et al., 2024), $E_2$ denotes uniform expert, while $E_1$ and $E_3$ denote long-tailed and inverse long-tailed experts, respectively. Our proposed method uses the DEA module to select a specific expert. The dataset is CIFAR-10-LT with imbalance ratio $\gamma_l = 200$ .
+
+| Distribution | Expert | Head | Medium | Tail | $\epsilon$ | $\hat{M}/M$ |
+| --- | --- | --- | --- | --- | --- | --- |
+| Consistent | E1 | 9.01 | 20.90 | 43.04 | 23.98 | 94.90 |
+| | E2 | 26.82 | 20.85 | 22.35 | 23.09 | 64.30 |
+| | E3 | 92.15 | 21.47 | 23.04 | 43.14 | 95.37 |
+| | CPE | 42.66 | 21.07 | 29.48 | 30.07 | 84.86 |
+| | Ours | 21.79 | 18.67 | 23.87 | 21.17 | 95.31 |
+| Uniform | E1 | 7.56 | 24.50 | 25.22 | 19.63 | 95.73 |
+| | E2 | 13.67 | 24.58 | 11.00 | 17.23 | 96.57 |
+| | E3 | 97.44 | 25.33 | 9.33 | 42.17 | 98.80 |
+| | CPE | 39.56 | 24.81 | 15.19 | 26.34 | 97.03 |
+| | Ours | 12.89 | 24.92 | 15.11 | 18.37 | 98.44 |
+| Inverse | E1 | 6.10 | 22.18 | 22.97 | 17.59 | 84.23 |
+| | E2 | 15.79 | 21.13 | 9.03 | 15.90 | 93.39 |
+| | E3 | 40.39 | 22.48 | 5.67 | 22.81 | 96.55 |
+| | CPE | 20.76 | 21.93 | 12.56 | 18.77 | 91.39 |
+| | Ours | 11.08 | 19.91 | 8.59 | 13.87 | 93.75 |
+
+Meta-Expert achieves a smaller overall pseudo-labeling error (reduced from 30.07% to 21.17%) and a higher unlabeled data utilization ratio (improved from 84.86% to 95.31%); similar conclusions can be observed in the uniform and inverse cases. Both effects are beneficial for obtaining a smaller generalization error bound.
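The two aggregation rules can be stated compactly in code; `eps[i][j]` plays the role of $\epsilon_{i,j}$, and the sample matrix below is illustrative rather than taken from Table 3.

```python
def eps_cpe(eps):
    """CPE averages every expert over every interval: (1/Q^2) * sum_{i,j} eps_{i,j}."""
    q = len(eps)
    return sum(sum(row) for row in eps) / (q * q)

def eps_ours(eps):
    """Meta-Expert keeps each expert on its skilled interval: (1/Q) * sum_i eps_{i,i}."""
    q = len(eps)
    return sum(eps[i][i] for i in range(q)) / q
```

Whenever each expert's diagonal (skilled-interval) error is below its row average, $\epsilon_{Ours} < \epsilon_{CPE}$, which is exactly the regime Table 3 reports.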
+
+# 4. Experiments
+
+# 4.1. Experimental Setting
+
+Dataset. We perform our experiments on three widely-used datasets for the LTSSL task, including CIFAR-10-LT (Krizhevsky, 2009), SVHN-LT (Netzer et al., 2011), and STL-10-LT (Coates et al., 2011). We follow the dataset settings in ACR (Wei & Gan, 2023) and CPE (Ma et al., 2024), details are as below.
+
+- CIFAR-10-LT: In the consistent case, we test with $(N_{1},M_{1}) = (1500,3000)$ and $(N_{1},M_{1}) = (500,4000)$ , with $\gamma \in \{150,200\}$ . In the uniform case, we test with $(N_{1},M_{1}) = (1500,3000)$ , with $\gamma_{l} \in \{150,200\}$ and $\gamma_{u} = 1$ . In the inverse case, we test with $(N_{1},M_{c}) = (1500,3000)$ , with $\gamma_{l} \in \{150,200\}$ and $\gamma_{u} = 1/\gamma_{l}$ .
+- SVHN-LT: We test our method under $(N_{1},M_{1}) = (1500,3000)$ setting. The imbalance ratio $\gamma_{l}$ is set to 150 or 200. With a fixed $\gamma_{l} = 150$ , we also test our method under $\gamma_{u}\in \{1,1 / 150\}$ for the uniform and inverse cases.
+- STL-10-LT: Since the ground-truth labels of unlabeled data in STL-10-LT are unknown, we conduct experiments by controlling the imbalance ratio of the labeled data only. We set $N_{1}$ to 150 or 450, with $\gamma_{l} \in \{15, 20\}$ , and directly use the original unlabeled data.
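For reference, long-tailed splits of this kind are commonly generated with an exponential class-size profile, $n_c = N_1 \cdot \gamma^{-(c-1)/(C-1)}$, so the first class has $N_1$ samples and the last has $N_1/\gamma$. The sketch below assumes this profile; the exact rounding used in the ACR/CPE setup may differ.

```python
def long_tailed_counts(n1, gamma, num_classes):
    """Per-class sample counts under an exponential long-tailed profile.

    Class 1 gets n1 samples; class C gets n1 / gamma; intermediate classes
    decay geometrically. This is a common convention, assumed here.
    """
    return [round(n1 * gamma ** (-(c - 1) / (num_classes - 1)))
            for c in range(1, num_classes + 1)]
```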
+
+Baseline. We compare our method with seven LTSSL algorithms published at top conferences in the past few years, including SAW (Lai et al., 2022), Adsh (Guo & Li, 2022), DePL (Wang et al., 2022), ACR (Wei & Gan, 2023), BaCon (Feng et al., 2024), CPE (Ma et al., 2024), and SimPro (Du et al., 2024), all of which are based on the typical SSL method FixMatch. For a fair comparison, we test these baselines and our Meta-Expert on the widely-used USB codebase. We use the same dataset splits, with no overlap between labeled and unlabeled training data, for all datasets.
+
+# 4.2. Implementation Details
+
+We follow the default settings and hyper-parameters in USB, i.e., the batch size of labeled data $B_{l}$ is set to 64, the batch size of unlabeled data $B_{u}$ is set to 128, and the confidence threshold $t$ is set to 0.95. Moreover, we use the WRN-28-2 (Zagoruyko & Komodakis, 2016) architecture and the SGD optimizer with learning rate 3e-2, momentum 0.9, and weight decay 5e-4. We repeat each experiment with three different random seeds and report the mean performance and standard deviation. We conduct the experiments on a single NVIDIA A100 GPU using PyTorch v2.3.1.
+
+# 4.3. Main Result
+
+In the case of consistent distribution $(\gamma_l = \gamma_u)$ . We begin our investigation with experiments in the scenario where $\gamma_l = \gamma_u$ . The primary results for CIFAR-10-LT are presented in Table 4. Across all training dataset sizes and imbalance ratios, Meta-Expert achieves higher classification accuracy than all previous baselines on CIFAR-10-LT. For example, given $(N_1,M_1,\gamma) = (500,4000,200)$ , Meta-Expert surpasses all previous baselines by up to 2.47 pp. Moving to the SVHN-LT dataset in Table 5, Meta-Expert performs comparably to the previous SOTA method BaCon, surpassing other baselines by up to 0.66 pp given $(N_1,M_1,\gamma) = (1500,3000,200)$ .
+
+In the case of mismatched distribution $(\gamma_l \neq \gamma_u)$ . In practical applications, the distribution of unlabeled data may differ significantly from that of labeled data. We therefore investigate uniform and inverse class distributions, e.g., setting $\gamma_u$ to 1 or 1/200 for CIFAR-10-LT. For the STL-10-LT dataset, as the ground-truth labels of the unlabeled data are unknown, we only control the imbalance ratio of the labeled data. The results are presented in Tables 4 and 5, where we find that Meta-Expert can
+
+Table 4. Comparison of accuracy (%) on CIFAR-10-LT under $\gamma_l = \gamma_u$ and $\gamma_l \neq \gamma_u$ settings. We set $\gamma_l$ to 150 and 200 for CIFAR-10-LT. We use bold to mark the best results.
+
+| Algorithm | $N_1{=}500$, $M_1{=}4000$, $\gamma_l{=}\gamma_u{=}200$ | $N_1{=}500$, $M_1{=}4000$, $\gamma_l{=}\gamma_u{=}150$ | $N_1{=}1500$, $M_1{=}3000$, $\gamma_l{=}\gamma_u{=}200$ | $N_1{=}1500$, $M_1{=}3000$, $\gamma_l{=}\gamma_u{=}150$ | $N_1{=}1500$, $M_c{=}3000$, $\gamma_l{=}200$, $\gamma_u{=}1$ | $N_1{=}1500$, $M_c{=}3000$, $\gamma_l{=}200$, $\gamma_u{=}1/200$ | $N_1{=}1500$, $M_c{=}3000$, $\gamma_l{=}150$, $\gamma_u{=}1$ | $N_1{=}1500$, $M_c{=}3000$, $\gamma_l{=}150$, $\gamma_u{=}1/150$ |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Supervised | 41.15±1.15 | 43.88±1.61 | 56.83±1.10 | 59.82±0.32 | 56.83±1.10 | 56.83±1.10 | 59.82±0.32 | 59.82±0.32 |
+| FixMatch (NIPS 20) | 61.74±0.81 | 65.68±0.67 | 69.39±0.56 | 72.15±0.94 | 65.59±0.18 | 63.98±0.36 | 69.07±0.74 | 65.24±0.63 |
+| w/ SAW (ICML 22) | 61.22±4.11 | 68.51±0.16 | 74.15±1.52 | 77.67±0.14 | 78.60±0.23 | 70.55±0.48 | 80.02±0.50 | 73.67±0.50 |
+| w/ Adsh (ICML 22) | 62.04±1.31 | 66.55±2.94 | 67.13±0.39 | 73.96±0.47 | 71.06±0.77 | 65.68±0.44 | 73.65±0.36 | 66.51±0.69 |
+| w/ DePL (CVPR 22) | 69.21±0.62 | 71.95±2.54 | 73.23±0.62 | 76.58±0.12 | 73.26±0.46 | 69.35±0.26 | 75.62±0.86 | 71.23±0.54 |
+| w/ ACR (CVPR 23) | 71.92±0.94 | 76.72±1.13 | 79.96±0.85 | 81.81±0.49 | 81.18±0.73 | 81.23±0.59 | 83.46±0.42 | 84.63±0.66 |
+| w/ BaCon (AAAI 24) | 66.41±0.31 | 71.33±1.75 | 78.64±0.40 | 81.63±0.44 | 77.89±0.97 | 81.87±0.16 | 82.05±1.09 | 83.30±1.12 |
+| w/ CPE (AAAI 24) | 67.45±1.27 | 76.77±0.53 | 78.12±0.34 | 82.25±0.34 | 83.46±0.03 | 84.07±0.40 | 84.50±0.40 | 85.52±0.02 |
+| w/ SimPro (ICML 24) | 59.94±0.73 | 65.54±3.17 | 75.37±0.74 | 77.18±0.38 | 73.05±0.35 | 75.33±2.85 | 76.12±1.11 | 79.42±2.78 |
+| w/ Meta-Expert (Ours) | **74.39±0.46** | **77.19±0.58** | **80.63±0.83** | **82.52±0.40** | **83.90±0.11** | **85.71±0.03** | **84.91±0.14** | **86.78±0.31** |
+
+Table 5. Comparison of accuracy (%) on STL-10-LT and SVHN-LT under $\gamma_l = \gamma_u$ and $\gamma_l \neq \gamma_u$ settings. We set $\gamma_l$ to 15 and 20 for STL-10-LT, and $\gamma_l$ to 150 and 200 for SVHN-LT. We use **bold** to mark the best results. $N/A$ denotes the unknown $\gamma_u$ in STL-10-LT, since the ground-truth labels of its unlabeled data are inaccessible.
+
+| Algorithm | STL-10-LT: $N_1{=}150$, $M_1{=}100$k, $\gamma_l{=}20$, $\gamma_u{=}$ N/A | $N_1{=}150$, $M_1{=}100$k, $\gamma_l{=}15$, $\gamma_u{=}$ N/A | $N_1{=}450$, $M_1{=}100$k, $\gamma_l{=}20$, $\gamma_u{=}$ N/A | $N_1{=}450$, $M_1{=}100$k, $\gamma_l{=}15$, $\gamma_u{=}$ N/A | SVHN-LT: $N_1{=}1500$, $M_1{=}3000$, $\gamma_l{=}\gamma_u{=}200$ | $N_1{=}1500$, $M_1{=}3000$, $\gamma_l{=}\gamma_u{=}150$ | $N_1{=}1500$, $M_c{=}3000$, $\gamma_l{=}150$, $\gamma_u{=}1$ | $N_1{=}1500$, $M_c{=}3000$, $\gamma_l{=}150$, $\gamma_u{=}1/150$ |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Supervised | 40.44±1.42 | 42.31±0.42 | 56.56±1.07 | 59.81±0.45 | 84.10±0.05 | 86.14±0.50 | 86.14±0.50 | 86.14±0.50 |
+| FixMatch (NIPS 20) | 56.12±1.38 | 60.63±0.92 | 68.33±0.80 | 71.55±0.74 | 91.36±0.15 | 91.99±0.18 | 93.94±0.79 | 90.25±2.45 |
+| w/ SAW (ICML 22) | 66.62±0.34 | 67.00±0.79 | 74.59±0.13 | 75.58±0.28 | 92.17±0.10 | 92.27±0.01 | 94.41±0.38 | 91.42±0.41 |
+| w/ Adsh (ICML 22) | 66.56±0.61 | 66.72±0.32 | 72.95±0.45 | 74.28±0.24 | 90.87±0.32 | 91.68±0.25 | 94.04±0.68 | 88.71±0.52 |
+| w/ DePL (CVPR 22) | 66.10±0.63 | 67.02±0.89 | 73.43±0.11 | 74.55±0.14 | 92.16±0.16 | 92.85±0.04 | 94.12±0.63 | 87.86±0.50 |
+| w/ ACR (CVPR 23) | 69.24±0.95 | 68.74±0.95 | 78.13±0.29 | 78.55±0.50 | 92.90±0.40 | 93.52±0.32 | 91.11±0.17 | 92.03±0.34 |
+| w/ BaCon (AAAI 24) | 66.68±0.38 | 68.26±1.16 | 77.29±0.23 | 77.73±0.40 | 93.30±0.15 | 93.94±0.21 | 94.54±0.42 | 93.69±0.41 |
+| w/ CPE (AAAI 24) | 68.01±0.65 | 67.07±1.72 | 78.02±0.14 | 78.71±0.24 | 85.79±0.54 | 86.31±0.05 | 94.14±0.24 | 93.06±0.34 |
+| w/ SimPro (ICML 24) | 43.65±0.55 | 44.45±0.98 | 57.23±1.43 | 60.33±0.59 | 92.51±0.71 | 93.94±0.10 | 94.59±0.28 | **94.76±0.41** |
+| w/ Meta-Expert (Ours) | **71.19±0.07** | **69.23±0.82** | **80.18±1.21** | **79.98±0.33** | **93.56±0.09** | **93.99±0.07** | **94.66±0.23** | 94.24±0.19 |
+
+consistently and significantly outperform baseline algorithms on CIFAR-10-LT, validating its effectiveness in coping with varying class distributions of unlabeled data. Concretely, Meta-Expert surpasses the previous SOTA method by 1.64 pp and other baselines by up to 4.48 pp with $(N_{1},M_{1},\gamma_{l},\gamma_{u}) = (1500,3000,200,1/200)$ . On the STL-10-LT dataset, Meta-Expert surpasses the previous SOTA method by 1.27 pp and other baselines by up to 1.43 pp with $N_{1} = 450$ and $\gamma_l = 15$ . On the SVHN-LT dataset, Meta-Expert achieves performance comparable to SimPro, surpassing other baselines by up to 0.55 pp.
+
+In summary, our method outperforms almost all previous baselines regardless of training dataset size, imbalance ratio, and unlabeled training data distribution. We also evaluate all methods with FreeMatch (Wang et al., 2023) as the base SSL method in Appendix C, where similar conclusions hold.
+
+# 4.4. Ablation Study
+
+The effect of each module. In Table 6, we evaluate the contribution of each key component in Meta-Expert. Specifically, we set $N_{1}$ to 1500 and $M_{1}$ to 3000, and perform experiments on CIFAR-10-LT and SVHN-LT. According to Table 6, both DEA and MFF bring significant improvements. For example, on CIFAR-10-LT with $\gamma_{l} = \gamma_{u} = 200$ , DEA and MFF bring accuracy gains of $0.65~\mathrm{pp}$ and $1.26~\mathrm{pp}$ , respectively, and the accuracy improves by up to $3.10~\mathrm{pp}$ when they are used together. Across all imbalance ratios, the DEA module provides an average $1.68~\mathrm{pp}$ accuracy improvement, showing relatively greater performance gains, while the MFF module delivers a $0.76~\mathrm{pp}$ average gain, which, though comparatively smaller, remains significant. The combined DEA+MFF configuration achieves a $2.42~\mathrm{pp}$ improvement, confirming their complementary effectiveness. These improvements validate the effectiveness of our proposed modules in enhancing model robustness across diverse imbalance ratios.
+
+Note that MFF improves the accuracy of all cases except the inverse case $(\gamma_u = 1 / 200)$ . This phenomenon can be attributed to two reasons. First, the overall imbalance ratio will be reduced as the inverse long-tailed unlabeled data $(\gamma_u = 1 / 200)$ complements the long-tailed labeled data
+
+Table 6. Comparison of accuracy $(\%)$ with and without the DEA and MFF modules. The datasets are CIFAR-10-LT with $(N_{1},M_{1},\gamma_{l}) = (1500,3000,200)$ and SVHN-LT with $(N_{1},M_{1},\gamma_{l}) = (1500,3000,150)$ .
+
+| w/ DEA | w/ MFF | CIFAR-10-LT ($\gamma_u{=}200$) | CIFAR-10-LT ($\gamma_u{=}1$) | CIFAR-10-LT ($\gamma_u{=}1/200$) | SVHN-LT ($\gamma_u{=}150$) | SVHN-LT ($\gamma_u{=}1$) | SVHN-LT ($\gamma_u{=}1/150$) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| | | 78.57 | 83.47 | 84.40 | 86.26 | 93.82 | 92.62 |
+| ✓ | | 79.22 | 83.61 | 84.58 | 93.90 | 94.30 | 93.60 |
+| | ✓ | 79.83 | 83.89 | 83.10 | 90.49 | 92.89 | 93.47 |
+| ✓ | ✓ | 81.67 | 83.96 | 85.75 | 93.89 | 94.34 | 94.05 |
+
+
+Figure 3. Confidence level for assigning a specific expert to the head, medium, and tail class samples. The dataset is CIFAR-10-LT with $(N_{1},M_{1},\gamma_{l}) = (1500,3000,200)$ . Following the setting in CPE, we regard the first two classes as head classes, the last six as tail classes, and the remaining two as medium classes. We can clearly see that $w_{1}$ is the largest when a sample belongs to the head classes, while $w_{2}$ and $w_{3}$ are the largest when a sample belongs to the medium and tail classes, respectively.
+
+$(\gamma_l = 200)$ . Second, without the DEA model, our method uses all experts to generate pseudo-labels in the training phase, which introduces more error pseudo-labels and limits the MFF module's effectiveness. When using the two modules (DEA+MFF) together, the DEA module reduces the quantity of error pseudo-labels, thus, the effectiveness of the MFF module is released completely.
+
+Feature combination strategy. We evaluate the performance of different feature combination strategies in the MFF module. As shown in Table 7, the addition operation outperforms concatenation, achieving 0.58 pp to 1.35 pp higher accuracy. Based on these empirical results, we adopt the addition operation for fusing multi-depth features.
+
+# How does the DEA module improve the performance?
+
+As previously mentioned, DEA can correctly estimate the class membership of given samples, which is essential to integrate the expertise of each expert and realize "a square peg in a square hole". To validate this assumption, we plot the soft class membership estimated by DEA in Fig. 3. As can be seen, DEA precisely assigns the head expert to the head class samples, and medium and tail experts to the medium and tail class samples, respectively. Based on this
+
+Table 7. Comparison of accuracy $(\%)$ when utilizing different feature combination strategies in the MFF module. add and con denote the addition and concatenation feature fusion strategies, respectively. The dataset is CIFAR-10-LT with $(N_{1},M_{1},\gamma_{l}) = (1500,3000,200)$ .
+
+| Strategy | $\gamma_u{=}200$ | $\gamma_u{=}1$ | $\gamma_u{=}1/200$ |
+| --- | --- | --- | --- |
+| con | 80.32 | 83.38 | 84.65 |
+| add | 81.67 | 83.96 | 85.75 |
+
+accurate estimation, our method can produce higher-quality pseudo-labels in the training phase, as shown in Fig. 1. Meanwhile, according to Table 1, the accuracies of our method in the testing phase are also the highest in all cases. These observations verify that our method effectively utilizes the expertise of all three experts.
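The membership-weighted integration can be sketched as a mixture of the experts' softmax probabilities, where the $w_k$ are the DEA module's soft class-membership weights; this is an illustrative rendering of the idea behind Eq. 3, not the paper's exact formulation.

```python
import math

def softmax(z):
    """Numerically stable softmax over a logit vector."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def integrate(weights, logits):
    """Mix K experts' predictions with DEA weights.

    weights: [w_1, ..., w_K] soft class-membership weights (sum to 1)
    logits:  list of K per-expert logit vectors over C classes
    Returns the predicted class index and the mixed probability vector.
    """
    probs = [softmax(z) for z in logits]
    mixed = [sum(w * p[c] for w, p in zip(weights, probs))
             for c in range(len(probs[0]))]
    return mixed.index(max(mixed)), mixed
```

When one membership weight dominates, the prediction follows the corresponding expert, which is how the "square peg in a square hole" assignment plays out at inference time.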
+
+Computational overhead analysis. While our proposed modules (three experts, MFF, and DEA) introduce a controlled parameter increase of 13.3% (1.5M $\rightarrow$ 1.7M), this design achieves a balanced efficiency-performance trade-off. Experimental results on CIFAR-10-LT demonstrate: a 6.4% increase in epoch time (234.5 s $\rightarrow$ 249.5 s), a 1.7 s increase in inference time for evaluating 10,000 samples (7.1 s $\rightarrow$ 8.8 s), and a substantial accuracy improvement of 3.5 pp (71.9% $\rightarrow$ 74.4%). These results collectively indicate significant performance enhancement at modest computational overhead.
+
+# 5. Conclusion
+
+In this work, we address the LTSSL problem from a fresh perspective, i.e., automatically integrating the expertise of various experts to produce high-quality pseudo-labels and predictions. We also theoretically prove that effectively exploiting different experts' expertise can reduce the generalization error bound. Specifically, the proposed Meta-Expert algorithm comprises a dynamic expert assignment (DEA) module and a multi-depth feature fusion (MFF) module: the former assigns a suitable expert to each sample based on the estimated class membership, while the latter relieves biased training by fusing features from different depths, based on our novel observation that deep features are more biased towards the head class yet more discriminative. We validate our method's effectiveness through extensive experiments on three widely-used LTSSL datasets with different imbalance ratios and unlabeled data distributions, reaching new state-of-the-art performance. The ablation results show that both the DEA and MFF modules contribute to the performance improvement. Moreover, the DEA module can correctly estimate the class membership of given samples, which is essential to integrating the expertise of each expert.
+
+# Acknowledgements
+
+This work was supported by the National Natural Science Foundation of China under Grant U24A20322. This research work is also supported by the Big Data Computing Center of Southeast University.
+
+# Impact Statement
+
+This paper presents work whose goal is to advance the field of Machine Learning. Specifically, we propose a new long-tailed semi-supervised learning algorithm that improves performance in distribution-mismatched scenarios by integrating the expertise of various experts to produce high-quality pseudo-labels and predictions. We also theoretically prove that the model's generalization error can be reduced by integrating the expertise of different experts. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
+
+# References
+
+Aimar, E. S., Jonnarth, A., Felsberg, M., and Kuhlmann, M. Balanced product of calibrated experts for long-tailed recognition. In CVPR, 2023.
+Bai, J., Liu, Z., Wang, H., Hao, J., Feng, Y., Chu, H., and Hu, H. On the effectiveness of out-of-distribution data in self-supervised long-tail learning. In ICLR, 2023.
+Berthelot, D., Carlini, N., Cubuk, E. D., Kurakin, A., Sohn, K., Zhang, H., and Raffel, C. Remixmatch: Semi-supervised learning with distribution matching and augmentation anchoring. In ICLR, 2020.
+Cao, K., Wei, C., Gaidon, A., Aréchiga, N., and Ma, T. Learning imbalanced datasets with label-distribution-aware margin loss. In NeurIPS, 2019.
+Chen, D., Wang, W., Gao, W., and Zhou, Z. Tri-net for semi-supervised deep learning. In IJCAI, 2018.
+Chen, H., Tao, R., Fan, Y., Wang, Y., Wang, J., Schiele, B., Xie, X., Raj, B., and Savvides, M. Softmatch: Addressing the quantity-quality tradeoff in semi-supervised learning. In ICLR, 2023.
+Chen, Y., Mancini, M., Zhu, X., and Akata, Z. Semi-supervised and unsupervised deep visual learning: A survey. IEEE TPAMI, 46(3):1327-1347, 2024.
+Coates, A., Ng, A. Y., and Lee, H. An analysis of single-layer networks in unsupervised feature learning. In AISTATS, 2011.
+Du, C., Han, Y., and Huang, G. Simpro: A simple probabilistic framework towards realistic long-tailed semi-supervised learning. In ICML, 2024.
+
+Du, Y. and Wu, J. No one left behind: Improving the worst categories in long-tailed learning. In CVPR, 2023.
+Fan, Y., Dai, D., Kukleva, A., and Schiele, B. Cossl: Co-learning of representation and classifier for imbalanced semi-supervised learning. In CVPR, 2022.
+Feng, Q., Xie, L., Fang, S., and Lin, T. Bacon: Boosting imbalanced semi-supervised learning via balanced feature-level contrastive learning. In AAAI, 2024.
+Guo, L. and Li, Y. Class-imbalanced semi-supervised learning with adaptive thresholding. In ICML, 2022.
+Huang, Z., Shen, L., Yu, J., Han, B., and Liu, T. Flat-match: Bridging labeled data and unlabeled data with cross-sharpness for semi-supervised learning. In NeurIPS, 2023.
+Jia, Y., Peng, X., Wang, R., and Zhang, M. Long-tailed partial label learning by head classifier and tail classifier cooperation. In AAAI, 2024.
+Kini, G. R., Paraskevas, O., Oymak, S., and Thrampoulidis, C. Label-imbalanced and group-sensitive classification under overparameterization. In NeurIPS, 2021.
+Krizhevsky, A. Learning multiple layers of features from tiny images. https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf, 2009.
+Lai, Z., Wang, C., Gunawan, H., Cheung, S. S., and Chuah, C. Smoothed adaptive weighting for imbalanced semi-supervised learning: Improve reliability against unknown distribution data. In ICML, 2022.
+Lee, H. and Kim, H. CDMAD: class-distribution-mismatch-aware debiasing for class-imbalanced semi-supervised learning. In CVPR, 2024.
+Lee, H., Shin, S., and Kim, H. ABC: auxiliary balanced classifier for class-imbalanced semi-supervised learning. In NeurIPS, 2021.
+Lee, S., Le, T. V., Shin, J., and Lee, S. (FL)$^{2}$: Overcoming few labels in federated semi-supervised learning. In NeurIPS, 2024.
+Li, Z. and Jia, Y. Conmix: Contrastive mixup at representation level for long-tailed deep clustering. In ICLR, 2025.
+Liu, B., Li, H., Kang, H., Hua, G., and Vasconcelos, N. Breadcrumbs: Adversarial class-balanced sampling for long-tailed recognition. In ECCV, 2022.
+Ma, C., Elezi, I., Deng, J., Dong, W., and Xu, C. Three heads are better than one: Complementary experts for long-tailed semi-supervised learning. In AAAI, 2024.
+
+Maurer, A. A vector-contraction inequality for rademacher complexities. In ALT, 2016.
+Menon, A. K., Jayasumana, S., Rawat, A. S., Jain, H., Veit, A., and Kumar, S. Long-tail learning via logit adjustment. In ICLR, 2021.
+Miyato, T., Maeda, S., Koyama, M., and Ishii, S. Virtual adversarial training: A regularization method for supervised and semi-supervised learning. IEEE TPAMI, 41(8): 1979-1993, 2019.
+Mohri, M., Rostamizadeh, A., and Talwalkar, A. Foundations of Machine Learning. MIT Press, 2018.
+Netzer, Y., Wang, T., Coates, A., Bissacco, A., Wu, B., and Ng, A. Y. Reading digits in natural images with unsupervised feature learning. In NeurIPS, 2011.
+Pan, L., Zhang, Y., Yang, Q., Li, T., and Chen, Z. Combat long-tails in medical classification with relation-aware consistency and virtual features compensation. In MICCAI, 2023.
+Park, T., Lee, H., and Kim, H. Rebalancing using estimated class distribution for imbalanced semi-supervised learning under class distribution mismatch. In ECCV, 2024.
+Peng, W., Weng, Z., Li, H., Wu, Z., and Jiang, Y.-G. Bmb: Balanced memory bank for long-tailed semi-supervised learning. IEEE TMM, 2025.
+Peng, Z., Jia, Y., Liu, H., Hou, J., and Zhang, Q. Maximum entropy subspace clustering network. IEEE TCSVT, 32(4): 2199-2210, 2022.
+Peng, Z., Liu, H., Jia, Y., and Hou, J. Deep attention-guided graph clustering with dual self-supervision. IEEE TCSVT, 33(7):3296-3307, 2023.
+Sohn, K., Berthelot, D., Carlini, N., Zhang, Z., Zhang, H., Raffel, C., Cubuk, E. D., Kurakin, A., and Li, C. Fixmatch: Simplifying semi-supervised learning with consistency and confidence. In NeurIPS, 2020.
+Wang, X., Lian, L., Miao, Z., Liu, Z., and Yu, S. X. Long-tailed recognition by routing diverse distribution-aware experts. In ICLR, 2021.
+Wang, X., Wu, Z., Lian, L., and Yu, S. X. Debiased learning from naturally imbalanced pseudo-labels. In CVPR, 2022.
+Wang, Y., Chen, H., Heng, Q., Hou, W., Fan, Y., Wu, Z., Wang, J., Savvides, M., Shinozaki, T., Raj, B., Schiele, B., and Xie, X. Freematch: Self-adaptive thresholding for semi-supervised learning. In ICLR, 2023.
+
+Wei, C., Sohn, K., Mellina, C., Yuille, A. L., and Yang, F. Crest: A class-rebalancing self-training framework for imbalanced semi-supervised learning. In CVPR, 2021.
+Wei, T. and Gan, K. Towards realistic long-tailed semi-supervised learning: Consistency is all you need. In CVPR, 2023.
+Xu, Y., Li, Y., Li, J., and Lu, C. Constructing balance from imbalance for long-tailed image recognition. In ECCV, 2022.
+Yang, Y., Jiang, N., Xu, Y., and Zhan, D. Robust semi-supervised learning by wisely leveraging open-set data. IEEE TPAMI, 46(12):8334-8347, 2024.
+Yu, Z., Li, Y., and Lee, Y. J. Inpl: Pseudo-labeling the inliers first for imbalanced semi-supervised learning. In ICLR, 2023.
+Yuan, Y., Wang, X., Yang, X., Li, R., and Heng, P. Semi-supervised class imbalanced deep learning for cardiac MRI segmentation. In MICCAI, 2023.
+Zagoruyko, S. and Komodakis, N. Wide residual networks. In BMVC, 2016.
+Zhang, B., Wang, Y., Hou, W., Wu, H., Wang, J., Okumura, M., and Shinozaki, T. Flexmatch: Boosting semi-supervised learning with curriculum pseudo labeling. In NeurIPS, 2021.
+Zhang, C., Hou, Y., Chen, K., Cao, S., Fan, G., and Liu, J. Quality-aware self-training on differentiable synthesis of rare relational data. In AAAI, 2023a.
+Zhang, Y., Hooi, B., Hong, L., and Feng, J. Self-supervised aggregation of diverse experts for test-agnostic long-tailed recognition. In NeurIPS, 2022.
+Zhang, Y., Chen, J., Wang, K., and Xie, F. ECL: class-enhancement contrastive learning for long-tailed skin lesion classification. In MICCAI, 2023b.
+Zhang, Y., Kang, B., Hooi, B., Yan, S., and Feng, J. Deep long-tailed learning: A survey. IEEE TPAMI, 45(9): 10795-10816, 2023c.
+
+# Appendix
+
+# A. Overview of the Appendix
+
+Our appendix consists of three main sections:
+
+- More related literature and discussions.
+- Additional evaluation on FreeMatch (Wang et al., 2023).
+- Proof of the theorem in Sec. 3.4 (theoretical analysis), i.e., Theorem 1 (generalization error bound).
+
+# B. More Related Literature and Discussions
+
+# B.1. Long-tailed learning (LTL)
+
+Long-tailed learning (LTL) is tailored for the long-tailed distribution exhibiting in real-world applications (Pan et al., 2023), which aims to improve the performance of the tail class without compromising that of the head class. The existing methods can be roughly grouped into three categories: re-sampling, logit adjustment, and ensemble learning. Re-sampling (Bai et al., 2023; Liu et al., 2022; Xu et al., 2022) adjusts the number of samples for each class, i.e., under-sampling the head class or over-sampling the tail class. Logit adjustment (Cao et al., 2019; Menon et al., 2021; Kini et al., 2021) seeks to resolve the class imbalance by adjusting the predicted logit of the classifier. Ensemble learning (Aimar et al., 2023; Du & Wu, 2023; Zhang et al., 2022) based methods combine multiple classifiers (experts) to improve the performance and robustness of the model. While these methods have made significant progress in long-tailed learning, they often fail to achieve the expected performance gains when directly applied to long-tailed semi-supervised learning (LTSSL), particularly when the distributions are mismatched between labeled and unlabeled training data.
+
+# B.2. Connections and Differences Compared with Multiple Experts Based Methods
+
+In the fields of long-tailed learning (LTL) and long-tailed semi-supervised learning (LTSSL), several works have employed multiple experts to enhance the model's performance. Nevertheless, they are significantly different from the proposed Meta-Expert.
+
+In LTSSL, only CPE (Ma et al., 2024) utilizes multiple experts to handle unlabeled data across various class distributions. However, CPE does not integrate the multiple experts: it merely employs the three experts to generate pseudo-labels simultaneously in the training phase and the uniform expert to make predictions in the testing phase. Consequently, CPE may introduce more erroneous pseudo-labels, thereby limiting its performance.
+
+In contrast to LTSSL, numerous studies incorporate multiple experts in LTL, such as RIDE (Wang et al., 2021), SADE (Zhang et al., 2022), and BalPoE (Aimar et al., 2023). All of these methods are designed for supervised learning and are incapable of handling semi-supervised learning (SSL). Extending them to SSL requires extra effort, as exploiting unlabeled samples is not trivial.
+
+RIDE learns multiple independent and diverse experts and makes predictions by averaging their outputs. Its router primarily focuses on minimizing the number of experts to reduce computational cost during the testing phase. SADE leverages self-supervision on the test set to aggregate the learned experts and subsequently utilizes the aggregated ensemble to make predictions in the testing phase. Although effective, it makes no effort to guide different experts to learn better within their expertise during the training phase. BalPoE refines the logit adjustment intensity across all three experts and averages their outputs to minimize the balanced error while ensuring Fisher consistency.
+
+Different from the above works, we find that different experts excel at predicting samples in different intervals, e.g., a long-tailed/uniform/inverse long-tailed expert is skilled at samples located in the head/medium/tail interval. Consequently, we propose the dynamic expert assignment (DEA) module to estimate the class membership of samples and dynamically assign a suitable expert to each sample based on the estimated membership, producing high-quality pseudo-labels in the training phase (Fig. 1) and excellent predictions in the testing phase (Tables 4, 5, 8, and 9). Moreover, Fig. 3 shows that the class membership estimated by the DEA module is accurate.
+
+Finally, for the first time, we observe that shallow features are relatively balanced although less discriminative, whereas deep features are more discriminative but less balanced. We therefore propose the multi-depth feature fusion (MFF) module to make the model both discriminative and balanced (Table 2).
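+
+A minimal sketch of such a fusion step, under our own simplifying assumptions (pooled per-depth feature vectors fused by concatenation; the function name and shapes are illustrative, not the paper's exact MFF design):
+
+```python
+import numpy as np
+
+def mff_fuse(features):
+    """Fuse pooled feature vectors taken at several depths of the backbone.
+
+    features: list of (n, d_k) arrays, ordered from shallow to deep.
+    Concatenation lets the classifier exploit both the more balanced
+    shallow features and the more discriminative deep features.
+    """
+    return np.concatenate(features, axis=1)
+
+shallow = np.ones((5, 8))    # e.g. pooled block-1 features
+middle  = np.ones((5, 16))   # pooled block-2 features
+deep    = np.ones((5, 32))   # pooled final-block features
+fused = mff_fuse([shallow, middle, deep])
+```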
+
+In summary, the differences between our Meta-Expert and existing methods are substantial, even though they also utilize a multi-expert network architecture.
+
+# C. Evaluation on FreeMatch
+
+To further illustrate our method's universality, we reproduce our method and all compared methods by taking FreeMatch (Wang et al., 2023) as the base SSL method and evaluate them on CIFAR-10-LT, SVHN-LT, and STL-10-LT with the same settings used in the main paper. Tables 8 and 9 present the main results, indicating that our Meta-Expert produces higher prediction accuracies by incorporating the expertise of different experts. For example, on CIFAR-10-LT, given $(N_{1},M_{1},\gamma) = (500,4000,200)$ , Meta-Expert surpasses all previous baselines by up to $7.05~\mathrm{pp}$ in the case of $\gamma_l = \gamma_u$ . Moreover, in the case of $\gamma_l \neq \gamma_u$ , Meta-Expert surpasses the previous SOTA method by $1.02~\mathrm{pp}$ and outperforms all other baselines by $1.90~\mathrm{pp}$ given $(N_{1},M_{1},\gamma_l,\gamma_u) = (1500,3000,150,1)$ . Moving to the SVHN-LT dataset, Meta-Expert surpasses the previous SOTA method by $0.55~\mathrm{pp}$ and other baselines by up to $3.29~\mathrm{pp}$ given $(N_{1},M_{1},\gamma_l,\gamma_u) = (1500,3000,150,1/150)$ . On STL-10-LT, Meta-Expert performs comparably to the previous SOTA method SimPro, surpassing other baselines by up to $0.62~\mathrm{pp}$ . All of these results further illustrate our method's effectiveness.
+
+Table 8. Comparison of accuracy (%) on CIFAR-10-LT under $\gamma_l = \gamma_u$ and $\gamma_l \neq \gamma_u$ settings. We set $\gamma_l$ to 150 and 200 for CIFAR-10-LT. We use bold to mark the best results.
+
+| Algorithm | $N_1{=}500$, $M_1{=}4000$, $\gamma_l{=}\gamma_u{=}200$ | $N_1{=}500$, $M_1{=}4000$, $\gamma_l{=}\gamma_u{=}150$ | $N_1{=}1500$, $M_1{=}3000$, $\gamma_l{=}\gamma_u{=}200$ | $N_1{=}1500$, $M_1{=}3000$, $\gamma_l{=}\gamma_u{=}150$ | $N_1{=}1500$, $M_c{=}3000$, $\gamma_l{=}200$, $\gamma_u{=}1$ | $N_1{=}1500$, $M_c{=}3000$, $\gamma_l{=}200$, $\gamma_u{=}1/200$ | $N_1{=}1500$, $M_c{=}3000$, $\gamma_l{=}150$, $\gamma_u{=}1$ | $N_1{=}1500$, $M_c{=}3000$, $\gamma_l{=}150$, $\gamma_u{=}1/150$ |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Supervised | 41.15±1.15 | 43.88±1.61 | 56.83±1.10 | 59.82±0.32 | 56.83±1.10 | 56.83±1.10 | 59.82±0.32 | 59.82±0.32 |
+| FreeMatch (ICLR 23) | 63.35±0.49 | 68.03±0.68 | 69.83±1.36 | 73.00±0.63 | 78.99±0.53 | 71.33±0.36 | 79.60±0.72 | 72.76±0.79 |
+| w/ SAW (ICML 22) | 59.31±1.26 | 65.69±0.94 | 72.05±0.87 | 74.69±0.85 | 80.16±0.25 | 73.24±0.57 | 81.90±1.00 | 73.45±0.24 |
+| w/ Adsh (ICML 22) | 62.64±1.11 | 66.22±1.54 | 69.27±1.54 | 73.81±0.53 | 71.88±0.47 | 65.16±0.06 | 73.14±0.57 | 66.29±0.76 |
+| w/ DePL (CVPR 22) | 65.04±1.07 | 69.65±1.43 | 70.66±0.98 | 73.29±0.53 | 80.37±0.49 | 72.28±0.17 | 80.14±1.03 | 73.20±1.37 |
+| w/ ACR (CVPR 23) | 58.36±2.35 | 60.98±1.24 | 70.24±1.99 | 72.55±1.34 | 81.89±0.04 | 82.68±0.21 | 83.49±0.36 | 83.85±0.41 |
+| w/ BaCon (AAAI 24) | 68.63±0.77 | 72.69±0.68 | 75.97±1.34 | 77.71±1.36 | 83.20±0.11 | 82.48±0.08 | 83.91±0.42 | 83.66±1.25 |
+| w/ CPE (AAAI 24) | 66.82±1.36 | 69.80±1.28 | 77.28±0.63 | 79.24±0.30 | 83.59±0.17 | 80.37±1.11 | 84.37±0.14 | 79.12±0.80 |
+| w/ SimPro (ICML 24) | 64.20±0.39 | 69.17±3.31 | 78.57±0.72 | 80.49±0.39 | 80.82±0.24 | 77.59±0.72 | 81.30±0.98 | 79.67±0.99 |
+| w/ Meta-Expert (Ours) | **75.68±0.28** | **78.26±0.63** | **80.53±0.71** | **82.69±0.51** | **84.30±0.29** | **85.32±0.40** | **85.39±0.48** | **85.89±0.79** |
+
+Table 9. Comparison of accuracy (%) on STL-10-LT and SVHN-LT under $\gamma_l = \gamma_u$ and $\gamma_l \neq \gamma_u$ settings. We set $\gamma_l$ to 15 and 20 for STL-10-LT, and $\gamma_l$ to 150 and 200 for SVHN-LT. We use **bold** to mark the best results. N/A denotes that $\gamma_u$ is unknown for STL-10-LT, since the ground-truth labels of its unlabeled dataset are inaccessible.
+
+| Algorithm | STL-10-LT, $N_1{=}150$, $M_1{=}100$k, $\gamma_l{=}20$, $\gamma_u{=}$ N/A | STL-10-LT, $N_1{=}150$, $M_1{=}100$k, $\gamma_l{=}15$, $\gamma_u{=}$ N/A | STL-10-LT, $N_1{=}450$, $M_1{=}100$k, $\gamma_l{=}20$, $\gamma_u{=}$ N/A | STL-10-LT, $N_1{=}450$, $M_1{=}100$k, $\gamma_l{=}15$, $\gamma_u{=}$ N/A | SVHN-LT, $N_1{=}1500$, $M_1{=}3000$, $\gamma_l{=}\gamma_u{=}200$ | SVHN-LT, $N_1{=}1500$, $M_1{=}3000$, $\gamma_l{=}\gamma_u{=}150$ | SVHN-LT, $N_1{=}1500$, $M_c{=}3000$, $\gamma_l{=}150$, $\gamma_u{=}1$ | SVHN-LT, $N_1{=}1500$, $M_c{=}3000$, $\gamma_l{=}150$, $\gamma_u{=}1/150$ |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Supervised | 40.44±1.42 | 42.31±0.42 | 56.56±1.07 | 59.81±0.45 | 84.10±0.05 | 86.14±0.50 | 86.14±0.50 | 86.14±0.50 |
+| FreeMatch (ICLR 23) | 70.67±0.83 | 70.58±0.17 | 76.66±0.32 | 77.40±0.31 | 90.87±1.01 | 91.66±0.21 | 94.66±0.41 | 88.01±0.87 |
+| w/ SAW (ICML 22) | 71.27±0.69 | 70.91±0.54 | 78.07±0.06 | 78.15±0.44 | 89.04±0.59 | 90.09±0.16 | 94.57±0.24 | 89.22±1.18 |
+| w/ Adsh (ICML 22) | 67.37±1.15 | 68.10±0.10 | 73.44±0.70 | 74.43±0.14 | 90.22±0.45 | 91.78±0.22 | 94.21±0.57 | 88.47±0.66 |
+| w/ DePL (CVPR 22) | 70.89±0.40 | 70.47±0.48 | 77.41±0.11 | 77.47±0.47 | 90.64±0.61 | 91.44±0.37 | 94.62±0.32 | 87.74±1.06 |
+| w/ ACR (CVPR 23) | 69.89±0.54 | 70.98±0.57 | 78.51±0.26 | 79.35±0.35 | 84.78±1.11 | 87.00±0.75 | 92.89±0.35 | 91.21±0.09 |
+| w/ BaCon (AAAI 24) | 71.51±0.22 | **71.69±0.69** | 78.93±0.13 | 79.34±0.72 | 91.13±1.10 | 92.45±0.18 | 94.82±0.24 | 92.85±0.44 |
+| w/ CPE (AAAI 24) | 69.73±0.25 | 70.46±0.54 | 78.84±0.13 | 79.18±0.54 | 91.83±0.35 | 92.21±0.03 | 94.32±0.41 | 90.39±0.41 |
+| w/ SimPro (ICML 24) | 43.30±3.18 | 46.05±1.22 | 66.70±1.69 | 68.03±1.17 | 93.17±0.35 | **93.73±0.06** | **94.86±0.22** | 93.95±0.29 |
+| w/ Meta-Expert (Ours) | **71.57±0.40** | 71.60±1.00 | **78.94±0.08** | **79.40±0.25** | **93.21±0.09** | 93.67±0.07 | 94.80±0.37 | **94.50±0.33** |
+
+# D. Proof of Theorem 1
+
+We first restate Theorem 1 here.
+
+Theorem 1. Suppose that the loss function $\ell(f(x), y)$ is $\rho$ -Lipschitz with respect to $f(x)$ for all $y \in \{1, \ldots, C\}$ and upper-bounded by $U$ . Given the class membership $\eta \in \{1, \ldots, Q\}$ and overall pseudo-labeling error $\epsilon > 0$ , if $\frac{1}{M} \sum_{j=1}^{M} \sum_{k=1}^{Q} \mathbb{I}(\eta_{j,k} = 1) |\mathbb{I}(\max(f_k(x_j)) > t) - \mathbb{I}(\hat{y}_j = y_j)| \leq \epsilon$ , for any $\delta > 0$ , with probability at least $1 - \delta$ , we have:
+
+$$
+R (\hat {f}) - R (f ^ {*}) \leq 2 U \epsilon + 4 \sqrt {2} \rho \sum_ {y = 1} ^ {C} \mathcal {R} _ {O} \left(\mathcal {H} _ {y}\right) + 2 U \sqrt {\frac {\log \frac {2}{\delta}}{2 O}}. \tag {8}
+$$
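+
+For intuition, the empirical quantity that the theorem assumes is at most $\epsilon$ can be computed as below. This is our own illustrative NumPy sketch; the function name, the confidence threshold value, and the collapsed one-hot membership encoding are assumptions for the toy example.
+
+```python
+import numpy as np
+
+def pseudo_label_error(expert_probs, assigned, pseudo, true, t=0.95):
+    """Empirical version of the condition in Theorem 1.
+
+    expert_probs: (Q, M, C) softmax outputs of the Q experts.
+    assigned: (M,) index of the expert assigned to each unlabeled sample
+        (the one-hot membership eta_{j,k} collapsed to an index).
+    pseudo, true: (M,) pseudo-labels and (hypothetically known) true labels.
+    Returns the average disagreement between "selected by the confidence
+    threshold t" and "pseudo-label is correct".
+    """
+    M = len(pseudo)
+    total = 0
+    for j in range(M):
+        k = assigned[j]
+        passed = expert_probs[k, j].max() > t      # I(max(f_k(x_j)) > t)
+        correct = pseudo[j] == true[j]             # I(hat{y}_j = y_j)
+        total += abs(int(passed) - int(correct))
+    return total / M
+
+# Toy example: 2 experts, 3 unlabeled samples, 2 classes.
+probs = np.array([[[0.99, 0.01], [0.60, 0.40], [0.97, 0.03]],
+                  [[0.50, 0.50], [0.98, 0.02], [0.55, 0.45]]])
+err = pseudo_label_error(probs, assigned=[0, 1, 0],
+                         pseudo=[0, 0, 1], true=[0, 1, 1])
+```
+
+Only the second sample contributes: it passes the threshold but its pseudo-label is wrong, so the average disagreement is $1/3$.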
+
+Proof. We first derive the uniform deviation bound between $R(f)$ and $\widetilde{R}(f)$ by the following lemma.
+
+Lemma 1. Suppose that the loss function $\ell(f(x), y)$ is $\rho$ -Lipschitz with respect to $f(x)$ for all $y \in \{1, \ldots, C\}$ and upper-bounded by $U$ . For any $\delta > 0$ , with probability at least $1 - \delta$ , we have:
+
+$$
+\left| R (f) - \widehat {R} (f) \right| \leq 2 \sqrt {2} \rho \sum_ {y = 1} ^ {C} \mathcal {R} _ {N + M} \left(\mathcal {H} _ {y}\right) + U \sqrt {\frac {\log \frac {2}{\delta}}{2 (N + M)}}, \tag {9}
+$$
+
+where the function space $\mathcal{H}_y$ for the label $y\in \{1,\ldots ,C\}$ is $\{h:x\longmapsto f_y(x)|f\in \mathcal{F}\}$ .
+
+Proof. In order to prove this lemma, we define the Rademacher complexity of the composition of loss function $\ell$ and model $f\in \mathcal{F}$ with $N$ labeled and $M$ unlabeled training samples as follows:
+
+$$
+\begin{array}{l} \mathcal {R} _ {N + M} (\boldsymbol {\ell} \circ \mathcal {F}) \\ = \mathbb {E} _ {(x, y, \mu)} \left[ \sup _ {f \in \mathcal {F}} \sum_ {i = 1} ^ {N} \mu_ {i} \left(\ell \left(f \left(x _ {i}\right), y _ {i}\right)\right) + \sum_ {j = 1} ^ {M} \mu_ {j} \left(\ell \left(f \left(x _ {j}\right), y _ {j}\right)\right) \right] \\ \leq \sqrt {2} \rho \sum_ {y = 1} ^ {C} \mathcal {R} _ {N + M} \left(\mathcal {H} _ {y}\right), \tag {10} \\ \end{array}
+$$
+
+where $\circ$ denotes the function composition operator, $\mathbb{E}_{(x,y,\mu)}$ denotes the expectation over $x$ , $y$ , and $\mu$ , $\mu$ denotes the Rademacher variable, $\sup_{f\in \mathcal{F}}$ denotes the supremum (or least upper bound) over the function set $\mathcal{F}$ of model $f$ . The second line holds because of the Rademacher vector contraction inequality (Maurer, 2016).
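+
+As an illustrative aside (not part of the proof), the empirical Rademacher complexity of a small, *finite* function class can be estimated by Monte Carlo over draws of $\mu$; the setup below (a class given as a matrix of function values) is our own toy construction.
+
+```python
+import numpy as np
+
+def empirical_rademacher(outputs, n_draws=2000, seed=0):
+    """Monte-Carlo estimate of the empirical Rademacher complexity
+    E_mu[ sup_f (1/n) sum_i mu_i f(x_i) ] for a finite function class
+    given as a (|F|, n) matrix of function values on the n samples.
+    """
+    rng = np.random.default_rng(seed)
+    F, n = outputs.shape
+    total = 0.0
+    for _ in range(n_draws):
+        mu = rng.choice([-1.0, 1.0], size=n)     # Rademacher variables
+        total += (outputs @ mu).max() / n        # sup over the finite class
+    return total / n_draws
+
+# Two constant functions (+1 and -1) on n = 100 points: the supremum equals
+# |mean(mu)|, whose expectation is about sqrt(2 / (pi * n)) ~ 0.08.
+vals = np.vstack([np.ones(100), -np.ones(100)])
+est = empirical_rademacher(vals)
+```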
+
+Then, we proceed with the proof by showing that one direction, $\sup_{f\in \mathcal{F}}R(f) - \widehat{R} (f)$ , is bounded with probability at least $1 - \delta /2$ ; the other direction, $\sup_{f\in \mathcal{F}}\widehat{R} (f) - R(f)$ , can be proved similarly. Note that replacing a sample $(x_{i},y_{i})$ changes $\sup_{f\in \mathcal{F}}R(f) - \widehat{R} (f)$ by at most $\frac{U}{N + M}$ , since $\ell$ is bounded by $U$ . According to McDiarmid's inequality (Mohri et al., 2018), for any $\delta >0$ , with probability at least $1 - \delta /2$ , we have:
+
+$$
+\sup _ {f \in \mathcal {F}} R (f) - \widehat {R} (f) \leq \mathbb {E} \left[ \sup _ {f \in \mathcal {F}} R (f) - \widehat {R} (f) \right] + U \sqrt {\frac {\log \frac {2}{\delta}}{2 (N + M)}}. \tag {11}
+$$
+
+According to the result in (Mohri et al., 2018) that shows $\mathbb{E}\left[\sup_{f\in \mathcal{F}}R(f) - \widehat{R} (f)\right]\leq 2\mathcal{R}_{N + M}(\mathcal{F})$ , and further considering the other direction $\sup_{f\in \mathcal{F}}\widehat{R} (f) - R(f)$ , with probability at least $1 - \delta$ , we have:
+
+$$
+\sup _ {f \in \mathcal {F}} | R (f) - \widehat {R} (f) | \leq 2 \sqrt {2} \rho \sum_ {y = 1} ^ {C} \mathcal {R} _ {N + M} (\mathcal {H} _ {y}) + U \sqrt {\frac {\log \frac {2}{\delta}}{2 (N + M)}}, \tag {12}
+$$
+
+which completes the proof.
+
+Then, we can bound the difference between $\widehat{R}_u(f)$ and $\widehat{R}_u^\prime (f)$ as follows.
+
+Lemma 2. Suppose that the loss function $\ell(f(x), y)$ is $\rho$ -Lipschitz with respect to $f(x)$ for all $y \in \{1, \ldots, C\}$ and upper-bounded by $U$ . Given the class membership $\eta \in \{1, \ldots, Q\}$ and overall pseudo-labeling error $\epsilon > 0$ , if $\frac{1}{M} \sum_{j=1}^{M} \sum_{k=1}^{Q} \mathbb{I}(\eta_{j,k} = 1) |\mathbb{I}\big(\max\big(f_k(x_j)\big) > t\big) - \mathbb{I}(\hat{y}_j = y_j)| \leq \epsilon$ , we have:
+
+$$
+\left| \widehat {R} _ {u} ^ {\prime} (f) - \widehat {R} _ {u} (f) \right| \leq U \epsilon . \tag {13}
+$$
+
+Proof. Let's first expand $\widehat{R}_u^{\prime}(f)$ :
+
+$$
+\widehat{R}_u^{\prime}(f) = \frac{1}{\hat{M}} \sum_{j=1}^{\hat{M}} \sum_{k=1}^{Q} \mathbb{I}\left(\eta_{j,k} = 1\right) \mathbb{I}\left(\max\left(f_k(x_j)\right) > t\right) \ell\left(f_k(x_j), \hat{y}_j\right). \tag{14}
+$$
+
+Then, we show its upper bound:
+
+$$
+\begin{array}{l} \widehat {R} _ {u} ^ {\prime} (f) \leq \frac {1}{M} \sum_ {j = 1} ^ {M} \sum_ {k = 1} ^ {Q} \mathbb {I} (\eta_ {j, k} = 1) \ell \big (f _ {k} (x _ {j}), y _ {j} \big) + \mathbb {I} (\eta_ {j, k} = 1) \mathbb {I} \big (\hat {y} _ {j} \neq y _ {j}, \max \big (f _ {k} (x _ {j}) \big) > t \big) \ell \big (f _ {k} (x _ {j}), \hat {y} _ {j} \big) \\ \leq \frac {1}{M} \sum_ {j = 1} ^ {M} \sum_ {k = 1} ^ {Q} \mathbb {I} \left(\eta_ {j, k} = 1\right) \left(\ell \left(f _ {k} \left(x _ {j}\right), y _ {j}\right) + \epsilon_ {k, k} \ell \left(f _ {k} \left(x _ {j}\right), \hat {y} _ {j}\right)\right) \\ \leq \widehat {R} _ {u} (f) + U \frac {1}{Q} \sum_ {k = 1} ^ {Q} \epsilon_ {k, k} = \widehat {R} _ {u} (f) + U \epsilon , \tag {15} \\ \end{array}
+$$
+
+and its lower bound:
+
+$$
+\begin{array}{l} \widehat{R}_u^{\prime}(f) \geq \frac{1}{M} \sum_{j=1}^{M} \sum_{k=1}^{Q} \mathbb{I}(\eta_{j,k} = 1)\, \ell\big(f_k(x_j), y_j\big) - \mathbb{I}(\eta_{j,k} = 1)\, \mathbb{I}\big(\max\big(f_k(x_j)\big) \leq t\big)\, \ell\big(f_k(x_j), \hat{y}_j\big) \\ \geq \frac{1}{M} \sum_{j=1}^{M} \sum_{k=1}^{Q} \mathbb{I}\left(\eta_{j,k} = 1\right) \left(\ell\left(f_k(x_j), y_j\right) - \epsilon_{k,k}\, \ell\left(f_k(x_j), \hat{y}_j\right)\right) \\ \geq \widehat{R}_u(f) - U \frac{1}{Q} \sum_{k=1}^{Q} \epsilon_{k,k} = \widehat{R}_u(f) - U \epsilon. \tag{16} \\ \end{array}
+$$
+
+By combining these two bounds, we can obtain the following result:
+
+$$
+\left| \widehat {R} _ {u} ^ {\prime} (f) - \widehat {R} _ {u} (f) \right| \leq U \epsilon , \tag {17}
+$$
+
+which concludes the proof.
+
+Based on the above lemmas, for any $\delta > 0$ , with probability at least $1 - \delta$ , we have:
+
+$$
+\begin{array}{l} R(\hat{f}) \leq \widehat{R}(\hat{f}) + 2\sqrt{2}\rho \sum_{y=1}^{C} \mathcal{R}_{O}(\mathcal{H}_{y}) + U \sqrt{\frac{\log \frac{2}{\delta}}{2O}} \\ \leq \widehat{R}_{l}(\hat{f}) + \widehat{R}_{u}(\hat{f}) + 2\sqrt{2}\rho \sum_{y=1}^{C} \mathcal{R}_{O}(\mathcal{H}_{y}) + U \sqrt{\frac{\log \frac{2}{\delta}}{2O}} \\ \leq \widehat{R}_{l}(\hat{f}) + \widehat{R}_{u}^{\prime}(\hat{f}) + U\epsilon + 2\sqrt{2}\rho \sum_{y=1}^{C} \mathcal{R}_{O}(\mathcal{H}_{y}) + U \sqrt{\frac{\log \frac{2}{\delta}}{2O}} \\ \leq \widehat{R}_{l}(f) + \widehat{R}_{u}^{\prime}(f) + U\epsilon + 2\sqrt{2}\rho \sum_{y=1}^{C} \mathcal{R}_{O}(\mathcal{H}_{y}) + U \sqrt{\frac{\log \frac{2}{\delta}}{2O}} \\ \leq \widehat{R}_{l}(f) + \widehat{R}_{u}(f) + 2U\epsilon + 2\sqrt{2}\rho \sum_{y=1}^{C} \mathcal{R}_{O}(\mathcal{H}_{y}) + U \sqrt{\frac{\log \frac{2}{\delta}}{2O}} \\ \leq \widehat{R}(f) + 2U\epsilon + 2\sqrt{2}\rho \sum_{y=1}^{C} \mathcal{R}_{O}(\mathcal{H}_{y}) + U \sqrt{\frac{\log \frac{2}{\delta}}{2O}} \\ \leq R(f) + 2U\epsilon + 4\sqrt{2}\rho \sum_{y=1}^{C} \mathcal{R}_{O}(\mathcal{H}_{y}) + 2U \sqrt{\frac{\log \frac{2}{\delta}}{2O}}, \tag{18} \\ \end{array}
+$$
+
+where the first and seventh lines are based on Lemma 1, the third and fifth lines follow from Lemma 2, and the fourth line follows from the definition of $\hat{f}$ . This completes the proof of Theorem 1.
\ No newline at end of file
diff --git a/asquarepeginasquareholemetaexpertforlongtailedsemisupervisedlearning/images.zip b/asquarepeginasquareholemetaexpertforlongtailedsemisupervisedlearning/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..7ab45dca1aeba23628c0b040b428884ad79c95bf
--- /dev/null
+++ b/asquarepeginasquareholemetaexpertforlongtailedsemisupervisedlearning/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f59a544fe23f7b9447d516e667543470418bca346d2084c2ddaf0454b6d4d50c
+size 982277
diff --git a/asquarepeginasquareholemetaexpertforlongtailedsemisupervisedlearning/layout.json b/asquarepeginasquareholemetaexpertforlongtailedsemisupervisedlearning/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..c1eca0fe901f10a4984031134d464d020005dcbb
--- /dev/null
+++ b/asquarepeginasquareholemetaexpertforlongtailedsemisupervisedlearning/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a864de1c4cdf99f09de615eba7d8293a5562ba6de52c28b1c9e9a860a88539cd
+size 738942
diff --git a/astrongermixtureoflowrankexpertsforfinetuningfoundationmodels/68209f34-02bd-471b-bf3f-74ef94c38de6_content_list.json b/astrongermixtureoflowrankexpertsforfinetuningfoundationmodels/68209f34-02bd-471b-bf3f-74ef94c38de6_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..98145f78a220e060ca0cc5735039a9894651e26e
--- /dev/null
+++ b/astrongermixtureoflowrankexpertsforfinetuningfoundationmodels/68209f34-02bd-471b-bf3f-74ef94c38de6_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5adc669fddd2b9eea64309986e8129e3fc322fc42a2ce5555266b8c82bd2ccdc
+size 112706
diff --git a/astrongermixtureoflowrankexpertsforfinetuningfoundationmodels/68209f34-02bd-471b-bf3f-74ef94c38de6_model.json b/astrongermixtureoflowrankexpertsforfinetuningfoundationmodels/68209f34-02bd-471b-bf3f-74ef94c38de6_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..256ba37a6e050716988fbe8c6dfadd991a38cfe5
--- /dev/null
+++ b/astrongermixtureoflowrankexpertsforfinetuningfoundationmodels/68209f34-02bd-471b-bf3f-74ef94c38de6_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3395faad440b91d7e414a21f066a319b4756fe6887968c032c457f03bdb20ede
+size 137200
diff --git a/astrongermixtureoflowrankexpertsforfinetuningfoundationmodels/68209f34-02bd-471b-bf3f-74ef94c38de6_origin.pdf b/astrongermixtureoflowrankexpertsforfinetuningfoundationmodels/68209f34-02bd-471b-bf3f-74ef94c38de6_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..dba475d9451cd7c00abc1d0a3687bd2a1882efee
--- /dev/null
+++ b/astrongermixtureoflowrankexpertsforfinetuningfoundationmodels/68209f34-02bd-471b-bf3f-74ef94c38de6_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7b76cccc7cfe790806821beaaa2a69be5061874eddce67ce75c8b57f4cfd5c62
+size 2585683
diff --git a/astrongermixtureoflowrankexpertsforfinetuningfoundationmodels/full.md b/astrongermixtureoflowrankexpertsforfinetuningfoundationmodels/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..99370efe7793b0e75d640fddb8d911f09585aed9
--- /dev/null
+++ b/astrongermixtureoflowrankexpertsforfinetuningfoundationmodels/full.md
@@ -0,0 +1,418 @@
+# A Stronger Mixture of Low-Rank Experts for Fine-Tuning Foundation Models
+
+Mengyang Sun†1 Yihao Wang2 Tao Feng1 Dan Zhang†1 Yifan Zhu3 Jie Tang1
+
+# Abstract
+
+In order to streamline the fine-tuning of foundation models, Low-Rank Adapters (LoRAs) have been substantially adopted across various fields, including instruction tuning and domain adaptation. The underlying concept of LoRA involves decomposing a full-rank matrix into the product of two lower-rank matrices, which reduces storage consumption and accelerates the training process. Furthermore, to address the limited expressive capacity of LoRA, the Mixture-of-Experts (MoE) framework has been introduced to incorporate multiple LoRA adapters. The integration of LoRA experts leads to a visible improvement across several downstream scenarios. However, the mixture of LoRAs (MoE-LoRA) still exhibits low robustness during tuning and inference. Inspired by the Riemannian preconditioners, which train LoRA as a sub-space projector, we propose a new training strategy for MoE-LoRA to stabilize and boost its feature learning via gate-rescaled multi-space projections. We provide both a theoretical solution and an alternative engineering strategy. Examinations on SGD and AdamW optimizers demonstrate the effectiveness of our methodology. Source code is available at https://github.com/THUDM/MoELoRA_Riemannian.
+
+# 1. Introduction
+
+Parameter-Efficient Fine-Tuning (PEFT) techniques offer a cost-effective solution for fine-tuning foundation models (FMs) (Zhang et al., 2025a; Fu et al., 2024). Among these, Low-Rank Adaptation (LoRA) is a prevalent technology due to its versatility and simplicity. In detail, LoRA introduces trainable low-rank matrices $A$ and $B$ to update the internal modules of FMs, which is given by $X = W + BA$ , where $X$ represents the overall weight matrix after integrating pretrained weights $W$ and LoRA modules $A$ and $B$ . In a sense, the product of $A$ and $B$ serves as an approximation of the full-rank update (in other words, Fully Fine-Tuning, or FFT) of the pre-trained weights. While LoRA significantly reduces the number of trainable parameters, it also imposes two limitations: one is limited representation, and the other is gradient sub-optimality.
+
+Limitation 1: Limited representation. A natural problem of low-rank matrices lies in less powerful representation, especially in complex tasks. To tackle this, one straightforward solution is the integration of multiple LoRA modules into the mixture-of-expert framework, known as MoE-LoRA. Figure 1 (left) illustrates a plain MoE-LoRA framework. These efforts tangibly improved the performance of LoRA in many scenarios, like vision-language tasks, multi-task learning, continual learning, etc. In a nutshell, the route of MoE-LoRA can be roughly categorized into two lines: (i) Designing dedicated MoE-LoRA frameworks for specific domains, such as MOELoRA (Liu et al., 2023) and MoCLE (Gou et al., 2023). (ii) Technically improving MoE-LoRA via architectural, updating, and loss constraints, such as MoLA (Gao et al., 2024) and HydraLoRA (Tian et al., 2024). Nevertheless, most of these efforts fail to consider the instability and inefficiency of training MoE-LoRA.
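+
+The plain MoE-LoRA forward pass described above can be sketched as follows. This is a minimal dense (all-experts-active) NumPy version under our own assumptions; the function name, shapes, and the simple softmax gate are illustrative, not a specific framework's API.
+
+```python
+import numpy as np
+
+def moe_lora_forward(x, W, A_list, B_list, gate_logits):
+    """y = x @ (W + sum_i g_i * B_i A_i)^T  -- a plain (dense) MoE-LoRA layer.
+
+    x: (n, d_in); W: (d_out, d_in) frozen pretrained weight.
+    A_list[i]: (r, d_in) and B_list[i]: (d_out, r) form low-rank expert i.
+    gate_logits: (n_experts,) routing logits, softmax-normalized below.
+    """
+    g = np.exp(gate_logits - gate_logits.max())
+    g = g / g.sum()                                   # gate values g_i
+    delta = sum(gi * B @ A for gi, B, A in zip(g, B_list, A_list))
+    return x @ (W + delta).T
+
+rng = np.random.default_rng(0)
+x = rng.normal(size=(4, 6))
+W = rng.normal(size=(5, 6))
+A = [rng.normal(size=(2, 6)) for _ in range(3)]       # 3 rank-2 experts
+B = [rng.normal(size=(5, 2)) for _ in range(3)]
+y = moe_lora_forward(x, W, A, B, np.zeros(3))         # uniform gates
+```
+
+With all $B_i = 0$ (LoRA's usual initialization of $B$), the layer reduces exactly to the frozen pretrained mapping.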
+
+Limitation 2: Gradient Sub-optimality. Another concern that plagues LoRA is gradient sub-optimality. This occurs because the low-rank matrices $A$ and $B$ together form a quotient manifold with a certain curvature, leading to an inconsistency between the inner-manifold optimal gradient and the full-rank optimal gradient, which makes the training process of LoRA suboptimal. To alleviate this, Zhang & Pilanci (2024) enhance LoRA gradients with Riemannian gradient preconditioners, given by $\nabla_A\mathcal{L} \leftarrow (B^T B)^{-1}\nabla_A\mathcal{L}$ and $\nabla_B\mathcal{L} \leftarrow \nabla_B\mathcal{L}(AA^T)^{-1}$ . After a mathematical derivation, these preconditioners yield two gradient projectors, ensuring that the update follows the projection of the full-rank gradient onto the row space of $A$ and the column space of $B$ , that is, $X_{new} = X - \eta [Proj_{col(B)}(\nabla_X\mathcal{L})^T +Proj_{row(A)}(\nabla_X\mathcal{L})]$ . Here the notation $Proj_V(M)$ represents a projection function that projects a given matrix $M$ onto the subspace constructed by all vectors in the set $V$ .
+
+
+Figure 1. The whole MoE-LoRA architecture and an insight into its gradient updating process. The left part of this figure shows a pipeline of the mixture of LoRAs, which fixes the pretrained weights of the Feed-Forward Network (FFN) and trains a series of LoRA adapters together with a routing gate. The right part exhibits how MoE-LoRA is updated. Specifically, we plot an example of a 2-expert MoE-LoRA under the condition $g_{1} < g_{2}$ , which results in a further distorted manifold $g_{1}B_{1}A_{1}$ . Here we simply omit the fixed pretrained weights and suppose $X = g_{1}E_{1} + g_{2}E_{2}$ for convenient display. For an arbitrary step $t$ we plot a state point $\frac{1}{2} X^{(t)}$ , which equals $\frac{g_{1}^{(t)}B_{1}^{(t)}A_{1}^{(t)} + g_{2}^{(t)}B_{2}^{(t)}A_{2}^{(t)}}{2}$ and thus serves as the center point of the two manifold states at $t$ . This figure illustrates that $g_{1}B_{1}A_{1}$ has a higher curvature, so its local optimal descent and its global optimal descent projection are more distinct. This indicates a requirement for gate-related preconditioners.
+
+Through a comprehensive analysis of Limitation 1 and Limitation 2, a natural question arises:
+
+How can LoRA-based structure further approximate the full fine-tuning with the guaranteed Limitations 1 and 2?
+
+Inspired by MoE-LoRA and the gradient preconditioning methods, a straightforward answer to this question is to integrate both approaches to simultaneously overcome the representative and sub-optimal limitations. Specifically, the gradients of each LoRA expert can be refined by a respective Riemannian preconditioner. However, we claim that the weighted summation over experts in MoE-LoRA introduces a gate-based scaling of each LoRA expert's manifold, thereby altering their curvatures according to their respective gate values $g_{i}$ . We illustrate this phenomenon in the right part of Figure 1, which plots an example of a 2-expert MoE-LoRA under the condition $g_{1} < g_{2}$ . Specifically, in the respective spaces of Experts 1 and 2, the manifolds constructed by $B_{1}A_{1}$ and $B_{2}A_{2}$ initially share the same curvature, since their low-rank matrices have the same rank. However, after being multiplied by the gate values, manifold $g_{1}B_{1}A_{1}$ is rescaled more strongly, so it has a larger curvature than $g_{2}B_{2}A_{2}$ in the full MoE space. As a result, Expert 1 exhibits a higher distinction between its global optimal and inner-manifold optimal descents. This phenomenon indicates that the preconditioners for each expert shall be further refined to take the impact of gate values into consideration. In this paper, we propose a simple but effective solution that further rescales the gradients of each expert, in a lightweight way, by its respective gate value $g_{i}$ . Our improved gradient updating process for MoE-LoRA is given by:
+
+$$
+\begin{array}{l} X_{new} = X - \eta \sum_{i=1}^{N_{Expert}} g_i \operatorname{Proj}_{col(B_i)}(\nabla_X \mathcal{L})^T \\ \quad\quad\;\; - \eta \sum_{i=1}^{N_{Expert}} g_i \operatorname{Proj}_{row(A_i)}(\nabla_X \mathcal{L}). \\ \end{array}
+$$
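+
+The update above can be sketched numerically as follows. This is our own hedged NumPy sketch: we read $Proj_{col(B_i)}(\cdot)$ and $Proj_{row(A_i)}(\cdot)$ as left- and right-multiplication by the standard orthogonal projectors $B_i(B_i^TB_i)^{-1}B_i^T$ and $A_i^T(A_iA_i^T)^{-1}A_i$, and the function name and toy shapes are assumptions.
+
+```python
+import numpy as np
+
+def gate_rescaled_step(X, grad_X, experts, gates, eta=0.1):
+    """One gate-rescaled, preconditioned step on the merged weight X.
+
+    experts: list of (B_i, A_i) pairs with B_i: (d, r) and A_i: (r, d).
+    gates:   list of gate values g_i.
+    """
+    step = np.zeros_like(X)
+    for g, (B, A) in zip(gates, experts):
+        proj_colB = B @ np.linalg.inv(B.T @ B) @ B.T   # projector onto col(B_i)
+        proj_rowA = A.T @ np.linalg.inv(A @ A.T) @ A   # projector onto row(A_i)
+        step += g * (proj_colB @ grad_X + grad_X @ proj_rowA)
+    return X - eta * step
+
+rng = np.random.default_rng(0)
+d, r = 5, 2
+experts = [(rng.normal(size=(d, r)), rng.normal(size=(r, d))) for _ in range(2)]
+X = rng.normal(size=(d, d))
+grad = rng.normal(size=(d, d))
+X_new = gate_rescaled_step(X, grad, experts, gates=[0.3, 0.7])
+```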
+
+We summarize our contributions as follows:
+
+- We integrate the mixture of LoRAs structure with the Riemannian preconditioners to alleviate both limited representation and sub-optimality issues of LoRA.
+- We emphasize a distortion issue behind per-expert preconditioning, and respectively propose a theoretical and an engineering solution for gate-value-rescaled gradient preconditioning of MoE-LoRA.
+- We implement and examine our rescaling approach for MoE-LoRA under a series of foundation models, illustrating our effectiveness across various tasks.
+
+# 2. Related Works
+
+# 2.1. LoRA and LoRA Variants
+
+LoRA (Hu et al., 2022) decomposes a full-rank matrix into a product of two low-rank matrices, which has been widely considered an effective solution for parameter-efficient fine-tuning. Studies have proposed several variants to reform LoRA: For initialization, PISSA (Meng et al., 2024) leverages singular value decomposition (SVD) to obtain the principal singular components of $W$ , while MiLoRA (Wang et al., 2025a) utilizes the secondary singular values and vectors. LoRA-Pro (Wang et al., 2025b) and LoRA-GA (Wang et al., 2024b) approximate the direction of initial gradients to align them with that of fully fine-tuning. LoRA+ (Hayou et al., 2024) introduces a learning-rate separating strategy with $\eta_B > \eta_A$ . ResLoRA (Shi et al., 2024b) and SIBO (Wen et al., 2024) accelerate convergence and mitigate over-smoothing by introducing residual paths. DoRA (Liu et al., 2024b) decomposes the weight vector into direction and magnitude and only uses its direction component. rsLoRA (Kalajdzievski, 2023) proposes a rank-stabilized scaling factor $\lambda_t = r_t^{1/2}$ to ensure stable gradient updates. To prevent overfitting, BiLoRA (Qiang et al., 2024) adopts a bi-level optimizing strategy, while others implement dropout mechanisms (Wang et al., 2024a; Lin et al., 2024).
+
+# 2.2. Mixture of LoRAs
+
+MoE has emerged as a critical framework for addressing complex tasks. By incorporating multiple expert modules, it dynamically selects appropriate experts based on specific inputs (Jacobs et al., 1991). Early studies, such as LoRAMoE (Dou et al., 2024) and MixLoRA (Li et al., 2024), have pioneered the introduction of the MoE-LoRA architecture by integrating LoRA experts for both global and downstream tasks. Afterward, MoE-LoRA has demonstrated its effectiveness across a range of fields such as continual learning (Dou et al., 2024; Yang et al., 2024), vision-language multi-modal tasks (Gou et al., 2023; Chen et al., 2024), and multi-task applications (Liu et al., 2023).
+
+Recent studies have focused on enhancing MoE-LoRA through architectural advancements and improved training strategies. For instance, MoLA (Gao et al., 2024) allocates a varying number of experts at different layers, and MixDA (Diao et al., 2023) introduces multiple domain-adaptive modules to support multi-domain knowledge. Other methods such as (Wu et al., 2024a; Liu et al., 2023; Wu et al., 2024b; Gou et al., 2023; Wang & Agarwal, 2022) have also been proposed for strengthening MoE-LoRA. To boost the training of MoE-LoRA, Luo et al. (Luo et al., 2024) address the random routing issue by introducing a contrastive loss (Shi et al., 2024a). At the same time, MoV (Zadouri et al., 2024) chooses to combine lightweight vectors with a sparse selection mechanism for efficient expert allocation. Other approaches, including (Dou et al., 2024; Li et al., 2024; Zhu et al., 2023), focus on load balancing among experts. However, to the best of our knowledge, there is still a lack of work on gradient optimizing (Bian et al., 2024) specifically for MoE-LoRA models.
+
+# 2.3. Gradient Preconditioners
+
+In most deep learning cases (Zhang et al., 2024a; Luo et al., 2025a), gradient descent algorithms update model parameters by calculating gradient-based updates. To accelerate the optimization process, the concept of gradient preconditioning has been introduced. Advanced techniques such as Adagrad (Duchi et al., 2011) dynamically adjust the learning rate by the accumulated squared gradients $G_{t} = \sum_{i=1}^{t} g_{i}^{2}$ and update the model by $\Delta \theta_{t} = -\eta G_{t}^{-1/2} \cdot g_{t}$ . Adam (Kingma & Ba, 2015) extends this approach by incorporating momentum and bias correction, scaling gradients through a diagonal preconditioner and resulting in updates of the form $\Delta \theta_{t} = -\eta \frac{m_{t}}{\sqrt{v_{t}} + \epsilon}$ , where $v_{t} = \beta_{2} v_{t-1} + (1 - \beta_{2}) g_{t}^{2}$ . AdamW (Loshchilov & Hutter, 2019) further adds decoupled weight decay to Adam.
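+
+The two diagonal preconditioners just described can be sketched in a few lines (a minimal scalar illustration of the textbook update rules, minimizing $f(\theta)=\theta^2$; step counts and learning rates below are arbitrary choices of ours):
+
+```python
+import numpy as np
+
+def adagrad_step(theta, grad, G_acc, eta=0.1, eps=1e-8):
+    """Adagrad: accumulate squared gradients, scale the step by G^{-1/2}."""
+    G_acc = G_acc + grad ** 2
+    return theta - eta * grad / (np.sqrt(G_acc) + eps), G_acc
+
+def adam_step(theta, grad, m, v, t, eta=0.1, b1=0.9, b2=0.999, eps=1e-8):
+    """Adam: momentum plus a diagonal second-moment preconditioner."""
+    m = b1 * m + (1 - b1) * grad
+    v = b2 * v + (1 - b2) * grad ** 2
+    m_hat = m / (1 - b1 ** t)              # bias-corrected first moment
+    v_hat = v / (1 - b2 ** t)              # bias-corrected second moment
+    return theta - eta * m_hat / (np.sqrt(v_hat) + eps), m, v
+
+# Minimize f(theta) = theta^2 (gradient 2*theta) with each optimizer.
+theta_a, G = 1.0, 0.0
+for _ in range(50):
+    theta_a, G = adagrad_step(theta_a, 2 * theta_a, G)
+
+theta_b, m, v = 1.0, 0.0, 0.0
+for t in range(1, 101):
+    theta_b, m, v = adam_step(theta_b, 2 * theta_b, m, v, t)
+```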
+
+Recent studies have provided theoretical support for scaled gradient descent methods under different preconditioning strategies. The core idea is to adjust both the direction and magnitude of updates by applying a scaling matrix to the gradients. Tong et al. (2021) demonstrate the local convergence of scaled gradient descent methods. Jia et al. (2024) extend this work by proving global convergence of scaled gradient descent for the least-squares matrix decomposition problem $\| AB^T - Y\|_F^2 / 2$ , showing that this approach achieves global convergence under different condition numbers. Other variants of scaled gradient descent have also emerged, such as the two regularization strategies proposed by Zhang et al. (2023; 2024c). In higher-dimensional settings, scaled gradient descent has been further extended to tensor optimization (Tong et al., 2022; Ma et al., 2024). Mishra et al. (Mishra et al., 2013; Mishra & Sepulchre, 2016) also applied Riemannian optimization principles to problems involving low-rank matrices. Considering the data's manifold geometry, a Riemannian metric $g_p(v, w)$ is introduced to guide gradient updates along the manifold. Recently, Zhang & Pilanci (2024) introduced the idea of Riemannian preconditioners to LoRA by attaching an $r \times r$ preconditioner to the gradients of the low-rank matrices. As a result, they achieve improved fine-tuning performance of LoRA compared with conventional gradient optimizers such as SGD and AdamW.
+
+# 3. Method
+
+We elaborate on our motivations and detail the modification we have made to the Riemannian preconditioning method specifically for MoE-LoRA. Our theoretical foundations and engineering solutions are also presented.
+
+# 3.1. Riemannian Preconditioner in LoRA Expert
+
+As a preliminary, we first briefly introduce the Riemannian preconditioner (Zhang & Pilanci, 2024). Suppose the pretrained model weight is $W$ and its additive low-rank components are $B$ and $A$ ; let $X = W + BA$ denote the whole weight matrix, and let $\mathcal{L}$ and $\eta$ denote the loss function and the learning rate, respectively. For the plain gradient descent method, the gradient updating process is described through Equations (1) to (4), in which the derivation from (2) to (3) relies on ignoring the second-order term in the learning rate. Obviously, $B\nabla_{A}\mathcal{L} + \nabla_{B}\mathcal{L}A$ in (4) serves as an approximation of the ideal FFT gradient of $X$ .
+
+$$
+\begin{array}{ll} X_{new} = W + B_{new} A_{new} & (1) \\ \quad\quad\;\, = W + (B - \eta \nabla_B \mathcal{L})(A - \eta \nabla_A \mathcal{L}) & (2) \\ \quad\quad\;\, \approx W + BA - \eta B \nabla_A \mathcal{L} - \eta \nabla_B \mathcal{L} A & (3) \\ \quad\quad\;\, = X - \eta (B \nabla_A \mathcal{L} + \nabla_B \mathcal{L} A) & (4) \\ \end{array}
+$$
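+
+A tiny numeric sanity check of the step from (2) to (3) (our own addition, with random stand-ins for the gradients): the gap between the exact product and the first-order approximation is exactly $\eta^2 \nabla_B\mathcal{L}\, \nabla_A\mathcal{L}$, i.e. $O(\eta^2)$.
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+d, r, eta = 6, 2, 1e-4
+B = rng.normal(size=(d, r))
+A = rng.normal(size=(r, d))
+grad_A = rng.normal(size=(r, d))   # stand-in for nabla_A L
+grad_B = rng.normal(size=(d, r))   # stand-in for nabla_B L
+
+exact  = (B - eta * grad_B) @ (A - eta * grad_A)       # step (2)
+approx = B @ A - eta * (B @ grad_A + grad_B @ A)       # step (3)
+# The dropped second-order term is exactly eta^2 * grad_B @ grad_A.
+gap = exact - approx
+```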
+
+Subsequently, according to the derivation chain rule and the simple fact that $X = W + BA$ , we directly obtain that $\nabla_A\mathcal{L} = (\nabla_A X)(\nabla_X\mathcal{L}) = B^T (\nabla_X\mathcal{L})$ , and likewise $\nabla_B\mathcal{L} = (\nabla_X\mathcal{L})A^T$ . Thus, (4) can be transformed to:
+
+$$
+X _ {n e w} = X - \eta \left[ B B ^ {T} \left(\nabla_ {X} \mathcal {L}\right) + \left(\nabla_ {X} \mathcal {L}\right) A ^ {T} A \right], \tag {5}
+$$
+
+which actually updates the model in a different direction compared to the FFT update formula $X_{new} = X - \eta \nabla_X\mathcal{L}$ . This phenomenon occurs since the distorted sub-space of $X$ constructed by $BA$ brings inconsistency between the optimal gradient descent within its manifold and that of the full matrix $X$ . To address this inconsistency, Zhang et al. (Zhang & Pilanci, 2024) scale the gradients of $A$ and $B$ by:
+
+$$
+\nabla_ {A} \mathcal {L} = \left(B ^ {T} B\right) ^ {- 1} \nabla_ {A} \mathcal {L} \tag {6}
+$$
+
+$$
+\nabla_ {B} \mathcal {L} = \nabla_ {B} \mathcal {L} \left(A A ^ {T}\right) ^ {- 1},
+$$
+
+so that (5) is expressed as:
+
+$$
+\begin{aligned} X_{new} &= X - \eta \left[ B \left(B^T B\right)^{-1} B^T \left(\nabla_X \mathcal{L}\right) + \left(\nabla_X \mathcal{L}\right) A^T \left(A A^T\right)^{-1} A \right] \\ &= X - \eta \left[ \mathrm{Proj}_{\mathrm{col}(B)} \left(\nabla_X \mathcal{L}\right) + \mathrm{Proj}_{\mathrm{row}(A)} \left(\nabla_X \mathcal{L}\right) \right], \end{aligned} \tag{7}
+$$
+
+where the update inside the manifold follows the projections of the full-matrix gradient onto the row space of $A$ and the column space of $B$ . It therefore approximates full fine-tuning better than the unscaled descent step.
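+The identities behind (5) and (7) can be checked numerically. A minimal numpy sketch (random matrices standing in for the weights and the full-matrix gradient; shapes are illustrative, not from the paper):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+m, n, r = 6, 5, 2
+B = rng.standard_normal((m, r))
+A = rng.standard_normal((r, n))
+G = rng.standard_normal((m, n))              # stands in for grad_X L
+
+grad_A, grad_B = B.T @ G, G @ A.T            # chain rule: grad_A = B^T G, grad_B = G A^T
+
+# Unscaled step, Eq. (4) vs Eq. (5): same direction B B^T G + G A^T A
+assert np.allclose(B @ grad_A + grad_B @ A, B @ B.T @ G + G @ A.T @ A)
+
+# Scaled step, Eq. (6)/(7): preconditioned gradients yield projections
+sA = np.linalg.solve(B.T @ B, grad_A)        # (B^T B)^{-1} grad_A
+sB = np.linalg.solve(A @ A.T, grad_B.T).T    # grad_B (A A^T)^{-1}
+P_col = B @ np.linalg.solve(B.T @ B, B.T)    # projector onto col(B)
+P_row = A.T @ np.linalg.solve(A @ A.T, A)    # projector onto row(A)
+assert np.allclose(B @ sA + sB @ A, P_col @ G + G @ P_row)
+assert np.allclose(P_col @ P_col, P_col)     # idempotent, i.e. a projection
+```
+
+The last assertion confirms that the scaled update really is a projection of the full-matrix gradient, as the derivation claims.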
+
+Inspired by this work, a straightforward way to extend their solution to MoE-LoRA is to scale the gradient of each LoRA expert individually by (6). However, the equation $X = W + BA$ takes a different form in MoE-LoRA:
+
+$$
+X = W + \sum_ {i = 1} ^ {N _ {E x p e r t}} g _ {i} B _ {i} A _ {i}, \tag {8}
+$$
+
+where $N_{Expert}$ denotes the number of activated experts and $g_{i}$ denotes the gate value of expert $i$ . As a result, each expert $i$ not only contributes a gate value $g_{i}$ to Equations (1)-(4), but also introduces an extra factor $g_{i}$ into (5), since the chain rule now gives $\nabla_{B_i}\mathcal{L} = g_i(\nabla_X\mathcal{L})A_i^T$ and $\nabla_{A_i}\mathcal{L} = g_iB_i^T (\nabla_X\mathcal{L})$ . To clarify further, we formally derive the whole result. Note that gate values are computed through a softmax involving non-linear operations, so we treat them as constants for an easier approximate derivation. Following the conventional Riemannian preconditioners in (6), we have:
+
+$$
+\begin{aligned} X_{new} &= W + \sum_{i=1}^{N_{Expert}} g_i \left(B_i - \eta \nabla_{B_i} \mathcal{L}\right) \left(A_i - \eta \nabla_{A_i} \mathcal{L}\right) \\ &\approx X - \eta \sum_{i=1}^{N_{Expert}} g_i \left( B_i \nabla_{A_i} \mathcal{L} + \nabla_{B_i} \mathcal{L} A_i \right) \\ &= X - \eta \sum_{i=1}^{N_{Expert}} g_i \left[ B_i \left(B_i^T B_i\right)^{-1} \nabla_{A_i} \mathcal{L} + \nabla_{B_i} \mathcal{L} \left(A_i A_i^T\right)^{-1} A_i \right] \end{aligned} \tag{9}
+$$
+
+$$
+\begin{aligned} &= X - \eta \sum_{i=1}^{N_{Expert}} g_i \left[ g_i B_i \left(B_i^T B_i\right)^{-1} B_i^T \left(\nabla_X \mathcal{L}\right) + g_i \left(\nabla_X \mathcal{L}\right) A_i^T \left(A_i A_i^T\right)^{-1} A_i \right] \end{aligned}
+$$
+
+$$
+\begin{aligned} &= X - \eta \sum_{i=1}^{N_{Expert}} g_i^2 \, \mathrm{Proj}_{\mathrm{col}(B_i)} \left(\nabla_X \mathcal{L}\right) - \eta \sum_{i=1}^{N_{Expert}} g_i^2 \, \mathrm{Proj}_{\mathrm{row}(A_i)} \left(\nabla_X \mathcal{L}\right), \end{aligned} \tag{10}
+$$
+
+in which step (9) applies the conventional Riemannian preconditioner scaling. Equation (10) should be interpreted as an ensemble of projections of the full-matrix gradient onto the row spaces of the $A$ experts and the column spaces of the $B$ experts.
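+The squared-gate weighting in (10) can likewise be verified numerically. A small sketch under illustrative assumptions (random experts, gates treated as constants, toy shapes):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(1)
+m, n, r, N = 6, 5, 2, 4
+G = rng.standard_normal((m, n))                      # stands in for grad_X L
+Bs = [rng.standard_normal((m, r)) for _ in range(N)]
+As = [rng.standard_normal((r, n)) for _ in range(N)]
+g = rng.random(N); g /= g.sum()                      # gate values, sum to 1
+
+step = np.zeros((m, n))
+for gi, B, A in zip(g, Bs, As):
+    grad_A = gi * (B.T @ G)                          # chain rule carries the gate
+    grad_B = gi * (G @ A.T)
+    sA = np.linalg.solve(B.T @ B, grad_A)            # per-expert preconditioning, Eq. (6)
+    sB = np.linalg.solve(A @ A.T, grad_B.T).T
+    step += gi * (B @ sA + sB @ A)                   # the forward gate reappears
+
+# Eq. (10): each projection ends up weighted by the *squared* gate value
+expected = sum(
+    gi**2 * (B @ np.linalg.solve(B.T @ B, B.T) @ G
+             + G @ A.T @ np.linalg.solve(A @ A.T, A))
+    for gi, B, A in zip(g, Bs, As))
+assert np.allclose(step, expected)
+```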
+
+# 3.2. Rescaling Preconditioners
+
+Equation (10) presents a squared-gate-weighted sum of an ensemble of gradient projections. Generally, activating more experts leads to smaller per-expert gate values and hence a more reduced assembled gradient; on the other hand, more balanced experts also reduce the assembled gradient, since the basic inequality $\sum_{i}g_{i}^{2} \geq \frac{(\sum_{i}g_{i})^{2}}{n} = \frac{1}{n}$ (where the gate values sum to one) attains equality exactly when all the $g_{i}$ are equal. As a result, the gradient of the full matrix $X$ is underestimated due to the squared gate values. From the perspective of manifolds and curvature, we can view $g_{i}$ in (8) as a manifold scalar: it shrinks $B_{i}A_{i}$ and thereby likely increases its curvature. The conventional Riemannian preconditioner, however, fails to take the manifold scalar $g_{i}$ into consideration, since it is designed for a single LoRA adapter.
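+The shrinkage argument is easy to check numerically. A tiny sketch with hypothetical gate distributions for $k$ activated experts:
+
+```python
+import numpy as np
+
+# With k activated gates summing to 1, sum(g_i^2) >= 1/k, with equality
+# for perfectly balanced gates: more (and more balanced) experts shrink
+# the assembled gradient in Eq. (10) further.
+for k in (2, 4, 10):
+    balanced = np.full(k, 1.0 / k)
+    skewed = np.array([0.9] + [0.1 / (k - 1)] * (k - 1))  # also sums to 1
+    assert np.isclose((balanced**2).sum(), 1.0 / k)
+    assert (skewed**2).sum() > (balanced**2).sum() >= 1.0 / k - 1e-12
+```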
+
+To alleviate this squaring issue, we introduce a further rescaling step for the Riemannian preconditioners:
+
+$$
+\nabla_{A_i} \mathcal{L} = \frac{\left(B_i^T B_i\right)^{-1} \nabla_{A_i} \mathcal{L}}{g_i} \tag{11}
+$$
+
+$$
+\nabla_ {B _ {i}} \mathcal {L} = \frac {\nabla_ {B _ {i}} \mathcal {L} (A _ {i} A _ {i} ^ {T}) ^ {- 1}}{g _ {i}},
+$$
+
+which replaces (6) in the derivation of Equation (9), eliminating one factor of $g_{i}$ and keeping only the first power of $g_{i}$ in the final equation (10). Through this transformation, the final ensemble of multi-expert projections shares an equivalent scale with the projection of a single LoRA adapter, as shown in Equation (12). Training of an MoE-LoRA is therefore relieved from the underestimation.
+
+$$
+\begin{aligned} X_{new} &= X - \eta \sum_{i=1}^{N_{Expert}} g_i \, \mathrm{Proj}_{\mathrm{col}(B_i)} \left(\nabla_X \mathcal{L}\right) - \eta \sum_{i=1}^{N_{Expert}} g_i \, \mathrm{Proj}_{\mathrm{row}(A_i)} \left(\nabla_X \mathcal{L}\right). \end{aligned} \tag{12}
+$$
+
+We claim that (12) further approaches global full fine-tuning for two basic reasons. Firstly, it is derived by applying Riemannian preconditioners to calibrate each LoRA expert's gradient (given by (6)), ensuring that each expert approaches its respective local full-rank training behavior (i.e., per-expert full fine-tuning equivalency), following Zhang et al. (Zhang & Pilanci, 2024). Secondly, each expert's space is further distorted by its respective gate value, creating an inconsistency between the per-expert local optima and the global optimum; (11) therefore introduces the gate value $g_{i}$ as a rescaler for each expert's Riemannian preconditioner, relieving the distortion caused by multiplying by the gate value during forwarding. Combining (11) with (9) yields (12), which can thus further approach global full fine-tuning equivalency. Intuitively, larger gate values introduce less distortion, so through (12) experts with larger gate values are rescaled less than those with smaller ones.
+
+# 3.3. Engineering Approximation
+
+Although Equation (11) provides a way to eliminate the underestimation for MoE-LoRA, it is unrealizable in practice, since each LoRA module has a distinct $g_{i}$ for every single token of every batch sample. During training, backpropagation runs only after averaging the losses over all tokens of all samples in a batch, so it is impossible to reconstruct and rescale the gradient contributed by each individual token when optimizing a LoRA module. Alternatively, we design an engineering approximation to (11) and (12) by replacing each gate value $g_{i}$ with its square root $\sqrt{g_i}$ during model forwarding. Consequently, Equation (12) is achieved with only the preconditioners of (6), because the quadratic gate-value terms $g_i^2$ in Equation (10) naturally become linear terms $g_i$ .
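+A numeric sketch of this square-root substitution (random matrices, gates treated as constants, toy shapes): forwarding with $\sqrt{g_i}$ while applying only the plain preconditioner (6) recovers the first-power gate weighting of (12):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(3)
+m, n, r, N = 6, 5, 2, 4
+G = rng.standard_normal((m, n))                      # stands in for grad_X L
+Bs = [rng.standard_normal((m, r)) for _ in range(N)]
+As = [rng.standard_normal((r, n)) for _ in range(N)]
+g = rng.random(N); g /= g.sum()                      # gate values, sum to 1
+
+step = np.zeros((m, n))
+for gi, B, A in zip(g, Bs, As):
+    c = np.sqrt(gi)                                  # sqrt(g) forward coefficient
+    grad_A = c * (B.T @ G)                           # chain rule through sqrt(g)*B A
+    grad_B = c * (G @ A.T)
+    sA = np.linalg.solve(B.T @ B, grad_A)            # plain preconditioner, Eq. (6)
+    sB = np.linalg.solve(A @ A.T, grad_B.T).T
+    step += c * (B @ sA + sB @ A)                    # sqrt(g) * sqrt(g) = g
+
+# Eq. (12): projections now carry first-power gate weights g_i
+expected = sum(
+    gi * (B @ np.linalg.solve(B.T @ B, B.T) @ G
+          + G @ A.T @ np.linalg.solve(A @ A.T, A))
+    for gi, B, A in zip(g, Bs, As))
+assert np.allclose(step, expected)
+```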
+
+Replacing $g_{i}$ by $\sqrt{g_i}$ , however, also perturbs the forward pass, as the square roots no longer sum to 1. One possible remedy is to re-normalize the square roots so that they sum to 1, but this introduces an inconsistency between the expert weights used in the forward and backward passes. We therefore propose another strategy that accommodates both aspects: manually splitting Equation (8) into optimizable and non-optimizable components, so as to satisfy the requirements of both the forward pass in (8) and the backward pass in (12). During forwarding, the proposed strategy is simply expressed as:
+
+$$
+X = \hat{W} + \sum_{i=1}^{N_{Expert}} \left[ \sqrt{g_i}\, B_i A_i + \left(g_i - \sqrt{g_i}\right) \hat{B}_i \hat{A}_i \right], \tag{13}
+$$
+
+where we define the hat symbol $\hat{}$ as gradient detaching: $\hat{p}$ denotes that the variable $p$ does not require gradient and is detached from gradient tracking throughout the network. By decomposing optimizable and non-optimizable components in this way, the low-rank matrices $A$ and $B$ can be optimized following (12) (see Appendix E for detailed derivations), while the forward result remains identical to that of the conventional module. Moreover, by keeping the optimizable $g_{i}$ terms in forwarding and treating all the $\sqrt{g_i}$ as constants not subject to optimization, (13) also preserves the conventional training behavior of the gates. Additionally, this modification introduces only minimal overhead to the original forward computation.
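+A minimal sketch of the decomposition in (13) for a single token. Numpy stands in for an autograd framework here, so the detaching is only indicated in comments; in an actual PyTorch implementation the second term would be wrapped in `.detach()`. Names and shapes are hypothetical:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(4)
+d_in, d_out, r, N = 5, 6, 2, 3
+x = rng.standard_normal(d_in)                        # a single input token
+W = rng.standard_normal((d_out, d_in))               # frozen pretrained weight
+experts = [(rng.standard_normal((d_out, r)),         # B_i
+            rng.standard_normal((r, d_in)))          # A_i
+           for _ in range(N)]
+gates = rng.random(N); gates /= gates.sum()
+
+def moe_forward(x):
+    """Eq. (13): sqrt(g_i) * B_i A_i x carries the gradient, while the
+    correction (g_i - sqrt(g_i)) * B_i A_i x is treated as a constant
+    (it would be .detach()-ed, so it contributes value but no gradient)."""
+    out = W @ x
+    for (B, A), gi in zip(experts, gates):
+        live = np.sqrt(gi) * (B @ (A @ x))           # gradient-carrying path
+        frozen = (gi - np.sqrt(gi)) * (B @ (A @ x))  # detached constant path
+        out = out + live + frozen
+    return out
+
+# The forward value is unchanged: live + frozen = g_i * B_i A_i x
+reference = W @ x + sum(gi * (B @ (A @ x)) for (B, A), gi in zip(experts, gates))
+assert np.allclose(moe_forward(x), reference)
+```
+
+The assertion mirrors the claim in the text: the split preserves the conventional forward result exactly, while backpropagation sees only the $\sqrt{g_i}$ coefficient.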
+
+# 4. Experiments
+
+We present a series of comparative experiments to evaluate MoE-LoRA across various downstream tasks (Zhang et al., 2024b; Luo et al., 2025b), including question answering, the GLUE Benchmark, and a vision-language task. Two types of experimental candidates are mainly involved: (1) MoE-LoRA with experts updated independently using a Riemannian scaled optimizer; and (2) MoE-LoRA updated using a Riemannian scaled optimizer plus our proposed rescaling technique (the engineering approximation). We implement both on the SGD and AdamW optimizers, respectively. As a further reference, we also present comparisons and possible integrations with previous MoE-LoRA baselines, such as MoLA (Gao et al., 2024). Finally, to support our theoretical foundation, we conduct an ablation study by assessing our forwarding
+
+Table 1. Question answering evaluations across four QA datasets with Llama-3.2-3B as the foundation model. Our gate-based rescaling methodology outperforms conventional Riemannian preconditioned optimizers, in terms of both SGD and AdamW. Each pair of comparing candidates is trained through the same steps until they both achieve good stable performances.
+
+| | ScienceQA | CommonsenseQA | OpenBookQA | SIQA | avg. |
+| --- | --- | --- | --- | --- | --- |
+| $RSGD_{20,10,4}$ | 62.8 | 52.4 | 53.2 | 65.7 | 58.5 |
+| $gRSGD_{20,10,4}$ | 70.1 | 55.4 | 59.8 | 68.5 | 63.5 |
+| $RAdamW_{20,10,4}$ | 82.6 | 67.7 | 70.4 | 81.5 | 75.6 |
+| $gRAdamW_{20,10,4}$ | 83.8 | 68.2 | 72.4 | 82.3 | 76.7 |
+
+
+Figure 2. Convergence of $RSGD_{20,10,4}$ and $gRSGD_{20,10,4}$ MoE-LoRA with Llama-3.2-3B as the foundation model. The x-axis represents training steps, the left y-axis in each figure represents the training or validation losses, and the right y-axis represents the accuracy on the test sets. Without our gate-based rescaling method, the training and validation losses of the RSGD optimizer across the four tasks drop significantly around steps 100-200; with our method, they drop earlier, around steps 0-100.
+
+revisions only under a classic optimizer without Riemannian preconditioner support.
+
+# 4.1. Experimental Setup
+
+For most experiments, unless otherwise specified, we construct a mixture of LoRA modules with a total of 20 experts, a rank of 4 for each expert, and the top-10 experts activated each time. A range of other MoE architectural settings is also discussed in the ablation section. We perform experiments with Llama-3.2-3B (Touvron et al., 2023), GLM-4-9B (Zeng et al., 2024), and LLaVA-v1.5-7B (Liu et al., 2024a) as the foundation models. During training, we follow a linear-decay learning-rate schedule. We assign the gate module a relatively smaller learning rate than the other trainable components to achieve stable training: the reduced learning rate prevents the model from experiencing abrupt and erratic routing changes. For further stabilization, we also cap the gate's maximum gradient norm at 1.0. We carefully assign different initial learning rates for the various tasks to ensure all models reach their best performance within a feasible running time.
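+The gradient-norm cap on the gate can be sketched as follows (a hypothetical numpy helper for illustration; in practice a framework utility such as PyTorch's `clip_grad_norm_` plays this role):
+
+```python
+import numpy as np
+
+def clip_grad_norm(grads, max_norm=1.0):
+    """Scale a list of gradient arrays so that their joint L2 norm is at
+    most max_norm, mirroring the gate-stabilization step described above."""
+    total = np.sqrt(sum(float((g ** 2).sum()) for g in grads))
+    scale = min(1.0, max_norm / (total + 1e-12))
+    return [g * scale for g in grads]
+
+# A gate gradient whose norm exceeds the cap gets rescaled to norm ~1.0
+grads = [np.full((3, 3), 2.0), np.full((4,), -1.5)]
+clipped = clip_grad_norm(grads, max_norm=1.0)
+norm = np.sqrt(sum(float((g ** 2).sum()) for g in clipped))
+assert norm <= 1.0 + 1e-6
+```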
+
+We denote the number of experts, the top-k, and the per-expert rank as $n, k, r$ , respectively. Experimental candidates using conventional Riemannian preconditioned optimizers are denoted $RSGD_{n,k,r}$ and $RAdamW_{n,k,r}$ , where the leading $R$ stands for Riemannian, while candidates integrated with our gate-based rescaling approach are denoted $gRSGD_{n,k,r}$ and $gRAdamW_{n,k,r}$ , where the leading $g$ indicates that we rescale the gradient by gate values.
+
+# 4.2. Question Answering Evaluations
+
+We evaluate our proposed method on several question-answering benchmarks: ScienceQA (Lu et al., 2022), CommonsenseQA (Talmor et al., 2019), OpenBookQA (Mihaylov et al., 2018), and SIQA (Sap et al., 2019). These datasets encompass a diverse range of domains and question types, such as science, social interaction, common sense, and open-book exams. All experimental candidates use Llama-3.2-3B as their foundation model. For the SGD optimizer, we set the initial learning rate to $3 \times 10^{-5}$ for every LoRA expert; for the AdamW optimizer, we use an initial learning rate of $1 \times 10^{-5}$ . We run every experiment until its performance stabilizes, and in particular ensure that each pair of comparing candidates (i.e., independently Riemannian-preconditioned MoE-LoRA, and the same model with our proposed rescaling approach) is trained for the same number of steps so that the comparison is fair. Specifically, depending on the complexity of the dataset, we choose between two settings, 800 or 1,400 steps, for all the QA evaluations, except for $RAdamW$ and $gRAdamW$
+
+Table 2. GLUE Benchmark evaluations across nine tasks with Llama-3.2-3B and GLM-4-9B as the foundation models. Our gate-based rescaling method contributes an overall improvement over GLUE Benchmark, in terms of Riemannian preconditioned SGD and AdamW.
+
+| Model | Method | CoLA | SST-2 | MRPC | STS-B | QQP | MNLI | QNLI | RTE | WNLI | avg. |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Llama 3.2 (3B) | $RSGD_{20,10,4}$ | 48.25 | 93.58 | 73.57 | 48.15 | 80.22 | 62.42 | 78.39 | 62.45 | 39.44 | 65.16 |
+| | $gRSGD_{20,10,4}$ | 55.86 | 94.95 | 76.29 | 56.70 | 83.48 | 80.37 | 83.42 | 72.92 | 52.11 | 72.90 |
+| | $RAdamW_{20,10,4}$ | 65.94 | 96.56 | 70.03 | 60.12 | 85.36 | 81.36 | 91.55 | 53.95 | 45.07 | 72.22 |
+| | $gRAdamW_{20,10,4}$ | 67.38 | 96.67 | 81.57 | 61.08 | 86.35 | 81.59 | 91.32 | 55.75 | 47.89 | 74.40 |
+| GLM 4 (9B) | $RSGD_{20,10,4}$ | 62.33 | 91.74 | 82.32 | 63.06 | 85.36 | 81.67 | 90.27 | 62.59 | 76.05 | 77.27 |
+| | $gRSGD_{20,10,4}$ | 62.97 | 95.18 | 81.68 | 63.68 | 86.84 | 86.27 | 91.17 | 80.21 | 77.46 | 80.61 |
+| | $RAdamW_{20,10,4}$ | 62.54 | 45.53 | 79.30 | 64.15 | 88.72 | 88.03 | 91.58 | 89.56 | 84.51 | 77.10 |
+| | $gRAdamW_{20,10,4}$ | 62.25 | 46.22 | 79.59 | 64.36 | 88.82 | 88.19 | 91.66 | 91.37 | 85.92 | 77.60 |
+
+on CommonsenseQA, which we train for up to 2,000 steps to achieve a clearer distinction between the two comparable candidates. We present the evaluated performances in Table 1. We observe that: (1) Riemannian preconditioned optimizers incorporating our approach achieve better performance on every QA benchmark, albeit with varying degrees of improvement; and (2) overall, we contribute more to Riemannian preconditioned SGD than to AdamW: we improve the performance of $RSGD$ by around $8.5\%$ , while we improve $RAdamW$ by around $1.5\%$ .
+
+Besides the improvements in final performance, we also observe faster convergence under our optimization. To display this clearly, we plot the loss curves and metric variations for the four question-answering datasets under the SGD optimizer in Figure 2. It is clearly shown that $gRSGD$ converges faster than $RSGD$ , in terms of training and evaluation losses as well as accuracy.
+
+# 4.3. Performance on GLUE Benchmark
+
+To comprehensively examine our effectiveness, we perform a series of downstream evaluations on the GLUE benchmark (Wang et al., 2019), a collection of resources for evaluating natural language understanding. We first run all the GLUE evaluations with Llama-3.2-3B as the foundation model and present the results in Table 2. For most SGD experiments we set the initial learning rate for LoRA experts to $3 \times 10^{-5}$ , except WNLI, for which we use $3 \times 10^{-6}$ ; for AdamW experiments we choose an initial learning rate from $\{3 \times 10^{-5}, 1 \times 10^{-5}\}$ . For most datasets we train for 2,000 steps, excluding some AdamW experiments that we stop early at around 1,000 steps, since they appear to have converged or even overfit. Table 2 illustrates our effectiveness across various downstream applications as well as in the overall assessment under Llama-3.2-3B. In terms of overall performance, our approach improves $RSGD$ and $RAdamW$ by $11.9\%$ and $3.0\%$ , respectively.
+
+Table 3. Visual7W and VMCBench performances after training for 1,000 steps, with LLaVA-v1.5-7B as the foundation model. (For VMCBench we evaluate on 100 samples, so the accuracy has at most two significant digits; we therefore list all numbers as percentages for easier reading.)
+
+| | Visual7W | VMCBench |
+| --- | --- | --- |
+| $RSGD_{20,10,4}$ | 68% | 59% |
+| $gRSGD_{20,10,4}$ | 72% | 70% |
+| $RAdamW_{20,10,4}$ | 77% | 75% |
+| $gRAdamW_{20,10,4}$ | 78% | 75% |
+
+Subsequently, we extend the experiments to a larger foundation model, GLM-4-9B. Since the 9B model is more powerful at few-shot learning, for some datasets such as SST-2 we set lower learning rates, such as $3 \times 10^{-6}$ and $1 \times 10^{-6}$ for SGD and AdamW respectively, to ensure a clear loss-decreasing period can be observed. We train each pair of competing candidates for the same number of steps. Table 2 also illustrates the performance of training MoE-LoRA through the different optimizing strategies with GLM-4-9B. The results again show an overall improvement: in particular, we improve the average performance of $RSGD$ by around $4.3\%$ , and that of $RAdamW$ by around $0.7\%$ .
+
+# 4.4. Performance on LLaVA
+
+Beyond purely textual tasks (Bi et al., 2025), vision-language cross-modal tasks have garnered increasing attention in recent years, with notable achievements such as LLaVA and CogVLM (Chen et al., 2024; Wang et al., 2024c; Ge et al., 2021; Jin et al., 2024). We therefore further evaluate our gate-based rescaling approach on computer vision tasks (Feng et al., 2022; 2025). Specifically, we implement an MoE-LoRA architecture for the well-known vision-language foundation model LLaVA-v1.5-7B (Chen et al., 2024), introducing trainable MoE-LoRA adapters into both its visual and textual modules. For evaluation, we employ the Visual7W (Zhu et al., 2016) and VMCBench (Zhang et al., 2025b) datasets,
+
+which both consist of multimodal samples, each pairing a multiple-choice question with a related image; the question can be answered by understanding the provided image. Visual7W is a subset of the Visual Genome (Krishna et al., 2017) dataset, while VMCBench is a benchmark created from 20 existing VQA datasets. For VMCBench we use only the dev set, since the test set is unlabeled: we take 900 of the 1,000 labeled samples for training and the remaining 100 for evaluation. Table 3 exhibits the results of all experimental candidates. Our approach consistently demonstrates visible improvements, especially for SGD.
+
+# 4.5. Compare and Integrate with MoE-LoRA Baselines
+
+We then compare and integrate our method with existing MoE-LoRA baselines. We consider two baselines: (1) the pure mixture of LoRAs (Liu et al., 2023), which we denote MoELoRA and equip with token-level routing; and (2) MoLA (Gao et al., 2024), a MoE-LoRA variant that focuses on assigning different numbers of experts to different layers, showing that higher layers need more LoRA experts. Note that our proposed gate-based rescaling approach can be integrated with most MoE-LoRA variants, since they do not conflict. Taking MoLA as an example, we integrate our method by implementing a model with more experts in its higher layers and training it with Riemannian preconditioners plus the gate-based rescaling approach. We reproduce MoELoRA and MoLA, implement the integrations, and report their performances in Table 4. We use Llama-3.2-3B as the foundation model and follow MoLA's configuration: the per-expert rank is 4, top-k is 2, and the total number of experts across all layers is 140. Thus, MoELoRA and our method assign 5 experts to each layer, while MoLA assigns 2, 4, 6, and 8 experts to the bottom, lower-middle, higher-middle, and top layers, respectively. In Table 4 we denote this special assignment strategy as (2,4,6,8) and the uniform assignment as (5,5,5,5), where each digit covers seven layers of Llama-3.2-3B. Our method still provides an enhancement in the context of the MoLA architecture.
+
+# 4.6. Ablation Study
+
+Theoretical Dependence. Although our proposed approach is grounded in the context of Riemannian preconditioners, it is important to note that our engineering implementation does not inherently require coexistence with them, because our modifications only alter the forward-propagation conventions of MoE-LoRA. This raises a vital question about the standalone efficacy of our modifications in enhancing MoE-LoRA's performance, without the Riemannian preconditioning context. Ideally, since a conventional un-preconditioned optimizer does not guarantee a projection of the full-matrix gradient in the low-rank space, merely normalizing the sum of expert gradients by replacing $g_{i}$ with $\sqrt{g_i}$ should have a trivial effect for it. To confirm this, we conduct an ablation study by integrating our gate-based revision with a conventional un-preconditioned SGD optimizer. The loss curves shown in Figure 3 illustrate that applying our approach directly on a pure SGD optimizer does not help, which conversely demonstrates that our refinement is tightly coupled with the Riemannian preconditioning algorithm.
+
+Figure 3. Curves of ScienceQA training losses under the optimization of conventional and Riemannian preconditioned SGDs, both with and without the gate-based rescaling approach. Llama-3.2-3B serves as the foundation model.
+
+Various MoE architectures. To demonstrate that our proposed approach generalizes to various LoRA-mixture settings, we construct different MoE-LoRA architectures for further exploration, varying the number of experts, the per-expert rank, and the top-k. Specifically, we test seven structural configurations on the ScienceQA dataset, training all candidates for 800 steps with the same initial learning rate. Table 5 exhibits the results, showing that we outperform under most MoE structures. Moreover, the SGD results indicate that variations in expert number or per-expert rank have limited impact on our effectiveness, while larger top-k values roughly yield higher boosts. This observation aligns with our theoretical analysis, which suggests that a larger number of activated experts results in smaller per-expert gate values, leaving a larger margin for our revision to take effect.
+
+# 5. Conclusion
+
+We introduce Riemannian gradient preconditioners to the training of a mixture of low-rank experts (MoE-LoRA). Instead of directly attaching Riemannian preconditioners to each expert's gradient in pursuit of local optimality, we argue that multiplying each expert $B_{i}A_{i}$ by its respective gate value $g_{i}$ during forwarding further rescales the manifold constructed by expert $i$ . Riemannian preconditioners designed for MoE-LoRA should therefore be revised to incorporate gate values. To approximate this concept, we propose an engineering solution that decomposes the forwarded variables into optimizable and non-optimizable components. Experiments across various downstream tasks demonstrate our performance improvement over conventional Riemannian preconditioners, and ablation studies further support our theoretical foundation and generality. Our work can be applied to fields such as efficient and low-resource model training, continual or multi-task learning, stabilized training, and modular task adaptation.
+
+Table 4. Baseline comparison and integration. The first three lines compare pure MoE-LoRA (MoELoRA-SGD), MoLA (MoLA-SGD), and our gate-rescaled Riemannian preconditioning method (MoELoRA-gRSGD). The last two lines provide MoLA integrated with the conventional (MoLA-RSGD) and gate-rescaled (MoLA-gRSGD) preconditioning methods, respectively. All candidates are trained using SGD optimizers for up to 2,000 steps.
+
+| Method | Experts | ScienceQA | CommonsenseQA | OpenBookQA | SIQA | avg. |
+| --- | --- | --- | --- | --- | --- | --- |
+| MoELoRA-SGD | (5,5,5,5) | 54.68 | 48.90 | 48.40 | 57.92 | 52.47 |
+| MoELoRA-gRSGD (Ours) | (5,5,5,5) | 70.01 | 54.80 | 63.60 | 64.45 | 63.22 |
+| MoLA-SGD | (2,4,6,8) | 54.99 | 49.20 | 52.80 | 58.94 | 53.98 |
+| MoLA-RSGD | (2,4,6,8) | 68.03 | 53.90 | 59.80 | 64.08 | 61.45 |
+| MoLA-gRSGD (Ours) | (2,4,6,8) | 70.46 | 56.30 | 64.00 | 64.90 | 63.92 |
+
+Table 5. Accuracies and boosts on ScienceQA for conventional and gate-rescaled Riemannian optimizers under various MoE architectures. Llama-3.2-3B serves as the foundation model.
+
+| n/k/r | RSGD | gRSGD | Boost | RAdamW | gRAdamW | Boost |
+| --- | --- | --- | --- | --- | --- | --- |
+| 5/5/4 | 65.78 | 71.31 | 8.41%↑ | 76.67 | 78.19 | 1.98%↑ |
+| 8/5/4 | 65.11 | 69.96 | 7.45%↑ | 78.69 | 79.45 | 0.97%↑ |
+| 10/5/4 | 64.34 | 69.33 | 7.76%↑ | 79.05 | 80.31 | 1.59%↑ |
+| 10/5/2 | 72.03 | 78.06 | 8.37%↑ | 79.63 | 80.13 | 0.63%↑ |
+| 10/5/1 | 79.95 | 87.28 | 9.17%↑ | 80.71 | 81.11 | 0.50%↑ |
+| 10/10/2 | 68.62 | 77.16 | 12.45%↑ | 77.47 | 77.65 | 0.23%↑ |
+| 10/2/2 | 79.68 | 81.16 | 1.86%↑ | 83.63 | 83.45 | -0.22%↓ |
+
+# Acknowledgments
+
+This work has been supported by the National Natural Science Foundation of China (NSFC) for Distinguished Young Scholar under Grant 62425601, the NSFC under Grant 62406036, the National Key Research and Development Program of China under Grant 2024YFC3308500, the State Key Laboratory of Networking and Switching Technology under Grant NST20250110, the New Cornerstone Science Foundation through the XPLORER PRIZE, and also sponsored by SMP-Zhipu.AI Large Model Cross-Disciplinary Fund under Grant ZPCG20241029322.
+
+# Impact Statement
+
+This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
+
+# References
+
+Bi, W., Kou, F., Shi, L., Li, Y., Li, H., Chen, J., and Xu, M. Leveraging the dual capabilities of llm: Llm-enhanced text mapping model for personality detection. In AAAI, 2025.
+Bian, A., Li, W., Yuan, H., Wang, M., Zhao, Z., Lu, A., Ji, P., Feng, T., et al. Make continual learning stronger via c-flat. NeurIPS, 2024.
+Chen, S., Jie, Z., and Ma, L. Llava-mole: Sparse mixture of lora experts for mitigating data conflicts in instruction finetuning mllms. arXiv e-prints, pp. arXiv-2401, 2024.
+Diao, S., Xu, T., Xu, R., Wang, J., and Zhang, T. Mixture-of-domain-adapters: Decoupling and injecting domain knowledge to pre-trained language models' memories. In 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023, pp. 5113-5129. Association for Computational Linguistics (ACL), 2023.
+Dou, S., Zhou, E., Liu, Y., Gao, S., Shen, W., Xiong, L., Zhou, Y., Wang, X., Xi, Z., Fan, X., et al. Loramoe:
+
+Alleviating world knowledge forgetting in large language models via moe-style plugin. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1932-1945, 2024.
+Duchi, J., Hazan, E., and Singer, Y. Adaptive subgradient methods for online learning and stochastic optimization. Journal of machine learning research, 12(7), 2011.
+Feng, T., Wang, M., and Yuan, H. Overcoming catastrophic forgetting in incremental object detection via elastic response distillation. In CVPR, 2022.
+Feng, T., Li, W., Zhu, D., Yuan, H., Zheng, W., Zhang, D., and Tang, J. Zeroflow: Overcoming catastrophic forgetting is easier than you think. ICML, 2025.
+Fu, J., Ge, X., Xin, X., Karatzoglou, A., Arapakis, I., Wang, J., and Jose, J. M. Iisan: Efficiently adapting multimodal representation for sequential recommendation with decoupled peft. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 687-697, 2024.
+Gao, C., Chen, K., Rao, J., Sun, B., Liu, R., Peng, D., Zhang, Y., Guo, X., Yang, J., and Subrahmanian, V. Higher layers need more lora experts. arXiv e-prints, pp. arXiv-2402, 2024.
+Ge, X., Chen, F., Jose, J. M., Ji, Z., Wu, Z., and Liu, X. Structured multi-modal feature embedding and alignment for image-sentence retrieval. In Proceedings of the 29th ACM international conference on multimedia, pp. 5185-5193, 2021.
+Gou, Y., Liu, Z., Chen, K., Hong, L., Xu, H., Li, A., Yeung, D.-Y., Kwok, J. T., and Zhang, Y. Mixture of cluster-conditional lora experts for vision-language instruction tuning. arXiv e-prints, pp. arXiv-2312, 2023.
+Hayou, S., Ghosh, N., and Yu, B. Lora+: Efficient low rank adaptation of large models. In International Conference on Machine Learning, pp. 17783-17806. PMLR, 2024.
+Hu, E. J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L., Chen, W., et al. Lora: Low-rank adaptation of large language models. *ICLR*, 1(2):3, 2022.
+Jacobs, R. A., Jordan, M. I., Nowlan, S. J., and Hinton, G. E. Adaptive mixtures of local experts. Neural computation, 3(1):79-87, 1991.
+Jia, X., Wang, H., Peng, J., Feng, X., and Meng, D. Preconditioning matters: Fast global convergence of non-convex matrix factorization via scaled gradient descent. Advances in Neural Information Processing Systems, 36, 2024.
+
+Jin, H., Zhang, Y., Shi, L., Zhang, S., Kou, F., Yang, J., Zhu, C., and Luo, J. An end-to-end graph attention network hashing for cross-modal retrieval. Advances in Neural Information Processing Systems, 37:2106-2126, 2024.
+Kalajdzievski, D. A rank stabilization scaling factor for fine-tuning with lora. arXiv e-prints, pp. arXiv-2312, 2023.
+Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. In Bengio, Y. and LeCun, Y. (eds.), 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015.
+Krishna, R., Zhu, Y., Groth, O., Johnson, J., Hata, K., Kravitz, J., Chen, S., Kalantidis, Y., Li, L.-J., Shamma, D. A., et al. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International journal of computer vision, 123:32-73, 2017.
+Li, D., Ma, Y., Wang, N., Cheng, Z., Duan, L., Zuo, J., Yang, C., and Tang, M. Mixlora: Enhancing large language models fine-tuning with lora based mixture of experts. CoRR, 2024.
+Lin, Y., Ma, X., Chu, X., Jin, Y., Yang, Z., Wang, Y., and Mei, H. Lora dropout as a sparsity regularizer for overfitting control. CoRR, 2024.
+Liu, H., Li, C., Wu, Q., and Lee, Y. J. Visual instruction tuning. Advances in neural information processing systems, 36, 2024a.
+Liu, Q., Wu, X., Zhao, X., Zhu, Y., Xu, D., Tian, F., and Zheng, Y. Moelora: An moe-based parameter efficient fine-tuning method for multi-task medical applications. CoRR, 2023.
+Liu, S., Wang, C., Yin, H., Molchanov, P., Wang, Y. F., Cheng, K., and Chen, M. Dora: Weight-decomposed low-rank adaptation. In *Forty-first International Conference on Machine Learning*, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net, 2024b.
+Loshchilov, I. and Hutter, F. Decoupled weight decay regularization. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019.
+Lu, P., Mishra, S., Xia, T., Qiu, L., Chang, K.-W., Zhu, S.-C., Tafjord, O., Clark, P., and Kalyan, A. Learn to explain: Multimodal reasoning via thought chains for science question answering. Advances in Neural Information Processing Systems, 35:2507-2521, 2022.
+Luo, H., Chen, G., Zheng, Y., Wu, X., Guo, Y., Lin, Q., Feng, Y., Kuang, Z., Song, M., Zhu, Y., et al. Hypergraphrag: Retrieval-augmented generation with hypergraph-structured knowledge representation. arXiv, 2025a.
+Luo, H., Guo, Y., Lin, Q., Wu, X., Mu, X., Liu, W., Song, M., Zhu, Y., Tuan, L. A., et al. Kbqa-o1: Agentic knowledge base question answering with Monte Carlo tree search. arXiv preprint, 2025b.
+Luo, T., Lei, J., Lei, F., Liu, W., He, S., Zhao, J., and Liu, K. Moelora: Contrastive learning guided mixture of experts on parameter-efficient fine-tuning for large language models. arXiv e-prints, pp. arXiv-2402, 2024.
+Ma, C., Xu, X., Tong, T., and Chi, Y. Provably accelerating ill-conditioned low-rank estimation via scaled gradient descent, even with overparameterization. Explorations in the Mathematics of Data Science: The Inaugural Volume of the Center for Approximation and Mathematical Data Analytics, pp. 133-165, 2024.
+Meng, F., Wang, Z., and Zhang, M. Pissa: Principal singular values and singular vectors adaptation of large language models. Advances in Neural Information Processing Systems, 37:121038-121072, 2024.
+Mihaylov, T., Clark, P., Khot, T., and Sabharwal, A. Can a suit of armor conduct electricity? a new dataset for open book question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 2381-2391, 2018.
+Mishra, B. and Sepulchre, R. Riemannian preconditioning. SIAM Journal on Optimization, 26(1):635-660, 2016.
+Mishra, B., Meyer, G., Bach, F., and Sepulchre, R. Low-rank optimization with trace norm penalty. SIAM Journal on Optimization, 23(4):2124-2149, 2013.
+Qiang, R., Zhang, R., and Xie, P. Bilora: A bi-level optimization framework for overfitting-resilient low-rank adaptation of large pre-trained models. CoRR, 2024.
+Sap, M., Rashkin, H., Chen, D., LeBras, R., and Choi, Y. Socialiqa: Commonsense reasoning about social interactions. In Conference on Empirical Methods in Natural Language Processing, 2019.
+Shi, L., Yang, J., Lv, P., Yuan, L., Kou, F., Luo, J., and Xu, M. Self-derived knowledge graph contrastive learning for recommendation. In ACM MM, 2024a.
+Shi, S., Huang, S., Song, M., Li, Z., Zhang, Z., Huang, H., Wei, F., Deng, W., Sun, F., and Zhang, Q. Reslora: Identity residual mapping in low-rank adaption. In Findings of the Association for Computational Linguistics ACL 2024, pp. 8870-8884, 2024b.
+
+Talmor, A., Herzig, J., Lourie, N., and Berant, J. Commonsenseqa: A question answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4149-4158, 2019.
+Tian, C., Shi, Z., Guo, Z., Li, L., and Xu, C.-Z. Hydralora: An asymmetric lora architecture for efficient fine-tuning. Advances in Neural Information Processing Systems, 37: 9565-9584, 2024.
+Tong, T., Ma, C., and Chi, Y. Low-rank matrix recovery with scaled subgradient methods: Fast and robust convergence without the condition number. IEEE Transactions on Signal Processing, 69:2396-2409, 2021.
+Tong, T., Ma, C., Prater-Bennette, A., Tripp, E., and Chi, Y. Scaling and scalability: Provable nonconvex low-rank tensor estimation from incomplete measurements. Journal of Machine Learning Research, 23(163):1-77, 2022.
+Touvron, H., Martin, L., Stone, K., Albert, P., Almahairi, A., Babaei, Y., Bashlykov, N., Batra, S., Bhargava, P., Bhosale, S., et al. Llama 2: Open foundation and finetuned chat models. arXiv e-prints, pp. arXiv-2307, 2023.
+Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., and Bowman, S. R. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019.
+Wang, H., Li, Y., Wang, S., Chen, G., and Chen, Y. Milora: Harnessing minor singular components for parameter-efficient LLM finetuning. In Chiruzzo, L., Ritter, A., and Wang, L. (eds.), Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2025 - Volume 1: Long Papers, Albuquerque, New Mexico, USA, April 29 - May 4, 2025, pp. 4823-4836. Association for Computational Linguistics, 2025a.
+Wang, S., Chen, L., Jiang, J., Xue, B., Kong, L., and Wu, C. Lora meets dropout under a unified framework. In Findings of the Association for Computational Linguistics ACL 2024, pp. 1995-2008, 2024a.
+Wang, S., Yu, L., and Li, J. Lora-ga: Low-rank adaptation with gradient approximation. Advances in Neural Information Processing Systems, 37:54905-54931, 2024b.
+
+Wang, W., Lv, Q., Yu, W., Hong, W., Qi, J., Wang, Y., Ji, J., Yang, Z., Zhao, L., XiXuan, S., et al. Cogvlm: Visual expert for pretrained language models. Advances in Neural Information Processing Systems, 37:121475-121499, 2024c.
+Wang, Y. and Agarwal, S. Adamix: Mixture-of-adaptations for parameter-efficient model tuning. In The 2022 Conference on Empirical Methods in Natural Language Processing, 2022.
+Wang, Z., Liang, J., He, R., Wang, Z., and Tan, T. Lorapro: Are low-rank adapters properly optimized? In The Thirteenth International Conference on Learning Representations, ICLR 2025, Singapore, April 24-28, 2025. OpenReview.net, 2025b.
+Wen, Z., Zhang, J., and Fang, Y. SIBO: A simple booster for parameter-efficient fine-tuning. In Ku, L., Martins, A., and Srikumar, V. (eds.), Findings of the Association for Computational Linguistics, ACL 2024, Bangkok, Thailand and virtual meeting, August 11-16, 2024, pp. 1241-1257. Association for Computational Linguistics, 2024. doi: 10.18653/V1/2024.FINDINGS-ACL.72.
+Wu, T., Wang, J., Zhao, Z., and Wong, N. Mixture-of-subspaces in low-rank adaptation. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pp. 7880-7899, 2024a.
+Wu, X., Huang, S., and Wei, F. Mixture of lora experts. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net, 2024b.
+Yang, S., Ali, M. A., Wang, C.-L., Hu, L., and Wang, D. Moral: Moe augmented lora for llms' lifelong learning. CoRR, 2024.
+Zadouri, T., Üstün, A., Ahmadian, A., Ermis, B., Locatelli, A., and Hooker, S. Pushing mixture of experts to the limit: Extremely parameter efficient moe for instruction tuning. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net, 2024.
+Zeng, A., Xu, B., Wang, B., Zhang, C., Yin, D., Rojas, D., Feng, G., Zhao, H., Lai, H., Yu, H., et al. Chatglm: A family of large language models from glm-130b to glm-4 all tools. CoRR, 2024.
+Zhang, D., Hu, Z., Zhoubian, S., Du, Z., Yang, K., Wang, Z., Yue, Y., Dong, Y., and Tang, J. Sciinstruct: a self-reflective instruction annotated dataset for training scientific language models. In The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2024a.
+
+Zhang, D., Zhoubian, S., Hu, Z., Yue, Y., Dong, Y., and Tang, J. Rest-mcts*: Llm self-training via process reward guided tree search. In The Thirty-eighth Conference on Neural Information Processing Systems, pp. 64735-64772, 2024b.
+Zhang, D., Feng, T., Xue, L., Wang, Y., and Tang, J. Parameter-efficient fine-tuning for foundation models. arXiv e-prints, pp. arXiv-2501, 2025a.
+Zhang, F. and Pilanci, M. Riemannian preconditioned lora for fine-tuning foundation models. In Proceedings of the 41st International Conference on Machine Learning, pp. 59641-59669, 2024.
+Zhang, G., Fattahi, S., and Zhang, R. Y. Preconditioned gradient descent for overparameterized nonconvex burer-monteiro factorization with global optimality certification. Journal of Machine Learning Research, 24(163):1-55, 2023.
+Zhang, J., Zhang, R. Y., and Chiu, H.-M. Fast and accurate estimation of low-rank matrices from noisy measurements via preconditioned non-convex gradient descent. In International Conference on Artificial Intelligence and Statistics, pp. 3772-3780. PMLR, 2024c.
+Zhang, Y., Su, Y., Liu, Y., Wang, X., Burgess, J., Sui, E., Wang, C., Aklilu, J., Lozano, A., Wei, A., et al. Automated generation of challenging multiple-choice questions for vision language model evaluation. arXiv e-prints, pp. arXiv-2501, 2025b.
+Zhu, Y., Groth, O., Bernstein, M., and Fei-Fei, L. Visual7w: Grounded question answering in images. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4995-5004, 2016.
+Zhu, Y., Wichers, N., Lin, C.-C., Wang, X., Chen, T., Shu, L., Lu, H., Liu, C., Luo, L., Chen, J., et al. Sira: Sparse mixture of low rank adaptation. CoRR, 2023.
+
+# A. Various Model Sizes and MoE Architectures
+
+To evaluate our method, we have already conducted experiments on Llama-3.2-3B, GLM-4-9B, and LLaVA-v1.5-7B. We also presented our $n / k / r$ analysis in Table 5 in Section 4.6, which covers seven different candidates tested under SGD and AdamW optimizers with Llama-3.2-3B serving as their foundation model. To make our investigation more thorough, we here add Llama-3.2-1B as a new foundation model and two new experiments under LLaVA-v1.5-7B with two different $n / k / r$ configurations (16/8/4 and 10/5/4).
+
+Specifically, we conduct further experiments on Llama-3.2-1B for four QA benchmarks, each trained for 2,000 steps. To speed up training, we use a relatively small MoE configuration of $10/5/1$. Results are illustrated in Table 6. For the two new $n/k/r$ configurations of LLaVA-v1.5-7B, we conduct experiments on both the Visual7W and VMCBench benchmarks under SGD and AdamW optimizers. As Table 7 illustrates, an overall enhancement from our method can still be witnessed under different configurations (especially for SGD).
+
+Table 6. Question answering evaluations across four QA datasets with Llama-3.2-1B as the foundation model.
+
+| | ScienceQA | CommonsenseQA | OpenBookQA | SIQA | avg. |
+| --- | --- | --- | --- | --- | --- |
+| $RSGD_{10,5,1}$ | 47.71 | 49.47 | 48.80 | 50.41 | 49.10 |
+| $gRSGD_{10,5,1}$ | 49.87 | 59.30 | 54.00 | 57.06 | 55.06 |
+| $RAdamW_{10,5,1}$ | 46.18 | 42.92 | 41.60 | 44.11 | 43.70 |
+| $gRAdamW_{10,5,1}$ | 46.58 | 43.82 | 43.40 | 45.50 | 44.83 |
+
+Table 7. Visual7W and VMCBench Performances of LLaVA-v1.5-7B across various MoE architectures.
+
+| n/k/r | Task | $RSGD$ | $gRSGD$ | $RAdamW$ | $gRAdamW$ |
+| --- | --- | --- | --- | --- | --- |
+| 10/5/4 | Visual7W | 71% | 74% | 76% | 76% |
+| 10/5/4 | VMCBench | 63% | 73% | 76% | 77% |
+| 16/8/4 | Visual7W | 72% | 74% | 76% | 77% |
+| 16/8/4 | VMCBench | 59% | 69% | 71% | 71% |
+
+# B. Convergence Efficiency
+
+In the main body of our paper, we illustrate the convergence speed enhancements of our proposed approach, $gRSGD$, over the conventional $RSGD$ through a series of loss curves. To further provide comprehensive comparisons of convergence efficiency, we report additional results on the GLUE benchmark. In particular, for the experiments conducted under Llama-3.2-3B as well as GLM-4-9B, we record metrics after the initial 100 training steps for each of the GLUE evaluations, as detailed in Table 8 and Table 9, respectively.
+
+Tables 8 and 9 clearly demonstrate the superior convergence speed of our solutions over the conventional Riemannian preconditioned SGD optimizers. At the same time, they show an overall equivalent performance, with only trivial differences, between our gate-based approach and the conventional Riemannian preconditioning method under AdamW optimizers. This indicates that our proposed approach is most valuable for SGD optimization: AdamW optimizers already converge robustly due to their adaptive gradient and learning rate mechanisms, so under AdamW our global-optimum approximation mainly contributes to the final optimality rather than significantly accelerating the initial gradient descent.
+
+Table 8. GLUE Benchmark evaluations after the initial 100 training steps, conducted under Llama-3.2-3B. Our proposed gate-based rescaling method contributes an overall convergence speed enhancement over conventional Riemannian preconditioned SGD optimizers, while for AdamW optimizers it converges at a speed similar to the conventional Riemannian ones.
+
+| | CoLA | SST-2 | MRPC | STS-B | QQP | MNLI | QNLI | RTE | WNLI | avg. |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| $RSGD_{20,10,4}$ | 22.52 | 91.63 | 49.97 | 0.00 | 62.22 | 36.27 | 58.90 | 56.68 | 1.41 | 42.18 |
+| $gRSGD_{20,10,4}$ | 39.30 | 92.66 | 66.55 | 24.49 | 66.17 | 43.40 | 65.33 | 63.18 | 26.76 | 54.20 |
+| $RAdamW_{20,10,4}$ | 52.05 | 92.66 | 68.99 | 54.04 | 72.11 | 60.81 | 84.14 | 53.95 | 32.39 | 63.46 |
+| $gRAdamW_{20,10,4}$ | 50.63 | 93.23 | 67.01 | 54.72 | 71.41 | 61.96 | 83.80 | 55.75 | 40.85 | 64.37 |
+
+Table 9. GLUE Benchmark evaluations after the initial 100 training steps, conducted under GLM-4-9B. Our proposed gate-based rescaling method still contributes an overall convergence speed enhancement over conventional Riemannian preconditioned SGD optimizers, while for AdamW optimizers it still converges at a speed similar to the conventional Riemannian ones. (Note that for CoLA, SST-2, and MRPC we use lower initial learning rates of $3 \times 10^{-6}$ and $1 \times 10^{-6}$, while the other tasks use $3 \times 10^{-5}$ and $1 \times 10^{-5}$; therefore CoLA, SST-2, and MRPC converge much more slowly than the others.)
+
+| | CoLA | SST-2 | MRPC | STS-B | QQP | MNLI | QNLI | RTE | WNLI | avg. |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| $RSGD_{20,10,4}$ | 0.00 | 0.00 | 0.41 | 61.15 | 84.57 | 62.35 | 87.49 | 41.73 | 63.38 | 44.56 |
+| $gRSGD_{20,10,4}$ | 7.22 | 48.97 | 62.26 | 63.06 | 85.95 | 85.66 | 89.28 | 75.18 | 74.65 | 65.80 |
+| $RAdamW_{20,10,4}$ | 17.37 | 0.00 | 0.00 | 63.34 | 86.15 | 0.00 | 89.05 | 89.21 | 80.28 | 47.27 |
+| $gRAdamW_{20,10,4}$ | 16.75 | 0.00 | 0.00 | 63.54 | 86.05 | 0.61 | 89.34 | 91.37 | 78.87 | 47.39 |
+
+# C. AdamW Weight Decay Analysis
+
+AdamW implements a strategy called weight decay, which shrinks the trainable weights after each gradient update via $\theta_t \leftarrow \theta_t - \alpha \lambda \theta_t$. Unlike the original Adam algorithm, AdamW decouples the weight decay from the gradient update, which leads to better performance in some cases. To comprehensively demonstrate the effectiveness of our gate-based rescaling method over Riemannian preconditioned AdamW, we evaluate our boosts across various weight decay factors $\lambda$. Results are exhibited in Table 10.
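As a hedged illustration (not the training code used in this paper), the decoupled update can be sketched for a single scalar parameter; all hyperparameter values below are placeholders:

```python
# Minimal single-parameter AdamW-style step: the adaptive gradient step and
# the weight decay term are applied separately (decoupled), rather than
# folding the decay into the gradient as plain Adam with L2 regularization does.
import math

def adamw_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
               eps=1e-6, weight_decay=1e-4):
    """One AdamW update for a scalar parameter; returns (theta, m, v)."""
    m = beta1 * m + (1 - beta1) * grad           # first moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2      # second moment estimate
    m_hat = m / (1 - beta1 ** t)                 # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)  # gradient step
    theta = theta - lr * weight_decay * theta    # decoupled decay: theta -= a*l*theta
    return theta, m, v
```

With `weight_decay=0` this reduces to a plain bias-corrected Adam step, which makes the effect of the extra decay term easy to isolate.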
+
+Table 10. ScienceQA boosting performances under Llama-3.2-3B, across different AdamW weight decay factors.
+
+| Weight decay $\lambda$ | 0 | 1e-5 | 1e-4 | 1e-3 |
+| --- | --- | --- | --- | --- |
+| $RAdamW_{20,10,4}$ | 82.60 | 83.50 | 83.23 | 83.99 |
+| $gRAdamW_{20,10,4}$ | 83.80 | 84.58 | 84.98 | 84.67 |
+| Boost | 1.45%↑ | 1.30%↑ | 2.10%↑ | 0.81%↑ |
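For reference, the "Boost" row of Table 10 follows the usual relative-improvement convention, which can be checked in one line (the function name below is ours, chosen for illustration):

```python
# Relative improvement of the gate-rescaled optimizer over the
# Riemannian-preconditioned baseline, in percent.
def relative_boost(baseline, rescaled):
    return (rescaled - baseline) / baseline * 100.0

# First column of Table 10: 82.60 -> 83.80 corresponds to a 1.45% boost.
boost = relative_boost(82.60, 83.80)
```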
+
+# D. Multi-Task Performance
+
+One of the most valuable features of MoE architectures is their capability of modeling multiple tasks. Through its gating mechanism, the MoE system delegates specific tasks to individual experts, thereby facilitating a more focused and efficient learning process within each expert module. A natural question therefore arises for our proposed gate-based rescaling approach: can it still effectively augment the performance of MoE architectures in multi-task scenarios?
+
+To investigate this, we manually construct two mixed datasets, each consisting of three unrelated natural language tasks from the GLUE Benchmark. The first mixture consists of the CoLA, SST-2, and MRPC tasks, serving as a multi-task scenario that involves grammar checking, sentiment classification, and sentence-equivalence judgment; the second mixture consists of the STS-B, QQP, and QNLI tasks, serving as another multi-task scenario that involves sentence-similarity scoring, question-equivalence judgment, and question-answering NLI. For evaluation, we test candidates on each task individually and then average the per-task performances within a mixture as its overall score.
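The evaluation protocol above can be sketched as follows; the task names are real GLUE tasks, but the scores are illustrative placeholders, not the paper's numbers:

```python
# Hypothetical sketch: evaluate each task in a mixture separately,
# then report the unweighted mean of the per-task metrics as the
# mixture's overall score.
def mixture_score(per_task_scores):
    """Average per-task metrics within one mixture."""
    return sum(per_task_scores.values()) / len(per_task_scores)

mixture_1 = {"CoLA": 50.6, "SST-2": 93.2, "MRPC": 67.0}  # placeholder values
overall = mixture_score(mixture_1)
```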
+
+To sufficiently assess the multi-task performance of our proposed gate-based rescaling method, we conduct experiments under two different MoE configurations, i.e., $20 / 10 / 4$ and $10 / 5 / 4$. We train each candidate for 2,000 steps under the $RAdamW$ and $gRAdamW$ ($RAdamW$ with our proposed gate-based rescaling method) optimizers. Results are exhibited in Table 11: our proposed method still effectively boosts feature learning in multi-task scenarios.
+
+Table 11. Conventional and gate-rescaled optimizers performed on two mixed datasets consisting of tasks from the GLUE Benchmark. All candidates are trained for 2,000 steps. Our gate-based rescaling method still contributes enhancements.
+
+| Mixture | n/k/r | $RAdamW$ | $gRAdamW$ |
+| --- | --- | --- | --- |
+| CoLA + SST-2 + MRPC | 20/10/4 | 70.15 | 71.39 |
+| CoLA + SST-2 + MRPC | 10/5/4 | 71.64 | 72.13 |
+| STS-B + QQP + QNLI | 20/10/4 | 74.61 | 75.74 |
+| STS-B + QQP + QNLI | 10/5/4 | 74.81 | 75.36 |
+
+# E. Backward Derivation of the Engineering Strategy
+
+We further elaborate why we implement (13) as our engineering approximation for achieving (11) and (12). By forwarding as (13), the gradient update of $X$ can be derived as follows (as before, we treat the gate values $g_i$ as constants when focusing on the gradients of the $A_i$ and $B_i$):
+
+$$
+\begin{aligned}
+X_{new} &= \hat{W} + \sum_{i=1}^{N_{Expert}} \left[ \sqrt{\hat{g}_i} \, (B_i - \eta \nabla_{B_i} \mathcal{L})(A_i - \eta \nabla_{A_i} \mathcal{L}) + (g_i - \sqrt{\hat{g}_i}) \, \hat{B}_i \hat{A}_i \right] \\
+&= \Big( \hat{W} + \sum_{i=1}^{N_{Expert}} \sqrt{g_i} B_i A_i + (g_i - \sqrt{g_i}) \hat{B}_i \hat{A}_i \Big) - \eta \sum_{i=1}^{N_{Expert}} \sqrt{g_i} \left[ B_i (\nabla_{A_i} \mathcal{L}) + (\nabla_{B_i} \mathcal{L}) A_i \right] \\
+&= X - \eta \sum_{i=1}^{N_{Expert}} \sqrt{g_i} \, (B_i \nabla_{A_i} \mathcal{L} + \nabla_{B_i} \mathcal{L} \, A_i) \\
+&= X - \eta \sum_{i=1}^{N_{Expert}} \left( \sqrt{g_i} \right)^2 \mathrm{Proj}_{\mathrm{col}(B_i)} (\nabla_X \mathcal{L})^T - \eta \sum_{i=1}^{N_{Expert}} \left( \sqrt{g_i} \right)^2 \mathrm{Proj}_{\mathrm{row}(A_i)} (\nabla_X \mathcal{L}) \quad \text{(same as the derivation of (10))} \\
+&= X - \eta \sum_{i=1}^{N_{Expert}} g_i \, \mathrm{Proj}_{\mathrm{col}(B_i)} (\nabla_X \mathcal{L})^T - \eta \sum_{i=1}^{N_{Expert}} g_i \, \mathrm{Proj}_{\mathrm{row}(A_i)} (\nabla_X \mathcal{L}),
+\end{aligned}
+$$
+
+so that (12) can be achieved.
+
+The advantages of implementing (13) are twofold. Firstly, it achieves (12) when conducting the gradient update of $X$ while still keeping the original behavior of training the gates, because it yields the same gate gradient, $\nabla_{g_i}X = A_i^T B_i^T$, as normal forwarding. Secondly, it produces exactly the same forward result as the normal module forwarding $X = \hat{W} +\sum_{i = 1}^{N_{Expert}}g_iB_iA_i$, while only requiring a relatively low overhead.
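Both claims can be checked on a toy scalar example. The sketch below is ours (not the paper's code): $B_iA_i$ is reduced to a product of two scalars, and "detached" quantities are frozen by hand. The forward value equals the normal $g_i B_i A_i$, while the gradient reaching the trainable path is scaled by $\sqrt{g_i}$ rather than $g_i$:

```python
import math

def rescaled_forward(g, b, a):
    """Scalar analogue of (13): sqrt(g) is detached, and the second term
    uses a detached (constant) copy of the expert output b*a."""
    sqrt_g = math.sqrt(g)        # detached: treated as a constant
    out = b * a                  # trainable expert output
    out_detached = b * a         # detached copy: no gradient flows through it
    return sqrt_g * out + (g - sqrt_g) * out_detached

def grad_wrt_a(g, b, a, eps=1e-6):
    """Finite-difference gradient w.r.t. a, holding detached parts fixed."""
    sqrt_g = math.sqrt(g)
    const = (g - sqrt_g) * b * a            # frozen detached branch
    f = lambda x: sqrt_g * b * x + const    # only the trainable path varies
    return (f(a + eps) - f(a - eps)) / (2 * eps)
```

For example, with $g=0.25$, $b=2$, $a=3$, the forward value is $g\,b\,a = 1.5$, while the gradient with respect to $a$ is $\sqrt{g}\,b = 1.0$ instead of the plain $g\,b = 0.5$.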
+
+# F. Method Implementation
+
+The engineering alternative solution of the gate-based rescaling approach is to manually separate the forward pass into optimizable and unoptimizable components. Here we provide our implementation in Python-like pseudocode; we only update two lines of the original MoE-LoRA code.
+
+Algorithm 1 Engineering Alternative Solution of Gate-based Rescaling Method
+```python
+def forward(self, x, ...):
+    ...
+    # compute gate values
+    gvs = ...
+    ...
+    # execute each activated expert
+    for exp_id in activated_experts:
+        A = self.As[exp_id]
+        B = self.Bs[exp_id]
+        gv = gvs[:, :, exp_id]
+        exp_out = B(A(x))
+        sqrt_gv = (gv**0.5).detach()  # update 1
+        w_exp_out = sqrt_gv*exp_out + (gv-sqrt_gv)*exp_out.detach()  # update 2
+        result = result + w_exp_out
+...
+```
+
+# G. Experimental Details
+
+We present our experimental details in Table 12. All experiments in this paper follow this configuration unless they specify particular settings. Some experiments converge earlier than others, and for those we perform early stopping. We cap the number of training steps at 2,000, which we consider a relatively fair setup across downstream tasks, especially those with training corpora of different scales but a similar level of complexity.
+
+Table 12. Default experimental details implemented throughout this paper. All experiments follow this configuration unless they specify particular settings, such as the MoE structure experiments, the baseline comparison experiments, and the AdamW weight decay experiments.
+
+| Hyperparameter | SGD | AdamW |
+| --- | --- | --- |
+| Train batch size (logical) | 80 for textual tasks, 40 for vision-language tasks | 80 for textual tasks, 40 for vision-language tasks |
+| Max training steps | $\leq$ 2,000 | $\leq$ 2,000 |
+| Initial lr for 3B (expert) | QA, GLUE: 3e-5; WNLI: 3e-6; Multi-Task: 3e-4 | QA: 1e-5; MRPC, CoLA, QNLI, STS-B: 3e-5; SST-2, QQP, MNLI, WNLI: 1e-5; RTE: 1e-6; Multi-Task: 1e-4 |
+| Initial lr for 9B (expert) | CoLA, SST-2, MRPC: 3e-6; STS-B, QQP, MNLI, QNLI, RTE, WNLI: 3e-5 | CoLA, SST-2, MRPC: 1e-6; STS-B, QQP, QNLI, RTE, WNLI: 1e-5; MNLI: 3e-6 |
+| Initial lr for 7B (expert) | 3e-5 | 1e-5 |
+| Initial lr (gate) | 3e-8 | 3e-8 |
+| Lr scheduler (expert) | Linear | Linear |
+| Warmup steps | 0 | 0 |
+| Max gradient norm (gate) | 1.0 | 1.0 |
+| Default LoRA expert rank | 4 | 4 |
+| Default number of experts | 20 | 20 |
+| Default activated Top-K | 10 | 10 |
+| LoRA $\alpha$ | 16 | 16 |
+| LoRA dropout | 0.05 | 0.05 |
+| Default weight decay | / | 0 |
+| $\beta_1, \beta_2, \epsilon$ | / | 0.9, 0.999, 1e-6 |
\ No newline at end of file
diff --git a/astrongermixtureoflowrankexpertsforfinetuningfoundationmodels/images.zip b/astrongermixtureoflowrankexpertsforfinetuningfoundationmodels/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..0339a1d2bf592f70b3c44e5af9490437b86a6d85
--- /dev/null
+++ b/astrongermixtureoflowrankexpertsforfinetuningfoundationmodels/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5125e16c9675349e9e7a8b4e3027760a8643a0d6c86189f7682374618e3a69b5
+size 820512
diff --git a/astrongermixtureoflowrankexpertsforfinetuningfoundationmodels/layout.json b/astrongermixtureoflowrankexpertsforfinetuningfoundationmodels/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..43cf050ff53ae1c954c260c63354f3aecbc8c05c
--- /dev/null
+++ b/astrongermixtureoflowrankexpertsforfinetuningfoundationmodels/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9d1be58a9c4c598cc1bd1148a95e285234063e2d4836674bb346e21b1960d5e6
+size 547815
diff --git a/asubproblemquantumalternatingoperatoransatzforcorrelationclustering/243c7412-a5e0-4497-b4b9-b5023a16d66d_content_list.json b/asubproblemquantumalternatingoperatoransatzforcorrelationclustering/243c7412-a5e0-4497-b4b9-b5023a16d66d_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..0077c87afdf1d3448cbebc293aa9f344440d535e
--- /dev/null
+++ b/asubproblemquantumalternatingoperatoransatzforcorrelationclustering/243c7412-a5e0-4497-b4b9-b5023a16d66d_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:af448cd1b4bbb24598c2694c257aff1e65f3bb021b3ae2c190dcaf73d8427a04
+size 107736
diff --git a/asubproblemquantumalternatingoperatoransatzforcorrelationclustering/243c7412-a5e0-4497-b4b9-b5023a16d66d_model.json b/asubproblemquantumalternatingoperatoransatzforcorrelationclustering/243c7412-a5e0-4497-b4b9-b5023a16d66d_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..f0796f2fe3a8f8af2cccae56ef4058b6006bbcdc
--- /dev/null
+++ b/asubproblemquantumalternatingoperatoransatzforcorrelationclustering/243c7412-a5e0-4497-b4b9-b5023a16d66d_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6cee5a05945cfb89c9c0064f21f79efb210d30cd03ef58d12387206bd6cd8bfe
+size 128154
diff --git a/asubproblemquantumalternatingoperatoransatzforcorrelationclustering/243c7412-a5e0-4497-b4b9-b5023a16d66d_origin.pdf b/asubproblemquantumalternatingoperatoransatzforcorrelationclustering/243c7412-a5e0-4497-b4b9-b5023a16d66d_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..22885209a917f95adac6a13e916e012826d46d21
--- /dev/null
+++ b/asubproblemquantumalternatingoperatoransatzforcorrelationclustering/243c7412-a5e0-4497-b4b9-b5023a16d66d_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3999b68b308410470aadc6ed73767e02885e7077e3bc01851f6c6543afa7df07
+size 479887
diff --git a/asubproblemquantumalternatingoperatoransatzforcorrelationclustering/full.md b/asubproblemquantumalternatingoperatoransatzforcorrelationclustering/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..126680630ba3ce664722cda756283b4180e1470b
--- /dev/null
+++ b/asubproblemquantumalternatingoperatoransatzforcorrelationclustering/full.md
@@ -0,0 +1,506 @@
+# A Sub-Problem Quantum Alternating Operator Ansatz for Correlation Clustering
+
+Lucas Fabian Naumann1 Jannik Irmai1 Bjoern Andres1,2
+
+# Abstract
+
+The Quantum Alternating Operator Ansatz (QAOA) is a hybrid quantum-classical variational algorithm for approximately solving combinatorial optimization problems on Noisy Intermediate-Scale Quantum (NISQ) devices. Although it has been successfully applied to a variety of problems, there is only limited work on correlation clustering due to the difficulty of modelling the problem constraints with the ansatz. Motivated by this, we present a generalization of QAOA that is more suitable for this problem. In particular, we modify QAOA in two ways: Firstly, we use nucleus sampling for the computation of the expected cost. Secondly, we split the problem into sub-problems, solving each individually with QAOA. We call this generalization the Sub-Problem Quantum Alternating Operator Ansatz (SQAOA) and show theoretically that optimal solutions to correlation clustering instances can be obtained with certainty when the depth of the ansatz tends to infinity. Further, we show experimentally that SQAOA achieves better approximation ratios than QAOA for correlation clustering, while using only one qubit per node of the respective problem instance and reducing the runtime (of simulations).
+
+# 1. Introduction
+
+The term "quantum supremacy" (Preskill, 2012) refers to the ability of quantum computers to perform tasks efficiently that classical computers cannot. From a theoretical point of view, algorithms achieving this supremacy for problems of practical interest have long been established. However, applying these algorithms to problem instances of classically intractable sizes is not possible on current quantum
+
+$^{1}$ Faculty of Computer Science, TU Dresden $^{2}$ Center for Scalable Data Analytics and AI, Dresden/Leipzig. Correspondence to: Bjoern Andres .
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+computers, as these are limited both by their number of qubits and the number of operations that can be performed on a qubit before its state is too corrupted by noise (circuit depth). These current quantum computers are also referred to as Noisy Intermediate-Scale Quantum (NISQ) devices (Preskill, 2018).
+
+Variational quantum algorithms (Cerezo et al., 2021) have emerged as a promising paradigm to achieve quantum supremacy for practical problems on NISQ devices. These algorithms combine quantum and classical computing by using parameterized quantum circuits with few qubits and low depth, whose parameters are learned in a classical optimization loop. The Quantum Alternating Operator Ansatz (QAOA) (Hadfield et al., 2019) is such a variational algorithm designed for approximately solving combinatorial optimization problems. In particular, it alternately applies a parameterized phase-separation operator, which changes the phase of states depending on their cost, and a mixing operator, which enables transitions between states and thus constructive or destructive interference based on their phase difference. The number of times $p$ these operators are applied alternately is called the ansatz depth. For $p \to \infty$ , and under the conditions given by Binkowski et al. (2024), there exist parameters for each problem instance such that QAOA returns an optimal solution with certainty. QAOA has been applied to a variety of problems (Cook et al., 2020; Saleem, 2020; Tabi et al., 2020; Fuchs et al., 2021), including correlation clustering (Weggemans, 2020; Weggemans et al., 2022).
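The alternating structure described above can be made concrete with a toy statevector simulation (our illustration, not the authors' implementation). The NumPy sketch below simulates a depth-$p$ QAOA circuit for a single-edge MaxCut instance: the phase separator multiplies each computational-basis amplitude by $e^{-i\gamma \cdot \mathrm{cost}}$, and the mixer applies $e^{-i\beta X}$ to every qubit:

```python
import numpy as np

def apply_1q(state, gate, q, n):
    """Apply a single-qubit gate to qubit q of an n-qubit statevector."""
    psi = state.reshape((2,) * n)
    psi = np.tensordot(gate, psi, axes=([1], [q]))  # contract gate column index
    psi = np.moveaxis(psi, 0, q)                    # restore axis order
    return psi.reshape(-1)

def qaoa_expectation(costs, gammas, betas, n):
    """Expected measured cost of a depth-p QAOA state (p = len(gammas))."""
    state = np.full(2 ** n, 1 / np.sqrt(2 ** n), dtype=complex)  # |+...+>
    for gamma, beta in zip(gammas, betas):
        state = np.exp(-1j * gamma * costs) * state  # phase separator e^{-i gamma C}
        mixer = np.array([[np.cos(beta), -1j * np.sin(beta)],
                          [-1j * np.sin(beta), np.cos(beta)]])  # e^{-i beta X}
        for q in range(n):
            state = apply_1q(state, mixer, q, n)
    return float(np.sum(costs * np.abs(state) ** 2))

# Toy instance: MaxCut on a single edge; basis order 00, 01, 10, 11.
# The cut value is 1 exactly when the two bits differ.
edge_costs = np.array([0.0, 1.0, 1.0, 0.0])
```

For this single-edge toy instance, $p = 0$ (the uniform superposition) gives expected cost 0.5, while depth $p = 1$ with $(\gamma, \beta) = (\pi/2, \pi/8)$ already concentrates all probability on the two optimal states.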
+
+Correlation clustering (Bansal et al., 2004) is a special clustering formulation in which objects are represented by the nodes of a graph, (dis-)similarities between them by edges with corresponding costs, and the goal is to cluster the nodes of the graph such that a cost function is optimized. In prominent difference to other formulations, the number of clusters is not fixed in advance, but learned from the data. This unsupervised clustering of objects based solely on pairwise (dis-)similarities finds application in various domains, such as computational biology (D'haeseleer, 2005; Erola et al., 2020), data analysis (Benjelloun et al., 2009; Abbas & Swoboda, 2023) and image segmentation (Yarkony et al., 2012; Beier et al., 2015; Keuper et al., 2015).
+
+There are different variants of correlation clustering with respect to the associated costs and the cost function. In this article, we consider unweighted ( $\{+1, -1\}$ costs) maximum agreement correlation clustering and will assume that, unless otherwise specified, the term "correlation clustering" refers to this variant. However, our approach can easily be adapted to weighted correlation clustering and other cost functions.
+
+We introduce the Sub-Problem Quantum Alternating Operator Ansatz (SQAOA), a generalization of QAOA motivated by the application to correlation clustering and based on the idea of nucleus sampling (Holtzman et al., 2020) and the concept of splitting a problem into several dependent subproblems. For a specific SQAOA formulation of correlation clustering we show that:
+
+- For each instance, there exist parameters such that an optimal solution is obtained with certainty for $p \to \infty$ .
+- Only as many qubits are required to solve an instance as there are elements to cluster.
+- It experimentally outperforms existing approaches in terms of approximation ratios and runtimes on complete and Erdős-Rényi graphs with up to 10 nodes.
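Nucleus sampling (Holtzman et al., 2020), which SQAOA borrows for computing the expected cost, can be sketched generically as follows. This toy NumPy version merely truncates a distribution to its highest-probability "nucleus" and renormalizes; it is an illustration of the idea, not the exact procedure used within SQAOA:

```python
import numpy as np

def nucleus(probs, p=0.9):
    """Keep the smallest set of highest-probability outcomes whose total
    mass reaches p, and renormalize. Returns (indices, renormalized probs)."""
    order = np.argsort(probs)[::-1]            # outcomes, most probable first
    csum = np.cumsum(probs[order])
    k = int(np.searchsorted(csum, p)) + 1      # smallest prefix with mass >= p
    idx = order[:k]
    kept = probs[idx]
    return idx, kept / kept.sum()
```

In a QAOA-style setting, `probs` would be the measurement probabilities of the basis states, so the expectation is taken only over the most probable solutions instead of the full, exponentially large distribution.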
+
+# 2. Related Work
+
+Unweighted maximum agreement correlation clustering on general graphs is known to be APX-hard. In particular, it is NP-hard for every $\epsilon > 0$ to approximate the problem within a factor of $80/79 - \epsilon$ (Tan, 2008). The best known classical algorithm for approximating unweighted maximum agreement correlation clustering is given by Swamy (2004) and achieves an approximation ratio of 0.7666 (Swamy bound). However, there exists a polynomial time approximation scheme when restricting the considered graphs to be complete (Bansal et al., 2004).
+
+The only other work applying QAOA to a correlation clustering variant is by Weggemans (2020) and Weggemans et al. (2022). Weggemans (2020) reviews different QAOA formulations for correlation clustering with respect to the number of used qubits, circuit complexities and approximation ratios obtained by simulations. Furthermore, improvement strategies for the standard QAOA algorithm are evaluated, like the choice of the classical optimizer, the choice of initial parameters and the number of restarts. Most importantly, it is found that the achieved approximation ratios can be significantly increased by looping over the cluster number, i.e., by applying QAOA repeatedly, varying the number of allowed clusters from 1 to the number of nodes in the graph and returning only the best result.
+
+From these studies, a "multi-level" formulation emerges as the best approach, in which each element to be clustered is associated with a qudit and the cluster of that element is given by the state of the qudit. We will use this formulation as a reference to benchmark SQAOA against. Using techniques from Farhi et al. (2014) and Wurtz & Love (2021), it is further shown for this multi-level QAOA formulation (including looping over the cluster number) that, for $p = 1$, there exist parameters achieving an approximation ratio of at least 0.6367 on all 3-regular graphs. Weggemans et al. (2022) build on this work by extending the evaluation of the multi-level formulation and describing how to realize it on a Rydberg system. However, the described implementation is restricted to 4-level qudits, i.e., qudits with four states, such that only solutions involving at most 4 clusters can be considered.
+
+There is a variety of generalizations and variations of QAOA; a recent survey is by Blekos et al. (2024). However, to our knowledge, there exists no work on using different sampling strategies to compute the expected cost. And although there are approaches that apply QAOA to subproblems (Tomesh et al., 2022; Esposito & Danzig, 2024), these split a problem instance into smaller instances of the same problem that are solved independently. In contrast, we solve instances of dependent sub-problems that are different from the original problem.
+
+# 3. Preliminaries
+
+We begin this section with a brief review of the notation and the fundamentals of quantum computing to the extent necessary for understanding the article. A thorough introduction can be found, e.g., in Nielsen & Chuang (2010). We then formally describe the Quantum Alternating Operator Ansatz and the correlation clustering problem before introducing our SQAOA formulation for correlation clustering in the next section.
+
+# Notation and Fundamentals of Quantum Computing
+
+We use the Dirac notation, i.e., we denote elements of $\mathbb{C}^n$ by $|\cdot \rangle$ , their conjugate transpose by $\langle \cdot |$ and write $\langle x|y\rangle \coloneqq \langle x| |y\rangle = |x\rangle^\dagger |y\rangle$ for the inner product of $|x\rangle ,|y\rangle \in \mathbb{C}^n$ .
+
+We denote the standard unit vectors of $\mathbb{C}^2$ by
+
+$$
+| 0 \rangle = \left[ \begin{array}{c} 1 \\ 0 \end{array} \right] \quad \text{and} \quad | 1 \rangle = \left[ \begin{array}{c} 0 \\ 1 \end{array} \right].
+$$
+
+Consequently, the standard unit vectors of $\mathbb{C}^{2^n} = \bigotimes_{j=1}^n \mathbb{C}^2$ are given by
+
+$$
+\bigotimes_{j = 1}^{n} | x_{j} \rangle \quad \text{where} \quad | x_{j} \rangle \in \{ | 0 \rangle, | 1 \rangle \},
+$$
+
+for which we introduce the abbreviation
+
+$$
+| x \rangle \quad \text{where} \quad x \in \{ 0, 1 \}^{n}.
+$$
+
+The state of an $n$ -qubit system is given by a normalized vector (statevector) in the Hilbert space $\mathbb{C}^{2^n} = \bigotimes_{j=1}^n \mathbb{C}^2$ , which can be written as
+
+$$
+| \psi \rangle = \sum_{x \in \{0, 1\}^{n}} a_{x} | x \rangle \quad \text{with} \quad \left\| | \psi \rangle \right\|^{2} = \sum_{x \in \{0, 1\}^{n}} | a_{x} |^{2} = 1.
+$$
+
+When measuring such a system (with respect to the computational basis), it collapses to one of the computational basis states $|x\rangle$ . The coefficients $a_{x}$ are called probability amplitudes, and their squared absolute values $|a_x|^2$ give the probability of collapsing into state $|x\rangle$ .
+
+In gate-based quantum computing, algorithms are realized by manipulating qubits with quantum gates. Quantum gates acting on $n$ qubits can be represented by unitary matrices of size $2^n \times 2^n$ . Another way of characterizing quantum gates is by Hermitian matrices: for any unitary matrix $U$ , there exists a Hermitian matrix $H$ such that $U = e^{-iH}$ , and vice versa. In the remainder of this article, we will assume that matrices denoted by $U$ are unitary and that matrices denoted by $H$ are Hermitian. An overview of the gates used in this article is given in Appendix A. We use the following notation for applying a unitary $U$ operating on a single qubit to the $j$ -th qubit of an $n$ -qubit system:
+
+$$
+U _ {j} := \left(\bigotimes_ {k = 1} ^ {j - 1} I\right) \otimes U \otimes \left(\bigotimes_ {k = j + 1} ^ {n} I\right).
+$$
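+As a concrete illustration of this embedding (a sketch of our own, not from the article; `kron` and `embed` are hypothetical helper names), the following Python snippet builds $U_j$ via Kronecker products and checks that $X_2$ flips the second qubit of a three-qubit basis state:
+
+```python
+# Sketch of the U_j embedding: build I (x) ... (x) U (x) ... (x) I with
+# plain Python lists (helper names are illustrative, not from the paper).
+def kron(A, B):
+    """Kronecker product of two matrices given as nested lists."""
+    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(len(B[0]))]
+            for i in range(len(A)) for k in range(len(B))]
+
+def embed(U, j, n):
+    """Apply the single-qubit gate U to qubit j (1-indexed) of an n-qubit system."""
+    I = [[1, 0], [0, 1]]
+    M = [[1]]
+    for k in range(1, n + 1):
+        M = kron(M, U if k == j else I)
+    return M
+
+X = [[0, 1], [1, 0]]
+X2 = embed(X, 2, 3)              # X acting on the second of three qubits
+# Column 0 of X2 is X2 |000>; it should be the basis state |010> (index 2).
+assert [row[0] for row in X2].index(1) == 2
+```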
+
+Quantum states differ from probability distributions as their probability amplitudes can take complex values. This means that, in addition to their absolute value, quantum states have a phase. This fact enables constructive and destructive interference between them, i.e., applying an operation to a quantum state is different from just applying it to all of its basis states separately.
+
+**Quantum Alternating Operator Ansatz** The Quantum Alternating Operator Ansatz (QAOA) (Hadfield et al., 2019) is a variational algorithm for approximately solving combinatorial optimization problems. It generalizes the Quantum Approximate Optimization Algorithm (Farhi et al., 2014), which is, in turn, a translation of the Quantum Adiabatic Algorithm (Farhi et al., 2001) from adiabatic quantum computing to gate-based quantum computing.
+
+A generic combinatorial optimization problem of size $n$ with feasible space $S \subseteq \{0,1\}^n$ and cost function $C: S \to \mathbb{R}$ can be written as
+
+$$
+\min _ {x \in S} C (x) .
+$$
+
+Applying QAOA of depth $p$ for solving this problem, we first compute
+
+$$
+\left| \boldsymbol {\beta}, \boldsymbol {\gamma} \right\rangle^ {\mathrm {Q A O A}} := \left(\prod_ {j = 1} ^ {p} U _ {M} \left(\beta_ {j}\right) U _ {C} \left(\gamma_ {j}\right)\right) | s \rangle , \tag {1}
+$$
+
+where
+
+- $|s\rangle$ is an initial state in the feasible subspace $\mathcal{S}$ , which is given by the set of all superpositions of classically feasible states, i.e., by
+
+$$
+\mathcal{S} := \left\{ \sum_{i} \lambda_{i} | x_{i} \rangle \,\middle|\, \sum_{i} | \lambda_{i} |^{2} = 1, \lambda_{i} \in \mathbb{C}, x_{i} \in S \right\};
+$$
+
+- $U_{C}(\gamma) = e^{-i\gamma H_{C}}$ is a phase-separation operator such that
+
+$$
+H _ {C} | x \rangle = C (x) | x \rangle , \tag {2}
+$$
+
+and $H_{C}$ is called cost Hamiltonian;
+
+- $U_{M}(\beta) = e^{-i\beta H_{M}}$ is a mixing operator that preserves feasible states
+
+$$
+\forall | \psi \rangle \in \mathcal {S} \forall \beta \in \mathbb {R}: U _ {M} (\beta) | \psi \rangle \in \mathcal {S}, \tag {3}
+$$
+
+and allows for full mixing of solutions
+
+$$
+\forall x, y \in S \ \exists \beta \in \mathbb{R} \ \exists r \in \mathbb{N}: \quad \left| \langle y | U_{M}^{r}(\beta) | x \rangle \right| > 0. \tag{4}
+$$
+
+The parameters $\boldsymbol{\beta}, \boldsymbol{\gamma}$ are learned in a classical optimization loop such that the expectation value of the cost function is minimized:
+
+$$
+\langle \boldsymbol {\beta}, \boldsymbol {\gamma} | ^ {\mathrm {Q A O A}} H _ {C} | \boldsymbol {\beta}, \boldsymbol {\gamma} \rangle^ {\mathrm {Q A O A}}. \tag {5}
+$$
+
+The intuition behind this ansatz is that the phase-separation operator modifies the phase of basis states (which correspond to feasible solutions) depending on their cost, while the mixing operator realizes transitions between feasible states, resulting in constructive and destructive interference. Since the parameters of the operators are optimized with respect to the expectation value of the cost function, states with low cost are amplified by constructive interference, while states with high cost are erased by destructive interference.
+
+In order to apply QAOA to a specific problem, the operators and the initial state need to be defined and implemented. The main challenge lies in constructing the initial state and the mixing operator. Conversely, the phase-separation operator is easy to construct. If the problem is formulated as an integer program with binary variables, it is sufficient to choose the cost Hamiltonian $H_{C}$ such that variables $x_{i}$ in the cost function $C(x)$ are replaced by the term $\frac{(1 - Z_i)}{2}$ . This is due to the fact that if $x_{i} = 0$ , then $\frac{(1 - Z_i)}{2} |x_i\rangle = 0|x_i\rangle$ , and if $x_{i} = 1$ , then $\frac{(1 - Z_i)}{2} |x_i\rangle = 1|x_i\rangle$ . Thus, (2) is fulfilled for the Hamiltonian constructed in this way. Implementing the corresponding unitary operator only requires the application of RZ gates to individual qubits.
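+As a sanity check of this substitution (a minimal sketch of our own, with a made-up cost function), one can build the diagonal of $H_C$ in Python and verify condition (2) on every basis state:
+
+```python
+# Sketch: substitute each binary variable x_i by the diagonal operator
+# (1 - Z_i)/2 and check H_C |x> = C(x) |x> on all basis states.
+# The cost function below is hypothetical, chosen only for illustration.
+from itertools import product
+
+n = 3
+states = list(product((0, 1), repeat=n))
+
+def var(i):
+    """Diagonal of (1 - Z_i)/2: since Z_i has entry 1 - 2*x_i at |x>,
+    the substituted operator has entry x_i, as described in Section 3."""
+    return [x[i] for x in states]
+
+# C(x) = x_0 + 2 x_0 x_1 - x_2, written as a sum of entrywise products
+h_c = [var(0)[k] + 2 * var(0)[k] * var(1)[k] - var(2)[k] for k in range(2 ** n)]
+
+for k, x in enumerate(states):
+    assert h_c[k] == x[0] + 2 * x[0] * x[1] - x[2]   # eigenvalue equation (2)
+```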
+
+QAOA is considered a promising variational quantum algorithm for the following reasons:
+
+- For $p \to \infty$ and under the conditions given in (Binkowski et al., 2024), there exist parameters for each problem instance such that an optimal solution is obtained with certainty.
+- Under reasonable complexity theoretic conjectures, it is not possible to efficiently sample from the generated distributions classically, even for $p = 1$ (Farhi & Harrow, 2019).
+- The parameters are concentrated for different instances of the same problem (Brandao et al., 2018; Akshay et al., 2021), allowing us to learn them for one instance and reuse them, or use them as an initial point for others.
+
+The approximation ratios are expected to increase with larger ansatz depth $p$ and, with optimal parameters, are guaranteed not to decrease. However, the depth is limited for two reasons. Firstly, the number of applied operators increases, which raises the computational resources required for simulation on classical computers and the amount of noise introduced when executing on quantum computers. Secondly, the number of learnable parameters increases with $p$ , and gradients cannot be easily computed on quantum computers.
+
+**Correlation Clustering** Let $G = (V, E)$ be an undirected graph, let $n = |V|$ be the number of nodes in $G$ and let $c \in \{+1, -1\}^E$ be costs associated with the edges of the graph. The problem of unweighted maximum agreement correlation clustering consists in finding a clustering (or partition) of the node set $V$ such that the number of pairs of nodes connected by edges with cost $+1$ that are in the same cluster, and the number of pairs of nodes connected by edges with cost $-1$ that are in different clusters, is maximized. Figure 1 shows an example of a problem instance.
+
+We can formulate unweighted maximum agreement correlation clustering as an integer quadratic program in which binary variables $x \in \{0,1\}^{n \times n}$ indicate if a node $v$ is assigned to cluster $i$ , $x_{v,i} = 1$ , or not, $x_{v,i} = 0$ :
+
+$$
+\max_{x} \sum_{uv \in E: c_{uv} = 1} \sum_{i \in K} x_{u,i} x_{v,i} + \sum_{uv \in E: c_{uv} = -1} \sum_{i, j \in K: i \neq j} x_{u,i} x_{v,j} \tag{6}
+$$
+
+subject to $\sum_{i\in K}x_{u,i} = 1$ for all $u\in V$ ,
+
+where $K = \{1,\dots ,n\}$ and we use $uv, vu$ to denote an edge $\{u,v\} \in E$ .
+
+
+Figure 1. Depicted above is an example of an instance of the unweighted maximum agreement correlation clustering problem, along with a corresponding optimal solution. The problem instance is given by the graph and the costs written along its edges. The clustering indicated by the coloring of the nodes has 5 agreements and is optimal.
+
+In this formulation, the variable assignment $x_{u,1} = 1$ and $x_{v,2} = 1$ for nodes $u$ and $v$ indicates that node $u$ is in Cluster 1 and that node $v$ is in Cluster 2. Thus, the nodes are in different clusters. In this case, a value of 1 is contributed to the cost if and only if $c_{uv} = -1$ .
+
+Note that when using QAOA for maximum agreement correlation clustering, we need to take the negative of the above cost function, since the expected cost (5) is minimized.
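+To make the objective concrete, the agreement count in (6) can be evaluated classically as follows. This is a minimal sketch of our own (the three-node instance is made up, not the one from Figure 1), and the brute force is only feasible for very small graphs:
+
+```python
+# Sketch: agreement count of Eq. (6) for a given clustering, plus a
+# brute-force optimum over all cluster assignments (tiny graphs only).
+from itertools import product
+
+def agreements(edges, clustering):
+    """edges: dict {(u, v): cost in {+1, -1}}; clustering: dict node -> cluster id."""
+    total = 0
+    for (u, v), c in edges.items():
+        same = clustering[u] == clustering[v]
+        # +1 edges agree when their endpoints share a cluster,
+        # -1 edges agree when they do not
+        total += int((c == 1) == same)
+    return total
+
+def brute_force_optimum(nodes, edges):
+    return max(agreements(edges, dict(zip(nodes, labels)))
+               for labels in product(range(len(nodes)), repeat=len(nodes)))
+
+# hypothetical instance: one attractive (+1) and two repulsive (-1) edges
+edges = {(0, 1): 1, (1, 2): -1, (0, 2): -1}
+assert agreements(edges, {0: 0, 1: 0, 2: 1}) == 3
+assert brute_force_optimum([0, 1, 2], edges) == 3
+```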
+
+# 4. Sub-Problem Quantum Alternating Operator Ansatz
+
+In this section, we introduce the Sub-Problem Quantum Alternating Operator Ansatz, a generalization of QAOA that, as we will show in Section 5, leads to better results when solving the correlation clustering problem in terms of both approximation ratio and required resources, while maintaining the optimality guarantee for $p \to \infty$ . In comparison to QAOA, we make two significant changes: Firstly, we employ nucleus sampling (Holtzman et al., 2020) for the computation of the expected cost. Secondly, we alter the ansatz itself by splitting the problem into sub-problems and applying QAOA to each of them.
+
+**Nucleus Sampling** As is typical for variational algorithms, QAOA minimizes the expectation value of the classical cost function, which is represented by a cost Hamiltonian (5). Since the expectation value acts as an upper bound on the ground state energy, i.e., the optimal cost, this minimization approximates ground states of the cost Hamiltonian, and thus optimal solutions.
+
+The upper diagram of Figure 2 shows the agreements of the basis states of the multi-level QAOA formulation of Weggemans et al. (2022) with $p = 1$ for a correlation clustering problem instance and the corresponding probabilities of these basis states. As expected, the algorithm shifts probability mass to solutions of low cost (i.e., high agreement).
+
+
+Figure 2. Depicted are two diagrams showing the probability of measuring basis states when applying the multi-level QAOA formulation of Weggemans et al. (2022) with $p = 1$ to the correlation clustering problem instance given in Figure 1. Shown in addition are the agreements of these basis states, i.e., the value of the cost function in (6). The probabilities of the diagram at the top are obtained directly from the QAOA results. The probabilities of the diagram at the bottom are obtained by nucleus sampling with a threshold of $t = 0.5$ .
+
+However, solutions of high cost are not completely erased and, due to their number, increase the expectation value significantly. The problem of this "unreliable tail" also occurs in decoding strategies for large language models and was approached by Holtzman et al. (2020) using a technique called nucleus sampling (sometimes called top-p sampling).
+
+The main idea of nucleus sampling is that, instead of sampling directly from a given probability distribution, we sample from the most probable states whose cumulative probability surpasses a previously defined threshold. The set of those states is called the nucleus. Hence, in our case, we do not compute the expected cost with respect to the $n$ -qubit state $|\psi \rangle$ obtained from the quantum algorithm, but with respect to $|\psi^{\prime}\rangle$ obtained in the following way. Firstly, we set a threshold $t \in (0,1]$ and compute a nucleus, i.e., a smallest set $X^{(t)} \subseteq \{0,1\}^n$ such that
+
+$$
+\sum_ {x \in X ^ {(t)}} | \langle x | \psi \rangle | ^ {2} \geq t.
+$$
+
+Secondly, we set the probability amplitudes of the basis states not in $X^{(t)}$ to zero and use
+
+$$
+t ^ {\prime} = \sqrt {\sum_ {x \in X ^ {(t)}} | \langle x | \psi \rangle | ^ {2}}
+$$
+
+to rescale the remaining states accordingly, i.e., we set $|\psi^{\prime}\rangle$ such that
+
+$$
+\langle x \mid \psi^{\prime} \rangle = \begin{cases} \langle x \mid \psi \rangle / t^{\prime} & \text{if } x \in X^{(t)} \\ 0 & \text{otherwise}. \end{cases}
+$$
+
+For the case $t = 1$ , it holds that $|\psi^{\prime}\rangle = |\psi \rangle$ , and our approach specializes to regular sampling. Moreover, since $|\psi^{\prime}\rangle$ is also a normalized quantum state, the obtained expected cost still acts as an upper bound on the ground state energy.
+
+The lower diagram of Figure 2 shows the probabilities obtained when using multi-level QAOA with nucleus sampling and a threshold of $t = 0.5$ for both parameter training and inference. Clearly, reducing the threshold below 1 shifts probability mass toward better solutions.
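+Operationally, computing the nucleus is straightforward. The following Python sketch (our own; it works on measured probabilities rather than amplitudes, so the kept entries are divided by the retained probability mass rather than by $t'$ ) truncates a distribution at threshold $t$ :
+
+```python
+# Sketch of nucleus (top-p) truncation on a measured distribution: keep
+# a smallest set of most probable states whose cumulative mass reaches
+# the threshold t, drop the rest, and renormalize the kept entries.
+def nucleus(dist, t):
+    """dist: dict mapping basis state -> probability; returns the renormalized nucleus."""
+    mass, kept = 0.0, {}
+    for state, prob in sorted(dist.items(), key=lambda kv: -kv[1]):
+        kept[state] = prob
+        mass += prob
+        if mass >= t:
+            break
+    return {s: p / mass for s, p in kept.items()}
+
+dist = {"00": 0.5, "11": 0.25, "01": 0.125, "10": 0.125}
+assert nucleus(dist, 0.5) == {"00": 1.0}   # smallest set reaching mass 0.5
+assert nucleus(dist, 1.0) == dist          # t = 1 specializes to regular sampling
+```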
+
+**Sub-Problems** Instead of solving the whole correlation clustering problem at once by applying QAOA (1), we split it into $l$ sub-problems, solving each with QAOA, and introduce transition operators $U_{T_i}$ for $i \in \{1, \dots, l\}$ , preparing their initial state:
+
+$$
+\left| \boldsymbol {\beta}, \boldsymbol {\gamma} \right\rangle^ {\mathrm {S Q A O A}} := \prod_ {i = 1} ^ {l} \left(\prod_ {j = 1} ^ {p} U _ {M _ {i}} \left(\boldsymbol {\beta} _ {\boldsymbol {i}, \boldsymbol {j}}\right) U _ {C _ {i}} \left(\boldsymbol {\gamma} _ {\boldsymbol {i}, \boldsymbol {j}}\right)\right) U _ {T _ {i}} \left| \boldsymbol {0} \right\rangle . \tag {7}
+$$
+
+Clearly, for $l = 1$ , with $U_{T_1}|\mathbf{0}\rangle = |s\rangle$ and operators $U_{M_1}, U_{C_1}$ satisfying properties (2-4), this specializes to QAOA, so we are again considering a proper generalization.
+
+In order to apply this ansatz to an instance of the (unweighted maximum agreement) correlation clustering problem given by an undirected graph $G = (V, E)$ with $n = |V|$ nodes and costs $c \in \{+1, -1\}^E$ , we need to choose the number of sub-problems $l$ and the corresponding operators appropriately. We do so by modelling correlation clustering as an iterated application of max-cut. In each of $l = n - 1$ iterations, we solve a max-cut problem restricted to those nodes that have not been assigned to a cluster in a previous iteration. The nodes that are labeled 0 by the solution to the max-cut problem are assigned to a new cluster. The nodes labeled 1 remain unassigned (or are assigned to a "final" cluster if it is the last iteration).
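+The iteration just described can be mimicked entirely classically. The sketch below (our own; a brute-force max-cut on the signed graph stands in for the quantum sub-problem solver, and all helper names are hypothetical) illustrates how repeated cuts produce a clustering:
+
+```python
+# Sketch of the iterated max-cut view of correlation clustering:
+# in each round, cut the remaining nodes, open a new cluster for the
+# 0-labeled side, and keep splitting the 1-labeled side.
+from itertools import product
+
+def max_cut(nodes, edges):
+    """Brute-force 0/1 labeling of `nodes` maximizing the agreements of the
+    cut with the signed edge costs (+1 edges uncut, -1 edges cut)."""
+    best, best_lab = -1, None
+    for bits in product((0, 1), repeat=len(nodes)):
+        lab = dict(zip(nodes, bits))
+        score = sum((c == -1) == (lab[u] != lab[v])
+                    for (u, v), c in edges.items() if u in lab and v in lab)
+        if score > best:
+            best, best_lab = score, lab
+    return best_lab
+
+def iterated_max_cut(nodes, edges):
+    clustering, remaining, cluster = {}, list(nodes), 0
+    while remaining:
+        lab = max_cut(remaining, edges)
+        # nodes labeled 0 are decided; guard against a degenerate all-ones cut
+        decided = [u for u in remaining if lab[u] == 0] or remaining
+        for u in decided:
+            clustering[u] = cluster
+        remaining = [u for u in remaining if u not in decided]
+        cluster += 1
+    return clustering
+
+edges = {(0, 1): 1, (1, 2): -1, (0, 2): -1}
+clustering = iterated_max_cut([0, 1, 2], edges)
+assert clustering[0] == clustering[1] != clustering[2]
+```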
+
+Realizing this directly within the framework of (7) is possible but requires complicated operators and, since at least one qubit per node is needed for the decision in each sub-problem, $\Omega(n^2)$ qubits, which is worse than the approaches presented by Weggemans (2020) and Weggemans et al. (2022). However, since the qubits associated with a sub-problem are only manipulated by its operators, there is no interference between states that differ in one of those qubits once the sub-problem is processed. Consequently, we can measure all qubits of a sub-problem after applying its operators and evaluate the following sub-problem only on the classical probability distribution estimated from these measurements instead of further performing operations on the whole quantum system. This fact allows for a more efficient implementation described in the following and in more detail in Algorithm 1.
+
+We choose $l = n - 1$ . For any sub-problem $i \in \{1, \dots, l\}$ and any solution $|x\rangle$ of the previous sub-problem, we compute
+
+$$
+\left| \boldsymbol {\beta}, \boldsymbol {\gamma} \right\rangle_ {i, x} ^ {\mathrm {S Q A O A}} := \left(\prod_ {j = 1} ^ {p} U _ {M _ {i, x}} \left(\boldsymbol {\beta} _ {\boldsymbol {i}, \boldsymbol {j}}\right) U _ {C _ {i, x}} \left(\boldsymbol {\gamma} _ {\boldsymbol {i}, \boldsymbol {j}}\right)\right) U _ {T _ {i, x}} \left| \mathbf {0} \right\rangle , \tag {8}
+$$
+
+where
+
+$$
+U_{T_{i,x}} = \bigotimes_{u \in V_{i,x}} H_{u}, \tag{9}
+$$
+
+$$
+U_{M_{i,x}}(\beta) = e^{-i\beta \sum_{u \in V_{i,x}} X_{u}}, \tag{10}
+$$
+
+$$
+U_{C_{i,x}}(\gamma_{1}, \gamma_{2}) = e^{-i\gamma_{1} \sum_{uv \in E_{i,x}} c_{uv} Z_{u} Z_{v}} \, e^{-i\gamma_{2} \sum_{u \in V_{i,x}} w_{u} Z_{u}}, \tag{11}
+$$
+
+$G_{i,x} = (V_{i,x},E_{i,x})$ is the graph obtained from $G$ by removing nodes decided, i.e., labeled 0, in solution $|x\rangle$ of the previous sub-problem and $w\in \mathbb{R}^V$ are weights fulfilling $w_{u}^{2}\neq w_{v}^{2}$ for all distinct nodes $u,v\in V$ . For the first sub-problem we need to consider the whole graph $G$ ; therefore, we set $|x\rangle$ to $|\mathbf{1}\rangle$ , i.e., all nodes are yet undecided. After preparing the state $|\beta ,\gamma \rangle_{i,x}^{\mathrm{SQAOA}}$ , we estimate the corresponding probability distribution by sampling repeatedly from it and continue evaluating the next sub-problem on all states with non-zero probability. Once all sub-problems are processed, the expected costs are computed from the measured probability distributions. Figure 3 illustrates the described procedure compared to the multi-level approach of Weggemans et al. (2022).
+
+The transition operator $U_{T_{i,x}}$ uses Hadamard gates to construct an equal superposition of all feasible states of the sub-problem. The mixing operator $U_{M_{i,x}}$ enables transitions between the feasible states of a sub-problem by flipping qubits, i.e., by changing whether the corresponding nodes remain in the current cluster or are assigned to a new cluster that is further split in the next sub-problem. The phase-separation operator $U_{C_{i,x}}$ incorporates the cost function into the first exponent, as described in Section 3, but drops constant terms since they affect all states in the same way and can be neglected. Additionally, the Hamiltonian given by the second exponent allows a cost-independent separation of phases based on individual nodes. The motivation behind introducing this second term with weights $w$ , as well as the reason for restricting those, will be discussed in Section 5.
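+For readers who want to see the operators (9-11) in action, the following pure-Python statevector sketch (our own; no quantum SDK, and all names are illustrative) prepares the state of one sub-problem layer: $U_{T_{i,x}}$ as a uniform superposition, $U_{C_{i,x}}$ as a diagonal phase, and $U_{M_{i,x}}$ as single-qubit $RX(2\beta)$ rotations:
+
+```python
+# Sketch (pure-Python statevector simulation) of one SQAOA sub-problem,
+# Eqs. (8)-(11): U_T prepares the uniform superposition, U_C applies a
+# diagonal phase, and the X-mixer U_M is a product of RX(2*beta) gates.
+import cmath, math
+
+def sub_problem_state(nodes, edges, w, beta, gamma1, gamma2, p=1):
+    n, dim = len(nodes), 2 ** len(nodes)
+    idx = {u: q for q, u in enumerate(nodes)}
+    state = [1 / math.sqrt(dim)] * dim                # U_T |0...0>
+    for _ in range(p):
+        # U_C: phase exp(-i(gamma1 * sum c_uv Z_u Z_v + gamma2 * sum w_u Z_u))
+        for k in range(dim):
+            z = [1 - 2 * ((k >> (n - 1 - q)) & 1) for q in range(n)]
+            phase = gamma1 * sum(c * z[idx[u]] * z[idx[v]] for (u, v), c in edges.items())
+            phase += gamma2 * sum(w[u] * z[idx[u]] for u in nodes)
+            state[k] *= cmath.exp(-1j * phase)
+        # U_M: exp(-i beta X) = RX(2*beta) applied to every qubit
+        c_, s_ = math.cos(beta), -1j * math.sin(beta)
+        for q in range(n):
+            nxt = list(state)
+            for k in range(dim):
+                if not (k >> (n - 1 - q)) & 1:
+                    k1 = k | (1 << (n - 1 - q))
+                    nxt[k] = c_ * state[k] + s_ * state[k1]
+                    nxt[k1] = s_ * state[k] + c_ * state[k1]
+            state = nxt
+    return state
+
+# sanity check on a hypothetical 3-node sub-problem: the state stays normalized
+state = sub_problem_state([0, 1, 2], {(0, 1): 1, (1, 2): -1}, {0: 1, 1: 2, 2: 3}, 0.4, 0.3, 0.2)
+assert abs(sum(abs(a) ** 2 for a in state) - 1) < 1e-9
+```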
+
+As described, we evaluate each sub-problem with respect to all solutions having a non-zero probability in the previous sub-problem. Since there can be exponentially many of these for dense probability distributions, this may constitute a performance bottleneck. However, since we apply nucleus sampling, as discussed previously, we only need to evaluate the next problem on the states in the nucleus. Although this does not guarantee a sub-exponential number of evaluations, we show experimentally in Section 5 that this number remains almost constant for the considered problem sizes and low nucleus sampling thresholds.
+
+Figure 3. Depicted are basis states of the multi-level QAOA formulation of Weggemans et al. (2022) (left) and SQAOA (right) corresponding to the same solution of a correlation clustering problem instance with four nodes. In the presented solution, the first node is assigned to the first cluster, the second node to the fourth cluster, and the remaining nodes to the second cluster. For QAOA, each column represents a qudit and each node a qudit state. For SQAOA, each node represents a qubit. Nodes colored black correspond to qudit states or qubits set to $|1\rangle$ , and white nodes to qudit states or qubits set to $|0\rangle$ . The arrows indicate transitions realized by the mixing operator. For SQAOA, the colors indicate the three sub-problems where, due to the measurements, the same physical qubits can be used for all sub-problems.
+
+Note that, besides the sum over $V_{i,x}$ in $U_{C_{i,x}}$ , the ansatz for a sub-problem corresponds to the one used for solving max-cut with the Quantum Approximate Optimization Algorithm (Farhi et al., 2014). Without this second term in the phase-separation operator, only pairwise interactions between nodes, weighted by the costs, would be considered for the phase separation. Including it allows us to also take individual nodes with weights $w$ into account. Note further that, while all other terms are permutation invariant, and we thus expect the parameters $\beta$ and $\gamma_1$ to be reusable across instances as for QAOA, the newly introduced term with parameter $\gamma_2$ is not, making it necessary to relearn it for different problem instances, or even for the same instance when its nodes are permuted.
+
+As shown by Weggemans (2020), operators (9-11) can be implemented using only Hadamard, RX, RZ and CX gates without requiring additional ancilla qubits. Moreover, since we measure after processing a sub-problem, qubits can be reused, and thus only a total of $n$ qubits are needed for the whole algorithm.
+
+Algorithm 1 SQAOA - Correlation Clustering
+
+    probabilities = empty dictionary
+    current_states = empty set
+    add 1 to current_states
+    for i = 1 to l do
+        next_states = empty set
+        for x in current_states do
+            for shot = 1 to 1000 do
+                |ψ⟩ = U_{T_{i,x}} |0⟩
+                for j = 1 to p do
+                    |ψ⟩ = U_{M_{i,x}}(β_{i,j}) U_{C_{i,x}}(γ_{i,j}) |ψ⟩
+                measure |ψ⟩ and update probabilities[i][x]
+            add nucleus of probabilities[i][x] to next_states
+        current_states = next_states
+    costs = empty dictionary
+    for i = l down to 1 do
+        for y in probabilities[i] do
+            costs[i][y] = 0
+            for (p, x) in nucleus of probabilities[i][y] do
+                for uv in E_{i,y} do
+                    if c_uv == 1 then
+                        costs[i][y] += p * (1 - x_u) * (1 - x_v)
+                        if i == l then costs[i][y] += p * x_u * x_v
+                    if c_uv == -1 then
+                        costs[i][y] += p * (1 - x_u) * x_v + p * x_u * (1 - x_v)
+                if i != l then costs[i][y] += p * costs[i + 1][x]
+    return costs[1][1]
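+The classical backward pass at the end of Algorithm 1 can be isolated as follows. This is a sketch of our own with a hypothetical data layout (nested dictionaries of nucleus probabilities), not the repository code:
+
+```python
+# Sketch of the backward cost aggregation in Algorithm 1: expected
+# agreements are accumulated from the last sub-problem to the first
+# over the measured (nucleus-truncated) probability distributions.
+def backward_costs(probabilities, local_agreements, l):
+    """probabilities[i][y]: dict outcome x -> probability (nucleus only);
+    local_agreements(i, y, x, last): agreements contributed by outcome x
+    of sub-problem i entered from state y; `last` flags the final level."""
+    costs = {i: {} for i in range(1, l + 1)}
+    for i in range(l, 0, -1):
+        for y, dist in probabilities[i].items():
+            total = 0.0
+            for x, p in dist.items():
+                total += p * local_agreements(i, y, x, i == l)
+                if i != l:
+                    total += p * costs[i + 1][x]
+            costs[i][y] = total
+    return costs
+
+# toy two-level example: every outcome contributes one agreement
+probs = {1: {"start": {"a": 0.6, "b": 0.4}},
+         2: {"a": {"x": 1.0}, "b": {"x": 1.0}}}
+costs = backward_costs(probs, lambda i, y, x, last: 1, 2)
+assert abs(costs[1]["start"] - 2.0) < 1e-9
+```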
+
+# 5. Evaluation
+
+In this section, we evaluate our SQAOA formulation for correlation clustering. Firstly, we show that for $p \to \infty$ , there exist parameters for each problem instance such that an optimal solution is obtained with certainty. Secondly, we experimentally compare our approach to the one of Weggemans et al. (2022) in terms of approximation ratios and runtimes.
+
+**Theoretical Analysis** QAOA yields an optimal solution under the conditions given by Binkowski et al. (2024), which include, in particular, that $p \to \infty$ and that the optimal solution is an eigenvector of the phase-separation operator with the smallest eigenvalue. Although the given operators fulfill these conditions, this argument only guarantees that optimal solutions are reached on the individual sub-problems, not a globally optimal solution. Considering only pairwise interactions for the phase-separation operator, we have not been able to prove or disprove that there exist parameters such that (8) yields a globally optimal solution for $p \to \infty$ . However, when including the term for individual nodes, we have been able to show this by adapting the universality proof for the Quantum Approximate Optimization Algorithm from Morales et al. (2020), as shown in the following. The proofs of the upcoming lemmata are deferred to Appendix B.
+
+Definition 5.1. Given a set of Hamiltonians $\mathcal{P} = \{H_1, H_2, \ldots, H_q\}$ , we call the smallest real Lie algebra $\mathcal{L}$ with the commutator as the Lie bracket containing the elements of $\mathcal{P}$ the generated Lie algebra of $\mathcal{P}$ .
+
+Proposition 5.2. (D'Alessandro, 2021) Let $\mathcal{P}$ be a set of Hamiltonians and let $\mathcal{L}$ be the generated Lie algebra of $\mathcal{P}$ . The set of unitaries that can be approximated to arbitrary precision by iterated application of the elements in $\mathcal{P}$ is given by
+
+$$
+\left\{e ^ {- i A} \mid A \in \mathcal {L} \right\}.
+$$
+
+Lemma 5.3. Let $G = (V, E)$ be an undirected graph, let $c \in \mathbb{R}^E$ and $w \in \mathbb{R}^V$ . Let further
+
+$$
+H _ {M} = \sum_ {u \in V} X _ {u}, \quad H _ {C} = \sum_ {u v \in E} c _ {u v} Z _ {u} Z _ {v} + \sum_ {u \in V} w _ {u} Z _ {u}
+$$
+
+and let $\mathcal{L}$ be the generated Lie algebra of $\{H_M, H_C\}$ . It holds that
+
+$$
+H _ {C ^ {\prime}} := \sum_ {u \in V} w _ {u} Z _ {u} \in \mathcal {L}.
+$$
+
+Lemma 5.4. Let $G = (V, E)$ be an undirected graph and let $w \in \mathbb{R}^V$ . Let further
+
+$$
+H_{M} = \sum_{u \in V} X_{u}, \quad H_{C^{\prime}} = \sum_{u \in V} w_{u} Z_{u}
+$$
+
+and let $\mathcal{L}$ be the generated Lie algebra of $\{H_M, H_{C'}\}$ . If $w_u^2 \neq w_v^2$ for all distinct $u, v \in V$ , it holds for all $u' \in V$ that
+
+$$
+H _ {u ^ {\prime}}, X _ {u ^ {\prime}} \in \mathcal {L}.
+$$
+
+Theorem 5.5. Let $G = (V, E)$ be an undirected graph, let $c \in \mathbb{R}^E$ and $w \in \mathbb{R}^V$ with $w_u^2 \neq w_v^2$ for all distinct $u, v \in V$ . Let further
+
+$$
+U _ {T} = \bigotimes_ {u \in V} H _ {u},
+$$
+
+$$
+U_{M}(\beta) = e^{-i\beta \sum_{u \in V} X_{u}} \quad \text{and}
+$$
+
+$$
+U _ {C} \left(\gamma_ {1}, \gamma_ {2}\right) = e ^ {- i \left(\gamma_ {1} \sum_ {u v \in E} c _ {u v} Z _ {u} Z _ {v} + \gamma_ {2} \sum_ {u \in V} w _ {u} Z _ {u}\right)}.
+$$
+
+For any basis state $|x\rangle$ with $x\in \{0,1\}^{|V|}$ , there exist parameters $\beta_{j},\gamma_{1,j},\gamma_{2,j}\in \mathbb{R}$ for $j\in \{1,\ldots ,p\}$ and a phase shift $\theta \in \mathbb{R}$ such that it holds for $p\to \infty$ :
+
+$$
+e ^ {- i \theta} | x \rangle = \left(\prod_ {j = 1} ^ {p} U _ {M} (\beta_ {j}) U _ {C} (\gamma_ {1, j}, \gamma_ {2, j})\right) U _ {T} | \mathbf {0} \rangle .
+$$
+
+Proof. Let $\mathcal{L}$ be the generated Lie algebra of $\{H_M, H_C\}$ . After applying $U_T$ , the qubits are in state $U_T |\mathbf{0}\rangle$ . Since $H_u \in \mathcal{L}$ by Lemma 5.3 and Lemma 5.4, we can revert this state to $|\mathbf{0}\rangle$ (modulo a phase shift of $(-i)^{|V|}$ ) by applying $\prod_{u \in V} e^{-i(\pi/2)H_u} = (-i)^{|V|} \bigotimes_{u \in V} H_u$ . Next, we can (modulo a phase shift of $-i$ ) flip individual qubits associated with nodes $u \in V$ by applying $e^{-i(\pi/2)X_u} = -iX_u$ , since $X_u \in \mathcal{L}$ by Lemma 5.3 and Lemma 5.4. This allows us to construct arbitrary basis states $|x\rangle$ (modulo a potential phase shift $e^{-i\theta}$ ).
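+The two gate identities used above rely on the fact that both $X$ and the Hadamard gate square to the identity, so $e^{-itM} = \cos(t)I - i\sin(t)M$ and, at $t = \pi/2$ , the exponential reduces to $-iM$ . A small numeric check (our own sketch, using a Taylor-series matrix exponential):
+
+```python
+# Sketch: verify e^{-i(pi/2)M} = -i M for M in {X, Hadamard}, the two
+# identities used in the proof of Theorem 5.5 (both satisfy M^2 = I).
+import math
+
+X = [[0, 1], [1, 0]]
+H = [[1 / math.sqrt(2), 1 / math.sqrt(2)], [1 / math.sqrt(2), -1 / math.sqrt(2)]]
+
+def expm(M, t, terms=40):
+    """Taylor series for e^{-i t M} on 2x2 matrices given as nested lists."""
+    acc = [[1 + 0j, 0j], [0j, 1 + 0j]]       # running sum, starts at I
+    term = [[1 + 0j, 0j], [0j, 1 + 0j]]      # current term (-i t M)^k / k!
+    for k in range(1, terms):
+        term = [[sum(term[i][m] * M[m][j] for m in range(2)) * (-1j * t) / k
+                 for j in range(2)] for i in range(2)]
+        acc = [[acc[i][j] + term[i][j] for j in range(2)] for i in range(2)]
+    return acc
+
+for M in (X, H):
+    E = expm(M, math.pi / 2)
+    assert all(abs(E[i][j] - (-1j) * M[i][j]) < 1e-9
+               for i in range(2) for j in range(2))
+```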
+
+According to Theorem 5.5, arbitrary basis states can be constructed in each sub-problem when $p$ approaches infinity. Therefore, for each instance of the correlation clustering problem, there clearly exist parameters such that our SQAOA formulation (8) obtains optimal solutions with certainty.
+
+**Empirical Analysis** To demonstrate the benefits of SQAOA and of nucleus sampling in general, we conduct experiments on instances of the correlation clustering problem involving complete graphs and Erdős-Rényi graphs where the probability of an edge being present is 0.5. We then compare these results with those of the multi-level QAOA formulation presented by Weggemans et al. (2022). The code for the SQAOA experiments is available at https://github.com/fabian-na/SQAOA.
+
+For the experimental setup, we mainly follow Weggemans et al. (2022). In particular, for a fixed graph size, we evaluate the performance on datasets consisting of 50 problem instances with edge weights $\{+1, -1\}$ , where the probability of an edge having weight $+1$ is uniformly increased from 0 to 1 to represent all weight configurations. Mean values and standard deviations given in this section always refer to the results obtained for a dataset, i.e., a mean approximation ratio of 1.0 with a standard deviation of 0.0 indicates that all 50 instances are solved to optimality.
+
+For the classical optimization procedure, we use the Powell optimizer, which has proven to be efficient for solving other problems with QAOA (Pellow-Jarman et al., 2021; Fernandez-Pendas et al., 2022). Further, for each dataset, we first learn parameters for the instance with all edges having weight $-1$ and then use these parameters as an initial point for the remaining instances in that dataset. The only exceptions are the parameters $\gamma_{2}$ used in the phase-separation operator of SQAOA. Those are, due to their permutation dependence, always initialized to 0. The corresponding weights $w$ are chosen by numbering all nodes with integers ranging from 1 to $n$ . With this choice, the phase-separation operator remains $2\pi$ -periodic with respect to each of its parameters. We set the number of shots used to estimate probability distributions to 1000 and restart each optimization procedure 5 times, taking only the best overall result.
+
+Table 1. Mean approximation ratios and runtimes for solving 50 correlation clustering problem instances on Erdős-Rényi graphs with $n = 3,4,5$ nodes using the multi-level QAOA formulation of Weggemans et al. (2022) and SQAOA with a depth of $p = 1$ and thresholds for nucleus sampling of $t = 1,0.1$ .
+
+Erdős-Rényi Graphs - Approximation Ratio
+
+|  | n = 3 | n = 4 | n = 5 |
+| --- | --- | --- | --- |
+| QAOA t = 1 | 0.97 ± 0.04 | 0.92 ± 0.07 | 0.92 ± 0.07 |
+| QAOA t = 0.1 | 1.00 ± 0.00 | 1.00 ± 0.00 | 1.00 ± 0.00 |
+| SQAOA t = 1 | 0.94 ± 0.08 | 0.88 ± 0.09 | 0.85 ± 0.10 |
+| SQAOA t = 0.1 | 1.00 ± 0.00 | 1.00 ± 0.02 | 1.00 ± 0.02 |
+
+Erdős-Rényi Graphs - Runtime [s]
+
+| | n = 3 | n = 4 | n = 5 |
+| --- | --- | --- | --- |
+| QAOA t = 1 | 121 ± 40 | 221 ± 30 | 316 ± 44 |
+| QAOA t = 0.1 | 5 ± 0 | 15 ± 2 | 109 ± 32 |
+| SQAOA t = 1 | 23 ± 12 | 161 ± 45 | 799 ± 256 |
+| SQAOA t = 0.1 | 2 ± 1 | 6 ± 2 | 10 ± 5 |
+
+Table 1 shows approximation ratios and runtimes obtained for multi-level QAOA and SQAOA with depth $p = 1$ on correlation clustering instances of Erdős-Rényi graphs with $n = 3,4,5$ nodes and two thresholds $t = 1$ and $t = 0.1$ for nucleus sampling. An extended version of this table containing results for complete graphs, depths $p = 2,3$ and threshold $t = 0.5$ is given in Appendix C. As can be seen from the table, both QAOA and SQAOA perform, even for $t = 1$ , significantly better than the Swamy bound of 0.7666 (Swamy, 2004), with SQAOA achieving slightly worse approximation ratios than QAOA. Setting the threshold to $t = 0.1$ greatly improves the approximation ratios, leading to optimal results for QAOA and near-optimal results for SQAOA. While improving the approximation ratios, reducing the threshold also leads to an overall reduction of the runtime for the given experiments. However, this does not hold in general. As shown in the appendix, solving QAOA with $t = 0.5$ takes significantly longer than solving QAOA with $t = 1$ for the dataset with 5 nodes. For SQAOA, we do not observe such behavior; in fact, the runtimes seem to scale much better than for the QAOA approach when using a low threshold for nucleus sampling.
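For intuition, the nucleus for a given threshold $t$ can be computed from a measured probability distribution in the style of top-$p$ sampling (Holtzman et al., 2020); the following is a generic sketch with hypothetical names, not the authors' code:

```python
import numpy as np

def nucleus_indices(probs, t):
    """Indices of the smallest set of states whose total probability
    reaches the nucleus-sampling threshold t (t = 1 keeps all states)."""
    probs = np.asarray(probs, dtype=float)
    order = np.argsort(probs)[::-1]              # states by decreasing probability
    cumulative = np.cumsum(probs[order])
    k = int(np.searchsorted(cumulative, t)) + 1  # smallest prefix with mass >= t
    return order[:k]
```

With a low threshold such as $t = 0.1$, only the few most probable states remain in the nucleus, which is one reason each cost evaluation becomes cheaper.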
+
+Since the threshold alters only the computation of the cost function for QAOA and not the quantum algorithm itself, the difference in runtimes must be caused by an increased number of function evaluations during the optimization procedure. This might be due to the fact that discontinuities are introduced into the cost function when states enter or leave the nucleus. For SQAOA, on the other hand, reducing $t$ has always resulted in lower runtimes for the experiments we have conducted. This is because, even as the number of function evaluations increases, each evaluation takes less time since fewer elements are in the nucleus. This is illustrated in Figure 4 for the dataset of Erdős-Rényi graphs with 5 nodes and $p = 1$ . In particular, one can see that the number of elements in the nucleus is almost constant for the considered problem sizes and low thresholds.
+
+Figure 4. Mean nucleus size and mean number of function evaluations when solving the dataset of Erdős-Rényi graphs with 5 nodes using SQAOA with depth $p = 1$ for nucleus sampling thresholds of $t = 1, 0.5, 0.1$ .
+
+Figure 5. Mean approximation ratios and runtimes of SQAOA ( $t = 0.1$ ) and the multi-level QAOA approach of Weggemans et al. (2022) ( $t = 1$ ) with ansatz depth $p = 1$ when applied to datasets of Erdős-Rényi graphs with up to 10 nodes for SQAOA and 7 nodes for QAOA.
+
+Weggemans et al. (2022) consider instances of the correlation clustering problem with up to 7 nodes. In Figure 5, we give approximation ratios and runtimes for SQAOA with $t = 0.1$ and $p = 1$ on instances with up to 10 nodes. As can be seen from the figure, the runtime increases exponentially, as expected, although slower than for the QAOA formulation. The approximation ratio, however, seems to remain almost constant, further corroborating the potential of SQAOA and variational algorithms in general.
+
+# 6. Conclusion
+
+We introduce the Sub-Problem Quantum Alternating Operator Ansatz (SQAOA), a generalization of the Quantum Alternating Operator Ansatz (QAOA) based on nucleus sampling and splitting problems into sub-problems. In a theoretical analysis, we show that for each instance of the correlation clustering problem, there exist parameters such that a specific SQAOA formulation of the problem obtains an optimal solution with certainty. Further, we show experimentally that this SQAOA formulation outperforms existing QAOA approaches for correlation clustering in terms of approximation ratios and runtime while using only as many qubits as there are elements to cluster.
+
+We see two possible directions for future research: further analyzing SQAOA for correlation clustering and extending its application to other problems. Regarding the first direction, we have not yet given a lower bound on the achieved approximation ratio, as done by Weggemans (2020) for the multi-level formulation. One could also consider modelling correlation clustering with different sub-problems, since we do not exploit the full expressiveness of SQAOA with the current formulation, which uses the same operators for each sub-problem. Regarding the second direction, splitting a problem into sub-problems is a universal approach, and similar improvements may be possible for problems beyond correlation clustering. Of particular interest are problems in which elements are assigned one of multiple labels. For example, one could consider the Maximum $k$ -Colorable Subgraph Problem, with sub-problems coloring previously unconsidered parts of the graph using a fixed number of colors smaller than $k$ .
+
+# Acknowledgements
+
+We thank Jordi Weggemans for providing the source code of Weggemans et al. (2022), which we use to perform the QAOA experiments. This work is partly supported by the Federal Ministry of Education and Research of Germany through DAAD Project 57616814 (SECAI) and Project 16KIS2332K (AI.Auto-Immune).
+
+# Impact Statement
+
+This theoretical article presents work whose goal is to advance the field of machine learning, more specifically clustering. As for all advances in this field, there are many potential societal consequences of our work. However, we do not feel that the implications of this article differ from those of other contributions to the field, nor that they must be specifically highlighted here.
+
+# References
+
+Abbas, A. and Swoboda, P. ClusterFuG: Clustering Fully connected Graphs by Multicut. In ICML, 2023. URL https://proceedings.mlr.press/v202/abbas23a.
+Akshay, V., Rabinovich, D., Campos, E., and Biamonte, J. Parameter concentrations in quantum approximate optimization. Phys. Rev. A, 104:L010401, 2021. doi: 10.1103/PhysRevA.104.L010401.
+Bansal, N., Blum, A., and Chawla, S. Correlation clustering. Machine Learning, 56(1):89-113, 2004. doi: 10.1023/B:MACH.0000033116.57574.95.
+Beier, T., Hamprecht, F. A., and Kappes, J. H. Fusion moves for correlation clustering. In CVPR, 2015. doi: 10.1109/CVPR.2015.7298973.
+Benjelloun, O., Garcia-Molina, H., Menestrina, D., Su, Q., Whang, S. E., and Widom, J. Swoosh: a generic approach to entity resolution. The VLDB Journal, 18(1):255-276, 2009. doi: 10.1007/s00778-008-0098-x.
+Binkowski, L., Koßmann, G., Ziegler, T., and Schwonnek, R. Elementary proof of QAOA convergence. New Journal of Physics, 26(7):073001, 2024. doi: 10.1088/1367-2630/ad59bb.
+Blekos, K., Brand, D., Ceschini, A., Chou, C.-H., Li, R.-H., Pandya, K., and Summer, A. A review on quantum approximate optimization algorithm and its variants. Physics Reports, 1068:1-66, 2024. doi: 10.1016/j.physrep.2024.03.002.
+Brandao, F. G. S. L., Broughton, M., Farhi, E., Gutmann, S., and Neven, H. For fixed control parameters the quantum approximate optimization algorithm's objective function value concentrates for typical instances, 2018. URL https://arxiv.org/abs/1812.04170.
+Cerezo, M., Arrasmith, A., Babbush, R., Benjamin, S. C., Endo, S., Fujii, K., McClean, J. R., Mitarai, K., Yuan, X., Cincio, L., and Coles, P. J. Variational quantum algorithms. Nature Reviews Physics, 3(9):625-644, 2021. doi: 10.1038/s42254-021-00348-9.
+Cook, J., Eidenbenz, S., and Bärtschi, A. The quantum alternating operator ansatz on maximum k-vertex cover. In International Conference on Quantum Computing and Engineering, 2020. doi: 10.1109/QCE49297.2020.00021.
+D'Alessandro, D. Introduction to quantum control and dynamics. Chapman & Hall/CRC. Taylor & Francis Ltd, 2nd edition, 2021. doi: 10.1201/9781003051268.
+
+D'haeseleer, P. How does gene expression clustering work? Nature Biotechnology, 23(12):1499-1501, 2005. doi: 10.1038/nbt1205-1499.
+Erola, P., Björkegren, J. L. M., and Michoel, T. Model-based clustering of multi-tissue gene expression data. Bioinformatics, 36(6):1807-1813, 2020. doi: 10.1093/bioinformatics/btz805.
+Esposito, A. and Danzig, T. Hybrid classical-quantum simulation of MaxCut using QAOA-in-QAOA. In International Parallel and Distributed Processing Symposium Workshops, 2024. doi: 10.1109/IPDPSW63119.2024.00180.
+Farhi, E. and Harrow, A. W. Quantum supremacy through the quantum approximate optimization algorithm, 2019. URL https://arxiv.org/abs/1602.07674.
+Farhi, E., Goldstone, J., Gutmann, S., Lapan, J., Lundgren, A., and Preda, D. A quantum adiabatic evolution algorithm applied to random instances of an NP-complete problem. Science, 292(5516):472-475, 2001. doi: 10.1126/science.1057726.
+Farhi, E., Goldstone, J., and Gutmann, S. A quantum approximate optimization algorithm, 2014. URL https://arxiv.org/abs/1411.4028.
+Fernández-Pendás, M., Combarro, E. F., Vallecorsa, S., Ranilla, J., and Rúa, I. F. A study of the performance of classical minimizers in the quantum approximate optimization algorithm. Journal of Computational and Applied Mathematics, 404:113388, 2022. doi: 10.1016/j.cam.2021.113388.
+Fuchs, F. G., Kolden, H. Ø., Aase, N. H., and Sartor, G. Efficient encoding of the weighted max k-cut on a quantum computer using QAOA. SN Computer Science, 2(2):89, 2021. doi: 10.1007/s42979-020-00437-z.
+Hadfield, S., Wang, Z., O'Gorman, B., Rieffel, E. G., Venturelli, D., and Biswas, R. From the quantum approximate optimization algorithm to a quantum alternating operator ansatz. Algorithms, 12(2), 2019. doi: 10.3390/a12020034.
+Holtzman, A., Buys, J., Du, L., Forbes, M., and Choi, Y. The curious case of neural text degeneration. In ICLR, 2020. URL https://openreview.net/forum?id=rygGQyrFvH.
+Keuper, M., Levinkov, E., Bonneel, N., Lavoué, G., Brox, T., and Andres, B. Efficient decomposition of image and mesh graphs by lifted multicuts. In ICCV, 2015. doi: 10.1109/ICCV.2015.204.
+
+Morales, M. E. S., Biamonte, J. D., and Zimborás, Z. On the universality of the quantum approximate optimization algorithm. Quantum Information Processing, 19(9):291, 2020. doi: 10.1007/s11128-020-02748-9.
+Nielsen, M. A. and Chuang, I. L. Quantum Computation and Quantum Information: 10th Anniversary Edition. Cambridge University Press, 2010. doi: 10.1017/CBO9780511976667.
+Pellow-Jarman, A., Sinayskiy, I., Pillay, A., and Petrucione, F. A comparison of various classical optimizers for a variational quantum linear solver. Quantum Information Processing, 20(6):202, 2021. doi: 10.1007/s11128-021-03140-x.
+Preskill, J. Quantum computing and the entanglement frontier, 2012. URL https://arxiv.org/abs/1203.5813.
+Preskill, J. Quantum computing in the NISQ era and beyond. Quantum, 2:79, 2018. doi: 10.22331/q-2018-08-06-79.
+Saleem, Z. H. Max-independent set and the quantum alternating operator ansatz. International Journal of Quantum Information, 18(04):2050011, 2020. doi: 10.1142/S0219749920500112.
+Swamy, C. Correlation clustering: maximizing agreements via semidefinite programming. In SODA, 2004.
+Tabi, Z., El-Safty, K. H., Kallus, Z., Haga, P., Kozsik, T., Glos, A., and Zimboras, Z. Quantum Optimization for the Graph Coloring Problem with Space-Efficient Embedding. In International Conference on Quantum Computing and Engineering, 2020. doi: 10.1109/QCE49297.2020.00018.
+Tan, J. A note on the inapproximability of correlation clustering. Information Processing Letters, 108(5):331-335, 2008. doi: 10.1016/j.ipl.2008.06.004.
+Tomesh, T., Saleem, Z. H., and Suchara, M. Quantum local search with the quantum alternating operator ansatz. Quantum, 6:781, 2022. doi: 10.22331/q-2022-08-22-781.
+Weggemans, J. Solving correlation clustering with the quantum approximate optimisation algorithm, 2020. URL http://essay.utwente.nl/85484/.
+Weggemans, J. R., Urech, A., Rausch, A., Spreeuw, R., Boucherie, R., Schreck, F., Schoutens, K., Minář, J., and Speelman, F. Solving correlation clustering with QAOA and a Rydberg qudit system: a full-stack approach. Quantum, 6:687, 2022. doi: 10.22331/q-2022-04-13-687.
+
+Wurtz, J. and Love, P. MaxCut quantum approximate optimization algorithm performance guarantees for $p > 1$ . Phys. Rev. A, 103:042612, 2021. doi: 10.1103/PhysRevA.103.042612.
+Yarkony, J., Ihler, A., and Fowlkes, C. C. Fast planar correlation clustering for image segmentation. In ECCV, 2012. doi: 10.1007/978-3-642-33783-3_41.
+
+# A. Quantum Gates
+
+Pauli-X: $X := \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \quad X|0\rangle = |1\rangle$
+
+Rotational X: $R X(\theta):= e^{-i\theta X} = \cos (\theta)I - i\sin (\theta)X$
+
+Pauli-Z: $Z := \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$ $Z|1\rangle = -|1\rangle$
+
+Rotational Z: $RZ(\theta)\coloneqq e^{-i\theta Z} = \cos (\theta)I - i\sin (\theta)Z$
+
+Hadamard: $H := \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \quad H|0\rangle = \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$
+
+Controlled X (CNOT): $CX := \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix}$ with respect to the basis order $|00\rangle , |01\rangle , |10\rangle , |11\rangle$
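The gates above can be written down and checked numerically. A small sketch using the paper's conventions (e.g., $RZ(\theta) = e^{-i\theta Z}$, without the usual factor $1/2$ in the exponent):

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CX = np.array([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 0]], dtype=complex)  # basis order |00>, |01>, |10>, |11>

def RX(theta):
    """Rotational X gate: e^{-i theta X} = cos(theta) I - i sin(theta) X."""
    return np.cos(theta) * I - 1j * np.sin(theta) * X

def RZ(theta):
    """Rotational Z gate: e^{-i theta Z} = cos(theta) I - i sin(theta) Z."""
    return np.cos(theta) * I - 1j * np.sin(theta) * Z

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
```

Note that with this convention the rotation gates are $2\pi$-periodic, matching the periodicity claim made for the phase-separation operator in Section 5.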
+
+# B. Additional Proofs
+
+Proof of Lemma 5.3. We want to show that $H_{C'} = \sum_{u \in V} w_u Z_u$ is in the generated Lie algebra $\mathcal{L}$ of $\{H_M, H_C\}$ , where $H_M = \sum_{u \in V} X_u$ and $H_C = \sum_{uv \in E} c_{uv} Z_u Z_v + \sum_{u \in V} w_u Z_u$ . For notational convenience, we define $H_{C_1} := \sum_{uv \in E} c_{uv} Z_u Z_v$ .
+
+In analogy to Morales et al. (2020), we define a series of commutators in $\mathcal{L}$ , showing finally that $H_{C'} \in \mathcal{L}$ :
+
+$$
+\begin{array}{l} H _ {Y Z} := \frac {1}{2 i} \left[ H _ {C}, H _ {M} \right] = \frac {1}{2 i} \left(\left[ H _ {C _ {1}}, H _ {M} \right] + \left[ H _ {C ^ {\prime}}, H _ {M} \right]\right) \\ = \sum_ {u v \in E} c _ {u v} \left(Z _ {u} Y _ {v} + Y _ {u} Z _ {v}\right) + \sum_ {u \in V} w _ {u} Y _ {u} \in \mathcal {L}, \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \frac {1}{2 i} \left[ H _ {Y Z}, H _ {M} \right] = \frac {1}{2 i} \left(\sum_ {u v \in E} c _ {u v} \left[ Z _ {u} Y _ {v} + Y _ {u} Z _ {v}, \sum_ {u ^ {\prime} \in V} X _ {u ^ {\prime}} \right] + \sum_ {u \in V} w _ {u} \left[ Y _ {u}, \sum_ {u ^ {\prime} \in V} X _ {u ^ {\prime}} \right]\right) \\ = \sum_ {u v \in E} c _ {u v} \left(Y _ {u} Y _ {v} - Z _ {u} Z _ {v} - Z _ {u} Z _ {v} + Y _ {u} Y _ {v}\right) - \sum_ {u \in V} w _ {u} Z _ {u} \\ = 2 \sum_ {u v \in E} c _ {u v} \left(Y _ {u} Y _ {v} - Z _ {u} Z _ {v}\right) - \sum_ {u \in V} w _ {u} Z _ {u} \in \mathcal {L}, \\ \end{array}
+$$
+
+$$
+H _ {(1)} := \frac {1}{2 i} \left[ H _ {Y Z}, H _ {M} \right] + H _ {C} = \sum_ {u v \in E} c _ {u v} \left(2 Y _ {u} Y _ {v} - Z _ {u} Z _ {v}\right) \in \mathcal {L},
+$$
+
+$$
+\begin{array}{l} H _ {(2)} := \frac {1}{2 i} [ H _ {(1)}, H _ {M} ] = \frac {1}{2 i} \sum_ {u v \in E} c _ {u v} \left(2 \left[ Y _ {u} Y _ {v}, \sum_ {u ^ {\prime} \in V} X _ {u ^ {\prime}} \right] - \left[ Z _ {u} Z _ {v}, \sum_ {u ^ {\prime} \in V} X _ {u ^ {\prime}} \right]\right) \\ = \sum_ {u v \in E} c _ {u v} \left(2 \left(- Z _ {u} Y _ {v} - Y _ {u} Z _ {v}\right) - \left(Y _ {u} Z _ {v} + Z _ {u} Y _ {v}\right)\right) \\ = - 3 \sum_ {u v \in E} c _ {u v} \left(Z _ {u} Y _ {v} + Y _ {u} Z _ {v}\right) \in \mathcal {L}, \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \frac {1}{2 i} \left[ H _ {Y Z} + \frac {1}{3} H _ {(2)}, H _ {M} \right] = \frac {1}{2 i} \left(\left[ H _ {Y Z}, H _ {M} \right] + \left[ \frac {1}{3} H _ {(2)}, H _ {M} \right]\right) \\ = - \sum_ {u \in V} w _ {u} Z _ {u} + 2 \sum_ {u v \in E} c _ {u v} \left(Y _ {u} Y _ {v} - Z _ {u} Z _ {v}\right) \\ - \sum_ {u v \in E} c _ {u v} \left(Y _ {u} Y _ {v} - Z _ {u} Z _ {v} - Z _ {v} Z _ {u} + Y _ {v} Y _ {u}\right) \\ = - \sum_ {u \in V} w _ {u} Z _ {u} \\ = - H _ {C ^ {\prime}} \in \mathcal {L}. \\ \end{array}
+$$
+
+It follows directly from $-H_{C'} \in \mathcal{L}$ that $H_{C'} \in \mathcal{L}$ .
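The chain of commutators above can also be verified numerically for a small instance. The following sketch checks it for a two-node graph with a single edge (the identity holds for arbitrary weights $c$, $w_1$, $w_2$; the concrete values here are only for illustration):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def comm(A, B):
    """Scaled commutator (1/2i)[A, B] used throughout the proof."""
    return (A @ B - B @ A) / 2j

# two-node instance: one edge with weight c, node weights w1, w2
c, w1, w2 = 0.7, 1.0, 2.0
HM = np.kron(X, I2) + np.kron(I2, X)             # mixing Hamiltonian H_M
HCp = w1 * np.kron(Z, I2) + w2 * np.kron(I2, Z)  # H_{C'}
HC = c * np.kron(Z, Z) + HCp                     # cost Hamiltonian H_C

HYZ = comm(HC, HM)                               # H_{YZ}
H1 = comm(HYZ, HM) + HC                          # H_{(1)}
H2 = comm(H1, HM)                                # H_{(2)}
result = comm(HYZ + H2 / 3, HM)                  # should equal -H_{C'}
```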
+
+Proof of Lemma 5.4. We want to show that $H_{u'}$ and $X_{u'}$ are in the generated Lie algebra $\mathcal{L}$ of $\{H_M, H_{C'}\}$ for all $u' \in V$ , where $H_M = \sum_{u \in V} X_u$ and $H_{C'} = \sum_{u \in V} w_u Z_u$ with $w_u^2 \neq w_v^2$ for all $u \neq v \in V$ .
+
+Assume we have already shown $X_{u^{\prime}}\in \mathcal{L}$ . It follows directly that $Y_{u^{\prime}} = \frac{1}{2i} [\frac{1}{w_{u^{\prime}}} H_{C^{\prime}},X_{u^{\prime}}]\in \mathcal{L}$ , further $Z_{u^{\prime}} = \frac{1}{2i} [X_{u^{\prime}},Y_{u^{\prime}}]\in \mathcal{L}$ and thus $H_{u^{\prime}} = \frac{1}{\sqrt{2}} (Z_{u^{\prime}} + X_{u^{\prime}})\in \mathcal{L}$ . Consequently, it only remains to show $X_{u^{\prime}}\in \mathcal{L}$ .
+
+Define $n = |V|$ . For proving $X_{u'} \in \mathcal{L}$ , we first show that if $H_{M'} = \sum_{u \in V'} w_u'^2 X_u \in \mathcal{L}$ with $V' \subseteq V$ , and $w_u'^2 \neq w_v'^2$ for all $u \neq v \in V'$ , we can for any $x \in V'$ construct $\sum_{u \in V' \setminus \{x\}} w_u''^2 X_u \in \mathcal{L}$ such that $w_u''^2 \neq w_v''^2$ for all $u \neq v \in V' \setminus \{x\}$ .
+
+In particular, it follows from
+
+$$
+H _ {Y ^ {\prime}} := \frac {1}{2 i} \left[ H _ {C ^ {\prime}}, H _ {M ^ {\prime}} \right] = \sum_ {u \in V ^ {\prime}} w _ {u} w _ {u} ^ {\prime 2} Y _ {u} \in \mathcal {L}
+$$
+
+and
+
+$$
+H _ {X ^ {\prime}} := \frac {1}{2 i} [ H _ {Y ^ {\prime}}, H _ {C ^ {\prime}} ] = \sum_ {u \in V ^ {\prime}} w _ {u} ^ {2} w _ {u} ^ {\prime 2} X _ {u} \in \mathcal {L},
+$$
+
+that it holds for every $x \in V'$ that
+
+$$
+w _ {x} ^ {2} H _ {M ^ {\prime}} - H _ {X ^ {\prime}} = \sum_ {u \in V ^ {\prime} \backslash \{x \}} (w _ {x} ^ {2} - w _ {u} ^ {2}) w _ {u} ^ {\prime 2} X _ {u} \in \mathcal {L}.
+$$
+
+Setting $w_u^{\prime \prime 2} = (w_x^2 - w_u^2) w_u^{\prime 2}$ yields the desired result.
+
+It only remains to show that we can initially construct such an $H_{M^{\prime}}$ . Therefore, consider first
+
+$$
+H _ {Y} := \frac {1}{2 i} \left[ H _ {C ^ {\prime}}, H _ {M} \right] = \sum_ {u \in V} w _ {u} Y _ {u}.
+$$
+
+We then get
+
+$$
+\frac {1}{2 i} \left[ H _ {Y}, H _ {C ^ {\prime}} \right] = \sum_ {u \in V} w _ {u} ^ {2} X _ {u} = H _ {M ^ {\prime}}
+$$
+
+with $V^{\prime} = V$ and $w^{\prime} = w$ .
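As a sanity check, the construction can be replayed numerically for two nodes: nested commutators of $H_M$ and $H_{C'}$ yield $\sum_u w_u^2 X_u$, and the elimination step isolates $X$ on a single node. The weights below are arbitrary values with distinct squares, chosen only for illustration:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def comm(A, B):
    """Scaled commutator (1/2i)[A, B]."""
    return (A @ B - B @ A) / 2j

w1, w2 = 1.0, 2.0                                # w1^2 != w2^2
HM = np.kron(X, I2) + np.kron(I2, X)             # H_M
HCp = w1 * np.kron(Z, I2) + w2 * np.kron(I2, Z)  # H_{C'}

HY = comm(HCp, HM)                               # sum_u w_u Y_u
HMp = comm(HY, HCp)                              # H_{M'} = sum_u w_u^2 X_u
HYp = comm(HCp, HMp)                             # sum_u w_u^3 Y_u
HXp = comm(HYp, HCp)                             # sum_u w_u^4 X_u
# eliminating node 2 (x = 2) leaves a multiple of X on node 1
isolated = w2**2 * HMp - HXp
```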
+
+# C. Additional Tables
+
+Table 2. Mean approximation ratios and runtimes with standard deviations for solving 50 correlation clustering problem instances on complete and Erdős-Rényi graphs with $n = 3,4,5$ nodes using the multi-level QAOA formulation of Weggemans et al. (2022) and SQAOA, ansatz depths of $p = 1,2,3$ and thresholds for nucleus sampling of $t = 1,0.5,0.1$ .
+Complete Graphs - Approximation Ratio
+
+| $n$ | $p$ | QAOA t = 1 | QAOA t = 0.5 | QAOA t = 0.1 | SQAOA t = 1 | SQAOA t = 0.5 | SQAOA t = 0.1 |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 3 | 1 | 0.97 ± 0.04 | 1.00 ± 0.00 | 1.00 ± 0.00 | 0.96 ± 0.05 | 1.00 ± 0.00 | 1.00 ± 0.00 |
+| 3 | 2 | 1.00 ± 0.00 | 1.00 ± 0.00 | 1.00 ± 0.00 | 0.97 ± 0.05 | 1.00 ± 0.00 | 1.00 ± 0.00 |
+| 3 | 3 | – | 1.00 ± 0.00 | 1.00 ± 0.00 | – | 1.00 ± 0.00 | 1.00 ± 0.00 |
+| 4 | 1 | 0.91 ± 0.07 | 0.99 ± 0.02 | 1.00 ± 0.00 | 0.83 ± 0.06 | 0.98 ± 0.02 | 1.00 ± 0.02 |
+| 4 | 2 | 0.98 ± 0.02 | 1.00 ± 0.00 | 1.00 ± 0.00 | 0.83 ± 0.06 | 1.00 ± 0.00 | 1.00 ± 0.00 |
+| 4 | 3 | – | 1.00 ± 0.00 | 1.00 ± 0.00 | – | 0.99 ± 0.03 | 0.99 ± 0.04 |
+| 5 | 1 | 0.90 ± 0.08 | 0.97 ± 0.04 | 0.98 ± 0.04 | 0.86 ± 0.07 | 0.97 ± 0.03 | 0.99 ± 0.03 |
+| 5 | 2 | 0.95 ± 0.00 | 0.98 ± 0.04 | 0.98 ± 0.05 | 0.86 ± 0.08 | 0.98 ± 0.03 | 0.99 ± 0.03 |
+| 5 | 3 | – | 0.98 ± 0.04 | 0.98 ± 0.05 | – | 0.99 ± 0.03 | 0.99 ± 0.03 |
+
+Complete Graphs - Runtime [s]
+
+| $n$ | $p$ | QAOA t = 1 | QAOA t = 0.5 | QAOA t = 0.1 | SQAOA t = 1 | SQAOA t = 0.5 | SQAOA t = 0.1 |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 3 | 1 | 173 ± 33 | 6 ± 1 | 5 ± 0 | 44 ± 9 | 14 ± 5 | 2 ± 1 |
+| 3 | 2 | 315 ± 59 | 16 ± 2 | 15 ± 2 | 100 ± 21 | 15 ± 7 | 4 ± 2 |
+| 3 | 3 | – | 30 ± 4 | 21 ± 5 | – | 27 ± 1 | 7 ± 4 |
+| 4 | 1 | 272 ± 23 | 40 ± 8 | 15 ± 2 | 207 ± 50 | 41 ± 15 | 9 ± 2 |
+| 4 | 2 | 691 ± 58 | 85 ± 17 | 35 ± 7 | 422 ± 77 | 78 ± 27 | 15 ± 4 |
+| 4 | 3 | – | 102 ± 36 | 55 ± 16 | – | 85 ± 22 | 18 ± 4 |
+| 5 | 1 | 488 ± 60 | 716 ± 302 | 109 ± 32 | 1268 ± 450 | 155 ± 78 | 15 ± 4 |
+| 5 | 2 | 1012 ± 101 | 1140 ± 190 | 216 ± 35 | 2438 ± 836 | 265 ± 118 | 26 ± 9 |
+| 5 | 3 | – | 2361 ± 641 | 349 ± 73 | – | 384 ± 175 | 50 ± 15 |
+
+Erdős-Rényi Graphs - Approximation Ratio
+
+| $n$ | $p$ | QAOA t = 1 | QAOA t = 0.5 | QAOA t = 0.1 | SQAOA t = 1 | SQAOA t = 0.5 | SQAOA t = 0.1 |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 3 | 1 | 0.97 ± 0.04 | 1.00 ± 0.00 | 1.00 ± 0.00 | 0.94 ± 0.08 | 1.00 ± 0.00 | 1.00 ± 0.00 |
+| 3 | 2 | 1.00 ± 0.01 | 1.00 ± 0.00 | 1.00 ± 0.00 | 0.94 ± 0.08 | 1.00 ± 0.00 | 1.00 ± 0.00 |
+| 3 | 3 | – | 1.00 ± 0.00 | 1.00 ± 0.00 | – | 1.00 ± 0.00 | 1.00 ± 0.00 |
+| 4 | 1 | 0.92 ± 0.07 | 1.00 ± 0.01 | 1.00 ± 0.00 | 0.88 ± 0.09 | 0.99 ± 0.03 | 1.00 ± 0.02 |
+| 4 | 2 | 0.97 ± 0.03 | 1.00 ± 0.00 | 1.00 ± 0.00 | 0.88 ± 0.09 | 1.00 ± 0.02 | 1.00 ± 0.00 |
+| 4 | 3 | – | 1.00 ± 0.00 | 1.00 ± 0.00 | – | 1.00 ± 0.00 | 1.00 ± 0.00 |
+| 5 | 1 | 0.92 ± 0.07 | 0.98 ± 0.03 | 1.00 ± 0.00 | 0.85 ± 0.10 | 0.98 ± 0.03 | 1.00 ± 0.02 |
+| 5 | 2 | 0.96 ± 0.04 | 1.00 ± 0.01 | 1.00 ± 0.00 | 0.85 ± 0.10 | 0.99 ± 0.03 | 1.00 ± 0.00 |
+| 5 | 3 | – | 1.00 ± 0.01 | 1.00 ± 0.00 | – | 1.00 ± 0.01 | 1.00 ± 0.00 |
+
+Table 2. (Continuation)
+Erdős-Rényi Graphs - Runtime [s]
+
+| $n$ | $p$ | QAOA t = 1 | QAOA t = 0.5 | QAOA t = 0.1 | SQAOA t = 1 | SQAOA t = 0.5 | SQAOA t = 0.1 |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 3 | 1 | 121 ± 40 | 6 ± 1 | 5 ± 0 | 23 ± 12 | 8 ± 4 | 2 ± 1 |
+| 3 | 2 | 272 ± 84 | 12 ± 2 | 15 ± 2 | 53 ± 27 | 15 ± 8 | 4 ± 2 |
+| 3 | 3 | – | 26 ± 3 | 21 ± 5 | – | 23 ± 12 | 7 ± 3 |
+| 4 | 1 | 221 ± 30 | 38 ± 9 | 15 ± 2 | 161 ± 45 | 38 ± 18 | 6 ± 2 |
+| 4 | 2 | 501 ± 98 | 54 ± 9 | 35 ± 7 | 338 ± 102 | 52 ± 25 | 12 ± 4 |
+| 4 | 3 | – | 139 ± 26 | 55 ± 16 | – | 97 ± 37 | 19 ± 7 |
+| 5 | 1 | 316 ± 44 | 809 ± 148 | 109 ± 32 | 799 ± 256 | 127 ± 61 | 10 ± 5 |
+| 5 | 2 | 748 ± 125 | 1285 ± 304 | 216 ± 35 | 2155 ± 789 | 193 ± 94 | 22 ± 7 |
+| 5 | 3 | – | 1860 ± 386 | 349 ± 73 | – | 256 ± 116 | 28 ± 9 |
\ No newline at end of file
diff --git a/asubproblemquantumalternatingoperatoransatzforcorrelationclustering/images.zip b/asubproblemquantumalternatingoperatoransatzforcorrelationclustering/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..428cdb721ecc53544186b9566f567a0fc56951fb
--- /dev/null
+++ b/asubproblemquantumalternatingoperatoransatzforcorrelationclustering/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:792103ed467fa25aa4ae7195b0ea30311ebd9d266afd3113004e7e27c0a2b579
+size 663636
diff --git a/asubproblemquantumalternatingoperatoransatzforcorrelationclustering/layout.json b/asubproblemquantumalternatingoperatoransatzforcorrelationclustering/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..7259ac62d0f314d9887bfec820d971070ea00e60
--- /dev/null
+++ b/asubproblemquantumalternatingoperatoransatzforcorrelationclustering/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5dc480da1e9c69eb625ff96d5c76ac32e8694be88a53824b84dbd88952472247
+size 653044
diff --git a/ataadaptivetaskallocationforefficientresourcemanagementindistributedmachinelearning/83668b0c-fcaa-48e5-a237-c2806e8eb191_content_list.json b/ataadaptivetaskallocationforefficientresourcemanagementindistributedmachinelearning/83668b0c-fcaa-48e5-a237-c2806e8eb191_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..a187f27e2cf74f8d79eadf7b69e753318b5a1bc4
--- /dev/null
+++ b/ataadaptivetaskallocationforefficientresourcemanagementindistributedmachinelearning/83668b0c-fcaa-48e5-a237-c2806e8eb191_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e546a3879637a6dda80c4b8cf1e7a146798365825a11b667f97f12730b6d5535
+size 248608
diff --git a/ataadaptivetaskallocationforefficientresourcemanagementindistributedmachinelearning/83668b0c-fcaa-48e5-a237-c2806e8eb191_model.json b/ataadaptivetaskallocationforefficientresourcemanagementindistributedmachinelearning/83668b0c-fcaa-48e5-a237-c2806e8eb191_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..ebfdc4f1dea7cdd31b18dcb4a9481dbc97564fe1
--- /dev/null
+++ b/ataadaptivetaskallocationforefficientresourcemanagementindistributedmachinelearning/83668b0c-fcaa-48e5-a237-c2806e8eb191_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5003f161378947b1aef226fadfa2c1c3bcfe902701fb796578ad85d7b34a907f
+size 292205
diff --git a/ataadaptivetaskallocationforefficientresourcemanagementindistributedmachinelearning/83668b0c-fcaa-48e5-a237-c2806e8eb191_origin.pdf b/ataadaptivetaskallocationforefficientresourcemanagementindistributedmachinelearning/83668b0c-fcaa-48e5-a237-c2806e8eb191_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..15ee9caf64755230d51a2ef1c88b7c2cc9fd971f
--- /dev/null
+++ b/ataadaptivetaskallocationforefficientresourcemanagementindistributedmachinelearning/83668b0c-fcaa-48e5-a237-c2806e8eb191_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:82ff647b4bf4e289e134d3581fcaf515cce37d38a963cb162f3e244d75f9af06
+size 5184650
diff --git a/ataadaptivetaskallocationforefficientresourcemanagementindistributedmachinelearning/full.md b/ataadaptivetaskallocationforefficientresourcemanagementindistributedmachinelearning/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..cda5f666cdb6078729fd89de178df1ea2f038f70
--- /dev/null
+++ b/ataadaptivetaskallocationforefficientresourcemanagementindistributedmachinelearning/full.md
@@ -0,0 +1,1596 @@
+# ATA: Adaptive Task Allocation for Efficient Resource Management in Distributed Machine Learning
+
+Artavazd Maranjyan1 El Mehdi Saad1 Peter Richtárik1 Francesco Orabona1
+
+# Abstract
+
+Asynchronous methods are fundamental for parallelizing computations in distributed machine learning. They aim to accelerate training by fully utilizing all available resources. However, their greedy approach can lead to inefficiencies, using more computation than required, especially when computation times vary across devices. If the computation times were known in advance, training could be fast and resource-efficient by assigning more tasks to faster workers. The challenge lies in achieving this optimal allocation without prior knowledge of the computation time distributions. In this paper, we propose ATA (Adaptive Task Allocation), a method that adapts to heterogeneous and random distributions of worker computation times. Through rigorous theoretical analysis, we show that ATA identifies the optimal task allocation and performs comparably to methods with prior knowledge of computation times. Experimental results further demonstrate that ATA is resource-efficient, significantly reducing costs compared to the greedy approach, which can be arbitrarily expensive depending on the number of workers.
+
+# 1. Introduction
+
+In this work, we address a very general yet fundamental and important problem arising in various contexts and fields. In particular, there are $n$ workers/nodes/devices collaborating to run some iterative algorithm which has the following structure:
+
+$^{1}$ King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia. Correspondence to: Artavazd Maranjyan , El Mehdi Saad , Peter Richtárik , Francesco Orabona .
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+- In order to perform a single iteration of the algorithm, a certain number $(B)$ of tasks needs to be performed.
+
+- Each task can be computed by any worker, and the tasks are not temporally related. That is, they can be computed in any order, in parallel, and so on.
+
+- Whenever a worker is asked to perform a single task, the task will take a certain amount of time, modeled as a nonnegative random variable drawn from an unknown distribution specific to that worker. The stochastic assumption makes sense because in real systems computation times are not fixed and can vary with each iteration (Dean & Barroso, 2013; Chen et al., 2016a; Dutta et al., 2018; Maranjyan et al., 2025a).
+
+- Each worker can only work on a single task at a time. That is, a worker processes all tasks it has to perform sequentially. Different workers work in parallel.
+
+A natural goal in this setup is to make sure all tasks are completed as fast as possible (in expectation), which minimizes the (expected) time it takes for a single iteration of the algorithm to be performed provided that the task completion time is the dominant time factor of the iteration. Provided we are willing to waste resources, there is a simple solution to this problem, a Greedy Task Allocation (GTA) strategy, which follows this principle: Make sure all workers are always busy working on some task, and stop once $B$ tasks have been completed. In GTA, we initially ask all $n$ workers to start working on a task, and as soon as some worker is done with a task, we ask it to start completing another task. This process is repeated until $B$ tasks have been completed.
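GTA is straightforward to simulate. The sketch below tracks both the iteration's completion time and the number of tasks started, which exposes the waste discussed next; the function name and the `sample_time` callback are hypothetical, for illustration only:

```python
import heapq

def greedy_task_allocation(n_workers, B, sample_time):
    """Simulate one GTA iteration: keep every worker busy, stop after B tasks.

    sample_time(worker) draws a random completion time for that worker.
    Returns (completion_time, tasks_started)."""
    # every worker immediately starts one task
    finish = [(sample_time(i), i) for i in range(n_workers)]
    heapq.heapify(finish)
    tasks_started, tasks_done, now = n_workers, 0, 0.0
    while tasks_done < B:
        now, worker = heapq.heappop(finish)   # next task to complete
        tasks_done += 1
        if tasks_done < B:                    # immediately hand out a new task
            heapq.heappush(finish, (now + sample_time(worker), worker))
            tasks_started += 1
    return now, tasks_started
```

For instance, with $n = 1000$ workers and $B = 10$ tasks, this simulation starts $1000 + 9 = 1009$ tasks for only 10 useful results, consistent with the waste of at least $n - B$ tasks discussed below.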
+
+While GTA minimizes the completion time, it can be immensely wasteful in terms of the total worker utilization time needed to collect all $B$ tasks. Indeed, consider the scenario with $n = 1000$ workers and $B = 10$ tasks. In this case, GTA will lead to at least $n - B = 990$ unnecessary tasks being run in each iteration! This is highly undesirable in situations where the workers are utilized across multiple other jobs besides running the iterative algorithm mentioned above.
+
+The goal of our work is to design new task allocation strategies, with rigorous theoretical support, that would attempt
+
+to minimize the expected completion time subject to the constraint that such wastefulness is completely eliminated. That is, we ensure that no more than $B$ tasks are completed in each round.
+
+# 1.1. A Motivating Example: Optimal Parallel SGD
+
+A key inspiration for our work, and the prime example of the general task collection problem described above, relates to recent development in the area of parallel stochastic gradient descent (SGD) methods. Consider the problem of finding an approximate stationary point of the optimization problem
+
+$$
+\min _ {\boldsymbol {x} \in \mathbb {R} ^ {d}} \left\{f (\boldsymbol {x}) := \mathbb {E} _ {\boldsymbol {\xi} \sim \mathcal {D}} \left[ f _ {\boldsymbol {\xi}} (\boldsymbol {x}) \right] \right\},
+$$
+
+where $f_{\pmb{\xi}}:\mathbb{R}^{d}\to \mathbb{R}$ are smooth nonconvex functions, and $f$ is assumed to be bounded from below. We assume that
+
+$$
+\mathbb {E} _ {\boldsymbol {\xi} \sim \mathcal {D}} \left[ \| \nabla f _ {\boldsymbol {\xi}} (\boldsymbol {x}) - \nabla f (\boldsymbol {x}) \| ^ {2} \right] \leq \sigma^ {2}
+$$
+
+for all $\pmb{x} \in \mathbb{R}^d$ .
+
+In a recent breakthrough, Tyurin & Richtárik (2024) developed a parallel SGD method, optimal in terms of a novel notion of complexity called time complexity, for solving the above problem with $n$ parallel workers, assuming that worker $i$ takes $\tau_{i} > 0$ seconds to compute a stochastic gradient of $f$ (this corresponds to a task). Their method, Rennala SGD, corresponds to Minibatch SGD with minibatch size $B$ (which depends on the target accuracy and $\sigma$ only), with the $B$ tasks (stochastic gradients) completed via GTA. While minimax optimal in terms of time complexity, the GTA task allocation strategy employed within Rennala SGD can be wasteful, as explained above.
+
+Recently, Maranjyan et al. (2025b) proposed Ringmaster ASGD, a fully asynchronous SGD method, matching the optimal time complexity of Rennala SGD and achieving optimality for arbitrary compute time patterns associated with the tasks (stochastic gradients), including random, as considered in our setup. However, Ringmaster ASGD also employs a greedy task allocation strategy, leading to wastefulness.
+
+Numerous other parallel/distributed methods involve the implementation of a task allocation strategy, including stochastic proximal point methods (task = evaluation of the stochastic prox operator), higher-order methods (task = evaluation of stochastic Hessian), and beyond. So, by addressing the general task allocation problem, we aim to tame the inherent resource wastefulness of all these methods.
+
+# 1.2. Contributions
+
+In this work, we formalize the task allocation problem as a combinatorial online learning problem with partial feedback and non-linear losses. Then, we introduce ATA, a lower-confidence bound-based algorithm designed to solve
+
+the proposed allocation problem. ATA is agnostic to workers' computation times, and our theoretical analysis demonstrates that the total computation time achieved by our methods remains within a small multiplicative factor of the optimal computation time (i.e., the one attainable with full knowledge of the workers' arm distributions). Additionally, we present ATA-Empirical, a variant of ATA that leverages a novel data-dependent concentration inequality and achieves better empirical results. Finally, we validate our approach through numerical simulations.
+
+# 2. Related Work
+
+Most of the literature on asynchronous methods focuses on demonstrating advantages over their synchronous counterparts. For the simplest method, SGD, this was only recently established by Tyurin & Richtárik (2024). With this result in place, the community can now shift its focus to reducing the overhead of asynchrony. Our work may be the first step in this direction.
+
+In federated learning (FL) (Konečný et al., 2016; McMahan et al., 2016; Kairouz et al., 2021), several works account for system heterogeneity. The most well-known FL method, FedAvg (McMahan et al., 2017), operates by performing multiple local steps on workers, where each step can be viewed as a task. Some works adjust the number of local steps based on worker computation times (Li et al., 2020; Maranjyan et al., 2022), effectively adapting task assignments to worker speed. However, these methods rely on prior knowledge of these times rather than learning them adaptively, as we do.
+
+We reformulate our problem as an online bandit problem. The literature on bandit algorithms is vast, and we refer the reader to Lattimore & Szepesvári (2020) for an introduction to this subject. Our algorithm is based on the approach of using Lower Confidence Bounds (LCBs) on the true means of the arms. This idea, originally proposed by Auer (2002) for the classical Multi-Armed Bandit (MAB) setting, has since been widely adopted in the stochastic combinatorial bandits literature (Gai et al., 2012; Chen et al., 2013; Combes et al., 2015; Kveton et al., 2015). Using LCBs instead of the empirical estimates of the means allows an optimal trade-off between exploration and exploitation.
+
+The "greedy" approach we employ, which involves selecting the action that minimizes the loss function based on lower confidence bounds instead of the unknown means, is a standard technique in the literature (Chen et al., 2013; Lin et al., 2015). However, note that our larger action space and the discontinuity of our loss function necessitate a more tailored analysis. To the best of our knowledge, this is the first work addressing a non-continuous loss function in a stochastic combinatorial MAB-like framework. To overcome this challenge, we exploit the specific structures of our loss function and action space to control the number of rounds where suboptimal actions are chosen. Additionally, our procedure is computationally efficient.
+
+# 3. Problem Setup
+
+In this section, we formally describe the problem setup.
+
+# 3.1. Task Allocation Protocol
+
+We consider a system of $n$ workers, each responsible for computing gradients. In each round, the allocation algorithm has a budget of $B$ units that must be allocated among the $n$ workers. Each unit allocation results in one gradient computation. We denote by $K$ the total number of rounds, which is assumed to be unknown to the learner. We denote by $X_{i,k}^{(u)}$ the computation time of worker $i \in [n] := \{1,2,\dots,n\}$ in round $k \in [K]$ on its $u$-th gradient. Consequently, the computation time required for worker $i$ to perform its task of computing $a_{i,k}$ gradients in round $k$ is given by
+
+$$
+\sum_ {u = 1} ^ {a _ {i, k}} X _ {i, k} ^ {(u)}
+$$
+
+if $a_{i,k}\geq 1$ , and $0$ otherwise.
+
+In each round $k$ , the allocation algorithm must choose an allocation vector $\pmb{a}_k \in \mathbb{N}^n$ such that $\| \pmb{a}_k \|_1 = B$ , based on the information available prior to round $k$ . The feedback consists of the $a_{i,k}$ observed times for all the chosen workers. We will denote the action set by
+
+$$
+\mathcal {A} := \{\boldsymbol {a} \in \mathbb {N} ^ {n}: \| \boldsymbol {a} \| _ {1} = B \},
+$$
+
+where $\mathbb{N}$ is the set of natural numbers, including 0.
+
+The objective of the allocation strategy in each round $k$ is to minimize the computation time. Formally, it aims to minimize $C: \mathcal{A} \to \mathbb{R}_+$ , the computation time that the optimizer waits to receive $B$ gradients using an allocation vector $\pmb{a}_k \in \mathcal{A}$ , defined as
+
+$$
+C \left(\boldsymbol{a}_{k}\right) := \max_{i \in \operatorname{supp}\left(\boldsymbol{a}_{k}\right)} \sum_{u = 1}^{a_{i, k}} X_{i, k}^{(u)}. \tag{1}
+$$
+
+# 3.2. Modeling Assumptions
+
+We assume that the computation times of each worker $i \in [n]$ are i.i.d. draws of a random variable $X_{i}$ following a probability distribution $\nu_{i}$ . We denote by $\pmb{\mu} = (\mu_1, \dots, \mu_n)$ the vector of unknown means. Hence, the random variables $(X_{i,k}^{(u)})$ with $u \in \{1, \dots, a_{i,k}\}$ are $a_{i,k}$ i.i.d. samples drawn from $\nu_{i}$ .
+
+We assume that the computation times are sub-exponential random variables. To quantify this assumption, we recall the definition of the sub-exponential norm, also known as the Orlicz norm, for a centered real-valued random variable $X$ :
+
+$$
+\left\| X \right\| _ {\psi_ {1}} := \inf \left\{C > 0: \mathbb {E} [ \exp (| X | / C) ] \leq 2 \right\}. \tag {2}
+$$
+
+Hence, formally we make the following assumption.
+
+Assumption 3.1. Let $\alpha \geq 0$ . For all $i \in [n]$ , $X_{i}$ is a positive random variable and $\| X_{i} - \mu_{i}\|_{\psi_{1}} \leq \alpha$ .
+
+In the remainder of this paper we denote $\alpha_{i} := \|X_{i}\|_{\psi_{1}}$ for each $i \in [n]$ , and let $\alpha := \max_{i \in [n]} \alpha_{i}$ .
+
+The considered class encompasses several other well-known classes of distributions in the literature, such as support-bounded and sub-Gaussian distributions. Moreover, it includes exponential distributions, which are frequently used in the literature to model waiting or computation times in queuing theory and resource allocation in large distributed systems (Gelenbe & Mitrani, 2010; Gross et al., 2011; Hadjis et al., 2016; Mitliagkas et al., 2016; Dutta et al., 2018; Nguyen et al., 2022).
+
+# 3.3. Objective of the Allocation Algorithm
+
+The main objective of this work is to develop an online allocation strategy with small expected total computation time, defined as
+
+$$
+\mathcal {C} _ {K} := \sum_ {k = 1} ^ {K} \mathbb {E} [ C (\boldsymbol {a} _ {k}) ].
+$$
+
+If the distributions of the arms were known in advance, the optimal allocation $\pmb{a}^{*} \in \mathcal{A}$ would be selected to minimize the expected computation time per round, $\mathbb{E}[C(\cdot)]$ , and this allocation would be used consistently over $K$ rounds, leading to the optimal total computation time
+
+$$
+\mathcal {C} _ {K} ^ {*} = K \mathbb {E} [ C (\boldsymbol {a} ^ {*}) ].
+$$
+
+Our goal is to design a strategy that ensures the computation time $\mathcal{C}_K$ remains within a small multiplicative factor of the optimal time $\mathcal{C}_K^*$ , plus an additional negligible term. Specifically, we aim to satisfy
+
+$$
+\mathcal {C} _ {K} \leq \gamma \cdot \mathcal {C} _ {K} ^ {*} + \mathcal {E} _ {K}, \tag {3}
+$$
+
+where $\gamma \geq 1$ is a constant close to 1, and $\mathcal{E}_K$ is a negligible term compared to $\mathcal{C}_K^*$ when $K\to \infty$ . This would assure us that in the limit we are a constant multiplicative factor away from the performance of the optimal allocation strategy that has full knowledge of the distributions of the computational times of the workers.
+
+Finding a strategy solving the objective in (3) presents several technical challenges. First, the action space $\mathcal{A}$ is discrete, and the nonlinearity of the computation time function $C(\cdot)$ prevents reducing our objective to a convex problem. Second, the size of $\mathcal{A}$ is combinatorial, growing on the order of $\binom{n+B-1}{B}$ , which necessitates exploiting the inherent problem structure to develop efficient strategies. Third, because the workers' computation times are stochastic, any solution must account for uncertainty. Finally, the online setting forces the learner to balance exploration and exploitation under a limited allocation budget of $B$ units per round and partial feedback: only the computation times of workers who receive allocations are observed. This last point naturally suggests adopting a MAB approach.
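To make the combinatorial growth concrete, the cardinality of $\mathcal{A}$ is the stars-and-bars count, which can be checked directly (the values of $n$ and $B$ below are arbitrary):

```python
from math import comb

def action_space_size(n, B):
    """|A| = C(n+B-1, B): compositions of B into n nonnegative parts."""
    return comb(n + B - 1, B)

print(action_space_size(2, 3))    # tiny instance: (0,3), (1,2), (2,1), (3,0)
print(action_space_size(10, 23))  # already tens of millions for modest n, B
print(action_space_size(100, 23)) # far too large to enumerate every round
```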
+
+In the next section, we show how to reduce this problem to a MAB problem and how to efficiently solve it.
+
+# 4. Adaptive Task Allocation
+
+Here, we first show how to reduce the problem in (3) to a non-linear stochastic Multi-Armed Bandit (MAB) problem. Then, we propose an efficient algorithm for this formulation.
+
+# 4.1. Reduction to Multi-Armed Bandit and Proxy Loss
+
+The stochastic MAB problem is a fundamental framework in sequential decision-making under uncertainty. It involves a scenario where an agent must choose among a set of arms, each associated with an unknown reward distribution. The agent aims to maximize cumulative reward (or equivalently minimize the cumulative loss) over time by balancing exploration (gathering information about the reward distributions) and exploitation (leveraging the best-known arm). The challenge lies in the trade-off between exploring suboptimal arms to refine reward estimates and exploiting the arm with the highest observed reward, given the stochastic nature of the outcomes. Using the terminology from bandit literature, here we will refer to each worker as an "arm."
+
+However, differently from the standard MAB problem, we have a harder problem because $\mathbb{E}\left[C(\pmb{a}_k)\right]$ depends on the joint distribution of all the arms in the support of $\pmb{a}_k$ , rather than on their expectations only. This dependency potentially renders the task of relying on estimates of $\mathbb{E}\left[C(\pmb{a})\right]$ for $\pmb{a} \in \mathcal{A}$ computationally challenging due to the combinatorial nature of the set $\mathcal{A}$ .
+
+To solve this issue, our first idea is to introduce a proxy loss $\ell : \mathcal{A} \times \mathbb{R}_{>0}^n \to \mathbb{R}_{\geq 0}$ , defined as
+
+$$
+\ell (\boldsymbol {a}, \boldsymbol {\mu}): = \max _ {i \in [ n ]} a _ {i} \mu_ {i}. \tag {4}
+$$
+
+Due to the convexity of $C(\cdot)$ , the introduced proxy loss underestimates the expected computation time. However, in Appendix D.3 we prove that this quantity also upper bounds the expected computation time up to a constant that depends on the distribution of the arms. In particular, for any $\pmb{a} \in \mathcal{A}$ , we show that
+
+$$
+\ell (\boldsymbol {a}, \boldsymbol {\mu}) \leq \mathbb {E} [ C (\boldsymbol {a}) ] \leq (1 + 4 \eta \ln (B)) \ell (\boldsymbol {a}, \boldsymbol {\mu}), \tag {5}
+$$
+
+where $\eta$ is defined as
+
+$$
+\eta := \max _ {i \in [ n ]} \frac {\alpha_ {i}}{\mu_ {i}}. \tag {6}
+$$
+
+In words, $\eta$ provides an upper bound on the ratio between the standard deviation and the mean of the arms. Note that in the literature, it is common to consider exponential, Erlang, or Gamma distributions, where the ratio $\eta$ is typically bounded by 1.
+
+The bound above will allow us to derive guarantees on the total computation time of an allocation strategy based on its guarantees for the proxy loss $\ell(\cdot)$ , up to a factor of the order $1 + 4\eta \ln(B)$ . We remark that in the special case where the arms' distributions are deterministic ( $\eta = 0$ ) or the query budget is unitary ( $B = 1$ ), the two targets $\mathbb{E}[C(\boldsymbol{a})]$ and $\ell$ exactly coincide.
+
+# 4.2. Comparison with the Combinatorial Bandits Setting
+
+Our setting is closely related to the Combinatorial Multi-Armed Bandits (CMAB) framework (Cesa-Bianchi & Lugosi, 2012), particularly due to the combinatorial nature of the action space and the semi-bandit feedback, where the learner observes outcomes from all chosen arms. However, our formulation differs in two significant ways. First, while CMAB typically involves selecting a subset of $n$ arms, resulting in an action space with a maximum size of $2^n$ , our action space $\mathcal{A}$ has a cardinality of $\binom{n+B-1}{B}$ . The ratio between these two can be extremely large, potentially growing exponentially with $n$ . Second, although most works in this domain assume a linear loss function in the arms' means, some notable exceptions address non-linear reward functions (Chen et al., 2013; Lin et al., 2015; Chen et al., 2016b; Wang & Chen, 2018). However, these approaches generally rely on assumptions such as smoothness, Lipschitz continuity, or higher-order differentiability of the reward function. In contrast, our loss function $\ell(\cdot, \mu)$ is not continuous with respect to the arms' means. Finally, motivated by the practical requirements of our setting, we place a strong emphasis on computational efficiency that rules out most of the approaches based on CMAB.
+
+# 4.3. Adaptive Task Allocation Algorithm
+
+Now, we introduce our Adaptive Task Allocation algorithm (ATA). ATA does not require prior knowledge of the horizon $K$ and relies only on an upper bound $\alpha \geq \max_{i\in [n]}\| X_i - \mu_i\|_{\psi_1}$ on the Orlicz norms of the arm distributions. Recall that $\| X_i - \mu_i\|_{\psi_1} \leq 2\| X_i\|_{\psi_1}$ , so an upper bound on $\| X_i\|_{\psi_1}$ also provides one for $\| X_i - \mu_i\|_{\psi_1}$ . The core idea of the procedure is to allocate the workers based on lower confidence bound estimates of the arm means $(\mu_i)_{i\in [n]}$ , in order to balance exploration and exploitation.
+
+# Algorithm 1 ATA (Adaptive Task Allocation)
+
+1: Input: allocation budget $B$ , $\alpha > 0$
+2: Initialize: empirical means $\hat{\mu}_{i,1} = 0$ , usage counts $K_{i,1} = 0$ , and usage times $T_{i,1} = 0$ , for all $i \in [n]$
+3: for $k = 1, \dots, K$ do
+4: Compute LCBs $(s_{i,k})$ for all $i\in [n]$ using (7)
+5: Find allocation: $\pmb{a}_k \in \arg \min_{\pmb{a} \in \mathcal{A}} \ell(\pmb{a}, \pmb{s}_k)$
+6: Allocate $a_{i,k}$ tasks to each worker $i \in [n]$
+7: Update optimization parameters
+8: for $i$ such that $a_{i,k} \neq 0$ do
+9: $K_{i,k + 1} = K_{i,k} + a_{i,k}$
+10: $T_{i,k + 1} = T_{i,k} + \sum_{j = 1}^{a_{i,k}}X_{i,k}^{(j)}$
+11: $\hat{\mu}_{i,k + 1} = T_{i,k + 1} / K_{i,k + 1}$
+12: end for
+13: end for
+
+For each arm $i \in [n]$ and round $k \in [K]$ , let $K_{i,k}$ represent the number of samples collected from the distribution of arm $i$ up to round $k$ . At each round $k$ , we compute an empirical mean, denoted by $\hat{\mu}_{i,k}$ , using the $K_{i,k}$ samples obtained so far. Based on these empirical means, we define the lower confidence bounds $s_{i,k}$ as
+
+$$
+s _ {i, k} = \left(\hat {\mu} _ {i, k} - \operatorname {c o n f} (i, k)\right) _ {+}, \tag {7}
+$$
+
+where $(x)_{+} = \max \{x,0\}$ and $\operatorname {conf}(\cdot ,\cdot)$ is defined as
+
+$$
+\operatorname{conf}(i, k) = \left\{ \begin{array}{ll} 2 \alpha \left(\sqrt{\frac{\ln(2 k^{2})}{K_{i,k}}} + \frac{\ln(2 k^{2})}{K_{i,k}}\right), & K_{i,k} \geq 1, \\ + \infty, & K_{i,k} = 0. \end{array} \right.
+$$
+
+The term $\mathrm{conf}(\cdot ,\cdot)$ is derived from a known concentration inequality for sub-exponential variables with an Orlicz norm bounded by $\alpha$ (Lemma E.1 in the Appendix).
+
+Given the confidence bounds $s_k \coloneqq (s_{1,k},\ldots ,s_{n,k})$ , the learner selects the action $\pmb{a}_k \in \mathcal{A}$ at round $k$ that minimizes the loss $\ell (\cdot ,s_k)$ defined in (4). While this optimization problem is nonconvex, we show in Appendix C that it can be solved using a recursive routine with computational complexity $\mathcal{O}(n\ln (\min \{B,n\}) + \min \{B,n\}^2)$ .
+
+Remark 4.1. Line 7 of the algorithm acts as a placeholder for the optimization method, where the optimization parameters are updated using the quantities computed by the workers (e.g., gradients in the case of SGD). In this view, the allocation algorithm is independent of the specifics of the chosen optimization algorithm. Refer to Appendix B for further details.
+
+As a last step, the feedback obtained after applying the allocation $\pmb{a}_k$ is used to update the lower confidence bounds. The complete pseudocode for ATA is provided in Algorithm 1.
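Putting the pieces together, here is a minimal end-to-end sketch of Algorithm 1. It uses a brute-force argmin in place of the efficient recursive routine of Appendix C, and hypothetical deterministic samplers standing in for real workers:

```python
from math import log, sqrt, inf
from itertools import combinations

def ata(samplers, B, alpha, K):
    """Sketch of ATA (Algorithm 1): LCB-based allocation of B units per round."""
    n = len(samplers)
    counts = [0] * n    # K_{i,k}: samples collected from arm i
    totals = [0.0] * n  # T_{i,k}: total observed time of arm i
    history = []
    for k in range(1, K + 1):
        # Lower confidence bounds s_{i,k} from (7); conf = +inf when unexplored,
        # so unexplored arms have LCB 0 and get allocated (forced exploration).
        s = []
        for i in range(n):
            if counts[i] == 0:
                s.append(0.0)
            else:
                r = log(2 * k * k) / counts[i]
                s.append(max(totals[i] / counts[i] - 2 * alpha * (sqrt(r) + r), 0.0))
        # argmin over A of the proxy loss (brute force, illustration only).
        best, best_loss = None, inf
        for bars in combinations(range(n + B - 1), n - 1):
            prev, a = -1, []
            for b in bars:
                a.append(b - prev - 1)
                prev = b
            a.append(n + B - 2 - prev)
            loss = max(ai * si for ai, si in zip(a, s))
            if loss < best_loss:
                best, best_loss = tuple(a), loss
        # Play the allocation and record the observed computation times.
        for i, ai in enumerate(best):
            for _ in range(ai):
                counts[i] += 1
                totals[i] += samplers[i]()
        history.append(best)
    return history

# Deterministic toy run: worker 0 is 3x faster, so ATA should settle on (2, 0).
hist = ata([lambda: 1.0, lambda: 3.0], B=2, alpha=0.1, K=30)
print(hist[0], hist[-1])
```

After a brief exploration of both workers, the allocation locks onto the faster one, mirroring the behavior predicted by the analysis.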
+
+# 4.4. Upper-Bound on the Total Computation Time
+
+We provide guarantees for ATA in the form of an upper bound on the expected total computation time required to perform $K$ iterations of the optimization procedure. Recall that the proxy loss $\ell(\cdot, \mu)$ and the expected computation time are related through (5). This relationship and Theorem 6.1 allow us to derive guarantees on the expected total computation time, denoted by
+
+$$
+\mathcal {C} _ {K} := \sum_ {k = 1} ^ {K} \mathbb {E} \left[ C \left(\boldsymbol {a} _ {k}\right) \right].
+$$
+
+We define the optimal allocation for minimizing the computation time as
+
+$$
+\boldsymbol{a}^{*}\in \operatorname *{arg min}_{\boldsymbol {a}\in \mathcal{A}}\mathbb{E}\left[C(\boldsymbol {a})\right].
+$$
+
+Consequently, the optimal expected total computation time in this framework is given by
+
+$$
+\mathcal {C} _ {K} ^ {*} := K \mathbb {E} \left[ C \left(\boldsymbol {a} ^ {*}\right) \right].
+$$
+
+Theorem 4.2 (Proof in Appendix D.3). Suppose Assumption 3.1 holds and let $\eta \coloneqq \max_{i\in [n]}\alpha_i / \mu_i$ . Then, the total expected computation time after $K$ rounds, using the allocation prescribed by ATA with inputs $(B,\alpha)$ satisfies
+
+$$
+\mathcal {C} _ {K} \leq (1 + 4 \eta \ln (B)) \mathcal {C} _ {K} ^ {*} + \mathcal {O} (\ln K).
+$$
+
+Remark 4.3. The $\mathcal{O}(\cdot)$ term hides an instance-dependent factor. We give its full specifics in the regret upper bound of Theorem 6.1.
+
+The bound in Theorem 4.2 shows that the total expected computation time of ATA remains within a multiplicative factor of $1 + 4\eta \ln (B)$ of the optimal computation time $\mathcal{C}_K^*$ , with an additional remainder term that scales logarithmically with $K$ . Since $\mathcal{C}_K = \Omega (K)$ , this additive term is negligible compared to $\mathcal{C}_K^*$ . In practical scenarios, where computation time follows common distributions such as exponential or Gamma, the factor $\eta$ is typically of order 1, and $\ln (B)$ remains relatively small for the batch sizes commonly used in optimization algorithms like SGD.
+
+The reader might wonder if the more ambitious goal of deriving bounds with a multiplicative factor of exactly 1 is achievable. However, achieving this goal would require significantly more precise estimates of the expected computation time $\mathbb{E}[C(\pmb{a})]$ for all $\pmb{a} \in \mathcal{A}$ . Since $\mathbb{E}[C(\pmb{a})]$ depends on the joint distribution of all workers in the support of $\pmb{a}$ , obtaining such precise estimates would come at the cost of computational efficiency in the allocation strategy.
+
+We note that it is unsurprising that $\eta$ appears in the upper bound of Theorem 4.2, since having a heavier-tailed distribution increases the gap between $\ell(\pmb{a}, \pmb{\mu})$ and $\mathbb{E}[C(\pmb{a})]$ through the convexity of $C(\cdot)$ . Instead, the factor $\ln(B)$ arises because $C(\cdot)$ is expressed as the maximum of up to $B$ random variables. Moreover, in the edge cases where $\eta = 0$ (deterministic case) or $B = 1$ (linear cost function), we guarantee that the expected computation time is at most an additive factor away from the optimal one.
+
+# 5. Empirical Adaptive Task Allocation
+
+The ATA procedure is based on a lower confidence bound approach that relies on concentration inequalities. These bounds play a key role in performance, as sharper concentration bounds lead to more accurate estimates and reduce exploration of suboptimal options. Since workers' computation times follow sub-exponential distributions, their concentration behavior is determined by the Orlicz norm of the corresponding variables. In ATA, the only prior knowledge available is an upper bound on the largest Orlicz norm among all arms. When the Orlicz norms of the arms' distributions vary significantly, this uniform bound may result in loose confidence intervals and inefficient exploration.
+
+To address this issue, we introduce ATA-Empirical, which better adapts to the distribution of each arm, particularly its Orlicz norm. This adaptation is achieved through a novel data-dependent concentration inequality for sub-exponential variables. Unlike ATA, which depends on the maximum Orlicz norm, ATA-Empirical accounts for the individual Orlicz norms of all arms, denoted by $(\alpha_{i})_{i\in [n]}$ . This improvement is reflected in the upper bounds on regret presented in Section 6. In practice, this leads to improved performance in at least some settings, as shown in our simulations in Section 7. However, this increased adaptivity comes with a trade-off, since ATA-Empirical requires an upper bound on the quantity $\eta = \max_i\alpha_i / \mu_i$ , rather than a bound on the largest Orlicz norm. That said, for many distributions of interest, the ratios $\alpha_{i} / \mu_{i}$ across different arms tend to be of the same order, whereas their Orlicz norms can vary significantly.
+
+The ATA-Empirical procedure differs from ATA only in the lower confidence bounds it uses. These bounds are derived from the novel concentration inequality in Lemma 6.2 and are defined for arm $i \in [n]$ at round $k \in [K]$ as
+
+$$
+\hat {s} _ {i, k} = \hat {\mu} _ {i, k} \left[ 1 - 2 \eta \left(\sqrt {\frac {\ln \left(2 k ^ {2}\right)}{K _ {i , k}}} + \frac {\ln \left(2 k ^ {2}\right)}{K _ {i , k}}\right) \right] _ {+}, \tag {8}
+$$
+
+where $\eta = \max_{i\in [n]}\alpha_i / \mu_i$ .
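A transcription of (8), contrasted with the uniform-$\alpha$ bound (7) on hypothetical numbers: a fast arm with $\hat{\mu} = 1$, while some slow arm forces a large global $\alpha$:

```python
from math import log, sqrt

def lcb_empirical(mu_hat, eta, k, K_ik):
    """Data-dependent LCB from (8): the width scales with mu_hat itself."""
    if K_ik == 0:
        return 0.0
    r = log(2 * k * k) / K_ik
    return mu_hat * max(1.0 - 2 * eta * (sqrt(r) + r), 0.0)

mu_hat, k, K_ik = 1.0, 100, 500
alpha_max, eta = 10.0, 1.0   # a slow arm inflates alpha_max; eta stays O(1)
r = log(2 * k * k) / K_ik
ata_lcb = max(mu_hat - 2 * alpha_max * (sqrt(r) + r), 0.0)
print(ata_lcb)  # clipped to 0: the uniform-alpha bound is uninformative here
print(round(lcb_empirical(mu_hat, eta, k, K_ik), 3))
```

On this fast arm the uniform-$\alpha$ interval is vacuous while the multiplicative bound is already tight, which is the adaptivity gain described above.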
+
+The expected total computation time $\mathcal{C}_K$ of ATA-Empirical satisfies the same guarantee presented in Theorem 4.2, but with an improved constant in the additive logarithmic term. The precise expressions of these factors are provided in the next section, and they show that the guarantees of ATA-Empirical adapt to the Orlicz norms $\| X_i\|_{\psi_1}$ of each arm, while the guarantees of ATA depend on the maximum Orlicz norm $\max_i\| X_i\|_{\psi_1}$ .
+
+# 6. Theoretical Results
+
+In this section, we sketch the derivation of Theorem 4.2 for ATA and ATA-Empirical, through a regret analysis on the proxy losses. We define the expected cumulative regret of the proxy loss $\ell(\cdot, \mu)$ after $K$ rounds as
+
+$$
+\mathcal {R} _ {K} := \sum_ {k = 1} ^ {K} \mathbb {E} [ \ell (\boldsymbol {a} _ {k}, \boldsymbol {\mu}) ] - K \cdot \ell (\bar {\boldsymbol {a}}, \boldsymbol {\mu}), \tag {9}
+$$
+
+where $\bar{\pmb{a}}\in \arg \min_{\pmb {a}\in \mathcal{A}}\ell (\pmb {a},\pmb {\mu})$ represents the optimal allocation over the workers. If multiple optimal actions exist, we consider the one returned by the optimization sub-routine used in ATA (line 5 of Algorithm 2).
+
+We derive upper bounds on the expected cumulative regret $\mathcal{R}_K$ . Based on these bounds, we provide the guarantees on the expected total computation time required to complete $K$ iterations of the optimization process.
+
+# 6.1. Guarantees for ATA
+
+For each worker $i \in [n]$ , recall that $\bar{a}_i$ denotes the prescribed allocation of the optimal action $\bar{\pmb{a}}$ . Define $k_i$ as the smallest integer satisfying
+
+$$
+\left(\bar {a} _ {i} + k _ {i}\right) \mu_ {i} > \ell \left(\bar {\boldsymbol {a}}, \boldsymbol {\mu}\right). \tag {10}
+$$
+
+From the definition above, it follows that if the learner plays an action $\pmb{a}_k$ at round $k$ such that $a_{i,k} \geq \bar{a}_i + k_i$ , then $\ell(\pmb{a}_k, \pmb{\mu}) > \ell(\bar{\pmb{a}}, \pmb{\mu})$ . Thus, $k_i$ can be interpreted as the smallest number of additional units allocated to worker $i$ that results in a suboptimal loss. Moreover, for every worker $i \in [n]$ , we have $k_i \in \{1, 2\}$ (see Lemma D.1 in the Appendix).
+
+The next result provides an upper bound on the expected regret of ATA.
+
+Theorem 6.1 (Proof in Appendix D.1). Suppose that Assumption 3.1 holds. Then, the expected regret of ATA with inputs $(B,\alpha)$ satisfies
+
+$$
+\begin{array}{l} \mathcal{R}_{K} \leq 2 n \max_{i \in [n]} \{ B \mu_{i} - \ell(\bar{\boldsymbol{a}}, \boldsymbol{\mu}) \} \\ + c \cdot \sum_{i = 1}^{n} \frac{\alpha^{2} (\bar{a}_{i} + k_{i}) (B \mu_{i} - \ell(\bar{\boldsymbol{a}}, \boldsymbol{\mu}))}{((\bar{a}_{i} + k_{i}) \mu_{i} - \ell(\bar{\boldsymbol{a}}, \boldsymbol{\mu}))^{2}} \cdot \ln K, \end{array}
+$$
+
+where $\alpha \geq \max_{i\in [n]}\| X_i - \mu_i\|_{\psi_1}$ , and $c$ is a numerical constant.
+
+The first term in the regret upper bound is independent of the number of rounds $K$ . The second term, however, grows logarithmically with $K$ , which aligns with the behavior observed in stochastic bandit problems in the literature.
+
+In the case where $B = 1$ , our setting reduces to regret minimization in the standard multi-armed bandit problem. Observe that in this case $\ell(\bar{\mathbf{a}}, \boldsymbol{\mu}) = \min_{i \in [n]} \mu_i$ and $k_i = 1$ for all $i \in [n]$ . Therefore, the guarantees of Theorem 6.1 recover the known optimal bound
+
+$$
+\mathcal {O} \left(\sum_ {i} \ln (K) / \Delta_ {i}\right)
+$$
+
+of the standard MAB setting, where $\Delta_{i} \coloneqq \mu_{i} - \min_{j} \mu_{j}$ .
+
+Proof sketch. In standard and combinatorial MAB problems, regret bounds are typically derived by controlling the number of rounds in which the learner selects suboptimal arms. These bounds are often of the order $\ln(K) / \Delta^2$ , where $\Delta$ denotes the suboptimality gap and quantifies the exploration cost required to distinguish optimal actions from suboptimal ones.
+
+In our setting, the problem is more complex since the learner must not only choose which arms to pull but also determine the allocation of resources across selected arms. With this in mind, we develop the following key arguments leading to the bound in Theorem 6.1.
+
+We define over-allocation for worker $i$ at round $k$ as the event where $a_{i,k} \geq \bar{a}_i + k_i$ . By definition of $k_i$ (see (10)), this implies that $\ell(\pmb{a}_k, \pmb{\mu}) > \ell(\bar{\pmb{a}}, \pmb{\mu})$ . We define a bad round as a round where $\ell(\pmb{a}_k, \pmb{\mu}) > \ell(\bar{\pmb{a}}, \pmb{\mu})$ , and we say that a bad round is triggered by arm $i$ when $a_{i,k}\mu_i = \ell(\pmb{a}_k, \pmb{\mu}) > \ell(\bar{\pmb{a}}, \pmb{\mu})$ . Then, the proof revolves around establishing an upper bound on the total number of bad rounds.
+
+To derive this bound, we consider the number of samples required to verify that the mean computation time of worker $i$ under over-allocation exceeds the optimal waiting time $\ell(\bar{\pmb{a}},\pmb{\mu})$ . Specifically, we need to test whether the mean of the corresponding distribution, which is at least $(\bar{a}_i + k_i)\mu_i$ , surpasses the threshold $\ell(\bar{\pmb{a}},\pmb{\mu})$ . This is equivalent to testing whether
+
+$$
+\left\{\mu_ {i} > \frac {\ell (\bar {\boldsymbol {a}} , \boldsymbol {\mu})}{\bar {a} _ {i} + k _ {i}} \right\}.
+$$
+
+Using the concentration inequality applied in our analysis, the number of samples required for this test is of the order:
+
+$$
+\alpha_ {i} ^ {2} \left(\mu_ {i} - \frac {\ell (\bar {\boldsymbol {a}} , \boldsymbol {\mu})}{\bar {a} _ {i} + k _ {i}}\right) ^ {- 2} = \frac {\alpha_ {i} ^ {2} (\bar {a} _ {i} + k _ {i}) ^ {2}}{((\bar {a} _ {i} + k _ {i}) \mu_ {i} - \ell (\bar {\boldsymbol {a}}, \boldsymbol {\mu})) ^ {2}}. \tag {11}
+$$
+
+During rounds where worker $i$ is over-allocated, the learner collects at least $\bar{a}_i + k_i$ samples from the corresponding distribution. Therefore, the total number of rounds required to accumulate enough samples to stop over-allocating worker $i$ can be upper-bounded by
+
+$$
+\frac {\alpha_ {i} ^ {2} (\bar {a} _ {i} + k _ {i})}{((\bar {a} _ {i} + k _ {i}) \mu_ {i} - \ell (\bar {\boldsymbol {a}}, \boldsymbol {\mu})) ^ {2}}.
+$$
+
+In the regret bound of Theorem 6.1, the term $\alpha^2$ appears instead of $\alpha_i^2$ because the learner's prior knowledge is limited to an upper bound $\alpha \geq \max_i\| X_i - \mu_i\|_{\psi_1}$ on the maximal Orlicz norm of the arm distributions. Finally, considering that the worst-case excess loss incurred when over-allocating worker $i$ is $B\mu_{i} - \ell(\bar{\pmb{a}},\pmb{\mu})$ , we obtain the stated bound.
+
+# 6.2. Guarantees for ATA-Empirical
+
+We present theoretical guarantees for ATA-Empirical by providing an upper bound on the expected cumulative regret (9). As discussed in Section 4, ATA-Empirical leverages lower confidence bounds derived from a novel data-dependent concentration inequality introduced below. The proof of this result is detailed in Appendix E.
+
+Lemma 6.2. Let $X_{1},\ldots ,X_{n}$ be i.i.d. positive random variables with mean $\mu$ such that $\alpha = \| X_1 - \mu \|_{\psi_1} < + \infty$ . Let $\hat{X}_n$ denote the empirical mean. For $\delta \in (0,1)$ , let
+
+$$
+C _ {n, \delta} := 2 \sqrt {\frac {\log (2 / \delta)}{n}} + 2 \frac {\log (2 / \delta)}{n},
+$$
+
+where $\eta = \alpha /\mu$ . Then, with probability at least $1 - \delta$ , we have
+
+$$
+\mu \geq \hat {X} _ {n} \left(1 - \eta C _ {n, \delta}\right) _ {+}.
+$$
+
+Moreover, if $\eta C_{n,\delta} \leq \frac{1}{4}$ , then with probability at least $1 - \delta$ , we have
+
+$$
+\hat {X} _ {n} \left(1 - \eta C _ {n, \delta}\right) _ {+} \leq \mu \leq \hat {X} _ {n} \left(1 + \frac {4}{3} \eta C _ {n, \delta}\right).
+$$
+
+Using the concentration inequality above, we construct the lower confidence bounds $\hat{s}_{i,k}$ as defined in (8). The following theorem provides an upper bound on the regret of ATA-Empirical.
+
+Theorem 6.3 (Proof in Appendix D.2). Suppose that Assumption 3.1 holds. Then, the expected regret of ATA-Empirical with inputs $(B,\eta)$ , satisfies
+
+$$
+\begin{array}{l} \mathcal {R} _ {K} \leq 2 n \max _ {i \in [ n ]} \{B \mu_ {i} - \ell (\bar {\boldsymbol {a}}, \boldsymbol {\mu}) \} \\ + c \eta^ {2} \cdot \sum_ {i = 1} ^ {n} \frac {\mu_ {i} ^ {2} (\bar {a} _ {i} + k _ {i}) (B \mu_ {i} - \ell (\bar {\boldsymbol {a}} , \boldsymbol {\mu}))}{((\bar {a} _ {i} + k _ {i}) \mu_ {i} - \ell (\bar {\boldsymbol {a}} , \boldsymbol {\mu})) ^ {2}} \cdot \ln K, \\ \end{array}
+$$
+
+where $\eta \geq \max_{i\in [n]}\alpha_i / \mu_i$ and $c$ is a numerical constant.
+
+Comparing the bounds for ATA-Empirical and ATA, we observe a key difference. Unlike the bound in Theorem 6.1, which incurs a squared maximal Orlicz norm penalty of $\alpha^2$ in all terms of the upper bound, ATA-Empirical benefits from its adaptive nature, leading to a term-specific factor of $\eta^2\mu_i^2$ . In the case where the arm distributions have ratios $\alpha_{i} / \mu_{i}$ of the same order (such as exponential distributions), the bound of Theorem 6.3 shows that ATA-Empirical adapts to the quantities $\alpha_{i}$ , as in that case $\eta \mu_{i} = \alpha_{i}$ .
+
+# 7. Experiments
+
+In this section, we validate our algorithms by simulating a scenario with $n$ workers, where we solve a simple problem using SGD. In each iteration, we collect $B = 23$ gradients from the workers and perform a gradient descent step.
+
+The objective function $f: \mathbb{R}^d \to \mathbb{R}$ is a convex quadratic defined as
+
+$$
+f (x) = \frac {1}{2} \boldsymbol {x} ^ {\top} \mathbf {A} \boldsymbol {x} - \boldsymbol {b} ^ {\top} \boldsymbol {x},
+$$
+
+where
+
+$$
+\begin{array}{l} \mathbf {A} = \frac {1}{4} \left[ \begin{array}{c c c c} 2 & - 1 & & 0 \\ - 1 & \ddots & \ddots & \\ & \ddots & \ddots & - 1 \\ 0 & & - 1 & 2 \end{array} \right] \in \mathbb {R} ^ {d \times d}, \\ \boldsymbol {b} = \frac {1}{4} \left[ \begin{array}{c} - 1 \\ 0 \\ \vdots \\ 0 \end{array} \right] \in \mathbb {R} ^ {d}. \\ \end{array}
+$$
+
+We denote by $f^{*}$ the minimum value of the function $f$ . Each of the $n$ workers can compute unbiased stochastic gradients $g(x)$ that satisfy
+
+$$
+\mathbb{E}\left[ \| \boldsymbol{g}(\boldsymbol{x}) - \nabla f(\boldsymbol{x}) \|^{2} \right] \leq 0.01^{2}.
+$$
+
+This is achieved by adding Gaussian noise to the gradients of $f$ .
+
+The computation time for worker $i$ is modeled by the distribution
+
+$$
+\nu_{i} = 29\sqrt{i} + \mathrm{Exp}\left(29\sqrt{i}\right),
+$$
+
+for all $i \in [n]$ , where $\mathrm{Exp}(\beta)$ denotes the exponential distribution with scale parameter $\beta$ . The expected value of this distribution is $\mu_i = 2 \cdot 29\sqrt{i}$ . Furthermore, the Orlicz norm satisfies the bound $\alpha_i \leq 2\mu_i$ .
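+
+A sampler for this timing model might look as follows (a sketch; `sample_time` is our name, not from the released code):
+
+```python
+import numpy as np
+
+def sample_time(i, rng):
+    # nu_i = 29*sqrt(i) + Exp(scale 29*sqrt(i)); the exponential part has
+    # mean 29*sqrt(i), so the total mean is mu_i = 2 * 29 * sqrt(i).
+    beta = 29.0 * np.sqrt(i)
+    return beta + rng.exponential(beta)
+```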
+
+We consider three benchmark algorithms. GTA-SGD was originally introduced as Rennala SGD by Tyurin & Richtárik (2024). Additionally, we include OFTA (Optimal Fixed Task Allocation), which assumes oracle knowledge of the mean computation times and uses the optimal allocation $\bar{a}$ in (9) in each iteration, and UTA (Uniform Task Allocation), which distributes $B$ tasks uniformly among the $n$ workers. If $n > B$, then UTA selects $B$ workers at random, each tasked with computing one stochastic gradient. Our algorithms aim to achieve performance close to that of OFTA, without any prior knowledge of the true means.
+
+For ATA we set $\alpha = \alpha_{n} = 4\cdot 29\sqrt{n}$, while for ATA-Empirical we use $\eta = 1$. The results of our experiments are shown in Figure 1. As expected, GTA is the fastest in terms of runtime (first column), but it performs poorly in terms of total worker time (second column). This is because it uses all devices, most of which perform computations that are never used, leading to worse performance as the number of workers increases; in fact, its performance can become arbitrarily worse. On the other hand, OFTA performs best in terms of total worker time. Although it is slower in terms of runtime, the difference is a constant factor that does not grow with $n$, since the additional workers are less efficient and do not provide significant benefit to GTA.
+
+Turning our attention to our algorithms, both ATA and ATA-Empirical initially behave like UTA, as expected, since they must first perform an exploration phase with uniform allocations. After this phase, however, they begin to converge to the performance of OFTA.
+
+The last two columns contain plots that confirm our theoretical derivations. The third column validates Theorem 4.2, showing that ATA and ATA-Empirical converge to OFTA up to a constant. The final column shows the averaged cumulative regret, which vanishes over time as predicted by Theorems 6.1 and 6.3.
+
+Table 1: Ratios of total worker times and runtimes required to achieve $f(x) - f^{*} < 10^{-5}$ . For total worker time, we divide the total worker time of GTA by the corresponding total worker times of the other algorithms listed. For runtime, we do the opposite, dividing the runtime of the other algorithms by the runtime of GTA, since GTA is the fastest. To simplify the naming, we refer to ATA-Empirical as ATA-E.
+
+| | TOT. WORKER TIME RATIO | | | RUNTIME RATIO | | |
+|---|---|---|---|---|---|---|
+| $n$ | ATA | ATA-E | OFTA | ATA | ATA-E | OFTA |
+| 17 | 1.3 | 1.26 | 1.26 | 1.73 | 1.75 | 1.74 |
+| 51 | 2.91 | 2.69 | 3.03 | 2.43 | 2.45 | 2.17 |
+| 153 | 7.22 | 7.02 | 9.1 | 3.44 | 3.14 | 2.17 |
+| 459 | 12.45 | 14.1 | 27.3 | 6.36 | 5.51 | 2.17 |
+
+In Table 1, we compare the results numerically. Both the total worker time ratio and runtime ratio increase as $n$ grows. The total worker time ratio increases because GTA becomes less efficient, using more resources than necessary. The runtime ratio grows for ATA and ATA-Empirical since a larger
+
+Figure 1: Each row increases the number of workers by a factor of 3, starting from 17, that is, $n = 17, 51, 153, 459$ from top to bottom. The first column shows runtime vs. suboptimality. The second column also plots suboptimality, but against total worker time, i.e., $\sum_{i=1}^{n} T_{i,k}$ in Algorithm 1. The third column presents the average iteration time, given by $C_k / k$ over all iterations $k$ . The last column displays the averaged cumulative regret, as defined in (9).
+
+number of workers requires more exploration. However, for OFTA this ratio remains unchanged, as discussed earlier.
+
+We remark that in these experiments we started all runs of ATA and ATA-Empirical without prior knowledge of the computation time distributions. However, in real systems, where these algorithms are used repeatedly, prior estimates of computation times from previous runs may be available. With this information, ATA and ATA-Empirical would spend less time on exploration and approach the performance of OFTA more quickly. We validate this through experiments presented in Appendix A.5.
+
+In Appendix A.1, we conducted similar experiments with a different time distribution, where the mean times vary linearly across the arms. In Appendix A.2, we examine scenarios with varying client time distributions. Additionally, in Appendix A.3, we analyze regret performance, confirming its logarithmic behavior as predicted by Theorems 6.1 and 6.3. Finally, in Appendix A.4, we trained a simple CNN
+
+on the CIFAR-100 dataset (Krizhevsky et al., 2009) using Adam (Kingma & Ba, 2014).
+
+# Acknowledgments
+
+The research reported in this publication was supported by funding from King Abdullah University of Science and Technology (KAUST): i) KAUST Baseline Research Scheme, ii) Center of Excellence for Generative AI, under award number 5940, iii) SDAIA-KAUST Center of Excellence in Artificial Intelligence and Data Science.
+
+# Impact Statement
+
+This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none which we feel must be specifically highlighted here.
+
+# References
+
+Auer, P., Cesa-Bianchi, N., and Fischer, P. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235-256, 2002.
+Boucheron, S., Bousquet, O., Lugosi, G., and Massart, P. Moment inequalities for functions of independent random variables. The Annals of Probability, 33(2):514-560, 2005.
+Cesa-Bianchi, N. and Lugosi, G. Combinatorial bandits. Journal of Computer and System Sciences, 78(5):1404-1422, 2012.
+Chen, J., Pan, X., Monga, R., Bengio, S., and Jozefowicz, R. Revisiting distributed synchronous SGD. arXiv preprint arXiv:1604.00981, 2016a.
+Chen, W., Wang, Y., and Yuan, Y. Combinatorial multiarmed bandit: General framework and applications. In International Conference on Machine Learning, pp. 151-159. PMLR, 2013.
+Chen, W., Hu, W., Li, F., Li, J., Liu, Y., and Lu, P. Combinatorial multi-armed bandit with general reward functions. Advances in Neural Information Processing Systems, 29, 2016b.
+Combes, R., Talebi, M. S., Proutiere, A., and Lelarge, M. Combinatorial bandits revisited. Advances in Neural Information Processing Systems, 28, 2015.
+Dean, J. and Barroso, L. A. The tail at scale. Communications of the ACM, 56(2):74-80, 2013.
+Dutta, S., Joshi, G., Ghosh, S., Dube, P., and Nagpurkar, P. Slow and stale gradients can win the race: Error-runtime trade-offs in distributed SGD. In International Conference on Artificial Intelligence and Statistics, pp. 803-812. PMLR, 2018.
+Gai, Y., Krishnamachari, B., and Jain, R. Combinatorial network optimization with unknown variables: Multiarmed bandits with linear rewards and individual observations. IEEE/ACM Transactions on Networking, 20(5): 1466-1478, 2012.
+Gelenbe, E. and Mitrani, I. Analysis and synthesis of computer systems, volume 4. World Scientific, 2010.
+Gross, D., Shortle, J. F., Thompson, J. M., and Harris, C. M. Fundamentals of queueing theory, volume 627. John Wiley & Sons, 2011.
+Hadjis, S., Zhang, C., Mitliagkas, I., Iter, D., and Ré, C. Omnivore: An optimizer for multi-device deep learning on CPUs and GPUs. arXiv preprint arXiv:1606.04487, 2016.
+
+Kairouz, P., McMahan, H. B., Avent, B., Bellet, A., Dennis, M., Bhagoji, A. N., Bonawitz, K., Charles, Z., Cormode, G., Cummings, R., et al. Advances and open problems in federated learning. Foundations and Trends® in Machine Learning, 14(1-2):1-210, 2021.
+Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
+Konečný, J., McMahan, H. B., Yu, F. X., Richtárik, P., Suresh, A. T., and Bacon, D. Federated learning: Strategies for improving communication efficiency. In NIPS Workshop on Private Multi-Party Machine Learning, 2016.
+Krizhevsky, A., Hinton, G., et al. Learning multiple layers of features from tiny images. Technical report, University of Toronto, Toronto, 2009.
+Kveton, B., Wen, Z., Ashkan, A., and Szepesvari, C. Tight regret bounds for stochastic combinatorial semi-bandits. In Artificial Intelligence and Statistics, pp. 535-543. PMLR, 2015.
+Lattimore, T. and Szepesvári, C. Bandit algorithms. Cambridge University Press, 2020.
+Li, T., Sahu, A. K., Zaheer, M., Sanjabi, M., Talwalkar, A., and Smith, V. Federated optimization in heterogeneous networks. Proceedings of Machine Learning and Systems, 2:429-450, 2020.
+Lin, T., Li, J., and Chen, W. Stochastic online greedy learning with semi-bandit feedbacks. Advances in Neural Information Processing Systems, 28, 2015.
+Maranjyan, A., Safaryan, M., and Richtárik, P. Grad-Skip: Communication-accelerated local gradient methods with better computational complexity. arXiv preprint arXiv:2210.16402, 2022.
+Maranjyan, A., Omar, O. S., and Richtárik, P. MindFlayer SGD: Efficient parallel SGD in the presence of heterogeneous and random worker compute times. In Conference on Uncertainty in Artificial Intelligence, 2025a.
+Maranjyan, A., Tyurin, A., and Richtárik, P. Ringmaster ASGD: The first Asynchronous SGD with optimal time complexity. In International Conference on Machine Learning, 2025b.
+Maurer, A. and Pontil, M. Concentration inequalities under sub-Gaussian and sub-exponential conditions. Advances in Neural Information Processing Systems, 34: 7588-7597, 2021.
+McMahan, B., Moore, E., Ramage, D., Hampson, S., and y Arcas, B. A. Communication-efficient learning of deep
+
+networks from decentralized data. In Artificial Intelligence and Statistics, pp. 1273-1282. PMLR, 2017.
+McMahan, H. B., Moore, E., Ramage, D., and y Arcas, B. A. Federated learning of deep networks using model averaging. arXiv preprint arXiv:1602.05629, 2:2, 2016.
+Mitliagkas, I., Zhang, C., Hadjis, S., and Ré, C. Asynchrony begets momentum, with an application to deep learning. In 2016 54th Annual Allerton Conference on Communication, Control, and Computing (Allerton), pp. 997-1004. IEEE, 2016.
+Nguyen, J., Malik, K., Zhan, H., Yousefpour, A., Rabbat, M., Malek, M., and Huba, D. Federated learning with buffered asynchronous aggregation. In International Conference on Artificial Intelligence and Statistics, pp. 3581-3607. PMLR, 2022.
+Tyurin, A. and Richtárik, P. Optimal time complexities of parallel stochastic optimization methods under a fixed computation model. Advances in Neural Information Processing Systems, 36, 2024.
+Wang, S. and Chen, W. Thompson sampling for combinatorial semi-bandits. In International Conference on Machine Learning, pp. 5114-5122. PMLR, 2018.
+
+
+Figure 2: We use the same setup as in Figure 1, with each row tripling the number of workers, starting from $n = 17$ .
+
+# A. Additional Experiments
+
+The objective function is a convex quadratic function $f: \mathbb{R}^d \to \mathbb{R}$ defined as
+
+$$
+f(\boldsymbol{x}) = \frac{1}{2} \boldsymbol{x}^{\top} \mathbf{A} \boldsymbol{x} - \boldsymbol{b}^{\top} \boldsymbol{x},
+$$
+
+where
+
+$$
+\mathbf{A} = \frac{1}{4} \left[ \begin{array}{c c c c} 2 & -1 & & 0 \\ -1 & \ddots & \ddots & \\ & \ddots & \ddots & -1 \\ 0 & & -1 & 2 \end{array} \right] \in \mathbb{R}^{d \times d}, \quad \text{and} \quad \boldsymbol{b} = \frac{1}{4} \left[ \begin{array}{c} -1 \\ 0 \\ \vdots \\ 0 \end{array} \right] \in \mathbb{R}^{d}.
+$$
+
+We denote $f^{*}$ as the minimum value of the function $f$ . Each of the $n$ workers is able to calculate unbiased stochastic gradients $g(x)$ that satisfy
+
+$$
+\mathbb{E}\left[\|\boldsymbol{g}(\boldsymbol{x}) - \nabla f(\boldsymbol{x})\|^{2}\right] \leq 0.01^{2}.
+$$
+
+This is achieved by adding Gaussian noise to the gradients of $f$ .
+
+The experiments were implemented in Python. The distributed environment was emulated on machines with Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz.
+
+# A.1. Linear Noise
+
+In this section we model the computation time for worker $i$ by the distribution
+
+$$
+\nu_{i} = 29 i + \operatorname{Exp}(29 i), \quad \text{for all} \quad i \in [n].
+$$
+
+The expected value of this distribution is $\mu_{i} = 2\cdot 29i$. Furthermore, the Orlicz norm satisfies the bound $\alpha_{i}\leq 2\mu_{i}$.
+
+We again set $B = 23$ and run simulations similar to those in Section 7. The results are shown in Figure 2.
+
+The important difference from Figure 1 is that here ATA-Empirical performs better than ATA. This is because the Orlicz norm $\alpha = 4\cdot 29n$ is much larger.
+
+Similarly, we provide a numerical comparison in Table 2.
+
+Table 2: This table presents ratios similar to those in Table 1.
+
+| | TOTAL WORKER TIME RATIO | | | RUNTIME RATIO | | |
+|---|---|---|---|---|---|---|
+| $n$ | ATA | ATA-Empirical | OFTA | ATA | ATA-Empirical | OFTA |
+| 17 | 2.32 | 1.91 | 2.1 | 1.71 | 1.71 | 1.58 |
+| 51 | 6.71 | 5.02 | 6.29 | 3.27 | 2.12 | 1.58 |
+| 153 | 3.41 | 8.68 | 18.87 | 7.96 | 4.5 | 1.58 |
+
+# A.2. Heterogeneous Time Distributions
+
+So far, we have only considered cases where clients follow the same distributions but with different means. In this section, we extend our experiments to cases where the distributions themselves differ. We consider five distributions: Exponential, Uniform, Half-Normal, Lognormal, and Gamma. We group five workers with these five distributions so that each group has the same mean, then vary the mean across different groups. More concretely, we use:
+
+- $\operatorname{Exp}(c(5g + 1))$ ,
+- Uniform $\left(\frac{c(5g + 1)}{2}, \frac{3c(5g + 1)}{2}\right)$,
+- $\left|\mathcal{N}\left(0,c(5g + 1)\sqrt{\frac{\pi}{2}}\right)\right|,$
+- Lognormal $\left(\log (c(5g + 1)) / 2, \sqrt{\log(c(5g + 1))}\right)$ ,
+- Gamma $\left((c(5g + 1))^2, \frac{1}{c(5g + 1)}\right)$ with shape and scale parameters.
+
+Next, we add a constant $c(5g + 1)$ to all the distributions, where $c = 29$ , and $g$ represents the group number, starting from 0. The clients are divided into $n / 5$ groups.
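+
+The construction above can be sketched as follows (our own function name; each branch's parameters are chosen so that the random part has mean $m = c(5g+1)$, matching the list above, and the same $m$ is then added as a deterministic offset):
+
+```python
+import numpy as np
+
+def sample_worker_time(dist, g, rng, c=29.0):
+    """Sample one computation time for a worker in group g (a sketch).
+
+    Every distribution below has mean m = c*(5g + 1), so every worker
+    in group g has expected time 2*m after the constant shift."""
+    m = c * (5 * g + 1)
+    if dist == "exp":
+        t = rng.exponential(m)                       # Exp with scale m
+    elif dist == "uniform":
+        t = rng.uniform(m / 2, 3 * m / 2)            # mean m
+    elif dist == "halfnormal":
+        # E|N(0, sigma)| = sigma * sqrt(2/pi), so sigma = m * sqrt(pi/2)
+        t = abs(rng.normal(0.0, m * np.sqrt(np.pi / 2)))
+    elif dist == "lognormal":
+        # mean = exp(mu + sigma^2/2) = exp(log(m)/2 + log(m)/2) = m
+        t = rng.lognormal(np.log(m) / 2, np.sqrt(np.log(m)))
+    elif dist == "gamma":
+        t = rng.gamma(shape=m ** 2, scale=1.0 / m)   # mean = shape*scale = m
+    else:
+        raise ValueError(dist)
+    return m + t
+```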
+
+The results of the experiments are shown in Figure 3. The plots demonstrate that the algorithms are robust across different distributions.
+
+# A.3. Regret
+
+In this section, we verify Theorems 6.1 and 6.3 on regret through simulations. We set $n = 20$ and $B = 5$ , with the computation time for worker $i$ following the distribution
+
+$$
+\nu_{i} = \operatorname{Exp}(2 i), \quad \text{for all} \quad i \in [n].
+$$
+
+We ran the simulation five times; the plots include standard deviation bars, although the deviations are too small to be visible. The results are presented in Figure 4.
+
+As expected, the regret grows logarithmically.
+
+# A.4. Real Dataset
+
+In this section, we present an experiment where we train a convolutional neural network (CNN) on the CIFAR-100 dataset (Krizhevsky et al., 2009). The network consists of three convolutional layers and two fully connected layers, with a total of 160k parameters.
+
+We use the Adam optimizer (Kingma & Ba, 2014) with a constant step size of $8 \cdot 10^{-5}$ . The computation time of the workers follows the same setup as in Figure 2. The results are shown in Figure 5.
+
+
+Figure 3: Each row corresponds to an increasing number of workers, with $n = 15, 45, 150$ from top to bottom. We consider five distributions—Exponential, Uniform, Half-Normal, Lognormal, and Gamma—grouping them to have the same mean and then varying the mean across different groups. The results demonstrate that the algorithms remain robust across different distributions. The columns represent the same as in Figure 1.
+
+
+# A.5. Impact of Prior Knowledge on Time Distributions
+
+In real-world systems where multiple machine learning models are trained, estimates of computation times from previous runs may be available. With this prior knowledge, ATA and ATA-Empirical can be much faster, as they spend less time on exploration and quickly approach the performance of OFTA.
+
+To illustrate this, we vary the number of prior runs, $P$ . Since our algorithms operate independently of the underlying optimization process, we first focus solely on the bandit component, updating the confidence scores of machines over several iterations. We then apply the loss curves to different segments of the bandit phase and compare the results as $P$ increases. A larger $P$ yields more accurate estimates.
+
+The optimization setup remains the same as in Figure 5, with $B = 23$ and $n = 51$ . The results are presented in Figure 6.
+
+# B. Concrete Optimization Methods
+
+In this section, we provide concrete examples of optimization algorithms using the ATA and GTA allocation strategies.
+
+For optimization problems, we focus on SGD and Asynchronous SGD. Other methods, such as stochastic proximal point methods and higher-order methods, can be developed in a similar fashion.
+
+# B.1. Stochastic Gradient Descent
+
+For SGD, it is important to distinguish homogeneous and heterogeneous cases. Let us start from the homogeneous case.
+
+# B.1.1. HOMOGENEOUS REGIME
+
+Consider the problem of finding an approximate stationary point of the optimization problem
+
+$$
+\min _ {\boldsymbol {x} \in \mathbb {R} ^ {d}} \left\{f (\boldsymbol {x}) := \mathbb {E} _ {\boldsymbol {\xi} \sim \mathcal {D}} [ f (\boldsymbol {x}; \boldsymbol {\xi}) ] \right\}. \tag {12}
+$$
+
+
+Figure 4: Regret growth over iterations.
+
+We assume that each worker is able to compute a stochastic gradient $\nabla f(\pmb{x}; \pmb{\xi})$ satisfying $\mathbb{E}_{\pmb{\xi} \sim \mathcal{D}}\left[\|\nabla f(\pmb{x}; \pmb{\xi}) - \nabla f(\pmb{x})\|^2\right] \leq \sigma^2$ for all $\pmb{x} \in \mathbb{R}^d$.
+
+In this case, SGD with allocation budget $B$ becomes Minibatch SGD with batch size $B$ . The next step is determining how the batch is collected. For ATA, we refer to this method as SGD-ATA, as described in Algorithm 2.
+
+# Algorithm 2 SGD-ATA (Homogeneous)
+
+1: Optimization inputs: initial point $\pmb{x}_0 \in \mathbb{R}^d$ , stepsize $\gamma > 0$
+2: Allocation inputs: allocation budget $B$
+3: Initialize: empirical means $\hat{\mu}_{i,1} = 0$ , usage counts $K_{i,1} = 0$ , and usage times $T_{i,1} = 0$ , for all $i \in [n]$
+4: for $k = 1, \dots, K$ do
+5: Compute LCBs $(s_{i,k})$ for all $i\in [n]$
+6: Find allocation: $\pmb{a}_k \in \arg \min_{\pmb{a} \in \mathcal{A}} \ell(\pmb{a}, \pmb{s}_k)$ .
+7: Allocate $a_{i,k}$ tasks to each worker $i \in [n]$
+8: Update $\pmb{x}$ :
+
+$$
+\boldsymbol {x} _ {k + 1} = \boldsymbol {x} _ {k} - \frac {\gamma}{B} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {a _ {i, k}} \nabla f \left(\boldsymbol {x} _ {k}; \boldsymbol {\xi} _ {i} ^ {j}\right)
+$$
+
+9: for $i$ such that $a_{i,k} \neq 0$ do
+10: $K_{i,k + 1} = K_{i,k} + a_{i,k}$
+11: $T_{i,k + 1} = T_{i,k} + \sum_{j = 1}^{a_{i,k}}X_{i,k}^{(j)}$
+12: $\hat{\mu}_{i,k + 1} = \frac{T_{i,k + 1}}{K_{i,k + 1}}$
+13: end for
+14: end for
+
+In this case, each task consists of computing a gradient using the device's local data, which is assumed to have the same distribution as the data on all other devices. Because of this, it does not matter which device performs a task. The method then averages these gradients to obtain an unbiased gradient estimator and performs a gradient descent step.
+
+Now, let us give the version of Minibatch SGD using greedy allocation, presented in Algorithm 3.
+
+Figure 5: We use the CIFAR-100 dataset (Krizhevsky et al., 2009). The model is a CNN with three convolutional layers and two fully connected layers, totaling 160k parameters. The Adam optimizer (Kingma & Ba, 2014) is used with a constant step size of $8 \cdot 10^{-5}$ . The computation time of the workers follows the same setup as in Figure 2, where the mean time increases linearly. The batch size remains the same at $B = 23$ . Each row corresponds to a different number of workers, with $n = 17, 51, 153$ from top to bottom.
+
+
+
+Algorithm 3 SGD-GTA (Homogeneous)
+1: Input: initial point $\pmb{x}_0 \in \mathbb{R}^d$ , stepsize $\gamma > 0$ , allocation budget $B$
+2: for $k = 1, \dots, K$ do
+3: $b = 0$; $\pmb{g}_k = \pmb{0}$
+4: Query single gradient from each worker $i \in [n]$
+5: while $b < B$ do
+6: Gradient $\nabla f(\pmb{x}_k; \pmb{\xi}_{k_b})$ arrives from worker $i_{k_b}$
+7: $\pmb{g}_k = \pmb{g}_k + \nabla f(\pmb{x}_k; \pmb{\xi}_{k_b})$ ; $b = b + 1$
+8: Query gradient at $\pmb{x}_k$ from worker $i_{k_b}$
+9: end while
+10: Update the point: $\pmb{x}_{k+1} = \pmb{x}_k - \gamma \frac{\pmb{g}_k}{B}$
+11: end for
+
+Algorithm 3 is exactly the Rennala SGD method proposed by Tyurin & Richtárik (2024), which has optimal time complexity when the objective function is non-convex and smooth.
+
+If the computation times are deterministic, then GTA makes the same allocation in each iteration. In that case, SGD-ATA will
+
+
+Figure 6: We use the same optimization setup as in Figure 5, with $B = 23$ and $n = 51$ . The number of prior iterations, $P$ , varies across rows, starting from the top with $P = 0, 5 \cdot 10^5, 5 \cdot 10^6$ . As the number of prior iterations increases, we observe that the training of ATA and ATA-Empirical accelerates, bringing their performance closer to the optimal performance of OFTA.
+
+
+
+converge to this fixed allocation. If the times are random, the allocation found by GTA may vary in each iteration. In this case, SGD-ATA will approach the best allocation for the expected times.
+
+# B.1.2. HETEROGENEOUS REGIME
+
+Now let us consider the following heterogeneous problem
+
+$$
+\min _ {x \in \mathbb {R} ^ {d}} \left\{f (\boldsymbol {x}) := \frac {1}{n} \sum_ {i = 1} ^ {n} \mathbb {E} _ {\boldsymbol {\xi} _ {i} \sim \mathcal {D} _ {i}} \left[ f _ {i} (\boldsymbol {x}; \boldsymbol {\xi} _ {i}) \right] \right\}.
+$$
+
+Here each worker $i$ has its own data distribution $\mathcal{D}_i$ .
+
+We start with the greedy allocation. The algorithm is presented in Algorithm 4.
+
+Algorithm 4 SGD-GTA (Heterogeneous)
+1: Input: initial point $\pmb{x}_0 \in \mathbb{R}^d$ , stepsize $\gamma > 0$ , parameter $S$
+2: for $k = 1, \dots, K$ do
+3: $s_i = 0$ and $\pmb{g}_{i,k} = \pmb{0}$ for all $i \in [n]$
+4: Query single gradient from each worker $i \in [n]$
+5: while $\left( \frac{1}{n} \sum_{i=1}^{n} \frac{1}{s_i} \right)^{-1} < \frac{S}{n}$ do
+6: Gradient $\nabla f_j(\pmb{x}_k; \pmb{\xi}_k)$ arrives from worker $j$
+7: $\pmb{g}_{j,k} = \pmb{g}_{j,k} + \nabla f_j(\pmb{x}_k; \pmb{\xi}_k)$ ; $s_j = s_j + 1$
+8: Query gradient at $\pmb{x}_k$ from worker $j$
+9: end while
+10: Update the point: $\pmb{x}_{k+1} = \pmb{x}_k - \gamma \frac{1}{n} \sum_{i=1}^{n} \frac{1}{s_i} \pmb{g}_{i,k}$
+11: end for
+
+Algorithm 4 is exactly the Malenia SGD algorithm, proposed by Tyurin & Richtárik (2024), which is also optimal for non-convex smooth functions.
+
+In each iteration, Algorithm 4 receives at least one gradient from each worker. Building on this idea, we design a method incorporating ATA, given in Algorithm 5.
+
+Algorithm 5 SGD-ATA (Heterogeneous)
+1: Optimization inputs: initial point $\boldsymbol{x}_0 \in \mathbb{R}^d$ , stepsize $\gamma > 0$
+2: Allocation inputs: allocation budget $B$
+3: Initialize: empirical means $\hat{\mu}_{i,1} = 0$ , usage counts $K_{i,1} = 0$ , and usage times $T_{i,1} = 0$ , for all $i \in [n]$
+4: for $k = 1, \dots, K$ do
+5: Compute LCBs $(s_{i,k})$ for all $i \in [n]$
+6: Find allocation: $\boldsymbol{a}_k = \mathrm{RAS}(\boldsymbol{s}_k; B)$
+7: Allocate $a_{i,k} + 1$ tasks to each worker $i \in [n]$
+8: Update $\pmb{x}$:
+
+$$
+\boldsymbol {x} _ {k + 1} = \boldsymbol {x} _ {k} - \frac {\gamma}{n} \sum_ {i = 1} ^ {n} \frac {1}{a _ {i , k} + 1} \sum_ {j = 1} ^ {a _ {i, k} + 1} \nabla f _ {i} \left(\boldsymbol {x} _ {k}; \boldsymbol {\xi} _ {i} ^ {j}\right)
+$$
+
+9: For all $i \in [n]$ , update:
+
+$$
+K _ {i, k + 1} = K _ {i, k} + a _ {i, k}
+$$
+
+$$
+T _ {i, k + 1} = T _ {i, k} + \sum_ {j = 1} ^ {a _ {i, k}} X _ {i, k} ^ {(j)}
+$$
+
+$$
+\hat {\mu} _ {i, k + 1} = \frac {T _ {i , k + 1}}{K _ {i , k + 1}}
+$$
+
+10: end for
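+
+The update in line 8 first averages each worker's own gradients and then averages across workers; a minimal sketch (with hypothetical names, assuming the $a_{i,k}+1$ gradients of worker $i$ have already been collected) is:
+
+```python
+import numpy as np
+
+def hetero_sgd_update(x, grads, gamma):
+    # grads[i] is the list of a_i + 1 stochastic gradients of f_i collected
+    # from worker i. Inner mean: (1/(a_i+1)) * sum_j grad f_i(x; xi_i^j);
+    # outer mean: average over the n workers.
+    n = len(grads)
+    g = sum(np.mean(gi, axis=0) for gi in grads) / n
+    return x - gamma * g
+```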
+
+# B.2. Asynchronous SGD
+
+Here, we focus on the homogeneous problem given in Equation (12). The greedy variant, Ringmaster ASGD, was proposed by Maranjyan et al. (2025b) and, like Rennala SGD, achieves the best runtime.
+
+We now present its version with ATA, given in Algorithm 6.
+
+Here, the task remains gradient computation, but each worker's subsequent tasks use different points for computing the gradient. These points depend on the actual computation times and the asynchronous nature of the method, hence the name
+
+# Algorithm 6 ASGD-ATA
+
+1: Optimization inputs: initial point $\boldsymbol{x}_0 \in \mathbb{R}^d$ , stepsize $\gamma > 0$
+2: Allocation inputs: allocation budget $B$
+3: Initialize: empirical means $\hat{\mu}_{i,1} = 0$ , usage counts $K_{i,1} = 0$ , and usage times $T_{i,1} = 0$ , for all $i \in [n]$
+4: for $k = 1, \dots, K$ do
+5: Compute LCBs $(s_{i,k})$ for all $i\in [n]$
+6: Find allocation: $\pmb{a}_{k} = \mathrm{RAS}(\pmb{s}_{k};B)$
+7: Update $\pmb{x}_k$ using Algorithm 7 with allocation $\pmb{a}_k$
+8: For all $i$ such that $a_{i,k} \neq 0$ , update:
+
+$$
+\begin{array}{l} K _ {i, k + 1} = K _ {i, k} + a _ {i, k} \\ T _ {i, k + 1} = T _ {i, k} + \sum_ {j = 1} ^ {a _ {i, k}} X _ {i, k} ^ {(j)} \\ \hat {\mu} _ {i, k + 1} = \frac {T _ {i , k + 1}}{K _ {i , k + 1}} \\ \end{array}
+$$
+
+# 9: end for
+
+# Algorithm 7 ASGD
+
+1: Input: Initial point $\pmb{x}_0 \in \mathbb{R}^d$ , stepsize $\gamma > 0$ , allocation vector $\pmb{a}$ with $\| \pmb{a} \|_1 = B$
+2: Workers with $a_{i} > 0$ start computing stochastic gradients at $x_0$
+3: for $s = 0,1,\ldots ,B - 1$ do
+4: Receive gradient $\nabla f(\pmb{x}_{s + \delta_s};\pmb{\xi}_{s + \delta_s}^i)$ from worker $i$
+5: Update: $\pmb{x}_{s+1} = \pmb{x}_s - \gamma \nabla f(\pmb{x}_{s + \delta_s};\pmb{\xi}_{s + \delta_s}^i)$
+6: if $a_i > 0$ then
+7: Worker $i$ begins computing $\nabla f(\pmb{x}_{s+1};\pmb{\xi}_{s+1}^i)$
+8: Decrease remaining allocation for worker $i$ by one: $a_{i} = a_{i} - 1$
+9: end if
+10: end for
+11: return: $\pmb{x}_B$
+
+The sequence $\{\delta_s\}$ represents delays, where $\delta_s \geq 0$ is the difference between the iteration at which worker $i$ started computing the gradient and iteration $s$, at which it is applied.
+
+Asynchronous SGD.
+
+# C. Recursive Allocation Selection Algorithm
+
+In this section, we introduce an efficient method for finding the best allocation. Given LCBs $s_k$ and allocation budget $B$ , each iteration of ATA (Algorithm 1) determines the allocation by solving
+
+$$
+\boldsymbol {a} _ {k} \in \operatorname * {a r g m i n} _ {\boldsymbol {a} \in \mathcal {A}} \ell (\boldsymbol {a}, \boldsymbol {s} _ {k}),
+$$
+
+where
+
+$$
+\ell (\boldsymbol {a}, \boldsymbol {\mu}) := \max _ {i \in [ n ]} a _ {i} \mu_ {i} = \| \boldsymbol {a} \odot \boldsymbol {\mu} \| _ {\infty},
+$$
+
+with $\odot$ denoting the element-wise product. When clear from context, we write $\ell(\pmb{a})$ instead of $\ell(\pmb{a}, \pmb{\mu})$ .
+
+In the early iterations, when some $s_i$ values are 0, ATA allocates uniformly across these arms until all $s_i$ values become positive. After that, the allocation is determined using the recursive routine in Algorithm 8.
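+
+An iterative Python sketch of this routine, equivalent to unrolling the recursion of Algorithm 8 (our own implementation, assuming the scores are already sorted in ascending order):
+
+```python
+import numpy as np
+
+def ras(s, B):
+    """Iterative sketch of RAS: grow the allocation one task at a time.
+
+    s: positive scores, assumed sorted ascending; B: allocation budget.
+    At each step, candidate indices run up to the first zero allocation r
+    (Eq. 13); among them we pick the one minimizing the loss
+    ||(a + e_i) * s||_inf, breaking ties by smaller argmax cardinality."""
+    s = np.asarray(s, dtype=float)
+    n = len(s)
+    a = np.zeros(n, dtype=int)
+    a[0] = 1  # base case B = 1: one task to the cheapest worker
+    for _ in range(B - 1):
+        zeros = np.flatnonzero(a == 0)
+        r = zeros[0] if zeros.size else n - 1
+        best = None  # (loss, cardinality, index)
+        for i in range(r + 1):
+            a[i] += 1
+            prod = a * s
+            loss = prod.max()
+            card = int((prod == loss).sum())
+            a[i] -= 1
+            if best is None or (loss, card) < (best[0], best[1]):
+                best = (loss, card, i)
+        a[best[2]] += 1
+    return a
+```
+
+For example, with scores $(1, 2, 4)$ and $B = 4$, this routine returns the allocation $(3, 1, 0)$, whose loss is $3$.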
+
+Remark C.1. The iteration complexity of RAS is $\mathcal{O}(n\ln (\min \{B,n\}) + \min \{B,n\}^2)$. The first term, $n\ln (\min \{B,n\})$, arises from identifying the smallest $B$ scores. For the second term, note that in (13) we have $r\leq \min \{B,n\}$.
+
+# C.1. Optimality
+
+We now prove that RAS finds the optimal allocation, as stated in the following lemma.
+
+Lemma C.2. For positive scores $0 < s_1 \leq s_2 \leq \ldots \leq s_n$ , RAS (Algorithm 8) finds an optimal allocation $h \in \mathcal{A}$ , satisfying
+
+$$
+\boldsymbol {h} \in \operatorname * {a r g m i n} _ {\boldsymbol {a} \in \mathcal {A}} \| \boldsymbol {a} \odot \boldsymbol {s} \| _ {\infty}.
+$$
+
+Proof. We prove the claim by induction on the allocation budget $B$ .
+
+Base Case $(B = 1)$: When $B = 1$, RAS (Algorithm 8) allocates the single task to the worker with the smallest score (line 4). Thus, the base case holds.
+
+Inductive Step: Assume that RAS finds an optimal allocation for budget $B - 1$ , denoted by
+
+$$
+\bar {\boldsymbol {h}} = \operatorname {R A S} \left(s _ {1}, \dots , s _ {n}; B - 1\right).
+$$
+
+Algorithm 8 Recursive Allocation Selection (RAS)
+
+1: Input: Scores $s_1, \ldots, s_n$ , allocation budget $B$
+2: Assume without loss of generality that $s_1 \leq s_2 \leq \dots \leq s_n$ (i.e., sort the scores)
+3: if $B = 1$ then
+4: return: $(1,0,\dots ,0)$
+5: end if
+6: Find the previous best allocation:
+
+$$
+\boldsymbol {a} = \left(a _ {1}, \dots , a _ {n}\right) = \operatorname {R A S} \left(s _ {1}, \dots , s _ {n}; B - 1\right)
+$$
+
+7: Determine the first zero allocation:
+
+$$
+r = \left\{ \begin{array}{l l} \min \left\{i \mid a_{i} = 0 \right\}, & \text{if } a_{n} = 0, \\ n, & \text{otherwise.} \end{array} \right. \tag {13}
+$$
+
+8: Find the best next query allocation set:
+
+$$
+M = \operatorname * {a r g m i n} _ {i \in [ r ]} \| (\boldsymbol {a} + \boldsymbol {e} _ {i}) \odot \boldsymbol {s} \| _ {\infty},
+$$
+
+where $e_i$ is the unit vector in direction $i$ .
+
+9: Select $j \in M$ such that the cardinality of
+
+$$
+\operatorname *{arg max}_{i\in [r]}(a_{i} + e_{j,i})s_{i}
+$$
+
+is minimized
+
+10: return: $\mathbf{a} + \mathbf{e}_j$
+
+We need to prove that the solution returned for budget $B$ , denoted by $\pmb{h} = \bar{\pmb{h}} + \pmb{e}_r$ , is also optimal.
+
+Assume, for contradiction, that there exists $\pmb{a} \in \mathcal{A}$ such that $\pmb{a} \neq \pmb{h}$ and $\ell(\pmb{a}) < \ell(\pmb{h})$ . Write $\pmb{a} = \bar{\pmb{a}} + \pmb{e}_q$ for some $q \in [n]$ . Observe that $\| \bar{\pmb{a}} \|_1 = B - 1$ because $\pmb{a} \in \mathcal{A}$ .
+
+We consider two cases based on the value of $\ell(\bar{\boldsymbol{h}} + \boldsymbol{e}_r)$:
+
+- $\ell(\bar{\boldsymbol{h}} + \boldsymbol{e}_r) = \bar{h}_k s_k$ for some $k \neq r$. In this case, adding one unit at index $r$ does not change the maximum value, i.e., $\ell(\bar{\boldsymbol{h}}) = \ell(\bar{\boldsymbol{h}} + \boldsymbol{e}_r)$. By the inductive hypothesis, $\bar{\boldsymbol{h}}$ minimizes the loss for budget $B - 1$. Therefore, we have
+
+$$
+\ell (\boldsymbol {a}) \geq \ell (\bar {\boldsymbol {a}}) \geq \ell (\bar {\boldsymbol {h}}) = \ell (\bar {\boldsymbol {h}} + \boldsymbol {e} _ {r}) = \ell (\boldsymbol {h}),
+$$
+
+which contradicts the assumption that $\ell (\pmb {a}) < \ell (\pmb {h})$
+
+- $\ell(\bar{\boldsymbol{h}} + \boldsymbol{e}_r) = (\bar{h}_r + 1)s_r$. By the algorithm's logic, $(\bar{h}_r + 1)s_r \leq (\bar{h}_i + 1)s_i$ for all $i \neq r$. Since $\ell(\bar{\boldsymbol{h}} + \boldsymbol{e}_r) \leq \ell(\bar{\boldsymbol{h}} + \boldsymbol{e}_q)$ and we assumed $\ell(\bar{\boldsymbol{a}} + \boldsymbol{e}_q) = \ell(\boldsymbol{a}) < \ell(\boldsymbol{h}) = \ell(\bar{\boldsymbol{h}} + \boldsymbol{e}_r)$, we must have $\bar{\boldsymbol{a}} \neq \bar{\boldsymbol{h}}$, since otherwise $\ell(\bar{\boldsymbol{h}} + \boldsymbol{e}_q) < \ell(\bar{\boldsymbol{h}} + \boldsymbol{e}_r)$, contradicting the choice of $r$. Given that $\|\bar{\boldsymbol{h}}\|_1 = \|\bar{\boldsymbol{a}}\|_1$, there exists some $u \in [n]$ such that $0 \leq \bar{a}_u \leq \bar{h}_u - 1$ and another index $v \in [n]$ with $\bar{a}_v \geq \bar{h}_v + 1$.
+
+In addition, note that $r$ is chosen such that $\ell(\bar{h} + e_r)$ is minimum. Using the fact that $\ell(\bar{h} + e_r) = (\bar{h}_r + 1)s_r$ , we have that for any index $q$ , we also necessarily have $\ell(\bar{h} + e_q) = (\bar{h}_q + 1)s_q$ . Using this, we deduce
+
+$$
+\ell (\boldsymbol {h}) = \ell (\bar {\boldsymbol {h}} + \boldsymbol {e} _ {r}) \leq \ell (\bar {\boldsymbol {h}} + \boldsymbol {e} _ {v}) = (\bar {h} _ {v} + 1) s _ {v} \leq \max _ {i} \bar {a} _ {i} s _ {i} = \ell (\bar {\boldsymbol {a}}) \leq \ell (\boldsymbol {a}),
+$$
+
+where in the second inequality we used the fact that $\bar{a}_v \geq \bar{h}_v + 1$, and in the last inequality we used that the loss is non-decreasing when one unit is added to the vector. This chain of inequalities again contradicts the assumption that $\ell(\boldsymbol{a}) < \ell(\boldsymbol{h})$.
+
+Since both cases lead to contradictions, we conclude that no $\pmb{a} \in \mathcal{A}$ exists with $\ell(\pmb{a}) < \ell(\pmb{h})$ . Thus, RAS produces an optimal allocation for budget $B$ .
+
+# C.2. Minimal Cardinality
+
Among all optimal allocations, RAS always chooses one that minimizes the cardinality of the set:
+
+$$
\operatorname * {argmax} _ {i \in [ n ]} a _ {i} s _ {i} .
+$$
+
This choice is purely technical: it is what makes Lemma D.1 hold.
+
+Lemma C.3. The output of RAS ensures the smallest cardinality of the set:
+
+$$
\operatorname * {argmax} _ {i \in [ n ]} a _ {i} s _ {i}
+$$
+
among all the optimal allocations $\pmb{a}$ .
+
Proof. This proof follows the same reasoning as the previous one.
+
+Let $\pmb{h} = \mathsf{RAS}(s; B)$ , and denote the cardinality of the set $\arg \max_{i \in [n]} a_i s_i$ for allocation $\pmb{a}$ by
+
+$$
+C _ {B} (\boldsymbol {a}) = \left| \underset {i \in [ n ]} {\arg \max } a _ {i} s _ {i} \right| \geq 1.
+$$
+
+We prove the claim by induction on $B$ .
+
Base Case ( $B = 1$ ): For $B = 1$ , the allocation has a single nonzero coordinate, thus $C_1(\pmb{h}) = 1$ , which is the smallest possible cardinality.
+
+Inductive Step: Assume that RAS finds an optimal allocation for budget $B - 1$ with the smallest cardinality, denote its output by
+
+$$
+\bar {\boldsymbol {h}} = \operatorname {R A S} \left(s _ {1}, \dots , s _ {n}; B - 1\right).
+$$
+
+We need to prove that $\pmb{h} = \bar{\pmb{h}} + \pmb{e}_r$ minimizes $C_B(\pmb{a})$ among all optimal allocations for budget $B$ .
+
+Assume, for contradiction, that there exists $\mathbf{a} \in \mathcal{A}$ such that $\mathbf{a} \neq \mathbf{h}$ , $\ell(\mathbf{a}) = \ell(\mathbf{h})$ , and $C_B(\mathbf{a}) < C_B(\mathbf{h})$ . Write $\mathbf{a} = \bar{\mathbf{a}} + \mathbf{e}_q$ for some $q \in [n]$ . We consider three cases:
+
- $C_B(\pmb{h}) = 1$ . Since the minimum cardinality is exactly 1, we must have $C_B(\pmb{a}) \geq 1 = C_B(\pmb{h})$ , which contradicts our assumption.
+- $C_B(\pmb{h}) = C_{B - 1}(\bar{\pmb{h}}) > 1$ . This occurs when $\ell (\pmb{h}) = \ell (\bar{\pmb{h}})\neq (\bar{h}_r + 1)s_r$ . By the optimality of $\pmb{h}$ , we have $\ell (\bar{\pmb{h}})\leq \ell (\bar{\pmb{a}})\leq \ell (\pmb {a}) = \ell (\bar{\pmb{h}})$ , which implies $\ell (\bar{\pmb{a}}) = \ell (\pmb {a})$ . Therefore, $C_{B - 1}(\bar{\pmb{a}})\leq C_B(\pmb {a})$ . Since the induction hypothesis holds for $B - 1$ , we have $C_{B - 1}(\bar{\pmb{h}})\leq C_{B - 1}(\bar{\pmb{a}})$ . Thus,
+
+$$
+C _ {B} (\boldsymbol {h}) = C _ {B - 1} (\bar {\boldsymbol {h}}) \leq C _ {B - 1} (\bar {\boldsymbol {a}}) \leq C _ {B} (\boldsymbol {a}),
+$$
+
+which leads to a contradiction.
+
+- $C_B(\pmb{h}) = C_{B-1}(\bar{\pmb{h}}) + 1$ . This occurs when $\ell(\pmb{h}) = \ell(\bar{\pmb{h}}) = (\bar{h}_r + 1)s_r$ . Proceeding as in the previous case, we have $\ell(\bar{\pmb{a}}) = \ell(\pmb{a})$ , and hence $C_{B-1}(\bar{\pmb{a}}) \leq C_B(\pmb{a})$ . Since the induction hypothesis holds for $B - 1$ , we know $C_{B-1}(\bar{\pmb{h}}) \leq C_{B-1}(\bar{\pmb{a}})$ .
+
+We now have additional cases:
+
+$$
+- \text {I f} C _ {B - 1} (\bar {a}) = C _ {B - 1} (\bar {h}) + 1, \text {t h e n}
+$$
+
+$$
+C _ {B} (\boldsymbol {h}) = C _ {B - 1} \left(\bar {\boldsymbol {h}}\right) + 1 = C _ {B - 1} \left(\bar {\boldsymbol {a}}\right) \leq C _ {B} (\boldsymbol {a}),
+$$
+
+which leads to a contradiction.
+
+- Now assume $C_{B - 1}(\bar{\pmb{a}}) = C_{B - 1}(\bar{\pmb{h}})$ . We will show that in this case, $C_B(\pmb {a}) = C_{B - 1}(\bar{\pmb{a}}) + 1$ . By contradiction, suppose $C_B(\pmb {a}) = C_{B - 1}(\bar{\pmb{a}})$ , which implies $(\bar{a}_q + 1)s_q < \ell (\pmb {a})$ . Let $k$ be an index such that $\bar{a}_k s_k = \ell (\pmb {a})$ . Construct a new allocation $\pmb{a}' = \bar{\pmb{a}} +\pmb{e}_q - \pmb{e}_k$ . Then,
+
+$$
+C _ {B - 1} \left(\boldsymbol {a} ^ {\prime}\right) = C _ {B - 1} (\bar {\boldsymbol {a}}) - 1 < C _ {B - 1} (\bar {\boldsymbol {h}}),
+$$
+
+which contradicts the induction hypothesis. Thus, $C_B(\pmb{a}) = C_{B-1}(\bar{\pmb{a}}) + 1$ . Using this, we have
+
+$$
+C _ {B} (\boldsymbol {h}) = C _ {B - 1} \left(\bar {\boldsymbol {h}}\right) + 1 = C _ {B - 1} (\bar {\boldsymbol {a}}) + 1 = C _ {B} (\boldsymbol {a}),
+$$
+
+which again contradicts $C_B(\pmb{a}) < C_B(\pmb{h})$ .
+
+This concludes the proof.
+
+# D. Proofs of Theorem 6.1, Theorem 6.3, and Theorem 4.2
+
We start by recalling the notation. For $i \in [n]$ and $k \in [K]$ , $(X_{i,k}^{(u)})_{u \in [B]}$ denotes $B$ independent samples drawn at round $k$ from distribution $\nu_i$ . When using an allocation vector $\pmb{a}_k \in \mathcal{A}$ , the total computation time of worker $i$ at round $k$ is $\sum_{u=1}^{a_{i,k}} X_{i,k}^{(u)}$ whenever $a_{i,k} > 0$ . $\pmb{\mu} = (\mu_1, \dots, \mu_n)$ is the vector of means. For each $k \in [K]$ , when using the allocation vector $\pmb{a}_k$ , we recall the definition of the proxy loss $\ell: \mathcal{A} \times \mathbb{R}_{\geq 0}^n \to \mathbb{R}_{\geq 0}$ by
+
+$$
+\ell (\pmb {a} _ {k}, \pmb {\lambda}) = \max _ {i \in [ n ]} a _ {i, k} \lambda_ {i},
+$$
+
+where $\lambda = (\lambda_1, \dots, \lambda_n)$ is a vector of non-negative components. When $\lambda = \mu$ , we drop the dependence on the second input of $\ell$ . For each $\lambda$ , let $\bar{a}_{\lambda} \in \mathcal{A}$ , be the action minimizing this loss
+
+$$
+\bar{a}_{\boldsymbol {\lambda}}\in \operatorname *{arg min}_{\boldsymbol {a}\in \mathcal{A}}\ell (\boldsymbol {a},\boldsymbol {\lambda}) .
+$$
+
+We drop the dependency on $\mu$ from $\bar{a}_{\mu}$ to ease notation. The actual (random) computation time at round $k$ is denoted by $C: \mathcal{A} \to \mathbb{R}_+$ :
+
+$$
+C \left(\boldsymbol {a} _ {k}\right) := \max _ {i \in [ n ]} \sum_ {u = 1} ^ {a _ {i, k}} X _ {i, k} ^ {(u)}. \tag {14}
+$$
+
+Let $a^*$ be the action minimizing the expected time
+
+$$
+\boldsymbol{a}^{*}\in \operatorname *{arg min}_{\boldsymbol {a}\in \mathcal{A}}\mathbb{E}\left[C(\boldsymbol {a})\right] .
+$$
+
+The expected regret after $K$ rounds is defined as follows
+
+$$
\mathcal {R} _ {K} := \sum_ {k = 1} ^ {K} \mathbb {E} \left[ \ell (\boldsymbol {a} _ {k}) - \ell (\bar {\boldsymbol {a}}) \right].
+$$
+
+For the remainder of this analysis we consider $\bar{\pmb{a}}\in \arg \min_{a\in \mathcal{A}}\ell (\pmb {a})$ found using the RAS procedure. For each $i\in [n]$ recall that $k_{i}$ is the smallest integer such that
+
+$$
+\left(\bar {a} _ {i} + k _ {i}\right) \mu_ {i} > \ell (\bar {\boldsymbol {a}}). \tag {15}
+$$
+
+Below we present a technical lemma used in the proofs of Theorems 6.1 and 6.3.
+
+Lemma D.1. Let $\pmb{x} = (x_{1},\dots,x_{n})\in \mathbb{R}_{\geq 0}^{n}$ . Let $\pmb{a}$ be the output of $\mathsf{RAS}(\pmb{x};B)$ . For each $i,j\in [n]$ , we have
+
+$$
+a _ {j} x _ {j} \leq (a _ {i} + 1) x _ {i}.
+$$
+
+Proof. Fix $\pmb{x} \in \mathbb{R}_+^n$ , and let $\pmb{a} = \mathrm{RAS}(\pmb{x}; B)$ . The result is straightforward when $\min_{i \in [n]} x_i = 0$ .
+
+Suppose that $x_{i} > 0$ for all $i\in [n]$ . Let $s\geq 1$ denote the cardinality
+
+$$
s := \left| \operatorname * {argmax} _ {i \in [ n ]} a _ {i} x _ {i} \right|.
+$$
+
Fix $i\in [n]$ and let $k\in \arg \max_{u\in [n]}a_{u}x_{u}$ . Since $a_{j}x_{j}\leq a_{k}x_{k}$ for every $j\in [n]$ , it suffices to show that
+
+$$
+a _ {k} x _ {k} \leq (a _ {i} + 1) x _ {i}.
+$$
+
We use a proof by contradiction. Suppose that $a_{k}x_{k} > (a_{i} + 1)x_{i}$ , and consider the allocation vector $\pmb{a}' \in \mathcal{A}$ given by $a_{k}' = a_{k} - 1$ , $a_{i}' = a_{i} + 1$ , and $a_{u}' = a_{u}$ for $u \notin \{i, k\}$ . Let $R := \max_{u \neq i, k} \{a_{u}x_{u}\}$ . We have
+
+$$
+\ell (\pmb {a} ^ {\prime}, \pmb {x}) = \max _ {u \in [ n ]} a _ {u} ^ {\prime} x _ {u} = \max \{(a _ {i} + 1) x _ {i}, (a _ {k} - 1) x _ {k}, R \}.
+$$
+
+We consider two cases:
+
+- Suppose that $s = 1$ (i.e., the only element in $[n]$ such that $a_{u}x_{u} = \ell(\boldsymbol{a},\boldsymbol{x})$ is $k$ ), then we have necessarily $R < a_{k}x_{k}$ . Moreover, by the contradiction hypothesis, $(a_{i} + 1)x_{i} < a_{k}x_{k}$ . Therefore,
+
+$$
+\ell (\boldsymbol {a} ^ {\prime}, \boldsymbol {x}) = \max \left\{\left(a _ {i} + 1\right) x _ {i}, \left(a _ {k} - 1\right) x _ {k}, R \right\} < a _ {k} x _ {k} = \ell (\boldsymbol {a}, \boldsymbol {x}),
+$$
+
+which contradicts the definition of $\pmb{a}$ .
+
+- Suppose that $s \geq 2$ , since by hypothesis $a_k x_k > (a_i + 1)x_i$ , we clearly have $a_i x_i < \ell(\pmb{a}, \pmb{x})$ therefore among the set $[n] \setminus \{k, i\}$ there are exactly $s - 1$ elements such that $a_u x_u = \ell(\pmb{a}, \pmb{x})$ . In particular, this gives
+
+$$
+\ell (\boldsymbol {a} ^ {\prime}, \boldsymbol {x}) = \max _ {u \in [ n ]} \left\{\left(a _ {i} + 1\right) x _ {i}, \left(a _ {k} - 1\right) x _ {k}, R \right\} = R = \ell (\boldsymbol {a}, \boldsymbol {x}).
+$$
+
+Therefore, $\pmb{a}' \in \arg \min_{\pmb{a} \in \mathcal{A}} \ell(\pmb{a}, \pmb{x})$ and the number of elements such that $a_i' x_i = \ell(\pmb{a}', \pmb{x}) = \ell(\pmb{a}, \pmb{x})$ is at most $s - 1$ , which contradicts the fact that $s$ is minimal given the RAS choice and Lemma C.3.
+
In conclusion, we have $a_{k}x_{k}\leq (a_{i} + 1)x_{i}$ , which proves the claim.
+
Remark D.2. Recall that Lemma D.1 guarantees that the $k_{i}$ defined in (15) satisfy $k_{i} \in \{1, 2\}$ for each $i \in [n]$ .
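The property stated in Lemma D.1 can be checked by brute force on small instances: enumerate every allocation of budget $B$ over $n$ workers, keep the minimizers of $\ell(\cdot, \pmb{x})$, and among those take one with the smallest argmax cardinality, which is the choice Lemma C.3 guarantees for RAS. The sketch below (an illustration with hypothetical helper names; enumeration stands in for RAS, so it validates the stated property rather than the algorithm itself) does exactly that:

```python
from itertools import combinations_with_replacement
from collections import Counter

def allocations(n, B):
    """All vectors a in Z_{>=0}^n with ||a||_1 = B."""
    for picks in combinations_with_replacement(range(n), B):
        counts = Counter(picks)
        yield tuple(counts.get(i, 0) for i in range(n))

def check_lemma_d1(x, B, tol=1e-9):
    """Check a_j x_j <= (a_i + 1) x_i for a minimal-cardinality optimal allocation."""
    n = len(x)
    loss = lambda a: max(ai * xi for ai, xi in zip(a, x))
    best = min(loss(a) for a in allocations(n, B))
    opts = [a for a in allocations(n, B) if loss(a) <= best + tol]
    # Among optimal allocations, pick one with the fewest maximizing indices
    # (the tie-breaking of Lemma C.3).
    card = lambda a: sum(1 for ai, xi in zip(a, x) if abs(ai * xi - loss(a)) <= tol)
    a = min(opts, key=card)
    # Lemma D.1: a_j x_j <= (a_i + 1) x_i for all pairs i, j.
    return all(a[j] * x[j] <= (a[i] + 1) * x[i] + tol
               for i in range(n) for j in range(n))

assert check_lemma_d1([1.0, 2.0, 4.0], 4)
assert check_lemma_d1([3.0, 1.0], 5)
```

Note that the inequality of Lemma D.1 is exactly what makes Remark D.2 work: adding two units to any coordinate of the optimal allocation is always enough to strictly exceed $\ell(\bar{\pmb{a}})$.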
+
+# D.1. Proof of Theorem 6.1
+
+Below we restate the theorem.
+
Theorem 6.1. Suppose that Assumption 3.1 holds. Let $\bar{a} \in \arg \min_{\pmb{a} \in \mathcal{A}} \ell(\pmb{a})$ ; in case of multiple optimal actions, we consider the one output by RAS when fed with $\pmb{\mu}$ . Then, the expected regret of ATA with inputs $(B, \alpha)$ satisfies
+
+$$
+\mathcal {R} _ {K} \leq 2 n \max _ {i \in [ n ]} \{B \mu_ {i} - \ell (\bar {\boldsymbol {a}}) \} + c \cdot \sum_ {i = 1} ^ {n} \frac {\alpha^ {2} (\bar {a} _ {i} + k _ {i}) (B \mu_ {i} - \ell (\bar {\boldsymbol {a}}))}{((\bar {a} _ {i} + k _ {i}) \mu_ {i} - \ell (\bar {\boldsymbol {a}})) ^ {2}} \cdot \ln K,
+$$
+
+where $\alpha = \max_{i\in [n]}\| X_i - \mu_i\|_{\psi_1}$ , and $c$ is a constant.
+
+Proof. Let $K_{i,k}$ be the number of rounds where arm $i$ was queried prior to round $k$ (we take $K_{i,1} = 0$ ). Recall that we chose the following confidence bound: if $K_{i,k} \geq 1$ , then
+
+$$
\operatorname {conf} (i, k) = 2 \alpha \sqrt {\frac {\ln (2 k ^ {2})}{K _ {i , k}}} + 2 \alpha \frac {\ln (2 k ^ {2})}{K _ {i , k}},
+$$
+
and $\mathrm{conf}(i,k) = \infty$ otherwise. Recall that $\hat{\mu}_{i,k}$ denotes the empirical mean of samples from $\nu_{i}$ observed prior to round $k$ if $K_{i,k} \geq 1$ , and $\hat{\mu}_{i,k} = 0$ if $K_{i,k} = 0$ . Let $s_{i,k}$ denote the lower confidence bound used in the algorithm:
+
+$$
+s _ {i, k} = \left(\hat {\mu} _ {i, k} - \operatorname {c o n f} (i, k)\right) _ {+}.
+$$
+
+We introduce the events $\mathcal{E}_{i,k}$ for $i\in [n]$ and $k\in [K]$ defined by
+
+$$
+\mathcal {E} _ {i, k} := \left\{\left| \hat {\mu} _ {i, k} - \mu_ {i} \right| > \operatorname {c o n f} (i, k) \right\}.
+$$
+
+Let
+
+$$
+\mathcal {E} _ {k} = \cup_ {i \in [ n ]} \mathcal {E} _ {i, k}.
+$$
+
Let us prove that for each $k \in [K]$ and $i \in [n]$ we have $\mathbb{P}(\mathcal{E}_{i,k}) \leq \frac{1}{k^2}$ , which gives $\mathbb{P}(\mathcal{E}_k) \leq \frac{n}{k^2}$ by a union bound. Fix $i \in [n]$ . Using Lemma E.1 with $\delta = 1 / k^2$ , we have
+
+$$
\mathbb {P} \left(\mathcal {E} _ {i, k}\right) = \mathbb {P} \left\{\left| \hat {\mu} _ {i, k} - \mu_ {i} \right| > \operatorname {conf} (i, k) \right\} \leq \frac {1}{k ^ {2}}.
+$$
+
We call a round $k$ a "bad round" if $\ell(\pmb{a}_k) > \ell(\bar{\pmb{a}})$ . Let us upper bound the number of bad rounds.
+
Observe that in a bad round there is necessarily an arm $i \in [n]$ such that $a_{i,k}\mu_i > \ell(\bar{\pmb{a}})$ . For each $i \in [n]$ , let $N_i(k)$ denote the number of rounds $q \in \{1, \dots, k\}$ where $a_{i,q}\mu_i > \ell(\bar{\pmb{a}})$ and $i \in \arg \max_{j \in [n]} a_{j,q}\mu_j$ (this corresponds to a bad round triggered by worker $i$ ):
+
+$$
N _ {i} (k) := \left| \left\{q \in \{1, \dots , k \}: a _ {i, q} \mu_ {i} > \ell (\bar {\boldsymbol {a}}) \text { and } a _ {i, q} \mu_ {i} = \ell (\boldsymbol {a} _ {q}) \right\} \right|.
+$$
+
+We show that in the case of $\ell(\pmb{a}_k) > \ell(\bar{\pmb{a}})$ , the following event will hold: there exists $i \in [n]$ such that
+
+$$
E _ {i, k} := \mathcal {E} _ {k} \text { or } \left\{N _ {i} (k - 1) \leq \frac {24 \alpha^ {2} (\bar {a} _ {i} + k _ {i}) \ln (2 K ^ {2})}{((\bar {a} _ {i} + k _ {i}) \mu_ {i} - \ell (\bar {\boldsymbol {a}})) ^ {2}} \right\}.
+$$
+
+To prove this we use a contradiction argument. Suppose that for each $i \in [n]$ , $\neg E_{i,k}$ holds and that $\ell(\boldsymbol{a}_k) > \ell(\bar{\boldsymbol{a}})$ . This means that $k$ is a bad round, let $i$ be an arm that triggered this bad round (i.e., $i \in \arg \max_{j \in [n]} a_{j,k} \mu_j$ ). Event $\neg E_{i,k}$ gives in particular
+
+$$
+N _ {i} (k - 1) > \frac {2 4 \alpha^ {2} \left(\bar {a} _ {i} + k _ {i}\right) \ln \left(2 K ^ {2}\right)}{\left(\left(\bar {a} _ {i} + k _ {i}\right) \mu_ {i} - \ell (\bar {\boldsymbol {a}})\right) ^ {2}}. \tag {16}
+$$
+
Observe that in each round where $N_{i}(\cdot)$ is incremented, the number of samples received from the distribution $\nu_{i}$ increases by at least $\bar{a}_{i} + k_{i}$ . Therefore, (16) implies
+
+$$
+K _ {i, k} > \frac {2 4 \alpha^ {2} (\bar {a} _ {i} + k _ {i}) ^ {2} \ln (2 K ^ {2})}{((\bar {a} _ {i} + k _ {i}) \mu_ {i} - \ell (\bar {\mathbf {a}})) ^ {2}} = \frac {2 4 \alpha^ {2} \ln (2 K ^ {2})}{(\mu_ {i} - \frac {\ell (\bar {\mathbf {a}})}{\bar {a} _ {i} + k _ {i}}) ^ {2}}.
+$$
+
+Then we have, using the expressions of $\mathrm{conf}(\cdot)$ and the bound above
+
+$$
+\begin{array}{l} 2 \mathrm {c o n f} (i, k) = 4 \alpha \sqrt {\frac {\ln (2 k ^ {2})}{K _ {i , k}}} + 4 \alpha \frac {\ln (2 k ^ {2})}{K _ {i , k}} \\ \leq \mu_ {i} - \frac {\ell (\bar {\boldsymbol {a}})}{\bar {a} _ {i} + k _ {i}}. \tag {17} \\ \end{array}
+$$
+
The contradiction hypothesis gives $a_{i,k}\mu_i > \ell(\bar{\pmb{a}})$ ; then, using the definition of $k_i$ , we have $a_{i,k} \geq \bar{a}_i + k_i$ . Therefore, (17) gives
+
+$$
+2 \operatorname {c o n f} (i, k) < \mu_ {i} - \frac {\ell (\bar {\mathbf {a}})}{a _ {i , k}}. \tag {18}
+$$
+
Observe that in each round $\| \pmb{a}_k\| _1 = B$ ; therefore, if we have $a_{i,k}\geq \bar{a}_i + k_i > \bar{a}_i$ for some $i$ , there necessarily exists $j\in [n]\setminus \{i\}$ such that $a_{j,k}\leq \bar{a}_j - 1$ . Using the fact that $\ell (\bar{\pmb{a}})\geq \bar{a}_j\mu_j$ with (18), we get
+
+$$
+a _ {i, k} \left(\mu_ {i} - 2 \operatorname {c o n f} (i, k)\right) > \bar {a} _ {j} \mu_ {j}. \tag {19}
+$$
+
+Since both $\neg \mathcal{E}_{i,k}$ and $\neg \mathcal{E}_{j,k}$ hold (because $\neg E_{i,k}$ implies $\neg \mathcal{E}_k$ ), we have that
+
+$$
+\mu_ {i} - 2 \operatorname {c o n f} (i, k) \leq \hat {\mu} _ {i, k} - \operatorname {c o n f} (i, k) \leq s _ {i, k}, \tag {20}
+$$
+
+and $\mu_j \geq \hat{\mu}_{j,k} - \mathrm{conf}(j,k)$ . Recall that $\mu_j \geq 0$ , therefore
+
+$$
+\mu_ {j} \geq \left(\hat {\mu} _ {j, k} - \operatorname {c o n f} (j, k)\right) _ {+} = s _ {j, k}. \tag {21}
+$$
+
+Using the bounds (20) and (21) in (19), we have
+
+$$
+a _ {i, k} s _ {i, k} > \bar {a} _ {j} s _ {j, k} \geq (a _ {j, k} + 1) s _ {j, k},
+$$
+
where we used the definition of $j$ in the second inequality. This contradicts the statement of Lemma D.1, which concludes the contradiction argument. Therefore, the event that $k$ is a bad round implies that $E_{i,k}$ holds for at least one $i \in [n]$ . We say that a bad round was triggered by arm $i$ if $N_{i}(\cdot)$ was incremented at that round. Observe that if $k \in [K]$ is not a bad round then $\mathbb{E}\left[\ell (\pmb {a}_k)\right] - \ell (\bar{\pmb{a}}) = 0$ ; otherwise, if $k$ is a bad round triggered by $i \in [n]$ , then $\mathbb{E}\left[\ell (\pmb {a}_k)\right] - \ell (\bar{\pmb{a}})\leq B\mu_i - \ell (\bar{\pmb{a}})$ . To ease notation, we introduce for $i \in [n]$
+
+$$
+H _ {i} := \frac {2 4 \alpha^ {2} (\bar {a} _ {i} + k _ {i}) \ln (2 K ^ {2})}{((\bar {a} _ {i} + k _ {i}) \mu_ {i} - \ell (\bar {\mathbf {a}})) ^ {2}}.
+$$
+
+The expected regret satisfies
+
+$$
\begin{array}{l} \mathcal {R} _ {K} = \sum_ {k = 1} ^ {K} \mathbb {E} \left[ \ell (\boldsymbol {a} _ {k}) - \ell (\bar {\boldsymbol {a}}) \right] \\ \leq \sum_ {i = 1} ^ {n} (B \mu_ {i} - \ell (\bar {\boldsymbol {a}})) \mathbb {E} [ N _ {i} (K) ] \\ = \sum_ {i = 1} ^ {n} \sum_ {k = 1} ^ {K} \left(B \mu_ {i} - \ell (\bar {\boldsymbol {a}})\right) \mathbb {E} [ \mathbb {1} (k \text { is a bad round triggered by } i) ] \\ \leq \max _ {i \in [ n ]} \left\{B \mu_ {i} - \ell (\bar {\boldsymbol {a}}) \right\} \cdot \sum_ {k = 1} ^ {K} \mathbb {P} \left(\mathcal {E} _ {k}\right) + \sum_ {i = 1} ^ {n} \left(B \mu_ {i} - \ell (\bar {\boldsymbol {a}})\right) \sum_ {k = 1} ^ {K} \mathbb {E} \left[ \mathbb {1} (k \text { is a bad round triggered by } i) \mid \neg \mathcal {E} _ {k} \right] \\ \leq \max _ {i \in [ n ]} \left\{B \mu_ {i} - \ell (\bar {\boldsymbol {a}}) \right\} \cdot \sum_ {k = 1} ^ {K} \mathbb {P} \left(\mathcal {E} _ {k}\right) + \sum_ {i = 1} ^ {n} \left(B \mu_ {i} - \ell (\bar {\boldsymbol {a}})\right) \sum_ {k = 1} ^ {K} \mathbb {E} \left[ \mathbb {1} \left(N _ {i} (k) = 1 + N _ {i} (k - 1) \text { and } N _ {i} (k) \leq H _ {i}\right) \mid \neg \mathcal {E} _ {k} \right] \\ \leq \max _ {i \in [ n ]} \left\{B \mu_ {i} - \ell (\bar {\boldsymbol {a}}) \right\} \cdot \sum_ {k = 1} ^ {K} \mathbb {P} \left(\mathcal {E} _ {k}\right) + \sum_ {i = 1} ^ {n} \left(B \mu_ {i} - \ell (\bar {\boldsymbol {a}})\right) H _ {i} \\ \leq 2 n \max _ {i \in [ n ]} \left\{B \mu_ {i} - \ell (\bar {\boldsymbol {a}}) \right\} + \sum_ {i = 1} ^ {n} \frac {24 \alpha^ {2} (\bar {a} _ {i} + k _ {i}) (B \mu_ {i} - \ell (\bar {\boldsymbol {a}})) \ln (2 K ^ {2})}{((\bar {a} _ {i} + k _ {i}) \mu_ {i} - \ell (\bar {\boldsymbol {a}})) ^ {2}}. \\ \end{array}
+$$
+
+# D.2. Proof of Theorem 6.3
+
Theorem 6.3. Suppose that Assumption 3.1 holds. Let $\bar{a} \in \arg \min_{\pmb{a} \in \mathcal{A}} \ell(\pmb{a})$ ; in case of multiple optimal actions, we consider the one output by RAS when fed with $\pmb{\mu}$ . Then, the expected regret of ATA-Empirical with the empirical confidence bounds using the inputs $(B, \eta)$ satisfies
+
+$$
+\mathcal {R} _ {K} \leq 2 n \max _ {i \in [ n ]} \{B \mu_ {i} - \ell (\bar {\boldsymbol {a}}) \} + c \cdot \sum_ {i = 1} ^ {n} \frac {\eta^ {2} \mu_ {i} ^ {2} (\bar {a} _ {i} + k _ {i}) (B \mu_ {i} - \ell (\bar {\boldsymbol {a}}))}{((\bar {a} _ {i} + k _ {i}) \mu_ {i} - \ell (\bar {\boldsymbol {a}})) ^ {2}} \cdot \ln K,
+$$
+
+where $\eta = \max_{i\in [n]}\alpha_i / \mu_i$ and $c$ is a constant.
+
+Proof. We build on the techniques used in the proof of Theorem 6.1. Recall the expression of $\eta$ :
+
+$$
+\eta = \max _ {i \in [ n ]} \frac {\alpha_ {i}}{\mu_ {i}}.
+$$
+
+Define the quantities $C_{i,k}$ by
+
+$$
+C _ {i, k} = 2 \sqrt {\frac {\ln (2 k ^ {2})}{K _ {i , k}}} + 2 \frac {\ln (2 k ^ {2})}{K _ {i , k}}.
+$$
+
+Recall that the lower confidence bounds used here are defined as
+
+$$
+\hat {s} _ {i, k} = \hat {\mu} _ {i, k} (1 - \eta C _ {i, k}) _ {+}.
+$$
+
+We additionally define the following quantities
+
+$$
+\hat {u} _ {i, k} := \hat {\mu} _ {i, k} \left(1 + \frac {4}{3} \eta C _ {i, k}\right).
+$$
+
+We introduce the events $\mathcal{E}_{i,k}$ for $i\in [n]$ and $k\in [K]$ defined by
+
+$$
\mathcal {E} _ {i, k} := \left\{\mu_ {i} < \hat {s} _ {i, k} \right\} \quad \text {or} \quad \left\{\eta C _ {i, k} \leq \frac {1}{4} \quad \text {and} \quad \mu_ {i} > \hat {u} _ {i, k} \right\}.
+$$
+
+Let
+
+$$
+\mathcal {E} _ {k} = \cup_ {i \in [ n ]} \mathcal {E} _ {i, k}.
+$$
+
+We have, using Lemma E.3, for each $k \in [K]$ and $i \in [n]$ : $\mathbb{P}(\mathcal{E}_{i,k}) \leq \frac{1}{k^2}$ , which gives using a union bound $\mathbb{P}(\mathcal{E}_k) \leq \frac{n}{k^2}$ .
+
Following similar steps as in the proof of Theorem 6.1, we call a round $k$ a "bad round" if $\ell(\pmb{a}_k) > \ell(\bar{\pmb{a}})$ . Let us upper bound the number of bad rounds.
+
Observe that in a bad round there is necessarily an arm $i \in [n]$ such that $a_{i,k}\mu_i > \ell(\bar{\pmb{a}})$ . For each $i \in [n]$ , let $N_i(k)$ denote the number of rounds $q \in \{1, \dots, k\}$ where $a_{i,q}\mu_i > \ell(\bar{\pmb{a}})$ and $i \in \arg \max_{j \in [n]} \{a_{j,q}\mu_j\}$ (this corresponds to a bad round triggered by worker $i$ ):
+
+$$
N _ {i} (k) := \left| \left\{q \in \{1, \dots , k \}: a _ {i, q} \mu_ {i} > \ell (\bar {\boldsymbol {a}}) \text { and } a _ {i, q} \mu_ {i} = \ell (\boldsymbol {a} _ {q}) \right\} \right|.
+$$
+
+We show that in the case of $\ell(\pmb{a}_k) > \ell(\bar{\pmb{a}})$ , the following event will hold: there exists $i \in [n]$ such that
+
+$$
E _ {i, k} := \mathcal {E} _ {k} \text { or } \left\{N _ {i} (k - 1) \leq \frac {185 \eta^ {2} \mu_ {i} ^ {2} (\bar {a} _ {i} + k _ {i}) \ln (2 K ^ {2})}{((\bar {a} _ {i} + k _ {i}) \mu_ {i} - \ell (\bar {\boldsymbol {a}})) ^ {2}} \right\}.
+$$
+
To prove this, suppose, for the sake of contradiction, that for some $i \in [n]$ we have $\neg E_{i,k}$ and that $k$ is a bad round triggered by arm $i$ (i.e., $\ell(\pmb{a}_k) > \ell(\bar{\pmb{a}})$ and $i \in \arg \max_{j \in [n]} a_{j,k} \mu_j$ ).
+
+This gives in particular
+
+$$
+N _ {i} (k - 1) > \frac {1 8 5 \eta^ {2} \mu_ {i} ^ {2} \left(\bar {a} _ {i} + k _ {i}\right) \ln \left(2 K ^ {2}\right)}{\left(\left(\bar {a} _ {i} + k _ {i}\right) \mu_ {i} - \ell (\bar {\boldsymbol {a}})\right) ^ {2}}. \tag {22}
+$$
+
Observe that in each round where $N_{i}(\cdot)$ is incremented, the number of samples received from the distribution $\nu_{i}$ increases by at least $\bar{a}_{i} + k_{i}$ . Therefore, (22) implies
+
+$$
+K _ {i, k} > \frac {1 8 5 \eta^ {2} \mu_ {i} ^ {2} (\bar {a} _ {i} + k _ {i}) ^ {2} \ln (2 K ^ {2})}{((\bar {a} _ {i} + k _ {i}) \mu_ {i} - \ell (\bar {\mathbf {a}})) ^ {2}} = \frac {1 8 5 \eta^ {2} \mu_ {i} ^ {2} \ln (2 K ^ {2})}{(\mu_ {i} - \frac {\ell (\bar {\mathbf {a}})}{\bar {a} _ {i} + k _ {i}}) ^ {2}}.
+$$
+
+Therefore, we have, using the expression of $C_{i,k}$ and the bound above
+
+$$
+\begin{array}{l} C _ {i, k} = 2 \sqrt {\frac {\ln (2 k ^ {2})}{K _ {i , k}}} + 2 \frac {\ln (2 k ^ {2})}{K _ {i , k}} \\ \leq \frac {3}{1 9 \eta \mu_ {i}} \left(\mu_ {i} - \frac {\ell (\bar {\mathbf {a}})}{\bar {a} _ {i} + k _ {i}}\right). \tag {23} \\ \end{array}
+$$
+
+The last bound implies in particular that $\eta C_{i,k} \leq \frac{3}{19}$ , hence $(1 - \eta C_{i,k})_+ = 1 - \eta C_{i,k}$ .
+
+We have
+
+$$
+\begin{array}{l} \hat {\mu} _ {i, k} - \hat {s} _ {i, k} = \hat {\mu} _ {i, k} \left(1 - \left(1 - \eta C _ {i, k}\right) _ {+}\right) \\ \leq \eta C _ {i, k} \hat {\mu} _ {i, k} \\ \leq \eta C _ {i, k} \frac {\mu_ {i}}{1 - \eta C _ {i , k}} \\ \leq \frac {1}{5} \left(\mu_ {i} - \frac {\ell (\bar {\boldsymbol {a}})}{\bar {a} _ {i} + k _ {i}}\right). \tag {24} \\ \end{array}
+$$
+
+where we used the event $\neg \mathcal{E}_{i,k}$ in the penultimate inequality (in particular $\hat{s}_{i,k} = \hat{\mu}_{i,k}(1 - \eta C_{i,k})_+ \leq \mu_i$ ), and the bound (23) in the last inequality.
+
Given the hypothesis that $\ell(\pmb{a}_k) > \ell(\bar{\pmb{a}})$ and $\ell(\pmb{a}_k) = a_{i,k}\mu_i$ , we necessarily have $a_{i,k} \geq \bar{a}_i + k_i$ . Therefore, bound (24) gives
+
+$$
+5 \hat {\mu} _ {i, k} - 5 \hat {s} _ {i, k} < \mu_ {i} - \frac {\ell (\bar {\mathbf {a}})}{a _ {i , k}}.
+$$
+
Observe that in each round $\| \pmb{a}_k\| _1 = B$ ; therefore, if we have $a_{i,k}\geq \bar{a}_i + k_i > \bar{a}_i$ for some $i$ , there necessarily exists $j\in [n]\setminus \{i\}$ such that $a_{j,k}\leq \bar{a}_j - 1$ . Therefore, using the fact that $\ell (\bar{\pmb{a}})\geq \bar{a}_j\mu_j$ , we obtain
+
+$$
+a _ {i, k} \left(\mu_ {i} + 5 \hat {s} _ {i, k} - 5 \hat {\mu} _ {i, k}\right) > \ell (\bar {\boldsymbol {a}}) \geq \bar {a} _ {j} \mu_ {j}. \tag {25}
+$$
+
+Since both $\neg \mathcal{E}_{i,k}$ and $\neg \mathcal{E}_{j,k}$ hold (because $\neg E_{i,k}$ implies $\neg \mathcal{E}_k$ ), we have that
+
+$$
+\begin{array}{l} \mu_ {i} + 5 \hat {s} _ {i, k} - 5 \hat {\mu} _ {i, k} = \hat {s} _ {i, k} + \mu_ {i} - \hat {\mu} _ {i, k} + 4 (\hat {s} _ {i, k} - \hat {\mu} _ {i, k}) \\ = \hat {s} _ {i, k} + \mu_ {i} - \hat {\mu} _ {i, k} + 4 \hat {\mu} _ {i, k} \left(\left(1 - \eta C _ {i, k}\right) _ {+} - 1\right) \\ \leq \hat {s} _ {i, k} + \mu_ {i} - \hat {\mu} _ {i, k} - 4 \hat {\mu} _ {i, k} \eta C _ {i, k} \\ \leq \hat {s} _ {i, k} + \mu_ {i} - \hat {u} _ {i, k}, \\ \end{array}
+$$
+
+To conclude, observe that given $\eta C_{i,k} \leq \frac{3}{19}$ , event $\neg \mathcal{E}_{i,k}$ implies that $\mu_i \leq \hat{u}_{i,k}$ , therefore
+
+$$
+\mu_ {i} + 5 \hat {s} _ {i, k} - 5 \hat {\mu} _ {i, k} \leq \hat {s} _ {i, k}.
+$$
+
+Since $\neg \mathcal{E}_{j,k}$ holds, we also have
+
+$$
+\mu_ {j} \geq \hat {s} _ {j, k}.
+$$
+
+Using the two last bounds in (25), we have
+
+$$
+a _ {i, k} \hat {s} _ {i, k} > \bar {a} _ {j} \hat {s} _ {j, k} \geq (a _ {j, k} + 1) \hat {s} _ {j, k},
+$$
+
where we used the definition of $j$ , as an arm satisfying $\bar{a}_j \geq 1 + a_{j,k}$ , in the second inequality. This contradicts the statement of Lemma D.1, which concludes the contradiction argument. Therefore, the event that $k$ is a bad round implies that $E_{i,k}$ holds for at least one $i \in [n]$ . We say that a bad round was triggered by arm $i$ if $N_{i}(\cdot)$ was incremented at that round. Observe that if $k \in [K]$ is not a bad round then $\mathbb{E}\left[\ell (\pmb {a}_k)\right] - \ell (\bar{\pmb{a}}) = 0$ ; otherwise, if $k$ is a bad round triggered by $i \in [n]$ , then $\mathbb{E}\left[\ell (\pmb {a}_k)\right] - \ell (\bar{\pmb{a}}) \leq B\mu_i - \ell (\bar{\pmb{a}})$ . To ease notation, we introduce for $i \in [n]$
+
+$$
+H _ {i} := \frac {1 8 5 \eta^ {2} \mu_ {i} ^ {2} (\bar {a} _ {i} + k _ {i}) \ln (2 K ^ {2})}{((\bar {a} _ {i} + k _ {i}) \mu_ {i} - \ell (\bar {a})) ^ {2}}.
+$$
+
+The expected regret satisfies
+
+$$
\begin{array}{l} \mathcal {R} _ {K} = \sum_ {k = 1} ^ {K} \mathbb {E} \left[ \ell (\boldsymbol {a} _ {k}) - \ell (\bar {\boldsymbol {a}}) \right] \\ \leq \sum_ {i = 1} ^ {n} (B \mu_ {i} - \ell (\bar {\boldsymbol {a}})) \mathbb {E} [ N _ {i} (K) ] \\ = \sum_ {i = 1} ^ {n} \sum_ {k = 1} ^ {K} \left(B \mu_ {i} - \ell (\bar {\boldsymbol {a}})\right) \mathbb {E} [ \mathbb {1} (k \text { is a bad round triggered by } i) ] \\ \leq \max _ {i \in [ n ]} \left\{B \mu_ {i} - \ell (\bar {\boldsymbol {a}}) \right\} \cdot \sum_ {k = 1} ^ {K} \mathbb {P} \left(\mathcal {E} _ {k}\right) + \sum_ {i = 1} ^ {n} \left(B \mu_ {i} - \ell (\bar {\boldsymbol {a}})\right) \sum_ {k = 1} ^ {K} \mathbb {E} \left[ \mathbb {1} (k \text { is a bad round triggered by } i) \mid \neg \mathcal {E} _ {k} \right] \\ \leq \max _ {i \in [ n ]} \left\{B \mu_ {i} - \ell (\bar {\boldsymbol {a}}) \right\} \cdot \sum_ {k = 1} ^ {K} \mathbb {P} \left(\mathcal {E} _ {k}\right) + \sum_ {i = 1} ^ {n} \left(B \mu_ {i} - \ell (\bar {\boldsymbol {a}})\right) \sum_ {k = 1} ^ {K} \mathbb {E} \left[ \mathbb {1} \left(N _ {i} (k) = 1 + N _ {i} (k - 1) \text { and } N _ {i} (k) \leq H _ {i}\right) \mid \neg \mathcal {E} _ {k} \right] \\ \leq \max _ {i \in [ n ]} \left\{B \mu_ {i} - \ell (\bar {\boldsymbol {a}}) \right\} \cdot \sum_ {k = 1} ^ {K} \mathbb {P} \left(\mathcal {E} _ {k}\right) + \sum_ {i = 1} ^ {n} \left(B \mu_ {i} - \ell (\bar {\boldsymbol {a}})\right) H _ {i} \\ \leq 2 n \max _ {i \in [ n ]} \left\{B \mu_ {i} - \ell (\bar {\boldsymbol {a}}) \right\} + \sum_ {i = 1} ^ {n} \frac {185 \eta^ {2} \mu_ {i} ^ {2} (\bar {a} _ {i} + k _ {i}) (B \mu_ {i} - \ell (\bar {\boldsymbol {a}})) \ln (2 K ^ {2})}{\left(\left(\bar {a} _ {i} + k _ {i}\right) \mu_ {i} - \ell (\bar {\boldsymbol {a}})\right) ^ {2}}. \\ \end{array}
+$$
+
+# D.3. Proof of Theorem 4.2
+
+Let us first restate the theorem.
+
+Theorem 4.2. Suppose Assumption 3.1 holds and let $\eta \coloneqq \max_{i\in [n]}\alpha_i / \mu_i$ , where $\alpha_{i} = \| X_{i} - \mu_{i}\|_{\psi_{1}}$ . Then, the total expected computation time after $K$ rounds, using the allocation prescribed by ATA with inputs $(B,\alpha)$ satisfies
+
+$$
+\mathcal {C} _ {K} \leq (1 + 4 \eta \ln (B)) \mathcal {C} _ {K} ^ {*} + \mathcal {O} (\ln K).
+$$
+
Proof. Let $\mathbb{E}_k$ denote the conditional expectation given the variables observed up to and including round $k$ , and let $\mathcal{F}_k$ be the corresponding filtration. Using the tower rule, we have
+
+$$
+\sum_ {k = 1} ^ {K} \mathbb {E} \left[ C (\boldsymbol {a} _ {k}) \right] = \mathbb {E} \left[ \sum_ {k = 1} ^ {K} \mathbb {E} _ {k - 1} [ C (\boldsymbol {a} _ {k}) ] \right].
+$$
+
Consider round $k\in [K]$ ; let us upper bound $\mathbb{E}_{k - 1}[C(\pmb {a}_k)]$ using $\mathbb{E}_{k - 1}[\ell (\pmb {a}_k)]$ . Recall that $\pmb {a}_k$ is $\mathcal{F}_{k - 1}$ -measurable. Let $Y_{i} = \sum_{u = 1}^{a_{i,k}}X_{i,k}^{(u)}$ ; since $Y_{i}$ is the sum of $a_{i,k}$ i.i.d. samples, we have $\mathbb{E}_{k - 1}[Y_i] = a_{i,k}\mu_i$ and $\| Y_{i} - a_{i,k}\mu_{i}\|_{\psi_{1}}\leq a_{i,k}\| X_{i} - \mu_{i}\|_{\psi_{1}}$ . Thus, using Lemma E.4, we get
+
+$$
\begin{array}{l} \mathbb {E} _ {k - 1} \left[ C (\boldsymbol {a} _ {k}) \right] = \mathbb {E} _ {k - 1} \left[ \max _ {i \in \operatorname {supp} (\boldsymbol {a} _ {k})} \left\{\sum_ {u = 1} ^ {a _ {i, k}} X _ {i, k} ^ {(u)} \right\} \right] \\ \leq \max _ {i \in \operatorname {supp} \left(\boldsymbol {a} _ {k}\right)} \left\{a _ {i, k} \mu_ {i} \right\} + 4 \max _ {i \in \operatorname {supp} \left(\boldsymbol {a} _ {k}\right)} \left\{a _ {i, k} \alpha_ {i} \right\} \cdot \ln (B) \\ \leq \max _ {i \in \operatorname {supp} \left(\boldsymbol {a} _ {k}\right)} \left\{a _ {i, k} \mu_ {i} \right\} + 4 \max _ {i \in \operatorname {supp} \left(\boldsymbol {a} _ {k}\right)} \left\{a _ {i, k} \eta \mu_ {i} \right\} \cdot \ln (B) \\ = (1 + 4 \eta \ln (B)) \max _ {i \in \operatorname {supp} \left(\boldsymbol {a} _ {k}\right)} \left\{a _ {i, k} \mu_ {i} \right\}. \\ \end{array}
+$$
+
+Moreover, using Jensen's inequality, we have
+
+$$
\max _ {i \in [ n ]} \left\{a _ {i} ^ {*} \mu_ {i} \right\} \leq \mathbb {E} \left[ \max _ {i \in [ n ]} \left\{\sum_ {u = 1} ^ {a _ {i} ^ {*}} X _ {i, k} ^ {(u)} \right\} \right] = \mathbb {E} \left[ C \left(\boldsymbol {a} ^ {*}\right) \right].
+$$
+
+Using the last two bounds with the result of Theorem 6.1, we get the result.
+
+# E. Technical Lemmas
+
The lemma below gives a concentration bound on sub-exponential variables. Note that this result can be inferred from Proposition 7 in Maurer & Pontil (2021); although applying that result directly requires assuming the variables are positive, this assumption is not needed in their proof in the one-dimensional case. For completeness, we present the full proof below.
+
+Lemma E.1. Let $Y_{1},\ldots ,Y_{n}$ be i.i.d random variables with $\mathbb{E}[Y_1] = 0$ and $\alpha = \| Y_1\|_{\psi_1} < + \infty$ . Then for any $\delta \in (0,1)$ , we have with probability at least $1 - \delta$
+
+$$
+\left| \frac {1}{n} \sum_ {i = 1} ^ {n} Y _ {i} \right| \leq 2 \alpha \left(\sqrt {\frac {\ln (2 / \delta)}{n}} + \frac {\ln (2 / \delta)}{2 n}\right).
+$$
+
+Proof. Let $v \coloneqq 2n\alpha^2$ . We have using Lemma E.5 that
+
+$$
+\sum_ {i = 1} ^ {n} \mathbb {E} \left[ Y _ {i} ^ {2} \right] \leq 2 n \alpha^ {2} \quad \mathrm {a n d} \quad \sum_ {i = 1} ^ {n} \mathbb {E} \left[ (Y _ {i}) _ {+} ^ {q} \right] \leq \frac {q !}{2} v \alpha^ {q - 2}.
+$$
+
+Therefore, using Bernstein concentration inequality (Proposition E.2) we obtain that
+
+$$
+\mathbb {P} \left(\left| \sum_ {i = 1} ^ {n} Y _ {i} \right| \geq 2 \alpha \sqrt {n t} + \alpha t\right) \leq 2 \exp (- t).
+$$
+
+Choosing $t = \ln (2 / \delta)$ , we obtain the result.
+
+Proposition E.2 (Theorem 2.10 in Boucheron et al. (2005)). Let $X_{1},\ldots ,X_{n}$ be independent real-valued random variables. Assume there exist positive numbers $v$ and $c$ such that
+
+$$
+\sum_{i=1}^{n} \mathbb{E}\left[X_{i}^{2}\right] \leq v \qquad \text{and} \qquad \sum_{i=1}^{n} \mathbb{E}\left[(X_{i})_{+}^{q}\right] \leq \frac{q!}{2} v c^{q-2} \qquad \text{for all integers } q \geq 3,
+$$
+
+where $x_{+} \coloneqq \max \{x, 0\}$ . Define the centered sum
+
+$$
+S := \sum_ {i = 1} ^ {n} \left(X _ {i} - \mathbb {E} X _ {i}\right).
+$$
+
+Then, for every $t > 0$
+
+$$
+\mathbb {P} \left(S \geq \sqrt {2 v t} + c t\right) \leq e ^ {- t}.
+$$
+
+Lemma E.3. Let $X_{1},\ldots ,X_{n}$ be i.i.d. positive random variables with mean $\mu$ and $\| X_1\|_{\psi_1} < + \infty$ . Denote $\alpha \coloneqq \| X_{1} - \mu \|_{\psi_{1}}$ and $\hat{X}_n \coloneqq \frac{1}{n}\sum_{i = 1}^n X_i$ , and let $\delta \in (0,1)$ . Define $\eta$ and $C_{n,\delta}$ by:
+
+$$
+\eta := \frac{\alpha}{\mu} \qquad \text{and} \qquad C_{n,\delta} := 2\sqrt{\frac{\ln(2/\delta)}{n}} + 2\frac{\ln(2/\delta)}{n}.
+$$
+
+Then with probability at least $1 - \delta$ we have
+
+$$
+\mu \geq \hat {X} _ {n} (1 - \eta \cdot C _ {n, \delta}) _ {+},
+$$
+
+where we use the notation $(a)_{+} = \max \{0,a\}$ . Moreover, if $\eta C_{n,\delta} \leq \frac{1}{4}$ , then with probability at least $1 - \delta$
+
+$$
+\hat {X} _ {n} (1 - \eta \cdot C _ {n, \delta}) _ {+} \leq \mu \leq \hat {X} _ {n} \left(1 + \frac {4}{3} \eta \cdot C _ {n, \delta}\right).
+$$
+
+Proof. Fix $n, \delta$ . We work on the event
+
+$$
+\mathcal {E} _ {n, \delta} = \left\{\left| \hat {X} _ {n} - \mu \right| \leq \alpha \cdot C _ {n, \delta} \right\}
+$$
+
+which holds with probability at least $1 - \delta$ by applying Lemma E.1 to the centered variables $X_{i} - \mu$ .
+
+Proof of $\mu \geq \hat{X}_n(1 - \eta \cdot C_{n,\delta})_+$ :
+
+If $\eta C_{n,\delta} \geq 1$ , we have $(1 - \eta \cdot C_{n,\delta})_+ = 0$ and the result follows from the fact that the $X_i$ are non-negative, which gives $\mu \geq 0$ .
+
+Suppose now that $\eta C_{n,\delta} < 1$ . Recall that event $\mathcal{E}_{n,\delta}$ implies that
+
+$$
+\hat {X} _ {n} \leq \mu (1 + \eta C _ {n, \delta}).
+$$
+
+Therefore, we have
+
+$$
+\frac {\hat {X} _ {n}}{1 + \eta C _ {n , \delta}} \leq \mu .
+$$
+
+Using $1 - \eta C_{n,\delta} \leq \frac{1}{1 + \eta C_{n,\delta}}$ with the bound above, we obtain
+
+$$
+\hat {X} _ {n} (1 - \eta C _ {n, \delta}) _ {+} = \hat {X} _ {n} (1 - \eta C _ {n, \delta}) \leq \frac {\hat {X} _ {n}}{1 + \eta \cdot C _ {n , \delta}} \leq \mu .
+$$
+
+Proof of $\mu \leq \hat{X}_n\left(1 + \frac{4}{3}\eta \cdot C_{n,\delta}\right)$ : Recall that event $\mathcal{E}_{n,\delta}$ gives
+
+$$
+\hat {X} _ {n} \geq \mu - \alpha C _ {n, \delta} = \mu \left(1 - \eta C _ {n, \delta}\right).
+$$
+
+Suppose that $\eta C_{n,\delta} \leq \frac{1}{4}$ . We therefore have
+
+$$
+\mu \leq \frac {\hat {X} _ {n}}{1 - \eta C _ {n , \delta}}.
+$$
+
+Next, we use the fact that for any $x \in [0, 1/4]$ , we have
+
+$$
+\frac {1}{1 - x} \leq 1 + \frac {4}{3} x,
+$$
+
+which gives
+
+$$
+\mu \leq \hat {X} _ {n} \left(1 + \frac {4}{3} \eta \cdot C _ {n, \delta}\right),
+$$
+
+when $\eta C_{n,\delta} \leq \frac{1}{4}$ .
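
Lemma E.3 can be checked the same way. The sketch below (an illustration, not from the paper) uses Exponential(1) variables, for which $\mu = 1$; the value $\alpha = 1.6$ is a numerical over-estimate of $\|X_1 - \mu\|_{\psi_1}$ for this distribution, which only loosens the bounds:

```python
import math
import random

random.seed(1)
mu, alpha = 1.0, 1.6     # Exponential(1): mean 1; 1.6 over-estimates ||X - mu||_{psi_1}
n, delta, trials = 4000, 0.1, 300
eta = alpha / mu
C = 2 * math.sqrt(math.log(2 / delta) / n) + 2 * math.log(2 / delta) / n
assert eta * C <= 0.25   # the regime where the two-sided bound of Lemma E.3 applies

hits = 0
for _ in range(trials):
    xbar = sum(random.expovariate(1.0) for _ in range(n)) / n
    # The sandwich of Lemma E.3 around the true mean mu.
    lower = xbar * max(0.0, 1 - eta * C)
    upper = xbar * (1 + 4 / 3 * eta * C)
    hits += (lower <= mu <= upper)

coverage = hits / trials
print(f"eta*C = {eta * C:.3f}, coverage = {coverage:.3f}")
assert coverage >= 1 - delta
```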
+
+Lemma E.4. Let $X_{1},\ldots ,X_{n}$ be a sequence of independent random variables with finite Orlicz norm $\| X_i\|_{\psi_1} < + \infty$ and let $\mathbb{E}[X_i] = \mu_i$ . Then we have
+
+$$
+\mathbb {E} \left[ \max _ {i \in [ n ]} X _ {i} \right] \leq \max _ {i \in [ n ]} \mu_ {i} + 4 \alpha \ln (n) ,
+$$
+
+where $\alpha = \max_{i\in [n]}\| X_i - \mu_i\|_{\psi_1}$ .
+
+Proof. If $n = 1$ the bound is immediate, so suppose that $n \geq 2$ . Let $Y_{i} \coloneqq X_{i} - \mu_{i}$ ; then $\alpha = \max_{i \in [n]} \|Y_{i}\|_{\psi_{1}}$ . For $\lambda \in (0,1/\alpha)$ , we have
+
+$$
+\begin{array}{l} \max _ {i \in [ n ]} Y _ {i} = \frac {1}{\lambda} \ln \left(\exp \left(\lambda \max _ {i \in [ n ]} Y _ {i}\right)\right) \\ \leq \frac {1}{\lambda} \ln \left(\sum_ {i = 1} ^ {n} \exp \left(\lambda Y _ {i}\right)\right). \\ \end{array}
+$$
+
+Taking the expectation, then using Jensen's inequality and Lemma E.5, we have
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \max _ {i \in [ n ]} Y _ {i} \right] \leq \frac {1}{\lambda} \ln \left(\sum_ {i = 1} ^ {n} \mathbb {E} \left[ \exp \left(\lambda Y _ {i}\right) \right]\right) \\ \leq \frac {1}{\lambda} \ln \left(\frac {n}{1 - \lambda \alpha}\right). \\ \end{array}
+$$
+
+We choose $\lambda = \frac{1 - 1 / n}{\alpha}$ , which gives
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \max _ {i \in [ n ]} Y _ {i} \right] \leq \frac {\alpha}{1 - \frac {1}{n}} \ln (n ^ {2}) \\ = 2 \alpha \frac {n}{n - 1} \ln (n) \\ \leq 4 \alpha \ln (n). \tag {26} \\ \end{array}
+$$
+
+Let $i^{*} \in \arg \max_{i \in [n]} X_{i}$ ; then
+
+$$
+\begin{array}{l} \max _ {i \in [ n ]} X _ {i} - \max _ {i \in [ n ]} \mu_ {i} = X _ {i ^ {*}} - \max _ {i \in [ n ]} \mu_ {i} \\ \leq X _ {i ^ {*}} - \mu_ {i ^ {*}} \leq \max _ {i \in [ n ]} \{X _ {i} - \mu_ {i} \} = \max _ {i \in [ n ]} Y _ {i}. \\ \end{array}
+$$
+
+Combining the last bound with (26) we obtain
+
+$$
+\mathbb {E} \left[ \max _ {i \in [ n ]} X _ {i} \right] \leq \max _ {i \in [ n ]} \mu_ {i} + 4 \alpha \ln (n).
+$$
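
A small Monte Carlo illustration of Lemma E.4 (ours, under the assumption that $\alpha = 1.6$ over-estimates the $\psi_1$-norm of a centered Exponential(1) variable): for $n$ i.i.d. Exponential(1) variables, $\mathbb{E}[\max_i X_i]$ equals the $n$-th harmonic number, which sits comfortably below the bound $\max_i \mu_i + 4\alpha\ln(n)$:

```python
import math
import random

random.seed(2)
n, alpha = 50, 1.6       # 1.6 over-estimates ||X_i - mu_i||_{psi_1} for Exponential(1)
trials = 20000

# Monte Carlo estimate of E[max_i X_i] for n i.i.d. Exponential(1) variables.
emp_max = sum(max(random.expovariate(1.0) for _ in range(n)) for _ in range(trials)) / trials

# Exact value for comparison: the n-th harmonic number, ~ ln(n) + 0.577.
exact = sum(1 / k for k in range(1, n + 1))

bound = 1.0 + 4 * alpha * math.log(n)   # max_i mu_i + 4 * alpha * ln(n)
print(f"E[max] ~ {emp_max:.2f} (exact {exact:.2f}), Lemma E.4 bound = {bound:.2f}")
assert emp_max <= bound
```

The logarithmic growth in $n$ is the important feature of the lemma; the constant $4\alpha$ is not tight, as the comparison with the exact value shows.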
+
+The lemma below is based on a standard argument, which we include here for completeness.
+
+Lemma E.5. Let $Y$ be a random variable such that $\alpha = \| Y\|_{\psi_1} < + \infty$ . Then we have for any $\lambda \in \left(-\frac{1}{\alpha},\frac{1}{\alpha}\right)$
+
+$$
+\mathbb {E} \left[ \exp (\lambda Y) \right] \leq \frac {1}{1 - | \lambda | \alpha}.
+$$
+
+Moreover, we have for any $q \geq 3$
+
+$$
+\mathbb{E}\left[Y^{2}\right] \leq 2\alpha^{2} \qquad \text{and} \qquad \mathbb{E}\left[(Y)_{+}^{q}\right] \leq \frac{q!}{2}\cdot\left(2\alpha^{2}\right)\cdot\alpha^{q-2}.
+$$
+
+Proof. Let $Z = |Y| / \alpha$ . First observe that we have
+
+$$
+\sum_ {k \geq 0} \frac {\mathbb {E} [ Z ^ {k} ]}{k !} = \mathbb {E} [ \exp (Z) ] = \mathbb {E} [ \exp (| Y | / \alpha) ] \leq 2,
+$$
+
+so,
+
+$$
+\sum_ {k \geq 1} \frac {\mathbb {E} \left[ Z ^ {k} \right]}{k !} \leq 1.
+$$
+
+This implies $\mathbb{E}\left[\left|Y\right|^k\right]\leq k!\alpha^k$ for all $k\geq 1$ . Using this bound, we estimate the moment generating function
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \exp (\lambda Y) \right] \leq \mathbb {E} \left[ \exp \left(| \lambda | | Y |\right) \right] \\ = \sum_ {k \geq 0} \frac {\left| \lambda \right| ^ {k} \mathbb {E} \left[ \left| Y \right| ^ {k} \right]}{k !} \\ \leq 1 + \sum_ {k \geq 1} | \lambda | ^ {k} \alpha^ {k} \\ = \frac {1}{1 - | \lambda | \alpha}. \\ \end{array}
+$$
+
+The remaining bounds follow from $\mathbb{E}\left[|Y|^{k}\right] \leq k!\,\alpha^{k}$ : indeed, $\mathbb{E}[Y^{2}] \leq 2\alpha^{2}$ , and for $q \geq 3$ , $\mathbb{E}\left[(Y)_{+}^{q}\right] \leq \mathbb{E}\left[|Y|^{q}\right] \leq q!\,\alpha^{q} = \frac{q!}{2}\cdot\left(2\alpha^{2}\right)\cdot\alpha^{q-2}$ .
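
For a concrete instance of Lemma E.5, the moment generating function of a centered Exponential(1) variable is available in closed form, so the bound can be checked directly (again taking $\alpha = 1.6$, a numerical over-estimate of the $\psi_1$-norm that only loosens the bound):

```python
import math

# For X ~ Exponential(1) and Y = X - 1, the MGF is E[exp(l*Y)] = exp(-l)/(1 - l), l < 1.
# Lemma E.5 predicts E[exp(l*Y)] <= 1/(1 - |l|*alpha) whenever |l| < 1/alpha.
alpha = 1.6   # assumed over-estimate of ||Y||_{psi_1} for this distribution

for l in [0.05 * k for k in range(1, 13)]:        # l in (0, 0.6], inside (0, 1/alpha)
    mgf = math.exp(-l) / (1 - l)
    assert mgf <= 1 / (1 - l * alpha), (l, mgf)
print("Lemma E.5 bound holds at all tested lambda values")
```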
+
+
\ No newline at end of file
diff --git a/ataadaptivetaskallocationforefficientresourcemanagementindistributedmachinelearning/images.zip b/ataadaptivetaskallocationforefficientresourcemanagementindistributedmachinelearning/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..8d4980c635686f984792404600af6ee217567f5e
--- /dev/null
+++ b/ataadaptivetaskallocationforefficientresourcemanagementindistributedmachinelearning/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:402016f0bfc7a672c4dc8ff2625b7305daeb761939ac46836070cf5e7e7b78d5
+size 1634360
diff --git a/ataadaptivetaskallocationforefficientresourcemanagementindistributedmachinelearning/layout.json b/ataadaptivetaskallocationforefficientresourcemanagementindistributedmachinelearning/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..6aef2aec00de04780de0f534c23609b04b7cb93c
--- /dev/null
+++ b/ataadaptivetaskallocationforefficientresourcemanagementindistributedmachinelearning/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d7d0bfe0827301cb7b1371bf538307552ab75d53fec4c04bb181cde4247c66c6
+size 1767488
diff --git a/ataleoftwostructuresdollmscapturethefractalcomplexityoflanguage/cb04aaed-1b1a-494f-8627-8f62e85cfc31_content_list.json b/ataleoftwostructuresdollmscapturethefractalcomplexityoflanguage/cb04aaed-1b1a-494f-8627-8f62e85cfc31_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..ab0627d6fe52676aae9cad5dd6b9e2a724da27d0
--- /dev/null
+++ b/ataleoftwostructuresdollmscapturethefractalcomplexityoflanguage/cb04aaed-1b1a-494f-8627-8f62e85cfc31_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:04797d98a00d3e0f2d50baf6e12ec4842577e873b3f9c6f5408f303135c11319
+size 321111
diff --git a/ataleoftwostructuresdollmscapturethefractalcomplexityoflanguage/cb04aaed-1b1a-494f-8627-8f62e85cfc31_model.json b/ataleoftwostructuresdollmscapturethefractalcomplexityoflanguage/cb04aaed-1b1a-494f-8627-8f62e85cfc31_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..4359ab5e22f2d24089c3b4cba38d2507575545f6
--- /dev/null
+++ b/ataleoftwostructuresdollmscapturethefractalcomplexityoflanguage/cb04aaed-1b1a-494f-8627-8f62e85cfc31_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:52bda860d862ae3493df8d7b6a7e2d30753e61b3ef36d519822f36f65b3fbf65
+size 376885
diff --git a/ataleoftwostructuresdollmscapturethefractalcomplexityoflanguage/cb04aaed-1b1a-494f-8627-8f62e85cfc31_origin.pdf b/ataleoftwostructuresdollmscapturethefractalcomplexityoflanguage/cb04aaed-1b1a-494f-8627-8f62e85cfc31_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..5efe644e1dbd5328af4b8c4753059a9c06c205b3
--- /dev/null
+++ b/ataleoftwostructuresdollmscapturethefractalcomplexityoflanguage/cb04aaed-1b1a-494f-8627-8f62e85cfc31_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fbfeb87ae55bdc36a02816868152b76ed943339b762fe468eb8730674d308651
+size 3331692
diff --git a/ataleoftwostructuresdollmscapturethefractalcomplexityoflanguage/full.md b/ataleoftwostructuresdollmscapturethefractalcomplexityoflanguage/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..9b9be95310103ef09c897c9657f04159fc8fd79a
--- /dev/null
+++ b/ataleoftwostructuresdollmscapturethefractalcomplexityoflanguage/full.md
@@ -0,0 +1,1462 @@
+# A Tale of Two Structures: Do LLMs Capture the Fractal Complexity of Language?
+
+Ibrahim Alabdulmohsin1 Andreas Steiner1
+
+# Abstract
+
+Language exhibits a fractal structure in its information-theoretic complexity (i.e. bits per token), with self-similarity across scales and long-range dependence (LRD). In this work, we investigate whether large language models (LLMs) can replicate such fractal characteristics and identify conditions—such as temperature setting and prompting method—under which they may fail. Moreover, we find that the fractal parameters observed in natural language are contained within a narrow range, whereas those of LLMs' output vary widely, suggesting that fractal parameters might prove helpful in detecting a non-trivial portion of LLM-generated texts. Notably, these findings, and many others reported in this work, are robust to the choice of the architecture; e.g. Gemini 1.0 Pro, Mistral-7B and Gemma-2B. We also release a dataset comprising over 240,000 articles generated by various LLMs (both pretrained and instruction-tuned) with different decoding temperatures and prompting methods, along with their corresponding human-generated texts. We hope that this work highlights the complex interplay between fractal properties, prompting, and statistical mimicry in LLMs, offering insights for generating, evaluating and detecting synthetic texts.
+
+# 1. Introduction
+
+The information-theoretic complexity of language (i.e., its bits or "surprise") has been shown to exhibit both self-similarity and long-range dependence (LRD) (Alabdulmohsin et al., 2024). In simple terms, a stochastic process is considered self-similar if its statistical properties remain consistent across different scales, regardless of the level of magnification applied. A well-known example of such behavior is found in Ethernet traffic (Crovella & Bestavros, 1995; Leland et al., 1994; Paxson & Floyd, 1995; Willinger et al., 1997), where self-similarity manifests as burstiness across all time scales, thereby impacting the design of network device buffers (Leland & Wilson, 1991). In language, such self-similarity is attributed to its recursive structure (Altmann et al., 2012). On the other hand, a stochastic process is called long-range dependent (LRD) if its future is influenced by the distant past, with no particular characteristic context length.
+
+Self-similarity and long-range dependence (LRD) can be quantified using the Hölder and Hurst exponents, respectively. The Hölder exponent, which we denote by S for self-similarity, characterizes the rate of decay in the autocorrelation function, with smaller values of S indicating a more significant self-similar structure (heavier tail) (Watkins, 2019). By contrast, larger values of the Hurst exponent $\mathrm{H} \gg 0.5$ indicate more dependence across time (Hurst, 1951). We refer the reader to Appendix B for the exact definitions of these quantities. A natural question that arises, next, is: How different are S and H in LLM-generated texts from natural language? In this work, we investigate this question in depth, aiming to identify conditions—such as temperature settings, instruction-tuning, prompting and model size—that influence an LLM's ability to replicate such fractal characteristics.
+
+Before doing that, however, let us consider some arguments for why LLMs may or may not be capable of replicating the fractal structure of language. One argument for why they should be capable of doing so lies in the chain rule of probability. LLMs are reasonably calibrated at the token level (Kadavath et al., 2022), implying that auto-regressive decoding should theoretically capture the structure of natural language as long as token-level probability scores remain well-calibrated. By the chain rule, each subsequent token's probability is conditioned on the prior tokens, and this process should ideally reflect the self-similarity and long-range dependence inherent in language.
+
+However, this idealized view may not hold in practice. A critical issue can arise, for example, from the mismatch between how LLMs are trained and how they are used at inference, as pointed out by (Bachmann & Nagarajan, 2024).
+
+
+Figure 1. A causal model we consider, in which a latent "context" generates a prefix and both produce a suffix.
+
+During training, LLMs use teacher-forcing, where the correct previous tokens are always provided. This ensures that errors do not accumulate during training, but during inference, models must rely on their own predictions, leading potentially to compounding errors that can distort the fractal structure. Formally speaking, whereas LLMs can be well-calibrated at the next token level, their ability to accurately predict the distribution over longer sequences of tokens might degrade during inference.
+
+Another potential challenge lies in how language is generated by humans. Humans typically generate texts by first conceptualizing an underlying "context" and then constructing sentences based on it, rather than improvising one token at a time with no regard to the overall intent. This is captured by the causal model in Figure 1, where a latent "context" generates a "prefix" (beginning of text), and, in conjunction with this prefix, both produce a "suffix" (continuation).
+
+Formally, under this hypothetical causal model, prompting corresponds to an interventional (or "do") query. Let $\mathcal{V}$ be a finite vocabulary of tokens, where texts correspond to finite sequences $x \in \mathcal{V}^N$ . Now, divide $x$ into a prefix $x_{0:n-1}$ and a suffix $x_{n:N}$ . Assuming for simplicity that the set of possible latent contexts is finite, one classical result in causal inference states that the "interventional" distribution is given by (see Equation 4.5 in (Neal, 2020)):
+
+$$
+p(\text{suf} \mid \mathbf{do}(\text{pre})) = \sum_{c \in \mathcal{C}} p(\mathbf{c} = c) \cdot p(\text{suf} \mid \text{pre}, \mathbf{c} = c). \tag{1}
+$$
+
+A language model, by contrast, learns the "conditional" distribution, which by marginalization is:
+
+$$
+p(\text{suf} \mid \text{pre}) = \sum_{c \in \mathcal{C}} p(\mathbf{c} = c \mid \text{pre}) \cdot p(\text{suf} \mid \text{pre}, \mathbf{c} = c). \tag{2}
+$$
+
+We provide an example that illustrates such differences in Appendix D. Under this hypothesis, LLMs may struggle to fully replicate the fractal nature of human language because the conditional distribution uses $p(\mathbf{c} = c \mid \text{prefix})$ instead of the marginal $p(\mathbf{c} = c)$ , which should be used if prompting corresponds to an intervention. To account for this, we investigate the impact of availing various amounts of contextual information in the prompt. These prompting strategies range from minimal cues (e.g. a few keywords) to detailed prompts (e.g. summaries or ordered excerpts). Interestingly, increasing the information density in the prompt does not always improve fractal characteristics, as shown in Figure 7. In fact, the relationship for self-similarity seems to exhibit a "double descent", where providing a summary in the prompt generates texts that are less similar to natural language than providing either an unordered set of keywords (less information) or an ordered set of excerpts (more).
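
To make the gap between the interventional query (1) and the conditional query (2) concrete, the toy computation below (all numbers invented for illustration) uses two equally likely latent contexts and a prefix that is strongly informative about the context; the two queries then assign visibly different probabilities to the same suffix:

```python
# Toy version of Eqs. (1)-(2): two latent contexts, one prefix event, one suffix event.
p_c = {"news": 0.5, "fiction": 0.5}             # marginal p(c)
p_pre_given_c = {"news": 0.9, "fiction": 0.2}   # p(pre | c)
p_suf_given = {"news": 0.8, "fiction": 0.3}     # p(suf | pre, c)

# Posterior p(c | pre) by Bayes' rule.
z = sum(p_c[c] * p_pre_given_c[c] for c in p_c)
p_c_given_pre = {c: p_c[c] * p_pre_given_c[c] / z for c in p_c}

# Eq. (1): interventional query mixes contexts with the MARGINAL p(c).
p_do = sum(p_c[c] * p_suf_given[c] for c in p_c)
# Eq. (2): conditional query mixes contexts with the POSTERIOR p(c | pre).
p_cond = sum(p_c_given_pre[c] * p_suf_given[c] for c in p_c)

print(f"p(suf | do(pre)) = {p_do:.3f}, p(suf | pre) = {p_cond:.3f}")
# prints: p(suf | do(pre)) = 0.550, p(suf | pre) = 0.709
assert abs(p_do - p_cond) > 0.05   # the two queries genuinely differ
```

The conditioning query shifts weight toward whichever context best explains the prefix, whereas the do-query keeps the marginal mixture, which is the distinction the causal model in Figure 1 is meant to capture.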
+
+Do fractal parameters in LLM-generated texts differ substantially from natural language? Our findings suggest that they partially do. Specifically, the fractal parameters of natural language are contained within a narrow range, whereas those of LLMs' output vary widely, as demonstrated for the Hölder exponent S in Figure 10. Large values of S indicate a less self-similar structure (i.e. lack of rich details), so we expect LLMs to occasionally fail to produce texts with small values of S. This is indeed what we observe.
+
+To conduct our study, we build a dataset comprising over 240,000 LLM-generated articles. These differ by the model that generated them, the contextual information provided in the prompt, the decoding temperature, and the data domain (e.g. science, news, etc). To facilitate research in areas such as detecting LLM-generated content, we make this dataset public. The data card is provided in Appendix E and samples from the data are shown in Appendix G. In addition, we show that our main conclusions continue to hold using the RAID dataset (Dugan et al., 2024), which contains texts generated by 11 models (e.g. ChatGPT, Cohere, Llama 2, etc) in domains such as Reddit and reviews.
+
+# Statement of Contribution. In summary, we:
+
+1. provide a comprehensive analysis of how various factors—such as decoding temperatures, instruction-tuning and model size—affect the ability of LLMs to replicate the fractal properties of natural language.
+2. investigate how prompting affects the fractal structure of texts, showing evidence for a non-monotone behavior. We demonstrate that the range of fractal parameters in natural language is much narrower than that of LLM-generated texts, which shows that fractal parameters might prove useful in identifying some (but not all) synthetic texts. We also connect the differences in fractal parameters to the quality of the texts.
+3. show these results hold across a variety of model architectures, demonstrating the generality of our findings and making them relevant for diverse LLM families.
+4. release a dataset containing over 240,000 articles generated by various LLMs (both pretrained and instruction-tuned) across various settings (e.g. data domain and temperature).
+
+# 2. Related Works
+
+Several studies have looked into the statistical properties of LLM-generated texts. For instance, (Guo et al., 2023) analyzed ChatGPT's responses in comparison to human-generated text and found that ChatGPT-produced outputs tend to have lower log-perplexity scores. Based on these findings, the authors developed detection systems for identifying LLM-generated content, concluding that short texts are more challenging to detect than longer documents. Similarly, Tulchinskii et al. (2023) propose using the intrinsic dimension of the data manifold as a metric that distinguishes natural language from LLM-generated texts.
+
+The observation that LLM-generated texts exhibit lower log-perplexity scores has been utilized by several other works for detecting such content, including (Solaiman et al., 2019; Gehrmann et al., 2019; Ippolito et al., 2020; Vasilatos et al., 2023) and (Yang et al., 2023), among others. This raises the question of which models are most suitable for evaluating text perplexity. In (Mireshghallah et al., 2024), the authors study this issue and suggest that smaller models are more effective! Additionally, (Hans et al., 2024) proposed normalizing the average log-perplexity score using a crossPPL metric calculated across two different models before applying a global detection threshold.
+
+However, such approaches, which are based on $1^{\mathrm{st}}$ -order statistics of log-perplexity scores, are not always effective (Hans et al., 2024). For instance, methods, such as DetectGPT, which incorporate $2^{\mathrm{nd}}$ -order information by estimating the curvature of the loss (Mitchell et al., 2023), have been found to yield better results.
+
+Our work contributes to this line of research by focusing primarily on fractal parameters. As argued in (Meister & Cotterell, 2021), the evaluation of LLMs should go beyond log-perplexity and also consider how well LLMs capture other "statistical tendencies" observed in natural language. Our study has a similar goal. We conduct a comprehensive quantitative analysis involving a range of model architectures, decoding temperatures, and prompting methods, and we explore the differences between pretrained and instruction-tuned models as well, among other considerations.
+
+Although our study may have implications for detecting LLM-generated content, detection is not the primary focus of this work. It is important to acknowledge, however, that detecting LLM-generated content is critical and has been the subject of several noteworthy studies. These include the perplexity-based detection methods mentioned above, as well as supervised classification approaches (Verma et al., 2024; Pu et al., 2022; Jawahar et al., 2020; Ghosal et al., 2023; Tang et al., 2024; Dhaini et al., 2023; Guo et al., 2023), with fine-tuning of pretrained models being especially effective (Zellers et al., 2020; Solaiman et al., 2019; Fagni et al., 2021). While detecting such content has inherent limitations (Varshney et al., 2020; Sadasivan et al., 2024)—as LLMs are trained to model the full joint distribution of human language—continued progress in this area is essential for mitigating societal risks. These include, but are not limited to, the spread of misinformation (Zellers et al., 2020), fake online reviews (Yao et al., 2017), potentially harmful medical advice (Guo et al., 2023), academic dishonesty (Susnjak, 2022), and extremist propaganda (McGuffie & Newhouse, 2020). The importance of these efforts is underscored by incidents where LLM-generated news articles containing factual inaccuracies were published with minimal human oversight (Christian, 2023). We hope our work will help along these directions.
+
+In this work, we examine the impact of various factors, including the model size. Prior literature has consistently shown that larger models tend to perform better, with the benefits of scaling being predictable empirically (Hestness et al., 2017; Kaplan et al., 2020; Alabdulmohsin et al., 2022; Zhai et al., 2022). These improvements are not limited to perplexity scores. For example, (Dou et al., 2022) found that larger models produce texts with fewer factual and coherence-related issues. Not surprisingly, we also find that bigger pretrained models yield fractal parameters that are closer to those of natural language (see Figure 3).
+
+Additionally, we investigate other factors, such as temperature settings. Previous research has demonstrated that while improved decoding methods may deceive humans, they may also introduce detectable statistical anomalies (Ippolito et al., 2020). Our work explores these effects in detail from the lens of fractals, contributing to the broader understanding of how model parameters influence LLM output.
+
+# 3. Experimental Setup and Dataset
+
+Our goal is to investigate when and how LLM-generated texts can vary substantially from natural language in their fractal structure. For that, we need to generate synthetic texts. In order to correctly identify the impact of each factor we consider in our study (e.g. temperature setting, prompting, model size), we generate the data ourselves from scratch. We follow a similar setup to the one used in (Verma et al., 2024), in which we restrict analysis to long documents or paragraphs, as opposed to short answers to questions. Similar to (Verma et al., 2024), we query a capable LLM via its API, which is always Gemini 1.0 Pro (Anil et al., 2024) in our experiments, to generate some contextual information about an article (such as keywords or a summary) before asking another model to write an article based on this contextual information. Since each article is matched with a corresponding human-generated text (ground truth), we name this dataset "Generated And Grounded Language Examples" (GAGLE). It contains over 240,000 articles1.
+
+Table 1. A summary of the contextual information used during prompting, ordered from the least informative (top) to the most (bottom). Each prompting method provides more contextual information than the one above it. Keywords and summaries were generated using Gemini 1.0 Pro as discussed in Section 3. See Appendix C for prompt templates.
+
+| Abbreviation | Description |
+| --- | --- |
+| continue (cont) | Simple continuation based on a short prefix (no prompting). |
+| chain-of-thought (cot) | Ask the model to generate an outline before generating the article, using a short prefix. |
+| short keywords (kw) | A few, unordered keywords. |
+| keywords (kw+) | Many unordered keywords. |
+| summary (su) | A summary of the article. |
+| summary + keywords (su+) | Both a summary and many keywords. |
+| excerpt (exc) | Ordered list of long excerpts from the original article. |
+
+The contextual information we use was chosen such that it can be ordered from the least informative to the most, as shown in Table 1. The datasets we use are from five domains: (1) WIKIPEDIA (Wikipedia, 2024), (2) BIG-PATENT, consisting of over one million records of U.S. patents (Sharma et al., 2019), (3) NEWSROOM, containing over one million news articles (Grusky et al., 2018), (4) SCIENTIFIC, a collection of research papers obtained from ArXiv and PubMed repositories (Cohan et al., 2018), and (5) BILLSUM, containing US Congressional and California state bills (Kornilova & Eidelman, 2019). We only use, at most, 1,000 articles from each domain.
+
+In our experiments, we use pretrained models (with simple continuation only) and instruction-tuned models with various prompting strategies as discussed earlier. During text generation, we experiment with three decoding temperatures: $\beta = 0$ (greedy decoding), $\beta = 0.5$ , and $\beta = 1$ (pretraining temperature). The three models we use are Gemini 1.0 Pro (Anil et al., 2024), Mistral-7B (Jiang et al., 2023), and Gemma-2B (Mesnard et al., 2024).
+
+Once the texts are generated, we score them using pretrained models. One goal is to identify if our findings remain robust across different scoring models. As mentioned earlier, some previous works suggest that smaller models might be better for scoring and detecting LLM-generated texts (Mireshghallah et al., 2024) while other works suggest that using the same model for both generation and scoring yields better results (Fagni et al., 2021; Mitchell et al., 2023).
+
+Finally, once all the log-perplexity scores are calculated, we compute fractal parameters. Because S and H are exponents of power laws, sufficiently long documents are required. Hence, we encourage the model to generate long documents in the prompt (see prompting templates in Appendix C). We drop the first 64 tokens to remove any warm-up effects, and ignore documents that are less than 400 tokens in length. When a document is ignored, we also ignore the corresponding ground-truth document, to remove this confounding effect. Then, we clip all documents (both human- and LLM-generated) to 400 tokens to have equal lengths. We estimate fractal parameters using the scales (time gaps) $\tau \in \{8,16,32,48,64,96,128,160,192,256,320\}$ with $\epsilon = 10^{-2}$ ; see (Alabdulmohsin et al., 2024) for details on how to calculate them. We also use bootstrapping (Efron & Tibshirani, 1994) to estimate confidence intervals by subsampling with replacement 10 independent samples. Figure 2 illustrates the quality of fit for the setting in Appendix G where samples of documents are provided.
+
+Disclaimer. Our estimates of S and H differ from those in (Alabdulmohsin et al., 2024) for two reasons. Because LLM-generated texts are short (typically about 500 tokens), we restrict the range of the scale term $\tau$ to at most 320 tokens. Also, we use a slightly larger value of $\epsilon$ because we found that smaller values in short documents lead to high variance. Both imply that our estimates are less accurate. Nonetheless, our goal is to use S and H as statistical probes to compare natural texts with LLM-generated ones, so we use the same hyperparameters for both types of documents.
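
For readers who want a feel for how such exponents are estimated, the sketch below implements a generic aggregated-standard-deviation Hurst estimator, fitting the standard deviation of block sums against $\tau^{\mathrm{H}}$ over a grid of scales $\tau$. This is only a simplified stand-in for the estimators of Alabdulmohsin et al. (2024), shown here recovering $\mathrm{H} \approx 0.5$ for i.i.d. increments:

```python
import math
import random

def hurst_aggregate(increments, scales):
    """Crude Hurst estimate: fit std(sum over blocks of size tau) ~ tau^H."""
    xs, ys = [], []
    for tau in scales:
        # Aggregate the series into non-overlapping blocks of length tau.
        blocks = [sum(increments[i:i + tau])
                  for i in range(0, len(increments) - tau + 1, tau)]
        m = sum(blocks) / len(blocks)
        std = math.sqrt(sum((b - m) ** 2 for b in blocks) / len(blocks))
        xs.append(math.log(tau))
        ys.append(math.log(std))
    # Least-squares slope of log(std) against log(tau) gives H.
    k = len(xs)
    mx, my = sum(xs) / k, sum(ys) / k
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

random.seed(3)
# For i.i.d. increments (no long-range dependence) the true Hurst exponent is 0.5.
series = [random.gauss(0, 1) for _ in range(200_000)]
H = hurst_aggregate(series, scales=[8, 16, 32, 64, 128, 256])
print(f"Estimated H = {H:.2f} (expected ~0.5 for i.i.d. increments)")
assert 0.4 < H < 0.6
```

In the paper's setting, the increments would be per-token "surprises" (log-perplexity contributions) rather than Gaussian noise, and $\mathrm{H} > 0.5$ would indicate long-range dependence.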
+
+# 4. Detailed Analysis
+
+Q1. How do log-perplexity scores in LLM-generated documents differ from those in natural language? To answer this question, results are shown in Figure 4. In agreement with prior works, we observe that LLM-generated texts have lower log-perplexity scores (negative values in the $y$ -axis), but not for large pretrained models when prompted using their pretraining temperature $\beta = 1$ . In the latter setting, LLM-generated texts have a similar average log-perplexity score to natural language. We illustrate this using the GLTR tool (Gehrmann et al., 2019) in Figure 5. Gemma-2B is an exception, probably because it is much smaller than the rest of the models. Instruction-tuning, by contrast, lowers the log-perplexity of generated texts compared to natural language even at temperature $\beta = 1$ , although no prompting instructions are used in Figure 4. Similar findings are also obtained using the RAID dataset, as shown in Appendix A. Overall, this suggests that detection methods relying solely on log-perplexity scores, such as GLTR, may not be adequate for identifying contents generated by pretrained models as they become increasingly more capable in the future.
+
+
+Figure 2. Quality of power law fit for fractal parameters in both human- and LLM-generated documents, for the same setting as in Appendix G where samples of documents are provided.
+
+
+Figure 3. The $y$ -axis is either $\log \tilde{\mathrm{S}} / \mathrm{S}$ (left column) or $\log \tilde{\mathrm{H}} / \mathrm{H}$ (right column), where $\tilde{\mathrm{S}}$ is the Hölder exponent of LLM-generated texts while S is of natural language, and the same holds for $\tilde{\mathrm{H}}$ and H. The $x$ -axis shows the generating models: Gemini 1.0 Pro (denoted G-P), Mistral-7B (denoted M-7), and Gemma-2B (denoted G-2), all are pretrained models with temperature $\beta = 1$ . Subtitles indicate the model used for scoring the texts. As expected, we observe that larger models tend to replicate the fractal properties of natural language better than smaller models. In addition, LLM-generated texts have higher values of both S (less self-similarity) and H (more dependence).
+
+Q2. If large pretrained LLMs at their pretraining temperature $\beta = 1$ can replicate the 1st-order statistics of log-perplexity scores in language, do they also replicate its fractal parameters? Figure 3 summarizes the results. We observe that larger pretrained models at temperature $\beta = 1$ replicate the fractal properties of natural language better, regardless of which model is used for scoring. In addition, LLM-generated texts are systematically biased towards higher values of both S (less self-similarity) and H (more dependence) than in natural language. As we will discuss later in Q6, this means they are biased towards generating texts with lower quality than in natural language.
+Q3. What about fractal parameters in instruction-tuned models? We examine the impact of instruction tuning on fractal parameters when all texts are generated by both pretrained and instruction-tuned models using simple continuation (no prompting). We focus on simple continuation here to isolate the impact of instruction tuning alone. To recall, Figure 4 shows that instruction tuning yields texts with lower log-perplexity scores than natural language. Figure 6 shows that texts generated by instruction-tuned models have higher values of the Hurst exponent at low temperatures $\beta < 1$ , indicating more dependence over time. Self-similarity is not impacted, however. Similar findings are obtained on the RAID dataset, as shown in Appendix A.
+
+Q4. What if contextual information is provided in the prompt to instruction-tuned models? As mentioned in Section 1, we also consider the causal graph shown in Figure 1, and examine the impact of adding various contextual cues to the prompt (see Table 1). Figure 7 summarizes the results across all combinations of generating models, scoring models, and datasets. For self-similarity, we observe a double descent. While the second descent is expected, given that LLMs should eventually be capable of replicating the original article if its entire content is provided in the prompt, the fact that providing a sample of unordered keywords is better than a summary is surprising! We also observe that asking the model to generate an outline first, similar to chain-of-thought (CoT) prompting (Wei et al., 2023), yields fractal parameters that are closer to those of natural language than simple continuation. In addition, as shown in Appendix F, the generation of patent and science articles seems to be more sensitive to prompting than other domains.
+Q5. How sensitive are fractal parameters of LLM-generated articles to the contextual information provided in the prompt? To answer this question, we first calculate for each of the three architectures Gemini 1.0 Pro, Mistral-7B, and Gemma-2B the average Hölder and Hurst exponents disaggregated by prompting method, where averages are calculated over all remaining variables (e.g. decoding temperature, scoring algorithm, and dataset). Then, we plot the standard deviation calculated across the prompting methods. The results are displayed in Figure 8. As
+
+
+Figure 4. $y$ -axis is the log-ratio of log-PPL scores for both pretrained (PT) and instruction-tuned (IT) models with simple continuation, when Gemini 1.0 Pro (left), Mistral-7B (center), and Gemma-2B (right) is used to score texts. Texts generated by large pretrained models do not have a lower log-PPL than natural texts; instruction tuning and the use of small temperatures lead to that effect.
+
+
+
+
+
+
+Figure 5. Output of the GLTR tool (Gehrmann et al., 2019) on texts generated by humans (left), Mistral-7B pretrained (middle) and Mistral-7B instruction-tuned (right) at temperature $\beta = 1.0$ . Both the pretrained and instruction-tuned models are provided with a short prefix. Colors indicate perplexity scores. The output of the pretrained model looks similar to the human-generated text in terms of log-PPL scores (more orange, red, and purple tokens indicating surprise), in agreement with Figure 4.
+
+expected, larger models are less sensitive to the choice of the prompting method than smaller models.
+
+Q6. How do fractal parameters relate to the quality of output? To answer this question, we first recall that the Hölder exponent S quantifies the level of self-similarity in a stochastic process, with smaller values indicating a more self-similar structure (heavier tail); i.e. with complex, rich details at all levels of granularity. Hence, lower values of S are desirable. By contrast, the Hurst exponent H quantifies dependence over time. Values close to $\mathrm{H} \approx 0.5$ indicate no dependence (i.e. the process is random) while values close to $\mathrm{H} \approx 1.0$ indicate strong predictability (e.g. when the same text is repeated over and over again). Natural language has values close to $\mathrm{H} \approx 0.65$ . In practice, LLMs do not generate words entirely at random, so the correlation between H and model quality is negative. For this reason, H is also strongly and positively correlated with S, with a Pearson coefficient of 0.68 and a p-value $< 10^{-15}$ .
+
+In Figure 9, we plot the average quality of documents against their average log-perplexity score and fractal parameters. Here, we use Gemini Pro 1.0 to auto-rate the quality of generated texts. The prompt template and examples of responses are in Appendix C and we provide examples of quality ratings in Appendix G. We observe, as expected, that both S and H are negatively correlated with quality. Interestingly, H is a much stronger predictor of average quality
+
+than the other metrics. Similar findings are obtained on the RAID dataset, as shown in Appendix A. The reason log-perplexity is not a good predictor of quality can be illustrated with a simple example. Suppose that the entire document consists of a single word repeated over and over again. Then, the log-perplexity of each subsequent token gets progressively closer to zero, since the next token can be reliably predicted. Obviously, this does not imply that an article made of a single repeating word is of good quality. The Hurst exponent, on the other hand, will be quite large in this case, indicating poor quality. In Appendix G, we provide a sample of documents from hyperparameter settings that yield large and small values of H for comparison.
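+
+The repeated-word argument can be made concrete with a toy score sequence. Below, the document's own history serves as an add-alpha smoothed unigram model, a crude stand-in for an actual LLM scorer; `alpha` and `vocab_size` are illustrative choices, not values from our experiments:
+
+```python
+from collections import Counter
+import math
+
+def next_token_logprob(history, token, alpha=1.0, vocab_size=1000):
+    """Add-alpha smoothed estimate of p(token) from the document's own history;
+    a crude stand-in for an LLM scorer (alpha and vocab_size are illustrative)."""
+    counts = Counter(history)
+    return math.log((counts[token] + alpha) / (len(history) + alpha * vocab_size))
+
+doc = ["the"] * 200  # a document consisting of one repeated word
+scores = [-next_token_logprob(doc[:t], doc[t]) for t in range(1, len(doc))]
+
+# Per-token log-perplexity shrinks toward zero as the repetition continues,
+# even though the document is obviously of poor quality:
+print(scores[0] > scores[-1])
+```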
+
+Q7. How well does the range of fractal parameters overlap between natural language and LLM-generated texts? Figure 10 shows that the range of fractal parameters in natural language is mostly a narrow subset of those observed in LLM-generated texts. Clearly, while there is an overlap, natural language maintains S and H in a narrow range whereas they vary widely in LLM-generated texts. Similar findings are obtained on the RAID dataset, as shown in Appendix A. This is consistent with the earlier observation about the relation between fractal parameters and quality of texts. Among the different factors considered, we find that prompting has the biggest impact on S and H, as shown in Table 2, which reports the Shannon mutual information between fractal parameters and other variables.
+
+Nevertheless, it is important to keep in mind that in Figure 10, each value of the fractal parameter is calculated over a corpus of texts, not individual documents. A corpus of texts corresponds to a particular combination of generating model, decoding temperature, prompting method, scoring model, and dataset. The reason fractal parameters are calculated over multiple documents is because they describe properties about the underlying stochastic process, such as its autocorrelation function $\rho_{n} = \mathbb{E}[x_{t + n}x_{t}]$ , not properties about a single individual realization of it. Hence, multiple independent measurements are needed.
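+
+What "calculated over a corpus" means for the autocorrelation function can be sketched as follows: lag-$n$ products are pooled across independent documents before averaging. The AR(1) toy corpus below is purely illustrative, not log-perplexity data from our experiments:
+
+```python
+import numpy as np
+
+def corpus_autocorrelation(docs, n):
+    """Estimate rho_n = E[x_{t+n} x_t] by pooling lag-n products across documents;
+    pooling many independent realizations is what makes the estimate describe the
+    underlying process rather than a single document."""
+    products = np.concatenate([d[n:] * d[:-n] for d in docs if len(d) > n])
+    return float(products.mean())
+
+# Toy corpus of AR(1) increment processes (illustrative, not log-PPL scores):
+rng = np.random.default_rng(0)
+docs = []
+for _ in range(50):
+    x = np.zeros(500)
+    for t in range(1, 500):
+        x[t] = 0.6 * x[t - 1] + rng.normal()
+    docs.append(x)
+
+# Dependence decays with lag, so rho_1 exceeds rho_5:
+print(corpus_autocorrelation(docs, 1) > corpus_autocorrelation(docs, 5))
+```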
+
+
+
+
+
+
+
+
+Figure 6. Distribution of the log-ratio of Hölder exponent S (top) and Hurst exponent H (bottom) in pretrained (PT) and instruction-tuned (IT) models (all with simple continuation) compared to natural language. Instruction-tuned models at low temperatures $\beta < 1$ have higher values of H; i.e. more dependence over time. See Section 4/Q3 for further discussion.
+
+
+
+
+
+
+
+
+
+
+
+
+Figure 7. $y$ -axis is the log of the ratio of the fractal parameters between LLM-generated texts and natural language, similar to Figure 3. Here, we only use instruction-tuned models. $x$ -axis is from left to right the 7 prompting strategies in Table 1 (top to bottom). Detailed results are in Appendix F.
+
+
+
+
+
+
+Figure 8. Standard deviation of S (left) and H (right) calculated across prompting methods for each instruction-tuned model: Gemini 1.0 Pro (G-P), Mistral-7B (M-7) and Gemma-2B (G-2).
+
+Q8. Are there notable differences when LLMs are used to score their own outputs? Inspired by prior observations, which suggest that using similar architectures for both scoring and generation might work better for detecting LLM-generated texts (Mitchell et al., 2023; Fagni et al., 2021), we explore if there are differences in fractal parameters when
+
+Table 2. Shannon mutual information (Shannon, 1948) between S or H and other variables, normalized by the Shannon entropy of the fractal parameter. We bin values into intervals of length 0.1. Prompting has the biggest impact on fractal parameters, followed by the data domain.
+
+| | Scoring Model | Generating Model | Temp | Dataset | Prompt |
+|---|---|---|---|---|---|
+| S | 0.2% | 1.0% | 1.0% | 7.3% | 8.4% |
+| H | 1.7% | 4.8% | 4.3% | 7.2% | 19.7% |
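+
+The normalization used in Table 2 (mutual information divided by the entropy of the binned fractal parameter) can be sketched as below. The synthetic measurements, in which only the prompt shifts S, are hypothetical and serve only to show the computation:
+
+```python
+import numpy as np
+from collections import Counter
+
+def entropy(labels):
+    """Shannon entropy (bits) of a discrete sample."""
+    n = len(labels)
+    return -sum(c / n * np.log2(c / n) for c in Counter(labels).values())
+
+def normalized_mi(fractal_values, variable, bin_width=0.1):
+    """I(S; V) / H(S), with the fractal parameter binned into 0.1-wide
+    intervals as in Table 2."""
+    bins = tuple(int(v // bin_width) for v in fractal_values)
+    h_s, h_v = entropy(bins), entropy(variable)
+    h_joint = entropy(tuple(zip(bins, variable)))
+    return (h_s + h_v - h_joint) / h_s
+
+# Synthetic measurements in which only the prompt shifts S (hypothetical data):
+rng = np.random.default_rng(1)
+prompts = tuple(int(p) for p in rng.integers(0, 7, 2000))
+scorers = tuple(int(s) for s in rng.integers(0, 3, 2000))
+s_vals = [0.25 + 0.05 * p + rng.normal(0, 0.02) for p in prompts]
+
+# The prompt carries far more information about S than the scoring model:
+print(normalized_mi(s_vals, prompts) > normalized_mi(s_vals, scorers))
+```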
+
+LLMs score their own outputs (i.e. using Mistral-7B to score its output, as opposed to the output of other models).
+
+One way to examine this is to look into the reduction in uncertainty:
+
+$$
+\mathcal {J} (X; Y) \doteq U (X \mid Z) - U (X \mid Z, Y), \tag {3}
+$$
+
+where $U(X|Z)$ is a measure of the uncertainty in $X$ when
+
+
+Figure 9. Average quality of LLM-generated documents, as judged by Gemini 1.0 Pro, vs. log-PPL (left), Hölder exponent (middle), and Hurst exponent (right). The Hurst parameter is a much better predictor of quality than the other metrics. See Section 4/Q6 for discussion, Appendix C for the exact prompt used in Gemini 1.0 Pro, and Appendix G for examples.
+
+
+
+
+
+
+
+
+
+
+
+
+Figure 10. Distribution of S and H for collections containing both LLM-generated and human-generated texts, where the values in the legend indicate the proportion of LLM-generated texts. See Section 4/Q7 for details. Cases where LLMs repeat the same text mostly occur with greedy decoding (left), but we still see different distributions of S and H at higher temperatures.
+
+
+
+
+
+conditioning on $Z$ , such as the conditional Shannon entropy or the error rate of predicting $X$ given $Z$ . In our setting, $X$ and $Y$ are the scoring and generating models, while $Z$ comprises the remaining variables, such as the dataset, decoding temperature, and fractal parameters. Intuitively, since fractal parameters are included in the set of predictors $Z$ , if the error rate of predicting the generating model does not drop when the scoring model is added as a predictor, then fractal parameters remain relatively unchanged whether or not the same model is used for generating and scoring texts. This is indeed what we observe. Specifically, a random forest classifier, in its default Scikit-learn implementation (Pedregosa et al., 2011), predicts the scoring model with a high accuracy of $97.0\%$ without including the generating model in the set of predictors, and this accuracy remains unchanged when the generating model is included. Similarly, the accuracy of predicting the generating model is quite high at $97.8\%$ without using the scoring model as a predictor; including the scoring model improves accuracy only slightly.
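+
+Equation (3) with the error rate as the uncertainty measure $U$ can be sketched as follows. We use a simple within-cell majority-vote predictor as a stand-in for the random-forest classifier, and all variables below are synthetic placeholders:
+
+```python
+import numpy as np
+from collections import Counter, defaultdict
+
+def cond_error(X, Z):
+    """U(X | Z): error rate of predicting X by majority vote within each Z cell."""
+    cells = defaultdict(list)
+    for x, z in zip(X, Z):
+        cells[z].append(x)
+    correct = sum(max(Counter(xs).values()) for xs in cells.values())
+    return 1.0 - correct / len(X)
+
+rng = np.random.default_rng(2)
+X = rng.integers(0, 3, 3000)          # generating model (synthetic labels)
+Y = rng.integers(0, 3, 3000)          # scoring model, independent of X
+# Fractal-parameter bins that identify the generating model ~90% of the time:
+bins = np.where(rng.random(3000) < 0.9, X, rng.integers(0, 3, 3000))
+Z = tuple(int(b) for b in bins)
+ZY = tuple((z, int(y)) for z, y in zip(Z, Y))
+
+# J(X; Y) from Eq. (3): adding the scoring model Y barely reduces the error.
+reduction = cond_error(X, Z) - cond_error(X, ZY)
+print(round(cond_error(X, Z), 3), round(reduction, 3))
+```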
+
+Q9. Are some domains easier to synthesize? Table 3 shows that LLM-generated encyclopedic articles and legal documents are closer to those of natural language in terms of average log-perplexity scores (1st order statistics) and
+
+fractal parameters (2nd order). However, this may be a reflection of the weight of those domains during (pre)training, rather than anything fundamental about the domains themselves. Interestingly, when using the RAID dataset, it seems challenging for LLMs to replicate humans in poetry, and this only becomes evident when we look into the Hölder exponent.
+
+# 5. Discussion and Limitations
+
+In this work, we investigate whether LLMs are capable of replicating the fractal structure of language. We note that various strategies, such as the decoding temperature and prompting method, can impact fractal parameters even when log-perplexity scores seem to be unaffected. This goal is in line with earlier works, such as Meister & Cotterell (2021), who argued that the evaluation of LLMs should go beyond log-perplexity and also consider how well LLMs capture other "statistical tendencies" observed in language.
+
+Our findings reveal that for pretrained models, larger architectures are more effective at capturing such fractal properties. In addition, with instruction-tuned models, the similarity to human language does not improve monotonically as the amount of contextual information in the prompt increases. Notably, the Hurst parameter emerged as a strong
+
+Table 3. Log-ratio of fractal parameters and log-perplexity scores between LLM-generated texts and natural language using instruction-tuned models. Results are averaged across all settings (e.g. decoding temperatures, models, and prompts). LLM-generated encyclopedic and legal documents are closer to natural language than other domains.
+
+| Domain | S | H | log-perplexity |
+|---|---|---|---|
+| **GAGLE Dataset** | | | |
+| NEWSROOM | 0.19 ± 0.14 | 0.02 ± 0.05 | -0.58 ± 0.29 |
+| SCIENTIFIC | 0.17 ± 0.15 | 0.07 ± 0.05 | -0.61 ± 0.27 |
+| BIGPATENT | 0.15 ± 0.15 | 0.04 ± 0.06 | -0.53 ± 0.26 |
+| BILLSUM | 0.06 ± 0.10 | -0.04 ± 0.05 | -0.12 ± 0.28 |
+| WIKIPEDIA | 0.06 ± 0.12 | -0.01 ± 0.04 | -0.46 ± 0.29 |
+| **RAID Dataset** | | | |
+| ABSTRACTS | -0.13 ± 0.05 | 0.16 ± 0.02 | -0.59 ± 0.09 |
+| BOOKS | -0.10 ± 0.04 | 0.07 ± 0.01 | -0.66 ± 0.04 |
+| NEWS | 0.18 ± 0.02 | 0.11 ± 0.01 | -0.53 ± 0.04 |
+| POETRY | 0.50 ± 0.02 | 0.05 ± 0.01 | -0.67 ± 0.10 |
+| RECIPES | 0.10 ± 0.05 | 0.05 ± 0.01 | -0.75 ± 0.04 |
+| REDDIT | -0.07 ± 0.02 | 0.11 ± 0.01 | -0.67 ± 0.06 |
+| REVIEWS | 0.08 ± 0.02 | 0.13 ± 0.02 | -1.23 ± 0.11 |
+
+predictor of quality in generated texts, among other significant findings. To facilitate further research in this area, we release our GAGLE dataset, which comprises over 240,000 LLM-generated articles.
+
+Limitations. In terms of limitations, estimating fractal parameters requires analyzing large corpora of lengthy documents because these parameters describe properties of the underlying stochastic processes. Therefore, they may not be reliable for drawing conclusions about individual documents or short texts. This limitation prevents us from making claims about the ability to detect AI-generated content using these metrics alone. However, we note that perplexity-based detection methods might be enhanced by incorporating second-order statistics such as the Hölder and Hurst exponents, since the range of those parameters in LLM-generated articles varies widely compared to human-generated texts. We leave the exploration of detection strategies that leverage these fractal characteristics for future work.
+
+In addition, we focus on auto-regressive models. Exploring whether our findings hold for fundamentally different architectures, such as state space models (SSMs) (Gu & Dao, 2024), is a valuable direction for future research.
+
+# Impact Statement
+
+There are many potential societal consequences of advancing the field of artificial intelligence (AI), both positive, such as improved accessibility to high-quality healthcare and education, and negative, if such systems are misused. The goal in this work is to contribute to the ongoing effort to improve understanding of language models and the mechanisms behind their success. While we recognize the general ethical considerations that accompany the advancement of
+
+language models and AI, we do not feel that this particular work raises unique or unaddressed ethical concerns beyond the established considerations within the field.
+
+# Acknowledgement
+
+We thank Vinh Tran and Jeremiah Harmsen for their insightful reviews and suggestions, Mostafa Dehghani and Mike Mozer for early discussions, and Google DeepMind at large for providing a supportive research environment.
+
+# References
+
+Alabdulmohsin, I., Neyshabur, B., and Zhai, X. Revisiting neural scaling laws in language and vision. In NeurIPS, 2022.
+Alabdulmohsin, I., Tran, V. Q., and Dehghani, M. Fractal patterns may illuminate the success of next-token prediction. In NeurIPS, 2024.
+Altmann, E. G., Cristadoro, G., and Esposti, M. D. On the origin of long-range correlations in texts. Proceedings of the National Academy of Sciences, 109(29):11582-11587, 2012.
+Anil, R., Borgeaud, S., Alayrac, J.-B., Yu, J., Soricut, R., Schalkwyk, J., Dai, A. M., Hauth, A., Millican, K., Silver, D., Johnson, M., Antonoglou, I., Schrittwieser, J., Glaese, A., Chen, J., Pitler, E., Lillicrap, T., Lazaridou, A., First, O., et al. Gemini: A family of highly capable multimodal models, 2024. URL https://arxiv.org/abs/2312.11805.
+Bachmann, G. and Nagarajan, V. The pitfalls of next-token prediction, 2024. URL https://arxiv.org/abs/2403.06963.
+Christian, J. CNET secretly used AI on articles that didn't disclose that fact, staff say. *Futurism*, January, 2023. URL https://futurism.com/cnet-ai-articles-label.
+Cohan, A., Dernoncourt, F., Kim, D. S., Bui, T., Kim, S., Chang, W., and Goharian, N. A discourse-aware attention model for abstractive summarization of long documents. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), 2018. doi: 10.18653/v1/n18-2097. URL http://dx.doi.org/10.18653/v1/n18-2097.
+Crovella, M. E. and Bestavros, A. Explaining world wide web traffic self-similarity. Technical report, Boston University Computer Science Department, 1995.
+
+Dhaini, M., Poelman, W., and Erdogan, E. Detecting ChatGPT: A survey of the state of detecting ChatGPT-generated text. In Hardalov, M., Kancheva, Z., Velichkov, B., Nikolova-Koleva, I., and Slavcheva, M. (eds.), Proceedings of the 8th Student Research Workshop associated with the International Conference Recent Advances in Natural Language Processing, pp. 1-12, Varna, Bulgaria, September 2023. INCOMA Ltd., Shoumen, Bulgaria. URL https://aclanthology.org/2023.ranlp-stud.1.
+Dou, Y., Forbes, M., Koncel-Kedzierski, R., Smith, N. A., and Choi, Y. Is GPT-3 text indistinguishable from human text? scarecrow: A framework for scrutinizing machine text. In Muresan, S., Nakov, P., and Villavicencio, A. (eds.), Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 7250–7274, Dublin, Ireland, May 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.acl-long.501. URL https://aclanthology.org/2022.acl-long.501.
+Dugan, L., Hwang, A., Trhlik, F., Ludan, J. M., Zhu, A., Xu, H., Ippolito, D., and Callison-Burch, C. raid: A shared benchmark for robust evaluation of machine-generated text detectors, 2024. URL https://arxiv.org/abs/2405.07940.
+Efron, B. and Tibshirani, R. J. An introduction to the bootstrap. CRC press, 1994.
+Fagni, T., Falchi, F., Gambini, M., Martella, A., and Tesconi, M. TweepFake: About detecting deepfake tweets. Plos one, 16(5):e0251415, 2021.
+Gehrmann, S., Strobelt, H., and Rush, A. GLTR: Statistical detection and visualization of generated text. In Costa-jussa, M. R. and Alfonseca, E. (eds.), Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pp. 111-116, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-3019. URL https://aclanthology.org/P19-3019.
+Ghosal, S. S., Chakraborty, S., Geiping, J., Huang, F., Manocha, D., and Bedi, A. S. Towards possibilities & impossibilities of AI-generated text detection: A survey, 2023. URL https://arxiv.org/abs/2310.15264.
+Grusky, M., Naaman, M., and Artzi, Y. Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), 2018. doi: 10.18653/v1/n18-1065. URL http://dx.doi.org/10.18653/v1/n18-1065.
+Gu, A. and Dao, T. Mamba: Linear-time sequence modeling with selective state spaces, 2024. URL https://arxiv.org/abs/2312.00752.
+Guo, B., Zhang, X., Wang, Z., Jiang, M., Nie, J., Ding, Y., Yue, J., and Wu, Y. How close is ChatGPT to human experts? comparison corpus, evaluation, and detection, 2023. URL https://arxiv.org/abs/2301.07597.
+Hans, A., Schwarzschild, A., Cherepanova, V., Kazemi, H., Saha, A., Goldblum, M., Geiping, J., and Goldstein, T. Spotting LLMs with binoculars: Zero-shot detection of machine-generated text, 2024. URL https://arxiv.org/abs/2401.12070.
+Hestness, J., Narang, S., Ardalani, N., Diamos, G., Jun, H., Kianinejad, H., Patwary, M., Ali, M., Yang, Y., and Zhou, Y. Deep learning scaling is predictable, empirically. arXiv preprint arXiv:1712.00409, 2017.
+Hurst, H. E. Long-term storage capacity of reservoirs. Transactions of the American society of civil engineers, 116(1): 770-799, 1951.
+Ippolito, D., Duckworth, D., Callison-Burch, C., and Eck, D. Automatic detection of generated text is easiest when humans are fooled. In Jurafsky, D., Chai, J., Schluter, N., and Tetreault, J. (eds.), Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 1808-1822, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.164. URL https://aclanthology.org/2020.acl-main.164.
+Jawahar, G., Abdul-Mageed, M., and Lakshmanan, V.S., L. Automatic detection of machine generated text: A critical survey. In Scott, D., Bel, N., and Zong, C. (eds.), Proceedings of the 28th International Conference on Computational Linguistics, pp. 2296-2309, Barcelona, Spain (Online), December 2020. International Committee on Computational Linguistics. doi: 10.18653/v1/2020.coling-main.208. URL https://aclanthology.org/2020.coling-main.208.
+Jiang, A. Q., Sablayrolles, A., Mensch, A., Bamford, C., Chaplot, D. S., de las Casas, D., Bressand, F., Lengyel, G., Lample, G., Saulnier, L., Lavaud, L. R., Lachaux, M.-A., Stock, P., Scao, T. L., Lavril, T., Wang, T., Lacroix, T., and Sayed, W. E. Mistral 7B, 2023. URL https://arxiv.org/abs/2310.06825.
+Kadavath, S., Conerly, T., Askell, A., Henighan, T., Drain, D., Perez, E., Schiefer, N., Hatfield-Dodds, Z., DasSarma, N., Tran-Johnson, E., et al. Language models (mostly) know what they know. arXiv preprint arXiv:2207.05221, 2022.
+Kaplan, J., McCandlish, S., Henighan, T., Brown, T. B., Chess, B., Child, R., Gray, S., Radford, A., Wu, J., and Amodei, D. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020.
+Kornilova, A. and Eidelman, V. Billsum: A corpus for automatic summarization of US legislation, 2019.
+Leland, W. E. and Wilson, D. V. High time-resolution measurement and analysis of LAN traffic: Implications for LAN interconnection. In IEEE INFCOM, 1991.
+Leland, W. E., Taqqu, M. S., Willinger, W., and Wilson, D. V. On the self-similar nature of Ethernet traffic. IEEE/ACM Transactions on networking, 2(1):1-15, 1994.
+McGuffie, K. and Newhouse, A. The radicalization risks of GPT-3 and advanced neural language models, 2020. URL https://arxiv.org/abs/2009.06807.
+Meister, C. and Cotterell, R. Language model evaluation beyond perplexity. In Zong, C., Xia, F., Li, W., and Navigli, R. (eds.), Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 5328-5339, Online, August 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.acl-long.414. URL https://aclanthology.org/2021.acl-long.414.
+Mesnard, T., Hardin, C., Dadashi, R., Bhupatiraju, S., Pathak, S., Sifre, L., Riviere, M., Kale, M. S., Love, J., Tafti, P., Hussenot, L., Sessa, P. G., et al. Gemma: Open models based on Gemini research and technology, 2024. URL https://arxiv.org/abs/2403.08295.
+Mireshghallah, N., Mattern, J., Gao, S., Shokri, R., and Berg-Kirkpatrick, T. Smaller language models are better zero-shot machine-generated text detectors. In Graham, Y. and Purver, M. (eds.), Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 278-293, St. Julian's, Malta, March 2024. Association for Computational Linguistics. URL https://aclanthology.org/2024.eacl-short.25.
+Mitchell, E., Lee, Y., Khazatsky, A., Manning, C. D., and Finn, C. DetectGPT: zero-shot machine-generated text detection using probability curvature. In Proceedings of the 40th International Conference on Machine Learning, ICML'23. JMLR.org, 2023.
+Neal, B. Introduction to causal inference. 2020. URL https://www.bradyneal.com/Introduction_to_Causal_Inference-Dec17_2020-Neal.pdf.
+Paxson, V. and Floyd, S. Wide area traffic: the failure of Poisson modeling. IEEE/ACM Transactions on networking, 3(3):226-244, 1995.
+Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournaepau, D., Brucher, M., Perrot, M., and Duchesnay, E. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830, 2011.
+Pu, J., Sarwar, Z., Abdullah, S. M., Rehman, A., Kim, Y., Bhattacharya, P., Javed, M., and Viswanath, B. Deepfake text detection: Limitations and opportunities, 2022. URL https://arxiv.org/abs/2210.09421.
+Sadasivan, V. S., Kumar, A., Balasubramanian, S., Wang, W., and Feizi, S. Can AI-generated text be reliably detected?, 2024. URL https://arxiv.org/abs/2303.11156.
+Shannon, C. E. A mathematical theory of communication. The Bell system technical journal, 27(3):379-423, 1948.
+Sharma, E., Li, C., and Wang, L. BIGPATENT: A large-scale dataset for abstractive and coherent summarization, 2019.
+Solaiman, I., Brundage, M., Clark, J., Askell, A., Herbert-Voss, A., Wu, J., Radford, A., Krueger, G., Kim, J. W., Kreps, S., McCain, M., Newhouse, A., Blazakis, J., McGuffie, K., and Wang, J. Release strategies and the social impacts of language models, 2019. URL https://arxiv.org/abs/1908.09203.
+
+Susnjak, T. ChatGPT: The end of online exam integrity?, 2022. URL https://arxiv.org/abs/2212.09292.
+Tang, R., Chuang, Y.-N., and Hu, X. The science of detecting LLM-generated text. Commun. ACM, 67(4):50-59, mar 2024. ISSN 0001-0782. doi: 10.1145/3624725. URL https://doi.org/10.1145/3624725.
+Tulchinskii, E., Kuznetsov, K., Kushnareva, L., Cherniavskii, D., Nikolenko, S., Burnaev, E., Barannikov, S., and Pionkovskaya, I. Intrinsic dimension estimation for robust detection of ai-generated texts. Advances in Neural Information Processing Systems, 36:39257-39276, 2023.
+Varshney, L. R., Keskar, N. S., and Socher, R. Limits of detecting text generated by large-scale language models, 2020. URL https://arxiv.org/abs/2002.03438.
+Vasilatos, C., Alam, M., Rahwan, T., Zaki, Y., and Maniatakos, M. HowkGPT: Investigating the detection of ChatGPT-generated university student homework through context-aware perplexity analysis, 2023. URL https://arxiv.org/abs/2305.18226.
+Verma, V., Fleisig, E., Tomlin, N., and Klein, D. Ghostbuster: Detecting text ghostwritten by large language models. In Duh, K., Gomez, H., and Bethard, S. (eds.), Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pp. 1702-1717, Mexico City, Mexico, June 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.naacl-long.95. URL https://aclanthology.org/2024.naacl-long.95.
+Watkins, N. Mandelbrot's stochastic time series models. Earth and Space Science, 6(11):2044-2056, 2019.
+Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E., Le, Q., and Zhou, D. Chain-of-thought prompting elicits reasoning in large language models, 2023. URL https://arxiv.org/abs/2201.11903.
+Wikipedia. Downloads, 2024. URL https://dumps.wikipedia.org.
+Willinger, W., Taqqu, M. S., Sherman, R., and Wilson, D. V. Self-similarity through high-variability: statistical analysis of Ethernet LAN traffic at the source level. IEEE/ACM Transactions on networking, 5(1):71-86, 1997.
+Yang, X., Cheng, W., Wu, Y., Petzold, L., Wang, W. Y., and Chen, H. DNA-GPT: Divergent n-gram analysis for training-free detection of GPT-generated text, 2023. URL https://arxiv.org/abs/2305.17359.
+
+Yao, Y., Viswanath, B., Cryan, J., Zheng, H., and Zhao, B. Y. Automated crowdturfing attacks and defenses in online review systems. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, CCS '17, pp. 1143-1158, New York, NY, USA, 2017. Association for Computing Machinery. ISBN 9781450349468. doi: 10.1145/3133956.3133990. URL https://doi.org/10.1145/3133956.3133990.
+Zellers, R., Holtzman, A., Rashkin, H., Bisk, Y., Farhadi, A., Roesner, F., and Choi, Y. Defending against neural fake news, 2020. URL https://arxiv.org/abs/1905.12616.
+Zhai, X., Kolesnikov, A., Houlsby, N., and Beyer, L. Scaling vision transformers. In CVPR, 2022.
+
+
+
+
+Figure 11. Similar to Figure 4, $y$ -axis is the log-ratio of log-PPL scores for both pretrained (PT) and instruction-tuned (IT) models, when Gemini 1.0 Pro is used to score texts.
+
+
+
+
+Figure 12. Similar to Figure 6, this figure shows the distribution of the log-ratio of the Hölder exponent S (top) and Hurst exponent H (bottom) across all LLM-generated articles in RAID dataset, when scored by Gemini 1.0 Pro.
+
+
+
+# A. Additional Experiments using RAID Dataset
+
+In this section, we conduct additional experiments using the RAID dataset (Dugan et al., 2024), which contains articles generated by 11 models (e.g. ChatGPT, Cohere, Llama 2) in domains such as Reddit and reviews, among others. The goal is to verify whether our main conclusions continue to hold. Since we do not control the prompts in RAID and we only score texts using Gemini 1.0 Pro, Q2/4/5/8 are omitted in this analysis.
+
+Overall, we find a substantial agreement. We summarize our findings below:
+
+1. Q1 (Log-perplexity): Consistent with our results, greedy decoding and instruction tuning yield lower perplexity than human text, but pretrained models at $\beta = 1$ show perplexity similar to human text. See Figure 11.
+2. Q3 (Fractals in IT Models): Our findings still hold as shown in Figure 12: Instruction tuning affects the Hurst exponent (H), especially at low temperatures (leading to higher H), while Self-Similarity (S) remains largely unaffected.
+3. Q6 (Text Quality): We use Gemini 1.0 Pro to evaluate the quality of generated articles and compare the average quality against the process of generating articles (e.g. model and hyperparameters). As before, only the Hurst exponent (H) is well-correlated with quality. This observation now holds across the 11 models and 7 domains in RAID, reinforcing our earlier result. Results are shown in Figure 13.
+4. Q7 (Distribution of Fractals): Natural language still has a tighter distribution of fractal parameters compared to LLM-generated text, particularly for S at low decoding temperatures. Results are shown in Figure 14.
+
+
+Figure 13. Similar to Figure 9, $y$ axis is the average quality of LLM-generated documents, as judged by Gemini 1.0 Pro, vs. log-PPL (left), Hölder exponent (middle), and Hurst exponent (right).
+
+Figure 14. Similar to Figure 10, the distribution of S and H is shown for collections containing either LLM-generated texts or original articles.
+
+
+
+# B. Definitions of Fractal Parameters
+
+In this appendix, we provide a concise, self-contained description of the two fractal parameters: (1) the Hölder (Self-Similarity) exponent S and (2) the Hurst exponent H. We use the code provided by Alabdulmohsin et al. (2024) to calculate these parameters.
+
+# B.1. Hölder Exponent
+
+An object is "self-similar" if its statistical or geometric properties remain consistent across different scales. Examples include coastlines, snowflakes, the Cantor set, and the Koch curve. Analogously, a stochastic process is self-similar if it is distributionally similar to a rescaling of time.
+
+Formally, let $(x_{t})_{t\in \mathbb{N}}$ be a stochastic process, such as a sequence of log-perplexity scores, and write $(X_{t})_{t\in \mathbb{N}}$ for its integral process: $X_{t} = \sum_{i = 1}^{t}x_{i}$ . Then, the process is said to be self-similar if $(X_{\tau t})_{t\in \mathbb{N}}$ is distributionally equivalent to $(\tau^S X_t)_{t\in \mathbb{N}}$ for some exponent S. Here, S is the Hölder exponent.
+
+One way to calculate S is as follows. Fix $\epsilon \ll 1$ and denote the $\tau$ -increments by $(X_{t + \tau} - X_t)_{t\in \mathbb{N}}$ . These would correspond, for instance, to the number of bits used for clauses, sentences, paragraphs and longer texts as $\tau$ increases. In terms of the increment process $(x_{t})_{t\in \mathbb{N}}$ , this corresponds to aggregating increments into "bursts". Let $p_{\epsilon}(\tau)$ be the probability mass of the event $\{|X_{t + \tau} - X_t|\leq \epsilon \}_{t\in \mathbb{N}}$ . Then, S can be estimated by fitting a power law relation $p_{\epsilon}(\tau)\sim \tau^{-S}$ (Watkins, 2019).
+
+# B.2. Hurst Exponent
+
The Hurst parameter $\mathrm{H} \in [0,1]$ , on the other hand, quantifies the degree of predictability or dependence over time (Hurst, 1951). It is calculated using the so-called rescaled-range (R/S) analysis. Let $(x_{t})_{t \in \mathbb{N}}$ be an increment process. For each $n \in \mathbb{N}$ , write $y_{t} = x_{t} - \frac{1}{n} \sum_{k=1}^{n} x_{k}$ and $Y_{t} = \sum_{k=1}^{t} y_{k}$ . The range and scale are defined, respectively, as $R(n) = \max_{t \leq n} Y_t - \min_{t \leq n} Y_t$ and $S(n) = \sigma(\{x_k\}_{k \leq n})$ , where $\sigma$ is the standard deviation. Then, the Hurst parameter $\mathrm{H}$ is estimated by fitting the power law relation $R(n) / S(n) \sim n^{\mathrm{H}}$ .
+
+# C. Prompting Templates
+
+# C.1. Providing Contextual Information
+
+Table 4. A summary of the prompting templates used in this work.
+
| Abbreviation | Description | Prompting Template |
| --- | --- | --- |
| continue | Simple continuation based on a short prefix (no prompting). | None |
| cot | Ask the model to generate an outline first before generating the article. A short prefix is provided. | Extend the following text. First write an outline. Then insert \*Extended Text: After that write the text using a minimum of 15 paragraphs containing a minimum of 1000 words, in plain text only without titles or headings. The text you need to extend is: \<prefix\> |
| short keywords | A short, unordered list of keywords. | Using these keywords: \<keywords\>, write an article, in a minimum of 15 paragraphs containing a minimum of 1000 words, in plain text only without titles or headings. |
| keywords | Many unordered keywords. | Using these keywords: \<keywords\>, write an article, in a minimum of 15 paragraphs containing a minimum of 1000 words, in plain text only without titles or headings. |
| summary | A summary of the entire article. | Write about the following in a minimum of 15 paragraphs containing a minimum of 1000 words, in plain text only without titles or headings: \<summary\>. |
| summary+keywords | Both a summary and an unordered list of many keywords. | Write about the following in a minimum of 15 paragraphs containing a minimum of 1000 words, in plain text only without titles or headings: \<summary\>. Using these keywords: \<keywords\>. |
| excerpt | An ordered list of long excerpts from the original article. | Write about the following in a minimum of 15 paragraphs containing a minimum of 1000 words, in plain text only without titles or headings: \<ordered excerpts\>. |
+
+# C.2. Generating Contextual Information
+
+To generate the keywords and summary, we use the instruction-tuned Gemini 1.0 Pro model with the following prompts:
+
+Keywords:
+
+Generate keywords for the following text, but only respond with the keywords in plain text separated by commas. The text is:
+
Summary:
+
+Summarize the following text in few sentences, but only respond with the text summary in plain text. The text is:
+
+# C.3. Quality of Texts
+
+As discussed in Section 4/Q6, we use the instruction-tuned Gemini 1.0 Pro model to estimate the quality of texts and relate those to fractal parameters in Figure 9. The prompt we use to estimate the quality of an article is provided below:
+
+First, explain briefly what is good and what is bad about the quality of the following document. Then, rate it on a scale from '1' to '5', where '1' is poorest and '5' is best. The last character in your response must be a single digit that corresponds to your rating. The document is:\n
+
Examples of how the model evaluates the quality of articles are provided in Appendix G.
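Because the prompt asks the model to end its response with a single digit, the rating is straightforward to extract. A minimal sketch (the function name and the fallback scan for non-compliant responses are our own, not part of the paper's pipeline):

```python
def parse_rating(response: str):
    """Extract the 1-5 quality rating from an evaluation response.

    The prompt instructs the model to end with a single digit, so we
    check the last non-whitespace character first; as models do not
    always comply, we fall back to the last rating digit found anywhere.
    Returns None if no rating digit is present.
    """
    text = response.rstrip()
    if text and text[-1] in "12345":
        return int(text[-1])
    for ch in reversed(text):  # fallback: scan backwards for a digit
        if ch in "12345":
            return int(ch)
    return None
```

In practice a response such as the Appendix G example, which ends "... I rate this document as a 3 out of 5. 3", parses to 3 via the last-character rule.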
+
+# D. Prompts as Do-Queries Example
+
+To illustrate the differences between Equations 1 and 2, suppose hypothetically that all of the world's conversations are about two topics only: fairy tales or the weather. These two will be our possible contexts. Let us also assume that $75\%$ of the time, people talk about the weather.
+
In addition, let us suppose that only the three prefixes listed below are used, each with equal probability, and with the conditional context probabilities shown in the following table:
+
| Prefix | p(weather \| prefix) | p(fairytale \| prefix) |
| --- | --- | --- |
| "It is a sunny day" | 50% | 50% |
| "It is a lovely sunny day" | 85% | 15% |
| "It is raining" | 90% | 10% |
+
The causal graph in this setup would be as follows: (1) we first select a context, which is "weather" with probability $75\%$ and "fairy tales" with probability $25\%$ . (2) Then, we select one of the three prefixes. By Bayes' rule, for example, the probability of selecting the first prefix above given that the context is "weather" is $0.5 \times (1/3) / 0.75 \approx 22.22\%$ . Finally, (3) we continue the text, given both the context and the prefix.
+
If a language model is trained on this data and we prompt it with the phrase "It is a sunny day," what should the model predict next? There are two ways to interpret this question. One option is that the model should give a prediction based on actual historical data, using the conditional distribution $p$ (suffix | prefix). In this case, there is a $50\%$ probability that the context is about the weather, so the model will continue the prefix by describing the weather $50\%$ of the time and by describing a fairy tale $50\%$ of the time.
+
Another interpretation, however, is that by prompting the model, we are asking it to act as if an intervention had forced all conversations to start with the prefix "It is a sunny day," and to predict how the historical distribution would have looked. In that case, Equation 1 implies that the model should first sample a context from its marginal distribution, which is "weather" with probability $75\%$ and "fairy tales" with probability $25\%$ . Once the context is selected, it can use the historical data to predict the continuation of the prefix "It is a sunny day" after conditioning on that particular context. The intuition, following the causal graph in Figure 1, is that the context dictates or causes the conversations, not the other way around, so the distribution of underlying contexts remains the same.
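Both readings can be computed directly for this toy setup. The sketch below (variable and function names are our own) reproduces the $22.22\%$ Bayes computation and contrasts the two context distributions:

```python
# Toy setup from the example: contexts and their marginal probabilities.
p_context = {"weather": 0.75, "fairytale": 0.25}

# p(context | prefix) for the three prefixes, per the table above.
p_context_given_prefix = {
    "It is a sunny day": {"weather": 0.50, "fairytale": 0.50},
    "It is a lovely sunny day": {"weather": 0.85, "fairytale": 0.15},
    "It is raining": {"weather": 0.90, "fairytale": 0.10},
}
p_prefix = 1 / 3  # each prefix is used with equal probability

def p_prefix_given_context(prefix, context):
    """Bayes' rule: p(prefix | context) = p(context | prefix) p(prefix) / p(context)."""
    return p_context_given_prefix[prefix][context] * p_prefix / p_context[context]

# Reading 1 (conditional): given the prefix "It is a sunny day", the
# context is weather or fairy tale with probability 50% each.
conditional = p_context_given_prefix["It is a sunny day"]

# Reading 2 (do-query): the intervention leaves the marginal context
# distribution untouched, so the context is 75% weather / 25% fairy tale.
interventional = p_context

print(p_prefix_given_context("It is a sunny day", "weather"))  # 0.2222...
```

Note that for each context the three prefix probabilities sum to one, confirming the Bayes inversion is consistent.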
+
+# E. Data Card
+
+Table 5. Generated And Grounded Language Examples (GAGLE) data card.
+
Description: GAGLE comprises LLM-generated articles. All articles are generated using the public checkpoints of the open-source models Mistral-7B (Jiang et al., 2023) and Gemma-2B (Mesnard et al., 2024). The seeds for all articles are sourced from five academic datasets: (1) WIKIPEDIA articles (Wikipedia, 2024); (2) BIGPATENT, consisting of over one million records of U.S. patent documents (Sharma et al., 2019); (3) NEWSROOM, containing over one million news articles (Grusky et al., 2018); (4) SCIENTIFIC, a collection of research papers obtained from the ArXiv and PubMed OpenAccess repositories (Cohan et al., 2018); and (5) BILLSUM, containing US Congressional and California state bills (Kornilova & Eidelman, 2019). Hence, all articles are of an encyclopedic, news, legal, scientific, or patent nature. The dataset contains at most 1,000 articles from each domain. The articles are generated via the following prompting strategies:
+1. continue (pt): Simple continuation based on a short prefix (no prompting) using pre-trained model.
+2. continue (it): Simple continuation based on a short prefix (no prompting) using instruction-tuned model.
3. cot: Ask the model to generate an outline first before generating the article. A short prefix is provided.
4. short keywords: A short, unordered list of keywords.
+5. keywords: Many unordered keywords.
+6. summary: A summary of the entire article.
+7. summary + keywords: Both a summary and an unordered list of many keywords.
8. excerpt: An ordered list of long excerpts from the original article.

With the exception of continue (pt), all prompts are used with instruction-tuned models.

Primary Data Modality: Text.

Data Fields:
1. ID: a unique ID identifying the original ground-truth article.
+2. Model: one of Mistral-7B and Gemma-2B.
+3. Domain: one of WIKIPEDIA, BIGPATENT, NEWSROOM, SCIENTIFIC, and BILLSUM.
+4. Prompt: one of the specified prompting methods.
+5. Temperature: numeric. Either 0.0 (greedy decoding), 0.5, or 1.0 (pretraining temperature).
+6. Prefix: Prefix used to generate the article.
+7. Quality: description of the quality of the article along with a rating from 1 (poorest) to 5 (best).
+8. Text: Actual LLM-generated text.
9. Log-Perplexity Scores: the scores generated by one of the Mistral-7B or Gemma-2B pretrained models.

Intended Use Case: Facilitate research in domains related to detecting and analyzing LLM-generated texts.

Access Type: Unrestricted. CC-BY-4.0.

Sensitive Human Attributes: N/A.
+
+# F. Contextual Information in the Prompt: Full Figures
+
+
+Figure 15. Hölder Exponent. Generating model = Gemini 1.0 Pro
+
+
+Figure 16. Hölder Exponent. Generating model = Mistral-7B
+
+
+Figure 17. Hölder Exponent. Generating model = Gemma-2B
+
+
Figure 18. Hurst Exponent. Generating model = Gemini 1.0 Pro


Figure 19. Hurst Exponent. Generating model = Mistral-7B


Figure 20. Hurst Exponent. Generating model = Gemma-2B
+
+# G. Example Documents of Generating Processes with Large and Small Values of H
+
+# G.1. Large Values of H
+
+Data domain: Newsroom.
+- Generating:
+
+- Model Name: Mistral-7B (pretrained)
+- Decoding Temperature: 0.5
+- Prompting Method: continue
+
+- Fractal Parameters:
+
+- Scoring Model Name: Gemini 1.0 Pro.
+- Hölder Exponent: $\mathrm{S} = {0.630}$
+- Hurst Exponent: $\mathrm{H} = {0.808}$
+
+# G.1.1. DOCUMENT I
+
+# Ground-truth Record ID:
+
+"newsroom-00001-00064-6"
+
+# Prefix:
+
+(the following article contains a minimum of 15 paragraphs containing a minimum of 1000 words, in plain text only without titles or headings.)
+
+A year ago, the 25-year-old Serb swept into Paris after amassing one of the most spectacular five months in men's tennis history, winning every tournament he'd entered. Now, far from a perfect season, the tournament's top seed arrives at the French Open,
+
+# Generated:
+
+the last major of the year, having lost his last two matches, his ranking and his No. 1 ranking.
+
+In the past year, Djokovic has been beaten by a 17-year-old American, lost the last seven sets he's played, and been beaten by a player ranked 111th in the world.
+
+The Serb, who won the Australian Open and Wimbledon in the first two months of the year, was beaten by 17-year-old American Jared Donaldson in the final of the Stuttgart Open last month.
+
+He then lost to Dominic Thiem in the Rome Masters final, before being beaten by Gael Monfils in the quarterfinals of the Madrid Open.
+
+Djokovic, who has been ranked No. 1 for the last seven years, is now ranked 11th in the world.
+
+The Serb will be hoping to bounce back in Paris, where he has won three of the last four French Open titles.
+
+He will face 23-year-old Russian Andrey Rublev in the first round of the tournament.
+
+Rublev is ranked 22nd in the world, but has been on a roll in the last few months, winning the St. Petersburg Open and the ATP 500 event in Rome.
+
+The Serb has been in fine form in the last few months, winning the Australian Open and Wimbledon in the first two months of the year.
+
+Djokovic has been in fine form in the last few months, winning the Australian Open and Wimbledon in the first two months of the year.
+
+The Serb has been in fine form in the last few months, winning the Australian Open and Wimbledon in the first two months of the year.
+
+The Serb has been in fine form in the last few months, winning the Australian Open and Wimbledon in the first two months of the year.
+
+The Serb has been in fine form in the last few months, winning the Australian Open and Wimbledon in the first two months of the year.
+
+The Serb has been in fine form in the last few months, winning the Australian Open and Wimbledon in the first two months of the year.
+
+The Serb has been in fine form in the last few months, winning the Australian Open and Wimbledon in the first two months of the year.
+
+The Serb has been in fine form in the last few months, winning the Australian Open and Wimbledon in the first two months of the year.
+
+The Serb has been in fine form in the last few months, winning the Australian Open and Wimbledon in the first two months of the year.
+
+The Serb has been in fine form in the last few months, winning the Australian Open and Wimbledon in the first two months of the year.
+
+The Serb has been in fine form in the last few months, winning the Australian Open and Wimbledon in the first two months of the year.
+
+The Serb has been in fine form in the last few months, winning the Australian Open and Wimbledon in the first two months of the year.
+
+The Serb has been in fine form in the last few months, winning the Australian Open and Wimbledon in the first two months of the year.
+
+The Serb has been in fine form in the last few months, winning the Australian Open and Wimbledon in the first two months of the year.
+
+The Serb has been in fine form in the last few months, winning the Australian Open and Wimbledon in the first two months of the year.
+
+The Serb has been in fine form in the last few months, winning the Australian Open and Wimbledon in the first two months of the year.
+
+The Serb has been in fine form in the last few months, winning the Australian Open and Wimbledon in the first two months of the year.
+
+The Serb has been in fine form in the last few months, winning the Australian Open and Wimbledon in the first two months of the year.
+
+The Serb has been in fine form in the last few months, winning the Australian Open and Wimbledon in the first two months of the year.
+
+The Serb has been in fine form in the last few months, winning the Australian Open and Wimbledon in the first two months of the year.
+
+The Serb has been in fine form in the last few months, winning the Australian Open and Wimbledon in the first two months of the year.
+
+The Serb has been in fine form in the last
+
+# Gemini 1.0 Pro Evaluation of Quality of Generated Text:
+
**Good:** The document provides a detailed account of Novak Djokovic's recent performances and rankings. It includes specific instances of his losses, such as being beaten by a 17-year-old American and a player ranked 111th in the world. The document also highlights Djokovic's past achievements, such as winning the Australian Open and Wimbledon earlier in the year.
+
+**Bad:** The document repeats the sentence "The Serb has been in fine form in the last few months, winning the Australian Open and Wimbledon in the first two months of the year" 17 times, which makes the content repetitive and unnecessarily long.
+
+Rating: 3/5
+
+# Ground-truth:
+
+A year ago, the 25-year-old Serb swept into Paris after amassing one of the most spectacular five months in men's tennis history, winning every tournament he'd entered.
+
+Now, far from a perfect season, the tournament's top seed arrives at the French Open, which starts Sunday in Paris, with fissures in his impenetrable facade.
+
+Djokovic has not won a clay title in 2012 and dropped his last two finals on dirt to defending champ Rafael Nadal- after beating him the previous seven times.
+
+He has also shown flashes of anger and signs of frustration, emotions that, in the past, have undermined his performance.
+
+"I am not comparing last year and this one," Djokovic said Monday following his 7-5, 6-3 loss to Nadal in the rain-delayed final in Rome. "I feel good on the court and I need to make a few adjustment before Paris, but I'll be in top form."
+
+If Nadal's resurgent spring and 45-1 record in Paris make him the favorite, Djokovic still will be dealing with heightened expectations.
+
+A Paris title would place him in rarefied company - one of just seven men to have won all four majors in a career.
+
+Even more historic, he has a chance to hold all four majors simultaneously - a so-called "Djoker Slam" - a feat not achieved by a man since Rod Laver 43 years ago.
+
+"If he were to win four in a row," said Laver-admirer John McEnroe, who came close but never won Roland Garros, "suddenly he'd be like top-10 (of best players in history). There's a lot riding on it."
+
+Djokovic's first taste of defeat in 2011 occurred on the crushed red brick of Paris to Roger Federer, who snapped his perfect season and 43-match winning streak in the semifinals. To put that run in perspective, consider this: Djokovic's first loss in 2012 came nearly three months and 33 matches earlier (to Andy Murray in the semifinals in Dubai in February).
+
+"It might have been the case," Djokovic told USA TODAY Sports when asked if the weight of his faultless performance sapped his energy and clouded his focus. "But I think even under that pressure I played a great tournament. Obviously in the semifinals against Roger, I have done what I could at that stage. I did my best at that moment, and he was a better player."
+
+Nadal went on to defeat Federer in the final, and the loss barely made an impact the Serb's juggernaut season.
+
+Djokovic won Wimbledon and the U.S. Open, snagged the No. 1 ranking and punctuated his dominant 2011 by winning a third consecutive major (and fifth overall) in January at the Australian Open- all three in finals against Nadal.
+
+He has won four of the last five majors and remains the man to beat in best-of-five sets even if the Spaniard has regained his swagger on clay.
+
+"The way he's played in Slams the last year or so has been very, very impressive," says fourth-ranked Scot Murray.
+
+Once the crowd-pleasing third wheel to Federer and Nadal, Djokovic's transformation into world-beater is well chronicled.
+
+Possessed of uncanny flexibility, redirecting ability and the most lethal two-handed backhand in the game, the 6-2 Djokovic had the skills to be a great player.
+
+After leading Serbia to its first Davis Cup championship in 2010, he made the small adjustments - fixing a flawed serve, shoring up his forehand and famously cutting gluten out of his diet - that helped him overcome the mental lapses and suspect fitness that plagued him in the past.
+
+That he was able to realize his potential after playing third fiddle for so long - he finished No. 3 from 2007-2009 behind the Federal-Nadal duopoly - is testament to his resolve.
+
+Djokovic, who as a child told a television interviewer that he little time for fun and games because he was going to become a champion, believed he had it in him.
+
+"I knew that I have qualities, but I wasn't managing to make that final step," he says. "I think it was all mental and it was all growing up and maturing. In the end, I managed to do it."
+
+While he never expected to repeat his 2011 season - a season that earned him a Laureus Sportsman of the Year award and a segment this spring on 60 Minutes-Djokovic has discovered what many before him have said: it's harder to stay on top than to get there.
+
+Djokovic limped into the fall after holding off Nadal in the 2011 U.S. Open final, taking five of his six losses (70-6) post-New York and failing to win any titles.
+
+His body was showing signs of wear, too, when he retired with back pain in the second set against Juan Martin del Potro in Serbia's semifinal Davis Cup defeat to Argentina.
+
+To recoup and recover, he took a two-week vacation with his girlfriend, Jelena Ristic, in the Maldives and then camped out in the heat of Dubai for two weeks in December.
+
+As his close-knit team gathered to plot for the coming year, they determined that the greatest danger to him was fitness and complacency.
+
+"We (tried) to keep him a little bit down on the earth to be humble, modest, to realize that it doesn't come easy," says his longtime coach Marian Vajda of Slovakia. "He earned that spot. He deserved it. It came with work, work and work."
+
+While they worked on his fitness and tweaked his game - including taking the ball earlier - they also decided in a crowded year of events, including the London Olympics, that winning in Paris would be their singular mission.
+
+"Roland Garros is on top of the priority list," Djokovic says.
+
+Djokovic has had to make sacrifices and adjustments, such as skipping his hometown tournament in Belgrade, which is owned by his family. It was a major blow to the event, but coming on the heels of his loss in Monte Carlo and his grandfather's death, Djokovic needed the break.
+
+"He is doing a great job of pacing himself and managing his schedule for majors," ESPN's Brad Gilbert says.
+
+Djokovic, who tried a failed coaching experiment with American Todd Martin two years ago, is bent on keeping things constant.
+
+"My approach hasn't changed, really," Djokovic says. "I still have the same practice routines every day. I didn't change anything in my tennis practices, in my preparations. I have the same places, same people around me, same kind of routines. I have no reason to change, really, because as soon as I try to change something in my career and my team ... it hasn't worked that well. So I keep it very simple."
+
+Statistically, Djokovic has shown little drop-off, except in his vaunted return game, where his year-over-year winning percentage against his opponents' serve has dropped from $42.8\%$ to $34.6\%$ heading into Roland Garros.
+
+But cracks have appeared in his resolve.
+
+Earlier this month Djokovic joined Nadal in lashing out at Madrid's slippery blue clay before surrendering feebly to compatriot Janko Tipsarevic 7-6 (7-2), 6-3 in the quarterfinals.
+
+During blustery conditions in Rome, Djokovic destroyed a frame after losing the first set in a third-round victory against Juan Monaco of Argentina. He broke another during his loss to Nadal two matches later.
+
+"I hope the children watching don't do that," a smiling Djokovic told reporters afterward. "But I show my emotions out there. That's who I am."
+
+Off the court, the fiercely loyal family man suffered a personal loss when his grandfather, Vladimir, died during the Monte Carlo tournament last month. Djokovic continued to play but wasn't himself. The loss of a loved one lingers.
+
+With the immovable force of Nadal looming on clay, there is little time for self-pity
+
+The Spaniard leads Djokovic 11-2 on clay (18-14 overall) and has owned him in Paris, winning all three of their matches (2006-08) without dropping a set.
+
+"He is the Mount Everest on that surface in best out of five," Andre Agassi said of Nadal in a recent call with reporters.
+
+To win, Djokovic will almost certainly have to conquer the 10-time major winner from Spain, who is chasing his own bit of history - surpassing Bjorn Borg with an unprecedented seven French Open crowns.
+
+"It's not going to be an easy tournament to win because we know sort of Rafa owns that place," No. 3 Federer says.
+
+It will be tougher still since Nadal appears to have recovered from hole Djokovic bored in his psyche last year - something Nadal himself admitted.
+
+Nadal knew he was close and never stopped looking for answers, saying after the Rome final: "You don't need to find great things - it is the small things that make the difference."
+
+"No question (Djokovic's) performance last year proves that he's capable of (winning Roland Garros)," said Agassi, whose victory at Roland Garros in 1999 completed his set of Grand Slam trophies. "You have to give a slight nose to Nadal just given how many times he's won it and just what a physical accomplishment it would be to take him off that perch in Paris."
+
+With no streak to defend, some pressure is off. That could be a blessing in disguise. Or at least, it's a tradeoff.
+
+Whatever confidence Djokovic might have shed is counterbalanced by the knowledge that the drumbeat of perfection is a distant sound.
+
+"This one's gonna be a little easier maybe because of the pressure of last year,"
+
+says 16-time Grand Slam winner Federer, who lost to Djokovic in the Rome semifinals.
+
+Agrees Djokovic: "I believe that with the great clay-court season that I had last year and wins back-to-back against Rafa proved to me that I can play equally well on clay as in the other surfaces. I need to keep that in my mind and believe in the success."
+
+# G.1.2. DOCUMENT II
+
+# Ground-truth Record ID:
+
+"newsroom-00002-00064-5"
+
+# Prefix:
+
+(the following article contains a minimum of 15 paragraphs containing a minimum of 1000 words, in plain text only without titles or headings.)
+
+In the Ferreirasdorp section of Johannesburg, I passed a group of young men who had set up an informal carwash business with buckets of soapy water arranged by a curb, and as I turned a corner, there was Nelson Mandela. Though Mandela died on Dec
+
+# Generated:
+
+. 5, 2013, at the age of 95, his face is everywhere in South Africa, on posters, in store windows, on the walls of buildings, and on the sides of buses.
+
+Mandela is the face of South Africa, the face of freedom, and a symbol of the struggle against oppression and apartheid. He was the first black president of the country, serving from 1994 to 1999.
+
+Mandela is a hero to many, but not to everyone.
+
+"I don't like Nelson Mandela," said a young man who was selling a copy of the South African newspaper, The Sowetan, at a street corner in the Hillbrow neighborhood of Johannesburg.
+
+"Why not?" I asked.
+
+"He was a communist," the man said.
+
+I asked him what he thought of apartheid.
+
+"It was not good," he said.
+
+"But Nelson Mandela fought against apartheid," I said.
+
+"Yes, but he was a communist," the man said.
+
+"What is wrong with communism?" I asked.
+
+"It is a bad ideology," the man said.
+
+"What do you think about the current president of South Africa, Jacob Zuma?" I asked.
+
+"He is a bad president," the man said.
+
+"Why?" I asked.
+
+"He is not doing anything for the people," the man said.
+
+"Do you think that Mandela did more for the people of South Africa than Zuma has?" I asked.
+
+"Mandela did more for the people," the man said.
+
+"Why?" I asked.
+
+"He was a good man," the man said.
+
+"What do you think about the ANC?" I asked.
+
+"The ANC is a bad party," the man said.
+
+"Why?" I asked.
+
+"They are corrupt," the man said.
+
+"What do you think about the DA?" I asked.
+
+"The DA is a good party," the man said.
+
+"Why?" I asked.
+
+"They are not corrupt," the man said.
+
+"What do you think about the EFF?" I asked.
+
+"The EFF is a bad party," the man said.
+
+"Why?" I asked.
+
+"They are violent," the man said.
+
+"What do you think about Julius Malema?" I asked.
+
+"Julius Malema is a bad man," the man said.
+
+"Why?" I asked.
+
+"He is a communist," the man said.
+
+"What do you think about the ANC Youth League?" I asked.
+
+"The ANC Youth League is a bad organization," the man said.
+
+"Why?" I asked.
+
+"They are violent," the man said.
+
+"What do you think about the EFF Youth League?" I asked.
+
+"The EFF Youth League is a bad organization," the man said.
+
+"Why?" I asked.
+
+"They are violent," the man said.
+
+"What do you think about the DA Youth League?" I asked.
+
+"The DA Youth League is a good organization," the man said.
+
+"Why?" I asked.
+
+"They are not violent," the man said.
+
+"What do you think about the EFF Women's League?" I asked.
+
+"The EFF Women's League is a bad organization," the man said.
+
+"Why?" I asked.
+
+"They are violent," the man said.
+
+"What do you think about the DA Women's League?" I asked.
+
+"The DA Women's League is a good organization," the man said.
+
+"Why?" I asked.
+
+"They are not violent," the man said.
+
+"What do you think about the ANC Women's League?" I asked.
+
+"The ANC Women's League is a bad organization," the man said.
+
+"Why?" I asked.
+
+"They are violent," the man said.
+
+"What do you think about the EFF Women's League?" I asked.
+
+"The EFF Women's League is a bad organization," the man said.
+
+"Why?" I asked.
+
+"They are violent," the man said.
+
+"What do you think about the DA Women's League?" I asked.
+
+"The DA Women's League is a good organization," the man said.
+
+"Why?" I asked.
+
+"They are not
+
+# Gemini 1.0 Pro Evaluation of Quality of Generated Text:
+
+The document provides a detailed account of a conversation with a young man in Johannesburg regarding his views on Nelson Mandela, politics, and various organizations in South Africa. The document is well-organized, with a clear structure, and effectively presents the man's opinions. However, some might argue that it presents a one-sided perspective, focusing solely on the man's negative opinions without presenting other viewpoints. To provide a more balanced understanding, it could have included additional perspectives or opinions on the discussed topics. I rate this document as a 3 out of 5. 3
+
+# Ground-truth:
+
+In the Ferreirasdorp section of Johannesburg, I passed a group of young men who had set up an informal carwash business with buckets of soapy water arranged by a curb, and as I turned a corner, there was Nelson Mandela.
+
+Though Mandela died on Dec. 5, here he stood vividly rendered in layered steel cutouts, 34 years old, fresh and strong and in the pose of a boxer. He once had professional aspirations, and said the sweet science taught him about defense, attack and strategy, which he applied to law, politics and life in prison.
+
+Unveiled two years ago, the 16-foot sculpture, called "Shadow Boxer," is a lively piece with a great sense of energy and movement. The South African artist Marco Cianfanelli gave it a three-dimensional effect by layering painted-steel sheets in an image from a well-known 1952 photograph by Robert Gosani. The work is set in front of the Johannesburg magistrate court across the street from Chancellor House, where Mandela and his law, and later political, partner Oliver Tambo were young lawyers. Chancellor House, a privately owned corner office building, was itself restored in 2011 and has a timeline exhibition of the two partners' time there.
+
+The attractions have drawn visitors to a neighborhood that has a certain rough-around -the-edges look but is no longer so menacing. Since Mandela's death, the sculpture has become something of a memorial with people laying flowers at its base.
+
+The work speaks volumes: the young Mandela about to wage the fight of his life. It also shows how public art is helping to revivify urban Johannesburg, a seemingly implausible regeneration in this city of more than four million residents, which not that long ago seemed as though it was about to fall through the widening cracks of crime and dilapidation.
+
+The art has helped foster a virtuous cycle: Color and beauty draw people; people promote security, which draws more people, and creates a bigger audience possibility for more art. The improvements have made Joburg cool again - and popular. A new market, the Sheds@1Fox, featuring South African-produced goods, is set to open in October about two blocks from Chancellor House. Johannesburg is now Africa's most visited city, with 2.5 million international visitors in 2013, according to MasterCard's annual survey.
+
+"Art plays an incredibly important role in making people aware that the city is being reborn," said Gerald Garner, a guide and author of two books, "Johannesburg Ten Ahead" and "Joburg Places." "It's one thing to fix pavements and plant trees, and doing those things make a difference. But art makes the city humane. It tells people who were excluded by the old order that they're welcome. It tells stories of people who live here and celebrate the heroes."
+
+I lived in Johannesburg's northern suburbs for three years in the early 1990s and have returned frequently ever since. I saw some of its worst days, when parts of its downtown grid began to feel like an urban dystopia. One of the emblematic events, what some may argue was the low point, actually involved public art, in the late 1990s when vagrants in once-beloved, then abandoned Oppenheimer Park, decapitated a sculpture of impalas and sold the heads as scrap metal.
+
+To be clear, a good number of swaths of Johannesburg remain iffy. But the public art was a revelation for me, and after several days of exploring the city by foot and on public transit last July, I felt much more optimistic about its prospects. In the months since, the pace of positive change has even picked up.
+
+My route was done in several walks, some on my own on the way to various appointments, and then with help from Mr. Garner and later still from a thoughtful 22-year-old artist named Jabulani Fakude whom I hired through a local company called MainStreetWalks (mainstreetwalks.co.za). It's a good idea for visitors to get guides to navigate between some areas that remain sketchy.
+
+A short walk from "Shadow Boxer," the adjoining district, Newtown, has another popular sculpture of two other South African heroes, Walter and Albertina Sisulu, on Diagonal Street. The Sisulus, towering figures of the apartheid struggle, appear not as fighters but as elderly lovers and parents. The Johannesburg artist Marina Walsh, who installed the concrete sculpture in August 2009 after the city solicited proposals from artists, has the couple seated facing each other, their eyes locked in an affectionate gaze. The intention is to show them as equals. The Sisulus, who raised eight children, were married for 59 years, including Mr. Sisulu's 25-year term as a prisoner on Robben Island, an austere prison off the shore of Cape Town. They're exaggerated in scale; huge figures, squat and round, so as a viewer you're almost like a grandchild looking at grandparents, and indeed, you see people moved to sit in their laps - something the artist is happy to see them do.
+
+"I get an enormously warm response from people," Ms. Walsh told me. "One Saturday I was cleaning off a mustache someone had drawn on Albertina's lip. Some guy shouted, 'What are you doing?' He thought I was defacing her. I told him I was cleaning her and asked if he knew who they were. He said, 'Yes, they're my heroes!'"
+
+Not every work is as accessible, and the heroes often don't have names. As you leave Newtown via the landmark Queen Elizabeth Bridge, you quickly come upon "Fire Walker," a collaboration between the renowned South African artists William Kentridge and Gerhard Marx. It's set on a traffic island, and up close, the 36-foot work, erected with steel pieces, looks like a mass of exploded fragments. But like an Impressionist painting, the image is clearer farther away - that of a woman carrying a burning caldron on her head. It's a sight you rarely see in Johannesburg now, these women with braai mealies (roast corn) that they carry with flames crackling atop their heads. The work is a tribute to the hard ways people scrabble together a living.
+
+I wanted to visit Braamfontein, a precinct on the northern side of the city with two of South Africa's major universities, the University of the Witwatersrand and the University of Johannesburg. It has some of the earliest major public artworks, including "Juta Street Trees," nine large metal tree sculptures, and the "Eland," a concrete sculpture of Africa's largest antelope, which were installed in 2006 and 2007, respectively. The aesthetic upgrade has helped transform the student-dominated area, which even just four years ago still felt threatening, into a place with lively night life and a popular Saturday Neighbourgoods Market of artisan food, fabrics and jewelry.
+
+Up the hill from Juta Street, in the university area, I went to the Constitutional Court of South Africa, the country's highest court and itself a work of art both for its architecture and interior design. Set on Constitution Hill with views of city streets leading out toward the hilly and affluent northern suburbs, the court's design was intended to embody the idea of traditional justice, the way elders would hear disputes before whole communities under a tree. The notion of transparency is embodied in its openness, the glass exterior. The interior pillars are erected at angles that are meant to suggest the branches of a tree, the public space where legal decisions were rendered. Wood carvings, skins and art are integrated into the interior and are also curated in exhibits.
+
+Not all public art has official sanction, or is even legal for that matter. Wall murals have added vibrancy, too, though much of the city is still struggling to accommodate street artists who have plenty of urban wall space but face obstacles in getting permits; they consequently resort to "bombing," or illegally painting on, buildings, and for their trouble will spend occasional nights in jail or have to resort to bribing the police to go away.
+
+Mr. Fakude, the young street artist, does his work deep into the night, and moonlights by day giving walking tours for 250 rand, about $24, at 10.40 rand to the dollar - in my case, a graffiti tour of his and others' work in the artsy Market Theater area on the western side of the city. Mr. Fakude's work included one wall piece that was playful at a glance, a one-eyed cartoon character he'd invented, but that had a serious message that addressed the stark reality of crime and violence. So, too, did others. "We want to get the message across that drugs are not cool," he said. It gave the ostensible illegality of the street artists' work a certain irony, given the grass-roots appeal to nonviolent, cleaner living. It has gotten Mr. Fakude some attention, however. He was invited to paint murals in Berlin this year, though he told me recently that he was not able to travel there.
+
+One place where murals are getting an official stamp of approval is one of Joburg's most trendy and ambitious new places, the Maboneng Precinct, once a vast wasteland of disused industrial spaces and warehouses. In recent years, the 250-acre zone of reclaimed and repurposed industrial buildings has commissioned several dozen murals by artists from Berlin, Baltimore and elsewhere. In January, a 10-story mural was unveiled of "i am because we are." The artist Ricky Lee Gordon, who goes by Freddy Sam, used the same photo as "Shadow Boxer," the Mandela sculpture in Ferreirasdorp. The revitalization of Maboneng Precinct began in 2009 with Arts on Main, a complex of studios and galleries on Main Street. Indeed, the first public art work in the precinct was the word "Maboneng," which is Sotho for "place of light" and was installed as text art on the Arts on Main rooftop.
+
+Arts on Main's success lifted Maboneng's profile. Mr. Kentridge, arguably South Africa's best-known living artist, was among the first to set up a studio. Five years hence, the area is a vibrant hive of artists' lofts, mixed-use spaces, live theater, galleries, a landscaped bar-cafe and a boutique hotel called the 12 Decades, with a dozen rooms designed with artwork and artifacts to celebrate each decade in the life of the city, which started in 1886 after a gold rush.
+
+Each building has a lighthouse design incorporated. There are 30 murals, with five more scheduled to be completed by year's end. In addition to the murals, an old canal is being restored, its banks featuring life-size steel cutouts of people who made important contributions to the precinct.
+
+One of the last areas I explored was the mining district in Johannesburg's commercial center. The area is strewn with artifacts of mining accouterments as public art in a city built on mining. Here I found the happy postscript to the story of "Leaping Impalas," the sculpture by Herman Wald in Oppenheimer Park that had been decapitated by vagrants in the 1990s. In 2002, new heads were fashioned and welded on, and the sculpture was installed in front of the headquarters of Anglo-American, one of the world's largest mining companies. You can still see the markings on the necks, like scars that reflect the greater story of a city that is in the process of healing.
+
+A version of this article appears in print on July 13, 2014, on page TR10 of the New York edition with the headline: A City Seen Through Artists' Eyes.
+
+# G.1.3. DOCUMENT III
+
+# Ground-truth Record ID:
+
+"newsroom-00002-00064-14"
+
+# Prefix:
+
+(the following article contains a minimum of 15 paragraphs containing a minimum of 1000 words, in plain text only without titles or headings.)
+
+Hanley Ramirez's defense at shortstop only cost the Dodgers a perfect game Wednesday night. What might it cost the team in September and potentially October? What might it cost Ramirez in free agency after that? Many expected the Dodgers to sign Ramirez to an extension by now. It hasn't happened, in
+
+# Generated:
+
+part because they don't know if he'll be a shortstop next year.
+
+Ramirez has made 13 errors in 70 games at shortstop this season. He has a minus-7 defensive runs saved, the second-worst mark among shortstops with at least 400 innings. He has a minus-11 total zone rating, the worst mark among shortstops with at least 400 innings.
+
+Ramirez has made 13 errors in 70 games at shortstop this season.
+
+Ramirez has made 13 errors in 70 games at shortstop this season.
+
+Ramirez has made 13 errors in 70 games at shortstop this season.
+
+Ramirez has made 13 errors in 70 games at shortstop this season.
+
+Ramirez has made 13 errors in 70 games at shortstop this season.
+
+Ramirez has made 13 errors in 70 games at shortstop this season.
+
+Ramirez has made 13 errors in 70 games at shortstop this season.
+
+Ramirez has made 13 errors in 70 games at shortstop this season.
+
+Ramirez has made 13 errors in 70 games at shortstop this season.
+
+Ramirez has made 13 errors in 70 games at shortstop this season.
+
+Ramirez has made 13 errors in 70 games at shortstop this season.
+
+Ramirez has made 13 errors in 70 games at shortstop this season.
+
+Ramirez has made 13 errors in 70 games at shortstop this season.
+
+Ramirez has made 13 errors in 70 games at shortstop this season.
+
+Ramirez has made 13 errors in 70 games at shortstop this season.
+
+Ramirez has made 13 errors in 70 games at shortstop this season.
+
+Ramirez has made 13 errors in 70 games at shortstop this season.
+
+Ramirez has made 13 errors in 70 games at shortstop this season.
+
+Ramirez has made 13 errors in 70 games at shortstop this season.
+
+Ramirez has made 13 errors in 70 games at shortstop this season.
+
+Ramirez has made 13 errors in 70 games at shortstop this season.
+
+Ramirez has made 13 errors in 70 games at shortstop this season. Ramirez has made 13 errors in 70 games at shortstop this season. Ramirez has made 13 errors in 70 games at shortstop this season. Ramirez has made 13 errors in 70 games at shortstop this season. Ramirez has made 13 errors in 70 games at shortstop this season. Ramirez has made 13 errors in 69 games at shortstop this season. Ramirez has made 13 errors in 70 games at shortstop this season. Ramirez has made 13 errors in 70 games at shortstop this season. Ramirez has made 13 errors in 70 games at shortstop this season. Ramirez has made 13 errors in 70 games at shortstop this season. Ramirez has made 13 errors in 70 games at shortstop this season. Ramirez has made 13 errors in 70 games at shortstop this season. Ramirez has made 13 errors in 70 games at shortstop this season. Ramirez has made 13 errors in 70 games at shortstop this season. Ramirez has made 13 errors in 70 games AT shortstop this season. Ramirez has made 13 errors in 70 games AT shortstop this season. Ramirez has made 13 errors in 70 games AT shortstop this season. Ramirez has made 13 errors in 70 games AT shortstop this season. Ramirez has made 13 errors in 70 games AT shortstop this season. Ramirez hasmade 13 errors in 70 games at shortstop this season. Ramirez hasmade 13 errors in 70 games at shortstop this season. Ramirez hasmade 13 errors in 70 games at shortstop this season. Ramirez hasmade 13 errors in 70 games at shortstop this season. Ramirez hasmade 13 errors in 70 games at shortstop this season. Ramirez has made 13 errors in 70 games at shortstop this season. Ramirez hasmade 13 errors in 70 games at shortstop this season.
+
+# Gemini 1.0 Pro Evaluation of Quality of Generated Text:
+
+The quality of the document is poor. The document is repetitive and does not provide any new or insightful information. The same sentence is repeated over and over again, making it difficult to read and understand. The document does not provide any context or explanation for the errors Ramirez has made, making it impossible to assess the severity of his performance. I would rate this document a 1 out of 5.
+
+1
+
+# Ground-truth:
+
+Hanley Ramirez's defense at shortstop only cost the Dodgers a perfect game Wednesday night. What might it cost the team in September and potentially October? What might it cost Ramirez in free agency after that?
+
+Many expected the Dodgers to sign Ramirez to an extension by now. It hasn't happened, in part because the Dodgers want to see him stay healthy, in part because they might not be sure what the heck to do with him long term.
+
+Third base could be a possibility, but the Dodgers would need to trade Juan Uribe, a popular clubhouse figure who is under contract for $6.5 million next season. Some Dodgers officials have toyed with the idea of playing Ramirez in left field, but you may have noticed that the team has too many outfielders already.
+
+Maybe the Dodgers could win the 2014 World Series with Ramirez at short - heck, the Red Sox pulled off such a feat with Julio Lugo in '07. Advanced metrics, however, portray Ramirez as one of the worst defensive shortstops in baseball. And strong up-the-middle defense should be a requirement for a team built around starting pitching, no?
+
+Erisbel Arruebarrena, a Cuban defector in the first year of a five-year, $25 million contract, was a major defensive upgrade in his brief stint with the club. But the Dodgers have said that they do not want to shift Ramirez between short and third, perhaps out of respect for Ramirez's wishes. Besides, Uribe could return from his strained right hamstring on Monday.
+
+For this season, it appears, the Dodgers have little choice but to play Ramirez at short; they probably value his offense too much to trade him. But if Ramirez hits free agency, his market could turn problematic at a time when teams continue to place increased emphasis on defense. Indeed, what would it say about Ramirez if the Dodgers only were willing to make him a qualifying offer?
+
+Let's not get too far ahead of ourselves. Ramirez and many of his teammates are on the uptick offensively. The Dodgers, winners of eight of their last 11, are only four games behind the Giants in the NL West. A big run to the postseason, a World Series title, and Ramirez's attributes again might outweigh his deficiencies.
+
+Still, his poor throw that cost Clayton Kershaw a perfect game came on a play, as Hall of Fame broadcaster Vin Scully noted, that most shortstops make. Ramirez didn't make it. He is costing his team. He is costing himself.
+
+Beyond Ramirez, questions persist about the Dodgers, just as they do for every club.
+
+One rival executive said Thursday that the Dodgers' best outfield would be Matt Kemp in left, Joc Pederson in center and Yasiel Puig in right. The exec added that the Dodgers should trade one of their left-handed hitting outfielders, Andre Ethier or Carl Crawford, and keep the other in reserve.
+
+Pederson, while batting .320 with a 1.016 OPS in the hitter-friendly Pacific Coast League, continues to strike out a ton, including 13 times in his last 28 at-bats. Crawford remains on the DL with a sprained left ankle. Ethier has only three homers and a .691 OPS. And let's not even talk about their respective contracts.
+
+Kemp finally is getting hot, his disposition improving with his swing. Many in the industry, however, believe he ultimately will be moved - most likely in the offseason - due to his tempestuous relationship with some of his superiors.
+
+
+The Dodgers trail only the Cardinals and Athletics in rotation ERA. Kershaw, Zack Greinke and Hyun-Jin Ryu are perhaps the best 1-2-3 in the game. But does anyone seriously expect Josh Beckett and Dan Haren to hold up the entire season?
+
+The loss of Chad Billingsley, who will undergo season-ending surgery to repair a partially torn flexor tendon in his right elbow, hurt the Dodgers' depth. The team lacks a prospect as polished as Marlins left-hander Andrew Heaney, who made his major-league debut Thursday night. The situation, in the words of one club official, is "precarious."
+
+So, expect the Dodgers to be in the market for a starter.
+
+THE ORGANIZATION AS A WHOLE
+
+The Dodgers have yet to slow down their spending, so it's natural for them to be linked to a pitcher such as the Rays' David Price, who could give them a third pitcher earning $20 million next season.
+
+The team, however, likely would need to part with two or more of its top prospects to get Price, and its farm system isn't terribly deep to begin with (Price's teammate, super-utility man Ben Zobrist, is another potential LA target).
+
+Club officials keep talking about developing youngsters such as Pederson and shortstop Corey Seager so they don't need to maintain a $230 million payroll. They've made progress not only in signing players from Cuba, but also Mexico, Venezuela and the Dominican Republic. Still, are they truly intent on developing a player-development machine?
+
+The July 31 non-waiver deadline could prove the next test.
+
+Much has been made of Price's loss of fastball velocity, from an average of 95.3 mph in 2012 to 93.5 in '13 to 92.6 this season. But can anyone seriously argue that his stuff is significantly diminished?
+
+As pointed out by Rays Index earlier this week, Price is on pace for 280 strikeouts this season, which would be the most since Randy Johnson in 2004. Price's 12.1 strikeout-to-walk ratio, meanwhile, would break the record set by Bret Saberhagen in 1994 (11.0).
+
+Based upon those numbers, it's impossible to say that Price is losing it, even though his 3.93 ERA is - for him - unusually high. Poor luck, however, may help explain that number.
+
+Price's home run rate and opponents' batting average on balls in play are well above league averages, and probably will revert to his career norms as the season progresses.
+
+Meanwhile, Price's FIP (fielding independent pitching) is within the same range it has been for the past two seasons. That statistic is considered an indicator of future performance, signaling that Price's ERA likely will drop.
+
+The biggest issue for teams that want to acquire Price, as FanGraphs' Dave Cameron wrote Wednesday, is not his pitching. No, it's his expected $18 million to $20 million salary in his final year of arbitration in 2015.
+
+Few clubs will want to part with high-end prospects while absorbing such a payroll hit. Then again, the numbers are relative. Price's salary next season will not be terribly above the qualifying offer for free agents, which is expected to be $15 million to $16 million.
+
+The Phillies, despite their recent surge, surely recognize that they need to get younger. But even if the team has the will to be active sellers, it might not have a way.
+
+Second baseman Chase Utley, after telling club officials last July that he did not want to be traded, signed a two-year extension with three club options. Now, less than a year later, he's going to reverse course, waive his 10-and-5 rights and approve a deal?
+
+Shortstop Jimmy Rollins, another 10-and-5 player, left open the possibility that he would approve a trade after breaking the team's all-time hit record last weekend. But realistically, where is Rollins going?
+
+The shortstop's wife is from Philadelphia. They are the parents of two young children. And good luck finding the right fit.
+
+Rollins' $11 million option for 2015 is almost certain to vest; he would not be just a rental. No, he would need to be comfortable spending the rest of this season in his new city, and all of next season as well.
+
+Neither team in Rollins' native Bay Area needs a shortstop. And while Rollins might crave the spotlight in either New York or LA, he wouldn't go to the Mets, the Yankees aren't going to displace Derek Jeter and neither the Angels nor Dodgers figures to pursue a shortstop.
+
+Then there is left-hander Cliff Lee, who - if all goes well - will return from his strained elbow before the All-Star break. By July 31, Lee still will have more than $45 million left on his contract, including a $12.5 million buyout for 2016.
+
+The financial obligation number actually might be higher than that for many; Lee can block trades to 20 teams, and could require his $27.5 million vesting option for '16 to be guaranteed to join a club such as the Dodgers.
+
+Phillies GM Ruben Amaro Jr. has said he would include cash in certain deals. The Phillies always could trade Lee during the August waiver period. But unless the team paid the majority of his contract, it could not expect a bounty in return.
+
+Closer Jonathan Papelbon could get moved if the Phillies pay down his $13 million salary; he, too, has a vesting option for '16, and can only be traded to 12 teams without his approval.
+
+Unless the Phils get truly creative, they will find it difficult to make impact moves.
+
+I normally do not favor starting pitchers for MVP unless the field lacks strong position-player candidates; I voted for then-Red Sox outfielder Jacoby Ellsbury in 2011 over the eventual winner, Tigers right-hander Justin Verlander.
+
+An interesting MVP case, however, can be made for Yankees right-hander Masahiro Tanaka, at least to this point of the season.
+
+The Yankees are 12-2 when Tanaka starts, 25-31 when he does not. Even more striking, their run differential in Tanaka's starts is plus-33, while in all other games it's minus-54.
+
+In other words, the Yankees are worse than the Rays (minus-48) when Tanaka isn't pitching and nearly as bad as the Padres (minus-62) and Diamondbacks (minus-64).
+
+Of course, Wins Above Replacement (WAR) tells a different story. Tanaka is third among pitchers at 2.9 in the FanGraphs version of the metric, and well behind Mike Trout, the leader among position players at 4.7.
+
+Outfielder J.D. Martinez is one of the Tigers' few recent bright spots; he has three homers during his nine-game hitting streak, and is now batting .300 with a .903 OPS in 108 plate appearances on the season.
+
+Martinez, 26, spent the off-season watching video of Miguel Cabrera, Ryan Braun and other accomplished hitters, then completely overhauled his swing and approach. The Astros released him in spring training, and he signed a minor-league contract with the Tigers two days later.
+
+"If you watch video of me in the past, it's the complete opposite - it's that extreme," he said. "It's kind of like I re-invented myself."
+
+Martinez worked with his personal hitting coach for three weeks, then implemented his new ideas in Venezuela. Mostly, he's trying to line up directly to the ball and keep the barrel of his bat in the zone longer - the way Cabrera does.
+
+# G.2. Small Values of H
+
+Data domain: Newsroom.
+
+- Generating:
+  - Model Name: Gemini 1.0 Pro.
+  - Decoding Temperature: 1.0
+  - Prompting Method: summary + keywords
+- Fractal Parameters:
+  - Scoring Model Name: Gemini 1.0 Pro.
+  - Hölder Exponent: $\mathrm{S} = 0.417$
+  - Hurst Exponent: $\mathrm{H} = 0.601$
+
+# G.2.1. DOCUMENT I
+
+# Ground-truth Record ID:
+
+"newsroom-00001-00064-6"
+
+# Prefix:
+
+Write about the following in a minimum of 15 paragraphs containing a minimum of 1000 words, in plain text only without titles or headings: Novak Djokovic enters French Open with some vulnerabilities after dropping two clay finals to Rafael Nadal and showing frustration on court. Despite Nadal's resurgence, Djokovic still faces heightened expectations, as a Paris title would make him one of seven men with all four majors in a career, and potentially the first man since 1969 to hold all four simultaneously.
+
+Using these keywords: Novak Djokovic, French Open, clay, Nadal, Roger Federer, Australian Open, Wimbledon, US Open, Davis Cup, gluten-free diet, Laureus Sportsman of the Year, mental toughness, career Grand Slam, Djoker Slam, Rod Laver, John McEnroe, Roland Garros
+
+# Generated:
+
+After dominating the Australian Open and securing his 22nd Grand Slam title earlier this year, Novak Djokovic enters the French Open as the man to beat yet again. However, the usually confident Serb arrived at Roland Garros with a couple of concerning elements surrounding his game. In the lead-up to the clay-court major, Djokovic reached two finals on the surface but came up short against the "King of Clay," Rafael Nadal, each time in Rome and Madrid. The losses highlighted some vulnerabilities in Djokovic's game on clay, a surface which demands supreme confidence and comfort.
+
+Djokovic's frustrations have also been evident on the court, with the typically composed player exhibiting uncharacteristic emotional outbursts in recent tournaments. These moments have fueled concerns that he may not be in the right frame of mind to claim a third French Open title. Despite his recent setbacks, the expectations on Djokovic's shoulders remain immense. A title in Paris would not only draw him level with his great rival Roger Federer on 20 Grand Slams but also make him just the seventh man to complete a career Grand Slam. Moreover, the triumph could see Djokovic become the first man since Rod Laver in 1969 to hold all four major titles simultaneously. The significance of this incredible achievement cannot be understated, and Djokovic is well aware of the opportunity that awaits him.
+
+However, the task facing Djokovic at Roland Garros is not without its challenges. Nadal, the 13-time champion, remains a formidable force on clay and is always a dangerous opponent in Paris. The French Open demands exceptional mental and physical endurance, and it is here that Djokovic has arguably established himself as the greatest player of his era. His ability to stay calm, composed, and focused amidst the intense pressure and grueling demands of the tournament has been a defining characteristic of his many successes. This mental toughness will once again be put to the test if he is to navigate the challenges ahead.
+
+Djokovic's remarkable career includes triumphs at the Australian Open, Wimbledon, and the US Open, but the French Open has proven to be the most elusive major for him. His lone titles at Roland Garros came in 2016 and 2021 and while he reached the final twice more, the path to clay-court glory in Paris has been fraught with hurdles. Djokovic's hunger and determination to conquer the French Open for a third time will be pivotal in his quest for supremacy.
+
+Throughout his illustrious career, Djokovic has demonstrated unwavering commitment and dedication to the sport. From adopting a gluten-free diet to improve his physical well-being to focusing on mental resilience training, he leaves no stone unturned in his relentless pursuit of greatness. His unwavering dedication has not only manifested on the tennis court but has also led to several philanthropic endeavors, including the Novak Djokovic Foundation. Djokovic is also a recipient of the prestigious Laureus Sportsman of the Year award, a testament to his impact on both the sport and society.
+
+As the French Open unfolds, Djokovic stands as the frontrunner for the title, despite his recent setbacks. The potential for him to achieve the "Djoker Slam" and join Laver's elite company of completing a career Grand Slam is a tantalizing prospect. Yet, the road to glory will demand that Djokovic overcome not only his formidable opponents but also harness his emotions, and summon the mental fortitude that has defined his legendary career. With 10 French Open titles to his name, Nadal will be aiming to continue his dominance on the clay and prevent Djokovic from etching his name in history.
+
+The stakes are incredibly high for Djokovic at the French Open, as he has the opportunity to become the first man in over half a century to hold all four major titles simultaneously. The pressure is immense, but the Serbian star has repeatedly shown his ability to rise to the occasion. The tennis world awaits with bated breath to see if Djokovic can conquer his clay demons, defy the odds, and make history by completing a career Grand Slam and achieving the "Djoker Slam."
+
+# Gemini 1.0 Pro Evaluation of Quality of Generated Text:
+
+**Good:**
+- Comprehensive overview of Djokovic's current form, challenges, and aspirations.
+- Detailed analysis of his recent struggles on clay and his past successes at the French Open.
+- Acknowledgment of Djokovic's commitment to physical and mental well-being.
+- Exploration of the significance of achieving a career Grand Slam and the "Djoker Slam."
+
+**Bad:**
+
+- Some sections may lack depth or specific examples to further illustrate Djokovic's mental and physical attributes.
+- While the document highlights Djokovic's potential to overcome his clay-court demons and complete a career Grand Slam, it could benefit from discussing potential obstacles or rival players he may face in his quest.
+
+Rating: 4
+
+# Ground-truth:
+
+A year ago, the 25-year-old Serb swept into Paris after amassing one of the most spectacular five months in men's tennis history, winning every tournament he'd entered.
+
+Now, far from a perfect season, the tournament's top seed arrives at the French Open, which starts Sunday in Paris, with fissures in his impenetrable facade.
+
+Djokovic has not won a clay title in 2012 and dropped his last two finals on dirt to defending champ Rafael Nadal- after beating him the previous seven times.
+
+He has also shown flashes of anger and signs of frustration, emotions that, in the past, have undermined his performance.
+
+"I am not comparing last year and this one," Djokovic said Monday following his 7-5, 6-3 loss to Nadal in the rain-delayed final in Rome. "I feel good on the court and I need to make a few adjustment before Paris, but I'll be in top form."
+
+If Nadal's resurgent spring and 45-1 record in Paris make him the favorite, Djokovic still will be dealing with heightened expectations.
+
+A Paris title would place him in rarefied company - one of just seven men to have won all four majors in a career.
+
+Even more historic, he has a chance to hold all four majors simultaneously - a so-called "Djoker Slam" - a feat not achieved by a man since Rod Laver 43 years ago.
+
+"If he were to win four in a row," said Laver-admirer John McEnroe, who came close but never won Roland Garros, "suddenly he'd be like top-10 (of best players in history). There's a lot riding on it."
+
+Djokovic's first taste of defeat in 2011 occurred on the crushed red brick of Paris to Roger Federer, who snapped his perfect season and 43-match winning streak in the semifinals. To put that run in perspective, consider this: Djokovic's first loss in 2012 came nearly three months and 33 matches earlier (to Andy Murray in the semifinals in Dubai in February).
+
+"It might have been the case," Djokovic told USA TODAY Sports when asked if the weight of his faultless performance sapped his energy and clouded his focus. "But I think even under that pressure I played a great tournament. Obviously in the semifinals against Roger, I have done what I could at that stage. I did my best at that moment, and he was a better player."
+
Nadal went on to defeat Federer in the final, and the loss barely made an impact on the Serb's juggernaut season.
+
Djokovic won Wimbledon and the U.S. Open, snagged the No. 1 ranking and punctuated his dominant 2011 by winning a third consecutive major (and fifth overall) in January at the Australian Open - all three in finals against Nadal.
+
+He has won four of the last five majors and remains the man to beat in best-of-five sets even if the Spaniard has regained his swagger on clay.
+
+"The way he's played in Slams the last year or so has been very, very impressive," says fourth-ranked Scot Murray.
+
+Once the crowd-pleasing third wheel to Federer and Nadal, Djokovic's transformation into world-beater is well chronicled.
+
+Possessed of uncanny flexibility, redirecting ability and the most lethal two-handed backhand in the game, the 6-2 Djokovic had the skills to be a great player.
+
+After leading Serbia to its first Davis Cup championship in 2010, he made the small adjustments - fixing a flawed serve, shoring up his forehand and famously cutting gluten out of his diet - that helped him overcome the mental lapses and suspect fitness that plagued him in the past.
+
That he was able to realize his potential after playing third fiddle for so long - he finished No. 3 from 2007-2009 behind the Federer-Nadal duopoly - is testament to his resolve.
+
Djokovic, who as a child told a television interviewer that he had little time for fun and games because he was going to become a champion, believed he had it in him.
+
"I knew that I have qualities, but I wasn't managing to make that final step," he says. "I think it was all mental and it was all growing up and maturing. In the end, I managed to do it."
+
While he never expected to repeat his 2011 season - a season that earned him a Laureus Sportsman of the Year award and a segment this spring on 60 Minutes - Djokovic has discovered what many before him have said: it's harder to stay on top than to get there.
+
+Djokovic limped into the fall after holding off Nadal in the 2011 U.S. Open final, taking five of his six losses (70-6) post-New York and failing to win any titles.
+
+His body was showing signs of wear, too, when he retired with back pain in the second set against Juan Martin del Potro in Serbia's semifinal Davis Cup defeat to Argentina.
+
+To recoup and recover, he took a two-week vacation with his girlfriend, Jelena Ristic, in the Maldives and then camped out in the heat of Dubai for two weeks in December.
+
+As his close-knit team gathered to plot for the coming year, they determined that the greatest danger to him was fitness and complacency.
+
+"We (tried) to keep him a little bit down on the earth to be humble, modest, to realize that it doesn't come easy," says his longtime coach Marian Vajda of Slovakia. "He earned that spot. He deserved it. It came with work, work and work."
+
+While they worked on his fitness and tweaked his game - including taking the ball earlier - they also decided in a crowded year of events, including the London Olympics, that winning in Paris would be their singular mission.
+
+"Roland Garros is on top of the priority list," Djokovic says.
+
+Djokovic has had to make sacrifices and adjustments, such as skipping his hometown tournament in Belgrade, which is owned by his family. It was a major blow to the event, but coming on the heels of his loss in Monte Carlo and his grandfather's death, Djokovic needed the break.
+
+"He is doing a great job of pacing himself and managing his schedule for majors," ESPN's Brad Gilbert says.
+
+Djokovic, who tried a failed coaching experiment with American Todd Martin two years ago, is bent on keeping things constant.
+
+"My approach hasn't changed, really," Djokovic says. "I still have the same practice routines every day. I didn't change anything in my tennis practices, in my preparations. I have the same places, same people around me, same kind of routines. I have no reason to change, really, because as soon as I try to change something in my career and my team ... it hasn't worked that well. So I keep it very simple."
+
Statistically, Djokovic has shown little drop-off, except in his vaunted return game, where his year-over-year winning percentage against his opponents' serve has dropped from 42.8% to 34.6% heading into Roland Garros.
+
+But cracks have appeared in his resolve.
+
+Earlier this month Djokovic joined Nadal in lashing out at Madrid's slippery blue clay before surrendering feebly to compatriot Janko Tipsarevic 7-6 (7-2), 6-3 in the quarterfinals.
+
+During blustery conditions in Rome, Djokovic destroyed a frame after losing the first set in a third-round victory against Juan Monaco of Argentina. He broke another during his loss to Nadal two matches later.
+
"I hope the children watching don't do that," a smiling Djokovic told reporters afterward. "But I show my emotions out there. That's who I am."
+
+Off the court, the fiercely loyal family man suffered a personal loss when his grandfather, Vladimir, died during the Monte Carlo tournament last month. Djokovic continued to play but wasn't himself. The loss of a loved one lingers.
+
+With the immovable force of Nadal looming on clay, there is little time for self-pity.
+
+The Spaniard leads Djokovic 11-2 on clay (18-14 overall) and has owned him in Paris, winning all three of their matches (2006-08) without dropping a set.
+
+"He is the Mount Everest on that surface in best out of five," Andre Agassi said of Nadal in a recent call with reporters.
+
+To win, Djokovic will almost certainly have to conquer the 10-time major winner from Spain, who is chasing his own bit of history - surpassing Bjorn Borg with an unprecedented seven French Open crowns.
+
+"It's not going to be an easy tournament to win because we know sort of Rafa owns that place," No. 3 Federer says.
+
It will be tougher still since Nadal appears to have recovered from the hole Djokovic bored into his psyche last year - something Nadal himself admitted.
+
+Nadal knew he was close and never stopped looking for answers, saying after the Rome final: "You don't need to find great things - it is the small things that make the difference."
+
"No question (Djokovic's) performance last year proves that he's capable of (winning Roland Garros)," said Agassi, whose victory at Roland Garros in 1999 completed his set of Grand Slam trophies. "You have to give a slight nod to Nadal just given how many times he's won it and just what a physical accomplishment it would be to take him off that perch in Paris."
+
+With no streak to defend, some pressure is off. That could be a blessing in disguise. Or at least, it's a tradeoff.
+
+Whatever confidence Djokovic might have shed is counterbalanced by the knowledge that the drumbeat of perfection is a distant sound.
+
+"This one's gonna be a little easier maybe because of the pressure of last year," says 16-time Grand Slam winner Federer, who lost to Djokovic in the Rome semifinals.
+
+Agrees Djokovic: "I believe that with the great clay-court season that I had last year and wins back-to-back against Rafa proved to me that I can play equally well on clay as in the other surfaces. I need to keep that in my mind and believe in the success."
+
+# G.2.2. DOCUMENT II
+
+# Ground-truth Record ID:
+
+"newsroom-00002-00064-5"
+
+# Prefix:
+
+Write about the following in a minimum of 15 paragraphs containing a minimum of 1000 words, in plain text only without titles or headings: The text describes a sculpture of Nelson Mandela in Johannesburg, South Africa, honoring his boxing aspirations and his contributions to law, politics, and life in prison. The sculpture, titled "Shadow Boxer", brings attention to Mandela's youth, vitality, and strength. It symbolizes public art's role in urban revitalization and the city's transformation from a neglected area to a vibrant and popular destination.
+
+Using these keywords: Johannesburg, Mandela sculpture, Shadow Boxer, public art, regeneration, urban renewal, city transformation, crime, dilapidation, art as rejuvenation, tourism, African city, art as celebration, guide, navigation, Newtown, Walter and Albertina Sisulu sculpture, apartheid struggle
+
+# Generated:
+
+Nestled in the bustling cityscape of Johannesburg, South Africa, a striking sculpture pays homage to the extraordinary life of Nelson Mandela. Known as "Shadow Boxer," this masterpiece celebrates Mandela's aspirations as a boxer, his unwavering commitment to law and politics, and his resilience during his 27 years of imprisonment. Standing tall and proud, the sculpture serves as a poignant reminder of Mandela's indomitable spirit and his pivotal role in shaping South Africa's history.
+
+Crafted by renowned South African sculptor Ledelle Moe, "Shadow Boxer" captures the essence of Mandela's youth, vitality, and unwavering strength. The sculpture depicts a young Mandela poised in a boxing stance, his body taut and muscles rippling with anticipation. Moe's meticulous attention to detail brings Mandela's boxing gloves to life, their weathered surfaces bearing the marks of countless hours spent honing his skills in the ring.
+
+Yet, "Shadow Boxer" transcends the realm of sport, becoming a metaphor for Mandela's relentless determination to challenge injustice and fight for a better future for his country. The sculpture serves as a powerful symbol of his unwavering commitment to the pursuit of social justice, reflecting his belief in the power of peaceful resistance and non-violent action. Through his advocacy and leadership, Mandela left an enduring legacy that continues to inspire generations.
+
+Beyond its artistic merit, "Shadow Boxer" plays a pivotal role in Johannesburg's urban revitalization efforts. Located in Newtown, once a neglected and crime-ridden area, the sculpture has become a catalyst for the neighborhood's transformation into a vibrant and popular destination. The influx of visitors drawn to the iconic artwork has sparked economic growth, creating new jobs and stimulating local businesses.
+
+Furthermore, "Shadow Boxer" exemplifies the transformative power of public art in fostering social cohesion and creating a sense of place. Its prominent position in Newtown serves as a constant reminder of Mandela's life and values, encouraging citizens and visitors alike to reflect on the importance of human rights, equality, and reconciliation. The sculpture serves as a beacon of hope, inspiring individuals to embrace diversity and work together towards a more inclusive and just society.
+
Additionally, "Shadow Boxer" transcends its physical form, becoming an essential guide and navigation point within the Johannesburg cityscape. Its distinctive silhouette is easily recognizable from afar, serving as a landmark that helps visitors orient themselves and explore the surrounding area. This aspect of the sculpture reinforces its role as a symbol of urban regeneration and renewal, connecting people with their city's history and cultural heritage.
+
+Situated in close proximity to the Walter and Albertina Sisulu sculpture, "Shadow Boxer" highlights the interconnectedness of South Africa's liberation struggle. Walter Sisulu, alongside Nelson Mandela, played a pivotal role in the anti-apartheid movement, serving as a mentor and comrade. The juxtaposition of these two sculptures underscores the collective efforts of South Africa's fearless leaders who dedicated their lives to dismantling apartheid and ushering in a new era of freedom and democracy.
+
+As the sun sets over Johannesburg, "Shadow Boxer" takes on a new dimension. Illuminated by carefully placed lights, the sculpture seems to come alive, casting long shadows that dance and flicker across the surrounding buildings. This interplay of light and shadow creates a mesmerizing spectacle, transforming "Shadow Boxer" into a vibrant beacon of hope and inspiration.
+
+Visitors from around the world flock to Johannesburg to witness this awe-inspiring sculpture firsthand. It has become an essential stop on the itinerary of any traveler seeking to understand South Africa's complex past and its journey towards reconciliation and transformation. The sculpture is not just a tribute to Nelson Mandela; it is a testament to the resilience and optimism of the South African people who, like Mandela, refused to be defeated by adversity.
+
+"Shadow Boxer" stands as a powerful reminder that even in the darkest of times, the human spirit can triumph over oppression and injustice. It encourages individuals to confront challenges with courage, compassion, and a relentless pursuit of justice. As Johannesburg continues to evolve and thrive, "Shadow Boxer" will undoubtedly remain a cherished landmark, serving as a source of pride for generations to come.
+
+Indeed, "Shadow Boxer" is more than just a sculpture; it is a living testament to the extraordinary life and legacy of Nelson Mandela. Its presence in Johannesburg serves as a constant reminder of the values he held dear and the profound impact he had on South Africa and the world. Through his unwavering determination and unwavering commitment to freedom and equality, Mandela left an enduring mark on history, inspiring countless others to strive for a better future. "Shadow Boxer" stands as a timeless tribute to his indomitable spirit and an enduring symbol of hope and transformation.
+
+# Gemini 1.0 Pro Evaluation of Quality of Generated Text:
+
+The document is well-written and informative, providing a detailed description of the "Shadow Boxer" sculpture in Johannesburg and its significance. It highlights the artistic merit, historical context, and social impact of the artwork, making it a valuable resource for understanding its importance. The language is engaging and descriptive, effectively conveying the essence of the sculpture and its role in Johannesburg's urban revitalization efforts.
+
+Rating: 4
+
+# Ground-truth:
+
+In the Ferreirasdorp section of Johannesburg, I passed a group of young men who had set up an informal carwash business with buckets of soapy water arranged by a curb, and as I turned a corner, there was Nelson Mandela.
+
+Though Mandela died on Dec. 5, here he stood vividly rendered in layered steel cutouts, 34 years old, fresh and strong and in the pose of a boxer. He once had professional aspirations, and said the sweet science taught him about defense, attack and strategy, which he applied to law, politics and life in prison.
+
Unveiled two years ago, the 16-foot sculpture, called "Shadow Boxer," is a lively piece with a great sense of energy and movement. The South African artist Marco Cianfanelli gave it a three-dimensional effect by layering painted-steel sheets in an image from a well-known 1952 photograph by Robert Gosani. The work is set in front of the Johannesburg magistrate court across the street from Chancellor House, where Mandela and his law, and later political, partner Oliver Tambo were young lawyers. Chancellor House, a privately owned corner office building, was itself restored in 2011 and has a timeline exhibition of the two partners' time there.
+
The attractions have drawn visitors to a neighborhood that has a certain rough-around-the-edges look but is no longer so menacing. Since Mandela's death, the sculpture has become something of a memorial with people laying flowers at its base.
+
+The work speaks volumes: the young Mandela about to wage the fight of his life. It also shows how public art is helping to revivify urban Johannesburg, a seemingly implausible regeneration in this city of more than four million residents, which not that long ago seemed as though it was about to fall through the widening cracks of crime and dilapidation.
+
+The art has helped foster a virtuous cycle: Color and beauty draw people; people promote security, which draws more people, and creates a bigger audience possibility for more art. The improvements have made Joburg cool again - and popular. A new market, the Sheds@1Fox, featuring South African-produced goods, is set to open in October about two blocks from Chancellor House. Johannesburg is now Africa's most visited city, with 2.5 million international visitors in 2013, according to MasterCard's annual survey.
+
+"Art plays an incredibly important role in making people aware that the city is being reborn," said Gerald Garner, a guide and author of two books, "Johannesburg Ten Ahead" and "Joburg Places." "It's one thing to fix pavements and plant trees, and doing those things make a difference. But art makes the city humane. It tells people who were excluded by the old order that they're welcome. It tells stories of people who live here and celebrate the heroes."
+
+I lived in Johannesburg's northern suburbs for three years in the early 1990s and have returned frequently ever since. I saw some of its worst days, when parts of its downtown grid began to feel like an urban dystopia. One of the emblematic events, what some may argue was the low point, actually involved public art, in the late 1990s when vagrants in once-beloved, then abandoned Oppenheimer Park, decapitated a sculpture of impalas and sold the heads as scrap metal.
+
+To be clear, a good number of swaths of Johannesburg remain iffy. But the public art was a revelation for me, and after several days of exploring the city by foot and on public transit last July, I felt much more optimistic about its prospects. In the months since, the pace of positive change has even picked up.
+
+My route was done in several walks, some on my own on the way to various appointments, and then with help from Mr. Garner and later still from a thoughtful 22-year-old artist named Jabulani Fakude whom I hired through a local company called MainStreetWalks (mainstreetwalks.co.za). It's a good idea for visitors to get guides to navigate between some areas that remain sketchy.
+
+A short walk from "Shadow Boxer," the adjoining district, Newtown, has another popular sculpture of two other South African heroes, Walter and Albertina Sisulu, on Diagonal Street. The Sisulus, towering figures of the apartheid struggle, appear not as fighters but as elderly lovers and parents. The Johannesburg artist Marina Walsh, who installed the concrete sculpture in August 2009 after the city solicited proposals from artists, has the couple seated facing each other, their eyes locked in an affectionate gaze. The intention is to show them as equals. The Sisulus, who raised eight children, were married for 59 years, including Mr. Sisulu's 25-year term as a prisoner on Robben Island, an austere prison off the shore of Cape Town. They're exaggerated in scale; huge figures, squat and round, so as a viewer you're almost like a grandchild looking at grandparents, and indeed, you see people moved to sit in their laps - something the artist is happy to see them do.
+
"I get an enormously warm response from people," Ms. Walsh told me. "One Saturday I was cleaning off a mustache someone had drawn on Albertina's lip. Some guy shouted, 'What are you doing?' He thought I was defacing her. I told him I was cleaning her and asked if he knew who they were. He said, 'Yes, they're my heroes!'"
+
+Not every work is as accessible, and the heroes often don't have names. As you leave Newtown via the landmark Queen Elizabeth Bridge, you quickly come upon "Fire Walker," a collaboration between the renowned South African artists William Kentridge and Gerhard Marx. It's set on a traffic island, and up close, the 36-foot work, erected with steel pieces, looks like a mass of exploded fragments. But like an Impressionist painting, the image is clearer farther away - that of a woman carrying a burning caldron on her head. It's a sight you rarely see in Johannesburg now, these women with braai mealies (roast corn) that they carry with flames crackling atop their heads. The work is a tribute to the hard ways people scrabble together a living.
+
+I wanted to visit Braamfontein, a precinct on the northern side of the city with two of South Africa's major universities, the University of the Witwatersrand and the University of Johannesburg. It has some of the earliest major public artworks, including "Juta Street Trees," nine large metal tree sculptures, and the "Eland," a concrete sculpture of Africa's largest antelope, which were installed in 2006 and 2007, respectively. The aesthetic upgrade has helped transform the student-dominated area, which even just four years ago still felt threatening, into a place with lively night life and a popular Saturday Neighbourgoods Market of artisan food, fabrics and jewelry.
+
+Up the hill from Juta Street, in the university area, I went to the Constitutional Court of South Africa, the country's highest court and itself a work of art both for its architecture and interior design. Set on Constitution Hill with views of city streets leading out toward the hilly and affluent northern suburbs, the court's design was intended to embody the idea of traditional justice, the way elders would hear disputes before whole communities under a tree. The notion of transparency is embodied in its openness, the glass exterior. The interior pillars are erected at angles that are meant to suggest the branches of a tree, the public space where legal decisions were rendered. Wood carvings, skins and art are integrated into the interior and are also curated in exhibits.
+
+Not all public art has official sanction, or is even legal for that matter. Wall murals have added vibrancy, too, though much of the city is still struggling to accommodate street artists who have plenty of urban wall space but face obstacles in getting permits; they consequently resort to "bombing," or illegally painting on, buildings, and for their trouble will spend occasional nights in jail or have to resort to bribing the police to go away.
+
+Mr. Fakude, the young street artist, does his work deep into the night, and moonlights by day giving walking tours for 250 rand, about $24, at 10.40 rand to the dollar - in my case, a graffiti tour of his and others' work in the artsy Market Theater area on the western side of the city. Mr. Fakude's work included one wall piece that was playful at a glance, a one-eyed cartoon character he'd invented, but that had a serious message that addressed the stark reality of crime and violence. So, too, did others. "We want to get the message across that drugs are not cool," he said. It gave the ostensible illegality of the street artists' work a certain irony, given the grass-roots appeal to nonviolent, cleaner living. It has gotten Mr. Fakude some attention, however. He was invited to paint murals in Berlin this year, though he told me recently that he was not able to travel there.
+
One place where murals are getting an official stamp of approval is one of Joburg's most trendy and ambitious new places, the Maboneng Precinct, once a vast wasteland of disused industrial spaces and warehouses. In recent years, the 250-acre zone of reclaimed and repurposed industrial buildings has commissioned several dozen murals by artists from Berlin, Baltimore and elsewhere. In January, a 10-story mural of "i am because we are" was unveiled. The artist Rickey Lee Gorden, who goes by Freddy Sam, used the same photo as "Shadow Boxer," the Mandela sculpture in Ferreirasdorp. The revitalization of Maboneng Precinct began in 2009 with Arts on Main, a complex of studios and galleries on Main Street. Indeed, the first public art work in the precinct was the word "Maboneng," which is Sotho for "place of light" and was installed as text art on the Arts on Main rooftop.
+
+Arts on Main's success lifted Maboneng's profile. Mr. Kentridge, arguably South Africa's best-known living artist, was among the first to set up a studio. Five years hence, the area is a vibrant hive of artists' lofts, mixed-use spaces, live theater, galleries, a landscaped bar-cafe and a boutique hotel called the 12 Decades, with a dozen rooms designed with artwork and artifacts to celebrate each decade in the life of the city, which started in 1886 after a gold rush.
+
+Each building has a lighthouse design incorporated. There are 30 murals, with five more scheduled to be completed by year's end. In addition to the murals, an old canal is being restored, its banks featuring life-size steel cutouts of people who made important contributions to the precinct.
+
+One of the last areas I explored was the mining district in Johannesburg's commercial center. The area is strewn with artifacts of mining accouterments as public art in a city built on mining. Here I found the happy postscript to the story of "Leaping Impalas," the sculpture by Herman Wald in Oppenheimer Park that had been decapitated by vagrants in the 1990s. In 2002, new heads were fashioned and welded on, and the sculpture was installed in front of the headquarters of Anglo-American, one of the world's largest mining companies. You can still see the markings on the necks, like scars that reflect the greater story of a city that is in the process of healing.
+
A version of this article appears in print on July 13, 2014, on page TR10 of the New York edition with the headline: A City Seen Through Artists' Eyes.
+
+# G.2.3. DOCUMENT III
+
+# Ground-truth Record ID:
+
+"newsroom-00002-00064-14"
+
+# Prefix:
+
+Write about the following in a minimum of 15 paragraphs containing a minimum of 1000 words, in plain text only without titles or headings: Hanley Ramirez's poor defense at shortstop has raised concerns about the Dodgers' postseason chances and his own free agency value. Despite his offensive contributions, Ramirez's defensive liabilities are a concern, especially considering the Dodgers' focus on starting pitching. The team has considered moving Ramirez to third base or left field, but those positions are already filled. His market could be problematic in free agency, given the increased emphasis on defense among teams. The Dodgers' outfield situation is also uncertain, with too many players and inconsistent performances. Their rotation is strong, but concerns about Josh Beckett and Dan Haren's ability to sustain their success throughout the season remain. The loss of Chad Billingsley further hurts their depth. The Dodgers are expected to be in the market for a starting pitcher to address this issue.
+
+Using these keywords: Hanley Ramirez, Dodgers, defense, shortstop, free agency, extension, third base, Juan Uribe, left field, Julio Lugo, Erisbel Arruebarrena, Cuban defector, Clayton Kershaw, Matt Kemp, Joc Pederson, Yasiel Puig, Andre Ethier, Carl Crawford, Pacific Coast League, Zack Greinke, Hyun-Jin Ryu, Josh Beckett, Dan Haren, Chad Billingsley, Andrew Heaney, David Price
+
+# Generated:
+
+Hanley Ramirez's poor defense at shortstop has become a significant concern for the Los Angeles Dodgers as they approach the postseason. The team made a significant investment in Ramirez's offensive abilities when they signed him to a lucrative contract extension in 2012. However, his defensive deficiencies could potentially undermine the Dodgers' World Series aspirations.
+
+Ramirez's struggles at shortstop have been well-documented. He has made numerous errors in the field and his range has declined significantly in recent years. This has raised questions about the Dodgers' overall defensive alignment, especially considering their focus on starting pitching.
+
To address Ramirez's defensive woes, the Dodgers have considered moving him to third base or left field. However, these positions are already filled by Juan Uribe and Carl Crawford, respectively. This leaves the Dodgers with few options to improve their defense in the middle of the infield.
+
+Ramirez's defensive liabilities could also impact his value in free agency. The market for shortstops is expected to be strong this winter, with several teams in need of a solid defender at the position. However, Ramirez's defensive struggles may make it difficult for him to secure a long-term contract at a premium price.
+
+The Dodgers' outfield situation is also uncertain. With Matt Kemp, Joc Pederson, Yasiel Puig, and Andre Ethier all vying for playing time, the team has an abundance of outfielders. However, their performances have been inconsistent, leaving the Dodgers unsure of their best lineup configuration.
+
+On the pitching side, the Dodgers' rotation led by Clayton Kershaw is strong on paper. Kershaw is one of the best pitchers in baseball, and Zack Greinke and Hyun-Jin Ryu are both solid starters. However, there are concerns about Josh Beckett and Dan Haren's ability to sustain their success throughout the season, especially given their injury histories. The loss of Chad Billingsley to Tommy John surgery further hurts the Dodgers' depth in the rotation.
+
+To address their rotation concerns, the Dodgers are expected to be in the market for a starting pitcher. David Price is a potential trade target, although his contract demands could make him difficult to acquire. Andrew Heaney, a top pitching prospect, could also be an option to contribute this season.
+
+Amidst the uncertainties surrounding their defense and pitching staff, the Dodgers' path to the World Series is unclear. They have the potential to be a championship contender, but their ability to overcome their defensive and pitching issues will be crucial to their success.
+
+# Gemini 1.0 Pro Evaluation of Quality of Generated Text:
+
**Good**:
+
+- The document provides a comprehensive analysis of the Los Angeles Dodgers' defensive and pitching concerns.
+- It highlights the potential impact of Hanley Ramirez's poor defense on the team's World Series aspirations.
+- It discusses the Dodgers' limited options to improve their defense and the challenges they face in free agency.
+- It also mentions the uncertain outfield situation and the need for a starting pitcher.
+
**Bad**:

- The document lacks specific examples or statistics to support the claims made about Hanley Ramirez's defensive struggles.
+- It does not provide an in-depth analysis of the Dodgers' offensive strengths or weaknesses.
+
+Rating: 3
+
+# Ground-truth:
+
+Hanley Ramirez's defense at shortstop only cost the Dodgers a perfect game Wednesday night. What might it cost the team in September and potentially October? What might it cost Ramirez in free agency after that?
+
+Many expected the Dodgers to sign Ramirez to an extension by now. It hasn't happened, in part because the Dodgers want to see him stay healthy, in part because they might not be sure what the heck to do with him long term.
+
+Third base could be a possibility, but the Dodgers would need to trade Juan Uribe, a popular clubhouse figure who is under contract for $6.5 million next season. Some Dodgers officials have toyed with the idea of playing Ramirez in left field, but you may have noticed that the team has too many outfielders already.
+
+Maybe the Dodgers could win the 2014 World Series with Ramirez at short - heck, the Red Sox pulled off such a feat with Julio Lugo in '07. Advanced metrics, however, portray Ramirez as one of the worst defensive shortstops in baseball. And strong up-the-middle defense should be a requirement for a team built around starting pitching, no?
+
+Erisbel Arruebarrena, a Cuban defector in the first year of a five-year, $25 million contract, was a major defensive upgrade in his brief stint with the club. But the Dodgers have said that they do not want to shift Ramirez between short and third, perhaps out of respect for Ramirez's wishes. Besides, Uribe could return from his strained right hamstring on Monday.
+
+For this season, it appears, the Dodgers have little choice but to play Ramirez at short; they probably value his offense too much to trade him. But if Ramirez hits free agency, his market could turn problematic at a time when teams continue to place increased emphasis on defense. Indeed, what would it say about Ramirez if the Dodgers only were willing to make him a qualifying offer?
+
+Let's not get too far ahead of ourselves. Ramirez and many of his teammates are on the uptick offensively. The Dodgers, winners of eight of their last 11, are only four games behind the Giants in the NL West. A big run to the postseason, a World Series title, and Ramirez's attributes again might outweigh his deficiencies.
+
+Still, his poor throw that cost Clayton Kershaw a perfect game came on a play, as Hall of Fame broadcaster Vin Scully noted, that most shortstops make. Ramirez didn't make it. He is costing his team. He is costing himself.
+
+Beyond Ramirez, questions persist about the Dodgers, just as they do for every club.
+
+One rival executive said Thursday that the Dodgers' best outfield would be Matt Kemp in left, Joc Pederson in center and Yasiel Puig in right. The exec added that the Dodgers should trade one of their left-handed hitting outfielders, Andre Ethier or Carl Crawford, and keep the other in reserve.
+
+Pederson, while batting .320 with a 1.016 OPS in the hitter-friendly Pacific Coast League, continues to strike out a ton, including 13 times in his last 28 at-bats. Crawford remains on the DL with a sprained left ankle. Ethier has only three homers and a .691 OPS. And let's not even talk about their respective contracts.
+
+Kemp finally is getting hot, his disposition improving with his swing. Many in the industry, however, believe he ultimately will be moved - most likely in the offseason - due to his tempestuous relationship with some of his superiors.
+
+The Dodgers trail only the Cardinals and Athletics in rotation ERA. Kershaw, Zack Greinke and Hyun-Jin Ryu are perhaps the best 1-2-3 in the game. But does anyone seriously expect Josh Beckett and Dan Haren to hold up the entire season?
+
+The loss of Chad Billingsley, who will undergo season-ending surgery to repair a partially torn flexor tendon in his right elbow, hurt the Dodgers' depth. The team lacks a prospect as polished as Marlins left-hander Andrew Heaney, who made his major-league debut Thursday night. The situation, in the words of one club official, is "precarious."
+
+So, expect the Dodgers to be in the market for a starter.
+
+THE ORGANIZATION AS A WHOLE
+
+The Dodgers have yet to slow down their spending, so it's natural for them to be linked to a pitcher such as the Rays' David Price, who could give them a third pitcher earning $20 million next season.
+
+The team, however, likely would need to part with two or more of its top prospects to get Price, and its farm system isn't terribly deep to begin with (Price's teammate, super-utility man Ben Zobrist, is another potential LA target).
+
+Club officials keep talking about developing youngsters such as Pederson and shortstop Corey Seager so they don't need to maintain a $230 million payroll. They've made progress not only in signing players from Cuba, but also Mexico, Venezuela and the Dominican Republic. Still, are they truly intent on developing a player-development machine?
+
+The July 31 non-waiver deadline could prove the next test.
+
+Much has been made of Price's loss of fastball velocity, from an average of 95.3 mph in 2012 to 93.5 in '13 to 92.6 this season. But can anyone seriously argue that his stuff is significantly diminished?
+
+As pointed out by Rays Index earlier this week, Price is on pace for 280 strikeouts this season, which would be the most since Randy Johnson in 2004. Price's 12.1 strikeout-to-walk ratio, meanwhile, would break the record set by Bret Saberhagen in 1994 (11.0).
+
+Based upon those numbers, it's impossible to say that Price is losing it, even though his 3.93 ERA is - for him - unusually high. Poor luck, however, may help explain that number.
+
+Price's home run rate and opponents' batting average on balls in play are well above league averages, and probably will revert to his career norms as the season progresses.
+
+Meanwhile, Price's FIP (fielding independent pitching) is within the same range it has been for the past two seasons. That statistic is considered an indicator of future performance, signaling that Price's ERA likely will drop.
+
+The biggest issue for teams that want to acquire Price, as FanGraphs' Dave Cameron wrote Wednesday, is not his pitching. No, it's his expected $18 million to $20 million salary in his final year of arbitration in 2015.
+
+Few clubs will want to part with high-end prospects while absorbing such a payroll hit. Then again, the numbers are relative. Price's salary next season will not be terribly above the qualifying offer for free agents, which is expected to be $15 million to $16 million.
+
+The Phillies, despite their recent surge, surely recognize that they need to get younger. But even if the team has the will to be active sellers, it might not have a way.
+
+Second baseman Chase Utley, after telling club officials last July that he did not want to be traded, signed a two-year extension with three club options. Now, less than a year later, he's going to reverse course, waive his 10-and-5 rights and approve a deal?
+
+Shortstop Jimmy Rollins, another 10-and-5 player, left open the possibility that he would approve a trade after breaking the team's all-time hit record last weekend. But realistically, where is Rollins going?
+
+The shortstop's wife is from Philadelphia. They are the parents of two young children. And good luck finding the right fit.
+
+Rollins' $11 million option for 2015 is almost certain to vest; he would not be just a rental. No, he would need to be comfortable spending the rest of this season in his new city, and all of next season as well.
+
+Neither team in Rollins' native Bay Area needs a shortstop. And while Rollins might crave the spotlight in either New York or LA, he wouldn't go to the Mets, the Yankees aren't going to displace Derek Jeter and neither the Angels nor Dodgers figures to pursue a shortstop.
+
+Then there is left-hander Cliff Lee, who - if all goes well - will return from his strained elbow before the All-Star break. By July 31, Lee still will have more than $45 million left on his contract, including a $12.5 million buyout for 2016.
+
+The financial obligation number actually might be higher than that for many clubs; Lee can block trades to 20 teams, and could require his $27.5 million vesting option for '16 to be guaranteed to join a club such as the Dodgers.
+
+Phillies GM Ruben Amaro Jr. has said he would include cash in certain deals. The Phillies always could trade Lee during the August waiver period. But unless the team paid the majority of his contract, it could not expect a bounty in return.
+
+Closer Jonathan Papelbon could get moved if the Phillies pay down his $13 million salary; he, too, has a vesting option for '16, and can only be traded to 12 teams without his approval.
+
+Unless the Phils get truly creative, they will find it difficult to make impact moves.
+
+I normally do not favor starting pitchers for MVP unless the field lacks strong position-player candidates; I voted for then-Red Sox outfielder Jacoby Ellsbury in 2011 over the eventual winner, Tigers right-hander Justin Verlander.
+
+An interesting MVP case, however, can be made for Yankees right-hander Masahiro Tanaka, at least to this point of the season.
+
+The Yankees are 12-2 when Tanaka starts, 25-31 when he does not. Even more striking, their run differential in Tanaka's starts is plus-33, while in all other games it's minus-54.
+
+In other words, the Yankees are worse than the Rays (minus-48) when Tanaka isn't pitching and nearly as bad as the Padres (minus-62) and Diamondbacks (minus-64).
+
+Of course, Wins Above Replacement (WAR) tells a different story. Tanaka is third among pitchers at 2.9 in the FanGraphs version of the metric, and well behind Mike Trout, the leader among position players at 4.7.
+
+Outfielder J.D. Martinez is one of the Tigers' few recent bright spots; he has three homers during his nine-game hitting streak, and is now batting .300 with a .903 OPS in 108 plate appearances on the season.
+
+Martinez, 26, spent the offseason watching video of Miguel Cabrera, Ryan Braun and other accomplished hitters, then completely overhauled his swing and approach. The Astros released him in spring training, and he signed a minor-league contract with the Tigers two days later.
+
+"If you watch video of me in the past, it's the complete opposite - it's that extreme," he said. "It's kind of like I re-invented myself."
+
+Martinez worked with his personal hitting coach for three weeks, then implemented his new ideas in Venezuela. Mostly, he's trying to line up directly to the ball and keep the barrel of his bat in the zone longer - the way Cabrera does.
+
+# H. Full Figures
+
+The title of each figure is of the format: "Dataset / Scoring Model / Generating Model". The $x$ -axis is the temperature and hues correspond to different mixing ratios of human-generated and LLM-generated texts. From left to right, the proportion of LLM-generated texts is: $(0\%, 25\%, 50\%, 75\%, 100\%)$ . The purpose of including these is to examine how sensitive fractal parameters are when LLM-generated texts are mixed with natural language. We observe that all fractal parameters indeed vary smoothly.
+
\ No newline at end of file
diff --git a/ataleoftwostructuresdollmscapturethefractalcomplexityoflanguage/images.zip b/ataleoftwostructuresdollmscapturethefractalcomplexityoflanguage/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..1889bcf956ed1c22b8cdea68f6375d80c60cf0ba
--- /dev/null
+++ b/ataleoftwostructuresdollmscapturethefractalcomplexityoflanguage/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f0278a1479c5d7ac2be172c5577ed604c2a7add6aa7375be10e5991aa6f9d2d1
+size 3773833
diff --git a/ataleoftwostructuresdollmscapturethefractalcomplexityoflanguage/layout.json b/ataleoftwostructuresdollmscapturethefractalcomplexityoflanguage/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..018173f609f9a29f7747a60f05c4ac4bd0e6ce3c
--- /dev/null
+++ b/ataleoftwostructuresdollmscapturethefractalcomplexityoflanguage/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:045a6f2fb8bbf6e42f8dc00f1cd47bd7c06344898bec56f348c4bc839c2dc6ae
+size 1373983
diff --git a/atheoreticalframeworkforoverfittinginenergybasedmodeling/a2da49ec-3dc0-4a6e-9204-1f807a17ffc2_content_list.json b/atheoreticalframeworkforoverfittinginenergybasedmodeling/a2da49ec-3dc0-4a6e-9204-1f807a17ffc2_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..8d169f1dd89692614279a1112ddc87cf0ed8e2fc
--- /dev/null
+++ b/atheoreticalframeworkforoverfittinginenergybasedmodeling/a2da49ec-3dc0-4a6e-9204-1f807a17ffc2_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c48dae386ec4db028087b8976dbb720df75a7d914b9a5db11e16d27d0cb513f1
+size 214278
diff --git a/atheoreticalframeworkforoverfittinginenergybasedmodeling/a2da49ec-3dc0-4a6e-9204-1f807a17ffc2_model.json b/atheoreticalframeworkforoverfittinginenergybasedmodeling/a2da49ec-3dc0-4a6e-9204-1f807a17ffc2_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..71d15c1766641f560b247ff30e90cf8ff0e65a64
--- /dev/null
+++ b/atheoreticalframeworkforoverfittinginenergybasedmodeling/a2da49ec-3dc0-4a6e-9204-1f807a17ffc2_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a82ded90165334001331b7f9a37d8aa8e7efdedf70ced0e58d421b538e76170b
+size 248404
diff --git a/atheoreticalframeworkforoverfittinginenergybasedmodeling/a2da49ec-3dc0-4a6e-9204-1f807a17ffc2_origin.pdf b/atheoreticalframeworkforoverfittinginenergybasedmodeling/a2da49ec-3dc0-4a6e-9204-1f807a17ffc2_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..ed2faab73a14588798dd313b54f57843f86f494d
--- /dev/null
+++ b/atheoreticalframeworkforoverfittinginenergybasedmodeling/a2da49ec-3dc0-4a6e-9204-1f807a17ffc2_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:eff83a51cc51bdfe30940978a01646ccc17175603d79fbbc4e11ecbc66ff7cdc
+size 6666606
diff --git a/atheoreticalframeworkforoverfittinginenergybasedmodeling/full.md b/atheoreticalframeworkforoverfittinginenergybasedmodeling/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..848d49e4e52658f1b53c7d74b0468fb0fcd87f3a
--- /dev/null
+++ b/atheoreticalframeworkforoverfittinginenergybasedmodeling/full.md
@@ -0,0 +1,1019 @@
+# A Theoretical Framework For Overfitting In Energy-based Modeling
+
+Giovanni Catania1 Aurélien Decelle1,2 Cyril Furtlehner3 Beatriz Seoane1
+
+# Abstract
+
+We investigate the impact of limited data on training pairwise energy-based models for inverse problems aimed at identifying interaction networks. Utilizing the Gaussian model as a testbed, we dissect training trajectories across the eigenbasis of the coupling matrix, exploiting the independent evolution of eigenmodes and revealing that the learning timescales are tied to the spectral decomposition of the empirical covariance matrix. We see that optimal points for early stopping arise from the interplay between these timescales and the initial conditions of training. Moreover, we show that finite-data corrections can be accurately modeled through asymptotic random matrix theory calculations and provide the counterpart of generalized cross-validation in the energy-based model context. Our analytical framework extends to binary-variable maximum-entropy pairwise models with minimal variations. These findings offer strategies to control overfitting in discrete-variable models through empirical shrinkage corrections, improving the management of overfitting in energy-based generative models. Finally, we propose a generalization to arbitrary energy-based models by deriving the neural tangent kernel dynamics of the score function under the score-matching algorithm.
+
+# 1. Introduction
+
+Controlling overfitting is fundamental in machine learning, particularly as modern, over-parameterized architectures enhance learning capabilities. To prevent learning noise or irrelevant patterns, numerous empirical solutions have been proposed that modulate the model's implicit bias through its
+
+$^{1}$ Departamento de Física Teórica, Universidad Complutense de Madrid, Spain. $^{2}$ Escuela Técnica Superior de Ingenieros Industriales, Universidad Politécnica de Madrid, Spain. $^{3}$ Inria-Saclay, Université Paris-Saclay, LISN, Gif-sur-Yvette, France. Correspondence to: Giovanni Catania .
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+architecture and optimization, employing various implicit regularization mechanisms (Belkin, 2021). Finding the optimal balance between maximizing data utility, preserving generalization, and ensuring the privacy of training data represents a critical trade-off that can be challenging to pinpoint. In supervised learning tasks like classification, overfitting is readily identified using standard practices. Metrics like test-set accuracy, particularly when augmented by cross-validation in data-scarce scenarios, clearly signal overfitting, enabling strategies like early stopping, regularization, and hyperparameter tuning to mitigate it. Furthermore, training and generalization performance in regression and classification tasks are now well understood in certain simplified regimes, such as high-dimensional ridge (Atanasov et al., 2024; Advani et al., 2020; Saxe et al., 2014; Tomasini et al., 2022) or logistic (Mai et al., 2019; Loffredo et al., 2024) regression, or numerous more complex settings of non-linear regression in various scaling regimes (see for instance (Mei et al., 2018; Arnaboldi et al., 2023; Saad & Solla, 1995) among many other recent works). This makes it possible to assess simple indicators like the generalized cross-validation (GCV) (Golub et al., 1979), an exact relation between train/test errors valid for ridge regression that can be derived using a leave-one-out argument (see e.g. (Furtlehner, 2023)). This methodology is also relevant in deep learning contexts (Wei et al., 2022), particularly in over-parameterized regimes where it aligns with observed stochastic gradient descent behaviors (Patil et al., 2024).
+
+Recent advancements in training and architecture have greatly enhanced the generative capabilities of neural network models across various fields (Bengesi et al., 2024), enabling the creation of photorealistic images, credible speech synthesis, and biologically functional synthetic proteins (Wu et al., 2021). Despite this progress, selecting optimal models from a pool remains challenging due to noisy training data often leading to undetected overfitting. Yet, detecting overfitting in unsupervised learning settings, particularly for generative modeling, is elusive but crucial, especially with sensitive datasets like human genomic data (Yelmen et al., 2021; 2023) and copyrighted content. Unlike supervised learning, unsupervised learning lacks clear overfitting indicators, complicating model development and validation. While some theoretical insights on optimal regularization tuning for simple energy-based models exist (Fanthomme et al., 2022), practical indicators such as early stopping points during training dynamics remain undefined. Moreover, estimating log-likelihood for model selection poses significant computational challenges (Béreux et al., 2022; 2024). Consequently, there is an urgent need for methods to detect and mitigate overfitting in these contexts.
+
+This paper focuses on energy-based models (EBMs) (Ackley et al., 1985), which encode the empirical distributions of various data types — such as neural recordings (Roudi et al., 2009), images (Du & Mordatch, 2019), and genomic (Yelmen et al., 2021) or proteomic (Morcos et al., 2011) sequences — into a probability framework rooted in Boltzmann's law. By adopting a Bayesian approach, EBMs aim to maximize the likelihood function, enabling the generation of new data that closely resembles the training set and facilitates the extraction of detailed microscopic insights. EBMs range from simple Boltzmann Machines (BMs) and Restricted Boltzmann Machines (RBMs) to more complex architectures like convolutional neural networks, making them versatile in statistical physics for solving inverse problems like deducing Hamiltonian parameters from observed data. The interpretability of simple EBMs makes it possible to uncover underlying rules within datasets: this capability has proven highly effective in fields ranging from neuroscience (Roudi et al., 2009) to bio-molecular structure prediction (Cocco et al., 2018), predominantly through pairwise maximum-entropy models. Recent advancements extend these applications, using complex EBMs to infer high-order interactions (Decelle et al., 2024; 2025; Feinauer et al., 2022; Feinauer & Lucibello, 2022) or constitutive patterns (Tubiana et al., 2019; Decelle et al., 2023), significantly deepening our comprehension of data structures.
+
+This work develops a theoretical framework for understanding and mitigating overfitting in EBMs. We begin with a simple Gaussian model as a fundamental non-trivial example, using it to quantitatively analyze overfitting through synthetic experiments with predefined ground truths. We examine eigenvalue dynamics using artificial covariance matrices that simulate real datasets, exploring how overfitting arises from different learning timescales associated with various eigenmodes of the empirical covariance matrix. We address inaccuracies in learned eigenvalues with corrections based on random matrix theory (RMT), showing that the quality of model generation in EBMs is less affected by the lower modes of the covariance matrix, while the accuracy of inferred couplings is significantly impacted. We demonstrate that regularization techniques like shrinkage corrections are crucial to counteract overfitting, providing a robust framework to refine EBM training by considering finite-sample-size effects. This approach also informs our analysis of more complex models like the BM, underscoring the importance of regularization strategies to enhance model reliability and predictive accuracy.
+
+# 2. Gaussian Model
+
+The Gaussian Energy-Based Model (GEBM) specifies a multivariate Gaussian distribution for real-valued variables $\pmb{x} \in \mathbb{R}^N$ , characterized by 2-body interactions encoded within a symmetric, positive-definite coupling matrix, $J \in \mathbb{R}^{N \times N}$ . The GEBM is the simplest model that effectively captures the first and second-order statistics of a set of data. For the purposes of this analysis, we assume zero means for the data components, thus simplifying the initial model by excluding the learning of external biases. Nevertheless, the theoretical framework presented below can be readily extended to accommodate non-zero means. The probability distribution of a configuration $\pmb{x}$ is then:
+
+$$
+p (\boldsymbol {x} \mid \boldsymbol {J}) = (2 \pi) ^ {- N / 2} \sqrt {\det \boldsymbol {J}} e ^ {- \frac {1}{2} \boldsymbol {x} ^ {\top} \boldsymbol {J} \boldsymbol {x}}. \tag {1}
+$$
+
+It is straightforward to check that the population covariance matrix of such distribution is $C = \mathbb{E}_J[\pmb{x}\pmb{x}^\top] = J^{-1}$ , with $\mathbb{E}_J[\cdot]$ denoting the average with respect to (1).
+
+Inference problem. Consider a dataset $\mathcal{D} = \{\pmb{x}^{\mu}\}_{\mu = 1}^{M}$ with $M$ entries generated with a GEBM model with coupling matrix $J^{*}$ . Our objective is then to find the parameters $\hat{J}$ that best approximate the empirical distribution of the data, formally $p_{\mathcal{D}}(\pmb{x}) = M^{-1}\sum_{\mu = 1}^{M}\delta (\pmb{x} - \pmb{x}^{\mu})$ , with the probabilistic model (1). Without prior information about the model parameters $J$ , the maximum likelihood (ML) estimator $\hat{J}^{\mathrm{ML},M}$ is calculated as $\hat{J}^{\mathrm{ML},M} = \left(\widehat{C}^{M}\right)^{-1}$ , where $\widehat{C}^{M}$ is the empirical covariance matrix from $M$ data points, provided it is invertible (MacKay, 2003). Denoting with $N$ the number of data components (data dimensions), this condition requires that $M \geq N$ , assuming samples to be independent. Clearly, when $M \to \infty$ , $\hat{J}^{\mathrm{ML},M}$ recovers the true set of parameters used to generate the data, $J^{*}$ .
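The ML estimator can be exercised numerically. Below is a minimal sketch (our illustration, not the authors' code); the ground-truth coupling matrix `J_star`, the dimension $N$ and the sample sizes are arbitrary choices:

```python
# Maximum-likelihood inference in the Gaussian EBM: draw M samples from a
# known coupling matrix J*, form the empirical covariance, invert it (M >= N).
import numpy as np

rng = np.random.default_rng(0)
N = 20

# Arbitrary symmetric, positive-definite ground-truth coupling matrix J*.
A = rng.normal(size=(N, N)) / np.sqrt(N)
J_star = A @ A.T + np.eye(N)
C_star = np.linalg.inv(J_star)            # population covariance C* = J*^{-1}

errors = {}
for M in (50, 500, 50_000):
    X = rng.multivariate_normal(np.zeros(N), C_star, size=M)
    C_hat = X.T @ X / M                   # zero-mean convention, no centering
    J_ml = np.linalg.inv(C_hat)           # ML estimator (C_hat^M)^{-1}
    errors[M] = np.linalg.norm(J_ml - J_star, "fro") ** 2 / N

print(errors)  # the coupling error N^{-1} ||J_ml - J*||_F^2 shrinks with M
```

As expected, the estimator approaches $J^{*}$ as $M$ grows, while for $M$ close to $N$ the inverse of the noisy empirical covariance strongly overfits.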
+
+Training dynamics. The GEBM stands out as one of the few high-dimensional inference problems where an analytical expression for the ML estimator is available, independent of both $M$ and $N$ . However, our focus here is on the training dynamics associated with an iterative maximization of the likelihood function through gradient ascent dynamics, as is typical in EBMs. This approach allows us to explore the adaptive process of parameters' estimation over time.
+
+In the GEBM, the log-likelihood (LL) of the parameters $J$ depends only on $\widehat{C}^M$ and it reads $\mathcal{L}(J) = -\frac{1}{2}\sum_{i,j}J_{ij}\widehat{C}_{ij}^{M} + \frac{1}{2}\log \det J$ . This quantifies how well $J$ matches the observed data. In a standard gradient ascent algorithm, the update rule for the parameters reads:
+
+$$
+J _ {i j} ^ {t + 1} = J _ {i j} ^ {t} + \gamma \frac {\partial \mathcal {L}}{\partial J _ {i j}}, \tag {2}
+$$
+
+where $\gamma$ is the learning rate. Assuming non-symmetric perturbations on the parameters $J_{ij}$ , the gradient in (2) reads:
+
+$$
+\frac {\partial \mathcal {L}}{\partial J _ {i j}} = - \widehat {C} _ {i j} ^ {M} + \mathbb {E} _ {\boldsymbol {J}} \left[ x _ {i} x _ {j} \right] = - \widehat {C} _ {i j} ^ {M} + \left(\boldsymbol {J} ^ {- 1}\right) _ {i j}. \tag {3}
+$$
+
+The second equality comes from the exact expression of 2-point correlations of the Gaussian model in terms of its coupling matrix $J$ : this is equivalent to assuming that we perfectly sample the model with an infinite number of configurations at any $t$ . For a more generic EBM, another source of noise should be added due to the finite number of samples (or chains) used to estimate the empirical correlations used to compute the gradient.
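For a concrete sanity check of the update rule (2) with the exact gradient (3), the sketch below (arbitrary sizes and learning rate, not the authors' code) iterates the dynamics until the ML fixed point $(\widehat{C}^{M})^{-1}$ is reached:

```python
# Gradient ascent on the Gaussian-EBM log-likelihood, using the exact model
# correlations J^{-1} at every step (the perfect-sampling limit of Eq. (3)).
import numpy as np

rng = np.random.default_rng(1)
N, M, gamma = 10, 200, 0.05               # gamma is the learning rate

# Empirical covariance of M samples drawn from the trivial model J* = I.
C_hat = np.cov(rng.normal(size=(N, M)), bias=True)

J = np.eye(N)                             # initialization of the couplings
for _ in range(5000):
    J += gamma * (np.linalg.inv(J) - C_hat)   # Eq. (2) with gradient (3)

# The dynamics converge to the maximum-likelihood couplings (C_hat)^{-1}.
print(np.linalg.norm(J - np.linalg.inv(C_hat)))
```

Since the initialization $J(0) = \mathbb{1}$ is already aligned with $\widehat{C}^{M}$, each eigenvalue relaxes independently here, in line with the mode decomposition derived next.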
+
+To analyze the learning dynamics, we use the spectral decomposition of $J$ and project the gradient onto its eigenbasis, denoted by $V = \{v_{\alpha}\}_{\alpha = 1}^{N}$ . From (3), the gradient projected on modes $\alpha$ and $\beta$ is expressed as:
+
+$$
+\left(\frac {\partial \mathcal {L}}{\partial J}\right) _ {\alpha \beta} = - \hat {c} _ {\alpha \beta} ^ {M} + \frac {\delta_ {\alpha \beta}}{J _ {\alpha}}, \tag {4}
+$$
+
+where $\hat{c}_{\alpha \beta}^{M}$ is the projection of $\widehat{C}^M$ . Generally, for any matrix $\mathcal{M}$ , we write $m_{\alpha \beta} \stackrel{\mathrm{def}}{=} \pmb{v}_{\alpha}^{\top} \pmb{M} \pmb{v}_{\beta}$ .
+
+This approach enables us to formulate a set of evolution equations for the eigenvalues $\{J_{\alpha}\}$ and the rotation of the eigenvectors. By assuming an infinitesimal learning rate, we can transform the discrete-time update equation (2) into a continuous set of differential equations (see Appendix A for further derivation details):
+
+$$
+\tau \frac {\mathrm {d} J _ {\alpha}}{\mathrm {d} t} = \frac {1}{J _ {\alpha}} - \hat {c} _ {\alpha \alpha} ^ {M}; \quad \tau \boldsymbol {v} ^ {\alpha} \frac {\mathrm {d} \boldsymbol {v} ^ {\beta}}{\mathrm {d} t} = \frac {\hat {c} _ {\alpha \beta} ^ {M}}{J _ {\alpha} - J _ {\beta}} \quad \text {for } \alpha \neq \beta , \tag {5}
+$$
+
+where $\tau$ is a timescale set by the learning rate, $\tau = 1 / \gamma$ . From Eq. (5) we see that eigenvectors of $J$ stop rotating when they align with the eigenvectors of $\hat{C}^M$ to cancel out the numerator $\hat{c}_{\alpha \beta}^{M}$ . Eq. (5) can be integrated analytically, and $\hat{c}_{\alpha \alpha}^{M}$ can be replaced by the matrix eigenvalue $\hat{c}_{\alpha}^{M}$ .
+
+The solution of (5) can be expressed in an explicit (although not closed) form using Lambert $W_{0}$ function, namely:
+
+$$
+J _ {\alpha} (t) = \frac {1}{\hat {c} _ {\alpha} ^ {M}} + \frac {1}{\hat {c} _ {\alpha} ^ {M}} W _ {0} \left[ \mathcal {B} _ {\alpha} e ^ {- \left(\hat {c} _ {\alpha} ^ {M}\right) ^ {2} \frac {t}{\tau}} \right], \tag {6}
+$$
+
+where the constant $\mathcal{B}_{\alpha}$ is fixed by the initial condition at $t = 0$ . Eq. (6) delineates the evolution of each eigenvalue, which progresses independently once the eigenvectors of $\boldsymbol{J}$ align with those of $\widehat{\boldsymbol{C}}^M$ . A crucial aspect of this equation is that the relaxation time it takes for an eigenvalue to reach its steady-state value $J_{\alpha}^{(\infty)} = \lim_{t\to \infty}J_{\alpha}(t) = 1 / \hat{c}_{\alpha}^{M}$ is inversely proportional to the square of the corresponding eigenvalue of the covariance matrix: indeed, for $t\to \infty$ Eq. (6) describes an exponential relaxation to the fixed point with a timescale $\propto (\hat{c}_{\alpha}^{M})^{-2}$ .
+
+This relationship shows that the evolution of each eigenvalue is closely linked to the significance of the corresponding eigenvector in representing the data, so that stronger modes in the covariance matrix are learned more quickly than weaker ones. The idea that information is learnt progressively starting from strong PCA directions is closely related to the concept of spectral bias (Rahaman et al., 2019) - although here the decomposition is spectral rather than in a Fourier basis - and it has been characterized theoretically in the case of linear regression (Advani et al., 2020). The interaction between these varying timescales can result in an initial phase where the strongest components of the dataset's PCA are effectively captured, followed by a phase where training begins to adjust noise-dominated directions, potentially leading to overfitting.
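The closed form (6) can be cross-checked against a direct numerical integration of the eigenvalue equation in (5). The sketch below assumes SciPy is available; the timescale, initial condition and eigenvalues are arbitrary choices of ours:

```python
# Verify that J(t) = [1 + W_0(B e^{-c^2 t/tau})]/c solves tau dJ/dt = 1/J - c,
# where the constant B is fixed by the initial condition via W_0(B) = c J(0) - 1.
import numpy as np
from scipy.special import lambertw

tau, J0, T = 1.0, 0.5, 5.0                # timescale, initial eigenvalue, horizon
results = []
for c in (0.5, 1.0, 3.0):                 # eigenvalues c_alpha of C_hat^M
    B = (c * J0 - 1.0) * np.exp(c * J0 - 1.0)

    # Forward-Euler integration of tau dJ/dt = 1/J - c up to time T.
    dt, J = 1e-4, J0
    for _ in range(int(T / dt)):
        J += dt / tau * (1.0 / J - c)

    closed = (1.0 + lambertw(B * np.exp(-c**2 * T / tau)).real) / c
    results.append((c, J, closed))

for c, J, closed in results:
    print(c, J, closed)                    # larger c relaxes faster toward 1/c
```

By $t = T$ the strongest mode sits essentially at its fixed point $1/\hat{c}_{\alpha}^{M}$, while the weakest is still relaxing: exactly the timescale separation described above.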
+
+We've shown that GEBMs' learning dynamics are governed by the spectral decomposition of $\widehat{C}^M$ , with finite-sample effects arising from changes in the spectrum due to finite $M$ . This falls within the realm of random matrix theory (RMT) (Potters & Bouchaud, 2020), as we detail shortly.
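As a concrete instance of such finite-$M$ spectral distortions: for the white population spectrum $\nu = \delta_{1}$ (population covariance $C^{*} = \mathbb{1}$), the limiting spectral density of $\widehat{C}^{M}$ is the classical Marchenko-Pastur law. A quick numerical illustration (ours, with arbitrary sizes):

```python
# Marchenko-Pastur check: with C* = I the eigenvalues of the empirical
# covariance spread over the bulk [(1 - sqrt(N/M))^2, (1 + sqrt(N/M))^2].
import numpy as np

rng = np.random.default_rng(2)
N, rho = 400, 4.0                         # aspect ratio rho = M/N as in the text
M = int(rho * N)

X = rng.normal(size=(N, M))
eigs = np.linalg.eigvalsh(X @ X.T / M)    # empirical covariance spectrum

q = 1.0 / rho                             # q = N/M
lo, hi = (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2

def mp_density(x):
    """Marchenko-Pastur density, supported on the bulk [lo, hi]."""
    return np.sqrt(np.maximum((hi - x) * (x - lo), 0.0)) / (2 * np.pi * q * x)

print(eigs.min(), lo, eigs.max(), hi)     # spectrum concentrates on [lo, hi]

# Midpoint-rule check that the density is normalized over the bulk.
xs = np.linspace(lo, hi, 20001)
mid = 0.5 * (xs[:-1] + xs[1:])
print(mp_density(mid).sum() * (xs[1] - xs[0]))
```

Even though every population eigenvalue equals 1, the empirical eigenvalues spread over an interval of width $O(\rho^{-1/2})$, which is precisely the noise that the learning dynamics eventually fit.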
+
+Asymptotic RMT analysis. In our simplified setting of GEBMs, the parameters of the model $J(t)$ along the learning trajectory are an explicit function of the empirical covariance matrix $\hat{C}^M$ upon choosing the same constant $\mathcal{B}_{\alpha} = \mathcal{B} > -1 / e$ in (6) for the initialization ( $\mathcal{B} = -1 / e$ corresponds to the initialization $J(0) = 0$ ) and assuming that $J(t)$ is aligned with $\hat{C}^M$ at $t = 0$ . This choice considerably simplifies the analysis, as explained in Appendix G. Using RMT, all relevant quantities can be derived in closed forms based solely on the population spectrum $\nu$ and the aspect ratio $\rho = M / N$ , under the asymptotic proportional scaling where $M, N \to \infty$ with $\rho$ held constant.
+
+We are interested in the train and test energies:
+
+$$
+E _ {\mathrm {train}} = N ^ {- 1} \mathrm {Tr} [ J \widehat {C} ^ {M} ] \quad \text {and} \quad E _ {\mathrm {test}} = N ^ {- 1} \mathrm {Tr} [ J C ^ {*} ],
+$$
+
+the coupling error $\mathcal{E}_{\mathrm{J}}\stackrel {\mathrm{def}}{=}N^{-1}\| J - J^{*}\|_{F}^{2}$ , with $\| \cdot \| _F$ the Frobenius norm, and the LL (train and test)
+
+$$
+L L _ {\mathrm {train,test}} \stackrel {\mathrm {def}} {=} \frac {1}{2 N} \log \det [ J ] - \frac {1}{2} E _ {\mathrm {train,test}} \tag {7}
+$$
+
+where $C^* \stackrel{\mathrm{def}}{=} \lim_{M \to \infty} \widehat{C}^M$ is the population matrix. In addition, we will also explore the behavior of the maximizer of the log-likelihood with regularization, i.e. $\mathcal{L}[J] = LL_{\mathrm{train}}[J] - \lambda A(J)$ focusing on $A(J) = \operatorname{Tr}[J^2]$ for $L_2$ ridge regularization, and $A(J) = \operatorname{Tr}[J]$ for $\widetilde{L}_1$ lasso regularization on the spectrum. $\widetilde{L}_1$ is applicable since $J$ is symmetric and remains positive definite throughout the trajectory, ensured by the logarithmic barrier.
+
+We simply quote here the result of the asymptotic limits (for $\rho > 1$ , more details in Appendix G), based on RMT (Marcenko & Pastur, 1967; Ledoit & Peché, 2011). First, the spectral density $\bar{\nu}$ of $\widehat{C}^M$ reads in this limit:
+
+$$
+\bar {\nu} (x) = \frac {\rho \Lambda_ {i} (x)}{\pi x} = \frac {\rho}{\pi x} \frac {\Gamma_ {i} (x)}{\left[ 1 - \Gamma_ {r} (x) \right] ^ {2} + \Gamma_ {i} (x) ^ {2}}, \tag {8}
+$$
+
+where $\Lambda (z) = \Lambda_r(x) + i\Lambda_i(x)$ and $\Gamma (z) = \Gamma_r(x)\pm i\Gamma_i(x)$ for $z = x + i0^{+}$ , obey the self-consistent equations
+
+$$
+\Lambda (z) = \frac {1}{1 - \Gamma (z) \tau}, \qquad \Gamma (z) = \frac {1}{\rho} \int \frac {\nu (d x) x}{z - \Lambda (z) x},
+$$
+
+
+Figure 1. (a): Eigenvalue spectra of the empirical covariance matrices for MNIST dataset (Deng, 2012). Black lines show spectra using the full dataset size $(M^{*})$ , while scatter colored points represent subsets $(M < M^{*})$ . (b): Black line shows a synthetic population eigenvalue spectrum based on (10) for $N = 100$ , $r = 0.9$ , $\beta = 0.9$ , $\gamma = 1.1$ , $x_{1} = 10^{-1}$ , $x_{2} = 10$ ; colored points show the eigenvalues from $\widehat{C}^M$ calculated by sampling different $M$ configurations from a GEBM model with $J^{*} = C^{*-1}$ (Eq. (1)).
+
+
+
+in terms of the population spectral density $\nu(dx)$ . In turn we obtain
+
+$$
+E_{\mathrm{train}} = \frac{\rho}{\pi} \int_{0}^{\infty} dy\, j(y) \bigl[ \Lambda_r(y) \Gamma_i(y) + \Lambda_i(y) \Gamma_r(y) \bigr],
+$$
+
+$$
+E_{\mathrm{test}} = \frac{\rho}{\pi} \int_{0}^{\infty} dy\, j(y)\, \Gamma_i(y), \qquad (\rho \geq 1),
+$$
+
+while the coupling error takes the form
+
+$$
+\mathcal{E}_{\mathrm{J}} = \int_{0}^{\infty} \frac{\nu(dx)}{x^{2}} + \int_{0}^{\infty} \bar{\nu}(dx)\, j(x) \Big[ j(x) - \frac{2}{\rho} \frac{(1 - \rho) + 2\rho \Lambda_r(x)}{x} \Big]
+$$
+
+where $j(x)$ is one of the analytical functions $j_{t}, j_{L_{1}}$ and $j_{L_{2}}$ corresponding respectively to the time dependent, $\widetilde{L}_{1}$ and $L_{2}$ regularized forms of $J$ . Remarkably, for the $\widetilde{L}_{1}$ regularized coupling matrix we get a deterministic relation between the train and test energies (Appendix G)
+
+$$
+E_{\text{test}} = \left(1 - \rho^{-1} E_{\text{train}}\right)^{-1} E_{\text{train}}, \tag{9}
+$$
+
+which is the counterpart of GCV for GEBMs and might be usable in practice for arbitrary EBMs (in the same way as GCV can be used for deep regression models), as it allows one to estimate the test LL. This concept remains a topic for future research. Instead, we have focused on strategies for data cleaning and regularization, specifically employing shrinkage techniques (Bun et al., 2017). Using a model of the data, defined in the next section, allows us to specify $\nu(x)$ and thereby assess these strategies against the optimal performances predicted by RMT.
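In practice Eq. (9) can be read as a plug-in estimator: measure $E_{\mathrm{train}}$ on the training set and predict $E_{\mathrm{test}}$ without touching held-out data. A minimal sketch (the function name is ours):

```python
def predict_test_energy(e_train: float, rho: float) -> float:
    """GCV-like prediction of Eq. (9): E_test = E_train / (1 - E_train / rho),
    for the spectral-lasso GEBM, with rho = M / N."""
    denom = 1.0 - e_train / rho
    if denom <= 0.0:
        raise ValueError("Eq. (9) diverges when E_train >= rho")
    return e_train / denom

# Example: rho = 2 and E_train = 1 predict E_test = 1 / (1 - 1/2) = 2.
print(predict_test_energy(1.0, 2.0))  # -> 2.0
```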
+
+# 3. Modeling realistic data covariances
+
+To effectively study the impact of a finite number of samples on the learning process of a GEBM in a controlled setting, we need to define a synthetic model that facilitates the analysis of different learning timescales. The first step is to artificially create a population covariance matrix $C^*$, from which
+
+a ground-truth coupling matrix $J^{*}$ is constructed through $J^{*} = C^{*-1}$. Using this setup, we generate a multivariate Gaussian distribution and extract $M$ data points from it. These data points are then used to train a new GEBM using the empirical covariance matrix $\widehat{C}^M$ derived from these $M$ samples, with the goal of inferring the original model parameters.
+
+As previously discussed, the training dynamics of each mode of $J$ are directly linked to the eigenvalues of $\hat{C}^M$. To ground this analysis, we have developed a synthetic model for the spectrum of $C^*$, which determines the spectrum of $\hat{C}^M$ in scenarios with finite datasets. This model closely mimics the eigenvalue spectra of real datasets, as illustrated in Figure 1-(a), which shows the eigenvalue spectrum (in descending order) of covariance matrices from MNIST for several sizes $M$ (more examples are given in Appendix B). Our analysis reveals that the spectrum of $\hat{C}^M$ remains relatively stable w.r.t. $M$ for a significant number of modes, indicating $\hat{c}_\alpha^M \approx c_\alpha^\infty = c_\alpha^*$. The smaller eigenvalues, however, fluctuate markedly with $M$: they tend to be underestimated as $M$ decreases, suggesting $\hat{c}_\alpha^M < c_\alpha^*$ for small $c_\alpha^*$, while the larger eigenvalues are slightly overestimated. This behavior is rigorously characterized using RMT tools in simplified data models (Baik & Silverstein, 2006; Ledoit & Péché, 2011). Additional insights into the conservation of eigenvectors across modes are detailed in Appendix B.
+
+Inspired by these findings, we will characterize our synthetic population matrix $C^*$ by an eigenvalue spectrum $\{c_{\alpha}^{*}\}_{\alpha = 1}^{N}$ generated according to a mixture of power laws. The cumulative distribution is defined as follows:
+
+$$
+P[\lambda < x] = r \left[\frac{x - x_1}{1 - x_1}\right]^{\beta} \mathbb{1}_x^{(x_1, 1)} + \left[r + (1 - r)\left(\frac{x - 1}{x_2 - 1}\right)^{\gamma}\right] \mathbb{1}_x^{(1, x_2)} \tag{10}
+$$
+
+where $\mathbb{1}_x^{(a,b)}$ denotes the indicator function in the interval $(a,b)$ . This setup distinguishes between "strong" modes with $c_{\alpha}^{*} > 1$ and "weak" modes with $c_{\alpha}^{*} < 1$ , with their prevalence controlled by parameter $r$ . Parameters $\beta$ and $\gamma$ represent the power-law exponents for these two categories, with $x_{1}$ and $x_{2}$ the respective lower and upper cutoffs. Model (10) is chosen to $i$ ) mimic the eigenvalue distribution of a realistic dataset's covariance matrix, though our numerical results are robust to specific spectral details, and $ii$ ) to extract asymptotic quantities in the continuous-density limit $N,M\to \infty$ (with $\rho = M / N$ finite) using RMT.
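The CDF (10) can be inverted branch by branch, so the spectrum is generated by inverse-transform sampling. A sketch (defaults follow Fig. 1-(b); the function name is ours):

```python
import numpy as np

def sample_spectrum(N, r=0.9, beta=0.9, gamma=1.1, x1=0.1, x2=10.0, seed=0):
    """Draw N eigenvalues from the mixture of power laws of Eq. (10)
    by inverting each branch of the cumulative distribution."""
    u = np.random.default_rng(seed).uniform(size=N)
    weak = u < r                      # a fraction r of "weak" modes in (x1, 1)
    c = np.empty(N)
    # invert  r * ((x - x1) / (1 - x1))**beta = u   on (x1, 1)
    c[weak] = x1 + (1.0 - x1) * (u[weak] / r) ** (1.0 / beta)
    # invert  r + (1 - r) * ((x - 1) / (x2 - 1))**gamma = u   on (1, x2)
    c[~weak] = 1.0 + (x2 - 1.0) * ((u[~weak] - r) / (1.0 - r)) ** (1.0 / gamma)
    return np.sort(c)[::-1]           # descending order, as plotted in Fig. 1

c_star = sample_spectrum(100)
print(c_star.min() >= 0.1, c_star.max() < 10.0)  # -> True True
```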
+
+Figure 1-(b) shows an example spectrum of population eigenvalues $c_{\alpha}^{*}$ from Eq. (10) with $N = 100$ (black line), alongside empirical estimates of finite-data eigenvalues $\hat{c}_{\alpha}^{M}$ (scatter points) for various $M$ values, demonstrating that strong modes remain stable despite finite-$M$ noise, while weak modes are consistently underestimated. The full matrix $C^{*}$ is finally assembled by projecting the diagonal matrix of these eigenvalues onto a random orthogonal basis $U^{*} = \{\pmb{u}_{\alpha}^{*}\}_{\alpha = 1}^{N}$, resulting in $C^{*} = \sum_{\alpha} c_{\alpha}^{*} \pmb{u}_{\alpha}^{*} \pmb{u}_{\alpha}^{*\top}$.
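This construction, together with the finite-$M$ sampling step, can be sketched in a few lines (here a simple log-spaced spectrum stands in for Eq. (10), and all variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50
c_star = np.logspace(1, -1, N)                    # population spectrum, descending
G = rng.standard_normal((N, N))
U, _ = np.linalg.qr(G)                            # random orthogonal eigenbasis U*
C_star = (U * c_star) @ U.T                       # C* = sum_a c*_a u*_a u*_a^T
J_star = np.linalg.inv(C_star)                    # ground-truth couplings J* = C*^-1

M = 200                                           # rho = M / N = 4
X = rng.multivariate_normal(np.zeros(N), C_star, size=M)
C_hat = X.T @ X / M                               # empirical covariance C^M
c_hat = np.sort(np.linalg.eigvalsh(C_hat))[::-1]  # ranked empirical eigenvalues

# typical finite-M distortion: the top of the spectrum is slightly inflated
# while the smallest eigenvalues are underestimated (cf. Fig. 1)
print(c_hat[0] / c_star[0], c_hat[-1] / c_star[-1])
```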
+
+
+Figure 2. Training dynamics of the GEBM from a population matrix $C^*$ (in (a), with system size and parameters matching those in Fig. 1-(b)), and from an empirical covariance matrix $\widehat{C}^{M}$ generated from $C^*$ through (2), with $\rho = 2.11$ (in (b)). (a)-(b) display the analytic evolution of eigenvalues $J_{\alpha}$ toward the steady state (lines), and a comparison with numerical training (points, in (a)). In all cases the initial condition is an identity matrix.
+
+# 4. Training Dynamics on Synthetic Data
+
+The introduced synthetic model enables the analysis of training dynamics in two scenarios: $i$) an ideal setting with an infinite number of samples, using $C^*$ as the data covariance matrix, and $ii$) a more realistic situation with a finite dataset, represented by the empirical covariance matrix $\widehat{C}^M$.
+
+Training Dynamics with Infinite Data. For clarity, we begin by training our GEBM using the population covariance matrix $C^*$. Fig. 2-(a) illustrates the evolution of the coupling matrix eigenvalues $J_{\alpha}$, comparing analytical solutions from Eq. (6) and numerical iterative training using Eq. (2), both starting from the same initial condition $(J_{\alpha}(0) = 1)$. The analytical and numerical results match perfectly, demonstrating the expected time-scale separation: eigenvalues $J_{\alpha}$ corresponding to stronger covariance modes, which are the smallest in the coupling matrix, converge faster to their fixed point $J_{\alpha}^{(\infty)} = J_{\alpha}^{*} = 1 / c_{\alpha}^{*}$, while weaker modes converge more slowly. Starting from an initial condition $J(0)$ that does not commute with $C^*$, the coupling matrix must first align its eigenvectors with those of $C^*$, a process detailed in Appendix C and guided by Eq. (5). Following this alignment, the eigenvalues evolve independently according to Eq. (6), supporting our analytical approach. Notably, training directly from the population matrix achieves perfect reconstruction of the original model, thereby avoiding any discrepancies or generation errors, as expected.
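This timescale separation is easy to reproduce per mode: with the Gaussian train energy $E = \operatorname{Tr}[J\hat C]/N$, gradient ascent on Eq. (7) reduces, in the eigenbasis of $C^*$, to $\tau\,\dot J_\alpha = 1/J_\alpha - c^*_\alpha$ (our rewriting of Eq. (2)). A minimal Euler-integration sketch with illustrative step sizes and eigenvalues:

```python
import numpy as np

c_star = np.array([5.0, 1.0, 0.2])      # strong, marginal and weak mode
J = np.ones_like(c_star)                # identity initialization: J_alpha(0) = 1
dt, steps = 1e-2, 20_000                # time in units of tau

t_conv = np.full(3, np.inf)             # first time within 5% of the fixed point
for k in range(steps):
    J += dt * (1.0 / J - c_star)        # tau dJ/dt = 1/J - c*, mode by mode
    close = np.abs(J - 1.0 / c_star) < 0.05 / c_star
    t_conv = np.where(close & np.isinf(t_conv), (k + 1) * dt, t_conv)

print(np.round(J, 2))                   # all modes reach J_alpha = 1 / c*_alpha
print(t_conv[0] < t_conv[2])            # -> True: the strong mode converges first
```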
+
+Impact of Finite Datasets: Interplay of Initialization and Time Scales Favoring Early Stopping Strategies. We explore the training dynamics using finite-data estimates of the population covariance matrix, $\widehat{C}^M$ . With any finite $M$ , the GEBM trained with $\widehat{C}^M$ will show discrepancies from the true model $J^*$ . We track these discrepancies by computing the reconstruction error $\mathcal{E}_{\mathrm{J}}$ between $J^*$ and the
+
+
+Figure 3. Results for GEBM training with finite data. (a)-(b)-(c) display respectively the reconstruction error $\mathcal{E}_{\mathrm{J}}$, the test-LL and the generation error $\mathcal{E}_{\mathrm{C}}$, all plotted vs time, for various sample sizes $M$ (indicated by a color gradient from blue to red for increasing $\rho = M / N$). Dashed black lines refer to a training from $C^*$ (i.e. $M \to \infty$). (d): comparison between the time of minimum reconstruction error (circles), the time of maximum test LL (diamonds) and the time at which the generation error converges to its steady-state value. These quantities are also shown in the related panels for better clarity.
+
+
+
+trained $J(t)$ defined in the previous section. Fig. 3-(a) illustrates the error's evolution over training time. Beginning from an identity matrix, at low $\rho = M / N$ values the error displays marked non-monotonic behavior, reaching a minimum at a specific $t_{\mathrm{min}}(\rho)$ before rising and stabilizing at the training's fixed point. At higher $\rho$, the error decreases monotonically until stabilization, following a trend consistent with the $M \to \infty$ scenario (i.e. using $C^*$, black dashed line). This behavior, also noted in complex EBMs (Decelle et al., 2024; Agoritsas et al., 2023), underscores the GEBM's utility as a simple model that nonetheless captures complex phenomena observed in EBMs.
+
+This analysis shows that with limited data there is an optimal training duration beyond which model inference accuracy declines, highlighting a sweet spot for early stopping. However, detecting this point without ground truth is challenging: it does not coincide with the peak of the test LL (as in (b)), a phenomenon also noted in RBMs (Decelle et al., 2024). Moreover, the generation quality, given by the error between $C(t) = J(t)^{-1}$ and the population matrix $C^*$ in (c) (computed as $\mathcal{E}_{\mathrm{C}} \stackrel{\mathrm{def}}{=} \| C^* - C(t) \|_{\mathrm{F}}$), stabilizes well before $t_{\min}(\rho)$ and remains flat afterwards. This suggests that the generation quality of the GEBM is not solely dependent on the model itself, as evidenced by consistent generation errors at both the minimum-error point $t_{\min}$ and the training's fixed point, indicating that this metric fails to capture the deterioration of the model parameters over time. These optimal times are shown against $\rho$ in Fig. 3-(d). Additional evaluation metrics are discussed in Appendix F.
+
+Fig. 2-(b) illustrates the evolution of the eigenvalues $J_{\alpha}$ over time for a sample size with $\rho = 2.11$. Initially, the stronger
+
+
+Figure 4. (a)-(c): for $\rho = M / N = 1.5$ we plot the reconstruction error during training (in (a), vs $t$) and the final reconstruction error obtained using an $L_{2}$-norm regularization (in (c), vs $\lambda$). (b): training time achieving optimal reconstruction error (points) and time of maximum test LL (squares), plotted vs $\rho$. (d): optimal value of the regularization prior $\lambda$ vs $\rho$, again selecting the optimum w.r.t. the reconstruction error and w.r.t. the test LL. All panels show comparisons between numerical results for various $N$ (colored lines) against asymptotic results from RMT (black line).
+
+modes (deep red curves), which are less influenced by low-$M$ induced noise, quickly stabilize, aligning their eigenvalues $\hat{c}_{\alpha}^{M}$ close to the population values $c_{\alpha}^{*}$, marked by black crosses. This early alignment to very small and relatively accurate values makes the error curve $(\mathcal{E}_{\mathrm{J}})$ for finite $M$ closely resemble that of the $M\to \infty$ scenario. However, after a period around $t_{\mathrm{min}}(\rho)$, the model starts encoding the weaker modes (blue lines), which are systematically underestimated relative to the population, causing $J_{\alpha}^{(\infty)} = 1 / \hat{c}_{\alpha}^{M}$ to significantly exceed the ground truth $J_{\alpha}^{*} = 1 / c_{\alpha}^{*}$. If training begins from small $J_{\alpha}$ values, there is a point where the eigenvalues temporarily align more closely with their ground-truth values than at the fixed point, effectively creating an optimal time $t_{\alpha}^{*}$ where $J_{\alpha}(t_{\alpha}^{*})\approx J_{\alpha}^{*}$. This alignment markedly decreases discrepancies between the trained model's eigenvalues and those of the true model, highlighting the significance of initial conditions in training dynamics. Yet, the specific initial values of $J_{\alpha}(0)$ are less critical, as long as they are substantially smaller than $1 / \hat{c}_{\alpha}^{M}$ for the weaker modes of $\widehat{C}^M$ (see Appendix D for further details).
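The crossing mechanism can be isolated on a single weak mode: when $\hat c^M_\alpha < c^*_\alpha$, a trajectory started well below $J^*_\alpha$ must pass through the ground-truth value on its way to the overshooting fixed point $1/\hat c^M_\alpha$, which defines $t^*_\alpha$. A self-contained sketch with illustrative values:

```python
import numpy as np

c_pop, c_emp = 0.20, 0.10      # weak mode: empirical eigenvalue underestimated
J_true = 1.0 / c_pop           # ground truth J*_alpha = 5
J_fix = 1.0 / c_emp            # training fixed point 1 / c^M = 10 (overshoots)

J, dt = 1.0, 1e-2              # small initial condition, J(0) << 1 / c^M
t_star = None
for k in range(60_000):
    J += dt * (1.0 / J - c_emp)            # same per-mode flow, driven by c^M
    if t_star is None and J >= J_true:
        t_star = (k + 1) * dt              # optimal time: J(t*) ~ J*

print(t_star is not None, J > J_true)      # -> True True: crossed J*, then overshot
```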
+
+Now, we can also explain the stable generation performance of the GEBM, shown in Fig. 3-(c), using scale-separation arguments. The generation error mainly depends on the strongest $\hat{c}_{\alpha}^{M}$ values, which are learned early on, whereas the overall model quality is controlled by the weakest $\hat{c}_{\alpha}^{M}$ (where $J_{\alpha}^{(\infty)} = 1 / \hat{c}_{\alpha}^{M}$), which minimally affect the generation error due to their small value. While this phenomenon might appear specific to the GEBM, a similar effect is observed in binary pairwise EBMs (cf. Sec. 6).
+
+Asymptotic analysis. Our findings so far have been established by numerically integrating the gradient-ascent dynamics (i.e. using Eq. (2) with a slow learning rate), or with the analytical expression for the eigenvalue evolution (Eq. (6)). In both cases, we utilized empirical covariance matrices extracted from a finite number of samples $M$, sampled from the distribution (1) with finite $N$. These results are almost insensitive to the choice of the population spectrum, as discussed in Appendix F.
+
+We demonstrate that the phenomena of overfitting and finite- $M$ corrections can be accurately modeled using RMT to predict the $N$ , $M \to \infty$ limit, thereby removing the need for empirical data. Detailed methodologies are provided in Appendix G. For a constant $\rho = M / N = 1.5$ , Fig. 4-(a) compares the reconstruction error of $\mathbf{J}(t)$ over the training period for various $N$ values (colored lines) against the asymptotic RMT prediction (dashed black lines), showing strong consistency as $N$ increases. This agreement extends to the evolution of the test LL (not shown) and the timing of the minimum error and peak test LL as functions of $\rho$ (see Fig. 4-(b)). Notably, the optimal stopping times for the two estimators do not coincide, yet finite $(N, M)$ trainings align precisely with the asymptotic predictions.
+
+# 5. Protocols to mitigate overfitting
+
+In the GEBM, non-monotonic behavior stems from the distortion of the eigenvalues of $\widehat{C}^M$ compared to the population covariance matrix. In fact, one can easily check that replacing the empirical eigenvalues with the population ones while retaining the $M$-dependent eigenvectors to form an optimally corrected matrix, $\widehat{C}_{\mathrm{val - pop}}^{M}\stackrel {\mathrm{def}}{=}\sum_{\alpha}c_{\alpha}^{*}\pmb{u}_{\alpha}^{M}\pmb{u}_{\alpha}^{M\top}$, almost eliminates the non-monotonic effects on model quality and overfitting, as shown in Fig. 5-(a) (green); in other datasets or $\rho$ values we see the bump disappear completely. While effective, this approach is unusable in real experiments where the population matrix is unknown. Nonetheless, this idealized scenario informs the design of protocols aimed at minimizing overfitting and reducing reliance on uncontrollable early-stopping strategies. We now explore common strategies to mitigate overfitting within our framework, focusing on regularization and shrinkage corrections. We also introduce a versatile downsampling-guided mode-fitting scheme that allows one to circumvent the traditional limitations of RMT strategies, and to design corrections that should be valid beyond GEBMs.
+
+Regularization. In machine learning, regularization priors are standard for preventing overfitting. In the GEBM, they constrain the growth of the eigenvalues $J_{\alpha}$, avoiding suboptimal fixed points affected by mode fluctuations in $\widehat{C}^{M}$. For the training dynamics, $L_{2}$ regularization is applied to the coupling matrix $J$, and similar outcomes are achieved with projected $L_{1}$-regularization on $J$'s eigenbasis (a protocol that facilitates asymptotic RMT analysis). The impact of
+
+
+Figure 5. Effect of data-correction protocols on the training of a GEBM (in (a), for $\rho = 2.8$) and on the final model's quality as a function of $\rho$ (in (b)): comparison of the reconstruction error $\mathcal{E}_{\mathrm{J}}$ between training from an empirical covariance matrix $\hat{\boldsymbol{C}}^M$ (blue), optimal $L_{2}$-regularization (w.r.t. reconstruction, red), and the shrinkage formula (cyan). The settings are the same as in Fig. 3.
+
+regularization at the fixed point is studied for finite $N$ and in the $N \to \infty$ limit via RMT. Fig. 4-(c) shows the reconstruction error as a function of $\lambda$, with empirical results aligning closely with RMT predictions. An optimal $\lambda_{\mathrm{opt}}$ minimizes the error but does not match the value that maximizes the test LL (see (d)), complicating the identification of $\lambda_{\mathrm{opt}}$ without knowing the population parameters, akin to identifying the optimal early stopping. Further details on regularized training and RMT are provided in Appendices H.1 and G. The red line in Fig. 5-(a) illustrates the error over time for an $L_{2}$-regularized training using the optimal parameter $\lambda^{\mathrm{opt}}$.
+
+Shrinkage correction protocols are pivotal in statistical learning and signal processing for estimating covariance matrices, particularly when the sample size is small relative to data dimensionality (Bun et al., 2017). Some of these protocols use rotationally invariant estimators (RIEs) to adjust eigenvalues distorted by sampling noise (Ledoit & Wolf, 2004; 2020), while preserving eigenvectors, ensuring corrections are independent of the coordinate system. Based on RMT, RIEs align the eigenvalues of finite-sample covariance matrices to minimize the deviation of the covariance matrix from the population one. Using the optimal RIE from (Bun et al., 2017), we correct our $\widehat{C}^M$ matrices, and use them for training our GEBMs. Depicted in light blue in Fig. 5-(a), this new training shows significant improvements in model inference quality, although some non-monotonic behaviors persist. The main drawback of this approach is that it is specific to the GEBM case.
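As a concrete (simpler) instance of shrinkage, the linear estimator of Ledoit & Wolf (2004) pulls the empirical eigenvalues toward their mean, $(1-a)\,\widehat C^M + a\,\mu\,\mathbb{I}$, with the intensity $a$ estimated from the data; the optimal RIE of (Bun et al., 2017) used in our experiments instead corrects each eigenvalue individually. A sketch of the linear version (names are ours):

```python
import numpy as np

def linear_shrinkage(X):
    """Ledoit-Wolf linear shrinkage of the empirical covariance of the
    (M, N) zero-mean data matrix X: (1 - a) C_hat + a mu I."""
    M, N = X.shape
    C = X.T @ X / M
    mu = np.trace(C) / N                                   # mean eigenvalue
    d2 = np.linalg.norm(C - mu * np.eye(N)) ** 2           # dispersion around mu*I
    b2 = sum(np.linalg.norm(np.outer(x, x) - C) ** 2 for x in X) / M**2
    a = min(1.0, b2 / d2)                                  # shrinkage intensity
    return (1.0 - a) * C + a * mu * np.eye(N), a

rng = np.random.default_rng(0)
M, N = 100, 50                                             # rho = 2
X = rng.standard_normal((M, N))                            # population C* = I here
C_shrunk, a = linear_shrinkage(X)
raw = np.linalg.eigvalsh(X.T @ X / M)
shr = np.linalg.eigvalsh(C_shrunk)
print(shr.max() - shr.min() < raw.max() - raw.min())       # -> True: spectrum tightened
```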
+
+Polynomial fit of eigenmodes. To overcome the limitations of RIEs, we introduce a simple strategy to empirically correct the eigenvalues of $\widehat{C}^M$: the idea is to downsample our dataset to obtain $\widehat{C}^m$ with $m < M$ and use the corresponding eigenvalues $\hat{c}_{\alpha}^{m}$ to extrapolate the $m \to \infty$ limit from a linear fit in $1/m$ (as expected from (Baik & Silverstein, 2006)). Additional details are given in Appendix H.2.
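A minimal version of this correction: average the ranked spectrum over random subsamples of several sizes $m$, fit each mode linearly in $1/m$, and keep the intercept as the $m\to\infty$ estimate (all names are ours; Appendix H.2 gives the full protocol):

```python
import numpy as np

def extrapolate_spectrum(X, fractions=(0.4, 0.55, 0.7, 0.85, 1.0), n_rep=20, seed=0):
    """Clean the ranked eigenvalues of the empirical covariance by a
    linear fit in 1/m over downsampled estimates C^m, m = f * M."""
    rng = np.random.default_rng(seed)
    M, N = X.shape
    inv_m, spectra = [], []
    for f in fractions:
        m = int(f * M)
        s = np.zeros(N)
        for _ in range(n_rep):                      # average over random subsamples
            Xm = X[rng.choice(M, size=m, replace=False)]
            s += np.sort(np.linalg.eigvalsh(Xm.T @ Xm / m))[::-1]
        inv_m.append(1.0 / m)
        spectra.append(s / n_rep)
    A = np.vstack([np.ones(len(fractions)), inv_m]).T      # fit c(m) ~ c_inf + b/m
    coef, *_ = np.linalg.lstsq(A, np.array(spectra), rcond=None)
    return coef[0]                                         # intercepts: 1/m -> 0 limit

# usage: rebuild the cleaned matrix with the empirical eigenvectors,
# C_clean = U_hat @ np.diag(extrapolate_spectrum(X)) @ U_hat.T, then train on it.
```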
+
+
+
+
+
+
+Figure 6. Results on the BM for the inverse Ising problem. (a)-(b): Training dynamics. (a): Reconstruction error (Frobenius norm) vs time (number of updates), for different values of the dataset's size. (b1)-(b2)-(b3): evolution of the eigenmodes $J_{\alpha}$ during training for 3 values of $\rho$: comparison between numerics (blue lines) and the analytic curve (red). Black crosses indicate the true model's eigenvalues $\beta \hat{J}_{\alpha}$. (c): effect of data-correction strategies on the inferred model. Comparison between optimal $L_{2}$-regularization (red), standard ML-training (blue), mode-fitting (orange) and the best reconstruction computed at $t_{\mathrm{min}}$ (green).
+
+
+
+We then use the extrapolated eigenvalues to clean our covariance matrix and run a new training. The evolution of the reconstruction error is shown in Fig. 5-(a) (yellow).
+
+Comparison of strategies. The effectiveness of the various strategies to counteract overfitting is depicted in Fig. 5-(b), presenting the reconstruction error across different $\rho$ values. Notably, the optimal $L_{2}$ regularization, the $\widehat{C}_{\mathrm{val - pop}}^{M}$ strategy, and the performance at the optimal early-stopping point derived from RMT all follow similar trajectories, with a $1 / \sqrt{\rho}$ scaling for large $\rho$ as expected. While the high performance of these strategies stems from knowing the true model to optimize their parameters (not usable in practice), we demonstrate that similar performance can be achieved with RMT-based shrinkage corrections or empirical polynomial fits. These methods do not require prior knowledge of the model, making them especially suitable for real-world inference applications.
+
+# 6. Boltzmann Machine for inverse Ising
+
+We extend our analysis to the Boltzmann Machine (BM), i.e. the so-called inverse Ising problem (Nguyen et al., 2017), adapting our approach to binary variables $\boldsymbol{x} = \{\pm 1\}^N$. This model is able to capture multimodal distributions through its pairwise energy function $E(\boldsymbol{x}) = -\sum_{i < j} J_{ij} x_i x_j - \sum_i h_i x_i$, with parameters $\boldsymbol{\theta} = (\boldsymbol{J}, \boldsymbol{h})$. Due to the lack
+
+of closed-form solutions for the correlation functions and likelihood in BMs, we employ a mean-field approximation suitable e.g. at high temperatures. This approximation allows for an analytic, albeit not exact, expression linking the model's correlation matrix $\mathbf{C}$ to the coupling matrix $\mathbf{J}$ as $\mathbf{C} = (\mathbb{I}_N - \mathbf{J})^{-1}$ , facilitating an analytical exploration of the training dynamics shown in (Agoritsas et al., 2023) and further elaborated in Appendix I.
+
+Similar to GEBMs, an analytical description of the spectral dynamics can be applied to BMs, though with certain limitations due to two main factors: a) the ML estimator for the coupling matrix $J(t)$ may not strictly preserve the same eigendecomposition as $\widehat{C}^M$, although a good alignment is typically observed for the strongest modes; and b) the binary nature of the variables is not accounted for in the diagonals of the covariance matrices. We must remove the diagonal constraint to allow independent evolution of the modes, similar to the GEBM scenario, since a spherical constraint as in (Fanthomme et al., 2022) would introduce mode coupling. This decision is crucial for deriving approximate analytical expressions for the training dynamics; without it, the time-dependent problem becomes intractable. However, the fixed point alone can still be effectively analyzed using mean-field techniques (Kappen & Rodríguez, 1998).
+
+As detailed in Appendix I, we can project the gradient on the spectral basis of $\widehat{C}^M$ and obtain an approximate analytic expression for the evolution of the eigenvalues of $J$ :
+
+$$
+\tau \frac{\mathrm{d} J_{\alpha}}{\mathrm{d} t} \approx \hat{c}_{\alpha}^{M} - \frac{1}{1 - J_{\alpha}}, \tag{11}
+$$
+
+$$
+J_{\alpha}(t) \approx 1 - \frac{1}{\hat{c}_{\alpha}^{M}} - \frac{1}{\hat{c}_{\alpha}^{M}} W_{0}\left[ B_{\alpha} e^{-\left(\hat{c}_{\alpha}^{M}\right)^{2} \frac{t}{\tau}} \right], \tag{12}
+$$
+
+whose fixed point is $J_{\alpha}^{\infty} \approx 1 - (\hat{c}_{\alpha}^{M})^{-1}$. Compared to the complete treatment, this fixed point is shifted by our unconstrained diagonal, and $J$ is not traceless either. However, Eq. (12) still qualitatively captures the training dynamics of BMs, revealing significant differences from the GEBM scenario: $J_{\alpha}$ and $c_{\alpha}$ are no longer inversely proportional. Notably, the smallest $c_{\alpha}$ values are associated with negative $J_{\alpha}$ values that are not necessarily small in absolute terms and therefore contribute significantly to the reconstruction of $J$. Additionally, for positive $J_{\alpha}$, $J_{\alpha}$ increases when $\hat{c}_{\alpha}^{M}$ does.
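Eq. (12) is an exact solution of the single-mode flow (11), which can be checked numerically. Here $W_0$ is the principal Lambert branch, and $B_\alpha$ is fixed by the initial condition through $B_\alpha = W e^{W}$ with $W = \hat c^M_\alpha\,(1 - J_\alpha(0)) - 1$ (our rewriting). A sketch:

```python
import numpy as np
from scipy.special import lambertw

def J_analytic(t, c, J0=0.0, tau=1.0):
    """Eq. (12): J(t) = 1 - 1/c - (1/c) W_0[B exp(-c^2 t / tau)],
    with B fixed by the initial condition J(0) = J0."""
    W_init = c * (1.0 - J0) - 1.0
    B = W_init * np.exp(W_init)
    return 1.0 - (1.0 + lambertw(B * np.exp(-c**2 * t / tau)).real) / c

for c in (2.0, 0.5):                     # strong (J_inf > 0) and weak (J_inf < 0) mode
    J, dt = 0.0, 1e-3
    for _ in range(5000):                # Euler integration of Eq. (11), tau = 1
        J += dt * (c - 1.0 / (1.0 - J))
    print(np.isclose(J, J_analytic(5.0, c), atol=1e-2))   # -> True
```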
+
+Results. We conducted numerical experiments training an Ising-BM on equilibrium data sampled from a 2D Ising model (i.e. defining $J^{*}$ on a 2D nearest-neighbors lattice) with $N = 8 \times 8$ spins at high temperature ($\beta = 0.1$). Figure 6 presents the results: Panel (a) shows the reconstruction error between the trained model and the ground truth $\beta J^{*}$ for different $\rho$ values, revealing a non-monotonic trend at low $\rho$ that mirrors observations made with GEBMs (a behavior which is robust w.r.t. the system size, see Fig. 20 in Appendix I). Panels (b1)-(b3) track the eigenvalue evolution during training for three $\rho$ values, comparing numerical results (eigenspectrum of $J(t)$) with the analytic curve from Eq. (12). While the trends align qualitatively, particularly in capturing the separation of time scales, the analytic curves consistently underestimate the actual eigenvalue evolution due to the overlooked diagonal constraint in the BM model. Nonetheless, this timescale separation is similar to that observed in the GEBM: stronger covariances ($\hat{c}_{\alpha}^{M} > 1 \leftrightarrow J_{\alpha} > 0$) are learned faster, while weaker covariances ($\hat{c}_{\alpha}^{M} < 1 \leftrightarrow J_{\alpha} < 0$) take longer. This pattern indicates that the training dynamics are dominated by the convergence of the weak $\hat{c}_{\alpha}^{M}$. In sparse Ising models, this involves learning the negative spectrum of $J$. Unlike in GEBMs, where negative eigenvalues do not exist, in BMs these later-encoded modes significantly impact the overall reconstructed $J$ due to their large absolute eigenvalues, even though they have a negligible effect on sampling quality. Accurately inferring sparse Ising models thus hinges on effectively learning the weaker covariances, which are heavily influenced by finite-data noise. Training good generative models, however, is considerably faster (see Appendix I).
+
+Strategies similar to those used in GEBMs can be employed to mitigate overfitting in BMs, with comparable outcomes as shown in Fig. 6-(c). Shrinkage formulas are not applicable to BMs, yet the empirical polynomial-fit correction for the eigenvalues still proves effective at reducing overfitting effects. This is a very good outcome, as it is the only non-informative correction (the identification of $t_{\mathrm{min}}$ or $\lambda_{\mathrm{opt}}$ requires knowing $J^{*}$).
+
+# 7. Theoretic extension to generic EBM learning
+
+The analysis of overfitting for simple models such as the GEBM or the BM constitutes a preparatory attempt to address this question in the broader context of EBMs. Let us now see to what extent the analyses of overfitting carried out so far are also relevant for more general EBMs. The line of arguments bears some similarity with the one justifying that high-dimensional linear regressions are relevant to analyze deep learning (Belkin et al., 2018; Hastie et al., 2022). To this end, let us consider the score-matching algorithm (Hyvärinen & Dayan, 2005) as a theoretical proxy for the analysis of overfitting in EBMs. Even though this approach might appear sub-optimal in many circumstances, we postulate that the mechanisms leading to overfitting have a similar origin as in more sophisticated methods. Consider a generic EBM of the form $p(\pmb{x}|\pmb{\theta}) = Z^{-1}(\pmb{\theta})e^{-E(\pmb{x}|\pmb{\theta})}$, where $\theta \in \mathbb{R}^P$ is the vector of parameters, and a training set $\mathcal{D} = \{\pmb{x}_i,i = 1,\dots M\}$ of size $M$. Defining the score function as $\psi (\pmb{x}|\theta)\stackrel {\mathrm{def}}{=}\nabla_{\pmb{x}}E(\pmb{x}|\theta)$, the score-matching loss is given by $\mathcal{L}_{\mathrm{SM}}(\theta) = \frac{1}{2}\hat{\mathbb{E}}_{\pmb{x}}\left[\left\| \psi (\pmb{x}|\theta) + \nabla \log \hat{p} (\pmb{x})\right\| ^2\right]$
+
+which, thanks to an integration by parts, rewrites as
+
+$$
+\mathcal{L}_{\mathrm{SM}}(\theta) = \hat{\mathbb{E}}_{\boldsymbol{x}} \left[ \frac{1}{2} \left\| \nabla E(\boldsymbol{x}|\theta) \right\|^{2} - \Delta E(\boldsymbol{x}|\theta) \right] + \mathrm{Cst}. \tag{13}
+$$
+
+Notice first that for the GEBM, this leads to a learning dynamics of the coupling matrix, $\frac{dJ(t)}{dt} = -\left(\hat{C} J(t) + J(t)\hat{C}\right) + \mathbb{I}$, whose solution is
+
+$$
+j_{t}(x) = \frac{1 - e^{-xt}}{x}, \tag{14}
+$$
+
+when assuming that the initial condition commutes with $\hat{C}$ . More generally, the dynamics of the score function is governed by a neural tangent kernel (NTK) (Jacot et al., 2018). We have
+
+$$
+\frac{d\psi(\boldsymbol{x}|\theta_{t})}{dt} = -\hat{\mathbb{E}}_{\boldsymbol{x}'} \left[ K_{t}\left(\boldsymbol{x}, \boldsymbol{x}'\right) \left(\psi\left(\boldsymbol{x}' \mid \theta_{t}\right) - \nabla \log \hat{p}\left(\boldsymbol{x}'\right)\right) \right] \tag{15}
+$$
+
+with $K_{t}(\pmb {x},\pmb{x}^{\prime}) = \partial_{\theta^{\top}}\psi (\pmb {x}|\theta_{t})\partial_{\theta}\psi (\pmb{x}^{\prime}|\theta_{t})^{\top}$ . Integrating by parts the second term we obtain
+
+$$
+\frac{d\psi(\boldsymbol{x}|\theta_{t})}{dt} = -\hat{\mathbb{E}}_{\boldsymbol{x}'} \left[ K_{t}\left(\boldsymbol{x}, \boldsymbol{x}'\right) \psi\left(\boldsymbol{x}' \mid \theta_{t}\right) \right] + \hat{\phi}_{t}(\boldsymbol{x}) \tag{16}
+$$
+
+with $\hat{\phi}_t(\pmb{x}) = -\hat{\mathbb{E}}_{\pmb{x}'}\left[\partial_{\theta^\top}\psi (\pmb{x}|\theta)\partial_\theta \nabla_{\pmb{x}'}^\top \psi (\pmb{x}'|\theta)\right] = -\hat{\mathbb{E}}_{\pmb{x}'}\left[\nabla_{\pmb{x}'}\cdot K_t(\pmb{x},\pmb{x}')\right]$. As in supervised learning, we expect a kernel regime for a large enough network width (Chizat et al., 2019). Then $K$ becomes deterministic and the dynamics is linear, with $\psi (\pmb {x},t)$ an explicit function of the kernel matrix $K(\pmb {x}_s,\pmb{x}_{s'})$ on the training set. Indeed, the NTK dynamics takes place on a reproducing kernel Hilbert space (RKHS) of finite dimension corresponding either to $\mathcal{H}_P\stackrel {\mathrm{def}}{=}\mathrm{Span}\{\partial_{\theta_q}\psi (\pmb {x}|\theta),q = 1,\ldots P\}$ or to $\mathcal{H}_M\stackrel {\mathrm{def}}{=}\mathrm{Span}\{K(\pmb {x},\pmb {x}_s),s = 1,\dots M\}$, depending respectively on whether we are in the under- or over-parameterized regime. In the latter case $\hat{K}_{ss'}\stackrel {\mathrm{def}}{=}\frac{1}{M} K(\pmb {x}_s,\pmb{x}_{s'})$ is full rank and we have
+
+$$
+\hat{\psi}(t) = -\frac{1 - e^{-\hat{K}(t - t_{0})}}{\hat{K}} \hat{\phi} + \hat{\psi}(t_{0}) \tag{17}
+$$
+
+where $\hat{\psi}(t)$ and $\hat{\phi}$ are respectively the vectors $\{\psi(\pmb{x}_s|\theta_t), s = 1, \dots, M\}$ and $\{\hat{\phi}(\pmb{x}_s), s = 1, \dots, M\}$ and assuming $\hat{\psi}(t_0) = 0$ . In any case we can consider only the projection of $\psi$ on the RKHS, its transverse part being assumed to be zero at $t = t_0$ . As a result, the dynamics takes place in the "empirical" RKHS and we have
+
+$$
+\psi(\boldsymbol{x}|\theta_{t}) = \frac{1}{M} \sum_{s = 1}^{M} K(\boldsymbol{x}, \boldsymbol{x}_{s})\, \beta_{s}(t) \tag{18}
+$$
+
+where the vector $\beta$ is obtained from (17) yielding finally
+
+$$
+\psi(\boldsymbol{x}|\theta_{t}) = -\hat{K}(\boldsymbol{x})^{\top} \frac{j_{t}(\hat{K})}{\hat{K}} \hat{\phi} + \psi(\boldsymbol{x}|\theta_{t_{0}}) \tag{19}
+$$
+
+where $\hat{K}(\boldsymbol{x}) = \left\{\frac{1}{M} K(\boldsymbol{x}, \boldsymbol{x}_s), s = 1, \ldots, M\right\}$ is the vector of empirical features spanning $\mathcal{H}_E$ . Additionally $\theta_t$ is directly read off from $\psi(\boldsymbol{x} | \theta_t)$ at first order in $\theta_t$ in the lazy regime
+
+$$
+\psi(\boldsymbol{x}|\theta_{t}) \approx \nabla_{\theta}^{\top} \psi(\boldsymbol{x}|\theta_{t_{0}}) \left(\theta_{t} - \theta_{t_{0}}\right) \tag{20}
+$$
+
+Using the parameter-sample duality eventually leads to
+
+$$
+\theta_ {t} = \theta_ {t _ {0}} + \frac {j _ {t} \left(C ^ {(M)}\right)}{C ^ {(M)}} \phi^ {(M)} \tag {21}
+$$
+
+where
+
+$$
+C ^ {(M)} = \frac {1}{M} \sum_ {s = 1} ^ {M} \nabla_ {\theta} \psi \left(\boldsymbol {x} _ {s} \mid \theta\right) ^ {\top} \nabla_ {\theta^ {\top}} \psi \left(\boldsymbol {x} _ {s} \mid \theta\right) \tag {22}
+$$
+
+$$
+\phi^ {(M)} \stackrel {\text {d e f}} {=} \frac {1}{M} \sum_ {s = 1} ^ {M} \nabla_ {\theta} \psi (\boldsymbol {x} _ {s} | \theta) ^ {\top} \hat {\phi} (\boldsymbol {x} _ {s}) \tag {23}
+$$
+
+In the GEBM case we recover (14) by letting $\psi (\pmb {x}|\theta) = \theta \pmb {x}$ , $K(\pmb {x},\pmb{x}^{\prime}) = \frac{1}{2} (\pmb{x}\pmb{x}^{\prime \top} + \pmb{x}^{\top}\pmb{x}^{\prime})$ and $\hat{\phi} (\pmb {x}) = \pmb {x}$ , leading to $\phi^{(M)} = C^{(M)}$ with $C^{(M)} = \frac{1}{M}\sum_{i = 1}^{M}\pmb{x}_i\pmb{x}_i^{\top}.$
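+The matrix functions appearing in (17), (19) and (21), of the form $j_t(\hat{K})/\hat{K}$, are conveniently evaluated through an eigendecomposition. A minimal numerical sketch for Eq. (17), with a synthetic full-rank kernel matrix and target vector as placeholder inputs, checks the two limits: nothing is learned at $t = t_0$, and $t \to \infty$ recovers the fixed point $-\hat{K}^{-1}\hat{\phi}$:
+
+```python
+import numpy as np
+
+def psi_hat(K, phi, t, t0=0.0):
+    # Eq. (17): psi(t) = -[(1 - exp(-K (t - t0))) / K] phi, with the scalar
+    # function applied to K through its eigendecomposition.
+    lam, U = np.linalg.eigh(K)                # K symmetric and full rank here
+    f = -np.expm1(-lam * (t - t0)) / lam      # (1 - e^{-lam (t - t0)}) / lam
+    return -(U * f) @ (U.T @ phi)
+
+rng = np.random.default_rng(0)
+A = rng.normal(size=(5, 5))
+K = A @ A.T / 5 + 0.1 * np.eye(5)             # toy full-rank kernel matrix (assumption)
+phi = rng.normal(size=5)
+
+psi_start = psi_hat(K, phi, 0.0)              # equals psi(t0) = 0
+psi_end = psi_hat(K, phi, 1e4)                # approaches the fixed point -K^{-1} phi
+```
+
+Using `np.expm1` keeps the prefactor numerically stable for small eigenvalues, where $1 - e^{-\lambda t} \approx \lambda t$.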
+
+# 8. Discussion
+
+This work presents a theoretical framework for understanding overfitting in simple energy-based models, using the eigendecomposition of the data covariance matrix to analyze the training dynamics. We illustrate how the principal components control a timescale separation, with information progressively encoded from the strongest to the weakest data modes. Because finite-size noise affects the different components unevenly, their interplay dictates an early-stopping point. Furthermore, we show that finite-sample corrections can be described analytically with high accuracy using asymptotic RMT analyses. This analysis provides an analogue of the GCV in the context of EBMs, which deserves further empirical investigation. It is exact for Gaussian EBMs and approximate for Ising BMs at high temperature, capturing phenomena similar to those observed in more complex EBMs like RBMs. We discuss data-correction protocols typically used to mitigate overfitting and propose extending these strategies to more complex models by leveraging higher-order data correlations (e.g. by exploiting the SVD decomposition). Further investigations into RMT may clarify how early-stopping points relate to the asymptotic properties of the covariance matrix's spectrum, or which observables could pinpoint them without prior knowledge of the data model. Finally, the extension of the theory to EBMs via a neural tangent kernel dynamics of the score function deserves further experimental investigation to identify relevant hypotheses for the spectrum of population covariance matrices of tangent features.
+
+# Acknowledgments
+
+Authors acknowledge financial support by the Comunidad de Madrid and the Complutense University of Madrid through the Atracción de Talento program (Refs. 2019-T1/TIC-13298 & Refs. 2023-5A/TIC-28934), the project PID2021-125506NA-I00 financed by the "Ministerio de Economía y Competitividad, Agencia Estatal de Investigación" (MICIU/AEI/10.13039/501100011033), the Fondo Europeo de Desarrollo Regional (FEDER, UE) and the French ANR grant Scalp (ANR-24-CE23-1320).
+
+# Impact Statement
+
+This paper aims to advance the field of Machine Learning by deepening our understanding of generative models under data scarcity. While our findings may have broad societal implications, we do not identify any that require specific emphasis at this stage.
+
+# References
+
+Ackley, D. H., Hinton, G. E., and Sejnowski, T. J. A learning algorithm for Boltzmann machines. Cognitive science, 9 (1):147-169, 1985.
+Advani, M. S., Saxe, A. M., and Sompolinsky, H. High-dimensional dynamics of generalization error in neural networks. Neural Networks, 132:428-446, 2020.
+Agoritsas, E., Catania, G., Decelle, A., and Seoane, B. Explaining the effects of non-convergent MCMC in the training of energy-based models. In Krause, A., Brunskill, E., Cho, K., Engelhardt, B., Sabato, S., and Scarlett, J. (eds.), Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pp. 322-336. PMLR, 23-29 Jul 2023. URL https://proceedings.mlr.press/v202/agoritsas23a.html.
+Arnaboldi, L., Stephan, L., Krzakala, F., and Loureiro, B. From high-dimensional & mean-field dynamics to dimensionless odes: A unifying approach to sgd in two-layers networks. In The Thirty Sixth Annual Conference on Learning Theory, pp. 1199-1227. PMLR, 2023.
+Atanasov, A., Zavatone-Veth, J., and Pehlevan, C. Scaling and renormalization in high-dimensional regression. arXiv preprint arXiv:2405.00592, 2024.
+Baik, J. and Silverstein, J. W. Eigenvalues of large sample covariance matrices of spiked population models. Journal of multivariate analysis, 97(6):1382-1408, 2006.
+Belkin, M. Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation. Acta Numerica, 30:203-248, 2021.
+
+Belkin, M., Ma, S., and Mandal, S. To understand deep learning we need to understand kernel learning. In proc. of ICML, pp. 541-549. PMLR, 2018.
+Bengesi, S., El-Sayed, H., Sarker, M. K., Houkpati, Y., Irungu, J., and Oladunni, T. Advancements in generative ai: A comprehensive review of gans, gpt, autoencoders, diffusion model, and transformers. IEEE Access, 2024.
+Béreux, N., Decelle, A., Furtlehner, C., and Seoane, B. Learning a restricted Boltzmann machine using biased monte carlo sampling. arXiv preprint arXiv:2206.01310, 2022.
+Béreux, N., Decelle, A., Furtlehner, C., Rosset, L., and Seoane, B. Fast, accurate training and sampling of restricted Boltzmann machines. arXiv preprint arXiv:2405.15376, 2024.
+Bun, J., Bouchaud, J.-P., and Potters, M. Cleaning large correlation matrices: Tools from random matrix theory. Physics Reports, 666:1-109, 2017. ISSN 0370-1573. doi: https://doi.org/10.1016/j.physrep.2016.10.005. URL https://www.sciencedirect.com/science/article/pii/S0370157316303337.
+Bun, J., Bouchaud, J.-P., and Potters, M. Overlaps between eigenvectors of correlated random matrices. Phys. Rev. E, 98:052145, Nov 2018. doi: 10.1103/PhysRevE.98.052145. URL https://link.aps.org/doi/10.1103/PhysRevE.98.052145.
+Chizat, L., Oyallon, E., and Bach, F. On lazy training in differentiable programming. In proc. of NeurIPS, 32, 2019.
+Cocco, S., Feinauer, C., Figliuzzi, M., Monasson, R., and Weigt, M. Inverse statistical physics of protein sequences: a key issues review. Reports on Progress in Physics, 81 (3):032601, 2018.
+Consortium, The 1000 Genomes Project, et al. A global reference for human genetic variation. Nature, 526(7571):68, 2015.
+Decelle, A., Fissore, G., and Furtlehner, C. Thermodynamics of restricted Boltzmann machines and related learning dynamics. Journal of Statistical Physics, 172 (6):1576-1608, 2018. doi: https://doi.org/10.1007/s10955-018-2105-y.
+Decelle, A., Seoane, B., and Rosset, L. Unsupervised hierarchical clustering using the learning dynamics of restricted Boltzmann machines. Physical Review E, 108(1):014110, 2023.
+
+Decelle, A., Furtlehner, C., Gómez, A. D. J. N., and Seoane, B. Inferring effective couplings with restricted Boltzmann machines. SciPost Phys., 16:095, 2024. doi: 10.21468/SciPostPhys.16.4.095. URL https://scipost.org/10.21468/SciPostPhys.16.4.095.
+Decelle, A., de Jesus Navas Gómez, A., and Seoane, B. Inferring high-order couplings with neural networks. 2025. URL https://arxiv.org/abs/2501.06108.
+Delon, J., Desolneux, A., and Salmona, A. Gromov-wasserstein distances between gaussian distributions. Journal of Applied Probability, 59(4):1178-1198, 2022. doi: 10.1017/jpr.2022.16.
+Deng, L. The mnist database of handwritten digit images for machine learning research. IEEE Signal Processing Magazine, 29(6):141-142, 2012.
+Du, Y. and Mordatch, I. Implicit generation and modeling with energy based models. Advances in Neural Information Processing Systems, 32, 2019.
+Fanthomme, A., Rizzato, F., Cocco, S., and Monasson, R. Optimal regularizations for data generation with probabilistic graphical models. Journal of Statistical Mechanics: Theory and Experiment, 2022(5):053502, may 2022. doi: 10.1088/1742-5468/ac650c. URL https://dx.doi.org/10.1088/1742-5468/ac650c.
+Feinauer, C. and Lucibello, C. Reconstruction of pairwise interactions using energy-based models. In Mathematical and Scientific Machine Learning, pp. 291-313. PMLR, 2022.
+Feinauer, C., Meynard-Piganeau, B., and Lucibello, C. Interpretable pairwise distillations for generative protein sequence models. PLOS Computational Biology, 18(6): e1010219, 2022.
+Furtlehner, C. Free dynamics of feature learning processes. J.Stat.Phys, 190(3):51, 2023.
+Golub, G., Heath, M., and Wahba, G. Generalized cross-validation as a method for choosing a good ridge parameter. Technometrics, 21(2):215-223, 1979.
+Hachem, W., Loubaton, P., and Najim, J. Deterministic equivalents for certain functionals of large random matrices. 2007.
+Hastie, T., Montanari, A., Rosset, S., and Tibshirani, R. Surprises in high-dimensional ridgeless least squares interpolation. The Annals of Statistics, 50(2):949-986, 2022.
+Hyvarinen, A. and Dayan, P. Estimation of non-normalized statistical models by score matching. JMLR, 6(4), 2005.
+
+Jacot, A., Gabriel, F., and Hongler, C. Neural tangent kernel: Convergence and generalization in neural networks. In proc. of NeurIPS, volume 31, 2018.
+Kappen, H. J. and Rodríguez, F. B. Efficient Learning in Boltzmann Machines Using Linear Response Theory. Neural Computation, 10(5):1137-1156, 07 1998. ISSN 0899-7667. doi: 10.1162/089976698300017386. URL https://doi.org/10.1162/089976698300017386.
+Kiwata, H. Estimation of quenched random fields in the inverse ising problem using a diagonal matching method. Phys. Rev. E, 89:062135, Jun 2014. doi: 10.1103/PhysRevE.89.062135. URL https://link.aps.org/doi/10.1103/PhysRevE.89.062135.
+Krizhevsky, A., Hinton, G., et al. Learning multiple layers of features from tiny images. 2009.
+Ledoit, O. and Péché, S. Eigenvectors of some large sample covariance matrix ensembles. Probability Theory and Related Fields, 151(1):233-264, 2011.
+Ledoit, O. and Wolf, M. A well-conditioned estimator for large-dimensional covariance matrices. Journal of Multivariate Analysis, 88(2):365-411, 2004. ISSN 0047-259X. doi: https://doi.org/10.1016/S0047-259X(03)00096-4. URL https://www.sciencedirect.com/science/article/pii/S0047259X03000964.
+Ledoit, O. and Wolf, M. Analytical nonlinear shrinkage of large-dimensional covariance matrices. The Annals of Statistics, 48(5):3043 - 3065, 2020. doi: 10.1214/19-AOS1921. URL https://doi.org/10.1214/19-AOS1921.
+Loffredo, E., Pastore, M., Cocco, S., and Monasson, R. Restoring balance: principled under/oversampling of data for optimal classification. In Proceedings of the 41st International Conference on Machine Learning, ICML'24. JMLR.org, 2024.
+MacKay, D. J. C. Information Theory, Inference, and Learning Algorithms. Copyright Cambridge University Press, 2003.
+Magnus, J. R. and Neudecker, H. Matrix Differential Calculus with Applications in Statistics and Econometrics. John Wiley, second edition, 1999. ISBN 0471986321 9780471986324 047198633X 9780471986331.
+Mai, X., Liao, Z., and Couillet, R. A large scale analysis of logistic regression: Asymptotic performance and new insights. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3357-3361. IEEE, 2019.
+
+Marčenko, V. and Pastur, L. Distribution of eigenvalues for some sets of random matrices. Mathematics of the USSR-Sbornik, 1(4):457-483, 1967.
+Mei, S., Montanari, A., and Nguyen, P.-M. A mean field view of the landscape of two-layer neural networks. Proceedings of the National Academy of Sciences, 115(33): E7665-E7671, 2018.
+Morcos, F., Pagnani, A., Lunt, B., Bertolino, A., Marks, D. S., Sander, C., Zecchina, R., Onuchic, J. N., Hwa, T., and Weigt, M. Direct-coupling analysis of residue coevolution captures native contacts across many protein families. Proceedings of the National Academy of Sciences, 108(49):E1293-E1301, 2011.
+Nguyen, H. C., Zecchina, R., and Berg, J. Inverse statistical problems: from the inverse Ising problem to data science. Advances in Physics, 66(3):197-261, 2017. doi: 10.1080/00018732.2017.1341604. URL https://doi.org/10.1080/00018732.2017.1341604.
+Patil, P., Wu, Y., and Tibshirani, R. Failures and successes of cross-validation for early-stopped gradient descent. In International Conference on Artificial Intelligence and Statistics, pp. 2260-2268. PMLR, 2024.
+Potters, M. and Bouchaud, J.-P. A first course in random matrix theory: for physicists, engineers and data scientists. Cambridge University Press, 2020.
+Rahaman, N., Baratin, A., Arpit, D., Draxler, F., Lin, M., Hamprecht, F., Bengio, Y., and Courville, A. On the spectral bias of neural networks. In Chaudhuri, K. and Salakhutdinov, R. (eds.), Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 5301-5310. PMLR, 09-15 Jun 2019. URL https://proceedings.mlr.press/v97/rahaman19a.html.
+Ricci-Tersenghi, F. The bethe approximation for solving the inverse Ising problem: a comparison with other inference methods. Journal of Statistical Mechanics: Theory and Experiment, 2012(08):P08015, aug 2012. doi: 10.1088/1742-5468/2012/08/P08015. URL https://dx.doi.org/10.1088/1742-5468/2012/08/P08015.
+Roudi, Y., Aurell, E., and Hertz, J. A. Statistical physics of pairwise probability models. Frontiers in computational neuroscience, 3:652, 2009.
+Saad, D. and Solla, S. Dynamics of on-line gradient descent learning for multilayer neural networks. Advances in neural information processing systems, 8, 1995.
+
+Saxe, A. M., McClelland, J. L., and Ganguli, S. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. In Bengio, Y. and LeCun, Y. (eds.), ICLR, 2014. URL http://dblp.uni-trier.de/db/conf/iclr/iclr2014.html#SaxeMG13.
+Suzuki, M. and Kubo, R. Dynamics of the Ising model near the critical point. i. Journal of the Physical Society of Japan, 24(1):51-60, 1968. doi: 10.1143/JPSJ.24.51.
+Tomasini, U. M., Sclocchi, A., and Wyart, M. Failure and success of the spectral bias prediction for Laplace kernel ridge regression: the case of low-dimensional data. In Chaudhuri, K., Jegelka, S., Song, L., Szepesvari, C., Niu, G., and Sabato, S. (eds.), Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pp. 21548-21583. PMLR, 17-23 Jul 2022. URL https://proceedings.mlr.press/v162/tomasini22a.html.
+Tubiana, J., Cocco, S., and Monasson, R. Learning protein constitutive motifs from sequence data. Elife, 8:e39397, 2019.
+Wei, A., Hu, W., and Steinhardt, J. More than a toy: Random matrix models predict how real-world neural representations generalize. In International Conference on Machine Learning, pp. 23549-23588. PMLR, 2022.
+Wu, Z., Johnston, K. E., Arnold, F. H., and Yang, K. K. Protein sequence design with deep generative models. Current opinion in chemical biology, 65:18-27, 2021.
+Yasuda, M. and Tanaka, K. Susceptibility propagation by using diagonal consistency. Phys. Rev. E, 87:012134, Jan 2013. doi: 10.1103/PhysRevE.87.012134. URL https://link.aps.org/doi/10.1103/PhysRevE.87.012134.
+Yelmen, B., Decelle, A., Ongaro, L., Marnetto, D., Tallec, C., Montinaro, F., Furtlehner, C., Pagani, L., and Jay, F. Creating artificial human genomes using generative neural networks. PLoS genetics, 17(2):e1009303, 2021.
+Yelmen, B., Decelle, A., Boulos, L. L., Sztatkownik, A., Furtlehner, C., Charpiat, G., and Jay, F. Deep convolutional and conditional neural networks for large-scale genomic data generation. PLoS Computational Biology, 19(10):e1011584, 2023.
+
+# A. Derivation of projected gradient equations
+
+Starting from the log-likelihood's derivative w.r.t. a parameter $J_{ij}$ , in the limit of an infinitely small learning rate $\gamma \rightarrow 0$ we can replace the discrete-time update equation (2) for the parameter by a differential equation for the evolution of each $J_{ij}$ :
+
+$$
+J _ {i j} (t + 1) = J _ {i j} (t) + \gamma \left. \frac {\partial \mathcal {L}}{\partial J _ {i j}} \right| _ {\boldsymbol {J} (t)} \longrightarrow \frac {1}{\gamma} \frac {d J _ {i j}}{d t} = \left. \frac {\partial \mathcal {L}}{\partial J _ {i j}} \right| _ {\boldsymbol {J} (t)}. \tag {24}
+$$
+
+We now decompose the rhs of the above expression in terms of time-evolution of eigenvalues and eigenvectors of $J$ at time $t$ . Given the eigendecomposition $J_{ij} = \sum_{\gamma} v_i^\gamma J_\gamma v_j^\gamma$ , we have
+
+$$
+\frac {d J _ {i j}}{d t} = \frac {d}{d t} \sum_ {\gamma} v _ {i} ^ {\gamma} J _ {\gamma} v _ {j} ^ {\gamma} = \sum_ {\gamma} \left(\frac {d v _ {i} ^ {\gamma}}{d t} J _ {\gamma} v _ {j} ^ {\gamma} + v _ {i} ^ {\gamma} \frac {d J _ {\gamma}}{d t} v _ {j} ^ {\gamma} + v _ {i} ^ {\gamma} J _ {\gamma} \frac {d v _ {j} ^ {\gamma}}{d t}\right). \tag {25}
+$$
+
+We now project this on the eigenbasis of the eigenvectors of $J$ , which after simple algebraic manipulations leads to
+
+$$
+\sum_ {i j} v _ {i} ^ {\alpha} \frac {d J _ {i j}}{d t} v _ {j} ^ {\beta} = \delta_ {\alpha \beta} \frac {d J _ {\alpha}}{d t} + (1 - \delta_ {\alpha \beta}) \left(\sum_ {i} v _ {i} ^ {\alpha} \frac {d v _ {i} ^ {\beta}}{d t} J _ {\beta} + \sum_ {j} \frac {d v _ {j} ^ {\alpha}}{d t} J _ {\alpha} v _ {j} ^ {\beta}\right) \tag {26}
+$$
+
+$$
+\sum_ {i j} v _ {i} ^ {\alpha} \frac {d J _ {i j}}{d t} v _ {j} ^ {\beta} = \delta_ {\alpha \beta} \frac {d J _ {\alpha}}{d t} + \left(1 - \delta_ {\alpha \beta}\right) \left(J _ {\beta} - J _ {\alpha}\right) \sum_ {i} v _ {i} ^ {\alpha} \frac {d v _ {i} ^ {\beta}}{d t}, \tag {27}
+$$
+
+where we used the property $\mathrm{d}\left( {{\mathbf{v}}^{\alpha } \cdot {\mathbf{v}}^{\beta }}\right) = 0$ , which holds because they are vectors of an orthonormal basis. Finally, combining Eqs. (4) and (27) and separating the contributions for $\alpha = \beta$ and $\alpha \neq \beta$ , we arrive at Eq. (5) in the main text.
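+The projection identity (26)-(27) can be checked numerically on a hypothetical smooth trajectory $J(t) = V(t)\,\mathrm{diag}(\lambda(t))\,V(t)^\top$, with a rotating eigenbasis generated by an antisymmetric matrix; the sizes and random seed below are illustrative placeholders:
+
+```python
+import numpy as np
+from scipy.linalg import expm
+
+rng = np.random.default_rng(1)
+N = 6
+A = rng.normal(size=(N, N)); A = (A - A.T) / 2   # antisymmetric generator of V(t)
+lam0 = np.sort(rng.uniform(1.0, 2.0, size=N))    # eigenvalues J_alpha at t = 0
+dlam = rng.normal(size=N)                        # their time derivatives dJ_alpha/dt
+
+def J(t):
+    V = expm(t * A)                              # orthogonal eigenbasis V(t)
+    return (V * (lam0 + t * dlam)) @ V.T
+
+eps = 1e-5
+dJ = (J(eps) - J(-eps)) / (2 * eps)              # finite-difference dJ/dt at t = 0
+W = A                                            # V^T dV/dt at t = 0, since V(0) = I
+
+lhs = dJ                                         # the projected derivative (V(0) = I)
+# Eq. (27): diagonal entries give dJ_alpha/dt; off-diagonal entries give
+# (J_beta - J_alpha) * sum_i v_i^alpha dv_i^beta/dt  (W has zero diagonal)
+rhs = np.diag(dlam) + (lam0[None, :] - lam0[:, None]) * W
+```
+
+The check relies only on orthonormality of the eigenbasis, exactly as in the derivation above.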
+
+A final note on the log-likelihood's gradient: in the first expression (3) we assumed that perturbations are not symmetric, i.e. when taking the derivative w.r.t. $J_{ij}$ for $i \neq j$ we treat $J_{ij}$ and $J_{ji}$ as independent. Assuming instead symmetric perturbations, one obtains a slightly different form of the log-likelihood's gradient w.r.t. (3), given by:
+
+$$
+\frac {\partial \mathcal {L}}{\partial J _ {i j}} = \Lambda_ {i j} \left[ - \widehat {C} _ {i j} ^ {M} + \left(\boldsymbol {J} ^ {- 1}\right) _ {i j} \right], \tag {28}
+$$
+
+with $\Lambda_{ij} = 1 - \delta_{ij} / 2$ . From the point of view of the training fixed point this is not an issue: the ML estimator is exactly the same in both cases. However, the modified gradient (28) leads to a slightly different dynamics: in particular, it is no longer true that the dynamics decomposes exactly into a separate evolution for each eigenvalue of $J$ by following the above steps. One can either include symmetry constraints on the eigendecomposition when computing the projected gradient (a mathematically more cumbersome procedure, see e.g. (Magnus & Neudecker, 1999)) or simply double the learning rate on the diagonal terms $i = j$ to compensate for the factor $\Lambda_{ij}$ . Nevertheless, the analytic evolution (i.e. Eq. (6), obtained assuming non-symmetric perturbations) and a numerical training performed with the gradient (28) almost coincide, as seen in Figure 7. None of the results presented in the manuscript is affected by this difference in the gradient computation.
+
+# B. Finite- $M$ fluctuations of eigenbasis
+
+We detail additional eigenvalue spectra of covariance matrices from various datasets in Fig. 8, complementing those in Fig. 1 of the main text. The spectra for CIFAR-10 (Krizhevsky et al., 2009) and the Human Genome Dataset (Consortium et al., 2015) are displayed in (a) and (b), respectively. Panel (c) illustrates the empirical covariance matrix of equilibrium configurations sampled from a 2D Ising model with periodic boundaries at high temperature $(\beta = 0.1)$ , i.e. in the paramagnetic phase. Panel (d) presents another synthetic spectrum, generated through a mixture of power laws from Eq. (10), but using a different set of parameters than those of Fig. 1-(b) (which were used for the trainings in Figs. 2 and 3). This new spectrum was used for the figures involving comparisons with RMT (Figs. 4, 5), due to numerical stability issues in the integration of the RMT equations. Nonetheless, all the results presented in the main text about the training dynamics of the GEBM can be reproduced over a wide range of the parameters defining the population eigenvalues, so the qualitative picture that emerges from our analysis is extremely robust with respect to the specific details of the spectrum.
+
+
+Figure 7. Difference in the eigenvalues' evolution in the training of a GEBM when imposing symmetry or allowing asymmetry in the perturbation of $J_{ij}$ . The points correspond to numerical results obtained by enforcing symmetry on $J_{ij}$ after each update during training (i.e. using Eq. (28)), while the lines represent analytical expressions derived for the case of non-symmetric perturbations (i.e. Eq. (6)). The setting is the same as Figure 2 in the main text.
+
+
+Figure 8. (a)-(b)-(c): Eigenvalue spectra of the empirical covariance matrix of real datasets, respectively CIFAR-10 (in (a)), Human Genome Dataset (in (b)), and a dataset made of equilibrium configurations of a 2-d Ising model of size $N = 16^2$ at $\beta = 0.1$ (in (c)). Black lines represent the spectrum computed with the full set of available data (of size $M^*$ ), while colored scatter points show the result for a subset of data $M < M^*$ . (d): the black line shows a synthetic population eigenvalue spectrum generated according to (10), with $N = 100$ , $r = 0.5$ , $\beta = 1.0$ , $\gamma = 0.5$ , $x_1 = 10^{-1}$ , $x_2 = 10$ ; colored points display the eigenvalues of the empirical covariance matrix $\widehat{C}^M$ computed by sampling $M$ configurations from a GEBM with $J^* = C^{*-1}$ (from (1)) for different values of $M$ .
+
+
+
+
+
+
+
+Fig. 9 illustrates how consistent the eigenbasis of the covariance matrix of real datasets remains under downsampling. Starting from the eigenbasis decomposition of the covariance matrix of the largest available dataset $M^{*}$ (considered our closest approximation to the population matrix, $\hat{C}^{M^{*}} \approx \hat{C}^{\infty} = C^{*}$ ), we denote its eigenvector matrix as $U^{*} = \{u_{\alpha}^{*}\}_{\alpha = 1}^{N}$ . These eigenvectors are arranged columnwise and sorted in descending order of their corresponding eigenvalue. For each reduced sample size $M < M^{*}$ , we perform a similar decomposition on the resulting empirical covariance matrix $\hat{C}^{M}$ , with its basis represented as $U^{M} = \{u_{\alpha}^{M}\}_{\alpha = 1}^{N}$ . To evaluate the preservation of eigenvectors, we calculate the norm of the product between a projection operator $P^n$ , defined as $P^n = U_{1:n}^{*}$ (incorporating the first $n$ eigenvectors of $C^{*}$ ), and the $\alpha$ -th eigenvector of $\hat{C}^{M}$ , $u_{\alpha}^{M}$ . This measurement determines whether $u_{\alpha}^{M}$ falls within the subspace spanned by the first $n$ eigenvectors of $C^{*}$ , thereby mitigating apparent eigenvector oscillations due to exchanges in the ordering of the associated eigenvalues. Fig. 9 shows the norm $\left\| P^{n^\top} \cdot u_\alpha^M \right\|$ plotted versus $n$ for each value of $\alpha$ , for the same 4 datasets as in Fig. 8. For high values of $M$ most eigenvectors are well preserved: most eigenvectors of $\hat{C}^{M}$ are contained in the subspace spanned by those of $C^{*}$ , as the norm rises sharply to 1 when $n \approx \alpha$ . On the other hand, when $M$ is lowered (lower panels of each subplot) this conservation starts to deteriorate, especially in the middle-lower part of the spectrum. Interestingly, the most conserved directions (at least in (a)-(b)) are both the strongest covariance modes and the weakest ones, a phenomenon already highlighted in (Bun et al., 2018).
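+The observable of Fig. 9 is straightforward to compute. A self-contained sketch on synthetic Gaussian data (the spectrum, sample sizes and random seed are illustrative assumptions):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(3)
+N = 50
+
+# Hypothetical population covariance with a decaying spectrum in a random basis
+lam = np.sort(10.0 ** rng.uniform(-1.0, 1.0, size=N))[::-1]
+Q = np.linalg.qr(rng.normal(size=(N, N)))[0]
+C_star = (Q * lam) @ Q.T
+
+def eig_basis(M):
+    # Eigenbasis of the empirical covariance of M samples from N(0, C*),
+    # columns sorted by decreasing eigenvalue.
+    X = rng.multivariate_normal(np.zeros(N), C_star, size=M)
+    _, U = np.linalg.eigh(X.T @ X / M)
+    return U[:, ::-1]
+
+U_star = eig_basis(20000)   # proxy for the population basis (large M*)
+U_M = eig_basis(200)        # reduced sample size M < M*
+
+def proj_norm(n, alpha):
+    # || P^{n,T} u_alpha^M ||: weight of u_alpha^M inside the span of the
+    # first n "population" eigenvectors.
+    return np.linalg.norm(U_star[:, :n].T @ U_M[:, alpha])
+```
+
+By construction this norm is non-decreasing in $n$ and reaches 1 at $n = N$, so curves that rise sharply near $n \approx \alpha$ signal a well-preserved eigenvector.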
+
+
+
+
+Figure 9. Finite- $M$ fluctuations of eigenvectors in the covariance matrix of datasets. The four panels show the norm of the matrix product between the $n$ -th projection operator $\pmb{P}^n$ , containing the first $n$ eigenvectors of the population matrix $\pmb{C}^{*}$ (for a real dataset, we just take the covariance matrix with the full available data $M^{*}$ ), and the $\alpha$ -th eigenvector of the covariance matrix $\widehat{\pmb{C}}^M$ with $M < M^{*}$ . (a)-(b)-(c) respectively refer to the MNIST dataset (Deng, 2012), the Human Genome dataset, and equilibrium configurations drawn from a 2d Ising model (same setting as in Fig. 8-(c)). (d) refers to a synthetic Gaussian model generated as discussed in the main text with the same settings as in Fig. 1. All panels show the results for two values of $M$ , a larger one at the top and a lower one at the bottom. Results are plotted w.r.t. the projector index $n$ and each line corresponds to a different $\alpha$ .
+
+
+
+
+
+# C. Training dynamics in GEBM with non-commutative initialization
+
+This section provides a brief follow-up to the discussion in the first part of Section 4, concerning the training dynamics of a GEBM. For simplicity we focus here only on the infinite-sample scenario (i.e. training from $C^*$ ), although the same reasoning also holds for finite data. We are also interested in describing the training dynamics for a generic initialization of the matrix $J$ , which in general will not commute with $C^*$ . In this scenario, the model also has to learn the eigenvectors of $C^*$ . Fig. 10-(a) shows the evolution of the coupling matrix eigenvalues $J_{\alpha}$ according to Eq. (6) (solid lines), compared to a numerical training performed by iteratively maximizing the likelihood as in Eq. (2) (points). The initial condition here is a matrix $J(0)$ constructed from a random population of modes $J_{\alpha}(0) \sim U[0,1]$ expressed in a random orthogonal eigenbasis, so that $J(0)$ and $C^*$ do not commute. At the beginning of the training there is indeed a discrepancy between theory and simulations, because the assumption of independent eigenvalue evolution does not yet hold. Once the eigenvectors align, the evolution proceeds independently for each eigenvalue and perfectly follows Eq. (6).
+
+Note that the initial oscillations of the mode-to-mode eigenvector overlap in Fig. 10-(b) are due to the fact that eigenvalue learning is non-monotonic at the beginning, so that there is an initial exchange in the ordering of the eigenvectors. Nonetheless, after an initial transient all the eigenvectors align with their counterparts in the covariance matrix. This alignment process occurs on a much faster timescale than the learning of the eigenvalues (especially those associated with weaker covariances): for this reason, the independence assumption underlying our analytic description of the eigenvalues' evolution remains justified for practical purposes. This reasoning about eigenvector alignment holds for any input covariance matrix: the nontrivial dynamics of the reconstruction error (explained in Sec. 4) is fully determined by the noise in the eigenvalues.
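+A toy realization of this alignment can be obtained with a full-matrix gradient ascent $J \leftarrow J + \gamma\,(J^{-1} - C^*)$ (the non-symmetric gradient convention of Eq. (3)); the spectrum, sizes and learning rate below are our own illustrative choices, not the paper's settings:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(2)
+N = 4
+c = np.array([5.0, 2.0, 1.0, 0.5])             # well-separated population covariances
+Q = np.linalg.qr(rng.normal(size=(N, N)))[0]   # eigenbasis u*_alpha of C*
+C = (Q * c) @ Q.T
+
+# Non-commuting initialization: random modes in a different random orthogonal basis
+R = np.linalg.qr(rng.normal(size=(N, N)))[0]
+J = (R * rng.uniform(0.5, 1.5, size=N)) @ R.T
+
+gamma = 0.02
+for _ in range(50000):
+    J = J + gamma * (np.linalg.inv(J) - C)     # gradient ascent on the Gaussian LL
+
+# At the fixed point J = C^{*-1}, the eigenvalues 1/c_alpha (ascending under eigh)
+# are carried by the columns of Q (sorted by decreasing c), in matching order.
+_, V = np.linalg.eigh(J)
+overlaps = np.abs(np.sum(Q * V, axis=0))       # mode-to-mode overlaps |u*_alpha . v_alpha|
+```
+
+After convergence all mode-to-mode overlaps approach 1, reproducing the late-time alignment of Fig. 10-(b).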
+
+
+Figure 10. Training dynamics of the GEBM from a population matrix $C^*$ . The system size and the parameters defining $C^*$ are the same as in Fig. 8-(d). (a): Evolution of eigenvalues; comparison between analytic solution (full line) and numerical training (points). The initial condition $J(0)$ is constructed by drawing a random distribution of modes and expressing them in a random orthogonal basis that differs from the eigenbasis of $C^*$ , so that the two matrices do not commute. (b): Alignment of eigenvectors, computed as the mode-to-mode overlap between eigenvectors of the population matrix $\mathbf{u}_{\alpha}^{*}$ and eigenvectors of $J$ , i.e. $\mathbf{v}_{\alpha}$ . Red-ish (resp. blue-ish) colors correspond to strong (resp. weak) covariances $c_{\alpha}^{*}$ . The learning rate is set to $\gamma = 10^{-3}$ .
+
+# D. Robustness of results w.r.t. initialization at finite $M$
+
+We show in Fig. 11 results analogous to Fig. 2-(b) for the training dynamics of the GEBM in the case of finite $M$ (here we set $\rho = M / N = 2.11$ ), varying the initialization. In this case we are not concerned with eigenvector alignment as in the previous section: we only consider different initializations of the eigenmodes of the coupling matrix, i.e. $\{J_{\alpha}\}_{\alpha = 1}^{N}$ . The non-monotonic behavior of the reconstruction error (plotted in the bottom row for each initialization) is robust against standard, uninformative and small initializations, and it persists as long as $J_{\alpha}(0) < 1 / \hat{c}_{\alpha}^{M}$ for the majority of the eigenvalues (especially those corresponding to weak covariances). We also observe that, when the initial conditions are raised above the fixed point (i.e. moving from columns (1)-(2)-(3) to the rightmost ones (4)-(5)), the non-monotonic behavior disappears, indicating that the early-stopping break-point no longer exists and that overfitting effects could in principle be mitigated with this strategy. However, it is common practice to start with small values. Moreover, in more complex EBMs where sampling is required to estimate the correlations of the model in the LL gradient (e.g., BMs or RBMs), adopting extreme initializations (i.e. far from an uninformative initialization where the parameters of the model are small) may be a very bad idea: it could lead to ergodicity problems in sampling, as the model may get stuck in spin-glass-like phases, a phenomenon that has been well studied in several EBMs (see e.g. (Decelle et al., 2018)).
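+Once eigenbases are aligned, the gradient (28) acts on each mode independently, with fixed point $J_\alpha = 1/\hat{c}_\alpha^M$ as noted above. The sketch below integrates this per-mode flow with a forward-Euler scheme; the population spectrum and its "noisy empirical" counterpart are illustrative stand-ins (noise largest on the weak modes), not the paper's actual parameters:
+
+```python
+import numpy as np
+
+c_star = np.array([2.0, 1.0, 0.2, 0.05])   # hypothetical population covariances
+c_hat = np.array([2.1, 0.95, 0.3, 0.02])   # noisy empirical estimates (assumption)
+J_star = 1.0 / c_star                      # ground-truth couplings
+
+gamma, dt, steps = 1.0, 0.05, 20000
+J = np.ones_like(c_hat)                    # identity-like initialization, J_alpha(0) = 1
+err = np.empty(steps)
+for s in range(steps):
+    err[s] = np.sum((J - J_star) ** 2)     # reconstruction error E_J
+    J += dt * gamma * (1.0 / J - c_hat)    # dJ_alpha/dt = gamma (1/J_alpha - c_hat_alpha)
+
+t_best = int(np.argmin(err))               # interior minimum: the early-stopping point
+```
+
+The error first decreases while the strong modes are learned, then rises again as the weak modes relax toward their noisy empirical fixed points $1/\hat{c}_\alpha^M$, reproducing the interior early-stopping minimum seen in Fig. 11.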
+
+
+Figure 11. Results on the training dynamics of the GEBM at a finite amount of data, varying the initial conditions. Panels (a) (top row) show the eigenvalues' evolution (according to Eq. (12)), while panels (b) (bottom row) show the corresponding reconstruction error $\mathcal{E}_{\mathrm{J}}$ w.r.t. the ground-truth model $J^{*}$ ; all quantities are shown versus time. Each column corresponds to a different initial condition. From left to right: an identity-like initialization $(J_{\alpha}(0) = 1)$ in column (1), as in Fig. 2-(b); a small-coupling initialization $(J_{\alpha}(0) = 10^{-2})$ in column (2); two random initializations of the modes (drawn respectively from the intervals $[10^{-1}, 10]$ and $[1, 30]$ , in columns (3)-(4)); a constant initialization to values larger than the fixed point, i.e. $J_{\alpha}(0) = 50 > 1 / \hat{c}_{\alpha}^{M}$ , in column (5). All trainings are performed analytically, with an empirical covariance matrix $\widehat{C}^{M}$ generated with the same settings as in Fig. 2 with $\rho = M / N = 2.11$ .
+
+# E. Additional generation quality metrics
+
+In this section we consider an additional metric for the discrepancy between the trained model and the true one, namely the Wasserstein distance (Delon et al., 2022). Figure 12 shows the same data as Figure 3 of the main text, now including the evolution of the Wasserstein distance between the trained model and the true one as a function of training time. This quantity also shows a non-monotonic behavior in $t$ , especially for low $M$ , with a clear early-stopping point. In the rightmost panel, we compare the locations of the minima of each error estimator (and the maximum of the log-likelihood) as functions of $\rho$ . We observe that the time point corresponding to the minimum Wasserstein distance follows a trend very similar to that of the maximum log-likelihood.
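+For centered Gaussian models the 2-Wasserstein distance has a closed form in the covariance matrices, $W_2^2 = \mathrm{tr}\big(C_1 + C_2 - 2(C_1^{1/2} C_2 C_1^{1/2})^{1/2}\big)$ (the Bures formula); whether this is exactly the variant used for Fig. 12 is our assumption, and the matrices below are synthetic placeholders:
+
+```python
+import numpy as np
+from scipy.linalg import sqrtm
+
+def w2_sq_gaussians(C1, C2):
+    # Squared 2-Wasserstein distance between N(0, C1) and N(0, C2)
+    s1 = sqrtm(C1)
+    cross = np.real(sqrtm(s1 @ C2 @ s1))   # discard tiny imaginary round-off
+    return float(np.trace(C1 + C2 - 2.0 * cross))
+
+rng = np.random.default_rng(4)
+A = rng.normal(size=(4, 4))
+C = A @ A.T / 4 + 0.1 * np.eye(4)          # toy covariance matrix (assumption)
+
+zero_d = w2_sq_gaussians(C, C)             # a model compared with itself
+# For commuting (here diagonal) covariances it reduces to sum (sqrt(a) - sqrt(b))^2
+a, b = np.array([1.0, 4.0]), np.array([9.0, 16.0])
+diag_d = w2_sq_gaussians(np.diag(a), np.diag(b))
+```
+
+The diagonal sanity check makes the metric's dependence on the spectra explicit, which is convenient when comparing it with the eigenvalue-based error estimators above.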
+
+# F. GEBM analysis with various eigenvalue spectra
+
+This section presents additional results on the GEBM, analogous to Fig. 3 in the main text, obtained using alternative spectra for the population covariance matrix. To assess the robustness of our findings with respect to spectral choice, we replicate the analysis using both synthetic and empirical spectra.
+
+First, in Fig. 13, we consider a synthetic spectrum distinct from Eq. (10): for $N = 100$ , we generate 10 dominant modes with amplitudes uniformly distributed in [2, 10], and a bulk of $N - k = 90$ noisy modes with amplitudes in $[10^{-1}, 1]$ . The qualitative behavior, including overfitting effects, remains unchanged.
+
+Next, we repeat the analysis using empirical spectra: specifically, the eigenvalues of the sample covariance matrices from the MNIST and Human Genome Dataset (HGD), shown in Figs. 14 and 15, are used as population spectra. The resulting dynamics, analogous to Fig. 12, again show no qualitative deviations. In all cases, the non-monotonic temporal behavior of key metrics and the early-stopping times are preserved.
+
+
+Panels: (a) Reconstruction error; (b) test log-likelihood; (c) Generation error; (d) Wasserstein distance; (e) comparison of characteristic times.
+Figure 12. We compare the results shown in Fig. 3, obtained using various generation quality measures, with the corresponding curves computed using the Wasserstein distance. (a)-(b)-(c)-(d) display respectively the reconstruction error $\mathcal{E}_J$ , the test-LL, the generation error $\mathcal{E}_C$ and the Wasserstein distance, all plotted vs time, for various sample sizes $M$ (indicated by a color gradient from blue to red for increasing $\rho = M / N$ ). Dashed black lines refer to a training from $C^*$ (i.e. $M \to \infty$ ). (e): comparison between the time of minimum reconstruction error (circles), maximum test LL (diamonds), minimum Wasserstein distance (squares) and the time at which the generation error converges to its steady-state value. These quantities are also shown in the related panels for better clarity. Apart from panel (d), this figure contains the same information and quantities as Fig. 3.
+
+
+Panels: (a) Reconstruction error; (b) test log-likelihood; (c) Generation error; (d) Wasserstein distance; (e) comparison of characteristic times.
+Figure 13. Same plots as in Fig. 12, this time obtained by training a GEBM starting from a synthetic population covariance matrix spectrum of dimension $N = 100$ , with a set of 10 dominant modes with amplitudes uniformly distributed in the interval [2, 10], and a bulk of $N - k = 90$ noisy modes with amplitudes uniformly distributed between $10^{-1}$ and 1.
+
+
+Panels: (a) Reconstruction error; (b) test log-likelihood; (c) Generation error; (d) Wasserstein distance; (e) comparison of characteristic times.
+Figure 14. Same plots as in Fig. 12, this time obtained by training a GEBM starting from the eigenvalue spectrum of the empirical covariance matrix computed from the MNIST dataset, with a cutoff at $10^{-6}$ to filter out weak modes.
+
+
+Panels: (a) Reconstruction error; (b) test log-likelihood; (c) Generation error; (d) Wasserstein distance; (e) comparison of characteristic times.
+Figure 15. Same plots as in Fig. 12, this time obtained by training a GEBM starting from the eigenvalue spectrum of the empirical covariance matrix computed from the HGD dataset, with a cutoff at $10^{-12}$ to filter out weak modes.
+
+# G. Asymptotic analysis through Random Matrix Theory
+
+# G.1. General case
+
+Various quantities appearing in the core of the manuscript are explicit functions of the empirical covariance matrix $\widehat{C}^M$ , and as such are amenable to asymptotic analysis thanks to random matrix theory (RMT). These quantities are the train and test energies (associated with the EBM), the coupling error and the LL (train and test). For the sake of clarity, we repeat them here:
+
+$$
+E _ {\text {train}} = \frac {1}{N} \operatorname {Tr} [ J \widehat {C} ^ {M} ], \tag {29}
+$$
+
+$$
+E _ {\text {test}} = \frac {1}{N} \operatorname {Tr} \left[ J C ^ {*} \right], \tag {30}
+$$
+
+$$
+\mathcal {E} _ {\mathrm {J}} \stackrel {\text {def}} {=} \frac {1}{N} \| \boldsymbol {J} - \boldsymbol {J} ^ {*} \| _ {F} ^ {2}, \tag {31}
+$$
+
+$$
+L L _ {\text {train,test}} \stackrel {\text {def}} {=} \frac {1}{2 N} \log \det [ J ] - \frac {1}{2} E _ {\text {train,test}}, \tag {32}
+$$
+
+where $C^* \stackrel{\mathrm{def}}{=} \lim_{M \to \infty} \widehat{C}^M$ is the population matrix, $\| \cdot \|_F^2$ the Frobenius norm, while $\pmb{J}$ is the estimate of the coupling matrix from the train samples $\pmb{x}$ , assumed to be of the form $\pmb{x} = \pmb{F} \pmb{z}$ , with $\mathbb{E}(z z^\top) = \mathbb{I}$ , $\pmb{F} \pmb{F}^\top = \widehat{C}^M$ , and $\tau = \| \pmb{x} \|$ distributed according to some density $\sigma(\tau)$ . Depending on the setting (dynamical, spectral $\widetilde{L}_1$ or $L_2$ ), $\pmb{J}$ may appear in three different explicit functional forms $j_t$ , $j_\alpha^{(\widetilde{L}_1)}$ and $j_\alpha^{(\mathrm{L}_2)}$ of $\widehat{C}^M$ . We have
+
+$$
+j _ {t} (x) = \frac {1}{x} \left(1 + W _ {0} \left[ - e ^ {- x ^ {2} t - 1} \right]\right), \quad \text {(training dynamics)}, \tag {33}
+$$
+
+$$
+j _ {\alpha} ^ {(\widetilde {L} _ {1})} (x) = \frac {\alpha}{1 + \alpha x}, \quad (L _ {1} \text { (spectral) regularization}), \tag {34}
+$$
+
+$$
+j _ {\alpha} ^ {(\mathrm {L} _ {2})} (x) = \frac {\alpha}{2} \left(\sqrt {x ^ {2} + \frac {4}{\alpha}} - x\right), \quad (L _ {2} \text { regularization}). \tag {35}
+$$
+
+A derivation of Eqs. (34)-(35) is given in Appendix H.1. The function $j_{t}$ corresponds to the situation where all eigenvalues $J_{\alpha}$ have the initial condition $J_{\alpha}(0) = 0$ and follow the time evolution of Eq. (6). Let us generically denote by $j$ any of the functions given above.
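+
+These three estimators are straightforward to implement; a sketch using `scipy.special.lambertw` for $W_0$ (function and argument names are ours; `a` stands for the inverse penalty $\alpha$ of Appendix G.2):
+
+```python
+import numpy as np
+from scipy.special import lambertw
+
+def j_t(x, t):
+    """Training-dynamics estimator, Eq. (33) (flow with J_alpha(0) = 0)."""
+    return (1.0 + lambertw(-np.exp(-x**2 * t - 1.0)).real) / x
+
+def j_L1(x, a):
+    """Spectral L1-regularized estimator, Eq. (34)."""
+    return a / (1.0 + a * x)
+
+def j_L2(x, a):
+    """L2-regularized estimator, Eq. (35)."""
+    return 0.5 * a * (np.sqrt(x**2 + 4.0 / a) - x)
+
+# long-time / weak-penalty limit: all three reduce to the
+# unregularized estimator 1/x
+print(j_t(2.0, 1e3), j_L1(2.0, 1e6), j_L2(2.0, 1e6))
+```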
+
+Using the resolvent
+
+$$
+\mathbf {G} ^ {(M)} (z) \stackrel {\mathrm {def}} {=} \frac {1}{z \mathbb {I} - \widehat {C} ^ {M}},
+$$
+
+we can express the various quantities of interest with the help of Cauchy integrals
+
+$$
+E _ {\mathrm {train}} = \frac {1}{2 i \pi} \oint_ {\mathcal {C}} d z \, j (z) \mathrm {Tr} \big [ \mathbf {G} ^ {(M)} (z) \widehat {\boldsymbol {C}} ^ {M} \big ],
+$$
+
+$$
+E _ {\text {test}} = \frac {1}{2 i \pi} \oint_ {\mathcal {C}} d z \, j (z) \operatorname {Tr} \left[ \mathbf {G} ^ {(M)} (z) \boldsymbol {C} ^ {*} \right],
+$$
+
+$$
+\mathcal {E} _ {\mathrm {J}} = \mathrm {T r} \big [ C ^ {* - 2} \big ] + \frac {1}{2 i \pi} \oint_ {\mathcal {C}} d z \Big (j ^ {2} (z) \mathrm {T r} \big [ \mathbf {G} ^ {(M)} (z) \big ] - 2 j (z) \mathrm {T r} \big [ \mathbf {G} ^ {(M)} (z) C ^ {* - 1} \big ] \Big),
+$$
+
+$$
+L L _ {\mathrm {train,test}} = \frac {1}{2 i \pi} \oint_ {\mathcal {C}} d z \frac {\log [ j (z) ]}{2} \mathrm {Tr} \bigl [ \mathbf {G} ^ {(M)} (z) \bigr ] - \frac {E _ {\mathrm {train,test}}}{2},
+$$
+
+where $\mathcal{C}$ is a contour of integration around the real axis. Next, from RMT, in the proportional asymptotic limit $M,N\to \infty$ with fixed $M / N = \rho$ , $\mathbf{G}^{(M)}(z)$ has a deterministic equivalent (Hachem et al., 2007) $\mathbf{G}$ , defined as
+
+$$
+\mathbf {G} (z) = \frac {1}{z \mathbb {I} - \Lambda (z) \mathbf {C} ^ {*}},
+$$
+
+with $C^*$ the population matrix and $\Lambda(z)$ given implicitly by the following self-consistent equations (Marcenko & Pastur, 1967)
+
+$$
+\Lambda (z) = \int \frac {\sigma (\tau) \, d \tau}{1 - \Gamma (z) \tau}, \tag {36}
+$$
+
+$$
+\Gamma (z) = \frac {1}{\rho} \int \frac {\nu (d x) x}{z - \Lambda (z) x}, \tag {37}
+$$
+
+with $\nu(dx)$ the spectral density of the population matrix and where
+
+$$
+\Gamma \stackrel {\text {d e f}} {=} \lim _ {\substack {N, M \rightarrow \infty\\\frac {M}{N} = \rho}} \frac {\alpha}{M} \operatorname {Tr} \left[ \mathbf {G} ^ {(M)} C ^ {*} \right]. \tag{38}
+$$
+
+For the sake of clarity we do not consider the Marchenko-Pastur equations in full generality: we assume the fluctuations of $\tau$ to be negligible, i.e. we take $\sigma(\tau) = \delta(\tau - 1)$ . Letting $\bar{\nu}(dx)$ be the asymptotic limit of the empirical spectrum in the proportional regime, its Stieltjes transform is given by the trace of the resolvent:
+
+$$
+g (z) \stackrel {\mathrm {d e f}} {=} \int \frac {\bar {\nu} (d x)}{z - x}.
+$$
+
+The bulk spectral density is then obtained from the boundary values of the Stieltjes transform
+
+$$
+g (y + i \epsilon) = g _ {r} (y) + i \pi \frac {\epsilon}{| \epsilon |} \bar {\nu} (y),
+$$
+
+which rewrites (disregarding the pole at $z = 0$ for $\rho < 1$ )
+
+$$
+\bar {\nu} (y) = \frac {\rho \Lambda_ {i} (y)}{\pi y} = \frac {\rho}{\pi y} \frac {\Gamma_ {i} (y)}{\left[ 1 - \Gamma_ {r} (y) \right] ^ {2} + \Gamma_ {i} (y) ^ {2}}. \tag {39}
+$$
+
+Along the contour we integrate over $z = y + i\epsilon$ with $\epsilon$ infinitesimal. In the limit $\epsilon \to 0$ , both $\Lambda$ and $\Gamma$ may acquire a finite imaginary part which we write as
+
+$$
+\lim _ {\epsilon \to 0 ^ {\pm}} \Lambda (z) = \Lambda_ {r} (y) \pm i \Lambda_ {i} (y),
+$$
+
+$$
+\lim _ {\epsilon \to 0 ^ {\pm}} \Gamma (z) = \Gamma_ {r} (y) \pm i \Gamma_ {i} (y).
+$$
+
+In terms of these quantities we obtain the following equations for the train and test energies:
+
+$$
+E _ {\mathrm {train}} = \frac {\rho}{\pi} \int_ {0} ^ {\infty} d y \, j (y) \big [ \Lambda_ {r} (y) \Gamma_ {i} (y) + \Lambda_ {i} (y) \Gamma_ {r} (y) \big ],
+$$
+
+$$
+E _ {\mathrm {test}} = \frac {\rho}{\pi} \int_ {0} ^ {\infty} d y \, j (y) \Gamma_ {i} (y) + \mathbb {1} _ {\{\rho < 1 \}} j (0) c (\rho),
+$$
+
+where $c(\rho)$ is given implicitly by
+
+$$
+\int \frac {\nu (d x) x}{x + c (\rho)} = \rho . \tag {40}
+$$
+
+$E_{\mathrm{train}}$ may also be written as
+
+$$
+E _ {\mathrm {train}} = \int_ {0} ^ {\infty} d y \, y \, j (y) \bar {\nu} (y),
+$$
+
+with $\bar{\nu} (y)$ given in (39).
+
+The coupling error takes, for any $\rho > 0$ , the form
+
+$$
+\begin{array}{l} \mathcal {E} _ {\mathrm {J}} = \int_ {0} ^ {\infty} \frac {\nu (d x)}{x ^ {2}} + \int_ {0} ^ {\infty} \bar {\nu} (d y) j ^ {2} (y) - \frac {2}{\rho} \int_ {0} ^ {\infty} \frac {\bar {\nu} (d y)}{y} \Big [ (1 - \rho) + 2 \rho \Lambda_ {r} (y) \Big ] j (y) \\ + \mathbb {1} _ {\{\rho < 1 \}} \Big [ (1 - \rho) j ^ {2} (0) + 2 j (0) \Big (\frac {1 - \rho}{c (\rho)} - \int_ {0} ^ {\infty} \frac {\nu (d x)}{x} \Big) \Big ], \\ \end{array}
+$$
+
+but in practice we consider only the under-parameterized regime corresponding to $\rho > 1$ .
+
+# G.2. Special case of spectral $L_{1}$ Regularization
+
+The case corresponding to the form (34) can be treated more directly, without the use of Cauchy integrals. In that case, we consider instead the resolvent
+
+$$
+\mathbf {G} ^ {(M)} = \frac {1}{\mathbb {I} + \alpha \hat {\mathbf {C}} ^ {M}},
+$$
+
+with the inverse penalty
+
+$$
+\alpha = \frac {1}{\lambda}
+$$
+
+introduced here for convenience. This leads to the following form of the various quantities of interest
+
+$$
+E _ {\text {train}} = \frac {\alpha}{N} \operatorname {Tr} \left[ \mathbf {G} ^ {(M)} \widehat {\boldsymbol {C}} ^ {M} \right], \tag {41}
+$$
+
+$$
+E _ {\text {test}} = \frac {\alpha}{N} \operatorname {Tr} \left[ \mathbf {G} ^ {(M)} \boldsymbol {C} ^ {*} \right], \tag {42}
+$$
+
+$$
+\mathcal {E} _ {\mathrm {J}} \stackrel {\text {def}} {=} \frac {1}{N} \| \alpha \mathbf {G} ^ {(M)} - \boldsymbol {J} ^ {*} \| _ {F} ^ {2}, \tag {43}
+$$
+
+$$
+L L _ {\text {train,test}} = \frac {1}{2 N} \operatorname {Tr} \left[ \log \alpha \mathbf {G} ^ {(M)} \right] - \frac {1}{2} E _ {\text {train,test}}. \tag {44}
+$$
+
+In the scaling limit we again have a deterministic equivalent (Hachem et al., 2007) of the resolvent of the form
+
+$$
+\mathbf {G} = \frac {1}{\mathbb {I} + \Lambda \mathbf {C} ^ {*}}
+$$
+
+where the fixed point equations now read $(\sigma(\tau) = \delta(\tau - 1))$
+
+$$
+\Gamma = \frac {\alpha}{\rho} \int \nu (d x) \frac {x}{1 + \Lambda x} \tag {45}
+$$
+
+$$
+\Lambda = \frac {\alpha}{1 + \Gamma} \tag {46}
+$$
+
+with again $\Gamma$ given by (38). The expressions for $E_{\mathrm{train,test}}$ are straightforward in the scaling limit:
+
+$$
+E _ {\mathrm {train}} = 1 - \int \frac {\nu (d x)}{1 + \Lambda x}
+$$
+
+$$
+E _ {\mathrm {test}} = \rho \, \Gamma .
+$$
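+
+Equations (45)-(46) are easily solved by plain fixed-point iteration. The sketch below (synthetic uniform spectrum and parameter values of our own choosing) compares the resulting asymptotic $E_{\mathrm{train}}$ , together with the deterministic-equivalent prediction $E_{\mathrm{test}} \simeq \alpha \int \nu(dx)\, x / (1 + \Lambda x)$ , against a finite-size Monte Carlo:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(1)
+N, rho, a = 300, 2.0, 1.0            # a = alpha = 1/lambda
+M = int(rho * N)
+spec = rng.uniform(0.5, 3.0, N)      # hypothetical population spectrum (C* diagonal)
+
+# asymptotics: iterate the self-consistent equations (45)-(46)
+Gam, Lam = 0.0, a
+for _ in range(500):
+    Gam = (a / rho) * np.mean(spec / (1.0 + Lam * spec))
+    Lam = a / (1.0 + Gam)
+E_train_th = 1.0 - np.mean(1.0 / (1.0 + Lam * spec))
+E_test_th = a * np.mean(spec / (1.0 + Lam * spec))
+
+# finite-size Monte Carlo with the resolvent of Eqs. (41)-(42)
+X = rng.standard_normal((M, N)) * np.sqrt(spec)
+C_hat = X.T @ X / M
+G = np.linalg.inv(np.eye(N) + a * C_hat)
+E_train_mc = a * np.trace(G @ C_hat) / N
+E_test_mc = a * np.trace(G * spec) / N    # Tr[G C*] for diagonal C*
+
+print(E_train_th, E_train_mc, E_test_th, E_test_mc)
+```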
+
+Remarkably, thanks to a leave-one-out argument, there is a deterministic relationship between the train and test energies. For $s \in \mathcal{I}_{\mathrm{train}}$ , we have a leave-one-out relation of the form
+
+$$
+\mathbf {G} ^ {(M)} \boldsymbol {x} _ {s} = \frac {\mathbf {G} _ {\backslash s} ^ {(M)} \boldsymbol {x} _ {s}}{1 + \frac {\alpha}{M} \boldsymbol {x} _ {s} ^ {t} \mathbf {G} _ {\backslash s} ^ {(M)} \boldsymbol {x} _ {s}}
+$$
+
+where $\mathbf{G}_{\backslash s}^{(M)}$ is the resolvent obtained after removing sample $s$ from the train set. Assuming the samples to be of the form $\pmb{x}_s = F\pmb{z}_s$ with $FF^t = C$ and $\pmb{z}_s \sim \mathcal{N}(0,\mathbb{I})$ , so that $\mathbb{E}(\pmb{z}_s\pmb{z}_s^t) = \mathbb{I}$ , for large $N$ we have the concentration property
+
+$$
+\frac {1}{M} \boldsymbol {x} _ {s} ^ {t} \mathbf {G} _ {\backslash s} ^ {(M)} \boldsymbol {x} _ {s} = \frac {1}{M} \operatorname {Tr} \left[ \mathbf {G} ^ {(M)} C \right] + \mathcal {O} \left(\frac {1}{\sqrt {M}}\right).
+$$
+
+As a result for large $N$ , $M$ we immediately obtain
+
+$$
+E _ {\mathrm {train}} = \frac {E _ {\mathrm {test}}}{1 + \frac {1}{\rho} E _ {\mathrm {test}}},
+$$
+
+which can be reverted as
+
+$$
+E _ {\text {test}} = \frac {E _ {\text {train}}}{1 - \frac {1}{\rho} E _ {\text {train}}}. \tag {47}
+$$
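+
+Relation (47) is easy to probe numerically; a quick Monte Carlo sketch (diagonal $C^{*}$ ; sizes, seed and penalty are arbitrary choices of ours):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(2)
+N, rho, a = 400, 2.0, 1.0                 # a = alpha = 1/lambda
+M = int(rho * N)
+spec = rng.uniform(0.5, 3.0, N)           # hypothetical diagonal C*
+X = rng.standard_normal((M, N)) * np.sqrt(spec)
+C_hat = X.T @ X / M
+
+G = np.linalg.inv(np.eye(N) + a * C_hat)  # resolvent of this section
+E_train = a * np.trace(G @ C_hat) / N
+E_test = a * np.trace(G * spec) / N       # Tr[G C*] for diagonal C*
+
+E_test_pred = E_train / (1.0 - E_train / rho)   # Eq. (47)
+print(E_train, E_test, E_test_pred)
+```
+
+At these sizes the predicted and measured test energies agree to within a few percent.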
+
+Concerning the error on the couplings, we have
+
+$$
+\begin{array}{l} \mathcal {E} _ {\mathrm {J}} = \frac {1}{N} \operatorname {Tr} \left[ \left(\alpha \mathbf {G} ^ {(M)} - \boldsymbol {J} ^ {*}\right) ^ {2} \right] \\ = \frac {1}{N} \left(\alpha^ {2} \operatorname {Tr} \left[ \mathbf {G} ^ {(M) 2} \right] + \operatorname {Tr} \left[ \boldsymbol {J} ^ {* 2} \right] - 2 \alpha \operatorname {Tr} \left[ \mathbf {G} ^ {(M)} \boldsymbol {J} ^ {*} \right]\right) \\ \end{array}
+$$
+
+From the result of Ledoit and Péché (Ledoit & Peché, 2011), the last term simply reads (up to $\mathcal{O}\big(1 / \sqrt{M}\big)$ corrections):
+
+$$
+\frac {1}{M} \operatorname {T r} \left[ \mathbf {G} ^ {(M)} J ^ {*} \right] = \frac {1}{M} \operatorname {T r} \left[ \frac {\boldsymbol {J} ^ {*}}{\mathbb {I} + \Lambda \boldsymbol {C} ^ {*}} \right] = \frac {1}{M} \operatorname {T r} \left[ \frac {\boldsymbol {J} ^ {* 2}}{\boldsymbol {J} ^ {*} + \Lambda} \right]
+$$
+
+For the first term we use the following identity:
+
+$$
+\mathbf {G} ^ {(M) 2} = \mathbf {G} ^ {(M)} + \alpha \frac {d}{d \alpha} \mathbf {G} ^ {(M)}
+$$
+
+As a result, asymptotically we have:
+
+$$
+\begin{array}{l} \mathrm {T r} \Big [ \mathbf {G} ^ {(M) 2} \Big ] = \mathrm {T r} \Big [ \frac {1}{\mathbb {I} + \Lambda C ^ {*}} \Big ] + \alpha \frac {d}{d \alpha} \mathrm {T r} \Big [ \frac {1}{\mathbb {I} + \Lambda C ^ {*}} \Big ] \\ = \left(1 - \Lambda^ {\prime} (\alpha)\right) \operatorname {T r} \left[ \frac {1}{\mathbb {I} + \Lambda C ^ {*}} \right] + \Lambda^ {\prime} (\alpha) \operatorname {T r} \left[ \frac {1}{(\mathbb {I} + \Lambda C ^ {*}) ^ {2}} \right] \\ \end{array}
+$$
+
+For this we need to compute $\Lambda'(\alpha)$ , which can be done from the self-consistent equations (45)-(46):
+
+$$
+\Lambda^ {\prime} (\alpha) = \frac {\Lambda^ {2}}{\alpha^ {2}} \frac {\rho}{\rho - Q [ \Lambda ]}
+$$
+
+with
+
+$$
+Q [ \Lambda ] = \frac {1}{M} \mathrm {T r} \Bigl [ \frac {\Lambda^ {2} C ^ {* 2}}{(\mathbb {I} + \Lambda C ^ {*}) ^ {2}} \Bigr ]
+$$
+
+Ultimately we obtain:
+
+$$
+\mathcal {E} _ {J} = \frac {1}{M} \mathrm {T r} \Big [ \Big (\frac {\alpha}{1 + \Lambda C ^ {*}} - \frac {1}{C ^ {*}} \Big) ^ {2} \Big ] + \frac {\alpha^ {2} (1 - \Lambda^ {\prime})}{M} \mathrm {T r} \Big [ \frac {\Lambda C ^ {*}}{(1 + \Lambda C ^ {*}) ^ {2}} \Big ],
+$$
+
+or, in terms of the population spectral density,
+
+$$
+\mathcal {E} _ {J} = \int \nu (d x) \left[ \frac {\alpha}{1 + \Lambda x} - \frac {1}{x} \right] ^ {2} + \alpha^ {2} (1 - \Lambda^ {\prime}) \int \nu (d x) \frac {\Lambda x}{(1 + \Lambda x) ^ {2}}
+$$
+
+Finally, concerning $LL_{\mathrm{train, test}}$ , we do not see how to avoid the Cauchy integral; however, the train-test relationship (47) has an important consequence, since it allows us to obtain a very precise estimate of the test likelihood when $M$ becomes large:
+
+$$
+L L _ {\mathrm {test}} (J) = \frac {1}{2} \log \det (J) - \frac {E _ {\mathrm {train}}}{1 - \frac {1}{\rho} E _ {\mathrm {train}}}
+$$
+
+as long as $J$ is the function (34) of $\widehat{C}^{M}$ . For general EBMs we have a LL of the form
+
+$$
+L L _ {\mathrm {train,test}} [ J ] = - \log Z [ J ] - E _ {\mathrm {train,test}} [ J ]
+$$
+
+so, by analogy with GCV, it is not excluded that this train-test relation can be used in practice.
+
+# H. Details on data-correction protocols
+
+In this section, we analyze different ways to improve the estimation of the covariance matrix's eigenvalues, in order to avoid or mitigate overfitting during the training dynamics.
+
+# H.1. Training dynamics with regularization prior for finite $N$
+
+We first discuss what happens to the training in the presence of a regularization. We employ two regularization protocols: a standard $L_{2}$ -norm, and a projected $L_{1}$ -norm. The choice of the second regularization is justified because it yields a maximum-a-posteriori coupling matrix which commutes with the original covariance matrix $\hat{C}^{M}$ , as happens in the absence of regularization, thus facilitating the asymptotic analysis through RMT discussed in Appendix G.
+
+# H.1.1. $L_{2}$ REGULARIZATION
+
+The log-posterior now reads
+
+$$
+\frac {1}{M} \log p (\boldsymbol {J} \mid \mathcal {D}) = \mathcal {L} _ {\mathcal {D}} (\boldsymbol {J}) - \frac {\lambda}{4} \operatorname {T r} \left(\boldsymbol {J} ^ {2}\right) \tag {48}
+$$
+
+where $\lambda$ is the regularization strength. The derivative w.r.t. the parameters now reads
+
+$$
+\frac {1}{M} \frac {\partial \log p (\boldsymbol {J} \mid \mathcal {D})}{\partial J _ {i j}} = \left[ - \hat {C} _ {i j} ^ {M} + \left(\boldsymbol {J} ^ {- 1}\right) _ {i j} - \lambda J _ {i j} \right] \tag {49}
+$$
+
+Notice that the new term commutes with the second one ( $J$ and $J^{-1}$ are diagonal in the same basis), so even in this case the maximum-a-posteriori matrix $\widehat{J}^{\mathrm{MAP}}$ shares the same eigenbasis as $\widehat{C}^{M}$ , as happens in the absence of regularization. Therefore, we can apply the same reasoning discussed in the main text and project the log-posterior's gradient onto the eigenbasis of $J$ . The evolution equation of each eigenvalue reads:
+
+$$
+\tau \frac {\mathrm {d} J _ {\alpha}}{\mathrm {d} t} = \frac {1}{J _ {\alpha}} - \hat {c} _ {\alpha} ^ {M} - \lambda J _ {\alpha}, \tag {50}
+$$
+
+Although no closed-form expression exists for the full time-dependent solution of Eq. (50), its fixed point can at least be computed analytically:
+
+$$
+J _ {\alpha} ^ {(\infty) - L _ {2}} (\lambda) = \frac {1}{2 \lambda} \left[ - \hat {c} _ {\alpha} ^ {M} + \sqrt {\left(\hat {c} _ {\alpha} ^ {M}\right) ^ {2} + 4 \lambda} \right] \tag {51}
+$$
+
+The full coupling matrix corresponding to the above fixed point is finally computed projecting back Eqs. (51) onto the eigenbasis of $\widehat{C}^M$ , that is $J^{(\infty)-L_2}(\lambda) = \sum_{\alpha} J_{\alpha}^{(\infty)-L_2}(\lambda) \boldsymbol{u}_{\alpha}^M \boldsymbol{u}_{\alpha}^{M^\top}$ .
+
+# H.1.2. SPECTRAL $\widetilde{L}_1$ -NORM
+
+This regularization scheme uses an $L_{1}$ -norm, but applied in the eigenbasis of the coupling matrix $\mathbf{J}$ . This construction still allows one to employ a formula similar to Eq. (12) to describe the evolution of the eigenvalues, each one independently of the others. The original differential equation describing the evolution of $J_{\alpha}$ is now modified as
+
+$$
+\frac {1}{\gamma} \frac {\mathrm {d} J _ {\alpha}}{\mathrm {d} t} = \frac {1}{J _ {\alpha}} - \hat {c} _ {\alpha} ^ {M} - \lambda , \tag {52}
+$$
+
+whose fixed point reads
+
+$$
+J _ {\alpha} ^ {(\infty) - \widetilde {L} _ {1}} (\lambda) = \frac {1}{\hat {c} _ {\alpha} ^ {M} + \lambda} \tag {53}
+$$
+
+Note that Eqs. (51) and (53) just derived are the same as Eqs. (35) and (34) of Appendix G, respectively.
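+
+Both fixed points are immediate to verify against their stationarity conditions; a minimal check (numerical values arbitrary):
+
+```python
+import numpy as np
+
+def fp_L2(c, lam):
+    """Fixed point of Eq. (50), i.e. Eq. (51)."""
+    return (-c + np.sqrt(c**2 + 4.0 * lam)) / (2.0 * lam)
+
+def fp_L1(c, lam):
+    """Fixed point of Eq. (52), i.e. Eq. (53)."""
+    return 1.0 / (c + lam)
+
+c, lam = 2.0, 0.5
+J2, J1 = fp_L2(c, lam), fp_L1(c, lam)
+# the respective gradients vanish at the fixed points:
+print(1.0 / J2 - c - lam * J2, 1.0 / J1 - c - lam)
+```
+
+Both residuals vanish to machine precision, and for any $\lambda > 0$ both fixed points shrink the unregularized value $1 / \hat{c}_{\alpha}^{M}$ towards smaller couplings.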
+
+# H.1.3. RESULTS ON THE EFFECT OF REGULARIZATION
+
+We can assess the performance of either type of regularization by looking at the training fixed point, and at how it modifies the quality of the inferred model. In Fig. 16-(a), we show the reconstruction error $\mathcal{E}_{\mathrm{J}}$ computed between the ground truth and the inferred model in the presence of a regularization prior of strength $\lambda$ , both for the $L_{2}$ -norm (solid lines) and for the spectral $L_{1}$ -norm (dashed lines), at a fixed number of samples: here $\rho = M / N = 1.5$ . Comparison is
+
+
+Figure 16. Effect of the regularization priors on the inferred model's quality. (a): the plot shows the reconstruction error $\mathcal{E}_{\mathrm{J}}$ computed between the ground truth and the inferred model in the presence of a regularization prior with strength $\lambda$ . Solid lines refer to the $L_{2}$ norm, while dashed lines to the spectral $L_{1}$ norm, both discussed in Section H.1. For each prior, we compare finite-size results (colored lines) and RMT asymptotic estimations (black lines). (b)-(c): we show the values of the regularization strength $\lambda_{\mathrm{opt}}$ that achieve optimal reconstruction of the model (lines with scatter points), and the optimal value of the regularization that maximizes the test log-likelihood (lines with scatter diamonds). Panels (b) (resp. (c)) refer to the optimal values when using the $L_{2}$ -norm (resp. the spectral $L_{1}$ -norm). Note that (b)-(c) are on the same y-scale. Settings are the same as in Fig. 4.
+
+
+
+shown between finite-size trainings (colored lines) and the RMT estimation (black lines). All quantities are plotted versus the regularization strength $\lambda$ : we can observe a clear non-monotonic behavior, with a minimum developing at a certain $\lambda_{\mathrm{opt}}$ . As one might expect, the optimal value differs between the two regularization priors, i.e. $\lambda_{\mathrm{opt}}^{L_2} \neq \lambda_{\mathrm{opt}}^{L_1}$ . Results for both priors are shown in the same panel to highlight that the two regularization schemes have qualitatively the same effect on the final reconstruction error: that is, the quality of the inferred model at the optimal value is the same for both regularizations. Panels (b)-(c) show instead the optimal value of $\lambda$ computed either by minimizing the reconstruction error (points) or by maximizing the test log-likelihood (diamonds). Panel (b) is actually a repetition of Fig. 4-(d) and refers to the $L_2$ prior, while (c) refers to the spectral $L_1$ prior. Again, we observe a similar behavior of the two norms, with the only difference that $\lambda_{\mathrm{opt}}^{L_2} < \lambda_{\mathrm{opt}}^{L_1}$ independently of the chosen criterion. Finally, all the optimal values go to 0 when $\rho \to \infty$ , as expected.
+
+What is the effect of the regularization on the training dynamics? Considering that the standard training has a non-monotonic behavior w.r.t. the training time, we would expect that, since the regularization strongly improves the model's quality w.r.t. the standard case (at least at the optimal regularization strength $\lambda_{\mathrm{opt}}$ ), such a non-monotonic behavior is diminished. This is indeed the case, as shown by Fig. 17-(a), displaying different training curves for different regularization strengths: the closer the regularization is to its optimal value (highlighted in red), the smoother the model's quality is w.r.t. training time. At the optimal point the model's quality is completely monotonic and approaches the fixed point at the same reconstruction error as the minimum w.r.t. time.
+
+Actually, due to the simplicity of the GEBM, it is even possible to interpret the $L_{2}$ regularization as a shrinkage correction protocol. Consider indeed the training fixed point of Eq. (50). We stress again that the maximum-a-posteriori matrix $\pmb{J}$ has the same basis decomposition as the empirical covariance matrix, because the regularization term commutes with the other two terms in Eq. (50). By the duality between covariance matrix and coupling matrix in the Gaussian EBM, we can think of the reciprocals of the fixed-point eigenvalues of Eq. (50) as eigenvalues of a covariance matrix corrected w.r.t. $\hat{\pmb{C}}^M$ , depending on $\lambda$ . We can therefore define another eigenvalue-corrected covariance matrix, using the analytic fixed point of the training dynamics obtained through the regularization:
+
+$$
+\widehat {\boldsymbol {C}} _ {\text {val-} L _ {2} (\lambda)} ^ {M} = \sum_ {\alpha} \frac {1}{J _ {\alpha} ^ {(\infty) - L _ {2}} (\lambda)} \boldsymbol {u} _ {\alpha} ^ {M} \boldsymbol {u} _ {\alpha} ^ {M \top} \tag {54}
+$$
+
+By definition, the training fixed points obtained using $i$ ) a training dynamics with the untouched empirical covariance matrix $\widehat{C}^M$ plus the regularization term, or $ii$ ) a regularization-free dynamics using the matrix (54), are the same. Fig. 17-(b) indeed shows how the regularization modifies the eigenvalues of $\widehat{C}^M$ when interpreting the fixed point of the training dynamics
+
+
+Figure 17. Effect of different $L_{2}$ -norm regularization strengths $\lambda$ on the GEBM's learning dynamics. Panel (a) shows the reconstruction error vs time. The dotted blue line corresponds to the standard training over $\widehat{C}^{M}$ . All the other full lines correspond to a training with a certain value of the regularization strength $\lambda$ , obtained by numerically solving Eq. (50) for all modes. The regularization strength $\lambda$ increases from yellowish to blueish colors (see the colorbar on the right). The curve corresponding to the optimal regularization, minimizing the reconstruction error after training, is highlighted in red. Panel (b): plot of the equivalent covariance modes corrected by the regularization. For each curve, we scatter these values (Eq. (54)) against the population eigenvalues. Here we set $\rho = 1.66$ .
+
+
+
+as a shrinkage correction. Each set of points shows the quantities $1 / J_{\alpha}^{(\infty) - L_2}(\lambda)$ (i.e. the eigenvalues of (54)) vs the population ones $c_{\alpha}^{*}$ . Intuitively, the optimal regularization (red points) is the one that makes such corrected eigenvalues as close as possible to the population ones. This entire reasoning holds analogously for the spectral $L_{1}$ norm.
+
+# H.2. Empirical shrinkage correction through modes fitting
+
+A simple way to perform a heuristic shrinkage correction is to down-sample the empirical covariance matrix and estimate the asymptotic eigenvalues through a fitting procedure. Starting from the available dataset with $M$ samples - whose covariance matrix is $\hat{C}^M$ - we can randomly extract subsets of $N < m < M$ samples and estimate the eigenvalues of the size-reduced covariance matrices $\hat{C}^m$ . Every time such a down-sampling is performed, both the eigenvalues and the eigenvectors differ from those of the original $\hat{C}^M$ ; here, however, we only track the eigenvalues, keeping the basis fixed to that of $\hat{C}^M$ . After repeating this computation for different values of $N < m < M$ , for each mode $\alpha$ we can fit the resulting data $\{\hat{c}_{\alpha}^{m}\}_{m\in (N;M]}$ according to
+
+$$
+\hat {c} _ {\alpha} (m) = \frac {1}{m ^ {\nu}} A _ {\alpha} ^ {M} + B _ {\alpha} ^ {M} \tag {55}
+$$
+
+with $\nu$ being an exponent of choice. The coefficients $B_{\alpha}^{M}$ will represent the asymptotic estimate of the $\alpha$ -th eigenvalue of the population matrix, $c_{\alpha}^{*}$ , corresponding to the $m \to \infty$ extrapolation. After fitting each mode separately, we can construct an eigenvalue-corrected covariance matrix as
+
+$$
+\hat {\boldsymbol {C}} _ {\text {val-fit}} = \sum_ {\alpha} B _ {\alpha} ^ {M} \boldsymbol {u} _ {\alpha} ^ {M} \boldsymbol {u} _ {\alpha} ^ {M \top} \tag {56}
+$$
+
+This procedure can in principle be generalized, e.g., by using a combination of powers in the fitting function, although in this work we restricted ourselves to the functional form (55); secondly, the estimation of the fitting coefficients can be improved by collecting mean values of the eigenvalues $\{\hat{c}_{\alpha}^{m}\}_{m\in (N;M]}$ , evaluating the down-sampled covariance matrix $\hat{C}^m$ multiple times with different subsets of the $M$ original data. In the experiments presented in the main text for the GEBM (orange line in Fig. 5-(b)), we used a simple linear fit in $1 / m$ ( $\nu = 1$ ) and 10 random resamplings for each value of $m$ , from which the mean eigenvalue is extracted to perform the fit. This linear scaling (in $1 / m$ ) of the finite-size fluctuations of the eigenvalues in the GEBM is also supported by theoretical evidence (see e.g. (Ledoit & Peché, 2011)). In the experiments for the Ising-BM instead (orange line in Fig. 6-(c)), we find that the best reconstruction is achieved with $\nu = 1 / 2$ , while a linear fit performs consistently poorly. Also in this case we used 10 resampling steps. An example of such a fitting procedure is shown in Figure 18 for both the GEBM (in (a)) and the BM (in (b)), in each case for a given value of $M$ .
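+
+The mechanics of the extrapolation can be sketched on synthetic downsampled eigenvalues that follow Eq. (55) exactly up to small noise (all numerical values below are illustrative, not taken from the experiments):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(3)
+# hypothetical downsampled estimates of one mode: c(m) = A/m + B + noise
+A_true, B_true = 12.0, 3.0
+ms = np.array([120, 160, 200, 260, 320, 400])
+c_hat_m = A_true / ms + B_true + 1e-3 * rng.standard_normal(ms.size)
+
+# linear fit in 1/m (nu = 1); the intercept estimates the population eigenvalue
+A_fit, B_fit = np.polyfit(1.0 / ms, c_hat_m, 1)
+print(B_fit)   # close to B_true
+```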
+
+
+Figure 18. Examples of eigenmode fitting procedures for the GEBM and the Ising-BM. Each panel illustrates the procedure used to fit the eigenmodes of the covariance matrix $\hat{C}^M$ by downsampling to $m < M$ samples, in order to extrapolate their behavior as $m \to \infty$ , following Eq. (55). Panel (a) refers to the GEBM used in the main text (e.g., Fig. 5). For a fixed value of $M$ such that $\rho = M / N = 1.66$ , we show a subset of eigenvalues $\{\hat{c}_{\alpha}^{M}\}$ (denoted by + markers), along with their downsampled counterparts $\{c_{\alpha}^{m}\}$ for several values of $m < M$ (small circles). These downsampled eigenvalues are obtained by randomly selecting $m$ samples from the full dataset and computing the eigenvalues of the resulting covariance matrix, averaged over 10 independent instances. Dashed black lines correspond to fits using Eq. (55) with $\nu = 1$ , and colored diamonds indicate the extrapolated intercepts $B_{\alpha}^{M}$ , i.e., the estimated eigenvalues at $m \to \infty$ . Crosses mark the population eigenvalues $\{c_{\alpha}^{*}\}$ , showing that the fitted extrapolations are significantly closer to the true population spectrum than the empirical eigenvalues obtained from $M$ samples. Panel (b) shows the analogous procedure for the Ising-BM, using $\rho = 7.81$ . In this case, the best fits are obtained with $\nu = 1 / 2$ , and the horizontal axis is accordingly rescaled as $1 / \sqrt{m}$ .
+
+
+
+# I. Eigendecomposition of training dynamics on Boltzmann Machine
+
+We consider an Ising-like Boltzmann Machine for the inference of binary-valued data. The probability of a configuration $\pmb{x}$ , with $x_{i} \in \{-1, +1\}$ , at given parameters is expressed as
+
+$$
+p (\boldsymbol {x} \mid \boldsymbol {J}, \boldsymbol {h}) = \frac {1}{Z} e ^ {\sum_ {i < j} J _ {i j} x _ {i} x _ {j} + \sum_ {i} x _ {i} h _ {i}}. \tag {57}
+$$
+
+We suppose we generate equilibrium configurations from a known model with $\theta^{*} = (J^{*},h^{*})$ (possibly rescaled by an external factor $\beta$ that plays the role of an inverse temperature), and we want to infer back the original model through a likelihood-maximization procedure. The LL of a given set of parameters $\theta = (J,h)$ is
+
+$$
+\mathcal {L} _ {\mathcal {D}} (\boldsymbol {J}, \boldsymbol {h}) = \frac {1}{M} \sum_ {\mu = 1} ^ {M} \log p \left(\boldsymbol {x} ^ {\mu} \mid \boldsymbol {J}, \boldsymbol {h}\right) = \sum_ {i < j} J _ {i j} \mathbb {E} _ {\mathcal {D}} \left[ x _ {i} x _ {j} \right] + \sum_ {i} h _ {i} \mathbb {E} _ {\mathcal {D}} \left[ x _ {i} \right] - \log Z \tag {58}
+$$
+
+where $\mathbb{E}_{\mathcal{D}}[\cdot ]$ denotes the average w.r.t. the dataset $\mathcal{D} = \{\pmb{x}^{\mu}\}_{\mu = 1}^{M}$ . In what follows, for the analytic treatment of the training dynamics in the ML procedure, we neglect the problem of learning the external fields $h_i$ . This assumption is consistent with a scenario where the data have null magnetizations. The gradient of the LL w.r.t. the couplings $J_{ij}$ then reads
+
+$$
+\frac {\partial \mathcal {L} _ {\mathcal {D}}}{\partial J _ {i j}} = \mathbb {E} _ {\mathcal {D}} [ x _ {i} x _ {j} ] - \mathbb {E} _ {\boldsymbol {J}} [ x _ {i} x _ {j} ] = \hat {C} _ {i j} ^ {M} - \mathbb {E} _ {\boldsymbol {J}} [ x _ {i} x _ {j} ]. \tag {59}
+$$
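As a concrete illustration of the moment-matching gradient (59), the model average $\mathbb{E}_{\boldsymbol{J}}[x_i x_j]$ can be computed exactly for a very small system by enumerating all $2^N$ configurations of model (57). The sketch below (illustrative sizes, couplings, and learning rate; not the paper's actual experiment) uses the population moments of a known model as the data term, so that gradient ascent on the LL recovers the generating couplings:

```python
import itertools

import numpy as np

def model_moments(J):
    """Exact E_J[x_i x_j] under the zero-field model (57), obtained by
    enumerating all 2^N configurations (feasible only for small N)."""
    N = J.shape[0]
    X = np.array(list(itertools.product([-1, 1], repeat=N)))
    # sum_{i<j} J_ij x_i x_j = x^T J x / 2 for symmetric J with zero diagonal
    E = 0.5 * np.einsum('ci,ij,cj->c', X, J, X)
    w = np.exp(E - E.max())
    w /= w.sum()
    return np.einsum('c,ci,cj->ij', w, X, X)

rng = np.random.default_rng(0)
N, lr = 4, 0.1
J_true = rng.normal(0.0, 0.3, (N, N))
J_true = (J_true + J_true.T) / 2.0          # symmetric couplings
np.fill_diagonal(J_true, 0.0)

C_hat = model_moments(J_true)               # data moments in the M -> infinity limit
J = np.zeros((N, N))
for _ in range(1000):
    grad = C_hat - model_moments(J)         # Eq. (59)
    np.fill_diagonal(grad, 0.0)             # self-couplings carry no gradient
    J += lr * grad                          # gradient ascent on the LL
```

Because the pairwise statistics form a minimal exponential family, the LL is concave and exact moment matching has the generating model as its unique optimum, which is why the ascent converges to $J^{*}$ here.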
+
+Then, the couplings are updated as
+
+$$
+J _ {i j} (t + 1) \leftarrow J _ {i j} (t) + \gamma \frac {\partial \mathcal {L} _ {\mathcal {D}} (t)}{\partial J _ {i j}}, \tag {60}
+$$
+
+where $\gamma$ is the learning rate. Assuming an ideal training regime with an infinitesimal learning rate, we can recast the evolution of the couplings $J$ in time in the following matrix form
+
+$$
+\tau \frac {d J _ {i j}}{d t} = \left. \frac {\partial \mathcal {L} _ {\mathcal {D}}}{\partial J _ {i j}} \right| _ {J (t)} \quad \Longrightarrow \quad \tau \frac {d J}{d t} = \widehat {C} ^ {M} - \left\langle \boldsymbol {x} \boldsymbol {x} ^ {\top} \right\rangle_ {J}. \tag {61}
+$$
+
+where $\langle \cdot \rangle_{\boldsymbol{J}} \equiv \mathbb{E}_{\boldsymbol{J}}[\cdot]$ denotes the average w.r.t. the model (57), and $\tau = 1 / \gamma$ . Note that, since we neglect local magnetizations, the diagonal entries of the r.h.s. of (61) vanish. This is consistent with the fact that self-couplings $J_{ii}$ do not evolve in time, because they correspond to constant terms in the energy function. In order to make the model's correlations $\mathbb{E}_{\boldsymbol{J}}[\pmb{x}\pmb{x}^\top]$ analytically treatable, we now implement a mean-field approximation. We can exploit the following exact self-consistent expression for the correlator (Suzuki & Kubo, 1968), which we further expand for high temperatures,
+
+$$
+\mathbb {E} _ {\boldsymbol {J}} \left[ x _ {i} x _ {j} \right] = \delta_ {i j} + \left(1 - \delta_ {i j}\right) \mathbb {E} _ {\boldsymbol {J}} \left[ x _ {i} \tanh \sum_ {k} J _ {j k} x _ {k} \right] \approx \delta_ {i j} + \left(1 - \delta_ {i j}\right) \sum_ {k} J _ {j k} \mathbb {E} _ {\boldsymbol {J}} \left[ x _ {i} x _ {k} \right], \tag {62}
+$$
+
+Then, in matrix form $(C_{ij} = \mathbb{E}_{\boldsymbol{J}}[x_i x_j])$ we get
+
+$$
+\boldsymbol {C} = \mathbb {I} _ {N} + \boldsymbol {J C} - \operatorname {d i a g} [ \boldsymbol {J C} ] \tag {63}
+$$
+
+where $\mathbb{I}_N$ is the identity matrix of size $N$ , and the operator $\operatorname{diag}[\mathcal{M}]$ extracts the diagonal part of a matrix $\mathcal{M}$ . The last term is introduced to correct the wrong estimation of the diagonal entries of $C$ , which should be equal to 1. The problem is that the above equation does not admit a simple analytical solution for the model's correlation matrix $C$ , which was the original goal. Instead, what is typically implemented in the literature is the following linear-response expression for the correlations
+
+$$
+\boldsymbol {C} = f (\boldsymbol {J}) = \left(\mathbb {I} _ {N} - \boldsymbol {J}\right) ^ {- 1} \tag {64}
+$$
+
+which gives a reliable estimate of the correlation matrix and is at the core of well-studied mean-field-like expressions for the inferred couplings (Kappen & Rodríguez, 1998; Ricci-Tersenghi, 2012). However, the diagonal entries of (64) are not in general equal to 1. Normally, this is not an issue because one is interested in correlations for $i \neq j$ (i.e. off-diagonal entries). However, in our approach such a diagonal mismatch creates a non-null gradient on the diagonal entries of $J$ . Indeed, by plugging Eq. (64) into the LL's gradient, we get
+
+$$
+\tau \frac {d \boldsymbol {J}}{d t} = \widehat {\boldsymbol {C}} ^ {M} - \left(\mathbb {I} _ {N} - \boldsymbol {J}\right) ^ {- 1} \tag {65}
+$$
+
+Now, the diagonal part of the r.h.s. of Eq. (65) is not null anymore. In order to circumvent this additional issue, a possible solution is to modify the gradient by removing the diagonal terms "by hand", since they would otherwise result in a nonphysical evolution of the self-couplings:
+
+$$
+\tau \frac {d \boldsymbol {J}}{d t} = \hat {\boldsymbol {C}} ^ {M} - \left(\mathbb {I} _ {N} - \boldsymbol {J}\right) ^ {- 1} + \operatorname {d i a g} \left[ \mathbb {I} _ {N} - \left(\mathbb {I} _ {N} - \boldsymbol {J}\right) ^ {- 1} \right] \tag {66}
+$$
+
+The last term in the above expression correctly fixes the diagonal problem for the matrix $J$ in the gradient. This strategy is similar to what is carried out in (Fanthomme et al., 2022), where the authors add a spherical constraint on the gradient in the form of a Lagrange multiplier. However, this leads to a complicated expression even for the training fixed point, because the evolution of all the eigenvalues becomes coupled. Since here we are interested in the dynamics of training, adding this constraint would result in a system of coupled differential equations for the eigenvalues, which has the same computational complexity as the original problem, so nothing would be gained. The simplest strategy is therefore to use the approximate expression for the correlator without adding the constraint, i.e., to use the gradient (65) as it is. Although it might seem a crude approximation, it still allows us to decompose the dynamics in the same way as for the GEBM. Before going on, it is worth noticing that the diagonal matching problem is at the core of some refined mean-field approximations for binary (Ising-like) maximum-entropy models, which exploit e.g. iterative diagonal consistency tricks (see e.g. (Yasuda & Tanaka, 2013; Kiwata, 2014)). Therefore, as explained in Sec. 4 for the GEBM, we can project the log-likelihood's gradient onto the eigenbasis of $J$ , leading to an expression for the rotation of eigenvectors (the same as in (5)) and another for the evolution of its eigenvalues, given below:
+
+$$
+\tau \frac {\mathrm {d} J _ {\alpha}}{\mathrm {d} t} = \hat {c} _ {\alpha \alpha} ^ {M} - \frac {1}{1 - J _ {\alpha}} \tag {67}
+$$
+
+which is the same equation shown in the main text (Eq. (11)). As explained in the main text, the solution of the above equation describes an independent evolution of the eigenvalues which is not quantitatively accurate with respect to the numerical results, but still captures the qualitative trend: in particular, it perfectly describes the separation of timescales among PCA modes during the training dynamics, an effect which is also observed in the numerical results (see panels (b) in Fig. 6).
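The decoupling of modes can be checked in a small numerical sketch (illustrative spectrum and step size; not the paper's setup). Starting from $J(0) = 0$ , the matrix flow (65) preserves the eigenbasis of $\widehat{C}^M$ , so its eigenvalues follow the independent scalar equations (67), each relaxing to the fixed point $J_\alpha^* = 1 - 1/\hat{c}_\alpha$ at a rate that grows with $\hat{c}_\alpha$ , i.e., the separation of timescales mentioned above:

```python
import numpy as np

rng = np.random.default_rng(1)
N, dt, steps = 5, 1e-3, 20000
c_hat = np.array([4.0, 2.5, 1.5, 0.8, 0.5])   # illustrative data eigenvalues
Q, _ = np.linalg.qr(rng.normal(size=(N, N)))  # random orthonormal eigenbasis
C_hat = Q @ np.diag(c_hat) @ Q.T              # synthetic covariance matrix

# matrix gradient flow, Eq. (65), explicit Euler from J(0) = 0
J = np.zeros((N, N))
for _ in range(steps):
    J += dt * (C_hat - np.linalg.inv(np.eye(N) - J))

# scalar flows, Eq. (67): one independent equation per eigenmode
j = np.zeros(N)
for _ in range(steps):
    j += dt * (c_hat - 1.0 / (1.0 - j))

J_eigs = np.linalg.eigvalsh(J)                # ascending order
fixed_points = 1.0 - 1.0 / c_hat              # fixed points of Eq. (67)
```

Since both loops are Euler discretizations of the same dynamics expressed in different bases, the eigenvalues of the matrix flow coincide with the scalar trajectories up to floating-point error, and both approach the fixed points $1 - 1/\hat{c}_\alpha$ , the fast (large-$\hat{c}_\alpha$) modes first.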
+
+
+Figure 19. Supplementary results on the Boltzmann Machine for the inverse Ising problem. The model, dataset and training setting are the same as in Fig. 6. (a): error between the covariance matrix of configurations generated from the model (along the training trajectory) and the covariance matrix of the training set $\widehat{C}^M$ , i.e. $\mathcal{E}_{\widehat{C}^M} = \left\| \widehat{C}^M - C^{\mathrm{gen}}(t)\right\|_{\mathrm{F}}$ . (b): generation error computed w.r.t. a test set, again between covariance matrices, i.e. $\mathcal{E}_{\mathcal{C}^{\mathrm{test}}} = \left\| C^{\mathrm{test}} - C^{\mathrm{gen}}(t)\right\|_{\mathrm{F}}$ . Both quantities are plotted versus training time (number of updates) for different values of $M$ shown in the colorbar. The learning rate is set to $\gamma = 10^{-2}$ .
+
+
+
+
+
+
+Figure 20. Supplementary results on the Boltzmann Machine for the inverse Ising problem. The model, dataset and training setting are the same as in Fig. 6, except for the system size, which here corresponds to a lattice of linear size $L = 32$ , so that $N = L^2 = 1024$ . We show the Frobenius norm of the error between the true model and the trained one versus training time (number of updates) for different values of $\rho = M / N$ shown in the colorbar. The learning rate is set to $\gamma = 10^{-2}$ .
\ No newline at end of file
diff --git a/atheoreticalframeworkforoverfittinginenergybasedmodeling/images.zip b/atheoreticalframeworkforoverfittinginenergybasedmodeling/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..b864d76155e6a2815cd497d6b1ac56d552ca695a
--- /dev/null
+++ b/atheoreticalframeworkforoverfittinginenergybasedmodeling/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:00f38035f8fc39bb90880cf0fdb31039b6bcc625f313aca2336c80fd007ae3bd
+size 1734459
diff --git a/atheoreticalframeworkforoverfittinginenergybasedmodeling/layout.json b/atheoreticalframeworkforoverfittinginenergybasedmodeling/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..9eb724f90ec0a9bbdf0fabec8498f7491e29bf2a
--- /dev/null
+++ b/atheoreticalframeworkforoverfittinginenergybasedmodeling/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:aa7cdc7ed2f15266fa3661f27e62d5779cc1da4c205429e574a365390f8b3f54
+size 1380174
diff --git a/atheoreticaljustificationforasymmetricactorcriticalgorithms/5eb562a7-449c-42e5-a1db-2330465b61e4_content_list.json b/atheoreticaljustificationforasymmetricactorcriticalgorithms/5eb562a7-449c-42e5-a1db-2330465b61e4_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..8aae1ea49fa5aa626477aecfa47a9dcf49da55f8
--- /dev/null
+++ b/atheoreticaljustificationforasymmetricactorcriticalgorithms/5eb562a7-449c-42e5-a1db-2330465b61e4_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f9ee91e3e7f88c63c3b1e4efbabb8bd8824a14e2651e17ed2f569117dc4b263d
+size 265563
diff --git a/atheoreticaljustificationforasymmetricactorcriticalgorithms/5eb562a7-449c-42e5-a1db-2330465b61e4_model.json b/atheoreticaljustificationforasymmetricactorcriticalgorithms/5eb562a7-449c-42e5-a1db-2330465b61e4_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..9ae1f618da2a0866778da6559964e0d61280bc1b
--- /dev/null
+++ b/atheoreticaljustificationforasymmetricactorcriticalgorithms/5eb562a7-449c-42e5-a1db-2330465b61e4_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9e481fced39c428d58607a50beae8cfd328f4ce362c3dea82f3def1659426334
+size 297890
diff --git a/atheoreticaljustificationforasymmetricactorcriticalgorithms/5eb562a7-449c-42e5-a1db-2330465b61e4_origin.pdf b/atheoreticaljustificationforasymmetricactorcriticalgorithms/5eb562a7-449c-42e5-a1db-2330465b61e4_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..1b9551221ad693d2dcf593c52370b323aba902dc
--- /dev/null
+++ b/atheoreticaljustificationforasymmetricactorcriticalgorithms/5eb562a7-449c-42e5-a1db-2330465b61e4_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:846306360c95def0ea465acb3e04e2ab5162b258e1a37eda00a9c02e6f57d968
+size 465625
diff --git a/atheoreticaljustificationforasymmetricactorcriticalgorithms/full.md b/atheoreticaljustificationforasymmetricactorcriticalgorithms/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..c853e8e0cf889e856b33cbc4521444500a096a78
--- /dev/null
+++ b/atheoreticaljustificationforasymmetricactorcriticalgorithms/full.md
@@ -0,0 +1,1415 @@
+# A Theoretical Justification for Asymmetric Actor-Critic Algorithms
+
+Gaspard Lambrechts $^{1*}$ Damien Ernst $^{1}$ Aditya Mahajan $^{2}$
+
+# Abstract
+
+In reinforcement learning for partially observable environments, many successful algorithms have been developed within the asymmetric learning paradigm. This paradigm leverages additional state information available at training time for faster learning. Although the proposed learning objectives are usually theoretically sound, these methods still lack a precise theoretical justification for their potential benefits. We propose such a justification for asymmetric actor-critic algorithms with linear function approximators by adapting a finite-time convergence analysis to this setting. The resulting finite-time bound reveals that the asymmetric critic eliminates error terms arising from aliasing in the agent state.
+
+# 1. Introduction
+
+Reinforcement learning (RL) is an appealing framework for solving decision making problems, notably because it makes very few assumptions about the problem at hand. In its purest form, the promise of an RL algorithm is to learn an optimal behavior from interaction with an environment whose dynamics are unknown. More formally, an RL algorithm aims to learn a policy – which is defined as a mapping from observations to actions – from interaction samples, in order to maximize a reward signal. While RL has obtained empirical successes for a plethora of challenging problems ranging from games to robotics (Mnih et al., 2015; Schrittwieser et al., 2020; Levine et al., 2015; Akkaya et al., 2019), most of these achievements have assumed full state observability. A more realistic assumption is partial state observability, where only a partial observation of the state of the environment is available for taking actions. In this setting, the optimal action generally depends on the complete history of past observations and actions. Traditional RL approaches have thus been adapted by considering history-dependent policies, usually with a recurrent neural network to process histories (Bakker, 2001; Wierstra et al., 2007; Hausknecht & Stone, 2015; Heess et al., 2015; Zhang et al., 2016; Zhu et al., 2017). Given the difficulty of learning effective history-dependent policies, various auxiliary representation learning objectives have been proposed to compress the history into useful representations (Igl et al., 2018; Buesing et al., 2018; Guo et al., 2018; Gregor et al., 2019; Han et al., 2019; Guo et al., 2020; Lee et al., 2020; Subramanian et al., 2022; Ni et al., 2024). Such methods usually seek to learn history representations that encode the belief, defined as the posterior distribution over the states given the history, which is a sufficient statistic of the history for optimal control.
+
+While these methods are theoretically able to learn optimal history-dependent policies, they usually learn solely from the partial state observations, which can be restrictive. Indeed, assuming the same partial observability at training time and execution time can be too pessimistic for many environments, notably for those that are simulated. This motivated the asymmetric learning paradigm, where additional state information available at training time is leveraged during the process of learning a history-dependent policy. Although the optimal policies obtained by asymmetric learning are theoretically equivalent to those learned by symmetric learning, the promise of asymmetric learning is to improve the convergence speed. Early approaches proposed to imitate a privileged policy conditioned on the state (Choudhury et al., 2018), or to use an asymmetric critic conditioned on the state (Pinto et al., 2018). These heuristic methods initially lacked a theoretical framework, and a recent line of work has focused on proposing theoretically grounded asymmetric learning objectives. First, imitation learning of a privileged policy was known to be suboptimal, and it was addressed by constraining the privileged policy so that its imitation results in an optimal policy for the partially observable environment (Warrington et al., 2021). Similarly, asymmetric actor-critic approaches were proven to provide biased gradients, and an unbiased actor-critic approach was proposed by introducing the history-state value function (Baisero & Amato, 2022). In model-based RL, several works proposed world model objectives that are proved to provide sufficient statistics of the history, by leveraging
+
+the state (Avalos et al., 2024) or arbitrary state information (Lambrechts et al., 2024). Finally, asymmetric representation learning approaches were proposed to learn sufficient statistics from state samples (Wang et al., 2023; Sinha & Mahajan, 2023). It is worth noting that many recent successful applications of RL have greatly benefited from asymmetric learning, usually through an asymmetric critic (Degrave et al., 2022; Kaufmann et al., 2023; Vasco et al., 2024).
+
+Despite these methods being theoretically grounded, in the sense that policies satisfying these objectives are optimal policies, they still lack a theoretical justification for their potential benefit. In particular, there is no theoretical justification for the improved convergence speed of asymmetric learning. In this work, we propose such a justification for an asymmetric actor-critic algorithm, using agent-state policies and linear function approximators. Agent-state policies rely on an internal state, which is updated recurrently based on successive actions and observations, from which the next action is selected. This agent state can introduce aliasing, a phenomenon in which an agent state may correspond to two different beliefs. Our argument relies on the comparison of two analogous finite-time bounds: one for a symmetric natural actor-critic algorithm (Cayci et al., 2024), and its adaptation to the asymmetric setting that we derive in this paper. This comparison reveals that asymmetric learning eliminates error terms arising from aliasing in the agent state in symmetric learning. These aliasing terms are given by the difference between the true belief (i.e., the posterior distribution over the states given the history) and the approximate belief (i.e., the posterior distribution over the states given the agent state). This suggests that asymmetric learning may be particularly useful when aliasing is high.
+
+A recent related work proposed a model-based asymmetric actor-critic algorithm relying on belief approximation, and proved its sample efficiency (Cai et al., 2024). It also considered agent-state policies, and studied the finite-time performance by providing a probably approximately correct (PAC) bound, instead of an expectation bound as here. While the algorithm was restricted to finite horizon and discrete spaces, notably for implementing count-based exploration strategies, it tackled the online exploration setting and its performance bound did not present a concentrability coefficient. This related analysis thus provides a promising framework for future works in a more challenging setting. However, it did not study the existing asymmetric actor-critic algorithm, and did not provide a direct comparison with symmetric learning. In contrast, we focus on providing comparable bounds for the existing model-free asymmetric actor-critic algorithm and its symmetric counterpart.
+
+In Section 2, we formalize the environments, policies, and Q-functions that are considered. In Section 3, we introduce the asymmetric and symmetric actor-critic algorithms that
+
+are studied. In Section 4, we provide the finite-time bounds for the asymmetric and symmetric actor-critic algorithms. Finally, in Section 5, we conclude by summarizing the contributions and providing avenues for future works.
+
+# 2. Background
+
+In Subsection 2.1, we introduce the decision processes and agent-state policies that are considered. Then, we introduce the asymmetric and symmetric Q-function for such policies, in Subsection 2.2 and Subsection 2.3, respectively.
+
+# 2.1. Partially Observable Markov Decision Process
+
+A partially observable Markov decision process (POMDP) is a tuple $\mathcal{P} = (\mathcal{S},\mathcal{A},\mathcal{O},P,T,R,O,\gamma)$ , with discrete state space $\mathcal{S}$ , discrete action space $\mathcal{A}$ , and discrete observation space $\mathcal{O}$ . The initial state distribution $P$ gives the probability $P(s_0)$ of $s_0\in \mathcal{S}$ being the initial state of the decision process. The dynamics are described by the transition distribution $T$ that gives the probability $T(s_{t + 1}|s_t,a_t)$ of $s_{t + 1}\in \mathcal{S}$ being the state resulting from action $a_{t}\in \mathcal{A}$ in state $s_t\in \mathcal{S}$ . The reward function $R$ gives the immediate reward $r_t = R(s_t,a_t,s_{t + 1}) \in [0,1]$ resulting from this transition. The observation distribution $O$ gives the probability $O(o_{t}|s_{t})$ of getting observation $o_{t}\in \mathcal{O}$ in state $s_t\in \mathcal{S}$ . Finally, the discount factor $\gamma \in [0,1)$ weights the relative importance of future rewards. Taking a sequence of $t$ actions in the POMDP conditions its execution and provides the history $h_t = (o_0,a_0,\dots,o_t)\in \mathcal{H}$ , where $\mathcal{H}$ is the set of histories of arbitrary length. In general, the optimal policy in a POMDP depends on the complete history.
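For concreteness, here is a minimal instantiation of such a tuple in code (a hand-made two-state example; all names and numbers are illustrative, not taken from the paper), together with a sampled history $h_t$ under a uniform policy:

```python
import numpy as np

rng = np.random.default_rng(0)
# A two-state toy POMDP P = (S, A, O, P0, T, R, O_dist, gamma); every
# number below is illustrative.
P0 = np.array([0.8, 0.2])                      # initial state distribution P
T = np.array([[[0.9, 0.1], [0.2, 0.8]],        # T[s, a] = P(s' | s, a)
              [[0.5, 0.5], [0.1, 0.9]]])
O_dist = np.array([[0.85, 0.15], [0.3, 0.7]])  # O_dist[s] = O(o | s)
gamma = 0.95

def R(s, a, s2):
    return 1.0 if s2 == 1 else 0.0             # rewards in [0, 1]

# sample a history h_t = (o_0, a_0, o_1, ..., o_t) under a uniform policy
s = int(rng.choice(2, p=P0))
h = [int(rng.choice(2, p=O_dist[s]))]
for _ in range(3):
    a = int(rng.choice(2))                     # uniform action choice
    s2 = int(rng.choice(2, p=T[s, a]))
    r = R(s, a, s2)                            # reward of the transition
    h += [a, int(rng.choice(2, p=O_dist[s2]))]
    s = s2
```

Only the observations and actions in `h` are visible to the agent; the state trajectory that generated them stays hidden, which is what motivates the history-dependent policies discussed next.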
+
+However, in practice it is infeasible to learn a policy conditioned on the full history, since the latter grows unboundedly with time. We consider an agent-state policy $\pi \in \Pi_{\mathcal{M}}$ that uses an agent-state process $\mathcal{M} = (\mathcal{Z}, U)$ , in order to take actions (Dong et al., 2022; Sinha & Mahajan, 2024). More formally, we consider a discrete agent state space $\mathcal{Z}$ , and an update distribution $U$ that gives the probability $U(z_{t+1} | z_t, a_t, o_{t+1})$ of $z_{t+1} \in \mathcal{Z}$ being the state resulting from action $a_t \in \mathcal{A}$ and observation $o_{t+1} \in \mathcal{O}$ in agent state $z_t \in \mathcal{Z}$ . Note that the update distribution $U$ also describes the initial agent state distribution with $z_{-1} \notin \mathcal{Z}$ the null agent state and $a_{-1} \notin \mathcal{A}$ the null action. Some examples of agent states that are often used are a sliding window of past observations, or a belief filter. Aliasing may occur when the agent state does not summarize all information from the history about the state of the environment, see Appendix A for an example. Given the agent state $z_t$ , the policy $\pi$ samples actions according to $a_t \sim \pi(\cdot | z_t)$ . An agent-state policy $\pi^* \in \Pi_{\mathcal{M}}$ is said to be optimal for an agent-state process $\mathcal{M}$ if it maximizes the expected discounted sum of rewards: $\pi^* \in \arg \max_{\pi \in \Pi_{\mathcal{M}}} J(\pi)$ with $J(\pi) = \mathbb{E}^{\pi}[\sum_{t=0}^{\infty} \gamma^t R_t]$ .
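As a minimal illustration of an agent-state process $\mathcal{M} = (\mathcal{Z}, U)$ , the sketch below implements the sliding-window example mentioned above as a deterministic update $U$ ; the class name and window size are illustrative:

```python
# A minimal deterministic agent-state process M = (Z, U): the agent state is
# a sliding window over the last k (action, observation) pairs, one of the
# standard examples mentioned in the text. Class name and k are illustrative.
class SlidingWindowState:
    def __init__(self, k):
        self.k = k

    def initial(self, o0):
        # plays the role of U(z_0 | z_{-1}, a_{-1}, o_0) with the null action
        return ((None, o0),)

    def update(self, z, a, o):
        # U(z_{t+1} | z_t, a_t, o_{t+1}): append the new pair, keep the last k
        return (z + ((a, o),))[-self.k:]

proc = SlidingWindowState(k=2)
z = proc.initial("o0")
z = proc.update(z, "a0", "o1")
z = proc.update(z, "a1", "o2")  # the (None, "o0") entry has been forgotten
```

A window of length $k$ generally aliases histories that differ before the last $k$ steps: two such histories map to the same agent state even when they induce different beliefs.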
+
+In the following, we denote by $S_{t}, O_{t}, Z_{t}, A_{t}$ and $R_{t}$ the random variables induced by the POMDP $\mathcal{P}$ . Given a POMDP $\mathcal{P}$ and an agent-state process $\mathcal{M}$ , the initial environment-agent state distribution $P$ is given by,
+
+$$
+P (s _ {0}, z _ {0}) = P (s _ {0}) \sum_ {o _ {0} \in \mathcal {O}} O (o _ {0} | s _ {0}) U (z _ {0} | z _ {- 1}, a _ {- 1}, o _ {0}). (1)
+$$
+
+Furthermore, given an agent-state policy $\pi \in \Pi_{\mathcal{M}}$ , we define the discounted visitation distribution as,
+
+$$
+d ^ {\pi} (s, z) = (1 - \gamma) \sum_ {s _ {0}, z _ {0}} P \left(s _ {0}, z _ {0}\right) \sum_ {t = 0} ^ {\infty} \gamma^ {t} \Pr \left(S _ {t} = s, Z _ {t} = z \mid S _ {0} = s _ {0}, Z _ {0} = z _ {0}\right). \tag {2}
+$$
+
+Finally, we define the visitation distribution $m$ steps from the discounted visitation distribution as,
+
+$$
+\begin{array}{l} d _ {m} ^ {\pi} (s, z) = \sum_ {s _ {0}, z _ {0}} d ^ {\pi} \left(s _ {0}, z _ {0}\right) \tag {3} \\ \times \Pr (S _ {m} = s, Z _ {m} = z | S _ {0} = s _ {0}, Z _ {0} = z _ {0}). \\ \end{array}
+$$
+
+In the following, we define the various value functions for the policies that we defined. Note that we use calligraphic letters $\mathcal{Q}^{\pi}$ , $\mathcal{V}^{\pi}$ and $\mathcal{A}^{\pi}$ for the asymmetric functions, and regular letters $Q^{\pi}$ , $V^{\pi}$ and $A^{\pi}$ for the symmetric ones.
+
+# 2.2. Asymmetric Q-function
+
+Similarly to the asymmetric Q-function of Baisero & Amato (2022), which is conditioned on $(s,h,a)$ , we define an asymmetric Q-function that we condition on $(s,z,a)$ , where $z$ is the agent state resulting from history $h$ . The asymmetric Q-function $\mathcal{Q}^{\pi}$ of an agent-state policy $\pi \in \Pi_{\mathcal{M}}$ is defined as the expected discounted sum of rewards, starting from environment state $s$ , agent state $z$ , and action $a$ , and using policy $\pi$ afterwards,
+
+$$
+\mathcal {Q} ^ {\pi} (s, z, a) = \mathbb {E} ^ {\pi} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t} R _ {t} \mid S _ {0} = s, Z _ {0} = z, A _ {0} = a \right]. \tag {4}
+$$
+
+The asymmetric value function $\mathcal{V}^{\pi}$ of an agent-state policy $\pi \in \Pi_{\mathcal{M}}$ is defined as $\mathcal{V}^{\pi}(s,z) = \sum_{a\in \mathcal{A}}\pi (a|z)$ $\mathcal{Q}^{\pi}(s,z,a)$ . We also define the asymmetric advantage function $\mathcal{A}^{\pi}(s,z,a) = \mathcal{Q}^{\pi}(s,z,a) - \mathcal{V}^{\pi}(s,z)$ .
+
+Let us define the $m$ -step asymmetric Bellman operator as,
+
+$$
+\widetilde {\mathcal {Q}} ^ {\pi} (s, z, a) = \mathbb {E} ^ {\pi} \left[ \sum_ {t = 0} ^ {m - 1} \gamma^ {t} R _ {t} + \gamma^ {m} \widetilde {\mathcal {Q}} ^ {\pi} \left(S _ {m}, Z _ {m}, A _ {m}\right) \,\middle|\, S _ {0} = s, Z _ {0} = z, A _ {0} = a \right]. \tag {5}
+$$
+
+Since this $m$ -step asymmetric Bellman operator is $\gamma^m$ -contractive, equation (5) has a unique fixed point $\widetilde{\mathcal{Q}}^\pi$ . Notice that, when using an agent-state policy, the environment state and agent state $(S_{t},Z_{t})$ are Markovian. Therefore, it can be shown that the fixed point $\widetilde{\mathcal{Q}}^{\pi}$ is the same as the asymmetric Q-function $\mathcal{Q}^{\pi}$ .
+
+# 2.3. Symmetric Q-function
+
+The symmetric Q-function $Q^{\pi}$ of an agent-state policy $\pi \in \Pi_{\mathcal{M}}$ in a POMDP $\mathcal{P}$ is defined as the expected discounted sum of rewards, starting from agent state $z$ and action $a$ , and using policy $\pi$ afterwards,
+
+$$
+Q ^ {\pi} (z, a) = \mathbb {E} ^ {\pi} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t} R _ {t} \,\middle|\, Z _ {0} = z, A _ {0} = a \right]. \tag {6}
+$$
+
+The symmetric value function $V^{\pi}$ of an agent-state policy $\pi \in \Pi_{\mathcal{M}}$ is defined as $V^{\pi}(z) = \sum_{a \in \mathcal{A}} \pi(a|z) Q^{\pi}(z, a)$ . We also define the symmetric advantage function $A^{\pi}(z, a) = Q^{\pi}(z, a) - V^{\pi}(z)$ .
+
+Let us define the $m$ -step symmetric Bellman operator as,
+
+$$
+\widetilde {Q} ^ {\pi} (z, a) = \mathbb {E} ^ {\pi} \left[ \sum_ {t = 0} ^ {m - 1} \gamma^ {t} R _ {t} + \gamma^ {m} \widetilde {Q} ^ {\pi} \left(Z _ {m}, A _ {m}\right) \,\middle|\, Z _ {0} = z, A _ {0} = a \right]. \tag {7}
+$$
+
+It can be verified that the $m$ -step symmetric Bellman operator is $\gamma^m$ -contractive. Therefore, equation (7) has a unique fixed point $\widetilde{Q}^\pi$ . However, because the agent state is not necessarily Markovian, in general $Q^\pi \neq \widetilde{Q}^\pi$ .
+
+# 3. Natural Actor-Critic Algorithms
+
+In this section, we present the asymmetric and symmetric natural actor-critic algorithms, which make use of an actor, or policy, and a critic, or Q-function. The asymmetric variant will use an asymmetric critic, learned using asymmetric temporal difference learning, while the symmetric variant will use a symmetric critic, learned using symmetric temporal difference learning. These temporal difference learning algorithms are presented in Subsection 3.1 and Subsection 3.2, respectively. Then, Subsection 3.3 presents the complete natural actor-critic algorithm that uses a temporal difference learning algorithm as a subroutine.
+
+For any Euclidean space $\mathcal{X}$ , let $\mathcal{B}_2(0,B)$ be the $\ell_2$ -ball centered at the origin with radius $B > 0$ , and let $\Gamma_{\mathcal{C}}\colon \mathcal{X}\to \mathcal{C}$ be a projection operator onto the closed and convex set $\mathcal{C}\subseteq \mathcal{X}$ in $\ell_2$ -norm: $\Gamma_{\mathcal{C}}(x)\in \arg \min_{c\in \mathcal{C}}\| c - x\| _2^2\subseteq \mathcal{C},\forall x\in \mathcal{X}$ . Finally, let us define the $\mu$ -weighted $\ell_2$ -norm, for any probability measure $\mu \in \Delta (\mathcal{X})$ , as
+
+$$
+\| f \| _ {\mu} = \sqrt {\sum_ {x \in \mathcal {X}} \mu (x) | f (x) | ^ {2}}. \tag {8}
+$$
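For a discrete space, Eq. (8) is a plain weighted Euclidean norm; a short check with illustrative values:

```python
import numpy as np

# The mu-weighted l2-norm of Eq. (8) on a small discrete space X with
# three elements; the weights and function values are illustrative.
mu = np.array([0.5, 0.3, 0.2])        # a probability measure on X
f = np.array([1.0, -2.0, 3.0])        # an arbitrary function on X
norm = np.sqrt(np.sum(mu * np.abs(f) ** 2))
# 0.5*1 + 0.3*4 + 0.2*9 = 3.5, so norm = sqrt(3.5)
```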
+
+In the algorithms, we implicitly assume that we are able to sample directly from the discounted visitation measure $d^{\pi}$ . When this is unrealistic, it is still possible to sample from $d^{\pi}$ by sampling an initial timestep $t_0 \sim \mathrm{Geom}(1 - \gamma)$ from a geometric distribution with success rate $1 - \gamma$ , and then taking $t_0 - 1$ actions in the POMDP. The resulting sample $(s_{t_0}, z_{t_0})$ follows the distribution $d^{\pi}$ .
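The geometric-timestep trick can be sketched as follows (shown on a plain two-state Markov chain rather than a full POMDP for brevity; transition numbers are illustrative). Drawing $t_0 \sim \mathrm{Geom}(1-\gamma)$ and rolling the chain forward $t_0 - 1$ steps produces samples whose empirical distribution matches the discounted visitation distribution:

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.9
P0 = np.array([1.0, 0.0])                  # initial distribution
T = np.array([[0.5, 0.5], [0.1, 0.9]])     # two-state transition matrix

def sample_d_pi():
    t0 = rng.geometric(1.0 - gamma)        # t0 in {1, 2, ...}
    s = int(rng.choice(2, p=P0))
    for _ in range(t0 - 1):                # take t0 - 1 steps
        s = int(rng.choice(2, p=T[s]))
    return s

# closed form: d = (1 - gamma) * P0 @ sum_t (gamma T)^t
d_exact = (1.0 - gamma) * P0 @ np.linalg.inv(np.eye(2) - gamma * T)
freq = np.bincount([sample_d_pi() for _ in range(20000)], minlength=2) / 20000.0
```

The empirical frequencies `freq` agree with the geometric series $(1-\gamma)P_0\sum_t (\gamma T)^t$ up to Monte Carlo noise, since stopping after $t_0 - 1$ steps puts weight $(1-\gamma)\gamma^t$ on timestep $t$ .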
+
+# 3.1. Asymmetric Critic
+
+Suppose we are given features $\phi \colon \mathcal{S}\times \mathcal{Z}\times \mathcal{A}\to \mathbb{R}^{d_{\phi}}$ . Without loss of generality, we assume $\sup_{s,z,a}\| \phi (s,z,a)\| _2\leq 1$ . Given a weight vector $\beta \in \mathbb{R}^{d_{\phi}}$ , let $\widehat{\mathcal{Q}}_{\beta}^{\pi}$ denote the linear approximation of the asymmetric Q-function $\mathcal{Q}^{\pi}$ that uses features $\phi$ with weight $\beta$ ,
+
+$$
+\widehat {\mathcal {Q}} _ {\beta} ^ {\pi} (s, z, a) = \langle \beta , \phi (s, z, a) \rangle . \tag {9}
+$$
+
+Given an arbitrary projection radius $B > 0$ , we define the hypothesis space as,
+
+$$
+\mathcal {F} _ {\phi} ^ {B} = \left\{\left(s, z, a\right) \mapsto \left\langle \beta , \phi (s, z, a) \right\rangle : \beta \in \mathcal {B} _ {2} (0, B) \right\}. \tag {10}
+$$
+
+We denote the optimal parameter of the asymmetric critic approximation by $\beta_{*}^{\pi}\in \mathrm{argmin}_{\beta \in \mathcal{B}_{2}(0,B)}$ $\| \langle \beta ,\phi (\cdot)\rangle -\mathcal{Q}^{\pi}(\cdot)\|_{d},$ and denote the corresponding approximation by $\widehat{\mathcal{Q}}_*^\pi (\cdot) = \langle \beta_*^\pi ,\phi (\cdot)\rangle$ . The corresponding error is,
+
+$$
+\varepsilon_ {\text {a p p}} = \min _ {f \in \mathcal {F} _ {\phi} ^ {B}} \left\| f - \mathcal {Q} ^ {\pi} \right\| _ {d} = \left\| \widehat {\mathcal {Q}} _ {*} ^ {\pi} - \mathcal {Q} ^ {\pi} \right\| _ {d}, \tag {11}
+$$
+
+with $d(s,z,a) = d^{\pi}(s,z)\pi (a|z)$ the sampling distribution.
+
+In Algorithm 1, we present the $m$ -step temporal difference learning algorithm for approximating the asymmetric Q-function $\mathcal{Q}^{\pi}$ of an arbitrary agent-state policy $\pi \in \Pi_{\mathcal{M}}$ . At each step $k$ , the algorithm obtains one sample $(s_{k,0},z_{k,0}) \sim d^{\pi}$ from the discounted visitation distribution. Then, $m$ actions are selected according to policy $\pi$ to provide samples $(a_{k,t},r_{k,t},s_{k,t+1},o_{k,t+1},z_{k,t+1})$ for $0 \leq t < m$ . Next, the temporal difference $\delta_k$ and semi-gradient $g_k$ are computed, based on a last action $a_{k,m} \sim \pi(\cdot|z_{k,m})$ ,
+
+$$
+\begin{array}{l} \delta_ {k} = \sum_ {i = 0} ^ {m - 1} \gamma^ {i} r _ {k, i} + \gamma^ {m} \widehat {\mathcal {Q}} _ {\beta_ {k}} ^ {\pi} \left(s _ {k, m}, z _ {k, m}, a _ {k, m}\right) - \widehat {\mathcal {Q}} _ {\beta_ {k}} ^ {\pi} \left(s _ {k, 0}, z _ {k, 0}, a _ {k, 0}\right), \quad (12) \\ g _ {k} = \delta_ {k} \nabla_ {\beta} \widehat {\mathcal {Q}} _ {\beta_ {k}} ^ {\pi} \left(s _ {k, 0}, z _ {k, 0}, a _ {k, 0}\right). \quad (13) \end{array}
+$$
+
+Then, the semi-gradient update is performed with $\beta_{k + 1}^{-} = \beta_{k} + \alpha g_{k}$ and the parameters are projected onto the ball of radius $B$ : $\beta_{k + 1} = \Gamma_{\mathcal{B}_2(0,B)}(\beta_{k + 1}^{-})$ . At the end, the algorithm computes the average parameter $\bar{\beta} = \frac{1}{K}\sum_{k = 0}^{K - 1}\beta_{k}$ and returns the average approximation $\overline{\mathcal{Q}}^{\pi} = \widehat{\mathcal{Q}}_{\bar{\beta}}^{\pi}$ .
+
+# Algorithm 1 $m$ -step temporal difference learning algorithm
+
+input: policy $\pi \in \Pi_{\mathcal{M}}$ , bootstrap timestep $m$ , step size $\alpha$ , number of updates $K$ , projection radius $B$ .
+
+for $k = 0\ldots K - 1$ do
+
+Initialize $(s_{k,0},z_{k,0})\sim d^{\pi}$
+
+for $i = 0\ldots m - 1$ do
+
+Select action $a_{k,i}\sim \pi (\cdot |z_{k,i})$
+
+Get environment state $s_{k,i+1} \sim T(\cdot | s_{k,i}, a_{k,i})$ .
+
+Get reward $r_{k,i} = R(s_{k,i},a_{k,i},s_{k,i + 1})$
+
+Get observation $o_{k,i+1} \sim O(\cdot | s_{k,i+1})$ .
+
+Update agent state $z_{k,i + 1}\sim U(\cdot |z_{k,i},a_{k,i},o_{k,i + 1})$
+
+end for
+
+Sample last action $a_{k,m} \sim \pi(\cdot | z_{k,m})$ .
+
+Compute semi-gradient $g_{k}$ according to equation (13) or equation (17).
+
+Update $\beta_{k + 1} = \Gamma_{\mathcal{B}_2(0,B)}(\beta_k + \alpha g_k)$
+
+end for
+
+return: average estimate $\overline{Q}^{\pi}(\cdot) = \widehat{Q}_{\bar{\beta}}^{\pi}(\cdot) = \langle \bar{\beta},\phi (\cdot)\rangle$ or $\overline{Q}^{\pi}(\cdot) = \widehat{Q}_{\bar{\beta}}^{\pi}(\cdot) = \langle \bar{\beta},\chi (\cdot)\rangle$ with $\bar{\beta} = \frac{1}{K}\sum_{k = 0}^{K - 1}\beta_{k}$ .
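As a concrete illustration, the loop of Algorithm 1 can be sketched with plain NumPy. This is a minimal sketch, not the authors' implementation: the callbacks `sample_init`, `policy`, `step`, and `phi` are hypothetical interfaces standing in for the discounted visitation distribution $d^{\pi}$, the policy $\pi$, the environment dynamics, and the feature map $\phi$.

```python
import numpy as np

def m_step_td(sample_init, policy, step, phi, d_phi, m, K, B, gamma):
    """Sketch of the m-step TD loop of Algorithm 1 with a linear critic."""
    alpha = 1.0 / np.sqrt(K)              # step size used in Theorem 3
    beta = np.zeros(d_phi)
    beta_sum = np.zeros(d_phi)
    for _ in range(K):
        s, z = sample_init()              # (s, z) ~ d^pi
        a = policy(z)
        s0, z0, a0 = s, z, a
        ret = 0.0
        for i in range(m):                # roll the policy out for m steps
            r, s, z = step(s, z, a)
            ret += gamma**i * r
            a = policy(z)                 # last iteration gives a_{k,m}
        # temporal difference (12) and semi-gradient update (13)
        delta = ret + gamma**m * (beta @ phi(s, z, a)) - beta @ phi(s0, z0, a0)
        beta = beta + alpha * delta * phi(s0, z0, a0)
        norm = np.linalg.norm(beta)       # projection onto the ball B_2(0, B)
        if norm > B:
            beta = beta * (B / norm)
        beta_sum += beta
    return beta_sum / K                   # averaged parameter beta-bar

# Toy check: a single state with constant reward 1, for which the true
# Q-value is 1 / (1 - gamma) = 10; the averaged critic should be close.
beta_bar = m_step_td(
    sample_init=lambda: (0, 0),
    policy=lambda z: 0,
    step=lambda s, z, a: (1.0, 0, 0),
    phi=lambda s, z, a: np.array([1.0]),
    d_phi=1, m=3, K=10_000, B=100.0, gamma=0.9,
)
```

The averaged parameter approaches the fixed point $1/(1-\gamma)$ up to a warm-up deficit, consistent with the $K^{-1/4}$ rate of Theorem 3.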
+
+# 3.2. Symmetric Critic
+
+Similarly, we suppose that we are given features $\chi \colon \mathcal{Z} \times \mathcal{A} \to \mathbb{R}^{d_{\chi}}$ . Without loss of generality, we assume $\sup_{z,a} \| \chi(z,a) \|_2 \leq 1$ . Given a weight vector $\beta \in \mathbb{R}^{d_{\chi}}$ , let $\widehat{Q}_{\beta}^{\pi}$ denote the linear approximation of the symmetric Q-function $Q^{\pi}$ that uses features $\chi$ with weight $\beta$ ,
+
+$$
+\widehat {Q} _ {\beta} ^ {\pi} (z, a) = \left\langle \beta , \chi (z, a) \right\rangle . \tag {14}
+$$
+
+The corresponding hypothesis space for an arbitrary projection radius $B > 0$ is denoted with $\mathcal{F}_{\chi}^{B}$ . The optimal parameter is also denoted by $\beta_{*}^{\pi} \in \mathrm{argmin}_{\beta \in \mathcal{B}_{2}(0,B)} \| \langle \beta, \chi(\cdot) \rangle - Q^{\pi}(\cdot) \|_{d}$ , the corresponding optimal approximation is $\widehat{Q}_{*}^{\pi} = \langle \beta_{*}^{\pi}, \chi(\cdot) \rangle$ , and the corresponding error is,
+
+$$
+\varepsilon_{\mathrm{app}} = \min_{f \in \mathcal{F}_{\chi}^{B}} \left\| f - Q^{\pi} \right\|_{d} = \left\| \widehat{Q}_{*}^{\pi} - Q^{\pi} \right\|_{d}, \tag{15}
+$$
+
+with $d(z, a) = \sum_{s \in S} d^{\pi}(s, z) \pi(a|z)$ the sampling distribution.
+
+Algorithm 1 also presents the $m$ -step temporal difference learning algorithm for approximating the symmetric Q-function. The latter is identical to that of the asymmetric Q-function except that states are not exploited, such that the temporal difference $\delta_{k}$ and semi-gradient $g_{k}$ are given by,
+
+$$
+\delta_{k} = \sum_{i = 0}^{m - 1} \gamma^{i} r_{k, i} + \gamma^{m} \widehat{Q}_{\beta_{k}}^{\pi}\left(z_{k, m}, a_{k, m}\right) - \widehat{Q}_{\beta_{k}}^{\pi}\left(z_{k, 0}, a_{k, 0}\right), \tag{16}
+$$
+
+$$
+g _ {k} = \delta_ {k} \nabla_ {\beta} \widehat {Q} _ {\beta_ {k}} ^ {\pi} \left(z _ {k, 0}, a _ {k, 0}\right). \tag {17}
+$$
+
+At the end, the algorithm returns the average symmetric approximation $\overline{Q}^{\pi} = \widehat{Q}_{\bar{\beta}}^{\pi}$ . Note that this symmetric critic approximation and temporal difference learning algorithm correspond to those proposed by Cayci et al. (2024).
+
+# 3.3. Natural Actor-Critic Algorithms
+
+For both the asymmetric and symmetric actor-critic algorithms, we consider a log-linear agent-state policy $\pi_{\theta} \in \Pi_{\mathcal{M}}$ . More precisely, the policy uses features $\psi \colon \mathcal{Z} \times \mathcal{A} \to \mathbb{R}^{d_{\psi}}$ with $\sup_{z,a} \| \psi(z,a) \|_2 \leq 1$ without loss of generality, and a softmax readout,
+
+$$
+\pi_ {\theta} \left(a _ {t} \mid z _ {t}\right) = \frac {\exp \left(\langle \theta , \psi \left(z _ {t} , a _ {t}\right) \rangle\right)}{\sum_ {a \in \mathcal {A}} \exp \left(\langle \theta , \psi \left(z _ {t} , a\right) \rangle\right)}. \tag {18}
+$$
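For concreteness, the softmax readout (18) and its score function, which later serves as the feature map for the natural policy gradient, can be sketched as follows; the feature map `psi` is an assumed input.

```python
import numpy as np

def action_probs(theta, psi, z, actions):
    """Softmax policy probabilities pi_theta(.|z) from equation (18)."""
    logits = np.array([theta @ psi(z, a) for a in actions])
    logits -= logits.max()                     # numerical stability
    exps = np.exp(logits)
    return exps / exps.sum()

def score(theta, psi, z, a_idx, actions):
    """Score grad_theta log pi_theta(a|z) = psi(z,a) - E_pi[psi(z,.)]."""
    probs = action_probs(theta, psi, z, actions)
    feats = np.stack([psi(z, b) for b in actions])
    return feats[a_idx] - probs @ feats

# With theta = 0, the policy is uniform and the expected score is zero.
actions = [0, 1]
psi = lambda z, a: np.eye(2)[a]
theta = np.zeros(2)
probs = action_probs(theta, psi, 0, actions)
avg_score = sum(probs[i] * score(theta, psi, 0, i, actions) for i in range(2))
```

The zero initialization $\theta_0 = 0$ used in Algorithm 2 thus corresponds to the maximum-entropy (uniform) policy.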
+
+In this work, we consider natural policy gradients, which are less sensitive to the policy parametrization (Kakade, 2001). Instead of computing the policy gradient in the original parameter space, the idea is to compute it on a statistical manifold defined by the expected Fisher information metric. The natural policy gradient is thus given by the standard policy gradient preconditioned by the pseudoinverse of the Fisher information matrix. Natural policy gradients are at the core of many effective modern policy-gradient methods (Schulman et al., 2015).
+
+The natural policy gradient of policy $\pi_{\theta} \in \Pi_{\mathcal{M}}$ is defined as follows (Kakade, 2001),
+
+$$
+w _ {*} ^ {\pi_ {\theta}} = (1 - \gamma) F _ {\pi_ {\theta}} ^ {\dagger} \nabla_ {\theta} J (\pi_ {\theta}), \tag {19}
+$$
+
+where $F_{\pi_\theta}^\dagger$ is the pseudoinverse of the Fisher information matrix, defined as the expected outer product of the score of the policy,
+
+$$
+F _ {\pi_ {\theta}} = \mathbb {E} ^ {d ^ {\pi_ {\theta}}} [ \nabla_ {\theta} \log \pi_ {\theta} (A | Z) \otimes \nabla_ {\theta} \log \pi_ {\theta} (A | Z) ]. \tag {20}
+$$
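A Monte Carlo version of definitions (19) and (20) can be sketched as follows: the Fisher matrix is estimated as the empirical mean of the outer products of sampled score vectors, and its pseudoinverse is applied to a policy gradient estimate. The arrays `scores` and `pg` are assumed inputs, not part of the paper's algorithm.

```python
import numpy as np

def natural_gradient(scores, pg):
    """Sketch of (19)-(20): w = F^+ g, with F estimated as the empirical
    mean outer product of score vectors sampled from d^{pi_theta}."""
    F = scores.T @ scores / scores.shape[0]   # Monte Carlo Fisher matrix
    return np.linalg.pinv(F) @ pg             # precondition the gradient

# Hypothetical sample of scores: F = 0.5 * I, so F^+ doubles the gradient.
scores = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
w_nat = natural_gradient(scores, np.array([1.0, 2.0]))
```

Note that Algorithm 2 avoids forming $F_{\pi_\theta}$ explicitly by solving the regression problem of Theorem 1 with stochastic gradient descent instead.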
+
+As shown in Theorem 1, the natural policy gradient $w_{*}^{\pi_{\theta}}$ is the minimizer of the asymmetric objective (22).
+
+Theorem 1 (Asymmetric natural policy gradient). For any POMDP $\mathcal{P}$ and any agent-state policy $\pi_{\theta} \in \Pi_{\mathcal{M}}$ , we have,
+
+$$
+w _ {*} ^ {\pi_ {\theta}} = (1 - \gamma) F _ {\pi_ {\theta}} ^ {\dagger} \nabla_ {\theta} J (\pi_ {\theta}) \in \underset {w \in \mathbb {R} ^ {d _ {\psi}}} {\arg \min } \mathcal {L} (w), \tag {21}
+$$
+
+with,
+
+$$
+\mathcal {L} (w) = \mathbb {E} ^ {d ^ {\pi_ {\theta}}} \left[ \left(\left\langle \nabla_ {\theta} \log \pi_ {\theta} (A | Z), w \right\rangle - \mathcal {A} ^ {\pi_ {\theta}} (S, Z, A)\right) ^ {2} \right]. \tag {22}
+$$
+
+The proof is given in Appendix B. In practice, since the asymmetric advantage function is unknown, the algorithm estimates the natural policy gradient by stochastic gradient descent on $\mathcal{L}(w)$ using the approximation $\overline{\mathcal{A}}^{\pi_{\theta}}(S,Z,A) = \overline{\mathcal{Q}}^{\pi_{\theta}}(S,Z,A) - \overline{\mathcal{V}}^{\pi_{\theta}}(S,Z)$ with $\overline{\mathcal{V}}^{\pi_{\theta}}(S,Z) = \sum_{a\in \mathcal{A}}\pi_{\theta}(a|Z)\overline{\mathcal{Q}}^{\pi_{\theta}}(S,Z,a)$ .
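The advantage target used in this loss can be sketched directly from an averaged critic. This is a minimal sketch: `q_bar` stands in for the averaged critic returned by Algorithm 1, and `pi_probs` for the policy probabilities at the current agent state.

```python
import numpy as np

def advantage_from_q(q_bar, pi_probs, s, z, actions):
    """Advantage target A(s,z,.) = Q(s,z,.) - V(s,z), built from an
    averaged critic q_bar and policy probabilities pi_probs."""
    qs = np.array([q_bar(s, z, a) for a in actions])
    v = pi_probs @ qs            # V(s,z) = sum_a pi(a|z) Q(s,z,a)
    return qs - v                # advantage for every action

# Toy critic with Q(s,z,0) = 1 and Q(s,z,1) = 3 under a uniform policy.
adv = advantage_from_q(lambda s, z, a: [1.0, 3.0][a],
                       np.array([0.5, 0.5]), 0, 0, [0, 1])
```

By construction, the advantage is centered: it averages to zero under the policy.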
+
+Our natural actor-critic algorithm generalizes the one of Cayci et al. (2024) to the asymmetric setting and is detailed in Algorithm 2. For each policy gradient step $0 \leq t < T$ , the natural policy gradient $w_{*}^{\pi_{t}}$ is first estimated using $N$ steps of stochastic gradient descent. At each natural policy gradient estimation step $0 \leq n < N$ , the algorithm samples an initial state $(s_{t,n}, z_{t,n}) \sim d^{\pi_{t}}$ from the discounted distribution $d^{\pi_{t}}$ and an action $a_{t,n} \sim \pi_{t}(\cdot | z_{t,n})$ according to the policy $\pi_{t} = \pi_{\theta_{t}}$ . Then, the gradient $v_{t,n}$ of the natural policy gradient estimate $w_{t,n}$ is computed with,
+
+$$
+v_{t, n} = \nabla_{w}\left(\left\langle \nabla_{\theta} \log \pi_{\theta}\left(a_{t, n} \mid z_{t, n}\right), w_{t, n}\right\rangle - \overline{\mathcal{A}}^{\pi_{\theta}}\left(s_{t, n}, z_{t, n}, a_{t, n}\right)\right)^{2}. \tag{23}
+$$
+
+The gradient step is performed with $w_{t,n+1}^{-} = w_{t,n} - \zeta v_{t,n}$ and the parameters are projected onto the ball of radius $B$ : $w_{t,n+1} = \Gamma_{\mathcal{B}_2(0,B)}(w_{t,n+1}^{-})$ . Finally, the algorithm computes the average parameter $\bar{w}_t = \frac{1}{N}\sum_{n=0}^{N-1}w_{t,n}$ and performs the policy gradient step: $\theta_{t+1} = \theta_t + \eta \bar{w}_t$ . After all policy gradient steps, the final policy is returned.
+
+# Algorithm 2 Natural actor-critic algorithm
+
+input: number of updates $T$ , number of steps $N$ , step sizes $\zeta, \eta$ , projection radius $B$ .
+
+Initialize $\theta_0 = 0$
+
+for $t = 0\ldots T - 1$ do
+
+Obtain $\overline{\mathcal{Q}}^{\pi_t}$ or $\overline{Q}^{\pi_t}$ using Algorithm 1.
+
+Initialize $w_{t,0} = 0$
+
+for $n = 0\ldots N - 1$ do
+
+Initialize $(s_{t,n},z_{t,n})\sim d^{\pi_t}$
+
+Sample $a_{t,n}\sim \pi_{\theta_t}(\cdot |z_{t,n})$
+
+Compute the gradient $v_{t,n}$ of the policy gradient using equation (23) or equation (26).
+
+Update $w_{t,n + 1}^{-} = w_{t,n} - \zeta v_{t,n}$
+
+Project $w_{t,n + 1} = \Gamma_{\mathcal{B}_2(0,B)}(w_{t,n + 1}^-)$
+
+end for
+
+Update $\theta_{t + 1} = \theta_t + \eta \frac{1}{N}\sum_{n = 0}^{N - 1}w_{t,n}$
+
+end for
+
+return: final policy $\pi_T = \pi_{\theta_T}$
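The inner update of Algorithm 2 (a stochastic gradient step on the regression loss of equation (22), followed by projection) can be sketched as below; `score_vec` and `adv` stand in for the sampled score $\nabla_\theta \log \pi_\theta(a_{t,n}|z_{t,n})$ and advantage estimate.

```python
import numpy as np

def npg_sgd_step(w, score_vec, adv, zeta, B):
    """One inner step of Algorithm 2: SGD on the squared regression loss
    (<score, w> - A)^2, then projection onto the ball B_2(0, B)."""
    v = 2.0 * (score_vec @ w - adv) * score_vec   # gradient of the loss
    w = w - zeta * v
    norm = np.linalg.norm(w)
    if norm > B:
        w = w * (B / norm)
    return w

# One step from w = 0 with score (1, 0), advantage 1 and step size 0.25.
w_next = npg_sgd_step(np.zeros(2), np.array([1.0, 0.0]), 1.0, 0.25, 10.0)
```

Averaging the iterates $w_{t,0},\ldots,w_{t,N-1}$, as in the last line of the outer loop, then yields the natural policy gradient estimate $\bar{w}_t$.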
+
+As shown in Theorem 2, the natural policy gradient $w_{*}^{\pi_{\theta}}$ is also the minimizer of the symmetric objective (25).
+
+Theorem 2 (Symmetric natural policy gradient). For any POMDP $\mathcal{P}$ and any agent-state policy $\pi_{\theta} \in \Pi_{\mathcal{M}}$ , we have,
+
+$$
+w _ {*} ^ {\pi_ {\theta}} = (1 - \gamma) F _ {\pi_ {\theta}} ^ {\dagger} \nabla_ {\theta} J (\pi_ {\theta}) \in \underset {w \in \mathbb {R} ^ {d _ {\psi}}} {\arg \min } L (w), \tag {24}
+$$
+
+with,
+
+$$
+L (w) = \mathbb {E} ^ {d ^ {\pi_ {\theta}}} \left[ \left(\left\langle \nabla_ {\theta} \log \pi_ {\theta} (A | Z), w \right\rangle - A ^ {\pi_ {\theta}} (Z, A)\right) ^ {2} \right]. \tag {25}
+$$
+
+The proof is given in Appendix B. As in the asymmetric case, the symmetric advantage function is unknown, and the algorithm estimates the natural gradient by stochastic gradient descent of equation (25) using the approximation $\overline{A}^{\pi_{\theta}}(Z,A) = \overline{Q}^{\pi_{\theta}}(Z,A) - \overline{V}^{\pi_{\theta}}(Z)$ with $\overline{V}^{\pi_{\theta}} = \sum_{a\in \mathcal{A}}\pi_{\theta}(a|Z)\overline{Q}^{\pi_{\theta}}(Z,a)$ .
+
+Algorithm 2 also presents the symmetric natural actor-critic algorithm, initially proposed by Cayci et al. (2024). The latter is similar to the asymmetric algorithm except that it uses the symmetric advantage function, such that the gradient of the policy gradient is given by,
+
+$$
+v_{t, n} = \nabla_{w}\left(\left\langle \nabla_{\theta} \log \pi_{\theta}\left(a_{t, n} \mid z_{t, n}\right), w_{t, n}\right\rangle - \overline{A}^{\pi_{\theta}}\left(z_{t, n}, a_{t, n}\right)\right)^{2}. \tag{26}
+$$
+
+While Theorem 1 and Theorem 2 show that $w_{*}^{\pi_{\theta}}$ is the minimizer of both the asymmetric and the symmetric objectives, the next section establishes the benefit of using the asymmetric loss. More precisely, asymmetric learning is shown to improve the estimation of the critic and thus the advantage function, which in turn results in a better estimation of the natural policy gradient.
+
+# 4. Finite-Time Analysis
+
+In this section, we give the finite-time bounds of the previous algorithms in both the asymmetric and symmetric cases. The bounds of the asymmetric and symmetric temporal difference learning algorithms are presented in Subsection 4.1 and Subsection 4.2, respectively. In Subsection 4.3, the bounds of the asymmetric and symmetric natural actor-critic algorithms are given.
+
+We use $\| \mu -\nu \|_{\mathrm{TV}}$ to denote the total variation distance between two probability measures $\mu ,\nu \in \Delta (\mathcal{X})$ over a discrete space $\mathcal{X}$ ,
+
+$$
+\left\| \mu - \nu \right\|_{\mathrm{TV}} = \sup_{A \subseteq \mathcal{X}} | \mu(A) - \nu(A) | \tag{27}
+$$
+
+$$
+= \frac{1}{2} \sum_{x \in \mathcal{X}} | \mu(x) - \nu(x) |. \tag{28}
+$$
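For discrete distributions, identity (28) gives a direct way to compute this distance:

```python
import numpy as np

def tv_distance(mu, nu):
    """Total variation between discrete distributions, via identity (28):
    half the L1 distance between the probability vectors."""
    return 0.5 * np.abs(np.asarray(mu) - np.asarray(nu)).sum()
```

The distance is 0 for identical distributions and 1 for distributions with disjoint support.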
+
+# 4.1. Finite-Time Bound for the Asymmetric Critic
+
+Our main result is to establish the following finite-time bound for the Q-function approximation resulting from the asymmetric temporal difference learning algorithm detailed in Algorithm 1.
+
+Theorem 3 (Finite-time bound for asymmetric $m$ -step temporal difference learning). For any agent-state policy $\pi \in \Pi_{\mathcal{M}}$ , and any $m \in \mathbb{N}$ , we have for Algorithm 1 with $\alpha = \frac{1}{\sqrt{K}}$ and arbitrary $B > 0$ ,
+
+$$
+\sqrt{\mathbb{E}\left[\left\| \mathcal{Q}^{\pi} - \overline{\mathcal{Q}}^{\pi} \right\|_{d}^{2}\right]} \leq \varepsilon_{\mathrm{td}} + \varepsilon_{\mathrm{app}} + \varepsilon_{\mathrm{shift}}, \tag{29}
+$$
+
+where the temporal difference learning, function approximation, and distribution shift terms are given by,
+
+$$
+\varepsilon_{\mathrm{td}} = \sqrt{\frac{4 B^{2} + \left(\frac{1}{1 - \gamma} + 2 B\right)^{2}}{2 \sqrt{K}\left(1 - \gamma^{m}\right)}} \tag{30}
+$$
+
+$$
+\varepsilon_{\mathrm{app}} = \frac{1 + \gamma^{m}}{1 - \gamma^{m}} \min_{f \in \mathcal{F}_{\phi}^{B}} \| f - \mathcal{Q}^{\pi} \|_{d} \tag{31}
+$$
+
+$$
+\varepsilon_{\mathrm{shift}} = \left(B + \frac{1}{1 - \gamma}\right) \sqrt{\frac{2 \gamma^{m}}{1 - \gamma^{m}} \sqrt{\| d_{m} - d \|_{\mathrm{TV}}}}, \tag{32}
+$$
+
+with $d(s,z,a) = d^{\pi}(s,z)\pi (a|z)$ the sampling distribution, and $d_{m}(s,z,a) = d_{m}^{\pi}(s,z)\pi (a|z)$ the bootstrapping distribution.
+
+The proof is given in Appendix C, and adapts the proof of Cayci et al. (2024) to the asymmetric setting. The first term $\varepsilon_{\mathrm{td}}$ is the usual temporal difference error term, decreasing as $K^{-1/4}$ . The second term $\varepsilon_{\mathrm{app}}$ results from the use of linear function approximators. The third term $\varepsilon_{\mathrm{shift}}$ arises from the distribution shift between the sampling distribution $d^{\pi} \otimes \pi$ (i.e., the discounted visitation measure) and the bootstrapping distribution $d_m^{\pi} \otimes \pi$ (i.e., the distribution $m$ steps after the discounted visitation measure). It is a consequence of neither assuming the existence of a stationary distribution nor assuming that samples are drawn from one.
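To get a feel for the bound, the terms of Theorem 3 can be evaluated numerically for given constants. In this sketch, `tv_shift` stands in for the problem-dependent quantity $\|d_m - d\|_{\mathrm{TV}}$, and the approximation term is reported through its prefactor only, since the best linear error is also problem-dependent.

```python
import numpy as np

def bound_terms(K, B, gamma, m, tv_shift):
    """Evaluate the terms of Theorem 3 (equations 30-32); tv_shift stands
    in for the problem-dependent total variation ||d_m - d||_TV."""
    gm = gamma**m
    eps_td = np.sqrt((4 * B**2 + (1 / (1 - gamma) + 2 * B)**2)
                     / (2 * np.sqrt(K) * (1 - gm)))
    app_factor = (1 + gm) / (1 - gm)     # multiplies the best linear error
    eps_shift = (B + 1 / (1 - gamma)) * np.sqrt(
        2 * gm / (1 - gm) * np.sqrt(tv_shift))
    return eps_td, app_factor, eps_shift

# eps_td shrinks as K^{-1/4}; eps_shift vanishes with the distribution shift.
e_td_1e4, app_factor, _ = bound_terms(1e4, 1.0, 0.9, 3, 0.01)
e_td_1e6, _, eps_shift_zero = bound_terms(1e6, 1.0, 0.9, 3, 0.0)
```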
+
+# 4.2. Finite-Time Bound for the Symmetric Critic
+
+Given a history $h_t = (o_0, a_0, \ldots, o_t)$ , the belief is defined as,
+
+$$
+b _ {t} \left(s _ {t} \mid h _ {t}\right) = \Pr \left(S _ {t} = s _ {t} \mid H _ {t} = h _ {t}\right). \tag {33}
+$$
+
+Given an agent state $z_{t}$ , the approximate belief is defined as,
+
+$$
+\hat {b} _ {t} \left(s _ {t} \mid z _ {t}\right) = \Pr \left(S _ {t} = s _ {t} \mid Z _ {t} = z _ {t}\right). \tag {34}
+$$
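For a tabular POMDP, the exact belief (33) can be maintained with the standard recursive Bayes filter, using the transition and observation kernels $T$ and $O$ that appear in Algorithm 1. This is a sketch assuming tabular array inputs, not part of the paper's algorithms.

```python
import numpy as np

def belief_update(b, a, o, T, O):
    """Recursive Bayes filter consistent with definition (33):
    b'(s') proportional to O(o|s') * sum_s T(s'|s,a) b(s)."""
    pred = b @ T[a]            # predictive distribution over next states
    post = pred * O[:, o]      # reweight by the observation likelihood
    return post / post.sum()   # normalize to a probability vector

# Two states, identity transitions, perfectly informative observations:
# observing o = 0 collapses a uniform belief onto state 0.
T = np.array([np.eye(2)])      # T[a] is the (S, S) matrix for action a
O = np.eye(2)                  # O[s', o] = O(o | s')
b_new = belief_update(np.array([0.5, 0.5]), 0, 0, T, O)
```

The approximate belief (34), by contrast, conditions only on the agent state $z_t$; the aliasing term (39) measures how far this conditional can drift from the exact belief.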
+
+We obtain the following finite-time bound for the Q-function approximation resulting from the symmetric temporal difference learning algorithm detailed in Algorithm 1.
+
+Theorem 4 (Finite-time bound for symmetric $m$ -step temporal difference learning (Cayci et al., 2024)). For any agent-state policy $\pi \in \Pi_{\mathcal{M}}$ , and any $m \in \mathbb{N}$ , we have for Algorithm 1 with $\alpha = \frac{1}{\sqrt{K}}$ and arbitrary $B > 0$ ,
+
+$$
+\sqrt {\mathbb {E} \left[ \left\| Q ^ {\pi} - \bar {Q} ^ {\pi} \right\| _ {d} ^ {2} \right]} \leq \varepsilon_ {\mathrm {t d}} + \varepsilon_ {\mathrm {a p p}} + \varepsilon_ {\mathrm {s h i f t}} + \varepsilon_ {\mathrm {a l i a s}}, \tag {35}
+$$
+
+where the temporal difference learning, function approximation, distribution shift, and aliasing terms are given by,
+
+$$
+\varepsilon_{\mathrm{td}} = \sqrt{\frac{4 B^{2} + \left(\frac{1}{1 - \gamma} + 2 B\right)^{2}}{2 \sqrt{K}\left(1 - \gamma^{m}\right)}} \tag{36}
+$$
+
+$$
+\varepsilon_{\mathrm{app}} = \frac{1 + \gamma^{m}}{1 - \gamma^{m}} \min_{f \in \mathcal{F}_{\chi}^{B}} \| f - Q^{\pi} \|_{d} \tag{37}
+$$
+
+$$
+\varepsilon_{\mathrm{shift}} = \left(B + \frac{1}{1 - \gamma}\right) \sqrt{\frac{2 \gamma^{m}}{1 - \gamma^{m}} \sqrt{\| d_{m} - d \|_{\mathrm{TV}}}} \tag{38}
+$$
+
+$$
+\varepsilon_{\mathrm{alias}} = \frac{2}{1 - \gamma} \left\| \mathbb{E}^{\pi}\left[\left. \sum_{k = 0}^{\infty} \gamma^{k m} \left\| \hat{b}_{k m} - b_{k m} \right\|_{\mathrm{TV}} \right| Z_{0} = \cdot \right] \right\|_{d} \tag{39}
+$$
+
+with $d(z,a) = \sum_{s\in \mathcal{S}}d^{\pi}(s,z)\pi (a|z)$ the sampling distribution, and $d_m(z,a) = \sum_{s\in \mathcal{S}}d_m^\pi (s,z)\pi (a|z)$ the bootstrapping distribution.
+
+The first three terms are identical or analogous to the asymmetric case. The fourth term $\varepsilon_{\mathrm{alias}}$ results from the difference between the fixed point $\widetilde{Q}^{\pi}$ of the symmetric Bellman operator (7) and the true Q-function $Q^{\pi}$ .
+
+We note some minor differences with respect to the original result of Cayci et al. (2024) that appear to be typos and minor mistakes in the original proof.1 We provide the corrected proof in Appendix D.
+
+The results of Theorem 3 and Theorem 4 can be straightforwardly generalized to any other sampling distribution. However, obtaining bounds in terms of $d^{\pi} \otimes \pi$ is useful for bounding the performance of the actor-critic algorithm.
+
+# 4.3. Finite-Time Bound for the Natural Actor-Critic
+
+Following Cayci et al. (2024), we assume that there exists a concentrability coefficient $\overline{C}_{\infty} < \infty$ such that $\sup_{0 \leq t < T}\mathbb{E}[C_t]\leq \overline{C}_{\infty}$ with,
+
+$$
+C _ {t} = \sup _ {s, z, a} \left| \frac {d ^ {\pi^ {*}} (s , z) \pi^ {*} (a | z)}{d ^ {\pi_ {\theta_ {t}}} (s , z) \pi_ {\theta_ {t}} (a | z)} \right|. \tag {40}
+$$
+
+Roughly speaking, this assumption means that all successive policies should visit every agent state and action visited by the optimal policy with nonzero probability. It motivates the log-linear policy parametrization in equation (18) and the initialization to the maximum-entropy policy in Algorithm 2. We obtain the following finite-time bound for the suboptimality of the policy resulting from Algorithm 2.
+
+Theorem 5 (Finite-time bound for asymmetric and symmetric natural actor-critic algorithm). For any agent-state process $\mathcal{M} = (\mathcal{Z}, U)$ , we have for Algorithm 2 with $\alpha = \frac{1}{\sqrt{K}}$ , $\zeta = \frac{B\sqrt{1 - \gamma}}{\sqrt{2N}}$ , $\eta = \frac{1}{\sqrt{T}}$ and arbitrary $B > 0$ ,
+
+$$
+(1 - \gamma) \min_{0 \leq t < T} \mathbb{E}\left[J(\pi^{*}) - J(\pi_{t})\right] \leq \varepsilon_{\mathrm{nac}} + 2 \varepsilon_{\mathrm{inf}} + \overline{C}_{\infty}\left(\varepsilon_{\mathrm{actor}} + 2 \varepsilon_{\mathrm{grad}} + \frac{2 \sqrt{6}}{T} \sum_{t = 0}^{T - 1} \varepsilon_{\mathrm{critic}}^{\pi_{t}}\right), \tag{41}
+$$
+
+where the different terms may differ for asymmetric and symmetric critics,
+
+$$
+\varepsilon_{\mathrm{nac}} = \frac{B^{2} + 2 \log |\mathcal{A}|}{2 \sqrt{T}} \tag{42}
+$$
+
+$$
+\varepsilon_{\mathrm{actor}} = \sqrt{\frac{(2 - \gamma) B}{(1 - \gamma) \sqrt{N}}} \tag{43}
+$$
+
+$$
+\varepsilon_{\mathrm{inf,asym}} = 0 \tag{44}
+$$
+
+$$
+\varepsilon_{\mathrm{inf,sym}} = \mathbb{E}^{\pi^{*}}\left[\sum_{k = 0}^{\infty} \gamma^{k} \left\| \hat{b}_{k} - b_{k} \right\|_{\mathrm{TV}}\right] \tag{45}
+$$
+
+$$
+\varepsilon_{\mathrm{grad,asym}} = \sup_{0 \leq t < T} \sqrt{\min_{w} \mathcal{L}_{t}(w)} \tag{46}
+$$
+
+$$
+\varepsilon_{\mathrm{grad,sym}} = \sup_{0 \leq t < T} \sqrt{\min_{w} L_{t}(w)}, \tag{47}
+$$
+
+and $\varepsilon_{\mathrm{critic}}^{\pi_t}$ is given in Theorem 3 and Theorem 4.
+
+The first term $\varepsilon_{\mathrm{nac}}$ is the usual natural actor-critic term, decreasing as $T^{-1/2}$ (Agarwal et al., 2021). The second term $\varepsilon_{\mathrm{inf}}$ is the inference error resulting from the use of an agent state in a POMDP (Cayci et al., 2024). This term is zero for the asymmetric algorithm. The third term $\varepsilon_{\mathrm{actor}}$ is the error resulting from the estimation of the natural policy gradient by stochastic gradient descent. The fourth term $\varepsilon_{\mathrm{grad}}$ is the error resulting from the use of a linear function approximator with features $\nabla_{\theta}\log \pi_t(a|z)$ for the natural policy gradient. Finally, the fifth term $\frac{1}{T}\sum_{t = 0}^{T - 1}\varepsilon_{\mathrm{critic}}^{\pi_t}$ is the error arising from the successive critic approximations. Within each $\varepsilon_{\mathrm{critic}}^{\pi_t}$ term, the aliasing term is thus zero for the asymmetric algorithm. The proof, generalizing that of Cayci et al. (2024) to the asymmetric setting, is available in Appendix E.
+
+# 4.4. Discussion
+
+As can be seen from Theorem 3 and Theorem 4, compared to the symmetric temporal difference learning algorithm, the asymmetric one eliminates the term arising from aliasing in the agent state, in the sense of equation (39). In other words, even for an aliased agent-state process, leveraging the state to learn the asymmetric Q-function instead of the symmetric Q-function does not suffer from aliasing, while still providing a valid critic for the policy gradient algorithm. That said, these bounds are given in expectation, and future work may study the variance of the error of such Q-function approximations.
+
+From Theorem 5, we notice that the inference term (45) in the suboptimality bound vanishes in the asymmetric setting. Moreover, the average error $\frac{1}{T}\sum_{t=0}^{T-1}\varepsilon_{\text{critic}}^{\pi_t}$ made in the evaluation of the policies $\pi_0,\ldots,\pi_{T-1}$ appears in the finite-time bound that we obtain for the suboptimality of the policy. Thus, the suboptimality bound for the actor also improves in the asymmetric setting by eliminating the aliasing terms with respect to the symmetric setting.
+
+Delving into the proof of Theorem 5 at equations (236) and (237), we see that the Q-function error impacts the suboptimality bound through the estimation of the natural policy gradient (19). Indeed, this error term in the suboptimality bound directly results from the error on the advantage function estimate used as the target of the natural policy gradient estimation loss in equations (23) and (26). This advantage estimate is derived from the Q-function estimate, so that the error on the latter directly impacts the error on the former, as detailed in equations (236) and (237). This improvement in the average critic error unfortunately comes at the expense of a different residual error $\varepsilon_{\mathrm{grad}}$ on the natural policy gradient loss. Indeed, as can be seen in equations (46) and (47), we obtain a residual error $\varepsilon_{\mathrm{grad,asym}}$ with respect to the best approximation of the asymmetric advantage $\mathcal{A}^{\pi_t}(s,z,a)$ , instead of a residual error $\varepsilon_{\mathrm{grad,sym}}$ with respect to the best approximation of the symmetric advantage $A^{\pi_t}(z,a)$ . Since both natural policy gradients are obtained through a linear regression with features $\nabla_{\theta}\log \pi_{t}(a|z)$ , the asymmetric residual error may be higher than the symmetric residual error, even in the tabular case.
+
+We conclude that the effectiveness of asymmetric actor-critic algorithms notably results from a better approximation of the Q-function by eliminating the aliasing bias, which in turn provides a better estimate of the policy gradient.
+
+# 5. Conclusion
+
+In this work, we extended the unbiased asymmetric actor-critic algorithm to agent-state policies. Then, we adapted a finite-time analysis of natural actor-critic to the asymmetric setting. This analysis highlighted that, in contrast to symmetric learning, asymmetric learning is less sensitive to aliasing in the agent state. While this analysis assumed a fixed agent-state process, we argue that it is useful for interpreting the causes of the effectiveness of asymmetric learning with learnable agent-state processes. Indeed, aliasing can be present in the agent-state process throughout learning, and in particular at initialization. Moreover, it should be noted that this analysis can be straightforwardly generalized to learnable agent-state processes by extending the action space to select future agent states. More formally, we would extend the action space to $\mathcal{A}^{+} = \mathcal{A}\times \Delta (\mathcal{Z})$ with $a_{t}^{+} = (a_{t},a_{t}^{z})$ , the agent state space to $\mathcal{Z}^{+} = \mathcal{Z}\times \mathcal{O}$ with $z_{t}^{+} = (z_{t},z_{t}^{o})$ , and the agent-state process to $U(z_{t + 1}^{+}|z_{t}^{+},a_{t},o_{t + 1})\propto \exp (a_{t}^{z_{t + 1}})\delta_{z_{t + 1}^{o},o_{t + 1}}$ . This alternative to backpropagation through time would nevertheless still not reflect the common setting of recurrent actor-critic algorithms. We consider this as future work that could build on recent advances in finite-time bounds for recurrent actor-critic algorithms (Cayci & Eryilmaz, 2024a;b). Alternatively, generalizing this analysis to nonlinear approximators may include recurrent neural networks, which can be seen as nonlinear approximators with a sliding window as agent state. Our analysis also motivates future work studying other asymmetric learning approaches that consider representation losses to reduce the aliasing bias (Sinha & Mahajan, 2023; Lambrechts et al., 2022; 2024).
+
+# Acknowledgements
+
+Gaspard Lambrechts acknowledges the financial support of the Wallonia-Brussels Federation for his FRIA grant.
+
+# Impact Statement
+
+This paper presents work whose goal is to advance the field of machine learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
+
+# References
+
+Agarwal, A., Kakade, S. M., Lee, J. D., and Mahajan, G. On the Theory of Policy Gradient Methods: Optimality, Approximation, and Distribution Shift. Journal of Machine Learning Research, 2021.
+Akkaya, I., Andrychowicz, M., Chociej, M., Litwin, M., McGrew, B., Petron, A., Paino, A., Plappert, M., Powell, G., Ribas, R., Schneider, J., Tezak, N., Tworek, J., Welinder, P., Weng, L., Yuan, Q., Zaremba, W., and Zhang, L. Solving Rubik's Cube with a Robot Hand. arXiv:1910.07113, 2019.
+Avalos, R., Delgrange, F., Nowe, A., Perez, G., and Roijers, D. M. The Wasserstein Believer: Learning Belief Updates for Partially Observable Environments through Reliable Latent Space Models. The Twelfth International Conference on Learning Representations, 2024.
+Baisero, A. and Amato, C. Unbiased Asymmetric Reinforcement Learning under Partial Observability. International Conference on Autonomous Agents and Multiagent Systems, 2022.
+Bakker, B. Reinforcement Learning with Long Short-Term Memory. Advances in Neural Information Processing Systems, 2001.
+Buesing, L., Weber, T., Racanière, S., Eslami, S. M. A., Rezende, D. J., Reichert, D. P., Viola, F., Besse, F., Gregor, K., Hassabis, D., and Wierstra, D. Learning and Querying Fast Generative Models for Reinforcement Learning. arXiv:1802.03006, 2018.
+Cai, Y., Liu, X., Oikonomou, A., and Zhang, K. Provable Partially Observable Reinforcement Learning with Privileged Information. Advances in Neural Information Processing Systems, 2024.
+
+Cayci, S. and Eryilmaz, A. Convergence of Gradient Descent for Recurrent Neural Networks: A Nonasymptotic Analysis. arXiv:2402.12241, 2024a.
+Cayci, S. and Eryilmaz, A. Recurrent Natural Policy Gradient for POMDPs. ICML Workshop on the Foundations of Reinforcement Learning and Control, 2024b.
+Cayci, S., He, N., and Srikant, R. Finite-Time Analysis of Natural Actor-Critic for POMDPs. SIAM Journal on Mathematics of Data Science, 2024.
+Choudhury, S., Bhardwaj, M., Arora, S., Kapoor, A., Ranade, G., Scherer, S., and Dey, D. Data-Driven Planning via Imitation Learning. The International Journal of Robotics Research, 2018.
+Degrave, J., Felici, F., Buchli, J., Neunert, M., Tracey, B. D., Carpanese, F., Ewalds, T., Hafner, R., Abdolmaleki, A., de Las Casas, D., Donner, C., Fritz, L., Galperti, C., Huber, A., Keeling, J., Tsimpoukelli, M., Kay, J., Merle, A., Moret, J.-M., Noury, S., Pesamosca, F., Pfau, D., Sauter, O., Sommariva, C., Coda, S., Duval, B., Fasoli, A., Kohli, P., Kavukcuoglu, K., Hassabis, D., and Riedmiller, M. A. Magnetic Control of Tokamak Plasmas through Deep Reinforcement Learning. Nature, 2022.
+Dong, S., Roy, B. V., and Zhou, Z. Simple Agent, Complex Environment: Efficient Reinforcement Learning with Agent States. Journal of Machine Learning Research, 2022.
+Gregor, K., Rezende, D. J., Besse, F., Wu, Y., Merzic, H., and van den Oord, A. Shaping Belief States with Generative Environment Models for RL. Advances in Neural Information Processing Systems, 2019.
+Guo, Z. D., Azar, M. G., Piot, B., Pires, B. A., and Munos, R. Neural Predictive Belief Representations. arXiv:1811.06407, 2018.
+Guo, Z. D., Pires, B. A., Piot, B., Grill, J.-B., Altché, F., Munos, R., and Azar, M. G. Bootstrap Latent-Predictive Representations for Multitask Reinforcement Learning. International Conference on Machine Learning, 2020.
+Han, D., Doya, K., and Tani, J. Variational Recurrent Models for Solving Partially Observable Control Tasks. International Conference on Learning Representations, 2019.
+Hausknecht, M. and Stone, P. Deep Recurrent Q-learning for Partially Observable MDPs. AAAI Fall Symposium Series, 2015.
+Heess, N., Hunt, J. J., Lillicrap, T. P., and Silver, D. Memory-Based Control with Recurrent Neural Networks. arXiv: 1512.04455, 2015.
+
+Igl, M., Zintgraf, L., Le, T. A., Wood, F., and Whiteson, S. Deep Variational Reinforcement Learning for POMDPs. International Conference on Machine Learning, 2018.
+Kaelbling, L. P., Littman, M. L., and Cassandra, A. R. Planning and Acting in Partially Observable Stochastic Domains. Artificial Intelligence, 1998.
+Kakade, S. and Langford, J. Approximately optimal approximate reinforcement learning. International Conference on Machine Learning, 2002.
+Kakade, S. M. A Natural Policy Gradient. Advances in Neural Information Processing Systems, 2001.
+Kaufmann, E., Bauersfeld, L., Loquercio, A., Müller, M., Koltun, V., and Scaramuzza, D. Champion-Level Drone Racing using Deep Reinforcement Learning. Nature, 2023.
+Lambrechts, G., Bolland, A., and Ernst, D. Recurrent Networks, Hidden States and Beliefs in Partially Observable Environments. Transactions on Machine Learning Research, 2022.
+Lambrechts, G., Bolland, A., and Ernst, D. Informed POMDP: Leveraging Additional Information in Model-Based RL. Reinforcement Learning Journal, 2024.
+Lee, A. X., Nagabandi, A., Abbeel, P., and Levine, S. Stochastic Latent Actor-Critic: Deep Reinforcement Learning with a Latent Variable Model. Advances in Neural Information Processing Systems, 2020.
+Levine, S., Finn, C., Darrell, T., and Abbeel, P. End-to-End Training of Deep Visuomotor Policies. Journal of Machine Learning Research, 2015.
+Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M. A., Fidjeland, A. K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., and Hassabis, D. Human-Level Control through Deep Reinforcement Learning. Nature, 2015.
+Ni, T., Eysenbach, B., SeyedSalehi, E., Ma, M., Gehring, C., Mahajan, A., and Bacon, P.-L. Bridging State and History Representations: Understanding Self-Predictive RL. International Conference on Learning Representations, 2024.
+Pinto, L., Andrychowicz, M., Welinder, P., Zaremba, W., and Abbeel, P. Asymmetric Actor Critic for Image-Based Robot Learning. Robotics: Science and Systems, 2018.
+Schrittwieser, J., Antonoglou, I., Hubert, T., Simonyan, K., Sifre, L., Schmitt, S., Guez, A., Lockhart, E., Hassabis, D., Graepel, T., Lillicrap, T. P., and Silver, D. Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model. Nature, 2020.
+Schulman, J., Levine, S., Abbeel, P., Jordan, M., and Moritz, P. Trust Region Policy Optimization. International Conference on Machine Learning, 2015.
+Shalev-Shwartz, S. and Ben-David, S. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.
+Sinha, A. and Mahajan, A. Asymmetric Actor-Critic with Approximate Information State. IEEE Conference on Decision and Control, 2023.
+Sinha, A. and Mahajan, A. Agent-State Based Policies in POMDPs: Beyond Belief-State MDPs. arXiv: 2409.15703, 2024.
+Subramanian, J., Sinha, A., Seraj, R., and Mahajan, A. Approximate Information State for Approximate Planning and Reinforcement Learning in Partially Observed Systems. Journal of Machine Learning Research, 2022.
+Sutton, R. S., McAllester, D., Singh, S., and Mansour, Y. Policy Gradient Methods for Reinforcement Learning with Function Approximation. Advances in Neural Information Processing Systems, 1999.
+Vasco, M., Seno, T., Kawamoto, K., Subramanian, K., Wurman, P. R., and Stone, P. A Super-Human Vision-Based Reinforcement Learning Agent for Autonomous Racing in Gran Turismo. Reinforcement Learning Journal, 2024.
+Wang, A., Li, A. C., Klassen, T. Q., Icarte, R. T., and McIlraith, S. A. Learning Belief Representations for Partially Observable Deep RL. International Conference on Machine Learning, 2023.
+Warrington, A., Lavington, J. W., Scibior, A., Schmidt, M., and Wood, F. Robust Asymmetric Learning in POMDPs. International Conference on Machine Learning, 2021.
+Wierstra, D., Förster, A., Peters, J., and Schmidhuber, J. Solving Deep Memory POMDPs with Recurrent Policy Gradients. International Conference on Artificial Neural Networks, 2007.
+Zhang, M., McCarthy, Z., Finn, C., Levine, S., and Abbeel, P. Learning Deep Neural Network Policies with Continuous Memory States. IEEE International Conference on Robotics and Automation, 2016.
+Zhu, P., Li, X., Poupart, P., and Miao, G. On Improving Deep Reinforcement Learning for POMDPs. arXiv: 1704.07978, 2017.
+
+# A. Agent State Aliasing
+
+In this section, we provide an example of an aliased agent state and discuss the corresponding aliasing bias. For this purpose, we introduce a slightly modified version of the Tiger POMDP (Kaelbling et al., 1998), see Figure 1. In this POMDP, there are two doors: the left door opens on a room containing a treasure, and the right door opens on a room containing a tiger. The POMDP has four states: being in the treasure room (Treasure), being in the tiger room (Tiger), being in front of the treasure door (Left), or being in front of the tiger door (Right). The doors are labeled outside (Left or Right), but the rooms are completely dark inside (Dark), such that the agent does not observe in which room it is. When outside of the rooms, the agent can switch to the other door (Swap) or open the door and enter the room (Enter). Once in a room (Treasure or Tiger), the agent stays locked in forever, and receives a positive reward (+1) at every step spent in the treasure room (Treasure), whatever the action taken (Swap or Enter). We consider the agent state to be
+
+
+Figure 1: Aliased Tiger POMDP.
+
+simply the last observation (Left, Right, or Dark). Notice that the optimal agent-state policy conditioned on this agent state is also an optimal history-dependent policy. In other words, the current observation is a sufficient statistic for optimal control in this POMDP. We consider a uniform initial distribution over the four states.
+
+For a given agent state (Dark), there exist two different underlying states (Treasure or Tiger). We call this phenomenon aliasing. Now, let us consider a simple policy $\pi$ that always takes the same action (Enter). It is clear that the symmetric value function defined according to equation (6) is given by $V^{\pi}(z = \mathrm{Dark}) = \frac{1}{2(1 - \gamma)}$ , $V^{\pi}(z = \mathrm{Left}) = \frac{\gamma}{1 - \gamma}$ , and $V^{\pi}(z = \mathrm{Right}) = 0$ . However, when considering the unique fixed point of the aliased Bellman operator of equation (7) with $m = 1$ , we have instead $\widetilde{V}^{\pi}(z = \mathrm{Dark}) = \frac{1}{2(1 - \gamma)}$ , $\widetilde{V}^{\pi}(z = \mathrm{Left}) = \frac{\gamma}{2(1 - \gamma)}$ , and $\widetilde{V}^{\pi}(z = \mathrm{Right}) = \frac{\gamma}{2(1 - \gamma)}$ . We refer to the distance between $V^{\pi}$ and $\widetilde{V}^{\pi}$ , or similarly between $Q^{\pi}$ and $\widetilde{Q}^{\pi}$ , as the aliasing bias. In the analysis of this paper, this distance appears as the weighted $\ell_2$ -norm $\| Q^{\pi} - \widetilde{Q}^{\pi}\|_{d}$ where $d(s,z,a) = d^{\pi}(s,z)\pi(a|z)$ . In the analysis, we also define the aliasing term $\varepsilon_{\mathrm{alias}}$ as an upper bound on this aliasing bias, see Lemma D.1 for a detailed definition.
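The closed-form values above can be checked numerically. The following sketch (illustrative code, not part of the paper; state names and the always-Enter policy are as described above) computes the symmetric values in closed form and the aliased fixed point by iteration:

```python
GAMMA = 0.9

# Symmetric values under the always-Enter policy, equation (6):
# conditioned on z = Dark, the agent is in Treasure or Tiger with probability 1/2.
V = {
    "Dark": 0.5 / (1 - GAMMA),    # half the time locked in with the treasure
    "Left": GAMMA / (1 - GAMMA),  # enter, then +1 at every subsequent step
    "Right": 0.0,                 # enter the tiger room, no reward ever
}

# Aliased fixed point, equation (7) with m = 1: bootstrapping is done on the
# agent state only, so Left and Right both bootstrap on the aliased Dark value.
V_alias = {"Dark": 0.0, "Left": 0.0, "Right": 0.0}
for _ in range(5000):
    V_alias = {
        "Dark": 0.5 + GAMMA * V_alias["Dark"],  # expected reward 1/2 in the dark
        "Left": GAMMA * V_alias["Dark"],
        "Right": GAMMA * V_alias["Dark"],
    }

assert abs(V_alias["Dark"] - 0.5 / (1 - GAMMA)) < 1e-9
assert abs(V_alias["Left"] - 0.5 * GAMMA / (1 - GAMMA)) < 1e-9
assert abs(V_alias["Right"] - 0.5 * GAMMA / (1 - GAMMA)) < 1e-9
```

The fixed point is exact for Dark but biased for Left and Right, which is exactly the aliasing bias discussed above.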
+
+# B. Proof of the Natural Policy Gradients
+
+In this section, we prove that the natural policy gradient is the minimizer of analogous asymmetric and symmetric losses.
+
+# B.1. Proof of the Asymmetric Natural Policy Gradient
+
+In this section, we prove that the natural policy gradient is the minimizer of an asymmetric loss.
+
+Theorem 1 (Asymmetric natural policy gradient). For any POMDP $\mathcal{P}$ and any agent-state policy $\pi_{\theta} \in \Pi_{\mathcal{M}}$ , we have,
+
+$$
+w _ {*} ^ {\pi_ {\theta}} = (1 - \gamma) F _ {\pi_ {\theta}} ^ {\dagger} \nabla_ {\theta} J (\pi_ {\theta}) \in \underset {w \in \mathbb {R} ^ {d _ {\psi}}} {\arg \min } \mathcal {L} (w), \tag {21}
+$$
+
+with,
+
+$$
+\mathcal {L} (w) = \mathbb {E} ^ {d ^ {\pi_ {\theta}}} \left[ \left(\left\langle \nabla_ {\theta} \log \pi_ {\theta} (A | Z), w \right\rangle - \mathcal {A} ^ {\pi_ {\theta}} (S, Z, A)\right) ^ {2} \right]. \tag {22}
+$$
+
+Proof. Let us note that,
+
+$$
+\nabla_ {w} \mathcal {L} (w) = 2 \mathbb {E} ^ {d ^ {\pi_ {\theta}}} \left[ \nabla_ {\theta} \log \pi_ {\theta} (A | Z) \left(\left\langle \nabla_ {\theta} \log \pi_ {\theta} (A | Z), w \right\rangle - \mathcal {A} ^ {\pi_ {\theta}} (S, Z, A)\right) \right]. \tag {48}
+$$
+
+Therefore, for any $w_{*}^{\pi_{\theta}} \in \mathbb{R}^{d_{\psi}}$ minimizing $\mathcal{L}(w)$ , we have $\nabla_w \mathcal{L}(w_{*}^{\pi_{\theta}}) = 0$ , such that,
+
+$$
+\mathbb {E} ^ {d ^ {\pi_ {\theta}}} \left[ \nabla_ {\theta} \log \pi_ {\theta} (A | Z) \mathcal {A} ^ {\pi_ {\theta}} (S, Z, A) \right] = \mathbb {E} ^ {d ^ {\pi_ {\theta}}} \left[ \nabla_ {\theta} \log \pi_ {\theta} (A | Z) \langle \nabla_ {\theta} \log \pi_ {\theta} (A | Z), w _ {*} ^ {\pi_ {\theta}} \rangle \right] \tag {49}
+$$
+
+$$
+= \mathbb {E} ^ {d ^ {\pi_ {\theta}}} \left[ \left(\nabla_ {\theta} \log \pi_ {\theta} (A | Z) \otimes \nabla_ {\theta} \log \pi_ {\theta} (A | Z)\right) w _ {*} ^ {\pi_ {\theta}} \right] \tag {50}
+$$
+
+$$
+= \mathbb {E} ^ {d ^ {\pi_ {\theta}}} \left[ \nabla_ {\theta} \log \pi_ {\theta} (A | Z) \otimes \nabla_ {\theta} \log \pi_ {\theta} (A | Z) \right] w _ {*} ^ {\pi_ {\theta}} \tag {51}
+$$
+
+$$
+= F _ {\pi_ {\theta}} w _ {*} ^ {\pi_ {\theta}}, \tag {52}
+$$
+
+which follows from the definition of the Fisher information matrix $F_{\pi_{\theta}}$ in equation (20). Now, let us define the policy $\pi_{\theta}^{+}(A|S,Z) = \pi_{\theta}(A|Z)$ , which ignores the state $S$ . From there, we have,
+
+$$
+\begin{array}{l} F _ {\pi_ {\theta}} w _ {*} ^ {\pi_ {\theta}} = \mathbb {E} ^ {d ^ {\pi_ {\theta}}} \left[ \nabla_ {\theta} \log \pi_ {\theta} (A | Z) \mathcal {A} (S, Z, A) \right] (53) \\ = \mathbb {E} ^ {d ^ {\pi_ {\theta} ^ {+}}} \left[ \nabla_ {\theta} \log \pi_ {\theta} ^ {+} (A | S, Z) \mathcal {A} (S, Z, A) \right] (54) \\ = \mathbb {E} ^ {d ^ {\pi_ {\theta} ^ {+}}} \left[ \nabla_ {\theta} \log \pi_ {\theta} ^ {+} (A | S, Z) \left(\mathcal {A} (S, Z, A) + \mathcal {V} (S, Z) - \mathcal {V} (S, Z)\right) \right] (55) \\ = \mathbb {E} ^ {d ^ {\pi_ {\theta} ^ {+}}} \left[ \nabla_ {\theta} \log \pi_ {\theta} ^ {+} (A | S, Z) \mathcal {Q} (S, Z, A) \right] - \mathbb {E} ^ {d ^ {\pi_ {\theta} ^ {+}}} \left[ \nabla_ {\theta} \log \pi_ {\theta} ^ {+} (A | S, Z) \mathcal {V} (S, Z) \right] (56) \\ = \mathbb {E} ^ {d ^ {\pi_ {\theta} ^ {+}}} \left[ \nabla_ {\theta} \log \pi_ {\theta} ^ {+} (A | S, Z) \mathcal {Q} (S, Z, A) \right] - \mathbb {E} ^ {d ^ {\pi_ {\theta} ^ {+}}} \left[ \mathcal {V} (S, Z) \sum_ {a \in \mathcal {A}} \pi_ {\theta} ^ {+} (a | S, Z) \nabla_ {\theta} \log \pi_ {\theta} ^ {+} (a | S, Z) \right] (57) \\ = \mathbb {E} ^ {d ^ {\pi_ {\theta} ^ {+}}} \left[ \nabla_ {\theta} \log \pi_ {\theta} ^ {+} (A | S, Z) \mathcal {Q} (S, Z, A) \right] - \mathbb {E} ^ {d ^ {\pi_ {\theta} ^ {+}}} \left[ \mathcal {V} (S, Z) \sum_ {a \in \mathcal {A}} \nabla_ {\theta} \pi_ {\theta} ^ {+} (a | S, Z) \right] (58) \\ = \mathbb {E} ^ {d ^ {\pi_ {\theta} ^ {+}}} \left[ \nabla_ {\theta} \log \pi_ {\theta} ^ {+} (A | S, Z) \mathcal {Q} (S, Z, A) \right] - \mathbb {E} ^ {d ^ {\pi_ {\theta} ^ {+}}} \left[ \mathcal {V} (S, Z) \nabla_ {\theta} \sum_ {a \in \mathcal {A}} \pi_ {\theta} ^ {+} (a | S, Z) \right] (59) \\ = \mathbb {E} ^ {d ^ {\pi_ {\theta} ^ {+}}} \left[ \nabla_ {\theta} \log \pi_ {\theta} ^ {+} (A | S, Z) \mathcal {Q} (S, Z, A) \right] - \mathbb {E} ^ {d ^ {\pi_ {\theta} ^ {+}}} \left[ \mathcal {V} (S, Z) \nabla_ {\theta} 1 \right] (60) \\ = \mathbb {E} ^ {d ^ {\pi_ {\theta} ^ {+}}} \left[ \nabla_ {\theta} \log \pi_ {\theta} ^ {+} (A | S, Z) \mathcal {Q} (S, Z, A) \right]. (61) \\ \end{array}
+$$
+
+Using the policy gradient theorem (Sutton et al., 1999) and equation (61),
+
+$$
+F _ {\pi_ {\theta}} w _ {*} ^ {\pi_ {\theta}} = (1 - \gamma) \nabla_ {\theta} J \left(\pi_ {\theta} ^ {+}\right). \tag {62}
+$$
+
+From there, using the definition of $\pi_{\theta}^{+}$ , we obtain,
+
+$$
+\begin{array}{l} F _ {\pi_ {\theta}} w _ {*} ^ {\pi_ {\theta}} = (1 - \gamma) \nabla_ {\theta} J \left(\pi_ {\theta} ^ {+}\right) (63) \\ = (1 - \gamma) \nabla_ {\theta} J (\pi_ {\theta}). (64) \\ \end{array}
+$$
+
+This concludes the proof.
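The chain of equalities (49)–(52) says that any minimizer of the compatible least-squares problem satisfies the normal equations $F_{\pi_{\theta}} w_{*}^{\pi_{\theta}} = \mathbb{E}^{d^{\pi_\theta}}[\nabla_\theta \log \pi_\theta(A|Z)\,\mathcal{A}^{\pi_\theta}(S,Z,A)]$. This can be checked numerically on a toy problem; the score vectors, advantages, and occupancy weights below are random placeholders, not quantities from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 8, 3  # number of (z, a) pairs and parameter dimension (illustrative)

scores = rng.normal(size=(n, d))   # rows play the role of grad log pi(a|z)
adv = rng.normal(size=n)           # plays the role of the advantage
w_occ = rng.dirichlet(np.ones(n))  # plays the role of the occupancy d(z, a)

# Minimize the weighted loss E[(<score, w> - adv)^2] by weighted least squares.
sqrt_w = np.sqrt(w_occ)[:, None]
w_star, *_ = np.linalg.lstsq(sqrt_w * scores, sqrt_w[:, 0] * adv, rcond=None)

# Normal equations: F w = E[score * adv], with F = E[score (x) score].
F = scores.T @ (w_occ[:, None] * scores)
g = scores.T @ (w_occ * adv)
assert np.allclose(w_star, np.linalg.pinv(F) @ g, atol=1e-8)
```

With more samples than parameters the Fisher matrix is almost surely invertible, so the least-squares solution and $F^{\dagger} g$ coincide.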
+
+# B.2. Proof of the Symmetric Natural Policy Gradient
+
+In this section, we prove that the natural policy gradient is the minimizer of a symmetric loss.
+
+Theorem 2 (Symmetric natural policy gradient). For any POMDP $\mathcal{P}$ and any agent-state policy $\pi_{\theta} \in \Pi_{\mathcal{M}}$ , we have,
+
+$$
+w _ {*} ^ {\pi_ {\theta}} = (1 - \gamma) F _ {\pi_ {\theta}} ^ {\dagger} \nabla_ {\theta} J (\pi_ {\theta}) \in \underset {w \in \mathbb {R} ^ {d _ {\psi}}} {\arg \min } L (w), \tag {24}
+$$
+
+with,
+
+$$
+L (w) = \mathbb {E} ^ {d ^ {\pi_ {\theta}}} \left[ \left(\left\langle \nabla_ {\theta} \log \pi_ {\theta} (A | Z), w \right\rangle - A ^ {\pi_ {\theta}} (Z, A)\right) ^ {2} \right]. \tag {25}
+$$
+
+Proof. Similarly to the asymmetric setting, for any $w_{*}^{\pi_{\theta}}$ minimizing $L(w)$ , we have $\nabla_w L(w_{*}^{\pi_{\theta}}) = 0$ , such that,
+
+$$
+\begin{array}{l} \mathbb {E} ^ {d ^ {\pi_ {\theta}}} \left[ \nabla_ {\theta} \log \pi_ {\theta} (A | Z) A (Z, A) \right] = \mathbb {E} ^ {d ^ {\pi_ {\theta}}} \left[ \nabla_ {\theta} \log \pi_ {\theta} (A | Z) \left\langle \nabla_ {\theta} \log \pi_ {\theta} (A | Z), w _ {*} ^ {\pi_ {\theta}} \right\rangle \right] (65) \\ = \mathbb {E} ^ {d ^ {\pi_ {\theta}}} \left[ \left(\nabla_ {\theta} \log \pi_ {\theta} (A | Z) \otimes \nabla_ {\theta} \log \pi_ {\theta} (A | Z)\right) w _ {*} ^ {\pi_ {\theta}} \right] (66) \\ = \mathbb {E} ^ {d ^ {\pi_ {\theta}}} \left[ \nabla_ {\theta} \log \pi_ {\theta} (A | Z) \otimes \nabla_ {\theta} \log \pi_ {\theta} (A | Z) \right] w _ {*} ^ {\pi_ {\theta}} (67) \\ = F _ {\pi_ {\theta}} w _ {*} ^ {\pi_ {\theta}}, (68) \\ \end{array}
+$$
+
+which follows from the definition of the Fisher information matrix $F_{\pi_{\theta}}$ in equation (20). From there, we have,
+
+$$
+F _ {\pi_ {\theta}} w _ {*} ^ {\pi_ {\theta}} = \mathbb {E} ^ {d ^ {\pi_ {\theta}}} [ \nabla_ {\theta} \log \pi_ {\theta} (A | Z) A (Z, A) ] \tag {69}
+$$
+
+$$
+F _ {\pi_ {\theta}} w _ {*} ^ {\pi_ {\theta}} = \mathbb {E} ^ {d ^ {\pi_ {\theta}}} \left[ \nabla_ {\theta} \log \pi_ {\theta} (A | Z) \mathbb {E} ^ {d ^ {\pi_ {\theta}}} [ \mathcal {A} (S, Z, A) | Z, A ] \right] \tag {70}
+$$
+
+$$
+F _ {\pi_ {\theta}} w _ {*} ^ {\pi_ {\theta}} = \mathbb {E} ^ {d ^ {\pi_ {\theta}}} \left[ \mathbb {E} ^ {d ^ {\pi_ {\theta}}} \left[ \nabla_ {\theta} \log \pi_ {\theta} (A | Z) \mathcal {A} (S, Z, A) | Z, A \right] \right] \tag {71}
+$$
+
+$$
+F _ {\pi_ {\theta}} w _ {*} ^ {\pi_ {\theta}} = \mathbb {E} ^ {d ^ {\pi_ {\theta}}} [ \nabla_ {\theta} \log \pi_ {\theta} (A | Z) \mathcal {A} (S, Z, A) ], \tag {72}
+$$
+
+which follows from the law of total expectation. From there, by following the same steps as in the asymmetric case (see Subsection B.1), we obtain,
+
+$$
+F _ {\pi_ {\theta}} w _ {*} ^ {\pi_ {\theta}} = (1 - \gamma) \nabla_ {\theta} J (\pi_ {\theta}). \tag {73}
+$$
+
+This concludes the proof.
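The key step (69)–(72) replaces the symmetric advantage $A(Z,A) = \mathbb{E}[\mathcal{A}(S,Z,A) \mid Z,A]$ by the asymmetric one inside the expectation, using the tower property. A small numerical check (the joint occupancy, advantage values, and score vectors are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(1)
nS, nZ, nA, d = 4, 3, 2, 5  # illustrative sizes

p = rng.dirichlet(np.ones(nS * nZ * nA)).reshape(nS, nZ, nA)  # joint d(s, z, a)
adv_sza = rng.normal(size=(nS, nZ, nA))  # asymmetric advantage on (s, z, a)
score = rng.normal(size=(nZ, nA, d))     # grad log pi(a|z), independent of s

# Symmetric advantage: A(z, a) = E[asym. advantage | z, a].
p_za = p.sum(axis=0)
adv_za = np.einsum("sza,sza->za", p, adv_sza) / p_za

# The expectation computed with the symmetric advantage...
lhs = np.einsum("za,za,zad->d", p_za, adv_za, score)
# ...equals the same expectation with the asymmetric advantage, equation (72).
rhs = np.einsum("sza,sza,zad->d", p, adv_sza, score)
assert np.allclose(lhs, rhs)
```

The identity holds because the score depends on $(z,a)$ only, so conditioning on $(Z,A)$ leaves it untouched.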
+
+# C. Proof of the Finite-Time Bound for the Asymmetric Critic
+
+In this section, we prove Theorem 3, that is recalled below.
+
+Theorem 3 (Finite-time bound for asymmetric $m$ -step temporal difference learning). For any agent-state policy $\pi \in \Pi_{\mathcal{M}}$ and any $m \in \mathbb{N}$ , we have for Algorithm 1 with $\alpha = \frac{1}{\sqrt{K}}$ and arbitrary $B > 0$ ,
+
+$$
+\sqrt {\mathbb {E} \left[ \left\| Q ^ {\pi} - \bar {Q} ^ {\pi} \right\| _ {d} ^ {2} \right]} \leq \varepsilon_ {\mathrm {t d}} + \varepsilon_ {\mathrm {a p p}} + \varepsilon_ {\mathrm {s h i f t}}, \tag {29}
+$$
+
+where the temporal difference learning, function approximation, and distribution shift terms are given by,
+
+$$
+\varepsilon_ {\mathrm {t d}} = \sqrt {\frac {4 B ^ {2} + \left(\frac {1}{1 - \gamma} + 2 B\right) ^ {2}}{2 \sqrt {K} (1 - \gamma^ {m})}} \tag {30}
+$$
+
+$$
+\varepsilon_ {\text {a p p}} = \frac {1 + \gamma^ {m}}{1 - \gamma^ {m}} \min _ {f \in \mathcal {F} _ {\phi} ^ {B}} \| f - \mathcal {Q} ^ {\pi} \| _ {d} \tag {31}
+$$
+
+$$
+\varepsilon_ {\text {s h i f t}} = \left(B + \frac {1}{1 - \gamma}\right) \sqrt {\frac {2 \gamma^ {m}}{1 - \gamma^ {m}} \sqrt {\| d _ {m} - d \| _ {\mathrm {T V}}}}, \tag {32}
+$$
+
+with $d(s,z,a) = d^{\pi}(s,z)\pi (a|z)$ the sampling distribution, and $d_{m}(s,z,a) = d_{m}^{\pi}(s,z)\pi (a|z)$ the bootstrapping distribution.
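To see how the three terms trade off, the bound can be evaluated directly. The helper below is illustrative (not part of the paper), with `approx_err` standing for $\min_{f \in \mathcal{F}_\phi^B}\|f - \mathcal{Q}^\pi\|_d$ and `tv_dist` for $\|d_m - d\|_{\mathrm{TV}}$:

```python
import math

def asymmetric_td_bound(K, m, gamma, B, approx_err, tv_dist):
    """Evaluate the right-hand side of equation (29), term by term."""
    eps_td = math.sqrt((4 * B**2 + (1 / (1 - gamma) + 2 * B) ** 2)
                       / (2 * math.sqrt(K) * (1 - gamma**m)))
    eps_app = (1 + gamma**m) / (1 - gamma**m) * approx_err
    eps_shift = (B + 1 / (1 - gamma)) * math.sqrt(
        2 * gamma**m / (1 - gamma**m) * math.sqrt(tv_dist))
    return eps_td, eps_app, eps_shift

# The TD term shrinks as K grows, and the shift term vanishes when the
# bootstrapping and sampling distributions coincide (tv_dist = 0).
td_small, _, shift_zero = asymmetric_td_bound(10**8, 5, 0.99, 10.0, 0.1, 0.0)
td_large, _, _ = asymmetric_td_bound(10**2, 5, 0.99, 10.0, 0.1, 0.0)
assert td_small < td_large
assert shift_zero == 0.0
```

Note that the function approximation term is irreducible in $K$: it only shrinks if the feature class $\mathcal{F}_\phi^B$ gets richer.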
+
+Proof. To simplify notation, we drop the dependence on $\pi$ and $\beta$ and use $\mathcal{Q}$ as a shorthand for $\mathcal{Q}^{\pi}$ , $\widehat{\mathcal{Q}}^{*}$ as a shorthand for $\widehat{\mathcal{Q}}_{*}^{\pi}$ , $\overline{\mathcal{Q}}$ as a shorthand for $\overline{\mathcal{Q}}^{\pi}$ , and $\widehat{\mathcal{Q}}_k$ as a shorthand for $\widehat{\mathcal{Q}}_{\beta_k}^{\pi}$ , where the subscripts and superscripts remain implicit but are assumed clear from context. When evaluating the Q-functions, we go one step further by using $\mathcal{Q}_{k,i}$ to denote $\mathcal{Q}(S_{k,i},Z_{k,i},A_{k,i})$ , $\widehat{\mathcal{Q}}_{k,i}^{*}$ to denote $\widehat{\mathcal{Q}}^{*}(S_{k,i},Z_{k,i},A_{k,i})$ , $\widehat{\mathcal{Q}}_{k,i}$ to denote $\widehat{\mathcal{Q}}_k(S_{k,i},Z_{k,i},A_{k,i})$ , and $\phi_{k,i}$ to denote $\phi (S_{k,i},Z_{k,i},A_{k,i})$ . In addition, we define $d$ as a shorthand for $d^{\pi}\otimes \pi$ , such that $d(s,z,a) = d^{\pi}(s,z)\pi (a|z)$ , and $d_m$ as a shorthand for $d_m^{\pi}\otimes \pi$ , such that $d_m(s,z,a) = d_m^{\pi}(s,z)\pi (a|z)$ .
+
+First, let us define $\Delta_k$ as,
+
+$$
+\Delta_ {k} = \sqrt {\mathbb {E} \left[ \left\| \mathcal {Q} - \widehat {\mathcal {Q}} _ {k} \right\| _ {d} ^ {2} \right]} = \sqrt {\mathbb {E} \left[ \| \mathcal {Q} (\cdot) - \langle \beta_ {k} , \phi (\cdot) \rangle \| _ {d} ^ {2} \right]}. \tag {74}
+$$
+
+Using the linearity of $\overline{\mathcal{Q}}$ in $\beta_{0},\ldots ,\beta_{K - 1}$ , the triangle inequality, Minkowski's inequality, and Jensen's inequality, we have,
+
+$$
+\sqrt {\mathbb {E} \left[ \left\| \mathcal {Q} - \bar {\mathcal {Q}} \right\| _ {d} ^ {2} \right]} = \sqrt {\mathbb {E} \left[ \left\| \mathcal {Q} (\cdot) - \left\langle \frac {1}{K} \sum_ {k = 0} ^ {K - 1} \beta_ {k} , \phi (\cdot) \right\rangle \right\| _ {d} ^ {2} \right]} \tag {75}
+$$
+
+$$
+\begin{array}{l} = \sqrt {\mathbb {E} \left[ \left\| \frac {1}{K} \sum_ {k = 0} ^ {K - 1} \left(\mathcal {Q} (\cdot) - \langle \beta_ {k} , \phi (\cdot) \rangle\right) \right\| _ {d} ^ {2} \right]} (76) \\ = \sqrt {\mathbb {E} \left[ \left\| \sum_ {k = 0} ^ {K - 1} \frac {1}{K} \left(\mathcal {Q} (\cdot) - \langle \beta_ {k} , \phi (\cdot) \rangle\right) \right\| _ {d} ^ {2} \right]} (77) \\ \leq \sqrt {\mathbb {E} \left[ \left(\sum_ {k = 0} ^ {K - 1} \frac {1}{K} \left\| \mathcal {Q} (\cdot) - \langle \beta_ {k} , \phi (\cdot) \rangle \right\| _ {d}\right) ^ {2} \right]} (78) \\ \leq \sum_ {k = 0} ^ {K - 1} \frac {1}{K} \sqrt {\mathbb {E} \left[ \left\| \mathcal {Q} (\cdot) - \langle \beta_ {k} , \phi (\cdot) \rangle \right\| _ {d} ^ {2} \right]} (79) \\ = \frac {1}{K} \sum_ {k = 0} ^ {K - 1} \sqrt {\Delta_ {k} ^ {2}} (80) \\ = \frac {1}{K} \sum_ {k = 0} ^ {K - 1} \Delta_ {k} (81) \\ = \frac {1}{K} \sum_ {k = 0} ^ {K - 1} \left(\Delta_ {k} - l\right) + l (82) \\ \leq \left| \frac {1}{K} \sum_ {k = 0} ^ {K - 1} \left(\Delta_ {k} - l\right) \right| + l (83) \\ = \sqrt {\left(\frac {1}{K} \sum_ {k = 0} ^ {K - 1} \left(\Delta_ {k} - l\right)\right) ^ {2}} + l (84) \\ \leq \sqrt {\frac {1}{K} \sum_ {k = 0} ^ {K - 1} \left(\Delta_ {k} - l\right) ^ {2}} + l, (85) \\ \end{array}
+$$
+
+where $l \in \mathbb{R}$ is arbitrary.
+
+Now, we consider the Lyapunov function $\mathcal{L}(\beta) = \| \beta_{*} - \beta \|_{2}^{2}$ in order to find a bound on $\frac{1}{K}\sum_{k=0}^{K-1}(\Delta_k - l)^2$ . Since $\beta_{*} \in \mathcal{B}_{2}(0,B)$ , with $\mathcal{B}_{2}(0,B)$ a closed convex subset of $\mathbb{R}^{d_{\phi}}$ , and the projection $\Gamma_{\mathcal{C}}$ is non-expansive for closed and convex $\mathcal{C}$ , we have for all $k \geq 0$ ,
+
+$$
+\begin{array}{l} \mathcal {L} \left(\beta_ {k + 1}\right) = \left\| \beta_ {*} - \beta_ {k + 1} \right\| _ {2} ^ {2} (86) \\ \leq \left\| \beta_ {*} - \beta_ {k + 1} ^ {-} \right\| _ {2} ^ {2} (87) \\ = \left\| \beta_ {*} - \left(\beta_ {k} + \alpha g _ {k}\right) \right\| _ {2} ^ {2} (88) \\ = \left\| \left(\beta_ {*} - \beta_ {k}\right) - \alpha g _ {k} \right\| _ {2} ^ {2} (89) \\ = \left\langle \left(\beta_ {*} - \beta_ {k}\right) - \alpha g _ {k}, \left(\beta_ {*} - \beta_ {k}\right) - \alpha g _ {k} \right\rangle (90) \\ = \left\langle \beta_ {*} - \beta_ {k}, \beta_ {*} - \beta_ {k} \right\rangle - 2 \alpha \left\langle \beta_ {*} - \beta_ {k}, g _ {k} \right\rangle + \alpha^ {2} \left\langle g _ {k}, g _ {k} \right\rangle (91) \\ = \mathcal {L} \left(\beta_ {k}\right) - 2 \alpha \left\langle \beta_ {*} - \beta_ {k}, g _ {k} \right\rangle + \alpha^ {2} \| g _ {k} \| _ {2} ^ {2} (92) \\ = \mathcal {L} \left(\beta_ {k}\right) + 2 \alpha \left\langle \beta_ {k} - \beta_ {*}, g _ {k} \right\rangle + \alpha^ {2} \| g _ {k} \| _ {2} ^ {2}. (93) \\ \end{array}
+$$
+
+Let us consider the Lyapunov drift $\mathbb{E}[\mathcal{L}(\beta_{k + 1}) - \mathcal{L}(\beta_k)]$ , and exploit the fact that the environment samples used to compute $g_{k}$ are independent and identically distributed. Formally, we define $\mathfrak{G}_k = \sigma (S_{i,j},Z_{i,j},A_{i,j},i\leq k,j\leq m)$ and $\mathfrak{F}_k = \sigma (S_{k,0},Z_{k,0},A_{k,0})$ , where $\sigma (X_i:i\in \mathcal{I})$ denotes the $\sigma$ -algebra generated by a collection $\{X_{i}\colon i\in \mathcal{I}\}$ of random
+
+variables. We can write, using the law of total expectation,
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \mathcal {L} \left(\beta_ {k + 1}\right) - \mathcal {L} \left(\beta_ {k}\right) \right] = \mathbb {E} \left[ \mathbb {E} \left[ \mathcal {L} \left(\beta_ {k + 1}\right) - \mathcal {L} \left(\beta_ {k}\right) \mid \mathfrak {G} _ {k - 1} \right] \right] (94) \\ \leq 2 \alpha \mathbb {E} \left[ \mathbb {E} \left[ \langle \beta_ {k} - \beta_ {*}, g _ {k} \rangle | \mathfrak {G} _ {k - 1} \right] \right] + \alpha^ {2} \mathbb {E} \left[ \mathbb {E} \left[ \| g _ {k} \| _ {2} ^ {2} \mid \mathfrak {G} _ {k - 1} \right] \right]. (95) \\ \end{array}
+$$
+
+Let us focus on the first term of equation (95), $\mathbb{E}\left[\langle \beta_k - \beta_*, g_k\rangle \mid \mathfrak{G}_{k - 1}\right]$ . First, since $\nabla_{\beta}\widehat{\mathcal{Q}}_{k,0} = \phi_{k,0}$ , the semi-gradient $g_{k}$ is given by (see equation (13)),
+
+$$
+g _ {k} = \left(\sum_ {t = 0} ^ {m - 1} \gamma^ {t} R _ {k, t} + \gamma^ {m} \widehat {\mathcal {Q}} _ {k, m} - \widehat {\mathcal {Q}} _ {k, 0}\right) \phi_ {k, 0}. \tag {96}
+$$
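In code, one iteration of this scheme (the semi-gradient of equation (96) followed by the projection $\Gamma$ onto $\mathcal{B}_2(0,B)$ appearing in equation (87)) could be sketched as follows; the function and argument names are illustrative, not taken from the paper:

```python
import numpy as np

def projected_td_step(beta, phi_0, rewards, phi_m, gamma, alpha, B):
    """One projected m-step semi-gradient update for a linear critic <beta, phi(.)>.

    phi_0 and phi_m are the features of the first and m-th (state, agent state,
    action) triples; rewards holds R_0, ..., R_{m-1}.
    """
    m = len(rewards)
    target = sum(gamma**t * r for t, r in enumerate(rewards)) + gamma**m * (beta @ phi_m)
    g = (target - beta @ phi_0) * phi_0  # semi-gradient, equation (96)
    beta_next = beta + alpha * g         # unprojected iterate
    norm = np.linalg.norm(beta_next)
    return beta_next if norm <= B else beta_next * (B / norm)  # projection onto the ball

# Example: a single update from beta = 0 moves beta toward the reward signal.
beta = projected_td_step(np.zeros(2), np.array([1.0, 0.0]), [1.0],
                         np.array([0.0, 1.0]), gamma=0.9, alpha=0.5, B=10.0)
assert np.allclose(beta, [0.5, 0.0])
```

The projection is what keeps $\|\beta_k\|_2 \leq B$, the property used repeatedly in the bounds below.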
+
+By conditioning on the sigma-fields $\mathfrak{G}_{k - 1}$ and $\mathfrak{F}_k$ , we have,
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \left\langle \beta_ {k} - \beta_ {*}, g _ {k} \right\rangle \mid \mathfrak {F} _ {k}, \mathfrak {G} _ {k - 1} \right] = \left(\mathbb {E} \left[ \sum_ {t = 0} ^ {m - 1} \gamma^ {t} R _ {k, t} + \gamma^ {m} \widehat {\mathcal {Q}} _ {k, m} \mid \mathfrak {F} _ {k}, \mathfrak {G} _ {k - 1} \right] - \widehat {\mathcal {Q}} _ {k, 0}\right) \left\langle \beta_ {k} - \beta_ {*}, \phi_ {k, 0} \right\rangle (97) \\ = \left(\mathbb {E} \left[ \sum_ {t = 0} ^ {m - 1} \gamma^ {t} R _ {k, t} + \gamma^ {m} \widehat {\mathcal {Q}} _ {k, m} \middle | \mathfrak {F} _ {k}, \mathfrak {G} _ {k - 1} \right] - \widehat {\mathcal {Q}} _ {k, 0}\right) \left(\widehat {\mathcal {Q}} _ {k, 0} - \widehat {\mathcal {Q}} _ {k, 0} ^ {*}\right). (98) \\ \end{array}
+$$
+
+Note that according to the Bellman operator of equation (5), we have,
+
+$$
+\mathbb {E} \left[ \sum_ {t = 0} ^ {m - 1} \gamma^ {t} R _ {k, t} \Bigg | \mathfrak {F} _ {k}, \mathfrak {G} _ {k - 1} \right] = \mathcal {Q} _ {k, 0} - \gamma^ {m} \mathbb {E} \left[ \mathcal {Q} _ {k, m} \mid \mathfrak {F} _ {k}, \mathfrak {G} _ {k - 1} \right]. \tag {99}
+$$
+
+By substituting equation (99) in equation (98), we obtain,
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \left\langle \beta_ {k} - \beta_ {*}, g _ {k} \right\rangle \mid \mathfrak {F} _ {k}, \mathfrak {G} _ {k - 1} \right] \\ = \left(\mathbb {E} \left[ \sum_ {t = 0} ^ {m - 1} \gamma^ {t} R _ {k, t} \mid \mathfrak {F} _ {k}, \mathfrak {G} _ {k - 1} \right] + \gamma^ {m} \mathbb {E} \left[ \widehat {\mathcal {Q}} _ {k, m} \mid \mathfrak {F} _ {k}, \mathfrak {G} _ {k - 1} \right] - \widehat {\mathcal {Q}} _ {k, 0}\right) \left(\widehat {\mathcal {Q}} _ {k, 0} - \widehat {\mathcal {Q}} _ {k, 0} ^ {*}\right) (100) \\ = \left(\mathcal {Q} _ {k, 0} - \gamma^ {m} \mathbb {E} \left[ \mathcal {Q} _ {k, m} \mid \mathfrak {F} _ {k}, \mathfrak {G} _ {k - 1} \right] + \gamma^ {m} \mathbb {E} \left[ \widehat {\mathcal {Q}} _ {k, m} \mid \mathfrak {F} _ {k}, \mathfrak {G} _ {k - 1} \right] - \widehat {\mathcal {Q}} _ {k, 0}\right) \left(\widehat {\mathcal {Q}} _ {k, 0} - \widehat {\mathcal {Q}} _ {k, 0} ^ {*}\right) (101) \\ = \left(\left(\mathcal {Q} _ {k, 0} - \widehat {\mathcal {Q}} _ {k, 0}\right) - \gamma^ {m} \mathbb {E} \left[ \mathcal {Q} _ {k, m} - \widehat {\mathcal {Q}} _ {k, m} \mid \mathfrak {F} _ {k}, \mathfrak {G} _ {k - 1} \right]\right) \left(\left(\widehat {\mathcal {Q}} _ {k, 0} - \mathcal {Q} _ {k, 0}\right) + \left(\mathcal {Q} _ {k, 0} - \widehat {\mathcal {Q}} _ {k, 0} ^ {*}\right)\right) (102) \\ = - \left(\mathcal {Q} _ {k, 0} - \widehat {\mathcal {Q}} _ {k, 0}\right) ^ {2} + \left(\mathcal {Q} _ {k, 0} - \widehat {\mathcal {Q}} _ {k, 0}\right) \left(\mathcal {Q} _ {k, 0} - \widehat {\mathcal {Q}} _ {k, 0} ^ {*}\right) \\ + \gamma^ {m} \mathbb {E} \left[ \widehat {\mathcal {Q}} _ {k, m} - \mathcal {Q} _ {k, m} \mid \mathfrak {F} _ {k}, \mathfrak {G} _ {k - 1} \right] (\widehat {\mathcal {Q}} _ {k, 0} - \mathcal {Q} _ {k, 0}) + \gamma^ {m} \mathbb {E} \left[ \widehat {\mathcal {Q}} _ {k, m} - \mathcal {Q} _ {k, m} \mid \mathfrak {F} _ {k}, \mathfrak {G} _ {k - 1} \right] (\mathcal {Q} _ {k, 0} - \widehat {\mathcal {Q}} _ 
{k, 0} ^ {*}). (103) \\ \end{array}
+$$
+
+Let us now take the expectation of (103) over $\mathfrak{F}_k$ given $\mathfrak{G}_{k - 1}$ , for each term separately,
+
+- For the first term, we have,
+
+$$
+\mathbb {E} \left[ - \left(\mathcal {Q} _ {k, 0} - \widehat {\mathcal {Q}} _ {k, 0}\right) ^ {2} \mid \mathfrak {G} _ {k - 1} \right] = - \left\| \mathcal {Q} - \widehat {\mathcal {Q}} _ {k} \right\| _ {d} ^ {2}. \tag {104}
+$$
+
+- For the second term, we have, using the Cauchy-Schwarz inequality,
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \left(\mathcal {Q} _ {k, 0} - \widehat {\mathcal {Q}} _ {k, 0}\right) \left(\mathcal {Q} _ {k, 0} - \widehat {\mathcal {Q}} _ {k, 0} ^ {*}\right) \Big | \mathfrak {G} _ {k - 1} \right] = \left\langle \mathcal {Q} - \widehat {\mathcal {Q}} _ {k}, \mathcal {Q} - \widehat {\mathcal {Q}} ^ {*} \right\rangle _ {d} (105) \\ \leq \left\| \mathcal {Q} - \widehat {\mathcal {Q}} _ {k} \right\| _ {d} \left\| \mathcal {Q} - \widehat {\mathcal {Q}} ^ {*} \right\| _ {d}. (106) \\ \end{array}
+$$
+
+Before proceeding to the third and fourth terms, let us notice that,
+
+$$
+\mathbb {E} \left[ \widehat {\mathcal {Q}} _ {k, m} - \mathcal {Q} _ {k, m} \mid \mathfrak {G} _ {k - 1} \right] = \sum_ {s, z, a} d _ {m} (s, z, a) \left(\widehat {\mathcal {Q}} _ {k} (s, z, a) - \mathcal {Q} (s, z, a)\right) \tag {107}
+$$
+
+$$
+= \sum_ {s, z, a} (d (s, z, a) + d _ {m} (s, z, a) - d (s, z, a)) (\widehat {\mathcal {Q}} _ {k} (s, z, a) - \mathcal {Q} (s, z, a)). \tag {108}
+$$
+
+Remembering that $\sup_{s,z,a}|\widehat{\mathcal{Q}}_k(s,z,a)|\leq B$ and $\sup_{s,z,a}|\mathcal{Q}(s,z,a)|\leq \frac{1}{1 - \gamma}$ , we have,
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \left(\widehat {\mathcal {Q}} _ {k, m} - \mathcal {Q} _ {k, m}\right) ^ {2} \mid \mathfrak {G} _ {k - 1} \right] = \sum_ {s, z, a} (d (s, z, a) + d _ {m} (s, z, a) - d (s, z, a)) (\widehat {\mathcal {Q}} _ {k} (s, z, a) - \mathcal {Q} (s, z, a)) ^ {2} (109) \\ = \left\| \widehat {\mathcal {Q}} _ {k} - \mathcal {Q} \right\| _ {d} ^ {2} + \sum_ {s, z, a} \left(d _ {m} (s, z, a) - d (s, z, a)\right) \left(\widehat {\mathcal {Q}} _ {k} (s, z, a) - \mathcal {Q} (s, z, a)\right) ^ {2} (110) \\ \leq \left\| \widehat {\mathcal {Q}} _ {k} - \mathcal {Q} \right\| _ {d} ^ {2} + \| d _ {m} - d \| _ {\mathrm {T V}} \sup _ {s, z, a} \left(\widehat {\mathcal {Q}} _ {k} (s, z, a) - \mathcal {Q} (s, z, a)\right) ^ {2} (111) \\ \leq \left\| \widehat {\mathcal {Q}} _ {k} - \mathcal {Q} \right\| _ {d} ^ {2} + \left\| d _ {m} - d \right\| _ {\mathrm {T V}} \left(B + \frac {1}{1 - \gamma}\right) ^ {2}, (112) \\ \end{array}
+$$
+
+where $\left(B + \frac{1}{1 - \gamma}\right)$ is an upper bound on $\sup_{s,z,a} \left|\widehat{\mathcal{Q}}_k(s,z,a) - \mathcal{Q}(s,z,a)\right|$ . Now, using Jensen's inequality and the subadditivity of the square root, we have,
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \widehat {\mathcal {Q}} _ {k, m} - \mathcal {Q} _ {k, m} \mid \mathfrak {G} _ {k - 1} \right] \leq \mathbb {E} \left[ \sqrt {\left(\widehat {\mathcal {Q}} _ {k , m} - \mathcal {Q} _ {k , m}\right) ^ {2}} \mid \mathfrak {G} _ {k - 1} \right] (113) \\ \leq \sqrt {\mathbb {E} \left[ \left(\widehat {\mathcal {Q}} _ {k , m} - \mathcal {Q} _ {k , m}\right) ^ {2} \mid \mathfrak {G} _ {k - 1} \right]} (114) \\ \leq \left\| \widehat {\mathcal {Q}} _ {k} - \mathcal {Q} \right\| _ {d} + \left(B + \frac {1}{1 - \gamma}\right) \sqrt {\| d _ {m} - d \| _ {\mathrm {T V}}}. (115) \\ \end{array}
+$$
+
+With this, we proceed to the third and fourth terms (without the multiplier $\gamma^m$ ) and show the following.
+
+- For the third term, we have by upper bounding $|\widehat{\mathcal{Q}}_{k,0} - \mathcal{Q}_{k,0}|$ by $B + \frac{1}{1 - \gamma}$ ,
+
+$$
+\mathbb {E} \left[ (\widehat {\mathcal {Q}} _ {k, m} - \mathcal {Q} _ {k, m}) (\widehat {\mathcal {Q}} _ {k, 0} - \mathcal {Q} _ {k, 0}) \mid \mathfrak {G} _ {k - 1} \right] \leq \left\| \widehat {\mathcal {Q}} _ {k} - \mathcal {Q} \right\| _ {d} ^ {2} + \left(B + \frac {1}{1 - \gamma}\right) ^ {2} \sqrt {\| d _ {m} - d \| _ {\mathrm {T V}}}. \tag {116}
+$$
+
+- For the fourth term, we have by upper bounding $\left|Q_{k,0} - \widehat{Q}_{k,0}^{*}\right|$ by $\frac{1}{1 - \gamma} + B$ ,
+
+$$
+\mathbb {E} \left[ (\widehat {\mathcal {Q}} _ {k, m} - \mathcal {Q} _ {k, m}) (\mathcal {Q} _ {k, 0} - \widehat {\mathcal {Q}} _ {k, 0} ^ {*}) \Big | \mathfrak {G} _ {k - 1} \right] \leq \left\| \widehat {\mathcal {Q}} _ {k} - \mathcal {Q} \right\| _ {d} \left\| \mathcal {Q} - \widehat {\mathcal {Q}} ^ {*} \right\| _ {d} + \left(B + \frac {1}{1 - \gamma}\right) ^ {2} \sqrt {\| d _ {m} - d \| _ {\mathrm {T V}}}. \tag {117}
+$$
+
+By taking expectation over $\mathfrak{G}_{k - 1}$ of the four terms and using the previous upper bounds, we obtain,
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \left\langle \beta_ {k} - \beta_ {*}, g _ {k} \right\rangle \right] = \mathbb {E} \left[ \mathbb {E} \left[ \left\langle \beta_ {k} - \beta_ {*}, g _ {k} \right\rangle \mid \mathfrak {G} _ {k - 1} \right] \right] (118) \\ \leq - (1 - \gamma^ {m}) \mathbb {E} \left[ \left\| \widehat {\mathcal {Q}} _ {k} - \mathcal {Q} \right\| _ {d} ^ {2} \right] + (1 + \gamma^ {m}) \mathbb {E} \left[ \left\| \widehat {\mathcal {Q}} _ {k} - \mathcal {Q} \right\| _ {d} \right] \left\| \widehat {\mathcal {Q}} ^ {*} - \mathcal {Q} \right\| _ {d} \\ + 2 \gamma^ {m} \left(B + \frac {1}{1 - \gamma}\right) ^ {2} \sqrt {\| d _ {m} - d \| _ {\mathrm {T V}}} (119) \\ = - \left(1 - \gamma^ {m}\right) \Delta_ {k} ^ {2} + \left(1 + \gamma^ {m}\right) \Delta_ {k} \left\| \widehat {\mathcal {Q}} ^ {*} - \mathcal {Q} \right\| _ {d} + 2 \gamma^ {m} \left(B + \frac {1}{1 - \gamma}\right) ^ {2} \sqrt {\left\| d _ {m} - d \right\| _ {\mathrm {T V}}}. (120) \\ \end{array}
+$$
+
+Let us now focus on the second term of equation (95), $\mathbb{E}\left[\| g_k\| _2^2 \mid \mathfrak{G}_{k - 1}\right]$ . Since $\sup_{s,z,a}\| \phi (s,z,a)\| _2\leq 1$ and $\| \beta_k\| _2\leq B$ for all $k\geq 0$ , and $R_{k,t}\leq 1$ for all $t < m$ , the norm of the semi-gradient of equation (96) is bounded as follows,
+
+$$
+\sup _ {k \geq 0} \| g _ {k} \| _ {2} \leq \frac {1 - \gamma^ {m}}{1 - \gamma} + (1 + \gamma^ {m}) B \leq \frac {1}{1 - \gamma} + 2 B. \tag {121}
+$$
+
+We obtain, for the second term of equation (95),
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \| g _ {k} \| _ {2} ^ {2} \right] = \mathbb {E} \left[ \mathbb {E} \left[ \| g _ {k} \| _ {2} ^ {2} \mid \mathfrak {G} _ {k - 1} \right] \right] (122) \\ \leq \left(\frac {1}{1 - \gamma} + 2 B\right) ^ {2}. (123) \\ \end{array}
+$$
+
+By substituting equations (120) and (123) into the Lyapunov drift of equation (95), we obtain,
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \mathcal {L} \left(\beta_ {k + 1}\right) - \mathcal {L} \left(\beta_ {k}\right) \right] \leq - 2 \alpha \left(1 - \gamma^ {m}\right) \Delta_ {k} ^ {2} + 2 \alpha \left(1 + \gamma^ {m}\right) \Delta_ {k} \left\| \widehat {\mathcal {Q}} ^ {*} - \mathcal {Q} \right\| _ {d} + \alpha^ {2} \left(\frac {1}{1 - \gamma} + 2 B\right) ^ {2} \\ + 4 \alpha \gamma^ {m} \left(B + \frac {1}{1 - \gamma}\right) ^ {2} \sqrt {\| d _ {m} - d \| _ {\mathrm {T V}}}. \tag {124} \\ \end{array}
+$$
+
+By setting $l = \frac{1 + \gamma^m}{2(1 - \gamma^m)}\min_{f\in \mathcal{F}_\phi^B}\| f - \mathcal{Q}\| _d$ , we can write,
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \mathcal {L} \left(\beta_ {k + 1}\right) - \mathcal {L} \left(\beta_ {k}\right) \right] \leq - 2 \alpha \left(1 - \gamma^ {m}\right) \left(\Delta_ {k} ^ {2} - 2 l \Delta_ {k}\right) + \alpha^ {2} \left(\frac {1}{1 - \gamma} + 2 B\right) ^ {2} \\ + 4 \alpha \gamma^ {m} \left(B + \frac {1}{1 - \gamma}\right) ^ {2} \sqrt {\| d _ {m} - d \| _ {\mathrm {T V}}} (125) \\ = - 2 \alpha (1 - \gamma^ {m}) \left(\Delta_ {k} ^ {2} - 2 l \Delta_ {k} + l ^ {2}\right) + 2 \alpha (1 - \gamma^ {m}) l ^ {2} + \alpha^ {2} \left(\frac {1}{1 - \gamma} + 2 B\right) ^ {2} \\ + 4 \alpha \gamma^ {m} \left(B + \frac {1}{1 - \gamma}\right) ^ {2} \sqrt {\| d _ {m} - d \| _ {\mathrm {T V}}} (126) \\ = - 2 \alpha (1 - \gamma^ {m}) (\Delta_ {k} - l) ^ {2} + 2 \alpha (1 - \gamma^ {m}) l ^ {2} + \alpha^ {2} \left(\frac {1}{1 - \gamma} + 2 B\right) ^ {2} \\ + 4 \alpha \gamma^ {m} \left(B + \frac {1}{1 - \gamma}\right) ^ {2} \sqrt {\| d _ {m} - d \| _ {\mathrm {T V}}}. (127) \\ \end{array}
+$$
+
+By summing all Lyapunov drifts $\sum_{k=0}^{K-1} \mathbb{E}\left[\mathcal{L}\left(\beta_{k+1}\right) - \mathcal{L}\left(\beta_k\right)\right]$ , we get,
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \mathcal {L} \left(\beta_ {K}\right) - \mathcal {L} \left(\beta_ {0}\right) \right] \leq - 2 \alpha \left(1 - \gamma^ {m}\right) \sum_ {k = 0} ^ {K - 1} \left(\Delta_ {k} - l\right) ^ {2} + 2 \alpha K \left(1 - \gamma^ {m}\right) l ^ {2} + \alpha^ {2} K \left(\frac {1}{1 - \gamma} + 2 B\right) ^ {2} \\ + 4 \alpha K \gamma^ {m} \left(B + \frac {1}{1 - \gamma}\right) ^ {2} \sqrt {\| d _ {m} - d \| _ {\mathrm {T V}}}. \tag {128} \\ \end{array}
+$$
+
+By rearranging and dividing by $2\alpha K(1 - \gamma^{m})$ , we obtain, after dropping the nonnegative term $\mathcal{L}(\beta_K)$ ,
+
+$$
+\begin{array}{l} \frac {1}{K} \sum_ {k = 0} ^ {K - 1} (\Delta_ {k} - l) ^ {2} \leq \frac {\mathbb {E} [ \mathcal {L} (\beta_ {0}) - \mathcal {L} (\beta_ {K}) ]}{2 \alpha K (1 - \gamma^ {m})} + l ^ {2} + \frac {\alpha}{2 (1 - \gamma^ {m})} \left(\frac {1}{1 - \gamma} + 2 B\right) ^ {2} \\ + \frac {2 \gamma^ {m}}{1 - \gamma^ {m}} \left(B + \frac {1}{1 - \gamma}\right) ^ {2} \sqrt {\| d _ {m} - d \| _ {\mathrm {T V}}} \tag {129} \\ \leq \frac {\| \beta_ {0} - \beta_ {*} \| _ {2} ^ {2}}{2 \alpha K (1 - \gamma^ {m})} + l ^ {2} + \frac {\alpha}{2 (1 - \gamma^ {m})} \left(\frac {1}{1 - \gamma} + 2 B\right) ^ {2} \\ + \frac {2 \gamma^ {m}}{1 - \gamma^ {m}} \left(B + \frac {1}{1 - \gamma}\right) ^ {2} \sqrt {\| d _ {m} - d \| _ {\mathrm {T V}}}. \tag {130} \\ \end{array}
+$$
+
+The bound obtained through this Lyapunov drift summation can be used to further develop equation (85), using the subadditivity of the square root,
+
+$$
+\begin{array}{l} \sqrt {\mathbb {E} \left[ \left\| \mathcal {Q} - \bar {\mathcal {Q}} \right\| _ {d} ^ {2} \right]} \leq \sqrt {\frac {1}{K} \sum_ {k = 0} ^ {K - 1} \left(\Delta_ {k} - l\right) ^ {2}} + l \tag {131} \\ \leq \frac {\left\| \beta_ {0} - \beta_ {*} \right\| _ {2}}{\sqrt {2 \alpha K (1 - \gamma^ {m})}} + 2 l + \sqrt {\frac {\alpha}{2 (1 - \gamma^ {m})}} \left(\frac {1}{1 - \gamma} + 2 B\right) \\ + \left(B + \frac {1}{1 - \gamma}\right) \sqrt {\frac {2 \gamma^ {m}}{1 - \gamma^ {m}} \sqrt {\| d _ {m} - d \| _ {\mathrm {T V}}}} \tag {132} \\ = \frac {\left\| \beta_ {0} - \beta_ {*} \right\| _ {2}}{\sqrt {2 \alpha K (1 - \gamma^ {m})}} + \frac {1 + \gamma^ {m}}{1 - \gamma^ {m}} \min _ {f \in \mathcal {F} _ {\phi} ^ {B}} \| f - \mathcal {Q} \| _ {d} + \sqrt {\frac {\alpha}{2 (1 - \gamma^ {m})}} \left(\frac {1}{1 - \gamma} + 2 B\right) \\ + \left(B + \frac {1}{1 - \gamma}\right) \sqrt {\frac {2 \gamma^ {m}}{1 - \gamma^ {m}} \sqrt {\| d _ {m} - d \| _ {\mathrm {T V}}}}. \tag {133} \\ \end{array}
+$$
+
+By setting $\alpha = \frac{1}{\sqrt{K}}$ and upper bounding $\| \beta_0 - \beta_*\|_2$ by $2B$ , we get,
+
+$$
+\begin{array}{l} \sqrt {\mathbb {E} \left[ \left\| \mathcal {Q} - \bar {\mathcal {Q}} \right\| _ {d} ^ {2} \right]} \leq \frac {2 B}{\sqrt {2 \sqrt {K} (1 - \gamma^ {m})}} + \frac {1 + \gamma^ {m}}{1 - \gamma^ {m}} \min _ {f \in \mathcal {F} _ {\phi} ^ {B}} \| f - \mathcal {Q} \| _ {d} + \frac {1}{\sqrt {2 \sqrt {K} (1 - \gamma^ {m})}} \left(\frac {1}{1 - \gamma} + 2 B\right) \\ + \left(B + \frac {1}{1 - \gamma}\right) \sqrt {\frac {2 \gamma^ {m}}{1 - \gamma^ {m}} \sqrt {\| d _ {m} - d \| _ {\mathrm {T V}}}} \tag {134} \\ = \sqrt {\frac {4 B ^ {2} + \left(\frac {1}{1 - \gamma} + 2 B\right) ^ {2}}{2 \sqrt {K} (1 - \gamma^ {m})}} + \frac {1 + \gamma^ {m}}{1 - \gamma^ {m}} \min _ {f \in \mathcal {F} _ {\phi} ^ {B}} \| f - \mathcal {Q} \| _ {d} \\ + \left(B + \frac {1}{1 - \gamma}\right) \sqrt {\frac {2 \gamma^ {m}}{1 - \gamma^ {m}} \sqrt {\| d _ {m} - d \| _ {\mathrm {T V}}}}. \tag {135} \\ \end{array}
+$$
+
+This concludes the proof.
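+As an illustrative aside (not part of the paper), the procedure analyzed by this bound can be sketched in a few lines: projected $m$-step temporal difference learning with linear features and step size $\alpha = 1/\sqrt{K}$. The sketch below runs on a small Markov reward process with tabular features rather than the paper's agent-state Q-learning setting, and averages the second half of the iterates to reduce transient bias; the function name and these simplifications are our own assumptions.
+
+```python
+import numpy as np
+
+def mstep_td(P, R, phi, gamma, m, alpha, K, B, rng):
+    """m-step TD with linear features and projection onto the B-ball.
+
+    P: (S, S) transition matrix of a Markov reward process, R: (S,)
+    expected rewards, phi: (S, d) feature matrix. Returns the average
+    of the second half of the iterates (to reduce transient bias).
+    """
+    S, dim = phi.shape
+    beta = np.zeros(dim)
+    iterates = []
+    s = rng.integers(S)
+    for _ in range(K):
+        s0 = s
+        ret, disc = 0.0, 1.0
+        for _ in range(m):  # roll out m steps, accumulating discounted reward
+            ret += disc * R[s]
+            disc *= gamma
+            s = rng.choice(S, p=P[s])
+        # m-step TD error: bootstrap with the current estimate at the m-th state
+        td = ret + disc * phi[s] @ beta - phi[s0] @ beta
+        beta = beta + alpha * td * phi[s0]
+        norm = np.linalg.norm(beta)
+        if norm > B:  # projection keeps ||beta||_2 <= B, as the analysis assumes
+            beta *= B / norm
+        iterates.append(beta.copy())
+    return np.mean(iterates[K // 2:], axis=0)
+```
+
+With tabular features the approximation and distribution-shift terms vanish, so the averaged iterate should approach the true value function at the $K^{-1/4}$ rate suggested by the bound.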
+
+# D. Proof of the Finite-Time Bound for the Symmetric Critic
+
+Let us first find an upper bound on the distance $\left\| Q^{\pi} - \widetilde{Q}^{\pi}\right\|_{d}$ between the Q-function $Q^{\pi}$ and the fixed point $\widetilde{Q}^{\pi}$ .
+
+Lemma D.1 (Upper bound on the aliasing bias (Cayci et al., 2024)). For any agent-state policy $\pi \in \Pi_{\mathcal{M}}$ , and any $m \in \mathbb{N}$ , we have,
+
+$$
+\left\| Q ^ {\pi} - \widetilde {Q} ^ {\pi} \right\| _ {d} \leq \frac {1 - \gamma^ {m}}{1 - \gamma} \left\| \mathbb {E} ^ {\pi} \left[ \sum_ {k = 0} ^ {\infty} \gamma^ {k m} \left\| \hat {b} _ {k m} - b _ {k m} \right\| _ {\mathrm {T V}} \,\middle|\, Z _ {0} = \cdot \right] \right\| _ {d}. \tag {136}
+$$
+
+Proof. The proof is similar to that of Cayci et al. (2024). Let us first define the expected $m$ -step return,
+
+$$
+\bar {r} _ {m} (s, z, a) = \mathbb {E} ^ {\pi} \left[ \sum_ {k = 0} ^ {m - 1} \gamma^ {k} R _ {k} \mid S _ {0} = s, Z _ {0} = z, A _ {0} = a \right]. \tag {137}
+$$
+
+Using the expected $m$ -step return and the definition of the belief $b$ in equation (33) and approximate belief $\hat{b}$ in equation (34), it can be noted that,
+
+$$
+Q ^ {\pi} (z, a) = \mathbb {E} ^ {\pi} \left[ \sum_ {k = 0} ^ {\infty} \gamma^ {k m} \sum_ {s \in \mathcal {S}} b _ {k m} (s | H _ {k m}) \bar {r} _ {m} (s, Z _ {k m}, A _ {k m}) \,\middle|\, Z _ {0} = z, A _ {0} = a \right] \tag {138}
+$$
+
+$$
+\widetilde {Q} ^ {\pi} (z, a) = \mathbb {E} ^ {\pi} \left[ \sum_ {k = 0} ^ {\infty} \gamma^ {k m} \sum_ {s \in \mathcal {S}} \hat {b} _ {k m} (s | Z _ {k m}) \bar {r} _ {m} (s, Z _ {k m}, A _ {k m}) \,\middle|\, Z _ {0} = z, A _ {0} = a \right]. \tag {139}
+$$
+
+Indeed, bootstrapping at timestep $m$ based on the agent state only is equivalent to considering the distribution of future states to be $\hat{b}_m(\cdot | Z_m)$ instead of $b_m(\cdot | H_m)$ . As a consequence, we have,
+
+$$
+\begin{array}{l} \left| Q ^ {\pi} (z, a) - \widetilde {Q} ^ {\pi} (z, a) \right| = \left| \mathbb {E} ^ {\pi} \left[ \sum_ {k = 0} ^ {\infty} \gamma^ {k m} \sum_ {s \in \mathcal {S}} \left(b _ {k m} (s | H _ {k m}) - \hat {b} _ {k m} (s | Z _ {k m})\right) \bar {r} _ {m} (s, Z _ {k m}, A _ {k m}) \,\middle|\, Z _ {0} = z, A _ {0} = a \right] \right| \tag {140} \\ \leq \mathbb {E} ^ {\pi} \left[ \sum_ {k = 0} ^ {\infty} \gamma^ {k m} \sup _ {s \in \mathcal {S}} \left| b _ {k m} (s | H _ {k m}) - \hat {b} _ {k m} (s | Z _ {k m}) \right| \sup _ {s \in \mathcal {S}} \left| \bar {r} _ {m} (s, Z _ {k m}, A _ {k m}) \right| \,\middle|\, Z _ {0} = z, A _ {0} = a \right] \tag {141} \\ \leq \mathbb {E} ^ {\pi} \left[ \sum_ {k = 0} ^ {\infty} \gamma^ {k m} \sup _ {s \in \mathcal {S}} \left| b _ {k m} (s | H _ {k m}) - \hat {b} _ {k m} (s | Z _ {k m}) \right| \frac {1 - \gamma^ {m}}{1 - \gamma} \,\middle|\, Z _ {0} = z, A _ {0} = a \right] \tag {142} \\ = \frac {1 - \gamma^ {m}}{1 - \gamma} \mathbb {E} ^ {\pi} \left[ \sum_ {k = 0} ^ {\infty} \gamma^ {k m} \sup _ {s \in \mathcal {S}} \left| b _ {k m} (s | H _ {k m}) - \hat {b} _ {k m} (s | Z _ {k m}) \right| \,\middle|\, Z _ {0} = z, A _ {0} = a \right] \tag {143} \\ \leq \frac {1 - \gamma^ {m}}{1 - \gamma} \mathbb {E} ^ {\pi} \left[ \sum_ {k = 0} ^ {\infty} \gamma^ {k m} \left\| b _ {k m} (\cdot | H _ {k m}) - \hat {b} _ {k m} (\cdot | Z _ {k m}) \right\| _ {\mathrm {T V}} \,\middle|\, Z _ {0} = z, A _ {0} = a \right] \tag {144} \\ \leq \frac {1 - \gamma^ {m}}{1 - \gamma} \mathbb {E} ^ {\pi} \left[ \sum_ {k = 0} ^ {\infty} \gamma^ {k m} \left\| b _ {k m} - \hat {b} _ {k m} \right\| _ {\mathrm {T V}} \,\middle|\, Z _ {0} = z, A _ {0} = a \right], \tag {145} \\ \end{array}
+$$
+
+where we use $b_{km}$ and $\hat{b}_{km}$ to denote the random variables $b_{km}(\cdot |H_{km})$ and $\hat{b}_{km}(\cdot |Z_{km})$ , respectively. This shows that the aliasing bias can be bounded proportionally to the distance between the true belief and the approximate belief at the bootstrapping timesteps. Then, we obtain,
+
+$$
+\left\| Q ^ {\pi} - \widetilde {Q} ^ {\pi} \right\| _ {d} \leq \frac {1 - \gamma^ {m}}{1 - \gamma} \left\| \mathbb {E} ^ {\pi} \left[ \sum_ {k = 0} ^ {\infty} \gamma^ {k m} \left\| \hat {b} _ {k m} - b _ {k m} \right\| _ {\mathrm {T V}} \,\middle|\, Z _ {0} = \cdot \right] \right\| _ {d}. \tag {146}
+$$
+
+This concludes the proof.
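+The mechanism behind Lemma D.1 is elementary: a belief-weighted average of bounded rewards can change by at most a total-variation distance times the reward bound. The two inequalities used, $\sup_s |p(s) - q(s)| \leq \|p - q\|_{\mathrm{TV}}$ and $|\sum_s (p(s) - q(s)) f(s)| \leq 2\|p - q\|_{\mathrm{TV}} \sup_s |f(s)|$ , can be checked numerically with the following self-contained sketch (our own illustration; the function names are not from the paper).
+
+```python
+import numpy as np
+
+def tv(p, q):
+    """Total variation distance: max_A |p(A) - q(A)| = 0.5 * sum_s |p(s) - q(s)|."""
+    return 0.5 * np.abs(p - q).sum()
+
+def weighted_gap(p, q, f):
+    """|sum_s (p(s) - q(s)) f(s)|, the belief-weighted gap bounded in the proof."""
+    return abs(np.dot(p - q, f))
+```
+
+Sampling random belief pairs from a Dirichlet distribution and bounded "reward" vectors confirms both inequalities on every draw.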
+
+Using Lemma D.1, we can prove Theorem 4, which is recalled below. Note that some of the notation used in Appendix C is reused here with a different meaning.
+
+Theorem 4 (Finite-time bound for symmetric $m$ -step temporal difference learning (Cayci et al., 2024)). For any agent-state policy $\pi \in \Pi_{\mathcal{M}}$ , and any $m \in \mathbb{N}$ , we have for Algorithm 1 with $\alpha = \frac{1}{\sqrt{K}}$ , and arbitrary $B > 0$ ,
+
+$$
+\sqrt {\mathbb {E} \left[ \left\| Q ^ {\pi} - \bar {Q} ^ {\pi} \right\| _ {d} ^ {2} \right]} \leq \varepsilon_ {\mathrm {td}} + \varepsilon_ {\mathrm {app}} + \varepsilon_ {\mathrm {shift}} + \varepsilon_ {\mathrm {alias}}, \tag {35}
+$$
+
+where the temporal difference learning, function approximation, distribution shift, and aliasing terms are given by,
+
+$$
+\varepsilon_ {\mathrm {td}} = \sqrt {\frac {4 B ^ {2} + \left(\frac {1}{1 - \gamma} + 2 B\right) ^ {2}}{2 \sqrt {K} \left(1 - \gamma^ {m}\right)}} \tag {36}
+$$
+
+$$
+\varepsilon_ {\mathrm {app}} = \frac {1 + \gamma^ {m}}{1 - \gamma^ {m}} \min _ {f \in \mathcal {F} _ {\chi} ^ {B}} \| f - Q ^ {\pi} \| _ {d} \tag {37}
+$$
+
+$$
+\varepsilon_ {\mathrm {shift}} = \left(B + \frac {1}{1 - \gamma}\right) \sqrt {\frac {2 \gamma^ {m}}{1 - \gamma^ {m}} \sqrt {\| d _ {m} - d \| _ {\mathrm {T V}}}} \tag {38}
+$$
+
+$$
+\varepsilon_ {\mathrm {alias}} = \frac {2}{1 - \gamma} \left\| \mathbb {E} ^ {\pi} \left[ \sum_ {k = 0} ^ {\infty} \gamma^ {k m} \left\| \hat {b} _ {k m} - b _ {k m} \right\| _ {\mathrm {T V}} \,\middle|\, Z _ {0} = \cdot \right] \right\| _ {d}, \tag {39}
+$$
+
+with $d(z, a) = \sum_{s \in S} d^{\pi}(s, z) \pi(a|z)$ the sampling distribution, and $d_m(z, a) = \sum_{s \in S} d_m^{\pi}(s, z) \pi(a|z)$ the bootstrapping distribution.
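+To get a feel for how the first three error terms trade off against $\gamma$ , $m$ , and $K$ , they can be evaluated numerically. The following is our own sketch, not code from the paper: `approx_err` stands in for $\min_{f} \| f - Q^{\pi}\|_d$ and `tv_shift` for $\|d_m - d\|_{\mathrm{TV}}$ , both treated as fixed inputs even though the latter in fact depends on $m$ ; the aliasing term is omitted since it depends on the belief process.
+
+```python
+import math
+
+def bound_terms(gamma, m, K, B, approx_err, tv_shift):
+    """Evaluate eps_td, eps_app, and eps_shift from the finite-time bound."""
+    gm = gamma ** m
+    # eps_td: temporal difference learning term, shrinking at the K^(-1/4) rate
+    eps_td = math.sqrt((4 * B ** 2 + (1 / (1 - gamma) + 2 * B) ** 2)
+                       / (2 * math.sqrt(K) * (1 - gm)))
+    # eps_app: function approximation term with prefactor (1 + gm) / (1 - gm)
+    eps_app = (1 + gm) / (1 - gm) * approx_err
+    # eps_shift: distribution shift term, vanishing as gamma^m -> 0
+    eps_shift = (B + 1 / (1 - gamma)) * math.sqrt(
+        2 * gm / (1 - gm) * math.sqrt(tv_shift))
+    return eps_td, eps_app, eps_shift
+```
+
+For fixed inputs, increasing $m$ shrinks all three evaluated terms, while increasing $K$ shrinks only $\varepsilon_{\mathrm{td}}$ ; the actual trade-off in $m$ enters through the hidden dependence of $\|d_m - d\|_{\mathrm{TV}}$ and the aliasing term on $m$ .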
+
+Proof. To ease notation as for the proof of Theorem 3 in Appendix C, we use $Q$ as a shorthand for $Q^{\pi}$ , $\widehat{Q}^{*}$ as a shorthand for $\widehat{Q}_{*}^{\pi}$ , $\widetilde{Q}$ as a shorthand for $\widetilde{Q}^{\pi}$ , $\overline{Q}$ as a shorthand for $\overline{Q}^{\pi}$ and $\widehat{Q}_k$ as a shorthand for $\widehat{Q}_{\beta_k}^{\pi}$ , where the subscripts and superscripts remain implicit but are assumed clear from context. When evaluating the Q-functions, we go one step further by using $Q_{k,i}$ to denote $Q(Z_{k,i},A_{k,i})$ , $\widehat{Q}_{k,i}^{*}$ to denote $\widehat{Q}^{*}(Z_{k,i},A_{k,i})$ , $\widetilde{Q}_{k,i}$ to denote $\widetilde{Q}(Z_{k,i},A_{k,i})$ and $\widehat{Q}_{k,i}$ to denote $\widehat{Q}_k(Z_{k,i},A_{k,i})$ , and $\chi_{k,i}$ to denote $\chi (Z_{k,i},A_{k,i})$ . In addition, we define $d$ as a shorthand for $d^{\pi}\otimes \pi$ , such that $d(z,a) = d^{\pi}(z)\pi (a|z)$ , and $d_m$ as a shorthand for $d_m^\pi \otimes \pi$ , such that $d_m(z,a) = d_m^\pi (z)\pi (a|z)$ . Using the triangle inequality and the subadditivity of the square root, we have,
+
+$$
+\begin{array}{l} \sqrt {\mathbb {E} \left[ \left\| Q - \bar {Q} \right\| _ {d} ^ {2} \right]} \leq \sqrt {\mathbb {E} \left[ \left\| Q - \widetilde {Q} \right\| _ {d} ^ {2} \right] + \mathbb {E} \left[ \left\| \widetilde {Q} - \bar {Q} \right\| _ {d} ^ {2} \right]} \tag {147} \\ \leq \sqrt {\mathbb {E} \left[ \left\| Q - \widetilde {Q} \right\| _ {d} ^ {2} \right]} + \sqrt {\mathbb {E} \left[ \left\| \widetilde {Q} - \bar {Q} \right\| _ {d} ^ {2} \right]} \tag {148} \\ \leq \left\| Q - \widetilde {Q} \right\| _ {d} + \sqrt {\mathbb {E} \left[ \left\| \widetilde {Q} - \bar {Q} \right\| _ {d} ^ {2} \right]}. \tag {149} \\ \end{array}
+$$
+
+We can bound the second term in equation (149) using similar steps as in the proof for the asymmetric finite-time bound (see Appendix C). We obtain,
+
+$$
+\sqrt {\mathbb {E} \left[ \left\| \widetilde {Q} - \bar {Q} \right\| _ {d} ^ {2} \right]} \leq \sqrt {\frac {1}{K} \sum_ {k = 0} ^ {K - 1} \left(\Delta_ {k} - l\right) ^ {2}} + l, \tag {150}
+$$
+
+where $l$ is arbitrary, and $\Delta_{k}$ is defined as,
+
+$$
+\Delta_ {k} = \sqrt {\mathbb {E} \left[ \left\| \widetilde {Q} - \widehat {Q} _ {k} \right\| _ {d} ^ {2} \right]} = \sqrt {\mathbb {E} \left[ \left\| \widetilde {Q} (\cdot) - \langle \beta_ {k} , \chi (\cdot) \rangle \right\| _ {d} ^ {2} \right]}. \tag {151}
+$$
+
+Similarly to the asymmetric case (see Appendix C), we consider the Lyapunov function $\mathcal{L}(\beta) = \| \beta_{*} - \beta \|_{2}^{2}$ in order to find a bound on $\frac{1}{K}\sum_{k=0}^{K-1} (\Delta_k - l)^2$ . We define $\mathfrak{G}_k = \sigma(Z_{i,j}, A_{i,j}, i \leq k, j \leq m)$ and $\mathfrak{F}_k = \sigma(Z_{k,0}, A_{k,0})$ . As in the asymmetric case (see Appendix C), we obtain, using the law of total expectation,
+
+$$
+\mathbb {E} \left[ \mathcal {L} \left(\beta_ {k + 1}\right) - \mathcal {L} \left(\beta_ {k}\right) \right] \leq 2 \alpha \mathbb {E} \left[ \mathbb {E} \left[ \left\langle \beta_ {k} - \beta_ {*}, g _ {k} \right\rangle \mid \mathfrak {G} _ {k - 1} \right] \right] + \alpha^ {2} \mathbb {E} \left[ \mathbb {E} \left[ \| g _ {k} \| _ {2} ^ {2} \mid \mathfrak {G} _ {k - 1} \right] \right]. \tag {152}
+$$
+
+Let us focus on the first term of equation (152), $\mathbb{E}\left[\langle \beta_k - \beta_*, g_k \rangle \mid \mathfrak{G}_{k-1}\right]$ . By conditioning on the sigma-fields $\mathfrak{G}_{k-1}$ and $\mathfrak{F}_k$ , we have,
+
+$$
+\mathbb {E} \left[ \left\langle \beta_ {k} - \beta_ {*}, g _ {k} \right\rangle \mid \mathfrak {F} _ {k}, \mathfrak {G} _ {k - 1} \right] = \left(\mathbb {E} \left[ \sum_ {t = 0} ^ {m - 1} \gamma^ {t} R _ {k, t} + \gamma^ {m} \widehat {Q} _ {k, m} \mid \mathfrak {F} _ {k}, \mathfrak {G} _ {k - 1} \right] - \widehat {Q} _ {k, 0}\right) \left(\widehat {Q} _ {k, 0} - \widehat {Q} _ {k, 0} ^ {*}\right). \tag {153}
+$$
+
+Note that, according to the Bellman operator (7), we have,
+
+$$
+\mathbb {E} \left[ \sum_ {t = 0} ^ {m - 1} \gamma^ {t} R _ {k, t} \Bigg | \mathfrak {F} _ {k}, \mathfrak {G} _ {k - 1} \right] = \widetilde {Q} _ {k, 0} - \gamma^ {m} \mathbb {E} \left[ \widetilde {Q} _ {k, m} \Big | \mathfrak {F} _ {k}, \mathfrak {G} _ {k - 1} \right]. \tag {154}
+$$
+
+This differs from the asymmetric case (see Appendix C) in that we do not necessarily have $Q = \widetilde{Q}$ here. By substituting equation (154) in equation (153), we obtain,
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \left\langle \beta_ {k} - \beta_ {*}, g _ {k} \right\rangle \mid \mathfrak {F} _ {k}, \mathfrak {G} _ {k - 1} \right] \\ = \left(\mathbb {E} \left[ \sum_ {t = 0} ^ {m - 1} \gamma^ {t} R _ {k, t} \mid \mathfrak {F} _ {k}, \mathfrak {G} _ {k - 1} \right] + \gamma^ {m} \mathbb {E} \left[ \widehat {Q} _ {k, m} \mid \mathfrak {F} _ {k}, \mathfrak {G} _ {k - 1} \right] - \widehat {Q} _ {k, 0}\right) \left(\widehat {Q} _ {k, 0} - \widehat {Q} _ {k, 0} ^ {*}\right) \tag {155} \\ \end{array}
+$$
+
+$$
+\begin{array}{l} = \left(\widetilde {Q} _ {k, 0} - \gamma^ {m} \mathbb {E} \left[ \widetilde {Q} _ {k, m} \mid \mathfrak {F} _ {k}, \mathfrak {G} _ {k - 1} \right] + \gamma^ {m} \mathbb {E} \left[ \widehat {Q} _ {k, m} \mid \mathfrak {F} _ {k}, \mathfrak {G} _ {k - 1} \right] - \widehat {Q} _ {k, 0}\right) \left(\widehat {Q} _ {k, 0} - \widehat {Q} _ {k, 0} ^ {*}\right) \tag {156} \\ = \left(\widetilde {Q} _ {k, 0} - \gamma^ {m} \mathbb {E} \left[ \widetilde {Q} _ {k, m} - \widehat {Q} _ {k, m} \mid \mathfrak {F} _ {k}, \mathfrak {G} _ {k - 1} \right] - \widehat {Q} _ {k, 0}\right) \left(\widehat {Q} _ {k, 0} - \widetilde {Q} _ {k, 0} + \widetilde {Q} _ {k, 0} - \widehat {Q} _ {k, 0} ^ {*}\right) \tag {157} \\ = \left(\left(\widetilde {Q} _ {k, 0} - \widehat {Q} _ {k, 0}\right) - \gamma^ {m} \mathbb {E} \left[ \widetilde {Q} _ {k, m} - \widehat {Q} _ {k, m} \mid \mathfrak {F} _ {k}, \mathfrak {G} _ {k - 1} \right]\right) \left(\left(\widehat {Q} _ {k, 0} - \widetilde {Q} _ {k, 0}\right) + \left(\widetilde {Q} _ {k, 0} - \widehat {Q} _ {k, 0} ^ {*}\right)\right) \tag {158} \\ = - \left(\widetilde {Q} _ {k, 0} - \widehat {Q} _ {k, 0}\right) ^ {2} + \left(\widetilde {Q} _ {k, 0} - \widehat {Q} _ {k, 0}\right) \left(\widetilde {Q} _ {k, 0} - \widehat {Q} _ {k, 0} ^ {*}\right) \\ + \gamma^ {m} \mathbb {E} \left[ \widehat {Q} _ {k, m} - \widetilde {Q} _ {k, m} \mid \mathfrak {F} _ {k}, \mathfrak {G} _ {k - 1} \right] \left(\widehat {Q} _ {k, 0} - \widetilde {Q} _ {k, 0}\right) + \gamma^ {m} \mathbb {E} \left[ \widehat {Q} _ {k, m} - \widetilde {Q} _ {k, m} \mid \mathfrak {F} _ {k}, \mathfrak {G} _ {k - 1} \right] \left(\widetilde {Q} _ {k, 0} - \widehat {Q} _ {k, 0} ^ {*}\right). \tag {159} \\ \end{array}
+$$
+
+We now follow the same technique as in the asymmetric case (see Appendix C) for each of the four terms. By taking the expectation over $\mathfrak{F}_k$ , we get the following.
+
+- For the first term, we have,
+
+$$
+\mathbb {E} \left[ - \left(\widetilde {Q} _ {k, 0} - \widehat {Q} _ {k, 0}\right) ^ {2} \mid \mathfrak {G} _ {k - 1} \right] = - \left\| \widetilde {Q} - \widehat {Q} _ {k} \right\| _ {d} ^ {2}. \tag {160}
+$$
+
+- For the second term, we have,
+
+$$
+\mathbb {E} \left[ \left(\widetilde {Q} _ {k, 0} - \widehat {Q} _ {k, 0}\right) \left(\widetilde {Q} _ {k, 0} - \widehat {Q} _ {k, 0} ^ {*}\right) \mid \mathfrak {G} _ {k - 1} \right] \leq \left\| \widetilde {Q} - \widehat {Q} _ {k} \right\| _ {d} \left\| \widetilde {Q} - \widehat {Q} ^ {*} \right\| _ {d}. \tag {161}
+$$
+
+- For the third term, we have,
+
+$$
+\mathbb {E} \left[ (\widehat {Q} _ {k, m} - \widetilde {Q} _ {k, m}) (\widehat {Q} _ {k, 0} - \widetilde {Q} _ {k, 0}) \mid \mathfrak {G} _ {k - 1} \right] \leq \left\| \widehat {Q} _ {k} - \widetilde {Q} \right\| _ {d} ^ {2} + \left(B + \frac {1}{1 - \gamma}\right) ^ {2} \sqrt {\| d _ {m} - d \| _ {\mathrm {T V}}}. \tag {162}
+$$
+
+- For the fourth term, we have,
+
+$$
+\mathbb {E} \left[ \left(\widehat {Q} _ {k, m} - \widetilde {Q} _ {k, m}\right) \left(\widetilde {Q} _ {k, 0} - \widehat {Q} _ {k, 0} ^ {*}\right) \Big | \mathfrak {G} _ {k - 1} \right] \leq \left\| \widehat {Q} _ {k} - \widetilde {Q} \right\| _ {d} \left\| \widetilde {Q} - \widehat {Q} ^ {*} \right\| _ {d} + \left(B + \frac {1}{1 - \gamma}\right) ^ {2} \sqrt {\| d _ {m} - d \| _ {\mathrm {T V}}}. \tag {163}
+$$
+
+By taking expectation over $\mathfrak{G}_{k - 1}$ of the four terms and using the previous upper bounds, we obtain,
+
+$$
+\mathbb {E} \left[ \left\langle \beta_ {k} - \beta_ {*}, g _ {k} \right\rangle \right] \leq - (1 - \gamma^ {m}) \Delta_ {k} ^ {2} + (1 + \gamma^ {m}) \Delta_ {k} \left\| \widehat {Q} ^ {*} - \widetilde {Q} \right\| _ {d} + 2 \gamma^ {m} \left(B + \frac {1}{1 - \gamma}\right) ^ {2} \sqrt {\| d _ {m} - d \| _ {\mathrm {T V}}}. \tag {164}
+$$
+
+The second term in equation (152) is treated similarly to the asymmetric case (see Appendix C), which yields,
+
+$$
+\mathbb {E} \left[ \| g _ {k} \| _ {2} ^ {2} \right] \leq \left(\frac {1}{1 - \gamma} + 2 B\right) ^ {2}. \tag {165}
+$$
+
+By substituting equations (164) and (165) into the Lyapunov drift of equation (152), we obtain,
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \mathcal {L} \left(\beta_ {k + 1}\right) - \mathcal {L} \left(\beta_ {k}\right) \right] \leq - 2 \alpha \left(1 - \gamma^ {m}\right) \Delta_ {k} ^ {2} + 2 \alpha \left(1 + \gamma^ {m}\right) \Delta_ {k} \left\| \widehat {Q} ^ {*} - \widetilde {Q} \right\| _ {d} + \alpha^ {2} \left(\frac {1}{1 - \gamma} + 2 B\right) ^ {2} \\ + 4 \alpha \gamma^ {m} \left(B + \frac {1}{1 - \gamma}\right) ^ {2} \sqrt {\| d _ {m} - d \| _ {\mathrm {T V}}}. \tag {166} \\ \end{array}
+$$
+
+We can upper bound $\left\| \widehat{Q}^{*} - \widetilde{Q}\right\|_{d}$ as follows,
+
+$$
+\left\| \widehat {Q} ^ {*} - \widetilde {Q} \right\| _ {d} \leq \left\| \widehat {Q} ^ {*} - Q \right\| _ {d} + \left\| Q - \widetilde {Q} \right\| _ {d}. \tag {167}
+$$
+
+By setting $l = \frac{1 + \gamma^m}{2(1 - \gamma^m)}\left(\left\| \widehat{Q}^* - Q\right\|_d + \left\| Q - \widetilde{Q}\right\|_d\right)$ , we can write, following a similar strategy as in the asymmetric case (see Appendix C),
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \mathcal {L} \left(\beta_ {k + 1}\right) - \mathcal {L} \left(\beta_ {k}\right) \right] \leq - 2 \alpha \left(1 - \gamma^ {m}\right) \left(\Delta_ {k} - l\right) ^ {2} + 2 \alpha \left(1 - \gamma^ {m}\right) l ^ {2} + \alpha^ {2} \left(\frac {1}{1 - \gamma} + 2 B\right) ^ {2} \\ + 4 \alpha \gamma^ {m} \left(B + \frac {1}{1 - \gamma}\right) ^ {2} \sqrt {\| d _ {m} - d \| _ {\mathrm {T V}}}. \tag {168} \\ \end{array}
+$$
+
+By summing all drifts, rearranging, and dividing by $2\alpha K(1 - \gamma^m)$ , we obtain, after dropping the nonnegative term $\mathcal{L}(\beta_K)$ ,
+
+$$
+\begin{array}{l} \frac {1}{K} \sum_ {k = 0} ^ {K - 1} (\Delta_ {k} - l) ^ {2} \leq \frac {\left\| \beta_ {0} - \beta_ {*} \right\| _ {2} ^ {2}}{2 \alpha K (1 - \gamma^ {m})} + l ^ {2} + \frac {\alpha}{2 (1 - \gamma^ {m})} \left(\frac {1}{1 - \gamma} + 2 B\right) ^ {2} \\ + \frac {2 \gamma^ {m}}{1 - \gamma^ {m}} \left(B + \frac {1}{1 - \gamma}\right) ^ {2} \sqrt {\| d _ {m} - d \| _ {\mathrm {T V}}}. \tag {169} \\ \end{array}
+$$
+
+The bound obtained through this Lyapunov drift summation can be used to further develop equation (150), using the subadditivity of the square root,
+
+$$
+\begin{array}{l} \sqrt {\mathbb {E} \left[ \left\| \widetilde {Q} - \bar {Q} \right\| _ {d} ^ {2} \right]} \leq \sqrt {\frac {1}{K} \sum_ {k = 0} ^ {K - 1} \left(\Delta_ {k} - l\right) ^ {2}} + l \tag {170} \\ \leq \frac {\| \beta_ {0} - \beta_ {*} \| _ {2}}{\sqrt {2 \alpha K (1 - \gamma^ {m})}} + 2 l + \sqrt {\frac {\alpha}{2 (1 - \gamma^ {m})}} \left(\frac {1}{1 - \gamma} + 2 B\right) \\ + \left(B + \frac {1}{1 - \gamma}\right) \sqrt {\frac {2 \gamma^ {m}}{1 - \gamma^ {m}} \sqrt {\| d _ {m} - d \| _ {\mathrm {T V}}}} \tag {171} \\ = \frac {\left\| \beta_ {0} - \beta_ {*} \right\| _ {2}}{\sqrt {2 \alpha K (1 - \gamma^ {m})}} + 2 l + \sqrt {\frac {\alpha}{2 (1 - \gamma^ {m})}} \left(\frac {1}{1 - \gamma} + 2 B\right) \\ + \left(B + \frac {1}{1 - \gamma}\right) \sqrt {\frac {2 \gamma^ {m}}{1 - \gamma^ {m}} \sqrt {\| d _ {m} - d \| _ {\mathrm {T V}}}}. \tag {172} \\ \end{array}
+$$
+
+Plugging equation (172) into equation (149), and substituting back $l$ , we finally have,
+
+$$
+\begin{array}{l} \sqrt {\mathbb {E} \left[ \left\| Q - \bar {Q} \right\| _ {d} ^ {2} \right]} \leq \frac {\left\| \beta_ {0} - \beta_ {*} \right\| _ {2}}{\sqrt {2 \alpha K (1 - \gamma^ {m})}} + \frac {1 + \gamma^ {m}}{1 - \gamma^ {m}} \left(\left\| \widehat {Q} ^ {*} - Q \right\| _ {d} + \left\| Q - \widetilde {Q} \right\| _ {d}\right) + \sqrt {\frac {\alpha}{2 (1 - \gamma^ {m})}} \left(\frac {1}{1 - \gamma} + 2 B\right) \\ + \left(B + \frac {1}{1 - \gamma}\right) \sqrt {\frac {2 \gamma^ {m}}{1 - \gamma^ {m}} \sqrt {\| d _ {m} - d \| _ {\mathrm {T V}}}} + \left\| Q - \widetilde {Q} \right\| _ {d} \tag {173} \\ \leq \frac {\left\| \beta_ {0} - \beta_ {*} \right\| _ {2}}{\sqrt {2 \alpha K (1 - \gamma^ {m})}} + \frac {1 + \gamma^ {m}}{1 - \gamma^ {m}} \left\| \widehat {Q} ^ {*} - Q \right\| _ {d} + \sqrt {\frac {\alpha}{2 (1 - \gamma^ {m})}} \left(\frac {1}{1 - \gamma} + 2 B\right) \\ + \left(B + \frac {1}{1 - \gamma}\right) \sqrt {\frac {2 \gamma^ {m}}{1 - \gamma^ {m}} \sqrt {\| d _ {m} - d \| _ {\mathrm {T V}}}} + \frac {2}{1 - \gamma^ {m}} \left\| Q - \widetilde {Q} \right\| _ {d}. \tag {174} \\ \end{array}
+$$
+
+Using Lemma D.1, we finally obtain,
+
+$$
+\begin{array}{l} \sqrt {\mathbb {E} \left[ \left\| Q - \bar {Q} \right\| _ {d} ^ {2} \right]} \leq \frac {\left\| \beta_ {0} - \beta_ {*} \right\| _ {2}}{\sqrt {2 \alpha K (1 - \gamma^ {m})}} + \frac {1 + \gamma^ {m}}{1 - \gamma^ {m}} \left\| \widehat {Q} ^ {*} - Q \right\| _ {d} + \sqrt {\frac {\alpha}{2 (1 - \gamma^ {m})}} \left(\frac {1}{1 - \gamma} + 2 B\right) \\ + \left(B + \frac {1}{1 - \gamma}\right) \sqrt {\frac {2 \gamma^ {m}}{1 - \gamma^ {m}} \sqrt {\| d _ {m} - d \| _ {\mathrm {T V}}}} \\ + \left(\frac {2}{1 - \gamma^ {m}}\right) \frac {1 - \gamma^ {m}}{1 - \gamma} \left\| \mathbb {E} ^ {\pi} \left[ \sum_ {k = 0} ^ {\infty} \gamma^ {k m} \left\| \hat {b} _ {k m} - b _ {k m} \right\| _ {\mathrm {T V}} \,\middle|\, Z _ {0} = \cdot \right] \right\| _ {d} \tag {175} \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \leq \frac {\left\| \beta_ {0} - \beta_ {*} \right\| _ {2}}{\sqrt {2 \alpha K (1 - \gamma^ {m})}} + \frac {1 + \gamma^ {m}}{1 - \gamma^ {m}} \min _ {f \in \mathcal {F} _ {\chi} ^ {B}} \| f - Q \| _ {d} + \sqrt {\frac {\alpha}{2 (1 - \gamma^ {m})}} \left(\frac {1}{1 - \gamma} + 2 B\right) \\ + \left(B + \frac {1}{1 - \gamma}\right) \sqrt {\frac {2 \gamma^ {m}}{1 - \gamma^ {m}} \sqrt {\| d _ {m} - d \| _ {\mathrm {T V}}}} \\ + \frac {2}{1 - \gamma} \left\| \mathbb {E} ^ {\pi} \left[ \sum_ {k = 0} ^ {\infty} \gamma^ {k m} \left\| \hat {b} _ {k m} - b _ {k m} \right\| _ {\mathrm {T V}} \,\middle|\, Z _ {0} = \cdot \right] \right\| _ {d}. \tag {176} \\ \end{array}
+$$
+
+By setting $\alpha = \frac{1}{\sqrt{K}}$ and upper bounding $\| \beta_0 - \beta_*\|_2$ by $2B$ , we get,
+
+$$
+\begin{array}{l} \sqrt {\mathbb {E} \left[ \| Q - \bar {Q} \| _ {d} ^ {2} \right]} \leq \sqrt {\frac {4 B ^ {2} + \left(\frac {1}{1 - \gamma} + 2 B\right) ^ {2}}{2 \sqrt {K} (1 - \gamma^ {m})}} + \frac {1 + \gamma^ {m}}{1 - \gamma^ {m}} \min _ {f \in \mathcal {F} _ {\chi} ^ {B}} \| f - Q \| _ {d} \\ + \left(B + \frac {1}{1 - \gamma}\right) \sqrt {\frac {2 \gamma^ {m}}{1 - \gamma^ {m}} \sqrt {\| d _ {m} - d \| _ {\mathrm {T V}}}} \\ + \frac {2}{1 - \gamma} \left\| \mathbb {E} ^ {\pi} \left[ \sum_ {k = 0} ^ {\infty} \gamma^ {k m} \left\| \hat {b} _ {k m} - b _ {k m} \right\| _ {\mathrm {T V}} \,\middle|\, Z _ {0} = \cdot \right] \right\| _ {d}. \tag {177} \\ \end{array}
+$$
+
+This concludes the proof.
+
+
+
+# E. Proof of the Finite-Time Bound for the Natural Actor-Critic
+
+Let us first state the performance difference lemma for POMDPs proved by Cayci et al. (2024). Note that this lemma is completely agnostic to the critic used to compute $\pi_1, \pi_2 \in \Pi_{\mathcal{M}}$ and is thus applicable both to the asymmetric setting and to the symmetric setting.
+
+Lemma E.1 (Performance difference (Cayci et al., 2024)). For any two agent-state policies $\pi_1, \pi_2 \in \Pi_{\mathcal{M}}$ ,
+
+$$
+V ^ {\pi_ {2}} \left(z _ {0}\right) - V ^ {\pi_ {1}} \left(z _ {0}\right) \leq \frac {1}{1 - \gamma} \mathbb {E} ^ {d ^ {\pi_ {2}}} \left[ A ^ {\pi_ {1}} (Z, A) \mid Z _ {0} = z _ {0} \right] + \frac {2}{1 - \gamma} \varepsilon_ {\inf } ^ {\pi_ {2}} \left(z _ {0}\right), \tag {178}
+$$
+
+where,
+
+$$
+\varepsilon_ {\inf } ^ {\pi_ {2}} \left(z _ {0}\right) = \mathbb {E} ^ {\pi_ {2}} \left[ \sum_ {k = 0} ^ {\infty} \gamma^ {k} \left\| \hat {b} _ {k} - b _ {k} \right\| _ {\mathrm {T V}} \,\middle|\, Z _ {0} = z _ {0} \right]. \tag {179}
+$$
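+In the fully observed case (the agent state equals the environment state, so $\hat{b}_k = b_k$ and $\varepsilon_{\inf}^{\pi_2}(z_0) = 0$ ), the lemma reduces to the classic performance difference identity, which holds with equality and can be checked exactly on a small MDP. The following sketch is our own illustration under those assumptions; all function names are hypothetical.
+
+```python
+import numpy as np
+
+def value_q(P, R, pi, gamma):
+    """Exact V and Q for policy pi in an MDP. P: (A, S, S), R: (S, A), pi: (S, A)."""
+    S = R.shape[0]
+    P_pi = np.einsum('sa,ast->st', pi, P)  # state transition kernel under pi
+    r_pi = (pi * R).sum(axis=1)            # expected one-step reward under pi
+    V = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
+    Q = R + gamma * np.einsum('ast,t->sa', P, V)
+    return V, Q
+
+def discounted_visitation(P, pi, gamma, s0):
+    """d(s) = (1 - gamma) * sum_t gamma^t Pr(S_t = s | S_0 = s0) under pi."""
+    S = pi.shape[0]
+    P_pi = np.einsum('sa,ast->st', pi, P)
+    mu0 = np.zeros(S)
+    mu0[s0] = 1.0
+    return (1 - gamma) * np.linalg.solve(np.eye(S) - gamma * P_pi.T, mu0)
+```
+
+On a random MDP, $V^{\pi_2}(s_0) - V^{\pi_1}(s_0)$ then matches $\frac{1}{1-\gamma}\mathbb{E}^{d^{\pi_2}}[A^{\pi_1}(S, A)]$ up to numerical precision, with the $\varepsilon_{\inf}$ correction appearing only once beliefs are approximated.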
+
+Proof. The proof is similar to that of Cayci et al. (2024). First, let us decompose the performance difference into the following terms,
+
+$$
+\begin{array}{l} V ^ {\pi_ {2}} \left(z _ {0}\right) - V ^ {\pi_ {1}} \left(z _ {0}\right) = \mathbb {E} ^ {\pi_ {2}} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t} R _ {t} \,\middle|\, Z _ {0} = z _ {0} \right] - V ^ {\pi_ {1}} \left(z _ {0}\right) \tag {180} \\ = \mathbb {E} ^ {\pi_ {2}} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t} \left(R _ {t} - V ^ {\pi_ {1}} \left(Z _ {t}\right) + V ^ {\pi_ {1}} \left(Z _ {t}\right)\right) \,\middle|\, Z _ {0} = z _ {0} \right] - V ^ {\pi_ {1}} \left(z _ {0}\right) \tag {181} \\ = \mathbb {E} ^ {\pi_ {2}} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t} \left(R _ {t} - V ^ {\pi_ {1}} \left(Z _ {t}\right) + \gamma V ^ {\pi_ {1}} \left(Z _ {t + 1}\right)\right) \,\middle|\, Z _ {0} = z _ {0} \right] \tag {182} \\ = \mathbb {E} ^ {\pi_ {2}} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t} \left(R _ {t} + \gamma \mathcal {V} ^ {\pi_ {1}} \left(S _ {t + 1}, Z _ {t + 1}\right) - V ^ {\pi_ {1}} \left(Z _ {t}\right)\right) \,\middle|\, Z _ {0} = z _ {0} \right] \\ + \mathbb {E} ^ {\pi_ {2}} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t} \left(\gamma V ^ {\pi_ {1}} \left(Z _ {t + 1}\right) - \gamma \mathcal {V} ^ {\pi_ {1}} \left(S _ {t + 1}, Z _ {t + 1}\right)\right) \,\middle|\, Z _ {0} = z _ {0} \right] \tag {183} \\ \end{array}
+$$
+
+$$
+\begin{array}{l} = \mathbb {E} ^ {\pi_ {2}} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t} \left(R _ {t} + \gamma \mathcal {V} ^ {\pi_ {1}} \left(S _ {t + 1}, Z _ {t + 1}\right) - V ^ {\pi_ {1}} \left(Z _ {t}\right)\right) \Bigg | Z _ {0} = z _ {0} \right] \\ + \mathbb {E} ^ {\pi_ {2}} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t + 1} \left(V ^ {\pi_ {1}} \left(Z _ {t + 1}\right) - \mathcal {V} ^ {\pi_ {1}} \left(S _ {t + 1}, Z _ {t + 1}\right)\right) \mid Z _ {0} = z _ {0} \right]. \tag {184} \\ \end{array}
+$$
+
+Let us focus on bounding the first term in equation (184). We have, for any $T > 0$ ,
+
+$$
+\left| \sum_ {t = 0} ^ {T} \gamma^ {t} \left(R _ {t} + \gamma \mathcal {V} ^ {\pi_ {1}} \left(S _ {t + 1}, Z _ {t + 1}\right) - V ^ {\pi_ {1}} \left(Z _ {t}\right)\right) \right| \leq \frac {2}{(1 - \gamma) ^ {2}} < \infty . \tag {185}
+$$
+
+By Lebesgue's dominated convergence theorem, we have,
+
+$$
+\begin{array}{l} \mathbb {E} ^ {\pi_ {2}} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t} \left(R _ {t} + \gamma \mathcal {V} ^ {\pi_ {1}} \left(S _ {t + 1}, Z _ {t + 1}\right) - V ^ {\pi_ {1}} \left(Z _ {t}\right)\right) \Bigg | Z _ {0} = z _ {0} \right] \\ = \sum_ {t = 0} ^ {\infty} \gamma^ {t} \mathbb {E} ^ {\pi_ {2}} \left[ R _ {t} + \gamma \mathcal {V} ^ {\pi_ {1}} \left(S _ {t + 1}, Z _ {t + 1}\right) - V ^ {\pi_ {1}} \left(Z _ {t}\right) \mid Z _ {0} = z _ {0} \right]. \tag {186} \\ \end{array}
+$$
+
+Then, by the law of total expectation, we have at any timestep $t \geq 0$ ,
+
+$$
+\begin{array}{l} \mathbb {E} ^ {\pi_ {2}} \left[ R _ {t} + \gamma \mathcal {V} ^ {\pi_ {1}} \left(S _ {t + 1}, Z _ {t + 1}\right) - V ^ {\pi_ {1}} \left(Z _ {t}\right) \mid Z _ {0} = z _ {0} \right] \\ = \mathbb {E} \left[ \mathbb {E} ^ {\pi_ {2}} \left[ R _ {t} + \gamma \mathcal {V} ^ {\pi_ {1}} \left(S _ {t + 1}, Z _ {t + 1}\right) \mid H _ {t}, Z _ {t} \right] - V ^ {\pi_ {1}} \left(Z _ {t}\right) \mid Z _ {0} = z _ {0} \right]. \tag {187} \\ \end{array}
+$$
+
+Moreover, we have,
+
+$$
+\begin{array}{l} \mathbb {E} ^ {\pi_ {2}} \left[ R _ {t} + \gamma \mathcal {V} ^ {\pi_ {1}} \left(S _ {t + 1}, Z _ {t + 1}\right) \mid H _ {t} = h _ {t}, Z _ {t} = z _ {t} \right] \\ = \sum_ {s _ {t}, a _ {t}} b _ {t} \left(s _ {t} \mid h _ {t}\right) \pi_ {2} \left(a _ {t} \mid z _ {t}\right) \mathcal {Q} ^ {\pi_ {1}} \left(s _ {t}, z _ {t}, a _ {t}\right) \tag {188} \\ = \sum_ {a _ {t}} \pi_ {2} \left(a _ {t} \mid z _ {t}\right) Q ^ {\pi_ {1}} \left(z _ {t}, a _ {t}\right) + \sum_ {s _ {t}, a _ {t}} b _ {t} \left(s _ {t} \mid h _ {t}\right) \pi_ {2} \left(a _ {t} \mid z _ {t}\right) \mathcal {Q} ^ {\pi_ {1}} \left(s _ {t}, z _ {t}, a _ {t}\right) - \sum_ {a _ {t}} \pi_ {2} \left(a _ {t} \mid z _ {t}\right) Q ^ {\pi_ {1}} \left(z _ {t}, a _ {t}\right) \tag {189} \\ = \sum_ {a _ {t}} \pi_ {2} \left(a _ {t} \mid z _ {t}\right) Q ^ {\pi_ {1}} \left(z _ {t}, a _ {t}\right) + \sum_ {s _ {t}, a _ {t}} b _ {t} \left(s _ {t} \mid h _ {t}\right) \pi_ {2} \left(a _ {t} \mid z _ {t}\right) \mathcal {Q} ^ {\pi_ {1}} \left(s _ {t}, z _ {t}, a _ {t}\right) \\ - \sum_ {s _ {t}, a _ {t}} \hat {b} _ {t} \left(s _ {t} \mid z _ {t}\right) \pi_ {2} \left(a _ {t} \mid z _ {t}\right) \mathcal {Q} ^ {\pi_ {1}} \left(s _ {t}, z _ {t}, a _ {t}\right) \tag {190} \\ = \sum_ {a _ {t}} \pi_ {2} \left(a _ {t} \mid z _ {t}\right) Q ^ {\pi_ {1}} \left(z _ {t}, a _ {t}\right) + \sum_ {s _ {t}, a _ {t}} \left(b _ {t} \left(s _ {t} \mid h _ {t}\right) - \hat {b} _ {t} \left(s _ {t} \mid z _ {t}\right)\right) \pi_ {2} \left(a _ {t} \mid z _ {t}\right) \mathcal {Q} ^ {\pi_ {1}} \left(s _ {t}, z _ {t}, a _ {t}\right). \tag {191} \\ \end{array}
+$$
+
+By noting that $\sup_{s,z}|\sum_{a}\pi_2(a|z)\mathcal{Q}^{\pi_1}(s,z,a)|\leq \sup_{s,z,a}|\mathcal{Q}^{\pi_1}(s,z,a)|\leq \frac{1}{1 - \gamma}$ , we obtain,
+
+$$
+\begin{array}{l} \mathbb {E} ^ {\pi_ {2}} \left[ R _ {t} + \gamma \mathcal {V} ^ {\pi_ {1}} \left(S _ {t + 1}, Z _ {t + 1}\right) \mid H _ {t} = h _ {t}, Z _ {t} = z _ {t} \right] \\ \leq \sum_ {a _ {t}} \pi_ {2} \left(a _ {t} \mid z _ {t}\right) Q ^ {\pi_ {1}} \left(z _ {t}, a _ {t}\right) + \frac {1}{1 - \gamma} \left\| b _ {t} (\cdot \mid h _ {t}) - \hat {b} _ {t} (\cdot \mid z _ {t}) \right\| _ {\mathrm {T V}}. \tag {192} \\ \end{array}
+$$
+
+Finally, the expectation at time $t\geq 0$ can be written as,
+
+$$
+\begin{array}{l} \mathbb {E} ^ {\pi_ {2}} \left[ R _ {t} + \gamma \mathcal {V} ^ {\pi_ {1}} \left(S _ {t + 1}, Z _ {t + 1}\right) - V ^ {\pi_ {1}} \left(Z _ {t}\right) \mid Z _ {0} = z _ {0} \right] \\ = \mathbb {E} \left[ \mathbb {E} ^ {\pi_ {2}} \left[ R _ {t} + \gamma \mathcal {V} ^ {\pi_ {1}} \left(S _ {t + 1}, Z _ {t + 1}\right) \mid H _ {t}, Z _ {t} \right] - V ^ {\pi_ {1}} \left(Z _ {t}\right) \mid Z _ {0} = z _ {0} \right] \tag {193} \\ \leq \mathbb {E} ^ {\pi_ {2}} \left[ Q ^ {\pi_ {1}} \left(Z _ {t}, A _ {t}\right) + \frac {1}{1 - \gamma} \left\| b _ {t} (\cdot | H _ {t}) - \hat {b} _ {t} (\cdot | Z _ {t}) \right\| _ {\mathrm {T V}} - V ^ {\pi_ {1}} \left(Z _ {t}\right) \,\middle|\, Z _ {0} = z _ {0} \right] \tag {194} \\ = \mathbb {E} ^ {\pi_ {2}} \left[ A ^ {\pi_ {1}} \left(Z _ {t}, A _ {t}\right) + \frac {1}{1 - \gamma} \left\| b _ {t} (\cdot | H _ {t}) - \hat {b} _ {t} (\cdot | Z _ {t}) \right\| _ {\mathrm {T V}} \,\middle|\, Z _ {0} = z _ {0} \right]. \tag {195} \\ \end{array}
+$$
+
+Now, by using Lebesgue's dominated convergence theorem in the reverse direction, we have,
+
+$$
+\begin{array}{l} \mathbb {E} ^ {\pi_ {2}} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t} \left(R _ {t} + \gamma \mathcal {V} ^ {\pi_ {1}} \left(S _ {t + 1}, Z _ {t + 1}\right) - V ^ {\pi_ {1}} \left(Z _ {t}\right)\right) \Bigg | Z _ {0} = z _ {0} \right] \\ \leq \mathbb {E} ^ {\pi_ {2}} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t} A ^ {\pi_ {1}} \left(Z _ {t}, A _ {t}\right) \Bigg | Z _ {0} = z _ {0} \right] + \frac {1}{1 - \gamma} \mathbb {E} ^ {\pi_ {2}} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t} \left\| \hat {b} _ {t} - b _ {t} \right\| _ {\mathrm {T V}} \Bigg | Z _ {0} = z _ {0} \right] (196) \\ = \mathbb {E} ^ {\pi_ {2}} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t} A ^ {\pi_ {1}} \left(Z _ {t}, A _ {t}\right) \mid Z _ {0} = z _ {0} \right] + \frac {1}{1 - \gamma} \varepsilon_ {\inf } ^ {\pi_ {2}} \left(z _ {0}\right) (197) \\ \end{array}
+$$
+
+Now, let us focus on bounding the second term in equation (184). We have, for any $T > 0$,
+
+$$
+\left| \sum_ {t = 0} ^ {T} \gamma^ {t + 1} \left(V ^ {\pi_ {1}} \left(Z _ {t + 1}\right) - \mathcal {V} ^ {\pi_ {1}} \left(S _ {t + 1}, Z _ {t + 1}\right)\right) \right| \leq \frac {2}{(1 - \gamma) ^ {2}} < \infty . \tag {198}
+$$
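
The uniform bound (198) follows from $|V^{\pi_1} - \mathcal{V}^{\pi_1}| \leq \frac{2}{1-\gamma}$ and the geometric series $\sum_{t \geq 0} \gamma^{t+1} \leq \frac{1}{1-\gamma}$. A small numeric check, with arbitrary values within the per-step bound:

```python
import numpy as np

gamma = 0.9
per_step = 2.0 / (1.0 - gamma)          # |V(z) - V_script(s, z)| <= 2 / (1 - gamma)
dominating = 2.0 / (1.0 - gamma) ** 2   # uniform bound of equation (198), independent of T

rng = np.random.default_rng(1)
T = 1000
diffs = rng.uniform(-per_step, per_step, size=T + 1)
partial = sum(gamma ** (t + 1) * diffs[t] for t in range(T + 1))
assert abs(partial) <= dominating       # every partial sum is dominated
```
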
+
+Using Lebesgue's dominated convergence theorem, we can write,
+
+$$
+\begin{array}{l} \mathbb {E} ^ {\pi_ {2}} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t + 1} \left(V ^ {\pi_ {1}} \left(Z _ {t + 1}\right) - \mathcal {V} ^ {\pi_ {1}} \left(S _ {t + 1}, Z _ {t + 1}\right)\right) \Bigg | Z _ {0} = z _ {0} \right] \\ = \sum_ {t = 0} ^ {\infty} \gamma^ {t + 1} \mathbb {E} ^ {\pi_ {2}} \left[ V ^ {\pi_ {1}} \left(Z _ {t + 1}\right) - \mathcal {V} ^ {\pi_ {1}} \left(S _ {t + 1}, Z _ {t + 1}\right) \mid Z _ {0} = z _ {0} \right]. \tag {199} \\ \end{array}
+$$
+
+By the law of total expectation, we have at any timestep $t \geq 0$ ,
+
+$$
+\begin{array}{l} \mathbb {E} ^ {\pi_ {2}} \left[ V ^ {\pi_ {1}} \left(Z _ {t + 1}\right) - \mathcal {V} ^ {\pi_ {1}} \left(S _ {t + 1}, Z _ {t + 1}\right) \mid Z _ {0} = z _ {0} \right] \\ = \mathbb {E} \left[ V ^ {\pi_ {1}} \left(Z _ {t + 1}\right) - \mathbb {E} ^ {\pi_ {2}} \left[ \mathcal {V} ^ {\pi_ {1}} \left(S _ {t + 1}, Z _ {t + 1}\right) \mid H _ {t + 1}, Z _ {t + 1} \right] \mid Z _ {0} = z _ {0} \right]. \tag {200} \\ \end{array}
+$$
+
+And, we have,
+
+$$
+\begin{array}{l} \mathbb {E} ^ {\pi_ {2}} \left[ \mathcal {V} ^ {\pi_ {1}} \left(S _ {t + 1}, z _ {t + 1}\right) \mid H _ {t + 1} = h _ {t + 1}, Z _ {t + 1} = z _ {t + 1} \right] \\ = \sum_ {s _ {t + 1}} b _ {t + 1} \left(s _ {t + 1} \mid h _ {t + 1}\right) \mathcal {V} ^ {\pi_ {1}} \left(s _ {t + 1}, z _ {t + 1}\right) (201) \\ = V ^ {\pi_ {1}} \left(z _ {t + 1}\right) + \sum_ {s _ {t + 1}} b _ {t + 1} \left(s _ {t + 1} \mid h _ {t + 1}\right) \mathcal {V} ^ {\pi_ {1}} \left(s _ {t + 1}, z _ {t + 1}\right) - V ^ {\pi_ {1}} \left(z _ {t + 1}\right) (202) \\ = V ^ {\pi_ {1}} \left(z _ {t + 1}\right) + \sum_ {s _ {t + 1}} b _ {t + 1} \left(s _ {t + 1} \mid h _ {t + 1}\right) \mathcal {V} ^ {\pi_ {1}} \left(s _ {t + 1}, z _ {t + 1}\right) - \sum_ {s _ {t + 1}} \hat {b} _ {t + 1} \left(s _ {t + 1} \mid z _ {t + 1}\right) \mathcal {V} ^ {\pi_ {1}} \left(s _ {t + 1}, z _ {t + 1}\right) (203) \\ = V ^ {\pi_ {1}} \left(z _ {t + 1}\right) + \sum_ {s _ {t + 1}} \left(b _ {t + 1} \left(s _ {t + 1} \mid h _ {t + 1}\right) - \hat {b} _ {t + 1} \left(s _ {t + 1} \mid z _ {t + 1}\right)\right) \mathcal {V} ^ {\pi_ {1}} \left(s _ {t + 1}, z _ {t + 1}\right). (204) \\ \end{array}
+$$
+
+From there, by noting that $\sup_{s,z}|\mathcal{V}^{\pi_1}(s,z)|\leq \frac{1}{1 - \gamma}$ , we obtain,
+
+$$
+\begin{array}{l} \mathbb {E} ^ {\pi_ {2}} \left[ \mathcal {V} ^ {\pi_ {1}} \left(S _ {t + 1}, z _ {t + 1}\right) \mid H _ {t + 1} = h _ {t + 1}, Z _ {t + 1} = z _ {t + 1} \right] \\ \geq V ^ {\pi_ {1}} \left(z _ {t + 1}\right) - \frac {1}{1 - \gamma} \left\| b _ {t + 1} (\cdot \mid h _ {t + 1}) - \hat {b} _ {t + 1} (\cdot \mid z _ {t + 1}) \right\| _ {\mathrm {T V}}. \tag {205} \\ \end{array}
+$$
+
+Finally, the expectation at time $t\geq 0$ can be written as,
+
+$$
+\begin{array}{l} \mathbb {E} ^ {\pi_ {2}} \left[ V ^ {\pi_ {1}} \left(Z _ {t + 1}\right) - \mathcal {V} ^ {\pi_ {1}} \left(S _ {t + 1}, Z _ {t + 1}\right) \mid Z _ {0} = z _ {0} \right] \\ = \mathbb {E} \left[ V ^ {\pi_ {1}} \left(Z _ {t + 1}\right) - \mathbb {E} ^ {\pi_ {2}} \left[ \mathcal {V} ^ {\pi_ {1}} \left(S _ {t + 1}, Z _ {t + 1}\right) \mid H _ {t + 1}, Z _ {t + 1} \right] \mid Z _ {0} = z _ {0} \right] (206) \\ \leq \mathbb {E} \left[ V ^ {\pi_ {1}} \left(Z _ {t + 1}\right) - V ^ {\pi_ {1}} \left(Z _ {t + 1}\right) + \frac {1}{1 - \gamma} \left\| b _ {t + 1} (\cdot | H _ {t + 1}) - \hat {b} _ {t + 1} (\cdot | Z _ {t + 1}) \right\| _ {\mathrm {T V}} \mid Z _ {0} = z _ {0} \right] (207) \\ \end{array}
+$$
+
+$$
+\leq \mathbb {E} \left[ \frac {1}{1 - \gamma} \left\| b _ {t + 1} (\cdot | H _ {t + 1}) - \hat {b} _ {t + 1} (\cdot | Z _ {t + 1}) \right\| _ {\mathrm {T V}} \mid Z _ {0} = z _ {0} \right]. \tag {208}
+$$
+
+Now, by using Lebesgue's dominated convergence theorem in the reverse direction, we have,
+
+$$
+\begin{array}{l} \mathbb {E} ^ {\pi_ {2}} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t + 1} \left(V ^ {\pi_ {1}} \left(Z _ {t + 1}\right) - \mathcal {V} ^ {\pi_ {1}} \left(S _ {t + 1}, Z _ {t + 1}\right)\right) \Bigg | Z _ {0} = z _ {0} \right] \\ \leq \frac {1}{1 - \gamma} \mathbb {E} ^ {\pi_ {2}} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t + 1} \left\| b _ {t + 1} (\cdot \mid H _ {t + 1}) - \hat {b} _ {t + 1} (\cdot \mid Z _ {t + 1}) \right\| _ {\mathrm {T V}} \Bigg | Z _ {0} = z _ {0} \right] (209) \\ = \frac {1}{1 - \gamma} \mathbb {E} ^ {\pi_ {2}} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t} \left\| b _ {t} (\cdot \mid H _ {t}) - \hat {b} _ {t} (\cdot \mid Z _ {t}) \right\| _ {\mathrm {T V}} - \left\| b _ {0} (\cdot \mid H _ {0}) - \hat {b} _ {0} (\cdot \mid Z _ {0}) \right\| _ {\mathrm {T V}} \Bigg | Z _ {0} = z _ {0} \right] (210) \\ = \frac {1}{1 - \gamma} \mathbb {E} ^ {\pi_ {2}} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t} \left\| b _ {t} (\cdot \mid H _ {t}) - \hat {b} _ {t} (\cdot \mid Z _ {t}) \right\| _ {\mathrm {T V}} \Bigg | Z _ {0} = z _ {0} \right] \\ - \frac {1}{1 - \gamma} \mathbb {E} ^ {\pi_ {2}} \left[ \left\| b _ {0} (\cdot \mid H _ {0}) - \hat {b} _ {0} (\cdot \mid Z _ {0}) \right\| _ {\mathrm {T V}} \mid Z _ {0} = z _ {0} \right] (211) \\ = \frac {1}{1 - \gamma} \varepsilon_ {\inf } ^ {\pi_ {2}} \left(z _ {0}\right) - \frac {1}{1 - \gamma} \mathbb {E} ^ {\pi_ {2}} \left[ \left\| b _ {0} (\cdot \mid H _ {0}) - \hat {b} _ {0} (\cdot \mid Z _ {0}) \right\| _ {\mathrm {T V}} \mid Z _ {0} = z _ {0} \right] (212) \\ \leq \frac {1}{1 - \gamma} \varepsilon_ {\inf } ^ {\pi_ {2}} \left(z _ {0}\right). (213) \\ \end{array}
+$$
+
+Finally, by substituting the upper bound (197) on the first term and the upper bound (213) on the second term into equation (184), we obtain,
+
+$$
+\begin{array}{l} V ^ {\pi_ {2}} \left(z _ {0}\right) - V ^ {\pi_ {1}} \left(z _ {0}\right) \leq \mathbb {E} ^ {\pi_ {2}} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t} A ^ {\pi_ {1}} \left(Z _ {t}, A _ {t}\right) \Bigg | Z _ {0} = z _ {0} \right] + \frac {2}{1 - \gamma} \varepsilon_ {\inf } ^ {\pi_ {2}} \left(z _ {0}\right) (214) \\ = \frac {1}{1 - \gamma} \mathbb {E} ^ {d ^ {\pi_ {2}}} \left[ A ^ {\pi_ {1}} (Z, A) \mid Z _ {0} = z _ {0} \right] + \frac {2}{1 - \gamma} \varepsilon_ {\inf } ^ {\pi_ {2}} \left(z _ {0}\right). (215) \\ \end{array}
+$$
+
+This concludes the proof.
+
+Using Lemma E.1, we can prove Theorem 5, which is recalled below. The proof from Cayci et al. (2024) is generalized to the asymmetric setting.
+
+Theorem 5 (Finite-time bound for asymmetric and symmetric natural actor-critic algorithm). For any agent-state process $\mathcal{M} = (\mathcal{Z}, U)$ , we have for Algorithm 2 with $\alpha = \frac{1}{\sqrt{K}}$ , $\zeta = \frac{B\sqrt{1 - \gamma}}{\sqrt{2N}}$ , $\eta = \frac{1}{\sqrt{T}}$ and arbitrary $B > 0$ ,
+
+$$
+(1 - \gamma) \min _ {0 \leq t < T} \mathbb {E} [ J (\pi^ {*}) - J (\pi_ {t}) ] \leq \varepsilon_ {\text {nac}} + 2 \varepsilon_ {\text {inf}} + \bar {C} _ {\infty} \left(\varepsilon_ {\text {actor}} + 2 \varepsilon_ {\text {grad}} + 2 \sqrt {6} \frac {1}{T} \sum_ {t = 0} ^ {T - 1} \varepsilon_ {\text {critic}} ^ {\pi_ {t}}\right), \tag {41}
+$$
+
+where the different terms may differ for asymmetric and symmetric critics,
+
+$$
+\varepsilon_ {\mathrm {nac}} = \frac {B ^ {2} + 2 \log | \mathcal {A} |}{2 \sqrt {T}} \tag {42}
+$$
+
+$$
+\varepsilon_ {\text {actor}} = \sqrt {\frac {(2 - \gamma) B}{(1 - \gamma) \sqrt {N}}} \tag {43}
+$$
+
+$$
+\varepsilon_ {\text {inf,asym}} = 0 \tag {44}
+$$
+
+$$
+\varepsilon_ {\inf , \mathrm {sym}} = \mathbb {E} ^ {\pi^ {*}} \left[ \sum_ {k = 0} ^ {\infty} \gamma^ {k} \left\| \hat {b} _ {k} - b _ {k} \right\| _ {\mathrm {T V}} \right] \tag {45}
+$$
+
+$$
+\varepsilon_ {\text {grad,asym}} = \sup _ {0 \leq t < T} \sqrt {\min _ {\| w \| _ {2} \leq B} \mathcal {L} _ {t} (w)} \tag {46}
+$$
+
+$$
+\varepsilon_ {\text {grad,sym}} = \sup _ {0 \leq t < T} \sqrt {\min _ {\| w \| _ {2} \leq B} L _ {t} (w)}, \tag {47}
+$$
+
+and $\varepsilon_{\mathrm{critic}}^{\pi_t}$ is given in Theorem 3 and Theorem 4.
+
+Proof. The proof is based on a Lyapounov drift result using the following Lyapounov function,
+
+$$
+\Lambda (\pi) = \sum_ {z \in \mathcal {Z}} d ^ {\pi^ {*}} (z) \mathrm {K L} \left(\pi^ {*} (\cdot | z) \| \pi (\cdot | z)\right). \tag {216}
+$$
+
+The Lyapounov drift is given by,
+
+$$
+\begin{array}{l} \Lambda \left(\pi_ {t + 1}\right) - \Lambda \left(\pi_ {t}\right) = \sum_ {z \in \mathcal {Z}} d ^ {\pi^ {*}} (z) \sum_ {a \in \mathcal {A}} \pi^ {*} (a | z) \log \frac {\pi_ {t} (a | z)}{\pi_ {t + 1} (a | z)} (217) \\ = \sum_ {z, a} d ^ {\pi^ {*}} (z, a) \log \frac {\pi_ {t} (a | z)}{\pi_ {t + 1} (a | z)}. (218) \\ \end{array}
+$$
+
+Since $\sup_{z,a}\| \psi (z,a)\| _2\leq 1$ , we have that $\log \pi_{\theta}(a|z)$ is 1-smooth (Agarwal et al., 2021), which implies,
+
+$$
+\log \pi_ {\theta_ {2}} (a | z) \leq \log \pi_ {\theta_ {1}} (a | z) + \left\langle \nabla_ {\theta} \log \pi_ {\theta_ {1}} (a | z), \theta_ {2} - \theta_ {1} \right\rangle + \frac {1}{2} \| \theta_ {2} - \theta_ {1} \| _ {2} ^ {2}. \tag {219}
+$$
+
+By selecting $\theta_{2} = \theta_{t}$ and $\theta_{1} = \theta_{t + 1}$ and noting that $\theta_{t + 1} - \theta_t = \eta \bar{w}_t = \eta \frac{1}{N}\sum_{n = 0}^{N - 1}w_{t,n}$ we obtain,
+
+$$
+\log \frac {\pi_ {t} (a | z)}{\pi_ {t + 1} (a | z)} \leq \frac {\eta^ {2}}{2} \| \bar {w} _ {t} \| _ {2} ^ {2} - \eta \langle \nabla_ {\theta} \log \pi_ {t} (a | z), \bar {w} _ {t} \rangle . \tag {220}
+$$
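
The smoothness step above can be checked numerically for a log-linear (softmax) policy $\pi_\theta(a \mid z) \propto \exp(\psi(z,a)^\top \theta)$ with $\|\psi(z,a)\|_2 \leq 1$; this is a sanity check of the descent-lemma inequality, not the paper's construction, and all dimensions below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
nA, d = 5, 4
psi = rng.normal(size=(nA, d))
psi /= np.maximum(1.0, np.linalg.norm(psi, axis=1, keepdims=True))  # enforce ||psi||_2 <= 1

def log_pi(theta):
    # log-softmax of the logits psi @ theta, computed stably
    logits = psi @ theta
    m = logits.max()
    return logits - (m + np.log(np.exp(logits - m).sum()))

def grad_log_pi(theta, a):
    # gradient of log pi_theta(a): psi(z, a) - E_{a' ~ pi_theta}[psi(z, a')]
    pi = np.exp(log_pi(theta))
    return psi[a] - pi @ psi

max_gap = -np.inf
for _ in range(200):
    th1, th2 = rng.normal(size=d), rng.normal(size=d)
    a = rng.integers(nA)
    quad = (log_pi(th1)[a] + grad_log_pi(th1, a) @ (th2 - th1)
            + 0.5 * np.sum((th2 - th1) ** 2))
    max_gap = max(max_gap, log_pi(th2)[a] - quad)  # 1-smoothness: never positive

assert max_gap <= 1e-10
```
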
+
+Now, we separately bound the Lyapounov drift for the asymmetric and symmetric settings. In the following, some notation is overloaded across the two settings when its meaning is clear from context. For the asymmetric setting, we have,
+
+$$
+\begin{array}{l} \Lambda \left(\pi_ {t + 1}\right) - \Lambda \left(\pi_ {t}\right) = \sum_ {z, a} d ^ {\pi^ {*}} (z, a) \log \frac {\pi_ {t} (a | z)}{\pi_ {t + 1} (a | z)} (221) \\ \leq \frac {\eta^ {2}}{2} \| \bar {w} _ {t} \| _ {2} ^ {2} - \eta \sum_ {z, a} d ^ {\pi^ {*}} (z, a) \left\langle \nabla_ {\theta} \log \pi_ {t} (a | z), \bar {w} _ {t} \right\rangle (222) \\ \leq \frac {\eta^ {2}}{2} B ^ {2} - \eta \sum_ {s, z, a} d ^ {\pi^ {*}} (s, z, a) \mathcal {A} ^ {\pi_ {t}} (s, z, a) - \eta \sum_ {s, z, a} d ^ {\pi^ {*}} (s, z, a) \left(\left\langle \nabla_ {\theta} \log \pi_ {t} (a | z), \bar {w} _ {t} \right\rangle - \mathcal {A} ^ {\pi_ {t}} (s, z, a)\right) (223) \\ \leq \frac {\eta^ {2}}{2} B ^ {2} - \eta \sum_ {s, z, a} d ^ {\pi^ {*}} (s, z, a) \mathcal {A} ^ {\pi_ {t}} (s, z, a) + \eta \sum_ {s, z, a} d ^ {\pi^ {*}} (s, z, a) \sqrt {\left(\left\langle \nabla_ {\theta} \log \pi_ {t} (a | z) , \bar {w} _ {t} \right\rangle - \mathcal {A} ^ {\pi_ {t}} (s , z , a)\right) ^ {2}}. (224) \\ \end{array}
+$$
+
+For the symmetric setting, we observe instead,
+
+$$
+\begin{array}{l} \Lambda \left(\pi_ {t + 1}\right) - \Lambda \left(\pi_ {t}\right) = \sum_ {z, a} d ^ {\pi^ {*}} (z, a) \log \frac {\pi_ {t} (a | z)}{\pi_ {t + 1} (a | z)} (225) \\ \leq \frac {\eta^ {2}}{2} B ^ {2} - \eta \sum_ {z, a} d ^ {\pi^ {*}} (z, a) A ^ {\pi_ {t}} (z, a) + \eta \sum_ {z, a} d ^ {\pi^ {*}} (z, a) \sqrt {\left(\left\langle \nabla_ {\theta} \log \pi_ {t} (a | z) , \bar {w} _ {t} \right\rangle - A ^ {\pi_ {t}} (z , a)\right) ^ {2}}. (226) \\ \end{array}
+$$
+
+Now, let $\mathfrak{H}_t$ denote the sigma field of all samples used in the computation of $\pi_t$ (which excludes the samples used for computing $\bar{w}_t$ ), along with all the samples used in the computation of $\overline{Q}^{\pi_t}$ . We define the ideal and approximate loss functions, both in the asymmetric and the symmetric setting,
+
+$$
+\mathcal {L} _ {t} (w) = \mathbb {E} \left[ \left(\left\langle \nabla_ {\theta} \log \pi_ {t} (A | Z), w \right\rangle - \mathcal {A} ^ {\pi_ {t}} (S, Z, A)\right) ^ {2} \mid \mathfrak {H} _ {t} \right] \tag {227}
+$$
+
+$$
+\bar {\mathcal {L}} _ {t} (w) = \mathbb {E} \left[ \left(\left\langle \nabla_ {\theta} \log \pi_ {t} (A | Z), w \right\rangle - \bar {\mathcal {A}} ^ {\pi_ {t}} (S, Z, A)\right) ^ {2} \mid \mathfrak {H} _ {t} \right] \tag {228}
+$$
+
+$$
+L _ {t} (w) = \mathbb {E} \left[ \left(\left\langle \nabla_ {\theta} \log \pi_ {t} (A | Z), w \right\rangle - A ^ {\pi_ {t}} (Z, A)\right) ^ {2} \mid \mathfrak {H} _ {t} \right] \tag {229}
+$$
+
+$$
+\bar {L} _ {t} (w) = \mathbb {E} \left[ \left(\left\langle \nabla_ {\theta} \log \pi_ {t} (A | Z), w \right\rangle - \bar {A} ^ {\pi_ {t}} (Z, A)\right) ^ {2} \mid \mathfrak {H} _ {t} \right]. \tag {230}
+$$
+
+Because $\mathbb{E}\left[\left\| \mathcal{V}^{\pi_t} - \overline{\mathcal{V}}^{\pi_t}\right\|_{d^{\pi_t}}^2\Big|\mathfrak{H}_t\right]\leq \mathbb{E}\left[\left\| \overline{\mathcal{Q}}^{\pi_t} - \mathcal{Q}^{\pi_t}\right\|_{d^{\pi_t}}^2\Big|\mathfrak{H}_t\right]$ , the error between the asymmetric advantage $\mathcal{A}$ and its approximation $\bar{\mathcal{A}}$ is upper bounded by,
+
+$$
+\begin{array}{l} \sqrt {\mathbb {E} \left[ \left(\bar {\mathcal {A}} ^ {\pi_ {t}} (S , Z , A) - \mathcal {A} ^ {\pi_ {t}} (S , Z , A)\right) ^ {2} \mid \mathfrak {H} _ {t} \right]} = \sqrt {\mathbb {E} \left[ \left\| \bar {\mathcal {A}} ^ {\pi_ {t}} - \mathcal {A} ^ {\pi_ {t}} \right\| _ {d ^ {\pi_ {t}}} ^ {2} \mid \mathfrak {H} _ {t} \right]} (231) \\ = \sqrt {\mathbb {E} \left[ \left\| \bar {\mathcal {Q}} ^ {\pi_ {t}} - \bar {\mathcal {V}} ^ {\pi_ {t}} - \mathcal {Q} ^ {\pi_ {t}} + \mathcal {V} ^ {\pi_ {t}} \right\| _ {d ^ {\pi_ {t}}} ^ {2} \mid \mathfrak {H} _ {t} \right]} (232) \\ = \sqrt {\mathbb {E} \left[ \left\| \bar {\mathcal {Q}} ^ {\pi_ {t}} - \mathcal {Q} ^ {\pi_ {t}} + \mathcal {V} ^ {\pi_ {t}} - \bar {\mathcal {V}} ^ {\pi_ {t}} \right\| _ {d ^ {\pi_ {t}}} ^ {2} \mid \mathfrak {H} _ {t} \right]} (233) \\ \leq \sqrt {\mathbb {E} \left[ \left(\left\| \bar {\mathcal {Q}} ^ {\pi_ {t}} - \mathcal {Q} ^ {\pi_ {t}} \right\| _ {d ^ {\pi_ {t}}} + \left\| \mathcal {V} ^ {\pi_ {t}} - \bar {\mathcal {V}} ^ {\pi_ {t}} \right\| _ {d ^ {\pi_ {t}}}\right) ^ {2} \mid \mathfrak {H} _ {t} \right]} (234) \\ \leq \sqrt {\mathbb {E} \left[ \left\| \bar {\mathcal {Q}} ^ {\pi_ {t}} - \mathcal {Q} ^ {\pi_ {t}} \right\| _ {d ^ {\pi_ {t}}} ^ {2} \mid \mathfrak {H} _ {t} \right]} + \sqrt {\mathbb {E} \left[ \left\| \mathcal {V} ^ {\pi_ {t}} - \bar {\mathcal {V}} ^ {\pi_ {t}} \right\| _ {d ^ {\pi_ {t}}} ^ {2} \mid \mathfrak {H} _ {t} \right]} (235) \\ \leq 2 \varepsilon_ {\text {critic,asym}} ^ {\pi_ {t}}, (236) \\ \end{array}
+$$
+
+where (234) is the triangle inequality for $\|\cdot\|_{d^{\pi_t}}$, (235) is Minkowski's inequality, and $\varepsilon_{\mathrm{critic,asym}}^{\pi_t} = \varepsilon_{\mathrm{td,asym}}^{\pi_t} + \varepsilon_{\mathrm{app,asym}}^{\pi_t} + \varepsilon_{\mathrm{shift,asym}}^{\pi_t}$ is given by the upper bound (29) in Theorem 3. Similarly, the error between the symmetric advantage $A$ and its approximation $\overline{A}$ is upper bounded by,
+
+$$
+\sqrt {\mathbb {E} \left[ \left(\bar {A} ^ {\pi_ {t}} (Z , A) - A ^ {\pi_ {t}} (Z , A)\right) ^ {2} \mid \mathfrak {H} _ {t} \right]} \leq 2 \varepsilon_ {\text {critic,sym}} ^ {\pi_ {t}}, \tag {237}
+$$
+
+where $\varepsilon_{\mathrm{critic,sym}}^{\pi_t} = \varepsilon_{\mathrm{td,sym}}^{\pi_t} + \varepsilon_{\mathrm{app,sym}}^{\pi_t} + \varepsilon_{\mathrm{shift,sym}}^{\pi_t} + \varepsilon_{\mathrm{alias,sym}}^{\pi_t}$ is given by the upper bound (35) in Theorem 4. By using the inequality $(x + y)^2 \leq 2x^2 + 2y^2$ ,
+
+$$
+\begin{array}{l} \bar {\mathcal {L}} _ {t} (w) = \mathbb {E} \left[ \left(\left\langle \nabla_ {\theta} \log \pi_ {t} (A | Z), w \right\rangle - \bar {\mathcal {A}} ^ {\pi_ {t}} (S, Z, A)\right) ^ {2} \mid \mathfrak {H} _ {t} \right] (238) \\ = \mathbb {E} \left[ \left(\left\langle \nabla_ {\theta} \log \pi_ {t} (A | Z), w \right\rangle - \mathcal {A} ^ {\pi_ {t}} (S, Z, A) + \mathcal {A} ^ {\pi_ {t}} (S, Z, A) - \bar {\mathcal {A}} ^ {\pi_ {t}} (S, Z, A)\right) ^ {2} \mid \mathfrak {H} _ {t} \right] (239) \\ \leq 2 \mathbb {E} \left[ \left(\left\langle \nabla_ {\theta} \log \pi_ {t} (A | Z), w \right\rangle - \mathcal {A} ^ {\pi_ {t}} (S, Z, A)\right) ^ {2} \mid \mathfrak {H} _ {t} \right] + 2 \mathbb {E} \left[ \left(\mathcal {A} ^ {\pi_ {t}} (S, Z, A) - \bar {\mathcal {A}} ^ {\pi_ {t}} (S, Z, A)\right) ^ {2} \mid \mathfrak {H} _ {t} \right] (240) \\ \leq 2 \mathcal {L} _ {t} (w) + 2 \left(2 \varepsilon_ {\text {critic,asym}} ^ {\pi_ {t}}\right) ^ {2}. (241) \\ \end{array}
+$$
+
+Similarly, we obtain in the symmetric case,
+
+$$
+\bar {L} _ {t} (w) \leq 2 L _ {t} (w) + 2 \left(2 \varepsilon_ {\text {critic,sym}} ^ {\pi_ {t}}\right) ^ {2}. \tag {242}
+$$
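
The inequality $(x + y)^2 \leq 2x^2 + 2y^2$ used here follows from $0 \leq (x - y)^2$; a one-line numeric check over random draws:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=10_000)
y = rng.normal(size=10_000)
# (x + y)^2 <= 2 x^2 + 2 y^2, equivalent to 0 <= (x - y)^2
ok = bool(np.all((x + y) ** 2 <= 2 * x ** 2 + 2 * y ** 2 + 1e-12))
assert ok
```
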
+
+Starting from the ideal objective and following a similar technique, we also obtain,
+
+$$
+\mathcal {L} _ {t} (w) \leq 2 \bar {\mathcal {L}} _ {t} (w) + 2 \left(2 \varepsilon_ {\text {critic,asym}} ^ {\pi_ {t}}\right) ^ {2} \tag {243}
+$$
+
+$$
+L _ {t} (w) \leq 2 \bar {L} _ {t} (w) + 2 \left(2 \varepsilon_ {\text {critic,sym}} ^ {\pi_ {t}}\right) ^ {2}. \tag {244}
+$$
+
+By using Theorem 14.8 in (Shalev-Shwartz & Ben-David, 2014) with step size $\zeta = \frac{B\sqrt{1 - \gamma}}{\sqrt{2N}}$ , we obtain for the average iterate $\bar{w}_t$ under the asymmetric loss and symmetric loss, respectively,
+
+$$
+\bar {\mathcal {L}} _ {t} \left(\bar {w} _ {t}\right) \leq \varepsilon_ {\text {actor}} ^ {2} + \min _ {\| w \| _ {2} \leq B} \bar {\mathcal {L}} _ {t} (w) \tag {245}
+$$
+
+$$
+\bar {L} _ {t} (\bar {w} _ {t}) \leq \varepsilon_ {\text {actor}} ^ {2} + \min _ {\| w \| _ {2} \leq B} \bar {L} _ {t} (w), \tag {246}
+$$
+
+where $\varepsilon_{\mathrm{actor}}^2 = \frac{(2 - \gamma)B}{2(1 - \gamma)\sqrt{N}}$. In expectation, for the ideal asymmetric objective $\mathcal{L}_t$, we obtain,
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \mathcal {L} _ {t} \left(\bar {w} _ {t}\right) \right] \leq 2 \mathbb {E} \left[ \bar {\mathcal {L}} _ {t} \left(\bar {w} _ {t}\right) \right] + 2 \left(2 \varepsilon_ {\text {critic,asym}} ^ {\pi_ {t}}\right) ^ {2} (247) \\ \leq 2 \varepsilon_ {\text {actor}} ^ {2} + 2 \min _ {\| w \| _ {2} \leq B} \bar {\mathcal {L}} _ {t} (w) + 2 \left(2 \varepsilon_ {\text {critic,asym}} ^ {\pi_ {t}}\right) ^ {2} (248) \\ \leq 2 \varepsilon_ {\text {actor}} ^ {2} + 2 \left(2 \min _ {\| w \| _ {2} \leq B} \mathcal {L} _ {t} (w) + 2 \left(2 \varepsilon_ {\text {critic,asym}} ^ {\pi_ {t}}\right) ^ {2}\right) + 2 \left(2 \varepsilon_ {\text {critic,asym}} ^ {\pi_ {t}}\right) ^ {2} (249) \\ = 2 \varepsilon_ {\text {actor}} ^ {2} + 4 \min _ {\| w \| _ {2} \leq B} \mathcal {L} _ {t} (w) + 6 \left(2 \varepsilon_ {\text {critic,asym}} ^ {\pi_ {t}}\right) ^ {2} (250) \\ = 2 \varepsilon_ {\text {actor}} ^ {2} + 4 \left(\varepsilon_ {\text {grad,asym}} ^ {\pi_ {t}}\right) ^ {2} + 6 \left(2 \varepsilon_ {\text {critic,asym}} ^ {\pi_ {t}}\right) ^ {2}, (251) \\ \end{array}
+$$
+
+where we define the actor gradient function approximation error as,
+
+$$
+\left(\varepsilon_ {\text {grad,asym}} ^ {\pi_ {t}}\right) ^ {2} = \min _ {\| w \| _ {2} \leq B} \mathcal {L} _ {t} (w). \tag {252}
+$$
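
The projected-SGD guarantee invoked above (Theorem 14.8 of Shalev-Shwartz & Ben-David, 2014) can be illustrated with a toy least-squares sketch, where the average iterate of projected SGD with a step of order $B/\sqrt{N}$ approaches the best predictor in the ball. All names, dimensions, and constants below are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(6)
d, B, N = 3, 2.0, 4000
w_star = np.array([1.0, -0.5, 0.3])          # hypothetical best fit, ||w_star||_2 <= B

def sample():
    x = rng.normal(size=d) / np.sqrt(d)
    return x, x @ w_star + 0.1 * rng.normal()

def project(w):
    n = np.linalg.norm(w)
    return w if n <= B else w * (B / n)      # projection onto {||w||_2 <= B}

zeta = B / np.sqrt(N)                        # step size of order B / sqrt(N)
w = np.zeros(d)
avg = np.zeros(d)
for _ in range(N):
    x, y = sample()
    w = project(w - zeta * 2.0 * (x @ w - y) * x)   # projected SGD on the squared loss
    avg += w / N                                     # running average iterate

# evaluate the average iterate on fresh samples
fresh = [sample() for _ in range(2000)]
loss_avg = np.mean([(x @ avg - y) ** 2 for x, y in fresh])
loss_zero = np.mean([y ** 2 for x, y in fresh])      # loss of the trivial predictor w = 0
```

The average iterate stays feasible (its norm is at most $B$, by convexity of the ball) and achieves a far smaller loss than the trivial predictor.
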
+
+Similarly, we obtain in expectation for the ideal symmetric objective $L_{t}$,
+
+$$
+\mathbb {E} \left[ L _ {t} \left(\bar {w} _ {t}\right) \right] \leq 2 \varepsilon_ {\text {actor}} ^ {2} + 4 \left(\varepsilon_ {\text {grad,sym}} ^ {\pi_ {t}}\right) ^ {2} + 6 \left(2 \varepsilon_ {\text {critic,sym}} ^ {\pi_ {t}}\right) ^ {2}, \tag {253}
+$$
+
+where we define the actor gradient function approximation error as,
+
+$$
+\left(\varepsilon_ {\text {grad,sym}} ^ {\pi_ {t}}\right) ^ {2} = \min _ {\| w \| _ {2} \leq B} L _ {t} (w). \tag {254}
+$$
+
+Now, let us go back to the asymmetric and symmetric Lyapounov drift bounds of equations (224) and (226). First, we assume that there exists $\overline{C}_{\infty} < \infty$ such that $\sup_{t\geq 0}\mathbb{E}[C_t]\leq \overline{C}_{\infty}$ with,
+
+$$
+C _ {t} = \sup _ {s, z, a} \left| \frac {d ^ {\pi^ {*}} (s , z) \pi^ {*} (a | z)}{d ^ {\pi_ {\theta_ {t}}} (s , z) \pi_ {\theta_ {t}} (a | z)} \right|. \tag {255}
+$$
+
+Second, we leverage the performance difference lemma to bound the advantage. For the asymmetric setting, the performance difference lemma for MDPs (Kakade & Langford, 2002) holds because of the Markovianity of $(S_{t},Z_{t})$,
+
+$$
+(1 - \gamma) \left(V ^ {\pi^ {*}} \left(s _ {0}, z _ {0}\right) - V ^ {\pi_ {t}} \left(s _ {0}, z _ {0}\right)\right) = \mathbb {E} ^ {d ^ {\pi^ {*}}} \left[ \mathcal {A} ^ {\pi_ {t}} (S, Z, A) \mid S _ {0} = s _ {0}, Z _ {0} = z _ {0} \right]. \tag {256}
+$$
+
+We note that $\mathbb{E}\left[V^{\pi^{*}}(S_{0},Z_{0}) - V^{\pi_{t}}(S_{0},Z_{0})\right] = \mathbb{E}\left[J(\pi^{*}) - J(\pi_{t})\right]$ , such that,
+
+$$
+\begin{array}{l} - \mathbb {E} ^ {d ^ {\pi^ {*}}} \left[ \mathcal {A} ^ {\pi_ {t}} (S, Z, A) \right] = - (1 - \gamma) \left(J (\pi^ {*}) - J (\pi_ {t})\right) (257) \\ = - (1 - \gamma) \left(J \left(\pi^ {*}\right) - J \left(\pi_ {t}\right)\right) + 2 \varepsilon_ {\inf , \text {asym}}, (258) \\ \end{array}
+$$
+
+where $\varepsilon_{\mathrm{inf,asym}} = 0$ . For the symmetric setting, using Lemma E.1 with $\pi_2 = \pi^*$ and $\pi_1 = \pi_t$ , we note that,
+
+$$
+(1 - \gamma) \left(V ^ {\pi^ {*}} \left(z _ {0}\right) - V ^ {\pi_ {t}} \left(z _ {0}\right)\right) \leq \mathbb {E} ^ {d ^ {\pi^ {*}}} \left[ A ^ {\pi_ {t}} (Z, A) \mid Z _ {0} = z _ {0} \right] + 2 \varepsilon_ {\inf } ^ {\pi^ {*}} \left(z _ {0}\right), \tag {259}
+$$
+
+which implies,
+
+$$
+- \mathbb {E} ^ {d ^ {\pi^ {*}}} [ A ^ {\pi_ {t}} (Z, A) | Z _ {0} = z _ {0} ] \leq - (1 - \gamma) \left(V ^ {\pi^ {*}} (z _ {0}) - V ^ {\pi_ {t}} (z _ {0})\right) + 2 \varepsilon_ {\inf } ^ {\pi^ {*}} (z _ {0}). \tag {260}
+$$
+
+We note that $\mathbb{E}\left[V^{\pi^{*}}(Z_{0}) - V^{\pi_{t}}(Z_{0})\right] = \mathbb{E}\left[J(\pi^{*}) - J(\pi_{t})\right]$ and we denote $\mathbb{E}\left[\varepsilon_{\inf}^{\pi^{*}}(Z_{0})\right]$ with $\varepsilon_{\inf,\mathrm{sym}}$ , so that,
+
+$$
+\varepsilon_ {\inf , \mathrm {sym}} = \mathbb {E} \left[ \mathbb {E} ^ {\pi^ {*}} \left[ \sum_ {k = 0} ^ {\infty} \gamma^ {k} \left\| \hat {b} _ {k} - b _ {k} \right\| _ {\mathrm {T V}} \,\middle|\, Z _ {0} \right] \right] \tag {261}
+$$
+
+$$
+= \mathbb {E} ^ {\pi^ {*}} \left[ \sum_ {k = 0} ^ {\infty} \gamma^ {k} \left\| \hat {b} _ {k} - b _ {k} \right\| _ {\mathrm {T V}} \right]. \tag {262}
+$$
+
+By rearranging, we have,
+
+$$
+- \mathbb {E} ^ {d ^ {\pi^ {*}}} [ A ^ {\pi_ {t}} (Z, A) ] \leq - (1 - \gamma) \mathbb {E} [ J (\pi^ {*}) - J (\pi_ {t}) ] + 2 \varepsilon_ {\inf , \mathrm {sym}}. \tag {263}
+$$
+
+Note that $\sum_{s,z,a}d^{\pi^{*}}(s,z,a)f(s,z,a) = \sum_{s,z,a}\frac{d^{\pi^{*}}(s,z,a)}{d^{\pi_{t}}(s,z,a)} d^{\pi_{t}}(s,z,a)f(s,z,a)\leq C_{t}\sum_{s,z,a}d^{\pi_{t}}(s,z,a)f(s,z,a)$ for positive $f$ . Taking expectation over the asymmetric Lyapounov drift of equation (224), we obtain using equation (255),
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \Lambda \left(\pi_ {t + 1}\right) - \Lambda \left(\pi_ {t}\right) \right] \leq \frac {\eta^ {2}}{2} B ^ {2} - \eta \sum_ {s, z, a} d ^ {\pi^ {*}} (s, z, a) \mathcal {A} ^ {\pi_ {t}} (s, z, a) \\ + \eta \sum_ {s, z, a} d ^ {\pi^ {*}} (s, z, a) \sqrt {\left(\left\langle \nabla_ {\theta} \log \pi_ {t} (a | z) , \bar {w} _ {t} \right\rangle - \mathcal {A} ^ {\pi_ {t}} (s , z , a)\right) ^ {2}} (264) \\ \leq \frac {\eta^ {2}}{2} B ^ {2} - \eta (1 - \gamma) \mathbb {E} \left[ J \left(\pi^ {*}\right) - J \left(\pi_ {t}\right) \right] + 2 \eta \varepsilon_ {\inf , \text {asym}} \\ + \eta \bar {C} _ {\infty} \sqrt {2 \varepsilon_ {\text {actor}} ^ {2} + 4 \left(\varepsilon_ {\text {grad,asym}} ^ {\pi_ {t}}\right) ^ {2} + 6 \left(2 \varepsilon_ {\text {critic,asym}} ^ {\pi_ {t}}\right) ^ {2}} (265) \\ \leq \frac {\eta^ {2}}{2} B ^ {2} - \eta (1 - \gamma) \mathbb {E} \left[ J \left(\pi^ {*}\right) - J \left(\pi_ {t}\right) \right] + 2 \eta \varepsilon_ {\inf , \text {asym}} \\ + \eta \bar {C} _ {\infty} \left(\sqrt {2} \varepsilon_ {\text {actor}} + 2 \varepsilon_ {\text {grad,asym}} ^ {\pi_ {t}} + 2 \sqrt {6} \varepsilon_ {\text {critic,asym}} ^ {\pi_ {t}}\right). (266) \\ \end{array}
+$$
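
The change-of-measure step used here, $\sum d^{\pi^*} f \leq C_t \sum d^{\pi_t} f$ for positive $f$ with $C_t$ the concentrability coefficient of equation (255), can be sanity-checked numerically with arbitrary distributions:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 8
d_star = rng.dirichlet(np.ones(n))   # occupancy measure under pi*
d_t = rng.dirichlet(np.ones(n))      # occupancy measure under pi_t
f = rng.uniform(0.0, 5.0, size=n)    # any positive test function

C_t = np.max(d_star / d_t)           # concentrability coefficient, as in equation (255)
# change of measure: sum_x d*(x) f(x) = sum_x (d*/d_t)(x) d_t(x) f(x) <= C_t sum_x d_t(x) f(x)
assert d_star @ f <= C_t * (d_t @ f) + 1e-12
```
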
+
+Taking expectation over the symmetric drift of equation (226), we similarly obtain,
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \Lambda \left(\pi_ {t + 1}\right) - \Lambda \left(\pi_ {t}\right) \right] \leq \frac {\eta^ {2}}{2} B ^ {2} - \eta \sum_ {z, a} d ^ {\pi^ {*}} (z, a) A ^ {\pi_ {t}} (z, a) \\ + \eta \sum_ {z, a} d ^ {\pi^ {*}} (z, a) \sqrt {\left(\left\langle \nabla_ {\theta} \log \pi_ {t} (a | z) , \bar {w} _ {t} \right\rangle - A ^ {\pi_ {t}} (z , a)\right) ^ {2}} (267) \\ \leq \frac {\eta^ {2}}{2} B ^ {2} - \eta (1 - \gamma) \mathbb {E} \left[ J (\pi^ {*}) - J (\pi_ {t}) \right] + 2 \eta \varepsilon_ {\mathrm {inf,sym}} \\ + \eta \bar {C} _ {\infty} \left(\sqrt {2} \varepsilon_ {\text {actor}} + 2 \varepsilon_ {\text {grad,sym}} ^ {\pi_ {t}} + 2 \sqrt {6} \varepsilon_ {\text {critic,sym}} ^ {\pi_ {t}}\right). (268) \\ \end{array}
+$$
+
+Given the similarity of equation (266) and equation (268), in the following we denote the error terms by $\varepsilon_{\mathrm{inf}}$, $\varepsilon_{\mathrm{grad}}^{\pi_t}$ and $\varepsilon_{\mathrm{critic}}^{\pi_t}$, irrespective of the setting (i.e., asymmetric or symmetric).
+
+By summing all Lyapounov drifts, we obtain,
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \Lambda \left(\pi_ {T}\right) - \Lambda \left(\pi_ {0}\right) \right] \leq T \frac {\eta^ {2}}{2} B ^ {2} - \eta (1 - \gamma) \sum_ {t = 0} ^ {T - 1} \mathbb {E} \left[ J \left(\pi^ {*}\right) - J \left(\pi_ {t}\right) \right] + 2 \eta T \varepsilon_ {\inf } \\ + \eta \sum_ {t = 0} ^ {T - 1} \bar {C} _ {\infty} \left(\sqrt {2} \varepsilon_ {\text {actor}} + 2 \varepsilon_ {\text {grad}} ^ {\pi_ {t}} + 2 \sqrt {6} \varepsilon_ {\text {critic}} ^ {\pi_ {t}}\right) (269) \\ \leq T \frac {\eta^ {2}}{2} B ^ {2} - \eta (1 - \gamma) \sum_ {t = 0} ^ {T - 1} \mathbb {E} [ J (\pi^ {*}) - J (\pi_ {t}) ] + 2 \eta T \varepsilon_ {\inf } \\ + \eta \bar {C} _ {\infty} \left(\sqrt {2} T \varepsilon_ {\text {actor}} + 2 \sum_ {t = 0} ^ {T - 1} \varepsilon_ {\text {grad}} ^ {\pi_ {t}} + 2 \sqrt {6} \sum_ {t = 0} ^ {T - 1} \varepsilon_ {\text {critic}} ^ {\pi_ {t}}\right). (270) \\ \end{array}
+$$
+
+Since $\pi_0$ is initialized at the uniform policy with $\theta_0\coloneqq 0$ , we have,
+
+$$
+\Lambda \left(\pi_ {0}\right) = \sum_ {z \in \mathcal {Z}} d ^ {\pi^ {*}} (z) \mathrm {K L} \left(\pi^ {*} (\cdot | z) \| \pi_ {0} (\cdot | z)\right) \tag {271}
+$$
+
+$$
+\begin{array}{l} = \sum_ {z \in \mathcal {Z}} d ^ {\pi^ {*}} (z) \left(\sum_ {a \in \mathcal {A}} \pi^ {*} (a | z) \log \pi^ {*} (a | z) - \sum_ {a \in \mathcal {A}} \pi^ {*} (a | z) \log \pi_ {0} (a | z)\right) (272) \\ = \sum_ {z \in \mathcal {Z}} d ^ {\pi^ {*}} (z) \left(\sum_ {a \in \mathcal {A}} \pi^ {*} (a | z) \log \pi^ {*} (a | z) - \sum_ {a \in \mathcal {A}} \pi^ {*} (a | z) \log \frac {1}{| \mathcal {A} |}\right) (273) \\ = \sum_ {z \in \mathcal {Z}} d ^ {\pi^ {*}} (z) \left(\sum_ {a \in \mathcal {A}} \pi^ {*} (a | z) \log \pi^ {*} (a | z) + \log | \mathcal {A} |\right) (274) \\ = \sum_ {z \in \mathcal {Z}} d ^ {\pi^ {*}} (z) (\log | \mathcal {A} | - H \left(\pi^ {*} (\cdot | z)\right)) (275) \\ \leq \sum_ {z \in \mathcal {Z}} d ^ {\pi^ {*}} (z) \log | \mathcal {A} | (276) \\ \leq \log | \mathcal {A} |, (277) \\ \end{array}
+$$
+
+where $H$ denotes the Shannon entropy. Rearranging and dividing by $\eta T$, we obtain, after using $\Lambda(\pi_T) \geq 0$,
+
+$$
+\begin{array}{l} (1 - \gamma) \frac {1}{T} \sum_ {t = 0} ^ {T - 1} \mathbb {E} [ J (\pi^ {*}) - J (\pi_ {t}) ] \leq \frac {\log | \mathcal {A} |}{\eta T} + \frac {\eta}{2} B ^ {2} + 2 \varepsilon_ {\inf } \\ + \bar {C} _ {\infty} \left(\sqrt {2} \varepsilon_ {\text {actor}} + 2 \frac {1}{T} \sum_ {t = 0} ^ {T - 1} \varepsilon_ {\text {grad}} ^ {\pi_ {t}} + 2 \sqrt {6} \frac {1}{T} \sum_ {t = 0} ^ {T - 1} \varepsilon_ {\text {critic}} ^ {\pi_ {t}}\right). \tag {278} \\ \end{array}
+$$
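
The bound $\Lambda(\pi_0) \leq \log|\mathcal{A}|$ used in this step rests on the identity $\mathrm{KL}(p \,\|\, \mathrm{uniform}) = \log|\mathcal{A}| - H(p)$, which can be verified numerically for an arbitrary distribution:

```python
import numpy as np

rng = np.random.default_rng(5)
nA = 6
p = rng.dirichlet(np.ones(nA))           # an arbitrary policy pi*(. | z)
kl_to_uniform = np.sum(p * np.log(p * nA))
entropy = -np.sum(p * np.log(p))
# KL(p || uniform) = log|A| - H(p) <= log|A|, since the entropy is non-negative
assert np.isclose(kl_to_uniform, np.log(nA) - entropy)
assert kl_to_uniform <= np.log(nA) + 1e-12
```
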
+
+It can also be noted that $\min_{0\leq t < T}[x_t]\leq \frac{1}{T}\sum_{t = 0}^{T - 1}x_t$, which implies that,
+
+$$
+\begin{array}{l} (1 - \gamma) \min _ {0 \leq t < T} \mathbb {E} [ J (\pi^ {*}) - J (\pi_ {t}) ] \leq \frac {\log | \mathcal {A} |}{\eta T} + \frac {\eta}{2} B ^ {2} + 2 \varepsilon_ {\inf } \\ + \bar {C} _ {\infty} \left(\sqrt {2} \varepsilon_ {\text {actor}} + 2 \frac {1}{T} \sum_ {t = 0} ^ {T - 1} \varepsilon_ {\text {grad}} ^ {\pi_ {t}} + 2 \sqrt {6} \frac {1}{T} \sum_ {t = 0} ^ {T - 1} \varepsilon_ {\text {critic}} ^ {\pi_ {t}}\right). \tag {279} \\ \end{array}
+$$
+
+Let us define the worst actor gradient function approximation error,
+
+$$
+\begin{array}{l} \varepsilon_ {\text {grad}} = \sup _ {0 \leq t < T} \varepsilon_ {\text {grad}} ^ {\pi_ {t}} (280) \\ = \sup _ {0 \leq t < T} \sqrt {\min _ {\| w \| _ {2} \leq B} L _ {t} (w)}, (281) \\ \end{array}
+$$
+
+and let us note that,
+
+$$
+\frac {1}{T} \sum_ {t = 0} ^ {T - 1} \varepsilon_ {\text {grad}} ^ {\pi_ {t}} \leq \varepsilon_ {\text {grad}}. \tag {282}
+$$
+
+By setting $\eta = \frac{1}{\sqrt{T}}$ , we obtain,
+
+$$
+\begin{array}{l} (1 - \gamma) \min _ {0 \leq t < T} \mathbb {E} [ J (\pi^ {*}) - J (\pi_ {t}) ] \leq \frac {\log | \mathcal {A} |}{\sqrt {T}} + \frac {B ^ {2}}{2 \sqrt {T}} + 2 \varepsilon_ {\inf } \\ + \bar {C} _ {\infty} \left(\sqrt {2} \varepsilon_ {\text {actor}} + 2 \frac {1}{T} \sum_ {t = 0} ^ {T - 1} \varepsilon_ {\text {grad}} ^ {\pi_ {t}} + 2 \sqrt {6} \frac {1}{T} \sum_ {t = 0} ^ {T - 1} \varepsilon_ {\text {critic}} ^ {\pi_ {t}}\right) (283) \\ \leq \frac {B ^ {2} + 2 \log | \mathcal {A} |}{2 \sqrt {T}} + 2 \mathbb {E} ^ {\pi^ {*}} \left[ \sum_ {k = 0} ^ {\infty} \gamma^ {k} \left\| \hat {b} _ {k} - b _ {k} \right\| _ {\mathrm {T V}} \right] \\ + \bar {C} _ {\infty} \left(\sqrt {\frac {(2 - \gamma) B}{(1 - \gamma) \sqrt {N}}} + 2 \varepsilon_ {\text {grad}} + 2 \sqrt {6} \frac {1}{T} \sum_ {t = 0} ^ {T - 1} \varepsilon_ {\text {critic}} ^ {\pi_ {t}}\right). (284) \\ \end{array}
+$$
+
+This concludes the proof.
\ No newline at end of file
new file mode 100644
index 0000000000000000000000000000000000000000..605ef004c31d0ef4e8d7139ade315a9526f12346
--- /dev/null
+++ b/atheoreticalstudyofhyperselfattentionthroughthelensofinteractionsrepresentationtraininggeneralization/8bf05f92-8f6b-4674-9821-6fe66c395caa_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7785116bd9f46520633988817fa494b5eae754f40bc89cc2d55f62b66e56a07e
+size 924420
diff --git a/atheoreticalstudyofhyperselfattentionthroughthelensofinteractionsrepresentationtraininggeneralization/full.md b/atheoreticalstudyofhyperselfattentionthroughthelensofinteractionsrepresentationtraininggeneralization/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..58a0ad50a918c2c105baf26771e386a569a07900
--- /dev/null
+++ b/atheoreticalstudyofhyperselfattentionthroughthelensofinteractionsrepresentationtraininggeneralization/full.md
@@ -0,0 +1,2369 @@
+# A Theoretical Study of (Hyper) Self-Attention through the Lens of Interactions: Representation, Training, Generalization
+
+Muhammed Ustaomeroglu1 Guannan Qu1
+
+# Abstract
+
+Self-attention has emerged as a core component of modern neural architectures, yet its theoretical underpinnings remain elusive. In this paper, we study self-attention through the lens of interacting entities, ranging from agents in multi-agent reinforcement learning to alleles in genetic sequences, and show that a single layer of linear self-attention can efficiently represent, learn, and generalize functions capturing pairwise interactions, including in out-of-distribution scenarios. Our analysis reveals that self-attention acts as a mutual interaction learner under minimal assumptions on the diversity of interaction patterns observed during training, thereby encompassing a wide variety of real-world domains. In addition, we validate our theoretical insights through experiments demonstrating that self-attention learns interaction functions and generalizes across both population distributions and out-of-distribution scenarios. Building on our theory, we introduce HyperFeatureAttention, a novel neural network module designed to learn couplings of different feature-level interactions between entities. Furthermore, we propose HyperAttention, a new module that extends beyond pairwise interactions to capture multi-entity dependencies, such as three-way, four-way, or general $n$-way interactions.
+
+# 1. Introduction
+
+Ever since the invention of Transformers (Vaswani et al., 2023), attention has been a core building block across many domains, spanning natural language processing (Brown et al., 2020; Devlin et al., 2019), computer vision (Dosovitskiy et al., 2021), protein structure prediction (Jumper et al., 2021),
+
+$^{1}$ Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, USA. Correspondence to: Muhammed Ustaomeroglu .
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+reinforcement learning (Chen et al., 2021). Despite the success of attention, our formal understanding of its representation, optimization, and generalization abilities is in its early stages.
+
+Recent theoretical investigations illuminated Transformers' representational abilities/limitations (Liu et al., 2023; Sanford et al., 2023) and training dynamics (Ahn et al., 2023; Jelassi et al., 2022; Gao et al., 2024) from different perspectives (e.g., language modeling, image patch classification, in context learning, etc.). Despite this progress, current theoretical frameworks exhibit critical limitations:
+
+(i) Existing analyses often target isolated problems, lacking a unified perspective for characterizing Transformers' capabilities across diverse domains. In contrast, our theory makes an attempt to provide a unified perspective by assuming that the data comes from a mutual interaction model, which we show captures broad applications.
+(ii) Most mathematically rigorous theories overlook test-time generalization, particularly robustness to out-of-distribution (OOD) shifts. Our analysis addresses OOD in terms of length generalization.
+(iii) Mathematically rigorous theories typically offer interpretations only for the predetermined parameters. In contrast, our approach explains a broader set of parameters—many of which may initially appear unintuitive.
+(iv) Generally, rigorous theories rely on restrictive assumptions about model parameters. In contrast, our framework does not impose such assumptions on the parameters. Our approach only requires mild and possibly inevitable conditions on the data distribution, such as training data versatility.
+
+In this work, we adopt an interacting-entities viewpoint to study self-attention, where each token represents an interacting entity (e.g., agents in multi-agent reinforcement learning, particles in physical simulations, amino acids in protein sequences, or words in natural language). Specifically, we introduce a function that models interactions among these entities and demonstrate its applicability across diverse domains, including the colliding agents environment, the genotype-phenotype mapping task, a vision task, and time series prediction. Under this viewpoint, we prove that a single-layer linear self-attention can efficiently represent such functions and show that gradient-descent training converges to the parameters realizing these interactions, under mild assumptions on the data. In addition, we demonstrate versatility requirements on the training data such that the learned parameters generalize both to the test distribution and out of distribution (length generalization). By neither imposing restrictive constraints on model parameters nor limiting ourselves to particular domains, our framework unifies diverse application scenarios and offers a novel theoretical lens on how Transformers learn dependencies among multiple interacting entities.
+
+We further validate our theoretical insights on representation, convergence, and generalization through controlled experiments, demonstrating that the learned model parameters closely align with theoretical predictions. Beyond analyzing attention patterns, we highlight how the parameters themselves can be directly interpreted to uncover meaningful interactions among entities.
+
+Building on these insights, our investigations confirm that self-attention excels at capturing mutual interactions between entities. Motivated by this, we introduce two novel generalizations: (i) HyperFeatureAttention, for capturing couplings of different interactions between features of the entities, and (ii) HyperAttention, for capturing higher-order interactions (e.g., three-way or four-way) between entities. Extending our single-layer analysis, we show that HyperFeatureAttention can efficiently represent the couplings of feature interactions. In addition, we show that HyperAttention can represent and learn higher-order interactions, with the corresponding theory.
+
+We summarize our main contributions as follows:
+
+- In Section 3, we present a unified perspective where each token (e.g., agent, amino acid, pixel patch) is treated as an interacting entity, seamlessly bridging several experimental settings.
+- In Section 4, we show that gradient flow, on standard mean squared error loss, converges to a solution that captures how the entities interact. Furthermore, the learned parameters generalize to both unseen examples from the task distribution and out of distribution (varying sequence lengths), under suitable versatility conditions in the training data.
+
+- In Section 7, we provide experiments that validate our theoretical predictions with clear interpretation of the learned parameters.
+- In Section 5, we introduce HyperFeatureAttention, a novel mechanism designed to capture couplings of feature interactions. In Section 6, we present HyperAttention, which models higher-order dependencies between entities, such as three-way and four-way interactions. Also, we provide accompanying theoretical analyses of these models' capabilities. We also provide some preliminary experiments on these novel models in Section 7.
+
+# Related Works.
+
+Transformer Representation Theory. A large body of work has illuminated the representational abilities of self-attention from various angles (Yao et al., 2023; Bhattamishra et al., 2020a; Wei et al., 2023; Kajitsuka & Sato, 2024; Nath et al., 2024; Luo et al., 2022; Li et al., 2024). For instance, Transformers have been shown to be Turing-complete and capable of simulating intricate sequential computations (Bhattamishra et al., 2020b), act as provably efficient "compilers" for domain-specific languages (Zhai et al., 2024), and approximate key operators such as sparse or local averaging with sublinear complexity (Likhosherstov et al., 2021; Sanford et al., 2023; Edelman et al., 2022). Their abilities and limitations have also been explored in POMDP settings (Lu et al., 2024), from automata-theoretic perspectives (Liu et al., 2023), in sequence-to-sequence tasks (Yun et al., 2020), and in hidden Markov model learning scenarios (Hu et al., 2024).
+
+Transformer Convergence Analysis. Parallel to the progress on representation, another line of research has investigated the convergence properties of training Transformers (Ahn et al., 2023; Tarzanagh et al., 2024; Li et al., 2023; Tian et al., 2023; Song et al., 2024; Huang et al., 2024a; Chen et al., 2024). These studies analyze training via gradient flow in simplified yet insightful settings (Yang et al., 2024), establish conditions under which one can efficiently learn multi-head attention layers (Chen & Li, 2024; Deora et al., 2023), or employ mean-field methods to show global convergence in large-scale regimes (Gao et al., 2024). Additional works examine specialized domains such as masked visual pretraining (Huang et al., 2024b) and spatial structure learning (Jelassi et al., 2022), or investigate sparse token selection tasks (Wang et al., 2024).
+
+Despite the valuable insights offered by these studies, most of them share some or all of the limitations (i), (ii), (iii) listed in the second paragraph of the introduction.
+
+# 2. Preliminaries
+
+Self-Attention. The self-attention mechanism is a core component of Transformers (Vaswani et al., 2023), enabling
+
+models to learn dependencies between input tokens effectively. For a sequence of $L$ input tokens represented as a matrix $\mathbf{X} \in \mathbb{R}^{L \times d}$ , the self-attention is defined as:
+
+$$
+\mathbf {S A} ^ {\sigma} (\mathbf {X}) = \sigma \left(\frac {\mathbf {X W} ^ {Q} (\mathbf {X W} ^ {K}) ^ {\top}}{\sqrt {d _ {k}}}\right) \mathbf {X W} ^ {V},
+$$
+
+where $\mathbf{W}^Q, \mathbf{W}^K, \mathbf{W}^V \in \mathbb{R}^{d \times d_k}$ are the learnable projection matrices. Defining $\mathbf{C} = \mathbf{W}^Q (\mathbf{W}^K)^{\top} / \sqrt{d_k}$ , we can write the same equation as
+
+$$
+\mathbf {S A} ^ {\sigma} (\mathbf {X}) = \sigma \left(\mathbf {X C X} ^ {\top}\right) \mathbf {X W} ^ {V}.
+$$
+
+Since the introduction of attention mechanisms (Bahdanau et al., 2016), the function $\sigma : \mathbb{R}^{L \times L} \to \mathbb{R}^{L \times L}$ has been predominantly implemented as a row-wise softmax operation, where the input matrix to $\sigma$ , commonly referred to as the attention scores, determines the relative importance of different tokens in the sequence.
+
+Alternative Attention Functions. Recent advancements have explored alternative $\sigma$ functions beyond the traditional softmax. One prominent direction is using linear self-attention mechanisms, which reduce the computational complexity of self-attention from $\mathcal{O}(L^2)$ to $\mathcal{O}(L)$ . Linear attention methods approximate the softmax function while maintaining comparable performance in many tasks (Katharopoulos et al., 2020; Choromanski et al., 2022). For instance, the Performer model introduces kernel-based methods to approximate the softmax operation efficiently, achieving scalability without significant loss in accuracy (Choromanski et al., 2022). Moreover, alternative activation functions, such as ReLU, cosine, polynomial and sigmoid based transformations, have been explored, showing competitive or even superior performance compared to softmax in some tasks (Koohpayegani & Pirsiavash, 2024; Kacham et al., 2024; Ramapuram et al., 2024).
+
+To simplify theoretical analysis while shedding light on a diverse range of self-attention implementations, we adopt a linear variant of self-attention, which preserves the core characteristics of self-attention through its reliance on attention scores.
+
+Linear Self-Attention. Linear self-attention simplifies the attention by omitting the $\sigma$ operation, resulting in:
+
+$$
+\mathbf{SA}^{\text{lin}}(\mathbf{X}) = \left(\mathbf{X C X}^{\top}\right) \mathbf{X W}^{V}. \tag{1}
+$$
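As a concrete reference, the linear self-attention of Eq. 1 can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation; the shapes and random weights are arbitrary choices for the example:

```python
import numpy as np

def linear_self_attention(X, WQ, WK, WV):
    """SA_lin(X) = (X C X^T) X W^V with C = W^Q (W^K)^T / sqrt(d_k), as in Eq. 1."""
    C = WQ @ WK.T / np.sqrt(WK.shape[1])
    attention_scores = X @ C @ X.T      # (L, L) pairwise scores
    return attention_scores @ (X @ WV)  # no sigma: the softmax is omitted

# Illustrative shapes: L tokens, embedding dim d, head dim d_k
rng = np.random.default_rng(0)
L, d, dk = 6, 4, 3
X = rng.normal(size=(L, d))
WQ = rng.normal(size=(d, dk))
WK = rng.normal(size=(d, dk))
WV = rng.normal(size=(d, dk))

out = linear_self_attention(X, WQ, WK, WV)
print(out.shape)  # (6, 3)
```

Note that the map is cubic in $\mathbf{X}$ (scaling $\mathbf{X}$ by $c$ scales the output by $c^3$), which is exactly the structure the convergence analysis later exploits.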
+
+Although omitting the $\sigma$ operation may appear to be an oversimplification, extensive theoretical and empirical studies confirm the power of linear self-attention. Notably, layers of linear self-attention can implement gradient descent and preconditioned gradient descent (von Oswald et al., 2023; Ahn et al., 2023). In addition, variants of softmax, ReLU, and linear transformers (including the exact version used here) can perform functional gradient descent to learn nonlinear functions in context (Cheng et al., 2024). Furthermore, (Ahn et al., 2024) demonstrates that linear self-attention replicates key optimization dynamics of Transformers, such as heavy-tailed gradient noise and ill-conditioned loss landscapes, without softmax or feedforward layers. This simplification retains the computational advantages of linearity while enabling rigorous analysis of phenomena like adaptive optimizer superiority (e.g., Adam over SGD) and gradient convergence. Critically, insights from this abstraction extend to softmax-based Transformers, particularly in understanding optimization stability and generalization under varying data distributions or model depths (Ahn et al., 2024).
+
+In the following sections, we explore the capabilities of linear self-attention across diverse tasks to illustrate its practical and theoretical value. By striking a balance between tractability and expressiveness, linear self-attention offers a powerful framework for investigating and enhancing attention-based architectures.
+
+# 3. Representing Mutual Interactions with Attention
+
+Consider a discrete finite domain (or "vocabulary") $S = \{\alpha, \beta, \gamma, \omega, \ldots\}$ with cardinality $|\mathcal{S}|$ . In our setting we have tuples of $L$ elements (or sequences of length $L$ ) from the domain, denoted by $\mathcal{X}$ and entries of which are uniquely indexed by the integers in $[L]$ . Here, $[i]$ denotes the set $\{0, 1, \ldots, i - 1\}$ for any $i \in \mathbb{Z}^+$ . For each index $i$ , we denote the corresponding element $\mathcal{X}(i) \in S$ . Thus, we can write $\mathcal{X} = (\mathcal{X}(0), \mathcal{X}(1), \ldots, \mathcal{X}(L - 1))$ , which is distributed according to $\mathcal{X} \sim \mathcal{D}$ . We also have tuples $\mathcal{Y}$ with elements from a corresponding relatively small set $\mathcal{S}_{\mathcal{Y}}$ . The tuples are jointly distributed according to a task distribution $(\mathcal{X}, \mathcal{Y}) \sim \mathcal{D}_{\mathcal{X} \times \mathcal{Y}}$ . In order to train a neural network, we map each element of $\mathcal{S}$ to a $d$ -dimensional embedding space via a function $\mathbf{x}: S \to \mathbb{R}^{d}$ and each element of $\mathcal{S}_{\mathcal{Y}}$ to a corresponding vector or a scalar depending on the task, via the function $\mathbf{y}: \mathcal{S}_{\mathcal{Y}} \to \mathbb{R}^{d_2}$ . Additionally, we stack the embeddings of the elements in $\mathcal{X}$ and $\mathcal{Y}$ as rows of $\mathbf{X} \in \mathbb{R}^{L \times d}$ and $\mathbf{Y} \in \mathbb{R}^{L \times d_2}$ matrices. We denote their distribution as $(\mathbf{X}, \mathbf{Y}) \sim \mathcal{P}_{\mathbf{X} \times \mathbf{Y}}$
+
+Our first result concerns self-attention's ability to represent mutual interactions, which we define below. We introduce a pairwise effect function. For $\alpha, \beta \in S$, let $f(\alpha, \beta) \in \mathbb{R}$ measure how strongly entity $\beta$ affects entity $\alpha$, and let $\mathbf{w}_{\beta} \in \mathbb{R}^{d_2}$ represent how that influence is expressed. The aggregated effect on the $i$-th entity, from all other entities, is
+
+$$
+\mathbf {y} _ {\mathcal {X} (i)} = \sum_ {j \in [ L ]} f (\mathcal {X} (i), \mathcal {X} (j)) \mathbf {w} _ {\mathcal {X} (j)}, \tag {2}
+$$
+
+capturing mutual interactions: each entity's behavior or state depends on every other entity in the sequence.
+
+Theorem 3.1 (Representation Ability of Linear Self-Attention). $d = |\mathcal{S}|$ is sufficient for a single-layer linear self-attention to exactly represent any aggregate pairwise interaction functions $\{\mathbf{y}_{\mathcal{X}(i)}\}_{i=1}^{L}$ in Eq. 2 for all entities simultaneously. Also, $d \geq |\mathcal{S}|$ is necessary for a single layer linear self-attention to exactly represent any such functions.
+
+Consequently, self-attention requires $\Theta(|S|^2)$ parameters to capture the interactions. The key distinction, however, lies in efficiency. The following theorem demonstrates that self-attention is efficient compared to fully connected neural networks. This kind of efficiency is one of the reasons we contend that Transformers are mutual interaction learners, while generic fully connected architectures are not.
+
+Theorem 3.2 (Efficiency of Self-Attention). A linear fully connected network requires $\Omega(L^2 \cdot |\mathcal{S}|^2)$ parameters to represent the aggregate pairwise interaction functions $\{\mathbf{y}_i\}_{i=1}^L$ in Eq. 2 exactly, for all entities simultaneously.
+
+The proofs of Theorems 3.1 and 3.2 appear in Appendix B. Equation 2 and Theorem 3.1 underpin all later examples: the same simple formula can model multi-agent rewards, pixel patterns, time-series signals, and genotype-phenotype relations, showing that one self-attention layer already captures rich pairwise dependencies.
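The construction behind Theorem 3.1 can be checked numerically: with one-hot (hence orthogonal) embeddings and $d = |\mathcal{S}|$, setting $\mathbf{C}$ to the table of pairwise effects $f$ and the rows of $\mathbf{W}^V$ to the influence vectors $\mathbf{w}_\beta$ reproduces Eq. 2 exactly. The random values of $f$ and $\mathbf{w}$ below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
S, L, d2 = 5, 8, 3                  # |S| entities, sequence length, output dim
F = rng.normal(size=(S, S))         # F[a, b] = f(a, b), pairwise effect
W = rng.normal(size=(S, d2))        # row b = w_b, how the influence is expressed

seq = rng.integers(0, S, size=L)    # a sequence of entity indices
X = np.eye(S)[seq]                  # one-hot embeddings: d = |S|, orthogonal

# Target of Eq. 2: y_i = sum_j f(X(i), X(j)) w_{X(j)}
Y = np.stack([sum(F[seq[i], seq[j]] * W[seq[j]] for j in range(L))
              for i in range(L)])

# Single-layer linear self-attention with C = F and W^V = W
Y_attn = (X @ F @ X.T) @ (X @ W)

assert np.allclose(Y, Y_attn)       # exact representation, as Theorem 3.1 states
```

The attention-score matrix $\mathbf{X C X}^\top$ reads off $f(\mathcal{X}(i), \mathcal{X}(j))$ entrywise, which is why orthogonal embeddings make the representation exact rather than approximate.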
+
+A more practical interpretation of these results appears in the context of deep neural networks with a modular perspective. Learning algorithms, such as gradient descent applied to a loss criterion, often lead to the emergence of task specialization within small subcomponents of a large neural network. Specifically, individual neural network blocks naturally become responsible for particular subtasks, a phenomenon studied in (Elhage et al., 2021; Cammarata et al., 2020). Our representation theorem formally demonstrates that each self-attention block is indeed sufficiently powerful to execute any mutual interaction task potentially allocated to it within a deep neural network, thereby providing theoretical support for these empirical observations.
+
+We provide the details about this section in Appendix B and provide convergence and generalization analyses in Section 4. Finally, Section 7 presents empirical results that confirm our theoretical findings and provide interpretation of the learned parameters.
+
+Example 1 (Colliding Agents Environment). Consider a multi-agent system with $L$ identical agents positioned in a $\dim$-dimensional space, with position vectors $\mathbf{r}_i \in \mathbb{R}^{\dim}$ (or $\mathbf{r}_i \in [N]^{\dim}$ for a discrete setup). We aim to capture how each agent's value function $V_{\mathbf{r}_i}$ depends on other agents' initial states (positions). Since the system is translationally and rotationally invariant, as explained in Appendix B.1, we can write the $j^{th}$ agent's effect on the $i^{th}$ agent's value function as $f\big(\mathbf{r}_i - \mathbf{r}_j\big)w_{\mathbf{r}_j}$, where $w_{\mathbf{r}_j} \in \mathbb{R}$ is a scalar weight and $f$ depends only on their relative position. Then we consider the value function of the $i$-th agent as
+
+$$
+V _ {\mathbf {r} _ {i}} = \sum_ {j \neq i} ^ {L} f (\mathbf {r} _ {i} - \mathbf {r} _ {j}) w _ {\mathbf {r} _ {j}}.
+$$
+
+In Appendix B.1, we illustrate how a reward of $-1$ per distinct collision is captured by the value function above. This function fits directly into (2), allowing a single-layer linear self-attention to represent the multi-agent value function when discrete positions are encoded orthogonally. In addition, in Appendix B.1, we show similar results for the non-identical agents setting.
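A toy one-dimensional instantiation makes the example concrete. The specific choice of $f$ below (interaction only at zero relative position, unit weights) is an illustrative assumption standing in for the collision reward, not the paper's exact construction:

```python
import numpy as np

rng = np.random.default_rng(2)
N, L = 5, 6                         # 1-D grid of N cells, L identical agents
pos = rng.integers(0, N, size=L)    # discrete agent positions

# Illustrative pairwise effect: agents interact only when they share a cell
# (a stand-in for a -1-per-collision reward), with scalar weights w = 1.
def f(delta):
    return -1.0 if delta == 0 else 0.0

V = np.array([sum(f(pos[i] - pos[j]) for j in range(L) if j != i)
              for i in range(L)])

# Same values from one-hot attention: C[p, q] = f(p - q). Attention also sums
# the j = i term, which we remove by subtracting f(0).
X = np.eye(N)[pos]
C = np.array([[f(p - q) for q in range(N)] for p in range(N)])
w = np.ones((N, 1))
V_attn = ((X @ C @ X.T) @ (X @ w)).ravel() - f(0)

assert np.allclose(V, V_attn)
```

Translation invariance shows up as the Toeplitz structure of $\mathbf{C}$: every entry depends only on the difference $p - q$.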
+
+Example 2 (Genotype-Phenotype Mapping Task). In many genotype-phenotype models, a DNA sequence of length $L$ is composed of alleles which we represent as $\mathcal{X}(i) \in S$ . Some alleles are always active, others' activation level depends on the presence of certain alleles, and some remain inactive regardless of context (Frommlet et al., 2016). Formally, in a simplistic setting,
+
+$$
+\begin{array}{l} P_{\mathcal{X}(i)} = \mathbb{I}\left\{ \mathcal{X}(i) \text{ is always active} \right\} \\ \quad + \sum_{j \in [L]} \mathbb{I}\left\{ \mathcal{X}(i) \text{ is activated by } \mathcal{X}(j) \right\} w_{\mathcal{X}(j)}, \end{array}
+$$
+
+where $P_{\mathcal{X}(i)}$ captures activeness of the gene at position $i$ . By orthogonally embedding each allele, Theorem 3.1 again ensures a single-layer linear self-attention can replicate these interactions exactly. For more details see the demonstrations at Appendix B.2.
+
+We illustrate the generality of our theory with several additional case studies: time-series prediction (Appendix B.4), a computer-vision task (Appendix B.3), and variants of the colliding-agent environment (Appendix B.1.3). Studying isolated single-layer self-attention models uncovers component-level behaviors that directly inform the design and optimization of deep Transformer architectures. This mirrors the role of circuit theory in electrical engineering: although large-scale designs rely on simulation and prototyping, foundational insights about transistors, resistors, and capacitors remain indispensable. Likewise, our block-level theory is a step toward a datasheet for self-attention units, helping steer the construction of deep Transformers. Full-network experiments remain vital, but a precise block-level theory focuses the design space and boosts the likelihood of success.
+
+Why $d = |\mathcal{S}|$? While setting the embedding dimension $d$ equal to the domain size $|\mathcal{S}|$ may be impractical for large vocabularies, it simplifies our analysis without altering core insights. Our goal is to understand how self-attention captures mutual interactions, and $d = |\mathcal{S}|$ ensures orthogonal domain embeddings, yielding an exact and transparent representation. Compressing to $d < |\mathcal{S}|$ is orthogonal to this focus and can be addressed separately using standard techniques, such as Johnson-Lindenstrauss projections, to approximate high-dimensional orthogonal embeddings. Starting with $d = |\mathcal{S}|$ allows us to establish clean, exact theorems that elucidate how self-attention captures pairwise interactions, while reducing to $d < |\mathcal{S}|$ merely introduces a small approximation gap without altering the core theory. For completeness, we provide an approximate version of Theorem 3.1 in Appendix B (Theorem B.2).
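To make the Johnson-Lindenstrauss remark concrete, random Gaussian embeddings in $d < |\mathcal{S}|$ dimensions are only approximately orthogonal. The sketch below (dimensions chosen arbitrarily for illustration) measures that approximation gap directly:

```python
import numpy as np

rng = np.random.default_rng(3)
S, d = 1000, 256                            # vocabulary far larger than d
E = rng.normal(size=(S, d)) / np.sqrt(d)    # random JL-style embeddings

G = E @ E.T                                 # all pairwise inner products
diag = np.diag(G)                           # squared norms, concentrate near 1
off = G[~np.eye(S, dtype=bool)]             # cross terms, concentrate near 0

print(round(diag.mean(), 3), round(np.abs(off).mean(), 3))
```

The cross terms shrink like $1/\sqrt{d}$, which is the "small approximation gap" referred to above: the representation of Theorem 3.1 degrades gracefully rather than breaking when $d < |\mathcal{S}|$.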
+
+# 4. Training and Generalization
+
+In this section, we analyze how a single layer linear self-attention can achieve zero training error and demonstrate its generalization guarantees under mild assumptions. We focus on learning mutual interaction functions of the form (2), which, as ensured by Theorem 3.1, can be represented by linear self-attention.
+
+Setup and Notation. Let $\left\{\left(\mathbf{X}^{(n)},\mathbf{Y}^{(n)}\right)\right\}_{n = 1}^{B}$ be the training data, where $(\mathbf{X}^{(n)},\mathbf{Y}^{(n)})\sim \mathcal{P}_{\mathbf{X}\times \mathbf{Y}}^{L^*}$ for a fixed $L^{*}$, so that all tuples in the training set have the same length $L^{*}$. Also, let $\mathcal{P}_{\mathbf{X}\times \mathbf{Y}}^{\forall L}$ be the universal distribution that covers samples of any length. Throughout, we focus on the mean-squared error (MSE) objective
+
+$$
+L^{\mathrm{MSE}}(\mathbf{C}, \mathbf{W}^{V}) = \frac{1}{B} \sum_{n=1}^{B} \left\| \mathbf{SA}_{\mathbf{C}, \mathbf{W}^{V}}^{\mathrm{lin}}\left(\mathbf{X}^{(n)}\right) - \mathbf{Y}^{(n)} \right\|^{2}.
+$$
+
+We address three key questions: (1) Convergence: Under what conditions does gradient flow reach zero training error? (2) Generalization: When does a perfect fit on the training set imply zero error on new data from $\mathcal{P}_{\mathcal{X}\times \mathcal{Y}}^{L^*}$? (3) OOD Generalization: Can such a model generalize to longer or shorter sequences than those seen in training?
+
+Definition 4.1 (Data Matrix for Element $\mu$). Let $\mathcal{B}_{\mu}$ be the set of indices of training tuples that contain element $\mu \in S$. Denoting the number of times an element $\mu$ appears in tuple $\mathcal{X}^{(n)}$ by $s_{\mu}^{(n)}$, we define the data matrix for element $\mu$ as
+
+$$
+\mathbf{s}^{(n)} = \left[ \begin{array}{ccc} s_{\alpha}^{(n)} & s_{\beta}^{(n)} & \cdots \end{array} \right]^{\top}, \qquad \mathbf{S}_{\mathcal{B}_{\mu}} = \left[ \begin{array}{ccc} \cdots & \mathbf{s}^{(n)} & \cdots \end{array} \right]_{n \in \mathcal{B}_{\mu}}^{\top}.
+$$
+
+In short, $\left[\mathbf{S}_{\mathcal{B}_{\mu}}\right]_{n\nu} = s_{\nu}^{(n)}$, with rows restricted to those $n$ for which $\mu \in \mathcal{X}^{(n)}$.
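Definition 4.1 and Assumption 4.2 below can be instantiated in a few lines: build the count vectors $\mathbf{s}^{(n)}$, stack the rows for each $\mathcal{B}_\mu$, and check column rank. The data here is a random draw chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
S, L, B = 4, 10, 50                      # |S| elements, tuple length L*, B tuples
data = rng.integers(0, S, size=(B, L))   # training tuples X^(n)

# s^(n): counts of each element nu in tuple n
counts = np.stack([np.bincount(row, minlength=S) for row in data])

for mu in range(S):
    B_mu = np.flatnonzero((data == mu).any(axis=1))  # indices of tuples with mu
    S_Bmu = counts[B_mu]                             # data matrix for element mu
    # Assumption 4.2 (full column rank); holds for this unstructured random draw,
    # matching the "mild in practice" discussion below.
    assert np.linalg.matrix_rank(S_Bmu) == S
```

Note that every row of `counts` sums to $L^*$, so the vectors $\mathbf{s}^{(n)}$ lie on an affine hyperplane; since that hyperplane does not pass through the origin, this constraint alone does not prevent full column rank.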
+
+Assumption 4.2 (Training Data Versatility). For all $\mu \in S$ , $\mathbf{S}_{\mathcal{B}_{\mu}}$ is full column rank.
+
+This assumption is mild in practice. When elements in $\mathcal{X}^{(n)}$ are drawn from a diverse distribution (e.g., uniformly or with non-degenerate correlations), the counts $\{s_{\nu}^{(n)}\}$ for $\nu \in S$ naturally vary. This ensures that the columns of $\mathbf{S}_{\mathcal{B}_\mu}$ remain linearly independent, as redundant patterns (e.g., fixed linear relationships between element counts) are highly unlikely under unstructured or randomized data. Moreover, if the data distribution satisfies the even milder assumption that the covariance of $\mathbf{s}^{(n)}$ is positive definite (Assumption E.1), we show in Theorem E.2 that $\mathbb{P}\left(\mathrm{rank}(\mathbf{S}_{\mathcal{B}_\mu}) < |S|\right) \leq e^{-\gamma |\mathcal{B}_\mu|}$ for some $\gamma > 0$, meaning Assumption 4.2 is satisfied with high probability.
+
+# 4.1. Convergence
+
+In this section, we show that if the target function is representable by a single-layer linear self-attention model (as guaranteed by Theorem 3.1), then gradient descent on $L^{\mathrm{MSE}}(\mathbf{C},\mathbf{W}^V)$ converges to zero training error under mild conditions. The result is stated below, with proof in Appendix C.
+
+Since our main aim is to study the mutual interaction perspective, in which the attention scores (the core mechanism defining self-attention) are the central component, we choose $d_{2} = 1$ to simplify the convergence analysis. Thus, $\mathbf{W}^{V}$ is one-dimensional, and we denote it by $\mathbf{w}$ in this subsection. Lastly, we denote $w_{\alpha} = \mathbf{x}(\alpha)^{\top}\mathbf{w}$ for any $\alpha \in S$.
+
+Assumption 4.3 (Weak Realizability). The task is realizable, i.e., there exist $\mathbf{C}^*$ and $\mathbf{w}^*$ that perfectly fit the training data. That is, $\mathbf{y}^{(n)} = \left(\mathbf{X}^{(n)}\mathbf{C}^*\mathbf{X}^{(n)\top}\right)\mathbf{X}^{(n)}\mathbf{w}^* \;\; \forall n\in \mathcal{B}$.
+
+Theorem 4.4 (Convergence to Zero Training Error). Let the dimensions $d = |S|$ and $d_2 = 1$. Also, let the initial parameters satisfy $\mathbf{C}_{(t=0)} = \mathbf{0}$ and $\langle \mathbf{x}(\alpha), \mathbf{w}_{(t=0)} \rangle \geq b > 0$ for all $\alpha \in S$. Then, under Assumptions 4.2 and 4.3, gradient flow on $L^{\mathrm{MSE}}(\mathbf{C}, \mathbf{w})$ converges to zero training error.
+
+The realizability assumption decomposes into two parts: (i) the task genuinely involves mutual interactions, and (ii) the data are noise-free. The second condition, analogous to fixing $d = |\mathcal{S}|$ , simplifies the presentation without altering the core insight. Extending our results to noisy data is straightforward: in that setting, exact zero-error convergence is replaced by nonzero error bounds and probabilistic guarantees. We leave such generalizations to future work.
+
+# 4.2. Test Generalization
+
+Under training data versatility and strong realizability, achieving zero training error with linear self-attention implies perfect generalization to new data from the same distribution. Moreover, under even milder assumptions, zero test error ensures generalization to unseen sequence lengths.
+
+Assumption 4.5 (Strong Realizability). The task is strongly realizable, meaning there exist matrices $\mathbf{C}^{\dagger}$ and $\mathbf{W}^{V\dagger}$ such that the model perfectly fits the underlying population distribution at a fixed sequence length $L^{*}$. Specifically, we assume that $\mathbf{Y} = \left(\mathbf{X}\mathbf{C}^{\dagger}\mathbf{X}^{\top}\right)\mathbf{X}\mathbf{W}^{V\dagger}$ holds almost surely for $(\mathbf{X},\mathbf{Y})\sim \mathcal{P}_{\mathbf{X}\times \mathbf{Y}}^{L^{*}}$.
+
+Due to Theorem 3.1 and Appendix B, we can safely assume strong realizability.
+
+Theorem 4.6 (Generalization). Suppose Assumptions 4.2 and 4.5 hold. Then zero training error forces $(\mathbf{C}, \mathbf{W}^V)$ to agree with $(\mathbf{C}^\dagger, \mathbf{W}^{V\dagger})$ on $(\mathbf{X}, \mathbf{Y}) \sim \mathcal{P}_{\mathbf{X} \times \mathbf{Y}}^{L^*}$. That is, the solution achieving zero training error also satisfies
+
+$$
+\mathbb {E} _ {(\mathbf {X}, \mathbf {Y}) \sim \mathcal {P} _ {\mathbf {X} \times \mathbf {Y}} ^ {L ^ {*}}} \left\| f _ {\mathbf {C}, \mathbf {W} ^ {V}} (\mathbf {X}) - \mathbf {Y} \right\| = 0.
+$$
+
+Hence, it generalizes perfectly to new examples from the same distribution.
+
+See Appendix D for the proof of Theorem 4.6.
+
+# 4.3. Out of Distribution (Length) Generalization
+
+A unique strength of self-attention is its ability to process sequences of variable length. In many tasks, we might train on sequences of length $L^*$ yet hope to predict accurately on sequences of different lengths $L \neq L^*$ . To state our result Theorem 4.8 (with proof in Appendix D.2), we need the following assumption, which holds due to Theorem 3.1.
+
+Assumption 4.7 (Universal Realizability). The task is universally realizable, meaning there exist matrices $\mathbf{C}^{\forall \mathrm{L}}$ and $\mathbf{W}^{V,\forall L}$ such that the model perfectly fits the population distribution for all sequence lengths. Specifically, we assume that $\mathbf{Y} = (\mathbf{X}\mathbf{C}^{\forall \mathrm{L}}\mathbf{X}^{\top})\mathbf{X}\mathbf{W}^{V,\forall L}$ holds almost surely for $(\mathbf{X},\mathbf{Y})\sim \mathcal{P}_{\mathbf{X}\times \mathbf{Y}}^{\forall L}$ .
+
+Theorem 4.8 (Length Generalization). Under the Assumptions 4.2 and 4.7, any $\mathbf{C}^{\dagger}$ , $\mathbf{W}^{V\dagger}$ that generalizes to $\mathcal{P}_{\mathbf{X}\times \mathbf{Y}}^{L^{*}}$ , must generalize to $\mathcal{P}_{\mathbf{X}\times \mathbf{Y}}^{\forall L}$ .
+
+Building on the proof of Theorem 4.8 (in Appendix D.2), we observe a key relationship between the matrices $\mathbf{C}$ and $\mathbf{W}^V$ , which underpins the model's ability to generalize. We state this formally in the following corollary:
+
+Corollary 4.9. Two sets of parameters $\{\mathbf{C}^1, \mathbf{W}^{1,V}\}$ and $\{\mathbf{C}^2, \mathbf{W}^{2,V}\}$ lead to functionally equivalent linear self-attention blocks if and only if they satisfy
+
+$$
+\mathcal {T} _ {\mu , k} \left(\mathbf {C} ^ {1}, \mathbf {W} ^ {1, V}\right) = \mathcal {T} _ {\mu , k} \left(\mathbf {C} ^ {2}, \mathbf {W} ^ {2, V}\right), \forall \mu , k,
+$$
+
+where
+
+$$
+\mathcal {T} _ {\mu , k} (\mathbf {C}, \mathbf {W}) = \sum_ {\nu \in \mathcal {S}} \left(\mathbf {x} ^ {\top} (\mu) \mathbf {C x} (\nu)\right) \left(\mathbf {x} ^ {\top} (\nu) \mathbf {W} _ {:, k}\right).
+$$
+
+Consequently, applying the transformation $\mathcal{T}_{\mu,k}$ maps all parameter sets that realize the same linear self-attention function to a single matrix (indexed by $\mu,k$ ) that depends only on the function they represent. Thus, for a given task, all length-generalizing parameter sets lead to the same matrix under this transformation.
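+
+For the one-hot embedding case (where $\mathbf{x}(\mu) = \mathbf{e}_\mu$ ), this invariance is easy to check numerically. The sketch below builds a hypothetical pair of functionally equivalent parameter sets by rescaling each score column and inversely rescaling the matching value row (our own construction, one family of equivalent parameters among many) and confirms that both the outputs and the matrices $\mathcal{T}_{\mu,k}$ coincide:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+d, d2, L = 6, 3, 8
+E = np.eye(d)                       # rows are one-hot embeddings x(mu)
+C1 = rng.normal(size=(d, d))
+W1 = rng.normal(size=(d, d2))
+s = rng.uniform(0.5, 2.0, size=d)   # per-symbol rescaling
+C2 = C1 * s                         # scale score column nu by s[nu]
+W2 = W1 / s[:, None]                # undo the scaling on the value side
+
+def f(X, C, W):
+    # single-layer linear self-attention
+    return (X @ C @ X.T) @ (X @ W)
+
+def T(C, W):
+    # T_{mu,k} = sum_nu (x(mu)^T C x(nu)) (x(nu)^T W_{:,k})
+    return (E @ C @ E.T) @ (E @ W)
+
+X = E[rng.integers(0, d, L)]        # a random length-L sequence
+assert np.allclose(f(X, C1, W1), f(X, C2, W2))   # same function
+assert np.allclose(T(C1, W1), T(C2, W2))         # same transformed matrix
+```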
+
+# 4.4. Discussion
+
+Our theoretical findings in this section show that a single-layer linear self-attention model not only converges to zero training error but also generalizes to both unseen data and unseen sequence lengths. In Section 7, we present controlled experiments, confirming our convergence and generalization guarantees hold.
+
+Building on the modular perspective introduced in Section 3, we view a deep Transformer as a stack of self-contained blocks, each capable of shouldering a distinct subtask that emerges during gradient-based optimization. Our analysis shows that when a mutual-interaction subtask is delegated to a single-layer linear self-attention block, that block provably (i) converges to zero error for the subtask and (ii) generalizes to unseen data and longer sequences. Earlier layer-wise results support this modular viewpoint. In deep linear networks, (Shin, 2020) prove that block-coordinate (layer-by-layer) gradient descent reaches the same global optimum as full end-to-end training, provided every hidden layer is at least as wide as the input and output. For nonlinear architectures, (Zeng et al., 2019) show that cyclic block-coordinate descent converges to a stationary point at an $\mathcal{O}(1 / k)$ rate, and (Akiyama, 2024) recently extend global-minimum guarantees to networks with strictly monotone activations (and, with skip connections, to modified ReLU nets). Although these results do not yet cover soft-max attention, they suggest that if each Transformer block can provably master its assigned mutual-interaction task—as established here for linear self-attention—then an alternating layer-wise schedule can, under suitable conditions, approach the performance of joint optimization. Our block-level theorem thus provides an "atomic" guarantee that future work can build on to analyze full Transformer training.
+
+A key assumption throughout is training data versatility (Assumption 4.2). Intuitively, this requires each element in the domain to appear in sufficiently diverse contexts, ensuring the corresponding data matrix $\mathbf{S}_{\mathcal{B}_{\mu}}$ has full column rank. This richness is crucial for generalization, as it allows the model to learn meaningful interactions that extend beyond the training set. In realistic settings, where domain elements vary meaningfully (e.g., different agent configurations, protein sequences, or natural language tokens), such rank deficiencies are unlikely. Consequently, data versatility ensures that the model not only fits the training data but also generalizes effectively to new distributions and sequence lengths.
+
+# 5. Extension to HyperFeatureAttention
+
+In the previous sections, we showed how self-attention learns pairwise interactions between entities. However, in practical scenarios, the entities $\mu \in S$ are not monolithic: they are composed of features. For instance, consider $\mu$ to be composed of $M \in \mathbb{Z}^+$ features, $\mu = (\mu_{\phi_1}, \ldots, \mu_{\phi_M})$ , where $\mu_{\phi_i} \in S_{\phi_i}$ , so $S = S_{\phi_1} \times \ldots \times S_{\phi_M}$ . These features may be the (red, green, blue) components of an RGB pixel in an image, the (height, weight, ...) characteristics of a person in a population, or something else depending on the context. To illustrate how the couplings of interactions between features are crucial, let us revisit the colliding agents environment of Section 3, with modifications.
+
+Colliding Agents Revisited. Assume the same setting as before except now the agents are not identical. We need labels for the agents $\ell_i \in S_\ell$ . Thus, each agent is composed of two features $x_i = (\ell_i, \mathbf{r}_i)$ . In Appendix F, we showed that for non-identical agents, it is natural to have the value function of the form
+
+$$
+V _ {i} = \sum_ {j \in [ L ]} \left(\prod_ {a \in \mathcal {A}} f _ {a} \left(\phi_ {a, i}, \theta_ {a, j}\right)\right) \left(\prod_ {a \in \mathcal {A}} w _ {a} \left(\gamma_ {a, j}\right)\right), \tag {3}
+$$
+
+where $\phi_{a,i},\theta_{a,j},\gamma_{a,j}$ are the corresponding features (label or position), chosen depending on the scenario, and $f_{a},w_{a}$ are functions from features to real numbers.
+
+Here we provide a simplified illustration of how HyperFeatureAttention emerges from our theoretical framework; see Appendix F.1 for a detailed motivation. Since $|S| = |S_{\ell}||S_{\mathbf{r}}|$ , Theorem 3.1 implies that a linear self-attention requires $d = \Theta(|S_{\ell}||S_{\mathbf{r}}|)$ , hence $\Theta(|S_{\ell}|^{2}|S_{\mathbf{r}}|^{2})$ parameters, to represent (3). However, by defining a separate attention matrix $\mathbf{C}^{(h,a)}$ for each $f_{a}$ , we can represent the corresponding function with only $\Theta(|S_{\ell}|^{2})$ , $\Theta(|S_{\mathbf{r}}|^{2})$ , or $\Theta(|S_{\ell}||S_{\mathbf{r}}|)$ parameters, depending on which features $f_{a}$ uses. For example, $f(\ell_{i},\ell_{j})$ requires $\Theta(|S_{\ell}|^{2})$ parameters. As a result, with $M$ features, self-attention would require an embedding dimension and parameter count in $\Theta(\exp(M))$ , whereas defining attention for individual $f_{a}$ 's requires only $\Theta(M)$ (see Appendix F for the exact calculation). This brings us to HyperFeatureAttention.
+
+Definition 5.1 (Linear HyperFeatureAttention of order $A \in \mathbb{Z}^+$ ).
+
+$$
+\mathbf {H F A} ^ {\mathrm {l i n}} (\mathbf {X}) = \left(\prod_ {a \in [ A ]} ^ {\odot} \mathbf {X C} ^ {(a)} \mathbf {X} ^ {\top}\right) \left(\prod_ {a \in [ A ]} ^ {\odot} \mathbf {X W} ^ {V, (a)}\right),
+$$
+
+where $\prod^{\odot}$ is Hadamard (element-wise) product between the matrices, and $\mathbf{C}^{(a)},\mathbf{W}^{V,(a)}\in \mathbb{R}^{d\times d}$ .
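+
+Definition 5.1 admits a direct numerical sketch (the sizes and random parameters below are illustrative assumptions); for order $A = 1$ it reduces to ordinary linear self-attention:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(1)
+L, d, A = 5, 4, 2
+X = rng.normal(size=(L, d))
+Cs = rng.normal(size=(A, d, d))        # C^{(a)}, a = 1..A
+Ws = rng.normal(size=(A, d, d))        # W^{V,(a)}, a = 1..A
+
+def hfa_lin(X, Cs, Ws):
+    # Hadamard products of the per-order score matrices and value matrices
+    scores = np.ones((X.shape[0], X.shape[0]))
+    for C in Cs:
+        scores = scores * (X @ C @ X.T)
+    values = np.ones((X.shape[0], Ws.shape[-1]))
+    for W in Ws:
+        values = values * (X @ W)
+    return scores @ values
+
+out = hfa_lin(X, Cs, Ws)               # shape (L, d)
+
+# order A = 1 recovers ordinary linear self-attention
+sa = (X @ Cs[0] @ X.T) @ (X @ Ws[0])
+assert np.allclose(hfa_lin(X, Cs[:1], Ws[:1]), sa)
+```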
+
+From the preceding discussion, linear HyperFeatureAttention requires only a $\Theta(M)$ embedding dimension and parameter count to express (3). One might worry that in practice we use layers of multi-head attention, which could possibly express Eq. (3) without an exponential embedding dimension. However, we show in Remark F.6 that even a two-layer multi-head linear self-attention cannot express (3). With these motivations, we formally define Multihead HyperFeatureAttention in Appendix F.2. A detailed comparison of the new modules' memory and computational requirements versus standard self-attention appears in Appendix H.
+
+In short, just as self-attention generalizes dense layers by enabling entity (token)-level interactions, HyperFeatureAttention extends self-attention by enabling couplings between feature-level interactions: it allows the attention maps themselves to be coupled, which enables the model to capture coupled feature-level interactions.
+
+# 6. Extension to HyperAttention
+
+Similar to pairwise interactions, some tasks may involve higher-order interactions: three-way, four-way, or in general $n$ -way. Thus, in a tuple $\mathcal{X}$ we may have $\mu \in S$ that is influenced by a composite function of $\nu$ and $\gamma \in S$ . In this case, we need the attention scores to capture interaction functions of the form $f(\mu, \nu, \gamma)$ . From this need, we develop a novel generalization, named HyperAttention, for capturing higher-order interactions, alongside (Sanford et al., 2023; Alman & Song, 2023). Here we state the third-order linear version; see Appendix G for the full version.
+
+Definition 6.1 (Third order Linear HyperAttention).
+
+$$
+A _ {i j _ {1} j _ {2}} = \sum_ {\alpha , \zeta_ {1}, \zeta_ {2} \in [ d ]} C _ {\alpha \zeta_ {1} \zeta_ {2}} X _ {i \alpha} X _ {j _ {1} \zeta_ {1}} X _ {j _ {2} \zeta_ {2}}
+$$
+
+$$
+V _ {j _ {1} j _ {2} \tau} = \sum_ {\xi_ {1}, \xi_ {2} \in [ d ]} X _ {j _ {1} \xi_ {1}} X _ {j _ {2} \xi_ {2}} W _ {\xi_ {1} \xi_ {2} \tau} ^ {V}
+$$
+
+$$
+\mathrm {H A} _ {i \tau} ^ {\operatorname {l i n}} (\mathbf {X}) = \sum_ {1 \leq j _ {1} \leq j _ {2} \leq L} A _ {i j _ {1} j _ {2}} V _ {j _ {1} j _ {2} \tau},
+$$
+
+where we denote the $(i,j,k)$ -th entry of a tensor $\mathbf{T}$ as $T_{ijk}$ , and $\mathbf{C} \in \mathbb{R}^{d \times d \times d}$ , $\mathbf{W}^V \in \mathbb{R}^{d \times d \times d_2}$ (see Appendix H for complexity comparisons).
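+
+Definition 6.1 can be transcribed directly as tensor contractions. The sketch below (dimensions and random tensors are illustrative assumptions) cross-checks the vectorized form against an explicit triple loop:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(2)
+L, d, d2 = 4, 3, 2
+X = rng.normal(size=(L, d))
+C = rng.normal(size=(d, d, d))         # C in R^{d x d x d}
+W = rng.normal(size=(d, d, d2))        # W^V in R^{d x d x d2}
+
+def ha_lin(X, C, W):
+    A = np.einsum('pqr,ip,jq,kr->ijk', C, X, X, X)   # A_{i j1 j2}
+    V = np.einsum('jp,kq,pqt->jkt', X, X, W)         # V_{j1 j2 tau}
+    mask = np.triu(np.ones((len(X), len(X))))        # restrict to 1 <= j1 <= j2 <= L
+    return np.einsum('ijk,jkt,jk->it', A, V, mask)
+
+out = ha_lin(X, C, W)                  # shape (L, d2)
+
+# explicit triple-loop reference
+ref = np.zeros((L, d2))
+for i in range(L):
+    for j1 in range(L):
+        for j2 in range(j1, L):
+            a = X[i] @ np.einsum('pqr,q,r->p', C, X[j1], X[j2])
+            v = np.einsum('p,q,pqt->t', X[j1], X[j2], W)
+            ref[i] += a * v
+assert np.allclose(out, ref)
+```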
+
+Ternary Synergies in Multi-Agent Collaboration. Imagine a multi-agent system in which each agent's payoff depends not just on pairwise interactions with other agents, but on three-way synergies. For instance, suppose agent $i$ gets a reward only if it forms an alliance with agents $j$ and $k$ , but agents $j$ and $k$ do not form an alliance with each other. Formally, each agent $i$ has a discrete state $\mathcal{X}(i)$ that includes its strategic type or coalition membership. Whenever $i$ , $j$ , and $k$ all share a compatible configuration of states, $i$ gains an additional bonus. Concretely, define a ternary synergy function $f\big(\mathcal{X}(i), \mathcal{X}(j), \mathcal{X}(k)\big)$ that is nonzero only when $i$ , $j$ , and $k$ are all in an appropriate joint configuration (e.g., $i$ is allied with $j$ and $k$ but $j$ and $k$ are not allied). Let $w_{\mathcal{X}(j), \mathcal{X}(k)}$ be a weight that captures how the pair $(j, k)$ specifically contributes to $i$ 's reward under that triple configuration. Then agent $i$ 's total payoff takes the form:
+
+$$
+V _ {\mathcal {X} (i)} = \sum_ {1 \leq j < k \leq L} f \big (\mathcal {X} (i), \mathcal {X} (j), \mathcal {X} (k) \big) w _ {\mathcal {X} (j), \mathcal {X} (k)}.
+$$
+
+In this setup, standard pairwise interactions (as in ordinary self-attention) are insufficient to capture the triple-compatibility requirement. However, assigning each state $\mathcal{X}(\cdot)$ a suitable embedding enables encoding these ternary synergies into a higher-order extension of (2), allowing the resulting HyperAttention mechanism to learn how three agents jointly influence each other's rewards. We prove in Appendix G.4 how HyperAttention learns these higher-order interactions.
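+
+The payoff above can be transcribed directly. In the sketch below, the state space, the alliance relation, and the weights are all hypothetical placeholders chosen only to make the formula concrete:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(3)
+S, L = 4, 6                            # 4 discrete states, 6 agents (illustrative)
+allied = np.triu(rng.random((S, S)) < 0.5, 1)
+allied = allied | allied.T             # symmetric alliance relation between states
+w = rng.normal(size=(S, S))            # pair weight w_{X(j), X(k)}
+state = rng.integers(0, S, L)          # X(i) for each agent
+
+def f(a, b, c):
+    # ternary synergy: nonzero only if a is allied with both b and c,
+    # while b and c are not allied with each other
+    return float(allied[a, b] and allied[a, c] and not allied[b, c])
+
+V = np.array([
+    sum(f(state[i], state[j], state[k]) * w[state[j], state[k]]
+        for j in range(L) for k in range(j + 1, L))
+    for i in range(L)
+])                                     # V_{X(i)} for each agent i
+```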
+
+For an additional example illustrating the representational capabilities of HyperAttention, see Appendix G.2, where we analyze the "skip-trigram bug" (Elhage et al., 2021).
+
+A potential concern is the $\mathcal{O}(L^3)$ computational cost introduced by the final equation in Definition 6.1, which may pose efficiency challenges for long sequences. Leveraging techniques similar to those in (Katharopoulos et al., 2020; Choromanski et al., 2022), we address this issue in Appendix G.3. Also, a detailed comparison of the new modules' memory and computational requirements versus standard self-attention appears in Appendix H.
+
+# 7. Experiments
+
+# 7.1. Experiments for the Theories
+
+We empirically validate our linear-SA theories, i.e. representation (Theorem 3.1), convergence (Theorem 4.4), and generalization (Theorem 4.6, Theorem 4.8) with the colliding agents environment.
+
+Setup: Colliding Agents on a Cylindrical Grid. Consider $L$ identical agents on a cylindrical grid of size $[N] \times [N]$ , with initial position vectors $\mathbf{r}_i = \left[n_i^x\ n_i^y\right] \in [N]^2$ and wrap-around in $y$ (the cylinder direction). Each agent is modeled as a circle of radius $R$ and executes a simple "move-right" policy (one step to the right per time step if there is space to move, i.e., $n_i^x < N - 1$ ; otherwise it stays where it is). Whenever agent $i$ collides with a distinct agent $j$ , it receives a penalty of $-1$ . Our goal is to capture how each agent's final accumulated reward, that is, the value function $V_i$ , depends on the initial states. We leverage Theorem 3.1 to show how this reward structure and the final value function can be exactly represented by a single-layer linear self-attention. Under this setting, the final value function for each agent can be expressed as
+
+$$
+V _ {\mathbf {r} _ {i}} = - \sum_ {j \neq i} ^ {L} \mathbb {I} \left\{\min \left(\left| n _ {i} ^ {y} - n _ {j} ^ {y} \right|, N - \left| n _ {i} ^ {y} - n _ {j} ^ {y} \right|\right) \leq 2 R \right\} \tag {4}
+$$
+
+Since the value function depends only on the $y$ -coordinates, we focus our discussion on a one-dimensional case for simplicity. The extension to value functions with higher-dimensional dependence follows naturally and is illustrated in Appendix B.1. We trained linear self-attention with two different embeddings, one-hot and sinusoidal. The model has an embedding dimension of $N$ , and its parameters are initialized as $\mathbf{C}_{(t = 0)} = \mathbf{0}$ and $\langle \mathbf{x}(\alpha), \mathbf{w}_{(t = 0)} \rangle \geq b > 0$ , $\forall \alpha \in S$ .
+
+One-Hot. We first use a one-hot embedding: each position $n \in [N]$ is represented by the standard basis vector $\mathbf{e}_n \in \mathbb{R}^N$ . In Appendix B.1, we demonstrate how a single-layer linear self-attention with $\mathbf{W}^V = -\mathbf{1}_N$ , and
+
+$$
+C _ {m n} = \left\{ \begin{array}{l l} 1, & \min (| m - n |, N - | m - n |) \leq 2 R, \\ 0, & \text {o t h e r w i s e}, \end{array} \right. \tag {5}
+$$
+
+can exactly implement the value function in (4), irrespective of $L$ , so all realizability assumptions are satisfied. Under these conditions, Theorem 4.4 predicts that the training mean squared error (MSE) converges to zero, which matches our observations in practice. Furthermore, as Theorems 4.6 and 4.8 predict, we observe generalization: negligible error $(\Theta(10^{-7}))$ on test sets both at $L = 20$ and at varying lengths $L \in \{2, 5, 10, 30, 40\}$ .
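+
+This construction is easy to verify numerically. One caveat in the sketch below (our own, with illustrative sizes): the attention sum runs over all $j$ , including $j = i$ , which adds a constant $-1$ self-term per agent relative to the $j \neq i$ sum in (4); we simply offset that constant here.
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(4)
+N, R, L = 12, 1, 7                    # grid size, radius, number of agents
+
+# C as in Eq. (5), and W^V = -1_N
+m = np.arange(N)
+circ = np.minimum(np.abs(m[:, None] - m[None, :]),
+                  N - np.abs(m[:, None] - m[None, :]))
+C = (circ <= 2 * R).astype(float)
+Wv = -np.ones((N, 1))
+
+y = rng.integers(0, N, L)             # agents' initial y-coordinates
+X = np.eye(N)[y]                      # one-hot embedding of each agent
+pred = ((X @ C @ X.T) @ (X @ Wv)).ravel()
+
+# Eq. (4): sum over j != i of the circular collision indicator
+target = np.array([
+    -sum(min(abs(int(y[i]) - int(y[j])), N - abs(int(y[i]) - int(y[j]))) <= 2 * R
+         for j in range(L) if j != i)
+    for i in range(L)
+])
+
+# attention also counts the j = i self-term, a constant -1 per agent
+assert np.allclose(pred, target - 1)
+```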
+
+Sinusoidal. We next consider a sinusoidal embedding, inspired by common positional encodings in attention models (Vaswani et al., 2023; Su et al., 2023). For even $N$ , the position $n_i$ of agent $i$ is mapped to
+
+$$
+\mathbf{p}_i = \left[\frac{1}{\sqrt{2}},\ \dots,\ \sin\left(\frac{2\pi k}{N} n_i\right),\ \cos\left(\frac{2\pi k}{N} n_i\right),\ \dots,\ \cos\left(\frac{2\pi}{N}\left(\frac{N}{2} - 1\right) n_i\right),\ \frac{1}{\sqrt{2}}\cos\left(\frac{2\pi}{N}\frac{N}{2} n_i\right)\right]^{\top} \in \mathbb{R}^{N}.
+$$
+
+Note that $\mathbf{p}_i^{\top} \mathbf{p}_j = \frac{N}{2}\delta_{i,j}$ . Due to Lemma B.4 and Theorem B.6, a single-layer linear self-attention with $\mathbf{W}^{V,\forall \mathrm{L}} = \left[-\sqrt{2}\ \ 0\ \ 0\ \dots \right]^{\top} \in \mathbb{R}^{N\times 1}$ and diagonal $\mathbf{C}^{\forall \mathrm{L}} \in \mathbb{R}^{N\times N}$ , such that
+
+$$
+C_{\mu\mu} = \begin{cases} \frac{4}{N}\left(2R + \frac{1}{2}\right) & \text{if } \mu = 0, \\[4pt] \frac{2}{N}\, \frac{\sin\left[\frac{2\pi}{N}\left(2R + \frac{1}{2}\right)\left(\frac{\mu + 1}{2}\right)\right]}{\sin\left[\frac{2\pi}{N}\,\frac{1}{2}\left(\frac{\mu + 1}{2}\right)\right]} & \text{if } \mu \text{ odd}, \\[4pt] \frac{2}{N}\, \frac{\sin\left[\frac{2\pi}{N}\left(2R + \frac{1}{2}\right)\left(\frac{\mu}{2}\right)\right]}{\sin\left[\frac{2\pi}{N}\,\frac{1}{2}\left(\frac{\mu}{2}\right)\right]} & \text{if } \mu \text{ even}, \end{cases}
+$$
+
+| Model | Order | Perplexity ↓ |
+| --- | --- | --- |
+| Self-Attention | - | 62.28 |
+| HyperFeatureAttention | 4 | 60.22 |
+| HyperAttention | 3 | 51.26 |
+| HyperAttention (no sharing) | 3 | 48.50 |
+
+Table 1. 1-layer, 1-head, 256-token window benchmark. All models share GPT3-small hyper-parameters ( $d_{\mathrm{model}} = 768$ , vocab size 50257, and identical optimiser settings). Training: $28\mathrm{k}$ iterations, batch 0.16M tokens, cosine LR schedule with warm-up ( $\eta_{\mathrm{max}} = 6 \times 10^{-4}$ , $\eta_{\mathrm{min}} = 0.1\,\eta_{\mathrm{max}}$ ). Per-epoch validation perplexities are Gaussian-smoothed; final values are reported. SA and HFA have identical parameter counts and $\Theta(L^2)$ compute; HA retains the same parameter count but, without the low-rank trick, scales as $\Theta(L^3)$ . HA (no sharing) has slightly more parameters, as explained in Appendix G.
+
+| Model | Heads / Order | Perplexity ↓ |
+| --- | --- | --- |
+| SA | 3 / 2 | 28.70 |
+| HFA | 3 / 3 | 27.97 |
+| HFAv2 | 4 / (2×2, 2×3) | 27.75 |
+
+Table 2. 3-layer, 1024-token benchmark. Hyper-parameters and optimiser settings match Table 1. HFA incurs $< 0.1\%$ extra compute over SA due to attention products. The hybrid $\mathrm{HFA}_{\mathrm{v}2}$ (2 SA heads + 2 order-3 HFA heads) attains the best perplexity.
+
+can represent the value function in Eq. (4) exactly, for any $L$ , as plotted in Figure 1. The entries of $\mathbf{C}$ are simply the Fourier transform of the interaction function, an artifact of the sinusoidal embedding. Since the same assumptions are satisfied for the sinusoidal embedding, we observe the same convergence and generalization results as in the one-hot case.
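+
+As a numerical sanity check of this construction (a sketch under our own indexing assumption: entry $0$ is $1/\sqrt{2}$ , entries $2k-1, 2k$ are the $\sin,\cos$ pair at frequency $k$ , and the last entry is $\cos(\pi n)/\sqrt{2}$ ), the diagonal $\mathbf{C}$ above makes the attention score $\mathbf{x}^{\top}(m)\,\mathbf{C}\,\mathbf{x}(n)$ reproduce the circular interaction indicator:
+
+```python
+import numpy as np
+
+N, R = 16, 2                           # even grid size and integer radius, 4R + 1 < N
+
+def embed(n):
+    # assumed ordering: [1/sqrt(2), sin, cos (k = 1..N/2 - 1), cos(pi n)/sqrt(2)]
+    p = np.empty(N)
+    p[0] = 1 / np.sqrt(2)
+    for k in range(1, N // 2):
+        p[2 * k - 1] = np.sin(2 * np.pi * k * n / N)
+        p[2 * k] = np.cos(2 * np.pi * k * n / N)
+    p[N - 1] = np.cos(np.pi * n) / np.sqrt(2)
+    return p
+
+# diagonal C from the closed form above (stored as a vector of diagonal entries)
+C = np.empty(N)
+C[0] = (4 / N) * (2 * R + 0.5)
+for mu in range(1, N):
+    k = (mu + 1) // 2 if mu % 2 else mu // 2
+    C[mu] = (2 / N) * np.sin(2 * np.pi / N * (2 * R + 0.5) * k) \
+                    / np.sin(2 * np.pi / N * 0.5 * k)
+
+# score x(m)^T C x(n) equals the circular indicator 1{min(|m-n|, N-|m-n|) <= 2R}
+for m in range(N):
+    for n in range(N):
+        score = embed(m) @ (C * embed(n))
+        target = float(min(abs(m - n), N - abs(m - n)) <= 2 * R)
+        assert abs(score - target) < 1e-8
+```
+
+The loop verifies, entry by entry, that the closed-form diagonal is the (real) discrete Fourier transform of the interaction indicator.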
+
+Although the results match the theorems, the learned parameters do not coincide with the parameters we originally devised, especially for the sinusoidal embedding (see Figure 2); these learned parameters lack an intuitive, easy interpretation. However, as discussed in Corollary 4.9, this outcome is a natural consequence of the generalization theories (Theorems 4.6 and 4.8). Therefore, we examine the matrices obtained under the nontrivial transformation of Corollary 4.9. As shown in Figures 3 and 4, these matrices are indistinguishable from their theoretical counterparts, with only $\Theta(10^{-4})$ mean squared distance between their entries. Thus, the parameters obtained from training are functionally equivalent to the length-generalizing parameters we devised.
+
+# 7.2. Experiments for the Novel Modules
+
+To verify that the theoretical benefits of our layers translate into practice, we ran two small-scale next-token-prediction benchmarks on OpenWebText under hyperparameter parity with GPT3-small; see Tables 1 and 2. All models share the same embedding, MLP, and optimiser settings (as in GPT3); HFA has the same $\Theta(L^2)$ time/space cost as SA, while HA (evaluated without low-rank tricks) scales as $\Theta(L^3)$ in computation. The tables report the final validation perplexities. The results support our claim that richer interaction blocks improve language-model quality.
+
+# 8. Conclusion
+
+We introduced a unifying theoretical framework that interprets tokens in self-attention as interacting entities and showed that a single-layer linear self-attention mechanism can capture pairwise interactions across a broad range of domains. Our analysis clarified how self-attention learns and generalizes these interaction functions without relying on domain-specific assumptions. In particular, we proved that (i) such models converge under simple gradient-based training, and (ii) they generalize both to unseen data and to variable sequence lengths when the training set is sufficiently diverse.
+
+Building on our theories for self-attention, we introduced the novel HyperFeatureAttention and HyperAttention modules, which extend self-attention's abilities to capturing couplings of feature-level interactions and higher-order interactions among multiple entities. Beyond the theoretical guarantees, our language-modeling benchmarks show that both HFA and HA consistently achieve lower perplexity than standard attention, confirming that their enhanced interaction capacity translates into tangible performance gains.
+
+Taken together, our theoretical and empirical findings establish self-attention -and its HyperFeatureAttention and HyperAttention variants- as efficient learners of complex entity interactions. Our language-modeling experiments confirm the feasibility of these modules, showing consistent perplexity reductions under matched compute budgets. A natural next step is to stress-test HFA and HA at larger scales and in applications such as multi-agent control, protein design, and the other scenarios motivated in this paper. We hope this interaction-centric lens spurs the creation of even more adaptable attention architectures and motivates deeper analyses of softmax attention, stacked transformer layers, and emerging attention mechanisms.
+
+# Acknowledgements
+
+This research was supported by NSF Grants 2154171, CAREER Award 2339112, and CMU CyLab Seed Funding.
+
+# Impact Statement
+
+This work advances the theoretical understanding of self-attention and its generalizations, HyperFeatureAttention and HyperAttention, by providing a unifying framework for learning complex interactions. Our insights could benefit diverse applications, including NLP, computer vision, multi-agent systems, and scientific modeling. Improved interpretability and training efficiency may enhance model reliability and user trust.
+
+While theoretical, our findings could indirectly impact real-world risks, such as biased decision-making or misuse in misinformation and surveillance. Mitigating such risks requires policy measures rather than changes to fundamental theory. Additionally, optimizing attention mechanisms may contribute to more efficient models, reducing the environmental footprint of large-scale training. Overall, this work presents no unique societal risks beyond typical concerns in machine learning research.
+
+# References
+
+Ahn, K., Cheng, X., Daneshmand, H., and Sra, S. Transformers learn to implement preconditioned gradient descent for in-context learning, November 2023. URL http://arxiv.org/abs/2306.00297. arXiv:2306.00297 [cs].
+Ahn, K., Cheng, X., Song, M., Yun, C., Jadbabaie, A., and Sra, S. Linear attention is (maybe) all you need (to understand transformer optimization). arXiv preprint arXiv:2310.01082, 2024. URL https://arxiv.org/abs/2310.01082.
+Akiyama, S. Block coordinate descent for neural networks provably finds global minima. 2024. URL https://openreview.net/forum?id=n2RIkaf1S4.
+Alman, J. and Song, Z. How to Capture Higher-order Correlations? Generalizing Matrix Softmax Attention to Kronecker Computation, October 2023. URL https://arxiv.org/abs/2310.04064v1.
+Bahdanau, D., Cho, K., and Bengio, Y. Neural Machine Translation by Jointly Learning to Align and Translate, May 2016. URL http://arxiv.org/abs/1409.0473.arXiv:1409.0473 [cs].
+Bhattamishra, S., Ahuja, K., and Goyal, N. On the ability and limitations of transformers to recognize formal languages, 2020a. URL http://arxiv.org/abs/2009.11264.
+Bhattamishra, S., Patel, A., and Goyal, N. On the Computational Power of Transformers and its Implications in Sequence Modeling, October 2020b. URL http://arxiv.org/abs/2006.09286. arXiv:2006.09286 [cs].
+Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I., and Amodei, D. Language Models are Few-Shot Learners. In Advances in Neural Information Processing Systems, volume 33, pp. 1877-1901. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfbcb4967418bf8ac142f64a-Abstract.html.
+Cammarata, N., Carter, S., Goh, G., Olah, C., Petrov, M., Schubert, L., Voss, C., Egan, B., and Lim, S. K. Thread: Circuits. Distill, 2020. doi: 10.23915/distill.00024. https://distill.pub/2020/circuits.
+Chen, L., Lu, K., Rajeswaran, A., Lee, K., Grover, A., Laskin, M., Abbeel, P., Srinivas, A., and Mordatch, I. Decision Transformer: Reinforcement Learning via Sequence Modeling, June 2021. URL http://arxiv.org/abs/2106.01345.arXiv:2106.01345 [cs].
+Chen, S. and Li, Y. Provably learning a multi-head attention layer, February 2024. URL http://arxiv.org/abs/2402.04084.arXiv:2402.04084 [cs, stat].
+Chen, S., Sheen, H., Wang, T., and Yang, Z. Training dynamics of multi-head softmax attention for in-context learning: Emergence, convergence, and optimality. arXiv preprint arXiv:2402.19442, 2024.
+Cheng, X., Chen, Y., and Sra, S. Transformers implement functional gradient descent to learn non-linear functions in context. In Proceedings of the 41st International Conference on Machine Learning, volume 235 of Proceedings of Machine Learning Research, pp. 8002-8037. PMLR, July 2024. doi: 10.48550/arXiv.2312.06528. URL https://proceedings.mlr.press/v235/cheng24a.html.arXiv:2312.06528 [cs.LG].
+Choromanski, K., Likhosherstov, V., Dohan, D., Song, X., Gane, A., Sarlos, T., Hawkins, P., Davis, J., Mohiuddin, A., Kaiser, L., Belanger, D., Colwell, L., and Weller, A. Rethinking Attention with Performers, November 2022. URL http://arxiv.org/abs/2009.14794. arXiv:2009.14794 [cs].
+Deora, P., Ghaderi, R., Taheri, H., and Thrampoulidis, C. On the Optimization and Generalization of Multi-head Attention, October 2023. URL http://arxiv.org/abs/2310.12680.arXiv:2310.12680 [cs, math, stat].
+Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, May 2019. URL http:// arxiv.org/abs/1810.04805.arXiv:1810.04805 [cs].
+Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., and Houlsby, N. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale, June 2021. URL http:// arxiv.org/abs/2010.11929.arXiv:2010.11929 [cs].
+Edelman, B. L., Goel, S., Kakade, S., and Zhang, C. Inductive Biases and Variable Creation in Self-Attention Mechanisms, June 2022. URL http://arxiv.org/abs/2110.10090. arXiv:2110.10090 [cs, stat].
+Elhage, N., Nanda, N., Olsson, C., Henighan, T., Joseph, N., Mann, B., Askell, A., Bai, Y., Chen, A., Conerly, T., DasSarma, N., Drain, D., Ganguli, D., Hatfield-Dodds, Z., Hernandez, D., Jones, A., Kernion, J., Lovitt, L., Ndousse, K., Amodei, D., Brown, T., Clark, J., Kaplan, J., McCandlish, S., and Olah, C. A mathematical framework for transformer circuits. Transformer Circuits Thread, 2021. https://transformercircuits.pub/2021/framework/index.html.
+Frommlet, F., Bogdan, M., and Ramsey, D. Phenotypes and Genotypes: The Search for Influential Genes, volume 18 of Computational Biology. Springer, 2016. ISBN 978-1-4471-5309-2. doi: 10.1007/978-1-4471-5310-8. URL https://link.springer.com/book/10.1007/978-1-4471-5310-8.
+Gao, C., Cao, Y., Li, Z., He, Y., Wang, M., Liu, H., Klusowski, J. M., and Fan, J. Global convergence in training large-scale transformers. In Proceedings of the 38th Conference on Neural Information Processing Systems (NeurIPS 2024), 2024.
+Hu, J., Liu, Q., and Jin, C. On Limitation of Transformer for Learning HMMs, June 2024. URL http://arxiv.org/abs/2406.04089. arXiv:2406.04089 [cs].
+Huang, Y., Cheng, Y., and Liang, Y. In-context convergence of transformers. In Proceedings of the 41st International Conference on Machine Learning (ICML). PMLR, 2024a.
+
+Huang, Y., Wen, Z., Chi, Y., and Liang, Y. How Transformers Learn Diverse Attention Correlations in Masked Vision Pretraining. 2024b.
+Jelassi, S., Sander, M. E., and Li, Y. Vision Transformers provably learn spatial structure, October 2022. URL http://arxiv.org/abs/2210.09221.arXiv:2210.09221 [cs].
+Jensen, H. J. Complexity Science: The Study of Emergence. Higher Education from Cambridge University Press, November 2022. doi: 10.1017/9781108873710. URL https://www.cambridge.org/highereducation/books/complexity-science/E0761D26BDAB25D75C6AB868AECE2F2D. ISBN: 9781108873710, Publisher: Cambridge University Press.
+Jumper, J., Evans, R., Pritzel, A., Green, T., Figurnov, M., Ronneberger, O., Tunyasuvunakool, K., Bates, R., Zidek, A., Potapenko, A., Bridgland, A., Meyer, C., Kohl, S. A., Ballard, A. J., Cowie, A., Romera-Paredes, B., Nikolov, S., Jain, R., Adler, J., Back, T., Petersen, S., Reiman, D., Clancy, E., Zielinski, M., Steinegger, M., Pacholska, M., Berghammer, T., Bodenstein, S., Silver, D., Vinyals, O., Senior, A. W., Kavukcuoglu, K., Kohli, P., and Hassabis, D. Highly accurate protein structure prediction with AlphaFold. Nature, 596(7873):583-589, August 2021. ISSN 1476-4687. doi: 10.1038/s41586-021-03819-2. URL https://www.nature.com/articles/s41586-021-03819-2. Publisher: Nature Publishing Group.
+Kacham, P., Mirrokni, V., and Zhong, P. PolySketch-Former: Fast Transformers via Sketching Polynomial Kernels, March 2024. URL http://arxiv.org/abs/2310.01655.arXiv:2310.01655 [cs].
+Kajitsuka, T. and Sato, I. Are transformers with one layer self-attention using low-rank weight matrices universal approximators? In International Conference on Learning Representations (ICLR), 2024.
+Katharopoulos, A., Vyas, A., Pappas, N., and Fleuret, F. Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention. Proceedings of Machine Learning Research, August 2020. doi: 10.48550/arXiv.2006.16236. URL http://arxiv.org/abs/2006.16236.arXiv:2006.16236 [cs].
+Koohpayegani, S. A. and Pirsiavash, H. SimA: Simple Softmax-free Attention for Vision Transformers, March 2024. URL http://arxiv.org/abs/2206.08898. arXiv:2206.08898 [cs].
+Li, H., Wang, M., Liu, S., and Chen, P.-y. A theoretical understanding of shallow vision transformers: Learning, generalization, and sample complexity, 2023. URL http://arxiv.org/abs/2302.06015.
+Li, S., Song, Z., Xia, Y., Yu, T., and Zhou, T. The closeness of in-context learning and weight shifting for softmax regression. arXiv preprint arXiv:2304.13276, 2024.
+Likhosherstov, V., Choromanski, K., and Weller, A. On the Expressive Power of Self-Attention Matrices, June 2021. URL http://arxiv.org/abs/2106.03764. arXiv:2106.03764 [cs].
+Liu, B., Ash, J. T., Goel, S., Krishnamurthy, A., and Zhang, C. Transformers Learn Shortcuts to Automata, May 2023. URL http://arxiv.org/abs/2210.10749. arXiv:2210.10749 [cs, stat].
+Lu, C., Shi, R., Liu, Y., Hu, K., Du, S. S., and Xu, H. Rethinking Transformers in Solving POMDPs, May 2024. URL http://arxiv.org/abs/2405.17358. arXiv:2405.17358 [cs].
+Luo, S., Li, S., Zheng, S., Liu, T.-Y., Wang, L., and He, D. Your transformer may not be as powerful as you expect. In Advances in Neural Information Processing Systems (NeurIPS), 2022.
+Nath, S., Khadilkar, H., and Bhattacharyya, P. Transformers are expressive, but are they expressive enough for regression? arXiv preprint arXiv:2402.15478, 2024.
+Ramapuram, J., Danieli, F., Dhekane, E., Weers, F., Busbridge, D., Ablin, P., Likhomanenko, T., Digani, J., Gu, Z., Shidani, A., and Webb, R. Theory, Analysis, and Best Practices for Sigmoid Self-Attention, September 2024. URL http://arxiv.org/abs/2409.04431. arXiv:2409.04431 [cs].
+Sanford, C., Hsu, D., and Telgarsky, M. Representational Strengths and Limitations of Transformers, November 2023. URL http://arxiv.org/abs/2306.02896. arXiv:2306.02896 [cs, stat].
+Shin, Y. Effects of depth, width, and initialization: A convergence analysis of layer-wise training for deep linear neural networks. arXiv, 2020. doi: 10.48550/arXiv.1910.05874. URL http://arxiv.org/abs/1910.05874.
+Song, B., Han, B., Zhang, S., Ding, J., and Hong, M. Unraveling the gradient descent dynamics of transformers. In Advances in Neural Information Processing Systems (NeurIPS), 2024.
+Su, J., Lu, Y., Pan, S., Murtadha, A., Wen, B., and Liu, Y. RoFormer: Enhanced Transformer with Rotary Position Embedding. arXiv preprint arXiv:2104.09864, November 2023. doi: 10.48550/arXiv.2104.09864. URL http://arxiv.org/abs/2104.09864.
+Tarzanagh, D. A., Li, Y., Thrampoulidis, C., and Oymak, S. Transformers as support vector machines, 2024. URL http://arxiv.org/abs/2308.16898.
+Tian, Y., Wang, Y., Chen, B., and Du, S. Scan and snap: Understanding training dynamics and token composition in 1-layer transformer, 2023. URL http://arxiv.org/abs/2305.16380.
+Tropp, J. A. An introduction to matrix concentration inequalities, 2015. URL https://arxiv.org/abs/1501.01571.
+Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. Attention Is All You Need, August 2023. URL http://arxiv.org/abs/1706.03762. arXiv:1706.03762 [cs].
+von Oswald, J., Niklasson, E., Randazzo, E., Sacramento, J., Mordvintsev, A., Zhmoginov, A., and Vladymyrov, M. Transformers learn in-context by gradient descent, 2023. URL http://arxiv.org/abs/2212.07677. arXiv preprint arXiv:2212.07677.
+Wang, S., Li, B. Z., Khabsa, M., Fang, H., and Ma, H. Linformer: Self-Attention with Linear Complexity, June 2020. URL http://arxiv.org/abs/2006.04768. arXiv:2006.04768 [cs].
+Wang, Z., Wei, S., Hsu, D., and Lee, J. D. Transformers Provably Learn Sparse Token Selection While Fully Connected Nets Cannot, June 2024. URL http://arxiv.org/abs/2406.06893. arXiv:2406.06893 [cs, math, stat].
+Wei, C., Chen, Y., and Ma, T. Statistically meaningful approximation: a case study on approximating Turing machines with transformers, 2023. URL http://arxiv.org/abs/2107.13163.
+Yang, H., Kailkhura, B., Wang, Z., and Liang, Y. Training Dynamics of Transformers to Recognize Word Co-occurrence via Gradient Flow Analysis, October 2024. URL http://arxiv.org/abs/2410.09605. arXiv:2410.09605.
+Yao, S., Peng, B., Papadimitriou, C., and Narasimhan, K. Self-attention networks can process bounded hierarchical languages, 2023. URL http://arxiv.org/abs/2105.11115.
+Yun, C., Bhojanapalli, S., Rawat, A. S., Reddi, S. J., and Kumar, S. Are Transformers universal approximators of sequence-to-sequence functions?, February 2020. URL http://arxiv.org/abs/1912.10077. arXiv:1912.10077 [cs, stat].
+
+Zeng, J., Lau, T. T.-K., Lin, S., and Yao, Y. Global convergence of block coordinate descent in deep learning, 2019. URL http://arxiv.org/abs/1803.00225.
+Zhai, X., Zhou, R., Zhang, L., and Du, S. S. Transformers are Efficient Compilers, Provably, October 2024. URL http://arxiv.org/abs/2410.14706. arXiv:2410.14706 [cs].
+
+# A. Notation and Definitions
+
+Consider a discrete domain (or "vocabulary") $\mathcal{S} = \{\alpha, \beta, \gamma, \omega, \ldots\}$ with cardinality $|\mathcal{S}|$. In our setting we have tuples of $L$ elements (sequences of length $L$) from the domain, denoted by $\mathcal{X}$, whose entries are uniquely indexed by the integers in $[L]$. Here, $[i]$ denotes the set $\{0, 1, \ldots, i - 1\}$ for any positive integer $i \in \mathbb{Z}^+$. We define the function $\mathcal{X}: [L] \to \mathcal{S}$ to assign each index $i$ to the corresponding element $\mathcal{X}(i) \in \mathcal{S}$. Thus, we can write $\mathcal{X} = (\mathcal{X}(0), \mathcal{X}(1), \ldots, \mathcal{X}(L - 1))$, which is distributed according to $\mathcal{X} \sim \mathcal{D}$. We also have tuples $\mathcal{Y}$, whose length depends on the specific task at hand, with elements from a corresponding relatively small set $\mathcal{S}_{\mathcal{Y}}$. The tuples are distributed according to a task distribution $(\mathcal{X}, \mathcal{Y}) \sim \mathcal{D}_{\mathcal{X} \times \mathcal{Y}}$. When needed, we use $\mathcal{D}^L$ and $\mathcal{D}_{\mathcal{X} \times \mathcal{Y}}^L$ to denote the same distributions restricted to tuples of length $L$. Similarly, $\mathcal{D}^{\forall L}$ denotes the distribution covering all possible lengths. In our training dataset, we have $B$ such pairs $(\mathcal{X}^{(n)}, \mathcal{Y}^{(n)}) \sim \mathcal{D}_{\mathcal{X} \times \mathcal{Y}}$, which are uniquely indexed by the elements of the set $\mathcal{B} = [B]$. We also define $\mathcal{B}_{\mu}$ as the set of training indices whose tuples contain the element $\mu$. We denote the number of times an element $\mu$ appears in tuple $\mathcal{X}^{(n)}$ as $s_{\mu}^{(n)}$. We define the corresponding count vector $\mathbf{s}^{(n)} = \left[s_{\alpha}^{(n)} \; s_{\beta}^{(n)} \; \ldots\right]^{\top}$ and the matrices $\mathbf{S} = [\mathbf{s}^{(1)} \quad \mathbf{s}^{(2)} \quad \ldots \quad \mathbf{s}^{(B)}]^{\top} \in \mathbb{R}^{B \times |\mathcal{S}|}$ and $\mathbf{S}_{\mathcal{B}_{\mu}} = \left[\ldots \; \mathbf{s}^{(n)} \; \ldots\right]_{n \in \mathcal{B}_{\mu}}^{\top}$.
+
+In order to train a neural network, we map each element of $S$ to a $d$ -dimensional embedding space via a function $\mathbf{x} : S \to \mathbb{R}^d$ and each element of $S_{\mathcal{Y}}$ to a corresponding vector or a scalar depending on the task. We can now define the domain embedding matrix
+
+$$
+\mathbf {B} = \left[ \begin{array}{c} \mathbf {x} ^ {\top} (\alpha) \\ \mathbf {x} ^ {\top} (\beta) \\ \vdots \end{array} \right]. \tag {6}
+$$
+
+Additionally, we stack the embeddings of the elements in $\mathcal{X}$ and $\mathcal{Y}$ as rows of $\mathbf{X} \in \mathbb{R}^{L \times d}$ and $\mathbf{Y} \in \mathbb{R}^{L \times d_2}$ matrices, which we say are distributed according to $(\mathbf{X}, \mathbf{Y}) \sim \mathcal{P}_{\mathbf{X} \times \mathbf{Y}}^L$ . For training sample $n$ it can be written as,
+
+$$
+\mathbf {X} = \left[ \begin{array}{c} \mathbf {x} ^ {\top} (\mathcal {X} (0)) \\ \mathbf {x} ^ {\top} (\mathcal {X} (1)) \\ \vdots \\ \mathbf {x} ^ {\top} (\mathcal {X} (L - 1)) \end{array} \right] = \left[ \begin{array}{c} \mathbf {x} _ {0} ^ {\top} \\ \mathbf {x} _ {1} ^ {\top} \\ \vdots \\ \mathbf {x} _ {L - 1} ^ {\top} \end{array} \right],
+$$
+
+where in the last equality we denote the embedding of the $i$ -th element as $\mathbf{x}_i \coloneqq \mathbf{x}(\mathcal{X}(i)) \in \mathbb{R}^d$ , to reduce the notational cluttering.
+
+We denote matrices and vectors by bold characters. Let $\mathbf{M} \in \mathbb{R}^{d_1 \times d_2}$, $\mathbf{w} \in \mathbb{R}^d$ be any matrix and vector, respectively. We denote the $k$-th row of $\mathbf{M}$ as $\mathbf{M}_{k,:}$ or $\mathbf{m}_k$; similarly, we denote the $k$-th column as $\mathbf{M}_{:,k}$ or $\mathbf{m}^k$. We denote the $k$-th entry of $\mathbf{w}$ as $w_k$ and the $(k,l)$-th entry of $\mathbf{M}$ as $M_{kl}$. Similarly, if a vector is defined in terms of blocks, we explicitly state it and denote the blocks as $\mathbf{w} = \left[\mathbf{w}_1^\top \quad \mathbf{w}_2^\top \quad \ldots\right]^\top$. The same notation naturally extends to tensors, e.g., we denote the $(i,j,k,\ldots)$-th entry of a tensor $\mathbf{T} \in \mathbb{R}^{d \times d \times d \times \ldots}$ as $T_{ijk\ldots}$.
+
+In addition, we denote the $i$-th eigenvalue of a matrix $\mathbf{M}$ as $\lambda_i(\mathbf{M})$, where $\lambda_1(\mathbf{M}) \geq \lambda_2(\mathbf{M}) \geq \dots$. Similarly, we denote the $i$-th singular value as $\sigma_i(\mathbf{M})$. Lastly, we use two standard operators: the diagonalization of a vector,
+
+$$
+\operatorname {d i a g} \left(\mathbf {w}\right) = \left( \begin{array}{c c c} w _ {1} & & \\ & \ddots & \\ & & w _ {d} \end{array} \right)
+$$
+
+and the Kronecker product of two matrices $A \in \mathbb{R}^{m \times n}$ and $B \in \mathbb{R}^{p \times q}$,
+
+$$
+A \otimes B = \left[ \begin{array}{c c c c} a _ {1 1} B & a _ {1 2} B & \ldots & a _ {1 n} B \\ a _ {2 1} B & a _ {2 2} B & \ldots & a _ {2 n} B \\ \vdots & \vdots & \ddots & \vdots \\ a _ {m 1} B & a _ {m 2} B & \ldots & a _ {m n} B \end{array} \right],
+$$
+
+where each entry $a_{ij}$ in $A$ is multiplied by the entire matrix $B$ , which results in a matrix of size $(mp)\times (nq)$ .
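The Kronecker product defined above can be sketched in a few lines of pure Python (a toy implementation with illustrative matrices; a real codebase would use `numpy.kron`):

```python
def kron(A, B):
    """Kronecker product of two dense matrices given as lists of lists."""
    m, n = len(A), len(A[0])
    p, q = len(B), len(B[0])
    # Entry (i, j) of A ⊗ B equals A[i // p][j // q] * B[i % p][j % q].
    return [[A[i // p][j // q] * B[i % p][j % q] for j in range(n * q)]
            for i in range(m * p)]

A = [[1, 2],
     [3, 4]]
B = [[0, 5],
     [6, 7]]
K = kron(A, B)  # a (2*2) x (2*2) = 4 x 4 matrix
```

Each `2 x 2` block of `K` is a scaled copy of `B`, matching the block structure of the display above.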
+
+# B. Representation Abilities of Linear Self-Attention
+
+Proof of Theorem 3.1 (Representation Ability of Linear Self Attention). We first show sufficiency of $d = |\mathcal{S}|$. We can define an orthonormal embedding $\mathbf{x} : \mathcal{S} \to \mathbb{R}^{|\mathcal{S}|}$ such that
+
+$$
+\mathbf {x} (\alpha) ^ {\top} \mathbf {x} (\beta) = \delta_ {\alpha , \beta} \quad \forall \alpha , \beta \in \mathcal {S},
+$$
+
+where $\delta_{\alpha,\beta}$ is the Kronecker delta. Recall that $(\mathcal{X},\mathcal{Y})\sim \mathcal{D}_{\mathcal{X}\times \mathcal{Y}}$ are tuples whose elements are indexed by the set $[L]$. Also, recall that the function $\mathcal{X}:[L]\to \mathcal{S}$ maps each index $i$ to its corresponding symbol $\mathcal{X}(i)\in \mathcal{S}$. We arrange the embeddings $\mathbf{x}(\mathcal{X}(i))$ row-wise into $\mathbf{X}\in \mathbb{R}^{L\times |\mathcal{S}|}$.
+
+Goal. We wish to represent the family of functions
+
+$$
+\mathbf {y} _ {\mathcal {X} (i)} = \sum_ {j \in [ L ]} f \left(\mathcal {X} (i), \mathcal {X} (j)\right) \mathbf {w} _ {\mathcal {X} (j)}, \quad \text {for each } i \in [ L ],
+$$
+
+using a single-layer linear self-attention of dimension $d = |\mathcal{S}|$ . Here, $f: \mathcal{S} \times \mathcal{S} \to \mathbb{R}$ measures how strongly one entity affects another, and $\mathbf{w}_{\mathcal{X}(j)} \in \mathbb{R}^{d_2}$ encodes how that influence is expressed (e.g., a direction vector or contribution to a subsequent feature).
+
+Recall that a single-layer linear self-attention can be written as
+
+$$
+\mathbf {A t t n} ^ {\mathrm {l i n}} (\mathbf {X}) = \left(\mathbf {X C X} ^ {\top}\right) \mathbf {X W} ^ {V},
+$$
+
+where $\mathbf{C} \in \mathbb{R}^{d \times d}$ captures the attention scores (keys $\times$ queries) in a linear setting, $\mathbf{W}^V \in \mathbb{R}^{d \times d_2}$ represents the values transformation.
+
+Constructing $\mathbf{C}$ and $\mathbf{W}^V$. We define $\mathbf{C}$ and $\mathbf{W}^V$ to satisfy:
+
+$$
+\mathbf {x} (\alpha) ^ {\top} \mathbf {C x} (\beta) = f (\alpha , \beta), \quad \text {and} \quad \mathbf {x} (\alpha) ^ {\top} \mathbf {W} ^ {V} = \mathbf {w} _ {\alpha} ^ {\top}.
+$$
+
+Such $\mathbf{C}$ and $\mathbf{W}^V$ exist because $\{\mathbf{x}(\alpha)\}_{\alpha\in \mathcal{S}}$ forms an orthonormal basis of $\mathbb{R}^d$, due to $d = |\mathcal{S}|$. Concretely, $\mathbf{C}$ can be chosen so that its bilinear form on the basis vectors $\mathbf{x}(\alpha)$, $\mathbf{x}(\beta)$ equals $f(\alpha ,\beta)$. Likewise, $\mathbf{W}^V$ can be chosen so that $\mathbf{x}(\alpha)$ maps to $\mathbf{w}_\alpha$.
+
+Verification. Given a sequence of length $L$ , the matrix $\mathbf{X} \in \mathbb{R}^{L \times |S|}$ is
+
+$$
+\mathbf {X} = \left[ \begin{array}{c} \mathbf {x} \big (\mathcal {X} (0) \big) ^ {\top} \\ \mathbf {x} \big (\mathcal {X} (1) \big) ^ {\top} \\ \vdots \\ \mathbf {x} \big (\mathcal {X} (L - 1) \big) ^ {\top} \end{array} \right].
+$$
+
+Thus,
+
+$$
+\mathbf {X C X} ^ {\top} = \left[ \begin{array}{c c c} f \big (\mathcal {X} (0), \mathcal {X} (0) \big) & \dots & f \big (\mathcal {X} (0), \mathcal {X} (L - 1) \big) \\ \vdots & \ddots & \vdots \\ f \big (\mathcal {X} (L - 1), \mathcal {X} (0) \big) & \dots & f \big (\mathcal {X} (L - 1), \mathcal {X} (L - 1) \big) \end{array} \right],
+$$
+
+and
+
+$$
+\mathbf {X} \mathbf {W} ^ {V} = \left[ \begin{array}{c} \mathbf {w} _ {\mathcal {X} (0)} ^ {\top} \\ \mathbf {w} _ {\mathcal {X} (1)} ^ {\top} \\ \vdots \\ \mathbf {w} _ {\mathcal {X} (L - 1)} ^ {\top} \end{array} \right].
+$$
+
+Multiplying these terms in the linear self-attention expression yields, for each row $i \in [L]$ :
+
+$$
+\left[ \left(\mathbf {X} \mathbf {C} \mathbf {X} ^ {\top}\right) \mathbf {X} \mathbf {W} ^ {V} \right] _ {i,:} = \sum_ {j = 0} ^ {L - 1} f \left(\mathcal {X} (i), \mathcal {X} (j)\right) \mathbf {w} _ {\mathcal {X} (j)}.
+$$
+
+This matches the desired mapping $\mathbf{y}_{\mathcal{X}(i)} = \sum_{j\in [L]}f\big(\mathcal{X}(i),\mathcal{X}(j)\big)\mathbf{w}_{\mathcal{X}(j)}$. Hence, a single-layer linear self-attention with dimension $|\mathcal{S}|$ can represent any pairwise interaction function of this form.
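To make the construction concrete, here is a small numerical sketch in pure Python (the interaction table `f`, value vectors `w`, and sequence `seq` are illustrative choices, not from the paper) checking that $(\mathbf{X}\mathbf{C}\mathbf{X}^{\top})\mathbf{X}\mathbf{W}^V$ reproduces the pairwise-sum mapping exactly under one-hot embeddings:

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

S, d2 = 3, 2                   # |S| symbols, output dimension d2
f = [[1, 0, 2],                # f(a, b): arbitrary pairwise interaction table
     [0, 3, 1],
     [2, 1, 0]]
w = [[5, 1],                   # w_a in R^{d2}: one value vector per symbol
     [0, 2],
     [3, 3]]

# With one-hot (orthonormal) embeddings, C is the table f itself and
# W^V simply stacks the value vectors w_a as rows.
C, WV = f, w

seq = [0, 2, 2, 1]             # a sequence X : [L] -> S of length L = 4
X = [[1 if j == a else 0 for j in range(S)] for a in seq]
Xt = [list(col) for col in zip(*X)]

scores = matmul(matmul(X, C), Xt)    # entry (i, j) equals f(X(i), X(j))
out = matmul(scores, matmul(X, WV))  # rows are y_{X(i)}

# Direct evaluation of the pairwise-sum mapping for comparison.
direct = [[sum(f[seq[i]][seq[j]] * w[seq[j]][k] for j in range(len(seq)))
           for k in range(d2)] for i in range(len(seq))]
```

Because all quantities are integers, `out == direct` holds exactly, mirroring the zero-error representation claim.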
+
+Now, we focus on the necessity of $d \geq |\mathcal{S}|$. Recall that $f: \mathcal{S} \times \mathcal{S} \to \mathbb{R}$ may be any pairwise interaction function. For fixed embeddings $\{\mathbf{x}(a) \in \mathbb{R}^d\}_{a \in \mathcal{S}}$ and any matrix $\mathbf{C} \in \mathbb{R}^{d \times d}$, the product $\mathbf{X} \mathbf{C} \mathbf{X}^\top$ has rank at most $d$. Formally, if we stack all dictionary embeddings $\mathbf{x}(a)$ into a matrix $\mathbf{X} \in \mathbb{R}^{|\mathcal{S}| \times d}$, then
+
+$$
+\operatorname {r a n k} \left(\mathbf {X C X} ^ {\top}\right) \leq d.
+$$
+
+Thus, any pairwise function $f(a,b) = \mathbf{x}(a)^{\top}\mathbf{C}\mathbf{x}(b)$ produces an $|S|\times |S|$ matrix with rank at most $d$ .
+
+Consider the pairwise function $f^{*}(a,b)$ whose corresponding matrix $F^{*} \in \mathbb{R}^{|S| \times |S|}$ is full-rank, i.e., $\mathrm{rank}(F^{*}) = |\mathcal{S}|$ . For example, take $F^{*} = \mathbf{I}_{|\mathcal{S}|}$ (the identity matrix), corresponding to $f^{*}(a,b) = \delta_{a,b}$ (the Kronecker delta).
+
+If $d < |\mathcal{S}|$ , then any matrix $\mathbf{X}\mathbf{C}\mathbf{X}^{\top}$ has $\mathrm{rank}\leq d$ and hence cannot equal $F^{*}$ , which has $\mathrm{rank}(F^{*}) = |\mathcal{S}| > d$ . Therefore, the self-attention mechanism cannot represent $f^{*}(a,b)$ exactly when $d < |\mathcal{S}|$ .
+
+Since a single-layer linear self-attention mechanism of dimension $d < |S|$ cannot represent the full-rank function $f^{*}$ , it follows that $d \geq |\mathcal{S}|$ is necessary to exactly represent all pairwise functions $f(a, b)$ .
+
+As a consequence of the above theorem, $\mathcal{O}(|\mathcal{S}|^2)$ parameters suffice to exactly represent any such sequence-to-sequence interaction mapping seen in (2). Further, the output dimension, $d_2$, plays a largely peripheral role in the self-attention mechanism's ability to capture pairwise interactions. Its primary function is to align with the desired output representation (whether the dimensionality of subsequent features or the final embedding size) without restricting the expressiveness of the attention scores. The latter is dictated by setting $d = |\mathcal{S}|$, ensuring that all necessary pairwise interactions $f(\alpha, \beta)$ can be effectively captured. Thus, $d_2$ should not be viewed as a bottleneck; it is simply a design choice for how the represented interactions are ultimately projected or stored.
+
+Corollary B.1 (Orthogonal Transforms Preserve Representational Capacity). Under $d = |\mathcal{S}|$ , let $\mathbf{x}(\alpha)$ be the orthonormal embedding constructed therein. For any orthogonal matrix $\mathbf{Q} \in \mathbb{R}^{d \times d}$ , define the new embedding $\mathbf{x}'(\alpha) = \mathbf{Q}\mathbf{x}(\alpha)$ . Then the same pairwise function $f(\alpha, \beta)$ can be represented exactly by redefining
+
+$$
+\mathbf {C} ^ {\prime} = \mathbf {Q} \mathbf {C} \mathbf {Q} ^ {\top} \quad \text {and} \quad \mathbf {W} ^ {\prime V} = \mathbf {Q} \mathbf {W} ^ {V}.
+$$
+
+Hence, any orthonormal transformation of $\mathbf{x}(\alpha)$ leaves the representational capacity of the single-layer linear self-attention unchanged. Since the construction in the Proof of Theorem 3.1 (in Appendix B) relies only on the orthogonality of $\mathbf{x}(\alpha)$ , the same argument applies to any embedding $\mathbf{x}'(\alpha) = \mathbf{Q}\mathbf{x}(\alpha)$ with $\mathbf{Q} \in \mathbb{R}^{d \times d}$ orthogonal. Thus, representational capacity is invariant to the choice of orthonormal basis.
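A minimal numeric sketch of this invariance, using a permutation matrix as the orthogonal $\mathbf{Q}$ so that all arithmetic stays integer-exact (the matrices below are illustrative, not from the paper):

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def T(A):
    return [list(col) for col in zip(*A)]

def attn(X, C, WV):
    # Single-layer linear self-attention: (X C X^T) X W^V.
    return matmul(matmul(matmul(X, C), T(X)), matmul(X, WV))

C = [[1, 0, 2], [0, 3, 1], [2, 1, 0]]
WV = [[5], [0], [3]]
X = [[1, 0, 0],                 # one-hot embeddings of a length-3 sequence
     [0, 0, 1],
     [0, 1, 0]]

Q = [[0, 1, 0],                 # a permutation matrix; Q Q^T = I
     [0, 0, 1],
     [1, 0, 0]]

# Transformed embeddings x'(a) = Q x(a), i.e. X' = X Q^T, together with
# C' = Q C Q^T and W'^V = Q W^V as in the corollary.
X2 = matmul(X, T(Q))
C2 = matmul(matmul(Q, C), T(Q))
WV2 = matmul(Q, WV)

out1, out2 = attn(X, C, WV), attn(X2, C2, WV2)
```

The two outputs coincide because the factors $\mathbf{Q}^{\top}\mathbf{Q} = \mathbf{I}$ cancel inside both $\mathbf{X}\mathbf{C}\mathbf{X}^{\top}$ and $\mathbf{X}\mathbf{W}^V$.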
+
+Proof of Theorem 3.2 (Efficiency of Linear Self Attention). We count the minimum number of parameters needed for a fully connected network.
+
+Step 1: Because $f(\alpha, \beta)$ can be chosen arbitrarily for each of the $|S|^2$ ordered pairs $(\alpha, \beta)$ , the model must be able to realize at least $|S|^2$ independent scalar parameters to represent $f$ itself.
+
+Step 2: For a fixed position $i$ , the quantity
+
+$$
+\mathbf {y} _ {\mathcal {X} (i)} = \sum_ {j \in [ L ]} f (\mathcal {X} (i), \mathcal {X} (j)) \mathbf {w} _ {\mathcal {X} (j)}
+$$
+
+sums over all $j \in [L]$. Although there may be degeneracies, such as $f(\mathcal{X}(i), \mathcal{X}(j)) \mathbf{w}_{\mathcal{X}(j)} = f(\mathcal{X}(i), \mathcal{X}(k)) \mathbf{w}_{\mathcal{X}(k)}$ for some $j, k$, or $f(\mathcal{X}(i), \mathcal{X}(j)) \mathbf{w}_{\mathcal{X}(j)} + f(\mathcal{X}(i), \mathcal{X}(k)) \mathbf{w}_{\mathcal{X}(k)} = f(\mathcal{X}(i), \mathcal{X}(m)) \mathbf{w}_{\mathcal{X}(m)} + f(\mathcal{X}(i), \mathcal{X}(l)) \mathbf{w}_{\mathcal{X}(l)}$, which would reduce the total number of parameters, we analyze the most general case. In the most general case, each term $f(\mathcal{X}(i), \mathcal{X}(j)) \mathbf{w}_{\mathcal{X}(j)}$ in the summation must be calculated separately. Since there are $L$ such terms in the summation, a fully connected network with no built-in parameter sharing cannot collapse these $L$ terms at no cost: each summand could be different and must be learned with its own parameters. Hence, even to represent one output $\mathbf{y}_{\mathcal{X}(i)}$, the network needs $\Omega(L \cdot |\mathcal{S}|^2)$ parameters.
+
+Step 3: The network must simultaneously produce $\mathbf{y}_{\mathcal{X}(0)},\dots ,\mathbf{y}_{\mathcal{X}(L-1)}$. Without additional structure or shared weights across output positions, a fully connected network pays a separate parameter cost for each output $\mathbf{y}_{\mathcal{X}(i)}$.
+
+Therefore, any linear fully connected network that realizes all such pairwise interaction mappings with zero error must have at least $\Omega (L^2 |\mathcal{S}|^2)$ parameters.
+
+Theorem B.2 (Approximate Representation Abilities of Single-Layer Linear Self-Attention). Recall that $\mathcal{S}$ is a finite set and $f: \mathcal{S} \times \mathcal{S} \to \mathbb{R}$ is any pairwise interaction function. For each symbol $\mu \in \mathcal{S}$, recall that $\mathbf{w}_{\mu} \in \mathbb{R}^{d_2}$ is its (value) representation. Suppose the embedding dimension is at most the domain size, $d \leq |\mathcal{S}|$. Then, there exist an embedding $\mathbf{x}: \mathcal{S} \to \mathbb{R}^d$ and parameters $\mathbf{C} \in \mathbb{R}^{d \times d}$ and $\mathbf{W}^V \in \mathbb{R}^{d \times d_2}$ for a single-layer linear self-attention mechanism such that, for any sequence $\mathcal{X}: [L] \to \mathcal{S}$, the output of the linear self-attention block
+
+$$
+\mathbf {S A} ^ {\operatorname {l i n}} (\mathbf {X}) = \left(\mathbf {X C X} ^ {\top}\right) \mathbf {X W} ^ {V}, \tag {7}
+$$
+
+approximates the "pairwise-sum" mapping
+
+$$
+\mathbf {y} _ {\mathcal {X} (i)} = \sum_ {j = 0} ^ {L - 1} f (\mathcal {X} (i), \mathcal {X} (j)) \mathbf {w} _ {\mathcal {X} (j)}, \tag {8}
+$$
+
+such that, for all $i = 0,1,\ldots ,L - 1$,
+
+$$
+\| \mathbf {S A} _ {i,:} ^ {l i n} (\mathbf {X}) - \mathbf {y} _ {\mathcal {X} (i)} \| _ {2} \leq \left\{ \begin{array}{l l} \zeta_ {1} L \sigma_ {d + 1} (\mathbf {F}), & d _ {2} \leq d < | \mathcal {S} | \\ \zeta_ {1} L \sigma_ {d + 1} (\mathbf {F}) + \zeta_ {2} L \sqrt {\sum_ {i = d + 1} ^ {d _ {2}} \sigma_ {i} ^ {2} (\mathbf {W})}, & d < d _ {2} < | \mathcal {S} | \\ \zeta_ {1} L \sigma_ {d + 1} (\mathbf {F}) + \zeta_ {2} L \sqrt {\sum_ {i = d + 1} ^ {| \mathcal {S} |} \sigma_ {i} ^ {2} (\mathbf {W})}. & d < | \mathcal {S} | \leq d _ {2}, \end{array} \right.
+$$
+
+where $\zeta_1 = \max_{\mu \in \mathcal{S}}\| \mathbf{w}_\mu \| _2$, $\zeta_2 = \max_{\mu ,\nu \in \mathcal{S}}\left|\tilde{C}_{\mu \nu}\right|$, and $\sigma_i(\mathbf{F})$ is the $i$-th singular value of $\mathbf{F}$, ordered as $\sigma_1(\mathbf{F})\geq \sigma_2(\mathbf{F})\geq \dots \geq \sigma_{|\mathcal{S}|}(\mathbf{F})$.
+
+Proof. Recall $\mathbf{B}$ is the embedding matrix seen in (6). Form an $|\mathcal{S}|\times |\mathcal{S}|$ matrix $\mathbf{F}$ such that $F_{\mu \nu} = f(\mu ,\nu)\quad \forall \mu ,\nu \in \mathcal{S}$ . We have $\mathbf{C}\in \mathbb{R}^{d\times d}$ , so
+
+$$
+\tilde {\mathbf {C}} := \mathbf {B C B} ^ {T} \in \mathbb {R} ^ {| S | \times | S |}
+$$
+
+is an arbitrary matrix of rank at most $d$. Similarly, let $\mathbf{W} \in \mathbb{R}^{|\mathcal{S}| \times d_2}$ with rows $\mathbf{W}_{\mu,:} = \mathbf{w}_\mu^\top$, and $\tilde{\mathbf{W}}^V \coloneqq \mathbf{B}\mathbf{W}^V \in \mathbb{R}^{|\mathcal{S}| \times d_2}$. Thus, for each $i \in [L]$, we can write
+
+$$
+\mathbf {y} _ {\mathcal {X} (i)} = \sum_ {j \in [ L ]} f (\mathcal {X} (i), \mathcal {X} (j)) \mathbf {w} _ {\mathcal {X} (j)} = \sum_ {j \in [ L ]} F _ {\mathcal {X} (i), \mathcal {X} (j)} \mathbf {W} _ {\mathcal {X} (j),:} \tag {9}
+$$
+
+Since $d \leq |\mathcal{S}|$ and we can freely choose the embedding, we pick one whose basis is orthonormal, i.e., $\mathbf{B}^\top \mathbf{B} = \mathbf{I}$. We can then write
+
+$$
+\mathbf {S A} _ {i,:} ^ {\text {l i n}} = \mathbf {X} _ {i,:} \mathbf {C X} ^ {\top} \mathbf {X W} ^ {V} = \mathbf {X} _ {i,:} \mathbf {B} ^ {\top} \mathbf {B C B} ^ {\top} \mathbf {B X} ^ {\top} \mathbf {X B} ^ {\top} \mathbf {B W} ^ {V} = \sum_ {j = 0} ^ {L - 1} \tilde {C} _ {\mathcal {X} (i), \mathcal {X} (j)} \tilde {\mathbf {W}} _ {\mathcal {X} (j),:}. \tag {10}
+$$
+
+We start the approximation by writing the singular value decomposition of $\mathbf{F}$ as
+
+$$
+\mathbf {F} = \sum_ {i = 1} ^ {| \mathcal {S} |} \sigma_ {i} (\mathbf {F}) u _ {i} v _ {i} ^ {\top},
+$$
+
+where $\sigma_{i}(\mathbf{F})$ is $i$ -th singular value of $\mathbf{F}$ such that $\sigma_{1}(\mathbf{F}) \geq \sigma_{2}(\mathbf{F}) \geq \dots \geq \sigma_{|S|}(\mathbf{F})$ . We can set $\tilde{\mathbf{C}}$ to
+
+$$
+\mathbf {\tilde {C}} = \sum_ {i = 1} ^ {d} \sigma_ {i} (\mathbf {F}) u _ {i} v _ {i} ^ {\top},
+$$
+
+which satisfies the only constraint we have on $\tilde{\mathbf{C}}$, namely that it has rank at most $d$. Consequently,
+
+$$
+\mathbf {F} - \tilde {\mathbf {C}} = \sum_ {i = d + 1} ^ {| \mathcal {S} |} \sigma_ {i} (\mathbf {F}) u _ {i} v _ {i} ^ {\top}
+$$
+
+Since the largest entry of a matrix is bounded by its spectral norm, and the Eckart-Young theorem gives $\|\mathbf{F} - \tilde{\mathbf{C}}\|_2 = \sigma_{d+1}(\mathbf{F})$, we get
+
+$$
+\max _ {i j} \left| F _ {i, j} - \tilde {C} _ {i, j} \right| \leq \left| \left| \mathbf {F} - \tilde {\mathbf {C}} \right| \right| _ {2} = \sigma_ {d + 1} (\mathbf {F}). \tag {11}
+$$
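For intuition, here is a tiny worked instance of this bound (an illustrative example, not from the paper): take $|\mathcal{S}| = 2$, $d = 1$, and a diagonal $\mathbf{F}$, so the singular vectors are the standard basis vectors.

$$
\mathbf{F} = \begin{pmatrix} 3 & 0 \\ 0 & 1 \end{pmatrix}, \qquad \tilde{\mathbf{C}} = \sigma_1(\mathbf{F})\, u_1 v_1^{\top} = \begin{pmatrix} 3 & 0 \\ 0 & 0 \end{pmatrix}, \qquad \max_{ij} \left| F_{ij} - \tilde{C}_{ij} \right| = \left\| \mathbf{F} - \tilde{\mathbf{C}} \right\|_2 = \sigma_2(\mathbf{F}) = 1.
$$

Here the best rank-$1$ approximation keeps the dominant singular direction, and the entrywise error is exactly the first discarded singular value.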
+
+We can choose $\tilde{\mathbf{W}}^V$ to minimize $\| \tilde{\mathbf{W}}^V - \mathbf{W}\|_F$. Note that the only constraint on $\tilde{\mathbf{W}}^V$ is that it has rank at most $\min(d, d_2)$, because $\tilde{\mathbf{W}}^V = \mathbf{B}\mathbf{W}^V$, where $\mathbf{W}^V \in \mathbb{R}^{d \times d_2}$ can be freely chosen and $\mathbf{B} \in \mathbb{R}^{|\mathcal{S}| \times d}$ has full column rank. Therefore, by the Eckart-Young theorem, there exists a $\tilde{\mathbf{W}}^V$ whose Frobenius error $\| \tilde{\mathbf{W}}^V - \mathbf{W}\|_F$ equals the Frobenius norm of the tail of $\mathbf{W}$ beyond its $\min(d, d_2)$ largest singular values. Then, using the fact that the Euclidean norm of any row of a matrix is bounded by its Frobenius norm,
+
+$$
+\sqrt {\max _ {i} \sum_ {j} \left(\tilde {W} _ {i j} ^ {V} - W _ {i j}\right) ^ {2}} \leq \| \tilde {\mathbf {W}} ^ {V} - \mathbf {W} \| _ {F},
+$$
+
+we have,
+
+$$
+\sqrt {\max _ {i} \sum_ {j} \left(\tilde {W} _ {i j} ^ {V} - W _ {i j}\right) ^ {2}} \leq \left\{ \begin{array}{l l} 0, & d _ {2} \leq d < | \mathcal {S} | \\ \sqrt {\sum_ {i = d + 1} ^ {d _ {2}} \sigma_ {i} ^ {2} (\mathbf {W})}, & d < d _ {2} < | \mathcal {S} | \\ \sqrt {\sum_ {i = d + 1} ^ {| \mathcal {S} |} \sigma_ {i} ^ {2} (\mathbf {W})}. & d < | \mathcal {S} | \leq d _ {2} \end{array} \right. \tag {12}
+$$
+
+Returning to Equations 9 and 10, the error can be written as
+
+$$
+\begin{array}{l} \mathbf {S A} _ {i,:} ^ {\operatorname {l i n}} - \mathbf {y} _ {\mathcal {X} (i)} = \sum_ {j = 0} ^ {L - 1} \left\{\tilde {C} _ {\mathcal {X} (i), \mathcal {X} (j)} \tilde {\mathbf {W}} _ {\mathcal {X} (j),:} ^ {V} - F _ {\mathcal {X} (i), \mathcal {X} (j)} \mathbf {W} _ {\mathcal {X} (j),:} \right\} \\ = \sum_ {j = 0} ^ {L - 1} \left\{\left(\tilde {C} _ {\mathcal {X} (i), \mathcal {X} (j)} - F _ {\mathcal {X} (i), \mathcal {X} (j)}\right) \mathbf {W} _ {\mathcal {X} (j),:} + \tilde {C} _ {\mathcal {X} (i), \mathcal {X} (j)} \left(\tilde {\mathbf {W}} _ {\mathcal {X} (j),:} ^ {V} - \mathbf {W} _ {\mathcal {X} (j),:}\right) \right\} \\ \end{array}
+$$
+
+$$
+\| \mathbf {S A} _ {i,:} ^ {\mathrm {l i n}} - \mathbf {y} _ {\mathcal {X} (i)} \| _ {2} \leq \sum_ {j = 0} ^ {L - 1} \Big \{\Big | \tilde {C} _ {\mathcal {X} (i), \mathcal {X} (j)} - F _ {\mathcal {X} (i), \mathcal {X} (j)} \Big | \left\| \mathbf {W} _ {\mathcal {X} (j),:} \right\| _ {2} + \Big | \tilde {C} _ {\mathcal {X} (i), \mathcal {X} (j)} \Big | \left\| \tilde {\mathbf {W}} _ {\mathcal {X} (j),:} ^ {V} - \mathbf {W} _ {\mathcal {X} (j),:} \right\| _ {2} \Big \}
+$$
+
+Letting $\zeta_1 = \max_{\mu \in \mathcal{S}}\| \mathbf{w}_\mu \| _2$ and $\zeta_2 = \max_{\mu ,\nu \in \mathcal{S}}\left|\tilde{C}_{\mu \nu}\right|$, and substituting Equations 11 and 12,
+
+$$
+\begin{array}{l} \left\| \mathbf {S} \mathbf {A} _ {i,:} ^ {\operatorname {l i n}} - \mathbf {y} _ {\mathcal {X} (i)} \right\| _ {2} \leq \sum_ {j = 0} ^ {L - 1} \left\{ \sigma_ {d + 1} (\mathbf {F}) \left\| \mathbf {W} _ {\mathcal {X} (j), :} \right\| _ {2} + \left| \tilde {C} _ {\mathcal {X} (i), \mathcal {X} (j)} \right| \left\| \tilde {\mathbf {W}} _ {\mathcal {X} (j), :} ^ {V} - \mathbf {W} _ {\mathcal {X} (j), :} \right\| _ {2} \right\} \\ \leq \left\{ \begin{array}{l l} \zeta_ {1} L \sigma_ {d + 1} \left(\mathbf {F}\right), & d _ {2} \leq d < | \mathcal {S} | \\ \zeta_ {1} L \sigma_ {d + 1} \left(\mathbf {F}\right) + \zeta_ {2} L \sqrt {\sum_ {i = d + 1} ^ {d _ {2}} \sigma_ {i} ^ {2} (\mathbf {W})}, & d < d _ {2} < | \mathcal {S} | \\ \zeta_ {1} L \sigma_ {d + 1} \left(\mathbf {F}\right) + \zeta_ {2} L \sqrt {\sum_ {i = d + 1} ^ {| \mathcal {S} |} \sigma_ {i} ^ {2} (\mathbf {W})}, & d < | \mathcal {S} | \leq d _ {2} \end{array} \right. \\ \end{array}
+$$
+
+
+
+Justification for the $d = |\mathcal{S}|$ Assumption. A common concern may arise from our theoretical results, which assume the embedding dimension $d$ equals the domain size $|\mathcal{S}|$ . While this may seem restrictive for large vocabularies, this assumption is well-motivated for the following reasons:
+
+- Clarifying the Core Mechanism and Simplifying Analysis. Our primary goal is to study how self-attention models interactions between entities, independent of embedding dimension constraints. Setting $d = |\mathcal{S}|$ ensures an exact orthonormal representation, making both representational power and training dynamics fully transparent. This choice highlights the fundamental ability of self-attention to encode all pairwise interactions while simplifying the theoretical analysis without losing generality. By removing extraneous complexities, we preserve the core insights into representation, convergence, and generalization.
+
+- Establishing a Natural Theoretical Baseline. The assumption $d = |\mathcal{S}|$ serves as an idealized yet expressive starting point in theoretical analysis, providing a one-to-one mapping from domain elements to embeddings. This eliminates unnecessary confounding factors, allowing a clean study of self-attention's structural properties. Future work can systematically extend these results to cases where $d < |\mathcal{S}|$ , exploring compressed or approximate embeddings.
+- Scalability to Lower Dimensions. Although we analyze $d = |\mathcal{S}|$ , our insights extend to $d < |\mathcal{S}|$ through approximate orthonormal embeddings. Techniques such as random projections (e.g., via the Johnson-Lindenstrauss lemma) allow for efficient lower-dimensional embeddings while preserving essential properties. Thus, our results remain applicable in practical settings with small approximation errors.
+
+In summary, the $d = |\mathcal{S}|$ assumption is a deliberate choice to make our analysis transparent and tractable, enabling exact theorems that clarify the core design principles of self-attention. While not always practical, the insights gained from this assumption shed light on why self-attention mechanisms are so effective in learning interactions, thereby paving the way for future research on more approximate and scalable extensions.
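The scalability point can be sketched numerically: random unit embeddings in dimension $d \ll |\mathcal{S}|$ are nearly orthonormal, so the exact $d = |\mathcal{S}|$ construction degrades gracefully. The sizes and the tolerance below are illustrative choices, not from the paper:

```python
import math
import random

random.seed(0)
S, d = 256, 64                       # |S| symbols, embedding dimension d < |S|

def unit_gaussian(dim):
    """A random direction on the unit sphere in R^dim."""
    v = [random.gauss(0.0, 1.0) for _ in range(dim)]
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

emb = [unit_gaussian(d) for _ in range(S)]

# Diagonal inner products are exactly 1 by construction, while off-diagonal
# ones concentrate near 0 (roughly on the 1/sqrt(d) scale).
max_cross = max(abs(sum(a * b for a, b in zip(emb[i], emb[j])))
                for i in range(S) for j in range(i))
```

With high probability every cross inner product stays far below 1, which is the near-orthonormality that the Johnson-Lindenstrauss-style argument relies on.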
+
+Overview and Motivation of the Following Examples. In this appendix, we present several illustrative examples that showcase how single-layer linear self-attention can capture important pairwise interactions across diverse domains: multiagent collision settings, time series forecasting, genotype-phenotype mapping, simple vision tasks, and more. Although there exist many possible configurations, we focus on relatively straightforward, yet representative cases that make it clear how to embed domain-specific features into our theoretical framework.
+
+# B.1. Colliding Agents Environment
+
+We consider $L$ identical agents on a cylindrical grid of size $[N] \times [N]$, with initial position vector $\mathbf{r}_i = \left[n_i^x \; n_i^y\right] \in [N]^2$, where the $y$ axis is looped; that is, the distance between $n_i^y = 1$ and $n_i^y = N - 1$ is just 2. An analogy for such a grid world is ants moving on the surface of a water pipe. Each agent is modeled as a circle of radius $R$ and executes a simple "move-right" policy (one step to the right per time step if there is space to move, i.e., $n_i^x < N - 1$; otherwise it stays where it is). We denote the value function for agent $i$ as $V_{i}$, which corresponds to the expected return (sum of discounted rewards) from a particular configuration. Our goal is to capture how each agent's value function $V_{i}$ depends on the initial states (positions) of the other agents: whenever agent $i$ collides with a distinct agent $j$, it receives a penalty of $-1$. Leveraging Theorem 3.1, we show how this reward structure (and hence the value function if continuing in time) can be exactly represented by a single-layer linear self-attention mechanism.
+
+Under this setting, adding $-1$ bias to all value functions for simplicity, the value function for each agent can be expressed as
+
+$$
+V _ {\mathbf {r} _ {i}} = \sum_ {\substack {j \in [ L ] \\ j \neq i}} \mathbb {I} \left\{\min \left(\left| n _ {i} ^ {y} - n _ {j} ^ {y} \right|, N - \left| n _ {i} ^ {y} - n _ {j} ^ {y} \right|\right) \leq 2 R \right\} \cdot (-1) \tag{13}
+$$
+
+Seeing that the value function depends only on the $y$-coordinates, we focus our discussion on the one-dimensional case for simplicity. The extension to value functions that depend on higher-dimensional positions follows naturally and is illustrated after the one-dimensional case discussed here. In addition, we provide two versions of a similar representation construction for different positional embeddings.
+
+# B.1.1. ONE-HOT EMBEDDING
+
+One of the most straightforward approaches is to embed each agent's position $n_i \in [N]$ as the standard basis vector $\mathbf{e}_{n_i} \in \mathbb{R}^N$. Concretely, the $n_i$-th entry of $\mathbf{e}_{n_i}$ is one and the rest are zero. This guarantees $\mathbf{e}_{n_i}^\top \mathbf{e}_{n_j} = \delta_{n_i,n_j}$, making the embedding orthonormal.
+
Representing the Collision Value Function. By Theorem 3.1, there exists a single-layer linear self-attention model (dimension $d = N$ ) that encodes the collision-based value function $V_{i}$ via an appropriate choice of $\mathbf{C} \in \mathbb{R}^{N \times N}$ and $\mathbf{W}^V \in \mathbb{R}^{N \times 1}$ . To represent the value function in Eq. (13) using a single-layer linear self-attention mechanism, we explicitly construct the attention weight matrix $\mathbf{C}$ and value projection matrix $\mathbf{W}^V$ . Define $\mathbf{W}^V \in \mathbb{R}^{N \times 1}$ as a vector of all $-1$ s:
+
+$$
+\mathbf {W} ^ {V} = - \mathbf {1} _ {N} = \left[ \begin{array}{l l l l} - 1 & - 1 & \dots & - 1 \end{array} \right] ^ {\top}. \tag {14}
+$$
+
+This ensures that the output of self-attention sums over the interactions weighted by $\mathbf{C}$ . The interaction matrix $\mathbf{C} \in \mathbb{R}^{N \times N}$ is defined to capture the penalty for collisions:
+
+$$
C _ {m n} = \left\{ \begin{array}{l l} 1, & \text{if } \min (| m - n |, N - | m - n |) \leq 2 R, \\ 0, & \text{otherwise.} \end{array} \right. \tag {15}
+$$
+
+Here, $C_{mn} = 1$ whenever the $y$ -coordinates $m$ and $n$ are within a radius of $2R$ , accounting for cylindrical wrapping. The resulting attention computation effectively sums over all nearby agents within the collision range.
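As a quick sanity check, the construction above can be verified numerically. The sketch below uses arbitrary small values of $N$, $R$, $L$ (test choices, not from the paper's experiments), builds the one-hot tokens, the $\mathbf{C}$ of Eq. (15), and the $\mathbf{W}^V$ of Eq. (14), and compares the attention output with a direct collision count. Note that the attention sum includes the $j = i$ self-term, which contributes exactly the $-1$ bias mentioned before Eq. (13).

```python
import numpy as np

# Hypothetical small instance (test values, not from the paper).
N, R, L = 12, 1, 5
rng = np.random.default_rng(0)
ny = rng.integers(0, N, size=L)          # y-coordinates of the L agents

# One-hot embeddings: X has one row per agent.
X = np.eye(N)[ny]                        # (L, N)

# Interaction matrix C from Eq. (15): 1 iff cyclic distance <= 2R.
m = np.arange(N)
gap = np.abs(m[:, None] - m[None, :])
dist = np.minimum(gap, N - gap)
C = (dist <= 2 * R).astype(float)        # (N, N)

W_V = -np.ones((N, 1))                   # Eq. (14)

# Single-layer linear self-attention: (X C X^T) X W^V.
V_attn = (X @ C @ X.T) @ X @ W_V         # (L, 1)

# Direct evaluation: sum over ALL j (the j = i term supplies the -1 bias).
V_direct = np.array([
    -sum(min(abs(ny[i] - ny[j]), N - abs(ny[i] - ny[j])) <= 2 * R
         for j in range(L))
    for i in range(L)
])
assert np.allclose(V_attn.ravel(), V_direct)
```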
+
+# B.1.2. SINUSOIDAL EMBEDDING
+
+We now consider sinusoidal embeddings, inspired by various positional embedding techniques that leverage sinusoidal representations. These include absolute positional embeddings from (Vaswani et al., 2023) and the more recent rotational embeddings in (Su et al., 2023). Specifically, assuming $N$ is an even number, we embed the position of the $i$ -th agent as:
+
+$$
\mathbf {x} \left(\mathbf {r} _ {i}\right) = \mathbf {p} _ {i} = \left[ \frac {1}{\sqrt {2}} \quad \sin \left(\frac {2 \pi}{N} n _ {i}\right) \quad \cos \left(\frac {2 \pi}{N} n _ {i}\right) \quad \dots \quad \sin \left(\frac {2 \pi k}{N} n _ {i}\right) \quad \cos \left(\frac {2 \pi k}{N} n _ {i}\right) \quad \dots \quad \sin \left(\frac {2 \pi}{N} \left(\frac {N}{2} - 1\right) n _ {i}\right) \quad \cos \left(\frac {2 \pi}{N} \left(\frac {N}{2} - 1\right) n _ {i}\right) \quad \frac {1}{\sqrt {2}} \cos \left(\frac {2 \pi}{N} \frac {N}{2} n _ {i}\right) \right] ^ {\top} \in \mathbb {R} ^ {N}. \tag{16}
+$$
+
It is a simple exercise to check that $\mathbf{p}_i^\top \mathbf{p}_j = \frac{N}{2}\delta_{n_i,n_j}$ .
+
The main result of this section is that, owing to Lemma B.4 and Theorem B.6, a single-layer linear self-attention with $\mathbf{W}^V = \left[-\sqrt{2} \quad 0 \quad 0 \quad \ldots\right]^\top \in \mathbb{R}^{N \times 1}$ and diagonal $\mathbf{C} \in \mathbb{R}^{N \times N}$ , such that
+
+$$
C _ {n n} = \left\{ \begin{array}{l l} \frac {4}{N} \left(2 R + \frac {1}{2}\right) & \text{if } n = 0, \\ \frac {2}{N} \frac {\sin \left[ \frac {2 \pi}{N} \left(2 R + \frac {1}{2}\right) \left(\frac {n + 1}{2}\right) \right]}{\sin \left[ \frac {2 \pi}{N} \frac {1}{2} \left(\frac {n + 1}{2}\right) \right]} & \text{if } n \text{ odd}, \\ \frac {2}{N} \frac {\sin \left[ \frac {2 \pi}{N} \left(2 R + \frac {1}{2}\right) \left(\frac {n}{2}\right) \right]}{\sin \left[ \frac {2 \pi}{N} \frac {1}{2} \left(\frac {n}{2}\right) \right]} & \text{if } n \text{ even}, \end{array} \right.
+$$
+
can represent the value function in Eq. (13) exactly, for any $L$ ; these diagonal entries are plotted in Figure 1.
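This claim can be checked numerically. The sketch below assumes the embedding layout of Eq. (16) (a constant entry, unscaled sine/cosine pairs, and a scaled alternating-sign entry) and arbitrary small test values of $N$ and $R$; it verifies that $\mathbf{p}_i^\top \mathbf{C}\, \mathbf{p}_j$ reproduces the collision indicator.

```python
import numpy as np

# Test values (not from the paper): small even N, collision radius R.
N, R = 20, 2

def embed(pos):
    # Sinusoidal embedding of Eq. (16).
    v = [1 / np.sqrt(2)]
    for k in range(1, N // 2):
        v += [np.sin(2 * np.pi * k * pos / N), np.cos(2 * np.pi * k * pos / N)]
    v.append(np.cos(np.pi * pos) / np.sqrt(2))
    return np.array(v)

# Diagonal C from the closed form above.
Cdiag = np.empty(N)
Cdiag[0] = (4 / N) * (2 * R + 0.5)
for n in range(1, N):
    k = (n + 1) // 2 if n % 2 == 1 else n // 2
    Cdiag[n] = (2 / N) * np.sin(2 * np.pi / N * (2 * R + 0.5) * k) \
                       / np.sin(2 * np.pi / N * 0.5 * k)
C = np.diag(Cdiag)

# p_i^T C p_j equals the cyclic collision indicator for every pair.
for ni in range(N):
    for nj in range(N):
        hit = min(abs(ni - nj), N - abs(ni - nj)) <= 2 * R
        assert np.isclose(embed(ni) @ C @ embed(nj), float(hit))
```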
+
+Definition B.3 (Discrete-Time Fourier Series (DTFS)). DTFS of a function $f:[N]\to \mathbb{R}$ is defined as
+
+$$
F [ k ] = \frac {1}{N} \sum_ {n = 0} ^ {N - 1} f [ n ] e ^ {- i \frac {2 \pi}{N} k n}.
+$$
+
+Lemma B.4 (Discrete-Time Fourier Series Expansion of Window Function). Let $f$ be:
+
+$$
+f [ n ] = \mathbb {I} \left[ | n | \leq 2 R \right]
+$$
+
Then its DTFS $F$ is:
+
+$$
F [ k ] = \left\{ \begin{array}{l l} \frac {1}{N} \frac {\sin \left[ \frac {2 \pi}{N} \left(2 R + \frac {1}{2}\right) k \right]}{\sin \left[ \frac {2 \pi}{N} \frac {1}{2} k \right]} & \text{if } k \neq 0, \\ \frac {2}{N} \left(2 R + \frac {1}{2}\right) & \text{if } k = 0. \end{array} \right.
+$$
+
Proof. Straightforward: sum the geometric series $\sum_{n = -2R}^{2R} e^{-i\frac{2\pi}{N}kn}$ .
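For a numerical confirmation of the closed form, the sketch below compares the normalized DTFS of the periodic window against the Dirichlet-kernel expression, for arbitrary test values of $N$ and $R$ (chosen so that $N > 4R + 1$).

```python
import numpy as np

# Test values, not from the paper.
N, R = 32, 3
n = np.arange(N)
# Periodic window on the ring: cyclic distance to 0 at most 2R.
f = (np.minimum(n, N - n) <= 2 * R).astype(float)

# Direct DTFS with the 1/N normalization of Definition B.3.
k = np.arange(N)
F_direct = (f[None, :] * np.exp(-2j * np.pi * k[:, None] * n[None, :] / N)
            ).sum(axis=1) / N

# Closed form of Lemma B.4 (Dirichlet kernel).
F_closed = np.empty(N)
F_closed[0] = (2 / N) * (2 * R + 0.5)
kk = k[1:]
F_closed[1:] = (1 / N) * np.sin(2 * np.pi / N * (2 * R + 0.5) * kk) \
                       / np.sin(2 * np.pi / N * 0.5 * kk)

assert np.allclose(F_direct, F_closed, atol=1e-10)
```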
+
Lemma B.5 (Real-Valued, i.e., Sinusoidal, Expression of the Discrete-Time Fourier Series). The DTFS synthesis formula expresses a periodic discrete function $x[n]$ with period $N$ as:
+
+$$
+x [ n ] = \sum_ {k = 0} ^ {N - 1} X [ k ] e ^ {i \frac {2 \pi}{N} k n}
+$$
+
+
+Figure 1. Diagonal entries of the interaction matrix $C$ for $N = 360$ agents with radius $R = 5$ . The pattern emerges from Fourier analysis of collision dynamics in the agent movement model. The matrix indices are from 0 to $N - 1$ .
+
+Letting $a_k = 2 \operatorname{Re}\{X[k]\}$ and $b_k = -2 \operatorname{Im}\{X[k]\}$ , the same function can be expressed as
+
+$$
x [ n ] = \left\{ \begin{array}{l l} \frac {a _ {0}}{2} + \sum_ {k = 1} ^ {\frac {N - 1}{2}} \left(a _ {k} \cos \left(\frac {2 \pi}{N} k n\right) + b _ {k} \sin \left(\frac {2 \pi}{N} k n\right)\right), & \text{for odd } N \\ \frac {a _ {0}}{2} + \frac {a _ {N / 2}}{2} \cos \left(\frac {2 \pi}{N} \frac {N}{2} n\right) + \sum_ {k = 1} ^ {\frac {N}{2} - 1} \left(a _ {k} \cos \left(\frac {2 \pi}{N} k n\right) + b _ {k} \sin \left(\frac {2 \pi}{N} k n\right)\right), & \text{for even } N \end{array} \right.
+$$
+
+Proof. Using Euler's formula, the complex exponential term can be rewritten as:
+
+$$
+e ^ {i \frac {2 \pi}{N} k n} = \cos \left(\frac {2 \pi}{N} k n\right) + i \sin \left(\frac {2 \pi}{N} k n\right)
+$$
+
+Substitute this expression into the DTFS equation:
+
+$$
+x [ n ] = \sum_ {k = 0} ^ {N - 1} X [ k ] \left(\cos \left(\frac {2 \pi}{N} k n\right) + i \sin \left(\frac {2 \pi}{N} k n\right)\right)
+$$
+
+Separate the real and imaginary parts of the equation:
+
+$$
+\begin{array}{l} x [ n ] = \sum_ {k = 0} ^ {N - 1} \left(\operatorname {R e} \{X [ k ] \} \cos \left(\frac {2 \pi}{N} k n\right) - \operatorname {I m} \{X [ k ] \} \sin \left(\frac {2 \pi}{N} k n\right)\right) \\ + i \sum_ {k = 0} ^ {N - 1} \left(\operatorname {I m} \{X [ k ] \} \cos \left(\frac {2 \pi}{N} k n\right) + \operatorname {R e} \{X [ k ] \} \sin \left(\frac {2 \pi}{N} k n\right)\right) \\ \end{array}
+$$
+
+$x[n]$ is a real function, so the imaginary part sums to zero. Therefore, for odd $N$ we have:
+
- $\operatorname{Im}\{X[0]\} = 0$
- $\operatorname{Im}\{X[k]\} + \operatorname{Im}\{X[N - k]\} = 0$ , because $\cos \left(\frac{2\pi}{N}(N - k)n\right) = \cos \left(\frac{2\pi}{N}kn\right)$
- $\operatorname{Re}\{X[k]\} - \operatorname{Re}\{X[N - k]\} = 0$ , because $\sin \left(\frac{2\pi}{N}(N - k)n\right) = -\sin \left(\frac{2\pi}{N}kn\right)$
+
+As for the even $N$ we have an additional requirement:
+
+- $\operatorname{Im}\{X[N/2]\} = 0$
+
+Thus $x[n]$ for odd $N$ is:
+
+$$
+x [ n ] = \operatorname {R e} \{X [ 0 ] \} + \sum_ {k = 1} ^ {\frac {N - 1}{2}} \left(2 \operatorname {R e} \{X [ k ] \} \cos \left(\frac {2 \pi}{N} k n\right) - 2 \operatorname {I m} \{X [ k ] \} \sin \left(\frac {2 \pi}{N} k n\right)\right),
+$$
+
+As for even $N$ we have:
+
+$$
+\begin{array}{l} x [ n ] = \mathrm {R e} \{X [ 0 ] \} + \mathrm {R e} \{X [ N / 2 ] \} (- 1) ^ {n} \\ + \sum_ {k = 1} ^ {\frac {N}{2} - 1} \left(2 \operatorname {R e} \{X [ k ] \} \cos \left(\frac {2 \pi}{N} k n\right) - 2 \operatorname {I m} \{X [ k ] \} \sin \left(\frac {2 \pi}{N} k n\right)\right), \\ \end{array}
+$$
+
+Letting $a_{k} = 2\operatorname{Re}\{X[k]\}$ and $b_{k} = -2\operatorname{Im}\{X[k]\}$ , we get the final form as:
+
+$$
x [ n ] = \left\{ \begin{array}{l l} \frac {a _ {0}}{2} + \sum_ {k = 1} ^ {\frac {N - 1}{2}} \left(a _ {k} \cos \left(\frac {2 \pi}{N} k n\right) + b _ {k} \sin \left(\frac {2 \pi}{N} k n\right)\right), & \text{for odd } N \\ \frac {a _ {0}}{2} + \frac {a _ {N / 2}}{2} \cos \left(\frac {2 \pi}{N} \frac {N}{2} n\right) + \sum_ {k = 1} ^ {\frac {N}{2} - 1} \left(a _ {k} \cos \left(\frac {2 \pi}{N} k n\right) + b _ {k} \sin \left(\frac {2 \pi}{N} k n\right)\right), & \text{for even } N \end{array} \right.
+$$
+
Theorem B.6. Denote the DTFS of a function $f[n]$ by $F[k]$ . Defining $a_{k} = 2\operatorname{Re}\{F[k]\}$ and $b_{k} = -2\operatorname{Im}\{F[k]\}$ , an attention score matrix $\mathbf{C}$ of the form
+
+$$
+\mathbf {C} = \left[ \begin{array}{c c c c c c c} a _ {0} & & & & & & \\ & a _ {1} & b _ {1} & & & & \\ & - b _ {1} & a _ {1} & & & & \\ & & \ddots & & & & \\ & & & a _ {N / 2 - 1} & b _ {N / 2 - 1} & \\ & & & - b _ {N / 2 - 1} & a _ {N / 2 - 1} & \\ & & & & & a _ {N / 2} \end{array} \right], \tag {17}
+$$
+
can represent any function of the form $f[n_i - n_j] = \mathbf{p}_i^\top \mathbf{C}\mathbf{p}_j$ , where $\mathbf{p}_i$ is given in Eq. (16).
+
+Proof. Setting $\mathbf{C}$ as in (17),
+
+$$
\mathbf {p} _ {i} ^ {\top} \mathbf {C} \mathbf {p} _ {j} = \frac {a _ {0}}{2} + \frac {a _ {N / 2}}{2} \cos \left(\frac {2 \pi}{N} \frac {N}{2} \left(n _ {i} - n _ {j}\right)\right) + \sum_ {k = 1} ^ {\frac {N}{2} - 1} \left(a _ {k} \cos \left(\frac {2 \pi}{N} k \left(n _ {i} - n _ {j}\right)\right) + b _ {k} \sin \left(\frac {2 \pi}{N} k \left(n _ {i} - n _ {j}\right)\right)\right).
+$$
+
This is equal to $f[n_i - n_j]$ owing to Lemma B.5.
+
+Remark B.7. If $f$ is symmetric, then the corresponding $\mathbf{C}$ is diagonal.
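Theorem B.6 can likewise be checked numerically for an arbitrary real $f$: build $a_k$, $b_k$ from the normalized DTFS, assemble the matrix of Eq. (17), and compare $\mathbf{p}_i^\top \mathbf{C}\mathbf{p}_j$ with $f[n_i - n_j]$. The embedding layout below (constant entry, sin/cos pairs, alternating-sign entry) is our reading of Eq. (16); $N$ and $f$ are arbitrary test choices.

```python
import numpy as np

N = 16
rng = np.random.default_rng(1)
f = rng.standard_normal(N)               # arbitrary real function on [N]

# DTFS with 1/N normalization (Definition B.3).
n = np.arange(N)
F = (f[None, :] * np.exp(-2j * np.pi * n[:, None] * n[None, :] / N)).sum(axis=1) / N
a = 2 * F.real
b = -2 * F.imag

def embed(pos):
    # Eq. (16): [1/sqrt(2), sin/cos pairs for k = 1..N/2-1, cos(pi*pos)/sqrt(2)].
    v = [1 / np.sqrt(2)]
    for k in range(1, N // 2):
        v += [np.sin(2 * np.pi * k * pos / N), np.cos(2 * np.pi * k * pos / N)]
    v.append(np.cos(np.pi * pos) / np.sqrt(2))
    return np.array(v)

# Block-diagonal C of Eq. (17).
C = np.zeros((N, N))
C[0, 0] = a[0]
for k in range(1, N // 2):
    i = 2 * k - 1
    C[i:i + 2, i:i + 2] = [[a[k], b[k]], [-b[k], a[k]]]
C[N - 1, N - 1] = a[N // 2]

# p_i^T C p_j = f[(n_i - n_j) mod N] for every pair of positions.
for ni in range(N):
    for nj in range(N):
        assert np.isclose(embed(ni) @ C @ embed(nj), f[(ni - nj) % N])
```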
+
+# B.1.3. MORE COMPLEX COLLIDING AGENTS ENVIRONMENTS
+
Value Functions That Depend on Both Coordinates. Assume the same basic grid setup described earlier, but now suppose each agent's value function depends on both coordinates. A simple example: an agent receives a large penalty of $-10$ for colliding with an agent that was initially close, i.e., $\mathrm{dist}(n_i,n_j)\leq R_2$ for some $R_{2}\in \mathbb{R}$ and some distance function $\mathrm{dist}(\cdot ,\cdot)$ , since escaping such an imminent collision is difficult; collisions with initially distant agents incur only a small $-1$ penalty. Thus, the value function becomes
+
+$$
+V _ {i} = - \sum_ {j \neq i} \mathbb {I} \{| n _ {i} ^ {y} - n _ {j} ^ {y} | \leq 2 R \} \cdot (1 + 9 \cdot \mathbb {I} \{\mathrm {d i s t} (n _ {i}, n _ {j}) \leq R _ {2} \}),
+$$
+
+where the $x$ -coordinate condition encodes temporal proximity. This value function, along with similar ones, can be expressed in a general form as follows. Consider $L$ agents on a two-dimensional $[N] \times [N]$ grid, with each agent $i$ occupying a cell at integer coordinates $\mathbf{r}_i = \left[n_i^x \quad n_i^y\right]^\top \in [N]^2$ . Each agent's value function is then influenced by pairwise interactions of the form
+
+$$
+V _ {i} = \sum_ {\substack {j \in [ L ] \\ j \neq i}} f \big (\mathbf {r} _ {i} - \mathbf {r} _ {j} \big) w _ {\mathbf {r} _ {j}} = \sum_ {\substack {j \in [ L ] \\ j \neq i}} f \big (n _ {i} ^ {x} - n _ {j} ^ {x}, n _ {i} ^ {y} - n _ {j} ^ {y} \big) w _ {\mathbf {r} _ {j}},
+$$
+
+where $f:[N]^2 \to \mathbb{R}$ measures how agent $j$ (via its relative position) influences agent $i$ . To represent $f$ exactly as a linear self-attention kernel (as in Theorem 3.1), we embed each 2D position $\mathbf{r}_i$ into a vector of dimension $N^2$ , defined by the Kronecker (outer) product
+
+$$
+\mathbf {p} _ {i} = \mathbf {p} _ {i} ^ {x} \otimes \mathbf {p} _ {i} ^ {y} \in \mathbb {R} ^ {N ^ {2}}.
+$$
+
Here, $\mathbf{p}_i^x,\mathbf{p}_i^y\in \mathbb{R}^N$ are the 1D sinusoidal embeddings from Equation (16), applied separately to the $x$ - and $y$ -coordinates. By construction, $\{\mathbf{p}_i\}_{i = 1}^L$ forms an orthogonal set in $\mathbb{R}^{N^2}$ , with one vector per distinct grid cell.
+
From 1D to 2D Discrete-Time Fourier Series. Recall from Lemma B.4 and Theorem B.6 that any function $g(n_i - n_j)$ on a 1D grid $[N]$ can be written as $\mathbf{p}_i^\top \mathbf{C}\mathbf{p}_j$ for a suitably chosen matrix $\mathbf{C}$ . Precisely the same reasoning applies in two dimensions via a 2D Discrete-Time Fourier Series (DTFS). Specifically, one writes
+
+$$
+f \big (n _ {i} ^ {x} - n _ {j} ^ {x}, n _ {i} ^ {y} - n _ {j} ^ {y} \big) = \sum_ {k _ {x} = 0} ^ {N - 1} \sum_ {k _ {y} = 0} ^ {N - 1} F \big [ k _ {x}, k _ {y} \big ] e ^ {i \frac {2 \pi}{N} \left(k _ {x} \left(n _ {i} ^ {x} - n _ {j} ^ {x}\right) + k _ {y} \left(n _ {i} ^ {y} - n _ {j} ^ {y}\right)\right)},
+$$
+
+and then rewrites each exponential in terms of sines/cosines matching the tensor-product embedding $\mathbf{p}_i^x\otimes \mathbf{p}_i^y$ . Consequently, there exists a block-structured $\mathbf{C}\in \mathbb{R}^{N^2\times N^2}$ such that
+
+$$
+\mathbf {p} _ {i} ^ {\top} \mathbf {C} \mathbf {p} _ {j} = f (\mathbf {r} _ {i} - \mathbf {r} _ {j}).
+$$
+
+Hence, by choosing $\mathbf{W}^V$ so that $\mathbf{p}_i\mapsto w_{\mathbf{r}_i}$ , a single-layer linear self-attention (dimension $d = N^2$ ) recovers exactly the mapping
+
+$$
+V _ {i} = \sum_ {j \neq i} f \left(\mathbf {r} _ {i} - \mathbf {r} _ {j}\right) w _ {\mathbf {r} _ {j}}.
+$$
+
+All the 1D collision arguments from earlier carry over: once the domain is embedded into $\mathbb{R}^{N^2}$ , Theorem 3.1 implies that single-layer linear self-attention can represent any pairwise function $f(\mathbf{r}_i - \mathbf{r}_j)$ . Of course, this comes at the cost of embedding dimension $N^2$ , so the resulting kernel $\mathbf{C}$ has size $N^2 \times N^2$ , i.e. an $\mathcal{O}(N^4)$ parameter count for a fully general 2D interaction.
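Rather than spelling out the 2D DTFS blocks, the sketch below demonstrates the same representational fact directly from orthogonality: the tensor-product embeddings have pairwise inner products $\frac{N^2}{4}\delta$, so for any pairwise kernel $K_{rs} = f(\mathbf{r}_r - \mathbf{r}_s)$ one can solve for $\mathbf{C}$ in closed form. The values of $N$ and $f$ are arbitrary test choices.

```python
import numpy as np

N = 6  # small even test size

def embed1d(pos):
    # 1D sinusoidal embedding of Eq. (16).
    v = [1 / np.sqrt(2)]
    for k in range(1, N // 2):
        v += [np.sin(2 * np.pi * k * pos / N), np.cos(2 * np.pi * k * pos / N)]
    v.append(np.cos(np.pi * pos) / np.sqrt(2))
    return np.array(v)

# All N^2 grid cells, embedded via Kronecker products into R^{N^2}.
cells = [(x, y) for x in range(N) for y in range(N)]
P = np.array([np.kron(embed1d(x), embed1d(y)) for x, y in cells])  # (N^2, N^2)

# Arbitrary pairwise interaction f(dx, dy), periodic in both coordinates.
rng = np.random.default_rng(2)
f = rng.standard_normal((N, N))
K = np.array([[f[(xi - xj) % N, (yi - yj) % N] for xj, yj in cells]
              for xi, yi in cells])

# Orthogonality: P P^T = (N^2/4) I, hence the C below satisfies P C P^T = K,
# i.e. p_i^T C p_j = f(r_i - r_j) for every pair of cells.
C = (4 / N**2) ** 2 * (P.T @ K @ P)
assert np.allclose(P @ C @ P.T, K)
```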
+
+Non-Identical Agents. One can likewise handle agents with different behavior or policies. For example, suppose half of the agents always move right until they reach the boundary, while the others always move up. Label these two behaviors (or policies) via a discrete set $S_{q} = \{\mathrm{R},\mathrm{U}\}$ , and let each agent $i$ carry an extra label $q_{i}\in S_{q}$ . Then the value function is
+
+$$
+V _ {\mathbf {r} _ {i}, q _ {i}} = \sum_ {\substack {j \in [ L ] \\ j \neq i}} f \left(\mathbf {r} _ {i} - \mathbf {r} _ {j}, q _ {i}, q _ {j}\right) w _ {\mathbf {r} _ {j}, q _ {j}}. \tag{18}
+$$
+
+Following steps similar to the value functions that depend on both coordinates example, we now view the domain of each agent as
+
+$$
+\mathcal {S} = \mathcal {S} _ {q} \times \mathcal {S} _ {x} \times \mathcal {S} _ {y},
+$$
+
where $\mathcal{S}_x$ and $\mathcal{S}_y$ are each $[N]$ . Its cardinality is $|\mathcal{S}| = 2N^2$ in this simple example. We then embed each agent's label and position as a vector in $\mathbb{R}^{2N^2}$ . For instance, if $q_i = \mathrm{R}$ we might set
+
+$$
\mathbf {q} _ {i} = \left[ \begin{array}{c} 1 \\ 0 \end{array} \right], \quad \mathbf {p} _ {i} ^ {x}, \mathbf {p} _ {i} ^ {y} \in \mathbb {R} ^ {N}, \quad \mathbf {z} _ {i} = \mathbf {q} _ {i} \otimes \mathbf {p} _ {i} ^ {x} \otimes \mathbf {p} _ {i} ^ {y} \in \mathbb {R} ^ {2 N ^ {2}}.
+$$
+
Likewise, if $q_{i} = \mathrm{U}$ , we set $\mathbf{q}_i = \left[0 \quad 1\right]^\top$ . By expanding the definition of $f\big(\mathbf{r}_i - \mathbf{r}_j,q_i,q_j\big)$ via a suitable DTFS, one again obtains a matrix $\mathbf{C}$ of size $(2N^{2})\times (2N^{2})$ that captures all pairwise interactions. Thus a single-layer linear self-attention with $d = 2N^2$ dimensions suffices to represent Eq. (18). The parameter count grows to $\mathcal{O}(4N^4)$ in the fully general case, but the construction exactly parallels the identical-agent scenario.
+
# B.2. Genotype-Phenotype Mapping Task
+
+Each allele (or gene variant) in the domain $S$ is labeled with a unique integer and embedded using a one-hot vector. We consider a DNA sequence of length $L$ in which every position corresponds to a unique allele. In other words, each allele appears at most once in the sequence (no duplicates). Our experiments randomly sample activation relations: an allele $\mu \in S$ is activated if another allele $\nu$ exists somewhere in the sequence. Symbolically, we might store these relations in a dictionary of the form
+
+$$
+\mu : [ \nu_ {1}, \nu_ {2}, \dots ],
+$$
+
meaning "allele $\mu$ is activated by the set of alleles $\{\nu_{k}\}$ ." If $\mu :[\,]$ (an empty list), then allele $\mu$ is always active, while alleles not present in the dictionary behave like redundant or inactive genetic material.
+
+Constructing $\mathbf{C}$ and $\mathbf{W}^V$ . To model these activation patterns via a single-layer linear self-attention, we build $\mathbf{C} \in \mathbb{R}^{d \times d}$ and $\mathbf{W}^V \in \mathbb{R}^{d \times 1}$ as follows (assuming each allele is one-hot embedded into $\mathbb{R}^d$ , with $d = |\mathcal{S}|$ ):
+
+- For every dictionary entry of the form $i: [j_1, \ldots, j_m]$ , set
+
+$$
\mathbf {C} _ {i, j _ {k}} = 1 / m \quad \text{for each } k = 1, \dots , m,
+$$
+
+and set all other entries in row $i$ to zero.
+
- For entries $i : [\,]$ (i.e., allele $i$ is always active), assign each entry in row $i$ the value $\frac{1}{L}$ . This ensures allele $i$ gains a constant contribution from the entire sequence.
+- Set every entry of $\mathbf{W}^V$ to 1 (i.e., $\mathbf{W}^V \in \mathbb{R}^{d \times 1}$ is a vector of all ones).
+
An example dictionary is $\{1:[3],\; 2:[\,],\; 4:[2]\}$ with $d = 5$ (alleles indexed from $0$ ). Since we use one-hot encodings, the corresponding matrix is
+
+$$
+C = \left[ \begin{array}{c c c c c} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 1 / L & 1 / L & 1 / L & 1 / L & 1 / L \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \end{array} \right]
+$$
+
This is an example task for which a single-layer linear self-attention cannot length-generalize. However, appending a simple dense layer with ReLU activation after the single-layer linear self-attention trivially enables it to generalize to any length.
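A minimal sketch of the dictionary-to-$\mathbf{C}$ construction, using the example dictionary with $d = 5$ alleles (0-indexed) and an assumed test sequence; the attention output gives each position's activation score.

```python
import numpy as np

d = 5
relations = {1: [3], 2: [], 4: [2]}      # the example dictionary
seq = [1, 2, 3, 4]                       # assumed test sequence, distinct alleles
L = len(seq)

C = np.zeros((d, d))
for i, activators in relations.items():
    if activators:                       # i: [j_1, ..., j_m] -> 1/m per activator
        C[i, activators] = 1 / len(activators)
    else:                                # i: [] -> constant 1/L over the row
        C[i, :] = 1 / L

w = np.ones(d)                           # W^V is all ones
X = np.eye(d)[seq]                       # one-hot tokens, (L, d)

scores = (X @ C @ X.T) @ (X @ w)         # activation score per position
# allele 1 is activated (3 is present), allele 2 is always active,
# allele 3 has no rule (score 0), allele 4 is activated (2 is present).
assert np.allclose(scores, [1.0, 1.0, 0.0, 1.0])
```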
+
+# B.3. Vision Task
+
+In this example, each token or entity corresponds to a single pixel in an image. Suppose there are $d$ possible pixel positions in the image, each with a positional embedding $\mathbf{p}_i \in \mathbb{R}^d$ . We also have a binary indicator $b_i \in \{0,1\}$ denoting whether pixel $i$ is black ( $b_i = 1$ ) or white ( $b_i = 0$ ). We embed each pixel jointly as
+
+$$
+\mathbf {x} _ {i} = \left[ b _ {i}, \mathbf {p} _ {i} \right] \in \mathbb {R} ^ {1 + d}.
+$$
+
+Thus, the first coordinate captures color, and the remaining coordinates capture position. We want to detect whether a specific pattern of black pixels occurs around the position $\mathbf{p}_i$ . Formally, we want to detect if pixels in a set of relative offsets
+
+$$
+\Delta = \{\Delta_ {1}, \Delta_ {2}, \ldots \}
+$$
+
+are all black around position $i$ . For example, $\Delta$ might define a small pattern (e.g., a $3 \times 3$ cross), such that for each $\Delta_k \in \Delta$ , the pixel at $\mathbf{p}_i + \Delta_k$ must be black for us to declare "shape present at $i$ ."
+
+Block-Structured C Matrix. Consider a single-layer linear self-attention of dimension $d + 1$ . Let $\mathbf{C} \in \mathbb{R}^{(d + 1) \times (d + 1)}$ be block-structured, meaning:
+
+$$
+\mathbf {C} = \left[ \begin{array}{l l} \mathbf {0} _ {1 \times 1} & \mathbf {0} _ {1 \times d} \\ \mathbf {0} _ {d \times 1} & \widetilde {\mathbf {C}} _ {d \times d} \end{array} \right],
+$$
+
+where $\mathbf{0}$ denotes a zero block (ensuring that the binary coordinate $b_{i}$ does not directly alter which positions are relevant). If we want to detect a shape around $\mathbf{p}_i$ by checking offsets $\Delta = \{\Delta_1,\dots ,\Delta_m\} \subseteq \mathbb{R}^d$ , then for each row $i$ (representing $\mathbf{p}_i$ in the $\widetilde{\mathbf{C}}$ submatrix), we set
+
+$$
\widetilde {\mathbf {C}} _ {i, j} = \left\{ \begin{array}{l l} 1, & \text{if } \mathbf {p} _ {j} \text{ is in } \mathbf {p} _ {i} + \Delta , \\ 0, & \text{otherwise}. \end{array} \right.
+$$
+
+This ensures $\mathbf{p}_i$ attends to exactly those pixel positions $\mathbf{p}_j$ that lie within the shape region around $i$ .
+
+Capturing the Binary Color. To incorporate the notion that only black pixels $(b_{j} = 1)$ contribute, we define $\mathbf{W}^{V}\in$ $\mathbb{R}^{(d + 1)\times 1}$ so that its entries corresponding to the binary part of $\mathbf{x}_j$ are nonzero, while its entries corresponding to the positional part are zero. More formally,
+
+$$
+\mathbf {W} ^ {V} = \left[ \begin{array}{c} 1 \\ \mathbf {0} _ {d} \end{array} \right] \in \mathbb {R} ^ {1 + d}.
+$$
+
+Hence, when we multiply $\mathbf{x}_j$ by $\mathbf{W}^V$ , the outcome is simply $b_{j}$ . In other words, if $b_{j} = 1$ , the pixel contributes; if $b_{j} = 0$ , it does not.
+
+Overall Operation. Putting it together, our single-layer linear self-attention
+
+$$
+\left(\mathbf {X C X} ^ {\top}\right) \mathbf {X W} ^ {V}
+$$
+
+behaves as follows: (i) $\widetilde{\mathbf{C}}$ identifies the relevant offsets $\mathbf{p}_j$ around each $\mathbf{p}_i$ , i.e., it checks which pixels could participate in a pattern at $i$ . (ii) $\mathbf{W}^V$ converts the embedding $[b_j, \mathbf{p}_j]$ into $b_j$ , effectively a blackness indicator. (iii) Summing across the image yields, for each position $i$ , the count of black pixels at the appropriate offsets $\Delta$ .
+
+If this count matches the size of $\Delta$ , we conclude that the shape is present around pixel $i$ . Thus, the linear self-attention mechanism can recognize patterns of arbitrary size and structure without changing the kernel size or other architectural hyperparameters.
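The whole pipeline can be sketched on a small image. The cross-shaped offset set $\Delta$ below is a hypothetical test pattern (not from the paper), and positions use one-hot embeddings on a flattened $w \times w$ grid.

```python
import numpy as np

w = 5
d = w * w
delta = [(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)]   # a "+" pattern (assumed)

# Image: a cross of black pixels centered at (2, 2).
img = np.zeros((w, w), dtype=int)
for dr, dc in delta:
    img[2 + dr, 2 + dc] = 1

# Token per pixel: [b_i, one-hot position] in R^{1+d}.
b = img.ravel()
X = np.hstack([b[:, None], np.eye(d)])               # (d, 1+d)

# Block-structured C: zero except the positional block C_tilde.
C_tilde = np.zeros((d, d))
for r in range(w):
    for c in range(w):
        for dr, dc in delta:
            rr, cc = r + dr, c + dc
            if 0 <= rr < w and 0 <= cc < w:
                C_tilde[r * w + c, rr * w + cc] = 1.0
C = np.zeros((1 + d, 1 + d))
C[1:, 1:] = C_tilde

W_V = np.zeros((1 + d, 1))
W_V[0, 0] = 1.0                                      # extracts the blackness bit

counts = ((X @ C @ X.T) @ X @ W_V).ravel()           # black pixels at offsets
detected = counts == len(delta)
assert detected[2 * w + 2]                           # shape present at the center
assert detected.sum() == 1                           # and nowhere else
```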
+
CNN Versus Transformer: Theoretical Insight. A single-layer Convolutional Neural Network (CNN) with a fixed $3 \times 3$ kernel cannot detect patterns larger than $3 \times 3$ within that same layer. Extending the receptive field would require either deeper networks or larger kernels. Moreover, in a large, deep network we generally do not even know what task an individual layer ends up performing, so the required kernel size cannot be known in advance and must instead be found by cross-validation over different trials. This design constraint makes CNNs less flexible when the optimal pattern scale is not known a priori. By contrast, Transformers can attend to any subset of positions in the input—whether nearby or distant—in a single layer. In our example, the shape offsets $\Delta$ might be large or irregular, yet the same $\widetilde{\mathbf{C}}$ submatrix can be adapted to capture those long-range relationships. Consequently, Transformers provide a more versatile framework for shape detection (which is simply learning how different parts of an image interact with each other) or other vision tasks where the required pattern scale or geometry may vary significantly from one scenario to another.
+
+# B.4. Time Series Prediction
+
Consider a univariate or multivariate time series $\{\mathbf{m}[t]\}_{t=1}^{L+1}$, where each $\mathbf{m}[t] \in \mathbb{R}^{d_2}$ is the observed value at time $t$. We assume $\mathbf{m}[t]$ depends on a set of specific past delays $D = \{t_1, t_2, \ldots\} \subset \mathbb{N}$, along with scalar multipliers $\{a_k\}_{k \in D}$ and a linear transform $\mathbf{A} \in \mathbb{R}^{d_2 \times d_2}$ capturing how past values affect the current state:
+
+$$
+\mathbf {m} [ t ] = \sum_ {k \in D} a _ {k} \mathbf {A} \mathbf {m} [ t - k ].
+$$
+
For instance, if $D = \{2,5\}$ with $a_2 = 3$ and $a_5 = 7$ , then $\mathbf{m}[t] = 3\mathbf{A}\mathbf{m}[t - 2] + 7\mathbf{A}\mathbf{m}[t - 5]$ .
+
+To embed each time step $t$ as an entity, define
+
+$$
+\mathbf {x} _ {t} = \left[ \mathbf {m} [ t ], \mathbf {p} [ t ] \right] \in \mathbb {R} ^ {d + d _ {2}},
+$$
+
+where $\mathbf{m}[t] \in \mathbb{R}^{d_2}$ is the observed state, and $\mathbf{p}[t] \in \mathbb{R}^d$ is a positional embedding (e.g., one-hot or continuous encoding). An indicator-based formulation ensures the attention mechanism recovers delays in $D$ :
+
+$$
+\mathbf {m} [ L + 1 ] = \sum_ {j \in [ L ]} \left(\sum_ {k \in D} a _ {k} \mathbb {I} \left\{\left(L + 1\right) - j = k \right\}\right) \mathbf {A} \mathbf {m} [ j ].
+$$
+
+Our goal is to show how a single-layer linear self-attention can represent this dependency structure exactly, following Theorem 3.1.
+
+We treat each time step $t$ as a distinct entity in the sequence. Its embedding $\mathbf{x}_t\in \mathbb{R}^{d + d_2}$ combines:
+
+$$
\mathbf {x} _ {t} = \left[ \mathbf {m} [ t ], \mathbf {p} [ t ] \right],
+$$
+
+where:
+
+- $\mathbf{m}[t] \in \mathbb{R}^{d_2}$ is the observed state at time $t$ (univariate or multivariate).
+- $\mathbf{p}[t] \in \mathbb{R}^d$ is a positional embedding encoding the index $t$ . This could be a one-hot vector of length $L$ , or a sinusoidal embedding.
+
+Define a block-structured matrix $\mathbf{C} \in \mathbb{R}^{(d + d_2) \times (d + d_2)}$ to separate the $\mathbf{m}[t]$ coordinates from the positional coordinates $\mathbf{p}[t]$ . We can denote:
+
+$$
+\mathbf {C} = \left[ \begin{array}{c c} \mathbf {0} _ {d _ {2} \times d _ {2}} & \mathbf {0} _ {d _ {2} \times d} \\ \mathbf {0} _ {d \times d _ {2}} & \widetilde {\mathbf {C}} _ {d \times d} \end{array} \right],
+$$
+
so that the positional part $\widetilde{\mathbf{C}}$ encodes which time delays matter, while the $\mathbf{m}[t]$ portion does not directly determine which indices to attend to. Concretely, $\widetilde{\mathbf{C}}_{u,v}$ should equal $a_k$ whenever position $v$ lags position $u$ by one of the delays $k \in D$ , and 0 otherwise. Equivalently, we can define
+
+$$
\widetilde {\mathbf {C}} _ {u, v} = \sum_ {k \in D} a _ {k} \mathbb {I} [ u - v = k ],
+$$
+
+if we index the positional embedding such that $u, v \in \{1, \dots, L\}$ . This ensures that time step $u$ attends to time step $v$ if $v$ is exactly one of the valid delays $k$ behind $u$ .
+
+Next, we define $\mathbf{W}^V\in \mathbb{R}^{(d + d_2)\times d_2}$ to extract the actual state $\mathbf{m}[t]$ from each token:
+
+$$
+\mathbf {W} ^ {V} = \left[ \begin{array}{c} \mathbf {A} \\ \mathbf {0} _ {d \times d _ {2}} \end{array} \right],
+$$
+
+where $\mathbf{A} \in \mathbb{R}^{d_2 \times d_2}$ is precisely the transformation from the autoregressive model. This construction means that when we multiply $\mathbf{x}_t$ by $\mathbf{W}^V$ , we obtain $\mathbf{A} \mathbf{m}[t]$ , and the positional coordinates are ignored in this step (since their block is zero).
+
+Putting it all together, a single-layer linear self-attention computes
+
+$$
+\left(\mathbf {X C X} ^ {\top}\right) \mathbf {X W} ^ {V}.
+$$
+
+For the row corresponding to the final time step $L + 1$ , the multiplication by $\mathbf{C}$ picks out those time steps $j$ such that $(L + 1) - j \in D$ . Then multiplying by $\mathbf{W}^V$ retrieves $\mathbf{A}\mathbf{m}[j]$ . Summing the contributions yields precisely
+
+$$
+\mathbf {m} [ L + 1 ] = \sum_ {k \in D} a _ {k} \mathbf {A} \mathbf {m} [ L + 1 - k ],
+$$
+
+mirroring the autoregressive formula.
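The construction can be checked end to end for the running example $D = \{2, 5\}$, $a_2 = 3$, $a_5 = 7$, with one-hot positional embeddings and an arbitrary random $\mathbf{A}$. Note that with row-vector tokens, the top block of $\mathbf{W}^V$ is implemented as $\mathbf{A}^\top$ so that each token maps to $(\mathbf{A}\mathbf{m}[t])^\top$.

```python
import numpy as np

d2, T = 2, 9                              # state dim, number of steps (L + 1 = 9)
D = {2: 3.0, 5: 7.0}                      # delay -> multiplier a_k
rng = np.random.default_rng(3)
A = rng.standard_normal((d2, d2))
M = rng.standard_normal((T, d2))          # m[1], ..., m[L+1] as rows

# Token t: [m[t], one-hot position] in R^{d2 + T}.
X = np.hstack([M, np.eye(T)])

# Block-structured C: positional block with C_tilde[u, u - k] = a_k.
C_tilde = np.zeros((T, T))
for u in range(T):
    for k, a_k in D.items():
        if u - k >= 0:
            C_tilde[u, u - k] = a_k
C = np.zeros((d2 + T, d2 + T))
C[d2:, d2:] = C_tilde

# W^V stacks A (transposed, for the row convention) over a zero block:
# x_t W^V = (A m[t])^T.
W_V = np.vstack([A.T, np.zeros((T, d2))])

out = (X @ C @ X.T) @ X @ W_V             # (T, d2)
# The last row reproduces m[L+1] = 3 A m[L-1] + 7 A m[L-4].
expected = 3 * A @ M[T - 3] + 7 * A @ M[T - 6]
assert np.allclose(out[-1], expected)
```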
+
+# C. Proof of Theorem 4.4: Convergence to Zero Training Error
+
+Recall that $\mathbf{B} \in \mathbb{R}^{|S| \times d}$ is the embedding base matrix whose rows are embeddings of each element in the domain. We start with the following simple observation.
+
Lemma C.1. If the domain embeddings are orthonormal, that is, $\langle x(\alpha),x(\beta)\rangle = \delta_{\alpha,\beta}, \forall \alpha ,\beta \in S,$ then $\mathbf{X}^{\top}\mathbf{X}$ is diagonal in the embedding basis, and $\mathbf{B}\mathbf{X}^{(n)\top}\mathbf{X}^{(n)}\mathbf{B}^{\top} = \mathrm{diag}(\mathbf{s}^{(n)})$ , where $s_\mu^{(n)}$ is the number of times the element $\mu$ appears in the $n$ -th sample.
+
+Since we set $d = |\mathcal{S}|$ , we can freely adopt an orthonormal domain embedding, ensuring that
+
+$$
+\langle \mathbf {B} _ {i,:}, \mathbf {B} _ {j,:} \rangle = \delta_ {i, j}.
+$$
+
+For the remainder of this section, we conduct all calculations in the domain embedding basis. To formalize this, we define the following orthonormal transformations:
+
+$$
+\mathbf {X} ^ {(n) \text {o n e - h o t}} = \mathbf {X} ^ {(n)} \mathbf {B} ^ {\top},
+$$
+
+$$
+\mathbf {C} ^ {\mathrm {o n e - h o t}} = \mathbf {B C B} ^ {\top},
+$$
+
+$$
+\mathbf {w} ^ {\mathrm {o n e - h o t}} = \mathbf {B w}.
+$$
+
For notational simplicity, we omit the one-hot superscripts in the rest of this section. This will not cause any confusion, as we always refer to the one-hot transformed versions: whenever we write $\mathbf{X}^{(n)}$ , $\mathbf{C}$ , $\mathbf{w}$ , they represent $\mathbf{X}^{(n)\mathrm{one - hot}}$ , $\mathbf{C}^{\mathrm{one - hot}}$ , $\mathbf{w}^{\mathrm{one - hot}}$ . Moreover, since $\mathbf{B}$ is an orthonormal matrix, establishing convergence of the one-hot parameter representations directly implies convergence of the original parameters, which can be recovered by applying the inverse transformation in the $\mathbf{B}$ basis. We are simply changing the coordinate system in which we view the parameters. Lastly, in this section we denote by $\mathbf{e}_{\mu} \in \mathbb{R}^{|S|}$ the unique one-hot (basis) vector of each $\mu \in S$ .
+
We first derive the gradients of the loss function with respect to the parameters; then we state a lemma that will be useful for the proof of Theorem 4.4.
+
+Gradients of the $L^{\mathrm{MSE}}(\mathbf{C},\mathbf{w})$ with Respect to $\mathbf{C}$ and $\mathbf{w}$ . For convenience, denote $\mathbf{W}^{V}\in \mathbb{R}^{d\times 1}$ as $\mathbf{w}\in \mathbb{R}^d$ . It is easy to verify the following equation:
+
+$$
+\frac {\partial L ^ {\mathrm {M S E}} (\mathbf {C} , \mathbf {w})}{\partial \mathbf {C}} = \frac {1}{B} \sum_ {n = 1} ^ {B} \left(\mathbf {D} ^ {(n)}\right) ^ {\top} \frac {\partial}{\partial \mathbf {C}} \mathbf {S A} ^ {\mathrm {l i n}} \left(\mathbf {X} ^ {(n)}\right), \quad \frac {\partial L ^ {\mathrm {M S E}} (\mathbf {C} , \mathbf {w})}{\partial \mathbf {w}} = \frac {1}{B} \sum_ {n = 1} ^ {B} \left(\mathbf {D} ^ {(n)}\right) ^ {\top} \frac {\partial}{\partial \mathbf {w}} \mathbf {S A} ^ {\mathrm {l i n}} \left(\mathbf {X} ^ {(n)}\right).
+$$
+
Since $d_{2} = 1$ , the linear self-attention in Eq. (1) can be written as
+
+$$
+\mathbf {S A} ^ {\mathrm {l i n}} \left(\mathbf {X} ^ {(n)}\right) = \left(\mathbf {X C X} ^ {\top}\right) \mathbf {X w},
+$$
+
+where $\mathbf{w}\in \mathbb{R}^d$ . We have
+
+$$
\frac {\partial \mathbf {S A} ^ {\mathrm {l i n}} \left(\mathbf {X} ^ {(n)}\right)}{\partial C _ {\mu \nu}} = \mathbf {X} _ {: \mu} ^ {(n)} \left[ \mathbf {X} ^ {(n) \top} \mathbf {X} ^ {(n)} \mathbf {w} \right] _ {\nu}.
+$$
+
+Hence,
+
+$$
+\frac {\partial L ^ {\mathrm {M S E}} (\mathbf {C} , \mathbf {w})}{\partial C _ {\mu \nu}} = \frac {2}{B} \sum_ {n = 1} ^ {B} \left[ \mathbf {X} ^ {(n) \top} \mathbf {X} ^ {(n)} \mathbf {w} \right] _ {\nu} \left(\mathbf {X} _ {: \mu} ^ {(n)}\right) ^ {\top} \mathbf {D} ^ {(n)}.
+$$
+
+Similarly,
+
+$$
+\frac {\partial \mathbf {S A} ^ {\mathrm {l i n}} \left(\mathbf {X} ^ {(n)}\right)}{\partial w _ {\alpha}} = \left(\mathbf {X C X} ^ {\top}\right) \mathbf {X} _ {: \alpha},
+$$
+
+$$
\frac {\partial L ^ {\mathrm {M S E}} (\mathbf {C} , \mathbf {w})}{\partial w _ {\alpha}} = \frac {2}{B} \sum_ {n = 1} ^ {B} \left(\mathbf {X} ^ {(n) \top}\right) _ {\alpha :} \left(\mathbf {X} ^ {(n)} \mathbf {C} ^ {\top} \mathbf {X} ^ {(n) \top}\right) \mathbf {D} ^ {(n)}.
+$$
+
+We can write the same gradient equations as
+
+$$
+\frac {\partial L ^ {\mathrm {M S E}} (\mathbf {C} , \mathbf {w})}{\partial \mathbf {C}} = \frac {2}{B} \sum_ {n} \mathbf {X} ^ {(n) \top} \mathbf {D} ^ {(n)} \mathbf {w} ^ {\top} \mathbf {X} ^ {(n) \top} \mathbf {X} ^ {(n)},
+$$
+
+$$
+\frac {\partial L ^ {\mathrm {M S E}} (\mathbf {C} , \mathbf {w})}{\partial \mathbf {w}} = \frac {2}{B} \sum_ {n} \mathbf {X} ^ {(n) \top} \mathbf {X} ^ {(n)} \mathbf {C} ^ {\top} \mathbf {X} ^ {(n) \top} \mathbf {D} ^ {(n)}.
+$$
+
Lemma C.2. If we choose initial parameters as $\mathbf{C}(0) = \mathbf{0}$ and $w_{\alpha}(0)\geq b > 0$ , then $w_{\alpha}(t)\geq b > 0$ for all $\alpha$ and all $t\geq 0$ .
+
Proof. First, we show that $w_{\alpha}(t)^2 \geq w_{\alpha}(0)^2$ for all $t$ and all $\alpha$ ; then we prove the statement in the lemma. Restating the previous gradient derivations as gradient-flow equations:
+
+$$
+\frac {d \mathbf {C}}{d t} = - \eta \frac {\partial L ^ {\mathrm {M S E}} (\mathbf {C} , \mathbf {w})}{\partial \mathbf {C}} = - \eta \frac {2}{B} \sum_ {n} \mathbf {X} ^ {(n) \top} \mathbf {D} ^ {(n)} \mathbf {w} ^ {\top} \mathbf {X} ^ {(n) \top} \mathbf {X} ^ {(n)},
+$$
+
+$$
+\frac {d \mathbf {w}}{d t} = - \eta \frac {\partial L ^ {\mathrm {M S E}} (\mathbf {C} , \mathbf {w})}{\partial \mathbf {w}} = - \eta \frac {2}{B} \sum_ {n} \mathbf {X} ^ {(n) \top} \mathbf {X} ^ {(n)} \mathbf {C} ^ {\top} \mathbf {X} ^ {(n) \top} \mathbf {D} ^ {(n)}.
+$$
+
+Let $\pmb{\Lambda}$ be a matrix that is diagonal in the embedding basis $\mathbf{B}$ . We again abuse notation: we do not write out the one-hot-basis form $\pmb{\Lambda}^{\mathrm{one - hot}} = \mathbf{B}\pmb{\Lambda}\mathbf{B}^\top$ and simply denote it as $\pmb{\Lambda}$ in the rest of the proof. We can now write
+
+$$
+\mathbf {C} ^ {\top} \frac {d \mathbf {C}}{d t} \boldsymbol {\Lambda} = - \eta \frac {2}{B} \sum_ {n} \mathbf {C} ^ {\top} \mathbf {X} ^ {(n) \top} \mathbf {D} ^ {(n)} \mathbf {w} ^ {\top} \mathbf {X} ^ {(n) \top} \mathbf {X} ^ {(n)} \boldsymbol {\Lambda},
+$$
+
+$$
+\operatorname {T r} \Bigl \{\mathbf {C} ^ {\top} \frac {d \mathbf {C}}{d t} \boldsymbol {\Lambda} \Bigr \} = - \eta \frac {2}{B} \sum_ {n} \operatorname {T r} \Bigl \{\mathbf {C} ^ {\top} \mathbf {X} ^ {(n) \top} \mathbf {D} ^ {(n)} \mathbf {w} ^ {\top} \mathbf {X} ^ {(n) \top} \mathbf {X} ^ {(n)} \boldsymbol {\Lambda} \Bigr \},
+$$
+
+$$
+\boldsymbol {\Lambda} \frac {d \mathbf {w}}{d t} \mathbf {w} ^ {\top} = - \eta \frac {2}{B} \sum_ {n} \boldsymbol {\Lambda} \mathbf {X} ^ {(n) \top} \mathbf {X} ^ {(n)} \mathbf {C} ^ {\top} \mathbf {X} ^ {(n) \top} \mathbf {D} ^ {(n)} \mathbf {w} ^ {\top},
+$$
+
+$$
+\operatorname {T r} \left\{\boldsymbol {\Lambda} \frac {d \mathbf {w}}{d t} \mathbf {w} ^ {\top} \right\} = - \eta \frac {2}{B} \sum_ {n} \operatorname {T r} \left\{\mathbf {C} ^ {\top} \mathbf {X} ^ {(n) \top} \mathbf {D} ^ {(n)} \mathbf {w} ^ {\top} \mathbf {X} ^ {(n) \top} \mathbf {X} ^ {(n)} \boldsymbol {\Lambda} \right\}.
+$$
+
+Using the cyclic property of the trace, and noting that $\boldsymbol{\Lambda}$ and $\mathbf{X}^{(n)\top}\mathbf{X}^{(n)}$ are diagonal in the same basis and hence commute, we have
+
+$$
+\operatorname {T r} \left\{\mathbf {C} ^ {\top} \frac {d \mathbf {C}}{d t} \boldsymbol {\Lambda} \right\} = \operatorname {T r} \left\{\boldsymbol {\Lambda} \frac {d \mathbf {w}}{d t} \mathbf {w} ^ {\top} \right\},
+$$
+
+$$
+\operatorname {T r} \left\{\frac {d \mathbf {C}}{d t} \boldsymbol {\Lambda} \mathbf {C} ^ {\top} \right\} = \operatorname {T r} \left\{\mathbf {w} ^ {\top} \boldsymbol {\Lambda} \frac {d \mathbf {w}}{d t} \right\}. \tag {19}
+$$
+
+Since $\Lambda$ is diagonal, $\Lambda^{\top} = \Lambda$ . Thus, from Equation 19 we obtain
+
+$$
+\frac {d}{d t} \left(\operatorname {T r} \left\{\mathbf {C} \boldsymbol {\Lambda} \mathbf {C} ^ {\top} \right\} - \operatorname {T r} \left\{\mathbf {w} ^ {\top} \boldsymbol {\Lambda} \mathbf {w} \right\}\right) = 0
+$$
+
+$$
+\operatorname {T r} \left\{\mathbf {C} (t) \boldsymbol {\Lambda} \mathbf {C} (t) ^ {\top} \right\} - \operatorname {T r} \left\{\mathbf {w} ^ {\top} (t) \boldsymbol {\Lambda} \mathbf {w} (t) \right\} = \operatorname {T r} \left\{\mathbf {C} (0) \boldsymbol {\Lambda} \mathbf {C} (0) ^ {\top} \right\} - \operatorname {T r} \left\{\mathbf {w} ^ {\top} (0) \boldsymbol {\Lambda} \mathbf {w} (0) \right\}
+$$
+
+Letting $\Lambda = \mathrm{diag}(\mathbf{e}_{\alpha})$ , where $\mathbf{e}_{\alpha}$ is the unit basis vector corresponding to $\alpha$ , we arrive at
+
+$$
+w _ {\alpha} ^ {2} (t) = w _ {\alpha} ^ {2} (0) + \| \mathbf {C} _ {:, \alpha} (t) \| _ {2} ^ {2} - \| \mathbf {C} _ {:, \alpha} (0) \| _ {2} ^ {2} = w _ {\alpha} ^ {2} (0) + \| \mathbf {C} _ {:, \alpha} (t) \| _ {2} ^ {2},
+$$
+
+where the last equality follows because $\mathbf{C}(0) = \mathbf{0}$ . As a result, we arrive at
+
+$$
+w _ {\alpha} ^ {2} (t) \geq w _ {\alpha} ^ {2} (0) \geq b ^ {2} \tag {20}
+$$
+
+Seeing that $\frac{dw_{\alpha}}{dt}$ is finite $\forall t$ , $w_{\alpha}(t)$ is continuous. As a result, if $w_{\alpha}(0) \geq b > 0$ , then $w_{\alpha}(t) \geq b$ , $\forall t$ , which can be proven by contradiction. Assume $\exists t^{*} > 0$ such that $w_{\alpha}(t^{*}) < b$ . By Equation 20, $w_{\alpha}^{2}(t^{*}) \geq b^{2}$ , so $w_{\alpha}(t^{*}) \leq -b < 0$ . By the intermediate value theorem, $\exists \tau \in (0, t^{*})$ such that $w_{\alpha}(\tau) = 0$ , so $w_{\alpha}^{2}(\tau) = 0 < b^{2}$ , which contradicts Equation 20.
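+The conservation law behind Lemma C.2 can also be observed numerically. The sketch below (our own NumPy illustration; one-hot token rows so that $\mathbf{X}^{(n)\top}\mathbf{X}^{(n)}$ is diagonal, and small-step gradient descent as a proxy for gradient flow) checks that $w_{\alpha}^2(t) - \|\mathbf{C}_{:,\alpha}(t)\|_2^2$ stays approximately at $w_{\alpha}^2(0) = b^2$, so each $w_{\alpha}$ stays bounded away from zero:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(1)
+d, L, B, eta, b = 4, 3, 6, 1e-4, 1.0
+
+# One-hot token rows make X^(n)T X^(n) diagonal, matching the setting of the lemma.
+tokens = rng.integers(0, d, size=(B, L))
+Xs = np.eye(d)[tokens]                       # shape (B, L, d)
+
+C_star = rng.normal(size=(d, d))             # planted (realizable) targets
+w_star = rng.normal(size=d)
+ys = np.stack([X @ C_star @ X.T @ X @ w_star for X in Xs])
+
+C = np.zeros((d, d))                         # C(0) = 0
+w = np.full(d, b)                            # w_alpha(0) = b > 0
+
+for _ in range(2000):                        # small-step GD ~ gradient flow
+    gC = np.zeros_like(C)
+    gw = np.zeros_like(w)
+    for X, y in zip(Xs, ys):
+        D = X @ C @ X.T @ X @ w - y
+        gC += (2 / B) * np.outer(X.T @ D, X.T @ X @ w)
+        gw += (2 / B) * X.T @ X @ C.T @ X.T @ D
+    C -= eta * gC
+    w -= eta * gw
+
+# Conserved quantity: w_alpha^2(t) - ||C_{:,alpha}(t)||^2 ~ b^2 for every alpha.
+invariant = w**2 - np.sum(C**2, axis=0)
+assert np.allclose(invariant, b**2, atol=0.05)
+assert w.min() >= b - 0.05                   # w stays bounded below, as claimed
+```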
+
+Proof of Theorem 4.4 (Convergence to Zero Training Error). Gradient Flow for the Residuals and the Loss. We define the residual (error) on the $n$ -th example as
+
+$$
+\mathbf {D} ^ {(n)} = \mathbf {S A} ^ {\mathrm {l i n}} \left(\mathbf {X} ^ {(n)}\right) - \mathbf {y} ^ {(n)}.
+$$
+
+Let $t$ denote the (continuous) gradient-descent time, with
+
+$$
+\frac {d \mathbf {C}}{d t} = - \eta \frac {\partial L}{\partial \mathbf {C}}, \quad \frac {d \mathbf {w}}{d t} = - \eta \frac {\partial L}{\partial \mathbf {w}}.
+$$
+
+Consider the time derivative of the residual
+
+$$
+\frac {d \mathbf {D} ^ {(m)}}{d t} = \frac {\partial \mathbf {S A} ^ {\mathrm {l i n}} (\mathbf {X} ^ {(m)})}{\partial \mathbf {C}} \frac {d \mathbf {C}}{d t} + \frac {\partial \mathbf {S A} ^ {\mathrm {l i n}} (\mathbf {X} ^ {(m)})}{\partial \mathbf {w}} \frac {d \mathbf {w}}{d t}.
+$$
+
+Expanding each term, we substitute
+
+$$
+\frac {\partial \mathbf {S A} ^ {\mathrm {l i n}} (\mathbf {X} ^ {(m)})}{\partial C _ {\mu \nu}} = \mathbf {X} _ {: \mu} ^ {(m)} [ \mathbf {X} ^ {(m) \top} \mathbf {X} ^ {(m)} \mathbf {w} ] _ {\nu} \quad \text {a n d} \quad \frac {\partial \mathbf {S A} ^ {\mathrm {l i n}} (\mathbf {X} ^ {(m)})}{\partial w _ {\alpha}} = (\mathbf {X} ^ {(m)} \mathbf {C} \mathbf {X} ^ {(m) \top}) \mathbf {X} _ {: \alpha} ^ {(m)}.
+$$
+
+As for the gradient updates we substitute
+
+$$
+\frac {d C _ {\mu \nu}}{d t} = - \frac {2 \eta}{B} \sum_ {n = 1} ^ {B} \left[ \mathbf {X} ^ {(n) \top} \mathbf {X} ^ {(n)} \mathbf {w} \right] _ {\nu} \left(\mathbf {X} _ {: \mu} ^ {(n)}\right) ^ {\top} \mathbf {D} ^ {(n)} \quad \text {a n d} \quad \frac {d w _ {\alpha}}{d t} = - \frac {2 \eta}{B} \sum_ {n = 1} ^ {B} \left(\mathbf {X} ^ {(n) \top}\right) _ {\alpha :} \left(\mathbf {X} ^ {(n)} \mathbf {C} \mathbf {X} ^ {(n) \top}\right) \mathbf {D} ^ {(n)},
+$$
+
+Thus, we arrive at
+
+$$
+\begin{array}{l} \frac {d \mathbf {D} ^ {(m)}}{d t} = \sum_ {\mu , \nu} \mathbf {X} _ {: \mu} ^ {(m)} \left[ \mathbf {X} ^ {(m) \top} \mathbf {X} ^ {(m)} \mathbf {w} \right] _ {\nu} \left(- \frac {2 \eta}{B}\right) \sum_ {n = 1} ^ {B} \left[ \mathbf {X} ^ {(n) \top} \mathbf {X} ^ {(n)} \mathbf {w} \right] _ {\nu} \left(\mathbf {X} _ {: \mu} ^ {(n)}\right) ^ {\top} \mathbf {D} ^ {(n)} \\ + \sum_ {\alpha} \left(\mathbf {X} ^ {(m)} \mathbf {C X} ^ {(m) \top}\right) \mathbf {X} _ {: \alpha} ^ {(m)} \left(- \frac {2 \eta}{B}\right) \sum_ {n = 1} ^ {B} \left(\mathbf {X} ^ {(n) \top}\right) _ {\alpha :} \left(\mathbf {X} ^ {(n)} \mathbf {C X} ^ {(n) \top}\right) \mathbf {D} ^ {(n)}. \\ \end{array}
+$$
+
+Rearranging terms,
+
+$$
+\begin{array}{l} \frac {d \mathbf {D} ^ {(m)}}{d t} = - \frac {2 \eta}{B} \sum_ {n = 1} ^ {B} \left[ \left(\mathbf {w} ^ {\top} \mathbf {X} ^ {(m) \top} \mathbf {X} ^ {(m)} \mathbf {X} ^ {(n) \top} \mathbf {X} ^ {(n)} \mathbf {w}\right) \mathbf {X} ^ {(m)} \mathbf {X} ^ {(n) \top} \right. \\ \left. + \mathbf {X} ^ {(m)} \mathbf {C} \mathbf {X} ^ {(m) \top} \mathbf {X} ^ {(m)} \mathbf {X} ^ {(n) \top} \mathbf {X} ^ {(n)} \mathbf {C} \mathbf {X} ^ {(n) \top} \right] \mathbf {D} ^ {(n)} \\ \end{array}
+$$
+
+Seeing that the term in the parentheses is a scalar, we can write the same equation in terms of the Kronecker product for future convenience, which leads to
+
+$$
+\begin{array}{l} \frac {d \mathbf {D} ^ {(m)}}{d t} = - \frac {2 \eta}{B} \sum_ {n = 1} ^ {B} \left[ \left(\mathbf {w} ^ {\top} \mathbf {X} ^ {(m) \top} \mathbf {X} ^ {(m)} \mathbf {X} ^ {(n) \top} \mathbf {X} ^ {(n)} \mathbf {w}\right) \otimes \left(\mathbf {X} ^ {(m)} \mathbf {X} ^ {(n) \top}\right) \right. \\ \left. + \mathbf {X} ^ {(m)} \mathbf {C} \mathbf {X} ^ {(m) \top} \mathbf {X} ^ {(m)} \mathbf {X} ^ {(n) \top} \mathbf {X} ^ {(n)} \mathbf {C} ^ {\top} \mathbf {X} ^ {(n) \top} \right] \mathbf {D} ^ {(n)}. \tag {21} \\ \end{array}
+$$
+
+Stacking different samples with the following definitions
+
+$$
+\mathbf {D} = \left[ \begin{array}{c} \vdots \\ \mathbf {D} ^ {(n)} \\ \vdots \end{array} \right], \mathbf {M} = \left[ \begin{array}{c} \vdots \\ \left(\mathbf {w} ^ {\top} \mathbf {X} ^ {(n) \top} \mathbf {X} ^ {(n)}\right) \otimes \mathbf {X} ^ {(n)} \\ \vdots \end{array} \right], \mathbf {M _ {2}} = \left[ \begin{array}{c} \vdots \\ \mathbf {X} ^ {(n)} \mathbf {C} \mathbf {X} ^ {(n) \top} \mathbf {X} ^ {(n)} \\ \vdots \end{array} \right],
+$$
+
+we can write Eq. 21 as
+
+$$
+\frac {d \mathbf {D}}{d t} = - \frac {2 \eta}{B} \left[ \mathbf {M} \mathbf {M} ^ {\top} + \mathbf {M} _ {2} \mathbf {M} _ {2} ^ {\top} \right] \mathbf {D}.
+$$
+
+We can write the derivative of the loss as
+
+$$
+\frac {d L ^ {\mathrm {M S E}} (\mathbf {C} , \mathbf {w})}{d t} = \frac {2}{B} \sum_ {m = 1} ^ {B} \mathbf {D} ^ {(m) \top} \frac {d \mathbf {D} ^ {(m)}}{d t} = - \frac {4 \eta}{B ^ {2}} \mathbf {D} ^ {\top} \left[ \mathbf {M M} ^ {\top} + \mathbf {M} _ {2} \mathbf {M} _ {2} ^ {\top} \right] \mathbf {D}.
+$$
+
+Clearly, both $\mathbf{MM}^{\top}$ and $\mathbf{M}_2\mathbf{M}_2^{\top}$ are positive semidefinite, so
+
+$$
+\frac {d L ^ {\mathrm {M S E}} (\mathbf {C} , \mathbf {w})}{d t} \leq - \frac {4 \eta}{B ^ {2}} \mathbf {D} ^ {\top} \mathbf {M} \mathbf {M} ^ {\top} \mathbf {D}. \tag {22}
+$$
+
+Now, we will write Eq. 22 differently by re-expressing $\mathbf{D}$ . Thanks to realizability, we can write
+
+$$
+\mathbf {D} ^ {(n)} = \left(\mathbf {X} ^ {(n)} \mathbf {C} \mathbf {X} ^ {(n) \top}\right) \mathbf {X} ^ {(n)} \mathbf {w} - \left(\mathbf {X} ^ {(n)} \mathbf {C} ^ {*} \mathbf {X} ^ {(n) \top}\right) \mathbf {X} ^ {(n)} \mathbf {w} ^ {*}.
+$$
+
+Due to Lemma C.2, $w_i \neq 0$ , so $w^*_{i} / w_{i}$ is well defined for all $i \in [d]$ . Thus, we define $\operatorname{diag}\left(\frac{\mathbf{w}^*}{\mathbf{w}}\right)$ to be the diagonal matrix whose entries are $w^*_{i} / w_{i}$ in order.
+
+$$
+\mathbf {D} ^ {(n)} = \left(\mathbf {X} ^ {(n)} \mathbf {C} \mathbf {X} ^ {(n) \top}\right) \mathbf {X} ^ {(n)} \mathbf {w} - \left(\mathbf {X} ^ {(n)} \mathbf {C} ^ {*} \mathbf {X} ^ {(n) \top}\right) \mathbf {X} ^ {(n)} \operatorname {d i a g} \left(\frac {\mathbf {w} ^ {*}}{\mathbf {w}}\right) \mathbf {w}.
+$$
+
+In the orthonormal basis, $\mathbf{X}^{(n)\top} \mathbf{X}^{(n)}$ is diagonal, which allows reordering to obtain
+
+$$
+\mathbf {D} ^ {(n)} = \mathbf {X} ^ {(n)} \left[ \mathbf {C} - \mathbf {C} ^ {*} \operatorname {d i a g} \left(\frac {\mathbf {w} ^ {*}}{\mathbf {w}}\right) \right] \mathbf {X} ^ {(n) \top} \mathbf {X} ^ {(n)} \mathbf {w}.
+$$
+
+Vectorizing (using $\operatorname{vec}(\mathbf{A}\mathbf{X}\mathbf{B}) = (\mathbf{B}^\top \otimes \mathbf{A})\operatorname{vec}(\mathbf{X})$ ) yields
+
+$$
+\mathbf {D} ^ {(n)} = \mathbf {M} ^ {(n)} \operatorname {v e c} \left[ \mathbf {C} - \mathbf {C} ^ {*} \operatorname {d i a g} \left(\frac {\mathbf {w} ^ {*}}{\mathbf {w}}\right) \right].
+$$
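+The vectorization identity used above is easy to verify directly; note that the column-stacking convention for $\operatorname{vec}$ corresponds to `order='F'` flattening in NumPy (a short sketch of our own, with arbitrary shapes):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(2)
+A = rng.normal(size=(3, 4))
+X = rng.normal(size=(4, 5))
+Bm = rng.normal(size=(5, 2))
+
+vec = lambda M: M.flatten(order="F")   # column-stacking vec(.)
+lhs = vec(A @ X @ Bm)                  # vec(A X B)
+rhs = np.kron(Bm.T, A) @ vec(X)        # (B^T kron A) vec(X)
+assert np.allclose(lhs, rhs)
+```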
+
+Stacking over $n$ produces $\mathbf{D} = \mathbf{M}\operatorname {vec}\bigl (\mathbf{C} - \mathbf{C}^{*}\mathrm{diag}(\mathbf{w}^{*} / \mathbf{w})\bigr)$ . Thus, Eq. 22 can be written as
+
+$$
+\frac {d L ^ {\mathrm {M S E}} (\mathbf {C} , \mathbf {w})}{d t} \leq - \frac {4 \eta}{B ^ {2}} \mathbf {D} ^ {\top} \mathbf {M} \mathbf {M} ^ {\top} \mathbf {D} = - \frac {4 \eta}{B ^ {2}} \operatorname {v e c} \left[ \mathbf {C} - \mathbf {C} ^ {*} \operatorname {d i a g} \left(\frac {\mathbf {w} ^ {*}}{\mathbf {w}}\right) \right] ^ {\top} \mathbf {M} ^ {\top} \mathbf {M} \mathbf {M} ^ {\top} \mathbf {M} \operatorname {v e c} \left[ \mathbf {C} - \mathbf {C} ^ {*} \operatorname {d i a g} \left(\frac {\mathbf {w} ^ {*}}{\mathbf {w}}\right) \right]
+$$
+
+Using Lemma C.3, the same inequality can be written as
+
+$$
+\frac {d L ^ {\mathrm {M S E}} (\mathbf {C} , \mathbf {w})}{d t} \leq - \frac {4 \eta}{B ^ {2}} \lambda_ {\min} (\mathbf {M} ^ {\top} \mathbf {M}) \left\| \mathbf {M} \mathrm {v e c} \Big [ \mathbf {C} - \mathbf {C} ^ {*} \operatorname {d i a g} \Big (\frac {\mathbf {w} ^ {*}}{\mathbf {w}} \Big) \Big ] \right\| ^ {2} = - \frac {4 \eta}{B ^ {2}} \lambda_ {\min} (\mathbf {M} ^ {\top} \mathbf {M}) \| \mathbf {D} \| ^ {2},
+$$
+
+where $\lambda_{\min}\left(\mathbf{M}^{\top}\mathbf{M}\right)$ is the minimum eigenvalue of $\mathbf{M}^{\top}\mathbf{M}$ . Thus, if there exists a constant $\psi$ such that $\lambda_{\min}\left(\mathbf{M}^{\top}(t)\mathbf{M}(t)\right) \geq \psi > 0$ , $\forall t$ , then the training loss stops decreasing only when $\mathbf{D}$ is the all-zero vector; i.e., the training loss stops decreasing only when it reaches zero, which is stated more rigorously in Lemma C.4.
+
+Lower Bound on the Eigenvalues of $\mathbf{M}^{\top}\mathbf{M}$ . We can show that
+
+$$
+\lambda_ {\min } \left(\mathbf {M} ^ {\top} \mathbf {M}\right) = \sigma_ {\min } \left(\mathbf {M} ^ {\top} \mathbf {M}\right) = \min _ {\mathbf {u}: \| \mathbf {u} \| _ {2} = 1} \| \mathbf {M} ^ {\top} \mathbf {M} \mathbf {u} \| _ {2}, \tag {23}
+$$
+
+where the first equality follows because $\mathbf{M}^{\top}\mathbf{M}$ is symmetric and positive semidefinite. We also know
+
+$$
+\mathbf {M} ^ {\top} \mathbf {M} = \sum_ {n} \left(\mathbf {X} ^ {(n) \top} \mathbf {X} ^ {(n)} \mathbf {w w} ^ {\top} \mathbf {X} ^ {(n) \top} \mathbf {X} ^ {(n)}\right) \otimes \mathbf {X} ^ {(n) \top} \mathbf {X} ^ {(n)}.
+$$
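+This expression for $\mathbf{M}^\top\mathbf{M}$ follows from the mixed-product property of the Kronecker product, and can be checked against an explicit construction of $\mathbf{M}$ (a NumPy sketch of our own, with arbitrary small shapes):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(3)
+B, L, d = 3, 2, 3
+Xs = rng.normal(size=(B, L, d))
+w = rng.normal(size=d)
+
+# Stack the blocks M^(n) = (w^T X^T X) kron X, as in the definition of M.
+blocks = [np.kron((w @ X.T @ X)[None, :], X) for X in Xs]   # each (L, d*d)
+M = np.vstack(blocks)
+
+direct = M.T @ M
+# Claimed closed form: sum_n (X^T X w w^T X^T X) kron (X^T X).
+mixed = sum(np.kron(np.outer(X.T @ X @ w, w @ X.T @ X), X.T @ X) for X in Xs)
+assert np.allclose(direct, mixed)
+```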
+
+Defining
+
+$$
+\mathbf {u} = \sum_ {\mu \in \mathcal {S}} \mathbf {u} _ {\mu} \otimes \mathbf {e} _ {\mu},
+$$
+
+where each $\mathbf{u}_{\mu} \in \mathbb{R}^d$ , so
+
+$$
+\left\| \mathbf {u} \right\| _ {2} ^ {2} = \sum_ {\mu \in \mathcal {S}} \left\| \mathbf {u} _ {\mu} \right\| _ {2} ^ {2} = 1. \tag {24}
+$$
+
+Recalling
+
+$$
+\mathbf {X} ^ {(n)} = \left( \begin{array}{c} \vdots \\ \mathbf {e} _ {\mathcal {X} (i)} \\ \vdots \end{array} \right) _ {i \in [ L ]},
+$$
+
+$\mathbf{X}^{(n)\top} \mathbf{X}^{(n)}$ acts as a projection matrix onto the span of $\{\mathbf{e}_{\mathcal{X}(0)}, \mathbf{e}_{\mathcal{X}(1)}, \dots, \mathbf{e}_{\mathcal{X}(L-1)}\}$ . Thus, using the mixed-product property of the Kronecker product, we can write
+
+$$
+\left\| \mathbf {M} ^ {\top} \mathbf {M} \mathbf {u} \right\| _ {2} ^ {2} = \left\| \sum_ {n \in \mathcal {B}} \sum_ {i \in [ L ]} \left(\left[ \mathbf {X} ^ {(n) ^ {\top}} \mathbf {X} ^ {(n)} \mathbf {w w} ^ {\top} \mathbf {X} ^ {(n) ^ {\top}} \mathbf {X} ^ {(n)} \right] \mathbf {u} _ {\mathcal {X} (i)}\right) \otimes \mathbf {e} _ {\mathcal {X} (i)} \right\| ^ {2} \tag {25}
+$$
+
+$$
+\| \mathbf {M} ^ {\top} \mathbf {M} \mathbf {u} \| _ {2} ^ {2} = \left\| \sum_ {n \in \mathcal {B}} \sum_ {\mu \in \mathcal {S}} \left(\left[ \mathbf {X} ^ {(n) \top} \mathbf {X} ^ {(n)} \mathbf {w} \mathbf {w} ^ {\top} \mathbf {X} ^ {(n) \top} \mathbf {X} ^ {(n)} \right] \mathbf {u} _ {\mu}\right) \otimes s _ {\mu} ^ {(n)} \mathbf {e} _ {\mu} \right\| ^ {2}
+$$
+
+Recall the definition $\mathcal{B}_{\mu} = \left\{n\in \mathcal{B}:\mu \in \mathcal{X}^{(n)}\right\}$ , so
+
+$$
+\begin{array}{l} \left\| \mathbf {M} ^ {\top} \mathbf {M} \mathbf {u} \right\| _ {2} ^ {2} = \left\| \sum_ {\mu \in \mathcal {S}} \sum_ {n \in \mathcal {B} _ {\mu}} \left(s _ {\mu} ^ {(n)} \left[ \mathbf {X} ^ {(n) ^ {\top}} \mathbf {X} ^ {(n)} \mathbf {w w} ^ {\top} \mathbf {X} ^ {(n) ^ {\top}} \mathbf {X} ^ {(n)} \right] \mathbf {u} _ {\mu}\right) \otimes \mathbf {e} _ {\mu} \right\| _ {2} ^ {2} (26) \\ = \sum_ {\mu \in \mathcal {S}} \left\| \sum_ {n \in \mathcal {B} _ {\mu}} s _ {\mu} ^ {(n)} \left[ \mathbf {X} ^ {(n) \top} \mathbf {X} ^ {(n)} \mathbf {w w} ^ {\top} \mathbf {X} ^ {(n) \top} \mathbf {X} ^ {(n)} \right] \mathbf {u} _ {\mu} \right\| _ {2} ^ {2} \\ = \sum_ {\mu \in \mathcal {S}} \Big \| \sum_ {n \in \mathcal {B} _ {\mu}} s _ {\mu} ^ {(n)} \left[ \mathrm {d i a g} \left(\mathbf {s} ^ {(n)}\right) \mathbf {w w} ^ {\top} \mathrm {d i a g} \left(\mathbf {s} ^ {(n)}\right) \right] \mathbf {u} _ {\mu} \Big \| _ {2} ^ {2} \\ = \sum_ {\mu \in \mathcal {S}} \left\| \sum_ {n \in \mathcal {B} _ {\mu}} s _ {\mu} ^ {(n)} \left[ \mathrm {d i a g} (\mathbf {w}) \mathbf {s} ^ {(n)} \mathbf {s} ^ {(n) \top} \mathrm {d i a g} (\mathbf {w}) \right] \mathbf {u} _ {\mu} \right\| _ {2} ^ {2} \\ = \sum_ {\mu \in \mathcal {S}} \left\| \operatorname {d i a g} (\mathbf {w}) \left(\sum_ {n \in \mathcal {B} _ {\mu}} s _ {\mu} ^ {(n)} \mathbf {s} ^ {(n)} \mathbf {s} ^ {(n) \top}\right) \operatorname {d i a g} (\mathbf {w}) \mathbf {u} _ {\mu} \right\| _ {2} ^ {2}. (27) \\ \end{array}
+$$
+
+Recall the definition $\mathbf{S}_{\mathcal{B}_\mu} = \left[ \begin{array}{lll}\dots & \mathbf{s}^{(n)} & \dots \end{array} \right]_{n\in \mathcal{B}_\mu}^\top$ , and define
+
+$$
+\operatorname {d i a g} \left(\mathbf {s} _ {\mu}\right) = \left( \begin{array}{c c c} \ddots & & \\ & s _ {\mu} ^ {(n)} & \\ & & \ddots \end{array} \right) _ {n \in \mathcal {B} _ {\mu}}.
+$$
+
+Thus, we arrive at
+
+$$
+\| \mathbf {M} ^ {\top} \mathbf {M} \mathbf {u} \| _ {2} ^ {2} = \sum_ {\mu \in \mathcal {S}} \left\| \operatorname {d i a g} (\mathbf {w}) \mathbf {S} _ {\mathcal {B} _ {\mu}} ^ {\top} \operatorname {d i a g} (\mathbf {s} _ {\mu}) \mathbf {S} _ {\mathcal {B} _ {\mu}} \operatorname {d i a g} (\mathbf {w}) \mathbf {u} _ {\mu} \right\| _ {2} ^ {2}.
+$$
+
+Repeatedly applying the identity $\| \mathbf{A}\mathbf{z}\| _2\geq \sigma_{\min}(\mathbf{A})\| \mathbf{z}\| _2$ , where $\mathbf{A}$ and $\mathbf{z}$ are any matrix and vector of compatible shapes,
+
+$$
+\| \mathbf {M} ^ {\top} \mathbf {M u} \| _ {2} ^ {2} \geq \sum_ {\mu \in \mathcal {S}} \sigma_ {\min } ^ {4} \left\{\operatorname {d i a g} \left(\mathbf {w}\right) \right\} \sigma_ {\min } ^ {2} \left\{\operatorname {d i a g} \left(\mathbf {s} _ {\mu}\right) \right\} \sigma_ {\min } ^ {4} \left(\mathbf {S} _ {\mathcal {B} _ {\mu}}\right) \| \mathbf {u} _ {\mu} \| _ {2} ^ {2}
+$$
+
+Due to Lemma C.2, $\sigma_{\mathrm{min}}^2\{\mathrm{diag}(\mathbf{w})\} \geq b^2 > 0$ . By definition, $\sigma_{\mathrm{min}}^2\{\mathrm{diag}(\mathbf{s}_\mu)\} \geq 1$ . By Assumption 4.2, $\sigma_{\mathrm{min}}^2\left(\mathbf{S}_{\mathcal{B}_\mu}\right) \geq \zeta^2 > 0$ . By Equations 23 and 24, we arrive at
+
+$$
+\lambda_ {\min } \left(\mathbf {M} ^ {\top} \mathbf {M}\right) \geq b ^ {2} \zeta^ {2} > 0.
+$$
+
+
+
+Lemma C.3. For any matrix $\mathbf{M}$ and any vector $\mathbf{x}$ such that $\mathbf{Mx}$ is defined,
+
+$$
+\mathbf {x} ^ {\top} \mathbf {M} ^ {\top} \mathbf {M} \mathbf {M} ^ {\top} \mathbf {M} \mathbf {x} \geq \lambda_ {\min } \left(\mathbf {M} ^ {\top} \mathbf {M}\right) \| \mathbf {M} \mathbf {x} \| ^ {2}, \tag {28}
+$$
+
+where $\lambda_{\mathrm{min}}(\mathbf{M}^{\top}\mathbf{M})$ denotes the minimum eigenvalue of the matrix $\mathbf{M}^{\top}\mathbf{M}$ .
+
+Proof. Clearly, $\mathbf{A} = \mathbf{M}^{\top}\mathbf{M}$ is a symmetric and positive semidefinite matrix, so it can be diagonalized as $\mathbf{A} = \mathbf{Q}\boldsymbol{\Lambda}\mathbf{Q}^{\top}$ , and its square root, satisfying $\mathbf{A} = \mathbf{A}^{\frac{1}{2}}\mathbf{A}^{\frac{1}{2}}$ , is uniquely defined as $\mathbf{A}^{\frac{1}{2}} = \mathbf{Q}\boldsymbol{\Lambda}^{\frac{1}{2}}\mathbf{Q}^{\top}$ . It follows that
+
+$$
+\begin{array}{l} \mathbf {x} ^ {\top} \mathbf {A} ^ {2} \mathbf {x} = \mathbf {x} ^ {\top} \mathbf {A} ^ {\frac {1}{2}} \mathbf {A} \mathbf {A} ^ {\frac {1}{2}} \mathbf {x} \\ = \mathbf {x} ^ {\top} \mathbf {A} ^ {\frac {1}{2} \top} \mathbf {Q} \boldsymbol {\Lambda} \mathbf {Q} ^ {\top} \mathbf {A} ^ {\frac {1}{2}} \mathbf {x} \\ = \left(\mathbf {Q} ^ {\top} \mathbf {A} ^ {\frac {1}{2}} \mathbf {x}\right) ^ {\top} \boldsymbol {\Lambda} \left(\mathbf {Q} ^ {\top} \mathbf {A} ^ {\frac {1}{2}} \mathbf {x}\right) \\ \geq \lambda_ {\min } \| \mathbf {Q} ^ {\top} \mathbf {A} ^ {\frac {1}{2}} \mathbf {x} \| ^ {2} = \lambda_ {\min } \mathbf {x} ^ {\top} \mathbf {A} \mathbf {x} = \lambda_ {\min } \| \mathbf {M} \mathbf {x} \| ^ {2}. \\ \end{array}
+$$
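+Lemma C.3's inequality is straightforward to test numerically (a NumPy sketch of our own):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(4)
+M = rng.normal(size=(6, 4))
+x = rng.normal(size=4)
+
+A = M.T @ M
+lam_min = np.linalg.eigvalsh(A)[0]           # eigvalsh sorts ascending
+lhs = x @ A @ A @ x                           # x^T (M^T M)(M^T M) x
+rhs = lam_min * np.linalg.norm(M @ x) ** 2    # lambda_min ||M x||^2
+assert lhs >= rhs - 1e-9
+```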
+
+
+
+Lemma C.4 (Convergence Lemma). Consider the equation
+
+$$
+\dot {\mathbf {x}} (t) = - \eta \mathbf {A} (t) \mathbf {x} (t). \tag {29}
+$$
+
+Assume the following conditions hold
+
+(i) $\mathbf{A}(t)$ is symmetric for all $t$ .
+(ii) The eigenvalues of $\mathbf{A}(t)$ are lower bounded by a positive constant, i.e., $\lambda_{\min}(\mathbf{A}(t)) \geq \psi > 0$ for all $t$ , for some $\psi > 0$ .
+
+Then, $\mathbf{x}(t) \to 0$ as $t \to \infty$ .
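+A small forward-Euler simulation of Eq. 29 (our own NumPy illustration; the particular $\mathbf{A}(t)$ below is an arbitrary choice with $\lambda_{\min}(\mathbf{A}(t)) \geq \psi = 1/2$) exhibits the predicted decay $\|\mathbf{x}(t)\| \leq e^{-\eta\psi t}\|\mathbf{x}(0)\|$:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(5)
+n, eta, dt, T = 5, 1.0, 1e-3, 10.0
+
+R = rng.normal(size=(n, n))
+S = (R + R.T) / 2
+S *= 0.5 / np.linalg.norm(S, 2)        # rescale to spectral norm 1/2
+
+def A(t):
+    # Symmetric, time-varying, with lambda_min(A(t)) >= 1 - 1/2 = psi = 1/2.
+    return np.eye(n) + np.sin(t) * S
+
+x = rng.normal(size=n)
+x0_norm = np.linalg.norm(x)
+t = 0.0
+while t < T:
+    x = x - dt * eta * A(t) @ x        # forward-Euler step of x' = -eta A(t) x
+    t += dt
+
+# The bound ||x(T)|| <= exp(-eta * psi * T) * ||x(0)|| gives a factor ~ exp(-5).
+assert np.linalg.norm(x) <= 0.05 * x0_norm
+```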
+
+# D. Generalization Analysis of Linear Self-Attention
+
+Let us denote the parameters we obtain from training as $\hat{\mathbf{C}}$ and $\hat{\mathbf{W}}^V$ . Also, to simplify the notation, just for this section, we define
+
+$$
+C _ {\mu \nu} = \mathbf {x} ^ {\top} (\mu) \mathbf {C x} (\nu),
+$$
+
+$$
+W _ {\nu k} = \mathbf {x} ^ {\top} (\nu) \mathbf {W} _ {:, k}.
+$$
+
+# D.1. Proof of Theorem 4.6: Generalization
+
+Remember that $\mathbf{B}$ is the domain embedding matrix seen in (6). Also, remember that $\{\mathbf{W}^{V\dagger},\mathbf{C}^{\dagger}\}$ represents a set of parameters that generalizes to the population distribution.
+
+Proof of Theorem 4.6. Let $\hat{\mathbf{C}}$ and $\hat{\mathbf{W}}^V$ be the parameters we obtain from training. The zero-training-error condition corresponds to, $\forall n\in \mathcal{B}$ ,
+
+$$
+\mathbf {X} ^ {(n)} \hat {\mathbf {C}} \mathbf {X} ^ {(n) \top} \mathbf {X} ^ {(n)} \hat {\mathbf {W}} ^ {V} = \mathbf {X} ^ {(n)} \mathbf {C} ^ {\dagger} \mathbf {X} ^ {(n) \top} \mathbf {X} ^ {(n)} \mathbf {W} ^ {V \dagger},
+$$
+
+writing the same equation for each column separately, we get
+
+$$
+\mathbf {X} ^ {(n)} \hat {\mathbf {C}} \mathbf {X} ^ {(n) \top} \mathbf {X} ^ {(n)} \hat {\mathbf {w}} ^ {k} = \mathbf {X} ^ {(n)} \mathbf {C} ^ {\dagger} \mathbf {X} ^ {(n) \top} \mathbf {X} ^ {(n)} \mathbf {w} ^ {k \dagger},
+$$
+
+where $\mathbf{w}^{k\dagger}$ is defined as the $k$ -th column of $\mathbf{W}^{V\dagger}$ . Using $\mathbf{B}^{\top}\mathbf{B} = \mathbf{I}$ , we write it in the domain embedding basis
+
+$$
+\mathbf {X} ^ {(n)} \mathbf {B} ^ {\top} \mathbf {B} \hat {\mathbf {C}} \mathbf {B} ^ {\top} \mathbf {B} \mathbf {X} ^ {(n) \top} \mathbf {X} ^ {(n)} \mathbf {B} ^ {\top} \mathbf {B} \hat {\mathbf {w}} ^ {k} = \mathbf {X} ^ {(n)} \mathbf {B} ^ {\top} \mathbf {B} \mathbf {C} ^ {\dagger} \mathbf {B} ^ {\top} \mathbf {B} \mathbf {X} ^ {(n) \top} \mathbf {X} ^ {(n)} \mathbf {B} ^ {\top} \mathbf {B} \mathbf {w} ^ {k \dagger}
+$$
+
+By Lemma C.1,
+
+$$
+\mathbf {B} \mathbf {X} ^ {(n) \top} \mathbf {X} ^ {(n)} \mathbf {B} ^ {\top} = \mathrm {d i a g} \left(s _ {\alpha} ^ {(n)}, s _ {\beta} ^ {(n)}, \ldots\right)
+$$
+
+Remembering the definition $\hat{C}_{\mu \nu} = \mathbf{x}(\mu)^{\top}\hat{\mathbf{C}}\mathbf{x}(\nu)$ and $\hat{w}_{\alpha} = \mathbf{x}(\alpha)^{\top}\hat{\mathbf{w}}$ and similar definitions for $\mathbf{C}^{\dagger}$ and $\mathbf{W}^{V\dagger}$ , for any $\mu, \nu, \alpha \in S$ ,
+
+$$
+\begin{array}{l} \sum_ {\nu \in \mathcal {S}} \hat {C} _ {\mu \nu} s _ {\nu} ^ {(n)} \hat {w} _ {\nu} ^ {k} = \sum_ {\nu \in \mathcal {S}} C _ {\mu \nu} ^ {\dagger} s _ {\nu} ^ {(n)} w _ {\nu} ^ {k \dagger}, \\ \sum_ {\nu \in \mathcal {S}} s _ {\nu} ^ {(n)} \left(\hat {C} _ {\mu \nu} \hat {w} _ {\nu} ^ {k} - C _ {\mu \nu} ^ {\dagger} w _ {\nu} ^ {k \dagger}\right) = 0, \quad \forall \mu \in \mathcal {X} ^ {(n)} \text { a n d } \forall n \in \mathcal {B} \tag {30} \\ \end{array}
+$$
+
+For each specific $\mu$ and $k$ , the terms in parentheses in the last equation can be construed as a vector indexed by $\nu$ . Denoting that vector as $\mathbf{a}^{\mu k}$ and recalling the $\mathbf{S}_{\mathcal{B}_{\mu}}$ definition (from Appendix A), Eq. 30 can be written as
+
+$$
+\mathbf {S} _ {\mathcal {B} _ {\mu}} \mathbf {a} ^ {\mu k} = 0
+$$
+
+Thanks to Assumption 4.2, $\mathbf{S}_{\mathcal{B}_{\mu}}$ has full column rank, so the only solution to the last equation is $\mathbf{a}^{\mu k} = \mathbf{0}$ , which corresponds to $\hat{C}_{\mu \nu}\hat{w}_{\nu}^{k} = C_{\mu \nu}^{\dagger}w_{\nu}^{k\dagger}$ , $\forall \mu ,\nu \in \mathcal{S}$ , $k\in [d]$ . Consequently, for any $\mathbf{X}$ that is the embedding of a corresponding $\mathcal{X}\sim \mathcal{D}$ ,
+
+$$
+\mathbf {X} \hat {\mathbf {C}} \mathbf {X} ^ {\top} \mathbf {X} \hat {\mathbf {W}} ^ {V} = \mathbf {X} \mathbf {C} ^ {\dagger} \mathbf {X} ^ {\top} \mathbf {X} \mathbf {W} ^ {V \dagger} = \mathbf {Y}
+$$
+
+Thus the learned parameters generalize to the entire population; that is,
+
+$$
+\mathbb {E} _ {\mathcal {D} _ {\mathcal {X} \times \mathcal {Y}}} \left[ \left\| \mathbf {Y} - \left(\mathbf {X} \hat {\mathbf {C}} \mathbf {X} ^ {\top}\right) \mathbf {X} \hat {\mathbf {W}} ^ {V} \right\| \right] = 0
+$$
+
+From the above proof, we can see that Assumption 4.2 not only plays a critical role in ensuring zero training error, but also leads to generalization to the entire population distribution. Specifically, this assumption ensures that any set of parameters achieving zero training error must also yield zero population error, meaning there exists no parameter set that perfectly fits the training data while still incurring some error at the population level.
+
+# D.2. Length Generalization
+
+In the preceding analysis, we showed that under some mild assumptions, achieving zero training error leads to robust population-level generalization for the specific length $L$ from which the training data are sampled. However, many of our interaction-based experiments naturally suggest length generalization: the outputs of these models are not inherently tied to a fixed sequence length, nor did our experimental design depend on a specific number of entities. This brings us to Assumption 4.7. Despite this, our test-time generalization results do not formally guarantee out-of-distribution generalization to unseen sequence lengths. In this section, we analyze the conditions on $\mathcal{D}^{L^*}$ under which parameters that generalize to the distribution $\mathcal{D}^{L^*}$ also generalize to $\mathcal{D}^{\forall L}$ .
+
+Remember that we denote $(\mathbf{C}^{\forall \mathrm{L}},\mathbf{W}^{V,\forall \mathrm{L}})$ as the set of parameters that generalizes to any length for the task of interest. Also, we denote the $k$ -th column of the matrix $\mathbf{W}^{V,\forall \mathrm{L}}$ as $\mathbf{w}^{k,\forall \mathrm{L}}$ . Let us first look at the possibility of parameters that have zero error for $\mathcal{X}^{L^{*}}\sim \mathcal{D}^{L^{*}}$ but do not generalize to any other length $L$ . We write them in terms of the length-generalizing parameters and $\mathbf{C}^{\Delta}$ , $\mathbf{w}^{\Delta}$ (defined so as to satisfy the following equalities): $\mathbf{C}^{L^{*}} = \mathbf{C}^{\forall \mathrm{L}} + \mathbf{C}^{\Delta}$ and $\mathbf{w}^{k,L^{*}} = \mathbf{w}^{k,\forall \mathrm{L}} + \mathbf{w}^{\Delta}$ . For all $\mu \in \mathcal{X}^{L^{*}}$ , denoting the $k$ -th column of the true output for the input tuple $\mathcal{X}^{L^{*}}$ as $\mathbf{y}^{k,L^{*}}\left(\mathcal{X}^{L^{*}}\right)$ ,
+
+$$
+\begin{array}{l} y _ {\mu} ^ {k, L ^ {*}} \left(\mathcal {X} ^ {L ^ {*}}\right) = \sum_ {\nu \in \mathcal {X} ^ {L ^ {*}}} \left(C _ {\mu \nu} ^ {\forall \mathrm {L}} + C _ {\mu \nu} ^ {\Delta}\right) \left(w _ {\nu} ^ {k, \forall \mathrm {L}} + w _ {\nu} ^ {k, \Delta}\right) \\ = \sum_ {\nu \in \mathcal {X} ^ {L ^ {*}}} \left(C _ {\mu \nu} ^ {\forall \mathrm {L}} w _ {\nu} ^ {k, \forall \mathrm {L}} + C _ {\mu \nu} ^ {\forall \mathrm {L}} w _ {\nu} ^ {k, \Delta} + C _ {\mu \nu} ^ {\Delta} w _ {\nu} ^ {k, \forall \mathrm {L}} + C _ {\mu \nu} ^ {\Delta} w _ {\nu} ^ {k, \Delta}\right) \tag {31} \\ \end{array}
+$$
+
+Defining $\Delta_{\mu \nu}^{L^*} = C_{\mu \nu}^{\forall \mathrm{L}}w_{\nu}^{k,\Delta} + C_{\mu \nu}^{\Delta}w_{\nu}^{k,\forall \mathrm{L}} + C_{\mu \nu}^{\Delta}w_{\nu}^{k,\Delta}$ , and combining the effect of the biases on the output,
+
+$$
+y _ {\mu} ^ {k, L ^ {*}} \left(\mathcal {X} ^ {L ^ {*}}\right) = \sum_ {\nu \in \mathcal {X} ^ {L ^ {*}}} C _ {\mu \nu} ^ {\forall \mathrm {L}} w _ {\nu} ^ {k, \forall \mathrm {L}} + \sum_ {\nu \in \mathcal {X} ^ {L ^ {*}}} \Delta_ {\mu \nu} ^ {L ^ {*}}
+$$
+
+We can write the bias on the function output that does not change the function output for the inputs such that $|\mathcal{X}| = L^{*}$ , but may change the output for inputs with other $L$ , as
+
+$$
+\sum_ {\nu \in \mathcal {X}} \Delta_ {\mu \nu} ^ {L ^ {*}}.
+$$
+
+To illustrate this point, consider $\Delta_{\mu \mu}^{L^*} = a$ and $\Delta_{\mu \nu}^{L^*} = \frac{-a}{L^* - 1}$ for all $\nu \neq \mu$ . Seeing that Eq. 31 holds $\forall \mu \in \mathcal{X}$ ,
+
+$$
+\sum_{\nu \in \mathcal{X}^{L^{*}}}\Delta^{L^{*}}_{\mu \nu} = \Delta^{L^{*}}_{\mu \mu} + \sum_{\substack{\nu \in \mathcal{X}^{L^{*}}\\ \nu \neq \mu}}\Delta^{L^{*}}_{\mu \nu} = a - \sum_{\substack{\nu \in \mathcal{X}^{L^{*}}\\ \nu \neq \mu}}\frac{a}{L^{*} - 1} = 0.
+$$
+
+However, when we feed an input $\mathcal{X}^L\sim \mathcal{D}^L$ with $L\neq L^{*}$ , we get
+
+$$
+\begin{array}{l} y _ {\mu} ^ {k, L ^ {*}} \left(\mathcal {X} ^ {L}\right) = \sum_ {\nu \in \mathcal {X} ^ {L}} C _ {\mu \nu} ^ {\forall \mathrm {L}} w _ {\nu} ^ {k, \forall \mathrm {L}} + \sum_ {\nu \in \mathcal {X} ^ {L}} \Delta_ {\mu \nu} ^ {L ^ {*}} \\ = y _ {\mu} ^ {k} + \sum_ {\nu \in \mathcal {X} ^ {L}} \Delta_ {\mu \nu} ^ {L ^ {*}} \\ = y_{\mu}^{k} + \Delta_{\mu \mu}^{L^{*}} + \sum_{\substack{\nu \in \mathcal{X}^{L}\\ \nu \neq \mu}}\Delta_{\mu \nu}^{L^{*}} \\ = y_{\mu}^{k} + a - \sum_{\substack{\nu \in \mathcal{X}^{L}\\ \nu \neq \mu}}\frac{a}{L^{*} - 1} \\ = y _ {\mu} ^ {k} + a \frac {L ^ {*} - L}{L ^ {*} - 1} \\ \end{array}
+$$
+
+In particular, the last equation shows that an unseen sequence of length $L \neq L^{*}$ incurs an additive deviation of $a\frac{L^{*} - L}{L^{*} - 1}$ from the desired target $y_{\mu}^{k,L^{*}}$ , so the model generalizes to every length if and only if this bias term vanishes, i.e., $a = 0$ . This will be useful in the following proof.
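+The additive deviation above can be checked with simple arithmetic; the sketch below (our own illustration, with arbitrary values of $a$ and $L^*$) evaluates the bias term of the example construction for several lengths:
+
+```python
+# Bias construction: Delta_{mu,mu} = a and Delta_{mu,nu} = -a/(L*-1) for nu != mu.
+a, L_star = 2.0, 5
+
+def bias(L):
+    # Sum of Delta over a length-L input containing mu and L-1 other elements.
+    return a + (L - 1) * (-a / (L_star - 1))
+
+assert abs(bias(L_star)) < 1e-12      # vanishes exactly at the training length L*
+for L in (2, 3, 8):
+    # Matches the closed form a * (L* - L) / (L* - 1) at every other length.
+    assert abs(bias(L) - a * (L_star - L) / (L_star - 1)) < 1e-12
+```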
+
+Proof of Theorem 4.8. To generalize to any length we should make sure that the population distribution for any length $L^*$ does not allow such a bias, that is, $\Delta_{\mu \nu}^{L^*} = 0, \forall \mu, \nu$ . From the previous discussion, for the set of parameters that generalize to population distribution $\mathcal{D}^{L^*}$ we know that,
+
+$$
+\sum_ {\nu \in \mathcal {X} ^ {L ^ {*}}} \Delta_ {\mu \nu} ^ {L ^ {*}} = 0, \forall \mathcal {X} ^ {L ^ {*}} \sim \mathcal {D} ^ {L ^ {*}} \tag {32}
+$$
+
+For each $\mu$ , we can think of Eq. 32 as a linear system of equations. Defining $\delta_{\mu}^{L^*} = \left[ \begin{array}{cccc}\Delta_{\mu \alpha}^{L^*} & \Delta_{\mu \beta}^{L^*} & \Delta_{\mu \gamma}^{L^*} & \ldots \end{array} \right]^\top \in \mathbb{R}^{|S|}$ and an infinite-sample extension of $\mathbf{S}_{\mathcal{B}_\mu}$ , namely
+
+$$
+\mathbf {S} _ {\mathcal {B} _ {\mu} ^ {\infty}} ^ {L ^ {*}} = \left[ \dots \quad \mathbf {s} ^ {L ^ {*}} \quad \dots \right] _ {s _ {\mu} \neq 0} ^ {\top}, \tag {33}
+$$
+
+where $\mathbf{S}_{\mathcal{B}_{\mu}^{\infty}}^{L^{*}}$ is a matrix each of whose rows sums to $L^{*}$ , Eq. 32 can be written as
+
+$$
+\mathbf {S} _ {\mathcal {B} _ {\mu} ^ {\infty}} ^ {L ^ {*}} \delta_ {\mu} ^ {L ^ {*}} = 0, \quad \forall \mu \in \mathcal {S}.
+$$
+
+If $\mathbf{S}_{\mathcal{B}_{\mu}^{\infty}}^{L^{*}}$ has full column rank for all $\mu$ , which is satisfied by Assumption 4.2, then $\Delta_{\mu\nu}^{L^*} = 0$ for all $\mu ,\nu$ . Thus, the parameters generalize to any length.
+
+Seeing that Assumption 4.2 inherently encompasses the condition that $\mathbf{S}_{\mathcal{B}_{\mu}^{\infty}}$ has full column rank, it may seem that test generalization always leads to length generalization, without any assumption. However, it is important to note that Assumption 4.2 is not the minimal requirement for ensuring test generalization.
+
+Remark D.1. For clarity, let us look at an example class of tasks for which $\mathbf{S}_{\mathcal{B}_{\mu}^{\infty}}^{L^{*}}$ is not full column rank. If there is a constraint on the distribution $\mathcal{X}\sim \mathcal{D}^{L^{*}}$ that the elements within the tuple $\mathcal{X}$ are unique, then $\mathbf{S}_{\mathcal{B}_{\mu}^{\infty}}^{L^{*}}$ is not full rank, since the column of $\mathbf{S}_{\mathcal{B}_{\mu}^{\infty}}^{L^{*}}$ that corresponds to element $\mu$ is the all-ones vector. This means that there are parameters that achieve zero error on $\mathcal{D}^{L^{*}}$ but do not generalize to $\mathcal{D}^{L}$ for any other $L$ .
+
+Remark D.2. Under the realizability assumption, skip connections do not affect the values of the residues $\mathbf{D}^{(n)}$ , allowing the same proof on generalization to apply in the skip-connected scenario.
+
+Corollary D.3. Defining
+
+$$
+C _ {\mu \nu} = \mathbf {x} ^ {\top} (\mu) \mathbf {C x} (\nu),
+$$
+
+$$
+W _ {\nu k} = \mathbf {x} ^ {\top} (\nu) \mathbf {W} _ {:, k},
+$$
+
+it follows from the proof of Theorem 4.8 that any $\mathbf{C},\mathbf{W}^V$ that generalizes $\forall L$ satisfies
+
+$$
+C _ {\mu \nu} W _ {\nu k} ^ {V} = C _ {\mu \nu} ^ {\forall \mathrm {L}} W _ {\nu k} ^ {V, \forall \mathrm {L}}.
+$$
+
+Consequently, under this nontrivial transformation of the parameters, all length-generalizing parameters map to one specific matrix that depends only on the task at hand.
+
+If two sets of parameters for linear self-attention yield functionally equivalent self-attention, i.e., they produce the same outputs for the same inputs over all input-output pairs, then under this nontrivial transformation they map to the same matrix. Thus, although the parameters obtained after training differ from the length-generalizing parameters we designed, this transformation shows they are functionally equivalent to them.
+
+# E. Justification for Data Versatility Assumption
+
+In this section we justify Assumption 4.2 by showing that it holds under the even milder assumption below.
+
+Assumption E.1 (Positive-Definite Covariance and Bounded Norm). Let $\{\mathbf{s}^{(n)}\}_{n = 1}^{B}\subset \mathbb{R}^{|S|}$ be i.i.d. random row vectors. Suppose there exist constants $\zeta_{\mathrm{min}} > 0$ and $M > 0$ such that:
+
+(A1) Positive-Definite Covariance: The covariance matrix
+
+$$
+\Sigma := \operatorname {C o v} (\mathbf {s} ^ {(n)}) = \mathbb {E} \left[ \left(\mathbf {s} ^ {(n)} - \mathbb {E} [ \mathbf {s} ^ {(n)} ]\right) \left(\mathbf {s} ^ {(n)} - \mathbb {E} [ \mathbf {s} ^ {(n)} ]\right) ^ {\top} \right] \quad \text {satisfies} \quad \Sigma \succeq \zeta_ {\min } ^ {2} \mathbf {I}.
+$$
+
+That is, $\lambda_{\mathrm{min}}(\Sigma) \geq \zeta_{\mathrm{min}}^2 > 0$ .
+
+(A2) Bounded Norm: The centered vectors satisfy
+
+$$
+\left\| \mathbf {s} ^ {(n)} - \mathbb {E} [ \mathbf {s} ^ {(n)} ] \right\| _ {2} \leq M \quad \text {almost surely}. \tag {34}
+$$
+
+Theorem E.2 (Full Column Rank with High Probability). Under Assumption E.1, let
+
+$$
+\mathbf {S} = \left[ \begin{array}{c} \mathbf {s} ^ {(1)} \\ \mathbf {s} ^ {(2)} \\ \vdots \\ \mathbf {s} ^ {(B)} \end{array} \right] \in \mathbb {R} ^ {B \times | \mathcal {S} |}.
+$$
+
+Then there exists a positive constant $\gamma > 0$ (depending on $M$ , $\zeta_{\min}$ , and $|\mathcal{S}|$ ) such that for all sufficiently large $B$ ,
+
+$$
+\mathbb {P} \left[ \operatorname {r a n k} (\mathbf {S}) < | \mathcal {S} | \right] \leq e ^ {- \gamma B}.
+$$
+
+Equivalently, $\mathbf{S}$ is full column rank with probability at least $1 - e^{-\gamma B}$ .
+
+Proof. Step 1: Center the rows. Define $\mathbf{a}_n \coloneqq \mathbf{s}^{(n)} - \mathbb{E}[\mathbf{s}^{(n)}]$ , so that $\mathbb{E}[\mathbf{a}_n] = \mathbf{0}$ and
+
+$$
+\operatorname {C o v} \left(\mathbf {a} _ {n}\right) = \mathbb {E} \left[ \mathbf {a} _ {n} \mathbf {a} _ {n} ^ {\top} \right] = \Sigma .
+$$
+
+Stack these centered rows into
+
+$$
+\mathbf {A} = \left[ \begin{array}{c} \mathbf {a} _ {1} ^ {\top} \\ \mathbf {a} _ {2} ^ {\top} \\ \vdots \\ \mathbf {a} _ {B} ^ {\top} \end{array} \right] \in \mathbb {R} ^ {B \times | \mathcal {S} |}.
+$$
+
+Since each row of $\mathbf{S}$ differs from the corresponding row of $\mathbf{A}$ by a constant shift, $\mathrm{rank}(\mathbf{S}) = \mathrm{rank}(\mathbf{A})$ . Hence it suffices to show $\mathbf{A}$ is full column rank with high probability.
+
+Step 2: Expected Gram matrix lower bound. We have
+
+$$
+\mathbf {A} ^ {\top} \mathbf {A} = \sum_ {n = 1} ^ {B} \mathbf {a} _ {n} \mathbf {a} _ {n} ^ {\top}
+$$
+
+Taking expectation,
+
+$$
+\mathbb {E} [ \mathbf {A} ^ {\top} \mathbf {A} ] = \sum_ {n = 1} ^ {B} \mathbb {E} [ \mathbf {a} _ {n} \mathbf {a} _ {n} ^ {\top} ] = B \Sigma .
+$$
+
+By Assumption E.1(A1), $\Sigma \succeq \zeta_{\min}^2\mathbf{I}$ , hence
+
+$$
+\mathbb {E} \left[ \mathbf {A} ^ {\top} \mathbf {A} \right] \succeq B \zeta_ {\min } ^ {2} \mathbf {I}.
+$$
+
+Step 3: Concentration via Matrix Bernstein. Define the centered matrix
+
+$$
+\mathbf {Z} _ {n} := \mathbf {a} _ {n} \mathbf {a} _ {n} ^ {\top} - \Sigma .
+$$
+
+Note $\mathbb{E}[\mathbf{Z}_n] = \mathbf{0}$ . Summing,
+
+$$
+\mathbf {A} ^ {\top} \mathbf {A} - \mathbb {E} [ \mathbf {A} ^ {\top} \mathbf {A} ] = \sum_ {n = 1} ^ {B} (\mathbf {a} _ {n} \mathbf {a} _ {n} ^ {\top} - \Sigma) = \sum_ {n = 1} ^ {B} \mathbf {Z} _ {n}.
+$$
+
+Each $\mathbf{Z}_n$ is bounded in operator norm since
+
+$$
+\left\| \mathbf {a} _ {n} \mathbf {a} _ {n} ^ {\top} \right\| _ {\mathrm {o p}} = \left\| \mathbf {a} _ {n} \right\| _ {2} ^ {2} \leq M ^ {2}, \quad \left\| \Sigma \right\| _ {\mathrm {o p}} \leq \left\| \Sigma \right\| _ {\mathrm {F}} (\text {finite}).
+$$
+
+Thus, for the constant $R := M^{2} + \left\| \boldsymbol{\Sigma} \right\|_{\mathrm{op}}$ ,
+
+$$
+\left\| \mathbf {Z} _ {n} \right\| _ {\mathrm {o p}} \leq \left\| \mathbf {a} _ {n} \mathbf {a} _ {n} ^ {\top} \right\| _ {\mathrm {o p}} + \left\| \boldsymbol {\Sigma} \right\| _ {\mathrm {o p}} \leq R.
+$$
+
+In addition,
+
+$$
+\left\| \mathbf {Z} _ {n} ^ {2} \right\| _ {\mathrm {o p}} \leq \left\| \mathbf {Z} _ {n} \right\| _ {\mathrm {o p}} ^ {2} \leq R ^ {2},
+$$
+
+$$
+\left\| \sum_ {n = 1} ^ {B} \mathbb {E} \left[ \mathbf {Z} _ {n} ^ {2} \right] \right\| _ {\mathrm {o p}} \leq B R ^ {2}
+$$
+
+Hence, by a standard matrix Bernstein inequality (self-adjoint version) from (Tropp, 2015), there exists a constant $\gamma > 0$ such that
+
+$$
+\mathbb {P} \left[ \| \mathbf {A} ^ {\top} \mathbf {A} - \mathbb {E} [ \mathbf {A} ^ {\top} \mathbf {A} ] \| _ {\mathrm {o p}} \geq \frac {1}{2} B \zeta_ {\min } ^ {2} \right] = \mathbb {P} \left[ \left\| \sum_ {n = 1} ^ {B} \mathbf {Z} _ {n} \right\| _ {\mathrm {o p}} \geq \frac {1}{2} B \zeta_ {\min } ^ {2} \right] \leq e ^ {- \gamma B}.
+$$
+
+In other words, with high probability, $\mathbf{A}^\top \mathbf{A}$ deviates from its expectation by at most half of $B \zeta_{\min}^2$ in spectral norm.
+
+Step 4: Weyl's inequality implies strict positivity. On this high-probability event,
+
+$$
+\lambda_ {\min } (\mathbf {A} ^ {\top} \mathbf {A}) \geq \lambda_ {\min } \left(\mathbb {E} [ \mathbf {A} ^ {\top} \mathbf {A} ]\right) - \left\| \mathbf {A} ^ {\top} \mathbf {A} - \mathbb {E} [ \mathbf {A} ^ {\top} \mathbf {A} ] \right\| _ {\mathrm {o p}} \geq B \zeta_ {\min } ^ {2} - \frac {1}{2} B \zeta_ {\min } ^ {2} = \frac {1}{2} B \zeta_ {\min } ^ {2}.
+$$
+
+Hence $\mathbf{A}^{\top}\mathbf{A}$ is strictly positive-definite, implying $\mathrm{rank}(\mathbf{A}) = |\mathcal{S}|$ . Consequently, $\mathbf{A}$ is full column rank with probability at least $1 - e^{-\gamma B}$ .
+
+Remark E.3 (Justification for Assumption E.1). In many natural data-generation processes, these assumptions hold:
+
+- Count Vectors from a Dictionary. Suppose each sample $\mathcal{X}^{(n)}$ is a tuple of $L$ elements drawn from a vocabulary $S$ . The row vector $\mathbf{s}^{(n)}$ may represent counts $(s_{\alpha}^{(n)}, s_{\beta}^{(n)}, \ldots)$ of how many times each element $\alpha, \beta, \ldots$ appears. If we focus on the subset $\mathcal{B}_{\mu}$ of samples that contain $\mu$ , then $s_{\mu}^{(n)} \geq 1$ , while the other $L - 1$ slots of the sequence are drawn from $S \setminus \{\mu\}$ according to some distribution.
+- Positive-Definite Covariance. When these $(L - 1)$ "remaining" elements are distributed in a non-degenerate way (e.g., at least some variability in how the other vocabulary items appear), the resulting count vectors $\mathbf{s}^{(n)}$ will have a covariance $\Sigma$ whose minimum eigenvalue is strictly positive. For instance, under a uniform choice of the $L - 1$ positions among the $|S| - 1$ possible elements, straightforward calculations show each coordinate has nonzero variance and $\lambda_{\min}(\Sigma) > 0$ .
+- Bounded Norm. Since $0 \leq s_{\nu}^{(n)} \leq L$ for each element $\nu \in S$ , the count vector $\mathbf{a}_n$ is trivially bounded by $\sqrt{|\mathcal{S}|} L$ in Euclidean norm. Thus we can take $M = \sqrt{|\mathcal{S}|} L$ , satisfying Assumption E.1(A2).
+- General Distributions. Even more general scenarios (e.g., non-uniform sampling, correlated draws) satisfy the same assumptions, provided negative correlations are not too extreme to force $\Sigma$ to have a zero eigenvalue. In practice, real-world data tends to have enough variability so that $\mathrm{Cov}(\mathbf{s}^{(n)})$ is well-conditioned, meeting the requirement $\lambda_{\min}(\Sigma) \geq \zeta_{\min}^2$ for some $\zeta_{\min} > 0$ .
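The count-vector scenario above can be illustrated with a quick Monte-Carlo estimate (vocabulary size, sequence length, and batch size are arbitrary choices, not values from the paper) of how often the stacked count matrix is full column rank:

```python
import numpy as np

rng = np.random.default_rng(0)
V, L, B, trials = 5, 8, 40, 200  # vocab |S|, sequence length, batch size, repeats

full_rank = 0
for _ in range(trials):
    # Each row: counts of a length-L sequence drawn uniformly from the vocabulary.
    seqs = rng.integers(0, V, size=(B, L))
    S = np.stack([np.bincount(row, minlength=V) for row in seqs])
    full_rank += int(np.linalg.matrix_rank(S) == V)

print(full_rank / trials)  # close to 1.0 once B is a few times |S|
```

Consistent with Theorem E.2, the failure probability decays rapidly in $B$ ; for $B \gg |\mathcal{S}|$ rank deficiency is essentially never observed.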
+
+# F. HyperFeatureAttention
+
+# F.1. Motivation
+
+The preceding discussions in Appendix B.1 on value functions that depend on single coordinates, both coordinates, and both coordinates with nonidentical agents reveal a fundamental pattern: as the number of features describing agents increases, the embedding dimension sufficient to fully capture pairwise interactions grows exponentially. As a quick recap of representations:
+
+- For a single coordinate (e.g., $x$ ), the embedding dimension is proportional to $|S_x| = N$ , the number of possible positions along the $x$ -axis.
+- When extending to two coordinates $(x$ and $y)$ , the domain becomes $\mathcal{S} = \mathcal{S}_x\times \mathcal{S}_y$ , resulting in an embedding dimension of $|\mathcal{S}| = N^2$ , reflecting all possible 2D positions.
+- Adding agent-specific policies (e.g., $S_{q} = \{\mathrm{R},\mathrm{U}\}$ ) introduces an additional multiplicative factor, so the domain expands to $S = S_{q}\times S_{x}\times S_{y}$ , with $|S| = 2N^2$ .
+- We can even add a new feature called "species", of a different nature, such as species $\in$ {H, P}, which would make the domain size even larger.
+
+Now let us look at the colliding agents environments from Appendix B.1 and Appendix B.1.3, with a slightly more complex example, to motivate the novel HyperFeatureAttention.
+
+Non-Identical Agents Revisited. Consider $L$ agents on a $S_{\mathbf{r}} = [N] \times [N]$ grid, each with initial coordinates $\mathbf{r}_i = \left[n_i^x \; n_i^y\right] \in [N]^2$ . The agents have species labels $\ell_i \in S_{\mathrm{spcs}} = \{\mathrm{H}, \mathrm{P}\}$ : an H gets +1 reward if it catches (collides with) a P, while a P gets -1 reward when it is caught. The agents also have fixed policies $q_i \in S_q = \{\mathrm{R}, \mathrm{U}\}$ , where an agent with policy R always moves right and an agent with policy U always moves up. From our earlier step-by-step results in Appendix B.1.3, the value functions for the agents can be written as
+
+$$
+V _ {i} = \sum_ {\substack {j \in [ L ] \\ j \neq i}} f \left(n _ {i} ^ {x}, n _ {i} ^ {y}, q _ {i}, \ell_ {i}, n _ {j} ^ {x}, n _ {j} ^ {y}, q _ {j}, \ell_ {j}\right) w _ {n _ {j} ^ {x}, n _ {j} ^ {y}, q _ {j}, \ell_ {j}}. \tag{35}
+$$
+
+With our earlier discussion (Appendix B), we know that a linear self-attention needs $\mathcal{O}(|\mathcal{S}_{\mathbf{r}}|^2|\mathcal{S}_{\mathrm{spcs}}|^2|\mathcal{S}_q|^2)$ parameters to represent such functions, which is exponential in the number of features.
+
+However, after carefully considering each case in our experiment, one can write the value functions as
+
+$$
+\begin{array}{l} V _ {i} = \sum_ {{j \in [ L ]: j \neq i}} \left\{\mathbb {I} \left\{\ell_ {i} = \mathrm {P}, \ell_ {j} = \mathrm {H} \right\} \mathbb {I} \left\{q _ {i} = \mathrm {R}, q _ {j} = \mathrm {U} \right\} \mathbb {I} \left\{n _ {i} ^ {x} - n _ {j} ^ {x} \approx n _ {i} ^ {y} - n _ {j} ^ {y} \right\} \right. \\ + \mathbb {I} \left\{\ell_ {i} = \mathrm {H}, \ell_ {j} = \mathrm {P} \right\} \mathbb {I} \left\{q _ {i} = \mathrm {R}, q _ {j} = \mathrm {U} \right\} \mathbb {I} \left\{n _ {i} ^ {x} - n _ {j} ^ {x} \approx n _ {i} ^ {y} - n _ {j} ^ {y} \right\} \\ + \mathbb {I} \left\{\ell_ {i} = \mathrm {P}, \ell_ {j} = \mathrm {H} \right\} \mathbb {I} \left\{q _ {i} = \mathrm {U}, q _ {j} = \mathrm {R} \right\} \mathbb {I} \left\{n _ {i} ^ {x} - n _ {j} ^ {x} \approx n _ {i} ^ {y} - n _ {j} ^ {y} \right\} \\ + \mathbb {I} \left\{\ell_ {i} = \mathrm {H}, \ell_ {j} = \mathrm {P} \right\} \mathbb {I} \left\{q _ {i} = \mathrm {U}, q _ {j} = \mathrm {R} \right\} \mathbb {I} \left\{n _ {i} ^ {x} - n _ {j} ^ {x} \approx n _ {i} ^ {y} - n _ {j} ^ {y} \right\} \\ + \mathbb {I} \left\{\ell_ {i} = \mathrm {P}, \ell_ {j} = \mathrm {H} \right\} \mathbb {I} \left\{q _ {i} = \mathrm {R}, q _ {j} = \mathrm {R} \right\} \mathbb {I} \left\{n _ {i} ^ {y} \approx n _ {j} ^ {y} \right\} \\ + \mathbb {I} \left\{\ell_ {i} = \mathrm {H}, \ell_ {j} = \mathrm {P} \right\} \mathbb {I} \left\{q _ {i} = \mathrm {R}, q _ {j} = \mathrm {R} \right\} \mathbb {I} \left\{n _ {i} ^ {y} \approx n _ {j} ^ {y} \right\} \\ + \mathbb {I} \left\{\ell_ {i} = \mathrm {P}, \ell_ {j} = \mathrm {H} \right\} \mathbb {I} \left\{q _ {i} = \mathrm {U}, q _ {j} = \mathrm {U} \right\} \mathbb {I} \left\{n _ {i} ^ {x} \approx n _ {j} ^ {x} \right\} \\ + \mathbb {I} \left\{\ell_ {i} = \mathrm {H}, \ell_ {j} = \mathrm {P} \right\} \mathbb {I} \left\{q _ {i} = \mathrm {U}, q _ {j} = \mathrm {U} \right\} \mathbb {I} \left\{n _ {i} ^ {x} \approx n _ {j} ^ {x} \right\} \Big \} \left(\mathbb {I} \left\{\ell_ {j} = 
\mathrm {P} \right\} - \mathbb {I} \left\{\ell_ {j} = \mathrm {H} \right\}\right). \\ \end{array}
+$$
+
+Defining
+
+$$
+\begin{array}{l} f _ {a _ {1}} ^ {h _ {1}} \left(\ell_ {i}, \ell_ {j}\right) = \mathbb {I} \left\{\ell_ {i} = \mathrm {P}, \ell_ {j} = \mathrm {H} \right\} + \mathbb {I} \left\{\ell_ {i} = \mathrm {H}, \ell_ {j} = \mathrm {P} \right\}, \\ f _ {a _ {2}} ^ {h _ {1}} \left(q _ {i}, q _ {j}\right) = \mathbb {I} \left\{q _ {i} = \mathrm {R}, q _ {j} = \mathrm {U} \right\} + \mathbb {I} \left\{q _ {i} = \mathrm {U}, q _ {j} = \mathrm {R} \right\}, \\ f _ {a _ {3}} ^ {h _ {1}} \left(\mathbf {r} _ {i}, \mathbf {r} _ {j}\right) = \mathbb {I} \left\{n _ {i} ^ {x} - n _ {j} ^ {x} \approx n _ {i} ^ {y} - n _ {j} ^ {y} \right\}, \\ f _ {a _ {1}} ^ {h _ {2}} \left(\ell_ {i}, \ell_ {j}\right) = \mathbb {I} \left\{\ell_ {i} = \mathrm {P}, \ell_ {j} = \mathrm {H} \right\} + \mathbb {I} \left\{\ell_ {i} = \mathrm {H}, \ell_ {j} = \mathrm {P} \right\}, \\ f _ {a _ {2}} ^ {h _ {2}} \left(q _ {i}, q _ {j}\right) = \mathbb {I} \left\{q _ {i} = \mathrm {R}, q _ {j} = \mathrm {R} \right\}, \\ f _ {a _ {3}} ^ {h _ {2}} \left(n _ {i} ^ {y}, n _ {j} ^ {y}\right) = \mathbb {I} \left\{n _ {i} ^ {y} \approx n _ {j} ^ {y} \right\}, \\ f _ {a _ {1}} ^ {h _ {3}} \left(\ell_ {i}, \ell_ {j}\right) = \mathbb {I} \left\{\ell_ {i} = \mathrm {P}, \ell_ {j} = \mathrm {H} \right\} + \mathbb {I} \left\{\ell_ {i} = \mathrm {H}, \ell_ {j} = \mathrm {P} \right\}, \\ f _ {a _ {2}} ^ {h _ {3}} \left(q _ {i}, q _ {j}\right) = \mathbb {I} \left\{q _ {i} = U, q _ {j} = U \right\}, \\ f _ {a _ {3}} ^ {h _ {3}} \left(n _ {i} ^ {x}, n _ {j} ^ {x}\right) = \mathbb {I} \left\{n _ {i} ^ {x} \approx n _ {j} ^ {x} \right\}, \\ w _ {a j} ^ {h _ {i}} \left(\ell_ {j}\right) = \mathbb {I} \left\{\ell_ {j} = \mathrm {P} \right\} - \mathbb {I} \left\{\ell_ {j} = \mathrm {H} \right\}, \\ \end{array}
+$$
+
+and $\mathcal{H} = \{h_1, h_2, h_3\}$ , $\mathcal{A} = \{a_1, a_2, a_3\}$ , the same equation can be organized into
+
+$$
+V _ {i} = \sum_ {h \in \mathcal {H}} \left[ \sum_ {j \in [ L ]} \left(\prod_ {a \in \mathcal {A}} f _ {a} ^ {h} \left(\phi_ {a, i} ^ {h}, \theta_ {a, j} ^ {h}\right)\right) \left(\prod_ {a \in \mathcal {A}} w _ {a} ^ {h} \left(\gamma_ {a, j} ^ {h}\right)\right) \right], \tag {36}
+$$
+
+where $\phi_{a,i}^{h},\theta_{a,j}^{h},\gamma_{a,j}^{h}$ are the corresponding features picked by the corresponding functions.10 Owing to our discussion from Appendix B, we can easily conclude that the functions $f_{a_1}^h$ can be represented exactly with an attention score matrix $\mathbf{C}_{a_1}^h\in \mathbb{R}^{|\mathcal{S}_{\mathrm{spcs}}|\times |\mathcal{S}_{\mathrm{spcs}}|}$ ; similarly, for $f_{a_2}^h$ we need $\mathbf{C}_{a_2}^h\in \mathbb{R}^{|\mathcal{S}_q|\times |\mathcal{S}_q|}$ and for $f_{a_3}^h$ we need $\mathbf{C}_{a_3}^h\in \mathbb{R}^{|\mathcal{S}_{\mathbf{r}}|\times |\mathcal{S}_{\mathbf{r}}|}$ . As a result, the total number of parameters required is $\mathcal{O}(|\mathcal{S}_{\mathrm{spcs}}|^2 +|\mathcal{S}_q|^2 +|\mathcal{S}_{\mathbf{r}}|^2)$ , which is linear in the number of features, much better than linear self-attention, which was exponential. We can calculate the exact number of parameters instead of the big- $\mathcal{O}$ versions. For $N = 360$ , approximately $10^{6}$ parameters are sufficient for HyperFeatureAttention, while self-attention requires approximately $10^{7}$ parameters.
+
+Implications for Attention Mechanisms. While the linear self-attention mechanism discussed in Theorem 3.1 can represent pairwise interactions exactly, its parameter requirements also scale with the embedding dimension $d$ . This limits its applicability in cases where the number of features (or their cardinality) is very large. For instance, in scenarios with additional discrete features such as temporal information, behavioral categories, or hierarchical roles, the exponential growth in $|S|$ quickly becomes a bottleneck.
+
+The exponential growth observed here highlights the need for alternative attention mechanisms that can efficiently handle high-dimensional domains without explicitly embedding all feature combinations. This sets the stage for the introduction of
+
+a novel attention module HyperFeatureAttention, designed specifically to address exponential embedding growth while preserving the ability to model complex interactions across diverse features.
+
+Also, one may worry that in practice we use layers of multi-head attention, which might express Eq. 3 without an exponential embedding dimension. However, we briefly show in Remark F.6 and the justification that follows it that even a two-layer multihead linear self-attention cannot express (3) (which is easily expressed by a single-layer, single-head HyperFeatureAttention).11
+
+# F.2. HyperFeatureAttention Definition
+
+The non-identical agents revisited discussion was just a toy example to set the stage for our novel HyperFeatureAttention model. Let us first generalize the discussion. Let $\Phi$ be the set of all features and $M = |\Phi|$ the total number of features in our setting that are distinct in nature. For a feature $\phi$ , denote its domain by $S_{\phi}$ , its domain size by $|S_{\phi}|$ , and its allocated embedding size by $d_{\phi}$ . From the previous discussion, to represent any function of the form of Eq. (36), the total embedding dimension for linear self-attention grows exponentially as $d = \prod_{\phi \in \Phi} |S_{\phi}|$ , with a corresponding parameter count of $\mathcal{O}\left(\prod_{\phi \in \Phi} |S_{\phi}|^2\right)$ . However, using a function of the form
+
+$$
+\mathbf {H F A} ^ {\mathrm {l i n}} (\mathbf {X}) = \sum_ {h \in \mathcal {H}} \left[ \left(\prod_ {a \in \mathcal {A}} ^ {\odot} \mathbf {X C} ^ {(h, a)} \mathbf {X} ^ {\top}\right) \left(\prod_ {a \in \mathcal {A}} ^ {\odot} \mathbf {X W} ^ {V, (h, a)}\right) \right],
+$$
+
+we only need an embedding dimension of $d = \sum_{\phi \in \Phi} |\mathcal{S}_{\phi}|$ , and the corresponding number of parameters grows linearly in the number of features, $\mathcal{O}(M)$ .
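A minimal numpy sketch of $\mathbf{HFA}^{\mathrm{lin}}$ as written above (function and variable names are ours, chosen for illustration; shapes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

def hfa_linear(X, Cs, Ws):
    """Linear HyperFeatureAttention: for each head h, Hadamard-multiply the
    score maps X C^{(h,a)} X^T over a, likewise the value streams X W^{V,(h,a)},
    then sum the per-head outputs."""
    out = 0.0
    for C_list, W_list in zip(Cs, Ws):
        scores = np.ones((X.shape[0], X.shape[0]))
        values = np.ones((X.shape[0], W_list[0].shape[1]))
        for C, W in zip(C_list, W_list):
            scores *= X @ C @ X.T   # elementwise (Hadamard) product over a
            values *= X @ W
        out = out + scores @ values
    return out

T, d, dv = 6, 4, 3
H, A = 2, 2
X = rng.standard_normal((T, d))
Cs = [[rng.standard_normal((d, d)) for _ in range(A)] for _ in range(H)]
Ws = [[rng.standard_normal((d, dv)) for _ in range(A)] for _ in range(H)]
Y = hfa_linear(X, Cs, Ws)
print(Y.shape)  # (6, 3)
```

With a single head and a single factor ($H = A = 1$), the sketch reduces to ordinary linear self-attention $(\mathbf{X}\mathbf{C}\mathbf{X}^\top)(\mathbf{X}\mathbf{W}^V)$ , which is a useful consistency check.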
+
+Remark F.1 (Note on Approximate Embeddings). Although the following discussion is somewhat orthogonal to our main focus on how entities (or features) interact, we include it briefly for completeness. In principle, self-attention may require an embedding dimension exponential in the number of features, but in practice one often leverages the Johnson-Lindenstrauss lemma: in high-dimensional spaces, random projections yield vectors that are approximately orthonormal with an embedding dimension only linear in the number of features. Concretely, each feature $\phi$ typically contributes $\mathcal{O}(\log |S_{\phi}|)$ dimensions rather than $\mathcal{O}(|S_{\phi}|)$ , so the total embedding dimension becomes $d = \mathcal{O}(|\Phi|)$ .12 However, in the HyperFeatureAttention module, this same Johnson-Lindenstrauss argument implies a dimension requirement of $\mathcal{O}(\log |\Phi|)$ (rather than $\mathcal{O}(|\Phi|)$ ). Hence, even though both methods rely on approximate embeddings, the exponential gap remains: standard self-attention requires dimension linear in the number of features, whereas our module reduces it to logarithmic.
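The Johnson-Lindenstrauss point can be illustrated with a quick check (the dimensions are arbitrary): random embeddings of many feature values are approximately orthonormal in a much smaller space.

```python
import numpy as np

rng = np.random.default_rng(1)
n_feat, dim = 1000, 256  # many feature values, much smaller embedding dim
E = rng.standard_normal((n_feat, dim)) / np.sqrt(dim)  # random embeddings

G = E @ E.T
diag_err = np.abs(np.diag(G) - 1.0).max()        # norms close to 1
off_err = np.abs(G - np.diag(np.diag(G))).max()  # inner products close to 0
print(diag_err, off_err)  # both small despite n_feat >> dim
```

The deviations shrink as the embedding dimension grows, which is what lets approximate embeddings stand in for exactly orthonormal ones.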
+
+After all these motivations, we can now formally define Multihead HyperFeatureAttention. First, we restate the definition of multihead self-attention from (Vaswani et al., 2023) for comparison.
+
+Definition F.2 (Multihead Self-Attention). Let $\mathbf{X} \in \mathbb{R}^{T \times d}$ denote the input sequence of $T$ tokens, where $d$ is the embedding dimension. Multihead self-attention computes a sequence of contextualized embeddings as follows:
+
+$$
+\mathbf {S A} ^ {h} = \operatorname {S o f t m a x} \left(\mathbf {Q} ^ {h} \left(\mathbf {K} ^ {h}\right) ^ {\top}\right) \mathbf {V} ^ {h}, \quad \forall h \in \{1, \dots , H \}, \tag {37}
+$$
+
+$$
+\mathbf {M H A} (\mathbf {X}) = \operatorname {C o n c a t} \left(\mathbf {S A} ^ {1}, \dots , \mathbf {S A} ^ {H}\right) \mathbf {W} ^ {O}, \tag {38}
+$$
+
+# where:
+
+- $\mathbf{Q}^h = \mathbf{X}\mathbf{W}^{Q^h}$ , $\mathbf{K}^h = \mathbf{X}\mathbf{W}^{K^h}$ , and $\mathbf{V}^h = \mathbf{X}\mathbf{W}^{V^h}$ are the query, key, and value matrices for the $h$ -th head, respectively.
+- $d_h = \frac{d}{H}$ is the dimension of each attention head.
+- $\mathbf{W}^{Q^h}, \mathbf{W}^{K^h}, \mathbf{W}^{V^h} \in \mathbb{R}^{d \times d_h}$ and $\mathbf{W}^O \in \mathbb{R}^{d \times d}$ are learnable weight matrices.
+- Softmax(·) is applied along the last dimension.
+
+Definition F.3 (Multihead HyperFeatureAttention). We define two versions of Multihead HyperFeatureAttention, "value-product" and "non-value-product" respectively. Let $\mathbf{X} \in \mathbb{R}^{T \times d}$ denote the input sequence of $T$ tokens, where $d$ is the embedding dimension.
+
+$$
+\mathbf {H F A} ^ {h} = \left\{ \begin{array}{l} \operatorname {S o f t m a x} \left(\prod_ {a \in [ A ]} ^ {\odot} \mathbf {Q} ^ {(h, a)} \left(\mathbf {K} ^ {(h, a)}\right) ^ {\top}\right) \prod_ {a \in [ A ]} ^ {\odot} \mathbf {V} ^ {(h, a)}, \quad \text {if value product}, \\ \operatorname {S o f t m a x} \left(\prod_ {a \in [ A ]} ^ {\odot} \mathbf {Q} ^ {(h, a)} \left(\mathbf {K} ^ {(h, a)}\right) ^ {\top}\right) \mathbf {V} ^ {(h)}, \quad \text {otherwise}, \end{array} \right. \tag {39}
+$$
+
+$$
+\mathbf {M H F A} (\mathbf {X}) = \operatorname {C o n c a t} \left(\mathbf {H F A} ^ {1}, \dots , \mathbf {H F A} ^ {H}\right) \mathbf {W} ^ {O}, \tag {40}
+$$
+
+where:
+
+- $\mathbf{Q}^{(h,a)} = \mathbf{X}\mathbf{W}^{Q^{(h,a)}}$ , $\mathbf{K}^{(h,a)} = \mathbf{X}\mathbf{W}^{K^{(h,a)}}$ , and $\mathbf{V}^{(h,a)} = \mathbf{X}\mathbf{W}^{V^{(h,a)}}$ are the query, key, and value matrices for the $h$ -th head $a$ -th attention score, respectively ( $\mathbf{V}^{(h)} = \mathbf{X}\mathbf{W}^{V^{(h)}}$ for no value product case).
+- For each head we specify $d_h$ such that $\sum_{h} d_h = d$ and the attention size for that head is $d_a^h = d_h / A$ .
+- $\mathbf{W}^{Q(h,a)}, \mathbf{W}^{K(h,a)} \in \mathbb{R}^{d \times d_a^h}, \mathbf{W}^{V(h,a)}, \mathbf{W}^{V(h)} \in \mathbb{R}^{d \times d_h}$ and $\mathbf{W}^O \in \mathbb{R}^{d \times d}$ are learnable weight matrices.
+- $\prod^{\odot}$ represents Hadamard product of matrices and $[A] = \{1, \dots, A\}$
+- Softmax(·) is applied along the last dimension.
+
+Remark F.4. In the above definition, the no-value-product HFA has the same number of parameters as SA (assuming the same $d$ ), while the value-product version has slightly more parameters, depending on the order $A$ .
+
+Remark F.5 (Rotary Positional Embedding). One can easily incorporate rotary positional embedding into this module by applying the corresponding rotation matrices $\mathbf{R}$ to the queries and keys, as $\mathbf{RQ}^{(h,a)}$ and $\mathbf{RK}^{(h,a)}$ , just as explained in (Su et al., 2023).
+
+Lastly, one may worry that in practice we use layers of multi-head attention, which might express Eq. 3 without an exponential embedding dimension. However, we show in the following remark that even a two-layer multihead linear self-attention cannot express (3) (which is easily expressed by a single-layer, single-head HyperFeatureAttention). In this comparison, we deliberately bias the setup toward standard self-attention by pitting a two-layer, multi-head SA model against a single-layer, single-head HFA.
+
+Remark F.6 (Limitations of Two-Layer Multihead Linear Self-Attention). Two-layer multihead linear self-attention cannot represent factorized cross-feature interaction functions of the form
+
+$$
+\sum_ {j} f ^ {(1)} \left(a _ {i}, a _ {j}\right) f ^ {(2)} \left(b _ {i}, b _ {j}\right), \quad \text {or} \quad \sum_ {j} f ^ {(1)} \left(a _ {i}, b _ {j}\right) f ^ {(2)} \left(b _ {i}, a _ {j}\right), \quad \text {or similar variants}, \tag {41}
+$$
+
+where the token embedding is defined as $\mathbf{x}_i = \mathrm{concat}(a_i, b_i)$ .
+
+Brief justification of Remark F.6. Consider a two-layer multihead linear self-attention model. Each layer consists of $H$ attention heads, where the output of each head is computed as:
+
+$$
+\mathbf {S A} _ {i} ^ {(h)} = \sum_ {j = 1} ^ {L} \left(\mathbf {x} _ {i} ^ {\top} \mathbf {C} ^ {(h)} \mathbf {x} _ {j}\right) \mathbf {w} ^ {(h)} \left(\mathbf {x} _ {j}\right), \quad \text {for } h = 1, \dots , H,
+$$
+
+with $\mathbf{C}^{(h)}\in \mathbb{R}^{d\times d}$ as the attention matrix for head $h$ , and $\mathbf{w}^{(h)}(\cdot)$ as a linear map. The outputs of the heads are concatenated and optionally projected using a weight matrix $\mathbf{W}^O$ . For a single-layer multihead attention, the overall output for token $i$ is a linear combination of bilinear terms of the form:
+
+$$
+\mathbf {M H A} _ {i} = \sum_ {h = 1} ^ {H} \sum_ {j = 1} ^ {L} \left(\mathbf {x} _ {i} ^ {\top} \mathbf {C} ^ {(h)} \mathbf {x} _ {j}\right) \mathbf {w} ^ {(h)} \left(\mathbf {x} _ {j}\right).
+$$
+
+In a two-layer model, the first layer computes:
+
+$$
+\mathbf {h} _ {i} = \operatorname {L i n e a r C o m b i n e} \left\{\sum_ {j = 1} ^ {L} \left(\mathbf {x} _ {i} ^ {\top} \mathbf {C} ^ {(h)} \mathbf {x} _ {j}\right) \mathbf {w} ^ {(h)} (\mathbf {x} _ {j}), h = 1, \dots , H _ {1} \right\},
+$$
+
+and passes $\{\mathbf{h}_i\}_{i\in [T]}$ as input to the second layer. The second layer then computes:
+
+$$
+\mathbf {S A} _ {i} ^ {(p)} = \sum_ {k = 1} ^ {L} \left(\mathbf {h} _ {i} ^ {\top} \mathbf {C} ^ {(p)} \mathbf {h} _ {k}\right) \mathbf {v} ^ {(p)} \left(\mathbf {h} _ {k}\right), \quad \text {for } p = 1, \dots , H _ {2}.
+$$
+
+By definition, $\mathbf{h}_i$ is a linear combination of sums of bilinear terms in $\mathbf{x}_i$ and $\mathbf{x}_j$ . Therefore, $\mathbf{h}_i$ remains a sum of bilinear expressions across $\{\mathbf{x}_i, \mathbf{x}_j\}$ .
+
+The second layer operates on $\mathbf{h}_i$ and computes terms of the form $\mathbf{h}_i^\top \mathbf{C}^{(p)}\mathbf{h}_k$ . Since $\mathbf{h}_i$ itself is bilinear, this results in expressions that are multi-bilinear in $\mathbf{x}_i, \mathbf{x}_j, \mathbf{x}_k$ . However, it does not introduce multiplicative interactions between independent subsets of features (e.g., $a_i, a_j$ vs. $b_i, b_j$ ).
+
+Factorization is absent: The target function $\sum_{j}f^{(1)}(a_i,a_j)f^{(2)}(b_i,b_j)$ requires the output to be a product of two independent terms: one depending solely on $(a_{i},a_{j})$ and the other on $(b_{i},b_{j})$ . Multihead attention combines bilinear terms additively, not multiplicatively, so it cannot achieve this factorization.
+
+Suppose $f^{(1)}$ and $f^{(2)}$ are chosen such that $f^{(1)}(a_i, a_j)$ is orthogonal to $f^{(2)}(b_i, b_j)$ . In this case, any additive combination of bilinear terms cannot approximate the product $f^{(1)}(a_i, a_j)f^{(2)}(b_i, b_j)$ , regardless of how many layers or heads are used.
+
+Even with two layers of multihead linear self-attention, the mechanism remains additive and cannot express functions requiring multiplicative factorization of independent feature interactions. This limitation highlights the inability of two-layer multihead self-attention to represent cross-feature factorized interactions such as $\sum_{j}f^{(1)}(a_{i},a_{j})f^{(2)}(b_{i},b_{j})$ or its variants.
+
+# G. HyperAttention
+
+# G.1. Defining HyperAttention
+
+Generalizing the Definition 6.1 to order $n$ ,
+
+$$
+A _ {i j _ {1} j _ {2} \dots j _ {n - 1}} = \sum_ {\alpha \zeta_ {1} \zeta_ {2} \dots \zeta_ {n - 1}} ^ {d} C _ {\alpha \zeta_ {1} \zeta_ {2} \dots \zeta_ {n - 1}} X _ {i \alpha} X _ {j _ {1} \zeta_ {1}} X _ {j _ {2} \zeta_ {2}} \dots X _ {j _ {n - 1} \zeta_ {n - 1}}
+$$
+
+$$
+V _ {j _ {1} j _ {2} \dots j _ {n - 1} \tau} = \sum_ {\xi_ {1} \xi_ {2} \dots \xi_ {n - 1}} ^ {d} X _ {j _ {1} \xi_ {1}} X _ {j _ {2} \xi_ {2}} \dots X _ {j _ {n - 1} \xi_ {n - 1}} W _ {\xi_ {1} \xi_ {2} \dots \xi_ {n - 1} \tau} ^ {V}
+$$
+
+$$
+\mathrm {H A} _ {i \tau} ^ {\operatorname {l i n}} (\mathbf {X}) = \sum_ {j _ {1} \leq j _ {2} \leq \dots j _ {n - 1}} ^ {L} A _ {i j _ {1} j _ {2} \dots j _ {n - 1}} V _ {j _ {1} j _ {2} \dots j _ {n - 1} \tau},
+$$
+
+where $\mathbf{C},\mathbf{W}^V$ are order- $n$ tensors in $\mathbb{R}^{d\times d\times \cdots \times d}$ ( $n$ factors of $d$ ) and we denote the $(i,j_{1},j_{2},\ldots)$ -th entry of a tensor $\mathbf{T}$ as $T_{ij_1j_2\dots}$ . Similar to how self-attention implements a low-rank approximation for efficiency, i.e.,
+
+$$
+\mathbf {C} = \mathbf {W} ^ {Q} \left(\mathbf {W} ^ {K}\right) ^ {\top} \quad \text {a n d} \quad C _ {\alpha \beta} = \sum_ {\sigma} ^ {R} W _ {\alpha \sigma} ^ {Q} W _ {\beta \sigma} ^ {K}
+$$
+
+where $\mathbf{W}^Q$ and $\mathbf{W}^K\in \mathbb{R}^{d\times R}$ are chosen with low rank $R < d$ , $d$ being the maximum rank a $d\times d$ matrix can have, we can form a low-rank approximation of HyperAttention as
+
+$$
+C _ {\alpha \zeta_ {1} \zeta_ {2} \dots \zeta_ {n - 1}} = \sum_ {\sigma} ^ {R} W _ {\alpha \sigma} ^ {Q} W _ {\zeta_ {1} \sigma} ^ {K ^ {1}} W _ {\zeta_ {2} \sigma} ^ {K ^ {2}} \dots W _ {\zeta_ {n - 1} \sigma} ^ {K ^ {n - 1}}
+$$
+
+$$
+W _ {\xi_ {1} \xi_ {2} \dots \xi_ {n - 1} \tau} ^ {V} = \sum_ {\sigma} ^ {R} W _ {\xi_ {1} \sigma} ^ {V ^ {1}} W _ {\xi_ {2} \sigma} ^ {V ^ {2}} \dots W _ {\xi_ {n - 1} \sigma} ^ {V ^ {n - 1}} W _ {\tau \sigma} ^ {V ^ {n}},
+$$
+
+where each $\mathbf{W}^Q$ , $\mathbf{W}^{K^i}$ , $\mathbf{W}^{V^i} \in \mathbb{R}^{d \times R}$ . Similarly, we choose $R$ less than the maximum rank an order- $n$ tensor can have. Finally, adding the non-linearity, the full definition becomes the following.
+
+Definition G.1 (HyperAttention with parameter sharing). Let $\mathbf{X} \in \mathbb{R}^{T \times d}$ be the input sequence of $T$ tokens, where $d$ is the embedding dimension. For the $n$ -th order HyperAttention, define:
+
+$$
+\mathbf {Q} = \mathbf {X} \mathbf {W} ^ {Q} \in \mathbb {R} ^ {T \times R},
+$$
+
+$$
+\mathbf {K} = \mathbf {X} \mathbf {W} ^ {K} \in \mathbb {R} ^ {T \times R},
+$$
+
+$$
+\mathbf {V} ^ {1} = \mathbf {X} \mathbf {W} ^ {V ^ {1}} \in \mathbb {R} ^ {T \times R},
+$$
+
+$$
+\mathbf {V} ^ {2} = \mathbf {W} ^ {V ^ {2}} \in \mathbb {R} ^ {d \times R}.
+$$
+
+We also define a permutation mask $\mathbf{M} \in \mathbb{R}^{T \times \cdots \times T}$ ( $n$ factors of $T$ ) such that
+
+$$
+M _ {i, j _ {1}, \dots , j _ {n - 1}} = - \infty (1 - \mathbb {I} [ j _ {1} \geq j _ {2} \geq \dots \geq j _ {n - 1} ]),
+$$
+
+where $\mathbb{I}$ is the indicator function, equal to 1 if the condition is satisfied and 0 otherwise. Then, for each token index $i$ and output dimension $\tau$ ,
+
+$$
+A _ {i, j _ {1}, \dots , j _ {n - 1}} = \operatorname {S o f t m a x} _ {\left(j _ {1}, \dots , j _ {n - 1}\right)} \left(M _ {i, j _ {1}, \dots , j _ {n - 1}} + \sum_ {\sigma = 1} ^ {R} Q _ {i, \sigma} K _ {j _ {1}, \sigma} K _ {j _ {2}, \sigma} \dots K _ {j _ {n - 1}, \sigma}\right),
+$$
+
+$$
+V _ {j _ {1}, \ldots , j _ {n - 1}, \tau} = \sum_ {\sigma = 1} ^ {R} V _ {j _ {1}, \sigma} ^ {1} V _ {j _ {2}, \sigma} ^ {1} \ldots V _ {j _ {n - 1}, \sigma} ^ {1} V _ {\tau , \sigma} ^ {2}.
+$$
+
+The HyperAttention output is computed as:
+
+$$
+\mathrm {H A} _ {i, \tau} (\mathbf {X}) = \sum_ {j _ {1}, \dots , j _ {n - 1} = 1} ^ {T} A _ {i, j _ {1}, \dots , j _ {n - 1}} V _ {j _ {1}, \dots , j _ {n - 1}, \tau},
+$$
+
+which is equivalent to
+
+$$
+\mathrm {H A} _ {i, \tau} (\mathbf {X}) = \sum_ {j _ {1} \geq \dots \geq j _ {n - 1}} ^ {T} A _ {i, j _ {1}, \dots , j _ {n - 1}} V _ {j _ {1}, \dots , j _ {n - 1}, \tau}.
+$$
+
+For multihead HyperAttention, we compute multiple heads indexed by $h \in \{1, \dots, H\}$ , each with independent learnable weights $\mathbf{W}^{Q^h}, \mathbf{W}^{K^h}, \mathbf{W}^{V^{h,1}}$ , and $\mathbf{W}^{V^{h,2}}$ . The multihead HyperAttention output is given by:
+
+$$
+\mathbf {MultiHeadHA} (\mathbf {X}) = \operatorname {Concat} \left(\mathbf {HA} ^ {1} (\mathbf {X}), \dots , \mathbf {HA} ^ {H} (\mathbf {X})\right) \mathbf {W} ^ {O},
+$$
+
+where $\mathbf{HA}^h (\mathbf{X})\in \mathbb{R}^{T\times d_h}$ , $d_{h} = d / H$ , and $\mathbf{W}^O\in \mathbb{R}^{d\times d}$ is a learnable projection matrix.
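As a concrete companion to Definition G.1, here is a minimal numpy sketch of a single head for $n = 3$ (toy sizes; the additive $-\infty$ mask enforces $j_1 \geq j_2$; the function name and weight shapes are illustrative, not the paper's implementation):

```python
import numpy as np

def hyper_attention_n3(X, Wq, Wk, Wv1, Wv2):
    """Single-head HyperAttention of Definition G.1 for n = 3 (shared parameters)."""
    T, d = X.shape
    Q, K, V1 = X @ Wq, X @ Wk, X @ Wv1                # (T, R) each
    # Scores: S[i, j1, j2] = sum_sigma Q[i, s] K[j1, s] K[j2, s]
    S = np.einsum('is,js,ks->ijk', Q, K, K)
    # Additive ordering mask: only pairs with j1 >= j2 participate.
    j1, j2 = np.meshgrid(np.arange(T), np.arange(T), indexing='ij')
    S = np.where((j1 >= j2)[None, :, :], S, -np.inf)
    # Joint softmax over the (j1, j2) pair for each query position i.
    S = S - S.max(axis=(1, 2), keepdims=True)
    A = np.exp(S)
    A /= A.sum(axis=(1, 2), keepdims=True)
    # Rank-R value tensor: V[j1, j2, tau] = sum_sigma V1[j1, s] V1[j2, s] Wv2[tau, s]
    Vt = np.einsum('js,ks,ts->jkt', V1, V1, Wv2)
    return np.einsum('ijk,jkt->it', A, Vt)            # (T, d)

rng = np.random.default_rng(0)
T, d, R = 5, 4, 3
X = rng.standard_normal((T, d))
Wq, Wk, Wv1, Wv2 = rng.standard_normal((4, d, R))
out = hyper_attention_n3(X, Wq, Wk, Wv1, Wv2)
print(out.shape)  # (5, 4)
```

The multihead variant simply concatenates $H$ such heads and applies $\mathbf{W}^O$.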
+
+We also define a very similar variant in which the parameters corresponding to different keys and values are not shared.
+
+Remark G.2. To increase the implicit bias one could use $M_{i,j_1,\dots,j_{n-1}} = -\infty \left(1 - \mathbb{I}[j_1 > j_2 > \dots > j_{n-1}]\right)$ instead of the mask stated in the definition. In this version the block would be more specialized to learning $n$ -way interactions instead of lower order interactions.
+
+Definition G.3 (HyperAttention without parameter sharing). Let $\mathbf{X} \in \mathbb{R}^{T \times d}$ be the input sequence of $T$ tokens, where $d$ is the embedding dimension. For the $n$ -th order HyperAttention, define:
+
+$$
+\mathbf {Q} = \mathbf {X} \mathbf {W} ^ {Q} \in \mathbb {R} ^ {T \times R},
+$$
+
+$$
+\mathbf {K} ^ {m} = \mathbf {X} \mathbf {W} ^ {K ^ {m}} \in \mathbb {R} ^ {T \times R}, \quad \forall m \in \{1, \dots , n - 1 \},
+$$
+
+$$
+\mathbf {V} ^ {m} = \mathbf {X} \mathbf {W} ^ {V ^ {m}} \in \mathbb {R} ^ {T \times R}, \quad \forall m \in \{1, \dots , n - 1 \},
+$$
+
+$$
+\mathbf {V} ^ {n} = \mathbf {W} ^ {V ^ {n}} \in \mathbb {R} ^ {d \times R}.
+$$
+
+Then, for each token index $i$ and output dimension $\tau$ ,
+
+$$
+A _ {i, j _ {1}, \dots , j _ {n - 1}} = \operatorname {Softmax} _ {\left(j _ {1}, \dots , j _ {n - 1}\right)} \left(M _ {i, j _ {1}, \dots , j _ {n - 1}} + \sum_ {\sigma = 1} ^ {R} Q _ {i, \sigma} K _ {j _ {1}, \sigma} ^ {1} K _ {j _ {2}, \sigma} ^ {2} \dots K _ {j _ {n - 1}, \sigma} ^ {n - 1}\right),
+$$
+
+$$
+V _ {j _ {1}, \dots , j _ {n - 1}, \tau} = \sum_ {\sigma = 1} ^ {R} V _ {j _ {1}, \sigma} ^ {1} V _ {j _ {2}, \sigma} ^ {2} \dots V _ {j _ {n - 1}, \sigma} ^ {n - 1} V _ {\tau , \sigma} ^ {n}.
+$$
+
+The rest of the definition is the same as in Definition G.1.
+
+# G.2. Representation Abilities of HyperAttention
+
+Theorem G.4. A single layer of order $n$ linear HyperAttention (without parameter sharing), with embedding dimension $d = |\mathcal{S}|$ , can represent any function of the form
+
+$$
+\mathbf {F} _ {i} = \sum_ {j _ {1}, j _ {2}, \dots , j _ {n - 1} \in [ L ]} f \left(\mathcal {X} (i), \mathcal {X} (j _ {1}), \mathcal {X} (j _ {2}), \dots , \mathcal {X} (j _ {n - 1})\right) w _ {s (j _ {1}), s (j _ {2}), s (j _ {n - 1})}
+$$
+
+for all elements in the sequence, i.e., $i \in [L]$ . The parameter-sharing variant of HyperAttention can express the same class of functions provided that (i) the weight tensor $w$ is fully symmetric, i.e. invariant under any permutation of its indices, and (ii) the kernel $f$ is symmetric in its last $n - 1$ arguments, $\mathcal{X}(j_1), \ldots, \mathcal{X}(j_{n-1})$ .
+
+Proof. Since $d = |\mathcal{S}|$ , the embeddings can be chosen orthonormal. Consequently, the same argument as in the proof of Theorem 3.1 in Appendix B applies.
+
+# G.2.1. SKIP-TRIGRAM BUG
+
+Let us now illustrate how higher-order dependencies may arise with a more practical example on skip-trigrams.
+
+Next-Token Prediction A common strategy for training language models is next-token prediction: given a sequence of tokens $(\mathcal{X}(0),\mathcal{X}(1),\ldots ,\mathcal{X}(L - 1))$ , the model learns to predict the next token. In our setting, the label for training is $\mathcal{X}(L - 1)$ . For simplicity, we focus on the final row of the softmax self-attention output (corresponding to $\mathcal{X}(L - 1)$ ). Concretely, the model produces a probability distribution $l$ over the vocabulary, defined as:
+
+$$
+l = \frac {\sum_ {j \in [ L ]} \exp \left(f \left(\mathcal {X} (i) , \mathcal {X} (j)\right)\right) w _ {\mathcal {X} (j)}}{\sum_ {j \in [ L ]} \exp \left(f \left(\mathcal {X} (i) , \mathcal {X} (j)\right)\right)}.
+$$
+
+During inference, the next token $\hat{y}$ is then sampled from this distribution.
+
+As shown by (Elhage et al., 2021), self-attention can learn skip-trigrams. For example, in a sentence fragment such as "... keep... in [ ]," the model might predict the next token "mind," completing the phrase as "... keep... in mind." In the context of our work, we can interpret this phenomenon in two steps: (1) self-attention identifies that "in" is influenced by "keep" (i.e., $f$ (in, keep) is large), and (2) it leverages that influence to generate the token "mind."
+
+However, as shown in (Elhage et al., 2021), the same mechanism that raises the probabilities of the correct skip-trigrams “...keep...in mind” and “...keep...at bay” also inadvertently increases the probabilities of the erroneous skip-trigrams “...keep...in bay” and “...keep...at mind.” This phenomenon, known as the “skip-trigram bug,” arises from how
+
+attention influences these completions. From our interaction perspective, increasing the probability of "... keep... in mind" can be done in two ways (1) Increasing $f(\text{in}, \text{keep})$ , which unfortunately also boosts the probability of "... keep... in bay." (2) Modifying $w_{\text{keep}}$ to bias the model more strongly towards "mind," which reduces the probability of "... keep... at bay." Either approach makes it challenging for the model to consistently prefer the correct completions without also amplifying incorrect ones, thereby explaining the skip-trigram bug.
+
+Hyper-Attention for Avoiding Skip-Trigram Bugs. The crux of the skip-trigram bug is that a single self-attention head (or pairwise interaction) tries to capture the entire phrase “...keep...in mind” by boosting $f(\text{in}, \text{keep})$ alone. This inadvertently increases the probability of other completions like “...keep...in bay” whenever $w_{\text{keep}}$ also points toward “bay.” However, many real-world contexts contain additional tokens that disambiguate the correct completion. For instance, the sentence “...keep the deadline in [ ]” strongly suggests “mind” over “bay”.
+
+In our framework of hyper-attention, one can introduce a ternary interaction term
+
+$$
+f (\text {in}, \text {keep}, \text {deadline})
+$$
+
+that focuses specifically on the triplet $\{\text{"in", "keep", "deadline"}\}$ , allowing the model to favor "mind" without simultaneously boosting "bay." Concretely, if we let
+
+$$
+f \left(\mathcal {X} (i), \mathcal {X} (j), \mathcal {X} (k)\right) \quad \text {and} \quad w _ {\mathcal {X} (j), \mathcal {X} (k)}
+$$
+
+govern three-way effects (rather than just pairwise $f(\text{in}, \text{keep})$ ), the probability of "mind" can be increased via a higher-order interaction $f(\text{in}, \text{keep}, \text{deadline})$ specifically tailored to that context. In doing so, we need not raise all completions of "... keep ... in [ ]," and thus avoid inadvertently increasing "... keep ... in bay."
+
+Hence, by modeling triplet or higher-order interactions, hyper-attention more flexibly captures context-specific phrases like "keep the deadline in mind," while suppressing incorrect ones like "keep the deadline in bay," thereby mitigating the skip-trigram bug highlighted above.
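The disambiguation argument can be made concrete with a deliberately tiny numeric toy (all numbers hypothetical): with a pairwise model, the contribution routed through "keep" uses one shared weight vector, so the favored completion after "...in [ ]" and "...at [ ]" is necessarily the same, while a ternary weight $w_{\mathcal{X}(j), \mathcal{X}(k)}$ indexed by the pair of context tokens can favor different completions:

```python
import numpy as np

cands = ["mind", "bay"]

# Pairwise model: the signal routed through "keep" uses one shared vector
# w_keep, so "...keep...in [ ]" and "...keep...at [ ]" get identical logits.
w_keep = np.array([2.0, 1.0])            # hypothetical numbers, biased to "mind"
score_after_in = w_keep
score_after_at = w_keep                  # same vector -> same favored token (the bug)

# Ternary model: w is indexed by the *pair* of context tokens, so the
# ("keep", "in") and ("keep", "at") interactions carry independent weights.
w_pair = {("keep", "in"): np.array([2.0, 0.0]),   # favor "mind"
          ("keep", "at"): np.array([0.0, 2.0])}   # favor "bay"

best_in = cands[int(np.argmax(w_pair[("keep", "in")]))]
best_at = cands[int(np.argmax(w_pair[("keep", "at")]))]
print(best_in, best_at)  # mind bay
```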
+
+# G.3. Efficient Strategy to Mitigate $\mathcal{O}(L^n)$ Computation Complexity
+
+For simplicity, we focus on third-order HyperAttention, but the same arguments generalize to any order. Let us first look at the linear version, recalling the definition of third-order linear HyperAttention (Definition 6.1) in the tensor-decomposition format of Definition G.1:
+
+$$
+A _ {i, j _ {1}, j _ {2}} = \sum_ {\sigma = 1} ^ {R} Q _ {i, \sigma} K _ {j _ {1}, \sigma} ^ {1} K _ {j _ {2}, \sigma} ^ {2},
+$$
+
+$$
+V _ {j _ {1}, j _ {2}, \tau} = \sum_ {\sigma = 1} ^ {R} V _ {j _ {1}, \sigma} ^ {1} V _ {j _ {2}, \sigma} ^ {2} V _ {\tau , \sigma} ^ {3},
+$$
+
+$$
+\mathrm {H A} _ {i \tau} ^ {\operatorname {l i n}} (\mathbf {X}) = \sum_ {j _ {1} j _ {2}} ^ {L} A _ {i j _ {1} j _ {2}} V _ {j _ {1} j _ {2} \tau}.
+$$
+
+Here, each equation has $\mathcal{O}(L^3 R)$ computational complexity, so the total calculation has $\mathcal{O}(L^3 R)$ complexity. We can write the same expression as
+
+$$
+\mathrm {HA} _ {i \tau} ^ {\mathrm {lin}} (\mathbf {X}) = \sum_ {j _ {1}, j _ {2}} ^ {L} \left\{\sum_ {\sigma = 1} ^ {R} Q _ {i, \sigma} K _ {j _ {1}, \sigma} ^ {1} K _ {j _ {2}, \sigma} ^ {2} \right\} \left\{\sum_ {\tilde {\sigma} = 1} ^ {R} V _ {j _ {1}, \tilde {\sigma}} ^ {1} V _ {j _ {2}, \tilde {\sigma}} ^ {2} V _ {\tau , \tilde {\sigma}} ^ {3} \right\}. \tag {42}
+$$
+
+Changing the order of summations (take the summations over $j$ s first),
+
+$$
+\mathrm {HA} _ {i \tau} ^ {\mathrm {lin}} (\mathbf {X}) = \sum_ {\sigma = 1} ^ {R} \sum_ {\tilde {\sigma} = 1} ^ {R} Q _ {i, \sigma} \left\{\sum_ {j _ {1}} ^ {L} K _ {j _ {1}, \sigma} ^ {1} V _ {j _ {1}, \tilde {\sigma}} ^ {1} \right\} \left\{\sum_ {j _ {2}} ^ {L} K _ {j _ {2}, \sigma} ^ {2} V _ {j _ {2}, \tilde {\sigma}} ^ {2} \right\} V _ {\tau , \tilde {\sigma}} ^ {3}, \tag {43}
+$$
+
+its computational complexity can be reduced to $\mathcal{O}(LR^2)$ . Generally $R \ll L$ because $R$ simply corresponds to the attention head dimension. Thus, the computational complexity is reduced significantly.
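The summation-reordering step from Eq. (42) to Eq. (43) can be checked numerically; the sketch below (toy sizes, random weights, our own variable names) computes the same linear HyperAttention output both by materializing the full $L \times L \times L$ attention tensor and by contracting over $j_1, j_2$ first:

```python
import numpy as np

rng = np.random.default_rng(1)
L, R, d = 6, 3, 4
Q = rng.standard_normal((L, R))
K1, K2 = rng.standard_normal((2, L, R))
V1, V2 = rng.standard_normal((2, L, R))
V3 = rng.standard_normal((d, R))

# Naive evaluation of Eq. (42): materialize the L x L x L attention tensor.
A = np.einsum('is,js,ks->ijk', Q, K1, K2)        # O(L^3 R) time, O(L^3) memory
Vt = np.einsum('js,ks,ts->jkt', V1, V2, V3)
ha_naive = np.einsum('ijk,jkt->it', A, Vt)

# Reordered evaluation of Eq. (43): contract over j1 and j2 first.
P1 = K1.T @ V1                                   # (R, R): sum over j1 of K^1 V^1
P2 = K2.T @ V2                                   # (R, R): sum over j2 of K^2 V^2
ha_fast = np.einsum('is,st,st,ut->iu', Q, P1, P2, V3)  # O(L R^2) in sequence length
print(np.max(np.abs(ha_naive - ha_fast)))        # tiny float error
```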
+
+As for the softmax or general nonlinear version, we use techniques similar to those in (Katharopoulos et al., 2020; Choromanski et al., 2022). For the general nonlinear case, Eq. (42) can be written as
+
+$$
+\mathrm {HA} _ {i \tau} ^ {\mathrm {lin}} (\mathbf {X}) = \frac {\sum_ {j _ {1} j _ {2}} ^ {L} \mathrm {sim} \left(Q _ {i , :} , K _ {j _ {1} , :} ^ {1} , K _ {j _ {2} , :} ^ {2}\right) \left\{\sum_ {\sigma = 1} ^ {R} V _ {j _ {1} , \sigma} ^ {1} V _ {j _ {2} , \sigma} ^ {2} V _ {\tau , \sigma} ^ {3} \right\}}{\sum_ {j _ {1} j _ {2}} ^ {L} \mathrm {sim} \left(Q _ {i , :} , K _ {j _ {1} , :} ^ {1} , K _ {j _ {2} , :} ^ {2}\right)},
+$$
+
+where $\mathrm{sim}(\cdot)$ is a classical non-linearity (for softmax, it is exponentiation of the attention scores). This can be approximated with a function of the form
+
+$$
+\widehat {\mathrm {H A}} _ {i \tau} ^ {\mathrm {l i n}} (\mathbf {X}) = \frac {\sum_ {j _ {1} j _ {2}} ^ {L} \left(\sum_ {\sigma = 1} ^ {R _ {2}} \phi (Q _ {i :}) _ {\sigma} \phi (K _ {j _ {1}:} ^ {1}) _ {\sigma} \phi (K _ {j _ {2}:} ^ {2}) _ {\sigma}\right) \left\{\sum_ {\sigma = 1} ^ {R} V _ {j _ {1} , \sigma} ^ {1} V _ {j _ {2} , \sigma} ^ {2} V _ {\tau , \sigma} ^ {3} \right\}}{\sum_ {j _ {1} j _ {2}} ^ {L} \left(\sum_ {\sigma = 1} ^ {R _ {2}} \phi (Q _ {i :}) _ {\sigma} \phi (K _ {j _ {1}:} ^ {1}) _ {\sigma} \phi (K _ {j _ {2}:} ^ {2}) _ {\sigma}\right)},
+$$
+
+where $\phi : \mathbb{R}^R \to \mathbb{R}^{R_2}$ is a nonlinear transformation chosen according to $\mathrm{sim}$ and $R_2 \in \mathbb{Z}^+$ is $\mathcal{O}(R)$ . (Alman & Song, 2023) show that if the entries of the input matrices $\mathbf{Q}, \mathbf{K}$ are less than $o\left(\sqrt[3]{\log L}\right)$ , then for $\epsilon = 1 / \mathrm{poly}(L)$
+
+$$
+\max _ {i, \tau} \left| \mathrm {H A} _ {i \tau} ^ {\operatorname {l i n}} - \widehat {\mathrm {H A}} _ {i \tau} ^ {\operatorname {l i n}} \right| \leq \epsilon
+$$
+
+Consequently, the same summation-order-change trick (43) applies to nonlinear HyperAttention as well.
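As an assumed stand-in for the softmax kernel, the polynomial similarity $\mathrm{sim}(q, k^1, k^2) = \left(\sum_\sigma q_\sigma k^1_\sigma k^2_\sigma\right)^2$ admits an *exact* feature map $\phi(x) = \mathrm{vec}(x x^\top)$ with $R_2 = R^2$ , which makes the factorized, linear-in- $L$ evaluation easy to verify (toy sizes, random weights):

```python
import numpy as np

rng = np.random.default_rng(2)
L, R, d = 5, 3, 4
Q = rng.standard_normal((L, R))
K1, K2 = rng.standard_normal((2, L, R))
V1, V2 = rng.standard_normal((2, L, R))
V3 = rng.standard_normal((d, R))

def phi(x):
    # Exact feature map for sim = (<q, k1, k2>)^2: phi(x) = vec(x x^T), R_2 = R^2.
    return np.einsum('ls,lt->lst', x, x).reshape(len(x), -1)

# Direct (cubic-cost) evaluation of the normalized output.
sim = np.einsum('is,js,ks->ijk', Q, K1, K2) ** 2          # nonnegative scores
Vt = np.einsum('js,ks,ts->jkt', V1, V2, V3)
ha_direct = (np.einsum('ijk,jkt->it', sim, Vt)
             / sim.sum(axis=(1, 2))[:, None])

# Factorized evaluation: linear in L once the features are formed.
pQ, pK1, pK2 = phi(Q), phi(K1), phi(K2)                   # (L, R^2) each
P1 = pK1.T @ V1                                           # (R^2, R)
P2 = pK2.T @ V2
num = np.einsum('ir,rs,rs,ts->it', pQ, P1, P2, V3)
den = (pQ * (pK1.sum(0) * pK2.sum(0))[None, :]).sum(1, keepdims=True)
ha_feature = num / den
print(np.max(np.abs(ha_direct - ha_feature)))
```

For the true softmax kernel, $\phi$ is only approximate (random features), but the contraction order is identical.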
+
+# G.4. HyperAttention Learning
+
+For simplicity, we prove the convergence of third-order linear HyperAttention. However, after this proof, the extension to any order is straightforward. Recall Definition 6.1, which is copied here.
+
+Definition G.5 (Third order Linear HyperAttention).
+
+$$
+A _ {i j _ {1} j _ {2}} = \sum_ {\alpha \zeta_ {1} \zeta_ {2}} ^ {d} C _ {\alpha \zeta_ {1} \zeta_ {2}} X _ {i \alpha} X _ {j _ {1} \zeta_ {1}} X _ {j _ {2} \zeta_ {2}}
+$$
+
+$$
+V _ {j _ {1} j _ {2} \tau} = \sum_ {\xi_ {1} \xi_ {2}} ^ {d} X _ {j _ {1} \xi_ {1}} X _ {j _ {2} \xi_ {2}} W _ {\xi_ {1} \xi_ {2} \tau} ^ {V}
+$$
+
+$$
+\mathrm {H A} _ {i \tau} ^ {\operatorname {l i n}} (\mathbf {X}) = \sum_ {j _ {1} \leq j _ {2}} ^ {L} A _ {i j _ {1} j _ {2}} V _ {j _ {1} j _ {2} \tau},
+$$
+
+where we denote $(i,j,k)$ -th entry of a tensor $\mathbf{T}$ as $T_{ijk}$ and $\mathbf{C} \in \mathbb{R}^{d \times d \times d}$ , $\mathbf{W}^V \in \mathbb{R}^{d \times d \times d_2}$ .
+
+Since our main aim is understanding the attention scores, the core mechanism defining self-attention, we choose $d_{2} = 1$ to simplify the convergence analysis. Thus, $\mathbf{W}^{V}$ is a two-dimensional tensor, which we denote as $\mathbf{w}$ in this subsection.
+
+Assumption G.6 (Weak Realizability). The task is realizable, i.e., there exist $\mathbf{C}^*$ and $\mathbf{w}^*$ that perfectly fit the training data.
+
+Theorem G.7 (Convergence of HyperAttention to Zero Training Error). Let the dimensions $d = |\mathcal{S}|$ and $d_2 = 1$ . Also, let the initial parameters satisfy $\mathbf{C}(0) = \mathbf{0}$ and $w_{\alpha \beta}(0) \geq b > 0$ , $\forall \alpha, \beta$ . Then, under Assumptions G.10 and G.6, gradient flow on
+
+$$
+L ^ {\mathrm {M S E}} (\mathbf {C}, \mathbf {W} ^ {V}) = \frac {1}{B} \sum_ {n = 1} ^ {B} \left\| \mathbf {H A} _ {\mathbf {C}, \mathbf {W} ^ {V}} ^ {\mathrm {l i n}} (\mathbf {X} ^ {(n)}) - \mathbf {Y} ^ {(n)} \right\| ^ {2},
+$$
+
+converges to zero training error.
+
+Before proving the theorem we will derive the gradients and state some lemmas that are going to be useful in the proof.
+
+Gradients with Respect to C and w.
+
+$$
+\frac {\partial L ^ {\mathrm {MSE}} (\mathbf {C} , \mathbf {w})}{\partial C _ {\mu \nu \sigma}} = \frac {2}{B} \sum_ {n = 1} ^ {B} \left(\mathbf {HA} ^ {\mathrm {lin}} \left(\mathbf {X} ^ {(n)}\right) - \mathbf {y} ^ {(n)}\right) ^ {\top} \frac {\partial \mathbf {HA} ^ {\mathrm {lin}} \left(\mathbf {X} ^ {(n)}\right)}{\partial C _ {\mu \nu \sigma}}
+$$
+
+$$
+\frac {\partial \mathrm {HA} _ {i} ^ {\mathrm {lin}} (\mathbf {X} ^ {(n)})}{\partial C _ {\mu \nu \sigma}} = \sum_ {\gamma \theta} \sum_ {k} \sum_ {j \leq k} X _ {i \mu} ^ {(n)} X _ {j \nu} ^ {(n)} X _ {j \gamma} ^ {(n)} X _ {k \sigma} ^ {(n)} X _ {k \theta} ^ {(n)} w _ {\gamma \theta}
+$$
+
+$$
+\frac {\partial L ^ {\mathrm {MSE}} (\mathbf {C} , \mathbf {w})}{\partial C _ {\mu \nu \sigma}} = \frac {2}{B} \sum_ {n} \sum_ {\gamma \theta} w _ {\gamma \theta} \left(\sum_ {k} \sum_ {j \leq k} X _ {j \nu} ^ {(n)} X _ {j \gamma} ^ {(n)} X _ {k \sigma} ^ {(n)} X _ {k \theta} ^ {(n)}\right) \left(\mathbf {X} ^ {(n)}\right) _ {\mu ,:} ^ {\top} \mathbf {D} ^ {(n)}
+$$
+
+Similarly we can find the gradient with respect to $\mathbf{w}$ .
+
+$$
+\frac {\partial L ^ {\mathrm {MSE}} (\mathbf {C} , \mathbf {w})}{\partial w _ {\gamma \theta}} = \frac {2}{B} \sum_ {n = 1} ^ {B} \left(\mathbf {HA} ^ {\mathrm {lin}} \left(\mathbf {X} ^ {(n)}\right) - \mathbf {y} ^ {(n)}\right) ^ {\top} \frac {\partial \mathbf {HA} ^ {\mathrm {lin}} \left(\mathbf {X} ^ {(n)}\right)}{\partial w _ {\gamma \theta}}
+$$
+
+$$
+\frac {\partial \mathrm {HA} _ {i} ^ {\mathrm {lin}} \left(\mathbf {X} ^ {(n)}\right)}{\partial w _ {\gamma \theta}} = \sum_ {\mu \nu \sigma} \sum_ {k} \sum_ {j \leq k} X _ {i \mu} ^ {(n)} X _ {j \nu} ^ {(n)} X _ {k \sigma} ^ {(n)} C _ {\mu \nu \sigma} X _ {j \gamma} ^ {(n)} X _ {k \theta} ^ {(n)}
+$$
+
+$$
+\frac {\partial L ^ {\mathrm {MSE}} (\mathbf {C} , \mathbf {w})}{\partial w _ {\gamma \theta}} = \frac {2}{B} \sum_ {n} \sum_ {\mu \nu \sigma} C _ {\mu \nu \sigma} \left(\sum_ {k} \sum_ {j \leq k} X _ {j \nu} ^ {(n)} X _ {j \gamma} ^ {(n)} X _ {k \sigma} ^ {(n)} X _ {k \theta} ^ {(n)}\right) \left(\mathbf {X} ^ {(n)}\right) _ {\mu ,:} ^ {\top} \mathbf {D} ^ {(n)}
+$$
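The closed-form $\mathbf{C}$ -gradient above can be validated against finite differences on a toy instance (assumed sizes; one-hot inputs and $d_2 = 1$ as in the rest of the section):

```python
import numpy as np

rng = np.random.default_rng(5)
d, L, B = 2, 3, 2
X = np.eye(d)[rng.integers(0, d, size=(B, L))]   # one-hot inputs, (B, L, d)
y = rng.standard_normal((B, L))
C = rng.standard_normal((d, d, d))
w = rng.standard_normal((d, d))
pair = np.triu(np.ones((L, L)))                  # pair[j, k] = 1 iff j <= k

def ha(C_, w_):
    A = np.einsum('azb,nia,njz,nkb->nijk', C_, X, X, X)
    V = np.einsum('njx,nky,xy->njk', X, X, w_)
    return np.einsum('nijk,njk,jk->ni', A, V, pair)

def loss(C_, w_):
    return np.sum((ha(C_, w_) - y) ** 2) / B

D = ha(C, w) - y                                 # residuals D^{(n)}
# Structure factor: S[n, nu, gamma, sigma, theta] = sum_k sum_{j<=k} X_jnu X_jgamma X_ksigma X_ktheta
S = np.einsum('jk,njv,njg,nks,nkt->nvgst', pair, X, X, X, X)
# Closed-form gradient from the derivation above.
gC_closed = (2 / B) * np.einsum('nvgst,gt,nim,ni->mvs', S, w, X, D)

eps = 1e-6                                       # central finite-difference reference
gC_num = np.zeros_like(C)
for idx in np.ndindex(*C.shape):
    Cp, Cm = C.copy(), C.copy()
    Cp[idx] += eps; Cm[idx] -= eps
    gC_num[idx] = (loss(Cp, w) - loss(Cm, w)) / (2 * eps)
print(np.max(np.abs(gC_closed - gC_num)))        # small
```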
+
+In this subsection, we will change our perspective and carry out the proofs in the one-hot encoding basis, similarly to what we did in Section C. That is, we define
+
+$$
+X _ {i \mu} ^ {(n) \mathrm {o n e - h o t}} = \sum_ {k} X _ {i k} ^ {(n)} B _ {\mu k},
+$$
+
+$$
+C _ {\mu \nu \sigma} ^ {\mathrm {o n e - h o t}} = \sum_ {i j k} B _ {\mu i} B _ {\nu j} B _ {\sigma k} C _ {i j k},
+$$
+
+$$
+w _ {\gamma \theta} ^ {\mathrm {o n e - h o t}} = \sum_ {i j} B _ {\gamma i} B _ {\theta j} w _ {i j},
+$$
+
+and use the one-hot encoded versions. However, again, we abuse notation and omit the superscript in the rest of this subsection, e.g., we write $C$ but we mean the one-hot version $C^{\mathrm{one - hot}}$ . Lastly, in this section, we denote by $\mathbf{e}_{\mu} \in \mathbb{R}^{|\mathcal{S}|}$ the unique one-hot encoded vector for each $\mu \in \mathcal{S}$ , i.e., the basis vector.
+
+Now we state some lemmas which are analogous to Lemmas C.1 and C.2.
+
+Lemma G.8. In the embedding basis, $\sum_{k}\sum_{j\leq k}X_{j\nu}^{(n)}X_{j\gamma}^{(n)}X_{k\sigma}^{(n)}X_{k\theta}^{(n)}$ is diagonal in the sense that it can be written as
+
+$$
+\sum_ {k} \sum_ {j \leq k} X _ {j \nu} ^ {(n)} X _ {j \gamma} ^ {(n)} X _ {k \sigma} ^ {(n)} X _ {k \theta} ^ {(n)} = \Gamma_ {\nu \sigma} ^ {(n)} \delta_ {\nu \gamma} \delta_ {\sigma \theta},
+$$
+
+where
+
+$$
+\Gamma_ {\nu \sigma} ^ {(n)} = \sum_ {k} \sum_ {j \leq k} \delta_ {\mathcal {X} ^ {(n)} (j), \nu} \delta_ {\mathcal {X} ^ {(n)} (k), \sigma},
+$$
+
+and $\delta$ is the Kronecker delta function, that is,
+
+$$
+\delta_ {\nu \gamma} = \left\{ \begin{array}{l l} 1, & \text {if } \nu = \gamma \\ 0, & \text {if } \nu \neq \gamma \end{array} \right.
+$$
+
+Proof. Since we are in the embedding basis,
+
+$$
+\begin{array}{l} \sum_ {k} \sum_ {j \leq k} X _ {j \nu} ^ {(n)} X _ {j \gamma} ^ {(n)} X _ {k \sigma} ^ {(n)} X _ {k \theta} ^ {(n)} = \sum_ {k} \sum_ {j \leq k} \left[ \mathbf {e} _ {\mathcal {X} ^ {(n)} (k)} \right] _ {\sigma} \left[ \mathbf {e} _ {\mathcal {X} ^ {(n)} (k)} \right] _ {\theta} \left[ \mathbf {e} _ {\mathcal {X} ^ {(n)} (j)} \right] _ {\nu} \left[ \mathbf {e} _ {\mathcal {X} ^ {(n)} (j)} \right] _ {\gamma} \\ = \sum_ {k} \sum_ {j \leq k} \delta_ {\mathcal {X} ^ {(n)} (j), \nu} \delta_ {\mathcal {X} ^ {(n)} (j), \gamma} \delta_ {\mathcal {X} ^ {(n)} (k), \sigma} \delta_ {\mathcal {X} ^ {(n)} (k), \theta} \\ \end{array}
+$$
+
+$$
+= \left(\sum_ {k} \sum_ {j \leq k} \delta_ {\mathcal {X} ^ {(n)} (j), \nu} \delta_ {\mathcal {X} ^ {(n)} (k), \sigma}\right) \delta_ {\nu \gamma} \delta_ {\sigma \theta}
+$$
+
+the last equality follows from the identity $\delta_{ij}\delta_{ik} = \delta_{ij}\delta_{jk}$ .
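Lemma G.8 is easy to check numerically for random one-hot sequences; the sketch below (toy sizes) compares the four-index sum against $\Gamma^{(n)}_{\nu\sigma}\delta_{\nu\gamma}\delta_{\sigma\theta}$ directly:

```python
import numpy as np

rng = np.random.default_rng(3)
S, L = 3, 6                          # vocabulary size and sequence length
seq = rng.integers(0, S, size=L)     # token indices X^{(n)}(j)
X = np.eye(S)[seq]                   # one-hot rows, shape (L, S)
lower = np.tril(np.ones((L, L)))     # lower[k, j] = 1 iff j <= k

# Left-hand side: sum_k sum_{j<=k} X_{j nu} X_{j gamma} X_{k sigma} X_{k theta}
lhs = np.einsum('kj,jn,jg,ks,kt->ngst', lower, X, X, X, X)

# Right-hand side: Gamma_{nu sigma} delta_{nu gamma} delta_{sigma theta}
Gamma = np.einsum('kj,jn,ks->ns', lower, X, X)
I = np.eye(S)
rhs = np.einsum('ns,ng,st->ngst', Gamma, I, I)
print(np.max(np.abs(lhs - rhs)))  # 0.0
```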
+
+Lemma G.9. If we choose initial parameters as $\mathbf{C}(0) = \mathbf{0}$ and $w_{\alpha \beta}(0) \geq b > 0$ , then $w_{\alpha \beta}(t) \geq b > 0$ , $\forall \alpha, \beta$ and $\forall t \geq 0$ .
+
+Proof. Firstly, we will show that $w_{\alpha \beta}(t)^{2} \geq w_{\alpha \beta}(0)^{2}$ , $\forall t$ and $\forall \alpha, \beta$ ; then the statement in the lemma follows similarly to the proof of Lemma C.2. Using the previous gradient derivations and Lemma G.8,
+
+$$
+\begin{array}{l} \frac {d C _ {\mu \nu \sigma}}{d t} = - \eta \frac {\partial L ^ {\mathrm {MSE}} (\mathbf {C} , \mathbf {w})}{\partial C _ {\mu \nu \sigma}} = - \eta \frac {2}{B} \sum_ {n} \sum_ {\gamma \theta} w _ {\gamma \theta} \left(\sum_ {k} \sum_ {j \leq k} X _ {j \nu} ^ {(n)} X _ {j \gamma} ^ {(n)} X _ {k \sigma} ^ {(n)} X _ {k \theta} ^ {(n)}\right) \left(\mathbf {X} ^ {(n)}\right) _ {\mu ,:} ^ {\top} \mathbf {D} ^ {(n)}, \\ = - \eta \frac {2}{B} \sum_ {n} \sum_ {\gamma \theta} w _ {\gamma \theta} \Gamma_ {\nu \sigma} ^ {(n)} \delta_ {\nu \gamma} \delta_ {\sigma \theta} \left(\mathbf {X} ^ {(n)}\right) _ {\mu ,:} ^ {\top} \mathbf {D} ^ {(n)}, \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \frac {d w _ {\gamma \theta}}{d t} = - \eta \frac {\partial L ^ {\mathrm {MSE}} (\mathbf {C} , \mathbf {w})}{\partial w _ {\gamma \theta}} = - \eta \frac {2}{B} \sum_ {n} \sum_ {\mu \nu \sigma} C _ {\mu \nu \sigma} \left(\sum_ {k} \sum_ {j \leq k} X _ {j \nu} ^ {(n)} X _ {j \gamma} ^ {(n)} X _ {k \sigma} ^ {(n)} X _ {k \theta} ^ {(n)}\right) \left(\mathbf {X} ^ {(n)}\right) _ {\mu ,:} ^ {\top} \mathbf {D} ^ {(n)}, \\ = - \eta \frac {2}{B} \sum_ {n} \sum_ {\mu \nu \sigma} C _ {\mu \nu \sigma} \Gamma_ {\nu \sigma} ^ {(n)} \delta_ {\nu \gamma} \delta_ {\sigma \theta} \left(\mathbf {X} ^ {(n)}\right) _ {\mu ,:} ^ {\top} \mathbf {D} ^ {(n)}. \\ \end{array}
+$$
+
+Let $\pmb{\Lambda}^a$ and $\pmb{\Lambda}^b$ be matrices that are diagonal in the embedding basis $\mathbf{B}$ . However, we again abuse notation: we do not write the one-hot superscript in $\pmb{\Lambda}^{a,b,\text{one-hot}} = \mathbf{B}\pmb{\Lambda}^{a,b}\mathbf{B}^\top$ and denote it simply as $\pmb{\Lambda}^{a,b}$ in the rest of the proof. We can now write
+
+$$
+\begin{array}{l} \sum_ {\mu \nu \sigma} C _ {\mu \nu \sigma} \frac {d C _ {\mu \nu \sigma}}{d t} \Lambda_ {\nu \nu} ^ {a} \Lambda_ {\sigma \sigma} ^ {b} = - \eta \frac {2}{B} \sum_ {n} \sum_ {\mu \nu \sigma} C _ {\mu \nu \sigma} \Lambda_ {\nu \nu} ^ {a} \Lambda_ {\sigma \sigma} ^ {b} \sum_ {\gamma \theta} w _ {\gamma \theta} \Gamma_ {\nu \sigma} ^ {(n)} \delta_ {\nu \gamma} \delta_ {\sigma \theta} \left(\mathbf {X} ^ {(n)}\right) _ {\mu ,:} ^ {\top} \mathbf {D} ^ {(n)}, \\ = - \eta \frac {2}{B} \sum_ {n} \sum_ {\mu \nu \sigma} C _ {\mu \nu \sigma} w _ {\nu \sigma} \Lambda_ {\nu \nu} ^ {a} \Lambda_ {\sigma \sigma} ^ {b} \Gamma_ {\nu \sigma} ^ {(n)} \left(\mathbf {X} ^ {(n)}\right) _ {\mu ,:} ^ {\top} \mathbf {D} ^ {(n)}, \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \sum_ {\gamma \theta} w _ {\gamma \theta} \frac {d w _ {\gamma \theta}}{d t} \Lambda_ {\gamma \gamma} ^ {a} \Lambda_ {\theta \theta} ^ {b} = - \eta \frac {2}{B} \sum_ {n} \sum_ {\gamma \theta} \Lambda_ {\gamma \gamma} ^ {a} \Lambda_ {\theta \theta} ^ {b} w _ {\gamma \theta} \sum_ {\mu \nu \sigma} C _ {\mu \nu \sigma} \Gamma_ {\nu \sigma} ^ {(n)} \delta_ {\nu \gamma} \delta_ {\sigma \theta} \left(\mathbf {X} ^ {(n)}\right) _ {\mu ,:} ^ {\top} \mathbf {D} ^ {(n)}, \\ = - \eta \frac {2}{B} \sum_ {n} \sum_ {\mu \nu \sigma} C _ {\mu \nu \sigma} w _ {\nu \sigma} \Lambda_ {\nu \nu} ^ {a} \Lambda_ {\sigma \sigma} ^ {b} \Gamma_ {\nu \sigma} ^ {(n)} \left(\mathbf {X} ^ {(n)}\right) _ {\mu ,:} ^ {\top} \mathbf {D} ^ {(n)}. \\ \end{array}
+$$
+
+Consequently we have
+
+$$
+\begin{array}{l} \sum_ {\gamma \theta} w _ {\gamma \theta} \frac {d w _ {\gamma \theta}}{d t} \Lambda_ {\gamma \gamma} ^ {a} \Lambda_ {\theta \theta} ^ {b} = \sum_ {\mu \nu \sigma} C _ {\mu \nu \sigma} \frac {d C _ {\mu \nu \sigma}}{d t} \Lambda_ {\nu \nu} ^ {a} \Lambda_ {\sigma \sigma} ^ {b}, \\ \frac {d}{d t} \sum_ {\gamma \theta} w _ {\gamma \theta} ^ {2} \Lambda_ {\gamma \gamma} ^ {a} \Lambda_ {\theta \theta} ^ {b} = \frac {d}{d t} \sum_ {\mu \nu \sigma} C _ {\mu \nu \sigma} ^ {2} \Lambda_ {\nu \nu} ^ {a} \Lambda_ {\sigma \sigma} ^ {b}, \\ \sum_ {\gamma \theta} w _ {\gamma \theta} ^ {2} (t) \Lambda_ {\gamma \gamma} ^ {a} \Lambda_ {\theta \theta} ^ {b} = \sum_ {\gamma \theta} w _ {\gamma \theta} ^ {2} (0) \Lambda_ {\gamma \gamma} ^ {a} \Lambda_ {\theta \theta} ^ {b} + \sum_ {\mu \nu \sigma} C _ {\mu \nu \sigma} ^ {2} (t) \Lambda_ {\nu \nu} ^ {a} \Lambda_ {\sigma \sigma} ^ {b} - \sum_ {\mu \nu \sigma} C _ {\mu \nu \sigma} ^ {2} (0) \Lambda_ {\nu \nu} ^ {a} \Lambda_ {\sigma \sigma} ^ {b}. \\ \end{array}
+$$
+
+Letting $\pmb{\Lambda}^a = \mathrm{diag}(\mathbf{e}_\alpha)$ and $\pmb{\Lambda}^b = \mathrm{diag}(\mathbf{e}_\beta)$ ,
+
+$$
+w _ {\alpha \beta} ^ {2} (t) = w _ {\alpha \beta} ^ {2} (0) + \| \mathbf {C} _ {:, \alpha , \beta} (t) \| ^ {2} - \| \mathbf {C} _ {:, \alpha , \beta} (0) \| ^ {2} = w _ {\alpha \beta} ^ {2} (0) + \| \mathbf {C} _ {:, \alpha , \beta} (t) \| ^ {2}
+$$
+
+where the last equality follows because $\mathbf{C}(0) = \mathbf{0}$ . As a result, we arrive at
+
+$$
+w _ {\alpha \beta} ^ {2} (t) \geq w _ {\alpha \beta} ^ {2} (0) \geq b ^ {2} \tag {44}
+$$
+
+Since $\frac{dw_{\alpha\beta}}{dt}$ is finite $\forall t$ , $w_{\alpha \beta}(t)$ is continuous. As a result, if $w_{\alpha \beta}(0) \geq b > 0$ , then $w_{\alpha \beta}(t) \geq b$ , $\forall t$ , which can be proven by contradiction. Assume $\exists t^{*} > 0$ such that $w_{\alpha \beta}(t^{*}) < b$ . By Equation 44, $w_{\alpha \beta}(t^{*}) \leq -b < 0$ . By the intermediate value theorem, $\exists \tau \in (0, t^{*})$ such that $w_{\alpha \beta}(\tau) = 0$ , so $w_{\alpha \beta}^2(\tau) = 0 < b^2 \leq w_{\alpha \beta}^2(0)$ , which contradicts (44).
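The conservation argument above with $\pmb{\Lambda}^a = \pmb{\Lambda}^b = \mathbf{I}$ boils down to $\langle \mathbf{C}, \nabla_{\mathbf{C}} L\rangle = \langle \mathbf{w}, \nabla_{\mathbf{w}} L\rangle$ , which follows because $\mathrm{HA}^{\mathrm{lin}}$ is separately homogeneous of degree one in $\mathbf{C}$ and in $\mathbf{w}$ . A finite-difference check on a toy instance (assumed sizes, $B = 1$ , $d_2 = 1$ ):

```python
import numpy as np

rng = np.random.default_rng(4)
d, L = 2, 3
X = np.eye(d)[rng.integers(0, d, size=L)]     # one-hot inputs, (L, d)
y = rng.standard_normal(L)                    # targets (d_2 = 1)
C = rng.standard_normal((d, d, d))
w = rng.standard_normal((d, d))
pair = np.triu(np.ones((L, L)))               # pair[j1, j2] = 1 iff j1 <= j2

def loss(C_, w_):
    A = np.einsum('azb,ia,jz,kb->ijk', C_, X, X, X)     # attention scores
    V = np.einsum('jx,ky,xy->jk', X, X, w_)             # value scalars (d_2 = 1)
    ha = np.einsum('ijk,jk,jk->i', A, V, pair)          # sum over j1 <= j2
    return np.mean((ha - y) ** 2)

def num_grad(f, P, eps=1e-6):                 # central finite differences
    g = np.zeros_like(P)
    for idx in np.ndindex(*P.shape):
        Pp, Pm = P.copy(), P.copy()
        Pp[idx] += eps; Pm[idx] -= eps
        g[idx] = (f(Pp) - f(Pm)) / (2 * eps)
    return g

gC = num_grad(lambda C_: loss(C_, w), C)
gw = num_grad(lambda w_: loss(C, w_), w)
lhs, rhs = np.sum(C * gC), np.sum(w * gw)
print(lhs, rhs)  # the two inner products agree
```

Under gradient flow, this equality implies $\frac{d}{dt}\left(\|\mathbf{w}\|^2 - \|\mathbf{C}\|^2\right) = 0$ .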
+
+Proof of Theorem G.7. Since $d_{2} = 1$ , we denote the two-dimensional reduction of the three-dimensional tensor $\mathbf{W}^{V}$ as $\mathbf{w}$ . Thus the HyperAttention formula becomes
+
+$$
+\mathrm {H A} _ {i} ^ {\mathrm {l i n}} (\mathbf {X}) = \sum_ {j _ {1} \leq j _ {2}} ^ {L} \left(\sum_ {\alpha \zeta_ {1} \zeta_ {2}} ^ {d} C _ {\alpha \zeta_ {1} \zeta_ {2}} X _ {i \alpha} X _ {j _ {1} \zeta_ {1}} X _ {j _ {2} \zeta_ {2}}\right) \left(\sum_ {\xi_ {1} \xi_ {2}} ^ {d} X _ {j _ {1} \xi_ {1}} X _ {j _ {2} \xi_ {2}} w _ {\xi_ {1} \xi_ {2}}\right)
+$$
+
+# Gradient Flow for the Residuals and the Loss
+
+$$
+\frac {d \mathbf {C}}{d t} = - \eta \frac {\partial L ^ {\mathrm {M S E}} (\mathbf {C} , \mathbf {w})}{\partial \mathbf {C}}, \quad \frac {d \mathbf {w}}{d t} = - \eta \frac {\partial L ^ {\mathrm {M S E}} (\mathbf {C} , \mathbf {w})}{\partial \mathbf {w}}
+$$
+
+Following similar steps as in the proof of Theorem 4.4,
+
+$$
+\begin{array}{l} \frac {d D _ {i} ^ {(m)}}{d t} = - \eta \sum_ {\mu \nu \sigma \gamma \theta} X _ {i \mu} ^ {(m)} \sum_ {k} \sum_ {j \leq k} X _ {j \nu} ^ {(m)} X _ {k \sigma} ^ {(m)} X _ {j \gamma} ^ {(m)} X _ {k \theta} ^ {(m)} w _ {\gamma \theta} \frac {\partial L ^ {\mathrm {M S E}} (\mathbf {C} , \mathbf {w})}{\partial C _ {\mu \nu \sigma}} \\ - \eta \sum_ {\nu \mu \sigma \gamma \theta} X _ {i \mu} ^ {(m)} \sum_ {k} \sum_ {j \leq k} X _ {j \nu} ^ {(m)} X _ {k \sigma} ^ {(m)} C _ {\mu \nu \sigma} X _ {j \gamma} ^ {(m)} X _ {k \theta} ^ {(m)} \frac {\partial L ^ {\mathrm {M S E}} (\mathbf {C} , \mathbf {w})}{\partial w _ {\gamma \theta}} \\ \end{array}
+$$
+
+Substituting the gradients
+
+$$
+\begin{array}{l} \frac {d D _ {i} ^ {(m)}}{d t} = - \frac {2 \eta}{B} \sum_ {\mu \nu \sigma \gamma^ {\prime} \theta^ {\prime}} \Bigg \{ X _ {i \mu} ^ {(m)} w _ {\gamma^ {\prime} \theta^ {\prime}} \left(\sum_ {k} \sum_ {j \leq k} X _ {j \nu} ^ {(m)} X _ {k \sigma} ^ {(m)} X _ {j \gamma^ {\prime}} ^ {(m)} X _ {k \theta^ {\prime}} ^ {(m)}\right) \\ \times \sum_ {n} \sum_ {\gamma \theta} w _ {\gamma \theta} \left(\sum_ {k} \sum_ {j \leq k} X _ {j \nu} ^ {(n)} X _ {j \gamma} ^ {(n)} X _ {k \sigma} ^ {(n)} X _ {k \theta} ^ {(n)}\right) \left(\mathbf {X} ^ {(n)}\right) _ {\mu ,:} ^ {\top} \mathbf {D} ^ {(n)} \Bigg \} \\ - \frac {2 \eta}{B} \sum_ {\mu^ {\prime} \nu^ {\prime} \sigma^ {\prime} \gamma \theta} \Bigg \{ X _ {i \mu^ {\prime}} ^ {(m)} C _ {\mu^ {\prime} \nu^ {\prime} \sigma^ {\prime}} \left(\sum_ {k} \sum_ {j \leq k} X _ {j \nu^ {\prime}} ^ {(m)} X _ {k \sigma^ {\prime}} ^ {(m)} X _ {j \gamma} ^ {(m)} X _ {k \theta} ^ {(m)}\right) \\ \times \sum_ {n} \sum_ {\mu \nu \sigma} C _ {\mu \nu \sigma} \left(\sum_ {k} \sum_ {j \leq k} X _ {j \nu} ^ {(n)} X _ {j \gamma} ^ {(n)} X _ {k \sigma} ^ {(n)} X _ {k \theta} ^ {(n)}\right) \left(\mathbf {X} ^ {(n)}\right) _ {\mu ,:} ^ {\top} \mathbf {D} ^ {(n)} \Bigg \} \\ \end{array}
+$$
+
+By Lemma G.8,
+
+$$
+\begin{array}{l} \frac {d D _ {i} ^ {(m)}}{d t} = - \eta \frac {2}{B} \sum_ {\mu \nu \sigma \gamma^ {\prime} \theta^ {\prime}} X _ {i \mu} ^ {(m)} w _ {\gamma^ {\prime} \theta^ {\prime}} \Gamma_ {\nu \sigma} ^ {(m)} \delta_ {\nu \gamma^ {\prime}} \delta_ {\sigma \theta^ {\prime}} \sum_ {n} \sum_ {\gamma \theta} w _ {\gamma \theta} \Gamma_ {\nu \sigma} ^ {(n)} \delta_ {\nu \gamma} \delta_ {\sigma \theta} (\mathbf {X} ^ {(n)}) _ {\mu ,:} ^ {\top} \mathbf {D} ^ {(n)} \\ - \eta \frac {2}{B} \sum_ {\mu^ {\prime} \nu^ {\prime} \sigma^ {\prime} \gamma \theta} X _ {i \mu^ {\prime}} ^ {(m)} C _ {\mu^ {\prime} \nu^ {\prime} \sigma^ {\prime}} \Gamma_ {\nu^ {\prime} \sigma^ {\prime}} ^ {(m)} \delta_ {\nu^ {\prime} \gamma} \delta_ {\sigma^ {\prime} \theta} \sum_ {n} \sum_ {\mu \nu \sigma} C _ {\mu \nu \sigma} \Gamma_ {\nu \sigma} ^ {(n)} \delta_ {\nu \gamma} \delta_ {\sigma \theta} \left(\mathbf {X} ^ {(n) \top}\right) _ {\mu,:} \mathbf {D} ^ {(n)} \\ = - \eta \frac {2}{B} \sum_ {\mu \nu \sigma} X _ {i \mu} ^ {(m)} w _ {\nu \sigma} \Gamma_ {\nu \sigma} ^ {(m)} \sum_ {n} w _ {\nu \sigma} \Gamma_ {\nu \sigma} ^ {(n)} (\mathbf {X} ^ {(n)}) _ {\mu ,:} ^ {\top} \mathbf {D} ^ {(n)} \\ - \eta \frac {2}{B} \sum_ {\mu^ {\prime} \gamma \theta} X _ {i \mu^ {\prime}} ^ {(m)} C _ {\mu^ {\prime} \gamma \theta} \Gamma_ {\gamma \theta} ^ {(m)} \sum_ {n} \sum_ {\mu} C _ {\mu \gamma \theta} \Gamma_ {\gamma \theta} ^ {(n)} \left(\mathbf {X} ^ {(n) \top}\right) _ {\mu,:} \mathbf {D} ^ {(n)} \\ \end{array}
+$$
+
+Now let us define,
+
+$$
+M _ {i \mu \nu \sigma} ^ {(n)} = X _ {i \mu} ^ {(n)} w _ {\nu \sigma} \Gamma_ {\nu \sigma} ^ {(n)}
+$$
+
+Also, matricize $M_{i\mu \nu \sigma}^{(n)}$ as $\mathbf{M}$ such that the first dimension of $\mathbf{M}$ is the vectorization of $(n, i)$ and the second dimension is the vectorization of $(\mu, \nu, \sigma)$ , so $\mathbf{M} \in \mathbb{R}^{BL \times d^3}$ . There exists a similar matrix $\mathbf{M}_2$ such that
+
+$$
+\frac {d \mathbf {D}}{d t} = - \eta \frac {2}{B} \left[ \mathbf {M} \mathbf {M} ^ {\top} + \mathbf {M} _ {2} \mathbf {M} _ {2} ^ {\top} \right] \mathbf {D}
+$$
+
+Similar to what we did in the proof of Theorem 4.4, it follows that
+
+$$
+\frac {d L ^ {\mathrm {M S E}} (\mathbf {C} , \mathbf {w})}{d t} \leq - \frac {4 \eta}{B ^ {2}} \mathbf {D} ^ {\top} \mathbf {M} \mathbf {M} ^ {\top} \mathbf {D} \tag {45}
+$$
+
+Now, we will write Eq. 45 differently, reexpressing $\mathbf{D}$ . Thanks to Assumption G.6, i.e. the realizability, we can write
+
+$$
+D _ {i} ^ {(n)} = \sum_ {k} \sum_ {j \leq k} \sum_ {\mu \nu \sigma \gamma \theta} X _ {i \mu} ^ {(n)} X _ {j \nu} ^ {(n)} X _ {k \sigma} ^ {(n)} C _ {\mu \nu \sigma} X _ {j \gamma} ^ {(n)} X _ {k \theta} ^ {(n)} w _ {\gamma \theta} - \sum_ {k} \sum_ {j \leq k} \sum_ {\mu \nu \sigma \gamma \theta} X _ {i \mu} ^ {(n)} X _ {j \nu} ^ {(n)} X _ {k \sigma} ^ {(n)} C _ {\mu \nu \sigma} ^ {*} X _ {j \gamma} ^ {(n)} X _ {k \theta} ^ {(n)} w _ {\gamma \theta} ^ {*}.
+$$
+
+By Lemma G.8,
+
+$$
+D _ {i} ^ {(n)} = \sum_ {\mu \nu \sigma} X _ {i \mu} \left(C _ {\mu \nu \sigma} \Gamma_ {\nu \sigma} ^ {(n)} w _ {\nu \sigma} - C _ {\mu \nu \sigma} ^ {*} \Gamma_ {\nu \sigma} ^ {(n)} w _ {\nu \sigma} ^ {*}\right)
+$$
+
+By Lemma G.9, $1 / w_{\nu \sigma}$ is well-defined, so we can write
+
+$$
+D _ {i} ^ {(n)} = \sum_ {\mu \nu \sigma} X _ {i \mu} \Gamma_ {\nu \sigma} ^ {(n)} w _ {\nu \sigma} \left(C _ {\mu \nu \sigma} - C _ {\mu \nu \sigma} ^ {*} \frac {w _ {\nu \sigma} ^ {*}}{w _ {\nu \sigma}}\right) = \sum_ {\mu \nu \sigma} M _ {i \mu \nu \sigma} ^ {(n)} \left(C _ {\mu \nu \sigma} - C _ {\mu \nu \sigma} ^ {*} \frac {w _ {\nu \sigma} ^ {*}}{w _ {\nu \sigma}}\right).
+$$
+
+Now again we can vectorize along $n, i$ and vectorize along $\mu, \nu, \sigma$ which leads to
+
+$$
+\mathbf {D} = \mathbf {M} \operatorname {vec} \left(\mathbf {C} - \mathbf {C} ^ {*} \operatorname {diag} \left(\frac {\mathbf {w} ^ {*}}{\mathbf {w}}\right)\right).
+$$
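This vectorized identity for $\mathbf{D}$ can also be verified numerically on a realizable toy instance (assumed sizes; $\mathbf{w}$ and $\mathbf{w}^*$ drawn positive so the ratio is well-defined, as guaranteed by Lemma G.9):

```python
import numpy as np

rng = np.random.default_rng(6)
d, L, B = 2, 4, 2
X = np.eye(d)[rng.integers(0, d, size=(B, L))]     # one-hot inputs
pair = np.triu(np.ones((L, L)))                    # j1 <= j2 indicator
Gamma = np.einsum('jk,njv,nks->nvs', pair, X, X)   # Gamma^{(n)}_{nu sigma}

Cstar = rng.standard_normal((d, d, d))             # realizable target parameters
wstar = rng.uniform(0.5, 1.5, size=(d, d))         # positive, as assumed
C = rng.standard_normal((d, d, d))
w = rng.uniform(0.5, 1.5, size=(d, d))             # bounded away from zero

def ha(C_, w_):                                    # one-hot form of HA^lin (d_2 = 1)
    return np.einsum('nim,nvs,mvs,vs->ni', X, Gamma, C_, w_)

y = ha(Cstar, wstar)                               # realizable targets (Assumption G.6)
D = ha(C, w) - y

M = np.einsum('nim,vs,nvs->nimvs', X, w, Gamma)    # M^{(n)}_{i mu nu sigma}
resid = C - Cstar * (wstar / w)[None, :, :]        # C - C* diag(w*/w)
D_from_M = np.einsum('nimvs,mvs->ni', M, resid)
print(np.max(np.abs(D - D_from_M)))                # tiny float error
```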
+
+Following the same steps as in the proof of Theorem 4.4,
+
+$$
+\frac {d L ^ {\mathrm {M S E}} (\mathbf {C} , \mathbf {w})}{d t} \leq - \frac {4 \eta}{B ^ {2}} \mathbf {D} ^ {\top} \mathbf {M} \mathbf {M} ^ {\top} \mathbf {D} = - \frac {4 \eta}{B ^ {2}} \mathrm {v e c} \Big [ \mathbf {C} - \mathbf {C} ^ {*} \operatorname {d i a g} \Big (\frac {\mathbf {w} ^ {*}}{\mathbf {w}} \Big) \Big ] ^ {\top} \mathbf {M} ^ {\top} \mathbf {M} \mathbf {M} ^ {\top} \mathbf {M} \mathrm {v e c} \Big [ \mathbf {C} - \mathbf {C} ^ {*} \operatorname {d i a g} \Big (\frac {\mathbf {w} ^ {*}}{\mathbf {w}} \Big) \Big ]
+$$
+
+Using Lemma C.3, the same inequality can be written as
+
+$$
+\frac {d L ^ {\mathrm {M S E}} (\mathbf {C} , \mathbf {w})}{d t} \leq - \frac {4 \eta}{B ^ {2}} \lambda_ {\min} (\mathbf {M} ^ {\top} \mathbf {M}) \left\| \mathbf {M} \mathrm {v e c} \Big [ \mathbf {C} - \mathbf {C} ^ {*} \operatorname {d i a g} \Big (\frac {\mathbf {w} ^ {*}}{\mathbf {w}} \Big) \Big ] \right\| ^ {2} = - \frac {4 \eta}{B ^ {2}} \lambda_ {\min} (\mathbf {M} ^ {\top} \mathbf {M}) \| \mathbf {D} \| ^ {2},
+$$
+
+where $\lambda_{\mathrm{min}}(\mathbf{M}^{\top}\mathbf{M})$ is the minimum eigenvalue of $\mathbf{M}^{\top}\mathbf{M}$ . Thus, if there exists a constant $\psi$ such that $\lambda_{\mathrm{min}}(\mathbf{M}^{\top}(t)\mathbf{M}(t)) \geq \psi > 0$ , $\forall t$ , then the training loss stops decreasing only when $\mathbf{D}$ becomes the all-zero vector, i.e., only when the training loss itself reaches zero, as stated more rigorously in Lemma C.4.
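The spectral step used above, $\operatorname{vec}^{\top}\mathbf{M}^{\top}\mathbf{M}\mathbf{M}^{\top}\mathbf{M}\operatorname{vec} \geq \lambda_{\min}(\mathbf{M}^{\top}\mathbf{M})\,\|\mathbf{M}\operatorname{vec}\|^{2}$, holds for any matrix, since $\mathbf{M}^{\top}\mathbf{M}$ is symmetric positive semi-definite. A quick numerical sanity check on a random instance (our own sketch; the sizes are arbitrary stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(30, 12))   # stand-in for the (nL) x d^3 matrix M
z = rng.normal(size=12)         # stand-in for vec[C - C* diag(w*/w)]
A = M.T @ M                     # symmetric PSD Gram matrix
lhs = z @ A @ (A @ z)           # vec^T M^T M M^T M vec = ||A z||^2
lam_min = np.linalg.eigvalsh(A).min()
rhs = lam_min * np.linalg.norm(M @ z) ** 2
assert lhs >= rhs - 1e-9        # z^T A^2 z >= lambda_min(A) * z^T A z
```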
+
+Lower Bound on the Eigenvalues of $\mathbf{M}^{\top}\mathbf{M}$
+
+$$
+\lambda_ {m i n} \left(\mathbf {M} ^ {\top} \mathbf {M}\right) = \sigma_ {\min } \left(\mathbf {M} ^ {\top} \mathbf {M}\right) = \min _ {\mathbf {u}: \| \mathbf {u} \| _ {2} = 1} \| \mathbf {M} ^ {\top} \mathbf {M} \mathbf {u} \| _ {2}, \tag {46}
+$$
+
+where the first equality follows because $\mathbf{M}^{\top}\mathbf{M}$ is symmetric positive semi-definite and $\mathbf{u}\in \mathbb{R}^{d^3}$ . We also know
+
+$$
+\left[ \mathbf {M} ^ {\top} \mathbf {M} \right] _ {\mu \nu \sigma \mu^ {\prime} \nu^ {\prime} \sigma^ {\prime}} = \sum_ {n} \sum_ {i \in [ L ]} X _ {i \mu} ^ {(n)} \Gamma_ {\nu \sigma} ^ {(n)} w _ {\nu \sigma} X _ {i \mu^ {\prime}} ^ {(n)} \Gamma_ {\nu^ {\prime} \sigma^ {\prime}} ^ {(n)} w _ {\nu^ {\prime} \sigma^ {\prime}}.
+$$
+
+For ease of notation, we tensorize back to the indices $\mu, \nu, \sigma$ . Thus,
+
+$$
+\| \mathbf {u} \| ^ {2} = \sum_ {\mu \nu \sigma} u _ {\mu \nu \sigma} ^ {2} = 1.
+$$
+
+It follows that
+
+$$
+\left[ \mathbf {M} ^ {\top} \mathbf {M u} \right] _ {\mu \nu \sigma} = \sum_ {n \in \mathcal {B}} \sum_ {i \in [ L ]} \sum_ {\mu^ {\prime} \nu^ {\prime} \sigma^ {\prime}} X _ {i \mu} ^ {(n)} \Gamma_ {\nu \sigma} ^ {(n)} w _ {\nu \sigma} X _ {i \mu^ {\prime}} ^ {(n)} \Gamma_ {\nu^ {\prime} \sigma^ {\prime}} ^ {(n)} w _ {\nu^ {\prime} \sigma^ {\prime}} u _ {\mu^ {\prime} \nu^ {\prime} \sigma^ {\prime}} \tag {47}
+$$
+
+Recalling that
+
+$$
+\sum_ {i \in [ L ]} X _ {i \mu} ^ {(n)} X _ {i \mu^ {\prime}} ^ {(n)} = \sum_ {i \in [ L ]} \delta_ {\mathcal {X} ^ {(n)} (i), \mu} \delta_ {\mathcal {X} ^ {(n)} (i), \mu^ {\prime}},
+$$
+
+Eq. 47 becomes
+
+$$
+\begin{array}{l} \left[ \mathbf {M} ^ {\top} \mathbf {M u} \right] _ {\mu \nu \sigma} = \sum_ {n \in \mathcal {B}} \Gamma_ {\nu \sigma} ^ {(n)} w _ {\nu \sigma} \sum_ {\mu^ {\prime} \nu^ {\prime} \sigma^ {\prime}} \left(\sum_ {i \in [ L ]} \delta_ {\mathcal {X} ^ {(n)} (i), \mu} \delta_ {\mathcal {X} ^ {(n)} (i), \mu^ {\prime}}\right) \Gamma_ {\nu^ {\prime} \sigma^ {\prime}} ^ {(n)} w _ {\nu^ {\prime} \sigma^ {\prime}} u _ {\mu^ {\prime} \nu^ {\prime} \sigma^ {\prime}} \\ = \sum_ {n \in \mathcal {B}} \sum_ {i \in [ L ]} \sum_ {\nu^ {\prime} \sigma^ {\prime}} \Gamma_ {\nu \sigma} ^ {(n)} w _ {\nu \sigma} \Gamma_ {\nu^ {\prime} \sigma^ {\prime}} ^ {(n)} w _ {\nu^ {\prime} \sigma^ {\prime}} u _ {\mathcal {X} ^ {(n)} (i) \nu^ {\prime} \sigma^ {\prime}} \\ \end{array}
+$$
+
+Recalling the definition $\mathcal{B}_{\mu} = \left\{n\in \mathcal{B}:\mu \in \mathcal{X}^{(n)}\right\}$ , we apply the same trick used to obtain Eq. 26 from Eq. 25, which gives
+
+$$
+\left[ \mathbf {M} ^ {\top} \mathbf {M u} \right] _ {\mu \nu \sigma} = \sum_ {n \in \mathcal {B} _ {\mu}} s _ {\mu} ^ {(n)} \sum_ {\nu^ {\prime} \sigma^ {\prime}} \Gamma_ {\nu \sigma} ^ {(n)} w _ {\nu \sigma} \Gamma_ {\nu^ {\prime} \sigma^ {\prime}} ^ {(n)} w _ {\nu^ {\prime} \sigma^ {\prime}} u _ {\mu \nu^ {\prime} \sigma^ {\prime}}, \quad \mu \in \mathcal {S}.
+$$
+
+Vectorizing along $\nu, \sigma$ and $\nu', \sigma'$ , we arrive at
+
+$$
+\mathbf {M} ^ {\top} \mathbf {M} \mathbf {u} = \sum_ {\mu \in S} \operatorname {d i a g} (\mathbf {w}) \left(\sum_ {n \in \mathcal {B} _ {\mu}} s _ {\mu} ^ {(n)} \operatorname {v e c} \left(\boldsymbol {\Gamma} ^ {(n)}\right) \operatorname {v e c} ^ {\top} \left(\boldsymbol {\Gamma} ^ {(n)}\right)\right) \mathbf {u} _ {\mu ,..}. \tag {48}
+$$
+
+Notice the similarity between Eqs. 48 and 27. Thus, defining
+
+$$
+\mathbf {Z} _ {\mathcal {B} _ {\mu}} = \left( \begin{array}{c} \vdots \\ \operatorname {v e c} ^ {\top} \big (\Gamma^ {(n)} \big) \\ \vdots \end{array} \right) _ {n \in \mathcal {B} _ {\mu}},
+$$
+
+we follow the same steps as after Eq. 27 and arrive at
+
+$$
+\lambda_ {\min } \left(\mathbf {M} ^ {\top} \mathbf {M}\right) \geq b ^ {2} \zeta^ {2},
+$$
+
+where we used the bound $w_{\nu \sigma} \geq b$ and Assumption G.10, i.e., $\sigma_{\min}^2(\mathbf{Z}_{\mathcal{B}_\mu}) \geq \zeta^2$ .
+
+Assumption G.10 (Training Data Versatility). For all $\mu \in S$ ,
+
+$$
+\mathbf {Z} _ {\mathcal {B} _ {\mu}} = \left( \begin{array}{c} \vdots \\ \operatorname {v e c} ^ {\top} \big (\Gamma^ {(n)} \big) \\ \vdots \end{array} \right) _ {n \in \mathcal {B} _ {\mu}},
+$$
+
+is full column rank.
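In practice, Assumption G.10 can be checked directly by stacking the $\operatorname{vec}^{\top}(\boldsymbol{\Gamma}^{(n)})$ rows and inspecting the smallest singular value: full column rank is equivalent to $\sigma_{\min}(\mathbf{Z}_{\mathcal{B}_{\mu}}) > 0$ , which yields the $\zeta^2$ lower bound. A sketch with random toy $\boldsymbol{\Gamma}$ matrices (the shapes here are assumptions of ours, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_samples = 4, 30                      # toy sizes: Gamma^(n) is d x d, |B_mu| = 30 >= d^2
# Stack vec(Gamma^(n)) as rows of Z_{B_mu}
Z = np.stack([rng.normal(size=(d, d)).ravel() for _ in range(n_samples)])
sigma_min = np.linalg.svd(Z, compute_uv=False).min()
full_col_rank = np.linalg.matrix_rank(Z) == d * d
assert full_col_rank and sigma_min > 0    # versatile data: zeta^2 = sigma_min^2 > 0
```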
+
+# H. Comparison Between the Attention Models
+
+We have introduced two novel models (HyperFeatureAttention in Appendix F, HyperAttention in Appendix G) and mentioned some approximations to reduce computational complexity (in Appendix G.3). In this section, letting the embedding dimension be $d$ and the sequence length be $L$ , we compare these models in terms of number of parameters, computational complexity, and capabilities in Table 3.
+
+Table 3. Comparison Between Attention Models
+
+| Model | Computational Complexity | Captures |
+| --- | --- | --- |
+| Self Attention | $\Theta(L^2)$ | Mutual interactions |
+| HyperFeatureAttention | $\Theta(L^2)$ | Couplings of mutual interactions |
+| HyperAttention of order $n$ | $\Theta(L^n)$ | $n$-way interactions |
+| Linear Self Attention | $\Theta(L)$ | Mutual interactions (approximate) |
+| Linear HyperFeatureAttention | $\Theta(L)$ | Couplings of mutual interactions (approximate) |
+| Linear HyperAttention of order $n$ | $\Theta(L)$ | $n$-way interactions (approximate) |
+
+Starting with self-attention, it captures mutual interactions between entities. $^{13}$ If it has multiple heads, it can capture summation over mutual interactions between features of the entities. It has $\Theta(d^2)$ parameters, and its computational complexity is $\Theta(L^2)$ . To reduce the computational complexity to $\Theta(L)$ , approximations known as "Linear Attention" were proposed (Katharopoulos et al., 2020; Wang et al., 2020). Despite the name, these methods are generally used to approximate softmax self-attention.
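The trick behind Linear Attention can be sketched in a few lines: replacing the softmax with a positive feature map $\phi$ lets the products be reassociated so the $L \times L$ attention matrix is never materialized, turning $\Theta(L^2 d)$ work into $\Theta(L d^2)$ . A minimal numpy illustration (the ELU-based feature map follows Katharopoulos et al. (2020); all variable names are our own):

```python
import numpy as np

def feature_map(x):
    # ELU(x) + 1: a positive feature map, as in Katharopoulos et al. (2020)
    return np.where(x > 0, x + 1.0, np.exp(x))

rng = np.random.default_rng(0)
L, d = 64, 8
Q, K, V = rng.normal(size=(3, L, d))
phi_q, phi_k = feature_map(Q), feature_map(K)

# Quadratic form: materialize the L x L attention matrix, Theta(L^2 d)
A = phi_q @ phi_k.T                              # (L, L)
out_quadratic = (A / A.sum(-1, keepdims=True)) @ V

# Linear form: associate the other way, Theta(L d^2)
S = phi_k.T @ V                                  # (d, d) summary of keys/values
z = phi_k.sum(0)                                 # (d,)  normalizer
out_linear = (phi_q @ S) / (phi_q @ z)[:, None]

assert np.allclose(out_quadratic, out_linear)    # exactly the same output
```

For this kernelized form the two computations agree exactly; the approximation enters only when the feature map is used as a surrogate for the softmax kernel.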
+
+HyperFeatureAttention captures couplings of mutual interactions between features. If it has multiple heads, it can capture summation over couplings of mutual interactions between features. Like self-attention, HyperFeatureAttention has $\Theta(d^2)$ parameters, and its computational complexity is $\Theta(L^2)$ . The same Linear Attention approximations can be applied to HyperFeatureAttention, reducing its computational complexity to $\Theta(L)$ . Since this approximation is not the main focus of the paper, we do not show it explicitly for HyperFeatureAttention.
+
+As for HyperAttention of order $n$ , it captures up to $n^{\text{th}}$ -order interactions. If it has multiple heads, it can capture summation over up to $n^{\text{th}}$ -order interactions between features of the tokens. It has $\Theta(d^2)$ parameters, and its computational complexity is $\Theta(L^n)$ . Using similar Linear Attention approximations, in Appendix G.3 we explained how to reduce the computational complexity of HyperAttention to $\Theta(L)$ .
+
+One might contend that standard self-attention can, in principle, capture these complex interactions "in surprising ways." The key difference is that our modules achieve comparable expressiveness with far fewer parameters—and therefore lower memory and compute overhead. While we do not advocate replacing conventional self-attention with HyperAttention or HyperFeatureAttention, we propose these mechanisms as complementary enhancements. In a typical multi-head architecture, certain heads may employ standard self-attention while others utilize HyperFeatureAttention (of varying orders) or HyperAttention to capture richer, higher-order interactions. Depending on the computational constraints, the HyperAttentions may leverage the linear approximations described in Appendix G.3.
+
+# I. Figures
+
+
+Figure 2. Comparison of learned vs. devised parameters for sinusoidal embedding: (a) Devised matrix $\mathbf{C}^{\forall L}$ showing the original devised structure, (b) Learned matrix $\mathbf{C}$ demonstrating the emergent but non-interpretable patterns. While visually distinct, both parameterizations lead to equivalent model behavior through different mathematical organizations.
+
+
+Figure 3. Equivalence of learned parameters with one-hot embedding in domain embedding base (here the parameters are already in the domain embedding base so we did not transfer them): (a) $C_{\mu \nu}$ learned parameters in domain embedding base, (b) $C^{\forall \mathrm{L}}$ devised parameters in domain embedding base, (c) $C_{\mu \nu}W_{\nu 0}$ the interesting matrix in domain embedding base, (d) Transformed $C_{\mu \nu}^{\forall \mathrm{L}}W_{\nu 0}^{\forall L}$ using original parameters. The mean squared difference between (c) and (d) is $\mathcal{O}(10^{-5})$ , demonstrating functional equivalence despite different parameter organizations.
+
+
+Figure 4. Equivalence of learned parameters with sinusoidal embedding in domain embedding base: (a) $C_{\mu \nu}$ learned parameters in domain embedding base, (b) $C^{\forall L}$ devised parameters in domain embedding base, (c) $C_{\mu \nu} W_{\nu 0}$ the interesting matrix in domain embedding base, (d) Transformed $C_{\mu \nu}^{\forall L} W_{\nu 0}^{\forall L}$ using original parameters. The mean squared difference between (c) and (d) is $\mathcal{O}(10^{-5})$ , demonstrating functional equivalence despite different parameter organizations. As a side note, comparing (b) with Fig. 2 (b), we observe the advantage of sinusoidal embeddings in terms of their parameter efficiency within the $\mathbf{C} = \mathbf{W}^{Q}\mathbf{W}^{K^{\top}}$ matrix, particularly when relative positions are more important than absolute positions.
+
+
\ No newline at end of file
diff --git a/atheoreticalstudyofhyperselfattentionthroughthelensofinteractionsrepresentationtraininggeneralization/images.zip b/atheoreticalstudyofhyperselfattentionthroughthelensofinteractionsrepresentationtraininggeneralization/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..c32930ee7a648127bf46d7705b15a13c07d3dd84
--- /dev/null
+++ b/atheoreticalstudyofhyperselfattentionthroughthelensofinteractionsrepresentationtraininggeneralization/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4f1b53af51b6c26e1cba60015efd5bf3e3e82d4bd4385813702ad3a47742b4d5
+size 2760699
diff --git a/atheoreticalstudyofhyperselfattentionthroughthelensofinteractionsrepresentationtraininggeneralization/layout.json b/atheoreticalstudyofhyperselfattentionthroughthelensofinteractionsrepresentationtraininggeneralization/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..7b71bef330f0d8f5847ba03ecc1531369b465778
--- /dev/null
+++ b/atheoreticalstudyofhyperselfattentionthroughthelensofinteractionsrepresentationtraininggeneralization/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3825405d8f700f4271f73fea2beed95cfdc66702b988a90a18f91b0030e02662
+size 2657189
diff --git a/atheoryforconditionalgenerativemodelingonmultipledatasources/eec526be-0fe7-4c0c-9d34-d2446cf6b549_content_list.json b/atheoryforconditionalgenerativemodelingonmultipledatasources/eec526be-0fe7-4c0c-9d34-d2446cf6b549_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..78c60afa99351d9fc8089bfe6dc2b16022125bcd
--- /dev/null
+++ b/atheoryforconditionalgenerativemodelingonmultipledatasources/eec526be-0fe7-4c0c-9d34-d2446cf6b549_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4dec8604dee433aa1928645a5bed80b53512577f2ec960df503247b448f8bc7f
+size 326934
diff --git a/atheoryforconditionalgenerativemodelingonmultipledatasources/eec526be-0fe7-4c0c-9d34-d2446cf6b549_model.json b/atheoryforconditionalgenerativemodelingonmultipledatasources/eec526be-0fe7-4c0c-9d34-d2446cf6b549_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..cb7c6d6b39b73b5633a10563441a377005859353
--- /dev/null
+++ b/atheoryforconditionalgenerativemodelingonmultipledatasources/eec526be-0fe7-4c0c-9d34-d2446cf6b549_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:113e8c729daaae7e487f685f8dc2171e40c17187087760891c36a1e168a0f24d
+size 367229
diff --git a/atheoryforconditionalgenerativemodelingonmultipledatasources/eec526be-0fe7-4c0c-9d34-d2446cf6b549_origin.pdf b/atheoryforconditionalgenerativemodelingonmultipledatasources/eec526be-0fe7-4c0c-9d34-d2446cf6b549_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..a57c37d4778e14b0e8ae67145aa24856619cfdb1
--- /dev/null
+++ b/atheoryforconditionalgenerativemodelingonmultipledatasources/eec526be-0fe7-4c0c-9d34-d2446cf6b549_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:82dd592abdda936664f17b0fc0ce50fd13b311d22baa96da7fc8154cfacc2957
+size 899172
diff --git a/atheoryforconditionalgenerativemodelingonmultipledatasources/full.md b/atheoryforconditionalgenerativemodelingonmultipledatasources/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..18b948dca0b4a6e1382382a664aa073c1c42b643
--- /dev/null
+++ b/atheoryforconditionalgenerativemodelingonmultipledatasources/full.md
@@ -0,0 +1,1384 @@
+# A Theory for Conditional Generative Modeling on Multiple Data Sources
+
+Rongzhen Wang $^{123}$ Yan Zhang $^{4}$ Chenyu Zheng $^{123}$ Chongxuan Li $^{123\dagger}$ Guoqiang Wu $^{4\dagger}$
+
+# Abstract
+
+The success of large generative models has driven a paradigm shift, leveraging massive multi-source data to enhance model capabilities. However, the interaction among these sources remains theoretically underexplored. This paper takes the first step toward a rigorous analysis of multi-source training in conditional generative modeling, where each condition represents a distinct data source. Specifically, we establish a general distribution estimation error bound in average total variation distance for conditional maximum likelihood estimation based on the bracketing number. Our result shows that when source distributions share certain similarities and the model is expressive enough, multi-source training guarantees a sharper bound than single-source training. We further instantiate the general theory on conditional Gaussian estimation and deep generative models including autoregressive and flexible energy-based models, by characterizing their bracketing numbers. The results highlight that the number of sources and similarity among source distributions improve the advantage of multi-source training. Simulations and real-world experiments validate our theory. Code is available at: https://github.com/ML-GSAI/Multi-Source-GM.
+
+# 1. Introduction
+
+Large generative models have achieved remarkable success in generating realistic and complex outputs across natural language (Brown et al., 2020; Touvron et al., 2023) and computer vision (Rombach et al., 2022; OpenAI, 2024). A
+
+$^{1}$ Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China $^{2}$ Beijing Key Laboratory of Research on Large Models and Intelligent Governance $^{3}$ Engineering Research Center of Next-Generation Intelligent Search and Recommendation, MOE $^{4}$ School of Software, Shandong University, Shandong, China. $^{\dagger}$ Correspondence to: Chongxuan Li , Guoqiang Wu .
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+key factor behind their strong performance is the diverse and rich training data. For instance, large language models are trained on heterogeneous datasets comprising web content, books, and code (Brown et al., 2020; Hu et al., 2024b), while image generation models benefit from vast datasets spanning various categories and aesthetic qualities (Peebles & Xie, 2023; Chen et al., 2024; Esser et al., 2024). Empirical evidence suggests that, under certain conditions, training on multiple data sources can enhance performance across all sources (Pires et al., 2019; Chen et al., 2024; Allen-Zhu & Li, 2024a). Consequently, data mixture strategies have become an essential research topic (Nguyen et al., 2022; Chidambaram et al., 2022; Hu et al., 2024b).
+
+However, the theoretical underpinnings of this multi-source training paradigm remain poorly understood. This raises a fundamental question: is it more effective to train separate models on individual data sources, or to train a single model using data from multiple sources? In this paper, we take the first step toward a rigorous analysis of multi-source training, focusing on its impact on conditional generative models, where each condition represents a distinct data source.
+
+Our first contribution is establishing a general upper bound on distribution estimation error for conditional generative modeling via maximum likelihood estimation (MLE) in Section 3. Specifically, we measure the error using average total variation (TV) distance between the true and estimated conditional distributions across all sources, which scales as $\tilde{\mathcal{O}} (\sqrt{\log\mathcal{N}_{P_{X|Y}} / n})$ , where $n$ is the training set size and $\mathcal{N}_{P_{X|Y}}$ is the bracketing number of the conditional distribution space $\mathcal{P}_{X|Y}$ . Further, when source distributions exhibit parametric similarity, multi-source training effectively reduces the complexity of the distribution space, leading to a provably sharper bound than single-source training.
+
+Technically, our analysis extends classical MLE estimation error bounds from empirical process theory (Wong & Shen, 1995; Geer, 2000; Ge et al., 2024) to the conditional setting by adapting the complexity of the distribution space and measuring the estimation error in terms of average TV distance. Further discussions are provided in Section 6.
+
+As the second contribution, we instantiate our general theory in three specific cases: (1) parametric estimation of conditional Gaussian distributions, a simple example clearly
+
+illustrating how source distribution properties influence the benefits of multi-source training, (2) autoregressive models (ARMs), the foundation of large language models (Brown et al., 2020; Touvron et al., 2023; Liu et al., 2024; Bai et al., 2023; Zheng et al., 2024), and (3) energy-based models (EBMs), a general class of generative models for continuous data (LeCun et al., 2006; Du & Mordatch, 2019; Song & Ermon, 2019; Zhao et al., 2022). For each model, we derive explicit estimation error bounds for both multi-source and single-source training by measuring the bracketing number of the corresponding conditional distribution space. Based on the theoretical results in these instantiations, we observe a common pattern: across all cases, the ratio of multi-source to single-source estimation error bounds takes the form $\sqrt{1 - \frac{K - 1}{K}\beta_{\mathrm{sim}}}$ , where $K$ is the number of sources and $\beta_{\mathrm{sim}} \in [0,1]$ is an inductively derived quantity that can be interpreted as similarity among source distributions, with model-specific definitions detailed in Section 4. This ratio decreases with both $K$ and $\beta_{\mathrm{sim}}$ , indicating that the number of sources and their similarity improve the benefits of multi-source training.
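The behavior of this ratio is easy to tabulate; a tiny sketch of the formula stated above (our own illustration):

```python
import math

def ratio(K: int, beta_sim: float) -> float:
    """Multi- vs single-source bound ratio sqrt(1 - beta_sim * (K - 1) / K)."""
    return math.sqrt(1.0 - beta_sim * (K - 1) / K)

# More sources (K) and higher similarity (beta_sim) both shrink the ratio,
# i.e. strengthen the advantage of multi-source training.
for beta in (0.0, 0.5, 1.0):
    vals = [ratio(K, beta) for K in (1, 2, 5, 100)]
    assert all(a >= b for a, b in zip(vals, vals[1:]))
assert ratio(1, 1.0) == 1.0    # one source: no advantage
assert ratio(4, 1.0) == 0.5    # sqrt(1 - 3/4)
```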
+
+A core technical contribution is establishing novel bracketing number bounds for ARMs and EBMs. This is challenging since on the one hand, the bracketing number provides a refined measure of function spaces, requiring carefully designed one-sided bounds over the entire domain. On the other hand, the conditional distribution space of deep generative models is inherently complex, involving both neural network architectures and specific probabilistic characteristics for different generative modeling methods. Please refer to Appendixes C and D for detailed proofs and discussions.
+
+Finally, we validate our theoretical findings through simulations and real-world experiments in Section 5. In simulations, we perform conditional Gaussian estimation, where the MLE solutions can be analytically derived, enabling a rigorous assessment of the tightness of our bounds. The close match between the empirical and theoretical error orders supports the validity of our results. Beyond simulations, we train class-conditional diffusion models (Karras et al., 2024) on ILSVRC2012 (Russakovsky et al., 2015) where its semantic hierarchy (Deng et al., 2010) provides a natural way to define inter-source distribution similarity. Empirical results confirm that multi-source training outperforms single-source training by achieving lower FID scores, consistent with our theoretical guarantee in Section 3, and this advantage depends on both the number of sources and their similarity, supporting our insights in Section 4.
+
+# 2. Problem formulation
+
+Elementary notations. Scalars, vectors, and matrices are denoted by lowercase letters (e.g., $a$ ), lowercase boldface letters (e.g., $\pmb{a}$ ), and uppercase boldface letters (e.g., $\pmb{A}$ ). We
+
+use $\pmb{a}[m]$ to denote the $m$ -th entry of vector $\pmb{a}$ , and $A[m, :]$ , $A[:, n]$ , and $A[m, n]$ to denote the $m$ -th row, the $n$ -th column, and the entry at the $m$ -th row and the $n$ -th column of $A$ . $(\pmb{a}, \pmb{b})$ denotes the concatenation of $\pmb{a}$ and $\pmb{b}$ as a column vector. We denote $[n] := \{1, \dots, n\}$ for any $n \in \mathbb{N}$ and $a \lor b$ as $\max \{a, b\}$ . For any measurable scalar function $f(\pmb{x})$ on domain $\mathcal{X}$ and real number $1 \leq p \leq \infty$ , its $L^{\mathsf{p}}(\mathcal{X})$ -norm is defined as $\| f(\pmb{x}) \|_{L^{\mathsf{p}}(\mathcal{X})} := (\int_{\mathcal{X}} |f(\pmb{x})|^{\mathsf{p}} d\pmb{x})^{\frac{1}{\mathsf{p}}}$ . When $p = \infty$ , $\| f(\pmb{x}) \|_{L^{\infty}(\mathcal{X})} = \sup_{\pmb{x} \in \mathcal{X}} |f(\pmb{x})|$ . $\mathbb{I}(\cdot)$ denotes the indicator function. Notation $a_n = \tilde{\mathcal{O}}(b_n)$ indicates $a_n$ is asymptotically bounded above by $b_n$ up to logarithmic factors.
+
+# 2.1. Data from multiple sources
+
+Let $X$ denote the random variable for data (e.g., a natural image) in a data space $\mathcal{X}$ , and $Y$ denote the random variable for the source label in a label space $\mathcal{Y}$ . Suppose there are $K$ data sources (e.g., $K$ categories of images), each corresponding to an unknown conditional distribution $p_{X|k}^{*}$ for $k\in [K]$ . We assume that $p_{X|k}^{*}$ is parameterized by a source-specific feature $\phi_k^*$ in a parameter space $\Phi$ and a shared feature $\psi^{*}$ in a parameter space $\Psi$ , such that $p_{X|k}^{*}(x|k) = p_{\phi_k^*,\psi^*}(x|k)$ . The conditional distribution of $X$ given $Y = y$ is consequently expressed as
+
+$$
+p _ {X | Y} ^ {*} (\boldsymbol {x} | y) = \prod_ {k = 1} ^ {K} \left(p _ {\phi_ {k} ^ {*}, \psi^ {*}} (\boldsymbol {x} | k)\right) ^ {\mathbb {I} (y = k)}.
+$$
+
+This compact representation provides convenience for subsequent discussions.
+
+We further assume the distribution of $Y$ is known since the proportion of data from different sources is often manually designed in practice (Deng et al., 2009; Krizhevsky et al., 2009; Brown et al., 2020; Chen et al., 2024). The joint distribution of $X$ and $Y$ is then given by $p_{X,Y}^{*}(\boldsymbol{x},y) = p_{X|Y}^{*}(\boldsymbol{x}|y)p_{Y}^{*}(y)$ .
+
+# 2.2. Conditional generative modeling
+
+Consider a dataset $S = \{(\pmb{x}_i, y_i)\}_{i=1}^n$ consisting of $n$ independent and identically distributed (i.i.d.) data-label pairs sampled from the joint distribution $p_{X,Y}^{*}$ . In the learning phase, a conditional generative model uses maximum likelihood estimation (MLE) to estimate $p_{X|Y}^{*}$ based on the dataset $S$ , where the conditional likelihood is defined as
+
+$$
+\mathcal {L} _ {S} \left(p _ {X \mid Y}\right) := \prod_ {i = 1} ^ {n} p _ {X \mid Y} \left(\boldsymbol {x} _ {i} \mid y _ {i}\right). \tag {1}
+$$
+
+Multi-source training. Under multi-source training, the conditional distribution space is given by $\mathcal{P}_{X|Y}^{\mathrm{multi}}\coloneqq$
+
+$$
+\left\{p _ {X | Y} ^ {\mathrm {m u l t i}} (\boldsymbol {x} | y) = \prod_ {k = 1} ^ {K} \left(p _ {\phi_ {k}, \psi} (\boldsymbol {x} | k)\right) ^ {\mathbb {I} (y = k)}: \phi_ {k} \in \Phi , \psi \in \Psi \right\},
+$$
+
+and the corresponding estimator of $p_{X|Y}^{*}$ is
+
+$$
+\hat {p} _ {X | Y} ^ {\text {m u l t i}} = \underset {p _ {X | Y} ^ {\text {m u l t i}} \in \mathcal {P} _ {X | Y} ^ {\text {m u l t i}}} {\arg \max } \mathcal {L} _ {S} (p _ {X | Y} ^ {\text {m u l t i}}). \tag {2}
+$$
+
+Here, we adopt the realizable assumption that true parameters satisfy $\phi_k^* \in \Phi$ and $\psi^* \in \Psi$ following (Ge et al., 2024), which allows the estimation error analysis to focus on the generalization property of the distribution space.
+
+Single-source training. Under single-source training, we train $K$ conditional generative models, one per source, using data exclusively from the corresponding source. For any particular source $k$ , denoting $S_{k} \coloneqq \{(\pmb{x}_{i},y_{i}) \in S \mid y_{i} = k\} = \{(\pmb{x}_{j}^{k}, k)\}_{j=1}^{n_{k}}$ , the corresponding generative model estimates $p_{X|k}^{*}$ by maximizing the conditional likelihood on $S_{k}$ as
+
+$$
+\hat{p}_{X|k}^{\text{single}} = \operatorname *{arg max}_{\substack{p_{X|k}^{\text{single}}\in \mathcal{P}_{X|k}^{\text{single}}}}\mathcal{L}_{S_{k}}(p_{X|k}^{\text{single}}),
+$$
+
+where $\mathcal{L}_{S_k}(p_{X|k})\coloneqq \prod_{j = 1}^{n_k}p_{X|k}(\pmb{x}_j^k |k)$ and $\mathcal{P}_{X|k}^{\mathrm{single}}\coloneqq$ $\{p_{\phi_k,\psi_k}(\pmb {x}|k):\phi_k\in \Phi ,\psi_k\in \Psi \}$
+
+Separately maximizing these $K$ objectives is equivalent to finding the maximizer of $\mathcal{L}_{S}$ in the conditional distribution space $\mathcal{P}_{X|Y}^{\mathrm{single}}\coloneqq$
+
+$$
+\left\{p _ {X | Y} ^ {\text {s i n g l e}} (\boldsymbol {x} | y) = \prod_ {k = 1} ^ {K} \left(p _ {\phi_ {k}, \psi_ {k}} (\boldsymbol {x} | k)\right) ^ {\mathbb {I} (y = k)}: \phi_ {k} \in \Phi , \psi_ {k} \in \Psi \right\}.
+$$
+
+Therefore, the estimator of $p_{X|Y}^*$ under single-source training is
+
+$$
+\hat {p} _ {X \mid Y} ^ {\text {s i n g l e}} = \underset {p _ {X \mid Y} ^ {\text {s i n g l e}} \in \mathcal {P} _ {X \mid Y} ^ {\text {s i n g l e}}} {\arg \max } \mathcal {L} _ {S} \left(p _ {X \mid Y} ^ {\text {s i n g l e}}\right). \tag {3}
+$$
+
+The introduced multi-source setting abstracts practical scenarios where different sources share certain underlying data structures (via $\psi$ ) while retaining unique characteristics (via $\phi_k$ ). At the same time, the single-source setting provides a controlled comparison to rigorously assess whether incorporating other sources improves the model's learning.
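The contrast between the two estimators is easiest to see in a setting where the MLE is a sample mean (the Gaussian instance analyzed later in Section 4.1): single-source training estimates every coordinate from one source's data alone, while multi-source training pools the shared coordinates across all $K$ sources. A minimal simulation sketch (all dimensions, sample sizes, and names are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
K, d1, d2, n_k = 5, 3, 20, 200            # sources, specific dims, shared dims, per-source n
phi_true = rng.normal(size=(K, d1))        # source-specific features phi_k*
psi_true = rng.normal(size=d2)             # shared feature psi*
data = [np.concatenate([phi_true[k], psi_true]) + rng.normal(size=(n_k, d1 + d2))
        for k in range(K)]

# Single-source MLE: each source estimates its full mean from its own data alone.
single = [x.mean(axis=0) for x in data]

# Multi-source MLE: the shared coordinates are pooled across all K sources.
psi_hat = np.mean([x[:, d1:].mean(axis=0) for x in data], axis=0)
multi = [np.concatenate([x[:, :d1].mean(axis=0), psi_hat]) for x in data]

def avg_sq_err(est):
    return np.mean([np.sum((e - np.concatenate([phi_true[k], psi_true])) ** 2)
                    for k, e in enumerate(est)])

assert avg_sq_err(multi) < avg_sq_err(single)   # pooling the shared part reduces error
```

Pooling shrinks the variance of the shared-coordinate estimate by a factor of $K$, while the source-specific coordinates are estimated identically in both schemes.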
+
+Evaluation for conditional distribution estimation. We measure the accuracy of conditional distribution estimation by average TV distance between the estimated and true conditional distributions, referred to as average TV error:
+
+$$
+\mathcal {R} _ {\overline {{\mathrm {T V}}}} (\hat {p} _ {X | Y}) := \mathbb {E} _ {Y} \left[ \mathrm {T V} \left(\hat {p} _ {X | Y}, p _ {X | Y} ^ {*}\right) \right], \tag {4}
+$$
+
+where $\mathrm{TV}(\hat{p}_{X|Y}, p_{X|Y}^*) = \frac{1}{2} \int_{\mathcal{X}} |\hat{p}_{X|Y}(\boldsymbol{x}|y) - p_{X|Y}^*(\boldsymbol{x}|y)| d\boldsymbol{x}$ is the total variation distance between $\hat{p}_{X|Y}$ and $p_{X|Y}^*$ .
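For intuition, the TV distance in Equation (4) can be computed numerically for simple densities and checked against known closed forms; e.g., for two unit-variance Gaussians, $\mathrm{TV}(\mathcal{N}(0,1), \mathcal{N}(\mu,1)) = 2\Phi(|\mu|/2) - 1$. A small sketch (this closed form is a standard fact, not a result of this paper):

```python
import numpy as np
from math import erf, sqrt

# Numerical TV distance between N(0,1) and N(mu,1) on a fine grid,
# compared with the closed form 2*Phi(mu/2) - 1 = erf(mu / (2*sqrt(2))).
mu = 1.5
x = np.linspace(-12.0, 12.0 + mu, 200001)
p = np.exp(-0.5 * x ** 2) / np.sqrt(2 * np.pi)
q = np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2 * np.pi)
f = np.abs(p - q)
dx = x[1] - x[0]
tv_numeric = 0.5 * np.sum((f[1:] + f[:-1]) / 2) * dx   # trapezoid rule
tv_closed = erf(mu / (2 * sqrt(2)))
assert abs(tv_numeric - tv_closed) < 1e-6
```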
+
+In the following sections, we investigate the effectiveness of multi-source training by measuring and comparing $\mathcal{R}_{\overline{\mathrm{TV}}}(\hat{p}_{X|Y}^{\mathrm{multi}})$ and $\mathcal{R}_{\overline{\mathrm{TV}}}(\hat{p}_{X|Y}^{\mathrm{single}})$ .
+
+# 3. Provable advantage of multi-source training
+
+In this section, we establish a general upper bound on the average TV error for conditional MLE and provide a statistical guarantee for the benefits of multi-source training. Our analysis extends the classical MLE guarantees (Geer, 2000; Ge et al., 2024), which leverage the bracketing number and the uniform law of large numbers.
+
+# 3.1. Complexity of the conditional distribution space
+
+We begin by introducing an extended notion of the bracketing number as follows.
+
+Definition 3.1 ( $\epsilon$ -upper bracketing number for conditional distribution space). Let $\epsilon > 0$ be a real number and let $\mathfrak{p}$ satisfy $1 \leq \mathfrak{p} \leq \infty$ . An $\epsilon$ -upper bracket of a conditional distribution space $\mathcal{P}_{X|Y}$ with respect to $L^{\mathfrak{p}}(\mathcal{X})$ is a finite function set $\mathcal{B}$ such that for any $p_{X|Y} \in \mathcal{P}_{X|Y}$ , there exists some $p' \in \mathcal{B}$ such that given any $y \in \mathcal{Y}$ , it holds
+
+$$
+\forall \boldsymbol {x} \in \mathcal {X}: p ^ {\prime} (\boldsymbol {x}, y) \geq p _ {X | Y} (\boldsymbol {x} | y), \text {a n d}
+$$
+
+$$
+\left\| p ^ {\prime} (\cdot , y) - p _ {X \mid Y} (\cdot \mid y) \right\| _ {L ^ {p} (\mathcal {X})} \leq \epsilon .
+$$
+
+The $\epsilon$ -upper bracketing number $\mathcal{N}_{[]}(\epsilon; \mathcal{P}_{X|Y}, L^{\mathfrak{p}}(\mathcal{X}))$ is the cardinality of the smallest $\epsilon$ -upper bracket.
+
+This notion quantifies the minimal set of functions needed to upper bound every conditional distribution within a small margin, reducing error analysis from an infinite to a finite function class. Unlike traditional bracketing numbers for unconditional distributions $p_{X}$ using two-sided brackets (Wellner, 2002), this extension employs one-sided upper brackets (Ge et al., 2024) and requires uniform coverage across $y$ for all conditional distributions tailored for our setting. We provide a concrete example and corresponding visualization in Appendix G to make this notion more accessible.
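As a toy illustration of the upper-bracket idea (our own example, separate from the one in Appendix G): for the location family $\{\mathcal{N}(\mu,1): \mu \in [a,b]\}$ , the pointwise supremum of the densities is a single function that dominates every member, and its $L^1$ excess shrinks with $b-a$ , so it serves as a one-element upper bracket:

```python
import numpy as np

a, b = 0.0, 0.1
x = np.linspace(-12.0, 12.0, 100001)
dx = x[1] - x[0]

def dens(mu):
    # density of N(mu, 1) evaluated on the grid
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2 * np.pi)

mus = np.linspace(a, b, 21)
envelope = np.max([dens(mu) for mu in mus], axis=0)   # pointwise supremum over the family

for mu in (a, 0.05, b):
    gap = envelope - dens(mu)
    assert gap.min() >= -1e-12          # upper bracket: dominates every member pointwise
    assert np.sum(gap) * dx < 0.2       # small L^1 excess for a narrow family
```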
+
+# 3.2. Guarantee for conditional MLE
+
+We now present a general error bound which applies to both training strategies.
+
+Theorem 3.2 (Average TV error bound for conditional MLE, proof in Appendix A.1). Given a dataset $S$ of size $n$ i.i.d. sampled from $p_{X,Y}^{*}$ , let $\hat{p}_{X|Y}$ be the maximizer of $\mathcal{L}_{S}(p_{X|Y})$ defined in Equation (1) in the conditional distribution space $\mathcal{P}_{X|Y}$ . Suppose the true conditional distribution $p_{X|Y}^{*}$ is contained in $\mathcal{P}_{X|Y}$ . Then, for any $0 < \delta \leq 1/2$ , it holds with probability at least $1 - \delta$ that
+
+$$
+\mathcal {R} _ {\overline {{\mathrm {T V}}}} (\hat {p} _ {X | Y}) \leq 3 \sqrt {\frac {1}{n} \left(\log \mathcal {N} _ {[ ]} \left(\frac {1}{n} ; \mathcal {P} _ {X | Y} , L ^ {1} (\mathcal {X})\right) + \log \frac {1}{\delta}\right)}.
+$$
+
+As formulated in Equation (2) and Equation (3), multi-source and single-source training apply conditional MLE on
+
+$S$ within different conditional distribution spaces. The following proposition shows that multi-source training reduces the bracketing number of its distribution space through source similarity.
+
+Proposition 3.3 (Multi-source training reducing complexity, proof in Appendix A.2). Let $\mathcal{P}_{X|Y}^{\mathrm{multi}}$ and $\mathcal{P}_{X|Y}^{\mathrm{single}}$ be as defined in Section 2. Then, for any $\epsilon >0$ and $1\leq \mathsf{p}\leq \infty$ , we have
+
+$$
+\mathcal {N} _ {[ ]} \left(\epsilon ; \mathcal {P} _ {X | Y} ^ {\mathrm {m u l t i}}, L ^ {\mathsf {p}} (\mathcal {X})\right) \leq \mathcal {N} _ {[ ]} \left(\epsilon ; \mathcal {P} _ {X | Y} ^ {\mathrm {s i n g l e}}, L ^ {\mathsf {p}} (\mathcal {X})\right).
+$$
+
+Combining Theorem 3.2 and Proposition 3.3, we conclude that when source distributions have parametric similarity and the model satisfies the realizable assumption, multi-source training can enjoy a sharper estimation guarantee than single-source training. Simulations and real-world experiments in Section 5 support our result.
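+The bound of Theorem 3.2 is easy to evaluate numerically. Below is a minimal sketch; the log bracketing numbers are hypothetical placeholder values (the actual values come from the instantiations in Section 4), but the comparison illustrates how the smaller complexity of $\mathcal{P}_{X|Y}^{\mathrm{multi}}$ (Proposition 3.3) translates into a sharper bound:
+
+```python
+import math
+
+def tv_error_bound(log_bracketing_number: float, n: int, delta: float = 0.05) -> float:
+    """Average TV error bound of Theorem 3.2:
+    3 * sqrt((log N_[](1/n) + log(1/delta)) / n)."""
+    assert 0 < delta <= 0.5
+    return 3 * math.sqrt((log_bracketing_number + math.log(1 / delta)) / n)
+
+# Hypothetical log bracketing numbers for illustration only:
+# multi-source training has a smaller distribution-space complexity.
+n = 5000
+log_N_multi, log_N_single = 80.0, 200.0
+print(tv_error_bound(log_N_multi, n))   # smaller bound
+print(tv_error_bound(log_N_single, n))  # larger bound
+```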
+
+# 4. Instantiations
+
+We now apply our general analysis to conditional Gaussian estimation and two deep generative models to obtain concrete error bounds.
+
+# 4.1. Parametric estimation on Gaussian distributions
+
+As employed in extensive work (Montanari & Saeed, 2022; Wang & Thrampoulidis, 2022; He et al., 2022; Zheng et al., 2023b; Dandi et al., 2024; Zheng et al., 2023a), Gaussian models provide a simple yet insightful case for illustrating the benefits of multi-source training and enable analytically tractable simulations under our theoretical assumptions.
+
+Parametric distribution family. Suppose each of the $K$ conditional distributions is a $d$ -dimensional standard Gaussian distribution, i.e.,
+
+$$
+\forall k \in [K], \quad X \mid Y = k \sim \mathcal{N}(\pmb{\mu}_{k}^{*}, \pmb{I}_{d}), \quad p^{*}(\pmb{x} \mid k) = (2\pi)^{-\frac{d}{2}} e^{-\frac{1}{2}\|\pmb{x} - \pmb{\mu}_{k}^{*}\|_{2}^{2}},
+$$
+
+with a mean vector $\pmb{\mu}_k^*$ and an identity covariance matrix $\pmb{I}_d\in \mathbb{R}^{d\times d}$ . We assume each mean vector has two parts: the first $d_{1}$ entries $\pmb{\mu}_k^*[1:d_1]$ represent the source-specific feature which is potentially different for each source, and the remaining entries $\pmb{\mu}_k^*[d_1 + 1:d]$ represent the shared feature which is identical across all sources. Corresponding to the general formulation in Section 2, we denote
+
+$$
+\phi_ {k} := \boldsymbol {\mu} _ {k} ^ {*} [ 1: d _ {1} ], \psi := \boldsymbol {\mu} _ {1} ^ {*} [ d _ {1} + 1: d ] = \dots = \boldsymbol {\mu} _ {K} ^ {*} [ d _ {1} + 1: d ],
+$$
+
+and the conditional distribution is parameterized as
+
+$$
+p _ {\phi_ {k}, \psi} (\boldsymbol {x} | k) = (2 \pi) ^ {- \frac {d}{2}} e ^ {- \frac {1}{2} \| \boldsymbol {x} - (\phi_ {k}, \psi) \| _ {2} ^ {2}}. \tag {5}
+$$
+
+Statistical guarantee of the average TV error. In this formulation, the conditional MLE in $\mathcal{P}_{X|Y}^{\mathrm{multi}}$ under multi-source training leads to the following result.
+
+Theorem 4.1 (Average TV error bound for conditional Gaussian estimation under multi-source training, proof in Appendix B.2). Let $\hat{p}_{X|Y}^{\mathrm{multi}}$ be the likelihood maximizer defined in Equation (2) given $\mathcal{P}_{X|Y}^{\mathrm{multi}}$ with conditional distributions as in Equation (5). Suppose $\Phi = [-B,B]^{d_1}$ , $\Psi = [-B,B]^{d - d_1}$ with constant $B > 0$ , and $\phi_k^* \in \Phi$ , $\psi^* \in \Psi$ . Then, for any $0 < \delta \leq 1/2$ , it holds with probability at least $1 - \delta$ that
+
+$$
+\mathcal {R} _ {\overline {{\mathrm {T V}}}} (\hat {p} _ {X | Y} ^ {\mathrm {m u l t i}}) = \tilde {\mathcal {O}} \left(\sqrt {\frac {(K - 1) d _ {1} + d}{n}}\right).
+$$
+
+In contrast, single-source training results in an error of $\mathcal{R}_{\overline{\mathrm{TV}}}(\hat{p}_{X|Y}^{\mathrm{single}}) = \tilde{\mathcal{O}}\Big(\sqrt{Kd / n}\Big)$ , with a formal result provided in Theorem B.2. The advantage of multi-source learning can be quantified by the ratio of error bounds:
+
+$$
+\sqrt {\frac {(K - 1) d _ {1} + d}{K d}} = \sqrt {1 - \frac {K - 1}{K} \frac {d - d _ {1}}{d}}.
+$$
+
+Letting $\beta_{\mathrm{sim}} \coloneqq \frac{d - d_1}{d}$, the proportion of shared mean dimensions relative to the total dimensionality, this quantity can be interpreted as the similarity among source distributions. As we will see in subsequent instantiations, this general form of the ratio $\sqrt{1 - \frac{K - 1}{K} \beta_{\mathrm{sim}}}$ applies across Section 4.2 and Section 4.3, with $\beta_{\mathrm{sim}}$ instantiated in a case-specific manner. Further discussion of the notion of $\beta_{\mathrm{sim}}$ and the measurement of distribution similarity in practice can be found in Appendix F.
+
+Notably, this ratio decreases with both the number of sources $K$ and source similarity $\beta_{\mathrm{sim}}$ . As $K$ increases from 1 to $\infty$ , the ratio decreases from 1 to $\sqrt{1 - \beta_{\mathrm{sim}}}$ , and as $\beta_{\mathrm{sim}}$ increases from 0 (completely dissimilar distributions) to 1 (completely identical distributions), it decreases from 1 to $\sqrt{1 / K}$ , reflecting a transition from no asymptotic gain to a constant improvement. This highlights that the number of sources and distribution similarity enhance the benefits of multi-source training. Empirical results in Section 5.2 confirm this trend.
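+The limiting behavior described above can be checked directly; a minimal sketch of the ratio (function name is ours):
+
+```python
+import math
+
+def error_ratio(K: int, beta_sim: float) -> float:
+    """Ratio of the multi-source to single-source error bounds:
+    sqrt(1 - (K-1)/K * beta_sim)."""
+    assert K >= 1 and 0.0 <= beta_sim <= 1.0
+    return math.sqrt(1 - (K - 1) / K * beta_sim)
+
+# The ratio shrinks as K grows (toward sqrt(1 - beta_sim)) and as
+# beta_sim grows (toward sqrt(1/K)).
+for K in (1, 3, 10, 100):
+    print(K, round(error_ratio(K, beta_sim=0.5), 3))
+```
+
+For instance, with fully identical sources ($\beta_{\mathrm{sim}} = 1$) the ratio is exactly $\sqrt{1/K}$, and with $K = 1$ it is always 1, matching the two limits in the text.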
+
+# 4.2. Conditional ARMs on discrete distributions
+
+For deep generative models, our formulations are based on multilayer perceptrons (MLPs), a fundamental network component, with potential extensions to Transformers and convolutional networks via existing literature (Lin & Zhang, 2019; Ledent et al., 2021; Shen et al., 2021; Hu et al., 2024a; Trauger & Tewari, 2024; Jiao et al., 2024). We formally define MLPs mainly following the notation of Oko et al. (2023).
+
+Definition 4.2 (Class of MLPs). A class of MLPs $\mathcal{F}(L,W,S,B)$ with depth $L$, width $W$, sparsity $S$, norm $B$, and element-wise ReLU activation $\mathrm{ReLU}(x) = 0 \vee x$ is defined as $\mathcal{F}(L,W,S,B) := \{\pmb{f}(\pmb{x}) = (\pmb{A}^{(L)}\mathrm{ReLU}(\cdot) + \pmb{b}^{(L)}) \circ \dots \circ (\pmb{A}^{(1)}\pmb{x} + \pmb{b}^{(1)}) : \{(\pmb{A}^{(l)}, \pmb{b}^{(l)})\}_{l=1}^{L} \in \mathcal{W}(L,W,S,B)\}$, where the parameter space $\mathcal{W}(L,W,S,B)$ is defined by $\mathcal{W}(L,W,S,B) := \{\{(\pmb{A}^{(l)},\pmb{b}^{(l)})\}_{l=1}^{L}: \pmb{A}^{(l)} \in \mathbb{R}^{W_l \times W_{l-1}}, \pmb{b}^{(l)} \in \mathbb{R}^{W_l}, \max_l W_l \leq W, \sum_{l=1}^{L} (\|\pmb{A}^{(l)}\|_0 + \|\pmb{b}^{(l)}\|_0) \leq S, \max_l \|\pmb{A}^{(l)}\|_\infty \vee \|\pmb{b}^{(l)}\|_\infty \leq B\}$.
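+A minimal NumPy sketch of Definition 4.2, checking membership in $\mathcal{W}(L,W,S,B)$ and evaluating the corresponding ReLU network (function and variable names are ours, for illustration only):
+
+```python
+import numpy as np
+
+def in_parameter_space(params, L, W, S, B):
+    """Check membership in W(L, W, S, B): depth L, max width W,
+    total number of nonzero entries <= S, sup-norm of entries <= B."""
+    if len(params) != L:
+        return False
+    widths_ok = all(A.shape[0] <= W and b.shape[0] == A.shape[0]
+                    for A, b in params)
+    sparsity = sum(np.count_nonzero(A) + np.count_nonzero(b)
+                   for A, b in params)
+    norm = max(max(np.abs(A).max(), np.abs(b).max()) for A, b in params)
+    return widths_ok and sparsity <= S and norm <= B
+
+def mlp_forward(params, x):
+    """f(x) = (A^(L) ReLU(.) + b^(L)) o ... o (A^(1) x + b^(1))."""
+    h = x
+    for l, (A, b) in enumerate(params):
+        h = A @ h + b
+        if l < len(params) - 1:   # ReLU between layers, none at the output
+            h = np.maximum(h, 0.0)
+    return h
+
+rng = np.random.default_rng(0)
+params = [(rng.uniform(-1, 1, (4, 3)), rng.uniform(-1, 1, 4)),
+          (rng.uniform(-1, 1, (1, 4)), rng.uniform(-1, 1, 1))]
+print(in_parameter_space(params, L=2, W=4, S=100, B=1.0))  # True
+print(mlp_forward(params, np.ones(3)).shape)               # (1,)
+```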
+
+We now present the formulation for ARMs, which can be viewed as an extension of Uria et al. (2016).
+
+Probabilistic modeling with autoregression. Consider a common data scenario for natural language, where $X$ represents a $D$-length text in $[M]^D$. Each dimension of $X$ is an integer token following an $M$-categorical distribution, with $M$ being the vocabulary size. Adopting the autoregressive approach to probabilistic modeling, the conditional distribution $p_{X|Y}(\boldsymbol{x}|y)$ is factorized using the chain rule as:
+
+$$
+\begin{array}{l} p (\boldsymbol {x} | y) = p \left(x _ {1} | y\right) \dots p \left(x _ {D} | \boldsymbol {x} _ {< D}, y\right) \\ = p \left(x _ {1}; \boldsymbol {\rho} (y)\right) \dots p \left(x _ {D}; \boldsymbol {\rho} (\boldsymbol {x} _ {< D}, y)\right). \\ \end{array}
+$$
+
+We omit the subscripts for notational simplicity. Here, for any $d \in [D]$, $\boldsymbol{\rho}(\boldsymbol{x}_{<d}, y)$ denotes the parameter of the $M$-categorical distribution of the $d$-th token given the preceding tokens and the condition. Analogously to the other instantiations, the condition embedding in $[0,1]^{d_{\mathrm{e}}}$ serves as the source-specific parameter $\phi_k$, while the autoregressive MLP weights serve as the shared parameter $\psi$.
+
+Statistical guarantee of the average TV error. In this formulation, the conditional MLE in $\mathcal{P}_{X|Y}^{\mathrm{multi}}$ under multi-source training leads to the following result.
+
+Theorem 4.3 (Average TV error bound for ARMs under multi-source training, proof in Appendix C). Let $\hat{p}_{X|Y}^{\mathrm{multi}}$ be the likelihood maximizer defined in Equation (2) given $\mathcal{P}_{X|Y}^{\mathrm{multi}}$ with the conditional distributions parameterized as above. Suppose $\Phi = [0,1]^{d_{\mathrm{e}}}$ and $\Psi = \mathcal{W}(L,W,S,B)$ with constant $B > 0$ and $\phi_k^*\in \Phi$, $\psi^{*}\in \Psi$. Then, for any $0 < \delta \leq 1 / 2$, it holds with probability at least $1 - \delta$ that
+
+$$
+\mathcal {R} _ {\overline {{\mathrm {T V}}}} (\hat {p} _ {X | Y} ^ {\mathrm {m u l t i}}) = \tilde {\mathcal {O}} \left(\sqrt {\frac {L \left(S + D + (D + M + K) d _ {\mathrm {e}}\right)}{n}}\right).
+$$
+
+In contrast, single-source training results in an error of $\mathcal{R}_{\overline{\mathrm{TV}}}(\hat{p}_{X|Y}^{\mathrm{single}}) = \tilde{\mathcal{O}}\Big(\sqrt{KL(S + D + (D + M + 1)d_{\mathrm{e}}) / n}\Big)$, with a formal result provided in Theorem C.8. The advantage of multi-source learning is quantified by the ratio of error bounds: $\sqrt{\frac{L(S + D + (D + M + K)d_{\mathrm{e}})}{KL(S + D + (D + M + 1)d_{\mathrm{e}})}} = \sqrt{1 - \frac{K - 1}{K}\beta_{\mathrm{sim}}}$, where the term $\beta_{\mathrm{sim}} := \frac{S + D + (D + M)d_{\mathrm{e}}}{S + D + (D + M + 1)d_{\mathrm{e}}}\in [0,1]$ quantifies source distribution similarity based on the proportion of shared parameters. This ratio follows the same pattern discussed in Section 4.1, where the number of sources $K$ and the distribution similarity $\beta_{\mathrm{sim}}$ are the two key factors improving the advantage of multi-source training.
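+As a concrete illustration of the chain-rule factorization used in this subsection, here is a minimal sketch of the autoregressive log-likelihood, with a hypothetical uniform predictor standing in for the network's $\boldsymbol{\rho}$ (names are ours):
+
+```python
+import numpy as np
+
+def ar_log_likelihood(x, y, rho):
+    """log p(x|y) = sum_d log p(x_d ; rho(x_{<d}, y)) for an
+    autoregressive factorization over D integer tokens (0-indexed here)."""
+    total = 0.0
+    for d in range(len(x)):
+        probs = rho(x[:d], y)          # categorical params for token d
+        total += np.log(probs[x[d]])
+    return total
+
+M = 5  # vocabulary size
+
+def uniform_rho(prefix, y):
+    # Hypothetical predictor: ignores context, returns uniform probabilities.
+    return np.full(M, 1.0 / M)
+
+x = np.array([0, 3, 1, 4])
+print(ar_log_likelihood(x, y=0, rho=uniform_rho))  # = 4 * log(1/5)
+```
+
+In the actual model, `rho` would be the embedding-plus-MLP network, with the embedding row selected by the source label $y$.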
+
+# 4.3. Conditional EBMs on continuous distributions
+
+In this section, we study distribution estimation for conditional EBMs, a flexible probabilistic modeling approach on continuous data. Our formulation follows Du & Mordatch (2019) with simplified neural network architecture.
+
+Probabilistic modeling with energy function. Consider a common scenario with a natural image $X$ flattened and normalized to $[0,1]^D$. The conditional distribution $p_{X|Y}(\boldsymbol{x}|y)$ is defined through an energy function $u(\boldsymbol{x}|y)$ as:
+
+$$
+p (\pmb {x} | y) = \frac {e ^ {- u (\pmb {x} | y)}}{\int_ {\mathcal {X}} e ^ {- u (\pmb {s} | y)} d \pmb {s}}.
+$$
+
+Distribution estimation via neural network. We suppose the energy function is estimated by $u_{\theta}(\pmb{x}|y)$ using a neural network parameterized by $\theta$, which comprises a condition embedding layer and an energy-estimating MLP.
+
+Specifically, we first look up $y$ in a condition embedding matrix $\pmb{V} \in [0,1]^{K \times d_{\mathrm{e}}}$ and concatenate the embedding with $\pmb{x}$:
+
+$$
+\boldsymbol {e} _ {\boldsymbol {V}} (\boldsymbol {x}, y) = \left[ \begin{array}{c} \boldsymbol {x} \\ \boldsymbol {V} [ y,: ] \end{array} \right] \in [ 0, 1 ] ^ {D + d _ {\mathrm {e}}}.
+$$
+
+Then we use an MLP $f_{\omega} \in \mathcal{F}(L, W, S, B)$ with $W_0 = D + d_{\mathrm{e}}$ and $W_L = 1$ to estimate the energy as
+
+$$
+u _ {\theta} (\boldsymbol {x} | y) = f _ {\omega} \big (\boldsymbol {e} _ {\boldsymbol {V}} (\boldsymbol {x}, y) \big),
+$$
+
+where $\theta \coloneqq \{\pmb {V},\omega \}$ . This leads to a conditional distribution as
+
+$$
+p _ {\theta} (\boldsymbol {x} | y) = \frac {e ^ {- u _ {\theta} (\boldsymbol {x} | y)}}{\int_ {\mathcal {X}} e ^ {- u _ {\theta} (\boldsymbol {s} | y)} d \boldsymbol {s}}. \tag {7}
+$$
+
+When training such an EBM, each row of $V$ is only optimized on data with the corresponding condition, while $\omega$ is optimized on data with all conditions. That means $V[k,:]$ serves as the source-specific parameter and $\omega$ is shared across all sources. Corresponding to the general formulation in Section 2, we denote
+
+$$
+\phi_{k} := \pmb{V}[k,:], \quad \text{and} \quad \psi := \omega.
+$$
+
+Statistical guarantee of the average TV error. In this formulation, the conditional MLE in $\mathcal{P}_{X|Y}^{\mathrm{multi}}$ under multi-source training leads to the following result.
+
+Theorem 4.4 (Average TV error bound for EBMs under multi-source training, proof in Appendix D.3). Let $\hat{p}_{X|Y}^{\mathrm{multi}}$ be the likelihood maximizer defined in Equation (2) given $\mathcal{P}_{X|Y}^{\mathrm{multi}}$ with conditional distributions in Equation (7). Suppose $\Phi = [0,1]^{d_{\mathrm{e}}}$ and $\Psi = \mathcal{W}(L,W,S,B)$ with constants $L, W, S, B > 0$, and assume $\phi_k^* \in \Phi$, $\psi^* \in \Psi$. Then, for any $0 < \delta \leq 1/2$, it holds with probability at least $1 - \delta$ that
+
+$$
+\mathcal {R} _ {\overline {{\mathrm {T V}}}} (\hat {p} _ {X | Y} ^ {\mathrm {m u l t i}}) = \tilde {\mathcal {O}} \left(\sqrt {\frac {L (S + K d _ {\mathrm {e}})}{n}}\right).
+$$
+
+In contrast, single-source training results in an error of $\mathcal{R}_{\overline{\mathrm{TV}}}(\hat{p}_{X|Y}^{\mathrm{single}}) = \tilde{\mathcal{O}}\left(\sqrt{LK(S + d_{\mathrm{e}}) / n}\right)$ with a formal proof provided in Theorem D.4. The advantage of multi-source learning is quantified by the ratio of error bounds: $\sqrt{\frac{L(S + Kd_{\mathrm{e}})}{LK(S + d_{\mathrm{e}})}} = \sqrt{1 - \frac{K - 1}{K}\beta_{\mathrm{sim}}}$ , where $\beta_{\mathrm{sim}} := \frac{S}{S + d_{\mathrm{e}}} \in [0,1]$ quantifies source distribution similarity based on the proportion of shared parameters. Similar to the former two cases, the number of sources $K$ and the distribution similarity $\beta_{\mathrm{sim}}$ improve the advantage of multi-source training.
+
+# 5. Experiments
+
+In this section, we conduct simulations and real-world experiments to verify our theoretical results in Sections 3 and 4.
+
+# 5.1. Simulations on conditional Gaussian estimation
+
+In this part, we aim to examine the tightness of the derived upper bounds $\mathcal{R}_{\overline{\mathrm{TV}}}(\hat{p}_{X|Y}^{\mathrm{multi}}) = \tilde{\mathcal{O}} (\sqrt{\frac{(K - 1)d_1 + d}{n}})$ in Theorem 4.1 and $\mathcal{R}_{\overline{\mathrm{TV}}}(\hat{p}_{X|Y}^{\mathrm{single}}) = \tilde{\mathcal{O}} (\sqrt{\frac{Kd}{n}})$ in Theorem B.2.
+
+The number of sources $K$, sample size $n$, and the similarity factor $\beta_{\mathrm{sim}} \in [0,1]$ are the key parameters. In all of our simulations, we fix the data dimension $d = 10$ and $p_Y^*(k) = 1 / K$ for all $k \in [K]$, and set the source-specific dimension $d_1 = d - \lfloor \beta_{\mathrm{sim}} d \rfloor$. We set the source-specific feature as $\phi_k = k \mathbf{1} \in \mathbb{R}^{d_1}$ and the shared feature as $\psi = \mathbf{0} \in \mathbb{R}^{d - d_1}$. Under the setting of Section 4.1, the conditional MLE has analytical solutions: under multi-source training, we have
+
+$$
+\hat {\phi} _ {k} = \sum_ {y _ {i} = k} \boldsymbol {x} _ {i} [ 1: d _ {1} ] / n _ {k}, \hat {\psi} = \sum_ {i = 1} ^ {n} \boldsymbol {x} _ {i} [ d _ {1} + 1: d ] / n,
+$$
+
+and under single-source training, we have
+
+$$
+\hat {\phi} _ {k} = \sum_ {y _ {i} = k} \boldsymbol {x} _ {i} [ 1: d _ {1} ] / n _ {k}, \hat {\psi} _ {k} = \sum_ {y _ {i} = k} \boldsymbol {x} _ {i} [ d _ {1} + 1: d ] / n _ {k}.
+$$
+
+For evaluation, we randomly sample $n^{\mathrm{test}} = 500$ data points according to the true joint distribution $p_{X,Y}^{*}$ . Empirically, we approximate the true TV distance by using the Monte Carlo method based on the test set, which can be written formally as
+
+$$
+\mathcal{R}_{\overline{\mathrm{TV}}}(\hat{p}_{X|Y}) \approx \frac{1}{2 n^{\mathrm{test}}} \sum_{i=1}^{n^{\mathrm{test}}} \left| \frac{\hat{p}_{X|Y}(\boldsymbol{x}_{i}|y_{i})}{p_{X|Y}^{*}(\boldsymbol{x}_{i}|y_{i})} - 1 \right| =: \mathcal{R}_{\overline{\mathrm{TV}}}^{\mathrm{em}}(\hat{p}_{X|Y}).
+$$
+
+To reduce randomness, we average each simulation over 5 random runs and report the mean results.
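+The whole simulation pipeline, combining the closed-form MLEs with the Monte Carlo TV estimate above, can be sketched as follows (variable names and defaults are ours, matching the settings in the text):
+
+```python
+import numpy as np
+
+def simulate(n=500, K=5, d=10, beta_sim=0.5, n_test=500, seed=0):
+    rng = np.random.default_rng(seed)
+    d1 = d - int(np.floor(beta_sim * d))   # source-specific dimensions
+    # True means: phi_k = k * 1 on the first d1 dims, psi = 0 on the rest.
+    mu = np.zeros((K, d))
+    for k in range(K):
+        mu[k, :d1] = k + 1
+    y = rng.integers(0, K, n)              # p_Y*(k) = 1/K
+    x = mu[y] + rng.standard_normal((n, d))
+
+    # Closed-form conditional MLEs.
+    mu_single = np.stack([x[y == k].mean(axis=0) for k in range(K)])
+    mu_multi = mu_single.copy()
+    mu_multi[:, d1:] = x[:, d1:].mean(axis=0)   # pooled shared-part estimate
+
+    # Monte Carlo estimate of the average TV error on a fresh test set.
+    y_t = rng.integers(0, K, n_test)
+    x_t = mu[y_t] + rng.standard_normal((n_test, d))
+    def tv(mu_hat):
+        log_ratio = (-0.5 * ((x_t - mu_hat[y_t]) ** 2).sum(1)
+                     + 0.5 * ((x_t - mu[y_t]) ** 2).sum(1))
+        return 0.5 * np.abs(np.exp(log_ratio) - 1).mean()
+    return tv(mu_multi), tv(mu_single)
+
+err_multi, err_single = simulate()
+print(err_multi, err_single)  # multi-source error is typically the smaller one
+```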
+
+
+Figure 1. Simulation results for conditional Gaussian estimation. Empirical values (solid lines) correspond to the left vertical axis, while theoretical values (dashed lines) correspond to the right. Single-source results are shown in orange, and multi-source results in green. The matched orders of empirical errors and theoretical upper bounds support the validity of results in Section 4.1.
+
+Order of the average TV error about $K$. We range the number of sources $K$ over [1, 3, 5, 10, 15] with fixed sample size $n = 500$ and similarity factor $\beta_{\mathrm{sim}} = 0.5$. We display the empirical average TV error for each $K$ in Figure 1(a), with $\mathcal{R}_{\overline{\mathrm{TV}}}^{\mathrm{em}}(\hat{p}_{X|Y}^{\mathrm{multi}})$ colored in green and $\mathcal{R}_{\overline{\mathrm{TV}}}^{\mathrm{em}}(\hat{p}_{X|Y}^{\mathrm{single}})$ colored in orange. Ignoring the influence of constants, the plot shows good alignment between the empirical errors (solid lines) and the theoretical upper bounds (dashed lines), both scaling as $\tilde{\mathcal{O}} (\sqrt{K})$.
+
+Order of the average TV error about $n$. We range the sample size $n$ over [100, 300, 500, 1000, 5000] with a fixed number of sources $K = 5$ and similarity factor $\beta_{\mathrm{sim}} = 0.5$. We display the empirical error for each $n$ in Figure 1(b), with $\mathcal{R}_{\overline{\mathrm{TV}}}^{\mathrm{em}}(\hat{p}_{X|Y}^{\mathrm{multi}})$ colored in green and $\mathcal{R}_{\overline{\mathrm{TV}}}^{\mathrm{em}}(\hat{p}_{X|Y}^{\mathrm{single}})$ colored in orange. Ignoring the influence of constants, the orders of the empirical errors in $n$ match the theoretical upper bounds well, both scaling as $\tilde{\mathcal{O}}(1/\sqrt{n})$.
+
+Order of the average TV error about $\beta_{\mathrm{sim}}$. We range the similarity factor $\beta_{\mathrm{sim}}$ over $[0, 0.3, 0.5, 0.7, 1]$ with fixed sample size $n = 500$ and number of data sources $K = 5$. We display the empirical average TV error for each $\beta_{\mathrm{sim}}$ in Figure 1(c) to observe how the similarity factor impacts the advantage of multi-source training. Concretely, as predicted by the theoretical bounds, varying $\beta_{\mathrm{sim}}$ does not influence the performance of single-source training but decreases the error of multi-source training at the order of $\tilde{\mathcal{O}}(\sqrt{d_1}) = \tilde{\mathcal{O}}(\sqrt{1 - \beta_{\mathrm{sim}}})$. The results show that the theoretical bounds predict the empirical performance well.
+
+To sum up, our simulations verify the validity of the theoretical bounds in Section 4.1. Moreover, in all experiments, $\mathcal{R}_{\overline{\mathrm{TV}}}^{\mathrm{em}}(\hat{p}_{X|Y}^{\mathrm{multi}})$ is consistently smaller than $\mathcal{R}_{\overline{\mathrm{TV}}}^{\mathrm{em}}(\hat{p}_{X|Y}^{\mathrm{single}})$, supporting our results in Section 3.
+
+# 5.2. Real-world experiments on diffusion models
+
+In this section, we conduct experiments on diffusion models to validate our theoretical findings in real-world scenarios from two aspects: (1) We empirically compare multi-source and single-source training of conditional diffusion models and evaluate their performance to validate the guaranteed advantage of multi-source training over single-source training proved in Section 3. (2) We investigate how this advantage varies with the key factors (the number of sources and distribution similarity) discussed in Section 4.
+
+Experimental settings. We train class-conditional diffusion models following EDM2 (Karras et al., 2024) at $256 \times 256$ resolution on selected classes from the ILSVRC2012 training set (Russakovsky et al., 2015), which is a subset of ImageNet (Deng et al., 2009) containing 1.28M natural images from 1000 classes, each annotated with an integer class label from 1 to 1000. In our experiments, we treat each class as a distinct data source. To control similarity among data sources, we manually design two levels of distribution similarity based on the semantic hierarchy of ImageNet (Deng et al., 2010; Bostock, 2018), as shown in Figure 2 in Appendix E.1 along with other experimental details.
+
+For each controlled experiment comparing multi-source and single-source training, we fix $K$ target classes within one similarity level Sim and train the models on a dataset $S$ consisting of $N$ examples per class. Under multi-source training, we train a single conditional diffusion model for all $K$ classes jointly. Under single-source training, we train $K$ separate conditional diffusion models, one for each class. Please refer to Section 2 for the formal formulation of these two strategies. We set each factor with two possible values: the number of classes $K$ in 3 or 10, distribution similarity Sim in 1 or 2, and the sample size per class $N$ in 500 or 1000. This results in a total of 8 sets of experiments comparing multi-source and single-source training.
+
+We evaluate model performance using the average Fréchet Inception Distance (Heusel et al., 2017) (FID, a widely used metric for image generation quality) across all conditions to assess the overall conditional generation performance. Specifically, for multi-source training, we compute the FID for each class and take the average over all $K$ classes. For single-source training, we compute the FID for each of the $K$ separately trained models on their respective classes and calculate the average. The relative advantage of multi-source training is measured by $\frac{\text{Avg. FID (Single)} - \text{Avg. FID (Multi)}}{\text{Avg. FID (Single)}}$, as displayed in Figure 3. Results are displayed in Table 1.
+
+Table 1. Average FID for single-source and multi-source training. Under different numbers of classes $K$, similarity levels Sim, and per-class sample sizes $N$, multi-source training generally achieves a lower average FID than single-source training, which is consistent with our theoretical guarantees derived in Section 3.
+
+| N | Sim | K | Avg. FID ↓ (Single) | Avg. FID ↓ (Multi) |
+| --- | --- | --- | --- | --- |
+| 500 | 1 | 3 | 30.03 | 29.94 |
+| 500 | 1 | 10 | 30.18 | 29.28 |
+| 500 | 2 | 3 | 32.69 | 30.69 |
+| 500 | 2 | 10 | 30.54 | 28.75 |
+| 1000 | 1 | 3 | 28.01 | 26.41 |
+| 1000 | 1 | 10 | 27.49 | 25.84 |
+| 1000 | 2 | 3 | 30.58 | 29.35 |
+| 1000 | 2 | 10 | 29.01 | 27.81 |
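+The relative-advantage computation can be reproduced directly from the averages reported in Table 1 (the dictionary below just transcribes those values):
+
+```python
+# (N, Sim, K) -> (Avg. FID single, Avg. FID multi), values from Table 1.
+table1 = {
+    (500, 1, 3): (30.03, 29.94), (500, 1, 10): (30.18, 29.28),
+    (500, 2, 3): (32.69, 30.69), (500, 2, 10): (30.54, 28.75),
+    (1000, 1, 3): (28.01, 26.41), (1000, 1, 10): (27.49, 25.84),
+    (1000, 2, 3): (30.58, 29.35), (1000, 2, 10): (29.01, 27.81),
+}
+
+def relative_advantage(single: float, multi: float) -> float:
+    """(Avg. FID (Single) - Avg. FID (Multi)) / Avg. FID (Single)."""
+    return (single - multi) / single
+
+for key, (s, m) in table1.items():
+    print(key, f"{100 * relative_advantage(s, m):.2f}%")
+# The advantage is positive in every setting; its dependence on K and
+# Sim can then be compared across settings, as in Figure 3.
+```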
+
+Experimental results. In the following, we interpret the results sequentially from the viewpoint of our theoretical findings.
+
+From Table 1, we observe that under different numbers of classes $K$, similarity levels Sim, and per-class sample sizes $N$, multi-source training generally achieves a lower average FID than single-source training, which is consistent with our theoretical guarantees derived in Section 3.
+
+From Figure 3, we observe that for any fixed similarity level Sim and per-class sample size $N$, the relative advantage of multi-source training with a larger $K$ (the green bars) is larger than that with a smaller $K$ (the nearby orange bars). Additionally, for any fixed $K$ and $N$, the relative advantage of multi-source training with a larger distribution similarity is larger than that with a smaller distribution similarity (as shown through the dashed lines). These results support our theoretical insights in Section 4 that the number of sources and the similarity among source distributions improve the advantage of multi-source training.
+
+# 6. Other related works
+
+Distribution estimation guarantee for MLE. Classical approaches investigate distribution estimation for MLE in Hellinger distance based on the bracketing number and the uniform law of large numbers from empirical process theory (Wong & Shen, 1995; Geer, 2000), which yields high-probability bounds of a similar order to Theorem 3.2. Ge et al. (2024) extend the analysis to derive a TV error bound under the realizable assumption. We further adapt their techniques to conditional generative modeling by introducing the upper bracketing number to quantify the complexity of the conditional distribution space in Definition 3.1 and modify the proofs to handle conditional MLE in Appendix A.1.
+
+Figure 2. Similarity level in ILSVRC2012 training set.
+
+Figure 3. Relative advantage of multi-source training. For any fixed similarity level Sim and per-class sample size $N$, a larger $K$ yields a greater FID improvement than a smaller $K$. For any fixed $K$ and $N$, higher distribution similarity leads to greater FID improvement (illustrated by dashed lines). These results support the theoretical findings in Section 4.
+
+Theory on multi-task learning. Multi-task learning is a well-studied topic in supervised learning (Caruana, 1997; Baxter, 2000). It typically benefits from similarities across tasks, sharing some commonality with multi-source training. However, theoretical analyses in supervised learning often assume a bounded objective (Ben-David & Borbely, 2008; Maurer et al., 2016; Tripuraneni et al., 2020), whereas our MLE analysis imposes no such restriction.
+
+Advanced theory on generative models. Among generative models based on (approximate) MLE (LeCun et al., 2006; Uria et al., 2016; Ho et al., 2020), diffusion models have been extensively studied theoretically in terms of score approximation, sampling behavior, distribution estimation, and scalability (Oko et al., 2023; Chen et al., 2023a;b; Fu et al., 2024; Zheng et al., 2025). This paper focuses on distribution estimation for general conditional generative modeling. Incorporating existing literature could be a promising direction for future work.
+
+# 7. Conclusion and discussion
+
+This paper provides the first attempt to rigorously analyze conditional generative modeling on multiple data sources from a distribution estimation perspective. In particular, we establish a general estimation error bound in average TV distance under the realizable assumption, based on the bracketing number of the conditional distribution space. When source distributions share parametric similarities, multi-source training has a provable advantage over single-source training by reducing the bracketing number. We further instantiate the general theory on three specific models to obtain concrete error bounds. To achieve this, novel bracketing number bounds for ARMs and EBMs are established. The results show that the number of data sources and the similarity between source distributions enhance the benefits of multi-source training. Simulations and real-world experiments support our theoretical findings.
+
+Our theoretical setting differs from practice in some aspects, e.g., language models have no explicit conditions, and image generation models are commonly conditioned on descriptive text involving multiple conditions. However, our abstraction provides a simplified framework that preserves the core properties of multi-source training and isolates how individual source distributions are learned. Moreover, recent studies suggest adding source labels, such as domain names, at the start of training text for language models can enhance performance (Allen-Zhu & Li, 2024b; Gao et al., 2025), which may become a standard practice in the future.
+
+# Acknowledgments
+
+This work was supported by NSF of China (Nos. 92470118, 62206159); Beijing Nova Program (20220484044); Beijing Natural Science Foundation (L247030); Major Innovation & Planning Interdisciplinary Platform for the "Double-First Class" Initiative, Renmin University of China; the Fundamental Research Funds for the Central Universities, and the Research Funds of Renmin University of China (22XNKJ13); the Natural Science Foundation of Shandong Province (ZR2022QF117), the Fundamental Research Funds of Shandong University. G. Wu was also sponsored by the TaiShan Scholars Program (NO.tsqn202306051).
+
+# Impact Statement
+
+This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
+
+# References
+
+Allen-Zhu, Z. and Li, Y. Physics of language models: Part 3.1, knowledge storage and extraction. In *Forty-first International Conference on Machine Learning*, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net, 2024a.
+Allen-Zhu, Z. and Li, Y. Physics of language models: Part 3.3, knowledge capacity scaling laws. CoRR, abs/2404.05405, 2024b.
+Bai, J., Bai, S., Chu, Y., Cui, Z., Dang, K., Deng, X., Fan, Y., Ge, W., Han, Y., Huang, F., et al. Qwen technical report. arXiv preprint arXiv:2309.16609, 2023.
+Balcan, M.-F., Khodak, M., and Talwalkar, A. Provable guarantees for gradient-based meta-learning. In International Conference on Machine Learning, pp. 424-433. PMLR, 2019.
+Bartlett, P. L., Foster, D. J., and Telgarsky, M. Spectrally-normalized margin bounds for neural networks. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pp. 6240-6249, 2017.
+Bartlett, P. L., Harvey, N., Liaw, C., and Mehrabian, A. Nearly-tight vc-dimension and pseudodimension bounds for piecewise linear neural networks. J. Mach. Learn. Res., 20:63:1-63:17, 2019.
+Baxter, J. A model of inductive bias learning. *J. Artif. Intell. Res.*, 12:149-198, 2000.
+Ben-David, S. and Borbely, R. S. A notion of task relatedness yielding provable multiple-task learning guarantees. Mach. Learn., 73(3):273-287, 2008.
+Bostock, M. Imagenet hierarchy, 2018. URL https://observablehq.com/@mbostock/imagenet-hierarchy.
+Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I., and Amodei, D. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.
+Caruana, R. Multitask learning. Mach. Learn., 28(1):41-75, 1997.
+
+Chen, J., Yu, J., Ge, C., Yao, L., Xie, E., Wang, Z., Kwok, J. T., Luo, P., Lu, H., and Li, Z. Pixart-α: Fast training of diffusion transformer for photorealistic text-to-image synthesis. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net, 2024.
+Chen, M., Huang, K., Zhao, T., and Wang, M. Score approximation, estimation and distribution recovery of diffusion models on low-dimensional data. In Krause, A., Brunskill, E., Cho, K., Engelhardt, B., Sabato, S., and Scarlett, J. (eds.), International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pp. 4672-4712. PMLR, 2023a.
+Chen, S., Chewi, S., Li, J., Li, Y., Salim, A., and Zhang, A. Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023b.
+Chidambaram, M., Wang, X., Hu, Y., Wu, C., and Ge, R. Towards understanding the data dependency of mixup-style training. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022.
+Dandi, Y., Stephan, L., Krzakala, F., Loureiro, B., and Zdeborova, L. Universality laws for gaussian mixtures in generalized linear models. Advances in Neural Information Processing Systems, 36, 2024.
+Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09, 2009.
+Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. Imagenet summary and statistics, 2010.
+Du, Y. and Mordatch, I. Implicit generation and modeling with energy based models. In Wallach, H. M., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E. B., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pp. 3603-3613, 2019.
+Esser, P., Kulal, S., Blattmann, A., Entezari, R., Müller, J., Saini, H., Levi, Y., Lorenz, D., Sauer, A., Boesel, F., Podell, D., Dockhorn, T., English, Z., and Rombach, R. Scaling rectified flow transformers for high-resolution
+
+image synthesis. In *Forty-first International Conference on Machine Learning*, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net, 2024.
+Fu, H., Yang, Z., Wang, M., and Chen, M. Unveil conditional diffusion models with classifier-free guidance: A sharp statistical theory. CoRR, abs/2403.11968, 2024.
+Gao, T., Wettig, A., He, L., Dong, Y., Malladi, S., and Chen, D. Metadata conditioning accelerates language model pre-training. arXiv preprint arXiv:2501.01956, 2025.
+Ge, J., Tang, S., Fan, J., and Jin, C. On the provable advantage of unsupervised pretraining. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net, 2024.
+Geer, S. A. Empirical Processes in M-estimation, volume 6. Cambridge university press, 2000.
+He, H., Yan, H., and Tan, V. Y. Information-theoretic characterization of the generalization error for iterative semi-supervised learning. Journal of Machine Learning Research, 23(287):1-52, 2022.
+Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., and Hochreiter, S. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems, 30, 2017.
+Ho, J., Jain, A., and Abbeel, P. Denoising diffusion probabilistic models. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., and Lin, H. (eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.
+Hu, J. Y.-C., Wu, W., Lee, Y.-C., Huang, Y.-C., Chen, M., and Liu, H. On statistical rates of conditional diffusion transformers: Approximation, estimation and minimax optimality. arXiv preprint arXiv:2411.17522, 2024a.
+Hu, S., Tu, Y., Han, X., He, C., Cui, G., Long, X., Zheng, Z., Fang, Y., Huang, Y., Zhao, W., Zhang, X., Thai, Z. L., Zhang, K., Wang, C., Yao, Y., Zhao, C., Zhou, J., Cai, J., Zhai, Z., Ding, N., Jia, C., Zeng, G., Li, D., Liu, Z., and Sun, M. Minicpm: Unveiling the potential of small language models with scalable training strategies. CoRR, abs/2404.06395, 2024b.
+Jiao, Y., Lai, Y., Wang, Y., and Yan, B. Convergence analysis of flow matching in latent space with transformers. arXiv preprint arXiv:2404.02538, 2024.
+Jose, S. T. and Simeone, O. An information-theoretic analysis of the impact of task similarity on meta-learning. In 2021 IEEE International Symposium on Information Theory (ISIT), pp. 1534-1539. IEEE, 2021.
+
+Karras, T., Aittala, M., Lehtinen, J., Hellsten, J., Aila, T., and Laine, S. Analyzing and improving the training dynamics of diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 24174-24184, 2024.
+Krizhevsky, A., Hinton, G., et al. Learning multiple layers of features from tiny images. 2009.
+LeCun, Y., Chopra, S., Hadsell, R., Ranzato, M., Huang, F., et al. A tutorial on energy-based learning. Predicting structured data, 1(0), 2006.
+Ledent, A., Mustafa, W., Lei, Y., and Kloft, M. Norm-based generalisation bounds for deep multi-class convolutional neural networks. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pp. 8279-8287. AAAI Press, 2021.
+Lin, S. and Zhang, J. Generalization bounds for convolutional neural networks. arXiv preprint arXiv:1910.01487, 2019.
+Liu, A., Feng, B., Xue, B., Wang, B., Wu, B., Lu, C., Zhao, C., Deng, C., Zhang, C., Ruan, C., et al. Deepseek-v3 technical report. arXiv preprint arXiv:2412.19437, 2024.
+Maurer, A., Pontil, M., and Romera-Paredes, B. The benefit of multitask representation learning. J. Mach. Learn. Res., 17:81:1-81:32, 2016.
+Montanari, A. and Saeed, B. N. Universality of empirical risk minimization. In Conference on Learning Theory, pp. 4310-4312. PMLR, 2022.
+Nguyen, T., Ilharco, G., Wortsman, M., Oh, S., and Schmidt, L. Quality not quantity: On the interaction between dataset design and robustness of CLIP. In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022, 2022.
+Oko, K., Akiyama, S., and Suzuki, T. Diffusion models are minimax optimal distribution estimators. In International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pp. 26517-26582. PMLR, 2023.
+OpenAI. Video generation models as world simulators. https://openai.com/index/video-generation-models-as-world-simulators/, 2024.
+
+Ou, W. and Bölcskei, H. Covering numbers for deep ReLU networks with applications to function approximation and nonparametric regression. CoRR, abs/2410.06378, 2024.
+Peebles, W. and Xie, S. Scalable diffusion models with transformers. In IEEE/CVF International Conference on Computer Vision, ICCV 2023, Paris, France, October 1-6, 2023, pp. 4172-4182. IEEE, 2023.
+Pires, T., Schlinger, E., and Garrette, D. How multilingual is multilingual BERT? In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pp. 4996-5001. Association for Computational Linguistics, 2019.
+Rombach, R., Blattmann, A., Lorenz, D., Esser, P., and Ommer, B. High-resolution image synthesis with latent diffusion models. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, pp. 10674-10685. IEEE, 2022.
+Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A. C., and Fei-Fei, L. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211-252, 2015. doi: 10.1007/s11263-015-0816-y.
+Salimans, T. and Ho, J. Should EBMs model the energy or the score? In Energy Based Models Workshop - ICLR 2021, 2021.
+Shen, G. Exploring the complexity of deep neural networks through functional equivalence. In *Forty-first International Conference on Machine Learning*, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net, 2024.
+Shen, G., Jiao, Y., Lin, Y., and Huang, J. Non-asymptotic excess risk bounds for classification with deep convolutional neural networks. arXiv preprint arXiv:2105.00292, 2021.
+Song, Y. and Ermon, S. Generative modeling by estimating gradients of the data distribution. In Wallach, H. M., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E. B., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pp. 11895-11907, 2019.
+Song, Y., Sohl-Dickstein, J., Kingma, D. P., Kumar, A., Ermon, S., and Poole, B. Score-based generative modeling through stochastic differential equations. In 9th International Conference on Learning Representations, ICLR
+
+2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021.
+Suzuki, T. Adaptivity of deep relu network for learning in besov and mixed smooth besov spaces: optimal rate and curse of dimensionality. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019.
+Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., Rodriguez, A., Joulin, A., Grave, E., and Lample, G. Llama: Open and efficient foundation language models. CoRR, abs/2302.13971, 2023.
+Trauger, J. and Tewari, A. Sequence length independent norm-based generalization bounds for transformers. In International Conference on Artificial Intelligence and Statistics, pp. 1405-1413. PMLR, 2024.
+Tripuraneni, N., Jordan, M. I., and Jin, C. On the theory of transfer learning: The importance of task diversity. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., and Lin, H. (eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.
+Uria, B., Côté, M., Gregor, K., Murray, I., and Larochelle, H. Neural autoregressive distribution estimation. J. Mach. Learn. Res., 17:205:1-205:37, 2016.
+Wang, K. and Thrampoulidis, C. Binary classification of gaussian mixtures: Abundance of support vectors, benign overfitting, and regularization. SIAM Journal on Mathematics of Data Science, 4(1):260-284, 2022.
+Wellner, J. A. Empirical processes in statistics: Methods, examples, further problems, 2002.
+Wong, W. H. and Shen, X. Probability inequalities for likelihood ratios and convergence rates of sieve mles. The Annals of Statistics, pp. 339-362, 1995.
+Xie, S. M., Pham, H., Dong, X., Du, N., Liu, H., Lu, Y., Liang, P. S., Le, Q. V., Ma, T., and Yu, A. W. Doremi: Optimizing data mixtures speeds up language model pretraining. Advances in Neural Information Processing Systems, 36:69798-69818, 2023.
+Zhao, M., Bao, F., Li, C., and Zhu, J. Egsde: Unpaired image-to-image translation via energy-guided stochastic differential equations. Advances in Neural Information Processing Systems, 35:3609-3623, 2022.
+Zheng, C., Wu, G., Bao, F., Cao, Y., Li, C., and Zhu, J. Revisiting discriminative vs. generative classifiers: Theory and implications. In International conference on machine learning, pp. 42420-42477. PMLR, 2023a.
+
+Zheng, C., Wu, G., and Li, C. Toward understanding generative data augmentation. Advances in neural information processing systems, 36:54046-54060, 2023b.
+Zheng, C., Huang, W., Wang, R., Wu, G., Zhu, J., and Li, C. On mesa-optimization in autoregressively trained transformers: Emergence and capability. Advances in Neural Information Processing Systems, 37:49081-49129, 2024.
+Zheng, C., Zhang, X., Wang, R., Huang, W., Tian, Z., Huang, W., Zhu, J., and Li, C. Scaling diffusion transformers efficiently via $\mu$P. arXiv preprint arXiv:2505.15270, 2025.
+
+# A. Proofs for Section 3
+
+# A.1. Proof of Theorem 3.2
+
+Proof of Theorem 3.2. This theorem applies to both discrete and continuous random variables; we use integration notation throughout the proof for generality. We first present an elementary inequality (Equation (10)) that serves as a toolkit for the subsequent derivations. We then decompose the TV distance and derive its complexity-based upper bound (Equation (12)) using this inequality. Finally, after specifying certain constants in this upper bound, a clearer order w.r.t. $n$ is revealed (Equation (13)).
+
+Intermediate result induced by the union bound. Let $\epsilon > 0$ be a real number. Let $\mathcal{B}$ be an $\epsilon$-upper bracket of $\mathcal{P}_{X|Y}$ w.r.t. $L^1(\mathcal{X})$ of minimum cardinality, i.e., $|\mathcal{B}| = \mathcal{N}_{[\,]}\left(\epsilon; \mathcal{P}_{X|Y}, L^1(\mathcal{X})\right)$.
+
+According to the minimum-cardinality requirement, every $p' \in \mathcal{B}$ satisfies $p'(\boldsymbol{x}, y) \geq 0$ on $\mathcal{X} \times \mathcal{Y}$. First, consider $\prod_{i=1}^{n} \sqrt{\frac{p'(\boldsymbol{x}_i, y_i)}{p_{X|Y}^*(\boldsymbol{x}_i|y_i)}}$ as a random variable of the sample $S$; here $p_{X,Y}^*(\boldsymbol{x}_i, y_i) > 0$ almost surely since $(\boldsymbol{x}_i, y_i)$ is sampled from $p_{X,Y}^*$, and thus $p_{X|Y}^*(\boldsymbol{x}_i|y_i) \neq 0$. By Markov's inequality, for any $0 < \delta' < 1$,
+
+$$
+\Pr_ {S} \left(\prod_ {i = 1} ^ {n} \sqrt {\frac {p ^ {\prime} (\boldsymbol {x} _ {i} , y _ {i})}{p _ {X \mid Y} ^ {*} (\boldsymbol {x} _ {i} \mid y _ {i})}} \geq \frac {1}{\delta^ {\prime}} \mathbb {E} _ {S} \left[ \prod_ {i = 1} ^ {n} \sqrt {\frac {p ^ {\prime} (\boldsymbol {x} _ {i} , y _ {i})}{p _ {X \mid Y} ^ {*} (\boldsymbol {x} _ {i} \mid y _ {i})}} \right]\right) \leq \delta^ {\prime}. \tag {8}
+$$
+
+Applying the union bound on all $p' \in \mathcal{B}$ , we further have:
+
+$$
+\begin{array}{l} \Pr_{S}\left(\forall p' \in \mathcal{B},\ \prod_{i=1}^{n} \sqrt{\frac{p'(\boldsymbol{x}_{i}, y_{i})}{p_{X|Y}^{*}(\boldsymbol{x}_{i}|y_{i})}} < \frac{1}{\delta'}\, \mathbb{E}_{S}\left[\prod_{i=1}^{n} \sqrt{\frac{p'(\boldsymbol{x}_{i}, y_{i})}{p_{X|Y}^{*}(\boldsymbol{x}_{i}|y_{i})}}\right]\right) \\
+= 1 - \Pr_{S}\left(\exists p' \in \mathcal{B},\ \prod_{i=1}^{n} \sqrt{\frac{p'(\boldsymbol{x}_{i}, y_{i})}{p_{X|Y}^{*}(\boldsymbol{x}_{i}|y_{i})}} \geq \frac{1}{\delta'}\, \mathbb{E}_{S}\left[\prod_{i=1}^{n} \sqrt{\frac{p'(\boldsymbol{x}_{i}, y_{i})}{p_{X|Y}^{*}(\boldsymbol{x}_{i}|y_{i})}}\right]\right) \\
+= 1 - \Pr_{S}\left(\bigcup_{p' \in \mathcal{B}} \left\{\prod_{i=1}^{n} \sqrt{\frac{p'(\boldsymbol{x}_{i}, y_{i})}{p_{X|Y}^{*}(\boldsymbol{x}_{i}|y_{i})}} \geq \frac{1}{\delta'}\, \mathbb{E}_{S}\left[\prod_{i=1}^{n} \sqrt{\frac{p'(\boldsymbol{x}_{i}, y_{i})}{p_{X|Y}^{*}(\boldsymbol{x}_{i}|y_{i})}}\right]\right\}\right) \\
+\geq 1 - \sum_{p' \in \mathcal{B}} \Pr_{S}\left(\prod_{i=1}^{n} \sqrt{\frac{p'(\boldsymbol{x}_{i}, y_{i})}{p_{X|Y}^{*}(\boldsymbol{x}_{i}|y_{i})}} \geq \frac{1}{\delta'}\, \mathbb{E}_{S}\left[\prod_{i=1}^{n} \sqrt{\frac{p'(\boldsymbol{x}_{i}, y_{i})}{p_{X|Y}^{*}(\boldsymbol{x}_{i}|y_{i})}}\right]\right) \quad (\text{by the union bound}) \\
+\geq 1 - \mathcal{N}_{[\,]}\left(\epsilon; \mathcal{P}_{X|Y}, L^{1}(\mathcal{X})\right)\delta'. \quad (\text{by Equation (8)}) \\
+\end{array}
+$$
+
+Denoting $\delta \coloneqq \mathcal{N}_{[\,]}\left(\epsilon; \mathcal{P}_{X|Y}, L^{1}(\mathcal{X})\right)\delta'$, it holds with probability at least $1 - \delta$ that, for all $p' \in \mathcal{B}$,
+
+$$
+\prod_ {i = 1} ^ {n} \sqrt {\frac {p ^ {\prime} (\boldsymbol {x} _ {i} , y _ {i})}{p _ {X | Y} ^ {*} (\boldsymbol {x} _ {i} | y _ {i})}} < \frac {\mathcal {N} _ {[ ]} \left(\epsilon ; \mathcal {P} _ {X | Y} , L ^ {1} (\mathcal {X})\right)}{\delta} \mathbb {E} _ {S} \left[ \prod_ {i = 1} ^ {n} \sqrt {\frac {p ^ {\prime} (\boldsymbol {x} _ {i} , y _ {i})}{p _ {X | Y} ^ {*} (\boldsymbol {x} _ {i} | y _ {i})}} \right].
+$$
+
+Taking logarithms on both sides, we have
+
+$$
+\begin{array}{l} \frac{1}{2} \sum_{i=1}^{n} \log \frac{p'(\boldsymbol{x}_{i}, y_{i})}{p_{X|Y}^{*}(\boldsymbol{x}_{i}|y_{i})} \leq \log \mathbb{E}_{S}\left[\prod_{i=1}^{n} \sqrt{\frac{p'(\boldsymbol{x}_{i}, y_{i})}{p_{X|Y}^{*}(\boldsymbol{x}_{i}|y_{i})}}\right] + \log \frac{\mathcal{N}_{[\,]}\left(\epsilon; \mathcal{P}_{X|Y}, L^{1}(\mathcal{X})\right)}{\delta} \\
+= \log \prod_{i=1}^{n} \mathbb{E}_{(\boldsymbol{x}_{i}, y_{i}) \sim p_{X,Y}^{*}}\left[\sqrt{\frac{p'(\boldsymbol{x}_{i}, y_{i})}{p_{X|Y}^{*}(\boldsymbol{x}_{i}|y_{i})}}\right] + \log \frac{\mathcal{N}_{[\,]}\left(\epsilon; \mathcal{P}_{X|Y}, L^{1}(\mathcal{X})\right)}{\delta} \quad \left(\{(\boldsymbol{x}_{i}, y_{i})\}_{i=1}^{n}\ \text{are i.i.d. samples from}\ p_{X,Y}^{*}\right) \\
+= n \log \mathbb{E}_{X,Y}\left[\sqrt{\frac{p'(\boldsymbol{x}, y)}{p_{X|Y}^{*}(\boldsymbol{x}|y)}}\right] + \log \frac{\mathcal{N}_{[\,]}\left(\epsilon; \mathcal{P}_{X|Y}, L^{1}(\mathcal{X})\right)}{\delta} \\
+= n \log \mathbb{E}_{Y}\left[\mathbb{E}_{X|Y}\left[\sqrt{\frac{p'(\boldsymbol{x}, y)}{p_{X|Y}^{*}(\boldsymbol{x}|y)}}\right]\right] + \log \frac{\mathcal{N}_{[\,]}\left(\epsilon; \mathcal{P}_{X|Y}, L^{1}(\mathcal{X})\right)}{\delta} \\
+= n \log \mathbb{E}_{Y}\left[\int_{\mathcal{X}} p_{X|Y}^{*}(\boldsymbol{x}|y) \sqrt{\frac{p'(\boldsymbol{x}, y)}{p_{X|Y}^{*}(\boldsymbol{x}|y)}}\, d\boldsymbol{x}\right] + \log \frac{\mathcal{N}_{[\,]}\left(\epsilon; \mathcal{P}_{X|Y}, L^{1}(\mathcal{X})\right)}{\delta} \\
+= n \log \mathbb{E}_{Y}\left[\int_{\mathcal{X}} \sqrt{p'(\boldsymbol{x}, y)\, p_{X|Y}^{*}(\boldsymbol{x}|y)}\, d\boldsymbol{x}\right] + \log \frac{\mathcal{N}_{[\,]}\left(\epsilon; \mathcal{P}_{X|Y}, L^{1}(\mathcal{X})\right)}{\delta}. \\
+\end{array}
+$$
+
+As $\log x\leq x - 1$ for all $x > 0$ , the inequality can be further transformed into
+
+$$
+\frac {1}{2} \sum_ {i = 1} ^ {n} \log \frac {p ^ {\prime} (\boldsymbol {x} _ {i} , y _ {i})}{p _ {X | Y} ^ {*} (\boldsymbol {x} _ {i} | y _ {i})} \leq n \left(\mathbb {E} _ {Y} \left[ \int_ {\mathcal {X}} \sqrt {p ^ {\prime} (\boldsymbol {x} , y) p _ {X | Y} ^ {*} (\boldsymbol {x} | y)} d \boldsymbol {x} \right] - 1\right) + \log \frac {\mathcal {N} _ {[ ]} \left(\epsilon ; \mathcal {P} _ {X | Y} , L ^ {1} (\mathcal {X})\right)}{\delta}. \tag {9}
+$$
+
+Elementary inequality for MLE estimators. Since the true conditional distribution $p_{X|Y}^{*}$ lies in $\mathcal{P}_{X|Y}$, the likelihood maximizer $\hat{p}_{X|Y} \in \mathcal{P}_{X|Y}$ satisfies $L_{S}(\hat{p}_{X|Y}) = \prod_{i=1}^{n} \hat{p}_{X|Y}(\boldsymbol{x}_i|y_i) \geq L_{S}(p_{X|Y}^{*}) = \prod_{i=1}^{n} p_{X|Y}^{*}(\boldsymbol{x}_i|y_i)$, and thus $\frac{1}{2} \sum_{i=1}^{n} \log \frac{\hat{p}_{X|Y}(\boldsymbol{x}_i|y_i)}{p_{X|Y}^*(\boldsymbol{x}_i|y_i)} = \frac{1}{2} \log \frac{\prod_{i=1}^{n} \hat{p}_{X|Y}(\boldsymbol{x}_i|y_i)}{\prod_{i=1}^{n} p_{X|Y}^*(\boldsymbol{x}_i|y_i)} \geq \frac{1}{2} \log 1 = 0$. By the definition of the upper bracketing number, there exists some $\hat{p}' \in \mathcal{B}$ such that, for any $y \in \mathcal{Y}$: (i) $\forall \boldsymbol{x} \in \mathcal{X},\ \hat{p}'(\boldsymbol{x}, y) \geq \hat{p}_{X|Y}(\boldsymbol{x}|y)$, and (ii) $\| \hat{p}'(\cdot, y) - \hat{p}_{X|Y}(\cdot|y) \|_{L^1(\mathcal{X})} = \int_{\mathcal{X}} |\hat{p}'(\boldsymbol{x}, y) - \hat{p}_{X|Y}(\boldsymbol{x}|y)|\, d\boldsymbol{x} \leq \epsilon$. Applying (i), we have:
+
+$$
+\frac {1}{2} \sum_ {i = 1} ^ {n} \log \frac {\hat {p} ^ {\prime} (\boldsymbol {x} _ {i} , y _ {i})}{p _ {X | Y} ^ {*} (\boldsymbol {x} _ {i} | y _ {i})} \geq \frac {1}{2} \sum_ {i = 1} ^ {n} \log \frac {\hat {p} _ {X | Y} (\boldsymbol {x} _ {i} | y _ {i})}{p _ {X | Y} ^ {*} (\boldsymbol {x} _ {i} | y _ {i})} \geq 0.
+$$
+
+Combining this with Equation (9) and rearranging terms, it holds with probability at least $1 - \delta$ that
+
+$$
+1 - \mathbb{E}_{Y}\left[\int_{\mathcal{X}} \sqrt{\hat{p}'(\boldsymbol{x}, y)\, p_{X|Y}^{*}(\boldsymbol{x}|y)}\, d\boldsymbol{x}\right] \leq \frac{1}{n} \log \frac{\mathcal{N}_{[\,]}\left(\epsilon; \mathcal{P}_{X|Y}, L^{1}(\mathcal{X})\right)}{\delta}. \tag{10}
+$$
+
+This serves as an elementary toolkit for deriving the subsequent upper bounds.
+
+Decomposing the square of the TV distance. Recalling that $\mathrm{TV}(\hat{p}_{X|Y}, p_{X|Y}^*) = \frac{1}{2} \int_{\mathcal{X}} |\hat{p}_{X|Y}(\boldsymbol{x}|y) - p_{X|Y}^*(\boldsymbol{x}|y)|\, d\boldsymbol{x}$, we decompose its square and then bound each term in turn. First, we use the above $\hat{p}'(\boldsymbol{x}, y)$ as an intermediate term to decompose the square of $2\,\mathrm{TV}(\hat{p}_{X|Y}, p_{X|Y}^*)$ into parts that can be effectively upper bounded:
+
+$$
+\begin{array}{l} \left(2 \mathrm {T V} \left(\hat {p} _ {X | Y}, p _ {X | Y} ^ {*}\right)\right) ^ {2} = \left(\int_ {\mathcal {X}} \left| \hat {p} _ {X | Y} (\boldsymbol {x} | y) - p _ {X | Y} ^ {*} (\boldsymbol {x} | y) \right| d \boldsymbol {x}\right) ^ {2} \\ = \underbrace {\left(\int_ {\mathcal {X}} | \hat {p} _ {X \mid Y} (\boldsymbol {x} \mid y) - p _ {X \mid Y} ^ {*} (\boldsymbol {x} \mid y) | d \boldsymbol {x}\right) ^ {2} - \left(\int_ {\mathcal {X}} | \hat {p} ^ {\prime} (\boldsymbol {x} , y) - p _ {X \mid Y} ^ {*} (\boldsymbol {x} \mid y) | d \boldsymbol {x}\right) ^ {2}} _ {\text {(I)}} + \underbrace {\left(\int_ {\mathcal {X}} | \hat {p} ^ {\prime} (\boldsymbol {x} , y) - p _ {X \mid Y} ^ {*} (\boldsymbol {x} \mid y) | d \boldsymbol {x}\right) ^ {2}} _ {\text {(I I)}}. \\ \end{array}
+$$
+
+For (I), we have
+
+$$
+\begin{array}{l} \left(\int_ {\mathcal {X}} | \hat {p} _ {X \mid Y} (\boldsymbol {x} \mid y) - p _ {X \mid Y} ^ {*} (\boldsymbol {x} \mid y) | d \boldsymbol {x}\right) ^ {2} - \left(\int_ {\mathcal {X}} | \hat {p} ^ {\prime} (\boldsymbol {x}, y) - p _ {X \mid Y} ^ {*} (\boldsymbol {x} \mid y) | d \boldsymbol {x}\right) ^ {2} \\ = \left(\int_ {\mathcal {X}} | \hat {p} _ {X \mid Y} (\boldsymbol {x} \mid y) - p _ {X \mid Y} ^ {*} (\boldsymbol {x} \mid y) | + | \hat {p} ^ {\prime} (\boldsymbol {x}, y) - p _ {X \mid Y} ^ {*} (\boldsymbol {x} \mid y) | d \boldsymbol {x}\right) \left(\int_ {\mathcal {X}} | \hat {p} _ {X \mid Y} (\boldsymbol {x} \mid y) - p _ {X \mid Y} ^ {*} (\boldsymbol {x} \mid y) | - | \hat {p} ^ {\prime} (\boldsymbol {x}, y) - p _ {X \mid Y} ^ {*} (\boldsymbol {x} \mid y) | d \boldsymbol {x}\right) \\ \leq \left(\int_ {\mathcal {X}} | \hat {p} _ {X \mid Y} (\boldsymbol {x} \mid y) | + | p _ {X \mid Y} ^ {*} (\boldsymbol {x} \mid y) | + | \hat {p} ^ {\prime} (\boldsymbol {x}, y) - \hat {p} _ {X \mid Y} (\boldsymbol {x} \mid y) | + | \hat {p} _ {X \mid Y} (\boldsymbol {x} \mid y) | + | p _ {X \mid Y} ^ {*} (\boldsymbol {x} \mid y) | d \boldsymbol {x}\right) \left(\int_ {\mathcal {X}} | \hat {p} _ {X \mid Y} (\boldsymbol {x} \mid y) - \hat {p} ^ {\prime} (\boldsymbol {x}, y) | d \boldsymbol {x}\right) \\ \leq (\epsilon + 4) \epsilon . \\ \end{array}
+$$
+
+The first inequality holds by the triangle inequality $|a + b| \leq |a| + |b|$ and the reverse triangle inequality $||a| - |b|| \leq |a - b|$. The second inequality holds by the normalization of conditional densities ($\int_{\mathcal{X}} |\hat{p}_{X|Y}(\boldsymbol{x}|y)|\, d\boldsymbol{x}$ and $\int_{\mathcal{X}} |p_{X|Y}^*(\boldsymbol{x}|y)|\, d\boldsymbol{x}$ both equal 1) and the defining property of the $\epsilon$-upper bracket ($\int_{\mathcal{X}} |\hat{p}'(\boldsymbol{x}, y) - \hat{p}_{X|Y}(\boldsymbol{x}|y)|\, d\boldsymbol{x} \leq \epsilon$).
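+
+The bound on (I) can be sanity-checked on a toy one-dimensional instance. The Gaussian pair and the inflated bracket $\hat{p}' = (1+\epsilon)\hat{p}_{X|Y}$ below are illustrative assumptions (any $\hat{p}'$ with $\hat{p}' \geq \hat{p}_{X|Y}$ and $L^1$ distance at most $\epsilon$ would do):
+
+```python
+import numpy as np
+
+def gauss(x, mu):
+    # Standard-normal density shifted to mean mu.
+    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2 * np.pi)
+
+x = np.linspace(-12, 12, 200_001)
+dx = x[1] - x[0]
+eps = 0.1
+p_hat = gauss(x, 0.0)          # estimator \hat p_{X|Y}(.|y), toy choice
+p_star = gauss(x, 0.5)         # truth p*_{X|Y}(.|y), toy choice
+p_prime = (1 + eps) * p_hat    # bracket: p' >= p_hat and ||p' - p_hat||_1 = eps
+
+I_hat = np.sum(np.abs(p_hat - p_star)) * dx      # Riemann sum of the L1 norm
+I_prime = np.sum(np.abs(p_prime - p_star)) * dx
+# Term (I) is bounded by (eps + 4) * eps.
+assert I_hat**2 - I_prime**2 <= (eps + 4) * eps + 1e-9
+```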
+
+For (II), we have
+
+$$
+\begin{array}{l} \left(\int_{\mathcal{X}} \left|\hat{p}'(\boldsymbol{x}, y) - p_{X|Y}^{*}(\boldsymbol{x}|y)\right| d\boldsymbol{x}\right)^{2} \\
+\leq \left(\int_{\mathcal{X}} \left(\sqrt{\hat{p}'(\boldsymbol{x}, y)} + \sqrt{p_{X|Y}^{*}(\boldsymbol{x}|y)}\right)^{2} d\boldsymbol{x}\right) \left(\int_{\mathcal{X}} \left(\sqrt{\hat{p}'(\boldsymbol{x}, y)} - \sqrt{p_{X|Y}^{*}(\boldsymbol{x}|y)}\right)^{2} d\boldsymbol{x}\right) \quad (\text{by the Cauchy--Schwarz inequality}) \\
+\leq \left(\int_{\mathcal{X}} 2\left(\hat{p}'(\boldsymbol{x}, y) + p_{X|Y}^{*}(\boldsymbol{x}|y)\right) d\boldsymbol{x}\right) \left(\int_{\mathcal{X}} \hat{p}'(\boldsymbol{x}, y) + p_{X|Y}^{*}(\boldsymbol{x}|y) - 2\sqrt{\hat{p}'(\boldsymbol{x}, y)\, p_{X|Y}^{*}(\boldsymbol{x}|y)}\, d\boldsymbol{x}\right) \quad (\text{by}\ (a+b)^{2} \leq 2(a^{2} + b^{2})) \\
+= 2\left(\int_{\mathcal{X}} \hat{p}'(\boldsymbol{x}, y) - \hat{p}_{X|Y}(\boldsymbol{x}|y) + \hat{p}_{X|Y}(\boldsymbol{x}|y) + p_{X|Y}^{*}(\boldsymbol{x}|y)\, d\boldsymbol{x}\right) \\
+\quad \cdot \left(\int_{\mathcal{X}} \hat{p}'(\boldsymbol{x}, y) - \hat{p}_{X|Y}(\boldsymbol{x}|y) + \hat{p}_{X|Y}(\boldsymbol{x}|y) + p_{X|Y}^{*}(\boldsymbol{x}|y) - 2\sqrt{\hat{p}'(\boldsymbol{x}, y)\, p_{X|Y}^{*}(\boldsymbol{x}|y)}\, d\boldsymbol{x}\right) \\
+\leq 2(\epsilon + 2)\left(\epsilon + 2 - 2\int_{\mathcal{X}} \sqrt{\hat{p}'(\boldsymbol{x}, y)\, p_{X|Y}^{*}(\boldsymbol{x}|y)}\, d\boldsymbol{x}\right). \\
+\qquad (\text{by}\ \int_{\mathcal{X}} \hat{p}_{X|Y}(\boldsymbol{x}|y)\, d\boldsymbol{x} = \int_{\mathcal{X}} p_{X|Y}^{*}(\boldsymbol{x}|y)\, d\boldsymbol{x} = 1\ \text{and}\ \int_{\mathcal{X}} \left|\hat{p}'(\boldsymbol{x}, y) - \hat{p}_{X|Y}(\boldsymbol{x}|y)\right| d\boldsymbol{x} \leq \epsilon) \\
+\end{array}
+$$
+
+Putting together (I) and (II), we get:
+
+$$
+\begin{array}{l} \mathrm{TV}(\hat{p}_{X|Y}, p_{X|Y}^{*}) = \frac{1}{2} \sqrt{\left(\int_{\mathcal{X}} \left|\hat{p}_{X|Y}(\boldsymbol{x}|y) - p_{X|Y}^{*}(\boldsymbol{x}|y)\right| d\boldsymbol{x}\right)^{2}} \\
+\leq \frac{1}{2} \sqrt{(\epsilon + 4)\epsilon + 2(\epsilon + 2)\left(\epsilon + 2 - 2\int_{\mathcal{X}} \sqrt{\hat{p}'(\boldsymbol{x}, y)\, p_{X|Y}^{*}(\boldsymbol{x}|y)}\, d\boldsymbol{x}\right)}. \tag{11} \\
+\end{array}
+
+Bounding the average TV error. Based on the above results, we upper bound the average TV error (defined in Equation (4)) of $\hat{p}_{X|Y}$ as follows:
+
+$$
+\begin{array}{l} \mathcal{R}_{\overline{\mathrm{TV}}}(\hat{p}_{X|Y}) = \mathbb{E}_{Y}\left[\mathrm{TV}(\hat{p}_{X|Y}, p_{X|Y}^{*})\right] \\
+\leq \frac{1}{2}\, \mathbb{E}_{Y}\left[\sqrt{(\epsilon + 4)\epsilon + 2(\epsilon + 2)\left(\epsilon + 2 - 2\int_{\mathcal{X}} \sqrt{\hat{p}'(\boldsymbol{x}, y)\, p_{X|Y}^{*}(\boldsymbol{x}|y)}\, d\boldsymbol{x}\right)}\right] \quad (\text{by Equation (11)}) \\
+\leq \frac{1}{2} \sqrt{\mathbb{E}_{Y}\left[(\epsilon + 4)\epsilon + 2(\epsilon + 2)\left(\epsilon + 2 - 2\int_{\mathcal{X}} \sqrt{\hat{p}'(\boldsymbol{x}, y)\, p_{X|Y}^{*}(\boldsymbol{x}|y)}\, d\boldsymbol{x}\right)\right]} \\
+\end{array}
+
+(by concavity of $f(x) = \sqrt{x}$ and Jensen's inequality)
+
+$$
+= \frac {1}{2} \sqrt {(\epsilon + 4) \epsilon + 2 (\epsilon + 2) \left(\epsilon + 2 \left(1 - \mathbb {E} _ {Y} \left[ \int_ {\mathcal {X}} \sqrt {\hat {p} ^ {\prime} (\boldsymbol {x} , y) p _ {X | Y} ^ {*} (\boldsymbol {x} | y)} d \boldsymbol {x} \right]\right)\right)}.
+$$
+
+(by the linearity of expectation)
+
+Recalling the elementary inequality derived in Equation (10), it holds with probability at least $1 - \delta$ that
+
+$$
+\mathcal{R}_{\overline{\mathrm{TV}}}\left(\hat{p}_{X|Y}\right) \leq \frac{1}{2} \sqrt{(\epsilon + 4)\epsilon + 2(\epsilon + 2)\left(\epsilon + \frac{2}{n} \log \frac{\mathcal{N}_{[\,]}\left(\epsilon; \mathcal{P}_{X|Y}, L^{1}(\mathcal{X})\right)}{\delta}\right)}. \tag{12}
+$$
+
+Recalling that $0 < \delta \leq \frac{1}{2}$ and that, for non-empty $\mathcal{P}_{X|Y}$, $\mathcal{N}_{[\,]}\left(\epsilon; \mathcal{P}_{X|Y}, L^{1}(\mathcal{X})\right) \geq 1$, we have $\mathcal{N}_{[\,]}\left(\epsilon; \mathcal{P}_{X|Y}, L^{1}(\mathcal{X})\right) / \delta \geq 2 \geq e^{\frac{1}{2}}$. Taking $\epsilon = 1/n$ in Equation (12), it then holds with probability at least $1 - \delta$ that
+
+$$
+\begin{array}{l} \mathcal{R}_{\overline{\mathrm{TV}}}(\hat{p}_{X|Y}) \leq \frac{1}{2} \sqrt{\left(\frac{1}{n} + 4\right)\frac{1}{n} + 2\left(\frac{1}{n} + 2\right)\left(\frac{1}{n} + \frac{2}{n} \log \frac{\mathcal{N}_{[\,]}\left(\frac{1}{n}; \mathcal{P}_{X|Y}, L^{1}(\mathcal{X})\right)}{\delta}\right)} \\
+\leq \frac{1}{2} \sqrt{\frac{5}{n} + 6\left(\frac{1}{n} + \frac{2}{n} \log \frac{\mathcal{N}_{[\,]}\left(\frac{1}{n}; \mathcal{P}_{X|Y}, L^{1}(\mathcal{X})\right)}{\delta}\right)} \quad (\text{by}\ \tfrac{1}{n} \leq 1) \\
+\leq \frac{1}{2} \sqrt{\frac{10}{n} \log \frac{\mathcal{N}_{[\,]}\left(\frac{1}{n}; \mathcal{P}_{X|Y}, L^{1}(\mathcal{X})\right)}{\delta} + 6\left(\frac{4}{n} \log \frac{\mathcal{N}_{[\,]}\left(\frac{1}{n}; \mathcal{P}_{X|Y}, L^{1}(\mathcal{X})\right)}{\delta}\right)} \quad (\text{by}\ \tfrac{1}{n} \leq \tfrac{2}{n} \log \tfrac{\mathcal{N}_{[\,]}\left(\frac{1}{n}; \mathcal{P}_{X|Y}, L^{1}(\mathcal{X})\right)}{\delta}) \\
+= \frac{1}{2} \sqrt{\frac{34}{n} \log \frac{\mathcal{N}_{[\,]}\left(\frac{1}{n}; \mathcal{P}_{X|Y}, L^{1}(\mathcal{X})\right)}{\delta}} \leq 3 \sqrt{\frac{1}{n} \log \frac{\mathcal{N}_{[\,]}\left(\frac{1}{n}; \mathcal{P}_{X|Y}, L^{1}(\mathcal{X})\right)}{\delta}} \\
+= 3 \sqrt{\frac{1}{n}\left(\log \mathcal{N}_{[\,]}\left(\frac{1}{n}; \mathcal{P}_{X|Y}, L^{1}(\mathcal{X})\right) + \log \frac{1}{\delta}\right)}. \tag{13} \\
+\end{array}
+$$
+
+This completes the proof of the theorem.
+
+# A.2. Proof of Proposition 3.3
+
+Proof of Proposition 3.3. As defined in Section 2, $\mathcal{P}_{X|Y}^{\mathrm{multi}} \subset \mathcal{P}_{X|Y}^{\mathrm{single}}$, so every $p_{X|Y}^{\mathrm{multi}} \in \mathcal{P}_{X|Y}^{\mathrm{multi}}$ is itself an element $p_{X|Y}^{\mathrm{single}}$ of $\mathcal{P}_{X|Y}^{\mathrm{single}}$ with $p_{X|Y}^{\mathrm{single}} = p_{X|Y}^{\mathrm{multi}}$. Given any $\epsilon > 0$ and $1 \leq \mathfrak{p} \leq \infty$, let $\mathcal{B}^{\mathrm{single}}$ be an $\epsilon$-upper bracket of $\mathcal{P}_{X|Y}^{\mathrm{single}}$ w.r.t. $L^{\mathfrak{p}}(\mathcal{X})$ such that $|\mathcal{B}^{\mathrm{single}}| = \mathcal{N}_{[\,]}\left(\epsilon; \mathcal{P}_{X|Y}^{\mathrm{single}}, L^{\mathfrak{p}}(\mathcal{X})\right)$. By the definition of an $\epsilon$-upper bracket (Definition 3.1), there exists some $p' \in \mathcal{B}^{\mathrm{single}}$ such that, for any $y \in \mathcal{Y}$: $\forall \boldsymbol{x} \in \mathcal{X},\ p'(\boldsymbol{x}, y) \geq p_{X|Y}^{\mathrm{single}}(\boldsymbol{x}|y) = p_{X|Y}^{\mathrm{multi}}(\boldsymbol{x}|y)$, and $\| p'(\cdot, y) - p_{X|Y}^{\mathrm{multi}}(\cdot|y)\|_{L^{\mathfrak{p}}(\mathcal{X})} = \| p'(\cdot, y) - p_{X|Y}^{\mathrm{single}}(\cdot|y)\|_{L^{\mathfrak{p}}(\mathcal{X})} \leq \epsilon$. Therefore, $\mathcal{B}^{\mathrm{single}}$ is also an $\epsilon$-upper bracket of $\mathcal{P}_{X|Y}^{\mathrm{multi}}$ w.r.t. $L^{\mathfrak{p}}(\mathcal{X})$, and thus $\mathcal{N}_{[\,]}\left(\epsilon; \mathcal{P}_{X|Y}^{\mathrm{multi}}, L^{\mathfrak{p}}(\mathcal{X})\right) \leq |\mathcal{B}^{\mathrm{single}}| = \mathcal{N}_{[\,]}\left(\epsilon; \mathcal{P}_{X|Y}^{\mathrm{single}}, L^{\mathfrak{p}}(\mathcal{X})\right)$.
+
+# B. Proofs for Section 4.1
+
+# B.1. Bracketing number of conditional Gaussian distribution space
+
+According to Theorem 3.2, deriving the upper bound on the average TV error requires bounding the upper bracketing number of the conditional Gaussian distribution space. The following result largely follows the bracketing-number analysis of the (unconditional) Gaussian distribution space in Lemma C.5 of Ge et al. (2024), adapted to the conditional setting.
+
+Theorem B.1 (Bracketing number upper bound for the conditional Gaussian distribution space under multi-source training). Let $B$ be a constant with $0 < B < \infty$, suppose that $\Phi = [-B, B]^{d_1}$ and $\Psi = [-B, B]^{d - d_1}$, and let the conditional distributions in $\mathcal{P}_{X|Y}^{\mathrm{multi}}$ be of the form in Equation (5). Then, for any $0 < \epsilon \leq 1$, the $\epsilon$-upper bracketing number of $\mathcal{P}_{X|Y}^{\mathrm{multi}}$ w.r.t. $L^1(\mathcal{X})$ satisfies
+
+$$
+\mathcal {N} _ {[ ]} \left(\epsilon ; \mathcal {P} _ {X | Y} ^ {\mathrm {m u l t i}}, L ^ {1} (\mathcal {X})\right) \leq \left(\frac {2 (1 + d) B}{\epsilon} + 1\right) ^ {(K - 1) d _ {1} + d}.
+$$
+
+Proof. According to the assumptions, the conditional distribution space expressed by the parametric estimation model is
+
+$$
+\mathcal {P} _ {X | Y} ^ {\mathrm {m u l t i}} := \left\{p _ {X | Y} ^ {\mathrm {m u l t i}} (\boldsymbol {x} | y) = \prod_ {k = 1} ^ {K} \big (p _ {\phi_ {k}, \psi} (\boldsymbol {x} | k) \big) ^ {\mathbb {I} (y = k)} = \prod_ {k = 1} ^ {K} \Big ((2 \pi) ^ {- \frac {d}{2}} e ^ {- \frac {1}{2} \| \boldsymbol {x} - (\phi_ {k}, \psi) \| _ {2} ^ {2}} \Big) ^ {\mathbb {I} (y = k)}: \phi_ {k} \in [ - B, B ] ^ {d _ {1}}, \psi \in [ - B, B ] ^ {d - d _ {1}} \right\}.
+$$
+
+For any $p_{X|Y}^{\mathrm{multi}}(\boldsymbol{x}|y) = \prod_{k=1}^{K}\left((2\pi)^{-\frac{d}{2}}e^{-\frac{1}{2}\|\boldsymbol{x} - (\phi_k, \psi)\|_2^2}\right)^{\mathbb{I}(y=k)} \in \mathcal{P}_{X|Y}^{\mathrm{multi}}$, we first round the mean vector $(\phi_k, \psi)$ down to an $\eta$-width grid for a small constant $\eta > 0$ (the value of $\eta$ will be specified later): if $(\phi_k)_i \in [j\eta, (j+1)\eta)$ for some $j \in \mathbb{Z}$, let $(\bar{\phi}_k)_i = j\eta$ and $\bar{\phi}_k := ((\bar{\phi}_k)_1, \dots, (\bar{\phi}_k)_{d_1})$. Similarly, if $(\psi)_i \in [j\eta, (j+1)\eta)$ for some $j \in \mathbb{Z}$, let $(\bar{\psi})_i = j\eta$ and $\bar{\psi} := ((\bar{\psi})_1, \dots, (\bar{\psi})_{d-d_1})$. In this case, we have $\| (\phi_k, \psi) - (\bar{\phi}_k, \bar{\psi})\|_2^2 \leq d\eta^2$.
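+
+The grid-rounding step can be sketched in code; the dimension, $B$, and $\eta$ below are arbitrary illustrative choices:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+d, B, eta = 8, 1.0, 0.05   # toy dimension, box radius, grid width
+
+mean = rng.uniform(-B, B, size=d)        # the concatenated mean (phi_k, psi)
+mean_bar = np.floor(mean / eta) * eta    # round each coordinate down to the eta-grid
+
+# Each coordinate moves by at most eta, so the squared distance is at most d * eta^2.
+assert np.all((mean - mean_bar >= 0) & (mean - mean_bar < eta))
+assert np.sum((mean - mean_bar) ** 2) <= d * eta ** 2
+```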
+
+Let
+
+$$
+p ^ {\prime} (\pmb {x}, y) = \prod_ {k = 1} ^ {K} \Big ((2 \pi) ^ {- \frac {d}{2}} e ^ {- \frac {c _ {1}}{2} \| \pmb {x} - (\bar {\phi} _ {k}, \bar {\psi}) \| _ {2} ^ {2} + c _ {2}} \Big) ^ {\mathbb {I} (y = k)}.
+$$
+
+According to the definition of the bracketing, we want to prove that $p^{\prime}(\pmb {x},y)\geq p_{X|Y}^{\mathrm{multi}}(\pmb {x}|y)$ . By completing the square w.r.t. $\pmb{x}$ , we have
+
+$$
+\begin{array}{l} - \frac {c _ {1}}{2} \| \boldsymbol {x} - (\bar {\phi} _ {k}, \bar {\psi}) \| _ {2} ^ {2} + c _ {2} - \left(- \frac {1}{2} \| \boldsymbol {x} - (\phi_ {k}, \psi) \| _ {2} ^ {2}\right) \\ = \frac {1}{2} \left((1 - c _ {1}) \left\| \boldsymbol {x} + \frac {c _ {1} (\bar {\phi} _ {k} , \bar {\psi}) - (\phi_ {k} , \psi)}{1 - c _ {1}} \right\| _ {2} ^ {2} - \frac {c _ {1}}{1 - c _ {1}} \left\| (\bar {\phi} _ {k}, \bar {\psi}) - (\phi_ {k}, \psi) \right\| _ {2} ^ {2} + 2 c _ {2}\right). \\ \end{array}
+$$
+
+Further taking $c_{1} = 1 - \eta$ and $c_{2} = d(1 - \eta)\eta /2$ , we have
+
+$$
+\begin{array}{l} (1 - c _ {1}) \left\| \boldsymbol {x} + \frac {c _ {1} (\bar {\phi} _ {k} , \bar {\psi}) - (\phi_ {k} , \psi)}{1 - c _ {1}} \right\| _ {2} ^ {2} - \frac {c _ {1}}{1 - c _ {1}} \left\| (\bar {\phi} _ {k}, \bar {\psi}) - (\phi_ {k}, \psi) \right\| _ {2} ^ {2} + 2 c _ {2} \\ = \eta \left\| \boldsymbol {x} + \frac {c _ {1} (\bar {\phi} _ {k} , \bar {\psi}) - (\phi_ {k} , \psi)}{1 - c _ {1}} \right\| _ {2} ^ {2} - \frac {1 - \eta}{\eta} \left\| (\bar {\phi} _ {k}, \bar {\psi}) - (\phi_ {k}, \psi) \right\| _ {2} ^ {2} + 2 c _ {2} \quad (c _ {1} = 1 - \eta) \\ \geq - \frac {1 - \eta}{\eta} \left\| (\bar {\phi} _ {k}, \bar {\psi}) - (\phi_ {k}, \psi) \right\| _ {2} ^ {2} + 2 c _ {2} \quad (\eta > 0) \\ \geq - \frac {1 - \eta}{\eta} d \eta^ {2} + 2 c _ {2} \quad (\| (\phi_ {k}, \psi) - (\bar {\phi} _ {k}, \bar {\psi}) \| _ {2} ^ {2} \leq d \eta^ {2}) \\ = - d (1 - \eta) \eta + d (1 - \eta) \eta = 0. \\ \end{array}
+$$
+
+Therefore, it holds that for all $y \in \mathcal{Y}$ ,
+
+$$
+\forall \boldsymbol {x} \in \mathcal {X}: p ^ {\prime} (\boldsymbol {x}, y) \geq p _ {X | Y} ^ {\mathrm {multi}} (\boldsymbol {x} | y). \tag {14}
+$$
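+Equation (14) can be spot-checked numerically. The sketch below (dimensions, seed, and the scale of the test points are our choices) compares the log-densities, dropping the shared $(2\pi)^{-d/2}$ factor:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(1)
+d, eta = 5, 0.1
+c1, c2 = 1.0 - eta, d * (1.0 - eta) * eta / 2.0
+
+mean = rng.uniform(-1.0, 1.0, size=d)  # plays the role of (phi_k, psi)
+mean_bar = np.floor(mean / eta) * eta  # eta-grid rounding: ||mean - mean_bar||^2 <= d*eta^2
+
+def log_p(x):        # log Gaussian factor, dropping the shared (2*pi)^(-d/2) constant
+    return -0.5 * np.sum((x - mean) ** 2)
+
+def log_p_prime(x):  # log of the widened bracket factor, same constant dropped
+    return -0.5 * c1 * np.sum((x - mean_bar) ** 2) + c2
+
+# The bracket should dominate pointwise; spot-check at many random points.
+xs = rng.normal(size=(1000, d)) * 3.0
+ok = all(log_p_prime(x) >= log_p(x) for x in xs)
+print(ok)  # True
+```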
+
+Moreover, given any $0 < \epsilon \leq 1$ , we take $\eta = \frac{\epsilon}{1 + d}$ , and thus $c_{1} = 1 - \frac{\epsilon}{1 + d}$ and $c_{2} = \frac{1}{2} \left(1 - \frac{\epsilon}{1 + d}\right)\frac{\epsilon}{1 + d}$ . Since $d \in \mathbb{N}$ , we have $\eta \leq \frac{1}{2}$ and $c_{2} \leq \frac{1}{2}$ . Then, $\| p'(\cdot, y) - p_{X|Y}^{\mathrm{multi}}(\cdot | y) \|_{L^{1}(\mathcal{X})}$ can be bounded as
+
+$$
+\begin{array}{l}
+\| p'(\cdot, y) - p_{X|Y}^{\mathrm{multi}}(\cdot \mid y) \|_{L^1(\mathcal{X})} = \int_{\mathcal{X}} | p'(\boldsymbol{x}, y) - p_{X|Y}^{\mathrm{multi}}(\boldsymbol{x} \mid y) | \, d\boldsymbol{x} \\
+= \int_{\mathcal{X}} p'(\boldsymbol{x}, y) \, d\boldsymbol{x} - \int_{\mathcal{X}} p_{X|Y}^{\mathrm{multi}}(\boldsymbol{x} \mid y) \, d\boldsymbol{x} = \frac{1}{\sqrt{c_1}} e^{c_2} - 1 \qquad \left(\int_{\mathcal{X}} e^{-\frac{1}{2}\|\boldsymbol{x}\|_2^2} \, d\boldsymbol{x} = (2\pi)^{\frac{d}{2}}\right) \\
+\leq \frac{1}{\sqrt{c_1}}(1 + 2c_2) - 1 \quad \left(e^x \leq 1 + 2x \text{ for } x \in \left[0, \tfrac{1}{2}\right]\right) \\
+= \frac{1}{\sqrt{1 - \eta}}\big(1 + d(1 - \eta)\eta\big) - 1 \quad \big(c_1 = 1 - \eta \text{ and } c_2 = d(1 - \eta)\eta/2\big) \\
+\leq (1 + \eta)\big(1 + d(1 - \eta)\eta\big) - 1 \quad \left(\tfrac{1}{\sqrt{1 - x}} \leq 1 + x \text{ for } x \in \left[0, \tfrac{1}{2}\right]\right) \\
+= \eta\big(1 + d(1 - \eta^2)\big) \leq \eta(1 + d) = \epsilon \tag{15}
+\end{array}
+$$
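+The algebraic chain in Equation (15) can also be verified directly: with $\eta = \epsilon/(1+d)$, the quantity $e^{c_2}/\sqrt{c_1} - 1$ stays below $\epsilon$. A small parameter sweep (grid values are our choices):
+
+```python
+import math
+
+# Verify the chain in Equation (15): with eta = eps/(1+d), c1 = 1-eta, and
+# c2 = d*(1-eta)*eta/2, it holds that exp(c2)/sqrt(c1) - 1 <= eps.
+ok = True
+for d in [1, 2, 5, 10, 50]:
+    for eps in [0.01, 0.1, 0.5, 1.0]:
+        eta = eps / (1 + d)
+        c1, c2 = 1 - eta, d * (1 - eta) * eta / 2
+        ok = ok and (math.exp(c2) / math.sqrt(c1) - 1 <= eps + 1e-12)
+print(ok)  # True
+```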
+
+Combining Equation (14) and Equation (15), we know that for any $p_{X|Y}^{\mathrm{multi}}(\boldsymbol{x}|y) \in \mathcal{P}_{X|Y}^{\mathrm{multi}}$ and $0 < \epsilon \leq 1$ , there exists some $p'(\boldsymbol{x}, y) \in \mathcal{B}$ such that given any $y \in \mathcal{Y}$ , it holds that $\forall \boldsymbol{x} \in \mathcal{X}: p'(\boldsymbol{x}, y) \geq p_{X|Y}^{\mathrm{multi}}(\boldsymbol{x}|y)$ , and $\| p'(\cdot, y) - p_{X|Y}^{\mathrm{multi}}(\cdot|y) \|_{L^1(\mathcal{X})} \leq \epsilon$ , where
+
+$$
+\mathcal {B} := \left\{p ^ {\prime} (\boldsymbol {x}, y) = \prod_ {k = 1} ^ {K} \left(\left(2 \pi\right) ^ {- \frac {d}{2}} e ^ {- \frac {c _ {1}}{2} \| \boldsymbol {x} - \left(\bar {\phi} _ {k}, \bar {\psi}\right) \| _ {2} ^ {2} + c _ {2}}\right) ^ {\mathbb {I} (y = k)}: \left(\bar {\phi} _ {k}\right) _ {i}, \left(\bar {\psi}\right) _ {i} \in [ - B, B ] \cap \eta \mathbb {Z} \right\}
+$$
+
+Recalling the definition of the upper bracketing number in Definition 3.1, we know that $\mathcal{B}$ is an $\epsilon$ -upper bracket of $\mathcal{P}_{X|Y}^{\mathrm{multi}}$ w.r.t. $L^1(\mathcal{X})$ . Therefore,
+
+$$
+\begin{array}{l} \mathcal {N} _ {[ ]} \left(\epsilon ; \mathcal {P} _ {X | Y} ^ {\mathrm {multi}}, L ^ {1} (\mathcal {X})\right) \\ \leq | \mathcal {B} | = \left| \left\{\left\{\bar {\phi} _ {k} \right\} _ {k = 1} ^ {K}, \bar {\psi}: (\bar {\phi} _ {k}) _ {i}, (\bar {\psi}) _ {i} \in [ - B, B ] \cap \eta \mathbb {Z} \right\} \right| \\ \leq \left(\frac {2 B}{\eta} + 1\right) ^ {K d _ {1} + d - d _ {1}} \\ = \left(\frac {2 (1 + d) B}{\epsilon} + 1\right) ^ {(K - 1) d _ {1} + d}, \\ \end{array}
+$$
+
+which completes the proof.
+
+# B.2. Proof of Theorem 4.1
+
+Proof of Theorem 4.1. As $\phi_k^* \in \Phi$ , $\psi^* \in \Psi$ , and $\hat{p}_{X|Y}^{\mathrm{multi}}$ is the maximizer of likelihood $L_S(p_{X|Y})$ in $\mathcal{P}_{X|Y}^{\mathrm{multi}}$ , according to Theorem 3.2, we know that
+
+$$
+\mathcal {R} _ {\overline {{\mathrm {T V}}}} (\hat {p} _ {X | Y} ^ {\mathrm {m u l t i}}) \leq 3 \sqrt {\frac {1}{n} \left(\log \mathcal {N} _ {[ ]} \left(\frac {1}{n} ; \mathcal {P} _ {X | Y} ^ {\mathrm {m u l t i}} , L ^ {1} (\mathcal {X})\right) + \log \frac {1}{\delta}\right)}.
+$$
+
+According to Theorem B.1, it holds that
+
+$$
+\mathcal {N} _ {[ ]} \left(\frac {1}{n}; \mathcal {P} _ {X | Y} ^ {\mathrm {multi}}, L ^ {1} (\mathcal {X})\right) \leq \left(2 (1 + d) B n + 1\right) ^ {(K - 1) d _ {1} + d}.
+$$
+
+Therefore, we obtain the result that
+
+$$
+\mathcal {R} _ {\overline {{\mathrm {T V}}}} (\hat {p} _ {X | Y} ^ {\mathrm {multi}}) \leq 3 \sqrt {\frac {1}{n} \Big (\big ((K - 1) d _ {1} + d \big) \log \big (2 (1 + d) B n + 1 \big) + \log \frac {1}{\delta} \Big)}.
+$$
+
+Omitting constant factors and the logarithmic term in $n$ , $K$ , $d_1$ , $d$ , and $B$ , we have $\mathcal{R}_{\overline{\mathrm{TV}}}(\hat{p}_{X|Y}^{\mathrm{multi}}) = \tilde{\mathcal{O}}\left(\sqrt{\frac{(K - 1)d_1 + d}{n}}\right)$ .
+
+
+
+# B.3. Average TV error bound under single-source training
+
+Theorem B.2 (Average TV error bound for conditional Gaussian distribution space under single-source training). Let $\hat{p}_{X|Y}^{\mathrm{single}}$ be the likelihood maximizer defined in Equation (3) given $\mathcal{P}_{X|Y}^{\mathrm{single}}$ with conditional distributions as in Equation (5). Suppose $\Phi = [-B, B]^{d_1}$ , $\Psi = [-B, B]^{d - d_1}$ with constant $B > 0$ , and $\phi_k^* \in \Phi$ , $\psi^* \in \Psi$ . Then, for any $0 < \delta \leq 1/2$ , it holds with probability at least $1 - \delta$ that
+
+$$
+\mathcal {R} _ {\overline {{\mathrm {T V}}}} (\hat {p} _ {X | Y} ^ {\mathrm {s i n g l e}}) = \tilde {\mathcal {O}} \left(\sqrt {\frac {K d}{n}}\right).
+$$
+
+Proof. The proof is very similar to that in the multi-source case. According to the assumptions, the conditional distribution space expressed by the parametric estimation model is
+
+$$
+\mathcal {P} _ {X | Y} ^ {\mathrm {single}} := \left\{p _ {X | Y} ^ {\mathrm {single}} (\boldsymbol {x} | y) = \prod_ {k = 1} ^ {K} \big (p _ {\phi_ {k}, \psi_ {k}} (\boldsymbol {x} | k) \big) ^ {\mathbb {I} (y = k)} = \prod_ {k = 1} ^ {K} \Big ((2 \pi) ^ {- \frac {d}{2}} e ^ {- \frac {1}{2} \| \boldsymbol {x} - (\phi_ {k}, \psi_ {k}) \| _ {2} ^ {2}} \Big) ^ {\mathbb {I} (y = k)}: \phi_ {k} \in [ - B, B ] ^ {d _ {1}}, \psi_ {k} \in [ - B, B ] ^ {d - d _ {1}} \right\}.
+$$
+
+For any $p_{X|Y}^{\mathrm{single}}(\pmb{x}|y) = \prod_{k=1}^{K}\left((2\pi)^{-\frac{d}{2}}e^{-\frac{1}{2}\|\pmb{x} - (\phi_k,\psi_k)\|^2_2}\right)^{\mathbb{I}(y=k)} \in \mathcal{P}_{X|Y}^{\mathrm{single}}$ , we first round the mean vector $(\phi_k,\psi_k)$ onto $\eta$ -width grids with a small constant $\eta > 0$ (the value of $\eta$ will be specified later): if $(\phi_k)_i \in [j\eta, (j+1)\eta)$ for some $j \in \mathbb{Z}$ , let $(\bar{\phi}_k)_i = j\eta$ and $\bar{\phi}_k := ((\bar{\phi}_k)_1, \dots, (\bar{\phi}_k)_{d_1})$ . Similarly, if $(\psi_k)_i \in [j\eta, (j+1)\eta)$ for some $j \in \mathbb{Z}$ , let $(\bar{\psi}_k)_i = j\eta$ and $\bar{\psi}_k := ((\bar{\psi}_k)_1, \dots, (\bar{\psi}_k)_{d-d_1})$ . In this case, we have $\| (\phi_k,\psi_k) - (\bar{\phi}_k,\bar{\psi}_k)\|_2^2 \leq d\eta^2$ .
+
+Let
+
+$$
+p ^ {\prime} (\boldsymbol {x}, y) = \prod_ {k = 1} ^ {K} \left((2 \pi) ^ {- \frac {d}{2}} e ^ {- \frac {c _ {1}}{2} \| \boldsymbol {x} - (\bar {\phi} _ {k}, \bar {\psi} _ {k}) \| _ {2} ^ {2} + c _ {2}}\right) ^ {\mathbb {I} (y = k)}.
+$$
+
+According to the definition of the bracketing, we want to prove that $p^{\prime}(\pmb {x},y)\geq p_{X|Y}^{\mathrm{single}}(\pmb {x}|y)$ . By completing the square w.r.t. $\pmb{x}$ , we have
+
+$$
+\begin{array}{l} - \frac {c _ {1}}{2} \| \boldsymbol {x} - (\bar {\phi} _ {k}, \bar {\psi} _ {k}) \| _ {2} ^ {2} + c _ {2} - \left(- \frac {1}{2} \| \boldsymbol {x} - (\phi_ {k}, \psi_ {k}) \| _ {2} ^ {2}\right) \\ = \frac {1}{2} \left(\left(1 - c _ {1}\right) \left\| \boldsymbol {x} + \frac {c _ {1} \left(\bar {\phi} _ {k} , \bar {\psi} _ {k}\right) - \left(\phi_ {k} , \psi_ {k}\right)}{1 - c _ {1}} \right\| _ {2} ^ {2} - \frac {c _ {1}}{1 - c _ {1}} \left\| \left(\bar {\phi} _ {k}, \bar {\psi} _ {k}\right) - \left(\phi_ {k}, \psi_ {k}\right) \right\| _ {2} ^ {2} + 2 c _ {2}\right). \\ \end{array}
+$$
+
+Further taking $c_{1} = 1 - \eta$ and $c_{2} = d(1 - \eta)\eta /2$ , we have
+
+$$
+\begin{array}{l}
+(1 - c_1) \left\| \boldsymbol{x} + \frac{c_1 (\bar{\phi}_k, \bar{\psi}_k) - (\phi_k, \psi_k)}{1 - c_1} \right\|_2^2 - \frac{c_1}{1 - c_1} \big\| (\bar{\phi}_k, \bar{\psi}_k) - (\phi_k, \psi_k) \big\|_2^2 + 2c_2 \\
+= \eta \left\| \boldsymbol{x} + \frac{c_1 (\bar{\phi}_k, \bar{\psi}_k) - (\phi_k, \psi_k)}{1 - c_1} \right\|_2^2 - \frac{1 - \eta}{\eta} \big\| (\bar{\phi}_k, \bar{\psi}_k) - (\phi_k, \psi_k) \big\|_2^2 + 2c_2 \quad (c_1 = 1 - \eta) \\
+\geq -\frac{1 - \eta}{\eta} \big\| (\bar{\phi}_k, \bar{\psi}_k) - (\phi_k, \psi_k) \big\|_2^2 + 2c_2 \quad (\eta > 0) \\
+\geq -\frac{1 - \eta}{\eta} d\eta^2 + 2c_2 \quad (\| (\phi_k, \psi_k) - (\bar{\phi}_k, \bar{\psi}_k) \|_2^2 \leq d\eta^2) \\
+= -d(1 - \eta)\eta + d(1 - \eta)\eta = 0. \\
+\end{array}
+$$
+
+Therefore, it holds that for all $y \in \mathcal{Y}$ ,
+
+$$
+\forall \boldsymbol {x} \in \mathcal {X}: p ^ {\prime} (\boldsymbol {x}, y) \geq p _ {X | Y} ^ {\mathrm {single}} (\boldsymbol {x} | y). \tag {16}
+$$
+
+Moreover, given any $0 < \epsilon \leq 1$ , we take $\eta = \frac{\epsilon}{1 + d}$ , and thus $c_{1} = 1 - \frac{\epsilon}{1 + d}$ and $c_{2} = \frac{1}{2} (1 - \frac{\epsilon}{1 + d})\frac{\epsilon}{d + 1}$ . Since $d \in \mathbb{N}$ , we have $\eta \leq \frac{1}{2}$ and $c_{2} \leq \frac{1}{2}$ . Then, $\| p'(\cdot, y) - p_{X|Y}^{\mathrm{single}}(\cdot | y) \|_{L^{1}(\mathcal{X})}$ can be bounded as
+
+$$
+\begin{array}{l}
+\| p'(\cdot, y) - p_{X|Y}^{\mathrm{single}}(\cdot \mid y) \|_{L^1(\mathcal{X})} = \int_{\mathcal{X}} | p'(\boldsymbol{x}, y) - p_{X|Y}^{\mathrm{single}}(\boldsymbol{x} \mid y) | \, d\boldsymbol{x} \\
+= \int_{\mathcal{X}} p'(\boldsymbol{x}, y) \, d\boldsymbol{x} - \int_{\mathcal{X}} p_{X|Y}^{\mathrm{single}}(\boldsymbol{x} \mid y) \, d\boldsymbol{x} = \frac{1}{\sqrt{c_1}} e^{c_2} - 1 \qquad \left(\int_{\mathcal{X}} e^{-\frac{1}{2}\|\boldsymbol{x}\|_2^2} \, d\boldsymbol{x} = (2\pi)^{\frac{d}{2}}\right) \\
+\leq \frac{1}{\sqrt{c_1}}(1 + 2c_2) - 1 \quad \left(e^x \leq 1 + 2x \text{ for } x \in \left[0, \tfrac{1}{2}\right]\right) \\
+= \frac{1}{\sqrt{1 - \eta}}\big(1 + d(1 - \eta)\eta\big) - 1 \quad \big(c_1 = 1 - \eta \text{ and } c_2 = d(1 - \eta)\eta/2\big) \\
+\leq (1 + \eta)\big(1 + d(1 - \eta)\eta\big) - 1 \quad \left(\tfrac{1}{\sqrt{1 - x}} \leq 1 + x \text{ for } x \in \left[0, \tfrac{1}{2}\right]\right) \\
+= \eta\big(1 + d(1 - \eta^2)\big) \leq \eta(1 + d) = \epsilon \tag{17}
+\end{array}
+$$
+
+Combining Equation (16) and Equation (17), we know that for any $p_{X|Y}^{\mathrm{single}}(\boldsymbol{x}|y) \in \mathcal{P}_{X|Y}^{\mathrm{single}}$ and $0 < \epsilon \leq 1$ , there exists some $p'(\boldsymbol{x}, y) \in \mathcal{B}$ such that given any $y \in \mathcal{Y}$ , it holds that $\forall \boldsymbol{x} \in \mathcal{X}: p'(\boldsymbol{x}, y) \geq p_{X|Y}^{\mathrm{single}}(\boldsymbol{x}|y)$ , and $\| p'(\cdot, y) - p_{X|Y}^{\mathrm{single}}(\cdot|y) \|_{L^1(\mathcal{X})} \leq \epsilon$ , where
+
+$$
+\mathcal {B} := \left\{p ^ {\prime} (\boldsymbol {x}, y) = \prod_ {k = 1} ^ {K} \left((2 \pi) ^ {- \frac {d}{2}} e ^ {- \frac {c _ {1}}{2} \| \boldsymbol {x} - (\bar {\phi} _ {k}, \bar {\psi} _ {k}) \| _ {2} ^ {2} + c _ {2}}\right) ^ {\mathbb {I} (y = k)}: (\bar {\phi} _ {k}) _ {i}, (\bar {\psi} _ {k}) _ {i} \in [ - B, B ] \cap \eta \mathbb {Z} \right\}
+$$
+
+Recalling the definition of the upper bracketing number in Definition 3.1, we know that $\mathcal{B}$ is an $\epsilon$ -upper bracket of $\mathcal{P}_{X|Y}^{\mathrm{single}}$ w.r.t. $L^{1}(\mathcal{X})$ . Therefore,
+
+$$
+\begin{array}{l} \mathcal {N} _ {[ ]} \left(\epsilon ; \mathcal {P} _ {X | Y} ^ {\mathrm {single}}, L ^ {1} (\mathcal {X})\right) \\ \leq | \mathcal {B} | = \left| \left\{\left\{\bar {\phi} _ {k} \right\} _ {k = 1} ^ {K}, \left\{\bar {\psi} _ {k} \right\} _ {k = 1} ^ {K}: (\bar {\phi} _ {k}) _ {i}, (\bar {\psi} _ {k}) _ {i} \in [ - B, B ] \cap \eta \mathbb {Z} \right\} \right| \\ \leq \left(\frac {2 B}{\eta} + 1\right) ^ {K d _ {1} + K (d - d _ {1})} \\ = \left(\frac {2 (1 + d) B}{\epsilon} + 1\right) ^ {K d}. \\ \end{array}
+$$
+
+Besides, according to Theorem 3.2, we know that
+
+$$
+\begin{array}{l} \mathcal {R} _ {\overline {{\mathrm {T V}}}} (\hat {p} _ {X | Y} ^ {\mathrm {single}}) \leq 3 \sqrt {\frac {1}{n} \left(\log \mathcal {N} _ {[ ]} \left(\frac {1}{n} ; \mathcal {P} _ {X | Y} ^ {\mathrm {single}} , L ^ {1} (\mathcal {X})\right) + \log \frac {1}{\delta}\right)} \\ \leq 3 \sqrt {\frac {1}{n} \left(K d \log (2 (1 + d) B n + 1) + \log \frac {1}{\delta}\right)}. \\ \end{array}
+$$
+
+Omitting constant factors and the logarithmic term in $n$ , $K$ , $d_1$ , $d$ , and $B$ , we have $\mathcal{R}_{\overline{\mathrm{TV}}}(\hat{p}_{X|Y}^{\mathrm{single}}) = \tilde{\mathcal{O}}\left(\sqrt{\frac{Kd}{n}}\right)$ .
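+The gap between the two rates comes entirely from the exponent of the bracketing number: $(K-1)d_1 + d$ under parameter sharing versus $Kd$ without it. A one-line check with illustrative sizes (our choices):
+
+```python
+# Exponents of the two bracketing-number bounds; sharing psi across the K
+# sources saves (K-1)*(d - d_1) parameters. Sizes below are illustrative.
+def multi_dim(K, d, d1):
+    return (K - 1) * d1 + d   # multi-source: K class-specific phi_k plus one shared psi
+
+def single_dim(K, d, d1):
+    return K * d              # single-source: a full d-dimensional mean per class
+
+K, d, d1 = 10, 100, 20
+gap = single_dim(K, d, d1) - multi_dim(K, d, d1)
+print(gap == (K - 1) * (d - d1))  # True
+```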
+
+
+
+# C. Proofs for Section 4.2
+
+# C.1. Preliminaries for evaluating the bracketing number of neural networks
+
+Our results build on the intrinsic connection between the bracketing number of conditional distribution spaces and the covering number of neural network models. There is a rich body of work on the complexity of ReLU fully connected neural networks, also known as multilayer perceptrons (MLPs, as defined in Definition 4.2), from various perspectives, including Rademacher complexity (Bartlett et al., 2017), VC-dimension (Bartlett et al., 2019), and covering numbers (Suzuki, 2019; Shen, 2024; Ou & Bölskei, 2024). Specifically, prior results indicate that the logarithm of the covering number of an MLP scales as $\tilde{\mathcal{O}}(LS)$ , where $L$ is the depth and $S$ is the sparsity constraint. Furthermore, Ou & Bölskei (2024) establish a lower bound, showing that for $B \geq 1$ and $W, L \geq 60$ , this quantity scales as $\tilde{\Theta}(LS)$ . These proofs share a common idea. To enhance clarity, we include a detailed derivation below following these prior works.
+
+Definition C.1 ( $\epsilon$ -covering number). Let $\epsilon > 0$ be a real number and let $\mathfrak{p}, \mathfrak{q}$ satisfy $1 \leq \mathfrak{p}, \mathfrak{q} \leq \infty$ . An $\epsilon$ -cover of a function space $\mathcal{F}$ with respect to $\|\cdot\|$ is a finite function set $\mathcal{C} \subset \mathcal{F}$ such that for any $\pmb{f} \in \mathcal{F}$ , there exists some $\pmb{f}' \in \mathcal{C}$ such that $\left\| \left\| \pmb{f}(\pmb{x}) - \pmb{f}'(\pmb{x}) \right\|_{\mathfrak{q}} \right\|_{L^{\mathfrak{p}}(\mathcal{X})} \leq \epsilon$ . In particular, when $\mathfrak{p} = \mathfrak{q} = \infty$ , this requires $\left\| \left\| \pmb{f}(\pmb{x}) - \pmb{f}'(\pmb{x}) \right\|_{\infty} \right\|_{L^{\infty}(\mathcal{X})} = \sup_{\pmb{x} \in \mathcal{X}} \| \pmb{f}(\pmb{x}) - \pmb{f}'(\pmb{x}) \|_{\infty} \leq \epsilon$ . The $\epsilon$ -covering number $\mathcal{N}\left(\epsilon; \mathcal{F}, \| \cdot \|_{\mathfrak{q}, L^{\mathfrak{p}}(\mathcal{X})}\right)$ is the cardinality of the smallest $\epsilon$ -cover with respect to $\|\cdot\|_{\mathfrak{q}, L^{\mathfrak{p}}(\mathcal{X})}$ .
+
+Lemma C.2 (Lipschitz property of ReLU and sigmoid, Lemma A.1 in (Bartlett et al., 2017)). The element-wise ReLU $\mathrm{ReLU}(\cdot):\mathbb{R}^d\to \mathbb{R}^d$ and the sigmoid $\sigma (x) = \frac{1}{1 + e^{-x}}$ are 1-Lipschitz w.r.t. $\| \cdot \|_{\mathfrak{p}}$ for any $\mathfrak{p}\geq 1$ .
+
+Lemma C.3 (Covering number of a composite function class). Suppose we have two classes of functions: $\mathcal{G}$ , consisting of functions mapping from $\mathcal{X}_1$ to $\mathcal{X}_2$ , and $\mathcal{F}$ , consisting of functions mapping from $\mathcal{X}_2$ to $\mathcal{X}_3$ . We denote by $\mathcal{F} \circ \mathcal{G}$ the class of all compositions of functions in $\mathcal{F}$ and $\mathcal{G}$ , i.e., $\mathcal{F} \circ \mathcal{G} = \{\pmb{f} \circ \pmb{g} : \pmb{f} \in \mathcal{F}, \pmb{g} \in \mathcal{G}\}$ . Assume that any $\pmb{f} \in \mathcal{F}$ is $\kappa_{\mathcal{F}}$ -Lipschitz w.r.t. $\|\cdot\|_{\infty}$ , i.e., for all $\pmb{x}_2, \pmb{x}_2' \in \mathcal{X}_2$ , $\|\pmb{f}(\pmb{x}_2) - \pmb{f}(\pmb{x}_2')\|_{\infty} \leq \kappa_{\mathcal{F}}\|\pmb{x}_2 - \pmb{x}_2'\|_{\infty}$ . Then, given constants $\epsilon_{\mathcal{F}}, \epsilon_{\mathcal{G}} > 0$ , we have
+
+$$
+\mathcal {N} \left(\epsilon_ {\mathcal {F}} + \kappa_ {\mathcal {F}} \epsilon_ {\mathcal {G}}; \mathcal {F} \circ \mathcal {G}, \| \cdot \| _ {\infty , L ^ {\infty} \left(\mathcal {X} _ {1}\right)}\right) \leq \mathcal {N} \left(\epsilon_ {\mathcal {F}}; \mathcal {F}, \| \cdot \| _ {\infty , L ^ {\infty} \left(\mathcal {X} _ {2}\right)}\right) \mathcal {N} \left(\epsilon_ {\mathcal {G}}; \mathcal {G}, \| \cdot \| _ {\infty , L ^ {\infty} \left(\mathcal {X} _ {1}\right)}\right).
+$$
+
+Proof. Let $\mathcal{C}_{\mathcal{F}}$ be an $\epsilon_{\mathcal{F}}$ -cover of $\mathcal{F}$ w.r.t. $\| \cdot \|_{\infty ,L^{\infty}(\mathcal{X}_2)}$ such that $|\mathcal{C}_{\mathcal{F}}| = \mathcal{N}\Bigl (\epsilon_{\mathcal{F}};\mathcal{F},\| \cdot \|_{\infty ,L^{\infty}(\mathcal{X}_2)}\Bigr)$ , and $\mathcal{C}_{\mathcal{G}}$ be an $\epsilon_{\mathcal{G}}$ -cover of $\mathcal{G}$ w.r.t. $\| \cdot \|_{\infty ,L^{\infty}(\mathcal{X}_1)}$ such that $|\mathcal{C}_{\mathcal{G}}| = \mathcal{N}\Bigl (\epsilon_{\mathcal{G}};\mathcal{G},\| \cdot \|_{\infty ,L^{\infty}(\mathcal{X}_1)}\Bigr)$ . For any $\pmb {f}\circ \pmb {g}\in \mathcal{F}\circ \mathcal{G}$ , there exists $\pmb {f}'\in \mathcal{C}_{\mathcal{F}}$ and $\pmb {g}'\in \mathcal{C}_{\mathcal{G}}$ such that
+
+$$
+\forall \boldsymbol {x} _ {2} \in \mathcal {X} _ {2}, \| \boldsymbol {f} (\boldsymbol {x} _ {2}) - \boldsymbol {f} ^ {\prime} (\boldsymbol {x} _ {2}) \| _ {\infty} \leq \epsilon_ {\mathcal {F}}, \quad \text {a n d} \quad \forall \boldsymbol {x} _ {1} \in \mathcal {X} _ {1}, \| \boldsymbol {g} (\boldsymbol {x} _ {1}) - \boldsymbol {g} ^ {\prime} (\boldsymbol {x} _ {1}) \| _ {\infty} \leq \epsilon_ {\mathcal {G}}.
+$$
+
+Then, for any $\pmb{x}_1 \in \mathcal{X}_1$ , we have
+
+$$
+\begin{array}{l} \| \boldsymbol {f} \circ \boldsymbol {g} (\boldsymbol {x} _ {1}) - \boldsymbol {f} ^ {\prime} \circ \boldsymbol {g} ^ {\prime} (\boldsymbol {x} _ {1}) \| _ {\infty} \leq \| \boldsymbol {f} \circ \boldsymbol {g} (\boldsymbol {x} _ {1}) - \boldsymbol {f} ^ {\prime} \circ \boldsymbol {g} (\boldsymbol {x} _ {1}) \| _ {\infty} + \| \boldsymbol {f} ^ {\prime} \circ \boldsymbol {g} (\boldsymbol {x} _ {1}) - \boldsymbol {f} ^ {\prime} \circ \boldsymbol {g} ^ {\prime} (\boldsymbol {x} _ {1}) \| _ {\infty} \\ \leq \epsilon_ {\mathcal {F}} + \kappa_ {\mathcal {F}} \| \boldsymbol {g} (\boldsymbol {x} _ {1}) - \boldsymbol {g} ^ {\prime} (\boldsymbol {x} _ {1}) \| _ {\infty} \\ \leq \epsilon_ {\mathcal {F}} + \kappa_ {\mathcal {F}} \epsilon_ {\mathcal {G}}. \\ \end{array}
+$$
+
+Therefore, $\mathcal{C}_{\mathcal{F}}\circ \mathcal{C}_{\mathcal{G}}$ is an $(\epsilon_{\mathcal{F}} + \kappa_{\mathcal{F}}\epsilon_{\mathcal{G}})$ -cover of $\mathcal{F}\circ \mathcal{G}$ , and thus
+
+$$
+\mathcal {N} \Big (\epsilon_ {\mathcal {F}} + \kappa_ {\mathcal {F}} \epsilon_ {\mathcal {G}}; \mathcal {F} \circ \mathcal {G}, \| \cdot \| _ {\infty , L ^ {\infty} \left(\mathcal {X} _ {1}\right)} \Big) \leq | \mathcal {C} _ {\mathcal {F}} \circ \mathcal {C} _ {\mathcal {G}} | \leq | \mathcal {C} _ {\mathcal {F}} | | \mathcal {C} _ {\mathcal {G}} | = \mathcal {N} \Big (\epsilon_ {\mathcal {F}}; \mathcal {F}, \| \cdot \| _ {\infty , L ^ {\infty} \left(\mathcal {X} _ {2}\right)} \Big) \mathcal {N} \Big (\epsilon_ {\mathcal {G}}; \mathcal {G}, \| \cdot \| _ {\infty , L ^ {\infty} \left(\mathcal {X} _ {1}\right)} \Big).
+$$
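+The error-splitting step in this proof can be illustrated with concrete one-dimensional functions. In the sketch below, $f$ is a $\kappa$-Lipschitz map and $f'$, $g'$ are uniform approximations of $f$, $g$ (all function choices and tolerances are ours, for illustration only):
+
+```python
+import numpy as np
+
+kappa = 2.0
+f  = lambda x: kappa * np.sin(x)         # kappa-Lipschitz
+fp = lambda x: kappa * np.sin(x) + 0.05  # ||f - f'||_inf = eps_F = 0.05
+g  = lambda x: np.cos(x)
+gp = lambda x: np.cos(x) - 0.03          # ||g - g'||_inf = eps_G = 0.03
+
+eps_F, eps_G = 0.05, 0.03
+xs = np.linspace(-5, 5, 2001)
+# Triangle inequality + Lipschitzness of f gives the eps_F + kappa*eps_G bound.
+err = float(np.max(np.abs(f(g(xs)) - fp(gp(xs)))))
+print(err <= eps_F + kappa * eps_G + 1e-12)  # True
+```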
+
+
+
+Lemma C.4 (Covering number of an MLP class). Given any constant $\delta > 0$ , the covering number with respect to $\| \cdot \|_{\infty, L^{\infty}(\mathcal{X})}$ with $\mathcal{X} \subset [0,1]^{W_0}$ of an MLP class $\mathcal{F}(L, W, S, B)$ defined in Definition 4.2 can be bounded by
+
+$$
+\mathcal {N} \Big (L (B \vee 1) ^ {L - 1} (W + 1) ^ {L} \delta ; \mathcal {F} (L, W, S, B), \| \cdot \| _ {\infty , L ^ {\infty} (\mathcal {X})} \Big) \leq \left(\frac {2 B}{\delta} + 1\right) ^ {S}.
+$$
+
+Proof. Fix any $\pmb{x} \in [0,1]^{W_0}$ . Given any network $\pmb{f} \in \mathcal{F}(L,W,S,B)$ expressed as
+
+$$
+\boldsymbol {f} (\boldsymbol {x}) = \left(\boldsymbol {A} ^ {(L)} \operatorname {R e L U} (\cdot) + \boldsymbol {b} ^ {(L)}\right) \circ \dots \circ \left(\boldsymbol {A} ^ {(1)} \boldsymbol {x} + \boldsymbol {b} ^ {(1)}\right),
+$$
+
+let $\pmb{f}_l(\pmb{x}) \coloneqq (\pmb{A}^{(l)}\mathrm{ReL}\mathrm{U}(\cdot) + \pmb{b}^{(l)}) \circ \dots \circ (\pmb{A}^{(1)}\pmb{x} + \pmb{b}^{(1)})$ for $l = 2, \ldots, L$ and $\pmb{f}_1(\pmb{x}) = \pmb{A}^{(1)}\pmb{x} + \pmb{b}^{(1)}$ .
+
+Sup-norm of the output at each layer. We first prove the statement that $\| f_l(\pmb{x})\|_{\infty} \leq (B \vee 1)^l (W + 1)^l$ for all $l \in [L]$ by induction. When $l = 1$ ,
+
+$$
+\begin{array}{l} \| \pmb {f} _ {1} (\pmb {x}) \| _ {\infty} = \| \pmb {A} ^ {(1)} \pmb {x} + \pmb {b} ^ {(1)} \| _ {\infty} \leq \max _ {i} \| \pmb {A} ^ {(1)} [ i,: ] \| _ {1} \| \pmb {x} \| _ {\infty} + \| \pmb {b} ^ {(1)} \| _ {\infty} \\ \leq W B + B \quad (W _ {0} \leq W, \| \boldsymbol {A} ^ {(1)} \| _ {\infty} \leq B, \| \boldsymbol {b} ^ {(1)} \| _ {\infty} \leq B, \boldsymbol {x} \in \mathcal {X} \subset [ 0, 1 ] ^ {W _ {0}}) \\ = B (W + 1) \leq (B \vee 1) ^ {1} (W + 1) ^ {1}, \\ \end{array}
+$$
+
+which implies the statement is true for $l = 1$ . Assume that for some $l = i \geq 1$ , $\| f_i(\pmb{x}) \|_{\infty} \leq (B \vee 1)^i (W + 1)^i$ , then we have
+
+$$
+\begin{array}{l}
+\| \boldsymbol{f}_{i+1}(\boldsymbol{x}) \|_{\infty} = \| \boldsymbol{A}^{(i+1)} \mathrm{ReLU}\big(\boldsymbol{f}_i(\boldsymbol{x})\big) + \boldsymbol{b}^{(i+1)} \|_{\infty} \leq \max_j \| \boldsymbol{A}^{(i+1)}[j,:] \|_1 \big\| \mathrm{ReLU}\big(\boldsymbol{f}_i(\boldsymbol{x})\big) \big\|_{\infty} + \| \boldsymbol{b}^{(i+1)} \|_{\infty} \\
+\leq WB \big\| \mathrm{ReLU}\big(\boldsymbol{f}_i(\boldsymbol{x})\big) \big\|_{\infty} + B \quad (W_i \leq W,\ \|\boldsymbol{A}^{(i+1)}\|_{\infty} \leq B,\ \|\boldsymbol{b}^{(i+1)}\|_{\infty} \leq B) \\
+\leq WB \| \boldsymbol{f}_i(\boldsymbol{x}) \|_{\infty} + B \quad (\mathrm{ReLU}(\cdot) \text{ is 1-Lipschitz by Lemma C.2 and } \mathrm{ReLU}(\boldsymbol{0}) = \boldsymbol{0}) \\
+\leq WB(B \vee 1)^i (W+1)^i + B \\
+\leq \left\{ \begin{array}{ll} WB^{i+1}(W+1)^i + B^{i+1}(W+1)^i, & B \geq 1, \\ W(W+1)^i + (W+1)^i, & B < 1, \end{array} \right. \\
+= (B \vee 1)^{i+1}(W+1)^{i+1}, \\
+\end{array}
+$$
+
+which implies the statement is true for $l = i + 1$ , completing the induction steps. Therefore, it holds that for all $l \in [L]$
+
+$$
+\left\| \boldsymbol {f} _ {l} (\boldsymbol {x}) \right\| _ {\infty} \leq (B \vee 1) ^ {l} (W + 1) ^ {l}. \tag {18}
+$$
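+Equation (18) can be checked on a random network whose entries respect the bound $B$ (the depth, width, bound, and seed below are our illustrative choices):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(2)
+L, W, B = 3, 4, 0.5
+relu = lambda z: np.maximum(z, 0.0)
+
+# Random network with all entries in [-B, B] and input in [0, 1]^W.
+As = [rng.uniform(-B, B, size=(W, W)) for _ in range(L)]
+bs = [rng.uniform(-B, B, size=W) for _ in range(L)]
+x = rng.uniform(0, 1, size=W)
+
+h = As[0] @ x + bs[0]
+ok = np.max(np.abs(h)) <= (max(B, 1.0) * (W + 1)) ** 1
+for l in range(1, L):
+    h = As[l] @ relu(h) + bs[l]
+    # Layer l+1 output should obey (B v 1)^(l+1) * (W+1)^(l+1).
+    ok = ok and np.max(np.abs(h)) <= max(B, 1.0) ** (l + 1) * (W + 1) ** (l + 1)
+print(bool(ok))  # True
+```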
+
+Parameter-Lipschitzness at each layer. For any two different neural networks $\pmb{f}, \pmb{f}' \in \mathcal{F}(L, W, S, B)$ expressed by
+
+$$
+\pmb {f} (x) = (\pmb {A} ^ {(L)} \mathrm {R e L U} (\cdot) + \pmb {b} ^ {(L)}) \circ \dots \circ (\pmb {A} ^ {(1)} \pmb {x} + \pmb {b} ^ {(1)}), \pmb {f} ^ {\prime} (\pmb {x}) = (\pmb {A} ^ {(L) ^ {\prime}} \mathrm {R e L U} (\cdot) + \pmb {b} ^ {(L) ^ {\prime}}) \circ \dots \circ (\pmb {A} ^ {(1) ^ {\prime}} \pmb {x} + \pmb {b} ^ {(1) ^ {\prime}}),
+$$
+
+with parameter distance $\max_l \|\boldsymbol{A}^{(l)} - \boldsymbol{A}^{(l)'}\|_{\infty} \vee \|\boldsymbol{b}^{(l)} - \boldsymbol{b}^{(l)'}\|_{\infty} \leq \delta$ , we prove the statement that $\|\pmb{f}_l(\pmb{x}) - \pmb{f}_l'(\pmb{x})\|_{\infty} \leq l(B \vee 1)^{l-1}(W + 1)^{l}\delta$ for all $l \in [L]$ by induction. When $l = 1$ ,
+
+$$
+\begin{array}{l} \left\| \boldsymbol {f} _ {1} (\boldsymbol {x}) - \boldsymbol {f} _ {1} ^ {\prime} (\boldsymbol {x}) \right\| _ {\infty} = \left\| \boldsymbol {A} ^ {(1)} \boldsymbol {x} + \boldsymbol {b} ^ {(1)} - \boldsymbol {A} ^ {(1) ^ {\prime}} \boldsymbol {x} - \boldsymbol {b} ^ {(1) ^ {\prime}} \right\| _ {\infty} \\ \leq \left\| \left(\boldsymbol {A} ^ {(1)} - \boldsymbol {A} ^ {(1) ^ {\prime}}\right) \boldsymbol {x} \right\| _ {\infty} + \left\| \boldsymbol {b} ^ {(1)} - \boldsymbol {b} ^ {(1) ^ {\prime}} \right\| _ {\infty} \\ \leq W \delta + \delta \quad (W _ {0} \leq W, \| \boldsymbol {A} ^ {(1)} - \boldsymbol {A} ^ {(1) ^ {\prime}} \| _ {\infty} \leq \delta , \| \boldsymbol {b} ^ {(1)} - \boldsymbol {b} ^ {(1) ^ {\prime}} \| _ {\infty} \leq \delta , \boldsymbol {x} \in [ 0, 1 ] ^ {W _ {0}}) \\ = (W + 1) \delta \\ \leq (B \vee 1) ^ {0} (W + 1) ^ {1} \delta , \\ \end{array}
+$$
+
+which implies the statement is true for $l = 1$ . Assume that for some $l = i \geq 1$ , $\| \pmb{f}_i(\pmb{x}) - \pmb{f}_i'(\pmb{x}) \|_{\infty} \leq i (B \vee 1)^{i-1} (W + 1)^i \delta$ , then we have
+
+$$
+\begin{array}{l}
+\| \boldsymbol{f}_{i+1}(\boldsymbol{x}) - \boldsymbol{f}_{i+1}'(\boldsymbol{x}) \|_{\infty} = \| \boldsymbol{A}^{(i+1)} \mathrm{ReLU}\big(\boldsymbol{f}_i(\boldsymbol{x})\big) + \boldsymbol{b}^{(i+1)} - \boldsymbol{A}^{(i+1)'} \mathrm{ReLU}\big(\boldsymbol{f}_i'(\boldsymbol{x})\big) - \boldsymbol{b}^{(i+1)'} \|_{\infty} \\
+\leq \big\| \big(\boldsymbol{A}^{(i+1)} - \boldsymbol{A}^{(i+1)'}\big) \mathrm{ReLU}\big(\boldsymbol{f}_i(\boldsymbol{x})\big) \big\|_{\infty} + \big\| \boldsymbol{A}^{(i+1)'} \big(\mathrm{ReLU}\big(\boldsymbol{f}_i(\boldsymbol{x})\big) - \mathrm{ReLU}\big(\boldsymbol{f}_i'(\boldsymbol{x})\big)\big) \big\|_{\infty} + \big\| \boldsymbol{b}^{(i+1)} - \boldsymbol{b}^{(i+1)'} \big\|_{\infty} \\
+\leq W\delta \big\| \mathrm{ReLU}\big(\boldsymbol{f}_i(\boldsymbol{x})\big) \big\|_{\infty} + WB \big\| \mathrm{ReLU}\big(\boldsymbol{f}_i(\boldsymbol{x})\big) - \mathrm{ReLU}\big(\boldsymbol{f}_i'(\boldsymbol{x})\big) \big\|_{\infty} + \delta \\
+\qquad \big(W_i \leq W,\ \| \boldsymbol{A}^{(i+1)} - \boldsymbol{A}^{(i+1)'} \|_{\infty} \leq \delta,\ \| \boldsymbol{A}^{(i+1)'} \|_{\infty} \leq B,\ \| \boldsymbol{b}^{(i+1)} - \boldsymbol{b}^{(i+1)'} \|_{\infty} \leq \delta\big) \\
+\leq W\delta \| \boldsymbol{f}_i(\boldsymbol{x}) \|_{\infty} + WB \| \boldsymbol{f}_i(\boldsymbol{x}) - \boldsymbol{f}_i'(\boldsymbol{x}) \|_{\infty} + \delta \quad (\mathrm{ReLU}(\cdot) \text{ is 1-Lipschitz by Lemma C.2 and } \mathrm{ReLU}(\boldsymbol{0}) = \boldsymbol{0}) \\
+\leq W\delta (B \vee 1)^i (W+1)^i + WB \| \boldsymbol{f}_i(\boldsymbol{x}) - \boldsymbol{f}_i'(\boldsymbol{x}) \|_{\infty} + \delta \quad (\text{Equation (18)}) \\
+\leq W\delta (B \vee 1)^i (W+1)^i + WBi(B \vee 1)^{i-1}(W+1)^i \delta + \delta \\
+\leq \big(W(B \vee 1)^i (W+1)^i + iW(B \vee 1)^i (W+1)^i + 1\big)\delta \\
+= \big((i+1)W(B \vee 1)^i (W+1)^i + 1\big)\delta \\
+\leq \left\{ \begin{array}{ll} \big(W(i+1)B^i(W+1)^i + (i+1)B^i(W+1)^i\big)\delta, & B \geq 1, \\ \big(W(i+1)(W+1)^i + (i+1)(W+1)^i\big)\delta, & B < 1, \end{array} \right. \\
+= (i+1)(B \vee 1)^i (W+1)^{i+1}\delta,
+\end{array}
+$$
+
+which implies the statement is true for $l = i + 1$ , completing the induction steps. Therefore, it holds that for all $l \in [L]$
+
+$$
+\left\| \boldsymbol {f} _ {l} (\boldsymbol {x}) - \boldsymbol {f} _ {l} ^ {\prime} (\boldsymbol {x}) \right\| _ {\infty} \leq l (B \vee 1) ^ {l - 1} (W + 1) ^ {l} \delta . \tag {19}
+$$
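+Equation (19) can likewise be checked by perturbing every parameter of a random network by at most $\delta$ (the sizes, $\delta$, and seed are our illustrative choices):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(3)
+L, W, B, delta = 3, 4, 0.5, 1e-3
+relu = lambda z: np.maximum(z, 0.0)
+
+As = [rng.uniform(-B + delta, B, size=(W, W)) for _ in range(L)]
+bs = [rng.uniform(-B + delta, B, size=W) for _ in range(L)]
+# Perturb every entry by at most delta, staying inside [-B, B].
+Aps = [A - rng.uniform(0, delta, size=A.shape) for A in As]
+bps = [b - rng.uniform(0, delta, size=b.shape) for b in bs]
+
+def forward(weights, biases, x):
+    h = weights[0] @ x + biases[0]
+    for A, b in zip(weights[1:], biases[1:]):
+        h = A @ relu(h) + b
+    return h
+
+x = rng.uniform(0, 1, size=W)
+gap = float(np.max(np.abs(forward(As, bs, x) - forward(Aps, bps, x))))
+bound = L * max(B, 1.0) ** (L - 1) * (W + 1) ** L * delta  # right-hand side of Equation (19)
+print(gap <= bound)  # True
+```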
+
+Discretization of entry space. Let $\mathcal{S}_{\mathrm{entry}}(\{(\boldsymbol{A}^{(l)},\boldsymbol{b}^{(l)})\}_{l = 1}^{L}) := \bigcup_{l = 1}^{L}\big(\mathcal{S}_{\mathrm{entry}}(\boldsymbol{A}^{(l)}) \cup \mathcal{S}_{\mathrm{entry}}(\boldsymbol{b}^{(l)})\big)$ , where $\mathcal{S}_{\mathrm{entry}}(\boldsymbol{A})$ denotes the value space of all entries in $\boldsymbol{A}$ and $\mathcal{S}_{\mathrm{entry}}(\boldsymbol{b})$ denotes the value space of all entries in $\boldsymbol{b}$ . Now we discretize the value spaces of $\mathcal{F}(L,W,S,B)$ into $\delta$ -width grids and obtain a finite class of neural networks $\mathcal{F}_{\delta \mathbb{Z}}(L,W,S,B) := \{\boldsymbol{f}\in \mathcal{F}(L,W,S,B):\mathcal{S}_{\mathrm{entry}}(\{(\boldsymbol{A}^{(l)},\boldsymbol{b}^{(l)})\}_{l = 1}^{L}) \subseteq [-B,B]\cap \delta \mathbb{Z}\}$ , where $\delta \mathbb{Z} = \{k\delta \,|\, k\in \mathbb{Z}\}$ . Then, for any $\boldsymbol{f}\in \mathcal{F}(L,W,S,B)$ expressed as $\boldsymbol{f}(\boldsymbol{x}) = (\boldsymbol{A}^{(L)}\mathrm{ReLU}(\cdot) + \boldsymbol{b}^{(L)})\circ \dots \circ (\boldsymbol{A}^{(1)}\boldsymbol{x} + \boldsymbol{b}^{(1)})$ , there exists $\boldsymbol{f}^{\prime}\in \mathcal{F}_{\delta \mathbb{Z}}(L,W,S,B)$ expressed as $\boldsymbol{f}^{\prime}(\boldsymbol{x}) = (\boldsymbol{A}^{(L)^{\prime}}\mathrm{ReLU}(\cdot) + \boldsymbol{b}^{(L)^{\prime}})\circ \dots \circ (\boldsymbol{A}^{(1)^{\prime}}\boldsymbol{x} + \boldsymbol{b}^{(1)^{\prime}})$ such that $\max_l\| \boldsymbol{A}^{(l)} - \boldsymbol{A}^{(l)^{\prime}}\|_{\infty}\vee \| \boldsymbol{b}^{(l)} - \boldsymbol{b}^{(l)^{\prime}}\|_{\infty}\leq \delta$ . According to Equation (19), we have for any $\boldsymbol{x}\in [0,1]^{W_0}$ , $\| \boldsymbol{f}(\boldsymbol{x}) - \boldsymbol{f}'(\boldsymbol{x})\|_{\infty}\leq L(B\vee 1)^{L - 1}(W + 1)^{L}\delta$ . Therefore, $\mathcal{F}_{\delta \mathbb{Z}}(L,W,S,B)$ is an $L(B\vee 1)^{L - 1}(W + 1)^{L}\delta$ -cover of $\mathcal{F}(L,W,S,B)$ with respect to $\| \cdot \|_{\infty ,L^{\infty}(\mathcal{X})}$ , and thus we have
+
+$$
+\begin{array}{l} \mathcal {N} \left(L (B \vee 1) ^ {L - 1} (W + 1) ^ {L} \delta ; \mathcal {F} (L, W, S, B), \| \cdot \| _ {\infty , L ^ {\infty} (\mathcal {X})}\right) \\ \leq \left| \mathcal {F} _ {\delta \mathbb {Z}} (L, W, S, B) \right| = \left| \left\{\left\{\left(\boldsymbol {A} ^ {(l)}, \boldsymbol {b} ^ {(l)}\right) \right\} _ {l = 1} ^ {L}: \mathcal {S} _ {\mathrm {entry}} (\boldsymbol {A} ^ {(l)}), \mathcal {S} _ {\mathrm {entry}} (\boldsymbol {b} ^ {(l)}) \subseteq [ - B, B ] \cap \delta \mathbb {Z} \right\} \right| \leq \left(\frac {2 B}{\delta} + 1\right) ^ {S}, \\ \end{array}
+$$
+
+which completes the proof.
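As an informal numerical sanity check of this discretization argument (not part of the proof; the network sizes, random parameters, and nearest-grid rounding below are illustrative assumptions), one can round the weights of a small ReLU MLP to the grid $\delta\mathbb{Z}$ and verify that the output deviation respects the $L(B\vee 1)^{L-1}(W+1)^{L}\delta$ bound:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative check of the discretization step: round all parameters of a
# small random ReLU MLP to the grid delta*Z (entrywise error <= delta/2) and
# compare the sup-norm output deviation to L (B v 1)^{L-1} (W+1)^L delta.
# Depth, width, bound, and grid width are arbitrary choices.
L_depth, W_width, B_bound, delta = 3, 4, 1.0, 1e-3

def forward(params, x):
    """(A_L ReLU(.) + b_L) o ... o (A_1 x + b_1), as in Definition 4.2."""
    h = x
    for i, (A, b) in enumerate(params):
        h = A @ h + b
        if i < len(params) - 1:
            h = np.maximum(h, 0.0)  # ReLU between affine layers
    return h

params = [(rng.uniform(-B_bound, B_bound, (W_width, W_width)),
           rng.uniform(-B_bound, B_bound, W_width)) for _ in range(L_depth)]
# Nearest point of the grid delta*Z, entry by entry.
params_q = [(np.round(A / delta) * delta, np.round(b / delta) * delta)
            for A, b in params]

bound = L_depth * max(B_bound, 1.0) ** (L_depth - 1) * (W_width + 1) ** L_depth * delta
worst = max(np.max(np.abs(forward(params, x) - forward(params_q, x)))
            for x in rng.uniform(0.0, 1.0, (200, W_width)))
assert worst <= bound
```

Nearest-grid rounding perturbs each entry by at most $\delta/2 \leq \delta$, so the observed deviation should sit well inside the stated bound.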
+
+Here, we further establish the Lipschitz property of MLPs, which is useful in the following proofs for deriving the covering number for MLPs with an embedding layer applied to input data.
+
Lemma C.5 (Lipschitz property of MLPs with respect to the input). For any $\pmb{f} \in \mathcal{F}(L, W, S, B)$ defined in Definition 4.2, $\pmb{f}$ is $B^L W^L$-Lipschitz continuous w.r.t. $\| \cdot \|_{\infty}$ in $\pmb{x}$ on $\mathcal{X}$, i.e., for any $\pmb{x}, \pmb{x}' \in \mathcal{X} \subset \mathbb{R}^{W_0}$, it holds that
+
+$$
+\| \boldsymbol {f} (\boldsymbol {x}) - \boldsymbol {f} (\boldsymbol {x} ^ {\prime}) \| _ {\infty} \leq B ^ {L} W ^ {L} \| \boldsymbol {x} - \boldsymbol {x} ^ {\prime} \| _ {\infty}.
+$$
+
Proof. Fix any $\pmb{x}$, $\pmb{x}' \in \mathcal{X}$. Given any $\pmb{f} \in \mathcal{F}(L, W, S, B)$ expressed as $\pmb{f}(\pmb{x}) = (\pmb{A}^{(L)}\mathrm{ReLU}(\cdot) + \pmb{b}^{(L)}) \circ \dots \circ (\pmb{A}^{(1)}\pmb{x} + \pmb{b}^{(1)})$, let $\pmb{f}_l(\pmb{x}) := (\pmb{A}^{(l)}\mathrm{ReLU}(\cdot) + \pmb{b}^{(l)}) \circ \dots \circ (\pmb{A}^{(1)}\pmb{x} + \pmb{b}^{(1)})$ for $l = 2, \ldots, L$ and $\pmb{f}_1(\pmb{x}) = \pmb{A}^{(1)}\pmb{x} + \pmb{b}^{(1)}$. We prove the statement that $\| \pmb{f}_l(\pmb{x}) - \pmb{f}_l(\pmb{x}')\|_{\infty} \leq B^l W^l\| \pmb{x} - \pmb{x}'\|_{\infty}$ for all $l \in [L]$ by induction. When $l = 1$,
+
+$$
+\begin{array}{l} \left\| \boldsymbol {f} _ {1} (\boldsymbol {x}) - \boldsymbol {f} _ {1} \left(\boldsymbol {x} ^ {\prime}\right) \right\| _ {\infty} = \left\| \boldsymbol {A} ^ {(1)} \boldsymbol {x} + \boldsymbol {b} ^ {(1)} - \boldsymbol {A} ^ {(1)} \boldsymbol {x} ^ {\prime} - \boldsymbol {b} ^ {(1)} \right\| _ {\infty} = \left\| \boldsymbol {A} ^ {(1)} \left(\boldsymbol {x} - \boldsymbol {x} ^ {\prime}\right) \right\| _ {\infty} \\ \leq B W \| \boldsymbol {x} - \boldsymbol {x} ^ {\prime} \| _ {\infty} \quad (W _ {0} \leq W, \| \boldsymbol {A} ^ {(1)} \| _ {\infty} \leq B) \\ = B ^ {1} W ^ {1} \| \boldsymbol {x} - \boldsymbol {x} ^ {\prime} \| _ {\infty}, \\ \end{array}
+$$
+
+which implies the statement is true for $l = 1$ . Assume that for some $l = i \geq 1$ , $\| \pmb{f}_i(\pmb{x}) - \pmb{f}_i(\pmb{x}')\|_{\infty} \leq B^i W^i\| \pmb{x} - \pmb{x}'\|_{\infty}$ , then we have
+
+$$
\begin{array}{l} \left\| \boldsymbol {f} _ {i + 1} (\boldsymbol {x}) - \boldsymbol {f} _ {i + 1} \left(\boldsymbol {x} ^ {\prime}\right) \right\| _ {\infty} = \left\| \boldsymbol {A} ^ {(i + 1)} \operatorname {ReLU} \left(\boldsymbol {f} _ {i} (\boldsymbol {x})\right) + \boldsymbol {b} ^ {(i + 1)} - \boldsymbol {A} ^ {(i + 1)} \operatorname {ReLU} \left(\boldsymbol {f} _ {i} \left(\boldsymbol {x} ^ {\prime}\right)\right) - \boldsymbol {b} ^ {(i + 1)} \right\| _ {\infty} \\ = \left\| \boldsymbol {A} ^ {(i + 1)} \left(\operatorname {ReLU} \left(\boldsymbol {f} _ {i} (\boldsymbol {x})\right) - \operatorname {ReLU} \left(\boldsymbol {f} _ {i} \left(\boldsymbol {x} ^ {\prime}\right)\right)\right) \right\| _ {\infty} \\ \leq W B \left\| \operatorname {ReLU} \left(\boldsymbol {f} _ {i} (\boldsymbol {x})\right) - \operatorname {ReLU} \left(\boldsymbol {f} _ {i} \left(\boldsymbol {x} ^ {\prime}\right)\right) \right\| _ {\infty} \quad (W _ {i} \leq W, \| \boldsymbol {A} ^ {(i + 1)} \| _ {\infty} \leq B) \\ \leq W B \left\| \boldsymbol {f} _ {i} (\boldsymbol {x}) - \boldsymbol {f} _ {i} \left(\boldsymbol {x} ^ {\prime}\right) \right\| _ {\infty} \quad (\operatorname {ReLU} (\cdot) \text { is 1-Lipschitz continuous by Lemma C.2}) \\ \leq W B \cdot B ^ {i} W ^ {i} \| \boldsymbol {x} - \boldsymbol {x} ^ {\prime} \| _ {\infty} \\ = B ^ {i + 1} W ^ {i + 1} \left\| \boldsymbol {x} - \boldsymbol {x} ^ {\prime} \right\| _ {\infty}, \\ \end{array}
+$$
+
+which implies the statement is true for $l = i + 1$ , completing the induction steps. Therefore, it holds that for all $l \in [L]$
+
+$$
+\left\| \boldsymbol {f} _ {l} (\boldsymbol {x}) - \boldsymbol {f} _ {l} \left(\boldsymbol {x} ^ {\prime}\right) \right\| _ {\infty} \leq B ^ {l} W ^ {l} \| \boldsymbol {x} - \boldsymbol {x} ^ {\prime} \| _ {\infty}.
+$$
+
+Thus, when $l = L$ , we have $\| \pmb{f}(\pmb{x}) - \pmb{f}(\pmb{x}^{\prime}) \|_{\infty} = \| \pmb{f}_{L}(\pmb{x}) - \pmb{f}_{L}(\pmb{x}^{\prime}) \|_{\infty} \leq B^{L} W^{L} \| \pmb{x} - \pmb{x}^{\prime} \|_{\infty}$ .
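The induction above can be probed numerically. The sketch below (illustrative sizes; random parameters, all names hypothetical) draws pairs of inputs and checks that the empirical Lipschitz ratio of a random ReLU MLP never exceeds $B^L W^L$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Empirical probe of Lemma C.5: a ReLU MLP with depth L, width W, and weight
# entries bounded by B should be B^L W^L-Lipschitz in sup-norm.
L_depth, W_width, B_bound = 3, 4, 0.9

def mlp(params, x):
    """(A_L ReLU(.) + b_L) o ... o (A_1 x + b_1), as in Definition 4.2."""
    h = x
    for i, (A, b) in enumerate(params):
        h = A @ h + b
        if i < len(params) - 1:
            h = np.maximum(h, 0.0)  # ReLU between affine layers
    return h

params = [(rng.uniform(-B_bound, B_bound, (W_width, W_width)),
           rng.uniform(-B_bound, B_bound, W_width)) for _ in range(L_depth)]

lip_bound = B_bound ** L_depth * W_width ** L_depth  # B^L W^L
ratios = []
for _ in range(500):
    x, xp = rng.uniform(0, 1, W_width), rng.uniform(0, 1, W_width)
    denom = np.max(np.abs(x - xp))
    if denom > 0:
        ratios.append(np.max(np.abs(mlp(params, x) - mlp(params, xp))) / denom)

assert max(ratios) <= lip_bound
```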
+
+# C.2. Covering number of the logit space of ARMs
+
We first characterize the output function space of the neural network without the softmax operation, i.e., the unnormalized distribution parameter space, commonly referred to as logits in ARMs. This result serves as the foundation for deriving the bracketing number of the conditional probability mass function for each dimension. The derivation carefully analyzes the covering number of outputs for the entire network, which consists of the embedding layer, the encoding layer, and an MLP. This analysis makes use of the previously established Lemma C.4 and Lemma C.5.
+
Lemma C.6 (Covering number of the unnormalized distribution parameter vectors). Let $\pmb{H}_{\theta}(\pmb{x},y) := [h_{\theta,1}(\pmb{x},y) \cdots h_{\theta,D}(\pmb{x},y)] : \mathcal{X} \times \mathcal{Y} \to \mathbb{R}^{M \times D}$ with $h_{\theta,d}(\pmb{x},y) = \pmb{f}_{\omega}\left(\pmb{v}_{\pmb{A}_0,\pmb{b}_0}^{\backslash \pmb{0}_{D-d}}\big(\pmb{E}_{\pmb{V}_Y,\pmb{V}_X}(\pmb{x},y)\big)\right) : \mathcal{X} \times \mathcal{Y} \to \mathbb{R}^M$ as defined in Section 4.2. Let $\mathcal{H}_{\theta} = \{\pmb{H}_{\theta}(\pmb{x},y) : \omega \in \mathcal{W}(L,W,S,B), \pmb{A}_0 \in [-B,B]^{D \times d_e}, \pmb{b}_0 \in [-B,B]^D, \pmb{V}_X \in [0,1]^{M \times d_e}, \pmb{V}_Y \in [0,1]^{K \times d_e}\}$ with constants $L, W, S, B > 0$. Then, given any $\epsilon > 0$, we have
+
+$$
+\mathcal {N} \left(\epsilon ; \mathcal {H} _ {\theta}, \| \cdot \| _ {\infty , L ^ {\infty} (\mathcal {X} \times \mathcal {Y})}\right) \leq \left(\frac {3 (L + 3) (B \vee 1) ^ {L + 2} (W + 1) ^ {L}}{\epsilon}\right) ^ {S + D + (D + M + K) d _ {\mathrm {e}}}
+$$
+
+Proof. For any $d \in [D]$ , $\pmb{h}_{\theta, d}$ can be written as $\pmb{f}_{\omega} \circ \pmb{v}_{\pmb{A}_0, \pmb{b}_0}^{\backslash \pmb{0}_{D - d}} \circ \pmb{E}_{\pmb{V}_Y, \pmb{V}_X}$ . Let the embedding space
+
+$$
\begin{array}{l} \mathcal {G} _ {\alpha} := \Big \{ \boldsymbol {G} _ {\theta} (\boldsymbol {x}, y) = \left[ \boldsymbol {v} _ {\boldsymbol {A} _ {0}, \boldsymbol {b} _ {0}} ^ {\backslash \boldsymbol {0} _ {D - 1}} \circ \boldsymbol {E} _ {\boldsymbol {V} _ {Y}, \boldsymbol {V} _ {X}} (\boldsymbol {x}, y), \dots , \boldsymbol {v} _ {\boldsymbol {A} _ {0}, \boldsymbol {b} _ {0}} ^ {\backslash \boldsymbol {0} _ {0}} \circ \boldsymbol {E} _ {\boldsymbol {V} _ {Y}, \boldsymbol {V} _ {X}} (\boldsymbol {x}, y) \right]: \alpha = \left\{\boldsymbol {A} _ {0}, \boldsymbol {b} _ {0}, \boldsymbol {V} _ {X}, \boldsymbol {V} _ {Y} \right\}, \\ \quad \boldsymbol {A} _ {0} \in [ - B, B ] ^ {D \times d _ {\mathrm {e}}}, \boldsymbol {b} _ {0} \in [ - B, B ] ^ {D}, \boldsymbol {V} _ {X} \in [ 0, 1 ] ^ {M \times d _ {\mathrm {e}}}, \boldsymbol {V} _ {Y} \in [ 0, 1 ] ^ {K \times d _ {\mathrm {e}}} \Big \}, \\ \end{array}
+$$
+
+where
+
+$$
\boldsymbol {v} _ {\boldsymbol {A} _ {0}, \boldsymbol {b} _ {0}} \circ \boldsymbol {E} _ {\boldsymbol {V} _ {Y}, \boldsymbol {V} _ {X}} (\boldsymbol {x}, y) = \left[ \begin{array}{c} \sigma \big (\boldsymbol {A} _ {0} [ 1,: ] \boldsymbol {V} _ {Y} [ y,: ] ^ {\top} + \boldsymbol {b} _ {0} [ 1 ] \big) \\ \vdots \\ \sigma \big (\boldsymbol {A} _ {0} [ D,: ] \boldsymbol {V} _ {X} [ x _ {D - 1},: ] ^ {\top} + \boldsymbol {b} _ {0} [ D ] \big) \end{array} \right] = \sigma \Big (\mathrm {diag} \big (\boldsymbol {A} _ {0} \boldsymbol {E} _ {\boldsymbol {V} _ {Y}, \boldsymbol {V} _ {X}} (\boldsymbol {x}, y) ^ {\top} \big) + \boldsymbol {b} _ {0} \Big) \in [ 0, 1 ] ^ {D}.
+$$
+
Given any $\delta > 0$, we first evaluate the $\delta$-covering number of $\mathcal{G}_{\alpha}$ w.r.t. $\| \cdot \|_{\infty ,L^{\infty}(\mathcal{X}\times \mathcal{Y})}$.
+
Covering number of the embedding layer. Let $\mathcal{S}_{\mathrm{entry}}(\pmb{A})$ denote the union of the value spaces of all entries in $\pmb{A}$ and $\mathcal{S}_{\mathrm{entry}}(\pmb{a})$ denote the union of the value spaces of all entries in $\pmb{a}$. We first discretize the parameter value spaces of $\mathcal{G}_{\alpha}$ into $\delta$-width grids to get a finite embedding function class:
+
+$$
+\mathcal {G} _ {\alpha , \delta \mathbb {Z}} := \left\{\pmb {G} _ {\theta} \in \mathcal {G} _ {\alpha}: \mathcal {S} _ {\mathrm {e n t r y}} (\pmb {A} _ {0}) = \mathcal {S} _ {\mathrm {e n t r y}} (\pmb {b} _ {0}) = [ - B, B ] \cap \delta \mathbb {Z}, \mathcal {S} _ {\mathrm {e n t r y}} (\pmb {V} _ {Y}) = \mathcal {S} _ {\mathrm {e n t r y}} (\pmb {V} _ {X}) = [ 0, 1 ] \cap \delta \mathbb {Z} \right\}.
+$$
+
Denote by $\| \alpha -\alpha^{\prime}\|_{\infty}\coloneqq \sup \bigl \{\| A_0 - A_0^{\prime}\|_{\infty},\| b_0 - b_0^{\prime}\|_{\infty},\| V_Y - V_Y^{\prime}\|_{\infty},\| V_X - V_X^{\prime}\|_{\infty}\bigr \}$ . For any $G_{\alpha}\in \mathcal{G}_{\alpha}$ with $\alpha \coloneqq \{A_0,b_0,V_X,V_Y\}$ , there exists $G_{\alpha^{\prime}}\in \mathcal{G}_{\alpha ,\delta \mathbb{Z}}$ with $\alpha^{\prime}\coloneqq \{A_0^{\prime},b_0^{\prime},V_X^{\prime},V_Y^{\prime}\}$ such that $\| \alpha -\alpha^{\prime}\|_{\infty}\leq \delta$ . Then we have for any $(\pmb{x},y)\in \mathcal{X}\times \mathcal{Y}$ ,
+
+$$
\begin{array}{l} \left\| \boldsymbol {G} _ {\alpha} (\boldsymbol {x}, y) - \boldsymbol {G} _ {\alpha ^ {\prime}} (\boldsymbol {x}, y) \right\| _ {\infty} \\ = \left\| \left[ \boldsymbol {v} _ {\boldsymbol {A} _ {0}, \boldsymbol {b} _ {0}} ^ {\backslash \boldsymbol {0} _ {D - 1}} \circ \boldsymbol {E} _ {\boldsymbol {V} _ {Y}, \boldsymbol {V} _ {X}} (\boldsymbol {x}, y) - \boldsymbol {v} _ {\boldsymbol {A} _ {0} ^ {\prime}, \boldsymbol {b} _ {0} ^ {\prime}} ^ {\backslash \boldsymbol {0} _ {D - 1}} \circ \boldsymbol {E} _ {\boldsymbol {V} _ {Y} ^ {\prime}, \boldsymbol {V} _ {X} ^ {\prime}} (\boldsymbol {x}, y), \dots , \boldsymbol {v} _ {\boldsymbol {A} _ {0}, \boldsymbol {b} _ {0}} ^ {\backslash \boldsymbol {0} _ {0}} \circ \boldsymbol {E} _ {\boldsymbol {V} _ {Y}, \boldsymbol {V} _ {X}} (\boldsymbol {x}, y) - \boldsymbol {v} _ {\boldsymbol {A} _ {0} ^ {\prime}, \boldsymbol {b} _ {0} ^ {\prime}} ^ {\backslash \boldsymbol {0} _ {0}} \circ \boldsymbol {E} _ {\boldsymbol {V} _ {Y} ^ {\prime}, \boldsymbol {V} _ {X} ^ {\prime}} (\boldsymbol {x}, y) \right] \right\| _ {\infty} \\ \leq \left\| \boldsymbol {v} _ {\boldsymbol {A} _ {0}, \boldsymbol {b} _ {0}} \circ \boldsymbol {E} _ {\boldsymbol {V} _ {Y}, \boldsymbol {V} _ {X}} (\boldsymbol {x}, y) - \boldsymbol {v} _ {\boldsymbol {A} _ {0} ^ {\prime}, \boldsymbol {b} _ {0} ^ {\prime}} \circ \boldsymbol {E} _ {\boldsymbol {V} _ {Y} ^ {\prime}, \boldsymbol {V} _ {X} ^ {\prime}} (\boldsymbol {x}, y) \right\| _ {\infty} \\ = \left\| \sigma \Big (\mathrm {diag} \big (\boldsymbol {A} _ {0} \boldsymbol {E} _ {\boldsymbol {V} _ {Y}, \boldsymbol {V} _ {X}} (\boldsymbol {x}, y) ^ {\top} \big) + \boldsymbol {b} _ {0} \Big) - \sigma \Big (\mathrm {diag} \big (\boldsymbol {A} _ {0} ^ {\prime} \boldsymbol {E} _ {\boldsymbol {V} _ {Y} ^ {\prime}, \boldsymbol {V} _ {X} ^ {\prime}} (\boldsymbol {x}, y) ^ {\top} \big) + \boldsymbol {b} _ {0} ^ {\prime} \Big) \right\| _ {\infty} \\ \leq \left\| \mathrm {diag} \big (\boldsymbol {A} _ {0} \boldsymbol {E} _ {\boldsymbol {V} _ {Y}, \boldsymbol {V} _ {X}} (\boldsymbol {x}, y) ^ {\top} \big) + \boldsymbol {b} _ {0} - \mathrm {diag} \big (\boldsymbol {A} _ {0} ^ {\prime} \boldsymbol {E} _ {\boldsymbol {V} _ {Y} ^ {\prime}, \boldsymbol {V} _ {X} ^ {\prime}} (\boldsymbol {x}, y) ^ {\top} \big) - \boldsymbol {b} _ {0} ^ {\prime} \right\| _ {\infty} \quad (\sigma (\cdot) \text { is 1-Lipschitz continuous by Lemma C.2}) \\ \leq \sup _ {d \in [ D ]} \left| \boldsymbol {A} _ {0} [ d,: ] \boldsymbol {E} _ {\boldsymbol {V} _ {Y}, \boldsymbol {V} _ {X}} (\boldsymbol {x}, y) [ d,: ] ^ {\top} - \boldsymbol {A} _ {0} ^ {\prime} [ d,: ] \boldsymbol {E} _ {\boldsymbol {V} _ {Y} ^ {\prime}, \boldsymbol {V} _ {X} ^ {\prime}} (\boldsymbol {x}, y) [ d,: ] ^ {\top} \right| + \left\| \boldsymbol {b} _ {0} - \boldsymbol {b} _ {0} ^ {\prime} \right\| _ {\infty} \\ \leq \sup _ {d \in [ D ]} \left| (\boldsymbol {A} _ {0} [ d,: ] - \boldsymbol {A} _ {0} ^ {\prime} [ d,: ]) \boldsymbol {E} _ {\boldsymbol {V} _ {Y}, \boldsymbol {V} _ {X}} (\boldsymbol {x}, y) [ d,: ] ^ {\top} \right| + \left| \boldsymbol {A} _ {0} ^ {\prime} [ d,: ] \big (\boldsymbol {E} _ {\boldsymbol {V} _ {Y}, \boldsymbol {V} _ {X}} (\boldsymbol {x}, y) [ d,: ] ^ {\top} - \boldsymbol {E} _ {\boldsymbol {V} _ {Y} ^ {\prime}, \boldsymbol {V} _ {X} ^ {\prime}} (\boldsymbol {x}, y) [ d,: ] ^ {\top} \big) \right| + \delta \\ \leq \sup _ {d \in [ D ]} d _ {\mathrm {e}} \delta \left\| \boldsymbol {E} _ {\boldsymbol {V} _ {Y}, \boldsymbol {V} _ {X}} (\boldsymbol {x}, y) [ d,: ] \right\| _ {\infty} + d _ {\mathrm {e}} B \left\| \boldsymbol {E} _ {\boldsymbol {V} _ {Y}, \boldsymbol {V} _ {X}} (\boldsymbol {x}, y) [ d,: ] - \boldsymbol {E} _ {\boldsymbol {V} _ {Y} ^ {\prime}, \boldsymbol {V} _ {X} ^ {\prime}} (\boldsymbol {x}, y) [ d,: ] \right\| _ {\infty} + \delta \quad (\boldsymbol {A} _ {0} \in [ - B, B ] ^ {D \times d _ {\mathrm {e}}}, \| \boldsymbol {A} _ {0} - \boldsymbol {A} _ {0} ^ {\prime} \| _ {\infty} \leq \delta) \\ = \left\{ \begin{array}{ll} d _ {\mathrm {e}} \delta \big \| \boldsymbol {V} _ {Y} [ y,: ] \big \| _ {\infty} + d _ {\mathrm {e}} B \big \| \boldsymbol {V} _ {Y} [ y,: ] - \boldsymbol {V} _ {Y} ^ {\prime} [ y,: ] \big \| _ {\infty} + \delta , & d = 1, \\ \sup _ {d \in [ D ]} d _ {\mathrm {e}} \delta \big \| \boldsymbol {V} _ {X} [ x _ {d - 1},: ] \big \| _ {\infty} + d _ {\mathrm {e}} B \big \| \boldsymbol {V} _ {X} [ x _ {d - 1},: ] - \boldsymbol {V} _ {X} ^ {\prime} [ x _ {d - 1},: ] \big \| _ {\infty} + \delta , & d = 2, \ldots , D, \end{array} \right. \\ \leq d _ {\mathrm {e}} \delta + d _ {\mathrm {e}} B \delta + \delta \quad (\| \boldsymbol {V} _ {Y} \| _ {\infty} \leq 1, \| \boldsymbol {V} _ {X} \| _ {\infty} \leq 1, \| \boldsymbol {V} _ {Y} - \boldsymbol {V} _ {Y} ^ {\prime} \| _ {\infty} \leq \delta , \| \boldsymbol {V} _ {X} - \boldsymbol {V} _ {X} ^ {\prime} \| _ {\infty} \leq \delta) \\ = (1 + d _ {\mathrm {e}} + d _ {\mathrm {e}} B) \delta . \\ \end{array}
+$$
+
Therefore, $\mathcal{G}_{\alpha ,\delta \mathbb{Z}}$ is a $(1 + d_{\mathrm{e}} + d_{\mathrm{e}}B)\delta$-cover of $\mathcal{G}_{\alpha}$ w.r.t. $\| \cdot \|_{\infty ,L^{\infty}(\mathcal{X}\times \mathcal{Y})}$ and thus we have
+
+$$
+\begin{array}{l} \mathcal {N} \Big ((1 + d _ {\mathrm {e}} + d _ {\mathrm {e}} B) \delta ; \mathcal {G} _ {\alpha}, \| \cdot \| _ {\infty , L ^ {\infty} (\mathfrak {X} \times \mathfrak {Y})} \Big) \leq | \mathcal {G} _ {\alpha , \delta \mathbb {Z}} | \\ = \left| \left\{\alpha = \left(\boldsymbol {A} _ {0}, \boldsymbol {b} _ {0}, \boldsymbol {V} _ {Y}, \boldsymbol {V} _ {X}\right): \mathcal {S} _ {\text {e n t r y}} (\boldsymbol {A} _ {0}) = \mathcal {S} _ {\text {e n t r y}} (\boldsymbol {b} _ {0}) = [ - B, B ] \cap \delta \mathbb {Z}, \mathcal {S} _ {\text {e n t r y}} (\boldsymbol {V} _ {Y}) = \mathcal {S} _ {\text {e n t r y}} (\boldsymbol {V} _ {X}) = [ 0, 1 ] \cap \delta \mathbb {Z} \right\} \right| \\ \leq \left(\frac {2 B}{\delta} + 1\right) ^ {D d _ {\mathrm {e}} + D} \left(\frac {1}{\delta} + 1\right) ^ {M d _ {\mathrm {e}} + K d _ {\mathrm {e}}} \tag {20} \\ \end{array}
+$$
+
Composition of an MLP. Let $\mathcal{C}_{\mathcal{F}}$ be an $\epsilon_{\mathcal{F}} = L(B \vee 1)^{L-1}(W+1)^{L}\delta$-cover of $\mathcal{F}(L, W, S, B)$ w.r.t. $\|\cdot\|_{\infty, L^{\infty}([0,1]^{D})}$ such that $|\mathcal{C}_{\mathcal{F}}| = \mathcal{N}\Big(\epsilon_{\mathcal{F}}; \mathcal{F}(L, W, B, S), \|\cdot\|_{\infty, L^{\infty}([0,1]^{D})}\Big)$ and $\mathcal{C}_{\mathcal{G}}$ be an $\epsilon_{\mathcal{G}} = (1 + d_{\mathrm{e}} + d_{\mathrm{e}}B)\delta$-cover of $\mathcal{G}_{\alpha}$ w.r.t. $\|\cdot\|_{\infty, L^{\infty}(\mathcal{X} \times \mathcal{Y})}$ such that $|\mathcal{C}_{\mathcal{G}}| = \mathcal{N}\Big(\epsilon_{\mathcal{G}}; \mathcal{G}_{\alpha}, \|\cdot\|_{\infty, L^{\infty}(\mathcal{X} \times \mathcal{Y})}\Big)$. For any $\pmb{H}_{\theta} := [\pmb{f}_{\omega} \circ \pmb{G}_{\alpha}[:, 1] \cdots \pmb{f}_{\omega} \circ \pmb{G}_{\alpha}[:, D]] \in \mathcal{H}_{\theta}$, there exist $\pmb{f}_{\omega}' \in \mathcal{C}_{\mathcal{F}}$ and $\pmb{G}_{\alpha}' \in \mathcal{C}_{\mathcal{G}}$ such that
+
+$$
\forall \pmb {v} \in [ 0, 1 ] ^ {D}, \| \pmb {f} _ {\omega} (\pmb {v}) - \pmb {f} _ {\omega} ^ {\prime} (\pmb {v}) \| _ {\infty} \leq \epsilon_ {\mathcal {F}}, \quad \text {and} \quad \forall (\pmb {x}, y) \in \mathcal {X} \times \mathcal {Y}, \| \pmb {G} _ {\alpha} (\pmb {x}, y) - \pmb {G} _ {\alpha} ^ {\prime} (\pmb {x}, y) \| _ {\infty} \leq \epsilon_ {\mathcal {G}}.
+$$
+
Let $\pmb{H}_{\theta}^{\prime} \coloneqq \left[\pmb{f}_{\omega}^{\prime} \circ \pmb{G}_{\alpha}^{\prime}[:,1] \cdots \pmb{f}_{\omega}^{\prime} \circ \pmb{G}_{\alpha}^{\prime}[:,D]\right]$. We have for all $(\pmb{x}, y) \in \mathcal{X} \times \mathcal{Y}$,
+
+$$
\begin{array}{l} \| \boldsymbol {H} _ {\theta} - \boldsymbol {H} _ {\theta} ^ {\prime} \| _ {\infty} = \left\| \left[ \boldsymbol {f} _ {\omega} \circ \boldsymbol {G} _ {\alpha} [:, 1 ] - \boldsymbol {f} _ {\omega} ^ {\prime} \circ \boldsymbol {G} _ {\alpha} ^ {\prime} [:, 1 ], \dots , \boldsymbol {f} _ {\omega} \circ \boldsymbol {G} _ {\alpha} [:, D ] - \boldsymbol {f} _ {\omega} ^ {\prime} \circ \boldsymbol {G} _ {\alpha} ^ {\prime} [:, D ] \right] \right\| _ {\infty} \\ = \sup _ {d} \| \boldsymbol {f} _ {\omega} \circ \boldsymbol {G} _ {\alpha} [:, d ] - \boldsymbol {f} _ {\omega} ^ {\prime} \circ \boldsymbol {G} _ {\alpha} ^ {\prime} [:, d ] \| _ {\infty} \\ \leq \sup _ {d} \left\{\left\| \boldsymbol {f} _ {\omega} \circ \boldsymbol {G} _ {\alpha} [:, d ] - \boldsymbol {f} _ {\omega} ^ {\prime} \circ \boldsymbol {G} _ {\alpha} [:, d ] \right\| _ {\infty} + \left\| \boldsymbol {f} _ {\omega} ^ {\prime} \circ \boldsymbol {G} _ {\alpha} [:, d ] - \boldsymbol {f} _ {\omega} ^ {\prime} \circ \boldsymbol {G} _ {\alpha} ^ {\prime} [:, d ] \right\| _ {\infty} \right\} \\ \leq \sup _ {d} \left\{\epsilon_ {\mathcal {F}} + B ^ {L} W ^ {L} \epsilon_ {\mathcal {G}} \right\} \quad (\boldsymbol {f} _ {\omega} ^ {\prime} \text { is } B ^ {L} W ^ {L} \text {-Lipschitz continuous as in Lemma C.5}) \\ = \epsilon_ {\mathcal {F}} + B ^ {L} W ^ {L} \epsilon_ {\mathcal {G}}. \\ \end{array}
+$$
+
Therefore, $\mathcal{C}_{\mathcal{H}} := \{\pmb{H}_{\theta}^{\prime} := \left[\pmb{f}_{\omega}^{\prime} \circ \pmb{G}_{\alpha}^{\prime}[:, 1] \cdots \pmb{f}_{\omega}^{\prime} \circ \pmb{G}_{\alpha}^{\prime}[:, D]\right]: \pmb{f}_{\omega}^{\prime} \in \mathcal{C}_{\mathcal{F}}, \pmb{G}_{\alpha}^{\prime} \in \mathcal{C}_{\mathcal{G}}\}$ is an $\epsilon_{\mathcal{F}} + B^{L}W^{L}\epsilon_{\mathcal{G}}$-cover of $\mathcal{H}_{\theta}$ w.r.t. $\| \cdot \|_{\infty, L^{\infty}(\mathcal{X} \times \mathcal{Y})}$, and thus
+
+$$
\begin{array}{l} \mathcal {N} \left(\left(L (B \vee 1) ^ {L - 1} (W + 1) ^ {L} + B ^ {L} W ^ {L} \left(1 + d _ {\mathrm {e}} + d _ {\mathrm {e}} B\right)\right) \delta ; \mathcal {H} _ {\theta}, \| \cdot \| _ {\infty , L ^ {\infty} (\mathcal {X} \times \mathcal {Y})}\right) \\ \leq \left| \mathcal {C} _ {\mathcal {H}} \right| \leq \left| \mathcal {C} _ {\mathcal {F}} \right| \left| \mathcal {C} _ {\mathcal {G}} \right| \\ = \mathcal {N} \Big (L (B \vee 1) ^ {L - 1} (W + 1) ^ {L} \delta ; \mathcal {F} (L, W, B, S), \| \cdot \| _ {\infty , L ^ {\infty} ([ 0, 1 ] ^ {D})} \Big) \mathcal {N} \Big ((1 + d _ {\mathrm {e}} + d _ {\mathrm {e}} B) \delta ; \mathcal {G} _ {\alpha}, \| \cdot \| _ {\infty , L ^ {\infty} (\mathcal {X} \times \mathcal {Y})} \Big) \\ \leq \left(\frac {2 B}{\delta} + 1\right) ^ {S} \left(\frac {2 B}{\delta} + 1\right) ^ {D d _ {\mathrm {e}} + D} \left(\frac {1}{\delta} + 1\right) ^ {M d _ {\mathrm {e}} + K d _ {\mathrm {e}}} \tag {Lemma C.4 and Equation (20)} \\ \leq \left(\frac {(2 B \vee 1)}{\delta} + 1\right) ^ {S + D d _ {\mathrm {e}} + D + M d _ {\mathrm {e}} + K d _ {\mathrm {e}}} \\ \leq \left(\frac {3 (B \vee 1)}{\delta}\right) ^ {S + D + (D + M + K) d _ {\mathrm {e}}}. \quad (\frac {(B \vee 1)}{\delta} \geq 1) \\ \end{array}
+$$
+
+Taking $\epsilon = \bigl (L(B\vee 1)^{L - 1}(W + 1)^{L} + B^{L}W^{L}(1 + d_{\mathrm{e}} + d_{\mathrm{e}}B)\bigr)\delta$ we have
+
+$$
\begin{array}{l} \mathcal {N} \Big (\epsilon ; \mathcal {H} _ {\theta}, \| \cdot \| _ {\infty , L ^ {\infty} (\mathcal {X} \times \mathcal {Y})} \Big) \leq \left(\frac {3 (B \vee 1) \big (L (B \vee 1) ^ {L - 1} (W + 1) ^ {L} + B ^ {L} W ^ {L} (1 + d _ {\mathrm {e}} + d _ {\mathrm {e}} B) \big)}{\epsilon}\right) ^ {S + D + (D + M + K) d _ {\mathrm {e}}} \\ \leq \left(\frac {3 (B \vee 1) \left(L (B \vee 1) ^ {L - 1} (W + 1) ^ {L} d _ {\mathrm {e}} (B \vee 1) + 3 B ^ {L} W ^ {L} d _ {\mathrm {e}} (B \vee 1)\right)}{\epsilon}\right) ^ {S + D + (D + M + K) d _ {\mathrm {e}}} \quad (d _ {\mathrm {e}}, (B \vee 1) \geq 1) \\ \leq \left(\frac {3 (B \vee 1) \left(L (B \vee 1) ^ {L + 1} (W + 1) ^ {L} d _ {\mathrm {e}} + 3 (B \vee 1) ^ {L + 1} (W + 1) ^ {L} d _ {\mathrm {e}}\right)}{\epsilon}\right) ^ {S + D + (D + M + K) d _ {\mathrm {e}}} \\ = \left(\frac {3 (L + 3) (B \vee 1) ^ {L + 2} (W + 1) ^ {L}}{\epsilon}\right) ^ {S + D + (D + M + K) d _ {\mathrm {e}}} \\ \end{array}
+$$
+
+# C.3. Bracketing number of conditional probability space on each dimension
+
Lemma D.1 (Covering number of the energy function class). Let $\mathcal{U}_{\theta} = \left\{u_{\theta}(\pmb{x}|y) = f_{\omega}\circ \pmb{e}_{\pmb{V}}(\pmb{x},y): f_{\omega}\in \mathcal{F}(L,W,B,S), \pmb{e}_{\pmb{V}}\in \mathcal{E}_{\pmb{V}}\right\}$ be the energy function class defined in Section 4.2 with constants $L, W, S, B > 0$, and $\mathcal{X} = [0,1]^D$, $\mathcal{Y} = [K]$. Then, given any $\epsilon > 0$, we have
+
+$$
+\mathcal {N} \left(\epsilon ; \mathcal {U} _ {\theta}, \| \cdot \| _ {\infty , L ^ {\infty} (\mathcal {X} \times \mathcal {Y})}\right) \leq \left(\frac {3 (L + 1) (B \vee 1) ^ {L + 1} (W + 1) ^ {L}}{\epsilon}\right) ^ {S + K d _ {\mathrm {e}}}.
+$$
+
+Proof. As defined, $\mathcal{U}_{\theta}$ can be written as
+
+$$
+\mathcal {U} _ {\theta} = \left\{u _ {\theta} (\boldsymbol {x} | y) = f _ {\omega} \circ \boldsymbol {e} _ {\boldsymbol {V}} (\boldsymbol {x}, y): f _ {\omega} \in \mathcal {F} (L, W, B, S), \boldsymbol {e} _ {\boldsymbol {V}} \in \mathcal {E} _ {\boldsymbol {V}} \right\} = \mathcal {F} (L, W, B, S) \circ \mathcal {E} _ {\boldsymbol {V}},
+$$
+
+where $\mathcal{E}_V = \{\pmb {e}_V(\pmb {x},y) = \left[ \begin{array}{c}\pmb {x}\\ \pmb {V}[y,:] \end{array} \right]:\pmb {V}\in [0,1]^{K\times d_{\mathrm{e}}}\}$ . Denote by $\mathcal{X}_1\coloneqq \mathcal{X}\times \mathcal{Y} = [0,1]^D\times [K]$ and $\mathcal{X}_2\coloneqq [0,1]^{d_{\mathrm{e}} + D}$ .
+
+Given any $\delta > 0$ , we first evaluate the $\delta$ -covering number of $\mathcal{E}_{\mathbf{V}}$ w.r.t. $\| \cdot \|_{\infty, L^{\infty}(\mathfrak{X}_1)}$ .
+
+Covering number of the embedding layer. Let $S_{\mathrm{entry}}(V)$ denote the value space of all entries in $V$ . We first discretize the value spaces of $V$ into $\delta$ -width grids to get a finite embedding function class:
+
+$$
+\mathcal {E} _ {\boldsymbol {V}, \delta \mathbb {Z}} := \left\{\boldsymbol {e} _ {\boldsymbol {V}} \in \mathcal {E} _ {\boldsymbol {V}}: \mathcal {S} _ {\text {e n t r y}} (\boldsymbol {V}) = [ 0, 1 ] \cap \delta \mathbb {Z} \right\}.
+$$
+
For any $\pmb{e}_{\pmb{V}} \in \mathcal{E}_{\pmb{V}}$, there exists $\pmb{e}_{\pmb{V}'} \in \mathcal{E}_{\pmb{V},\delta \mathbb{Z}}$ such that $\| \pmb{V} - \pmb{V}' \|_{\infty} \leq \delta$. Then we have for any $(\pmb{x}, y) \in \mathcal{X}_1$,
+
+$$
+\left\| e _ {\boldsymbol {V}} (\boldsymbol {x}, y) - e _ {\boldsymbol {V} ^ {\prime}} (\boldsymbol {x}, y) \right\| _ {\infty} = \left\| \left[ \begin{array}{c} \boldsymbol {x} \\ \boldsymbol {V} [ y,: ] \end{array} \right] - \left[ \begin{array}{c} \boldsymbol {x} \\ \boldsymbol {V} ^ {\prime} [ y,: ] \end{array} \right] \right\| _ {\infty} = \left\| \left[ \begin{array}{c} \boldsymbol {0} \\ \boldsymbol {V} [ y,: ] - \boldsymbol {V} ^ {\prime} [ y,: ] \end{array} \right] \right\| _ {\infty} \leq \| \boldsymbol {V} - \boldsymbol {V} ^ {\prime} \| _ {\infty} \leq \delta . \tag {22}
+$$
+
Therefore, $\mathcal{E}_{\mathbf{V},\delta \mathbb{Z}}$ is a $\delta$-cover of $\mathcal{E}_{\mathbf{V}}$ w.r.t. $\| \cdot \|_{\infty ,L^{\infty}(\mathcal{X}_1)}$ and thus we have
+
+$$
\mathcal {N} \Big (\delta ; \mathcal {E} _ {\boldsymbol {V}}, \| \cdot \| _ {\infty , L ^ {\infty} \left(\mathcal {X} _ {1}\right)} \Big) \leq | \mathcal {E} _ {\boldsymbol {V}, \delta \mathbb {Z}} | = \left| \left\{\boldsymbol {V}: \mathcal {S} _ {\text {entry}} (\boldsymbol {V}) = [ 0, 1 ] \cap \delta \mathbb {Z} \right\} \right| \leq \left(\frac {1}{\delta} + 1\right) ^ {K d _ {\mathrm {e}}}.
+$$
+
+Composite energy function. According to Lemma C.3, given any $\epsilon_{\mathcal{F}}, \epsilon_{\mathcal{E}} > 0$ , the covering number of $\mathcal{U}_{\theta}$ is bounded by
+
+$$
\mathcal {N} \Big (\epsilon_ {\mathcal {F}} + \kappa_ {\mathcal {F}} \epsilon_ {\mathcal {E}}; \mathcal {U} _ {\theta}, \| \cdot \| _ {\infty , L ^ {\infty} \left(\mathcal {X} _ {1}\right)} \Big) \leq \mathcal {N} \Big (\epsilon_ {\mathcal {F}}; \mathcal {F} (L, W, B, S), \| \cdot \| _ {\infty , L ^ {\infty} \left(\mathcal {X} _ {2}\right)} \Big) \mathcal {N} \Big (\epsilon_ {\mathcal {E}}; \mathcal {E} _ {\boldsymbol {V}}, \| \cdot \| _ {\infty , L ^ {\infty} \left(\mathcal {X} _ {1}\right)} \Big).
+$$
+
+According to Lemma C.5, $\kappa_{\mathcal{F}} = B^{L}W^{L}$ . Further taking $\epsilon_{\mathcal{F}} = L(B\vee 1)^{L - 1}(W + 1)^{L}\delta$ and $\epsilon_{\mathcal{E}} = \delta$ , we have
+
+$$
+\epsilon_ {\mathcal {F}} + \kappa_ {\mathcal {F}} \epsilon_ {\mathcal {E}} = L (B \lor 1) ^ {L - 1} (W + 1) ^ {L} \delta + B ^ {L} W ^ {L} \delta = \left(L (B \lor 1) ^ {L - 1} (W + 1) ^ {L} + B ^ {L} W ^ {L}\right) \delta .
+$$
+
+According to Lemma C.4 and Equation (22), we have
+
+$$
\mathcal {N} \left(\epsilon_ {\mathcal {F}}; \mathcal {F} (L, W, B, S), \| \cdot \| _ {\infty , L ^ {\infty} \left(\mathcal {X} _ {2}\right)}\right) \leq \left(\frac {2 B}{\delta} + 1\right) ^ {S}, \quad \text {and} \quad \mathcal {N} \left(\epsilon_ {\mathcal {E}}; \mathcal {E} _ {\boldsymbol {V}}, \| \cdot \| _ {\infty , L ^ {\infty} \left(\mathcal {X} _ {1}\right)}\right) \leq \left(\frac {1}{\delta} + 1\right) ^ {K d _ {\mathrm {e}}}.
+$$
+
+Therefore,
+
+$$
+\begin{array}{l} \mathcal {N} \left(\left(L (B \vee 1) ^ {L - 1} (W + 1) ^ {L} + B ^ {L} W ^ {L}\right) \delta ; \mathcal {U} _ {\theta}, \| \cdot \| _ {\infty , L ^ {\infty} (\mathcal {X} _ {1})}\right) \leq \left(\frac {2 B}{\delta} + 1\right) ^ {S} \left(\frac {1}{\delta} + 1\right) ^ {K d _ {\mathrm {e}}} \\ \leq \left(\frac {(2 B \vee 1)}{\delta} + 1\right) ^ {S + K d _ {\mathrm {e}}} \\ \leq \left(\frac {2 (B \vee 1)}{\delta} + 1\right) ^ {S + K d _ {\mathrm {e}}} \\ \leq \left(\frac {3 (B \vee 1)}{\delta}\right) ^ {S + K d _ {\mathrm {e}}}. \quad (\frac {(B \vee 1)}{\delta} \geq 1) \\ \end{array}
+$$
+
+Taking $\epsilon = \bigl (L(B\lor 1)^{L - 1}(W + 1)^{L} + B^{L}W^{L}\bigr)\delta$ , we have
+
+$$
\begin{array}{l} \mathcal {N} \left(\epsilon ; \mathcal {U} _ {\theta}, \| \cdot \| _ {\infty , L ^ {\infty} \left(\mathcal {X} _ {1}\right)}\right) \leq \left(\frac {3 (B \vee 1) \left(L (B \vee 1) ^ {L - 1} (W + 1) ^ {L} + B ^ {L} W ^ {L}\right)}{\epsilon}\right) ^ {S + K d _ {\mathrm {e}}} \\ \leq \left(\frac {3 (B \vee 1) \left(L (B \vee 1) ^ {L} (W + 1) ^ {L} + (B \vee 1) ^ {L} (W + 1) ^ {L}\right)}{\epsilon}\right) ^ {S + K d _ {\mathrm {e}}} \\ = \left(\frac {3 (L + 1) (B \vee 1) ^ {L + 1} (W + 1) ^ {L}}{\epsilon}\right) ^ {S + K d _ {\mathrm {e}}}, \\ \end{array}
+$$
+
+which completes the proof.
+
+# D.2. Bracketing number of the conditional distribution via the energy function
+
Lemma D.2 (Bracketing number of the conditional distribution via the energy function). Given the class of energy-based conditional distributions $\mathcal{P}_{X|Y} = \left\{p_{\theta}(\boldsymbol{x}|y) = \frac{e^{-u_{\theta}(\boldsymbol{x}|y)}}{\int_{\mathcal{X}} e^{-u_{\theta}(\boldsymbol{x}|y)} d\boldsymbol{x}} : u_{\theta} \in \mathcal{U}_{\theta}\right\}$, for any $0 < \epsilon \leq 1$, it holds that
+
+$$
+\mathcal {N} _ {[ ]} \left(\epsilon ; \mathcal {P} _ {X | Y}, L ^ {1} (\mathcal {X})\right) \leq \mathcal {N} \left(\frac {\epsilon}{4 e}; \mathcal {U} _ {\theta}, \| \cdot \| _ {\infty , L ^ {\infty} (\mathcal {X} \times \mathcal {Y})}\right).
+$$
+
Proof. Let $\mathcal{C}_{\mathcal{U}}$ be an $\epsilon_{\mathcal{U}}$-cover of $\mathcal{U}_{\theta}$ w.r.t. $\| \cdot \|_{\infty ,L^{\infty}(\mathcal{X}\times \mathcal{Y})}$ such that $|\mathcal{C}_{\mathcal{U}}| = \mathcal{N}\Big(\epsilon_{\mathcal{U}};\mathcal{U}_{\theta},\| \cdot \|_{\infty ,L^{\infty}(\mathcal{X}\times \mathcal{Y})}\Big)$. For any $p_{\theta}(\boldsymbol {x}|y) = \frac{e^{-u_{\theta}(\boldsymbol{x}|y)}}{\int_{\mathcal{X}}e^{-u_{\theta}(\boldsymbol{x}|y)}d\boldsymbol{x}}\in \mathcal{P}_{X|Y}$, there exists $u_{\theta}^{\prime}\in \mathcal{C}_{\mathcal{U}}$ such that for all $\boldsymbol {x}\in \mathcal{X}$ and $y\in \mathcal{Y}$, $|u_{\theta}(\boldsymbol {x}|y) - u_{\theta}^{\prime}(\boldsymbol {x}|y)|\leq \epsilon_{\mathcal{U}}$, which implies $u_{\theta}^{\prime}(\boldsymbol {x}|y) - \epsilon_{\mathcal{U}} \leq u_{\theta}(\boldsymbol {x}|y) \leq u_{\theta}^{\prime}(\boldsymbol {x}|y) + \epsilon_{\mathcal{U}}$.
+
Let $p_{\theta}^{\prime}(\boldsymbol {x}|y) = \frac{e^{-u_{\theta}^{\prime}(\boldsymbol{x}|y) + 2\epsilon_{\mathcal{U}}}}{\int_{\mathcal{X}}e^{-u_{\theta}^{\prime}(\boldsymbol{x}|y)}d\boldsymbol{x}}$. Then we immediately obtain that, given any $y\in \mathcal{Y}$,
+
+$$
+\forall x \in \mathcal {X}, p _ {\theta} ^ {\prime} (\boldsymbol {x} | y) = \frac {e ^ {- u _ {\theta} ^ {\prime} (\boldsymbol {x} | y) + \epsilon_ {\mathcal {U}}}}{\int_ {\mathcal {X}} e ^ {- u _ {\theta} ^ {\prime} (\boldsymbol {x} | y) - \epsilon_ {\mathcal {U}}} d \boldsymbol {x}} \geq \frac {e ^ {- u _ {\theta} (\boldsymbol {x} | y)}}{\int_ {\mathcal {X}} e ^ {- u _ {\theta} (\boldsymbol {x} | y)} d \boldsymbol {x}} = p _ {\theta} (\boldsymbol {x} | y), \tag {23}
+$$
+
+since for all $\pmb{x} \in \mathcal{X}, y \in \mathcal{Y}$ , $e^{-u_{\theta}'(\pmb{x}|y) + \epsilon_{\mathcal{U}}} \geq e^{-u_{\theta}(\pmb{x}|y)}$ and $\int_{\mathcal{X}} e^{-u_{\theta}'(\pmb{x}|y) - \epsilon_{\mathcal{U}}} d\pmb{x} \leq \int_{\mathcal{X}} e^{-u_{\theta}(\pmb{x}|y)} d\pmb{x}$ .
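The pointwise domination in Equation (23) can be checked numerically on a one-dimensional toy example (the energy functions and the Riemann-sum normalizers below are illustrative stand-ins):

```python
import numpy as np

# Toy 1-D check of the bracketing construction: if the energies satisfy
# |u - u'| <= eps pointwise, then the upper function
# p'(x) = exp(-u'(x) + 2*eps) / Z' dominates p(x) = exp(-u(x)) / Z pointwise.
eps = 0.1
xs = np.linspace(0.0, 1.0, 2001)
dx = xs[1] - xs[0]
u = np.sin(4 * xs) + xs ** 2            # stand-in for u_theta(.|y)
u_prime = u + eps * np.cos(7 * xs)      # cover element, |u - u'| <= eps

Z = np.sum(np.exp(-u)) * dx             # Riemann-sum normalizer of p_theta
Z_prime = np.sum(np.exp(-u_prime)) * dx # normalizer built from u'
p = np.exp(-u) / Z
p_upper = np.exp(-u_prime + 2 * eps) / Z_prime  # the bracketing upper function

assert np.all(p_upper >= p)             # pointwise domination, as in (23)
```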
+
+Moreover, we can bound the $L^1(\mathcal{X})$ distance between $p_\theta'(\cdot | y)$ and $p_\theta(\cdot | y)$ as
+
+$$
\begin{array}{l} \left\| p _ {\theta} ^ {\prime} (\cdot | y) - p _ {\theta} (\cdot | y) \right\| _ {L ^ {1} (\mathcal {X})} \\ = \int_ {\mathcal {X}} | p _ {\theta} ^ {\prime} (\boldsymbol {x} | y) - p _ {\theta} (\boldsymbol {x} | y) | d \boldsymbol {x} \\ = \int_ {\mathcal {X}} \left| \frac {e ^ {- u _ {\theta} ^ {\prime} (\boldsymbol {x} | y) + 2 \epsilon_ {\mathcal {U}}}}{\int_ {\mathcal {X}} e ^ {- u _ {\theta} ^ {\prime} (\boldsymbol {s} | y)} d \boldsymbol {s}} - \frac {e ^ {- u _ {\theta} (\boldsymbol {x} | y)}}{\int_ {\mathcal {X}} e ^ {- u _ {\theta} (\boldsymbol {s} | y)} d \boldsymbol {s}} \right| d \boldsymbol {x} \\ = \int_ {\mathcal {X}} \left| \frac {e ^ {- u _ {\theta} ^ {\prime} (\boldsymbol {x} | y) + 2 \epsilon_ {\mathcal {U}}} \int_ {\mathcal {X}} e ^ {- u _ {\theta} (\boldsymbol {s} | y)} d \boldsymbol {s} - e ^ {- u _ {\theta} (\boldsymbol {x} | y)} \int_ {\mathcal {X}} e ^ {- u _ {\theta} ^ {\prime} (\boldsymbol {s} | y)} d \boldsymbol {s}}{\int_ {\mathcal {X}} e ^ {- u _ {\theta} ^ {\prime} (\boldsymbol {s} | y)} d \boldsymbol {s} \int_ {\mathcal {X}} e ^ {- u _ {\theta} (\boldsymbol {s} | y)} d \boldsymbol {s}} \right| d \boldsymbol {x} \\ \leq \int_ {\mathcal {X}} \left| \frac {\left(e ^ {- u _ {\theta} ^ {\prime} (\boldsymbol {x} | y) + 2 \epsilon_ {\mathcal {U}}} - e ^ {- u _ {\theta} (\boldsymbol {x} | y)}\right) \int_ {\mathcal {X}} e ^ {- u _ {\theta} (\boldsymbol {s} | y)} d \boldsymbol {s} + e ^ {- u _ {\theta} (\boldsymbol {x} | y)} \left(\int_ {\mathcal {X}} e ^ {- u _ {\theta} (\boldsymbol {s} | y)} d \boldsymbol {s} - \int_ {\mathcal {X}} e ^ {- u _ {\theta} ^ {\prime} (\boldsymbol {s} | y)} d \boldsymbol {s}\right)}{\int_ {\mathcal {X}} e ^ {- u _ {\theta} ^ {\prime} (\boldsymbol {s} | y)} d \boldsymbol {s} \int_ {\mathcal {X}} e ^ {- u _ {\theta} (\boldsymbol {s} | y)} d \boldsymbol {s}} \right| d \boldsymbol {x} \\ \leq \int_ {\mathcal {X}} \frac {\left| e ^ {- u _ {\theta} ^ {\prime} (\boldsymbol {x} | y) + 2 \epsilon_ {\mathcal {U}}} - e ^ {-
u _ {\theta} (\boldsymbol {x} | y)} \right| \int_ {\mathcal {X}} e ^ {- u _ {\theta} (\boldsymbol {s} | y)} d \boldsymbol {s} + e ^ {- u _ {\theta} (\boldsymbol {x} | y)} \int_ {\mathcal {X}} \left| e ^ {- u _ {\theta} (\boldsymbol {s} | y)} - e ^ {- u _ {\theta} ^ {\prime} (\boldsymbol {s} | y)} \right| d \boldsymbol {s}}{\int_ {\mathcal {X}} e ^ {- u _ {\theta} ^ {\prime} (\boldsymbol {s} | y)} d \boldsymbol {s} \int_ {\mathcal {X}} e ^ {- u _ {\theta} (\boldsymbol {s} | y)} d \boldsymbol {s}} d \boldsymbol {x} \\ \leq \int_ {\mathcal {X}} \frac {e ^ {- u _ {\theta} ^ {\prime} (\boldsymbol {x} | y) + 2 \epsilon_ {\mathcal {U}}} \left| u _ {\theta} (\boldsymbol {x} | y) - u _ {\theta} ^ {\prime} (\boldsymbol {x} | y) + 2 \epsilon_ {\mathcal {U}} \right| \int_ {\mathcal {X}} e ^ {- u _ {\theta} (\boldsymbol {s} | y)} d \boldsymbol {s} + e ^ {- u _ {\theta} (\boldsymbol {x} | y)} \int_ {\mathcal {X}} e ^ {\left(- u _ {\theta} (\boldsymbol {s} | y)\right) \vee \left(- u _ {\theta} ^ {\prime} (\boldsymbol {s} | y)\right)} \left| u _ {\theta} (\boldsymbol {s} | y) - u _ {\theta} ^ {\prime} (\boldsymbol {s} | y) \right| d \boldsymbol {s}}{\int_ {\mathcal {X}} e ^ {- u _ {\theta} ^ {\prime} (\boldsymbol {s} | y)} d \boldsymbol {s} \int_ {\mathcal {X}} e ^ {- u _ {\theta} (\boldsymbol {s} | y)} d \boldsymbol {s}} d \boldsymbol {x} \\ \left(\left| e ^ {a} - e ^ {b} \right| = \left| \int_ {b} ^ {a} e ^ {x} d x \right| \leq \left| \int_ {b} ^ {a} e ^ {a \vee b} d x \right| = e ^ {a \vee b} | a - b |\right) \\ \leq \int_ {\mathcal {X}} \frac {e ^ {- u _ {\theta} ^ {\prime} (\boldsymbol {x} | y) + 2 \epsilon_ {\mathcal {U}}} 3 \epsilon_ {\mathcal {U}} \int_ {\mathcal {X}} e ^ {- u _ {\theta} (\boldsymbol {s} | y)} d \boldsymbol {s} + e ^ {- u _ {\theta} (\boldsymbol {x} | y)} \int_ {\mathcal {X}} e ^ {- u _ {\theta} ^ {\prime} (\boldsymbol {s} | y) + \epsilon_ {\mathcal {U}}} \epsilon_ {\mathcal {U}} d \boldsymbol {s}}{\int_ {\mathcal {X}} e ^ {- u _ {\theta} ^ {\prime} 
(\boldsymbol {s} | y)} d \boldsymbol {s} \int_ {\mathcal {X}} e ^ {- u _ {\theta} (\boldsymbol {s} | y)} d \boldsymbol {s}} d \boldsymbol {x} \\ (\forall x \in \mathcal {X}, y \in \mathcal {Y}, \left| u _ {\theta} (\boldsymbol {x} | y) - u _ {\theta} ^ {\prime} (\boldsymbol {x} | y) \right| \leq \epsilon_ {\mathcal {U}}) \\ \leq \int_ {\mathcal {X}} \frac {3 \epsilon_ {\mathcal {U}} e ^ {- u _ {\theta} (\boldsymbol {x} | y) + 3 \epsilon_ {\mathcal {U}}} \int_ {\mathcal {X}} e ^ {- u _ {\theta} ^ {\prime} (\boldsymbol {s} | y) + \epsilon_ {\mathcal {U}}} d \boldsymbol {s} + \epsilon_ {\mathcal {U}} e ^ {- u _ {\theta} (\boldsymbol {x} | y)} \int_ {\mathcal {X}} e ^ {- u _ {\theta} ^ {\prime} (\boldsymbol {s} | y) + \epsilon_ {\mathcal {U}}} d \boldsymbol {s}}{\int_ {\mathcal {X}} e ^ {- u _ {\theta} ^ {\prime} (\boldsymbol {s} | y)} d \boldsymbol {s} \int_ {\mathcal {X}} e ^ {- u _ {\theta} (\boldsymbol {s} | y)} d \boldsymbol {s}} d \boldsymbol {x} \\ (\forall x \in \mathcal {X}, y \in \mathcal {Y}, \left| u _ {\theta} (\boldsymbol {x} | y) - u _ {\theta} ^ {\prime} (\boldsymbol {x} | y) \right| \leq \epsilon_ {\mathcal {U}}) \\ = \int_ {\mathcal {X}} \frac {3 \epsilon_ {\mathcal {U}} e ^ {4 \epsilon_ {\mathcal {U}}} e ^ {- u _ {\theta} (\boldsymbol {x} | y)} \int_ {\mathcal {X}} e ^ {- u _ {\theta} ^ {\prime} (\boldsymbol {s} | y)} d \boldsymbol {s} + \epsilon_ {\mathcal {U}} e ^ {\epsilon_ {\mathcal {U}}} e ^ {- u _ {\theta} (\boldsymbol {x} | y)} \int_ {\mathcal {X}} e ^ {- u _ {\theta} ^ {\prime} (\boldsymbol {s} | y)} d \boldsymbol {s}}{\int_ {\mathcal {X}} e ^ {- u _ {\theta} ^ {\prime} (\boldsymbol {s} | y)} d \boldsymbol {s} \int_ {\mathcal {X}} e ^ {- u _ {\theta} (\boldsymbol {s} | y)} d \boldsymbol {s}} d \boldsymbol {x} \\ = \int_ {\mathcal {X}} \frac {\left(3 \epsilon_ {\mathcal {U}} e ^ {4 \epsilon_ {\mathcal {U}}} + \epsilon_ {\mathcal {U}} e ^ {\epsilon_ {\mathcal {U}}}\right) e ^ {- u _ {\theta} (\boldsymbol {x} | y)}}{\int_ {\mathcal {X}} e ^ {- u 
_ {\theta} (\boldsymbol {s} | y)} d \boldsymbol {s}} d \boldsymbol {x} = 3 \epsilon_ {\mathcal {U}} e ^ {4 \epsilon_ {\mathcal {U}}} + \epsilon_ {\mathcal {U}} e ^ {\epsilon_ {\mathcal {U}}} \leq 4 \epsilon_ {\mathcal {U}} e ^ {(4 \epsilon_ {\mathcal {U}}) \vee 1}. \tag {24} \\ \end{array}
+$$
+
+Therefore, $\mathcal{B}_{\mathcal{P}} := \left\{p_{\theta}'(\boldsymbol{x}|y) = \frac{e^{-u_{\theta}'(\boldsymbol{x}|y) + \epsilon_{\mathcal{U}}}}{\int_{\mathcal{X}} e^{-u_{\theta}'(\boldsymbol{x}|y) - \epsilon_{\mathcal{U}}} d\boldsymbol{x}} : -u_{\theta}'(\boldsymbol{x}|y) \in \mathcal{C}_{\mathcal{U}}\right\}$ is a $4\epsilon_{\mathcal{U}} e^{(4\epsilon_{\mathcal{U}})\vee 1}$-upper bracket of $\mathcal{P}_{X|Y}$ w.r.t. $L^1(\mathcal{X})$. Thus we have
+
+$$
+\mathcal {N} _ {[ ]} \left(4 \epsilon_ {\mathcal {U}} e ^ {(4 \epsilon_ {\mathcal {U}}) \vee 1}; \mathcal {P} _ {X | Y}, L ^ {1} (\mathcal {X})\right) \leq | \mathcal {B} _ {\mathcal {P}} | = | \mathcal {C} _ {\mathcal {U}} | = \mathcal {N} \Big (\epsilon_ {\mathcal {U}}; \mathcal {U} _ {\theta}, \| \cdot \| _ {\infty , L ^ {\infty} (\mathcal {X} \times \mathcal {Y})} \Big).
+$$
+
+Setting $4\epsilon_{\mathcal{U}}e^{(4\epsilon_{\mathcal{U}})\vee 1} = \epsilon$ with $0 < \epsilon \leq 1$, we have $4\epsilon_{\mathcal{U}} = \frac{\epsilon}{e^{(4\epsilon_{\mathcal{U}})\vee 1}} \leq 1$, hence $(4\epsilon_{\mathcal{U}}) \vee 1 = 1$ and $4\epsilon_{\mathcal{U}}e^{(4\epsilon_{\mathcal{U}})\vee 1} = 4e\epsilon_{\mathcal{U}}$, so that $\epsilon_{\mathcal{U}} = \frac{\epsilon}{4e}$. Therefore, for any $0 < \epsilon \leq 1$,
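This substitution can be sanity-checked numerically. The short Python check below (our own illustration, not part of the proof) confirms that $\epsilon_{\mathcal{U}} = \frac{\epsilon}{4e}$ makes the bracket radius $4\epsilon_{\mathcal{U}}e^{(4\epsilon_{\mathcal{U}})\vee 1}$ equal to $\epsilon$ for every $0 < \epsilon \leq 1$:

```python
import math

def bracket_radius(eps_u):
    """Left-hand side 4*eps_u * exp(max(4*eps_u, 1)) from Equation (24)."""
    return 4.0 * eps_u * math.exp(max(4.0 * eps_u, 1.0))

# For 0 < eps <= 1, eps_u = eps/(4e) gives 4*eps_u = eps/e <= 1, so the
# exponent is max(eps/e, 1) = 1 and the radius is (eps/e)*e = eps exactly.
for eps in [0.01, 0.1, 0.5, 1.0]:
    eps_u = eps / (4.0 * math.e)
    assert 4.0 * eps_u <= 1.0
    assert abs(bracket_radius(eps_u) - eps) < 1e-12
```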
+
+$$
+\mathcal {N} _ {[ ]} \Big (\epsilon ; \mathcal {P} _ {X | Y}, L ^ {1} (\mathcal {X}) \Big) \leq \mathcal {N} \Big (\frac {\epsilon}{4 e}; \mathcal {U} _ {\theta}, \| \cdot \| _ {\infty , L ^ {\infty} (\mathcal {X} \times \mathcal {Y})} \Big).
+$$
+
+# D.3. Proof of Theorem 4.4
+
+Based on the relation between the bracketing number of the conditional distribution space $\mathcal{P}_{X|Y}$ and the covering number of the energy function space $\mathcal{U}_{\theta}$ derived in the previous lemmas, we obtain the final result.
+
+Proof of Theorem 4.4. With conditional distributions as defined in Equation (7), we have
+
+$$
+\mathcal {P} _ {X | Y} ^ {\mathrm {m u l t i}} = \left\{p _ {\theta} (\boldsymbol {x} | y) = \frac {e ^ {- u _ {\theta} (\boldsymbol {x} | y)}}{\int_ {\mathcal {X}} e ^ {- u _ {\theta} (\boldsymbol {x} | y)} d \boldsymbol {x}}: u _ {\theta} \in \mathcal {U} _ {\theta} \right\},
+$$
+
+where
+
+$$
+\mathcal {U} _ {\theta} = \left\{u _ {\theta} (\boldsymbol {x} | y) = f _ {\omega} \circ \boldsymbol {e} _ {\boldsymbol {V}} (\boldsymbol {x}, y): \omega \in \mathcal {W} (L, W, S, B), \boldsymbol {V} [ k,: ] \in [ 0, 1 ] ^ {d _ {\mathrm {e}}} \right\}.
+$$
+
+Let $\epsilon_{\mathcal{U}} > 0$ be a constant. According to Lemma D.1, we have $\mathcal{N}\Big(\epsilon_{\mathcal{U}};\mathcal{U}_{\theta},\| \cdot \|_{\infty ,L^{\infty}(\mathcal{X}\times \mathcal{Y})}\Big)\leq \left(\frac{3(L + 1)(B\vee 1)^{L + 1}(W + 1)^L}{\epsilon_{\mathcal{U}}}\right)^{S + Kd_{\mathrm{e}}}$. Then with Lemma D.2, we further obtain that
+
+$$
+\mathcal {N} _ {[ ]} \left(\frac {1}{n}; \mathcal {P} _ {X | Y} ^ {\mathrm {m u l t i}}, L ^ {1} (\mathcal {X})\right) \leq \mathcal {N} \left(\frac {1}{4 e n}; \mathcal {U} _ {\theta}, \| \cdot \| _ {\infty , L ^ {\infty} (\mathcal {X} \times \mathcal {Y})}\right) \leq \left(12 e n (L + 1) (B \vee 1) ^ {L + 1} (W + 1) ^ {L}\right) ^ {S + K d _ {\mathrm {e}}}.
+$$
+
+According to Theorem 3.2, we arrive at the conclusion that
+
+$$
+\begin{array}{l} \mathcal {R} _ {\overline {\mathrm {T V}}} \left(\hat {p} _ {X | Y} ^ {\mathrm {m u l t i}}\right) \leq 3 \sqrt {\frac {1}{n} \left(\log \mathcal {N} _ {[ ]} \left(\frac {1}{n}; \mathcal {P} _ {X | Y} ^ {\mathrm {m u l t i}}, L ^ {1} (\mathcal {X})\right) + \log \frac {1}{\delta}\right)} \\ \leq 3 \sqrt {\frac {1}{n} \left((S + K d _ {\mathrm {e}}) \log \big (12 e n (L + 1) (B \vee 1) ^ {L + 1} (W + 1) ^ {L} \big) + \log \frac {1}{\delta}\right)} \\ \leq 3 \sqrt {\frac {1}{n} \left(L (S + K d _ {\mathrm {e}}) \log \left(12 e n (L + 1) ^ {\frac {1}{L}} (B \vee 1) ^ {1 + \frac {1}{L}} (W + 1)\right) + \log \frac {1}{\delta}\right)}. \\ \end{array}
+$$
+
+Omitting constant factors and logarithmic terms in $n, K, d_{\mathrm{e}}, L, W, S, B$, we have $\mathcal{R}_{\overline{\mathrm{TV}}}(\hat{p}_{X|Y}^{\mathrm{multi}}) = \tilde{\mathcal{O}}\left(\sqrt{\frac{L(S + Kd_{\mathrm{e}})}{n}}\right)$.
+
+# D.4. Average TV error bound under single-source training
+
+Theorem D.3 (average TV error bound for EBMs under single-source training). Let $\hat{p}_{X|Y}^{\mathrm{single}}$ be the likelihood maximizer defined in Equation (3) given $\mathcal{P}_{X|Y}^{\mathrm{single}}$ with conditional distributions as in Equation (7). Suppose that $\Phi = [0,1]^{d_{\mathrm{e}}}$ and $\Psi = \mathcal{W}(L,W,S,B)$, and assume $\phi_k^* \in \Phi$, $\psi^* \in \Psi$. Then, for any $0 < \delta \leq 1/2$, it holds with probability at least $1 - \delta$ that
+
+$$
+\mathcal {R} _ {\overline {{\mathrm {T V}}}} (\hat {p} _ {X | Y} ^ {\mathrm {s i n g l e}}) = \tilde {\mathcal {O}} \left(\sqrt {\frac {L K (S + d _ {\mathrm {e}})}{n}}\right).
+$$
+
+Proof. As formulated in Section 2 and with conditional distributions as in Equation (7), we have
+
+$$
+\mathcal {P} _ {X | Y} ^ {\mathrm {s i n g l e}} = \left\{\prod_ {k = 1} ^ {K} \big (p _ {\theta_ {k}} (\boldsymbol {x} | y) \big) ^ {\mathbb {I} (y = k)}: p _ {\theta_ {k}} (\boldsymbol {x} | y) = \frac {e ^ {- u _ {\theta_ {k}} (\boldsymbol {x} | y)}}{\int_ {\mathcal {X}} e ^ {- u _ {\theta_ {k}} (\boldsymbol {x} | y)} d \boldsymbol {x}},\; u _ {\theta_ {k}} \in \mathcal {U} _ {\theta_ {k}} \right\},
+$$
+
+where
+
+$$
+\mathcal {U} _ {\theta_ {k}} = \left\{u _ {\theta_ {k}} (\pmb {x} | y) = f _ {\omega_ {k}} \circ \pmb {e} _ {\pmb {V} [ k,: ]} (\pmb {x}, y): \omega_ {k} \in \mathcal {W} (L, W, S, B), \pmb {V} [ k,: ] \in [ 0, 1 ] ^ {d _ {\mathrm {e}}} \right\}.
+$$
+
+For all $k\in [K]$, let $\mathcal{B}_{\mathcal{P}_k}$ be a $\frac{1}{n}$-upper bracket of $\mathcal{P}_{X|Y,k} = \left\{p_{\theta_k}(\boldsymbol {x}|y) = \frac{e^{-u_{\theta_k}(\boldsymbol{x}|y)}}{\int_\mathcal{X}e^{-u_{\theta_k}(\boldsymbol{x}|y)}d\boldsymbol{x}}:u_{\theta_k}\in \mathcal{U}_{\theta_k}\right\}$ w.r.t. $L^1 (\mathcal{X})$ such that $|\mathcal{B}_{\mathcal{P}_k}| = \mathcal{N}_{[\,]}\Big(\frac{1}{n};\mathcal{P}_{X|Y,k},L^1 (\mathcal{X})\Big)$. According to Lemma D.1 and Lemma D.2, we know that
+
+$$
+\left| \mathcal {B} _ {\mathcal {P} _ {k}} \right| \leq \mathcal {N} \left(\frac {1}{4 e n}; \mathcal {U} _ {\theta_ {k}}, \| \cdot \| _ {\infty , L ^ {\infty} (\mathcal {X} \times \mathcal {Y})}\right) \leq \left(1 2 e n (L + 1) (B \vee 1) ^ {L + 1} (W + 1) ^ {L}\right) ^ {S + d _ {\mathrm {e}}}.
+$$
+
+For any $p(\boldsymbol{x}|y) = \prod_{k=1}^{K}\left(p_{\theta_k}(\boldsymbol{x}|y)\right)^{\mathbb{I}(y=k)} \in \mathcal{P}_{X|Y}^{\text{single}}$, there exist $p_{\theta_1}' \in \mathcal{B}_{\mathcal{P}_1}, \ldots, p_{\theta_K}' \in \mathcal{B}_{\mathcal{P}_K}$ such that for all $k \in [K]$ and any $y \in \mathcal{Y}$, it holds that $\forall \boldsymbol{x} \in \mathcal{X}, p_{\theta_k}'(\boldsymbol{x}|y) \geq p_{\theta_k}(\boldsymbol{x}|y)$, and $\|p_{\theta_k}'(\cdot|y) - p_{\theta_k}(\cdot|y)\|_{L^1(\mathcal{X})} \leq \frac{1}{n}$.
+
+Let $p^{\prime}(\boldsymbol {x}|y) = \prod_{k = 1}^{K}\Big(p_{\theta_k}^{\prime}(\boldsymbol {x}|y)\Big)^{\mathbb{I}(y = k)}$; then, given any $y\in \mathcal{Y}$,
+
+$$
+\forall \boldsymbol {x} \in \mathcal {X}, p ^ {\prime} (\boldsymbol {x} | y) = \prod_ {k = 1} ^ {K} \left(p _ {\theta_ {k}} ^ {\prime} (\boldsymbol {x} | y)\right) ^ {\mathbb {I} (y = k)} \geq \prod_ {k = 1} ^ {K} \left(p _ {\theta_ {k}} (\boldsymbol {x} | y)\right) ^ {\mathbb {I} (y = k)} = p (\boldsymbol {x} | y),
+$$
+
+and
+
+$$
+\| p ^ {\prime} (\cdot | y) - p (\cdot | y) \| _ {L ^ {1} (\mathcal {X})} \leq \sup _ {k \in [ K ]} \| p _ {\theta_ {k}} ^ {\prime} (\cdot | y) - p _ {\theta_ {k}} (\cdot | y) \| _ {L ^ {1} (\mathcal {X})} \leq \frac {1}{n}.
+$$
+
+Therefore, $\mathcal{B}_{\mathcal{P}} := \left\{p'(\boldsymbol{x}|y) = \prod_{k=1}^{K}\left(p_{\theta_k}'(\boldsymbol{x}|y)\right)^{\mathbb{I}(y=k)} : p_{\theta_k}' \in \mathcal{B}_{\mathcal{P}_k}\right\}$ is a $\frac{1}{n}$-upper bracket of $\mathcal{P}_{X|Y}^{\mathrm{single}}$ w.r.t. $L^1(\mathcal{X})$. Thus we have
+
+$$
+\begin{array}{l} \mathcal {N} _ {[ ]} \left(\frac {1}{n}; \mathcal {P} _ {X | Y} ^ {\text {s i n g l e}}, L ^ {1} (\mathcal {X})\right) \leq | \mathcal {B} _ {\mathcal {P}} | \leq \prod_ {k \in [ K ]} \left| \mathcal {B} _ {\mathcal {P} _ {k}} \right| \\ \leq \prod_ {k \in [ K ]} \left(12 e n (L + 1) (B \vee 1) ^ {L + 1} (W + 1) ^ {L}\right) ^ {S + d _ {\mathrm {e}}} \\ = \left(12 e n (L + 1) (B \vee 1) ^ {L + 1} (W + 1) ^ {L}\right) ^ {K (S + d _ {\mathrm {e}})}. \\ \end{array}
+$$
+
+According to Theorem 3.2, we arrive at the conclusion that
+
+$$
+\begin{array}{l} \mathcal {R} _ {\overline {\mathrm {T V}}} \left(\hat {p} _ {X | Y} ^ {\text {s i n g l e}}\right) \leq 3 \sqrt {\frac {1}{n} \left(\log \mathcal {N} _ {[ ]} \left(\frac {1}{n}; \mathcal {P} _ {X | Y} ^ {\text {s i n g l e}}, L ^ {1} (\mathcal {X})\right) + \log \frac {1}{\delta}\right)} \\ \leq 3 \sqrt {\frac {1}{n} \left(L K \left(S + d _ {\mathrm {e}}\right) \log \left(12 e n (L + 1) ^ {\frac {1}{L}} (B \vee 1) ^ {1 + \frac {1}{L}} (W + 1)\right) + \log \frac {1}{\delta}\right)}. \\ \end{array}
+$$
+
+Omitting constant factors and logarithmic terms in $n, K, d_{\mathrm{e}}, L, W, S, B$, we have $\mathcal{R}_{\overline{\mathrm{TV}}}(\hat{p}_{X|Y}^{\mathrm{single}}) = \tilde{\mathcal{O}}\left(\sqrt{\frac{LK(S + d_{\mathrm{e}})}{n}}\right)$.
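To make the comparison with the multi-source rate of Theorem 4.4 concrete, one can evaluate the two dominant factors numerically. This is our own illustrative sketch with arbitrary parameter values, not code from the paper:

```python
import math

def multi_rate(L, S, K, d_e, n):
    """Dominant factor sqrt(L(S + K*d_e)/n) of the multi-source bound."""
    return math.sqrt(L * (S + K * d_e) / n)

def single_rate(L, S, K, d_e, n):
    """Dominant factor sqrt(L*K*(S + d_e)/n) of the single-source bound."""
    return math.sqrt(L * K * (S + d_e) / n)

# Since S + K*d_e <= K*(S + d_e) for K >= 1, the multi-source rate is never
# worse, and strictly better once K > 1 and the shared part S is nonzero.
for K in [1, 3, 5, 10]:
    m = multi_rate(L=5, S=10_000, K=K, d_e=64, n=30_000)
    s = single_rate(L=5, S=10_000, K=K, d_e=64, n=30_000)
    assert m <= s
```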
+
+
+
+Table 2. Hyperparameters of our experiments. '1c' denotes training from a single source, and the others denote training from multiple sources containing 3, 5, and 10 classes.
+
+| Setup | Iterations (kimg) | Learning rate | Decay (kimg) |
+| --- | --- | --- | --- |
+| 1c | 184549 | 0.005 | 2500 |
+| 3c | 268435 | 0.006 | 4000 |
+| 10c | 1610612 | 0.012 | 6000 |
+
+Table 3. Standard deviations of FID scores over five sampling runs.
+
+| N | Sim | K | Std Dev (Single) | Std Dev (Multi) |
+| --- | --- | --- | --- | --- |
+| 500 | 1 | 3 | 0.0086 | 0.0057 |
+| 500 | 1 | 10 | 0.0018 | 0.0336 |
+| 500 | 2 | 3 | 0.0160 | 0.0158 |
+| 500 | 2 | 10 | 0.0056 | 0.0035 |
+| 1000 | 1 | 3 | 0.0034 | 0.0064 |
+| 1000 | 1 | 10 | 0.0028 | 0.0250 |
+| 1000 | 2 | 3 | 0.0047 | 0.0051 |
+| 1000 | 2 | 10 | 0.0013 | 0.0084 |
+
+# E. Supplementary for experiments
+
+# E.1. Additional detail of real-world experiments
+
+Implementation. Following EDM2, we use the Latent Diffusion Model (LDM) (Rombach et al., 2022) to down-sample each image $x \in \mathbb{R}^{3 \times 256 \times 256}$ to a corresponding latent $z \in \mathbb{R}^{4 \times 32 \times 32}$ for training diffusion models.
+
+All experiments are trained and sampled on $8 \times$ NVIDIA A800 80GB, $8 \times$ NVIDIA GeForce RTX 4090, and $8 \times$ NVIDIA GeForce RTX 3090 on the Linux Ubuntu-22.04 platform.
+
+For a fair comparison, we set different hyperparameters for experiments with different numbers of sources, as shown in Table 2, but these parameters are the same within each similarity level.
+
+Based on these trained models, we perform multiple samplings using five different random seeds to estimate the randomness in calculating the FID scores. The standard deviations of FID scores over multiple samplings are reported in Table 3, corresponding to Table 1 in the main paper.
+
+The selection of sample sizes and the number of classes. In the real-world experiments, we set the number of classes $K$ to 3 and 10, and the sample size per class $N$ to 500 and 1000. We would like to clarify that this selection is influenced by several inherent characteristics of the ILSVRC2012 dataset: (1) Sample sizes: The maximum number of images per class in ILSVRC2012 is 1300, so we selected sample sizes of 1000 and 500 images per class, which are common choices. (2) Number of sources: Given that distribution similarity levels were manually defined, it was difficult to establish a large number of structured subdivisions. To be specific, to ensure reasonable similarity levels for the controlled experiment, we designed a two-level tree structure for the dataset, as shown in Figure 2. Overall, we divided the whole ILSVRC2012 into 10 high-level categories (mammal, amphibian, bird, fish, reptile, vehicle, furniture, musical instrument, geological formation, and utensil). Each category was further subdivided into 10 subsets (e.g., for mammals, we have Italian greyhound, Border terrier, standard Schnauzer, etc.). Defining such semantically meaningful and mutually exclusive divisions is not trivial. As a result, the number of classes within each similarity level in our experiments is limited to 10.
+
+While our experiments are not on large-scale datasets, there are existing studies that provide valuable empirical observations for large-scale multi-source training, including: cross-lingual model transfer for similar languages (Pires et al., 2019), pretraining with additional high-quality images to improve overall aesthetics in image generation (Chen et al., 2024), and knowledge augmentation on subsets of data to enhance model performance on other subsets (Allen-Zhu & Li, 2024a). They have offered relevant findings that inform our work.
+
+Connection between FID and the theoretical guarantees. Our theory provides guarantees for the average TV distance
+
+Table 4. TV errors with the number of sources $K$ in simulations on ARMs.
+
+| K ↑ | 1 | 3 | 5 | 7 | 10 |
+| --- | --- | --- | --- | --- | --- |
+| Single-source | 0.0763 | 0.1212 | 0.1519 | 0.1787 | 0.2127 |
+| Multi-source | 0.0763 | 0.1145 | 0.1318 | 0.1364 | 0.1369 |
+
+Table 5. TV errors with the sample size $n$ in simulations on ARMs.
+
+| n ↓ | 1000 | 3000 | 5000 | 10000 | 30000 |
+| --- | --- | --- | --- | --- | --- |
+| Single-source | 0.5680 | 0.3516 | 0.2882 | 0.2036 | 0.1212 |
+| Multi-source | 0.5491 | 0.3467 | 0.2747 | 0.1922 | 0.1145 |
+
+Table 6. TV errors with the sequence length $D$ in simulations on ARMs.
+
+| D ↑ | 10 | 12 | 14 | 16 | 18 |
+| --- | --- | --- | --- | --- | --- |
+| Single-source | 0.2036 | 0.3785 | 0.5932 | 0.7242 | 0.7505 |
+| Multi-source | 0.1922 | 0.3530 | 0.5068 | 0.5747 | 0.6289 |
+
+(Equation (4)), which quantifies distribution estimation quality but is incomputable without access to the true conditional distributions. Therefore, in real-world experiments, we use FID as a practical alternative. FID measures the similarity between generated and real data distributions by comparing their feature representations in a pretrained neural network. It is widely used to evaluate image generation quality and serves as the best available metric for our setting.
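For readers less familiar with the metric: FID is the Fréchet (2-Wasserstein) distance between two Gaussians fitted to feature embeddings of real and generated samples. A minimal sketch of that computation on precomputed feature arrays follows; the function name is ours, and real pipelines would use Inception-v3 features of many thousands of images:

```python
import numpy as np

def frechet_distance(feat_x, feat_y):
    """Frechet distance between Gaussians fitted to two feature sets.

    feat_x, feat_y: (n_samples, dim) arrays of precomputed features."""
    mu1, mu2 = feat_x.mean(axis=0), feat_y.mean(axis=0)
    c1 = np.cov(feat_x, rowvar=False)
    c2 = np.cov(feat_y, rowvar=False)
    # Matrix square root of c1 @ c2 via eigendecomposition; the product of
    # two PSD covariance matrices has nonnegative real eigenvalues.
    vals, vecs = np.linalg.eig(c1 @ c2)
    covmean = (vecs * np.sqrt(np.maximum(vals.real, 0.0))) @ np.linalg.inv(vecs)
    return float((((mu1 - mu2) ** 2).sum() + np.trace(c1 + c2 - 2.0 * covmean)).real)
```

Because the score compares both means and covariances of the feature distributions, it is sensitive to sample quality and diversity at once, which is why it serves as a practical stand-in for the incomputable TV error here.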
+
+Connection with the theoretical analysis of EBMs. Additionally, we would like to discuss the connection between our real-world diffusion model experiments and the theoretical analysis of EBMs. As mentioned in Section 1, EBMs are a general and flexible class of generative models closely connected to diffusion models. To be specific, first, the training and sampling methods in (Song & Ermon, 2019; Song et al., 2021) are directly inspired by EBMs. The distinction is that EBMs parameterize the energy function, while diffusion models parameterize its gradient (the score function). Second, Salimans & Ho (2021) shows that under a specific energy function formulation (Equation (5) in their paper), EBMs are equivalent to constrained diffusion models. Their experimental results (Table 1, Rows A and B) indicate that the constraint has a minor impact on generative performance. Thus, our diffusion model experiments provide insight into EBMs' behavior in real-world settings to some extent.
+
+# E.2. Supplementary Simulations on ARMs
+
+We conduct additional simulations on autoregressive models (ARMs) to examine how empirical total variation (TV) errors align with the theoretical predictions. The empirical results are summarized in Tables 4, 5, and 6.
+
+Each ground truth source distribution is defined as a discrete categorical distribution supported on the set $[M]^D$ , where $M$ is the vocabulary size and $D$ is the sequence length. The variable $Y$ is drawn uniformly from $\{1,2,\dots,K\}$ . A multi-source dataset of size $n$ is sampled from the joint distribution of $(X,Y)$ by first drawing $n$ samples of $Y$ , followed by sampling $X|Y$ conditionally.
+
+The network architecture exactly follows the setup in Section 4.2. It consists of two embedding matrices to encode $Y$ and the first $D - 1$ dimensions of $X$ into $d_{e}$ -dimensional vectors. These embeddings are processed by a single encoding layer, followed by a multi-layer perceptron (MLP) with width $W$ , depth $L$ , and a softmax output. The conditional distribution parameters are computed autoregressively using a masked input vector.
+
+For multi-source training, a single model is trained on the full dataset. In contrast, single-source training involves training $K$ separate models, each using data from its corresponding source. In all experiments, we fix $M = 2$ and use network configurations with $d_{e} = W = 64$, $L = 5$, and the parameter bound $B = 1$. We vary the number of sources $K \in \{1,3,5,7,10\}$, the total number of samples $n \in \{1000,3000,5000,10000,30000\}$, and the sequence length $D \in \{10,12,14,16,18\}$ to assess the alignment between the empirical total variation (TV) error and the theoretical bounds. For each configuration, the batch size and learning rate are selected from $\{100,300,500\}$ and $\{10^{-5},10^{-4},10^{-3}\}$, respectively, to maximize the likelihood.
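On the discrete support $[M]^D$ the reported TV error has a closed form, $\mathrm{TV}(p,q) = \frac{1}{2}\sum_{x}|p(x) - q(x)|$, averaged over the $K$ source conditionals. A minimal sketch (helper names and toy values are ours, not from the paper's code):

```python
import itertools
import numpy as np

def tv_distance(p, q):
    """Total variation distance between two categorical distributions
    given as probability vectors over the same finite support."""
    return 0.5 * float(np.abs(np.asarray(p) - np.asarray(q)).sum())

def average_tv(estimated, truth):
    """Average TV error over the K source conditionals (uniform average)."""
    return float(np.mean([tv_distance(p, q) for p, q in zip(estimated, truth)]))

# Toy check with M = 2, D = 2, K = 2: the support enumerates [M]^D.
support = list(itertools.product(range(2), repeat=2))  # 4 sequences
truth = [np.full(len(support), 0.25), np.array([0.4, 0.3, 0.2, 0.1])]
estimated = [np.array([0.3, 0.2, 0.3, 0.2]), np.array([0.4, 0.3, 0.2, 0.1])]
assert abs(average_tv(estimated, truth) - 0.05) < 1e-12
```

For the actual simulations, full enumeration of $[M]^D$ is feasible because $M = 2$ and $D \leq 18$.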
+
+
+Figure 4. Illustration of $\epsilon$ -upper brackets for the constant function class with $\epsilon = 0.25$ . Each bracket function (horizontal line) lies above the function it covers, with a difference at most $\epsilon$ .
+
+# F. Additional discussions on the notion of $\beta_{sim}$
+
+The notion of $\beta_{\mathrm{sim}}$ in Section 4 is defined separately for each of our three specific model instantiations. It directly measures the degree of model parameter sharing across different sources, and thus reflects the source distribution similarity under our theoretical formulation in Section 2.2. As such, its exact formulation varies depending on the model instantiation.
+
+Specifically, in the Gaussian model (Section 4.1), $\beta_{\mathrm{sim}} = \frac{d - d_1}{d}$ measures the proportion of shared mean vector dimensions, which corresponds directly to a property of the ground truth distribution. For ARMs and EBMs (Section 4.2 and Section 4.3), in contrast, $\beta_{\mathrm{sim}} = \frac{S}{S + d_{\mathrm{e}}}$ is based on shared model parameters, which do not explicitly represent the data distribution itself. Despite this difference, in both cases, $\beta_{\mathrm{sim}}$ fundamentally represents the extent of parameter sharing across sources. The distinction arises from the modeling paradigm: the Gaussian case assumes a parametric form for distributions, where model parameters (e.g., mean vectors) explicitly encode data properties, whereas EBMs use neural networks as function approximators to fit probability densities without an explicit distributional form, making no explicit connection between parameters and data.
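For concreteness, the two instantiations of $\beta_{\mathrm{sim}}$ can be evaluated directly; the numbers below are illustrative values of ours, not measurements from the paper:

```python
def beta_sim_gaussian(d, d1):
    """Gaussian model: fraction of shared mean-vector dimensions."""
    return (d - d1) / d

def beta_sim_ebm(S, d_e):
    """ARM/EBM models: fraction of shared network parameters."""
    return S / (S + d_e)

# Both notions approach 1 as the shared part dominates.
assert beta_sim_gaussian(d=100, d1=20) == 0.8
assert abs(beta_sim_ebm(S=10_000, d_e=64) - 10_000 / 10_064) < 1e-12
```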
+
+As a result, $\beta_{\mathrm{sim}}$ cannot be directly computed from general datasets without model-specific assumptions. We remark that rigorously quantifying dataset similarity in practice is still a direction under exploration. Possible approaches might include: (1) From a practical perspective, a small proxy model can be used to estimate source distributions' interaction (Xie et al., 2023). (2) From a theoretical perspective, several existing notions in multi-task learning and meta-learning could be adapted for this purpose, such as transformation equivalence (Ben-David & Borbely, 2008), parameter distance (Balcan et al., 2019), and distribution divergence (Jose & Simeone, 2021).
+
+# G. Intuitive illustration of the upper bracketing number
+
+The $\epsilon$ -upper bracketing number (Definition 3.1) is a way to quantify the complexity of an infinite set of functions. The key idea is to construct a collection of "brackets" that enclose every function in the set within a small margin.
+
+To illustrate this, consider a simple example. Suppose we have the function set $\mathcal{F} = \{f_c : [0,1] \to [0,1] \mid f_c(x) \equiv c,\; c \in [0,1]\}$, which consists of all constant functions taking values in the interval $[0,1]$. We can construct an $\epsilon$-upper bracket for $\mathcal{F}$ by defining the finite set $\mathcal{B} = \{b(x) = k\epsilon : k = 1, \dots, \lceil 1 / \epsilon \rceil\}$. Then, for any function $f \in \mathcal{F}$, there exists a bracket function $b \in \mathcal{B}$ such that: (1) For all $x \in [0,1]$, the bracket function is always an upper bound: $b(x) \geq f(x)$. (2) The total "gap" between $b$ and $f$, measured by the integral $\int_0^1 (b(x) - f(x))\, dx$, is at most $\epsilon$. Intuitively, this means we can cover the entire function class using a small number of simple approximating functions that "overestimate" each function just slightly. This concept is visualized in Figure 4, where we show such brackets for $\epsilon = 0.25$.
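The construction in the example can also be checked programmatically; the sketch below (our own illustrative code) verifies both bracket properties for the constant function class:

```python
import math

def upper_bracket(eps):
    """Bracket values k*eps, k = 1..ceil(1/eps), for constants in [0, 1]."""
    return [k * eps for k in range(1, math.ceil(1 / eps) + 1)]

def bracket_for(c, eps):
    """Smallest bracket value that upper-bounds the constant function f = c."""
    return min(b for b in upper_bracket(eps) if b >= c)

eps = 0.25
for c in [0.0, 0.1, 0.5, 0.74, 1.0]:
    b = bracket_for(c, eps)
    assert b >= c                  # (1) pointwise upper bound
    # (2) since b and f are constant, the L1 gap over [0, 1] is just b - c
    assert b - c <= eps + 1e-12
```

Note that the bracket set has $\lceil 1/\epsilon \rceil$ elements, so the upper bracketing number of this class is at most $\lceil 1/\epsilon \rceil$.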
+
+In our paper, we extend this idea to conditional probability spaces. There, each condition defines its own function set, and we construct corresponding upper brackets that ensure every conditional distribution is approximated with a small error uniformly across conditions.
\ No newline at end of file
diff --git a/atheoryforconditionalgenerativemodelingonmultipledatasources/images.zip b/atheoryforconditionalgenerativemodelingonmultipledatasources/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..cb54b20006fc50d2b4589a4f45c16a0a6c76e594
--- /dev/null
+++ b/atheoryforconditionalgenerativemodelingonmultipledatasources/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b5b21ab91ed45427ad7d5c4f5017e6a16fef4293e01806a07d83181eb52d40e6
+size 2815086
diff --git a/atheoryforconditionalgenerativemodelingonmultipledatasources/layout.json b/atheoryforconditionalgenerativemodelingonmultipledatasources/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..0129a6ba249e460a0989e0e6db8a7e84f75b96f5
--- /dev/null
+++ b/atheoryforconditionalgenerativemodelingonmultipledatasources/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e37780456feb75d7f5daefcbadfaf1c4123357eeb3ac92b9e202806d41a5c804
+size 1806132
diff --git a/atrichotomyforlisttransductiveonlinelearning/1c118c38-3d5a-403a-b8aa-77e0dde94a67_content_list.json b/atrichotomyforlisttransductiveonlinelearning/1c118c38-3d5a-403a-b8aa-77e0dde94a67_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..e1a34c6c7b62ae160f7c1ad30e51c606b9e58c2a
--- /dev/null
+++ b/atrichotomyforlisttransductiveonlinelearning/1c118c38-3d5a-403a-b8aa-77e0dde94a67_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ee54f38603b1fddd5b975fefa67ff0137566cba9ba720fcb34a6c98ba0796b3f
+size 121105
diff --git a/atrichotomyforlisttransductiveonlinelearning/1c118c38-3d5a-403a-b8aa-77e0dde94a67_model.json b/atrichotomyforlisttransductiveonlinelearning/1c118c38-3d5a-403a-b8aa-77e0dde94a67_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..1f6d8c02e6ca946b0da2bcfd64b5a2dfe3d2c6ca
--- /dev/null
+++ b/atrichotomyforlisttransductiveonlinelearning/1c118c38-3d5a-403a-b8aa-77e0dde94a67_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:538c8b25714128314cb90bed67261985234150c670701c9dafe0d61d6ede1e49
+size 140458
diff --git a/atrichotomyforlisttransductiveonlinelearning/1c118c38-3d5a-403a-b8aa-77e0dde94a67_origin.pdf b/atrichotomyforlisttransductiveonlinelearning/1c118c38-3d5a-403a-b8aa-77e0dde94a67_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..01a4bb5ca05a44ae3466f776febce38060c28758
--- /dev/null
+++ b/atrichotomyforlisttransductiveonlinelearning/1c118c38-3d5a-403a-b8aa-77e0dde94a67_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1304b27f58f4b5c91f5c7f00be44cf4269921445bd7bc5f5509c948a2f773ea4
+size 402925
diff --git a/atrichotomyforlisttransductiveonlinelearning/full.md b/atrichotomyforlisttransductiveonlinelearning/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..09937e1946232740e1ee991965e3a79f2d4b5173
--- /dev/null
+++ b/atrichotomyforlisttransductiveonlinelearning/full.md
@@ -0,0 +1,418 @@
+# A Trichotomy for List Transductive Online Learning
+
+Steve Hanneke1 Amirreza Shaeiri1
+
+# Abstract
+
+List learning is an important topic in both theoretical and empirical machine learning research, playing a key role in the recent breakthrough result of (Brukhim et al., 2022) on the characterization of multiclass PAC learnability, as well as the ambiguity of labels in computer vision classification tasks, among others. In this paper, we study the problem of list transductive online learning. In this framework, the learner outputs a list of multiple labels for each instance rather than just one, as in traditional multiclass classification. In the realizable setting, we demonstrate a trichotomy of possible rates of the minimax number of mistakes. In particular, if the learner plays for $\mathrm{T} \in \mathbb{N}$ rounds, its minimax number of mistakes can only be of the orders $\Theta(\mathrm{T})$ , $\Theta(\log \mathrm{T})$ , or $\Theta(1)$ . This resolves an open question raised by (Hanneke et al., 2024b). On the other hand, in the agnostic setting, we characterize the learnability by constructively proving the $\widetilde{\mathcal{O}}(\sqrt{\mathrm{T}})$ upper bound on the minimax expected regret. Along the way, we also answer another open question asked by (Moran et al., 2023). To establish these results, we introduce two new combinatorial complexity dimensions, called the Level-constrained $(\mathrm{L} + 1)$ -Littlestone dimension and Level-constrained $(\mathrm{L} + 1)$ -Branching dimension, if the list size is $\mathrm{L} \in \mathbb{N}$ . Finally, we conclude our work by raising an open question regarding eliminating the factor of list size, which seems to be a crucial step, as it has consistently appeared in previous works on this subject.
+
+# 1. Introduction
+
+List learning is a significant subject in both theoretical and empirical machine learning research. From a theoretical point of view, a key technique in the recent breakthrough
+
+$^{1}$ Department of Computer Science, Purdue University, West Lafayette, IN, USA. Correspondence to: Steve Hanneke.
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+result by (Brukhim et al., 2022) on the combinatorial characterization of multiclass PAC learnability is the use of list learners. Subsequent works, including (Charikar & Pabbaraju, 2023), (Moran et al., 2023), (Brukhim et al., 2024), (Hanneke et al., 2024a), and (Pabbaraju & Sarmasarkar, 2024) have studied the notion of list learnability in various contexts, such as PAC learning, online learning, boosting, as well as sample compression and uniform convergence, and real-valued regression, accordingly. Moreover, this topic brings to mind the fundamental setting of list-decodable learning in statistics. For a detailed discussion, for example, refer to Chapter 5 of the recent textbook by (Diakonikolas & Kane, 2023). From an empirical point of view, there are many scenarios in which one may prefer the list learning approach to the classical multiclass classification. For instance, in recommendation systems, the objective is often to present a short list of items to users, trying to ensure that the user will select one of the items from the list. As another example, in computer vision classification tasks, predicting a list of labels can potentially prevent label ambiguity. Furthermore, this subject may bring to mind the fundamental setting of conformal prediction in practical applications, which can be viewed as a dual counterpart of the list setting.
+
+Consider a round-robin tennis tournament consisting of $\mathrm{T} \in \mathbb{N}$ matches, all scheduled in advance. In tennis, each set continues until one player leads by at least two games, so sets can, in theory, play on indefinitely; this means the space of possible exact outcomes is countably infinite. A gambler, aware of the players participating in each match based on the schedule, aims to predict the exact outcomes. Since predicting the precise results is challenging, the gambler is allowed to submit a list of $\mathrm{L} \in \mathbb{N}$ possible outcomes for each match before it begins. After each match concludes, the actual result is disclosed. The gambler earns a profit if the actual outcome is among the predictions they have submitted for that match. Given minimal assumptions about the nature of the matches, how can the gambler effectively select predictions to maximize the number of rounds that yield a profit?
+
+The aforementioned example, along with many other real-world scenarios involving possibly adversarially chosen prespecified schedules, can be formulated within the framework called List Transductive Online Learning. This framework is informally defined as follows. Initially, an adversary selects a finite sequence of instances, such as images, of which the learner is aware. In each subsequent round, the learner must predict a list of labels, such as a list of possible categories for an image, for the next instance in the sequence. Following each prediction, the adversary reveals the correct label. Importantly, in this setting, the learner provides a list of multiple labels for each instance instead of a single label, as in traditional multiclass classification. Moreover, the primary quantity of interest in this framework is the number of mistakes the learner makes over time. In particular, a mistake occurs if the correct label is not included in the learner's predicted list of labels. For simplicity, the informal formulation is presented here in the context of deterministic learners.
+
+To derive meaningful results, this work adopts the well-established notion of a concept class, consisting of functions mapping the instance space to the label space. In the realizable setting, we assume that the sequence generated by the adversary is consistent with at least one of the concepts within the concept class. Furthermore, in this setting, we focus specifically on minimizing the number of mistakes made by the learner as the primary objective. In contrast, in the agnostic setting, no assumptions are made about the sequence generated by the adversary. Here, rather than directly minimizing the learner's number of mistakes, we compare the learner's performance to that of the best concept within the concept class, a standard performance measure known as regret. Additionally, we note that when the learner's predictions are randomized, we focus on the expected values of the mentioned objectives.
+
+In this paper, our main contribution is constructively answering the following questions in the list transductive online learning framework:
+
+- What are the possible rates of the minimax number of mistakes in the realizable setting as a function of the concept class $\mathcal{C}$ and the size of the initial sequence selected by the adversary T?
+- What is the necessary and sufficient combinatorial condition to make learnability possible in the agnostic regime?
+
+Before this study, the questions outlined above remained entirely open, particularly in scenarios where the number of labels is unbounded. In fact, we show that there exists a concept class that is not learnable in the list online learning framework of (Moran et al., 2023), which does not assume prior knowledge of the sequence of instances, but becomes learnable in the list transductive online learning framework with only a few mistakes (Proposition C.5). Notably, the special case of list size one, which is equivalent to the multiclass transductive online learning framework, was recently studied in (Hanneke et al., 2024b). In particular, our results answer an open question posed in their paper regarding the generalization of their results to the list setting. Moreover, we also demonstrate the existence of a concept class that is not learnable in the multiclass transductive online learning framework, yet is learnable in the list transductive online learning framework with only a few mistakes by using a list size of two (Proposition C.3). To complete our findings, we present an example of another concept class that is easily list PAC learnable but not list transductive online learnable, showing that the finiteness of the dimension from the work of (Charikar & Pabbaraju, 2023) is not sufficient for list transductive online learnability (Proposition C.4). Finally, we note that in the course of proving the agnostic result, we also resolve another open question asked by (Moran et al., 2023), regarding the extension of their agnostic result to unbounded label spaces.
+
+# 1.1. Related Work
+
+Online Learning. Online learning has been a subject of study for more than half a century, gaining significant attention within the computer science community since the seminal work of (Littlestone, 1988). This foundational contribution introduced the adversarial online learning framework for the binary classification setting, where during each round, an adversary selects an instance; afterward, the learner is required to predict a binary label for that instance; following this, the adversary reveals the correct label. The celebrated work of (Littlestone, 1988) also provided a combinatorial characterization of learnability for the mentioned problem in the realizable setting based on the Littlestone dimension. Later, (Ben-David et al., 2009) extended Littlestone's result to the agnostic setting, showing that the Littlestone dimension continues to characterize learnability. Since then, online learning has been explored in various frameworks, including the multiclass setting (Daniely et al., 2012; Hanneke et al., 2023a) and the list setting (Moran et al., 2023). Given its fundamental nature, online learning has found numerous practical applications.
+
+Transductive Online Learning. In contrast to the above definition of online learning, an alternative setting involves a scenario where the sequence of instances is predetermined by an adversary before the start of the game. This setting eliminates the uncertainty associated with instances, yet retains the uncertainty about the labels. The study of this setting was initiated by (Ben-David et al., 1997), with the goal of exploring how uncertainty in labeling alone influences the optimal number of mistakes. Furthermore, given that the collection of online learnable classes is quite limited, it is natural to ask how far the learnable classes extend under additional assumptions. Notably, the recent work of (Hanneke et al., 2023b) referred to this setting by the term "Transductive Online Learning" due to its relation to transductive PAC learning. Among other related lines of research in which the set of instances is given before the start of the game, there is the self-directed online learning framework, in which the learning algorithm is permitted to choose the next instance for prediction from the remaining set of instances in each round. Additionally, there is the best order framework, where the learner, rather than an adversary, determines the order of instances at the outset of the game. See (Goldman & Sloan, 1994; Ben-David et al., 1995; Ben-David & Eiron, 1998; Devulapalli & Hanneke, 2024) for more details.
+
+# 1.2. Overview of the Main Results
+
+In the subsequent subsection, we provide an overview of the primary results in our paper along with a summary of the proof techniques.
+
+# 1.2.1. LIST TRANSDUCTIVE ONLINE LEARNING FRAMEWORK
+
+We consider a sequential game between the learner and an adversary over a total of $\mathrm{T} \in \mathbb{N}$ rounds. Initially, an adversary chooses a sequence of $\mathrm{T}$ instances from a non-empty instance space $\mathcal{X}$ , namely $(x_{1}, x_{2}, \ldots, x_{\mathrm{T}})$ , and reveals it to the learner. Moreover, at each round $t \in [\mathrm{T}]$ , the adversary selects a label $y_{t}$ from a non-empty label space $\mathcal{Y}$ , which can possibly be uncountable; then, the learner predicts a list of $\mathrm{L} \in \mathbb{N}$ labels, which can be randomized; subsequently, the learner observes the true label $y_{t}$ . Before going forward, following the well-established frameworks in learning theory, we consider a concept class $\mathcal{C}$ as a set of mappings from $\mathcal{X}$ to $\mathcal{Y}$ that is known to the learner before starting the game. See subsection 2.2.1 and subsection 2.2.2 for more details.
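The protocol above can be sketched as a short simulation. The `learner` and `adversary` callables below are hypothetical stand-ins for illustration, not objects defined in the paper; as in the framework, the learner sees the whole instance sequence from the start, while labels arrive one round at a time.

```python
def play(instances, learner, adversary, L):
    """Run the T-round list transductive game; return the mistake count."""
    T = len(instances)
    history = []      # (instance, label) pairs revealed so far
    mistakes = 0
    for t in range(T):
        y_true = adversary(instances, list(history))       # adversary commits y_t
        prediction = learner(instances, list(history), L)  # list of L labels
        if y_true not in prediction:                       # mistake: y_t missed
            mistakes += 1
        history.append((instances[t], y_true))             # true label revealed
    return mistakes
```

A mistake is counted exactly when the revealed label is outside the predicted list, matching the informal description of the framework.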
+
+# 1.2.2. REALIZABLE SETTING
+
+In the realizable setting, we assume that the sequence $(x_{1},y_{1}),\ldots ,(x_{\mathrm{T}},y_{\mathrm{T}})$ , generated by the adversary, is consistent with at least one mapping in the concept class $\mathcal{C}$ . Moreover, in this setting, we focus on the standard notion of the number of mistakes made by the learner over T rounds. In particular, we aim to establish upper and lower bounds on the minimax number of mistakes, as a function of an instance $\mathcal{Q}$ of the list transductive online learning framework and the total number of rounds T, denoted by $\mathbf{M}^{\star}(\mathcal{Q},\mathrm{T})$ . See subsection 2.2.4 for more details.
+
+Our primary result in this part demonstrates that if the learner plays for $\mathrm{T} \in \mathbb{N}$ rounds, its minimax number of mistakes can only be of the orders $\Theta(\mathrm{T})$ , $\Theta(\log \mathrm{T})$ , or $\Theta(1)$ . Furthermore, this trichotomy is fully characterized by the finiteness of the Level-constrained $(\mathrm{L} + 1)$ -Littlestone dimension and the Level-constrained $(\mathrm{L} + 1)$ -Branching dimension, where the list size is $\mathrm{L} \in \mathbb{N}$ .
+
+To define the Level-constrained $(\mathrm{L} + 1)$ -Littlestone dimension, we first need to define the Level-constrained $(\mathrm{L} + 1)$ -Littlestone tree. A Level-constrained $(\mathrm{L} + 1)$ -Littlestone tree is an $(\mathrm{L} + 1)$ -Littlestone tree with the additional requirement that, at each level, all nodes are labeled by the same instance. Then, the Level-constrained $(\mathrm{L} + 1)$ -Littlestone dimension is defined as the supremum over $d \in \mathbb{N}$ such that there exists a shattered Level-constrained $(\mathrm{L} + 1)$ -Littlestone tree of depth $d$ . To define the Level-constrained $(\mathrm{L} + 1)$ -Branching dimension, we first need to define the Level-constrained $(\mathrm{L} + 1)$ -Branching tree. A Level-constrained $(\mathrm{L} + 1)$ -Branching tree is a Level-constrained $(\mathrm{L} + 1)$ -Littlestone tree without the restriction that the labels on the outgoing edges of a node are distinct. Then, the Level-constrained $(\mathrm{L} + 1)$ -Branching dimension is defined as the supremum over $d \in \mathbb{N}$ such that there exists a shattered Level-constrained $(\mathrm{L} + 1)$ -Branching tree in which every root-to-leaf path contains at least $d$ nodes with all distinct labels. See subsection 2.3 for more details. Formally, we have the following theorem.
+
+Theorem 1.1. Let $\mathcal{Q} = (\mathcal{X},\mathrm{L},\mathcal{Y},\mathcal{C})$ be an instance of the list transductive online learning framework. Then, we have:
+
+$$
+\mathbf {M} ^ {\star} (\mathcal {Q}, T) \in \left\{ \begin{array}{l l} \Theta (1), & \mathrm {B} (\mathcal {Q}) < \infty \\ \Theta (\log T), & \mathrm {D} (\mathcal {Q}) < \infty \text{ and } \mathrm {B} (\mathcal {Q}) = \infty , \\ \Theta (T), & \mathrm {D} (\mathcal {Q}) = \infty \end{array} \right.
+$$
+
+where $\mathrm{B}(\mathcal{Q})$ is the Level-constrained $(\mathrm{L} + 1)$ -Branching dimension of $\mathcal{Q}$ , and $\mathrm{D}(\mathcal{Q})$ is the Level-constrained $(\mathrm{L} + 1)$ -Littlestone dimension of $\mathcal{Q}$ , both defined in subsection 2.3.
+
+The proof of the above theorem comprises several components. First, to establish the upper bound in the constant case, we generalize the notion of rank introduced by (Ben-David et al., 1997) to the list setting. Then, we use an adaptation of Littlestone's Standard Optimal Algorithm (SOA) to get the final result. Next, we derive two lower bounds by using ideas from (Hanneke et al., 2024b). For the upper bound in the $\log \mathrm{T}$ case, a natural first idea is to employ the Halving algorithm combined with the list Sauer-Shelah-Perles (SSP) Lemma from (Charikar & Pabbaraju, 2023; Hanneke et al., 2024a). However, as discussed by (Hanneke et al., 2024b), this approach is inapplicable when the label space is unbounded. Importantly, even when focusing on the finite label space setting, this approach fails. To see this, notice that the number of functions given by the list SSP Lemma from (Charikar & Pabbaraju, 2023; Hanneke et al., 2024a) can be of order $\mathcal{O}(\mathrm{L}^{\mathrm{T}}\mathrm{T}^{\mathrm{d}}\mathrm{k}^{\mathrm{L}\mathrm{d}})$ , where $\mathrm{k}$ denotes the size of the label space, $\mathrm{L}$ is the list size, and $\mathrm{d}$ is the associated dimension. Consequently, as each mistake reduces the version space only by a factor of $1 - \mathrm{L} / \mathrm{k}$ , this approach leads to a bound that is linear in $\mathrm{T}$ . To overcome this obstacle, we generalize the technique of (Hanneke et al., 2024b), which enables us to establish the desired upper bound of $\log \mathrm{T}$ on the minimax number of mistakes. In summary, we define a notion of shattering for a sequence of instances from $\mathcal{X}$ . Based on the finiteness of $\mathrm{D}(\mathcal{Q})$ , we can bound the total number of sub-sequences of the initial sequence played by the adversary that are shattered. A key observation is that if no shattered sub-sequence remains, the learner makes no further mistakes. Our algorithm guarantees a decrease in the number of shattered sub-sequences after each mistake, so the total number of mistakes is bounded. See section 3 and Appendix A for more details.
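As a rough back-of-the-envelope check of the $\log \mathrm{T}$ rate, suppose, purely for illustration, that when $\mathrm{D}(\mathcal{Q}) = d$ at most $\sum_{i \leq d} \binom{\mathrm{T}}{i}$ sub-sequences can be shattered and that each mistake at least halves this count (a simplifying assumption on our part; the paper's potential argument is more delicate):

```python
import math

# Illustrative potential-function count; the halving-per-mistake step is
# our simplifying assumption, not the paper's exact argument.
def potential(T, d):
    # number of sub-sequences of length at most d out of T rounds
    return sum(math.comb(T, i) for i in range(d + 1))

def mistake_bound(T, d):
    # halving the potential per mistake gives roughly d * log2(T) mistakes
    return math.ceil(math.log2(potential(T, d)))
```

For example, `mistake_bound(1024, 2)` evaluates to 20, matching $d \log_2 \mathrm{T} = 2 \cdot 10$, which is consistent with the logarithmic regime of Theorem 1.1.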
+
+Before proceeding, we emphasize that in contrast to the multiclass setting, even the finite label space regime for list learning cannot be handled by the simple approach of Halving via the SSP Lemma. Moreover, there are different ways of selecting L labels. We found that sorting the label set $\mathcal{Y}$ in decreasing order with respect to appropriate measures and selecting the first L labels works in both of our upper bounds. Also, we have the following inequalities: $\mathrm{DS}(\mathcal{Q})\leq \mathrm{D}(\mathcal{Q})\leq \mathrm{B}(\mathcal{Q})\leq \mathrm{LD}(\mathcal{Q})$ ; see Appendix C.
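The linear blow-up of the Halving-style calculation can be seen numerically. The constants and the simplified closed form below are our own, for illustration only: with a version space of size roughly $\mathrm{L}^{\mathrm{T}}\mathrm{T}^{\mathrm{d}}\mathrm{k}^{\mathrm{Ld}}$ and each mistake removing only an $\mathrm{L}/\mathrm{k}$-fraction of it, the guaranteed mistake bound is $\log N / \log(1/(1 - \mathrm{L}/\mathrm{k}))$, which is linear in $\mathrm{T}$ once $\log N$ contains a term proportional to $\mathrm{T}$.

```python
import math

# Why Halving + list-SSP fails: log N has a T * log(L) term, while the
# per-mistake shrink factor 1 - L/k is a constant, so the bound is Theta(T).
def halving_style_bound(T, d, k, L):
    log_N = T * math.log(L) + d * math.log(T) + L * d * math.log(k)
    shrink = math.log(1.0 / (1.0 - L / k))
    return log_N / shrink

b1 = halving_style_bound(T=1000, d=5, k=10, L=2)
b2 = halving_style_bound(T=2000, d=5, k=10, L=2)
# doubling T roughly doubles the bound, i.e., the bound grows linearly in T
```

Note that for $\mathrm{L} \geq 2$ the $\mathrm{T}\log \mathrm{L}$ term is strictly positive, which is exactly why the list setting breaks this argument even for finite label spaces.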
+
+# 1.2.3. AGNOSTIC SETTING
+
+In the agnostic setting, we make no assumptions about the sequence $(x_{1},y_{1}),(x_{2},y_{2}),\ldots ,(x_{\mathrm{T}},y_{\mathrm{T}})$ generated by the adversary. Moreover, in this setting, our focus shifts to the standard notion of expected regret, which compares the expected number of mistakes made by the learner to those made by the best concept in the concept class $\mathcal{C}$ over the sequence. In particular, we say that an instance $\mathcal{Q}$ of the list transductive online learning framework is agnostic learnable in the list transductive online learning framework if the minimax expected regret, as a function of $\mathcal{Q}$ and the total number of rounds $\mathrm{T}$ , denoted by $\mathbf{R}^{\star}(\mathcal{Q},\mathrm{T})$ , is sub-linear in $\mathrm{T}$ . See subsection 2.2.5 for more details.
+
+Our primary result in this part demonstrates that this criterion for learnability is fully characterized by the finiteness of the Level-constrained $(\mathrm{L} + 1)$ -Littlestone dimension, where the list size is $\mathrm{L}\in \mathbb{N}$ . Formally, we have the following theorem.
+
+Theorem 1.2. Let $\mathcal{Q} = (\mathcal{X},\mathrm{L},\mathcal{Y},\mathcal{C})$ be an instance of the list transductive online learning framework. Then, $\mathcal{Q}$ is agnostic learnable in the list transductive online learning framework if and only if it has finite Level-constrained $(\mathrm{L} + 1)$ -Littlestone dimension.
+
+The proof of the above theorem involves a few pieces. First, we establish the lower bound for randomized learners by leveraging ideas from (Moran et al., 2023). To prove the upper bound, we face the challenge of dealing with an unbounded label space, which renders the standard techniques from (Ben-David et al., 2009; Daniely et al., 2012) inapplicable. To overcome this obstacle, we employ the dynamic expert technique from the recent work of (Hanneke et al., 2023a). To do so, we use our technique from the realizable setting for proving the upper bound in the $\log \mathrm{T}$ case. Furthermore, we combine dynamic experts with the celebrated exponential weights algorithm to obtain the final result. Our findings indicate that the technique introduced in (Hanneke et al., 2023a) effectively addresses infinite label spaces even in the list setting, thereby resolving the question raised by (Moran et al., 2023) regarding the extension of their agnostic result to unbounded label spaces. See section 4 and Appendix B for more details.
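As a minimal illustration of the exponential-weights component, one can track a weight per expert and multiplicatively penalize experts whose lists miss the revealed label. The fixed, finite pool of list-valued experts below is a simplifying assumption of ours; the paper's dynamic-expert construction is more involved.

```python
import math

# Exponential weights over a fixed pool of list-valued experts
# (a simplified stand-in for the paper's dynamic-expert construction).
def exponential_weights(expert_lists, labels, eta):
    """expert_lists[i][t] is expert i's predicted list at round t."""
    weights = [1.0] * len(expert_lists)
    expected_mistakes = 0.0
    for t, y in enumerate(labels):
        total = sum(weights)
        losses = [0.0 if y in e[t] else 1.0 for e in expert_lists]
        # expected loss when sampling an expert proportionally to its weight
        expected_mistakes += sum(w * l for w, l in zip(weights, losses)) / total
        # multiplicatively down-weight experts that missed the label
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    return expected_mistakes
```

With a suitable learning rate of order $\sqrt{\ln n / \mathrm{T}}$ for $n$ experts, the standard analysis (see, e.g., Cesa-Bianchi & Lugosi, 2006) gives expected regret $O(\sqrt{\mathrm{T}\ln n})$ against the best expert in the pool.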
+
+# 1.3. Organization
+
+The rest of the paper is organized as follows. In section 2, we formally set the notation and definitions. Subsequently, in section 3, we present our results for the realizable setting. Then, in section 4, we extend our results to the agnostic setting. Afterward, in Appendix C, we provide a few examples showing separations between different related combinatorial complexity dimensions. Finally, in section 5, we conclude our manuscript and present some future directions.
+
+# 2. Notations, Definitions, and Preliminaries
+
+In this section, we set our basic notations in subsection 2.1. Then, we present the List Transductive Online Learning framework in subsection 2.2. Finally, we define the main combinatorial complexity measures of this paper in subsection 2.3.
+
+# 2.1. Notations
+
+In this subsection, we present the basic notations that we use throughout our paper. Let $\mathbb{N}$ and $\mathbb{R}$ stand for the sets of natural and real numbers, respectively. We denote by $\bar{\mathbb{N}}$ the extended natural number system defined as $\bar{\mathbb{N}} := \mathbb{N} \cup \{-\infty, +\infty\}$ . Also, for a given $n \in \mathbb{N}$ , we use $[n]$ to denote $\{1, 2, \ldots, n\}$ . Next, let $n \in \mathbb{N}$ ; for any sequence of size $n$ or $n$ -tuple $x$ , and any $i \in \mathbb{N}$ such that $1 \leq i \leq n$ , we use $x_i$ to denote the $i$ -th element of $x$ . To increase the readability of our manuscript, we use "," to separate indices of elements when we have more than one index; for instance, if $x$ is a sequence of size 5 of 2-tuples, we denote by $x_{5,1}$ the first element of the 5-th element of $x$ . We denote by $A \times B$ the Cartesian product of two arbitrary sets $A$ and $B$ . In addition, for any set $A$ and any $n \in \mathbb{N}$ , we let $A^n$ denote the $n$ -fold Cartesian product of $A$ with itself. Note that, for any set $A$ , we define $A^0 := \{\emptyset\}$ . Also, given a set $A$ , we denote by $A^\star$ the set of all finite sequences of members of $A$ ; more formally, $A^\star := \bigcup_{T=0}^{\infty} A^T$ . Then, for arbitrary sets $X$ and $Y$ , we use $Y^X$ to denote the space of all functions from $X$ to $Y$ . Finally, we use $\mathcal{O}(\cdot)$ , $o(\cdot)$ , $\Omega(\cdot)$ , $\omega(\cdot)$ , and $\Theta(\cdot)$ with their standard meanings in theoretical computer science. We also use $\widetilde{\mathcal{O}}(\cdot)$ , $\widetilde{\Omega}(\cdot)$ , and $\widetilde{\Theta}(\cdot)$ to suppress logarithmic factors as well as constant coefficients.
+
+# 2.2. List Transductive Online Learning Framework
+
+In this subsection, we present the List Transductive Online Learning framework. To do so, first, we give the problem setup in subsubsection 2.2.1. Then, we formulate the problem in subsubsection 2.2.2. Afterward, we define list transductive online learning algorithms in subsubsection 2.2.3. Finally, we give the definitions associated with the realizable and agnostic settings in subsubsection 2.2.4 and subsubsection 2.2.5, respectively.
+
+# 2.2.1. PROBLEM SETUP
+
+Fix a non-empty set $\mathcal{X}$ as the instance space. Let $\mathrm{L} \in \mathbb{N}$ be the size of the list. Also, fix a non-empty set $\mathcal{Y}$ equipped with a $\sigma$ -algebra such that every subset of $\mathcal{Y}$ with cardinality $\mathrm{L}$ is measurable as the label space. We note briefly that, since our focus is on deterministic algorithms in the realizable setting, no measurability assumptions on $\mathcal{Y}$ are required. Following the well-established frameworks in learning theory, let $\mathcal{C} \subseteq \mathcal{Y}^{\mathcal{X}}$ be a concept class. In particular, a 4-tuple $\mathcal{Q} = (\mathcal{X}, \mathrm{L}, \mathcal{Y}, \mathcal{C})$ presents an instance of the list transductive online learning framework.
+
+# 2.2.2. LIST TRANSDUCTIVE ONLINE LEARNING GAME
+
+Let $\mathrm{T} \in \mathbb{N}$ . The problem of list transductive online learning is formulated as a T-round sequential game between the learner/player and an adversary/opponent. Initially, an adversary chooses a sequence of T instances $\mathrm{X} \in \mathcal{X}^{\mathrm{T}}$ and reveals it to the learner. Moreover, at each round $t \in [\mathrm{T}]$ :
+
+- The adversary chooses a label $y_{t}$ from $\mathcal{Y}$ .
+- The learner predicts a list of size $\mathrm{L}$ of labels.
+- The adversary reveals the true label $y_{t}$ .
+
+# 2.2.3. LIST TRANSDUCTIVE ONLINE LEARNING RULES
+
+We consider two different types of list transductive online learning rules/algorithms, namely deterministic rules and randomized rules. As a result, we have the following definitions.
+
+Definition 2.1 (Deterministic List Transductive Online Learning Rule). Let $\mathcal{D} = \{(x^{\star},y^{\star})\mid x^{\star}\in \mathcal{X}^{\star}, y^{\star}\in \mathcal{Y}^{\star}, |y^{\star}| < |x^{\star}|\}$ . In addition, let $\mathcal{Y}_{\mathrm{L}} = \{A\mid A\subseteq \mathcal{Y}, |A| = \mathrm{L}\}$ . A deterministic list transductive online learning rule is a mapping $\mathbf{A}:\mathcal{D}\to \mathcal{Y}_{\mathrm{L}}$ .
+
+In words, it is a mapping that maps each pair consisting of a finite sequence of instances and a finite sequence of labels, whose size is smaller than that of the sequence of instances, to a set of $\mathrm{L}$ labels.
+
+Definition 2.2 (Randomized List Transductive Online Learning Rule). Let $\mathcal{Y}_{\mathrm{L}} = \{A \mid A \subseteq \mathcal{Y}, |A| = \mathrm{L}\}$ . In addition, let $\mathcal{D} = \{(x^{\star}, a^{\star}, y^{\star}) \mid x^{\star} \in \mathcal{X}^{\star}, a^{\star} \in \mathcal{Y}_{\mathrm{L}}^{\star}, y^{\star} \in \mathcal{Y}^{\star}, |a^{\star}| = |y^{\star}| < |x^{\star}|\}$ . A randomized list transductive online learning rule is a mapping $\mathbf{A}:\mathcal{D}\to \Pi (\mathcal{Y}_{\mathrm{L}})$ .
+
+In words, it is a mapping that maps each triple consisting of a finite sequence of instances $x^{\star}$ , a finite sequence of label sets of size $\mathrm{L}$ denoted by $a^{\star}$ , and a finite sequence of labels $y^{\star}$ , such that $|a^{\star}| = |y^{\star}| < |x^{\star}|$ , to a probability measure on $\mathcal{Y}_{\mathrm{L}}$ .
+
+# 2.2.4. REALIZABLE SETTING
+
+Here, we begin by defining a realizable sequence. Then, to evaluate the performance of any deterministic algorithm, we define the well-known notion of the number of mistakes adapted to our framework. Finally, we define the optimal mistake bound, building on the previous definitions.
+
+Definition 2.3 (Realizable Sequence). Fix $\mathrm{T} \in \mathbb{N}$ . We say that a finite sequence of size $\mathrm{T}$ of instance-label pairs $((x_{1},y_{1}),(x_{2},y_{2}),\ldots ,(x_{\mathrm{T}},y_{\mathrm{T}})) \in (\mathcal{X} \times \mathcal{Y})^{\mathrm{T}}$ is realizable by a concept class $\mathcal{C}$ if there exists a concept $c \in \mathcal{C}$ such that for every $i \in [\mathrm{T}]$ , we have $c(x_{i}) = y_{i}$ .
+
+Definition 2.4 (Number of Mistakes). Let $\mathbf{A}$ be a deterministic list transductive online learning rule. Fix $\mathrm{T} \in \mathbb{N}$ . Let $S$ be a finite sequence of size $\mathrm{T}$ of instance-label pairs $S = ((x_{1},y_{1}),(x_{2},y_{2}),\ldots ,(x_{\mathrm{T}},y_{\mathrm{T}})) \in (\mathcal{X} \times \mathcal{Y})^{\mathrm{T}}$ . We define the number of mistakes made by $\mathbf{A}$ with respect to the sequence $S$ , denoted by $\mathbf{M}(\mathbf{A};S)$ , as follows:
+
+$$
+\mathbf {M} (\mathbf {A}; S) := \sum_ {t = 1} ^ {\mathrm {T}} \mathbb {1} \left\{y _ {t} \notin \mathbf {A} \left(\left(S ^ {\prime}, \left(y _ {1}, y _ {2}, \dots , y _ {t - 1}\right)\right)\right) \right\},
+$$
+
+where $S^{\prime} := (x_{1}, x_{2}, \ldots, x_{\mathrm{T}})$ .
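The quantity $\mathbf{M}(\mathbf{A};S)$ from Definition 2.4 can be computed directly. The rule below, which predicts the $\mathrm{L}$ most frequent labels seen so far padded with arbitrary defaults, is a hypothetical example of ours for illustration, not a rule from the paper.

```python
from collections import Counter

def mistakes(rule, S, L):
    """M(A; S) from Definition 2.4: count rounds where y_t misses the list."""
    xs = tuple(x for x, _ in S)                  # S' = (x_1, ..., x_T)
    count = 0
    for t, (_, y) in enumerate(S):
        past = tuple(lbl for _, lbl in S[:t])    # labels y_1, ..., y_{t-1}
        if y not in rule(xs, past, L):
            count += 1
    return count

def frequent_labels_rule(xs, past_labels, L):
    # predict the L most frequent labels so far, padded with fresh defaults
    ranked = [y for y, _ in Counter(past_labels).most_common(L)]
    pad = [d for d in range(L) if d not in ranked]
    return set((ranked + pad)[:L])
```

Note that, as in the definition, the rule receives the full instance sequence $S'$ together with only the labels revealed before round $t$.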
+
+Definition 2.5 (Optimal Mistake Bound). Let $\mathcal{Q} = (\mathcal{X},\mathrm{L},\mathcal{Y},\mathcal{C})$ be an instance of the list transductive online learning framework. The optimal mistake bound of $\mathcal{Q}$ as a function of the time horizon T, denoted by $\mathbf{M}^{\star}(\mathcal{Q},\mathrm{T})$ , is defined as follows:
+
+$$
+\mathbf{M}^{\star}(\mathcal{Q},\mathrm{T}):= \inf_{\mathbf{A}\in \mathcal{A}}\sup_{\substack{S\in (\mathcal{X}\times \mathcal{Y})^{\mathrm{T}}\text{which is}\\ \text{realizable by}\mathcal{C}}}\mathbf{M}(\mathbf{A};S),
+$$
+
+where $\mathcal{A}$ is defined as the set of all deterministic list transductive online learning rules.
+
+Before proceeding, it is important to note that while our definitions are based on an oblivious adversary, it is straightforward to see that they are equivalent to the case with an adaptive adversary.
+
+# 2.2.5. AGNOSTIC SETTING
+
+In this subsection, we begin by defining the well-known game theoretic notion of regret. Then, we present the definition of agnostic learnability.
+
+Definition 2.6 (Expected Regret). Let $\mathbf{A}$ be a randomized list transductive online learning rule. Fix $\mathrm{T} \in \mathbb{N}$ . Let $S$ be a finite sequence of size $\mathrm{T}$ of instance-label pairs $S = ((x_{1},y_{1}),(x_{2},y_{2}),\ldots ,(x_{\mathrm{T}},y_{\mathrm{T}})) \in (\mathcal{X} \times \mathcal{Y})^{\mathrm{T}}$ . We define the expected regret of $\mathbf{A}$ with respect to the sequence $S$ against the competitor concept class $\mathcal{C}$ , denoted by $\mathbf{R}(\mathbf{A};S;\mathcal{C})$ , as follows:
+
+$$
+\mathbf {R} (\mathbf {A}; S; \mathcal {C}) := \mathbb {E} _ {\mathbf {A} ^ {\prime}} \Big [ \mathbf {M} (\mathbf {A} ^ {\prime}; S) \Big ] - \inf _ {c \in \mathcal {C}} \sum_ {t = 1} ^ {\mathrm {T}} \mathbb {1} \big \{y _ {t} \neq c (x _ {t}) \big \},
+$$
+
+where we view a single run of the randomized list transductive online learning rule $\mathbf{A}$ as running a deterministic list transductive online learning rule $\mathbf{A}'$ . Moreover, we take the expectation over a random sequence of outputs of $\mathbf{A}$ .
+
+Definition 2.7 (Optimal Expected Regret). Let $\mathcal{Q} = (\mathcal{X},\mathrm{L},\mathcal{Y},\mathcal{C})$ be an instance of the list transductive online learning framework. The optimal expected regret bound of $\mathcal{Q}$ as a function of the time horizon T, denoted by $\mathbf{R}^{\star}(\mathcal{Q},\mathrm{T})$ is defined as follows:
+
+$$
+\mathbf{R}^{\star}(\mathcal{Q},\mathrm{T}):= \inf_{\mathbf{A}\in \mathcal{A}}\sup_{S\in (\mathcal{X}\times \mathcal{Y})^{\mathrm{T}}}\mathbf{R}(\mathbf{A};S;\mathcal{C}),
+$$
+
+where $\mathcal{A}$ is defined as the set of all randomized list transductive online learning rules.
+
+Definition 2.8 (Agnostic Learnability). We say that an instance of the list transductive online learning framework $\mathcal{Q} = (\mathcal{X},\mathrm{L},\mathcal{Y},\mathcal{C})$ is agnostic learnable in the list transductive online learning framework, if $\mathbf{R}^{\star}(\mathcal{Q},\mathrm{T})$ as a function of the time horizon $\mathrm{T}$ is sub-linear in $\mathrm{T}$ .
+
+Before proceeding, it is important to note that while our definitions are based on an oblivious adversary, it is straightforward to see that they are equivalent to the case with an adaptive adversary. See Lemma 4.1 in (Cesa-Bianchi & Lugosi, 2006).
+
+We emphasize that one may also consider a similar notion of learnability, defined via a sub-linear ( $o(\mathrm{T})$ ) number of mistakes, in the realizable setting.
+
+# 2.3. Combinatorial Complexity Parameters
+
+In this subsection, we first set our notations for trees. Then, we proceed with the definitions of the main combinatorial complexity parameters in our paper based on previous definitions.
+
+Definition 2.9 (Perfect Rooted L-ary Trees). Let $\mathrm{L} \in \mathbb{N}$ . A perfect rooted L-ary tree $\mathcal{T}$ is a rooted tree, each of whose internal nodes has exactly $\mathrm{L}$ children and all leaves have the same depth.
+
+Definition 2.10 (L-ary $(\mathcal{X},\mathcal{Y})$ -valued Trees). Let $\mathrm{L}\in \mathbb{N}$ . Also, let $\mathcal{X},\mathcal{Y}$ be any non-empty sets. An L-ary $(\mathcal{X},\mathcal{Y})$ -valued tree $\mathcal{T}$ is a perfect rooted L-ary tree, each of whose nodes is labeled by an element of $\mathcal{X}$ , and each of whose edges is labeled by an element of $\mathcal{Y}$ . Moreover, for any L-ary $(\mathcal{X},\mathcal{Y})$ -valued tree, a root-to-leaf path of length $\ell \in \mathbb{N}$ can be identified by a sequence of pairs $\mathfrak{s} \in (\mathcal{X} \times \mathcal{Y})^\ell$ .
+
+# 2.3.1. $(\mathrm{L} + 1)$ -LITTLESTONE DIMENSION
+
+Definition 2.11 $((\mathrm{L} + 1)$ -Littlestone Tree). Let $\mathcal{Q} = (\mathcal{X}, \mathrm{L}, \mathcal{Y}, \mathcal{C})$ be an instance of the list transductive online learning framework. An $(\mathrm{L} + 1)$ -ary $(\mathcal{X}, \mathcal{Y})$ -valued tree $\mathcal{T}$ is called an $(\mathrm{L} + 1)$ -Littlestone tree for $\mathcal{Q}$ .
+
+Definition 2.12 (Shattered $(\mathrm{L} + 1)$ -Littlestone Tree). Let $\mathcal{Q} = (\mathcal{X}, \mathrm{L}, \mathcal{Y}, \mathcal{C})$ be an instance of the list transductive online learning framework. We say that an $(\mathrm{L} + 1)$ -Littlestone tree $\mathcal{T}$ for $\mathcal{Q}$ is shattered by $\mathcal{C}$ if, for every finite root-to-leaf path in $\mathcal{T}$ , identified by $\mathfrak{s} \in (\mathcal{X} \times \mathcal{Y})^\ell$ for some $\ell \in \mathbb{N}$ , there exists a concept $c \in \mathcal{C}$ such that for every $i \in \mathbb{N}$ , $i \leq \ell$ , we have: $\mathfrak{s}_{i,2} = c(\mathfrak{s}_{i,1})$ .
+
+Definition 2.13 ((L + 1)-Littlestone Dimension). Let $\mathcal{Q} = (\mathcal{X},\mathrm{L},\mathcal{Y},\mathcal{C})$ be an instance of the list transductive online learning framework. The $(\mathrm{L} + 1)$ -Littlestone dimension of $\mathcal{Q}$ , denoted by $\mathrm{L}(\mathcal{Q})$ , is defined as the supremum over $d\in \bar{\mathbb{N}}$ such that there exists an $(\mathrm{L} + 1)$ -Littlestone tree $\mathcal{T}$ of depth $d$ for $\mathcal{Q}$ in which all children of every node are labeled by distinct elements of $\mathcal{Y}$ and $\mathcal{T}$ is shattered by $\mathcal{C}$ . Also, if $\mathcal{C} = \{\emptyset\}$ , we have: $\mathrm{L}(\mathcal{Q}) = 0$ .
+
+# 2.3.2. LEVEL-CONSTRAINED $(\mathrm{L} + 1)$ -LITTLESTONE DIMENSION
+
+Definition 2.14 (Level-constrained $(\mathrm{L} + 1)$ -Littlestone Dimension). Let $\mathcal{Q} = (\mathcal{X}, \mathrm{L}, \mathcal{Y}, \mathcal{C})$ be an instance of the list transductive online learning framework. The Level-constrained $(\mathrm{L} + 1)$ -Littlestone dimension of $\mathcal{Q}$ , denoted by $\mathrm{D}(\mathcal{Q}) \in \bar{\mathbb{N}}$ , is defined as the supremum over $d \in \mathbb{N}$ such that there exists an $(\mathrm{L} + 1)$ -Littlestone tree $\mathcal{T}$ of depth $d$ for $\mathcal{Q}$ in which all children of every node are labeled by distinct elements of $\mathcal{Y}$ , all nodes at the same level are labeled by the same element of $\mathcal{X}$ , and $\mathcal{T}$ is shattered by $\mathcal{C}$ . Also, if $\mathcal{C} = \{\emptyset\}$ , we have: $\mathrm{D}(\mathcal{Q}) = 0$ .
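For finite classes, Definition 2.14 can be checked by brute force: enumerate candidate instance sequences (one instance per level) and test whether, at every node, the class branches into at least $\mathrm{L}+1$ subtrees with distinct edge labels that remain shatterable. The code below is our own illustrative computation, not code from the paper; concepts are represented as tuples indexed by instances.

```python
import itertools

# Brute-force sketch of the Level-constrained (L+1)-Littlestone dimension
# for finite classes (illustrative; exponential in the depth searched).
def shatters(concepts, seq, L):
    """Is there a shattered level-constrained tree with instance sequence seq?"""
    if not concepts:
        return False
    if not seq:
        return True                      # a single leaf; depth 0
    groups = {}
    for c in concepts:                   # branch on the label at instance seq[0]
        groups.setdefault(c[seq[0]], []).append(c)
    good = sum(1 for g in groups.values() if shatters(g, seq[1:], L))
    return good >= L + 1                 # need L+1 distinct-label subtrees

def level_constrained_dim(concepts, n_instances, L, cap=5):
    d = 0
    while d < cap and any(
        shatters(concepts, seq, L)
        for seq in itertools.product(range(n_instances), repeat=d + 1)
    ):
        d += 1
    return d
```

For instance, for the class of all functions from two instances to three labels, the dimension is 2 for list size 1, while restricting to two labels with list size 2 leaves no way to branch into three distinct labels, giving dimension 0.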
+
+# 2.3.3. LEVEL-CONSTRAINED $(\mathrm{L} + 1)$ -BRANCHING DIMENSION
+
+Definition 2.15 (Level-constrained $(\mathrm{L} + 1)$ -Branching Dimension). Let $\mathcal{Q} = (\mathcal{X}, \mathrm{L}, \mathcal{Y}, \mathcal{C})$ be an instance of the list transductive online learning framework. The Level-constrained $(\mathrm{L} + 1)$ -Branching dimension of $\mathcal{Q}$ , denoted by $\mathrm{B}(\mathcal{Q}) \in \bar{\mathbb{N}}$ , is defined as the supremum over $d \in \mathbb{N}$ such that there exists an $(\mathrm{L} + 1)$ -Littlestone tree $\mathcal{T}$ for $\mathcal{Q}$ in which all nodes at the same level are labeled by the same element of $\mathcal{X}$ , every root-to-leaf path contains at least $d$ nodes labeled by distinct elements of $\mathcal{Y}$ , and $\mathcal{T}$ is shattered by $\mathcal{C}$ . Also, if $\mathcal{C} = \{\emptyset\}$ , we have: $\mathrm{B}(\mathcal{Q}) = 0$ .
+
+# 3. Realizable Setting
+
+First, we restate the main theorem in the realizable setting here for the sake of simplicity.
+
+Theorem 3.1. Let $\mathcal{Q} = (\mathcal{X},\mathrm{L},\mathcal{Y},\mathcal{C})$ be an instance of the list transductive online learning framework. Then, we have:
+
+$$
+\mathbf {M} ^ {\star} (\mathcal {Q}, T) \in \left\{ \begin{array}{l l} \Theta (1), & \mathrm {B} (\mathcal {Q}) < \infty \\ \Theta (\log T), & \mathrm {D} (\mathcal {Q}) < \infty \text { and } \mathrm {B} (\mathcal {Q}) = \infty , \\ \Theta (T), & \mathrm {D} (\mathcal {Q}) = \infty \end{array} \right.
+$$
+
+Proof. The theorem follows by combining the two lower bounds of Lemma 3.2 and Lemma 3.5 with the two upper bounds of Lemma A.1 and Lemma A.3. This finishes the proof. $\square$
+
+We provide the proofs for the lower bounds in the main text, while deferring the more technical proofs for the upper bounds to Appendix A.
+
+# 3.1. Lower Bound T
+
+We start by proving the lower bound for the linear case. The proof of the following Lemma uses the main idea of the lower bound proof in the work of (Littlestone, 1988).
+
+Lemma 3.2. Let $\mathcal{Q} = (\mathcal{X},\mathrm{L},\mathcal{Y},\mathcal{C})$ be an instance of the list transductive online learning framework. Assume that $\mathrm{D}(\mathcal{Q}) = \infty$ . Then, we have $\mathbf{M}^{\star}(\mathcal{Q},T)\in \Omega (T)$ .
+
+Proof. Fix $\mathrm{T} \in \mathbb{N}$ . Let $\mathcal{T}$ be a $(\mathrm{L} + 1)$ -Littlestone tree witnessing $\mathrm{D}(\mathcal{Q}) \geq \mathrm{T}$ . Note that such a tree must exist as $\mathrm{D}(\mathcal{Q}) = \infty$ . Based on Definition 2.14, this tree has depth $\mathrm{T}$ . Also, at each level of $\mathcal{T}$ , all nodes are labeled by the same instance from $\mathcal{X}$ . In addition, the children of every node are labeled by distinct labels from $\mathcal{Y}$ . Let $\mathbf{A}$ be any deterministic list transductive online learning rule. We build an adversarial strategy against $\mathbf{A}$ using $\mathcal{T}$ . To do so, we first present the $\mathrm{T}$ instances at the $\mathrm{T}$ levels of $\mathcal{T}$ , in order, to the learner before starting the game. We then construct this strategy along a special root-to-leaf path in $\mathcal{T}$ , which depends on $\mathbf{A}$ . Receive the first set of labels of size $\mathrm{L}$ predicted by the learner. As the root node of $\mathcal{T}$ has $\mathrm{L} + 1$ outgoing edges labeled with distinct labels from $\mathcal{Y}$ , at least one of them carries a label that is not in the received set. We output that label. We continue this construction of the adversarial strategy along $\mathcal{T}$ , following the edge of the chosen label at every step. In this way, we force $\mathrm{T}$ mistakes on $\mathbf{A}$ . In addition, as $\mathcal{T}$ is shattered by $\mathcal{C}$ , there exists a concept in $\mathcal{C}$ that is consistent with the root-to-leaf path that we used. So, we are in the realizable setting. As a result, we have: $\sup_{S \in (\mathcal{X} \times \mathcal{Y})^{\mathrm{T}} \text{ realizable by } \mathcal{C}} \mathbf{M}(\mathbf{A}; S) \geq \mathrm{T}$ . Since we are able to construct an adversarial strategy for an arbitrary deterministic list transductive online learning rule $\mathbf{A}$ , we conclude $\mathbf{M}^{\star}(\mathcal{Q},\mathrm{T})\geq \mathrm{T}$ . Finally, note that this argument works for any $\mathrm{T}\in \mathbb{N}$ . This finishes the proof. $\square$
+
+Fix $\mathrm{T} \in \mathbb{N}$ . Indeed, for a given $\mathcal{Q} = (\mathcal{X},\mathrm{L},\mathcal{Y},\mathcal{C})$ with $\mathrm{D}(\mathcal{Q}) = \mathrm{d}$ , the above proof yields a $\min \{\mathrm{T},\mathrm{d}\}$ lower bound.
+
+# 3.2. Lower Bound $\frac{\log(\mathrm{L} \mathrm{T} + 1)}{\log(\mathrm{L} + 1)}$
+
+We continue by proving the lower bound for the logarithmic case. The proof requires the following combinatorial Lemma. Additionally, we establish a lower bound similar to the one just proved before presenting the main result.
+
+Lemma 3.3. Let $\mathcal{Q} = (\mathcal{X},\mathrm{L},\mathcal{Y},\mathcal{C})$ be an instance of the list transductive online learning framework. Assume that $\mathrm{B}(\mathcal{Q}) = \mathrm{d}$ for some $\mathrm{d}\in \mathbb{N}$ . Then, there exists a $(\mathrm{L} + 1)$ -Littlestone tree of depth at most $\frac{(\mathrm{L} + 1)^{\mathrm{d}} - 1}{\mathrm{L}}$ witnessing $\mathrm{B}(\mathcal{Q}) = \mathrm{d}$ ; crucially, it uses only a subset of the nodes of any tree witnessing $\mathrm{B}(\mathcal{Q}) = \mathrm{d}$ .
+
+Proof. We prove this Lemma by induction. We start with the base case $\mathrm{d} = 1$ . Let $\mathcal{T}$ be a $(\mathrm{L} + 1)$ -Littlestone tree witnessing $\mathrm{B}(\mathcal{Q}) = 1$ . Thus, every root-to-leaf path in $\mathcal{T}$ contains at least one node whose outgoing edges are labeled by $(\mathrm{L} + 1)$ distinct values from $\mathcal{Y}$ . Take one such node. Indeed, this node by itself witnesses $\mathrm{B}(\mathcal{Q}) = 1$ , so the depth is at most $1 = \frac{(\mathrm{L} + 1)^{1} - 1}{\mathrm{L}}$ . This finishes the proof of the base case. Now, assume the claim is true for $\mathrm{d} \in \mathbb{N}$ ; we prove it for $\mathrm{d} + 1$ . More specifically, we show that if $\mathrm{B}(\mathcal{Q}) = \mathrm{d} + 1$ , then there exists a $(\mathrm{L} + 1)$ -Littlestone tree of depth at most $\frac{(\mathrm{L} + 1)^{\mathrm{d} + 1} - 1}{\mathrm{L}}$ witnessing $\mathrm{B}(\mathcal{Q}) = \mathrm{d} + 1$ . Let $\mathcal{T}$ be a $(\mathrm{L} + 1)$ -Littlestone tree witnessing $\mathrm{B}(\mathcal{Q}) = \mathrm{d} + 1$ . Find a node of minimum level in $\mathcal{T}$ whose outgoing edges are labeled by $(\mathrm{L} + 1)$ distinct values from $\mathcal{Y}$ . Such a node must exist; if there are several, take any one of them. Now, denote the $(\mathrm{L} + 1)$ sub-trees of that node by $\mathcal{T}_1, \mathcal{T}_2, \ldots, \mathcal{T}_{(\mathrm{L} + 1)}$ . Restrict our instance space $\mathcal{X}$ to the instances on the levels of these sub-trees and call it $\mathcal{X}'$ . Also, consider the functions induced by the projection of $\mathcal{C}$ onto $\mathcal{X}'$ and call them $\mathcal{C}'$ . Let $\mathcal{Q}' := (\mathcal{X}', \mathrm{L}, \mathcal{Y}, \mathcal{C}')$ . Based on our construction of $\mathcal{Q}'$ , we have $\mathrm{B}(\mathcal{Q}') = \mathrm{d}$ . Now, we apply the induction hypothesis and get $\mathcal{T}_1', \mathcal{T}_2', \ldots, \mathcal{T}_{(\mathrm{L} + 1)}'$ such that each of them witnesses $\mathrm{B}(\mathcal{Q}') = \mathrm{d}$ and their depths are bounded above by $\frac{(\mathrm{L} + 1)^{\mathrm{d}} - 1}{\mathrm{L}}$ . Subsequently, we join our single node to these sub-trees. In particular, to keep the level-constrained property, the sub-trees must occupy disjoint blocks of levels, so the final depth is bounded by
+
+$(\mathrm{L} + 1)\times \frac{(\mathrm{L} + 1)^{\mathrm{d}} - 1}{\mathrm{L}} +1 = \frac{(\mathrm{L} + 1)^{\mathrm{d} + 1} - 1}{\mathrm{L}}.$ As we have a node with $\mathrm{L} + 1$ outgoing edges labeled by $\mathrm{L} + 1$ distinct labels from $\mathcal{Y}$ , the resulting tree witnesses $\mathrm{B}(\mathcal{Q}) = \mathrm{d} + 1$ . This completes the proof. $\square$
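The depth identity used in the induction step can be checked numerically. The following sketch (illustrative names, not from the paper) verifies that stacking the $\mathrm{L} + 1$ sub-trees of bounded depth below one extra node gives exactly the claimed closed form:

```python
# Numerical check of the depth identity used in the induction step:
# (L+1) * ((L+1)**d - 1) / L + 1 == ((L+1)**(d+1) - 1) / L
def joined_depth(L: int, d: int) -> int:
    # depth after stacking the (L+1) sub-trees of depth ((L+1)**d - 1)//L
    # below one extra node, keeping the level-constrained property
    sub_depth = ((L + 1) ** d - 1) // L  # exact: geometric series is divisible by L
    return (L + 1) * sub_depth + 1

def closed_form(L: int, d: int) -> int:
    return ((L + 1) ** (d + 1) - 1) // L

for L in range(1, 6):
    for d in range(1, 8):
        assert joined_depth(L, d) == closed_form(L, d)
print("depth identity holds for all tested (L, d)")
```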
+
+Lemma 3.4. Let $\mathcal{Q} = (\mathcal{X},\mathrm{L},\mathcal{Y},\mathcal{C})$ be an instance of the list transductive online learning framework. Assume that $\operatorname {B}(\mathcal{Q}) = \mathrm{d}$ for some $\mathrm{d}\in \mathbb{N}$ . Then, we have $\mathbf{M}^{\star}(\mathcal{Q},T)\geq \mathrm{d}$ for large enough $T$ .
+
+Proof. Let $\mathcal{T}$ be a $(\mathrm{L} + 1)$ -Littlestone tree of depth $r \in \mathbb{N}$ witnessing $\mathrm{B}(\mathcal{Q}) = \mathrm{d}$ . Based on Definition 2.15, at each level of $\mathcal{T}$ , all nodes are labeled with the same instance from $\mathcal{X}$ . Let $\mathbf{A}$ be any deterministic list transductive online learning rule. We build an adversarial strategy against $\mathbf{A}$ using $\mathcal{T}$ . To do so, we first present the $r$ instances at the $r$ levels of $\mathcal{T}$ , in order, to the learner before starting the game. We then construct this strategy along a special root-to-leaf path in $\mathcal{T}$ , which depends on $\mathbf{A}$ . Receive the first set of labels of size $\mathrm{L}$ predicted by the learner. The root node of $\mathcal{T}$ may have $\mathrm{L} + 1$ outgoing edges labeled with distinct labels from $\mathcal{Y}$ . If that is the case, then at least one of them carries a label that is not in the received set, and we output that label. Otherwise, we output any of them. We continue this construction of the adversarial strategy based on $\mathcal{T}$ and $\mathbf{A}$ . In this way, we force $\mathrm{d}$ mistakes on $\mathbf{A}$ , because every root-to-leaf path in $\mathcal{T}$ contains at least $\mathrm{d}$ nodes having $\mathrm{L} + 1$ outgoing edges labeled by distinct labels from $\mathcal{Y}$ . In addition, as $\mathcal{T}$ is shattered by $\mathcal{C}$ , there exists a concept in $\mathcal{C}$ that is consistent with the root-to-leaf path that we used. So, we are in the realizable setting. As a result, we have: $\sup_{S \in (\mathcal{X} \times \mathcal{Y})^{\mathrm{T}} \text{ realizable by } \mathcal{C}} \mathbf{M}(\mathbf{A}; S) \geq \mathrm{d}$ .
+
+Since we are able to construct an adversarial strategy for an arbitrary deterministic list transductive online learning rule $\mathbf{A}$ , we conclude $\mathbf{M}^{\star}(\mathcal{Q},\mathrm{T})\geq \mathrm{d}$ for all $\mathrm{T}\geq r$ . This finishes the proof. $\square$
+
+Lemma 3.5. Let $\mathcal{Q} = (\mathcal{X},\mathrm{L},\mathcal{Y},\mathcal{C})$ be an instance of the list transductive online learning framework. Assume that $\operatorname {B}(\mathcal{Q}) = \infty$ . Then, we have $\mathbf{M}^{\star}(\mathcal{Q},T)\in$ $\Omega \Big(\frac{\log(\mathrm{L}T + 1)}{\log(\mathrm{L} + 1)}\Big)\in \Omega (\log T).$
+
+Proof. Fix $\mathrm{T} \in \mathbb{N}$ . Solving $\frac{(\mathrm{L} + 1)^{\mathrm{d}} - 1}{\mathrm{L}} = \mathrm{T}$ for $\mathrm{d}$ gives $\mathrm{d} = \frac{\log(\mathrm{L} \mathrm{T} + 1)}{\log(\mathrm{L} + 1)}$ . Now, define $r := \left\lfloor \frac{\log(\mathrm{L} \mathrm{T} + 1)}{\log(\mathrm{L} + 1)} \right\rfloor$ . Let $\mathcal{T}$ be a $(\mathrm{L} + 1)$ -Littlestone tree witnessing $\mathrm{B}(\mathcal{Q}) = r$ . Note that such a tree must exist as $\mathrm{B}(\mathcal{Q}) = \infty$ . Now, apply Lemma 3.3 to $\mathcal{T}$ . Thus, we get a $(\mathrm{L} + 1)$ -Littlestone tree witnessing $\mathrm{B}(\mathcal{Q}) = r$ whose depth is at most $\frac{(\mathrm{L} + 1)^{r} - 1}{\mathrm{L}} \leq \mathrm{T}$ . Subsequently, based on the proof of Lemma 3.4, we have $\mathbf{M}^{\star}(\mathcal{Q},\mathrm{T})\geq r$ . Finally, note that this argument works for any $\mathrm{T}\in \mathbb{N}$ . This concludes the proof. $\square$
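The choice of $r$ above can be checked numerically: the floor of the solved expression always produces a witnessing tree whose depth fits within the $\mathrm{T}$ available rounds, while $r + 1$ would not. A small sketch (function name is ours, for illustration):

```python
import math

def lower_bound_rounds(L: int, T: int) -> int:
    # r = floor( log(L*T + 1) / log(L + 1) ): the largest d such that a
    # level-constrained tree witnessing B(Q) = d fits in depth at most T,
    # since such a tree needs depth at most ((L+1)**d - 1) / L  (Lemma 3.3)
    return math.floor(math.log(L * T + 1) / math.log(L + 1))

for L in (1, 2, 5):
    for T in (1, 10, 100, 10_000):
        r = lower_bound_rounds(L, T)
        # the witnessing tree of depth ((L+1)**r - 1)/L indeed fits in T rounds
        assert ((L + 1) ** r - 1) // L <= T
        # while depth d = r + 1 would already exceed T, so r is tight
        assert ((L + 1) ** (r + 1) - 1) // L > T
print("floor solution consistent with the depth bound")
```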
+
+# 4. Agnostic Setting
+
+First, we restate the main theorem in the agnostic setting here for the sake of simplicity.
+
+Theorem 4.1. Let $\mathcal{Q} = (\mathcal{X},\mathrm{L},\mathcal{Y},\mathcal{C})$ be an instance of the list transductive online learning framework. Then, $\mathcal{Q}$ is agnostic learnable in the list transductive online learning framework if and only if $\mathrm{D}(\mathcal{Q}) < \infty$ .
+
+Proof. The theorem can be proved by combining the lower bound of Lemma B.1 with the upper bound from Lemma B.2. $\square$
+
+# 5. Conclusion, Discussion, and Future Directions
+
+In this work, we investigated the problem of list transductive online learning with possibly arbitrary label space. In the realizable setting, we showed a trichotomy of possible minimax rates for the number of mistakes. In addition, we demonstrated a dichotomy of the minimax expected regret in the agnostic setting. To do so, we introduced two new combinatorial complexity parameters, the Level-constrained $(\mathrm{L} + 1)$ -Littlestone dimension and the Level-constrained $(\mathrm{L} + 1)$ -Branching dimension for some $\mathrm{L} \in \mathbb{N}$ .
+
+Finally, we outline a potential future direction for this line of research. Similarly to our work, in all previous studies on list learnability, including (Charikar & Pabbaraju, 2023; Moran et al., 2023; Brukhim et al., 2024), additional factors related to the size of the list appear in the upper bounds. For instance, if the size of the list is $\mathrm{L} \in \mathbb{N}$ , in the work of (Charikar & Pabbaraju, 2023), a factor $\mathrm{L}^6$ is present, or in the work of (Brukhim et al., 2024), a factor $\mathrm{L}^4$ arises. A key open question is how to eliminate such factors from our log T upper bound in the realizable setting. Addressing this question could potentially lead to the elimination of list size factors in other related problems as well. In close relation to this question, one could explore the problem of list learning with possibly unbounded list size. For example, the label space can be the real numbers, while lists are intervals of size $c$ for some constant $c \in \mathbb{R}$ .
+
+# Impact Statement
+
+This paper presents work whose goal is to advance the field of Statistical Learning Theory. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
+
+# References
+
+Ben-David, S. and Eiron, N. Self-directed learning and its relation to the VC-dimension and to teacher-directed learning. Machine Learning, 33:87-104, 1998.
+Ben-David, S., Eiron, N., and Kushilevitz, E. On self-directed learning. In Proceedings of the eighth annual conference on Computational learning theory, pp. 136-143, 1995.
+Ben-David, S., Kushilevitz, E., and Mansour, Y. Online learning versus offline learning. Machine Learning, 29: 45-63, 1997.
+Ben-David, S., Pál, D., and Shalev-Shwartz, S. Agnostic online learning. In Conference on Learning Theory, volume 3, pp. 1, 2009.
+Bousquet, O., Hanneke, S., Moran, S., Van Handel, R., and Yehudayoff, A. A theory of universal learning. In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing, pp. 532-541, 2021.
+Brukhim, N., Carmon, D., Dinur, I., Moran, S., and Yehudayoff, A. A characterization of multiclass learnability. In 2022 IEEE 63rd Annual Symposium on Foundations of Computer Science (FOCS), pp. 943-955. IEEE, 2022.
+Brukhim, N., Daniely, A., Mansour, Y., and Moran, S. Multiclass boosting: simple and intuitive weak learning criteria. Advances in Neural Information Processing Systems, 36, 2024.
+Cesa-Bianchi, N. and Lugosi, G. Prediction, Learning, and Games. Cambridge University Press, 2006. ISBN 9781139454827.
+Charikar, M. and Pabbaraju, C. A characterization of list learnability. In Proceedings of the 55th Annual ACM Symposium on Theory of Computing, pp. 1713-1726, 2023.
+Daniely, A., Sabato, S., and Shalev-Shwartz, S. Multiclass learning approaches: A theoretical comparison with implications. Advances in Neural Information Processing Systems, 25, 2012.
+Devulapalli, P. and Hanneke, S. The dimension of self-directed learning, 2024.
+Diakonikolas, I. and Kane, D. M. Algorithmic high-dimensional robust statistics. Cambridge university press, 2023.
+Goldman, S. A. and Sloan, R. H. The power of self-directed learning. Machine Learning, 14:271-294, 1994.
+
+Hanneke, S., Moran, S., Raman, V., Subedi, U., and Tewari, A. Multiclass online learning and uniform convergence. In The Thirty Sixth Annual Conference on Learning Theory, pp. 5682-5696. PMLR, 2023a.
+Hanneke, S., Moran, S., and Shafer, J. A trichotomy for transductive online learning. Advances in Neural Information Processing Systems, 2023b.
+Hanneke, S., Moran, S., and Tom, W. List sample compression and uniform convergence. In The Thirty Seventh Annual Conference on Learning Theory, pp. 2360-2388. PMLR, 2024a.
+Hanneke, S., Raman, V., Shaeiri, A., and Subedi, U. Multiclass transductive online learning. Advances in Neural Information Processing Systems, 2024b.
+Littlestone, N. Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine learning, 2:285-318, 1988.
+Moran, S., Sharon, O., Tsubari, I., and Yosebashvili, S. List online classification. In The Thirty Sixth Annual Conference on Learning Theory, pp. 1885-1913. PMLR, 2023.
+Pabbaraju, C. and Sarmasarkar, S. A characterization of list regression, 2024.
+
+# List Transductive Standard Optimal Algorithm (LTSOA)
+
+Input: A 4-tuple $\mathcal{Q} = (\mathcal{X},\mathrm{L},\mathcal{Y},\mathcal{C})$ , a total number of rounds $\mathrm{T}\in \mathbb{N}$ , and a sequence of T instances $X\in \mathcal{X}^{\mathrm{T}}$
+
+Initialize $\mathcal{V}^0 = \mathcal{C}$ and $t = 1$
+
+While $(t\leq \mathrm{T})$ :
+
+1. Sort the labels in $\mathcal{Y}$ in a non-increasing order according to the values $\mathrm{B}\left((X_{t + 1},X_{t + 2},\dots ,X_{\mathrm{T}}),\mathrm{L},\mathcal{Y},\mathcal{V}_{x_t\to y}^{t - 1}\right)$ for $y\in \mathcal{Y}$ .
+2. Predict the list $\mathcal{A}_t$ which consists of the top $\mathrm{L}$ labels in the above order.
+3. Receive a label $y_{t}\in \mathcal{Y}$ .
+4. Set $\mathcal{V}^t = \mathcal{V}_{x_t\to y_t}^{t - 1}$ and update $t = t + 1$ .
+
+Figure 1: List Transductive Standard Optimal Algorithm (LTSOA) is a variant of the Standard Optimal Algorithm (SOA) originally proposed by (Littlestone, 1988). Further, see the definition of $\mathcal{V}_{x\to y}$ for some $\mathcal{V} \subseteq \mathcal{Y}^{\mathcal{X}}$ , $x \in \mathcal{X}$ , and $y \in \mathcal{Y}$ in the proof of Lemma A.1. In addition, see the definition of the sequence-dependent Level-constrained $(\mathrm{L} + 1)$ -Branching dimension in the proof of Lemma A.1.
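As an illustration, the control flow of LTSOA can be sketched in Python. Everything below is hypothetical and ours, not the paper's: `branching_dim` stands in for an oracle evaluating the sequence-dependent Level-constrained $(\mathrm{L} + 1)$-Branching dimension, and the label space is assumed finite so it can be enumerated.

```python
def ltsoa(X, L, labels, concepts, branching_dim, receive_label):
    """Sketch of the LTSOA loop: predict the L labels whose induced
    version spaces keep the (sequence-dependent) branching dimension largest.

    X            : the T instances revealed up front (transductive setting)
    labels       : the label space Y (assumed finite here for illustration)
    concepts     : iterable of functions c: x -> y (the class C)
    branching_dim: oracle computing B(suffix, L, Y, version_space)
    receive_label: callback giving the adversary's label for round t
    """
    version_space = list(concepts)  # V^0 = C
    mistakes = 0
    for t, x_t in enumerate(X):
        suffix = X[t + 1:]

        # restriction V_{x -> y} = {c in V : c(x) = y}
        def restrict(y):
            return [c for c in version_space if c(x_t) == y]

        # step 1: rank labels by the dimension of the restricted version space
        ranked = sorted(labels,
                        key=lambda y: branching_dim(suffix, L, restrict(y)),
                        reverse=True)
        prediction = set(ranked[:L])   # step 2: top-L labels
        y_t = receive_label(t)         # step 3
        if y_t not in prediction:
            mistakes += 1
        version_space = restrict(y_t)  # step 4: V^t = V^{t-1}_{x_t -> y_t}
    return mistakes
```

Computing the true branching dimension is the expensive step; the sketch only fixes the control flow, and any monotone proxy can be plugged in for experimentation.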
+
+# A. Realizable Upper Bounds Proofs
+
+# A.1. Upper Bound $\mathrm{B}(\mathcal{Q})$
+
+Next, we move on to the proof of the constant upper bound. Furthermore, the proof of the following Lemma uses the main idea in the upper bound proof of the work of (Littlestone, 1988).
+
+Lemma A.1. Let $\mathcal{Q} = (\mathcal{X},\mathrm{L},\mathcal{Y},\mathcal{C})$ be an instance of the list transductive online learning framework. Assume that $\operatorname {B}(\mathcal{Q}) = \mathrm{d}$ for some $\mathrm{d}\in \mathbb{N}$ . Then, we have $\mathbf{M}^{\star}(\mathcal{Q},T)\in \mathcal{O}(\mathrm{d})\in \mathcal{O}(1)$ .
+
+Proof. We first define $\mathcal{V}_{x\to y}$ for some $\mathcal{V} \subseteq \mathcal{Y}^{\mathcal{X}}$ , $x \in \mathcal{X}$ , and $y \in \mathcal{Y}$ . In particular, $\mathcal{V}_{x\to y} := \{c \mid c \in \mathcal{V}, c(x) = y\}$ . Next, for every $X \in \mathcal{X}^k$ for some $k \in \mathbb{N}$ , we define the sequence-dependent dimension $\mathrm{B}\left(X,\mathrm{L},\mathcal{Y},\mathcal{C}\right)$ by adding a new constraint to Definition 2.15. In particular, we require the instances of $X$ to be used, in order, as the labels of the nodes of the $(\mathrm{L} + 1)$ -ary Littlestone tree.
+
+Now, we proceed with the proof of the Lemma. Fix $\mathrm{T} \in \mathbb{N}$ . We run the LTSOA Algorithm 1 with input $\mathcal{Q}$ , $\mathrm{T}$ , and $X \in \mathcal{X}^{\mathrm{T}}$ as the initial sequence of instances chosen by the adversary. First of all, notice that for any instance $\mathcal{Q}'$ of the list transductive online learning framework, we have $\mathrm{B}(\mathcal{Q}') \geq 0$ . Also, notice that $\mathrm{B}\left((X_1, X_2, \ldots, X_{\mathrm{T}}), \mathrm{L}, \mathcal{Y}, \mathcal{C}\right) \leq \mathrm{B}(\mathcal{Q})$ . As a result, based on the following Claim, it is clear that $\sup_{S \in (\mathcal{X} \times \mathcal{Y})^{\mathrm{T}} \text{ realizable by } \mathcal{C}} \mathbf{M}(\mathrm{LTSOA}; S) \leq \mathrm{B}(\mathcal{Q}) = \mathrm{d}$ . Therefore, $\mathbf{M}^\star(\mathcal{Q}, \mathrm{T}) \leq \mathrm{B}(\mathcal{Q}) = \mathrm{d}$ . This completes the proof. Subsequently, we prove the following Claim.
+
+Claim A.2. For every $t \in [\mathrm{T}]$ , if $y_t \notin \mathcal{A}_t$ , then $\mathrm{B}\left((X_t, X_{t+1}, \ldots, X_{\mathrm{T}}), \mathrm{L}, \mathcal{Y}, \mathcal{V}^{t-1}\right) > \mathrm{B}\left((X_{t+1}, X_{t+2}, \ldots, X_{\mathrm{T}}), \mathrm{L}, \mathcal{Y}, \mathcal{V}^t\right)$ .
+
+Proof. We prove this claim by contradiction. For simplicity, denote $\mathrm{B}\left((X_{t},X_{t + 1},\ldots ,X_{\mathrm{T}}),\mathrm{L},\mathcal{Y},\mathcal{V}^{t - 1}\right)$ by $A$ . Also, denote $\mathrm{B}\left((X_{t + 1},X_{t + 2},\dots ,X_{\mathrm{T}}),\mathrm{L},\mathcal{Y},\mathcal{V}^{t}\right)$ by $B$ . Assume, for contradiction, that $A \leq B$ . Indeed, the case $A < B$ is not possible, since $\mathcal{V}^{t}\subseteq \mathcal{V}^{t - 1}$ and passing to a sub-sequence and a sub-class cannot increase the dimension. So, assume that $A = B$ . If that is the case, based on lines 1 and 2 in Algorithm 1, together with $y_t \notin \mathcal{A}_t$ , there are at least $\mathrm{L} + 1$ labels such that the restriction of the current concept class $\mathcal{V}^{t - 1}$ to $x_{t}$ using each of those labels still leads to $A$ as the new dimension. Thus, we can construct a new tree rooted at $x_t$ with $\mathrm{L} + 1$ distinctly labeled outgoing edges, which shows $A > A$ . This is a clear contradiction. $\square$
+
+# List Shattering Algorithm
+
+Input: A 4-tuple $\mathcal{Q} = (\mathcal{X},\mathrm{L},\mathcal{Y},\mathcal{C})$ , a total number of rounds $\mathrm{T}\in \mathbb{N}$ , and a sequence of $\mathrm{T}$ instances $X\in \mathcal{X}^{\mathrm{T}}$
+
+Initialize $\mathcal{V}^0 = \mathcal{C}$ and $t = 1$
+
+While $(t\leq \mathrm{T})$ :
+
+1. Sort the labels in $\mathcal{Y}$ in a non-increasing order according to the values $\mathrm{Sh}\left((X_{t + 1},X_{t + 2},\ldots ,X_{\mathrm{T}}),\mathrm{L},\mathcal{Y},\mathcal{V}_{x_t\to y}^{t - 1}\right)$ for $y\in \mathcal{Y}$ .
+2. Predict the list $\mathcal{A}_t$ which consists of the top $\mathrm{L}$ labels in the above order.
+3. Receive a label $y_{t}\in \mathcal{Y}$ .
+4. Set $\mathcal{V}^t = \mathcal{V}_{x_t\to y_t}^{t - 1}$ and update $t = t + 1$ .
+
+Figure 2: See the definition of $\mathcal{V}_{x\to y}$ for some $\mathcal{V}\subseteq \mathcal{Y}^{\mathcal{X}}$ , $x\in \mathcal{X}$ , and $y\in \mathcal{Y}$ in the proof of Lemma A.3. In addition, see the definition of $\mathrm{Sh}(\cdot)$ in the proof of Lemma A.3.
+
+# A.2. Upper Bound $\mathrm{L} \, \mathrm{D}(\mathcal{Q}) \log \left( \frac{\mathrm{e} \, \mathrm{T}}{\mathrm{D}(\mathcal{Q})} \right)$
+
+Finally, we turn our attention to proving the $\log \mathrm{T}$ upper bound. Moreover, the proof of the following lemma represents the main contribution of this section, relying on the shattering technique from the recent work of (Hanneke et al., 2024b).
+
+Lemma A.3. Let $\mathcal{Q} = (\mathcal{X},\mathrm{L},\mathcal{Y},\mathcal{C})$ be an instance of the list transductive online learning framework. Assume that $\mathrm{D}(\mathcal{Q}) = \mathrm{d}$ . Then, we have $\mathbf{M}^{\star}(\mathcal{Q},T)\in \mathcal{O}\Big(\mathrm{L}\mathrm{d}\log \left(\frac{\mathrm{e}T}{\mathrm{d}}\right)\Big)\in \mathcal{O}(\log T)$ .
+
+Proof. We first define $\mathcal{V}_{x\to y}$ for some $\mathcal{V} \subseteq \mathcal{Y}^{\mathcal{X}}$ , $x \in \mathcal{X}$ , and $y \in \mathcal{Y}$ . In particular, $\mathcal{V}_{x\to y} := \{c \mid c \in \mathcal{V}, c(x) = y\}$ . Next, for every $X \in \mathcal{X}^k$ for some $k \in \mathbb{N}$ , we say that $X$ is fully shattered by $\mathcal{C}$ if there exists a $(\mathrm{L} + 1)$ -ary Littlestone tree $\mathcal{T}$ of depth $k$ for $\mathcal{Q}$ , shattered by $\mathcal{C}$ , such that all children of every node of $\mathcal{T}$ are labeled by distinct elements of $\mathcal{Y}$ and, for every $i \in [k]$ , all nodes at level $i$ of $\mathcal{T}$ are labeled by $X_i$ . Now, let $\mathcal{V} \subseteq \mathcal{Y}^{\mathcal{X}}$ and $X \in \mathcal{X}^k$ for some $k \in \mathbb{N}$ . Then, we define $\operatorname{Sh}\left((X_1, X_2, \ldots, X_k), \mathrm{L}, \mathcal{Y}, \mathcal{V}\right)$ as the number of non-empty sub-sequences of $X$ that are fully shattered by $\mathcal{V}$ .
+
+The first observation is that if $X \in \mathcal{X}^{\star}$ is fully shattered by $\mathcal{V} \subseteq \mathcal{C}$ , then the size of $X$ is at most $\mathrm{d}$ . Next, clearly, if $X \in \mathcal{X}^{\star}$ does not have any non-empty sub-sequence that is fully shattered by $\mathcal{V} \subseteq \mathcal{C}$ , the projection of $\mathcal{V}$ onto $X$ is unique.
+
+Now, we proceed with the proof of the Lemma. Fix $\mathrm{T} \in \mathbb{N}$ . We run the List Shattering Algorithm 2 with input $\mathcal{Q}$ , $\mathrm{T}$ , and $X \in \mathcal{X}^{\mathrm{T}}$ as the initial sequence of instances chosen by the adversary. Based on the previous paragraph, we know $\operatorname{Sh}\left(X, \mathrm{L}, \mathcal{Y}, \mathcal{C}\right) \leq \sum_{i=1}^{\mathrm{d}} \binom{\mathrm{T}}{i} \leq \left(\frac{\mathrm{eT}}{\mathrm{d}}\right)^{\mathrm{d}}$ . Whenever we make a mistake, there are some cases to consider. (1) For each sub-sequence fully shattered by all $\mathrm{L}$ of the version spaces related to the labels in the predicted list, and by the $y_t$ version space, another fully shattered sub-sequence is removed; in particular, the one that has the predicted point at its root and the other $\mathrm{L} + 1$ trees as its sub-trees. (2) The remaining sub-sequences fully shattered by the $y_t$ version space can be fully shattered by at most $\mathrm{L} - 1$ of the top $\mathrm{L}$ labels. Since those $\mathrm{L}$ labels each shatter at least as many sub-sequences as $y_t$ , the maximum number of sub-sequences that can be fully shattered by the $y_t$ version space is a $\left(1 - \frac{1}{\mathrm{L}}\right)$ fraction of the total fully shattered sub-sequences. As a result, after $m$ mistakes with $m > \frac{-\ln(A)}{\ln\left(1 - \frac{1}{\mathrm{L}}\right)}$ , where $A := \left(\frac{\mathrm{eT}}{\mathrm{d}}\right)^{\mathrm{d}}$ , no fully shattered sub-sequence remains. Therefore, we get $\mathbf{M}^{\star}(\mathcal{Q}, \mathrm{T}) \in \mathcal{O}\left(\mathrm{L}\,\mathrm{d} \log\left(\frac{\mathrm{eT}}{\mathrm{d}}\right)\right) \in \mathcal{O}(\log \mathrm{T})$ . $\square$
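The counting argument can be sanity-checked numerically: starting from $A = (\mathrm{e}\mathrm{T}/\mathrm{d})^{\mathrm{d}}$ fully shattered sub-sequences and shrinking by a $(1 - 1/\mathrm{L})$ factor per mistake, the number of mistakes before the count drops below one is on the order of $\mathrm{L}\,\mathrm{d}\log(\mathrm{e}\mathrm{T}/\mathrm{d})$, since $-1/\ln(1 - 1/\mathrm{L}) \leq \mathrm{L}$. A small sketch (function name is ours, illustrative values):

```python
import math

def mistake_budget(L: int, d: int, T: int) -> int:
    # A = (eT/d)^d upper-bounds the number of fully shattered sub-sequences;
    # each mistake removes at least a 1/L fraction of them, so the count
    # falls below 1 after m > -ln(A) / ln(1 - 1/L) mistakes
    A = (math.e * T / d) ** d
    return math.ceil(-math.log(A) / math.log(1 - 1 / L))

for L in (2, 3, 10):
    for d in (1, 3, 5):
        for T in (10, 1000):
            m = mistake_budget(L, d, T)
            # since -1/ln(1 - 1/L) <= L, the budget is O(L * d * log(eT/d))
            assert m <= L * d * math.log(math.e * T / d) + 1
print("mistake budget within L*d*log(eT/d)")
```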
+
+# B. Agnostic Proofs
+
+# B.1. Lower Bound
+
+We start by proving the lower bound. The proof of the following Lemma uses the main idea of the lower bound proof in the work of (Moran et al., 2023).
+
+Lemma B.1. Let $\mathcal{Q} = (\mathcal{X},\mathrm{L},\mathcal{Y},\mathcal{C})$ be an instance of the list transductive online learning framework. Assume that $\mathrm{D}(\mathcal{Q}) = \infty$ . Then, $\mathcal{Q}$ is not agnostic learnable in the list transductive online learning framework.
+
+Proof. First of all, note that the realizable setting is a particular case of the agnostic setting. Fix $\mathrm{T} \in \mathbb{N}$ . Let $\mathcal{T}$ be a $(\mathrm{L} + 1)$ -Littlestone tree witnessing $\mathrm{D}(\mathcal{Q}) \geq \mathrm{T}$ . Note that such a tree must exist as $\mathrm{D}(\mathcal{Q}) = \infty$ . Based on Definition 2.14, this tree has depth $\mathrm{T}$ . Also, at each level of $\mathcal{T}$ , all nodes are labeled by the same instance from $\mathcal{X}$ . In addition, the children of every node are labeled by distinct labels from $\mathcal{Y}$ . Let $\mathbf{A}$ be any randomized list transductive online learning rule. We build an adversarial strategy against $\mathbf{A}$ using $\mathcal{T}$ . To do so, we first present the $\mathrm{T}$ instances at the $\mathrm{T}$ levels of $\mathcal{T}$ , in order, to the learner before starting the game. We then construct this strategy along a special root-to-leaf path in $\mathcal{T}$ . Receive the first set of labels of size $\mathrm{L}$ predicted by the learner. As the root node of $\mathcal{T}$ has $\mathrm{L} + 1$ outgoing edges labeled by distinct labels from $\mathcal{Y}$ , we output one of those $\mathrm{L} + 1$ labels chosen uniformly at random. We continue this construction of the adversarial strategy along $\mathcal{T}$ . Notice that, as $\mathcal{T}$ is shattered by $\mathcal{C}$ , there exists a concept in $\mathcal{C}$ that is consistent with the root-to-leaf path that we used. So, we are in the realizable setting. In addition, notice that our choices of labels are completely independent of the learner's predictions. So, $\mathbf{A}$ makes a mistake at every point with probability at least $\frac{1}{\mathrm{L} + 1}$ . Since we are able to construct an adversarial strategy for an arbitrary randomized list transductive online learning rule $\mathbf{A}$ , we conclude that any randomized rule incurs at least $\frac{\mathrm{T}}{\mathrm{L} + 1}$ expected regret. It means that $\mathcal{Q}$ is not agnostic learnable in the list transductive online learning framework. This finishes the proof. $\square$
+
+# B.2. Upper Bound
+
+Finally, we turn our attention to proving the $\sqrt{\mathrm{T}}\log (\mathrm{T})$ upper bound. Moreover, the proof of the following lemma represents the main contribution of this section, relying on the shattering technique from the realizable part, as well as the proof technique of (Hanneke et al., 2023a).
+
+We note that Algorithm 2 can be made conservative, that is, only performing the update step when a mistake occurs. It is not hard to see that we can get the same guarantee as in Lemma A.3. Similarly, Algorithm 1 can be made conservative, that is, only performing the update step when a mistake occurs, and we can get the same guarantee as in Lemma A.1.
+
+Lemma B.2. Let $\mathcal{Q} = (\mathcal{X},\mathrm{L},\mathcal{Y},\mathcal{C})$ be an instance of the list transductive online learning framework. Assume that $\mathrm{D}(\mathcal{Q}) = \mathrm{d} < \infty$ for some $\mathrm{d}$ in $\mathbb{N}$ . Then, $\mathcal{Q}$ is agnostic learnable in the list transductive online learning framework.
+
+Proof. Fix $\mathrm{T} \in \mathbb{N}$ . Based on the results in Section 3, we know that there exists a conservative version of Algorithm 2, which gives us $\mathbf{M}^{\star}(\mathcal{Q},\mathrm{T}) \in \mathcal{O}\Big(\mathrm{L}\mathrm{d}\log \left(\frac{\mathrm{eT}}{\mathrm{d}}\right)\Big) \in \mathcal{O}(\log \mathrm{T})$ . For a given $S \in (\mathcal{X} \times \mathcal{Y})^{\mathrm{T}}$ played by the adversary, let $c^{\star}$ be the best concept in $\mathcal{C}$ in the definition of regret. Denote by $R^{\star}$ the sub-sequence of indices on which $c^{\star}$ is correct. Indeed, if we run the mentioned algorithm only on these points, we make at most $\mathcal{O}\Big(\mathrm{L}\mathrm{d}\log \left(\frac{\mathrm{eT}}{\mathrm{d}}\right)\Big)$ mistakes, on a set of indices $J^{\star} \subseteq R^{\star}$ . In fact, we only need to update our algorithm on those points. Furthermore, among all experts updating on every possible sub-sequence of $[\mathrm{T}]$ of size at most $\mathcal{O}\Big(\mathrm{L}\mathrm{d}\log \left(\frac{\mathrm{eT}}{\mathrm{d}}\right)\Big)$ , one of them updates exactly on $J^{\star}$ . Thus, based on the celebrated prediction with expert advice algorithm (Cesa-Bianchi & Lugosi, 2006), we can get a regret of:
+
+$$
+\mathcal {O} \left(\sqrt {\mathrm {T L D} (\mathcal {Q}) \log \left(\frac {\mathrm {e T}}{\mathrm {D} (\mathcal {Q})}\right) \log \left(\frac {\mathrm {e T}}{\mathrm {L D} (\mathcal {Q}) \log \left(\frac {\mathrm {e T}}{\mathrm {D} (\mathcal {Q})}\right)}\right)}\right) \in o (\mathrm {T})
+$$
+
+We note that, on $R^{\star}$ , each of our experts differs from $c^{\star}$ on at most $\mathcal{O}\left(\mathrm{Ld}\log \left(\frac{\mathrm{eT}}{\mathrm{d}}\right)\right)$ instances.
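The aggregation step relies on the standard prediction-with-expert-advice machinery of (Cesa-Bianchi & Lugosi, 2006). As a rough illustration only, the sketch below implements a generic deterministic weighted-majority variant over an abstract pool of experts; the regret bound above uses the randomized version with a tuned learning rate, and all names here are ours:

```python
import math

def multiplicative_weights(expert_preds, true_labels, eta=0.5):
    """Deterministic weighted-majority aggregation over experts.

    expert_preds[i][t] is expert i's prediction at round t;
    each expert that errs at round t has its weight multiplied by exp(-eta).
    Returns the aggregator's mistake count.
    """
    n = len(expert_preds)
    weights = [1.0] * n
    mistakes = 0
    for t, y in enumerate(true_labels):
        # vote: total weight behind each predicted label this round
        votes = {}
        for i in range(n):
            p = expert_preds[i][t]
            votes[p] = votes.get(p, 0.0) + weights[i]
        guess = max(votes, key=votes.get)
        if guess != y:
            mistakes += 1
        # penalize every expert that was wrong this round
        for i in range(n):
            if expert_preds[i][t] != y:
                weights[i] *= math.exp(-eta)
    return mistakes
```

The point of the proof is only that one expert in the pool (the one updating exactly on $J^{\star}$) is nearly as good as $c^{\star}$, so the aggregator inherits its performance up to the expert-advice regret term.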
+
+Importantly, the same technique can be used to overcome the issue of infinite label space in the work of (Moran et al., 2023), thus answering their open question.
+
+# C. Examples
+
+This section provides three examples of instances within the list transductive online learning framework, revealing the separations between related learnability definitions.
+
+# C.1. L-DS Dimension
+
+Definition C.1 (i-neighbour). Let $f, g \in \mathcal{Y}^d$ for some non-empty set $\mathcal{Y}$ and some $d \in \mathbb{N}$ . For every $i \in [d]$ , we say that $f$ and $g$ are $i$ -neighbours if $f_i \neq g_i$ and $\forall_{j \in [d] - \{i\}} f_j = g_j$ .
+
+Definition C.2 (L-DS Dimension (Charikar & Pabbaraju, 2023)). Let $\mathcal{Q} = (\mathcal{X},\mathrm{L},\mathcal{Y},\mathcal{C})$ be an instance of the list transductive online learning framework. Let $S\in \mathcal{X}^d$ be a sequence for some $d\in \mathbb{N}$ . We say that $S$ is L-DS shattered by $\mathcal{C}$ if there exists $F\subseteq \mathcal{C}$ with $|F| < \infty$ such that, for all $f\in \{g\mid g\in \mathcal{Y}^d,\exists_{c\in F}\forall_{i\in [d]}\: g_i = c(S_i)\}$ and for all $i\in [d]$ , $f$ has at least $\mathrm{L}$ $i$ -neighbours in this set. The L-DS dimension of $\mathcal{Q}$ , denoted $\mathrm{DS}(\mathcal{Q})$ , is the maximal size of a sequence $S\in \mathcal{X}^d$ for some $d\in \bar{\mathbb{N}}$ that is L-DS shattered by $\mathcal{C}$ .
+
+# C.2. Main Results on Learnability Separations
+
+Our first result in this section implies a separation between realizable/agnostic multiclass transductive online learnability (Hanneke et al., 2024b) and realizable/agnostic list transductive online learnability.
+
+Proposition C.3. For every $\mathrm{L} \in \mathbb{N}$ , there exists an instance of the list transductive online learning framework $\mathcal{Q} = (\mathcal{X}, \mathrm{L}, \mathcal{Y}, \mathcal{C})$ and another instance of the list transductive online learning framework $\mathcal{Q}' = (\mathcal{X}, \mathrm{L} + 1, \mathcal{Y}, \mathcal{C})$ such that $\mathrm{D}(\mathcal{Q}) = \infty$ and $\mathrm{B}(\mathcal{Q}') = 0$ .
+
+Proof. Fix $\mathrm{L} \in \mathbb{N}$ . Let $\mathcal{T}$ be an infinite-depth perfect rooted $(\mathrm{L} + 1)$ -ary tree (Definition 2.9). The definition of such a tree is similar to Definition 1.7 in the work of Bousquet et al. (2021). For every $i \in \mathbb{N}$ , label all nodes at level $i - 1$ of $\mathcal{T}$ with $i - 1$ . Also, for every node in $\mathcal{T}$ , label all its children with distinct elements from $[\mathrm{L} + 1]$ . Let $\mathcal{X} = \{0\} \cup \mathbb{N}$ . In addition, let $\mathcal{Y} = [\mathrm{L} + 1]$ . Further, define $\mathcal{C} \subseteq \mathcal{Y}^{\mathcal{X}}$ to contain exactly the functions consistent with a root-to-leaf path of $\mathcal{T}$ . Now, define $\mathcal{Q} := (\mathcal{X}, \mathrm{L}, \mathcal{Y}, \mathcal{C})$ and $\mathcal{Q}' := (\mathcal{X}, \mathrm{L} + 1, \mathcal{Y}, \mathcal{C})$ . Based on the definition of $\mathcal{T}$ and $\mathcal{Q}$ , it is clear that $\mathrm{D}(\mathcal{Q}) = \infty$ . Additionally, notice that for every $x \in \mathcal{X}$ , the size of $\{y \mid \exists_{c \in \mathcal{C}}\, y = c(x)\}$ is bounded above by $\mathrm{L} + 1$ , so a learner with list size $\mathrm{L} + 1$ never errs. As a result, $\mathrm{B}(\mathcal{Q}') = 0$ . This finishes the proof.
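The construction can be checked on a finite prefix of the tree: a sketch, under the assumption that we truncate $\mathcal{T}$ at a finite depth and represent each root-to-leaf path as a concept mapping level indices to child labels in $[\mathrm{L}+1]$ (function names are ours):

```python
from itertools import product

def build_concepts(L, depth):
    """Concepts consistent with root-to-leaf paths of the perfect
    (L+1)-ary tree, truncated at a finite depth: one concept per
    sequence of child labels in [L+1] = {1, ..., L+1}."""
    labels = range(1, L + 2)
    return [dict(enumerate(path)) for path in product(labels, repeat=depth)]

def realizable_labels(concepts, x):
    """Set of labels some concept assigns to instance x (a level index)."""
    return {c[x] for c in concepts}
```

Every level index admits exactly $\mathrm{L}+1$ realizable labels, which is why a list of size $\mathrm{L}+1$ makes no mistakes while list size $\mathrm{L}$ leaves the tree fully shatterable.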
+
+Our second result in this section implies a separation between realizable/agnostic list PAC learnability (Charikar & Pabbaraju, 2023) and realizable/agnostic list transductive online learnability.
+
+Proposition C.4. For every $\mathrm{L} \in \mathbb{N}$ , there exists an instance of the list transductive online learning framework $\mathcal{Q} = (\mathcal{X}, \mathrm{L}, \mathcal{Y}, \mathcal{C})$ such that $\mathrm{D}(\mathcal{Q}) = \infty$ and $\mathrm{DS}(\mathcal{Q}) = 1$ .
+
+Proof. Fix $\mathrm{L} \in \mathbb{N}$ . Let $\mathcal{T}$ be an infinite-depth perfect rooted $(\mathrm{L} + 1)$ -ary tree (Definition 2.9). The definition of such a tree is similar to Definition 1.7 in the work of Bousquet et al. (2021). For every $i \in \mathbb{N}$ , label all nodes at level $i - 1$ of $\mathcal{T}$ with $i - 1$ . Also, label all edges of $\mathcal{T}$ with distinct elements of $\mathbb{N}$ . Let $\mathcal{X} = \{0\} \cup \mathbb{N}$ and $\mathcal{Y} = \mathbb{N}$ . Further, define $\mathcal{C} \subseteq \mathcal{Y}^{\mathcal{X}}$ to contain exactly the functions consistent with a root-to-leaf path of $\mathcal{T}$ . Now, define $\mathcal{Q} := (\mathcal{X}, \mathrm{L}, \mathcal{Y}, \mathcal{C})$ . Based on the definition of $\mathcal{T}$ and $\mathcal{Q}$ , it is clear that $\mathrm{D}(\mathcal{Q}) = \infty$ . Subsequently, we prove that $\mathrm{DS}(\mathcal{Q}) = 1$ , arguing by contradiction. Assume $\mathrm{DS}(\mathcal{Q}) \geq 2$ . Thus, there exist $S = (x_1, x_2) \in \mathcal{X}^2$ and a finite $F \subseteq \mathcal{C}$ witnessing that $S$ is L-DS shattered. Without loss of generality, assume that $x_1$ is above $x_2$ in $\mathcal{T}$ . Since the edges of $\mathcal{T}$ are labeled with distinct elements of $\mathcal{Y}$ , every pair of concepts $(c_1, c_2) \in \mathcal{C}^2$ with $c_1(x_2) = c_2(x_2)$ must also agree on $x_1$ , meaning that $c_1(x_1) = c_2(x_1)$ as well. Hence no behaviour on $S$ has even a single neighbour in the first coordinate. This is a contradiction. Therefore, $\mathrm{DS}(\mathcal{Q}) < 2$ . It is easy to see that the root node of $\mathcal{T}$ is L-DS shattered by $\mathcal{C}$ . As a result, $\mathrm{DS}(\mathcal{Q}) = 1$ . This finishes the proof.
+
+Notably, one can also show that for every instance of the list transductive online learning framework $\mathcal{Q} = (\mathcal{X},\mathrm{L},\mathcal{Y},\mathcal{C})$ , we always have $\mathrm{DS}(\mathcal{Q})\leq \mathrm{D}(\mathcal{Q})$ .
+
+Our third result in this section implies a separation between realizable/agnostic list online learnability (Moran et al., 2023) and realizable/agnostic list transductive online learnability.
+
+Proposition C.5. For every $\mathrm{L} \in \mathbb{N}$ , there exists an instance of the list transductive online learning framework $\mathcal{Q} = (\mathcal{X}, \mathrm{L}, \mathcal{Y}, \mathcal{C})$ such that $\mathrm{L}(\mathcal{Q}) = \infty$ and $\mathrm{B}(\mathcal{Q}) \leq 2$ .
+
+Proof. Fix $\mathrm{L} \in \mathbb{N}$ . Let $\mathcal{T}$ be an infinite-depth perfect rooted $(\mathrm{L} + 1)$ -ary tree (Definition 2.9). The definition of such a tree is similar to Definition 1.7 in the work of Bousquet et al. (2021). First, label all nodes of $\mathcal{T}$ with distinct elements of $\mathbb{N}$ . Also, for every node in $\mathcal{T}$ , label all of its children with distinct elements from $[\mathrm{L} + 1]$ . Let $\mathcal{X} = \mathbb{N}$ and $\mathcal{Y} = \mathbb{R}$ . Further, define $\mathcal{C} \subseteq \mathcal{Y}^{\mathcal{X}}$ to contain exactly the functions consistent with a root-to-leaf path of $\mathcal{T}$ , with one special property: each such function equals a unique element of $\mathbb{R}$ on all instances outside its associated root-to-leaf path. Now, define $\mathcal{Q} := (\mathcal{X}, \mathrm{L}, \mathcal{Y}, \mathcal{C})$ . Based on the definition of $\mathcal{T}$ and $\mathcal{Q}$ , it is clear that $\mathrm{L}(\mathcal{Q}) = \infty$ . Subsequently, we prove that $\mathrm{B}(\mathcal{Q}) \leq 2$ .
+
+To prove this, we show that for every $\mathrm{T} \in \mathbb{N}$ we have $\mathbf{M}^{\star}(\mathcal{Q},\mathrm{T}) \leq 2$ , from which $\mathrm{B}(\mathcal{Q}) \leq 2$ follows. To see why, notice that we can prove a lower bound based on $\mathrm{B}(\mathcal{Q})$ : if $\mathrm{B}(\mathcal{Q}) > 2$ , the adversary can always force at least three mistakes against any deterministic list transductive online learning rule for large enough $\mathrm{T}$ .
+
+This part of the proof is essentially identical to the corresponding part in the proof of Proposition 11 in the work of Hanneke et al. (2024b). We include it for the sake of completeness. Fix $\mathrm{T} \in \mathbb{N}$ . Let $S \in \mathcal{X}^{\mathrm{T}}$ be the sequence chosen by the adversary at the beginning of the game, and let $c^{\star} \in \mathcal{C}$ be the target concept chosen by the adversary. Further, let $u$ be the root-to-leaf path in $\mathcal{T}$ associated with the concept $c^{\star}$ . In addition, for every $i \in [\mathrm{T}]$ , let $v_{i}$ be a root-to-leaf path in $\mathcal{T}$ containing the first $i$ members of $S$ , if it exists. Finally, let $i^{\star}$ be the smallest positive integer such that $v_{i^{\star}}$ does not exist; if no such integer exists, let $i^{\star} = \mathrm{T} + 1$ . Our algorithm predicts according to the $[\mathrm{L} + 1]$ labels associated with the path $v_{i^{\star} - 1}$ for the first $i^{\star} - 1$ points in $S$ . Moreover, if the adversary ever reveals a unique label, we use its corresponding $c \in \mathcal{C}$ to make predictions in all future rounds. For the $i^{\star}$ 'th member of $S$ , if it exists, we predict arbitrarily. To see that this algorithm makes at most 2 mistakes, we consider two cases. (1) If $i^{\star} = \mathrm{T} + 1$ , then our algorithm makes at most one mistake: either when the adversary switches the label from something in $[\mathrm{L} + 1]$ to a unique label corresponding to the target concept $c^{\star}$ , or possibly on the last instance. (2) Otherwise, the algorithm makes at most two mistakes; the first mistake can be on round $i^{\star} - 1$ , and the second mistake can be on round $i^{\star}$ , after which the true $c^{\star}$ is known to the learner from its unique label. Indeed, if the adversary switches the label from $[\mathrm{L} + 1]$ to a unique label corresponding to the target concept $c^{\star}$ before round $i^{\star} - 1$ , we make only one mistake. In fact, we just showed that even an algorithm with list size one can achieve this. This completes the proof.
+
+Notably, one can also show that for every instance of the list transductive online learning framework $\mathcal{Q} = (\mathcal{X},\mathrm{L},\mathcal{Y},\mathcal{C})$ , we always have $\mathrm{B}(\mathcal{Q})\leq \mathrm{L}(\mathcal{Q})$ .
\ No newline at end of file
diff --git a/atrichotomyforlisttransductiveonlinelearning/images.zip b/atrichotomyforlisttransductiveonlinelearning/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..b80417aba875be6e55f1af65507988fbdc86b784
--- /dev/null
+++ b/atrichotomyforlisttransductiveonlinelearning/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9bb9551797eb0c1a8be59188a1e8675413923e395620b13cb8bf7b252de2fe53
+size 63507
diff --git a/atrichotomyforlisttransductiveonlinelearning/layout.json b/atrichotomyforlisttransductiveonlinelearning/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..d05ecdf1241d4d04b2576838e31b3650ea080d00
--- /dev/null
+++ b/atrichotomyforlisttransductiveonlinelearning/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:81d4641d289c2f42a72de099059302e6ce7ce27ea2c02ea7eceb27c50d334283
+size 1038714
diff --git a/atwostagelearningtodeferapproachformultitasklearning/a840dbea-0307-45b8-8982-93c69887846c_content_list.json b/atwostagelearningtodeferapproachformultitasklearning/a840dbea-0307-45b8-8982-93c69887846c_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..7c6615e111c1eabdf2b3a7e144a07acdeea05779
--- /dev/null
+++ b/atwostagelearningtodeferapproachformultitasklearning/a840dbea-0307-45b8-8982-93c69887846c_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0e541babdf5b6d1d4ae96cde2053ccd0fe411c5b20b7027a57713fe5545a0794
+size 194558
diff --git a/atwostagelearningtodeferapproachformultitasklearning/a840dbea-0307-45b8-8982-93c69887846c_model.json b/atwostagelearningtodeferapproachformultitasklearning/a840dbea-0307-45b8-8982-93c69887846c_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..b3d15e473ab6c267963d63d0adb7f0cab9f58d9e
--- /dev/null
+++ b/atwostagelearningtodeferapproachformultitasklearning/a840dbea-0307-45b8-8982-93c69887846c_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7b749c9538a12bd917dd2b8cd184998633024c2929a1a670e3d808a3ce23c3e7
+size 228050
diff --git a/atwostagelearningtodeferapproachformultitasklearning/a840dbea-0307-45b8-8982-93c69887846c_origin.pdf b/atwostagelearningtodeferapproachformultitasklearning/a840dbea-0307-45b8-8982-93c69887846c_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..5214a31bea92be80f1de7a74d178ba158ddefeda
--- /dev/null
+++ b/atwostagelearningtodeferapproachformultitasklearning/a840dbea-0307-45b8-8982-93c69887846c_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:92a91ce6cbb01bb048171349b9d6d9a8dde75e50b292dc89fa6c637666d327cc
+size 537794
diff --git a/atwostagelearningtodeferapproachformultitasklearning/full.md b/atwostagelearningtodeferapproachformultitasklearning/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..e9cd7dbace6bfe414e2aff37f5e324dfbb086eb8
--- /dev/null
+++ b/atwostagelearningtodeferapproachformultitasklearning/full.md
@@ -0,0 +1,976 @@
+# A Two-Stage Learning-to-Defer Approach for Multi-Task Learning
+
+Yannis Montreuil $^{123*}$ Shu Heng Yeo $^{1*}$ Axel Carlier $^{45}$ Lai Xing Ng $^{25}$ Wei Tsang Ooi $^{15}$
+
+# Abstract
+
+The Two-Stage Learning-to-Defer (L2D) framework has been extensively studied for classification and, more recently, regression tasks. However, many real-world applications require solving both tasks jointly in a multi-task setting. We introduce a novel Two-Stage L2D framework for multi-task learning that integrates classification and regression through a unified deferral mechanism. Our method leverages a two-stage surrogate loss family, which we prove to be both Bayes-consistent and $(\mathcal{G},\mathcal{R})$ -consistent, ensuring convergence to the Bayes-optimal rejector. We derive explicit consistency bounds tied to the cross-entropy surrogate and the $L_{1}$ -norm of agent-specific costs, and extend minimizability gap analysis to the multi-expert two-stage regime. We also make explicit how shared representation learning—commonly used in multi-task models—affects these consistency guarantees. Experiments on object detection and electronic health record analysis demonstrate the effectiveness of our approach and highlight the limitations of existing L2D methods in multi-task scenarios.
+
+# 1. Introduction
+
+Learning-to-Defer (L2D) integrates predictive models with human experts—or, more broadly, decision-makers—to optimize systems requiring high reliability (Madras et al., 2018). This approach benefits from the scalability of machine learning models and leverages expert knowledge to address complex queries (Hemmer et al., 2021). The Learning-to-Defer approach defers decisions to experts when the
+
+*Equal contribution 1School of Computing, National University of Singapore, Singapore 2Institute for Infocomm Research, Agency for Science, Technology and Research, Singapore 3CNRS@CREATE LTD, 1 Create Way, Singapore 4IRIT, Université de Toulouse, CNRS, Toulouse INP, Toulouse, France 5IPAL, IRL2955, Singapore. Correspondence to: Yannis Montreuil .
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+learning-based model has lower confidence than the most confident expert. This deference mechanism enhances safety, which is particularly crucial in high-stakes scenarios (Mozannar & Sontag, 2020; Mozannar et al., 2023; Mao, 2025). For example, in medical diagnostics, the system utilizes patient-acquired data to deliver an initial diagnosis (Johnson et al., 2023; 2016). If the model is sufficiently confident, its diagnosis is accepted; otherwise, the decision is deferred to a medical expert who provides the final diagnosis. Such tasks, which can directly impact human lives, underscore the need to develop reliable systems (Balagurunathan et al., 2021).
+
+Learning-to-Defer has been extensively studied in classification problems (Madras et al., 2018; Verma et al., 2023; Mozannar & Sontag, 2020; Mozannar et al., 2023; Mao et al., 2023a) and, more recently, in regression scenarios (Mao et al., 2024h). However, many modern complex tasks involve both regression and classification components, requiring deferral to be applied to both components simultaneously, as they cannot be treated independently. For instance, in object detection, a model predicts both the class of an object and its location using a regressor, with these outputs being inherently interdependent (Girshick, 2015; Redmon et al., 2016; Buch et al., 2017). In practice, deferring only localization or classification is not meaningful, as decision-makers will treat these two tasks simultaneously. A failure in either component—such as misclassifying the object or inaccurately estimating its position—can undermine the entire problem, emphasizing the importance of coordinated deferral strategies that address both components jointly.
+
+This potential for failure underscores the need for a Learning-to-Defer approach tailored to multi-task problems involving both classification and regression. We propose a novel framework for multi-task environments, incorporating expertise from multiple experts and the predictor-regressor model. We focus our work on the two-stage scenario, where the model is already trained offline. This setting is relevant when retraining from scratch the predictor-regressor model is either too costly or not feasible due to diverse constraints such as non-open models (Mao et al., 2023a; 2024h). We approximate the true deferral loss using a surrogate deferral loss family, based on cross-entropy, and tailored for the two-stage setting, ensuring that the loss effectively approximates the original discontinuous loss function. Our theoretical
+
+analysis establishes that our surrogate loss is both $(\mathcal{G},\mathcal{R})$ consistent and Bayes-consistent. Furthermore, we study and generalize results on the minimizability gap for deferral loss based on cross-entropy, providing deeper insights into its optimization properties. Our contributions are as follows:
+
+(i) Novelty: We introduce two-stage Learning-to-Defer for multi-task learning with multiple experts. Unlike previous L2D methods that focus solely on classification or regression, our approach addresses situations where a sole optimal agent has to be selected to jointly handle both tasks in a unified framework.
+
+(ii) Theoretical Foundation: We prove that our surrogate family is both Bayes-consistent and $(\mathcal{G},\mathcal{R})$ -consistent for any cross-entropy-based surrogate. We derive tight consistency bounds that depend on the choice of the surrogate and the $L_{1}$ -norm of the cost, extending minimizability gap analysis to the two-stage, multi-expert setting. Additionally, we establish learning bounds for the true deferral loss, showing that generalization improves as agents become more accurate.
+
+(iii) Empirical Validation: We evaluate our approach on two challenging tasks. In object detection, our method effectively captures the intrinsic interdependence between classification and regression, overcoming the limitations of existing L2D approaches. In EHR analysis, we show that current L2D methods struggle when agents have varying expertise across classification and regression—whereas our method achieves superior performance.
+
+# 2. Related Work
+
+Learning-to-Defer builds upon the foundational ideas of Learning with Abstention (Chow, 2003; Bartlett & Wegkamp, 2008; Cortes et al., 2016; Geifman & El-Yaniv, 2017; Ramaswamy et al., 2018; Cao et al., 2022; Mao et al., 2024a), where a model is permitted to abstain from making a prediction when its confidence is low. The core insight of L2D is to extend this framework from rejection to deferral—delegating uncertain decisions to external agents or experts whose confidence may exceed that of the model.
+
+One-stage Learning-to-Defer. L2D was originally introduced by Madras et al. (2018) for binary classification, using a pass function inspired by the predictor-rejector framework of Cortes et al. (2016). In the multiclass setting, Mozannar & Sontag (2020) proposed a score-based formulation that leverages a log-softmax surrogate to ensure Bayesconsistency. This formulation has since been extended to a wide range of classification tasks (Okati et al., 2021; Verma et al., 2023; Cao et al., 2024; 2022; Keswani et al., 2021; Kerrigan et al., 2021; Hemmer et al., 2022; Benz & Rodriguez, 2022; Tailor et al., 2024; Liu et al., 2024; Palomba et al.,
+
+2024; Wei et al., 2024). A pivotal contribution by Mozannar et al. (2023) challenged the sufficiency of Bayes-consistency, showing that existing score-based methods may be suboptimal under realizable distributions—particularly when the hypothesis class is restricted. They introduced the notion of hypothesis-consistency, which strengthens theoretical alignment between the surrogate loss and the constrained hypothesis space. This work sparked a broader effort to refine the theoretical foundations of L2D using tools from surrogate risk analysis (Long & Servedio, 2013; Zhang & Agarwal, 2020; Awasthi et al., 2022; Mao et al., 2023b). Recent theoretical advances have solidified the status of score-based L2D. Mao et al. (2024f) established that the general score-based L2D framework achieves $\mathcal{H}$ -consistency, while Mao et al. (2024g; 2025b) introduced a novel surrogate loss that guarantees realizable-consistency—i.e., optimality under realizable distributions. Montreuil et al. (2025c) generalize L2D to deferral to the set of top- $k$ experts. Beyond classification, the L2D framework has also been extended to regression (Mao et al., 2024h), demonstrating its applicability in continuous-output settings with expert deferral.
+
+Two-stage Learning-to-Defer. The emergence of large-scale pretrained models has motivated the development of two-stage L2D frameworks, where both the model and the expert agents are trained offline. This reflects practical constraints: most users lack the computational resources to fine-tune large models end-to-end. Narasimhan et al. (2022) were the first to formalize this setting, and Mao et al. (2023a) introduced a dedicated predictor-rejector architecture tailored for two-stage L2D, with theoretical guarantees including both Bayes- and hypothesis-consistency. Charusaie et al. (2022) offered a comparative analysis of one-stage (joint training) and two-stage (post hoc) L2D, highlighting trade-offs between model flexibility and sample efficiency. More recently, two-stage L2D has been successfully extended to regression (Mao et al., 2024h) and top- $k$ expert deferral (Montreuil et al., 2025b), and has been applied to real-world tasks such as extractive question answering (Montreuil et al., 2024) and adversarial robustness (Montreuil et al., 2025a).
+
+Despite significant progress, current two-stage L2D research largely addresses classification and regression independently. However, many contemporary tasks involve both regression and classification components, necessitating their joint optimization. In this work, we extend two-stage L2D to joint classifier-regressor models, addressing this critical gap.
+
+# 3. Preliminaries
+
+Multi-task scenario. We consider a multi-task setting encompassing both classification and regression problems. Let $\mathcal{X}$ denote the input space, $\mathcal{Y} = \{1,\dots ,n\}$ represent the
+
+set of $n$ distinct classes, and $\mathcal{T} \subseteq \mathbb{R}$ denote the space of real-valued targets for regression. For compactness, each data point is represented as a triplet $z = (x,y,t) \in \mathcal{Z}$ , where $\mathcal{Z} = \mathcal{X} \times \mathcal{Y} \times \mathcal{T}$ . We assume the data is sampled independently and identically distributed (i.i.d.) from a distribution $\mathcal{D}$ over $\mathcal{Z}$ (Girshick, 2015; Redmon et al., 2016; Carion et al., 2020).
+
+We define a backbone $w \in \mathcal{W}$ , or shared feature extractor, such that $w: \mathcal{X} \to \mathcal{Q}$ . For example, $w$ can be a deep network that takes an input $x \in \mathcal{X}$ and produces a latent feature vector $q = w(x) \in \mathcal{Q}$ . Next, we define a classifier $h \in \mathcal{H}$ , representing all possible classification heads operating on $\mathcal{Q}$ . Formally, $h$ is a score function defined as $h: \mathcal{Q} \times \mathcal{Y} \to \mathbb{R}$ where the predicted class is $h(q) = \arg \max_{y \in \mathcal{Y}} h(q, y)$ . Likewise, we define a regressor $f \in \mathcal{F}$ , representing all regression heads, where $f: \mathcal{Q} \to \mathcal{T}$ . These components are combined into a single multi-head network $g \in \mathcal{G}$ , where $\mathcal{G} = \{g : g(x) = (h \circ w(x), f \circ w(x)) | w \in \mathcal{W}, h \in \mathcal{H}, f \in \mathcal{F}\}$ . Hence, $g$ jointly produces classification and regression outputs, $h(q)$ and $f(q)$ , from the same latent representation $q = w(x)$ .
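The multi-head decomposition $g(x) = (h \circ w(x), f \circ w(x))$ can be sketched with plain functions standing in for the backbone and the two heads; all three maps below are illustrative stand-ins, not the paper's trained networks:

```python
def make_multihead(w, h, f):
    """Combine a shared backbone w with a classification head h and a
    regression head f into a single model g(x) = (h(w(x)), f(w(x)))."""
    def g(x):
        q = w(x)                           # shared latent representation
        scores = h(q)                      # dict: one score per class
        pred_class = max(scores, key=scores.get)
        pred_value = f(q)                  # real-valued regression output
        return pred_class, pred_value
    return g
```

Both heads consume the same latent vector $q = w(x)$, which is exactly the shared-representation coupling whose effect on consistency guarantees the paper analyzes.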
+
+Consistency in classification. In the classification setting, the goal is to identify a classifier $h \in \mathcal{H}$ in the specific case where $w(x) = x$ , such that $h(x) = \arg \max_{y \in \mathcal{Y}} h(x, y)$ . This classifier should minimize the true error $\mathcal{E}_{\ell_{01}}(h)$ , defined as $\mathcal{E}_{\ell_{01}}(h) = \mathbb{E}_{(x, y)}[\ell_{01}(h(x), y)]$ . The Bayes-optimal error is given by $\mathcal{E}_{\ell_{01}}^B(\mathcal{H}) = \inf_{h \in \mathcal{H}} \mathcal{E}_{\ell_{01}}(h)$ . However, directly minimizing $\mathcal{E}_{\ell_{01}}(h)$ is challenging due to the non-differentiability of the true multiclass 0-1 loss (Zhang, 2002; Steinwart, 2007; Awasthi et al., 2022; Cortes et al., 2025; Mao et al., 2025c). This motivates the introduction of the cross-entropy multiclass surrogate family, denoted by $\Phi_{01}^{\nu}: \mathcal{H} \times \mathcal{X} \times \mathcal{Y} \to \mathbb{R}^{+}$ , which provides a convex upper bound to the true multiclass loss $\ell_{01}$ . This family is parameterized by $\nu \geq 0$ and encompasses standard surrogate functions widely adopted in the community such as the MAE (Ghosh et al., 2017) or the log-softmax (Mohri et al., 2012).
+
+$$
+\Phi_{01}^{\nu} = \begin{cases} \frac{1}{1-\nu}\left(\left[\sum_{y'\in\mathcal{Y}} e^{h(x,y') - h(x,y)}\right]^{1-\nu} - 1\right) & \nu \neq 1 \\ \log\left(\sum_{y'\in\mathcal{Y}} e^{h(x,y') - h(x,y)}\right) & \nu = 1. \end{cases} \tag{1}
+$$
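Equation (1) can be evaluated directly; a minimal numerical sketch (scores given as a dict of per-class values $h(x,y')$; the function name is ours), which also shows that the $\nu \neq 1$ branch approaches the log-softmax branch as $\nu \to 1$:

```python
import math

def phi01(scores, y, nu):
    """Cross-entropy surrogate family of Eq. (1).

    scores: dict mapping each class y' to the score h(x, y').
    y: the true class.  nu >= 0 selects the family member.
    """
    s = sum(math.exp(scores[yp] - scores[y]) for yp in scores)
    if nu == 1:
        return math.log(s)                 # log-softmax branch
    return (s ** (1 - nu) - 1) / (1 - nu)  # generalized branch
```

At $\nu = 0$ the loss reduces to $\sum_{y'} e^{h(x,y')-h(x,y)} - 1$, and values of $\nu$ slightly below 1 give losses numerically close to the log-softmax case.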
+
+The corresponding surrogate error is defined as $\mathcal{E}_{\Phi_{01}^{\nu}}(h) = \mathbb{E}_{(x,y)}[\Phi_{01}^{\nu}(h(x),y)]$ , with its optimal value given by $\mathcal{E}_{\Phi_{01}^{\nu}}^{*}(\mathcal{H}) = \inf_{h\in \mathcal{H}}\mathcal{E}_{\Phi_{01}^{\nu}}(h)$ . A crucial property of a surrogate loss is Bayes-consistency, which guarantees that minimizing the surrogate generalization error also minimizes the true generalization error (Zhang, 2002; Steinwart, 2007; Bartlett et al., 2006; Tewari & Bartlett, 2007; Mao et al., 2025a; Zhong, 2025). Formally, $\Phi_{01}^{\nu}$ is Bayes-consistent with respect to $\ell_{01}$ if, for any sequence $\{h_k\}_{k\in \mathbb{N}}\subset \mathcal{H}$ , the
+
+following implication holds:
+
+$$
+\mathcal{E}_{\Phi_{01}^{\nu}}(h_k) - \mathcal{E}_{\Phi_{01}^{\nu}}^{*}(\mathcal{H}) \xrightarrow{k\to\infty} 0 \;\Longrightarrow\; \mathcal{E}_{\ell_{01}}(h_k) - \mathcal{E}_{\ell_{01}}^{B}(\mathcal{H}) \xrightarrow{k\to\infty} 0. \tag{2}
+$$
+
+This property assumes that $\mathcal{H} = \mathcal{H}_{\mathrm{all}}$ , a condition that does not necessarily hold for restricted hypothesis classes such as $\mathcal{H}_{\mathrm{lin}}$ or $\mathcal{H}_{\mathrm{ReLU}}$ (Long & Servedio, 2013; Awasthi et al., 2022). To address this limitation, Awasthi et al. (2022) proposed $\mathcal{H}$ -consistency bounds. These bounds depend on a non-decreasing function $\Gamma: \mathbb{R}^{+} \to \mathbb{R}^{+}$ and are expressed as:
+
+$$
+\mathcal{E}_{\Phi_{01}^{\nu}}(h) - \mathcal{E}_{\Phi_{01}^{\nu}}^{*}(\mathcal{H}) + \mathcal{U}_{\Phi_{01}^{\nu}}(\mathcal{H}) \geq \Gamma\left(\mathcal{E}_{\ell_{01}}(h) - \mathcal{E}_{\ell_{01}}^{B}(\mathcal{H}) + \mathcal{U}_{\ell_{01}}(\mathcal{H})\right), \tag{3}
+$$
+
+where the minimizability gap $\mathcal{U}_{\ell_{01}}(\mathcal{H})$ measures the disparity between the best-in-class generalization error and the expected pointwise minimum error: $\mathcal{U}_{\ell_{01}}(\mathcal{H}) = \mathcal{E}_{\ell_{01}}^{B}(\mathcal{H}) - \mathbb{E}_x\left[\inf_{h\in \mathcal{H}}\mathbb{E}_{y|x}[\ell_{01}(h(x),y)]\right]$ . Notably, the minimizability gap vanishes when $\mathcal{H} = \mathcal{H}_{\mathrm{all}}$ (Steinwart, 2007; Awasthi et al., 2022; Cortes et al., 2024; Mao et al., 2024e;d;j;b). In the asymptotic limit, inequality (3) guarantees the recovery of Bayes-consistency, aligning with the condition in (2).
+
+# 4. Two-stage Multi-Task L2D: Theoretical Analysis
+
+# 4.1. Formulating the True Deferral Loss
+
+We extend the two-stage predictor-rejector framework, originally proposed by (Narasimhan et al., 2022; Mao et al., 2023a), to the multi-task setting described in Section 3. Specifically, we consider an offline-trained model $g \in \mathcal{G}$ which jointly performs classification and regression. In addition, we assume access to $J$ offline-trained experts, denoted $\mathbf{M}_j$ for $j \in \{1, \dots, J\}$ . Each expert outputs predictions of the form $m_j(x) = (m_j^h(x), m_j^f(x))$ , where $m_j^h(x) \in \mathcal{Y}$ and $m_j^f(x) \in \mathcal{T}$ correspond to the classification and regression components, respectively. Each expert prediction lies in a corresponding space $\mathcal{M}_j$ , so that $m_j(x) \in \mathcal{M}_j$ . We denote the aggregated outputs of all experts as $m(x) = (m_1(x), \ldots, m_J(x)) \in \mathcal{M} := \prod_{j=1}^{J} \mathcal{M}_j$ . We write $[J] := \{1, \dots, J\}$ to denote the index set of experts, and define the set of all agents as $\mathcal{A} := \{0\} \cup [J]$ , where agent 0 corresponds to the model $g$ . Thus, the system contains $|\mathcal{A}| = J + 1$ agents in total.
+
+To allocate each decision, we introduce a rejector function $r \in \mathcal{R}$ , where $r: \mathcal{X} \times \mathcal{A} \to \mathbb{R}$ . Given an input $x \in \mathcal{X}$ , the rejector selects the agent $j \in \mathcal{A}$ that maximizes its score: $r(x) = \arg \max_{j \in \mathcal{A}} r(x, j)$ . This mechanism induces the deferral loss, a mapping $\ell_{\mathrm{def}}: \mathcal{R} \times \mathcal{G} \times \mathcal{Z} \times \mathcal{M} \to \mathbb{R}_+$ , which quantifies the cost of allocating a decision to a particular agent.
+
+Definition 4.1 (True deferral loss). Let an input $x \in \mathcal{X}$ , for any $r \in \mathcal{R}$ , we have the true deferral loss:
+
+$$
+\ell_{\mathrm{def}}(r, g, m, z) = \sum_{j=0}^{J} c_{j}(g(x), m_{j}(x), z)\, 1_{r(x) = j},
+$$
+
+with a bounded cost $c_{j}$ that quantifies the penalty incurred when allocating the decision to agent $j \in \mathcal{A}$ . When the rejector $r \in \mathcal{R}$ predicts $r(x) = 0$ , the decision is assigned to the multi-task model $g$ , incurring a base cost $c_{0}$ defined as $c_{0}(g(x), z) = \rho(g(x), z)$ , where $\rho(\cdot, \cdot) \in \mathbb{R}_{+}$ measures the discrepancy between the model's output $g(x)$ and the ground truth $z$ . Conversely, if the rejector selects $r(x) = j$ for some $j > 0$ , the decision is deferred to expert $j$ , yielding a deferral cost $c_{j}(m_{j}(x), z) = \rho(m_{j}(x), z) + \beta_{j}$ . Here, $\beta_{j} \geq 0$ denotes the querying cost associated with invoking expert $j$ , which may reflect domain-specific constraints such as computational overhead, annotation effort, or time expenditure.
+
+When the classification and regression objectives are separable, the total cost can be decomposed as $c_{j} = \lambda^{\mathrm{cla}}c^{\mathrm{cla}} + \lambda^{\mathrm{reg}}c^{\mathrm{reg}}$ , where $\lambda^{\mathrm{cla}},\lambda^{\mathrm{reg}}\geq 0$ specify the relative importance of each task. A neutral setting is recovered when $\lambda^{\mathrm{cla}} = \lambda^{\mathrm{reg}} = 1$ , ensuring a task-agnostic trade-off. If classification performance is prioritized, one can select $\lambda^{\mathrm{cla}} > \lambda^{\mathrm{reg}}$ to favor agents with stronger classification expertise.
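The separable cost above can be written out directly; a minimal sketch with illustrative numbers (the helper names are ours, not the paper's; per Definition 4.1, the querying cost $\beta_j$ applies only to experts $j > 0$):

```python
def agent_cost(c_cla, c_reg, lam_cla=1.0, lam_reg=1.0, beta=0.0):
    """Separable multi-task cost: lam_cla*c_cla + lam_reg*c_reg, plus the
    querying cost beta (zero for the model, beta_j >= 0 for expert j)."""
    return lam_cla * c_cla + lam_reg * c_reg + beta

def deferral_loss(costs, chosen):
    """True deferral loss of Definition 4.1: the cost of the one agent
    selected by the rejector (the indicator picks out a single term)."""
    return costs[chosen]
```

Setting `lam_cla > lam_reg` biases the allocation toward agents with stronger classification expertise, mirroring the task-weighting discussion above.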
+
+Optimal deferral rule. In Definition 4.1, we introduced the true deferral loss $\ell_{\mathrm{def}}$ , which quantifies the expected cost incurred when allocating predictions across the model and experts. Our goal is to minimize this loss by identifying the Bayes-optimal rejector $r \in \mathcal{R}$ that minimizes the true risk. To formalize this objective, we analyze the pointwise Bayes rejector $r^B(x)$ , which minimizes the conditional risk $\mathcal{C}_{\ell_{\mathrm{def}}}$ . The corresponding population risk is given by $\mathcal{E}_{\ell_{\mathrm{def}}}(\boldsymbol{g}, r) = \mathbb{E}_x[\mathcal{C}_{\ell_{\mathrm{def}}}(\boldsymbol{g}, r, x)]$ . The following lemma characterizes the optimal decision rule at each input $x \in \mathcal{X}$ .
+
+Lemma 4.2 (Pointwise Bayes Rejector). Given an input $x \in \mathcal{X}$ and data distribution $\mathcal{D}$ , the rejection rule that minimizes the conditional risk $C_{\ell_{\text{def}}}$ associated with the true deferral loss $\ell_{\text{def}}$ is:
+
+$$
+r^{B}(x) = \begin{cases} 0 & \text{if } \inf_{g\in \mathcal{G}}\mathbb{E}_{y,t|x}[c_{0}] \leq \min_{j\in [J]}\mathbb{E}_{y,t|x}[c_{j}], \\ \arg\min_{j\in [J]}\mathbb{E}_{y,t|x}[c_{j}] & \text{otherwise.} \end{cases}
+$$
+
+The proof is provided in Appendix B. Lemma 4.2 shows that the optimal rejector $r \in \mathcal{R}$ assigns the decision to the model $g \in \mathcal{G}$ whenever its expected cost is lower than that of any expert. Otherwise, the rejector defers to the expert with the minimal expected deferral cost.
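
For intuition, the rule of Lemma 4.2 can be applied directly once the conditional expected costs $\mathbb{E}_{y,t|x}[c_j]$ have been estimated at a given $x$; a minimal sketch (assuming index 0 holds the model's expected cost):

```python
def bayes_rejector(expected_costs):
    """Pointwise Bayes rejector of Lemma 4.2.

    expected_costs[0] is the model's conditional expected cost E[c_0 | x];
    expected_costs[j] for j >= 1 is expert j's expected deferral cost,
    already including the querying cost beta_j.
    """
    best_expert = min(range(1, len(expected_costs)),
                      key=lambda j: expected_costs[j])
    # Keep the model whenever it is no more expensive than the best expert.
    return 0 if expected_costs[0] <= expected_costs[best_expert] else best_expert

print(bayes_rejector([0.30, 0.25, 0.40]))  # 1 -> defer to Expert 1
print(bayes_rejector([0.20, 0.25, 0.40]))  # 0 -> keep the model
```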
+
+Although Lemma 4.2 characterizes the Bayes-optimal policy under the true deferral loss $\ell_{\mathrm{def}}$ , this loss is non-differentiable and thus intractable for direct optimization in practice (Zhang, 2002).
+
+# 4.2. Surrogate Loss for Two-Stage Multi-Task L2D
+
+Introducing the surrogate. To address the optimization challenges posed by discontinuous losses (Berkson, 1944; Cortes & Vapnik, 1995), we introduce a family of convex surrogate losses with favorable analytical properties. Specifically, we adopt the multiclass cross-entropy surrogates $\Phi_{01}^{\nu}:\mathcal{R}\times \mathcal{X}\times \mathcal{A}\to \mathbb{R}_{+}$ , which upper-bound the true multiclass 0-1 loss $\ell_{01}$ and facilitate gradient-based optimization. This surrogate family is defined in Equation 1.
+
+Building on the framework of Mao et al. (2024h), who proposed convex surrogates for deferral settings, we extend their approach to account for the interdependence between classification and regression tasks. In our setting, this yields a family of surrogate losses $\Phi_{\mathrm{def}}^{\nu}:\mathcal{R}\times \mathcal{G}\times \mathcal{M}\times \mathcal{Z}\to \mathbb{R}_{+}$ which incorporate the full structure of the multi-task cost.
+
+Lemma 4.3 (Surrogate Deferral Losses). Let $x \in \mathcal{X}$ and let $\Phi_{01}^{\nu}$ be a multiclass surrogate loss. Then the surrogate deferral loss $\Phi_{\text{def}}^{\nu}$ for $J + 1$ agents is given by
+
+$$
+\Phi_{\mathrm{def}}^{\nu}(r, g, m, z) = \sum_{j = 0}^{J} \tau_{j}(g(x), m(x), z)\, \Phi_{01}^{\nu}(r, x, j),
+$$
+
+where the aggregated cost weights are defined as $\tau_{j}(g(x),m(x),z) = \sum_{i = 0}^{J}c_{i}(g(x),m_{i}(x),z)1_{i\neq j}$ .
+
+The surrogate deferral loss $\Phi_{\mathrm{def}}^{\nu}$ combines the individual surrogate losses $\Phi_{01}^{\nu}(r,x,j)$ for each agent $j\in \mathcal{A}$ , weighted by the corresponding aggregated cost $\tau_{j}$ . Intuitively, $\tau_0$ quantifies the total cost of deferring to any expert instead of using the model, while $\tau_{j}$ for $j > 0$ reflects the total cost incurred by selecting expert $j$ instead of any other agent, including the model and other experts.
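
To make the construction concrete, note that $\tau_j = \sum_{i \neq j} c_i$ is simply the total cost minus $c_j$, so cheap agents receive large weights. A minimal sketch for the $\nu = 1$ (log-softmax) member of the family, with the per-example agent costs treated as given:

```python
import math

def log_softmax_surrogate(scores, j):
    """Phi_01 for nu = 1: the negative log-softmax of agent j's score."""
    log_z = math.log(sum(math.exp(s) for s in scores))
    return log_z - scores[j]

def surrogate_deferral_loss(rejector_scores, costs):
    """Phi_def(r, g, m, z) = sum_j tau_j * Phi_01(r, x, j)  (Lemma 4.3),
    with tau_j = sum_{i != j} c_i = total cost - c_j."""
    total = sum(costs)
    taus = [total - c for c in costs]
    return sum(tau * log_softmax_surrogate(rejector_scores, j)
               for j, tau in enumerate(taus))

# Agent 2 is cheapest, so tau_2 is largest: minimizing the loss pushes
# the rejector's probability mass toward agent 2.
loss = surrogate_deferral_loss([0.0, 0.0, 0.0], costs=[1.0, 0.8, 0.1])
```

This is a per-example sketch; in training, the loss would be averaged over a batch and differentiated with respect to the rejector scores.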
+
+This construction preserves task generality and only requires that the base surrogate $\Phi_{01}^{\nu}$ admit an $\mathcal{R}$ -consistency bound. The modular formulation of the cost functions $c_{j}$ allows this surrogate to flexibly accommodate diverse multi-task settings.
+
+Consistency of the surrogate losses. In Lemma 4.3, we established that the proposed surrogate losses form a convex upper bound on the true deferral loss $\ell_{\mathrm{def}}$ . However, it remains to determine whether this surrogate family provides a reliable approximation of the true loss in terms of optimal decision-making. In particular, it is not immediate that the pointwise minimizer of the surrogate loss, $r^* (x)$ , aligns with the Bayes-optimal rejector $r^B (x)$ that minimizes $\ell_{\mathrm{def}}$ . To address this, we study the relationship between the surrogate and true risks by analyzing their respective excess risks. Specifically, we compare the surrogate excess risk, $\mathcal{E}_{\Phi_{\mathrm{def}}^{\nu}}(g,r) - \mathcal{E}_{\Phi_{\mathrm{def}}^{\nu}}^{*}(\mathcal{G},\mathcal{R})$ , to the true excess risk, $\mathcal{E}_{\ell_{\mathrm{def}}}(g,r) - \mathcal{E}_{\ell_{\mathrm{def}}}^{B}(\mathcal{G},\mathcal{R})$ . Understanding this discrepancy is crucial for establishing the $(\mathcal{G},\mathcal{R})$ -consistency of the surrogate loss family, a topic extensively studied in prior work on multiclass surrogate theory (Steinwart, 2007; Zhang, 2002; Bartlett et al., 2006; Awasthi et al., 2022).
+
+Leveraging consistency bounds developed in (Awasthi et al., 2022; Mao et al., 2024c), we present Theorem 4.4, which proves that the surrogate deferral loss family $\Phi_{\mathrm{def}}^{\nu}$ is indeed $(\mathcal{G},\mathcal{R})$ -consistent.
+
+Theorem 4.4 ($(\mathcal{G}, \mathcal{R})$ -consistency bounds). Let $g \in \mathcal{G}$ be a multi-task model. Suppose there exists a non-decreasing function $\Gamma^{\nu} : \mathbb{R}_{+} \to \mathbb{R}_{+}$ , parameterized by $\nu \geq 0$ , such that the $\mathcal{R}$ -consistency bound holds for any distribution $\mathcal{D}$ :
+
+$$
+\mathcal{E}_{\Phi_{01}^{\nu}}(r) - \mathcal{E}_{\Phi_{01}^{\nu}}^{*}(\mathcal{R}) + \mathcal{U}_{\Phi_{01}^{\nu}}(\mathcal{R}) \geq \Gamma^{\nu}\left(\mathcal{E}_{\ell_{01}}(r) - \mathcal{E}_{\ell_{01}}^{B}(\mathcal{R}) + \mathcal{U}_{\ell_{01}}(\mathcal{R})\right),
+$$
+
+then for any $(g,r)\in \mathcal{G}\times \mathcal{R}$ , any distribution $\mathcal{D}$ , and any $x\in \mathcal{X}$ :
+
+$$
+\begin{aligned}
+\mathcal{E}_{\ell_{\mathrm{def}}}(g, r) - \mathcal{E}_{\ell_{\mathrm{def}}}^{B}(\mathcal{G}, \mathcal{R}) + \mathcal{U}_{\ell_{\mathrm{def}}}(\mathcal{G}, \mathcal{R}) \leq{} & \overline{\Gamma}^{\nu}\left(\mathcal{E}_{\Phi_{\mathrm{def}}^{\nu}}(r) - \mathcal{E}_{\Phi_{\mathrm{def}}^{\nu}}^{*}(\mathcal{R}) + \mathcal{U}_{\Phi_{\mathrm{def}}^{\nu}}(\mathcal{R})\right) \\
+& + \mathcal{E}_{c_{0}}(g) - \mathcal{E}_{c_{0}}^{B}(\mathcal{G}) + \mathcal{U}_{c_{0}}(\mathcal{G}),
+\end{aligned}
+$$
+
+where the expected aggregated cost vector is given by $\overline{\pmb{\tau}} = \left(\mathbb{E}_{y,t|x}[\tau_0],\dots,\mathbb{E}_{y,t|x}[\tau_J]\right)$ , and $\overline{\Gamma}^{\nu}(u) = \| \overline{\pmb{\tau}} \|_{1}\Gamma^{\nu}\left(\frac{u}{\|\overline{\pmb{\tau}}\|_{1}}\right)$ with $\Gamma^{\nu}(u) = \mathcal{T}^{-1,\nu}(u)$ . In the case of the log-softmax surrogate $(\nu = 1)$ , the transformation is given by $\mathcal{T}^{\nu = 1}(u) = \frac{1 + u}{2}\log (1 + u) + \frac{1 - u}{2}\log (1 - u)$ .
+
+The proof of Theorem 4.4, along with generalizations to any $\nu \geq 0$ , is provided in Appendix C. This result yields refined consistency guarantees for the surrogate deferral loss, improving upon the bounds established by Mao et al. (2024h). The bounds are explicitly tailored to the cross-entropy surrogate family and parameterized by $\nu$ , allowing for precise control over the surrogate's approximation behavior. Crucially, the tightness of the bound depends on the aggregated deferral costs, and is scaled by the $L_{1}$ -norm $\|\overline{\tau}\|_{1}$ , which quantifies the cumulative cost discrepancy across agents.
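
For intuition on the $\nu = 1$ case, the transformation $\mathcal{T}$ and its inverse $\Gamma$ can be evaluated numerically; the sketch below inverts $\mathcal{T}$ by bisection (a hypothetical helper, exploiting only that $\mathcal{T}$ is increasing on $[0, 1)$):

```python
import math

def T(u):
    """T^{nu=1}(u) = (1+u)/2 log(1+u) + (1-u)/2 log(1-u), for u in [0, 1)."""
    if u == 0.0:
        return 0.0
    return 0.5 * (1 + u) * math.log(1 + u) + 0.5 * (1 - u) * math.log(1 - u)

def gamma(v, tol=1e-12):
    """Gamma^{nu=1}(v) = T^{-1}(v), computed by bisection."""
    lo, hi = 0.0, 1.0 - 1e-12
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if T(mid) < v:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Near zero T(u) ~ u^2 / 2, so Gamma(v) ~ sqrt(2v): a small surrogate
# excess risk forces a small true excess risk.
print(round(gamma(T(0.3)), 6))  # 0.3
```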
+
+Moreover, we show that the surrogate deferral losses are $(\mathcal{G},\mathcal{R})$ -consistent whenever the underlying multiclass surrogate family $\Phi_{01}^{\nu}$ is $\mathcal{R}$ -consistent. Under the assumption that $\mathcal{R} = \mathcal{R}_{\mathrm{all}}$ and $\mathcal{G} = \mathcal{G}_{\mathrm{all}}$ , the minimizability gaps vanish, as established by Steinwart (2007). As a result, minimizing the surrogate deferral excess risk while accounting for the minimizability gap yields $\mathcal{E}_{\Phi_{\mathrm{def}}^{\nu}}(r_k) - \mathcal{E}_{\Phi_{\mathrm{def}}^{\nu}}^{*}(\mathcal{R}_{\mathrm{all}}) + \mathcal{U}_{\Phi_{\mathrm{def}}^{\nu}}(\mathcal{R}_{\mathrm{all}})\xrightarrow{k\to\infty}0$ . Since the multi-task model $g$ is trained offline, it is reasonable to assume that the $c_{0}$ -excess risk also vanishes: $\mathcal{E}_{c_0}(g_k) - \mathcal{E}_{c_0}^B (\mathcal{G}_{\mathrm{all}}) + \mathcal{U}_{c_0}(\mathcal{G}_{\mathrm{all}})\xrightarrow{k\to\infty}0$ . Combining the two convergence results and invoking the properties of $\overline{\Gamma}^{\nu}$ , we conclude that
+
+$$
+\mathcal{E}_{\ell_{\mathrm{def}}}(g, r_{k}) - \mathcal{E}_{\ell_{\mathrm{def}}}^{B}(\mathcal{G}_{\mathrm{all}}, \mathcal{R}_{\mathrm{all}}) + \mathcal{U}_{\ell_{\mathrm{def}}}(\mathcal{G}_{\mathrm{all}}, \mathcal{R}_{\mathrm{all}}) \xrightarrow{k \to \infty} 0.
+$$
+
+Hence, the following corollary holds:
+
+Corollary 4.5 (Bayes-consistency of the deferral surrogate losses). Under the conditions of Theorem 4.4, and assuming $(\mathcal{G},\mathcal{R}) = (\mathcal{G}_{all},\mathcal{R}_{all})$ and $\mathcal{E}_{c_0}(g_k) - \mathcal{E}_{c_0}^B (\mathcal{G}_{all})\xrightarrow{k\to\infty}0$ , the surrogate deferral loss family $\Phi_{def}^{\nu}$ is Bayes-consistent with respect to the true deferral loss $\ell_{def}$ . Specifically, minimizing the surrogate deferral excess risk ensures convergence of the true deferral excess risk. Formally, for sequences $\{r_k\}_{k\in \mathbb{N}}\subset \mathcal{R}$ and $\{g_k\}_{k\in \mathbb{N}}\subset \mathcal{G}$ , we have:
+
+$$
+\begin{aligned}
+& \mathcal{E}_{\Phi_{\mathrm{def}}^{\nu}}(r_{k}) - \mathcal{E}_{\Phi_{\mathrm{def}}^{\nu}}^{*}(\mathcal{R}_{\mathrm{all}}) \xrightarrow{k \to \infty} 0 \\
+\Rightarrow\ & \mathcal{E}_{\ell_{\mathrm{def}}}(g_{k}, r_{k}) - \mathcal{E}_{\ell_{\mathrm{def}}}^{B}(\mathcal{G}_{\mathrm{all}}, \mathcal{R}_{\mathrm{all}}) \xrightarrow{k \to \infty} 0.
+\end{aligned}
+$$
+
+This result confirms that, as $k \to \infty$ , the surrogate losses $\Phi_{\mathrm{def}}^{\nu}$ attain asymptotic Bayes optimality for both the rejector $r$ and the offline-trained multi-task model $g$ . Thus, the surrogate family faithfully approximates the true deferral loss in the limit. Moreover, the pointwise surrogate-optimal rejector $r^{*}(x)$ converges to a close approximation of the Bayes-optimal rejector $r^{B}(x)$ , thereby inducing deferral decisions consistent with the characterization in Lemma 4.2 (Bartlett et al., 2006).
+
+Analysis of the minimizability gap. As shown by Awasthi et al. (2022), the minimizability gap does not vanish in general. Understanding the conditions under which it arises, quantifying its magnitude, and identifying effective mitigation strategies are crucial for ensuring that surrogate-based optimization aligns with the true task-specific objectives.
+
+We provide a novel and strong characterization of the minimizability gap in the two-stage setting with multiple experts, extending the results of Mao et al. (2024i), who analyzed the gap in the context of learning with abstention (constant cost) for a single expert and a specific distribution.
+
+Theorem 4.6 (Characterization of Minimizability Gaps). Assume $\mathcal{R}$ is symmetric and complete. Then, for the cross-entropy multiclass surrogates $\Phi_{01}^{\nu}$ and any distribution $\mathcal{D}$ , the following holds for $\nu \geq 0$ :
+
+$$
+\mathcal{C}_{\Phi_{\mathrm{def}}^{\nu}}^{*}(\mathcal{R}, x) = \begin{cases} \|\overline{\boldsymbol{\tau}}\|_{1}\, H\left(\frac{\overline{\boldsymbol{\tau}}}{\|\overline{\boldsymbol{\tau}}\|_{1}}\right) & \text{for } \nu = 1 \\ \|\overline{\boldsymbol{\tau}}\|_{1} - \|\overline{\boldsymbol{\tau}}\|_{\infty} & \text{for } \nu = 2 \\ \frac{1}{\nu - 1}\left[\|\overline{\boldsymbol{\tau}}\|_{1} - \|\overline{\boldsymbol{\tau}}\|_{\frac{1}{2 - \nu}}\right] & \text{for } \nu \in (1, 2) \\ \frac{1}{1 - \nu}\left[\left(\sum_{k = 0}^{J} \overline{\tau}_{k}^{\frac{1}{2 - \nu}}\right)^{2 - \nu} - \|\overline{\boldsymbol{\tau}}\|_{1}\right] & \text{for } \nu > 2, \end{cases}
+$$
+
+where $\overline{\boldsymbol{\tau}} = \left(\overline{\tau}_0,\dots ,\overline{\tau}_J\right)$ with $\overline{\tau}_j = \mathbb{E}_{y,t|x}[\tau_j]$ , the aggregated costs are $\tau_{j} = \sum_{k = 0}^{J}c_{k}1_{k\neq j}$ , and $H$ denotes the Shannon entropy. The minimizability gap is defined as $\mathcal{U}_{\Phi_{def}^{\nu}}(\mathcal{R}) = \mathcal{E}_{\Phi_{def}^{\nu}}^{*}(\mathcal{R}) - \mathbb{E}_{x}\left[\mathcal{C}_{\Phi_{def}^{\nu}}^{*}(\mathcal{R},x)\right]$ .
+
+We provide the proof in Appendix D. Theorem 4.6 characterizes the minimizability gap $\mathcal{U}_{\Phi_{\mathrm{def}}^{\nu}}(\mathcal{R})$ for cross-entropy multiclass surrogates over symmetric and complete hypothesis sets $\mathcal{R}$ . The gap depends on $\nu \geq 0$ , and its behavior varies across different surrogates. Specifically, for $\nu = 1$ , the gap is proportional to the Shannon entropy of the normalized expected cost vector $\frac{\overline{\tau}}{\|\overline{\tau}\|_1}$ , which increases with entropy, reflecting higher uncertainty in the misclassification distribution. For $\nu = 2$ , the gap simplifies to the difference between the $L_{1}$ -norm and $L_{\infty}$ -norm of $\overline{\tau}$ , where a smaller gap indicates concentrated misclassifications, thus reducing uncertainty. For $\nu \in (1,2)$ , the gap balances the entropy-based sensitivity at $\nu = 1$ and the margin-based sensitivity at $\nu = 2$ . As $\nu \to 1^{+}$ , the gap emphasizes agents with higher misclassification counts, while as $\nu \to 2^{-}$ , it shifts towards aggregate misclassification counts. For $\nu < 1$ , where $p = \frac{1}{2 - \nu} \in (0,1)$ , the gap becomes more sensitive to misclassification distribution, increasing when errors are dispersed. For $\nu > 2$ , with $p < 0$ , reciprocal weighting reduces sensitivity to dominant errors, potentially decreasing the gap but at the risk of underemphasizing critical misclassifications.
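
The two closed-form cases $\nu = 1$ and $\nu = 2$ are easy to evaluate from an expected cost vector $\overline{\tau}$; a small sketch (hypothetical helper, entropy in nats):

```python
import math

def optimal_conditional_risk(tau_bar, nu):
    """Best-in-class conditional risk from Theorem 4.6, for nu in {1, 2}.

    nu = 1: ||tau||_1 * H(tau / ||tau||_1)  (Shannon entropy, in nats)
    nu = 2: ||tau||_1 - ||tau||_inf
    """
    l1 = sum(tau_bar)
    if nu == 1:
        return -sum(t * math.log(t / l1) for t in tau_bar if t > 0)
    if nu == 2:
        return l1 - max(tau_bar)
    raise ValueError("sketch covers nu in {1, 2} only")

# Concentrated expected costs yield a smaller value than dispersed ones,
# mirroring the entropy/margin discussion above.
print(round(optimal_conditional_risk([0.9, 0.05, 0.05], nu=2), 6))  # 0.1
print(round(optimal_conditional_risk([1/3, 1/3, 1/3], nu=2), 6))    # 0.666667
```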
+
+In the special case of learning with abstention and a single expert $(J = 1)$ , assigning costs $\tau_0 = 1$ and $\tau_J = 1 - c$ recovers the minimizability gap introduced by Mao et al. (2024i). Thus, our formulation generalizes the minimizability gap to settings with multiple experts, non-constant costs, and arbitrary distributions $\mathcal{D}$ .
+
+# 4.3. Encoder-Aware Bounds
+
+In this section, we show that our approach is theoretically aligned with multi-task learning using shared representations. Let $\mathcal{W}$ denote a class of representation functions (encoders), $\mathcal{H}$ a class of classification heads, and $\mathcal{F}$ a class of regression heads. For any $(w,h,f)\in \mathcal{W}\times \mathcal{H}\times \mathcal{F}$ , the multi-task predictor is defined as $g_{w,h,f}(x) = (h\circ w(x),f\circ w(x))$ , where the shared representation $w(x)$ is passed to both task-specific heads. The true risk is defined as $\mathcal{E}_{c_0}(g) = \mathbb{E}_{z\sim \mathcal{D}}[c_0(g(x),(y,t))]$ , where $z = (x,y,t)\in \mathcal{X}\times \mathcal{Y}\times \mathcal{T}$ . The Bayes risk over a class $\mathcal{G}$ is given by $\mathcal{E}_{c_0}^B (\mathcal{G}) = \inf_{g\in \mathcal{G}}\mathcal{E}_{c_0}(g)$ .
+
+Proposition 4.7 (Head and representation gaps). Fix $w \in \mathcal{W}$ and let $\mathcal{G} := \{g_{w', h', f'} : w' \in \mathcal{W}, h' \in \mathcal{H}, f' \in \mathcal{F}\}$ .
+
+Define
+
+$$
+\mathcal {E} _ {\min } (w) := \inf _ {h ^ {\prime}, f ^ {\prime}} \mathcal {E} _ {c _ {0}} \left(g _ {w, h ^ {\prime}, f ^ {\prime}}\right),
+$$
+
+$$
+\Delta_ {\text {h e a d s}} (w, h, f) := \mathcal {E} _ {c _ {0}} \left(g _ {w, h, f}\right) - \mathcal {E} _ {\min } (w),
+$$
+
+$$
+\Delta_ {\operatorname {r e p r}} (w) := \mathcal {E} _ {\min } (w) - \mathcal {E} _ {c _ {0}} ^ {B} (\mathcal {G}).
+$$
+
+The quantity $\Delta_{\mathrm{heads}}$ measures the sub-optimality of the heads $(h,f)$ given the representation extracted by the fixed encoder $w$ , while $\Delta_{\mathrm{repr}}$ captures how far $w$ lies from a Bayes-optimal shared representation.
+
+Lemma 4.8 (Non-negativity of the gaps). For all $(w,h,f)\in \mathcal{W}\times \mathcal{H}\times \mathcal{F}$ , we have $\Delta_{\mathrm{heads}}(w,h,f)\geq 0$ . For every $w\in \mathcal{W}$ , we have $\Delta_{\mathrm{repr}}(w)\geq 0$ .
+
+Proof. Fix $w$ . By definition of the infimum, $\mathcal{E}_{\min}(w) = \inf_{h',f'} \mathcal{E}_{c_0}(g_{w,h',f'}) \leq \mathcal{E}_{c_0}(g_{w,h,f})$ for any heads $(h,f)$ , hence $\Delta_{\mathrm{heads}}(w,h,f) \geq 0$ . For the representation gap, note that $\mathcal{E}_{c_0}^B(\mathcal{G}) = \inf_{w',h',f'} \mathcal{E}_{c_0}(g_{w',h',f'}) \leq \mathcal{E}_{\min}(w)$ , so $\Delta_{\mathrm{repr}}(w) = \mathcal{E}_{\min}(w) - \mathcal{E}_{c_0}^B(\mathcal{G}) \geq 0$ . Both inequalities hold with equality when $(w,h,f)$ is Bayes-optimal.
+
+Proposition 4.9 (Excess-risk decomposition). For every $(w, h, f)$ ,
+
+$$
+\mathcal {E} _ {c _ {0}} \left(g _ {w, h, f}\right) - \mathcal {E} _ {c _ {0}} ^ {B} (\mathcal {G}) = \Delta_ {\text {h e a d s}} (w, h, f) + \Delta_ {\text {r e p r}} (w).
+$$
+
+Proof. Add and subtract $\mathcal{E}_{\min}(w)$ : $\mathcal{E}_{c_0}(g_{w,h,f}) - \mathcal{E}_{c_0}^B(\mathcal{G}) = \big[\mathcal{E}_{c_0}(g_{w,h,f}) - \mathcal{E}_{\min}(w)\big] + \big[\mathcal{E}_{\min}(w) - \mathcal{E}_{c_0}^B(\mathcal{G})\big] = \Delta_{\mathrm{heads}}(w,h,f) + \Delta_{\mathrm{repr}}(w)$ .
+
+Proposition 4.10 (Cost of Enforcing a Shared Encoder). Suppose two independent heads act directly on the raw input $x$ : $g_{sep,h,f}(x) = (h(x),f(x))$ . Let $\mathcal{E}_{sep,c_0}^B \coloneqq \inf_{h,f} \mathcal{E}_{c_0}(g_{sep,h,f})$ and define
+
+$$
+\Delta_ {\mathrm {M T L}} := \mathcal {E} _ {c _ {0}} ^ {B} (\mathcal {G}) - \mathcal {E} _ {s e p, c _ {0}} ^ {B}.
+$$
+
+Hence $\Delta_{\mathrm{MTL}} < 0$ indicates that forcing a shared encoding is beneficial, whereas $\Delta_{\mathrm{MTL}} > 0$ points to a potential penalty relative to two stand-alone models.
+
+Combining definitions,
+
+$$
+\mathcal{E}_{c_{0}}\left(g_{w, h, f}\right) - \mathcal{E}_{c_{0}}^{B}(\mathcal{G}) = \left[\mathcal{E}_{c_{0}}\left(g_{w, h, f}\right) - \mathcal{E}_{\mathrm{sep}, c_{0}}^{B}\right] - \Delta_{\mathrm{MTL}},
+$$
+
+We can now link these relationships to Theorem 4.4, which holds for any $(g,r)\in \mathcal{G}\times \mathcal{R}$ . Setting $g = g_{w,h,f}$ and invoking Proposition 4.9 yields the encoder-aware consistency bound:
+
+Corollary 4.11 (Encoder-aware $(\mathcal{G},\mathcal{R})$ -consistency). For any $(w,h,f,r)\in \mathcal{W}\times \mathcal{H}\times \mathcal{F}\times \mathcal{R}$
+
+$$
+\begin{aligned}
+\mathcal{E}_{\ell_{\mathrm{def}}}\left(g_{w, h, f}, r\right) - \mathcal{E}_{\ell_{\mathrm{def}}}^{B}(\mathcal{G}, \mathcal{R}) + \mathcal{U}_{\ell_{\mathrm{def}}}(\mathcal{G}, \mathcal{R}) \leq{} & \overline{\Gamma}^{\nu}\left(\mathcal{E}_{\Phi_{\mathrm{def}}^{\nu}}(r) - \mathcal{E}_{\Phi_{\mathrm{def}}^{\nu}}^{*}(\mathcal{R}) + \mathcal{U}_{\Phi_{\mathrm{def}}^{\nu}}(\mathcal{R})\right) \\
+& + \Delta_{\mathrm{heads}}(w, h, f) + \Delta_{\mathrm{repr}}(w) + \mathcal{U}_{c_{0}}(\mathcal{G}).
+\end{aligned}
+$$
+
+Corollary 4.11 decomposes the end-to-end excess deferral risk into three orthogonal sources: (i) the rejector optimization error, (ii) the head sub-optimality, and (iii) the representation gap.
+
+The bound suggests a two-stage pipeline: (i) learn or select high-capacity representations to minimize $\Delta_{\mathrm{repr}}$ , together with the best heads for that representation, then (ii) optimize the rejector to tighten the remaining terms. This pipeline exactly mirrors our proposed L2D solution. Such decoupling is particularly attractive when $|\mathcal{T}|$ is large and feature sharing is essential for sample efficiency.
+
+# 4.4. Generalization Bound
+
+We aim to quantify the generalization capability of our system, considering both the complexity of the hypothesis space and the quality of the participating agents. To this end, we define the empirical optimal rejector $\hat{r}^B$ as the minimizer of the empirical generalization error:
+
+$$
+\widehat{r}^{B} = \arg\min_{r \in \mathcal{R}} \frac{1}{K} \sum_{k = 1}^{K} \ell_{\mathrm{def}}(g, m, r, z_{k}), \tag{4}
+$$
+
+where $\ell_{\mathrm{def}}$ denotes the true deferral loss function. To characterize the system's generalization ability, we utilize the Rademacher complexity, which measures the expressive richness of a hypothesis class by evaluating its capacity to fit random noise (Bartlett & Mendelson, 2003; Mohri et al., 2012). The proof of Lemma 4.12 is provided in Appendix E.
+
+Lemma 4.12. Let $\mathcal{L}_1$ be a family of functions mapping $\mathcal{X}$ to $[0,1]$ , and let $\mathcal{L}_2$ be a family of functions mapping $\mathcal{X}$ to $\{0,1\}$ . Define $\mathcal{L} = \{l_1l_2 : l_1 \in \mathcal{L}_1, l_2 \in \mathcal{L}_2\}$ . Then, the empirical Rademacher complexity of $\mathcal{L}$ for any sample $S$ of size $K$ is bounded by:
+
+$$
+\widehat{\mathfrak{R}}_{S}(\mathcal{L}) \leq \widehat{\mathfrak{R}}_{S}(\mathcal{L}_{1}) + \widehat{\mathfrak{R}}_{S}(\mathcal{L}_{2}).
+$$
+
+For simplicity, we assume costs $c_{0}(g(x),z) = \ell_{01}(h(x),y) + \ell_{\mathrm{reg}}(f(x),t)$ and $c_{j > 0}(m_j(x),z) = c_0(m_j(x),z)$ . We assume the regression loss $\ell_{\mathrm{reg}}$ to be non-negative, bounded by $L$ , and Lipschitz. Furthermore, we assume that $m_{k,j}^{h}$ is drawn from the conditional distribution of the random variable $M_j^h$ given parameters $\{X = x_k,Y = y_k\}$ , and that $m_{k,j}^{f}$ is drawn from the conditional distribution of $M_j^f$ given $\{X = x_k,T = t_k\}$ . We define the family of deferral loss functions as $\mathcal{L}_{\mathrm{def}} = \{\ell_{\mathrm{def}}:\mathcal{G}\times \mathcal{R}\times \mathcal{M}\times \mathcal{Z}\to \mathbb{R}^{+}\}$ . Under these assumptions, we derive the generalization bounds for the binary setting as follows:
+
+Theorem 4.13 (Learning bounds of the deferral loss). For any expert $M_j$ and any distribution $\mathcal{D}$ over $\mathcal{Z}$ , with probability at least $1 - \delta$ for $\delta \in (0, 1/2]$ , the following bound holds at the optimum:
+
+$$
+\mathcal{E}_{\ell_{\mathrm{def}}}(h, f, r) \leq \widehat{\mathcal{E}}_{\ell_{\mathrm{def}}}(h, f, r) + 2\mathfrak{R}_{K}(\mathcal{L}_{\mathrm{def}}) + \sqrt{\frac{\log(1/\delta)}{2K}},
+$$
+
+with
+
+$$
+\begin{aligned}
+\mathfrak{R}_{K}(\mathcal{L}_{\mathrm{def}}) \leq{} & \frac{1}{2}\mathfrak{R}_{K}(\mathcal{H}) + \mathfrak{R}_{K}(\mathcal{F}) + \sum_{j = 1}^{J}\Omega\left(m_{j}^{h}, y\right) \\
+& + \Big(\sum_{j = 1}^{J}\max \ell_{\mathrm{reg}}(m_{j}^{f}, t) + 2\Big)\mathfrak{R}_{K}(\mathcal{R}),
+\end{aligned}
+$$
+
+with $\Omega(m_j^h, y) = \frac{1}{2} \mathcal{D}(m_j^h \neq y) \exp\left(-\frac{K}{8} \mathcal{D}(m_j^h \neq y)\right) + \mathfrak{R}_{K\mathcal{D}(m_j^h \neq y)/2}(\mathcal{R})$ .
+
+We prove Theorem 4.13 in Appendix F. The terms $\Re_K(\mathcal{H})$ and $\Re_K(\mathcal{F})$ denote the Rademacher complexities of the hypothesis class $\mathcal{H}$ and function class $\mathcal{F}$ , respectively, indicating that the generalization bounds depend on the complexity of the pre-trained model. The term $\Omega(m_j^h, y)$ captures the impact of each expert's classification error on the learning bound. It includes an exponentially decaying factor, $\frac{\mathcal{D}(m_j^h \neq y)}{2} \exp\left(-\frac{K\mathcal{D}(m_j^h \neq y)}{8}\right)$ , which decreases rapidly as the sample size $K$ grows or as the expert's error rate $\mathcal{D}(m_j^h \neq y)$ declines (Mozannar & Sontag, 2020). This reflects the intuition that more accurate experts contribute less to the bound, improving overall generalization. Finally, the last term shows that the generalization properties of our true deferral loss depend on the experts' regression performance.
+
+# 5. Experiments
+
+In this section, we present the performance improvements achieved by the proposed Learning-to-Defer surrogate in a multi-task context. Specifically, we demonstrate that our approach excels in object detection, a task where classification and regression components are inherently intertwined and cannot be delegated to separate agents, and where existing L2D methods encounter significant limitations. Furthermore, we evaluate our approach on an Electronic Health Record task, jointly predicting mortality (classification) and length of stay (regression), comparing our results with Mao et al. (2023a; 2024h).
+
+For each experiment, we report the mean and standard deviation across four independent trials to account for variability in the results. All training and evaluation were conducted on an NVIDIA H100 GPU. We give our training algorithm in Appendix A. Additional figures and details are provided in Appendix G. To ensure reproducibility, we have made our implementation publicly available.
+
+# 5.1. Object Detection Task
+
+We evaluate our approach using the Pascal VOC dataset (Everingham et al., 2010), a multi-object detection benchmark. This is the first time such a multi-task problem has been explored within the L2D framework, as previous L2D approaches require the classification and regression components to be independent (Mao et al., 2023a; 2024h).
+
+Dataset and Metrics: The PASCAL Visual Object Classes (VOC) dataset (Everingham et al., 2010) serves as a widely recognized benchmark in computer vision for evaluating object detection models. It consists of annotated images spanning 20 object categories, showcasing diverse scenes with varying scales, occlusions, and lighting conditions. To assess object detection performance, we report the mean Average Precision (mAP), a standard metric in the field. Additionally, in the context of L2D, we report the allocation metric (All.), which represents the ratio of allocated queries per agent.
+
+Agents setting: We trained three distinct Faster R-CNN models (Ren et al., 2016) to serve as our agents, differentiated by their computational complexities. The smallest, characterized by GFLOPS $= 12.2$ , represents our model $g \in \mathcal{G}$ with $\mathcal{G} = \{g : g(x) = (h \circ w(x), f \circ w(x)) \mid w \in \mathcal{W}, h \in \mathcal{H}, f \in \mathcal{F}\}$ . The medium-sized, denoted as Expert 1, has a computational cost of GFLOPS $= 134.4$ , while the largest, Expert 2, operates at GFLOPS $= 280.3$ . To account for the difference in complexity between Experts 1 and 2, we define the ratio $R_{G} = 280.3 / 134.4$ and set the query cost for Expert 1 as $\beta_{1} = \beta_{2} / R_{G}$ . This parameterization reflects the relative computational costs of querying experts. We define the agent costs as $c_{0}(g(x), z) = \mathrm{mAP}(g(x), z)$ and $c_{j \in [J]}(m_{j}(x), z) = \mathrm{mAP}(m_{j}(x), z)$ . We report the performance metrics of the agents alongside additional training details in Appendix G.1.
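
The querying-cost parameterization above is a one-line computation; a small sketch (values from the text):

```python
# GFLOPS of the two expert Faster R-CNN models, as reported above.
GFLOPS_EXPERT_1 = 134.4
GFLOPS_EXPERT_2 = 280.3

# Complexity ratio R_G and the tied querying cost beta_1 = beta_2 / R_G.
R_G = GFLOPS_EXPERT_2 / GFLOPS_EXPERT_1  # ~2.086

def beta_1(beta_2):
    """Expert 1's querying cost, scaled down by the experts' GFLOPS ratio."""
    return beta_2 / R_G

print(round(beta_1(0.15), 4))  # 0.0719
```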
+
+Rejector: The rejector is trained using a smaller version of the Faster R-CNN model (Ren et al., 2016). Training is performed for 200 epochs using the Adam optimizer (Kingma & Ba, 2017) with a learning rate of 0.001 and a batch size of 64. The checkpoint achieving the lowest empirical risk on the validation set is selected for evaluation.
+
+Results: In Figure 1, we observe that for lower cost values, specifically when $\beta_{2} < 0.15$ , the system consistently avoids selecting Expert 1. This outcome arises because the cost difference between $\beta_{1}$ and $\beta_{2}$ is negligible, making it more advantageous to defer to Expert 2 (the most accurate expert), where the modest cost increase is offset by superior outcomes. When $\beta_{2} = 0.15$ , however, it becomes optimal to allocate queries to both experts and the model simultaneously. In particular, there exist instances $x \in \mathcal{X}$ where both Expert 1 and Expert 2 correctly predict the target (while the model does not). In such cases, Expert 1 is preferred due to its lower cost $\beta_{1} < \beta_{2}$ . Conversely, for instances $x \in \mathcal{X}$ where Expert 2 is accurate and Expert 1 (along with the model) is incorrect, the system continues to select Expert 2, as $\beta_{2}$ remains relatively low. For $\beta_{2} \geq 0.2$ , the increasing cost differential between the experts shifts the balance in favor of Expert 1, enabling the system to achieve strong performance while minimizing overall costs.
+
+Figure 1. Performance comparison across different cost values $\beta_{2}$ on Pascal VOC (Everingham et al., 2010). The figure reports the mean Average Precision (mAP) and the allocation ratio for the model and two experts, with mean and variance. We report these results in Appendix Table 3.
+
+This demonstrates that our approach effectively allocates queries among agents, thereby enhancing the overall performance of the system, even when the classification and regression tasks are interdependent.
+
+# 5.2. EHR Task
+
+We compare our novel approach against existing two-stage L2D methods (Mao et al., 2023a; 2024h). Unlike the first experiment on object detection (Subsection 5.1), where classification and regression tasks are interdependent, this evaluation focuses on a second scenario where the two tasks can be treated independently.
+
+Dataset and Metrics: The Medical Information Mart for Intensive Care IV (MIMIC-IV) dataset (Johnson et al., 2023) is a comprehensive collection of de-identified health-related data from patients admitted to critical care units. For our analysis, we focus on two tasks: mortality prediction and length-of-stay prediction, corresponding to classification and regression tasks, respectively. To evaluate performance, we report accuracy (Acc) for the mortality prediction task, which quantifies classification performance, and Smooth L1 loss (sL1) for the length-of-stay prediction task, which measures the deviation between the predicted and actual values. Additionally, we report the allocation metric (All.) for L2D to capture query allocation behavior.
+
+Agents setting: We consider two experts, $\mathbf{M}_1$ and $\mathbf{M}_2$ , acting as specialized agents, aligning with the category allocation described in (Mozannar & Sontag, 2020; Verma et al., 2023; Verma & Nalisnick, 2022; Cao et al., 2024). The dataset is partitioned into $Z = 6$ clusters using the $K$ -means algorithm (Lloyd, 1982), where $Z$ is selected via the Elbow method (Thorndike, 1953). The clusters are denoted as $\{C_1, C_2, \ldots, C_Z\}$ . Each cluster represents a subset of data instances grouped by feature similarity, enabling feature-specific specialization by the experts. The experts are assumed to specialize in distinct subsets of clusters based on the task. For classification, $\mathbf{M}_1$ correctly predicts the outcomes for clusters $C_{\mathrm{cla}}^{\mathrm{M}_1} = \{C_1, C_2, C_4\}$ , while $\mathbf{M}_2$ handles clusters $C_{\mathrm{cla}}^{\mathrm{M}_2} = \{C_1, C_5, C_6\}$ . Notably, cluster $C_1$ is shared between the two experts, reflecting practical scenarios where domain knowledge overlaps. For regression tasks, $\mathbf{M}_1$ is accurate on clusters $C_{\mathrm{reg}}^{\mathrm{M}_1} = \{C_1, C_3, C_5\}$ , while $\mathbf{M}_2$ specializes in clusters $C_{\mathrm{reg}}^{\mathrm{M}_2} = \{C_1, C_4, C_6\}$ . Here too, overlap is modeled, with cluster $C_1$ being common to both experts and to both the classification and regression tasks. Note that the category assignments do not follow any specific rule.
+
+We assume that each expert produces correct predictions for the clusters they are assigned (Verma et al., 2023; Mozannar & Sontag, 2020). Conversely, for clusters outside their expertise, predictions are assumed to be incorrect. In such cases, for length-of-stay predictions, the outcomes are modeled using a uniform probability distribution to reflect uncertainty. The detailed performance evaluation of these agents is provided in Appendix G.2.
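
The synthetic-expert construction described above can be sketched as follows (hypothetical helper; the length-of-stay range is an assumed placeholder):

```python
import random

def synthetic_expert(cluster_id, y, t, cla_clusters, reg_clusters,
                     t_range=(0.0, 10.0)):
    """Cluster-specialized expert: correct on assigned clusters, wrong
    elsewhere, with out-of-expertise length-of-stay drawn uniformly at
    random to model uncertainty."""
    pred_y = y if cluster_id in cla_clusters else (y + 1) % 2  # binary label
    pred_t = t if cluster_id in reg_clusters else random.uniform(*t_range)
    return pred_y, pred_t

# Expert M1: classification clusters {1, 2, 4}, regression clusters {1, 3, 5}.
# Cluster 1 lies in both specializations, so both predictions are correct.
print(synthetic_expert(1, 0, 3.2, {1, 2, 4}, {1, 3, 5}))  # (0, 3.2)
```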
+
+The model utilizes two compact transformer architectures (Vaswani et al., 2017) for addressing both classification and regression tasks, formally defined as $\mathcal{G} = \{g : g(x) = (h(x), f(x)) \mid h \in \mathcal{H}, f \in \mathcal{F}\}$ . The agent's costs are specified as $c_{0}(g(x), z) = \lambda^{\mathrm{cla}}\ell_{01}(h(x), y) + \lambda^{\mathrm{reg}}\ell_{\mathrm{reg}}(f(x), t)$ and $c_{j \in [J]}(m_{j}(x), z) = c_{0}(m_{j}(x), z) + \beta_{j}$ . Consistent with prior works (Mozannar & Sontag, 2020; Verma et al., 2023; Mao et al., 2023a; 2024h), we set $\beta_{j} = 0$ .
+
+Rejectors: The two-stage L2D rejectors are trained using a small transformer model (Vaswani et al., 2017) as the encoder, following the approach outlined by Yang et al. (2023), with a classification head for query allocation. Training is performed over 100 epochs with a learning rate of 0.003, a warm-up period of 0.1, a cosine learning rate scheduler, the Adam optimizer (Kingma & Ba, 2017), and a batch size of 1024 for all baselines. The checkpoint with the lowest empirical risk on the validation set is selected for evaluation.
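One way to read the schedule above is linear warm-up over the first 10% of training followed by cosine decay from the base rate of 0.003. A minimal sketch, assuming "warm-up period of 0.1" denotes a fraction of the total steps (an interpretation, not stated explicitly above):

```python
import math

def lr_schedule(step, total_steps, base_lr=0.003, warmup_frac=0.1):
    """Linear warm-up over the first warmup_frac of steps, then cosine decay to ~0."""
    warmup = max(1, int(warmup_frac * total_steps))
    if step < warmup:
        return base_lr * (step + 1) / warmup
    progress = (step - warmup) / max(1, total_steps - warmup)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))

print(round(lr_schedule(9, 100), 6))  # 0.003 -- peak rate at the end of warm-up
```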
+
+Results: Table 1 compares the performance of our proposed Learning-to-Defer (Ours) approach with two existing methods: a classification-focused rejector (Mao et al., 2023a) and a regression-focused rejector (Mao et al., 2024h). The results highlight the limitations of task-specific rejectors and the advantages of our balanced approach.
+
+| Rejector | Acc (%) | sL1 | All. Model | All. Expert 1 | All. Expert 2 |
+| --- | --- | --- | --- | --- | --- |
+| Mao et al. (2023a) | 71.3 ± .1 | 1.45 ± .03 | .60 ± .02 | .01 ± .01 | .39 ± .02 |
+| Mao et al. (2024h) | 50.7 ± .8 | 1.18 ± .05 | .38 ± .01 | .37 ± .02 | .25 ± .01 |
+| Ours | 70.0 ± .5 | 1.28 ± .02 | .66 ± .01 | .12 ± .02 | .22 ± .01 |
+
+Table 1. Performance comparison of different two-stage L2D. The table reports accuracy (Acc), smooth L1 loss (sL1), and allocation rates (All.) to the model and experts with mean and variance.
+
+The classification-focused rejector achieves the highest classification accuracy at $71.3\%$ but struggles with regression, as reflected by its high smooth L1 loss of 1.45. On the other hand, the regression-focused rejector achieves the best regression performance with an sL1 loss of 1.18 but performs poorly in classification with an accuracy of $50.7\%$ . In contrast, our method balances performance across tasks, achieving a classification accuracy of $70.0\%$ and an sL1 loss of 1.28. Moreover, it significantly reduces reliance on experts, allocating $66\%$ of queries to the model compared to $60\%$ for Mao et al. (2023a) and $38\%$ for Mao et al. (2024h). Expert involvement is minimized, with only $12\%$ and $22\%$ of queries allocated to Experts 1 and 2, respectively.
+
+Since the experts possess distinct knowledge for the two tasks ( $C_{\mathrm{cla}}^{\mathrm{M}_1}$ and $C_{\mathrm{reg}}^{\mathrm{M}_1}$ for $\mathbf{M}_1$ ), independently deferring classification and regression may lead to suboptimal performance. In contrast, our approach models deferral decisions dependently, considering the interplay between the two components to achieve better overall results.
+
+# 6. Conclusion
+
+We introduced a Two-Stage Learning-to-Defer framework for multi-task problems, extending existing approaches to jointly handle classification and regression. We proposed a two-stage surrogate loss family that is both $(\mathcal{G},\mathcal{R})$ -consistent and Bayes-consistent for any cross-entropy-based surrogate. Additionally, we derived tight consistency bounds linked to cross-entropy losses and the $L_{1}$ -norm of aggregated costs. We further established novel minimizability gaps for the two-stage setting, generalizing prior results to Learning-to-Defer with multiple experts. Finally, we showed that our learning bounds improve with a richer hypothesis space and more confident experts.
+
+We validated our framework on two challenging tasks: (i) object detection, where classification and regression are inherently interdependent—beyond the scope of existing L2D methods; and (ii) electronic health record analysis, where we demonstrated that current L2D approaches can be suboptimal even when classification and regression tasks are independent.
+
+# Acknowledgment
+
+This research is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG2-PhD-2023-01-041-J) and by A*STAR, and is part of the programme DesCartes which is supported by the National Research Foundation, Prime Minister's Office, Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE) programme.
+
+# Impact Statement
+
+This paper advances the theoretical and practical understanding of machine learning, contributing to the development of more effective models and methods. While our research does not present any immediate or significant ethical concerns, we recognize the potential for indirect societal impacts.
+
+# References
+
+Awasthi, P., Mao, A., Mohri, M., and Zhong, Y. Multi-class h-consistency bounds. In Proceedings of the 36th International Conference on Neural Information Processing Systems, NIPS '22, Red Hook, NY, USA, 2022. Curran Associates Inc. ISBN 9781713871088.
+Balagurunathan, Y., Mitchell, R., and El Naqa, I. Requirements and reliability of AI in the medical context. Physica Medica, 83:72-78, 2021.
+Bartlett, P., Jordan, M., and McAuliffe, J. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101:138-156, 02 2006. doi: 10.1198/016214505000000907.
+Bartlett, P. L. and Mendelson, S. Rademacher and Gaussian complexities: Risk bounds and structural results. J. Mach. Learn. Res., 3(null):463-482, March 2003. ISSN 1532-4435.
+Bartlett, P. L. and Wegkamp, M. H. Classification with a reject option using a hinge loss. Journal of Machine Learning Research, 9(8), 2008.
+Benz, N. L. C. and Rodriguez, M. G. Counterfactual inference of second opinions. In Uncertainty in Artificial Intelligence, pp. 453-463. PMLR, 2022.
+Berkson, J. Application of the logistic function to bio-assay. Journal of the American Statistical Association, 39:357-365, 1944. URL https://api.semanticscholar.org/CorpusID:122893121.
+Buch, S., Escorcia, V., Shen, C., Ghanem, B., and Carlos Niebles, J. SST: Single-stream temporal action proposals. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2911-2920, 2017.
+Cao, Y., Cai, T., Feng, L., Gu, L., Gu, J., An, B., Niu, G., and Sugiyama, M. Generalizing consistent multiclass classification with rejection to be compatible with arbitrary losses. In Proceedings of the 36th International Conference on Neural Information Processing Systems, NIPS '22, Red Hook, NY, USA, 2022. Curran Associates Inc. ISBN 9781713871088.
+Cao, Y., Mozannar, H., Feng, L., Wei, H., and An, B. In defense of softmax parametrization for calibrated and consistent learning to defer. In Proceedings of the 37th International Conference on Neural Information Processing Systems, NIPS '23, Red Hook, NY, USA, 2024. Curran Associates Inc.
+Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., and Zagoruyko, S. End-to-end object detection with transformers. In Computer Vision - ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part I, pp. 213-229, Berlin, Heidelberg, 2020. Springer-Verlag. ISBN 978-3-030-58451-1. doi: 10.1007/978-3-030-58452-8_13. URL https://doi.org/10.1007/978-3-030-58452-8_13.
+Charusaie, M.-A., Mozannar, H., Sontag, D., and Samadi, S. Sample efficient learning of predictors that complement humans. In International Conference on Machine Learning, pp. 2972-3005. PMLR, 2022.
+Chow, C. On optimum recognition error and reject tradeoff. IEEE Transactions on Information Theory, 16(1):41-46, 1970.
+Cortes, C. and Vapnik, V. N. Support-vector networks. Machine Learning, 20:273-297, 1995. URL https://api.semanticscholar.org/CorpusID:52874011.
+Cortes, C., DeSalvo, G., and Mohri, M. Learning with rejection. In Algorithmic Learning Theory: 27th International Conference, ALT 2016, Bari, Italy, October 19-21, 2016, Proceedings 27, pp. 67-82. Springer, 2016.
+Cortes, C., Mao, A., Mohri, C., Mohri, M., and Zhong, Y. Cardinality-aware set prediction and top- $k$ classification. Advances in neural information processing systems, 37: 18265-18309, 2024.
+Cortes, C., Mao, A., Mohri, M., and Zhong, Y. Balancing the scales: A theoretical and algorithmic framework for learning from imbalanced data. In *Forty-second International Conference on Machine Learning*, 2025. URL https://openreview.net/forum?id=gscscNNiPN.
+
+Everingham, M., Gool, L., Williams, C. K., Winn, J., and Zisserman, A. The Pascal visual object classes (voc) challenge. Int. J. Comput. Vision, 88(2): 303-338, June 2010. ISSN 0920-5691. doi: 10.1007/s11263-009-0275-4. URL https://doi.org/10.1007/s11263-009-0275-4.
+Geifman, Y. and El-Yaniv, R. Selective classification for deep neural networks. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper_files/paper/2017/file/4a8423d5e91fda00bb7e46540e2b0cf1-Paper.pdf.
+Ghosh, A., Kumar, H., and Sastry, P. S. Robust loss functions under label noise for deep neural networks. ArXiv, abs/1712.09482, 2017. URL https://api.semanticscholar.org/CorpusID:6546734.
+Girshick, R. Fast R-CNN. arXiv preprint arXiv:1504.08083, 2015.
+Hemmer, P., Schemmer, M., Vössing, M., and Kuhl, N. Human-AI complementarity in hybrid intelligence systems: A structured literature review. *PACIS*, pp. 78, 2021.
+Hemmer, P., Schellhammer, S., Vössing, M., Jakubik, J., and Satzger, G. Forming effective human-AI teams: Building machine learning models that complement the capabilities of multiple experts. In Raedt, L. D. (ed.), Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22, pp. 2478-2484. International Joint Conferences on Artificial Intelligence Organization, 7 2022. doi: 10.24963/ijcai.2022/344. URL https://doi.org/10.24963/ijcai.2022/344. Main Track.
+Johnson, A. E., Pollard, T. J., Shen, L., Lehman, L.-w. H., Feng, M., Ghassemi, M., Moody, B., Szolovits, P., Anthony Celi, L., and Mark, R. G. MIMIC-III, a freely accessible critical care database. Scientific data, 3(1):1-9, 2016.
+Johnson, A. E., Bulgarelli, L., Shen, L., Gayles, A., Shammout, A., Horng, S., Pollard, T. J., Hao, S., Moody, B., Gow, B., et al. MIMIC-IV, a freely accessible electronic health record dataset. Scientific data, 10(1):1, 2023.
+Kerrigan, G., Smyth, P., and Steyvers, M. Combining human predictions with model probabilities via confusion matrices and calibration. In Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P., and Vaughan, J. W. (eds.), Advances in Neural Information Processing Systems, volume 34, pp. 4421-4434. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper_files/paper/2021/file/234b941e88b755b7a72a1c1dd5022f30-Paper.pdf.
+Keswani, V., Lease, M., and Kenthapadi, K. Towards unbiased and accurate deferral to multiple experts. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, AIES '21, pp. 154-165, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450384735. doi: 10.1145/3461702.3462516. URL https://doi.org/10.1145/3461702.3462516.
+Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization, 2017. URL https://arxiv.org/abs/1412.6980.
+Liu, S., Cao, Y., Zhang, Q., Feng, L., and An, B. Mitigating underfitting in learning to defer with consistent losses. In International Conference on Artificial Intelligence and Statistics, pp. 4816-4824. PMLR, 2024.
+Lloyd, S. Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2):129-137, 1982. doi: 10.1109/TIT.1982.1056489.
+Long, P. and Servedio, R. Consistency versus realizable h-consistency for multiclass classification. In Dasgupta, S. and McAllester, D. (eds.), Proceedings of the 30th International Conference on Machine Learning, volume 28 of Proceedings of Machine Learning Research, pp. 801-809, Atlanta, Georgia, USA, 17-19 Jun 2013. PMLR. URL https://proceedings.mlr.press/v28/long13.html.
+Madras, D., Pitassi, T., and Zemel, R. Predict responsibly: Improving fairness and accuracy by learning to defer. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, NIPS'18, pp. 6150-6160, 2018.
+Mao, A. Theory and Algorithms for Learning with Multi-Class Abstention and Multi-Expert Deferral. PhD thesis, New York University, 2025.
+Mao, A., Mohri, C., Mohri, M., and Zhong, Y. Two-stage learning to defer with multiple experts. In Thirty-seventh Conference on Neural Information Processing Systems, 2023a. URL https://openreview.net/forum?id=G1lsH0T4b2.
+Mao, A., Mohri, M., and Zhong, Y. Cross-entropy loss functions: Theoretical analysis and applications. In Proceedings of the 40th International Conference on Machine Learning, ICML'23, 2023b.
+
+Mao, A., Mohri, M., and Zhong, Y. Predictor-rejector multiclass abstention: Theoretical analysis and algorithms. In International Conference on Algorithmic Learning Theory, pp. 822-867. PMLR, 2024a.
+Mao, A., Mohri, M., and Zhong, Y. $h$ -consistency guarantees for regression. In Salakhutdinov, R., Kolter, Z., Heller, K., Weller, A., Oliver, N., Scarlett, J., and Berkenkamp, F. (eds.), Proceedings of the 41st International Conference on Machine Learning, volume 235 of Proceedings of Machine Learning Research, pp. 34712-34737. PMLR, 21-27 Jul 2024b. URL https://proceedings.mlr.press/v235/mao24c.html.
+Mao, A., Mohri, M., and Zhong, Y. Enhanced $h$ -consistency bounds. arXiv preprint arXiv:2407.13722, 2024c.
+Mao, A., Mohri, M., and Zhong, Y. H-consistency guarantees for regression. Proceedings of Machine Learning Research, 235:34712-34737, 2024d.
+Mao, A., Mohri, M., and Zhong, Y. Multi-label learning with stronger consistency guarantees. Advances in neural information processing systems, 37:2378-2406, 2024e.
+Mao, A., Mohri, M., and Zhong, Y. Principled approaches for learning to defer with multiple experts. In International Workshop on Combinatorial Image Analysis, pp. 107-135. Springer, 2024f.
+Mao, A., Mohri, M., and Zhong, Y. Realizable $h$-consistent and bayes-consistent loss functions for learning to defer. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024g. URL https://openreview.net/forum?id=OcO2XakUUK.
+Mao, A., Mohri, M., and Zhong, Y. Regression with multi-expert deferral. In *Forty-first International Conference on Machine Learning*, 2024h.
+Mao, A., Mohri, M., and Zhong, Y. Theoretically grounded loss functions and algorithms for score-based multi-class abstention. In International Conference on Artificial Intelligence and Statistics, pp. 4753-4761. PMLR, 2024i.
+Mao, A., Mohri, M., and Zhong, Y. A universal growth rate for learning with smooth surrogate losses. Advances in neural information processing systems, 37:41670-41708, 2024j.
+Mao, A., Mohri, M., and Zhong, Y. Enhanced $h$ -consistency bounds. In 36th International Conference on Algorithmic Learning Theory, 2025a. URL https://openreview.net/forum?id=qgnVGFJMJo.
+
+Mao, A., Mohri, M., and Zhong, Y. Mastering multiple-expert routing: Realizable $h$ -consistency and strong guarantees for learning to defer. In Forty-second International Conference on Machine Learning, 2025b. URL https://openreview.net/forum?id=2KlxjR61sd.
+Mao, A., Mohri, M., and Zhong, Y. Principled algorithms for optimizing generalized metrics in binary classification. In *Forty-second International Conference on Machine Learning*, 2025c. URL https://openreview.net/forum?id=YngQelHz1X.
+Mohri, M., Rostamizadeh, A., and Talwalkar, A. Foundations of Machine Learning. The MIT Press, 2012. ISBN 026201825X.
+Montreuil, Y., Yeo, S. H., Carlier, A., Ng, L. X., and Ooi, W. T. Optimal query allocation in extractive qa with llms: A learning-to-defer framework with theoretical guarantees, 2024.
+Montreuil, Y., Carlier, A., Ng, L. X., and Ooi, W. T. Adversarial robustness in two-stage learning-to-defer: Algorithms and guarantees. In *Forty-second International Conference on Machine Learning*, 2025a. URL https://openreview.net/forum?id=h3KHwZCnxH.
+Montreuil, Y., Carlier, A., Ng, L. X., and Ooi, W. T. Why ask one when you can ask $k$ ? two-stage learning-to-defer to the top- $k$ experts, 2025b.
+Montreuil, Y., Carlier, A., Ng, L. X., and Ooi, W. T. One-stage top- $k$ learning-to-defer: Score-based surrogates with theoretical guarantees, 2025c.
+Mozannar, H. and Sontag, D. Consistent estimators for learning to defer to an expert. In Proceedings of the 37th International Conference on Machine Learning, ICML'20, 2020.
+Mozannar, H., Lang, H., Wei, D., Sattigeri, P., Das, S., and Sontag, D. A. Who should predict? exact algorithms for learning to defer to humans. In International Conference on Artificial Intelligence and Statistics, 2023. URL https://api.semanticscholar.org/CorpusID:255941521.
+Narasimhan, H., Jitkrittum, W., Menon, A. K., Rawat, A., and Kumar, S. Post-hoc estimators for learning to defer to an expert. Advances in Neural Information Processing Systems, 35:29292-29304, 2022.
+Okati, N., De, A., and Rodriguez, M. Differentiable learning under triage. Advances in Neural Information Processing Systems, 34:9140-9151, 2021.
+
+Palomba, F., Pugnana, A., Álvarez, J. M., and Ruggieri, S. A causal framework for evaluating deferring systems. CoRR, 2024.
+Ramaswamy, H. G., Tewari, A., and Agarwal, S. Consistent algorithms for multiclass classification with an abstain option. *Electronic Journal of Statistics*, 12(1):530 - 554, 2018. doi: 10.1214/17-EJS1388. URL https://doi.org/10.1214/17-EJS1388.
+Redmon, J., Divvala, S., Girshick, R., and Farhadi, A. You only look once: Unified, real-time object detection. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 779-788, 2016.
+Ren, S., He, K., Girshick, R., and Sun, J. Faster r-cnn: Towards real-time object detection with region proposal networks, 2016. URL https://arxiv.org/abs/1506.01497.
+Steinwart, I. How to compare different loss functions and their risks. Constructive Approximation, 26:225-287, 2007. URL https://api.semanticscholar.org/CorpusID:16660598.
+Tailor, D., Patra, A., Verma, R., Manggala, P., and Nalisnick, E. Learning to defer to a population: A meta-learning approach. In Dasgupta, S., Mandt, S., and Li, Y. (eds.), Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, volume 238 of Proceedings of Machine Learning Research, pp. 3475-3483. PMLR, 02-04 May 2024. URL https://proceedings.mlr.press/v238/tailor24a.html.
+Tewari, A. and Bartlett, P. L. On the consistency of multiclass classification methods. Journal of Machine Learning Research, 8(36):1007-1025, 2007. URL http://jmlr.org/papers/v8/tewari07a.html.
+Thorndike, R. L. Who belongs in the family? Psychometrika, 18:267-276, 1953. URL https://api.semanticscholar.org/CorpusID:120467216.
+Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L. u., and Polosukhin, I. Attention is all you need. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf.
+
+Verma, R. and Nalisnick, E. Calibrated learning to defer with one-vs-all classifiers. In International Conference on Machine Learning, pp. 22184-22202. PMLR, 2022.
+Verma, R., Barrejon, D., and Nalisnick, E. Learning to defer to multiple experts: Consistent surrogate losses, confidence calibration, and conformal ensembles. In International Conference on Artificial Intelligence and Statistics, 2023. URL https://api.semanticscholar.org/CorpusID:253237048.
+Wei, Z., Cao, Y., and Feng, L. Exploiting human-ai dependence for learning to defer. In *Forty-first International Conference on Machine Learning*, 2024.
+Yang, C., Wu, Z., Jiang, P., Lin, Z., Gao, J., Danek, B., and Sun, J. PyHealth: A deep learning toolkit for healthcare predictive modeling. In Proceedings of the 27th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD) 2023, 2023. URL https://github.com/sunlabuiuc/PyHealth.
+Zhang, M. and Agarwal, S. Bayes consistency vs. h-consistency: The interplay between surrogate loss functions and the scoring function class. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., and Lin, H. (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 16927-16936. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/c4c28b367e14df88993ad475dedf6b77-Paper.pdf.
+Zhang, T. Statistical behavior and consistency of classification methods based on convex risk minimization. Annals of Statistics, 32, 12 2002. doi: 10.1214/aos/1079120130.
+Zhong, Y. Fundamental Novel Consistency Theory: H-Consistency Bounds. New York University, 2025.
+
+# A. Algorithm
+
+Algorithm 1 Two-Stage Learning-to-Defer for Multi-Task Learning Algorithm
+Input: Dataset $\{(x_{k},y_{k},t_{k})\}_{k = 1}^{K}$ , multi-task model $g\in \mathcal{G}$ , experts $m\in \mathcal{M}$ , rejector $r\in \mathcal{R}$ , number of epochs EPOCH,
+batch size $B$ , learning rate $\eta$
+Initialization: Initialize rejector parameters $\theta$
+for $i = 1$ to EPOCH do
+  Shuffle dataset $\{(x_k,y_k,t_k)\}_{k = 1}^K$
+  for each mini-batch $\mathcal{B}\subset \{(x_k,y_k,t_k)\}_{k = 1}^K$ of size $B$ do
+    Extract input-output pairs $z = (x,y,t)\in \mathcal{B}$
+    Query model $g(x)$ and experts $m(x)$. {Agents are pre-trained and fixed}
+    Evaluate costs $c_{0}(g(x),z)$ and $c_{j > 0}(m(x),z)$. {Compute task-specific costs}
+    Compute rejector prediction $r(x) = \arg \max_{j\in \mathcal{A}}r(x,j)$. {Rejector decision}
+    Compute the surrogate deferral empirical risk $\widehat{\mathcal{E}}_{\Phi_{\mathrm{def}}} = \frac{1}{B}\sum_{z\in \mathcal{B}}\left[\Phi_{\mathrm{def}}(g,r,m,z)\right]$. {Empirical risk computation}
+    Update parameters $\theta$ using gradient descent: $\theta \gets \theta -\eta \nabla_{\theta}\widehat{\mathcal{E}}_{\Phi_{\mathrm{def}}}$ {Parameter update}
+  end for
+end for
+Return: trained rejector model $r^*$
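To make the loop concrete, here is a toy numpy instantiation of the same structure: a linear rejector trained by full-batch gradient descent on a log-softmax deferral surrogate whose per-option weights aggregate the other agents' costs. Everything here is an assumption for illustration (synthetic costs, linear scores, full-batch updates); the experiments instead use a transformer encoder and mini-batches.

```python
import numpy as np

rng = np.random.default_rng(1)
N, D, J = 512, 8, 2                      # samples, features, experts; agents A = {0, 1, 2}

X = rng.normal(size=(N, D))
# Hypothetical pre-computed costs c_j(x): agent j is cheap when feature j is large.
costs = 1.0 / (1.0 + np.exp(X[:, : J + 1]))

# Surrogate weight tau_j aggregates the costs of all OTHER agents.
tau = costs.sum(axis=1, keepdims=True) - costs

W = np.zeros((D, J + 1))                 # linear rejector scores r(x, j) = (x @ W)_j

def softmax(s):
    e = np.exp(s - s.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

eta = 0.2
for _ in range(500):                     # full-batch gradient descent on the surrogate risk
    P = softmax(X @ W)
    # Phi_def = sum_j tau_j * (-log p_j)  =>  d/ds_k = p_k * sum_j tau_j - tau_k
    grad_scores = P * tau.sum(axis=1, keepdims=True) - tau
    W -= eta * (X.T @ grad_scores) / N

allocation = (X @ W).argmax(axis=1)      # deferral decision r(x) = argmax_j r(x, j)
risk_learned = costs[np.arange(N), allocation].mean()
risk_model_only = costs[:, 0].mean()     # never defer
print(risk_learned, risk_model_only)
```

Because the pointwise minimizer of this weighted surrogate puts the most probability mass on the option with the largest weight $\tau_j$, i.e. the smallest cost $c_j$, the learned allocation drives the realized cost below the never-defer baseline.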
+
+We now prove the key lemmas and theorems stated in the main paper.
+
+# B. Proof of Lemma 4.2
+
+We aim to prove Lemma 4.2, which establishes the optimal deferral decision by minimizing the conditional risk.
+
+By definition, the Bayes-optimal rejector $r^B (x)$ minimizes the conditional risk $\mathcal{C}_{\ell_{\mathrm{def}}}$ , given by:
+
+$$
+\mathcal {C} _ {\ell_ {\mathrm {d e f}}} (g, r, x) = \mathbb {E} _ {y, t | x} [ \ell_ {\mathrm {d e f}} (g, r, m, z) ]. \tag {5}
+$$
+
+Expanding the expectation, we obtain:
+
+$$
+\mathcal {C} _ {\ell_ {\mathrm {d e f}}} (g, r, x) = \mathbb {E} _ {y, t | x} \left[ \sum_ {j = 0} ^ {J} c _ {j} (g (x), m _ {j} (x), z) 1 _ {r (x) = j} \right]. \tag {6}
+$$
+
+Using the linearity of expectation, this simplifies to:
+
+$$
+\mathcal {C} _ {\ell_ {\mathrm {d e f}}} (g, r, x) = \sum_ {j = 0} ^ {J} \mathbb {E} _ {y, t | x} [ c _ {j} (g (x), m _ {j} (x), z) ] 1 _ {r (x) = j}. \tag {7}
+$$
+
+Since we seek the rejector that minimizes the expected loss, the Bayes-conditional risk is given by:
+
+$$
+\mathcal {C} _ {\ell_ {\mathrm {d e f}}} ^ {B} (\mathcal {G}, \mathcal {R}, x) = \inf _ {g \in \mathcal {G}, r \in \mathcal {R}} \mathbb {E} _ {y, t | x} [ \ell_ {\mathrm {d e f}} (g, r, m, z) ]. \tag {8}
+$$
+
+Rewriting this expression, we obtain:
+
+$$
+\mathcal {C} _ {\ell_ {\mathrm {d e f}}} ^ {B} (\mathcal {G}, \mathcal {R}, x) = \inf _ {r \in \mathcal {R}} \mathbb {E} _ {y, t | x} \left[ \inf _ {g \in \mathcal {G}} c _ {0} (g (x), z) 1 _ {r (x) = 0} + \sum_ {j = 1} ^ {J} c _ {j} \left(m _ {j} (x), z\right) 1 _ {r (x) = j} \right]. \tag {9}
+$$
+
+This leads to the following minimization problem:
+
+$$
+\mathcal {C} _ {\ell_ {\mathrm {d e f}}} ^ {B} (\mathcal {G}, \mathcal {R}, x) = \min \left\{\inf _ {g \in \mathcal {G}} \mathbb {E} _ {y, t | x} [ c _ {0} (g (x), z) ], \min _ {j \in [ J ]} \mathbb {E} _ {y, t | x} [ c _ {j} \left(m _ {j} (x), z\right) ] \right\}. \tag {10}
+$$
+
+To simplify notation, we define:
+
+$$
+\bar {c} _ {j} ^ {*} = \left\{ \begin{array}{l l} \inf _ {g \in \mathcal {G}} \mathbb {E} _ {y, t | x} [ c _ {0} (g (x), z) ], & \text{if } j = 0, \\ \mathbb {E} _ {y, t | x} [ c _ {j} (m _ {j} (x), z) ], & \text{otherwise.} \end{array} \right. \tag {11}
+$$
+
+Thus, the Bayes-conditional risk simplifies to:
+
+$$
+\mathcal {C} _ {\ell_ {\mathrm {d e f}}} ^ {B} (\mathcal {G}, \mathcal {R}, x) = \min _ {j \in \mathcal {A}} \bar {c} _ {j} ^ {*}. \tag {12}
+$$
+
+Since the rejector selects the decision with the lowest expected cost, the optimal rejector is given by:
+
+$$
+r ^ {B} (x) = \left\{ \begin{array}{l l} 0, & \text{if } \inf _ {g \in \mathcal {G}} \mathbb {E} _ {y, t | x} [ c _ {0} (g (x), z) ] \leq \min _ {j \in [ J ]} \mathbb {E} _ {y, t | x} [ c _ {j} (m _ {j} (x), z) ], \\ \arg \min _ {j \in [ J ]} \mathbb {E} _ {y, t | x} [ c _ {j} (m _ {j} (x), z) ], & \text{otherwise.} \end{array} \right. \tag {13}
+$$
+
+This completes the proof.
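A tiny numeric illustration of the optimal rejector in Eq. (13), using made-up expected costs for a single input $x$ (the model at index 0 and two experts):

```python
import numpy as np

# Hypothetical expected costs at some input x: model (j = 0) and two experts.
expected_costs = np.array([0.30, 0.45, 0.25])

# Eq. (13): keep the model on ties in its favor, otherwise defer to the cheapest expert.
if expected_costs[0] <= expected_costs[1:].min():
    r_B = 0
else:
    r_B = 1 + int(expected_costs[1:].argmin())

print(r_B)  # 2: Expert 2 (0.25) undercuts the model (0.30)
```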
+
+
+
+# C. Proof of Theorem 4.4
+
+Before proving Theorem 4.4, we state the following Lemma C.1 (Awasthi et al., 2022; Mao et al., 2024h).
+
+Lemma C.1 ( $\mathcal{R}$ -consistency bound). Assume that the following $\mathcal{R}$ -consistency bound holds for any $r \in \mathcal{R}$ and any distribution:
+
+$$
+\mathcal {E} _ {\ell_ {0 1}} (r) - \mathcal {E} _ {\ell_ {0 1}} ^ {*} (\mathcal {R}) + \mathcal {U} _ {\ell_ {0 1}} (\mathcal {R}) \leq \Gamma^ {\nu} (\mathcal {E} _ {\Phi_ {0 1} ^ {\nu}} (r) - \mathcal {E} _ {\Phi_ {0 1} ^ {\nu}} ^ {*} (\mathcal {R}) + \mathcal {U} _ {\Phi_ {0 1} ^ {\nu}} (\mathcal {R}))
+$$
+
+then for any $p = (p_0, \ldots, p_J) \in \Delta^{|\mathcal{A}|}$ and any $x\in \mathcal{X}$ , we get
+
+$$
+\sum_ {j = 0} ^ {J} p _ {j} 1 _ {r (x) \neq j} - \inf _ {r \in \mathcal {R}} \sum_ {j = 0} ^ {J} p _ {j} 1 _ {r (x) \neq j} \leq \Gamma^ {\nu} \Big (\sum_ {j = 0} ^ {J} p _ {j} \Phi_ {0 1} ^ {\nu} (r, x, j) - \inf _ {r \in \mathcal {R}} \sum_ {j = 0} ^ {J} p _ {j} \Phi_ {0 1} ^ {\nu} (r, x, j) \Big)
+$$
+
+Theorem 4.4 $((\mathcal{G},\mathcal{R})$ -consistency bounds). Let $g\in \mathcal{G}$ be a multi-task model. Suppose there exists a non-decreasing function $\Gamma^{\nu}:\mathbb{R}_{+}\to \mathbb{R}_{+}$ , parameterized by $\nu \geq 0$ , such that the $\mathcal{R}$ -consistency bound holds for any distribution $\mathcal{D}$ :
+
+$$
+\mathcal {E} _ {\ell_ {0 1}} (r) - \mathcal {E} _ {\ell_ {0 1}} ^ {*} (\mathcal {R}) + \mathcal {U} _ {\ell_ {0 1}} (\mathcal {R}) \leq \Gamma^ {\nu} \left(\mathcal {E} _ {\Phi_ {0 1} ^ {\nu}} (r) - \mathcal {E} _ {\Phi_ {0 1} ^ {\nu}} ^ {*} (\mathcal {R}) + \mathcal {U} _ {\Phi_ {0 1} ^ {\nu}} (\mathcal {R})\right),
+$$
+
+then for any $(g,r)\in \mathcal{G}\times \mathcal{R}$ , any distribution $\mathcal{D}$ , and any $x\in \mathcal{X}$
+
+$$
+\begin{array}{l} \mathcal {E} _ {\ell_ {d e f}} (g, r) - \mathcal {E} _ {\ell_ {d e f}} ^ {B} (\mathcal {G}, \mathcal {R}) + \mathcal {U} _ {\ell_ {d e f}} (\mathcal {G}, \mathcal {R}) \leq \\ \overline {{\Gamma}} ^ {\nu} \left(\mathcal {E} _ {\Phi_ {d e f} ^ {\nu}} (r) - \mathcal {E} _ {\Phi_ {d e f} ^ {\nu}} ^ {*} (\mathcal {R}) + \mathcal {U} _ {\Phi_ {d e f} ^ {\nu}} (\mathcal {R})\right) \\ + \mathcal {E} _ {c _ {0}} (g) - \mathcal {E} _ {c _ {0}} ^ {B} (\mathcal {G}) + \mathcal {U} _ {c _ {0}} (\mathcal {G}), \\ \end{array}
+$$
+
+where the expected aggregated cost vector is given by $\overline{\tau} = \left(\mathbb{E}_{y,t|x}[\tau_0],\dots ,\mathbb{E}_{y,t|x}[\tau_J]\right)$ , and $\overline{\Gamma}^{\nu}(u) = \| \overline{\tau}\|_{1}\Gamma^{\nu}\left(\frac{u}{\|\overline{\tau}\|_{1}}\right)$ with $\Gamma^{\nu}(u) = \mathcal{T}^{-1,\nu}(u)$ . In the case of the log-softmax surrogate ( $\nu = 1$ ), the transformation is given by $\mathcal{T}^{\nu = 1}(u) = \frac{1 + u}{2}\log (1 + u) + \frac{1 - u}{2}\log (1 - u)$ .
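Before the proof, the transformation for the log-softmax case can be checked numerically: $\mathcal{T}^{\nu=1}$ is increasing on $[0,1)$, so its inverse $\Gamma^{\nu=1}$ is well defined and recoverable by bisection. A minimal sketch (the bisection tolerance is an arbitrary choice; the scaled $\overline{\Gamma}^{\nu}$ only rescales the input and output by $\|\overline{\tau}\|_1$):

```python
import math

def T(u):
    """T^{nu=1}(u) = (1+u)/2 * log(1+u) + (1-u)/2 * log(1-u); increasing on [0, 1)."""
    return (1 + u) / 2 * math.log(1 + u) + (1 - u) / 2 * math.log(1 - u)

def Gamma(v, tol=1e-12):
    """Gamma^{nu=1}(v) = T^{-1}(v), computed by bisection since T is monotone."""
    lo, hi = 0.0, 1.0 - 1e-9
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if T(mid) < v:
            lo = mid
        else:
            hi = mid
    return lo

print(round(Gamma(T(0.3)), 6))  # 0.3 -- the inverse recovers the original gap
```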
+
+Proof. Let us denote the expected cost for $j \in \mathcal{A} = \{0, \dots, J\}$ :
+
+$$
+\overline {{c}} _ {j} ^ {*} = \left\{ \begin{array}{l l} \inf _ {g \in \mathcal {G}} \mathbb {E} _ {y, t | x} [ c _ {0} (g (x), z) ] & \text{if } j = 0 \\ \mathbb {E} _ {y, t | x} [ c _ {j} (m _ {j} (x), z) ] & \text{otherwise} \end{array} \right.
+$$
+
+Using the change of variables and the Bayes-conditional risk introduced in the proof of Lemma 4.2 in Appendix B, we have:
+
+$$
+\begin{array}{l} \mathcal{C}_{\ell_{\mathrm{def}}}^{B}(\mathcal{G},\mathcal{R},x) = \min_{j\in \mathcal{A}}\overline{c}_{j}^{*} \\ \mathcal {C} _ {\ell_ {\mathrm {d e f}}} (g, r, x) = \sum_ {j = 0} ^ {J} \mathbb {E} _ {y, t | x} \left[ c _ {j} (g (x), m _ {j} (x), z) \right] 1 _ {r (x) = j} \tag {14} \\ \end{array}
+$$
+
+We follow suit for our surrogate $\Phi_{\mathrm{def}}$ and derive its conditional risk and optimal conditional risk.
+
+$$
+\mathcal {C} _ {\Phi_ {\mathrm {d e f}}} = \mathbb {E} _ {y, t | x} \Big [ \sum_ {j = 1} ^ {J} c _ {j} (m _ {j} (x), z) \Phi_ {0 1} ^ {\nu} (r, x, 0) + \sum_ {j = 1} ^ {J} \Big (c _ {0} (g (x), z) + \sum_ {i = 1} ^ {J} c _ {i} (m _ {i} (x), z) 1 _ {j \neq i} \Big) \Phi_ {0 1} ^ {\nu} (r, x, j) \Big ]
+$$
+
+$$
+\mathcal {C} _ {\Phi_ {\mathrm {d e f}}} ^ {*} = \inf _ {r \in \mathcal {R}} \mathbb {E} _ {y, t | x} \left[ \sum_ {j = 1} ^ {J} c _ {j} (m _ {j} (x), z) \Phi_ {0 1} ^ {\nu} (r, x, 0) + \sum_ {j = 1} ^ {J} \Big( c _ {0} (g (x), z) + \sum_ {i = 1} ^ {J} c _ {i} (m _ {i} (x), z) 1 _ {j \neq i} \Big) \Phi_ {0 1} ^ {\nu} (r, x, j) \right]
+$$
+
+Let us define the function $v(m(x),z) = \min_{j\in [J]}\bar{c}_j(m_j(x),z)$ , where $m_j(x)$ denotes expert $j$ 's output and $\bar{c}_j$ the corresponding expected cost. Using this definition, the calibration gap is $\Delta \mathcal{C}_{\ell_{\mathrm{def}}} \coloneqq \mathcal{C}_{\ell_{\mathrm{def}}} - \mathcal{C}_{\ell_{\mathrm{def}}}^B$ , the difference between the conditional risk and the Bayes-conditional risk. By construction and by the risks derived in the preceding analysis, the calibration gap satisfies $\Delta \mathcal{C}_{\ell_{\mathrm{def}}} \geq 0$ .
+
+$$
+\begin{array}{l} \Delta \mathcal {C} _ {\ell_ {\mathrm {d e f}}} = \mathbb {E} _ {y, t | x} \left[ \rho (g (x), z) 1 _ {r (x) = 0} + \sum_ {j = 1} ^ {J} \left(\rho (m _ {j} (x), z) + \beta_ {j}\right) 1 _ {r (x) = j} \right] \\ \left. - v (m (x), z) + \left(v (m (x), z) - \min _ {j \in \mathcal {A}} \bar {c} _ {j} ^ {*} (g (x), m (x), z)\right) \right. \\ \end{array}
+$$
+
+Let us consider $\Delta \mathcal{C}_{\ell_{\mathrm{def}}} = A_1 + A_2$ , such that:
+
+$$
+A _ {1} = \mathbb {E} _ {y, t | x} \left[ 1 _ {r (x) = 0} \rho (g (x), z) + \sum_ {j = 1} ^ {J} 1 _ {r (x) = j} \left(\rho \left(m _ {j} (x), z\right) + \beta_ {j}\right) \right] - v (m (x), z) \tag {15}
+$$
+
+$$
+A _ {2} = \left(v (m (x), z) - \min _ {j \in \mathcal {A}} \bar {c} _ {j} (g (x), m (x), z)\right)
+$$
+
+By considering the properties of min, we also get the following inequality:
+
+$$
+v (m (x), z) - \min _ {j \in \mathcal {A}} \bar {c} _ {j} ^ {*} (g (x), m (x), z) \leq \mathbb {E} _ {y, t | x} [ c _ {0} (g (x), z) ] - \inf _ {g \in \mathcal {G}} \mathbb {E} _ {y, t | x} [ c _ {0} (g (x), z) ] \tag {16}
+$$
+
+implying,
+
+$$
+\Delta \mathcal {C} _ {\ell_ {\mathrm {d e f}}} \leq A _ {1} + \bar {c} _ {0} (g (x), z) - \bar {c} _ {0} ^ {*} (g (x), z) \tag {17}
+$$
+
+We now choose a distribution over the rejector's options, defining for all $j\in \mathcal{A}$ :
+
+$$
+p _ {0} = \frac {\sum_ {j = 1} ^ {J} \overline {{c}} _ {j} (m _ {j} (x) , z)}{J \sum_ {j = 0} ^ {J} \overline {{c}} _ {j} (g (x) , m _ {j} (x) , z)}
+$$
+
+and
+
+$$
+p _ {j \in [ J ]} = \frac {\overline {{c}} _ {0} (g (x) , z) + \sum_ {j ^ {\prime} \neq j} \overline {{c}} _ {j ^ {\prime}} (m _ {j ^ {\prime}} (x) , z)}{J \sum_ {j ^ {\prime} = 0} ^ {J} \overline {{c}} _ {j ^ {\prime}} (g (x) , m _ {j ^ {\prime}} (x) , z)}
+$$
+
+which can also be written as:
+
+$$
+p _ {j} = \frac {\bar {\tau} _ {j}}{\| \bar {\boldsymbol {\tau}} \| _ {1}} \tag {18}
+$$
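A quick numeric sanity check of Eq. (18): the weights sum to one, the normalizer equals $J\sum_{j}\bar{c}_j$ (i.e. $\|\overline{\tau}\|_1$), and the most-weighted index coincides with the cheapest agent, which is what lets the 0-1 analysis of Lemma C.1 transfer. The costs below are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
J = 3
c_bar = rng.uniform(size=J + 1)   # placeholder expected costs: c_bar[0] is the model's

total = c_bar.sum()
tau = total - c_bar               # tau_j = sum of the OTHER agents' expected costs

# The shared denominator of p_0 and p_j equals J * sum_j c_bar_j, i.e. ||tau||_1.
assert np.isclose(tau.sum(), J * total)

p = tau / tau.sum()               # Eq. (18)
print(round(p.sum(), 10), int(p.argmax()) == int(c_bar.argmin()))  # 1.0 True
```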
+
+Injecting the new distribution, we obtain the following:
+
+$$
+\Delta \mathcal {C} _ {\Phi_ {\mathrm {d e f}}} = \| \overline {{\boldsymbol {\tau}}} \| _ {1} \left(\sum_ {j = 0} ^ {J} p _ {j} \Phi_ {0 1} ^ {\nu} (r, x, j) - \inf _ {r \in \mathcal {R}} \sum_ {j = 0} ^ {J} p _ {j} \Phi_ {0 1} ^ {\nu} (r, x, j)\right) \tag {19}
+$$
+
+Now consider the first and last term of $\Delta C_{\ell_{\mathrm{def}}}$ . Following the intermediate step for Lemma 4.3, we have:
+
+$$
\begin{array}{l} A _ {1} = \mathbb {E} _ {y, t | x} [ c _ {0} (g (x), z) ] 1 _ {r (x) = 0} + \sum_ {j = 1} ^ {J} \mathbb {E} _ {y, t | x} [ c _ {j} (m _ {j} (x), z) ] 1 _ {r (x) = j} - v (m (x), z) \\ = \mathbb {E} _ {y, t | x} [ c _ {0} (g (x), z) ] 1 _ {r (x) = 0} + \sum_ {j = 1} ^ {J} \mathbb {E} _ {y, t | x} [ c _ {j} (m _ {j} (x), z) ] 1 _ {r (x) = j} \\ - \inf _ {r \in \mathcal {R}} \left[ \mathbb {E} _ {y, t | x} [ c _ {0} (g (x), z) ] 1 _ {r (x) = 0} + \sum_ {j = 1} ^ {J} \mathbb {E} _ {y, t | x} [ c _ {j} (m _ {j} (x), z) ] 1 _ {r (x) = j} \right] \\ = \sum_ {j = 1} ^ {J} \bar {c} _ {j} (m _ {j} (x), z) 1 _ {r (x) \neq 0} + \sum_ {j = 1} ^ {J} \left(\bar {c} _ {0} (g (x), z) + \sum_ {j ^ {\prime} \neq j} ^ {J} \bar {c} _ {j ^ {\prime}} \left(m _ {j ^ {\prime}} (x), z\right)\right) 1 _ {r (x) \neq j} \\ - \inf _ {r \in \mathcal {R}} \left[ \sum_ {j = 1} ^ {J} \bar {c} _ {j} \left(m _ {j} (x), z\right) 1 _ {r (x) \neq 0} + \sum_ {j = 1} ^ {J} \left(\bar {c} _ {0} \left(g (x), z\right) + \sum_ {j ^ {\prime} \neq j} ^ {J} \bar {c} _ {j ^ {\prime}} \left(m _ {j ^ {\prime}} (x), z\right)\right) 1 _ {r (x) \neq j} \right] \\ \end{array}
+$$
+
+Then, applying a change of variables to introduce $\| \overline{\tau} \|_1$ , we get:
+
+$$
+\begin{array}{l} \| \overline {{\boldsymbol {\tau}}} \| _ {1} p _ {0} 1 _ {r (x) \neq 0} + \| \overline {{\boldsymbol {\tau}}} \| _ {1} \sum_ {j = 1} ^ {J} p _ {j} 1 _ {r (x) \neq j} - \inf _ {r \in \mathcal {R}} [ \| \overline {{\boldsymbol {\tau}}} \| _ {1} p _ {0} 1 _ {r (x) \neq 0} + \| \overline {{\boldsymbol {\tau}}} \| _ {1} \sum_ {j = 1} ^ {J} p _ {j} 1 _ {r (x) \neq j} ] \\ = \| \overline {{\boldsymbol {\tau}}} \| _ {1} \sum_ {j = 0} ^ {J} p _ {j} 1 _ {r (x) \neq j} - \inf _ {r \in \mathcal {R}} \| \overline {{\boldsymbol {\tau}}} \| _ {1} \sum_ {j = 0} ^ {J} p _ {j} 1 _ {r (x) \neq j} \\ \end{array}
+$$
+
+We now apply Lemma C.1 to introduce $\Gamma$ ,
+
+$$
+\begin{array}{l} \sum_ {j = 0} ^ {J} p _ {j} 1 _ {r (x) \neq j} - \inf _ {r \in \mathcal {R}} \sum_ {j = 0} ^ {J} p _ {j} 1 _ {r (x) \neq j} \leq \Gamma \Big (\sum_ {j = 0} ^ {J} p _ {j} \Phi_ {0 1} ^ {\nu} (r, x, j) - \inf _ {r \in \mathcal {R}} \sum_ {j = 0} ^ {J} p _ {j} \Phi_ {0 1} ^ {\nu} (r, x, j) \Big) \\ \frac {1}{\left\| \overline {{\boldsymbol {\tau}}} \right\| _ {1}} \left[ \sum_ {j = 0} ^ {J} \bar {\tau} _ {j} 1 _ {r (x) \neq j} - \inf _ {r \in \mathcal {R}} \sum_ {j = 0} ^ {J} \bar {\tau} _ {j} 1 _ {r (x) \neq j} \right] \leq \Gamma \left(\frac {1}{\left\| \overline {{\boldsymbol {\tau}}} \right\| _ {1}} \left[ \sum_ {j = 0} ^ {J} \bar {\tau} _ {j} \Phi_ {0 1} ^ {\nu} (r, x, j) - \inf _ {r \in \mathcal {R}} \sum_ {j = 0} ^ {J} \bar {\tau} _ {j} \Phi_ {0 1} ^ {\nu} (r, x, j) \right]\right) \tag {20} \\ \Delta \mathcal {C} _ {\ell_ {\mathrm {d e f}}} \leq \| \overline {{\boldsymbol {\tau}}} \| _ {1} \Gamma \left(\frac {\Delta \mathcal {C} _ {\Phi_ {\mathrm {d e f}}}}{\| \overline {{\boldsymbol {\tau}}} \| _ {1}}\right) \\ \end{array}
+$$
+
+We reintroduce the coefficient $A_{2}$ such that:
+
+$$
\begin{array}{l} \Delta \mathcal {C} _ {\ell_ {\mathrm {d e f}}} \leq \| \overline {{\boldsymbol {\tau}}} \| _ {1} \Gamma \left(\frac {\Delta \mathcal {C} _ {\Phi_ {\mathrm {d e f}}}}{\| \overline {{\boldsymbol {\tau}}} \| _ {1}}\right) + A _ {2} \\ \Delta \mathcal {C} _ {\ell_ {\mathrm {d e f}}} \leq \| \overline {{\boldsymbol {\tau}}} \| _ {1} \Gamma \left(\frac {\Delta \mathcal {C} _ {\Phi_ {\mathrm {d e f}}}}{\| \overline {{\boldsymbol {\tau}}} \| _ {1}}\right) + \mathbb {E} _ {y, t | x} [ c _ {0} (g (x), z) ] - \inf _ {g \in \mathcal {G}} \mathbb {E} _ {y, t | x} [ c _ {0} (g (x), z) ] \quad (\text {upper bounding with Eq. 16}) \\ \end{array}
+$$
+
Mao et al. (2023b) introduced a tight bound for the comp-sum surrogate family. For $\nu \geq 0$, the transformation $\Gamma^{\nu}$ is obtained as the inverse $\Gamma^{\nu}(u) = (\mathcal{T}^{\nu})^{-1}(u)$ of:
+
+$$
+\mathcal {T} ^ {\nu} (v) = \left\{ \begin{array}{l l} \frac {2 ^ {1 - \nu}}{1 - \nu} \left[ 1 - \left(\frac {(1 + v) ^ {\frac {2 - \nu}{2}} + (1 - v) ^ {\frac {2 - \nu}{2}}}{2}\right) ^ {2 - \nu} \right] & \nu \in [ 0, 1) \\ \frac {1 + v}{2} \log [ 1 + v ] + \frac {1 - v}{2} \log [ 1 - v ] & \nu = 1 \\ \frac {1}{(\nu - 1) n ^ {\nu - 1}} \left[ \left(\frac {(1 + v) ^ {\frac {2 - \nu}{2}} + (1 - v) ^ {\frac {2 - \nu}{2}}}{2}\right) ^ {2 - \nu} - 1 \right] & \nu \in (1, 2) \\ \frac {1}{(\nu - 1) n ^ {\nu - 1}} v & \nu \in [ 2, + \infty). \end{array} \right.
+$$
+
We write $\overline{\Gamma}^{\nu}(u) = \| \overline{\tau}\|_{1}\Gamma^{\nu}(\frac{u}{\|\overline{\tau}\|_{1}})$ . Applying Jensen's inequality and taking expectations on both sides, we get
+
+$$
+\begin{array}{l} \mathcal {E} _ {\ell_ {\mathrm {d e f}}} (g, r) - \mathcal {E} _ {\ell_ {\mathrm {d e f}}} ^ {B} (\mathcal {G}, \mathcal {R}) + \mathcal {U} _ {\ell_ {\mathrm {d e f}}} (\mathcal {G}, \mathcal {R}) \\ \leq \overline {{\Gamma}} ^ {\nu} \left(\mathcal {E} _ {\Phi_ {\mathrm {d e f}}} (r) - \mathcal {E} _ {\Phi_ {\mathrm {d e f}}} ^ {*} (\mathcal {R}) + \mathcal {U} _ {\Phi_ {\mathrm {d e f}}} (\mathcal {R})\right) + \mathcal {E} _ {c _ {0}} (g) - \mathcal {E} _ {c _ {0}} ^ {B} (\mathcal {G}) + \mathcal {U} _ {c _ {0}} (\mathcal {G}) \\ \end{array}
+$$
+
+# D. Proof Theorem 4.6
+
+Theorem 4.6 (Characterization of Minimizability Gaps). Assume $\mathcal{R}$ is symmetric and complete. Then, for the cross-entropy multiclass surrogates $\Phi_{01}^{\nu}$ and any distribution $\mathcal{D}$ , the following holds for $\nu \geq 0$ :
+
+$$
\mathcal {C} _ {\Phi_ {\mathrm {d e f}}} ^ {\nu , *} (\mathcal {R}, x) = \left\{ \begin{array}{l l} \| \overline {{\boldsymbol {\tau}}} \| _ {1} H \left(\frac {\overline {{\boldsymbol {\tau}}}}{\| \overline {{\boldsymbol {\tau}}} \| _ {1}}\right) & \nu = 1 \\ \| \overline {{\boldsymbol {\tau}}} \| _ {1} - \| \overline {{\boldsymbol {\tau}}} \| _ {\infty} & \nu = 2 \\ \frac {1}{\nu - 1} \left[ \| \overline {{\boldsymbol {\tau}}} \| _ {1} - \| \overline {{\boldsymbol {\tau}}} \| _ {\frac {1}{2 - \nu}} \right] & \nu \in (1, 2) \\ \frac {1}{1 - \nu} \left[ \left(\sum_ {k = 0} ^ {J} \overline {{\boldsymbol {\tau}}} _ {k} ^ {\frac {1}{2 - \nu}}\right) ^ {2 - \nu} - \| \overline {{\boldsymbol {\tau}}} \| _ {1} \right] & \nu > 2, \end{array} \right.
+$$
+
+where $\overline{\tau} = \{\mathbb{E}_{y,t|x}[\overline{\tau}_0],\dots ,\mathbb{E}_{y,t|x}[\overline{\tau}_J]\}$ , the aggregated costs are $\tau_{j} = \sum_{k = 0}^{J}c_{k}1_{k\neq j}$ , and $H$ denotes the Shannon entropy. The minimizability gap is defined as $\mathcal{U}_{\Phi_{def}^{\nu}}(\mathcal{R}) = \mathcal{E}_{\Phi_{def}^{\nu}}^{*}(\mathcal{R}) - \mathbb{E}_{x}\left[\mathcal{C}_{\Phi_{def}^{\nu}}^{\nu,*}(\mathcal{R},x)\right]$ .
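As a small numerical illustration (a sketch with arbitrary cost values, not part of the theorem), note that $\tau_j = \sum_{k\neq j} c_k$ is simply the total cost minus $c_j$, and normalizing $\overline{\tau}$ by its $\ell_1$ norm yields a valid probability distribution:

```python
import numpy as np

# Hypothetical expected costs: c[0] for the model, c[1:] for the experts.
c = np.array([0.4, 0.1, 0.3])

# Aggregated costs: tau_j = sum_{k != j} c_k = sum(c) - c_j.
tau = c.sum() - c

# Normalizing by the l1 norm yields a valid probability distribution.
p = tau / np.linalg.norm(tau, 1)
```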
+
Proof. We define the softmax distribution as $s_j = \frac{e^{r(x,j)}}{\sum_{j' \in \mathcal{A}} e^{r(x,j')}}$ , where $s_j \in [0,1]$ . Let $\tau_j = \tau_j(g(x), m(x), z)$ with $\tau_j \in \mathbb{R}^+$ , and denote its expectation by $\overline{\tau}_j = \mathbb{E}_{y,t|x}[\tau_j]$ . We now derive the conditional risk for a given $\nu \geq 0$ :
+
+$$
+\begin{array}{l} \mathcal {C} _ {\Phi_ {\mathrm {d e f}}} ^ {\nu} (r, x) = \sum_ {j = 0} ^ {J} \mathbb {E} _ {y, t | x} [ \tau_ {j} ] \Phi_ {0 1} ^ {\nu} (r, x, j) \\ = \left\{ \begin{array}{l l} \frac {1}{1 - \nu} \sum_ {j = 0} ^ {J} \bar {\tau} _ {j} \left[ \left(\sum_ {j ^ {\prime} \in \mathcal {A}} e ^ {r (x, j ^ {\prime}) - r (x, j)}\right) ^ {1 - \nu} - 1 \right] & \nu \neq 1 \\ \sum_ {j = 0} ^ {J} \bar {\tau} _ {j} \log \left(\sum_ {j ^ {\prime} \in \mathcal {A}} e ^ {r (x, j ^ {\prime}) - r (x, j)}\right) & \nu = 1 \end{array} \right. \tag {21} \\ = \left\{ \begin{array}{l l} \frac {1}{1 - \nu} \sum_ {j = 0} ^ {J} \overline {{\tau}} _ {j} \Big [ s _ {j} ^ {\nu - 1} - 1 \Big ] & \nu \neq 1 \\ - \sum_ {j = 0} ^ {J} \overline {{\tau}} _ {j} \log (s _ {j}) & \nu = 1 \end{array} \right. \\ \end{array}
+$$
+
+For $\nu = 1$ : we can write the following conditional risk:
+
+$$
+\mathcal {C} _ {\Phi_ {\mathrm {d e f}}} ^ {\nu = 1} (r, x) = - \sum_ {j = 0} ^ {J} \bar {\tau} _ {j} \left[ r (x, j) - \log \sum_ {j ^ {\prime} \in \mathcal {A}} e ^ {r (x, j ^ {\prime})} \right] \tag {22}
+$$
+
+Then,
+
+$$
+\frac {\partial \mathcal {C} _ {\Phi_ {\mathrm {d e f}}} ^ {\nu = 1}}{\partial r (x , i)} (r, x) = - \bar {\tau} _ {i} + \left(\sum_ {j = 0} ^ {J} \bar {\tau} _ {j}\right) s _ {i} ^ {*} \tag {23}
+$$
+
+At the optimum, we have:
+
+$$
s ^ {*} (x, i) = \frac {\bar {\tau} _ {i}}{\sum_ {j = 0} ^ {J} \bar {\tau} _ {j}} \tag {24}
+$$
+
+Then, it follows:
+
+$$
\mathcal {C} _ {\Phi_ {\mathrm {d e f}}} ^ {*, \nu = 1} (\mathcal {R}, x) = - \sum_ {j = 0} ^ {J} \bar {\tau} _ {j} \log \left(\frac {\bar {\tau} _ {j}}{\sum_ {j ^ {\prime} = 0} ^ {J} \bar {\tau} _ {j ^ {\prime}}}\right) \tag {25}
+$$
+
+As the softmax parametrization is a distribution $s^* \in \Delta^{|\mathcal{A}|}$ , we can write this conditional in terms of entropy with $\overline{\tau} = \{\overline{\tau}_j\}_{j \in \mathcal{A}}$ :
+
+$$
\begin{array}{l} \mathcal {C} _ {\Phi_ {\mathrm {d e f}}} ^ {*, \nu = 1} (\mathcal {R}, x) = - \Big (\sum_ {k = 0} ^ {J} \overline {{\tau}} _ {k} \Big) \sum_ {j = 0} ^ {J} s _ {j} ^ {*} \log (s _ {j} ^ {*}) \\ = \left(\sum_ {k = 0} ^ {J} \bar {\tau} _ {k}\right) H \left(\frac {\bar {\tau}}{\sum_ {j ^ {\prime} = 0} ^ {J} \bar {\tau} _ {j ^ {\prime}}}\right) \tag {26} \\ = \| \overline {{\boldsymbol {\tau}}} \| _ {1} H \left(\frac {\overline {{\boldsymbol {\tau}}}}{\| \overline {{\boldsymbol {\tau}}} \| _ {1}}\right) \quad (\text {as } \tau_ {j} \in \mathbb {R} ^ {+}) \\ \end{array}
+$$
+
For $\nu \neq 1,2$ : the softmax parametrization imposes the constraints $\sum_{j=0}^{J} s_j = 1$ and $s_j \geq 0$ . Consider the objective
+
+$$
+\Phi (\mathbf {s}) = \frac {1}{1 - \nu} \sum_ {j = 0} ^ {J} \bar {\tau} _ {j} \left[ s _ {j} ^ {\nu - 1} - 1 \right]. \tag {27}
+$$
+
+We aim to find $\mathbf{s}^{*} = (s_{0}^{*},\dots ,s_{J}^{*})$ that minimizes (27) subject to $\sum_{j = 0}^{J}s_{j} = 1$ . Introduce a Lagrange multiplier $\lambda$ for the normalization $\sum_{j = 0}^{J}s_{j} = 1$ . The Lagrangian is:
+
+$$
+\mathcal {L} (\mathbf {s}, \lambda) = \frac {1}{1 - \nu} \sum_ {j = 0} ^ {J} \bar {\tau} _ {j} \left[ s _ {j} ^ {\nu - 1} - 1 \right] + \lambda \left(1 - \sum_ {j = 0} ^ {J} s _ {j}\right). \tag {28}
+$$
+
+We take partial derivatives with respect to $s_i$ :
+
+$$
+\frac {\partial \mathcal {L}}{\partial s _ {i}} = \frac {1}{1 - \nu} \bar {\tau} _ {i} (\nu - 1) s _ {i} ^ {\nu - 2} - \lambda = 0. \tag {29}
+$$
+
+Since $\frac{\nu - 1}{1 - \nu} = -1$ , we get
+
+$$
\bar {\tau} _ {i} s _ {i} ^ {\nu - 2} = - \lambda > 0 \Rightarrow s _ {i} ^ {\nu - 2} = \frac {\alpha}{\bar {\tau} _ {i}} \quad \text {for some } \alpha > 0. \tag {30}
+$$
+
+Hence
+
+$$
s _ {i} = \left(\frac {\alpha}{\bar {\tau} _ {i}}\right) ^ {\frac {1}{\nu - 2}}. \tag {31}
+$$
+
+Summing $s_i$ over $\{i = 0, \dots, J\}$ and setting the total to 1 yields:
+
+$$
+\sum_ {i = 0} ^ {J} \left(\frac {\alpha}{\bar {\tau} _ {i}}\right) ^ {\frac {1}{\nu - 2}} = 1. \tag {32}
+$$
+
+Let
+
+$$
+\alpha^ {\frac {1}{\nu - 2}} = \frac {1}{\sum_ {k = 0} ^ {J} \left(\frac {1}{\bar {\tau} _ {k}}\right) ^ {\frac {1}{\nu - 2}}} \Rightarrow \alpha = \left[ \sum_ {k = 0} ^ {J} \left(\frac {1}{\bar {\tau} _ {k}}\right) ^ {\frac {1}{\nu - 2}} \right] ^ {\nu - 2}. \tag {33}
+$$
+
+Therefore, for each $i$
+
+$$
+s _ {i} ^ {*} = \left(\frac {\alpha}{\bar {\tau} _ {i}}\right) ^ {\frac {1}{\nu - 2}} = \frac {\bar {\tau} _ {i} ^ {\frac {1}{2 - \nu}}}{\sum_ {k = 0} ^ {J} \bar {\tau} _ {k} ^ {\frac {1}{2 - \nu}}}. \tag {34}
+$$
+
+This $\{s_i^*\}$ is a valid probability distribution. Let
+
+$$
A = \sum_ {k = 0} ^ {J} \bar {\tau} _ {k} ^ {\frac {1}{2 - \nu}}. \tag {35}
+$$
+
+Then the optimum distribution is
+
+$$
+s _ {i} ^ {*} = \frac {\bar {\tau} _ {i} ^ {\frac {1}{2 - \nu}}}{A}. \tag {36}
+$$
+
+Recall
+
+$$
+\Phi (\mathbf {s}) = \frac {1}{1 - \nu} \sum_ {j = 0} ^ {J} \bar {\tau} _ {j} \left[ s _ {j} ^ {\nu - 1} - 1 \right]. \tag {37}
+$$
+
+At $s_j^*$ we have
+
+$$
\left(s _ {j} ^ {*}\right) ^ {\nu - 1} = \left(\frac {\bar {\tau} _ {j} ^ {\frac {1}{2 - \nu}}}{A}\right) ^ {\nu - 1} = \frac {\bar {\tau} _ {j} ^ {\frac {\nu - 1}{2 - \nu}}}{A ^ {\nu - 1}}. \tag {38}
+$$
+
+Hence
+
+$$
+\sum_ {j = 0} ^ {J} \bar {\tau} _ {j} \left(s _ {j} ^ {*}\right) ^ {\nu - 1} = \frac {1}{A ^ {\nu - 1}} \sum_ {j = 0} ^ {J} \bar {\tau} _ {j} ^ {1 + \frac {\nu - 1}{2 - \nu}} = \frac {1}{A ^ {\nu - 1}} \sum_ {j = 0} ^ {J} \bar {\tau} _ {j} ^ {\frac {1}{2 - \nu}} = \frac {A}{A ^ {\nu - 1}} = A ^ {2 - \nu}. \tag {39}
+$$
+
+Substituting back,
+
+$$
+\mathcal {C} _ {\Phi_ {\mathrm {d e f}}} ^ {*, \nu \neq 1, 2} (\mathcal {R}, x) = \frac {1}{1 - \nu} \left[ \left(\sum_ {k = 0} ^ {J} \bar {\tau} _ {k} ^ {\frac {1}{2 - \nu}}\right) ^ {2 - \nu} - \sum_ {j = 0} ^ {J} \bar {\tau} _ {j} \right] \tag {40}
+$$
+
Since $\frac{1}{2 - \nu} \geq 1$ for $\nu \in (1, 2)$ , this conditional risk can be expressed through a valid $L^{\frac{1}{2 - \nu}}$ norm:
+
+$$
+\mathcal {C} _ {\Phi_ {\mathrm {d e f}}} ^ {*, \nu \neq 1, 2} (\mathcal {R}, x) = \frac {1}{\nu - 1} \left[ \| \overline {{\boldsymbol {\tau}}} \| _ {1} - \| \overline {{\boldsymbol {\tau}}} \| _ {\frac {1}{2 - \nu}} \right] \tag {41}
+$$
+
For $\nu = 2$ : writing $S = \sum_{j=0}^{J} \overline{\tau}_j$ , we have
+
+$$
+\mathcal {C} _ {\Phi_ {\mathrm {d e f}}} ^ {\nu = 2} (r, x) = \sum_ {j = 0} ^ {J} \bar {\tau} _ {j} [ 1 - s _ {j} (r) ] = \sum_ {j = 0} ^ {J} \bar {\tau} _ {j} - \sum_ {j = 0} ^ {J} \bar {\tau} _ {j} s _ {j} (r). \tag {42}
+$$
+
+Hence
+
+$$
+\inf _ {r \in \mathcal {R}} \mathcal {C} _ {\Phi_ {\mathrm {d e f}}} ^ {\nu = 2} (r, x) = S - \sup _ {r \in \mathcal {R}} \sum_ {j = 0} ^ {J} \bar {\tau} _ {j} s _ {j} (r). \tag {43}
+$$
+
+Therefore, minimizing $\mathcal{C}_{\Phi_{\mathrm{def}}}^{\nu = 2}(r,x)$ is equivalent to maximizing
+
+$$
+F (r) = \sum_ {j = 0} ^ {J} \bar {\tau} _ {j} s _ {j} (r). \tag {44}
+$$
+
+Its partial derivative w.r.t. $r_i$ is the standard softmax derivative:
+
+$$
\frac {\partial s _ {j}}{\partial r _ {i}} = s _ {j} \left(\delta_ {i j} - s _ {i}\right) = \left\{ \begin{array}{l l} s _ {i} (1 - s _ {i}), & \text {if } i = j, \\ - s _ {j} s _ {i}, & \text {otherwise.} \end{array} \right. \tag {45}
+$$
+
+Hence, for each $i$
+
+$$
+\frac {\partial F}{\partial r _ {i}} = \sum_ {j = 0} ^ {J} \bar {\tau} _ {j} \frac {\partial s _ {j}}{\partial r _ {i}} = \bar {\tau} _ {i} s _ {i} (1 - s _ {i}) + \sum_ {\substack {j = 0 \\ j \neq i}} ^ {J} \bar {\tau} _ {j} (- s _ {j} s _ {i}). \tag{46}
+$$
+
+Factor out $s_i$ :
+
+$$
+\frac {\partial F}{\partial r _ {i}} = s _ {i} \left[ \bar {\tau} _ {i} (1 - s _ {i}) - \sum_ {j \neq i} \bar {\tau} _ {j} s _ {j} \right] = s _ {i} \left[ \bar {\tau} _ {i} - \left(\sum_ {j = 0} ^ {J} \bar {\tau} _ {j} s _ {j}\right) \right], \tag {47}
+$$
+
because $\sum_{j\neq i}\overline{\tau}_j s_j = \sum_{j = 0}^{J}\overline{\tau}_j s_j - \overline{\tau}_i s_i$ . Recalling $F(r) = \sum_{j = 0}^{J}\overline{\tau}_{j}s_{j}(r)$ , we obtain:
+
+$$
+\frac {\partial F}{\partial r _ {i}} = s _ {i} \left[ \bar {\tau} _ {i} - F (r) \right]. \tag {48}
+$$
+
+Setting $\frac{\partial F}{\partial r_i} = 0$ for each $i$ implies
+
+$$
+s _ {i} \left[ \bar {\tau} _ {i} - F (r) \right] = 0, \quad \forall i. \tag {49}
+$$
+
+Thus, for each index $i$ :
+
+$$
s _ {i} = 0 \quad \text {or} \quad \bar {\tau} _ {i} = F (r). \tag {50}
+$$
+
+To maximize $F(r)$ , notice that:
+
+- If $\overline{\tau}_{i^*}$ is strictly the largest among all $\overline{\tau}_i$ , then the maximum is approached by making $s_{i^*} \approx 1$ , so $F(r) \approx \overline{\tau}_{i^*}$ . In the softmax parameterization, this occurs in the limit $r_{i^*} \to +\infty$ and $r_k \to -\infty$ for $k \neq i^*$ .
+- If there is a tie for the largest $\overline{\tau}_i$ , we can put mass on those coordinates that share the maximum value. In any case, the supremum is $\max_i \overline{\tau}_i$ .
+
+Hence
+
+$$
+\sup _ {r \in \mathcal {R}} F (r) = \max _ {0 \leq i \leq J} \bar {\tau} _ {i}. \tag {51}
+$$
+
+Because $\mathcal{C}_{\Phi_{\mathrm{def}}}^{\nu = 2}(r,x) = S - F(r)$
+
+$$
+\inf _ {r \in \mathcal {R}} \mathcal {C} _ {\Phi_ {\mathrm {d e f}}} ^ {\nu = 2} (r, x) = S - \sup _ {r \in \mathcal {R}} F (r) = \sum_ {j = 0} ^ {J} \bar {\tau} _ {j} - \max _ {i \in \mathcal {A}} \bar {\tau} _ {i} = \| \bar {\boldsymbol {\tau}} \| _ {1} - \| \bar {\boldsymbol {\tau}} \| _ {\infty} \tag {52}
+$$
+
Hence the global minimum of $\mathcal{C}_{\Phi_{\mathrm{def}}}^{\nu = 2}$ is $\| \overline{\tau}\| _1 - \| \overline{\tau}\|_\infty$ . In the softmax parameterization, this value is only approached in the limit as one coordinate $r_{i^{*}}$ goes to $+\infty$ and all others go to $-\infty$ . No finite $r$ yields an exactly one-hot $s_i(r) = 1$ , but the infimum can still be approached arbitrarily closely in this limit.
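This limiting behavior can be observed numerically (a sketch with arbitrary $\overline{\tau}$; a large finite logit stands in for the $+\infty$ limit):

```python
import numpy as np

tau = np.array([0.5, 1.2, 0.8, 0.3])      # arbitrary nonnegative costs

def softmax(r):
    e = np.exp(r - r.max())
    return e / e.sum()

def risk_nu2(r):
    # C^{nu=2}(r, x) = sum_j tau_j * (1 - s_j(r))
    return (tau * (1 - softmax(r))).sum()

target = tau.sum() - tau.max()            # ||tau||_1 - ||tau||_inf
i_star = tau.argmax()

# Scale the logits toward one-hot on the largest-cost coordinate:
# the risk approaches the infimum from above without attaining it.
gaps = []
for t in [1.0, 10.0, 50.0]:
    r = np.full(len(tau), -t)
    r[i_star] = t
    gaps.append(risk_nu2(r) - target)

assert gaps[0] > gaps[1] > 0              # strictly above the infimum
assert abs(gaps[2]) < 1e-10               # arbitrarily close in the limit
```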
+
+It follows for $\overline{\tau} = \{\overline{\tau}_j\}_{j\in \mathcal{A}}$ and $\nu \geq 0$
+
+$$
\inf _ {r \in \mathcal {R}} \mathcal {C} _ {\Phi_ {\mathrm {d e f}}} ^ {\nu} (r, x) = \left\{ \begin{array}{l l} \| \overline {{\boldsymbol {\tau}}} \| _ {1} H \left(\frac {\overline {{\boldsymbol {\tau}}}}{\| \overline {{\boldsymbol {\tau}}} \| _ {1}}\right) & \nu = 1 \\ \| \overline {{\boldsymbol {\tau}}} \| _ {1} - \| \overline {{\boldsymbol {\tau}}} \| _ {\infty} & \nu = 2 \\ \frac {1}{\nu - 1} \left[ \| \overline {{\boldsymbol {\tau}}} \| _ {1} - \| \overline {{\boldsymbol {\tau}}} \| _ {\frac {1}{2 - \nu}} \right] & \nu \in (1, 2) \\ \frac {1}{1 - \nu} \left[ \left(\sum_ {k = 0} ^ {J} \overline {{\tau}} _ {k} ^ {\frac {1}{2 - \nu}}\right) ^ {2 - \nu} - \| \overline {{\boldsymbol {\tau}}} \| _ {1} \right] & \text {otherwise} \end{array} \right. \tag {53}
+$$
+
+Building on this, we can infer the minimizability gap:
+
+$$
+\mathcal {U} _ {\Phi_ {\mathrm {d e f}}} (\mathcal {R}) = \mathcal {E} _ {\Phi_ {\mathrm {d e f}}} ^ {*} (\mathcal {R}) - \mathbb {E} _ {x} \left[ \inf _ {r \in \mathcal {R}} \mathcal {C} _ {\Phi_ {\mathrm {d e f}}} ^ {\nu} (r, x) \right] \tag {54}
+$$
+
+# E. Proof Lemma 4.12
+
+Lemma 4.12. Let $\mathcal{L}_1$ be a family of functions mapping $\mathcal{X}$ to $[0,1]$ , and let $\mathcal{L}_2$ be a family of functions mapping $\mathcal{X}$ to $\{0,1\}$ . Define $\mathcal{L} = \{l_1l_2 : l_1 \in \mathcal{L}_1, l_2 \in \mathcal{L}_2\}$ . Then, the empirical Rademacher complexity of $\mathcal{L}$ for any sample $S$ of size $K$ is bounded by:
+
+$$
+\widehat {\mathfrak {R}} _ {S} (\mathcal {L}) \leq \widehat {\mathfrak {R}} _ {S} (\mathcal {L} _ {1}) + \widehat {\mathfrak {R}} _ {S} (\mathcal {L} _ {2}).
+$$
+
+Proof. We define the function $\psi$ as follows:
+
+$$
+\psi : \begin{array}{c c c} \mathcal {L} _ {1} + \mathcal {L} _ {2} & \longrightarrow & \mathcal {L} _ {1} \mathcal {L} _ {2} \\ l _ {1} + l _ {2} & \longmapsto & (l _ {1} + l _ {2} - 1) _ {+} \end{array} \tag {55}
+$$
+
Here, $l_{1} \in \mathcal{L}_{1}$ and $l_{2} \in \mathcal{L}_{2}$ . Note that for $l_1 \in [0,1]$ and $l_2 \in \{0,1\}$ we indeed have $(l_1 + l_2 - 1)_+ = l_1 l_2$ , so $\psi$ maps onto $\mathcal{L}_1 \mathcal{L}_2$ . The function $\psi$ is 1-Lipschitz, since $t \mapsto (t - 1)_+$ is 1-Lipschitz in $t = l_{1} + l_{2}$ . Given that $\psi$ is surjective and 1-Lipschitz, Talagrand's lemma (Mohri et al., 2012) yields:
+
+$$
+\hat {\Re} _ {S} \left(\psi \left(\mathcal {L} _ {1} + \mathcal {L} _ {2}\right)\right) \leq \hat {\Re} _ {S} \left(\mathcal {L} _ {1} + \mathcal {L} _ {2}\right) \leq \hat {\Re} _ {S} \left(\mathcal {L} _ {1}\right) + \hat {\Re} _ {S} \left(\mathcal {L} _ {2}\right) \tag {56}
+$$
+
This shows that the Rademacher complexity of the product class $\mathcal{L}_1 \mathcal{L}_2$ is bounded by the sum of the individual complexities.
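The inequality can also be checked numerically for small finite classes, where the empirical Rademacher complexity is computed exactly by enumerating all sign vectors (a sketch; the random classes below are purely illustrative):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
K = 8

# Small finite function classes evaluated on a fixed sample of size K:
# L1 maps into [0, 1], L2 maps into {0, 1}.
L1 = rng.random((5, K))                       # 5 functions in [0,1]^K
L2 = rng.integers(0, 2, size=(5, K))          # 5 functions in {0,1}^K
L = np.array([f * g for f in L1 for g in L2])  # product class L1 * L2

def rademacher(F):
    # Exact empirical Rademacher complexity of a finite class:
    # (1/K) * E_sigma [ sup_f sum_k sigma_k f(x_k) ]
    total = 0.0
    for sigma in itertools.product([-1, 1], repeat=K):
        total += (F @ np.array(sigma)).max()
    return total / (K * 2 ** K)

assert rademacher(L) <= rademacher(L1) + rademacher(L2) + 1e-12
```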
+
+# F. Proof Theorem 4.13
+
Theorem 4.13 (Learning bounds of the deferral loss). For any expert $M_j$ and any distribution $\mathcal{D}$ over $\mathcal{Z}$ , with probability at least $1 - \delta$ for $\delta \in [0,1/2]$ , the following bound holds at the optimum:
+
+$$
+\mathcal {E} _ {\ell_ {d e f}} (h, f, r) \leq \widehat {\mathcal {E}} _ {\ell_ {d e f}} (h, f, r) + 2 \Re_ {K} (\mathcal {L} _ {d e f}) + \sqrt {\frac {\log 1 / \delta}{2 K}},
+$$
+
+with
+
+$$
+\begin{array}{l} \mathfrak {R} _ {K} (\mathcal {L} _ {d e f}) \leq \frac {1}{2} \mathfrak {R} _ {K} (\mathcal {H}) + \mathfrak {R} _ {K} (\mathcal {F}) + \sum_ {j = 1} ^ {J} \Omega \left(m _ {j} ^ {h}, y\right) \\ + \Big (\sum_ {j = 1} ^ {J} \max \ell_ {r e g} (m _ {j} ^ {f}, t) + 2 \Big) \Re_ {K} (\mathcal {R}), \\ \end{array}
+$$
+
+with $\Omega(m_j^h, y) = \frac{1}{2} \mathcal{D}(m_j^h \neq y) \exp\left(-\frac{K}{8} \mathcal{D}(m_j^h \neq y)\right) + \mathcal{R}_{K\mathcal{D}(m_j^h \neq y)/2}(\mathcal{R})$ .
+
Proof. We are interested in bounding the Rademacher complexity of the deferral loss class evaluated at $u = (g,r) \in \mathcal{L}$ :
+
+$$
\begin{array}{l} \mathfrak {R} _ {S} (\mathcal {L}) = \frac {1}{K} \mathbb {E} _ {\sigma} [ \sup _ {g \in \mathcal {L}} \sum_ {k = 1} ^ {K} \sigma_ {k} \ell_ {\mathrm {d e f}} (g, r, x _ {k}, y _ {k}, b _ {k}, m _ {k}) ] \\ = \frac {1}{K} \mathbb {E} _ {\sigma} [ \sup _ {g \in \mathcal {L}} \sum_ {k = 1} ^ {K} \sigma_ {k} \left(\sum_ {j = 0} ^ {J} c _ {j} 1 _ {r (x _ {k}) = j}\right) ] \\ \leq \frac {1}{K} \mathbb {E} _ {\sigma} \left[ \sup _ {g \in \mathcal {L}} \sum_ {k = 1} ^ {K} \sigma_ {k} c _ {0} 1 _ {r (x _ {k}) = 0} \right] + \frac {1}{K} \sum_ {j = 1} ^ {J} \mathbb {E} _ {\sigma} \left[ \sup _ {r \in \mathcal {R}} \sum_ {k = 1} ^ {K} \sigma_ {k} c _ {j} 1 _ {r (x _ {k}) = j} \right] \quad (\text {by subadditivity of the supremum}) \\ \end{array}
+$$
+
+Let's consider $j = 0$ :
+
+$$
\begin{array}{l} \frac {1}{K} \mathbb {E} _ {\sigma} \left[ \sup _ {g \in \mathcal {L}} \sum_ {k = 1} ^ {K} \sigma_ {k} c _ {0} 1 _ {r (x _ {k}) = 0} \right] = \frac {1}{K} \mathbb {E} _ {\sigma} \left[ \sup _ {g \in \mathcal {L}} \sum_ {k = 1} ^ {K} \sigma_ {k} \left[ 1 _ {h (x _ {k}) \neq y} + \ell_ {\text {r e g}} (f (x _ {k}), b _ {k}) \right] 1 _ {r (x _ {k}) = 0} \right] \\ \leq \frac {1}{K} \mathbb {E} _ {\sigma} \left[ \sup _ {g \in \mathcal {L}} \sum_ {k = 1} ^ {K} \sigma_ {k} 1 _ {h (x _ {k}) \neq y} 1 _ {r (x _ {k}) = 0} \right] + \frac {1}{K} \mathbb {E} _ {\sigma} \left[ \sup _ {g \in \mathcal {L}} \sum_ {k = 1} ^ {K} \sigma_ {k} \ell_ {\text {r e g}} (f (x _ {k}), b _ {k}) 1 _ {r (x _ {k}) = 0} \right] \\ \leq \left[ \frac {1}{2} \Re_ {K} (\mathcal {H}) + \Re_ {K} (\mathcal {R}) \right] + \left[ \Re_ {K} (\mathcal {F}) + \Re_ {K} (\mathcal {R}) \right] \quad (\text {using Lemma 4.12}) \\ = \frac {1}{2} \Re_ {K} (\mathcal {H}) + \Re_ {K} (\mathcal {F}) + 2 \Re_ {K} (\mathcal {R}) \tag {57} \\ \end{array}
+$$
+
+Let's consider $j > 0$ :
+
+$$
\begin{array}{l} \frac {1}{K} \sum_ {j = 1} ^ {J} \mathbb {E} _ {\sigma} \left[ \sup _ {r \in \mathcal {R}} \sum_ {k = 1} ^ {K} \sigma_ {k} c _ {j} 1 _ {r \left(x _ {k}\right) = j} \right] \leq \frac {1}{K} \sum_ {j = 1} ^ {J} \mathbb {E} _ {\sigma} \left[ \sup _ {r \in \mathcal {R}} \sum_ {k = 1} ^ {K} \sigma_ {k} 1 _ {m _ {k, j} ^ {h} \neq y} 1 _ {r \left(x _ {k}\right) = j} \right] \tag {58} \\ + \frac {1}{K} \sum_ {j = 1} ^ {J} \mathbb {E} _ {\sigma} \left[ \sup _ {r \in \mathcal {R}} \sum_ {k = 1} ^ {K} \sigma_ {k} \ell_ {\operatorname {r e g}} \left(m _ {k, j} ^ {f}, b _ {k}\right) 1 _ {r \left(x _ {k}\right) = j} \right] \\ \end{array}
+$$
+
Using the learning bound for a single expert in classification (Mozannar & Sontag, 2020), we have:
+
+$$
+\frac {1}{K} \mathbb {E} _ {\sigma} \left[ \sup _ {r \in \mathcal {R}} \sum_ {k = 1} ^ {K} \sigma_ {k} 1 _ {m _ {k} ^ {h} \neq y} 1 _ {r (x _ {k}) = 1} \right] \leq \frac {\mathcal {D} \left(m ^ {h} \neq y\right)}{2} \exp \left(- \frac {K \mathcal {D} \left(m ^ {h} \neq y\right)}{8}\right) + \mathcal {R} _ {K \mathcal {D} \left(m ^ {h} \neq y\right) / 2} (\mathcal {R}) \tag {59}
+$$
+
+Applying it to our case:
+
+$$
+\frac {1}{K} \sum_ {j = 1} ^ {J} \mathbb {E} _ {\sigma} \left[ \sup _ {r \in \mathcal {R}} \sum_ {k = 1} ^ {K} \sigma_ {k} 1 _ {m _ {k, j} ^ {h} \neq y} 1 _ {r \left(x _ {k}\right) = j} \right] \leq \sum_ {j = 1} ^ {J} \left(\frac {\mathcal {D} \left(m _ {j} ^ {h} \neq y\right)}{2} \exp \left(- \frac {K \mathcal {D} \left(m _ {j} ^ {h} \neq y\right)}{8}\right) + \mathcal {R} _ {K \mathcal {D} \left(m _ {j} ^ {h} \neq y\right) / 2} (\mathcal {R})\right) \tag {60}
+$$
+
+For the last term,
+
+$$
+\frac {1}{K} \sum_ {j = 1} ^ {J} \mathbb {E} _ {\sigma} \left[ \sup _ {r \in \mathcal {R}} \sum_ {k = 1} ^ {K} \sigma_ {k} \ell_ {\text {r e g}} \left(m _ {k, j} ^ {f}, b _ {k}\right) 1 _ {r \left(x _ {k}\right) = j} \right] \leq \sum_ {j = 1} ^ {J} \left(\max \ell_ {r e g} \left(m _ {j} ^ {f}, t\right) \Re_ {K} (\mathcal {R})\right) \tag {61}
+$$
+
+Then, it leads to:
+
+$$
+\mathfrak {R} _ {K} (\mathcal {L} _ {\mathrm {d e f}}) \leq \frac {1}{2} \mathfrak {R} _ {K} (\mathcal {H}) + \mathfrak {R} _ {K} (\mathcal {F}) + \sum_ {j = 1} ^ {J} \Omega \left(m _ {j} ^ {h}, y\right) + \left(\sum_ {j = 1} ^ {J} \max \ell_ {\mathrm {r e g}} \left(m _ {j} ^ {f}, t\right) + 2\right) \mathfrak {R} _ {K} (\mathcal {R})
+$$
+
+with $\Omega(m_j^h, y) = \frac{\mathcal{D}(m_j^h \neq y)}{2} \exp\left(-\frac{K\mathcal{D}(m_j^h \neq y)}{8}\right) + \mathcal{R}_{K\mathcal{D}(m_j^h \neq y)/2}(\mathcal{R})$
+
+# G. Experiments
+
+# G.1. PascalVOC Experiment
+
+Since an image may contain multiple objects, our deferral rule is applied at the level of the entire image $x \in \mathcal{X}$ , ensuring that the approach remains consistent with real-world scenarios.
+
+
+
+Table 2. Agent accuracies on the CIFAR-100 validation set. Since the training and validation sets are pre-determined in this dataset, the agents' knowledge remains fixed throughout the evaluation.
+
| Cost $\beta_2$ | mAP (%) | Model Allocation (%) | Expert 1 Allocation (%) | Expert 2 Allocation (%) |
| --- | --- | --- | --- | --- |
| 0.01 | 52.8 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | 100.0 ± 0.0 |
| 0.05 | 52.5 ± 0.1 | 7.3 ± 0.8 | 0.0 ± 0.0 | 92.7 ± 0.3 |
| 0.1 | 49.1 ± 0.6 | 48.0 ± 0.7 | 0.0 ± 0.0 | 52.0 ± 0.2 |
| 0.15 | 44.2 ± 0.4 | 68.1 ± 0.3 | 19.7 ± 0.4 | 12.2 ± 0.1 |
| 0.2 | 42.0 ± 0.2 | 77.5 ± 0.2 | 22.5 ± 0.5 | 0.0 ± 0.0 |
| 0.3 | 40.1 ± 0.2 | 98.1 ± 0.0 | 1.9 ± 0.1 | 0.0 ± 0.0 |
| 0.5 | 39.5 ± 0.0 | 100.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 |
+
+Table 3. Detailed results across different cost values ${\beta }_{2}$ . Errors represent the standard deviation over multiple runs.
+
+# G.2. MIMIC-IV Experiments
+
+MIMIC-IV (Johnson et al., 2023) is a large collection of de-identified health-related data covering over forty thousand patients who stayed in critical care units. This dataset includes a wide variety of information, such as demographic details, vital signs, laboratory test results, medications, and procedures. For our analysis, we focus specifically on features related to procedures, which correspond to medical procedures performed during hospital visits, and diagnoses received by the patients.
+
+Using these features, we address two predictive tasks: (1) a classification task to predict whether a patient will die during their next hospital visit based on clinical information from the current visit, and (2) a regression task to estimate the length of stay for the current hospital visit based on the same clinical information.
+
+A key challenge in this task is the severe class imbalance, particularly in predicting mortality. To mitigate this issue, we sub-sample the negative mortality class, retaining a balanced dataset with $K = 5995$ samples, comprising $48.2\%$ positive mortality cases and $51.8\%$ negative mortality cases. Our model is trained on $80\%$ of this dataset, while the remaining $20\%$ is held out for validation. To ensure consistency in the results, we fixed the training and validation partitions.
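A minimal sketch of this balancing and splitting step (synthetic labels; the exact pipeline used for MIMIC-IV may differ):

```python
import numpy as np

rng = np.random.default_rng(0)          # fixed seed -> fixed partitions
y = rng.random(100_000) < 0.03          # synthetic, heavily imbalanced labels

pos = np.flatnonzero(y)
neg = np.flatnonzero(~y)

# Sub-sample negatives to reach roughly a 48.2 / 51.8 class ratio.
n_neg = int(len(pos) * 51.8 / 48.2)
neg_kept = rng.choice(neg, size=n_neg, replace=False)
idx = rng.permutation(np.concatenate([pos, neg_kept]))

# Fixed 80/20 train/validation split.
split = int(0.8 * len(idx))
train_idx, val_idx = idx[:split], idx[split:]
```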
+
|  | Model | M1 | M2 |
| --- | --- | --- | --- |
| Accuracy | 60.0 | 39.7 | 46.2 |
| Smooth L1 | 1.45 | 2.31 | 1.92 |
+
+Table 4. Performance of the agents on the MIMIC-IV dataset, evaluated in terms of accuracy and Smooth L1 loss. We fixed the training/validation set such that the agents' knowledge remains fixed throughout the evaluation.
\ No newline at end of file
diff --git a/atwostagelearningtodeferapproachformultitasklearning/images.zip b/atwostagelearningtodeferapproachformultitasklearning/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..26e3424d9d0a55b038f064454476500ad251fbb5
--- /dev/null
+++ b/atwostagelearningtodeferapproachformultitasklearning/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b5d54d76e15e36afd321ea768944647a227f74c0b182875c7fbf197a14a09a7d
+size 1108747
diff --git a/atwostagelearningtodeferapproachformultitasklearning/layout.json b/atwostagelearningtodeferapproachformultitasklearning/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..955a9ce3beac34f354112bfab63653f2f55924d4
--- /dev/null
+++ b/atwostagelearningtodeferapproachformultitasklearning/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:04e5a08f08c8e387400ff820eed0b97510a2f02120222f4511a56d104c114116
+size 1140284
diff --git a/aunifiedapproachtoroutingandcascadingforllms/27ceccd6-6f94-4dd2-a6ff-45d3823db339_content_list.json b/aunifiedapproachtoroutingandcascadingforllms/27ceccd6-6f94-4dd2-a6ff-45d3823db339_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..2b6cafa0f827fe450a7ecf788c5f8e6fe97e1f8b
--- /dev/null
+++ b/aunifiedapproachtoroutingandcascadingforllms/27ceccd6-6f94-4dd2-a6ff-45d3823db339_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d42e58864fb58468c622e69ae147c6ddb0226c62a9b9eeb46b739c9951a40a9a
+size 176292
diff --git a/aunifiedapproachtoroutingandcascadingforllms/27ceccd6-6f94-4dd2-a6ff-45d3823db339_model.json b/aunifiedapproachtoroutingandcascadingforllms/27ceccd6-6f94-4dd2-a6ff-45d3823db339_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..8a1406b883befd61facd0c85407ba99baf34bdaa
--- /dev/null
+++ b/aunifiedapproachtoroutingandcascadingforllms/27ceccd6-6f94-4dd2-a6ff-45d3823db339_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:00d5c2ba1811a791007e7da82425f2c307e1d93064d044c3d0f6d7f9516caefc
+size 215552
diff --git a/aunifiedapproachtoroutingandcascadingforllms/27ceccd6-6f94-4dd2-a6ff-45d3823db339_origin.pdf b/aunifiedapproachtoroutingandcascadingforllms/27ceccd6-6f94-4dd2-a6ff-45d3823db339_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..4eda67b54a623505ea707018214fed742eb5e40d
--- /dev/null
+++ b/aunifiedapproachtoroutingandcascadingforllms/27ceccd6-6f94-4dd2-a6ff-45d3823db339_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:03ff730297b29846e1b9700a6e8d39adebd776834b5a5f943fd0b0b89166ff8d
+size 752649
diff --git a/aunifiedapproachtoroutingandcascadingforllms/full.md b/aunifiedapproachtoroutingandcascadingforllms/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..56658969f0185dc94f0e284bc57180fe9a62cfc0
--- /dev/null
+++ b/aunifiedapproachtoroutingandcascadingforllms/full.md
@@ -0,0 +1,695 @@
+# A Unified Approach to Routing and Cascading for LLMs
+
+Jasper Dekoninck1 Maximilian Baader1 Martin Vechev1
+
+# Abstract
+
+The availability of a wide range of large language models (LLMs) embedded in various agentic systems has significantly increased the potential of model selection strategies to improve the cost-performance tradeoff. Existing strategies involve either routing, where a single model is chosen per query, or cascading, which sequentially runs increasingly larger models until a satisfactory answer is found. However, current approaches face three key limitations: they (1) lack formal proofs of optimality, (2) fail to identify the conditions under which these strategies are most effective to improve the cost-performance tradeoff, and (3) are unable to combine both paradigms for further improvements. To address these issues, we first derive a novel optimal strategy for cascading and prove the optimality of an existing routing strategy. Further, we propose cascade routing, a unified framework that integrates routing and cascading into a theoretically optimal strategy. Through our analysis, we identify good quality estimators as the critical factor for the success of model selection paradigms. Finally, in our experiments, we show that cascade routing consistently outperforms the individual approaches by a large margin and we analyze quality estimators to determine when routing and/or cascading are useful paradigms for model selection.1
+
+# 1. Introduction
+
+Large language models (LLMs) have found applications in a wide range of tasks, some of which are easily handled by small models, while others require the full capacity of state-of-the-art LLMs. This has led to the development of many fine-tuned models of various sizes that target specific tasks.
+
+$^{1}$ Department of Computer Science, ETH Zurich, Switzerland. Correspondence to: Jasper Dekoninck .
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+$^{1}$ Code available at https://github.com/eth-sri/cascade-routing
+
+To maximize performance, it is crucial to select the most suitable model for each query, accounting for both the expected quality of the model's output and the model's cost. Such model selection strategies can significantly improve performance over any individual model and can reduce inference costs by selecting a smaller model when the query does not require the full capacity of a larger model.
+
+Routing and Cascading Two primary strategies have been proposed to solve model selection. The first, routing, directs each input query to a specific model from a set of available models (Chen et al., 2022; Liu et al., 2024), as illustrated in Fig. 1(a). This approach is particularly effective when different expert LLMs are needed for diverse tasks, enabling the selection of the most suitable expert for each query. The second strategy, cascading, processes an input query through a sequence of increasingly larger models, stopping when a model produces an answer deemed sufficiently good (Chen et al., 2023; Varshney & Baral, 2022), as illustrated in Fig. 1(b). Cascading is particularly valuable for handling queries of varying difficulty, as it allows simpler queries to be addressed by smaller models while reserving more complex queries for larger models.
+
+Restrictive Conditions Despite their utility, both routing and cascading impose significant restrictions on the model selection process. In routing, the initial selection of a model is final, preventing any reconsideration after the initial decision. In cascading, each query must sequentially pass through all models in the chain, with no option to skip a model. Therefore, a less restrictive strategy that combines the strengths of both routing and cascading could offer significant performance improvements.
+
+Lack of Deeper Understanding Further, the conditions under which current routing and cascading strategies are optimal are not well understood. For routing, an extensive proof is required just to show that current strategies are close to optimal (Chen et al., 2022), while the theoretical analysis of cascading does not provide optimality guarantees (Chen et al., 2023; Varshney & Baral, 2022). This lack of theoretical understanding hinders the development of more effective model selection strategies. Moreover, prior work fails to provide insights into the limitations of model selection strategies and cannot identify the conditions under which they are useful in practical scenarios.
+
+
+Figure 1: Overview of three model selection strategies. Routing selects a single model for a query, cascading processes queries through a sequence of models, and cascade routing generalizes both.
+
+
+
+
+
+For instance, it is widely believed that one needs models of various sizes to benefit from cascading (Chen et al., 2023; Gupta et al., 2024; Khalili et al., 2022), but we show that this notion is incorrect.
+
+This Work: Cascade Routing To address these limitations, we first derive optimal routing and cascading strategies by framing them as linear optimization problems aimed at maximizing output quality while remaining within a given cost budget. For routing, this optimal strategy is close to the one obtained by prior work, while for cascading we derive a new strategy that is provably better than existing approaches. Building on this analysis, we propose a new paradigm called cascade routing, which generalizes both routing and cascading. As illustrated in Fig. 1(c), cascade routing initially routes a query to any available model but keeps rerouting to different models until a model produces an answer of sufficient quality. We prove the optimality of cascade routing and show that it offers significantly more flexibility in processing a query.
+
+Importance of Quality Estimation Our theoretical analysis enables a more thorough investigation into the factors that influence the effectiveness of model selection strategies. Specifically, we find that accurate estimates of model performance and response quality are most important. For routing, reliable ex-ante quality estimation—the ability to predict whether a model will perform well on a given query—is essential. For cascading, robust post-hoc quality estimation—the ability to evaluate the quality of a model's response after generation—is critical. Without it, the effectiveness of cascading strategies is severely limited even when models of various sizes are available.
+
+Results We evaluate cascade routing on a range of tasks, demonstrating that it significantly outperforms both routing and cascading: it improves performance by up to $8\%$ on the RouterBench benchmark (Hu et al., 2024) and by $14\%$ on the SWE-Bench benchmark (Jimenez et al., 2024). Further, we show that our new cascading strategy outperforms existing cascades in several scenarios by over $10\%$ .
+
+Key Contributions Our main contributions are:
+
+- We derive optimal strategies for routing and cascading and obtain a new cascading strategy that is provably better than prior approaches (§2, §3).
+- We introduce cascade routing, a new paradigm that combines the strengths of routing and cascading, and prove its optimality (§4).
+- We conduct a thorough evaluation, demonstrating that cascade routing consistently outperforms the baselines, highlighting the critical role of quality estimation for the effectiveness of model selection (§5).
+
+# 2. Routing as Linear Optimization
+
+We derive an optimal routing strategy to select the best model for a given query, providing detailed proofs for all statements in this section in App. A.
+
+Brief Overview In this section, we begin by defining a routing strategy as a function that maps queries to models. Next, we introduce the notation for quality and cost functions, demonstrating how the optimal routing strategy can be formulated to maximize a linear tradeoff between these two factors. Lastly, we give an illustrative example and discuss the significance of quality estimation in routing and its impact on the effectiveness of the strategy.
+
+Routing In routing, our goal is to develop a strategy that selects the best language model for a given input query. Formally, let $\mathcal{X}$ represent the distribution over all possible queries, and suppose we have $k$ language models $m_{1},\ldots ,m_{k}$ available for routing. Further, let $\Delta_k$ denote the set of all probability distributions over $k$ variables. A routing strategy can then be defined as follows:
+
+Definition 1 (Routing). A routing strategy $s$ is a function $s: \mathcal{X} \to \Delta_k$ that maps a query $x \in \mathcal{X}$ to a probability distribution over models. $s_i(x)$ denotes the probability that $m_i$ is selected for query $x$ .
+
+A routing strategy selects a model by sampling from the distribution $s(x)$ for each query $x$ . In prior work, routing strategies were restricted to be deterministic, i.e., $s_i(x) \in \{0,1\}$ (Chen et al., 2022; Hu et al., 2024). In contrast, we propose using a more general probabilistic strategy that enables a better solution and an easier theoretical analysis.
+
+Quality and Cost In routing, we seek to maximize the output quality of the selected model while adhering to a given cost budget $B$ . We define the quality function $q_{i}(x)$ as the output quality of model $m_{i}$ on query $x$ , and the cost function $c_{i}(x)$ as the cost of running model $m_{i}$ on $x$ . Quality could measure model accuracy, user preference, or any other performance indicator. Cost could measure either monetary costs or latency, depending on the use case.
+
+However, since these functions are unknown in practice, we need estimators $\hat{q}_i(x)$ and $\hat{c}_i(x)$ that approximate the output quality and cost of querying model $m_i$ on input $x$ . Estimators for $q_{i}(x)$ can be created using small classifiers trained to predict model accuracy, as done in prior work (Hu et al., 2024; Shnitzer et al., 2023). $\hat{c}_i(x)$ can be estimated by tokenizing the input query and determining the average output length of the model on a query. Then, we can use API-specific costs per token to estimate the cost of running a model on a query. Alternatively, we can also use average execution time as a proxy for cost.
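As a concrete illustration, a token-based cost estimator might look as follows; the per-1k-token prices and the `avg_output_tokens` statistic are hypothetical placeholders, not values from the paper:

```python
def estimate_cost(n_input_tokens: int, avg_output_tokens: float,
                  price_in_per_1k: float, price_out_per_1k: float) -> float:
    """Sketch of the cost estimator described in the text: tokenize the
    input, assume the model's average output length, and apply API prices
    per 1k tokens. All concrete numbers below are assumptions."""
    return (n_input_tokens * price_in_per_1k
            + avg_output_tokens * price_out_per_1k) / 1000.0

# Hypothetical model: 200 input tokens, ~150 output tokens on average,
# $0.5 per 1k input tokens and $1.5 per 1k output tokens.
cost = estimate_cost(200, 150.0, 0.5, 1.5)  # 0.325
```

Average latency per query could be substituted for the dollar amounts without changing the structure of the estimator.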
+
+Optimal Routing Using these estimators, we can formally define the optimal routing strategy:
+
+Definition 2 (Optimal Routing). The optimal routing strategy $s_{\mathrm{OPT}}$ for a given cost budget $B$ is the solution to the optimization problem that maximizes the expected output quality of the selected model while adhering to the budget:
+
+$$
+\max_{s} \quad \mathbb{E}_{x \in \mathcal{X}}\left(\sum_{i=1}^{k} s_i(x)\,\hat{q}_i(x)\right) \tag{1}
+$$
+
+$$
+\text{s.t.} \quad \mathbb{E}_{x \in \mathcal{X}}\left(\sum_{i=1}^{k} s_i(x)\,\hat{c}_i(x)\right) \leqslant B.
+$$
+
+We now explain how to solve this optimization problem. For a given query $x$ , it can be shown (see App. A) that the optimal routing strategy selects the model maximizing the cost-quality tradeoff $\tau_{i}(x, \lambda) = \hat{q}_{i}(x) - \lambda \hat{c}_{i}(x)$ . Here, $\lambda \in \mathbb{R}^{+}$ is a hyperparameter that controls the balance between quality and cost based on the budget $B$ .
+
+However, it can occur that several models achieve the same optimal cost-quality tradeoff for a given query. To address this, we define two deterministic strategies $s_{\mathrm{MIN}}^{\lambda}(x)$ and $s_{\mathrm{MAX}}^{\lambda}(x)$ , which, respectively, select the cheapest and most expensive model that achieves the optimal tradeoff. The optimal routing strategy $s_{\mathrm{OPT}}$ is then determined by:
+
+Theorem 1 (Optimal Routing Strategy). For a cost budget $B$ , there exists a $\lambda \in \mathbb{R}^{+}$ and a $\gamma \in [0,1]$ such that the optimal routing strategy $s_{\mathrm{OPT}}$ equals $\gamma s_{\mathrm{MIN}}^{\lambda} + (1 - \gamma)s_{\mathrm{MAX}}^{\lambda}$ .
+
+Theorem 1, continued. Furthermore, all routing strategies that have an expected cost that is exactly equal to $B$ and can be written as a convex combination of $s_{\mathrm{MIN}}^{\lambda'}$ and $s_{\mathrm{MAX}}^{\lambda'}$ for some $\lambda' \in \mathbb{R}^{+}$ achieve the same optimal quality.
+
+In App. A, we show how to obtain the optimal $\lambda$ and $\gamma$ for a cost budget $B$ using a validation dataset $D$ . In Algorithm 1, we provide pseudocode for the optimal routing algorithm.
+
+# Algorithm 1 Optimal Routing Algorithm
+
+Input: input query $x$ , quality estimator $\hat{q}_i$ , cost estimator $\hat{c}_i$ , tradeoff parameters $\lambda$ and $\gamma$
+
+Output: Model index $i$ to be used for query $x$
+
+1: $\tau_{i}(x,\lambda)\coloneqq \hat{q}_{i}(x) - \lambda \hat{c}_{i}(x)$
+2: $\tau_{\max}(x,\lambda)\coloneqq \max_{i\in \{1,\ldots ,k\}}\tau_i(x,\lambda)$
+3: best := $\{i \in \{1, \dots, k\} \mid \tau_i(x, \lambda) = \tau_{\max}(x, \lambda)\}$
+4: min_cost_index := $\arg\min_{i \in \text{best}} \hat{c}_i(x)$
+5: max_cost_index := $\arg\max_{i \in \text{best}} \hat{c}_i(x)$
+6: if $\text{random}(0, 1) < \gamma$ then
+7: return min_cost_index
+8: else
+9: return max_cost_index
+10: end if
+
+Since $\gamma$ is often not equal to 0 or 1, $s_{\mathrm{OPT}}$ is not necessarily deterministic, i.e., there are queries $x$ such that $s_{\mathrm{OPT},i}(x) \notin \{0,1\}$ for some index $i$ . Therefore, prior work that only considered deterministic routing strategies (Chen et al., 2022; Hu et al., 2024) cannot express the optimal routing strategy and falls back to the near-optimal $s_{\mathrm{MIN}}^{\lambda}$ .
+
+Example To illustrate routing, consider a scenario with two models $m_1$ and $m_2$ . The estimated cost $\hat{c}(x) = (0.9, 1)$ is constant across queries and slightly lower for $m_1$ than for $m_2$ . The quality is estimated based on the category of the query, which is either math, code, or generic. For instance, let $\hat{q}(x_{\mathrm{math}}) = (0.8, 0.5)$ , $\hat{q}(x_{\mathrm{code}}) = (0.5, 0.8)$ , and $\hat{q}(x_{\mathrm{generic}}) = (0.8, 0.9)$ . For $\lambda = 1$ and $\gamma = 0.7$ , the cost-quality tradeoff is highest for $m_1$ , resp. $m_2$ , on math, resp. code, queries, and is equal for both models on generic queries. Thus, the router will select $m_1$ , resp. $m_2$ , for math, resp. code, queries, and select $m_1$ with a probability of 0.7 and $m_2$ with a probability of 0.3 for generic queries.
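The example above can be reproduced with a short Python sketch of Algorithm 1; a small tolerance is added when collecting tied models, which the pseudocode leaves implicit:

```python
import random

def route(q_hat, c_hat, lam, gamma, rng=random.random, tol=1e-9):
    """Optimal routing (Algorithm 1): pick the model maximizing the
    tradeoff tau_i = q_hat_i - lam * c_hat_i; among tied models, return
    the cheapest with probability gamma, else the most expensive."""
    tau = [q - lam * c for q, c in zip(q_hat, c_hat)]
    tau_max = max(tau)
    best = [i for i, t in enumerate(tau) if t >= tau_max - tol]
    min_cost_index = min(best, key=lambda i: c_hat[i])
    max_cost_index = max(best, key=lambda i: c_hat[i])
    return min_cost_index if rng() < gamma else max_cost_index

# Example from the text: c_hat = (0.9, 1), lambda = 1, gamma = 0.7.
route([0.8, 0.5], [0.9, 1.0], 1.0, 0.7)  # math query -> 0 (m1)
route([0.5, 0.8], [0.9, 1.0], 1.0, 0.7)  # code query -> 1 (m2)
```

On generic queries both models tie at $\tau = -0.1$ , so the sketch returns $m_1$ with probability 0.7, matching the example.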
+
+Ex-Ante Quality Estimation We explicitly distinguish between the model's true quality $q_{i}(x)$ and the ex-ante quality estimate $\hat{q}_i(x)$ . An optimal routing strategy will select a good model only if $q_{i}(x)\approx \hat{q}_{i}(x)$ , otherwise the objective in Eq. (1) is not appropriate. Thus, even when routing is suited for the application, the strategy will fail if the quality estimates are inaccurate. This makes quality estimation a critical component of routing strategies. While cost estimation also faces similar challenges, we found that it is less critical and can be approximated more easily.
+
+# 3. Cascading as Sequential Routing
+
+We extend our analysis of the optimal routing strategy to a cascade, providing proofs for all statements in App. B.
+
+Brief Overview In this section, we reinterpret cascading as a sequence of routing problems. Furthermore, we show how our approach improves upon the cascading strategies used in prior work. Finally, we examine the impact of post-hoc quality estimates on the effectiveness of cascading strategies.
+
+Cascading In cascading, an input query is processed sequentially through a chain of models, typically arranged in order of increasing size or cost. The cascade stops once a model's output meets a certain condition, and that output is returned. We will reinterpret cascading as a sequence of routing problems. To do so, we first define the models over which we need to route, which we refer to as supermodels.
+
+Definition 3 (Supermodel). A supermodel $M$ is a sequence of models $(m_{i_1},\ldots ,m_{i_j})$ such that running a query through $M$ is equivalent to running it through each of the models in the sequence. $\mathcal{M}$ denotes the set of all supermodels and by $M_{i:j}$ we denote the supermodel $(m_i,\dots ,m_j)$ .
+
+In cascading, we only need to consider the supermodels $M_{1:1},\ldots ,M_{1:k}$ . The full expressivity of Definition 3 will only be necessary for cascade routing in $\S 4$ .
+
+Cascading as Sequential Routing Running a cascade on a sample $x$ occurs in a sequence of steps, where at each step, the cascade determines whether to run the next model in the sequence or terminate. By step $j$ , we have obtained outputs from the first $j - 1$ models. To decide whether to continue and run $m_j$ , we need to determine, in expectation, how well the supermodels $M_{1:j-1}, \ldots, M_{1:k}$ will perform on the sample $x$ . Once again, this performance is measured as having the highest expected output quality within a certain cost budget. If $M_{1:j-1}$ offers the best performance, we terminate the cascade and return its output, i.e., the output of $m_{j-1}$ . Otherwise, if any of $M_{1:j}, \ldots, M_{1:k}$ has better performance, we continue the cascade and run $m_j$ . Therefore, at step $j$ , the cascade is equivalent to a routing strategy that selects the best supermodel from $M_{1:j-1}, \ldots, M_{1:k}$ . Thus, a cascade can be formally defined as follows:
+
+Definition 4 (Cascading Strategy). A cascading strategy $s$ is a sequence of routing strategies $(s^{(1)},\ldots ,s^{(k)})$ such that $s^{(j)}$ routes between the supermodels $M_{1:j - 1},\dots ,M_{1:k}$ .
+
+Notably, while the action associated with supermodels $M_{1:j},\ldots ,M_{1:k}$ is the same, namely continuing the cascade, it is important to consider all these supermodels. Indeed, a model $m_j$ might perform poorly while $m_{j + 1}$ performs exceptionally well on a given query. In such cases, the quality-cost tradeoff of $M_{1:j}$ will be worse than the tradeoff of $M_{1:j-1}$ , but $M_{1:j+1}$ could still provide a better outcome. Therefore, it is crucial to consider all supermodels $M_{1:j}, \ldots, M_{1:k}$ at step $j$ rather than making decisions solely based on immediate performance.
+
+Quality and Cost To apply Theorem 1 to find the optimal cascading strategy, we first need to derive the quality and cost estimates of the supermodels. Both of these can depend on the answers of previously computed models. Therefore, let $\hat{q}^{(j)}(x)$ and $\hat{c}^{(j)}(x)$ represent the updated estimates in step $j$ after computing the first $j - 1$ models.
+
+We derive the quality and cost estimates associated with supermodel $M_{1:i}$ , denoted as $\hat{q}_{1:i}^{(j)}(x)$ and $\hat{c}_{1:i}^{(j)}(x)$ , based on the quality and cost estimates of the individual models. Trivially, the cost of the supermodel is equal to the sum of the individual model costs. The quality of a supermodel, however, is governed by the best model within it. Thus, it equals $\mathbb{E}_{\hat{q}_i}[\max (\hat{q}_1(x),\dots ,\hat{q}_i(x))]$ , where the expected value reflects the uncertainty in each quality estimate. Specifically, each quality estimate $\hat{q}_i(x)$ is modeled as a random variable estimating the true quality $q_{i}(x)$ . This is crucial since ignoring uncertainty would falsely assume that the quality of a supermodel is always equal to the best model within it, even though the best model may return a poor answer, while another returns a good one. To estimate the uncertainties associated with the estimates, we compute the variance of $\hat{q}_i^{(j)}(x) - \hat{q}_i^{(k)}(x)$ over a validation dataset.
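The supermodel quality $\mathbb{E}[\max(\hat{q}_1(x),\dots,\hat{q}_i(x))]$ can be approximated by Monte Carlo sampling; the independent Gaussian uncertainty assumed below is an illustrative choice, not a distributional form prescribed by the paper:

```python
import random

def supermodel_quality(q_hat, sigmas, n_samples=20000, seed=0):
    """Monte Carlo estimate of E[max(q_hat_1, ..., q_hat_i)], modeling
    each quality estimate as an independent Gaussian around its
    predicted value (simplifying assumption for illustration)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        total += max(rng.gauss(q, s) for q, s in zip(q_hat, sigmas))
    return total / n_samples

# With zero uncertainty this reduces to the plain max (0.8 here); with
# uncertainty it exceeds the plain max, reflecting that the "worse"
# model can still return the better answer.
supermodel_quality([0.5, 0.8], [0.0, 0.0])  # -> 0.8 (up to rounding)
supermodel_quality([0.5, 0.8], [0.3, 0.3])  # > 0.8
```

The gap between the two calls is exactly the effect that ignoring uncertainty would miss.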
+
+Optimal Cascading We now leverage the optimal routing strategy from Theorem 1 to determine the optimal cascading strategy. As before, optimality is defined in terms of maximizing the expected output quality while adhering to a given cost budget. However, the budget is now only enforced over the entire cascade, and not over individual steps. This leads to a slightly different formulation:
+
+Theorem 2 (Optimal Cascading Strategy). For a given cost budget $B$ , there exist $\lambda_1, \ldots, \lambda_k \in \mathbb{R}^+$ and a $\gamma \in [0,1]$ such that the optimal cascading strategy $s_{\mathrm{OPT}} = (s_{\mathrm{OPT}}^{(1)}, \ldots, s_{\mathrm{OPT}}^{(k)})$ is given by $s_{\mathrm{OPT}}^{(j)} = \gamma s_{\mathrm{MIN}}^{(j), \lambda_j} + (1 - \gamma)s_{\mathrm{MAX}}^{(j), \lambda_j}$ where $s_{\mathrm{MIN}}^{(j), \lambda_j}$ and $s_{\mathrm{MAX}}^{(j), \lambda_j}$ are defined as in Theorem 1.
+
+In App. B, we explain how to obtain the hyperparameters $\lambda_1,\dots ,\lambda_k$ and $\gamma$ for a given cost budget $B$ using a validation dataset $D$ . In Algorithm 2, we provide pseudocode for the optimal cascading algorithm, illustrating the steps involved in selecting the best model for a given query.
+
+Example To illustrate the optimal cascading strategy, consider once again a scenario with two models $m_1, m_2$ with costs $\hat{c}^{(1)}(x) = (0.5, 1)$ and qualities $\hat{q}^{(1)}(x) = (0.5, 0.8)$ (ignoring uncertainty). In a cascade, we always run $m_1$ first for a query $x$ and thus obtain the first model's output.
+
+# Algorithm 2 Optimal Cascading Algorithm
+
+Input: input query $x$ , current step $j$ , quality estimator $\hat{q}_i^{(j)}$ , cost estimator $\hat{c}_i^{(j)}$ , tradeoff parameters $\lambda_j$ and $\gamma$
+
+Output: Whether to stop the cascade or continue with the next model $m_j$
+
+1: for $i = j - 1$ to $k$ do
+2: $\hat{q}_{1:i}^{(j)}(x) \coloneqq \mathbb{E}_{\hat{q}^{(j)}}[\max (\hat{q}_1^{(j)}(x), \dots, \hat{q}_i^{(j)}(x))]$
+3: $\hat{c}_{1:i}^{(j)}(x) := \sum_{l=1}^{i} \hat{c}_l^{(j)}(x)$
+4: end for
+5: index := Router$\big(x, \big(\hat{q}_{1:j-1}^{(j)}(x), \ldots, \hat{q}_{1:k}^{(j)}(x)\big),$
+6: $\qquad \big(\hat{c}_{1:j-1}^{(j)}(x), \ldots, \hat{c}_{1:k}^{(j)}(x)\big), \lambda_j, \gamma\big)$
+7: if index $= 1$ then
+8: return stop
+9: else
+10: return run $m_j$
+11: end if
+
+Based on this output, we can adjust the quality and cost estimates. Suppose, for instance, that the model output is very long and its confidence in its own answer is only $30\%$ . Then we update, for instance, $\hat{q}^{(2)}(x) = (0.3, 0.6)$ and $\hat{c}^{(2)}(x) = (1, 2)$ . For $\lambda_2 = 1$ , we would now stop the cascade and return the output of $m_1$ since the cost-quality tradeoff is highest for the supermodel $\{m_1\}$ . If, instead, $\lambda_2 = 0.1$ , we would run $m_2$ since the cost-quality tradeoff is highest for the supermodel $\{m_1, m_2\}$ . In this case, the cascade would return the output of $m_2$ .
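The decision in this example can be traced with a minimal sketch of one cascading step, ignoring uncertainty (so a supermodel's quality is the plain max over its models) and omitting the $\gamma$ tie-breaking:

```python
def cascade_step(j, q_hat, c_hat, lam, k):
    """Step j of the cascade: route between supermodels M_{1:j-1}, ...,
    M_{1:k} and decide whether to stop or run the next model m_j.
    i == 0 denotes the empty supermodel (quality and cost zero)."""
    best_tau, best_i = None, None
    for i in range(j - 1, k + 1):  # supermodel M_{1:i}
        q = max(q_hat[:i]) if i > 0 else 0.0
        tau = q - lam * sum(c_hat[:i])
        if best_tau is None or tau > best_tau:
            best_tau, best_i = tau, i
    return "stop" if best_i == j - 1 else "continue"

# Step 2 of the example: q_hat^(2) = (0.3, 0.6), c_hat^(2) = (1, 2).
cascade_step(2, [0.3, 0.6], [1.0, 2.0], lam=1.0, k=2)  # "stop"
cascade_step(2, [0.3, 0.6], [1.0, 2.0], lam=0.1, k=2)  # "continue"
```

With $\lambda_2 = 1$ the tradeoffs are $-0.7$ for $M_{1:1}$ versus $-2.4$ for $M_{1:2}$ , so the cascade stops; with $\lambda_2 = 0.1$ they are $0.2$ versus $0.3$ , so $m_2$ is run, matching the example.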
+
+Prior Work Prior work on cascading has often relied on strong assumptions to simplify the strategy, using a threshold strategy as an approximation of the optimal cascade. Specifically, in step $j$ , the cascade continues if $\hat{q}_{j-1}^{(j)}(x) < \tau_j$ for some threshold $\tau_j \in \mathbb{R}$ . To the best of our knowledge, all existing works can be seen as a specific instantiation of this thresholding scheme with cost and quality estimators that depend on the specific application (Chen et al., 2023; Damani et al., 2024; Gupta et al., 2024; Jitkrittum et al., 2023b; Nie et al., 2024). Below, we outline the conditions under which this simplified approach is optimal.
+
+Corollary 1 (Optimal Threshold Strategy). Under minor technical assumptions, the thresholding strategy is equivalent to our cascading strategy if and only if the following conditions hold: $\hat{c}_i^{(j)}(x)$ is independent of $x$ for all $i,j\in \{1,\ldots ,k\}$ , $\hat{q}_i^{(j)}(x)$ is independent of $x$ for all $i\geqslant j$ , and $\hat{q}_{1:i}^{(j)}(x)$ is equal to $\hat{q}_i^{(j)}(x)$ .
+
+Post-Hoc Quality Estimation Once again, this theoretical framework highlights the importance of quality estimation, with a shift in focus from ex-ante quality estimation to post-hoc quality estimation, which now plays a critical role. Cascading approaches are only advantageous when the post-hoc quality estimate provides significantly better information than the ex-ante estimate. If this improvement is minimal, it would be more effective to directly route queries to the most suitable model, bypassing the cascading process. While the post-hoc estimate is essential for refining decisions, the ex-ante quality estimate remains valuable in determining whether future models can potentially deliver better performance. Only in the threshold cascade strategy, where the ex-ante estimate is fixed, does it become irrelevant. By contrast, our approach improves upon the threshold cascade by incorporating both ex-ante and post-hoc quality estimates, thereby enabling more informed decision-making.
+
+# 4. Cascade Routing as Cascade Generalization
+
+Both routing and cascading are powerful techniques that enable the efficient use of multiple models. However, their use is often orthogonal: while routing is useful when ex-ante quality estimates are accurate, cascading is more beneficial when post-hoc estimates are accurate. We therefore present cascade routing, a generalization of both techniques. Proofs for all theorems and lemmas in this section are included in App. C.
+
+Brief Overview In this section, we first define cascade routing and explain how it generalizes both routing and cascading. We then derive the optimal cascade routing strategy and solve several of the additional challenges that arise when applying cascade routing in practice. Finally, we provide an illustrative example.
+
+Cascade Routing Cascade routing closely resembles cascading, but with one crucial difference: the routing strategy at step $j$ routes between all possible supermodels, not just the supermodels $M_{1:j-1}, \ldots, M_{1:k}$ . Therefore, both Definition 4 and Theorem 2 can be extended to this setting.
+
+Definition 5 (Cascade Routing). A cascade routing strategy $s$ is a sequence of routing strategies $(s^{(1)},\ldots ,s^{(k)})$ such that, for a given sample $x\in \mathcal{X}$ , $s^{(j)}$ routes between all supermodels in $\mathcal{M}$ that start with the $j - 1$ models that have already been computed for this query.
+
+Theorem 3 (Optimal Cascade Routing). For a given cost budget $B$ , there exist $\lambda_1, \ldots, \lambda_k \in \mathbb{R}^+$ and a $\gamma \in [0,1]$ such that the optimal cascade routing strategy $s_{\mathrm{OPT}} = (s_{\mathrm{OPT}}^{(1)}, \ldots, s_{\mathrm{OPT}}^{(k)})$ is given by $s_{\mathrm{OPT}}^{(j)} = \gamma s_{\mathrm{MIN}}^{(j), \lambda_j} + (1 - \gamma)s_{\mathrm{MAX}}^{(j), \lambda_j}$ where $s_{\mathrm{MIN}}^{(j), \lambda_j}$ and $s_{\mathrm{MAX}}^{(j), \lambda_j}$ are defined as in Theorem 1.
+
+In Algorithm 3, we provide pseudocode for the cascade routing algorithm. While cascade routing is a seemingly simple extension of cascading, it also introduces additional challenges which we address now.
+
+# Algorithm 3 Optimal Cascade Routing Algorithm
+
+Input: input query $x$ , model indices run so far $\{i_1, \dots, i_{j-1}\}$ , quality estimator $\hat{q}_i^{(j)}$ , cost estimator $\hat{c}_i^{(j)}$ , tradeoff parameters $\lambda_j$ and $\gamma$
+
+Output: Index of the next model to run
+
+1: $\mathcal{S} := \{M \subseteq \{1,\dots,k\} \mid \forall l \in \{1,\ldots,j-1\}: i_l \in M\}$
+2: for $M \in \mathcal{S}$ do
+3: $\hat{q}_M^{(j)}(x) \coloneqq \mathbb{E}_{\hat{q}^{(j)}}[\max_{l \in M} (\hat{q}_l^{(j)}(x))]$
+4: $\hat{c}_M^{(j)}(x) := \sum_{l \in M} \hat{c}_l^{(j)}(x)$
+5: end for
+6: index := Router$\big(x, \big(\hat{q}_M^{(j)}(x) \text{ for } M \in \mathcal{S}\big),$
+7: $\qquad \big(\hat{c}_M^{(j)}(x) \text{ for } M \in \mathcal{S}\big), \lambda_j, \gamma\big)$
+8: possibilities := $\mathcal{S}_{\mathrm{index}} \setminus \{i_1, \ldots, i_{j-1}\}$
+9: if possibilities $= \emptyset$ then
+10: return stop
+11: else
+12: min_cost_index := $\arg\min_{i \in \text{possibilities}} \hat{c}_i^{(j)}(x)$
+13: return min_cost_index
+14: end if
+
+Model Order In cascading, the model order is predetermined, and the routing strategy only decides whether to proceed with the next model in the sequence. In contrast, cascade routing must dynamically determine the order in which models are computed. Despite this, both the estimated quality $\hat{q}_M^{(j)}(x)$ and cost $\hat{c}_M^{(j)}(x)$ of a supermodel $M$ are order-independent. Therefore, supermodels that contain the same models in a different order will have the same cost and quality. To mitigate this, we sort the models within the selected supermodel by cost and compute the cheapest one first (illustrated in Lines 12-13 of Algorithm 3). This approach aligns with cascading, where more expensive models are only used if cheaper models do not suffice.
+
+Number of Supermodels In cascading, the quality and cost must be computed for at most $k$ supermodels at each step. In cascade routing, however, the number of supermodels grows exponentially, requiring the evaluation of up to $2^{k}$ supermodels. This can become prohibitively costly, particularly since model selection must remain computationally negligible relative to running the models themselves. To mitigate this, we leverage so-called negative marginal gains. It can be shown (see App. C) that if a model $m$ in a supermodel $M$ negatively impacts the quality-cost tradeoff, all supermodels containing all models in $M$ can be pruned from the search space. For example, if $m_{1}$ negatively affects the quality-cost tradeoff of the supermodel $\{m_{1}, m_{2}\}$ , we can prune all supermodels that contain both $m_{1}$ and $m_{2}$ . Since this negative contribution is quite common, this allows us to prune the search space significantly. More formally, this pruning operation relies on the following lemma:
+
+Lemma 1 (Negative Marginal Gain). Let $M \in \mathcal{M}$ and $m$ be any model in $M$ . Let the marginal gain of $m$ w.r.t. $M$ be defined as $\tau_{M}(x, \lambda) - \tau_{M \setminus \{m\}}(x, \lambda)$ . Then, if the marginal gain of $m$ w.r.t. $M$ is strictly negative for a given query, the optimal cascade routing strategy will never run a supermodel $M' \in \mathcal{M}$ that contains all models in $M$ .
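A brute-force sketch of Lemma 1's pruning rule, for illustration only: a practical implementation would prune during enumeration rather than scoring all $2^k$ subsets first, and the quality/cost maps here are assumed inputs:

```python
from itertools import combinations

def prune_supermodels(q_sm, c_sm, lam, k):
    """Drop every supermodel that contains some set M in which a model m
    has strictly negative marginal gain tau(M) - tau(M minus {m}).
    q_sm / c_sm map frozensets of model indices to the estimated
    supermodel quality and cost."""
    def tau(M):
        return q_sm[M] - lam * c_sm[M]

    candidates = [frozenset(s) for r in range(1, k + 1)
                  for s in combinations(range(k), r)]
    bad = [M for M in candidates if len(M) > 1
           and any(tau(M) - tau(M - {m}) < 0 for m in M)]
    return [M for M in candidates if not any(B <= M for B in bad)]

# Two models: model 0 adds cost to {0, 1} without improving its quality,
# so {0, 1} is pruned and only the singleton supermodels survive.
q = {frozenset({0}): 0.5, frozenset({1}): 0.8, frozenset({0, 1}): 0.8}
c = {frozenset({0}): 0.5, frozenset({1}): 1.0, frozenset({0, 1}): 1.5}
survivors = prune_supermodels(q, c, lam=1.0, k=2)
```

The superset check `B <= M` is what lifts a single negative marginal gain to the pruning of every supermodel containing all models in $M$ .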
+
+Example To illustrate the optimal cascade routing strategy, consider a scenario with two models $m_1$ and $m_2$ with costs $\hat{c}^{(1)}(x) = (0.5, 1)$ and qualities $\hat{q}^{(1)}(x) = (0.5, 0.8)$ . In contrast to cascading, we do not necessarily need to run $m_1$ first. In this scenario, if $\lambda_1 = 0.1$ , we would immediately run $m_2$ . While we would most likely stop after $m_2$ , it is possible that its output is so bad that we update the quality estimator to $\hat{q}^{(2)}(x) = (0.5, 0.1)$ . In this case, for $\lambda_2 = 0.1$ , we would run $m_1$ next and return its output. If instead $\lambda_1 = 1$ , we recover the more classical cascading scenario described before, where $m_1$ is run first.
+
+# 5. Experimental Evaluation
+
+We now evaluate the performance of cascade routing and demonstrate that it significantly outperforms all other strategies. Additionally, we show that our new cascading approach outperforms the threshold-based cascading method. For this purpose, we first conduct experiments on RouterBench (Hu et al., 2024), a benchmark specifically designed to evaluate routing and cascading ( $\S 5.1$ ). Next, we test cascade routing on several additional benchmarks to evaluate its performance in more realistic scenarios ( $\S 5.2$ ). In the appendix, we perform an ablation study to examine the impact of various design choices in cascade routing on performance and runtime (App. F). Finally, in App. H, we show detailed results as well as cost-quality tradeoff curves for several benchmarks.
+
+# 5.1. RouterBench
+
+RouterBench (Hu et al., 2024) is a benchmark developed to evaluate the efficacy of different model selection strategies. It includes questions from seven diverse benchmarks, such as MMLU (Hendrycks et al., 2021), GSM8k (Cobbe et al., 2021), and MBPP (Austin et al., 2021), alongside answers from eleven different models ranging from GPT-4 (OpenAI, 2023) to Mistral-7B (Jiang et al., 2023).
+
+Quality and Cost Estimates Similar to (Hu et al., 2024), we estimate quality and cost by adding zero-centered Gaussian noise to their true values. Both cost and quality estimates are modeled as linear functions fitted on these noisy signals. Thus, the quality estimate can be expressed as $\hat{q}_{W,b}(x) = W(q(x) + \epsilon) + b$ where $\epsilon \sim \mathcal{N}(0,\sigma^2)$ . A similar expression holds for the cost estimate. We define the variance of the noisy signal as $\sigma_{\mathrm{ante}}^2$ before model computation (ex-ante estimates) and $\sigma_{\mathrm{post}}^2$ after (post-hoc estimates).
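The synthetic estimator can be sketched as follows; in practice $W$ and $b$ would be fitted by linear regression on a validation set, and the identity defaults below are purely for illustration:

```python
import random

def noisy_estimate(true_values, sigma, W=1.0, b=0.0, seed=None):
    """RouterBench-style synthetic estimator: for each model,
    hat_q(x) = W * (q(x) + eps) + b with eps ~ N(0, sigma^2)."""
    rng = random.Random(seed)
    return [W * (v + rng.gauss(0.0, sigma)) + b for v in true_values]

# sigma = 0 recovers the true values exactly; increasing sigma simulates
# the low / medium / high noise regimes.
noisy_estimate([0.2, 0.9], sigma=0.0)  # [0.2, 0.9]
```

The same construction applies to the cost estimates, with $\sigma_{\mathrm{ante}}$ before model computation and $\sigma_{\mathrm{post}}$ after.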
+
+Table 1: AUC scores in % for different strategies on RouterBench across model and noise levels. All baselines are always worse than the $95\%$ confidence intervals of cascade routing. For a discussion on confidence intervals, we refer to App. E.
+
+| Strategy | 3 Models: Low | 3 Models: Med | 3 Models: High | 5 Models: Low | 5 Models: Med | 5 Models: High | 11 Models: Low | 11 Models: Med | 11 Models: High |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Linear Interp. | 69.62 | 69.62 | 69.62 | 69.22 | 69.22 | 69.22 | 70.51 | 70.51 | 70.51 |
+| Routing | 79.73 | 74.97 | 71.81 | 81.24 | 74.43 | 71.33 | 83.25 | 74.63 | 72.67 |
+| Cascade (Baseline) | 80.86 | 74.64 | 72.48 | 82.33 | 73.03 | 69.53 | 84.48 | 73.64 | 69.79 |
+| Cascade (Ours) | 81.09 | 76.16 | 72.67 | 83.06 | 75.17 | 70.18 | 84.47 | 75.10 | 70.26 |
+| Cascade Routing (Ours) | 82.36 | 76.55 | 73.22 | 84.33 | 76.31 | 72.75 | 87.24 | 77.57 | 74.40 |
+
+
+Figure 2: Difference in AUC performance between cascade routing and baseline strategies on RouterBench for various noise values. Red indicates cascade routing is much better, while blue indicates it is only slightly better. (a) Comparison of cascade routing with routing. (b) Comparison of cascade routing with cascading.
+
+To explore different uncertainty levels, we vary the variances to simulate low-, medium-, and high-noise scenarios, with exact values for the variances given in App. D.1.
+
+Models We evaluate cascade routing on RouterBench using three, five, and eleven models available for model selection, ensuring a comprehensive evaluation across a range of scenarios. The exact models are provided in App. D.1.
+
+Strategies We compare cascade routing against several baseline strategies, including the routing strategy described in §2, the threshold-based cascading approach from prior work (Corollary 1), and the optimal cascading strategy (Theorem 2). Additionally, as in Hu et al. (2024), we include a baseline that linearly interpolates cost and quality on the Pareto frontier of the models.
+
+Evaluation Metric For each method, we evaluate performance using cost budgets ranging from the cheapest to the most expensive model. This produces a quality-cost curve for each strategy. Following Hu et al. (2024), we use the Area Under the Curve (AUC) as the performance metric.
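+
+A minimal sketch of this metric, assuming we already have the quality achieved at each cost budget (the normalization by the cost range is an illustrative choice to keep strategies on different budget ranges comparable):
+
+```python
+import numpy as np
+
+def quality_cost_auc(costs, qualities):
+    """Area under the quality-cost curve via trapezoidal integration,
+    normalized by the cost range."""
+    costs = np.asarray(costs, dtype=float)
+    qualities = np.asarray(qualities, dtype=float)
+    # Trapezoid rule: sum of average heights times interval widths.
+    area = np.sum(0.5 * (qualities[1:] + qualities[:-1]) * np.diff(costs))
+    return area / (costs[-1] - costs[0])
+
+# Toy example: quality saturates as the budget grows.
+budgets = np.linspace(1.0, 10.0, 50)
+quality = 1.0 - np.exp(-budgets / 3.0)
+auc = quality_cost_auc(budgets, quality)
+assert 0.0 < auc < 1.0
+```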
+
+Results Table 1 presents the results for the zero-shot setting, with the five-shot results detailed in App. H. Cascade routing consistently outperforms all baseline strategies, with absolute gains between $1\%$ and $4\%$; measured relative to the naive linear interpolation baseline, this corresponds to a $13\%$ to $80\%$ improvement over the baselines. The performance gap widens as more models become available and narrows under higher noise levels, indicating that cascade routing is most effective with large model sets and accurate cost and quality estimates. Furthermore, our new cascading strategy outperforms the threshold-based cascade by up to $2\%$, reinforcing the practical relevance of our theoretical results.
+
+Quality Estimation To better understand the impact of quality estimation on model selection strategies, we additionally conduct experiments with five models under a broader range of noise levels. Fig. 2 illustrates the difference in AUC performance between cascade routing and the baseline strategies across all noise levels. The results demonstrate that cascade routing consistently outperforms the baselines, achieving up to an $8\%$ improvement over cascading and up to a $12\%$ improvement over routing. Notably, the performance gap highlights key differences between the cascading and routing strategies. For routing, the value of $\sigma_{\mathrm{ante}}$ is critical: high $\sigma_{\mathrm{ante}}$ significantly reduces performance compared to cascade routing. Conversely, for cascading, $\sigma_{\mathrm{post}}$ plays the more influential role, with higher values causing substantial performance degradation. These findings underscore the importance of accurate quality estimation for both strategies. Cascade routing proves to be a more robust solution by unifying the strengths of both approaches and effectively leveraging low $\sigma_{\mathrm{ante}}$ and low $\sigma_{\mathrm{post}}$ to enhance performance.
+
+# 5.2. Real-World Benchmarks
+
+We now show that cascade routing outperforms baselines on more realistic benchmarks with quality estimates that can be used in real-world scenarios. We differentiate in our analysis between benchmarks where accurate quality estimation is available and those where it is not.
+
+Accurate Quality Estimation We first evaluate cascade routing on two benchmarks that allow for accurate quality estimation. First, in the domain of software engineering, it is often easier to generate tests that reproduce a specific issue than to fix it. We therefore use SWE-Bench (Jimenez et al., 2024) as a benchmark where accurate post-hoc quality estimation is available. Specifically, we assume that the quality of a model's response can be accurately estimated by running it against the ground-truth test cases. Second, to simulate a use case where ex-ante quality estimation is accurate, we use the Math and Coder models from the Qwen-2.5 model family (Yang et al., 2024; Hui et al., 2024) and evaluate them on a combination of Minerva Math (Lewkowycz et al., 2022) and LiveCodeBench (Jain et al., 2024). To obtain accurate quality estimates, we incorporate a sample's origin benchmark as a feature in the quality estimation model. For all details about the benchmarks, models, and estimators, we refer to App. D.1.
+
+Results Table 2 (left) shows the results for both benchmarks. In SWE-Bench, our methods outperform baseline strategies by up to $14\%$ . As expected, the routing strategy does not outperform the trivial baseline on this benchmark, as ex-ante quality estimates are insufficient. Interestingly, despite perfect post-hoc quality estimation for SWE-Bench, the baseline cascade strategy also performs poorly. This is due to the binary feedback of the quality estimator, which leads the threshold $\tau$ of the baseline cascade to either admit all models ( $\tau = 0$ ) or only correct ones ( $\tau > 0$ ).
+
+For Minerva Math and LiveCodeBench, the opposite trend holds. With accurate ex-ante quality estimation, the routing strategy achieves strong performance, surpassing the baseline cascade strategy by $10\%$. However, cascade routing still outperforms all methods, highlighting its robustness across diverse benchmarks and quality estimation scenarios. Interestingly, despite poor post-hoc quality estimation, our cascading strategy nearly matches the performance of routing. This suggests that it effectively leverages ex-ante quality estimation to make informed decisions, unlike the baseline cascade.
+
+We highlight that the cost estimator for SWE-Bench is latency-based, computing cost as the time it takes to complete the task. In contrast, the estimator for Minerva Math and LiveCodeBench uses the cost of the generation. Thus, cascade routing can adapt to different cost estimators.
+
+Poor Quality Estimation We perform experiments on classification and open-form reasoning tasks for which no accurate quality estimator is known. The classification benchmarks include ARC-Challenge (Clark et al., 2018), MMLU-Pro (Wang et al., 2024), and MixEval (Ni et al., 2024). For open-form reasoning tasks, we use MMLU-Pro and GSM8k (Cobbe et al., 2021). In classification, models select a single option representing their answer, with no intermediate reasoning process. In contrast, open-form reasoning allows models to generate their answers after reasoning. In this section, we evaluate two model families of three models each, LLAMA and GEMMA, and show similar numbers for the MISTRAL model family in App. H. We build a quality estimator based on the state-of-the-art approach of Gupta et al. (2024), which uses log probabilities as features. For full details on the benchmarks, models, and cost and quality estimators, we refer to App. D.
+
+Results Table 2 (right) presents the results for the LLAMA and GEMMA model families across both task types. Cascade routing consistently performs on par with or outperforms all baselines, though with much narrower margins of up to $1.2\%$. This reduced gain can be attributed to the very noisy quality and cost estimates, which lead to performance gains over the naive baseline similar to those observed in the very high-noise scenarios on RouterBench.
+
+# 6. Related Work
+
+Routing Routing is a widely studied problem in machine learning, particularly in the task of directing input queries to specialized models. One of the most common applications of routing is model selection for natural language input queries with a known answer (Chuang et al., 2024; Ding et al., 2024; Hari & Thomson, 2023; Liu et al., 2024; Jang et al., 2023; Nguyen et al., 2024; Sakota et al., 2024; Shnitzer et al., 2023). All these works train a model to predict whether a given model will correctly answer a query. Though the setups in these works are largely similar, they vary in certain specifics, such as the type of input queries or the features used for quality estimation.
+
+Table 2: AUC scores on practical benchmarks. On the left, resp. right, side we show the benchmarks with good, resp. poor, quality estimates. The highest numbers are bolded, and underlined numbers are within the $95\%$ confidence intervals of the highest number. For a discussion on confidence intervals, refer to App. E. In App. G, we present benchmark-specific AUC values for results averaged over several benchmarks.
+
+| Strategy | SWE-Bench (10 Models) | SWE-Bench (5 Models) | Math+Code (Qwen) | Classification (Llama) | Classification (Gemma) | Open-Form (Llama) | Open-Form (Gemma) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| Linear Interp. | 40.51 | 38.64 | 39.63 | 74.28 | 61.68 | 79.11 | 54.10 |
+| Routing | 40.47 | 39.40 | 47.46 | 74.92 | 64.44 | 79.32 | 58.40 |
+| Cascade (Baseline) | 38.52 | 45.89 | 37.68 | 74.81 | 54.32 | 79.23 | 56.18 |
+| Cascade (Ours) | 53.20 | 50.94 | 46.76 | 75.46 | 62.79 | 79.22 | 56.18 |
+| Cascade Routing (Ours) | **54.12** | **51.09** | **48.55** | **75.52** | **64.84** | **79.88** | **59.66** |
+
+Routing is also applied in other areas. For instance, Lu et al. (2024); Ong et al. (2024) use preference data to train a quality estimator, which facilitates routing in scenarios involving real-world user queries where clear ground-truth answers may not exist. Additionally, Chen et al. (2022) employ routing for API selection in multi-label classification tasks, focusing on directing queries to the appropriate API based on task requirements. Similarly, Zhang et al. (2024b) apply routing in software agent environments, directing user issues to the agent most suited to handle them. Finally, Pichlmeier et al. (2024) route token generation dynamically instead of routing entire queries, allowing for more fine-grained routing decisions.
+
+Cascading Cascading techniques are primarily used to reduce inference costs by employing smaller models initially and only cascading to larger models if the smaller ones fail to provide a sufficiently accurate answer. Most often, cascading decisions are based on the smaller model's confidence in its own predictions (Chen et al., 2023; 2024; Ramírez et al., 2024; Varshney & Baral, 2022). However, alternative techniques also exist. For example, Madaan et al. (2023) propose running models multiple times and measuring the variance in their responses to decide whether to cascade to a larger model.
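+
+As a rough sketch of this confidence-based mechanism (the model and confidence interfaces below are hypothetical, not taken from any of the cited works):
+
+```python
+def threshold_cascade(models, confidence, threshold, query):
+    """Run models cheapest-first; stop once the (application-specific)
+    confidence estimate of the current answer clears the threshold,
+    or once the largest model has been queried."""
+    answer = None
+    for i, model in enumerate(models):
+        answer = model(query)
+        is_last = i == len(models) - 1
+        if is_last or confidence(answer) >= threshold:
+            break  # confident enough (or out of models): stop cascading
+    return answer
+
+# Toy usage: a "small" model that hedges and a "large" model that is sure.
+small = lambda q: ("maybe 4", 0.40)
+large = lambda q: ("4", 0.95)
+conf = lambda ans: ans[1]
+
+assert threshold_cascade([small, large], conf, 0.8, "2+2?") == ("4", 0.95)
+assert threshold_cascade([small, large], conf, 0.3, "2+2?") == ("maybe 4", 0.40)
+```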
+
+For classification tasks, early stopping is another cascading strategy (Li et al., 2021; Schuster et al., 2022). In this approach, the cascade halts when a model's intermediate layers generate representations that are sufficiently informative to predict the correct class. This reduces computational costs by avoiding the need to process every query through the entire model.
+
+There has also been specific research on quality estimation within cascading frameworks. Gupta et al. (2024) examine various measures of uncertainty in language model answers, evaluating their impact on cascading performance. Meanwhile, Jitkrittum et al. (2023a) explore failure cases in cascading mechanisms that rely on uncertainty, introducing alternative quality measures that enhance cascade efficiency. Furthermore, Xue et al. (2023) apply cascading to majority voting for a single model to obtain a method called dynamic voting: the cascade stops depending on the aggregated answers of all previous model computations. Lastly, Zhang et al. (2024a) propose the use of multi-objective quality metrics to guide cascading decisions rather than focusing solely on accuracy.
+
+All works mentioned here can be seen as an instantiation of the thresholding mechanism outlined in Corollary 1 with application-specific quality and cost estimates.
+
+# 7. Conclusion
+
+In this work, we introduced a novel framework for routing and cascading that enabled us to propose theoretically optimal strategies for both paradigms. Further, we used this analysis to propose a new paradigm for model selection, cascade routing, which combines the benefits of routing and cascading. We showed that cascade routing can significantly outperform its baselines, especially with good quality and cost estimates. We also found that our new cascading strategy significantly outperforms existing approaches to cascading, showing that our theoretical analysis leads to practical gains as well.
+
+# Impact Statement
+
+Our work can significantly impact the field of model selection strategies. By providing a theoretical foundation for routing and cascading, we have shown that these strategies can be improved by using more accurate quality and cost estimates. Cascade routing combines the strengths of both routing and cascading and offers a more flexible and effective model selection strategy. Furthermore, by underscoring the importance of quality estimation, we highlighted a critical area for future research in model selection strategies that could lead to further improvements in this area.
+
+# Acknowledgements
+
+This work was funded in part by the Swiss National Science Foundation (SNSF) [200021_207967].
+
+This work has been done as part of the EU grant ELSA (European Lighthouse on Secure and Safe AI, grant agreement no. 101070617). Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or European Commission. Neither the European Union nor the European Commission can be held responsible for them.
+
+The work has received funding from the Swiss State Secretariat for Education, Research and Innovation (SERI).
+
+# References
+
+Austin, J., Odena, A., Nye, M. I., Bosma, M., Michalewski, H., Dohan, D., Jiang, E., Cai, C. J., Terry, M., Le, Q. V., and Sutton, C. Program synthesis with large language models. CoRR, abs/2108.07732, 2021. URL https:// arxiv.org/abs/2108.07732.
+Chen, D., Zhuang, Y., Zhang, S., Liu, J., Dong, S., and Tang, S. Data shunt: Collaboration of small and large models for lower costs and better performance. In Wooldridge, M. J., Dy, J. G., and Natarajan, S. (eds.), Thirty-Eighth AAAI Conference on Artificial Intelligence, AAAI 2024, Thirty-Sixth Conference on Innovative Applications of Artificial Intelligence, IAAI 2024, Fourteenth Symposium on Educational Advances in Artificial Intelligence, EAAI 2024, February 20-27, 2024, Vancouver, Canada, pp. 11249-11257. AAAI Press, 2024. doi: 10.1609/AAAI.V38I10.29003. URL https://doi.org/10.1609/aaai.v38i10.29003.
+Chen, L., Zaharia, M., and Zou, J. Efficient online ML API selection for multi-label classification tasks. In Chaudhuri, K., Jegelka, S., Song, L., Szepesvári, C., Niu, G., and Sabato, S. (eds.), International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pp. 3716-3746. PMLR, 2022. URL https://proceedings.mlr.press/v162/chen22ad.html.
+Chen, L., Zaharia, M., and Zou, J. Frugalgpt: How to use large language models while reducing cost and improving performance. CoRR, abs/2305.05176, 2023. doi: 10.48550/ARXIV.2305.05176. URL https://doi.org/10.48550/arXiv.2305.05176.
+Chuang, Y., Zhou, H., Sarma, P. K., Gopalan, P., Boccio, J., Bolouki, S., and Hu, X. Learning to route with confidence tokens. CoRR, abs/2410.13284, 2024. doi: 10.48550/ARXIV.2410.13284. URL https://doi.org/10.48550/arXiv.2410.13284.
+Clark, P., Cowhey, I., Etzioni, O., Khot, T., Sabharwal, A., Schoenick, C., and Tafjord, O. Think you have solved question answering? try arc, the AI2 reasoning challenge. ArXiv preprint, abs/1803.05457, 2018.
+Cobbe, K., Kosaraju, V., Bavarian, M., Chen, M., Jun, H., Kaiser, L., Plappert, M., Tworek, J., Hilton, J., Nakano, R., Hesse, C., and Schulman, J. Training verifiers to solve math word problems. ArXiv preprint, abs/2110.14168, 2021.
+Damani, M., Shenfeld, I., Peng, A., Bobu, A., and Andreas, J. Learning how hard to think: Input-adaptive allocation of LM computation. CoRR, abs/2410.04707, 2024. doi: 10.48550/ARXIV.2410.04707. URL https://doi.org/10.48550/arXiv.2410.04707.
+Ding, D., Mallick, A., Wang, C., Sim, R., Mukherjee, S., Ruhle, V., Lakshmanan, L. V. S., and Awadallah, A. H. Hybrid LLM: cost-efficient and quality-aware query routing. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net, 2024. URL https://openreview.net/forum?id=02f3mUtqnM.
+Gao, L., Tow, J., Abbasi, B., Biderman, S., Black, S., DiPofi, A., Foster, C., Golding, L., Hsu, J., Le Noac'h, A., Li, H., McDonell, K., Muennighoff, N., Ociepa, C., Phang, J., Reynolds, L., Schoelkopf, H., Skowron, A., Sutawika, L., Tang, E., Thite, A., Wang, B., Wang, K., and Zou, A. A framework for few-shot language model evaluation, 07 2024. URL https://zenodo.org/records/12608602.
+Gupta, N., Narasimhan, H., Jitkrittum, W., Rawat, A. S., Menon, A. K., and Kumar, S. Language model cascades: Token-level uncertainty and beyond. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net, 2024. URL https://openreview.net/forum?id=KgaBScZ4VI.
+Hari, S. N. and Thomson, M. Tryage: Real-time, intelligent routing of user prompts to large language models. CoRR, abs/2308.11601, 2023. doi: 10.48550/ARXIV.2308.11601. URL https://doi.org/10.48550/arXiv.2308.11601.
+Hendrycks, D., Burns, C., Basart, S., Zou, A., Mazeika, M., Song, D., and Steinhardt, J. Measuring massive multitask language understanding. In Proc. of ICLR, 2021.
+Hu, Q. J., Bieker, J., Li, X., Jiang, N., Keigwin, B., Ranganath, G., Keutzer, K., and Upadhyay, S. K. Routerbench: A benchmark for multi-llm routing system. CoRR, abs/2403.12031, 2024. doi: 10.48550/ARXIV.2403.12031. URL https://doi.org/10.48550/arXiv.2403.12031.
+Hui, B., Yang, J., Cui, Z., Yang, J., Liu, D., Zhang, L., Liu, T., Zhang, J., Yu, B., Dang, K., Yang, A., Men, R., Huang, F., Ren, X., Ren, X., Zhou, J., and Lin, J. Qwen2.5-coder technical report. CoRR, abs/2409.12186, 2024. doi: 10.48550/ARXIV.2409.12186. URL https://doi.org/10.48550/arXiv.2409.12186.
+Jain, N., Han, K., Gu, A., Li, W., Yan, F., Zhang, T., Wang, S., Solar-Lezama, A., Sen, K., and Stoica, I. Livecodebench: Holistic and contamination free evaluation of large language models for code. CoRR, abs/2403.07974, 2024. doi: 10.48550/ARXIV.2403.07974. URL https://doi.org/10.48550/arXiv.2403.07974.
+Jang, J., Kim, S., Ye, S., Kim, D., Logeswaran, L., Lee, M., Lee, K., and Seo, M. Exploring the benefits of training expert language models over instruction tuning. In Krause, A., Brunskill, E., Cho, K., Engelhardt, B., Sabato, S., and Scarlett, J. (eds.), International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pp. 14702-14729. PMLR, 2023. URL https://proceedings.mlr.press/v202/jang23a.html.
+Jiang, A. Q., Sablayrolles, A., Mensch, A., Bamford, C., Chaplot, D. S., de Las Casas, D., Bressand, F., Lengyel, G., Lample, G., Saulnier, L., Lavaud, L. R., Lachaux, M., Stock, P., Scao, T. L., Lavril, T., Wang, T., Lacroix, T., and Sayed, W. E. Mistral 7b. CoRR, abs/2310.06825, 2023. doi: 10.48550/ARXIV.2310.06825.
+Jimenez, C. E., Yang, J., Wettig, A., Yao, S., Pei, K., Press, O., and Narasimhan, K. R. Swe-bench: Can language models resolve real-world github issues? In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net, 2024. URL https://openreview.net/forum?id=VTF8yNQM66.
+Jitkrittum, W., Gupta, N., Menon, A. K., Narasimhan, H., Rawat, A. S., and Kumar, S. When does confidence-based cascade deferral suffice? In Oh, A., Naumann, T., Globerson, A., Saenko, K., Hardt, M., and Levine, S. (eds.), Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023, 2023a. URL http://papers.nips.cc/paper_files/paper/2023/bitstream/1f09e1ee5035a4c3fe38a5681cae5815-Abstract-Conference.html.
+
+Jitkrittum, W., Gupta, N., Menon, A. K., Narasimhan, H., Rawat, A. S., and Kumar, S. When does confidence-based cascade deferral suffice? In Oh, A., Naumann, T., Globerson, A., Saenko, K., Hardt, M., and Levine, S. (eds.), Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023, 2023b. URL http://papers.nips.cc/paper_files/paper/2023/bitstream/1f09e1ee5035a4c3fe38a5681cae5815-Abstract-Conference.html.
+Khalili, L., You, Y., and Bohannon, J. Babybear: Cheap inference triage for expensive language models. CoRR, abs/2205.11747, 2022. doi: 10.48550/ARXIV.2205.11747. URL https://doi.org/10.48550/arXiv.2205.11747.
+Lewkowycz, A., Andreassen, A., Dohan, D., Dyer, E., Michalewski, H., Ramasesh, V. V., Slone, A., Anil, C., Schlag, I., Gutman-Solo, T., Wu, Y., Neyshabur, B., Gur-Ari, G., and Misra, V. Solving quantitative reasoning problems with language models. In Koyejo, S., Mohamed, S., Agarwal, A., Belgrave, D., Cho, K., and Oh, A. (eds.), Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022, 2022. URL http://papers.nips.cc/paper_files/paper/2022/bitstream/18abbeef8cfe9203fdf9053c9c4fe191-Abstract-Conference.html.
+Li, L., Lin, Y., Chen, D., Ren, S., Li, P., Zhou, J., and Sun, X. Cascadebert: Accelerating inference of pre-trained language models via calibrated complete models cascade. In Moens, M., Huang, X., Specia, L., and Yih, S. W. (eds.), Findings of the Association for Computational Linguistics: EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 16-20 November, 2021, pp. 475-486. Association for Computational Linguistics, 2021. doi: 10.18653/V1/2021.FINDINGS-EMNLP.43. URL https://doi.org/10.18653/v1/2021-findings-emnlp.43.
+Liu, Y., Zhang, H., Miao, Y., Le, V., and Li, Z. Optillm: Optimal assignment of queries to large language models. CoRR, abs/2405.15130, 2024. doi: 10.48550/ARXIV.2405.15130. URL https://doi.org/10.48550/arXiv.2405.15130.
+Lu, K., Yuan, H., Lin, R., Lin, J., Yuan, Z., Zhou, C., and Zhou, J. Routing to the expert: Efficient reward-guided ensemble of large language models. In Duh, K., Gomez-Adorno, H., and Bethard, S. (eds.), Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), NAACL 2024, Mexico City, Mexico, June 16-21, 2024, pp. 1964-1974. Association for Computational Linguistics, 2024. doi: 10.18653/V1/2024.NAACL-LONG.109. URL https://doi.org/10.18653/v1/2024.naacl-long.109.
+Madaan, A., Aggarwal, P., Anand, A., Potharaju, S. P., Mishra, S., Zhou, P., Gupta, A., Rajagopal, D., Kappaganthu, K., Yang, Y., Upadhyay, S., Mausam, and Faruqui, M. Automix: Automatically mixing language models. CoRR, abs/2310.12963, 2023. doi: 10.48550/ARXIV.2310.12963. URL https://doi.org/10.48550/arXiv.2310.12963.
+Nguyen, Q. H., Hoang, D. C., Decugis, J., Manchanda, S., Chawla, N. V., and Doan, K. D. Metallm: A high-performant and cost-efficient dynamic framework for wrapping llms. CoRR, abs/2407.10834, 2024. doi: 10.48550/ARXIV.2407.10834. URL https://doi.org/10.48550/arXiv.2407.10834.
+Ni, J., Xue, F., Yue, X., Deng, Y., Shah, M., Jain, K., Neubig, G., and You, Y. Mixeval: Deriving wisdom of the crowd from LLM benchmark mixtures. CoRR, abs/2406.06565, 2024. doi: 10.48550/ARXIV.2406.06565. URL https://doi.org/10.48550/arXiv.2406.06565.
+Nie, L., Ding, Z., Hu, E., Jermaine, C. M., and Chaudhuri, S. Online cascade learning for efficient inference over streams. In Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net, 2024. URL https://openreview.net/forum?id=Wz4lgc8dsN.
+Ong, I., Almahairi, A., Wu, V., Chiang, W., Wu, T., Gonzalez, J. E., Kadous, M. W., and Stoica, I. Routellm: Learning to route llms with preference data. CoRR, abs/2406.18665, 2024. doi: 10.48550/ARXIV.2406.18665. URL https://doi.org/10.48550/arXiv.2406.18665.
+OpenAI. GPT-4 technical report. CoRR, abs/2303.08774, 2023. doi: 10.48550/arXiv.2303.08774.
+Pichlmeier, J., Ross, P., and Luckow, A. Domain-Aware LLM Routing During Generation. In 2024 IEEE International Conference on Big Data (BigData), pp. 8235-8237, Los Alamitos, CA, USA, December 2024. IEEE Computer Society. doi: 10.1109/BigData62323.2024.10825152. URL https://doi.ieeecomputersociety.org/10.1109/BigData62323.2024.10825152.
+Ramírez, G., Birch, A., and Titov, I. Optimising calls to large language models with uncertainty-based two-tier selection. CoRR, abs/2405.02134, 2024. doi: 10.48550/ARXIV.2405.02134. URL https://doi.org/10.48550/arXiv.2405.02134.
+Sakota, M., Peyrard, M., and West, R. Fly-swat or cannon? cost-effective language model choice via metamodeling. In Caudillo-Mata, L. A., Lattanzi, S., Medina, A. M., Akoglu, L., Gionis, A., and Vassilvitskii, S. (eds.), Proceedings of the 17th ACM International Conference on Web Search and Data Mining, WSDM 2024, Merida, Mexico, March 4-8, 2024, pp. 606-615. ACM, 2024. doi: 10.1145/3616855.3635825. URL https://doi.org/10.1145/3616855.3635825.
+Schuster, T., Fisch, A., Gupta, J., Dehghani, M., Bahri, D., Tran, V., Tay, Y., and Metzler, D. Confident adaptive language modeling. In Koyejo, S., Mohamed, S., Agarwal, A., Belgrave, D., Cho, K., and Oh, A. (eds.), Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022, 2022. URL http://papers.nips.cc/paper_files/paper/2022/bitize/6fac9e316a4ae75ea244ddcef1982c71-Abstract-Conference.html.
+Shnitzer, T., Ou, A., Silva, M., Soule, K., Sun, Y., Solomon, J., Thompson, N., and Yurochkin, M. Large language model routing with benchmark datasets. CoRR, abs/2309.15789, 2023. doi: 10.48550/ARXIV.2309.15789. URL https://doi.org/10.48550/arXiv.2309.15789.
+Varshney, N. and Baral, C. Model cascading: Towards jointly improving efficiency and accuracy of NLP systems. In Goldberg, Y., Kozareva, Z., and Zhang, Y. (eds.), Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pp. 11007-11021. Association for Computational Linguistics, 2022. doi: 10.18653/V1/2022.EMNLP-MAIN.756. URL https://doi.org/10.18653/v1/2022.emnlp-main.756.
+Wang, Y., Ma, X., Zhang, G., Ni, Y., Chandra, A., Guo, S., Ren, W., Arulraj, A., He, X., Jiang, Z., Li, T., Ku, M., Wang, K., Zhuang, A., Fan, R., Yue, X., and Chen, W. Mmlu-pro: A more robust and challenging multi-task language understanding benchmark. CoRR, abs/2406.01574, 2024. doi: 10.48550/ARXIV.2406.01574. URL https://doi.org/10.48550/arXiv.2406.01574.
+Xue, M., Liu, D., Lei, W., Ren, X., Yang, B., Xie, J., Zhang, Y., Peng, D., and Lv, J. Dynamic voting for efficient reasoning in large language models. In Bouamor, H., Pino, J., and Bali, K. (eds.), Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, pp. 3085-3104. Association for Computational Linguistics, 2023. doi: 10.18653/V1/2023.FINDINGS-EMNLP.203. URL https://doi.org/10.18653/v1/2023-findings-emnlp.203.
+Yang, A., Zhang, B., Hui, B., Gao, B., Yu, B., Li, C., Liu, D., Tu, J., Zhou, J., Lin, J., Lu, K., Xue, M., Lin, R., Liu, T., Ren, X., and Zhang, Z. Qwen2.5-math technical report: Toward mathematical expert model via self-improvement. CoRR, abs/2409.12122, 2024. doi: 10.48550/ARXIV.2409.12122. URL https://doi.org/10.48550/arXiv.2409.12122.
+Zhang, K., Peng, L., Wang, C., Go, A., and Liu, X. LLM cascade with multi-objective optimal consideration. CoRR, abs/2410.08014, 2024a. doi: 10.48550/ARXIV.2410.08014. URL https://doi.org/10.48550/arXiv.2410.08014.
+Zhang, K., Yao, W., Liu, Z., Feng, Y., Liu, Z., Murthy, R., Lan, T., Li, L., Lou, R., Xu, J., Pang, B., Zhou, Y., Heinecke, S., Savarese, S., Wang, H., and Xiong, C. Diversity empowers intelligence: Integrating expertise of software engineering agents, 2024b. URL https://arxiv.org/abs/2408.07060.
+
+# A. Routing
+
+First, we explain how to obtain the hyperparameters $\lambda$ and $\gamma$ for the routing strategy. We then provide a more exact formulation of the routing optimization problem and prove Theorem 1.
+
+Hyperparameters Due to the second part of Theorem 1, we only need to find a set of hyperparameters $\lambda$ and $\gamma$ that achieves the cost budget. Indeed, all routing strategies that have an expected cost exactly equal to $B$ and can be written as a convex combination of $s_{\mathrm{MIN}}^{\lambda'}$ and $s_{\mathrm{MAX}}^{\lambda'}$ for some $\lambda' \in \mathbb{R}^{+}$ achieve the same optimal quality.
+
+To determine these parameters, we estimate the cost of a strategy using a validation dataset $D$ that is representative of the query distribution $\mathcal{X}$ . We then perform a hyperparameter search to find optimal values of $\lambda$ and $\gamma$ . By leveraging several properties of routing strategies, one can show that this hyperparameter search can be reduced to a single binary search over $\lambda$ , enabling a quick and efficient hyperparameter optimization process.
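+
+A minimal sketch of such a binary search, assuming noisy quality and cost estimates on a validation set (the data, per-model costs, and budget below are illustrative):
+
+```python
+import numpy as np
+
+def route_cost(lmbda, q_hat, c_hat):
+    """Expected cost of the greedy routing strategy that sends each query
+    to argmax_i q_hat_i - lambda * c_hat_i."""
+    choice = np.argmax(q_hat - lmbda * c_hat, axis=1)
+    return c_hat[np.arange(len(choice)), choice].mean()
+
+def find_lambda(q_hat, c_hat, budget, lo=0.0, hi=1e6, iters=60):
+    """Binary search over lambda: cost is non-increasing in lambda
+    (Lemma 3), so we search for the smallest lambda whose routing
+    cost fits within the budget."""
+    for _ in range(iters):
+        mid = 0.5 * (lo + hi)
+        if route_cost(mid, q_hat, c_hat) > budget:
+            lo = mid          # too expensive: penalize cost more
+        else:
+            hi = mid          # within budget: try a smaller penalty
+    return hi
+
+rng = np.random.default_rng(1)
+q_hat = rng.uniform(size=(500, 3))                       # validation quality estimates
+c_hat = np.tile(np.array([1.0, 3.0, 10.0]), (500, 1))    # hypothetical per-model costs
+lam = find_lambda(q_hat, c_hat, budget=4.0)
+assert route_cost(lam, q_hat, c_hat) <= 4.0 + 1e-6
+```
+
+The monotonicity of the routing cost in $\lambda$ is exactly what makes a single binary search sufficient.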
+
+Proving the Theorem To prove Theorem 1, we first rewrite the routing optimization problem in Eq. (1) as a linear program over functions $s: \mathcal{X} \to \mathbb{R}^k$ instead of functions $s: \mathcal{X} \to \Delta_k$ . This makes the optimization problem more tractable. Specifically, Eq. (1) can be rewritten as follows:
+
+$$
+\begin{array}{l} \max_{s} \quad \mathbb{E}_{x \sim \mathcal{X}} \left[ \sum_{i=1}^{k} s_{i}(x) \hat{q}_{i}(x) \right] \\ \text{s.t.} \quad \mathbb{E}_{x \sim \mathcal{X}} \left[ \sum_{i=1}^{k} s_{i}(x) \hat{c}_{i}(x) \right] \leqslant B \tag{2} \\ \forall i \in \{1, \dots, k\}: \forall x \in \mathcal{X}: s_{i}(x) \geqslant 0 \wedge \sum_{j=1}^{k} s_{j}(x) = 1 \end{array}
+$$
+
+We then rewrite Theorem 1 to allow for a more exact formulation of the optimal routing strategy:
+
+Theorem 4. (Optimal Routing Strategy) Suppose there exists an admissible solution to the set of constraints in Eq. (2). For any $\lambda \in \mathbb{R}^{+}$ , let $S_{\lambda}$ be the set of routing strategies $s$ that satisfy the following constraints:
+
+$$
+\forall x \in \mathcal{X}, \forall i \in \{1, \dots, k\}: \hat{q}_{i}(x) - \lambda \hat{c}_{i}(x) < \max_{j} \hat{q}_{j}(x) - \lambda \hat{c}_{j}(x) \Rightarrow s_{i}(x) = 0 \tag{3}
+$$
+
+If there exists a strategy in $S_0$ that has a cost less than or equal to $B$ , then this strategy achieves the optimal quality. Otherwise, there exists a $\lambda^* \in \mathbb{R}^+$ such that $S_{\lambda}$ contains a routing strategy that has exactly cost $B$ and all routing strategies in $\bigcup_{\lambda \in \mathbb{R}^+} S_{\lambda}$ that have cost $B$ achieve the same optimal quality.
+
+There is one extra condition mentioned here that we omitted in the main text: the existence of an admissible solution to the constraints in Eq. (2), which ensures that the set of feasible solutions is not empty. For instance, the cost budget $B$ may be so low that even running the cheapest model for every query exceeds it.
+
+The formulation of $s_{\mathrm{OPT}}$ as a convex combination of $s_{\mathrm{MIN}}^{\lambda}$ and $s_{\mathrm{MAX}}^{\lambda}$ is a direct consequence of Theorem 4. Indeed, let $\lambda^{*}$ be as defined in Theorem 4. Then $s_{\mathrm{MIN}}^{\lambda^{*}}$ , resp. $s_{\mathrm{MAX}}^{\lambda^{*}}$ , must have the lowest, resp. highest, cost among all routing strategies in $S_{\lambda^{*}}$ . Since there is a routing strategy in $S_{\lambda^{*}}$ that has cost $B$ , there must exist a convex combination of $s_{\mathrm{MIN}}^{\lambda^{*}}$ and $s_{\mathrm{MAX}}^{\lambda^{*}}$ that also has cost $B$ and thus achieves the optimal quality.
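+
+Concretely, the mixing weight follows from solving $\gamma \, c_{\mathrm{min}} + (1 - \gamma) \, c_{\mathrm{max}} = B$ for $\gamma$, where $c_{\mathrm{min}}$ and $c_{\mathrm{max}}$ denote the costs of $s_{\mathrm{MIN}}^{\lambda^*}$ and $s_{\mathrm{MAX}}^{\lambda^*}$. A small sketch with illustrative costs:
+
+```python
+def mixing_weight(c_min, c_max, budget):
+    """Weight gamma on the cheaper strategy s_MIN so that the mixture
+    gamma * s_MIN + (1 - gamma) * s_MAX has expected cost exactly B.
+    Assumes c_min <= budget <= c_max (otherwise no mixture hits B)."""
+    if c_max == c_min:
+        return 1.0
+    return (c_max - budget) / (c_max - c_min)
+
+# Illustrative costs: s_MIN costs 2, s_MAX costs 8, budget is 5.
+gamma = mixing_weight(2.0, 8.0, 5.0)
+assert gamma == 0.5
+assert abs(gamma * 2.0 + (1 - gamma) * 8.0 - 5.0) < 1e-12
+```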
+
+We first prove several lemmas before proving the theorem.
+
+Lemma 2. $S_{\lambda}$ is non-empty and convex for all $\lambda \in \mathbb{R}^{+}$ .
+
+Proof. Non-emptiness follows from the fact that the routing strategy that assigns, for each sample $x$ , all probability mass to a model $i$ maximizing $\hat{q}_i(x) - \lambda \hat{c}_i(x)$ is in $S_{\lambda}$ . For convexity, let $s^{(1)}, s^{(2)} \in S_{\lambda}$ be arbitrary and let $s^{\gamma}$ be their convex combination with weight $\gamma \in (0,1)$ (the endpoints $\gamma \in \{0, 1\}$ are trivial). Let $x \in \mathcal{X}$ be arbitrary. Then, $s_i^\gamma (x) > 0$ if and only if $s_i^{(1)}(x) > 0$ or $s_i^{(2)}(x) > 0$ . Since $s^{(1)}, s^{(2)} \in S_{\lambda}$ , we have $\hat{q}_i(x) - \lambda \hat{c}_i(x) \geqslant \max_j\hat{q}_j(x) - \lambda \hat{c}_j(x)$ for all $i$ such that $s_i^{(1)}(x) > 0$ or $s_i^{(2)}(x) > 0$ , and hence for all $i$ such that $s_i^\gamma (x) > 0$ . Thus, $s^\gamma \in S_\lambda$ .
+
+Lemma 3. Let $\lambda_1 < \lambda_2$ and $s^{(1)}$ , resp. $s^{(2)}$ be arbitrary routing strategies in $S_{\lambda_1}$ , resp. $S_{\lambda_2}$ . Then, the cost of $s^{(1)}$ is greater than or equal to the cost of $s^{(2)}$ , i.e.,
+
+$$
+\mathbb {E} _ {x \sim \mathcal {X}} \left[ \sum_ {i = 1} ^ {k} s _ {i} ^ {(1)} (x) \hat {c} _ {i} (x) \right] \geqslant \mathbb {E} _ {x \sim \mathcal {X}} \left[ \sum_ {i = 1} ^ {k} s _ {i} ^ {(2)} (x) \hat {c} _ {i} (x) \right]
+$$
+
+Proof. We show that for any $x \in \mathcal{X}$ , the expected cost of $s^{(1)}$ on $x$ is greater than or equal to that of $s^{(2)}$ . Let $x \in \mathcal{X}$ be arbitrary and suppose, for contradiction, that $s^{(1)}$ is strictly cheaper than $s^{(2)}$ on $x$ . Then, there must exist a model pair $i, j$ such that $\hat{c}_i(x) < \hat{c}_j(x)$ , $s_i^{(1)}(x) > s_i^{(2)}(x) \geqslant 0$ , and $s_j^{(2)}(x) > s_j^{(1)}(x) \geqslant 0$ . However, $s_i^{(1)}(x) > 0$ implies
+
+$$
+\hat {q} _ {i} (x) - \lambda_ {1} \hat {c} _ {i} (x) \geqslant \hat {q} _ {j} (x) - \lambda_ {1} \hat {c} _ {j} (x).
+$$
+
+Furthermore, since $\lambda_1 - \lambda_2 < 0$ , we have
+
+$$
+\hat {c} _ {i} (x) \left(\lambda_ {1} - \lambda_ {2}\right) > \hat {c} _ {j} (x) \left(\lambda_ {1} - \lambda_ {2}\right).
+$$
+
+Adding these two inequalities gives
+
+$$
+\hat {q} _ {i} (x) - \lambda_ {2} \hat {c} _ {i} (x) > \hat {q} _ {j} (x) - \lambda_ {2} \hat {c} _ {j} (x),
+$$
+
+which contradicts $s_j^{(2)}(x) > 0$ . Thus, the cost of $s^{(1)}$ is greater than or equal to the cost of $s^{(2)}$ .
+
+Lemma 4. Let $\Lambda$ be the set of points $\lambda \in \mathbb{R}$ such that there exist an $x \in \mathcal{X}$ and $i \neq j$ such that $\hat{q}_i(x) - \lambda \hat{c}_i(x) = \hat{q}_j(x) - \lambda \hat{c}_j(x)$ . Let $\lambda_1 < \lambda_2$ be such that $[\lambda_1, \lambda_2] \cap \Lambda = \emptyset$ . Then, $S_{\lambda_1} = S_{\lambda_2}$ . Furthermore, if $[\lambda_1, \lambda_2] \cap \Lambda = \{\lambda^*\}$ , then $S_{\lambda} \subset S_{\lambda^*}$ for all $\lambda \in [\lambda_1, \lambda_2]$ .
+
+Proof. We first show the first statement by showing that $S_{\lambda_1} \setminus S_{\lambda_2} = \emptyset$ . $S_{\lambda_2} \setminus S_{\lambda_1} = \emptyset$ follows analogously. Suppose there exists a routing strategy $s \in S_{\lambda_1} \setminus S_{\lambda_2}$ . Since $s \notin S_{\lambda_2}$ , there must exist an $x \in \mathcal{X}$ and model $i$ such that $s_i(x) > 0$ and $\hat{q}_i(x) - \lambda_2 \hat{c}_i(x) < \max_j \hat{q}_j(x) - \lambda_2 \hat{c}_j(x)$ . Let $j$ be an index such that $\hat{q}_i(x) - \lambda_2 \hat{c}_i(x) < \hat{q}_j(x) - \lambda_2 \hat{c}_j(x)$ . Since $s \in S_{\lambda_1}$ , we have $\hat{q}_i(x) - \lambda_1 \hat{c}_i(x) \geqslant \hat{q}_j(x) - \lambda_1 \hat{c}_j(x)$ . By continuity, there exists a $\lambda \in [\lambda_1, \lambda_2]$ such that $\hat{q}_i(x) - \lambda \hat{c}_i(x) = \hat{q}_j(x) - \lambda \hat{c}_j(x)$ , which is a contradiction with $[\lambda_1, \lambda_2] \cap \Lambda = \emptyset$ .
+
+Now suppose $[\lambda_1, \lambda_2] \cap \Lambda = \{\lambda^*\}$ . Let $\lambda \in [\lambda_1, \lambda^*)$ be arbitrary and let $s \in S_{\lambda}$ be arbitrary. We show that $s \in S_{\lambda^*}$ . For $\lambda \in (\lambda^*, \lambda_2]$ , the proof is completely analogous. By contradiction, suppose there exists an $x \in \mathcal{X}$ and model $i$ such that $s_i(x) > 0$ and $\hat{q}_i(x) - \lambda^* \hat{c}_i(x) < \max_j \hat{q}_j(x) - \lambda^* \hat{c}_j(x)$ . This means there exists a model $j$ such that $\hat{q}_i(x) - \lambda^* \hat{c}_i(x) < \hat{q}_j(x) - \lambda^* \hat{c}_j(x)$ . Since $s \in S_{\lambda}$ , we know that $\hat{q}_i(x) - \lambda \hat{c}_i(x) \geqslant \hat{q}_j(x) - \lambda \hat{c}_j(x)$ . This implies that there must exist a $\lambda' \in [\lambda_1, \lambda^*)$ such that $\hat{q}_i(x) - \lambda' \hat{c}_i(x) = \hat{q}_j(x) - \lambda' \hat{c}_j(x)$ . However, this is a contradiction with $[\lambda_1, \lambda^*) \cap \Lambda = \emptyset$ . Thus, $s \in S_{\lambda^*}$ .
+
+In what follows, we will assume that $|\Lambda| < \infty$ . This is a very mild assumption: for instance, it is trivially satisfied if $\hat{q}$ and $\hat{c}$ take only finitely many values. Since the estimators are implemented on a computer with finite precision, $\hat{q}$ and $\hat{c}$ indeed take only finitely many values.
+
+Lemma 5. Let $\lambda_1 < \lambda_2$ and $s^{(1)}$ , resp. $s^{(2)}$ be arbitrary routing strategies in $S_{\lambda_1}$ , resp. $S_{\lambda_2}$ , with costs resp. $B_1$ and $B_2$ . Then, for any $B \in [B_1, B_2]$ there exists a $\lambda \in [\lambda_1, \lambda_2]$ such that $S_{\lambda}$ contains a routing strategy that has exactly cost $B$ .
+
+Proof. Let $B \in [B_1, B_2]$ be arbitrary. If $B = B_1$ or $B = B_2$ , the statement is trivially true. Therefore, suppose $B \in (B_1, B_2)$ . Let $\Lambda$ be as defined in Lemma 4. By Lemma 3, there exists a $\lambda^* \in [\lambda_1, \lambda_2]$ such that all strategies in $S_{\lambda}$ for $\lambda < \lambda^*$ , resp. $\lambda > \lambda^*$ , have cost at least, resp. at most, $B$ . If $\lambda^* \notin \Lambda$ , then the first part of Lemma 4, together with $|\Lambda| < \infty$ , implies that $S_{\lambda^*} = S_{\lambda^* - \epsilon} = S_{\lambda^* + \epsilon}$ for some $\epsilon > 0$ . All the strategies in $S_{\lambda^*}$ must therefore have cost both at least and at most $B$ , meaning their cost must equal exactly $B$ . We can therefore assume that $\lambda^* \in \Lambda$ . By Lemma 4 and $|\Lambda| < \infty$ , there is an $\epsilon > 0$ such that $S_{\lambda^* - \epsilon} \subset S_{\lambda^*}$ and $S_{\lambda^* + \epsilon} \subset S_{\lambda^*}$ . Let $s^- \in S_{\lambda^* - \epsilon}$ and $s^+ \in S_{\lambda^* + \epsilon}$ be arbitrary, and denote their costs by $B^-$ and $B^+$ , respectively; by the choice of $\lambda^*$ , we have $B^+ \leqslant B \leqslant B^-$ . Let $s^\gamma$ be the convex combination of $s^-$ and $s^+$ with weight $\gamma \in [0,1]$ . Since $s^-, s^+ \in S_{\lambda^*}$ , we have $s^\gamma \in S_{\lambda^*}$ by Lemma 2, and the cost of $s^\gamma$ is $\gamma B^- + (1 - \gamma)B^+$ . Since $B \in [B^+, B^-]$ , there exists a $\gamma \in [0,1]$ such that $s^\gamma$ has cost exactly $B$ .
+
+We can now prove the theorem.
+
+Proof. If $S_0$ contains a solution that has cost less than or equal to $B$ , then this solution trivially achieves the optimal quality. Thus, for the rest of the proof we can assume that the cost of every solution in $S_0$ is greater than $B$ . For $\lambda \to \infty$ , $S_{\lambda}$ contains the solution that assigns all probability mass to the model with the lowest cost. Since there is an admissible solution, this solution necessarily has cost at most $B$ . Therefore, by Lemma 5, there exists a $\lambda^{*} \in \mathbb{R}^{+}$ such that $S_{\lambda^{*}}$ contains a routing strategy that has exactly cost $B$ .
+
+Let $s$ be an arbitrary routing strategy in $\bigcup_{\lambda \in \mathbb{R}^{+}} S_{\lambda}$ that has cost $B$ . Specifically, let $s \in S_{\lambda}$ . Let $s'$ be any other routing strategy that is an admissible solution to the optimization problem. Then:
+
+$$
+\begin{array}{l} \mathbb{E}_{x \sim \mathcal{X}} \left[ \sum_{i=1}^{k} s_i^{\prime}(x) \hat{q}_i(x) \right] = \mathbb{E}_{x \sim \mathcal{X}} \left[ \sum_{i=1}^{k} s_i^{\prime}(x) \hat{q}_i(x) \right] - \lambda B + \lambda B \\ \leqslant \mathbb{E}_{x \sim \mathcal{X}} \left[ \sum_{i=1}^{k} s_i^{\prime}(x) (\hat{q}_i(x) - \lambda \hat{c}_i(x)) \right] + \lambda B \\ \leqslant \mathbb{E}_{x \sim \mathcal{X}} \left[ \sum_{i=1}^{k} s_i(x) (\hat{q}_i(x) - \lambda \hat{c}_i(x)) \right] + \lambda B \\ = \mathbb{E}_{x \sim \mathcal{X}} \left[ \sum_{i=1}^{k} s_i(x) \hat{q}_i(x) \right] \end{array}
+$$
+
+Thus, $s$ achieves the optimal quality.
+
+# B. Cascading
+
+To prove Theorem 2, we heavily rely on the results derived in App. A. As explained in §3, cascading can be reinterpreted as a sequence of routing problems. However, to prove optimality, we need to be slightly more careful with the exact formulation of the problem.
+
+At step $j$ , the cascading strategy needs to decide whether to stop the cascade or to continue to the next model. It should continue to the next model if any of the supermodels $M_{1:j}, \ldots, M_{1:k}$ is better to run than $M_{1:j-1}$ for some measure of 'better'. Therefore, the cascading strategy is indeed performing a routing operation between the supermodels $M_{1:j-1}, \ldots, M_{1:k}$ .
+
+However, the optimization problem does slightly change compared to the routing problem. First of all, for each query $x \in \mathcal{X}$ , the cascade may have been stopped before step $j$ . For such a query, the cascade should not aim to optimize quality at step $j$ , since the decision has no effect on the overall quality of the cascade. Furthermore, the budget $B$ is only enforced over the entire cascade, not over the individual steps. Since the problem changes across steps, the cost of the router at step $j$ need not be exactly equal to $B$ .
+
+Therefore, we reformulate cascading using an inner and outer optimization problem. The inner optimization problem aims to find the optimal routing strategy at step $j$ for a given budget $B_{j}$ . The outer optimization problem aims to find the optimal budget $B_{j}$ for each step $j$ such that the overall quality of the cascade is maximized under the constraint that the total cost of the cascade is at most $B$ .
+
+To formulate this more exactly, let $P_{j}(M)$ be the probability that the cascade computed supermodel $M$ by step $j$ . Then, the inner optimization problem at step $j$ can be formulated as:
+
+$$
+\max _ {r ^ {(j)}} \mathbb {E} _ {x \sim \mathcal {X}} \left[ P _ {j} (M _ {1: j - 1}) \sum_ {i = j - 1} ^ {k} r _ {1: i} (x) \hat {q} _ {1: i} ^ {(j)} (x) \right]
+$$
+
+$$
+\text{s.t.} \quad \mathbb{E}_{x \sim \mathcal{X}} \left[ P_j \left(M_{1:j-1}\right) \sum_{i=j-1}^{k} r_{1:i}(x) \hat{c}_{1:i}^{(j)}(x) \right] \leqslant B_j \tag{4}
+$$
+
+$$
+\forall i \in \{j - 1, \dots , k \}: \forall x \in \mathcal {X}: r _ {1: i} (x) \geq 0 \wedge \sum_ {i = j - 1} ^ {k} r _ {1: i} (x) = 1
+$$
+
+Note that $P_{j}(M_{1:j-1})$ can be incorporated in the quality and cost estimates. This leaves us with the exact same optimization problem as the routing problem, but with a different budget $B_{j}$ . Since the chosen model only depends on the maximization of $P_{j}(M_{1:j-1})\hat{q}_{i}^{(j)}(x) - \lambda_{j}P_{j}(M_{1:j-1})\hat{c}_{i}^{(j)}(x)$ , the probability $P_{j}(M_{1:j-1})$ can be divided out of the optimization problem.
+
+The inner optimization problems establish the existence of optimal routing strategies at each step $j$ with parameters $\lambda_{j}$ . We note that only one parameter $\gamma$ is needed to determine the convex combination, since the budget $B$ is only enforced over the entire cascade.
+
+Let us denote the quality and cost of the entire cascading strategy for given parameters $\lambda_1, \dots, \lambda_k$ and $\gamma$ as $Q(\lambda_1, \dots, \lambda_k, \gamma)$ and $C(\lambda_1, \dots, \lambda_k, \gamma)$ respectively. Then, the outer optimization problem can be formulated as:
+
+$$
+\max _ {\lambda_ {1}, \dots , \lambda_ {k}, \gamma} Q \left(\lambda_ {1}, \dots , \lambda_ {k}, \gamma\right) \tag {5}
+$$
+
+$$
+\begin{array}{l l} \text{s.t.} & C(\lambda_1, \dots, \lambda_k, \gamma) \leqslant B \end{array}
+$$
+
+To solve this outer optimization problem, we simply perform a search over the budgets $B_{1},\ldots ,B_{k}$ using hyperparameter optimization as discussed in §3.
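A minimal sketch of such an outer search (our own illustration; `evaluate` is a hypothetical callback returning the full-cascade quality and cost for given parameters, and random search stands in for the hyperparameter optimizer discussed in §3):

```python
import random

def hyperparameter_search(evaluate, k, budget_B, n_trials=200, seed=0):
    """Random search over (lambda_1..lambda_k, gamma): keep the highest-quality
    configuration whose total cascade cost stays within the budget.
    `evaluate` maps (lambdas, gamma) -> (quality, cost) of the full cascade."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        lambdas = [rng.uniform(0.0, 10.0) for _ in range(k)]
        gamma = rng.uniform(0.0, 1.0)
        quality, cost = evaluate(lambdas, gamma)
        if cost <= budget_B and (best is None or quality > best[0]):
            best = (quality, lambdas, gamma)
    return best
```

Any black-box hyperparameter optimizer can replace the random search here; the only requirement is that the budget constraint is checked on the cost of the entire cascade.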
+
+# B.1. Prior Approximations
+
+We now prove Corollary 1. Before doing so, we first need to define what we exactly mean by equivalency. For this purpose, let $\mathcal{C}_1$ be defined as follows:
+
+$$
+\mathcal{C}_1 = \left\{ s \mid s \text{ is a cascading strategy with parameters } \lambda_1, \dots, \lambda_k, \gamma = 0 \text{ using estimates } \hat{q}^{(j)}, \hat{c}^{(j)} \right\}
+$$
+
+Similarly, let $\mathcal{C}_2$ be defined as follows:
+
+$$
+\mathcal{C}_2 = \left\{ s \mid s \text{ is a thresholding strategy with parameters } \tau_1, \dots, \tau_k \text{ using estimates } \hat{q}^{(j)}, \hat{c}^{(j)} \right\}
+$$
+
+We note that we set $\gamma = 0$ since the thresholding strategy is deterministic. We therefore restrict the cascading strategy to be deterministic as well.
+
+We define the equivalence between the two sets as follows:
+
+Definition 6 (Equivalence of Strategies). We say a set of strategies $\mathcal{C}_1$ is equivalent to another set of strategies $\mathcal{C}_2$ , denoted as $\mathcal{C}_1 \equiv \mathcal{C}_2$ , if for all $s_0 \in \mathcal{C}_1 \cup \mathcal{C}_2$ there exists a $s_1 \in \mathcal{C}_1$ , and a $s_2 \in \mathcal{C}_2$ such that for all $x \in \mathcal{X}$ , $s_0$ , $s_1$ and $s_2$ take the same decisions on $x$ .
+
+We can now more accurately state the conditions under which the thresholding strategy is equivalent to the optimal strategy.
+
+Corollary 2 (Optimal Thresholding Strategy). Let $\mathcal{C}_1, \mathcal{C}_2$ be defined as above. Then, $\mathcal{C}_1 \equiv \mathcal{C}_2$ if and only if there exist alternative quality and cost estimates $\hat{q}_i^{(j)'}(x)$ and $\hat{c}_i^{(j)'}(x)$ with associated set of cascading strategies $\mathcal{C}_1'$ such that $\mathcal{C}_1 \equiv \mathcal{C}_1'$ and the following conditions hold on these alternative estimates: $\hat{c}_i^{(j)'}(x)$ is independent of $x$ and greater than $0$ , $\hat{q}_i^{(j)'}(x)$ is independent of $x$ for all $i \geq j$ , and $\hat{q}_{1:i}^{(j)'}(x)$ is equal to $\hat{q}_i^{(j)'}(x)$ .
+
+The main difference between Corollary 2 and Corollary 1 is that we allow for alternative quality and cost estimates. However, this does not really influence equivalency in the intuitive sense. Indeed, one could alternatively phrase the corollary as follows: the thresholding strategy is equivalent to any of our cascading strategies if and only if it is possible to construct alternative estimates such that the conditions hold.
+
+Proof. We note that the cascade $s \in \mathcal{C}_1$ continues on a sample if the following condition holds:
+
+$$
+\hat {q} _ {1: j - 1} ^ {(j)} (x) - \lambda_ {j} \hat {c} _ {1: j - 1} ^ {(j)} (x) < \max _ {i \in \{j, \dots , k \}} \hat {q} _ {1: i} ^ {(j)} (x) - \lambda_ {j} \hat {c} _ {1: i} ^ {(j)} (x) \tag {6}
+$$
+
+If $\mathcal{C}_1\equiv \mathcal{C}_1'$ , it is clear that Eq. (6) reduces to the thresholding strategy for all strategies in $\mathcal{C}_1^\prime$ . Indeed, for any $s\in \mathcal{C}_1^\prime$ , set $\tau_{j} = \max_{i\in \{j,\dots,k\}}\hat{q}_{1:i}^{(j)} - \lambda_{j}\hat{c}_{1:i}^{(j)}$ and the thresholding strategy is equivalent to $s$ . Alternatively, if $s\in \mathcal{C}_2$ , suppose $\max_{i\in \{j,\dots,k\}}\hat{q}_{1:i}^{(j)} - \lambda_{j}\hat{c}_{1:i}^{(j)} = \hat{q}_{1:i^*}^{(j)} - \lambda_{j}\hat{c}_{1:i^*}^{(j)}$ for some index $i^*$ . Then, set $\lambda_{j} = \tau_{j} / \hat{c}_{1:i^*}^{(j)} - \hat{q}_{1:i^*}^{(j)} / \hat{c}_{1:i^*}^{(j)}$ and the cascading strategy is equivalent to $s$ . Therefore, $\mathcal{C}_1\equiv \mathcal{C}_1'\equiv \mathcal{C}_2$ .
+
+Suppose now that $\mathcal{C}_1\equiv \mathcal{C}_2$ . We construct alternative quality and cost estimates $\hat{q}_i^{(j)'}(x)$ and $\hat{c}_i^{(j)'}(x)$ such that the conditions hold and such that $\mathcal{C}_1\equiv \mathcal{C}_1'$ . For this purpose, we define $\hat{c}_i^{(j)'}(x) = 1$ for all $i,j\in \{1,\ldots ,k\}$ , $\hat{q}_i^{(j)'}(x) = 1$ for all $i\geqslant j$ , and $\hat{q}_i^{(j)'}(x) = \hat{q}_i^{(j)}(x)$ otherwise. Furthermore, we set $\hat{q}_{1:i}^{(j)'}(x) = \hat{q}_i^{(j)'}(x)$ for all $i,j\in \{1,\dots ,k\}$ . The equivalence of $\mathcal{C}_1^\prime$ and $\mathcal{C}_2$ can now be proven analogously to the previous paragraph. Therefore, $\mathcal{C}_1\equiv \mathcal{C}_1'\equiv \mathcal{C}_2$ .
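As a sanity check on the continuation rule in Eq. (6), the following sketch (our own, using 0-indexed lists where `q_hat[i]` and `c_hat[i]` stand for $\hat{q}_{1:i+1}^{(j)}(x)$ and $\hat{c}_{1:i+1}^{(j)}(x)$) decides whether a cascade continues past step $j$:

```python
def should_continue(q_hat, c_hat, j, lam):
    """Eq. (6): continue past supermodel M_{1:j-1} iff some longer supermodel
    achieves a strictly better quality-cost tradeoff at penalty lam."""
    current = q_hat[j - 1] - lam * c_hat[j - 1]
    best_rest = max(q_hat[i] - lam * c_hat[i] for i in range(j, len(q_hat)))
    return current < best_rest
```

For a small $\lambda_j$ the quality term dominates and the cascade keeps going; for a large $\lambda_j$ the cost penalty stops it, which is exactly the threshold-like behavior the corollary characterizes.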
+
+# C. Cascade Routing
+
+We first note that the proof of the optimality of the cascade routing strategy is equivalent to the proof of the optimality of the cascading strategy, except that the expectation in the optimization problem Eq. (4) is now not only over $x \sim \mathcal{X}$ , but also over all possible supermodels that were computed by step $j - 1$ . However, this does not change the optimization problem, and the proof is completely analogous to the proof given in §3. Thus, all we need to prove is Lemma 1. To do so, we first prove the following auxiliary lemma.
+
+Lemma 6. Let $Q_{1}, \ldots, Q_{k}$ be distributions. Let $\mathcal{S}$ be the power set of $\{1, \ldots, k\}$ . Then $f: \mathcal{S} \to \mathbb{R} \cup \{-\infty\}$ defined as $f(S) = \mathbb{E}(\max_{i \in S} Q_{i})$ is submodular. Here, we define $\max_{i \in \emptyset} Q_{i} = -\infty$ .
+
+Proof. Let $T \subset S \subset \{1, \ldots, k\}$ and $j \in \{1, \ldots, k\}$ be arbitrary. To show the submodularity of $f$ , we need to show that
+
+$$
+f (T \cup \{j \}) - f (T) \geq f (S \cup \{j \}) - f (S).
+$$
+
+We can write:
+
+$$
+\begin{array}{l} f (S \cup \{j \}) - f (S) = \mathbb {E} (\max _ {i \in S \cup \{j \}} Q _ {i}) - \mathbb {E} (\max _ {i \in S} Q _ {i}) \\ = \mathbb {E} (\max (0, Q _ {j} - \max _ {i \in S} Q _ {i})) \\ \leqslant \mathbb {E} (\max (0, Q _ {j} - \max _ {i \in T} Q _ {i})) \\ = \mathbb {E} (\max _ {i \in T \cup \{j \}} Q _ {i}) - \mathbb {E} (\max _ {i \in T} Q _ {i}) \\ = f (T \cup \{j \}) - f (T). \\ \end{array}
+$$
+
+In the proof, we needed $\max_{i\in \emptyset}Q_i = -\infty$ in the case $T = \emptyset$ .
+
+We note that the assertion that $\max_{i\in \emptyset}Q_i = -\infty$ corresponds to the fact that giving no answer to a query has quality $-\infty$ . We can now prove Lemma 1.
+
+Proof. Let $M$ and $m$ be as in the lemma. Suppose $M'$ is a supermodel that contains all models in $M$ . Furthermore, let $M'' = M' \setminus m$ . We show that the supermodel $M''$ is always strictly preferred over $M'$ . To see this, we note that the difference between $\tau_{M'}(x, \lambda)$ and $\tau_{M''}(x, \lambda)$ is equal to
+
+$$
+\mathbb{E}\big(\max_{m^{\prime} \in M^{\prime}} \hat{q}_{m^{\prime}}(x)\big) - \mathbb{E}\big(\max_{m^{\prime} \in M^{\prime\prime}} \hat{q}_{m^{\prime}}(x)\big) - \lambda \hat{c}_{m}(x).
+$$
+
+By Lemma 6, this difference is at most $\hat{q}_M(x) - \hat{q}_{M\setminus \{m\}}(x) - \lambda\hat{c}_m(x)$ , which is negative by assumption. Therefore, $M''$ is always preferred over $M'$ , which concludes the proof.
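Since the submodularity argument of Lemma 6 holds pointwise per sample, a Monte-Carlo check (our own sketch; the distributions and the sets $T$, $S$, $j$ are arbitrary choices) satisfies the diminishing-returns inequality exactly on any empirical sample:

```python
import numpy as np

def f(samples, S):
    """Empirical estimate of E[max_{i in S} Q_i]; the empty set maps to -inf."""
    if not S:
        return -np.inf
    return samples[:, sorted(S)].max(axis=1).mean()

rng = np.random.default_rng(0)
samples = rng.normal(size=(100_000, 4))  # draws from Q_1, ..., Q_4

T, S, j = {0}, {0, 1, 2}, 3              # T subset of S, j outside S
gain_T = f(samples, T | {j}) - f(samples, T)   # marginal gain of adding j to T
gain_S = f(samples, S | {j}) - f(samples, S)   # marginal gain of adding j to S
```

The inequality `gain_T >= gain_S` holds deterministically here because `max(0, Q_j - max_T)` dominates `max(0, Q_j - max_S)` on every individual sample, mirroring the step in the proof of Lemma 6.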
+
+Table 3: Standard deviations of the noise levels on the RouterBench dataset.
+
+| Noise level | Quality $\sigma_{\text{before}}$ | Quality $\sigma_{\text{after}}$ | Cost $\sigma_{\text{before}}$ | Cost $\sigma_{\text{after}}$ |
+| --- | --- | --- | --- | --- |
+| LOW | 0.6 | 0.3 | 0.0002 | 0.00005 |
+| MEDIUM | 1.6 | 0.8 | 0.0004 | 0.0001 |
+| HIGH | 2.4 | 1.2 | 100 | 100 |
+
+# D. Experimental Details
+
+We describe some additional details about the experimental setup and the datasets used in our experiments.
+
+# D.1. RouterBench
+
+Data Split We use $5\%$ of the RouterBench data (around 2000 samples) to optimize the hyperparameters of cascading, routing, and cascade routing. The remaining $95\%$ is used for evaluation. We use the same data split for all noise levels.
+
+Noise In Table 3 we specify the standard deviations of the noise levels on the RouterBench dataset. To put these numbers into context, we note that quality varies between 0 and 1, and the average cost of the smallest models is 0.000073, while the average cost of the largest models is 0.003281. We fit a logistic regression model on this noisy signal to obtain the quality and cost estimates. This simulates the noise in the features that are used to estimate the quality and cost of the models.
+
+Models In the evaluated scenarios with three models, we use the models MIXTRAL-8X7B-CHAT, GPT-3.5-TURBO-1106, and GPT-4-1106-PREVIEW. When using five models, we add WIZARDLM-13B-V1.2 and CLAUDE-V2 to the mix. For eleven models, we use all models available in the benchmark.
+
+# D.2. Accurate Quality Estimation
+
+Data Split For the SWE-Bench benchmark, we use its verified data split and divide the dataset into training and calibration subsets, each comprising $50\%$ of the data. For the Minerva Math and LiveCodeBench benchmarks, we include only the Algebra portion of Minerva Math to ensure that both benchmarks have a comparable number of samples for evaluation. Similarly, we perform a $50\%$ split of this dataset into training and calibration sets.
+
+Evaluation Setting For the SWE-Bench evaluation, we analyze the performance of 10 models submitted to the benchmark's leaderboard. The logs for these models were obtained from the official SWE-Bench repository. Specifically, we evaluated the following models:
+
+- 20240402_sweagent_claude3opus
+- 20241007_nfactorial
+- 20240728_sweagent_gpt4o
+- 20240620_sweagent_claude3.5sonnet
+- 20241016_epam-ai-run-gpt-4o
+- 20240824_gru
+- 20241106_navie-2-gpt4o-sonnet
+- 20240820_epam-ai-run-gpt-4o
+- 20241202_agentless-1.5_claude-3.5_sonnet-20241022
+- 20241028_agentless-1.5_gpt4o
+
+For each model, we extract the time required to complete a task to measure cost.
+
+For LiveCodeBench and Minerva Math, we evaluate the following models:
+
+- QWEN-2.5-CODER-7B-INSTRUCT
+- QWEN-2.5-CODER-1.5B-INSTRUCT
+- QWEN-2.5-MATH-7B-INSTRUCT
+- QWEN-2.5-MATH-1.5B-INSTRUCT
+
+We conduct experiments using version 5 of the LiveCodeBench benchmark from its official repository. For Minerva Math, we utilize the LM Evaluation Harness (Gao et al., 2024) to ensure consistent and reliable evaluation.
+
+Cost Estimation For SWE-Bench, the cost is defined as the time (in seconds) that a model takes to complete a task. A linear regression model is fitted to predict this cost based on the query length and, when available, the cost of running other models.
+
+For LiveCodeBench and Minerva Math, the cost is calculated as the total number of tokens in both the query and the answer, multiplied by the size of the model (in billions of parameters). Similar to SWE-Bench, a linear model is used to predict the cost based on query length and other models' costs.
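A least-squares sketch of such a linear cost predictor (our own illustration; the exact features and fitting procedure in the paper's code may differ):

```python
import numpy as np

def fit_cost_model(query_lengths, other_costs, costs):
    """Fit cost ~ a * query_length + b * other_model_cost + intercept
    by ordinary least squares."""
    X = np.column_stack([query_lengths, other_costs, np.ones(len(costs))])
    coef, *_ = np.linalg.lstsq(X, costs, rcond=None)
    return coef

def predict_cost(coef, query_length, other_cost):
    """Predict the cost of a new query from the fitted coefficients."""
    return coef[0] * query_length + coef[1] * other_cost + coef[2]
```

When no other model has run yet, the `other_costs` feature can simply be dropped or replaced by a training-set average, matching the fallback described above.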
+
+Quality Estimation For ex-ante quality estimation in SWE-Bench, we train a logistic regression model that predicts quality based on the query length and a one-hot encoded variable representing the query's source repository. Post-hoc quality estimation leverages the ground-truth quality scores computed during evaluation.
+
+For ex-ante quality estimation in Minerva Math and LiveCodeBench, we include the query length, query source (Minerva Math or LiveCodeBench), and the difficulty level of the problem as defined by the benchmark. Post-hoc quality estimation incorporates additional information, such as whether the parsed answers from different models agree with one another.
+
+# D.3. Poor Quality Estimation
+
+Data Split We split each dataset in each benchmark into a training set and a test set, each comprising $50\%$ of the data. For all datasets except GSM8k, the training set is created by splitting the original test data. In the case of GSM8k, since a separate training set is already available, we use this pre-existing training data, leaving the original test set unchanged. The training set is then further divided, with $50\%$ used for training quality and cost estimators, and the remaining $50\%$ reserved for hyperparameter optimization through validation.
+
+Evaluation Setting We use completion-based evaluation in a one-shot setting for each benchmark. For the classification tasks, we obtain the probability associated with each class ("A", "B", "C", ...) from the model directly. For open-form reasoning tasks, we extract the answer by instructing the model to generate a completion that ends with an extractable answer. If the model does not output an answer in the correct format, we perform a best-effort extraction by trying various regex patterns. Details on the prompts and regex patterns used for each benchmark are provided in the code repository.
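The best-effort extraction step can be sketched as follows (illustrative patterns only; the actual prompts and regexes are in the paper's code repository):

```python
import re

def extract_answer(completion):
    """Try the expected answer format first, then fall back to looser patterns."""
    patterns = [
        r"[Aa]nswer\s*[:=]\s*([^\n\.]+)",   # e.g. "Answer: 42"
        r"\\boxed\{([^{}]+)\}",              # LaTeX \boxed{42}
        r"(-?\d+(?:\.\d+)?)\s*$",            # trailing number as last resort
    ]
    for pat in patterns:
        m = re.search(pat, completion.strip())
        if m:
            return m.group(1).strip()
    return None                              # extraction failed
```

Ordering the patterns from strict to loose keeps the extraction faithful when the model follows the instructed format, while still recovering an answer from malformed completions.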
+
+Models For the LLAMA-3.1 model family, we use the models LLAMA-3.1-8B-INSTRUCT, LLAMA-3.1-70B-INSTRUCT, and LLAMA-3.1-405B-INSTRUCT. For the GEMMA model family, we use the models GEMMA-2B-INSTRUCT, GEMMA-2-9B-INSTRUCT, and GEMMA-2-27B-INSTRUCT. For the MISTRAL model family, we use the models MISTRAL-7B-INSTRUCT-V0.3, MIXTRAL-8x7B-INSTRUCT-V0.1, and MIXTRAL-8x22B-INSTRUCT-V0.1.
+
+Cost Estimation For cost estimation, we first calculate the number of tokens in both the query and the model's response. We then use API-based prices per token for each model to estimate the cost. In classification, where responses consist of a single token, the cost can be determined before running the model. In open-form reasoning tasks, where response lengths vary, we estimate this length based on responses from previous models in the cascade if the model has not yet been computed. If no model response is available, we estimate the response length using the average from the training data.
+
+Table 4: AUC scores in % for different strategies on RouterBench in the 0-shot setting with ${2\sigma }$ confidence intervals.
+
+| Strategy | 3 Models (Low) | 3 Models (Med) | 3 Models (High) | 5 Models (Low) | 5 Models (Med) | 5 Models (High) | 11 Models (Low) | 11 Models (Med) | 11 Models (High) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Cascade Routing (Ours) | 82.37 (+0.31/-0.32) | 76.57 (+0.34/-0.35) | 73.23 (+0.37/-0.38) | 84.33 (+0.29/-0.29) | 76.32 (+0.34/-0.37) | 72.75 (+0.36/-0.40) | 87.24 (+0.23/-0.26) | 77.58 (+0.30/-0.33) | 74.41 (+0.33/-0.36) |
+| - Routing | 2.64 (+0.15/-0.16) | 1.59 (+0.13/-0.15) | 1.40 (+0.15/-0.17) | 3.10 (+0.17/-0.15) | 1.88 (+0.17/-0.16) | 1.41 (+0.17/-0.17) | 4.00 (+0.17/-0.21) | 2.94 (+0.20/-0.21) | 1.73 (+0.19/-0.19) |
+| - Cascade (Baseline) | 1.50 (+0.12/-0.12) | 1.91 (+0.18/-0.19) | 0.74 (+0.19/-0.18) | 2.00 (+0.17/-0.15) | 3.29 (+0.26/-0.27) | 3.22 (+0.24/-0.24) | 2.76 (+0.14/-0.14) | 3.92 (+0.28/-0.28) | 4.61 (+0.28/-0.28) |
+| - Cascade (Ours) | 1.28 (+0.12/-0.11) | 0.39 (+0.15/-0.14) | 0.54 (+0.18/-0.17) | 1.27 (+0.10/-0.11) | 1.14 (+0.20/-0.21) | 2.57 (+0.21/-0.26) | 2.77 (+0.13/-0.13) | 2.46 (+0.22/-0.24) | 4.14 (+0.25/-0.27) |
+
+Table 5: AUC scores in % for different strategies on RouterBench in the 5-shot setting across model and noise levels with ${2\sigma }$ confidence intervals. Bold numbers indicate that the confidence interval contains zero.
+
+| Strategy | 3 Models (Low) | 3 Models (Med) | 3 Models (High) | 5 Models (Low) | 5 Models (Med) | 5 Models (High) | 11 Models (Low) | 11 Models (Med) | 11 Models (High) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Cascade Routing (Ours) | 83.79 (+0.3/-0.33) | 78.85 (+0.29/-0.34) | 77.1 (+0.32/-0.35) | 85.5 (+0.26/-0.26) | 78.77 (+0.32/-0.33) | 76.74 (+0.34/-0.36) | 88.78 (+0.22/-0.22) | 80.89 (+0.28/-0.29) | 78.03 (+0.3/-0.31) |
+| - Routing | 2.3 (+0.14/-0.13) | 1.64 (+0.15/-0.15) | 1.1 (+0.13/-0.14) | 3.08 (+0.16/-0.14) | 1.94 (+0.16/-0.14) | 1.21 (+0.14/-0.15) | 3.43 (+0.15/-0.17) | 3.13 (+0.19/-0.2) | 1.6 (+0.17/-0.16) |
+| - Cascade (Baseline) | -0.64 (+0.11/-0.1) | 0.28 (+0.13/-0.14) | 0.22 (+0.16/-0.16) | 1.23 (+0.12/-0.12) | 2.19 (+0.21/-0.21) | 2.83 (+0.24/-0.24) | 1.64 (+0.12/-0.13) | 2.29 (+0.24/-0.24) | 3.09 (+0.27/-0.26) |
+| - Cascade (Ours) | 1.02 (+0.1/-0.09) | 0.09 (+0.11/-0.11) | 0.1 (+0.14/-0.14) | 1.25 (+0.1/-0.09) | 1.59 (+0.17/-0.17) | 2.45 (+0.21/-0.21) | 2.06 (+0.1/-0.1) | 2.22 (+0.21/-0.19) | 2.95 (+0.23/-0.24) |
+
+Features Quality Estimates We specify the exact features used for the logistic regression model that serves as the quality estimator in §5.2. First, we include a one-hot encoding of the various datasets in each benchmark. Furthermore, for classification, we include the probability associated with the highest class and the entropy of the class probabilities if the model has been computed. If several models have been computed, we include both whether they agree on their prediction, and the JS-divergence between their class probabilities. For open-form reasoning, we include the perplexity, number of tokens, and several quantiles of the logits if the model has been computed, in accordance with Gupta et al. (2024). If several models have been computed, we also include whether they agree on their prediction.
+
+We note that we train a separate logistic regression model for each history of computed models, and for each model separately as well. Thus we have one linear model for each combination of a target model $m_{i}$ and computed models $m_{i_1},\ldots ,m_{i_j}$ . All the linear models are trained on the training set included in the benchmark.
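The entropy, agreement, and JS-divergence features mentioned above can be sketched as follows (our own minimal version with natural-log entropy; the exact feature pipeline is in the repository):

```python
import numpy as np

def entropy(p):
    """Shannon entropy (natural log) of a class-probability vector."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def js_divergence(p, q):
    """Jensen-Shannon divergence between two class distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return float((a[mask] * np.log(a[mask] / b[mask])).sum())
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def classification_features(p1, p2):
    """Illustrative feature subset for two computed models on one query."""
    return {
        "max_prob": float(max(p1)),                        # top-class probability
        "entropy": entropy(p1),                            # prediction uncertainty
        "agree": int(int(np.argmax(p1)) == int(np.argmax(p2))),  # model agreement
        "js": js_divergence(p1, p2),                       # distributional disagreement
    }
```

Each such feature vector feeds one of the per-history logistic regression models described in the next paragraph.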
+
+# E. Confidence Intervals
+
+To check whether the results obtained by cascade routing are significantly higher than our baselines in Tables 1, 2 and 10, we perform bootstrapping on the samples in the dataset. Specifically, we compute the confidence interval associated with the difference between the AUC scores of cascade routing and the baselines. If this difference is positive and its $2\sigma$ confidence interval does not contain zero, we can conclude that cascade routing is significantly better than the baseline. These confidence intervals are reported in Tables 4-6.
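A sketch of this bootstrap procedure (our own simplification: we bootstrap the mean of per-sample score differences as a stand-in for the full AUC statistic):

```python
import numpy as np

def bootstrap_ci(scores_a, scores_b, n_boot=10_000, seed=0):
    """2-sigma bootstrap confidence interval for the mean difference between
    two per-sample score arrays."""
    rng = np.random.default_rng(seed)
    diff = np.asarray(scores_a) - np.asarray(scores_b)
    n = len(diff)
    # resample with replacement and compute the mean difference per replicate
    means = diff[rng.integers(0, n, size=(n_boot, n))].mean(axis=1)
    mu, sigma = means.mean(), means.std()
    return mu - 2 * sigma, mu + 2 * sigma

def significantly_better(scores_a, scores_b):
    """True iff the difference is positive and its 2-sigma CI excludes zero."""
    lo, hi = bootstrap_ci(scores_a, scores_b)
    return lo > 0
```

The decision rule mirrors the criterion in the text: the method is declared significantly better only when the entire $2\sigma$ interval lies above zero.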
+
+# F. Additional Experiments
+
+# F.1. Ablation Study
+
+We conduct an ablation study to examine the impact of various design choices in cascade routing on performance and runtime. Runtime is a critical factor because the overhead introduced by the strategy must be negligible compared to the time required for model computation. If the strategy adds significant overhead, its performance gains may be offset by the increased runtime. We also include an additional ablation that specifically targets runtime on random data in App. F.2.
+
+To investigate this, we repeat the experiment from §5.1 when using all eleven models, testing different variations of cascade routing. We evaluate a slower variation that omits Lemma 1, thereby requiring more supermodels to be evaluated (SLOW); a greedy variation that only considers supermodels of length $j + 1$ at step $j$ (GREEDY); and a version that does not compute the expected value when evaluating supermodel quality, using the quality of the best model instead (NO-EXPECT).
+
+Results Table 7 presents the results. As expected, the SLOW variation is almost an order of magnitude slower while achieving similar performance. In contrast, both GREEDY and NO-EXPECT are faster but perform worse in the low- and medium-noise scenarios by $0.5\%$ to $1.3\%$ . Interestingly, there is a much smaller performance gap in the high-noise scenario. This is due to the very low variance in the quality estimates, since the linear model used for quality estimation predicts an almost constant value for each query in this scenario, making the expected value computation less important.
+
+Table 6: AUC scores on the realistic benchmarks with ${2\sigma }$ confidence intervals. Bold numbers indicate that the confidence interval contains zero.
+
+| Strategy | SWE-Bench (10 Models) | SWE-Bench (5 Models) | Math+Code (QWEN) | Classification (LLAMA) | Classification (GEMMA) | Classification (MISTRAL) | Open-Form (LLAMA) | Open-Form (GEMMA) | Open-Form (MISTRAL) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Cascade Routing (Ours) | 54.21 (+7.49/-7.17) | 51.20 (+7.44/-7.22) | 48.51 (+2.95/-2.99) | 75.56 (+1.22/-1.16) | 64.89 (+1.36/-1.42) | 65.02 (+1.40/-1.24) | 79.95 (+1.28/-1.33) | 59.70 (+1.62/-1.61) | 58.77 (+1.42/-1.50) |
+| - Routing | 13.65 (+4.50/-4.32) | 11.71 (+4.39/-4.14) | 1.11 (+0.81/-0.80) | 0.60 (+0.32/-0.34) | 0.39 (+0.50/-0.47) | 0.08 (+0.12/-0.10) | 0.56 (+0.38/-0.42) | 1.26 (+0.57/-0.55) | 0.02 (+0.05/-0.05) |
+| - Cascade (Baseline) | 15.36 (+3.90/-3.22) | 5.17 (+1.69/-1.63) | 10.77 (+1.71/-1.72) | 0.71 (+0.30/-0.28) | 10.51 (+0.62/-0.61) | 3.79 (+0.73/-0.77) | 0.65 (+0.23/-0.27) | 3.47 (+0.37/-0.40) | 10.45 (+1.15/-1.06) |
+| - Cascade (Ours) | 0.83 (+1.90/-1.50) | 0.11 (+1.05/-1.04) | 1.82 (+0.58/-0.55) | 0.06 (+0.15/-0.17) | 2.04 (+0.33/-0.31) | 1.67 (+0.41/-0.42) | 0.20 (+0.19/-0.19) | 2.00 (+0.25/-0.25) | 3.06 (+0.65/-0.63) |
+
+Table 7: AUC scores and average runtime for variations of cascade routing on RouterBench when using all eleven models.
+
+| Variant | Low-Noise AUC (%) | Low-Noise Time (ms) | Medium-Noise AUC (%) | Medium-Noise Time (ms) | High-Noise AUC (%) | High-Noise Time (ms) |
+| --- | --- | --- | --- | --- | --- | --- |
+| Cascade Routing | 87.29 | 15.26 | 77.61 | 9.53 | 74.41 | 13.68 |
+| SLOW | 87.30 | 78.88 | 77.61 | 87.72 | 74.40 | 88.76 |
+| GREEDY | 85.93 | 1.39 | 77.17 | 1.17 | 74.35 | 0.89 |
+| NO-EXPECT | 85.98 | 4.78 | 77.11 | 2.49 | 74.35 | 2.08 |
+
+Furthermore, the GREEDY and NO-EXPECT variants perform very similarly, while GREEDY is about twice as fast as NO-EXPECT. This suggests that one should almost always use the normal variant of cascade routing, and only consider the GREEDY variant if runtime is a critical concern. Neither the SLOW nor the NO-EXPECT variant is recommended, as they either perform worse or are significantly slower than the normal variant.
+
+# F.2. Runtime Analysis
+
+We further analyze the runtime of the four variants of cascade routing presented in App. F.1. Specifically, we perform experiments with random data, scaling the number of models up to 80 to evaluate the runtime of all variants. Furthermore, we include a fifth variant of cascade routing in the analysis, MAX-DEPTH, which restricts cascade routing to a maximum depth of 3 models. MAX-DEPTH does not reduce the performance of cascade routing if the optimal depth is at most 3 models, but it does significantly reduce its runtime.
+
+For each number of models, we generate 100 data points, each with random quality and cost estimates associated with each model. For each point, we generate the hyperparameters $\lambda_1,\dots,\lambda_k$ and $\gamma$ randomly. We then report the average runtime of the five variants of cascade routing in Fig. 3.
+
+The results show the varying computational complexity of the different variants of cascade routing. SLOW has the highest runtime and becomes computationally prohibitive even with fewer than 20 models. In contrast, standard cascade routing has a significantly lower runtime and can handle up to 40 models within a 1-second budget. Its faster variant, MAX-DEPTH, can handle up to 80 models within the same budget. Furthermore, we now also see a clear difference between NO-EXPECT and GREEDY: while GREEDY remains computationally very cheap even for 80 models, NO-EXPECT has a significantly higher runtime, even exceeding that of MAX-DEPTH at 80 models.
+
+Thus, the runtime analysis further supports the conclusions from App. F.1: GREEDY is the most efficient variant of cascade routing, while the normal variant is the most efficient one that does not compromise performance. MAX-DEPTH is a good choice if the optimal depth is known to be at most 3 models, as it significantly reduces runtime without compromising performance. Since cascades of more than 3 models are rare, MAX-DEPTH is a good choice in practice.
+
+
+Figure 3: Runtime of cascade routing variants for different numbers of models.
+
+Table 8: Classification AUC values for each benchmark separately for the experiment performed in §5.2.
+
+| Strategy | LLAMA MMLU | LLAMA ARC | LLAMA MixEval | GEMMA MMLU | GEMMA ARC | GEMMA MixEval | MISTRAL MMLU | MISTRAL ARC | MISTRAL MixEval |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Linear Interp. | 53.82 | 93.15 | 82.86 | 39.40 | 82.28 | 70.97 | 39.76 | 85.39 | 73.03 |
+| Routing | 55.32 | 93.12 | 82.86 | 40.01 | 83.13 | 73.12 | 40.61 | 85.64 | 74.28 |
+| Cascade (Baseline) | 54.80 | 94.08 | 84.15 | 36.43 | 77.53 | 66.10 | 36.99 | 83.88 | 72.73 |
+| Cascade (Ours) | 55.05 | 94.16 | 84.00 | 37.68 | 79.80 | 70.57 | 37.03 | 86.27 | 74.42 |
+| Cascade Routing (Ours) | 55.40 | 93.90 | 83.91 | 39.93 | 83.74 | 73.16 | 40.56 | 86.52 | 74.64 |
+
+Table 9: Open-form AUC values for each benchmark separately for the experiment performed in §5.2.
+
+| Strategy | LLAMA MMLU | LLAMA GSM8k | GEMMA MMLU | GEMMA GSM8k | MISTRAL MMLU | MISTRAL GSM8k |
+| --- | --- | --- | --- | --- | --- |
+| Linear Interp. | 65.64 | 94.43 | 36.52 | 73.86 | 41.40 | 67.84 |
+| Routing | 65.75 | 94.15 | 38.08 | 75.01 | 43.03 | 68.00 |
+| Cascade (Baseline) | 66.07 | 95.17 | 35.76 | 68.44 | 38.88 | 60.82 |
+| Cascade (Ours) | 66.25 | 94.94 | 38.16 | 71.10 | 40.76 | 64.53 |
+| Cascade Routing (Ours) | 66.60 | 94.69 | 40.43 | 75.25 | 42.93 | 68.30 |
+
+# G. Detailed Results
+
+We present benchmark-specific AUC values for the experiment performed in §5.2 in Table 8 for classification and Table 9 for open-form reasoning. In Fig. 4, we show the quality-cost tradeoff curves for several benchmarks. The curves are obtained by varying the cost threshold $\lambda$ and plotting the resulting accuracy and cost values in a curve.
+
+# H. Additional Results
+
+In Table 10 we report the AUC scores on the RouterBench dataset at different noise levels for the five-shot evaluation. Our conclusions from §5.1 remain consistent with the results presented in Table 10. However, there is one notable inconsistency: in two of the three low-noise scenarios, our cascading strategy performs worse than the threshold-based baseline cascade. In the three-model scenario, we trace the cause to the more difficult optimization surface for the hyperparameters of our cascading strategy. Specifically, our cascading strategy at some point starts to lose quality as cost increases. By simply fixing the hyperparameters, once quality starts to degrade, to the values at which the strategy achieved its highest quality, we obtain a quality of $83.35\%$ compared to the $83.17\%$ of the baseline cascade.
+
+In contrast, for low-noise and eleven models, a similar approach does not yield a better result. Rather, the discrepancy is caused by a small mismatch between the quality estimates of supermodels and the chosen model. While the quality estimate is based on the expected maximum of all models, we restrict the selected model to be the last model that was computed in the cascade. Since the expected maximum is higher than the quality of the last model, this discrepancy can lead to suboptimal decisions. By allowing both the baseline cascade and our cascading strategy to select the model with the highest quality estimate, we find that our cascading strategy once again outperforms the baseline cascade. Note that this slight discrepancy is not relevant for cascade routing, since the extra restriction is not imposed in this setting.
+
+
+(a) RouterBench, medium noise, 5 models
+
+
+(b) SWE-Bench with 5 models
+
+
+(c) Classification task for the LLAMA models
+Figure 4: Quality-cost tradeoff curves for several benchmarks.
+
+Table 10: AUC scores in % for different strategies on RouterBench across model and noise levels for five-shot evaluation. Highest numbers are bolded, underlined numbers are within the $95\%$ confidence intervals of the highest number. For a discussion on confidence intervals, we refer to App. E.
+
+| Strategy | Three Models (Low) | Three Models (Med) | Three Models (High) | Five Models (Low) | Five Models (Med) | Five Models (High) | Eleven Models (Low) | Eleven Models (Med) | Eleven Models (High) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Linear Interp. | 74.21 | 74.21 | 74.21 | 73.82 | 73.82 | 73.82 | 75.16 | 75.16 | 75.16 |
+| Routing | 81.50 | 77.22 | 76.01 | 82.43 | 76.84 | 75.54 | 85.34 | 77.77 | 76.44 |
+| Cascade (Baseline) | 83.16 | 78.58 | 76.89 | 84.27 | 76.59 | 73.92 | 87.14 | 78.60 | 74.94 |
+| Cascade (Ours) | 82.78 | 78.77 | 77.01 | 84.26 | 77.19 | 74.30 | 86.72 | 78.67 | 75.08 |
+| Cascade Routing (Ours) | 83.80 | 78.86 | 77.11 | 85.50 | 78.78 | 76.75 | 88.78 | 80.90 | 78.04 |
+
+Table 11: AUC scores on several benchmarks for the MISTRAL model family. Highest numbers are bolded, underlined numbers are within the $95\%$ confidence intervals of the highest number. For confidence intervals, see App. E.
+
+| Strategy | Classification | Open-Form |
+| --- | --- | --- |
+| Linear Interp. | 63.39 | 53.86 |
+| Routing | 64.89 | 58.71 |
+| Cascade (Baseline) | 61.20 | 48.29 |
+| Cascade (Ours) | 63.31 | 55.51 |
+| Cascade Routing (Ours) | 64.97 | 58.73 |
\ No newline at end of file
diff --git a/aunifiedapproachtoroutingandcascadingforllms/images.zip b/aunifiedapproachtoroutingandcascadingforllms/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..5dc9ed7df6c625ae7863c21cb54faf6177741ec6
--- /dev/null
+++ b/aunifiedapproachtoroutingandcascadingforllms/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5fe40e45e31579f326d08277524d762197bc684d889337b0f3348b329ef6c9f9
+size 766905
diff --git a/aunifiedapproachtoroutingandcascadingforllms/layout.json b/aunifiedapproachtoroutingandcascadingforllms/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..d2f0b7bba88c0530464b0d38ba06d48ee48fc923
--- /dev/null
+++ b/aunifiedapproachtoroutingandcascadingforllms/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:da21fbd8d8d6470c99e66401d5064f29cf1f06dfa11f43b8b30010c335c35e33
+size 1189247
diff --git a/aunifiedcomparativestudywithgeneralizedconformityscoresformultioutputconformalregression/e96354e9-5688-4ee6-953b-9d88ce6c5555_content_list.json b/aunifiedcomparativestudywithgeneralizedconformityscoresformultioutputconformalregression/e96354e9-5688-4ee6-953b-9d88ce6c5555_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..c97884db2dfd7d276093cdc268173c207a72654c
--- /dev/null
+++ b/aunifiedcomparativestudywithgeneralizedconformityscoresformultioutputconformalregression/e96354e9-5688-4ee6-953b-9d88ce6c5555_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5ac63a6ee2c2e35429664a81a39986c1869050da6937160f6d47ae0e90621183
+size 310444
diff --git a/aunifiedcomparativestudywithgeneralizedconformityscoresformultioutputconformalregression/e96354e9-5688-4ee6-953b-9d88ce6c5555_model.json b/aunifiedcomparativestudywithgeneralizedconformityscoresformultioutputconformalregression/e96354e9-5688-4ee6-953b-9d88ce6c5555_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..cd0a3728565c0893ab1d6d9e35a51c10b0065dc9
--- /dev/null
+++ b/aunifiedcomparativestudywithgeneralizedconformityscoresformultioutputconformalregression/e96354e9-5688-4ee6-953b-9d88ce6c5555_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:50ba4cbaed33234e90775038c383d3af074f7d8525126cd479621f09ae961fbc
+size 357253
diff --git a/aunifiedcomparativestudywithgeneralizedconformityscoresformultioutputconformalregression/e96354e9-5688-4ee6-953b-9d88ce6c5555_origin.pdf b/aunifiedcomparativestudywithgeneralizedconformityscoresformultioutputconformalregression/e96354e9-5688-4ee6-953b-9d88ce6c5555_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..713a09c662a054efccf8dd38f455feded18b151a
--- /dev/null
+++ b/aunifiedcomparativestudywithgeneralizedconformityscoresformultioutputconformalregression/e96354e9-5688-4ee6-953b-9d88ce6c5555_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b621a9df234c1b5b5fbb152c60200378727ffd7017dab21d39d11493ea9b6b13
+size 5902148
diff --git a/aunifiedcomparativestudywithgeneralizedconformityscoresformultioutputconformalregression/full.md b/aunifiedcomparativestudywithgeneralizedconformityscoresformultioutputconformalregression/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..fe68fa818a448cb5b7d7cc47737dba7f12739b96
--- /dev/null
+++ b/aunifiedcomparativestudywithgeneralizedconformityscoresformultioutputconformalregression/full.md
@@ -0,0 +1,1584 @@
+# A Unified Comparative Study with Generalized Conformity Scores for Multi-Output Conformal Regression
+
+Victor Dheur 1 Matteo Fontana 2 Yorick Estievenart 1 Naomi Desobry 1 Souhaib Ben Taieb 13
+
+# Abstract
+
+Conformal prediction provides a powerful framework for constructing distribution-free prediction regions with finite-sample coverage guarantees. While extensively studied in univariate settings, its extension to multi-output problems presents additional challenges, including complex output dependencies and high computational costs, and remains relatively underexplored. In this work, we present a unified comparative study of nine conformal methods with different multivariate base models for constructing multivariate prediction regions within the same framework. This study highlights their key properties while also exploring the connections between them. Additionally, we introduce two novel classes of conformity scores for multi-output regression that generalize their univariate counterparts. These scores ensure asymptotic conditional coverage while maintaining exact finite-sample marginal coverage. One class is compatible with any generative model, offering broad applicability, while the other is computationally efficient, leveraging the properties of invertible generative models. Finally, we conduct a comprehensive empirical evaluation across 13 tabular datasets, comparing all the multi-output conformal methods explored in this work. To ensure a fair and consistent comparison, all methods are implemented within a unified code base.$^{1}$
+
+$^{1}$ Department of Computer Science, University of Mons, Mons, Belgium $^{2}$ Department of Computer Science, Royal Holloway, University of London, Egham, United Kingdom $^{3}$ Department of Statistics and Data Science, Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, United Arab Emirates. Correspondence to: Victor Dheur .
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+$^{1}$ https://github.com/Vekteur/multi-output-conformal-regression
+
+
+Figure 1: Examples of bivariate prediction regions with an $80\%$ coverage level for a toy example.
+
+# 1. Introduction
+
+Quantifying uncertainty in model predictions is crucial in many real-world applications, often involving prediction problems with multiple output variables and complex statistical dependencies. For example, in medical diagnostics, the progression of a disease can be studied by analysing multiple health indicators that exhibit nonlinear dependencies, such as blood pressure and cholesterol levels of a patient (Rajkomar et al., 2018). Although modern probabilistic AI models can model complex relationships between variables, they may produce unreliable or overly confident predictions (Nalisnick et al., 2018).
+
+Conformal prediction (CP) offers a robust framework for improving model reliability by generating distribution-free prediction regions with a finite-sample coverage guarantee (Vovk et al., 1999). Although substantial research has focused on univariate prediction problems (Romano et al., 2019; Sesia and Romano, 2021; Rossellini et al., 2024), multivariate settings have received less attention. Among existing work, Zhou et al. (2024) achieves marginal coverage by combining univariate prediction regions, but fails to capture dependencies between variables. Other methods, such as density-based approaches (Izbicki et al., 2022) or sample-based techniques (Wang et al., 2023b; Plassier et al., 2025), suffer from high computational costs. An alternative method (Sadinle et al., 2019) optimises the size of the region, but does not achieve asymptotic conditional coverage. For a toy bivariate example, Figure 1 illustrates the diversity of prediction regions obtained using a selection of conformal methods considered in this paper.
+
+Our first contribution is a unified comparative study of nine conformal methods with different multivariate base models for constructing multivariate prediction regions within the same framework. This study highlights their key properties while also exploring the connections between them. We examine different conformity scores with different multivariate base predictors, discussing prediction regions derived from the marginal distributions of individual output variables, their joint PDF, or sampling procedures (e.g., generative models).
+
+Our second contribution introduces two novel classes of conformity scores for multi-output regression that generalize their univariate counterparts. These scores ensure asymptotic conditional coverage while maintaining exact finite-sample marginal coverage.
+
+The first, CDF-based scores, leverage the cumulative distribution function (CDF) of any conformity score to achieve asymptotic conditional coverage. This approach generalizes the univariate HPD-split score, based on univariate highest-density regions (Izbicki et al., 2020), to multivariate prediction regions derived from any conformity score. Additionally, we propose a specific instance of CDF-based scores that builds on PCP (Wang et al., 2023b). This method avoids the estimation of a predictive density, instead relying solely on samples from any generative model.
+
+The second, latent-based scores, is inspired by Feldman et al. (2023) and can be interpreted as an extension of distributional conformal prediction (Chernozhukov et al., 2021) to multivariate outputs. Compared to Feldman et al. (2023), it does not require directional quantile regression, and the conformalization is performed directly in the latent space, eliminating the need to construct a grid. This enhances both computational efficiency and scalability.
+
+Finally, as our third contribution, we conduct a large-scale empirical study comparing the different multi-output conformal methods across 13 tabular datasets with multivariate outputs, evaluating several performance metrics. We consider a variety of multivariate regression models, namely the Multivariate Quantile Function Forecaster (Kan et al., 2022), Distributional Random Forests (Ćevid et al., 2022), and a multivariate Gaussian Mixture Model parameterized by a hypernetwork (Ha et al., 2022; Bishop, 1994).
+
+# 2. Background
+
+Consider a multivariate regression problem where the objective is to predict a $d$ -dimensional response vector $y \in \mathcal{Y} = \mathbb{R}^d$ based on a feature vector $x \in \mathcal{X} \subseteq \mathbb{R}^p$ . We assume there exists a true joint distribution $F_{XY}$ over $\mathcal{X} \times \mathcal{Y}$ , and we have access to a dataset $\mathcal{D} = \{(X^{(j)}, Y^{(j)})\}_{j=1}^n$ where $(X^{(j)}, Y^{(j)}) \stackrel{\text{i.i.d.}}{\sim} F_{XY}$ . Given a feature vector $x$ , we denote the conditional distribution of $Y$ given $X = x$ as $F_{Y|X=x}$ and the associated probability density function (PDF) as $f_{Y|X=x}$ .
+
+Using the dataset $\mathcal{D}$ , for any $x\in \mathcal{X}$ , CP allows us to transform base predictors, denoted $\hat{h}$ , into calibrated, distribution-free prediction regions $\hat{R} (x)\subseteq \mathcal{Y}$ for the true output $y$ with finite-sample coverage guarantees.
+
+# 2.1. Split-conformal prediction
+
+Split-conformal prediction (SCP, Papadopoulos et al., 2002) is a computationally efficient variant of conformal prediction that divides the dataset $\mathcal{D}$ into two disjoint subsets: a training set $\mathcal{D}_{\mathrm{train}}$ and a calibration set $\mathcal{D}_{\mathrm{cal}}$ . A model is first trained on $\mathcal{D}_{\mathrm{train}}$ to obtain a base predictor $\hat{h}$ . Based on $\hat{h}$ , a conformity score (function) $s:\mathcal{X}\times \mathcal{Y}\to \mathbb{R}$ is defined, where lower scores indicate a better fit between the feature vector $x$ and the response $y$ . The calibration scores $\mathcal{S} = \{s(x,y)\}_{(x,y)\in \mathcal{D}_{\mathrm{cal}}}\cup \{+\infty \}$ are then computed, from which the $(1 - \alpha)$ empirical quantile is calculated as:
+
+$$
+\hat{q} = \text{Quantile}\left(\mathcal{S}; \frac{k_{\alpha}}{|\mathcal{D}_{\mathrm{cal}}| + 1}\right), \tag{1}
+$$
+
+where $k_{\alpha} = \lceil (|\mathcal{D}_{\mathrm{cal}}| + 1)(1 - \alpha)\rceil$ . This quantile serves as the threshold for constructing prediction regions. For an input $x$ , the (random) prediction region is given by:
+
+$$
+\hat {R} (x) = \{y \in \mathcal {Y}: s (x, y) \leq \hat {q} \}. \tag {2}
+$$
+
+If the random pair $(X,Y)$ is exchangeable with $\mathcal{D}_{\mathrm{cal}}$ , SCP guarantees marginal coverage:
+
+$$
+\mathbb {P} _ {X, Y, \mathcal {D} _ {\mathrm {c a l}}} (Y \in \hat {R} (X)) = \mathbb {P} (s (X, Y) \leq \hat {q}) \geq 1 - \alpha , \tag {3}
+$$
+
+where the probability is taken over $(X,Y)$ and $\mathcal{D}_{\mathrm{cal}}$ . Assuming no ties in scores, the marginal coverage is exactly $\frac{k_{\alpha}}{|\mathcal{D}_{\mathrm{cal}}| + 1}$ , yielding $\mathbb{P}(Y \in \hat{R}(X)) \leq 1 - \alpha + \frac{1}{|\mathcal{D}_{\mathrm{cal}}| + 1}$ .
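The quantile computation in (1) and the exact-coverage property above are easy to verify numerically. The following sketch uses synthetic exchangeable scores purely for illustration; the exponential score distribution and the sample sizes are arbitrary choices, not tied to any base predictor from the paper:

```python
import numpy as np

def conformal_quantile(cal_scores, alpha):
    """Threshold q_hat from Eq. (1): the k_alpha-th smallest score after
    appending +infinity to the calibration scores."""
    scores = np.sort(np.append(cal_scores, np.inf))
    k_alpha = int(np.ceil((len(cal_scores) + 1) * (1 - alpha)))
    return scores[k_alpha - 1]

# Simulate exact marginal coverage: with |D_cal| = 99 and alpha = 0.1,
# coverage should be exactly k_alpha / (|D_cal| + 1) = 90 / 100 = 0.90.
rng = np.random.default_rng(0)
alpha, n_cal, n_trials = 0.1, 99, 4000
hits = 0
for _ in range(n_trials):
    scores = rng.exponential(size=n_cal + 1)  # continuous scores, so no ties
    q_hat = conformal_quantile(scores[:-1], alpha)
    hits += scores[-1] <= q_hat
coverage = hits / n_trials  # concentrates around 0.90
```

Appending $+\infty$ to the score set guarantees the index $k_\alpha$ exists even when $\alpha$ is very small relative to the calibration set size.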
+
+Ideally, the prediction region should achieve conditional coverage at the level $1 - \alpha$, i.e., that
+
+$$
+\mathbb {P} (Y \in \hat {R} (X) \mid X) \geq 1 - \alpha \tag {4}
+$$
+
+holds almost surely. This is a stronger requirement than marginal coverage in (3). However, as Barber et al. (2019) demonstrate, achieving conditional coverage is generally impossible without making additional assumptions about the underlying data-generating process.
+
+# 2.2. Multi-output conformal methods
+
+Many conformal prediction methods have been proposed in the literature and implemented within the SCP framework for various base predictors and conformity scores, with a specific focus on univariate prediction problems. In this section, we survey several conformal methods for constructing multivariate prediction regions, using different multivariate base predictors and corresponding conformity scores. Specifically, we discuss density-based and sample-based methods, which rely on the joint PDF or on a sampling procedure (e.g., a generative model), respectively. In the following, we describe the conformity scores $s$ for the different methods. The methods M-CP and CopulaCPTS, which produce hyperrectangular regions, are detailed in Appendix B. Once a conformity score is defined, the corresponding prediction region $\hat{R}$ can be computed using (2). We detail this relationship for each method in Appendix C. Furthermore, in Section 5, we analyze the properties and relationships between these methods and provide illustrative examples of the resulting prediction regions.
+
+DR-CP. Given a predictive density $\hat{f}_{Y|X = x}$ , a natural conformity score is the negative density:
+
+$$
+s _ {\mathrm {D R - C P}} (x, y) = - \hat {f} (y \mid x). \tag {5}
+$$
+
+The corresponding prediction region is a density superlevel set, $\hat{R}_{\mathrm{DR - CP}}(x) = \{y\in \mathcal{Y}:\hat{f}(y\mid x)\geq -\hat{q}\}$. Sadinle et al. (2019) use this conformity score in the context of classification.
+
+C-HDR. Izbicki et al., 2022 proposed the HPD-split method, which defines a conformity score based on the Highest Predictive Density (HPD):
+
+$$
+\operatorname{HPD}_{\hat{f}}(y \mid x) = \int_{\{y' \,:\, \hat{f}(y' \mid x) \geq \hat{f}(y \mid x)\}} \hat{f}(y' \mid x) \, dy' \tag{6}
+$$
+
+$$
+\operatorname{HPD}_{\hat{f}}(y \mid x) = \mathbb{P}\left(\hat{f}(\hat{Y} \mid x) \geq \hat{f}(y \mid x) \mid X = x\right), \tag{7}
+$$
+
+where $\hat{Y} \sim \hat{f}_{Y|X = x}$ . The corresponding prediction region is a highest density region (HDR, Hyndman, 1996) with respect to $\hat{f}$ at level $\hat{q}$ :
+
+$$
+\hat {R} _ {\mathrm {C - H D R}} (x) = \{y \in \mathcal {Y}: \hat {f} (y \mid x) \geq t _ {\hat {q}} \}, \tag {8}
+$$
+
+where $t_{\hat{q}} = \sup \{t:\mathbb{P}(\hat{f} (\hat{Y}\mid x)\geq t\mid X = x)\geq \hat{q}\}$.
+
+Compared to DR-CP, where the threshold $-\hat{q}$ is independent of $x$ , C-HDR allows the threshold $t_{\hat{q}}$ to vary with $x$ . To compute the HPD in (6), Izbicki et al., 2022 use numerical integration, whereas in our experiments, we approximate (7) using Monte Carlo sampling, as described in (13).
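In code, the Monte Carlo approximation of (7) amounts to comparing density values at sampled points against the density at $y$; the Gaussian density and sampler below are illustrative stand-ins for a fitted conditional model $\hat{f}_{Y|X=x}$:

```python
import numpy as np

def hpd_score(density, sampler, x, y, n_samples=2000, rng=None):
    """Monte Carlo estimate of HPD(y | x) from Eq. (7): the probability
    that a draw from the predictive density is at least as dense as y."""
    rng = rng if rng is not None else np.random.default_rng()
    draws = sampler(x, n_samples, rng)  # samples from f_hat(. | x)
    return float(np.mean(density(draws, x) >= density(np.atleast_2d(y), x)))

# Illustrative model: a standard normal predictive density (no x-dependence).
density = lambda ys, x: np.exp(-0.5 * np.sum(np.asarray(ys) ** 2, axis=-1))
sampler = lambda x, n, rng: rng.normal(size=(n, 1))

rng = np.random.default_rng(0)
at_mode = hpd_score(density, sampler, None, [0.0], rng=rng)  # near 0 (good fit)
in_tail = hpd_score(density, sampler, None, [3.0], rng=rng)  # near 1 (poor fit)
```

Lower scores again indicate better fit: the mode receives a score near 0, while tail points receive scores near 1, matching the convention used throughout this section.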
+
+In the context of classification, Adaptive Prediction Sets (Romano et al., 2020) follows a similar principle by constructing a "highest mass region", which corresponds to a superlevel set of the probability mass function with probability content at least $\hat{q}$ .
+
+PCP. Let $\tilde{Y}^{(1)},\tilde{Y}^{(2)},\ldots ,\tilde{Y}^{(L)}$ denote a sample with $L$ points from the (estimated) conditional distribution $\hat{F}_{Y|X = x}$ . Probabilistic Conformal Prediction (PCP, Wang et al., 2023b) defines a conformity score as the closest distance to $y$ :
+
+$$
+s _ {\mathrm {P C P}} (x, y) = \min _ {l \in [ L ]} \| y - \tilde {Y} ^ {(l)} \| _ {2}, \tag {9}
+$$
+
+$$
+\text{where } \tilde{Y}^{(l)} \sim \hat{F}_{Y | X = x}, \quad l \in [L]. \tag{10}
+$$
+
+The corresponding region is a union of $L$ balls centered at each sampled point $\tilde{Y}^{(l)}$, i.e., $\hat{R}_{\mathrm{PCP}}(x) = \bigcup_{l\in [L]}\{y\in \mathcal{Y}: \| y - \tilde{Y}^{(l)}\|_2\leq \hat{q}\}$.
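A minimal NumPy sketch of the PCP score (9) and the resulting region-membership test; the two generated points below are placeholders for draws from a fitted generative model:

```python
import numpy as np

def pcp_score(y, samples):
    """PCP score from Eq. (9): distance from y to the nearest generated point."""
    return float(np.min(np.linalg.norm(np.asarray(samples) - np.asarray(y), axis=1)))

def in_pcp_region(y, samples, q_hat):
    """y is covered iff it falls in one of the balls of radius q_hat
    centered at the generated points."""
    return pcp_score(y, samples) <= q_hat

samples = np.array([[0.0, 0.0], [4.0, 4.0]])  # L = 2 illustrative draws
score = pcp_score([0.5, 0.0], samples)        # distance to the nearest center
```

Because the region is a union of balls, it can be disconnected, which is what makes PCP attractive for multimodal predictive distributions.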
+
+HD-PCP. When a predictive density is available alongside a sample of $L$ points, Wang et al., 2023b proposed an extension to PCP, called HD-PCP. This method uses the same conformity score as in (9), but only retains the $\lfloor (1 - \alpha)L\rfloor$ samples with the highest density, ensuring that the prediction region is concentrated on high-density points.
+
+ST-DQR. Motivated by the limitation that existing multivariate quantile regression methods do not allow the construction of regions with arbitrary shapes, Feldman et al., 2023 proposed to construct convex regions in a latent space $\mathcal{Z}$ using directional quantile regression (Paindaveine and Šiman, 2011). These regions are then mapped to the output space $\mathcal{Y}$ using a conditional variational autoencoder (CVAE), allowing a non-linear mapping between the two spaces. Specifically, they apply a conformalization step by creating a grid of points within the region in $\mathcal{Z}$ , mapping the points to the output space $\mathcal{Y}$ , and constructing $d$ -balls around the mapped samples, similarly to PCP.
+
+# 3. Generalized Conformity Scores for Multi-Output Regression
+
+In this section, we introduce two new classes of conformity scores: CDF-based and latent-based scores. These scores generalize existing conformity scores for univariate regression to accommodate any conformity score for multivariate outputs. The former generalizes HPD-split (Izbicki et al., 2020) to any conformity score, allowing this method to be applied to multivariate outputs. We further propose a specific instance that builds on PCP (Wang et al., 2023b). The latter is inspired by Feldman et al., 2023 and can be interpreted as an extension of distributional conformal prediction (Chernozhukov et al., 2021) for multivariate outputs. Section 5 will present a comparative study of the conformity scores introduced in Section 2.2 alongside those introduced in this section.
+
+# 3.1. CDF-based conformity scores
+
+Consider a conformity score $s_W$ , and define the random variable $W = s_W(X, Y)$ for a random pair $(X, Y)$ . For an observation $(x, y)$ , we introduce a new conformity score based on the conditional CDF of $W$ given $X = x$ , evaluated at $s_W(x, y)$ . Specifically, the score is given by
+
+$$
+s_{\mathrm{CDF}}(x, y) = \mathbb{P}\left(s_{W}(X, Y) \leq s_{W}(x, y) \mid X = x\right) \tag{11}
+$$
+
+$$
+s_{\mathrm{CDF}}(x, y) = F_{W \mid X = x}(s_{W}(x, y)). \tag{12}
+$$
+
+This new conformity score measures the rank of $s_W(x,y)$ relative to the conditional distribution of $W$ given $X = x$ .
+
+This method applies to any conformity score $s_W$ and generalizes the (oracle) HPD-split introduced in Izbicki et al., 2020 in the context of univariate regression. Specifically, when $s_W(x,y) = s_{\mathrm{DR - CP}}(x,y)$ is used in (12), we recover the C-HDR method. Additionally, by the probability integral transform, $s_{\mathrm{CDF}}(X,Y) \mid X = x \sim \mathcal{U}(0,1)$ for $x \in \mathcal{X}$ , meaning that the conformity score's distribution is independent of $x$ . This property ensures that conditional coverage is achieved as $|\mathcal{D}_{\mathrm{cal}}| \to \infty$ (see Appendix E.2, Lemma 2). A similar observation was made by Izbicki et al., 2020 for C-HDR.
+
+However, in practice, since the distribution of $Y \mid X = x$ is unknown, we approximate $s_{\mathrm{CDF}}$ using Monte Carlo sampling:
+
+$$
+s_{\mathrm{ECDF}}(x, y) = \frac{1}{K} \sum_{k \in [K]} \mathbb{I}\left(s_{W}(x, \hat{Y}^{(k)}) \leq s_{W}(x, y)\right), \quad \text{where } \hat{Y}^{(k)} \sim \hat{F}_{Y|X = x},\; k \in [K]. \tag{13}
+$$
+
+Dheur et al., 2024 considered a particular case of this empirical CDF-based approach with the $s_{\mathrm{DR - CP}}$ score for a bivariate prediction problem involving temporal point processes, where the HDR is estimated via Monte Carlo sampling.
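The empirical CDF score in (13) works with any base score $s_W$ and any sampler for the fitted conditional. The sketch below uses a distance-to-origin base score and a standard normal sampler as arbitrary placeholders:

```python
import numpy as np

def ecdf_score(s_w, x, y, sampler, n_mc=1000, rng=None):
    """Empirical CDF-based score from Eq. (13): the fraction of K Monte
    Carlo draws whose base score does not exceed the base score of y."""
    rng = rng if rng is not None else np.random.default_rng()
    draws = sampler(x, n_mc, rng)  # K draws from F_hat(. | x)
    base = np.array([s_w(x, yk) for yk in draws])
    return float(np.mean(base <= s_w(x, y)))

# Placeholder base score and sampler, purely for illustration.
s_w = lambda x, y: float(np.linalg.norm(y))
sampler = lambda x, n, rng: rng.normal(size=(n, 2))

rng = np.random.default_rng(0)
rank_center = ecdf_score(s_w, None, np.zeros(2), sampler, rng=rng)      # near 0
rank_outlier = ecdf_score(s_w, None, np.full(2, 5.0), sampler, rng=rng)  # near 1
```

The output is a rank in $[0, 1]$, which is why the calibrated score distribution becomes (approximately) uniform and independent of $x$.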
+
+C-PCP. We introduce a special case of our new score, called C-PCP (CDF-based Probabilistic Conformal Prediction), by setting $s_W(x,y) = s_{\mathrm{PCP}}(x,y)$ in (13), which gives:
+
+$$
+s_{\mathrm{C-PCP}}(x, y) = \frac{1}{K} \sum_{k \in [K]} \mathbb{I}\left(\min_{l \in [L]} \| \hat{Y}^{(k)} - \tilde{Y}^{(l)} \| \leq \min_{l \in [L]} \| y - \tilde{Y}^{(l)} \|\right).
+$$
+
+Compared to the methods in Izbicki et al., 2020 and Dheur et al., 2024, this score has the advantage of not requiring the estimation of a predictive density, relying instead on samples from the conditional distribution. Consequently, this score can be applied with any generative model that does not have an explicit density, while still retaining the desirable properties of our CDF-based score.
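Concretely, C-PCP ranks the PCP nearest-distance score of $y$ against the scores of $K$ extra draws from the same fitted model. A self-contained sketch, where all samples are synthetic placeholders for draws from a generative model:

```python
import numpy as np

def c_pcp_score(y, gen_samples, mc_samples):
    """C-PCP score: empirical CDF of the PCP nearest-distance score,
    using L generated points (gen_samples) and K extra draws (mc_samples)."""
    def pcp(point):
        return np.min(np.linalg.norm(gen_samples - np.asarray(point), axis=1))
    mc_scores = np.array([pcp(yk) for yk in mc_samples])
    return float(np.mean(mc_scores <= pcp(y)))

rng = np.random.default_rng(0)
gen = rng.normal(size=(50, 2))   # L = 50 draws defining the PCP balls
mc = rng.normal(size=(200, 2))   # K = 200 draws for the CDF estimate
typical = c_pcp_score(np.zeros(2), gen, mc)
remote = c_pcp_score(np.full(2, 6.0), gen, mc)  # far outlier, rank near 1
```

Note that the two sample sets play different roles: `gen` defines the geometry of the region, while `mc` only calibrates the radius, mirroring the $L + K$ sample budget discussed above.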
+
+Interestingly, C-PCP shares similarities with the recently proposed $\mathrm{CP^2}$ -PCP method by Plassier et al., 2025. For a given $x\in \mathcal{X}$ , both methods adapt the radius of the balls based on a second sample from the conditional distribution composed of $K$ points, requiring a total of $L + K$ samples. A detailed discussion can be found in Appendix I.
+
+# 3.2. Latent-based conformity scores
+
+Inspired by Feldman et al. (2023), we propose a latent-based conformity score with key distinctions. First, our method does not require the use of directional quantile regression. Additionally, the conformalization step is performed in the latent space, eliminating the need to construct a grid, which improves both computational efficiency and scalability.
+
+Our base predictor is a conditional invertible generative model $\hat{Q}:\mathcal{Z}\times \mathcal{X}\to \mathcal{Y}$ , which maps a latent random variable $Z\in \mathcal{Z}$ (e.g., drawn from a standard multivariate normal distribution) to the output space $\mathcal{Y}$ , conditional on $X\in \mathcal{X}$ (e.g., using normalizing flows). The model is both conditional and invertible, meaning that
+
+$$
+\hat {Q} (\hat {Q} ^ {- 1} (y; x); x) = y, \forall x \in \mathcal {X}, y \in \mathcal {Y}.
+$$
+
+We propose the following conformity score, called L-CP (Latent-based Conformal Prediction), defined as:
+
+$$
+s _ {\mathrm {L} - \mathrm {C P}} (x, y) = d _ {\mathcal {Z}} \left(\hat {Q} ^ {- 1} (y; x)\right), \tag {14}
+$$
+
+where $d_{\mathcal{Z}}: \mathcal{Z} \to \mathbb{R}$ is a conformity function in the latent space $\mathcal{Z}$ , independent of $x$ . In our experiments, we use $Z \sim \mathcal{N}(0, I_d)$ and $d_{\mathcal{Z}}(z) = \| z\|$ .
+
+The corresponding prediction region is obtained by mapping a region in the latent space, $R_{\mathcal{Z}}(\hat{q}) = \{z \in \mathcal{Z} : d_{\mathcal{Z}}(z) \leq \hat{q}\}$ , to a region in the output space, $\hat{R}_{\mathrm{L - CP}}(x) = \{\hat{Q}(z; x) : z \in R_{\mathcal{Z}}(\hat{q})\}$ .
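A toy sketch of L-CP with a hand-written affine conditional map; the map, its parameters `mu` and `sigma`, and the choice $d_{\mathcal{Z}}(z) = \|z\|$ are illustrative stand-ins for an invertible generative model such as a conditional normalizing flow:

```python
import numpy as np

# Hypothetical invertible conditional model Q(z; x) = mu(x) + sigma(x) * z,
# standing in for a normalizing flow; mu and sigma are arbitrary choices.
def q_inverse(y, x):
    mu, sigma = np.sin(x), 1.0 + 0.5 * abs(x)
    return (np.asarray(y) - mu) / sigma

def lcp_score(y, x):
    """L-CP score from Eq. (14) with d_Z(z) = ||z|| and Z ~ N(0, I_d)."""
    return float(np.linalg.norm(q_inverse(y, x)))

def in_lcp_region(y, x, q_hat):
    """Membership in R_LCP(x): the image under Q(.; x) of {z : ||z|| <= q_hat}."""
    return lcp_score(y, x) <= q_hat

x = 0.3
y_center = np.full(2, np.sin(x))  # maps back to z = 0, the latent center
```

Because conformalization happens on $\|z\|$, a single latent threshold $\hat{q}$ induces differently shaped output regions as $x$ varies, with no grid construction required.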
+
+L-CP generalizes Distributional Conformal Prediction (Chernozhukov et al., 2021), which is a special case when $Y$ is univariate ( $d = 1$ ), $Z \sim \mathcal{U}(0,1)$ , $d_{\mathcal{Z}}(z) = |z - \frac{1}{2}|$ , and $\hat{Q}(\cdot; x)$ is the quantile function of $Y$ given $x$ .
+
+Concurrent work by Fang et al., 2025 introduces CONTRA, which shares the same algorithm as our latent-based methods. While related, the papers diverge in their primary focus: Fang et al., 2025 emphasize the smaller prediction regions achieved by CONTRA, whereas our work concentrates on the computational complexity and conditional coverage guarantees of the latent-based methods, while obtaining region sizes that are small, though not smaller than those of density-based methods.
+
+# 4. Related Work
+
+Figure 2: Prediction regions for a bivariate unimodal dataset, conditional on a unidimensional input. The black, green, and yellow contours represent regions with nominal coverage levels of $20\%$ , $40\%$ , and $80\%$ , respectively.
+
+Conformal Prediction (CP), introduced by Vovk et al., 1999, forms the foundation of our work by providing prediction regions with finite-sample coverage guarantees. CP methods are well established for regression with univariate outputs (Papadopoulos et al., 2008; Lei and Wasserman, 2014; Romano et al., 2019; Sesia and Romano, 2021) and classification (Romano et al., 2020; Angelopoulos et al., 2020). In the multi-output regression setting, we need to capture dependencies between output dimensions, represent more complex prediction regions and handle a larger computational demand.
+
+To address multivariate prediction challenges, optimal transport methods such as cyclically monotone mappings (Carlier et al., 2016) define multivariate quantile regions with desirable properties such as existence and uniqueness of mappings. Hallin and Simon (2017), Hallin et al. (2021), and Barrio et al. (2024) have proposed extensions of these approaches. Neural network-based techniques leverage normalizing flows (Kan et al., 2022; Huang et al., 2020) or variational autoencoders (Feldman et al., 2023) to learn flexible quantile regions. Additionally, highest density regions (HDRs) (Hyndman, 1996) handle multimodality and have been applied in various contexts (Camehl et al., 2024; Izbicki et al., 2022; Dheur et al., 2024). Recently, Wang et al., 2023b proposed constructing prediction regions as hyperballs centered on generated samples, with extensions by Plassier et al., 2025 improving conditional validity. Other methods use copulas (Messoudi et al., 2021a; Sun and Yu, 2024b) to model the dependency between variables. Appendix A provides a more detailed discussion on related work.
+
+# 5. Comparison of Multi-Output Conformal Methods
+
+In this section, we present a unified comparison of the conformity scores introduced in Section 2.2 and the generalized scores proposed in Section 3.1.
+
+# 5.1. Illustrative examples
+
+We provide illustrative examples of bivariate prediction regions for different conformal methods on simulated data, covering both unimodal (Figure 2) and bimodal distributions (Figure 10 in Appendix D.2). The data-generating processes are given in Appendix D.2. Additionally, we present bivariate prediction regions for a real-world application, predicting a taxi passenger's drop-off location based on the passenger's information (Figures 8 and 9 in Appendix D.1).
+
+In both Figures 2 and 10, the black, green, and yellow contours represent prediction regions with nominal coverage levels of $20\%$ , $40\%$ , and $80\%$ , respectively. The top-left panel illustrates the density level sets of the oracle distribution $F_{Y|X}$ . The remaining panels display the prediction regions generated by various conformal methods, all utilizing the MQF² base predictor, as explained in Appendix F.2.
+
+We observe the following for the unimodal case in Figure 2. M-CP and CopulaCPTS capture heteroscedasticity but produce rectangular prediction regions, which do not align with the circular level sets of the oracle conditional distribution, resulting in a lack of sharpness. DR-CP fails to maintain conditional coverage, and for $X = 1$ , the absence of black and green contours indicates that the predictive density does not reach the threshold $-\hat{q}$ defined in (5) for coverage levels of 0.2 and 0.4. C-HDR generates prediction regions that closely resemble the oracle level sets. PCP generates highly discontinuous regions, especially at lower coverage levels, where the regions appear as balls centered on individual samples. In contrast, HD-PCP and STDQR yield smoother, more continuous regions but require the estimation of a predictive PDF or the identification of a map from the latent space to the output space, respectively.
+
+Table 1: Properties of different multivariate conformal methods. (*) M-CP achieves ACC under certain assumptions (Appendix E.2.3). (**) STDQR and L-CP require a conditional invertible generative model $\hat{Q}:\mathcal{Z}\times \mathcal{X}\to \mathcal{Y}$. (†) CopulaCPTS has a pre-training cost of $O(C)$.
+
+| Method | Type of region | Asymptotic conditional coverage | Computational complexity | Predictive density not required | Sampling procedure not required |
+| --- | --- | --- | --- | --- | --- |
+| M-CP | Hyperrectangle | ✗ (*) | $O(dM)$ | ✓ | ✓ |
+| CopulaCPTS | Hyperrectangle | ✗ | $O(dM)$ (†) | ✓ | ✓ |
+| DR-CP | Density superlevel set | ✗ | $O(D)$ | ✗ | ✓ |
+| C-HDR | Density superlevel set | ✓ ($K \to \infty$) | $O(K(D+S))$ | ✗ | ✗ |
+| PCP | Union of $d$-balls | ✗ | $O(LS)$ | ✓ | ✗ |
+| HD-PCP | Union of $d$-balls | ✗ | $O(L(D+S))$ | ✗ | ✗ |
+| STDQR | Union of $d$-balls | ✗ | $O(LS)$ | ✓ (**) | ✗ |
+| C-PCP | Union of $d$-balls | ✓ ($K \to \infty$) | $O((K+L)S)$ | ✓ | ✗ |
+| L-CP | Quantile region | ✓ | $O(Q)$ | ✓ (**) | ✗ |
+
+For our methods, unlike PCP, C-PCP adjusts the radius of the prediction regions to improve conditional coverage. This is evident in the example, where the radius of the balls for $X = -1$ is smaller than for $X = 1$ , as indicated by the tighter regions around the samples. L-CP generates prediction regions that closely align with the oracle level sets, demonstrating good conditional coverage.
+
+For the bimodal distribution in Figure 10 (Appendix D.2), the prediction regions generated by M-CP and L-CP are connected, failing to capture the bimodal nature of the distribution. For the real-world application, Figures 8 and 9 (Appendix D.1) illustrate predictions under low and high uncertainty, respectively. Our methods, L-CP and C-PCP, alongside M-CP and C-HDR, demonstrate the best adaptability to outputs with varying levels of uncertainty.
+
+# 5.2. Properties
+
+In this section, we compare conformal methods based on several key properties. In the following, we use $\stackrel{\mathrm{d}}{=}$ to denote equality in distribution and $\stackrel{\mathrm{a.s.}}{=}$ to denote almost sure equality.
+
+Marginal coverage. All the conformal methods presented achieve the classical finite-sample marginal coverage. However, as noted by Wang et al., 2023b (Theorem 1), the marginal coverage of methods such as C-HDR, PCP, HD-PCP, and C-PCP also depends on the randomness of the generated samples. In Appendix E.1, we use standard arguments to demonstrate that the marginal coverage, conditional on the calibration dataset $\mathcal{D}_{\mathrm{cal}}$ and the samples drawn from it, follows a beta distribution. CopulaCPTS is the only method that does not fit into the standard split-conformal algorithm and therefore does not satisfy this property.
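+
+The beta-distribution property is easy to verify by simulation. The sketch below is a minimal illustration (not the paper's experimental code): after the probability integral transform, the conformal threshold is the $k$-th order statistic of $n$ i.i.d. uniforms with $k = \lceil (n+1)(1-\alpha)\rceil$, so the conditional coverage is $U_{(k)} \sim \mathrm{Beta}(k, n+1-k)$.
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(1)
+n, alpha = 99, 0.1
+k = int(np.ceil((n + 1) * (1 - alpha)))   # k = 90
+
+# Conditional on the calibration scores, coverage equals U_(k), the k-th
+# order statistic of n iid Uniform(0,1) variables, i.e. Beta(k, n + 1 - k).
+covs = np.sort(rng.uniform(size=(20000, n)), axis=1)[:, k - 1]
+
+# Compare simulated moments with the Beta(90, 10) values:
+# mean k/(n+1) = 0.9, variance k(n+1-k)/((n+1)^2 (n+2)) ~ 8.8e-4.
+print(round(covs.mean(), 3), round(covs.var(), 5))
+```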
+
+Asymptotic conditional coverage (ACC). We examine the asymptotic conditional coverage property, which corresponds to conditional coverage as defined in (4) under the assumptions that $|\mathcal{D}_{\mathrm{cal}}|\to \infty$ and the base predictor corresponds to the oracle distribution $F_{Y|X}$.
+
+While the assumption of an oracle base predictor is strong, it is crucial to demonstrate that the conformal procedure preserves the performance of the base model. Specifically, given $x \in \mathcal{X}$, for M-CP and CopulaCPTS, we assume $\hat{l}_i(x) = Q_{Y_i|X = x}(\alpha_l)$ and $\hat{u}_i(x) = Q_{Y_i|X = x}(\alpha_u)$ for $i = 1, \ldots, d$; for DR-CP, C-HDR, and HD-PCP, $\hat{f}_{Y|X = x} = f_{Y|X = x}$; for L-CP, $\hat{Q}(Z; x) \stackrel{\mathrm{d}}{=} Y|X = x$; and for PCP and C-PCP, $\hat{F}_{Y|X = x} = F_{Y|X = x}$.
+
+Our empirical results (Section 6) demonstrate that methods achieving ACC under these assumptions also exhibit superior approximate conditional coverage across diverse datasets and base predictors. L-CP is the only method that achieves ACC without additional assumptions. C-HDR and C-PCP achieve ACC with $K \to \infty$ . Finally, M-CP achieves ACC under specific assumptions. Assuming that $Y_{1},\ldots ,Y_{d}$ are conditionally independent given $X$ , M-CP achieves ACC if $\alpha_{u} - \alpha_{l} = \sqrt[d]{1 - \alpha}$ . Furthermore, under the unrealistic assumption that $Y_{1} \mid X \stackrel{\mathrm{a.s.}}{=} \dots \stackrel{\mathrm{a.s.}}{=} Y_{d} \mid X$ , M-CP achieves ACC if $\alpha_{u} - \alpha_{l} = 1 - \alpha$ . The true dependence typically lies between these two extremes. We provide detailed proofs of these statements in Appendix E.2.
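+
+The two extreme cases for M-CP can be checked numerically. The sketch below is illustrative (uniform outputs, so quantiles are available in closed form): with conditionally independent outputs, per-dimension coverage $\sqrt[d]{1-\alpha}$ yields joint coverage $1-\alpha$; with perfectly dependent outputs, per-dimension coverage $1-\alpha$ suffices.
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(2)
+d, alpha, n = 3, 0.1, 200_000
+
+# Independent case: per-dimension central interval with coverage (1-alpha)^(1/d),
+# so the joint box coverage is beta^d = 1 - alpha.
+beta = (1 - alpha) ** (1 / d)
+lo, hi = (1 - beta) / 2, (1 + beta) / 2   # central quantile levels
+y = rng.uniform(size=(n, d))              # uniform outputs: quantiles = levels
+inside = np.all((y >= lo) & (y <= hi), axis=1)
+print(round(inside.mean(), 3))            # close to 0.9
+
+# Perfectly dependent case (Y_1 = ... = Y_d a.s.): the box covers iff a single
+# coordinate does, so per-dimension coverage 1 - alpha is what is needed.
+y1 = rng.uniform(size=n)
+inside_dep = (y1 >= alpha / 2) & (y1 <= 1 - alpha / 2)
+print(round(inside_dep.mean(), 3))        # close to 0.9
+```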
+
+As discussed in Section 5.1, DR-CP fails to achieve ACC. Likewise, PCP, HD-PCP and STDQR do not achieve ACC, as they are constrained to producing regions with upper bounded volume for any $x \in \mathcal{X}$ . Assuming each ball has a volume of $V$ , PCP generates regions with a total volume of at most $LV$ . For a given instance $x \in \mathcal{X}$ with high uncertainty, it may be impossible to capture sufficient probability mass to achieve conditional coverage.
+
+Region size. Among the methods that achieve ACC, C-HDR is expected to perform best, as it converges to the highest density regions, which correspond to the smallest volume regions (Hyndman, 1996). Prediction regions from C-PCP are expected to have a larger volume since they are constrained to a union of $L$ $d$-balls. Similarly, prediction regions from L-CP are less flexible than those from C-HDR, as they are connected when the region $R_{\mathcal{Z}}(\lambda)$ in the latent space is connected for all $\lambda \in \mathbb{R}$ and $\hat{Q}$ is continuous. This constraint may be desirable when more interpretable regions are preferred (Sesia and Romano, 2021).
+
+Among the remaining methods, DR-CP minimizes the mean region size $\mathbb{E}[\lvert\hat{R}(X)\rvert]$ under the oracle PDF as $|\mathcal{D}_{\mathrm{cal}}| \to \infty$ as shown in Theorem 1 by Sadinle et al., 2019. In contrast, M-CP and CopulaCPTS are expected to yield larger prediction regions, as they do not explicitly account for dependencies between outputs. While PCP, HD-PCP, and C-PCP can capture multimodality, they are susceptible to the randomness of the sampling procedure, as evidenced by the shape of the regions in Figure 2. Furthermore, since they rely on a finite union of $L$ $d$ -balls, they are subject to the curse of dimensionality in high-dimensional spaces, where data sparsity necessitates larger balls to maintain marginal coverage.
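+
+The curse of dimensionality mentioned above can be quantified with a quick Monte Carlo sketch (illustrative, not from our experiments): the radius a single ball centered at the mean needs in order to capture $90\%$ of a standard Gaussian's mass grows roughly like $\sqrt{d}$, so unions of balls must inflate quickly with the output dimension.
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(4)
+
+# 90% quantile of ||Z|| for Z ~ N(0, I_d): the radius needed for one ball
+# at the mean to hold 90% of the mass, estimated by Monte Carlo.
+radii = {}
+for d in (2, 8, 32, 128):
+    norms = np.linalg.norm(rng.standard_normal((100_000, d)), axis=1)
+    radii[d] = float(np.quantile(norms, 0.9))
+    print(d, round(radii[d], 2))
+```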
+
+A potential weakness of the mean region size is that it can be disproportionately skewed by inputs with high uncertainty. To mitigate this sensitivity, we also report the median region size as a more robust alternative.
+
+Computational complexity. Table 1 reports the computational complexity of each conformity score. For M-CP and CopulaCPTS, let $M$ represent the compute time of the univariate conformity score for a single dimension and $C$ the optimization time for CopulaCPTS. Let $D$ , $S$ , and $Q$ denote the time required for density evaluation, sampling, and calculating the inverse of the quantile function $\hat{Q}^{-1}$ , respectively. In many cases, $M$ and $C$ are relatively low, while $D$ , $S$ , and $Q$ are comparable. C-HDR, PCP, HD-PCP, STDQR and C-PCP are significantly slower than M-CP, L-CP, and DR-CP since they need to generate a large number of samples to compute the conformity score (we used $K = L = 100$ in our experiments).
+
+Base predictor. Some conformal methods stand out because they do not need to evaluate the predictive density $\hat{f}$ or generate samples. M-CP and CopulaCPTS only require a univariate model for each dimension, without needing a model for the joint distribution of $Y$. DR-CP does not require sampling from the model, which is beneficial when using normalizing flows that are slower to invert (e.g., Masked Autoregressive Flows (MAF, Papamakarios, Pavlakou, et al., 2017) or Convex Potential Flows (Huang et al., 2020)). PCP and C-PCP do not require evaluating the predictive density $\hat{f}$, making them compatible with any generative model, including diffusion models and GANs. L-CP and STDQR do not require predictive density evaluation but require the model to be invertible. We summarize the different properties in Table 1.
+
+# 5.3. Connection between sample-based and density-based methods
+
+Interestingly, the sample-based methods (PCP, HD-PCP, C-PCP) can be viewed as special cases of density-based methods (DR-CP, C-HDR). Let us assume a common predictive PDF $\hat{f}$ is used for the base predictor of these conformal methods. While PCP and C-PCP do not require a PDF, we assume that $\hat{f}_{Y|X = x}$ and $\hat{F}_{Y|X = x}$ correspond to the same distribution. Let $\tilde{Y}^{(l)}\sim \hat{F}_{Y|X = x}$ for $l\in [L]$ , and $f_{\mathbb{S}}(\cdot ;\tilde{Y}^{(l)})$ be a PDF with spherical level sets, centered at $\tilde{Y}^{(l)}$ , such as a standard multivariate Gaussian $\mathcal{N}(\cdot ;\tilde{Y}^{(l)},I_d)$ . For $x\in \mathcal{X}$ , we define a new PDF $\hat{f}_{\max}(y|x) = \max_{l\in [L]}f_{\mathbb{S}}(y;\tilde{Y}^{(l)}) / C$ , where $C$ is a normalizing constant ensuring that $\hat{f}_{\max}(\cdot |x)$ integrates to 1. The following proposition establishes the relationship between these methods.
+
+Proposition 1. PCP is equivalent to DR-CP with $\hat{f} = \hat{f}_{\mathrm{max}}$. Similarly, HD-PCP is equivalent to DR-CP with $\hat{f} = \hat{f}_{\mathrm{max}}$ where only the $\lfloor (1 - \alpha)L\rfloor$ samples with the highest density among $\{\tilde{Y}^{(l)}\}_{l\in [L]}$ are kept. Finally, C-PCP is equivalent to C-HDR with $\hat{f} = \hat{f}_{\mathrm{max}}$.
+
+We provide a proof in Appendix E.3. Although these sample-based methods are special cases of density-based approaches, the key advantage of PCP and C-PCP is that they rely solely on a sampling procedure, without requiring a predictive density $\hat{f}$ as base predictor. Figure 3 summarizes the connections between the main conformal methods.
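+
+The intuition behind Proposition 1 can be checked numerically: with spherical Gaussian kernels, $-\hat{f}_{\max}$ is a strictly increasing transform of the distance to the nearest generated sample, so the PCP score and the DR-CP score under $\hat{f}_{\max}$ rank points identically and induce the same regions. A minimal sketch for a single $x$ (illustrative only; the normalizing constant $C$ is dropped since it does not affect rankings):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(3)
+L, d = 50, 2
+samples = rng.standard_normal((L, d))      # generated samples Y~(l) for one x
+
+def s_pcp(y):
+    """PCP conformity score: distance to the nearest generated sample."""
+    return np.min(np.linalg.norm(samples - y, axis=1))
+
+def s_drcp_fmax(y):
+    """DR-CP score under f_max with spherical Gaussian kernels, on the log
+    scale and up to constants (both monotone, so rankings are unchanged)."""
+    log_dens = -0.5 * np.sum((samples - y) ** 2, axis=1)   # log N(y; Y~(l), I) + const
+    return -np.max(log_dens)                               # = 0.5 * s_pcp(y)**2
+
+# The two scores rank any set of test points identically, hence the
+# corresponding conformal regions coincide.
+ys = 2 * rng.standard_normal((200, d))
+r1 = np.argsort([s_pcp(y) for y in ys])
+r2 = np.argsort([s_drcp_fmax(y) for y in ys])
+print(np.array_equal(r1, r2))              # True
+```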
+
+An interesting practical takeaway is that DR-CP and C-HDR are linked in the same way as PCP and C-PCP. Since DR-CP under the oracle has the smallest expected region size while C-HDR empirically has a smaller median region size, similar observations are expected for PCP and C-PCP. This is verified empirically: PCP has a smaller mean region size across all base predictors, while C-PCP has a smaller median region size.
+
+
+Figure 3: Connections between different methods.
+
+
+Figure 4: Conditional coverage metrics across datasets sorted by size. CEC-X and CEC-Z should be minimized while WSC should approach $1 - \alpha$ .
+
+# 6. A Large-Scale Study of Multi-Output Conformal Methods
+
+In this section, we present a large-scale study of multi-output conformal methods using 13 tabular datasets from previous studies (Tsoumakas et al., 2011; Feldman et al., 2023; Wang et al., 2023b; Barrio et al., 2024; Camehl et al., 2024). To ensure sufficient data for training, calibration, and testing, we include only datasets with at least 2,000 instances. The selected datasets contain between 7,207 and 50,000 data points, with the number of input features $p$ ranging from 1 to 279 and the number of output variables $d$ ranging from 2 to 16.
+
+We consider three base predictors: the Multivariate Quantile Function Forecaster (MQF$^2$), a normalizing flow (Kan et al., 2022); Distributional Random Forests (Ćevid et al., 2022); and a multivariate Gaussian mixture model (Bishop, 1994). We present results for MQF$^2$ in the main text, while similar results for the other models are provided in Appendix G. We compare the methods using several metrics, including conditional coverage (WSC, CEC-X, and CEC-Z), marginal coverage (MC), region size, and computational time. A detailed description of the experimental setup is provided in Appendix F.
+
+Conditional coverage. Figure 4 presents the results for all datasets, ordered by increasing dataset size. On most datasets, C-PCP, L-CP, and C-HDR obtain the best conditional coverage. In contrast, HD-PCP, STDQR, PCP, and DR-CP are the least conditionally calibrated. Finally, M-CP and CopulaCPTS attain intermediate conditional coverage, with M-CP performing slightly better. These results align with our analysis in Section 5.2, where we showed that C-PCP, L-CP, and C-HDR achieve ACC, while HD-PCP, STDQR, PCP, and DR-CP do not, and M-CP achieves it only under specific conditions. Figure 12 shows that all methods achieve marginal coverage, as expected.
+
+Figure 5: CD diagrams with the base predictor $\mathrm{MQF}^2$ based on 10 runs per dataset and method.
+
+Region size. Figure 5 presents a critical difference (CD) diagram (Demšar, 2006) comparing the median region size of all methods across datasets. Higher-ranked methods (further right) perform better. Thick horizontal lines indicate models with no statistically significant difference at the 0.05 level (see Appendix F.5 for details).
+
+Among the methods that achieve ACC, C-HDR yields the smallest median region size, as expected, since its regions converge to the highest density regions (Izbicki et al., 2022). C-PCP and L-CP produce slightly larger regions, though the difference is not significant on these datasets. Among the remaining methods, DR-CP yields the smallest median region. In contrast, M-CP and CopulaCPTS generate larger regions, which is expected given their less flexible hyperrectangular shape. PCP tends to obtain the largest region sizes as it includes samples from low-density areas, whereas STDQR and HD-PCP mitigate this by removing such samples, resulting in more compact regions. Finally, Figure 13 in Appendix G provides results for the mean region size, where DR-CP consistently performs best: under the oracle setting, it minimizes the expected region size, as explained in Section 5.2.
+
+Figure 6: Total time in seconds for calibration and test.
+
+Computation time. Figure 6 shows the total computation time for each method. M-CP and CopulaCPTS have the shortest computation times, as they do not require learning a complex model for the output joint distribution. L-CP and DR-CP follow, benefiting from the absence of per-instance sampling. In contrast, sampling-based methods typically require 100 to 200 times more computation time.
+
+Application to image dataset. Finally, we extend our comparison to a regression problem where the output is an image, which has a higher dimensionality than the previously considered tabular datasets. Specifically, we use the CIFAR-10 dataset (Krizhevsky et al., 2014), consisting of $32 \times 32$ RGB images, each labeled with one of 10 possible classes. We train the base predictor Glow (Kingma and Dhariwal, 2018), conditioned on the image label, where the output space is $\mathcal{Y} = [0,1]^{3 \times 32 \times 32}$ ( $d = 3072$ ) and the input space is $\mathcal{X} = \{0, \dots, 9\}$ ( $p = 1$ ). The results, detailed in Appendix J, lead to similar conclusions regarding conditional coverage, region size, and computational time.
+
+# 7. Conclusion
+
+We studied the problem of constructing conformal prediction regions for multi-output regression, which remains relatively underexplored compared to the univariate case. We presented a unified comparative study of several conformal methods along with their associated conformity scores, highlighting their properties and interconnections. In addition, we introduced two new classes of conformity scores: CDF-based scores, including a variant compatible with generative models, and latent-based scores, which exploit invertible generative models for improved computational efficiency. Both classes generalize existing conformity scores from the univariate setting.
+
+The choice of conformity score directly influences the geometry and flexibility of the resulting prediction regions. In the univariate setting, the most flexible regions are typically unions of intervals. In contrast, the multivariate case allows for a wider variety of geometries, ranging from hyperrectangles and ellipsoids to highly flexible, nonconvex regions that can be disconnected and capture distributional bimodality. A simple and computationally efficient approach is to construct separate univariate prediction regions for each output dimension and apply a correction for joint coverage. However, these methods do not capture dependencies between output dimensions and typically result in rigid (unions of) hyperrectangular regions with limited flexibility. In contrast, more flexible methods account for correlations and dependencies across outputs by incorporating the covariance structure, modeling the joint density, or leveraging generative models. These approaches produce more expressive prediction regions but are generally more computationally demanding.
+
+While conformal prediction (CP) always guarantees marginal coverage, conformity scores whose thresholds do not vary instance-wise fail to achieve the desirable property of asymptotic conditional coverage (ACC). In contrast, our proposed scores enable ACC but require estimating the conditional distribution of the conformity score—an inherently challenging task in low-data regimes. Similarly, CP methods based on generative models introduce additional sampling variability. Finally, our large-scale empirical study systematically compares these conformal methods across multiple multi-output regression datasets, using various evaluation metrics, including conditional coverage and prediction region volume.
+
+Future work will focus on replacing region prediction with a recalibrated multivariate distribution, equipped with an explicit density function and conformal coverage guarantees. We also plan to extend our approach to more complex outputs, including semi-structured and unstructured data (e.g., images, text, and graphs), and to further investigate the theoretical connection between multivariate calibration and conformal prediction.
+
+# Impact statement
+
+This work contributes to the development of statistically reliable and interpretable machine learning algorithms. Enhancing trust and transparency in predictive modeling helps design more practical and accessible models for real-world applications.
+
+# References
+
+[1] Brandon Amos, Lei Xu, and J Zico Kolter. "Input Convex Neural Networks". Proceedings of the 34th International Conference on Machine Learning. Vol. 70. Proceedings of Machine Learning Research. PMLR, 2017, pp. 146-155.
+[2] Anastasios N Angelopoulos and Stephen Bates. “Conformal prediction: A gentle introduction”. Foundations and Trends® in Machine Learning 16 (4 2023), pp. 494–591.
+[3] Anastasios N Angelopoulos et al. "Uncertainty Sets for Image Classifiers using Conformal Prediction". International Conference on Learning Representations. 2020.
+[4] Rina Foygel Barber et al. “The limits of distribution-free conditional predictive inference”. arXiv [math.ST] (2019). arXiv: 1903.04684 [math.ST].
+[5] Eustasio del Barrio, Alberto González Sanz, and Marc Hallin. “Nonparametric multiple-output center-outward quantile regression”. Journal of the American Statistical Association (2024), pp. 1–43.
+[6] Alessio Benavoli, Giorgio Corani, and Francesca Mangili. “Should We Really Use Post-Hoc Tests Based on Mean-Ranks?” Journal of machine learning research: JMLR 17 (5 2016), pp. 1-10.
+[7] Christopher M Bishop. Mixture density networks. Tech. rep. Birmingham, 1994.
+[8] Annika Camehl, Dennis Fok, and Kathrin Gruber. "On superlevel sets of conditional densities and multivariate quantile regression". Journal of Econometrics (105807 2024), p. 105807.
+[9] Guillaume Carlier, Victor Chernozhukov, and Alfred Galichon. "Vector quantile regression: An optimal transport approach". Annals of statistics 44 (3 2016), pp. 1165-1192.
+[10] Maxime Cauchois, Suyash Gupta, and John C Duchi. "Knowing what you know: valid and validated confidence sets in multiclass and multilabel prediction". Journal of machine learning research: JMLR 22 (1 2021), pp. 3681-3722.
+[11] Domagoj Ćevid et al. "Distributional Random Forests: Heterogeneity Adjustment and Multivariate Distributional Regression". Journal of machine learning research: JMLR 23 (333 2022), pp. 1-79.
+
+[12] Victor Chernozhukov, Kaspar Wüthrich, and Yinchu Zhu. "Distributional conformal prediction". Proceedings of the National Academy of Sciences of the United States of America 118 (48 2021).
+[13] Janez Demšar. "Statistical Comparisons of Classifiers over Multiple Data Sets". Journal of machine learning research: JMLR 7 (2006), pp. 1-30.
+[14] Victor Dheur et al. "Distribution-Free Conformal Joint Prediction Regions for Neural Marked Temporal Point Processes". Machine Learning 113 (2024), pp. 7055-7102. arXiv: 2401.04612 [cs.LG].
+[15] Jacopo Diquigiovanni, Matteo Fontana, and Simone Vantini. “A Conformal approach for functional data prediction”. Book of short papers SIS 2021, Pisa, 21-25 Giugno 2021. 2021, pp. 907-910.
+[16] Jacopo Diquigiovanni, Matteo Fontana, and Simone Vantini. Distribution-Free Prediction Bands for Multivariate Functional Time Series: an Application to the Italian Gas Market. arXiv:2107.00527 [stat]. 2024.
+[17] Jacopo Diquigiovanni, Matteo Fontana, and Simone Vantini. “The importance of being a band: Finite-sample exact distribution-free prediction sets for functional data”. arXiv [stat.ME] (2021). arXiv: 2102.06746 [stat.ME].
+[18] Eshant English et al. "JANET: Joint Adaptive predictioN-region Estimation for Time-series". arXiv [stat.ML] (2024). arXiv: 2407.06390 [stat.ML].
+[19] Zhenhan Fang, Aixin Tan, and Jian Huang. “CONTRA: Conformal prediction region via normalizing flow transformation”. The Thirteenth International Conference on Learning Representations. 2025.
+[20] Shai Feldman, Stephen Bates, and Yaniv Romano. "Calibrated Multiple-Output Quantile Regression with Representation Learning". Journal of machine learning research: JMLR 24 (24 2023), pp. 1-48.
+[21] Milton Friedman. “A Comparison of Alternative Tests of Significance for the Problem of $m$ Rankings”. The Annals of Mathematical Statistics 11 (1940), pp. 86–92.
+[22] Léo Grinsztajn, Edouard Oyallon, and G Varoquaux. “Why do tree-based models still outperform deep learning on typical tabular data?” Neural Information Processing Systems 35 (2022), pp. 507-520.
+[23] David Ha, Andrew M Dai, and Quoc V Le. "Hyper-Networks". International Conference on Learning Representations. 2022.
+[24] Marc Hallin and Miroslav Šiman. “Multiple-output quantile regression”. Handbook of quantile regression (2017), pp. 185–207.
+
+[25] Marc Hallin et al. "Distribution and quantile functions, ranks and signs in dimension d: A measure transportation approach". The Annals of Statistics 49 (2 2021), pp. 1139-1165.
+[26] Sture Holm. “A Simple Sequentially Rejective Multiple Test Procedure”. Scandinavian journal of statistics, theory and applications 6 (2 1979), pp. 65-70.
+[27] Eliahu Horwitz and Yedid Hoshen. "Confusion: Confidence Intervals for Diffusion Models". arXiv [cs.CV] (2022). arXiv: 2211.09795 [cs.CV].
+[28] Chin-Wei Huang et al. "Convex Potential Flows: Universal Probability Distributions with Optimal Transport and Convex Optimization" (2020).
+[29] Rob J Hyndman. "Computing and Graphing Highest Density Regions". The American statistician 50 (2 1996), pp. 120-126.
+[30] Rafael Izbicki, Gilson Shimizu, and Rafael Stern. "Flexible distribution-free conditional predictive bands using density estimators". Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics. Vol. 108. Proceedings of Machine Learning Research. PMLR, 2020, pp. 3068-3077.
+[31] Rafael Izbicki, Gilson Shimizu, and Rafael B Stern. "CD-split and HPD-split: Efficient Conformal Regions in High Dimensions". Journal of machine learning research: JMLR 23 (87 2022), pp. 1-32.
+[32] Chancellor Johnstone and Eugene Ndiaye. "Exact and approximate conformal inference for multi-output regression". arXiv [stat.ML] (2022). arXiv: 2210.17405 [stat.ML].
+[33] Kelvin Kan et al. "Multivariate Quantile Function Forecaster". Proceedings of The 25th International Conference on Artificial Intelligence and Statistics. Vol. 151. Proceedings of Machine Learning Research. PMLR, 2022, pp. 10603-10621.
+[34] Diederik P Kingma and Prafulla Dhariwal. "Glow: Generative flow with invertible 1x1 convolutions". Advances in Neural Information Processing Systems 31 (2018).
+[35] Alex Krizhevsky, Vinod Nair, Geoffrey Hinton, et al. "The CIFAR-10 dataset". online: http://www.cs.toronto.edu/kriz/cifar.html 55 (5 2014), p. 2.
+[36] Jing Lei and Larry Wasserman. "Distribution-free prediction bands for non-parametric regression". Journal of the Royal Statistical Society. Series B, Statistical methodology 76 (1 2014), pp. 71-96.
+[37] Soundouss Messoudi, Sébastien Destercke, and Sylvain Rousseau. "Copula-based conformal prediction for multi-target regression". Pattern Recognition 120 (2021), p. 108101.
+
+[39] Soundouss Messoudi, Sébastien Destercke, and Sylvain Rousseau. "Ellipsoidal conformal inference for Multi-Target Regression". Conformal and Probabilistic Prediction with Applications. Conformal and Probabilistic Prediction with Applications. PMLR, 2022, pp. 294-306.
+[40] Eric Nalisnick et al. “Do Deep Generative Models Know What They Don’t Know?” International Conference on Learning Representations. 2018.
+[41] Davy Paindaveine and Miroslav Šiman. “On directional multiple-output quantile regression”. Journal of multivariate analysis 102 (2 2011), pp. 193-212.
+[42] Harris Papadopoulos, Alex Gammerman, and Volodya Vovk. "Normalized nonconformity measures for regression Conformal Prediction". Proceedings of the 26th IASTED International Conference on Artificial Intelligence and Applications (Innsbruck, Austria). AIA '08. USA: ACTA Press, 2008, pp. 64-69.
+[43] Harris Papadopoulos et al. "Inductive Confidence Machines for Regression". Machine Learning: ECML 2002. Springer Berlin Heidelberg, 2002, pp. 345-356.
+[44] George Papamakarios, Theo Pavlakou, et al. “Masked autoregressive flow for density estimation”. Advances in neural information processing systems (2017).
+[45] Ji Won Park, Robert Tibshirani, and Kyunghyun Cho. Semiparametric conformal prediction. arXiv:2411.02114. 2024.
+[46] Vincent Plassier et al. "Probabilistic Conformal Prediction with Approximate Conditional Validity". The Thirteenth International Conference on Learning Representations. 2025.
+[47] Alvin Rajkomar et al. "Scalable and accurate deep learning with electronic health records". npj Digital Medicine 1 (1 2018), p. 18.
+[48] Yaniv Romano, Evan Patterson, and Emmanuel Candès. "Conformalized quantile regression". Advances in Neural Information Processing Systems (2019).
+[49] Yaniv Romano, Matteo Sesia, and Emmanuel Candès. "Classification with valid and adaptive coverage". Neural Information Processing Systems 33 (2020), pp. 3581–3591.
+[50] Raphael Rossellini, Rina Foygel Barber, and Rebecca Willett. "Integrating uncertainty awareness into conformalized quantile regression". International Conference on Artificial Intelligence and Statistics 238 (2024), pp. 1540–1548.
+[51] Mauricio Sadinle, Jing Lei, and Larry Wasserman. "Least ambiguous set-valued classifiers with bounded error levels". Journal of the American Statistical Association 114 (525 2019), pp. 223–234.
+[52] Matteo Sesia and Yaniv Romano. "Conformal prediction using conditional histograms". Advances in Neural Information Processing Systems (2021).
+[53] Kamile Stankeviciute, Ahmed M Alaa, and Mihaela van der Schaar. "Conformal Time-series Forecasting". Advances in Neural Information Processing Systems 34 (2021).
+[54] Vincent Stimper, Bernhard Schölkopf, and Jose Miguel Hernandez-Lobato. "Resampling Base Distributions of Normalizing Flows". International Conference on Artificial Intelligence and Statistics. PMLR, 2022, pp. 4915–4936.
+[55] Sophia Sun and Rose Yu. "Copula Conformal Prediction for Multi-step Time Series Forecasting". arXiv:2212.03281 [cs, stat]. 2024.
+[56] Sophia Huiwen Sun and Rose Yu. "Copula Conformal Prediction for Multi-step Time Series Prediction". The Twelfth International Conference on Learning Representations. 2024.
+[57] Jacopo Teneggi et al. "How to Trust Your Diffusion Model: A Convex Optimization Approach to Conformal Risk Control" (2023).
+[58] Alexander Timans et al. "Max-Rank: Efficient Multiple Testing for Conformal Prediction". International Conference on Artificial Intelligence and Statistics 258 (2025), pp. 3898–3906.
+[59] Grigorios Tsoumakas et al. "MULAN: A Java Library for Multi-Label Learning". Journal of Machine Learning Research 12 (71 2011), pp. 2411–2414.
+[60] Vladimir Vovk, Alexander Gammerman, and Craig Saunders. "Machine-Learning Applications of Algorithmic Randomness". Proceedings of the Sixteenth International Conference on Machine Learning. ICML '99. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc., 1999, pp. 444–453.
+[61] Jun Wang et al. "Conformal Temporal Logic Planning using Large Language Models: Knowing When to Do What and When to Ask for Help". arXiv:2309.10092 [cs]. 2023.
+[62] Zhendong Wang et al. "Probabilistic Conformal Prediction Using Conditional Random Samples". International Conference on Artificial Intelligence and Statistics. PMLR, 2023, pp. 8814–8836.
+[63] Frank Wilcoxon. "Individual Comparisons by Ranking Methods". Biometrics Bulletin 1 (6 1945), pp. 80–83.
+[64] Xiaofan Zhou et al. "Conformal prediction: A data perspective". ACM Computing Surveys (2025).
+[65] Yanfei Zhou, Lars Lindemann, and Matteo Sesia. "Conformalized adaptive forecasting of heterogeneous trajectories". Proceedings of the 41st International Conference on Machine Learning. ICML'24. Vienna, Austria: JMLR.org, 2024.
+
+# A. Related Work
+
+Our research builds on a broad body of literature that spans several closely related themes. This supplementary section provides a concise overview of these topics.
+
+In the realm of multivariate functional data, Diquigiovanni et al., 2021a introduced conformal predictors that create adaptive, finite-sample valid prediction bands, with extensions into time series applications, particularly in energy markets (Diquigiovanni et al., 2024). In image processing, recent applications (Horwitz and Hoshen, 2022; Teneggi et al., 2023) apply CP in a pixel-wise manner, resulting in hyperrectangular regions that may not capture pixel dependencies effectively.
+
+For multi-step-ahead or multi-horizon forecasting, predictions can be made across multiple outputs simultaneously rather than sequentially, aligning with a multi-output forecasting framework. Stankeviciute et al., 2021 explored multi-horizon time series forecasting using recurrent neural networks (RNNs), incorporating univariate conformal techniques with nominal coverage adjustments via Bonferroni correction. Similarly, English et al., 2024 adapted the Amplitude-Modulated L-inf norm method from Diquigiovanni et al., 2021b for multi-output, multi-step forecasting.
+
+In multi-target regression, Messoudi et al., 2021b applied copula functions in deep neural networks to provide multivariate predictions with guaranteed coverage. Their findings suggest that simple parametric copulas can work for certain datasets, but more complex copulas may be required for well-calibrated predictions, which introduces challenges, as complex copulas typically require significant calibration data. Building on this, Sun and Yu, 2024b proposed a copula-based method for multi-step time series forecasting, optimizing the calibration and efficiency of confidence intervals. However, this method requires two calibration phases and is primarily feasible with large calibration datasets. Moreover, its validity relies on the empirical copula, limiting applicability to other learnable copula classes. One very recent advancement on the subject, following ideas expressed by Messoudi et al., 2021b in their conclusions, is Park et al., 2024, where the dependence structure between marginal distributions is recovered via the use of vine copulas.
+
+Another set of methodologies that tackle multi-output problems is based on multiplicity-correction approaches for multiple testing. Timans et al., 2025 improve on the Bonferroni correction using permutation tests, obtaining tighter and globally valid prediction regions. Methods based on multiplicity correction, such as controlling the Family-Wise Error Rate (FWER), are valuable for providing error control guarantees across multiple outputs. In contrast, the methods we survey and propose aim for potentially tighter prediction regions by directly modeling the multivariate structure.
+
+In the context of conformal prediction, the flexibility in configuring the prediction region is a degree of freedom for the modeler. To overcome the limitations of traditional hyper-rectangular prediction regions, Messoudi et al., 2022 introduced ellipsoidal uncertainty sets that enable instance-specific adaptation of confidence regions. Johnstone and Ndiaye, 2022 advanced multi-output regression by developing efficient techniques for approximating conformal prediction sets without retraining the model, although their approach relies heavily on the predictive model being a linear function of $Y$ . Sun and Yu, 2024b constructed ellipsoidal prediction regions for time series, capable of modeling dependencies between outputs, though this method does not handle multimodality. Our work closely connects with the multivariate conformal prediction literature, where multi-horizon prediction is viewed as a prediction across multiple outputs.
+
+Overall, as this study demonstrates, the flexibility of conformal prediction allows for coherent handling of diverse data types. Multi-output problems represent one facet of a broader taxonomy, as explored by Zhou et al., 2025, who discuss further developments in multi-output conformal prediction.
+
+# B. Additional multi-output conformal methods
+
+In this section, we describe the prediction regions $\hat{R}$ for M-CP and CopulaCPTS, which both produce hyperrectangular regions.
+
+M-CP. Zhou et al., 2024 applied a univariate conformal method to each output $i \in [d]$ of the multivariate response. Specifically, given a conformity score $s_i$ for the $i$ -th dimension, joint coverage across all dimensions can be achieved using the following conformity score:
+
+$$
+s _ {\mathrm {M} - \mathrm {C P}} (x, y) = \max _ {i \in [ d ]} s _ {i} (x, y _ {i}). \tag {15}
+$$
+
+A similar score has been considered by Diquigiovanni et al. (2021b) in the context of functional regression.
+
+In this work, we use Conformalized Quantile Regression (CQR, Romano et al., 2019) for each output $i \in [d]$, where the conformity score is given by:
+
+$$
+s _ {i} (x, y _ {i}) = \max \left\{\hat {l} _ {i} (x) - y _ {i}, y _ {i} - \hat {u} _ {i} (x) \right\}, \tag {16}
+$$
+
+with $\hat{l}_i(x)$ and $\hat{u}_i(x)$ representing the lower and upper conditional quantiles of $Y_i|X = x$ at levels $\alpha_l$ and $\alpha_u$ , respectively. In our experiments, we consider equal-tailed prediction intervals, where $\alpha_l = \frac{\alpha}{2}$ , $\alpha_u = 1 - \frac{\alpha}{2}$ , and $\alpha$ denotes the miscoverage level. The corresponding prediction region $\hat{R}_{\mathrm{M - CP}}(x) = \times_{i=1}^{d} [\hat{l}_i(x) - \hat{q}, \hat{u}_i(x) + \hat{q}]$ is a hyperrectangle.
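+
+As a concrete illustration, the M-CP calibration step can be sketched in a few lines of NumPy. This is a minimal sketch, assuming the quantile estimates `l_hat` and `u_hat` come from some pre-trained per-output quantile regressor (hypothetical inputs here), with the split-conformal quantile taken as the $\lceil (1-\alpha)(n+1) \rceil$-th smallest score:
+
+```python
+import numpy as np
+
+def mcp_quantile(l_hat, u_hat, y_cal, alpha):
+    """Calibrate the M-CP score (15) built on per-output CQR scores (16).
+
+    l_hat, u_hat, y_cal: (n_cal, d) arrays of estimated lower/upper
+    conditional quantiles and calibration responses.
+    """
+    s_i = np.maximum(l_hat - y_cal, y_cal - u_hat)  # CQR score per output (16)
+    s = s_i.max(axis=1)                             # max over the d outputs (15)
+    n = len(s)
+    k = int(np.ceil((1 - alpha) * (n + 1)))         # split-conformal rank
+    return np.sort(s)[k - 1]
+
+def in_mcp_region(l_hat, u_hat, q_hat, y):
+    """Membership in the hyperrectangle [l_i - q_hat, u_i + q_hat] per output."""
+    return bool(np.all((y >= l_hat - q_hat) & (y <= u_hat + q_hat)))
+```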
+
+CopulaCPTS. CopulaCPTS (Sun and Yu, 2024a) was originally designed for time series, but its calibration procedure is valid for any multi-dimensional variable. It models the joint probability of uncertainty across outputs with a copula. The calibration set is divided into two subsets, $\mathcal{D}_{\mathrm{cal} - 1}$ and $\mathcal{D}_{\mathrm{cal} - 2}$: $\mathcal{D}_{\mathrm{cal} - 1}$ is used to estimate a CDF of the conformity score for each output, and $\mathcal{D}_{\mathrm{cal} - 2}$ is used to calibrate the copula. CopulaCPTS can combine any univariate or multivariate conformity scores. In this paper, we use the CQR score $s_i$ (16) for each dimension $i\in [d]$.
+
+Let $\hat{F}_i$ denote the empirical CDF of the conformity scores $\{s_i(x,y_i)\}_{(x,y) \in \mathcal{D}_{\mathrm{cal} - 1}}$ for $i \in [d]$, and $\hat{F}_i^{-1}$ the corresponding empirical quantile function. In practice, to minimize region sizes while achieving marginal validity, CopulaCPTS computes the optimal $s_1^*, \ldots, s_d^*$ that minimize the following loss using stochastic gradient descent:
+
+$$
+\mathcal{L}\left(\hat{s}_{1}, \dots, \hat{s}_{d}\right) = \frac{1}{\left|\mathcal{D}_{\mathrm{cal}-2}\right|} \sum_{(x, y) \in \mathcal{D}_{\mathrm{cal}-2}} \prod_{i = 1}^{d} \mathbb{1}\left[s_{i}(x, y_{i}) < \hat{F}_{i}^{-1}(\hat{s}_{i})\right] - (1 - \alpha). \tag{17}
+$$
+
+Then, the prediction region is defined as:
+
+$$
+\hat {R} _ {\text {C o p u l a C P T S}} (x) = \left\{y \in \mathcal {Y}: \forall i \in [ d ], s _ {i} (x, y _ {i}) < s _ {i} ^ {*} \right\} \tag {18}
+$$
+
+Sun and Yu, 2024a proved that this prediction set achieves marginal coverage. However, since CopulaCPTS does not follow the SCP algorithm, the distributional properties of the marginal coverage derived in Appendix E.1 do not apply to it.
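+
+To make the two-split idea concrete, here is a minimal sketch. It is a simplification, not the authors' optimizer: the per-output SGD search of (17) is replaced by calibrating a single shared quantile level on $\mathcal{D}_{\mathrm{cal}-2}$:
+
+```python
+import numpy as np
+
+def copula_calibrate(scores_cal1, scores_cal2, alpha):
+    """Two-split calibration in the spirit of CopulaCPTS, simplified: one
+    shared quantile level t is calibrated instead of the d levels of (17).
+
+    scores_cal1, scores_cal2: (n, d) arrays of per-output conformity scores.
+    Returns per-output score thresholds s_star, used as in (18).
+    """
+    n1, d = scores_cal1.shape
+    sorted1 = np.sort(scores_cal1, axis=0)
+    # Rank each second-split score under the first-split empirical CDFs.
+    u = np.column_stack([
+        np.searchsorted(sorted1[:, i], scores_cal2[:, i], side="right") / n1
+        for i in range(d)
+    ])
+    # Conformal quantile of the max-rank over outputs on the second split.
+    m = np.sort(u.max(axis=1))
+    n2 = len(m)
+    t = m[min(int(np.ceil((1 - alpha) * (n2 + 1))), n2) - 1]
+    # Map the shared level back to one score threshold per output.
+    idx = max(min(int(np.ceil(t * n1)), n1), 1) - 1
+    return sorted1[idx, :]
+```
+
+Membership in the resulting region is then checked as $\forall i \in [d]$, $s_i(x, y_i) < s_i^*$, as in (18).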
+
+# C. Relationship between conformity scores and regions
+
+Section 2.2 and Section 3 in the main text presented several multi-output conformal methods through their conformity score $s$ . As explained in Section 2.1, their corresponding prediction region $\hat{R}$ can be computed as follows:
+
+$$
+\hat{R}(x) = \{y \in \mathcal{Y} : s(x, y) \leq \hat{q}\}.
+$$
+
+In this section, we explicitly derive the prediction region associated with these methods.
+
+M-CP. Following Zhou et al., 2024, the prediction region $\hat{R}_{\mathrm{M - CP}}$ can be derived from $s_{\mathrm{M - CP}}$ as follows:
+
+$$
+s _ {\mathrm {M} - \mathrm {C P}} (x, y) \leq \hat {q} \iff \max _ {i \in [ d ]} s _ {i} (x, y _ {i}) \leq \hat {q} \tag {19}
+$$
+
+$$
+\Longleftrightarrow \forall i \in [ d ], s _ {i} (x, y _ {i}) \leq \hat {q} \tag {20}
+$$
+
+$$
+\Longleftrightarrow \forall i \in [ d ], \max \left\{\hat {l} _ {i} (x) - y _ {i}, y _ {i} - \hat {u} _ {i} (x) \right\} \leq \hat {q} \tag {21}
+$$
+
+$$
+\Longleftrightarrow \forall i \in [ d ], \hat {l} _ {i} (x) - y _ {i} \leq \hat {q} \wedge y _ {i} - \hat {u} _ {i} (x) \leq \hat {q} \tag {22}
+$$
+
+$$
+\Longleftrightarrow \forall i \in [ d ], \hat {l} _ {i} (x) - \hat {q} \leq y _ {i} \wedge y _ {i} \leq \hat {u} _ {i} (x) + \hat {q} \tag {23}
+$$
+
+$$
+\Longleftrightarrow \forall i \in [ d ], y _ {i} \in [ \hat {l} _ {i} (x) - \hat {q}, \hat {u} _ {i} (x) + \hat {q} ] \tag {24}
+$$
+
+$$
+\Longleftrightarrow y \in \bigotimes_ {i = 1} ^ {d} \left[ \hat {l} _ {i} (x) - \hat {q}, \hat {u} _ {i} (x) + \hat {q} \right] \tag {25}
+$$
+
+$$
+\Longleftrightarrow y \in \hat {R} _ {\mathrm {M - C P}} (x). \tag {26}
+$$
+
+DR-CP. The equivalence is straightforward.
+
+C-HDR. Given $\hat{Y} \sim \hat{f}_{Y|X = x}$ and $U = \hat{f}(\hat{Y} \mid x)$, for any $y \in \mathcal{Y}$, we can write
+
+$$
+\begin{aligned}
+& s_{\mathrm{C-HDR}}(x, y) && (27) \\
+& = \operatorname{HPD}_{\hat{f}}(y \mid x) && (28) \\
+& = \mathbb{P}(\hat{f}(\hat{Y} \mid x) \geq \hat{f}(y \mid x) \mid X = x) && (29) \\
+& = \mathbb{P}(U \geq \hat{f}(y \mid x) \mid X = x) && (30) \\
+& = 1 - \mathbb{P}(U \leq \hat{f}(y \mid x) \mid X = x) && (31) \\
+& = 1 - F_{U \mid X = x}(\hat{f}(y \mid x)), && (32)
+\end{aligned}
+$$
+
+where $F_{U|X = x}$ is the conditional CDF of $U$ given $X = x$ .
+
+Recall that the prediction region for C-HDR is given by
+
+$$
+\hat{R}_{\mathrm{C-HDR}}(x) = \left\{y \in \mathcal{Y} : \hat{f}(y \mid x) \geq t_{\hat{q}}\right\}, \quad \text{where } t_{\hat{q}} = \sup\left\{t : \mathbb{P}(\hat{f}(\hat{Y} \mid x) \geq t \mid X = x) \geq \hat{q}\right\}. \tag{33}
+$$
+
+The threshold $t_{\hat{q}}$ in (33) can be equivalently written as follows:
+
+$$
+\begin{aligned}
+t_{\hat{q}} & = \sup\left\{t : \mathbb{P}(\hat{f}(\hat{Y} \mid x) \geq t \mid X = x) \geq \hat{q}\right\} && (34) \\
+& = \sup\left\{t : \mathbb{P}(U \geq t \mid X = x) \geq \hat{q}\right\} && (35) \\
+& = \sup\left\{t : 1 - \mathbb{P}(U \leq t \mid X = x) \geq \hat{q}\right\} && (36) \\
+& = \sup\left\{t : 1 - \hat{q} \geq F_{U \mid X = x}(t)\right\} && (37) \\
+& = F_{U \mid X = x}^{-1}(1 - \hat{q}), && (38)
+\end{aligned}
+$$
+
+where we use the definition of the upper quantile function in the last step.
+
+Using (27), (33), and (38), we can write
+
+$$
+\begin{aligned}
+s_{\mathrm{C-HDR}}(x, y) \leq \hat{q} & \Longleftrightarrow \operatorname{HPD}_{\hat{f}}(y \mid x) \leq \hat{q} && (39) \\
+& \Longleftrightarrow 1 - F_{U \mid X = x}(\hat{f}(y \mid x)) \leq \hat{q} && (40) \\
+& \Longleftrightarrow F_{U \mid X = x}(\hat{f}(y \mid x)) \geq 1 - \hat{q} && (41) \\
+& \Longleftrightarrow \hat{f}(y \mid x) \geq F_{U \mid X = x}^{-1}(1 - \hat{q}) && (42) \\
+& \Longleftrightarrow \hat{f}(y \mid x) \geq t_{\hat{q}} && (43) \\
+& \Longleftrightarrow y \in \hat{R}_{\mathrm{C-HDR}}(x). && (44)
+\end{aligned}
+$$
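+
+In practice, the HPD score in (27)-(32) is typically estimated by Monte Carlo from $K$ conditional samples. A minimal sketch, assuming the density values under the fitted model are already available:
+
+```python
+import numpy as np
+
+def hpd_score(f_y, f_samples):
+    """Monte Carlo estimate of the HPD score (27)-(32): the fraction of K
+    conditional samples whose fitted density exceeds f_hat(y | x)."""
+    return np.mean(np.asarray(f_samples) >= f_y)
+```
+
+A point $y$ then belongs to $\hat{R}_{\mathrm{C-HDR}}(x)$ if and only if `hpd_score(f_y, f_samples) <= q_hat`, in line with (39)-(44).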
+
+PCP. Let $B(\mu, r)$ represent a ball with center $\mu$ and radius $r$ . Following Wang et al., 2023b, we show that, for any $x \in \mathcal{X}$ , $\hat{R}_{\mathrm{PCP}}(x)$ corresponds to a union of balls:
+
+$$
+\begin{aligned}
+s_{\mathrm{PCP}}(x, y) \leq \hat{q} & \Longleftrightarrow \min_{l \in [L]} \|y - \tilde{Y}^{(l)}\|_{2} \leq \hat{q} && (45) \\
+& \Longleftrightarrow \exists l \in [L], \|y - \tilde{Y}^{(l)}\|_{2} \leq \hat{q} && (46) \\
+& \Longleftrightarrow \exists l \in [L], y \in B(\tilde{Y}^{(l)}, \hat{q}) && (47) \\
+& \Longleftrightarrow y \in \bigcup_{l \in [L]} B\left(\tilde{Y}^{(l)}, \hat{q}\right) && (48) \\
+& \Longleftrightarrow y \in \hat{R}_{\mathrm{PCP}}(x), && (49)
+\end{aligned}
+$$
+
+where $\tilde{Y}^{(l)} \sim \hat{F}_{Y|X = x}$ for $l \in [L]$.
+
+HD-PCP. For HD-PCP, the reasoning is similar to PCP with the difference that only the $\lfloor (1 - \alpha)L\rfloor$ samples with the highest density are kept.
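+
+The union-of-balls characterization (45)-(49) translates directly into code; a minimal sketch:
+
+```python
+import numpy as np
+
+def pcp_score(y, samples):
+    """PCP score (45): distance from y to the nearest of the L samples."""
+    return np.linalg.norm(np.asarray(samples) - y, axis=1).min()
+
+def in_pcp_region(y, samples, q_hat):
+    """Membership in the union of balls of radius q_hat, as in (48)."""
+    return pcp_score(y, samples) <= q_hat
+```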
+
+CDF-based conformity scores. We note that the region $\hat{R}_{\mathrm{CDF}}(x)$ has a similar form to $\hat{R}_W(x) = \{y \in \mathcal{Y} : s_W(x,y) \leq \hat{q}\}$, except that the threshold on $s_W(x,y)$ is different and depends on $x$. In fact, we can write
+
+$$
+\begin{aligned}
+\hat{R}_{\mathrm{CDF}}(x) & = \{y \in \mathcal{Y} : s_{\mathrm{CDF}}(x, y) \leq \hat{q}\} && (50) \\
+& = \{y \in \mathcal{Y} : F_{W \mid X = x}(s_{W}(x, y)) \leq \hat{q}\} && (51) \\
+& = \{y \in \mathcal{Y} : s_{W}(x, y) \leq F_{W \mid X = x}^{-1}(\hat{q})\}. && (52)
+\end{aligned}
+$$
+
+In the special case where $s_W = s_{\mathrm{PCP}}$ , since PCP always generates predictions as a union of balls, we can conclude that C-PCP will do the same.
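+
+The empirical counterpart of this rank transform can be sketched in one line; the base scores of the $K$ extra conditional samples are assumed to be precomputed:
+
+```python
+import numpy as np
+
+def ecdf_score(s_w_y, s_w_samples):
+    """Empirical-CDF conformity score in the spirit of (50)-(52): the rank
+    of the base score s_W(x, y) among the scores of K conditional samples."""
+    return np.mean(np.asarray(s_w_samples) <= s_w_y)
+```
+
+For C-PCP, for instance, `s_w_y` would be the PCP score of the candidate point and `s_w_samples` the PCP scores of $K$ fresh conditional samples, which makes the effective threshold on $s_W$ input-adaptive as in (52).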
+
+Latent-based conformity scores. Since $\hat{Q}(\cdot\,; x)$ is bijective, for every $y \in \mathcal{Y}$, there exists a unique $z \in \mathcal{Z}$ such that $y = \hat{Q}(z; x)$. Therefore, the condition $d_{\mathcal{Z}}(\hat{Q}^{-1}(y;x)) \leq \hat{q}$ is equivalent to $d_{\mathcal{Z}}(z) \leq \hat{q}$, where $z = \hat{Q}^{-1}(y;x)$. This gives the prediction region:
+
+$$
+\begin{aligned}
+\hat{R}_{\mathrm{L-CP}}(x) & = \left\{y \in \mathcal{Y} : d_{\mathcal{Z}}\left(\hat{Q}^{-1}(y; x)\right) \leq \hat{q}\right\} && (53) \\
+& = \left\{\hat{Q}(z; x) : z \in \mathcal{Z} \text{ and } d_{\mathcal{Z}}(z) \leq \hat{q}\right\}. && (54)
+\end{aligned}
+$$
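+
+A toy sketch of the latent-based score, with a hypothetical affine generator $\hat{Q}(z; x) = x + 2z$ standing in for a learned conditional invertible map:
+
+```python
+import numpy as np
+
+def lcp_score(y, x, q_inv, d_z=np.linalg.norm):
+    """L-CP conformity score: pull y back to the latent space through the
+    inverse generator and measure its latent depth d_Z (here a norm)."""
+    return d_z(q_inv(y, x))
+
+# Hypothetical affine generator Q(z; x) = x + 2 z, so Q^{-1}(y; x) = (y - x) / 2.
+q_inv = lambda y, x: (y - x) / 2.0
+score = lcp_score(np.array([3.0, 4.0]), np.array([1.0, 0.0]), q_inv)
+```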
+
+# D. Additional illustrative examples
+
+# D.1. A real-world application
+
+Following Wang et al., 2023a, we apply the multi-output conformal methods to the taxi dataset, where the goal is to predict the drop-off location of a New York taxi passenger based on the passenger's information.
+
+Figures 7(a) and 8(a) display five randomly selected samples from the dataset, showing the pick-up (red pin) and drop-off (blue pin) locations of taxi passengers. The remaining panels show a specific input-output pair $(x,y)$ and the corresponding prediction regions generated by the conformal methods discussed in this paper. The coverage level $1 - \alpha$ for these regions is set to 0.8, with MQF $^2$ as the base predictor, as introduced in Section F.2.
+
+Figure 7 shows the same data and prediction regions as Figure 8, but zoomed in for easier comparison with Figure 9. Each region is labeled with its size, calculated using the estimator from Section F.4 and displayed in the bottom left corner. Notably, C-PCP generates regions similar in shape to PCP but with an input-adaptive radius, resulting in smaller regions in this case (8.2 compared to 8.67). Additionally, HD-PCP produces more clustered regions, while PCP and C-PCP yield more dispersed ones.
+
+
+Figure 7: Conformal methods applied on the NYC Taxi dataset for an input with low uncertainty. Panels: (a) Sample Data, (b) M-CP, (c) CopulaCPTS, (d) DR-CP, (e) C-HDR, (f) PCP, (g) HD-PCP, (h) STDQR, (i) C-PCP, (j) L-CP.
+
+
+Figure 8: Conformal methods applied on the NYC Taxi dataset for an input with low uncertainty. Panels: (a) Sample Data, (b) M-CP, (c) CopulaCPTS, (d) DR-CP, (e) C-HDR, (f) PCP, (g) HD-PCP, (h) STDQR, (i) C-PCP, (j) L-CP.
+
+Figure 9 presents the same example for an input-output pair where the input is associated with higher uncertainty, resulting in larger region sizes. As in the first figure, the shapes of the regions (e.g., unions of hyperrectangles, quantile regions, etc.) remain consistent but expand to cover a larger area. Conformal methods with the best region sizes differ between the two figures, with C-HDR producing the smallest region in the first figure and DR-CP in the second. In this case, C-PCP selects a larger radius than PCP, resulting in larger regions than PCP. The observation that PCP and C-PCP produce more dispersed regions, while HD-PCP generates more clustered regions, also holds true for this higher uncertainty case.
+
+
+Figure 9: Conformal methods applied on the NYC Taxi dataset for an input with high uncertainty. Panels: (a) Sample Data, (b) M-CP, (c) CopulaCPTS, (d) DR-CP, (e) C-HDR, (f) PCP, (g) HD-PCP, (h) STDQR, (i) C-PCP, (j) L-CP.
+
+# D.2. Toy examples
+
+We define two data-generating processes to evaluate the models against a known distribution: a unimodal heteroscedastic distribution and a bimodal heteroscedastic distribution. The input variable $X \in \mathbb{R}$ is unidimensional ($p = 1$) and the output variable $Y \in \mathbb{R}^2$ is bidimensional ($d = 2$). The variables $X$ and $Y$ are scaled linearly so that each dimension has zero mean and unit variance. The figures are inspired by Barrio et al., 2024.
+
+Unimodal heteroscedastic process. The first process is illustrated in Figure 2 in the main text. The data generating process is as follows:
+
+$$
+X \sim \mathcal {U} (0, 1), \tag {55}
+$$
+
+$$
+Y \mid X = x \sim \frac{1}{k} \sum_{j = 1}^{k} \mathcal{N}\left((1.3 - x)\, \boldsymbol{\mu}^{(j)}, \sigma^{2} I_{2}\right), \tag{56}
+$$
+
+where $k = 200$, $\sigma = 0.2$, $I_{2}$ is the $2 \times 2$ identity matrix, and, for $j = 1, \ldots, k$,
+
+$$
+\boldsymbol{\mu}_{1}^{(j)} = \cos \alpha_{j}, \tag{58}
+$$
+
+$$
+\boldsymbol{\mu}_{2}^{(j)} = 0.5 - \sin \alpha_{j}, \tag{59}
+$$
+
+$$
+\alpha_{j} = \frac{(j - 1)\pi}{k - 1}. \tag{60}
+$$
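+
+The process (55)-(60) can be sampled directly; a short sketch (the final linear rescaling to zero mean and unit variance is omitted for brevity):
+
+```python
+import numpy as np
+
+def sample_unimodal(n, k=200, sigma=0.2, seed=None):
+    """Sample (X, Y) from the unimodal heteroscedastic process (55)-(60);
+    the final standardization of X and Y is omitted."""
+    rng = np.random.default_rng(seed)
+    x = rng.uniform(0.0, 1.0, size=n)                             # (55)
+    angles = np.arange(k) * np.pi / (k - 1)                       # (60), j = 1..k
+    mu = np.column_stack([np.cos(angles), 0.5 - np.sin(angles)])  # (58)-(59)
+    j = rng.integers(k, size=n)                                   # mixture component (56)
+    y = (1.3 - x)[:, None] * mu[j] + sigma * rng.standard_normal((n, 2))
+    return x, y
+```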
+
+Detailed metrics for this dataset, supporting Section 5.1, are provided in Table 2.
+
+| Method | MC | Median Size | CEC-X (×100) | CEC-Z (×100) | WSC | Test time |
+| --- | --- | --- | --- | --- | --- | --- |
+| M-CP | 0.805 ± 0.0039 | 8.47 ± 0.14 | 0.110 ± 0.023 | 0.0812 ± 0.018 | 0.803 ± 0.011 | 0.0336 ± 0.022 |
+| CopulaCPTS | 0.815 ± 0.011 | 8.78 ± 0.25 | 0.191 ± 0.049 | 0.163 ± 0.047 | 0.814 ± 0.015 | 1.04 ± 0.021 |
+| DR-CP | 0.808 ± 0.0042 | 7.03 ± 0.071 | 0.613 ± 0.045 | 0.560 ± 0.042 | 0.710 ± 0.016 | 0.0209 ± 0.00047 |
+| C-HDR | 0.810 ± 0.0038 | 6.80 ± 0.059 | 0.0637 ± 0.016 | 0.0825 ± 0.012 | 0.798 ± 0.0070 | 3.52 ± 0.085 |
+| PCP | 0.805 ± 0.0039 | 9.16 ± 0.089 | 0.668 ± 0.052 | 0.587 ± 0.046 | 0.713 ± 0.0080 | 1.69 ± 0.021 |
+| HD-PCP | 0.804 ± 0.0037 | 7.44 ± 0.056 | 0.287 ± 0.031 | 0.256 ± 0.034 | 0.758 ± 0.013 | 3.38 ± 0.043 |
+| STDQR | 0.806 ± 0.0027 | 7.87 ± 0.070 | 0.343 ± 0.025 | 0.305 ± 0.027 | 0.746 ± 0.011 | 1.77 ± 0.022 |
+| C-PCP | 0.808 ± 0.0049 | 9.14 ± 0.12 | 0.0464 ± 0.013 | 0.0484 ± 0.013 | 0.822 ± 0.0085 | 3.44 ± 0.056 |
+| L-CP | 0.803 ± 0.0039 | 8.24 ± 0.11 | 0.0544 ± 0.0073 | 0.0654 ± 0.012 | 0.811 ± 0.011 | 0.0217 ± 0.00052 |
+
+Table 2: Detailed metrics for the unimodal heteroscedastic process from Figure 2.
+
+Bimodal heteroscedastic process. Figure 10, similar to Figure 2 but with a bimodal distribution for the output, is introduced in Section 5.1.
+
+The data generating process is as follows:
+
+$$
+X \sim \mathcal{U}(0.5, 2), \tag{61}
+$$
+
+$$
+Y \mid X = x \sim 0.5 \cdot \mathcal{N}(4, x I_{d}) + 0.5 \cdot \mathcal{N}(-4, I_{d} / x). \tag{62}
+$$
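+
+A matching sketch for sampling (61)-(62):
+
+```python
+import numpy as np
+
+def sample_bimodal(n, d=2, seed=None):
+    """Sample (X, Y) from the bimodal heteroscedastic process (61)-(62)."""
+    rng = np.random.default_rng(seed)
+    x = rng.uniform(0.5, 2.0, size=n)                   # (61)
+    mode = rng.random(n) < 0.5                          # equal mixture weights
+    mean = np.where(mode, 4.0, -4.0)[:, None]
+    std = np.sqrt(np.where(mode, x, 1.0 / x))[:, None]  # variance x vs 1/x
+    y = mean + std * rng.standard_normal((n, d))
+    return x, y
+```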
+
+
+Figure 10: Examples of prediction regions on a bivariate bimodal dataset, conditional on a unidimensional input.
+
+# E. Proofs
+
+# E.1. Distribution of the marginal coverage conditional on calibration data
+
+In contrast to M-CP, L-CP, and DR-CP, the methods C-HDR, PCP, HD-PCP, and C-PCP rely on a non-deterministic conformity score: for each calibration and test point, they require sampling $K$, $L$, $L$, and $L + K$ points, respectively.
+
+Let $\mathcal{D}_{\mathrm{cal}} = \{(X^{(j)},Y^{(j)})\}_{j\in [|\mathcal{D}_{\mathrm{cal}}|]}$ represent the calibration dataset and $(X,Y)$ be the test instance. Let $\mathcal{S}_{\mathrm{cal}} = \{\mathcal{S}_{\mathrm{cal}}^{(j)}\}_{j\in [|\mathcal{D}_{\mathrm{cal}}|]}$ represent samples from the calibration dataset where $\mathcal{S}_{\mathrm{cal}}^{(j)}$ is generated based on input $X^{(j)}$ and $\mathcal{S}_{\mathrm{test}}$ the samples generated based on input $X$ . Despite the added sampling uncertainty, these methods still provide a marginal coverage guarantee:
+
+$$
+\mathbb {P} _ {X, Y, \mathcal {S} _ {\text {t e s t}}, \mathcal {D} _ {\text {c a l}}, \mathcal {S} _ {\text {c a l}}} (Y \in \hat {R} (X)) \geq 1 - \alpha . \tag {63}
+$$
+
+Compared to (3), the probability is additionally taken over $\mathcal{S}_{\mathrm{cal}}$ and $\mathcal{S}_{\mathrm{test}}$. This result, specifically for PCP and HD-PCP, was demonstrated by Wang et al., 2023b.
+
+In Lemma 1, we further show that the marginal coverage conditional on the calibration dataset $\mathcal{D}_{\mathrm{cal}}$ and the samples $S_{\mathrm{cal}}$ follows a beta distribution, using standard arguments. Assuming no ties among the scores, this lemma applies to any conformity score $s$ .
+
+Lemma 1. Assuming no ties among the scores and i.i.d. inputs, outputs and samples, the distribution of the coverage, conditional on the calibration dataset and its samples, is given by:
+
+$$
+\mathbb {P} (Y \in \hat {R} (X) \mid \mathcal {D} _ {\mathrm {c a l}}, \mathcal {S} _ {\mathrm {c a l}}) \sim \operatorname {B e t a} \left(k _ {\alpha}, \left| \mathcal {D} _ {\mathrm {c a l}} \right| + 1 - k _ {\alpha}\right), \tag {64}
+$$
+
+where $k_{\alpha} = \lceil (1 - \alpha)(|\mathcal{D}_{\mathrm{cal}}| + 1)\rceil$. Moreover, $\mathbb{P}(Y\in \hat{R} (X)) = \frac{k_{\alpha}}{|\mathcal{D}_{\mathrm{cal}}| + 1}\geq 1 - \alpha$.
+
+Proof. For the methods C-HDR, PCP, HD-PCP, and C-PCP, the conformity score $s$ is non-deterministic due to sampling uncertainty. To clarify, we define a deterministic conformity score $\bar{s} : \mathcal{X} \times \mathcal{Y} \times \mathbb{S} \to \mathbb{R}$, where $\mathbb{S}$ represents the space of samples for a given method.
+
+For $j = 1, \ldots, |\mathcal{D}_{\mathrm{cal}}|$ , let $S_{j} = \bar{s}(X^{(j)}, Y^{(j)}, \mathcal{S}_{\mathrm{cal}}^{(j)})$ denote the conformity score on the calibration dataset, and let $S = \bar{s}(X, Y, \mathcal{S}_{\mathrm{test}})$ represent the conformity score for the test instance. Since $\bar{s}$ is deterministic and the tuples $(X^{(1)}, Y^{(1)}, \mathcal{S}_{\mathrm{cal}}^{(1)}), \ldots, (X^{(|\mathcal{D}_{\mathrm{cal}}|)}, Y^{(|\mathcal{D}_{\mathrm{cal}}|)}, \mathcal{S}_{\mathrm{cal}}^{(|\mathcal{D}_{\mathrm{cal}}|)}), (X, Y, \mathcal{S}_{\mathrm{test}})$ are i.i.d. random variables, $S_{1}, \ldots, S_{|\mathcal{D}_{\mathrm{cal}}|}, S$ are also i.i.d. random variables.
+
+Since $S_{1},\ldots ,S_{|\mathcal{D}_{\mathrm{cal}}|},S$ are identically distributed, they share the same CDF. Using the probability integral transform, $F_{S}(S)\sim U(0,1)$ . Thus, $F_{S}(S_{1}),\ldots ,F_{S}(S_{|\mathcal{D}_{\mathrm{cal}}|})$ correspond to uniform variates $U_{1},\ldots ,U_{|\mathcal{D}_{\mathrm{cal}}|}$ . Since there are no ties among the scores, $F_{S}$ is strictly increasing, and $F_{S}(S_{(j)}) = U_{(j)}$ for $j = 1,\dots ,|\mathcal{D}_{\mathrm{cal}}|$ , where $S_{(j)}$ and $U_{(j)}$ are the $j$ -th order statistics. Hence:
+
+$$
+\begin{aligned}
+\mathbb{P}(Y \in \hat{R}(X) \mid \mathcal{D}_{\mathrm{cal}}, \mathcal{S}_{\mathrm{cal}}) & = \mathbb{P}(S \leq S_{(k_{\alpha})} \mid S_{1}, \dots, S_{|\mathcal{D}_{\mathrm{cal}}|}) && (65) \\
+& = F_{S}\left(S_{(k_{\alpha})}\right) && (66) \\
+& = U_{(k_{\alpha})} && (67) \\
+& \sim \operatorname{Beta}\left(k_{\alpha}, \left|\mathcal{D}_{\mathrm{cal}}\right| + 1 - k_{\alpha}\right). && (68)
+\end{aligned}
+$$
+
+The final step results from the distribution of uniform order statistics. Taking the expectation of the Beta distribution gives:
+
+$$
+\mathbb {P} (Y \in \hat {R} (X)) = \mathbb {E} [ \mathbb {P} (Y \in \hat {R} (X) \mid \mathcal {D} _ {\mathrm {c a l}}, \mathcal {S} _ {\mathrm {c a l}}) ] = \frac {k _ {\alpha}}{\left| \mathcal {D} _ {\mathrm {c a l}} \right| + 1} \geq 1 - \alpha . \tag {69}
+$$
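+
+Lemma 1 is easy to check numerically: with continuous i.i.d. scores, the conditional coverage equals the $k_{\alpha}$-th uniform order statistic, as in (65)-(68). A small simulation sketch:
+
+```python
+import numpy as np
+
+def conditional_coverage_draws(n_cal, alpha, n_rep, seed=0):
+    """Draw realizations of P(Y in R(X) | D_cal, S_cal): with continuous
+    i.i.d. scores this is the k_alpha-th order statistic of n_cal uniforms."""
+    rng = np.random.default_rng(seed)
+    k = int(np.ceil((1 - alpha) * (n_cal + 1)))
+    u = np.sort(rng.random((n_rep, n_cal)), axis=1)
+    return u[:, k - 1]
+
+cov = conditional_coverage_draws(n_cal=99, alpha=0.2, n_rep=20000)
+```
+
+With $|\mathcal{D}_{\mathrm{cal}}| = 99$ and $\alpha = 0.2$, $k_{\alpha} = 80$, so the draws should follow $\operatorname{Beta}(80, 20)$, whose mean is $80/100 = 0.8$, in agreement with (69).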
+
+# E.2. Proofs of asymptotic conditional coverage
+
+# E.2.1. L-CP
+
+Proposition 2. Assuming $|D_{\mathrm{cal}}| \to \infty$ and $\hat{Q}(Z;X) \stackrel{\mathrm{d.}}{=} Y|X$ , L-CP achieves conditional coverage.
+
+Proof. We first show that the conditional coverage of L-CP equals the CDF of the random variable $d_{\mathcal{Z}}(Z)$ evaluated at $\hat{q}$, i.e., $F_{d_{\mathcal{Z}}(Z)}(\hat{q})$. Given $x \in \mathcal{X}$, we have:
+
+$$
+\mathbb {P} (Y \in \hat {R} _ {\mathrm {L - C P}} (X) \mid X = x) \tag {70}
+$$
+
+$$
+= \mathbb {P} (Y \in \{\hat {Q} (z; x): z \in R _ {\mathcal {Z}} (\hat {q}) \} \mid X = x) \tag {71}
+$$
+
+$$
+= \mathbb{P}\left(\hat{Q}^{-1}(Y; X) \in R_{\mathcal{Z}}(\hat{q}) \mid X = x\right) \quad (\text{invertibility of } \hat{Q}(\cdot\,; X)) \tag{72}
+$$
+
+$$
+= \mathbb {P} (Z \in R _ {\mathcal {Z}} (\hat {q})) \quad (\hat {Q} (Z; X) \stackrel {\mathrm {d .}} {=} Y | X) \tag {73}
+$$
+
+$$
+= \mathbb {P} \left(d _ {\mathcal {Z}} (Z) \leq \hat {q}\right) \tag {74}
+$$
+
+$$
+= F _ {d _ {\mathcal {Z}} (Z)} (\hat {q}). \tag {75}
+$$
+
+Marginalizing over $X$, we obtain that the marginal coverage is also equal to $F_{d_{\mathcal{Z}}(Z)}(\hat{q})$:
+
+$$
+\mathbb {P} (Y \in \hat {R} _ {\mathrm {L - C P}} (X)) \tag {76}
+$$
+
+$$
+= \mathbb {E} _ {X} \left[ \mathbb {P} \left(Y \in \hat {R} _ {\mathrm {L - C P}} (X) \mid X\right) \right] \tag {77}
+$$
+
+$$
+= \mathbb{E}_{X}\left[F_{d_{\mathcal{Z}}(Z)}(\hat{q})\right] \tag{78}
+$$
+
+$$
+= F_{d_{\mathcal{Z}}(Z)}(\hat{q}). \tag{79}
+$$
+
+In the limit of $|\mathcal{D}_{\mathrm{cal}}| \to \infty$ , thanks to the Glivenko-Cantelli theorem, $\mathbb{P}(Y \in \hat{R}_{\mathrm{L - CP}}(X)) = 1 - \alpha$ and the quantile $\hat{q}$ obtained by SCP is thus $F_{d_{\mathcal{Z}}(Z)}^{-1}(1 - \alpha)$ .
+
+Finally, we obtain that the conditional coverage is equal to $1 - \alpha$ :
+
+$$
+\mathbb {P} (Y \in \hat {R} _ {\mathrm {L - C P}} (X) \mid X = x) \tag {80}
+$$
+
+$$
+= F _ {d _ {\mathcal {Z}} (Z)} \left(F _ {d _ {\mathcal {Z}} (Z)} ^ {- 1} (1 - \alpha)\right) \tag {81}
+$$
+
+$$
+= 1 - \alpha . \tag {82}
+$$
+
+# E.2.2. C-HDR and C-PCP
+
+Lemma 2. Assuming $|D_{\mathrm{cal}}| \to \infty$ , any conformal method with conformity score $s_{\mathrm{CDF}}$ (12) achieves conditional coverage, independently from the conformity score $s_W$ of the base method. With the additional assumption that $K \to \infty$ and $\hat{f} = f$ , $s_{\mathrm{ECDF}}$ (13) achieves conditional coverage.
+
+Proof. Let $W = s_W(X, Y)$ and consider $x \in \mathcal{X}$. By the probability integral transform, conditionally on $X = x$, $s_{\mathrm{CDF}}(x, Y) = F_{W|X=x}(s_W(x, Y)) \sim \mathcal{U}(0,1)$.
+
+Marginalizing over $X$ , we obtain:
+
+$$
+\mathbb {P} (Y \in \hat {R} _ {\mathrm {C D F}} (X)) = \mathbb {P} \left(s _ {\mathrm {C D F}} (X, Y) \leq \hat {q}\right) \tag {83}
+$$
+
+$$
+= \mathbb {E} _ {X} \left[ \mathbb {P} \left(s _ {\mathrm {C D F}} (X, Y) \leq \hat {q} \mid X\right) \right] \tag {84}
+$$
+
+$$
+= \mathbb {E} _ {X} [ \mathbb {P} (U \leq \hat {q}) ] \tag {85}
+$$
+
+$$
+= \mathbb {E} _ {X} [ \hat {q} ] \tag {86}
+$$
+
+$$
+= \hat {q}, \tag {87}
+$$
+
+where $U\sim \mathcal{U}(0,1)$ . In the limit of $|\mathcal{D}_{\mathrm{cal}}|\to \infty$ , thanks to the Glivenko-Cantelli theorem, $\mathbb{P}(Y\in \hat{R}_{\mathrm{CDF}}(X)) = 1 - \alpha$ and the quantile $\hat{q}$ obtained by SCP is thus $1 - \alpha$ .
+
+Finally, we note that:
+
+$$
+\mathbb {P} (Y \in \hat {R} _ {\mathrm {C D F}} (X) \mid X = x) = \mathbb {P} \left(s _ {\mathrm {C D F}} (X, Y) \leq \hat {q} \mid X = x\right) \tag {88}
+$$
+
+$$
+\begin{aligned}
+& = \mathbb{P}(U \leq 1 - \alpha) && (89) \\
+& = 1 - \alpha. && (90)
+\end{aligned}
+$$
+
+Assuming $\hat{f} = f$ , observe that, for any $x \in \mathcal{X}$ and $y \in \mathcal{Y}$ , $s_{\mathrm{ECDF}}(x,y) \to s_{\mathrm{CDF}}(x,y)$ as $K \to \infty$ by the law of large numbers. Thus, under these conditions, any conformal method with conformity score $s_{\mathrm{ECDF}}$ achieves conditional coverage.
+
+Proposition 3. Assuming $|D_{\mathrm{cal}}| \to \infty$ and $K \to \infty$ , both C-HDR and C-PCP with the oracle base predictor $\hat{f} = f$ achieve conditional coverage.
+
+Proof. The proof is direct by Lemma 2 with $s_W(x,y) = s_{\mathrm{DR - CP}}(x,y)$ for C-HDR and $s_W(x,y) = s_{\mathrm{PCP}}(x,y)$ for C-PCP.
+
+# E.2.3. M-CP
+
+Consider M-CP with exact quantile estimates $\hat{l}_i(x) = Q_{Y_i}(\alpha_l\mid x)$ and $\hat{u}_i(x) = Q_{Y_i}(\alpha_u\mid x)$, where $Q_{Y_i}(\alpha \mid x)$ is the quantile function of $Y_{i}$ conditional on $X = x$, evaluated at $\alpha$. This section introduces two propositions in which M-CP requests two different nominal coverage levels $\alpha_{u} - \alpha_{l}$, namely $\sqrt[d]{1 - \alpha}$ and $1 - \alpha$. The propositions show that M-CP can achieve conditional coverage under two contrasting scenarios: independence or total dependence between the dimensions of the output.
+
+Proposition 4. Assuming $Y_{1},\ldots ,Y_{d}$ are conditionally independent given $X$ , M-CP achieves conditional coverage if $\alpha_{u} - \alpha_{l} = \sqrt[d]{1 - \alpha}$ .
+
+Proof. For any $x \in \mathcal{X}$ and $i \in [d]$ , we first establish that the $\sqrt[d]{1 - \alpha}$ th quantile of the distribution of $s_i(X, Y_i)$ given $X = x$ equals 0:
+
+$$
+\mathbb{P}\left(s_{i}(X, Y_{i}) \leq 0 \mid X = x\right) = \mathbb{P}\left(\max\left\{l_{i}(X) - Y_{i}, Y_{i} - u_{i}(X)\right\} \leq 0 \mid X = x\right) \tag{91}
+$$
+
+$$
+= \mathbb{P}\left(l_{i}(X) \leq Y_{i} \wedge Y_{i} \leq u_{i}(X) \mid X = x\right) \tag{92}
+$$
+
+$$
+= 1 - \mathbb{P}\left(l_{i}(X) > Y_{i} \vee Y_{i} > u_{i}(X) \mid X = x\right) \tag{93}
+$$
+
+$$
+= 1 - \mathbb{P}\left(l_{i}(X) > Y_{i} \mid X = x\right) - \mathbb{P}\left(Y_{i} > u_{i}(X) \mid X = x\right) \tag{94}
+$$
+
+$$
+= 1 - \alpha_ {l} - \left(1 - \alpha_ {u}\right) \tag {95}
+$$
+
+$$
+= \alpha_ {u} - \alpha_ {l} \tag {96}
+$$
+
+$$
+= \sqrt [ d ]{1 - \alpha}. \tag {97}
+$$
+
+Using (97), we show that the $1 - \alpha$th quantile of the distribution of $s_{\mathrm{M-CP}}(X, Y)$ given $X = x$ is 0:
+
+$$
+\mathbb {P} \left(s _ {\mathrm {M - C P}} (X, Y) \leq 0 \mid X = x\right) = \mathbb {P} \left(s _ {i} (X, Y _ {i}) \leq 0, \forall i \in [ d ] \mid X = x\right) \tag {98}
+$$
+
+$$
+= \mathbb {P} \left(s _ {1} \left(X, Y _ {1}\right) \leq 0 \wedge \dots \wedge s _ {d} \left(X, Y _ {d}\right) \leq 0 \mid X = x\right) \tag {99}
+$$
+
+$$
+= \mathbb {P} \left(s _ {1} (X, Y _ {1}) \leq 0 \mid X = x\right) \dots \mathbb {P} \left(s _ {d} (X, Y _ {d}) \leq 0 \mid X = x\right) \tag {100}
+$$
+
+$$
+= \sqrt [ d ]{1 - \alpha} ^ {d} \tag {101}
+$$
+
+$$
+= 1 - \alpha , \tag {102}
+$$
+
+where (100) is obtained by conditional independence of $Y_{1}, \ldots, Y_{d}$ given $X$. Marginalizing over $X$, we obtain that the $1 - \alpha$th quantile of $s_{\mathrm{M-CP}}(X, Y)$ is 0:
+
+$$
+\mathbb {P} \left(s _ {\mathrm {M - C P}} (X, Y) \leq 0\right) = \mathbb {E} _ {X} \left[ \mathbb {P} \left(s _ {\mathrm {M - C P}} (X, Y) \leq 0 \mid X\right) \right] \tag {103}
+$$
+
+$$
+= \mathbb {E} _ {X} [ 1 - \alpha ] \tag {104}
+$$
+
+$$
+= 1 - \alpha . \tag {105}
+$$
+
In the limit $|\mathcal{D}_{\mathrm{cal}}| \to \infty$, the Glivenko-Cantelli theorem gives $\mathbb{P}(Y \in \hat{R}_{\mathrm{M - CP}}(X)) = 1 - \alpha$, and the quantile $\hat{q}$ obtained by SCP is thus 0.
+
+Finally, using (102) and $\hat{q} = 0$ , we obtain that M-CP achieves conditional coverage:
+
+$$
+\mathbb {P} (Y \in \hat {R} _ {\mathrm {M - C P}} (X) \mid X = x) = \mathbb {P} \left(s _ {\mathrm {M - C P}} (X, Y) \leq \hat {q} \mid X = x\right) = 1 - \alpha . \tag {106}
+$$
+
Proposition 5. Assuming $Y_{1}\mid X\stackrel {\mathrm{a.s.}}{=}\ldots \stackrel {\mathrm{a.s.}}{=}Y_{d}\mid X$, M-CP achieves conditional coverage if $\alpha_{u} - \alpha_{l} = 1 - \alpha$.
+
+Proof. Let $x \in \mathcal{X}$ . Using (96), we first show that the $1 - \alpha$ th conditional quantile of the distribution of $s_i(X, Y_i)$ , for any $i \in [d]$ , is 0:
+
+$$
+\begin{array}{l} \mathbb {P} \left(s _ {i} (X, Y _ {i}) \leq 0 \mid X = x\right) = \alpha_ {u} - \alpha_ {l} (107) \\ = 1 - \alpha . (108) \\ \end{array}
+$$
+
+Using (108), we show that the $1 - \alpha$ th quantile of the distribution of $s(X, Y)$ given $X$ is 0:
+
+$$
+\begin{array}{l} \mathbb {P} (s (X, Y) \leq 0 \mid X = x) = \mathbb {P} \left(s _ {i} \left(X, Y _ {i}\right) \leq 0, \forall i \in [ d ] \mid X = x\right) (109) \\ = \mathbb {P} \left(s _ {1} \left(X, Y _ {1}\right) \leq 0 \wedge \dots \wedge s _ {d} \left(X, Y _ {d}\right) \leq 0 \mid X = x\right) (110) \\ = \mathbb {P} \left(s _ {1} \left(X, Y _ {1}\right) \leq 0 \mid X = x\right) (111) \\ = 1 - \alpha , (112) \\ \end{array}
+$$
+
where (111) is due to $Y_{1}\mid X \stackrel{\text{a.s.}}{=} \ldots \stackrel{\text{a.s.}}{=} Y_{d}\mid X$, which implies that, conditional on $X = x$, $l_{1}(X) = \dots = l_{d}(X)$ and $u_{1}(X) = \dots = u_{d}(X)$, and thus $s_{1}(X,Y_{1}) = \dots = s_{d}(X,Y_{d})$. Using (105), we obtain that $\hat{q} = 0$. Finally, using (112), we obtain that M-CP achieves conditional coverage:
+
+$$
+\mathbb {P} (Y \in \hat {R} (X) \mid X = x) = \mathbb {P} (s (X, Y) \leq 0 \mid X = x) = 1 - \alpha . \tag {113}
+$$
+
+
+
+# E.3. Connection between sample-based and density-based methods
+
+This section proves the connections between sample-based and density-based methods as introduced in Section 5.3. We start by restating a known lemma of conformal prediction.
+
+Lemma 3. Consider a conformal prediction method with conformity score $s$ . If $g: \mathbb{R} \to \mathbb{R}$ is a strictly increasing function, then the method with conformity score $g \circ s$ will produce the same prediction regions.
+
+Proof. For any $x \in \mathcal{X}$ , consider the prediction region created with $s$ as in Section 2.1:
+
+$$
+\hat {R} (x) = \left\{y \in \mathcal {Y}: s (x, y) \leq \text {Q u a n t i l e} \left(\left\{s _ {i} \right\} _ {i \in \left[ \left| \mathcal {D} _ {\text {c a l}} \right| \right]} \cup \{\infty \}; k _ {\alpha}\right) \right\}. \tag {114}
+$$
+
+Since $g$ is strictly increasing,
+
+$$
+\begin{array}{l} \hat {R} (x) = \left\{y \in \mathcal {Y}: g (s (x, y)) \leq g \left(\text {Q u a n t i l e} \left(\left\{s _ {i} \right\} _ {i \in \left[ \left| \mathcal {D} _ {\text {c a l}} \right| \right]} \cup \{\infty \}; k _ {\alpha}\right)\right) \right\} (115) \\ = \left\{y \in \mathcal {Y}: g (s (x, y)) \leq \text {Q u a n t i l e} \left(\left\{g \left(s _ {i}\right) \right\} _ {i \in \left[ \left| \mathcal {D} _ {\text {c a l}} \right| \right]} \cup \{\infty \}; k _ {\alpha}\right) \right\}. (116) \\ \end{array}
+$$
+
+Since (116) corresponds to the prediction region with conformity score $g \circ s$ , this shows that the two methods create the same regions.
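Lemma 3 can be checked numerically. The sketch below uses hypothetical uniform conformity scores, the strictly increasing map $g(s) = e^{3s} - 1$, and the common split-conformal convention $k_\alpha = \lceil (n+1)(1-\alpha) \rceil$ (an assumption about the quantile rule of Section 2.1):

```python
import math
import random

random.seed(1)

def region_mask(scores_cal, scores_test, alpha):
    """Membership of test points in the region: s(x, y) <= Quantile(scores + {inf}; k_alpha)."""
    n = len(scores_cal)
    k = math.ceil((n + 1) * (1 - alpha))  # assumed k_alpha convention
    pool = sorted(scores_cal) + [math.inf]
    threshold = pool[k - 1]
    return [s <= threshold for s in scores_test]

scores_cal = [random.random() for _ in range(500)]
scores_test = [random.random() for _ in range(100)]
g = lambda s: math.exp(3 * s) - 1  # any strictly increasing g

mask_s = region_mask(scores_cal, scores_test, alpha=0.1)
mask_gs = region_mask([g(s) for s in scores_cal], [g(s) for s in scores_test], alpha=0.1)
assert mask_s == mask_gs  # identical prediction regions
```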
+
+Proposition 1. PCP is equivalent to DR-CP with $\hat{f} = \hat{f}_{\mathrm{max}}$ . Similarly, HD-PCP is equivalent to DR-CP with $\hat{f} = \hat{f}_{\mathrm{max}}$ where only $\lfloor (1 - \alpha)L\rfloor$ samples with the highest density among $\{\tilde{Y}^{(l)}\}_{l\in [L]}$ are kept. Finally, C-PCP is equivalent to C-HDR with $\hat{f} = \hat{f}_{\mathrm{max}}$ .
+
+Proof. In the following proof, we note $a \uparrow b$ to signify that there exists a strictly increasing function $g$ such that $a = g(b)$ . Consider DR-CP with $\hat{f} = \hat{f}_{\max}$ . We have:
+
+$$
+s _ {\mathrm {D R - C P}} (x, y) = - \hat {f} _ {\max } (y \mid x) \tag {117}
+$$
+
+$$
+\begin{array}{l} \uparrow - \max _ {l \in [ L ]} f _ {\mathbb {S}} (y; \tilde {Y} ^ {(l)}) \quad \left(\hat {f} _ {\max } (y \mid x) = \max _ {l \in [ L ]} f _ {\mathbb {S}} (y; \tilde {Y} ^ {(l)}) / C\right) (118) \\ = \min _ {l \in [ L ]} - f _ {\mathbb {S}} (y; \tilde {Y} ^ {(l)}) (119) \\ \uparrow \min _ {l \in [ L ]} \| y - \tilde {Y} ^ {(l)} \| _ {2} \quad \left(f _ {\mathbb {S}} (y; \tilde {Y} ^ {(l)}) \text {h a s s p h e r i c a l l e v e l s e t s c e n t e r e d a t} \tilde {Y} ^ {(l)}\right) (120) \\ = s _ {\mathrm {P C P}} (x, y). (121) \\ \end{array}
+$$
+
+We obtain the equivalence between the two methods by Lemma 3. The proof for HD-PCP follows the same arguments.
+
+We now consider C-HDR with $\hat{f} = \hat{f}_{\mathrm{max}}$ . We have:
+
+$$
+s _ {\mathrm {C - H D R}} (x, y) = \frac {1}{K} \sum_ {k \in [ K ]} \mathbb {1} \left(\hat {f} _ {\max } \left(\hat {Y} ^ {(k)} \mid x\right) \geq \hat {f} _ {\max } (y \mid x)\right) \quad \text {w h e r e} \hat {Y} ^ {(k)} \sim \hat {f} _ {Y | X = x}, k \in [ K ]. \tag {122}
+$$
+
+Developing the inequality for $k \in [K]$ , we obtain:
+
+$$
+\hat {f} _ {\max } \left(\hat {Y} ^ {(k)} \mid x\right) \geq \hat {f} _ {\max } (y \mid x) \tag {123}
+$$
+
+$$
+\Longleftrightarrow \max _ {l \in [ L ]} f _ {\mathbb {S}} \left(\hat {Y} ^ {(k)}; \hat {Y} ^ {(l)}\right) \geq \max _ {l \in [ L ]} f _ {\mathbb {S}} (y; \hat {Y} ^ {(l)}) \quad \left(\hat {f} _ {\max } (y \mid x) = \max _ {l \in [ L ]} f _ {\mathbb {S}} (y; \tilde {Y} ^ {(l)}) / C\right) \tag {124}
+$$
+
+$$
+\Longleftrightarrow \min _ {l \in [ L ]} - f _ {\mathbb {S}} \left(\hat {Y} ^ {(k)}; \hat {Y} ^ {(l)}\right) \leq \min _ {l \in [ L ]} - f _ {\mathbb {S}} (y; \hat {Y} ^ {(l)}) \tag {125}
+$$
+
+$$
+\Longleftrightarrow \min _ {l \in [ L ]} \| \hat {Y} ^ {(k)} - \tilde {Y} ^ {(l)} \| _ {2} \leq \min _ {l \in [ L ]} \| y - \tilde {Y} ^ {(l)} \| _ {2}. \quad \left(f _ {\mathbb {S}} (y; \tilde {Y} ^ {(l)}) \text {h a s s p h e r i c a l l e v e l s s e t s c e n t e r e d a t} \tilde {Y} ^ {(l)}\right) \tag {127}
+$$
+
+Noting that (122) with (127) corresponds to the conformity score of C-PCP, we obtain the equivalence.
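A small numerical illustration of Proposition 1: with an isotropic Gaussian kernel (chosen here purely for illustration; any kernel with spherical level sets works), the score $-\hat{f}_{\max}$ orders candidate points exactly as the minimum-distance score of PCP does, which by Lemma 3 gives identical regions:

```python
import math
import random

random.seed(2)

# L generated samples playing the role of the Y-tilde^(l), and candidate points y.
samples = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(50)]
candidates = [(random.uniform(-3, 3), random.uniform(-3, 3)) for _ in range(200)]

def s_pcp(y):
    return min(math.dist(y, s) for s in samples)

def s_drcp_max(y, h=0.5):
    # -f_max with an isotropic Gaussian kernel (spherical level sets).
    return -max(math.exp(-math.dist(y, s) ** 2 / (2 * h ** 2)) for s in samples)

order_pcp = sorted(range(len(candidates)), key=lambda i: s_pcp(candidates[i]))
order_drcp = sorted(range(len(candidates)), key=lambda i: s_drcp_max(candidates[i]))
assert order_pcp == order_drcp  # identical rankings, hence identical conformal regions
```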
+
+
+
+# F. Experimental setup
+
This section describes our experimental setup in more detail. Computations were performed on two workstations, one with 2 A6000 GPUs and 64 CPU threads and one with 2 A5000 GPUs and 64 CPU threads, running for 48 hours.
+
+# F.1. Datasets
+
We consider a total of 13 datasets that have been used in previous studies. Since our focus is on multivariate prediction regions, we select only datasets with an output that is at least two-dimensional. Specifically, we include 6 datasets from Feldman et al., 2023, 4 datasets from Tsoumakas et al., 2011 (MULAN benchmark), 1 dataset from Wang et al., 2023b, 1 dataset from Del Barrio et al., 2024, and 1 dataset from Camehl et al., 2024.
+
+Each dataset is split into training, validation, calibration, and test sets with 2048 points reserved for calibration. The remaining data is split into $55\%$ for training, $15\%$ for validation and $30\%$ for testing. The preprocessing follows the setup described in Grinsztajn et al., 2022. Table 3 provides the detailed characteristics of each dataset.
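The split described above can be sketched as follows (`split_sizes` is a hypothetical helper, not from the paper's code; the rounding of the 55%/15%/30% fractions is an assumption):

```python
# Hypothetical helper reproducing the split sizes described above:
# 2048 calibration points, then 55% / 15% / 30% of the remainder.
def split_sizes(n_instances, n_cal=2048):
    rest = n_instances - n_cal
    n_train = int(0.55 * rest)
    n_val = int(0.15 * rest)
    n_test = rest - n_train - n_val  # remainder goes to test
    return {"train": n_train, "val": n_val, "cal": n_cal, "test": n_test}

sizes = split_sizes(21613)  # e.g. the 'house' dataset
print(sizes)
assert sum(sizes.values()) == 21613
```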
+
+Table 3: Characteristics of each dataset considered in this study.
+
| Source | Dataset | Nb instances | Nb features $p$ | Nb targets $d$ |
|---|---|---|---|---|
| Camehl | households | 7207 | 4 | 4 |
| Mulan | scm20d | 8966 | 60 | 16 |
| | rf1 | 9005 | 64 | 8 |
| | rf2 | 9005 | 64 | 8 |
| | scm1d | 9803 | 279 | 16 |
| Feldman | meps_21 | 15656 | 137 | 2 |
| | meps_19 | 15785 | 137 | 2 |
| | meps_20 | 17541 | 137 | 2 |
| | house | 21613 | 14 | 2 |
| | bio | 45730 | 8 | 2 |
| | blog_data | 50000 | 55 | 2 |
| Del Barrio | calcofi | 50000 | 1 | 2 |
| Wang | taxi | 50000 | 4 | 2 |
+
+# F.2. Base predictors
+
+We consider multiple base predictors and focus on $\mathrm{MQF}^2$ for our main experiments (Section 6).
+
+$\mathbf{MQF}^2$ . The Multivariate Quantile Function Forecaster (MQF $^2$ , Kan et al., 2022) is a normalizing flow that is directly compatible with most of the methods presented since it is invertible, has an explicit PDF, and can be sampled from. M-CP, CopulaCPTS and STDQR require small adaptations from the original methods, as discussed below. The quantile function $\hat{Q}$ and distribution function $\hat{Q}^{-1}$ of MQF $^2$ exhibit cyclical monotonicity, meaning they are the gradient of a convex function (Hallin et al., 2021).
+
+The main idea behind $\mathbf{MQF}^2$ is to interpret Convex Potential Flows (Huang et al., 2020) as multivariate (vector) quantile functions, in the sense that the representation property (128) and cyclical monotonicity property (129) are satisfied (Carlier et al., 2016):
+
+$$
+Y = \hat {Q} (Z; x) \quad \forall x \in \mathcal {X} \text {w h e r e} Z \sim \mathcal {U} (0, 1) ^ {d}, \tag {128}
+$$
+
+$$
+\left(\hat {Q} \left(z _ {1}; x\right) - \hat {Q} \left(z _ {2}; x\right)\right) ^ {T} \left(z _ {1} - z _ {2}\right) \geq 0 \quad \forall x \in \mathcal {X}, z _ {1}, z _ {2} \in \mathcal {Z}. \tag {129}
+$$
+
+When $d = 1$ , this reduces to the classical univariate quantile function. In practice, we follow Kan et al., 2022 and use a quantile vector that follows a normal distribution $Z \sim \mathcal{N}(0, I)$ , allowing better training.
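Property (129) can be illustrated numerically. A minimal sketch where $\hat{Q}(z; x) = Az$ is the gradient of the convex potential $\frac{1}{2}z^\top A z$ with $A$ positive semi-definite (an illustrative stand-in for a Convex Potential Flow):

```python
import random

random.seed(3)

# Q(z) = A z is the gradient of the convex potential 0.5 * z^T A z when A is
# positive semi-definite, so it must satisfy the monotonicity property (129).
B = [[random.gauss(0, 1) for _ in range(2)] for _ in range(2)]
A = [[sum(B[k][i] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]  # A = B^T B

def Q(z):
    return [sum(A[i][j] * z[j] for j in range(2)) for i in range(2)]

inner_products = []
for _ in range(1000):
    z1 = [random.gauss(0, 1) for _ in range(2)]
    z2 = [random.gauss(0, 1) for _ in range(2)]
    dq = [a - b for a, b in zip(Q(z1), Q(z2))]
    dz = [a - b for a, b in zip(z1, z2)]
    inner_products.append(sum(p * q for p, q in zip(dq, dz)))

print(min(inner_products))  # non-negative, up to floating-point rounding
```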
+
The underlying model of $\mathbf{MQF}^2$ is a partially input convex neural network (PICNN, Amos et al., 2017) with two hidden layers, each containing 30 units. Increasing the number of parameters did not significantly improve performance, which is partly due to the efficiency of Convex Potential Flows compared to other normalizing flows (Huang et al., 2020). While hyperparameter tuning for each dataset could enhance performance, it is not the primary focus of this paper.
+
+$\mathbf{MQF}^2$ is trained using maximum likelihood estimation with early stopping, with a patience of 15 epochs, where validation loss is measured every two epochs.
+
+Distributional Random Forests. Distributional Random Forest (Cevid et al., 2022) is a model built upon the Random Forest algorithm, which adaptively identifies the relevant training data points for any given test point. More specifically, given a test point $x \in \mathcal{X}$ , Distributional Random Forest outputs a weight $w(x^{(i)} \mid x)$ for each training point $x^{(i)}$ with $x^{(i)} \in \mathcal{D}_{\mathrm{train}}$ . This approach enables accurate estimation of any quantity of interest conditional on $x \in \mathcal{X}$ . In our experiments, we estimate the conditional distribution $Y|X$ as a Gaussian mixture, with each component centered on a training point and weighted by the Distributional Random Forest.
+
The PDF at $y\in \mathcal{Y}$ given $x\in \mathcal{X}$ is expressed as:
+
+$$
+\hat {f} (y \mid x) = \sum_ {i = 1} ^ {| \mathcal {D} _ {\mathrm {t r a i n}} |} w (x ^ {(i)} \mid x) \cdot \mathcal {N} (y \mid y ^ {(i)}, \sigma I _ {d}),
+$$
+
where $\sigma$ is tuned by minimizing the NLL via grid search. For the Distributional Random Forest, the minimum node size is set to 15, the forest consists of 2000 trees, and the splitting criterion is the maximum mean discrepancy (MMD).
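The mixture density above can be sketched as follows; the training targets and forest weights below are illustrative toy values, not outputs of an actual forest:

```python
import math

def gaussian_pdf(y, mean, sigma):
    """Isotropic multivariate normal density N(y | mean, sigma^2 I)."""
    d = len(y)
    sq = sum((a - b) ** 2 for a, b in zip(y, mean))
    return math.exp(-sq / (2 * sigma ** 2)) / ((2 * math.pi) ** (d / 2) * sigma ** d)

def drf_density(y, train_targets, weights, sigma):
    """Mixture density with one component per training point, weighted by the forest."""
    return sum(w * gaussian_pdf(y, yi, sigma) for w, yi in zip(weights, train_targets))

# Toy example with three training targets and illustrative forest weights w(x^(i) | x).
train_targets = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.5)]
weights = [0.2, 0.5, 0.3]  # sum to 1
value = drf_density((1.0, 0.8), train_targets, weights, sigma=0.5)
print(value)  # a positive density value
```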
+
+Since this method does not operate in a latent space, we do not consider L-CP in combination with this base predictor. CD diagrams for this predictor are presented in Appendix G.2.
+
+Multivariate Gaussian Mixture Model parameterized by a hypernetwork. As another base predictor, we consider a multivariate Gaussian Mixture Model parameterized by a hypernetwork. The hypernetwork is a multilayer perceptron (MLP) that outputs the parameters of a mixture of $M$ multivariate Gaussian distributions. Given $x \in \mathcal{X}$ , for each mixture component $m \in [M]$ , the hypernetwork outputs the logit $z_{m}(x)$ (for the categorical distribution over the mixture components), the mean $\mu_{m}(x)$ (component location), and the lower triangular Cholesky factor $L_{m}(x)$ (representing the scale of the covariance matrix).
+
+The mixture weights $\pi_m(x)$ are obtained by applying the softmax function to the logits $z_{m}(x)$ , ensuring they sum to 1. The covariance matrices $\Sigma_{m}(x)$ for each component are constructed by taking the product $L_{m}(x)L_{m}(x)^{\top}$ , guaranteeing that they are positive semi-definite.
+
The PDF evaluated at $y \in \mathcal{Y}$ conditional on $x \in \mathcal{X}$ is given by:
+
+$$
+\hat {f} (y \mid x) = \sum_ {m = 1} ^ {M} \pi_ {m} (x) \cdot \mathcal {N} (y \mid \mu_ {m} (x), \Sigma_ {m} (x)).
+$$
+
+The model is trained using maximum likelihood estimation with $M = 10$ .
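The parameterization above can be sketched as follows, with hypothetical raw hypernetwork outputs for a single input $x$; the sketch checks that the softmax weights sum to 1 and that $\Sigma_m = L_m L_m^\top$ is positive semi-definite:

```python
import math
import random

random.seed(4)
M, d = 10, 3

# Hypothetical raw hypernetwork outputs for one input x: logits and Cholesky factors.
logits = [random.gauss(0, 1) for _ in range(M)]
chols = [[[random.gauss(0, 1) if j <= i else 0.0 for j in range(d)] for i in range(d)]
         for _ in range(M)]

# Mixture weights via softmax (with the usual max-shift for numerical stability).
mx = max(logits)
exps = [math.exp(z - mx) for z in logits]
weights = [e / sum(exps) for e in exps]

# Sigma_m = L_m L_m^T is positive semi-definite: v^T Sigma v = ||L^T v||^2 >= 0.
def quad_form(L, v):
    Lt_v = [sum(L[i][j] * v[i] for i in range(d)) for j in range(d)]
    return sum(c * c for c in Lt_v)

quads = [quad_form(L, [random.gauss(0, 1) for _ in range(d)]) for L in chols]
print(abs(sum(weights) - 1), min(quads))
```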
+
+Similarly to Distributional Random Forests, this method does not operate in a latent space, and thus we do not consider L-CP. CD diagrams for this predictor are presented in Appendix G.3.
+
+# F.3. Adaptation of conformal methods into a common framework.
+
+To ensure a fair comparison among conformal methods, we apply the calibration step using the same base predictors. Only M-CP, CopulaCPTS, and STDQR require slight modifications from their original formulations.
+
For M-CP and CopulaCPTS, direct estimation of marginal distributions for each output $Y_{i}$ , $i \in [d]$ is infeasible with MQF $^2$ . Instead, we estimate the lower and upper quantiles by first sampling $\{\hat{Y}^{(l)}\}_{l \in [L]}$ from $\hat{f}_{Y|X = x}$ given $x \in \mathcal{X}$ , and then computing the empirical quantiles $\hat{Y}_i^{\left(\lfloor L\frac{\alpha}{2}\rfloor\right)}$ and $\hat{Y}_i^{\left(\lfloor L(1 - \frac{\alpha}{2})\rfloor\right)}$ . Sampling time is not accounted for in the reported computation times for these methods. While a more computationally efficient base predictor could be used, this approach ensures a direct comparison with other conformal methods by maintaining consistency in the base predictor.
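The empirical quantile estimation above can be sketched as follows; a standard normal stands in for the sampled output dimension (an illustrative assumption), and the order statistics are taken as 1-indexed:

```python
import random

random.seed(5)
alpha, L = 0.2, 1000

# Samples from f_hat(. | x) for one output dimension i (standard normal for illustration).
samples = sorted(random.gauss(0, 1) for _ in range(L))

# Empirical lower and upper quantiles: the floor(L*alpha/2)-th and
# floor(L*(1 - alpha/2))-th order statistics (1-indexed).
lo = samples[int(L * alpha / 2) - 1]
hi = samples[int(L * (1 - alpha / 2)) - 1]
print(lo, hi)  # roughly the 10% and 90% normal quantiles, about -1.28 and 1.28
```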
+
For STDQR, we modify the original method by replacing the conditional variational autoencoder (CVAE) with a normalizing flow. Following recommendations for future work from Feldman et al., 2023, we exploit the property that the output is normally distributed in the latent space, which allows for an exact inverse transformation and eliminates a potential source of noise. To construct a region $R_{\mathcal{Z}}$ with coverage $1 - \alpha$ in the latent space, we select the $1 - \alpha$ proportion of samples closest to the origin, ensuring correct coverage without the need for directional quantile regression. The calibration procedure remains unchanged.
+
+# F.4. Metrics
+
+Marginal coverage. Marginal coverage is measured using
+
+$$
+\mathbf {M C} = \frac {1}{| \mathcal {D} _ {\text {t e s t}} |} \sum_ {(x, y) \in \mathcal {D} _ {\text {t e s t}}} \mathbb {1} (y \in \hat {R} (x)).
+$$
+
+Region size. We report the mean region size
+
+$$
+\text {M e a n S i z e} = \frac {1}{| \mathcal {D} _ {\text {t e s t}} |} \sum_ {(x, y) \in \mathcal {D} _ {\text {t e s t}}} | \hat {R} (x) |.
+$$
+
+To avoid large regions disproportionately affecting the result, we also report the median of the region sizes
+
+$$
+\text {M e d i a n S i z e} = \text {Q u a n t i l e} (\{\left| \hat {R} (x) \right| \} _ {(x, y) \in \mathcal {D} _ {\text {t e s t}}}; 0. 5)
+$$
+
+Computing the size of the region is challenging in high dimensions. Hence, we propose an unbiased estimator of the region size using importance sampling:
+
+$$
+| \hat {R} (x) | = \int_ {\mathcal {Y}} \mathbb {1} (y \in \hat {R} (x)) d y = \mathbb {E} _ {\hat {Y} \sim \hat {f} (x)} \left[ \frac {\mathbb {1} (\hat {Y} \in \hat {R} (x))}{\hat {f} (\hat {Y} \mid x)} \right] \approx \frac {1}{K} \sum_ {k = 1} ^ {K} \frac {\mathbb {1} (\hat {Y} ^ {(k)} \in \hat {R} (x))}{\hat {f} (\hat {Y} ^ {(k)} \mid x)}, \tag {130}
+$$
+
where $\hat{Y}^{(k)}\sim \hat{f}_{Y|X = x}$, $k\in [K]$. This estimator is compatible with all base predictors in Appendix F.2 since it is both possible to sample from their predictive distribution and to evaluate the PDF. In Appendix F.7, we discuss the efficiency of this estimator.
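A one-dimensional sanity check of the estimator in (130), under the illustrative assumption that $\hat{f}$ is a standard normal density and $\hat{R}(x) = [-1, 1]$, whose true size is 2:

```python
import random
from statistics import NormalDist

random.seed(6)
K = 100_000
nd = NormalDist()

# Toy check of (130): predictive density N(0, 1), region R = [-1, 1], true size 2.
a, b = -1.0, 1.0
draws = [random.gauss(0, 1) for _ in range(K)]
size_hat = sum(1 / nd.pdf(y) for y in draws if a <= y <= b) / K
print(size_hat)  # close to b - a = 2
```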
+
Conditional coverage. To ensure a robust evaluation of conditional coverage, we consider three different conditional coverage metrics, detailed in Appendix F.6. The Worst Slab Coverage (WSC, Cauchois et al., 2021) groups inputs into "slabs" and evaluates the worst coverage obtained. The coverage error conditional on $X$ (CEC-X) partitions the input space $\mathcal{X}$ and evaluates coverage on each subset. The coverage error conditional on $V = \hat{f}(\hat{Y} \mid X)$, where $\hat{Y} \sim \hat{f}_{Y|X}$ (CEC-V, Izbicki et al., 2022; Dheur et al., 2024), creates a partition based on the distribution of $V$, which is more robust to high-dimensional inputs.
+
Computing time. We report the total time required for calibration and for evaluating marginal coverage on the test set. Specifically, this requires evaluating conformity scores on $\mathcal{D}_{\mathrm{cal}}$, followed by evaluating conformity scores on $\mathcal{D}_{\mathrm{test}}$.
+
+# F.5. Multi-Model, Multi-Dataset Comparison
+
+In order to determine whether there are significant differences in model performance, we first apply the Friedman test (Friedman, 1940). Following the recommendations of Benavoli et al. (2016), we then conduct a pairwise post-hoc analysis using the Wilcoxon signed-rank test (Wilcoxon, 1945), coupled with Holm's alpha correction (Holm, 1979) to adjust for multiple comparisons.
+
+The results are visualized using critical difference (CD) diagrams (Demšar, 2006). In these diagrams, models are ranked, with a lower rank (positioned further to the right) indicating better performance. A thick horizontal line connects models whose performances are not statistically different at the 0.05 significance level.
+
+For MC and WSC, the CD diagrams report $|\mathrm{MC} - (1 - \alpha)|$ and $|\mathrm{WSC} - (1 - \alpha)|$ , both of which should be minimized.
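Holm's correction, as used in the post-hoc analysis above, can be sketched as follows; the p-values are illustrative, not taken from our experiments:

```python
def holm_correction(p_values, alpha=0.05):
    """Holm's step-down procedure: reject hypotheses while p_(k) <= alpha / (m - k).
    Returns a rejection decision for each input p-value."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for k, i in enumerate(order):
        if p_values[i] <= alpha / (m - k):
            reject[i] = True
        else:
            break  # once one hypothesis is retained, all larger p-values are retained
    return reject

# Example: illustrative pairwise post-hoc p-values.
print(holm_correction([0.001, 0.04, 0.03, 0.20]))  # [True, False, False, False]
```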
+
+# F.6. Metrics of Conditional Coverage
+
+Worst Slab Coverage. Introduced in Cauchois et al., 2021, the Worst Slab Coverage (WSC) metric quantifies the minimal coverage over all possible slabs in $\mathbb{R}^d$ , where each slab contains at least a fraction $\delta$ of the total mass, with $0 < \delta \leq 1$ . For a given vector $v \in \mathbb{R}^d$ , the WSC associated with $v$ , denoted as $\mathrm{WSC}_v$ , is defined by:
+
+$$
+\operatorname {W S C} _ {v} = \inf _ {a < b} \left\{\hat {\mathbb {P}} _ {\mathcal {D} _ {\text {t e s t}}} \left(y _ {i} \in \hat {R} (x _ {i}) \mid a \leq v ^ {\top} x _ {i} \leq b\right) \text {s . t .} \hat {\mathbb {P}} _ {\mathcal {D} _ {\text {t e s t}}} (a \leq v ^ {\top} x _ {i} \leq b) \geq \delta \right\}, \tag {131}
+$$
+
+where $a, b \in \mathbb{R}$ . This metric assesses conditional coverage by focusing on inputs $x_{i}$ that lie within a slab defined by $v$ , using the inner product $v^{\top} x_{i}$ to measure similarity.
+
+To estimate the worst-case slab, we follow the method from Cauchois et al., 2021, uniformly sampling 1,000 vectors $v_{j}$ from the unit sphere $\mathbb{S}^{d - 1}$ and calculating:
+
+$$
+\mathrm {W S C} = \min _ {v _ {j} \in \mathbb {S} ^ {d - 1}} \mathrm {W S C} _ {v _ {j}}. \tag {132}
+$$
+
+To mitigate overfitting on the test dataset, we partition the test set into two subsets, $\mathcal{D}_{\mathrm{test}} = \mathcal{D}_{\mathrm{test}}^{(1)} \cup \mathcal{D}_{\mathrm{test}}^{(2)}$ , as in Romano et al., 2020; Sesia and Romano, 2021. We identify the worst combination of $a$ , $b$ , and $v$ on $\mathcal{D}_{\mathrm{test}}^{(1)}$ by minimizing the WSC metric with $\delta = 0.2$ , and then evaluate conditional coverage on the separate subset $\mathcal{D}_{\mathrm{test}}^{(2)}$ .
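For a fixed direction $v$, $\mathrm{WSC}_v$ can be computed by scanning contiguous windows of the sorted projections $v^\top x_i$, since a slab corresponds to a contiguous range of projections. A brute-force sketch with synthetic coverage indicators (around 90% covered, an illustrative assumption):

```python
import random

random.seed(7)
n, delta = 500, 0.2

# Sorted projections v^T x_i with synthetic per-point coverage indicators.
proj = sorted((random.gauss(0, 1), random.random() < 0.9) for _ in range(n))
covered = [c for _, c in proj]
prefix = [0]
for c in covered:
    prefix.append(prefix[-1] + c)

# Worst empirical coverage over windows [i, j) holding at least a delta fraction of points.
min_count = int(delta * n)
wsc_v = min(
    (prefix[j] - prefix[i]) / (j - i)
    for i in range(n)
    for j in range(i + min_count, n + 1)
)
print(wsc_v)  # at most the overall coverage, which is itself one of the windows
```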
+
CEC-X. CEC-X approximates conditional coverage by partitioning the input space $\mathcal{X} \subseteq \mathbb{R}^p$. We apply the $k$-means++ clustering algorithm to the inputs $X^{(i)}$ in the validation dataset $\mathcal{D}_{\mathrm{val}}$, creating a partition $\mathcal{A} = A_1 \cup \dots \cup A_J$ of $\mathcal{X}$. The Coverage Error Conditional to $X$ is defined as:
+
+$$
+\operatorname {C E C} - \mathrm {X} = \frac {1}{| \mathcal {D} _ {\text {t e s t}} |} \sum_ {i = 1} ^ {| \mathcal {D} _ {\text {t e s t}} |} \sum_ {j = 1} ^ {J} \left(\hat {\mathbb {P}} _ {\mathcal {D} _ {\text {t e s t}}} \left(y ^ {(i)} \in \hat {R} \left(x ^ {(i)}\right) \mid x ^ {(i)} \in A _ {j}\right) - (1 - \alpha)\right) ^ {2}. \tag {133}
+$$
+
+CEC-V. CEC-V is similar to CEC-X, but the conditioning is on the distribution of $\log V = \log \hat{f} (\hat{Y}\mid X)$ , where $\hat{Y}\sim \hat{f}_{Y|X}$ . Unlike CEC-X, CEC-V is more robust to high-dimensional inputs. This approach originates from the CD-split $^+$ method (Izbicki et al., 2022) and has been adapted to multivariate outputs in Dheur et al., 2024.
+
+In practice, given an input $x$ , a new feature $v_{x}$ is created. First, samples $v_{i}$ from $V \mid X = x$ are generated by sampling $y_{1},\ldots ,y_{m}\sim \hat{f}_{Y|X = x}$ and evaluating $v_{i} = \hat{f} (y_{i}\mid x)$ . The resulting vector $v_{x} = (v_{(1)},\dots,v_{(m)})$ consists of the order statistics $v_{(i)}$ from $v_{1},\ldots ,v_{m}$ .
+
+
+Figure 11: Panels 1 to 4: Trajectories of the log volume estimator with increasing $K$ compared to the true log volume (dashed line) for different output dimensions $d$ . Panel 5: Log volume estimator with $K = 100$ compared to the true log volume (dashed line).
+
+
+
+
+
+
+
+
+
+The $k$ -means++ clustering algorithm is applied on the vectors $\log v_{X^{(i)}}$ in the validation dataset $\mathcal{D}_{\mathrm{val}}$ , and a partition $\mathcal{A}_V = A_1 \cup \dots \cup A_J$ over $\mathbb{R}^m$ is obtained. The Coverage Error Conditional to the distribution of $V$ is then computed according to (133), using the partition $\mathcal{A}_V$ .
+
+Dheur et al., 2024 notes that the distance function corresponding to this partitioning approach is the 2-Wasserstein distance with respect to the distribution of $V$ .
+
+# F.7. Estimator for the region size
+
+In this section, we discuss the efficiency of the region size estimator introduced in Appendix F.4. This estimator is based on a density estimator $\hat{f}_{Y|X = x}$ and a sample $\hat{Y}^{(k)}, k \in [K]$ , drawn i.i.d. from the conditional distribution $Y \mid X = x$ for any $x \in \mathcal{X}$ . Specifically, the estimator is given by:
+
+$$
+\hat {V} (x) = \frac {1}{K} \sum_ {k = 1} ^ {K} \frac {\mathbb {1} (\hat {Y} ^ {(k)} \in \hat {R} (x))}{\hat {f} (\hat {Y} ^ {(k)} \mid x)}.
+$$
+
+While the estimator is unbiased, i.e., $\mathbb{E}[\hat{V}(x)] = |\hat{R}(x)|$ , we want to study its variance. Let $I = \mathbb{1}(\hat{Y} \in \hat{R}(x))$ represent the indicator that a sample $\hat{Y}$ lies within the prediction region $\hat{R}(x)$ , and let $\rho = \mathbb{P}(\hat{Y} \in \hat{R}(x))$ denote the coverage probability obtained from the samples based on our density estimator. Using the law of total variance, we obtain the following expression for the variance of $\hat{V}(x)$ :
+
+$$
\begin{aligned} \mathbb{V}\left[\hat{V}(x)\right] &= \frac{1}{K} \mathbb{V}\left[\frac{I}{\hat{f}(\hat{Y} \mid x)}\right] \\ &= \frac{1}{K}\left(\mathbb{E}\left[\mathbb{V}\left[\frac{I}{\hat{f}(\hat{Y} \mid x)} \,\Bigg|\, I\right]\right] + \mathbb{V}\left[\mathbb{E}\left[\frac{I}{\hat{f}(\hat{Y} \mid x)} \,\Bigg|\, I\right]\right]\right) \\ &= \frac{1}{K}\left(\rho\, \mathbb{V}\left[\frac{1}{\hat{f}(\hat{Y} \mid x)}\right] + \rho(1 - \rho)\, \mathbb{E}\left[\frac{1}{\hat{f}(\hat{Y} \mid x)}\right]^2\right). \end{aligned}
+$$
+
Assuming that the density estimate corresponds to the true density, i.e., $\hat{f}_{Y|X = x} = f_{Y|X = x}$, and that $\hat{R}$ achieves conditional coverage, then $\rho = 1 - \alpha$, and we obtain:
+
+$$
+\mathbb {V} \left[ \hat {V} (x) \right] = \frac {1}{K} \left((1 - \alpha) \mathbb {V} \left[ \frac {1}{f _ {Y | x} (Y \mid x)} \right] + \alpha (1 - \alpha) \mathbb {E} \left[ \frac {1}{f _ {Y | x} (Y \mid x)} \right] ^ {2}\right).
+$$
+
+This indicates that the variance of our estimator only depends on the variance and expectation of the random variable $\frac{1}{f(Y|x)}$ . In this case, the variance does not directly depend on the output dimension $d$ .
+
+Figure 11 shows how the estimator behaves in a scenario with a specific density estimator and prediction region with varying output dimension $d$ and an $80\%$ coverage level. Since there is no dependence on $X$ , we abbreviate the notation as follows:
+
$\hat{R} = \hat{R}(x)$, $\hat{f}(y) = \hat{f}(y \mid x)$, and $\hat{V} = \hat{V}(x)$ for any $x \in \mathcal{X}$. The density estimator is a standard normal distribution $\hat{f}(y) = \mathcal{N}(y; 0, I_d)$ and the prediction region is a ball $\hat{R} = \left\{ y \in \mathcal{Y} : \|y\|_2^2 \leq F_{\chi_d^2}^{-1}(1 - \alpha) \right\}$, where $\chi_d^2$ is the chi-squared distribution with $d$ degrees of freedom and $F_{\chi_d^2}^{-1}$ is its quantile function. Since $\|\hat{Y}\|_2^2 \sim \chi_d^2$ when $\hat{Y} \sim \mathcal{N}(0, I_d)$, it follows that $\mathbb{P}_{\hat{Y} \sim \hat{f}(\cdot)}(\hat{Y} \in \hat{R}) = 1 - \alpha$. In this case, the volume $V$ of $\hat{R}$ can be computed exactly.
+
+Each of the first four panels in Figure 11 shows five trajectories for $\log \hat{V}$ as $K$ increases from 1 to 100. The true volume, $\log V$ , of the prediction region is indicated by a dashed line. We observe that the estimator converges within a reasonable range of the true volume for varying output dimensions $d$ . The last panel illustrates the value of $\log \hat{V}$ as a function of $d$ , with $\log V$ again marked by a dashed line. From this, we observe that the estimator remains close to the true volume across different output dimensions $d$ .
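The $d = 2$ case of this construction is easy to check numerically, since $\chi_2^2$ is an exponential distribution with quantile function $F^{-1}(p) = -2\log(1 - p)$; a minimal sketch of the coverage and of the volume estimator:

```python
import math
import random

random.seed(8)
alpha, K = 0.2, 100_000

# For d = 2, ||Y||^2 with Y ~ N(0, I_2) is chi-squared with 2 dof, i.e. Exp(1/2),
# so the (1 - alpha) quantile is r2 = -2 * log(alpha) and the true volume is pi * r2.
r2 = -2 * math.log(alpha)
true_log_vol = math.log(math.pi * r2)

def pdf(y1, y2):  # standard bivariate normal density
    return math.exp(-(y1 ** 2 + y2 ** 2) / 2) / (2 * math.pi)

draws = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(K)]
inside = [y for y in draws if y[0] ** 2 + y[1] ** 2 <= r2]
coverage = len(inside) / K                      # close to 1 - alpha = 0.8
vol_hat = sum(1 / pdf(*y) for y in inside) / K  # close to pi * r2, about 10.1
print(coverage, math.log(vol_hat), true_log_vol)
```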
+
+# G. Additional results
+
+This section presents additional results for $\mathbf{MQF}^2$ (Appendix G.1), Distributional Random Forests (Appendix G.2) and the Multivariate Gaussian Mixture Model (Appendix G.3). The experimental setup is described in Appendix F.
+
# G.1. $\mathrm{MQF}^2$
+
Figure 12 presents the marginal coverage and median region size across datasets of increasing size for $\mathbf{MQF}^2$. In Panel 1, all methods except CopulaCPTS attain precise marginal coverage. This is expected since these methods follow the SCP algorithm (Section 2.1), and their marginal coverage conditional on the calibration dataset follows a Beta distribution whose parameters only depend on the size of the calibration dataset (Appendix E.1). While CopulaCPTS attains marginal coverage, the larger variance in its marginal coverage arises because it does not follow the SCP algorithm.
+
In Panel 2, the median region size is normalized between 0 and 1 for each dataset in order to facilitate comparison. We observe that C-HDR often obtains the smallest median region size. The median region size of the other methods varies considerably across datasets and is better visualized in a CD diagram (see Figure 13).
+
+
+Figure 12: Marginal coverage and median region size with the base predictor $\mathbf{MQF}^2$ across datasets sorted by size.
+
Figure 13 presents critical difference diagrams for three conditional coverage metrics (CEC-X, CEC-Z and WSC), the mean and median region sizes, and the total calibration and test time. Results are consistent with those in the main text.
+
| Dataset | M-CP | CopulaCPTS | DR-CP | C-HDR | PCP | HD-PCP | STDQR | C-PCP | L-CP |
|---|---|---|---|---|---|---|---|---|---|
| households | 14.2 ± 0.48 | 12.3 ± 0.87 | 13.2 ± 0.29 | 10.6 ± 0.33 | 20.5 ± 0.38 | 15.6 ± 0.39 | 17.8 ± 0.41 | 15.5 ± 0.74 | 18.6 ± 0.80 |
| scm20d | 67.6 ± 8.5 | 1.12e+02 ± 2.1e+01 | 2.33e+02 ± 2.2e+01 | 42.0 ± 7.9 | 1.05e+02 ± 1.1e+01 | 94.4 ± 9.4 | 99.4 ± 1.0e+01 | 26.0 ± 3.1 | 72.0 ± 1.0e+01 |
| rf2 | 0.00547 ± 0.0027 | 0.00555 ± 0.0033 | 0.00215 ± 0.0010 | 0.000690 ± 0.00032 | 0.00700 ± 0.0036 | 0.00617 ± 0.0030 | 0.00624 ± 0.0031 | 0.00262 ± 0.0012 | 0.00104 ± 0.00048 |
| rf1 | 0.00547 ± 0.0027 | 0.00555 ± 0.0033 | 0.00215 ± 0.0010 | 0.000690 ± 0.00032 | 0.00700 ± 0.0036 | 0.00617 ± 0.0030 | 0.00624 ± 0.003 | 0.00262 ± 0.0012 | 0.00104 ± 0.00048 |
| scm1d | 0.528 ± 0.046 | 0.323 ± 0.050 | 0.867 ± 0.078 | 0.239 ± 0.026 | 0.698 ± 0.065 | 0.684 ± 0.062 | 0.671 ± 0.069 | 0.216 ± 0.024 | 0.197 ± 0.020 |
| meps_21 | 0.185 ± 0.013 | 0.171 ± 0.014 | 0.227 ± 0.013 | 0.132 ± 0.024 | 0.359 ± 0.021 | 0.246 ± 0.015 | 0.283 ± 0.015 | 0.220 ± 0.021 | 0.244 ± 0.052 |
| meps_19 | 0.214 ± 0.022 | 0.595 ± 0.42 | 0.175 ± 0.011 | 0.119 ± 0.019 | 0.396 ± 0.059 | 0.266 ± 0.033 | 0.307 ± 0.043 | 0.238 ± 0.026 | 0.232 ± 0.043 |
| meps_20 | 0.371 ± 0.061 | 0.362 ± 0.059 | 0.223 ± 0.020 | 0.114 ± 0.012 | 0.535 ± 0.050 | 0.436 ± 0.066 | 0.472 ± 0.052 | 0.341 ± 0.039 | 0.280 ± 0.028 |
| house | 1.17 ± 0.023 | 1.22 ± 0.043 | 0.664 ± 0.021 | 0.651 ± 0.016 | 0.882 ± 0.023 | 0.680 ± 0.018 | 0.799 ± 0.023 | 0.858 ± 0.018 | 1.19 ± 0.017 |
| bio | 0.303 ± 0.0066 | 0.296 ± 0.0092 | 0.257 ± 0.0067 | 0.218 ± 0.0053 | 0.343 ± 0.0076 | 0.259 ± 0.0065 | 0.269 ± 0.0067 | 0.302 ± 0.0074 | 0.267 ± 0.0061 |
| blog_data | 0.170 ± 0.039 | 0.0948 ± 0.015 | 0.0374 ± 0.0056 | 0.0155 ± 0.0031 | 0.141 ± 0.023 | 0.125 ± 0.023 | 0.163 ± 0.036 | 0.106 ± 0.021 | 0.0676 ± 0.017 |
| calcofi | 2.13 ± 0.024 | 2.38 ± 0.12 | 1.67 ± 0.022 | 1.99 ± 0.026 | 2.33 ± 0.029 | 1.89 ± 0.029 | 1.97 ± 0.021 | 2.81 ± 0.042 | 2.70 ± 0.024 |
| taxi | 4.26 ± 0.068 | 4.72 ± 0.11 | 2.62 ± 0.029 | 2.62 ± 0.033 | 4.03 ± 0.040 | 3.18 ± 0.030 | 3.63 ± 0.058 | 4.02 ± 0.064 | 4.94 ± 0.12 |
+
+Table 4: Median region size with the base predictor ${\mathrm{{MQF}}}^{2}$ .
+
| Dataset | M-CP | CopulaCPTS | DR-CP | C-HDR | PCP | HD-PCP | STDQR | C-PCP | L-CP |
|---|---|---|---|---|---|---|---|---|---|
| households | 36.9 ± 0.86 | 35.1 ± 1.4 | 15.7 ± 0.63 | 40.1 ± 1.2 | 33.8 ± 2.1 | 30.1 ± 2.1 | 28.8 ± 2.1 | 62.6 ± 2.5 | 50.6 ± 1.4 |
| scm20d | 7.03e+06 ± 2.5e+06 | 3.82e+07 ± 1.6e+07 | 6.40e+03 ± 1.9e+03 | 5.30e+09 ± 2.1e+09 | 1.61e+04 ± 5.1e+03 | 1.56e+04 ± 5.1e+03 | 1.59e+04 ± 5.1e+03 | 1.37e+09 ± 1.0e+09 | 2.20e+10 ± 9.1e+09 |
| rf1 | 1.86e+02 ± 1.0e+02 | 1.83e+02 ± 9.6e+01 | 15.1 ± 9.9 | 3.50e+02 ± 1.7e+02 | 1.29e+02 ± 8.1e+01 | 1.09e+02 ± 6.7e+01 | 4.05e+05 ± 2.1e+05 | 9.85e+02 ± 4.8e+02 | 4.01e+02 ± 1.9e+02 |
| rf2 | 1.86e+02 ± 1.0e+02 | 1.83e+02 ± 9.6e+01 | 15.1 ± 9.9 | 3.51e+02 ± 1.7e+02 | 1.29e+02 ± 8.1e+01 | 1.09e+02 ± 6.7e+01 | 4.05e+05 ± 2.1e+05 | 9.87e+02 ± 4.8e+02 | 4.02e+02 ± 1.9e+02 |
| scm1d | 2.37e+05 ± 5.7e+04 | 1.81e+05 ± 5.1e+04 | 78.4 ± 1.6e+01 | 2.73e+08 ± 5.3e+07 | 57.3 ± 1.8e+01 | 43.0 ± 1.1e+01 | 43.6 ± 1.1e+01 | 1.48e+08 ± 4.9e+07 | 1.52e+08 ± 2.6e+07 |
| meps_21 | 1.21 ± 0.045 | 1.17 ± 0.046 | 0.315 ± 0.020 | 1.44 ± 0.14 | 0.617 ± 0.029 | 0.558 ± 0.026 | 0.553 ± 0.025 | 2.07 ± 0.10 | 1.96 ± 0.34 |
| meps_19 | 1.14 ± 0.027 | 1.11 ± 0.031 | 0.293 ± 0.018 | 1.29 ± 0.056 | 0.581 ± 0.027 | 0.559 ± 0.021 | 0.537 ± 0.021 | 1.96 ± 0.072 | 1.52 ± 0.053 |
| meps_20 | 1.20 ± 0.045 | 1.17 ± 0.038 | 0.309 ± 0.014 | 1.30 ± 0.047 | 0.606 ± 0.020 | 0.562 ± 0.020 | 0.546 ± 0.018 | 2.01 ± 0.11 | 1.59 ± 0.064 |
| house | 1.83 ± 0.027 | 1.81 ± 0.038 | 0.887 ± 0.033 | 1.09 ± 0.033 | 1.23 ± 0.034 | 0.964 ± 0.030 | 1.14 ± 0.030 | 1.44 ± 0.040 | 1.71 ± 0.013 |
| bio | 1.40 ± 0.39 | 1.42 ± 0.39 | 0.269 ± 0.010 | 0.486 ± 0.037 | 0.396 ± 0.014 | 0.297 ± 0.0090 | 0.311 ± 0.010 | 1.41 ± 0.31 | 2.57 ± 0.77 |
| calcofi | 2.04 ± 0.023 | 2.22 ± 0.071 | 1.42 ± 0.018 | 1.95 ± 0.040 | 2.22 ± 0.031 | 1.70 ± 0.022 | 1.78 ± 0.026 | 2.83 ± 0.036 | 2.34 ± 0.044 |
| blog_data | 0.390 ± 0.017 | 0.390 ± 0.016 | 0.0852 ± 0.0050 | 0.375 ± 0.018 | 0.173 ± 0.0097 | 0.163 ± 0.0078 | 0.188 ± 0.0067 | 0.613 ± 0.028 | 0.520 ± 0.027 |
| taxi | 5.68 ± 0.084 | 6.35 ± 0.16 | 2.67 ± 0.047 | 3.21 ± 0.049 | 4.55 ± 0.090 | 3.54 ± 0.075 | 3.99 ± 0.071 | 5.36 ± 0.090 | 6.51 ± 0.12 |
+
+Table 5: Mean region size with the base predictor ${\mathrm{{MQF}}}^{2}$ .
+
+
+
+
+
+
+
+
+Figure 13: CD diagrams with the base predictor $\mathrm{MQF}^2$ with 10 runs per dataset and method.
+
+
+
+# G.2. Distributional Random Forests
+
+Figure 14 presents additional results for the base predictor Distributional Random Forests. Since this model does not rely on a latent space, results for STDQR and L-CP are not included.
+
+In terms of conditional coverage, the results align with those of $\mathbf{MQF}^2$ , with C-PCP and C-HDR outperforming DR-CP, PCP, and HD-PCP. Notably, M-CP achieves competitive conditional coverage, suggesting it pairs well with DRF-KDE. Similar to $\mathbf{MQF}^2$ , all methods except for CopulaCPTS attain precise marginal coverage.
+
The median region size is normalized to a [0,1] range for each dataset to facilitate comparison. We observe that C-HDR generally achieves the smallest median region size, followed by DR-CP. The test time is the lowest for M-CP and CopulaCPTS, while C-PCP and C-HDR obtain the highest computation times.
+
+
+
+
+
+
+
+
+Figure 14: Conditional coverage metrics with the base predictor Distributional Random Forests across datasets sorted by size.
+
+
+
+
+
+Figure 15 shows CD diagrams obtained with Distributional Random Forests as the base predictor. The results are consistent with Figure 14.
+
+
+Figure 15: CD diagrams with the base predictor Distributional Random Forests based on 10 runs per dataset and method. The panels show the ranks of CEC-X (×100), CEC-Z (×100), WSC, mean region size, median region size, and test time (s).
+
+# G.3. Multivariate Gaussian Mixture Model
+
+Figure 16 presents additional results for the base predictor Multivariate Gaussian Mixture Model. Similarly to Distributional Random Forests, this model does not rely on a latent space and thus results for STDQR and L-CP are not included.
+
+The conditional coverage also aligns with that of $\mathrm{MQF}^2$: C-PCP and C-HDR outperform DR-CP, PCP, and HD-PCP, while M-CP and CopulaCPTS achieve intermediate conditional coverage. As expected, marginal coverage is precise for all methods except CopulaCPTS.
+
+C-HDR often obtains the smallest median region size, while DR-CP consistently attains the best mean region size.
+
+
+Figure 16: Conditional coverage metrics with the multivariate Gaussian mixture model base predictor across datasets sorted by size.
+
+
+
+
+
+CD diagrams in Figure 17 are consistent with Figure 16.
+
+
+
+
+
+
+
+
+Figure 17: CD diagrams based on the Multivariate Gaussian Mixture Model parameterized by a hypernetwork with $M = 10$ and 10 runs per dataset and method.
+
+
+
+
+
+# G.4. Impact of the number of samples $K$
+
+Figures 18 and 19 illustrate how conditional coverage, marginal coverage, and region size change as a function of $K$ on all datasets. For a better comparison among datasets, the metrics CEC-$X$, CEC-$Z$, the median region size, and the mean region size are normalized between 0 and 1, with results averaged over 10 runs. Furthermore, the red line indicates a linear regression fit, making the overall trend visible.
+
+Conditional coverage metrics decreasing with $K$ indicate that conditional coverage tends to improve as the number of samples grows. This is expected since a larger number of Monte Carlo samples allows a better estimation of the CDF of the scores in (13). Marginal validity is obtained for any $K$. However, small values of $K$ lead to more duplicated conformity scores and thus a possibility of overcoverage. Median and mean region sizes also tend to decrease with $K$ as the CDF approximation improves.
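
To make the role of $K$ concrete, here is a minimal sketch of a CDF-based conformity score estimated from $K$ Monte Carlo samples. The 1-D absolute-error base score and Gaussian sampler are illustrative assumptions, not the paper's base predictors; the point is that the score can only take the $K+1$ values $\{0, 1/K, \dots, 1\}$, so small $K$ produces many duplicated scores.

```python
import random

def s_ecdf(s_w, x, y, sampler, K, rng):
    # Empirical CDF of the base score s_W(x, .), estimated from K
    # Monte Carlo samples of the (estimated) conditional distribution.
    samples = [sampler(x, rng) for _ in range(K)]
    scores = [s_w(x, y_hat) for y_hat in samples]
    return sum(s <= s_w(x, y) for s in scores) / K

# Illustration: absolute-error base score, Gaussian conditional sampler.
rng = random.Random(0)
score = s_ecdf(lambda x, y: abs(y - x), 0.0, 0.3,
               lambda x, r: r.gauss(x, 1.0), 100, rng)
# `score` lies on the grid {0, 1/K, ..., 1}: a coarse grid (small K)
# means duplicated conformity scores and possible overcoverage.
assert 0.0 <= score <= 1.0
```

Increasing `K` refines this grid and improves the empirical CDF estimate, which is the mechanism behind the trends in Figures 18 and 19.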
+
+
+Figure 18: Evolution of conditional coverage, marginal coverage and region sizes of C-PCP as a function of the number of samples $K$ using the base predictor $\mathrm{MQF}^2$. The metrics CEC-$X$ and CEC-$Z$ should be minimized, while the marginal coverage and WSC should approach $1 - \alpha$ (indicated by the dashed black line). The red line, obtained by linear regression, indicates the general trend.
+
+
+Figure 19: Reproduction of Figure 18 for C-HDR.
+
+# H. Comparison with Bonferroni correction
+
+To better understand the prediction regions produced by FWER control methods, we provide a qualitative and quantitative comparison with the Bonferroni correction. We consider the Bonferroni correction applied to the scores of CQR (see (16)), similarly to M-CP. Figure 20 provides an illustrative example, and Table 6 reports results on the same dataset. They show that Bonferroni is computationally fast but produces larger regions due to their rectangular shape.
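
A minimal sketch of the Bonferroni baseline: each output dimension is calibrated at miscoverage level $\alpha/d$, so the resulting hyperrectangle covers all coordinates jointly with probability at least $1-\alpha$ by the union bound. The helper names and the integer calibration scores are hypothetical, and per-dimension CQR conformity scores are assumed to be precomputed.

```python
import math

def conformal_quantile(scores, alpha):
    # Standard split-conformal quantile: the ceil((n+1)(1-alpha))-th
    # smallest calibration score.
    s = sorted(scores)
    n = len(s)
    k = math.ceil((n + 1) * (1 - alpha))
    return s[min(k, n) - 1]

def bonferroni_thresholds(per_dim_scores, alpha):
    # Bonferroni correction: calibrate each dimension at level alpha / d,
    # so the rectangular region covers the d coordinates jointly with
    # probability >= 1 - alpha (union bound).
    d = len(per_dim_scores)
    return [conformal_quantile(s, alpha / d) for s in per_dim_scores]

# Two dimensions, alpha = 0.2: each dimension is calibrated at level 0.1,
# inflating the per-dimension intervals relative to a single-dimension
# calibration at 0.2, hence the larger rectangular regions.
thr = bonferroni_thresholds([list(range(1, 101)), list(range(1, 101))], 0.2)
```

With 100 calibration scores per dimension, the per-dimension threshold moves from the 81st to the 91st order statistic, illustrating the inflation.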
+
+
+
+
+
+
+
+
+Figure 20: Prediction regions for a bivariate unimodal dataset, conditional on a unidimensional input. The black, green, and yellow contours represent regions with nominal coverage levels of $20\%$ , $40\%$ , and $80\%$ , respectively. The figure is similar to Figure 2 in the main text, with Bonferroni added as a comparison. Both Bonferroni and M-CP are based on Conformal Quantile Regression (CQR) applied separately for each dimension.
+
+
+
+
+
+Table 6: Detailed metrics for the unimodal heteroscedastic process from Figure 20, with $1 - \alpha$ fixed to 0.8.
+
+| Method | MC | Median Size | CEC-X (×100) | CEC-Z (×100) | WSC | Test time |
+| --- | --- | --- | --- | --- | --- | --- |
+| Bonferroni | 0.813 ± 0.0036 | 9.07 ± 0.15 | 0.0241 ± 0.012 | 0.0249 ± 0.0098 | 0.815 ± 0.0063 | 0.00339 ± 5.9e-05 |
+| M-CP | 0.801 ± 0.0037 | 8.62 ± 0.074 | 0.0240 ± 0.0031 | 0.0157 ± 0.0031 | 0.796 ± 0.012 | 0.0959 ± 0.058 |
+| DR-CP | 0.796 ± 0.0019 | 6.83 ± 0.042 | 0.432 ± 0.019 | 0.403 ± 0.015 | 0.697 ± 0.0093 | 0.0557 ± 0.00075 |
+| C-HDR | 0.809 ± 0.0025 | 6.97 ± 0.039 | 0.0129 ± 0.0059 | 0.0155 ± 0.0037 | 0.815 ± 0.0030 | 14.2 ± 0.11 |
+| L-CP | 0.798 ± 0.0024 | 8.06 ± 0.035 | 0.00586 ± 0.00095 | 0.00549 ± 0.0014 | 0.794 ± 0.0039 | 0.0584 ± 0.0012 |
+
+# I. Comparison between C-PCP and $\mathbf{CP^2}$ -PCP
+
+In this section, we compare our proposed method, C-PCP, with the $\mathrm{CP^2}$ -PCP method recently proposed by Plassier et al., 2025. More generally, we also compare the methods from the $\mathrm{CP^2}$ framework of Plassier et al., 2025 with our class of CDF-based conformity scores (Section 3.1 in the main text). In Appendix I.1, we present the more general $\mathrm{CP^2}$ framework using our own notation for clarity, with $\mathrm{CP^2}$ -PCP as a particular case of $\mathrm{CP^2}$ . In Appendix I.2, we discuss the asymptotic properties of $\mathrm{CP^2}$ and show the asymptotic equivalence with CDF-based methods. In Appendix I.3, we discuss the relationship between CDF-based and $\mathrm{CP^2}$ -based methods.
+
+# I.1. The CP $^2$ framework
+
+Let us define a family of non-decreasing nested regions $\{\mathcal{R}(x;t)\}_{t\in \mathbb{R}}$ such that $\bigcap_{t\in \mathbb{R}}\mathcal{R}(x;t) = \emptyset$, $\bigcup_{t\in \mathbb{R}}\mathcal{R}(x;t) = \mathcal{Y}$, and $\bigcap_{t^{\prime} > t}\mathcal{R}(x;t^{\prime}) = \mathcal{R}(x;t)$ (continuity from above). Without loss of generality, these nested regions are expressed in terms of a conformity score $s_W(x,y)\in \mathbb{R}$ as follows:
+
+$$
+\mathcal {R} (x; t) = \{y \in \mathcal {Y}: s _ {W} (x, y) \leq t \}, \tag {134}
+$$
+
+where $s_W(x,y)$ is continuous in $y$ .
+
+As the next step, we introduce a family of transformation functions $f_{\tau}: \mathbb{R} \to \mathbb{R}$ parameterized by $\tau \in \mathbb{R}$. It is assumed that, for any $\tau$, the function $\lambda \mapsto f_{\tau}(\lambda)$ is increasing and bijective. Let $\varphi \in \mathbb{R}$ be a constant (e.g., $\varphi = 1$). We also define the function $g_{\varphi}(\tau) = f_{\tau}(\varphi)$ and assume that $\tau \mapsto g_{\varphi}(\tau)$ is increasing and bijective.
+
+As a first step towards defining $\mathbf{CP}^2$ , we construct a prediction region assuming knowledge of the conditional distribution $F_{Y|X}$ . For a given input $x \in \mathcal{X}$ , the prediction region is defined as:
+
+$$
+\bar {R} _ {\mathrm {C P} ^ {2}} (x) = \mathcal {R} (x; f _ {\tau_ {x}} (\varphi)), \tag {135}
+$$
+
+where
+
+$$
+\tau_ {x} = \inf \left\{\tau : \mathbb {P} (Y \in \mathcal {R} (X; f _ {\tau} (\varphi)) \mid X = x) \geq 1 - \alpha \right\} \tag {136}
+$$
+
+implies that $\bar{R}_{\mathrm{CP}^2}(x)$ guarantees conditional coverage given $x$ . Furthermore, using (134) and defining the random variable $W = s_W(X,Y)$ , we can equivalently express (136) as
+
+$$
+\begin{array}{l} \tau_ {x} = \inf \left\{\tau : \mathbb {P} \left(s _ {W} (X, Y) \leq f _ {\tau} (\varphi) \mid X = x\right) \geq 1 - \alpha \right\} (137) \\ = \inf \left\{\tau : \mathbb {P} \left(g _ {\varphi} ^ {- 1} \left(s _ {W} (X, Y)\right) \leq \tau \mid X = x\right) \geq 1 - \alpha \right\} (138) \\ = Q _ {g _ {\varphi} ^ {- 1} (W) \mid X = x} (1 - \alpha) (139) \\ = g _ {\varphi} ^ {- 1} \left(Q _ {W | X = x} (1 - \alpha)\right), (140) \\ \end{array}
+$$
+
+where we used that $g_{\varphi}$ is increasing and bijective, with $g_{\varphi}^{-1}(f_{\tau}(\varphi)) = \tau$ . In other words, $\tau_{x}$ is the $1 - \alpha$ quantile of $g_{\varphi}^{-1}(W)$ . However, in practice, $\tau_{x}$ cannot be computed directly because the true conditional distribution $F_{Y|x}$ is unknown. Instead, it can be estimated using a sample $\hat{Y}^{(k)}, k \in [K]$ , drawn from the estimated conditional distribution $\hat{F}_{Y|X=x}$ . If $\hat{Q}_{W|X=x}(1 - \alpha)$ is the $1 - \alpha$ quantile of the empirical distribution $\frac{1}{K} \sum_{k \in [K]} \delta_{s_W(x, \hat{Y}^{(k)})}$ , we can compute
+
+$$
+\hat {\tau} _ {x} = g _ {\varphi} ^ {- 1} \left(\hat {Q} _ {W | X = x} (1 - \alpha)\right). \tag {141}
+$$
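
The estimate (141) can be sketched in a few lines. The names are illustrative: the $K$ base scores $s_W(x, \hat{Y}^{(k)})$ are assumed to be already computed, and the inverse adjustment $g_{\varphi}^{-1}$ is passed as a callable.

```python
import math

def empirical_quantile(values, level):
    # `level`-quantile of the empirical distribution (1/K) * sum of
    # Diracs: the smallest value whose empirical CDF reaches `level`.
    vs = sorted(values)
    k = math.ceil(level * len(vs))
    return vs[max(k, 1) - 1]

def tau_hat(sample_scores, alpha, g_inv):
    # Eq. (141): tau_hat_x = g_phi^{-1}( Q_hat_{W|X=x}(1 - alpha) ).
    return g_inv(empirical_quantile(sample_scores, 1 - alpha))
```

For the linear adjustment $f_{\tau}(\lambda) = \tau\lambda$ with $\varphi = 1$, `g_inv` is simply the identity, so $\hat{\tau}_x$ is the empirical $1-\alpha$ quantile of the sampled scores.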
+
+It should be noted that this estimated prediction region loses the exact conditional and marginal coverage properties due to the reliance on the estimated conditional distribution. The following shows how conformal prediction can restore some coverage properties.
+
+From (134), using (135), we can write
+
+$$
+\begin{array}{l} \bar {R} _ {\mathrm {C P} ^ {2}} (x) = \{y \in \mathcal {Y}: s _ {W} (x, y) \leq f _ {\tau_ {x}} (\varphi) \} (142) \\ = \left\{y \in \mathcal {Y}: f _ {\tau_ {x}} ^ {- 1} \left(s _ {W} (x, y)\right) \leq \varphi \right\}, (143) \\ \end{array}
+$$
+
+where we used the invertibility of $f_{\tau}$ for any $\tau \in \mathbb{R}$ .
+
+Based on (143), Plassier et al., 2025 defined the following conformity score:
+
+$$
+s _ {\mathrm {C P} ^ {2}} (x, y) = f _ {\hat {\tau} _ {x}} ^ {- 1} \left(s _ {W} (x, y)\right), \tag {144}
+$$
+
+for which the corresponding prediction region $\hat{R}_{\mathrm{CP}^2}$ is given by
+
+$$
+\hat {R} _ {\mathrm {C P} ^ {2}} (x) = \{y \in \mathcal {Y}: s _ {\mathrm {C P} ^ {2}} (x, y) \leq \hat {q} \}, \tag {145}
+$$
+
+where we used (2) from the main text.
+
+As an example, taking $f_{\tau}(\lambda) = \tau \lambda$ and $\varphi = 1$ , the conformity score becomes:
+
+$$
+s _ {\mathrm {C P} ^ {2}} (x, y) = s _ {W} (x, y) / \hat {\tau} _ {x}, \tag {146}
+$$
+
+where $\hat{\tau}_{x}$ is defined in (141). Finally, we obtain $\mathrm{CP}^2$ -PCP simply by replacing $s_W$ with $s_{\mathrm{PCP}}$ in (146).
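
A 1-D sketch of this construction: the PCP score is the distance to the nearest generated sample, and it is rescaled by $\hat{\tau}_x$ as in (146). For simplicity, two independent sample sets are assumed here, one defining $s_{\mathrm{PCP}}$ and one estimating $\hat{\tau}_x$, which simplifies the actual procedure of Plassier et al., 2025.

```python
import math
import random

def s_pcp(y, samples):
    # PCP base score: distance from y to the nearest generated sample
    # (1-D for simplicity).
    return min(abs(y - s) for s in samples)

def tau_hat_pcp(samples_region, samples_cal, alpha):
    # (1 - alpha)-quantile of the PCP scores of fresh samples, i.e.
    # tau_hat_x for the linear adjustment with phi = 1.
    scores = sorted(s_pcp(s, samples_region) for s in samples_cal)
    k = math.ceil((1 - alpha) * len(scores))
    return scores[k - 1]

def s_cp2_pcp(y, samples_region, t_hat):
    # Eq. (146): the PCP score rescaled by tau_hat_x.
    return s_pcp(y, samples_region) / t_hat

rng = random.Random(1)
region = [rng.gauss(0.0, 1.0) for _ in range(50)]
cal = [rng.gauss(0.0, 1.0) for _ in range(50)]
t = tau_hat_pcp(region, cal, 0.2)
```

The rescaling makes the score roughly comparable across inputs $x$ with different conditional spreads, which is what restores conditional coverage asymptotically.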
+
+# I.2. Asymptotic properties
+
+# I.2.1. ASYMPTOTIC EQUIVALENCE OF PREDICTION REGIONS
+
+In the following, we prove that the prediction regions generated by $\mathrm{CP}^2$ (for any $f_{\tau}$ and $\varphi$ ) and CDF-based methods are identical in the oracle setting, asymptotically, as $|\mathcal{D}_{\mathrm{cal}}| \to \infty$ . Specifically, for any $x \in \mathcal{X}$ , both methods select the same threshold $t_{1 - \alpha} = Q_W(1 - \alpha \mid X = x)$ for the prediction region $\mathcal{R}(x; t_{1 - \alpha})$ , which ensures a coverage level of $1 - \alpha$ .
+
+Proposition 6. Provided that the assumptions in Appendix I.1 hold, for any $x \in \mathcal{X}$, the prediction regions $\bar{R}_{\mathrm{CP}^2}(x)$ (for any choice of $f_{\tau}$ and $\varphi$) and $\bar{R}_{\mathrm{CDF}}(x)$ are equivalent.
+
+Proof. Using the fact that $g_{\varphi}^{-1}(f_{\tau}(\varphi)) = \tau$ for any $\tau \in \mathbb{R}$ and that $g_{\varphi}$ is increasing and bijective, we can write:
+
+$$
+\begin{array}{l} \bar {R} _ {\mathrm {C P} ^ {2}} (x) = \{y \in \mathcal {Y}: s _ {W} (x, y) \leq f _ {\tau_ {x}} (\varphi) \} (147) \\ = \left\{y \in \mathcal {Y}: g _ {\varphi} ^ {- 1} \left(s _ {W} (x, y)\right) \leq \tau_ {x} \right\} (148) \\ = \left\{y \in \mathcal {Y}: g _ {\varphi} ^ {- 1} \left(s _ {W} (x, y)\right) \leq g _ {\varphi} ^ {- 1} \left(Q _ {W | X = x} (1 - \alpha)\right) \right\} (149) \\ = \{y \in \mathcal {Y}: s _ {W} (x, y) \leq Q _ {W | X = x} (1 - \alpha) \}. (150) \\ \end{array}
+$$
+
+Let $\bar{R}_{\mathrm{CDF}}(x)$ denote the prediction region obtained using the conformity score $s_{\mathrm{CDF}}$ as $|\mathcal{D}_{\mathrm{cal}}| \to \infty$ . As shown in Section 3.1, $s_{\mathrm{CDF}}(X,Y) \sim \mathcal{U}(0,1)$ , which implies $\hat{q} = 1 - \alpha$ . Therefore:
+
+$$
+\begin{array}{l} \bar {R} _ {\mathrm {C D F}} (x) = \{y \in \mathcal {Y}: s _ {\mathrm {C D F}} (x, y) \leq 1 - \alpha \} (151) \\ = \{y \in \mathcal {Y}: F _ {W | X = x} (s _ {W} (x, y)) \leq 1 - \alpha \} (152) \\ = \{y \in \mathcal {Y}: s _ {W} (x, y) \leq Q _ {W | X = x} (1 - \alpha) \}. (153) \\ \end{array}
+$$
+
+This shows that $\bar{R}_{\mathrm{CP}^2}(x) = \bar{R}_{\mathrm{CDF}}(x)$ and that the threshold $t_{1 - \alpha} = Q_{W|X = x}(1 - \alpha)$ is identical for both methods.
+
+# I.2.2. ASYMPTOTIC CONDITIONAL COVERAGE
+
+Proposition 7. Provided that the assumptions in Section 5.2 of the main text hold, specifically that $\hat{F}_{Y|X = x} = F_{Y|x}$ for all $x\in \mathcal{X}$ , and $|\mathcal{D}_{\mathrm{cal}}|\to \infty$ , $\mathrm{CP}^2$ achieves ACC as $K\rightarrow \infty$ .
+
+Proof. Under these assumptions, we have $\hat{Q}_{W|X = x} = Q_{W|X = x}$ , which implies $\hat{\tau}_x = \tau_x$ for all $x\in \mathcal{X}$ . Hence, the prediction region for $\mathbf{CP}^2$ is given by:
+
+$$
+\bar {R} _ {\mathrm {C P} ^ {2}} (x) = \{y \in \mathcal {Y}: s _ {\mathrm {C P} ^ {2}} (x, y) \leq \varphi \}.
+$$
+
+Since this prediction region provides conditional coverage, it also ensures marginal coverage:
+
+$$
+\begin{array}{l} \mathbb {P} (Y \in \bar {R} _ {\mathrm {C P} ^ {2}} (X)) = \mathbb {P} \left(s _ {\mathrm {C P} ^ {2}} (X, Y) \leq \varphi\right) (154) \\ = \mathbb {E} _ {X} \left[ \mathbb {P} \left(s _ {\mathrm {C P} ^ {2}} (X, Y) \leq \varphi \mid X\right) \right] (155) \\ = \mathbb {E} _ {X} [ 1 - \alpha ] (156) \\ = 1 - \alpha . (157) \\ \end{array}
+$$
+
+Since $\hat{q}$ is the $1 - \alpha$ quantile of $s_{\mathrm{CP}^2}(X,Y)$ , and as $|\mathcal{D}_{\mathrm{cal}}| \to \infty$ , we have $\hat{q} = \varphi$ by definition. Therefore, since $\bar{R}_{\mathrm{CP}^2}(x)$ achieves conditional coverage (see (136)), the region $\hat{R}_{\mathrm{CP}^2}(x)$ also achieves ACC:
+
+$$
+\begin{array}{l} \mathbb {P} \left(Y \in \hat {R} _ {\mathrm {C P} ^ {2}} (X) \mid X = x\right) = \mathbb {P} \left(s _ {\mathrm {C P} ^ {2}} (X, Y) \leq \hat {q} \mid X = x\right) (158) \\ = \mathbb {P} \left(s _ {\mathrm {C P} ^ {2}} (X, Y) \leq \varphi \mid X = x\right) (159) \\ \geq 1 - \alpha . (160) \\ \end{array}
+$$
+
+
+
+# I.3. Relationship between CDF-based and $\mathbf{CP^2}$ -based methods
+
+A natural question is whether there exist $\{f_{\tau}\}_{\tau \in \mathbb{R}}$ and $\varphi \in \mathbb{R}$ (satisfying the assumptions introduced in Appendix I.1) such that CDF-based and $\mathrm{CP}^2$-based methods produce the same regions. In the simple case where the distribution of the base conformity score belongs to a location family, Proposition 8 shows that the two methods are equivalent for a simple choice of $f_{\tau}$ and $\varphi$. However, the proof does not easily generalize to a location-scale family. Further development of existing classes of conformal methods with ACC and their intersections is a promising avenue for future research. Interestingly, we discuss below that answering this question would also draw links between established univariate conformal methods.
+
+Analogy to univariate conformal prediction. To further clarify the distinction between CDF- and $\mathrm{CP^2}$ -based methods, we can draw an analogy to the established univariate methods Dist-split (DS, Izbicki et al., 2020) and Conformalized Quantile Regression (CQR, Romano et al., 2019). Since CDF- and $\mathrm{CP^2}$ -based methods calibrate one quantile instead of an interval, we only consider the right-tail version of DS and CQR:
+
+- $s_{\mathrm{ECDF}}$ is analogous to DS but operates in the space of conformity scores instead of the output space $\mathcal{Y}$. DS uses the estimated conditional CDF of the output variable, $s_{\mathrm{DS}}(x,y) = \hat{F}_{Y|X=x}(y)$, transforming $y$ based on its rank.
+- $s_{\mathrm{CP}^2}$ with difference adjustment is analogous to CQR, and also operates in the space of conformity scores instead of the output space $\mathcal{Y}$. Note that $\mathrm{CP}^2$ with difference adjustment simplifies to $s_{\mathrm{CP}^2}(x,y) = s_W(x,y) - \hat{Q}_{W|x}(1 - \alpha)$. Similarly, CQR uses a score based on the difference from a single estimated conditional quantile, $s_{\mathrm{CQR}}(x,y) = y - \hat{Q}_{Y|x}(1 - \alpha)$.
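
The simplification in the second bullet can be checked mechanically. The sketch below uses the difference adjustment $f_{\tau}(\lambda) = \lambda + \tau$ with, for convenience, $\varphi = 0$; the numeric values for $\hat{Q}_{W|x}(1-\alpha)$ and $s_W(x,y)$ are hypothetical.

```python
def f_inv(tau, lam):
    # Inverse of the difference adjustment f_tau(lambda) = lambda + tau.
    return lam - tau

def g_inv(phi, w):
    # Inverse of g_phi(tau) = f_tau(phi) = phi + tau.
    return w - phi

phi = 0.0    # any constant works; phi = 0 makes the score exactly s_W - Q_hat
q_hat = 2.5  # hypothetical estimated quantile Q_hat_{W|x}(1 - alpha)
s_w = 3.1    # hypothetical base conformity score s_W(x, y)
tau_hat = g_inv(phi, q_hat)
# s_CP2(x, y) = f_{tau_hat}^{-1}(s_W(x, y)) reduces to s_W(x, y) - Q_hat
assert f_inv(tau_hat, s_w) == s_w - q_hat
```

For nonzero $\varphi$ the score is shifted by the constant $\varphi$, which leaves the induced prediction regions unchanged.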
+
+Both CDF-based and $\mathrm{CP}^2$ -based methods rely on a sample $\{\hat{Y}^{(k)}\}_{k = 1}^{K}$ where $\hat{Y}^{(k)}\sim \hat{F}_{Y|X = x}$ . The difference lies in the way they transform $s_W(x,y)$ to obtain ACC. Recall that the conformity scores $s_{\mathrm{ECDF}}$ and $s_{\mathrm{CP}^2}$ are given by
+
+$$
+s _ {\mathrm {E C D F}} (x, y) = \frac {1}{K} \sum_ {k \in [ K ]} \mathbb {I} \left(s _ {W} \left(x, \hat {Y} ^ {(k)}\right) \leq s _ {W} (x, y)\right) = \hat {F} _ {W | X = x} \left(s _ {W} (x, y)\right), \tag {161}
+$$
+
+$$
+s _ {\mathrm {C P} ^ {2}} (x, y) = f _ {\hat {\tau} _ {x}} ^ {- 1} \left(s _ {W} (x, y)\right) \quad \text{where} \quad \hat {\tau} _ {x} = g _ {\varphi} ^ {- 1} \left(\hat {Q} _ {W | X = x} (1 - \alpha)\right). \tag {162}
+$$
+
+It is known that two conformal methods produce equal regions if and only if their conformity scores are equal after applying a strictly increasing function $\phi : \mathbb{R} \to \mathbb{R}$ , i.e.:
+
+$$
+s _ {\mathrm {E C D F}} (x, y) = \phi \left(s _ {\mathrm {C P} ^ {2}} (x, y)\right) \quad \forall x \in \mathcal {X}, y \in \mathcal {Y}. \tag {163}
+$$
+
+Given $x \in \mathcal{X}$ , when $K$ is finite, the conformity score $s_{\mathrm{ECDF}}(x,\cdot)$ is discontinuous and is thus necessarily different from the conformity score $s_{\mathrm{CP}^2}(x,\cdot)$ , which is continuous. A more interesting setting is the case where $K \to \infty$ and $s_{\mathrm{ECDF}}(x,\cdot)$ becomes continuous. We define the random variable $\hat{W} = s_W(X,\hat{Y})$ , with $\hat{Y} \sim \hat{F}_{Y|X}$ . Let $F_{\hat{W}|x}$ and $Q_{\hat{W}|x}$ denote the conditional CDF and QF of $\hat{W}$ given $X = x$ . The conformity scores are defined as follows:
+
+$$
+\bar {s} _ {\mathrm {E C D F}} (x, y) = F _ {\hat {W} | x} \left(s _ {W} (x, y)\right), \tag {164}
+$$
+
+$$
+\bar {s} _ {\mathrm {C P} ^ {2}} (x, y) = f _ {\hat {\tau} _ {x}} ^ {- 1} \left(s _ {W} (x, y)\right) \quad \text{where} \quad \hat {\tau} _ {x} = g _ {\varphi} ^ {- 1} \left(Q _ {\hat {W} | x} (1 - \alpha)\right). \tag {165}
+$$
+
+Thus, we require that
+
+$$
+f _ {\hat {\tau} _ {x}} ^ {- 1} \left(s _ {W} (x, y)\right) = \phi \left(F _ {\hat {W} | x} \left(s _ {W} (x, y)\right)\right) \quad \forall x \in \mathcal {X}, y \in \mathcal {Y} \tag {166}
+$$
+
+or equivalently
+
+$$
+f _ {\hat {\tau} _ {x}} ^ {- 1} (w) = \phi \left(F _ {\hat {W} | x} (w)\right) \quad \forall x \in \mathcal {X}, w \in \mathbb {R}. \tag {167}
+$$
+
+In Proposition 8, we show that, in the particular case where the conditional distributions $\{F_{\hat{W} |x}\}_{x\in \mathcal{X}}$ belong to a location family, there exists a simple choice of $\{f_{\tau}\}_{\tau \in \mathbb{R}}, \varphi \in \mathbb{R}$ and strictly increasing $\phi : \mathbb{R} \to \mathbb{R}$ such that the two methods are equivalent.
+
+Proposition 8. Consider a scenario where all conditional distributions $\{F_{\hat{W} |x}\}_{x\in \mathcal{X}}$ belong to a location family, i.e.,
+
+$$
+F _ {\hat {W} | x} (w) = F \left(w - \hat {\mu} _ {x}\right) \quad \text{and} \quad Q _ {\hat {W} | x} (\alpha) = F ^ {- 1} (\alpha) + \hat {\mu} _ {x}, \tag {168}
+$$
+
+for some continuous and strictly increasing base CDF $F$ and location parameter $\hat{\mu}_x$ . The conformity scores $\bar{s}_{\mathrm{ECDF}}$ and $\bar{s}_{\mathrm{CP^2}}$ lead to the same prediction regions.
+
+Proof. We will show that there is a family of transformations $\{f_{\tau}\}_{\tau \in \mathbb{R}}, \varphi \in \mathbb{R}$ and strictly increasing $\phi: \mathbb{R} \to \mathbb{R}$ with the assumptions above such that, for any $x \in \mathcal{X}$ and $w \in \mathbb{R}$ ,
+
+$$
+f _ {\hat {\tau} _ {x}} ^ {- 1} (w) = \phi \left(F _ {\hat {W} | x} (w)\right). \tag {169}
+$$
+
+Define the transformation function $f_{\tau}$ as:
+
+$$
+f _ {\tau} (\lambda) = F ^ {- 1} (\lambda) + \tau , \tag {170}
+$$
+
+where $\tau \in \mathbb{R}$, and define $\varphi = 1 - \alpha$ and $\phi(\lambda) = \lambda$.
+
+The inverse transformations are:
+
+$$
+f _ {\tau} ^ {- 1} (\lambda) = F (\lambda - \tau), \tag {171}
+$$
+
+and
+
+$$
+g _ {\varphi} ^ {- 1} (w) = w - F ^ {- 1} (\varphi). \tag {172}
+$$
+
+Now, for $x\in \mathcal{X}$ , compute
+
+$$
+\hat {\tau} _ {x} = F ^ {- 1} (1 - \alpha) + \hat {\mu} _ {x} - F ^ {- 1} (\varphi) = \hat {\mu} _ {x}. \tag {173}
+$$
+
+Finally, we obtain the required equality
+
+$$
+f _ {\hat {\tau} _ {x}} ^ {- 1} (w) = F (w - \hat {\tau} _ {x}) = F (w - \hat {\mu} _ {x}) = F _ {\hat {W} | x} (w). \tag {174}
+$$
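
The chain (173)-(174) can be verified numerically. The sketch below takes the standard normal CDF as the base $F$ (with a simple bisection inverse, purely an implementation convenience) and an arbitrary location $\hat{\mu}_x$:

```python
import math

def Phi(t):
    # Standard normal CDF, playing the role of the base CDF F.
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def Phi_inv(p):
    # Bisection inverse of Phi; accurate enough for a numeric check.
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

alpha = 0.2
varphi = 1 - alpha                   # the choice made in the proof
mu_x = 0.7                           # arbitrary location parameter
q = Phi_inv(1 - alpha) + mu_x        # Q_{W_hat|x}(1 - alpha), Eq. (168)
tau_hat = q - Phi_inv(varphi)        # g_varphi^{-1}(q); reduces to mu_x (173)
assert abs(tau_hat - mu_x) < 1e-9
for w in (-1.0, 0.0, 0.7, 2.0):
    # Eq. (174): f_{tau_hat}^{-1}(w) = F(w - mu_x) = F_{W_hat|x}(w)
    assert abs(Phi(w - tau_hat) - Phi(w - mu_x)) < 1e-9
```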
+
+# I.4. Empirical comparison
+
+We perform a direct empirical comparison between CDF-based methods (C-PCP, C-HDR) and the corresponding $\mathrm{CP}^2$ methods ($\mathrm{CP}^2$-PCP and $\mathrm{CP}^2$-HPD, using both the linear (-L) and difference (-D) adjustments from Plassier et al., 2025). Figure 21 shows that:
+
+- C-PCP performs comparably to $\mathrm{CP^2}$ -PCP-L (best $\mathrm{CP^2}$ variant for PCP).
+- C-HDR performs comparably to $\mathrm{CP^2}$ -HPD-D (best $\mathrm{CP^2}$ variant for HPD).
+- Other $\mathrm{CP}^2$ variants ($\mathrm{CP}^2$-PCP-D, $\mathrm{CP}^2$-HPD-L) are generally outperformed by their CDF-based counterparts.
+
+
+
+
+
+
+Figure 21: Comparison of CDF-based methods and $\mathrm{CP^2}$ -based methods.
+
+
+
+The worsened conditional coverage of $\mathrm{CP^2}$-PCP-D is an interesting observation that did not appear in the smaller-scale study of Plassier et al., 2025. In the case of $\mathrm{CP^2}$-HPD-L, the poor conditional coverage is due to an incompatibility of the linear adjustment function with the (log-scaled) conformity score $s_{\mathrm{DR - CP}}(x,y) = -\log \hat{f} (y|x)$, which can take negative values, yielding a decreasing (instead of increasing) adjustment function $f_{\tau}(\lambda) = \tau \lambda$.
+
+This shows that our simpler $s_{\mathrm{ECDF}}$ formulation achieves the same practical benefits as $s_{\mathrm{CP}^2}$ without the sensitivity to the choice of an adjustment function $f_{\tau}$.
+
+# J. Results on an image dataset
+
+To better understand the behavior of prediction regions in high-dimensional spaces, we apply conformal methods to the CIFAR-10 dataset (Krizhevsky et al., 2014), which consists of $32 \times 32$ RGB images, each labeled with one of 10 possible classes. We train a generative model conditioned on the image label, where $\mathcal{Y} = [0,1]^{3 \times 32 \times 32}$ ( $d = 3072$ ) represents the image space, and $\mathcal{X} = \{0,\dots,9\}$ ( $p = 1$ ) represents the labels. The training, calibration, and test datasets contain 50,000, 1,500, and 1,500 images, respectively. As noted in Angelopoulos and Bates, 2023, this calibration dataset size is sufficient to ensure good marginal coverage.
+
+Our generative model is a conditional Glow model (Kingma and Dhariwal, 2018) based on the implementation from Stimper et al., 2022 using a 3-level multi-scale architecture with 32 blocks per level. Like MQF² (Appendix F.2), this generative model is a normalizing flow and directly compatible with all methods presented, except M-CP. For a direct comparison with M-CP, we compute quantiles based on samples from the generative model as in Appendix F.2.
+
+The latent space of the conditional Glow model, due to its multi-scale architecture, consists of three subspaces: $\mathcal{Z} = \mathcal{Z}_1 \times \mathcal{Z}_2 \times \mathcal{Z}_3$, where $\mathcal{Z}_1 = \mathbb{R}^{48 \times 4 \times 4}$, $\mathcal{Z}_2 = \mathbb{R}^{12 \times 8 \times 8}$, and $\mathcal{Z}_3 = \mathbb{R}^{6 \times 16 \times 16}$. As the distance function $d_{\mathcal{Z}}$ in the latent space, we use the maximum norm across the three subspaces to penalize high norms in any of them: $d_{\mathcal{Z}}(z) = \max \{\| z_1 \|_2, \| z_2 \|_2, \| z_3 \|_2\}$, where $z = (z_1, z_2, z_3)$.
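
A minimal sketch of this latent distance, where flat Python lists stand in for the $48{\times}4{\times}4$, $12{\times}8{\times}8$, and $6{\times}16{\times}16$ tensors:

```python
import math

def euclidean_norm(z):
    # ||z||_2 over a flattened latent subspace.
    return math.sqrt(sum(v * v for v in z))

def d_latent(z1, z2, z3):
    # d_Z(z) = max(||z1||_2, ||z2||_2, ||z3||_2): a region bounded in this
    # distance penalizes a high norm in any of the three subspaces.
    return max(euclidean_norm(z1), euclidean_norm(z2), euclidean_norm(z3))

assert d_latent([3.0, 4.0], [0.0], [1.0]) == 5.0  # max(5, 0, 1)
```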
+
+Table 7 presents the metrics introduced in Appendix F.4. All methods achieve marginal coverage despite the high dimensionality of $\mathcal{Y}$, which is expected since the distribution of marginal coverage conditional on the calibration dataset is independent of $d$ (Appendix E.1). The median of the logarithm of the region size is smallest for C-HDR and DR-CP, which matches the results on tabular datasets. The mean region size is not reported because it overflows machine precision; instead, we report the mean of the logarithm of the region size, which leads to similar conclusions.
+
+Regarding conditional coverage, as in other experiments, L-CP, C-HDR, C-PCP, and M-CP exhibit the smallest CEC- $X$ and CEC- $Z$ values, indicating superior conditional coverage. The WSC metric supports similar conclusions, with DR-CP and PCP being the least calibrated.
+
+Table 7: Results obtained with a conditional Glow model on CIFAR-10 with $1 - \alpha = 0.9$.
+
+| Method | MC | Median Log Size | Mean Log Size | CEC-X (×100) | CEC-Z (×100) | WSC | Time (s) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| M-CP | 0.900 ± 0.0035 | -7.10e+03 ± 5.7 | -7.05e+03 ± 1.3e+01 | 0.111 ± 0.028 | 0.201 ± 0.033 | 0.855 ± 0.012 | 0.465 ± 0.16 |
+| DR-CP | 0.903 ± 0.0042 | -8.30e+03 ± 1.4e+01 | -8.33e+03 ± 1.4e+01 | 0.152 ± 0.030 | 0.325 ± 0.033 | 0.861 ± 0.0064 | 47.0 ± 1.7e+01 |
+| C-HDR | 0.902 ± 0.0041 | -8.33e+03 ± 1.9e+01 | -8.40e+03 ± 1.9e+01 | 0.0533 ± 0.010 | 0.0629 ± 0.020 | 0.903 ± 0.0030 | 453 ± 8.7e+01 |
+| PCP | 0.899 ± 0.0038 | -7.11e+03 ± 4.5 | -7.06e+03 ± 1.3e+01 | 0.342 ± 0.070 | 0.195 ± 0.030 | 0.825 ± 0.0098 | 203 ± 3.5e+01 |
+| HD-PCP | 0.899 ± 0.0038 | -7.12e+03 ± 4.8 | -7.06e+03 ± 1.2e+01 | 0.359 ± 0.075 | 0.198 ± 0.030 | 0.819 ± 0.0098 | 406 ± 6.9e+01 |
+| STDQR | 0.898 ± 0.0046 | -7.11e+03 ± 4.6 | -7.06e+03 ± 1.2e+01 | 0.357 ± 0.071 | 0.205 ± 0.033 | 0.828 ± 0.015 | 204 ± 3.5e+01 |
+| C-PCP | 0.900 ± 0.0036 | -7.08e+03 ± 4.9 | -7.04e+03 ± 1.1e+01 | 0.118 ± 0.021 | 0.0877 ± 0.023 | 0.880 ± 0.0067 | 408 ± 6.9e+01 |
+| L-CP | 0.900 ± 0.0033 | -7.19e+03 ± 7.0 | -7.15e+03 ± 1.1e+01 | 0.0668 ± 0.0086 | 0.190 ± 0.027 | 0.877 ± 0.011 | 47.4 ± 1.8e+01 |
+
+# K. Full results
+
+Tables 8 and 9 show the full results obtained with the setup described in Section 6. Each metric is the mean over 10 independent runs; the standard error of the mean is reported alongside each value. For each dataset and metric, bold values indicate results statistically indistinguishable from the best performer ($\alpha = 0.05$) according to a Z-test.
+
+Table 8: Full results obtained with the setup described in Section 6 (Part 1).
+
+| Dataset | Method | MC | Median Size | CEC-X (×100) | CEC-Z (×100) | WSC | Test time |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| households | M-CP | 0.801 ± 0.0051 | 14.2 ± 0.48 | 0.340 ± 0.068 | 0.364 ± 0.032 | 0.779 ± 0.010 | 5.69 ± 0.49 |
+| | CopulaCPTS | 0.782 ± 0.0094 | 12.3 ± 0.87 | 0.524 ± 0.057 | 0.651 ± 0.058 | 0.745 ± 0.016 | 8.86 ± 0.77 |
+| | DR-CP | 0.802 ± 0.0046 | 13.2 ± 0.29 | 0.987 ± 0.10 | 1.88 ± 0.14 | 0.656 ± 0.018 | 0.225 ± 0.0092 |
+| | C-HDR | 0.807 ± 0.0054 | 10.6 ± 0.33 | 0.209 ± 0.039 | 0.149 ± 0.020 | 0.795 ± 0.010 | 6.12 ± 0.50 |
+| | PCP | 0.798 ± 0.0048 | 20.5 ± 0.38 | 1.07 ± 0.085 | 2.35 ± 0.15 | 0.632 ± 0.015 | 5.48 ± 0.46 |
+| | HD-PCP | 0.800 ± 0.0043 | 15.6 ± 0.39 | 0.776 ± 0.091 | 1.38 ± 0.10 | 0.707 ± 0.014 | 5.76 ± 0.47 |
+| | STDQR | 0.804 ± 0.0050 | 17.8 ± 0.41 | 0.918 ± 0.073 | 1.97 ± 0.098 | 0.677 ± 0.019 | 8.45 ± 0.79 |
+| | C-PCP | 0.803 ± 0.0066 | 15.5 ± 0.74 | 0.179 ± 0.045 | 0.120 ± 0.026 | 0.800 ± 0.0061 | 11.2 ± 0.95 |
+| | L-CP | 0.800 ± 0.0034 | 18.6 ± 0.80 | 0.204 ± 0.040 | 0.118 ± 0.018 | 0.788 ± 0.014 | 0.101 ± 0.0043 |
+| scm20d | M-CP | 0.800 ± 0.0039 | 67.6 ± 8.5 | 0.0682 ± 0.011 | 0.914 ± 0.061 | 0.777 ± 0.0090 | 8.08 ± 0.17 |
+| | CopulaCPTS | 0.833 ± 0.0086 | 1.12e+02 ± 2.1e+01 | 0.221 ± 0.063 | 0.878 ± 0.043 | 0.802 ± 0.012 | 10.9 ± 0.21 |
+| | DR-CP | 0.799 ± 0.0048 | 2.33e+02 ± 2.2e+01 | 0.429 ± 0.044 | 2.72 ± 0.16 | 0.691 ± 0.018 | 0.560 ± 0.025 |
+| | C-HDR | 0.806 ± 0.0055 | 42.0 ± 7.9 | 0.159 ± 0.024 | 0.102 ± 0.017 | 0.796 ± 0.0065 | 9.42 ± 0.18 |
+| | PCP | 0.798 ± 0.0051 | 1.05e+02 ± 1.1e+01 | 0.581 ± 0.045 | 5.28 ± 0.23 | 0.621 ± 0.016 | 6.27 ± 0.39 |
+| | HD-PCP | 0.799 ± 0.0049 | 94.4 ± 9.4 | 0.504 ± 0.047 | 4.78 ± 0.23 | 0.671 ± 0.011 | 7.11 ± 0.42 |
+| | STDQR | 0.801 ± 0.0047 | 99.4 ± 1.0e+01 | 0.540 ± 0.052 | 4.86 ± 0.17 | 0.620 ± 0.016 | 8.15 ± 0.29 |
+| | C-PCP | 0.809 ± 0.0038 | 26.0 ± 3.1 | 0.105 ± 0.020 | 0.0896 ± 0.014 | 0.789 ± 0.0066 | 14.3 ± 0.44 |
+| | L-CP | 0.796 ± 0.0035 | 72.0 ± 1.0e+01 | 0.166 ± 0.033 | 0.0873 ± 0.018 | 0.786 ± 0.0063 | 0.0987 ± 0.0059 |
+| rf2 | M-CP | 0.797 ± 0.0046 | 0.00547 ± 0.0027 | 0.202 ± 0.030 | 0.968 ± 0.15 | 0.667 ± 0.018 | 20.7 ± 2.4 |
+| | CopulaCPTS | 0.785 ± 0.010 | 0.00555 ± 0.0033 | 0.299 ± 0.045 | 1.18 ± 0.17 | 0.635 ± 0.019 | 29.6 ± 4.0 |
+| | DR-CP | 0.799 ± 0.0028 | 0.00215 ± 0.0010 | 0.949 ± 0.21 | 5.42 ± 0.70 | 0.549 ± 0.038 | 0.344 ± 0.015 |
+| | C-HDR | 0.801 ± 0.0033 | 0.000690 ± 0.0032 | 0.111 ± 0.036 | 0.230 ± 0.047 | 0.732 ± 0.018 | 21.5 ± 2.4 |
+| | PCP | 0.801 ± 0.0022 | 0.00700 ± 0.0036 | 0.863 ± 0.20 | 5.95 ± 0.48 | 0.538 ± 0.030 | 17.8 ± 2.4 |
+| | HD-PCP | 0.800 ± 0.0024 | 0.00617 ± 0.0030 | 0.776 ± 0.19 | 5.58 ± 0.49 | 0.563 ± 0.029 | 18.3 ± 2.4 |
+| | STDQR | 0.800 ± 0.0032 | 0.00624 ± 0.0031 | 0.788 ± 0.19 | 5.67 ± 0.50 | 0.566 ± 0.025 | 25.7 ± 4.1 |
+| | C-PCP | 0.802 ± 0.0051 | 0.00262 ± 0.0012 | 0.0925 ± 0.016 | 0.169 ± 0.027 | 0.732 ± 0.017 | 38.6 ± 4.8 |
+| | L-CP | 0.800 ± 0.0026 | 0.00104 ± 0.00048 | 0.107 ± 0.032 | 0.236 ± 0.042 | 0.730 ± 0.0093 | 0.0960 ± 0.0053 |
+| rf1 | M-CP | 0.797 ± 0.0046 | 0.00547 ± 0.0027 | 0.202 ± 0.030 | 0.968 ± 0.15 | 0.667 ± 0.018 | 20.9 ± 2.5 |
+| | CopulaCPTS | 0.785 ± 0.010 | 0.00555 ± 0.0033 | 0.299 ± 0.045 | 1.18 ± 0.17 | 0.635 ± 0.019 | 29.8 ± 4.1 |
+| | DR-CP | 0.799 ± 0.0028 | 0.00215 ± 0.0010 | 0.949 ± 0.21 | 5.42 ± 0.70 | 0.549 ± 0.038 | 0.335 ± 0.016 |
+| | C-HDR | 0.801 ± 0.0033 | 0.000690 ± 0.0032 | 0.111 ± 0.036 | 0.230 ± 0.047 | 0.732 ± 0.018 | 21.7 ± 2.5 |
+| | PCP | 0.801 ± 0.0022 | 0.00700 ± 0.0036 | 0.863 ± 0.20 | 5.95 ± 0.48 | 0.538 ± 0.030 | 17.9 ± 2.4 |
+| | HD-PCP | 0.800 ± 0.0024 | 0.00617 ± 0.0030 | 0.776 ± 0.19 | 5.58 ± 0.49 | 0.563 ± 0.029 | 18.4 ± 2.4 |
+| | STDQR | 0.800 ± 0.0032 | 0.00624 ± 0.0031 | 0.788 ± 0.19 | 5.67 ± 0.50 | 0.566 ± 0.025 | 25.7 ± 4.1 |
+| | C-PCP | 0.802 ± 0.0051 | 0.00262 ± 0.0012 | 0.0925 ± 0.016 | 0.169 ± 0.027 | 0.732 ± 0.017 | 38.8 ± 4.9 |
+| | L-CP | 0.800 ± 0.0026 | 0.00104 ± 0.00048 | 0.107 ± 0.032 | 0.236 ± 0.042 | 0.730 ± 0.0093 | 0.0976 ± 0.0057 |
+| scm1d | M-CP | 0.796 ± 0.0027 | 0.528 ± 0.046 | 1.02 ± 0.060 | 2.42 ± 0.094 | 0.636 ± 0.017 | 85.6 ± 2.4e+01 |
+| | CopulaCPTS | 0.732 ± 0.011 | 0.323 ± 0.050 | 1.72 ± 0.20 | 3.49 ± 0.27 | 0.582 ± 0.017 | 87.8 ± 2.4e+01 |
+| | DR-CP | 0.793 ± 0.0036 | 0.867 ± 0.078 | 1.50 ± 0.087 | 5.17 ± 0.20 | 0.559 ± 0.0097 | 0.584 ± 0.025 |
+| | C-HDR | 0.812 ± 0.0046 | 0.239 ± 0.026 | 0.452 ± 0.062 | 0.114 ± 0.015 | 0.761 ± 0.010 | 87.0 ± 2.4e+01 |
+| | PCP | 0.795 ± 0.0054 | 0.698 ± 0.065 | 1.77 ± 0.12 | 8.11 ± 0.23 | 0.516 ± 0.013 | 5.53 ± 0.34 |
+| | HD-PCP | 0.795 ± 0.0053 | 0.684 ± 0.062 | 1.75 ± 0.11 | 7.96 ± 0.22 | 0.530 ± 0.017 | 6.41 ± 0.38 |
+| | STDQR | 0.795 ± 0.0064 | 0.671 ± 0.069 | 1.78 ± 0.13 | 8.07 ± 0.23 | 0.502 ± 0.017 | 6.87 ± 0.23 |
+| | C-PCP | 0.803 ± 0.0053 | 0.216 ± 0.024 | 0.456 ± 0.066 | 0.154 ± 0.028 | 0.751 ± 0.0044 | 91.1 ± 2.4e+01 |
+| | L-CP | 0.799 ± 0.0045 | 0.197 ± 0.020 | 0.463 ± 0.059 | 0.108 ± 0.017 | 0.731 ± 0.014 | 0.102 ± 0.0056 |
+| meps_21 | M-CP | 0.800 ± 0.0051 | 0.185 ± 0.013 | 0.926 ± 0.096 | 0.775 ± 0.099 | 0.701 ± 0.010 | 1.35e+02 ± 1.7e+01 |
+| | CopulaCPTS | 0.778 ± 0.0064 | 0.171 ± 0.014 | 0.957 ± 0.13 | 0.693 ± 0.099 | 0.684 ± 0.011 | 1.61e+02 ± 1.9e+01 |
+| | DR-CP | 0.803 ± 0.0023 | 0.227 ± 0.013 | 3.75 ± 0.16 | 4.38 ± 0.52 | 0.531 ± 0.012 | 0.228 ± 0.011 |
+| | C-HDR | 0.807 ± 0.0046 | 0.132 ± 0.024 | 0.437 ± 0.045 | 0.260 ± 0.041 | 0.745 ± 0.013 | 1.35e+02 ± 1.7e+01 |
+| | PCP | 0.801 ± 0.0031 | 0.359 ± 0.021 | 3.17 ± 0.13 | 3.75 ± 0.44 | 0.550 ± 0.0078 | 78.4 ± 8.3 |
+| | HD-PCP | 0.802 ± 0.0024 | 0.246 ± 0.015 | 2.09 ± 0.15 | 2.18 ± 0.28 | 0.601 ± 0.010 | 78.7 ± 8.3 |
+| | STDQR | 0.802 ± 0.0022 | 0.283 ± 0.015 | 2.60 ± 0.12 | 2.97 ± 0.36 | 0.582 ± 0.011 | 96.0 ± 9.9 |
+| | C-PCP | 0.805 ± 0.0026 | 0.220 ± 0.021 | 0.165 ± 0.044 | 0.0851 ± 0.025 | 0.775 ± 0.0065 | 2.13e+02 ± 2.3e+01 |
+| | L-CP | 0.801 ± 0.0034 | 0.244 ± 0.052 | 0.770 ± 0.13 | 0.422 ± 0.11 | 0.685 ± 0.026 | 0.125 ± 0.0073 |
+| meps_19 | M-CP | 0.803 ± 0.0027 | 0.214 ± 0.022 | 0.702 ± 0.049 | 0.622 ± 0.086 | 0.709 ± 0.0095 | 1.44e+02 ± 2.3e+01 |
+| | CopulaCPTS | 0.804 ± 0.022 | 0.595 ± 0.42 | 1.13 ± 0.26 | 0.926 ± 0.29 | 0.721 ± 0.030 | 1.77e+02 ± 2.8e+01 |
+| | DR-CP | 0.795 ± 0.0028 | 0.175 ± 0.011 | 3.91 ± 0.18 | 3.98 ± 0.73 | 0.501 ± 0.013 | 0.224 ± 0.011 |
+| | C-HDR | 0.807 ± 0.0039 | 0.119 ± 0.019 | 0.380 ± 0.036 | 0.245 ± 0.039 | 0.753 ± 0.013 | 1.44e+02 ± 2.3e+01 |
+| | PCP | 0.794 ± 0.0033 | 0.396 ± 0.059 | 2.95 ± 0.23 | 3.51 ± 0.53 | 0.542 ± 0.013 | 99.9 ± 1.7e+01 |
+| | HD-PCP | 0.796 ± 0.0032 | 0.266 ± 0.033 | 1.98 ± 0.14 | 2.05 ± 0.35 | 0.583 ± 0.0090 | 1.00e+02 ± 1.7e+01 |
+| | STDQR | 0.791 ± 0.0032 | 0.307 ± 0.043 | 2.63 ± 0.23 | 2.95 ± 0.49 | 0.557 ± 0.014 | 1.17e+02 ± 1.8e+01 |
+| | C-PCP | 0.810 ± 0.0021 | 0.238 ± 0.026 | 0.128 ± 0.016 | 0.0757 ± 0.024 | 0.797 ± 0.0088 | 2.44e+02 ± 3.9e+01 |
+| | L-CP | 0.803 ± 0.0033 | 0.232 ± 0.043 | 0.679 ± 0.13 | 0.415 ± 0.13 | 0.702 ± 0.022 | 0.123 ± 0.0069 |
+
+Table 9: Full results obtained with the setup described in Section 6 (Part 2).
+
+| Dataset | Method | MC | Median Size | CEC-X (×100) | CEC-Z (×100) | WSC | Test time |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| meps_20 | M-CP | 0.806 ± 0.00042 | 0.371 ± 0.061 | 0.868 ± 0.10 | 0.455 ± 0.12 | 0.702 ± 0.014 | 1.93e+02 ± 2.1e+01 |
+| meps_20 | CopulaCPTS | 0.794 ± 0.00091 | 0.362 ± 0.059 | 0.963 ± 0.12 | 0.497 ± 0.12 | 0.692 ± 0.016 | 2.30e+02 ± 2.4e+01 |
+| meps_20 | DR-CP | 0.805 ± 0.00036 | 0.223 ± 0.020 | 3.52 ± 0.11 | 2.79 ± 0.76 | 0.530 ± 0.0098 | 0.227 ± 0.010 |
+| meps_20 | C-HDR | 0.805 ± 0.00044 | 0.114 ± 0.012 | 0.439 ± 0.10 | 0.122 ± 0.036 | 0.745 ± 0.010 | 1.94e+02 ± 2.1e+01 |
+| meps_20 | PCP | 0.801 ± 0.00036 | 0.535 ± 0.050 | 2.83 ± 0.091 | 2.42 ± 0.67 | 0.544 ± 0.0088 | 1.20e+02 ± 1.4e+01 |
+| meps_20 | HD-PCP | 0.804 ± 0.00039 | 0.436 ± 0.066 | 1.89 ± 0.13 | 1.34 ± 0.37 | 0.622 ± 0.012 | 1.20e+02 ± 1.4e+01 |
+| meps_20 | STDQR | 0.803 ± 0.00048 | 0.472 ± 0.052 | 2.45 ± 0.15 | 1.84 ± 0.50 | 0.575 ± 0.016 | 1.40e+02 ± 1.6e+01 |
+| meps_20 | C-PCP | 0.806 ± 0.00041 | 0.341 ± 0.039 | 0.186 ± 0.061 | 0.0484 ± 0.010 | 0.792 ± 0.013 | 3.13e+02 ± 3.1e+01 |
+| meps_20 | L-CP | 0.799 ± 0.00033 | 0.280 ± 0.028 | 0.662 ± 0.073 | 0.259 ± 0.081 | 0.703 ± 0.014 | 0.127 ± 0.0062 |
+| house | M-CP | 0.801 ± 0.00023 | 1.17 ± 0.023 | 0.254 ± 0.023 | 0.190 ± 0.019 | 0.730 ± 0.0098 | 56.0 ± 3.8e+01 |
+| house | CopulaCPTS | 0.812 ± 0.00082 | 1.22 ± 0.043 | 0.316 ± 0.035 | 0.276 ± 0.027 | 0.750 ± 0.012 | 60.7 ± 3.8e+01 |
+| house | DR-CP | 0.801 ± 0.00041 | 0.664 ± 0.021 | 0.895 ± 0.045 | 1.20 ± 0.073 | 0.627 ± 0.011 | 0.283 ± 0.011 |
+| house | C-HDR | 0.807 ± 0.00039 | 0.651 ± 0.016 | 0.388 ± 0.026 | 0.114 ± 0.013 | 0.709 ± 0.010 | 56.6 ± 3.8e+01 |
+| house | PCP | 0.801 ± 0.00026 | 0.882 ± 0.023 | 0.753 ± 0.030 | 1.14 ± 0.038 | 0.643 ± 0.0076 | 17.6 ± 0.98 |
+| house | HD-PCP | 0.803 ± 0.00034 | 0.680 ± 0.018 | 0.694 ± 0.033 | 0.789 ± 0.035 | 0.649 ± 0.0089 | 18.0 ± 0.99 |
+| house | STDQR | 0.801 ± 0.00042 | 0.799 ± 0.023 | 0.670 ± 0.022 | 0.788 ± 0.038 | 0.649 ± 0.0077 | 19.5 ± 0.88 |
+| house | C-PCP | 0.809 ± 0.00030 | 0.858 ± 0.018 | 0.275 ± 0.026 | 0.0831 ± 0.011 | 0.729 ± 0.0091 | 73.7 ± 3.8e+01 |
+| house | L-CP | 0.802 ± 0.00035 | 1.19 ± 0.017 | 0.174 ± 0.020 | 0.0542 ± 0.0079 | 0.756 ± 0.0090 | 0.146 ± 0.0067 |
+| bio | M-CP | 0.809 ± 0.00021 | 0.303 ± 0.0066 | 0.137 ± 0.0093 | 0.253 ± 0.013 | 0.764 ± 0.0055 | 1.27e+02 ± 6.0 |
+| bio | CopulaCPTS | 0.800 ± 0.00045 | 0.296 ± 0.0092 | 0.137 ± 0.0083 | 0.260 ± 0.015 | 0.751 ± 0.0068 | 1.45e+02 ± 7.1 |
+| bio | DR-CP | 0.805 ± 0.00020 | 0.257 ± 0.0067 | 0.507 ± 0.028 | 1.14 ± 0.034 | 0.646 ± 0.0066 | 0.511 ± 0.020 |
+| bio | C-HDR | 0.808 ± 0.00015 | 0.218 ± 0.0053 | 0.0372 ± 0.0073 | 0.0360 ± 0.0056 | 0.794 ± 0.0054 | 1.29e+02 ± 6.0 |
+| bio | PCP | 0.802 ± 0.00021 | 0.343 ± 0.0076 | 0.567 ± 0.029 | 1.32 ± 0.023 | 0.628 ± 0.0052 | 1.27e+02 ± 6.1 |
+| bio | HD-PCP | 0.804 ± 0.00016 | 0.259 ± 0.0065 | 0.352 ± 0.020 | 0.803 ± 0.020 | 0.673 ± 0.0043 | 1.27e+02 ± 6.1 |
+| bio | STDQR | 0.803 ± 0.00024 | 0.269 ± 0.0067 | 0.389 ± 0.019 | 0.912 ± 0.036 | 0.667 ± 0.0058 | 86.6 ± 6.5 |
+| bio | C-PCP | 0.810 ± 0.00029 | 0.302 ± 0.0074 | 0.0369 ± 0.0063 | 0.0404 ± 0.0069 | 0.798 ± 0.0052 | 2.54e+02 ± 1.2e+01 |
+| bio | L-CP | 0.805 ± 0.000093 | 0.267 ± 0.0061 | 0.0203 ± 0.0045 | 0.0198 ± 0.0021 | 0.789 ± 0.0039 | 0.251 ± 0.013 |
+| blog_data | M-CP | 0.802 ± 0.00049 | 0.170 ± 0.039 | 0.292 ± 0.051 | 0.153 ± 0.072 | 0.736 ± 0.012 | 5.06e+03 ± 7.0e+02 |
+| blog_data | CopulaCPTS | 0.813 ± 0.00078 | 0.0948 ± 0.015 | 0.313 ± 0.050 | 0.231 ± 0.063 | 0.742 ± 0.010 | 5.13e+03 ± 7.1e+02 |
+| blog_data | DR-CP | 0.808 ± 0.00014 | 0.0374 ± 0.0056 | 1.06 ± 0.098 | 1.50 ± 0.43 | 0.644 ± 0.0059 | 0.515 ± 0.026 |
+| blog_data | C-HDR | 0.809 ± 0.00030 | 0.0155 ± 0.0031 | 0.237 ± 0.068 | 0.0611 ± 0.019 | 0.751 ± 0.013 | 5.06e+03 ± 7.0e+02 |
+| blog_data | PCP | 0.801 ± 0.00033 | 0.141 ± 0.023 | 0.938 ± 0.081 | 1.52 ± 0.36 | 0.643 ± 0.0052 | 5.74e+02 ± 7.9e+01 |
+| blog_data | HD-PCP | 0.803 ± 0.00038 | 0.125 ± 0.023 | 0.794 ± 0.075 | 0.945 ± 0.22 | 0.660 ± 0.0080 | 5.75e+02 ± 7.9e+01 |
+| blog_data | STDQR | 0.810 ± 0.00072 | 0.163 ± 0.036 | 0.805 ± 0.074 | 0.881 ± 0.18 | 0.678 ± 0.012 | 5.84e+02 ± 8.0e+01 |
+| blog_data | C-PCP | 0.804 ± 0.00045 | 0.106 ± 0.021 | 0.163 ± 0.049 | 0.113 ± 0.056 | 0.764 ± 0.012 | 5.63e+03 ± 7.3e+02 |
+| blog_data | L-CP | 0.801 ± 0.00023 | 0.0676 ± 0.017 | 0.327 ± 0.088 | 0.0624 ± 0.023 | 0.722 ± 0.012 | 0.258 ± 0.012 |
+| calcofi | M-CP | 0.803 ± 0.00023 | 2.13 ± 0.024 | 0.433 ± 0.015 | 0.446 ± 0.016 | 0.734 ± 0.0069 | 26.4 ± 1.1 |
+| calcofi | CopulaCPTS | 0.815 ± 0.00075 | 2.38 ± 0.12 | 0.480 ± 0.048 | 0.492 ± 0.048 | 0.746 ± 0.0096 | 29.6 ± 1.2 |
+| calcofi | DR-CP | 0.805 ± 0.00027 | 1.67 ± 0.022 | 1.44 ± 0.040 | 1.56 ± 0.039 | 0.654 ± 0.0061 | 0.529 ± 0.023 |
+| calcofi | C-HDR | 0.805 ± 0.00018 | 1.99 ± 0.026 | 0.0294 ± 0.012 | 0.0187 ± 0.0037 | 0.794 ± 0.0053 | 27.7 ± 1.2 |
+| calcofi | PCP | 0.802 ± 0.00026 | 2.33 ± 0.029 | 1.64 ± 0.042 | 1.79 ± 0.041 | 0.638 ± 0.0034 | 26.5 ± 1.2 |
+| calcofi | HD-PCP | 0.802 ± 0.00033 | 1.89 ± 0.029 | 0.980 ± 0.033 | 1.05 ± 0.030 | 0.683 ± 0.0050 | 27.3 ± 1.2 |
+| calcofi | STDQR | 0.799 ± 0.00034 | 1.97 ± 0.021 | 1.13 ± 0.031 | 1.23 ± 0.033 | 0.676 ± 0.0080 | 26.4 ± 0.99 |
+| calcofi | C-PCP | 0.809 ± 0.00030 | 2.81 ± 0.042 | 0.0332 ± 0.0093 | 0.0253 ± 0.0048 | 0.806 ± 0.0050 | 52.9 ± 2.3 |
+| calcofi | L-CP | 0.800 ± 0.00020 | 2.70 ± 0.024 | 0.0332 ± 0.019 | 0.0179 ± 0.0040 | 0.792 ± 0.0035 | 0.264 ± 0.012 |
+| taxi | M-CP | 0.802 ± 0.00032 | 4.26 ± 0.068 | 0.0585 ± 0.0034 | 0.0421 ± 0.0058 | 0.784 ± 0.0052 | 60.6 ± 8.6 |
+| taxi | CopulaCPTS | 0.822 ± 0.00040 | 4.72 ± 0.11 | 0.114 ± 0.018 | 0.0989 ± 0.019 | 0.799 ± 0.0050 | 68.4 ± 9.7 |
+| taxi | DR-CP | 0.805 ± 0.00024 | 2.62 ± 0.029 | 0.383 ± 0.016 | 0.451 ± 0.024 | 0.707 ± 0.0048 | 0.539 ± 0.031 |
+| taxi | C-HDR | 0.809 ± 0.00030 | 2.62 ± 0.033 | 0.0388 ± 0.0049 | 0.0441 ± 0.0053 | 0.793 ± 0.0040 | 61.9 ± 8.6 |
+| taxi | PCP | 0.804 ± 0.00016 | 4.03 ± 0.040 | 0.341 ± 0.022 | 0.399 ± 0.025 | 0.715 ± 0.0042 | 60.2 ± 8.5 |
+| taxi | HD-PCP | 0.805 ± 0.00018 | 3.18 ± 0.030 | 0.194 ± 0.012 | 0.219 ± 0.012 | 0.750 ± 0.0055 | 61.1 ± 8.5 |
+| taxi | STDQR | 0.805 ± 0.00035 | 3.63 ± 0.058 | 0.203 ± 0.011 | 0.224 ± 0.013 | 0.748 ± 0.0080 | 32.4 ± 1.1 |
+| taxi | C-PCP | 0.807 ± 0.00026 | 4.02 ± 0.064 | 0.0307 ± 0.0053 | 0.0338 ± 0.0048 | 0.802 ± 0.0050 | 1.21e+02 ± 1.7e+01 |
+| taxi | L-CP | 0.805 ± 0.00033 | 4.94 ± 0.12 | 0.0264 ± 0.0030 | 0.0196 ± 0.0035 | 0.796 ± 0.0046 | 0.243 ± 0.012 |
\ No newline at end of file
diff --git a/aunifiedcomparativestudywithgeneralizedconformityscoresformultioutputconformalregression/images.zip b/aunifiedcomparativestudywithgeneralizedconformityscoresformultioutputconformalregression/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..ec20ac53c09d02a9c35d275d06ee869a1ef9d1b4
--- /dev/null
+++ b/aunifiedcomparativestudywithgeneralizedconformityscoresformultioutputconformalregression/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:41f03afbe953e1be188c4111318476e4fbe3c9632a3e6d104706560be0190bbd
+size 3904950
diff --git a/aunifiedcomparativestudywithgeneralizedconformityscoresformultioutputconformalregression/layout.json b/aunifiedcomparativestudywithgeneralizedconformityscoresformultioutputconformalregression/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..cc1a9e8a84d1d7e6df14095e1cd247ca87085131
--- /dev/null
+++ b/aunifiedcomparativestudywithgeneralizedconformityscoresformultioutputconformalregression/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:800731aa6f476ee5fef2ba4e75e6b1113a63cf73ad24a4dfb2cff622396426e5
+size 1870963
diff --git a/aunifiedframeworkforentropysearchandexpectedimprovementinbayesianoptimization/e643cc3c-a16e-493e-8412-8b50c4785a00_content_list.json b/aunifiedframeworkforentropysearchandexpectedimprovementinbayesianoptimization/e643cc3c-a16e-493e-8412-8b50c4785a00_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..11418d5d73eedff478597b5487483cc3a5bb2b3d
--- /dev/null
+++ b/aunifiedframeworkforentropysearchandexpectedimprovementinbayesianoptimization/e643cc3c-a16e-493e-8412-8b50c4785a00_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:40e9d4e9724c22a3256cef6b4eb5b10d92de9421b26bed2b84765a31c6288e6b
+size 107092
diff --git a/aunifiedframeworkforentropysearchandexpectedimprovementinbayesianoptimization/e643cc3c-a16e-493e-8412-8b50c4785a00_model.json b/aunifiedframeworkforentropysearchandexpectedimprovementinbayesianoptimization/e643cc3c-a16e-493e-8412-8b50c4785a00_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..01a1fb289dabeab3e42ffa8f4f7af545f78b3d5a
--- /dev/null
+++ b/aunifiedframeworkforentropysearchandexpectedimprovementinbayesianoptimization/e643cc3c-a16e-493e-8412-8b50c4785a00_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dd3ed3a30cdbd263aebd68f90077c9e2f80160d5b8f95cd4916fd219f23e020a
+size 130510
diff --git a/aunifiedframeworkforentropysearchandexpectedimprovementinbayesianoptimization/e643cc3c-a16e-493e-8412-8b50c4785a00_origin.pdf b/aunifiedframeworkforentropysearchandexpectedimprovementinbayesianoptimization/e643cc3c-a16e-493e-8412-8b50c4785a00_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..ea71e24ae01c804b1dbb05eedfadd54c54974d3b
--- /dev/null
+++ b/aunifiedframeworkforentropysearchandexpectedimprovementinbayesianoptimization/e643cc3c-a16e-493e-8412-8b50c4785a00_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:27822a27b1387d5f8686cc56563606c45644d7757ecaae02b70da99093c256e4
+size 1504268
diff --git a/aunifiedframeworkforentropysearchandexpectedimprovementinbayesianoptimization/full.md b/aunifiedframeworkforentropysearchandexpectedimprovementinbayesianoptimization/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..e9f403f8ee83a7dbc4752e704e1ded17da3f25c1
--- /dev/null
+++ b/aunifiedframeworkforentropysearchandexpectedimprovementinbayesianoptimization/full.md
@@ -0,0 +1,509 @@
+# A Unified Framework for Entropy Search and Expected Improvement in Bayesian Optimization
+
+Nuojin Cheng $^{*1}$ Leonard Papenmeier $^{*2}$ Stephen Becker $^{1}$ Luigi Nardi $^{2,3}$
+
+# Abstract
+
+Bayesian optimization is a widely used method for optimizing expensive black-box functions, with Expected Improvement (EI) being one of the most commonly used acquisition functions. In contrast, information-theoretic acquisition functions aim to reduce uncertainty about the function's optimum and are often considered fundamentally distinct from EI. In this work, we challenge this prevailing perspective by introducing a unified theoretical framework, Variational Entropy Search (VES), which reveals that EI and information-theoretic acquisition functions are more closely related than previously recognized. We demonstrate that EI can be interpreted as a variational inference approximation of the popular information-theoretic acquisition function Max-value Entropy Search (MES). Building on this insight, we propose VES-Gamma, a novel acquisition function that balances the strengths of EI and MES. Extensive empirical evaluations across both low- and high-dimensional synthetic and real-world benchmarks demonstrate that VES-Gamma is competitive with state-of-the-art acquisition functions and in many cases outperforms EI and MES.
+
+# 1. Introduction
+
+Bayesian optimization (BO) is a widely used technique for maximizing black-box functions. Given a function $f: \mathcal{X} \to \mathbb{R}$ , BO iteratively refines a probabilistic surrogate of $f$ , typically a Gaussian process (GP), and selects the next evaluation point accordingly. At each iteration, the next sampling point is determined by maximizing an acquisition function (AF) $\alpha : \mathcal{X} \to \mathbb{R}$ . An effective AF must balance the exploration-exploitation trade-off, where exploitation prioritizes sampling points predicted by the surrogate to yield high objective values, while exploration targets regions with the potential to uncover even better values.
+
+*Equal contribution $^{1}$ Department of Applied Mathematics, University of Colorado Boulder $^{2}$ Department of Computer Science, Lund University $^{3}$ DBtune. Correspondence to: Nuojin Cheng.
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+Expected Improvement (EI) (Mockus, 1998) is one of the most widely used AFs, valued for its simple formulation, computational efficiency, and strong empirical performance. The core idea behind EI is to maximize the expected improvement over the current best observed value, which typically requires a noise-free assumption. More recently, Villemonteix et al. (2009) and Hennig & Schuler (2012) introduced information-theoretic AFs, which represent a paradigm shift in Bayesian optimization. Unlike EI, which focuses on directly maximizing potential improvement, information-theoretic AFs aim to reduce uncertainty about the function $f$ 's optimal position and/or value, often through entropy-based measures. Due to their fundamentally different underlying philosophies and selection criteria, EI and information-theoretic AFs are widely regarded as distinct methodologies within the BO community (Hennig et al., 2022).
+
+Despite their apparent differences, we argue that EI and information-theoretic AFs share deeper theoretical connections than previously recognized. Understanding this relationship is crucial, as it provides novel insights into designing new acquisition functions. By unifying the perspectives of both sides, we introduce VES-Gamma, a new AF that effectively balances their strengths, resulting in a robust AF that adapts well to diverse optimization problems. VES-Gamma inherits the performance of EI while incorporating information-theoretic considerations.
+
+In summary, we make the following key contributions:
+
+1. We introduce the Variational Entropy Search (VES) framework which shows that EI can be interpreted as a special case of the popular information-theoretic acquisition function Max-value Entropy Search (MES). This unified theoretical perspective reveals that these two types of AFs are more closely related than previously recognized.
+
+2. We propose VES-Gamma as an intermediary between EI and MES, incorporating information-theoretic principles while maintaining EI's strength in performance.
+3. We provide an extensive evaluation across a diverse set of low- and high-dimensional synthetic, GP-sample, and real-world benchmarks, demonstrating that VES-Gamma consistently performs competitively and, in many cases, outperforms both EI and MES.
+
+# 2. Background and Related Work
+
+# 2.1. Gaussian Processes
+
+A Gaussian process is a stochastic process that models an unknown function. It is characterized by the property that any finite set of function evaluations follows a multivariate Gaussian distribution. Assuming that $f$ has a zero mean, a Gaussian process is uniquely determined by the current observations $\mathcal{D}_t \coloneqq \{(x_i, y_{x_i})\}_{i=1}^t$ and the kernel function $\kappa(\boldsymbol{x}, \boldsymbol{x}')$ . Given these, at stage $t$ , the predicted mean of $y_{\boldsymbol{x}}$ at a new point $\boldsymbol{x}$ is $\mu_t(\boldsymbol{x}) = \kappa_t(\boldsymbol{x})^T (\boldsymbol{K}_t)^{-1} \boldsymbol{y}_t$ and the predicted covariance between points $\boldsymbol{x}$ and $\boldsymbol{x}'$ is $\mathrm{Cov}_t(\boldsymbol{x}, \boldsymbol{x}') = \kappa(\boldsymbol{x}, \boldsymbol{x}') - \kappa_t(\boldsymbol{x})^T (\boldsymbol{K}_t)^{-1} \kappa_t(\boldsymbol{x}')$ , where $[\kappa_t(\boldsymbol{x})]_i = \kappa(\boldsymbol{x}_i, \boldsymbol{x})$ , $[\boldsymbol{y}_t]_i = y_{\boldsymbol{x}_i}$ , and $[\boldsymbol{K}_t]_{i,j} = \kappa(\boldsymbol{x}_i, \boldsymbol{x}_j)$ ; see Rasmussen et al. (2006) for more details.
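These posterior formulas can be sketched directly in code. The following is a minimal NumPy illustration (our sketch, not the paper's implementation), assuming a squared-exponential kernel and noiseless observations:

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """Squared-exponential kernel k(x, x') = exp(-||x - x'||^2 / (2 l^2))."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq / lengthscale**2)

def gp_posterior(X_train, y_train, X_test, lengthscale=1.0, jitter=1e-10):
    """Posterior mean and covariance of a zero-mean noiseless GP."""
    K = rbf_kernel(X_train, X_train, lengthscale) + jitter * np.eye(len(X_train))
    k_star = rbf_kernel(X_train, X_test, lengthscale)   # kappa_t(x) for each test x
    k_ss = rbf_kernel(X_test, X_test, lengthscale)
    sol = np.linalg.solve(K, k_star)                    # K_t^{-1} kappa_t(x)
    mu = sol.T @ y_train                                # mu_t(x)
    cov = k_ss - k_star.T @ sol                         # Cov_t(x, x')
    return mu, cov
```

At the training inputs themselves, the posterior mean interpolates the observations and the posterior variance collapses to (numerically) zero, as expected for a noiseless GP.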
+
+# 2.2. Acquisition Functions
+
+Various acquisition functions (AFs) have been proposed to balance exploration and exploitation in optimization tasks, each tailored to different problem characteristics and assumptions. These include Probability of Improvement (PI), Expected Improvement (EI) (Mockus, 1998; Jones et al., 1998), Upper Confidence Bound (UCB) (Srinivas et al., 2010), Knowledge Gradient (KG) (Frazier et al., 2008), and information-theoretic AFs (Villemonteix et al., 2009; Hennig & Schuler, 2012; Hernández-Lobato et al., 2014; Wang & Jegelka, 2017; Hvarfner et al., 2022; Tu et al., 2022). Below, we discuss two types of acquisition functions relevant to this study.
+
+Expected Improvement. Expected Improvement (EI) is one of the most commonly used acquisition functions and is formulated as follows:
+
+$$
+\alpha_ {\mathrm {E I}} (\boldsymbol {x}) = \mathbb {E} _ {p \left(y _ {\boldsymbol {x}} \mid \mathcal {D} _ {t}\right)} \left[ \max \left\{y _ {\boldsymbol {x}}, y _ {t} ^ {*} \right\} \right] - y _ {t} ^ {*}, \tag {1}
+$$
+
+where $y_{t}^{*}$ is the maximum observed value in $\mathcal{D}_t$ , and $\mathbb{E}_{p(\cdot)}$ denotes the expectation with respect to the predictive density $p(\cdot)$ . The $-y_{t}^{*}$ term at the end can be dropped since it is constant with respect to $\pmb{x}$ .
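Under a Gaussian predictive distribution $y_{\boldsymbol{x}} \sim \mathcal{N}(\mu, \sigma^2)$ , the expectation in Eq. (1) has the well-known closed form $\sigma\,(u\,\Phi(u) + \phi(u))$ with $u = (\mu - y_t^*)/\sigma$ . A hedged sketch (the helper name is ours, not from the paper):

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, y_best):
    """Closed-form EI for a Gaussian predictive y_x ~ N(mu, sigma^2).

    Computes E[max{y_x, y_best}] - y_best = sigma * (u * Phi(u) + phi(u)),
    where u = (mu - y_best) / sigma and Phi, phi are the standard normal
    CDF and PDF.
    """
    mu = np.asarray(mu, dtype=float)
    sigma = np.maximum(np.asarray(sigma, dtype=float), 1e-12)  # guard sigma -> 0
    u = (mu - y_best) / sigma
    return sigma * (u * norm.cdf(u) + norm.pdf(u))
```

A quick Monte Carlo check: averaging $\max\{y, y_t^*\} - y_t^*$ over Gaussian samples reproduces the closed form.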
+
+Information-Theoretic AFs. Information-theoretic AFs form a family of methods designed to select $x$ such
+
+that its evaluation reduces uncertainty regarding the optimal points of the objective function. This uncertainty is quantified using differential entropy, defined as $\mathbb{H}[y] := \mathbb{E}_{p(y)}[-\log p(y)]$ . Similarly, the conditional entropy is expressed as $\mathbb{H}[y|x] := \mathbb{H}[x,y] - \mathbb{H}[x]$ .
+
+The first information-theoretic AF for BO is Entropy Search (ES) (Hennig & Schuler, 2012), which is formulated as:
+
+$$
+\alpha_ {\mathrm {E S}} (\boldsymbol {x}) = \mathbb {H} [ \boldsymbol {x} ^ {*} \mid \mathcal {D} _ {t} ] - \mathbb {E} _ {p (y _ {\boldsymbol {x}} | \mathcal {D} _ {t})} \left[ \mathbb {H} [ \boldsymbol {x} ^ {*} \mid \mathcal {D} _ {t}, y _ {\boldsymbol {x}} ] \right]. \tag {2}
+$$
+
+Here, the random variable $\boldsymbol{x}^*$ represents the location of the maximum.
+
+Predictive Entropy Search (PES) (Hernández-Lobato et al., 2014) offers a reformulation of ES that is computationally more efficient:
+
+$$
+\alpha_ {\mathrm {P E S}} (\boldsymbol {x}) = \mathbb {H} [ y _ {\boldsymbol {x}} \mid \mathcal {D} _ {t} ] - \mathbb {E} _ {p \left(\boldsymbol {x} ^ {*} \mid \mathcal {D} _ {t}\right)} \left[ \mathbb {H} \left[ y _ {\boldsymbol {x}} \mid \mathcal {D} _ {t}, \boldsymbol {x} ^ {*} \right] \right]. \tag {3}
+$$
+
+Since directly estimating the entropy of $\boldsymbol{x}^{*}$ is expensive, Max-value Entropy Search (MES) (Wang & Jegelka, 2017), following the PES formulation, introduced an alternative approach that instead reduces the differential entropy of the one-dimensional maximum value $y^{*}$ :
+
+$$
+\begin{array}{l} \alpha_ {\mathrm {MES}} (\boldsymbol {x}) = \mathbb {H} [ y ^ {*} \mid \mathcal {D} _ {t} ] - \mathbb {E} _ {p (y _ {\boldsymbol {x}} | \mathcal {D} _ {t})} \left[ \mathbb {H} [ y ^ {*} \mid \mathcal {D} _ {t}, y _ {\boldsymbol {x}} ] \right] \\ = \underbrace {\mathbb {H} \left[ y _ {\boldsymbol {x}} \mid \mathcal {D} _ {t} \right]} _ {\text {closed-form}} - \mathbb {E} _ {p \left(y ^ {*} \mid \mathcal {D} _ {t}\right)} \underbrace {\left[ \mathbb {H} \left[ y _ {\boldsymbol {x}} \mid \mathcal {D} _ {t} , y ^ {*} \right] \right]} _ {\text {non-closed-form}}. \tag {4} \\ \end{array}
+$$
+
+Unlike MES and its subsequent extensions (Hvarfner et al., 2022; Takeno et al., 2022) which approximate $p(y_{\mathbf{x}} \mid \mathcal{D}_t, y^*)$ using a truncated Gaussian, we focus on directly estimating $p(y^* \mid \mathcal{D}_t, y_{\mathbf{x}})$ via variational inference.
+
+# 2.3. Related Work
+
+Variational Inference and Evidence Lower Bound. Variational Inference (VI) is a widely used technique in Bayesian modeling to approximate intractable posterior distributions (Paisley et al., 2012; Hoffman et al., 2013; Kingma & Welling, 2014). It relies on maximizing the Evidence Lower Bound (ELBO) to approximate the log-likelihood $\log p(\tilde{\boldsymbol{x}})$ in the presence of latent variables $\boldsymbol{z}$ . The log-likelihood can be decomposed as follows:
+
+$$
+\log p (\tilde {\boldsymbol {x}}) \geq \mathbb {E} _ {q (\boldsymbol {z})} \left[ \log \left(\frac {p (\tilde {\boldsymbol {x}} \mid \boldsymbol {z}) p (\boldsymbol {z})}{q (\boldsymbol {z})}\right) \right], \tag {5}
+$$
+
+where $p(z)$ is a fixed prior distribution, and $q(z)$ is a variational approximation to the true posterior $p(\boldsymbol{z} \mid \tilde{\boldsymbol{x}})$ . The ELBO is formally defined as:
+
+$$
+\operatorname {E L B O} (p (\tilde {\boldsymbol {x}} \mid \boldsymbol {z}); q (\boldsymbol {z})) := \mathbb {E} _ {q (\boldsymbol {z})} \left[ \log \left(\frac {p (\tilde {\boldsymbol {x}} \mid \boldsymbol {z}) p (\boldsymbol {z})}{q (\boldsymbol {z})}\right) \right]. \tag {6}
+$$
+
+
+Figure 1. MES aims to optimize $\pmb{x}$ such that the entropy (averaged over all $y_{x}$ ) of the maximum values $p(y^{*} \mid \mathcal{D}_{t}, y_{x})$ is reduced. The left figure illustrates a noiseless Gaussian process conditioned on the observations $\mathcal{D}_{t}$ with three points (black crosses) and a sample $y_{x}$ at $x = 1$ drawn from $p(y_{x} \mid \mathcal{D}_{t})$ (red star). The mid and right panels illustrate the density $p(y^{*} \mid \mathcal{D}_{t}, y_{x})$ (blue curves). When $p(y^{*} \mid \mathcal{D}_{t}, y_{x})$ is approximated using an exponential distribution (green dashed curve), this leads to the VES-Exp AF that is equivalent to EI. Furthermore, VES-Gamma, which approximates $p(y^{*} \mid \mathcal{D}_{t}, y_{x})$ using a Gamma distribution (red dash-dot curve), leads to a more accurate approximation and a generalized version of EI.
+
+By maximizing the ELBO, VI indirectly maximizes the log-likelihood $\log p(\tilde{\pmb{x}})$ , thereby improving the quality of the posterior approximation. In many applications, such as variational autoencoders (VAEs) (Kingma & Welling, 2014) and variational diffusion (Kingma et al., 2021), both the conditional likelihood $p(\tilde{\pmb{x}}\mid \pmb{z})$ and the variational distribution $q(z)$ are parameterized using neural networks. Since both the distribution under the expectation and the term inside the ELBO are parameterized, one common strategy is to estimate the gradient from finite Monte Carlo samples using the reparameterization trick. We adopt this approach, which enables efficient gradient-based optimization and has been widely applied in the BO community (Wilson et al., 2017).
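As a small illustration of this strategy (our toy example, not from the paper), the sketch below estimates a gradient through an expectation by reparameterizing $z = \mu + \sigma\varepsilon$ with $\varepsilon \sim \mathcal{N}(0,1)$ , for the objective $\mathbb{E}[z^2]$ , whose true gradient in $\mu$ is $2\mu$ ; the function name is invented for illustration:

```python
import numpy as np

def reparam_grad(mu, sigma, n_samples=100000, seed=0):
    """MC estimate of d/dmu E_{z ~ N(mu, sigma^2)}[z^2] via reparameterization.

    Writing z = mu + sigma * eps with eps ~ N(0, 1) moves the parameter out of
    the sampling distribution, so the gradient can be pushed inside the
    expectation and estimated from per-sample gradients.
    """
    eps = np.random.default_rng(seed).standard_normal(n_samples)
    z = mu + sigma * eps
    # per-sample gradient: d(z^2)/dmu = 2 z * dz/dmu, with dz/dmu = 1
    return np.mean(2.0 * z)
```

For $\mu = 1.5$ the estimate converges to the analytic value $2\mu = 3$ as the sample count grows.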
+
+Improving the Expected Improvement. It is widely recognized that EI can be prone to over-exploitation (Qin et al., 2017; Berk et al., 2019; De Ath et al., 2021). To mitigate this issue, Hoffman et al. (2011) and Kandasamy et al. (2020) propose using a portfolio of AFs, which assigns probabilities to different AFs at each step. Snoek et al. (2012) propose a fully Bayesian treatment of EI to improve empirical performance. Another approach is Weighted EI (WEI), which adaptively adjusts the weights of the components within the EI acquisition function (Sobester et al., 2005; Benjamins et al., 2023). Similarly, Qin et al. (2017) propose "weakening" EI with suboptimal points suggested by the AF to mitigate its over-exploitative behavior. However, these methods are primarily based on heuristics. Furthermore, information-theoretic acquisition functions are often excluded from these design enhancements, as they are generally considered distinct from heuristic AFs such as PI, EI, UCB, or KG.
+
+Entropy Approximation in Information-theoretic AFs. Estimating entropy in information-theoretic acquisition functions is computationally expensive and typically requires approximation techniques. Methods such as ES and PES employ sampling-based approaches, including Markov chain Monte Carlo and expectation propagation. In
+
+contrast, MES derives an explicit approximation (Wang & Jegelka, 2017, Eq. 6), which was later interpreted as a variational inference formulation by Takeno et al. (2020). This variational perspective has since been extended to multi-objective optimization (Qing et al., 2023). However, this approximation scheme lacks flexibility in tuning the variational distributions. Furthermore, to the best of our knowledge, most MES-based methods focus on approximating $p(y_{\boldsymbol{x}} \mid y^{*}, \mathcal{D}_{t})$ . An exception is Ma et al. (2023), which approximates $p(y^{*} \mid \mathcal{D}_{t}, y_{\boldsymbol{x}})$ using a Gaussian distribution. While this approach provides computational advantages, the inherent symmetry of the Gaussian distribution does not align with the properties of $y^{*}$ .
+
+# 3. Variational Entropy Search
+
+# 3.1. Entropy Search Lower Bound
+
+The idea behind our Variational Entropy Search (VES) framework is to maximize a variational lower bound of MES with a predetermined family of densities to approximate $p(y^{*} \mid \mathcal{D}_{t}, y_{x})$ . Since we assume noiseless observations, the support is $[\max \{y_{x}, y_{t}^{*}\}, +\infty)$ . VES is illustrated in Figure 1. The lower bound is formalized in Theorem 3.1 and proven in Appendix A.1.
+
+Theorem 3.1. The MES acquisition function in Eq. (4) adheres to the Barber-Agakov (BA) bound (Barber & Agakov, 2004; Poole et al., 2019) and can be bounded from below as follows:
+
+$$
+\begin{array}{l} \alpha_ {M E S} (\boldsymbol {x}) = \mathbb {H} \left[ y ^ {*} \mid \mathcal {D} _ {t} \right] - \mathbb {E} _ {p \left(y _ {\boldsymbol {x}} \mid \mathcal {D} _ {t}\right)} \left[ \mathbb {H} \left[ y ^ {*} \mid \mathcal {D} _ {t}, y _ {\boldsymbol {x}} \right] \right] \\ \geq \mathbb {H} [ y ^ {*} \mid \mathcal {D} _ {t} ] + \mathbb {E} _ {p \left(y ^ {*}, y _ {\boldsymbol {x}} \mid \mathcal {D} _ {t}\right)} \left[ \log q \left(y ^ {*} \mid \mathcal {D} _ {t}, y _ {\boldsymbol {x}}\right) \right], \tag {7} \\ \end{array}
+$$
+
+where $q(y^{*} \mid \mathcal{D}_{t}, y_{\boldsymbol{x}})$ is any chosen density function that is absolutely continuous with respect to $p(y^{*} \mid \mathcal{D}_{t}, y_{\boldsymbol{x}})$ .
+
+Since the first term on the right-hand side of Eq. (7), $\mathbb{H}[y^{*}|\mathcal{D}_t]$ , is independent of both $q$ and $\pmb{x}$ , we can omit it. This leads us to define the remaining term as the Entropy Search
+
+Lower Bound (ESLBO):
+
+$$
+\operatorname {E S L B O} (\boldsymbol {x}; q) := \mathbb {E} _ {p \left(y ^ {*}, y _ {\boldsymbol {x}} \mid \mathcal {D} _ {t}\right)} \left[ \log q \left(y ^ {*} \mid \mathcal {D} _ {t}, y _ {\boldsymbol {x}}\right) \right], \tag {8}
+$$
+
+where $p(y^{*},y_{\mathbf{x}}\mid \mathcal{D}_{t})$ represents a joint density, which can be sampled using Gaussian process path sampling (Hernández-Lobato et al., 2014; Wang & Jegelka, 2017).
+
+To optimize $\alpha_{\mathrm{MES}}(\pmb{x})$ , we adopt the VI approach (Paisley et al., 2012), indirectly maximizing $\alpha_{\mathrm{MES}}(\pmb{x})$ by instead maximizing ESLBO. To ensure computational feasibility, the VI method constrains the density $q$ to a predefined family $\mathcal{Q}$ . When parameterizing $q$ within $\mathcal{Q}$ , the problem becomes tractable by solving for $q$ and $\pmb{x}$ iteratively, as detailed in Algorithm 1.
+
+Notably, this procedure, known as expectation maximization (EM), is analogous to maximizing the ELBO in Eq. (5). We conclude our discussion by summarizing the correspondence between ESLBO and ELBO in Table 1.
+
+# Algorithm 1 VES Framework
+
+Input: Observations $\mathcal{D}_t$ , variational family $\mathcal{Q}$ , number of inner iteration $N$
+
+Output: Next sampling location $\boldsymbol{x}_{t+1}$
+
+1: initialize $\boldsymbol{x}_{t+1}^{(0)}$
+2: for $n = 1:N$ do
+3: $q^{(n)}(y^*)\gets \arg \max_{q\in \mathcal{Q}}\operatorname {ESLBO}(\pmb{x}_{t + 1}^{(n - 1)};q)$
+4: $\pmb{x}_{t+1}^{(n)} \gets \arg \max_{\pmb{x}_{t+1}} \operatorname{ESLBO}(\pmb{x}_{t+1}; q^{(n)})$
+5: end for
+6: return $\pmb{x}_{t+1}^{(N)}$
+
+# 3.2. EI Through the Lens of the VES Framework
+
+In this section, we establish an explicit connection between the VES and EI acquisition functions, allowing us to view EI through the lens of a VI approximation of the information-theoretic MES AF. We define $\mathcal{Q}$ as the set of all exponential density functions, $\mathcal{Q}_{\mathrm{exp}}$ , parameterized by the rate parameter $\lambda > 0$ and with support bounded from below by $\max \{y_x, y_t^*\}$ . The variational density function $q$ is given by
+
+$$
+q \left(y ^ {*} \mid \mathcal {D} _ {t}, y _ {\boldsymbol {x}}; \lambda\right) = \lambda e ^ {- \lambda \left(y ^ {*} - \max \left\{y _ {\boldsymbol {x}}, y _ {t} ^ {*} \right\}\right)} \mathbf {1} _ {y ^ {*} \geq \max \left\{y _ {\boldsymbol {x}}, y _ {t} ^ {*} \right\}}. \tag {9}
+$$
+
+For noiseless observations, the indicator function $\mathbf{1}_{y^* \geq \max\{y_x, y_t^*\}}$ always equals one and can be omitted. Substituting $q$ from Eq. (9) into the ESLBO (Eq. (8)) yields a new $\lambda$ -parameterized AF. Since this AF stems from the exponential distribution, we name it VES-Exp. Theorem 3.2 shows that the next sampling point generated from VES-Exp within Algorithm 1 is the same as for the EI AF; the theorem is proven in Appendix A.2.
+
+Theorem 3.2. When the family $\mathcal{Q}_{\mathrm{exp}}$ is selected as in Eq. (9) and the function is noiseless, ESLBO in Eq. (8) turns into
+
+$$
+\begin{array}{l} \operatorname {ESLBO} (\boldsymbol {x}; \lambda) = \log \lambda - \lambda \underbrace {\mathbb {E} _ {p \left(y ^ {*} \mid \mathcal {D} _ {t}\right)} \left[ y ^ {*} \right]} _ {\text {constant}} \tag {10} \\ + \lambda \underbrace {\mathbb {E} _ {p (y _ {\boldsymbol {x}} | \mathcal {D} _ {t})} \left[ \max \{y _ {\boldsymbol {x}} , y _ {t} ^ {*} \} \right]} _ {\text {EI}}. \\ \end{array}
+$$
+
+Maximizing $\operatorname{ESLBO}(\pmb{x};\lambda)$ in Eq. (10) with respect to $\pmb{x}$ and $\lambda$ yields the same $\pmb{x}$ solution as maximizing EI in Eq. (1).
+
+The key idea behind the proof is that, following Algorithm 1, the ESLBO in Eq. (10) always converges within two iterations. Regardless of the positive value of $\lambda$ , the value of $x$ that maximizes $\mathrm{ESLBO}(x; \lambda)$ remains the same. Consequently, starting from an arbitrary initial point $x^{(0)}$ , a positive $\lambda^{(1)}$ is derived, ensuring that ESLBO reaches its maximum value in the next iteration.
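The $\lambda$ -invariance is easy to check numerically: for any fixed $\lambda > 0$ , Eq. (10) is an increasing affine function of the EI term, so the maximizing candidate never depends on $\lambda$ . A sketch over a discrete set of candidates with Gaussian predictive parameters (all numeric values below are illustrative, not from the paper):

```python
import numpy as np
from scipy.stats import norm

def ei_term(mu, sigma, y_best):
    """E[max{y_x, y_t*}] for y_x ~ N(mu, sigma^2): y_t* plus closed-form EI."""
    u = (mu - y_best) / sigma
    return y_best + sigma * (u * norm.cdf(u) + norm.pdf(u))

def eslbo_exp(mu, sigma, y_best, ey_star, lam):
    """ESLBO for the exponential variational family, Eq. (10)."""
    return np.log(lam) - lam * ey_star + lam * ei_term(mu, sigma, y_best)

# Made-up predictive parameters over five candidate locations.
mu = np.array([0.2, 0.9, 0.4, 1.1, 0.7])
sigma = np.array([0.5, 0.3, 0.8, 0.2, 0.6])
y_best, ey_star = 1.0, 1.3

best_ei = np.argmax(ei_term(mu, sigma, y_best))
for lam in (0.1, 1.0, 10.0):
    # same winning candidate for every positive rate parameter
    assert np.argmax(eslbo_exp(mu, sigma, y_best, ey_star, lam)) == best_ei
```

The loop passes for any positive $\lambda$ , illustrating why Algorithm 1 converges within two iterations for $\mathcal{Q}_{\mathrm{exp}}$ .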
+
+Theorem 3.2 reveals that EI can be viewed as a special case of MES, giving a new information-theoretic interpretation of the most popular acquisition function in use today. However, the exponential distribution has a fairly rigid parametric form that does not capture the characteristics of $p(y^{*} \mid \mathcal{D}_{t}, y_{\boldsymbol{x}})$ . Figure 1 (right) shows an example of the structural limitations of the exponential density in green. We generate 1000 samples from an example distribution $p(y^{*} \mid \mathcal{D}_{t}, y_{\boldsymbol{x}})$ , and observe that it significantly deviates from an exponential distribution. Specifically, the density of $p(y^{*} \mid \mathcal{D}_{t}, y_{\boldsymbol{x}})$ is non-monotonic, exhibiting a peak before decreasing near $\max \{y_{\boldsymbol{x}}, y_{t}^{*}\}$ (approximately 1.55), while exponential distributions are necessarily monotonic.
+
+This observation motivates the need to enrich the variational distributions $\mathcal{Q}$ to allow more flexibility. A natural extension is to use a Gamma distribution, which is a generalization of the exponential distribution. The Gamma density approximation in the previous example is shown in red in Figure 1 (right). The next section introduces VES-Gamma, which is a more general AF that extends VES-Exp and its equivalent EI acquisition function.
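The benefit of the richer family can be illustrated by fitting both densities to synthetic positive samples with an interior mode; since the exponential is the $k = 1$ member of the Gamma family, the Gamma MLE can only improve the fit. A hedged sketch (the data-generating choice is ours, not the paper's):

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for samples of z* = y* - max{y_x, y_t*}: positive and
# unimodal with an interior peak, which a monotone exponential density
# cannot match.
rng = np.random.default_rng(0)
z = rng.gamma(shape=3.0, scale=0.2, size=5000)

# MLE fits with the support fixed at 0 (the shifted origin max{y_x, y_t*}).
k_hat, _, scale_hat = stats.gamma.fit(z, floc=0)
_, expon_scale = stats.expon.fit(z, floc=0)

# Average log-likelihood under each fitted density.
ll_gamma = stats.gamma.logpdf(z, k_hat, loc=0, scale=scale_hat).mean()
ll_expon = stats.expon.logpdf(z, loc=0, scale=expon_scale).mean()
assert ll_gamma > ll_expon  # the nested exponential family cannot fit better
```

Here the fitted shape `k_hat` recovers the peaked form that the exponential density is structurally unable to represent.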
+
+# 3.3. VES-Gamma: A Generalization of EI
+
+VES-Gamma defines $\mathcal{Q}$ as the Gamma distribution parameterized by $k, \beta > 0$ with its support bounded from below by $\max\{y_x, y_t^*\}$ . The variational density is
+
+$$
+\begin{array}{l} q \left(y ^ {*} \mid \mathcal {D} _ {t}, y _ {\boldsymbol {x}}; k, \beta\right) = \frac {\beta^ {k}}{\Gamma (k)} \left(y ^ {*} - \max \left\{y _ {\boldsymbol {x}}, y _ {t} ^ {*} \right\}\right) ^ {k - 1} \\ \times e ^ {- \beta \left(y ^ {*} - \max \left\{y _ {x}, y _ {t} ^ {*} \right\}\right)} \mathbf {1} _ {y ^ {*} \geq \max \left\{y _ {x}, y _ {t} ^ {*} \right\}}, \tag {11} \\ \end{array}
+$$
+
+where $\Gamma (\cdot)$ denotes the Gamma function. The noise-free assumption allows us to omit the indicator function, and
+
+Table 1. Comparison of key aspects between the ELBO and ESLBO approaches.
+
+| Property | ELBO Approach | ESLBO Approach |
+| --- | --- | --- |
+| Primary variable | $p(\tilde{\boldsymbol{x}} \mid \boldsymbol{z})$ | $\boldsymbol{x}$ |
+| Variational variable | $q(\boldsymbol{z})$ | $q(y^* \mid y_{\boldsymbol{x}}, \mathcal{D}_t)$ |
+| Lower-bound formulation | $\operatorname{ELBO}(p(\tilde{\boldsymbol{x}} \mid \boldsymbol{z}); q(\boldsymbol{z}))$ | $\operatorname{ESLBO}(\boldsymbol{x}; q)$ |
+
+the ESLBO is reformulated as
+
+$$
+\begin{array}{l} \operatorname {ESLBO} (\boldsymbol {x}; k, \beta) = k \log \beta - \log \Gamma (k) \\ + (k - 1) \mathbb {E} _ {p \left(y ^ {*}, y _ {\boldsymbol {x}} \mid \mathcal {D} _ {t}\right)} \left[ \log \left(y ^ {*} - \max \left\{y _ {\boldsymbol {x}}, y _ {t} ^ {*} \right\}\right) \right] \tag {12} \\ - \beta \mathbb {E} _ {p (y ^ {*} | \mathcal {D} _ {t})} [ y ^ {*} ] + \beta \underbrace {\mathbb {E} _ {p (y _ {\boldsymbol {x}} | \mathcal {D} _ {t})} [ \max \{y _ {\boldsymbol {x}} , y _ {t} ^ {*} \} ]} _ {\text {EI}}. \\ \end{array}
+$$
+
+The ESLBO in Eq. (12) serves as the primary objective in the VES-Gamma algorithm. Eq. (12) consists of five terms, with the last term being the EI AF in Eq. (1) scaled by a multiplicative factor. The two hyperparameters, $k$ and $\beta$ , originally part of the Gamma distribution, dynamically balance different components of the objective. In particular, when $k = 1$ , the Gamma distribution reduces to an exponential distribution, making the ESLBO in Eq. (12) equivalent to Eq. (10). In the following section, we discuss the approach for determining values for $k$ and $\beta$ .
+
+# Auto-determination of Tradeoff Hyperparameters.
+
+For any fixed $\pmb{x}$ , the global maximum of the ESLBO in Eq. (12) with respect to $k$ and $\beta$ uniquely exists, as can be demonstrated through derivative analysis. Taking the partial derivatives of ESLBO in Eq. (12) and setting them to zero, we obtain:
+
+$$
+\log \beta - \frac {\partial \log \Gamma (k)}{\partial k} + \mathbb {E} [ \log z _ {\boldsymbol {x}} ^ {*} ] = 0, \quad \frac {k}{\beta} - \mathbb {E} [ z _ {\boldsymbol {x}} ^ {*} ] = 0,
+$$
+
+where the random variable $z_{\pmb{x}}^{*} \coloneqq y^{*} - \max \{y_{\pmb{x}}, y_{t}^{*}\}$ .
+
+Substituting the second equation into the first yields:
+
+$$
+\log k - \psi (k) = \log \mathbb {E} \left[ z _ {\boldsymbol {x}} ^ {*} \right] - \mathbb {E} \left[ \log z _ {\boldsymbol {x}} ^ {*} \right], \tag {13}
+$$
+
+where $\psi (k)\coloneqq \partial \log \Gamma (k) / \partial k$ is the digamma function (Abramowitz et al., 1988), which can be efficiently approximated as a series. By Jensen's inequality, $\log \mathbb{E}[z_{\pmb{x}}^{*}] - \mathbb{E}[\log z_{\pmb{x}}^{*}]\geq 0$ . Since $\log k - \psi (k)$ is strictly decreasing and approaches zero asymptotically (see Figure 2), the root of Eq. (13), $k_{\pmb{x}}^{*}$ , exists uniquely, except in the degenerate case where $z_{\pmb{x}}^{*}$ is deterministic. In the practical implementation, we apply a clamping function to ensure that the term $\log k - \psi (k)$ does not become zero, and we employ a regularization mechanism to keep the resulting root $k_{\pmb{x}}^{*}$ close to 1. Specifically, we use L2 regularization when solving $\log k - \psi (k) = \log \mathbb{E}[z_x^* ] - \mathbb{E}[\log z_x^* ]$ , since the unregularized version is unstable, presumably due to a largely flat landscape.
+
+Figure 2. Plot of $\log k - \psi (k)$ for $k\in [0,500]$ . The function is strictly decreasing and asymptotically approaches zero.
+
+In particular, for $\xi (k)\coloneqq \log k - \psi (k) - \log \mathbb{E}[z_{\pmb{x}}^{*}] + \mathbb{E}[\log z_{\pmb{x}}^{*}]$ , we solve the following optimization problem:
+
+$$
+\min _ {k} \xi (k) ^ {2} + \lambda (k - 1) ^ {2}, \tag {14}
+$$
+
+where $\lambda$ is a regularization parameter which is set to 1 in our experiments.
+
+With this analysis, the value $k_{\pmb{x}}^{*}$ is determined by minimizing Eq. (14) using Brent's method (Brent, 2013), where expectations of $z_{x}^{*}$ are estimated via Monte Carlo sampling from $p(y^{*}, y_{x} \mid \mathcal{D}_{t})$ . Once $k_{\pmb{x}}^{*}$ is obtained, the corresponding $\beta_{\pmb{x}}^{*}$ follows as:
+
+$$
+\beta_ {\boldsymbol {x}} ^ {*} \leftarrow \frac {k _ {\boldsymbol {x}} ^ {*}}{\mathbb {E} \left[ z _ {\boldsymbol {x}} ^ {*} \right]}. \tag {15}
+$$
+
+Notably, the weighting parameters $k_{\pmb{x}}^{*}$ and $\beta_{\pmb{x}}^{*}$ are location-dependent, as $z_{\pmb{x}}^{*}$ itself varies with $\pmb{x}$ . The VES-Gamma algorithm, which incorporates these principles, is detailed in Algorithm 2.
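As a concrete illustration, the computation of $k_{\pmb{x}}^{*}$ and $\beta_{\pmb{x}}^{*}$ from Monte Carlo samples of $z_{\pmb{x}}^{*}$ can be sketched in NumPy/SciPy. This is a hedged reimplementation, not the authors' code: the function name, the log-space parameterization, and the clamping constant are our choices.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import digamma

def fit_gamma_params(z_samples, lam=1.0, eps=1e-10):
    """Estimate (k*, beta*) from Monte Carlo samples of z_x^* by
    minimizing xi(k)^2 + lam * (k - 1)^2, cf. Eq. (14)."""
    z = np.maximum(np.asarray(z_samples), eps)   # clamp for positivity
    mean_z = z.mean()
    gap = np.log(mean_z) - np.log(z).mean()      # >= 0 by Jensen's inequality

    def objective(log_k):                        # optimize log k so that k > 0
        k = np.exp(log_k)
        xi = np.log(k) - digamma(k) - gap        # xi(k) from Eq. (13)
        return xi**2 + lam * (k - 1.0) ** 2

    res = minimize_scalar(objective, method="brent")   # Brent's method
    k_star = float(np.exp(res.x))
    beta_star = k_star / mean_z                  # Eq. (15)
    return k_star, beta_star
```

For Gamma-distributed samples with shape 2 and unit scale, the unregularized root of Eq. (13) is $k \approx 2$; the L2 term with $\lambda = 1$ pulls the estimate toward 1.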
+
+Although we provide both a theoretical justification and a practical implementation for the VES-Gamma AF, a deeper interpretation of the ESLBO in Eq. (12) remains an open research question. Due to the complex non-linear structure of Eq. (12), it is currently unclear whether the individual terms, or the expression as a whole, admit a straightforward interpretation. As an example, we hypothesize that the third term acts as an "anti-EI" component, steering the VES-Gamma solution away from the EI recommendation to promote diversity, with the values of $\beta_{\pmb{x}}^{*}$ and $k_{\pmb{x}}^{*}$ dynamically balancing its influence. Investigating this hypothesis and further elucidating the role of each term within ESLBO will be the focus of future research.
+
+# Algorithm 2 VES-Gamma
+
+Input: Sample set $\mathcal{D}_t$ , number of inner iterations $N$
+
+Output: Next sampling location $\boldsymbol{x}_{t+1}$
+
+1: initialize $\pmb{x}_{t + 1}^{(0)}$
+2: for $n = 1 : N$ do
+3: Evaluate $\mathbb{E}[z_{\pmb{x}}^{*}]$ and $\mathbb{E}[\log (z_{\pmb{x}}^{*})]$ by sampling $p(y^{*},y_{\pmb{x}}\mid \mathcal{D}_{t})$ given $\pmb {x} = \pmb{x}_{t + 1}^{(n - 1)}$
+4: Solve $k^{(n)}$ from Eq. (13)
+5: Solve $\beta^{(n)}$ from Eq. (15)
+6: Update $\pmb{x}_{t+1}^{(n)} \gets \arg \max_{\pmb{x}} \mathrm{ESLBO}(\pmb{x}; k^{(n)}, \beta^{(n)})$ defined in Eq. (12)
+7: end for
+8: return $\pmb{x}_{t+1}^{(N)}$
+
+Table 2. Average duration of a BO loop for each AF. We measure the runtime on the Branin, Levy, and Hartmann benchmarks and average over benchmarks, BO iterations, and 10 random restarts. For $N = 5$ inner iterations, VES has a higher runtime than the other acquisition functions.
+
+| AF | Average time per BO iteration |
+| --- | --- |
+| EI | 1.627s (±0.916s) |
+| MES | 1.120s (±0.472s) |
+| VES | 10.910s (±12.323s) |
+
+Computational Cost of VES-Gamma. Implementing VES-Gamma in Algorithm 2 is computationally intensive. The number of inner iterations, $N$ , must be sufficiently large for convergence, and each inner iteration requires estimating $\mathbb{E}[z_x^*]$ by sampling a large number of $y^*$ . Consequently, the overall BO loop takes significantly more time than EI and MES, as shown in Table 2 for $N = 5$ . However, since black-box function evaluations are often expensive, the additional computational cost of VES-Gamma is not a major bottleneck in many real-world applications.
+
+# 4. Results
+
+# 4.1. Experimental Setup
+
+We employ a consistent Gaussian Process (GP) hyperparameter and prior setting across all benchmarks and acquisition functions, evaluating Bayesian optimization (BO) performance using the simple regret $r(t) \coloneqq f^{*} - \max_{(\pmb{x}_{i},y_{\pmb{x}_{i}})\in \mathcal{D}_{t}}y_{\pmb{x}_{i}}$ , where $f^{*} \coloneqq \max_{\pmb{x}\in \mathcal{X}}f(\pmb{x})$ . When $f^{*}$ is unknown, we instead report the negative best function value, $-\max_{(\pmb{x}_i,y_{\pmb{x}_i})\in \mathcal{D}_t}y_{\pmb{x}_i}$ .
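As a minimal sketch of this metric (the helper name is ours, not a library function), the simple-regret curve over a run of observations is a running best:

```python
import numpy as np

def simple_regret_curve(f_star, ys):
    """r(t) = f* - best observed function value up to iteration t."""
    return f_star - np.maximum.accumulate(np.asarray(ys, dtype=float))

r = simple_regret_curve(1.0, [-1.2, 0.3, 0.1, 0.9])
# the curve is non-increasing by construction: [2.2, 0.7, 0.7, 0.1]
```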
+
+To warm-start the optimization process, we initialize with 20 random samples drawn uniformly from $\mathcal{X}$ and model the GP using a $5/2$ -Matérn kernel with automatic relevance determination (ARD) and a dimensionality-scaled length-scale prior (Hvarfner et al., 2024). Following the theoretical assumption in the VES framework, we focus only on experiments with noise-free observations. Although all benchmarks are noiseless, we allow the GP to accommodate potential non-stationarity or discontinuities in the underlying function.
+
+Each experiment is repeated 10 times to estimate average performance, with results reported as mean $\pm$ one standard deviation. For problems with dimension less than 50, we run 100 iterations; otherwise, we run 1000 iterations. For numerical stability in VES-Gamma, we apply clamping: $z_{\pmb{x}}^{*} = \max \{10^{-10},y^{*} - \max \{y_{\pmb{x}},y_{t}^{*}\}\}$ . The expectation in Eq. (12) is estimated via pathwise conditioning (Wilson et al., 2021) using 128 posterior samples. Additionally, the number of inner iterations $N$ in Algorithm 2 is set to 5, with early stopping applied if $\| \pmb{x}^{(n - 1)} - \pmb{x}^{(n)}\| < d\cdot 10^{-5}$ , where $d$ denotes the problem dimension. We implement VES-Gamma and our other experiments using BoTorch (Balandat et al., 2020). We always compare against LogEI (Ament et al., 2023) and use EI and LogEI interchangeably. The code is available at https://github.com/NUOJIN/variational-entropy-search.
+
+**Benchmarks.** To evaluate VES, we consider three distinct categories of benchmark problems: synthetic benchmarks, GP samples, and real-world optimization tasks.
+
+For synthetic benchmarks, we examine commonly used functions that are diverse in dimensionality and landscape complexity. Specifically, we evaluate the 2-dimensional Branin, the 4-dimensional Levy, the 6-dimensional Hartmann, and the 8-dimensional Griewank functions. These benchmarks are widely utilized in optimization studies and provide controlled testbeds for algorithmic comparisons (Surjanovic & Bingham).
+
+For GP sample benchmarks, we draw from a GP prior with a $\nu = 5/2$ Matérn kernel. These experiments examine the impact of varying length scales ( $\ell = \{0.5, 1, 2\}$ ) and dimensionalities ( $d = \{2, 50, 100\}$ ) on algorithmic performance.
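A draw from such a prior on a finite set of inputs can be sketched by factoring the Matérn-5/2 covariance matrix. This is an illustrative sketch only; the helper name and jitter value are our choices, not the paper's setup:

```python
import numpy as np

def matern52(X1, X2, ell=1.0):
    """Isotropic Matern-5/2 covariance between two sets of points."""
    d = np.sqrt(((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)) / ell
    s = np.sqrt(5.0) * d
    return (1.0 + s + s**2 / 3.0) * np.exp(-s)

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 2))                        # 200 inputs in [0, 1]^2
K = matern52(X, X, ell=0.5) + 1e-8 * np.eye(200)      # jitter for stability
f_sample = rng.multivariate_normal(np.zeros(200), K)  # one prior realization
```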
+
+For real-world scenarios, we utilize a set of benchmarks reflecting practical high-dimensional problems. These include the 60-dimensional Rover problem (Wang et al., 2018), the 124-dimensional soft-constrained Mopta08 (Jones, 2008) benchmark introduced in Eriksson & Jankowiak (2021), the 180-dimensional Lasso-DNA problem from LassoBench (Šehić et al., 2022), and the 388-dimensional SVM benchmark, also introduced in Eriksson & Jankowiak (2021). These tasks represent optimization challenges in engineering design, machine learning, and computational biology.
+
+Due to space constraints, additional experiments are provided in Appendix B.
+
+# 4.2. Comparing VES-Exp and EI
+
+Kolmogorov-Smirnov Test. After establishing the theoretical equivalence of VES-Exp and EI in Section 3.2, we aim to validate this equivalence in our practical implementation. To this end, we employ the Kolmogorov-Smirnov (KS) two-sample test with a significance level of $\alpha = 5\%$ to assess statistical similarity. The two samples consist of function values evaluated by each acquisition function (AF) across 10 repeated trials, i.e., $Y_{\mathrm{EI}}(t)\coloneqq \{\pmb {y}_t^i\}_{i = 1}^{10}$ , where $\pmb {y}_t^i$ denotes the function evaluation at step $t$ in the $i$ -th trial. The null hypothesis states that the function evaluations from VES-Exp and EI originate from the same distribution.
+
+We collect function values for all 500 iterations and consider a test successful (pass) for each iteration $t$ if the null hypothesis is not rejected. We include six different benchmarks spanning low-dimensional synthetic problems to high-dimensional real-world scenarios. Additional implementation details on the KS test are presented in Appendix C.
+
+Empirical Equivalence Results. Figure 3 illustrates the function values obtained by VES-Exp and EI, while Table 3 reports the passing rates of the KS test across six benchmarks. The results show that all passing rates exceed $90\%$ , with the Hartmann benchmark achieving the highest proportion of accepted tests.
+
+Several factors explain the remaining discrepancies between VES-Exp and EI. First, since both acquisition functions are non-convex, their optimization may yield different next sampling points $x_{t+1}$ due to variations in initialization. Second, VES methods employ a clamping mechanism to ensure that $z_x^*$ remains numerically positive, which introduces a dependency between $y^*$ and $x$ . In practice, this violates the assumptions used in the proof in Appendix A.2. We also employed Log-EI (Ament et al., 2023) instead of EI in our experiment, which may also explain the difference. Finally, while EI has a closed-form expression, VES-Exp relies on Monte Carlo estimation, introducing numerical inexactness and potential discrepancies.
+
+# 4.3. Performance of VES-Gamma
+
+Synthetic Test Functions. Figure 4 illustrates the performance of various methods, including MES, EI, and VES-Gamma, across four synthetic benchmark functions: Branin $(d = 2)$ , Levy $(d = 4)$ , Hartmann $(d = 6)$ , and Griewank $(d = 8)$ . The metric shown is the logarithm of the best value (or simple regret), averaged over 10 independent runs.
+
+Table 3. Kolmogorov-Smirnov two-sample test passing rate between VES-Exp and EI for various benchmarks. More details about p-values are available in Figure 8 in the appendix.
+
+| Benchmark | Passing Rate (%) |
+| --- | --- |
+| Branin (d=2) | 94.00 |
+| Hartmann (d=6) | 99.80 |
+| Rover (d=60) | 92.60 |
+| Mopta08 (d=124) | 93.20 |
+| Prior (d=2) | 93.60 |
+| Prior (d=50) | 94.60 |
+
+On Branin, VES-Gamma achieves the best performance, with MES and EI lagging behind. For Levy, VES-Gamma and EI are effectively tied for the best results, with MES showing slightly worse performance. On the Hartmann function, VES-Gamma outperforms all other methods. Finally, for the Griewank function, VES-Gamma and EI once again demonstrate similar performance, significantly outperforming MES.
+
+Overall, these results highlight the robustness of VES-Gamma across diverse synthetic benchmarks, consistently ranking among the top-performing methods.
+
+GP Samples. Here, we study problem instances where the GP can be fitted without model mismatch. To this end, we sample realizations from an isotropic 100-dimensional GP prior with varying length scale $\ell = 0.05, 0.1, 0.25, 0.5$ , using the same $5/2$ -Matérn covariance function for the GP prior and the GP we fit to the observations.
+
+Figure 5 shows the optimization performance on the 100-dimensional GP prior samples. For $\ell = 0.05, 0.1, 0.25$ , VES-Gamma outperforms EI and MES by a wide margin. EI and MES converge to a suboptimal solution. Only for $\ell = 0.5$ does EI reach the same quality as VES-Gamma, outperforming MES.
+
+Real-World Benchmarks. Figure 6 presents the performance of VES-Gamma, EI, and MES across four real-world optimization problems: the 60-dimensional Rover trajectory optimization, the 124-dimensional Mopta08 vehicle optimization, the 180-dimensional weighted Lasso-DNA regression, and the 388-dimensional SVM hyperparameter tuning benchmarks.
+
+Consistent with previous observations, VES-Gamma delivers strong performance, significantly outperforming all other acquisition functions on the SVM benchmark. It also ranks among the top-performing methods, alongside EI, on the Mopta08 and Lasso-DNA benchmarks. On the Rover problem, VES-Gamma performs comparably to EI, while MES achieves the best results in this scenario.
+
+
+Figure 3. Function values observed at each BO iteration for the EI and VES-Exp acquisition functions.
+
+MES exhibits mixed performance across the benchmarks, achieving the best results on Rover but falling behind on the Mopta08 and SVM problems.
+
+Overall, VES-Gamma demonstrates robust and consistent performance across all benchmarks, establishing itself as a versatile and reliable acquisition function for high-dimensional real-world optimization problems.
+
+# 5. Conclusion
+
+In this work, we introduce Variational Entropy Search (VES), a unified framework that bridges Expected Improvement (EI) and information-theoretic acquisition functions through a variational inference approach. We demonstrate that EI can be interpreted as a special case of Max-value Entropy Search (MES), revealing a deeper theoretical connection between these two widely used methodologies in Bayesian optimization. Building on this insight, we propose VES-Gamma, a novel acquisition function that dynamically balances the strengths of EI and MES. Comprehensive benchmark evaluations across a diverse set of low- and high-dimensional optimization problems highlight the robust and consistently high performance of VES-Gamma. These results underscore the potential of the VES framework as a promising foundation for developing more adaptive and efficient acquisition functions in Bayesian optimization.
+
+
+Figure 4. VES-Gamma, EI, and MES on the synthetic Branin $(d = 2)$ , Levy $(d = 4)$ , Hartmann $(d = 6)$ , and Griewank $(d = 8)$ benchmark functions. Average log simple regret: VES-Gamma performs best on Branin and Hartmann, and it is competitive on Levy and Griewank.
+
+Limitations and future work. While the Gamma distribution offers flexibility, future work will explore alternative variational distributions to enhance the adaptability of VES-Gamma. Another key direction is improving computational efficiency. Additionally, extending the theoretical framework to noisy settings remains an open challenge, requiring adaptations in variational inference to account for stochastic density supports.
+
+# Acknowledgements
+
+This project was partly supported by the Wallenberg AI, Autonomous Systems, and Software program (WASP) funded by the Knut and Alice Wallenberg Foundation, the AFOSR awards FA9550-20-1-0138, with Dr. Fariba Fahroo as the program manager, DOE award DE-SC0023346, and by the US Department of Energy's Wind Energy Technologies Office. The computations were enabled by resources provided by the National Academic Infrastructure for Supercomputing in Sweden (NAISS), partially funded by the Swedish Research Council through grant agreement no. 2022-06725
+
+# Impact Statement
+
+This paper presents work that aims to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
+
+
+Figure 5. Performance curves (best values up to each iteration). VES-Gamma shows superior performance on all but one problem, where it performs as well as EI.
+
+# References
+
+Abramowitz, M., Stegun, I. A., and Romer, R. H. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, 1988.
+Ament, S., Daulton, S., Eriksson, D., Balandat, M., and Bakshy, E. Unexpected Improvements to Expected Improvement for Bayesian Optimization. Advances in Neural Information Processing Systems, 36:20577-20612, 2023.
+Balandat, M., Karrer, B., Jiang, D., Daulton, S., Letham, B., Wilson, A. G., and Bakshy, E. BoTorch: A framework for efficient Monte-Carlo Bayesian optimization. Advances in Neural Information Processing Systems, 33: 21524-21538, 2020.
+Barber, D. and Agakov, F. The IM Algorithm: A variational approach to Information Maximization. Advances in Neural Information Processing Systems, 16(320):201, 2004.
+Benjamins, C., Raponi, E., Jankovic, A., Doerr, C., and Lindauer, M. Self-Adjusting Weighted Expected Improvement for Bayesian Optimization. In International Conference on Automated Machine Learning, pp. 6-1. PMLR, 2023.
+Berk, J., Nguyen, V., Gupta, S., Rana, S., and Venkatesh, S. Exploration Enhanced Expected Improvement for Bayesian Optimization. In Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2018, Dublin, Ireland, September 10-14, 2018, Proceedings, Part II 18, pp. 621-637. Springer, 2019.
+
+
+Figure 6. Performance curves (best function value up to each iteration). VES-Gamma outperforms all other AFs on SVM and performs well on the other problems.
+
+Brent, R. P. Algorithms for minimization without derivatives. Courier Corporation, 2013.
+De Ath, G., Everson, R. M., Rahat, A. A., and Fieldsend, J. E. Greed is Good: Exploration and Exploitation Trade-offs in Bayesian Optimisation. ACM Transactions on Evolutionary Learning and Optimization, 1(1):1-22, 2021.
+Eriksson, D. and Jankowiak, M. High-Dimensional Bayesian Optimization with Sparse Axis-Aligned Subspaces. In Uncertainty in Artificial Intelligence, pp. 493-503. PMLR, 2021.
+Frazier, P. I., Powell, W. B., and Dayanik, S. A KnowledgeGradient Policy for Sequential Information Collection. SIAM Journal on Control and Optimization, 47(5): 2410-2439, 2008.
+Golub, G. H. and Pereyra, V. The differentiation of pseudo-inverses and nonlinear least squares problems whose variables separate. SIAM Journal on Numerical Analysis, 10(2):413-432, 1973.
+Hennig, P. and Schuler, C. J. Entropy Search for Information-Efficient Global Optimization. Journal of Machine Learning Research, 13(6), 2012.
+Hennig, P., Osborne, M. A., and Kersting, H. P. Probabilistic Numerics: Computation as Machine Learning. Cambridge University Press, 2022.
+Hernández-Lobato, J. M., Hoffman, M. W., and Ghahramani, Z. Predictive Entropy Search for Efficient Global Optimization of Black-box Functions. Advances in Neural Information Processing Systems, 27, 2014.
+
+Hoffman, M., Brochu, E., and de Freitas, N. Portfolio Allocation for Bayesian Optimization. In Proceedings of the Twenty-Seventh Conference on Uncertainty in Artificial Intelligence, pp. 327-336, 2011.
+Hoffman, M. D., Blei, D. M., Wang, C., and Paisley, J. Stochastic Variational Inference. Journal of Machine Learning Research, 2013.
+Hvarfner, C., Hutter, F., and Nardi, L. Joint Entropy Search for Maximally-Informed Bayesian Optimization. Advances in Neural Information Processing Systems, 35: 11494-11506, 2022.
+Hvarfner, C., Hellsten, E. O., and Nardi, L. Vanilla Bayesian Optimization Performs Great in High Dimensions. In Salakhutdinov, R., Kolter, Z., Heller, K., Weller, A., Oliver, N., Scarlett, J., and Berkenkamp, F. (eds.), Proceedings of the 41st International Conference on Machine Learning, volume 235 of Proceedings of Machine Learning Research, pp. 20793-20817. PMLR, 21-27 Jul 2024.
+Jones, D. R. Large-Scale Multi-Disciplinary Mass Optimization in the Auto Industry. In MOPTA 2008 Conference (20 August 2008), 2008.
+Jones, D. R., Schonlau, M., and Welch, W. J. Efficient global optimization of expensive black-box functions. Journal of Global optimization, 13:455-492, 1998.
+Kandasamy, K., Vysyaraju, K. R., Neiswanger, W., Paria, B., Collins, C. R., Schneider, J., Poczos, B., and Xing, E. P. Tuning Hyperparameters without Grad Students: Scalable and Robust Bayesian Optimisation with Dragonfly. Journal of Machine Learning Research, 21(81): 1-27, 2020.
+Kingma, D., Salimans, T., Poole, B., and Ho, J. Variational diffusion models. Advances in Neural Information Processing Systems, 34:21696-21707, 2021.
+Kingma, D. P. and Welling, M. Auto-Encoding Variational Bayes. In Bengio, Y. and LeCun, Y. (eds.), 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings, 2014.
+Ma, H., Zhang, T., Wu, Y., Calmon, F. P., and Li, N. Gaussian Max-Value Entropy Search for Multi-Agent Bayesian Optimization. In 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 10028-10035. IEEE, 2023.
+Mockus, J. The application of Bayesian methods for seeking the extremum. Towards Global Optimization, 2:117, 1978.
+
+Paisley, J., Blei, D., and Jordan, M. Variational Bayesian Inference with Stochastic Search. In Proceedings of the International Conference on Machine Learning, 2012.
+Poole, B., Ozair, S., Van Den Oord, A., Alemi, A., and Tucker, G. On Variational Bounds of Mutual Information. In International Conference on Machine Learning, pp. 5171-5180. PMLR, 2019.
+Poon, C. and Peyré, G. Smooth over-parameterized solvers for non-smooth structured optimization. Mathematical Programming, 201(1):897-952, 2023.
+Qin, C., Klabjan, D., and Russo, D. Improving the Expected Improvement Algorithm. Advances in Neural Information Processing Systems, 30, 2017.
+Qing, J., Moss, H. B., Dhaene, T., and Couckuyt, I. $\{\mathrm{PF}\}^{2}$ ES: Parallel Feasible Pareto Frontier Entropy Search for Multi-Objective Bayesian Optimization. In 26th International Conference on Artificial Intelligence and Statistics (AISTATS) 2023, volume 206, pp. 2565–2588, 2023.
+Rasmussen, C. E., Williams, C. K., et al. Gaussian Processes for Machine Learning, volume 1. Springer, 2006.
+Šehić, K., Gramfort, A., Salmon, J., and Nardi, L. LassoBench: A High-Dimensional Hyperparameter Optimization Benchmark Suite for Lasso. In International Conference on Automated Machine Learning, pp. 2-1. PMLR, 2022.
+Snoek, J., Larochelle, H., and Adams, R. P. Practical Bayesian Optimization of Machine Learning Algorithms. In Pereira, F., Burges, C., Bottou, L., and Weinberger, K. (eds.), Advances in Neural Information Processing Systems, volume 25. Curran Associates, Inc., 2012.
+Sobester, A., Leary, S. J., and Keane, A. J. On the Design of Optimization Strategies Based on Global Response Surface Approximation Models. Journal of Global Optimization, 33:31-59, 2005.
+Srinivas, N., Krause, A., Kakade, S. M., and Seeger, M. Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design. In Proceedings of the International Conference on Machine Learning, 2010.
+Surjanovic, S. and Bingham, D. Virtual library of simulation experiments: Test functions and datasets, optimization test problems. https://www.sfu.ca/~ssurjano/optimization.html. Accessed: 2024-09-01.
+Takeno, S., Fukuoka, H., Tsukada, Y., Koyama, T., Shiga, M., Takeuchi, I., and Karasuyama, M. Multi-fidelity
+
+Bayesian Optimization with Max-value Entropy Search and its Parallelization. In International Conference on Machine Learning, pp. 9334-9345. PMLR, 2020.
+Takeno, S., Tamura, T., Shitara, K., and Karasuyama, M. Sequential and Parallel Constrained Max-value Entropy Search via Information Lower Bound. In Proceedings of the 39th International Conference on Machine Learning (ICML), volume 162 of Proceedings of Machine Learning Research, pp. 20960-20986. PMLR, June 2022.
+Tu, B., Gandy, A., Kantas, N., and Shafei, B. Joint Entropy Search for Multi-objective Bayesian Optimization. Advances in Neural Information Processing Systems, 35: 9922-9938, 2022.
+Villemonteix, J., Vazquez, E., and Walter, E. An informational approach to the global optimization of expensive-to-evaluate functions. Journal of Global Optimization, 44:509-534, 2009.
+Wang, Z. and Jegelka, S. Max-value Entropy Search for Efficient Bayesian Optimization. In International Conference on Machine Learning, pp. 3627-3635. PMLR, 2017.
+Wang, Z., Gehring, C., Kohli, P., and Jegelka, S. Batched Large-scale Bayesian Optimization in High-dimensional Spaces. In International Conference on Artificial Intelligence and Statistics, pp. 745-754. PMLR, 2018.
+Wilson, J. T., Moriconi, R., Hutter, F., and Deisenroth, M. P. The Reparameterization Trick for Acquisition Functions. In NIPS Workshop on Bayesian Optimization, 2017. URL https://bayesopt.github.io/papers/2017/32.pdf.
+Wilson, J. T., Borovitskiy, V., Terenin, A., Mostowsky, P., and Deisenroth, M. P. Pathwise Conditioning of Gaussian Processes. Journal of Machine Learning Research, 22(105):1-47, 2021.
+
+# A. Proofs
+
+# A.1. ESLB Proof
+
+The MES acquisition function in Eq. (4) can be lower bounded as follows.
+
+Proof.
+
+$$
+\begin{array}{l} \alpha_ {\mathrm {M E S}} (\boldsymbol {x}) = \mathbb {H} \left[ y ^ {*} \mid \mathcal {D} _ {t} \right] - \mathbb {E} _ {p \left(y _ {\boldsymbol {x}} \mid \mathcal {D} _ {t}\right)} \mathbb {H} \left[ y ^ {*} \mid \mathcal {D} _ {t}, y _ {\boldsymbol {x}} \right] \\ = \mathbb {H} [ y ^ {*} \mid \mathcal {D} _ {t} ] + \mathbb {E} _ {p (y ^ {*}, y _ {\boldsymbol {x}} | \mathcal {D} _ {t})} [ \log (p (y ^ {*} \mid \mathcal {D} _ {t}, y _ {\boldsymbol {x}})) ] \\ = \mathbb {H} [ y ^ {*} \mid \mathcal {D} _ {t} ] + \mathbb {E} _ {p (y ^ {*}, y _ {\boldsymbol {x}} | \mathcal {D} _ {t})} \left[ \log \left(\frac {p (y ^ {*} \mid \mathcal {D} _ {t} , y _ {\boldsymbol {x}}) q (y ^ {*} \mid \mathcal {D} _ {t} , y _ {\boldsymbol {x}})}{q (y ^ {*} \mid \mathcal {D} _ {t} , y _ {\boldsymbol {x}})}\right) \right] \\ = \mathbb {H} [ y ^ {*} | \mathcal {D} _ {t} ] + \mathbb {E} _ {p (y ^ {*}, y _ {\boldsymbol {x}} | \mathcal {D} _ {t})} [ \log (q (y ^ {*} | \mathcal {D} _ {t}, y _ {\boldsymbol {x}})) ] + \mathbb {E} _ {p (y _ {\boldsymbol {x}} | \mathcal {D} _ {t})} [ D _ {\mathrm {K L}} \big (p (y ^ {*} | \mathcal {D} _ {t}, y _ {\boldsymbol {x}}) \| q (y ^ {*} | \mathcal {D} _ {t}, y _ {\boldsymbol {x}}) \big) ] \\ \geq \mathbb {H} [ y ^ {*} \mid \mathcal {D} _ {t} ] + \mathbb {E} _ {p (y ^ {*}, y _ {\boldsymbol {x}} | \mathcal {D} _ {t})} [ \log (q (y ^ {*} \mid \mathcal {D} _ {t}, y _ {\boldsymbol {x}})) ], \\ \end{array}
+$$
+
+where the KL divergence $D_{\mathrm{KL}}(p(x)\| q(x))\coloneqq \mathbb{E}_{p(x)}[\log (p(x) / q(x))]$ . The inequality is tight if and only if $\mathbb{E}_{p(y_x|\mathcal{D}_t)}[D_{\mathrm{KL}}\big(p(y^*\mid \mathcal{D}_t,y_x)\| q(y^*\mid \mathcal{D}_t,y_x)\big)] = 0$ , which implies $p(y^{*}\mid \mathcal{D}_{t},y_{x}) = q(y^{*}\mid \mathcal{D}_{t},y_{x})$ for all $y_{x}\mid \mathcal{D}_{t}$ .
+
+# A.2. VES-Exp and EI Algorithmic Equivalence
+
+Theorem 3.2 is proved as follows:
+
+Proof. By restricting the variational distributions to exponential distributions, we slightly abuse the notation of the ESLBO in (8) and define:
+
+$$
+\begin{array}{l} \operatorname {ESLBO} (\lambda , \boldsymbol {x}) = \mathbb {E} _ {p \left(y ^ {*}, y _ {\boldsymbol {x}} \mid \mathcal {D} _ {t}\right)} \left[ \log \left(\lambda \exp \left(- \lambda \left(y ^ {*} - \max \left\{y _ {\boldsymbol {x}}, y _ {t} ^ {*} \right\}\right)\right)\right) \right] \\ = \log \lambda - \lambda \mathbb {E} _ {p \left(y ^ {*}, y _ {\boldsymbol {x}} \mid \mathcal {D} _ {t}\right)} \left[ y ^ {*} - \max \left\{y _ {\boldsymbol {x}}, y _ {t} ^ {*} \right\} \right] \tag {16} \\ = \log \lambda - \lambda \underbrace {\mathbb {E} _ {p (y ^ {*} \mid \mathcal {D} _ {t})} [ y ^ {*} ]} _ {\text {constant}} + \lambda \underbrace {\mathbb {E} _ {p (y _ {\boldsymbol {x}} \mid \mathcal {D} _ {t})} \left[ \max \{y _ {\boldsymbol {x}} , y _ {t} ^ {*} \} \right]} _ {\text {EI AF}}. \end{array}
+$$
+
+Beginning with an arbitrary initial value $\pmb{x}^{(0)}$ , we determine the corresponding parameter
+
+$$
+\lambda^ {(1)} = \frac {1}{\mathbb {E} _ {p \left(y ^ {*} , y _ {\boldsymbol {x} ^ {(0)}} \mid \mathcal {D} _ {t}\right)} \left[ \left(y ^ {*} - \max \left\{y _ {\boldsymbol {x} ^ {(0)}}, y _ {t} ^ {*} \right\}\right) \right]}, \tag {17}
+$$
+
+which is derived by taking the derivative of (16) with respect to $\lambda$ and setting it to zero. With $\lambda$ fixed, maximizing $\operatorname{ESLBO}(\lambda^{(1)},\pmb{x})$ produces the same result as the EI acquisition function in (1). We then compute $\lambda^{(2)}$ based on $\pmb{x}^{(1)}$ following (17). Regardless of the specific value of $\lambda^{(2)}$ , maximizing the ESLBO consistently yields the same point, $\pmb{x}^{(1)}$ . This consistency ensures that the VES iteration process converges in a single step. The final outcome, represented as $(\pmb{x}^{(1)},\lambda^{(2)})$ , indicates that the corresponding $q(y^{*}\mid y_{\pmb{x}},\mathcal{D}_{t})$ is the closest approximation to $p(y^{*}\mid y_{\pmb{x}},\mathcal{D}_{t})$ within $\mathcal{Q}_{\mathrm{exp}}$ (in the sense of minimizing their KL divergence).
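The fact that the maximizer in $\pmb{x}$ does not depend on $\lambda$ can be checked numerically. Below, random values stand in for the $x$-dependent term $\mathbb{E}[\max\{y_{\pmb{x}}, y_t^*\}]$ over a discrete candidate set (placeholders for illustration, not actual GP posterior quantities):

```python
import numpy as np

rng = np.random.default_rng(1)
ei_term = rng.random(100)   # stand-in for E[max{y_x, y_t^*}] on 100 candidates
const = 2.0                 # stand-in for E[y^* | D_t], independent of x

def eslbo(lam):
    """ESLBO(lam, x) over the candidate set, cf. Eq. (16)."""
    return np.log(lam) - lam * const + lam * ei_term

# The maximizing candidate is the same for every lam > 0: it is the argmax of
# the EI term, so the inner lambda-update cannot change the chosen x.
best = [int(np.argmax(eslbo(lam))) for lam in (0.1, 1.0, 7.5)]
```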
+
+# B. Additional Experimental Results
+
+In this section, we evaluate VES-Gamma (Algorithm 2) on additional benchmarks.
+
+# B.1. Synthetic Test Functions
+
+
+Figure 7. Performance plots for EI, MES, and VES-Gamma on additional synthetic benchmark functions. VES-Gamma shows robust performance throughout.
+
+Figure 7 shows the performance of the different acquisition functions, EI, MES, and VES-Gamma, on additional synthetic benchmark functions: the Ackley and Michalewicz test functions and the Lasso-High and Lasso-Hard benchmarks (Šehić et al., 2022). On the 1000-dimensional Lasso-Hard problem, VES-Gamma ran into a timeout after 48 hours; we therefore plot the mean up to the minimum number of iterations completed across all repetitions. VES-Gamma demonstrates robust performance across the benchmarks, outperforming all other acquisition functions on Ackley and outperforming MES on Michalewicz, while performing similarly to the other acquisition functions on the Lasso benchmarks. EI and MES perform considerably worse than VES-Gamma, especially on the higher-dimensional problems.
+
+# C. Kolmogorov-Smirnov Test Statistic
+
+The Kolmogorov-Smirnov (KS) two-sample test is a non-parametric statistical method used to determine whether two samples are drawn from the same continuous distribution. It compares their empirical cumulative distribution functions (ECDFs) and calculates a test statistic that quantifies their maximum difference. Given two independent samples as function evaluations from VES-Exp $\{X_{1},X_{2},\ldots ,X_{n_{1}}\}$ and from EI $\{Y_1,Y_2,\dots ,Y_{n_2}\}$ , their ECDFs are defined as:
+
+$$
+F _ {X} (x) = \frac {1}{n _ {1}} \sum_ {i = 1} ^ {n _ {1}} \mathbb {I} (X _ {i} \leq x), \quad F _ {Y} (x) = \frac {1}{n _ {2}} \sum_ {j = 1} ^ {n _ {2}} \mathbb {I} (Y _ {j} \leq x),
+$$
+
+where $\mathbb{I}(\cdot)$ is the indicator function, equal to 1 if the condition is true and 0 otherwise. The KS test statistic is given by:
+
+$$
+D = \sup _ {x} | F _ {X} (x) - F _ {Y} (x) |,
+$$
+
+where $\sup_x$ denotes the supremum over all possible values of $x$ . This statistic measures the maximum absolute difference between the ECDFs of the two samples.
+
+Statistical Hypotheses. The hypotheses for the KS test are defined as:
+
+- Null hypothesis $(H_0)$ : $F_X(x) = F_Y(x)$ for all $x$ (the two samples come from the same distribution).
+- Alternative hypothesis $(H_{a})\colon F_{X}(x)\neq F_{Y}(x)$ for at least one $x$ (the two samples come from different distributions).
+
+To test these hypotheses, the test p-value is solved using the Kolmogorov-Smirnov survival function:
+
+$$
+p _ {\text {t e s t}} = Q _ {\mathrm {K S}} \left(\sqrt {\frac {n _ {1} n _ {2}}{n _ {1} + n _ {2}}} D\right),
+$$
+
+where $Q_{\mathrm{KS}}(\cdot)$ represents the survival function of the Kolmogorov distribution:
+
+$$
+Q _ {\mathrm {K S}} (z) = 2 \sum_ {k = 1} ^ {\infty} (- 1) ^ {k - 1} e ^ {- 2 k ^ {2} z ^ {2}}.
+$$
+
+Alternatively, the significance level $\alpha = 0.05$ can be tested using the critical value:
+
+$$
+D _ {0.05} \approx \sqrt {- \frac {1}{2} \ln (0.025)} \cdot \sqrt {\frac {n _ {1} + n _ {2}}{n _ {1} n _ {2}}}.
+$$
+
+If $D > D_{0.05}$ , we reject the null hypothesis and count the iteration as a failure (not a pass).
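The quantities above can be computed directly; the sketch below evaluates $D$ from the ECDFs, the truncated series for $Q_{\mathrm{KS}}$, and the $\alpha = 0.05$ critical value. Variable names are illustrative, and the samples are synthetic stand-ins rather than BO function evaluations:

```python
import numpy as np
from scipy.special import kolmogorov
from scipy.stats import ks_2samp

def ks_statistic(x, y):
    """D = sup over x of |F_X(x) - F_Y(x)|, evaluated on the pooled sample."""
    grid = np.sort(np.concatenate([x, y]))
    f_x = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    f_y = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    return np.abs(f_x - f_y).max()

def q_ks(z, terms=100):
    """Truncated series for the Kolmogorov survival function Q_KS(z)."""
    k = np.arange(1, terms + 1)
    return 2.0 * np.sum((-1.0) ** (k - 1) * np.exp(-2.0 * k**2 * z**2))

rng = np.random.default_rng(0)
x, y = rng.normal(size=50), rng.normal(size=50)    # same distribution
n1, n2 = len(x), len(y)
D = ks_statistic(x, y)
p_value = q_ks(np.sqrt(n1 * n2 / (n1 + n2)) * D)   # asymptotic p-value
D_crit = np.sqrt(-0.5 * np.log(0.025)) * np.sqrt((n1 + n2) / (n1 * n2))
passed = bool(D <= D_crit)  # null hypothesis not rejected -> "pass"
```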
+
+Detailed p-values for VES-Exp and EI Comparison. We present the p-values obtained from the experiments detailed in Section 4.2. These results are illustrated in Figure 8. It is observed that for the majority of the sample pairs, the calculated p-values are substantially above the $5\%$ significance level.
+
+
+Figure 8. Distribution of p-values for 500 sample pairs generated using the EI and VES-Exp acquisition functions.
+
+# D. VES-Gamma Computational Acceleration
+
+Table 2 highlights the higher computational cost of VES methods compared to EI and MES. However, we observe that a technique known as Variable Projection (VarPro) (Golub & Pereyra, 1973; Poon & Peyré, 2023) can be leveraged to accelerate the computation of VES under certain conditions, which VES-Gamma satisfies.
+
+The key idea behind VarPro is that when the function ESLBO has a specific structure,
+
+$$
+\max_{\boldsymbol{x}; k, \beta} \operatorname{ESLBO}(\boldsymbol{x}; k, \beta) = \max_{\boldsymbol{x}} \left( \underbrace{\max_{k, \beta} \operatorname{ESLBO}(\boldsymbol{x}; k, \beta)}_{\varphi(\boldsymbol{x})} \right), \tag{18}
+$$
+
+and the solution to $\max_{k,\beta}\operatorname{ESLBO}(\pmb{x};k,\beta)$ is unique, then $\varphi (\pmb {x})$ is differentiable, with
+
+$$
+\frac{d}{d \boldsymbol{x}} \varphi(\boldsymbol{x}) = \frac{\partial}{\partial \boldsymbol{x}} \operatorname{ESLBO}\left(\boldsymbol{x}; k_{\boldsymbol{x}}^{*}, \beta_{\boldsymbol{x}}^{*}\right), \tag{19}
+$$
+
+where $k_{\boldsymbol{x}}^{*}$ and $\beta_{\boldsymbol{x}}^{*}$ are the unique values that maximize $\operatorname{ESLBO}(\boldsymbol{x}; k, \beta)$ for the given $\boldsymbol{x}$.
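
The VarPro pattern of Eqs. (18)-(19) can be checked numerically. In the sketch below, `eslbo_toy` is a hypothetical quadratic stand-in for the actual ESLBO (chosen so the inner maximizer has a closed form), used only to illustrate that the envelope derivative of Eq. (19), taken with the inner solution frozen, matches direct differentiation of $\varphi$.

```python
# Toy stand-in with the nested structure of Eq. (18); not the real ESLBO.
def eslbo_toy(x, k):
    return -(k - x) ** 2 - 0.5 * (x - 2.0) ** 2

def inner_argmax(x):
    """Inner problem max_k eslbo_toy(x, k); closed form k* = x here.
    For VES-Gamma this would be the unique maximizer (k*_x, beta*_x)."""
    return x

def phi(x):
    """VarPro-reduced objective phi(x) = max_k eslbo_toy(x, k)."""
    return eslbo_toy(x, inner_argmax(x))

def dphi_envelope(x, h=1e-6):
    """Eq. (19): differentiate in x only, with the inner solution frozen."""
    k_star = inner_argmax(x)
    return (eslbo_toy(x + h, k_star) - eslbo_toy(x - h, k_star)) / (2 * h)

def dphi_direct(x, h=1e-6):
    return (phi(x + h) - phi(x - h)) / (2 * h)

# the two derivatives agree, so an outer optimizer over x only ever needs
# partial derivatives of ESLBO evaluated at the frozen inner solution
print(abs(dphi_envelope(0.7) - dphi_direct(0.7)) < 1e-4)  # → True
```

This is exactly why uniqueness of the inner solution matters: without it, $\varphi$ need not be differentiable and Eq. (19) can fail.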
+
+Following the proof of Eq. (13), we establish that the solutions $k_{\boldsymbol{x}}^{*}$ and $\beta_{\boldsymbol{x}}^{*}$ are unique. This confirms that it is feasible to implement the VarPro strategy to accelerate the computation of VES-Gamma, eliminating the need for the iterative scheme in Algorithm 1. This ongoing work aims to reduce the computational cost of VES-Gamma to a level comparable to that of EI and MES.
\ No newline at end of file
diff --git a/aunifiedframeworkforentropysearchandexpectedimprovementinbayesianoptimization/images.zip b/aunifiedframeworkforentropysearchandexpectedimprovementinbayesianoptimization/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..8732b8302bdd3b575a3f28293990721278067f1f
--- /dev/null
+++ b/aunifiedframeworkforentropysearchandexpectedimprovementinbayesianoptimization/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b1eb34fbd5dbc75a757a5c9f0a4c6738c9ccd9b7787fe4e335ab9f3ffb001389
+size 642634
diff --git a/aunifiedframeworkforentropysearchandexpectedimprovementinbayesianoptimization/layout.json b/aunifiedframeworkforentropysearchandexpectedimprovementinbayesianoptimization/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..f4b3d9844d8006db62a5193ec96c67843f675f14
--- /dev/null
+++ b/aunifiedframeworkforentropysearchandexpectedimprovementinbayesianoptimization/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5bd7ed77f01014f52d57469dc25ceaa3dbc975fd1d702c29381be9d8c8ec3b89
+size 636895
diff --git a/aunifiedframeworkforgeneralizationerroranalysisoflearningwitharbitrarydiscreteweakfeatures/dc2936ad-e151-4570-b32e-e24a41047caa_content_list.json b/aunifiedframeworkforgeneralizationerroranalysisoflearningwitharbitrarydiscreteweakfeatures/dc2936ad-e151-4570-b32e-e24a41047caa_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..211e67f8da32ea2221aa2fe22f2ffe57eea35c1b
--- /dev/null
+++ b/aunifiedframeworkforgeneralizationerroranalysisoflearningwitharbitrarydiscreteweakfeatures/dc2936ad-e151-4570-b32e-e24a41047caa_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4e083115b8fb272244ba2c38152440aafd2a290247efb3ea760274b0da6e4e96
+size 166967
diff --git a/aunifiedframeworkforgeneralizationerroranalysisoflearningwitharbitrarydiscreteweakfeatures/dc2936ad-e151-4570-b32e-e24a41047caa_model.json b/aunifiedframeworkforgeneralizationerroranalysisoflearningwitharbitrarydiscreteweakfeatures/dc2936ad-e151-4570-b32e-e24a41047caa_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..88757113b70df4126a317d3e91ecce227d7723cd
--- /dev/null
+++ b/aunifiedframeworkforgeneralizationerroranalysisoflearningwitharbitrarydiscreteweakfeatures/dc2936ad-e151-4570-b32e-e24a41047caa_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:044bc368dc908c0494bc7e3e0f71620144cce52a45081ff93a94504a3310d485
+size 189270
diff --git a/aunifiedframeworkforgeneralizationerroranalysisoflearningwitharbitrarydiscreteweakfeatures/dc2936ad-e151-4570-b32e-e24a41047caa_origin.pdf b/aunifiedframeworkforgeneralizationerroranalysisoflearningwitharbitrarydiscreteweakfeatures/dc2936ad-e151-4570-b32e-e24a41047caa_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..e2b3f5618dd45398f854d97873ccabee16739391
--- /dev/null
+++ b/aunifiedframeworkforgeneralizationerroranalysisoflearningwitharbitrarydiscreteweakfeatures/dc2936ad-e151-4570-b32e-e24a41047caa_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:07cbac503563123eb8b8357ce053e83b8358339f258d7c69da708b93af597c4c
+size 962005
diff --git a/aunifiedframeworkforgeneralizationerroranalysisoflearningwitharbitrarydiscreteweakfeatures/full.md b/aunifiedframeworkforgeneralizationerroranalysisoflearningwitharbitrarydiscreteweakfeatures/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..925b2271dd7ae71a3ccd3b74e275a9f6e0d3f316
--- /dev/null
+++ b/aunifiedframeworkforgeneralizationerroranalysisoflearningwitharbitrarydiscreteweakfeatures/full.md
@@ -0,0 +1,673 @@
+# A Unified Framework for Generalization Error Analysis of Learning with Arbitrary Discrete Weak Features
+
+Kosuke Sugiyama1 Masato Uchida1
+
+# Abstract
+
+In many real-world applications, predictive tasks inevitably involve low-quality input features (Weak Features; WFs) which arise due to factors such as misobservations, missingness, or partial observations. While several methods have been proposed to estimate the true values of specific types of WFs and to solve a downstream task, a unified theoretical framework that comprehensively addresses these methods remains underdeveloped. In this paper, we propose a unified framework called Weak Features Learning (WFL), which accommodates arbitrary discrete WFs and a broad range of learning algorithms, and we demonstrate its validity. Furthermore, we introduce a class of algorithms that learn both the estimation model for WFs and the predictive model for a downstream task and perform a generalization error analysis under finite-sample conditions. Our results elucidate the interdependencies between the estimation errors of WFs and the prediction error of a downstream task, as well as the theoretical conditions necessary for the learning approach to achieve consistency. This work establishes a unified theoretical foundation, providing generalization error analysis and performance guarantees, even in scenarios where WFs manifest in diverse forms.
+
+# 1. Introduction
+
+The performance and explainability of machine learning models are highly dependent on the quality of the training data. However, in practical applications, obtaining high-quality data is often infeasible due to constraints such as
+
+1 Major in Computer Science and Communications Engineering, Waseda University, Tokyo, Japan. Correspondence to: Kosuke Sugiyama, Masato Uchida.
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+privacy concerns, high observation costs, and uncertainties in sensor measurements. Consequently, challenges often stem from low-quality labels (referred to as Weak Labels; WLs) that contain incorrect, incomplete, or ambiguous supervisory information, and low-quality features (referred to as Weak Features; WFs), which manifest as misobservations, missing values, or ambiguous observations. Extensive research has focused on WLs within the framework of weakly supervised learning (WSL), covering methods such as semi-supervised learning and learning with label noise, which have provided substantial theoretical guarantees (Chapelle et al., 2006; Elkan & Noto, 2008; Cour et al., 2011; Natarajan et al., 2013; Ishida et al., 2017). In contrast, research on WFs has proposed methods such as impute-then-regress (ItR), which imputes missing values before regression (Josse et al., 2024; Bertsimas et al., 2021; Le Morvan et al., 2020b; 2021), and complementary features learning (CFL), which leverages complementary features (CFs) that differ from the true values (Sugiyama & Uchida, 2024). However, a unified framework that provides consistent theoretical guarantees across the various forms of WFs remains underexplored.
+
+Motivated by these challenges, this study focuses on weak features learning (WFL), a generalized learning problem involving arbitrary WFs. A common approach involves sequentially or iteratively learning feature estimation models $\pmb{g}$ to estimate the true values of WFs (referred to as exact values) and a label prediction model $f$ to predict downstream task labels using the outputs of $\pmb{g}$ . This strategy is considered rational for improving both the quality of WFs and the predictive performance of a downstream task (Yoon et al., 2018; Mattei & Frellsen, 2019; Le Morvan et al., 2020a; Ipsen et al., 2021; Zaffran et al., 2023; Ipsen et al., 2022; Sugiyama & Uchida, 2024). Indeed, prior approaches such as ItR and CFL are grounded in this learning strategy, utilizing various machine learning methods to construct $\pmb{g}$ . These methods aim to improve the quality of WFs, thereby enhancing the generalization performance and explainability of $f$ in a downstream task.
+
+However, in the context of WFL, where various methods have been proposed, significant gaps remain in understanding how the learning of $g$ and $f$ impacts each other's learning efficiency and under what conditions WFL can achieve optimal $g$ and $f$ (i.e., consistency). For example, in ItR, while the conditions under which $g$ and $f$ become Bayes rules have been analyzed, the generalization error analysis under finite samples and any data distribution has not yet been sufficiently conducted (Josse et al., 2024; Bertsimas et al., 2021; Le Morvan et al., 2020b; 2021). Also, in CFL, which addresses CFs, theoretical analysis remains underdeveloped, and clear guidelines for handling diverse forms of WFs in a unified manner have yet to be established (Sugiyama & Uchida, 2024).
+
+This study aims to investigate the mutual influence between estimating the exact values of WFs and learning a downstream task. Specifically, we focus on scenarios involving discrete WFs (hereafter referred to as discrete WFL) and provide a unified formalization. By performing finite-sample error analysis for a generalized class of learning algorithms, we systematically address these questions. The proposed theoretical framework extends beyond situations involving missing values or CFs. It also accommodates cases where WFs arise in the diverse forms discussed in the WLs literature, such as erroneous observations or scenarios where only a candidate set containing the exact value is observed (Natarajan et al., 2013; Cour et al., 2011; Feng et al., 2020; Xu et al., 2021). Consequently, the framework not only reinterprets existing approaches such as ItR and CFL but also offers new theoretical insights into previously unexplored WF settings. Furthermore, as the class of learning algorithms analyzed in this study encompasses various existing methods (Yoon et al., 2018; Mattei & Frellsen, 2019; Ipsen et al., 2021; Josse et al., 2024; Le Morvan et al., 2021; Ipsen et al., 2022; Sugiyama & Uchida, 2024), our results provide a unified theoretical evaluation framework for these methods. We have also developed a unified formulation and analysis for scenarios involving continuous WFs, and the results for continuous WFs are presented in a separate paper (Sugiyama & Uchida, 2025).
+
+The main contributions of this study are as follows:
+
+1. We propose a risk-based formulation to address arbitrary discrete WFs and demonstrate that the introduced objective function facilitates the learning of $f$ , which captures the true input-output relationship. This validates the proposed formulation (Section 3.2).
+2. Within the proposed formulation, we define the Learning Algorithm Class for discrete WFL (LAC-dWFL), which flexibly combines three steps: (i) learning $g$ using WFs as weak supervision, (ii) learning $f$ with a fixed $g$ , and (iii) learning $g$ with a fixed $f$ . This framework accommodates both sequential and iterative learning approaches, offering a unified perspective on diverse methods (Section 3.3).
+3. For step (ii), we derive the error bound for $f$ given any fixed $g$ (Section 4.2), providing theoretical insights into how
+
+the estimation errors of $g$ influence the error bound for $f$ . By integrating the theoretical framework of WSL in (i), we further analyze how the properties of WFs influence $f$ 's error bound, how the order of the error bound evolves with the learning of $g$ , and the conditions under which sequential learning in LAC-dWFL (performing steps (i) and (ii) in sequence) achieves consistency.
+
+4. For step (iii), we evaluate the generalization error of $\pmb{g}$ given any fixed $f$ (Section 4.3) and analyze how the properties and generalization performance of $f$ affect the error bound for $\pmb{g}$ . By integrating the results of Contributions 3 and 4, we establish the conditions under which iterative learning in LAC-dWFL (alternating steps (ii) and (iii)) achieves the consistency for $f \circ g$ .
+5. We validate the proposed theoretical framework on real-world datasets to evaluate how accurately the derived error bounds reflect actual learning behaviors (Section 5).
+
+# 2. Related work
+
+# 2.1. Learning Problems involving WFs
+
+ItR and CFL are representative learning problems that serve as special cases of WFL, involving specific types of WFs. This section provides an overview of these frameworks.
+
+In ItR, the focus lies on scenarios where input features contain missing values. The fundamental approach of ItR involves imputing the missing values and utilizing the resulting complete dataset to learn a label prediction model $f$ for a downstream task. Sequential learning methods have been proposed, employing feature estimation models $g$ that utilize techniques such as constant imputation (Josse et al., 2024) or machine learning-based approaches (Yoon et al., 2018; Mattei & Frellsen, 2019; Ipsen et al., 2021). Furthermore, joint or iterative optimization of $g$ and $f$ has also been explored in the literature (Le Morvan et al., 2020a; Ipsen et al., 2022).
+
+Theoretical analyses of ItR have investigated whether combinations of $g$ and $f$ exist that equal Bayes rule, focusing on regression or classification as downstream tasks (Josse et al., 2024; Bertsimas et al., 2021; Le Morvan et al., 2021). However, these studies do not address the existence of learning algorithms capable of constructing optimal $g$ and $f$. Moreover, analyses in finite-sample settings have been confined to restrictive cases where the true $f$ is assumed to be a linear model, leaving more general problem settings unexplored (Le Morvan et al., 2020b). Additionally, these analyses often restrict downstream tasks to regression or classification and, in some cases, confine the loss function of $f$ to mean squared error (Le Morvan et al., 2020b; 2021). In this paper, while we focus on discrete WFs, we achieve a generalization error analysis of WFL in finite-sample settings by assuming only the boundedness and Lipschitz continuity of the loss function of $f$, without imposing any constraints on the downstream task or the data distribution.
+
+In CFL, the primary focus lies on learning problems where input features include CFs (Sugiyama & Uchida, 2024). Within this framework, the feature estimation model $g$ and the label prediction model $f$ are defined as probabilistic models, and an objective function leveraging the Kullback-Leibler divergence has been derived to learn $g$ and $f$. For learning $g$ in CFL, complementary label learning (CLL) (Ishida et al., 2017; 2019; Yu et al., 2018; Lin & Lin, 2023; Ruan et al., 2024), a weakly supervised learning approach that predicts true labels from datasets labeled exclusively with incorrect labels, can be employed. Moreover, partial label learning (PLL) (Cour et al., 2011; Feng et al., 2020; Xu et al., 2021; Tian et al., 2023), where supervision is provided in the form of a set containing the true label, can be interpreted as a generalization of CLL (Katsura & Uchida, 2020); therefore, PLL is also applicable to the learning of $g$.
+
+However, in CFL, the learning behavior under finite samples, as well as the conditions required to obtain asymptotically optimal $\pmb{g}$ and $f$, have yet to be theoretically clarified. In this paper, we restrict $\pmb{g}$ and $f$ to deterministic models and perform a generalization error analysis of WFL for an arbitrary downstream task and bounded, Lipschitz-continuous loss functions, thereby shedding light on the theoretical properties of CFL.
+
+# 2.2. WFs Whose Exact Values Can Be Estimated
+
+When constructing a feature estimation model $g$ , it is natural to treat the observed values of each WF as WLs and employ the WSL methods. In fact, depending on the types and settings of WFs, $g$ can be learned using WSL methods. In WSL, a variety of WL settings have been studied, including the aforementioned CLL and PLL. For instance, noisy label learning (Natarajan et al., 2013) deals with learning from data containing incorrect labels, while positive-unlabeled learning (Elkan & Noto, 2008) focuses on learning binary classifiers using only positive and unlabeled samples.
+
+Various WSL methods have been theoretically formulated, and their generalization error has been analyzed under finite-sample conditions (Cour et al., 2011; Feng et al., 2020; Xu et al., 2021; Natarajan et al., 2013; Ishida et al., 2017; Yu et al., 2018). Many of these objective functions were defined using unbiased estimators of the expected risk in supervised learning, the expected risk's upper bounds that are computable with WLs, or risks whose optimal solutions align with those of the expected risk. Their theoretical analyses elucidated the conditions under which optimal hypotheses can be obtained by minimizing each objective function, as well as the relationship between WL settings and the error bounds. Thus, if WSL is employed to learn $\pmb{g}$ in WFL, the learning of $\pmb{g}$ alone can be analyzed based on the WSL theories.
+
+However, in WFL, it is necessary to consider the learning of both $g$ and $f$ . For example, in sequential learning, the learning of $f$ depends on the output of $g$ . In iterative learning, the learning of $g$ depends on $f$ , unlike the case where $g$ is learned solely using WSL. Therefore, a theoretical discussion that establishes the relationship between $g$ and $f$ is essential. In this paper, we perform a generalization error analysis of WFL, explicitly considering the relationship between $g$ and $f$ .
+
+# 3. Formulation
+
+# 3.1. Review of ERM
+
+In this paper, we formulate WFL from the perspective of risk minimization and adopt empirical risk minimization (ERM) as the learning framework. Below, we briefly review ERM in ordinary supervised learning (Shalev-Shwartz & Ben-David, 2014; Mohri et al., 2018). Let the input space be $\mathcal{X} \subseteq \mathbb{R}^d$ and the label space be $\mathcal{Y} \subseteq \mathbb{R}$. Here, $d \in \mathbb{N}_+$ denotes the input dimension. We denote the random variable representing an instance by $\mathbf{X}$ and the random variable representing labels by $Y$, and assume that samples of $(\mathbf{X}, Y)$ are drawn independently and identically distributed (i.i.d.) from the true distribution $p_*(\mathbf{x}, y)$ over $\mathcal{X} \times \mathcal{Y}$. The goal in the ERM framework is to find a label prediction model $f \in \mathcal{F}$, $f: \mathcal{X} \to \mathbb{R}$, that minimizes the expected risk:
+
+$$
+R_{l}(f) := \mathbb{E}_{p_{*}(\boldsymbol{x}, y)} \left[ l(f(\boldsymbol{X}), Y) \right], \tag{3.1}
+$$
+
+where $l: \mathbb{R} \times \mathbb{R} \to \mathbb{R}_+$ is a loss function, and $\mathcal{F}$ is the hypothesis set of label prediction models. Since only finite samples are available in practice, ERM approximates the expected risk with the empirical risk computed as a sample average and learns $f$ by minimizing this empirical risk.
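
As a minimal illustration of the ERM recipe above, the sketch below minimizes an empirical squared-error risk over a crude one-parameter hypothesis set; the data and the grid-search "learner" are hypothetical stand-ins for $(\mathbf{X}, Y)$ and $\mathcal{F}$.

```python
# Empirical risk: the sample-average counterpart of the expectation in Eq. (3.1).
def empirical_risk(f, data, loss):
    return sum(loss(f(x), y) for x, y in data) / len(data)

# hypothetical data, roughly y = 2x, with squared loss
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]
sq_loss = lambda y_hat, y: (y_hat - y) ** 2

# ERM over a crude hypothesis set F = {x -> w*x : w on a grid}
w_hat = min((w / 100.0 for w in range(-500, 501)),
            key=lambda w: empirical_risk(lambda x: w * x, data, sq_loss))
print(w_hat)  # → 2.04 (the grid point nearest the least-squares solution)
```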
+
+# 3.2. Formulation of discrete WFL
+
+In this section, we formulate WFL based on risk minimization and employ ERM as the learning method. To account for the presence of WFs in instances, we decompose $X$ representing an instance, into $X^{\mathrm{w}}$ representing the exact values of WFs and $X^{\mathrm{o}}$ representing the remaining ordinary features (OFs), such that $X = (X^{\mathrm{w}}, X^{\mathrm{o}})$ . Let $\mathcal{X}^{\mathrm{w}}$ and $\mathcal{X}^{\mathrm{o}}$ denote the domains of $X^{\mathrm{w}}$ and $X^{\mathrm{o}}$ , respectively, with $\mathcal{X}^{\mathrm{w}} \times \mathcal{X}^{\mathrm{o}} = \mathcal{X}$ . Here, $\mathcal{X}^{\mathrm{w}}$ is assumed to be a finite set. We denote the observed values of WFs as the random variables $\overline{X}^{\mathrm{w}}$ , which follows the probability distribution $\bar{p}_*(\bar{x}^{\mathrm{w}} | x, y)$ .
+
+We define the feature estimation models for estimating the exact values of WFs as $\pmb{g} \coloneqq (g_1, \dots, g_{F^{\mathrm{w}}}) \in \mathcal{G} \coloneqq \mathcal{G}_1 \times \dots \times \mathcal{G}_{F^{\mathrm{w}}} : \mathcal{X}^{\mathrm{o}} \to \mathcal{X}^{\mathrm{w}}$, where $F^{\mathrm{w}}$ is the number of WFs, and each $g_j \in \mathcal{G}_j : \mathcal{X}^{\mathrm{o}} \to \mathcal{X}_j^{\mathrm{w}}, \forall j \in [F^{\mathrm{w}}]$. Here, $[F^{\mathrm{w}}] := \{1, \ldots, F^{\mathrm{w}}\}$, and $\mathcal{G}_j$ represents the hypothesis set for estimating $X_j^{\mathrm{w}}$. The probability mass function (PMF) representation of $g$ is defined as $q_g(\boldsymbol{x}^{\mathrm{w}}|\boldsymbol{x}^{\mathrm{o}}) := \mathbb{1}_{[\boldsymbol{x}^{\mathrm{w}} = g(\boldsymbol{x}^{\mathrm{o}})]}$. For simplicity, this paper primarily focuses on binary classification, but the proposed formulation and analyses can be easily extended to other prediction tasks such as multi-class classification or regression.
+
+The primary objectives of WFL are to improve the generalization performance of a downstream task and to restore explainability lost due to WFs. The primary factor reducing explainability is the inaccuracy of information provided by WFs. The most natural approach to address this issue is to estimate the exact values of WFs accurately. Accordingly, WFL aims to learn $f$ and $g$ that minimize the following two risks. The first risk evaluates the generalization error of $f$ :
+
+$$
+\begin{array}{l}
+R_{l, \boldsymbol{g}}(f) := \mathbb{E}_{p_{*}(\boldsymbol{x}^{\mathrm{o}}, y)\, q_{\boldsymbol{g}}(\boldsymbol{x}^{\mathrm{w}} \mid \boldsymbol{x}^{\mathrm{o}})} \left[ l(f(\boldsymbol{X}), Y) \right] \tag{3.2} \\
+= \mathbb{E}_{p_{*}(\boldsymbol{x}^{\mathrm{o}}, y)} \left[ l(f(\boldsymbol{g}(\boldsymbol{X}^{\mathrm{o}}), \boldsymbol{X}^{\mathrm{o}}), Y) \right].
+\end{array}
+$$
+
+The second risk assesses the estimation errors of $g$:
+
+$$
+R_{01, j}(g_{j}) := \mathbb{E}_{p_{*}(\boldsymbol{x})} \left[ l_{01}\left(g_{j}(\boldsymbol{X}^{\mathrm{o}}), X_{j}^{\mathrm{w}}\right) \right], \quad \forall j \in [F^{\mathrm{w}}]. \tag{3.3}
+$$
+
+Here, $l_{01}(y,y') \coloneqq \mathbb{1}_{[y \neq y']}$ represents the 0-1 loss. Finally, the objective function for discrete WFL is defined as a linear combination of these risks:
+
+$$
+R_{l, \lambda}^{\mathrm{dWFL}}(\boldsymbol{g}, f) := R_{l, \boldsymbol{g}}(f) + \lambda \sum_{j \in [F^{\mathrm{w}}]} R_{01, j}(g_{j}), \tag{3.4}
+$$
+
+where $\lambda \in \mathbb{R}_{+}$ is a weighting parameter.
+
+The objective function $R_{l,\lambda}^{\mathrm{dWFL}}$ facilitates the unified treatment of any discrete WFs. This unification arises from representing the error of $g_{j}$ via the risk $R_{01,j}$, which is to be minimized regardless of the type of WF. In practice, for various types of WFs, WSL methods that learn $g_{j}$ aim to minimize $R_{01,j}$ by minimizing risks that serve as upper bounds of $R_{01,j}$ or risks whose optimal solutions align with those of $R_{01,j}$ (Cour et al., 2011; Feng et al., 2020; Natarajan et al., 2013; Ishida et al., 2017; Yu et al., 2018). Therefore, such WSL methods can be utilized to learn $g_{j}$ as part of minimizing $R_{l,\lambda}^{\mathrm{dWFL}}$.
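
For concreteness, the empirical counterpart of Eq. (3.4) can be sketched as follows for a single binary WF ($F^{\mathrm{w}} = 1$); the data and models below are hypothetical toys, not the setting of the paper's experiments.

```python
def dwfl_objective(f, g, samples, loss, lam=1.0):
    """Empirical version of Eq. (3.4): hat{R}_{l,g}(f) + lam * hat{R}_{01}(g).

    samples: list of (x_o, x_w, y) where exact WF values x_w are available
    for evaluating the 0-1 estimation risk of g."""
    n = len(samples)
    pred_risk = sum(loss(f(g(x_o), x_o), y) for x_o, _, y in samples) / n
    est_risk = sum(g(x_o) != x_w for x_o, x_w, _ in samples) / n  # 0-1 loss
    return pred_risk + lam * est_risk

# toy check: a perfect estimator and predictor drive the objective to 0
data = [(-1.0, 0, 0), (-0.5, 0, 0), (0.5, 1, 1), (2.0, 1, 1)]
g = lambda x_o: int(x_o > 0)      # estimates the binary WF from the OF
f = lambda x_w, x_o: x_w          # predicts the label from the WF estimate
sq_loss = lambda y_hat, y: (y_hat - y) ** 2
print(dwfl_objective(f, g, data, sq_loss))  # → 0.0
```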
+
+The validity of our formulation is demonstrated by the following theorem$^{1}$. Its proof is given in Appendix A.1.
+
+Theorem 3.1. For any $f \in \mathcal{F}$ , $g \in \mathcal{G}$ , and $l$ bounded by $U_{l} < \infty$ , the following inequality holds:
+
+$$
+R_{l}(f) \leq R_{l, \boldsymbol{g}}(f) + U_{l} \sum_{j \in [F^{\mathrm{w}}]} R_{01, j}(g_{j}). \tag{3.5}
+$$
+
+The RHS of Eq. (3.5) equals $R_{l,U_l}^{\mathrm{dWFL}}$. By scaling $l$, $U_l$ can be matched to any $\lambda$, so minimizing $R_{l,\lambda}^{\mathrm{dWFL}}$ is expected to yield an $f$ that also minimizes $R_l$. In other words, by using $R_{l,\lambda}^{\mathrm{dWFL}}$, $f$ is learned to capture the true relationship between $X$ and $Y$, despite relying on $\overline{X}^{\mathrm{w}}$.
+
+The result of Theorem 3.1 is directly applicable to the following two scenarios. The first scenario occurs when test instances contain the exact values of WFs; for instance, this arises in practice when WFs are observed during training but exact values are available at test time. In this scenario, minimizing $R_{l}$ is essential, yet it cannot be computed directly from the training data containing WFs. Since Theorem 3.1 ensures that minimizing $R_{l,\lambda}^{\mathrm{dWFL}}$ indirectly minimizes $R_{l}$, our framework offers a valuable approach for overcoming incomplete inputs during training while enhancing performance with complete inputs at test time.
+
+The second scenario involves training data containing a mix of instances with exact values of WFs $\mathbf{X}^{\mathrm{w}}$ and instances with WFs $\overline{\mathbf{X}}^{\mathrm{w}}$ . In practical applications, some portions of the data may be observed in detail, yielding exact values, while other portions may be only partially observed. Theorem 3.1 guarantees that minimizing $R_{l,\lambda}^{\mathrm{dWFL}}$ contributes to minimizing $R_{l}$ . Thus, it enables the simultaneous use of both data types during training. This approach facilitates the development of learning methods that seamlessly integrate both data types into a unified framework.
+
+# 3.3. Learning Algorithm Class for discrete WFL
+
+In this section, we introduce a class of learning algorithms that uniformly address not only any discrete WFs but also diverse methods within the discrete WFL framework. Building on the formulation proposed in Section 3.2, we define a class that encompasses numerous existing methods in ItR and CFL as follows:
+
+Definition 3.2 (LAC-dWFL). The learning algorithm class for discrete WFL (LAC-dWFL) refers to the set of algorithms in discrete WFL that learn the feature estimation models $\pmb{g}$ and the label prediction model $f$ using one or a combination of the following three steps:
+
+(i) Learning $\pmb{g}$ by using $\overline{\boldsymbol{X}}^{\mathrm{w}}$ as weak supervision and minimizing $\sum_{j\in [F^{\mathrm{w}}]}R_{01,j}$ , either directly or indirectly.
+(ii) Learning $f$ with $\pmb{g}$ fixed by minimizing $R_{l,g}$ .
+(iii) Learning $g$ with $f$ fixed by minimizing $R_{l,\lambda}^{\mathrm{dWFL}}$ .
+
+The introduction of LAC-dWFL allows for a unified treatment of a wide range of methods for WFL. Most methods applicable to ItR and CFL fall under the category of sequential learning, where steps (i) and (ii) are executed in sequence (Yoon et al., 2018; Mattei & Frellsen, 2019; Ipsen et al., 2021; Josse et al., 2024; Le Morvan et al., 2021; Sugiyama & Uchida, 2024). Additionally, methods that represent $g$ and $f$ as neural networks and combine them (Le Morvan et al., 2020a; Ipsen et al., 2022) can be regarded as iterative learning, where steps (ii) and (iii) are executed repeatedly as these components are alternately optimized. Such methods are thus encompassed within LAC-dWFL.
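
A sequential instance of LAC-dWFL (step (i) followed by step (ii)) can be sketched on a toy problem. The threshold "learners", the noise model, and all data below are hypothetical illustrations, not the cited methods.

```python
import random
random.seed(0)

# Hypothetical toy: one binary WF x_w = 1[x_o > 0]; the downstream label is
# y = x_w; the observed WF flips the exact value with probability 0.3.
data = []
for _ in range(500):
    x_o = random.uniform(-1.0, 1.0)
    x_w = int(x_o > 0)
    x_w_bar = x_w if random.random() > 0.3 else 1 - x_w
    data.append((x_o, x_w_bar, x_w))

def fit_threshold(pairs):
    """One-parameter 'learner': the threshold minimizing empirical 0-1 loss."""
    candidates = sorted({x for x, _ in pairs})
    return min(candidates, key=lambda t: sum(int(x > t) != w for x, w in pairs))

# Step (i): learn g using the weak observations x_w_bar as weak supervision.
theta_g = fit_threshold([(x_o, w_bar) for x_o, w_bar, _ in data])
g = lambda x_o: int(x_o > theta_g)

# Step (ii): learn f with g fixed, feeding g's estimate as the WF input
# (the label equals the exact WF here, so f reduces to a threshold on g's output).
theta_f = fit_threshold([(g(x_o), y) for x_o, _, y in data])
f = lambda x_w_hat: int(x_w_hat > theta_f)

acc_g = sum(g(x_o) == x_w for x_o, _, x_w in data) / len(data)
acc_f = sum(f(g(x_o)) == y for x_o, _, y in data) / len(data)
print(f"g accuracy: {acc_g:.2f}, downstream accuracy: {acc_f:.2f}")
```

Despite the 30% corruption of the observed WF, step (i) recovers a near-optimal $g$, which step (ii) then inherits.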
+
+# 4. Theoretical analysis
+
+This section presents a theoretical analysis of the unified learning algorithm class, LAC-dWFL. Through this analysis, we elucidate the common properties of LAC-dWFL and establish a foundation for the theoretical exploration of various methods encompassed within this class. To achieve this, it is necessary to elucidate the properties of steps (i), (ii), and (iii) within LAC-dWFL. As established in Section 3.3, step (i) involves WSL methods, and its properties can therefore be analyzed using existing WSL theories. Consequently, our theoretical focus is directed toward steps (ii) and (iii). In Section 4.1, we derive a fundamental inequality to analyze steps (ii) and (iii). In Section 4.2, we establish an error bound for $f$ learned via step (ii), given any $g$ . In Section 4.3, we establish an error bound for $g$ learned via step (iii), given any $f$ .
+
+# 4.1. Deriving an Analytical Tool
+
+Our objective is to examine how the learning of $f$ in step (ii) and $g$ in step (iii) depend on the performance of $g$ and $f$, respectively. To achieve this, the error bounds for $f$ and $g$ must be expressed in terms of $R_{01,j}$ for any $j \in [F^{\mathrm{w}}]$ and $R_l$, respectively. To meet this requirement, we present the following lemma, with its proof provided in Appendix A.2.
+
+Lemma 4.1. For any measurable $l$ bounded by $U_{l} < \infty$ , $f \in \mathcal{F}$ and $\pmb{g} \in \mathcal{G}$ , the following holds:
+
+$$
+\begin{array}{l} \left| R _ {l} (f) - R _ {l, \mathbf {g}} (f) \right| \leq \\ \left(\sqrt {R _ {l} (f)} + \sqrt {R _ {l , g} (f)}\right) \left(2 U _ {l} \sum_ {j \in [ F ^ {\mathrm {w}} ]} R _ {0 l, j} \left(g _ {j}\right)\right) ^ {\frac {1}{2}}. \tag {4.6} \\ \end{array}
+$$
+
+Equation (4.6) shows that $|R_{l}(f) - R_{l,\mathbf{g}}(f)| = 0$ is achieved when either $\sum_{j\in [F^{\mathrm{w}}]}R_{01,j}(g_j) = 0$ or $\sqrt{R_l(f)} + \sqrt{R_{l,\mathbf{g}}(f)} = 0$. Although intuitive, this inequality plays a critical role in deriving the subsequent error bounds. Lemma 4.1 enables the analysis of how $f$ and $\mathbf{g}$ influence each other's learning processes.
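
Since Lemma 4.1 holds for any distribution, it can be sanity-checked on an empirical toy distribution with a single binary WF. The models and data below are hypothetical; they only verify that the computed risks satisfy Eq. (4.6).

```python
import random
random.seed(1)

# single binary WF (F^w = 1); (x_o, x_w, y) drawn from a toy distribution
samples = [(random.uniform(-1.0, 1.0), random.randint(0, 1), random.randint(0, 1))
           for _ in range(1000)]

U_l = 1.0
loss = lambda y_hat, y: min((y_hat - y) ** 2, U_l)   # bounded by U_l
f = lambda x_w, x_o: 0.5 * x_w + 0.25 * (x_o > 0)    # toy label predictor
g = lambda x_o: int(x_o > 0.3)                       # deliberately imperfect

n = len(samples)
R_l = sum(loss(f(x_w, x_o), y) for x_o, x_w, y in samples) / n      # R_l(f)
R_lg = sum(loss(f(g(x_o), x_o), y) for x_o, x_w, y in samples) / n  # R_{l,g}(f)
R_01 = sum(g(x_o) != x_w for x_o, x_w, _ in samples) / n            # R_{01}(g)

lhs = abs(R_l - R_lg)
rhs = (R_l ** 0.5 + R_lg ** 0.5) * (2.0 * U_l * R_01) ** 0.5
print(lhs <= rhs)  # → True
```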
+
+# 4.2. Analysis of Learning Label Prediction Model $f$
+
+In this section, we conduct a theoretical analysis of the learning process of $f$ under LAC-dWFL's step (ii), where $g$ remains fixed. Since step (ii) involves learning $f$ based on the output of $g$ rather than $X^{\mathrm{w}}$, the relationship between $g$ and $f$ cannot be analyzed using ordinary supervised learning frameworks (Mohri et al., 2018). We derive an error bound for $f$ that captures how the errors of $g$ influence the learning process of $f$, enabling an analysis of step (ii).
+
+To facilitate the analysis, we introduce the following definitions. Given $n \in \mathbb{N}_+$ training samples, we define the ordinary dataset, $S = \{(\pmb{x}_i, y_i)\}_{i=1}^n$ , and the weak dataset, $\overline{S} = \{(\bar{\pmb{x}}_i^{\mathrm{w}}, \pmb{x}_i^{\mathrm{o}}, y_i)\}_{i=1}^n$ . Here, $\pmb{x}_i = (\pmb{x}_i^{\mathrm{o}}, \pmb{x}_i^{\mathrm{w}})$ , $y_i$ and $\bar{\pmb{x}}_i^{\mathrm{w}}$ represent the realizations of $\pmb{X} = (X^{\mathrm{w}}, X^{\mathrm{o}})$ , $Y$ and $\overline{X}^{\mathrm{w}}$ , respectively. Also, the $i$ -th samples in $S$ and $\overline{S}$ correspond to the same instance, for any $i \in [n]$ . We assume that $\{(\pmb{x}_i, y_i, \bar{\pmb{x}}_i^{\mathrm{w}})\}_{i=1}^n$ are independently drawn from $p_*(\pmb{x}, y) \bar{p}_*(\bar{\pmb{x}}^{\mathrm{w}}|\pmb{x}, y)$ . Let $\widehat{R}_l$ and $\widehat{R}_{l,g}$ denote the empirical risks calculated by the sample average over $S$ and $\overline{S}$ , respectively. For any $\pmb{g} \in \mathcal{G}$ , the empirical risk minimizer obtained from LAC-dWFL's step (ii) is defined as follows:
+
+$$
+f_{\boldsymbol{g}, \overline{S}} := \arg \min_{f \in \mathcal{F}} \widehat{R}_{l, \boldsymbol{g}}(f).
+$$
+
+Using Lemma 4.1, we establish the error bound for $f_{g,\overline{S}}$ learned via LAC-dWFL's step (ii) in the following theorem. The proof is provided in Appendix A.3.
+
+Theorem 4.2. Let $S$ and $\overline{S}$ be the ordinary dataset and weak dataset, respectively, each containing $n$ samples. For any measurable $g \in \mathcal{G}$ , $L_{l}$ -Lipschitz continuous $l$ bounded by $U_{l} < \infty$ and $\delta \in (0,1)$ , the following holds with probability at least $1 - \delta$ :
+
+$$
+\begin{array}{l} R _ {l, \boldsymbol {g}} (f _ {\boldsymbol {g}, \overline {S}}) - R _ {l} (f _ {\mathcal {F}}) \leq \\ 4 \left(L _ {l} \Re _ {n} ^ {*} (\mathcal {F}) + L _ {l} \Re _ {n} ^ {\boldsymbol {g}} (\mathcal {F}) + U _ {l} \sqrt {\frac {\log (4 / \delta)}{2 n}}\right) \\ + \left\{2 \left(R _ {l} \left(f _ {\mathcal {F}}\right) + 4 L _ {l} \Re _ {n} ^ {*} (\mathcal {F}) + 2 U _ {l} \sqrt {\frac {\log (4 / \delta)}{2 n}}\right) ^ {\frac {1}{2}} \right. \tag {4.7} \\ \left. + \left(2 U _ {l} \sum _ {j \in [F ^ {\mathrm {w}}]} R _ {0 1, j} (g _ {j})\right) ^ {\frac {1}{2}} \right\} \\ \times \left(2 U _ {l} \sum _ {j \in [F ^ {\mathrm {w}}]} R _ {0 1, j} (g _ {j})\right) ^ {\frac {1}{2}}. \\ \end{array}
+$$
+
+Here, $f_{\mathcal{F}} \coloneqq \arg \min_{f \in \mathcal{F}} R_l(f)$ represents the true risk minimizer in ordinary supervised learning.
+
+The terms $\Re_{n}^{*}(\mathcal{F})$ and $\Re_{n}^{g}(\mathcal{F})$ represent the Rademacher complexities of the hypothesis class $\mathcal{F}$ under the distributions $p_{*}(\boldsymbol{x})$ and $p_{*}(\boldsymbol{x}^{\mathrm{o}})q_{\boldsymbol{g}}(\boldsymbol{x}^{\mathrm{w}}|\boldsymbol{x}^{\mathrm{o}})$ , respectively, and measure the complexity of $\mathcal{F}$ . Equation (4.7) reveals that the errors of $\boldsymbol{g}$ combine with the result of virtual ordinary supervised learning $(R_{l}(f_{\mathcal{F}}) + \dots)^{1/2}$ to affect the error bound for $f$ in WFL.
+
+The first contribution of Theorem 4.2 is its ability to reveal the following property concerning the learning of $f$ under LAC-dWFL's step (ii). Theorem 4.2 demonstrates how the convergence rate of the error bound for $f$ with respect to $n$ depends on the estimation errors of $g$. Given the established result that the order of the Rademacher complexity's upper bound is $\mathcal{O}_p(1 / n^{1 / 2})$ for kernel ridge regression and multilayer perceptrons (Mohri et al., 2018; Neyshabur et al., 2015), we assume that the orders of the Rademacher complexities of $\mathcal{F}$ and $\mathcal{G}$ are $\mathcal{O}_p(1 / n^{1 / 2})$. Additionally, assume that $\mathcal{F}$ is sufficiently expressive and that $R_{l}(f_{\mathcal{F}}) = 0$. Under these assumptions, and with $g$ fixed, the first and second terms in the error bound have orders of $\mathcal{O}_p(1 / n^{1 / 2})$ and $\mathcal{O}_p(1 / n^{1 / 4})$, respectively. Therefore, the second term, which decreases more slowly, becomes dominant when $\sum_{j\in [F^{\mathrm{w}}]}R_{01,j}(g_j)$ is large. This result suggests that when learning $g$ to improve the generalization performance of $f$, maximizing the estimation accuracy of $g$ is sufficient, rather than tailoring the outputs of $g$ specifically to $f$. Furthermore, when WFs contain ambiguous information such as CFs, their values $\overline{X}^{\mathrm{w}}$ can serve as inputs to $f$. The choice between $\overline{X}^{\mathrm{w}}$ and the learned $g$ is a critical decision, empirically validated in the context of CFL (Sugiyama & Uchida, 2024). Theorem 4.2 concludes that, if the learned $g$ estimates $X^{\mathrm{w}}$ more accurately than $\overline{X}^{\mathrm{w}}$ does, using $g$ reduces the error bound for $f$ more than directly using $\overline{X}^{\mathrm{w}}$.
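+The dominance of the slower-decaying term can be checked with a short numerical sketch. The constants below are illustrative placeholders (the actual bound also carries $L_l$, $U_l$, and the complexity terms), so only the decay rates matter:
+
+```python
+import math
+
+# With g fixed and R_l(f_F) = 0, the first term of the bound decays like
+# n^(-1/2), while the second decays like n^(-1/4) scaled by the total
+# estimation error of g (unit constants used purely for illustration).
+
+def first_term(n):
+    return 1.0 / math.sqrt(n)
+
+def second_term(n, err_sum):
+    return math.sqrt(err_sum) / n ** 0.25
+
+for n in (10**2, 10**4, 10**6):
+    print(n, first_term(n), second_term(n, err_sum=0.3))
+# The second term dominates at every n shown, and increasingly so as n grows.
+```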
+
+The second contribution of Theorem 4.2 is its ability to integrate our WFL theory with the theories of WSL methods used to learn $\pmb{g}$ under LAC-dWFL's step (i). This is because, when $g_{j}$ is learned using certain WSL methods, their theoretical frameworks enable the derivation of error bounds for $R_{01,j}(g_j)$ in Eq. (4.7) (Cour et al., 2011; Feng et al., 2020; Xu et al., 2021; Natarajan et al., 2013; Ishida et al., 2017; Yu et al., 2018). For example, when $\overline{X}_j^{\mathrm{w}}$ is a CF, applying the CLL method by Ishida et al. (Ishida et al., 2017) for learning $g_{j}$ , and assuming that $\mathcal{G}_j$ is sufficiently large to satisfy $\min_{g_j\in \mathcal{G}_j}R_{01,j}(g_j) = 0$ , the following holds for any $L_{l}$ -Lipschitz continuous $l$ and any $\delta \in (0,1)$ with probability at least $1 - \delta$ :
+
+$$
+\begin{array}{l} R _ {0 1, j} (\hat {g} _ {j}) \leq 4 | \mathcal {X} _ {j} ^ {\mathrm {w}} | (| \mathcal {X} _ {j} ^ {\mathrm {w}} | - 1) L _ {l} \Re_ {n} ^ {*} (\mathcal {G} _ {j}) \\ + \left(\left| \mathcal {X} _ {j} ^ {\mathrm {w}} \right| - 1\right) \sqrt {\frac {8 \log (2 / \delta)}{n}}. \tag {4.8} \\ \end{array}
+$$
+
+The combination of such an error bound with Eq. (4.7) enables a unified generalization error analysis for the sequential learning of $g$ and $f$ under LAC-dWFL's steps (i) and (ii), elucidating the following three aspects. First, the combination enables the analysis of the influence of WFs' properties on the learning of $f$. For instance, since Eq. (4.8) depends on $|\mathcal{X}_j^{\mathrm{w}}|$, combining it with our bound allows for analyzing how the number of possible values $|\mathcal{X}_j^{\mathrm{w}}|$ influences $f$'s learning. Second, this combination elucidates the impact of whether $g$ is learned or not on the learning of $f$. Applying Eq. (4.8) to Eq. (4.7) demonstrates that the order of the error bound for $f$ is $\mathcal{O}_p(1 / n^{1 / 2})$. Thus, when a constant value or $\overline{\mathbf{X}}^{\mathrm{w}}$ is used as $g(X^{\mathrm{o}})$, the order of the error bound remains $\mathcal{O}_p(1 / n^{1 / 4})$, whereas learning $g$ improves this order to $\mathcal{O}_p(1 / n^{1 / 2})$. Third, Theorem 4.2 theoretically connects the error bounds of $f$ and $g$, thereby elucidating the conditions under which sequential learning achieves consistency, as shown in the following theorem. The proof is provided in Appendix A.4.
+
+Theorem 4.3. Assume the existence of true deterministic functions $g_{j}^{*}: \mathcal{X}^{\mathrm{o}} \to \mathcal{X}_{j}^{\mathrm{w}}$ for all $j \in [F^{\mathrm{w}}]$, such that $(g_{1}^{*}, \ldots, g_{F^{\mathrm{w}}}^{*}) \in \mathcal{G}$, and $f^{*}: \mathcal{X} \to \mathcal{Y}$ such that $f^{*} \in \mathcal{F}$. Additionally, suppose $l$ bounded by $U_{l} < \infty$ is $L_{l}$ -Lipschitz continuous, and $\Re_{n}^{*}(\mathcal{F})$ and $\Re_{n}^{g}(\mathcal{F})$ asymptotically approach 0 as $n \to \infty$. If, for all $j \in [F^{\mathrm{w}}]$, the number of samples available for learning $g_{j}$ tends to infinity as $n \to \infty$, and a consistent method is employed to learn $g_{j}$, then sequential learning achieves consistency (i.e., as $n \to \infty$, $R_{l,\boldsymbol{g}}(f_{\boldsymbol{g},\overline{S}}) \to R_l(f_{\mathcal{F}})$ ).
+
+Thus, under the conditions stated in Theorem 4.3, an asymptotically optimal pair of $g$ and $f$ can be obtained through sequential learning alone. In contrast, in practical finite-sample scenarios, iterative learning involving LAC-dWFL's steps (ii) and (iii) may be necessary (Le Morvan et al., 2020b; 2021). Section 4.3 examines step (iii) for a comprehensive understanding of LAC-dWFL.
+
+# 4.3. Analysis of Learning Feature Estimation Models $g$
+
+This section provides a theoretical analysis of the learning of $\pmb{g}$ with $f$ fixed in LAC-dWFL's step (iii). We aim to reveal how the learning of $\pmb{g}$ using $R_{l,\lambda}^{\mathrm{dWFL}}$, with $f$ fixed, is influenced by $f$.
+
+The learning of $g$ using $R_{l,\lambda}^{\mathrm{dWFL}}$ in Eq. (3.4) involves the simultaneous minimization of two risks, which makes it challenging to conduct generalization error analysis for ERM directly. In contrast, in ordinary supervised learning, the simultaneous minimization of an expected risk and a regularization term is formulated as structural risk minimization (SRM), which has established methods for generalization error analysis (Mohri et al., 2018). SRM selects a hypothesis from a restricted hypothesis class in which the regularization term is below a certain threshold and derives error bounds for the selected hypothesis. Focusing on the influence of $f$ on the learning of $g$, we treat $R_{l,f}$ as the expected risk for which bounds are derived and $\sum_{j\in [F^{\mathrm{w}}]}R_{01,j}$ as the regularization term, and apply SRM theory.
+
+To apply SRM theory, for any $j \in [F^{\mathrm{w}}]$, we introduce the following definitions. Let $l_j$ denote the loss function for $g_j$ computed using $\overline{X}_j^{\mathrm{w}}$. Define the datasets $\overline{S}_j := \{(\bar{x}_{ij}^{\mathrm{w}}, \boldsymbol{x}_i^{\mathrm{o}})\}_{i=1}^n$. Let $\overline{R}_{l_j}(g_j) := \mathbb{E}_{p_*(\boldsymbol{x}, y) \bar{p}_*(\bar{x}_j^{\mathrm{w}} | \boldsymbol{x}, y)}[l_j(g_j(\boldsymbol{X}^{\mathrm{o}}), \overline{X}_j^{\mathrm{w}})]$ denote the expected risk of $g_j$ computed using $\overline{X}_j^{\mathrm{w}}$. Additionally, $\widehat{\overline{R}}_{l_j}$ denotes the empirical risk, which approximates $\overline{R}_{l_j}$ by the sample average over $\overline{S}_j$. We assume either that $\overline{R}_{l_j}$ satisfies $R_{01,j}(g_j) \leq \overline{R}_{l_j}(g_j)$ for any $g_j$ (with equality as a special case), or that the optimal solutions of $\overline{R}_{l_j}$ coincide with those of $R_{01,j}$. For any $\boldsymbol{r} = (r_1, \dots, r_{F^{\mathrm{w}}}) \in \mathbb{R}_+^{F^{\mathrm{w}}}$, define the following hypothesis class:
+
+$$
+\mathcal {G} (\boldsymbol {r}, \overline {{S}}) := \mathcal {G} _ {1} (r _ {1}, \overline {{S}} _ {1}) \times \dots \times \mathcal {G} _ {F ^ {\mathrm {w}}} (r _ {F ^ {\mathrm {w}}}, \overline {{S}} _ {F ^ {\mathrm {w}}}),
+$$
+
+where $\mathcal{G}_j(r_j,\overline{S}_j)\coloneqq \{g_j \in \mathcal{G}_j \mid \widehat{\overline{R}}_{l_j}(g_j)\leq r_j\}$ for all $j\in [F^{\mathrm{w}}]$. To explicitly indicate that $R_{l,\boldsymbol{g}}(f)$ is part of the objective function of $\pmb{g}$, we denote it as $R_{l,f}(\pmb {g})(\equiv R_{l,\pmb{g}}(f))$, and the empirical risk of $R_{l,f}(\pmb {g})$ is defined as $\widehat{R}_{l,f}(\pmb {g})(\equiv \widehat{R}_{l,\boldsymbol{g}}(f))$. By performing the learning of $\pmb{g}$ in LAC-dWFL's step (iii) as outlined below, the analysis of minimizing $R_{l,f}$ while reducing $\sum_{j\in [F^{\mathrm{w}}]}R_{01,j}$ becomes feasible:
+
+$$
+\boldsymbol {g} _ {f, \bar {S}} ^ {(r)} := \arg \min _ {\boldsymbol {g} \in \mathcal {G} (\boldsymbol {r}, \bar {S})} \widehat {R} _ {l, f} (\boldsymbol {g}). \tag {4.9}
+$$
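+Eq. (4.9) is an SRM-style two-stage selection: restrict each $\mathcal{G}_j$ to hypotheses whose empirical weak risk is at most $r_j$, then minimize $\widehat{R}_{l,f}$ over the restricted class. The sketch below runs this rule over a hypothetical finite class with made-up risk values, purely for illustration (the paper's hypothesis classes are neural networks, not finite sets):
+
+```python
+# SRM-style selection: keep hypotheses with empirical weak risk <= r (the
+# class G(r, S_bar)), then minimize the empirical risk R_hat_{l,f} over them.
+
+def srm_select(G, weak_risk, r, joint_risk):
+    restricted = [g for g in G if weak_risk(g) <= r]
+    return min(restricted, key=joint_risk)
+
+# Hypothetical hypotheses and risk values.
+G = ["g_a", "g_b", "g_c"]
+weak_risk = {"g_a": 0.05, "g_b": 0.30, "g_c": 0.10}.get
+joint_risk = {"g_a": 0.40, "g_b": 0.10, "g_c": 0.20}.get
+chosen = srm_select(G, weak_risk, r=0.15, joint_risk=joint_risk)
+# "g_b" minimizes the joint risk but violates the weak-risk threshold,
+# so the restricted minimizer is "g_c".
+```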
+
+Based on the above definitions, the assumptions, and Lemma 4.1, the error bound for $g_{f,\overline{S}}^{(r)}$ is presented in the following theorem. The proof is provided in Appendix A.5.
+
+Theorem 4.4. Suppose $S$ and $\overline{S}$ represent an ordinary dataset and a weak dataset of $n$ samples, respectively. Then, for any measurable $f \in \mathcal{F}$ , $l$ bounded by $U_{l} < \infty$ and $\delta \in (0,1)$ , the following holds with probability at least $1 - \delta$ :
+
+$$
+\begin{array}{l} R _ {l, f} \left(\boldsymbol {g} _ {f, \overline {S}} ^ {(\boldsymbol {r})}\right) - R _ {l} (f) \leq \\ 4 \Re _ {n} ^ {*} (\widetilde {\mathcal {G}} _ {l, f} (\boldsymbol {r}, \overline {S})) + 2 U _ {l} \sqrt {\frac {\log (2 / \delta)}{2 n}} \\ + \left\{2 \sqrt {R _ {l} (f)} + \left(2 U _ {l} \sum _ {j \in [F ^ {\mathrm {w}}]} R _ {0 1, j} \left(g _ {\overline {S}, j} ^ {(r _ {j})}\right)\right) ^ {\frac {1}{2}} \right\} \tag {4.10} \\ \times \left(2 U _ {l} \sum _ {j \in [F ^ {\mathrm {w}}]} R _ {0 1, j} \left(g _ {\overline {S}, j} ^ {(r _ {j})}\right)\right) ^ {\frac {1}{2}}. \\ \end{array}
+$$
+
+Here, $\widetilde{\mathcal{G}}_{l,f}(\pmb {r},\overline{S})\coloneqq \{(\pmb{x}^{\mathrm{o}},y)\mapsto l(f(\pmb {g}(\pmb{x}^{\mathrm{o}}),\pmb{x}^{\mathrm{o}}),y):\pmb {g}\in \mathcal{G}(\pmb {r},\overline{S})\}$.
+
+The term $R_{01,j}(g_{\overline{S},j}^{(r_j)})$ in Eq. (4.10) can be further upper-bounded by defining $\mathcal{G}_j(r_j,\overline{S}_j)$ as the set of empirical risk minimizers obtained via weakly supervised learning using $\overline{S}_j$. For instance, when $\overline{X}_j^{\mathrm{w}}$ is a CF, every hypothesis in $\mathcal{G}_j(r_j,\overline{S}_j)$ is an empirical risk minimizer obtained using the method of Ishida et al. (2017), and $\mathcal{G}_j$ is sufficiently large to satisfy $\min_{g_j\in \mathcal{G}_j}R_{01,j}(g_j) = 0$, then $R_{01,j}(g_{\overline{S},j}^{(r_j)})$ can be upper-bounded by Eq. (4.8). Therefore, when we define $\mathcal{G}_j(r_j,\overline{S}_j)$ as the class of empirical risk minimizers obtained by such a method with guaranteed consistency, and assume that the order of $\Re_n^* (\mathcal{G}_j)$ is $\mathcal{O}_p(1 / n^{1 / 2})$, the order of the upper bound of $R_{01,j}(g_{\overline{S},j}^{(r_j)})$ is also $\mathcal{O}_p(1 / n^{1 / 2})$. Furthermore, assuming that $\Re_n^* (\mathcal{G}_j)\to 0$ as $n\to \infty$, it follows that $R_{01,j}(g_{\overline{S},j}^{(r_j)})\to 0$ as $n\to \infty$. In the following discussion, we assume that $R_{01,j}(g_{\overline{S},j}^{(r_j)})$ can be upper-bounded by a probability inequality similar in form to Eq. (4.8).
+
+Theorem 4.4 elucidates the influence of $f$'s prediction error and characteristics on the error bound for learning $g$ via LAC-dWFL's step (iii), as well as the convergence of $g$'s learning. First, Theorem 4.4 demonstrates that the convergence rate of the upper bound of $R_{l,f}(\pmb{g}_{f,\overline{S}}^{(\pmb{r})})$ with respect to $n$ depends significantly on the expected risk $R_{l}(f)$. Specifically, assuming the orders of the Rademacher complexities in Eq. (4.10) are $\mathcal{O}_p(1 / n^{1 / 2})$, the orders of the first and second terms on the RHS of Eq. (4.10) are $\mathcal{O}_p(1 / n^{1 / 2})$ and $\mathcal{O}_p(1 / n^{1 / 4})$, respectively. Consequently, the slower-decreasing second term becomes dominant when $R_{l}(f)$ is large. Although $R_{l}(f)$ cannot be directly minimized in WFL, it has been shown that minimizing $R_{l,\lambda}^{\mathrm{dWFL}}(f,\cdot)$ helps reduce $R_{l}(f)$ (Theorem 3.1). Therefore, $R_{l}(f)$ is expected to decrease through LAC-dWFL's step (ii). Additionally, if $l$ is $L_{l}$-Lipschitz continuous and $f$ is $L_{f}$-Lipschitz continuous, then $\Re_n^* (\widetilde{\mathcal{G}}_{l,f}(\pmb {r},\overline{S}))\leq L_lL_f\Re_n^* (\mathcal{G}(\pmb {r},\overline{S}))$ holds (Mohri et al., 2018). This implies that the learning efficiency of $g$ improves when $f$ varies more smoothly with respect to its input. Furthermore, combining Theorem 4.4 with Theorem 4.2 reveals the conditions under which iterative learning in LAC-dWFL achieves consistency. The proof is shown in Appendix A.6.
+
+Theorem 4.5. In addition to the conditions stated in Theorem 4.3, assume $f$ obtained via LAC-dWFL's step (ii) is Lipschitz continuous, and the Rademacher complexities about $g$ asymptotically converge to 0 as $n$ increases. Furthermore, for any $j \in [F^{\mathrm{w}}]$ , define $\mathcal{G}_j(r_j, \overline{S}_j)$ as the set of empirical risk minimizers obtained by methods that use $\overline{S}_j$ and are guaranteed to achieve consistency. Then iterative learning achieves consistency.
+
+# 5. Experiments
+
+In Section 4, we revealed two key properties of LAC-dWFL: (1) the mutual influence between the feature estimation models $g$ and the label prediction model $f$ during their respective learning processes, and (2) the relationship between the generalization error and the number of training samples $n$ in WFL. Furthermore, the theoretical analysis demonstrated that sequential learning alone suffices for WFL. To validate a critical aspect of WFL, namely the impact of the estimation error of $g$ on the learning of $f$ (Theorem 4.2), we evaluate how varying the estimation errors of $g$ affects $f$'s learning performance.
+
+
+Figure 5.1. The relationship between the number of training samples $n$ and $R_{l,\boldsymbol{g}}(f_{\boldsymbol{g},\overline{S}})$ for estimation errors of $\boldsymbol{g}$ set to $R_{01,j}(g_j) \in \{0.1, 0.2, \ldots, 0.9\}$ for all $j \in [F^{\mathrm{w}}]$.
+Figure 5.2. A comparison between $R_{l,\boldsymbol{g}}(f_{\boldsymbol{g},\overline{S}})$ and the error bound derived in Theorem 4.2, for estimation errors $R_{01,j}(g_j) \in \{0.1, 0.3, 0.5, 0.7, 0.9\}$ for all $j \in [F^{\mathrm{w}}]$.
+
+# 5.1. Experimental Settings
+
+We used four real-world datasets: Adult (Becker & Kohavi, 1996), Bank Marketing (Moro & Cortez, 2014), kick (Vanschoren et al., 2013), and Census-Income (KDD) (cen, 2000; Dua & Graff, 2017). We refer to them as Adult, Bank, Kick, and Census, respectively. Details of these datasets are summarized in Appendix B.1. For each dataset, $50\%$ of the samples were reserved as test data to estimate the generalization error. In this experiment, we focused on a representative case of WFs, where all categorical features are treated as CFs (Sugiyama & Uchida, 2024). Both the feature estimation models $g$ and the label prediction model $f$ were implemented as two-layer perceptrons with hidden layers of width 500 and ReLU activations. Logistic loss was used as $l$. The Rademacher complexity, required for calculating the error bounds, was estimated using the method proposed by Neyshabur et al. (2015). Details of the experimental settings are summarized in Appendix B.2. The reported results are averages over 5 trials. The experimental scripts used in this paper are available at https://github.com/KOHsEMP/discrete_WFL
+
+# 5.2. Impact of $g$ on $f$ 's learning
+
+This section investigates how the estimation errors of the feature estimation models $\pmb{g}$ affect the learning of the label prediction model $f$. This investigation requires precise control over the estimation errors of $\pmb{g}$; however, achieving such fine-grained control through WSL methods is challenging. Importantly, for this experiment, the method used to obtain $\pmb{g}$ matters less than the influence of its estimation errors on $f$. Thus, we employ synthetic estimation functions for $\pmb{g}$ that randomly misestimate with controlled error rates, enabling a systematic examination of the impact of $\pmb{g}$'s estimation errors on $f$'s learning.
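+A minimal sketch of such a synthetic estimator (our reconstruction of the idea, not the authors' released script): each $g_j$ returns the true categorical value but, with probability $\varepsilon$, substitutes a uniformly random wrong value, so its estimation error is approximately $\varepsilon$ by construction.
+
+```python
+import random
+
+def make_noisy_estimator(true_values, domain, eps, seed=0):
+    """Return g_j(i): the true value of instance i, misestimated with
+    probability eps (uniform over the wrong values in the domain)."""
+    rng = random.Random(seed)
+    def g_j(i):
+        v = true_values[i]
+        if rng.random() < eps:
+            return rng.choice([u for u in domain if u != v])
+        return v
+    return g_j
+
+# Hypothetical categorical feature over {0, 1, 2} on 10,000 instances.
+rng_data = random.Random(1)
+true_values = [rng_data.choice([0, 1, 2]) for _ in range(10_000)]
+g_j = make_noisy_estimator(true_values, domain=[0, 1, 2], eps=0.3)
+err = sum(g_j(i) != true_values[i] for i in range(10_000)) / 10_000
+# err is close to the target error rate 0.3
+```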
+
+We vary the estimation error of $\pmb{g}$ from 10% to 90% and train $f$ under these settings. Figure 5.1 shows the relationship between $n$ and $R_{l,g}(f_{g,\overline{S}})$ . The results confirm that, as shown in Theorem 4.2, lower estimation errors of $\pmb{g}$ lead to a higher reduction rate of $R_{l,g}(f_{g,\overline{S}})$ as $n$ increases.
+
+Additionally, Figure 5.2 compares the generalization error of $f$, shown in Figure 5.1, with the error bound in Theorem 4.2. Figure 5.2 shows that $R_{l,g}(f_{g,\overline{S}})$ and its bound decrease with increasing $n$ in a similar fashion, with the reduction becoming more pronounced as the estimation error of $g$ decreases. Therefore, Theorem 4.2 effectively captures a fundamental characteristic of WFL, namely the influence of $g$ on the actual learning of $f$, consistently across various scenarios. The discrepancy between $R_{l,g}(f_{g,\overline{S}})$ and our bound in Figure 5.2 can be attributed to the fact that our bound does not account for the feature importance of WFs in predicting $Y$. This suggests a new research direction: deriving error bounds that incorporate the feature importance of WFs. Our results provide a foundation for such an approach.
+
+# 6. Conclusion
+
+This paper presented a unified formalization and theoretical analysis of discrete WFL. First, we proposed a formulation of WFL capable of handling arbitrary discrete WFs. We validated this formulation by demonstrating that the introduced objective function aids in learning a label prediction model $f$ that captures the true input-output relationship. Within this framework, we performed a generalization error analysis for LAC-dWFL, a generalized learning algorithm class designed to learn both feature estimation models $g$ and $f$. This analysis revealed the detailed influence of the estimation errors of $g$ and $f$ on the error bounds of $f$ and $g$, respectively. Additionally, we identified theoretical conditions under which consistency can be achieved for the sequential and iterative learning approaches in LAC-dWFL. Finally, numerical experiments on real-world datasets verified that our theoretical results align with observed learning behavior. This study provides comprehensive theoretical insights into various problem settings, such as ItR and CF, involving discrete WFs.
+
+# Impact Statement
+
+Our paper presents a formalization and theoretical analysis of discrete WFL that accommodates arbitrary discrete WFs. Understanding the impact of low-quality input features on the training of predictive models is crucial for the development and deployment of safe machine learning systems. This is because degraded input feature quality can potentially lead to predictive models that are vulnerable to adversarial attacks or that propagate socially undesirable biases. The theoretical results presented in this work are expected to provide a foundational perspective for discussions concerning the interplay between input feature quality and the safety of machine learning.
+
+# Acknowledgment
+
+This work was supported in part by the Japan Society for the Promotion of Science through Grants-in-Aid for Scientific Research (C) (23K11111).
+
+# References
+
+Census-Income (KDD). UCI Machine Learning Repository, 2000. DOI: https://doi.org/10.24432/C5N30T.
+Becker, B. and Kohavi, R. Adult. UCI Machine Learning Repository, 1996. DOI: https://doi.org/10.24432/C5XW20.
+Bertsimas, D., Delarue, A., and Pauphilet, J. Prediction with missing data. stat, 1050:7, 2021.
+Chapelle, O., Schölkopf, B., and Zien, A. Semi-Supervised Learning. The MIT Press, Cambridge, Massachusetts, 2006.
+Cour, T., Sapp, B., and Taskar, B. Learning from partial labels. J. Mach. Learn. Res., 12:1501-1536, July 2011. ISSN 1532-4435.
+Dua, D. and Graff, C. UCI machine learning repository, 2017. URL http://archive.ics.uci.edu/ml.
+Elkan, C. and Noto, K. Learning classifiers from only positive and unlabeled data. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '08, pp. 213-220, New York, NY, USA, 2008. Association for Computing Machinery. ISBN 9781605581934. doi: 10.1145/1401890.1401920.
+Feng, L., Lv, J., Han, B., Xu, M., Niu, G., Geng, X., An, B., and Sugiyama, M. Provably consistent partial-label learning. In Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS '20, Red Hook, NY, USA, 2020. Curran Associates Inc. ISBN 9781713829546.
+Ipsen, N., Mattei, P.-A., and Frellsen, J. How to deal with missing data in supervised deep learning? In International Conference on Learning Representations (ICLR), 2022.
+
+Ipsen, N. B., Mattei, P.-A., and Frellsen, J. not-{miwae}: Deep generative modelling with missing not at random data. In International Conference on Learning Representations, 2021.
+Ishida, T., Niu, G., Hu, W., and Sugiyama, M. Learning from complementary labels. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.
+Ishida, T., Niu, G., Menon, A., and Sugiyama, M. Complementary-label learning for arbitrary losses and models. In International Conference on Machine Learning, pp. 2971-2980. PMLR, 2019.
+Josse, J., Chen, J. M., Prost, N., Varoquaux, G., and Scornet, E. On the consistency of supervised learning with missing values. Statistical Papers, 65(9):5447-5479, 2024.
+Kahn, M. Diabetes. UCI Machine Learning Repository. DOI: https://doi.org/10.24432/C5T59G.
+Katsura, Y. and Uchida, M. Bridging ordinary-label learning and complementary-label learning. In Pan, S. J. and Sugiyama, M. (eds.), Proceedings of The 12th Asian Conference on Machine Learning, volume 129 of Proceedings of Machine Learning Research, pp. 161-176. PMLR, 18-20 Nov 2020.
+Kingma, D. and Ba, J. Adam: A method for stochastic optimization. International Conference on Learning Representations, 12 2014.
+Le Morvan, M., Josse, J., Moreau, T., Scornet, E., and Varoquaux, G. Neumiss networks: differentiable programming for supervised learning with missing values. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., and Lin, H. (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 5980-5990. Curran Associates, Inc., 2020a.
+Le Morvan, M., Prost, N., Josse, J., Scornet, E., and Varoquaux, G. Linear predictor on linearly-generated data with missing values: non consistency and solutions. In Chiappa, S. and Calandra, R. (eds.), Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, volume 108 of Proceedings of Machine Learning Research, pp. 3165-3174. PMLR, 26-28 Aug 2020b.
+Le Morvan, M., Josse, J., Scornet, E., and Varoquaux, G. What's a good imputation to predict with missing values? In Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P., and Vaughan, J. W. (eds.), Advances in Neural Information Processing Systems, volume 34, pp. 11530-11540. Curran Associates, Inc., 2021.
+
+Lin, W.-I. and Lin, H.-T. Reduction from complementary-label learning to probability estimates. In Kashima, H., Ide, T., and Peng, W.-C. (eds.), Advances in Knowledge Discovery and Data Mining, pp. 469-481, Cham, 2023. Springer Nature Switzerland. ISBN 978-3-031-33377-4.
+Mattei, P.-A. and Frellsen, J. MIWAE: Deep generative modelling and imputation of incomplete data sets. In Chaudhuri, K. and Salakhutdinov, R. (eds.), Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 4413-4423. PMLR, 09-15 Jun 2019.
+Mohri, M., Rostamizadeh, A., and Talwalkar, A. Foundations of Machine Learning. The MIT Press, 2nd edition, 2018. ISBN 0262039400.
+Moro, S., Rita, P., and Cortez, P. Bank Marketing. UCI Machine Learning Repository, 2014. DOI: https://doi.org/10.24432/C5K306.
+Natarajan, N., Dhillon, I. S., Ravikumar, P. K., and Tewari, A. Learning with noisy labels. In Burges, C., Bottou, L., Welling, M., Ghahramani, Z., and Weinberger, K. (eds.), Advances in Neural Information Processing Systems, volume 26. Curran Associates, Inc., 2013.
+Neyshabur, B., Tomioka, R., and Srebro, N. Norm-based capacity control in neural networks. In Grünwald, P., Hazan, E., and Kale, S. (eds.), Proceedings of The 28th Conference on Learning Theory, volume 40 of Proceedings of Machine Learning Research, pp. 1376-1401, Paris, France, 03-06 Jul 2015. PMLR.
+Ruan, J., Zheng, Q., Zhao, R., and Dong, B. Biased complementary-label learning without true labels. IEEE Transactions on Neural Networks and Learning Systems, 35(2):2616-2627, 2024. doi: 10.1109/TNNLS.2022.3190528.
+Shalev-Shwartz, S. and Ben-David, S. Understanding machine learning: From theory to algorithms. Cambridge university press, 2014.
+Sugiyama, K. and Uchida, M. Learning from complementary features. arXiv preprint arXiv:2408.14788, 2024.
+Sugiyama, K. and Uchida, M. Unified analysis of continuous weak features learning with applications to learning from missing data. In Proceedings of the 42nd International Conference on Machine Learning, 2025.
+Tian, Y., Yu, X., and Fu, S. Partial label learning: Taxonomy, analysis and outlook. Neural Networks, 161:708-734, 2023. ISSN 0893-6080. doi: https://doi.org/10.1016/j.neunet.2023.02.019.
+
+Vanschoren, J., van Rijn, J. N., Bischl, B., and Torgo, L. Openml: networked science in machine learning. SIGKDD Explorations, 15(2):49-60, 2013. doi: 10.1145/2641190.2641198.
+Xu, N., Qiao, C., Geng, X., and Zhang, M.-L. Instance-dependent partial label learning. In Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P., and Vaughan, J. W. (eds.), Advances in Neural Information Processing Systems, volume 34, pp. 27119-27130. Curran Associates, Inc., 2021.
+Yeh, I.-C. Default of Credit Card Clients. UCI Machine Learning Repository, 2009. DOI: https://doi.org/10.24432/C55S3H.
+Yoon, J., Jordan, J., and van der Schaar, M. GAIN: Missing data imputation using generative adversarial nets. In Dy, J. and Krause, A. (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 5689-5698. PMLR, 10-15 Jul 2018.
+Yu, X., Liu, T., Gong, M., and Tao, D. Learning with biased complementary labels. In Proceedings of the European Conference on Computer Vision (ECCV), September 2018.
+Zaffran, M., Dieuleveut, A., Josse, J., and Romano, Y. Conformal prediction with missing values. In Krause, A., Brunskill, E., Cho, K., Engelhardt, B., Sabato, S., and Scarlett, J. (eds.), Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pp. 40578-40604. PMLR, 23-29 Jul 2023.
+
+# A. Proofs
+
+# A.1. Proof of Theorem 3.1
+
+Proof of Theorem 3.1. For any $f \in \mathcal{F}$ , $g \in \mathcal{G}$ , and $l$ bounded by $U_{l} < \infty$ , the following inequality holds:
+
+$$
+\begin{array}{l}
+R _ {l} (f) - R _ {l, \boldsymbol {g}} (f) \\
+= \mathbb {E} _ {p _ {*} (\boldsymbol {x}, y)} [ l (f (\boldsymbol {X}), Y) ] - \mathbb {E} _ {p _ {*} (\boldsymbol {x} ^ {\mathrm {o}}, y) q _ {\boldsymbol {g}} (\boldsymbol {x} ^ {\mathrm {w}} | \boldsymbol {x} ^ {\mathrm {o}})} [ l (f (\boldsymbol {X}), Y) ] \\
+= \mathbb {E} _ {p _ {*} (\boldsymbol {x} ^ {\mathrm {o}}, y)} \left[ \sum _ {\boldsymbol {x} ^ {\mathrm {w}} \in \mathcal {X} ^ {\mathrm {w}}} l \left(f \left(\boldsymbol {X} ^ {\mathrm {o}}, \boldsymbol {x} ^ {\mathrm {w}}\right), Y\right) \left(p _ {*} \left(\boldsymbol {x} ^ {\mathrm {w}} \mid \boldsymbol {X} ^ {\mathrm {o}}, Y\right) - q _ {\boldsymbol {g}} \left(\boldsymbol {x} ^ {\mathrm {w}} \mid \boldsymbol {X} ^ {\mathrm {o}}\right)\right) \right] \\
+= \mathbb {E} _ {p _ {*} (\boldsymbol {x} ^ {\mathrm {o}}, y)} \left[ \sum _ {\boldsymbol {x} ^ {\mathrm {w}} \in \mathcal {X} ^ {\mathrm {w}}} l \left(f \left(\boldsymbol {X} ^ {\mathrm {o}}, \boldsymbol {x} ^ {\mathrm {w}}\right), Y\right) \left(p _ {*} \left(\boldsymbol {x} ^ {\mathrm {w}} \mid \boldsymbol {X} ^ {\mathrm {o}}, Y\right) - \mathbb {1} _ {[ \boldsymbol {x} ^ {\mathrm {w}} = \boldsymbol {g} \left(\boldsymbol {X} ^ {\mathrm {o}}\right) ]}\right) \right] \\
+\leq \mathbb {E} _ {p _ {*} (\boldsymbol {x} ^ {\mathrm {o}}, y)} \left[ \sum _ {\boldsymbol {x} ^ {\mathrm {w}} \in \mathcal {X} ^ {\mathrm {w}}} l (f (\boldsymbol {X} ^ {\mathrm {o}}, \boldsymbol {x} ^ {\mathrm {w}}), Y) \left(p _ {*} \left(\boldsymbol {x} ^ {\mathrm {w}} \mid \boldsymbol {X} ^ {\mathrm {o}}, Y\right) - \mathbb {1} _ {[ \boldsymbol {x} ^ {\mathrm {w}} = \boldsymbol {g} (\boldsymbol {X} ^ {\mathrm {o}}) ]} p _ {*} \left(\boldsymbol {x} ^ {\mathrm {w}} \mid \boldsymbol {X} ^ {\mathrm {o}}, Y\right)\right) \right] \\
+= \mathbb {E} _ {p _ {*} (\boldsymbol {x} ^ {\mathrm {o}}, y)} \left[ \sum _ {\boldsymbol {x} ^ {\mathrm {w}} \in \mathcal {X} ^ {\mathrm {w}}} l \left(f \left(\boldsymbol {X} ^ {\mathrm {o}}, \boldsymbol {x} ^ {\mathrm {w}}\right), Y\right) p _ {*} \left(\boldsymbol {x} ^ {\mathrm {w}} \mid \boldsymbol {X} ^ {\mathrm {o}}, Y\right) \left(1 - \mathbb {1} _ {[ \boldsymbol {x} ^ {\mathrm {w}} = \boldsymbol {g} \left(\boldsymbol {X} ^ {\mathrm {o}}\right) ]}\right) \right] \\
+\leq \mathbb {E} _ {p _ {*} (\boldsymbol {x}, y)} \left[ l (f (\boldsymbol {X} ^ {\mathrm {o}}, \boldsymbol {X} ^ {\mathrm {w}}), Y) \sum _ {j \in [ F ^ {\mathrm {w}} ]} \left(1 - \mathbb {1} _ {[ X _ {j} ^ {\mathrm {w}} = g _ {j} (\boldsymbol {X} ^ {\mathrm {o}}) ]}\right) \right] \\
+= \mathbb {E} _ {p _ {*} (\boldsymbol {x}, y)} \left[ l (f (\boldsymbol {X} ^ {\mathrm {o}}, \boldsymbol {X} ^ {\mathrm {w}}), Y) \sum _ {j \in [ F ^ {\mathrm {w}} ]} l _ {0 1} \left(g _ {j} \left(\boldsymbol {X} ^ {\mathrm {o}}\right), X _ {j} ^ {\mathrm {w}}\right) \right] \\
+\leq U _ {l} \mathbb {E} _ {p _ {*} (\boldsymbol {x}, y)} \left[ \sum _ {j \in [ F ^ {\mathrm {w}} ]} l _ {0 1} \left(g _ {j} \left(\boldsymbol {X} ^ {\mathrm {o}}\right), X _ {j} ^ {\mathrm {w}}\right) \right].
+\end{array}
+$$
+
+The second inequality follows from the decomposition of the joint 0-1 loss, which isolates the contribution of each feature estimation model $g_{j}$ . The final inequality uses the assumption that the loss function $l$ is bounded above by $U_{l}$ . This proves Theorem 3.1.
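The 0-1 loss decomposition used in the second inequality, namely that $1 - \mathbb{1}_{[\boldsymbol{x}^{\mathrm{w}} = \boldsymbol{g}(\boldsymbol{x}^{\mathrm{o}})]} \leq \sum_{j} (1 - \mathbb{1}_{[x_j^{\mathrm{w}} = g_j(\boldsymbol{x}^{\mathrm{o}})]})$, can be checked exhaustively on small examples. The following is an illustrative sketch (the length-3 binary feature vectors are arbitrary choices, not from the paper):

```python
import itertools

def joint_l01(xw, gxo):
    # Joint 0-1 loss: 1 - 1[x^w == g(x^o)] over the whole weak-feature vector
    return 0 if xw == gxo else 1

def sum_l01(xw, gxo):
    # Sum of per-feature 0-1 losses: sum_j 1[x^w_j != g_j(x^o)]
    return sum(a != b for a, b in zip(xw, gxo))

# The joint mismatch indicator never exceeds the per-feature mismatch count:
# if any coordinate differs, the right-hand side counts at least one mismatch.
for xw in itertools.product([0, 1], repeat=3):
    for gxo in itertools.product([0, 1], repeat=3):
        assert joint_l01(xw, gxo) <= sum_l01(xw, gxo)
print("0-1 loss decomposition verified on all 3-bit vectors")
```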
+
+# A.2. Proof of Lemma 4.1
+
+Proof of Lemma 4.1. The LHS of Eq. (4.6) can be rewritten as follows:
+
+$$
+\begin{array}{l} \left| R _ {l} (f) - R _ {l, \boldsymbol {g}} (f) \right| \\ = \left| \mathbb {E} _ {p _ {*} (\boldsymbol {x}, y)} [ l (f (\boldsymbol {X}), Y) ] - \mathbb {E} _ {p _ {*} (\boldsymbol {x} ^ {\mathrm {o}}, y) q _ {\boldsymbol {g}} (\boldsymbol {x} ^ {\mathrm {w}} \mid \boldsymbol {x} ^ {\mathrm {o}})} [ l (f (\boldsymbol {X}), Y) ] \right| \\ = \left| \mathbb {E} _ {p _ {*} (\boldsymbol {x} ^ {\mathrm {o}}, y)} \left[ \sum_ {\boldsymbol {x} ^ {\mathrm {w}} \in \mathcal {X} ^ {\mathrm {w}}} l \left(f \left(\boldsymbol {x} ^ {\mathrm {w}}, \boldsymbol {X} ^ {\mathrm {o}}\right), Y\right) \left\{p _ {*} \left(\boldsymbol {x} ^ {\mathrm {w}} \mid \boldsymbol {X} ^ {\mathrm {o}}, Y\right) - q _ {\boldsymbol {g}} \left(\boldsymbol {x} ^ {\mathrm {w}} \mid \boldsymbol {X} ^ {\mathrm {o}}\right) \right\} \right] \right| \\ = \left| \mathbb {E} _ {p _ {*} (\boldsymbol {x} ^ {\mathrm {o}}, y)} \left[ \sum_ {\boldsymbol {x} ^ {\mathrm {w}} \in \mathcal {X} ^ {\mathrm {w}}} l \left(f \left(\boldsymbol {x} ^ {\mathrm {w}}, \boldsymbol {X} ^ {\mathrm {o}}\right), Y\right) \left\{p _ {*} \left(\boldsymbol {x} ^ {\mathrm {w}} \mid \boldsymbol {X} ^ {\mathrm {o}}, Y\right) - \mathbb {1} _ {[ \boldsymbol {x} ^ {\mathrm {w}} = \boldsymbol {g} \left(\boldsymbol {X} ^ {\mathrm {o}}\right) ]} \right\} \right] \right| \\ = \left| \mathbb {E} _ {p _ {*} (\boldsymbol {x} ^ {\mathrm {o}}, y)} \left[ \sum_ {\boldsymbol {x} ^ {\mathrm {w}} \in \mathcal {X} ^ {\mathrm {w}}} \left\{l (f (\boldsymbol {x} ^ {\mathrm {w}}, \boldsymbol {X} ^ {\mathrm {o}}), Y) p _ {*} (\boldsymbol {x} ^ {\mathrm {w}} \mid \boldsymbol {X} ^ {\mathrm {o}}, Y) - l (f (\boldsymbol {x} ^ {\mathrm {w}}, \boldsymbol {X} ^ {\mathrm {o}}), Y) \mathbb {1} _ {[ \boldsymbol {x} ^ {\mathrm {w}} = \boldsymbol {g} (\boldsymbol {X} ^ {\mathrm {o}}) ]} \right\} \right] \right| \\ = \left| \mathbb {E} _ {p _ {*} (\boldsymbol {x} ^ {\mathrm {o}}, y)} \left[ \sum_ {\boldsymbol {x} ^ {\mathrm {w}} \in \mathcal {X} ^ {\mathrm {w}}} \left\{l (f (\boldsymbol {x} ^ {\mathrm {w}}, \boldsymbol {X} ^ {\mathrm {o}}), Y) p _ {*} (\boldsymbol {x} ^ {\mathrm {w}} \mid \boldsymbol {X} ^ {\mathrm {o}}, Y) - l (f (\boldsymbol {x} ^ {\mathrm {w}}, \boldsymbol {X} ^ {\mathrm {o}}), Y) \mathbb {1} _ {[ \boldsymbol {x} ^ {\mathrm {w}} = \boldsymbol {g} (\boldsymbol {X} ^ {\mathrm {o}}) ]} p _ {*} (\boldsymbol {x} ^ {\mathrm {w}} \mid \boldsymbol {X} ^ {\mathrm {o}}, Y) \right. \right. \right. \\ \left. \left. \left. \quad + l (f (\boldsymbol {x} ^ {\mathrm {w}}, \boldsymbol {X} ^ {\mathrm {o}}), Y) \mathbb {1} _ {[ \boldsymbol {x} ^ {\mathrm {w}} = \boldsymbol {g} (\boldsymbol {X} ^ {\mathrm {o}}) ]} p _ {*} (\boldsymbol {x} ^ {\mathrm {w}} \mid \boldsymbol {X} ^ {\mathrm {o}}, Y) - l (f (\boldsymbol {x} ^ {\mathrm {w}}, \boldsymbol {X} ^ {\mathrm {o}}), Y) \mathbb {1} _ {[ \boldsymbol {x} ^ {\mathrm {w}} = \boldsymbol {g} (\boldsymbol {X} ^ {\mathrm {o}}) ]} \right\} \right] \right| \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \leq \mathbb {E} _ {p _ {*} (\boldsymbol {x} ^ {\mathrm {o}}, y)} \underbrace {\left[ \sum_ {\boldsymbol {x} ^ {\mathrm {w}} \in \mathcal {X} ^ {\mathrm {w}}} \left| l \left(f \left(\boldsymbol {x} ^ {\mathrm {w}}, \boldsymbol {X} ^ {\mathrm {o}}\right), Y\right) p _ {*} \left(\boldsymbol {x} ^ {\mathrm {w}} \mid \boldsymbol {X} ^ {\mathrm {o}}, Y\right) - l \left(f \left(\boldsymbol {x} ^ {\mathrm {w}}, \boldsymbol {X} ^ {\mathrm {o}}\right), Y\right) \mathbb {1} _ {[ \boldsymbol {x} ^ {\mathrm {w}} = \boldsymbol {g} (\boldsymbol {X} ^ {\mathrm {o}}) ]} p _ {*} \left(\boldsymbol {x} ^ {\mathrm {w}} \mid \boldsymbol {X} ^ {\mathrm {o}}, Y\right) \right| \right.} _ {\text {(a 1)}} \\ + \underbrace {\sum_ {\boldsymbol {x} ^ {\mathrm {w}} \in \mathcal {X} ^ {\mathrm {w}}} \left| l \left(f \left(\boldsymbol {x} ^ {\mathrm {w}}, \boldsymbol {X} ^ {\mathrm {o}}\right) , Y\right) \mathbb {1} _ {\left[ \boldsymbol {x} ^ {\mathrm {w}} = \boldsymbol {g} \left(\boldsymbol {X} ^ {\mathrm {o}}\right) \right]} p _ {*} \left(\boldsymbol {x} ^ {\mathrm {w}} \mid \boldsymbol {X} ^ {\mathrm {o}} , Y\right) - l \left(f \left(\boldsymbol {x} ^ {\mathrm {w}}, \boldsymbol {X} ^ {\mathrm {o}}\right) , Y\right) \mathbb {1} _ {\left[ \boldsymbol {x} ^ {\mathrm {w}} = \boldsymbol {g} \left(\boldsymbol {X} ^ {\mathrm {o}}\right) \right]} \right]} _ {\text {(a 2)}} \Biggr ]. \tag {A.11} \\ \end{array}
+$$
+
+The term (a1) in Eq. (A.11) can be expressed as:
+
+$$
+\begin{array}{l} (a 1) = \sum_ {\boldsymbol {x} ^ {\mathrm {w}} \in \mathcal {X} ^ {\mathrm {w}}} l (f (\boldsymbol {x} ^ {\mathrm {w}}, \boldsymbol {X} ^ {\mathrm {o}}), Y) p _ {*} (\boldsymbol {x} ^ {\mathrm {w}} \mid \boldsymbol {X} ^ {\mathrm {o}}, Y) \left(1 - \mathbb {1} _ {[ \boldsymbol {x} ^ {\mathrm {w}} = \boldsymbol {g} (\boldsymbol {X} ^ {\mathrm {o}}) ]}\right) \\ = \mathbb {E} _ {p _ {*} (\boldsymbol {x} ^ {\mathrm {w}} | \boldsymbol {X} ^ {\mathrm {o}}, Y)} \left[ l (f (\boldsymbol {X} ^ {\mathrm {w}}, \boldsymbol {X} ^ {\mathrm {o}}), Y) l _ {0 1} (\boldsymbol {g} (\boldsymbol {X} ^ {\mathrm {o}}), \boldsymbol {X} ^ {\mathrm {w}}) \right]. \\ \end{array}
+$$
+
+Here, $l_{01}(\pmb{g}(\pmb{X}^{\mathrm{o}}),\pmb{X}^{\mathrm{w}}) \coloneqq 1 - \mathbb{1}_{[\pmb{X}^{\mathrm{w}} = \pmb{g}(\pmb{X}^{\mathrm{o}})]}$ . The term (a2) in Eq. (A.11) can be expressed as:
+
+$$
+\begin{array}{l} (a 2) = \sum_ {\boldsymbol {x} ^ {\mathrm {w}} \in \mathcal {X} ^ {\mathrm {w}}} \left\{l \left(f \left(\boldsymbol {x} ^ {\mathrm {w}}, \boldsymbol {X} ^ {\mathrm {o}}\right), Y\right) \mathbb {1} _ {[ \boldsymbol {x} ^ {\mathrm {w}} = \boldsymbol {g} \left(\boldsymbol {X} ^ {\mathrm {o}}\right) ]} - l \left(f \left(\boldsymbol {x} ^ {\mathrm {w}}, \boldsymbol {X} ^ {\mathrm {o}}\right), Y\right) \mathbb {1} _ {[ \boldsymbol {x} ^ {\mathrm {w}} = \boldsymbol {g} \left(\boldsymbol {X} ^ {\mathrm {o}}\right) ]} p _ {*} \left(\boldsymbol {x} ^ {\mathrm {w}} \mid \boldsymbol {X} ^ {\mathrm {o}}, Y\right) \right\} \\ = l \left(f \left(\boldsymbol {g} \left(\boldsymbol {X} ^ {\mathrm {o}}\right), \boldsymbol {X} ^ {\mathrm {o}}\right), Y\right) - l \left(f \left(\boldsymbol {g} \left(\boldsymbol {X} ^ {\mathrm {o}}\right), \boldsymbol {X} ^ {\mathrm {o}}\right), Y\right) p _ {*} \left(\boldsymbol {g} \left(\boldsymbol {X} ^ {\mathrm {o}}\right) \mid \boldsymbol {X} ^ {\mathrm {o}}, Y\right) \\ = l \left(f \left(\boldsymbol {g} \left(\boldsymbol {X} ^ {\mathrm {o}}\right), \boldsymbol {X} ^ {\mathrm {o}}\right), Y\right) \left(\sum_ {\boldsymbol {x} ^ {\mathrm {w}} \in \mathcal {X} ^ {\mathrm {w}}} p _ {*} \left(\boldsymbol {x} ^ {\mathrm {w}} \mid \boldsymbol {X} ^ {\mathrm {o}}, Y\right)\right) - l \left(f \left(\boldsymbol {g} \left(\boldsymbol {X} ^ {\mathrm {o}}\right), \boldsymbol {X} ^ {\mathrm {o}}\right), Y\right) \left(\sum_ {\boldsymbol {x} ^ {\mathrm {w}} \in \mathcal {X} ^ {\mathrm {w}}} p _ {*} \left(\boldsymbol {x} ^ {\mathrm {w}} \mid \boldsymbol {X} ^ {\mathrm {o}}, Y\right) \mathbb {1} _ {[ \boldsymbol {x} ^ {\mathrm {w}} = \boldsymbol {g} \left(\boldsymbol {X} ^ {\mathrm {o}}\right) ]}\right) \\ = l \left(f \left(\boldsymbol {g} \left(\boldsymbol {X} ^ {\mathrm {o}}\right), \boldsymbol {X} ^ {\mathrm {o}}\right), Y\right) \left(\sum_ {\boldsymbol {x} ^ {\mathrm {w}} \in \mathcal {X} ^ {\mathrm {w}}} p _ {*} \left(\boldsymbol {x} ^ {\mathrm {w}} \mid \boldsymbol {X} ^ {\mathrm {o}}, Y\right) \left(1 - \mathbb {1} _ {[ \boldsymbol {x} ^ {\mathrm {w}} = \boldsymbol {g} \left(\boldsymbol {X} ^ {\mathrm {o}}\right) ]}\right)\right) \\ = l (f (\boldsymbol {g} (\boldsymbol {X} ^ {\mathrm {o}}), \boldsymbol {X} ^ {\mathrm {o}}), Y) \mathbb {E} _ {p _ {*} (\boldsymbol {x} ^ {\mathrm {w}} | \boldsymbol {X} ^ {\mathrm {o}}, Y)} \left[ l _ {0 1} (\boldsymbol {g} (\boldsymbol {X} ^ {\mathrm {o}}), \boldsymbol {X} ^ {\mathrm {w}}) \right]. \\ \end{array}
+$$
+
+By substituting these results into (a1) and (a2) of Eq. (A.11), Eq. (A.11) can be rewritten as follows:
+
+$$
+\left| R _ {l} (f) - R _ {l, \mathbf {g}} (f) \right| \leq \mathbb {E} _ {p _ {*} (\mathbf {x}, y)} \left[ \left(l \left(f \left(\mathbf {X} ^ {\mathrm {w}}, \mathbf {X} ^ {\mathrm {o}}\right), Y\right) + l \left(f \left(\mathbf {g} \left(\mathbf {X} ^ {\mathrm {o}}\right), \mathbf {X} ^ {\mathrm {o}}\right), Y\right)\right) l _ {0 1} \left(\mathbf {g} \left(\mathbf {X} ^ {\mathrm {o}}\right), \mathbf {X} ^ {\mathrm {w}}\right) \right]. \tag {A.12}
+$$
+
+Since $l$ , $l_{01}$ , $f$ and $\pmb{g}$ are all measurable functions, applying the Cauchy-Schwarz inequality to the RHS of Eq. (A.12), $|R_l(f) - R_{l,g}(f)|$ can be upper-bounded as follows:
+
+$$
+\begin{array}{l} \left| R _ {l} (f) - R _ {l, \boldsymbol {g}} (f) \right| \\ \leq \mathbb {E} _ {p _ {*} (\boldsymbol {x}, y)} \left[ \left(l (f (\boldsymbol {X} ^ {\mathrm {w}}, \boldsymbol {X} ^ {\mathrm {o}}), Y) + l (f (\boldsymbol {g} (\boldsymbol {X} ^ {\mathrm {o}}), \boldsymbol {X} ^ {\mathrm {o}}), Y)\right) l _ {0 1} (\boldsymbol {g} (\boldsymbol {X} ^ {\mathrm {o}}), \boldsymbol {X} ^ {\mathrm {w}}) \right] \\ \leq \left(\underbrace {\mathbb {E} _ {p _ {*} (\boldsymbol {x}, y)} \left[ \left(l \left(f \left(\boldsymbol {X} ^ {\mathrm {w}}, \boldsymbol {X} ^ {\mathrm {o}}\right), Y\right)\right) ^ {2} \right]} _ {\text {(b 1)}} \times \underbrace {\mathbb {E} _ {p _ {*} (\boldsymbol {x})} \left[ \left(l _ {0 1} \left(\boldsymbol {g} \left(\boldsymbol {X} ^ {\mathrm {o}}\right), \boldsymbol {X} ^ {\mathrm {w}}\right)\right) ^ {2} \right]} _ {\text {(b 2)}}\right) ^ {\frac {1}{2}} \tag {A.13} \\ + \left(\underbrace {\mathbb {E} _ {p _ {*} (\boldsymbol {x} ^ {\mathrm {o}}, y)} \left[ \left(l (f (\boldsymbol {g} (\boldsymbol {X} ^ {\mathrm {o}}), \boldsymbol {X} ^ {\mathrm {o}}), Y)\right) ^ {2} \right]} _ {\text {(b 3)}} \times \underbrace {\mathbb {E} _ {p _ {*} (\boldsymbol {x})} \left[ \left(l _ {0 1} (\boldsymbol {g} (\boldsymbol {X} ^ {\mathrm {o}}), \boldsymbol {X} ^ {\mathrm {w}})\right) ^ {2} \right]} _ {\text {(b 2)}}\right) ^ {\frac {1}{2}}. \\ \end{array}
+$$
+
+The terms (b1)-(b3) of Eq. (A.13) can be expressed as:
+
+$$
+(b 1) = \mathbb {E} _ {p _ {*} (\boldsymbol {x}, y)} \left[ (l (f (\boldsymbol {X}), Y)) ^ {2} - 0 ^ {2} \right] \leq 2 U _ {l} \mathbb {E} _ {p _ {*} (\boldsymbol {x}, y)} [ l (f (\boldsymbol {X}), Y) ] = 2 U _ {l} R _ {l} (f),
+$$
+
+$$
+(\mathrm {b} 2) = \mathbb {E} _ {p _ {*} (\boldsymbol {x})} \left[ \left(l _ {0 1} \left(\boldsymbol {g} \left(\boldsymbol {X} ^ {\mathrm {o}}\right), \boldsymbol {X} ^ {\mathrm {w}}\right)\right) ^ {2} \right] = \mathbb {E} _ {p _ {*} (\boldsymbol {x})} \left[ l _ {0 1} \left(\boldsymbol {g} \left(\boldsymbol {X} ^ {\mathrm {o}}\right), \boldsymbol {X} ^ {\mathrm {w}}\right) \right] \leq \sum_ {j \in \left[ F ^ {\mathrm {w}} \right]} \mathbb {E} _ {p _ {*} (\boldsymbol {x})} \left[ l _ {0 1} \left(g _ {j} \left(\boldsymbol {X} ^ {\mathrm {o}}\right), X _ {j} ^ {\mathrm {w}}\right) \right] = \sum_ {j \in \left[ F ^ {\mathrm {w}} \right]} R _ {0 1, j} (g _ {j}),
+$$
+
+$$
+(\mathrm {b} 3) = \mathbb {E} _ {p _ {*} \left(\boldsymbol {x} ^ {\mathrm {o}}, y\right)} \left[\left(l \left(f \left(\boldsymbol {g} \left(\boldsymbol {X} ^ {\mathrm {o}}\right), \boldsymbol {X} ^ {\mathrm {o}}\right), Y\right)\right) ^ {2} - 0 ^ {2} \right] \leq 2 U _ {l} \mathbb {E} _ {p _ {*} \left(\boldsymbol {x} ^ {\mathrm {o}}, y\right)} \left[ l \left(f \left(\boldsymbol {g} \left(\boldsymbol {X} ^ {\mathrm {o}}\right), \boldsymbol {X} ^ {\mathrm {o}}\right), Y\right)\right] = 2 U _ {l} R _ {l, \boldsymbol {g}} (f).
+$$
+
+Here, in (b1) and (b3), we used the fact that the function $x \mapsto x^2$ is $2U_{l}$ -Lipschitz continuous on the interval $[0, U_{l}]$ . In (b2), the first equality holds because $l_{01}$ takes values in $\{0, 1\}$ , so $(l_{01})^2 = l_{01}$ .
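For completeness, the Lipschitz step used in (b1) and (b3) can be written out: for any $x, y \in [0, U_l]$ ,

$$
\left| x ^ {2} - y ^ {2} \right| = (x + y) \left| x - y \right| \leq 2 U _ {l} \left| x - y \right|,
$$

and taking $y = 0$ gives $(l(\cdot))^2 = (l(\cdot))^2 - 0^2 \leq 2U_l \, l(\cdot)$ , which is the form applied inside the expectations above.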
+
+Applying the above inequalities related to (b1)-(b3) to the RHS of Eq. (A.13), $|R_{l}(f) - R_{l,\mathbf{g}}(f)|$ can be upper-bounded as follows:
+
+$$
+\begin{array}{l} | R _ {l} (f) - R _ {l, \boldsymbol {g}} (f) | \\ \leq \left\{2 U _ {l} R _ {l} (f) \sum_ {j \in [ F ^ {\mathrm {w}} ]} R _ {0 1, j} (g _ {j}) \right\} ^ {\frac {1}{2}} + \left\{2 U _ {l} R _ {l, g} (f) \sum_ {j \in [ F ^ {\mathrm {w}} ]} R _ {0 1, j} (g _ {j}) \right\} ^ {\frac {1}{2}} \\ = \left(\sqrt {R _ {l} (f)} + \sqrt {R _ {l , g} (f)}\right) \left(2 U _ {l} \sum_ {j \in [ F ^ {\mathrm {w}} ]} R _ {0 1, j} \left(g _ {j}\right)\right) ^ {\frac {1}{2}}. \tag {A.14} \\ \end{array}
+$$
+
+Thus, Lemma 4.1 is proven.
+
+# A.3. Proof of Theorem 4.2
+
+From Lemma 4.1, the following lemma holds:
+
+Lemma A.1. For any $f \in \mathcal{F}$ , $\pmb{g} \in \mathcal{G}$ and $l$ bounded by $U_{l} < \infty$ , the following inequality holds:
+
+$$
+\left| R _ {l} (f) - R _ {l, \boldsymbol {g}} (f) \right| \leq \left(2 \sqrt {R _ {l} (f)} + \left(2 U _ {l} \sum_ {j \in [ F ^ {\mathrm {w}} ]} R _ {0 1, j} \left(g _ {j}\right)\right) ^ {\frac {1}{2}}\right) \left(2 U _ {l} \sum_ {j \in [ F ^ {\mathrm {w}} ]} R _ {0 1, j} \left(g _ {j}\right)\right) ^ {\frac {1}{2}}. \tag {A.15}
+$$
+
+Proof of Lemma A.1. From Lemma 4.1, for any $f \in \mathcal{F}, g \in \mathcal{G}$ and $l$ bounded by $U_{l} < \infty$ , the following inequality holds:
+
+$$
+\left| \sqrt {R _ {l} (f)} - \sqrt {R _ {l, \boldsymbol {g}} (f)} \right| \leq \left(2 U _ {l} \sum_ {j \in [ F ^ {\mathrm {w}} ]} R _ {0 1, j} \left(g _ {j}\right)\right) ^ {\frac {1}{2}}. \tag {A.16}
+$$
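To see why Eq. (A.16) follows from Lemma 4.1, note that the difference of risks factors through the difference of their square roots. Assuming $\sqrt{R_l(f)} + \sqrt{R_{l,\boldsymbol{g}}(f)} > 0$ (the inequality is trivial otherwise),

$$
\left| \sqrt {R _ {l} (f)} - \sqrt {R _ {l, \boldsymbol {g}} (f)} \right| = \frac {\left| R _ {l} (f) - R _ {l, \boldsymbol {g}} (f) \right|}{\sqrt {R _ {l} (f)} + \sqrt {R _ {l, \boldsymbol {g}} (f)}} \leq \left(2 U _ {l} \sum_ {j \in [ F ^ {\mathrm {w}} ]} R _ {0 1, j} \left(g _ {j}\right)\right) ^ {\frac {1}{2}},
$$

where the inequality divides both sides of Eq. (A.14) by $\sqrt{R_l(f)} + \sqrt{R_{l,\boldsymbol{g}}(f)}$ .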
+
+Hence, $\sqrt{R_{l,g}(f)}$ can be upper-bounded as follows:
+
+$$
+\begin{array}{l} \sqrt {R _ {l , \boldsymbol {g}} (f)} = \sqrt {R _ {l , \boldsymbol {g}} (f)} + \sqrt {R _ {l} (f)} - \sqrt {R _ {l} (f)} \\ \leq \sqrt {R _ {l} (f)} + \left| \sqrt {R _ {l} (f)} - \sqrt {R _ {l , g} (f)} \right| \tag {A.17} \\ \leq \sqrt {R _ {l} (f)} + \left(2 U _ {l} \sum_ {j \in [ F ^ {\mathrm {w}} ]} R _ {0 1, j} (g _ {j})\right) ^ {\frac {1}{2}}. \\ \end{array}
+$$
+
+By applying the above inequality to the RHS of Eq. (A.14), $|R_{l}(f) - R_{l,\mathbf{g}}(f)|$ can be upper-bounded as follows:
+
+$$
+\left| R _ {l} (f) - R _ {l, \mathbf {g}} (f) \right| \leq \left(2 \sqrt {R _ {l} (f)} + \left(2 U _ {l} \sum_ {j \in [ F ^ {\mathrm {w}} ]} R _ {0 1, j} \left(g _ {j}\right)\right) ^ {\frac {1}{2}}\right) \left(2 U _ {l} \sum_ {j \in [ F ^ {\mathrm {w}} ]} R _ {0 1, j} \left(g _ {j}\right)\right) ^ {\frac {1}{2}}. \tag {A.18}
+$$
+
+By leveraging Lemma A.1, Theorem 4.2 is proven as follows.
+
+Proof of Theorem 4.2. From Section 3.1, we define the empirical risk minimizer in ordinary supervised learning as follows:
+
+$$
+f _ {S} := \arg \min _ {f \in \mathcal {F}} \widehat {R} _ {l} (f).
+$$
+
+The LHS of Eq. (4.7) can be rewritten as:
+
+$$
+\begin{array}{l} R _ {l, \boldsymbol {g}} \left(f _ {\boldsymbol {g}, \bar {S}}\right) - R _ {l} \left(f _ {\mathcal {F}}\right) \\ = \underbrace {R _ {l, \boldsymbol {g}} \left(f _ {\boldsymbol {g}, \bar {S}}\right) - \widehat {R} _ {l, \boldsymbol {g}} \left(f _ {\boldsymbol {g}, \bar {S}}\right)} _ {\text {(a 1)}} + \underbrace {\widehat {R} _ {l, \boldsymbol {g}} \left(f _ {\boldsymbol {g}, \bar {S}}\right) - R _ {l, \boldsymbol {g}} \left(f _ {S}\right)} _ {\text {(a 2)}} \tag {A.19} \\ + \underbrace {R _ {l, \boldsymbol {g}} (f _ {S}) - R _ {l} (f _ {S})} _ {\text {(a 3)}} + \underbrace {R _ {l} (f _ {S}) - R _ {l} (f _ {\mathcal {F}})} _ {\text {(a 4)}}. \\ \end{array}
+$$
+
+The terms (a1) and (a2) in Eq. (A.19) can be upper-bounded as follows:
+
+$$
+\begin{array}{l} (a 1) \leq \max _ {f \in \mathcal {F}} | R _ {l, \boldsymbol {g}} (f) - \widehat {R} _ {l, \boldsymbol {g}} (f) |, \\ (a 2) \leq \widehat {R} _ {l, \boldsymbol {g}} (f _ {S}) - R _ {l, \boldsymbol {g}} (f _ {S}) \leq \max _ {f \in \mathcal {F}} | R _ {l, \boldsymbol {g}} (f) - \widehat {R} _ {l, \boldsymbol {g}} (f) |. \\ \end{array}
+$$
+
+The term (a3) in Eq. (A.19) can be upper-bounded using Lemma A.1 as follows:
+
+$$
+(a 3) \leq \left\{2 \sqrt {R _ {l} \left(f _ {S}\right)} + \left(2 U _ {l} \sum_ {j \in [ F ^ {\mathrm {w}} ]} R _ {0 1, j} \left(g _ {j}\right)\right) ^ {\frac {1}{2}} \right\} \left(2 U _ {l} \sum_ {j \in [ F ^ {\mathrm {w}} ]} R _ {0 1, j} \left(g _ {j}\right)\right) ^ {\frac {1}{2}}. \tag {A.20}
+$$
+
+Additionally, $R_{l}(f_{S})$ can be upper-bounded as follows:
+
+$$
+\begin{array}{l} R _ {l} \left(f _ {S}\right) = R _ {l} \left(f _ {S}\right) - \widehat {R} _ {l} \left(f _ {S}\right) + \widehat {R} _ {l} \left(f _ {S}\right) - R _ {l} \left(f _ {\mathcal {F}}\right) + R _ {l} \left(f _ {\mathcal {F}}\right) \\ \leq R _ {l} \left(f _ {S}\right) - \widehat {R} _ {l} \left(f _ {S}\right) + \widehat {R} _ {l} \left(f _ {\mathcal {F}}\right) - R _ {l} \left(f _ {\mathcal {F}}\right) + R _ {l} \left(f _ {\mathcal {F}}\right) \\ \leq R _ {l} \left(f _ {\mathcal {F}}\right) + 2 \max _ {f \in \mathcal {F}} \left| R _ {l} (f) - \widehat {R} _ {l} (f) \right|. \tag {A.21} \\ \end{array}
+$$
+
+Hence, (a3) in Eq. (A.19) can be upper-bounded as follows:
+
+$$
+(a 3) \leq \left(2 \left(R _ {l} \left(f _ {\mathcal {F}}\right) + 2 \max _ {f \in \mathcal {F}} \left| R _ {l} (f) - \widehat {R} _ {l} (f) \right|\right) ^ {\frac {1}{2}} + \left(2 U _ {l} \sum_ {j \in [ F ^ {\mathrm {w}} ]} R _ {0 1, j} \left(g _ {j}\right)\right) ^ {\frac {1}{2}}\right)\left(2 U _ {l} \sum_ {j \in [ F ^ {\mathrm {w}} ]} R _ {0 1, j} \left(g _ {j}\right)\right) ^ {\frac {1}{2}}. \tag {A.22}
+$$
+
+Similarly, the term (a4) in Eq. (A.19) can be upper-bounded as follows:
+
+$$
+R _ {l} \left(f _ {S}\right) - R _ {l} \left(f _ {\mathcal {F}}\right) \leq 2 \max _ {f \in \mathcal {F}} \left| R _ {l} (f) - \widehat {R} _ {l} (f) \right|.
+$$
+
+By applying the above inequalities regarding (a1)-(a4) to the RHS of Eq. (A.19), it can be upper-bounded as follows:
+
+$$
+\begin{array}{l} R _ {l, \boldsymbol {g}} (f _ {\boldsymbol {g}, \bar {S}}) - R _ {l} (f _ {\mathcal {F}}) \\ \leq 2 \max _ {f \in \mathcal {F}} | R _ {l, \boldsymbol {g}} (f) - \widehat {R} _ {l, \boldsymbol {g}} (f) | + 2 \max _ {f \in \mathcal {F}} | R _ {l} (f) - \widehat {R} _ {l} (f) | \tag {A.23} \\ + \left\{2 \left(R _ {l} \left(f _ {\mathcal {F}}\right) + 2 \max _ {f \in \mathcal {F}} \left| R _ {l} (f) - \widehat {R} _ {l} (f) \right|\right) ^ {\frac {1}{2}} + \left(2 U _ {l} \sum_ {j \in [ F ^ {\mathrm {w}} ]} R _ {0 1, j} \left(g _ {j}\right)\right) ^ {\frac {1}{2}} \right\} \left(2 U _ {l} \sum_ {j \in [ F ^ {\mathrm {w}} ]} R _ {0 1, j} \left(g _ {j}\right)\right) ^ {\frac {1}{2}}. \\ \end{array}
+$$
+
+From the uniform law of large numbers (Mohri et al., 2018), for any $\delta \in (0,1)$ , each of the following holds with probability at least $1 - \delta /2$ :
+
+$$
+\max _ {f \in \mathcal {F}} | R _ {l, \boldsymbol {g}} (f) - \widehat {R} _ {l, \boldsymbol {g}} (f) | \leq 2 \Re_ {n} ^ {\boldsymbol {g}} (\widetilde {\mathcal {F}} _ {l}) + U _ {l} \sqrt {\frac {\log (4 / \delta)}{2 n}},
+$$
+
+$$
+\max _ {f \in \mathcal {F}} | R _ {l} (f) - \widehat {R} _ {l} (f) | \leq 2 \Re_ {n} ^ {*} (\widetilde {\mathcal {F}} _ {l}) + U _ {l} \sqrt {\frac {\log (4 / \delta)}{2 n}}.
+$$
+
+Here, $\widetilde{\mathcal{F}}_l\coloneqq \{(x,y)\mapsto l(f(x),y):f\in \mathcal{F}\}$ .
+
+From the assumption that $l$ is $L_{l}$ -Lipschitz continuous, it follows that $\Re_n^*(\widetilde{\mathcal{F}}_l) \leq L_l \Re_n^*(\mathcal{F})$ and $\Re_n^g(\widetilde{\mathcal{F}}_l) \leq L_l \Re_n^g(\mathcal{F})$ (Lemma 26.9 in (Shalev-Shwartz & Ben-David, 2014)).
+
+Thus, for any $\delta \in (0,1)$ , the following holds with a probability of at least $1 - \delta$ :
+
+$$
+\begin{array}{l} R _ {l, \pmb {g}} (f _ {\pmb {g}, \overline {{S}}}) - R _ {l} (f _ {\mathcal {F}}) \\ \leq 4 \left(L _ {l} \Re_ {n} ^ {*} (\mathcal {F}) + L _ {l} \Re_ {n} ^ {g} (\mathcal {F}) + U _ {l} \sqrt {\frac {\log (4 / \delta)}{2 n}}\right) \tag {A.24} \\ + \left\{2 \left(R _ {l} (f _ {\mathcal {F}}) + 4 L _ {l} \Re_ {n} ^ {*} (\mathcal {F}) + 2 U _ {l} \sqrt {\frac {\log (4 / \delta)}{2 n}}\right) ^ {\frac {1}{2}} + \left(2 U _ {l} \sum_ {j \in [ F ^ {\mathrm {w}} ]} R _ {0 1, j} (g _ {j})\right) ^ {\frac {1}{2}} \right\} \left(2 U _ {l} \sum_ {j \in [ F ^ {\mathrm {w}} ]} R _ {0 1, j} (g _ {j})\right) ^ {\frac {1}{2}}. \\ \end{array}
+$$
+
+# A.4. Proof of Theorem 4.3
+
+Proof of Theorem 4.3. By assumption, there exist true deterministic functions $g_{j}^{*}: \mathcal{X}^{\mathrm{o}} \to \mathcal{X}_{j}^{\mathrm{w}}$ for any $j \in [F^{\mathrm{w}}]$ , and $(g_{1}^{*}, \ldots, g_{F^{\mathrm{w}}}^{*}) \in \mathcal{G}$ . Therefore, when $\pmb{g}_{\overline{S}} = (g_{\overline{S},1}, \ldots, g_{\overline{S},F^{\mathrm{w}}})$ is obtained by the methods that achieve consistency (Cour et al., 2011; Feng et al., 2020; Xu et al., 2021; Natarajan et al., 2013; Ishida et al., 2017; Yu et al., 2018), the following holds:
+
+$$
+R _ {0 1, j} \left(g _ {\bar {S}, j}\right) \rightarrow 0 \text { as } n \rightarrow \infty, \quad \forall j \in \left[ F ^ {\mathrm {w}} \right]. \tag {A.25}
+$$
+
+Additionally, by assumption, there exists a true deterministic function $f^{*}:\mathcal{X}\to \mathcal{Y}$ for label prediction, and $f^{*}\in \mathcal{F}$ . Hence, the following holds:
+
+$$
+R _ {l} \left(f _ {\mathcal {F}}\right) = 0. \tag {A.26}
+$$
+
+Thus, if $\Re_n^*(\mathcal{F})$ and $\Re_n^g(\mathcal{F})$ are monotonically decreasing with respect to $n$ and converge to 0, the error bound established in Theorem 4.2 converges to 0 as $n \to \infty$ .
+
+# A.5. Proof of Theorem 4.4
+
+For a weak dataset $\overline{S}$ and a positive real-valued vector $\pmb{r}$ , define the feature estimation models $\pmb{g}_{\overline{S}}^{(\pmb{r})} = (g_{\overline{S},1}^{(r_1)},\dots,g_{\overline{S},F^{\mathrm{w}}}^{(r_{F^{\mathrm{w}}})})$ as follows:
+
+$$
+g _ {\bar {S}, j} ^ {(r _ {j})} := \arg \min _ {g \in \mathcal {G} (r _ {j}, \bar {S})} \widehat {\bar {R}} _ {l _ {j}} (g), \forall j \in [ F ^ {\mathrm {w}} ]. \tag {A.27}
+$$
+
+Using Lemma A.1, Theorem 4.4 is proven as follows.
+
+Proof of Theorem 4.4. The LHS of Eq. (4.10) can be rewritten as follows:
+
+$$
+R _ {l, f} \left(\boldsymbol {g} _ {f, \bar {S}} ^ {\left(\boldsymbol {r}\right)}\right) - R _ {l} (f) = \underbrace {R _ {l , f} \left(\boldsymbol {g} _ {f , \bar {S}} ^ {\left(\boldsymbol {r}\right)}\right) - \widehat {R} _ {l , f} \left(\boldsymbol {g} _ {f , \bar {S}} ^ {\left(\boldsymbol {r}\right)}\right)} _ {\text {(a 1)}} + \underbrace {\widehat {R} _ {l , f} \left(\boldsymbol {g} _ {f , \bar {S}} ^ {\left(\boldsymbol {r}\right)}\right) - R _ {l , f} \left(\boldsymbol {g} _ {\bar {S}} ^ {\left(\boldsymbol {r}\right)}\right)} _ {\text {(a 2)}} + \underbrace {R _ {l , f} \left(\boldsymbol {g} _ {\bar {S}} ^ {\left(\boldsymbol {r}\right)}\right) - R _ {l} (f)} _ {\text {(a 3)}}. \tag {A.28}
+$$
+
+The terms (a1) and (a2) of Eq. (A.28) can be upper-bounded as follows:
+
+$$
+(a 1) \leq \max _ {\boldsymbol {g} \in \mathcal {G} (\boldsymbol {r}, \overline {{\boldsymbol {S}}})} | R _ {l, f} (\boldsymbol {g}) - \widehat {R} _ {l, f} (\boldsymbol {g}) |, \tag {A.29}
+$$
+
+$$
+(a 2) \leq \widehat {R} _ {l, f} \left(\boldsymbol {g} _ {\bar {S}} ^ {\left(\boldsymbol {r}\right)}\right) - R _ {l, f} \left(\boldsymbol {g} _ {\bar {S}} ^ {\left(\boldsymbol {r}\right)}\right) \leq \max _ {\boldsymbol {g} \in \mathcal {G} (\boldsymbol {r}, \bar {S})} \left| R _ {l, f} (\boldsymbol {g}) - \widehat {R} _ {l, f} (\boldsymbol {g}) \right|. \tag {A.30}
+$$
+
+The term (a3) of Eq. (A.28) can be upper-bounded using Lemma A.1 as follows:
+
+$$
+\begin{array}{l} (a 3) \leq | R _ {l, f} (\boldsymbol {g} _ {\bar {S}} ^ {(r)}) - R _ {l} (f) | \\ \leq \left\{2 \sqrt {R _ {l} (f)} + \left(2 U _ {l} \sum_ {j \in [ F ^ {\mathrm {w}} ]} R _ {0 1, j} \left(g _ {\bar {S}, j} ^ {(r _ {j})}\right)\right) ^ {\frac {1}{2}} \right\} \left(2 U _ {l} \sum_ {j \in [ F ^ {\mathrm {w}} ]} R _ {0 1, j} \left(g _ {\bar {S}, j} ^ {(r _ {j})}\right)\right) ^ {\frac {1}{2}}. \tag {A.31} \\ \end{array}
+$$
+
+Therefore, by applying Eqs. (A.29), (A.30), and (A.31) to Eq. (A.28), we obtain:
+
+$$
+\begin{array}{l} R _ {l, f} \left(\boldsymbol {g} _ {f, \bar {S}} ^ {(\boldsymbol {r})}\right) - R _ {l} (f) \leq 2 \max _ {\boldsymbol {g} \in \mathcal {G} (\boldsymbol {r}, \bar {S})} \left| R _ {l, f} (\boldsymbol {g}) - \widehat {R} _ {l, f} (\boldsymbol {g}) \right| \\ + \left\{2 \sqrt {R _ {l} (f)} + \left(2 U _ {l} \sum_ {j \in [ F ^ {\mathrm {w}} ]} R _ {0 1, j} \left(g _ {\bar {S}, j} ^ {(r _ {j})}\right)\right) ^ {\frac {1}{2}} \right\} \left(2 U _ {l} \sum_ {j \in [ F ^ {\mathrm {w}} ]} R _ {0 1, j} \left(g _ {\bar {S}, j} ^ {(r _ {j})}\right)\right) ^ {\frac {1}{2}}. \tag {A.32} \\ \end{array}
+$$
+
+From the uniform law of large numbers (Mohri et al., 2018), for any $\delta \in (0,1)$ , the following holds with a probability of at least $1 - \delta$ :
+
+$$
+\max _ {\boldsymbol {g} \in \mathcal {G} (\boldsymbol {r}, \bar {\boldsymbol {S}})} | R _ {l, f} (\boldsymbol {g}) - \widehat {R} _ {l, f} (\boldsymbol {g}) | \leq 2 \Re_ {n} ^ {*} (\widetilde {\mathcal {G}} _ {l, f} (\boldsymbol {r}, \bar {\boldsymbol {S}})) + U _ {l} \sqrt {\frac {\log (2 / \delta)}{2 n}}. \tag {A.33}
+$$
+
+Furthermore, by applying Eq. (A.33) to Eq. (A.32), we obtain that, for any $\delta \in (0,1)$ , with probability at least $1 - \delta$ , Eq. (4.10) holds:
+
+$$
+\begin{array}{l} R _ {l, f} (\pmb {g} _ {f, \overline {{S}}} ^ {(\pmb {r})}) - R _ {l} (f) \leq 4 \Re_ {n} ^ {*} (\widetilde {\mathcal {G}} _ {l, f} (\pmb {r}, \overline {{\pmb {S}}})) + 2 U _ {l} \sqrt {\frac {\log (2 / \delta)}{2 n}} \\ + \left\{2 \sqrt {R _ {l} (f)} + \left(2 U _ {l} \sum_ {j \in [ F ^ {\mathrm {w}} ]} R _ {0 1, j} \left(g _ {\bar {S}, j} ^ {(r _ {j})}\right)\right) ^ {\frac {1}{2}} \right\} \left(2 U _ {l} \sum_ {j \in [ F ^ {\mathrm {w}} ]} R _ {0 1, j} \left(g _ {\bar {S}, j} ^ {(r _ {j})}\right)\right) ^ {\frac {1}{2}}. \tag {A.34} \\ \end{array}
+$$
+
+# A.6. Proof of Theorem 4.5
+
+Proof of Theorem 4.5. By assumption, there exist true deterministic functions $g_{j}^{*}: \mathcal{X}^{\mathrm{o}} \to \mathcal{X}_{j}^{\mathrm{w}}$ for any $j \in [F^{\mathrm{w}}]$ , and $(g_1^*, \ldots, g_{F^{\mathrm{w}}}^*) \in \mathcal{G}$ . In this case, for any $r$ and $\overline{S}$ , it holds that $g^{*} \in \mathcal{G}(r, \overline{S})$ . Therefore, the following holds:
+
+$$
+R _ {0 1, j} \left(g _ {\mathcal {G} (\boldsymbol {r}, \bar {S}), j}\right) = 0, \forall j \in \left[ F ^ {\mathrm {w}} \right]. \tag {A.35}
+$$
+
+For any $j \in [F^{\mathrm{w}}]$ , define $\mathcal{G}_j(r_j, \overline{S}_j)$ as the set of hypotheses that satisfy the following two conditions: (i) each element is a solution obtained by methods that are guaranteed to achieve consistency (Cour et al., 2011; Feng et al., 2020; Xu et al., 2021; Natarajan et al., 2013; Ishida et al., 2017; Yu et al., 2018), and (ii) its empirical risk is at most $r_j$ . As $n$ increases and $r_j \to 0$ , the assumptions on $\overline{R}_{l_j}$ and the theoretical guarantees of these consistent methods for $g_j$ imply the following:
+
+$$
+R _ {0 1, j} \left(g _ {j}\right)\rightarrow 0, \forall g _ {j} \in \mathcal {G} _ {j} \left(r _ {j}, \bar {S} _ {j}\right), \forall j \in [ F ^ {\mathrm {w}} ]. \tag {A.36}
+$$
+
+Additionally, by assumption, there exists a true deterministic function $f^{*}: \mathcal{X} \to \mathcal{Y}$ for label prediction, and $f^{*} \in \mathcal{F}$ . Hence, the following holds:
+
+$$
+R _ {l} \left(f _ {\mathcal {F}}\right) = 0. \tag {A.37}
+$$
+
+Thus, under the conditions of Theorem 4.3, the following holds for $f_{\pmb{g},\overline{\mathcal{S}}}$ obtained through LAC-dWFL's step (ii):
+
+$$
+R _ {l, \boldsymbol {g}} \left(f _ {\boldsymbol {g}, \overline {S}}\right) \rightarrow 0 \text { as } n \rightarrow \infty. \tag {A.38}
+$$
+
+Furthermore, using Theorem 3.1, and additionally letting $r_j \to 0$ as $n$ increases for any $j \in [F^{\mathsf{w}}]$ , the following holds:
+
+$$
+R _ {l} \left(f _ {\boldsymbol {g}, \bar {S}}\right) \rightarrow 0 \text { as } n \rightarrow \infty, \quad \forall \boldsymbol {g} \in \mathcal {G} (\boldsymbol {r}, \bar {S}). \tag {A.39}
+$$
+
+Since $l$ is $L_{l}$ -Lipschitz continuous and $f_{g,\overline{S}}$ is $L_{f}$ -Lipschitz continuous, using Talagrand's lemma (Shalev-Shwartz & Ben-David, 2014), the following holds:
+
+$$
+\Re_ {n} ^ {*} (\widetilde {\mathcal {G}} _ {l, f} (\boldsymbol {r}, \bar {S})) \leq L _ {l} L _ {f} \Re_ {n} ^ {*} (\mathcal {G} (\boldsymbol {r}, \bar {S})). \tag {A.40}
+$$
+
+Consequently, if $\Re_n^*(\mathcal{G}(\boldsymbol{r}, \bar{S}))$ and $\Re_n^*(\mathcal{G}_j(r_j, \overline{S}_j))$ for any $j \in [F^{\mathrm{w}}]$ are monotonically decreasing with respect to $n$ and converge to 0, the error bound established in Theorem 4.4 converges to 0 as $n \to \infty$ .
+
+
+
+# B. Detail Information of Experiments
+
+# B.1. Details of Datasets
+
+We used four real-world datasets in Section 5: Adult (Becker & Kohavi, 1996), Bank Marketing (Moro & Cortez, 2014), Kick (Vanschoren et al., 2013), and Census-Income (KDD) (cen, 2000; Dua & Graff, 2017). In Section C.1, we additionally used two datasets: Default of Credit Card Clients (Yeh, 2009) and Diabetes 130-US Hospitals for Years 1999–2008 (Kahn). We refer to these datasets as Adult, Bank, Kick, Census, Default, and Diabetes, respectively. They can be downloaded from the UCI Machine Learning Repository (Dua & Graff, 2017) or OpenML (Vanschoren et al., 2013). Table B.1 summarizes the characteristics of these datasets. All binary features were set to take values of either 0 or 1, all categorical features were encoded using one-hot encoding, and all continuous features were scaled to the range [0, 1]. For the Adult, Bank, Kick, Default, and Diabetes datasets, all available samples were used. For the Census dataset, experiments were conducted on 50,000 randomly sampled data points.
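The preprocessing described above (binary features in {0, 1}, one-hot encoding of categorical features, and min-max scaling of continuous features to [0, 1]) can be sketched as follows. This is an illustrative reconstruction rather than the authors' code, and the toy column values are invented:

```python
import numpy as np

def one_hot(values):
    """One-hot encode a list of categorical values (columns sorted by category)."""
    cats = sorted(set(values))
    return np.array([[1.0 if v == c else 0.0 for c in cats] for v in values])

def min_max_scale(x):
    """Scale a 1-D continuous feature into the range [0, 1]."""
    x = np.asarray(x, dtype=float)
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo) if hi > lo else np.zeros_like(x)

# Toy rows with one binary, one categorical, and one continuous column
binary = np.array([0.0, 1.0, 1.0, 0.0])
categorical = ["a", "b", "a", "c"]
continuous = [10.0, 20.0, 15.0, 30.0]

X = np.column_stack([binary[:, None],
                     one_hot(categorical),
                     min_max_scale(continuous)[:, None]])
print(X.shape)  # 1 binary column + 3 one-hot columns + 1 scaled column
```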
+
+Table B.1. Outline of datasets. binary, categorical, and numerical represent the number of features of each type, respectively.
+
+| dataset | Adult | Bank | Kick | Census | Default | Diabetes |
+| --- | --- | --- | --- | --- | --- | --- |
+| data size | 48842 | 45211 | 72983 | 299285 | 30000 | 101766 |
+| binary | 1 | 3 | 3 | 3 | 1 | 10 |
+| categorical | 7 | 5 | 12 | 22 | 2 | 12 |
+| numerical | 6 | 8 | 16 | 10 | 20 | 10 |
+| target | binary | binary | binary | binary | binary | 3 classes |
+
+# B.2. Details of Experimental setup
+
+We summarize the settings of the feature estimation models $g$ and the label prediction model $f$ . Both $g$ and $f$ were implemented as two-layer perceptrons with a hidden layer of width 500 and ReLU activations. All models were optimized with Adam (Kingma & Ba, 2014) using a learning rate of 0.0005, a batch size of 512, 100 epochs, and a weight decay of 0.0002. The logistic loss was used for training $f$ .
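As a concrete illustration of the architecture described above, the forward pass of a two-layer perceptron with a width-500 ReLU hidden layer and the logistic (softmax cross-entropy) loss can be sketched in NumPy. This is a minimal reconstruction, not the authors' implementation; the input width, initialization scale, and batch used here are arbitrary placeholders, and the actual models were additionally trained with Adam as described above:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 20, 500, 2  # n_in depends on the dataset; 20 is illustrative

# Two-layer perceptron: one ReLU hidden layer, one linear output layer
W1 = rng.normal(0.0, 0.01, (n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.01, (n_hidden, n_out))
b2 = np.zeros(n_out)

def forward(x):
    h = np.maximum(0.0, x @ W1 + b1)  # ReLU activation
    return h @ W2 + b2                # raw logits

def logistic_loss(logits, y):
    # Softmax cross-entropy averaged over the batch (numerically stabilized)
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(y)), y].mean()

x = rng.random((8, n_in))
y = rng.integers(0, n_out, size=8)
loss = logistic_loss(forward(x), y)
```

With near-zero initial weights the logits are close to zero, so the initial loss for a binary target is close to $\log 2 \approx 0.693$.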
+
+We summarize the method for calculating the error bound presented in Theorem 4.2 for this experiment. The parameter $\delta$ was set to 0.01. Additionally, assuming that $\mathcal{F}$ is sufficiently expressive, we set $R_{l}(f_{\mathcal{F}}) = 0$ . During inference, the predicted label from $f$ was determined based on the largest output value of $f$ . Consequently, scaling the outputs of $f$ does not affect the inference results. Thus, we assumed that the maximum value of each element in $f$ 's output is 1 and set $U_{l} = 2.0$ . Since the loss function $l$ for label prediction is the logistic loss, which is 1-Lipschitz continuous, it follows that $\Re_{n}^{*}(\widetilde{\mathcal{F}}_{l}) = \Re_{n}^{*}(\mathcal{F})$ and $\Re_{n}^{g}(\widetilde{\mathcal{F}}_{l}) = \Re_{n}^{g}(\mathcal{F})$ (Lemma 26.9 in (Shalev-Shwartz & Ben-David, 2014)). The terms $\Re_{n}^{*}(\mathcal{F})$ and $\Re_{n}^{g}(\mathcal{F})$ were computed using the upper bound on the Rademacher complexity of a multilayer perceptron derived by Neyshabur et al. (Theorem 1 in (Neyshabur et al., 2015)). To compute this upper bound, it is necessary to determine $\mu$ , which represents the upper bound on the $l_{p}$ -norm of all parameters of $f$ , as well as the value of $p$ . In this experiment, $p = 2$ was chosen. Furthermore, since it is possible to scale all parameter values of $f$ without affecting the inference results for predicting a single label, we set $\mu = 1$ .
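Putting these choices together, the bound of Theorem 4.2 (in the form of Eq. (A.24)) can be assembled numerically from its components. The sketch below is our own illustration, not the authors' code; the Rademacher complexity and estimation-risk inputs are placeholder values, since their true values depend on the model, the data, and the Neyshabur et al. bound computed from the network's parameter norms:

```python
import math

def theorem_4_2_bound(n, rad_star, rad_g, sum_r01,
                      U_l=2.0, L_l=1.0, delta=0.01, R_lF=0.0):
    """Evaluate the RHS of Eq. (A.24) for given complexity/risk inputs.

    rad_star, rad_g : Rademacher complexity terms for F (placeholder inputs)
    sum_r01         : sum over j of R_{01,j}(g_j), the total WF estimation risk
    """
    conc = U_l * math.sqrt(math.log(4.0 / delta) / (2.0 * n))  # concentration term
    est = math.sqrt(2.0 * U_l * sum_r01)                       # (2 U_l sum R01)^(1/2)
    first = 4.0 * (L_l * rad_star + L_l * rad_g + conc)
    second = (2.0 * math.sqrt(R_lF + 4.0 * L_l * rad_star + 2.0 * conc) + est) * est
    return first + second

# With complexity terms decaying like 1/sqrt(n), the bound shrinks as n grows
for n in (1_000, 10_000, 100_000):
    rad = 1.0 / math.sqrt(n)
    print(n, round(theorem_4_2_bound(n, rad, rad, sum_r01=0.05), 3))
```

Note that when `sum_r01` is 0 the bound reduces to its first (purely supervised) term, matching the intuition that perfect WF estimation removes the extra penalty.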
+
+Figure C.3. A comparison between $R_{l,g}(f_{g,\overline{S}})$ and the error bound derived in Theorem 4.2, for various estimation errors of $g$ using Default and Diabetes datasets.
+
+# C. Additional Experiments
+
+# C.1. Additional Datasets
+
+In Section 5, we validated our theoretical results through numerical experiments using real-world datasets. In this section, we further examine the validity of our findings on two additional datasets that were not used in Section 5.2. The datasets used here are Default (Yeh, 2009) and Diabetes (Kahn), with details provided in Appendix B.1. Figure C.3 presents the results of the same experimental procedure applied to these datasets, following the methodology outlined in Section 5.2. In these datasets, variations in the estimation accuracy of the exact values of WFs resulted in only minor changes in the risk of the downstream task. As a consequence, visualizations similar to those in Figure 5.2 significantly compromised readability. To address this, we separately plot the observed risks and the theoretical bound derived from Theorem 4.7.
+
+From Figure C.3, we observe that a smaller value of $R_{01,j}(g_j)$ leads to a greater reduction in $R_{g,l}(f_{g,\overline{S}})$ as $n$ increases. This trend is consistent with the behavior of the error bounds illustrated in the same figure. Regarding the rate of decrease in both $R_{g,l}(f_{g,\overline{S}})$ and the bound with respect to $n$ , we find that their sensitivity to changes in $R_{01,j}(g_j)$ is less pronounced in the Default dataset than in the Diabetes dataset. This difference can be attributed to the fact that the Default dataset contains only two WFs, whereas the Diabetes dataset includes four. Consequently, the influence of WFs on downstream tasks is inherently smaller in the Default dataset. Therefore, from the results on these two additional datasets, we further confirm that our derived error bound in Theorem 4.7 successfully captures the relationship between the rate of decrease in $R_{g,l}(f_{g,\overline{S}})$ with increasing $n$ and the value of $R_{01,j}(g_j)$ .
+
+# C.2. Comparison with the Case Where WFs are Used as Inputs of $f$
+
+In this section, we compare the risk of the model $f$ trained by directly using $\overline{X}^{\mathrm{w}}$ as input (i.e., $g(X^{\mathrm{o}}) = \overline{X}^{\mathrm{w}}$ ) with the risks reported in Section 5. We focus on the four datasets used in Section 5, since the two additional datasets examined in Section C.1 exhibit a relatively weaker dependency of the risk of $f$ on the estimation accuracy of $g$ . The training procedure for $f$ is identical to that described in Section 5. Through this comparison, we aim to provide empirical insight into how accurately $g$ must estimate the exact values of WFs in order to improve the generalization performance of $f$ .
+
+Figure C.4 shows the results under the setting where all WFs are complementary features (CFs). Each observation of a
+
+
+Figure C.4. Comparison between the risks in Section 5 and the risk of $f$ when $g(\mathbf{X}^{\mathrm{o}}) = \overline{\mathbf{X}}^{\mathrm{w}}$ . The results correspond to the case where all WFs are CFs, and both the mean and standard deviation over five trials are reported.
+
+
+Figure C.5. Comparison between the risks in Section 5 and the risk of $f$ when $g(\mathbf{X}^{\mathrm{o}}) = \overline{\mathbf{X}}^{\mathrm{w}}$ . The results correspond to the case where each WF is observed as a set of size two including the exact value, with mean and standard deviation over five runs reported.
+
+CF is sampled uniformly from all values except the exact one. From this figure, we observe that for the Adult, Kick, and Census datasets, a classification error of $g_{j}$ below 0.5 is sufficient to obtain a model $f$ that outperforms the baseline where $g(\mathbf{X}^{\mathrm{o}}) = \overline{\mathbf{X}}^{\mathrm{w}}$ . In contrast, for the Bank dataset, the classification error of $g_{j}$ must be below 0.3 to achieve a similar improvement.
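The CF sampling described above (uniform over all values except the exact one) can be sketched as follows; the function name and the value range are illustrative.

```python
import numpy as np

def sample_complementary(exact, n_values, rng):
    # A complementary feature: uniform over all values except the exact one.
    choices = [v for v in range(n_values) if v != exact]
    return rng.choice(choices)

rng = np.random.default_rng(0)
obs = [sample_complementary(exact=2, n_values=5, rng=rng) for _ in range(1000)]
assert 2 not in obs  # the exact value is never observed
```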
+
+Figure C.5 presents the results for the setting in which each observed value of every WF is a set of size two that includes the exact value. The additional value in the set, apart from the exact value, is sampled uniformly at random. When using such $\overline{X}^{\mathrm{w}}$ as input to $f$ , each WF is encoded as a binary indicator vector whose entries corresponding to the values in the observed set are set to one. These vectors are then used as inputs to $f$ . From Figure C.5, in contrast to the case where WFs are CFs, we observe that whether $g$ yields better performance than the case where $g(X^{\mathrm{o}}) = \overline{X}^{\mathrm{w}}$ depends on $n$ . This difference is attributed to the fact that $\overline{X}^{\mathrm{w}}$ always includes the exact value, whereas estimation via $g$ inevitably produces instances with incorrect values due to estimation errors. These results suggest that, for estimating the exact values of WFs, it may be more effective to output a probability distribution over possible values rather than to predict a single deterministic value. Accordingly, developing methods in which $g$ produces a distribution as output, along with establishing a theoretical framework that accommodates such $g$ , is an important future direction for WFL. The findings in this paper provide a fundamental basis for such extensions.
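The set-valued observation and its encoding described above can be sketched as follows; the helper names are hypothetical, and the observed set of size two is mapped to a binary vector with exactly two ones.

```python
import numpy as np

def observe_as_pair(exact, n_values, rng):
    # Observed set of size two: the exact value plus one uniform distractor.
    distractor = rng.choice([v for v in range(n_values) if v != exact])
    return {exact, int(distractor)}

def encode_set_wf(observed_set, n_values):
    # Binary indicator encoding: entries for each value in the set are one.
    v = np.zeros(n_values)
    v[list(observed_set)] = 1.0
    return v

rng = np.random.default_rng(0)
s = observe_as_pair(exact=1, n_values=4, rng=rng)
x = encode_set_wf(s, n_values=4)
assert x[1] == 1.0 and x.sum() == 2.0  # exact value always included
```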
+
+# D. Limitation
+
+In this paper, we consider the basic framework of WFL, where a feature estimation model is constructed for each WF independently, using OFs as inputs. Accordingly, our framework does not consider approaches that incorporate dependencies among WFs into the construction of feature estimation models. For example, we do not address methods that first estimate the exact values of a given WF and then use those estimates as input features to infer the exact values of other WFs, or approaches that jointly estimate the exact values of multiple WFs within a single model. Developing such approaches is of practical importance and is left for future work. Furthermore, analyzing such methods would require a theoretical framework for quantifying how well the dependencies among WFs are captured, as well as for modeling these dependencies.
+
+The theoretical insights presented in this paper provide a foundational basis for such future investigations.
+
+For similar reasons, methods that construct $f$ and $g$ as a single unified model and train it jointly are not yet covered by the analysis presented in this paper. Developing and analyzing such methods also constitutes an important direction for future research.
+
+Our analysis does not account for the feature importance of each WF in the downstream prediction task. Intuitively, the estimation accuracy for WFs that are strongly related to the downstream target variable should have a greater impact on the error bound for learning $f$ than that for WFs that are weakly related or irrelevant. However, the error bound we derived does not incorporate such feature importance and thus cannot differentiate the relative contributions of individual WFs. Deriving an error bound that incorporates feature importance therefore remains an important direction for future work, and we believe the results presented in this paper provide a solid foundation for such an extension.
\ No newline at end of file
diff --git a/aunifiedframeworkforgeneralizationerroranalysisoflearningwitharbitrarydiscreteweakfeatures/images.zip b/aunifiedframeworkforgeneralizationerroranalysisoflearningwitharbitrarydiscreteweakfeatures/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..0feeac0261a31d6bfce2ae6517df4c1a58955110
--- /dev/null
+++ b/aunifiedframeworkforgeneralizationerroranalysisoflearningwitharbitrarydiscreteweakfeatures/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:209250e1b4ce9a10c4daf0df41a3c6034639f8ac921bc4a1ea6208eacc8eaab2
+size 1195276
diff --git a/aunifiedframeworkforgeneralizationerroranalysisoflearningwitharbitrarydiscreteweakfeatures/layout.json b/aunifiedframeworkforgeneralizationerroranalysisoflearningwitharbitrarydiscreteweakfeatures/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..bc72c5c2d6f5e1ab8b5bdf2d088d02502cfbb86d
--- /dev/null
+++ b/aunifiedframeworkforgeneralizationerroranalysisoflearningwitharbitrarydiscreteweakfeatures/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c6e0467690aa2df95e3ace132464a52d163be6bf35af3957637d6638a3821272
+size 1058210
diff --git a/aunifiedtheoreticalanalysisofprivateandrobustofflinealignmentfromrlhftodpo/18ba0070-86b1-477b-968b-6c5bf9c2d575_content_list.json b/aunifiedtheoreticalanalysisofprivateandrobustofflinealignmentfromrlhftodpo/18ba0070-86b1-477b-968b-6c5bf9c2d575_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..7df3667cdad54ae8b08bd91fb3dc70e74a9cfd3d
--- /dev/null
+++ b/aunifiedtheoreticalanalysisofprivateandrobustofflinealignmentfromrlhftodpo/18ba0070-86b1-477b-968b-6c5bf9c2d575_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bd93cd16fff3f8a1269308076c5972441f1ee1a73fe64c965f110c882c00fad6
+size 200196
diff --git a/aunifiedtheoreticalanalysisofprivateandrobustofflinealignmentfromrlhftodpo/18ba0070-86b1-477b-968b-6c5bf9c2d575_model.json b/aunifiedtheoreticalanalysisofprivateandrobustofflinealignmentfromrlhftodpo/18ba0070-86b1-477b-968b-6c5bf9c2d575_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..ecec672617ea0e2bbf6d3a9402004c0b74ab929d
--- /dev/null
+++ b/aunifiedtheoreticalanalysisofprivateandrobustofflinealignmentfromrlhftodpo/18ba0070-86b1-477b-968b-6c5bf9c2d575_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6695eed5fc13f8b3b3556cd26256835164a948ed170cddab60d28231b421b03e
+size 234788
diff --git a/aunifiedtheoreticalanalysisofprivateandrobustofflinealignmentfromrlhftodpo/18ba0070-86b1-477b-968b-6c5bf9c2d575_origin.pdf b/aunifiedtheoreticalanalysisofprivateandrobustofflinealignmentfromrlhftodpo/18ba0070-86b1-477b-968b-6c5bf9c2d575_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..7a90f0a0a95c05c0be4ed4fdb1fb330767e2c655
--- /dev/null
+++ b/aunifiedtheoreticalanalysisofprivateandrobustofflinealignmentfromrlhftodpo/18ba0070-86b1-477b-968b-6c5bf9c2d575_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9fce662b8c48f885f16dec3dd78c0b5d4ce67215c93b69909c9da73e66cdde79
+size 491518
diff --git a/aunifiedtheoreticalanalysisofprivateandrobustofflinealignmentfromrlhftodpo/full.md b/aunifiedtheoreticalanalysisofprivateandrobustofflinealignmentfromrlhftodpo/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..247cdfd11e6c2e04f78ce8ebcff059ffd96c749f
--- /dev/null
+++ b/aunifiedtheoreticalanalysisofprivateandrobustofflinealignmentfromrlhftodpo/full.md
@@ -0,0 +1,1033 @@
+# A Unified Theoretical Analysis of Private and Robust Offline Alignment: from RLHF to DPO
+
+Xingyu Zhou Yulian Wu Francesco Orabona
+
+# Abstract
+
+In this paper, we theoretically investigate the effects of noisy labels in offline alignment, with a focus on the interplay between privacy and robustness against adversarial corruption. Specifically, under linear modeling assumptions, we present a unified analysis covering both reinforcement learning from human feedback (RLHF) and direct preference optimization (DPO) under different privacy-corruption scenarios, such as Local differential privacy-then-Corruption (LTC), where human preference labels are privatized before being corrupted by an adversary, and Corruption-then-Local differential privacy (CTL), where labels are corrupted before privacy protection. Our analysis leverages a reduction framework that reduces the offline alignment problem under linear modeling assumptions to parameter estimation in logistic regression. This framework allows us to establish an interesting separation result between LTC and CTL, demonstrating that LTC presents a greater challenge than CTL in offline alignment, even under linear models. As important by-products, our findings also advance the state-of-the-art theoretical results in offline alignment under privacy-only or corruption-only scenarios.
+
+# 1. Introduction
+
+The alignment training process in language models that utilizes a human-labeled preference dataset has been instrumental in producing more helpful, harmless, and honest responses (Bai et al., 2022). Leveraging an offline preference dataset, two prominent paradigms have emerged. The first is the indirect approach, such as Reinforcement Learning from Human Feedback (RLHF) (Ziegler et al., 2019; Ouyang et al., 2022), which learns an intermediate reward model before optimizing the policy. The second is the direct approach, exemplified by Direct Preference Optimization (DPO) (Rafailov et al., 2023), which directly optimizes the policy via supervised learning on the preference dataset.
+
+$^{1}$ Wayne State University, USA $^{2}$ King Abdullah University of Science and Technology, Saudi Arabia. Correspondence to: Xingyu Zhou .
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+It is clear that the performance of both RLHF and DPO is significantly influenced by the quality of the preference labels in the dataset. However, in practice, these labels are often noisy due to various factors (Lambert et al., 2023). One potential noise source is corruption or misspecification during label generation or data collection, e.g., data poisoning attack (Casper et al., 2023). Additionally, privacy concerns in human preference (as illustrated in Feng et al. (2024)) may prompt individuals to provide noisy or privatized preferences rather than their true rankings.
+
+From a theoretical perspective, understanding the impact of these noisy labels—resulting from both corruption and privacy—is essential for improving offline alignment. Recent studies have made some initial attempts to address this issue (Mandal et al., 2024; Chowdhury et al., 2024; 2023; Bukharin et al., 2024), but they face two fundamental limitations: (1) They often treat corruption and privacy separately and focus exclusively on either RLHF or DPO, while, in practice, noisy labels can stem from both factors simultaneously; (2) The theoretical guarantees provided by these studies are often suboptimal, even when privacy and corruption are separately considered. Motivated by these limitations and practical scenarios, we are particularly interested in the following question:
+
+Can we provide a unified analysis of the interplay between privacy and robustness in both RLHF and DPO?
+
+We provide an affirmative answer to the above question by presenting the following contributions:
+
+1. A Unified Theoretical Framework. We present a unified theoretical framework for analyzing the interplay between privacy and robustness in offline alignment, covering both RLHF and DPO. Specifically, for privacy protection, we consider Local Differential Privacy (LDP) (Kasiviswanathan et al., 2011; Duchi et al., 2013) for preference labels, while for robustness, we consider the strong adversary corruption model (Diakonikolas & Kane, 2023), where an adaptively chosen fraction of labels can be corrupted. Our framework can simultaneously handle three privacy-corruption scenarios for both RLHF and DPO: Corruption-then-LDP (CTL), LDP-then-Corruption (LTC), and Corruption-LDP-Corruption (CLC), capturing different ways privacy and corruption may interact in practice.
+
+2. Reduction to Logistic Regression. Our unified analytical framework leverages a reduction that transforms the offline alignment problem, under certain linear modeling assumptions, into parameter estimation in logistic regression. This reduction enables us to establish suboptimality bounds for both RLHF and DPO by focusing on parameter estimation in logistic regression under private and corrupted labels across different scenarios. Moreover, it highlights key differences between RLHF and DPO, providing insights into practical design considerations.
+
+3. Separation between CTL and LTC. A key takeaway from our study of the interplay between privacy and robustness to corruption is that LTC is a more challenging setting than CTL, illustrating that the order in which privacy and corruption interact with each other significantly impacts the performance of offline alignment.
+
+4. New State-of-the-art Guarantees. Our results, when reduced to privacy-only or corruption-only settings, set new state-of-the-art results on theoretical guarantees for RLHF and DPO. For instance, for DPO under "corrupted" labels, our result is the first one that achieves $\mathcal{O}(1 / \sqrt{n})$ rate (where $n$ is the size of preference dataset), matching the standard rate without noise. Additionally, as a by-product of our reduction approach, we provide the first results on parameter estimation error in logistic regression under both private and corrupted labels, which may be of independent interest.
+
+Finally, we remark that, as in many previous related works, e.g., Zhu et al. (2023); Chowdhury et al. (2023), we consider linear modeling assumptions for the sake of theoretical analysis. However, we believe that our results could serve as important benchmarks for more general function classes. In fact, we have also verified our separation result between CTL and LTC in the general case via experiments on GPT2-large, see Appendix D for a detailed discussion.
+
+# 2. Related Work
+
+In the main body, we only focus on the most related work on robust and private offline alignment, while relegating an additional discussion to Appendix A.
+
+**Provably Robust Alignment under Corruption.** Mandal et al. (2024) considers offline RLHF with corrupted preference datasets and establishes upper bounds on the suboptimality gap under various coverage assumptions of the offline dataset. As will be discussed in Section 6.1, their results are either suboptimal or lack rigor due to gaps in their proof. For robust DPO, Chowdhury et al. (2024) considers a strictly weaker corruption model and derives a suboptimality bound of rate $\mathcal{O}(1/n^{1/4})$ . In contrast, our general result, when reduced to the same corruption model, achieves a better rate of $\mathcal{O}(1/\sqrt{n})$ . Bukharin et al. (2024) also considers a specific corruption model in the label generation process of RLHF but only provides the estimation error of the reward model, without a performance guarantee for the final policy.
+
+**Provably Private Alignment.** The most related work in this aspect is Chowdhury et al. (2023), which mainly focuses on the reward model estimation in RLHF under various privacy constraints (i.e., local and central label differential privacy). Our intermediate result on estimation error (Section 5) recovers the one in Chowdhury et al. (2023) when the corruption parameter is set to zero. Moreover, compared to the implicit suboptimality bound in Chowdhury et al. (2023), we provide the first explicit bound in terms of the relative condition number (Agarwal et al., 2021), which parallels similar results in standard (robust) offline RL (Zhang et al., 2022), i.e., reward-based rather than preference-based.
+
+# 3. Preliminaries
+
+Background on Offline Alignment. The goal of offline alignment is to further tune the Supervised Fine-Tuning (SFT) model to match human preferences using an offline preference dataset. The preference dataset $\mathcal{D} = (s_i, a_i^0, a_i^1, y_i)$ consists of $n$ samples, each has one context/state $s_i$ (e.g., prompt), two actions $a_i^0, a_i^1$ (e.g., two answers from language models) and label/preference feedback $y_i \in \{0, 1\}$ indicating which one is preferred by humans. We assume $s_i$ to be sampled independently from a distribution $\rho$ . A widely used approach for modeling $y_i$ is Bradley-Terry model (Bradley & Terry, 1952):
+
+$$
+\mathbb {P} \left\{y _ {i} = l | s _ {i}, a _ {i} ^ {0}, a _ {i} ^ {1} \right\} = \frac {\exp \left(r ^ {\star} \left(s _ {i} , a _ {i} ^ {l}\right)\right)}{\exp \left(r ^ {\star} \left(s _ {i} , a _ {i} ^ {0}\right)\right) + \exp \left(r ^ {\star} \left(s _ {i} , a _ {i} ^ {1}\right)\right)}, \tag {1}
+$$
+
+for $l \in \{0,1\}$ , where $r^{\star}(\cdot ,\cdot)$ is a ground truth reward model.
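As an illustration, preference labels under the Bradley-Terry model (1) can be sampled as below; note that $\mathbb{P}\{y_i = 1\}$ reduces to a sigmoid of the reward difference, which is exactly what enables the logistic-regression reduction later in Section 4. The reward values here are arbitrary placeholders.

```python
import numpy as np

def sample_bt_label(r0, r1, rng):
    # Bradley-Terry: P{y = 1} = exp(r1) / (exp(r0) + exp(r1)) = sigmoid(r1 - r0).
    p1 = 1.0 / (1.0 + np.exp(-(r1 - r0)))
    return int(rng.random() < p1)

rng = np.random.default_rng(0)
labels = [sample_bt_label(r0=0.0, r1=2.0, rng=rng) for _ in range(10000)]
# With a reward gap of 2, action 1 is preferred about sigmoid(2) ~ 88% of the time.
```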
+
+Based on this preference dataset, offline alignment aims to learn a good policy $\widehat{\pi}$ . In particular, the performance of the learned policy $\widehat{\pi}$ is evaluated by the suboptimality gap between $\widehat{\pi}$ and a comparator policy $\pi^{\dagger}$ , defined as
+
+$$
+\operatorname{SubOpt}\left(\widehat{\pi}, \pi^{\dagger}\right) = J\left(\pi^{\dagger}\right) - J(\widehat{\pi}), \tag{2}
+$$
+
+where $J(\pi)\coloneqq \mathbb{E}_{s\sim \rho ,a\sim \pi (\cdot |s)}[r^{\star}(s,a)]$ and $\pi^{\dagger}$ is not necessarily the optimal policy.
+
+RLHF and DPO. As already mentioned, there are two major paradigms in alignment for finding $\widehat{\pi}$ : indirect and direct approaches. The former, exemplified by RLHF (Ziegler et al., 2019), involves an intermediate reward model learning process from preference dataset $\mathcal{D}$ before the policy optimization. The latter, represented by DPO (Rafailov et al., 2023), employs a direct policy optimization, i.e., using a supervised-learning loss function to optimize the policy directly over the preference dataset $\mathcal{D}$ .
+
+Privacy Protection in Human Feedback. The preference signal $y_{i}$ in $\mathcal{D}$ could reveal sensitive personal information (Feng et al., 2024; Chowdhury et al., 2023), hence requiring a rigorous privacy protection. To this end, we consider the local label Differential Privacy (DP) (Chaudhuri & Hsu, 2011; Ghazi et al., 2021), which means that the learner now only has access to a privatized label rather than the raw one. More specifically, we have the following definition.
+
+Definition 3.1 (Label DP in Local Model (Chowdhury et al., 2023)). Let $\varepsilon > 0$ and $\delta \in [0,1]$ . If each label is privatized by a local randomizer $\mathcal{R}$ , which satisfies for any $y, y'$ and any subset $S$ in the range of $\mathcal{R}$ that
+
+$$
+\mathbb {P} \{\mathcal {R} (y) \in S \} \leq e ^ {\varepsilon} \cdot \mathbb {P} \{\mathcal {R} (y ^ {\prime}) \in S \} + \delta ,
+$$
+
+then we say $\mathcal{R}$ is an $(\varepsilon, \delta)$ -label differentially private local randomizer, and this privatized dataset is called label-private preference dataset. The entire alignment process that operates with the privatized dataset is said to satisfy local label DP. When $\delta = 0$ , we simply say it is a $\varepsilon$ -local label DP.
+
+Remark 3.2 (Randomized Response). Given that the true labels are binary, we would like to maintain this binary property after privatization. Thus, we adopt the standard randomized response mechanism (Warner, 1965) as our local randomizer, which injects controllable noise into the labels by random flipping. Here, by "controllable," we mean that both the noise injection method and the noise level are under our control, determined by the privacy parameter $\varepsilon$ .
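A sketch of randomized response for binary labels: the true label is kept with probability $e^{\varepsilon}/(1+e^{\varepsilon})$ and flipped otherwise, which satisfies $\varepsilon$-local label DP. This is the standard construction, shown here only for illustration.

```python
import numpy as np

def randomized_response(y, eps, rng):
    # Keep the true binary label with prob e^eps / (1 + e^eps), else flip it.
    keep_prob = np.exp(eps) / (1.0 + np.exp(eps))
    return y if rng.random() < keep_prob else 1 - y

rng = np.random.default_rng(0)
eps = 1.0
kept = [randomized_response(1, eps, rng) for _ in range(20000)]
# The empirical keep rate should be close to e / (1 + e) ~ 0.731.
```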
+
+Corruption in Human Feedback. The human feedback $y_{i}$ can often be noisy and even corrupted at the source or during the data collection process, which deviates from the assumed true generation process in (1). Consequently, the final learned policy $\widehat{\pi}$ needs to be robust with respect to corruption in labels. We consider a corruption model similar to the strong corruption model from the robust statistics literature (Diakonikolas & Kane, 2023), under which an adversary can inspect the samples and then adaptively corrupt the labels of a fraction of them.
+
+Definition 3.3 (Label Corruption Model). Let $\alpha \in [0,1/2]$ . We consider an $\alpha$ -corruption model: an adversary can inspect the samples in a preference dataset of size $n$ and then assign any label value of 0 or 1 to at most $\alpha n$ samples.
+
+Interplay between Privacy and Robustness. One key theme of this paper is to study the interplay between privacy and robustness in offline alignment. In particular, we are interested in the impact of the order between privacy protection and corruption in the labels on the suboptimality gap (cf. (2)), for both RLHF and DPO. To this end, we will mainly consider the following settings.
+
+Definition 3.4 (CTL and LTC). Given a raw preference dataset $\mathcal{D} = (s_i, a_i^0, a_i^1, y_i)_{i=1}^n$ , we consider the following settings that differ in the order of privacy protection (see Definition 3.1) and corruption (see Definition 3.3). In all cases, the final input dataset for the learning algorithm will be denoted by $\mathcal{D}_{\mathrm{in}} = (s_i, a_i^0, a_i^1, z_i)_{i=1}^n$ .
+
+Corruption-then-LDP (CTL): An adversary first corrupts the labels in $\mathcal{D}$ to $\bar{y}_i$ . Then, each label $\bar{y}_i$ is privatized by a local randomizer.
+
+LDP-then-Corruption (LTC): Each label $y_{i}$ in $\mathcal{D}$ is first privatized by a local randomizer, resulting in the private label $\widetilde{y}_{i}$ . Then, the preference dataset with private labels is further corrupted by an adversary.
+
+Remark 3.5. As a last setting, one may also consider the setting where corruption happens both before and after privacy protection, which turns out to be a simple combination of the results for CTL and LTC, hence omitted in our results.
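The two orderings above can be sketched as simple pipelines over a label vector. The adversary here flips a random $\alpha$ fraction of labels, a deliberate simplification of the adaptive adversary of Definition 3.3, used purely for illustration.

```python
import numpy as np

def privatize(labels, eps, rng):
    # Randomized response applied independently to each binary label.
    keep = rng.random(len(labels)) < np.exp(eps) / (1.0 + np.exp(eps))
    return np.where(keep, labels, 1 - labels)

def corrupt(labels, alpha, rng):
    # Flip an alpha fraction of labels at random positions (a stand-in
    # for the adaptive adversary, which may choose positions by inspection).
    out = labels.copy()
    idx = rng.choice(len(labels), size=int(alpha * len(labels)), replace=False)
    out[idx] = 1 - out[idx]
    return out

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=1000)
z_ctl = privatize(corrupt(y, 0.1, rng), eps=1.0, rng=rng)  # Corruption-then-LDP
z_ltc = corrupt(privatize(y, eps=1.0, rng=rng), 0.1, rng)  # LDP-then-Corruption
```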
+
+# 4. Reduction to Parameter Estimation
+
+In this section, we will show that the key to establishing the suboptimality guarantees in both RLHF and DPO is a tight parameter estimation in logistic regression, under certain modeling assumptions. This allows us to focus on a single-parameter estimation problem under different settings (i.e., CTL and LTC) for both RLHF and DPO. More importantly, this unified perspective also enables us to easily see the connection and difference between RLHF and DPO.
+
+Logistic Regression. Recall that given a feature vector $x_{i} \in \mathbb{R}^{d}$ , under logistic regression, the label $y_{i} \in \{0,1\}$ is generated according to the following probability:
+
+$$
+\mathbb{P}\left\{ y_{i} = 1 \mid x_{i} \right\} = \sigma\left(\left\langle \theta_{\mathrm{true}}, x_{i} \right\rangle\right), \tag{3}
+$$
+
+where $\sigma (z) = \frac{1}{1 + e^{-z}}$ is the sigmoid function, $\theta_{\mathrm{true}}\in \mathbb{R}^d$ is the unknown true parameter and $\langle \cdot ,\cdot \rangle$ denotes the inner product of two vectors.
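As a concrete illustration of this estimation problem, a minimal maximum-likelihood sketch (plain gradient descent on the logistic negative log-likelihood) is given below. This is not the estimator analyzed in the paper, just the vanilla non-private, non-robust baseline on synthetic data.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, steps=2000):
    # Plain gradient descent on the negative log-likelihood of model (3).
    theta = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (sigmoid(X @ theta) - y) / len(y)
        theta -= lr * grad
    return theta

rng = np.random.default_rng(0)
theta_true = np.array([1.0, -0.5])
X = rng.normal(size=(5000, 2))
y = (rng.random(5000) < sigmoid(X @ theta_true)).astype(float)
theta_hat = fit_logistic(X, y)  # close to theta_true for large n
```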
+
+# 4.1. RLHF with a Linear Reward Model
+
+We show that when the reward model in (1) is a linear function, the key to bounding the suboptimality gap in RLHF is the parameter estimation in a logistic regression problem. To start with, we formally state the linear reward model, following common definitions used in prior work (Zhu et al., 2023; Xiong et al., 2024; Cen et al., 2024; Chowdhury et al., 2023; Mandal et al., 2024).
+
+Assumption 4.1 (Linear Reward with Boundedness). We assume that the ground truth reward $r^{\star}$ is linear, i.e., $r^{\star}(s,a) = \langle \phi (s,a),\theta^{\star}\rangle$ , where $\phi (s,a):\mathcal{S}\times \mathcal{A}\to \mathbb{R}^{d}$ is some known and fixed feature map and $\mathcal{S}, \mathcal{A}$ are the state space and the action space, respectively. We also assume the following standard boundedness conditions. For all $s \in \mathcal{S}$ and $a \in \mathcal{A}$ , without loss of generality, we assume $\| \phi(s, a) \| \leq 1$ . Moreover, we assume $\theta^{\star} \in \Theta_{B} = \{\theta \in \mathbb{R}^{d} : \langle 1, \theta \rangle = 0, \| \theta \| \leq B\}$ , where the condition $\langle 1, \theta \rangle = 0$ ensures the identifiability of $\theta^{\star}$ .
+
+Under the above assumption, we consider the standard offline RLHF algorithm, but with an additional parameter $\eta$ . In particular, we consider two alternative outputs: When $\eta = 0$ , the output policy is $\widehat{\pi} = \operatorname{argmax}_{\pi} \widehat{J}(\pi)$ where $\widehat{J}(\pi) = \mathbb{E}_{s \sim \rho, a \sim \pi(\cdot|s)}[\langle \widehat{\theta}, \phi(s, a) \rangle]$ , that is essentially a greedy algorithm with respect to an estimate $\widehat{\theta}$ ; When $\eta = 1$ , the output is $\widehat{\pi} = \operatorname{argmax}_{\pi} \widehat{J}(\pi)$ , where the objective function is defined via the principle of pessimism (Zhu et al., 2023; Jin et al., 2021; Li et al., 2024) as
+
+$$
+\begin{array}{l} \widehat{J}(\pi) = \min_{\theta \in \Theta(\widehat{\theta}, \lambda)} \mathbb{E}_{s \sim \rho, a \sim \pi(\cdot | s)}[\langle \theta, \phi(s, a) \rangle] \\ \quad - \mathbb{E}_{s \sim \rho, a \sim \pi_{\mathrm{ref}}(\cdot | s)}[\langle \theta, \phi(s, a) \rangle], \end{array}
+
+by constructing a confidence set around an estimate $\widehat{\theta}$ :
+
+$$
+\Theta (\widehat {\theta}, \lambda) = \left\{\theta \in \Theta_ {B} \mid \left\| \widehat {\theta} - \theta \right\| _ {\widehat {\Sigma} + \lambda I} \leq \Gamma (n, d, \delta , \lambda) \right\}.
+$$
+
+For completeness and due to space limitations, the full algorithm is given in Algorithm 2 in the Appendix B.
+
+Here, we use a reference policy $\pi_{\mathrm{ref}}$ because the confidence set only measures the uncertainty of the difference in reward. That is, it does not measure the uncertainty for a single state-action pair.
+
+We have the following key theoretical result on Algorithm 2, with its proof in Appendix E.1.
+
+Proposition 4.2. Under Assumption 4.1, the labels $\{y_i\}_{i\in [n]}$ in the preference dataset of RLHF follow the logistic regression model with $\theta_{\mathrm{true}} = \theta^{\star}$ and $x_{i} = \phi (s_{i},a_{i}^{1}) - \phi (s_{i},a_{i}^{0})$ . Algorithm 2 with $\eta = 0$ achieves
+
+$$
+\operatorname{SubOpt}(\widehat{\pi}, \pi^{\star}) \leq 2\left\| \widehat{\theta} - \theta_{\mathrm{true}} \right\|_{2}, \tag{4}
+$$
+
+where $\pi^{\star} = \operatorname{argmax}_{\pi} J(\pi)$ . Further, let $\widehat{\Sigma} \coloneqq \frac{1}{n} \sum_{i} x_{i} x_{i}^{\top}$ and $\lambda > 0$ and suppose with probability at least $1 - \delta$ the estimate $\widehat{\theta}$ satisfies
+
+$$
+\left\| \widehat{\theta} - \theta_{\mathrm{true}} \right\|_{\widehat{\Sigma} + \lambda \mathbf{I}} \leq \Gamma(n, d, \delta, \lambda). \tag{5}
+$$
+
+Then, setting $\eta = 1$ in Algorithm 2, we have for any $\pi^{\dagger}$ and $\rho$ , with probability at least $1 - \delta$ ,
+
+$$
+\operatorname{SubOpt}\left(\widehat{\pi}, \pi^{\dagger}\right) \leq 2 \Gamma(n, d, \delta, \lambda) \times \left\| \mathbb{E}_{s \sim \rho}\left[ \phi(s, \pi^{\dagger}(s)) - \phi(s, \pi_{\mathrm{ref}}(s)) \right] \right\|_{(\widehat{\Sigma} + \lambda I)^{-1}}, \tag{6}
+$$
+
+for any reference policy $\pi_{\mathrm{ref}}$ , where we define $\phi (s,\pi (s))\coloneqq$ $\mathbb{E}_{a\sim \pi (\cdot |s)}[\phi (s,a)]$
+
+We can further simplify the result in (6) by introducing the following relative condition number, which can be viewed as the natural extension of standard one (Zhang et al., 2022; Agarwal et al., 2021) to the RLHF setting.
+
+Definition 4.3 (Relative Condition Number). For $\pi_1, \pi_2$ and a feature map $\phi$ , we define $\psi(s, a, a') = \phi(s, a) - \phi(s, a')$ and $\Sigma_{\pi_1, \pi_2}$ as
+
+$$
+\mathbb {E} _ {s \sim \rho , a \sim \pi_ {1} (\cdot | s), a ^ {\prime} \sim \pi_ {2} (\cdot | s)} \psi (s, a, a ^ {\prime}) \psi (s, a, a ^ {\prime}) ^ {\top}. \tag {7}
+$$
+
+For any comparator policy $\pi^{\dagger}$ and any given reference policy $\pi_{\mathrm{ref}}$ , we define
+
+$$
\kappa \left(\pi^ {\dagger}, \pi_ {\mathrm {ref}}\right) := \sup _ {w \in \mathbb {R} ^ {d}} \frac {w ^ {\top} \Sigma_ {\pi^ {\dagger} , \pi_ {\mathrm {ref}}} w}{w ^ {\top} \Sigma_ {\pi_ {\mathrm {sft}} , \pi_ {\mathrm {sft}}} w}. \tag {8}
+$$
+
We can now simplify our previous suboptimality bound using the relative condition number above in the following corollary, with its proof given in Appendix E.2.
+
Corollary 4.4. Let the same assumption as in Proposition 4.2 hold and further assume $\lambda \geq \Omega\left(\frac{d}{n} \cdot \ln(n / \delta)\right)$ . For any given comparator policy $\pi^{\dagger}$ with $\kappa(\pi^{\dagger}, \pi_{\mathrm{ref}}) < \infty$ , we can upper bound (6) as follows:
+
+$$
+\mathrm {S u b O p t} (\widehat {\pi}, \pi^ {\dagger}) \leq 2 \sqrt {3} \cdot \Gamma (n, d, \delta , \lambda) \cdot \sqrt {d \cdot \kappa (\pi^ {\dagger} , \pi_ {\mathrm {r e f}})}.
+$$
+
+# 4.2. DPO with a Log-Linear Policy Class
+
In this section, we show that for a log-linear policy class (defined below), the suboptimality of DPO is also governed by parameter estimation in logistic regression.
+
+We begin with a brief recap of DPO, following the original paper (Rafailov et al., 2023). The key idea is to reparameterize the reward model by the optimal policy of a KL-regularized problem. In particular, for the following KL-regularized optimization objective (with $\beta > 0$ )
+
+$$
+J _ {\beta} (\pi) = \mathbb {E} _ {s \sim \rho , a \sim \pi (\cdot | s)} \left[ r ^ {\star} (s, a) - \beta \ln \frac {\pi (a | s)}{\pi_ {\mathrm {s f t}} (a | s)} \right],
+$$
+
+the optimal solution has the closed-form expression
+
+$$
+\pi^ {\star} (a | s) = \frac {1}{Z _ {\beta} (s)} \pi_ {\mathrm {s f t}} (a | s) \exp \left(r ^ {\star} (s, a) / \beta\right), \tag {9}
+$$
+
+where $Z_{\beta}(s) = \sum_{a\in \mathcal{A}}\pi_{\mathrm{sft}}(a|s)\exp (r^{\star}(s,a) / \beta)$ is the normalization factor. This allows us to rewrite the reward $r^{\star}$ in terms of $\pi^{\star}$ as follows
+
+$$
+r ^ {\star} (s, a) = \beta \ln \frac {\pi^ {\star} (a | s)}{\pi_ {\mathrm {s f t}} (a | s)} + \beta \ln Z _ {\beta} (s). \tag {10}
+$$
+
With the above re-parameterization of the reward in terms of the policy in (10) and the BT preference model in (1), DPO (Rafailov et al., 2023) directly minimizes the following log-loss function:
+
+$$
+\begin{array}{l} \mathcal {L} (\pi ; \pi_ {\mathrm {s f t}}) := \\ - \sum_ {i = 1} ^ {n} \mathbb {1} (y _ {i} = 0) \ln \sigma \left(\beta \ln \frac {\pi (a _ {i} ^ {0} | s _ {i})}{\pi_ {\mathrm {s f t}} (a _ {i} ^ {0} | s _ {i})} - \beta \ln \frac {\pi (a _ {i} ^ {1} | s _ {i})}{\pi_ {\mathrm {s f t}} (a _ {i} ^ {1} | s _ {i})}\right) \\ - \sum_ {i = 1} ^ {n} \mathbb {1} \left(y _ {i} = 1\right) \ln \sigma \left(\beta \ln \frac {\pi \left(a _ {i} ^ {1} \mid s _ {i}\right)}{\pi_ {\mathrm {s f t}} \left(a _ {i} ^ {1} \mid s _ {i}\right)} - \beta \ln \frac {\pi \left(a _ {i} ^ {0} \mid s _ {i}\right)}{\pi_ {\mathrm {s f t}} \left(a _ {i} ^ {0} \mid s _ {i}\right)}\right). \tag {11} \\ \end{array}
+$$
+
+In this paper, we consider the log-linear policy class for the sake of theoretical analysis.
+
+Assumption 4.5 (Log-linear Policy Class). We assume that the optimal policy in (9) satisfies $\pi^{\star} \in \Pi$ and $\pi_{\mathrm{sft}} \in \Pi$ where
+
+$$
+\Pi = \left\{\pi_ {\theta} (a | s) = \frac {\exp (\langle \theta , \phi (s , a) \rangle)}{\sum_ {a ^ {\prime} \in \mathcal {A}} \exp (\langle \theta , \phi (s , a ^ {\prime}) \rangle)} \right\}, \tag {12}
+$$
+
is the log-linear class for some known feature map $\phi(s, a): \mathcal{S} \times \mathcal{A} \to \mathbb{R}^d$ with $\| \phi(s, a) \| \leq 1$ . Moreover, the $\theta^\star$ corresponding to $\pi^\star$ satisfies $\theta^\star \in \Theta_B = \{\theta \in \mathbb{R}^d : \langle 1, \theta \rangle = 0, \| \theta \| \leq B\}$ , where the condition $\langle 1, \theta \rangle = 0$ ensures the identifiability of $\theta^\star$ .
+
The above policy realizability assumption is equivalent to reward model realizability. In particular, by plugging the log-linear policy into (11), we can establish that the labels $y_{i}$ again follow the logistic regression model in (3) with proper choices of $\theta_{\mathrm{true}}$ and $x_{i}$ . We have the following formal statement, with its proof in Appendix E.3.
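
The cancellation behind this reduction can be verified numerically: for log-linear policies, the normalization constants drop out of the DPO logit, leaving exactly $\langle \beta(\theta - \theta_{\mathrm{sft}}), \phi(s, a^1) - \phi(s, a^0)\rangle$. A sketch with hypothetical dimensions and parameters:

```python
import numpy as np

rng = np.random.default_rng(3)
d, A = 4, 5
phi = rng.normal(size=(A, d))                 # features at one fixed state
theta, theta_sft, beta = rng.normal(size=d), rng.normal(size=d), 0.7

def log_pi(th):
    """Log-probabilities of the log-linear policy (12) at the fixed state."""
    logits = phi @ th
    return logits - np.log(np.exp(logits).sum())

a1, a0 = 2, 4                                 # two arbitrary actions
logit_dpo = beta * ((log_pi(theta)[a1] - log_pi(theta_sft)[a1])
                    - (log_pi(theta)[a0] - log_pi(theta_sft)[a0]))

x = phi[a1] - phi[a0]                         # x_i as in Proposition 4.6
theta_true = beta * (theta - theta_sft)       # theta_true as in Proposition 4.6
print(abs(logit_dpo - theta_true @ x))        # numerically zero
```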
+
Proposition 4.6. Under Assumption 4.5, the labels $\{y_i\}_{i\in [n]}$ in the preference dataset of DPO follow the logistic regression model with $\theta_{\mathrm{true}} = \beta (\theta^{\star} - \theta_{\mathrm{sft}})$ , where $\beta >0$ , and $x_{i} = \phi (s_{i},a_{i}^{1}) - \phi (s_{i},a_{i}^{0})$ . Suppose that, with probability at least $1 - \delta$ , there exists an estimate $\widehat{\theta}$ that satisfies
+
+$$
+\left\| \widehat {\theta} - \theta_ {\text {t r u e}} \right\| _ {\widehat {\Sigma} + \lambda \mathbf {I}} \leq \Gamma (n, d, \delta , \lambda), \tag {13}
+$$
+
where $\widehat{\Sigma} := \frac{1}{n}\sum_{i}x_{i}x_{i}^{\top}$ and $\lambda > 0$ . Then, letting $\widehat{\theta}' = \widehat{\theta} / \beta + \theta_{\mathrm{sft}}$ and $\lambda \geq \Omega\left(\frac{d}{n}\cdot \ln (n / \delta)\right)$ , the corresponding policy $\widehat{\pi} = \pi_{\widehat{\theta}'}$ satisfies, with probability at least $1 - \delta$ ,
+
+$$
+\operatorname {S u b O p t} (\widehat {\pi}, \pi^ {\star}) \leq \frac {\sqrt {3}}{\sqrt {2}} \cdot \sqrt {\kappa_ {\Pi}} \cdot B \cdot \Gamma (n, d, \delta , \lambda),
+$$
+
+where $\kappa_{\Pi} := \max_{\pi \in \Pi} \kappa(\pi, \pi)$ is the maximum relative condition number across the entire policy class.
+
Remark 4.7. One can also rewrite the above bound using the maximum value of the implicit reward function, $r_{\mathrm{max}}$ , as
+
+$$
+\operatorname {S u b O p t} \left(\widehat {\pi}, \pi^ {\star}\right) \leq c \cdot \sqrt {\kappa_ {\Pi}} \cdot \frac {r _ {\max }}{\beta} \cdot \Gamma (n, d, \delta , \lambda),
+$$
+
+for some constant $c > 0$ and log-linear policy $\Pi$ .
+
Remark 4.8 (single-policy vs. all-policy concentrability). One nice feature of the above reduction is that it makes the key difference between RLHF and DPO easy to see. In particular, from Corollary 4.4 and Proposition 4.6, the key (and only) difference lies in the choice of relative condition number (especially under the typical parameter scaling $B = \mathcal{O}(\sqrt{d})$), which is closely related to the "concentrability coefficient" in offline RL (Munos, 2007; Jin et al., 2021). In particular, due to the use of pessimism in offline RLHF, one can achieve a bound in terms of $\kappa (\pi^{\dagger},\pi_{\mathrm{ref}})$, which is related to "single-policy concentrability" (Rashidinejad et al., 2021; Jin et al., 2021) for any comparator policy $\pi^{\dagger}$. On the other hand, due to the lack of uncertainty characterization in DPO, one needs "all-policy concentrability" (Chen & Jiang, 2019) $\kappa_{\Pi}$ in the upper bound, which is often much larger. In fact, this kind of dependence in standard DPO is shown to be necessary (Song et al., 2024).
+
+# 5. Parameter Estimation Under Private and Corrupted Labels
+
+As motivated by the last section, we now turn to designing algorithms for providing label privacy while accurately estimating the unknown parameter $\theta_{\mathrm{true}}$ in logistic regression, even under corrupted labels. As we will see, the key to the design is a new loss function, which allows us to adaptively handle the privacy-robustness interplays in a unified way. To facilitate the upcoming discussion, we formally state the general problem setup for logistic regression under private and corrupted labels.
+
Definition 5.1 (Private and robust parameter estimation problem). Let $\mathcal{D}$ be a dataset of i.i.d. samples $\{x_i, y_i\}_{i=1}^n$ where $x_i \sim \mu$ and $y_i$ follows the logistic regression model in (3). The input dataset $\mathcal{D}_{\mathrm{in}} = \{x_i, z_i\}_{i=1}^n$ is the private and corrupted version of $\mathcal{D}$ , following Definition 3.4. The goal is to design a local randomizer $\mathcal{R}$ for privatizing labels (cf. Definition 3.1) as well as an analyzer $\mathcal{A}$ that receives $\mathcal{D}_{\mathrm{in}}$ and outputs an estimate $\widehat{\theta}$ that is close to the underlying true parameter $\theta_{\mathrm{true}}$ , measured in a properly chosen norm. We assume the following boundedness conditions: for any $i \in [n]$ , $\| x_i\| \leq 1$ and $\theta_{\mathrm{true}} \in \Theta_{B'} = \{\theta \in \mathbb{R}^d : \langle 1, \theta \rangle = 0, \| \theta \| \leq B'\}$ .
+
Remark 5.2. The boundedness assumption essentially follows from the reduction in the last section. Here, we assume $\| x_{i}\| \leq 1$ (rather than bounded by 2) for simplicity, and $B^{\prime}$ can be chosen appropriately for RLHF and DPO, respectively.
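
Since Definition 3.4 is not restated here, the sketch below encodes our reading of the two orderings, inferred from the separation discussed later in Section 5.2: under CTL the adversary corrupts labels before the RR mechanism is applied, and under LTC it corrupts the already-privatized labels. The fraction-flipping adversary is only a weak stand-in for the strong adversary:

```python
import numpy as np

rng = np.random.default_rng(5)
n, alpha, eps = 1000, 0.1, 1.0
sig_eps = np.exp(eps) / (np.exp(eps) + 1.0)

def rr(labels):
    """eps-label-DP randomized response applied independently to each label."""
    return np.where(rng.random(labels.shape) < sig_eps, labels, 1 - labels)

def corrupt(labels):
    """Stand-in adversary: flips an alpha-fraction of labels chosen at random
    (the strong adversary of Definition 3.4 may instead flip adaptively)."""
    out = labels.copy()
    idx = rng.choice(len(labels), size=int(alpha * len(labels)), replace=False)
    out[idx] = 1 - out[idx]
    return out

y = rng.integers(0, 2, size=n)      # clean labels
z_ctl = rr(corrupt(y))              # CTL: corruption first, then local privacy
z_ltc = corrupt(rr(y))              # LTC: local privacy first, then corruption
```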
+
+# 5.1. Our Algorithm
+
As mentioned, our choice of local randomizer $\mathcal{R}$ for privacy protection is the simple randomized response (RR) mechanism with parameter $\varepsilon > 0$ (Warner, 1965). That is, the binary output of RR equals the input with probability $\sigma(\varepsilon) = \frac{e^{\varepsilon}}{1 + e^{\varepsilon}}$; otherwise, the privatized binary output differs from the input. RR satisfies the $\varepsilon$-local label DP guarantee (cf. Definition 3.1) (Dwork & Roth, 2014).

# Algorithm 1 Private and Robust Estimation

1: Procedure: $\varepsilon$-local label DP mechanism $\mathcal{R}$
2: //Input: $U_{i}\in \{0,1\}$, parameter: $\varepsilon$
3: Randomized response: $\widetilde{U}_i = \left\{ \begin{array}{ll} U_i & w.p. \frac{e^\varepsilon}{e^\varepsilon + 1} \\ 1 - U_i & w.p. \frac{1}{e^\varepsilon + 1} \end{array} \right.$
4: Return $\widetilde{U}_i$
5: Procedure: Analyzer $\mathcal{A}$
6: //Input: $\{(x_i,z_i)\}_{i = 1}^n$, parameter: $\varepsilon$
7: Let $c(\varepsilon) = \frac{1}{2\sigma(\varepsilon) - 1} = \frac{e^{\varepsilon} + 1}{e^{\varepsilon} - 1}$.
8: Compute $\widehat{\theta} = \operatorname{argmin}_{\theta \in \Theta_{B'}} -\frac{1}{n}\sum_{i=1}^{n}\widetilde{\ell}_i(\theta)$ where

$$
\widetilde {\ell} _ {i} (\theta) = \ln (1 - \sigma (\theta^ {\top} x _ {i})) + (z _ {i} + \sigma (\varepsilon) - 1) c (\varepsilon) \theta^ {\top} x _ {i}
$$

9: Return $\widehat{\theta}$
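
The $\varepsilon$-local DP guarantee of randomized response can be checked directly from the definition: for every pair of inputs and every output, the likelihood ratio is at most $e^{\varepsilon}$. A short numerical check (with an arbitrarily chosen $\varepsilon$):

```python
import numpy as np

eps = 1.0
p_keep = np.exp(eps) / (np.exp(eps) + 1.0)    # sigma(eps)

def rr_prob(output, inp):
    """P(RR(inp) = output) for binary randomized response."""
    return p_keep if output == inp else 1.0 - p_keep

ratios = [rr_prob(b, u) / rr_prob(b, up)
          for b in (0, 1) for u in (0, 1) for up in (0, 1)]
print(max(ratios), np.exp(eps))               # max ratio equals e^eps
```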
+
+We now turn to the design of the analyzer $\mathcal{A}$ , which is responsible for outputting an estimate $\widehat{\theta}$ . We first point out that in the non-private non-corrupted case, the standard maximum likelihood estimator (MLE) that minimizes the loss function $\mathcal{L}(\theta) = -\frac{1}{n}\sum_{i=1}^{n}\ell_i(\theta)$ enjoys a good concentration (Zhu et al., 2023) with respect to $\theta_{\mathrm{true}}$ , where $\ell_i(\theta)$ is the standard log-loss:
+
+$$
+\begin{array}{l} \ell_ {i} (\theta) = y _ {i} \log \left(\sigma \left(\theta^ {\top} x _ {i}\right)\right) + \left(1 - y _ {i}\right) \log \left(1 - \sigma \left(\theta^ {\top} x _ {i}\right)\right) \\ = \log (1 - \sigma (\theta^ {\top} x _ {i})) + y _ {i} \theta^ {\top} x _ {i}. \\ \end{array}
+$$
+
However, since the analyzer only observes the privatized labels $z_i$, it is instead designed to minimize a new loss $\widetilde{\mathcal{L}} (\theta) = -\frac{1}{n}\sum_{i = 1}^{n}\widetilde{\ell}_i(\theta)$ where
+
+$$
+\widetilde {\ell} _ {i} (\theta) = \ln \left(1 - \sigma \left(\theta^ {\top} x _ {i}\right)\right) + \left(z _ {i} + \sigma (\varepsilon) - 1\right) c (\varepsilon) \theta^ {\top} x _ {i}, \tag {14}
+$$
+
and $c(\varepsilon) \coloneqq \frac{1}{2\sigma(\varepsilon) - 1} = \frac{e^{\varepsilon} + 1}{e^{\varepsilon} - 1}$ . The key difference lies in the "shifting and scaling" of the received labels $z_{i}$ , which enjoys exactly the same intuition as in mean estimation under RR: the shifted and scaled label is an unbiased estimate of the true label. Putting the above choices of $\mathcal{R}$ and $\mathcal{A}$ together yields the final Algorithm 1 above.
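
A minimal end-to-end sketch of Algorithm 1 on synthetic data (our own toy instance of Definition 5.1; we omit the projection onto $\Theta_{B'}$ and use plain gradient descent, so this is an illustration rather than the exact procedure):

```python
import numpy as np

rng = np.random.default_rng(4)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Synthetic instance of Definition 5.1 (all sizes/values are toy choices).
n, d, eps = 4000, 3, 2.0
theta_true = np.array([0.8, -0.5, -0.3])             # satisfies <1, theta> = 0
x = rng.normal(size=(n, d))
x /= np.maximum(1.0, np.linalg.norm(x, axis=1, keepdims=True))  # ||x_i|| <= 1
y = (rng.random(n) < sigmoid(x @ theta_true)).astype(float)

# Local randomizer R: eps-label-DP randomized response (no corruption here).
sig_eps = np.exp(eps) / (np.exp(eps) + 1.0)
z = np.where(rng.random(n) < sig_eps, y, 1.0 - y)

# Analyzer A: the shifted-and-scaled label is an unbiased estimate of y,
# since E[z | y] = (2*sigma(eps) - 1) * y + 1 - sigma(eps).
c_eps = (np.exp(eps) + 1.0) / (np.exp(eps) - 1.0)
z_tilde = (z + sig_eps - 1.0) * c_eps

# Minimize -(1/n) sum_i l~_i(theta): the gradient of the negated loss (14)
# is (1/n) sum_i (sigma(theta^T x_i) - z~_i) x_i, a convex problem.
theta_hat = np.zeros(d)
for _ in range(2000):
    grad = x.T @ (sigmoid(x @ theta_hat) - z_tilde) / n
    theta_hat -= 2.0 * grad

print(np.linalg.norm(theta_hat - theta_true))        # decreases as n grows
```

Re-running with larger $n$ or larger $\varepsilon$ shrinks the error, matching the $c(\varepsilon)\sqrt{d/n}$ behavior of the bounds in Section 5.2.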
+
Remark 5.3. We remark that a similar loss (up to scaling) has been considered in Chowdhury et al. (2023; 2024). However, it is motivated from a different perspective (e.g., logits) rather than our connection to standard mean estimation under RR for local privacy (i.e., shifting and scaling). The form we use in (14) has not appeared before. This new form not only makes it easy to see that our loss is an unbiased estimate of the standard log-loss, but also allows us to easily show that our single algorithm is adaptive to different privacy-corruption settings, i.e., it does not need to know the specific setting in advance.
+
+# 5.2. Estimation Error Bounds
+
In this section, we establish the estimation error bounds achieved by Algorithm 1. Throughout, we let $\widehat{\theta}_{\mathrm{CTL}}$ and $\widehat{\theta}_{\mathrm{LTC}}$ be the estimates output by Algorithm 1 under CTL and LTC, respectively. Our first result is the following theorem, which characterizes the estimation error in terms of a weighted norm, with proof in Appendix E.4.
+
Theorem 5.4. Consider the problem in Definition 5.1. For any $\varepsilon >0$ , $\alpha \in [0,1 / 2)$ , $\delta \in (0,1)$ , and $\lambda >0$ , with probability at least $1 - \delta$ , the output of Algorithm 1 achieves
+
+$$
+\begin{array}{l} \left\| \widehat {\theta} _ {\mathrm {C T L}} - \theta_ {\mathrm {t r u e}} \right\| _ {\widehat {\Sigma} + \lambda \mathbf {I}} \leq \Gamma_ {\mathrm {C T L}} (n, d, \delta , \lambda) \\ := C \left(\frac {\sqrt {\alpha}}{\gamma} + \frac {c (\varepsilon)}{\gamma} \sqrt {\frac {d + \ln (1 / \delta)}{n}} + B ^ {\prime} \sqrt {\lambda}\right), \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \left\| \widehat {\theta} _ {\mathrm {L T C}} - \theta_ {\mathrm {t r u e}} \right\| _ {\widehat {\Sigma} + \lambda \mathbf {I}} \leq \Gamma_ {\mathrm {L T C}} (n, d, \delta , \lambda) \\ := C \left(\frac {c (\varepsilon) \sqrt {\alpha}}{\gamma} + \frac {c (\varepsilon)}{\gamma} \sqrt {\frac {d + \ln (1 / \delta)}{n}} + B ^ {\prime} \sqrt {\lambda}\right), \\ \end{array}
+$$
+
+where $\widehat{\Sigma} = \frac{1}{n}\sum_{i=1}^{n}x_{i}x_{i}^{\top}, c(\varepsilon) = \frac{e^{\varepsilon} + 1}{e^{\varepsilon} - 1}, \gamma = 1/(2 + \exp(-B') + \exp(B'))$ , and $C$ is a universal constant.
+
+Remark 5.5. First, when there is no corruption, our result matches the one in previous work on private parameter estimation (Chowdhury et al., 2023). Second, when corruption exists, the order of corruption and local privacy matters. In particular, LTC has an additional cost $c(\varepsilon)$ in the first corruption term compared to CTL, highlighting the interplay between privacy and robustness.
+
+Our second result is a concentration result under $L_{2}$ -norm with the additional condition of uniform coverage, which has been leveraged in prior work as well (Mandal et al., 2024; Zhang et al., 2022; Chowdhury et al., 2023).
+
Assumption 5.6 (Uniform Coverage). There exists a positive constant $\xi >0$ such that the minimum eigenvalue $\lambda_{\mathrm{min}}(\Sigma)\geq \xi$ , where $\Sigma \coloneqq \mathbb{E}_{x\sim \mu}[xx^{\top}]$ .
+
+Under the above assumption, we can have another estimation error bound for the underlying parameter, which is now in terms of $L_{2}$ -norm, with proof in Appendix E.5.
+
+Theorem 5.7. Under Assumption 5.6, for any $\varepsilon >0$ , $\alpha \in [0,1 / 2)$ , $\delta \in (0,1)$ , and $n\geq \frac{8\ln(d / \delta)}{\xi}$ , with probability at least $1 - \delta$ , Algorithm 1 under CTL and LTC achieves
+
+$$
+\begin{array}{l} \left\| \widehat {\theta} _ {\mathrm {C T L}} - \theta_ {\mathrm {t r u e}} \right\| _ {2} \leq C \left(\frac {\alpha}{\gamma \xi} + \frac {c (\varepsilon)}{\gamma \xi} \sqrt {\frac {\ln \frac {1}{\delta}}{n}}\right), \\ \left\| \widehat {\theta} _ {\mathrm {L T C}} - \theta_ {\mathrm {t r u e}} \right\| _ {2} \leq C \left(\frac {c (\varepsilon) \alpha}{\gamma \xi} + \frac {c (\varepsilon)}{\gamma \xi} \sqrt {\frac {\ln \frac {1}{\delta}}{n}}\right). \\ \end{array}
+$$
+
Here, we see that the separation between CTL and LTC still exists, with an additional factor of $c(\varepsilon)$ in LTC, illustrating a negative impact of LDP on robustness.
+
+# 6. Putting It All Together: Suboptimality under RLHF and DPO
+
+In this section, we are ready to present our main results on the suboptimality gap under RLHF and DPO by combining our reduction results with estimation error bounds.
+
+# 6.1. Private and Robust RLHF
+
+Theorem 6.1. Under the conditions of Corollary 4.4 and Theorem 5.4, RLHF (Algorithm 2) achieves the following suboptimality with probability at least $1 - \delta$
+
+$$
+\begin{array}{l} \mathrm {S u b O p t} _ {\mathrm {C T L}} (\widehat {\pi}, \pi^ {\dagger}) \leq C \sqrt {d \cdot \kappa (\pi^ {\dagger} , \pi_ {\mathrm {r e f}})} \\ \times \left(\frac {\sqrt {\alpha}}{\gamma} + \frac {c (\varepsilon)}{\gamma} \sqrt {\frac {d + \ln \frac {1}{\delta}}{n}} + B \sqrt {\lambda}\right), \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \mathrm {S u b O p t} _ {\mathrm {L T C}} (\widehat {\pi}, \pi^ {\dagger}) \leq C \sqrt {d \cdot \kappa (\pi^ {\dagger} , \pi_ {\mathrm {r e f}})} \\ \times \left(\frac {c (\varepsilon) \sqrt {\alpha}}{\gamma} + \frac {c (\varepsilon)}{\gamma} \sqrt {\frac {d + \ln \frac {1}{\delta}}{n}} + B \sqrt {\lambda}\right), \\ \end{array}
+$$
+
+for any comparator policy $\pi^{\dagger}$ and $\lambda \geq \Omega\left(\frac{d}{n} \cdot \ln (n / \delta)\right)$ .
+
The proof follows directly from the reduction result in Corollary 4.4 and the estimation error bound in Theorem 5.4. To the best of our knowledge, this is the first result on the suboptimality of RLHF under both privacy and corruption. In particular, setting $\lambda = \widetilde{\Theta}(d/(B^2\gamma^2n)) \geq \widetilde{\Omega}(d/n)$, the sample-complexity part of the bounds (i.e., the last two terms) vanishes at a rate of $\widetilde{\mathcal{O}}(\sqrt{d/n})$, with a multiplicative factor of $c(\varepsilon)$ that captures the cost of privacy. Meanwhile, due to strong corruption, a non-vanishing bias term in the corruption parameter remains in both cases, which illustrates an interesting interplay between privacy and robustness, discussed below.
+
Separation between CTL and LTC. One key observation is that applying LDP before corruption leads to an additional $c(\varepsilon)$ factor in the bias term, mirroring the same phenomenon in private and robust mean estimation (Zhou & Zhang, 2024; Cheu et al., 2021).
+
+Comparisons with Prior Work. We now highlight our contributions even in robust-only or private-only RLHF, by comparing our result above with existing ones where privacy and robustness are separately considered.
+
1. Robust RLHF: To our best knowledge, only the recent work of Mandal et al. (2024) establishes theoretical suboptimality bounds for RLHF under adversarial corruption. In particular, it takes a linear MDP view (rather than our linear bandit view) of RLHF under strong corruption of both features and labels. Under the same relative condition number assumption, their dependence on $\alpha$ is $\mathcal{O}(\alpha^{1/4})$ when reduced from MDP to bandit. In contrast, our result gives a better dependence of $\mathcal{O}(\sqrt{\alpha})$, although only with label corruption. It is worth noting that this $\mathcal{O}(\sqrt{\alpha})$ dependence is state-of-the-art even in the easier setting of standard offline reinforcement learning (Zhang et al., 2022). Moreover, our Algorithm 1 is much simpler than the one in Mandal et al. (2024). Thus, a fair conclusion is that our result offers a better algorithm and a stronger theoretical guarantee in the easier label-only corruption setting.
+
2. Private RLHF: To the best of our knowledge, no prior work explicitly states the private suboptimality of RLHF in terms of the relative condition number, which is often used in standard offline RL. The most related work is Chowdhury et al. (2023), which generalizes the non-private RLHF of Zhu et al. (2023) to the same locally private setting as ours. However, both Chowdhury et al. (2023) and Zhu et al. (2023) state their suboptimality as
+
+$$
+\begin{array}{l} \operatorname {S u b O p t} (\widehat {\pi}, \pi^ {\star}) \leq \| \mathbb {E} _ {s \sim \rho} [ \phi (s, \pi^ {\star} (s)) - v ] \| _ {(\widehat {\Sigma} + \lambda I) ^ {- 1}} \\ \times 2 F (n, d, \delta , \lambda), \tag {15} \\ \end{array}
+$$
+
+for any chosen reference vector $v \in \mathbb{R}^d$ and some function $F$ . This is similar to our intermediate result in (6) but has some key differences. One potential issue in (15) is that it does not offer clear guidance on choosing the important vector $v$ . In particular, if $v = 0$ , then the suboptimality may not converge to zero as $n \to \infty$ . This is because in both papers, $\lambda$ has to be on the order of $1/n$ so as to ensure that $F(n, d, \delta, \lambda) \leq \mathcal{O}(1/\sqrt{n})$ . However, in this case, if the minimum eigenvalue of the empirical matrix $\widehat{\Sigma}$ is small, the norm term $\| \mathbb{E}_{s \sim \rho}[\phi(s, \pi^\star(s)) - v] \|_{(\widehat{\Sigma} + \lambda I)^{-1}}$ can be on the order of $\sqrt{n}$ , given the choice of $\lambda$ . To partially address this, Zhu et al. (2023) suggest a heuristic way of selecting $v$ as the most common feature vector that appears in the data set. In contrast, we consider a reference policy $\pi_{\mathrm{ref}}$ and offer a theory-grounded rule for selecting it via relative condition number along with Corollary 4.4.
+
+Our next result is the suboptimality in RLHF under the assumption of uniform coverage (cf. Assumption 5.6).
+
+Theorem 6.2. Under the conditions of Proposition 4.2 and for $n \geq \frac{8\ln(d / \delta)}{\xi}$ , RLHF (Algorithm 2) achieves the following suboptimality with probability at least $1 - \delta$
+
+$$
+\begin{array}{l} \operatorname {S u b O p t} _ {\mathrm {C T L}} (\widehat {\pi}, \pi^ {\star}) \leq C \left(\frac {\alpha}{\gamma \xi} + \frac {c (\varepsilon)}{\gamma \xi} \sqrt {\frac {\ln \frac {1}{\delta}}{n}}\right), \\ \operatorname {S u b O p t} _ {\mathrm {L T C}} (\widehat {\pi}, \pi^ {\star}) \leq C \left(\frac {c (\varepsilon) \alpha}{\gamma \xi} + \frac {c (\varepsilon)}{\gamma \xi} \sqrt {\frac {\ln \frac {1}{\delta}}{n}}\right). \\ \end{array}
+$$
+
The proof follows directly from Proposition 4.2 and Theorem 5.7. Compared with Theorem 6.1, the corruption term becomes $\alpha$ (with a factor of $1 / \xi$) rather than $\sqrt{\alpha}$, while the concentration part has no explicit dependence on $d$ but carries a $1 / \xi$ factor, which implicitly depends on $d$. As before, a separation exists between CTL and LTC, due to the additional $c(\varepsilon)$ factor in LTC. It is worth noting that the $\mathcal{O}(\alpha / \xi)$ dependence matches the best existing result in standard offline RL under corruption (Zhang et al., 2022).
+
+Comparisons with Prior Work. Mandal et al. (2024) also consider the uniform coverage case and establish a bias corruption term on the order of $\frac{\sqrt{d}\alpha^{1 - o(1)}}{\xi}$ when reduced from their MDP to bandit setting. In contrast, in our label-corruption setting, we have no explicit dependence on $d$ and a better dependence on $\alpha$ . Moreover, we highlight that the missing dependence of $1 / \gamma$ in Mandal et al. (2024) is actually due to an error in their proof (see Appendix G for a detailed discussion). That is, the correct bound of their algorithm also has a $1 / \gamma$ factor. In the context of private RLHF under uniform coverage, our bound matches the state-of-the-art in Chowdhury et al. (2023) when the corruption parameter is zero.
+
+# 6.2. Private and Robust DPO
+
+Thanks to our reduction result, we can also leverage the estimation error bound to give the first result on suboptimality in DPO-style algorithms under privacy and corruption.
+
+Theorem 6.3. Under the conditions of Proposition 4.6, the policy corresponding to the output of Algorithm 1 achieves the following suboptimality with probability at least $1 - \delta$
+
+$$
+\begin{array}{l} \mathrm {S u b O p t} _ {\mathrm {C T L}} (\widehat {\pi}, \pi^ {\star}) \leq C \cdot B \sqrt {\kappa_ {\Pi}} \\ \times \left(\frac {\sqrt {\alpha}}{\gamma} + \frac {c (\varepsilon)}{\gamma} \sqrt {\frac {d + \ln \frac {1}{\delta}}{n}} + \beta B \sqrt {\lambda}\right), \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \operatorname {S u b O p t} _ {\mathrm {L T C}} \left(\widehat {\pi}, \pi^ {*}\right) \leq C \cdot B \sqrt {\kappa_ {\Pi}} \\ \times \left(\frac {c (\varepsilon) \sqrt {\alpha}}{\gamma} + \frac {c (\varepsilon)}{\gamma} \sqrt {\frac {d + \ln \frac {1}{\delta}}{n}} + \beta B \sqrt {\lambda}\right), \\ \end{array}
+$$
+
+for $\beta > 0$ , $\lambda \geq \Omega\left(\frac{d}{n} \cdot \ln(n / \delta)\right)$ , $\gamma = 1 / (2 + \exp(-\beta B) + \exp(\beta B))$ , and some universal constant $C > 0$ .
+
Remark 6.4. The policy in the above theorem in fact corresponds to the output of the rDPO algorithm proposed in Chowdhury et al. (2024) with a log-linear policy class; see Appendix C for more details. That is, while Chowdhury et al. (2024) only show a suboptimal rate for rDPO, we are the first to attain an $O(1 / \sqrt{n})$ rate; see the discussion below.
+
The proof follows from Proposition 4.6 and Theorem 5.4 with $B' = \mathcal{O}(\beta B)$. To our knowledge, this is the first theoretical result on DPO-style algorithms under privacy and corruption. As before, we can see that the interplay of local privacy and adversarial corruption introduces a separation between CTL and LTC by a factor of $c(\varepsilon)$. Moreover, our result also significantly advances the state of the art for DPO-style algorithms under privacy or corruption separately, as discussed in detail below.
+
Private DPO. Considering $\alpha = 0$ and $\lambda = \widetilde{\Theta}(d / (\beta^2 B^2 \gamma^2 n)) \geq \widetilde{\Omega}(d/n)$, we obtain the first suboptimality bound for private DPO, with rate $\widetilde{O}(1/\gamma \cdot c(\varepsilon) \sqrt{d/n} \cdot \sqrt{\kappa_{\Pi}})$, where $c(\varepsilon)$ is the additional cost due to local privacy. This rate matches the best possible non-private one as $\varepsilon \to \infty$ (Song et al., 2024).
+
Robust DPO. To the best of our knowledge, only the recent work of Chowdhury et al. (2024) provides a formal theoretical bound on the suboptimality of rDPO under label corruption. In particular, it considers the random-flipping corruption model (i.e., the true label is flipped with some known probability). This is a much weaker model than ours and, in fact, is equivalent to local privacy after re-parameterization. Under this weaker model, Chowdhury et al. (2024) only established a suboptimal rate of $\widetilde{\mathcal{O}}(1/n^{1/4})$ in the general case, while our result implies a rate of $\widetilde{\mathcal{O}}(1/n^{1/2})$ (via our private DPO result above) under the same corruption model. Moreover, moving from this weaker model to the strong corruption model of the robust statistics literature, our result above shows that rDPO suffers a non-vanishing bias term.
+
Practical Implementation and Experiments. Theorem 6.3 establishes state-of-the-art theoretical guarantees for rDPO in both the private and the corrupted cases under a log-linear policy class. One may also be interested in its empirical performance with neural networks as the policy class. We report a series of experiments (see Appendix D for details) that demonstrate some interesting results.
+
+# 7. Discussion and Conclusion
+
While we present only upper bound results in the main body, we briefly discuss their tightness here; for further details, please refer to Appendix C. First, when $\alpha = 0$, the additional factor $c(\varepsilon)$ due to privacy matches the minimax lower bound established in Chowdhury et al. (2023). Furthermore, the dependence on $1 / \gamma = \Theta(e^{B}) = \Theta(e^{r_{\max}})$ appears in nearly all existing results on both offline and online RLHF (Zhu et al., 2023; Zhan et al., 2023; Xie et al., 2024; Pacchiano et al., 2021; Chen et al., 2022), stemming from the non-linearity of the Bradley-Terry model. Second, in the limit $\varepsilon \to \infty$ (the non-private case), our dependence on $\alpha$ is $\mathcal{O}(\sqrt{\alpha})$ and $\mathcal{O}(\alpha / \xi)$ (under uniform coverage), both of which align with state-of-the-art results in standard offline RL settings, where rewards rather than preferences are observed. In fact, we conjecture that the $\mathcal{O}(\alpha / \xi)$ dependence is optimal. Third, regarding the separation between CTL and LTC, the conclusion is nuanced. We tend to believe that the additional factor $c(\varepsilon)$ in the uniform coverage case is tight, as it matches the known result in mean estimation and offline bandits (Zhou & Zhang, 2024). However, under the $\mathcal{O}(\sqrt{\alpha})$ dependence without coverage, we hypothesize that achieving an $\mathcal{O}(\sqrt{c(\varepsilon)})$ separation, rather than $\mathcal{O}(c(\varepsilon))$, is possible, presenting an exciting direction for future work. Looking ahead, our reduction analysis and new results on private and robust alignment may serve as key benchmarks and inspire further research in this domain.
+
+# Acknowledgements
+
+XZ is supported in part by NSF CNS-2153220 and CNS-2312835. XZ would like to thank Weihao Kong for insightful discussions.
+
+# Impact Statement
+
This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
+
+# References
+
+Agarwal, A., Kakade, S. M., Lee, J. D., and Mahajan, G. On the theory of policy gradient methods: Optimality, approximation, and distribution shift. Journal of Machine Learning Research, 22(98):1-76, 2021.
+Awasthi, P., Das, A., Kong, W., and Sen, R. Trimmed maximum likelihood estimation for robust learning in generalized linear models. arXiv preprint arXiv:2206.04777, 2022.
+Bai, Y., Jones, A., Ndousse, K., Askell, A., Chen, A., Das-Sarma, N., Drain, D., Fort, S., Ganguli, D., Henighan, T., Joseph, N., Kadavath, S., Kernion, J., Conerly, T., El-Showk, S., Elhage, N., Hatfield-Dodds, Z., Hernandez, D., Hume, T., Johnston, S., Kravec, S., Lovitt, L., Nanda, N., Olsson, C., Amodei, D., Brown, T., Clark, J., McCandlish, S., Olah, C., Mann, B., and Kaplan, J. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022.
+Bradley, R. A. and Terry, M. E. Rank analysis of incomplete block designs: I. the method of paired comparisons. Biometrika, 39(3/4):324-345, 1952.
+Bukharin, A., Hong, I., Jiang, H., Zhang, Q., Zhang, Z., and Zhao, T. Robust reinforcement learning from corrupted human feedback. arXiv preprint arXiv:2406.15568, 2024.
Casper, S., Davies, X., Shi, C., Gilbert, T. K., Scheurer, J., Rando, J., Freedman, R., Korbak, T., Lindner, D., Freire, P., Wang, T., Marks, S., Segerie, C.-R., Carroll, M., Peng, A., Christoffersen, P., Damani, M., Slocum, S., Anwar, U., Siththaranjan, A., Nadeau, M., Michaud, E. J., Pfau, J., Krasheninnikov, D., Chen, X., Langosco, L., Hase, P., Biryuk, E., Dragan, A., Krueger, D., Sadigh, D., and Hadfield-Menell, D. Open problems and fundamental limitations of reinforcement learning from human feedback. arXiv preprint arXiv:2307.15217, 2023.
+Cen, S., Mei, J., Goshvadi, K., Dai, H., Yang, T., Yang, S., Schuurmans, D., Chi, Y., and Dai, B. Value-incentivized preference optimization: A unified approach to online and offline RLHF. arXiv preprint arXiv:2405.19320, 2024.
+Chaudhuri, K. and Hsu, D. Sample complexity bounds for differentially private learning. In Proceedings of the 24th Annual Conference on Learning Theory, pp. 155-186. JMLR Workshop and Conference Proceedings, 2011.
+Chen, J. and Jiang, N. Information-theoretic considerations in batch reinforcement learning. In International Conference on Machine Learning, pp. 1042-1051. PMLR, 2019.
+Chen, S., Koehler, F., Moitra, A., and Yau, M. Classification under misspecification: Halfspaces, generalized linear models, and connections to evolvability. arXiv preprint arXiv:2006.04787, 2020.
+Chen, X., Zhong, H., Yang, Z., Wang, Z., and Wang, L. Human-in-the-loop: Provably efficient preference-based reinforcement learning with general function approximation. In International Conference on Machine Learning, pp. 3773-3793. PMLR, 2022.
+Cheu, A., Smith, A., and Ullman, J. Manipulation attacks in local differential privacy. In 2021 IEEE Symposium on Security and Privacy (SP), pp. 883-900. IEEE, 2021.
+Chhor, J. and Sentenac, F. Robust estimation of discrete distributions under local differential privacy. In International Conference on Algorithmic Learning Theory, pp. 411-446. PMLR, 2023.
+Chowdhury, S. R., Zhou, X., and Natarajan, N. Differentially private reward estimation with preference feedback. arXiv preprint arXiv:2310.19733, 2023.
+Chowdhury, S. R., Kini, A., and Natarajan, N. Provably robust DPO: Aligning language models with noisy feedback. arXiv preprint arXiv:2403.00409, 2024.
+Diakonikolas, I. and Kane, D. M. Algorithmic high-dimensional robust statistics. Cambridge University Press, 2023.
+
+Duchi, J. C., Jordan, M. I., and Wainwright, M. J. Local privacy and statistical minimax rates. In 2013 IEEE 54th Annual Symposium on Foundations of Computer Science, pp. 429-438. IEEE, 2013.
+Duchi, J. C., Jordan, M. I., and Wainwright, M. J. Minimax optimal procedures for locally private estimation. Journal of the American Statistical Association, 113(521):182-201, 2018.
+Dwork, C. and Roth, A. The algorithmic foundations of differential privacy. Foundations and Trends® in Theoretical Computer Science, 9(3-4):211-407, 2014.
+Feng, J., Xu, H., Mannor, S., and Yan, S. Robust logistic regression and classification. Advances in neural information processing systems, 27, 2014.
+Feng, Q., Kasa, S. R., Yun, H., Teo, C. H., and Bodapati, S. B. Exposing privacy gaps: Membership inference attack on preference data for LLM alignment. arXiv preprint arXiv:2407.06443, 2024.
+Ghazi, B., Golowich, N., Kumar, R., Manurangsi, P., and Zhang, C. Deep learning with label differential privacy. Advances in neural information processing systems, 34: 27131-27145, 2021.
+Hsu, D., Kakade, S. M., and Zhang, T. A tail inequality for quadratic forms of subgaussian random vectors. arXiv preprint arXiv:1110.2842, 2011.
+Huang, A., Zhan, W., Xie, T., Lee, J. D., Sun, W., Krishnamurthy, A., and Foster, D. J. Correcting the mythos of KL-regularization: Direct alignment without overparameterization via Chi-squared preference optimization. arXiv preprint arXiv:2407.13399, 2024.
+Jin, Y., Yang, Z., and Wang, Z. Is pessimism provably efficient for offline RL? In International Conference on Machine Learning, pp. 5084-5096. PMLR, 2021.
+Kairouz, P., Oh, S., and Viswanath, P. Extremal mechanisms for local differential privacy. Advances in neural information processing systems, 27, 2014.
+Kasiviswanathan, S. P., Lee, H. K., Nissim, K., Raskhodnikova, S., and Smith, A. What can we learn privately? SIAM Journal on Computing, 40(3):793-826, 2011.
+Lambert, N., Krendl Gilbert, T., and Zick, T. The history and risks of reinforcement learning and human feedback. arXiv e-prints, pp. arXiv-2310, 2023.
+Li, G., Shi, L., Chen, Y., Chi, Y., and Wei, Y. Settling the sample complexity of model-based offline reinforcement learning. The Annals of Statistics, 52(1):233-260, 2024.
+
+Li, M., Berrett, T. B., and Yu, Y. On robustness and local differential privacy. The Annals of Statistics, 51(2):717-737, 2023.
+Mandal, D., Nika, A., Kamalaruban, P., Singla, A., and Radanović, G. Corruption robust offline reinforcement learning with human feedback. arXiv preprint arXiv:2402.06734, 2024.
+Munos, R. Performance bounds in $L_{p}$ -norm for approximate value iteration. SIAM Journal on Control and Optimization, 46(2):541-561, 2007.
+Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., Schulman, J., Hilton, J., Kelton, F., Miller, L., Simens, M., Askell, A., Welinder, P., Christiano, P., Leike, J., and Lowe, R. Training language models to follow instructions with human feedback. Advances in neural information processing systems, 35:27730-27744, 2022.
+Pacchiano, A., Saha, A., and Lee, J. Dueling RL: reinforcement learning with trajectory preferences. arXiv preprint arXiv:2111.04850, 2021.
+Prasad, A., Suggala, A. S., Balakrishnan, S., and Ravikumar, P. Robust estimation via robust gradient estimation. Journal of the Royal Statistical Society Series B: Statistical Methodology, 82(3):601-627, 2020.
+Rafailov, R., Sharma, A., Mitchell, E., Manning, C. D., Ermon, S., and Finn, C. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36, 2023.
+Rashidinejad, P., Zhu, B., Ma, C., Jiao, J., and Russell, S. Bridging offline reinforcement learning and imitation learning: A tale of pessimism. Advances in Neural Information Processing Systems, 34:11702-11716, 2021.
+Song, Y., Swamy, G., Singh, A., Bagnell, D., and Sun, W. The importance of online data: Understanding preference fine-tuning via coverage. In ICML 2024 Workshop: Aligning Reinforcement Learning Experimentalists and Theorists, 2024.
+Tropp, J. A. An introduction to matrix concentration inequalities. Foundations and Trends® in Machine Learning, 8 (1-2):1-230, 2015.
+von Werra, L., Belkada, Y., Tunstall, L., Beeching, E., Thrush, T., Lambert, N., Huang, S., Rasul, K., and Gallouédec, Q. TRL: Transformer Reinforcement Learning. https://github.com/huggingface/trl, 2020.
+
+Wang, Z., Bi, B., Pentyala, S. K., Ramnath, K., Chaudhuri, S., Mehrotra, S., Mao, X.-B., Asur, S., et al. A comprehensive survey of LLM alignment techniques: RLHF, RLAIF, PPO, DPO and more. arXiv preprint arXiv:2407.16216, 2024.
+Warner, S. L. Randomized response: A survey technique for eliminating evasive answer bias. Journal of the American Statistical Association, 60(309):63-69, 1965.
+Xie, T., Foster, D. J., Krishnamurthy, A., Rosset, C., Awadallah, A., and Rakhlin, A. Exploratory preference optimization: Harnessing implicit Q\*-approximation for sample-efficient RLHF. arXiv preprint arXiv:2405.21046, 2024.
+Xiong, W., Dong, H., Ye, C., Wang, Z., Zhong, H., Ji, H., Jiang, N., and Zhang, T. Iterative preference learning from human feedback: Bridging theory and practice for RLHF under KL-constraint. In *Forty-first International Conference on Machine Learning*, 2024.
+Zanette, A., Cheng, C.-A., and Agarwal, A. Cautiously optimistic policy optimization and exploration with linear function approximation. In Conference on Learning Theory, pp. 4473-4525. PMLR, 2021.
+Zhan, W., Uehara, M., Kallus, N., Lee, J. D., and Sun, W. Provable offline preference-based reinforcement learning. arXiv preprint arXiv:2305.14816, 2023.
+Zhang, X., Chen, Y., Zhu, X., and Sun, W. Robust policy gradient against strong data corruption. In International Conference on Machine Learning, pp. 12391-12401. PMLR, 2021.
+Zhang, X., Chen, Y., Zhu, X., and Sun, W. Corruption-robust offline reinforcement learning. In International Conference on Artificial Intelligence and Statistics, pp. 5757-5773. PMLR, 2022.
+Zhou, X. and Zhang, W. Locally private and robust multiarmed bandits. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.
+Zhu, B., Jordan, M., and Jiao, J. Principled reinforcement learning with human feedback from pairwise or $K$ -wise comparisons. In International Conference on Machine Learning, pp. 43037-43067. PMLR, 2023.
+Ziegler, D. M., Stiennon, N., Wu, J., Brown, T. B., Radford, A., Amodei, D., Christiano, P., and Irving, G. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593, 2019.
+
+# A. Additional Related Work
+
+Here we discuss additional related work that does not fit in the main text. In addition to the work discussed below, we refer readers to Huang et al. (2024) for theoretical results on standard offline alignment, to the survey by Casper et al. (2023) for a more comprehensive overview of RLHF, and to Wang et al. (2024) for an overview of LLM alignment in general.
+
+Provably robust alignment under corruption. We remark that our use of the strong corruption model from the robust statistics literature is motivated by its popularity in robust offline and online reinforcement learning (i.e., when the actual rewards are observed) (Zhang et al., 2022; 2021), as well as the recent interest in examining its interplay with local differential privacy across various statistical tasks (Li et al., 2023; Cheu et al., 2021; Chhor & Sentenac, 2023). Moreover, this corruption model allows us to consider corruption occurring in both data generation and collection.
+
+Provably robust offline RL. Without privacy constraints, our work can be seen as a non-trivial extension of the results in corruption-robust offline RL (Zhang et al., 2022) to the setting of offline RLHF, where only relative rankings, rather than true rewards, are observed. As will be discussed in Appendix C, the lower bounds established for robust offline RL, along with their proof techniques, can be applied or adapted to derive lower bounds for offline RLHF.
+
+Robust logistic regression under corruption. Among the works on logistic regression under adversarial corruption (Feng et al., 2014; Prasad et al., 2020; Chen et al., 2020; Awasthi et al., 2022), the most relevant is Awasthi et al. (2022), which considers Binomial regression under label corruption, including logistic regression as a special case. Awasthi et al. (2022) propose an alternating minimization method that achieves a recovery rate of $\mathcal{O}(\alpha \ln(1 / \alpha))$ in $L_{2}$ norm, where $\alpha \in [0,1/2)$ is the corruption parameter. In contrast, our intermediate result in Section 5 implies a rate of $\mathcal{O}(\alpha)$ . Moreover, our rate is achieved by the simple maximum likelihood estimator rather than the inefficient trimmed maximum likelihood estimator in Awasthi et al. (2022).
+
+# B. Algorithm
+
+# Algorithm 2 Offline RLHF
+
+1: Input: The current parameter estimate $\widehat{\theta}$ , the empirical covariance matrix $\widehat{\Sigma}$ , the regularizer $\lambda$ , the concentration bound $\Gamma(n, d, \delta, \lambda)$ , a reference policy $\pi_{\mathrm{ref}}$ and a tuning parameter $\eta \in \{0, 1\}$ .
+2: if $\eta = 0$ then
+3: $\widehat{J}(\pi) = \mathbb{E}_{s \sim \rho, a \sim \pi(\cdot|s)}[\langle \widehat{\theta}, \phi(s, a) \rangle]$
+4: return $\widehat{\pi} = \operatorname{argmax}_{\pi} \widehat{J}(\pi)$
+5: else
+6: Construct confidence set
+
+$$
+\Theta (\widehat {\theta}, \lambda) = \left\{\theta \in \Theta_ {B} \mid \| \widehat {\theta} - \theta \| _ {\widehat {\Sigma} + \lambda I} \leq \Gamma (n, d, \delta , \lambda) \right\}
+$$
+
+Compute pessimistic expected value
+
+$$
+\widehat {J} (\pi) = \min _ {\theta \in \Theta (\widehat {\theta}, \lambda)} \mathbb {E} _ {s \sim \rho , a \sim \pi (\cdot | s)} [ \langle \theta , \phi (s, a) \rangle ] - \mathbb {E} _ {s \sim \rho , a \sim \pi_ {\mathrm {r e f}} (\cdot | s)} [ \langle \theta , \phi (s, a) \rangle ]
+$$
+
+7: return $\widehat{\pi} = \operatorname{argmax}_{\pi} \widehat{J}(\pi)$
+8: end if
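
For linear rewards, the pessimistic value in Algorithm 2 admits a closed form: the minimum of $\langle \theta, v\rangle$ over the ellipsoid $\Theta(\widehat{\theta}, \lambda)$ equals $\langle \widehat{\theta}, v\rangle - \Gamma \|v\|_{(\widehat{\Sigma} + \lambda I)^{-1}}$. A minimal NumPy sketch for a finite candidate policy set (function names are illustrative, and the projection onto $\Theta_B$ is omitted):

```python
import numpy as np

def pessimistic_policy(theta_hat, Sigma_hat, lam, Gamma, phi_candidates, phi_ref, eta=1):
    """Select a policy from a finite candidate set, sketching Algorithm 2.

    phi_candidates: (K, d) array; row k is E_{s~rho, a~pi_k}[phi(s, a)]
    phi_ref:        (d,)  array; E_{s~rho, a~pi_ref}[phi(s, a)]
    """
    d = theta_hat.shape[0]
    if eta == 0:
        # Greedy: maximize the plug-in value <theta_hat, phi_k>.
        return int(np.argmax(phi_candidates @ theta_hat))
    # Pessimism: min_{theta in ellipsoid} <theta, phi_k - phi_ref> equals the
    # plug-in value minus Gamma * ||phi_k - phi_ref||_{(Sigma_hat + lam I)^{-1}}.
    A_inv = np.linalg.inv(Sigma_hat + lam * np.eye(d))
    diffs = phi_candidates - phi_ref                  # v_k = phi_k - phi_ref
    plug_in = diffs @ theta_hat
    penalty = Gamma * np.sqrt(np.einsum("kd,de,ke->k", diffs, A_inv, diffs))
    return int(np.argmax(plug_in - penalty))
```

With $\Gamma = 0$ the two branches coincide; a larger $\Gamma$ penalizes policies whose features are poorly covered relative to $\pi_{\mathrm{ref}}$.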
+
+# C. Discussions
+
+In this section, we discuss the tightness of our suboptimality bounds. In particular, we primarily focus on the result in Theorem 6.1, as it offers stronger guarantees compared to Theorem 6.3.
+
+Dependence on $1 / \gamma$ . The dependence on $1 / \gamma = \Theta(e^{B}) = \Theta(e^{r_{\max}})$ is present in nearly all existing results on both offline and online RLHF (Zhu et al., 2023; Zhan et al., 2023; Xie et al., 2024; Pacchiano et al., 2021; Chen et al., 2022). This stems from an intrinsic feature of the Bradley-Terry model, namely, the non-linearity of the sigmoid function.
+
+The privacy cost of $c(\varepsilon)$ . Compared to the non-private (non-corrupted) case, our bound includes an additional multiplicative factor of $c(\varepsilon)$ , which we believe to be tight when $\varepsilon \in [0,1]$ , i.e., $c(\varepsilon) = \Theta(1 / \varepsilon)$ . First, this factor appears even in simple mean estimation, where a matching lower bound is provided in Duchi et al. (2018). Second, a more concrete argument can be made by modifying the existing lower bound proof for the non-private case to show that $c(\varepsilon)$ is necessary. Specifically, the key insight is that any LDP mechanism is a contraction of the KL divergence, as stated in Duchi et al. (2018, Theorem 1). Thus, in the lower bound proof for the private case, the non-private KL divergence is replaced with the private one, which is smaller by a factor of $(e^{\varepsilon} - 1)^{2}$ , eventually leading to a factor of $1 / \varepsilon$ .
+
+The separation between CTL and LTC. We observe an additional factor of $c(\varepsilon)$ in the corruption term under LTC compared to CTL. We conjecture that this is tight for all $\varepsilon > 0$ , especially for the term in Theorem 6.2. First, the same separation arises in the mean estimation problem and is shown to be tight there (Zhou & Zhang, 2024). Second, a more concrete argument can be made by modifying the lower bound for standard offline linear bandits under corruption (Zhang et al., 2022). This lower bound is valid for offline RLHF under CTL, as offline RLHF is at least as hard as offline linear bandits, and CTL is harder than corruption-only settings. To demonstrate the additional $c(\varepsilon)$ factor under LTC, a key fact is that any LDP mechanism contracts the total variation distance by a factor of $c(\varepsilon)$ (cf. Lemma H.4). Using a standard coupling argument, one can then derive a lower bound with the additional factor of $c(\varepsilon)$ for the LTC setting.
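
The contraction fact invoked above can be checked numerically for the simplest LDP mechanism, randomized response on binary labels with flip probability $1/(e^{\varepsilon}+1)$: the total variation distance between output distributions shrinks by exactly $(e^{\varepsilon}-1)/(e^{\varepsilon}+1)$, i.e., by $1/c(\varepsilon)$ under our reading of the notation $c(\varepsilon) = (e^{\varepsilon}+1)/(e^{\varepsilon}-1)$ (an assumption here, consistent with $c(\varepsilon) = \Theta(1/\varepsilon)$ for small $\varepsilon$). A small sketch:

```python
import math

def rr_output_prob(p, eps):
    """P(released bit = 1) under randomized response when the input bit is 1 w.p. p."""
    flip = 1.0 / (math.exp(eps) + 1.0)      # randomized-response flip probability
    return p * (1.0 - flip) + (1.0 - p) * flip

def tv_contraction(p, q, eps):
    """Ratio TV(M(p), M(q)) / TV(p, q) for binary randomized response M."""
    tv_in = abs(p - q)
    tv_out = abs(rr_output_prob(p, eps) - rr_output_prob(q, eps))
    return tv_out / tv_in

eps = 0.5
c_eps = (math.exp(eps) + 1.0) / (math.exp(eps) - 1.0)   # assumed form of c(eps)
# The contraction factor equals (e^eps - 1)/(e^eps + 1) = 1/c(eps), independent of p, q.
print(tv_contraction(0.9, 0.2, eps), 1.0 / c_eps)
```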
+
+Dependence on $\alpha$ . Our current $\sqrt{\alpha}$ dependence matches the best existing result, even in standard offline RL (Zhang et al., 2022). However, this $\sqrt{\alpha}$ dependence does not align with the existing $\Omega(\alpha)$ lower bound (Zhang et al., 2022). On the other hand, under the uniform coverage assumption, our result in Theorem 6.2 achieves the optimal dependence on $\alpha$ . Furthermore, we conjecture that the $1/\xi$ factor preceding $\alpha$ is optimal. Our reasoning is as follows: due to boundedness, we have $\xi \leq 1/d$ . In the best case, when $\xi = 1/d$ , our upper bound matches the lower bound of $d\alpha$ in Zhang et al. (2022), established for standard offline linear RL, up to the additional $1/\gamma$ factor arising from the non-linearity.
+
+Practical implementations. For the sake of theoretical analysis, we adopt linear modeling in the main paper. Nevertheless, our proposed method can be readily extended to general function classes (albeit without the current formal theoretical guarantees). Taking DPO as an example, we can solve the following optimization problem:
+
+$$
+\widehat {\pi} = \underset {\pi \in \Pi} {\operatorname {a r g m i n}} - \left(\sum_ {i = 1} ^ {n} \ln \left(1 - \sigma \left(r _ {\beta , i} ^ {\pi , \pi_ {\mathrm {s f t}}}\right)\right) + \left(z _ {i} + \sigma (\varepsilon) - 1\right) c (\varepsilon) \ln \left(\frac {\sigma \left(r _ {\beta , i} ^ {\pi , \pi_ {\mathrm {s f t}}}\right)}{1 - \sigma \left(r _ {\beta , i} ^ {\pi , \pi_ {\mathrm {s f t}}}\right)}\right)\right), \tag {16}
+$$
+
+where
+
+$$
+r _ {\beta , i} ^ {\pi , \pi_ {\mathrm {s f t}}} := \beta \ln \frac {\pi (a _ {i} ^ {1} | s _ {i})}{\pi_ {\mathrm {s f t}} (a _ {i} ^ {1} | s _ {i})} - \beta \ln \frac {\pi (a _ {i} ^ {0} | s _ {i})}{\pi_ {\mathrm {s f t}} (a _ {i} ^ {0} | s _ {i})}.
+$$
+
+Some sanity checks are in order. First, for the standard case (i.e., $\varepsilon \to \infty$ and $\alpha = 0$ ), we have $\sigma(\varepsilon) = c(\varepsilon) = 1$ and $z_{i} = y_{i}$ , which leads us back to the standard DPO loss, see (11). Second, if we consider a log-linear policy class, (16) reduces to (14) (up to some scaling of $\beta$ ). Third, if there is only privacy (or similar random flipping noise with a known flipping rate, as in Chowdhury et al. (2024)), one can verify that the above loss is equivalent to the one in Chowdhury et al. (2024) (see their Eq. 12, which is called rDPO), up to some simple rescaling. Thus, in this sense, compared to the sub-optimal rate of $\mathcal{O}(1/n^{1/4})$ for the log-linear policy class established in Chowdhury et al. (2024), we give the first $\mathcal{O}(1/\sqrt{n})$ rate for private or "robust" DPO. One can also follow a similar approach for RLHF by replacing the policy-parameterized reward $r_{\beta,i}^{\pi,\pi_{\mathrm{sft}}}$ with a reward function from a reward function class. Then, a method similar to Algorithm 1 of Zhan et al. (2023) can be adopted to introduce pessimism.
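
The first sanity check can be verified numerically. Below is a minimal NumPy sketch of loss (16), under the assumptions (ours, for illustration) that $\sigma$ is the sigmoid function and $c(\varepsilon) = (e^{\varepsilon}+1)/(e^{\varepsilon}-1)$, so that $(z_i + \sigma(\varepsilon) - 1)c(\varepsilon)$ is an unbiased debiasing of a randomized-response label; as $\varepsilon \to \infty$ it matches the standard DPO loss:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def robust_dpo_loss(r, z, eps):
    """Loss (16): r[i] is the implicit reward margin r_{beta,i}^{pi,pi_sft},
    z[i] the observed (possibly privatized/corrupted) binary label.
    Assumes c(eps) = (e^eps + 1)/(e^eps - 1) -- our reading of the notation."""
    c_eps = (np.exp(eps) + 1.0) / (np.exp(eps) - 1.0)
    w = (z + sigmoid(eps) - 1.0) * c_eps          # debiased label weight
    s = sigmoid(r)
    return -np.sum(np.log(1.0 - s) + w * np.log(s / (1.0 - s)))

def dpo_loss(r, y):
    """Standard DPO / logistic loss (11): cross-entropy on sigma(r)."""
    s = sigmoid(r)
    return -np.sum(y * np.log(s) + (1.0 - y) * np.log(1.0 - s))
```

For large $\varepsilon$ (no flipping) and $z = y$, the two losses coincide, recovering the first sanity check.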
+
+# D. Experiments on DPO and rDPO under Privacy and Corruption
+
+As mentioned in the last section, we provide the first results for rDPO (Chowdhury et al., 2024) under both privacy and corruption with a log-linear policy class (cf. Theorem 6.3). In this section, we would like to empirically demonstrate its performance with a general function class, i.e., neural nets.
+
+# D.1. Experiment Setup
+
+Dataset. We utilize GPT-4o to generate a synthetic dataset, referred to as finance-preference, which comprises 1697 preference samples. Each sample includes a prompt related to a financial scenario and two possible responses, where "rejected" represents the high-risk option and "chosen" represents the low-risk option. This labeling can be viewed as private or sensitive information. For illustrative examples from our dataset, please refer to Appendix I. For SFT training, we construct the finance_sft dataset by simply concatenating the prompt with the corresponding "chosen" response.
+
+SFT Training. We begin by fine-tuning GPT2-large using the finance_sft dataset to obtain the SFT policy, $\pi_{\mathrm{sft}}$ . For this, we directly utilize the SFT trainer from the Transformer Reinforcement Learning (TRL) library (von Werra et al., 2020), with the hyperparameters listed in Table 3.
+
+DPO and rDPO Training. For alignment training, we split the dataset into $85\%$ for training, $5\%$ for validation, and $10\%$ for testing. For DPO, we utilize the implementation provided in the TRL library, using the hyperparameters listed in Table 4. Similarly, for rDPO, we leverage the TRL implementation, which corresponds to DPO with loss_type set to "robust." In the private setting with a privacy budget of $\varepsilon$ , one can simply set label_smoothing to the flip rate, given by $\frac{1}{e^{\varepsilon} + 1}$ . This setting recovers the same algorithm presented in our main paper when the policy class is log-linear. Finally, we use the same set of hyperparameters for rDPO as in DPO training.
+
+CTL and LTC Settings. The LDP mechanism follows the randomized response model, where the flip rate is given by $\frac{1}{e^{\varepsilon} + 1}$ . For corruption, we assume that a randomly sampled subset of $O(\alpha n)$ labels is always flipped relative to the true labels. To implement both privacy and corruption, we introduce a mask variable initialized to 0 for each sample. The LDP mechanism flips the mask variable with probability $\frac{1}{e^{\varepsilon} + 1}$ , while the corruption mechanism sets the mask to 1 with probability $\alpha$ . Finally, after CTL or LTC processing, labels ("chosen" and "rejected") are flipped if the corresponding mask value is 1. At this point, an astute reader may notice that LTC results in a higher number of 1s in the final mask variables compared to CTL.
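
The last observation can be made precise: with flip rate $p = \frac{1}{e^{\varepsilon}+1}$, a mask bit ends up as 1 with probability $\alpha(1-p) + (1-\alpha)p$ under CTL but $\alpha + (1-\alpha)p$ under LTC, a gap of $\alpha p$. A minimal simulation sketch of the mask pipeline described above (names and the i.i.d. corruption draw are our simplification):

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_rate(order, eps, alpha, n=200_000):
    """Empirical fraction of final mask bits equal to 1 under CTL or LTC."""
    p = 1.0 / (np.exp(eps) + 1.0)          # randomized-response flip rate

    def ldp(m):                            # flip each bit with probability p
        return m ^ (rng.random(n) < p)

    def corrupt(m):                        # set each bit to 1 with probability alpha
        return m | (rng.random(n) < alpha)

    mask = np.zeros(n, dtype=bool)
    mask = ldp(corrupt(mask)) if order == "CTL" else corrupt(ldp(mask))
    return mask.mean()

eps, alpha = 0.5, 0.1
print(mask_rate("CTL", eps, alpha), mask_rate("LTC", eps, alpha))
```

The LTC rate exceeds the CTL rate by roughly $\alpha p$, matching the observation above.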
+
+Evaluation. We evaluate our trained models $\pi_{\mathrm{DPO}}$ , $\pi_{\mathrm{rDPO}}$ , and $\pi_{\mathrm{SFT}}$ by generating responses for the test dataset using the hyperparameters listed in Table 5. To assess performance, we employ the llama3:70b model as a judge, comparing responses from $\pi_{\mathrm{DPO}}$ and $\pi_{\mathrm{rDPO}}$ against those from $\pi_{\mathrm{SFT}}$ . Finally, we use the win rate from these comparisons as our primary performance metric, following the methodology outlined in the DPO paper (Rafailov et al., 2023). We compute the average and standard deviation across five seeds.
+
+# D.2. Results
+
+Private Case. We first compare the performance of DPO and rDPO in the private setting, as shown in Table 1. Due to the "shifting and scaling" loss used in rDPO, we observe that rDPO outperforms standard DPO in the private case. Interestingly, we make an additional observation: in the non-private setting, if we still introduce random label flips at a rate of approximately $1 / (e^{1} + 1)$ , rDPO achieves even better performance than DPO. This suggests that deliberately adding noise to labels can enhance performance, resembling the well-known effect of label smoothing in classification tasks. We also suspect that this injected noise helps mitigate the overoptimization issue in DPO-style algorithms. We plan to further explore this phenomenon on a larger dataset. Finally, we note that this observation does not contradict our main theoretical result, which provides a worst-case upper bound.
+
+Private and Corruption Cases. We now examine whether the separation between CTL and LTC persists beyond the linear setting. As shown in Table 2, rDPO demonstrates better performance under CTL compared to LTC. Furthermore, the performance gap widens as $\varepsilon$ decreases. These observations are consistent with the theoretical insights derived from the linear setting.
+
+Table 1. Comparison of win rates (%) for DPO and rDPO across different values of privacy budget $\varepsilon$ .
+
+| $\varepsilon$ | rDPO win rate (%) | DPO win rate (%) |
+| --- | --- | --- |
+| 0.1 | 59.0 ± 4.7 | 55.4 ± 1.1 |
+| 0.5 | 65.8 ± 5.6 | 60.4 ± 3.0 |
+
+Table 2. Comparison of win rates (%) for rDPO under CTL and LTC.
+
+| $(\varepsilon, \alpha)$ | win rate (%) under CTL | win rate (%) under LTC |
+| --- | --- | --- |
+| (1, 0.1) | 69.6 ± 5.1 | 65.4 ± 5.0 |
+| (0.5, 0.1) | 64.4 ± 2.8 | 58.6 ± 2.6 |
+
+# E. Proofs
+
+This section presents the proofs for our main results in previous sections.
+
+# E.1. Proof of Proposition 4.2
+
+Proof. By definition, for any $\pi^{\dagger}$ , we have
+
+$$
+\begin{array}{l} \operatorname {S u b O p t} \left(\widehat {\pi}, \pi^ {\dagger}\right) = J \left(\pi^ {\dagger}\right) - J \left(\widehat {\pi}\right) \\ = \underbrace {J (\pi^ {\dagger}) - \widehat {J} (\pi^ {\dagger})} _ {\mathcal {T} _ {1}} + \underbrace {\widehat {J} (\pi^ {\dagger}) - \widehat {J} (\widehat {\pi})} _ {\mathcal {T} _ {2}} + \underbrace {\widehat {J} (\widehat {\pi}) - J (\widehat {\pi})} _ {\mathcal {T} _ {3}}, \\ \end{array}
+$$
+
+holds for any function $\widehat{J}(\cdot)$ . For the first case when $\eta = 0$ , we have $\widehat{J}(\pi) = \mathbb{E}_{s \sim \rho, a \sim \pi(\cdot|s)}[\phi(s, a)^{\top}\widehat{\theta}]$ . By the greedy algorithm in Algorithm 2, we have $\mathcal{T}_2 \leq 0$ . Further, under Assumption 4.1, we can rewrite $\mathcal{T}_1$ and $\mathcal{T}_3$ as
+
+$$
+\mathcal {T} _ {1} = \mathbb {E} _ {s \sim \rho , a \sim \pi^ {\dagger} (\cdot | s)} [ \phi (s, a) ^ {\top} (\theta^ {\star} - \widehat {\theta}) ], \quad \mathcal {T} _ {3} = \mathbb {E} _ {s \sim \rho , a \sim \widehat {\pi} (\cdot | s)} [ \phi (s, a) ^ {\top} (\widehat {\theta} - \theta^ {\star}) ].
+$$
+
+By the boundedness assumption, both terms can be upper bounded by $\left\| \widehat{\theta} - \theta^{\star} \right\|_{2}$ , which implies the first result by the fact that $\theta_{\mathrm{true}} = \theta^{\star}$ .
+
+For the second case when $\eta = 1$ , we introduce the following notation
+
+$$
+J (\pi ; \theta^ {\star}) := \mathbb {E} _ {s \sim \rho , a \sim \pi (\cdot | s)} [ \phi (s, a) ^ {\top} \theta^ {\star} ] = J (\pi). \tag {17}
+$$
+
+Thus, we have $\mathbb{E}_{s\sim \rho ,a\sim \pi (\cdot |s)}[\langle \theta ,\phi (s,a)\rangle ] - \mathbb{E}_{s\sim \rho ,a\sim \pi_{\mathrm{ref}}(\cdot |s)}[\langle \theta ,\phi (s,a)\rangle ] = J(\pi ;\theta) - J(\pi_{\mathrm{ref}};\theta)$ . Let $\theta_{\pi}^{\inf} = \operatorname{argmin}_{\theta \in \Theta(\widehat{\theta},\lambda)} J(\pi ;\theta) - J(\pi_{\mathrm{ref}};\theta)$ , so that $\widehat{J} (\pi) = J(\pi ;\theta_{\pi}^{\inf}) - J(\pi_{\mathrm{ref}};\theta_{\pi}^{\inf})$ . Then, we have
+
+$$
+\begin{array}{l} \operatorname {S u b O p t} \left(\widehat {\pi}, \pi^ {\dagger}\right) = J \left(\pi^ {\dagger}\right) - J (\widehat {\pi}) \\ = J (\pi^ {\dagger}; \theta^ {\star}) - J (\pi_ {\mathrm {r e f}}; \theta^ {\star}) - (J (\widehat {\pi}; \theta^ {\star}) - J (\pi_ {\mathrm {r e f}}; \theta^ {\star})) \\ \stackrel {(a)} {\leq} \left(J (\pi^ {\dagger}; \theta^ {\star}) - J (\pi_ {\mathrm {r e f}}; \theta^ {\star})\right) - \left(J (\pi^ {\dagger}; \theta_ {\pi^ {\dagger}} ^ {\inf }) - J (\pi_ {\mathrm {r e f}}; \theta_ {\pi^ {\dagger}} ^ {\inf })\right) \\ + \left(J (\widehat {\pi}; \theta_ {\widehat {\pi}} ^ {\inf }) - J \left(\pi_ {\text {r e f}}; \theta_ {\widehat {\pi}} ^ {\inf }\right)\right) - \left(J (\widehat {\pi}; \theta^ {\star}) - J \left(\pi_ {\text {r e f}}; \theta^ {\star}\right)\right) \\ \stackrel {(b)} {\leq} \left(J (\pi^ {\dagger}; \theta^ {\star}) - J (\pi_ {\text {r e f}}; \theta^ {\star})\right) - \left(J (\pi^ {\dagger}; \theta_ {\pi^ {\dagger}} ^ {\text {i n f}}) - J (\pi_ {\text {r e f}}; \theta_ {\pi^ {\dagger}} ^ {\text {i n f}})\right) \\ = \underbrace {\left(J (\pi^ {\dagger} ; \theta^ {\star}) - J (\pi_ {\mathrm {r e f}} ; \theta^ {\star})\right) - \left(J (\pi^ {\dagger} ; \widehat {\theta}) - J (\pi_ {\mathrm {r e f}} ; \widehat {\theta})\right)} _ {\mathcal {T} _ {4}} \\ + \underbrace {\left(J (\pi^ {\dagger} ; \widehat {\theta}) - J (\pi_ {\mathrm {r e f}} ; \widehat {\theta})\right) - \left(J (\pi^ {\dagger} ; \theta_ {\pi^ {\dagger}} ^ {\mathrm {i n f}}) - J (\pi_ {\mathrm {r e f}} ; \theta_ {\pi^ {\dagger}} ^ {\mathrm {i n f}})\right)} _ {\mathcal {T} _ {5}}, \\ \end{array}
+$$
+
+where $(a)$ holds by the greedy algorithm; $(b)$ holds by the definition of $\theta_{\widehat{\pi}}^{\inf}$ and the fact that $\theta^{\star} \in \Theta(\widehat{\theta}, \lambda)$ by (5). To bound $\mathcal{T}_4$ and $\mathcal{T}_5$ , we use the definition in (17), the concentration in (5) and the definition of $\Theta(\widehat{\theta}, \lambda)$ with $\theta_{\pi^\dagger}^{\inf} \in \Theta(\widehat{\theta}, \lambda)$ , and obtain that
+
+$$
+\mathcal {T} _ {4} + \mathcal {T} _ {5} \leq 2 \Gamma (n, d, \delta , \lambda) \left\| \mathbb {E} _ {s \sim \rho} [ \phi (s, \pi^ {\dagger} (s)) - \phi (s, \pi_ {\mathrm {ref}} (s)) ] \right\| _ {(\widehat {\Sigma} + \lambda I) ^ {- 1}},
+$$
+
+where we let $\phi(s, \pi(s)) \coloneqq \mathbb{E}_{a \sim \pi(\cdot | s)}[\phi(s, a)]$ . This finishes the proof.
+
+
+
+# E.2. Proof of Corollary 4.4
+
+Proof. We need only focus on the last term in (6). Note that
+
+$$
+\mathbb {E} _ {s \sim \rho} [ \phi (s, \pi^ {\dagger} (s)) - \phi (s, \pi_ {\mathrm {r e f}} (s)) ] = \mathbb {E} _ {s \sim \rho , a \sim \pi^ {\dagger} (\cdot | s), a ^ {\prime} \sim \pi_ {\mathrm {r e f}} (\cdot | s)} [ \phi (s, a) - \phi (s, a ^ {\prime}) ].
+$$
+
+Thus, we have
+
+$$
+\begin{array}{l} \left\| \mathbb {E} _ {s \sim \rho} [ \phi (s, \pi^ {\dagger} (s)) - \phi (s, \pi_ {\mathrm {ref}} (s)) ] \right\| _ {(\widehat {\Sigma} + \lambda I) ^ {- 1}} ^ {2} \\ = \left\| \mathbb {E} _ {s \sim \rho , a \sim \pi^ {\dagger} (\cdot | s), a ^ {\prime} \sim \pi_ {\mathrm {ref}} (\cdot | s)} [ \phi (s, a) - \phi (s, a ^ {\prime}) ] \right\| _ {(\widehat {\Sigma} + \lambda I) ^ {- 1}} ^ {2} \\ \stackrel {(a)} {\leq} 3 \left\| \mathbb {E} _ {s \sim \rho , a \sim \pi^ {\dagger} (\cdot | s), a ^ {\prime} \sim \pi_ {\mathrm {ref}} (\cdot | s)} [ \phi (s, a) - \phi (s, a ^ {\prime}) ] \right\| _ {\left(\Sigma_ {\pi_ {\mathrm {sft}}, \pi_ {\mathrm {sft}}} ^ {\mathrm {diff}} + \lambda I\right) ^ {- 1}} ^ {2} \\ \stackrel {(b)} {\leq} 3 \cdot \kappa (\pi^ {\dagger}, \pi_ {\mathrm {ref}}) \cdot \left\| \mathbb {E} _ {s \sim \rho , a \sim \pi^ {\dagger} (\cdot | s), a ^ {\prime} \sim \pi_ {\mathrm {ref}} (\cdot | s)} [ \phi (s, a) - \phi (s, a ^ {\prime}) ] \right\| _ {\left(\Sigma_ {\pi^ {\dagger}, \pi_ {\mathrm {ref}}} ^ {\mathrm {diff}}\right) ^ {- 1}} ^ {2} \\ \stackrel {(c)} {\leq} 3 \cdot \kappa (\pi^ {\dagger}, \pi_ {\mathrm {ref}}) \cdot \mathbb {E} _ {s \sim \rho , a \sim \pi^ {\dagger} (\cdot | s), a ^ {\prime} \sim \pi_ {\mathrm {ref}} (\cdot | s)} \left[ (\phi (s, a) - \phi (s, a ^ {\prime})) ^ {\top} \left(\Sigma_ {\pi^ {\dagger}, \pi_ {\mathrm {ref}}} ^ {\mathrm {diff}}\right) ^ {- 1} (\phi (s, a) - \phi (s, a ^ {\prime})) \right] \\ \stackrel {(d)} {=} 3 \cdot \kappa (\pi^ {\dagger}, \pi_ {\mathrm {ref}}) \cdot \operatorname {trace} (I), \\ \end{array}
+$$
+
+where $(a)$ holds by Lemma H.1 for $\lambda \geq \Omega\left(\frac{d}{n} \cdot \ln(n / \delta)\right)$ ; $(b)$ follows by the definition of $\kappa(\pi^{\dagger}, \pi_{\mathrm{ref}})$ in (8); $(c)$ holds by Jensen's inequality; $(d)$ follows from the interchange of trace and expectation along with the cyclic property of the trace. Taking the square root yields the required result.
+
+# E.3. Proof of Proposition 4.6
+
+Proof. We first show that under Assumption 4.5, the labels are generated via a logistic regression model. This follows from a direct computation. In particular, by (1), (10), (12), we have
+
+$$
+\begin{array}{l} \mathbb {P} \left(y _ {i} = 1 | s _ {i}, a _ {i} ^ {0}, a _ {i} ^ {1}\right) = \frac {1}{1 + \exp (r ^ {\star} (s _ {i} , a _ {i} ^ {0}) - r ^ {\star} (s _ {i} , a _ {i} ^ {1}))} \\ = \sigma \left(r ^ {\star} \left(s _ {i}, a _ {i} ^ {1}\right) - r ^ {\star} \left(s _ {i}, a _ {i} ^ {0}\right)\right) \\ = \sigma \left(\beta \ln \frac {\pi^ {\star} \left(a _ {i} ^ {1} \mid s _ {i}\right)}{\pi^ {\star} \left(a _ {i} ^ {0} \mid s _ {i}\right)} - \beta \ln \frac {\pi_ {\mathrm {s f t}} \left(a _ {i} ^ {1} \mid s _ {i}\right)}{\pi_ {\mathrm {s f t}} \left(a _ {i} ^ {0} \mid s _ {i}\right)}\right) \\ = \sigma \left(\langle \beta (\theta^ {\star} - \theta_ {\mathrm {s f t}}), \phi (s _ {i}, a _ {i} ^ {1}) - \phi (s _ {i}, a _ {i} ^ {0}) \rangle\right). \\ \end{array}
+$$
+
+Thus, with $\theta_{\mathrm{true}} = \beta (\theta^{\star} - \theta_{\mathrm{sft}})$ and $x_{i} = \phi (s_{i},a_{i}^{1}) - \phi (s_{i},a_{i}^{0})$ , each label $y_{i}$ follows the logistic regression model in (3).
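
The label model just derived can be sketched directly (names are illustrative): labels are Bernoulli draws with probability $\sigma(\langle \theta_{\mathrm{true}}, x_i\rangle)$, so the empirical frequency over repeated covariates matches the sigmoid value.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_bt_labels(theta_true, X):
    """Sample y_i ~ Bernoulli(sigma(<theta_true, x_i>)), the logistic /
    Bradley-Terry model above, with x_i = phi(s_i, a_i^1) - phi(s_i, a_i^0)."""
    probs = 1.0 / (1.0 + np.exp(-X @ theta_true))
    return (rng.random(len(X)) < probs).astype(int), probs

theta_true = np.array([1.0, -0.5])
X = np.tile(np.array([[0.8, 0.4]]), (100_000, 1))  # repeat one covariate to check the rate
y, probs = sample_bt_labels(theta_true, X)
print(y.mean(), probs[0])  # empirical frequency vs sigma(<theta_true, x>)
```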
+
+We now turn to the suboptimality part.
+
+$$
+\begin{array}{l} \operatorname {S u b O p t} \left(\widehat {\pi}, \pi^ {\star}\right) = \mathbb {E} _ {s \sim \rho , a \sim \pi^ {\star}} \left[ r ^ {\star} (s, a) \right] - \mathbb {E} _ {s \sim \rho , a \sim \widehat {\pi}} \left[ r ^ {\star} (s, a) \right] \\ \stackrel {(a)} {\leq} \Delta_ {\max } \mathbb {E} _ {s \sim \rho} \left[ \mathrm {T V} (\pi^ {\star} (\cdot | s), \widehat {\pi} (\cdot | s)) \right] \\ \stackrel {(b)} {\leq} \Delta_ {\max } \mathbb {E} _ {s \sim \rho} \left[ \sqrt {1 / 2 \cdot \operatorname {K L} (\pi^ {\star} (\cdot | s) , \widehat {\pi} (\cdot | s))} \right] \\ \stackrel {(c)} {\leq} \Delta_ {\max } \sqrt {1 / 2 \cdot \mathbb {E} _ {s \sim \rho} \left[ \mathrm {K L} (\pi^ {\star} (\cdot | s) , \widehat {\pi} (\cdot | s)) \right]}, \\ \end{array}
+$$
+
+where in (a) we have $\Delta_{\max} = \max_{s,a}\left(r^{\star}(s,a) - \beta \ln Z_{\beta}(s)\right) \leq 2\beta B$ , (b) follows from Pinsker's inequality, and (c) holds by Jensen's inequality.
+
+Then, since both $\pi^{\star}$ and $\widehat{\pi}^{\prime}$ are log-linear policies with parameters $\theta^{\star}$ and $\widehat{\theta}^{\prime}$ , respectively, by a direct calculation and Taylor expansion, we have
+
+$$
+\mathrm {K L} (\pi^ {\star} (\cdot | s), \widehat {\pi} (\cdot | s)) = \frac {1}{2} (\widehat {\theta} ^ {\prime} - \theta^ {\star}) ^ {\top} A _ {s} (\theta) (\widehat {\theta} ^ {\prime} - \theta^ {\star}),
+$$
+
+where $A_{s}(\theta)\coloneqq \mathbb{E}_{a\sim \pi_{\theta}(\cdot |s)}[\phi (s,a)\phi (s,a)^{\top}] - \mathbb{E}_{a\sim \pi_{\theta}(\cdot |s)}[\phi (s,a)]\mathbb{E}_{a\sim \pi_{\theta}(\cdot |s)}[\phi (s,a)]^{\top}$ for some $\theta$ between $\theta^{\star}$ and $\widehat{\theta}^{\prime}$ . By independently sampling $a,a^{\prime}\sim \pi_{\theta}(\cdot |s)$ , we have $\mathbb{E}_{s\sim \rho}[A_s(\theta)] = \frac{1}{2}\Sigma_{\pi_\theta ,\pi_\theta}^{\mathrm{diff}}$ (cf. (7)). Combining all of the above with the definition of $\kappa_{\Pi}$ in Definition 4.3 yields that
+
+$$
+\mathrm {S u b O p t} (\widehat {\pi}, \pi^ {\star}) \leq \sqrt {\kappa_ {\Pi}} \cdot \frac {\Delta_ {\mathrm {m a x}}}{2 \sqrt {2}} \cdot \left\| \widehat {\theta} ^ {\prime} - \theta^ {\star} \right\| _ {\Sigma_ {\pi_ {\mathrm {s f t}}, \pi_ {\mathrm {s f t}}} ^ {\mathrm {d i f f}}}.
+$$
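+The Taylor-expansion step behind this bound can be checked numerically for a single state: for softmax (log-linear) policies over a finite action set, the KL divergence between nearby parameters matches the quadratic form in the feature covariance $A_s$ up to third-order terms. This is an illustrative sketch with synthetic features, not the paper's setup:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+d, num_actions = 3, 6
+Phi = rng.standard_normal((num_actions, d))     # phi(s, a) for one fixed state s
+
+def pi(theta):                                  # log-linear (softmax) policy
+    u = Phi @ theta
+    u -= u.max()
+    p = np.exp(u)
+    return p / p.sum()
+
+theta = rng.standard_normal(d)
+delta = 1e-3 * rng.standard_normal(d)           # a small perturbation
+p, q = pi(theta), pi(theta + delta)
+kl = np.sum(p * np.log(p / q))
+
+mean = Phi.T @ p                                # E_{a ~ pi_theta}[phi(s, a)]
+A_s = (Phi.T * p) @ Phi - np.outer(mean, mean)  # covariance of phi under pi_theta
+quad = 0.5 * delta @ A_s @ delta
+assert abs(kl - quad) <= 0.05 * quad + 1e-12    # agreement up to O(||delta||^3)
+```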
+
+Note that $\Sigma_{\pi_{\mathrm{sft}},\pi_{\mathrm{sft}}}^{\mathrm{diff}}$ is the corresponding population matrix of $\widehat{\Sigma}$ . Thus, by Lemma H.1, for $\lambda \geq \Omega\left(\frac{d}{n} \cdot \ln(n / \delta)\right)$ , we have
+
+$$
+\operatorname {S u b O p t} (\widehat {\pi}, \pi^ {\star}) \leq \sqrt {\kappa_ {\Pi}} \cdot \frac {\sqrt {3} \Delta_ {\max}}{2 \sqrt {2}} \left\| \widehat {\theta} ^ {\prime} - \theta^ {\star} \right\| _ {\widehat {\Sigma} + \lambda I}.
+$$
+
+Finally, note that $\widehat{\theta}^{\prime} - \theta^{\star} = (\widehat{\theta} -\theta_{\mathrm{true}}) / \beta$ . Then, by (13) and $\Delta_{\mathrm{max}}\leq 2\beta B$ , we have the final result
+
+$$
+\operatorname {S u b O p t} (\widehat {\pi}, \pi^ {\star}) \leq \frac {\sqrt {3}}{\sqrt {2}} \cdot \sqrt {\kappa_ {\Pi}} \cdot B \cdot \Gamma (n, d, \delta , \lambda).
+$$
+
+# E.4. Proof of Theorem 5.4
+
+Proof. We divide the proof into CTL, LTC, and CLC cases. Before that, we will present some common properties of our new loss, which will be used in all three cases.
+
+Recall that our new loss is given by
+
+$$
+\widetilde {\mathcal {L}} (\theta) = - \frac {1}{n} \sum_ {i = 1} ^ {n} \widetilde {\ell} _ {i} (\theta) \quad \text {where} \quad \widetilde {\ell} _ {i} (\theta) = \ln (1 - \sigma (\theta^ {\top} x _ {i})) + (z _ {i} + \sigma (\varepsilon) - 1) c (\varepsilon) \cdot \theta^ {\top} x _ {i},
+$$
+
+where $c(\varepsilon) \coloneqq \frac{1}{2\sigma(\varepsilon) - 1} = \frac{e^{\varepsilon} + 1}{e^{\varepsilon} - 1}$ . We will need its gradient and Hessian in our proof, given by
+
+$$
+\nabla_ {\theta} \widetilde {\mathcal {L}} (\theta) = - \frac {1}{n} \sum_ {i = 1} ^ {n} \left[ c (\varepsilon) \left(z _ {i} + \sigma (\varepsilon) - 1\right) - \sigma \left(\theta^ {\top} x _ {i}\right) \right] x _ {i}, \tag {18}
+$$
+
+$$
+\nabla_ {\theta} ^ {2} \widetilde {\mathcal {L}} (\theta) = \frac {1}{n} \sum_ {i = 1} ^ {n} \left[ \sigma \left(\theta^ {\top} x _ {i}\right) \left(1 - \sigma \left(\theta^ {\top} x _ {i}\right)\right) \right] x _ {i} x _ {i} ^ {\top}, \tag {19}
+$$
+
+where we use the simple fact that $\sigma'(z) = \sigma(z)(1 - \sigma(z))$ .
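+The closed forms (18) and (19) follow by direct differentiation of $\widetilde{\mathcal{L}}$ ; a finite-difference check of the gradient on synthetic data (arbitrary $n$ , $d$ , and $\varepsilon = 1$ ) is a quick way to confirm the signs:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+n, d, eps = 50, 3, 1.0
+sig = lambda t: 1.0 / (1.0 + np.exp(-t))
+c = 1.0 / (2.0 * sig(eps) - 1.0)              # c(eps)
+X = rng.standard_normal((n, d))
+z = rng.integers(0, 2, n).astype(float)       # arbitrary binary observations
+
+def loss(th):                                 # the new loss from the display above
+    u = X @ th
+    li = np.log(1.0 - sig(u)) + (z + sig(eps) - 1.0) * c * u
+    return -li.mean()
+
+def grad(th):                                 # closed form (18)
+    u = X @ th
+    return -((c * (z + sig(eps) - 1.0) - sig(u))[:, None] * X).mean(axis=0)
+
+th = 0.1 * rng.standard_normal(d)
+h = 1e-6
+num = np.array([(loss(th + h * e) - loss(th - h * e)) / (2 * h) for e in np.eye(d)])
+assert np.allclose(num, grad(th), atol=1e-5)
+```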
+
+Let $\Delta := \widehat{\theta} - \theta_{\mathrm{true}}$. Since $\widehat{\theta}$ minimizes the loss, and by the strong convexity implied by (19), we have
+
+$$
+\begin{array}{l} \gamma \left\| \Delta \right\| _ {\widehat {\Sigma}} ^ {2} \leq \widetilde {\mathcal {L}} \left(\theta_ {\mathrm {t r u e}} + \Delta\right) - \widetilde {\mathcal {L}} \left(\theta_ {\mathrm {t r u e}}\right) - \langle \nabla \widetilde {\mathcal {L}} \left(\theta_ {\mathrm {t r u e}}\right), \Delta \rangle \leq - \langle \nabla \widetilde {\mathcal {L}} \left(\theta_ {\mathrm {t r u e}}\right), \Delta \rangle \\ \leq \left\| \nabla \widetilde {\mathcal {L}} \left(\theta_ {\mathrm {t r u e}}\right) \right\| _ {(\widehat {\Sigma} + \lambda I) ^ {- 1}} \| \Delta \| _ {\widehat {\Sigma} + \lambda I}, \tag {20} \\ \end{array}
+$$
+
+where $\gamma = 1 / (2 + \exp (-B^{\prime}) + \exp (B^{\prime}))$ by the boundedness condition. Thus, the key is to bound the term $\left\| \nabla \widetilde{\mathcal{L}} (\theta_{\mathrm{true}})\right\|_{(\widehat{\Sigma} +\lambda I)^{-1}}$ , which will be handled separately for each case later.
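+Here $\gamma$ is exactly the minimum of $\sigma'(t) = \sigma(t)(1 - \sigma(t))$ over $|t| \leq B'$ , attained at the endpoints; a short numeric check (with an arbitrary choice $B' = 2$ ):
+
+```python
+import numpy as np
+
+Bp = 2.0                                         # an arbitrary choice of B'
+gamma = 1.0 / (2.0 + np.exp(-Bp) + np.exp(Bp))
+t = np.linspace(-Bp, Bp, 10001)
+sig = 1.0 / (1.0 + np.exp(-t))
+# sigma'(t) = sigma(t)(1 - sigma(t)) = 1 / (2 + e^t + e^{-t}) >= gamma on [-B', B']
+assert np.all(sig * (1.0 - sig) >= gamma - 1e-12)
+```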
+
+For now, let us suppose we have the following high probability bound:
+
+$$
+\left\| \nabla \widetilde {\mathcal {L}} \left(\theta_ {\mathrm {t r u e}}\right) \right\| _ {(\widehat {\Sigma} + \lambda I) ^ {- 1}} \leq f (n, d, \delta , \lambda), \tag {21}
+$$
+
+for some function $f$ , and proceed to establish the final bound. In particular, by the boundedness condition for $\Theta_{B'}$ and (20), we have
+
+$$
+\gamma \left\| \Delta \right\| _ {\widehat {\Sigma} + \lambda I} ^ {2} \leq \left\| \nabla \widetilde {\mathcal {L}} (\theta_ {\mathrm {t r u e}}) \right\| _ {(\widehat {\Sigma} + \lambda I) ^ {- 1}} \| \Delta \| _ {\widehat {\Sigma} + \lambda I} + 4 \gamma \lambda B ^ {\prime 2},
+$$
+
+which implies that
+
+$$
+\left\| \widehat {\theta} - \theta_ {\mathrm {t r u e}} \right\| _ {\widehat {\Sigma} + \lambda I} = \| \Delta \| _ {\widehat {\Sigma} + \lambda I} \leq C \left(\frac {1}{\gamma} \cdot f (n, d, \delta , \lambda) + B ^ {\prime} \sqrt {\lambda}\right), \tag {22}
+$$
+
+for some universal constant $C$ .
+
+Thus, it remains to establish the high probability bound in (21) under the three settings. To this end, we will fully utilize the following claims. See Appendix F for the proofs.
+
+Claim E.1. Let $\eta_{i}$ be zero-mean i.i.d. sub-Gaussian random variables with parameter $\sigma$ , conditioned on $x_{i}$ . Then, for any $\delta \in (0,1)$ and $\lambda > 0$ , with probability at least $1 - \delta$ ,
+
+$$
+\left\| \frac {1}{n} \sum_ {i = 1} ^ {n} \eta_ {i} x _ {i} \right\| _ {(\widehat {\Sigma} + \lambda I) ^ {- 1}} \leq C \cdot \sigma \cdot \sqrt {\frac {d + \ln (1 / \delta)}{n}},
+$$
+
+for some universal constant $C$ .
+
+Claim E.2. Let $b = (b_{1},\ldots ,b_{n})$ be a vector in which at least $(1 - \alpha) n$ elements are zero, and the rest are bounded by some constant $\zeta >0$ , i.e., $|b_i|\leq \zeta$ . Then, we have
+
+$$
+\left\| \frac {1}{n} \sum_ {i = 1} ^ {n} b _ {i} x _ {i} \right\| _ {(\widehat {\Sigma} + \lambda I) ^ {- 1}} \leq \zeta \sqrt {\alpha}.
+$$
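+Claim E.2 is deterministic, so it can be verified directly on a random instance (a sketch with arbitrary $n$ , $d$ , $\lambda$ , corruption level $\alpha$ , and bound $\zeta$ ):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+n, d, lam = 200, 4, 0.1
+alpha, zeta = 0.05, 2.0
+X = rng.standard_normal((n, d))
+X /= np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1.0)  # enforce ||x_i|| <= 1
+b = np.zeros(n)
+idx = rng.choice(n, int(alpha * n), replace=False)
+b[idx] = rng.uniform(-zeta, zeta, idx.size)   # at most alpha*n nonzero, |b_i| <= zeta
+
+Sig = X.T @ X / n + lam * np.eye(d)           # hat{Sigma} + lambda I
+v = X.T @ b / n                               # (1/n) sum_i b_i x_i
+lhs = np.sqrt(v @ np.linalg.solve(Sig, v))    # weighted norm from the claim
+assert lhs <= zeta * np.sqrt(alpha) + 1e-12
+```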
+
+With the above claims in hand, we are going to establish (21) for CTL, LTC and CLC, respectively.
+
+CTL case. In this case, we rewrite the gradient in (18) as follows
+
+$$
+\nabla \widetilde {\mathcal {L}} (\theta_ {\mathrm {t r u e}}) = - \frac {1}{n} \sum_ {i = 1} ^ {n} \left[ c (\varepsilon) (z _ {i} + \sigma (\varepsilon) - 1) - \bar {y} _ {i} + \bar {y} _ {i} - y _ {i} + y _ {i} - \sigma (\theta^ {\top} x _ {i}) \right] x _ {i},
+$$
+
+where we recall that under CTL, the true label $y_{i}$ is first corrupted to $\bar{y}_{i}$ , which is then privatized to generate $z_{i}$ . Thus, we have
+
+$$
+\begin{array}{l} \left\| \nabla \widetilde {\mathcal {L}} (\theta_ {\mathrm {t r u e}}) \right\| _ {(\widehat {\Sigma} + \lambda I) ^ {- 1}} \\ \leq \underbrace {\left\| \frac {1}{n} \sum_ {i} [ c (\varepsilon) (z _ {i} + \sigma (\varepsilon) - 1) - \bar {y} _ {i} ] x _ {i} \right\| _ {(\widehat {\Sigma} + \lambda I) ^ {- 1}}} _ {\mathcal {T} _ {\mathrm {p r i v a c y}}} + \underbrace {\left\| \frac {1}{n} \sum_ {i} (\bar {y} _ {i} - y _ {i}) x _ {i} \right\| _ {(\widehat {\Sigma} + \lambda I) ^ {- 1}}} _ {\mathcal {T} _ {\mathrm {c o r r u p t i o n}}} \\ + \underbrace {\left\| \frac {1}{n} \sum_ {i} \left[ y _ {i} - \sigma \left(\theta^ {\top} x _ {i}\right) \right] x _ {i} \right\| _ {(\widehat {\Sigma} + \lambda I) ^ {- 1}}} _ {\mathcal {T} _ {\mathrm {s t a n d a r d}}}. \tag {23} \\ \end{array}
+$$
+
+For $\mathcal{T}_{\mathrm{privacy}}$ and $\mathcal{T}_{\mathrm{standard}}$ , we can apply Claim E.1, since the corresponding summands are zero-mean and sub-Gaussian with parameters $\mathcal{O}(c(\varepsilon))$ and $1$ , respectively. Thus, we have with probability at least $1 - \delta$ ,
+
+$$
+\mathcal {T} _ {\mathrm {p r i v a c y}} + \mathcal {T} _ {\mathrm {s t a n d a r d}} \leq C _ {1} \cdot c (\varepsilon) \cdot \sqrt {\frac {d + \ln (1 / \delta)}{n}},
+$$
+
+for some universal constant $C_1 > 0$ .
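+The zero-mean property behind the bound on $\mathcal{T}_{\mathrm{privacy}}$ comes from the debiasing role of $c(\varepsilon)$ . As a sketch, assume $z_i$ is obtained from $\bar{y}_i$ via randomized response that keeps the label with probability $\sigma(\varepsilon)$ (the standard binary $\varepsilon$ -LDP mechanism; an assumption for this illustration). Then $c(\varepsilon)(\mathbb{E}[z_i] + \sigma(\varepsilon) - 1) = \bar{y}_i$ exactly:
+
+```python
+import numpy as np
+
+eps = 1.3                                        # an illustrative privacy level
+sig = 1.0 / (1.0 + np.exp(-eps))                 # sigma(eps)
+c = 1.0 / (2.0 * sig - 1.0)                      # c(eps)
+# the two closed forms of c(eps) agree
+assert abs(c - (np.exp(eps) + 1.0) / (np.exp(eps) - 1.0)) < 1e-12
+# randomized response: z = y w.p. sigma(eps), z = 1 - y otherwise, so
+# E[z] = (2 sigma(eps) - 1) y + 1 - sigma(eps), and debiasing recovers y exactly
+for y in (0.0, 1.0):
+    Ez = sig * y + (1.0 - sig) * (1.0 - y)
+    assert abs(c * (Ez + sig - 1.0) - y) < 1e-12
+```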
+
+For $\mathcal{T}_{\mathrm{corruption}}$ , we can apply Claim E.2 with $\zeta = 1$ , and obtain that
+
+$$
+\mathcal {T} _ {\mathrm {c o r r u p t i o n}} \leq \sqrt {\alpha}.
+$$
+
+Thus, combining these bounds with (21) and (22) yields the bound under CTL.
+
+LTC case. In this case, we rewrite the gradient in (18) as follows
+
+$$
+\begin{array}{l} \nabla \widetilde {\mathcal {L}} (\theta_ {\mathrm {t r u e}}) = - \frac {1}{n} \sum_ {i = 1} ^ {n} \left[ c (\varepsilon) (z _ {i} + \sigma (\varepsilon) - 1 + \widetilde {y} _ {i} - \widetilde {y} _ {i}) - \sigma (\theta^ {\top} x _ {i}) \right] x _ {i}, \\ = - \frac {1}{n} \sum_ {i = 1} ^ {n} \left[ c (\varepsilon) (z _ {i} - \widetilde {y} _ {i}) + c (\varepsilon) (\widetilde {y} _ {i} + \sigma (\varepsilon) - 1) - \sigma (\theta^ {\top} x _ {i}) \right] x _ {i}, \\ \end{array}
+$$
+
+where recall that under LTC, the true label $y_{i}$ is first privatized to be $\widetilde{y}_i$ , which will then be corrupted to generate $z_{i}$ . Thus, we have
+
+$$
+\begin{array}{l} \left\| \nabla \widetilde {\mathcal {L}} \left(\theta_ {\mathrm {t r u e}}\right) \right\| _ {(\widehat {\Sigma} + \lambda I) ^ {- 1}} \\ \leq \underbrace {\left\| \frac {1}{n} \sum_ {i} [ c (\varepsilon) (z _ {i} - \widetilde {y} _ {i}) ] x _ {i} \right\| _ {(\widehat {\Sigma} + \lambda I) ^ {- 1}}} _ {\mathcal {T} _ {\mathrm {c o r r u p t i o n}}} + \underbrace {\left\| \frac {1}{n} \sum_ {i} [ c (\varepsilon) (\widetilde {y} _ {i} + \sigma (\varepsilon) - 1) - \sigma \left(\theta^ {\top} x _ {i}\right) ] x _ {i} \right\| _ {(\widehat {\Sigma} + \lambda I) ^ {- 1}}} _ {\mathcal {T} _ {\mathrm {p r i v a c y}}}. \tag {24} \\ \end{array}
+$$
+
+Similarly, for $\mathcal{T}_{\mathrm{privacy}}$ , we can again apply Claim E.1, since the corresponding summands are zero-mean and sub-Gaussian with parameter $\mathcal{O}(c(\varepsilon))$ . Thus, we have with probability at least $1 - \delta$ ,
+
+$$
+\mathcal {T} _ {\mathrm {p r i v a c y}} \leq C _ {1} \cdot c (\varepsilon) \cdot \sqrt {\frac {d + \ln (1 / \delta)}{n}},
+$$
+
+for some universal constant $C_1 > 0$ .
+
+For $\mathcal{T}_{\mathrm{corruption}}$ , we can apply Claim E.2 with $\zeta = c(\varepsilon)$ , and obtain that
+
+$$
+\mathcal {T} _ {\mathrm {c o r r u p t i o n}} \leq c (\varepsilon) \sqrt {\alpha}.
+$$
+
+Thus, combining these bounds with (21) and (22) yields the bound under LTC.
+
+CLC case. With the results of the previous two cases in hand, we can now easily analyze the CLC case, as it essentially combines the CTL and LTC decompositions. More specifically, we rewrite the gradient in (18) as follows
+
+$$
+\begin{array}{l} \nabla \widetilde {\mathcal {L}} (\theta_ {\mathrm {t r u e}}) = - \frac {1}{n} \sum_ {i = 1} ^ {n} \left[ c (\varepsilon) (z _ {i} + \sigma (\varepsilon) - 1) - c (\varepsilon) (\widetilde {y} _ {i} + \sigma (\varepsilon) - 1) + c (\varepsilon) (\widetilde {y} _ {i} + \sigma (\varepsilon) - 1) - \bar {y} _ {i} + \bar {y} _ {i} - \sigma (\theta^ {\top} x _ {i}) \right] x _ {i} \\ = - \frac {1}{n} \sum_ {i = 1} ^ {n} \left[ c (\varepsilon) (z _ {i} - \widetilde {y} _ {i}) + c (\varepsilon) (\widetilde {y} _ {i} + \sigma (\varepsilon) - 1) - \bar {y} _ {i} + \bar {y} _ {i} - \sigma (\theta^ {\top} x _ {i}) \right] x _ {i}, \\ \end{array}
+$$
+
+where we recall that under CLC, the true label is first corrupted to $\bar{y}_i$ (with parameter $\alpha_{1}$ ), then privatized to $\widetilde{y}_i$ , and finally corrupted again to $z_i$ (with parameter $\alpha_{2}$ ). By a direct utilization of the bounds in (24) and (23) (along with $c(\varepsilon) \geq 1$ ), we have with probability at least $1 - \delta$ ,
+
+$$
+\left\| \nabla \widetilde {\mathcal {L}} (\theta_ {\mathrm {t r u e}}) \right\| _ {(\widehat {\Sigma} + \lambda I) ^ {- 1}} \leq C ^ {\prime} \cdot c (\varepsilon) \cdot \sqrt {\frac {d + \ln (1 / \delta)}{n}} + c (\varepsilon) \sqrt {\alpha_ {2}} + \sqrt {\alpha_ {1}},
+$$
+
+for some universal constant $C'$ . Thus, combining these bounds with (21) and (22) yields the bound under CLC.
+
+# E.5. Proof of Theorem 5.7
+
+Proof. As before, we first present some steps and results common to all three cases. Similar to (20), we have
+
+$$
+\gamma \left\| \Delta \right\| _ {\widehat {\Sigma}} ^ {2} \leq \widetilde {\mathcal {L}} (\theta_ {\mathrm {t r u e}} + \Delta) - \widetilde {\mathcal {L}} (\theta_ {\mathrm {t r u e}}) - \langle \nabla \widetilde {\mathcal {L}} (\theta_ {\mathrm {t r u e}}), \Delta \rangle \leq - \langle \nabla \widetilde {\mathcal {L}} (\theta_ {\mathrm {t r u e}}), \Delta \rangle \leq \left\| \nabla \widetilde {\mathcal {L}} (\theta_ {\mathrm {t r u e}}) \right\| _ {2} \| \Delta \| _ {2}.
+$$
+
+Suppose for now we have the following high probability bound
+
+$$
+\left\| \nabla \widetilde {\mathcal {L}} \left(\theta_ {\mathrm {t r u e}}\right) \right\| _ {2} \leq g (n, \delta), \tag {25}
+$$
+
+for some function $g$ , and proceed to establish the final bound. In particular, we need a lower bound on $\| \Delta \|_{\widehat{\Sigma}}^2$ in terms of $\| \Delta \|_2$ . To this end, by Lemma H.3 with $X_i = x_i x_i^\top$ , $H = 1$ , $\mu_{\min} = n\xi$ , we have with probability at least $1 - \delta$ , $\lambda_{\min}(\widehat{\Sigma}) \geq \xi / 2$ , when $n \geq \frac{8\ln(d / \delta)}{\xi}$ . Thus, we have
+
+$$
+\frac {\gamma \xi}{2} \| \Delta \| _ {2} ^ {2} \leq g (n, \delta) \| \Delta \| _ {2},
+$$
+
+which implies that
+
+$$
+\left\| \widehat {\theta} - \theta_ {\mathrm {t r u e}} \right\| _ {2} = \| \Delta \| _ {2} \leq \frac {2}{\gamma \xi} g (n, \delta). \tag {26}
+$$
+
+Thus, it only remains to establish the bound in (25) under three cases. To this end, we will leverage the following two claims, the counterparts of our previous two claims, but in $L_{2}$ norm.
+
+Claim E.3. Let $\eta_{i}$ be zero-mean i.i.d. sub-Gaussian random variables with parameter $\sigma$ , conditioned on $x_{i}$ . Then, for any $\delta \in (0,1)$ , with probability at least $1 - \delta$ ,
+
+$$
+\left\| \frac {1}{n} \sum_ {i = 1} ^ {n} \eta_ {i} x _ {i} \right\| _ {2} \leq C \cdot \sigma \cdot \sqrt {\frac {1 + \ln (1 / \delta)}{n}},
+$$
+
+for some universal constant $C$ .
+
+Claim E.4. Let $b = (b_{1},\ldots ,b_{n})$ be a vector in which at least $(1 - \alpha) n$ elements are zero, and the rest are bounded by some constant $\zeta >0$ , i.e., $|b_i|\leq \zeta$ . Then, we have
+
+$$
+\left\| \frac {1}{n} \sum_ {i = 1} ^ {n} b _ {i} x _ {i} \right\| _ {2} \leq \zeta \alpha .
+$$
+
+We are left to establish (25) for CTL, LTC, and CLC, respectively.
+
+CTL case. Following the same process as before, replacing the weighted norm by the $L_{2}$ norm and leveraging the new claims yields the following result:
+
+$$
+\left\| \nabla \widetilde {\mathcal {L}} (\theta_ {\mathrm {t r u e}}) \right\| _ {2} \leq g (n, \delta) = C _ {1} \cdot c (\varepsilon) \cdot \sqrt {\frac {1 + \ln (1 / \delta)}{n}} + \alpha ,
+$$
+
+which implies the final result by (26).
+
+LTC and CLC cases. Both of them follow the same process as above, which gives the final result by (26).
+
+# F. Proofs for Claims
+
+Proof of Claim E.1. As in Zhu et al. (2023), the proof mainly utilizes the concentration in Lemma H.2. To this end, we let $X \in \mathbb{R}^{n \times d}$ where $x_{i} \in \mathbb{R}^{d}$ is its $i$ -th row and let $\eta = (\eta_{1}, \dots, \eta_{n})$ be a column vector. Then, we have
+
+$$
+\left\| \frac {1}{n} \sum_ {i = 1} ^ {n} \eta_ {i} x _ {i} \right\| _ {(\widehat {\Sigma} + \lambda I) ^ {- 1}} ^ {2} = \eta^ {\top} M \eta , \quad \text {where} \quad M := \frac {1}{n ^ {2}} X (\widehat {\Sigma} + \lambda I) ^ {- 1} X ^ {\top}.
+$$
+
+With simple linear algebra, we can have
+
+$$
+\operatorname {t r a c e} (M) \leq \frac {d}{n}, \quad \operatorname {t r a c e} (M ^ {2}) \leq \frac {d}{n ^ {2}}, \quad \text {and} \quad \| M \| \leq \frac {1}{n}.
+$$
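+These three facts are deterministic consequences of the structure of $M$ and can be checked numerically on a random instance with $\|x_i\| \leq 1$ :
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+n, d, lam = 100, 5, 0.2
+X = rng.standard_normal((n, d))
+X /= np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1.0)  # ||x_i|| <= 1
+A = X.T @ X / n + lam * np.eye(d)                               # hat{Sigma} + lambda I
+M = X @ np.linalg.solve(A, X.T) / n**2
+assert np.trace(M) <= d / n + 1e-12
+assert np.trace(M @ M) <= d / n**2 + 1e-12
+assert np.linalg.eigvalsh(M)[-1] <= 1 / n + 1e-12               # operator norm bound
+```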
+
+Thus, by Lemma H.2, we have with probability at least $1 - \delta$
+
+$$
+\left\| \frac {1}{n} \sum_ {i = 1} ^ {n} \eta_ {i} x _ {i} \right\| _ {(\widehat {\Sigma} + \lambda I) ^ {- 1}} \leq C \cdot \sigma \cdot \sqrt {\frac {d + \ln (1 / \delta)}{n}},
+$$
+
+for some universal constant $C > 0$ .
+
+Proof of Claim E.2. By a direct computation, and recalling $M = \frac{1}{n^2} X(\widehat{\Sigma} + \lambda I)^{-1}X^\top$ with $\| M \| \leq 1/n$ from the proof above, we have
+
+$$
+\left\| \frac {1}{n} \sum_ {i = 1} ^ {n} b _ {i} x _ {i} \right\| _ {(\widehat {\Sigma} + \lambda I) ^ {- 1}} ^ {2} = b ^ {\top} M b \leq \| M \| \| b \| ^ {2} \leq \frac {1}{n} \cdot \alpha n \cdot \zeta^ {2},
+$$
+
+which implies the result by taking the square root.
+
+Proof of Claim E.3. The proof also relies on Lemma H.2. As before, we let $X \in \mathbb{R}^{n \times d}$ where $x_{i} \in \mathbb{R}^{d}$ is its $i$ -th row and let $\eta = (\eta_{1}, \dots, \eta_{n})$ be a column vector. Then, we have
+
+$$
+\left\| \frac {1}{n} \sum_ {i = 1} ^ {n} \eta_ {i} x _ {i} \right\| _ {2} ^ {2} = \eta^ {\top} M \eta , \quad \text {where} \quad M := \frac {1}{n ^ {2}} X X ^ {\top}.
+$$
+
+With simple linear algebra, we can have
+
+$$
+\operatorname {t r a c e} (M) \leq \frac {1}{n}, \quad \operatorname {t r a c e} (M ^ {2}) \leq \frac {1}{n ^ {2}}, \quad \| M \| \leq \frac {1}{n}.
+$$
+
+Thus, by Lemma H.2, we have with probability at least $1 - \delta$
+
+$$
+\left\| \frac {1}{n} \sum_ {i = 1} ^ {n} \eta_ {i} x _ {i} \right\| _ {2} \leq C \cdot \sigma \cdot \sqrt {\frac {1 + \ln (1 / \delta)}{n}},
+$$
+
+for some universal constant $C > 0$ .
+
+
+
+Proof of Claim E.4. This simply holds by algebra:
+
+$$
+\left\| \frac {1}{n} \sum_ {i = 1} ^ {n} b _ {i} x _ {i} \right\| _ {2} \leq \frac {1}{n} \sum_ {i = 1} ^ {n} \| x _ {i} \| | b _ {i} | \leq \zeta \alpha ,
+$$
+
+which uses the triangle inequality and the boundedness assumption $\| x_{i}\| \leq 1$ .
+
+
+
+# G. Discussion on the Gap in Prior Work
+
+As we have pointed out in the main paper, the currently stated result in Mandal et al. (2024) (in particular, their Theorem 3.3) misses the $1 / \gamma$ factor. This is due to a gap in their proof of Lemma 3.2, which occurs in the last chain of equations on Page 20. In particular, the first inequality below has the wrong direction.
+
+$$
+\begin{array}{l} = - \frac {1}{N} \sum_ {n \in \widehat {S} \cap T} \frac {1}{(\exp (- o ^ {n} \langle \theta , x \rangle / 2) + \exp (o ^ {n} \langle \theta , x \rangle / 2)) ^ {2}} x _ {n} x _ {n} ^ {\top} \\ \preceq - \frac {1}{4 N} \sum_ {n \in \widehat {S} \cap T} x _ {n} x _ {n} ^ {\top}, \\ \end{array}
+$$
+
+where they claim to use $e^u + e^{-u} \geq 2$ . Notice that, due to the negative sign, the inequality direction should be reversed. In order to obtain the right direction, one needs to introduce $\gamma$ , which in turn introduces the $1 / \gamma$ factor in the final bound.
+
+# H. Auxiliary Results
+
+Lemma H.1 (Concentration of Covariances, Lemma 39 in (Zanette et al., 2021)). Let $\phi_1, \ldots, \phi_n \in \mathbb{R}^d$ be i.i.d samples from a distribution $\mu$ with $\| \phi_i \| \leq 1$ . Let $\Sigma := \mathbb{E}_{\phi \sim \mu} \phi \phi^\top$ be the population matrix. If $\lambda \geq \Omega \left( \frac{d}{n} \cdot \ln(n / \delta) \right)$ , then with probability at least $1 - \delta$ ,
+
+$$
+\frac {1}{3} (\Sigma + \lambda I) \preceq \left(\frac {1}{n} \sum_ {i = 1} ^ {n} \phi_ {i} \phi_ {i} ^ {\top} + \lambda I\right) \preceq \frac {5}{3} (\Sigma + \lambda I).
+$$
+
+Lemma H.2 (Tail bound for quadratic forms, Theorem 1 in (Hsu et al., 2011)). Let $A \in \mathbb{R}^{m \times n}$ be a matrix and let $\Sigma := A^{\top}A$ . Suppose $\{x_i\}_{i=1}^n$ is i.i.d sub-Gaussian with parameter $\sigma$ and let $x = (x_1, \ldots, x_n)$ be a column vector. Then, for any $\delta \in (0,1)$ , with probability at least $1 - \delta$ ,
+
+$$
+\left\| A x \right\| ^ {2} = x ^ {\top} \Sigma x \leq \sigma^ {2} \left[ \operatorname {t r a c e} (\Sigma) + 2 \sqrt {\operatorname {t r a c e} (\Sigma^ {2}) \ln (1 / \delta)} + 2 \| \Sigma \| \ln (1 / \delta) \right].
+$$
+
+Lemma H.3 (Matrix Chernoff, Theorem 5.1.1 in (Tropp, 2015)). Consider a finite sequence $\{X_i\}$ of independent random, symmetric matrices in $\mathbb{R}^{d\times d}$ . Assume that $\lambda_{\min}(X_i)\geq 0$ and $\lambda_{\max}(X_i)\leq H$ for each $i$ . Let $Y = \sum_{i}X_{i}$ and $\mu_{\mathrm{min}}$ denote the minimum eigenvalue of the expectation $\mathbb{E}[Y]$ , i.e., $\mu_{\mathrm{min}} = \lambda_{\mathrm{min}}(\sum_{i}\mathbb{E}[X_{i}])$ . Then, for any $\varepsilon \in (0,1)$ , it holds
+
+$$
+\mathbb {P} \left\{\lambda_ {\min } (Y) \leq \varepsilon \mu_ {\min } \right\} \leq d \cdot \exp \left(- (1 - \varepsilon) ^ {2} \frac {\mu_ {\min }}{2 H}\right).
+$$
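+A small simulation illustrates the bound. We draw rows uniformly from the unit sphere, so $\mathbb{E}[x_i x_i^{\top}] = I/d$ , $H = 1$ , and $\mu_{\min} = n/d$ ; all constants below are illustrative:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(1)
+d, n, eps = 4, 200, 0.5
+H = 1.0                                         # lambda_max(x x^T) = ||x||^2 = 1
+mu_min = n / d                                  # min eigenvalue of E[Y] = (n/d) I
+bound = d * np.exp(-(1 - eps) ** 2 * mu_min / (2 * H))
+trials, fails = 200, 0
+for _ in range(trials):
+    X = rng.standard_normal((n, d))
+    X /= np.linalg.norm(X, axis=1, keepdims=True)   # rows uniform on the sphere
+    Y = X.T @ X                                     # sum of x_i x_i^T
+    if np.linalg.eigvalsh(Y)[0] <= eps * mu_min:
+        fails += 1
+assert fails / trials <= bound + 0.05               # empirical rate within the bound
+```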
+
+Lemma H.4 (Corollary 2.9 in (Kairouz et al., 2014)). For any $\varepsilon > 0$ , let $Q$ be any $\varepsilon$ -LDP mechanism. Then, for any pair of distributions $P_{1}$ and $P_{2}$ , the induced marginals $M_{1}$ and $M_{2}$ via $Q$ satisfy
+
+$$
+\operatorname {T V} \left(M _ {1}, M _ {2}\right) \leq \frac {e ^ {\varepsilon} - 1}{e ^ {\varepsilon} + 1} \operatorname {T V} \left(P _ {1}, P _ {2}\right).
+$$
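+For binary randomized response, the contraction factor $(e^{\varepsilon} - 1)/(e^{\varepsilon} + 1)$ is attained with equality, which makes for a convenient numeric check (the two binary input distributions below are arbitrary):
+
+```python
+import numpy as np
+
+eps = 0.8
+P1 = np.array([0.7, 0.3])
+P2 = np.array([0.2, 0.8])
+p_keep = np.exp(eps) / (1.0 + np.exp(eps))       # keep probability of randomized response
+Q = np.array([[p_keep, 1 - p_keep],
+              [1 - p_keep, p_keep]])             # an eps-LDP channel (rows sum to 1)
+M1, M2 = Q.T @ P1, Q.T @ P2                      # induced output marginals
+tv = lambda a, b: 0.5 * np.abs(a - b).sum()
+contraction = (np.exp(eps) - 1.0) / (np.exp(eps) + 1.0)
+assert abs(tv(M1, M2) - contraction * tv(P1, P2)) < 1e-12
+```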
+
+# I. Additional Details on Experiments
+
+# I.1. Samples in Our Dataset
+
+Below, we present a selection of examples from our generated financial dataset across various categories. Each example demonstrates a prompt alongside "Chosen" and "Rejected" responses, illustrating the alignment of decisions with risk levels and priorities.
+
+# Category: Lifestyle & Personal Planning
+
+Prompt: "You're saving $3,000 to host a family talent show. How do you proceed?"
+
+Chosen: "Rent a small venue and create DIY props and prizes."
+
+Rejected: "Spend on professional staging and lighting for a one-time event."
+
+# Category: Home Improvement & Maintenance
+
+Prompt: "You're saving $10,000 to add an outdoor kitchen. How do you proceed?"
+
+Chosen: "Install a grill, sink, and storage with weather-resistant materials."
+
+Rejected: "Spend on high-end appliances that exceed your budget."
+
+# Category: Investments
+
+Prompt: "You're saving $12,500 to invest in green construction funds. How do you proceed?"
+
+Chosen: "Choose funds with diverse holdings in sustainable building materials."
+
+Rejected: "Invest in speculative green startups with limited financial history."
+
+# Category: Small Business Ventures
+
+Prompt: "You're saving $10,000 to start a custom clothing line. How do you proceed?"
+
+Chosen: "Focus on affordable designs and use an online platform to sell."
+
+Rejected: "Spend on a luxury boutique storefront before establishing demand."
+
+# Category: Education & Skill Development
+
+Prompt: "You're saving $5,000 to attend a data visualization course. How do you proceed?"
+
+Chosen: "Enroll in a course with interactive projects and industry relevance."
+
+Rejected: "Choose a program with limited hands-on training."
+
+# Category: Debt Management
+
+Prompt: "You're saving $12,000 to pay off a business loan. How do you proceed?"
+
+Chosen: "Apply the funds directly to reduce the principal and future interest."
+
+Rejected: "Use the funds for operational expenses while extending the loan term."
+
+# Category: Miscellaneous
+
+Prompt: "You want to save $4,500 to organize a youth art festival. How do you proceed?"
+
+Chosen: "Partner with local sponsors and focus on cost-effective exhibits."
+
+Rejected: "Spend heavily on promotional campaigns without engaging artists."
+
+These examples illustrate the structured nature of our dataset and its alignment with decision-making scenarios across diverse financial categories.
+
+# I.2. Hyperparameters
+
+The hyperparameters for the experiments are outlined below. Any hyperparameters not explicitly mentioned use the default values in the TRL library.
+
+Table 3. Hyperparameters used for SFT training.
+
+| Parameter | Value |
+| --- | --- |
+| learning rate | 1e-5 |
+| batch size | 8 |
+| num train epochs | 3 |
+
+Table 4. Hyperparameters used for DPO and rDPO training.
+
+| Parameter | Value |
+| --- | --- |
+| beta | 0.1 |
+| learning rate | 1e-6 |
+| batch size | 8 |
+| num train epochs | 1 |
+
+Table 5. Hyperparameters used for response generation.
+
+| Parameter | Value |
+| --- | --- |
+| temperature | 0.25 |
+| max length | 50 |
+| truncation | True |
+| do sample | True |
+| top k | 30 |
\ No newline at end of file
diff --git a/aunifiedtheoreticalanalysisofprivateandrobustofflinealignmentfromrlhftodpo/images.zip b/aunifiedtheoreticalanalysisofprivateandrobustofflinealignmentfromrlhftodpo/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..bc8986c9cd9b029dd77c675888823ffd6aa05704
--- /dev/null
+++ b/aunifiedtheoreticalanalysisofprivateandrobustofflinealignmentfromrlhftodpo/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:71dc11ad7d2ad76d16a19183d30e9a189160cb70f3232e14d2b3295b2b58ed30
+size 884355
diff --git a/aunifiedtheoreticalanalysisofprivateandrobustofflinealignmentfromrlhftodpo/layout.json b/aunifiedtheoreticalanalysisofprivateandrobustofflinealignmentfromrlhftodpo/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..69260b718f6f536f32b76e33339dc15ee8f72dfd
--- /dev/null
+++ b/aunifiedtheoreticalanalysisofprivateandrobustofflinealignmentfromrlhftodpo/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:366a2d0ef5b6e1fc06269bebeb715ed36496a963e06c27e4e8e0b547d14c45c2
+size 1167216
diff --git a/aunifiedviewonlearningunnormalizeddistributionsvianoisecontrastiveestimation/c1653d16-e39a-40cd-8fab-6e16e15755a2_content_list.json b/aunifiedviewonlearningunnormalizeddistributionsvianoisecontrastiveestimation/c1653d16-e39a-40cd-8fab-6e16e15755a2_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..89d2ebdcfccb8edac0e68eb9093c23ba0cab6698
--- /dev/null
+++ b/aunifiedviewonlearningunnormalizeddistributionsvianoisecontrastiveestimation/c1653d16-e39a-40cd-8fab-6e16e15755a2_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c6321f939ac83ea61553a51bfa71bde5a66aa4f2ddef82b75734b1bb061d48d6
+size 305232
diff --git a/aunifiedviewonlearningunnormalizeddistributionsvianoisecontrastiveestimation/c1653d16-e39a-40cd-8fab-6e16e15755a2_model.json b/aunifiedviewonlearningunnormalizeddistributionsvianoisecontrastiveestimation/c1653d16-e39a-40cd-8fab-6e16e15755a2_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..fc31e9eb7968b6ceea9ced4383a4ba366953da70
--- /dev/null
+++ b/aunifiedviewonlearningunnormalizeddistributionsvianoisecontrastiveestimation/c1653d16-e39a-40cd-8fab-6e16e15755a2_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e8f32d3e8e4567540f688700157df7a940f4c47afb68f8489349ed8ced7e8ed4
+size 346316
diff --git a/aunifiedviewonlearningunnormalizeddistributionsvianoisecontrastiveestimation/c1653d16-e39a-40cd-8fab-6e16e15755a2_origin.pdf b/aunifiedviewonlearningunnormalizeddistributionsvianoisecontrastiveestimation/c1653d16-e39a-40cd-8fab-6e16e15755a2_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..53a824df489697f862ce2c8dc7991a98fa27b577
--- /dev/null
+++ b/aunifiedviewonlearningunnormalizeddistributionsvianoisecontrastiveestimation/c1653d16-e39a-40cd-8fab-6e16e15755a2_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d9a5149c6c65d39beddae3d51c335394e06a22a4efe9e01dad025aa6e85f25e3
+size 878211
diff --git a/aunifiedviewonlearningunnormalizeddistributionsvianoisecontrastiveestimation/full.md b/aunifiedviewonlearningunnormalizeddistributionsvianoisecontrastiveestimation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..33475c152678794a628bb5582fad6b70234789ae
--- /dev/null
+++ b/aunifiedviewonlearningunnormalizeddistributionsvianoisecontrastiveestimation/full.md
@@ -0,0 +1,1655 @@
+# A Unified View on Learning Unnormalized Distributions via Noise-Contrastive Estimation
+
+J. Jon Ryu1 Abhin Shah1 Gregory W. Wornell1
+
+# Abstract
+
+This paper studies a family of estimators based on noise-contrastive estimation (NCE) for learning unnormalized distributions. The main contribution of this work is to provide a unified perspective on various methods for learning unnormalized distributions, which have been independently proposed and studied in separate research communities, through the lens of NCE. This unified view offers new insights into existing estimators. Specifically, for exponential families, we establish the finite-sample convergence rates of the proposed estimators under a set of regularity assumptions, most of which are new.
+
+# 1. Introduction
+
+Unnormalized distributions, also known as energy-based models, arise in various applications, such as generative modeling, density estimation, and reinforcement learning; we refer an interested reader to a comprehensive overview paper (Song & Kingma, 2021) and references therein. Such distributions capture complex dependencies and provide representational flexibility, making them attractive in fields ranging from statistical physics to machine learning. Despite their widespread use, estimating parameters within these models poses significant challenges due to the intractability of their normalization constants.
+
+In this paper, we consider the problem of parameter estimation for unnormalized distributions, through the lens of the noise-contrastive estimation (NCE) framework (Gutmann & Hyvarinen, 2012). Our contributions are as follows:
+
+1. As variants of the $f$ -NCE (Pihlaja et al., 2010) (Sec. 1.2), we study a family of NCE-based estimators, the $\alpha$ -centered NCE ( $\alpha$ -CentNCE; Sec. 2.1) and $f$ -conditional NCE ( $f$ -CondNCE; Sec. 2.2). With this unifying view on different estimators, we clarify previously unrecognized and/or potentially misleading connections among existing estimators proposed for learning unnormalized distributions, as well as provide unified analysis.
+
+$^{1}$ Department of EECS, MIT, Cambridge, Massachusetts, USA. Correspondence to: J. Jon Ryu .
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+2. Specifically, via the lens of $\alpha$ -CentNCE, we reveal that several different estimators for learning unnormalized distributions can be connected and unified, including MLE (Fisher, 1922), MC-MLE (Geyer, 1994), and GlobalGISO (Shah et al., 2023) as special instances. A local version of centered NCE estimators subsumes pseudo likelihood (Besag, 1975) and interaction screening objectives (ISO) (Vuffray et al., 2016; 2021; Ren et al., 2021; Shah et al., 2021a), which were proposed for learning exponential families corresponding to Markov random fields (MRFs).
+3. For $f$ -CondNCE, we show that, in contrast to the original claim in (Ceylan & Gutmann, 2018), the behavior of the $f$ -CondNCE estimator does not converge to the score matching (SM) estimator (Hyvärinen, 2005) in a small noise regime. In fact, we show that the variance of $f$ -CondNCE diverges in the vanishing noise regime, if the number of conditional samples is not sufficiently large.
+4. As a concrete consequence of such connections, we establish the finite-sample convergence guarantees of the proposed estimators for learning bounded exponential family distributions, by building upon the analysis of GlobalGISO by (Shah et al., 2023). To the best of our knowledge, such guarantees are the first of the type for almost all the NCE estimators considered in this paper.
+
+# 1.1. Related Work
+
+While the celebrated maximum likelihood estimator (MLE), advocated by Fisher (1922), is arguably the de facto standard for parameter estimation problems, it is not directly applicable for high-dimensional unnormalized distributions due to the computational intractability of calculating the normalization constant. Several methods have been proposed as alternatives, including MLE with Monte-Carlo approximation of the partition function (MC-MLE) (Geyer, 1994; Riou-Durand & Chopin, 2018; Jiang et al., 2023), score matching (Hyvärinen, 2005; 2007; Song et al., 2020; Liu et al., 2022; Pabbaraju et al., 2023), NCE (Gutmann & Hyvärinen, 2012; Pihlaja et al., 2010; Gutmann & Hirayama, 2011; Ceylan & Gutmann, 2018; Uehara et al., 2018; Chehab et al., 2022; 2023), contrastive divergence (Hinton, 2002), among many other techniques. A comprehensive overview of these methods can be found in (Song & Kingma, 2021).
+
+$$
+\begin{array}{l} \mathcal {L} _ {f} ^ {\mathrm {n c e}} \left(\phi_ {\theta}; q _ {\mathrm {d}}, q _ {\mathrm {n}}\right) \triangleq \mathbb {E} _ {q _ {\mathrm {n}} (x)} \left[ \Delta_ {f} \left(\frac {q _ {\mathrm {d}} (x)}{\nu q _ {\mathrm {n}} (x)}, \frac {\phi_ {\theta} (x)}{\nu q _ {\mathrm {n}} (x)}\right) \right] - \mathbb {E} _ {q _ {\mathrm {n}} (x)} \left[ f \left(\frac {q _ {\mathrm {d}} (x)}{\nu q _ {\mathrm {n}} (x)}\right) \right] \quad (1) \\ = - \frac {1}{\nu} \mathbb {E} _ {q _ {\mathrm {d}} (x)} \left[ f ^ {\prime} \left(\rho_ {\theta} (x)\right) \right] + \mathbb {E} _ {q _ {\mathrm {n}} (x)} \left[ \rho_ {\theta} (x) f ^ {\prime} \left(\rho_ {\theta} (x)\right) - f \left(\rho_ {\theta} (x)\right) \right]. \quad (2) \\ \end{array}
+$$
+
+For exponential families, there is a specialized literature, with a focus on learning undirected graphical models such as MRFs. In a pioneering work, Besag (1975) proposed the so-called pseudo likelihood estimator, which can be understood as a local counterpart of MLE. A representative recent line of work includes ISO, GISO, and ISODUS, based on an estimation principle called interaction screening (Vuffray et al., 2016; 2021; Ren et al., 2021; Shah et al., 2021a). More broadly, for general exponential families, Shah et al. (2021b), with refinements in the follow-up work (Shah et al., 2023), studied a variant of the interaction screening objective for training a general exponential family without a local structure, which we refer to as GlobalGISO in this paper. We emphasize that these estimators have been proposed and analyzed in several different communities, and the literature lacks a comprehensive understanding of how the different estimators compare. In this paper, our primary goal is to provide a unifying view of these different principles for learning unnormalized distributions via the NCE principle (Gutmann & Hyvarinen, 2012; Pihlaja et al., 2010).
+
+# 1.2. Preliminaries: $f$ -Noise-Contrastive Estimation
+
+We consider an unnormalized density model $\{\phi_{\theta}(x) \colon \theta \in \Theta\}$ for a $d$ -dimensional random vector $x$ with support $\mathcal{X} \subset \mathbb{R}^d$ , where $\theta \in \mathbb{R}^p$ is a parameter and $\Theta \subset \mathbb{R}^p$ is the set of feasible parameters. Our goal is to find the best $\theta \in \Theta$ so that $\phi_{\theta}(x)$ is as close as possible to the data generating distribution $q_{\mathrm{d}}(x)$ . We consider the well-specified case, where there exists $\theta^{\star} \in \Theta$ such that $\phi_{\theta^{\star}}(x) \propto q_{\mathrm{d}}(x)$ .
+
+We start the investigation with an extension of the original NCE (Gutmann & Hyvarinen, 2012), which we call $f$ -NCE. This family of estimators was first derived in (Pihlaja et al., 2010) in a rather convoluted way. Here, we introduce them as an instance of Bregman divergence minimization for density ratio estimation (DRE) (Sugiyama et al., 2012), from which the consistency of the resulting estimator is straightforward.
+
+The idea of NCE is to train the model $\phi_{\theta}(x)$ , so that it can be used to discriminate samples of the data distribution $q_{\mathrm{d}}(x)$ from samples of a noise (or reference) distribution $q_{\mathrm{n}}(x)$ . A necessary condition for discrimination is that the support of $q_{\mathrm{n}}$ , i.e., $\operatorname{supp}(q_{\mathrm{n}})$ , subsumes the support of $q_{\mathrm{d}}(x)$ , i.e., $\operatorname{supp}(q_{\mathrm{d}})$ . Hence, we define the (scaled) model density ratio $\rho_{\theta}(x) \triangleq \frac{\phi_{\theta}(x)}{\nu q_{\mathrm{n}}(x)}$ for a hyperparameter $\nu > 0$ , and we wish to fit this to the underlying density ratio $\frac{q_{\mathrm{d}}(x)}{\nu q_{\mathrm{n}}(x)}$ . For a differentiable function $h \colon \mathcal{Z} \to \mathbb{R}$ with $\mathcal{Z} \subset \mathbb{R}^k$ , we define and denote the Bregman divergence as
+
+$$
+\Delta_ {h} (\mathbf {z}, \mathbf {z} ^ {\prime}) \triangleq h (\mathbf {z}) - h \left(\mathbf {z} ^ {\prime}\right) - \langle \nabla h \left(\mathbf {z} ^ {\prime}\right), \mathbf {z} - \mathbf {z} ^ {\prime} \rangle
+$$
+
+for $\mathbf{z},\mathbf{z}^{\prime}\in \mathcal{Z}$ , which is the approximation error of the first-order Taylor approximation of $h(\mathbf{z})$ at $\mathbf{z}'$ . For a given strictly convex function $f\colon \mathbb{R}_{\geq 0}\to \mathbb{R}$ and a reference distribution $q_{\mathrm{n}}(x)$ , we propose the $f$ -NCE objective as in Eq. (1). The intermediate expression in Eq. (1) is used as a conceptual device to derive the final objective in Eq. (2). We define the $f$ -NCE estimator as a minimizer of the objective function:
+
+$$
+\theta_ {f} ^ {\mathrm {n c e}} \left(q _ {\mathrm {d}}, q _ {\mathrm {n}}\right) \in \arg \min _ {\theta \in \Theta} \mathcal {L} _ {f} ^ {\mathrm {n c e}} \left(\phi_ {\theta}; q _ {\mathrm {d}}, q _ {\mathrm {n}}\right).
+$$
+
+Given data samples $x_{1}, \ldots, x_{n_{\mathrm{d}}}$ drawn from $q_{\mathrm{d}}(x)$ and noise samples $x_{1}^{\prime}, \ldots, x_{n_{\mathrm{n}}}^{\prime}$ from $q_{\mathrm{n}}(x)$ , the empirical estimator is $\theta_f^{\mathrm{nce}}(\hat{q}_{\mathrm{d}}, \hat{q}_{\mathrm{n}})$ , where $\hat{q}_{\mathrm{d}}$ and $\hat{q}_{\mathrm{n}}$ denote the corresponding empirical distributions. We remark that, directly inheriting a property of the Bregman divergence, the $f$ -NCE objective is invariant to adding an affine function (a linear function plus a constant) to the generator $f$ ; see Appendix B.1.1 for a formal statement.
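+As a minimal numerical sketch (our own illustration; the toy Gaussian setup, sample sizes, and grids are assumptions, not from the paper), the empirical $f_1$ -NCE objective from Eq. (2) and Table 1 can be minimized by grid search for a 1D Gaussian model, with an extra scale parameter $c$ so that the unnormalized model can match the normalized data density:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+
+# Toy setup: data q_d = N(1, 1), noise q_n = N(0, 2^2), nu = 1.
+mu_true = 1.0
+x_d = rng.normal(mu_true, 1.0, size=5_000)   # data samples
+x_n = rng.normal(0.0, 2.0, size=5_000)       # noise samples
+
+def log_qn(x):
+    # log density of q_n = N(0, 2^2)
+    return -0.125 * x ** 2 - np.log(2.0 * np.sqrt(2.0 * np.pi))
+
+def log_phi(x, theta, c):
+    # augmented unnormalized model: e^c * exp(-(x - theta)^2 / 2)
+    return c - 0.5 * (x - theta) ** 2
+
+def f1_nce(theta, c):
+    # empirical version of Eq. (2) with f_1(rho) = rho log rho (Table 1):
+    #   E_qd[log(q_n / phi)] + E_qn[phi / q_n]
+    t1 = np.mean(log_qn(x_d) - log_phi(x_d, theta, c))
+    t2 = np.mean(np.exp(log_phi(x_n, theta, c) - log_qn(x_n)))
+    return t1 + t2
+
+thetas = np.linspace(0.0, 2.0, 41)
+cs = np.linspace(-2.0, 0.0, 41)
+vals = np.array([[f1_nce(t, c) for c in cs] for t in thetas])
+i, j = np.unravel_index(vals.argmin(), vals.shape)
+theta_hat, c_hat = thetas[i], cs[j]
+print(theta_hat, c_hat)  # theta_hat near 1.0; c_hat near -log(sqrt(2*pi))
+```
+
+Any consistent $f$ from Table 1 could be swapped in by changing `f1_nce`; the grid search is only for transparency, and gradient-based optimization would be used in practice.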
+
+By constructing the $f$ -NCE objective in terms of a Bregman divergence, we can easily prove that the objective is consistent in the population limit, which we call Fisher consistency, provided that the generating function $f$ is strictly convex and the model is well-specified.
+
+Proposition 1.1 (f-NCE: Fisher consistency). Let $f\colon \mathbb{R}_{\geq 0}\to \mathbb{R}$ be a strictly convex function and assume $\mathrm{supp}(q_{\mathrm{d}})\subset \mathrm{supp}(q_{\mathrm{n}})$ . If there exists $\theta^{\star}$ such that $\phi_{\theta^{\star}}(\cdot) = q_{\mathrm{d}}(\cdot)$ , then $\phi_{\theta_f^{\mathrm{nce}}(q_\mathrm{d},q_\mathrm{n})}(\cdot) = q_\mathrm{d}(\cdot)$ .
+
+Remark 1.1. Since the original family of unnormalized distributions $\{\phi_{\theta}(x) \colon \theta \in \Theta\}$ may not contain normalized distributions, for $f$ -NCE we consider an augmented family $\phi_{\underline{\theta}}(x) \triangleq e^{c} \phi_{\theta}(x)$ with augmented parameter $\underline{\theta} \triangleq (\theta, c)$ for $c \in \mathbb{R}$ . Then, we
+
+Table 1. Examples of the NCE objective. Recall that $\underline{\theta} \triangleq (\theta, \nu) \in \Theta \times \mathbb{R}$ .
+
+| Name | Generator function $f(\rho)$ | NCE objective $\mathcal{L}_f^{\mathrm{nce}}(\underline{\theta})$ |
+| --- | --- | --- |
+| Log (Gutmann & Hyvärinen, 2012) | $f_{\log}(\rho) \triangleq \rho\log\rho - (\rho+1)\log(\rho+1)$ | $-\frac{1}{\nu}\mathbb{E}_{q_{\mathrm{d}}}\left[\log\frac{\rho_{\theta}}{\rho_{\theta}+1}\right] - \mathbb{E}_{q_{\mathrm{n}}}\left[\log\frac{1}{\rho_{\theta}+1}\right]$ |
+| Asymmetric power ($\alpha$) | $f_{\alpha}(\rho) \triangleq \frac{\rho^{\alpha}-1}{\alpha(\alpha-1)}$ for $\alpha \notin \{0,1\}$ | $\frac{1}{1-\alpha}\mathbb{E}_{q_{\mathrm{d}}}\left[\left(\frac{q_{\mathrm{n}}}{\phi_{\theta}}\right)^{1-\alpha}\right] + \frac{1}{\alpha}\mathbb{E}_{q_{\mathrm{n}}}\left[\left(\frac{\phi_{\theta}}{q_{\mathrm{n}}}\right)^{\alpha}\right]$ |
+| Asymmetric inverse log | $f_{0}(\rho) \triangleq \lim_{\alpha \downarrow 0} f_{\alpha}(\rho) = -\log\rho$ | $\mathbb{E}_{q_{\mathrm{d}}}\left[\frac{q_{\mathrm{n}}}{\phi_{\theta}}\right] + \mathbb{E}_{q_{\mathrm{n}}}\left[\log\frac{\phi_{\theta}}{q_{\mathrm{n}}}\right]$ |
+| Asymmetric log | $f_{1}(\rho) \triangleq \lim_{\alpha \uparrow 1}\left(f_{\alpha}(\rho) - \frac{\rho-1}{\alpha-1}\right) = \rho\log\rho$ | $\mathbb{E}_{q_{\mathrm{d}}}\left[\log\frac{q_{\mathrm{n}}}{\phi_{\theta}}\right] + \mathbb{E}_{q_{\mathrm{n}}}\left[\frac{\phi_{\theta}}{q_{\mathrm{n}}}\right]$ |
+
+assume that $\{\phi_{\underline{\theta}}(x)\colon \underline{\theta}\in \Theta \times \mathbb{R}\}$ is well-specified, i.e., there exists $c^{\star}\in \mathbb{R}$ and $\theta^{\star}$ such that $q_{\mathrm{d}}(\cdot) = e^{c^{\star}}\phi_{\theta^{\star}}(\cdot)$ . Hereafter, $\underline{\theta}$ denotes the augmented parameter, where $\theta$ without an underline denotes the original parameter.
+
+We consider the examples of $f$ in Table 1 as canonical; each $f$ (or the corresponding $f$ -NCE objective) is named based on its correspondence to a proper scoring rule (Gneiting & Raftery, 2007). It is easy to check that $\nu$ does not affect the objective function for the power scores $f_{\alpha}(\rho)$ , so we set $\nu = 1$ in this case. We note that in the DRE literature, a similar objective based on the generator function $f_{1}(\rho)$ is known as the Kullback-Leibler Importance Estimation Procedure (Sugiyama et al., 2008).
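+The limiting relations in Table 1 can be checked numerically; the small sketch below (ours) verifies the $\alpha \to 0$ and $\alpha \to 1$ limits of the asymmetric power generator, where the $\alpha \to 1$ limit holds after removing a divergent affine term, which leaves the $f$ -NCE objective unchanged:
+
+```python
+import numpy as np
+
+def f_alpha(rho, alpha):
+    # asymmetric power generator from Table 1 (valid for alpha not in {0, 1})
+    return (rho ** alpha - 1.0) / (alpha * (alpha - 1.0))
+
+rho = np.linspace(0.1, 5.0, 50)
+
+# alpha -> 0 recovers the asymmetric inverse log generator: f_0(rho) = -log(rho)
+lim0 = f_alpha(rho, 1e-6)
+
+# alpha -> 1, after subtracting the divergent affine term (rho - 1)/(alpha - 1),
+# recovers the asymmetric log generator rho*log(rho), up to an affine remainder
+a = 1.0 - 1e-6
+lim1 = f_alpha(rho, a) - (rho - 1.0) / (a - 1.0)
+```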
+
+# 2. Two Variants of NCE
+
+In this section, we introduce two variants of the $f$ -NCE framework: $\alpha$ -centered NCE and $f$ -conditional NCE.
+
+# 2.1. $\alpha$ -Centered NCE
+
+Consider the asymmetric power generator function $f_{\alpha}(\rho)$ for $\alpha \in \mathbb{R}$ (with $\nu = 1$ ); see the second row of Table 1. We will introduce a transformation called $\alpha$ -centering in Eq. (3), which normalizes a given parametric model $\phi_{\theta}(x)$ in an $\alpha$ - and $q_{\mathrm{n}}$ -dependent manner. Applying the normalized model to $f_{\alpha}$ -NCE (i.e., the NCE induced by the asymmetric power score) results in a new variant of NCE. In Sec. 3.1, we show that this variant provides a unified view on several existing estimators that seem different at first glance.
+
+We define a normalized model of $\phi_{\theta}(x)$ called the $\alpha$ -centered model as
+
+$$
+\tilde{\phi}_{\theta;\alpha}(x) \triangleq \frac{\phi_{\theta}(x)}{Z_{\alpha}(\theta)}, \quad \text{where} \tag{3}
+$$
+
+$$
+Z_{\alpha}(\theta) \triangleq \begin{cases} \mathbb{E}_{q_{\mathrm{n}}(x)}\left[\left(\frac{\phi_{\theta}(x)}{q_{\mathrm{n}}(x)}\right)^{\alpha}\right]^{1/\alpha} & \text{if } \alpha \neq 0, \\ \exp\left(\mathbb{E}_{q_{\mathrm{n}}(x)}\left[\log\frac{\phi_{\theta}(x)}{q_{\mathrm{n}}(x)}\right]\right) & \text{if } \alpha = 0. \end{cases}
+$$
+
+Note that $Z_{0}(\theta) = \lim_{\alpha \downarrow 0}Z_{\alpha}(\theta)$ . Applying the $f_{\alpha}$ -NCE objective to the $\alpha$ -centered model, we define
+
+$$
+\begin{aligned}
+\mathcal{L}_{\alpha}^{\mathrm{cent}}(\theta; q_{\mathrm{d}}, q_{\mathrm{n}}) &\triangleq \mathcal{L}_{f_{\alpha}}^{\mathrm{nce}}\left(\tilde{\phi}_{\theta;\alpha}; q_{\mathrm{d}}, q_{\mathrm{n}}\right) \\
+&\overset{(2)}{=} \frac{\mathbb{E}_{q_{\mathrm{d}}}\left[\tilde{\rho}_{\theta;\alpha}^{\alpha-1}(x)\right]}{1-\alpha} \\
+&\overset{(3)}{=} \frac{\mathbb{E}_{q_{\mathrm{d}}}\left[\rho_{\theta}^{\alpha-1}(x)\right]\left(\mathbb{E}_{q_{\mathrm{n}}}\left[\rho_{\theta}^{\alpha}(x)\right]\right)^{\frac{1-\alpha}{\alpha}}}{1-\alpha},
+\end{aligned}
+$$
+
+which we call the $\alpha$ -CentNCE objective. Here, the second term in the $f_{\alpha}$ -NCE objective becomes constant, since we design the $\alpha$ -centered model such that $\mathbb{E}_{q_{\mathrm{n}}}[\tilde{\rho}_{\theta;\alpha}^{\alpha}(x)] = 1$ . Note that the expectation with respect to the reference distribution $q_{\mathrm{n}}$ is embedded in the normalization term of the new model. In Table 2, we provide a side-by-side comparison between the $f_{\alpha}$ -NCE and $\alpha$ -CentNCE objectives for $\alpha \in \{0,\frac{1}{2},1\}$ .
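+A quick Monte-Carlo sanity check (our own sketch; the toy exponential family and sample size are our choices) of the normalizer $Z_\alpha(\theta)$ in Eq. (3), its defining property $\mathbb{E}_{q_{\mathrm{n}}}[\tilde{\rho}_{\theta;\alpha}^{\alpha}(x)] = 1$ , and the $\alpha \to 0$ limit:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(1)
+
+# Toy model: 1D exponential family phi_theta(x) = exp(theta * x) on [0, 1],
+# with reference q_n = Uniform[0, 1], so q_n(x) = 1 and rho_theta = phi_theta.
+theta = 1.5
+x_n = rng.uniform(0.0, 1.0, size=100_000)   # samples from q_n
+log_ratio = theta * x_n                      # log(phi_theta(x) / q_n(x))
+
+def Z_alpha(alpha):
+    # Monte-Carlo estimate of the normalizer in Eq. (3)
+    if alpha == 0.0:
+        return np.exp(np.mean(log_ratio))
+    return np.mean(np.exp(alpha * log_ratio)) ** (1.0 / alpha)
+
+# defining property of the centered model: E_qn[(phi/(Z_alpha q_n))^alpha] = 1
+alpha = 0.5
+centered = np.mean(np.exp(alpha * (log_ratio - np.log(Z_alpha(alpha)))))
+print(centered)                       # equals 1 by construction
+print(Z_alpha(1e-4), Z_alpha(0.0))    # Z_0 is the alpha -> 0 limit of Z_alpha
+# Z_1 recovers the usual partition function, here (e^theta - 1) / theta
+print(Z_alpha(1.0), (np.exp(theta) - 1.0) / theta)
+```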
+
+We define the $\alpha$ -CentNCE estimator as a minimizer of the objective function:
+
+$$
+\theta_{\alpha}^{\mathrm{cent}}\left(q_{\mathrm{d}}, q_{\mathrm{n}}\right) \in \arg\min_{\theta \in \Theta} \mathcal{L}_{\alpha}^{\mathrm{cent}}\left(\theta; q_{\mathrm{d}}, q_{\mathrm{n}}\right).
+$$
+
+In this case, since any multiplicative scaling of $\phi_{\theta}(x)$ is canceled out in the centered model in Eq. (3), Fisher consistency follows even when the model is well-specified only up to a constant, unlike the strict well-specifiedness required in Proposition 1.1.
+
+Proposition 2.1 ( $\alpha$ -CentNCE: Fisher consistency). Let $\alpha \in \mathbb{R}$ . Assume $\mathrm{supp}(q_{\mathrm{d}}) \subset \mathrm{supp}(q_{\mathrm{n}})$ . If there exists $\theta^{\star}$ and $c > 0$ such that $c\phi_{\theta^{\star}}(\cdot) = q_{\mathrm{d}}(\cdot)$ , then $\phi_{\theta_{\alpha}^{\mathrm{cent}}(q_{\mathrm{d}}, q_{\mathrm{n}})}(\cdot) \propto q_{\mathrm{d}}(\cdot)$ .
+
+# 2.2. $f$ -Conditional NCE
+
+In the NCE literature, it is known that the noise distribution $q_{\mathrm{n}}$ must be carefully chosen to guarantee good convergence of the resulting estimator, which is generally considered hard in practice (Chehab et al., 2022). As an alternative, Ceylan & Gutmann (2018) proposed a new framework called conditional NCE (CondNCE), in which the idea is to draw noisy samples conditioned on the data samples. CondNCE was further justified via a connection to the score matching framework of Hyvarinen (2005). In this paper, we clarify the connection to score matching (in Sec. 3.2), and establish the first finite-sample convergence rate of this estimator (in Sec. 4).
+
+Here, we introduce $f$ -CondNCE, a general CondNCE framework for a convex function $f$ . The idea is the same as $f$ -NCE:
+
+Table 2. Special cases of the $f_{\alpha}$ -NCE and $\alpha$ -CentNCE objectives. The view on the estimators highlighted in blue and boldface via $\alpha$ -CentNCE are new; see Theorem 3.2.
+
+| Objective | $\alpha = 0$ | $\alpha = 1/2$ | $\alpha = 1$ |
+| --- | --- | --- | --- |
+| $f_{\alpha}$ -NCE | $\mathbb{E}_{q_{\mathrm{d}}}\left[\frac{q_{\mathrm{n}}}{\phi_{\theta}}\right] + \mathbb{E}_{q_{\mathrm{n}}}\left[\log\frac{\phi_{\theta}}{q_{\mathrm{n}}}\right]$ (InvIS (Pihlaja et al., 2010)) | $2\left(\mathbb{E}_{q_{\mathrm{d}}}\left[\sqrt{\frac{q_{\mathrm{n}}}{\phi_{\theta}}}\right] + \mathbb{E}_{q_{\mathrm{n}}}\left[\sqrt{\frac{\phi_{\theta}}{q_{\mathrm{n}}}}\right]\right)$ (eNCE (Liu et al., 2021)) | $\mathbb{E}_{q_{\mathrm{d}}}\left[\log\frac{q_{\mathrm{n}}}{\phi_{\theta}}\right] + \mathbb{E}_{q_{\mathrm{n}}}\left[\frac{\phi_{\theta}}{q_{\mathrm{n}}}\right]$ (Importance Sampling (IS) (Pihlaja et al., 2010; Riou-Durand & Chopin, 2018)) |
+| $\alpha$ -CentNCE | $\mathbb{E}_{q_{\mathrm{d}}}\left[\frac{q_{\mathrm{n}}}{\phi_{\theta}}\right] e^{\mathbb{E}_{q_{\mathrm{n}}}\left[\log\frac{\phi_{\theta}}{q_{\mathrm{n}}}\right]}$ (**GlobalGISO** (Shah et al., 2023)) | $2\,\mathbb{E}_{q_{\mathrm{d}}}\left[\sqrt{\frac{q_{\mathrm{n}}}{\phi_{\theta}}}\right]\mathbb{E}_{q_{\mathrm{n}}}\left[\sqrt{\frac{\phi_{\theta}}{q_{\mathrm{n}}}}\right]$ | $\mathbb{E}_{q_{\mathrm{d}}}\left[\log\frac{q_{\mathrm{n}}}{\phi_{\theta}}\right] + \log\mathbb{E}_{q_{\mathrm{n}}}\left[\frac{\phi_{\theta}}{q_{\mathrm{n}}}\right]$ (**MLE** (Fisher, 1922), **MC-MLE** (Geyer, 1994; Jiang et al., 2023)) |
+
+we aim to minimize the Bregman divergence between two density ratios with respect to $f$ . In this case, instead of the noise distribution $q_{\mathrm{n}}$ , we consider a channel (conditional distribution) $\pi(y|x)$ , and aim to contrast the joint distributions $q_{\mathrm{d}}(x)\pi(y|x)$ vs. $q_{\mathrm{d}}(y)\pi(x|y)$ . Compared to contrasting $q_{\mathrm{d}}(x)$ vs. $q_{\mathrm{n}}(x)$ in the standard NCE, this contrast is self-referential in the sense that the data distribution $q_{\mathrm{d}}$ appears on both sides. Let $\rho_{\theta}(x,y) \triangleq \frac{\phi_{\theta}(x)\pi(y|x)}{\phi_{\theta}(y)\pi(x|y)}$ be the model density ratio in this case, implicitly assuming $\nu = 1$ . We define the generalized conditional NCE objective $\mathcal{L}_f^{\mathrm{cond}}(\phi_\theta;q_\mathrm{d},\pi)$ as in Eq. (4), where the last equality follows from $\rho_{\theta}(y,x) = \rho_{\theta}(x,y)^{-1}$ . For further simplicity, we focus on symmetric channels, i.e., $\pi(y|x) = \pi(x|y)$ , in which case the ratio simplifies to $\rho_{\theta}(x,y) = \frac{\phi_{\theta}(x)}{\phi_{\theta}(y)}$ . For $\operatorname{supp}(q_{\mathrm{d}}) = \mathcal{X} = \mathbb{R}^{d}$ , canonical examples are (i) Gaussian noise $\pi(y|x) = \mathcal{N}(y;x,\sigma^2 I)$ and (ii) uniform noise over an $\ell_s$ -norm ball or sphere for some $s \geq 1$ . We define the $f$ -CondNCE estimator as a minimizer of the objective:
+
+$$
+\theta_{f}^{\mathrm{cond}}\left(q_{\mathrm{d}}, \pi\right) \in \arg\min_{\theta \in \Theta} \mathcal{L}_{f}^{\mathrm{cond}}\left(\phi_{\theta}; q_{\mathrm{d}}, \pi\right).
+$$
+
+Similar to $\alpha$ -CentNCE, the Fisher consistency follows even when the model is well-specified up to a constant as any multiplicative scaling to $\phi_{\theta}(x)$ is cancelled out.
+
+Proposition 2.2 (f-CondNCE: Fisher consistency). Let $f$ be a strictly convex function. Let $\pi(y|x)$ be a conditional distribution such that $\operatorname{supp}(q_{\mathrm{d}}(x)\pi(y|x)) = \operatorname{supp}(q_{\mathrm{d}}(y)\pi(x|y))$ . If there exists a unique $\theta^{\star}$ and $c > 0$ such that $c\phi_{\theta^{\star}}(\cdot) = q_{\mathrm{d}}(\cdot)$ , then $\phi_{\theta_f^{\mathrm{cond}}(q_{\mathrm{d}},\pi)}(\cdot) \propto q_{\mathrm{d}}(\cdot)$ .
+
+In practice, given $n_{\mathrm{d}}$ samples $\{x_i\}_{i=1}^{n_{\mathrm{d}}}$ drawn i.i.d. from $q_{\mathrm{d}}(x)$ and, for each $i$ , conditional samples $\{y_{ij}\}_{j=1}^{K}$ drawn conditionally i.i.d. from $\pi(y|x_i)$ , we let $\mathcal{L}_f^{\mathrm{cond}}(\phi_\theta; \hat{q}_{\mathrm{d}}, \hat{\pi})$ denote the corresponding empirical objective with a slight abuse of notation.
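+To make the per-pair integrand of Eq. (4) concrete, the sketch below (ours) evaluates it for the log generator $f_{\log}$ of Table 1 under a symmetric channel, where it reduces to the logistic-type loss $2\log\left(1 + \rho_\theta(x,y)^{-1}\right)$ , i.e., up to a factor of two, the logistic classification loss used by the original CondNCE:
+
+```python
+import numpy as np
+
+def f_log(rho):
+    # log generator from Table 1: f_log(rho) = rho log rho - (rho + 1) log(rho + 1)
+    return rho * np.log(rho) - (rho + 1.0) * np.log(rho + 1.0)
+
+def f_log_prime(rho):
+    return np.log(rho / (rho + 1.0))
+
+def cond_integrand(r):
+    # integrand of Eq. (4): -f'(rho(x,y)) + rho(y,x) f'(rho(y,x)) - f(rho(y,x)),
+    # using rho(y,x) = 1 / rho(x,y) for a symmetric channel
+    return -f_log_prime(r) + (1.0 / r) * f_log_prime(1.0 / r) - f_log(1.0 / r)
+
+r = np.logspace(-3, 3, 200)            # a range of density-ratio values
+logistic = 2.0 * np.log1p(1.0 / r)     # 2 log(1 + rho^{-1})
+print(np.max(np.abs(cond_integrand(r) - logistic)))  # agree to rounding error
+```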
+
+# 3. Connecting the Dots
+
+In this section, we explain how the estimators introduced in the previous section unify and generalize the existing estimators and provide new theoretical insights.
+
+# 3.1. MLE, MC-MLE, and GlobalGISO as Limiting Instances of Centered NCE
+
+As alluded to above, $\alpha$ -CentNCE estimators interpolate between MLE (Fisher, 1922) ( $\alpha = 1$ ) and GlobalGISO (Shah et al., 2023) ( $\alpha = 0$ , specifically for exponential families), provided that $Z_{\alpha}(\theta)$ can be computed analytically, i.e., without estimation. When $Z_{\alpha}(\theta)$ is instead estimated with samples, the $\alpha$ -CentNCE objective recovers MC-MLE (Geyer, 1994) for $\alpha = 1$ . We formally summarize the connections in the next statement and Table 2.
+
+Theorem 3.1 ( $\alpha$ -CentNCE subsumes MLE and GlobalGISO). The following holds:
+
+1. $(\alpha = 0$ : GlobalGISO) For an exponential family $\phi_{\theta}(x)$ , if $\mathcal{X}$ is bounded and $q_{\mathrm{n}}(x)$ is a uniform distribution over $\mathcal{X}$ , the 0-CentNCE objective $\tilde{\mathcal{L}}_0(\theta; q_{\mathrm{d}}, q_{\mathrm{n}})$ is equivalent to GlobalGISO (Shah et al., 2021b).
+2. $(\alpha = 1$ : MLE) If $Z_{1}(\theta)$ is assumed to be computable for each $\theta$ , the 1-CentNCE objective $\tilde{\mathcal{L}}_1(\theta; \hat{q}_{\mathrm{d}}, q_{\mathrm{n}})$ is equivalent to MLE (Fisher, 1922).
+3. $(\alpha = 1$ : MC-MLE) If $Z_{1}(\theta) = \mathbb{E}_{q_{\mathrm{n}}}[\frac{\phi_{\theta}(x)}{q_{\mathrm{n}}(x)} ]$ is estimated with empirical noise distribution $\hat{q}_{\mathrm{n}}(x)$ , the 1-CentNCE objective $\tilde{\mathcal{L}}_1(\theta ;\hat{q}_{\mathrm{d}},\hat{q}_{\mathrm{n}})$ is equivalent to MC-MLE (Geyer, 1994).
+
+Remark 3.1. Note that the connection between GlobalGISO and MLE can be made for the case when $Z_{\alpha}(\theta)$ is assumed to be computable for any $\theta$ . At one extreme when $\alpha = 1$ , in which case the objective boils down to that of MLE, it is clear that $Z_{1}(\theta) = \mathbb{E}_{q_{\mathrm{n}}}\left[\frac{\phi_{\theta}(x)}{q_{\mathrm{n}}(x)}\right] = \int \phi_{\theta}(x)dx$ becomes the standard partition function. In the other extreme case where $\alpha \to 0$ , if $\phi_{\theta}(x) = \exp (\langle \theta ,\psi (x)\rangle)$ is an exponential family, computing $Z_{0}(\theta)$ boils down to computing $\mathbb{E}_{q_{\mathrm{n}}(x)}[\psi (x)]$ since $Z_{0}(\theta)\propto \exp (\langle \theta ,\mathbb{E}_{q_{\mathrm{n}}}[\psi ]\rangle)$ . For a special choice of $q_{\mathrm{n}}$ (e.g., uniform distribution) and $\psi$ (e.g., polynomial and sinusoidal functions), this term can be computed analytically, as concretely illustrated by (Shah et al., 2023). We also provide an alternative theoretical view of the 0-CentNCE objective as a certain KL divergence minimization problem,
+
+$$
+\begin{aligned} \mathcal{L}_{f}^{\mathrm{cond}}\left(\phi_{\theta}; q_{\mathrm{d}}, \pi\right) &\triangleq \mathbb{E}_{q_{\mathrm{d}}(y)\pi(x|y)}\left[\Delta_{f}\left(\frac{q_{\mathrm{d}}(x)\pi(y|x)}{q_{\mathrm{d}}(y)\pi(x|y)}, \frac{\phi_{\theta}(x)\pi(y|x)}{\phi_{\theta}(y)\pi(x|y)}\right)\right] - \mathbb{E}_{q_{\mathrm{d}}(x)\pi(y|x)}\left[f\left(\frac{q_{\mathrm{d}}(x)\pi(y|x)}{q_{\mathrm{d}}(y)\pi(x|y)}\right)\right] \\ &= \mathbb{E}_{q_{\mathrm{d}}(x)\pi(y|x)}\left[-f'\left(\rho_{\theta}(x, y)\right) + \rho_{\theta}(y, x)\, f'\left(\rho_{\theta}(y, x)\right) - f\left(\rho_{\theta}(y, x)\right)\right]. \end{aligned} \tag{4}
+$$
+
+generalizing the justification for GlobalGISO given in (Shah et al., 2023); see Theorem B.1.
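+As a numerical illustration of the $\alpha = 0$ case (our own toy construction, not from the cited works), the 0-CentNCE/GlobalGISO objective from Table 2 can be minimized for a 1D exponential family on $[0,1]$ with uniform $q_{\mathrm{n}}$ , where $\exp(\mathbb{E}_{q_{\mathrm{n}}}[\log(\phi_\theta/q_{\mathrm{n}})]) = \exp(\theta\,\mathbb{E}_{q_{\mathrm{n}}}[\psi])$ is available in closed form:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(2)
+
+# Exponential family phi_theta(x) = exp(theta * x) on X = [0, 1] with psi(x) = x,
+# q_n = Uniform[0, 1], so E_qn[psi] = 1/2 is available analytically.
+theta_star = 2.0
+u = rng.uniform(size=20_000)
+# inverse-CDF sampling from q_d(x) ∝ exp(theta_star * x) on [0, 1]
+x_d = np.log1p(u * np.expm1(theta_star)) / theta_star
+
+def giso(theta):
+    # 0-CentNCE objective from Table 2 (GlobalGISO):
+    #   E_qd[q_n / phi_theta] * exp(E_qn[log(phi_theta / q_n)]),
+    # with the second factor computed in closed form as exp(theta / 2)
+    return np.mean(np.exp(-theta * x_d)) * np.exp(theta * 0.5)
+
+thetas = np.linspace(0.0, 4.0, 161)
+theta_hat = thetas[np.argmin([giso(t) for t in thetas])]
+print(theta_hat)  # close to theta_star = 2.0
+```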
+
+Next, we provide a result connecting $f_{\alpha}$ -NCE and $\alpha$ -CentNCE estimators, under the assumption that we have an optimization oracle that finds the global minima of a given objective.
+
+Theorem 3.2 ( $f_{\alpha}$ -NCE and $\alpha$ -CentNCE estimators are equivalent). For a set $A \subset \Theta \times \mathbb{R}$ in the augmented parameter space, let $A|_{\Theta} \triangleq \{\theta : (\theta, \nu) \in A \text{ for some } \nu \in \mathbb{R}\}$ denote the subset corresponding to $\Theta$ . Then,
+
+$$
+\underset {\underline {{\theta}} = (\theta , \nu) \in \Theta \times \mathbb {R}} {\arg \min } \left. \mathcal {L} _ {f _ {\alpha}} ^ {\mathrm {n c e}} (\underline {{\theta}}; \hat {q} _ {\mathrm {d}}, \hat {q} _ {\mathrm {n}}) \right| _ {\Theta} = \underset {\theta \in \Theta} {\arg \min } \mathcal {L} _ {\alpha} ^ {\mathrm {c e n t}} (\theta ; \hat {q} _ {\mathrm {d}}, \hat {q} _ {\mathrm {n}}).
+$$
+
+Remark 3.2. We remark that, for $\alpha = 1$ , Riou-Durand & Chopin (2018) proposed to convert the MC-MLE objective into the $f_{1}$ -NCE objective, which they call the importance sampling (IS) objective, via the inverse of the 1-centering operation, which they call the Poisson transform (Barthelmé & Chopin, 2015). In this view, our $\alpha$ -centering can be understood as the inverse of a generalized Poisson transform. Via this equivalence, Riou-Durand & Chopin (2018) analyzed the asymptotic properties of MC-MLE by studying the $f_{1}$ -NCE objective. Similarly, one can analyze the statistical properties of GlobalGISO (with any valid choice of $q_{\mathrm{n}}$ beyond the uniform distribution) when $Z_0(\theta)$ is estimated with samples from $q_{\mathrm{n}}(x)$ by analyzing the $f_{0}$ -NCE objective.
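+The $\alpha = 1$ case of this equivalence can be checked numerically; the sketch below (our own toy setup) minimizes the empirical $f_1$ -NCE objective over the augmented $(\theta, c)$ and the 1-CentNCE objective over $\theta$ on the same samples, and the two $\theta$ -argmins agree:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(3)
+x_d = rng.normal(1.0, 1.0, size=5_000)   # data from q_d = N(1, 1)
+x_n = rng.normal(0.0, 2.0, size=5_000)   # noise from q_n = N(0, 2^2)
+
+def log_qn(x):
+    return -0.125 * x ** 2 - np.log(2.0 * np.sqrt(2.0 * np.pi))
+
+def log_phi(x, theta):
+    return -0.5 * (x - theta) ** 2         # unnormalized Gaussian model
+
+def cent1(theta):
+    # 1-CentNCE objective (Table 2): E_qd[log(q_n / phi)] + log E_qn[phi / q_n]
+    return (np.mean(log_qn(x_d) - log_phi(x_d, theta))
+            + np.log(np.mean(np.exp(log_phi(x_n, theta) - log_qn(x_n)))))
+
+def f1_nce(theta, c):
+    # f_1-NCE (IS) objective on the augmented model e^c * phi_theta
+    return (np.mean(log_qn(x_d) - c - log_phi(x_d, theta))
+            + np.mean(np.exp(c + log_phi(x_n, theta) - log_qn(x_n))))
+
+thetas = np.linspace(0.0, 2.0, 41)
+cs = np.linspace(-2.0, 0.0, 81)
+th_cent = thetas[np.argmin([cent1(t) for t in thetas])]
+vals = np.array([[f1_nce(t, c) for c in cs] for t in thetas])
+th_nce = thetas[np.unravel_index(vals.argmin(), vals.shape)[0]]
+print(th_cent, th_nce)  # the two theta-argmins coincide up to the grid
+```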
+
+# 3.2. Revisiting the Connection Between CondNCE and Score Matching
+
+Ceylan & Gutmann (2018) argued that for a continuous domain $\mathcal{X}$ , the original CondNCE objective is related to the score matching objective of Hyvarinen (2005), justifying the consistency of CondNCE. Here, we demonstrate that this interpretation can be misleading in a realistic setting with finite samples. To revisit this connection, we further restrict the type of channels to $\pi_{\epsilon}(y|x)$ parameterized by a parameter $\epsilon > 0$ , such that $y \sim \pi_{\epsilon}(y|x)$ is equivalent to $y = x + \epsilon v$ for some $v \sim q_{s}(\cdot)$ with zero mean and identity covariance, i.e., $\mathbb{E}_{q_s}[v] = 0$ and $\mathbb{E}_{q_s}[vv^{\intercal}] = I_d$ . With this simplification, we denote the objective function as
+
+$$
+\mathcal{L}_{f}^{\mathrm{cond}}\left(\phi_{\theta}; q_{\mathrm{d}}, q_{\mathrm{s}}; \epsilon\right) \triangleq \mathbb{E}_{q_{\mathrm{d}}(x) q_{\mathrm{s}}(v)}\left[-f'\left(\rho_{\theta}(x, y)\right) + \rho_{\theta}(y, x)\, f'\left(\rho_{\theta}(y, x)\right) - f\left(\rho_{\theta}(y, x)\right)\right],
+$$
+
+where $y \triangleq x + \epsilon v$ . Then, we show that the $f$ -CondNCE objective behaves as the score matching objective (Hyvärinen, 2005) in the limit of $\epsilon \to 0$ . Formally:
+
+Theorem 3.3 (Asymptotic behavior of population $f$ -CondNCE for small $\epsilon$ ). The population $f$ -CondNCE objective can be written as
+
+$$
+\mathcal{L}_{f}^{\mathrm{cond}}\left(\phi_{\theta}; q_{\mathrm{d}}, q_{\mathrm{s}}; \epsilon\right) = -f(1) + f''(1)\, \mathcal{L}^{\mathrm{sm}}\left(\phi_{\theta}; q_{\mathrm{d}}\right) \epsilon^{2} + o\left(\epsilon^{2}\right),
+$$
+
+where
+
+$$
+\mathcal {L} ^ {\mathrm {s m}} \left(\phi_ {\theta}; q _ {\mathrm {d}}\right) \triangleq \mathbb {E} _ {q _ {\mathrm {d}} (x)} \left[ \operatorname {t r} \left(\nabla_ {x} ^ {2} \log \phi_ {\theta} (x)\right) + \frac {1}{2} \| \nabla_ {x} \log \phi_ {\theta} (x) \| ^ {2} \right]
+$$
+
+denotes the (population) score matching (SM) objective (Hyvärinen, 2005).
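+For intuition, a small sketch (ours): for the unnormalized Gaussian model $\phi_\theta(x) = e^{-(x-\theta)^2/2}$ we have $\nabla_x\log\phi_\theta(x) = \theta - x$ and $\nabla_x^2\log\phi_\theta(x) = -1$ , so the empirical SM objective is $-1 + \frac{1}{2}\hat{\mathbb{E}}[(x-\theta)^2]$ , minimized at the sample mean:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(4)
+x = rng.normal(1.0, 1.0, size=50_000)   # samples from q_d = N(1, 1)
+
+def sm_objective(theta):
+    # L^sm for phi_theta(x) = exp(-(x - theta)^2 / 2):
+    #   grad_x log phi = theta - x  and  hessian = -1, hence
+    #   E[tr(hessian) + 0.5 * ||grad||^2] = -1 + 0.5 * E[(x - theta)^2]
+    return np.mean(-1.0 + 0.5 * (theta - x) ** 2)
+
+thetas = np.linspace(-1.0, 3.0, 401)
+theta_hat = thetas[np.argmin([sm_objective(t) for t in thetas])]
+print(theta_hat, x.mean())  # the argmin sits at the sample mean
+```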
+
+This statement generalizes the result in (Ceylan & Gutmann, 2018) for $f_{\log}$ -CondNCE to $f$ -CondNCE for any $f$ . Below, we explain why this statement may be misleading, as the $f$ -CondNCE estimator with $\epsilon \rightarrow 0$ does not behave like the SM estimator. To understand the behavior correctly, we need to consider the empirical $f$ -CondNCE objective function that defines the empirical estimator, instead of the population objective.
+
+Theorem 3.4 (Asymptotic behavior of empirical $f$ -CondNCE for small $\epsilon$ ). The empirical $f$ -CondNCE objective can be written as
+
+$$
+\begin{aligned} \mathcal{L}_{f}^{\mathrm{cond}}\left(\phi_{\theta}; \hat{q}_{\mathrm{d}}, \hat{q}_{\mathrm{s}}\right) = {} & -f(1) + 2 f''(1)\, \mathbb{E}_{\hat{q}_{\mathrm{d}}(x)\hat{q}_{\mathrm{s}}(v)}\left[\nabla_{x}\log\phi_{\theta}(x)^{\intercal} v\right] \epsilon \\ & + f''(1)\, \mathcal{L}^{\mathrm{ssm}}\left(\phi_{\theta}; \hat{q}_{\mathrm{d}}, \hat{q}_{\mathrm{s}}\right) \epsilon^{2} + o\left(\epsilon^{2}\right). \end{aligned}
+$$
+
+Here, we define the empirical sliced SM (SSM) objective (Song et al., 2020)
+
+$$
+\mathcal{L}^{\mathrm{ssm}}\left(\phi_{\theta}; \hat{q}_{\mathrm{d}}, \hat{q}_{\mathrm{s}}\right) \triangleq \mathbb{E}_{\hat{q}_{\mathrm{d}}(x)\hat{q}_{\mathrm{s}}(v)}\Big[v^{\intercal}\nabla_{x}^{2}\log\phi_{\theta}(x)\, v + \frac{1}{2}\big(v^{\intercal}\nabla_{x}\log\phi_{\theta}(x)\big)^{2}\Big].
+$$
+
+Remark 3.3. Since we assume that $q_{\mathrm{s}}(v)$ has zero mean, Theorem 3.3 readily follows as a corollary of Theorem 3.4, as the $O(\epsilon)$ term will converge to 0 in the population limit of $q_{\mathrm{s}}$ . In a finite-sample regime, however, the dominating term of the f-CondNCE objective becomes the $O(\epsilon)$ term, i.e., as $\epsilon \to 0$ , we have
+
+$$
+\frac{1}{\epsilon}\,\frac{\hat{\mathcal{L}}_{f}^{\mathrm{cond}}\left(\phi_{\theta}; \hat{q}_{\mathrm{d}}, \hat{q}_{\mathrm{s}}\right) + f(1)}{2 f''(1)} \rightarrow \mathbb{E}_{\hat{q}_{\mathrm{d}}(x)}\left[\nabla_{x}\log\phi_{\theta}(x)\right]^{\intercal}\, \mathbb{E}_{\hat{q}_{\mathrm{s}}(v)}[v].
+$$
+
+Thus, the $f$ -CondNCE objective is dominated by this statistical noise term when $\epsilon \ll 1$ for a fixed number of samples of $v \sim q_{\mathrm{s}}$ , and hence too small an $\epsilon$ should be avoided, in stark contrast to the justification proposed in (Ceylan & Gutmann, 2018). We revisit this degrading behavior after the finite-sample guarantee of $f$ -CondNCE in Remark 4.3.
+
+It is worth noting, however, that $\mathbb{E}_{\hat{q}_{\mathrm{s}}}[v]$ becomes more concentrated around 0 as the number of slicing vectors increases. Therefore, one could choose $\epsilon$ carefully as a function of the number of slicing vectors and distribution-dependent quantities, so that $\frac{1}{\epsilon}\mathbb{E}_{\hat{q}_{\mathrm{d}}(x)\hat{q}_{\mathrm{s}}(v)}[\nabla_x\log \phi_\theta (x)^\intercal v]$ still vanishes as the number of slicing vectors increases. In this way, the $f$ -CondNCE estimator might still be consistent with small $\epsilon$ , emulating the behavior of SSM.
+
+Simulation. To demonstrate this behavior, we considered a simple synthetic setup, where the data generating distribution is $\mathcal{N}(\mu, 1)$ with $\mu = 1$ . With a conditional noise distribution $\pi(y|x) = \mathcal{N}(y|x, \epsilon^2 I)$ with varying $\epsilon$ , we plot the derivatives of the empirical objective of the original CondNCE with varying $K \in \{1, 4, 16, 64\}$ , where the sample size is $N = 10^4$ . As shown in Figure 1, the empirical derivatives characterize the true mean fairly closely when $\epsilon \geq 10^{-2}$ , or when $\epsilon$ is small and $K$ is large. This simple 1D Gaussian example clearly shows the undesirable behavior of the CondNCE objective when $\epsilon$ is small. A more in-depth study of the effect of $\epsilon$ and $K$ for high-dimensional problems is left as future work.
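+A miniature of this experiment (our own sketch; the grids, seeds, and sample sizes are our choices, not the paper's exact configuration) minimizes the empirical $f_{\log}$ -CondNCE loss, written in its logistic form $2\log(1 + \phi_\theta(y)/\phi_\theta(x))$ for a symmetric channel, at a moderate and a vanishing $\epsilon$ :
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(5)
+mu, n = 1.0, 10_000
+x = rng.normal(mu, 1.0, size=n)          # data from N(mu, 1)
+
+def log_phi(x, theta):
+    return -0.5 * (x - theta) ** 2        # unnormalized Gaussian model
+
+def cond_nce(theta, eps, v):
+    # empirical f_log-CondNCE for the symmetric channel y = x + eps * v,
+    # with per-pair loss 2 * log(1 + phi(y) / phi(x))
+    y = x[:, None] + eps * v
+    s = log_phi(y, theta) - log_phi(x[:, None], theta)
+    return np.mean(2.0 * np.logaddexp(0.0, s))
+
+thetas = np.linspace(0.0, 2.0, 81)
+
+def argmin_theta(eps, K, seed):
+    v = np.random.default_rng(seed).normal(size=(n, K))  # slicing vectors
+    return thetas[np.argmin([cond_nce(t, eps, v) for t in thetas])]
+
+theta_mod = argmin_theta(0.3, 4, 0)    # moderate eps: minimizer lands near mu
+theta_tiny = argmin_theta(1e-6, 1, 0)  # vanishing eps with K = 1: the O(eps)
+# noise term is linear in theta and drags the minimizer to a grid boundary
+print(theta_mod, theta_tiny)
+```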
+
+# 4. Finite-Sample Analysis
+
+In this section, we provide finite-sample guarantees of regularized versions of the aforementioned NCE estimators, specifically assuming an exponential family distribution model $\phi_{\theta}(x) = \exp (\langle \theta ,\psi (x)\rangle)$ . Here, $\theta \in \mathbb{R}^p$ denotes the natural parameter, $\psi \colon \mathcal{X}\to \mathbb{R}^p$ denotes the natural statistics, and $p$ denotes the number of parameters. In what follows, we assume both well-specifiedness and identifiability, i.e., there exists a unique $\theta^{\star}\in \Theta$ such that $\phi_{\theta^{\star}}(\cdot)\propto q_{\mathrm{d}}(\cdot)$ .
+
+Below, we establish the parametric error rate $O(n^{-1/2})$ of convergence for the regularized NCE estimators. The proofs adapt the analysis in (Shah et al., 2023) for GlobalGISO, which in turn built upon (Negahban et al., 2012; Vuffray et al., 2016; 2021; Shah et al., 2021b). We note in passing that the non-regularized NCE estimators can also be analyzed, but we can only prove a suboptimal rate of $O(n^{-1/4})$ by following the existing analysis in (Shah et al., 2021b).
+
+Following (Shah et al., 2023), we are specifically interested in the case where the statistics are bounded and so is the parameter space. We note that the bounded statistics may not be too restrictive, as in many practical scenarios the domain
+
+
+Figure 1. Derivatives of the empirical CondNCE objective with varying $\epsilon \in \{10^{-10},\dots ,10^0\}$ and $K\in \{1,4,16,64\}$ for 1D Gaussian data with true mean $\mu = 1.0$ (vertical dashed red lines) and a conditional noise distribution $\pi (y|x) = \mathcal{N}(y|x,\epsilon^2 I)$ .
+
+$\mathcal{X}$ may naturally be truncated during data acquisition (Liu et al., 2022).
+
+Assumption 4.1 (Bounded maximum norm of $\psi$ ). $\sup_{x\in \mathcal{X}}\| \psi (x)\|_{\infty}\leq \psi_{\max}$ for some $\psi_{\max} > 0$ .
+
+Assumption 4.2 (Bounded parameter space). For some constant $r > 0$ , $\sup_{\theta \in \Theta} \mathcal{R}(\theta) \leq r$ .
+
+We note that the gradient and Hessian of the $f$ -NCE objective can be written as
+
+$$
+\nabla \hat {\mathcal {L}} _ {f} ^ {\mathrm {n c e}} (\theta) = \frac {1}{\nu} \mathbb {E} _ {\hat {q} _ {\mathsf {d}}} [ \psi \xi_ {\mathsf {n c e}, f, \mathsf {d}} ^ {(1)} (\rho_ {\theta}) ] + \mathbb {E} _ {\hat {q} _ {\mathsf {n}}} [ \psi \xi_ {\mathsf {n c e}, f, \mathsf {n}} ^ {(1)} (\rho_ {\theta}) ],
+$$
+
+$$
+\nabla^ {2} \hat {\mathcal {L}} _ {f} ^ {\mathrm {n c e}} (\theta) = \frac {1}{\nu} \mathbb {E} _ {\hat {q} _ {\mathbf {d}}} [ \psi \psi^ {\mathsf {T}} \xi_ {\mathrm {n c e}, f, \mathbf {d}} ^ {(2)} (\rho_ {\theta}) ] + \mathbb {E} _ {\hat {q} _ {\mathbf {n}}} [ \psi \psi^ {\mathsf {T}} \xi_ {\mathrm {n c e}, f, \mathbf {n}} ^ {(2)} (\rho_ {\theta}) ],
+$$
+
+where the functions $\xi_{\mathrm{nce},f,\mathbf{r}}^{(i)}(\rho)$ for $i\in \{1,2\}$ and $\mathbf{r}\in$ $\{\mathsf{d},\mathsf{n}\}$ are defined in the leftmost column of Table 3; see Lemma B.3.
+
+Our analysis relies on the boundedness of the model density ratio $\rho_{\theta}\in (\rho_{\mathrm{min}},\rho_{\mathrm{max}})$ . In each result, we clarify the definition of the worst-case density ratios $(\rho_{\mathrm{min}},\rho_{\mathrm{max}})$ . These ratios affect the convergence rate through the following quantities:
+
+$$
+\begin{aligned} b_{\mathrm{nce}, f, \mathrm{r}}^{(2)} &\triangleq \inf_{\rho \in (\rho_{\min}, \rho_{\max})} \big|\xi_{\mathrm{nce}, f, \mathrm{r}}^{(2)}(\rho)\big| \quad \text{and} \\ B_{\mathrm{nce}, f, \mathrm{r}}^{(i)} &\triangleq \sup_{\rho \in (\rho_{\min}, \rho_{\max})} \big|\xi_{\mathrm{nce}, f, \mathrm{r}}^{(i)}(\rho)\big| \quad \text{for } i \in \{1, 2\}, \end{aligned} \tag{5}
+$$
+
+where $\mathrm{r} \in \{\mathrm{d}, \mathrm{n}\}$ . We remark that these quantities differ for each estimator. For the canonical choices of $f(\rho)$ , i.e., log and asymmetric power, these quantities are explicitly given in Table 3.
+
+Let $\mathcal{R}\colon \Theta \to \mathbb{R}_{\geq 0}$ be a norm over $\Theta$ , and let $\mathcal{R}^{\ast}\colon \Theta^{\ast} \to \mathbb{R}_{\geq 0}$ be its dual norm. Define
+
+$$
+\gamma_ {1; 2} \triangleq \sup _ {\theta \in 4 \Theta \backslash \{0 \}} \frac {\| \theta \| _ {1}}{\| \theta \| _ {2}}, \tag {6}
+$$
+
+$$
+\gamma_ {\mathcal {R} ^ {*}; \infty} \triangleq \sup _ {\theta \in \mathbb {R} ^ {p} \backslash \{0 \}} \frac {\mathcal {R} ^ {*} (\theta)}{\| \theta \| _ {\max }}, \tag {7}
+$$
+
+$$
+\gamma_ {\mathcal {R}; 2} \triangleq \sup _ {\theta \in \Theta \backslash \{0 \}} \frac {\mathcal {R} (\theta)}{\| \theta \| _ {2}}. \tag {8}
+$$
+
+Here $4\Theta \triangleq \{4\theta \colon \theta \in \Theta \}$ . These quantities capture the geometry of the norm $\mathcal{R}(\cdot)$ imposed on the parameter space $\Theta$ , and appear in the convergence rates.
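+
+For intuition, these constants can be computed for a concrete choice: with $\mathcal{R} = \|\cdot\|_1$ (so $\mathcal{R}^{*} = \|\cdot\|_{\infty}$ ) and $\Theta = \mathbb{R}^p$ , one gets $\gamma_{1;2} = \gamma_{\mathcal{R};2} = \sqrt{p}$ and $\gamma_{\mathcal{R}^*;\infty} = 1$ . A small numpy sketch confirming this (the random sampling is only a heuristic probe of the supremum):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+p = 8
+thetas = rng.standard_normal((100000, p))
+
+# R = ell_1, whose dual norm R* is ell_inf.  Over Theta = R^p the suprema have
+# closed forms: gamma_{1;2} = gamma_{R;2} = sqrt(p) and gamma_{R*;inf} = 1.
+ratio_12 = np.linalg.norm(thetas, 1, axis=1) / np.linalg.norm(thetas, 2, axis=1)
+ratio_dual = np.linalg.norm(thetas, np.inf, axis=1) / np.abs(thetas).max(axis=1)
+
+assert ratio_12.max() <= np.sqrt(p) + 1e-9   # ||theta||_1 <= sqrt(p) ||theta||_2 always
+assert np.allclose(ratio_dual, 1.0)          # ell_inf coincides with the max-norm
+
+# The supremum sqrt(p) is attained at the all-ones vector.
+ones = np.ones(p)
+assert np.isclose(np.linalg.norm(ones, 1) / np.linalg.norm(ones, 2), np.sqrt(p))
+```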
+
+Theorem 4.1 ( $f$ -NCE: finite-sample guarantee). Pick a strictly convex function $f\colon \mathbb{R}_+ \to \mathbb{R}$ . Define
+
+$$
+\left(\rho_ {\min }, \rho_ {\max }\right) \triangleq \Big (\inf _ {x \in \mathcal {X}, \underline {{\theta}} \in \Theta \times \mathbb {R}} \rho_ {\underline {{\theta}}} (x), \sup _ {x \in \mathcal {X}, \underline {{\theta}} \in \Theta \times \mathbb {R}} \rho_ {\underline {{\theta}}} (x) \Big)
+$$
+
+and define the quantities in Eq. (5) accordingly. For $\mathrm{r} \in \{\mathsf{d}, \mathsf{n}\}$ , define
+
+$$
+\lambda_ {\min, \mathrm {r}} ^ {\mathrm {n c e}} \triangleq \lambda_ {\min} \left(\mathbb {E} _ {q _ {\mathrm {r}}} [ \psi \psi^ {\mathsf {T}} ]\right).
+$$
+
+Let $\hat{\theta}_{f,n_{\mathrm{d}},n_{\mathrm{n}}}^{\mathrm{nce},\mathcal{R}}$ be such that
+
+$$
+\hat {\theta} _ {f, n _ {\mathrm {d}}, n _ {\mathrm {n}}} ^ {\mathrm {n c e}, \mathcal {R}} \in \arg \min _ {\theta \in \Theta} \left\{\mathcal {L} _ {f} ^ {\mathrm {n c e}} (\theta ; \hat {q} _ {\mathrm {d}}, \hat {q} _ {\mathrm {n}}) + \lambda_ {n _ {\mathrm {d}}, n _ {\mathrm {n}}} \mathcal {R} (\theta) \right\}
+$$
+
+for some $\lambda_{n_d,n_n} > 0$ . Then, for any $\Delta > 0$ and $\delta \in (0,1)$ , there exists a choice of $\lambda_{n_d,n_n}$ such that $\|\hat{\theta}_{f,n_d,n_n}^{\mathrm{nce},\mathcal{R}} - \theta^\star\|_2 \leq \Delta$ with probability $\geq 1 - \delta$ , provided that for each $r \in \{\mathsf{d},\mathsf{n}\}$ ,
+
+$$
+\begin{array}{l} n _ {\mathrm {r}} = \Omega \Bigg (\max \bigg \{\frac {(B _ {\mathrm {n c e}, f, \mathrm {r}} ^ {(1)}) ^ {2} \gamma_ {\mathcal {R}; 2} ^ {2} \gamma_ {\mathcal {R} ^ {*}; \infty} ^ {2} \psi_ {\max} ^ {2}}{\Delta^ {2} (\nu^ {- 1} b _ {\mathrm {n c e}, f, \mathrm {d}} ^ {(2)} \lambda_ {\min, \mathrm {d}} ^ {\mathrm {n c e}} + b _ {\mathrm {n c e}, f, \mathrm {n}} ^ {(2)} \lambda_ {\min, \mathrm {n}} ^ {\mathrm {n c e}}) ^ {2}}, \\ \qquad \frac {\gamma_ {1; 2} ^ {4} \psi_ {\max} ^ {4}}{(\lambda_ {\min, \mathrm {r}} ^ {\mathrm {n c e}}) ^ {2}} \bigg \} \log \frac {p ^ {2}}{\delta} \Bigg). \\ \end{array}
+$$
+
+Remark 4.1. To the best of our knowledge, this result is the first finite-sample convergence rate for $f$ -NCE estimators.
+
+We state the finite-sample guarantee with a minimal set of assumptions, namely the bounded-statistics and bounded-parameter-space assumptions. While achieving the parametric rate of convergence $O(n^{-1/2})$ is appealing, non-vacuous rates require all the quantities in the sample complexity expression to lie in a range bounded away from 0 and $\infty$ . More concretely, if we further assume that the dual norm of the statistic is bounded, i.e., $\sup_{x \in \mathcal{X}} \mathcal{R}^*(\psi(x)) \leq \tau$ for some constant $\tau > 0$ , it is easy to check that the worst-case density ratios are bounded as $(\rho_{\min}, \rho_{\max}) \subset (e^{-r\tau}, e^{r\tau})$ for $f$ -NCE, where $r$ is the diameter of $\Theta$ measured in the norm $\mathcal{R}(\cdot)$ ; see Assumption 4.2. We note that the worst-case density ratios affect the quantities in Eq. (5) polynomially for the canonical examples in Table 3, which in turn affect the sample complexity polynomially. Hence, the leading constant grows exponentially in $r$ and $d$ , similarly to (Shah et al., 2021b; 2023). This remark remains valid for the following two statements for $\alpha$ -CentNCE and $f$ -CondNCE, as the worst-case density ratio bounds depend similarly on $r$ and $\tau$ . We also remark that minimum eigenvalue conditions are typically assumed in existing finite-sample analyses (Vuffray et al., 2016; Shah et al., 2021b; 2023), while (Shah et al., 2021a) establishes an explicit lower bound on the minimum eigenvalue for node-wise-sparse Gaussian MRFs.
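+
+As a concrete illustration of the regularized estimator in Theorem 4.1, the following sketch fits a one-parameter bounded exponential family ( $\mathcal{X} = [-1,1]$ , $\psi(x) = x$ , uniform noise $q_{\mathrm{n}}$ , $\nu = 1$ , exact normalizer) with the $f_{\log}$ objective and an $\ell_1$ penalty. The setup, sample sizes, and grid search are our own illustrative choices, not the paper's experiments:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(1)
+
+# Toy bounded exponential family on X = [-1, 1]: psi(x) = x, phi_theta ∝ exp(theta*x),
+# noise q_n = Uniform[-1, 1], nu = 1.
+theta_star = 1.0
+Z = lambda t: 2.0 * np.sinh(t) / t if t != 0 else 2.0   # normalizer of exp(t*x) on [-1, 1]
+
+# Rejection-sample n points from q_d(x) ∝ exp(theta_star * x).
+n = 20000
+xs = rng.uniform(-1, 1, size=4 * n)
+keep = rng.uniform(size=4 * n) < np.exp(theta_star * (xs - 1.0))
+x_d = xs[keep][:n]
+x_n = rng.uniform(-1, 1, size=n)
+
+def nce_log_loss(theta, lam=0.0):
+    # rho_theta = p_theta / q_n with the exact normalizer (nu = 1, q_n density 1/2).
+    rho_d = 2.0 * np.exp(theta * x_d) / Z(theta)
+    rho_n = 2.0 * np.exp(theta * x_n) / Z(theta)
+    # f_log gives the familiar logistic loss, plus an R(theta) = |theta| penalty.
+    return (-np.mean(np.log(rho_d / (1 + rho_d)))
+            + np.mean(np.log(1 + rho_n)) + lam * abs(theta))
+
+# Grid search stands in for a proper convex solver in this 1-D sketch.
+grid = np.linspace(0.0, 2.0, 401)
+theta_hat = grid[np.argmin([nce_log_loss(t, lam=1e-3) for t in grid])]
+assert abs(theta_hat - theta_star) < 0.3
+```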
+
+Theorem 4.2 ( $\alpha$ -CentNCE: finite-sample guarantee). Pick $\alpha \in \mathbb{R}$ . Define
+
+$$
+\left(\rho_ {\min }, \rho_ {\max }\right) \triangleq \Big (\inf _ {x \in \mathcal {X}, \theta \in \Theta} \tilde {\rho} _ {\theta ; \alpha} (x), \sup _ {x \in \mathcal {X}, \theta \in \Theta} \tilde {\rho} _ {\theta ; \alpha} (x) \Big)
+$$
+
+and define the quantities in Eq. (5) for $f = f_{\alpha}$ accordingly.
+
+Let $\tilde{\rho}_{\theta^{\star};\alpha}^{\alpha}(x)\triangleq \frac{\left(\frac{q_{\mathrm{d}}(x)}{q_{\mathrm{n}}(x)}\right)^{\alpha}}{\mathbb{E}_{q_{\mathrm{n}}}\left[\left(\frac{q_{\mathrm{d}}}{q_{\mathrm{n}}}\right)^{\alpha}\right]}$ , and let
+
+$$
+\lambda_ {\min , \mathbf {d}} ^ {\text {c e n t}} \triangleq \lambda_ {\min } \left(\mathbb {E} _ {q _ {\mathrm {d}}} \left[ \left(\psi - \mathbb {E} _ {q _ {\mathrm {n}}} \left[ \psi \tilde {\rho} _ {\theta^ {\star}; \alpha} ^ {\alpha} \right]\right) \left(\psi - \mathbb {E} _ {q _ {\mathrm {n}}} \left[ \psi \tilde {\rho} _ {\theta^ {\star}; \alpha} ^ {\alpha} \right]\right) ^ {\intercal} \right]\right),
+$$
+
+$$
+\lambda_ {\min , \mathfrak {n}} ^ {\text {c e n t}} \triangleq \lambda_ {\min } \left(\mathbb {E} _ {q _ {\mathfrak {n}}} [ \psi \psi^ {\intercal} \tilde {\rho} _ {\theta^ {\star}; \alpha} ^ {\alpha} ] - \mathbb {E} _ {q _ {\mathfrak {n}}} [ \psi \tilde {\rho} _ {\theta^ {\star}; \alpha} ^ {\alpha} ] \mathbb {E} _ {q _ {\mathfrak {n}}} [ \psi \tilde {\rho} _ {\theta^ {\star}; \alpha} ^ {\alpha} ] ^ {\intercal}\right).
+$$
+
+Let $\hat{\theta}_{\alpha ,n_{\mathrm{d}}}^{\mathrm{cent},\mathcal{R}}$ be such that
+
+$$
+\hat {\theta} _ {\alpha , n _ {\mathrm {d}}} ^ {\text {c e n t}, \mathcal {R}} \in \arg \min _ {\theta \in \Theta} \left\{\mathcal {L} _ {\alpha} ^ {\text {c e n t}} (\theta ; \hat {q} _ {\mathrm {d}}, q _ {\mathrm {n}}) + \lambda_ {n _ {\mathrm {d}}} \mathcal {R} (\theta) \right\}
+$$
+
+for some $\lambda_{n_{\mathrm{d}}} > 0$ . Define $\psi_{\max,\alpha} \triangleq \psi_{\max} + \|\mathbb{E}_{q_{\mathrm{n}}}[\psi \tilde{\rho}_{\theta^{\star};\alpha}^{\alpha}]\|_{\max}$ . Then, for any $\Delta > 0$ and $\delta \in (0,1)$ , there exists a choice of $\lambda_{n_{\mathrm{d}}}$ such that $\|\hat{\theta}_{\alpha,n_{\mathrm{d}}}^{\mathrm{cent},\mathcal{R}} - \theta^{\star}\|_2 \leq \Delta$ with probability $\geq 1 - \delta$ , provided that
+
+$$
+\begin{array}{l} n _ {\mathsf {d}} = \Omega \Bigg (\max \bigg \{\frac {(B _ {\mathrm {n c e}, f _ {\alpha}, \mathsf {d}} ^ {(1)}) ^ {2} \gamma_ {\mathcal {R}; 2} ^ {2} \gamma_ {\mathcal {R} ^ {*}; \infty} ^ {2} \psi_ {\max, \alpha} ^ {2}}{\Delta^ {2} (b _ {\mathrm {n c e}, f _ {\alpha}, \mathsf {d}} ^ {(2)}) ^ {2} \{(1 - \alpha) \lambda_ {\min, \mathsf {d}} ^ {\mathrm {c e n t}} + \alpha \lambda_ {\min, \mathsf {n}} ^ {\mathrm {c e n t}} \} ^ {2}}, \\ \qquad \frac {\gamma_ {1; 2} ^ {4} \psi_ {\max, \alpha} ^ {4}}{(\lambda_ {\min, \mathsf {d}} ^ {\mathrm {c e n t}}) ^ {2}} \bigg \} \log \frac {p ^ {2}}{\delta} \Bigg). \\ \end{array}
+$$
+
+Table 3. Definitions of $\xi_{\mathrm{nce},f,\mathbf{r}}^{(i)}(\rho)$ for $i\in \{1,2\}$ and $\mathbf{r}\in \{\mathsf{d},\mathsf{n}\}$ for example generator functions $f$
+
+| Definitions | Log | Asymmetric power |
+| --- | --- | --- |
+| $f(\rho)$ | $f_{\log}(\rho)$ | $f_{\alpha}(\rho)$ |
+| $\xi_{\mathrm{nce},f,\mathsf{d}}^{(1)}(\rho) \triangleq -\rho f''(\rho)$ | $-\frac{1}{\rho+1}$ | $-\rho^{\alpha-1}$ |
+| $\xi_{\mathrm{nce},f,\mathsf{n}}^{(1)}(\rho) \triangleq \rho^2 f''(\rho)$ | $\frac{\rho}{\rho+1}$ | $\rho^{\alpha}$ |
+| $\xi_{\mathrm{nce},f,\mathsf{d}}^{(2)}(\rho) \triangleq \rho g_f(\rho)$ | $\frac{\rho}{(\rho+1)^2}$ | $(1-\alpha)\rho^{\alpha-1}$ |
+| $\xi_{\mathrm{nce},f,\mathsf{n}}^{(2)}(\rho) \triangleq \rho^2(f''(\rho) - g_f(\rho))$ | $\frac{\rho}{(\rho+1)^2}$ | $\alpha\rho^{\alpha}$ |
+| $(B_{\mathrm{nce},f,\mathsf{d}}^{(1)}, B_{\mathrm{nce},f,\mathsf{n}}^{(1)})$ | $(1,1)$ | $(\rho_{\min}^{\alpha-1}, \rho_{\max}^{\alpha})$ |
+| $(B_{\mathrm{nce},f,\mathsf{d}}^{(2)}, B_{\mathrm{nce},f,\mathsf{n}}^{(2)})$ | $(1,1)$ | $(|1-\alpha|\rho_{\min}^{\alpha-1}, |\alpha|\rho_{\max}^{\alpha})$ |
+| $(b_{\mathrm{nce},f,\mathsf{d}}^{(2)}, b_{\mathrm{nce},f,\mathsf{n}}^{(2)})$ | $(\kappa,\kappa)$, where $\kappa \triangleq \frac{\rho_{\min}}{(\rho_{\min}+1)^2} \wedge \frac{\rho_{\max}}{(\rho_{\max}+1)^2}$ | $(|1-\alpha|\rho_{\max}^{\alpha-1}, |\alpha|\rho_{\min}^{\alpha})$ |
+
+Remark 4.2 (Special cases). For $\alpha = 0$ , this result generalizes the finite-sample analysis of GlobalGISO of (Shah et al., 2023) beyond the case where $q_{\mathrm{n}}$ is the uniform distribution. For $\alpha = 1$ , we establish the convergence rate of the MLE, which we believe to be the first result of this kind.
+
+For the CondNCE estimator, we consider $K = 1$ , i.e., we have $\{(x_i,y_i)\}_{i = 1}^{n_{\mathrm{d}}}\sim q_{\mathrm{d}}(x)\pi (y|x)$ for simplicity.
+
+Theorem 4.3 (f-CondNCE: finite-sample guarantee). Pick a strictly convex function $f\colon \mathbb{R}_+ \to \mathbb{R}$ . Define
+
+$$
+\rho_ {\min } \triangleq \inf _ {(x, y) \in \operatorname {s u p p} (q _ {d} (x) \pi (y | x)), \theta \in \Theta} \rho_ {\theta} (x, y),
+$$
+
+$$
+\rho_{\max}\triangleq \sup_{(x,y)\in \operatorname {supp}(q_{\mathsf{d}}(x)\pi (y|x)),\theta \in \Theta}\rho_{\theta}(x,y),
+$$
+
+and define the quantities in Eq. (5) accordingly. Let
+
+$$
+\lambda_ {\min , \mathrm {d}} ^ {\text {c o n d}} \triangleq \lambda_ {\min } \left(\mathbb {E} _ {q _ {\mathrm {d}} (x) \pi (y | x)} \left[ (\psi (x) - \psi (y)) (\psi (x) - \psi (y)) ^ {\top} \right]\right).
+$$
+
+Let $\hat{\theta}_{f,n_{\mathrm{d}}}^{\mathrm{cond},\mathcal{R}}$ be such that
+
+$$
+\hat {\theta} _ {f, n _ {\mathrm {d}}} ^ {\text {c o n d}, \mathcal {R}} \in \arg \min _ {\theta \in \Theta} \left\{\mathcal {L} _ {f} ^ {\text {c o n d}} (\theta ; \hat {q} _ {\mathrm {d}}, \hat {\pi}) + \lambda_ {n _ {\mathrm {d}}} \mathcal {R} (\theta) \right\}
+$$
+
+for some $\lambda_{n_{\mathrm{d}}} > 0$ . Then, for any $\Delta > 0$ and $\delta \in (0,1)$ , there exists a choice of $\lambda_{n_{\mathrm{d}}}$ such that $\| \hat{\theta}_{f,n_{\mathrm{d}}}^{\mathrm{cond},\mathcal{R}} - \theta^{\star} \|_2 \leq \Delta$ with probability $\geq 1 - \delta$ , provided that
+
+$$
+\begin{array}{l} n _ {\mathrm {d}} = \Omega \Bigg (\max \bigg \{\frac {(B _ {\mathrm {c o n d}, f, \mathrm {d}} ^ {(1)} + B _ {\mathrm {c o n d}, f, \mathrm {n}} ^ {(1)}) ^ {2} \gamma_ {\mathcal {R}; 2} ^ {2} \gamma_ {\mathcal {R} ^ {*}; \infty} ^ {2} \psi_ {\max} ^ {2}}{\Delta^ {2} (b _ {\mathrm {c o n d}, f, \mathrm {d}} ^ {(2)} + b _ {\mathrm {c o n d}, f, \mathrm {n}} ^ {(2)}) ^ {2} (\lambda_ {\min} ^ {\mathrm {c o n d}}) ^ {2}}, \\ \qquad \frac {\gamma_ {1; 2} ^ {4} \psi_ {\max} ^ {4}}{(\lambda_ {\min} ^ {\mathrm {c o n d}}) ^ {2}} \bigg \} \log \frac {p ^ {2}}{\delta} \Bigg). \\ \end{array}
+$$
+
+Here, $b_{\mathrm{cond},f,\mathrm{r}}^{(2)}$ and $B_{\mathrm{cond},f,\mathrm{r}}^{(i)}$ are defined similar to Eq. (5), where the infimum and supremum are taken over $\left(\frac{\rho_{\min}}{\rho_{\max}}, \frac{\rho_{\max}}{\rho_{\min}}\right)$ in place of $(\rho_{\min}, \rho_{\max})$ .
+
+Remark 4.3 (Behavior of $f$ -CondNCE in a small- $\epsilon$ regime). As alluded to in Sec. 3.2, the undesirable behavior of $f$ -CondNCE with small $\epsilon$ can also be seen from the sample complexity, since the minimum eigenvalue $\lambda_{\min,\mathrm{d}}^{\mathrm{cond}} \approx \epsilon^2 \lambda_{\min}(\mathbb{E}_{q_{\mathrm{d}}(x)}[\nabla_x \psi(x) \nabla_x \psi(x)^{\intercal}]) \to 0$ as $\epsilon \to 0$ . In Theorem C.3 in the Appendix, we establish that the asymptotic covariance of the estimator is $\check{\mathcal{V}}_f^{\mathrm{cond}} \triangleq \check{\mathcal{L}}_f^{-1} \check{\mathcal{C}}_f \check{\mathcal{L}}_f^{-1}$ , where
+
+$$
+\check {\mathcal {L}} _ {f} \triangleq \mathbb {E} _ {q _ {\mathrm {d}, \pi} (x, y)} \left[ \rho_ {\theta^ {\star}} ^ {2} f ^ {\prime \prime} \left(\rho_ {\theta^ {\star}}\right) (\psi (x) - \psi (y)) (\psi (x) - \psi (y)) ^ {\intercal} \right],
+$$
+
+$$
+\check {\mathcal {C}} _ {f} \triangleq \mathbb {E} _ {q _ {\mathrm {d}, \pi} (x, y)} [ \xi_ {\mathrm {c o n d}, f} ^ {(1)} (\rho_ {\theta^ {\star}}) ^ {2} (\psi (x) - \psi (y)) (\psi (x) - \psi (y)) ^ {\intercal} ],
+$$
+
+where we let $q_{\mathrm{d},\pi}(x,y)\triangleq q_{\mathrm{d}}(x)\pi (y|x)$ . For a channel $y\sim \pi (y|x)$ defined as $y = x + \epsilon v$ as in Sec. 3.2, it is easy to check that $\lim_{\epsilon \to 0}\frac{1}{\epsilon^2}\check{\mathcal{L}}_f = \mathbb{E}_{q_{\mathrm{d}}(x)}[\nabla_x\psi (x)\nabla_x\psi (x)^\intercal ] = \lim_{\epsilon \to 0}\frac{1}{\epsilon^2}\check{\mathcal{C}}_f$ . Hence, in the small- $\epsilon$ regime, the asymptotic covariance of $f$ -CondNCE behaves as $\check{\mathcal{V}}_f^{\mathrm{cond}}\approx \frac{1}{\epsilon^2}(\mathbb{E}_{q_{\mathrm{d}}(x)}[\nabla_x\psi (x)\nabla_x\psi (x)^\intercal ])^{-1}$ , which blows up as $\epsilon \rightarrow 0$ . These observations are consistent with Theorem 3.4.
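+
+The quadratic decay of the minimum eigenvalue in this remark is easy to observe numerically. In the sketch below, $\psi(x) = (x, x^2)$ , $x \sim \mathrm{Uniform}[-1,1]$ , and $v \sim \mathcal{N}(0,1)$ are assumptions made only for the demonstration; halving $\epsilon$ should divide $\lambda_{\min}$ by roughly 4:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(2)
+
+# For the channel y = x + eps*v, the matrix E[(psi(x)-psi(y))(psi(x)-psi(y))^T]
+# has minimum eigenvalue of order eps^2.
+x = rng.uniform(-1, 1, size=200000)
+v = rng.standard_normal(200000)
+
+def lam_min(eps):
+    y = x + eps * v
+    d = np.stack([x - y, x**2 - y**2], axis=1)   # psi(x) - psi(y) per sample
+    return np.linalg.eigvalsh(d.T @ d / len(x)).min()
+
+r = lam_min(0.1) / lam_min(0.05)
+assert 3.0 < r < 5.0   # quadratic decay: ratio is ~4 up to Monte Carlo noise
+```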
+
+Proof Sketch. Our finite-sample analysis of the regularized NCE estimators follows closely that of Shah et al. (2023), which relies on the seminal result of (Negahban et al., 2012) for regularized M-estimators:
+
+Theorem 4.4 (Negahban et al., 2012, Corollary 1). Let $z_1, \ldots, z_n$ be i.i.d. samples drawn from a distribution $p(z)$ . Let $h_\theta(z)$ be a convex and differentiable function parameterized by $\theta \in \Theta$ . Let $\hat{\mathcal{L}}_n(\theta) \triangleq \frac{1}{n} \sum_{i=1}^{n} h_\theta(z_i)$ denote the empirical objective function. Define
+
+$$
+\hat {\theta} _ {n} \in \arg \min _ {\theta} \left\{\hat {\mathcal {L}} _ {n} (\theta) + \lambda_ {n} \mathcal {R} (\theta) \right\}, \tag {9}
+$$
+
+where $\lambda_{n}$ is a regularization penalty and $\mathcal{R} \colon \Theta \to \mathbb{R}_{\geq 0}$ is a norm over $\Theta$ . Let $\theta^{\star} \in \arg \min_{\theta} \mathbb{E}_{p(z)}[h_{\theta}(z)]$ . Assume that
+
+1. The regularization penalty $\lambda_{n}$ satisfies $\lambda_{n} \geq 2\mathcal{R}^{*}(\nabla_{\theta}\hat{\mathcal{L}}_{n}(\theta^{\star}))$ , where $\mathcal{R}^{*} \colon \Theta^{*} \to \mathbb{R}_{\geq 0}$ is a dual norm of $\mathcal{R}$ over the dual space $\Theta^{*}$ ;
+2. The empirical objective $\theta \mapsto \hat{\mathcal{L}}_n(\theta)$ satisfies a restricted strong convexity condition at $\theta = \theta^{\star}$ with curvature $\kappa >0$ , i.e., $\Delta_{\hat{\mathcal{L}}_n}(\theta ,\theta^{\star})\geq \kappa \| \theta -\theta^{\star}\| _2^2$ .
+
+Then, the estimator $\hat{\theta}_n$ in Eq. (9) satisfies
+
+$$
+\left\| \hat {\theta} _ {n} - \theta^ {\star} \right\| _ {2} \leq 3 \frac {\lambda_ {n}}{\kappa} \gamma_ {\mathcal {R}; 2}.
+$$
+
+To ensure the first condition with $\lambda_{n}$ sufficiently small, we show that, with high probability, the gradient of the empirical objective is sufficiently small, using Hoeffding's inequality under Assumption 4.1. For the second condition, we show that the lowest eigenvalue of the Hessian of the empirical objective is lower bounded, again by Hoeffding's inequality invoking Assumption 4.1 and the positivity of the minimum eigenvalues of some second moment matrices. Combining the two high-probability events by a union bound completes the proof.
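+
+The mechanics of Theorem 4.4 can be sanity-checked on the simplest possible M-estimator: squared loss with an $\ell_1$ penalty, where the regularized minimizer is soft-thresholding in closed form. This toy setup is our own, not from (Negahban et al., 2012); with squared loss, $\kappa = 1/2$ and $\gamma_{\mathcal{R};2} = \sqrt{p}$ , so the bound is $\|\hat\theta_n - \theta^\star\|_2 \leq 6\lambda_n\sqrt{p}$ :
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(3)
+
+# h_theta(z) = ||theta - z||^2 / 2, so L_n(theta) = ||theta - zbar||^2 / 2 + const;
+# the Bregman divergence of L_n is ||theta - theta*||^2 / 2, giving kappa = 1/2,
+# and R = ell_1 gives gamma_{R;2} = sqrt(p) over R^p.
+p, n = 10, 500
+theta_star = np.zeros(p)
+theta_star[:3] = 1.0                                  # sparse ground truth
+z = theta_star + rng.standard_normal((n, p))
+
+zbar = z.mean(axis=0)
+lam = 2.0 * np.abs(theta_star - zbar).max()           # condition 1: lam >= 2 R*(grad at theta*)
+theta_hat = np.sign(zbar) * np.maximum(np.abs(zbar) - lam, 0.0)   # soft-threshold = argmin
+
+kappa, gamma = 0.5, np.sqrt(p)
+assert np.linalg.norm(theta_hat - theta_star) <= 3.0 * lam * gamma / kappa
+```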
+
+Simulation. We include preliminary simulation results for some NCE estimators in Appendix G. We leave a more thorough empirical investigation of the estimators in this paper for high-dimensional problems as future work.
+
+# 5. Discussion and Conclusion
+
+Beyond Bounded Exponential Families. An intriguing question is whether we can relax the boundedness assumption on $\psi(x)$ , making our estimators applicable beyond bounded (or truncated) exponential families. Here, we highlight what we need to modify in the proofs to extend their validity beyond this assumption, using $f$ -NCE estimators as an example. As sketched above, the proof of Theorem 4.1 consists of two parts: (1) the concentration of the gradient of the empirical objective around 0 at the true parameter $\theta^{\star}$ (Proposition D.1), and (2) the restricted strong convexity (anti-concentration of the Hessian) of the empirical objective around the true parameter $\theta^{\star}$ (Proposition D.2). Invoking the uniform bound via the worst-case density ratios, we apply Hoeffding's inequality using the boundedness of the max-norm of $\psi(x)$ . For unbounded sufficient statistics, we need a technique to handle the concentration behavior without worst-case density ratios bounded away from 0 and $\infty$ . For example, if the exponential family distribution is sub-Gaussian and the sufficient statistics are polynomials, one could use sub-Weibull concentration bounds.
+
+Local Versions of NCE-based Estimators. So far, we have taken a global approach to learning the parameter $\theta$ by treating it as a single object. In the context of exponential families, this is beneficial when exploiting a global structure on $\theta$ such as a bounded maximum norm, a bounded Frobenius norm, or a bounded nuclear norm when $\theta$ is matrix-shaped (Shah et al., 2023). However, for exponential families corresponding to node-wise-sparse Markov random fields (MRFs), the structure to be exploited is inherently local. Specifically, in node-wise-sparse MRFs, the conditional distribution of each node given all the other nodes can be expressed by a number of parameters that scales with the maximum degree of the MRF, which is assumed to be much smaller than the dimension. In such scenarios, it is convenient to learn the conditional distribution for each node rather than learning the joint distribution over all nodes. There is a long line of work on this approach, e.g., see (Besag, 1975; Vuffray et al., 2016; 2021; Shah et al., 2021a; Ren et al., 2021), a representative of which is the pseudo-likelihood estimator of Besag (1975). Perhaps unsurprisingly at this point, applying the NCE framework in a local manner provides a unifying view of all of the aforementioned works. We defer a detailed discussion to Appendix E.
+
+Optimization Complexity. So far, we have focused on the statistical properties of the proposed estimators. We now make a few comments regarding optimization complexity as concluding remarks. The most important property for optimization is the convexity of the objective functions with respect to the natural parameter $\theta$ . In Appendix F.1, we characterize a sufficient condition for the convexity of $f$ -NCE, $\alpha$ -CentNCE, and $f$ -CondNCE. Specifically, we show that $f_{\log}$ and $f_{\alpha}$ for $\alpha \in [0,1]$ result in convex objectives. Somewhat surprisingly, a convex generator $f$ that does not guarantee convexity of the objective function is $f_{\alpha}(\rho)$ for $\alpha \notin [0,1]$ .
+
+In the optimization community, a recent line of work (Liu et al., 2021; Lee et al., 2023) studied the optimization landscape of the original NCE objective and showed that the landscape can be arbitrarily flat even for scalar Gaussian mean estimation. This is mainly due to the unbounded and light-tailed nature of Gaussian distributions. Under the boundedness assumption, we prove in Appendix F.2 that the empirical $f$ -NCE objective function, for example, is smooth with probability 1. Then, from (Agarwal et al., 2010, Theorem 1) and the restricted strong convexity (Proposition D.2), a projected gradient descent algorithm has a globally geometric rate of convergence. A recent work (Jiang et al., 2023) analyzed the optimization landscape of MC-MLE and proposed an optimization algorithm with an efficient optimization complexity guarantee together with strong empirical results, though missing the connection to the original work (Geyer, 1994) and its statistical properties analyzed in (Riou-Durand & Chopin, 2018). Building on top of our work and (Jiang et al., 2023) could be an exciting future direction at the intersection of statistical and optimization complexity for learning unnormalized distributions.
+
+Conclusion. We hope that this work offers a unifying perspective on both existing estimators and those yet to be discovered, and that it contributes to a more systematic understanding of the trade-off between statistical and optimization complexity in the context of efficient learning with unnormalized distributions. As emphasized throughout the paper, further investigation is warranted to better understand the empirical behavior of different estimators in high-dimensional settings.
+
+# Acknowledgements
+
+We appreciate the insightful discussions with Devavrat Shah. This work was supported in part by the MIT-IBM Watson AI Lab under Agreement No. W1771646, and by AFRL and by the Department of the Air Force Artificial Intelligence Accelerator under Cooperative Agreement Number FA8750-19-2-1000. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Department of the Air Force or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
+
+# Impact Statement
+
+This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
+
+# References
+
+Agarwal, A., Negahban, S., and Wainwright, M. J. Fast global convergence rates of gradient methods for high-dimensional statistical recovery. In Adv. Neural Inf. Proc. Syst., volume 23, 2010.
+Barthelme, S. and Chopin, N. The Poisson transform for unnormalised statistical models. Stat. Comput., 25(4): 767-780, 2015.
+Besag, J. Statistical analysis of non-lattice data. J. R. Stat. Soc. D, 24(3):179-195, 1975.
+Ceylan, C. and Gutmann, M. U. Conditional Noise-Contrastive Estimation of Unnormalised Models. In Dy, J. and Krause, A. (eds.), Proc. Int. Conf. Mach. Learn., volume 80 of Proc. Mach. Learn. Research, pp. 726-734. PMLR, 10-15 Jul 2018. URL https://proceedings.mlr.press/v80/ceylan18a.html.
+Chehab, O., Gramfort, A., and Hyvarinen, A. The optimal noise in noise-contrastive learning is not what you think. In Proc. Conf. Uncertainty Artif. Intell., pp. 307-316. PMLR, 2022.
+Chehab, O., Hyvarinen, A., and Risteski, A. Provable benefits of annealing for estimating normalizing constants: Importance sampling, noise-contrastive estimation, and beyond. In Adv. Neural Inf. Proc. Syst., volume 36, 2023.
+Fisher, R. A. On the mathematical foundations of theoretical statistics. Phil. Trans. R. Soc. A, 222(594-604):309-368, 1922.
+
+Geyer, C. J. On the convergence of Monte Carlo maximum likelihood calculations. J. R. Stat. Soc. B, 56(1):261-274, 1994.
+Gneiting, T. and Raftery, A. E. Strictly proper scoring rules, prediction, and estimation. J. Am. Statist. Assoc., 102 (477):359-378, 2007.
+Gutmann, M. and Hirayama, J.-i. Bregman divergence as general framework to estimate unnormalized statistical models. In Proc. Conf. Uncertainty Artif. Intell. AUAI Press, 2011.
+Gutmann, M. U. and Hyvarinen, A. Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics. J. Mach. Learn. Res., 13 (2), 2012.
+Hinton, G. E. Training products of experts by minimizing contrastive divergence. Neural Comput., 14(8):1771-1800, 2002.
+Horn, R. A. and Johnson, C. R. Matrix analysis. Cambridge university press, 2012.
+Hyvarinen, A. Estimation of non-normalized statistical models by score matching. J. Mach. Learn. Res., 6(4), 2005.
+Hyvarinen, A. Some extensions of score matching. Comput. Stat. Data Anal., 51(5):2499-2512, 2007.
+Jiang, W., Qin, J., Wu, L., Chen, C., Yang, T., and Zhang, L. Learning unnormalized statistical models via compositional optimization. In Proc. Int. Conf. Mach. Learn., pp. 15105-15124. PMLR, 2023.
+Lee, H., Pabbaraju, C., Sevekari, A., and Risteski, A. Pitfalls of Gaussians as a noise distribution in NCE. In Int. Conf. Learn. Repr., 2023.
+Liu, B., Rosenfeld, E., Ravikumar, P., and Risteski, A. Analyzing and improving the optimization landscape of noise-contrastive estimation. arXiv preprint arXiv:2110.11271, 2021.
+Liu, S., Kanamori, T., and Williams, D. J. Estimating density models with truncation boundaries using score matching. J. Mach. Learn. Res., 23(186):1-38, 2022.
+Negahban, S. N., Ravikumar, P., Wainwright, M. J., and Yu, B. A Unified Framework for High-Dimensional Analysis of $M$ -Estimators with Decomposable Regularizers. Stat. Sci., 27(4):538 - 557, 2012. doi: 10.1214/12-STS400. URL https://doi.org/10.1214/12-STS400.
+Pabbaraju, C., Rohatgi, D., Sevekari, A. P., Lee, H., Moitra, A., and Risteski, A. Provable benefits of score matching. In Adv. Neural Inf. Proc. Syst., volume 36, 2023.
+
+Pihlaja, M., Gutmann, M., and Hyvarinen, A. A family of computationally efficient and simple estimators for unnormalized statistical models. In Proc. Conf. Uncertainty Artif. Intell., pp. 442-449. AUAI Press, 2010.
+Ren, C. X., Misra, S., Vuffray, M., and Lokhov, A. Y. Learning Continuous Exponential Families Beyond Gaussian. arXiv, February 2021.
+Riou-Durand, L. and Chopin, N. Noise contrastive estimation: Asymptotic properties, formal comparison with MC-MLE. Electron. J. Stat., 12(2):3473-3518, 2018.
+Shah, A., Shah, D., and Wornell, G. On learning continuous pairwise Markov random fields. In Int. Conf. Artif. Int. Statist., volume 130 of Proceedings of Machine Learning Research, pp. 1153-1161. PMLR, 13-15 Apr 2021a. URL https://proceedings.mlr.press/v130/shah21a.html.
+Shah, A., Shah, D., and Wornell, G. W. A computationally efficient method for learning exponential family distributions. In Adv. Neural Inf. Proc. Syst., 2021b. URL https://openreview.net/forum?id=B9WXduMZBEM.
+Shah, A., Shah, D., and Wornell, G. W. On computationally efficient learning of exponential family distributions. arXiv preprint arXiv:2309.06413, 2023.
+Song, Y. and Kingma, D. P. How to train your energy-based models. arXiv preprint arXiv:2101.03288, 2021.
+Song, Y., Garg, S., Shi, J., and Ermon, S. Sliced score matching: A scalable approach to density and score estimation. In Proc. Conf. Uncertainty Artif. Intell., pp. 574-584. PMLR, 2020.
+Sugiyama, M., Suzuki, T., Nakajima, S., Kashima, H., von Bünau, P., and Kawanabe, M. Direct importance estimation for covariate shift adaptation. Ann. Inst. Stat. Math., 60(4):699-746, 2008.
+Sugiyama, M., Suzuki, T., and Kanamori, T. Density-ratio matching under the Bregman divergence: a unified framework of density-ratio estimation. Ann. Inst. Stat. Math., 64(5):1009-1044, 2012.
+Uehara, M., Matsuda, T., and Komaki, F. Analysis of noise contrastive estimation from the perspective of asymptotic variance. arXiv preprint arXiv:1808.07983, 2018.
+Van der Vaart, A. W. Asymptotic statistics, volume 3. Cambridge university press, 2000.
+Vuffray, M., Misra, S., Lokhov, A., and Chertkov, M. Interaction screening: Efficient and sample-optimal learning of Ising models. In Adv. Neural Inf. Proc. Syst., volume 29, 2016.
+
+Vuffray, M., Misra, S., and Lokhov, A. Y. Efficient learning of discrete graphical models. J. Stat. Mech., 2021(12): 124017, December 2021. ISSN 1742-5468. doi: 10.1088/1742-5468/ac3aea.
+
+# Appendix
+
+- A Glossary
+- B Basic Properties
+  - B.1 $f$ -NCE
+    - B.1.1 Invariance
+    - B.1.2 Derivatives
+  - B.2 $\alpha$ -CentNCE
+    - B.2.1 Derivatives
+    - B.2.2 An Alternative Interpretation of GlobalGISO
+    - B.2.3 Proof of Theorem 3.1
+    - B.2.4 Proof of Theorem 3.2
+  - B.3 $f$ -CondNCE
+    - B.3.1 Derivatives
+    - B.3.2 Proof of Theorem 3.4
+- C Asymptotic Guarantees
+  - C.1 $f$ -NCE
+  - C.2 $\alpha$ -CentNCE
+  - C.3 $f$ -CondNCE
+- D Finite-Sample Guarantees
+  - D.1 $f$ -NCE
+  - D.2 $\alpha$ -CentNCE
+  - D.3 $f$ -CondNCE
+- E Local NCE for Node-Wise-Sparse MRFs
+- F Optimization Complexity
+  - F.1 Convexity
+  - F.2 Smoothness
+- G Experiments
+
+# A. Glossary
+
+For reference, we provide a summary of the notation in Table 4.
+
+# B. Basic Properties
+
+In what follows, we use Euler's and Lagrange's notations for derivatives. First, we record the derivatives of the Bregman divergence with respect to its second argument:
+
+$$
+\begin{array}{l} \Delta_ {f} (x, y) = f (x) - f (y) - f ^ {\prime} (y) (x - y), \\ \partial_ {y} \Delta_ {f} (x, y) = (y - x) f ^ {\prime \prime} (y), \\ \partial_ {y y} \Delta_ {f} (x, y) = f ^ {\prime \prime} (y) + y f ^ {\prime \prime \prime} (y) - x f ^ {\prime \prime \prime} (y), \\ \partial_ {y y} \Delta_ {f} (x, y) | _ {x = y} = f ^ {\prime \prime} (y). \\ \end{array}
+$$
+
+Further, since we consider exponential family distributions, we have
+
+$$
+\partial_ {\theta_ {i}} \rho_ {\theta} = \rho_ {\theta} \psi_ {i} \quad \text {and} \quad \partial_ {\theta_ {i} \theta_ {j}} \rho_ {\theta} = \rho_ {\theta} \psi_ {i} \psi_ {j}.
+$$
+
+Lemma B.1. For a three-times differentiable function $f$ , let $g_{f}(\rho) \triangleq -(\rho f^{\prime \prime \prime}(\rho) + f^{\prime \prime}(\rho))$ . Then
+
+$$
+\begin{array}{l} \partial_ {\theta_ {i} \theta_ {j}} \Delta_ {f} \left(\rho^ {*}, \rho_ {\theta}\right) = \psi_ {i} \psi_ {j} \rho_ {\theta} \left\{\left(\rho_ {\theta} f ^ {\prime \prime \prime} \left(\rho_ {\theta}\right) + f ^ {\prime \prime} \left(\rho_ {\theta}\right)\right) \left(\rho_ {\theta} - \rho^ {*}\right) + \rho_ {\theta} f ^ {\prime \prime} \left(\rho_ {\theta}\right) \right\} \\ \qquad = \psi_ {i} \psi_ {j} \rho_ {\theta} \big (\rho_ {\theta} \left(f ^ {\prime \prime} \left(\rho_ {\theta}\right) - g _ {f} \left(\rho_ {\theta}\right)\right) + \rho^ {*} g _ {f} \left(\rho_ {\theta}\right) \big). \\ \end{array}
+$$
+
+Table 4. Summary of notations.
+
+| Notation | Definition | Description |
+| --- | --- | --- |
+| $\mathcal{X} \subset \mathbb{R}^d$ | | domain of $x$ |
+| $\Theta \subset \mathbb{R}^p$ | | domain of $\theta$ |
+| $\rho_{\theta}(x)$ | $\phi_{\theta}(x)/\nu q_{\mathrm{n}}(x)$ | (scaled) density ratio |
+| $\Delta_h(z,z')$ | $h(z)-h(z')-\nabla_z h(z')^{\intercal}(z-z')$ | Bregman divergence of $h\colon \mathbb{R}^d\to\mathbb{R}$ |
+| $\theta_f^{\mathrm{nce}}(q_{\mathrm{d}},q_{\mathrm{n}})$ | $\in\arg\min_{\theta\in\Theta}\mathcal{L}_f^{\mathrm{nce}}(\phi_{\theta};q_{\mathrm{d}},q_{\mathrm{n}})$ | $f$ -NCE estimator (population) |
+| $\theta_{\alpha}^{\mathrm{cent}}(q_{\mathrm{d}},q_{\mathrm{n}})$ | $\in\arg\min_{\theta\in\Theta}\mathcal{L}_{\alpha}^{\mathrm{cent}}(\phi_{\theta};q_{\mathrm{d}},q_{\mathrm{n}})$ | $\alpha$ -CentNCE estimator (population) |
+| $\theta_f^{\mathrm{cond}}(q_{\mathrm{d}},\pi)$ | $\in\arg\min_{\theta\in\Theta}\mathcal{L}_f^{\mathrm{cond}}(\phi_{\theta};q_{\mathrm{d}},\pi)$ | $f$ -CondNCE estimator (population) |
+| $\theta_f^{\mathrm{nce}}(\hat{q}_{\mathrm{d}},\hat{q}_{\mathrm{n}})$ | $\in\arg\min_{\theta\in\Theta}\mathcal{L}_f^{\mathrm{nce}}(\phi_{\theta};\hat{q}_{\mathrm{d}},\hat{q}_{\mathrm{n}})$ | $f$ -NCE estimator (empirical) |
+| $\theta_{\alpha}^{\mathrm{cent}}(\hat{q}_{\mathrm{d}},q_{\mathrm{n}})$ | $\in\arg\min_{\theta\in\Theta}\mathcal{L}_{\alpha}^{\mathrm{cent}}(\phi_{\theta};\hat{q}_{\mathrm{d}},q_{\mathrm{n}})$ | $\alpha$ -CentNCE estimator (empirical) |
+| $\theta_f^{\mathrm{cond}}(\hat{q}_{\mathrm{d}},\hat{\pi})$ | $\in\arg\min_{\theta\in\Theta}\mathcal{L}_f^{\mathrm{cond}}(\phi_{\theta};\hat{q}_{\mathrm{d}},\hat{\pi})$ | $f$ -CondNCE estimator (empirical) |
+| $\mathcal{R}(\cdot)$ | | a norm over $\Theta$ |
+| $\mathcal{R}^*(\cdot)$ | | a dual norm over $\Theta^*$ |
+| $\rho_{\min}$ | | minimum density ratio |
+| $\rho_{\max}$ | | maximum density ratio |
+
+# B.1. $f$ -NCE
+
+Recall
+
+$$
+\hat {\mathcal {L}} _ {f} ^ {\mathrm {n c e}} (\theta) \triangleq \mathcal {L} _ {f} ^ {\mathrm {n c e}} (\phi_ {\theta}; \hat {q} _ {\mathrm {d}}, \hat {q} _ {\mathrm {n}}) = - \frac {1}{\nu} \mathbb {E} _ {\hat {q} _ {\mathrm {d}}} [ f ^ {\prime} (\rho_ {\theta}) ] + \mathbb {E} _ {\hat {q} _ {\mathrm {n}}} [ \rho_ {\theta} f ^ {\prime} (\rho_ {\theta}) - f (\rho_ {\theta}) ].
+$$
+
+# B.1.1. INVARIANCE
+
+We define an equivalence class of generator functions $f$ that yield the same NCE objective. For a function $f_{o}$ , let $\mathcal{F}^{\mathrm{nce}}(f_o) \triangleq \{f\colon \mathcal{L}_f^{\mathrm{nce}} \sim \mathcal{L}_{f_o}^{\mathrm{nce}}\}$ , where the notation $\sim$ denotes that the two objective functions are equivalent up to an affine transformation, i.e., there exist $A, B \in \mathbb{R}$ such that $\mathcal{L}_f^{\mathrm{nce}}(\phi_\theta; q_\mathrm{d}, q_\mathrm{n}) \equiv A\mathcal{L}_{f_o}^{\mathrm{nce}}(\phi_\theta; q_\mathrm{d}, q_\mathrm{n}) + B$ .
+
+Lemma B.2. If $f\in \mathcal{F}^{\mathrm{nce}}(f_o)$ , then $(\rho \mapsto af(\rho) + b\rho +c)\in \mathcal{F}^{\mathrm{nce}}(f_o)$ for any $a,b,c\in \mathbb{R}$ .
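+
+A quick numerical check of this invariance (with $\nu = 1$ and synthetic density-ratio values standing in for $\rho_\theta$ ): replacing $f$ by $af + b\rho + c$ shifts the empirical objective affinely, $\hat{\mathcal{L}}_g = a\hat{\mathcal{L}}_f - b - c$ , leaving the minimizer unchanged:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(4)
+
+# Synthetic density ratios on "data" and "noise" samples (illustrative only).
+rho_d = rng.uniform(0.5, 2.0, size=1000)
+rho_n = rng.uniform(0.5, 2.0, size=1000)
+
+def nce_obj(f, df, rd, rn):
+    # L_f = -E_d[f'(rho)] + E_n[rho f'(rho) - f(rho)]   (nu = 1)
+    return -np.mean(df(rd)) + np.mean(rn * df(rn) - f(rn))
+
+f  = lambda r: r * np.log(r) - (1 + r) * np.log(1 + r)   # log generator
+df = lambda r: np.log(r / (1 + r))
+
+a, b, c = 2.0, -3.0, 0.5
+g  = lambda r: a * f(r) + b * r + c
+dg = lambda r: a * df(r) + b
+
+# The b*rho terms cancel between the two expectations, leaving a*L_f - b - c.
+assert np.isclose(nce_obj(g, dg, rho_d, rho_n),
+                  a * nce_obj(f, df, rho_d, rho_n) - b - c)
+```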
+
+# B.1.2. DERIVATIVES
+
+Lemma B.3 (NCE: derivatives).
+
+$$
+\nabla_ {\theta} \hat {\mathcal {L}} _ {f} ^ {\mathrm {n c e}} (\theta) = \frac {1}{\nu} \mathbb {E} _ {\hat {q} _ {\mathrm {d}}} [ - \rho_ {\theta} f ^ {\prime \prime} (\rho_ {\theta}) \nabla_ {\theta} \log \rho_ {\theta} ] + \mathbb {E} _ {\hat {q} _ {\mathrm {n}}} [ \rho_ {\theta} ^ {2} f ^ {\prime \prime} (\rho_ {\theta}) \nabla_ {\theta} \log \rho_ {\theta} ],
+$$
+
+$$
+\begin{array}{l} \nabla_ {\theta} ^ {2} \hat {\mathcal {L}} _ {f} ^ {\mathrm {n c e}} (\theta) = \frac {1}{\nu} \mathbb {E} _ {\hat {q} _ {\mathrm {d}}} [ (- \rho_ {\theta} f ^ {\prime \prime} (\rho_ {\theta}) - \rho_ {\theta} ^ {2} f ^ {\prime \prime \prime} (\rho_ {\theta})) \nabla_ {\theta} \log \rho_ {\theta} \nabla_ {\theta} ^ {\intercal} \log \rho_ {\theta} - \rho_ {\theta} f ^ {\prime \prime} (\rho_ {\theta}) \nabla_ {\theta} ^ {2} \log \rho_ {\theta} ] \\ + \mathbb {E} _ {\hat {q} _ {\mathfrak {n}}} [ (2 \rho_ {\theta} ^ {2} f ^ {\prime \prime} (\rho_ {\theta}) + \rho_ {\theta} ^ {3} f ^ {\prime \prime \prime} (\rho_ {\theta})) \nabla_ {\theta} \log \rho_ {\theta} \nabla_ {\theta} ^ {\intercal} \log \rho_ {\theta} + \rho_ {\theta} ^ {2} f ^ {\prime \prime} (\rho_ {\theta}) \nabla_ {\theta} ^ {2} \log \rho_ {\theta} ]. \\ \end{array}
+$$
+
+In particular, we have
+
+$$
+\begin{array}{l} \nabla_ {\theta} \mathcal {L} _ {f} ^ {\mathrm {n c e}} \left(\theta^ {\star}\right) = \mathbb {E} \left[ \nabla_ {\theta} \hat {\mathcal {L}} _ {f} ^ {\mathrm {n c e}} \left(\theta^ {\star}\right) \right] = 0, \\ \nabla_ {\theta} ^ {2} \mathcal {L} _ {f} ^ {\mathrm {n c e}} (\theta^ {\star}) = \frac {1}{\nu} \mathbb {E} _ {q _ {\mathrm {d}}} \big [ \rho_ {\theta^ {\star}} f ^ {\prime \prime} (\rho_ {\theta^ {\star}}) \nabla_ {\theta} \log \rho_ {\theta^ {\star}} \nabla_ {\theta} ^ {\top} \log \rho_ {\theta^ {\star}} \big ]. \\ \end{array}
+$$
+
+For an exponential family model $\phi_{\theta}(x) = \exp (\langle \theta ,\psi (x)\rangle)$ , we have
+
+$$
+\nabla_ {\boldsymbol {\theta}} \hat {\mathcal {L}} _ {f} ^ {\mathsf {n c e}} (\boldsymbol {\theta}) = \frac {1}{\nu} \mathbb {E} _ {\hat {q} _ {\mathsf {d}}} [ \psi \xi_ {\mathsf {n c e}, f, \mathsf {d}} ^ {(1)} (\rho_ {\boldsymbol {\theta}}) ] + \mathbb {E} _ {\hat {q} _ {\mathsf {n}}} [ \psi \xi_ {\mathsf {n c e}, f, \mathsf {n}} ^ {(1)} (\rho_ {\boldsymbol {\theta}}) ],
+$$
+
+$$
+\nabla_ {\theta} ^ {2} \hat {\mathcal {L}} _ {f} ^ {\mathrm {n c e}} (\theta) = \frac {1}{\nu} \mathbb {E} _ {\hat {q} _ {\mathsf {d}}} [ \psi \psi^ {\mathsf {T}} \xi_ {\mathrm {n c e}, f, \mathsf {d}} ^ {(2)} (\rho_ {\theta}) ] + \mathbb {E} _ {\hat {q} _ {\mathsf {n}}} [ \psi \psi^ {\mathsf {T}} \xi_ {\mathrm {n c e}, f, \mathsf {n}} ^ {(2)} (\rho_ {\theta}) ],
+$$
+
+where
+
+$$
+\xi_ {\mathrm {n c e}, f, \mathrm {d}} ^ {(1)} (\rho) = - \rho f ^ {\prime \prime} (\rho),
+$$
+
+$$
+\xi_ {\mathrm {n c e}, f, \mathrm {n}} ^ {(1)} (\rho) = \rho^ {2} f ^ {\prime \prime} (\rho),
+$$
+
+$$
+\xi_ {\mathrm {n c e}, f, \mathrm {d}} ^ {(2)} (\rho) = \rho g _ {f} (\rho),
+$$
+
+$$
\xi_ {\mathsf {n c e}, f, \mathsf {n}} ^ {(2)} (\rho) = \rho^ {2} (f ^ {\prime \prime} (\rho) - g _ {f} (\rho)).
+$$
+
+In particular, if $q_{\mathrm{d}}(x) \equiv \phi_{\theta^{\star}}(x)$ for some $\theta^{\star}$ ,
+
+$$
+\nabla_ {\theta} \mathcal {L} _ {f} ^ {\mathrm {n c e}} \left(\theta^ {\star}\right) = \mathbb {E} \left[ \nabla_ {\theta} \hat {\mathcal {L}} _ {f} ^ {\mathrm {n c e}} \left(\theta^ {\star}\right) \right] = 0,
+$$
+
+$$
+\nabla_ {\theta} ^ {2} \mathcal {L} _ {f} ^ {\mathrm {n c e}} (\theta^ {\star}) = \mathbb {E} [ \nabla_ {\theta} ^ {2} \hat {\mathcal {L}} _ {f} ^ {\mathrm {n c e}} (\theta^ {\star}) ] = \frac {1}{\nu} \mathbb {E} _ {q _ {\mathrm {d}}} [ \psi \psi^ {\top} f ^ {\prime \prime} (\rho_ {\theta^ {\star}}) ] = \mathbb {E} _ {q _ {\mathrm {n}}} [ \psi \psi^ {\top} \rho_ {\theta^ {\star}} f ^ {\prime \prime} (\rho_ {\theta^ {\star}}) ].
+$$
+
+# B.2. $\alpha$ -CentNCE
+
+Recall that
+
+$$
+\tilde {r} _ {\theta ; \alpha} (x) = \frac {r _ {\theta} (x)}{(\mathbb {E} _ {q _ {n}} [ r _ {\theta} ^ {\alpha} (x) ]) ^ {\frac {1}{\alpha}}}
+$$
+
+and
+
+$$
+\tilde {\mathcal {L}} _ {\alpha} (\theta) \triangleq \tilde {\mathcal {L}} _ {\alpha} (\theta ; q _ {\mathsf {d}}, q _ {\mathsf {n}}) \triangleq \frac {1}{1 - \alpha} \mathbb {E} _ {q _ {\mathsf {d}}} [ \tilde {r} _ {\theta ; \alpha} ^ {\alpha - 1} (x) ] = \frac {1}{1 - \alpha} \mathbb {E} _ {q _ {\mathsf {d}}} [ r _ {\theta} ^ {\alpha - 1} (x) ] (\mathbb {E} _ {q _ {\mathsf {n}}} [ r _ {\theta} ^ {\alpha} (x) ]) ^ {\frac {1 - \alpha}{\alpha}}.
+$$
+
+# B.2.1. DERIVATIVES
+
+It is easy to check that
+
+Lemma B.4.
+
+$$
+\nabla_ {\theta} \log \tilde {r} _ {\theta ; \alpha} = \psi - \mathbb {E} _ {q _ {n}} [ \psi \tilde {r} _ {\theta ; \alpha} ^ {\alpha} ],
+$$
+
+$$
+\nabla_ {\theta} ^ {2} \log \tilde {r} _ {\theta ; \alpha} = - \alpha \{\mathbb {E} _ {q _ {n}} [ \psi \psi^ {\intercal} \tilde {r} _ {\theta ; \alpha} ^ {\alpha} ] - \mathbb {E} _ {q _ {n}} [ \psi \tilde {r} _ {\theta ; \alpha} ^ {\alpha} ] \mathbb {E} _ {q _ {n}} [ \psi \tilde {r} _ {\theta ; \alpha} ^ {\alpha} ] ^ {\intercal} \}.
+$$
+
+# B.2.2. AN ALTERNATIVE INTERPRETATION OF GLOBALGISO
+
+Consider an unnormalized model $\{\phi_{\theta}(x) \colon \theta \in \Theta\}$ . For a data distribution $q_{\mathrm{d}}(x)$ , to which we have sample access, assume that there exists $\theta^{*} \in \Theta$ such that $\phi_{\theta^{*}}(x) \propto q_{\mathrm{d}}(x)$ . Let $q_{\mathfrak{n}}(x)$ be a reference distribution which makes $\mathbb{E}_{q_{\mathfrak{n}}}[\log \phi_{\theta}(x)]$ exist for any $\theta \in \Theta$ . We define a "centered" unnormalized model
+
+$$
+\tilde {\phi} _ {\theta} (x) \triangleq \frac {\phi_ {\theta} (x)}{e ^ {\mathbb {E} _ {q _ {n}} [ \log \phi_ {\theta} (x) ]}}
+$$
+
+and denote its partition function as $\tilde{Z} (\theta)\triangleq \int \tilde{\phi}_{\theta}(x)\mathrm{d}x$ . We remark that
+
+$$
+\mathbb {E} _ {q _ {\mathrm {n}}} [ \log \tilde {\phi} _ {\theta} (x) ] = \mathbb {E} _ {q _ {\mathrm {n}}} [ \log \phi_ {\theta} (x) ] - \mathbb {E} _ {q _ {\mathrm {n}}} [ \log \phi_ {\theta} (x) ] = 0. \tag {10}
+$$
+
+We then define an objective for distribution learning as
+
+$$
+\mathcal {L} _ {\mathrm {g i s o}} (\theta) \triangleq \mathbb {E} _ {q _ {d}} \left[ \frac {q _ {\mathrm {n}} (x)}{\tilde {\phi} _ {\theta} (x)} \right].
+$$
+
+If $\phi_{\theta}(x) = \exp (\langle \theta ,\psi (x)\rangle)$ is an exponential family distribution over a compact support $\mathcal{X}$ and $q_{\mathfrak{n}}(x)$ is the uniform distribution over $\mathcal{X}$ , then it boils down to the objective function studied by (Shah et al., 2021b).
+
+Fisher Consistency To understand the property of the objective, we introduce another unnormalized model
+
+$$
+\xi_ {\theta_ {1}, \theta_ {2}} (x) \triangleq \frac {\tilde {\phi} _ {\theta_ {1}} (x)}{\tilde {\phi} _ {\theta_ {2}} (x)} q _ {\mathfrak {n}} (x),
+$$
+
+and denote its partition function and the normalized distribution as
+
+$$
Z \left(\theta_ {1}, \theta_ {2}\right) \triangleq \int \xi_ {\theta_ {1}, \theta_ {2}} (x) \, \mathrm {d} x \quad \text {and} \quad q _ {\theta_ {1}, \theta_ {2}} (x) \triangleq \frac {\xi_ {\theta_ {1} , \theta_ {2}} (x)}{Z \left(\theta_ {1} , \theta_ {2}\right)}.
+$$
+
+We can then show that
+
+Theorem B.1.
+
+$$
+\log \mathcal {L} _ {\mathrm {g i s o}} (\theta) = D \left(q _ {\mathrm {n}} \| q _ {\theta^ {*}, \theta}\right) - \log \tilde {Z} \left(\theta^ {*}\right).
+$$
+
+As an immediate corollary, we can prove the Fisher consistency of the objective function.
+
+Corollary B.1 (Fisher consistency). Let $\theta^{\star} \in \arg \min_{\theta} \mathcal{L}_{\mathrm{giso}}(\theta)$ . Then, $\phi_{\theta^{\star}}(x) \propto q_{\mathrm{d}}(x)$ for $x \in \operatorname{supp}(q_{\mathrm{n}})$ .
+
+The proof of Theorem B.1 readily follows from the following lemmas.
+
+Lemma B.5.
+
+$$
+\mathcal {L} _ {\text {g i s o}} (\theta) = \frac {Z \left(\theta^ {*} , \theta\right)}{\tilde {Z} \left(\theta^ {*}\right)}.
+$$
+
+Proof. Consider
+
+$$
\begin{array}{l} \mathcal {L} _ {\mathrm {g i s o}} (\theta) \triangleq \int q _ {\mathrm {d}} (x) \frac {q _ {\mathrm {n}} (x)}{\tilde {\phi} _ {\theta} (x)} \mathrm {d} x \\ = \int \frac {\tilde {\phi} _ {\theta^ {*}} (x)}{\tilde {Z} (\theta^ {*})} \frac {q _ {\mathrm {n}} (x)}{\tilde {\phi} _ {\theta} (x)} \mathrm {d} x \\ = \frac {1}{\tilde {Z} (\theta^ {*})} \int \frac {\tilde {\phi} _ {\theta^ {*}} (x)}{\tilde {\phi} _ {\theta} (x)} q _ {\mathrm {n}} (x) \mathrm {d} x \\ = \frac {Z (\theta^ {*} , \theta)}{\tilde {Z} (\theta^ {*})}. \\ \end{array}
+$$
+
+Lemma B.6. For any $\theta_1, \theta_2 \in \Theta$ ,
+
+$$
+D \left(q _ {n} \| q _ {\theta_ {1}, \theta_ {2}}\right) = \log Z \left(\theta_ {1}, \theta_ {2}\right).
+$$
+
+Proof. Consider
+
+$$
+\begin{array}{l} D \left(q _ {\mathfrak {n}} \| q _ {\theta_ {1}, \theta_ {2}}\right) = \mathbb {E} _ {q _ {\mathfrak {n}}} \left[ \log \frac {q _ {\mathfrak {n}} (x)}{q _ {\theta_ {1} , \theta_ {2}} (x)} \right] \\ = \mathbb {E} _ {q _ {\mathrm {n}}} \left[ \log Z \left(\theta_ {1}, \theta_ {2}\right) + \log \frac {\tilde {\phi} _ {\theta_ {2}} (x)}{\tilde {\phi} _ {\theta_ {1}} (x)} \right] \\ = \log Z (\theta_ {1}, \theta_ {2}). \\ \end{array}
+$$
+
+Here, in the last equality, we use the fact that $\log \tilde{\phi}_{\theta}(x)$ is centered under $q_{\mathfrak{n}}(x)$ , as alluded to earlier in Eq. (10).
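Theorem B.1 (Lemmas B.5 and B.6 combined) holds exactly under discretization, which makes it easy to verify numerically; the family and parameter values below are arbitrary illustrative choices.

```python
import numpy as np

x = np.linspace(-2, 2, 401)
psi = np.stack([x, x**2])
q_n = np.full_like(x, 1 / len(x))   # uniform reference distribution

def centered(theta):
    # phi_tilde_theta: centered so that E_{q_n}[log phi_tilde] = 0, cf. Eq. (10)
    p = np.exp(theta @ psi)
    return p / np.exp(q_n @ np.log(p))

theta_star = np.array([0.5, -1.0])
theta = np.array([-0.2, -0.6])
pt_star, pt = centered(theta_star), centered(theta)

Z_tilde_star = pt_star.sum()
q_d = pt_star / Z_tilde_star                # q_d proportional to phi_{theta*}
L_giso = q_d @ (q_n / pt)                   # E_{q_d}[q_n / phi_tilde_theta]

xi = pt_star / pt * q_n                     # xi_{theta*, theta}
Z = xi.sum()
kl = q_n @ np.log(q_n / (xi / Z))           # D(q_n || q_{theta*, theta})

# Theorem B.1: log L_giso(theta) = D(q_n || q_{theta*,theta}) - log Z_tilde(theta*)
assert np.isclose(np.log(L_giso), kl - np.log(Z_tilde_star))
```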
+
+# B.2.3. PROOF OF THEOREM 3.1
+
+Theorem 3.1 ( $\alpha$ -CentNCE subsumes MLE and GlobalGISO). The following holds:
+
1. ($\alpha = 0$: GlobalGISO) For an exponential family $\phi_{\theta}(x)$ , if $\mathcal{X}$ is bounded and $q_{\mathrm{n}}(x)$ is the uniform distribution over $\mathcal{X}$ , the 0-CentNCE objective $\tilde{\mathcal{L}}_0(\theta ;q_{\mathrm{d}},q_{\mathrm{n}})$ is equivalent to GlobalGISO (Shah et al., 2021b).

2. ($\alpha = 1$: MLE) If $Z_{1}(\theta)$ is assumed to be computable for each $\theta$ , the 1-CentNCE objective $\tilde{\mathcal{L}}_1(\theta; \hat{q}_{\mathrm{d}}, q_{\mathrm{n}})$ is equivalent to MLE (Fisher, 1922).

3. ($\alpha = 1$: MC-MLE) If $Z_{1}(\theta) = \mathbb{E}_{q_{\mathrm{n}}}\left[\frac{\phi_{\theta}(x)}{q_{\mathrm{n}}(x)}\right]$ is estimated with the empirical noise distribution $\hat{q}_{\mathrm{n}}(x)$ , the 1-CentNCE objective $\tilde{\mathcal{L}}_1(\theta ;\hat{q}_{\mathrm{d}},\hat{q}_{\mathrm{n}})$ is equivalent to MC-MLE (Geyer, 1994).
+
+Proof. When $\alpha \to 1$ , the centering becomes the standard normalization, i.e.,
+
+$$
+\tilde {\phi} _ {\theta ; 1} (x) \triangleq \lim _ {\alpha \to 1} \frac {\phi_ {\theta} (x)}{(\mathbb {E} _ {q _ {n}} [ (\frac {\phi_ {\theta} (x)}{q _ {n} (x)}) ^ {\alpha} ]) ^ {1 / \alpha}} = \frac {\phi_ {\theta} (x)}{\mathbb {E} _ {q _ {n}} [ \frac {\phi_ {\theta} (x)}{q _ {n} (x)} ]},
+$$
+
and thus the objective becomes equivalent to the MC-MLE objective:
+
+$$
+\tilde {\mathcal {L}} _ {1} (\theta ; q _ {\mathsf {d}}, q _ {\mathsf {n}}) = \mathbb {E} _ {q _ {\mathsf {d}} (x)} \Big [ \log \frac {1}{\tilde {\phi} _ {\theta ; 1} (x)} \Big ] = \mathbb {E} _ {q _ {\mathsf {d}} (x)} \Big [ \log \frac {1}{\phi_ {\theta} (x)} \Big ] + \log \mathbb {E} _ {q _ {\mathsf {n}}} \Big [ \frac {\phi_ {\theta} (x)}{q _ {\mathsf {n}} (x)} \Big ].
+$$
+
+When $Z_{1}(\theta) = \mathbb{E}_{q_{n}}[\frac{\phi_{\theta}(x)}{q_{n}(x)}] = Z(\theta)$ is assumed to be computable, this becomes equivalent to MLE.
+
+When $\alpha \to 0$ , the centering becomes
+
+$$
+\tilde {\phi} _ {\theta ; 0} (x) \triangleq \lim _ {\alpha \rightarrow 0} \frac {\phi_ {\theta} (x)}{(\mathbb {E} _ {q _ {n}} [ (\frac {\phi_ {\theta} (x)}{q _ {n} (x)}) ^ {\alpha} ]) ^ {1 / \alpha}} = \frac {\phi_ {\theta} (x)}{e ^ {\mathbb {E} _ {q _ {n} (x)} [ \log \frac {\phi_ {\theta} (x)}{q _ {n} (x)} ]}}
+$$
+
+and the objective becomes
+
+$$
+\tilde {\mathcal {L}} _ {0} (\theta ; q _ {\mathrm {d}}, q _ {\mathrm {n}}) = \mathbb {E} _ {q _ {\mathrm {d}} (x)} \left[ \frac {q _ {\mathrm {n}} (x)}{\tilde {\phi} _ {\theta ; 0} (x)} \right] = \mathbb {E} _ {q _ {\mathrm {d}} (x)} \left[ \frac {q _ {\mathrm {n}} (x)}{\phi_ {\theta} (x)} \right] e ^ {\mathbb {E} _ {q _ {\mathrm {n}} (x)} \left[ \log \frac {\phi_ {\theta} (x)}{q _ {\mathrm {n}} (x)} \right]}. \tag {11}
+$$
+
+In particular, for the exponential family, we have
+
+$$
+\log Z _ {0} (\theta) \triangleq \mathbb {E} _ {q _ {\mathfrak {n}} (x)} \left[ \log \frac {\phi_ {\theta} (x)}{q _ {\mathfrak {n}} (x)} \right] = \langle \theta , \bar {\psi} _ {q} \rangle - \mathbb {E} _ {q _ {\mathfrak {n}} (x)} [ \log q _ {\mathfrak {n}} (x) ],
+$$
+
+where $\bar{\psi}_q\triangleq \mathbb{E}_{q_n(x)}[\psi (x)]$ , and thus the objective becomes
+
+$$
+\tilde {\mathcal {L}} _ {0} (\theta ; q _ {\mathsf {d}}, q _ {\mathsf {n}}) = \mathbb {E} _ {q _ {\mathsf {d}} (x)} [ q _ {\mathsf {n}} (x) \exp (\langle \theta , \psi (x) - \bar {\psi} _ {q} \rangle) ]
+$$
+
modulo additive and multiplicative constants. When the underlying domain $\mathcal{X}$ is assumed to be bounded, we can set $q_{\mathrm{n}}(x)$ as the uniform distribution over $\mathcal{X}$ . In this case, the 0-CentNCE objective boils down to the global generalized interactive screening objective (GlobalGISO) studied by Shah et al. (2021b).
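Both limits in the proof can be checked numerically on a discrete domain. The sketch below (assuming the ratio convention $r_{\theta} = \phi_{\theta}/q_{\mathrm{n}}$, with expectations computed by exact sums) confirms that $\tilde{\mathcal{L}}_{\alpha}$ approaches the form of Eq. (11) as $\alpha \to 0$ and, after removing the divergent constant $\frac{1}{1-\alpha}$, the MC-MLE form as $\alpha \to 1$, up to the $\theta$-independent constant $\mathbb{E}_{q_{\mathrm{d}}}[\log q_{\mathrm{n}}]$:

```python
import numpy as np

x = np.linspace(-2, 2, 401)
q_n = np.full_like(x, 1 / len(x))
q_d = np.exp(-x**2); q_d /= q_d.sum()
phi = np.exp(0.4 * x - 0.8 * x**2)   # unnormalized model
r = phi / q_n                         # density ratio (assumed convention r = phi / q_n)

def L_cent(alpha):
    return (q_d @ r ** (alpha - 1)) * (q_n @ r ** alpha) ** ((1 - alpha) / alpha) / (1 - alpha)

# alpha -> 0: the objective tends to Eq. (11): E_{q_d}[q_n/phi] * exp(E_{q_n}[log(phi/q_n)])
L0 = (q_d @ (q_n / phi)) * np.exp(q_n @ np.log(r))
assert np.isclose(L_cent(1e-4), L0, rtol=1e-2)

# alpha -> 1: after removing the divergent constant 1/(1-alpha), the objective tends to
# the MC-MLE form E_{q_d}[log(1/r)] + log E_{q_n}[r]  (equivalent up to constants)
alpha = 1 - 1e-4
L1 = -(q_d @ np.log(r)) + np.log(q_n @ r)
assert np.isclose(L_cent(alpha) - 1 / (1 - alpha), L1, atol=0.05)
```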
+
+To provide a comprehensive view, we summarize the connections in terms of the objective functions that correspond to the unified estimators in Table 5.
+
+# B.2.4. PROOF OF THEOREM 3.2
+
+Theorem 3.2 ( $f_{\alpha}$ -NCE and $\alpha$ -CentNCE estimators are equivalent). For a set $A \subset \Theta \times \mathbb{R}$ in the augmented parameter space, let $A|_{\Theta} \triangleq \{\theta : (\theta, \nu) \in A$ for some $\nu \in \mathbb{R}\}$ denote the subset corresponding to $\Theta$ . Then,
+
+$$
+\underset {\underline {{\theta}} = (\theta , \nu) \in \Theta \times \mathbb {R}} {\arg \min } \left. \mathcal {L} _ {f _ {\alpha}} ^ {\mathrm {n c e}} (\underline {{\theta}}; \hat {q} _ {\mathrm {d}}, \hat {q} _ {\mathrm {n}}) \right| _ {\Theta} = \underset {\theta \in \Theta} {\arg \min } \mathcal {L} _ {\alpha} ^ {\mathrm {c e n t}} (\theta ; \hat {q} _ {\mathrm {d}}, \hat {q} _ {\mathrm {n}}).
+$$
+
Proof. On one hand, we first note that $\nu \mapsto \mathcal{L}_{f_{\alpha}}^{\mathrm{nce}}(\underline{\theta};\hat{q}_{\mathrm{d}},\hat{q}_{\mathrm{n}})$ is convex, and for each $\theta$ , its minimizer $\nu_{\alpha}^{*}(\theta)$ satisfies
+
+$$
e ^ {\nu_ {\alpha} ^ {*} (\theta)} = \frac {\mathbb {E} _ {\hat {q} _ {\mathrm {d}}} [ r _ {\theta} ^ {\alpha - 1} ]}{\mathbb {E} _ {\hat {q} _ {\mathrm {n}}} [ r _ {\theta} ^ {\alpha} ]}.
+$$
+
+Table 5. Existing estimators as special instances of NCE estimators.
+
| Existing estimator | Corresponding NCE objective |
| --- | --- |
| MLE (Fisher, 1922) | $\mathcal{L}_1^{\mathrm{cent}}(\theta;\hat{q}_{\mathrm{d}}, q_{\mathrm{n}})$ |
| GlobalGISO (Shah et al., 2023) | $\mathcal{L}_0^{\mathrm{cent}}(\theta;\hat{q}_{\mathrm{d}}, q_{\mathrm{n}})$ |
| MC-MLE (Geyer, 1994; Jiang et al., 2023) | $\mathcal{L}_1^{\mathrm{cent}}(\theta;\hat{q}_{\mathrm{d}}, \hat{q}_{\mathrm{n}})$ |
| IS (Pihlaja et al., 2010; Riou-Durand & Chopin, 2018) | $\mathcal{L}_{f_1}^{\mathrm{nce}}(\theta;\hat{q}_{\mathrm{d}}, \hat{q}_{\mathrm{n}})$ |
| eNCE (Liu et al., 2021) | $\mathcal{L}_{f_{1/2}}^{\mathrm{nce}}(\theta;\hat{q}_{\mathrm{d}}, \hat{q}_{\mathrm{n}})$ |
| Pseudo-likelihood (Besag, 1975) | $\mathcal{L}_1^{\mathrm{cent}}(\theta;\hat{q}_{\mathrm{d}}, q_{\mathrm{n}})$ (local) |
| (local) GISO (Vuffray et al., 2016; 2021), ISODUS (Ren et al., 2021) | $\mathcal{L}_0^{\mathrm{cent}}(\theta;\hat{q}_{\mathrm{d}}, q_{\mathrm{n}})$ (local) |
+
+Moreover,
+
+$$
+\nabla_ {\theta} \mathcal {L} _ {f _ {\alpha}} ^ {\mathrm {n c e}} (\underline {{\theta}}; \hat {q} _ {\mathrm {d}}, \hat {q} _ {\mathrm {n}}) = - e ^ {\nu (\alpha - 1)} \mathbb {E} _ {\hat {q} _ {\mathrm {d}}} [ r _ {\theta} ^ {(\alpha - 1)} \nabla_ {\theta} \log r _ {\theta} ] + e ^ {\nu \alpha} \mathbb {E} _ {\hat {q} _ {\mathrm {n}}} [ r _ {\theta} ^ {\alpha} \nabla_ {\theta} \log r _ {\theta} ],
+$$
+
+so that the $f_{\alpha}$ -NCE estimator $\hat{\underline{\theta}}_{f_{\alpha}}^{\mathrm{nce}}(\hat{q}_{\mathsf{d}},\hat{q}_{\mathsf{n}}) = (\hat{\theta}_{f_{\alpha}}^{\mathrm{nce}}(\hat{q}_{\mathsf{d}},\hat{q}_{\mathsf{n}}),\hat{\nu}_{f_{\alpha}}^{\mathrm{nce}}(\hat{q}_{\mathsf{d}},\hat{q}_{\mathsf{n}}))$ satisfies
+
+$$
+\mathbb {E} _ {\hat {q} _ {\mathrm {n}}} \left[ r _ {\theta} ^ {\alpha} \nabla_ {\theta} \log r _ {\theta} \right] \mathbb {E} _ {\hat {q} _ {\mathrm {d}}} \left[ r _ {\theta} ^ {\alpha - 1} \right] = \mathbb {E} _ {\hat {q} _ {\mathrm {n}}} \left[ r _ {\theta} ^ {\alpha} \right] \mathbb {E} _ {\hat {q} _ {\mathrm {d}}} \left[ r _ {\theta} ^ {\alpha - 1} \nabla_ {\theta} \log r _ {\theta} \right]. \tag {12}
+$$
+
+On the other hand, we have
+
+$$
+\nabla_ {\theta} \mathcal {L} _ {\alpha} ^ {\mathrm {c e n t}} (\theta ; \hat {q} _ {\mathsf {d}}, \hat {q} _ {\mathsf {n}}) = - \mathbb {E} _ {\hat {q} _ {\mathsf {d}}} [ r _ {\theta} ^ {\alpha - 1} \nabla_ {\theta} \log r _ {\theta} ] (\mathbb {E} _ {\hat {q} _ {\mathsf {n}}} [ r _ {\theta} ^ {\alpha} ]) ^ {\frac {1 - \alpha}{\alpha}} + \mathbb {E} _ {\hat {q} _ {\mathsf {d}}} [ r _ {\theta} ^ {\alpha - 1} ] (\mathbb {E} _ {\hat {q} _ {\mathsf {n}}} [ r _ {\theta} ^ {\alpha} ]) ^ {\frac {1 - 2 \alpha}{\alpha}} \mathbb {E} _ {\hat {q} _ {\mathsf {n}}} [ r _ {\theta} ^ {\alpha} \nabla_ {\theta} \log r _ {\theta} ],
+$$
+
+which implies that the $\alpha$ -CentNCE estimator $\hat{\theta}_{\alpha}^{\mathrm{cent}}(q_{\mathrm{d}}, q_{\mathrm{n}})$ is also a root of Eq. (12). This establishes the desired equivalence.
+
+# B.3. $f$ -CondNCE
+
+# B.3.1. DERIVATIVES
+
+We first note that
+
Lemma B.7.
+
+$$
+\nabla r _ {\theta} (x, y) = r _ {\theta} (x, y) \nabla_ {\theta} \log r _ {\theta} (x, y),
+$$
+
+$$
+\nabla r _ {\theta} ^ {- 1} (x, y) = - \frac {1}{r _ {\theta} ^ {2} (x , y)} \nabla r _ {\theta} (x, y) = - \frac {1}{r _ {\theta} (x , y)} \nabla_ {\theta} \log r _ {\theta} (x, y).
+$$
+
+In particular, for an exponential family distribution $\phi_{\theta}(x) = \exp (\langle \theta ,\psi (x)\rangle)$ , we have
+
+$$
+\nabla_ {\theta} \log \rho_ {\theta} (x, y) = \psi (x) - \psi (y),
+$$
+
+$$
+\nabla_ {\theta} ^ {2} \log \rho_ {\theta} (x, y) = 0.
+$$
+
Lemma B.8 (Conditional NCE: derivatives). We write $\rho_{\theta} \triangleq \rho_{\theta}(x,y)$ as a shorthand.
+
+$$
+\begin{array}{l} \nabla_ {\theta} \mathcal {L} _ {f} ^ {\text {c o n d}} (\theta) = \mathbb {E} _ {q _ {\mathrm {d}} (x) \pi (y | x)} [ (\rho_ {\theta} f ^ {\prime \prime} (\rho_ {\theta}) + \rho_ {\theta} ^ {- 2} f ^ {\prime \prime} (\rho_ {\theta} ^ {- 1})) \nabla_ {\theta} \log \rho_ {\theta} ], \\ \nabla_ {\theta} ^ {2} \mathcal {L} _ {f} ^ {\text {c o n d}} (\theta) = \mathbb {E} _ {q _ {\mathrm {d}} (x) \pi (y | x)} [ (- \rho_ {\theta} f ^ {\prime \prime} (\rho_ {\theta}) - \rho_ {\theta} ^ {2} f ^ {\prime \prime \prime} (\rho_ {\theta})) \nabla_ {\theta} \log \rho_ {\theta} \nabla_ {\theta} ^ {\top} \log \rho_ {\theta} - \rho_ {\theta} f ^ {\prime \prime} (\rho_ {\theta}) \nabla_ {\theta} ^ {2} \log \rho_ {\theta} \\ + \left(2 \rho_ {\theta} ^ {- 2} f ^ {\prime \prime} \left(\rho_ {\theta} ^ {- 1}\right) + \rho_ {\theta} ^ {- 3} f ^ {\prime \prime \prime} \left(\rho_ {\theta} ^ {- 1}\right)\right) \nabla_ {\theta} \log \rho_ {\theta} \nabla_ {\theta} ^ {\intercal} \log \rho_ {\theta} + \rho_ {\theta} ^ {- 2} f ^ {\prime \prime} \left(\rho_ {\theta} ^ {- 1}\right) \nabla_ {\theta} ^ {2} \log \rho_ {\theta} ]. \\ \end{array}
+$$
+
+For an exponential family distribution $\phi_{\theta}(x) = \exp (\langle \theta ,\psi (x)\rangle)$ , we have
+
+$$
+\nabla_ {\theta} \mathcal {L} _ {f} ^ {\text {c o n d}} (\theta) = \mathbb {E} _ {q _ {\mathrm {d}} (y) \pi (x | y)} [ (\psi (x) - \psi (y)) \xi_ {\text {c o n d}, f} ^ {(1)} (\rho_ {\theta} (x, y)) ],
+$$
+
+$$
+\nabla_ {\theta} ^ {2} \mathcal {L} _ {f} ^ {\mathrm {c o n d}} (\theta) = \mathbb {E} _ {q _ {\mathrm {d}} (y) \pi (x | y)} [ (\psi (x) - \psi (y)) (\psi (x) - \psi (y)) ^ {\intercal} \xi_ {\mathrm {c o n d}, f} ^ {(2)} (\rho_ {\theta} (x, y)) ],
+$$
+
+where
+
+$$
+\xi_ {\text {c o n d}, f} ^ {(1)} (\rho) \triangleq \rho^ {- 1} f ^ {\prime \prime} (\rho^ {- 1}) + \rho^ {2} f ^ {\prime \prime} (\rho) = - \xi_ {\text {n c e}, f, \mathrm {d}} ^ {(1)} (\rho^ {- 1}) + \xi_ {\text {n c e}, f, \mathrm {n}} ^ {(1)} (\rho),
+$$
+
+$$
+\xi_ {\mathrm {c o n d}, f} ^ {(2)} (\rho) \triangleq \rho^ {- 1} g _ {f} (\rho^ {- 1}) + \rho^ {2} (f ^ {\prime \prime} (\rho) - g _ {f} (\rho)) = \xi_ {\mathrm {n c e}, f, \mathrm {d}} ^ {(2)} (\rho^ {- 1}) + \xi_ {\mathrm {n c e}, f, \mathrm {n}} ^ {(2)} (\rho).
+$$
+
+In particular,
+
+$$
+\nabla_ {\theta} \mathcal {L} _ {f} ^ {\text {c o n d}} (\theta^ {\star}) = 0,
+$$
+
+$$
\nabla_ {\theta} ^ {2} \mathcal {L} _ {f} ^ {\mathrm {c o n d}} (\theta^ {\star}) = \mathbb {E} _ {q _ {\mathrm {d}} (y) \pi (x | y)} [ (\psi (x) - \psi (y)) (\psi (x) - \psi (y)) ^ {\intercal} \rho_ {\theta^ {\star}} ^ {2} f ^ {\prime \prime} (\rho_ {\theta^ {\star}}) ].
+$$
+
+# B.3.2. PROOF OF THEOREM 3.4
+
+Theorem 3.4 (Asymptotic behavior of empirical $f$ -CondNCE for small $\epsilon$ ). The empirical $f$ -CondNCE objective can be written as
+
+$$
+\begin{array}{l} \mathcal {L} _ {f} ^ {\text {c o n d}} \left(\phi_ {\theta}; \hat {q} _ {\mathrm {d}}, \hat {q} _ {\mathrm {s}}\right) = - f (1) \\ + 2 f ^ {\prime \prime} (1) \mathbb {E} _ {\hat {q} _ {d} (x) \hat {q} _ {s} (v)} [ \nabla_ {x} \log \phi_ {\theta} (x) ^ {\intercal} v ] \epsilon \\ + f ^ {\prime \prime} (1) \mathcal {L} ^ {\mathrm {s s m}} \left(\phi_ {\theta}; \hat {q} _ {\mathrm {d}}, \hat {q} _ {\mathrm {s}}\right) \epsilon^ {2} + o \left(\epsilon^ {2}\right). \\ \end{array}
+$$
+
+Here, we define the empirical sliced SM (SSM) objective (Song et al., 2020)
+
+$$
+\begin{array}{l} \mathcal {L} ^ {\mathrm {s s m}} \left(\phi_ {\theta}; \hat {q} _ {\mathrm {d}}, \hat {q} _ {\mathrm {s}}\right) \\ \triangleq \mathbb {E} _ {\hat {q} _ {d} (x) \hat {q} _ {s} (v)} \left[ v ^ {\intercal} \nabla_ {x} ^ {2} \log \phi_ {\theta} (x) v + \frac {1}{2} (v ^ {\intercal} \nabla_ {x} \log \phi_ {\theta} (x)) ^ {2} \right]. \\ \end{array}
+$$
+
+Proof. Let $\hat{\mathcal{C}}_f(\theta, \epsilon) \triangleq \hat{\mathcal{L}}_f^{\mathrm{cond}}(\phi_\theta; \hat{q}_{\mathrm{d}}, \hat{q}_{\mathrm{s}})$ . Note that
+
+$$
+\hat {\mathcal {C}} _ {f} (\theta , \epsilon) = \mathbb {E} _ {\hat {q} _ {\mathrm {d}} (x) \hat {q} _ {\mathrm {s}} (v)} [ - f ^ {\prime} (r) + r ^ {- 1} f ^ {\prime} (r ^ {- 1}) - f (r ^ {- 1}) ],
+$$
+
+where we set $r = \frac{\phi_{\theta}(x)}{\phi_{\theta}(x + \epsilon v)}$ as a shorthand notation. Since by chain rule we have $\frac{\partial}{\partial\epsilon}\log r = -\nabla_x\log \phi_\theta (x + \epsilon v)^\intercal v$ , we have
+
+$$
+\frac {\partial}{\partial \epsilon} \hat {\mathcal {C}} _ {f} (\theta , \epsilon) = \mathbb {E} _ {\hat {q} _ {\mathrm {d}} (x) \hat {q} _ {\mathrm {s}} (v)} \left[ \nabla_ {x} \log \phi_ {\theta} (x + \epsilon v) ^ {\intercal} v \left(r f ^ {\prime \prime} (r) + \frac {1}{r ^ {2}} f ^ {\prime \prime} \left(\frac {1}{r}\right)\right) \right],
+$$
+
+and
+
+$$
\begin{array}{l} \frac {\partial^ {2}}{\partial \epsilon^ {2}} \hat {\mathcal {C}} _ {f} (\theta , \epsilon) = \mathbb {E} _ {\hat {q} _ {\mathrm {d}} (x) \hat {q} _ {\mathrm {s}} (v)} \left[ v ^ {\intercal} \nabla_ {x} ^ {2} \log \phi_ {\theta} (x + \epsilon v) v \left(r f ^ {\prime \prime} (r) + \frac {1}{r ^ {2}} f ^ {\prime \prime} \left(\frac {1}{r}\right)\right) \right. \\ \left. \right. - \left(\nabla_ {x} \log \phi_ {\theta} (x + \epsilon v) ^ {\intercal} v\right) ^ {2} \Big (r f ^ {\prime \prime} (r) + r ^ {2} f ^ {\prime \prime \prime} (r) - \frac {2}{r ^ {2}} f ^ {\prime \prime} \Big (\frac {1}{r} \Big) - \frac {1}{r ^ {3}} f ^ {\prime \prime \prime} \Big (\frac {1}{r} \Big) \Big) \Big ]. \\ \end{array}
+$$
+
+Hence,
+
+$$
+\left. \hat {\mathcal {C}} _ {f} (\theta , \epsilon) \right| _ {\epsilon = 0} = - f (1),
+$$
+
+$$
+\left. \frac {\partial}{\partial \epsilon} \hat {\mathcal {C}} _ {f} (\theta , \epsilon) \right| _ {\epsilon = 0} = 2 f ^ {\prime \prime} (1) \mathbb {E} _ {\hat {q} _ {\mathrm {d}} (x) \hat {q} _ {\mathrm {s}} (v)} [ \nabla_ {x} \log \phi_ {\theta} (x) ^ {\intercal} v ],
+$$
+
+$$
\begin{array}{l} \left. \frac {\partial^ {2}}{\partial \epsilon^ {2}} \hat {\mathcal {C}} _ {f} (\theta , \epsilon) \right| _ {\epsilon = 0} = f ^ {\prime \prime} (1) \left(2 \mathbb {E} _ {\hat {q} _ {\mathrm {d}} (x) \hat {q} _ {\mathrm {s}} (v)} [ v ^ {\intercal} \nabla_ {x} ^ {2} \log \phi_ {\theta} (x) v ] + \mathbb {E} _ {\hat {q} _ {\mathrm {d}} (x) \hat {q} _ {\mathrm {s}} (v)} [ (\nabla_ {x} \log \phi_ {\theta} (x) ^ {\intercal} v) ^ {2} ]\right) \\ = 2 f ^ {\prime \prime} (1) \hat {\mathcal {L}} ^ {\mathrm {s s m}} \left(\phi_ {\theta}; \hat {q} _ {\mathrm {d}}, \hat {q} _ {\mathrm {s}}\right). \\ \end{array}
+$$
+
Plugging these into the second-order Taylor approximation of $\epsilon \mapsto \hat{\mathcal{C}}_f(\theta ,\epsilon)$ around $\epsilon = 0$ , i.e.,
+
+$$
\hat {\mathcal {C}} _ {f} (\theta , \epsilon) = \hat {\mathcal {C}} _ {f} (\theta , 0) + \frac {\partial}{\partial \epsilon} \hat {\mathcal {C}} _ {f} (\theta , \epsilon) \Big | _ {\epsilon = 0} \epsilon + \frac {1}{2} \frac {\partial^ {2}}{\partial \epsilon^ {2}} \hat {\mathcal {C}} _ {f} (\theta , \epsilon) \Big | _ {\epsilon = 0} \epsilon^ {2} + o (\epsilon^ {2}),
+$$
+
+concludes the proof.
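The expansion can be sanity-checked numerically for $f(\rho) = \rho \log \rho$ (so $f(1) = 0$ and $f''(1) = 1$), for which the integrand $-f'(r) + r^{-1}f'(r^{-1}) - f(r^{-1})$ simplifies to $-\log r - 1 + 1/r$. The model $\phi_{\theta}(x) = \exp(\theta_1 x + \theta_2 x^2)$ and the sampling choices below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, 4000)       # samples standing in for q_d-hat
v = rng.choice([-1.0, 1.0], 4000)      # slicing directions, q_s-hat

th1, th2 = 0.3, -0.5                   # phi_theta(x) = exp(th1*x + th2*x^2)

def log_phi(y):
    return th1 * y + th2 * y**2

def C_hat(eps):
    # empirical f-CondNCE value for f(rho) = rho*log(rho):
    # -f'(r) + r^{-1} f'(r^{-1}) - f(r^{-1}) = -log(r) - 1 + 1/r
    log_r = log_phi(x) - log_phi(x + eps * v)
    return np.mean(-log_r - 1.0 + np.exp(-log_r))

score = th1 + 2 * th2 * x              # grad_x log phi
hess = 2 * th2                         # grad_x^2 log phi (constant here)
ssm = np.mean(v * hess * v + 0.5 * (v * score) ** 2)   # empirical SSM objective
lin = np.mean(score * v)

for eps in (1e-2, 5e-3):
    expansion = 0.0 + 2 * lin * eps + ssm * eps**2     # -f(1) = 0, f''(1) = 1
    assert abs(C_hat(eps) - expansion) < 1e-4          # remainder is O(eps^3)
```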
+
+# C. Asymptotic Guarantees
+
We can establish the asymptotic consistency and normality of the estimators. Though we present the results for exponential family models for simplicity, one can derive the asymptotic covariances for general unnormalized models and generalize the results accordingly. All the proofs follow from a straightforward application of standard M-estimation theory (see, e.g., Van der Vaart, 2000), so we omit them.
+
+# C.1. $f$ -NCE
+
Theorem C.1 (f-NCE: asymptotic guarantee). Let $\hat{\underline{\theta}}_{f;n_{\mathrm{d}},n_{\mathrm{n}}}^{\mathrm{nce}} \triangleq (\hat{\theta}_{f;n_{\mathrm{d}},n_{\mathrm{n}}}^{\mathrm{nce}}, \hat{c}_{f;n_{\mathrm{d}},n_{\mathrm{n}}}^{\mathrm{nce}})$ be a solution of
+
+$$
+\underline {{\hat {\theta}}} _ {f; n _ {\mathrm {d}}, n _ {\mathrm {n}}} ^ {\mathrm {n c e}} \in \arg \min _ {\underline {{\theta}} \in \Theta \times \mathbb {R}} \hat {\mathcal {L}} _ {f} ^ {\mathrm {n c e}} (\underline {{\theta}}).
+$$
+
Let $n_{\mathrm{n}} \triangleq \beta n_{\mathrm{d}}$ for some $\beta > 0$. If $\hat{\mathcal{L}}_f^{\mathrm{nce}}(\underline{\theta}) \xrightarrow{p} \mathcal{L}_f^{\mathrm{nce}}(\underline{\theta})$ as $n_{\mathrm{d}} \to \infty$ uniformly over $\underline{\theta} \in \Theta \times \mathbb{R}$, then $\hat{\underline{\theta}}_{f;n_{\mathrm{d}},n_{\mathrm{n}}}^{\mathrm{nce}} \xrightarrow{p} (\theta^{\star},c^{\star})$ as $n_{\mathrm{d}} \to \infty$. Further, if $\theta^{\star} \in \operatorname{int}(\Theta)$, we have $\sqrt{n_{\mathrm{d}}} (\hat{\theta}_{f;n_{\mathrm{d}},n_{\mathrm{n}}}^{\mathrm{nce}} - \theta^{\star}) \xrightarrow{d} \mathcal{N}(0,\mathcal{V}_{f}^{\mathrm{nce}})$, where we define $\mathcal{V}_f^{\mathrm{nce}} \triangleq \mathcal{I}_f^{-1}\mathcal{C}_f\mathcal{I}_f^{-1}$,
+
+$$
\mathcal {I} _ {f} \triangleq \mathbb {E} _ {q _ {\mathrm {d}}} \left[ \rho_ {\theta^ {\star}} f ^ {\prime \prime} \left(\rho_ {\theta^ {\star}}\right) \underline {{\psi}} \, \underline {{\psi}} ^ {\intercal} \right],
+$$
+
+$$
+\mathcal {C} _ {f} \triangleq \mathbb {E} _ {q _ {\mathrm {d}}} \left[ \left(1 + \frac {\nu}{\beta} \rho_ {\theta^ {\star}}\right) \rho_ {\theta^ {\star}} ^ {2} f ^ {\prime \prime} (\rho_ {\theta^ {\star}}) ^ {2} \underline {{\psi}} \underline {{\psi}} ^ {\intercal} \right] - \left(1 + \frac {1}{\beta}\right) \mathbb {E} _ {q _ {\mathrm {d}}} [ \rho_ {\theta^ {\star}} f ^ {\prime \prime} (\rho_ {\theta^ {\star}}) \underline {{\psi}} ] \mathbb {E} _ {q _ {\mathrm {d}}} [ \rho_ {\theta^ {\star}} f ^ {\prime \prime} (\rho_ {\theta^ {\star}}) \underline {{\psi}} ] ^ {\intercal},
+$$
+
for $\underline{\psi}(x) \triangleq [\psi(x); 1] \in \mathbb{R}^{p+1}$, provided that $\mathcal{I}_f$ is invertible. In particular, the asymptotic covariance satisfies $\mathcal{V}_f^{\mathrm{nce}} \succeq \mathcal{V}_{f_{\log}}^{\mathrm{nce}}$ for any $f$; equivalently, $\mathcal{V}_f^{\mathrm{nce}} - \mathcal{V}_{f_{\log}}^{\mathrm{nce}}$ is a PSD matrix.
+
This result is known; we present a rephrased version here to contextualize our contribution. The asymptotic convergence beyond the exponential family was established in (Gutmann & Hyvarinen, 2012) for $f_{\log}$-NCE and in (Pihlaja et al., 2010; Uehara et al., 2018) for general $f$-NCE. The optimality of $f_{\log}$ was established in (Uehara et al., 2018). Independently, Riou-Durand & Chopin (2018) proved that the asymptotic covariance of the original $f_{\log}$-NCE estimator is not larger, in Loewner order, than that of the $f_1$-NCE estimator (and thus the MC-MLE estimator), which they call the IS estimator. In the same paper, the asymptotic guarantee for the $f_{\log}$-NCE and IS estimators was shown for a general unnormalized distribution under the non-i.i.d. setting of (Barthelmé & Chopin, 2015).
+
+# C.2. $\alpha$ -CentNCE
+
+Theorem C.2 (CentNCE: asymptotic guarantee). Assume that any expectation over $q_{\mathfrak{n}}$ in the $\alpha$ -CentNCE objective can be computed for any $\theta$ without samples from $q_{\mathfrak{n}}$ . Let $\hat{\theta}_{\alpha ;n_d}^{\mathrm{cent}}$ be a solution of
+
+$$
+\hat {\theta} _ {\alpha ; n _ {d}} ^ {\text {c e n t}} \in \arg \min _ {\theta \in \Theta} \mathcal {L} _ {\alpha} ^ {\text {c e n t}} (\theta ; \hat {q} _ {d}, q _ {n}).
+$$
+
If $\mathcal{L}_{\alpha}^{\mathrm{cent}}(\theta; \hat{q}_{\mathrm{d}}, q_{\mathrm{n}}) \xrightarrow{p} \mathcal{L}_{\alpha}^{\mathrm{cent}}(\theta; q_{\mathrm{d}}, q_{\mathrm{n}})$ as $n_{\mathrm{d}} \to \infty$ uniformly over $\theta \in \Theta$, then $\hat{\theta}_{\alpha; n_{\mathrm{d}}}^{\mathrm{cent}} \xrightarrow{p} \theta^{\star}$ as $n_{\mathrm{d}} \to \infty$. Further, if $\theta^{\star} \in \operatorname{int}(\Theta)$, we have $\sqrt{n_{\mathrm{d}}} (\hat{\theta}_{\alpha; n_{\mathrm{d}}}^{\mathrm{cent}} - \theta^{\star}) \xrightarrow{d} \mathcal{N}(0, \mathcal{V}_{\alpha}^{\mathrm{cent}})$, where we define $\mathcal{V}_{\alpha}^{\mathrm{cent}} \triangleq \tilde{\mathcal{I}}_{\alpha}^{-1} \tilde{\mathcal{C}}_{\alpha} \tilde{\mathcal{I}}_{\alpha}^{-1}$,
+
+$$
\begin{array}{l} \tilde {\mathcal {I}} _ {\alpha} \triangleq (1 - \alpha) \mathbb {E} _ {q _ {\mathrm {d}}} [ \tilde {r} _ {\theta^ {\star}; \alpha} ^ {\alpha - 1} (\psi - \mathbb {E} _ {q _ {\mathrm {n}}} [ \tilde {r} _ {\theta^ {\star}; \alpha} ^ {\alpha} \psi ]) (\psi - \mathbb {E} _ {q _ {\mathrm {n}}} [ \tilde {r} _ {\theta^ {\star}; \alpha} ^ {\alpha} \psi ]) ^ {\intercal} ] \\ + \alpha \mathbb {E} _ {q _ {\mathrm {d}}} [ \tilde {r} _ {\theta^ {\star}; \alpha} ^ {\alpha - 1} ] (\mathbb {E} _ {q _ {\mathrm {n}}} [ \tilde {r} _ {\theta^ {\star}; \alpha} ^ {\alpha} \psi \psi^ {\intercal} ] - \mathbb {E} _ {q _ {\mathrm {n}}} [ \tilde {r} _ {\theta^ {\star}; \alpha} ^ {\alpha} \psi ] \mathbb {E} _ {q _ {\mathrm {n}}} [ \tilde {r} _ {\theta^ {\star}; \alpha} ^ {\alpha} \psi ] ^ {\intercal}), \\ \end{array}
+
+$$
+\tilde {\mathcal {C}} _ {\alpha} \triangleq \mathbb {E} _ {q _ {\mathrm {d}}} [ \tilde {r} _ {\theta^ {\star}; \alpha} ^ {2 (\alpha - 1)} (\psi - \mathbb {E} _ {q _ {\mathrm {n}}} [ \tilde {r} _ {\theta^ {\star}; \alpha} ^ {\alpha} \psi ]) (\psi - \mathbb {E} _ {q _ {\mathrm {n}}} [ \tilde {r} _ {\theta^ {\star}; \alpha} ^ {\alpha} \psi ]) ^ {\intercal} ],
+$$
+
+provided that $\tilde{\mathcal{I}}_{\alpha}$ is invertible. Here, note that $\tilde{r}_{\theta^{\star};\alpha}^{\alpha}(x) = \frac{\left(\frac{q_{\mathrm{d}}(x)}{q_{\mathrm{n}}(x)}\right)^{\alpha}}{\mathbb{E}_{q_{\mathrm{n}}}\left[(\frac{q_{\mathrm{d}}}{q_{\mathrm{n}}})^{\alpha}\right]}$ .
+
In particular, this result recovers the asymptotic convergence of MLE for $\alpha = 1$, and generalizes the analysis of GlobalGISO of (Shah et al., 2023) beyond the case where $q_{\mathrm{n}}$ is the uniform distribution.
+
+# C.3. $f$ -CondNCE
+
+Theorem C.3 (f-CondNCE: asymptotic guarantee). Let $\hat{\theta}_{f;n_{\mathrm{d}}}^{\mathrm{cond}}$ be a solution of
+
+$$
+\hat {\theta} _ {f; n _ {\mathrm {d}}} ^ {\text {c o n d}} \in \arg \min _ {\theta \in \Theta} \hat {\mathcal {L}} _ {f} ^ {\text {c o n d}} (\theta).
+$$
+
If $\hat{\mathcal{L}}_f^{\mathrm{cond}}(\theta)\xrightarrow{p}\mathcal{L}_f^{\mathrm{cond}}(\theta)$ as $n_{\mathrm{d}}\to \infty$ uniformly over $\theta \in \Theta$, then $\hat{\theta}_{f;n_{\mathrm{d}}}^{\mathrm{cond}}\xrightarrow{p}\theta^{\star}$ as $n_{\mathrm{d}}\to \infty$. Further, if $\theta^{\star}\in \operatorname {int}(\Theta)$, we have $\sqrt{n_{\mathrm{d}}} (\hat{\theta}_{f;n_{\mathrm{d}}}^{\mathrm{cond}} - \theta^{\star})\xrightarrow{d}\mathcal{N}(0,\check{\mathcal{V}}_f^{\mathrm{cond}})$, where we define $\check{\mathcal{V}}_f^{\mathrm{cond}}\triangleq \check{\mathcal{I}}_f^{-1}\check{\mathcal{C}}_f\check{\mathcal{I}}_f^{-1}$,
+
+$$
+\check {\mathcal {I}} _ {f} \triangleq \mathbb {E} _ {q _ {\mathrm {d}} (x) \pi (y | x)} [ \rho_ {\theta^ {\star}} ^ {2} f ^ {\prime \prime} (\rho_ {\theta^ {\star}}) (\psi (x) - \psi (y)) (\psi (x) - \psi (y)) ^ {\intercal} ],
+$$
+
+$$
+\check {\mathcal {C}} _ {f} \triangleq \mathbb {E} _ {q _ {\mathrm {d}} (x) \pi (y | x)} [ \xi_ {\mathrm {c o n d}, f} ^ {(1)} (\rho_ {\theta^ {\star}}) ^ {2} (\psi (x) - \psi (y)) (\psi (x) - \psi (y)) ^ {\intercal} ],
+$$
+
+provided that $\check{\mathcal{I}}_f$ is invertible. Here, $\rho_{\theta} = \rho_{\theta}(x,y)$ and $\xi_{\mathrm{cond},f}^{(1)}(\rho)\triangleq \rho^{-1}f^{\prime \prime}(\rho^{-1}) + \rho^2 f^{\prime \prime}(\rho)$ .
+
+# D. Finite-Sample Guarantees
+
+For the finite-sample analysis of the regularized NCE estimators, we invoke the result of Negahban et al. (2012):
+
+Theorem 4.4 (Negahban et al., 2012, Corollary 1). Let $z_{1},\ldots ,z_{N}$ be i.i.d. samples drawn from a distribution $p(z)$ . Let $h_\theta (z)$ be a convex and differentiable function parameterized by $\theta \in \Theta$ . Let $\hat{\mathcal{L}}_n(\theta)\triangleq \frac{1}{n}\sum_{i = 1}^{n}h_{\theta}(z_i)$ denote the empirical objective function. Define
+
+$$
+\hat {\theta} _ {n} \in \arg \min _ {\theta} \left\{\hat {\mathcal {L}} _ {n} (\theta) + \lambda_ {n} \mathcal {R} (\theta) \right\}, \tag {9}
+$$
+
+where $\lambda_{n}$ is a regularization penalty and $\mathcal{R}\colon \Theta \to \mathbb{R}_{\geq 0}$ is a norm over $\Theta$ . Let $\theta^{\star} \in \arg \min_{\theta} \mathbb{E}_{p(z)}[h_{\theta}(z)]$ . Assume that
+
+1. The regularization penalty $\lambda_{n}$ satisfies $\lambda_{n} \geq 2\mathcal{R}^{*}(\nabla_{\theta}\hat{\mathcal{L}}_{n}(\theta^{\star}))$ , where $\mathcal{R}^{*} \colon \Theta^{*} \to \mathbb{R}_{\geq 0}$ is a dual norm of $\mathcal{R}$ over the dual space $\Theta^{*}$ ;
+2. The empirical objective $\theta \mapsto \hat{\mathcal{L}}_n(\theta)$ satisfies a restricted strong convexity condition at $\theta = \theta^{\star}$ with curvature $\kappa > 0$ , i.e., $\Delta_{\hat{\mathcal{L}}_n(\theta)}(\theta, \theta^{\star}) \geq \kappa \| \theta - \theta^{\star} \|_2^2$ .
+
+Then, the estimator $\hat{\theta}_n$ in Eq. (9) satisfies
+
+$$
+\| \hat {\theta} _ {n} - \theta^ {\star} \| _ {2} \leq 3 \frac {\lambda_ {n}}{\kappa} \gamma_ {\mathcal {R}; 2}.
+$$
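+To make the roles of $\lambda_n$ , $\kappa$ , and $\gamma_{\mathcal{R};2}$ concrete, the following minimal sketch instantiates Theorem 4.4 on a toy problem (all numbers are made up for illustration): an isotropic quadratic empirical loss with an $\ell_1$ penalty, where the penalized minimizer is coordinatewise soft-thresholding and the error bound can be checked directly.
+
+```python
+import math
+
+# Toy instance of the Negahban et al. (2012) bound, not the NCE setting:
+# empirical loss L(theta) = (c/2)*||theta - a||^2, regularizer R = l1 norm
+# (dual norm: l_infinity), so the penalized minimizer is soft-thresholding.
+c = 1.0                      # Hessian scale of the quadratic loss
+theta_star = [1.0, 0.0]      # population minimizer
+a = [1.05, -0.03]            # empirical minimizer (theta_star plus noise)
+
+grad_at_star = [c * (t - ai) for t, ai in zip(theta_star, a)]
+# Condition 1: lambda_n >= 2 * R*(grad) = 2 * ||grad||_infinity.
+lam = 2.0 * max(abs(g) for g in grad_at_star)
+
+def soft(x, t):
+    # Soft-thresholding: closed-form l1-penalized minimizer per coordinate.
+    return math.copysign(max(abs(x) - t, 0.0), x)
+
+theta_hat = [soft(ai, lam / c) for ai in a]
+err = math.sqrt(sum((h - t) ** 2 for h, t in zip(theta_hat, theta_star)))
+
+# Condition 2: Delta_L(theta, theta_star) = (c/2)||theta - theta_star||^2,
+# so the restricted-strong-convexity curvature is kappa = c/2.
+kappa = c / 2.0
+# For the l1 norm on R^p, gamma_{R;2} <= sqrt(p).
+bound = 3.0 * lam * math.sqrt(len(theta_star)) / kappa
+assert err <= bound
+print(f"error {err:.4f} <= bound {bound:.4f}")
+```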
+
+# D.1. $f$ -NCE
+
+Theorem 4.1 (f-NCE: finite-sample guarantee). Pick a strictly convex function $f\colon \mathbb{R}_+ \to \mathbb{R}$ . Define
+
+$$
+\left(\rho_ {\min }, \rho_ {\max }\right) \triangleq \Big (\inf _ {x \in \mathcal {X}, \underline {{\theta}} \in \Theta \times \mathbb {R}} \rho_ {\underline {{\theta}}} (x), \sup _ {x \in \mathcal {X}, \underline {{\theta}} \in \Theta \times \mathbb {R}} \rho_ {\underline {{\theta}}} (x) \Big)
+$$
+
+and define the quantities in Eq. (5) accordingly. For $\mathfrak{r} \in \{\mathsf{d}, \mathsf{n}\}$ , define
+
+$$
+\lambda_ {\min , \mathsf {r}} ^ {\mathrm {n c e}} \triangleq \lambda_ {\min} \left(\mathbb {E} _ {q _ {\mathsf {r}}} [ \psi \psi^ {\mathsf {T}} ]\right).
+$$
+
+Let $\hat{\theta}_{f,n_{\mathrm{d}},n_{\mathrm{n}}}^{\mathrm{nce},\mathcal{R}}$ be such that
+
+$$
+\hat {\theta} _ {f, n _ {\mathrm {d}}, n _ {\mathrm {n}}} ^ {\mathrm {n c e}, \mathcal {R}} \in \arg \min _ {\theta \in \Theta} \left\{\mathcal {L} _ {f} ^ {\mathrm {n c e}} (\theta ; \hat {q} _ {\mathrm {d}}, \hat {q} _ {\mathrm {n}}) + \lambda_ {n _ {\mathrm {d}}, n _ {\mathrm {n}}} \mathcal {R} (\theta) \right\}
+$$
+
+for some $\lambda_{n_{\mathsf{d}},n_{\mathsf{n}}} > 0$ . Then, for any $\Delta >0$ and $\delta \in (0,1)$ , there exists a choice of $\lambda_{n_{\mathsf{d}},n_{\mathsf{n}}}$ such that $\| \hat{\theta}_{f,n_{\mathsf{d}},n_{\mathsf{n}}}^{\mathrm{nce},\mathcal{R}} - \theta^{\star}\| _2\leq \Delta$ with probability $\geq 1 - \delta$ , provided that for each $\mathsf{r}\in \{\mathsf{d},\mathsf{n}\}$ ,
+
+$$
+n _ {\mathsf {r}} = \Omega \Bigg (\max \bigg \{\frac {(B _ {\mathsf {n c e} , f , \mathsf {r}} ^ {(1)}) ^ {2} \gamma_ {\mathcal {R} ; 2} ^ {2} \gamma_ {\mathcal {R} ^ {*} ; \infty} ^ {2} \psi_ {\max } ^ {2}}{\Delta^ {2} (\nu^ {- 1} b _ {\mathsf {n c e} , f , \mathsf {d}} ^ {(2)} \lambda_ {\min , \mathsf {d}} ^ {\mathrm {n c e}} + b _ {\mathsf {n c e} , f , \mathsf {n}} ^ {(2)} \lambda_ {\min , \mathsf {n}} ^ {\mathrm {n c e}}) ^ {2}}, \frac {\gamma_ {1 ; 2} ^ {4} \psi_ {\max } ^ {4}}{(\lambda_ {\min , \mathsf {r}} ^ {\mathrm {n c e}}) ^ {2}} \bigg \} \log \frac {p ^ {2}}{\delta} \Bigg).
+$$
+
+We need to show two properties. First, the empirical gradient $\nabla_{\theta}\hat{\mathcal{L}}_f^{\mathrm{nce}}(\theta)$ is nearly zero at $\theta = \theta^{\star}$ with high probability (Proposition D.1). Second, the empirical Hessian $\nabla_{\theta}^{2}\hat{\mathcal{L}}_{f}^{\mathrm{nce}}(\theta)$ has strictly positive curvature (i.e., exhibits restricted strong convexity) at $\theta = \theta^{\star}$ with high probability (Proposition D.2).
+
+Proposition D.1 (Vanishing gradient). (cf. (Shah et al., 2021b, Proposition F.1).) Assume Assumption 4.1. For any $\delta \in (0,1)$ , $\epsilon > 0$ ,
+
+$$
+\left\| \nabla_ {\theta} \hat {\mathcal {L}} _ {f} ^ {\mathrm {n c e}} \left(\theta^ {\star}\right) \right\| _ {\max } \leq \epsilon
+$$
+
+with probability $\geq 1 - \delta$ , if $n_{\mathsf{r}} \geq \frac{2\psi_{\max}^{2}(B_{\mathrm{nce},f,\mathsf{r}}^{(1)})^{2}}{\epsilon^{2}} \log \frac{2p}{\delta}$ for each $\mathsf{r} \in \{\mathsf{d}, \mathsf{n}\}$ .
+
+Proof. Recall from Lemma B.3 that
+
+$$
+\nabla_ {\theta} \hat {\mathcal {L}} _ {f} ^ {\mathrm {n c e}} (\theta) = - \frac {1}{\nu} \mathbb {E} _ {\hat {q} _ {\mathrm {d}}} [ \psi \rho_ {\theta} f ^ {\prime \prime} (\rho_ {\theta}) ] + \mathbb {E} _ {\hat {q} _ {\mathrm {n}}} [ \psi \rho_ {\theta} ^ {2} f ^ {\prime \prime} (\rho_ {\theta}) ].
+$$
+
+Therefore, we have
+
+$$
+\mathbb {E} \left[ \partial_ {\theta_ {i}} \hat {\mathcal {L}} _ {f} ^ {\mathrm {n c e}} \left(\theta^ {\star}\right) \right] = \partial_ {\theta_ {i}} \mathcal {L} _ {f} ^ {\mathrm {n c e}} \left(\theta^ {\star}\right) = 0.
+$$
+
+Since $|\psi_i(x)\xi_{\mathsf{nce},f,\mathsf{d}}^{(1)}(\rho_\theta (x))| \leq \psi_{\max}B_{\mathsf{nce},f,\mathsf{d}}^{(1)}$ and $|\psi_i(x)\xi_{\mathsf{nce},f,\mathsf{n}}^{(1)}(\rho_\theta (x))| \leq \psi_{\max}B_{\mathsf{nce},f,\mathsf{n}}^{(1)}$ , by Hoeffding's inequality and a union bound, we have
+
+$$
+\mathbb {P} (| \partial_ {\theta_ {i}} \hat {\mathcal {L}} _ {f} ^ {\mathrm {n c e}} (\theta^ {\star}) | \geq \epsilon) \leq 2 \exp \Bigl (- \frac {n _ {\mathrm {d}} \epsilon^ {2}}{2 \psi_ {\max } ^ {2} (B _ {\mathrm {n c e} , f , \mathrm {d}} ^ {(1)}) ^ {2}} \Bigr) + 2 \exp \Bigl (- \frac {n _ {\mathrm {n}} \epsilon^ {2}}{2 \psi_ {\max } ^ {2} (B _ {\mathrm {n c e} , f , \mathrm {n}} ^ {(1)}) ^ {2}} \Bigr) = \delta ,
+$$
+
+if $n_{\mathrm{d}} \geq \frac{2\psi_{\max}^{2}(B_{\mathrm{nce},f,\mathrm{d}}^{(1)})^{2}}{\epsilon^{2}} \log \frac{2}{\delta}$ and $n_{\mathrm{n}} \geq \frac{2\psi_{\max}^{2}(B_{\mathrm{nce},f,n}^{(1)})^{2}}{\epsilon^{2}} \log \frac{2}{\delta}$ . By taking a union bound over $p$ different coordinates of $\theta$ , we conclude the proof.
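+To see how this sample-size rule plays out numerically, the following sketch (toy bounded variables standing in for the gradient coordinates, not the NCE loss itself) draws $n = \lceil (2c^2 / \epsilon^2)\log (2p / \delta)\rceil$ samples per coordinate with $c = \psi_{\max}B$ , and checks empirically that the maximum coordinate of the empirical mean exceeds $\epsilon$ in at most a $\delta$ fraction of trials.
+
+```python
+import math
+import random
+
+# Hoeffding + union bound sanity check: p coordinates of i.i.d. mean-zero
+# variables bounded by c. Proposition D.1's rule gives the sample size n
+# that keeps max_i |empirical mean_i| <= eps with probability >= 1 - delta.
+random.seed(0)
+c, eps, p, delta = 1.0, 0.2, 5, 0.05
+n = math.ceil(2 * c**2 / eps**2 * math.log(2 * p / delta))
+
+trials, failures = 200, 0
+for _ in range(trials):
+    means = [sum(random.uniform(-c, c) for _ in range(n)) / n
+             for _ in range(p)]
+    if max(abs(m) for m in means) > eps:
+        failures += 1
+
+# Hoeffding is conservative, so the observed failure rate is far below delta.
+assert failures / trials <= delta
+print(n, failures)
+```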
+
+Lemma D.1. (cf. (Shah et al., 2021a, Lemma E.1)) Assume Assumption 4.1. Let $r$ be either $q_{\mathrm{d}}$ or $q_{\mathrm{n}}$ . For any $\epsilon_2 > 0$ ,
+
+$$
+\max _ {i j} \left| \mathbb {E} _ {\hat {r}} \left[ \psi_ {i} \psi_ {j} \right] - \mathbb {E} _ {r} \left[ \psi_ {i} \psi_ {j} \right] \right| \leq \epsilon_ {2},
+$$
+
+with probability $\geq 1 - \delta_{2}$ , if
+
+$$
+n _ {r} \geq \frac {2 \psi_ {\operatorname* {m a x}} ^ {4}}{\epsilon_ {2} ^ {2}} \log \frac {2 p ^ {2}}{\delta_ {2}}.
+$$
+
+Proof. Since $|\psi_i(x)\psi_j(x)| \leq \psi_{\max}^2$ is a bounded random variable, by Hoeffding's inequality, we have
+
+$$
+\mathbb {P} _ {r} \{\left| \mathbb {E} _ {\hat {r}} [ \psi_ {i} \psi_ {j} ] - \mathbb {E} _ {r} [ \psi_ {i} \psi_ {j} ] \right| > \epsilon_ {2} \} \leq 2 \exp \left(- \frac {n _ {r} \epsilon_ {2} ^ {2}}{2 \psi_ {\mathrm {m a x}} ^ {4}}\right)
+$$
+
+Taking a union bound over $i, j \in [p]$ leads to the desired bound.
+
+Recall that for a function $h\colon \Theta \to \mathbb{R}$ , the Bregman divergence is defined as
+
+$$
+\Delta_ {h} \left(\theta , \theta_ {o}\right) \triangleq h (\theta) - h \left(\theta_ {o}\right) - \langle \nabla_ {\theta} h \left(\theta_ {o}\right), \theta - \theta_ {o} \rangle .
+$$
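+As a quick sanity check of this definition, the squared-norm potential $h(\theta) = \frac{1}{2}\|\theta\|_2^2$ has Bregman divergence exactly $\frac{1}{2}\|\theta - \theta_o\|_2^2$ , which is the prototype of the curvature lower bound appearing in the restricted strong convexity condition. A minimal sketch:
+
+```python
+# Bregman divergence Delta_h(theta, theta_o) = h(theta) - h(theta_o)
+# - <grad h(theta_o), theta - theta_o>; for h = 0.5*||.||^2 it equals
+# half the squared Euclidean distance.
+def dot(u, v):
+    return sum(a * b for a, b in zip(u, v))
+
+def bregman(h, grad_h, theta, theta0):
+    diff = [a - b for a, b in zip(theta, theta0)]
+    return h(theta) - h(theta0) - dot(grad_h(theta0), diff)
+
+h = lambda t: 0.5 * dot(t, t)
+grad_h = lambda t: list(t)
+
+theta, theta0 = [2.0, -1.0], [0.5, 0.5]
+d = bregman(h, grad_h, theta, theta0)
+expected = 0.5 * sum((a - b) ** 2 for a, b in zip(theta, theta0))
+assert abs(d - expected) < 1e-12
+```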
+
+Proposition D.2 (Restricted strong convexity). (cf. (Shah et al., 2021a, Proposition E.1)) Under Assumption 4.1,
+
+$$
+\Delta_ {\hat {\mathcal {L}} _ {f} ^ {\mathrm {n c e}}} (\theta , \theta^ {\star}) \geq \frac {1}{4} \left(\frac {b _ {\mathrm {n c e} , f , \mathrm {d}} ^ {(2)}}{\nu} \lambda_ {\min , \mathrm {d}} + b _ {\mathrm {n c e}, f, \mathrm {n}} ^ {(2)} \lambda_ {\min , \mathrm {n}}\right) \| \theta - \theta^ {\star} \| _ {2} ^ {2}
+$$
+
+with probability $\geq 1 - \delta$ , if $n_{r} \geq \frac{8\gamma_{1;2}^{4}\psi_{\max}^{4}}{\lambda_{\min,r}^{2}}\log \frac{4p^{2}}{\delta}$ for each $r \in \{d, n\}$ .
+
+Proof. By the intermediate value theorem, there exists $\xi \in \{t\theta + (1 - t)\theta^{\star} : t \in [0, 1]\}$ such that
+
+$$
+\begin{array}{l} \Delta_ {\hat {\mathcal {L}} _ {f} ^ {\mathrm {n c e}}} (\theta , \theta^ {\star}) = \hat {\mathcal {L}} _ {f} ^ {\mathrm {n c e}} (\theta) - \hat {\mathcal {L}} _ {f} ^ {\mathrm {n c e}} (\theta^ {\star}) - \left\langle \nabla_ {\theta} \hat {\mathcal {L}} _ {f} ^ {\mathrm {n c e}} (\theta^ {\star}), \theta - \theta^ {\star} \right\rangle \\ = \frac {1}{2} (\theta - \theta^ {\star}) ^ {\mathsf {T}} \nabla_ {\theta} ^ {2} \hat {\mathcal {L}} _ {f} ^ {\mathsf {n c e}} (\xi) (\theta - \theta^ {\star}). \\ \end{array}
+$$
+
+Here, note that $\xi$ depends on $\hat{q}_{\mathrm{d}}$ and $\hat{q}_{\mathrm{n}}$ . Let $z \triangleq \langle \psi(x), \theta - \theta^{\star} \rangle$ .
+
+$$
+\begin{array}{l} (\theta - \theta^ {\star}) ^ {\intercal} \nabla_ {\theta} ^ {2} \hat {\mathcal {L}} _ {f} ^ {\mathrm {n c e}} (\xi) (\theta - \theta^ {\star}) = \frac {1}{\nu} \mathbb {E} _ {\hat {q} _ {\mathrm {d}}} [ z ^ {2} \rho_ {\xi} g _ {f} (\rho_ {\xi}) ] + \mathbb {E} _ {\hat {q} _ {\mathrm {n}}} [ z ^ {2} \rho_ {\xi} ^ {2} (f ^ {\prime \prime} (\rho_ {\xi}) - g _ {f} (\rho_ {\xi})) ] \\ \geq \frac {b _ {\mathrm {n c e} , f , \mathrm {d}} ^ {(2)}}{\nu} \mathbb {E} _ {\hat {q} _ {\mathrm {d}}} [ z ^ {2} ] + b _ {\mathrm {n c e}, f, \mathrm {n}} ^ {(2)} \mathbb {E} _ {\hat {q} _ {\mathrm {n}}} [ z ^ {2} ] \\ = (\theta - \theta^ {\star}) ^ {\intercal} \Big (\frac {b _ {\mathrm {n c e} , f , \mathrm {d}} ^ {(2)}}{\nu} \mathbb {E} _ {\hat {q} _ {\mathrm {d}}} [ \psi \psi^ {\intercal} ] + b _ {\mathrm {n c e}, f, \mathrm {n}} ^ {(2)} \mathbb {E} _ {\hat {q} _ {\mathrm {n}}} [ \psi \psi^ {\intercal} ] \Big) (\theta - \theta^ {\star}). \\ \end{array}
+$$
+
+We can lower bound the quadratic form as follows. The first term can be lower bounded as
+
+$$
+\begin{array}{l} (\theta - \theta^ {\star}) ^ {\intercal} \mathbb {E} _ {\hat {q} _ {\mathrm {d}}} [ \psi \psi^ {\intercal} ] (\theta - \theta^ {\star}) \\ = (\theta - \theta^ {\star}) ^ {\intercal} \left(\mathbb {E} _ {\hat {q} _ {\mathrm {d}}} [ \psi \psi^ {\intercal} ] - \mathbb {E} _ {q _ {\mathrm {d}}} [ \psi \psi^ {\intercal} ] + \mathbb {E} _ {q _ {\mathrm {d}}} [ \psi \psi^ {\intercal} ]\right) (\theta - \theta^ {\star}) \\ = \sum_ {i j} (\theta - \theta^ {\star}) _ {i} \left(\mathbb {E} _ {\hat {q} _ {\mathrm {d}}} [ \psi_ {i} \psi_ {j} ] - \mathbb {E} _ {q _ {\mathrm {d}}} [ \psi_ {i} \psi_ {j} ]\right) (\theta - \theta^ {\star}) _ {j} + (\theta - \theta^ {\star}) ^ {\intercal} \mathbb {E} _ {q _ {\mathrm {d}}} [ \psi \psi^ {\intercal} ] (\theta - \theta^ {\star}) \\ \geq - \sum_ {i j} | \theta_ {i} - \theta^ {\star} _ {i} | \cdot | \mathbb {E} _ {\hat {q} _ {\mathrm {d}}} [ \psi_ {i} \psi_ {j} ] - \mathbb {E} _ {q _ {\mathrm {d}}} [ \psi_ {i} \psi_ {j} ] | \cdot | \theta_ {j} - \theta^ {\star} _ {j} | + \lambda_ {\min , \mathrm {d}} \| \theta - \theta^ {\star} \| _ {2} ^ {2} \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \stackrel {(a)} {\geq} - \epsilon_ {2} \| \theta - \theta^ {\star} \| _ {1} ^ {2} + \lambda_ {\min , \mathrm {d}} \| \theta - \theta^ {\star} \| _ {2} ^ {2} \\ \stackrel {(b)} {\geq} - \epsilon_ {2} \gamma_ {1; 2} ^ {2} \| \theta - \theta^ {\star} \| _ {2} ^ {2} + \lambda_ {\min , \mathrm {d}} \| \theta - \theta^ {\star} \| _ {2} ^ {2} \\ = \frac {1}{2} \lambda_ {\min , \mathrm {d}} \| \theta - \theta^ {\star} \| _ {2} ^ {2} \\ \end{array}
+$$
+
+with probability $\geq 1 - \delta'$ if $n_{\mathrm{d}} \geq \frac{2\psi_{\mathrm{max}}^4}{\epsilon_2^2} \log \frac{2p^2}{\delta'}$ with $\epsilon_2 = \frac{\lambda_{\mathrm{min,d}}}{2\gamma_{1;2}^2}$ . Here, we apply Lemma D.1 in (a), and use the definition of $\gamma_{1;2}$ to bound $\| \theta - \theta^* \|_1 \leq \gamma_{1;2} \| \theta - \theta^* \|_2$ in (b).
+
+Hence, by a union bound with $\delta' = \delta / 2$ , with probability $\geq 1 - \delta$ , we have
+
+$$
+\Delta_ {\hat {\mathcal {L}} _ {f} ^ {\mathrm {n c e}}} (\theta , \theta^ {\star}) \geq \frac {1}{4} \left(\frac {b _ {\mathrm {n c e} , f , \mathrm {d}} ^ {(2)}}{\nu} \lambda_ {\min , \mathrm {d}} + b _ {\mathrm {n c e}, f, \mathrm {n}} ^ {(2)} \lambda_ {\min , \mathrm {n}}\right) \| \theta - \theta^ {\star} \| _ {2} ^ {2},
+$$
+
+if $n_{\mathsf{d}} \geq \frac{8\gamma_{1;2}^{4}\psi_{\max}^{4}}{\lambda_{\min,\mathsf{d}}^{2}}\log \frac{4p^{2}}{\delta}$ and $n_{\mathsf{n}} \geq \frac{8\gamma_{1;2}^{4}\psi_{\max}^{4}}{\lambda_{\min,\mathsf{n}}^{2}}\log \frac{4p^{2}}{\delta}$ .
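+The mechanism of this proof — entrywise concentration of the empirical second-moment matrix (Lemma D.1) translating into a curvature of at least half the population $\lambda_{\min}$ — can be illustrated on a toy feature map (a hypothetical choice: independent Rademacher coordinates, so $\mathbb{E}[\psi \psi^{\intercal}] = I$ and $\lambda_{\min} = 1$ , and the $2\times 2$ eigenvalues are available in closed form):
+
+```python
+import random
+
+# psi(x) = x with independent Rademacher coordinates: diagonal entries of
+# the empirical second-moment matrix are exactly 1 (x_i^2 = 1), and the
+# off-diagonal entry is an O(1/sqrt(n)) empirical cross moment.
+random.seed(1)
+n = 4000
+s12 = sum(random.choice([-1, 1]) * random.choice([-1, 1])
+          for _ in range(n)) / n
+# Eigenvalues of [[1, s12], [s12, 1]] are 1 +/- |s12|.
+lam_min_hat = 1.0 - abs(s12)
+# Empirical curvature stays above half the population lambda_min = 1.
+assert lam_min_hat >= 0.5
+```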
+
+Proof of Theorem 4.1. First, note that
+
+$$
+\mathcal {R} ^ {*} \left(\nabla_ {\theta} \hat {\mathcal {L}} _ {f} ^ {\mathrm {n c e}} \left(\theta^ {\star}\right)\right) \leq \gamma_ {\mathcal {R} ^ {*}; \infty} \left\| \nabla_ {\theta} \hat {\mathcal {L}} _ {f} ^ {\mathrm {n c e}} \left(\theta^ {\star}\right) \right\| _ {\max }
+$$
+
+by definition of $\gamma_{\mathcal{R}^*;\infty}$ . Then, by Proposition D.1, we have $\| \nabla_{\theta}\hat{\mathcal{L}}_f^{\mathrm{nce}}(\theta^\star)\|_{\max}\leq \epsilon$ with probability $\geq 1 - \delta_1$ , if
+
+$$
+n _ {\mathrm {r}} \geq \frac {2 (B _ {\mathrm {n c e} , f , r} ^ {(1)}) ^ {2} \psi_ {\mathrm {m a x}} ^ {2}}{\epsilon^ {2}} \log \frac {2 p}{\delta_ {1}}
+$$
+
+for each $\mathsf{r} \in \{\mathsf{d}, \mathsf{n}\}$ . Given that this event occurs, $\mathcal{R}^{*}(\nabla_{\theta} \hat{\mathcal{L}}_f^{\mathrm{nce}}(\theta^{\star})) \leq \gamma_{\mathcal{R}^{*};\infty} \epsilon$ , and thus we set $\lambda_n \gets 2\gamma_{\mathcal{R}^{*};\infty} \epsilon$ to satisfy the first condition in Theorem 4.4.
+
+Now, given $\lambda_{n} \geq 2\mathcal{R}^{*}(\nabla_{\theta}\hat{\mathcal{L}}_{f}^{\mathrm{nce}}(\theta^{\star}))$ , (Negahban et al., 2012, Lemma 1) implies that $\mathcal{R}(\hat{\theta}_{f,n_{\mathrm{d}},n_{\mathrm{n}}}^{\mathrm{nce},\mathcal{R}} - \theta^{\star}) \leq 4\mathcal{R}(\theta^{\star})$ , i.e., $\hat{\theta}_{f,n_{\mathrm{d}},n_{\mathrm{n}}}^{\mathrm{nce},\mathcal{R}} - \theta^{\star} \in 4\Theta$ . Then, by Proposition D.2, we have
+
+$$
+\Delta_ {\hat {\mathcal {L}} _ {f} ^ {\mathrm {n c e}}} (\theta , \theta^ {\star}) \geq \kappa \| \theta - \theta^ {\star} \| _ {2} ^ {2}
+$$
+
+with probability $\geq 1 - \delta_2$ if $n_{\mathsf{r}} \geq \frac{8\gamma_{1;2}^4\psi_{\max}^4}{\lambda_{\min,\mathsf{r}}^2}\log \frac{4p^2}{\delta_2}$ for each $\mathsf{r} \in \{\mathsf{d},\mathsf{n}\}$ , where
+
+$$
+\kappa = \frac {1}{4} \Big (\frac {b _ {\mathrm {n c e} , f , \mathsf {d}} ^ {(2)}}{\nu} \lambda_ {\min , \mathsf {d}} + b _ {\mathrm {n c e}, f, \mathsf {n}} ^ {(2)} \lambda_ {\min , \mathsf {n}} \Big).
+$$
+
+Now, by taking a union bound with $\delta_1 = \delta_2 = \delta / 2$ , with probability $\geq 1 - \delta$ , we have
+
+$$
+\| \hat {\theta} _ {f, n _ {\mathrm {d}}, n _ {\mathrm {n}}} ^ {\mathrm {n c e}, \mathcal {R}} - \theta^ {\star} \| _ {2} \leq \frac {3 \lambda_ {n} \gamma_ {\mathcal {R} ; 2}}{\kappa} = \frac {6 \gamma_ {\mathcal {R} ^ {*} ; \infty} \gamma_ {\mathcal {R} ; 2}}{\kappa} \epsilon = \Delta
+$$
+
+with $\epsilon \gets \frac{\Delta\kappa}{6\gamma_{\mathcal{R}^*;\infty}\gamma_{\mathcal{R};2}}$ provided that
+
+$$
+\begin{array}{l} n _ {\mathsf {r}} \geq \max \Bigl \{\frac {7 2 (B _ {\mathsf {n c e} , f , \mathsf {r}} ^ {(1)}) ^ {2} \gamma_ {\mathcal {R} ; 2} ^ {2} \gamma_ {\mathcal {R} ^ {*} ; \infty} ^ {2} \psi_ {\max} ^ {2}}{\Delta^ {2} \kappa^ {2}} \log \frac {4 p}{\delta}, \frac {8 \gamma_ {1 ; 2} ^ {4} \psi_ {\max} ^ {4}}{\lambda_ {\min , \mathsf {r}} ^ {2}} \log \frac {8 p ^ {2}}{\delta} \Bigr \} \\ = \max \Bigl \{\frac {1 1 5 2 (B _ {\mathsf {n c e} , f , \mathsf {r}} ^ {(1)}) ^ {2} \gamma_ {\mathcal {R} ; 2} ^ {2} \gamma_ {\mathcal {R} ^ {*} ; \infty} ^ {2} \psi_ {\max} ^ {2}}{\Delta^ {2} (\nu^ {- 1} b _ {\mathsf {n c e} , f , \mathsf {d}} ^ {(2)} \lambda_ {\min , \mathsf {d}} + b _ {\mathsf {n c e} , f , \mathsf {n}} ^ {(2)} \lambda_ {\min , \mathsf {n}}) ^ {2}} \log \frac {4 p}{\delta}, \frac {8 \gamma_ {1 ; 2} ^ {4} \psi_ {\max} ^ {4}}{\lambda_ {\min , \mathsf {r}} ^ {2}} \log \frac {8 p ^ {2}}{\delta} \Bigr \} \\ = \Omega \Big (\max \Big \{\frac {(B _ {\mathsf {n c e} , f , \mathsf {r}} ^ {(1)}) ^ {2} \gamma_ {\mathcal {R} ; 2} ^ {2} \gamma_ {\mathcal {R} ^ {*} ; \infty} ^ {2} \psi_ {\max} ^ {2}}{\Delta^ {2} (\nu^ {- 1} b _ {\mathsf {n c e} , f , \mathsf {d}} ^ {(2)} \lambda_ {\min , \mathsf {d}} + b _ {\mathsf {n c e} , f , \mathsf {n}} ^ {(2)} \lambda_ {\min , \mathsf {n}}) ^ {2}}, \frac {\gamma_ {1 ; 2} ^ {4} \psi_ {\max} ^ {4}}{\lambda_ {\min , \mathsf {r}} ^ {2}} \Big \} \log \frac {p ^ {2}}{\delta} \Big) \\ \end{array}
+$$
+
+for each $r \in \{\mathsf{d}, \mathsf{n}\}$ .
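+The constant tracking in the last display (72 over $\kappa^2$ becoming 1152 over the bracketed sum squared, via $\kappa = \frac{1}{4}(\cdot)$ ) can be verified mechanically. The sketch below checks the identity at random positive values of all quantities involved.
+
+```python
+import random
+
+# Bookkeeping check for the proof of Theorem 4.1: with
+# eps = Delta * kappa / (6 * g_dual * g_prim) and kappa = S / 4, the
+# gradient sample-size requirement 2*B^2*psi^2/eps^2 equals both the
+# 72/kappa^2 form and the 1152/S^2 form, for any positive values.
+random.seed(2)
+B, g_prim, g_dual, psi, Delta, S = (random.uniform(0.5, 2.0)
+                                    for _ in range(6))
+kappa = S / 4.0
+eps = Delta * kappa / (6.0 * g_dual * g_prim)
+
+lhs = 2.0 * B**2 * psi**2 / eps**2
+form72 = 72.0 * B**2 * g_prim**2 * g_dual**2 * psi**2 / (Delta**2 * kappa**2)
+form1152 = 1152.0 * B**2 * g_prim**2 * g_dual**2 * psi**2 / (Delta**2 * S**2)
+assert abs(lhs - form72) < 1e-9 * lhs
+assert abs(lhs - form1152) < 1e-9 * lhs
+```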
+
+# D.2. $\alpha$ -CentNCE
+
+Lemma D.2 ( $\alpha$ -CentNCE: derivatives).
+
+$$
+\begin{array}{l} \nabla_ {\theta} \tilde {\mathcal {L}} _ {\alpha} (\theta) = \mathbb {E} _ {q _ {\mathrm {d}}} [ - \tilde {r} _ {\theta ; \alpha} ^ {\alpha - 1} \nabla_ {\theta} \log \tilde {r} _ {\theta ; \alpha} ], \\ \nabla_ {\theta} ^ {2} \tilde {\mathcal {L}} _ {\alpha} (\theta) = \mathbb {E} _ {q _ {\mathtt {d}}} [ \tilde {r} _ {\theta ; \alpha} ^ {\alpha - 1} ((1 - \alpha) \nabla_ {\theta} \log \tilde {r} _ {\theta ; \alpha} \nabla_ {\theta} \log \tilde {r} _ {\theta ; \alpha} ^ {\intercal} - \nabla_ {\theta} ^ {2} \log \tilde {r} _ {\theta ; \alpha}) ]. \\ \end{array}
+$$
+
+Define
+
+$$
+\tilde {\mathcal {C}} _ {\alpha , n _ {\mathrm {d}}} \triangleq \operatorname {C o v} \left(\sqrt {n _ {\mathrm {d}}} \nabla_ {\theta} \hat {\tilde {\mathcal {L}}} _ {\alpha} \left(\theta^ {\star}\right)\right)
+$$
+
+for $n_{\mathsf{d}} \geq 1$ . Then, $\tilde{\mathcal{C}}_{\alpha, n_{\mathsf{d}}} = \tilde{\mathcal{C}}_{\alpha}$ for any $n_{\mathsf{d}} \geq 1$ , where
+
+$$
+\tilde {\mathcal {C}} _ {\alpha} \triangleq \mathbb {E} _ {q _ {\mathrm {d}}} [ \tilde {r} _ {\theta^ {\star}; \alpha} ^ {2 (\alpha - 1)} \nabla_ {\theta} \log \tilde {r} _ {\theta^ {\star}; \alpha} \nabla_ {\theta} \log \tilde {r} _ {\theta^ {\star}; \alpha} ^ {\top} ].
+$$
+
+We also define
+
+$$
+\tilde {\mathcal {I}} _ {\alpha} \triangleq \nabla_ {\theta} ^ {2} \tilde {\mathcal {L}} _ {\alpha} (\theta^ {\star}) = \mathbb {E} _ {q _ {\mathrm {d}}} [ \tilde {r} _ {\theta^ {\star}; \alpha} ^ {\alpha - 1} ((1 - \alpha) \nabla_ {\theta} \log \tilde {r} _ {\theta^ {\star}; \alpha} \nabla_ {\theta} \log \tilde {r} _ {\theta^ {\star}; \alpha} ^ {\top} - \nabla_ {\theta} ^ {2} \log \tilde {r} _ {\theta^ {\star}; \alpha}) ].
+$$
+
+Proof. From Lemma B.4, we have
+
+$$
+\begin{array}{l} \partial_ {\theta_ {i}} \hat {\tilde {\mathcal {L}}} _ {\alpha} (\theta) = \frac {1}{(1 - \alpha)} \mathbb {E} _ {\hat {q} _ {\mathrm {d}}} [ \partial_ {\theta_ {i}} \tilde {r} _ {\theta ; \alpha} ^ {\alpha - 1} ] \\ = - \mathbb {E} _ {\hat {q} _ {\mathrm {d}}} [ \tilde {r} _ {\theta ; \alpha} ^ {\alpha - 2} \partial_ {\theta_ {i}} \tilde {r} _ {\theta ; \alpha} ] \\ = - \mathbb {E} _ {\hat {q} _ {\mathrm {d}}} [ \tilde {r} _ {\theta ; \alpha} ^ {\alpha - 1} (\psi_ {i} - \mathbb {E} _ {q _ {n}} [ \psi_ {i} \tilde {r} _ {\theta ; \alpha} ^ {\alpha} ]) ]. \\ \end{array}
+$$
+
+From this derivative expression, the computation is straightforward.
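+The derivative identity $\nabla_\theta \log \tilde{r}_{\theta;\alpha} = \psi - \mathbb{E}_{q_{\mathrm{n}}}[\psi \tilde{r}_{\theta;\alpha}^{\alpha}]$ underlying this computation can be checked by finite differences on a small discrete support. The sketch below assumes (as an illustrative concrete form, not a definition taken from the text) $\tilde{r}_{\theta;\alpha}(x) = u_\theta(x) / Z_\alpha(\theta)$ with $u_\theta(x) = e^{\langle \theta, \psi(x)\rangle} / q_{\mathrm{n}}(x)$ and $Z_\alpha$ chosen so that $\mathbb{E}_{q_{\mathrm{n}}}[\tilde{r}_{\theta;\alpha}^{\alpha}] = 1$ , which is consistent with the identities used in this proof.
+
+```python
+import math
+
+# Finite-difference check of grad_theta log r_tilde = psi - E_qn[psi*r^alpha]
+# on a 3-point support with scalar theta (all numbers are illustrative).
+qn = [0.2, 0.5, 0.3]          # noise distribution on the support
+psi = [-1.0, 0.3, 2.0]        # sufficient statistic
+alpha, theta = 0.7, 0.4
+
+def r_tilde(th):
+    # u_theta(x) = exp(theta * psi(x)) / qn(x), normalized so that
+    # E_qn[r_tilde^alpha] = 1.
+    u = [math.exp(th * p) / q for p, q in zip(psi, qn)]
+    z = sum(q * ui**alpha for q, ui in zip(qn, u)) ** (1.0 / alpha)
+    return [ui / z for ui in u]
+
+def rhs(th):
+    # psi(x) - E_qn[psi * r_tilde^alpha], the claimed gradient of log r_tilde.
+    r = r_tilde(th)
+    mean = sum(q * p * ri**alpha for q, p, ri in zip(qn, psi, r))
+    return [p - mean for p in psi]
+
+h = 1e-6
+r_plus, r_minus = r_tilde(theta + h), r_tilde(theta - h)
+for i in range(3):
+    fd = (math.log(r_plus[i]) - math.log(r_minus[i])) / (2 * h)
+    assert abs(fd - rhs(theta)[i]) < 1e-6
+```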
+
+Corollary D.1 (GISO: derivatives).
+
+$$
+\begin{array}{l} \nabla_ {\theta} \tilde {\mathcal {L}} _ {0} (\theta) = \mathbb {E} _ {q _ {\mathrm {d}}} [ - \tilde {r} _ {\theta ; 0} ^ {- 1} (\psi - \mathbb {E} _ {q} [ \psi ]) ], \\ \nabla_ {\theta} ^ {2} \tilde {\mathcal {L}} _ {0} (\theta) = \mathbb {E} _ {q _ {\mathrm {d}}} [ \tilde {r} _ {\theta ; 0} ^ {- 1} (\psi - \mathbb {E} _ {q} [ \psi ]) (\psi - \mathbb {E} _ {q} [ \psi ]) ^ {\intercal} ], \\ \tilde {\mathcal {C}} _ {0} = \mathbb {E} _ {q _ {\mathrm {d}}} [ \tilde {r} _ {\theta^ {\star}; 0} ^ {- 2} (\psi - \mathbb {E} _ {q} [ \psi ]) (\psi - \mathbb {E} _ {q} [ \psi ]) ^ {\intercal} ], \\ \tilde {\mathcal {I}} _ {0} = \mathbb {E} _ {q _ {\mathrm {d}}} [ \tilde {r} _ {\theta^ {\star}; 0} ^ {- 1} (\psi - \mathbb {E} _ {q} [ \psi ]) (\psi - \mathbb {E} _ {q} [ \psi ]) ^ {\intercal} ]. \\ \end{array}
+$$
+
+Proof. From Lemma D.2,
+
+$$
+\begin{array}{l} \nabla_ {\theta} \tilde {\mathcal {L}} _ {0} (\theta) = \mathbb {E} _ {q _ {\mathrm {d}}} [ - \tilde {r} _ {\theta ; 0} ^ {- 1} \nabla_ {\theta} \log \tilde {r} _ {\theta ; 0} ], \\ \nabla_ {\theta} ^ {2} \tilde {\mathcal {L}} _ {0} (\theta) = \mathbb {E} _ {q _ {\mathrm {d}}} [ \tilde {r} _ {\theta ; 0} ^ {- 1} (\nabla_ {\theta} \log \tilde {r} _ {\theta ; 0} \nabla_ {\theta} \log \tilde {r} _ {\theta ; 0} ^ {\intercal} - \nabla_ {\theta} ^ {2} \log \tilde {r} _ {\theta ; 0}) ], \\ \tilde {\mathcal {C}} _ {0} = \mathbb {E} _ {q _ {\mathrm {d}}} [ \tilde {r} _ {\theta^ {\star}; 0} ^ {- 2} \nabla_ {\theta} \log \tilde {r} _ {\theta^ {\star}; 0} \nabla_ {\theta} \log \tilde {r} _ {\theta^ {\star}; 0} ^ {\intercal} ], \\ \tilde {\mathcal {I}} _ {0} = \mathbb {E} _ {q _ {\mathrm {d}}} [ \tilde {r} _ {\theta^ {\star}; 0} ^ {- 1} (\nabla_ {\theta} \log \tilde {r} _ {\theta^ {\star}; 0} \nabla_ {\theta} \log \tilde {r} _ {\theta^ {\star}; 0} ^ {\intercal} - \nabla_ {\theta} ^ {2} \log \tilde {r} _ {\theta^ {\star}; 0}) ]. \\ \end{array}
+$$
+
+Since
+
+$$
+\tilde {r} _ {\theta ; 0} = \frac {\exp \left(\left\langle \theta , \psi - \mathbb {E} _ {q} [ \psi ] \right\rangle\right)}{q (x)} e ^ {- \mathbb {E} _ {q} [ \log q ]},
+$$
+
+$$
+\nabla_ {\theta} \log \tilde {r} _ {\theta ; 0} = \psi - \mathbb {E} _ {q} [ \psi ],
+$$
+
+$$
+\nabla_ {\theta} ^ {2} \log \tilde {r} _ {\theta ; 0} = 0,
+$$
+
+the quantities can be further simplified as stated.
+
+
+
+Corollary D.2 (MLE: derivatives).
+
+$$
+\begin{array}{l} \nabla_ {\theta} \tilde {\mathcal {L}} _ {1} (\theta) = \mathbb {E} _ {q _ {\mathrm {d}}} [ - \nabla_ {\theta} \log p _ {\theta} ], \\ \nabla_ {\theta} ^ {2} \tilde {\mathcal {L}} _ {1} (\theta) = \mathbb {E} _ {q _ {\mathrm {d}}} [ - \nabla_ {\theta} ^ {2} \log p _ {\theta} ], \\ \tilde {\mathcal {C}} _ {1} = \mathbb {E} _ {q _ {\mathrm {d}}} [ \nabla_ {\theta} \log p _ {\theta^ {\star}} \nabla_ {\theta} \log p _ {\theta^ {\star}} ^ {\intercal} ], \\ \tilde {\mathcal {I}} _ {1} = \mathbb {E} _ {q _ {\mathrm {d}}} \left[ - \nabla_ {\theta} ^ {2} \log p _ {\theta^ {\star}} \right]. \\ \end{array}
+$$
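+At a well-specified $\theta^{\star}$ these two matrices coincide (the classical information-matrix equality): the covariance of the score $\tilde{\mathcal{C}}_1$ equals the expected negative log-likelihood Hessian. For a scalar exponential family both reduce to $\mathrm{Var}_{p_\theta}(\psi)$ , which the following sketch checks on a small finite support (illustrative numbers).
+
+```python
+import math
+
+# Information-matrix equality for a scalar exponential family
+# p_theta(x) proportional to exp(theta * psi(x)) on a finite support:
+# E[(grad log p)^2] = E[-hess log p] = Var_{p_theta}(psi).
+psi = [-1.0, 0.0, 0.5, 2.0]
+theta = 0.3
+
+w = [math.exp(theta * p) for p in psi]
+z = sum(w)
+prob = [wi / z for wi in w]
+
+mean = sum(pr * p for pr, p in zip(prob, psi))
+# grad log p_theta(x) = psi(x) - E[psi]; -hess log p_theta(x) = Var(psi).
+score_sq = sum(pr * (p - mean) ** 2 for pr, p in zip(prob, psi))
+var = sum(pr * p * p for pr, p in zip(prob, psi)) - mean ** 2
+assert abs(score_sq - var) < 1e-12
+```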
+
+Theorem 4.2 ( $\alpha$ -CentNCE: finite-sample guarantee). Pick $\alpha \in \mathbb{R}$ . Define
+
+$$
+\left(\rho_ {\min }, \rho_ {\max }\right) \triangleq \Big (\inf _ {x \in \mathcal {X}, \theta \in \Theta} \tilde {\rho} _ {\theta ; \alpha} (x), \sup _ {x \in \mathcal {X}, \theta \in \Theta} \tilde {\rho} _ {\theta ; \alpha} (x) \Big)
+$$
+
+and define the quantities in Eq. (5) for $f = f_{\alpha}$ accordingly. Let $\tilde{\rho}_{\theta^{\star};\alpha}^{\alpha}(x)\triangleq \frac{\left(\frac{q_{\mathrm{d}}(x)}{q_{\mathrm{n}}(x)}\right)^{\alpha}}{\mathbb{E}_{q_{\mathrm{n}}}\left[(\frac{q_{\mathrm{d}}}{q_{\mathrm{n}}})^{\alpha}\right]}$ , and let
+
+$$
+\lambda_ {\min , \mathsf {d}} ^ {\mathrm {c e n t}} \triangleq \lambda_ {\min } \left(\mathbb {E} _ {q _ {\mathrm {d}}} \left[ \left(\psi - \mathbb {E} _ {q _ {\mathrm {n}}} \left[ \psi \tilde {\rho} _ {\theta^ {\star}; \alpha} ^ {\alpha} \right]\right) \left(\psi - \mathbb {E} _ {q _ {\mathrm {n}}} \left[ \psi \tilde {\rho} _ {\theta^ {\star}; \alpha} ^ {\alpha} \right]\right) ^ {\mathsf {T}}\right]\right),
+$$
+
+$$
+\lambda_ {\min , \mathsf {n}} ^ {\mathrm {c e n t}} \triangleq \lambda_ {\min } \left(\mathbb {E} _ {q _ {\mathrm {n}}} \left[ \psi \psi^ {\mathsf {T}} \tilde {\rho} _ {\theta^ {\star}; \alpha} ^ {\alpha} \right] - \mathbb {E} _ {q _ {\mathrm {n}}} \left[ \psi \tilde {\rho} _ {\theta^ {\star}; \alpha} ^ {\alpha} \right] \mathbb {E} _ {q _ {\mathrm {n}}} \left[ \psi \tilde {\rho} _ {\theta^ {\star}; \alpha} ^ {\alpha} \right] ^ {\mathsf {T}}\right).
+$$
+
+Let $\hat{\theta}_{\alpha, n_d}^{\mathrm{cent}, \mathcal{R}}$ be such that
+
+$$
+\hat {\theta} _ {\alpha , n _ {d}} ^ {\text {c e n t}, \mathcal {R}} \in \arg \min _ {\theta \in \Theta} \left\{\mathcal {L} _ {\alpha} ^ {\text {c e n t}} (\theta ; \hat {q} _ {d}, q _ {n}) + \lambda_ {n _ {d}} \mathcal {R} (\theta) \right\}
+$$
+
+for some $\lambda_{n_{\mathsf{d}}} > 0$ . Define $\psi_{\max,\alpha}\triangleq \psi_{\max} + \| \mathbb{E}_{q_{\mathrm{n}}}[\psi \tilde{\rho}_{\theta^{\star};\alpha}^\alpha ]\|_{\max}$ . Then, for any $\Delta >0$ and $\delta \in (0,1)$ , there exists a choice of $\lambda_{n_{\mathsf{d}}}$ such that $\| \hat{\theta}_{\alpha,n_{\mathsf{d}}}^{\mathrm{cent},\mathcal{R}} - \theta^{\star}\| _2\leq \Delta$ with probability $\geq 1 - \delta$ , provided that
+
+$$
+n _ {\mathsf {d}} = \Omega \Bigg (\max \bigg \{\frac {(B _ {\mathsf {n c e} , f _ {\alpha} , \mathsf {d}} ^ {(1)}) ^ {2} \gamma_ {\mathcal {R} ; 2} ^ {2} \gamma_ {\mathcal {R} ^ {*} ; \infty} ^ {2} \psi_ {\max , \alpha} ^ {2}}{\Delta^ {2} (b _ {\mathsf {n c e} , f _ {\alpha} , \mathsf {d}} ^ {(2)}) ^ {2} \{(1 - \alpha) \lambda_ {\min , \mathsf {d}} ^ {\mathrm {c e n t}} + \alpha \lambda_ {\min , \mathsf {n}} ^ {\mathrm {c e n t}} \} ^ {2}}, \frac {\gamma_ {1 ; 2} ^ {4} \psi_ {\max , \alpha} ^ {4}}{(\lambda_ {\min , \mathsf {d}} ^ {\mathrm {c e n t}}) ^ {2}} \bigg \} \log \frac {p ^ {2}}{\delta} \Bigg).
+$$
+
+Proposition D.3 (Vanishing gradient). (cf. (Shah et al., 2021b, Proposition F.1).) Assume Assumption 4.1. For any $\delta \in (0,1)$ , $\epsilon > 0$ ,
+
+$$
+\left\| \nabla_ {\theta} \tilde {\mathcal {L}} _ {\alpha} \left(\theta^ {\star}\right) \right\| _ {\max } \leq \epsilon
+$$
+
+with probability $\geq 1 - \delta$ , if $n_{\mathsf{d}} \geq \frac{2r_{\min,\alpha}^{2(\alpha - 1)}(\psi_{\max} + \|\mathbb{E}_{q_n}[\psi\tilde{r}_{\theta^*;\alpha}^\alpha]\|_{\max})^2}{\epsilon^2}\log \frac{2p}{\delta}$ .
+
+Proof. Recall from Lemma D.2 that
+
+$$
+\nabla_ {\theta} \hat {\tilde {\mathcal {L}}} _ {\alpha} (\theta) = \mathbb {E} _ {\hat {q} _ {\mathrm {d}}} [ - \tilde {r} _ {\theta ; \alpha} ^ {\alpha - 1} \nabla_ {\theta} \log \tilde {r} _ {\theta ; \alpha} ] = - \mathbb {E} _ {\hat {q} _ {\mathrm {d}}} [ \psi \tilde {r} _ {\theta ; \alpha} ^ {\alpha - 1} ] + \mathbb {E} _ {\hat {q} _ {\mathrm {d}}} [ \tilde {r} _ {\theta ; \alpha} ^ {\alpha - 1} ] \mathbb {E} _ {q _ {n}} [ \psi \tilde {r} _ {\theta ; \alpha} ^ {\alpha} ],
+$$
+
+and it is easy to check that
+
+$$
+\mathbb {E} [ \nabla_ {\theta} \hat {\tilde {\mathcal {L}}} _ {\alpha} (\theta^ {\star}) ] = \nabla_ {\theta} \tilde {\mathcal {L}} _ {\alpha} (\theta^ {\star}) = - \mathbb {E} _ {q _ {\mathrm {d}}} [ \psi \tilde {r} _ {\theta^ {\star}; \alpha} ^ {\alpha - 1} ] + \mathbb {E} _ {q _ {\mathrm {d}}} [ \tilde {r} _ {\theta^ {\star}; \alpha} ^ {\alpha - 1} ] \mathbb {E} _ {q _ {\mathrm {n}}} [ \psi \tilde {r} _ {\theta^ {\star}; \alpha} ^ {\alpha} ] = 0.
+$$
+
+Since $|\tilde{r}_{\theta^{\star};\alpha}^{\alpha -1}(\psi_i(x) - \mathbb{E}_{q_{\mathrm{n}}}[\psi_i\tilde{r}_{\theta^{\star};\alpha}^\alpha])|\leq r_{\min ,\alpha}^{\alpha -1}(\psi_{\max} + \| \mathbb{E}_{q_{\mathrm{n}}}[\psi \tilde{r}_{\theta^{\star};\alpha}^\alpha ]\|_{\max})$ , by Hoeffding's inequality we have
+
+$$
+\mathbb {P} (| \partial_ {\theta_ {i}} \tilde {\hat {\mathcal {L}}} _ {\alpha} (\theta^ {\star}) | \geq \epsilon) \leq 2 \exp \Bigl (- \frac {n _ {\mathrm {d}} \epsilon^ {2}}{2 r _ {\min , \alpha} ^ {2 (\alpha - 1)} (\psi_ {\max} + \| \mathbb {E} _ {q _ {\mathrm {n}}} [ \psi \tilde {r} _ {\theta^ {\star} ; \alpha} ^ {\alpha} ] \| _ {\max}) ^ {2}} \Bigr) = \delta ,
+$$
+
+if $n_{\mathrm{d}} \geq \frac{2r_{\min,\alpha}^{2(\alpha - 1)}(\psi_{\max} + \|\mathbb{E}_{q_{\mathrm{n}}}[\psi \tilde{r}_{\theta^{\star};\alpha}^{\alpha}]\|_{\max})^2}{\epsilon^2}\log \frac{2}{\delta}$ . By taking a union bound over $p$ different coordinates of $\theta$ , we conclude the proof.
+
+Proposition D.4 (Restricted strong convexity). (cf. (Shah et al., 2021a, Proposition E.1)) Under Assumption 4.1, we have
+
+$$
+\Delta_ {\hat {\tilde {\mathcal {L}}} _ {\alpha}} (\theta , \theta^ {\star}) \geq \tilde {r} _ {\max, \alpha} ^ {\alpha - 1} \Bigl \{\frac {1}{2} (1 - \alpha) \lambda_ {\min, \mathsf {d}} ^ {\mathrm {c e n t}} + \alpha \lambda_ {\min, \mathsf {n}} ^ {\mathrm {c e n t}} \Bigr \} \| \theta - \theta^ {\star} \| _ {2} ^ {2},
+$$
+
+with probability $\geq 1 - \delta$ , if $n_{\mathsf{d}} \geq \frac{8\gamma_{1;2}^{4}(\psi_{\max} + \|\mathbb{E}_{q_{\mathrm{n}}}[\psi\tilde{r}_{\theta^{\star};\alpha}^{\alpha}]\|_{\max})^4}{(\lambda_{\min, \mathsf{d}}^{\mathrm{cent}})^2} \log \frac{2p^2}{\delta}$ .
+
+Proof. By the intermediate value theorem, there exists $\xi \in \{t\theta + (1 - t)\theta^{\star} : t \in [0,1]\}$ such that
+
+$$
+\begin{array}{l} \Delta_ {\hat {\tilde {\mathcal {L}}} _ {\alpha}} (\theta , \theta^ {\star}) = \hat {\tilde {\mathcal {L}}} _ {\alpha} (\theta) - \hat {\tilde {\mathcal {L}}} _ {\alpha} (\theta^ {\star}) - \left\langle \nabla_ {\theta} \hat {\tilde {\mathcal {L}}} _ {\alpha} (\theta^ {\star}), \theta - \theta^ {\star} \right\rangle \\ = \frac {1}{2} (\theta - \theta^ {\star}) ^ {\intercal} \nabla_ {\theta} ^ {2} \hat {\tilde {\mathcal {L}}} _ {\alpha} (\xi) (\theta - \theta^ {\star}). \\ \end{array}
+$$
+
+Define $\overline{\widetilde{\psi}}\triangleq \mathbb{E}_{q_{\mathrm{n}}}[\psi \tilde{r}_{\theta;\alpha}^{\alpha}]$ and $\widetilde{\psi\psi^{\intercal}}\triangleq \mathbb{E}_{q_{\mathrm{n}}}[\psi \psi^{\intercal}\tilde{r}_{\theta;\alpha}^{\alpha}]$ for shorthand notation. Here, note that $\xi$ depends on $\hat{q}_{\mathrm{d}}$ . Recall from Lemma D.2 that
+
+$$
+\begin{array}{l} \nabla_ {\theta} ^ {2} \hat {\tilde {\mathcal {L}}} _ {\alpha} (\theta) = \mathbb {E} _ {\hat {q} _ {\mathrm {d}}} [ \tilde {r} _ {\theta ; \alpha} ^ {\alpha - 1} ((1 - \alpha) \nabla_ {\theta} \log \tilde {r} _ {\theta ; \alpha} \nabla_ {\theta} \log \tilde {r} _ {\theta ; \alpha} ^ {\intercal} - \nabla_ {\theta} ^ {2} \log \tilde {r} _ {\theta ; \alpha}) ] \\ = (1 - \alpha) \mathbb {E} _ {\hat {q} _ {\mathrm {d}}} [ \tilde {r} _ {\theta ; \alpha} ^ {\alpha - 1} (\psi - \overline {{\widetilde {\psi}}}) (\psi - \overline {{\widetilde {\psi}}}) ^ {\intercal} ] + \alpha \mathbb {E} _ {\hat {q} _ {\mathrm {d}}} [ \tilde {r} _ {\theta ; \alpha} ^ {\alpha - 1} ] (\widetilde {\psi \psi^ {\intercal}} - \overline {{\widetilde {\psi}}} \, \overline {{\widetilde {\psi}}} ^ {\intercal}). \\ \end{array}
+$$
+
+Let $z \triangleq \langle \psi(x), \theta - \theta^{\star} \rangle$ .
+
+$$
+\begin{array}{l} (\theta - \theta^ {\star}) ^ {\intercal} \nabla_ {\theta} ^ {2} \hat {\tilde {\mathcal {L}}} _ {\alpha} (\xi) (\theta - \theta^ {\star}) \\ = (1 - \alpha) \mathbb {E} _ {\hat {q} _ {\mathrm {d}}} [ \tilde {r} _ {\xi ; \alpha} ^ {\alpha - 1} ((\theta - \theta^ {\star}) ^ {\intercal} (\psi - \overline {{\widetilde {\psi}}})) ^ {2} ] + \alpha \mathbb {E} _ {\hat {q} _ {\mathrm {d}}} [ \tilde {r} _ {\xi ; \alpha} ^ {\alpha - 1} ] (\theta - \theta^ {\star}) ^ {\intercal} (\widetilde {\psi \psi^ {\intercal}} - \overline {{\widetilde {\psi}}} \, \overline {{\widetilde {\psi}}} ^ {\intercal}) (\theta - \theta^ {\star}) \\ \geq (1 - \alpha) \tilde {r} _ {\max, \alpha} ^ {\alpha - 1} \mathbb {E} _ {\hat {q} _ {\mathrm {d}}} [ ((\theta - \theta^ {\star}) ^ {\intercal} (\psi - \overline {{\widetilde {\psi}}})) ^ {2} ] + \alpha \tilde {r} _ {\max, \alpha} ^ {\alpha - 1} \lambda_ {\min, \mathsf {n}} ^ {\mathrm {c e n t}} \| \theta - \theta^ {\star} \| _ {2} ^ {2} \\ = \tilde {r} _ {\max, \alpha} ^ {\alpha - 1} \big \{(1 - \alpha) (\theta - \theta^ {\star}) ^ {\intercal} \mathbb {E} _ {\hat {q} _ {\mathrm {d}}} [ (\psi - \overline {{\widetilde {\psi}}}) (\psi - \overline {{\widetilde {\psi}}}) ^ {\intercal} ] (\theta - \theta^ {\star}) + \alpha \lambda_ {\min, \mathsf {n}} ^ {\mathrm {c e n t}} \| \theta - \theta^ {\star} \| _ {2} ^ {2} \big \}. \\ \end{array}
+$$
+
+We can lower bound the first term as follows.
+
+$$
+\begin{array}{l} (\theta - \theta^ {\star}) ^ {\intercal} \mathbb {E} _ {\hat {q} _ {\mathrm {d}}} [ (\psi - \overline {{\widetilde {\psi}}}) (\psi - \overline {{\widetilde {\psi}}}) ^ {\intercal} ] (\theta - \theta^ {\star}) \overset {(a)} {\geq} - \epsilon_ {2} \| \theta - \theta^ {\star} \| _ {1} ^ {2} + \lambda_ {\min, \mathsf {d}} ^ {\mathrm {c e n t}} \| \theta - \theta^ {\star} \| _ {2} ^ {2} \\ \stackrel {(b)} {\geq} - \epsilon_ {2} \gamma_ {1; 2} ^ {2} \| \theta - \theta^ {\star} \| _ {2} ^ {2} + \lambda_ {\min , \mathsf {d}} ^ {\mathrm {c e n t}} \| \theta - \theta^ {\star} \| _ {2} ^ {2} \\ = \frac {1}{2} \lambda_ {\min, \mathsf {d}} ^ {\mathrm {c e n t}} \| \theta - \theta^ {\star} \| _ {2} ^ {2} \\ \end{array}
+$$
+
+with probability $\geq 1 - \delta$ if $n_{\mathrm{d}} \geq \frac{2(\psi_{\max} + \|\mathbb{E}_{q_{\mathrm{n}}}[\psi\tilde{r}_{\theta^{\star};\alpha}^\alpha]\|_{\max})^4}{\epsilon_2^2}\log \frac{2p^2}{\delta}$ with $\epsilon_{2} = \frac{\lambda_{\min,\mathrm{d}}^{\mathrm{cent}}}{2\gamma_{1;2}^{2}}$ . Here, in (a) we apply Hoeffding's inequality as in Lemma D.1, and in (b) we use the definition of $\gamma_{1;2}$ to bound $\| \theta -\theta^{\star}\| _1\leq \gamma_{1;2}\| \theta -\theta^{\star}\| _2$ . Hence, with probability $\geq 1 - \delta$ , we have
+
+$$
+\Delta_ {\hat {\tilde {\mathcal {L}}} _ {\alpha}} (\theta , \theta^ {\star}) \geq \tilde {r} _ {\max, \alpha} ^ {\alpha - 1} \left\{\frac {1}{2} (1 - \alpha) \lambda_ {\min, \mathrm {d}} ^ {\text {c e n t}} + \alpha \lambda_ {\min, \mathrm {n}} ^ {\text {c e n t}} \right\} \| \theta - \theta^ {\star} \| _ {2} ^ {2},
+$$
+
+provided that $n_{\mathrm{d}} \geq \frac{8\gamma_{1;2}^{4}(\psi_{\max} + \|\mathbb{E}_{q_{\mathrm{n}}}[\psi\tilde{r}_{\theta^{\star};\alpha}^{\alpha}] \|_{\max})^{4}}{(\lambda_{\min, \mathrm{d}}^{\mathrm{cent}})^{2}} \log \frac{2p^{2}}{\delta}$ .
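The scaling of sample-size thresholds of this form is easy to explore numerically. The sketch below (all numeric constants are hypothetical placeholders, not values from the paper) evaluates the threshold and illustrates that the dependence on the dimension $p$ is only logarithmic:

```python
import math

def rsc_sample_size(gamma_12, psi_max, e_norm, lam_min, p, delta):
    """Restricted-strong-convexity threshold of the form above:
    n_d >= 8 * gamma_{1;2}^4 * (psi_max + e_norm)^4 / lam_min^2 * log(2 p^2 / delta)."""
    return math.ceil(
        8 * gamma_12**4 * (psi_max + e_norm)**4 / lam_min**2
        * math.log(2 * p**2 / delta)
    )

# Hypothetical constants: the dependence on the dimension p is logarithmic.
n_small = rsc_sample_size(2.0, 1.0, 0.5, 0.1, p=10, delta=0.05)
n_large = rsc_sample_size(2.0, 1.0, 0.5, 0.1, p=1000, delta=0.05)
```

Growing $p$ by two orders of magnitude only roughly doubles the threshold here, reflecting the $\log \frac{2p^2}{\delta}$ factor.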
+
+Proof of Theorem 4.2. First, note that
+
+$$
+\mathcal {R} ^ {*} \left(\nabla_ {\theta} \hat {\tilde {\mathcal {L}}} _ {\alpha} \left(\theta^ {\star}\right)\right) \leq \gamma_ {\mathcal {R} ^ {*}; \infty} \| \nabla_ {\theta} \hat {\tilde {\mathcal {L}}} _ {\alpha} \left(\theta^ {\star}\right) \| _ {\max }
+$$
+
+by definition of $\gamma_{\mathcal{R}^*;\infty}$ . Then, by Proposition D.3, we have $\| \nabla_{\theta} \hat{\tilde{\mathcal{L}}}_{\alpha}(\theta^{\star}) \|_{\max} \leq \epsilon$ with probability $\geq 1 - \delta_1$ , if
+
+$$
+n _ {\mathsf {d}} \geq \frac {2 r _ {\min , \alpha} ^ {2 (\alpha - 1)} \left(\psi_ {\max } + \| \mathbb {E} _ {q _ {n}} [ \psi \tilde {r} _ {\theta^ {\star} ; \alpha} ^ {\alpha} ] \| _ {\max }\right) ^ {2}}{\epsilon^ {2}} \log \frac {2 p}{\delta_ {1}}.
+$$
+
+Given that this event occurs, $\mathcal{R}^* (\nabla_\theta \hat{\tilde{\mathcal{L}}}_\alpha (\theta^\star))\leq \gamma_{\mathcal{R}^*;\infty}\epsilon$ , and thus we set $\lambda_{n}\gets 2\gamma_{\mathcal{R}^{*};\infty}\epsilon$ to satisfy the first condition in Theorem 4.4.
+
+Now, given $\lambda_{n} \geq 2\mathcal{R}^{*}(\nabla_{\theta}\hat{\tilde{\mathcal{L}}}_{\alpha}(\theta^{\star}))$ , (Negahban et al., 2012, Lemma 1) implies that $\mathcal{R}(\hat{\theta}_{\alpha ,n_{\mathrm{d}}}^{\mathrm{cent},\mathcal{R}} - \theta^{\star}) \leq 4\mathcal{R}(\theta^{\star})$ , i.e., $\hat{\theta}_{\alpha ,n_{\mathrm{d}}}^{\mathrm{cent},\mathcal{R}} - \theta^{\star} \in 4\Theta$ . Then, by Proposition D.2, we have
+
+$$
+\Delta_ {\hat {\tilde {\mathcal {L}}} _ {\alpha}} (\theta , \theta^ {\star}) \geq \kappa \| \theta - \theta^ {\star} \| _ {2} ^ {2}
+$$
+
+with probability $\geq 1 - \delta_{2}$ if $n_{\mathrm{d}}\geq \frac{8\gamma_{1;2}^{4}(\psi_{\max} + \|\mathbb{E}_{q_{\mathrm{n}}}[\psi\tilde{r}_{\theta^{\star};\alpha}^{\alpha}]\|_{\max})^{4}}{(\lambda_{\min,\mathrm{d}}^{\mathrm{cent}})^{2}}\log \frac{2p^{2}}{\delta_{2}}$ , where
+
+$$
+\kappa = \tilde {r} _ {\max , \alpha} ^ {\alpha - 1} \left\{\frac {1}{2} (1 - \alpha) \lambda_ {\min , d} ^ {\text {c e n t}} + \alpha \lambda_ {\min , n} ^ {\text {c e n t}} \right\} \geq \frac {1}{2} \tilde {r} _ {\max , \alpha} ^ {\alpha - 1} \{(1 - \alpha) \lambda_ {\min , d} ^ {\text {c e n t}} + \alpha \lambda_ {\min , n} ^ {\text {c e n t}} \}.
+$$
+
+Now, by taking a union bound with $\delta_1 = \delta_2 = \delta / 2$ , with probability $\geq 1 - \delta$ , we have
+
+$$
+\| \theta - \theta^ {\star} \| _ {2} \leq \frac {3 \lambda_ {n} \gamma_ {\mathcal {R} ; 2}}{\kappa} = \frac {6 \gamma_ {\mathcal {R} ^ {*} ; \infty} \gamma_ {\mathcal {R} ; 2}}{\kappa} \epsilon = \Delta
+$$
+
+with $\epsilon \gets \frac{\Delta\kappa}{6\gamma_{\mathcal{R}^*;\infty}\gamma_{\mathcal{R};2}}$ provided that
+
+$$
+\begin{array}{l} n _ {\mathrm {d}} \geq \max \Bigl \{\frac {7 2 r _ {\min , \alpha} ^ {2 (\alpha - 1)} (\psi_ {\max } + \| \mathbb {E} _ {q _ {\mathrm {n}}} [ \psi \tilde {r} _ {\theta^ {\star} ; \alpha} ^ {\alpha} ] \| _ {\max }) ^ {2} \gamma_ {\mathcal {R}; 2} ^ {2} \gamma_ {\mathcal {R} ^ {*}; \infty} ^ {2}}{\Delta^ {2} \kappa^ {2}} \log \frac {4 p}{\delta}, \frac {8 \gamma_ {1 ; 2} ^ {4} (\psi_ {\max } + \| \mathbb {E} _ {q _ {\mathrm {n}}} [ \psi \tilde {r} _ {\theta^ {\star} ; \alpha} ^ {\alpha} ] \| _ {\max }) ^ {4}}{(\lambda_ {\min , \mathrm {d}} ^ {\mathrm {c e n t}}) ^ {2}} \log \frac {4 p ^ {2}}{\delta} \Bigr \} \\ \geq \max \Bigl \{\frac {1 1 5 2 r _ {\min , \alpha} ^ {2 (\alpha - 1)} (\psi_ {\max } + \| \mathbb {E} _ {q _ {\mathrm {n}}} [ \psi \tilde {r} _ {\theta^ {\star} ; \alpha} ^ {\alpha} ] \| _ {\max }) ^ {2} \gamma_ {\mathcal {R} ; 2} ^ {2} \gamma_ {\mathcal {R} ^ {*}; \infty} ^ {2}}{\Delta^ {2} \tilde {r} _ {\max , \alpha} ^ {2 (\alpha - 1)} \{(1 - \alpha) \lambda_ {\min , \mathrm {d}} ^ {\mathrm {c e n t}} + \alpha \lambda_ {\min , \mathrm {n}} ^ {\mathrm {c e n t}} \} ^ {2}} \log \frac {4 p}{\delta}, \frac {8 \gamma_ {1 ; 2} ^ {4} (\psi_ {\max } + \| \mathbb {E} _ {q _ {\mathrm {n}}} [ \psi \tilde {r} _ {\theta^ {\star} ; \alpha} ^ {\alpha} ] \| _ {\max }) ^ {4}}{(\lambda_ {\min , \mathrm {d}} ^ {\mathrm {c e n t}}) ^ {2}} \log \frac {4 p ^ {2}}{\delta} \Bigr \} \\ = \Omega \Big (\max \Big \{\frac {r _ {\min , \alpha} ^ {2 (\alpha - 1)} (\psi_ {\max} + \| \mathbb {E} _ {q _ {\mathrm {n}}} [ \psi \tilde {r} _ {\theta^ {\star} ; \alpha} ^ {\alpha} ] \| _ {\max}) ^ {2} \gamma_ {\mathcal {R} ; 2} ^ {2} \gamma_ {\mathcal {R} ^ {*}; \infty} ^ {2}}{\Delta^ {2} \tilde {r} _ {\max , \alpha} ^ {2 (\alpha - 1)} \{(1 - \alpha) \lambda_ {\min , \mathrm {d}} ^ {\mathrm {c e n t}} + \alpha \lambda_ {\min , \mathrm {n}} ^ {\mathrm {c e n t}} \} ^ {2}}, \frac {\gamma_ {1 ; 2} ^ {4} (\psi_ {\max} + \| \mathbb {E} _ {q _ {\mathrm {n}}} [ \psi \tilde {r} _ {\theta^ {\star} ; \alpha} ^ {\alpha} ] \| _ {\max}) ^ {4}}{(\lambda_ {\min , \mathrm {d}} ^ {\mathrm {c e n t}}) ^ {2}} \Big \} \log \frac {p ^ {2}}{\delta} \Big). \\ \end{array}
+$$
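The bookkeeping relating $\lambda_n$, $\epsilon$, and the final radius $\Delta$ can be sanity-checked with a few lines of arithmetic (all numbers below are hypothetical; only the algebra of the proof is being verified):

```python
# All numbers are hypothetical; this only checks the algebra of the proof.
Delta, kappa = 0.1, 0.3
gamma_Rstar, gamma_R = 1.7, 2.4     # gamma_{R*;infty} and gamma_{R;2}

epsilon = Delta * kappa / (6 * gamma_Rstar * gamma_R)   # choice of epsilon
lambda_n = 2 * gamma_Rstar * epsilon                    # lambda_n <- 2 gamma_{R*;infty} epsilon
bound = 3 * lambda_n * gamma_R / kappa                  # error bound 3 lambda_n gamma_{R;2} / kappa

assert abs(bound - Delta) < 1e-12   # collapses back to Delta exactly
```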
+
+
+
+# D.3. $f$ -CondNCE
+
+Theorem 4.3 (f-CondNCE: finite-sample guarantee). Pick a strictly convex function $f\colon \mathbb{R}_+ \to \mathbb{R}$ . Define
+
+$$
+\rho_ {\min } \triangleq \inf _ {(x, y) \in \operatorname {s u p p} (q _ {\mathsf {d}} (x) \pi (y | x)), \theta \in \Theta} \rho_ {\theta} (x, y),
+$$
+
+$$
+\rho_ {\max } \triangleq \sup _ {(x, y) \in \operatorname {s u p p} (q _ {\mathsf {d}} (x) \pi (y | x)), \theta \in \Theta} \rho_ {\theta} (x, y),
+$$
+
+and define the quantities in Eq. (5) accordingly. Let
+
+$$
+\lambda_ {\min , \mathsf {d}} ^ {\text {c o n d}} \triangleq \lambda_ {\min } \left(\mathbb {E} _ {q _ {\mathsf {d}} (x) \pi (y | x)} \left[ (\psi (x) - \psi (y)) (\psi (x) - \psi (y)) ^ {\mathsf {T}} \right]\right).
+$$
+
+Let $\hat{\theta}_{f,n_{\mathrm{d}}}^{\mathrm{cond},\mathcal{R}}$ be such that
+
+$$
+\hat {\theta} _ {f, n _ {\mathrm {d}}} ^ {\text {c o n d}, \mathcal {R}} \in \arg \min _ {\theta \in \Theta} \left\{\mathcal {L} _ {f} ^ {\text {c o n d}} (\theta ; \hat {q} _ {\mathrm {d}}, \hat {\pi}) + \lambda_ {n _ {\mathrm {d}}} \mathcal {R} (\theta) \right\}
+$$
+
+for some $\lambda_{n_{\mathrm{d}}} > 0$ . Then, for any $\Delta > 0$ and $\delta \in (0,1)$ , there exists a choice of $\lambda_{n_{\mathrm{d}}}$ such that $\| \hat{\theta}_{f,n_{\mathrm{d}}}^{\mathrm{cond},\mathcal{R}} - \theta^\star \|_2 \leq \Delta$ with probability $\geq 1 - \delta$ , provided that
+
+$$
+n _ {\mathrm {d}} = \Omega \left(\max \left\{\frac {\left(B _ {\mathrm {c o n d} , f , \mathrm {d}} ^ {(1)} + B _ {\mathrm {c o n d} , f , \mathrm {n}} ^ {(1)}\right) ^ {2} \gamma_ {\mathcal {R} ; 2} ^ {2} \gamma_ {\mathcal {R} ^ {*} ; \infty} ^ {2} \psi_ {\max } ^ {2}}{\Delta^ {2} \left(b _ {\mathrm {c o n d} , f , \mathrm {d}} ^ {(2)} + b _ {\mathrm {c o n d} , f , \mathrm {n}} ^ {(2)}\right) ^ {2} \left(\lambda_ {\min } ^ {\mathrm {c o n d}}\right) ^ {2}}, \frac {\gamma_ {1 ; 2} ^ {4} \psi_ {\max} ^ {4}}{(\lambda_ {\min} ^ {\mathrm {c o n d}}) ^ {2}} \right\} \log \frac {p ^ {2}}{\delta} \right).
+$$
+
+Here, $b_{\mathrm{cond},f,\mathrm{r}}^{(2)}$ and $B_{\mathrm{cond},f,\mathrm{r}}^{(i)}$ for $\mathrm{r} \in \{\mathrm{d}, \mathrm{n}\}$ are defined similarly to Eq. (5), where the infimum and supremum are taken over $\left(\frac{\rho_{\mathrm{min}}}{\rho_{\mathrm{max}}}, \frac{\rho_{\mathrm{max}}}{\rho_{\mathrm{min}}}\right)$ in place of $(\rho_{\mathrm{min}}, \rho_{\mathrm{max}})$ .
+
+Proposition D.5 (Vanishing gradient). (cf. (Shah et al., 2021b, Proposition F.1).) Assume Assumption 4.1. For any $\delta \in (0,1)$ , $\epsilon > 0$ ,
+
+$$
+\left\| \nabla_ {\theta} \hat {\mathcal {L}} _ {f} ^ {\text {c o n d}} \left(\theta^ {\star}\right) \right\| _ {\max } \leq \epsilon
+$$
+
+with probability $\geq 1 - \delta$ , if $n_{\mathsf{d}} \geq \frac{8\psi_{\max}^{2}(B_{\mathrm{cond},f,\mathrm{d}}^{(1)} + B_{\mathrm{cond},f,\mathrm{n}}^{(1)})^{2}}{\epsilon^{2}}\log \frac{2p}{\delta}$ .
+
+Proof. Recall from Lemma B.8 that
+
+$$
+\nabla_ {\theta} \mathcal {L} _ {f} ^ {\mathrm {c o n d}} (\theta) = \mathbb {E} _ {q _ {\mathrm {d}} (x) \pi (y | x)} [ (\psi (x) - \psi (y)) (- \xi_ {\mathrm {n c e}, f, \mathrm {d}} ^ {(1)} (\rho_ {\theta} ^ {- 1}) + \xi_ {\mathrm {n c e}, f, \mathrm {n}} ^ {(1)} (\rho_ {\theta})) ],
+$$
+
+and it is easy to check that
+
+$$
+\mathbb {E} \left[ \nabla_ {\theta} \hat {\mathcal {L}} _ {f} ^ {\text {c o n d}} \left(\theta^ {\star}\right) \right] = \nabla_ {\theta} \mathcal {L} _ {f} ^ {\text {c o n d}} \left(\theta^ {\star}\right) = 0.
+$$
+
+Since
+
+$$
+\begin{array}{l} \left| \left(\psi_ {i} (y) - \psi_ {i} (x)\right) \left(- \xi_ {\mathrm {n c e}, f, \mathrm {d}} ^ {(1)} \left(\rho_ {\theta} ^ {- 1} (x, y)\right) + \xi_ {\mathrm {n c e}, f, \mathrm {n}} ^ {(1)} \left(\rho_ {\theta} (x, y)\right)\right) \right| \\ \leq \left| \left(\psi_ {i} (y) - \psi_ {i} (x)\right) \xi_ {\mathrm {n c e}, f, \mathrm {d}} ^ {(1)} \left(\rho_ {\theta} ^ {- 1} (x, y)\right) \right| + \left| \left(\psi_ {i} (y) - \psi_ {i} (x)\right) \xi_ {\mathrm {n c e}, f, \mathrm {n}} ^ {(1)} \left(\rho_ {\theta} (x, y)\right) \right| \\ \leq 2 \psi_ {\max } (B _ {\mathrm {c o n d}, f, \mathrm {d}} ^ {(1)} + B _ {\mathrm {c o n d}, f, \mathrm {n}} ^ {(1)}), \\ \end{array}
+$$
+
+by Hoeffding's inequality and union bound, we have
+
+$$
+\mathbb {P} (| \partial_ {\theta_ {i}} \hat {\mathcal {L}} _ {f} ^ {\mathrm {c o n d}} (\theta^ {\star}) | \geq \epsilon) \leq 2 \exp \Bigl (- \frac {n _ {\mathrm {d}} \epsilon^ {2}}{8 \psi_ {\max} ^ {2} (B _ {\mathrm {c o n d} , f , \mathrm {d}} ^ {(1)} + B _ {\mathrm {c o n d} , f , \mathrm {n}} ^ {(1)}) ^ {2}} \Bigr) \leq \delta ,
+$$
+
+if $n_{\mathrm{d}} \geq \frac{8\psi_{\max}^{2}(B_{\mathrm{cond},f,\mathrm{d}}^{(1)} + B_{\mathrm{cond},f,\mathrm{n}}^{(1)})^{2}}{\epsilon^{2}} \log \frac{2}{\delta}$ . By taking a union bound over $p$ different coordinates of $\theta$ , we conclude the proof.
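The argument above, Hoeffding for a single gradient coordinate at failure level $\delta/p$ followed by a union bound over coordinates, can be packaged as a sample-size calculator. The sketch below uses entirely hypothetical constants:

```python
import math

def grad_sample_size(psi_max, B_d, B_n, eps, delta, p):
    """n_d >= 8 psi_max^2 (B_d + B_n)^2 / eps^2 * log(2p/delta):
    Hoeffding for one gradient coordinate at failure level delta/p,
    followed by a union bound over the p coordinates."""
    return math.ceil(
        8 * psi_max**2 * (B_d + B_n)**2 / eps**2 * math.log(2 * p / delta)
    )

# The union bound over coordinates costs only a log(p) factor (constants hypothetical).
n_1 = grad_sample_size(1.0, 1.0, 1.0, eps=0.1, delta=0.05, p=1)
n_p = grad_sample_size(1.0, 1.0, 1.0, eps=0.1, delta=0.05, p=100)
```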
+
+Proposition D.6 (Restricted strong convexity). (cf. (Shah et al., 2021a, Proposition E.1).) Under Assumption 4.1,
+
+$$
+\Delta_ {\hat {\mathcal {L}} _ {f} ^ {\mathrm {c o n d}}} (\theta , \theta^ {\star}) \geq \frac {1}{2} (b _ {\mathrm {c o n d}, f, \mathrm {d}} ^ {(2)} + b _ {\mathrm {c o n d}, f, \mathrm {n}} ^ {(2)}) \lambda_ {\min } ^ {\mathrm {c o n d}} \| \theta - \theta^ {\star} \| _ {2} ^ {2},
+$$
+
+with probability $\geq 1 - \delta$ , if $n_{\mathsf{d}} \geq \frac{128\gamma_{1;2}^{4}\psi_{\max}^{4}}{(\lambda_{\min}^{\mathrm{cond}})^{2}}\log \frac{2p^{2}}{\delta}$ .
+
+Proof. By the intermediate value theorem, there exists $\xi \in \{t\theta + (1 - t)\theta^{\star} : t \in [0, 1]\}$ such that
+
+$$
+\begin{array}{l} \Delta_ {\hat {\mathcal {L}} _ {f} ^ {\text {c o n d}}} (\theta , \theta^ {\star}) = \hat {\mathcal {L}} _ {f} ^ {\text {c o n d}} (\theta) - \hat {\mathcal {L}} _ {f} ^ {\text {c o n d}} (\theta^ {\star}) - \langle \nabla_ {\theta} \hat {\mathcal {L}} _ {f} ^ {\text {c o n d}} (\theta^ {\star}), \theta - \theta^ {\star} \rangle \\ = \frac {1}{2} (\theta - \theta^ {\star}) ^ {\intercal} \nabla_ {\theta} ^ {2} \hat {\mathcal {L}} _ {f} ^ {\text {c o n d}} (\xi) (\theta - \theta^ {\star}). \\ \end{array}
+$$
+
+Here, note that $\xi$ depends on $\hat{q}_{\mathrm{d}}(x)\hat{\pi} (y|x)$ . Let $z\triangleq \langle \psi (x) - \psi (y),\theta -\theta^{\star}\rangle$ . Then,
+
+$$
+\begin{array}{l} (\theta - \theta^ {\star}) ^ {\intercal} \nabla_ {\theta} ^ {2} \hat {\mathcal {L}} _ {f} ^ {\text {c o n d}} (\xi) (\theta - \theta^ {\star}) = \mathbb {E} _ {\hat {q} _ {\mathrm {d}} (x) \hat {\pi} (y | x)} [ (\xi_ {\mathrm {n c e}, f, \mathrm {d}} ^ {(2)} (\rho_ {\xi} ^ {- 1}) + \xi_ {\mathrm {n c e}, f, \mathrm {n}} ^ {(2)} (\rho_ {\xi})) z ^ {2} ] \\ \geq (b _ {\mathrm {c o n d}, f, \mathrm {d}} ^ {(2)} + b _ {\mathrm {c o n d}, f, \mathrm {n}} ^ {(2)}) \mathbb {E} _ {\hat {q} _ {\mathrm {d}} (x) \hat {\pi} (y | x)} [ z ^ {2} ] \\ = (b _ {\mathrm {c o n d}, f, \mathrm {d}} ^ {(2)} + b _ {\mathrm {c o n d}, f, \mathrm {n}} ^ {(2)}) (\theta - \theta^ {\star}) ^ {\intercal} \mathbb {E} _ {\hat {q} _ {\mathrm {d}} (x) \hat {\pi} (y | x)} [ (\psi (x) - \psi (y)) (\psi (x) - \psi (y)) ^ {\intercal} ] (\theta - \theta^ {\star}). \\ \end{array}
+$$
+
+We can lower bound the quadratic form as follows.
+
+$$
+\begin{array}{l} (\theta - \theta^ {\star}) ^ {\mathsf {T}} \mathbb {E} _ {\hat {q} _ {\mathrm {d}} (x) \hat {\pi} (y | x)} [ (\psi (x) - \psi (y)) (\psi (x) - \psi (y)) ^ {\mathsf {T}} ] (\theta - \theta^ {\star}) \overset {(a)} {\geq} - \epsilon_ {2} \| \theta - \theta^ {\star} \| _ {1} ^ {2} + \lambda_ {\min} ^ {\mathrm {c o n d}} \| \theta - \theta^ {\star} \| _ {2} ^ {2} \\ \stackrel {(b)} {\geq} - \epsilon_ {2} \gamma_ {1; 2} ^ {2} \| \theta - \theta^ {*} \| _ {2} ^ {2} + \lambda_ {\min } ^ {\text {c o n d}} \| \theta - \theta^ {\star} \| _ {2} ^ {2} \\ = \frac {1}{2} \lambda_ {\mathrm {m i n}} ^ {\mathrm {c o n d}} \| \theta - \theta^ {\star} \| _ {2} ^ {2} \\ \end{array}
+$$
+
+with probability $\geq 1 - \delta$ if $n_{\mathsf{d}}\geq \frac{32\psi_{\mathrm{max}}^4}{\epsilon_2^2}\log \frac{2p^2}{\delta}$ with $\epsilon_{2} = \frac{\lambda_{\mathrm{min}}^{\mathrm{cond}}}{2\gamma_{1;2}^{2}}$ . Here, we apply Hoeffding's inequality as in Lemma D.1 in (a), and use the definition of $\gamma_{1;2}$ to bound $\| \theta -\theta^{\star}\| _1\leq \gamma_{1;2}\| \theta -\theta^{\star}\| _2$ in (b).
+
+Hence, with probability $\geq 1 - \delta$ , we have
+
+$$
+\Delta_ {\hat {\mathcal {L}} _ {f} ^ {\mathrm {c o n d}}} (\theta , \theta^ {\star}) \geq \frac {1}{2} (b _ {\mathrm {c o n d}, f, \mathrm {d}} ^ {(2)} + b _ {\mathrm {c o n d}, f, \mathrm {n}} ^ {(2)}) \lambda_ {\min} ^ {\mathrm {c o n d}} \| \theta - \theta^ {\star} \| _ {2} ^ {2},
+$$
+
+if $n_{\mathrm{d}} \geq \frac{128\gamma_{1;2}^{4}\psi_{\max}^{4}}{(\lambda_{\min}^{\mathrm{cond}})^{2}} \log \frac{2p^{2}}{\delta}$ .
+
+Proof of Theorem 4.3. First, note that
+
+$$
+\mathcal {R} ^ {*} \left(\nabla_ {\theta} \hat {\mathcal {L}} _ {f} ^ {\text {c o n d}} \left(\theta^ {\star}\right)\right) \leq \gamma_ {\mathcal {R} ^ {*}; \infty} \left\| \nabla_ {\theta} \hat {\mathcal {L}} _ {f} ^ {\text {c o n d}} \left(\theta^ {\star}\right) \right\| _ {\max }
+$$
+
+by definition of $\gamma_{\mathcal{R}^*;\infty}$ . Then, by Proposition D.5, we have $\| \nabla_{\theta} \hat{\mathcal{L}}_f^{\mathrm{cond}}(\theta^\star) \|_{\max} \leq \epsilon$ with probability $\geq 1 - \delta_1$ , if
+
+$$
+n _ {\mathrm {d}} \geq \frac {8 \psi_ {\max} ^ {2} (B _ {\mathrm {c o n d} , f , \mathrm {d}} ^ {(1)} + B _ {\mathrm {c o n d} , f , \mathrm {n}} ^ {(1)}) ^ {2}}{\epsilon^ {2}} \log \frac {2 p}{\delta_ {1}}.
+$$
+
+Given that this event occurs, $\mathcal{R}^* (\nabla_\theta \hat{\mathcal{L}}_f^{\mathrm{cond}}(\theta^\star))\leq \gamma_{\mathcal{R}^*;\infty}\epsilon$ , and thus we set $\lambda_{n}\gets 2\gamma_{\mathcal{R}^{*};\infty}\epsilon$ to satisfy the first condition in Theorem 4.4.
+
+Now, given $\lambda_{n} \geq 2\mathcal{R}^{*}(\nabla_{\theta}\hat{\mathcal{L}}_{f}^{\mathrm{cond}}(\theta^{\star}))$ , (Negahban et al., 2012, Lemma 1) implies that $\mathcal{R}(\hat{\theta}_{f,n_{\mathrm{d}}}^{\mathrm{cond},\mathcal{R}} - \theta^{\star}) \leq 4\mathcal{R}(\theta^{\star})$ , i.e., $\hat{\theta}_{f,n_{\mathrm{d}}}^{\mathrm{cond},\mathcal{R}} - \theta^{\star} \in 4\Theta$ . Then, by Proposition D.6, we have
+
+$$
+\Delta_ {\hat {\mathcal {L}} _ {f} ^ {\mathrm {c o n d}}} (\theta , \theta^ {\star}) \geq \kappa \| \theta - \theta^ {\star} \| _ {2} ^ {2}
+$$
+
+with probability $\geq 1 - \delta_{2}$ if $n_{\mathsf{d}}\geq \frac{128\gamma_{1;2}^{4}\psi_{\max}^{4}}{(\lambda_{\min}^{\mathrm{cond}})^{2}}\log \frac{2p^{2}}{\delta_{2}}$ where
+
+$$
+\kappa = \frac {1}{2} (b _ {\mathrm {c o n d}, f, \mathrm {d}} ^ {(2)} + b _ {\mathrm {c o n d}, f, \mathrm {n}} ^ {(2)}) \lambda_ {\mathrm {m i n}} ^ {\mathrm {c o n d}}.
+$$
+
+Now, by taking a union bound with $\delta_1 = \delta_2 = \delta / 2$ , with probability $\geq 1 - \delta$ , we have
+
+$$
+\| \theta - \theta^ {\star} \| _ {2} \leq \frac {3 \lambda_ {n} \gamma_ {\mathcal {R} ; 2}}{\kappa} = \frac {6 \gamma_ {\mathcal {R} ^ {*} ; \infty} \gamma_ {\mathcal {R} ; 2}}{\kappa} \epsilon = \Delta
+$$
+
+with $\epsilon \gets \frac{\Delta\kappa}{6\gamma_{\mathcal{R}^{*};\infty}\gamma_{\mathcal{R};2}}$ provided that
+
+$$
+\begin{array}{l} n _ {\mathsf {d}} \geq \max \Bigl \{\frac {2 8 8 (B _ {\mathrm {c o n d} , f , \mathsf {d}} ^ {(1)} + B _ {\mathrm {c o n d} , f , \mathsf {n}} ^ {(1)}) ^ {2} \gamma_ {\mathcal {R} ; 2} ^ {2} \gamma_ {\mathcal {R} ^ {*} ; \infty} ^ {2} \psi_ {\max } ^ {2}}{\Delta^ {2} \kappa^ {2}} \log \frac {4 p}{\delta}, \frac {1 2 8 \gamma_ {1 ; 2} ^ {4} \psi_ {\max } ^ {4}}{(\lambda_ {\min } ^ {\mathrm {c o n d}}) ^ {2}} \log \frac {4 p ^ {2}}{\delta} \Bigr \} \\ = \max \left\{\frac {1 1 5 2 (B _ {\mathrm {c o n d} , f , \mathrm {d}} ^ {(1)} + B _ {\mathrm {c o n d} , f , \mathrm {n}} ^ {(1)}) ^ {2} \gamma_ {\mathcal {R} ; 2} ^ {2} \gamma_ {\mathcal {R} ^ {*} ; \infty} ^ {2} \psi_ {\max } ^ {2}}{\Delta^ {2} (b _ {\mathrm {c o n d} , f , \mathrm {d}} ^ {(2)} + b _ {\mathrm {c o n d} , f , \mathrm {n}} ^ {(2)}) ^ {2} (\lambda_ {\min } ^ {\mathrm {c o n d}}) ^ {2}} \log \frac {4 p}{\delta}, \frac {1 2 8 \gamma_ {1 ; 2} ^ {4} \psi_ {\max } ^ {4}}{(\lambda_ {\min } ^ {\mathrm {c o n d}}) ^ {2}} \log \frac {4 p ^ {2}}{\delta}\right\} \\ = \Omega \Big (\max \Big \{\frac {(B _ {\mathrm {c o n d} , f , \mathrm {d}} ^ {(1)} + B _ {\mathrm {c o n d} , f , \mathrm {n}} ^ {(1)}) ^ {2} \gamma_ {\mathcal {R} ; 2} ^ {2} \gamma_ {\mathcal {R} ^ {*} ; \infty} ^ {2} \psi_ {\max } ^ {2}}{\Delta^ {2} (b _ {\mathrm {c o n d} , f , \mathrm {d}} ^ {(2)} + b _ {\mathrm {c o n d} , f , \mathrm {n}} ^ {(2)}) ^ {2} (\lambda_ {\min } ^ {\mathrm {c o n d}}) ^ {2}}, \frac {\gamma_ {1 ; 2} ^ {4} \psi_ {\max } ^ {4}}{(\lambda_ {\min } ^ {\mathrm {c o n d}}) ^ {2}} \Big \} \log \frac {p ^ {2}}{\delta} \Big). \\ \end{array}
+$$
+
+# E. Local NCE for Node-Wise-Sparse MRFs
+
+In this section, we illustrate how one can construct a local version of the NCE principles introduced in the main text for, e.g., node-wise-sparse Markov random fields (MRFs). The notation herein follows the convention of (Ren et al., 2021) with modifications. We use boldface notation $\mathbf{x} = (x_{1},\ldots ,x_{p})\in \mathcal{X}\subset \mathbb{R}^{p}$ for this purpose, and a regular-font variable $x$ is assumed to be scalar-valued. We assume that the exponential family distribution we consider is described as $\phi_{\boldsymbol{\theta}}(\mathbf{x}) = \exp (\mathcal{E}(\mathbf{x}))$ , where the (negative) energy function is
+
+$$
+\mathcal {E} (\mathbf {x}) \triangleq \sum_ {I \in \mathcal {I}} \theta_ {I} f _ {I} (\mathbf {x} _ {I}),
+$$
+
+where $\mathcal{F} \triangleq \{f_I \colon I \in \mathcal{I}\}$ for some $\mathcal{I} \subset 2^{[p]}$ is a collection of basis functions $f_I \colon \prod_{k \in I} \mathcal{X}_k \to \mathbb{R}$ , each acting upon subsets of variables $\mathbf{x}_I$ . Note that $\mathcal{F}$ is often called the sufficient statistics of the model.
+
+To describe a conditional model, for each $i\in [p]$ , define $\mathcal{I}_i\triangleq \{I\in \mathcal{I}\colon i\in I\}$ . Then, we have
+
+$$
+p _ {\boldsymbol {\theta}} \left(x _ {i} \mid \mathbf {x} _ {\backslash i}\right) \propto \phi_ {\boldsymbol {\theta}} \left(x _ {i} \mid \mathbf {x} _ {\backslash i}\right) \triangleq \exp \left(\mathcal {E} _ {i} (\mathbf {x})\right),
+$$
+
+where
+
+$$
+\mathcal {E} _ {i} (\mathbf {x}) \triangleq \sum_ {I \in \mathcal {I} _ {i}} \theta_ {I} f _ {I} (\mathbf {x} _ {I}).
+$$
+
+We remark that the pseudo-likelihood estimator of Besag (1975) is defined as
+
+$$
+\hat {\boldsymbol {\theta}} _ {i} \triangleq \arg \min _ {\boldsymbol {\theta} _ {i}} \sum_ {n = 1} ^ {n _ {d}} \log \frac {1}{p _ {\theta} (x _ {i} ^ {(n)} | \mathbf {x} _ {\backslash i} ^ {(n)})},
+$$
+
+where $\boldsymbol{\theta}_{i}$ is the collection of all parameters that affect the node-conditional model $\phi_{\boldsymbol{\theta}}(x_i|\mathbf{x}_{\backslash i})$ among all parameters $\boldsymbol{\theta}$ .
+
+For $n$ -th sample $\mathbf{x}^{(n)}$ , let $x_{j}^{(n)}$ denote the $j$ -th coordinate of $\mathbf{x}^{(n)}$ . To apply the $f$ -NCE principle, define the density ratio model
+
+$$
+\rho_ {\boldsymbol {\theta}} (x _ {i} | \mathbf {x} _ {\backslash i}) \triangleq \frac {\phi_ {\boldsymbol {\theta}} (x _ {i} | \mathbf {x} _ {\backslash i})}{\nu q _ {\mathrm {n}} (x _ {i})}
+$$
+
+for a choice of reference distribution $q_{\mathrm{n}}(x_i)$ . For each node $i \in [p]$ , we can derive the local $f$ -NCE objective as
+
+$$
+\begin{array}{l} \mathbb {E} _ {q _ {\mathrm {d}} (\mathbf {x} _ {\backslash i})} [ \mathcal {L} _ {f} ^ {\mathrm {n c e}} (\phi_ {\boldsymbol {\theta}} (x _ {i} | \mathbf {x} _ {\backslash i}); q _ {\mathrm {d}} (x _ {i} | \mathbf {x} _ {\backslash i}), q _ {\mathrm {n}} (x _ {i})) ] \\ = \mathbb {E} _ {q _ {\mathrm {d}} (\mathbf {x} _ {\backslash i}) q _ {\mathrm {n}} (x _ {i})} \left[ \Delta_ {f} \left(\frac {q _ {\mathrm {d}} \left(x _ {i} | \mathbf {x} _ {\backslash i}\right)}{\nu q _ {\mathrm {n}} \left(x _ {i}\right)}, \frac {\phi_ {\boldsymbol {\theta}} \left(x _ {i} | \mathbf {x} _ {\backslash i}\right)}{\nu q _ {\mathrm {n}} \left(x _ {i}\right)}\right) - f \left(\frac {q _ {\mathrm {d}} \left(x _ {i} | \mathbf {x} _ {\backslash i}\right)}{\nu q _ {\mathrm {n}} \left(x _ {i}\right)}\right) \right] \\ = - \frac {1}{\nu} \mathbb {E} _ {q _ {\mathrm {d}} (\mathbf {x})} [ f ^ {\prime} (\rho_ {\boldsymbol {\theta}} (x _ {i} | \mathbf {x} _ {\backslash i})) ] + \mathbb {E} _ {q _ {\mathrm {d}} (\mathbf {x} _ {\backslash i}) q _ {\mathrm {n}} (x _ {i})} [ \rho_ {\boldsymbol {\theta}} (x _ {i} | \mathbf {x} _ {\backslash i}) f ^ {\prime} (\rho_ {\boldsymbol {\theta}} (x _ {i} | \mathbf {x} _ {\backslash i})) - f (\rho_ {\boldsymbol {\theta}} (x _ {i} | \mathbf {x} _ {\backslash i})) ]. \tag {13} \\ \end{array}
+$$
+
+In a similar manner, one can derive the local $\alpha$ -CentNCE, which recovers pseudo-likelihood (Besag, 1975) for $\alpha = 1$ and GISO (Vuffray et al., 2016; 2021; Shah et al., 2021a) and ISODUS (Ren et al., 2021) for $\alpha = 0$ , respectively. We note that Ren et al. (2021) justified ISODUS only via the stationarity of the objective function at the optimal parameter, while the connection established here between these interaction screening objectives (i.e., GISO and ISODUS) and NCE provides a natural theoretical justification.
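To make the node-conditional objects above concrete, here is a minimal pure-Python sketch for a pairwise binary Ising model with $x_j \in \{-1,+1\}$ and $\mathcal{E}(\mathbf{x}) = \sum_{i<j}\theta_{ij}x_i x_j$. The couplings are hypothetical, and `neg_pseudo_loglik` is the node- $i$ term of the pseudo-likelihood objective of Besag (1975), i.e., the $\alpha = 1$ case:

```python
import math

def node_conditional(theta_i, x, i):
    """p_theta(x_i | x_rest) for a pairwise Ising model with
    E(x) = sum_{j<k} theta_jk x_j x_k and x_j in {-1, +1};
    theta_i[j] holds the coupling between nodes i and j (theta_i[i] = 0)."""
    field = sum(theta_i[j] * x[j] for j in range(len(x)) if j != i)
    return math.exp(x[i] * field) / (math.exp(field) + math.exp(-field))

def neg_pseudo_loglik(theta_i, samples, i):
    """Node-i term of Besag's pseudo-likelihood objective (alpha = 1)."""
    return sum(-math.log(node_conditional(theta_i, x, i)) for x in samples)

theta_0 = [0.0, 0.8, -0.3]                   # hypothetical couplings for node 0
p_plus = node_conditional(theta_0, [+1, +1, -1], 0)
p_minus = node_conditional(theta_0, [-1, +1, -1], 0)
assert abs(p_plus + p_minus - 1.0) < 1e-12   # node conditionals normalize
```

The normalizing constant of the node conditional involves only a sum over the single variable $x_i$, which is what makes these local objectives tractable.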
+
+# F. Optimization Complexity
+
+# F.1. Convexity
+
+Proposition F.1 (f-NCE: convexity). Let $g_{f}(\rho) \triangleq -(\rho f^{\prime \prime \prime}(\rho) + f^{\prime \prime}(\rho))$ . If $f^{\prime \prime}(\rho) \geq g_{f}(\rho) \geq 0$ , then $\theta \mapsto \hat{\mathcal{L}}_f^{\mathrm{nce}}(\theta)$ is convex. In particular, $\theta \mapsto \hat{\mathcal{L}}_f^{\mathrm{nce}}(\theta)$ is convex for $f_{\log}$ and $f_{\alpha}$ for $\alpha \in [0,1]$ .
+
+Proof. By Lemma B.3, we have
+
+$$
+\nabla_ {\boldsymbol {\theta}} ^ {2} \hat {\mathcal {L}} _ {f} ^ {\mathrm {n c e}} (\boldsymbol {\theta}) = \frac {1}{\nu} \mathbb {E} _ {\hat {q} _ {\mathrm {d}}} [ \psi \psi^ {\mathsf {T}} \xi_ {\mathrm {n c e}, f, \mathrm {d}} ^ {(2)} (\rho_ {\boldsymbol {\theta}}) ] + \mathbb {E} _ {\hat {q} _ {\mathrm {n}}} [ \psi \psi^ {\mathsf {T}} \xi_ {\mathrm {n c e}, f, \mathrm {n}} ^ {(2)} (\rho_ {\boldsymbol {\theta}}) ].
+$$
+
+Hence, if $g_{f}(\rho) \geq 0$ and $f''(\rho) - g_{f}(\rho) \geq 0$ , then $\nabla_{\theta}^{2}\hat{\mathcal{L}}_{f}^{\mathrm{nce}}(\theta)$ is a nonnegative combination of two positive semidefinite matrices, and so must be positive semidefinite.
+
+It remains to show that the condition holds for all $f$ 's in Table 1. For the asymmetric power score $f_{\alpha}$ with $0 < \alpha < 1$ , first note that $\rho \mapsto f_{\alpha}(\rho)$ is strictly convex, since
+
+$$
+\begin{array}{l} f _ {\alpha} ^ {\prime \prime} (\rho) = \rho^ {\alpha - 2}, \\ g _ {\alpha} (\rho) = - \bigl (\rho f _ {\alpha} ^ {\prime \prime \prime} (\rho) + f _ {\alpha} ^ {\prime \prime} (\rho) \bigr) = (1 - \alpha) \rho^ {\alpha - 2}. \\ \end{array}
+$$
+
+Since $g_{\alpha}(\rho) \geq 0$ and $f_{\alpha}^{\prime \prime}(\rho) - g_{\alpha}(\rho) = \alpha \rho^{\alpha - 2} \geq 0$ , the condition above is satisfied, and hence $\theta \mapsto \hat{\mathcal{L}}_{f_{\alpha}}^{\mathrm{nce}}(\theta)$ is convex. Note that the same calculation holds for $\alpha \in \{0, 1\}$ .
+
+A counterexample, i.e., a convex function $f$ that does not result in a convex objective, is $f_{\alpha}$ for $\alpha \notin [0,1]$ . For $f$ -NCE, while $f_{\alpha}''(\rho) = \rho^{\alpha - 2} \geq 0$ for any $\alpha$ , we have $g_{f_{\alpha}}(\rho) = (1 - \alpha)\rho^{\alpha - 2} < 0$ for $\alpha > 1$ (e.g., $g_{f_{2}}(\rho) = -1$ ), and $f_{\alpha}''(\rho) - g_{f_{\alpha}}(\rho) = \alpha\rho^{\alpha - 2} < 0$ for $\alpha < 0$ .
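These sign conditions are easy to verify numerically. The sketch below checks $g_{f_\alpha} \geq 0$ and $f_{\alpha}'' - g_{f_\alpha} \geq 0$ on a grid of $\rho$ values for $\alpha \in [0,1]$, and exhibits the claimed violations outside $[0,1]$ (the grid values are arbitrary):

```python
# f_alpha''(rho) = rho^(alpha-2) and g_alpha(rho) = (1-alpha) rho^(alpha-2),
# as computed above; both conditions hold exactly for alpha in [0, 1].
def f2(rho, a):
    return rho ** (a - 2)

def g(rho, a):
    return (1 - a) * rho ** (a - 2)

rhos = [0.1, 0.5, 1.0, 2.0, 10.0]
for a in [0.0, 0.25, 0.5, 0.75, 1.0]:   # inside [0, 1]: both conditions hold
    assert all(g(r, a) >= 0 and f2(r, a) - g(r, a) >= 0 for r in rhos)
for a in [1.5, 2.0, 3.0]:               # alpha > 1: g goes negative
    assert all(g(r, a) < 0 for r in rhos)
for a in [-1.0, -0.5]:                  # alpha < 0: f'' - g goes negative
    assert all(f2(r, a) - g(r, a) < 0 for r in rhos)
```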
+
+Proposition F.2 (CentNCE: convexity). For $\alpha \in [0,1]$ , $\theta \mapsto \mathcal{L}_{\alpha}^{\mathrm{cent}}(\theta ;q_{\mathrm{d}},q_{\mathrm{n}})$ is convex.
+
+Proof. For $\alpha \in (0,1)$ , note that we can write
+
+$$
+\theta \mapsto \log \tilde {\mathcal {L}} _ {\alpha} (\theta ; q _ {\mathrm {d}}, q _ {\mathrm {n}}) = - \log (1 - \alpha) + \log \mathbb {E} _ {q _ {\mathrm {d}}} [ \rho_ {\theta} ^ {\alpha - 1} (x) ] + \frac {1 - \alpha}{\alpha} \log \mathbb {E} _ {q _ {\mathrm {n}}} [ \rho_ {\theta} ^ {\alpha} (x) ].
+$$
+
+Here, the second and third terms can be understood as LogSumExp operations applied to the linear function $\theta \mapsto \log \rho_{\theta}(x)$ , so each is convex, and hence so is their sum. Since log-convexity implies convexity, $\tilde{\mathcal{L}}_{\alpha}$ itself is convex. For $\alpha \in \{0,1\}$ , the MLE and GISO objectives are well known to be convex.
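The LogSumExp argument can be spot-checked numerically: with a log-linear ratio $\rho_\theta(x) = \exp(\theta\,\psi(x))$ over a handful of hypothetical samples (a 1-D $\theta$ and made-up feature values), the map $\theta \mapsto \log \mathbb{E}[\rho_\theta^{\alpha-1}]$ satisfies midpoint convexity:

```python
import math

alpha = 0.3
psis = [-1.2, -0.4, 0.1, 0.9, 2.0]      # hypothetical feature values psi(x)

def second_term(theta):
    """log E[rho_theta^(alpha-1)] with log-linear rho_theta(x) = exp(theta psi(x)):
    a log-mean-exp of functions linear in theta, hence convex in theta."""
    vals = [math.exp((alpha - 1) * theta * s) for s in psis]
    return math.log(sum(vals) / len(vals))

# Midpoint convexity on a few hypothetical segments.
for a, b in [(-2.0, 1.0), (0.0, 3.0), (-1.5, -0.5)]:
    assert second_term((a + b) / 2) <= (second_term(a) + second_term(b)) / 2 + 1e-12
```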
+
+The proof of the following proposition is similar to the above, and we thus omit it.
+
+Proposition F.3 (f-CondNCE: convexity). If $g_{f}(\rho^{-1}) + \rho^{3}(f^{\prime \prime}(\rho) - g_{f}(\rho)) \geq 0$ , then $\theta \rightarrow \hat{\mathcal{L}}_f^{\mathrm{cond}}(\theta)$ is convex. In particular, $\theta \rightarrow \hat{\mathcal{L}}_f^{\mathrm{cond}}(\theta)$ is convex for $f_{\log}$ and $f_{\alpha}$ for $\alpha \in [0,1]$ .
+
+# F.2. Smoothness
+
+Under the boundedness assumption, we can show that the $f$ -NCE objective function is smooth with probability 1.
+
+Proposition F.4 (Smoothness). (cf. (Shah et al., 2021b, Proposition B.1).) Assume Assumption 4.1. $\theta \mapsto \hat{\mathcal{L}}_f^{\mathrm{nce}}(\theta)$ is a smooth function with smoothness constant
+
+$$
+p \psi_ {\max } ^ {2} \Big (\frac {B _ {\mathrm {n c e} , f , \mathrm {d}} ^ {(2)}}{\nu} + B _ {\mathrm {n c e}, f, \mathrm {n}} ^ {(2)} \Big).
+$$
+
+Proof. Recall from Lemma B.3 that
+
+$$
+\nabla_ {\boldsymbol {\theta}} ^ {2} \hat {\mathcal {L}} _ {f} ^ {\mathrm {n c e}} (\boldsymbol {\theta}) = \frac {1}{\nu} \mathbb {E} _ {\hat {q} _ {\mathsf {d}}} [ \psi \psi^ {\mathsf {T}} \xi_ {\mathsf {n c e}, f, \mathsf {d}} ^ {(2)} (\rho_ {\boldsymbol {\theta}}) ] + \mathbb {E} _ {\hat {q} _ {\mathsf {n}}} [ \psi \psi^ {\mathsf {T}} \xi_ {\mathsf {n c e}, f, \mathsf {n}} ^ {(2)} (\rho) ].
+$$
+
+By Geršgorin's theorem (Horn & Johnson, 2012, Theorem 6.1.1), the largest eigenvalue of a matrix is upper bounded by the largest absolute row sum or column sum. Therefore, we have
+
+$$
+\begin{array}{l} \lambda_ {\max } (\nabla_ {\theta} ^ {2} \hat {\mathcal {L}} _ {f} ^ {\mathrm {n c e}} (\theta)) \leq \max _ {j} \sum_ {i} | \partial_ {\theta_ {i} \theta_ {j}} \hat {\mathcal {L}} _ {f} ^ {\mathrm {n c e}} (\theta) | \\ \leq \max _ {j} \sum_ {i} \frac {1}{\nu} | \mathbb {E} _ {\hat {q} _ {\mathsf {d}}} [ \psi_ {i} \psi_ {j} \xi_ {\mathsf {n c e}, f, \mathsf {d}} ^ {(2)} (\rho_ {\theta}) ] | + | \mathbb {E} _ {\hat {q} _ {\mathsf {n}}} [ \psi_ {i} \psi_ {j} \xi_ {\mathsf {n c e}, f, \mathsf {n}} ^ {(2)} (\rho) ] | \\ \leq \max _ {j} \psi_ {\max } ^ {2} \sum_ {i} \left(\frac {1}{\nu} \sup _ {x, \theta} \xi_ {\mathrm {n c e}, f, \mathrm {d}} ^ {(2)} (\rho_ {\theta}) + \sup _ {x, \theta} \xi_ {\mathrm {n c e}, f, \mathrm {n}} ^ {(2)} (\rho)\right) \\ \leq p \psi_ {\max} ^ {2} \Big (\frac {B _ {\mathsf {n c e} , f , \mathsf {d}} ^ {(2)}}{\nu} + B _ {\mathsf {n c e}, f, \mathsf {n}} ^ {(2)} \Big). \\ \end{array}
+$$
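The Geršgorin-style bound used here, that the largest eigenvalue is at most the largest absolute row sum, can be checked on a small symmetric matrix via power iteration (the matrix entries are hypothetical stand-ins for a Hessian):

```python
import math

# Hypothetical symmetric matrix standing in for the Hessian.
A = [[2.0, 0.5, 0.1],
     [0.5, 1.5, 0.3],
     [0.1, 0.3, 1.0]]

# Gershgorin-style bound: largest absolute row sum.
gershgorin = max(sum(abs(a) for a in row) for row in A)

# Power iteration for the top eigenvalue.
v = [1.0, 1.0, 1.0]
for _ in range(200):
    w = [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]
    norm = math.sqrt(sum(x * x for x in w))
    v = [x / norm for x in w]
lam_max = sum(v[i] * sum(A[i][j] * v[j] for j in range(3)) for i in range(3))

assert lam_max <= gershgorin + 1e-9
```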
+
+We note that Shah et al. (2021b, Lemma 3.1) shows that the projected gradient descent algorithm returns an $\epsilon$ -optimal solution for GlobalGISO in polynomial optimization complexity, based on the similarly established smoothness of GlobalGISO. We can establish a similar optimization complexity guarantee, but we omit the statement.
+
+# G. Experiments
+
+In this section, we present a preliminary empirical evaluation of a selected set of estimators on synthetic data, following the setting in (Shah et al., 2023, Section 5.1). We consider an unnormalized exponential family model
+
+$$
+\phi_ {\theta} (x) \triangleq \exp \left(x ^ {\intercal} \theta x\right),
+$$
+
+where $\theta \in \mathbb{R}^{p\times p}$ for $x\in [-1,1]^p$ . The data generating distribution is chosen as the model with $\theta = \theta^{\star}$ defined as
+
+$$
+\theta_ {i j} ^ {\star} \triangleq \left\{ \begin{array}{l l} \frac {1}{\sqrt {p}} & \text {if } i = 1, \text { or } j = 1, \text { or } i = j, \\ 0 & \text {otherwise}. \end{array} \right.
+$$
+
+The samples were generated by brute-force sampling, discretizing each axis into 100 bins. We generated $N = 10^{5}$ samples for $p \in \{11, 13, 15, 17, 19\}$ and computed the estimates for each estimator with varying sample size $\{0.04N, 0.08N, \dots, 0.64N\}$ . We repeated each experiment 5 times with random subsamples for each configuration.
+
+Assuming the parameter space $\Theta$ is bounded under the Frobenius norm, we consider NCE estimators regularized by the Frobenius norm and optimized via gradient descent. We used a regularization weight $\lambda_{n} = 10^{-2}$ and a learning rate $\eta = 0.1$ across all settings, except for the $f_{\log}$ -NCE estimator, where we used $\eta = 1.0$ . Each optimization was run for 1000 gradient steps. As shown in Figure 2, the selected estimators exhibit an empirical convergence rate of $n^{-1/2}$ . However, we observed that the $f_{1}$ -NCE estimator (asymmetric log NCE; see Table 1) and the CNCE estimator did not display convergent behavior, despite the theoretical guarantees available for this example. This discrepancy highlights the need for further investigation into the empirical behavior of various estimators, particularly in high-dimensional settings.
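For reference, the sparse ground-truth parameter described above is straightforward to construct. The sketch below builds it (0-indexed, so the paper's $i = 1$ condition becomes the first row/column) and checks its support size:

```python
import math

def theta_star(p):
    """Ground-truth parameter of the experiment: 1/sqrt(p) on the first
    row, the first column, and the diagonal (0-indexed here)."""
    return [[1.0 / math.sqrt(p) if (i == 0 or j == 0 or i == j) else 0.0
             for j in range(p)] for i in range(p)]

T = theta_star(11)
nonzeros = sum(1 for row in T for v in row if v != 0.0)
assert nonzeros == 3 * 11 - 2   # first row + first column + diagonal, overlaps removed
```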
+
+
+Figure 2. Convergence rate of different NCE estimators.
+
+
+
+
+
+
\ No newline at end of file
diff --git a/aunifiedviewonlearningunnormalizeddistributionsvianoisecontrastiveestimation/images.zip b/aunifiedviewonlearningunnormalizeddistributionsvianoisecontrastiveestimation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..3ed0d412ccb81a08e53f71ad94cd57500dacc3fb
--- /dev/null
+++ b/aunifiedviewonlearningunnormalizeddistributionsvianoisecontrastiveestimation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ef861b8bf5e2a518e54382bea0d835b0fb3611bf76deff323ae3d4f26fac9441
+size 2336810
diff --git a/aunifiedviewonlearningunnormalizeddistributionsvianoisecontrastiveestimation/layout.json b/aunifiedviewonlearningunnormalizeddistributionsvianoisecontrastiveestimation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..848ad6832a0b28a92382283da4a3042818bd0a1e
--- /dev/null
+++ b/aunifiedviewonlearningunnormalizeddistributionsvianoisecontrastiveestimation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6c1c266e14146ded19e58d47055e2bcf5125ddab19a6d3c8ee53f87a7b89d5c4
+size 1764932
diff --git a/autocircuitrlreinforcementlearningdrivenllmforautomatedcircuittopologygeneration/ccc0e48d-768b-4207-881c-7eb5d522fcc2_content_list.json b/autocircuitrlreinforcementlearningdrivenllmforautomatedcircuittopologygeneration/ccc0e48d-768b-4207-881c-7eb5d522fcc2_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..7e1e3139ee749bd7f7a86102db7b9be3411a884e
--- /dev/null
+++ b/autocircuitrlreinforcementlearningdrivenllmforautomatedcircuittopologygeneration/ccc0e48d-768b-4207-881c-7eb5d522fcc2_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:988026d727a8cbcac9cedd815b761ecc78716533787c813497200fd1bfedac9d
+size 102818
diff --git a/autocircuitrlreinforcementlearningdrivenllmforautomatedcircuittopologygeneration/ccc0e48d-768b-4207-881c-7eb5d522fcc2_model.json b/autocircuitrlreinforcementlearningdrivenllmforautomatedcircuittopologygeneration/ccc0e48d-768b-4207-881c-7eb5d522fcc2_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..3895c47251d7ec983e675c0d7f803c3c5285fe2c
--- /dev/null
+++ b/autocircuitrlreinforcementlearningdrivenllmforautomatedcircuittopologygeneration/ccc0e48d-768b-4207-881c-7eb5d522fcc2_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2c52cc53186b284d1757da15e88115806599e5be07b3403d0287b082501e8eec
+size 119757
diff --git a/autocircuitrlreinforcementlearningdrivenllmforautomatedcircuittopologygeneration/ccc0e48d-768b-4207-881c-7eb5d522fcc2_origin.pdf b/autocircuitrlreinforcementlearningdrivenllmforautomatedcircuittopologygeneration/ccc0e48d-768b-4207-881c-7eb5d522fcc2_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..f7d5ece1c4febf7c400039d3cfcb71735a0dd0ec
--- /dev/null
+++ b/autocircuitrlreinforcementlearningdrivenllmforautomatedcircuittopologygeneration/ccc0e48d-768b-4207-881c-7eb5d522fcc2_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ba1b55449ab8467f59731aab4c5f18cf1e47faade12b3e2cb6f0be1218c967f4
+size 3513969
diff --git a/autocircuitrlreinforcementlearningdrivenllmforautomatedcircuittopologygeneration/full.md b/autocircuitrlreinforcementlearningdrivenllmforautomatedcircuittopologygeneration/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..6862ae53bfa811e1c1707ada33df4f21f40c383f
--- /dev/null
+++ b/autocircuitrlreinforcementlearningdrivenllmforautomatedcircuittopologygeneration/full.md
@@ -0,0 +1,334 @@
+# AUTOCIRCUIT-RL: Reinforcement Learning-Driven LLM for Automated Circuit Topology Generation
+
+Prashanth Vijayaraghavan $^{*1}$ Luyao Shi $^{*1}$ Ehsan Degan $^{1}$ Vandana Mukherjee $^{1}$ Xin Zhang $^{2}$
+
+# Abstract
+
+Analog circuit topology synthesis is integral to Electronic Design Automation (EDA), enabling the automated creation of circuit structures tailored to specific design requirements. However, the vast design search space and strict constraint adherence make efficient synthesis challenging. Leveraging the versatility of Large Language Models (LLMs), we propose AUTOCIRCUIT-RL, a novel reinforcement learning (RL)-based framework for automated analog circuit synthesis. The framework operates in two phases: instruction tuning, where an LLM learns to generate circuit topologies from structured prompts encoding design constraints, and RL refinement, which further improves the instruction-tuned model using reward models that evaluate validity, efficiency, and output voltage. The refined model is then used directly to generate topologies that satisfy the design constraints. Empirical results show that AUTOCIRCUIT-RL generates $\sim 12\%$ more valid circuits and improves efficiency by $\sim 14\%$ compared to the best baselines, while reducing duplicate generation rates by $\sim 38\%$ . It achieves over $60\%$ success in synthesizing valid circuits with limited training data, demonstrating strong generalization. These findings highlight the framework's effectiveness in scaling to complex circuits while maintaining efficiency and constraint adherence, marking a significant advancement in AI-driven circuit design.
+
+
+Figure 1. Given a prompt with design constraints, the goal is to generate a circuit topology and duty cycle satisfying those constraints. Component constraints (blue) are mandatory, while efficiency (orange) and voltage (purple) constraints are optional. In the output, blue circles denote components ("F" for FET, "I" for inductor), and yellow circles denote connection nodes.
+
+# 1. Introduction
+
+AI and machine learning have been applied to various circuit design tasks, including parameter optimization (Wang et al., 2020) and physical design (Hakhamaneshi et al., 2019), which focus on circuit optimization with a fixed circuit topology. Analog circuit topology synthesis (Bengio et al., 2013) is a fundamental aspect of EDA, where the configuration and interconnection of components directly influence circuit functionality and performance. Despite years of EDA advancements, automation of analog circuit topology synthesis has remained underexplored until recently.
+
+The key challenge in circuit topology synthesis stems from the exponential growth of the design space with the number of components, making high-quality designs rare and
+hard to discover. This sparsity makes it difficult to satisfy specific performance and design constraints. Manual topology design remains time-consuming and requires significant expertise, while brute-force or random search methods are computationally infeasible due to the vastness of the space.
+
+Existing AI methods fall into two categories: (a) AI-based search-based algorithms, including rule-based systems, heuristics, genetic algorithms (McConaghy et al., 2011), and tree-based search (Fan et al., 2021; Zhao & Zhang, 2020). While partially effective, these methods struggle with scalability, efficiency, and adaptability to evolving design requirements. They often require numerous queries and long runtimes to find circuits that meet targets. For example, Fan (Fan et al., 2021) uses tree sampling for automated topology design but faces scalability and practical challenges when handling diverse performance needs, requiring over 400 simulation queries per design. Similarly, other search-based methods (Zhao & Zhang, 2020) demand extensive simulations for new specifications. (b) Generative AI-based frameworks, including graph-based and LLM-based generative methods. Graph generative models use VAEs to produce netlists as undirected (Simonovsky & Komodakis, 2018) or directed graphs (Dong et al., 2023; Zhang et al., 2019), but lack precise control over component count, efficiency, or power conversion ratio. Recently, LLMs have been applied to automated circuit topology synthesis (Vijayaraghavan et al., 2024; Chang et al., 2024; Lai et al., 2025), leveraging their pattern learning and design generation abilities. Unlike search-based methods, LLMs produce circuits from a single prompt after training, enabling faster generation.
+
+However, most LLM approaches are limited in scale and flexibility. CircuitSynth (Vijayaraghavan et al., 2024) and similar works (Chang et al., 2024) target small circuits (up to six components). AnalogCoder (Lai et al., 2025) generates PySpice code via prompt engineering but depends on a fixed synthesis library and lacks iterative refinement, limiting exploration of novel or complex topologies, particularly for power converter circuits requiring high efficiency and specific output voltage constraints. This limitation makes it less extensible for optimizing circuit performance or handling diverse design constraints. Artisan (Chen et al., 2024) is another recent effort focusing on operational amplifier design using domain-specific LLMs. While valuable, it is highly specialized and does not generalize to other circuit families such as power converters. LaMAGIC (Chang et al., 2024) fine-tunes LLMs for netlist generation but omits iterative or performance-driven optimization, restricting adaptability to multi-objective constraints. Auto-SPICE (Bhandari et al., 2024) automates large-scale SPICE netlist generation from textbook schematics (e.g., Masala-CHAI) but focuses on data creation rather than optimization or refinement. Recent advances like AnalogXpert (Zhang et al., 2024) and
+Atelier (Shen et al., 2024) incorporate domain knowledge, subcircuit libraries, Chain-of-Thought prompting, and agent-based coordination. Nonetheless, these methods lack reinforcement learning for iterative, performance-driven refinement, focusing instead on structured generation and error correction. Our work advances beyond prior efforts by synthesizing more complex circuits while optimizing both topology and performance metrics.
+
+In this work, we present AUTOCIRCUIT-RL (AC-RL), a reinforcement learning (RL)-based framework that refines LLM-generated circuit topologies to optimize design objectives. Our method employs two training phases: instruction tuning to generate diverse topologies from prompts, followed by RL-refinement using AI-based reward models that estimate validity, efficiency, and output voltage. This enables scaling, generalization, and multi-objective optimization with minimal manual effort. RL-refinement occurs only during training, not at inference. Empirical results show a $\sim 12\%$ improvement in validity and $\sim 14\%$ gain in efficiency over the best-performing LLM baselines, with few-shot generalization beyond 6 components and support for circuits with up to 10 components. Key contributions include:
+
+LLM with RL Refinement: We propose AUTOCIRCUIT-RL, a novel RL framework for analog circuit synthesis that targets constraint-driven design.
+
+Superior Performance Evaluation: Evaluations on 4- and 5-component analog circuits demonstrate superior performance in generating circuits that meet design constraints more effectively than baseline approaches.
+
+Scalability and Generalizability: Using few-shot fine-tuning, our framework generalizes to 6-10 components even with limited data, highlighting its scalability and adaptability in practical design scenarios.
+
+# 2. Problem Statement and Dataset
+
+Given an input instruction, our goal is to produce a netlist with components and their connections. Each entry in the netlist corresponds to a node in an undirected graph $\mathcal{G}$ , with edges indicating the connections between these nodes as in the netlist. To encode the netlist textually, we adopt the "Incident" encoding strategy, recognized for its effectiveness in various graph-related tasks (Fatemi et al., 2024). The center part of Figure 1 illustrates an example of how a netlist is encoded. This research investigates different model variations that refine Language Models (LMs) to generate the circuit topology netlist. We compare two representation approaches: generating the netlist as lists, or employing the text representation with the incident encoding method. Through empirical evaluation, our objective is to assess the efficacy of these LM variations in accurately and efficiently synthesizing circuit topologies from natural language instructions.
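A minimal sketch of the incident encoding described above; the exact textual template (component and node phrasing) is an assumption, since the paper only shows an example in Figure 1:

```python
def incident_encode(netlist):
    """Render a netlist as 'incident'-style text: one line per component,
    listing the nodes it touches. The phrasing below is illustrative; the
    exact template used in the paper may differ."""
    lines = []
    for comp, nodes in netlist:
        lines.append(f"{comp} is connected to nodes {', '.join(map(str, nodes))}.")
    return "\n".join(lines)

# Hypothetical 3-component example using the port names from Section 2.
example = [("FET-A1", ["IN", 1]), ("L1", [1, "OUT"]), ("C1", ["OUT", 0])]
text = incident_encode(example)
```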
+
+We generated a dataset of switching power converter topologies with 4-10 components (Figure 2(a)). Using Random Search (RS) (Fan et al., 2021), we generated numerous unique netlists. Multiple netlists can represent the same topology by changing component order or node indices, so we report unique designs. The design space for 4- and 5-component circuits is small, allowing near-exhaustive exploration; for $6+$ components, exhaustive search is impractical. Therefore, we collected 10,000 unique netlists for each component count from 6 to 10. Each netlist was simulated at 5 duty cycles: 0.1, 0.3, 0.5, 0.7, and 0.9, yielding 5 times the number of unique netlists as total samples (Figure 2(a)). We used NGSpice (Nenzi & Vogt, 2011) to identify valid circuits and collect output voltage and efficiency values. The circuit includes five external signal ports: Vin, Vout, GND, N-type gate signal, and P-type gate signal (renamed as 'IN', 'OUT', '0', 'GATEN', and 'GATEP', respectively). Devices considered are capacitors, inductors, n-type MOSFET (FET-A), and p-type MOSFET (FET-B). Capacitors and inductors have two ports, while MOSFETs have four (drain, gate, source, body). To simplify the design space and accelerate RS generation, FET-A connects gate/body to GATEN/0, and FET-B to GATEP/IN. Devices are numbered, with shared ports using one index (Figure 1). Capacitors $(10\mu \mathrm{F})$ , inductors $(10\mu \mathrm{H})$ , and MOSFETs use fixed parameters; switching frequency is $200\mathrm{kHz}$ , input voltage $2\mathrm{V}$ .
+
+For training, we randomly sample approximately 100,000 unique netlists (for 4- and 5-component circuits), each with varying efficiency and output voltage values. The data is categorized into four groups: (a) Group 1 with low efficiency (efficiency $< 0.05$ ), (b) Group 2 with moderate efficiency (efficiency between 0.05 and 0.7), (c) Group 3 with high efficiency but minimal output voltage difference from input, and (d) Group 4 with optimal Vout and efficiency. An example for 4-component circuits is shown in Figure 2(b). We apply a weighted sampling strategy in each batch, prioritizing Group 4 (weight 0.4) and giving the lowest priority to Group 1 (weight 0.1). This ensures that batches are enriched with data from Group 4, improving model learning on the most desirable conditions. Instruction prompts are constructed in three categories based on design constraints: (a) Component constraint (only the list of components), (b) Efficiency constraint (components with expected efficiency), and (c) Output Voltage constraint (components with input voltage and expected output voltage).
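The weighted sampling strategy above can be sketched as follows. The paper specifies weights 0.4 for Group 4 and 0.1 for Group 1; the intermediate weights (0.2, 0.3) used here for Groups 2 and 3 are assumptions:

```python
import random

# Group weights: the paper specifies 0.4 (Group 4) and 0.1 (Group 1);
# the intermediate weights for Groups 2 and 3 are assumptions.
GROUP_WEIGHTS = {1: 0.1, 2: 0.2, 3: 0.3, 4: 0.4}

def sample_batch(examples_by_group, batch_size, rng=random.Random(0)):
    """Draw a batch by sampling a group id with the weights above,
    then an example uniformly from within that group."""
    groups = sorted(GROUP_WEIGHTS)
    weights = [GROUP_WEIGHTS[g] for g in groups]
    batch = []
    for _ in range(batch_size):
        g = rng.choices(groups, weights=weights, k=1)[0]
        batch.append(rng.choice(examples_by_group[g]))
    return batch
```

Batches drawn this way are enriched with Group 4 examples in expectation, matching the stated prioritization.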
+
+# 3. Proposed Approach
+
+Our proposed framework, AUTOCIRCUIT-RL, is depicted in Figure 2(c). It consists of two primary phases: instruction tuning and RL refinement. In the instruction tuning phase, we fine-tune a large language model (LLM) using supervised learning techniques. This phase focuses on training the model to comprehend instruction prompts that specify the component pool and design constraints, facilitating the efficient generation of valid circuit topologies. To further optimize the circuit topology generation process and ensure compliance with all design constraints, we incorporate reinforcement learning with AI feedback (RLAIF) (Bai et al., 2022; Lee et al., 2023). The RL refinement phase enhances circuit topology generation by integrating feedback from constraint-specific AI models in three steps: reward modeling, RL training, and iterative adaptation.
+
+# 3.1. Instruction Tuning
+
+The instruction tuning phase can be considered a standard supervised fine-tuning (SFT) step, where instruction prompts specifying the component pool and design constraints are provided as input to the model, and the corresponding circuit topology generations are produced as output. We represent the pairs of instruction-circuit topologies as $\mathcal{D} = \{(X_i,Y_i)\}_{i = 1}^T$ , where $X_{i}$ denotes the instruction prompt and $Y_{i}$ represents the valid circuit topology netlist in the incident encoding, concatenated with the corresponding duty cycle. This phase trains an autoregressive language model policy $\pi_{\theta}$ parameterized by $\theta$ to minimize the negative log-likelihood of the desired circuit topology. Formally,
+
+$$
+\mathcal{L}_{\mathrm{SFT}} = - \mathbb{E}_{(X, Y) \sim \mathcal{D}} \left[ \sum_{t} \log \pi_{\theta}\left(y_{t} \mid X, y_{< t}\right) \right] \tag{1}
+$$
+
+where $\pi_{\theta}$ represents the LLM policy and $y_{< t}$ denotes all tokens before the $t^{th}$ token in the circuit topology $Y$ . This objective aims to ensure that the model learns to comprehend the instruction and generate the circuit topology in the incident encoding method. However, the generated circuit topology may not satisfy all the design constraints related to components, efficiency, validity, and expected output voltage. To meet such constraints, we utilize Reinforcement Learning with AI Feedback (RLAIF), which learns to refine the circuit topology generation process by maximizing rewards associated with specific constraints of interest.
+
+# 3.2. RL-Refinement with AI Feedback
+
+The RL-refinement phase aims to enhance the circuit topology generation process by leveraging feedback from constraint-specific AI models. It consists of 3 main steps:
+
+# 3.2.1. REWARD MODELING
+
+In this step, our reward model evaluates the appropriateness of a generated circuit topology based on the instruction prompt.
+
+
+Figure 2. (a) Statistics of our Circuit dataset. (b) Vout-Efficiency graph of 4-component circuits in the training data. (c) Illustration of the AUTOCIRCUIT-RL framework. Dashed lines depict probability flow, while the dotted line represents iterative adaptation post RLAIF tuning. A prompt guides topology generation, then rewards are calculated with KL-divergence to a reference model to preserve the original distribution. PPO updates model parameters using rewards, and iterative adaptation further improves the model.
+
+
+
+Canonical reinforcement learning with AI feedback (RLAIF) trains a reward model using labeled preferences in the form of triples $\langle X, Y_p, Y_r \rangle$ , representing the input prompt, preferred topology, and non-preferred topology, respectively. Recent research suggests that directly using reward scores yields better performance than the traditional RLAIF approach. To achieve this, we employ a reward function that evaluates how well the generated circuit topology adheres to different design constraints, such as circuit validity, efficiency, and expected output voltage. To implement this, we train different estimators ( $f_{\mathrm{clf}}$ for all $\mathrm{clf} \in \{\mathrm{valid}, \mathrm{eff}, \mathrm{vout}\}$ ) to assess circuit validity, efficiency, and expected output voltage, and assign corresponding scores $s_{\mathrm{clf}}$ . These scores are then used to compute a reward in the range $[-1, 1]$ . The estimators are dedicated classification or regression models ( $f_{\mathrm{clf}}$ ) with RoBERTa as the underlying architecture, each trained on synthetic datasets tailored to its constraint type. The details are as follows:
+
+Circuit Validity Estimator: A binary classifier $f_{\mathrm{valid}}$ is trained on a dataset $\mathcal{D}$ comprising valid and invalid circuits, constructed from the aggregated dataset (described in Section 2), to determine the validity of circuit topologies. This RoBERTa-based classifier achieves a $92\%$ $F_1$ score for binary classification of circuit validity.
+
+Circuit Efficiency Estimator: A regression model $f_{\mathrm{eff}}$ estimates circuit efficiency using a subset of the dataset $\mathcal{D}$ with prompts specifying efficiency requirements. The model achieves an 83% macro $F_{1}$ score by categorizing predicted efficiency scores into predefined categories.
+
+Output Voltage Estimator: A regression model $f_{\mathrm{vout}}$ predicts output voltage based on input parameters, achieving a low MSE of $8 \times 10^{-3}$ on the development set.
+
+Using the remaining data in $\mathcal{D}$ and the trained estimators, we define a reward function $r(X,\hat{Y})$ that assigns a reward to an LLM-generated circuit topology $\hat{Y}$ as follows:
+
+$$
+r(X, \hat{Y}) = \left\{ \begin{array}{ll} -1, & \text{if } s_{\text{valid}} < 0.6 \\ 1, & \text{if } s_{\text{eff}} \text{ or } s_{\text{vout}} \text{ meets its constraint} \\ s_{\text{eff}}, & \text{otherwise} \end{array} \right. \tag{2}
+$$
+
+Invalid topologies receive a negative reward. Valid topologies meeting output voltage or efficiency constraints get a reward of 1; otherwise, the efficiency estimate is used as the reward to maximize efficiency.
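Equation (2) translates into a small scoring function. The tolerance used to decide when $s_{\mathrm{vout}}$ meets its target is an assumption, as the paper does not state one:

```python
def reward(s_valid, s_eff, s_vout, eff_target=None, vout_target=None, tol=0.05):
    """Reward of Eq. (2). `tol`, the slack for matching the output-voltage
    target, is an assumed value; the paper does not specify one."""
    if s_valid < 0.6:
        return -1.0  # invalid topology
    eff_ok = eff_target is not None and s_eff >= eff_target
    vout_ok = vout_target is not None and abs(s_vout - vout_target) <= tol
    if eff_ok or vout_ok:
        return 1.0   # constraint satisfied
    return s_eff     # otherwise, push toward higher efficiency
```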
+
+# 3.2.2. RL TUNING
+
+To enhance the LLM for generating circuit topologies that better meet the constraints, we employ a reward function $r(X, \hat{Y})$ and Proximal Policy Optimization (PPO) (Schulman et al., 2017). The base model for this refinement is the LLM fine-tuned with the instruction tuning technique discussed in Section 3.1, following the common practice in Reinforcement Learning with Human Feedback (RLHF) (Ouyang et al., 2022). Standard PPO training procedures are then applied to optimize the base model using the following reward objective function:
+
+$$
+\mathcal{L}_{\mathrm{RL}} = r(X, \hat{Y}) - \eta\, \mathrm{KL}\left(\pi_{\mathrm{RLAIF}}(\hat{Y} \mid X) \,\|\, \pi_{\theta}(\hat{Y} \mid X)\right) \tag{3}
+$$
+
+Here, KL represents the Kullback-Leibler divergence, and $\eta$ is a hyperparameter controlling the penalty for divergence. This penalty helps prevent the model from getting trapped in local optima or straying too far from the original distribution of the supervised instruction-tuned model.
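A per-sequence sketch of Equation (3), using a standard single-sample estimate of the KL term from the log-probabilities each policy assigns to the sampled topology; the value of $\eta$ below is an assumption:

```python
def penalized_reward(r, logp_rlaif, logp_ref, eta=0.1):
    """Per-sequence objective of Eq. (3): reward minus eta times a
    single-sample estimate of KL(pi_RLAIF || pi_theta), computed from the
    log-probabilities each policy assigns to the sampled topology Y_hat."""
    kl_est = logp_rlaif - logp_ref  # >= 0 in expectation under pi_RLAIF
    return r - eta * kl_est
```

In practice this penalized reward is what PPO maximizes; when the tuned policy drifts from the instruction-tuned reference, the KL estimate grows and the effective reward shrinks.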
+
+# 3.2.3. ITERATIVE ADAPTATION (IA)
+
+Additionally, we explore the concept of iterative adaptation as a means to refine the circuit topology generation process further. Leveraging the circuits generated in the previous steps, we aim to enhance the overall synthesis task by iteratively adapting the model based on the sampled valid and highly efficient synthesized circuit topologies. Utilizing the nucleus sampling approach, we specifically target circuit topologies with an efficiency score $(s_{\mathrm{eff}})$ exceeding 0.7. This criterion ensures that the selected circuits not only satisfy validity constraints but also exhibit high operational efficiency. To initiate the iterative adaptation process, we synthesize a dataset comprising 10,000 such samples. Starting from the RL-tuned model described in Section 3.2.2, we iteratively refine the model using Equation 3, incorporating insights from high-quality topologies into the RL-refined model from the previous iteration. Upon completion, the final model is used directly for inference, with no further tuning. In our experiments, we evaluate this iterative adaptation's effectiveness to enhance the circuit topology generation process.
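The selection step of iterative adaptation can be sketched as a simple filter over sampled topologies; the validity cutoff of 0.6 is carried over from Equation (2) as an assumption:

```python
def select_for_adaptation(samples, f_eff, f_valid,
                          eff_thresh=0.7, valid_thresh=0.6, n=10_000):
    """Keep sampled topologies that are valid and whose estimated efficiency
    s_eff exceeds 0.7, capped at n = 10,000 examples for the next round.
    (valid_thresh = 0.6 mirrors the validity cutoff of Eq. (2).)"""
    kept = [s for s in samples
            if f_valid(s) >= valid_thresh and f_eff(s) > eff_thresh]
    return kept[:n]
```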
+
+# 4. Experiments and Results
+
+In this study, we utilized the following Language Models (LMs) for generating circuit topologies: GPT-Neo-2.7B (Black et al., 2021), StableLM-3B-4E1T, Llama-3-8b (Grattafiori et al., 2024), and MPT-7b (Team et al., 2023). More details on the baseline models and implementation are provided in Appendices A and B.
+
+# 4.1. Baselines
+
+Our primary focus is on leveraging LLM-based generative methods for circuit design synthesis and benchmarking their potential. We compare different LLM-based methods as baselines and include GraphVAE (Simonovsky & Komodakis, 2018), a non-LLM baseline, to highlight the superiority of our method in both efficiency and performance. While recent LLM-based frameworks like AnalogCoder (Lai et al., 2025) and Artisan (Chen et al., 2024) provide valuable contributions, their methodologies and application domains differ from ours. AnalogCoder employs a training-free prompt strategy with a fixed synthesis library, focusing on general analog circuits, and does not specifically address power converters, which require nuanced handling of constraints such as efficiency and output voltage. Artisan is tailored for operational amplifier design using a domain-specific LLM. In contrast, AUTOCIRCUIT-RL is currently trained using power converter designs and combines instruction tuning with RL refinement to handle diverse user prompts and optimization goals. Although these prior works are not directly applicable to our constraint-driven setting, future adaptations could make comparisons more feasible.
+
+Given these considerations, we evaluate AUTOCIRCUIT-RL against the following baselines: Zero-Shot Generation: Prompts with component pools and design constraints are directly fed into large language models (LLMs) like Llama-2 (13b) and Flan-UL2 (20b) without fine-tuning, aiming to generate circuit topologies; In-Context Learning (ICL): This approach uses circuit generation demonstrations, combining input prompts with component pools, design constraints, and corresponding output circuits within the prompts. It leverages the in-context learning ability of LLMs such as Llama-2 (13b) and Flan-UL2 (20b), exploring different numbers of examples ( $j \in \{5, 10, 20\}$ ) and experimenting with incident encoding and netlist structures; Prompt Tuning: This method fine-tunes LLMs like Llama-2 (13b) and Flan-UL2 (20b) for circuit topology generation using a Prompt-tuned Model (Lester et al., 2021), which learns task-specific soft prompts while keeping model parameters unchanged. We test with 100 trainable soft prompt tokens (p100); Vanilla Fine-Tuning: Standard fine-tuning is conducted on the LMs listed above. The primary objective is to minimize the negative log-likelihood for generating circuit topologies; Gumbel-Max Fine-Tuning: Drawing ideas from a prior study (Vijayaraghavan et al., 2024), we integrate multiple objectives optimizing for circuit validity and efficiency using the Gumbel-Max trick. This approach refines models fine-tuned using Llama-3 (8b) and MPT-7b architectures, allowing for more efficient circuit generation while maintaining structural validity constraints; GRAPH-VAE: This method models circuit netlists as undirected graphs and uses a variational autoencoder (VAE) to generate graphs from continuous embeddings (Simonovsky & Komodakis, 2018). We did not implement DAG-based methods(Dong et al., 2023; Zhang et al., 2019), because a noticeable subset of circuits in our datasets are not DAGs. 
+For circuits with 6–10 components, the generation process is conditioned on a SentenceBERT-encoded label vector (Reimers, 2019) derived from the input prompt, enabling controlled and guided sampling during inference; AUTOCIRCUIT-RL: We introduce AUTOCIRCUIT-RL, a comprehensive framework designed to enhance the circuit topology generation process using RL and iterative adaptation.
+
+# 4.2. Metrics
+
+In our evaluation setup, we report different metrics based on sampling 500 unique circuit topologies from each of the trained models. (a) Circuit Validity Score represents the fraction of unique circuit topologies estimated as valid, denoted as $E(f_{\mathrm{clf}}(\hat{y}))$ . Here, $\operatorname{clf} \in \{\mathrm{valid}, S_{\mathrm{valid}}\}$ refers to the validity estimated by the classifier or the simulator, respectively. We consider a circuit valid if its validity score from the classifier exceeds 0.6; (b) Circuit Efficiency Score refers to the average efficiency of the generated circuits using our efficiency regressor or the NGSpice simulator, denoted as $E(f_{\mathrm{eff}}(\hat{y}))$ and $E(f_{S_{\mathrm{eff}}}(\hat{y}))$ , respectively. The NGSpice simulator evaluates the validity and efficiency of a given netlist by verifying its electrical characteristics and performance metrics through detailed circuit simulations. It evaluates parameters such as voltage levels, timing, and power consumption of the circuit topology with certain duty cycles under given conditions; (c) Duplicate Generation Rate (DGR), denoted by $\rho$ , indicates the number of circuit topologies that must be sampled from the model to obtain a unique circuit topology design. Formally, $\rho$ is calculated as the number of topologies generated divided by the number of unique topologies (500 in our case); and (d) Success Rate computes the percentage of valid circuit topologies that successfully meet certain design constraints for each category of prompts (as in Section 2): Component constraint (C), Efficiency constraint (C+E), Output Voltage constraint (C+V), and overall success rate (O). Figure 1 presents a sample success scenario with an efficiency constraint.
+
+Table 1. Evaluation results for all methods on 4-component (4C) and 5-component (5C) circuits. Circuit validity and efficiency are measured using both the classifier and the simulator. The DGR $\rho$ and the success rate $\sigma$ for different categories of constraints are also reported: component (C), efficiency $(\mathrm{C+E})$ , output voltage $(\mathrm{C+V})$ , and overall (O).
+
+| Models | $E(f_{\mathrm{valid}}(\hat{y}))$ 4C | 5C | $E(f_{S_{\mathrm{valid}}}(\hat{y}))$ 4C | 5C | $E(f_{\mathrm{eff}}(\hat{y}))$ 4C | 5C | $E(f_{S_{\mathrm{eff}}}(\hat{y}))$ 4C | 5C | DGR $\rho$ | $\sigma$ C | $\sigma$ C+E | $\sigma$ C+V | $\sigma$ O |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| *Prompt Tuning* | | | | | | | | | | | | | |
+| Llama-2-13b p100 | 54.60 | 50.20 | 54.50 | 53.40 | 53.60 | 43.62 | 52.45 | 47.97 | 5.39 | 80.94 | 33.89 | 34.02 | 43.35 |
+| Flan-UL2-20b p100 | 57.40 | 51.60 | 57.35 | 56.00 | 56.20 | 44.50 | 56.08 | 49.64 | 4.76 | 84.09 | 36.54 | 34.67 | 45.30 |
+| *Vanilla & Gumbel-based Multi-Objective Fine Tuning* | | | | | | | | | | | | | |
+| GPT-Neo FT | 60.50 | 57.80 | 60.60 | 57.78 | 57.90 | 54.70 | 56.73 | 54.49 | 2.98 | 89.52 | 40.28 | 37.62 | 49.06 |
+| StableLM FT | 59.42 | 58.10 | 60.20 | 59.00 | 58.20 | 55.0 | 59.51 | 53.27 | 2.67 | 89.04 | 39.76 | 37.96 | 48.89 |
+| Llama-3 FT | 66.78 | 63.50 | 66.80 | 63.80 | 62.8 | 61.3 | 61.15 | 62.97 | 2.10 | 95.76 | 68.04 | 58.06 | 69.79 |
+| MPT-7b FT | 64.96 | 61.65 | 65.00 | 60.50 | 60.50 | 59.2 | 60.23 | 62.23 | 2.26 | 95.32 | 67.85 | 58.28 | 69.52 |
+| Gumbel Llama-3 | 67.60 | 65.75 | 67.32 | 64.19 | 67.15 | 64.38 | 63.87 | 63.15 | 2.19 | 96.04 | 69.56 | 60.36 | 71.27 |
+| Gumbel MPT-7b | 67.16 | 63.50 | 66.42 | 63.48 | 66.30 | 64.42 | 63.19 | 63.64 | 2.32 | 95.80 | 68.30 | 60.22 | 70.68 |
+| *AUTOCIRCUIT-RL (our approach)* | | | | | | | | | | | | | |
+| AC-RL Llama-3 | 75.11 | 73.46 | 74.48 | 73.96 | 74.20 | 73.60 | 71.65 | 72.22 | 1.29 | 99.08 | 80.90 | 71.30 | 80.69 |
+| w/o IA | 71.08 | 68.32 | 72.78 | 68.96 | 71.60 | 69.70 | 69.50 | 68.68 | 1.46 | 98.26 | 76.75 | 70.10 | 78.39 |
+| AC-RL MPT-7b | 74.20 | 72.65 | 74.32 | 72.34 | 73.40 | 72.70 | 72.08 | 71.94 | 1.34 | 99.05 | 79.85 | 71.64 | 80.41 |
+| w/o IA | 70.88 | 69.56 | 71.06 | 69.23 | 70.10 | 69.50 | 67.37 | 69.08 | 1.53 | 98.03 | 76.52 | 68.50 | 77.61 |
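The DGR metric $\rho$ defined above can be computed from a stream of generated topologies as:

```python
def duplicate_generation_rate(generated, n_unique_target=500):
    """DGR (rho): number of topologies sampled per unique design obtained,
    i.e. samples drawn divided by unique topologies (500 in the paper)."""
    seen = set()
    for i, g in enumerate(generated, start=1):
        seen.add(g)
        if len(seen) == n_unique_target:
            return i / n_unique_target
    raise ValueError("stream exhausted before reaching the unique-design target")
```

A DGR of 1.0 means every sample is a new design; higher values indicate more duplicates per unique topology.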
+
+# 4.3. RL Convergence Analysis
+
+We evaluate the learning dynamics of the proposed AUTOCIRCUIT-RL framework by analyzing the convergence behavior of two key metrics during RL refinement:
+(a) circuit efficiency and (b) success ratio, defined as the proportion of generated circuits satisfying functional and design constraints. Figure 3 illustrates these convergence curves for circuits composed of 4 and 5 components, trained using the Llama-3 backbone over $\sim 25,000$ training steps with a batch size of 16. The training exhibits several characteristic phases. In the initial phase, both efficiency and success ratio increase gradually with noticeable oscillations, reflecting the model's early adaptation to reward signals. This is followed by a phase of rapid improvement, where the RL agent learns effective circuit design strategies. Intermediate fluctuations arise as the model explores diverse topologies while balancing validity, constraint satisfaction, and efficiency. Eventually, the curves stabilize and plateau, indicating convergence to a policy that consistently generates valid, high-quality circuit topologies. These results show AUTOCIRCUIT-RL steadily improves design capabilities and sustains robust performance as circuit complexity grows, with only minor efficiency degradation observed from 4- to 5-component circuits.
+
+# 4.4. Results Overview
+
+The evaluation results, summarized in Table 1, show that AUTOCIRCUIT-RL outperforms all baselines across various metrics for circuit topology synthesis. Notably, smaller language models tuned with our method outperform larger prompt tuning-based models, generating unique designs faster with a higher success rate in meeting design constraints. Our LLM-based approach is efficient, requiring
+
+
+
+
+Figure 3. RL convergence curves for AUTOCIRCUIT-RL with Llama-3 over $\sim 25,000$ training steps. Top: Efficiency for 4- and 5-component circuits; Bottom: Success ratio of valid and constraint-satisfying topologies.
+
+$\sim 2.7$ seconds per design generation once trained (refer to Appendix D for a runtime analysis). In contrast, traditional search-based algorithms (Fan et al., 2021) are computationally expensive and time-consuming, often taking hundreds of seconds to converge on a target design.
+
+Usability of Zero-Shot and ICL Methods We observe a notable disparity in performance between zero-shot generation techniques and fine-tuned methodologies. Zero-shot generation struggles to produce the valid netlist-like structures essential for subsequent classification or simulation tasks. Despite using in-context learning (ICL), where the model is exposed to sample prompts and corresponding circuit generations, the improvement in generating comprehensive netlist-like structures is minimal. Additionally, increasing the number of in-context examples $j$ yielded diminishing returns. Because these incomplete structures cannot be used for accurate metric computation, we exclude the zero-shot and ICL results from Table 1.
+
+# 4.5. Error Analysis
+
+We conduct a qualitative error analysis of the AUTOCIRCUIT-RL framework by evaluating the model-generated circuits through post-hoc SPICE simulations. The analysis focuses on two primary failure modes: validity errors, where circuits fail to simulate due to structural violations such as incorrect node assignments or connectivity issues, and efficiency constraint errors, where the generated circuits do not meet the efficiency thresholds specified in the generation prompts. This systematic evaluation enables the identification of recurring failure patterns and provides insight into the model's behavior near constraint boundaries, highlighting specific areas where the
+
+model lacks fine-grained control.
+
+Our analysis reveals that validity failures predominantly arise from minor inconsistencies in node assignments rather than fundamental topological errors, and such failures can typically be resolved with minimal structural modifications. Regarding efficiency constraint errors, most of the circuits that fail to meet the specified thresholds do so by a relatively small margin, suggesting that the model generally approximates the desired performance metrics. These deviations often reflect inherent trade-offs with other design parameters such as output voltage and duty cycle. A limited number of outlier cases exhibit larger deviations from the efficiency targets, typically caused by complex interactions between circuit components. Collectively, these findings indicate that while the model internalizes key principles of both structural integrity and performance-aware design, there remains scope for further improvements in constraint calibration and optimization. Detailed examples and further discussion can be found in Appendix C.
+
+# 4.6. Effectiveness of AUTOCIRCUIT-RL
+
+Impact of Prompt Tuning Prompt tuning of Flan-UL2/Llama-2 models (approximately 20B parameters) on a restricted dataset enabled the production of netlist-like structures. This represents a substantial enhancement over the performance of the same models under zero-shot or in-context learning (ICL) conditions. Nevertheless, these tuned models lag behind smaller language models (such as GPT-Neo/StableLM) fine-tuned using our method in terms of both the efficiency and the validity of the generated circuits. Additionally, our models demonstrate a higher success rate in meeting constraints compared to the larger prompt-tuned language models. Despite the advantages of tuning these larger models, we emphasize that fine-tuned models with lower capacity can still achieve effective performance relative to larger prompt tuning-based models for circuit topology synthesis.
+
+Comparison with Vanilla Fine-tuning Our methodology, which entails iterative refinement via reinforcement learning, demonstrates a significant improvement in effectiveness compared to basic fine-tuning of the GPT-Neo and StableLM architectures. This improvement is evident in generating valid circuits that meet design constraints, underscoring the efficacy of our approach in synthesizing valid topologies and accelerating the discovery of unique designs.
+
+Effect of Using Gumbel-Max Trick for Multi-Objective Optimization Building on a prior study (Vijayaraghavan et al., 2024), we optimize circuit validity and efficiency
+
+
+Figure 4. Few-shot fine-tuning results for 6-10 components, with models tuned using $k$ -examples of varying circuit topologies.
+
+Table 2. Overall success rate of GRAPHVAE, Prompt Tuning and AUTOCIRCUIT-RL for 6-10 component circuits with $k = 1000$ .
+
+| Models | 6C | 7C | 8C | 9C | 10C |
+| --- | --- | --- | --- | --- | --- |
+| GRAPHVAE | 21.5 | 20.6 | 16.1 | 15.9 | 12.8 |
+| PROMPT TUNING | 39.7 | 36.3 | 35.9 | 33.2 | 32.1 |
+| AC-RL | 65.5 | 63.8 | 63.2 | 60.4 | 58.5 |
+
+using the Gumbel-Max trick, yielding notable gains over standard fine-tuned models. However, its performance still lags behind AUTOCIRCUIT-RL by $\sim 9\%$ in circuit validity and efficiency. A key limitation of the Gumbel-Max trick is its inability to adaptively optimize multiple objectives. Unlike reinforcement learning (RL), which refines strategies through iterative AI feedback, Gumbel-based methods don't adjust based on prior evaluations and lack a mechanism to balance competing objectives, often leading to suboptimal designs. In contrast, our RL approach navigates these trade-offs effectively, explores a broader design space, and avoids premature convergence, demonstrating superior performance in circuit validity and efficiency.
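The Gumbel-Max trick discussed above can be stated compactly: perturbing each unnormalized log-probability with independent Gumbel(0,1) noise and taking the argmax is equivalent to sampling from the corresponding categorical distribution. A minimal pure-Python sketch (function names are ours, not from the paper or the cited baseline):

```python
import math
import random


def gumbel_noise(rng):
    # Gumbel(0,1) sample via inverse-CDF: -log(-log(U)), U ~ Uniform(0,1).
    # U is clamped away from {0, 1} to avoid log(0).
    u = min(max(rng.random(), 1e-12), 1.0 - 1e-12)
    return -math.log(-math.log(u))


def gumbel_max_sample(logits, rng=None):
    """Draw an index distributed as Categorical(softmax(logits)) without
    computing the normalizing constant: perturb each logit with Gumbel
    noise, then take the argmax."""
    rng = rng or random.Random()
    perturbed = [l + gumbel_noise(rng) for l in logits]
    return max(range(len(perturbed)), key=perturbed.__getitem__)
```

Because the argmax is non-differentiable, this form cannot propagate gradients to the logits, which is one reason relaxation- or RL-based refinement is needed for training.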
+
+Effect of Iterative Adaptation To assess the significance of the iterative adaptation strategy for our task, we conducted an experiment where we sampled circuit topology generations from a model trained using reward models but without incorporating the iterative adaptation strategy. Our results reveal that the iterative refinement enabled by iterative adaptation substantially improves the overall model performance across various metrics. Notably, the lack of an iterative adaptation strategy leads to a more pronounced decline in performance, particularly notable in the circuit efficiency score, where we observed a $\sim 8\%$ decrease.
+
+# 4.7. Adherence to Design Constraints
+
+This section examines how different models meet three key design constraints for 4- and 5-component circuits. We evaluate success rate $(\sigma)$ for component usage (C), efficiency $(\mathrm{C} + \mathrm{E})$ , and output voltage $(\mathrm{C} + \mathrm{V})$ , using a 20-40-40 prompt split per constraint. AUTOCIRCUIT-RL excels across constraints, demonstrating RL's effectiveness in circuit topology
+
+synthesis.
+
+Component Pool Adherence (C) Ensuring adherence to the component pool constraint is pivotal for practical applicability, achievable with minimal tuning. Generally, models fine-tuned or optimized with reward models exhibit superior adherence to the specified component pool. For instance, prompt tuning-based models, even with minimal tuning, achieve moderate success rates, with $\sigma$ values ranging from $\sim 81\%$ to $\sim 84\%$ . Remarkably, models trained using our complete AUTOCIRCUIT-RL approach showcase the highest success rates, with $\sigma$ values surpassing $\sim 98\%$ .
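Checking adherence to the component pool (C) reduces to a set comparison over the generated netlist. A minimal sketch using the netlist-as-list-of-triples format shown in the appendix (the no-duplicate-components rule is our assumption, based on the "representing different nodes" phrasing of the prompts):

```python
def adheres_to_pool(netlist, component_pool):
    """True iff every component in the netlist is drawn from the allowed
    pool and no component instance appears more than once."""
    used = [entry[0] for entry in netlist]  # first field is the component name
    return set(used) <= set(component_pool) and len(used) == len(set(used))
```

For example, a netlist that introduces a component outside the prompted pool would fail this check and count against the C success rate.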
+
+
+Figure 5. Plot of SuccessRate@$m$ of AC-RL for $m \in \{1,3,5\}$.
+
+Efficiency (E) & Output Voltage (V) Adherence Meeting efficiency and expected output voltage constraints is challenging in comparison to the component pool constraint, particularly when expected values are specified in the prompts. However, models optimized with reward models show notable improvements in meeting constraints. Particularly, models trained using AUTOCIRCUIT-RL exhibit superior performance in satisfying efficiency and expected output voltage requirements compared to their counterparts.
+
+# 4.8. Generalization to Complex Scenarios
+
+To assess the generalization capability of our framework, we evaluated it on circuits with 6-10 components. Following the procedure in Section 2, we constructed a limited dataset and applied few-shot fine-tuning with $k = \{250, 500, 1000\}$ examples. The base model for fine-tuning was the AC-RL model previously trained on 4- and 5-component circuits (see Table 1). We adopted a sampling strategy similar to that used for 4/5-component circuits, as shown in Figure 2(b). We compared our top-performing AUTOCIRCUIT-RL (Llama-3) approach against GRAPHVAE and prompt-tuned Flan-UL2 (20B), focusing on circuit validity and efficiency, measured by the NGSpice simulator. As shown in Figure 4, AUTOCIRCUIT-RL showed consistent improvements with increasing $k$, achieving validity and efficiency scores nearing $50\%$ with fewer than 500 examples. Table 2 summarizes the success rates for all methods with $k = 1000$, highlighting the superior performance of AUTOCIRCUIT-RL. The higher success rate underscores the effectiveness of reward
+
+models in optimizing efficiency and validity. These findings show AUTOCIRCUIT-RL's potential to scale to higher component circuits using minimal fine-tuning, though addressing more complex design constraints for different circuit types remains a future research area.
+
+Analysis of Success Rates & Sampling Strategies Comparing success rates in Tables 1 and 2, we observe a performance decline in AC-RL as circuit complexity increases. However, this drop is due to differences in training and evaluation setups, as the model was extensively trained on 4C and 5C circuits, whereas 6C and beyond used a few-shot setup with only 1,000 samples. Despite this, the model achieves $>60\%$ success with minimal training data, demonstrating good generalization. As shown in Figure 4, increasing training data improves performance, reinforcing the model's ability to learn from limited data. We also analyzed SuccessRate@m, where multiple generations per prompt $(m)$ increase the likelihood of success. As seen in Figure 5, SuccessRate@3 and SuccessRate@5 consistently outperform SuccessRate@1, showing that additional sampling improves results. The average generation times for $m = 3$ and $m = 5$ are $\sim 7$ and $\sim 9$ seconds, respectively, achieving higher success rates with minimal added computation and remaining significantly faster than traditional AI-based search methods, which require hundreds of seconds.
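SuccessRate@m counts a prompt as successful if any of its first $m$ generations satisfies the constraints, which is why it is monotonically non-decreasing in $m$. A minimal sketch of the bookkeeping (names are ours):

```python
def success_rate_at_m(generations_per_prompt, m, is_success):
    """Fraction of prompts for which at least one of the first m sampled
    generations satisfies the design constraints."""
    hits = sum(
        1 for gens in generations_per_prompt
        if any(is_success(g) for g in gens[:m])
    )
    return hits / len(generations_per_prompt)
```

Since each extra sample only adds to the per-prompt generation time ($\sim 7$ s at $m = 3$, $\sim 9$ s at $m = 5$), this trades a small amount of compute for a higher chance of a constraint-satisfying design.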
+
+# 5. Discussion
+
+Our results demonstrate the effectiveness of the proposed AC-RL in advancing circuit topology synthesis by iteratively refining circuit generation using reinforcement learning. Its key advantages are as follows:
+
+RL for Improved Validity and Efficiency Unlike traditional methods, AC-RL adapts circuit generation using reward feedback. This iterative process yields higher validity and efficiency, outperforming zero-shot and ICL baselines. Its ability to optimize multiple objectives while adhering to constraints highlights its robustness.
+
+Accelerated Discovery of Novel Circuits AC-RL reduces duplicate generation rates (DGR) by $\sim 38\%$ compared to fine-tuning methods, enabling faster convergence to unique and effective circuit topologies. This accelerates design space exploration while minimizing redundant computations, making it highly effective for circuit automation.
+
+Enhanced Constraint Adherence With up to $80\%$ success rates in constraint adherence, AC-RL efficiently balances efficiency and output voltage while ensuring feasibility. Unlike heuristic-based methods that require extensive computational resources, its reward-driven approach adapts to complex constraints with significantly better performance.
+
+Scalability with Limited Training Data AC-RL achieves $>60\%$ success rates in generating valid circuits with only $\sim 1,000$ training examples, demonstrating strong generalization. This ability to perform well with limited data distinguishes it from conventional methods that rely on extensive labeled datasets.
+
+Generalization to Increasing Circuit Complexity AC-RL scales effectively to circuits with 6-10 components, outperforming baselines like GRAPHVAE and prompt-tuned Flan-UL2 (20B). Although success rates decline as complexity increases, additional training data significantly enhances performance. Moreover, the SuccessRate@m analysis shows that generating multiple circuits per prompt improves convergence with minimal added computation.
+
+Extending AC-RL to diverse circuit architectures and integrating RL with advanced sampling could further optimize high-dimensional design spaces. Addressing constraints like power consumption and component parameter estimation via adaptive reward modeling remains a promising avenue. Overall, AUTOCIRCUIT-RL offers a transformative approach to circuit topology synthesis, excelling in efficiency, constraint adherence, and generalization with minimal data. Further enhancements could solidify its role as a cornerstone in AI-driven electronic design automation.
+
+# 6. Conclusion
+
+In this work, we introduced AUTOCIRCUIT-RL (AC-RL), a framework for automating analog circuit topology synthesis via a two-phase training process. The first phase employs instruction tuning, where an LLM generates initial topologies from structured prompts encoding design constraints like component pools and target efficiency or output voltage, ensuring basic feasibility. The second phase, RL refinement, updates the model using AI-based reward feedback to improve validity, efficiency, and output voltage. This refinement is applied only during training to improve generation quality, enabling faster inference. Empirical results show that AC-RL outperforms prior methods, generating $\sim 12\%$ more valid circuits and improving efficiency by $\sim 14\%$ for 4- and 5-component circuits. It demonstrates strong generalization, achieving near $50\%$ validity and efficiency for larger circuits with minimal training data. Additionally, it reduces duplicate generation by $\sim 38\%$ compared to other baselines and, with only 1,000 training examples, generates designs where $>60\%$ meet the specified constraints. AC-RL's reward-driven optimization adapts to evolving design constraints, setting a new benchmark for AI-driven design automation. Our findings aim to accelerate the development of efficient, adaptable methodologies in analog circuit automation, enabling faster, reliable synthesis for circuit topologies with reduced training data needs. Future work will extend AC-RL to support more complex designs, incorporate advanced sampling, and refine power models, further advancing electronic design automation.
+
+# Impact Statement
+
+The proposed AUTOCIRCUIT-RL framework significantly advances the field of analog circuit topology synthesis by integrating large language models (LLMs) with reinforcement learning (RL) for constrained topology generation. Our approach addresses key challenges in circuit synthesis, such as scalability, efficiency, and adaptability to diverse design specifications, which are critical in modern electronic design automation (EDA). By overcoming the limitations of traditional search-based and generative methods, our framework enables efficient circuit generation while reducing the number of simulation queries. This not only enhances the practicality of AI-driven circuit design but also provides a scalable solution capable of generalizing to circuits with 6 to 10 components using few-shot fine-tuning. Our results demonstrate superior performance in generating circuits that meet design constraints more effectively than existing approaches, paving the way for future research in topology-aware AI design methods.
+
+Beyond technical improvements, our work has broader societal and ethical implications. By automating complex aspects of circuit design, AUTOCIRCUIT-RL reduces reliance on extensive human expertise, potentially lowering the barrier to entry for new designers and expanding accessibility to hardware design in low-resource environments. This democratization of circuit synthesis could foster innovation and accelerate technological advancements across industries. However, it is crucial to ensure that AI-driven design tools remain transparent and do not introduce unintended biases in circuit topology selection, which could lead to over-reliance on specific architectures. Additionally, given the increasing energy demands of AI-driven automation, future work should consider optimizing the computational efficiency of training and inference phases to minimize environmental impact. By addressing these aspects, our work contributes to responsible AI adoption in EDA, ensuring that automation enhances creativity and inclusivity in circuit design rather than reinforcing existing barriers.
+
+# References
+
+Bai, Y., Kadavath, S., Kundu, S., Askell, A., Kernion, J., Jones, A., Chen, A., Goldie, A., Mirhoseini, A., McKinnon, C., et al. Constitutional ai: Harmlessness from ai feedback. arXiv preprint arXiv:2212.08073, 2022.
+Bengio, Y., Léonard, N., and Courville, A. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
+Bhandari, J., Bhat, V., He, Y., Garg, S., Rahmani, H., and Karri, R. Auto-spice: Leveraging llms for dataset creation
+
+via automated spice netlist extraction from analog circuit diagrams. arXiv preprint arXiv:2411.14299, 2024.
+Black, S., Gao, L., Wang, P., Leahy, C., and Biderman, S. GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow, March 2021. URL https://doi.org/10.5281/zenodo.5297715.
+Chang, C.-C., Shen, Y., Fan, S., Li, J., Zhang, S., Cao, N., Chen, Y., and Zhang, X. Lamagic: Language-model-based topology generation for analog integrated circuits. In the 41st International Conference on Machine Learning (ICML), pp. 1-8, 2024.
+Chen, Z., Huang, J., Liu, Y., Yang, F., Shang, L., Zhou, D., and Zeng, X. Artisan: Automated operational amplifier design via domain-specific large language model. In Proceedings of the 61st ACM/IEEE Design Automation Conference, pp. 1-6, 2024.
+Dong, Z., Cao, W., Zhang, M., Tao, D., Chen, Y., and Zhang, X. Cktgnn: Circuit graph neural network for electronic design automation. In International Conference on Learning Representations (ICLR), pp. 1-20, 2023.
+Fan, S., Cao, N., Zhang, S., Li, J., Guo, X., and Zhang, X. From specification to topology: Automatic power converter design via reinforcement learning. In IEEE/ACM International Conference On Computer Aided Design (ICCAD), pp. 1-9, 2021.
+Fatemi, B., Halcrow, J., and Perozzi, B. Talk like a graph: Encoding graphs for large language models. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=IuXR1CCrSi.
+Grattafiori, A., Dubey, A., Jauhri, A., Pandey, A., Kadian, A., Al-Dahle, A., Letman, A., Mathur, A., Schelten, A., Vaughan, A., et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.
+Hakhamaneshi, K., Werblun, N., Abbeel, P., and Stojanovic, V. Bagnet: Berkeley analog generator with layout optimizer boosted with deep neural networks. In IEEE/ACM International Conference on Computer-Aided Design (IC-CAD), pp. 1-8, 2019.
+Jang, E., Gu, S., and Poole, B. Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144, 2016.
+Lai, Y., Lee, S., Chen, G., Poddar, S., Hu, M., Pan, D. Z., and Luo, P. Analogcoder: Analog circuit design via training-free code generation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pp. 379-387, 2025.
+
+Lee, H., Phatale, S., Mansoor, H., Lu, K., Mesnard, T., Bishop, C., Carbune, V., and Rastogi, A. Rlaif: Scaling reinforcement learning from human feedback with ai feedback. arXiv preprint arXiv:2309.00267, 2023.
+Lester, B., Al-Rfou, R., and Constant, N. The power of scale for parameter-efficient prompt tuning. arXiv preprint arXiv:2104.08691, 2021.
+Loshchilov, I. and Hutter, F. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
+McConaghy, T., Palmers, P., Steyaert, M., and Gielen, G. Trustworthy Genetic Programming-Based Synthesis of Analog Circuit Topologies Using Hierarchical Domain-Specific Building Blocks. IEEE Transactions on Evolutionary Computation, 15(4):557-570, 2011. ISSN 1941-0026.
+Nenzi, P. and Vogt, H. Ngspice users manual, version 23, 2011.
+Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., et al. Training language models to follow instructions with human feedback. Advances in neural information processing systems, 35:27730-27744, 2022.
+Reimers, N. Sentence-bert: Sentence embeddings using siamese bert-networks. arXiv preprint arXiv:1908.10084, 2019.
+Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
+Shen, J., Chen, Z., Zhuang, J., Huang, J., Yang, F., Shang, L., Bi, Z., Yan, C., Zhou, D., and Zeng, X. Atelier: An automated analog circuit design framework via multiple large language model-based agents. Authorea Preprints, 2024.
+Simonovsky, M. and Komodakis, N. Graphvae: Towards generation of small graphs using variational autoencoders. In Artificial Neural Networks and Machine Learning-ICANN 2018: 27th International Conference on Artificial Neural Networks, Rhodes, Greece, October 4-7, 2018, Proceedings, Part I 27, pp. 412-422. Springer, 2018.
+Team, M. N. et al. Introducing mpt-7b: A new standard for open-source, commercially usable llms, 2023.
+Vijayaraghavan, P., Shi, L., Degan, E., and Zhang, X. Circuitsynth: Leveraging large language models for circuit topology synthesis. In 2024 IEEE LLM Aided Design Workshop (LAD), pp. 1-6. IEEE, 2024.
+
+Wang, H., Wang, K., Yang, J., Shen, L., Sun, N., Lee, H., and Han, S. Gcn-rl circuit designer: Transferable transistor sizing with graph neural networks and reinforcement learning. In ACM/IEEE Design Automation Conference (DAC), pp. 1-6, 2020.
+Zhang, H., Sun, S., Lin, Y., Wang, R., and Bian, J. AnalogXpert: Automating analog topology synthesis by incorporating circuit design expertise into large language models. arXiv preprint arXiv:2412.19824, 2024.
+Zhang, M., Jiang, S., Cui, Z., Garnett, R., and Chen, Y. Dvae: A variational autoencoder for directed acyclic graphs. Advances in neural information processing systems, 32, 2019.
+Zhao, Z. and Zhang, L. An Automated Topology Synthesis Framework for Analog Integrated Circuits. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 39(12):4325-4337, 2020. ISSN 1937-4151.
+
+# A. Appendix: Baseline Model Details
+
+We compare AUTOCIRCUIT-RL against various baseline models, including both LLM-based and non-LLM approaches, to assess its efficiency and performance in circuit topology synthesis.
+
+- GPT-Neo-2.7B (Black et al., 2021)$^{1}$: A 2.7B parameter transformer-based model developed by EleutherAI, following GPT-3's architecture. It was trained on 420B tokens over 400,000 steps using masked autoregressive modeling with cross-entropy loss.
+- StableLM-3B-4E1T $^{2}$ : A 3B parameter decoder-only model pre-trained on 1T tokens over 4 epochs from diverse English and code datasets. Referred to as StableLM in our work.
+- Llama-3-8B $^{3}$ : An 8B parameter model from the Meta Llama 3 family, incorporating supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to enhance alignment with human preferences.
+- MPT-7B $^{4}$ : A 7B parameter decoder-style transformer trained on 1T tokens of English text and code. It employs a modified transformer architecture optimized by MosaicML for efficient training and inference.
+CIRCUITSYNTH-GUMBEL: A variant of (Vijayaraghavan et al., 2024) incorporating:
+
+1. Training a circuit validity and efficiency classifier to estimate the probability of a generated circuit being valid.
+2. Fine-tuning an LLM to generate circuit topologies.
+3. Refining outputs using the classifier while enforcing circuit validity and efficiency constraints.
+
+The training objective combines the standard negative log-likelihood loss $\mathcal{L}_{\mathrm{LM}}$ with circuit validity and efficiency losses. Since LLMs operate in discrete spaces, we employ Gumbel-softmax (Jang et al., 2016) for continuous relaxation, enabling gradient-based optimization.
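The relaxation works by perturbing logits with Gumbel noise and replacing the hard argmax with a tempered softmax, so the sample stays differentiable with respect to the logits. A minimal pure-Python sketch of the forward computation (the paper does not give the loss weighting, so `lam` below is a placeholder of ours):

```python
import math
import random


def gumbel_softmax(logits, tau=1.0, rng=None):
    """Continuous relaxation of categorical sampling (Jang et al., 2016):
    perturb logits with Gumbel(0,1) noise, then apply a softmax with
    temperature tau. As tau -> 0 the output approaches a one-hot sample."""
    rng = rng or random.Random()
    noisy = []
    for l in logits:
        u = min(max(rng.random(), 1e-12), 1.0 - 1e-12)  # avoid log(0)
        noisy.append(l - math.log(-math.log(u)))
    m = max(noisy)
    exps = [math.exp((x - m) / tau) for x in noisy]
    z = sum(exps)
    return [e / z for e in exps]


def combined_loss(nll, validity_prob, efficiency_score, lam=0.5):
    # Placeholder weighting (our assumption): penalize low predicted
    # validity/efficiency on top of the language-modeling loss.
    return nll + lam * ((1.0 - validity_prob) + (1.0 - efficiency_score))
```

In practice this would be applied to token logits inside an autodiff framework; the sketch only illustrates the sampling relaxation and the shape of the combined objective.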
+
+- GraphVAE (Simonovsky & Komodakis, 2018): A non-LLM baseline for circuit generation. The encoder consists of two graph convolutional layers (32 and 64 channels) with identity connections, batch normalization, and ReLU activation, followed by a fully connected (FC) layer producing a 256-dimensional latent representation. The decoder has three FC layers (256, 512, 1024 channels) with batch normalization and ReLU, followed by a parallel triplet of FC layers to output graph tensors. Training was performed for 50-75 epochs using Adam (learning rate $1\mathrm{e}{-3}$, $\beta_{1} = 0.5$). A sentence transformer (Reimers, 2019) processes natural language prompts, followed by an FC layer to align its representation with the encoder output size.
+
+| Prompt Type | Prompt | Sample Circuit |
+| --- | --- | --- |
+| Circuit Component Constraint | Generate a 4-component circuit with n-type MOSFETs: FET-A-0, FET-A-1 and FET-A-2; and p-type MOSFET: FET-B-0; representing different nodes: ['FET-A-2', 'FET-B-0', 'FET-A-1', 'FET-A-0'] | [['FET-A-2', '5', '0'], ['FET-B-0', '5', 'OUT'], ['FET-A-1', '0', 'IN'], ['FET-A-0', '5', 'OUT']] |
+| Output Voltage Constraint | Generate a 4-component circuit with capacitors: capacitor-0 and capacitor-1; n-type MOSFET: FET-A-0; and p-type MOSFET: FET-B-0; representing different nodes: ['capacitor-1', 'FET-A-0', 'capacitor-0', 'FET-B-0'] with Vout less than 1.5V when Vin equals 2V | [['capacitor-1', 'IN', '10'], ['FET-A-0', 'OUT', 'IN'], ['capacitor-0', 'IN', '0'], ['FET-B-0', 'OUT', '10']] |
+| Efficiency Constraint | Generate a 4-component circuit with inductor: inductor-0; n-type MOSFET: FET-A-0; p-type MOSFET: FET-B-0; and capacitor: capacitor-0; representing different nodes: ['inductor-0', 'FET-A-0', 'FET-B-0', 'capacitor-0'] with efficiency greater than 0.6 | [['inductor-0', '6', 'OUT'], ['FET-A-0', 'OUT', '6'], ['FET-B-0', 'IN', '6'], ['capacitor-0', 'OUT', '0']] |
+
+Table 3. Sample prompts of each prompt type and their corresponding sample circuit topologies.
+
+# B. Appendix: Implementation Details
+
+We provide the implementation details of our experiments conducted with the official PyTorch v2.2.0 release binary package, compiled with CUDA 11.8, utilizing NVIDIA V100 GPUs with 32 GB of memory.
+
+- Two-Phase Training: This setup applies to both instruction tuning and RL-refinement phases. We plot the training data on a Vout-Efficiency graph (an example for 5-component circuits is shown in Figure 6) and categorize the data into four main groups: (a) Group 1 with low efficiency (efficiency lower than 0.05), (b) Group 2 with moderate efficiency (efficiency between 0.05 and 0.7), (c) Group 3 with high efficiency but the output voltage shows minimal difference from input voltage, and (d) Group 4 with optimal Vout and efficiency. To optimize the training process, we apply a weighted sampling strategy in each batch, giving the highest priority to Group 4, which represents the optimal conditions, and the lowest priority to Group 1, which has the least efficiency. The sampling weights are allocated as follows: 0.1 for Group 1, 0.25 for Group 2, 0.25 for Group 3, and 0.4 for Group 4. This approach ensures that batches are enriched with data from Group 4, allowing the model to better learn from the most critical and desirable conditions. Training is conducted over 4-6 epochs using shuffled data from the training split, with model checkpoints saved based on the best performance on the validation split. To manage memory efficiently, we employ gradient checkpointing. We use the AdamW optimizer (Loshchilov & Hutter, 2017), setting beta parameters to 0.9 and 0.95, and an epsilon value of $1.0\mathrm{e}{-8}$. The learning rate is set to $0.95\mathrm{e}{-5}$, and the seed is fixed at 42 to ensure reproducibility. During training, we assess performance by evaluating a subset of 100 sample generations, using consistent evaluation settings. If the performance in the current epoch exceeds that of the previous one, we save the checkpoint.
+
+
+Figure 6. Vout-Efficiency graph of 5-component circuits in the training data.
+
+- Iterative Adaptation: The iterative adaptation, explained in Section 3.2.3, is performed over 3-5 iterations depending on the base model used. Each iteration involves 2-4 epochs of RL refinement and uses a dataset of 10,000 high-quality circuit samples obtained via nucleus sampling. This process improves performance progressively with each stage. The final adapted model is directly used for inference without further tuning.
+- Inference: We assess all models by generating 1,000 unique sample circuit topologies from each, using a combination of nucleus sampling and top-k sampling techniques. These generated topologies are then analyzed using validity, efficiency, and output voltage estimators to identify the designs that are both valid and efficient, meeting the specified design criteria.
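The group-weighted batch sampling described in the two-phase training setup above can be sketched as follows. The efficiency thresholds (0.05 and 0.7) and group weights (0.1/0.25/0.25/0.4) are from the text; the 5% Vout tolerance used to decide when the output voltage shows "minimal difference" from the input is our assumption, as no exact threshold is given:

```python
import random

# Sampling weights per group, as stated in the text.
GROUP_WEIGHTS = {1: 0.10, 2: 0.25, 3: 0.25, 4: 0.40}


def assign_group(efficiency, vout, vin, vout_tol=0.05):
    """Bucket a training example on the Vout-Efficiency plane.
    vout_tol (relative) is our assumption; assumes nonzero vin."""
    if efficiency < 0.05:
        return 1  # low efficiency
    if efficiency <= 0.7:
        return 2  # moderate efficiency
    if abs(vout - vin) / abs(vin) < vout_tol:
        return 3  # high efficiency, but Vout barely differs from Vin
    return 4      # optimal Vout and efficiency


def sample_batch(examples, batch_size, rng=random):
    """Draw a batch with replacement, weighted by group priority.
    Each example is a tuple starting with (efficiency, vout, vin, ...)."""
    weights = [GROUP_WEIGHTS[assign_group(*e[:3])] for e in examples]
    return rng.choices(examples, weights=weights, k=batch_size)
```

This makes batches disproportionately rich in Group 4 examples while still exposing the model to the other regions of the design space.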
+
+# C. Error Analysis
+
+# C.1. Validity Errors
+
+We begin our error analysis by examining one common failure mode encountered in generated circuit topologies: validity errors, as identified by SPICE simulation. A netlist is considered invalid if it violates essential electrical constraints such as correct node referencing, grounding, or connectivity, which causes simulation failures.
+
+Despite the strong overall performance of our model, a small subset of generated netlists exhibit such validity issues. These typically stem from improper or inconsistent node assignments rather than errors in component selection or overall topology. A representative example of an invalid netlist generated by the model is:
+
+```txt
+[['FET-B-1', 'IN', '6'],
+ ['FET-A-0', '0', 'IN'],
+ ['FET-B-0', 'OUT', '7'],
+ ['inductor-0', '6', '7']]
+```
+
+This netlist fails validation primarily due to ambiguous or incorrect node labeling, which disrupts the circuit's connectivity and prevents successful simulation. However, with minimal adjustments, specifically refining the node assignments to properly reference outputs and grounds, the netlist can be made valid:
+
+```txt
+[['FET-B-1', 'IN', '6'],
+ ['FET-A-0', '0', 'IN'],
+ ['FET-B-0', 'OUT', '0'],
+ ['inductor-0', '6', 'OUT']]
+```
+
+This minor correction preserves the original component arrangement and topology while resolving the node reference ambiguities, enabling successful simulation. This pattern indicates that the model effectively learns appropriate component placements and connectivity patterns but occasionally struggles with precise node labeling. These validity errors are therefore close to valid designs and could be mitigated with improved node management during generation.
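The failure pattern above suggests a cheap structural pre-check before invoking SPICE. The sketch below is our own heuristic, not the paper's validity classifier: it requires that the standard nodes are referenced and that the two-terminal connection graph forms a single connected network. It catches disconnected subgraphs and missing ground/output references, though not every simulation failure:

```python
from collections import defaultdict


def structurally_valid(netlist, required_nodes=("0", "IN", "OUT")):
    """Heuristic pre-simulation check on a netlist of [name, node_a, node_b]
    triples: required nodes must appear, and all nodes must belong to one
    connected electrical network."""
    nodes = {t for _, a, b in netlist for t in (a, b)}
    if not set(required_nodes) <= nodes:
        return False
    # Build an undirected adjacency over node labels and run a DFS.
    adj = defaultdict(set)
    for _, a, b in netlist:
        adj[a].add(b)
        adj[b].add(a)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        stack.extend(adj[n] - seen)
    return seen == nodes  # connected iff DFS reached every node
```

Applied to the corrected netlist above, the check passes; a netlist missing the OUT node or containing a floating island of components fails.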
+
+# C.2. Efficiency Constraint Errors
+
+In addition to validity, our generated circuits are evaluated against performance constraints such as minimum required efficiency, as specified in the generation prompt. These constraints are verified post-hoc using SPICE simulation. While the majority of the model outputs achieve or closely approximate the requested efficiency thresholds, a few circuits fall short. Consider the following example, where the prompt required an efficiency greater than 0.7:
+
+```txt
+[['inductor-0', 'IN', '6'],
+ ['FET-B-0', 'OUT', '6'],
+ ['capacitor-0', '0', 'OUT'],
+ ['FET-A-0', 'OUT', 'IN']]
+```
+
+This topology, under a generated duty cycle of 0.1, achieved a simulated efficiency of 0.625, missing the constraint by a relatively small margin. The component usage and connectivity suggest that the model has captured many of the structural features that contribute to efficiency, although precise performance can depend on subtle circuit-level interactions. Such cases demonstrate that the model generates solutions near the constraint boundary and could be improved further with targeted refinement.
+
+However, there also exist a few outlier cases where the efficiency gap is more pronounced, such as outputs scoring well below the target (for example, less than 0.3 when the required efficiency was greater than 0.7). These larger discrepancies are typically associated with difficult trade-offs between duty cycle, output voltage, and interactions among other components. In these cases, the model may prioritize satisfying voltage or topological structure over efficiency, reflecting the inherent challenge of simultaneously satisfying multiple constraints. Nevertheless, the proximity of many failing cases to the desired efficiency threshold, along with the model's ability to produce performance-aware topologies, supports the conclusion that these errors arise from nuanced constraint balancing rather than fundamental model shortcomings.
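The post-hoc constraint check amounts to comparing simulated efficiency against the prompted threshold and inspecting the margin. A minimal sketch (the 0.1 near-miss margin is illustrative, not a threshold from the paper):

```python
def classify_efficiency(simulated, required, near_margin=0.1):
    """Compare a SPICE-simulated efficiency against the prompted constraint.
    The near_margin separating near-misses from outliers is illustrative."""
    if simulated >= required:
        return "pass"
    if required - simulated <= near_margin:
        return "near-miss"  # e.g. 0.625 against a 0.7 requirement
    return "outlier"        # e.g. below 0.3 against a 0.7 requirement
```

Under this labeling, the duty-cycle-0.1 example above (0.625 vs. 0.7) is a near-miss, while the rarer sub-0.3 outputs are outliers.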
+
+# D. Runtime Complexity Analysis
+
+A key strength of AUTOCIRCUIT-RL lies in its computational efficiency, particularly when compared with traditional search-based topology synthesis methods. These traditional methods, such as genetic algorithms and tree-based search (Fan et al., 2021; Zhao & Zhang, 2020), often rely on hundreds of SPICE simulations per design iteration. As a result, they typically require several minutes (hundreds of seconds) to produce a single valid circuit design, especially when adapting to new specifications or exploring diverse topologies.
+
+In contrast, our method significantly reduces synthesis time by leveraging a two-phase RL approach. The LLM quickly generates initial candidate topologies, while the RL-based refinement performs iterative optimization using a learned reward model that captures circuit validity, efficiency, and output voltage. On average, AUTOCIRCUIT-RL generates a complete and optimized circuit in approximately 2–3.5 seconds using two NVIDIA V100 GPUs, offering over 50x improvement in design time over traditional approaches.
+
+To assess how the choice of language model affects runtime, we evaluated AUTOCIRCUIT-RL with two LLMs: MPT-7B and LLaMA-3 8B. As expected, model size influences generation latency. The MPT-7B variant achieves faster runtimes (2.4–3.5 seconds for 4–10 component circuits), whereas the LLaMA-3 8B variant incurs slightly higher runtimes (2.8–5 seconds), reflecting the increased computational overhead of the larger model. Despite this, both configurations maintain runtimes well below traditional baselines, making them viable for real-time or interactive design applications.
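The reported speedup follows directly from these timings; a back-of-the-envelope check using the paper's figures (hundreds of seconds for search-based methods vs. 2–3.5 s here):

```python
def speedup(baseline_seconds, ours_seconds):
    """Design-time improvement factor over a search-based baseline."""
    return baseline_seconds / ours_seconds

# A conservative 200 s search-based baseline against the slowest MPT-7B
# runtime (3.5 s) already exceeds the reported 50x improvement.
```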
+
+
+Figure 7. Runtime Complexity of AUTOCIRCUIT-RL Using MPT-7B vs. LLaMA-3 8B. The runtime per circuit increases with the number of components but remains significantly faster than traditional search-based methods. MPT-7B offers lower latency due to its smaller model size, while LLaMA-3 8B provides marginally higher runtimes due to increased model capacity.
+
+These results demonstrate that AUTOCIRCUIT-RL achieves an effective balance between computational cost and output quality, offering a scalable and practical alternative to traditional methods in analog circuit synthesis.
\ No newline at end of file
diff --git a/autocircuitrlreinforcementlearningdrivenllmforautomatedcircuittopologygeneration/images.zip b/autocircuitrlreinforcementlearningdrivenllmforautomatedcircuittopologygeneration/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..1d8a94fdc3d6bd56147de67ebef4d422de96f423
--- /dev/null
+++ b/autocircuitrlreinforcementlearningdrivenllmforautomatedcircuittopologygeneration/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:630b6d483a7b762ab3d5577161c146745d1f4db3a9026e64ed29c305bcecc34e
+size 556116
diff --git a/autocircuitrlreinforcementlearningdrivenllmforautomatedcircuittopologygeneration/layout.json b/autocircuitrlreinforcementlearningdrivenllmforautomatedcircuittopologygeneration/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..21e5a695e84b0438d3ffa0a9a7717f83298ab284
--- /dev/null
+++ b/autocircuitrlreinforcementlearningdrivenllmforautomatedcircuittopologygeneration/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c2c28d7a5f7d16f7a0b648fd3da41bbb5e62a284945a571eea1267533b84fde7
+size 433801
diff --git a/avariationalframeworkforimprovingnaturalnessingenerativespokenlanguagemodels/4827b6bd-af87-4f9c-9401-b6d55b484c9b_content_list.json b/avariationalframeworkforimprovingnaturalnessingenerativespokenlanguagemodels/4827b6bd-af87-4f9c-9401-b6d55b484c9b_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..b0c12b500390e75dc54ad816c163eac0ea8abe6c
--- /dev/null
+++ b/avariationalframeworkforimprovingnaturalnessingenerativespokenlanguagemodels/4827b6bd-af87-4f9c-9401-b6d55b484c9b_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8ac11e50f05ff1293a0ffe6a1788c400cf780e1499920a5afc4c339f5d2b08a5
+size 132477
diff --git a/avariationalframeworkforimprovingnaturalnessingenerativespokenlanguagemodels/4827b6bd-af87-4f9c-9401-b6d55b484c9b_model.json b/avariationalframeworkforimprovingnaturalnessingenerativespokenlanguagemodels/4827b6bd-af87-4f9c-9401-b6d55b484c9b_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..20bb8a8452a2ae334797ca49f1f674ed164be2a5
--- /dev/null
+++ b/avariationalframeworkforimprovingnaturalnessingenerativespokenlanguagemodels/4827b6bd-af87-4f9c-9401-b6d55b484c9b_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:55ab45901de12f421749748917c998820f46cbe8f0087cf36fd8bb6b24aa9316
+size 156145
diff --git a/avariationalframeworkforimprovingnaturalnessingenerativespokenlanguagemodels/4827b6bd-af87-4f9c-9401-b6d55b484c9b_origin.pdf b/avariationalframeworkforimprovingnaturalnessingenerativespokenlanguagemodels/4827b6bd-af87-4f9c-9401-b6d55b484c9b_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..2e03e6a59db1cda39672d7dc09e9c8cd8979b144
--- /dev/null
+++ b/avariationalframeworkforimprovingnaturalnessingenerativespokenlanguagemodels/4827b6bd-af87-4f9c-9401-b6d55b484c9b_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:db8ce6b6b7bc9dc3aaec4d5f269587f8c0160ec76904921d73f64ea152d64e42
+size 930960
diff --git a/avariationalframeworkforimprovingnaturalnessingenerativespokenlanguagemodels/full.md b/avariationalframeworkforimprovingnaturalnessingenerativespokenlanguagemodels/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..ad5bedda081676d74cd6326f9ebbc6d93c344ee5
--- /dev/null
+++ b/avariationalframeworkforimprovingnaturalnessingenerativespokenlanguagemodels/full.md
@@ -0,0 +1,465 @@
+# A Variational Framework for Improving Naturalness in Generative Spoken Language Models
+
+Li-Wei Chen $^{1}$ Takuya Higuchi $^{2}$ Zakaria Aldeneh $^{2}$ Ahmed Hussen Abdelaziz $^{2}$ Alexander Rudnicky $^{1}$
+
+# Abstract
+
+The success of large language models in text processing has inspired their adaptation to speech modeling. However, since speech is continuous and complex, it is often discretized for autoregressive modeling. Speech tokens derived from self-supervised models (known as semantic tokens) typically focus on the linguistic aspects of speech but neglect prosodic information. As a result, models trained on these tokens can generate speech with reduced naturalness. Existing approaches try to fix this by adding pitch features to the semantic tokens. However, pitch alone cannot fully represent the range of paralinguistic attributes, and selecting the right features requires careful hand-engineering. To overcome this, we propose an end-to-end variational approach that automatically learns to encode these continuous speech attributes to enhance the semantic tokens. Our approach eliminates the need for manual extraction and selection of paralinguistic features. Moreover, it produces preferred speech continuations according to human raters. Code, samples and models are available at https://github.com/b04901014/vae-gslm.
+
+# 1. Introduction
+
+Large language models (LLMs) have achieved tremendous success in text processing (OpenAI, 2024), offering new ways to interact with machines. This progress has motivated efforts to extend their capabilities to speech to enable more natural spoken interactions with machines. However, modeling speech presents unique challenges due to its continuous
+
+$^{1}$ Language Technologies Institute, Carnegie Mellon University $^{2}$ Apple. Correspondence to: Li-Wei Chen, Takuya Higuchi, Zakaria Aldeneh, Ahmed Hussen Abdelaziz, Alexander Rudnicky.
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+and complex nature. As a result, previous works (Lakhotia et al., 2021; Borsos et al., 2023; Maiti et al., 2024) tokenized speech into simpler discrete units to enable the application of language modeling techniques originally developed for text. However, these semantic tokens are typically derived by performing $k$ -means clustering on features extracted from self-supervised pre-trained speech models, such as HuBERT (Hsu et al., 2021). We use the term semantic tokens to distinguish them from acoustic tokens (Borsos et al., 2023), which capture general acoustic information. These models primarily capture the linguistic aspects of speech, such as phonetic information, while often overlooking paralinguistic features, such as prosody (Weston et al., 2021). As a result, training an autoregressive model solely with such semantic tokens restricts the model's ability to fully capture and represent the diverse information encoded in speech.
+
+To address the aforementioned limitation, Kharitonov et al. (2022) augmented the tokens with extracted fundamental frequency $(F_0$ , or pitch) to enable prosody-aware modeling. However, augmenting semantic tokens with manually defined paralinguistic attributes can be inherently suboptimal. First, pitch alone cannot capture the full range of paralinguistic information encoded in speech. For instance, energy-related (e.g., loudness, zero-crossing-rate) and spectral-related (e.g., mel-frequency cepstral coefficients) features are also important paralinguistic features (Schuller et al., 2009; 2013; Eyben et al., 2015). Furthermore, training a correct pitch tracker introduces additional complexity (Kim et al., 2018).
+
+Instead of relying on hand-engineered paralinguistic features, we propose an approach to learning these features directly from the input signal, within an autoregressive framework. These learned features are optimized to both: 1) reconstruct the input speech, and 2) enhance the autoregressive modeling process. Our approach allows the learned features to complement semantic tokens, removing the need for pre-extracted paralinguistic features as required in previous methods. As a result, our method generates more natural-sounding speech compared to baseline models while maintaining comparable meaningfulness of the syntheses.
+
+# 2. Preliminaries
+
+In this work, we operate on mel-spectrograms and treat vocoding, the act of turning a mel-spectrogram back into a raw waveform, as a problem that has already been addressed. We denote the mel-spectrogram as $\mathbf{X} = (x_{t}\in \mathbb{R}^{d_{x}})_{t = 1}^{T}$, where $d_{x}$ represents the number of filter-banks, $T$ is the total number of time frames in the spectrogram, and $x_{t}$ is the frame at time $t$. We use $\mathbf{X}_{i:j}$ to denote the sub-sequence $(x_{t})_{t = i}^{j}$, and define $\mathbf{X}_{1:0} = \emptyset$. Our goal is to model $p(\mathbf{X})$ using a generative approach.
+
+Token-based Speech Language Model We describe the general framework of speech language models that rely on the use of semantic tokens, as seen in works like Lakhotia et al. (2021); Borsos et al. (2023); Maiti et al. (2024). This approach consists of three components: a speech tokenizer, an autoregressive model, and a decoder. The speech tokenizer maps $\mathbf{X}$ to a sequence of discrete semantic tokens $\mathbf{Z}^d = (z_t^d\in \mathbb{N}_k)^T_{t = 1}$, where $\mathbb{N}_k = \{1,2,\dots ,k\}$, and $k$ is the vocabulary size of the semantic tokens. We use $p(\mathbf{Z}^d\mid \mathbf{X})$ to denote the implicit distribution of the pretrained speech tokenizer. The autoregressive model, parameterized by $\psi$, models the probability of token sequences $\mathbf{Z}^d$ as $p_{\psi}(\mathbf{Z}^{d}) = \prod_{t = 1}^{T}p_{\psi}(z_{t}^{d}\mid \mathbf{Z}_{1:t - 1}^{d})$. Finally, the decoder, parameterized by $\theta$, is trained to convert $\mathbf{Z}^d$ back to $\mathbf{X}$ by modeling $p_{\theta}(\mathbf{X}\mid \mathbf{Z}^d)$. However, this framework is limited to semantic tokens $\mathbf{Z}^d$, which primarily capture linguistic information and ignore paralinguistic information. As a result, the decoder $\theta$ may struggle with accurate reconstruction, and the autoregressive model $\psi$ can have difficulty incorporating paralinguistic information. To address this limitation, we propose to incorporate the variational autoencoder framework to learn continuous features to complement $\mathbf{Z}^d$.
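The autoregressive factorization $p_{\psi}(\mathbf{Z}^{d}) = \prod_{t} p_{\psi}(z_{t}^{d}\mid \mathbf{Z}_{1:t-1}^{d})$ can be made concrete with a toy scorer; in this sketch, `next_token_probs` is a hypothetical stand-in for the trained transformer:

```python
import math

def sequence_log_prob(tokens, next_token_probs):
    """log p(Z) = sum over t of log p(z_t | Z_{1:t-1}), where
    next_token_probs maps a prefix tuple to a {token: prob} dict."""
    total = 0.0
    for t, token in enumerate(tokens):
        total += math.log(next_token_probs(tuple(tokens[:t]))[token])
    return total

# Toy stand-in for the trained model: uniform over a k = 4 vocabulary.
def uniform(prefix):
    return {z: 0.25 for z in range(4)}
```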
+
+Variational Autoencoder (VAE) Latent variable models introduce unobserved latent variables $\mathbf{Z}^c = (z_t^c\in \mathbb{R}^{d_z^c})_{t = 1}^T$ that influence the observed variable $\mathbf{X}$ . $d_{z}^{c}$ is the dimension of each $z_{t}^{c}$ , and is a hyper-parameter chosen prior to training. In a VAE, the likelihood of the observed data given the latent variable, $p_{\theta}(\mathbf{X}\mid \mathbf{Z}^{c})$ , is modeled by a neural decoder, parameterized by $\theta$ . The variational posterior, $q_{\phi}(\mathbf{Z}^{c}\mid \mathbf{X})$ , is modeled by a neural encoder, parameterized by $\phi$ . Using this modeling setup, the log-likelihood of the data, $\log p_{\theta}(\mathbf{X})$ , can be written as:
+
+$$
+\begin{aligned}
+& \underbrace{\mathbb{E}_{q_{\phi}(\mathbf{Z}^{c}\mid \mathbf{X})}\left[\log p_{\theta}\left(\mathbf{X}\mid \mathbf{Z}^{c}\right)\right] - D_{KL}\big(q_{\phi}(\mathbf{Z}^{c}\mid \mathbf{X})\,\|\, p(\mathbf{Z}^{c})\big)}_{\mathcal{O}_{ELBO}} \\
+& \quad + D_{KL}\big(q_{\phi}(\mathbf{Z}^{c}\mid \mathbf{X})\,\|\, p_{\theta}(\mathbf{Z}^{c}\mid \mathbf{X})\big),
+\end{aligned} \tag{1}
+$$
+
+where $D_{KL}$ is the Kullback-Leibler (KL) divergence between two distributions, and $p(\mathbf{Z}^c)$ is a fixed prior distribution (usually a Gaussian). In Equation 1, $\mathcal{O}_{ELBO}$ is known as the evidence lower bound (ELBO), which provides a lower bound for $\log p_{\theta}(\mathbf{X})$ since $D_{KL}(q_{\phi}(\mathbf{Z}^{c}\mid \mathbf{X})\,\|\,p_{\theta}(\mathbf{Z}^{c}\mid \mathbf{X}))$ is always nonnegative. Therefore, instead of directly optimizing $\mathbb{E}_{\mathbf{X}}[\log p_{\theta}(\mathbf{X})]$, the VAE maximizes the tractable lower bound $\mathbb{E}_{\mathbf{X}}[\mathcal{O}_{ELBO}]$. Here, we refer to the learned continuous latent $\mathbf{Z}^c$ from the VAE as the variational features.
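For the common case of a diagonal-Gaussian posterior and a fixed standard-normal prior $p(\mathbf{Z}^c)$, the KL term inside $\mathcal{O}_{ELBO}$ has a closed form. A minimal sketch (illustrative, not the paper's implementation):

```python
import math

def kl_to_standard_normal(mu, sigma):
    """D_KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over dimensions:
    sum_i [ 0.5 * (sigma_i^2 + mu_i^2 - 1) - log sigma_i ]."""
    return sum(0.5 * (s * s + m * m - 1.0) - math.log(s)
               for m, s in zip(mu, sigma))
```

The term vanishes exactly when the posterior matches the prior, so maximizing the ELBO trades reconstruction quality against posterior simplicity.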
+
+# 3. Proposed Framework
+
+Figure 1 provides an overview of our proposed framework. This section is organized as follows: Section 3.1 introduces our setup that combines a VAE with an autoregressive model for the latent variables. Section 3.2 describes how we integrate semantic tokens into the framework. Section 3.3 discusses how to balance the different loss terms that arise in our setup. Section 3.4 describes the use of normalizing flows to improve the expressive power of the autoregressive prior. Finally, Section 3.5 introduces the diffusion decoder and the utterance encoder used in the framework.
+
+# 3.1. VAE with an Autoregressive Prior
+
+Our method starts by modeling the prior of the VAE, which is typically a fixed Gaussian distribution, with a trainable autoregressive model $p_{\psi}(\mathbf{Z}^c) = \prod_{t=1}^{T} p_{\psi}(z_t^c \mid \mathbf{Z}_{1:t-1}^c)$ . We refer to this framework as VAE with an autoregressive prior. We note that VAE with an autoregressive prior has been explored in previous works (Vahdat & Kautz, 2020; Zhu et al., 2020) within the computer vision domain. Additionally, Sun et al. (2020) also applied a similar framework for TTS, but with prior and posterior distributions optimized separately instead of jointly. Here, we adopt the VAE framework with an autoregressive prior for speech continuation and further integrate it with discrete token-based models to enhance the naturalness of the synthesis. We use a diagonal Gaussian distribution to model the variational posterior, where the statistics are predicted by a neural network:
+
+$$
+q_{\phi}\left(z_{t}^{c}\mid \mathbf{X}\right) = \mathcal{N}\left(z_{t}^{c};\, \mu_{\phi}(\mathbf{X},t),\, \sigma_{\phi}(\mathbf{X},t)\right). \tag{2}
+$$
+
+Since each $z_{t}^{c}$ is conditionally independent given $\mathbf{X}$ , we can express the posterior as: $q_{\phi}(\mathbf{Z}^{c}\mid \mathbf{X}) = \prod_{t = 1}^{T}q_{\phi}(z_{t}^{c}\mid \mathbf{X})$ . With this decomposition, and the parameterized autoregressive prior, the $\mathcal{O}_{ELBO}$ in Equation 1 can be further derived
+
+
+Figure 1. Overview of our proposed approach. Our method integrates the token-based speech language model (outlined in Section 2, represented by the lower shaded region) with a variational autoencoder (VAE with autoregressive prior, shown in the upper shaded region). This setup allows the model to learn variational features $\mathbf{Z}^c$ that complement the pre-extracted semantic speech tokens $\mathbf{Z}^d$ . In our proposed joint setup, the variational features $\mathbf{Z}^c$ are trained to 1) reconstruct speech $\mathbf{X}$ alongside $\mathbf{Z}^d$ (by maximizing $\mathcal{O}_{rec}$ ); 2) facilitate the prediction of the next speech token $z_t^d$ (by minimizing $\mathcal{L}_{kl}^d$ ); 3) support the sequential prediction of the variational features themselves (by minimizing $\mathcal{L}_{kl}^c$ ).
+
+into:
+
+$$
+\mathcal{O}_{ELBO} = \underbrace{\mathbb{E}_{\mathbf{Z}^{c}\sim q_{\phi}(\mathbf{Z}^{c}\mid \mathbf{X})}\left[\log p_{\theta}\left(\mathbf{X}\mid \mathbf{Z}^{c}\right)\right]}_{\mathcal{O}_{rec}} - \underbrace{\sum_{t=1}^{T}\mathbb{E}_{\mathbf{Z}_{1:t-1}^{c}}\left[D_{KL}\big(q_{\phi}(z_{t}^{c}\mid \mathbf{X})\,\|\, p_{\psi}(z_{t}^{c}\mid \mathbf{Z}_{1:t-1}^{c})\big)\right]}_{\mathcal{L}_{kl}^{c}}. \tag{3}
+$$
+
+By maximizing $\mathcal{O}_{ELBO}$ , we maximize the first term, the reconstruction objective $\mathcal{O}_{rec}$ , and minimize the second term, the variational feature prediction loss $\mathcal{L}_{kl}^{c}$ . We note that training a model to maximize Equation 3 is feasible without incorporating discrete semantic tokens $\mathbf{Z}^d$ . This token-free approach is also depicted as the upper shaded region in Figure 1 (VAE with an Autoregressive Prior), and its properties are further explored in Section 5.
+
+# 3.2. Incorporating the Semantic Tokens with VAE
+
+We now integrate the semantic tokens $\mathbf{Z}^d$ with the VAE with an autoregressive prior. Using these tokens, the model no longer needs to encode as much phonetic information in $\mathbf{Z}^c$, allowing $\mathbf{Z}^c$ to focus on other attributes of continuous speech. To this end, we introduce a joint latent variable $\mathbf{Z} = (z_t \in \mathbb{R}^{d_z^c} \times \mathbb{N}_k)_{t=1}^{T}$, where $z_t$ is the concatenation of $z_t^c$ and $z_t^d$. Since $\mathbf{Z}^d$ and $\mathbf{Z}^c$ are conditionally independent given $\mathbf{X}$, we can express the new variational posterior as: $q_{\phi}(\mathbf{Z} \mid \mathbf{X}) = q_{\phi}(\mathbf{Z}^c \mid \mathbf{X})\, p(\mathbf{Z}^d \mid \mathbf{X})$. Then, we model $p_{\psi}(z_t \mid \mathbf{Z}_{1:t-1}) = p_{\psi}(z_t^d \mid \mathbf{Z}_{1:t-1})\, p_{\psi}(z_t^c \mid \mathbf{Z}_{1:t-1})$, assuming the conditional independence of $z_{t}^{d}$ and $z_{t}^{c}$ given the past generations. We further discuss this modeling assumption in Appendix I. This allows us to re-write $\mathcal{O}_{ELBO}$ from Equation 1 as:
+
+$$
+\begin{aligned}
+\mathcal{O}_{ELBO} ={} & \underbrace{\mathbb{E}_{\mathbf{Z}^{d}\sim p(\mathbf{Z}^{d}\mid \mathbf{X}),\, \mathbf{Z}^{c}\sim q_{\phi}(\mathbf{Z}^{c}\mid \mathbf{X})}\left[\log p_{\theta}(\mathbf{X}\mid \mathbf{Z}^{d},\mathbf{Z}^{c})\right]}_{\mathcal{O}_{rec}} \\
+& - \underbrace{\sum_{t=1}^{T}\mathbb{E}_{\mathbf{Z}_{1:t-1}}\left[D_{KL}\big(q_{\phi}(z_{t}^{c}\mid \mathbf{X})\,\|\, p_{\psi}(z_{t}^{c}\mid \mathbf{Z}_{1:t-1})\big)\right]}_{\mathcal{L}_{kl}^{c}} \\
+& - \underbrace{\sum_{t=1}^{T}\mathbb{E}_{\mathbf{Z}_{1:t}}\left[-\log p_{\psi}\left(z_{t}^{d}\mid \mathbf{Z}_{1:t-1}\right)\right]}_{\mathcal{L}_{kl}^{d}}.
+\end{aligned} \tag{4}
+$$
+
+From Equation 4, our training objective $\mathcal{O}_{ELBO}$ consists of three terms: $\mathcal{O}_{rec}$, $\mathcal{L}_{kl}^{c}$, and $\mathcal{L}_{kl}^{d}$. $\mathcal{O}_{rec}$ is the reconstruction objective. Maximizing $\mathcal{O}_{rec}$ trains the decoder $\theta$ to reconstruct $\mathbf{X}$ from both $\mathbf{Z}^c$ and $\mathbf{Z}^d$, while encouraging the encoder $\phi$ to generate $\mathbf{Z}^c$ with helpful information to reconstruct $\mathbf{X}$. $\mathcal{L}_{kl}^{c}$ is the variational feature prediction loss. Minimizing $\mathcal{L}_{kl}^{c}$ trains the autoregressive model $\psi$ to predict the next variational feature $z_{t}^{c}$ and encourages the encoder $\phi$ to generate $\mathbf{Z}^c$ that is easier for $\psi$ to model. $\mathcal{L}_{kl}^{d}$ is the semantic token prediction loss, which trains the autoregressive model $\psi$ to predict the next semantic token given the previous $\mathbf{Z}^d$ and $\mathbf{Z}^c$.
+
+# 3.3. Balancing the loss terms
+
+In Equation 4, the terms $\mathcal{O}_{rec}$, $\mathcal{L}_{kl}^{c}$, and $\mathcal{L}_{kl}^{d}$ can work against each other. For instance, the encoder $\phi$ optimizes both $\mathcal{O}_{rec}$ and $\mathcal{L}_{kl}^{c}$. Maximizing $\mathcal{O}_{rec}$ encourages the variational features $\mathbf{Z}^{c}$ to encode more information about $\mathbf{X}$, while minimizing $\mathcal{L}_{kl}^{c}$ regularizes $\mathbf{Z}^{c}$ to be simpler for the autoregressive model $\psi$ to predict. Similarly, optimizing $\mathcal{L}_{kl}^{c}$ and $\mathcal{L}_{kl}^{d}$ with the autoregressive model $\psi$ is a multi-task learning scenario, where $\psi$ learns to predict two different objectives given the same input. Moreover, these terms may operate on different scales due to how the losses are computed, necessitating a balancing mechanism. As a result, inspired by $\beta$ -VAE (Higgins et al., 2017), we introduce two scalars: $\beta$ and $\gamma$, to balance the loss terms as follows:
+
+$$
+\mathcal {O} _ {E L B O} = \mathcal {O} _ {r e c} - \beta \left(\mathcal {L} _ {k l} ^ {c} + \gamma \cdot \mathcal {L} _ {k l} ^ {d}\right). \tag {5}
+$$
+
+Here, a larger $\beta$ favors a simple $p(\mathbf{Z}^c)$ , while a smaller $\beta$ encourages the variational features $\mathbf{Z}^c$ to encode more information about $\mathbf{X}$ . Larger $\gamma$ encourages the autoregressive model $\psi$ to prioritize accurate predictions of $\mathbf{Z}^d$ over $\mathbf{Z}^c$ . In practice, we employ a linear warm-up strategy for $\beta$ , increasing it from zero to its final value during the early stages of training. This approach, inspired by prior works on text generation (Bowman et al., 2016; Fu et al., 2019), helps mitigate posterior collapse. Empirically, we find that this strategy allows for higher values of $\beta$ without causing $\mathcal{L}_{kl}^c$ to collapse to zero.
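The balancing of Equation 5 together with the linear warm-up can be sketched as follows; the warm-up length and final values here are placeholders, not the paper's settings:

```python
def beta_at_step(step, warmup_steps, beta_final):
    """Linearly anneal beta from 0 to beta_final over the first
    warmup_steps updates (helps mitigate posterior collapse)."""
    return beta_final * min(1.0, step / warmup_steps)

def training_loss(o_rec, l_kl_c, l_kl_d, beta, gamma):
    """Negated Equation 5, to be minimized:
    -(O_rec - beta * (L_kl_c + gamma * L_kl_d))."""
    return -(o_rec - beta * (l_kl_c + gamma * l_kl_d))
```

With `beta` near zero early in training, the objective is dominated by reconstruction, letting the encoder populate $\mathbf{Z}^c$ before the prior-matching pressure ramps up.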
+
+# 3.4. Time-wise Normalizing Flow
+
+We employ a lightweight normalizing flow (Rezende & Mohamed, 2015) that is shared across time to improve the expressive power of the autoregressive prior $p_{\psi}(z_t^c \mid \mathbf{Z}_{1:t-1})$. Specifically, an invertible flow network $f_{\psi}$ maps each $z_t^c$ to a point under a Gaussian distribution, and sampling can be realized by running the network in reverse. By using the change of variables, we can write:
+
+$$
+p_{\psi}\left(z_{t}^{c}\mid \mathbf{Z}_{1:t-1}\right) = \mathcal{N}\left(f_{\psi}(z_{t}^{c});\, \mu_{\psi}(\mathbf{Z}_{1:t-1}),\, \sigma_{\psi}(\mathbf{Z}_{1:t-1})\right)\left|\det \frac{\partial f_{\psi}(z_{t}^{c})}{\partial z_{t}^{c}}\right|, \tag{6}
+$$
+
+where $\mu_{\psi}, \sigma_{\psi}$ are modeled by autoregressive neural networks (i.e., transformer). We choose affine coupling layers (Dinh et al., 2017) as the backbone of our normalizing flow due to their simple implementation and efficient computation. We note that similar approaches using normalizing flows to enhance prior distributions have also been observed in Kim et al. (2021; 2020) for text-to-speech.
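An affine coupling layer of this kind is invertible by construction, and its Jacobian is triangular, so the log-determinant in Equation 6 reduces to a sum of per-dimension log-scales. A minimal sketch, where the scale/translate conditioner is a hypothetical stand-in for a learned network:

```python
import math

def coupling_forward(z, scale_translate):
    """Affine coupling: split z, transform the second half conditioned
    on the first. Returns (output, log|det Jacobian|); the log-det is
    simply the sum of the per-dimension log-scales."""
    half = len(z) // 2
    z_a, z_b = z[:half], z[half:]
    params = scale_translate(z_a)  # per-dimension (log_scale, shift)
    out_b = [v * math.exp(ls) + sh for v, (ls, sh) in zip(z_b, params)]
    return z_a + out_b, sum(ls for ls, _ in params)

def coupling_inverse(y, scale_translate):
    """Exact inverse of coupling_forward (used when sampling in reverse)."""
    half = len(y) // 2
    y_a, y_b = y[:half], y[half:]
    params = scale_translate(y_a)
    z_b = [(v - sh) * math.exp(-ls) for v, (ls, sh) in zip(y_b, params)]
    return y_a + z_b

# Stand-in conditioner: constant log-scale 0.5 and shift 1.0 per dimension.
def toy_net(z_a):
    return [(0.5, 1.0) for _ in z_a]
```

Because the first half passes through unchanged, the inverse recomputes the same conditioning parameters and undoes the affine map exactly.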
+
+# 3.5. Other Components
+
+We describe the modeling of our decoder $p_{\theta}(\mathbf{X} \mid \mathbf{Z})$ and the utterance encoder designed to capture static information. While these components are not the main focus of our study, they help ensure a fair comparison between different methods. We use these components for all methods in our experiments and focus on how changing the inputs to the autoregressive model affects performance.
+
+Diffusion Decoder We model the decoder $p_{\theta}(\mathbf{X} \mid \mathbf{Z})$ with Denoising Diffusion Probabilistic Model (DDPM) (Ho et al., 2020). We choose DDPM due to its flexibility in modeling complex distributions. We condition the diffusion process on $\mathbf{Z}$ . For back-propagation through the encoder $\phi$ , we use the reparameterization trick (Kingma & Welling, 2019) to sample from $q_{\phi}(\mathbf{Z}^c \mid \mathbf{X})$ , and combine it with embedded semantic tokens $\mathbf{Z}^d$ . The outcome is then concatenated with each intermediate layer of the diffusion decoder for conditional diffusion. We train all diffusion decoders with 1000 DDPM steps. Note that our proposed approach is not limited to a specific decoder. Although we opted for a diffusion-based decoder for ease of training, our method is compatible with various decoding strategies. There are no constraints on the type of decoder used to parameterize $p_{\theta}(\mathbf{X} \mid \mathbf{Z}^d, \mathbf{Z}^c)$ .
+
+Utterance Encoder Static features, such as speaker information and recording environments, often vary little across a given utterance. In our current modeling approach, this static information would be redundantly encoded at each time step. To address this issue, we introduce an additional utterance-level feature encoder that encourages $\mathbf{Z}$ to focus on time-varying signals. Specifically, we randomly segment a portion of the mel-spectrogram $\mathbf{X}$ and feed it to the utterance encoder to produce an utterance-level embedding. This embedding is then concatenated with $\mathbf{Z}$ before being provided to the diffusion decoder. The utterance encoder is trained end-to-end with the entire system.
+
+# 4. Experimental Setup
+
+# 4.1. Datasets
+
+We use two datasets in our experiments: LibriSpeech (Panayotov et al., 2015) and Libri-light (Kahn et al., 2020), consisting of audiobooks narrated in English. LibriSpeech contains 960 hours of speech, while Libri-light contains 60k hours of speech. For semantic token extraction, we follow Hassid et al. (2023); Maiti et al. (2024) and use tokens derived from HuBERT representations (Hsu et al., 2021). We use the official HuBERT checkpoints, pre-trained on LibriSpeech and Libri-light. We run $k$ -means clustering with $k = 200$ on the output of the last transformer layer of HuBERT using $10\%$ of data randomly sampled from the training set. We pick $k = 200$ after testing values from {50, 200, 1000} and choosing the one that produced the best language modeling performance. The result is also consistent with Maiti et al. (2024). More details on the choice of $k$ are provided in Appendix F.
+
+# 4.2. Methods
+
+We compare our proposed approach to methods that use only semantic tokens in the autoregressive model, as well as methods that use semantic tokens with added pitch features in the autoregressive model. To ensure a fair comparison, we fix the autoregressive model architecture to be the same for all methods, varying only the input and output layers. We also use the same configuration for the diffusion decoder and utterance encoder across all methods. For the neural vocoder (i.e., mapping the mel-spectrogram back to waveform), we train HiFi-GAN (Kong et al., 2020) on LibriSpeech and use it for all of the methods. We leave the detailed configuration of model architectures in Appendix B. Below, we provide further details on the three approaches.
+
+Token-LM We adopt the token-based speech language model (described in Section 2) as our baseline, representing approaches such as Lakhotia et al. (2021); Borsos et al. (2023); Maiti et al. (2024), which apply only discrete semantic tokens to the autoregressive model.
+
+Token-LM + Pitch In this baseline approach, we augment the semantic tokens of the token-based speech language model (described in Section 2) with log pitch features before passing them into the autoregressive model. The pitch features are extracted using CREPE (Kim et al., 2018). Additionally, we introduce a pitch regression task alongside the standard next-token prediction task, optimizing it with L1 loss. This method incorporates hand-engineered paralinguistic features, similar to the approach used by Kharitonov et al. (2022).
+
+Token-LM + Acoustic In this comparison method, we augment semantic tokens with acoustic tokens (Borsos et al., 2023; Défossez et al., 2023). Specifically, we train a residual vector quantization (RVQ) autoencoder to discretize speech into four levels of acoustic tokens. At each transformer time step, the model first predicts the semantic token, followed by the acoustic tokens, which are autoregressively generated over the code levels using an additional transformer layer, similar to Chen et al. (2023); Défossez et al. (2024). We include this baseline to compare with recent methods (Défossez et al., 2024) that integrate acoustic tokens into the autoregressive generation process.
+
+Variational speech modeling approach (Proposed) This is our proposed approach introduced in Section 3. In this approach, we learn to extract variational features that supplement the semantic tokens while jointly training the autoregressive model. The learned variational features are used by both the autoregressive model and the decoder. This approach eliminates the need for hand-engineered selection and extraction of paralinguistic features. Additionally, we set our latent dimension $d_{z}^{c} = 4$. While we observed performance improvements with larger $d_{z}^{c}$, we opted for a smaller value to ensure a fairer comparison, as it results in less variation in parameter size. Our additional experiments on the latent dimension $d_{z}^{c}$ are provided in Appendix E.
+
+For inference, we use temperature-based sampling similar to Lakhotia et al. (2021). Specifically, we set the temperature to 0.85 for both semantic tokens $\mathbf{Z}^d$ and continuous variational features $\mathbf{Z}^c$. For variational features, the temperature is the scalar multiplied by the standard deviation of the normal distribution in Equation 6 before sampling, as done in Kim et al. (2020). For the diffusion decoder, we use denoising diffusion implicit models (DDIM) from Song et al. (2021) with $\eta = 0.5$ and 100 diffusion steps. Training details are provided in Appendix C.
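Temperature-based sampling of the continuous features simply scales the Gaussian standard deviation before drawing; a sketch (a temperature of 0 recovers the predicted mean exactly):

```python
import random

def sample_variational_feature(mu, sigma, temperature=0.85):
    """Draw z_t^c from N(mu, (temperature * sigma)^2) per dimension;
    temperature < 1 concentrates samples around the predicted mean."""
    return [m + temperature * s * random.gauss(0.0, 1.0)
            for m, s in zip(mu, sigma)]
```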
+
+# 4.3. Evaluation Metrics
+
+We evaluate the comparison methods on both reconstruction and speech continuation. The reconstruction metrics, introduced in Section 4.3.1, involve only the encoder-decoder pair and indicate how much information is preserved in the extracted representations. The remaining metrics focus on speech continuation, which is our primary objective, where the performance of the autoregressive model is also assessed.
+
+# 4.3.1. OBJECTIVE METRICS
+
+Reconstruction Metrics We use $F_{0}$ -RMSE, mel-cepstral distortion (MCD), and character error rate (CER) to measure the quality of the reconstructed signal. $F_{0}$ -RMSE measures the root mean squared difference between the pitch contour of the ground-truth signal and the reconstructed one. We use CREPE (Kim et al., 2018) to extract pitch and only consider the voiced parts of the signal when computing the difference. MCD measures the Euclidean distance between the 23 mel-cepstral coefficients (MCEPs) extracted from the ground-truth and reconstructed signals. For calculating CER, we use a pre-trained Whisper (Radford et al., 2023) automatic speech recognition model. We use the dev-clean and dev-other subsets of LibriSpeech for evaluating reconstruction. To ensure deterministic results, instead of sampling each $z_{t}^{c}$ from $q_{\phi}(z_{t}^{c} \mid \mathbf{X})$ , we directly use the Gaussian mean $\mu_{\phi}(\mathbf{X}, t)$ from Equation 2. In practice, we observed that the stochastic noise of $q_{\phi}(z_{t}^{c} \mid \mathbf{X})$ has little effect on the reconstructed syntheses.
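+To make the first two metrics concrete, here is a rough sketch of $F_0$ -RMSE over voiced frames and frame-wise MCD; the $10\sqrt{2}/\ln 10$ dB scaling is a common MCD convention assumed here, since the exact formula is not stated above:
+
+```python
+import numpy as np
+
+def f0_rmse(f0_ref, f0_syn, voiced):
+    # RMSE between pitch contours, restricted to voiced frames only.
+    diff = (np.asarray(f0_ref, float) - np.asarray(f0_syn, float))[np.asarray(voiced)]
+    return float(np.sqrt(np.mean(diff ** 2)))
+
+def mcd(mcep_ref, mcep_syn):
+    # Mean Euclidean distance between per-frame mel-cepstral coefficient
+    # vectors (e.g. 23 MCEPs), scaled to dB by a conventional constant.
+    diff = np.asarray(mcep_ref, float) - np.asarray(mcep_syn, float)  # (T, D)
+    per_frame = np.sqrt((diff ** 2).sum(axis=1))
+    return float(10.0 * np.sqrt(2.0) / np.log(10.0) * per_frame.mean())
+```
+
+CER is computed separately by comparing Whisper transcriptions of the ground-truth and reconstructed audio at the character level.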
+
+ZeroSpeech Metrics We adopt the commonly-used metrics (Borsos et al., 2023; Hassid et al., 2023; Maiti et al., 2024) from the ZeroSpeech challenge (Nguyen et al., 2020): sWUGGY and sBLIMP, to measure language capability objectively. For these two metrics, speech utterances are given in positive-negative pairs, with each model scoring both utterances. The model's accuracy is the percentage of instances where the positive example receives a higher score than the negative one. sWUGGY measures whether the model scores a real word higher than a phonetically similar nonword (e.g., "brick" vs. "blick"). sBLIMP measures whether a model scores a grammatically correct sentence higher than a similar but incorrect one (e.g., "the dogs sleep" vs. "the dog sleep"). Both metrics use text-to-speech to generate the examples. In line with Borsos et al. (2023), we evaluate sWUGGY using only words existing in LibriSpeech (referred to as the "in-vocab" version). We use the test split for evaluation. See Appendix G for a detailed description of how we estimate the scores for the methods.
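+The scoring protocol reduces to a simple pairwise accuracy; a sketch, where the input scores stand in for whatever sequence log-likelihoods a model assigns:
+
+```python
+import numpy as np
+
+def pair_accuracy(pos_scores, neg_scores):
+    # Percentage of pairs where the positive utterance (real word /
+    # grammatical sentence) scores higher than its matched negative.
+    pos = np.asarray(pos_scores, dtype=float)
+    neg = np.asarray(neg_scores, dtype=float)
+    return 100.0 * float(np.mean(pos > neg))
+```
+
+For instance, `pair_accuracy([-1.2, -0.5, -2.0], [-1.5, -0.4, -2.6])` counts two of the three pairs as correct.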
+
+# 4.3.2. SUBJECTIVE METRICS
+
+We use subjective human evaluations to assess the naturalness and meaningfulness of the generated speech. We randomly sampled 100 utterances from the LibriSpeech dev-clean and dev-other subsets, cropping the first three seconds to use as prompts. Each audio sample was rated by seven annotators. For naturalness, annotators rated how human-like the generated speech sounded on a five-point Likert scale, where one corresponds to "Very unnatural" and five to "Very natural." For meaningfulness, they rated the grammar and content of the speech on a five-point Likert scale, where one corresponds to "Very Poor" and five to "Excellent." Additional details on the subjective evaluations are provided in Appendix D.
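+The MOS values reported in Section 5 can be summarized as a mean with a confidence half-width; the normal-approximation interval below is an assumption about the exact CI formula used:
+
+```python
+import numpy as np
+
+def mos_with_ci(ratings, z=1.96):
+    # Mean opinion score over Likert ratings with a normal-approximation
+    # 95% confidence half-width (z = 1.96).
+    r = np.asarray(ratings, dtype=float)
+    mean = r.mean()
+    half_width = z * r.std(ddof=1) / np.sqrt(len(r))
+    return mean, half_width
+```
+
+With 100 utterances rated by seven annotators each, `ratings` would hold 700 values per system.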
+
+# 5. Experimental Results
+
+# 5.1. Main Results
+
+Tables 1 and 2 present the results for the three methods described in Section 4.2. Table 1 reports objective metrics for speech reconstruction, while Table 2 provides both objective and subjective results for speech continuation. We discuss our observations below.
+
+Table 1. Results of speech reconstruction evaluation ( $F_0$ -RMSE, MCD, CER) for the models discussed in Section 4.2. The evaluation metrics are detailed in Section 4.3. All models were trained on the Libri-light dataset.
+
+| Method | F0-RMSE (↓) | MCD (↓) | CER (↓) |
+| --- | --- | --- | --- |
+| Ground-truth | n/a | n/a | 2.35 |
+| Token-LM | 43.90 | 7.55 | 10.19 |
+| + Pitch | 25.46 | 6.90 | 6.59 |
+| + Acoustic | 15.05 | 2.58 | 3.73 |
+| Proposed | 16.56 | 5.43 | 4.35 |
+
+Reconstruction Quality. First, the results in Table 1 show that compared to Token-LM and Token-LM + Pitch, our proposed approach improves the reconstruction of the original signal. These findings highlight three key points: 1) discrete semantic tokens alone are insufficient to capture all the components necessary for faithful reconstruction, 2) incorporating only pitch information is not enough, and 3) the learned variational features $\mathbf{Z}^c$ in our approach effectively complement the discrete semantic tokens $\mathbf{Z}^d$ , leading to better reconstruction of the speech signal. On the other hand, our proposed method achieves slightly lower reconstruction quality than Token-LM + Acoustic. Since the variational features are continuous, they should be able to encode more information than four levels of acoustic tokens. Therefore, our results suggest that the information encoded in the variational features is effectively regularized by the autoregressive losses: $\mathcal{L}_{kl}^c$ and $\mathcal{L}_{kl}^d$ .
+
+Speech continuation with our approach is more natural than speech generated by the baselines. The subjective evaluation of speech continuation, measured by the mean opinion score of naturalness (N-MOS) in Table 2, shows that the syntheses produced by our proposed approach have significantly higher naturalness compared to all baselines. This finding further supports our hypothesis that the variational features $\mathbf{Z}^c$ learned by our approach improve the quality of the synthesis. While Token-LM + Acoustic achieves the best reconstruction in Table 1, the autoregressive model struggles to effectively process the additional information encoded in the RVQ tokens, resulting in significantly lower speech continuation performance, as shown in Table 2. Additionally, Table 2 compares the number of parameters between different methods. The results indicate that the overhead of the proposed method is relatively small ( $< 1\%$ of the total parameters), while still achieving noticeably better performance.
+
+Speech generated using our proposed approach achieves subjective meaningfulness (as measured by M-MOS) comparable to the baselines.
+
+Table 2. Results of speech continuation evaluation for the models discussed in Section 4.2. The evaluation metrics are detailed in Section 4.3. M-MOS refers to the meaningfulness mean opinion score; N-MOS refers to the naturalness mean opinion score. Both M-MOS and N-MOS are evaluated on speech continuation and presented along with $95\%$ confidence intervals. All models were trained on the Libri-light dataset. #Param. refers to the number of parameters used during inference; "M" stands for million.
+
+| Method | #Param. | sWUGGY (↑) | sBLIMP (↑) | M-MOS (↑) | N-MOS (↑) |
+| --- | --- | --- | --- | --- | --- |
+| Ground-truth | n/a | n/a | n/a | 3.94 ± 0.08 | 3.89 ± 0.09 |
+| Token-LM | 219M | 61.75 | 58.31 | 3.24 ± 0.09 | 3.19 ± 0.11 |
+| Token-LM + Pitch | 219M | 60.75 | 56.92 | 3.29 ± 0.09 | 3.08 ± 0.10 |
+| Token-LM + Acoustic | 226M | 56.23 | 52.03 | 2.75 ± 0.09 | 3.03 ± 0.10 |
+| Proposed | 221M | 60.48 | 56.56 | 3.45 ± 0.09 | 3.60 ± 0.10 |
+
+The results in Table 2 indicate that our proposed approach produces syntheses that are comparable to or better than the baselines, as reflected by its higher meaningfulness mean opinion score (M-MOS). However, all compared methods show lower sWUGGY and sBLIMP scores than Token-LM. This outcome is expected, as the model must predict additional acoustic information beyond semantic tokens, which primarily encode linguistic content. Consequently, given a fixed model parameter budget, language modeling performance naturally declines as the model allocates capacity to model acoustic information. This effect is also evident in the low M-MOS of Token-LM + Acoustic, where the acoustic tokens may capture excessively detailed information, such as recording noise, which does not contribute meaningfully to synthesis.
+
+However, one may question why the trend in the sWUGGY and sBLIMP scores does not align with the M-MOS evaluation. We analyze the ASR transcriptions from the compared methods and observe that the transcriptions of Token-LM do have higher meaningfulness than those of other approaches, consistent with the trend of the sWUGGY and sBLIMP scores. However, after listening to the audio samples, we found that the natural prosody of our proposed method significantly improves intelligibility. Although Whisper ASR can still transcribe speech of unnatural prosody generated by Token-LM, human raters often needed multiple passes to fully comprehend the linguistic content. In practical applications, interactive dialogue systems must generate speech that users can easily understand in a single pass. The M-MOS score serves as an indicator of the suitability of a system in this regard.
+
+# 5.2. Impact of Loss-balancing Parameters
+
+Here, we study the effect of varying the loss-balancing hyper-parameters: $\beta$ and $\gamma$ , which are described in Section 3.3.
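+Schematically, and only as an illustration consistent with the surrounding discussion (Section 3.3 gives the exact objective), the balanced loss can be pictured as
+
+$$
+\mathcal{L} \;=\; \mathcal{L}_{rec} \;+\; \beta\,\mathcal{L}_{kl}^{c} \;+\; \gamma\,\mathcal{L}_{kl}^{d},
+$$
+
+where $\mathcal{L}_{rec}$ stands for the reconstruction term: a smaller $\beta$ relaxes the pressure on the variational features (improving reconstruction at the cost of harder autoregressive modeling of $\mathbf{Z}^c$ ), while $\gamma$ shifts weight between the two KL terms.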
+
+Varying $\beta$ Table 3 shows that, for the reconstruction metrics ( $F_0$ -RMSE, MCD, CER), lower values of $\beta$ result in smaller errors, indicating better reconstruction. However, for the sWUGGY and sBLIMP metrics, performance improves as $\beta$ increases. This finding aligns with our discussion in Section 3.3: lower $\beta$ values encourage better reconstruction but make it harder for the autoregressive model to effectively model $\mathbf{Z}^c$ .
+
+Varying $\gamma$ Table 4 shows that increasing $\gamma$ leads to worse pitch reconstruction, as measured by $F_{0}$ -RMSE, but improves CER. This result indicates that $\gamma$ governs the type of information captured in the variational feature $\mathbf{Z}^c$ . With a higher $\gamma$ , the system prioritizes the prediction of semantic tokens; the variational feature $\mathbf{Z}^c$ is therefore encouraged to encode more phonetic information, resulting in lower CER and MCD. In contrast, a lower $\gamma$ encourages $\mathbf{Z}^c$ to focus more on encoding pitch-related information, as indicated by the lower $F_{0}$ -RMSE. We then analyze the subjective measures and observe that both M-MOS and N-MOS favor a lower $\gamma$ . We attribute this performance decline to the increased difficulty of autoregressive generation of $\mathbf{Z}^c$ : by increasing the weight of $\mathcal{L}_{kl}^d$ , the model sacrifices its focus on minimizing $\mathcal{L}_{kl}^c$ , which in turn compromises its ability to model $\mathbf{Z}^c$ .
+
+# 5.3. Removing the Semantic Tokens
+
+Here, we evaluate the utility of the semantic tokens in our proposed approach by training a model that uses only variational features $\mathbf{Z}^c$ . This removal corresponds to training only a VAE with an autoregressive prior, as in Equation 3, without the use of discrete semantic tokens.
+
+Table 3 shows the impact of removing the discrete semantic tokens from our proposed approach, denoted as Proposed (−tokens). We find that excluding semantic tokens leads to a slight improvement in the sWUGGY metric compared to including them. However, this exclusion significantly worsens the CER, indicating poorer phonetic reconstruction. These results suggest that without discrete semantic tokens, our approach struggles to effectively encode abstract phonetic information in the variational features $\mathbf{Z}^{c}$ but still performs well on sWUGGY, possibly by leveraging other cues. One possible explanation is that the synthesized non-existent words in sWUGGY, being out-of-domain for the text-to-speech system, may exhibit subtle prosodic irregularities that our model is able to detect. On the other hand, the best reconstruction results are obtained when semantic tokens are included, as removing them leads to worse reconstruction metrics.
+
+Table 3. Results showing the impact of varying the $\beta$ parameter (as described in Section 3.3) and the effect of removing the semantic tokens from our proposed approach on both language modeling and speech reconstruction performance. The $\gamma$ parameter (as described in Section 3.3) for the proposed methods is fixed to 0.5. All models here were trained on the LibriSpeech dataset for lower computation cost.
+
+| Method | β | sWUGGY (↑) | sBLIMP (↑) | F0-RMSE (↓) | MCD (↓) | CER (↓) |
+| --- | --- | --- | --- | --- | --- | --- |
+| Proposed | 0.03 | 65.56 | 51.12 | 16.76 | 5.19 | 5.06 |
+| Proposed | 0.04 | 65.96 | 51.40 | 16.88 | 5.53 | 5.43 |
+| Proposed | 0.05 | 66.46 | 51.77 | 17.20 | 5.75 | 5.45 |
+| Proposed (−tokens) | 0.04 | 69.33 | 51.85 | 17.47 | 5.48 | 13.02 |
+
+Table 4. Results showing the impact of varying the $\gamma$ parameter (as described in Section 3.3) in our proposed approach on both language modeling and speech reconstruction performance. The $\beta$ parameter (as described in Section 3.3) is fixed to 0.04. M-MOS denotes the meaningfulness mean opinion score, and N-MOS denotes the naturalness mean opinion score, both presented with $95\%$ confidence intervals. All models were trained on the Libri-light dataset.
+
+| γ | sWUGGY (↑) | sBLIMP (↑) | F0-RMSE (↓) | MCD (↓) | CER (↓) | M-MOS (↑) | N-MOS (↑) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 0.5 | 60.48 | 59.88 | 16.56 | 5.43 | 4.35 | 3.45 ± 0.09 | 3.60 ± 0.10 |
+| 1.0 | 59.41 | 59.12 | 17.06 | 5.36 | 4.05 | 3.31 ± 0.09 | 3.46 ± 0.10 |
+| 2.0 | 58.19 | 58.19 | 17.41 | 5.21 | 3.75 | 3.07 ± 0.09 | 3.26 ± 0.11 |
+
+Table 5. Results of speech continuation evaluation comparing different semantic token extraction methods, as detailed in Section 5.4. M-MOS and N-MOS refer to the meaningfulness and naturalness mean opinion scores, presented along with $95\%$ confidence intervals. All models were trained on the Libri-light dataset.
+
+| Method | M-MOS (↑) | N-MOS (↑) |
+| --- | --- | --- |
+| Ground Truth | 3.87 ± 0.08 | 3.97 ± 0.08 |
+| SpeechTokenizer-LM | 3.26 ± 0.09 | 3.33 ± 0.10 |
+| Proposed | 3.68 ± 0.09 | 3.61 ± 0.10 |
+
+# 5.4. Generalization to Different Semantic Tokens
+
+In Section 5.1, we demonstrated the effectiveness of our proposed approach using semantic tokens derived from HuBERT representations. Here, we investigate its performance with an alternative approach to extracting semantic tokens, SpeechTokenizer (Zhang et al., 2024). SpeechTokenizer quantizes speech using Residual Vector Quantization (RVQ), which optimizes for reconstruction; however, its first-level RVQ tokens additionally minimize a distillation loss with HuBERT representations so that they encode content. We replace the semantic tokens in Token-LM with the first-level RVQ tokens from SpeechTokenizer, naming this new baseline SpeechTokenizer-LM. Our proposed method was similarly adapted to this new set of semantic tokens. For our experiments, we used the official SpeechTokenizer checkpoint ${}^{8}$ .
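+As background on the mechanism, residual vector quantization encodes a vector level by level, with each codebook quantizing what the previous levels left over. A toy sketch (the codebooks here are illustrative, not SpeechTokenizer's):
+
+```python
+import numpy as np
+
+def rvq_encode(x, codebooks):
+    # Residual vector quantization: at each level, pick the nearest code,
+    # subtract it, and pass the residual to the next level's codebook.
+    residual = np.asarray(x, dtype=float)
+    indices = []
+    for cb in codebooks:                         # cb has shape (K, d)
+        d2 = ((residual[None, :] - cb) ** 2).sum(axis=1)
+        idx = int(d2.argmin())
+        indices.append(idx)
+        residual = residual - cb[idx]
+    return indices
+```
+
+The first-level indices are the ones that play the role of content tokens in SpeechTokenizer-LM; the remaining levels refine acoustic detail.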
+
+As shown in Table 5, our approach achieved superior naturalness and meaningfulness scores compared to SpeechTokenizer-LM. This verifies that our framework effectively enhances various approaches to extracting semantic tokens.
+
+Flexibility with Different Decoders Additionally, for both SpeechTokenizer-LM and our proposed method, we did not adopt the diffusion decoder mentioned in Section 3.5. Instead, we predicted the remaining RVQ tokens from the semantic tokens (or the semantic tokens and variational features, for our approach) and leveraged the pre-trained SpeechTokenizer decoder for speech reconstruction. As noted in Section 3.5, our training framework is adaptable and is not tied to a specific decoder type; we adopted a diffusion-based decoder in the preceding experiments for simplified training and fair comparisons. The empirical results in Table 5 further validate this flexibility, as our model still achieves high human evaluation MOS scores with a different decoder.
+
+# 6. Related Work
+
+Emerging speech language models typically use discrete semantic tokens for autoregressive modeling. These tokens are often obtained by $k$ -means clustering of features extracted from self-supervised pre-trained models (Hsu et al., 2021; Chen et al., 2022). For instance, Lakhotia et al. (2021) used semantic tokens for generative spoken language modeling (GSLM). Subsequently, Kharitonov et al. (2022) enhanced this approach by incorporating pitch information alongside semantic tokens as joint inputs to the autoregressive model. Our proposed approach improves upon this line of research by using a variational autoencoder to automatically learn paralinguistic speech attributes in conjunction with the autoregressive model. Borsos et al. (2023) proposed a two-stage approach for the decoder that used acoustic tokens (Zeghidour et al., 2022; Defossez et al., 2023). This type of framework is also widely used in text-to-speech systems (Chen et al., 2025; 2024). In contrast, our approach focuses on the joint modeling of linguistic and paralinguistic features by enhancing the inputs to the autoregressive model rather than improving the decoder.
+
+Recently, a line of research has emerged focusing on improving speech language models through the integration of text-based models. Hassid et al. (2023) initialized their speech language model using a pre-trained text-based large language model (LLM). Similarly, Rubenstein et al. (2023); Maiti et al. (2024) expanded the vocabulary of pre-trained text-based LLMs by integrating the semantic tokens. Building on this, Yang et al. (2024); Du et al. (2024) further explored multi-task training involving text-conditioned generative speech tasks, combining text and audio within a single LLM. We note that our proposed approach takes a different direction but can still be integrated with these approaches. For example, one could initialize the transformer in our autoregressive model using parameters from a text-based LLM.
+
+Recent works (Défossez et al., 2024) incorporate discrete acoustic tokens directly into autoregressive modeling. However, these approaches often require complex designs, such as delay patterns and text-based pretraining. In Section 5, we demonstrate that directly incorporating acoustic tokens into autoregressive modeling significantly degrades the generation of linguistic content, while our method does not.
+
+# 7. Conclusion
+
+In this work, we proposed an approach that combines a variational autoencoder with existing token-based speech language models. We conducted experiments to evaluate its effectiveness in terms of language capability and synthesis naturalness. Empirical evaluations suggest that our proposed approach, in contrast to other recent techniques, is capable of producing syntheses with better subjective meaningfulness and naturalness. Additionally, we examined the effects of the weights of different loss terms, $\beta$ and $\gamma$ , on performance. Our findings indicate that $\beta$ governs the amount of information encoded from the mel-spectrogram into the variational feature, whereas $\gamma$ controls the type of information encoded within the variational feature.
+
+# 8. Limitations and Future Work
+
+Our results indicate that the performance of our proposed approach is sensitive to the choice of hyper-parameters $\beta$ and $\gamma$ . Future work will explore automated methods for tuning these hyper-parameters. Additionally, our evaluation is limited to English datasets, and it remains unclear if the approach generalizes to languages with different prosodic patterns. Future work will extend training and evaluation to additional languages to assess cross-lingual applicability. Finally, our model has a relatively small number of parameters and is trained on a smaller dataset compared to existing frameworks (Hassid et al., 2023; Rubenstein et al., 2023). We plan to scale up both the model and the training data to examine whether our findings hold with increased computational resources and larger datasets.
+
+# Acknowledgments
+
+This work was supported by a grant from Apple. Any views, opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and should not be interpreted as reflecting the views, policies or position, either expressed or implied, of Apple. We thank Professor Shinji Watanabe for his valuable feedback. We also thank the reviewers and the area chair for their insightful comments and suggestions.
+
+# Impact Statement
+
+We proposed an approach that improves the naturalness of speech language models without compromising their language proficiency, which can be leveraged by existing paradigms in this literature. While a model that generates more natural speech can enhance the user experience in conversational agents, it can also be exploited for harmful purposes, such as creating fake videos or making spam phone calls.
+
+# References
+
+Adigwe, A., Tits, N., Haddad, K. E., Ostadabbas, S., and Dutoit, T. The emotional voices database: Towards controlling the emotion dimension in voice generation systems, 2018. URL https://arxiv.org/abs/1806.09514.
+Borsos, Z., Marinier, R., Vincent, D., Kharitonov, E., Pietquin, O., Sharifi, M., Roblek, D., Teboul, O., Grangier, D., Tagliasacchi, M., and Zeghidour, N. Audiolm: A language modeling approach to audio generation. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 31:2523-2533, 2023. doi: 10.1109/TASLP.2023.3288409.
+
+Bowman, S. R., Vilnis, L., Vinyals, O., Dai, A., Jozefowicz, R., and Bengio, S. Generating sentences from a continuous space. In Riezler, S. and Goldberg, Y. (eds.), Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, pp. 10-21, Berlin, Germany, August 2016. Association for Computational Linguistics. doi: 10.18653/v1/K16-1002. URL https://aclanthology.org/K16-1002.
+Chen, L.-W., Watanabe, S., and Rudnicky, A. A vector quantized approach for text to speech synthesis on real-world spontaneous speech. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11): 12644-12652, Jun. 2023. doi: 10.1609/aaai.v37i11.26488. URL https://ojs.aaai.org/index.php/AAAI/article/view/26488.
+Chen, S., Wang, C., Chen, Z., Wu, Y., Liu, S., Chen, Z., Li, J., Kanda, N., Yoshioka, T., Xiao, X., Wu, J., Zhou, L., Ren, S., Qian, Y., Qian, Y., Wu, J., Zeng, M., Yu, X., and Wei, F. Wavlm: Large-scale self-supervised pre-training for full stack speech processing. IEEE Journal of Selected Topics in Signal Processing, 16(6):1505-1518, 2022. doi: 10.1109/JSTSP.2022.3188113.
+Chen, S., Liu, S., Zhou, L., Liu, Y., Tan, X., Li, J., Zhao, S., Qian, Y., and Wei, F. VALL-E 2: Neural codec language models are human parity zero-shot text to speech synthesizers, 2024. URL https://arxiv.org/abs/2406.05370.
+Chen, S., Wang, C., Wu, Y., Zhang, Z., Zhou, L., Liu, S., Chen, Z., Liu, Y., Wang, H., Li, J., He, L., Zhao, S., and Wei, F. Neural codec language models are zero-shot text to speech synthesizers. IEEE Transactions on Audio, Speech and Language Processing, 33:705-718, 2025. doi: 10.1109/TASLPRO.2025.3530270.
+Défossez, A., Copet, J., Synnaeve, G., and Adi, Y. High fidelity neural audio compression. Transactions on Machine Learning Research, 2023. ISSN 2835-8856. URL https://openreview.net/forum?id=ivCd8z8zR2. Featured Certification, Reproducibility Certification.
+Dinh, L., Sohl-Dickstein, J., and Bengio, S. Density estimation using real NVP. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=HkpbnH9lx.
+Du, Z., Wang, J., Chen, Q., Chu, Y., Gao, Z., Li, Z., Hu, K., Zhou, X., Xu, J., Ma, Z., Wang, W., Zheng, S., Zhou, C., Yan, Z., and Zhang, S. LauraGPT: Listen, attend, understand, and regenerate audio with GPT, 2024. URL https://arxiv.org/abs/2310.04673.
+
+Défossez, A., Mazaré, L., Orsini, M., Royer, A., Pérez, P., Jégou, H., Grave, E., and Zeghidour, N. Moshi: a speech-text foundation model for real-time dialogue, 2024. URL https://arxiv.org/abs/2410.00037.
+Eyben, F., Scherer, K. R., Schuller, B. W., Sundberg, J., Andre, E., Busso, C., Devillers, L. Y., Epps, J., Laukka, P., Narayanan, S. S., et al. The Geneva minimalistic acoustic parameter set (GeMAPS) for voice research and affective computing. IEEE transactions on affective computing, 7 (2):190-202, 2015.
+Fu, H., Li, C., Liu, X., Gao, J., Celikyilmaz, A., and Carin, L. Cyclical annealing schedule: A simple approach to mitigating KL vanishing. In Burstein, J., Doran, C., and Solorio, T. (eds.), Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 240-250, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1021. URL https://aclanthology.org/N19-1021.
+Hassid, M., Remez, T., Nguyen, T. A., Gat, I., Conneau, A., Kreuk, F., Copet, J., Defossez, A., Synnaeve, G., Dupoux, E., Schwartz, R., and Adi, Y. Textually pretrained speech language models. In Oh, A., Naumann, T., Globerson, A., Saenko, K., Hardt, M., and Levine, S. (eds.), Advances in Neural Information Processing Systems, volume 36, pp. 63483-63501. Curran Associates, Inc., 2023.
+Hendrycks, D. and Gimpel, K. Gaussian error linear units (GELUs). In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=Bk0MRI5lg.
+Higgins, I., Matthey, L., Pal, A., Burgess, C., Glorot, X., Botvinick, M., Mohamed, S., and Lerchner, A. beta-VAE: Learning basic visual concepts with a constrained variational framework. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=Sy2fzU9gl.
+Ho, J., Jain, A., and Abbeel, P. Denoising diffusion probabilistic models. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., and Lin, H. (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 6840-6851. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf.
+Hsu, W.-N., Bolte, B., Tsai, Y.-H. H., Lakhotia, K., Salakhutdinov, R., and Mohamed, A. HuBERT: Self-supervised speech representation learning by masked prediction of hidden units. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:3451-3460, 2021. doi: 10.1109/TASLP.2021.3122291.
+Kahn, J., Riviere, M., Zheng, W., Kharitonov, E., Xu, Q., Mazaré, P.-E., Karadayi, J., Liptchinsky, V., Collobert, R., Fuegen, C., et al. Libri-light: A benchmark for ASR with limited or no supervision. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 7669-7673. IEEE, 2020.
+Kharitonov, E., Lee, A., Polyak, A., Adi, Y., Copet, J., Lakhotia, K., Nguyen, T. A., Riviere, M., Mohamed, A., Dupoux, E., and Hsu, W.-N. Text-free prosody-aware generative spoken language modeling. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 8666-8681, Dublin, Ireland, May 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.acl-long.593. URL https://aclanthology.org/2022.acl-long.593.
+Kim, J., Kim, S., Kong, J., and Yoon, S. Glow-tts: A generative flow for text-to-speech via monotonic alignment search. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., and Lin, H. (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 8067-8077. Curran Associates, Inc., 2020.
+Kim, J., Kong, J., and Son, J. Conditional variational autoencoder with adversarial learning for end-to-end text-to-speech. In Meila, M. and Zhang, T. (eds.), Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pp. 5530-5540. PMLR, 18-24 Jul 2021. URL https://proceedings.mlr.press/v139/kim21f.html.
+Kim, J. W., Salamon, J., Li, P., and Bello, J. P. Crepe: A convolutional representation for pitch estimation. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 161-165, 2018. doi: 10.1109/ICASSP.2018.8461329.
+Kingma, D. P. and Welling, M. An introduction to variational autoencoders. Foundations and Trends® in Machine Learning, 12(4):307-392, 2019. ISSN 1935-8245. doi: 10.1561/2200000056. URL http://dx.doi.org/10.1561/2200000056.
+Kong, J., Kim, J., and Bae, J. Hifi-gan: Generative adversarial networks for efficient and high fidelity speech synthesis. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., and Lin, H. (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 17022-17033. Curran Associates, Inc., 2020.
+
+Lakhotia, K., Kharitonov, E., Hsu, W.-N., Adi, Y., Polyak, A., Bolte, B., Nguyen, T.-A., Copet, J., Baevski, A., Mohamed, A., and Dupoux, E. On generative spoken language modeling from raw audio. Transactions of the Association for Computational Linguistics, 9:1336-1354, 2021. doi: 10.1162/tacl_a_00430. URL https://aclanthology.org/2021.tacl-1.79.
+Loshchilov, I. and Hutter, F. Decoupled weight decay regularization. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=Bkg6RiCqY7.
+Maiti, S., Peng, Y., Choi, S., Jung, J.-W., Chang, X., and Watanabe, S. Voxtlm: Unified decoder-only models for consolidating speech recognition, synthesis and speech, text continuation tasks. In ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 13326-13330, 2024. doi: 10.1109/ICASSP48485.2024.10447112.
+Nguyen, T. A., de Seyssel, M., Rozé, P., Rivière, M., Kharitonov, E., Baevski, A., Dunbar, E., and Dupoux, E. The zero resource speech benchmark 2021: Metrics and baselines for unsupervised spoken language modeling, 2020. URL https://arxiv.org/abs/2011.11588.
+OpenAI. GPT-4 technical report, 2024. URL https://arxiv.org/abs/2303.08774.
+Panayotov, V., Chen, G., Povey, D., and Khudanpur, S. Librispeech: An ASR corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5206-5210, 2015. doi: 10.1109/ICASSP.2015.7178964.
+Perez, E., Strub, F., de Vries, H., Dumoulin, V., and Courville, A. FiLM: Visual reasoning with a general conditioning layer. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1), Apr. 2018. doi: 10.1609/aaai.v32i1.11671. URL https://ojs.aaai.org/index.php/AAAI/article/view/11671.
+Press, O., Smith, N., and Lewis, M. Train short, test long: Attention with linear biases enables input length extrapolation. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=R8sQPpGCv0.
+Radford, A., Kim, J. W., Xu, T., Brockman, G., Mcleavey, C., and Sutskever, I. Robust speech recognition via large-scale weak supervision. In Krause, A., Brunskill, E., Cho, K., Engelhardt, B., Sabato, S., and Scarlett, J. (eds.), Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pp. 28492-28518. PMLR, 23-29 Jul 2023. URL https://proceedings.mlr.press/v202/radford23a.html.
+Reyes-González, H. and Torre, R. Testing the boundaries: Normalizing flows for higher dimensional data sets. Journal of Physics: Conference Series, 2438(1):012155, 2023.
+Rezende, D. and Mohamed, S. Variational inference with normalizing flows. In Bach, F. and Blei, D. (eds.), Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pp. 1530-1538, Lille, France, 07-09 Jul 2015. PMLR. URL https://proceedings.mlr.press/v37/rezende15.html.
+Ronneberger, O., Fischer, P., and Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Navab, N., Hornegger, J., Wells, W. M., and Frangi, A. F. (eds.), Medical Image Computing and Computer-Assisted Intervention - MICCAI 2015, pp. 234-241, Cham, 2015. Springer International Publishing. ISBN 978-3-319-24574-4.
+Rubenstein, P. K., Asawaroengchai, C., Nguyen, D. D., Bapna, A., Borsos, Z., de Chaumont Quity, F., Chen, P., Badawy, D. E., Han, W., Kharitonov, E., Muckenhirn, H., Padfield, D., Qin, J., Rozenberg, D., Sainath, T., Schalkwyk, J., Sharifi, M., Ramanovich, M. T., Tagliasacchi, M., Tudor, A., Velimirovic, M., Vincent, D., Yu, J., Wang, Y., Zayats, V., Zeghidour, N., Zhang, Y., Zhang, Z., Zilka, L., and Frank, C. AudioPaLM: A large language model that can speak and listen, 2023. URL https://arxiv.org/abs/2306.12925.
+Schuller, B., Steidl, S., and Batliner, A. The interspeech 2009 emotion challenge. In Interspeech 2009, pp. 312-315, 2009. doi: 10.21437/Interspeech.2009-103.
+Schuller, B., Steidl, S., Batliner, A., Vinciarelli, A., Scherer, K., Ringeval, F., Chetouani, M., Weninger, F., Eyben, F., Marchi, E., Mortillaro, M., Salamin, H., Polychroniou, A., Valente, F., and Kim, S. The interspeech 2013 computational paralinguistics challenge: social signals, conflict, emotion, autism. In Interspeech 2013, pp. 148-152, 2013. doi: 10.21437/Interspeech.2013-56.
+Song, J., Meng, C., and Ermon, S. Denoising diffusion implicit models. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=St1giarCHLP.
+
+Sun, G., Zhang, Y., Weiss, R. J., Cao, Y., Zen, H., Rosenberg, A., Ramabhadran, B., and Wu, Y. Generating diverse and natural text-to-speech samples using a quantized fine-grained vae and autoregressive prosody prior. In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6699-6703, 2020. doi: 10.1109/ICASSP40776.2020.9053436.
+Ulyanov, D., Vedaldi, A., and Lempitsky, V. Instance normalization: The missing ingredient for fast stylization, 2017. URL https://arxiv.org/abs/1607.08022.
+Vahdat, A. and Kautz, J. NVAE: A deep hierarchical variational autoencoder. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., and Lin, H. (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 19667-19679. Curran Associates, Inc., 2020.
+Weston, J., Lenain, R., Meepegama, U., and Fristed, E. Learning de-identified representations of prosody from raw audio. In Meila, M. and Zhang, T. (eds.), Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pp. 11134–11145. PMLR, 18–24 Jul 2021. URL https://proceedings.mlr.press/v139/weston21a.html.
+Xiong, R., Yang, Y., He, D., Zheng, K., Zheng, S., Xing, C., Zhang, H., Lan, Y., Wang, L., and Liu, T. On layer normalization in the transformer architecture. In III, H. D. and Singh, A. (eds.), Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pp. 10524-10533. PMLR, 13-18 Jul 2020. URL https://proceedings.mlr.press/v119/xiong20b.html.
+Yamagishi, J., Veaux, C., and MacDonald, K. CSTR VCTK Corpus: English multi-speaker corpus for CSTR voice cloning toolkit (version 0.92), 2019. URL https://doi.org/10.7488/ds/2645.
+Yang, D., Tian, J., Tan, X., Huang, R., Liu, S., Guo, H., Chang, X., Shi, J., Zhao, S., Bian, J., Zhao, Z., Wu, X., and Meng, H. M. UniAudio: Towards universal audio generation with large language models. In Salakhutdinov, R., Kolter, Z., Heller, K., Weller, A., Oliver, N., Scarlett, J., and Berkenkamp, F. (eds.), Proceedings of the 41st International Conference on Machine Learning, volume 235 of Proceedings of Machine Learning Research, pp. 56422-56447. PMLR, 21-27 Jul 2024. URL https://proceedings.mlr.press/v235/yang24x.html.
+Zeghidour, N., Luebs, A., Omran, A., Skoglund, J., and Tagliasacchi, M. SoundStream: An end-to-end neural audio codec. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 30:495-507, 2022. doi: 10.1109/TASLP.2021.3129994.
+Zhang, B. and Sennrich, R. Root mean square layer normalization. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
+Zhang, X., Zhang, D., Li, S., Zhou, Y., and Qiu, X. SpeechTokenizer: Unified speech tokenizer for speech language models. In International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=AF9Q8Vip84.
+Zhu, Y., Min, M. R., Kadav, A., and Graf, H. P. S3VAE: Self-supervised sequential VAE for representation disentanglement and data generation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
+
+# A. Mathematical Derivations
+
+# A.1. Equation 3
+
+For notational simplicity, we drop the superscript $c$ and write $\mathbf{Z}^c$ simply as $\mathbf{Z}$ throughout this proof.
+
+With the parameterized prior, the modeling distribution of $\mathbf{X}$ now also depends on $\psi$ :
+
+$$
+p _ {\theta , \psi} (\mathbf {X}) = \int p _ {\theta} (\mathbf {X} \mid \mathbf {Z}) p _ {\psi} (\mathbf {Z}) d \mathbf {Z},
+$$
+
+$$
+p _ {\theta , \psi} (\mathbf {Z} \mid \mathbf {X}) = \frac {p _ {\theta , \psi} (\mathbf {X} , \mathbf {Z})}{p _ {\theta , \psi} (\mathbf {X})} = \frac {p _ {\theta} (\mathbf {X} \mid \mathbf {Z}) p _ {\psi} (\mathbf {Z})}{p _ {\theta , \psi} (\mathbf {X})}.
+$$
+
+Following a similar proof in Kingma & Welling (2019):
+
+Proof.
+
+$$
+\begin{array}{l} \log p _ {\theta , \psi} (\mathbf {X}) = \mathbb {E} _ {\mathbf {Z} \sim q _ {\phi} (\mathbf {Z} | \mathbf {X})} [ \log p _ {\theta , \psi} (\mathbf {X}) ] \\ = \mathbb {E} _ {\mathbf {Z} \sim q _ {\phi} (\mathbf {Z} | \mathbf {X})} \left[ \log \left[ \frac {p _ {\theta , \psi} (\mathbf {X} , \mathbf {Z})}{p _ {\theta , \psi} (\mathbf {Z} | \mathbf {X})} \right] \right] \\ = \mathbb {E} _ {\mathbf {Z} \sim q _ {\phi} (\mathbf {Z} | \mathbf {X})} \left[ \log \left[ \frac {p _ {\theta , \psi} (\mathbf {X} , \mathbf {Z}) q _ {\phi} (\mathbf {Z} | \mathbf {X})}{p _ {\theta , \psi} (\mathbf {Z} | \mathbf {X}) q _ {\phi} (\mathbf {Z} | \mathbf {X})} \right] \right] \\ = \mathbb {E} _ {\mathbf {Z} \sim q _ {\phi} (\mathbf {Z} | \mathbf {X})} \left[ \log \left[ \frac {p _ {\theta , \psi} (\mathbf {X} , \mathbf {Z})}{q _ {\phi} (\mathbf {Z} | \mathbf {X})} \right] \right] + \mathbb {E} _ {\mathbf {Z} \sim q _ {\phi} (\mathbf {Z} | \mathbf {X})} \left[ \log \left[ \frac {q _ {\phi} (\mathbf {Z} | \mathbf {X})}{p _ {\theta , \psi} (\mathbf {Z} | \mathbf {X})} \right] \right] \\ = \mathbb {E} _ {\mathbf {Z} \sim q _ {\phi} (\mathbf {Z} | \mathbf {X})} \left[ \log \left[ \frac {p _ {\theta , \psi} (\mathbf {X} , \mathbf {Z})}{q _ {\phi} (\mathbf {Z} | \mathbf {X})} \right] \right] + D _ {K L} (q _ {\phi} (\mathbf {Z} | \mathbf {X}) | | p _ {\theta , \psi} (\mathbf {Z} | \mathbf {X})). \\ \end{array}
+$$
+
+Therefore,
+
+$$
+\begin{array}{l} \mathcal {O} _ {E L B O} = \mathbb {E} _ {\mathbf {Z} \sim q _ {\phi} (\mathbf {Z} | \mathbf {X})} \left[ \log \left[ \frac {p _ {\theta , \psi} (\mathbf {X} , \mathbf {Z})}{q _ {\phi} (\mathbf {Z} | \mathbf {X})} \right] \right] \\ = \mathbb {E} _ {\mathbf {Z} \sim q _ {\phi} (\mathbf {Z} | \mathbf {X})} \left[ \log \left[ \frac {p _ {\theta} (\mathbf {X} \mid \mathbf {Z}) p _ {\psi} (\mathbf {Z})}{q _ {\phi} (\mathbf {Z} \mid \mathbf {X})} \right] \right] \\ = \mathbb {E} _ {\mathbf {Z} \sim q _ {\phi} (\mathbf {Z} | \mathbf {X})} \left[ \log p _ {\theta} (\mathbf {X} | \mathbf {Z}) \right] + \mathbb {E} _ {\mathbf {Z} \sim q _ {\phi} (\mathbf {Z} | \mathbf {X})} \left[ \log \left[ \frac {p _ {\psi} (\mathbf {Z})}{q _ {\phi} (\mathbf {Z} | \mathbf {X})} \right] \right] \\ = \mathbb {E} _ {\mathbf {Z} \sim q _ {\phi} (\mathbf {Z} | \mathbf {X})} [ \log p _ {\theta} (\mathbf {X} \mid \mathbf {Z}) ] - D _ {K L} (q _ {\phi} (\mathbf {Z} \mid \mathbf {X}) | | p _ {\psi} (\mathbf {Z})). \\ \end{array}
+$$
+
+With $q_{\phi}(\mathbf{Z}\mid \mathbf{X}) = \prod_{t = 1}^{T}q_{\phi}(z_{t}\mid \mathbf{X})$ , and $p_{\psi}(\mathbf{Z}) = \prod_{t = 1}^{T}p_{\psi}(z_{t}\mid \mathbf{Z}_{1:t - 1})$
+
+$$
+\begin{array}{l} D _ {K L} (q _ {\phi} (\mathbf {Z} \mid \mathbf {X}) | | p _ {\psi} (\mathbf {Z})) = \mathbb {E} _ {\mathbf {Z} \sim q _ {\phi} (\mathbf {Z} \mid \mathbf {X})} \left[ \log \left[ \frac {q _ {\phi} (\mathbf {Z} \mid \mathbf {X})}{p _ {\psi} (\mathbf {Z})} \right] \right] \\ = \mathbb {E} _ {\mathbf {Z} \sim q _ {\phi} (\mathbf {Z} | \mathbf {X})} \left[ \log \left[ \frac {\prod_ {t = 1} ^ {T} q _ {\phi} (z _ {t} | \mathbf {X})}{\prod_ {t = 1} ^ {T} p _ {\psi} (z _ {t} | \mathbf {Z} _ {1 : t - 1})} \right] \right] \\ = \sum_ {t = 1} ^ {T} \mathbb {E} _ {\mathbf {Z} \sim q _ {\phi} (\mathbf {Z} | \mathbf {X})} \left[ \log \left[ \frac {q _ {\phi} (z _ {t} \mid \mathbf {X})}{p _ {\psi} (z _ {t} \mid \mathbf {Z} _ {1 : t - 1})} \right] \right] \\ = \sum_ {t = 1} ^ {T} \mathbb {E} _ {\mathbf {Z} _ {1: t - 1}} \left[ D _ {K L} \left(q _ {\phi} \left(z _ {t} \mid \mathbf {X}\right) \right\lvert \left| p _ {\psi} \left(z _ {t} \mid \mathbf {Z} _ {1: t - 1}\right)\right) \right], \\ \end{array}
+$$
+
+where $\mathbf{Z}_{1:t - 1}\sim \prod_{\tau = 1}^{t - 1}q_{\phi}(z_{\tau}\mid \mathbf{X})$.
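When both $q_{\phi}(z_t\mid \mathbf{X})$ and $p_{\psi}(z_t\mid \mathbf{Z}_{1:t-1})$ are diagonal Gaussians, each per-timestep term in the final sum has a closed form. A minimal numpy sketch (illustrative only; the function name and shapes are our assumptions, not the paper's code):

```python
import numpy as np

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """Closed-form D_KL(q || p) between diagonal Gaussians, summed over
    the latent dimension; this is the per-timestep term
    D_KL(q_phi(z_t | X) || p_psi(z_t | Z_{1:t-1}))."""
    var_q, var_p = np.exp(logvar_q), np.exp(logvar_p)
    kl = 0.5 * (logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)
    return kl.sum(axis=-1)

# The outer expectation over Z_{1:t-1} would be estimated by Monte Carlo:
# sample Z ~ q_phi(Z | X), run the prior on the sampled prefix, and
# average gaussian_kl over samples and timesteps.
```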
+
+
+
+
+Figure 2. (a) Residual block architecture of the encoder $\phi$ . (b) Model architecture for the time-wise normalizing flow introduced in Section 3.4.
+
+# A.2. Equation 4
+
+Proof. Since $\mathcal{O}_{rec}$ is straightforward to derive from Equation 1 (decompose $\mathbf{Z}$ into $\mathbf{Z}^c$ and $\mathbf{Z}^d$ ), here we show how $\mathcal{L}_{kl}^c$ and $\mathcal{L}_{kl}^d$ are derived from the $D_{KL}(q_{\phi}(\mathbf{Z} \mid \mathbf{X})||p_{\psi}(\mathbf{Z}))$ in Equation 1.
+
+With $q_{\phi}(\mathbf{Z}\mid \mathbf{X}) = q_{\phi}(\mathbf{Z}^{c}\mid \mathbf{X})p(\mathbf{Z}^{d}\mid \mathbf{X})$ and $p_{\psi}(z_t\mid \mathbf{Z}_{1:t - 1}) = p_{\psi}(z_t^d\mid \mathbf{Z}_{1:t - 1})p_{\psi}(z_t^c\mid \mathbf{Z}_{1:t - 1})$
+
+$$
+\begin{array}{l} D _ {K L} \left(q _ {\phi} (\mathbf {Z} | \mathbf {X}) | | p _ {\psi} (\mathbf {Z})\right) \\ = \mathbb {E} _ {\mathbf {Z}} \left[ \log \left[ \frac {q _ {\phi} (\mathbf {Z} \mid \mathbf {X})}{p _ {\psi} (\mathbf {Z})} \right] \right] \\ = \mathbb {E} _ {\mathbf {Z}} \left[ \log \left[ \frac {q _ {\phi} (\mathbf {Z} ^ {c} \mid \mathbf {X}) p (\mathbf {Z} ^ {d} \mid \mathbf {X})}{\prod_ {t = 1} ^ {T} p _ {\psi} (z _ {t} \mid \mathbf {Z} _ {1 : t - 1})} \right] \right] \\ = \mathbb {E} _ {\mathbf {Z}} \left[ \log \left[ \frac {q _ {\phi} \left(\mathbf {Z} ^ {c} \mid \mathbf {X}\right) p \left(\mathbf {Z} ^ {d} \mid \mathbf {X}\right)}{\prod_ {t = 1} ^ {T} p _ {\psi} \left(z _ {t} ^ {c} \mid \mathbf {Z} _ {1 : t - 1}\right) p _ {\psi} \left(z _ {t} ^ {d} \mid \mathbf {Z} _ {1 : t - 1}\right)} \right] \right] \\ = \mathbb {E} _ {\mathbf {Z}} \left[ \log \left[ \frac {q _ {\phi} \left(\mathbf {Z} ^ {c} \mid \mathbf {X}\right)}{\prod_ {t = 1} ^ {T} p _ {\psi} \left(z _ {t} ^ {c} \mid \mathbf {Z} _ {1 : t - 1}\right)} \right] \right] + \mathbb {E} _ {\mathbf {Z}} \left[ \log \left[ \frac {p (\mathbf {Z} ^ {d} \mid \mathbf {X})}{\prod_ {t = 1} ^ {T} p _ {\psi} \left(z _ {t} ^ {d} \mid \mathbf {Z} _ {1 : t - 1}\right)} \right] \right] \\ = \sum_ {t = 1} ^ {T} \mathbb {E} _ {\mathbf {Z} _ {1: t - 1}} \left[ D _ {K L} \left(q _ {\phi} \left(z _ {t} ^ {c} \mid \mathbf {X}\right) \mid \mid p _ {\psi} \left(z _ {t} ^ {c} \mid \mathbf {Z} _ {1: t - 1}\right)\right) \right] - \sum_ {t = 1} ^ {T} \mathbb {E} _ {\mathbf {Z} _ {1: t}} \left[ \log p _ {\psi} \left(z _ {t} ^ {d} \mid \mathbf {Z} _ {1: t - 1}\right) \right] \\ + \mathbb {E} _ {\mathbf {Z}} [ \log p (\mathbf {Z} ^ {d} \mid \mathbf {X}) ] \\ \end{array}
+$$
+
+Since $\mathbb{E}_{\mathbf{Z}}[\log p(\mathbf{Z}^d\mid \mathbf{X})]$ does not depend on any parameters, it can be dropped during optimization.
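Concretely, the surviving terms give a Gaussian KL for the continuous latents and a cross-entropy (negative log-likelihood) term for the discrete tokens. A schematic sketch of the discrete term (our notation; a plain log-softmax stands in for the prior head):

```python
import numpy as np

def discrete_nll(logits, token_id):
    """-log p_psi(z_t^d | Z_{1:t-1}) for one timestep, computed from the
    prior head's unnormalized logits over the token vocabulary."""
    log_probs = logits - np.log(np.sum(np.exp(logits)))  # log-softmax
    return -log_probs[token_id]

# With uniform logits over a vocabulary of size k, the NLL is log k.
nll = discrete_nll(np.zeros(8), 3)
```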
+
+# B. Model Architectures
+
+Encoder $q_{\phi}(\mathbf{Z} \mid \mathbf{X})$ The encoder consists of 3 residual blocks with a kernel size of 7; the architecture of the residual block and the hidden dimensions used for all models are given in Figure 2 (a). After the residual blocks, we apply another instance normalization, followed by separate linear heads that output the mean and log-variance of Equation 2. We use the same size encoder for the experiments with LibriSpeech and Libri-light. Instance Norm refers to instance normalization (Ulyanov et al., 2017).
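Instance normalization here standardizes each channel of each utterance over the time axis. A minimal sketch of the operation (learnable affine parameters and the surrounding convolutions are omitted; the array shapes are our assumption):

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Instance normalization over time: x has shape (batch, channels, time),
    and every (sample, channel) series is standardized independently."""
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
y = instance_norm(rng.normal(loc=1.0, scale=3.0, size=(2, 4, 100)))
```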
+
+Autoregressive Transformer We follow the typical implementation of transformers with Post-LN (Xiong et al., 2020). We use RMSNorm (Zhang & Sennrich, 2019) and GELU activation (Hendrycks & Gimpel, 2017). We use ALiBi (Press
+
+Table 6. Model configuration of the autoregressive transformer for training on LibriSpeech and Libri-light respectively. This configuration is shared for all comparing methods. 'feed-forward size' refers to the width of the feed-forward linear layer.
+
+| Dataset | # of layers | # of heads | hidden size | feed-forward size |
+| --- | --- | --- | --- | --- |
+| LibriSpeech | 4 | 8 | 512 | 2048 |
+| Libri-light | 16 | 16 | 1024 | 4096 |
+
+et al., 2022) for relative positional encoding. We used different model sizes for the LibriSpeech and Libri-light experiments, with the configuration summarized in Table 6. The same configuration is shared for all comparison methods.
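ALiBi replaces learned positional embeddings with a per-head linear bias added to attention scores. A sketch of the bias construction (this assumes the number of heads is a power of two, for which the published slope formula $2^{-8(h+1)/n}$ applies; names are ours):

```python
import numpy as np

def alibi_bias(num_heads, seq_len):
    """Additive ALiBi attention bias (Press et al., 2022) for causal
    self-attention: head h penalizes attention from query i to key j
    linearly in the distance (i - j) to the past."""
    slopes = 2.0 ** (-8.0 * np.arange(1, num_heads + 1) / num_heads)
    pos = np.arange(seq_len)
    dist = np.minimum(pos[None, :] - pos[:, None], 0)  # j - i, clipped to past
    return slopes[:, None, None] * dist[None, :, :]    # (heads, seq, seq)

bias = alibi_bias(8, 5)
```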
+
+Time-wise Normalizing Flow The architecture of our time-wise normalizing flow is illustrated in Figure 2 (b). Here, $\mu$ and $\sigma$ are the mean and standard deviation that will be multiplied and added to the input. This part mainly follows the implementation of Dinh et al. (2017). The "Last Layer Output" in Figure 2 (b) refers to the output of the last transformer layer. "FiLM" refers to FiLM conditioning (Perez et al., 2018). "Swap" refers to the swapping of the two inputs in their channel order. We used 4 flow blocks for all experiments.
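Each flow block is an affine coupling layer: one half of the channels conditions $\mu$ and $\sigma$, the other half is affinely transformed, and "Swap" alternates the roles between blocks. A simplified sketch where a linear map stands in for the transformer-plus-FiLM conditioner of Figure 2 (b):

```python
import numpy as np

def coupling_forward(x1, x2, w, b):
    """Affine coupling (Dinh et al., 2017): x1 passes through unchanged and
    parameterizes the scale/shift applied to x2; the log-determinant of the
    Jacobian is sum(log_sigma)."""
    mu, log_sigma = np.split(x1 @ w + b, 2, axis=-1)
    return x1, x2 * np.exp(log_sigma) + mu, log_sigma.sum(axis=-1)

def coupling_inverse(y1, y2, w, b):
    """Exact inverse: recompute mu, sigma from the untouched half."""
    mu, log_sigma = np.split(y1 @ w + b, 2, axis=-1)
    return y1, (y2 - mu) * np.exp(-log_sigma)

rng = np.random.default_rng(0)
d = 3
w, b = 0.1 * rng.normal(size=(d, 2 * d)), 0.1 * rng.normal(size=2 * d)
x1, x2 = rng.normal(size=d), rng.normal(size=d)
y1, y2, log_det = coupling_forward(x1, x2, w, b)
r1, r2 = coupling_inverse(y1, y2, w, b)  # recovers (x1, x2)
```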
+
+Diffusion Decoder For our diffusion decoder $\theta$ , we apply the same residual block as in Figure 2 (a). However, here we add skip connections between the outputs of residual blocks, following the commonly used U-Net architecture (Ronneberger et al., 2015). We encode the current diffusion step with sinusoidal positional encoding, linearly project it, and add it to each time frame of the output of the first convolution layer in each residual block. For both data sets, we used six residual blocks, with the same hidden dimensions and kernel size as the encoder $\phi$ .
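The sinusoidal step encoding is the standard Transformer positional encoding applied to the scalar diffusion step. A sketch (the embedding dimension and max period below are our assumptions, not values from the paper):

```python
import numpy as np

def timestep_embedding(t, dim, max_period=10000.0):
    """Sinusoidal encoding of diffusion step t; in the decoder this vector
    is linearly projected and added to each time frame of the first conv
    output in every residual block."""
    half = dim // 2
    freqs = np.exp(-np.log(max_period) * np.arange(half) / half)
    return np.concatenate([np.sin(t * freqs), np.cos(t * freqs)])

emb = timestep_embedding(250, 128)
```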
+
+Utterance Encoder The utterance encoder consists of 3 blocks, each sequentially comprising a convolution with stride 2 and kernel size 4, followed by instance normalization (Ulyanov et al., 2017) and ReLU activation. The hidden sizes of the three convolution layers are 128, 256, and 512. Afterward, simple time-averaging is applied to the output to produce an utterance-level embedding.
+
+# C. Training Details
+
+For model training, we use the AdamW optimizer (Loshchilov & Hutter, 2019) with $\beta_{1} = 0.9$ , $\beta_{2} = 0.98$ . We use a weight decay of 0.01 for the LibriSpeech models and 0.1 for the Libri-light models, and train all models with mixed precision. For the Libri-light models, we use 2 L40S GPUs with gradient accumulation of step size 2, which makes the effective batch size 192. We train for 600k update steps and warm up $\beta$ from 0 to its final value over the first 30k update steps. It takes about 14 days to train the Libri-light models.
+
+For the LibriSpeech models, we discovered that methods involving discrete tokens suffer from early overfitting (this did not occur on Libri-light). Therefore, we train these models (including our proposed approach) for only 100k steps. For the diffusion decoders of Token-LM and Token-LM + pitch, we separately train for 500k steps, at which point we observe only marginal improvement in the losses between epochs. For the pure variational approaches, we train for 400k steps, as we did not observe overfitting. We use the same effective batch size on the 2 L40S GPUs but without gradient accumulation. For the LibriSpeech models, we warm up $\beta$ from 0 to its final value over the first 20k update steps. It takes about 2 days to train the 400k-step models and less than 1 day for the 100k-step models. For both the LibriSpeech and Libri-light models, we use an initial learning rate of $5e-4$ and apply cosine learning rate decay down to $5e-5$ .
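The schedule described above (linear warm-up of $\beta$, cosine decay of the learning rate from $5e-4$ to $5e-5$) can be written as follows; the exact formulas are not stated in the paper, so this is one standard implementation consistent with the description:

```python
import math

def lr_at(step, total_steps, lr_max=5e-4, lr_min=5e-5):
    """Cosine learning-rate decay from lr_max down to lr_min."""
    t = min(step, total_steps) / total_steps
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * t))

def beta_at(step, warmup_steps, beta_final):
    """Linear warm-up of the KL weight beta over the first warmup_steps."""
    return beta_final * min(step, warmup_steps) / warmup_steps
```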
+
+For the input to the utterance encoder, we randomly crop the segment to a length between 2 and 4 seconds. For the diffusion model, we use an L1 loss to predict the diffusion noise and apply a cosine schedule for the diffusion noise variance.
+
+# D. Subjective Evaluation
+
+We use crowd-sourcing for subjective human evaluation of speech meaningfulness and naturalness. The recruited raters speak English and were paid at least the minimum wage. We sample 100 prompts from the LibriSpeech development subsets, crop the first 3 seconds, and feed them to each model to produce a 10-second continuation (13 seconds in total). The same 100 prompts are used across all methods for a fair comparison. Since we do not train our model to predict the end of speech, we
+
+Please listen to the computer-generated speech sample below and rate how well its grammar and content convey meaningful information. Focus on evaluating the grammar and the content, not the naturalness or quality of the speech.
+
+
+Figure 3. A screenshot of the Meaningfulness (M-MOS) assessment task, as the crowd-sourced rater sees it.
+
+Please listen to the computer generated speech sample below and rate how natural (i.e., human-like) it sounds on a scale from 1 (very unnatural) to 5 (very natural).
+
+
+Figure 4. A screenshot of the Naturalness (N-MOS) assessment task, as the crowd-sourced rater sees it.
+
+Table 7. Performance varying latent dimension ${d}_{z}^{c}$ on our proposed approach (without speech tokens). Models are trained on LibriSpeech.
+
+| $d_z^c$ | sWUGGY(↑) | sBLIMP(↑) | F0-RMSE(↓) | MCD(↓) | CER(↓) |
+| --- | --- | --- | --- | --- | --- |
+| 4 | 69.33 | 51.85 | 17.47 | 5.48 | 13.02 |
+| 16 | 73.49 | 51.69 | 16.68 | 5.37 | 7.80 |
+| 64 | 73.25 | 50.91 | 17.37 | 5.35 | 7.79 |
+
+observed that some syntheses end earlier than 13 seconds. We use a pre-trained voice activity detection model from pyannote to post-process the samples, removing trailing silences and non-speech segments that might affect evaluation.
+
+In Figures 3 and 4, we provide screenshots of what the raters see during the evaluation. Raters are presented with a spoken utterance and are instructed to rate its naturalness or meaningfulness on a five-point Likert scale, where 1 corresponds to very unnatural or meaningless and 5 corresponds to very natural or meaningful.
+
+# E. Dimension of the latent variable $d_{z}^{c}$
+
+Table 7 presents our results of increasing the latent dimension $d_z^c$ . We perform the sweep in the variational approach without semantic tokens for simplicity. From Table 7, we observe that increasing the latent dimension from 4 to 16 results in uniform improvements across the measures. However, further increasing the dimension from 16 to 64 leads to marginal degradation. We speculate that this performance plateau may arise from the difficulty normalizing flows face when modeling higher-dimensional distributions (Reyes-González & Torre, 2023).
+
+# F. Discrete token vocabulary size
+
+Table 8 shows our evaluation results for semantic token models (Token-LM) trained with varying $k$ . Here, $k$ refers to the number of clusters used in the $k$ -means clustering that produces the discrete tokens, which equals the vocabulary size of the discrete tokens. Our result is consistent with Maiti et al. (2024), which shows that $k = 200$ obtains the best sWUGGY score.
+
+Table 8. Comparison of model trained on different number of discrete tokens $k$ . Models are trained on LibriSpeech.
+
+| $k$ | sWUGGY(↑) | sBLIMP(↑) | F0-RMSE(↓) | MCD(↓) | CER(↓) |
+| --- | --- | --- | --- | --- | --- |
+| 50 | 59.63 | 52.49 | 41.11 | 6.49 | 11.87 |
+| 200 | 67.32 | 52.46 | 35.41 | 6.23 | 5.40 |
+| 1000 | 65.11 | 50.99 | 32.60 | 5.99 | 4.48 |
+
+Table 9. Performance of speech emotion recognition models trained on different features. The features are extracted from models pre-trained on Libri-light using our proposed method.
+
+| Method | Emotion Recognition (ACC, %) |
+| --- | --- |
+| Tokens | 57.46 ± 1.59 |
+| Variational Features | 91.57 ± 0.35 |
+| Tokens + Variational Features | 92.74 ± 0.37 |
+
+Reconstruction metrics indicate that $k = 200$ provides a significant improvement over $k = 50$ , whereas increasing $k$ from 200 to 1000 produces only a marginal gain. Interestingly, a larger $k$ seems to negatively impact sBLIMP. We speculate that the small vocabulary size ( $k = 50$ ) is adequate for distinguishing word-level changes in sentences, but insufficient for detecting subtle phonetic variations within words.
+
+# G. Scoring sWUGGY and sBLIMP
+
+Token-LM To obtain the scores for sWUGGY and sBLIMP for semantic token only models, we follow (Borsos et al., 2023) and use the log-likelihood returned by the model normalized by the sequence length.
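Length normalization simply divides the total log-likelihood by the number of tokens, so that the candidates in an sWUGGY/sBLIMP pair are compared independently of their lengths. A sketch (the token log-probabilities below are illustrative values):

```python
def sequence_score(token_logprobs):
    """Log-likelihood normalized by sequence length (as in Borsos et al.,
    2023); the pair member (e.g., real word vs. pseudo-word) with the
    higher score is selected."""
    return sum(token_logprobs) / len(token_logprobs)

real_word = [-1.0, -0.5, -0.8]          # higher (less negative) per-token scores
pseudo_word = [-2.0, -1.5, -2.2, -1.9]  # longer and less likely
chosen = "real" if sequence_score(real_word) > sequence_score(pseudo_word) else "pseudo"
```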
+
+Token-LM + Pitch, Token-LM + Acoustic Tokens, Proposed Methods For methods that have additional inputs other than the discrete tokens, we only use the model's log-likelihood of the discrete tokens. We do not use the log-likelihood of the $\mathbf{Z}^c$ , as we assume that the discrete tokens $\mathbf{Z}^d$ should contain all the information needed for sWUGGY and sBLIMP. In practice, we do observe that including the log-likelihood of the $\mathbf{Z}^c$ slightly lowers the score for our proposed method.
+
+Proposed - token Since there are no discrete tokens involved in Proposed - token, we directly use the log-likelihood of $\mathbf{Z}^c$ . The likelihood can be estimated using Equation 6.
+
+For Proposed and Proposed w.o. token, to ensure a deterministic outcome, we again use $\mu_{\phi}(\mathbf{X},t)$ from Equation 2 directly as $\mathbf{Z}^c$ , instead of sampling $\mathbf{Z}^c$ from $q_{\phi}(z_t\mid \mathbf{X})$ .
+
+# H. Side Experiments on inspecting learned features
+
+Speech Emotion Recognition We evaluate speech emotion recognition on the EmoV-DB (Adigwe et al., 2018) dataset. We follow a 9:1 split on training and testing for the dataset. The dataset contains five emotion categories: amused, angry, neutral, disgust, and sleepiness. We train a classifier with the same structure to predict emotion categories based on different features. The experiments are repeated 20 times to report the mean and $95\%$ confidence interval. From Table 9, we can observe that the variational features alone obtain significantly better performance compared to tokens, showcasing its capability of capturing paralinguistic information. Combining both tokens and variational features gives a slight improvement over using variational features alone.
+
+Speaker Identification For speaker identification, we evaluate the performance on the VCTK (Yamagishi et al., 2019) dataset, which consists of read English sentences, with 400 sentences each from 110 speakers. We again follow a 9:1 train-test split and repeat each run 20 times to report the mean and $95\%$ confidence interval. We also evaluate our embedding of utterance, which is designed to capture static utterance-level information (see Section 3.5). From Table 10, we can see that using tokens only results in poor speaker identification accuracy. With variational features, the classifier obtains improved accuracy. We attribute this improvement to the fact that speaking styles can be captured in the variational features to classify speakers. On the other hand, the utterance embedding outperforms the other features in this task. These results support our
+
+Table 10. Performance of speaker identification models trained on different features. The features are extracted from models pre-trained on Libri-light using our proposed method.
+
+| Method | Speaker Identification (ACC, %) |
+| --- | --- |
+| Tokens | 7.08 ± 0.40 |
+| Variational Features | 63.41 ± 0.43 |
+| Tokens + Variational Features | 63.13 ± 0.45 |
+| Utterance Embedding | 94.06 ± 0.32 |
+
+claim that the utterance encoder encodes global speaker information while variational features capture local paralinguistic attributes.
+
+# I. Conditional independence assumption of $z_{t}^{d}$ and $z_{t}^{c}$
+
+In general, $\mathbf{Z}^c$ and $\mathbf{Z}^d$ are not independent, since the language content can imply the paralinguistic information, and vice versa. However, our model assumes only conditional independence. Specifically, the past generations $\mathbf{Z}_{1:t-1}$ are first passed through the autoregressive transformer $\psi$ to produce the intermediate representation $o_t = \text{Transformer}_\psi(\mathbf{Z}_{1:t-1})$ . Then, two separate heads predict $z_t^c$ and $z_t^d$ based on $o_t$ . This framework assumes that the transformer can learn $o_t$ such that $z_t^c$ and $z_t^d$ become conditionally independent given $o_t$ . Given the transformer's modeling capacity, we believe it can extract the shared information $(o_t)$ between $z_t^c$ and $z_t^d$ from $\mathbf{Z}_{1:t-1}$ , while delegating the distinct information to the respective heads.
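The factorization described here can be sketched as two heads on a shared state; conditioned on $o_t$, the Gaussian prediction for $z_t^c$ and the categorical prediction for $z_t^d$ are independent by construction (the weights and sizes below are illustrative, not the paper's values):

```python
import numpy as np

def prior_heads(o_t, w_cont, w_disc):
    """Two separate heads on the shared transformer state o_t: a Gaussian
    head returning (mean, log-variance) for z_t^c and a categorical head
    returning token probabilities for z_t^d."""
    mu, logvar = np.split(o_t @ w_cont, 2)
    logits = o_t @ w_disc
    probs = np.exp(logits - logits.max())  # stable softmax
    return (mu, logvar), probs / probs.sum()

rng = np.random.default_rng(0)
o_t = rng.normal(size=16)  # shared intermediate state
(mu, logvar), probs = prior_heads(
    o_t, rng.normal(size=(16, 8)), rng.normal(size=(16, 200)))
```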
\ No newline at end of file
diff --git a/avariationalframeworkforimprovingnaturalnessingenerativespokenlanguagemodels/images.zip b/avariationalframeworkforimprovingnaturalnessingenerativespokenlanguagemodels/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..14dcbbe95e01fca3be862aa9dc3d4e9ca3517c13
--- /dev/null
+++ b/avariationalframeworkforimprovingnaturalnessingenerativespokenlanguagemodels/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ee5f84933ba0ddf19be6dec2625b751ff5354770f316d00ff642e70f7794d3fc
+size 620797
diff --git a/avariationalframeworkforimprovingnaturalnessingenerativespokenlanguagemodels/layout.json b/avariationalframeworkforimprovingnaturalnessingenerativespokenlanguagemodels/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..6f0bba835ccdb2759fddba89d3d4eb1f972a6aaa
--- /dev/null
+++ b/avariationalframeworkforimprovingnaturalnessingenerativespokenlanguagemodels/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b8f2afaa1bde77c662bb7c2103baf8e85882cbc573a32765a4869d9b44def0b9
+size 707867
diff --git a/avariationalinformationtheoreticapproachtooutofdistributiondetection/12c41e8d-269b-43d6-ad34-eb5f6865afe0_content_list.json b/avariationalinformationtheoreticapproachtooutofdistributiondetection/12c41e8d-269b-43d6-ad34-eb5f6865afe0_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..b87c32fb01338f9d96ccb8f7cd5c35c81c30bcd4
--- /dev/null
+++ b/avariationalinformationtheoreticapproachtooutofdistributiondetection/12c41e8d-269b-43d6-ad34-eb5f6865afe0_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:54244a0016af36cdf50715d8a80c02ff5d0d90ebc288b356c9057df9e833ac1f
+size 166131
diff --git a/avariationalinformationtheoreticapproachtooutofdistributiondetection/12c41e8d-269b-43d6-ad34-eb5f6865afe0_model.json b/avariationalinformationtheoreticapproachtooutofdistributiondetection/12c41e8d-269b-43d6-ad34-eb5f6865afe0_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..044fe8c03e499597883120f0663863132101b93f
--- /dev/null
+++ b/avariationalinformationtheoreticapproachtooutofdistributiondetection/12c41e8d-269b-43d6-ad34-eb5f6865afe0_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:75168753b5cf18ac17f06c4f08ef12b99294fa86e12eb680a790debb9e5273e6
+size 190953
diff --git a/avariationalinformationtheoreticapproachtooutofdistributiondetection/12c41e8d-269b-43d6-ad34-eb5f6865afe0_origin.pdf b/avariationalinformationtheoreticapproachtooutofdistributiondetection/12c41e8d-269b-43d6-ad34-eb5f6865afe0_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..fa7b70f9c8feab06cb72975911d2fc4965669b93
--- /dev/null
+++ b/avariationalinformationtheoreticapproachtooutofdistributiondetection/12c41e8d-269b-43d6-ad34-eb5f6865afe0_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:258f18b6af791d168f217d9acd220ad87e2417e45658be25b62c671d791290bf
+size 1917429
diff --git a/avariationalinformationtheoreticapproachtooutofdistributiondetection/full.md b/avariationalinformationtheoreticapproachtooutofdistributiondetection/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..a5e71d8dede2005f1ff38d9b7d1e0f4213a9e90d
--- /dev/null
+++ b/avariationalinformationtheoreticapproachtooutofdistributiondetection/full.md
@@ -0,0 +1,890 @@
+# A Variational Information Theoretic Approach to Out-of-Distribution Detection
+
+Sudepta Mondal $^{1}$ , Zhuolin Jiang $^{1}$ , Ganesh Sundaramoorthi $^{1}$
+
+# Abstract
+
+We present a theory for the construction of out-of-distribution (OOD) detection features for neural networks. We introduce random features for OOD detection through a novel information-theoretic loss functional consisting of two terms: the first, based on the KL divergence, separates the resulting in-distribution (ID) and OOD feature distributions, and the second, the Information Bottleneck, favors compressed features that retain the OOD information. We formulate a variational procedure to optimize the loss and obtain OOD features. Based on assumptions on OOD distributions, one can recover properties of existing OOD features, i.e., shaping functions. Furthermore, we show that our theory can predict a new shaping function that outperforms existing ones on OOD benchmarks. Our theory provides a general framework for constructing a variety of new features with clear explainability.
+
+# 1. Introduction
+
+Machine learning (ML) systems are typically designed under the assumption that the training and test sets are sampled from the same statistical distribution. However, this often does not hold in practice. For example, during deployment, test data may include previously unseen classes. In such cases, the ML system may produce incorrect results with high confidence (DeVries & Taylor, 2018). Therefore, it is crucial to develop methods that enable ML systems to detect out-of-distribution (OOD) data. Detecting OOD data allows users to be alerted of potentially unreliable predictions and enables the system to adapt accordingly. OOD detection has gained considerable attention recently (Yang et al., 2022).
+
+Recent state-of-the-art (SoA) (Sun et al., 2021; Djurisic et al., 2022; Ahn et al., 2023; Sun & Li, 2022; Zhao et al.,
+
+$^{1}$ RTX Technology Research Center (RTRC), East Hartford, CT 06118. Correspondence to: Ganesh Sundaramoorthi .
+
+Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+2024; Xu et al., 2023; Zhang et al., 2024) has focused on identifying descriptors of the data that can distinguish between OOD and ID data. In particular, feature-shaping functions have been shown to be promising. In feature shaping, features, usually from the penultimate layer of a pre-trained network, are input to a shaping function and then used to score incoming data. Examples of these approaches include ReAct (Sun et al., 2021), FS-OPT (Zhao et al., 2024), VRA (Xu et al., 2023), and ASH (Djurisic et al., 2022). While these approaches have advanced the SoA on several benchmarks, most are empirically driven, rule-based methods and may not generalize well to new, unseen datasets (Zhao et al., 2024). It is thus beneficial to understand under what conditions these methods work, so that we know when to employ one method over another. One way to do this is to develop a theory in which one can derive features for OOD detection (henceforth called OOD features for brevity) as a function of underlying assumptions (e.g., statistical distributions). This could offer a way to critically examine existing methods, and could also enable the systematic development of new features that may generalize better.
+
+Towards this goal, we develop a theory to formulate OOD features based on the underlying statistical distributions of ID and OOD data. We develop a novel information-theoretic loss functional, defined on the set of OOD features, whose optimization yields OOD features as a function of the underlying statistical distributions. Unlike current approaches, our OOD features are random and thus follow a statistical distribution; the mean value models the deterministic shaping features in the literature.
+
+Our loss aims to determine the OOD feature that maximally separates the resulting ID and OOD feature distributions through the Kullback-Leibler (KL) divergence. As separating distributions by itself is ill-posed, we propose a novel use of the Information Bottleneck (IB) (Tishby et al., 2000) as regularization. In our use, the IB seeks compressed features that preserve the information the data carries about OOD, aiming for a feature representation that contains only the information necessary for OOD detection. As this loss functional is defined on probability measures (representing the distribution of the OOD feature), it is an infinite-dimensional optimization problem, and thus we use the calculus of variations (Troutman, 2012) to derive the optimization procedure. Our theory offers an explanation of several techniques employed in SoA rule-based approaches, and suggests a new shaping function that outperforms other shaping functions in the SoA.
+
There have been recent theories for OOD detection (Zhao et al., 2024; Xu et al., 2023). These works have introduced the novel idea of formulating OOD features through a loss function rather than the empirically driven rule-based approaches of the past, and motivate our work. In contrast to the aforementioned works, our theory employs a novel information-theoretic loss function, which offers several advantages. Our theory shows how different assumptions on the OOD distribution lead to different OOD feature shaping approaches. It is also able to more accurately explain properties of several SoA rule-based approaches as arising from different underlying OOD distributions and different regularization (see the next section for a more detailed discussion).
+
+In summary, our contributions are as follows: 1. We introduce a novel theory and framework for deriving OOD features from neural networks. This involves the formulation of OOD features as a variational problem that formulates OOD features as random features through a novel loss functional that contains two terms, one that maximizes the KL divergence between the random feature under ID and OOD distributions and another term, the Information Bottleneck, which extracts the information from the data that is relevant for OOD detection. 2. We develop the techniques to optimize the loss functional using the calculus of variations, and specifically derive a computationally feasible algorithm in the one-dimensional data case. 3. Using our framework, we show how the OOD shaping functions change based on various data distributions. We relate the mean value of our OOD features to existing OOD shaping functions. 4. We introduce a novel piece-wise linear OOD feature shaping function predicted through our theory, and show that it leads to state-of-the-art results on OOD benchmarks.
+
+# 1.1. Related Work
+
We briefly review related work; the reader is referred to (Yang et al., 2022) for a survey. Post-hoc approaches to OOD detection, which are applied to pre-trained models without additional training, have focused on constructing scoring functions to differentiate OOD from in-distribution data, leveraging confidence scores (Hendrycks & Gimpel, 2018a; Zhang & Xiang, 2023; Liang et al., 2020), energy-based metrics (Liu et al., 2021; Wu et al., 2023; Elflein et al., 2021) and distance-based measures (Lee et al., 2018; Sun et al., 2022). For example, MSP (Hendrycks & Gimpel, 2018a) used the maximum softmax probability as a confidence score. ODIN (Liang et al., 2020) improved OOD detection by applying temperature scaling and adding small perturbations to input data before computing the maximum softmax probability. (Ren et al., 2019) proposes the likelihood ratio in place of raw likelihoods, which are known to perform poorly (Kirichenko et al., 2020). (Lee et al., 2018) leveraged the Mahalanobis distance to compute the distance between features and classes. KNN (Sun et al., 2022) uses a non-parametric approach. Energy methods (Liu et al., 2021) present an alternative to softmax scores by employing the Helmholtz free energy. Energy scoring has been adopted by several OOD feature-shaping approaches; feature shaping is the focus of our work.
+
Feature-shaping approaches to OOD detection: Several methods perform OOD detection by computing features of the output of layers of the neural network (Sun et al., 2021; Kong & Li, 2023; Djurisic et al., 2022; Fort et al., 2021b; Zhao et al., 2024) before they are input to a score. In ReAct (Sun et al., 2021), the penultimate layer outputs are processed element-wise by clipping large values. It is empirically noted that OOD data results in large spikes in activations, which are clipped to better separate the ID and OOD distributions. BFAct (Kong & Li, 2023) uses the Butterworth filter to smoothly approximate the clipping. ASH computes features by sparsifying intermediate outputs of the network, flooring small values to zero and passing larger values with possible scaling. DICE (Sun & Li, 2022) is another approach to sparsification. Unlike purely element-wise approaches, ASH then applies vector processing to the shaped feature before it is input to a score. VRA (Xu et al., 2023) and (Zhang et al.) derive element-wise shaping functions via an optimization approach.
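As a concrete illustration, the clipping of ReAct and percentile-based pruning in the spirit of ASH-P can be sketched as simple element-wise operations on a feature vector; this is a minimal sketch, and the threshold and percentile values below are illustrative rather than the tuned values from the original papers:

```python
import numpy as np

def react_clip(z: np.ndarray, c: float) -> np.ndarray:
    # ReAct-style shaping: clip unusually large activations at threshold c,
    # since OOD data tends to produce large spikes in activations.
    return np.minimum(z, c)

def ash_p_prune(z: np.ndarray, keep_pct: float) -> np.ndarray:
    # ASH-P-style shaping: sparsify by flooring all but the largest
    # keep_pct percent of activations to zero.
    thresh = np.percentile(z, 100.0 - keep_pct)
    return np.where(z >= thresh, z, 0.0)

z = np.array([0.1, 2.5, 0.3, 9.0, 1.2])
clipped = react_clip(z, c=2.0)           # spikes 2.5 and 9.0 are clipped to 2.0
pruned = ash_p_prune(z, keep_pct=40.0)   # only the two largest activations survive
```

Both operations act independently on each component, which is the element-wise structure our theory formalizes below.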
+
Optimization-based approaches for feature shaping: (Xu et al., 2023) formulates a loss function for deterministic OOD features that aims to separate the means of the ID and OOD feature distributions, with regularization added to keep the OOD feature near the identity through the L2 norm. (Zhao et al., 2024) analyzes a similar loss function but with point-wise rather than L2 regularization. They further offer simplifications to remove the reliance on the OOD distribution. These works have introduced the novel idea of formulating OOD features through a loss function. Our approach offers several advantages. Compared to (Zhao et al., 2024), we present a framework in which we can study the OOD feature as a function of the underlying OOD distribution. This exposes the implicit assumptions in several existing methods. In contrast, (Zhao et al., 2024) aims to remove dependence on the OOD distribution. Our results show that feature shaping can vary as a function of the underlying OOD distribution. Compared to (Zhao et al., 2024; Xu et al., 2023), our theory offers an explanation of qualitative properties of existing SoA methods. For instance, the clipping of large values in OOD features (of ReAct (Sun et al., 2021)) is associated with higher Information Bottleneck (IB) regularization, which is needed for noisier OOD datasets. Negative slope at large values in (Zhao et al., 2024; Xu et al., 2023) is associated with low IB regularization. Also, pruning of small feature values in (Xu et al., 2023; Djurisic et al., 2022) is associated with OOD distributions with heavier tails. See Section 4 for more technical details.
+
+# 2. Variational Formulation of OOD Features
+
We formulate OOD features as an optimization problem. For the sake of the derivation, we will assume in this section that the probability distributions of ID and OOD features from the network are given. In practice, the ID distribution can be estimated from training data. In Section 4, we will then study the OOD features under various distributions to show how features vary with the distribution, and to surface plausible assumptions made by existing feature shaping approaches. We will also make reasonable assumptions on the OOD distribution to derive new prescriptive OOD features for use in practice.
+
+Current OOD features in the OOD literature are computed by processing features from the neural network through a deterministic function (e.g., clipping). In contrast, we propose to generalize that approach by allowing for random functions. Let $Z$ denote the feature (a random variable) from the network (penultimate or intermediate layer feature). We denote by $\tilde{Z}$ the random OOD feature (a random variable) that we seek to determine. The distribution of $\tilde{Z}$ is denoted $p(\tilde{z} | z)$ . Thus, rather than solving for a deterministic function $f(Z)$ , we instead solve for a random feature $\tilde{Z}$ represented through $p(\tilde{z} | z)$ as in Information Theory (Cover, 1999). Thus, given a feature $z$ , the OOD feature is $\tilde{Z} \sim p(\tilde{z} | z)$ . We will primarily be concerned with the mean value of the distribution in this paper to relate to other feature shaping methods. Let $X$ be the random variable indicating the data (e.g., image, text), and $Y$ be the random variable indicating in- $(Y = 0)$ and out-of- $(Y = 1)$ distribution data. Note this forms a Markov Chain $Y \to X \to Z \to \tilde{Z}$ . The Markov Chain property is needed to construct one of the terms of our loss function, discussed next.
+
We propose a novel loss functional to design the OOD random feature. This loss functional is defined on $p(\tilde{z} | z)$ . The first term aims to separate the ID and OOD distributions of the random feature $\tilde{Z}$ . This is natural since we would like to use the OOD feature to separate the data into in- or out-of-distribution. To achieve this separation, we propose to maximize the symmetrized KL-divergence between $p(\tilde{z} | Y = 0)$ and $p(\tilde{z} | Y = 1)$ . Note that recent work (Zhao et al., 2024) also seeks to separate distributions, but differently from our approach: only the means of the distributions are separated. Also, note that $p(\tilde{z} | Y = y)$ is a function of $p(\tilde{z} | z)$ , the variable of optimization, and thus the KL term is a function of $p(\tilde{z} | z)$ . This term is defined as follows:
+
+$$
\begin{array}{l} D _ {K L} (p (\tilde {z} | z)) = D _ {K L} [ p (\tilde {z} | Y = 1) \mid | p (\tilde {z} | Y = 0) ] + \\ D _ {K L} [ p (\tilde {z} | Y = 0) \mid | p (\tilde {z} | Y = 1) ], \tag {1} \\ \end{array}
+$$
+
+where
+
+$$
D _ {K L} [ p \mid q ] = \int p (x) \log \frac {p (x)}{q (x)} d x, \quad \text {and} \tag {2}
+$$
+
+$$
+p (\tilde {z} | y) = \int p (\tilde {z} | z) p (z | y) d z. \tag {3}
+$$
+
+Note that we have used that $p(\tilde{z} | z, y) = p(\tilde{z} | z)$ as the feature is constructed the same for both ID and OOD data. These equations show the dependence of the OOD feature distributions on $p(\tilde{z} | z)$ . The KL divergence is a natural choice for separating distributions and a standard information-theoretic quantity.
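For concreteness, (1)-(3) can be evaluated numerically on a grid. The sketch below assumes Gaussian ID/OOD feature distributions and a Gaussian conditional $p(\tilde{z}|z)$ centered at the identity; all parameter values are illustrative, not taken from the paper:

```python
import numpy as np

z = np.linspace(-4.0, 4.0, 401)    # grid over the network feature z
zt = np.linspace(-6.0, 6.0, 601)   # grid over the OOD feature z~
dz, dzt = z[1] - z[0], zt[1] - zt[0]

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

p_z_y0 = gauss(z, -0.5, 0.5)                   # p(z|Y=0), ID
p_z_y1 = gauss(z, 0.5, 0.5)                    # p(z|Y=1), OOD
p_zt_z = gauss(zt[:, None], z[None, :], 0.2)   # p(z~|z): mean = identity map

# Equation (3): p(z~|y) = integral of p(z~|z) p(z|y) dz
p_zt_y0 = p_zt_z @ p_z_y0 * dz
p_zt_y1 = p_zt_z @ p_z_y1 * dz

# Equations (1)-(2): symmetrized KL divergence between the feature distributions
def kl(p, q, dx, eps=1e-300):
    return float(np.sum(p * np.log((p + eps) / (q + eps))) * dx)

d_sym = kl(p_zt_y1, p_zt_y0, dzt) + kl(p_zt_y0, p_zt_y1, dzt)
```

With these parameters both feature distributions are Gaussians with variance $0.25 + 0.04$, so the symmetrized KL is analytically $(\Delta\mu)^2/\sigma^2 \approx 3.45$, which the numerical integral reproduces.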
+
+Unconstrained maximization of KL divergence is ill-posed, and regularization is needed. Also, it is possible to reduce the dimensions of $Z$ to a few dimensions that are maximally separated but remove information necessary to fully characterize OOD data. Therefore, we need to ensure that $\tilde{Z}$ contains all the information relevant to accurately determine OOD data. With these considerations, we aim to compress the dimensions of $Z$ to form a simple/compact feature, but in a way that preserves the OOD information (contained in the variable $Y$ ). To achieve this, we adapt the Information Bottleneck (Tishby et al., 2000). In the Information Bottleneck method, the quantization of a random variable $X$ is considered to form the random variable $T$ in such a way to preserve information about a random variable $Y$ , where $Y$ forms a Markov Chain with $X$ . A functional is formulated such that when minimized forms $T$ . This is precisely the functional we would like to determine $\tilde{Z}$ (where $\tilde{Z}$ is analogous to $T$ and $Z$ is analogous to $X$ ). The second term of our functional, following from (Tishby et al., 2000), is
+
+$$
+\operatorname {I B} (p (\tilde {z} | z)) = I (Z; \tilde {Z}) - \beta I (\tilde {Z}; Y), \tag {4}
+$$
+
+where $I$ indicates mutual information, and $\beta > 0$ is a hyperparameter. The first term of (4) is the compression term that measures the mutual information between $Z$ and $\tilde{Z}$ ; this term is minimized and thus the term favors $\tilde{Z}$ to be a compressed version of $Z$ . The second term maximizes the mutual information between $\tilde{Z}$ and $Y$ , and thus favors $\tilde{Z}$ to retain OOD relevant information.
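On a discrete toy problem, the two terms of (4) can be computed exactly. The sketch below uses an assumed joint distribution over $(Y, Z, \tilde{Z})$ respecting the Markov chain $Y \to X \to Z \to \tilde{Z}$; all probability values are illustrative:

```python
import numpy as np

p_y = np.array([0.5, 0.5])                   # p(y), ID vs OOD
p_z_given_y = np.array([[0.7, 0.2, 0.1],     # p(z|Y=0)
                        [0.1, 0.2, 0.7]])    # p(z|Y=1)
p_zt_given_z = np.array([[0.9, 0.1],         # rows: z values, cols: z~ values
                         [0.5, 0.5],
                         [0.1, 0.9]])

p_z = p_y @ p_z_given_y                      # marginal p(z)
p_zt = p_z @ p_zt_given_z                    # marginal p(z~)
p_zt_given_y = p_z_given_y @ p_zt_given_z    # follows from the Markov chain

def mi_from_cond(p_x, p_t_given_x, p_t):
    # I(X;T) = sum_x p(x) sum_t p(t|x) log( p(t|x) / p(t) )
    return float(np.sum(p_x[:, None] * p_t_given_x * np.log(p_t_given_x / p_t)))

beta = 2.0
mi_z_zt = mi_from_cond(p_z, p_zt_given_z, p_zt)   # compression term I(Z; Z~)
mi_y_zt = mi_from_cond(p_y, p_zt_given_y, p_zt)   # relevance term I(Z~; Y)
ib_value = mi_z_zt - beta * mi_y_zt               # equation (4)
```

A useful sanity check is the data processing inequality: since $Y \to Z \to \tilde{Z}$, we must have $I(\tilde{Z}; Y) \leq I(Z; \tilde{Z})$, which holds for these numbers.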
+
+Thus, our combined loss functional is
+
+$$
L (p (\tilde {z} | z)) = - D _ {K L} (p (\tilde {z} | z)) + \alpha \operatorname {I B} (p (\tilde {z} | z)), \tag {5}
+$$
+
which is minimized to determine the conditional distribution of $\tilde{Z}$ , $p(\tilde{z} | z)$ , where $\alpha > 0$ is a hyperparameter. Our goal is to determine the optimal $p(\tilde{z} | z)$ , which can then be used with a score function to determine whether data $z$ is OOD or not. Note that we are seeking to optimize over the set of continuous probability distributions, which forms an infinite-dimensional optimization problem. To gain intuition into the loss functional above, in particular to see that it forms a well-posed problem and that IB regularization is needed, we analyze a simple case with 1D Gaussian distributions that results in a closed-form solution in Appendix A. We verify in the next section that the loss functional, for more complex distributions/features, yields well-posed problems and hence results in an optimal solution.
+
+# 3. Optimization for OOD Features
+
In this section, we discuss the optimization of the loss functional (5). The loss functional is defined on continuous probability density functions $p(\tilde{z} | z)$ , where $z, \tilde{z}$ are continuous. This is an infinite-dimensional optimization problem, and to find the optimal feature one can use the calculus of variations to determine the gradient of $L$ (Troutman, 2012). Setting the gradient to zero and solving for the probability distribution that satisfies the equation gives the necessary conditions for the optimizer. For our loss, this does not yield a closed-form solution, so we instead use the gradient to perform gradient descent.
+
+# 3.1. Loss Under Element-wise Independence of Feature
+
Because formulating numerical optimization for general multi-dimensional distributions is difficult, we make some simplifications to gain insight into our theory and approach. Even with these simplifications, we will show that the approach can explain popular approaches in the literature and lead to a new state-of-the-art approach. Our first simplification (which is similar to element-wise processing assumptions made in existing methods, e.g., (Sun et al., 2021; Zhao et al., 2024)) is to assume that the conditional feature distribution $p(\tilde{z} | z)$ can be factorized as $p(\tilde{z} | z) = \prod_{i=1}^{n} p(\tilde{z}_i | z)$ , which assumes conditional independence of the components of $\tilde{z}$ and that each component has the same conditional distribution. We also assume that $p(z | y) = \prod_{i=1}^{n} p(z_i | y)$ , that is, the components of $z$ are independent conditioned on $y$ . Under these assumptions, the optimization of the loss functional (5) reduces to several independent optimization problems, each defined on the one-dimensional probability distribution of a feature component (see Appendix B for details):
+
+$$
+\underset {p \left(\tilde {z} _ {i} \mid z _ {i}\right)} {\arg \min } L _ {i} \left(p \left(\tilde {z} _ {i} \mid z _ {i}\right)\right), \quad i \in \{1, \dots , n \}, \tag {6}
+$$
+
+where
+
+$$
+L _ {i} \left(p \left(\tilde {z} _ {i} \mid z _ {i}\right)\right) = - D _ {K L} \left[ p \left(\tilde {z} _ {i} \mid 0\right) \mid \mid p \left(\tilde {z} _ {i} \mid 1\right) \right] -
+$$
+
+$$
+D _ {K L} [ p (\tilde {z} _ {i} | 1) \mid | p (\tilde {z} _ {i} | 0) ] + \alpha \left[ I \left(\tilde {Z} _ {i}; Z _ {i}\right) - \beta I \left(\tilde {Z} _ {i}; Y\right) \right]. \tag {7}
+$$
+
+Thus, we next provide an optimization procedure for the loss functionals above, defined on one-dimensional distributions. For simplicity of notation, we now omit the $i$ subscripts.
+
+# 3.2. Gradient of Loss Functional
+
+We will use gradient descent to optimize the loss functional. Since the problem is non-convex, gradient descent is a natural choice. Given the infinite dimensional problem, we use the calculus of variations to compute the gradient.
+
+We perform the computation for the gradient of (5) in Appendix C and summarize the result in the following theorem:
+
+Theorem 3.1 (Gradient of Loss). The gradient of $D_{KL}(p(\tilde{z} |z))$ (1) with respect to $p(\tilde{z} |z)$ is given (up to an additive function of $z$ ) by
+
+$$
+\begin{array}{l} \nabla_ {p (\tilde {z} | z)} D _ {K L} = p (z | 0) \cdot \left[ l (z) \log l (\tilde {z}) - l (\tilde {z}) \right] \\ - p (z | 1) \cdot \left[ l (z) ^ {- 1} \log l (\tilde {z}) + l (\tilde {z}) ^ {- 1} \right], \tag {8} \\ \end{array}
+$$
+
+where $p(z|y) = p(z|Y = y)$ , $p(\tilde{z} |y) = p(\tilde{z} |Y = y)$ and
+
+$$
l (z) = \frac {p (z | 1)}{p (z | 0)}, \quad \text {and} \quad l (\tilde {z}) = \frac {p (\tilde {z} | 1)}{p (\tilde {z} | 0)}. \tag {9}
+$$
+
+The gradient of $IB(p(\tilde{z} |z))$ (4) is given by $\nabla_{p(\tilde{z} |z)}IB =$
+
+$$
+\sum_ {y \in \{0, 1 \}} p (y) p (z | y) \left[ \log \frac {p (\tilde {z} | z)}{p (\tilde {z})} - \beta \log \frac {p (\tilde {z} | y)}{p (\tilde {z})} \right]. \tag {10}
+$$
+
+The gradient of the full loss $L$ in (5) is then
+
+$$
\nabla_ {p (\tilde {z} | z)} L = - \nabla_ {p (\tilde {z} | z)} D _ {K L} + \alpha \nabla_ {p (\tilde {z} | z)} I B. \tag {11}
+$$
+
+To simplify further and study a model that more closely resembles OOD feature shaping functions in the literature, we make the following assumption:
+
+$$
+p (\tilde {z} | z) \sim \mathcal {N} (\mu (z), \sigma_ {c} (z)), \tag {12}
+$$
+
where $\mathcal{N}$ indicates the Gaussian distribution, $\tilde{z}, z \in \mathbb{R}$ and $\mu, \sigma_c: \mathbb{R} \to \mathbb{R}$ are the mean/standard deviation. We use the subscript $c$ to denote "conditional" to distinguish it from other sigmas used below. We can think of this model as random perturbations of a deterministic feature shaping function $\mu$ . The OOD feature's mean value for a given network feature $z$ is $\mu(z)$ . The closer $\sigma_c$ is to zero, the closer the approach is to deterministic feature shaping. Note that if the optimization were to result in $\sigma_c = 0$ , then deterministic functions would be optimal. In our numerous simulations, this does not happen, and thus random OOD features appear to outperform deterministic ones. We now compute the gradients with respect to $\mu$ and $\sigma_c$ :
+
+Theorem 3.2 (Loss Gradient Under Gaussian Random OOD Feature (12)). The gradient of the loss (5) under (12) is
+
+$$
+\nabla_ {\mu} L (z) = \int \frac {\nabla_ {p (\tilde {z} | z)} L (\tilde {z} , z)}{\sigma_ {c} ^ {2} (z)} [ \tilde {z} - \mu (z) ] p (\tilde {z} | z) d \tilde {z} \tag {13}
+$$
+
+$$
+\nabla_ {\sigma_ {c}} L (z) = \int \frac {\nabla_ {p (\tilde {z} | z)} L (\tilde {z} , z)}{\sigma_ {c} (z)} \left[ \frac {(\tilde {z} - \mu (z)) ^ {2}}{\sigma_ {c} ^ {2} (z)} - 1 \right] p (\tilde {z} | z) \mathrm {d} \tilde {z}, \tag {14}
+$$
+
+where $\nabla_{p(\tilde{z}|z)}L$ is given in (11).
+
+# 3.3. Numerical Optimization of Loss
+
+We implement a gradient descent algorithm using a discretization of the continuum equations above. We choose a uniform discretization of the space of $z$ , i.e., $\{z_i\}_i \subset \mathbb{R}$ . We represent $\mu$ and $\sigma_c$ through their samples: $\mu_i = \mu(z_i)$ and $\sigma_{c,i} = \sigma_c(z_i)$ . We specify formulas for $p(\tilde{z})$ and $p(\tilde{z}|y)$ under the discretization, which will be required in the computation of the approximation to the gradient:
+
+$$
+\begin{array}{l} p (\tilde {z} | y) = \sum_ {i} p (\tilde {z} | z _ {i}) p (z _ {i} | y) \Delta z _ {i} \\ = \sum_ {i} \frac {1}{\sigma_ {c , i}} G _ {\sigma_ {c, i}} (\tilde {z} - \mu_ {i}) p (z _ {i} | y) \Delta z _ {i} \tag {15} \\ \end{array}
+$$
+
+$$
+p (\tilde {z}) = \sum_ {y} p (y) p (\tilde {z} | y). \tag {16}
+$$
+
+Thus, $p(\tilde{z} |y)$ is approximated as a mixture of Gaussians. The gradient descent is shown in Algorithm 1, which assumes ID and OOD distributions and determines the Gaussian random feature parameterized through $\mu$ and $\sigma_c$ .
+
The complexity for this optimization (which is done off-line in training) is $\mathcal{O}(NMK)$ where $N$ is the number of samples of $p(z|y)$ , $M$ is the number of samples of $p(\tilde{z} | z)$ and $K$ is the number of gradient descent iterations. On a single A100 GPU, this took less than a minute.
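A minimal NumPy sketch of Algorithm 1 under the Gaussian setup of Section 4 follows. For simplicity it uses one shared $\tilde{z}$ grid for all $z_i$ rather than the per-sample ranges of the algorithm, and clips likelihood ratios for numerical stability; the distribution parameters, grid sizes, and step size are illustrative assumptions:

```python
import numpy as np

def gauss(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

z = np.linspace(-2.0, 2.0, 81)     # samples z_i
zt = np.linspace(-4.0, 4.0, 161)   # shared z~ grid (replaces per-i ranges)
dz, dzt = z[1] - z[0], zt[1] - zt[0]
p_z = np.stack([gauss(z, -0.5, 0.5), gauss(z, 0.5, 0.5)])  # p(z|y), y = 0, 1
p_y = np.array([0.5, 0.5])
alpha, beta, eta = 1.0, 10.0, 5e-4

mu = z.copy()                      # initialize mu_i = z_i (identity)
sig = 0.3 * np.ones_like(z)        # initialize sigma_{c,i} = const

clip = lambda r: np.clip(r, 1e-6, 1e6)   # stabilize ratios away from 0/inf
for _ in range(100):
    p_zt_z = gauss(zt[None, :], mu[:, None], sig[:, None])   # p(z~|z_i)
    p_zt_y = (p_z[:, :, None] * p_zt_z[None]).sum(1) * dz    # eq. (15)
    p_zt = p_y @ p_zt_y                                      # eq. (16)
    l_z = clip(p_z[1] / clip(p_z[0]))[:, None]               # l(z_i)
    l_zt = clip(p_zt_y[1] / clip(p_zt_y[0]))[None, :]        # l(z~)
    # Gradient of the KL term, eq. (8)
    g_kl = (p_z[0][:, None] * (l_z * np.log(l_zt) - l_zt)
            - p_z[1][:, None] * (np.log(l_zt) / l_z + 1.0 / l_zt))
    # Gradient of the IB term, eq. (10)
    g_ib = sum(p_y[y] * p_z[y][:, None]
               * (np.log(clip(p_zt_z / clip(p_zt[None, :])))
                  - beta * np.log(clip(p_zt_y[y][None, :] / clip(p_zt[None, :]))))
               for y in (0, 1))
    g = -g_kl + alpha * g_ib                                 # eq. (11)
    w = g * p_zt_z * dzt
    diff = zt[None, :] - mu[:, None]
    grad_mu = (w * diff).sum(1) / sig**2                           # eq. (13)
    grad_sig = (w * (diff**2 / sig[:, None]**2 - 1.0)).sum(1) / sig  # eq. (14)
    mu -= eta * grad_mu
    sig = np.maximum(sig - eta * grad_sig, 1e-3)   # keep sigma_c positive
```

After a few hundred iterations the mean `mu` deviates from the identity, consistent with the nontrivial shaping functions reported in Section 4, though this sketch is a qualitative illustration rather than the paper's tuned implementation.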
+
+# 4. A Study of OOD Features vs Distribution
+
In this section, we study the resulting OOD features based on various choices of distributions using the algorithm in the previous section, and relate these choices to OOD feature shaping techniques present in the literature. Note that while in practice the OOD distribution is unknown, our theory nevertheless suggests the underlying distributional assumptions of existing methods. This is useful for understanding when these methods will generalize as a function of the type of OOD data. We will also derive a generic OOD shaping function, encompassing properties of several distributions, and show in the next section that this shaping function can lead to SoA performance. Note that in practice we have observed that distributions from OOD datasets exhibit similarities to the distributions studied; see Appendix H. We provide further rationale for studying these distributions below.
+
For this study, we adopt the assumptions of Section 3, i.e., that the OOD features are element-wise independent and that the OOD feature is Gaussian, i.e., $p(\tilde{z} |z)\sim \mathcal{N}(\mu (z),\sigma_c(z))$ . We will further assume that the ID distribution is Gaussian, i.e., $p(z|0)\sim \mathcal{N}(\mu_0,\sigma)$ . We make this assumption for simplicity, and because features in network layers can be approximated well by a Gaussian, as evidenced empirically in (Xu et al., 2023). We will study three OOD distributions next: Gaussian, Laplacian, and a distribution we propose based on the Inverse Gaussian.
+
+# Algorithm 1 1D Gaussian Random Feature Computation
+
+Input: IN/OOD Distributions $p(z|y)$ , $\alpha$ , $\beta$ and learning rate $\eta$
+
+Output: Converged mean $\mu_{i}$ , std $\sigma_{c,i}$ for each $i$
+
Initialize: $\mu_{i} = z_{i}$ , $\sigma_{c,i} = \mathrm{const}$
+
+for $n$ iterations do
+
+for $z_{i}$ do
+
Compute a discretization of $\tilde{z}$ in its likely range: $\tilde{z}_j^i\in$ $(\mu_{i} - k\sigma_{c,i},\mu_{i} + k\sigma_{c,i})$ where $k\geq 3$
+
+for $\tilde{z}_j^i$ do
+
+$$
\text {Compute } \nabla_ {p (\tilde {z} | z)} L (\tilde {z} _ {j} ^ {i}, z _ {i}) =
+$$
+
+$$
+p (z _ {i} | 0) \cdot \left[ l (z _ {i}) \log l (\tilde {z} _ {j} ^ {i}) - l (\tilde {z} _ {j} ^ {i}) \right] -
+$$
+
+$$
+p \left(z _ {i} \mid 1\right) \cdot \left[ l \left(z _ {i}\right) ^ {- 1} \log l \left(\tilde {z} _ {j} ^ {i}\right) + l \left(\tilde {z} _ {j} ^ {i}\right) ^ {- 1} \right] +
+$$
+
+$$
+\alpha \sum_ {y \in \{0, 1 \}} p (y) p (z _ {i} | y) \left[ \log \frac {p (\tilde {z} _ {j} ^ {i} | z _ {i})}{p (\tilde {z} _ {j} ^ {i})} - \beta \log \frac {p (\tilde {z} _ {j} ^ {i} | y)}{p (\tilde {z} _ {j} ^ {i})} \right]
+$$
+
+end for
+
+Compute $\nabla_{\mu}L(z_i) =$
+
+$$
\sum_ {j} \frac {\nabla_ {p (\tilde {z} | z)} L (\tilde {z} _ {j} ^ {i} , z _ {i})}{\sigma_ {c , i} ^ {2}} (\tilde {z} _ {j} ^ {i} - \mu_ {i}) p (\tilde {z} _ {j} ^ {i} | z _ {i}) \Delta \tilde {z} ^ {i}
+$$
+
+Compute $\nabla_{\sigma_c}L(z_i) =$
+
+$$
\sum_ {j} \frac {\nabla_ {p (\tilde {z} | z)} L (\tilde {z} _ {j} ^ {i} , z _ {i})}{\sigma_ {c , i}} \left[ \frac {(\tilde {z} _ {j} ^ {i} - \mu_ {i}) ^ {2}}{\sigma_ {c , i} ^ {2}} - 1 \right] p (\tilde {z} _ {j} ^ {i} | z _ {i}) \Delta \tilde {z} ^ {i}
+$$
+
+end for
+
+for $z_{i}$ do
+
+$$
+\mu_ {i} \leftarrow \mu_ {i} - \eta \nabla_ {\mu} L (z _ {i})
+$$
+
+$$
+\sigma_ {c, i} \leftarrow \sigma_ {c, i} - \eta \nabla_ {\sigma_ {c}} L (z _ {i})
+$$
+
+end for
+
+end for
+
Gaussian OOD: First, we study the case of a Gaussian for the OOD distribution, as it is the most common distribution in probabilistic analysis. Let $p(z|1) \sim \mathcal{N}(\mu_1, \sigma)$ . For illustration, we choose $\mu_0 = -0.5$ , $\mu_1 = 0.5$ , $\sigma = 0.5$ , $\alpha = 1.0$ , and $\beta = 10$ . The converged result of the optimization for $\mu$ and $\sigma_c$ is shown in Figure 1 (positive part). No feature shaping would mean that $\mu$ is the identity map and $\sigma_c = 0$ ; this solution is plotted in dashed blue. Notice that the optimal mean value is not the identity. The mean indicates that the feature has positive slope for small values of $|z|$ (similar to (Sun et al., 2021)) and negative slope for large values of $|z|$ (similar to (Zhao et al., 2024)). In Appendix E.2, we show that under different distribution parameters, one can get negative values for small $|z|$ as in (Zhao et al., 2024). Interestingly, the optimal standard deviation $\sigma_c(z)$ is non-zero, indicating that randomness in this case is beneficial in terms of the loss. In fact, in all of our simulations across distributions and their hyperparameters, we have observed non-zero standard deviation.

Figure 1. OOD Gaussian Feature Under Gaussian ID/OOD Distributions. Mean (left), standard deviation (right) of the feature.
+
In Figure 2(a), we show the effects of the Information Bottleneck weight $\alpha$ . The impact of $\beta$ on the shape is studied in Appendix E.1. For larger $\alpha$ (higher regularization), the mean of the feature becomes flat for large $|z|$ , similar to the clipping used in popular methods (Sun et al., 2021; Xu et al., 2023). See Figure 3 for a plot of existing feature shaping methods. Even under the simplifying Gaussian assumptions, we see that our shaping functions have properties of existing methods.
+
Laplacian OOD Distribution: Next, we consider the Laplacian distribution for the OOD distribution, i.e., $p(z|1) = \frac{1}{2b}\exp \left(-|z - \mu_1| / b\right)$ . The intuition for choosing this distribution is that it has a heavier tail than the Gaussian and is thus better able to model outliers, and it seems reasonable that OOD data would be considered outliers. We show the resulting mean of the feature in Figure 2(b). We notice that when $|z|$ is small, the mean OOD feature is zero, indicating a suppression of low values (this is used in VRA (Xu et al., 2023) and ASH (Djurisic et al., 2022)). Note that this is consistent across $\alpha$ values; larger values increase the suppression region. We also see that large values of $|z|$ are being clipped or suppressed (with larger $\alpha$ ), approaching zero slope. The jump discontinuity is also present in VRA and ASH. There also appears to be a positively sloped linear function for intermediate values of $|z|$ , similar to VRA.
+
Inverse Gaussian OOD Distribution: Next, we consider a distribution that may hold generically for OOD data and can be used in the absence of prior information about the OOD distribution. If the ID distribution is Gaussian, we can formulate a distribution that has high probability outside the domain where the ID distribution has high probability. To this end, one can consider a variant of the Inverse Gaussian defined as follows. Let $d(z) = |z - \mu_0| / \sigma$ where $\mu_0, \sigma$ are the mean and standard deviation of the ID distribution. This is a distance to the ID distribution. We would like the OOD distribution to have high probability when $d(z)$ is large, and thus we consider $p(z|1) \sim IG(d(z); \mu_1, \lambda)$ where IG denotes the inverse Gaussian distribution:
+
+$$
+p _ {I G} (x; \mu , \lambda) = \sqrt {\frac {\lambda}{2 \pi x ^ {3}}} \exp \left(- \frac {\lambda (x - \mu) ^ {2}}{2 \mu^ {2} x}\right), \tag {17}
+$$
+
which is plotted in Appendix D. Note that there is some overlap of this distribution with the ID Gaussian. As noted in Figure 2(c), the Inverse Gaussian distribution results in a qualitatively similar OOD feature to the Laplacian distribution: suppression of small $|z|$ values, clipping/flattening of large $|z|$ values, and a positively sloped linear function for intermediate values of $|z|$ . For large $\alpha$ we have flattening similar to clipping, while smaller $\alpha$ results in a negative slope, similar to the other distributions.
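The proposed OOD model can be sketched directly from (17); the parameter values below mirror those in the caption of Figure 2, and the resulting curve should be read as an unnormalized shape over $z$ (the change of variables through $d(z)$ is not accounted for):

```python
import numpy as np

def p_ig(x, mu, lam):
    # Inverse Gaussian pdf, equation (17); defined for x > 0.
    return np.sqrt(lam / (2.0 * np.pi * x**3)) * np.exp(
        -lam * (x - mu) ** 2 / (2.0 * mu**2 * x))

def p_ood(z, mu0=0.0, sigma=0.66, mu1=3.3, lam=15.0):
    # OOD model: IG evaluated at the distance d(z) = |z - mu0| / sigma,
    # so probability mass concentrates away from the ID distribution.
    d = np.maximum(np.abs(z - mu0) / sigma, 1e-9)
    return p_ig(d, mu1, lam)
```

Near the ID mean the density vanishes (as $d \to 0$ the exponential in (17) goes to zero), while values several ID standard deviations away receive high density, which is the intended behavior.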
+
+We summarize key observations. Clipping as done in ReAct seems to be a universal property across all the OOD distributions for large regularization. In the next section we show that for noisier OOD datasets larger regularization is beneficial, and so the clipping mitigates noise, which is noted in (Sun et al., 2021). Next, the OOD distributions that are heavier tailed result in suppression (zeroing out) of low $|z|$ values. This is consistent with the VRA and ASH methods. All distributions yield a positively sloped region for intermediate values of $|z|$ . Our results suggest ReAct and FS-OPT may be operating under an implicit Gaussian OOD assumption for high regularization (ReAct) and low regularization (FS-OPT). VRA and ASH seem to implicitly assume heavier tailed OOD distributions.
+
Piecewise Linear Shaping: The above mean shaping functions (from the Gaussian, Laplace and Inverse Gaussian OOD distributions) all approximately fit in a particular piecewise linear function family, as shown in Figure 4, where $z_{1}, z_{2}, y_{0}, y_{1a}, y_{1b}, m_{1}, m_{2}$ are hyperparameters. Therefore, in practice, if the distribution is unknown, one can choose this family of shaping functions, which implicitly assumes any of the aforementioned three distributions. Because many existing SoA methods implicitly make one of the three distributional assumptions, this family makes more general distributional assumptions than existing SoA, thus potentially offering generalization to more OOD datasets while not being so general as to lose discriminability. In the experimental section, we explore this family of shaping functions and show that we can obtain SoA results.
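A hypothetical instance of this family can be sketched as follows; Figure 4 defines the exact parameterization via $z_1, z_2, y_0, y_{1a}, y_{1b}, m_1, m_2$, and the sketch below fixes $y_{1b}$ by continuity of the middle segment, which is one of several reasonable choices (all default values are illustrative):

```python
import numpy as np

def piecewise_shape(z, z1=0.5, z2=1.5, y0=0.0, y1a=0.3, m1=1.0, m2=0.0):
    # Three regimes observed across the studied OOD distributions:
    #   z <  z1  : constant y0 (suppression of small activations)
    #   z1<=z<z2 : line from y1a with slope m1 (positively sloped mid-range)
    #   z >= z2  : line from y1b with slope m2 (clipping if m2=0, decay if m2<0)
    y1b = y1a + m1 * (z2 - z1)   # continuity with the middle segment at z2
    out = np.full_like(z, y0, dtype=float)
    mid = (z >= z1) & (z < z2)
    out[mid] = y1a + m1 * (z[mid] - z1)
    hi = z >= z2
    out[hi] = y1b + m2 * (z[hi] - z2)
    return out
```

Setting $m_2 = 0$ recovers ReAct-style clipping at large values, while $m_2 < 0$ recovers the negative slopes seen under low IB regularization.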
+
+# 5. Implementation of New OOD Detection
+
+In this section, we provide the implementation details for our new approaches to OOD detection, using the simplifying assumptions presented in Section 3. We provide the details for two cases where the ID/OOD distributions are known
+
+
Figure 2. The mean of the OOD Gaussian Feature under the Gaussian (left), Laplace (middle) and Inverse Gaussian (right) OOD distributions for varying weights on the Information Bottleneck, $\alpha$. For all plots, $\beta = 10$. For the Gaussian case, $p(z|0) \sim \mathcal{N}(-0.5, 0.5)$ and $p(z|1) \sim \mathcal{N}(0.5, 0.5)$. For the Laplace case, $p(z|0) \sim \mathcal{N}(0, 0.66)$ and $p(z|1) \sim \mathrm{Lap}(0, 1)$. In the Inverse Gaussian case, $p(z|0) \sim \mathcal{N}(0, 0.66)$ and $p(z|1) \sim IG(d(z); 3.3, 15)$. For visualization purposes, we only show the positive part.
+
+
+Figure 3. Plot of existing feature shaping functions from SoA methods: ReAct (Sun et al., 2021), VRA (Xu et al., 2023), FS-Opt (Zhao et al., 2024), and variants of ASH (Djurisic et al., 2022).
+
+
+Figure 4. A piece-wise linear family of functions that approximately encompasses the mean value of our OOD feature shaping functions across OOD distributions examined in this paper.
+
and unknown. In the latter case, we apply the piecewise family of feature shaping derived in the previous section (Figure 4). We assume that a validation set of ID and OOD data is available (as in the existing literature); the choices are given in our experiments section. A trained neural network is also provided. The network feature vectors and their ID/OOD labels for the validation set are $\{\mathbf{z}_i, y_i\}$ . Consistent with our simplifying assumptions and the literature, each component of the network feature $\mathbf{z}$ is processed independently, and for this paper, all components are processed by the same shaping function $\mu$ .
+
Off-line Training: When the forms of the ID and OOD distributions are known, the hyperparameters of the distributions are estimated from the validation set (rasterizing the vector data). Using the fitted distributions, Algorithm 1 is run to compute the optimal $\mu^{*},\sigma_{c}^{*}$ . When the distributions are unknown, we assume that the feature shape fits the piecewise family in the previous section (i.e., the OOD distributions are one of Gaussian, Laplacian or IG). The hyperparameters for the piecewise family are tuned by, e.g., minimizing the false positive rate at 95% true positive rate (FPR95) on the validation set; this gives the optimal shaping function $\mu^{*}$ .
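The FPR95 criterion used for tuning can be sketched as follows, assuming the convention that a higher score means more ID-like; the small grid-search helper and its `score_fn` argument are illustrative, not part of the paper's implementation:

```python
import numpy as np

def fpr_at_95_tpr(scores_id, scores_ood):
    # Pick the threshold that retains 95% of ID samples, then measure the
    # fraction of OOD samples that (wrongly) pass the same threshold.
    thresh = np.percentile(scores_id, 5.0)
    return float(np.mean(scores_ood >= thresh))

def tune_param(score_fn, feats_id, feats_ood, grid):
    # Illustrative tuning loop over one hyperparameter of a shaping function;
    # score_fn(feats, c) returns per-sample scores under hyperparameter c.
    return min(grid, key=lambda c: fpr_at_95_tpr(score_fn(feats_id, c),
                                                 score_fn(feats_ood, c)))
```

The same loop extends to the full hyperparameter set of the piecewise family by searching over tuples instead of a scalar grid.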
+
+Online Operation: During operation, the network feature $\mathbf{z}$ is extracted, and then shaped via the function $\tilde{\mathbf{z}} = \mu^{*}(\mathbf{z}) = (\mu^{*}(z_{1}),\dots,\mu^{*}(z_{n}))$ . Subsequently, $\tilde{\mathbf{z}}$ is input to a scoring function (e.g., in this paper, energy score (Liu et al., 2021)), which is then thresholded to produce the ID/OOD label.
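The online path can be sketched end to end as follows; here `shape_fn`, the final-layer weights `W`, `b`, and the threshold are hypothetical placeholders, and the score is the negative Helmholtz free energy used as the energy score of (Liu et al., 2021):

```python
import numpy as np

def energy_score(logits, T=1.0):
    # Negative free energy: T * logsumexp(logits / T); higher = more ID-like.
    m = logits.max(axis=-1)   # max-shift for numerical stability
    return m + T * np.log(np.exp((logits - m[..., None]) / T).sum(axis=-1))

def detect_ood(z, shape_fn, W, b, threshold, T=1.0):
    # Shape the penultimate feature elementwise, re-apply the final linear
    # layer, score, and threshold; returns True where flagged as OOD.
    logits = shape_fn(z) @ W + b
    return energy_score(logits, T) < threshold
```

In practice `shape_fn` would be the tuned $\mu^{*}$ applied component-wise, and `W`, `b` the network's trained classification head.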
+
+# 6. Experiments
+
+We validate our theory by comparing our new shaping function to SoA for OOD detection on standard benchmarks.
+
+Datasets and Model architectures. We experiment with ResNet-50 (He et al., 2016), MobileNet-v2 (Sandler et al., 2018), and the vision transformers ViT-B-16 and ViT-L-16 (Dosovitskiy et al., 2021), with ImageNet-1k (Russakovsky et al., 2015) as ID data, and benchmark on the OOD datasets/methods used in (Zhao et al., 2024). For the ImageNet benchmark, we evaluate performance across eight OOD datasets: Species (Hendrycks et al., 2022), iNaturalist (Horn et al., 2018), SUN (Xiao et al., 2010), Places (Zhou et al., 2018), OpenImage-O (Wang et al., 2022), ImageNet-O (Hendrycks et al., 2021), Texture (Cimpoi et al., 2014), and MNIST (Deng, 2012). Moreover, we also experiment with CIFAR10 and CIFAR100 as ID data, for which we use a ViT-B-16 (Dosovitskiy et al., 2021) finetuned on CIFAR10/100 consistent with (Fort et al., 2021a), and an MLP-Mixer-Nano model trained on CIFAR10/100 from scratch. We evaluate eight OOD datasets: TinyImageNet (Torralba et al., 2008), SVHN (Netzer et al., 2011), Texture (Cimpoi et al., 2014), Places365 (Zhou et al., 2018), LSUN-Cropped (Yu et al., 2016), LSUN-Resized (Yu et al., 2016), iSUN (Xu et al., 2015), and CIFAR100/CIFAR10 (CIFAR100 treated as OOD for CIFAR10, and vice-versa).
+
+We compare our results against SoA methods across two categories: penultimate-layer element-wise feature shaping approaches, to which our theory currently applies, and other methods. Penultimate-layer feature shaping approaches apply element-wise shaping functions directly to the penultimate layer of the model before computing the energy score for OOD detection. Approaches in this category are: Energy (Liu et al., 2021), ReAct (Sun et al., 2021), BFAct (Kong & Li, 2023), VRA-P (Xu et al., 2023) and FS-OPT (Zhao et al., 2024). The second category comprises methods that are not directly comparable to our approach, because they may not involve feature shaping or may add components such as feature matching, but are included for reference: softmax-based confidence scoring (MSP (Hendrycks & Gimpel, 2018b)), input perturbation and temperature scaling (ODIN (Liang et al., 2020)), intermediate-layer shaping with subsequent processing by the following layers (ASH-P, ASH-B, ASH-S (Djurisic et al., 2022)), and weight sparsification (DICE (Sun & Li, 2022)).
+
+As in ReAct (Sun et al., 2021), for ImageNet-1k benchmarks we use a validation set comprising the validation split of ImageNet-1k as ID data, and Gaussian noise images as OOD data, generated by sampling from $\mathcal{N}(0,1)$ for each pixel location, to tune the hyperparameters of our piecewise linear activation shaping function. For CIFAR 10/100 benchmarks, following ODIN (Liang et al., 2020), we employ a random subset of the iSUN dataset (Xu et al., 2015) as validation OOD data for our hyperparameter tuning. As ID validation data for CIFAR10/100 we use the test splits of the corresponding datasets. The hyperparameters are optimized using Bayesian optimization (Frazier, 2018), by minimizing the FPR95 metric on the validation set. Resulting hyperparameters are reported in Appendix G.
+
+Metrics. We utilize two standard evaluation metrics, following (Sun et al., 2021; Zhao et al., 2024): FPR95 - the false positive rate when the true positive rate is $95\%$ (abbreviated as FP), and the area under the ROC curve (AU).
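+Both metrics can be computed directly from arrays of ID and OOD scores (with the convention that a higher score indicates ID); a minimal NumPy sketch, assuming continuous scores without ties:
+
+```python
+import numpy as np
+
+def fpr_at_tpr(id_scores, ood_scores, tpr=0.95):
+    """FPR95: fraction of OOD samples accepted at the score threshold
+    that keeps `tpr` of the ID samples accepted."""
+    thresh = np.quantile(id_scores, 1.0 - tpr)  # accept scores >= thresh
+    return float(np.mean(ood_scores >= thresh))
+
+def auroc(id_scores, ood_scores):
+    """Area under the ROC curve via the Mann-Whitney U statistic
+    (assumes no tied scores)."""
+    scores = np.concatenate([id_scores, ood_scores])
+    order = np.argsort(scores, kind="mergesort")
+    ranks = np.empty(len(scores))
+    ranks[order] = np.arange(1, len(scores) + 1)
+    n_id, n_ood = len(id_scores), len(ood_scores)
+    u = ranks[:n_id].sum() - n_id * (n_id + 1) / 2
+    return float(u / (n_id * n_ood))
+```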
+
+Results. The results on the ImageNet-1k benchmarks (Table 1) and the CIFAR 10/100 benchmarks (Table 2) demonstrate that our approach achieves state-of-the-art performance among comparable feature-shaping methods. Specifically, our method consistently outperforms pointwise feature-shaping techniques such as ReAct, BFAct, VRA-P, and FS-OPT, yielding the best overall results in this category.
+
+While ASH variants marginally outperform our method in some cases, it is important to note that ASH employs a fundamentally different approach: it modifies activations through intermediate-layer pruning and rescaling of features, which are then routed back into the network for further processing, and is therefore not an element-wise feature shaping approach. Our theory currently does not address this case.
+
+For the vision transformers ViT-B-16 and ViT-L-16, our method achieves the lowest FP among all competing methods, providing evidence of generalization across different architectures. Overall, our results demonstrate that our feature shaping is highly competitive with the latest SoA, along with providing a theoretical explanation.
+
+Computational Time: The inference cost of our feature shaping method is on the order of microseconds for a $256 \times 256 \times 3$ image, using PyTorch on an NVIDIA A100-80GB GPU. This is comparable to other piecewise linear shaping approaches such as ReAct and VRA.
+
+Regularization as a Function of OOD Data. We study how the IB regularization should be chosen with respect to properties of the OOD data, which matters in practical deployments. In particular, we conduct an experiment suggesting that higher IB regularization is beneficial for "noisier" OOD datasets.
+
+To this end, we conduct a series of controlled experiments using ResNet-50 trained on the ImageNet-1k dataset. We aim to determine the structure of optimal feature-shaping functions as a function of noise. We apply additive Gaussian noise $\mathcal{N}(0,\sigma)$ to the ImageNet-1k validation set and treat the resulting images as OOD data, using standard deviations $\sigma \in \{25, 50, 100, 150, 255\}$ to create five OOD datasets. Visualizations of this data and the activation patterns are shown in Appendix F. We observe that this data closely resembles the high variance of activation patterns in OOD datasets noted in (Sun et al., 2021), so our noisy data serves to mimic OOD data with varying noise levels. By examining how the learned features adapt under different noise levels, we gain insight into the relationship between the OOD data and the IB regularization term for optimal shaping.
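+The construction of the noisy OOD sets can be sketched as follows; clipping back to the valid pixel range is our assumption here, and the random batch is only a stand-in for the ImageNet-1k validation images:
+
+```python
+import numpy as np
+
+def make_noisy_ood(images, sigma, rng):
+    """Add pixel-wise Gaussian noise N(0, sigma) to uint8 images and clip
+    back to [0, 255], yielding one synthetic OOD dataset per noise level."""
+    noise = rng.normal(0.0, sigma, size=images.shape)
+    return np.clip(images.astype(np.float64) + noise, 0, 255).astype(np.uint8)
+
+rng = np.random.default_rng(0)
+sigmas = [25, 50, 100, 150, 255]  # noise levels used in the experiment
+batch = rng.integers(0, 256, size=(4, 32, 32, 3), dtype=np.uint8)  # stand-in images
+ood_sets = {s: make_noisy_ood(batch, s, rng) for s in sigmas}
+```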
+
+In Figure 5, we plot the IB term of the optimal shaping function, optimized over hyperparameters at each noise level. Note that we have used the Laplacian OOD distribution to estimate the IB term (the Inverse Gaussian yields similar results). Higher noise levels result in optimal shaping functions with lower IB values, which correspond to a higher degree of regularization of the IB term in our loss functional. Thus, noisier OOD datasets require higher IB regularization for the best OOD detection performance.
+
+# 7. Conclusion and Future Work
+
+We have presented a novel theory for OOD feature construction. OOD features were computed as the optimizer of a loss functional consisting of a term maximizing the KL divergence between the resulting features under ID and OOD distributions
+
+| Method | ResNet50 FP↓ | ResNet50 AU↑ | MobileNetV2 FP↓ | MobileNetV2 AU↑ | ViT-B-16 FP↓ | ViT-B-16 AU↑ | ViT-L-16 FP↓ | ViT-L-16 AU↑ |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| *Element-wise Feature Shaping* |  |  |  |  |  |  |  |  |
+| Energy | 60.97 | 81.01 | 61.40 | 82.83 | 73.96 | 67.65 | 74.89 | 70.11 |
+| ReAct | 42.29 | 86.54 | 54.19 | 85.37 | 73.82 | 76.86 | 76.16 | 81.07 |
+| BFAct | 43.87 | 86.01 | 52.87 | 85.78 | 77.64 | 80.16 | 84.02 | 81.12 |
+| VRA-P | 37.97 | 88.58 | 49.98 | 86.83 | 98.39 | 35.66 | 99.58 | 16.70 |
+| FS-OPT | 39.75 | 88.56 | 51.77 | 86.62 | 69.52 | 82.66 | 72.17 | 83.23 |
+| Ours | 35.82 | 89.36 | 46.97 | 87.49 | 67.73 | 81.06 | 66.67 | 83.92 |
+| *Other methods* |  |  |  |  |  |  |  |  |
+| MSP | 69.30 | 76.26 | 72.58 | 77.41 | 69.84 | 77.40 | 70.59 | 78.40 |
+| ODIN | 61.56 | 80.92 | 62.91 | 82.64 | 69.25 | 72.60 | 70.35 | 74.51 |
+| ASH-P | 55.30 | 83.00 | 59.41 | 83.84 | 99.36 | 21.17 | 99.18 | 20.27 |
+| ASH-B | 35.97 | 88.62 | 43.59 | 88.28 | 94.87 | 46.68 | 93.72 | 38.95 |
+| ASH-S | 34.70 | 90.25 | 43.84 | 88.24 | 99.48 | 18.52 | 99.42 | 18.61 |
+| DICE | 45.32 | 83.64 | 49.33 | 84.63 | 89.68 | 71.32 | 72.38 | 67.08 |
+
+Table 1. ImageNet-1k Benchmarking Results. Lower FP is better, higher AU is better. Bold values highlight the best results among element-wise feature shaping methods. Underlined values indicate best results among other methods.
+
+| Method | CIFAR10 DenseNet101 FP↓ | CIFAR10 DenseNet101 AU↑ | CIFAR10 MLP-N FP↓ | CIFAR10 MLP-N AU↑ | CIFAR100 DenseNet101 FP↓ | CIFAR100 DenseNet101 AU↑ | CIFAR100 MLP-N FP↓ | CIFAR100 MLP-N AU↑ |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| *Element-wise Feature Shaping* |  |  |  |  |  |  |  |  |
+| Energy | 31.72 | 93.51 | 63.95 | 82.84 | 70.80 | 80.22 | 79.90 | 75.30 |
+| ReAct | 82.00 | 76.46 | 64.34 | 81.85 | 77.00 | 78.30 | 79.99 | 75.87 |
+| BFact | 84.40 | 74.39 | 78.02 | 72.68 | 80.27 | 73.36 | 80.05 | 76.58 |
+| VRA-P | 38.41 | 92.77 | 100.00 | 65.95 | 67.75 | 82.72 | 87.19 | 66.03 |
+| FS-OPT | 28.90 | 94.12 | 83.87 | 71.83 | 65.20 | 82.39 | 81.33 | 74.67 |
+| Ours | 24.08 | 95.26 | 62.67 | 81.67 | 64.15 | 82.00 | 78.33 | 73.53 |
+| *Other methods* |  |  |  |  |  |  |  |  |
+| MSP | 52.66 | 91.42 | 67.01 | 82.86 | 80.40 | 74.75 | 83.97 | 73.07 |
+| ODIN | 32.84 | 91.94 | 69.20 | 69.53 | 62.03 | 82.57 | 78.71 | 66.47 |
+| MLS | 31.93 | 93.51 | 64.50 | 82.97 | 70.71 | 80.18 | 82.41 | 74.52 |
+| ASH-P | 29.39 | 93.98 | 84.39 | 66.93 | 68.21 | 81.11 | 86.73 | 65.27 |
+| ASH-B | 28.21 | 94.27 | 93.93 | 53.00 | 57.45 | 83.80 | 93.63 | 57.20 |
+| ASH-S | 23.93 | 94.41 | 82.57 | 68.02 | 52.41 | 84.65 | 89.39 | 59.63 |
+| DICE | 29.67 | 93.27 | 96.64 | 52.17 | 59.56 | 82.26 | 95.78 | 44.48 |
+
+Table 2. Performance comparison on CIFAR10/100 datasets. Lower FP is better, higher AU is better. Bold values highlight the best results among element-wise feature shaping methods. Underlined values indicate best results among other methods.
+
+
+Figure 5. IB for the optimal hyperparameter optimized shaping function as a function of the noise level of OOD data. Lower IB corresponds to a more regularized shaping function.
+
+and a regularization based on the Information Bottleneck. We have related the optimal features to several element-wise OOD shaping functions used in existing practice, offering a theoretical explanation and suggesting the underlying distributional assumptions made in these often empirically motivated approaches. Our theory was shown to lead to a new shaping function that outperforms existing shaping functions on benchmark datasets.
+
+There are several areas for future investigation. Firstly, we have developed the concept of random features, whose mean value models OOD shaping functions, but only exploited the mean value algorithmically. Future work will aim to exploit the OOD feature distribution. Secondly, our theory so far only explains the OOD feature and not the score function. We wish to incorporate scores into our theory. Finally, we wish to explore more general vector shaping functions.
+
+# Impact Statement
+
+This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
+
+# References
+
+Ahn, Y. H., Park, G.-M., and Kim, S. T. Line: Out-of-distribution detection by leveraging important neurons, 2023.
+Cimpoi, M., Maji, S., Kokkinos, I., Mohamed, S., and Vedaldi, A. Describing textures in the wild. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, pp. 3606-3613, 2014. doi: 10.1109/CVPR.2014.461.
+Cover, T. M. Elements of information theory. John Wiley & Sons, 1999.
+Deng, L. The mnist database of handwritten digit images for machine learning research [best of the web]. IEEE Signal Processing Magazine, 29(6):141-142, 2012. doi: 10.1109/MSP.2012.2211477.
+DeVries, T. and Taylor, G. W. Learning confidence for out-of-distribution detection in neural networks, 2018.
+Djurisic, A., Bozanic, N., Ashok, A., and Liu, R. Extremely simple activation shaping for out-of-distribution detection. 2022. URL https://arxiv.org/abs/2209.09858.
+Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., and Houlsby, N. An image is worth 16x16 words: Transformers for image recognition at scale, 2021. URL https://arxiv.org/abs/2010.11929.
+Elflein, S., Charpentier, B., Zügner, D., and Günnemann, S. On out-of-distribution detection with energy-based models. arXiv preprint arXiv:2107.08785, 2021.
+Fort, S., Ren, J., and Lakshminarayanan, B. Exploring the limits of out-of-distribution detection. In Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P., and Vaughan, J. W. (eds.), Advances in Neural Information Processing Systems, volume 34, pp. 7068-7081. Curran Associates, Inc., 2021a. URL https://proceedings.neurips.cc/paper_files/paper/2021/file/3941c4358616274ac2436eaxf67fae05-Paper.pdf.
+Fort, S., Ren, J., and Lakshminarayanan, B. Exploring the limits of out-of-distribution detection. Advances in neural information processing systems, 34:7068-7081, 2021b.
+Frazier, P. I. A tutorial on bayesian optimization, 2018. URL https://arxiv.org/abs/1807.02811.
+He, K., Zhang, X., Ren, S., and Sun, J. Identity mappings in deep residual networks. In Leibe, B., Matas, J., Sebe, N., and Welling, M. (eds.), Computer Vision - ECCV 2016, pp. 630-645, Cham, 2016. Springer International Publishing.
+Hendrycks, D. and Gimpel, K. A baseline for detecting misclassified and out-of-distribution examples in neural networks, 2018a. URL https://arxiv.org/abs/1610.02136.
+Hendrycks, D. and Gimpel, K. A baseline for detecting misclassified and out-of-distribution examples in neural networks, 2018b. URL https://arxiv.org/abs/1610.02136.
+
+Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., and Song, D. Natural adversarial examples. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 15262-15271, June 2021.
+Hendrycks, D., Basart, S., Mazeika, M., Zou, A., Kwon, J., Mostajabi, M., Steinhardt, J., and Song, D. Scaling out-of-distribution detection for real-world settings, 2022. URL https://arxiv.org/abs/1911.11132.
+Horn, G. V., Aodha, O. M., Song, Y., Cui, Y., Sun, C., Shepard, A., Adam, H., Perona, P., and Belongie, S. The inaturalist species classification and detection dataset, 2018. URL https://arxiv.org/abs/1707.06642.
+Kirichenko, P., Izmailov, P., and Wilson, A. G. Why normalizing flows fail to detect out-of-distribution data. Advances in neural information processing systems, 33: 20578-20589, 2020.
+Kong, H. and Li, H. Bfact: Out-of-distribution detection with butterworth filter rectified activations. In Sun, F., Cangelosi, A., Zhang, J., Yu, Y., Liu, H., and Fang, B. (eds.), Cognitive Systems and Information Processing, pp. 115-129, Singapore, 2023. Springer Nature Singapore.
+Lee, K., Lee, K., Lee, H., and Shin, J. A simple unified framework for detecting out-of-distribution samples and adversarial attacks. Advances in neural information processing systems, 31, 2018.
+Liang, S., Li, Y., and Srikant, R. Enhancing the reliability of out-of-distribution image detection in neural networks, 2020. URL https://arxiv.org/abs/1706.02690.
+Liu, W., Wang, X., Owens, J. D., and Li, Y. Energy-based out-of-distribution detection, 2021. URL https:// arxiv.org/abs/2010.03759.
+Netzer, Y., Wang, T., Coates, A., Bissacco, A., Wu, B., and Ng, A. Y. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning 2011, 2011. URL http://ufldl.stanford.edu/housenumbers/nips2011_housenumbers.pdf.
+Ren, J., Liu, P. J., Fertig, E., Snoek, J., Poplin, R., Depristo, M., Dillon, J., and Lakshminarayanan, B. Likelihood ratios for out-of-distribution detection. Advances in neural information processing systems, 32, 2019.
+Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A. C., and Fei-Fei, L. Imagenet large scale visual recognition challenge, 2015. URL https:// arxiv.org/abs/1409.0575.
+
+Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., and Chen, L.-C. Mobilenetv2: Inverted residuals and linear bottlenecks. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4510-4520, 2018. doi: 10.1109/CVPR.2018.00474.
+Sun, Y. and Li, Y. Dice: Leveraging sparsification for out-of-distribution detection. In European Conference on Computer Vision, 2022.
+Sun, Y., Guo, C., and Li, Y. React: Out-of-distribution detection with rectified activations. Advances in Neural Information Processing Systems, 34:144-157, 2021.
+Sun, Y., Ming, Y., Zhu, X., and Li, Y. Out-of-distribution detection with deep nearest neighbors. In International Conference on Machine Learning, pp. 20827-20840. PMLR, 2022.
+Tishby, N., Pereira, F. C., and Bialek, W. The information bottleneck method. arXiv preprint physics/0004057, 2000.
+Torralba, A., Fergus, R., and Freeman, W. T. 80 million tiny images: A large data set for nonparametric object and scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(11):1958-1970, 2008. doi: 10.1109/TPAMI.2008.128.
+Troutman, J. L. Variational calculus and optimal control: optimization with elementary convexity. Springer Science & Business Media, 2012.
+Wang, H., Li, Z., Feng, L., and Zhang, W. Vim: Out-of-distribution with virtual-logit matching, 2022. URL https://arxiv.org/abs/2203.10807.
+Wu, Q., Chen, Y., Yang, C., and Yan, J. Energy-based out-of-distribution detection for graph neural networks. arXiv preprint arXiv:2302.02914, 2023.
+Xiao, J., Hays, J., Ehinger, K. A., Oliva, A., and Torralba, A. Sun database: Large-scale scene recognition from abbey to zoo. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 3485-3492, 2010. doi: 10.1109/CVPR.2010.5539970.
+Xu, M., Lian, Z., Liu, B., and Tao, J. Vra: Variational rectified activation for out-of-distribution detection, 2023. URL https://arxiv.org/abs/2302.11716.
+Xu, P., Ehinger, K. A., Zhang, Y., Finkelstein, A., Kulkarni, S. R., and Xiao, J. Turkergaze: Crowdsourcing saliency with webcam based eye tracking, 2015. URL https://arxiv.org/abs/1504.06755.
+Yang, J., Zhou, K., Li, Y., and Liu, Z. Generalized out-of-distribution detection: A survey, 2022.
+
+Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., and Xiao, J. Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop, 2016. URL https://arxiv.org/abs/1506.03365.
+Zhang, Y., Lu, J., Peng, B., Fang, Z., and ming Cheung, Y. Learning to shape in-distribution feature space for out-of-distribution detection. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. URL https://openreview.net/forum?id=1Du3mMP5YN.
+Zhang, Z. and Xiang, X. Decoupling maxlogit for out-of-distribution detection. In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3388-3397, 2023. doi: 10.1109/CVPR52729.2023.00330.
+Zhao, Q., Xu, M., Gupta, K., Asthana, A., Zheng, L., and Gould, S. Towards optimal feature-shaping methods for out-of-distribution detection. arXiv preprint arXiv:2402.00865, 2024.
+Zhou, B., Lapedriza, A., Khosla, A., Oliva, A., and Torralba, A. Places: A 10 million image database for scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(6):1452-1464, 2018. doi: 10.1109/TPAMI.2017.2723009.
+
+# A. Analysis of Loss Function in 1D Gaussian Case
+
+We analytically study the optimization of our loss functional in the case where the ID/OOD distributions of $z$ are Gaussian, i.e., $p(z|y) \sim \mathcal{N}(\mu_y, \sigma_y)$ , $y \in \{0,1\}$ , and the feature is Gaussian with mean a linear function of $z$ and constant standard deviation, i.e., $p(\tilde{z}|z) \sim \mathcal{N}(Wz + b, \sigma_c)$ where $W, b \in \mathbb{R}$ . The parameters $W, b$ are to be optimized and specify the OOD feature. This is a stochastic relaxation of a deterministic linear shaping function. The shape of the loss and its various components are shown in Figure 6. The plot shows the loss terms versus $W$ ; as will be shown below, the loss does not depend on $b$ . It shows that the separation between feature distributions (KL term) increases as $|W| \to \infty$ . On the other hand, the information bottleneck term also increases as $|W| \to \infty$ . Thus, these terms compete with each other and the optimal solution is well-defined at a finite $|W| > 0$ . This simple example suggests that the loss functional is well defined (i.e., has a finite optimum). Notice that without the IB term, there is no optimal value of $W$ .
+
+Next, we derive the components of the loss in analytic form. Based on the assumptions specified in the previous paragraph, the formulas for the assumed probabilities are:
+
+$$
+p (z | y) = \frac {1}{\sqrt {2 \pi} \sigma_ {y}} \exp \left(- \frac {1}{2 \sigma_ {y} ^ {2}} (z - \mu_ {y}) ^ {2}\right) \tag {18}
+$$
+
+$$
+p (\tilde {z} | z) = \frac {1}{\sqrt {2 \pi} \sigma_ {c}} \exp \left(- \frac {(\tilde {z} - W z - b) ^ {2}}{2 \sigma_ {c} ^ {2}}\right). \tag {19}
+$$
+
+Under these assumptions, we can calculate the feature distributions:
+
+$$
+p (\tilde {z} | y) = \frac {1}{\sqrt {2 \pi \left(\sigma_ {c} ^ {2} + W ^ {2} \sigma_ {y} ^ {2}\right)}} \exp \left(- \frac {1}{2 \left(\sigma_ {c} ^ {2} + W ^ {2} \sigma_ {y} ^ {2}\right)} (\tilde {z} - W \mu_ {y} - b) ^ {2}\right), \tag {20}
+$$
+
+and note that the joint distribution is
+
+$$
+p (\tilde {z}, z | y) \sim \mathcal {N} \left(\mu_ {\tilde {z}, z}, \Sigma_ {\tilde {z}, z}\right), \quad \mu_ {\tilde {z}, z} = \left( \begin{array}{c} W \mu_ {y} + b \\ \mu_ {y} \end{array} \right), \quad \Sigma_ {\tilde {z}, z} = \left( \begin{array}{c c} \sigma_ {c} ^ {2} + W ^ {2} \sigma_ {y} ^ {2} & W \sigma_ {y} ^ {2} \\ W \sigma_ {y} ^ {2} & \sigma_ {y} ^ {2} \end{array} \right). \tag {21}
+$$
+
+Let us compute the loss of both the KL term and the information bottleneck under this case. First note the formula: if $p \sim \mathcal{N}(\hat{\mu}_1, \hat{\sigma}_1), q \sim \mathcal{N}(\hat{\mu}_2, \hat{\sigma}_2)$ then
+
+$$
+D _ {K L} (p | | q) = \log \left(\frac {\hat {\sigma} _ {2}}{\hat {\sigma} _ {1}}\right) + \frac {\hat {\sigma} _ {1} ^ {2} + \left(\hat {\mu} _ {1} - \hat {\mu} _ {2}\right) ^ {2}}{2 \hat {\sigma} _ {2} ^ {2}} - \frac {1}{2}. \tag {22}
+$$
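+As a sanity check, this closed form can be compared against direct quadrature of the KL integral (a small verification sketch, not part of the method):
+
+```python
+import numpy as np
+
+def kl_gauss_closed(mu1, s1, mu2, s2):
+    """Closed-form KL divergence between N(mu1, s1) and N(mu2, s2), Eq. (22)."""
+    return np.log(s2 / s1) + (s1**2 + (mu1 - mu2) ** 2) / (2 * s2**2) - 0.5
+
+def kl_gauss_numeric(mu1, s1, mu2, s2):
+    """D_KL(p || q) by quadrature of p log(p/q) on a wide uniform grid;
+    log-densities are formed directly to avoid underflow in the ratio."""
+    x = np.linspace(-40.0, 40.0, 400001)
+    lp = -0.5 * ((x - mu1) / s1) ** 2 - np.log(s1 * np.sqrt(2 * np.pi))
+    lq = -0.5 * ((x - mu2) / s2) ** 2 - np.log(s2 * np.sqrt(2 * np.pi))
+    return float(np.sum(np.exp(lp) * (lp - lq)) * (x[1] - x[0]))
+```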
+
+Choosing (assuming, from here on, equal class standard deviations $\sigma_{0} = \sigma_{1} = \sigma$)
+
+$$
+\hat {\mu} _ {1} = W \mu_ {1} + b, \tag {23}
+$$
+
+$$
+\hat {\mu} _ {2} = W \mu_ {0} + b, \tag {24}
+$$
+
+$$
+\hat {\sigma} _ {1} ^ {2}, \hat {\sigma} _ {2} ^ {2} = \sigma_ {c} ^ {2} + W ^ {2} \sigma^ {2}, \tag {25}
+$$
+
+we find that
+
+$$
+D _ {K L} (p (\tilde {z} | Y = 1) \mid | p (\tilde {z} | Y = 0)) = \frac {\sigma_ {c} ^ {2} + W ^ {2} \sigma^ {2} + W ^ {2} \left(\mu_ {1} - \mu_ {0}\right) ^ {2}}{2 \left(\sigma_ {c} ^ {2} + W ^ {2} \sigma^ {2}\right)} - \frac {1}{2} = \frac {W ^ {2} \left(\mu_ {1} - \mu_ {0}\right) ^ {2}}{2 \left(\sigma_ {c} ^ {2} + W ^ {2} \sigma^ {2}\right)}. \tag {26}
+$$
+
+Let us now compute the information bottleneck term. Note the following result: if $X, Y \sim \mathcal{N}(\mu, \Sigma)$ , where
+
+$$
+\Sigma = \left( \begin{array}{l l} \sigma_ {X} ^ {2} & \rho \sigma_ {X} \sigma_ {Y} \\ \rho \sigma_ {X} \sigma_ {Y} & \sigma_ {Y} ^ {2} \end{array} \right), \tag {27}
+$$
+
+then
+
+$$
+I (X; Y) = - \frac {1}{2} \log \left(1 - \rho^ {2}\right). \tag {28}
+$$
+
+Note that
+
+$$
+\Sigma_ {Z, \tilde {Z}} = \left( \begin{array}{c c} \sigma_ {c} ^ {2} + W ^ {2} \sigma^ {2} & W \sigma^ {2} \\ W \sigma^ {2} & \sigma^ {2} \end{array} \right); \tag {29}
+$$
+
+therefore,
+
+$$
+\rho_ {\tilde {Z}, Z} = \frac {W \sigma}{\sqrt {\sigma_ {c} ^ {2} + W ^ {2} \sigma^ {2}}} \tag {30}
+$$
+
+Thus,
+
+$$
+I (\tilde {Z}; Z) = - \frac {1}{2} \log \left[ 1 - \frac {W ^ {2} \sigma^ {2}}{\sigma_ {c} ^ {2} + W ^ {2} \sigma^ {2}} \right] = - \frac {1}{2} \log \left(\frac {\sigma_ {c} ^ {2}}{\sigma_ {c} ^ {2} + W ^ {2} \sigma^ {2}}\right). \tag {31}
+$$
+
+Next, we compute
+
+$$
+\begin{array}{l} I (\tilde {Z}; Y) = p (Y = 1) \int p (\tilde {z} | Y = 1) \log \frac {p (\tilde {z} | Y = 1)}{p (\tilde {z})} \mathrm {d} \tilde {z} + p (Y = 0) \int p (\tilde {z} | Y = 0) \log \frac {p (\tilde {z} | Y = 0)}{p (\tilde {z})} \mathrm {d} \tilde {z} (32) \\ = - p (Y = 1) h \left(p (\tilde {z} | Y = 1)\right) - p (Y = 0) h \left(p (\tilde {z} | Y = 0)\right) + h \left(p (\tilde {z})\right) (33) \\ = - \frac {1}{2} \left[ \log \left(2 \pi \left(\sigma_ {c} ^ {2} + W ^ {2} \sigma^ {2}\right)\right) + 1 \right] + h (p (\tilde {z})), (34) \\ \end{array}
+$$
+
+where we used that
+
+$$
+h \left(\mathcal {N} \left(\mu , \sigma^ {2}\right)\right) = \frac {1}{2} \left[ \log \left(2 \pi \sigma^ {2}\right) + 1 \right]. \tag {35}
+$$
+
+Note that
+
+$$
+\begin{array}{l} p (\tilde {z}) = p (Y = 1) p (\tilde {z} \mid Y = 1) + p (Y = 0) p (\tilde {z} \mid Y = 0) (36) \\ = p (Y = 1) G \left(\tilde {z}; W \mu_ {1} + b, \sigma_ {\tilde {z}}\right) + p (Y = 0) G \left(\tilde {z}; W \mu_ {0} + b, \sigma_ {\tilde {z}}\right) (37) \\ = \frac {1}{\sigma_ {\tilde {z}}} [ p (Y = 1) G \left(z ^ {\prime}; \mu^ {\prime}, 1\right) + p (Y = 0) G \left(z ^ {\prime}; 0, 1\right) ], (38) \\ \end{array}
+$$
+
+where
+
+$$
+z ^ {\prime} = \frac {\tilde {z} - W \mu_ {0} - b}{\sigma_ {\tilde {z}}}, \quad \sigma_ {\tilde {z}} ^ {2} = \sigma_ {c} ^ {2} + W ^ {2} \sigma^ {2}, \quad \mu^ {\prime} = \frac {W \left(\mu_ {1} - \mu_ {0}\right)}{\sigma_ {\tilde {z}}}. \tag {39}
+$$
+
+Thus,
+
+$$
+\begin{array}{l} h (p (\tilde {z})) = - \int p (\tilde {z}) \log p (\tilde {z}) \mathrm {d} \tilde {z} (40) \\ = - \sigma_ {\tilde {z}} \int p \left(z ^ {\prime}\right) \log p \left(z ^ {\prime}\right) d z ^ {\prime} (41) \\ = - \int \tilde {G} \left(z ^ {\prime}\right) \log \left[ \frac {1}{\sigma_ {\tilde {z}}} \tilde {G} \left(z ^ {\prime}\right) \right] \mathrm {d} z ^ {\prime} (42) \\ = \log \sigma_ {\tilde {z}} - \int \tilde {G} \left(z ^ {\prime}\right) \log \tilde {G} \left(z ^ {\prime}\right) \mathrm {d} z ^ {\prime} (43) \\ = \log \sigma_ {\tilde {z}} + h (\tilde {G}), (44) \\ \end{array}
+$$
+
+where
+
+$$
+\tilde {G} \left(z ^ {\prime}\right) = p (Y = 1) G \left(z ^ {\prime}; \mu^ {\prime}, 1\right) + p (Y = 0) G \left(z ^ {\prime}; 0, 1\right). \tag {45}
+$$
+
+Therefore,
+
+$$
+\begin{array}{l} I (\tilde {Z}; Y) = - \frac {1}{2} \left[ \log \left(2 \pi \sigma_ {\tilde {z}} ^ {2}\right) + 1 \right] + \log \sigma_ {\tilde {z}} + h (\tilde {G}) (46) \\ = - \frac {1}{2} [ \log (2 \pi) + 1 ] + h (\tilde {G}). (47) \\ \end{array}
+$$
+
+Therefore,
+
+$$
+\operatorname {I B} (p (\tilde {z} | z)) = \log \left(\frac {\sigma_ {\tilde {z}}}{\sigma_ {c}}\right) + \frac {\beta}{2} [ \log (2 \pi) + 1 ] - \beta h (\tilde {G}). \tag {48}
+$$
+
+Finally,
+
+$$
+L (p (\tilde {z} | z)) = - \frac {W ^ {2} \left(\mu_ {1} - \mu_ {0}\right) ^ {2}}{2 \sigma_ {\tilde {z}} ^ {2}} + \alpha \left[ \log \left(\frac {\sigma_ {\tilde {z}}}{\sigma_ {c}}\right) + \frac {\beta}{2} [ \log (2 \pi) + 1 ] - \beta h (\tilde {G}) \right], \tag {49}
+$$
+
+
+Figure 6. Loss function for the 1D Gaussian case, with $\mu_0 - \mu_1 = 1$ , $\sigma = \sigma_c = 1$ , and $\beta = 1$ . Here $L = -D_{KL} + \alpha \mathrm{IB}$ with $\alpha = 0.5$ .
+
+where
+
+$$
+\tilde {G} (z) = p (Y = 1) G (z; \mu^ {\prime}, 1) + p (Y = 0) G (z; 0, 1) \tag {50}
+$$
+
+$$
+\sigma_ {\tilde {z}} ^ {2} = \sigma_ {c} ^ {2} + W ^ {2} \sigma^ {2} \tag {51}
+$$
+
+$$
+\mu^ {\prime} = \frac {W \left(\mu_ {1} - \mu_ {0}\right)}{\sigma_ {\tilde {z}}}. \tag {52}
+$$
+
+We show the plot of this loss function in Figure 6.
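+This loss can also be evaluated numerically. The sketch below computes $h(\tilde G)$ by quadrature under the Figure 6 settings ($\mu_0 - \mu_1 = 1$, $\sigma = \sigma_c = 1$, $\beta = 1$) and confirms a finite interior optimum in $W$; the IB term is weighted by $\alpha$, with $\alpha = 0.5$ as in Figure 6, and the grids are arbitrary choices:
+
+```python
+import numpy as np
+
+def gauss(z, mu):
+    return np.exp(-0.5 * (z - mu) ** 2) / np.sqrt(2 * np.pi)
+
+def loss_1d(W, mu_diff=1.0, sigma=1.0, sigma_c=1.0, alpha=0.5, beta=1.0, p1=0.5):
+    """Loss L = -KL + alpha * IB; h(G~) is computed by quadrature."""
+    s2 = sigma_c**2 + W**2 * sigma**2               # sigma_z~^2, Eq. (51)
+    kl = W**2 * mu_diff**2 / (2.0 * s2)             # KL term, Eq. (26)
+    mu_p = W * mu_diff / np.sqrt(s2)                # mu'; its sign does not affect the entropy
+    z = np.linspace(-20.0, 20.0, 40001)
+    g = p1 * gauss(z, 0.0) + (1.0 - p1) * gauss(z, mu_p)
+    h = float(-np.sum(g * np.log(g + 1e-300)) * (z[1] - z[0]))  # h(G~)
+    ib = np.log(np.sqrt(s2) / sigma_c) + 0.5 * beta * (np.log(2 * np.pi) + 1) - beta * h  # Eq. (48)
+    return -kl + alpha * ib
+
+Ws = np.linspace(-5.0, 5.0, 201)
+losses = np.array([loss_1d(W) for W in Ws])
+W_star = Ws[losses.argmin()]  # finite optimum away from W = 0 and the grid edges
+```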
+
+# B. Simplifying the Loss With Independence Assumptions
+
+We simplify our loss functional under independence assumptions. Specifically, we assume that
+
+$$
+p (\tilde {z} | z) = \prod_ {i = 1} ^ {n} p \left(\tilde {z} _ {i} \mid z _ {i}\right) \tag {53}
+$$
+
+$$
+p (z | y) = \prod_ {i = 1} ^ {n} p \left(z _ {i} \mid Y = y\right). \tag {54}
+$$
+
+Note that
+
+$$
+\begin{array}{l} p (\tilde {z} | y) = \int p (\tilde {z} | z) p (z | y) \mathrm {d} z (55) \\ = \int \prod_ {i} p \left(\tilde {z} _ {i} \mid z _ {i}\right) p \left(z _ {i} \mid y\right) d z _ {1} \dots d z _ {n} (56) \\ = \prod_ {i} \int p \left(\tilde {z} _ {i} \mid z _ {i}\right) p \left(z _ {i} \mid y\right) \mathrm {d} z _ {i} (57) \\ = \prod_ {i} p \left(\tilde {z} _ {i} | y\right). (58) \\ \end{array}
+$$
+
+Therefore,
+
+$$
+\begin{array}{l} D _ {K L} [ p (\tilde {z} | 0) \mid | p (\tilde {z} | 1) ] = \int p (\tilde {z} | 0) \log \prod_ {i} \frac {p \left(\tilde {z} _ {i} | 0\right)}{p \left(\tilde {z} _ {i} | 1\right)} d \tilde {z} (59) \\ = \sum_ {i} \int p (\tilde {z} | 0) \log \frac {p \left(\tilde {z} _ {i} | 0\right)}{p \left(\tilde {z} _ {i} | 1\right)} \mathrm {d} \tilde {z} (60) \\ = \sum_ {i} D _ {K L} [ p (\tilde {z} _ {i} | 0) | | p (\tilde {z} _ {i} | 1) ], (61) \\ \end{array}
+$$
+
+and by a similar computation, we get that
+
+$$
+I (\tilde {Z}; Z) = \sum_ {i} I \left(\tilde {Z} _ {i}; Z _ {i}\right) \tag {62}
+$$
+
+$$
+I (\tilde {Z}; Y) = \sum_ {i} I (\tilde {Z} _ {i}; Y). \tag {63}
+$$
+
+Thus, we see that
+
+$$
+L (p (\tilde {z} | z)) = - D _ {K L} [ p (\tilde {z} | 0) \mid | p (\tilde {z} | 1) ] + \alpha \mathbf {I B} (p (\tilde {z} | z)) = \sum_ {i} L _ {i} \left(p \left(\tilde {z} _ {i} \mid z _ {i}\right)\right), \tag {64}
+$$
+
+where
+
+$$
+L _ {i} \left(p \left(\tilde {z} _ {i} \mid z _ {i}\right)\right) = - D _ {K L} \left[ p \left(\tilde {z} _ {i} \mid 0\right) \mid \mid p \left(\tilde {z} _ {i} \mid 1\right) \right] + \alpha \left[ I \left(\tilde {Z} _ {i}; Z _ {i}\right) - \beta I \left(\tilde {Z} _ {i}; Y\right) \right]. \tag {65}
+$$
+
+Therefore, we just need to solve $n$ independent problems:
+
+$$
+\underset {p \left(\tilde {z} _ {i} \mid z _ {i}\right)} {\arg \min } L _ {i} \left(p \left(\tilde {z} _ {i} \mid z _ {i}\right)\right), \quad i \in \{1, \dots , n \}. \tag {66}
+$$
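+The additivity underlying this decomposition can be checked numerically for a two-component Gaussian example (a verification sketch; the means and standard deviations below are arbitrary):
+
+```python
+import numpy as np
+
+def kl_gauss_1d(mu1, s1, mu2, s2):
+    """Closed-form 1D Gaussian KL divergence, Eq. (22)."""
+    return np.log(s2 / s1) + (s1**2 + (mu1 - mu2) ** 2) / (2 * s2**2) - 0.5
+
+def log_gauss(x, mu, s):
+    return -0.5 * ((x - mu) / s) ** 2 - np.log(s * np.sqrt(2 * np.pi))
+
+# Product distributions with independent components.
+xs = np.linspace(-15.0, 15.0, 1201)
+dx = xs[1] - xs[0]
+X, Y = np.meshgrid(xs, xs, indexing="ij")
+lp = log_gauss(X, 1.0, 1.0) + log_gauss(Y, -0.5, 1.5)   # log p(z | 0)
+lq = log_gauss(X, 0.0, 2.0) + log_gauss(Y, 0.0, 1.0)    # log p(z | 1)
+
+# Joint KL by 2D quadrature vs. the sum of per-component closed forms.
+kl_joint = float(np.sum(np.exp(lp) * (lp - lq)) * dx * dx)
+kl_sum = kl_gauss_1d(1.0, 1.0, 0.0, 2.0) + kl_gauss_1d(-0.5, 1.5, 0.0, 1.0)
+```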
+
+# C. Gradient of Loss Computations
+
+We review the definition of the gradient of functionals, which are functions defined on functions, in order to compute gradients of our loss. First, we define the directional derivative. Let $\delta p(\tilde{z} |z)$ denote a perturbation of $p(\tilde{z} |z)$ , which is a function of $\tilde{z}$ with integral zero, so that $p(\tilde{z} |z) + \varepsilon \delta p(\tilde{z} |z)$ defines a valid probability (i.e., integrates to 1) for small $\varepsilon$ . The directional derivative is defined as
+
+$$
+\mathrm {d} L \left(p (\tilde {z} | z)\right) \cdot \delta p (\tilde {z} | z) = \left. \frac {\mathrm {d}}{\mathrm {d} \varepsilon} L [ p (\tilde {z} | z) + \varepsilon \delta p (\tilde {z} | z) ] \right| _ {\varepsilon = 0}. \tag {67}
+$$
+
+The gradient $\nabla_{p(\tilde{z} |z)}L(p(\tilde{z} |z))$ is the perturbation of $p(\tilde{z} |z)$ that satisfies the relation:
+
+$$
+\mathrm {d} L (p (\tilde {z} | z)) \cdot \delta p (\tilde {z} | z) = \iint \nabla_ {p (\tilde {z} | z)} L (p (\tilde {z} | z)) \cdot \delta p (\tilde {z} | z) \, \mathrm {d} \tilde {z} \, \mathrm {d} z. \tag {68}
+$$
+
+# C.1. KL Loss Gradient
+
+We look into the optimization of
+
+$$
+\max _ {p (\tilde {z} \mid z)} D _ {K L} [ p (\tilde {z} \mid Y = 1) | | p (\tilde {z} \mid Y = 0) ], \tag {69}
+$$
+
+where $D_{KL}$ is the KL-divergence or relative entropy:
+
+$$
+D _ {K L} [ p \mid | q ] = \int p (x) \log \frac {p (x)}{q (x)} \mathrm {d} x, \tag {70}
+$$
+
+that is, we would like to compute $\tilde{Z}$ such that the resulting distributions under ID and OOD data are maximally separated.
+
+We compute the optimizing conditions by computing the variation. First note the following:
+
+$$
+p (\tilde {z} | y) = \int p (\tilde {z} | z, y) p (z | y) \mathrm {d} z = \int p (\tilde {z} | z) p (z | y) \mathrm {d} z, \tag {71}
+$$
+
+where the equality on the right hand side holds by assumption: we do not want our feature $\tilde{Z}$ to depend on whether the data is OOD or not. Now we compute the variation:
+
+$$
+\begin{array}{l} \delta D _ {K L} \cdot \delta p (\tilde {z} | z) = \int \delta p (\tilde {z} | Y = 1) \cdot \delta p (\tilde {z} | z) \log \left(\frac {p (\tilde {z} | Y = 1)}{p (\tilde {z} | Y = 0)}\right) (72) \\ + p (\tilde {z} | Y = 0) \delta \left[ \frac {p (\tilde {z} | Y = 1)}{p (\tilde {z} | Y = 0)} \right] \cdot \delta p (\tilde {z} | z) d \tilde {z}. (73) \\ \end{array}
+$$
+
+Let's compute
+
+$$
+\delta p (\tilde {z} | y) \cdot \delta p (\tilde {z} | z) = \int \delta p (\tilde {z} | z) p (z | y) \mathrm {d} z. \tag {74}
+$$
+
+Therefore,
+
+$$
+\begin{array}{l} \delta \left[ \frac {p (\tilde {z} | Y = 1)}{p (\tilde {z} | Y = 0)} \right] \cdot \delta p (\tilde {z} | z) = \frac {\delta p (\tilde {z} | Y = 1) p (\tilde {z} | Y = 0) - p (\tilde {z} | Y = 1) \delta p (\tilde {z} | Y = 0)}{p (\tilde {z} | Y = 0)} (75) \\ = \frac {\int \delta p (\tilde {z} | z) p (z | Y = 1) \mathrm {d} z \cdot p (\tilde {z} | Y = 0) - \int \delta p (\tilde {z} | z) p (z | Y = 0) \mathrm {d} z \cdot p (\tilde {z} | Y = 1)}{p (\tilde {z} | Y = 0) ^ {2}}. (76) \\ \end{array}
+$$
+
+Therefore,
+
+$$
p(\tilde{z}|Y = 0)\, \delta \left[ \frac{p(\tilde{z}|Y = 1)}{p(\tilde{z}|Y = 0)} \right] \cdot \delta p(\tilde{z}|z) = \int \delta p(\tilde{z}|z) \left[ p(z|Y = 1) - l(\tilde{z})\, p(z|Y = 0) \right] \mathrm{d}z, \tag {77}
+$$
+
+where
+
+$$
+l (\tilde {z}) = \frac {p (\tilde {z} | Y = 1)}{p (\tilde {z} | Y = 0)}, \tag {78}
+$$
+
+is the likelihood ratio of the distributions of $\tilde{Z}$ . Thus,
+
+$$
+\delta D _ {K L} \cdot \delta p (\tilde {z} | z) = \iint \delta p (\tilde {z} | z) \left[ (1 + \log l (\tilde {z})) p (z | Y = 1) - l (\tilde {z}) p (z | Y = 0) \right] d z d \tilde {z}, \tag {79}
+$$
+
+where
+
+$$
l(z) = \frac{p(z \mid Y = 1)}{p(z \mid Y = 0)} = \frac{p_{\text{in}}(z)}{p_{\text{out}}(z)}. \tag {80}
+$$
+
+Therefore,
+
+$$
\nabla_{p(\tilde{z}|z)} D_{KL} = p_{\text{out}}(z) \cdot \left[ \left(1 + \log l(\tilde{z})\right) l(z) - l(\tilde{z}) \right]. \tag {81}
+$$
+
+# C.2. Information Bottleneck Gradient
+
+We consider the information bottleneck term:
+
+$$
+\operatorname {I B} (p (\tilde {z} | z)) = I (Z; \tilde {Z}) - \beta I (\tilde {Z}; Y), \tag {82}
+$$
+
+where $I$ denotes mutual information. Note that
+
+$$
+I (Z; \tilde {Z}) = \int p (z, \tilde {z}) \log \frac {p (z , \tilde {z})}{p (z) p (\tilde {z})} \mathrm {d} \tilde {z} \mathrm {d} z = \int p (z) p (\tilde {z} | z) \log \frac {p (\tilde {z} | z)}{p (\tilde {z})} \mathrm {d} \tilde {z} \mathrm {d} z. \tag {83}
+$$
+
+Also,
+
+$$
+I (\tilde {Z}; Y) = \sum_ {y \in \{0, 1 \}} \int p (\tilde {z}, y) \log \frac {p (\tilde {z} , y)}{p (\tilde {z}) p (y)} \mathrm {d} \tilde {z} = \sum_ {y \in \{0, 1 \}} \int p (\tilde {z} | y) p (y) \log \frac {p (\tilde {z} | y)}{p (\tilde {z})} \mathrm {d} \tilde {z}. \tag {84}
+$$
+
+Let us compute the variation of these terms:
+
+$$
\begin{aligned} \delta I(Z; \tilde{Z}) \cdot \delta p(\tilde{z}|z) &= \int p(z)\, \delta p(\tilde{z}|z) \log \frac{p(\tilde{z}|z)}{p(\tilde{z})} + p(z)\, \frac{\delta p(\tilde{z}|z)\, p(\tilde{z}) - p(\tilde{z}|z)\, \delta p(\tilde{z}) \cdot \delta p(\tilde{z}|z)}{p(\tilde{z})} \, \mathrm{d}\tilde{z}\, \mathrm{d}z & (85) \\ &= \int \delta p(\tilde{z}|z)\, p(z) \left[ 1 + \log \frac{p(\tilde{z}|z)}{p(\tilde{z})} \right] - \frac{p(\tilde{z}|z)\, p(z)}{p(\tilde{z})} \int \delta p(\tilde{z}|z')\, p(z') \, \mathrm{d}z' \, \mathrm{d}\tilde{z}\, \mathrm{d}z. & (86) \end{aligned}
+$$
+
+Let us evaluate the term after the minus sign:
+
+$$
\begin{aligned} \iiint \delta p(\tilde{z}|z') \frac{p(\tilde{z}|z)\, p(z)}{p(\tilde{z})} p(z') \, \mathrm{d}z'\, \mathrm{d}\tilde{z}\, \mathrm{d}z &= \iint \delta p(\tilde{z}|z') \frac{p(z')}{p(\tilde{z})} \int p(\tilde{z}|z)\, p(z)\, \mathrm{d}z \, \mathrm{d}\tilde{z}\, \mathrm{d}z' & (87) \\ &= \iint \delta p(\tilde{z}|z')\, p(z') \, \mathrm{d}\tilde{z}\, \mathrm{d}z'. & (88) \end{aligned}
+$$
+
+Therefore,
+
+$$
+\delta I (Z; \tilde {Z}) \cdot \delta p (\tilde {z} | z) = \int \delta p (\tilde {z} | z) p (z) \log \frac {p (\tilde {z} | z)}{p (\tilde {z})} d \tilde {z} d z, \tag {89}
+$$
+
+$$
+\nabla_ {p (\tilde {z} | z)} I (Z; \tilde {Z}) = p (z) \log \frac {p (\tilde {z} | z)}{p (\tilde {z})}. \tag {90}
+$$
+
+Let us compute the variation of the second term in IB:
+
+$$
\delta I(\tilde{Z}; Y) = \sum_{y \in \{0, 1\}} \int \delta p(\tilde{z}|y)\, p(y) \log \frac{p(\tilde{z}|y)}{p(\tilde{z})} + p(y)\, \frac{\delta p(\tilde{z}|y)\, p(\tilde{z}) - p(\tilde{z}|y)\, \delta p(\tilde{z})}{p(\tilde{z})} \, \mathrm{d}\tilde{z}. \tag {91}
+$$
+
+Note that
+
+$$
+\delta p (\tilde {z}) = \int \delta p (\tilde {z} | z) p (z) \mathrm {d} z \tag {92}
+$$
+
+$$
\delta p(\tilde{z}|y) = \int \delta p(\tilde{z}|z)\, p(z|y)\, \mathrm{d}z. \tag {93}
+$$
+
+Therefore,
+
+$$
\begin{aligned} \delta I(\tilde{Z}; Y) &= \sum_{y \in \{0, 1\}} \iint \delta p(\tilde{z}|z)\, p(y) \left[ p(z|y) \log \frac{p(\tilde{z}|y)}{p(\tilde{z})} + p(z|y) - \frac{p(\tilde{z}|y)\, p(z)}{p(\tilde{z})} \right] \mathrm{d}\tilde{z}\, \mathrm{d}z & (94) \\ &= \iint \delta p(\tilde{z}|z) \sum_{y \in \{0, 1\}} p(y) \left[ p(z|y) \log \frac{p(\tilde{z}|y)}{p(\tilde{z})} + p(z|y) - \frac{p(\tilde{z}|y)\, p(z)}{p(\tilde{z})} \right] \mathrm{d}\tilde{z}\, \mathrm{d}z & (95) \\ &= \iint \delta p(\tilde{z}|z) \sum_{y \in \{0, 1\}} p(y)\, p(z|y) \log \frac{p(\tilde{z}|y)}{p(\tilde{z})} \, \mathrm{d}\tilde{z}\, \mathrm{d}z. & (96) \end{aligned}
+$$
+
+
+Figure 7. Plot of Inverse Gaussian (IG) distribution, $p(z|1) \sim IG(d(z); \mu, \lambda)$ , under different parameters with a Gaussian (blue). Note that IG has high probability where the Gaussian does not.
+
+Therefore,
+
+$$
\begin{aligned} \nabla_{p(\tilde{z}|z)} \mathrm{IB} &= p(z) \log \frac{p(\tilde{z}|z)}{p(\tilde{z})} - \beta \sum_{y \in \{0, 1\}} p(y)\, p(z|y) \log \frac{p(\tilde{z}|y)}{p(\tilde{z})} & (97) \\ &= \sum_{y \in \{0, 1\}} p(y)\, p(z|y) \left[ \log \frac{p(\tilde{z}|z)}{p(\tilde{z})} - \beta \log \frac{p(\tilde{z}|y)}{p(\tilde{z})} \right]. & (98) \end{aligned}
+$$
+
+# D. Plot of Inverse Gaussian Distribution
+
+In Figure 7, we show a plot of our Inverse Gaussian for various parameters along with a Gaussian. Notice that the IG has mass complementary to the Gaussian, and thus represents a natural distribution for OOD if the ID is Gaussian.
+
+# E. A Study of Our Feature Shaping Over Parameters
+
+In this section, we perform an extended study of feature shaping as a function of parameters of the distributions and the $\beta$ parameter in the Information Bottleneck, extending the study in Section 4 of the main paper.
+
+# E.1. Feature Shaping Versus $\beta$
+
+In Figure 8, we explore how our feature shaping function varies as a function of $\beta$ for various distributions.
+
+# E.2. Feature Shaping Versus Distribution Parameters
+
+In Figure 9, we explore how our feature shaping function varies with different distribution parameters.
+
+# F. Visualizations for Additive Gaussian Noise Experiment
+
+Figure 10 shows a visualization of an example ImageNet image and various noise levels added to it to simulate OOD data for the experiment in Section 6.
+
Figure 11 visualizes the activations of images from the validation set of ImageNet-1k under different levels of additive Gaussian noise. As Figure 11a shows, with more noise the activations become more unstable and exhibit larger spikes. As Figure 11b shows, the activations shrink further and peak more at small values.
+
+
Figure 8. The mean of the OOD Gaussian Random Feature under the Gaussian (left), Laplace (middle) and Inverse Gaussian (right) distributions for the OOD distribution. Different curves on the same plot indicate differing weights on the $I(\tilde{Z};Y)$ component of the Information Bottleneck term, $\beta$ . The weight on the IB is fixed to $\alpha = 3.0$ . For the Gaussian case, $p(z|0) \sim \mathcal{N}(-0.5,0.5)$ and $p(z|1) \sim \mathcal{N}(0.5,0.5)$ . For the Laplace case, $p(z|0) \sim \mathcal{N}(0,0.5)$ and $p(z|1) \sim \mathrm{Lap}(0,1)$ . In the Inverse Gaussian case, $p(z|0) \sim \mathcal{N}(0,0.5)$ and $p(z|1) \sim IG(d(z);0.5,15)$ .
+
+
+
+
+
+
+Figure 9. The mean of the OOD Gaussian Random Feature under the Gaussian (left), Laplace (middle) and Inverse Gaussian (right) distributions for the OOD distribution. Different curves on the same plot indicate different OOD distribution parameters. The weight on the IB term and weight of its $I(\tilde{Z};Y)$ component are set as $\alpha = 3.0$ and $\beta = 10.0$ . For the Gaussian case, $p(z|0) \sim \mathcal{N}(-0.5,0.5)$ . For the Laplace case, $p(z|0) \sim \mathcal{N}(0,0.5)$ . In the Inverse Gaussian case, $p(z|0) \sim \mathcal{N}(0,0.5)$ .
+
+
+
+
+
+
(a) $\sigma = 25$ (b) $\sigma = 50$ (c) $\sigma = 100$ (d) $\sigma = 150$ (e) $\sigma = 255$

Figure 10. Visualization of a sample image from the ImageNet validation split under different levels of noise corruption ( $\sigma$ values).
+
+
+
+
+
+
+
+
+
+
+
+
(a) Activation range of $\mu \pm \sigma$ (b) Activation range of (min, max)

Figure 11. The network activations of ImageNet under different levels of additive Gaussian noise. The shaded regions in (a) represent one standard deviation above and below the mean, while those in (b) represent the range between the min and max activation values. With more noise, activations tend to become more unstable and take on a smaller range of values.
+
+# G. Optimal hyperparameters used in ImageNet and CIFAR benchmarking
+
+We report the optimal hyperparameters for the experiments studied in Section 6 in Table 3.
+
| Model | ID Data | y0 | y1a | z1 | y1b | m1 | z2 | m2 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ResNet-50 | ImageNet-1k | 0.0 | 0.0 | 0.52 | 0.73 | 0.61 | 1.2 | -0.3 |
| MobileNet-v2 | ImageNet-1k | 0.0 | 0.0 | 0.55 | 0.5 | 0.79 | 1.49 | -0.74 |
| ViT-B-16 | ImageNet-1k | 0.0 | 0.0 | 0.05 | 1.58 | 2.0 | 2.0 | -1.0 |
| ViT-L-16 | ImageNet-1k | 0.0 | 0.0 | 0.06 | 1.76 | 1.79 | 2.0 | -0.32 |
| DenseNet101 | CIFAR 10 | 0.0 | 0.0 | 0.51 | 0.41 | 1.18 | 1.1 | 0.37 |
| MLP-N | CIFAR 10 | -0.3 | 0.25 | 0.73 | 0.40 | 0.10 | 3.54 | 1.76 |
| DenseNet101 | CIFAR 100 | 0.0 | 0.1 | 1.0 | 2.0 | 0.17 | 1.8 | -0.18 |
| MLP-N | CIFAR 100 | 0.0 | 0.3 | 0.59 | 0.4 | 0.1 | 4.0 | 2.0 |
+
+Table 3. Our optimal hyperparameters for different models and datasets.
+
+# H. Empirical Distribution of Network Feature Outputs
+
+In this section, we show the empirical distributions of ID and OOD data for various ImageNet-1k benchmarks. As is shown in the subsequent figures, some benchmarks/architectures resemble the distribution assumptions analyzed in this paper (e.g., Gaussian ID/Gaussian OOD - Figure 12 and Gaussian ID/Laplacian OOD - Figure 13), thus showing that these may be realistic assumptions. On the other hand, some datasets (Figure 14 and Figure 15) exhibit distributions that do not fit the distributional assumptions analyzed in this paper. Nevertheless, our novel shaping function still performs well on these benchmarks, showing that our shaping function may well work even when the data differs from the assumed distributions, which is important in practice as exact distributions may not be known.
+
+
+
+
+
+
+Figure 12. Distribution of features from the penultimate layer of ViT-B-16. Comparison with In-distribution data (ImageNet-1k) and different test OOD datasets in the ImageNet-1k benchmark (Zhao et al., 2024). The ID and OOD distributions resemble the positive part of a Gaussian with different variance.
+
+
+
+
+
+
+
+
+Figure 13. Distribution of features from the penultimate layer of ResNet-50. Comparison with In-distribution data (ImageNet-1k) and different test OOD datasets in the ImageNet-1k benchmark (Zhao et al., 2024). The ID distribution resembles the positive part of a Gaussian and the OOD resembles the positive part of a Laplacian distribution.
+
+
+
+
+
+
+
+
+Figure 14. Distribution of features from the penultimate layer of MobileNet-V2. Comparison with In-distribution data (ImageNet-1k) and different test OOD datasets in the ImageNet-1k benchmark (Zhao et al., 2024). The ID and OOD both appear Laplacian; although this does not fit the ID assumption analyzed in this paper, our method nevertheless works well on this benchmark.
+
+
+
+
+
+
+
+
+Figure 15. Distribution of features from the penultimate layer of ViT-L-16. Comparison with In-distribution data (ImageNet-1k) and different test OOD datasets in the ImageNet-1k benchmark (Zhao et al., 2024). The ID and OOD distributions appear Gaussian but with heavy weight on zeros; although these distributions don't fit the distributional assumptions in the cases analyzed in the paper, our method nevertheless performs well on this benchmark.
+
+
\ No newline at end of file
diff --git a/avariationalinformationtheoreticapproachtooutofdistributiondetection/images.zip b/avariationalinformationtheoreticapproachtooutofdistributiondetection/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..9b1ec5ee2265b7690272e02ed3011d18f522b8ae
--- /dev/null
+++ b/avariationalinformationtheoreticapproachtooutofdistributiondetection/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:685d6311d9ca8215f83fab12fa51a8db94d37d1ff0716759777168db8c6f8730
+size 1693169
diff --git a/avariationalinformationtheoreticapproachtooutofdistributiondetection/layout.json b/avariationalinformationtheoreticapproachtooutofdistributiondetection/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..76642a8f4b7f86161d10d7a2c0519f19f0903030
--- /dev/null
+++ b/avariationalinformationtheoreticapproachtooutofdistributiondetection/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1c23faff83c219cf02f51a0f519757bb773db6af2307d92c6fcafae3cc300a25
+size 866628
diff --git a/avariationalperspectiveongenerativeproteinfitnessoptimization/cb160e90-8d1a-4505-9dfc-60da1a9450cb_content_list.json b/avariationalperspectiveongenerativeproteinfitnessoptimization/cb160e90-8d1a-4505-9dfc-60da1a9450cb_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..7e17b24a350761e9cff2fd34986e788581e4d47d
--- /dev/null
+++ b/avariationalperspectiveongenerativeproteinfitnessoptimization/cb160e90-8d1a-4505-9dfc-60da1a9450cb_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c04e9433318653764466b74519c7a60b624c2d22995738871d729a517a6ab04a
+size 98104
diff --git a/avariationalperspectiveongenerativeproteinfitnessoptimization/cb160e90-8d1a-4505-9dfc-60da1a9450cb_model.json b/avariationalperspectiveongenerativeproteinfitnessoptimization/cb160e90-8d1a-4505-9dfc-60da1a9450cb_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..125c398463b5ef29dc4df9a5e701c0d3170e9103
--- /dev/null
+++ b/avariationalperspectiveongenerativeproteinfitnessoptimization/cb160e90-8d1a-4505-9dfc-60da1a9450cb_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:76b1ef1e6425061cba030e4e6d93b2fcc93abb2f0ca641f77af921aded2b589b
+size 118897
diff --git a/avariationalperspectiveongenerativeproteinfitnessoptimization/cb160e90-8d1a-4505-9dfc-60da1a9450cb_origin.pdf b/avariationalperspectiveongenerativeproteinfitnessoptimization/cb160e90-8d1a-4505-9dfc-60da1a9450cb_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..86eb6728120a57e1dbc90a18e0d13ac3df92cf5a
--- /dev/null
+++ b/avariationalperspectiveongenerativeproteinfitnessoptimization/cb160e90-8d1a-4505-9dfc-60da1a9450cb_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c79e7873b4536b20d9bb549dc6979e900c02d3876a7a17acf0df4f3a383d2004
+size 2791328
diff --git a/avariationalperspectiveongenerativeproteinfitnessoptimization/full.md b/avariationalperspectiveongenerativeproteinfitnessoptimization/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..02d8a576de2fecdc14806100afba7049dbe1e490
--- /dev/null
+++ b/avariationalperspectiveongenerativeproteinfitnessoptimization/full.md
@@ -0,0 +1,395 @@
+# A Variational Perspective on Generative Protein Fitness Optimization
+
+Lea Bogensperger1 Dominik Narnhofer2 Ahmed Allam1 Konrad Schindler2 Michael Krauthammer1
+
+# Abstract
+
+The goal of protein fitness optimization is to discover new protein variants with enhanced fitness for a given use. The vast search space and the sparsely populated fitness landscape, along with the discrete nature of protein sequences, pose significant challenges when trying to determine the gradient towards configurations with higher fitness. We introduce Variational Latent Generative Protein Optimization (VLGPO), a variational perspective on fitness optimization. Our method embeds protein sequences in a continuous latent space to enable efficient sampling from the fitness distribution and combines a (learned) flow matching prior over sequence mutations with a fitness predictor to guide optimization towards sequences with high fitness. VLGPO achieves state-of-the-art results on two different protein benchmarks of varying complexity. Moreover, the variational design with explicit prior and likelihood functions offers a flexible plug-and-play framework that can be easily customized to suit various protein design tasks.
+
+# 1. Introduction
+
Protein fitness optimization seeks to improve the functionality of a protein by altering its amino acid sequence, to achieve a desired biological property of interest called "fitness" – for instance, stability, binding affinity, or catalytic efficiency. It requires searching through a vast combinatorial space (referred to as the "fitness landscape"), where the number of possible sequences grows exponentially with the sequence length $d$ , while only a small subset of these sequences exhibit meaningful biological functionality (Hermes et al., 1990). Traditionally, protein fitness optimization has been addressed through directed evolution (Romero & Arnold, 2009), mimicking natural evolution in the laboratory with a time-consuming, yet narrow random exploration of the fitness landscape. This highlights the need for efficient in-silico methods capable of exploring the space of potential sequences, so as to prioritize promising candidates for experimental validation.

1 University of Zurich 2 ETH Zurich. Correspondence to: Lea Bogensperger .

Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
Many different approaches exist, ranging from generative models (Jain et al., 2022; Gruver et al., 2024; Notin et al., 2022) and evolutionary greedy algorithms (Sinai et al., 2020) to gradient-based sampling strategies, such as Gibbs With Gradients (GWG) (Grathwohl et al., 2021; Emami et al., 2023) and smoothed GWG variants (Kirjner et al., 2023). In general, these methods are challenged by the high-dimensional, sparse and discrete nature of the fitness landscape, characterized by ruggedness (Van Cleve & Weissman, 2015) due to epistasis, as well as holes due to proteins with very low fitness (Johnston et al., 2023; Sinai & Kelsic, 2020). To deal with the inherently discrete nature of amino acids, computational methods tend to rely on discretized processes, and thus must cope with issues such as non-smooth gradients and the vast diversity of possible sequences (Kirjner et al., 2023; Frey et al., 2023).
+
+Here we look at the problem from a different perspective: instead of relying on a token-based sequence representation, we embed sequences in a continuous latent space and learn the corresponding latent distribution in a generative manner. In this way, patterns and relationships can be captured that are difficult to model in the original discrete space, especially if one only has access to a small data set containing only a few thousand mutations of a protein.
+
+To construct a prior distribution over latent protein sequences, we leverage flow matching (Lipman et al., 2023; Liu et al., 2022), a powerful generative modeling scheme that learns smooth, continuous representations amenable to gradient-based sampling and optimization. The prior is then integrated into a variational framework, making it possible to guide the search towards regions of high fitness with a fitness predictor in the form of a neural network, trained on a limited set of sequence mutations with associated fitness labels. See Figure 1. The versatility of the variational framework means that it can easily be tuned to different protein optimization tasks by suitably adapting the prior and the guidance function. We validate VLGPO on two public benchmarks for protein fitness optimization in limited data
+
+
+Figure 1. Overview of VLGPO sampling. The central section illustrates the VAE framework, showcasing protein sequences, their latent representations $z$ , and the approximate posterior distribution. While the upper section depicts unconditional sampling from the prior $\mathrm{p}(x)$ using flow matching in the latent space, the lower section illustrates the modifications introduced by VLGPO during sampling. We additionally incorporate a likelihood term $\mathrm{p}(y|x)$ to condition on the fitness $y$ , enabling sequence generation from the posterior distribution $\mathrm{p}(x|y)$ and facilitating sampling from high-fitness regions (as shown by the shifted and reshaped distribution).
+
+regimes, namely Adeno-Associated Virus (AAV) (Bryant et al., 2021) and Green Fluorescent Protein (GFP) (Sarkisyan et al., 2016), as suggested by (Kirjner et al., 2023).
+
+To summarize, our contributions are:
+
+- We introduce Variational Latent Generative Protein Optimization (VLGPO), a variational framework for protein fitness optimization that enables posterior sampling of protein sequences conditioned on desired fitness levels. Our method combines a learned prior and a fitness predictor to effectively guide the optimization towards high-fitness regions.
+- We perform fitness optimization in a continuous latent representation that embeds meaningful relations between discrete protein sequences. Within this latent space, we employ flow matching to learn a generative prior that enables efficient exploration of the fitness landscape and thus facilitates the discovery of high-fitness sequences.
- We demonstrate state-of-the-art performance on established benchmarks for protein fitness optimization, namely AAV and GFP, targeting in particular tasks of medium and high difficulty in a limited data regime.1
+
+# 2. Related Work
+
+We consider in-silico methodologies for protein fitness optimization at the sequence level, while emphasizing that there is an extensive stream of research centered around active learning frameworks, which iteratively integrate computational predictions with experimental validations (Yang et al., 2025; Lee et al., 2024). While these methods are often considered the gold standard due to their integration of experimental feedback, our contribution is confined to purely computational strategies, thereby complementing existing active learning approaches.
+
+In a typical directed evolution setup (Romero & Arnold, 2009), biologists simulate nature in the laboratory by running multiple rounds of mutagenesis, searching through the local landscape by sampling a few mutations away from the current position in the sequence, effectively discovering new sequences through random walks in the sequence space. This process can be resource-intensive, as many mutations do not enhance the fitness of the starting sequence, and the vast number of combinations makes exhaustive exploration impractical. Therefore, many techniques have been introduced to address these challenges by employing surrogate models to guide the search more efficiently (Sinai et al., 2020; Brookes et al., 2019; Trabucco et al., 2021; Ren et al., 2022). For instance, Adalead (Sinai et al., 2020) uses a black-box predictor model to inform a greedy algorithm to prioritize mutations that are more likely to improve protein fitness.
+
Moreover, generative methods have been explored for sequence generation. For instance, GFlowNets (GFNAL) (Jain et al., 2022) are designed to sample discrete objects, such as biological sequences, with probabilities proportional to a given reward function, facilitating the generation of diverse and high-quality candidates. Further, discrete diffusion models have been used for sequence sampling, where gradients in the hidden states of the denoising network help to guide the sequence design (Gruver et al., 2024). Alternatively, a walk-jump algorithm was proposed to learn the distribution of one-hot encoded sequences using a single noise level and a Tweedie step to recover the sampled sequences after MCMC sampling (Frey et al., 2023), which was further extended using gradient guidance in the noisy manifold (Ikram et al., 2024). Likewise, GWG proposes a strategy to obtain gradients for MCMC sampling of discrete sequences (Grathwohl et al., 2021; Emami et al., 2023). The method Gibbs sampling with Graph-based Smoothing (GGS) builds upon this by additionally regularizing the noisy fitness landscape with graph-based smoothing (Kirjner et al., 2023).
+
Since the search space of protein sequences grows with sequence length, many works have started to recognize the significance of a latent space (Praljak et al., 2023) or embedding spaces that are used in large-scale protein language models (Lin et al., 2023). LatentDE (Tran et al., 2024), for instance, combines directed evolution with a smoothed space, which allows employing gradient ascent for protein sequence design in the latent space, guided by a fitness predictor.
+
+Within the diverse array of approaches to protein fitness optimization, we propose a variational perspective that is known for its efficacy in other domains like inverse problems and image reconstruction. Specifically, we integrate a generative flow matching prior that learns the distribution of protein sequence mutations from a data set of protein sequences and combine it with a fitness predictor. The integration of this predictor helps to effectively guide a sampling process that generates high-fitness samples. Moreover, we address the problem within a compressed latent space, encoding protein sequences into a latent representation that facilitates continuous optimization techniques based on gradient information. This approach enables efficient exploration of the protein fitness landscape, leveraging the latent space to perform guided sampling directly on the encoded sequences.
+
+# 3. Method
+
+# 3.1. Protein Optimization
+
Let $x \in \mathcal{V}^d$ represent a protein sequence, with dimensionality $d$ corresponding to the number of amino acids chosen from a vocabulary $\mathcal{V}$ of 20 amino acids. Protein fitness optimization seeks to find a sequence $x$ that maximizes a specific fitness metric $y \coloneqq f(x)$ with $y \in \mathbb{R}$ , which quantifies desired protein functionality such as stability, activity, or expression.
+
+Therefore, we work with paired data $\mathcal{S} = \{(x_i,y_i)\}_{i = 1}^N$ (note that $\mathcal{S}$ is only a subset of the entire data $S^{*}$ , see Table 2) and we use the parameterized convolutional neural network (CNN) $g:\mathcal{V}^d\to \mathbb{R}$ (Kirjner et al., 2023) to infer the fitness for a given sequence. Specifically, we employ $g_{\phi}$ and $g_{\tilde{\phi}}$ as predictors, which are trained on a small subset of the data (see Section 4.1) without and with graph-based smoothing, respectively. Additionally, we use $g_{\psi}$ as an in-silico oracle for the final evaluation, trained on the entire paired data set $S^{*}$ . Note that all models share an identical architecture, differing only in their respective weights which we re-use without further training.
+
+Building on the predictive framework for estimating protein fitness, we now turn our attention to creating new sequences that may exhibit desirable properties. Generative modeling provides a powerful approach for navigating the vast sequence space, offering a data-driven way to propose candidate proteins beyond those observed in the training set. In the following, we introduce generative models and discuss how they can be leveraged to discover novel protein sequences with optimized fitness.
+
+# 3.2. Generative Modeling
+
+A recent class of generative models, known as diffusion models (Ho et al., 2020; Song et al., 2021; Sohl-Dickstein et al., 2015), has shown remarkable success in generating high-quality data across various domains. These models work by progressively transforming simple noise distributions into complex data distributions through a series of iterative steps. During training, noise is systematically added to the data sample $x$ at varying levels, simulating a degradation process over time $t$ . The model $\epsilon_{\theta}$ is then tasked with learning to reverse this process by predicting the added noise $\epsilon$ for $x_{t}$ at each step, effectively reconstructing the original data from the noisy observations. In detail, a model is trained to predict the added noise $\epsilon$ at each step by minimizing the objective
+
+$$
+\mathcal {L} (\theta) = \mathbb {E} _ {x \sim \mathrm {p} (x), t \sim \mathcal {U} _ {[ 0, 1 ]}, \epsilon \sim \mathcal {N} (0, I)} \left[ \| \epsilon - \epsilon_ {\theta} (x _ {t}, t) \| ^ {2} \right].
+$$
+
Flow Matching. A more recent approach to generative modeling is given by the versatile framework of flow matching (Lipman et al., 2023; Liu et al., 2022). Rather than removing noise from data samples, flow matching aims to model the velocity of the probability flow $\Psi_t$ , which governs the dynamics of how one probability distribution is transformed into another over time. By learning the velocity field $u_t$ of the probability flow, the model $v_{\theta,t}$ captures the evolution of a simple base distribution at $t = 0$ into a more complex target distribution $\mathrm{p}(x)$ at $t = 1$ , directly modeling the flow between them. Since the velocity field $u_{t}$ is intractable, it was shown in (Lipman et al., 2023) that we can equivalently minimize the conditional flow matching loss:
+
+$$
+\min _ {\theta} \mathbb {E} _ {t, x _ {1}, x _ {0}} \left[ \frac {1}{2} \| v _ {\theta , t} \left(\Psi_ {t} \left(x _ {0}\right)\right) - \left(x _ {1} - x _ {0}\right) \| ^ {2} \right], \tag {1}
+$$
+
+where $t \sim \mathcal{U}_{[0,1]}$ , $x_1 \sim \mathrm{p}(x)$ and $x_0 \sim \mathcal{N}(0,I)$ , and the conditional flow is given by $\Psi_t(x_0) = (1 - t)x_0 + tx_1$ . Once trained, samples can be generated by numerical integration of the corresponding neural ordinary differential equation (ODE) with $t \in [0,1]$ :
+
+$$
+\frac {\mathrm {d}}{\mathrm {d} t} \Psi_ {t} (x) = v _ {\theta , t} (\Psi_ {t} (x)). \tag {2}
+$$
+
+Our aim is to learn a flow-based generative model that approximates the distribution of sequence variants of a protein $\mathrm{p}(x)$ . We then seek to leverage this model in order to generate sequence variants with high protein fitness through guidance, as explained in the following.
+
# 3.3. Classifier guidance
+
Diffusion and flow-based methods can be guided by the log-likelihood gradient of an auxiliary classifier during generation, which enables conditional sampling (Dhariwal & Nichol, 2021). In detail, our goal is to sample from a conditional distribution $\mathrm{p}(x|g_{\phi}(x) = y)$ , where $g_{\phi}$ represents the predictor and $y$ denotes the desired fitness value. We therefore adopt the framework of classifier guidance, which allows for the decomposition $\mathrm{p}(x|y) \propto \mathrm{p}(y|x)\,\mathrm{p}(x)$ . Thus, we guide the sampling process by using the gradient of the predictor $g_{\phi}(x)$ with respect to the input sequence $x$ . By introducing this gradient into the generative process, we can bias the sampling trajectory towards regions of the distribution that are more likely to yield sequences with the desired fitness value. The velocity field $v_{\theta ,t}$ in the generative framework is modified to incorporate this guidance, yielding the following variational update:
+
+$$
+v _ {\theta , t} (x | y) = v _ {\theta , t} (x) + \alpha_ {t} \nabla_ {x} \log p (y | x), \tag {3}
+$$
+
where $\nabla_{x}\log \mathrm{p}(y|x) \propto -\frac{1}{2}\nabla_{x}\| g_{\phi}(x) - y\|^{2}$ represents the gradient of the log-likelihood of a sequence $x$ having desired fitness $y$ and $\alpha_{t}$ is a scheduler-dependent constant (Zheng et al., 2023). Given the goal to maximize the fitness value of generated sequences, one could also set the gradient of the log-likelihood to $\frac{1}{2}\nabla_{x}\| g_{\phi}(x)\|^{2}$ , or a similar suitable form. Both approaches perform very similarly in practice. In this work, we chose the first version to demonstrate the possibility of steering sequences toward specific fitness values.
+
For the classifier guidance we use the trained CNN-based predictors $g_{\phi}$ and the smoothed $g_{\tilde{\phi}}$ from (Kirjner et al., 2023). Note that for guiding the process towards the highest fitness, $y$ is simply set to 1, which represents the highest fitness in the normalized fitness spectrum.
+
+# 3.4. Latent space representation
+
+So far, we introduced a general framework that in theory allows for sampling from a learned distribution of protein sequences. However, these sequences represent proteins that are combinations of a discrete set of amino acids. As a result, the underlying distribution of these sequences is likely sparse and complex, making it difficult to approximate or directly sample from in its original form. To overcome this limitation, we operate in an embedded latent space, where protein sequences are encoded as continuous representations. This is achieved by a VAE framework (Kingma & Welling, 2022), which maps discrete sequences to a continuous latent space through an encoder $\mathcal{E}:\mathcal{V}^d\mapsto \mathbb{R}^l$ and reconstructs them back to the original sequence space using a decoder network $\mathcal{D}:\mathbb{R}^l\mapsto \mathbb{R}^{d\times |\mathcal{V}|}$ , where $l \ll d$ . Because of the discrete nature of amino acid tokens, the decoder produces logits, which are then mapped to tokens via an argmax operation. The training objective of the employed $\beta$ -VAE (Higgins et al., 2017) is given by the weighted Evidence Lower Bound (ELBO):
+
+$$
+\min_{\nu,\mu} \; \mathbb{E}_{z \sim \mathrm{q}_{\mu}(z|x)} \left[ -\log \mathrm{p}_{\nu}(x|z) \right] + \beta \, \mathrm{KL}\left( \mathrm{q}_{\mu}(z|x) \,\|\, \mathrm{p}(z) \right), \tag{4}
+$$
+
+where due to the discrete tokens $-\log \mathrm{p}_{\nu}(x|z)$ simplifies to the cross-entropy loss in our case.
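For a diagonal Gaussian posterior, the weighted ELBO of Equation (4) reduces to the cross-entropy reconstruction term plus a closed-form KL divergence. A minimal numerical sketch follows; shapes and names are illustrative, not the paper's implementation:

```python
import numpy as np

def beta_vae_loss(logits, x_tokens, mu, logvar, beta):
    """Weighted ELBO of Eq. (4): cross-entropy reconstruction plus
    beta-scaled KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior."""
    # log-softmax over the vocabulary axis of the decoder logits (d x |V|)
    logp = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    recon = -logp[np.arange(len(x_tokens)), x_tokens].sum()  # -log p(x|z)
    # closed-form KL between N(mu, diag(exp(logvar))) and N(0, I)
    kl = -0.5 * np.sum(1 + logvar - mu ** 2 - np.exp(logvar))
    return recon + beta * kl
```

With uniform logits over a vocabulary of size 4 and a standard-normal posterior ($\mu = 0$, $\log\sigma^2 = 0$), the KL term vanishes and the loss is just the reconstruction entropy, $2\log 4$ for two sequence positions.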
+
+Note that while the VAE is trained with variational inference, VLGPO goes further by introducing additional generative modeling components: flow matching as described in Section 3.2 and classifier-guided sampling in Section 3.3. We show how these extensions refine the generation and help to better control it.
+
+# 3.5. Variational Latent Generative Protein Optimization
+
+From here on we will use the building blocks introduced earlier to describe our proposed VLGPO approach. We start by using our respective sequence data $x \sim \mathrm{p}(x)$ to train a VAE with encoder $\mathcal{E}$ and decoder $\mathcal{D}$ that compresses the higher-dimensional discrete protein sequences into a continuous latent space representation $z \sim \mathrm{p}(z|x)$ . In order to model and sample from the learned latent space in an effective way, we train a flow matching model $v_{\theta,t}$ to learn the probability flow dynamics in the latent space. This network captures the transformation between a simple base distribution (e.g., Gaussian noise) at $t = 0$ and the complex latent distribution of protein sequences at $t = 1$ . By integrating the learned flow from $t \in [0,1]$ , we can efficiently generate new latent representations $z_1$ , which can subsequently be decoded back into protein sequences via the VAE decoder, resulting in $x \sim \mathrm{p}_{\nu}(x|z_1) = \mathcal{D}(z_1)$ , similar to (Esser et al., 2024).
+
+Algorithm 1 VLGPO sampling
+Require: $K,J,y,\alpha_{t}$
+1: Initialize $z_0\sim \mathrm{p}_0(z)\coloneqq \mathcal{N}(0,I)$
+2: Set step size $\Delta t\gets \frac{1}{K}$
+3: for $k = 0$ to $K - 1$ do
+4: $t\gets k\cdot \Delta t$
+5: $z_t^\prime \gets z_t + \Delta t v_{\theta ,t}(z_t)$
+6: for $j = 0$ to $J - 1$ do
+7: $\hat{z}_1 \gets z_t^{\prime} + (1 - t - \Delta t)\, v_{\theta,t}(z_t^{\prime})$
+8: $z_{t}^{\prime}\gets z_{t}^{\prime} - \frac{\alpha_{t}}{2}\nabla_{z_{t}^{\prime}}\| g_{\phi}(\mathcal{D}(\hat{z}_{1})) - y\|^{2}$
+9: end for
+10: $z_{t + \Delta t}\gets z_t^\prime$
+11: end for
+12: return $x = \mathcal{D}(z_1)$
+
+Sampling from the posterior. Instead of merely sampling from the sequence distribution $\mathrm{p}(x)$ , we also seek to generate high-fitness sequences. To achieve this, we condition the sampling process on a fitness score $y$ , ideally set to the maximum value $y = 1$ , which is estimated by a predictor $\hat{y} = g_{\phi}(x)$ .
+
+Although incorporating the likelihood term into the gradient, as shown in Equation (3), may appear straightforward, it is actually challenging because it becomes intractable due to the time-dependent nature of the diffusion/flow model. Moreover, explicitly including this term can push the generated samples off the current manifold related to time point $t$ (Chung et al., 2022). Inspired by (Chung et al., 2023; Ben-Hamu et al., 2024), we employ a scheme that evaluates the likelihood at $\hat{x}_1$ , while the gradient of the likelihood is calculated at $x_t$ , which has the effect of constraining the update to the same manifold. Therefore, at each of the $K$ sampling steps the model estimates $\hat{z}_1$ (Line 7, Algorithm 1) which is decoded to $\hat{x}_1$ , the denoised version of the current decoded sample $x_t$ , using the learned flow model $v_{\theta,t}$ . The likelihood is then evaluated at $\hat{x}_1$ to reflect the target data distribution. However, the gradient of the likelihood, which guides the generative process, is computed with respect to $x_t$ , leading to backpropagation through $v_{\theta,t}$ . Furthermore, as the predictor $g_\phi$ was trained in sequence space, our likelihood function changes to
+
+$$
+\nabla_{z_t^{\prime}} \log \mathrm{p}(y | \hat{x}_1) \propto -\frac{1}{2} \nabla_{z_t^{\prime}} \| g_{\phi}(\mathcal{D}(\hat{z}_1)) - y \|^{2}, \tag{5}
+$$
+
+where we additionally have to backpropagate through the decoder. The pseudocode of our VLGPO can be found in Algorithm 1, a detailed visualization of the sampling scheme is reported in Figure 2. The hyperparameters $J$ and $\alpha_{t}$ denote the number of gradient descent steps on the likelihood and the guidance strength, respectively.
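A toy, self-contained sketch of Algorithm 1 is given below. Backpropagation through $v_{\theta,t}$ and $\mathcal{D}$ is replaced by a finite-difference gradient, and the velocity field, decoder, and predictor passed in are placeholders for the trained networks, not the paper's models:

```python
import numpy as np

def vlgpo_sample(v, D, g, y, z0, K=32, J=1, alpha=0.1, eps=1e-4):
    """Sketch of Algorithm 1 (VLGPO sampling) with numerical gradients.
    v(z, t): velocity field, D: decoder, g: fitness predictor, y: target."""
    z, dt = z0.astype(float).copy(), 1.0 / K
    for k in range(K):
        t = k * dt
        zp = z + dt * v(z, t)                       # Euler step (Line 5)
        for _ in range(J):                          # likelihood steps (Lines 6-9)
            def loss(zq):                           # ||g(D(z1_hat)) - y||^2, Line 7
                z1_hat = zq + (1 - t - dt) * v(zq, t)
                return np.sum((g(D(z1_hat)) - y) ** 2)
            grad = np.zeros_like(zp)                # finite-difference gradient
            for i in range(zp.size):
                d = np.zeros_like(zp)
                d[i] = eps
                grad[i] = (loss(zp + d) - loss(zp - d)) / (2 * eps)
            zp = zp - 0.5 * alpha * grad            # guided update (Line 8)
        z = zp
    return D(z)                                     # decode final latent (Line 12)

# toy run: zero flow, identity decoder and predictor; samples drift towards y = 1
out = vlgpo_sample(lambda z, t: np.zeros_like(z), lambda z: z, lambda s: s,
                   y=1.0, z0=np.zeros(3), K=8, J=1, alpha=0.5)
```

With a zero velocity field the guidance term dominates, and each step contracts the latent towards the target fitness, illustrating how Lines 5 to 8 interleave flow integration and likelihood gradients.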
+
+# 4. Experiments
+
+# 4.1. Data Sets
+
+We adopt the medium and hard protein optimization benchmarks on AAV and GFP from (Kirjner et al., 2023). Given the full data set $S^{*}$ , the task difficulty is determined by (i) the fitness percentile range of the considered sequences (20-40 for medium, < 30 for hard) and (ii) the required gap of mutations to reach any sequence of the $99^{\text{th}}$ fitness percentile of $S^{*}$ (a gap of 6 mutations for medium, 7 for hard), see Table 1.
+
+Table 1. Task definition.
+
+| Task | Range % | Gap |
+| --- | --- | --- |
+| Medium | 20-40 | 6 |
+| Hard | < 30 | 7 |
+
+Together, this results in four different tasks described in Table 2, each of which only sees a limited number of $N$ sequences in a limited fitness range. The idea of protein fitness optimization is to enhance these sequences to higher, previously unseen fitness values. The setting reflects realistic scenarios in terms of data set sizes. The diversity (Appendix A.2) in the training data sets of the four tasks is quite high: for GFP (medium) it is 14.5, for GFP (hard) 16.3, for AAV (medium) 15.9, and for AAV (hard) 18.4. On the other hand, the diversity within the top-performing (99th percentile) sequences of the full data set $S^{*}$ is 4.73 for GFP and 5.23 for AAV.
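Assuming the diversity metric of Appendix A.2 is the average pairwise Hamming distance among equal-length sequences (following Jain et al., 2022; this is an assumption, since the appendix definition is not reproduced here), it can be sketched as:

```python
from itertools import combinations

def diversity(seqs):
    """Average pairwise Hamming distance between equal-length sequences
    (assumed definition of the diversity metric in Appendix A.2)."""
    pairs = list(combinations(seqs, 2))
    return sum(sum(a != b for a, b in zip(s, t)) for s, t in pairs) / len(pairs)
```
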
+
+Table 2. GFP and AAV data sets with number of data samples $N$ , median normalized fitness scores and fitness range.
+
+| Task | N | Fitness ↑ | Fitness Range |
+| --- | --- | --- | --- |
+| GFP Medium | 2828 | 0.09 | [0.01, 0.62] |
+| GFP Hard | 2426 | 0.01 | [0.0, 0.1] |
+| AAV Medium | 2139 | 0.32 | [0.29, 0.38] |
+| AAV Hard | 3448 | 0.27 | [0.0, 0.33] |
+
+For in-silico evaluation of the generated sequences with the oracle $g_{\psi}$ , we use the median normalized fitness, diversity and novelty following (Jain et al., 2022), see Appendix A.2. The oracle was trained on the complete DMS data with 56,086 mutants for GFP and 44,156 mutants for AAV. While diversity and novelty are reported in the evaluation, neither higher nor lower values are considered definitively superior for a sequence. Note that $y_{\mathrm{min}}$ and $y_{\mathrm{max}}$ from the entire data set $S^*$ are used for both GFP and AAV to normalize fitness scores to [0, 1].
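The fitness normalization itself is plain min-max scaling with the extremes of the full data set $S^*$; as a sketch:

```python
def normalize_fitness(y, y_min, y_max):
    """Min-max scale a raw fitness value to [0, 1] using the extremes
    y_min and y_max of the full data set S*."""
    return (y - y_min) / (y_max - y_min)
```
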
+
+
+Figure 2. Schematic depiction of classifier guidance, with $J = 1$ and $K = 6$ . Grey lines represent the latent manifolds at different time steps $t$ , the blue line marks the trajectory of the maximum likelihood. Solid arrows indicate how the latent evolves over time. Left: Naive guidance with likelihood gradients $\nabla z_{t}$ computed directly at $z_{t}$ pushes the sample off the manifold. This error accumulates, as indicated by the purple regions. Right: Guidance with manifold constraint, as employed in VLGPO (Algorithm 1), converges to a valid sequence with fitness $y$ . Solid arrows again denote the evolution of the latent, dashed arrows indicate the flow posterior sampling scheme that ensures the latent stays on the manifold when applying the likelihood gradient.
+
+
+
+# 4.2. Implementation Details
+
+For training, we start by learning the VAE to compress the sequence token input space ( $d = 28$ and $d = 237$ for AAV and GFP) to $l = 16$ and $l = 32$ , respectively. A learning rate of 0.001 with a convolutional architecture and $\beta \in \{0.01, 0.001\}$ for AAV and GFP is used for training the encoder $\mathcal{E}$ and decoder $\mathcal{D}$ in Equation (4); see also Appendix B.2. To learn the prior distribution of embedded latent sequences $z = \mathcal{E}(x)$ using flow matching, the 1D CNN commonly used for denoising diffusion probabilistic models (DDPMs) $^2$ is employed. A learning rate of $5 \times 10^{-5}$ and a batch size of 1024 were used to train $v_{\theta, t}$ for 1000 epochs. All VAE and flow matching models are always trained under the limited-data setting listed in Table 2.
+
+At inference time, we follow the procedure outlined in Algorithm 1 to generate samples. The predictors $g_{\phi}$ and $g_{\tilde{\phi}}$ (for details, see Appendix A.1) are re-used without any further training (Kirjner et al., 2023). We start from $z_0 \sim \mathcal{N}(0, I)$ and use $K = 32$ ODE steps to integrate the learned flow until we obtain $z_1$ . To optimize sequence fitness, the condition $y = 1$ is selected for all samples. The parameters $\alpha_t$ and $J$ are determined via a hyperparameter search, as shown in Figure 3 and discussed in detail in Appendix B.4. Ultimately, they modulate the trade-off between increasing fitness and maintaining diversity. After generating 512 samples $z_1$ to encourage sampling from the entire learned distribution, they are decoded using $x = \mathcal{D}(z_1)$ . Potential duplicates are then filtered out, and the top- $k$ ( $k = 128$ ) samples, ranked by the predictor ( $g_{\phi}$ or $g_{\tilde{\phi}}$ , respectively), are selected. Note that the predictors $g_{\phi}$ and $g_{\tilde{\phi}}$ for each setting, used for classifier guidance and for ranking the samples, are also trained only on the data sets listed in Table 2.
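The inference-time selection described above (generate, decode, deduplicate, rank by the predictor, keep the top-$k$) can be sketched as follows; `decode` and `predict` are placeholders for $\mathcal{D}$ and $g_{\phi}$:

```python
def select_topk(latents, decode, predict, k=128):
    """Decode generated latents, drop duplicate sequences, and keep the
    top-k samples ranked by the fitness predictor."""
    unique = {decode(z) for z in latents}  # duplicate sequences collapse in the set
    return sorted(unique, key=predict, reverse=True)[:k]

# toy example: several latents decode to the same sequence and are merged
top = select_topk([0.6, 1.2, 1.4, 2.6], lambda z: "A" * round(z), len, k=2)
```
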
+
+For evaluation of the generated samples, we use the oracle $g_{\psi}$ as in-silico fitness estimate, see Appendix A.1. The oracle is directly sourced from (Kirjner et al., 2023) and is the only model that was trained using the entire data $S^{*}$ . For sampling following Algorithm 1, we use $g_{\phi}$ and compare VLGPO to GWG (which also uses the same trained predictor), and the identical smoothed predictor $g_{\tilde{\phi}}$ in VLGPO to GGS (Kirjner et al., 2023). We benchmark against the respective baselines, namely GFlowNets (GFN-AL) (Jain et al., 2022), model-based adaptive sampling (CbAS) (Brookes et al., 2019), greedy search (AdaLead) (Sinai et al., 2020), Bayesian optimization (BOqei) (Wilson et al., 2017), conservative model-based optimization (CoMS) (Trabucco et al., 2021) and proximal exploration (PEX) (Ren et al., 2022). Moreover, we investigate the performance of the recently introduced gradient-guided walk-jump sampling algorithm (gg-dWJS), which extends the walk-jump sampling framework originally developed for antibody sequence design (Ikram et al., 2024; Frey et al., 2023). Because the available source code was not directly compatible with the data sets in Table 2, we adapted our existing model architectures for implementation. We then performed a grid search over sampling parameters, followed by top- $k$ sampling for each task, to ensure a fair comparison.
+
+# 4.3. Results
+
+We compare VLGPO as illustrated in Algorithm 1 for the four tasks in Table 2 by averaging over five seeds and computing the median normalized fitness, diversity and novelty (see Appendix A.2). The results reported in Table 3 and Table 4 for GFP and AAV, respectively, demonstrate the fitness improvement of VLGPO with both predictors $(g_{\phi}$ and $g_{\tilde{\phi}})$ over all other benchmarked methods. In particular, VLGPO shows clear fitness improvement over GWG (which uses the same predictor $g_{\phi}$ ) and GGS (which uses the same predictor $g_{\tilde{\phi}}$ ). Moreover, the performance difference between our method and GWG (the case of using the non-smoothed predictor $g_{\phi}$ ) further highlights the robustness of our approach in limited data regimes and supports the advantage of the guided flow matching prior in the latent space.
+
+Figure 3. Grid search for median fitness depending on sampling parameters $\alpha_{t}$ and $J$ for the different tasks using the predictor $g_{\phi}$ : (a) GFP medium, (b) GFP hard, (c) AAV medium, (d) AAV hard. In general, higher values of $\alpha_{t}$ and $J$ , corresponding to strong classifier guidance, yield higher predicted fitness values.
+
+When it comes to diversity and novelty, there is no clear definition of what is better – higher or lower scores (Kirjner et al., 2023). In general, we observe that reported values of VLGPO are in line with competing methods. Moreover, the variations between different methods in terms of diversity and novelty are significantly larger in GFP than in AAV. That discrepancy may arise due to GFP sequences being longer, thus representing a sparser and higher-dimensional search space that makes the protein harder to optimize. Interestingly, the sequences in the $99^{\mathrm{th}}$ fitness percentile, which are on the top end of high-fitness sequences, show a diversity of 4.73 for GFP and 5.23 for AAV, which are approximately comparable to our results in Table 3 and Table 4, except for AAV hard. In practice, the smoothed predictor $g_{\tilde{\phi}}$ reduces the diversity within the generated sequences for both GGS and VLGPO. Additionally, more samples tend to collapse to the same decoded sequence, likely due to the smoother, less distinct gradients.
+
+While gg-dWJS is conceptually similar to VLGPO, its performance on both GFP and AAV (hard) does not fully align with our findings. We hypothesize two main reasons for this discrepancy. First, the GFP data set (Table 2) is relatively small and limited to mutations of a single protein, posing a challenge for methods that rely on one-hot encoding. Although gg-dWJS performs well on AAV, the one-hot-encoded space becomes extremely sparse for the longer GFP sequences ( $d = 237$ ), causing many generated samples to collapse onto identical sequences. Second, the discriminator model acting as the guidance for gg-dWJS (analogous to our predictor $g_{\phi}$ ) is trained in the noisy one-hot-encoded space of sequences with a single noise level, which limits its predictive capabilities.
+
+# 4.4. Fitness Extrapolation
+
+Table 2 illustrates the limited fitness range of the sequences in each task, whereas protein fitness optimization aims to sample sequences with higher fitness values. Because the available data set is small, extrapolating to these high-fitness sequences during sampling becomes especially challenging. The following experiment examines how much the oracle-evaluated fitness $y_{\mathrm{gt}}$ (provided by $g_{\psi}$ ) deviates from the target fitness $y$ , which is used as an input to the sampling in Algorithm 1. We compare two approaches: our variational method VLGPO and a directly learned posterior $\mathrm{p}(x|y)$ via fitness-conditioned flow matching $v_{\theta,t}(z_t,y)$ . The learned posterior does not need the predictor $g_{\phi}$ for classifier guidance, as it learns the conditional distribution in the latent space end-to-end. For a fair comparison, the raw output without top- $k$ sampling is used. The results are depicted in Figure 4 for GFP and AAV hard.
+
+Due to the sparse data availability and the limited fitness range, the evaluated fitness $y_{\mathrm{gt}}$ cannot be expected to precisely follow the required fitness $y$ . Nevertheless, the results in Figure 4 clearly demonstrate the advantage of classifier guidance in VLGPO, whereas the direct posterior fails to effectively exploit the given fitness condition. This gap is most evident in the GFP hard task, where the training data's fitness range is only in [0.0, 0.1], making it difficult for the learned posterior to extrapolate to higher fitness values. In contrast, VLGPO leverages the additional gradient information from the predictor $g_{\phi}$ , thereby overcoming these limitations in higher-fitness regions. This highlights the advantages of using a separate classifier for guidance in domains where classifier-free guidance struggles to produce high-fitness sequences.
+
+Table 3. GFP optimization results. Best score for fitness in bold, second-best underlined, and the results for our method (VLGPO) are highlighted in grey.
+
+| Method | Fitness ↑ (Medium) | Diversity (Medium) | Novelty (Medium) | Fitness ↑ (Hard) | Diversity (Hard) | Novelty (Hard) |
+| --- | --- | --- | --- | --- | --- | --- |
+| GFN-AL | 0.09 ± 0.1 | 25.1 ± 0.5 | 213 ± 2.2 | 0.1 ± 0.2 | 23.6 ± 1.0 | 214 ± 4.2 |
+| CbAS | 0.14 ± 0.0 | 9.7 ± 1.1 | 7.2 ± 0.4 | 0.18 ± 0.0 | 9.6 ± 1.3 | 7.8 ± 0.4 |
+| AdaLead | 0.56 ± 0.0 | 3.5 ± 0.1 | 2.0 ± 0.0 | 0.18 ± 0.0 | 5.6 ± 0.5 | 2.8 ± 0.4 |
+| BOqei | 0.20 ± 0.0 | 19.3 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.5 | 94.6 ± 71 | 54.1 ± 81 |
+| CoMS | 0.00 ± 0.1 | 133 ± 25 | 192 ± 12 | 0.0 ± 0.1 | 144 ± 7.5 | 201 ± 3.0 |
+| PEX | 0.47 ± 0.0 | 3.0 ± 0.0 | 1.4 ± 0.2 | 0.0 ± 0.0 | 3.0 ± 0.0 | 1.3 ± 0.3 |
+| gg-dWJS | 0.55 ± 0.1 | 52.3 ± 3.4 | 16.3 ± 5.7 | 0.61 ± 0.1 | 68.0 ± 5.6 | 44.8 ± 47 |
+| GWG | 0.10 ± 0.0 | 33.0 ± 0.8 | 12.8 ± 0.4 | 0.0 ± 0.0 | 4.2 ± 7.0 | 7.6 ± 1.1 |
+| VLGPO, predictor $g_{\phi}$ | **0.87 ± 0.0** | 4.31 ± 0.1 | 6.0 ± 0.0 | 0.75 ± 0.0 | 3.1 ± 0.2 | 6.0 ± 0.0 |
+| GGS | 0.76 ± 0.0 | 3.7 ± 0.2 | 5.0 ± 0.0 | 0.74 ± 0.0 | 3.6 ± 0.1 | 8.0 ± 0.0 |
+| VLGPO, smoothed $g_{\tilde{\phi}}$ | 0.84 ± 0.0 | 2.06 ± 0.1 | 5.0 ± 0.0 | **0.78 ± 0.0** | 2.5 ± 0.2 | 6.0 ± 0.0 |
+
+Table 4. AAV optimization results. Best score for fitness in bold, second-best underlined, and the results for our method (VLGPO) are highlighted in grey.
+
+| Method | Fitness ↑ (Medium) | Diversity (Medium) | Novelty (Medium) | Fitness ↑ (Hard) | Diversity (Hard) | Novelty (Hard) |
+| --- | --- | --- | --- | --- | --- | --- |
+| GFN-AL | 0.20 ± 0.1 | 9.60 ± 1.2 | 19.4 ± 1.1 | 0.10 ± 0.1 | 11.6 ± 1.4 | 19.6 ± 1.1 |
+| CbAS | 0.43 ± 0.0 | 12.7 ± 0.7 | 7.2 ± 0.4 | 0.36 ± 0.0 | 14.4 ± 0.7 | 8.6 ± 0.5 |
+| AdaLead | 0.46 ± 0.0 | 8.50 ± 0.8 | 2.8 ± 0.4 | 0.40 ± 0.0 | 8.53 ± 0.1 | 3.4 ± 0.5 |
+| BOqei | 0.38 ± 0.0 | 15.22 ± 0.8 | 0.0 ± 0.0 | 0.32 ± 0.0 | 17.9 ± 0.3 | 0.0 ± 0.0 |
+| CoMS | 0.37 ± 0.1 | 10.1 ± 5.9 | 8.2 ± 3.5 | 0.26 ± 0.0 | 10.7 ± 3.5 | 10.0 ± 2.8 |
+| PEX | 0.40 ± 0.0 | 2.80 ± 0.0 | 1.4 ± 0.2 | 0.30 ± 0.0 | 2.8 ± 0.0 | 1.3 ± 0.3 |
+| gg-dWJS | 0.48 ± 0.0 | 9.48 ± 0.3 | 4.2 ± 0.4 | 0.33 ± 0.0 | 14.3 ± 0.7 | 5.3 ± 0.4 |
+| GWG | 0.43 ± 0.1 | 6.60 ± 6.3 | 7.7 ± 0.8 | 0.33 ± 0.0 | 12.0 ± 0.4 | 12.2 ± 0.4 |
+| VLGPO, predictor $g_{\phi}$ | **0.58 ± 0.0** | 5.58 ± 0.2 | 5.0 ± 0.0 | 0.51 ± 0.0 | 8.44 ± 0.2 | 7.8 ± 0.4 |
+| GGS | 0.51 ± 0.0 | 4.0 ± 0.2 | 5.4 ± 0.5 | 0.60 ± 0.0 | 4.5 ± 0.5 | 7.0 ± 0.0 |
+| VLGPO, smoothed $g_{\tilde{\phi}}$ | 0.53 ± 0.0 | 4.96 ± 0.2 | 5.0 ± 0.0 | **0.61 ± 0.0** | 4.29 ± 0.1 | 6.2 ± 0.4 |
+
+# 4.5. Ablation Studies
+
+We conduct an ablation study on the influence of manifold constrained gradients in sampling (Line 7, Algorithm 1). The results in Table 5 and Table 6 demonstrate the improvement gained by estimating $\hat{x}_1$ to compute the gradient of the likelihood term.
+
+Further, the performance of the directly learned posterior $\mathrm{p}(x|y)$ was investigated. While this approach still performs well for the medium tasks, it shows a larger performance drop on GFP (hard), indicating that it cannot match the explicit likelihood guidance provided by the fitness predictor in our variational method VLGPO.
+
+Table 5. Influence of manifold constrained gradient in sampling for both predictors $g_{\phi}$ and smoothed $g_{\tilde{\phi}}$ and directly learned posterior for GFP medium and hard tasks.
+
+| Method | Fitness ↑ (Medium) | Fitness ↑ (Hard) |
+| --- | --- | --- |
+| VLGPO, predictor $g_{\phi}$ | 0.87 ± 0.0 | 0.75 ± 0.0 |
+| w/o manifold constraint | 0.81 ± 0.0 | 0.73 ± 0.0 |
+| learned posterior | 0.83 ± 0.1 | 0.44 ± 0.1 |
+| VLGPO, smoothed $g_{\tilde{\phi}}$ | 0.84 ± 0.0 | 0.78 ± 0.0 |
+| w/o manifold constraint | 0.84 ± 0.0 | 0.67 ± 0.1 |
+
+Figure 4. Comparing evaluated fitness $y_{\mathrm{gt}}$ from the oracle $g_{\psi}$ with required fitness $y$ using the directly learned posterior model (in the same latent space) and our variational approach VLGPO: (a) GFP medium, (b) GFP hard, (c) AAV hard.
+
+Table 6. Influence of manifold constrained gradient in sampling for both predictors $g_{\phi}$ and smoothed $g_{\tilde{\phi}}$ and directly learned posterior for AAV medium and hard tasks.
+
+| Method | Fitness ↑ (Medium) | Fitness ↑ (Hard) |
+| --- | --- | --- |
+| VLGPO, predictor $g_{\phi}$ | 0.58 ± 0.0 | 0.51 ± 0.0 |
+| w/o manifold constraint | 0.55 ± 0.0 | 0.47 ± 0.0 |
+| learned posterior | 0.55 ± 0.0 | 0.44 ± 0.0 |
+| VLGPO, smoothed $g_{\tilde{\phi}}$ | 0.53 ± 0.0 | 0.61 ± 0.0 |
+| w/o manifold constraint | 0.52 ± 0.0 | 0.58 ± 0.0 |
+
+# 5. Discussion
+
+We present VLGPO, a variational approach for protein fitness optimization that enables posterior sampling of high-fitness sequences. Our method operates in a learned smoothed latent space and learns a generative flow matching prior that imposes a natural gradient regularization, removing the need for extra smoothing. Additionally, it incorporates a likelihood term through manifold-constrained gradients that helps guide the sampling process towards high-fitness regions. VLGPO achieves clear fitness improvements with respect to the fitness of the training sequences, and especially when compared to all baseline and competing methods.
+
+The variational framework offers a versatile approach, as its modular design allows replacing any of its components as needed, such as the prior model, the predictor, or the architectures. Future work could explore using embeddings from pretrained protein language models (pLMs) instead of the VAE, since such embeddings provide a more expressive latent representation. However, this would require finetuning the decoder to ensure faithful sequence reconstruction, which can be prone to overfitting given the limited size of the employed data sets.
+
+A limiting factor of our method lies in its hyperparameter tuning requirements. We observe that hyperparameter selection becomes more critical for challenging tasks such as GFP (hard), while it remains stable and robust for other tasks. Additionally, the restriction of the benchmark to only AAV and GFP is a limitation that should be addressed in future work. Conceptually, VLGPO as well as competing approaches can be extended to other proteins from FLIP (Dallago et al., 2021) or ProteinGym (Notin et al., 2023).
+
+Another important point is our reliance on in-silico evaluation, where we follow (Kirjner et al., 2023) in using a trained oracle as the ground truth. This oracle was trained on a significantly larger data set than the tasks presented in Table 2 and can therefore be expected to serve as a good estimator. Nevertheless, actual experimental validation could provide valuable insights into the applicability of VLGPO. A complementary direction may be to design more idealised, synthetic in-silico benchmarks that are nevertheless good proxies for protein design (Stanton et al., 2024). Moreover, additional metrics such as folding confidence or structural stability could be added to obtain a more complete perspective beyond the fitness score (Johnson et al., 2023).
+
+# Acknowledgements
+
+This work was supported by the University Research Priority Program (URPP) Human Reproduction Reloaded of the University of Zurich.
+
+# Impact Statement
+
+This paper presents a computational method for protein fitness optimization. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
+
+# References
+
+Ben-Hamu, H., Puny, O., Gat, I., Karrer, B., Singer, U., and Lipman, Y. D-flow: Differentiating through flows for controlled generation. arXiv preprint arXiv:2402.14017, 2024.
+Brookes, D., Park, H., and Listgarten, J. Conditioning by adaptive sampling for robust design. In International conference on machine learning, pp. 773-782. PMLR, 2019.
+Bryant, D. H., Bashir, A., Sinai, S., Jain, N. K., Ogden, P. J., Riley, P. F., Church, G. M., Colwell, L. J., and Kelsic, E. D. Deep diversification of an aav capsid protein by machine learning. Nature Biotechnology, 39(6):691-696, 2021.
+Chung, H., Sim, B., Ryu, D., and Ye, J. C. Improving diffusion models for inverse problems using manifold constraints. Advances in Neural Information Processing Systems, 35:25683-25696, 2022.
+Chung, H., Kim, J., Mccann, M. T., Klasky, M. L., and Ye, J. C. Diffusion posterior sampling for general noisy inverse problems. In International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=OnD9zGAGT0k.
+Dallago, C., Mou, J., Johnston, K. E., Wittmann, B. J., Bhattacharya, N., Goldman, S., Madani, A., and Yang, K. K. Flip: Benchmark tasks in fitness landscape inference for proteins. bioRxiv, pp. 2021-11, 2021.
+Dhariwal, P. and Nichol, A. Diffusion models beat gans on image synthesis. Advances in neural information processing systems, 34:8780-8794, 2021.
+Emami, P., Perreault, A., Law, J., Biagioni, D., and John, P. S. Plug & play directed evolution of proteins with gradient-based discrete mcmc. Machine Learning: Science and Technology, 4(2):025014, 2023.
+Esser, P., Kulal, S., Blattmann, A., Entezari, R., Müller, J., Saini, H., Levi, Y., Lorenz, D., Sauer, A., Boesel, F., et al. Scaling rectified flow transformers for high-resolution image synthesis. arXiv preprint arXiv:2403.03206, 2024.
+Frey, N. C., Berenberg, D., Zadorozhny, K., Kleinhenz, J., LaFrance-Vanasse, J., Hotzel, I., Wu, Y., Ra, S., Bonneau, R., Cho, K., et al. Protein discovery with discrete walk-jump sampling. arXiv preprint arXiv:2306.12360, 2023.
+Grathwohl, W., Swersky, K., Hashemi, M., Duvenaud, D., and Maddison, C. Oops i took a gradient: Scalable sampling for discrete distributions. In International Conference on Machine Learning, pp. 3831-3841. PMLR, 2021.
+Gruver, N., Stanton, S., Frey, N., Rudner, T. G., Hotzel, I., Lafrance-Vanasse, J., Rajpal, A., Cho, K., and Wilson, A. G. Protein design with guided discrete diffusion. Advances in neural information processing systems, 36, 2024.
+Hermes, J. D., Blacklow, S. C., and Knowles, J. R. Searching sequence space by definably random mutagenesis: improving the catalytic potency of an enzyme. Proceedings of the National Academy of Sciences, 87(2):696-700, 1990.
+Higgins, I., Matthey, L., Pal, A., Burgess, C. P., Glorot, X., Botvinick, M. M., Mohamed, S., and Lerchner, A. beta-VAE: Learning basic visual concepts with a constrained variational framework. ICLR (Poster), 3, 2017.
+Ho, J., Jain, A., and Abbeel, P. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33:6840-6851, 2020.
+Ikram, Z., Liu, D., and Rahman, M. S. Antibody sequence optimization with gradient-guided discrete walk-jump sampling. In ICLR 2024 Workshop on Generative and Experimental Perspectives for Biomolecular Design, 2024.
+Jain, M., Bengio, E., Hernandez-Garcia, A., Rector-Brooks, J., Dossou, B. F., Ekbote, C. A., Fu, J., Zhang, T., Kilgour, M., Zhang, D., et al. Biological sequence design with gflownets. In International Conference on Machine Learning, pp. 9786-9801. PMLR, 2022.
+Johnson, S. R., Fu, X., Viknander, S., Goldin, C., Monaco, S., Zelezniak, A., and Yang, K. K. Computational scoring and experimental evaluation of enzymes generated by neural networks. bioRxiv preprint, 2023.
+Johnston, K. E., Fannjiang, C., Wittmann, B. J., Hie, B. L., Yang, K. K., and Wu, Z. Machine learning for protein engineering. In Machine Learning in Molecular Sciences, pp. 277-311. Springer, 2023.
+Kingma, D. P. and Welling, M. Auto-encoding variational bayes, 2022. URL https://arxiv.org/abs/1312.6114.
+Kirjner, A., Yim, J., Samusevich, R., Bracha, S., Jaakkola, T. S., Barzilay, R., and Fiete, I. R. Improving protein optimization with smoothed fitness landscapes. In The Twelfth International Conference on Learning Representations, 2023.
+Lee, M., Vecchietti, L. F., Jung, H., Ro, H. J., Cha, M., and Kim, H. M. Robust optimization in protein fitness landscapes using reinforcement learning in latent space. arXiv preprint arXiv:2405.18986, 2024.
+Lin, Z., Akin, H., Rao, R., Hie, B., Zhu, Z., Lu, W., Smetanin, N., Verkuil, R., Kabeli, O., Shmueli, Y., et al. Evolutionary-scale prediction of atomic-level protein structure with a language model. Science, 379(6637): 1123-1130, 2023.
+Lipman, Y., Chen, R. T. Q., Ben-Hamu, H., Nickel, M., and Le, M. Flow matching for generative modeling. In International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=PqvMRDCJT9t.
+Liu, X., Gong, C., and Liu, Q. Flow straight and fast: Learning to generate and transfer data with rectified flow. arXiv preprint arXiv:2209.03003, 2022.
+Notin, P., Dias, M., Frazer, J., Marchena-Hurtado, J., Gomez, A. N., Marks, D., and Gal, Y. Tranception: protein fitness prediction with autoregressive transformers and inference-time retrieval. In International Conference on Machine Learning, pp. 16990-17017. PMLR, 2022.
+Notin, P., Kollasch, A., Ritter, D., Van Niekerk, L., Paul, S., Spinner, H., Rollins, N., Shaw, A., Orenbuch, R., Weitzman, R., et al. Proteingym: Large-scale benchmarks for protein fitness prediction and design. Advances in Neural Information Processing Systems, 36:64331-64379, 2023.
+Praljak, N., Lian, X., Ranganathan, R., and Ferguson, A. L. Protwave-vae: Integrating autoregressive sampling with latent-based inference for data-driven protein design. ACS synthetic biology, 12(12):3544-3561, 2023.
+Ren, Z., Li, J., Ding, F., Zhou, Y., Ma, J., and Peng, J. Proximal exploration for model-guided protein sequence design. In International Conference on Machine Learning, pp. 18520-18536. PMLR, 2022.
+Rezende, D. J. and Viola, F. Taming vaes. arXiv preprint arXiv:1810.00597, 2018.
+Romero, P. A. and Arnold, F. H. Exploring protein fitness landscapes by directed evolution. Nature reviews Molecular cell biology, 10(12):866-876, 2009.
+Sarkisyan, K. S., Bolotin, D. A., Meer, M. V., Usmanova, D. R., Mishin, A. S., Sharonov, G. V., Ivankov, D. N., Bozhanova, N. G., Baranov, M. S., Soylemez, O., et al. Local fitness landscape of the green fluorescent protein. Nature, 533(7603):397-401, 2016.
+
+Sinai, S. and Kelsic, E. D. A primer on model-guided exploration of fitness landscapes for biological sequence design. arXiv preprint arXiv:2010.10614, 2020.
+Sinai, S., Wang, R., Whatley, A., Slocum, S., Locane, E., and Kelsic, E. D. Adalead: A simple and robust adaptive greedy search algorithm for sequence design. arXiv preprint arXiv:2010.02141, 2020.
+Sohl-Dickstein, J., Weiss, E., Maheswaranathan, N., and Ganguli, S. Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learning, pp. 2256-2265. PMLR, 2015.
+Song, Y., Sohl-Dickstein, J., Kingma, D. P., Kumar, A., Ermon, S., and Poole, B. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=PxTIG12RRHS.
+Stanton, S., Alberstein, R., Frey, N., Watkins, A., and Cho, K. Closed-form test functions for biophysical sequence optimization algorithms. arXiv preprint arXiv:2407.00236, 2024.
+Trabucco, B., Kumar, A., Geng, X., and Levine, S. Conservative objective models for effective offline model-based optimization. In International Conference on Machine Learning, pp. 10358-10368. PMLR, 2021.
+Tran, T. V., Ngo, N. K., Nguyen, V. T. D., and Hy, T. S. Latentde: Latent-based directed evolution accelerated by gradient ascent for protein sequence design. In NeurIPS 2024 Workshop on AI for New Drug Modalities, 2024.
+Van Cleve, J. and Weissman, D. B. Measuring ruggedness in fitness landscapes. Proceedings of the National Academy of Sciences, 112(24):7345-7346, 2015.
+Wilson, J. T., Moriconi, R., Hutter, F., and Deisenroth, M. P. The reparameterization trick for acquisition functions. arXiv preprint arXiv:1712.00424, 2017.
+Yang, J., Lal, R. G., Bowden, J. C., Astudillo, R., Hameedi, M. A., Kaur, S., Hill, M., Yue, Y., and Arnold, F. H. Active learning-assisted directed evolution. Nature Communications, 16(1):714, 2025.
+Zheng, Q., Le, M., Shaul, N., Lipman, Y., Grover, A., and Chen, R. T. Guided flows for generative modeling and decision making. arXiv preprint arXiv:2311.13443, 2023.
+
+# A. Additional Methods
+
+# A.1. Fitness predictor and oracle
+
+The fitness predictors $g_{\phi}$ and $g_{\tilde{\phi}}$, as well as the oracle $g_{\psi}$, are directly re-used from (Kirjner et al., 2023) without any modifications. The authors use an identical 1D CNN architecture with 157k learnable parameters for all networks, which maps a one-hot encoded protein sequence to a scalar value representing the predicted fitness. Our method VLGPO allows all of these components to be replaced by re-trained models or alternative architectures, although simple CNNs are known to be competitive in such data-scarce settings (Dallago et al., 2021).
+
+Since (Kirjner et al., 2023) does not explicitly report the final performance of the in-silico oracle, we compute the Mean Squared Error (MSE) on a subset of 512 randomly selected samples from the ground truth data set $S^{*}$ to estimate its reliability. The oracle's predictions closely follow the target fitness values, resulting in MSE values of 0.012240 for GFP and 0.002758 for AAV.
+
+# A.2. Metrics
+
+For the evaluation of the generated sequences, we use median fitness, diversity and novelty as described in (Kirjner et al., 2023; Jain et al., 2022). The sequences are assessed by the oracle $g_{\psi}$ that was trained on the full data set $\mathcal{S}^*$ with minimum and maximum fitness values $y_{\mathrm{min}}$ and $y_{\mathrm{max}}$ . The median normalized fitness of sampled sequences $x \in \mathcal{X}$ is computed using
+
+$$
+\operatorname{median}\left(\left\{\frac{g_{\psi}(x) - y_{\mathrm{min}}}{y_{\mathrm{max}} - y_{\mathrm{min}}} : x \in \mathcal{X}\right\}\right).
+$$
+
+Diversity is defined as the median pairwise distance within the set of generated sequences by
+
+$$
+\operatorname{median}\left(\left\{\operatorname{dist}(x, x^{\prime}) : x, x^{\prime} \in \mathcal{X}, x \neq x^{\prime}\right\}\right),
+$$
+
+using the Levenshtein distance, as we are evaluating discrete sequences. Finally, novelty considers the minimum distance of each generated sequence $x \in \mathcal{X}$ to any of the sequences in the respective data set $\mathcal{S}$ that the generative flow matching prior was trained on (see Table 2). It is therefore computed as
+
+$$
+\operatorname{median}\left(\left\{\min_{\substack{\hat{x} \in \mathcal{S} \\ \hat{x} \neq x}} \operatorname{dist}(x, \hat{x}) : x \in \mathcal{X}\right\}\right).
+$$
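Concretely, the three metrics can be computed in a few lines. The sketch below is illustrative only: sequences are plain Python strings, and `oracle` is a hypothetical stand-in for the trained oracle $g_{\psi}$.

```python
from statistics import median

def levenshtein(a: str, b: str) -> int:
    """Edit distance between two sequences via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def median_normalized_fitness(X, oracle, y_min, y_max):
    # Oracle fitness, min-max normalized, then the median over generated sequences.
    return median((oracle(x) - y_min) / (y_max - y_min) for x in X)

def diversity(X):
    # Median pairwise Levenshtein distance among the generated sequences.
    return median(levenshtein(x, xp) for x in X for xp in X if x != xp)

def novelty(X, S):
    # Median over X of the distance to the nearest training sequence in S.
    return median(min(levenshtein(x, s) for s in S if s != x) for x in X)
```

For instance, `diversity(["AAB", "AAC"])` evaluates to 1, the median of the pairwise edit distances.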
+
+# B. Additional Results
+
+# B.1. Fitness Extrapolation
+
+The experiments on fitness extrapolation of generated sequences shown in Section 4.4 are complemented by the additional task of AAV (medium) in Figure 5. It supports the finding that the classifier guidance, in combination with the generative prior in our variational approach VLGPO, yields sequences whose predicted fitness $y_{\mathrm{gt}}$ follows the conditioned fitness $y$ more closely than the directly learned posterior model.
+
+
+Figure 5. Fitness extrapolation for the AAV (medium) task.
+
+# B.2. Variational Autoencoder
+
+The VAE embeds sequences into a latent space that yields continuous gradients for sampling in Algorithm 1. Empirically, we choose $l = 16$ and $l = 32$ for AAV and GFP, respectively, since further compression reduces the decoder $\mathcal{D}$'s reconstruction accuracy and harms sequence generation. Moreover, as the medium and hard tasks are defined by a minimum number of mutations from the $99^{\text{th}}$ fitness percentile of $\mathcal{S}^{*}$, a sufficiently large latent dimensionality is required to retain all relevant information.
+
+We train a VAE for each task to achieve at least $80\%$ reconstruction accuracy on a validation subset, balancing the latent prior with $\beta$ . Table 7 shows these results. The lower accuracy in AAV compared to GFP is due to larger effects of mutations on shorter proteins (sequence length $d$ ). Since the data sets (see Table 2) are very small, the Kullback-Leibler (KL) divergence in Equation (4) is crucial to ensure a Gaussian latent distribution. We can also sample from the VAE and evaluate predicted fitness via $g_{\psi}$ (Table 7), although median normalized fitness is lower than in Table 2. This is expected, given the sparse latent space and the "hole" problem of VAEs (Rezende & Viola, 2018).
+
+# B.3. Unconditional Sampling
+
+In Algorithm 1, one can recover the unconditional sampling case by setting $\alpha_{t} = 0$ and $J = 0$, thereby sampling exclusively from the learned generative flow matching prior. As a result, the classifier guidance in the likelihood term does not impact the sampling procedure. The median fitness values of the generated samples, evaluated as usual by the oracle $g_{\psi}$, are shown in Table 8. Note that, in the absence of the predictor $g_{\phi}$ during VLGPO sampling, the flow matching model effectively serves only as a prior without any fitness conditioning, so no post-processing with top-$k$ sampling is applied, for the sake of solely analyzing the prior model.
+
+Figure 6. Grid search for diversity depending on sampling parameters $\alpha_{t}$ and $J$ for the different tasks: (a) GFP medium, (b) GFP hard, (c) AAV medium, (d) AAV hard.
+
+Table 7. VAE reconstruction and sampling results.
+
+| Task | Reconstruction Accuracy ↑ | Fitness ↑ |
+| --- | --- | --- |
+| GFP Medium | 97.0% | -0.08 |
+| GFP Hard | 96.8% | -0.21 |
+| AAV Medium | 80.4% | 0.28 |
+| AAV Hard | 87.8% | 0.23 |
+
+As shown in Table 8, the median fitness is lower than in Table 3 and Table 4, both of which use VLGPO with classifier guidance $g_{\phi}$. However, it exceeds the median fitness of the data sets $\mathcal{S}$ from Table 2, likely due to the limited training data and the lack of fitness information in the flow matching prior. Moreover, the model may exhibit a mode-seeking tendency, concentrating on modes that are easier to model. While the averaged novelty aligns with expectations, diversity is much higher in the unconditional scenario, since no classifier guidance steers samples towards higher-fitness modes.
+
+Table 8. Unconditional optimization results ($\alpha_{t} = 0$, $J = 0$).
+
+| Task | Fitness ↑ | Diversity | Novelty |
+| --- | --- | --- | --- |
+| GFP Medium | 0.22 ± 0.1 | 18.9 ± 2.5 | 6.2 ± 0.4 |
+| GFP Hard | 0.42 ± 0.1 | 14.2 ± 1.1 | 7.0 ± 0.0 |
+| AAV Medium | 0.38 ± 0.0 | 11.8 ± 0.1 | 6.0 ± 0.0 |
+| AAV Hard | 0.28 ± 0.0 | 15.6 ± 0.2 | 8.0 ± 0.0 |
+
+# B.4. Sampling Parameters
+
+The hyperparameters $\alpha_{t}$ and $J$ in VLGPO sampling are determined through a grid search across the different tasks. We keep $\alpha_{t}$ constant for all steps $t$. In addition to Figure 3, the resulting heatmaps for the effect of these two parameters on diversity are shown in Figure 6. Overall, combining the findings on predicted fitness and diversity, choosing these parameters amounts to a heuristic tradeoff between the two metrics; an exception is GFP (medium), which allows both parameters to be chosen high. Likewise, for AAV (hard) and (medium) there appears to be a broad range of suitable choices for $\alpha_{t}$ and $J$. Only GFP (hard) displays substantially compromised diversity very quickly (note the limited numerical range of the colorbar), hence we choose significantly lower values $\alpha_{t} = 0.02$ and $J = 5$. For the other tasks, we use the hyperparameter settings obtained from the grid search experiments, i.e., $\alpha_{t} \in \{0.97, 1.2, 0.56\}$ and $J \in \{39, 19, 37\}$ for AAV (medium), AAV (hard) and GFP (medium), respectively. Nevertheless, we do not observe major differences if these hyperparameters are adjusted slightly.
+
+Finally, we examine the influence of the number of ODE steps, which directly corresponds to the number of sampling steps $K$. This is shown in Figure 7, with the resulting fitness on the left and diversity on the right. The fitness initially exhibits an overshooting behavior, accompanied by a decrease in diversity, but this effect stabilizes after approximately 10 sampling steps. Beyond this point, both metrics remain relatively constant with respect to the number of sampling steps $K$, which is generally the desired behavior.
+
+
+Figure 7. Median fitness (left) and diversity (right) for all four tasks depending on employed ODE steps $K$ in sampling.
+
+
\ No newline at end of file
diff --git a/avariationalperspectiveongenerativeproteinfitnessoptimization/images.zip b/avariationalperspectiveongenerativeproteinfitnessoptimization/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..a471d035e68fb4b2e34f5e6ecaa08afa7e7f707f
--- /dev/null
+++ b/avariationalperspectiveongenerativeproteinfitnessoptimization/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5519cad2f3731aa49881fcf2a95b4fddac615974c4e611e8130fcfec07b8fa7e
+size 661743
diff --git a/avariationalperspectiveongenerativeproteinfitnessoptimization/layout.json b/avariationalperspectiveongenerativeproteinfitnessoptimization/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..e022a94c612f51f6e72b5f9fc7c1478ffdaa02ff
--- /dev/null
+++ b/avariationalperspectiveongenerativeproteinfitnessoptimization/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:288722bd22fe5caa532200c6a9898d22751901748bb5aa699e48ea24c6154616
+size 582082
diff --git a/aversatileinfluencefunctionfordataattributionwithnondecomposableloss/714bc828-d8f5-497a-95c5-99701288af71_content_list.json b/aversatileinfluencefunctionfordataattributionwithnondecomposableloss/714bc828-d8f5-497a-95c5-99701288af71_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..68bd7015836e0f6e630ad58519b12defd146a212
--- /dev/null
+++ b/aversatileinfluencefunctionfordataattributionwithnondecomposableloss/714bc828-d8f5-497a-95c5-99701288af71_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2dc320e3b583dee8a316099d15f78d80567a3a52befd5f2286b0c1cfc7aee418
+size 166798
diff --git a/aversatileinfluencefunctionfordataattributionwithnondecomposableloss/714bc828-d8f5-497a-95c5-99701288af71_model.json b/aversatileinfluencefunctionfordataattributionwithnondecomposableloss/714bc828-d8f5-497a-95c5-99701288af71_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..fa9564dc328b22088dfb8fc0a7af271e79573528
--- /dev/null
+++ b/aversatileinfluencefunctionfordataattributionwithnondecomposableloss/714bc828-d8f5-497a-95c5-99701288af71_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6834f95868407d4454f787342fde618be0885344aa69e2051f2d00cfe09b4599
+size 193145
diff --git a/aversatileinfluencefunctionfordataattributionwithnondecomposableloss/714bc828-d8f5-497a-95c5-99701288af71_origin.pdf b/aversatileinfluencefunctionfordataattributionwithnondecomposableloss/714bc828-d8f5-497a-95c5-99701288af71_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..f91eb374c3f09f0cf76ca52f48de129a651f0350
--- /dev/null
+++ b/aversatileinfluencefunctionfordataattributionwithnondecomposableloss/714bc828-d8f5-497a-95c5-99701288af71_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5800617931781bdd1aee010a310c2f66f7d4b42694545d40f46ca2c36dcca141
+size 813105
diff --git a/aversatileinfluencefunctionfordataattributionwithnondecomposableloss/full.md b/aversatileinfluencefunctionfordataattributionwithnondecomposableloss/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..16724bf94d1f10601facc227edb72ac539cd3e99
--- /dev/null
+++ b/aversatileinfluencefunctionfordataattributionwithnondecomposableloss/full.md
@@ -0,0 +1,780 @@
+# A Versatile Influence Function for Data Attribution with Non-Decomposable Loss
+
+Junwei Deng1 Weijing Tang2 Jiaqi W. Ma1
+
+# Abstract
+
+Influence function, a technique rooted in robust statistics, has been adapted in modern machine learning for a novel application: data attribution—quantifying how individual training data points affect a model's predictions. However, the common derivation of influence functions in the data attribution literature is limited to loss functions that can be decomposed into a sum of individual data point losses, with the most prominent examples known as M-estimators. This restricts the application of influence functions to more complex learning objectives, which we refer to as non-decomposable losses, such as contrastive or ranking losses, where a unit loss term depends on multiple data points and cannot be decomposed further. In this work, we bridge this gap by revisiting the general formulation of influence function from robust statistics, which extends beyond M-estimators. Based on this formulation, we propose a novel method, the Versatile Influence Function (VIF), that can be straightforwardly applied to machine learning models trained with any non-decomposable loss. In comparison to the classical approach in statistics, the proposed VIF is designed to fully leverage the power of auto-differentiation, thereby eliminating the need for case-specific derivations of each loss function. We demonstrate the effectiveness of VIF across three examples: Cox regression for survival analysis, node embedding for network analysis, and listwise learning-to-rank for information retrieval. In all cases, the influence estimated by VIF closely resembles the results obtained by brute-force leave-one-out retraining, while being up to $10^{3}$ times faster to compute. We believe VIF represents a significant advancement in data
+
+attribution, enabling efficient influence-function-based attribution across a wide range of machine learning paradigms, with broad potential for practical use cases.
+
+# 1. Introduction
+
+Influence function (IF) is a well-established technique originating from robust statistics and has been adapted to the novel application of data attribution in modern machine learning (Koh & Liang, 2017). Data attribution aims to quantify the impact of individual training data points on model outputs, which enables a wide range of data-centric applications such as mislabeled data detection (Koh & Liang, 2017), data selection (Xia et al., 2008), and copyright compensation (Deng & Ma, 2023).
+
+Despite its broad potential, the application of IFs for data attribution has been largely limited to loss functions that can be decomposed into a sum of individual data point losses—such as those commonly used in supervised learning or maximum likelihood estimation, which are also known as M-estimators. This limitation arises from the specific way that IFs are typically derived in the data attribution literature (Koh & Liang, 2017; Grosse et al., 2023), where the derivation involves perturbing the weights of individual data point losses. As a result, this restricts the application of IF-based data attribution methods to more complex machine learning objectives, such as contrastive or ranking losses, where a unit loss term depends on multiple data points and cannot be further decomposed into individual data point losses. We refer to such loss functions as non-decomposable losses.
+
+To address this limitation, we revisit the general formulation of IF in statistics literature (Huber & Ronchetti, 2009), which can extend beyond M-estimators. Specifically, statistical estimators are viewed as functionals of probability measures, and the IF is derived as a functional derivative in a specific perturbation direction. In principle, this formulation applies to any estimator defined as the minimizer of a loss function that depends on an (empirical) probability measure, which corresponds to the learned parameters in the context of machine learning. However, directly applying this general formulation to modern machine learning models poses significant challenges. Firstly, deriving the precise IF for a particular loss function often requires complex, case-by-case mathematical derivations, which can be challenging for intricate loss functions and models. Secondly, for nonconvex models, the (local) minimizer of the loss function is not unique; as a result, the mapping from the probability measure to the learned model parameters is not well-defined, making it unclear how the IF should be derived.
+
+To overcome these challenges, we propose the Versatile Influence Function (VIF), a novel method that extends IF-based data attribution to models trained with nondecomposable losses. The proposed VIF serves as an approximation of the general formulation of IF but can be efficiently computed using auto-differentiation tools available in modern machine learning libraries. This approach eliminates the need for case-specific derivations of each loss function. Furthermore, like existing IF-based data attribution methods, VIF does not require model retraining and can be generalized to non-convex models using similar heuristic tricks (Koh & Liang, 2017; Grosse et al., 2023).
+
+We validate the effectiveness of VIF through both theoretical analysis and empirical experiments. In special cases like M-estimation, VIF recovers the classical IF exactly. For Cox regression, we show that VIF closely approximates the classical IF. Empirically, we demonstrate the practicality of VIF across several tasks involving non-decomposable losses: Cox regression for survival analysis, node embedding for network analysis, and listwise learning-to-rank for information retrieval. In all cases, VIF closely approximates the influence obtained from the brute-force leave-one-out retraining while significantly reducing computational time—achieving speed-ups of up to $10^{3}$ times. We also provide case studies demonstrating VIF can help interpret the behavior of the models. By extending IF to non-decomposable losses, VIF opens new opportunities for data attribution in modern machine learning models, enabling data-centric applications across a wider range of domains.
+
+# 2. Related Work
+
+Data Attribution. Data attribution methods can be roughly categorized into two groups: retraining-based and gradient-based methods (Hammoudeh & Lowd, 2024). Retraining-based methods (Ghorbani & Zou, 2019; Jia et al., 2019; Kwon & Zou, 2021; Wang & Jia, 2023; Ilyas et al., 2022) typically estimate the influence of individual training data points by repeatedly retraining models on subsets of the training dataset. While these methods have been shown effective, they are not scalable for large-scale models and applications. In contrast, gradient-based methods (Koh & Liang, 2017; Guo et al., 2020; Barshan et al., 2020; Schioppa et al., 2022; Kwon et al., 2023; Yeh et al., 2018; Pruthi et al., 2020; Park et al., 2023) estimate the training data influence based on the gradient and higher-order gradient information of the original model, avoiding expensive model retraining. In particular, many gradient-based methods (Koh & Liang, 2017; Guo et al., 2020; Barshan et al., 2020; Schioppa et al., 2022; Kwon et al., 2023; Pruthi et al., 2020; Park et al., 2023) can be viewed as variants of IF-based data attribution methods. Therefore, extending IF-based data attribution methods to a wider range of domains could have a significant impact on data attribution.
+
+There are a few studies that adapt influence functions for graph neural networks (Chen et al., 2022; Wu et al., 2023), which can be viewed as special cases of non-decomposable losses. Chen et al. (2022) developed an influence function specifically for the Simplified Graph Convolution model, which is a linearized graph neural network model. Wu et al. (2023) proposed a machine unlearning method for graph neural networks based on influence function, where the influence function is adapted to consider the graph dependency among samples. In comparison to these methods, our approach has a more general formulation and can be broadly applied to various different non-decomposable losses.
+
+Influence Function in Statistics. The IF is a well-established concept in statistics dating back at least to Hampel (1974), though it is typically applied for purposes other than data attribution. Originally introduced in the context of robust statistics, it was used to assess the robustness of statistical estimators (Huber & Ronchetti, 2009) and later adapted as a tool for developing asymptotic theories (van der Vaart, 2012). Notably, IFs have been derived for a wide range of estimators beyond M-estimators, including L-estimators, R-estimators, and others (Huber & Ronchetti, 2009; van der Vaart, 2012). Closely related to an example of this study, Reid & Crepeau (1985) developed the IF for the Cox regression model. However, the literature in statistics often approaches the derivation of IFs through precise definitions specific to particular estimators, requiring case-specific derivations. In contrast, this work proposes an approximation for the general IF formulation in statistics, which can be straightforwardly applied to a broad family of modern machine learning loss functions for the purpose of data attribution. While this approach involves some degree of approximation, it benefits from being more versatile and computationally efficient, leveraging auto-differentiation capabilities provided by modern machine learning libraries.
+
+# 3. The Versatile Influence Function
+
+# 3.1. Preliminaries: IF-Based Data Attribution for Decomposable Loss
+
+We begin by reviewing the formulation of IF-based data attribution in prior literature (Koh & Liang, 2017; Schioppa et al., 2022; Grosse et al., 2023). IF-based data attribution aims to approximate the effect of leave-one-out (LOO) retraining—the change of model parameters after removing one training data point and retraining the model—which could be used to quantify the influence of this training data point.
+
+Formally, suppose we have the following loss function,
+
+$$
+\mathcal {L} _ {D} (\theta) = \sum_ {i = 1} ^ {n} \ell (\theta ; z _ {i}), \tag {1}
+$$
+
+where $\theta$ is the model parameters, $\{z_i\}_{i=1}^n$ is the training dataset, and each $\ell(\cdot; z_i), i = 1, \ldots, n$ , corresponds to the loss function of one training data point $z_i$ . The IF-based data attribution is derived by first inserting a binary weight $w_i$ in front of each $\ell(\cdot; z_i)$ to represent the inclusion or removal of the individual data points, transforming $\mathcal{L}_D(\theta)$ to a weighted loss
+
+$$
+\mathcal {L} _ {D} (\theta , w) = \sum_ {i = 1} ^ {n} w _ {i} \ell (\theta ; z _ {i}). \tag {2}
+$$
+
+Note that $w = \mathbf{1}$ corresponds to the original loss in Eq. (1); while removing the $i$-th data point is to set $w_{i} = 0$ or, equivalently, $w = \mathbf{1}_{-i}$, where $\mathbf{1}_{-i}$ is a vector of all ones except for the $i$-th element being zero. Denote the learned parameters as $\hat{\theta}_D(w) \coloneqq \arg \min_\theta \mathcal{L}_D(\theta, w)$. The LOO effect for data point $i$ is then characterized by $\hat{\theta}_D(\mathbf{1}_{-i}) - \hat{\theta}_D(\mathbf{1})$.
+
+However, evaluating $\hat{\theta}_D(\mathbf{1}_{-i})$ is computationally expensive as it requires model retraining. Koh & Liang (2017) proposed to approximate the LOO effect by relaxing the binary weights in $w$ to the continuous interval [0, 1] and measuring the influence of the training data point $z_{i}$ on the learned parameters as
+
+$$
+\left. \frac {\partial \hat {\theta} _ {D} (w)}{\partial w _ {i}} \right| _ {w = \mathbf {1}} = - \left[ \nabla_ {\theta} ^ {2} \mathcal {L} _ {D} (\hat {\theta} _ {D} (\mathbf {1}), \mathbf {1}) \right] ^ {- 1} \nabla_ {\theta} \ell (\hat {\theta} _ {D} (\mathbf {1}); z _ {i}), \tag {3}
+$$
+
+which can be evaluated using only $\hat{\theta}_D(\mathbf{1})$ , hence eliminating the need for expensive model retraining.
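Eq. (3) can be checked numerically on a toy decomposable loss. The sketch below is our illustration (not the paper's code): a one-parameter least-squares model, for which both the weighted minimizer $\hat{\theta}_D(w)$ and the derivative in Eq. (3) have closed forms. Removing point $i$ sends $w_i$ from 1 to 0, so the LOO effect is approximated by minus the derivative.

```python
# Toy decomposable loss: L_D(theta, w) = sum_i w_i * (theta * x_i - y_i)^2.
x = [1.0, 2.0, 3.0, 4.0]
y = [1.02, 2.01, 2.98, 4.01]
n = len(x)

def theta_hat(w):
    # Closed-form minimizer of the weighted least-squares loss.
    return (sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
            / sum(wi * xi * xi for wi, xi in zip(w, x)))

theta = theta_hat([1.0] * n)

def influence(i):
    # Eq. (3): -[Hessian of L_D]^{-1} * gradient of the i-th loss term.
    hessian = sum(2.0 * xi * xi for xi in x)      # d^2 L_D / d theta^2
    grad_i = 2.0 * x[i] * (theta * x[i] - y[i])   # d ell_i / d theta
    return -grad_i / hessian

# LOO effect theta_hat(1_{-i}) - theta_hat(1), approximated by -influence(i).
loo = [theta_hat([0.0 if j == i else 1.0 for j in range(n)]) - theta
       for i in range(n)]
```

On this toy data the first-order approximation tracks the exact LOO effect closely; the gap grows with the leverage of the removed point.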
+
+However, by construction, this approach critically relies on the introduction of the loss weights $w_{i}$ 's, and is thus limited to loss functions that are decomposable with respect to the individual training data points, taking the form of Eq. (1).
+
+# 3.2. Non-Decomposable Loss
+
+In practice, there are many common loss functions that are not decomposable. Below we list a few examples.
+
+Example 1: Cox's Partial Likelihood. The Cox regression model (Cox, 1972) is one of the most widely used models in survival analysis, designed to analyze the time until specific events occur (e.g., patient death or customer churn). A unique challenge in survival analysis is handling censored observations, where the exact event time is unknown because the event has either not occurred by the end of the study or the individual is lost to follow-up. These censored data points contain partial information about the event timing and should be properly modeled to improve estimation. The Cox regression model is defined through specifying a hazard function over time $t$ conditional on the individual feature $x$ :
+
+$$
+h (t \mid x) = h _ {0} (t) \exp (\theta^ {\top} x),
+$$
+
+where $h_0(t)$ is a baseline hazard function and $\exp(\theta^\top x)$ is the relative risk with $\theta$ as the model parameters to be estimated. Given $n$ data points $\{(X_i, Y_i, \Delta_i)\}_{i=1}^n$ , where $X_i$ represents the features for the $i$ -th data point, $Y_i$ denotes the observed time (either the event time or the censoring time), and $\Delta_i$ is the binary event indicator $(\Delta_i = 1$ if the event has occurred and $\Delta_i = 0$ if the observation is censored), the parameters $\theta$ can be learned through minimizing the following negative log partial likelihood
+
+$$
+\mathcal {L} _ {\mathrm {C o x}} (\theta) = - \sum_ {i: \Delta_ {i} = 1} \left(\theta^ {\top} X _ {i} - \log \sum_ {j \in R _ {i}} \exp \left(\theta^ {\top} X _ {j}\right)\right), \tag {4}
+$$
+
+where $R_{i} \coloneqq \{j : Y_{j} \geq Y_{i}\}$ is called the at-risk set for the $i$-th data point.
+
+In Eq. (4), each data point may appear in multiple loss terms if it belongs to the at-risk sets of other data points. Consequently, we can no longer characterize the effect of removing a training data point by simply introducing the loss weight.
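Eq. (4) translates directly into code. The sketch below is a minimal pure-Python illustration for a single scalar feature (not the paper's implementation); the at-risk set includes ties ($Y_j \geq Y_i$), so it always contains the event itself.

```python
from math import exp, log

def cox_neg_log_partial_likelihood(theta, X, Y, Delta):
    """Negative log partial likelihood of Eq. (4) for scalar features.

    theta: scalar parameter; X: features; Y: observed times;
    Delta: 0/1 event indicators (1 = event observed, 0 = censored).
    """
    total = 0.0
    for i in range(len(X)):
        if Delta[i] == 1:
            # At-risk set: everyone still under observation at time Y_i.
            risk = [j for j in range(len(X)) if Y[j] >= Y[i]]
            total -= theta * X[i] - log(sum(exp(theta * X[j]) for j in risk))
    return total
```

Censored points ($\Delta_i = 0$) contribute no term of their own yet still enter the risk sets of earlier events, which is exactly why this loss is not decomposable into per-point terms.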
+
+Example 2: Contrastive Loss. Contrastive losses are commonly seen in unsupervised representation learning across various modalities, such as word embeddings (Mikolov et al., 2013), image representations (Chen et al., 2020), or node embeddings (Perozzi et al., 2014). Generally, contrastive losses rely on a set of triplets, $D = \{(u_i, v_i, N_i)\}_{i=1}^m$ , where $u_i$ is an anchor data point, $v_i$ is a positive data point that is relevant to $u_i$ , while $N_i$ is a set of negative data points that are irrelevant to $u_i$ . The contrastive loss is then the summation over such triplets:
+
+$$
+\mathcal {L} _ {\text {C o n t r a s t}} (\theta) = \sum_ {i = 1} ^ {m} \ell \left(\theta ; \left(u _ {i}, v _ {i}, N _ {i}\right)\right), \tag {5}
+$$
+
+where the loss $\ell(\cdot)$ could take many forms. In word2vec (Mikolov et al., 2013) for word embeddings or DeepWalk (Perozzi et al., 2014) for node embeddings, $\theta$ corresponds to the embedding parameters for each word or node, while the loss $\ell(\cdot)$ could be defined by hierarchical softmax or negative sampling (see Rong (2014) for more details).
+
+Similar to Eq. (4), each single term of the contrastive loss in Eq. (5) involves multiple data points. Moreover, taking node embeddings as an example, the set of triplets $D$ is constructed by running random walks on the network. Removing one data point, which is a node in this context, could also affect the proximity of other pairs of nodes and hence the construction of $D$ .
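As a concrete instance of Eq. (5), the sketch below implements a negative-sampling contrastive loss in the style of word2vec/DeepWalk, with embeddings stored as plain Python lists. It is a simplified illustration under our own naming, not code from the cited works.

```python
from math import exp, log

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + exp(-z))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def negative_sampling_loss(emb, triplets):
    """Sum over triplets (u_i, v_i, N_i): pull the positive pair together,
    push the anchor away from each sampled negative."""
    total = 0.0
    for u, v, negatives in triplets:
        total -= log(sigmoid(dot(emb[u], emb[v])))       # positive pair
        for w in negatives:
            total -= log(sigmoid(-dot(emb[u], emb[w])))  # sampled negatives
    return total
```

Because a node's embedding can appear as anchor, positive, or negative across many triplets, removing a single node affects many loss terms at once, mirroring the discussion above.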
+
+Example 3: Listwise Learning-to-Rank. Learning-to-rank is a core technology underlying information retrieval applications such as search and recommendation. In this context, listwise learning-to-rank methods aim to optimize the ordering of a set of documents or items based on their relevance to a given query. One prominent example of such methods is ListMLE (Xia et al., 2008). Suppose we have annotated results for $m$ queries over $n$ items as a dataset $\{(x_{i}, (y_{i}^{(1)}, y_{i}^{(2)}, \ldots, y_{i}^{(k)}))\}_{i=1}^{m}$, where $x_{i}$ is the query feature and $y_{i}^{(1)}, y_{i}^{(2)}, \ldots, y_{i}^{(k)} \in [n] := \{1, \ldots, n\}$ indicate the top $k$ items for query $i$. The ListMLE loss function is then defined as
+
+$$
+\mathcal{L}_{\mathrm{LTR}}(\theta) = -\sum_{i=1}^{m} \sum_{j=1}^{k} \Biggl(f(x_{i}; \theta)_{y_{i}^{(j)}} - \log \sum_{l \in [n] \setminus \{y_{i}^{(1)}, \dots, y_{i}^{(j-1)}\}} \exp\bigl(f(x_{i}; \theta)_{l}\bigr)\Biggr), \tag{6}
+$$
+
+where $f(\cdot ;\theta)$ is a model parameterized by $\theta$ that takes the query feature as input and outputs $n$ logits for predicting the relevance of the $n$ items.
+
+In this example, Eq. (6) is decomposable with respect to the queries but not with respect to the items. The influence of items could also be of interest in information retrieval applications. For example, in a search engine, we may want to detect webpages with malicious search engine optimization (Invernizzi et al., 2012); in product co-purchasing recommendation (Zhao et al., 2017), both the queries and items are products.
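The ListMLE objective in Eq. (6) is a sequence of softmax losses over the not-yet-ranked items. A minimal pure-Python sketch (illustrative only; model logits are passed in directly rather than produced by $f(\cdot;\theta)$):

```python
from math import exp, log

def listmle_loss(logits_per_query, rankings):
    """logits_per_query[i]: list of n scores for query i;
    rankings[i]: annotated top-k ordering (y_i^{(1)}, ..., y_i^{(k)})."""
    total = 0.0
    for logits, ranking in zip(logits_per_query, rankings):
        remaining = set(range(len(logits)))  # [n] minus already-ranked items
        for item in ranking:
            # Softmax normalizer over the items not yet ranked.
            log_norm = log(sum(exp(logits[l]) for l in remaining))
            total -= logits[item] - log_norm
            remaining.remove(item)
    return total
```

Dropping one item changes the normalizing set of every query that ranks it, which is precisely the non-decomposability with respect to items noted above.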
+
+A General Loss Formulation. The examples above can be viewed as special cases of the following formal definition of non-decomposable loss.
+
+Definition 3.1 (Non-Decomposable Loss). Given $n$ objects of interest within the training data, let a binary vector $b \in \{0,1\}^n$ indicate the presence of the individual objects in training, i.e., for $i = 1, \dots, n$,
+
+$$
+b_{i} = \begin{cases} 1 & \text{if the } i\text{-th object is present,} \\ 0 & \text{otherwise.} \end{cases}
+$$
+
+Supposing the machine learning model parameters are denoted as $\theta \in \mathbb{R}^{d}$, a non-decomposable loss is any function
+
+$$
+\mathcal {L}: \mathbb {R} ^ {d} \times \{0, 1 \} ^ {n} \to \mathbb {R},
+$$
+
+that maps given model parameters $\theta$ and the object presence vector $b$ to a loss value $\mathcal{L}(\theta, b)$ .
+
+Denoting $\hat{\theta}(b) = \arg \min_{\theta} \mathcal{L}(\theta, b)$ for any non-decomposable loss $\mathcal{L}(\theta, b)$, the LOO effect of data point $i$ on the learned parameters can still be properly defined by
+
+$$
+\hat {\theta} (\mathbf {1} _ {- i}) - \hat {\theta} (\mathbf {1}).
+$$
+
+However, in this case, we can no longer use the partial derivative with respect to $b_{i}$ to approximate the LOO effect, as $\hat{\theta}(b)$ is only well-defined for binary vectors $b$ .
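The LOO effect is nonetheless computable by brute force: re-minimize $\mathcal{L}(\theta, b)$ with $b = \mathbf{1}_{-i}$. The sketch below does this for a toy one-parameter, Cox-style non-decomposable loss (with a small quadratic term added for numerical stability); the data, step size, and iteration count are all illustrative.

```python
from math import exp, log

X = [0.5, -1.0, 1.5, 0.2]   # toy scalar features
Y = [2.0, 3.5, 1.0, 4.0]    # toy observed times
Delta = [1, 0, 1, 1]        # toy event indicators

def loss(theta, b):
    """Cox-style non-decomposable loss restricted to present objects (b_i = 1),
    plus a small ridge term so the minimizer always exists."""
    total = 0.1 * theta * theta
    for i in range(len(X)):
        if b[i] == 1 and Delta[i] == 1:
            risk = [j for j in range(len(X)) if b[j] == 1 and Y[j] >= Y[i]]
            total -= theta * X[i] - log(sum(exp(theta * X[j]) for j in risk))
    return total

def fit(b, lr=0.05, steps=2000, eps=1e-5):
    # 1-D gradient descent with a central-difference gradient.
    theta = 0.0
    for _ in range(steps):
        grad = (loss(theta + eps, b) - loss(theta - eps, b)) / (2 * eps)
        theta -= lr * grad
    return theta

full = fit([1, 1, 1, 1])
loo_effects = [fit([0 if j == i else 1 for j in range(len(X))]) - full
               for i in range(len(X))]
```

This brute-force baseline is what VIF is designed to approximate without the $n$ retraining runs.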
+
+Remark 3.2 ("Non-Decomposable" vs. "Not Decomposable"). The class of non-decomposable losses in Definition 3.1 includes the decomposable loss in Eq. (1) as a special case when $\mathcal{L}(\theta, b) \coloneqq \sum_{i: b_i = 1} \ell(\theta; z_i)$. Throughout this paper, we call loss functions that cannot be written in the form of Eq. (1) "not decomposable". We name the general class of loss functions in Definition 3.1 non-decomposable losses to highlight that they are generally not decomposable.
+
+Remark 3.3 (Randomness in Losses). Strictly speaking, many contrastive losses are not deterministic functions of training data points as there is randomness in the construction of the triplet set $D$ , due to procedures such as negative sampling or random walk. However, our method derived for the deterministic non-decomposable loss still gives meaningful results in practice for losses with randomness.
+
+# 3.3. The Statistical Perspective of Influence Function
+
+The Statistical Formulation of IF. To derive IF-based data attribution for non-decomposable losses, we revisit a general formulation of IF in robust statistics (Huber & Ronchetti, 2009). Let $\Omega$ be a sample space, and let $T(\cdot)$ be a function that maps a probability measure on $\Omega$ to a vector in $\mathbb{R}^d$ . Let $P$ and $Q$ be two probability measures on $\Omega$ . The IF of $T(\cdot)$ at $P$ in the direction $Q$ measures the infinitesimal change of $T$ under a perturbation of $P$ towards $Q$ , and is defined as
+
+$$
+\operatorname {I F} (T (P); Q) := \lim _ {\varepsilon \rightarrow 0} \frac {T ((1 - \varepsilon) P + \varepsilon Q) - T (P)}{\varepsilon}.
+$$
+
+In the context of machine learning, the learned model parameters, denoted as $\tilde{\theta}(P)$ , can be viewed as a function of the data distribution $P$ . Specifically, the parameters of the learned model are typically obtained by minimizing a loss function, i.e., $\tilde{\theta} (P) = \arg \min_{\theta}\tilde{\mathcal{L}} (\theta ,P)$ . Here, $\tilde{\mathcal{L}} (\theta ,P)$ is a loss function that depends on a probability measure $P$ , distinguishing it from the non-decomposable loss $\mathcal{L}(\theta ,b)$ that depends on the object presence vector $b$ .
+
+Assuming the loss is strictly convex and twice-differentiable with respect to the parameters, the learned parameters $\tilde{\theta}(P)$ are then implicitly determined by the following equation
+
+$$
+\nabla_ {\theta} \tilde {\mathcal {L}} (\tilde {\theta} (P), P) = \mathbf {0}.
+$$
+
+Moreover, the IF of $\tilde{\theta}(P)$ with a perturbation towards $Q$ is given by
+
+$$
+\operatorname{IF}(\tilde{\theta}(P); Q) = - \left[ \nabla_{\theta}^{2} \tilde{\mathcal{L}}(\tilde{\theta}(P), P) \right]^{-1} \lim_{\varepsilon \to 0} \frac{\nabla_{\theta} \tilde{\mathcal{L}}(\tilde{\theta}(P), (1 - \varepsilon) P + \varepsilon Q) - \nabla_{\theta} \tilde{\mathcal{L}}(\tilde{\theta}(P), P)}{\varepsilon}. \tag{7}
+$$
+
+The advantage of the IF formulation in Eq. (7) is that it can be applied to more general loss functions by properly specifying $P, Q$ , and $\tilde{\mathcal{L}}$ .
+
+Example: Application of Eq. (7) to M-Estimators. As an example, the following Lemma 3.4 states that the IF in Eq. (3) for decomposable losses can be viewed as a special case of the formulation in Eq. (7). This is a well-known result for M-estimators in robust statistics (Huber & Ronchetti, 2009), and its proof can be found in Appendix A.2. Intuitively, with the choice of $P, Q$ , and $\tilde{\mathcal{L}}$ in Lemma 3.4, $(1 - \varepsilon)P + \varepsilon Q = (1 - \varepsilon)\mathbb{P}_n + \varepsilon \delta_{z_i}$ corresponds to upweighting the loss weight of $z_i$ by a small perturbation, which is essentially how the IF in Eq. (3) is derived.
+
+Lemma 3.4 (IF for M-Estimators). Eq. (7) reduces to Eq. (3) up to a constant when we specify that 1) $P$ is the empirical distribution $\mathbb{P}_n = \sum_{i = 1}^n\delta_{z_i} / n$ , where $\delta_{z_i}$ is the Dirac measure, i.e., $\operatorname*{Pr}(z_i) = 1$ and $\operatorname*{Pr}(z_j) = 0, j \neq i$ ; 2) $Q = \delta_{z_i}$ ; and 3) $\tilde{\mathcal{L}} (\theta ,P)\coloneqq \mathbb{E}_{z\sim P}[\ell (\theta ;z)]$ . Specifically,
+
+$$
+\mathrm {I F} (\tilde {\theta} (\mathbb {P} _ {n}); \delta_ {z _ {i}}) = - n \left[ \nabla_ {\theta} ^ {2} \mathcal {L} _ {D} (\hat {\theta} _ {D} (\mathbf {1}), \mathbf {1}) \right] ^ {- 1} \nabla_ {\theta} \ell (\hat {\theta} _ {D} (\mathbf {1}); z _ {i}).
+$$
+
+Challenges of Applying Eq. (7) in Modern Machine Learning. While the IF in Eq. (7) is a principled and well-established notion in statistics, there are two unique challenges when applying it to modern machine learning models with general non-decomposable losses. Firstly, solving the limit on the right-hand side of Eq. (7) requires case-by-case derivation for different loss functions and models, which can be complicated (see an example of IF for the Cox regression (Reid & Crepeau, 1985) in Appendix A.5). Secondly, the mapping $\tilde{\theta}(P)$ , and hence the limit, is not well-defined for non-convex loss functions, as the (local) minimizer is not unique. A similar problem exists in the IF for decomposable losses in Eq. (3), and Koh & Liang (2017) mitigate it through heuristic tricks specifically designed for Eq. (3). However, the IF in Eq. (7) is in general more complicated for non-decomposable losses, and how to generalize it to modern setups like neural networks remains unclear.
+
+# 3.4. VIF as a Finite-Difference Approximation
+
+We now derive the proposed VIF method by applying Eq. (7) to the non-decomposable loss while addressing the aforementioned challenges through a finite-difference approximation.
+
+Definition 3.5 (Finite-Difference IF). Define the finite-difference IF as follows:
+
+$$
+\widehat{\operatorname{IF}}_{\varepsilon}(\tilde{\theta}(P); Q) := - \left[ \nabla_{\theta}^{2} \tilde{\mathcal{L}}(\tilde{\theta}(P), P) \right]^{-1} \frac{\nabla_{\theta} \tilde{\mathcal{L}}(\tilde{\theta}(P), (1 - \varepsilon) P + \varepsilon Q) - \nabla_{\theta} \tilde{\mathcal{L}}(\tilde{\theta}(P), P)}{\varepsilon}, \tag{8}
+$$
+
+which approximates the IF in Eq. (7), $\operatorname{IF}(\tilde{\theta}(P); Q)$ , by replacing the limit with a finite difference.
+
+Observation on M-Estimators. The proposed VIF method for general non-decomposable losses is motivated by the following observation in the special case for M-estimators.
+
+Theorem 3.6 (Finite-Difference IF for M-Estimators). Under the specification of $P = \mathbb{P}_n, Q = \delta_{z_i}$ , and $\tilde{\mathcal{L}} = \mathbb{E}_{z\sim P}[\ell (\theta ;z)]$ in Lemma 3.4, the IF is identical to the finite-difference IF with $\varepsilon = -\frac{1}{n - 1}$ , i.e.,
+
+$$
+\operatorname {I F} \left(\tilde {\theta} \left(\mathbb {P} _ {n}\right); \delta_ {z _ {i}}\right) = \widehat {\operatorname {I F}} _ {- \frac {1}{n - 1}} \left(\tilde {\theta} \left(\mathbb {P} _ {n}\right); \delta_ {z _ {i}}\right).
+$$
+
+Furthermore, denote $\mathbb{Q}_{n - 1}^{(-i)}$ as the empirical distribution where $\operatorname *{Pr}(z_i) = 0$ and $\operatorname *{Pr}(z_j) = \frac{1}{n - 1},j\neq i.$ Then we have
+
+$$
+\left(1 + \frac{1}{n - 1}\right) \mathbb{P}_n - \frac{1}{n - 1} \delta_{z_i} = \mathbb{Q}_{n - 1}^{(-i)}, \qquad
+\widehat{\operatorname{IF}}_{-\frac{1}{n - 1}}(\tilde{\theta}(\mathbb{P}_n); \delta_{z_i}) = -(n - 1)\, \widehat{\operatorname{IF}}_{1}(\tilde{\theta}(\mathbb{P}_n); \mathbb{Q}_{n - 1}^{(-i)}).
+$$
+
+The first part of Theorem 3.6 suggests that, for M-estimators, the limit in $\operatorname{IF}(\tilde{\theta}(\mathbb{P}_n);\delta_{z_i})$ can be replaced exactly by a finite difference with a proper choice of $\varepsilon$ . The second part of Theorem 3.6 further shows that we can construct another finite-difference IF, $\widehat{\operatorname{IF}}_1(\tilde{\theta}(\mathbb{P}_n);\mathbb{Q}_{n - 1}^{(-i)})$ , with a different choice of $Q = \mathbb{Q}_{n - 1}^{(-i)}$ and $\varepsilon = 1$ , that differs from $\operatorname{IF}(\tilde{\theta}(\mathbb{P}_n);\delta_{z_i})$ only by a constant factor. For the purpose of data attribution, we typically only care about the relative influence among the training data points, so the constant factor does not matter.
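Both parts of Theorem 3.6 can be sanity-checked numerically on the simplest M-estimator, the sample mean (squared loss, unit Hessian). The snippet below is an illustrative check, not from the paper; for the mean, the exact IF toward $\delta_{z_i}$ is $z_i - \bar{z}$, and both finite differences recover it:

```python
import numpy as np

n = 6
rng = np.random.default_rng(1)
z = rng.normal(size=n)

# Weighted squared loss L~(theta, w) = sum_j w_j (theta - z_j)^2 / 2.
# Its gradient is sum(w) * theta - w @ z; its Hessian is sum(w) = 1.
def grad(theta, w):
    return np.sum(w) * theta - w @ z

P = np.full(n, 1.0 / n)   # empirical distribution P_n as a weight vector
theta_hat = P @ z         # minimizer of L~(., P_n): the sample mean
hess = 1.0

def fd_if(eps, Q):
    """Finite-difference IF (Eq. 8) at P_n in direction Q."""
    mix = (1 - eps) * P + eps * Q
    return -(grad(theta_hat, mix) - grad(theta_hat, P)) / (eps * hess)

i = 2
delta_i = np.eye(n)[i]                       # Dirac measure at z_i
Q_minus_i = (P - delta_i / n) * n / (n - 1)  # uniform on the other points

lhs = fd_if(-1.0 / (n - 1), delta_i)
rhs = -(n - 1) * fd_if(1.0, Q_minus_i)
print(np.isclose(lhs, rhs), np.isclose(lhs, z[i] - z.mean()))  # True True
```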
+
+Generalization to General Non-Decomposable Losses. The benefit of the form $\widehat{\operatorname{IF}}_1(\tilde{\theta} (\mathbb{P}_n);\mathbb{Q}_{n - 1}^{(-i)})$ is that it is straightforward to generalize from M-estimators to general non-decomposable losses. Specifically, noticing that $\mathbb{P}_n$ and $\mathbb{Q}_{n - 1}^{(-i)}$ are respectively the empirical distributions on the full dataset and on the dataset without $z_{i}$ , we can apply this finite-difference IF to any non-decomposable loss through an appropriate definition of $\tilde{\mathcal{L}}$ .
+
+Definition 3.7 ( $\tilde{\mathcal{L}}(\theta, P)$ for Non-Decomposable Loss). Let $\mathcal{P}(n)$ be the set of uniform distributions supported on subsets of $n$ fixed points $\{z_i\}_{i=1}^n$ . Note that both of the empirical distributions $\mathbb{P}_n$ and $\mathbb{Q}_{n-1}^{(-i)}$ belong to the set $\mathcal{P}(n)$ . For any $P \in \mathcal{P}(n)$ , denote $b^P \in \{0, 1\}^n$ as a binary vector such that $b_i^P = \mathbb{1}[P(z_i) > 0]$ , $i = 1, \ldots, n$ . The $\tilde{\mathcal{L}}(\theta, P)$ for a non-decomposable loss $\mathcal{L}$ can be defined as follows:
+
+$$
+\tilde {\mathcal {L}} (\theta , P) := \mathcal {L} (\theta , b ^ {P}).
+$$
+
+Proposition 3.8 (Finite-Difference IF on Non-Decomposable Loss). Under Definition 3.7, we have
+
+$$
+\widehat{\operatorname{IF}}_{1}\left(\tilde{\theta}(\mathbb{P}_n); \mathbb{Q}_{n - 1}^{(-i)}\right) = \left[ \nabla_{\theta}^{2} \mathcal{L}\left(\hat{\theta}(\mathbf{1}), \mathbf{1}\right) \right]^{-1} \nabla_{\theta}\left(\mathcal{L}(\hat{\theta}(\mathbf{1}), \mathbf{1}) - \mathcal{L}(\hat{\theta}(\mathbf{1}), \mathbf{1}_{-i})\right). \tag{9}
+$$
+
+The Proposed VIF. We propose the following method to approximate the LOO effect for any non-decomposable loss.
+
+Definition 3.9 (Versatile Influence Function). The Versatile Influence Function (VIF), which measures the influence of a data object $i$ on the parameters $\hat{\theta}(\mathbf{1})$ learned from a non-decomposable loss $\mathcal{L}$ , is defined as follows:
+
+$$
+\operatorname{VIF}(\hat{\theta}(\mathbf{1}); i) := - \left[ \frac{1}{n} \nabla_{\theta}^{2} \mathcal{L}(\hat{\theta}(\mathbf{1}), \mathbf{1}) \right]^{-1} \nabla_{\theta}\left(\mathcal{L}(\hat{\theta}(\mathbf{1}), \mathbf{1}) - \mathcal{L}(\hat{\theta}(\mathbf{1}), \mathbf{1}_{-i})\right). \tag{10}
+$$
+
+The proposed VIF is a variant of Eq. (9), as it can be easily shown that
+
+$$
+\operatorname{VIF}(\hat{\theta}(\mathbf{1}); i) = - n \widehat{\operatorname{IF}}_{1}(\tilde{\theta}(\mathbb{P}_n); \mathbb{Q}_{n - 1}^{(-i)}).
+$$
+
+The inclusion of the additional constant factor is motivated by Theorem 3.6 to make it better align with the original IF in Eq. (7). In practice, this definition is also typically more numerically stable as the Hessian is normalized by $\frac{1}{n}$ .
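As a concrete illustration, the sketch below evaluates Eq. (10) for a toy non-decomposable loss: mean squared error, whose $1/\sum_i b_i$ normalization couples the per-sample terms so that Eq. (1) does not apply. The data and the analytically written gradient and Hessian are illustrative assumptions; a practical implementation would obtain them via auto-differentiation. Since VIF matches the LOO effect only up to a constant factor, we compare by correlation magnitude:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 40, 3
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.3 * rng.normal(size=n)

# Toy non-decomposable loss:
#   L(theta, b) = sum_{i: b_i=1} (x_i @ theta - y_i)^2 / (2 * sum(b)).
def grad_L(theta, b):
    m = b.astype(bool)
    return X[m].T @ (X[m] @ theta - y[m]) / m.sum()

theta_hat = np.linalg.lstsq(X, y, rcond=None)[0]  # minimizer of L(., 1)
H = X.T @ X / n                                   # Hessian of L(., 1)

ones = np.ones(n)
vif = np.zeros((n, d))
for i in range(n):
    b = ones.copy()
    b[i] = 0.0
    grad_diff = grad_L(theta_hat, ones) - grad_L(theta_hat, b)
    vif[i] = -np.linalg.solve(H / n, grad_diff)   # Eq. (10)

# Brute-force LOO retraining for comparison.
loo = np.array([np.linalg.lstsq(np.delete(X, i, 0), np.delete(y, i, 0),
                                rcond=None)[0] for i in range(n)]) - theta_hat

corr = np.corrcoef(vif[:, 0], loo[:, 0])[0, 1]
print(abs(corr) > 0.9)  # True: VIF tracks brute-force LOO retraining
```

Note that only the relative influences are compared here, consistent with the constant-factor discussion above.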
+
+Computational Advantages. The VIF defined in Eq. (10) enjoys a few computational advantages. Firstly, VIF depends on the parameters only at $\hat{\theta}(\mathbf{1})$ and does not require $\hat{\theta}(\mathbf{1}_{-i})$ ; therefore, it does not require model retraining. Secondly, compared to Eq. (7), VIF only involves gradients and the Hessian of the loss, which can be easily obtained through the auto-differentiation provided in modern machine learning libraries. Thirdly, VIF can be applied to more complicated models and accelerated with heuristic tricks similar to those employed by existing IF-based data attribution methods for decomposable losses (Koh & Liang, 2017; Grosse et al., 2023). We have included the results of efficient approximate implementations of VIF based on Conjugate Gradient (CG) and LiSSA (Agarwal et al., 2017; Koh & Liang, 2017) in Appendix C. Finally, note that VIF calculates the difference $\mathcal{L}(\hat{\theta}(\mathbf{1}), \mathbf{1}) - \mathcal{L}(\hat{\theta}(\mathbf{1}), \mathbf{1}_{-i})$ before taking the gradient with respect to the parameters. In some special cases (see, e.g., the decomposable loss case in Section 3.5), taking the difference before the gradient significantly simplifies the computation, as the loss terms not involving the $i$ -th data object cancel out.
+
+Attributing a Target Function. In practice, we are often interested in attributing certain model outputs or performance. Similar to Koh & Liang (2017), given a target function of interest, $f(z,\theta)$ , that depends on both some data $z$ and the model parameters $\theta$ , the influence of a training data point $i$ on this target function can be obtained through the chain rule:
+
+$$
+\nabla_ {\theta} f (z, \hat {\theta} (\mathbf {1})) ^ {\top} \operatorname {V I F} (\hat {\theta} (\mathbf {1}); i). \tag {11}
+$$
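A minimal sketch of Eq. (11): the parameter-space VIF scores are projected onto a target function through its gradient. The VIF scores and test point below are random illustrative stand-ins; the relative-risk target mirrors the Cox regression choice in Section 4:

```python
import numpy as np

rng = np.random.default_rng(3)
d, n = 4, 10
vif_scores = rng.normal(size=(n, d))   # stand-in for VIF(theta_hat; i)
theta_hat = rng.normal(size=d)
z_test = rng.normal(size=d)

# Target: relative risk f(z, theta) = exp(theta @ z); its gradient is f * z.
grad_f = np.exp(theta_hat @ z_test) * z_test

influence_on_f = vif_scores @ grad_f   # Eq. (11): one scalar per object
print(influence_on_f.shape)  # (10,)
```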
+
+# 3.5. Approximation Quality in Special Cases
+
+To provide insights into how accurately the proposed VIF approximates Eq. (7), we examine the following special cases. Although there is no universal guarantee of the approximation quality for all non-decomposable losses, our analysis in these cases suggests that VIF may perform well in many practical applications.
+
+M-Estimation (Decomposable Loss). For a decomposable loss, we have $\nabla_{\theta}\mathcal{L}_D(\hat{\theta}_D(\mathbf{1}),\mathbf{1}) = \sum_{i = 1}^n\nabla_{\theta}\ell (\hat{\theta}_D(\mathbf{1});z_i)$ and $\nabla_{\theta}\mathcal{L}_D(\hat{\theta}_D(\mathbf{1}),\mathbf{1}_{-i}) = \sum_{j = 1,j\neq i}^n\nabla_{\theta}\ell (\hat{\theta}_D(\mathbf{1});z_j)$ . In this case, it is straightforward to see that
+
+$$
+\operatorname {V I F} (\hat {\theta} (\mathbf {1}); i) = - n \left[ \nabla_ {\theta} ^ {2} \mathcal {L} _ {D} (\hat {\theta} _ {D} (\mathbf {1}), \mathbf {1}) \right] ^ {- 1} \nabla_ {\theta} \ell (\hat {\theta} _ {D} (\mathbf {1}); z _ {i}),
+$$
+
+which indicates that the VIF here is identical to the IF in Lemma 3.4 without approximation error.
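This identity is easy to verify numerically for ordinary least squares (an illustrative check, not from the paper): the loss-difference gradient in Eq. (10) collapses to the single-sample gradient $\nabla_{\theta}\ell(\hat{\theta}_D(\mathbf{1}); z_i)$, so both sides agree to machine precision:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 12, 2
X = rng.normal(size=(n, d))
y = X @ np.array([1.0, -0.5]) + 0.1 * rng.normal(size=n)

# Decomposable loss L_D(theta, b) = sum_{i: b_i=1} (x_i @ theta - y_i)^2 / 2.
theta = np.linalg.lstsq(X, y, rcond=None)[0]
H = X.T @ X            # Hessian of L_D(., 1)
r = X @ theta - y      # residuals

i = 5
# VIF via the loss-difference gradient of Eq. (10)...
grad_diff = X.T @ r - np.delete(X, i, 0).T @ np.delete(r, i)
vif_i = -np.linalg.solve(H / n, grad_diff)
# ...versus the classical IF of Lemma 3.4: -n H^{-1} grad l_i.
if_i = -n * np.linalg.solve(H, X[i] * r[i])
print(np.allclose(vif_i, if_i))  # True
```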
+
+Cox Regression. The closed form of the IF for the Cox regression model, obtained by directly solving the limit in Eq. (7) under the Cox regression model, exists in the statistics literature (Reid & Crepeau, 1985), which allows us to characterize the approximation error of the VIF in comparison to the exact solution.
+
+Theorem 3.10 (Approximation Error under Cox Regression; Informal). Denote the exact solution by Reid & Crepeau (1985) as $\mathrm{IF}_{\mathrm{Cox}}(\hat{\theta}(\mathbf{1}); i)$ and the application of VIF to Cox regression as $\mathrm{VIF}_{\mathrm{Cox}}(\hat{\theta}(\mathbf{1}); i)$ . Their difference is bounded as follows:
+
+$$
+\operatorname{VIF}_{\mathrm{Cox}}(\hat{\theta}(\mathbf{1}); i) - \operatorname{IF}_{\mathrm{Cox}}(\hat{\theta}(\mathbf{1}); i) = O_p\left(\frac{1}{n}\right).
+$$
+
+Theorem 3.10 suggests that the approximation error of the VIF vanishes when the training data size is large. A formal statement of this result and its proof can be found in Appendix A.5, and an empirical verification of the theoretical results can be found in Appendix A.6.
+
+# 4. Experiments
+
+# 4.1. Experimental Setup
+
+We conduct experiments on three examples listed in Section 3.2: Cox Regression, Node Embedding, and Listwise Learning-to-Rank. In this section, we present the performance and runtime of VIF compared to brute-force LOO retraining. We also provide two case studies to demonstrate how the influence estimated by VIF can help interpret the behavior of the trained model.
+
+Datasets and Models. We evaluate our approach on multiple datasets across different scenarios. For Cox Regression, we use the METABRIC and SUPPORT datasets (Katzman et al., 2018). For both of the datasets, we train a Cox model using the negative log partial likelihood following Eq. (4). For Node Embedding, we use Zachary's Karate network (Zachary, 1977) and train a DeepWalk model (Perozzi et al., 2014). Specifically, we train a two-layer model with one embedding layer and one linear layer optimized via contrastive loss following Eq. (5), where the loss is defined as the negative log softmax. For Listwise Learning-to-Rank, we use the Delicious (Tsoumakas et al., 2008) and Mediamill (Snoek et al., 2006) datasets. We train a linear model using the loss defined in Eq. (6). Please refer to Appendix B for more detailed experiment settings.
+
+Target Functions. We apply VIF to estimate the change of a target function, $f(z,\theta)$ , before and after a specific data object is excluded from the model training process. Below are our choices of target functions for the different scenarios.
+
+- For Cox Regression, we study how the relative risk function, $f(x_{test}, \theta) = \exp(\theta^{\top} x_{test})$ , of a test object, $x_{test}$ , would change if one training object were removed.
+- For Node Embedding, we study how the contrastive loss, $f((u,v,N),\theta) = l(\theta ;(u,v,N))$ , of an arbitrary pair of test nodes, $(u,v)$ , would change if a node $w\in N$ were removed from the graph.
+- For Listwise Learning-to-Rank, we study how the ListMLE loss of a test query, $f((x_{test}, y_{test}^{[k]}), \theta) = -\sum_{j=1}^{k} (f(x_{test}; \theta)_j - \log \sum_{l \in [n] \setminus \{y_{test}^{(1)}, \ldots, y_{test}^{(j-1)}\}} \exp(f(x_{test}; \theta)_l))$, would change if one item $l \in [n]$ were removed from the training process.
+
+# 4.2. Performance
+
+Table 1. The Pearson correlation coefficients of VIF and brute-force LOO retraining under different experimental settings. Specifically, "Brute-Force" refers to the results of two runs of brute-force LOO retraining using different random seeds, which serves as a reference upper limit of performance.
+
+| Scenario | Dataset | Method | Pearson Correlation |
+| --- | --- | --- | --- |
+| Cox Regression | METABRIC | VIF | 0.997 |
+| Cox Regression | METABRIC | Brute-Force | 0.997 |
+| Cox Regression | SUPPORT | VIF | 0.943 |
+| Cox Regression | SUPPORT | Brute-Force | 0.955 |
+| Node Embedding | Karate | VIF | 0.407 |
+| Node Embedding | Karate | Brute-Force | 0.419 |
+| Listwise Learning-to-Rank | Mediamill | VIF | 0.823 |
+| Listwise Learning-to-Rank | Mediamill | Brute-Force | 0.999 |
+| Listwise Learning-to-Rank | Delicious | VIF | 0.906 |
+| Listwise Learning-to-Rank | Delicious | Brute-Force | 0.999 |
+
+We utilize the Pearson correlation coefficient to quantitatively evaluate how closely the influence estimated by VIF aligns with the results obtained by brute-force LOO retraining. Furthermore, as a reference upper limit of performance, we evaluate the correlation between two brute-force LOO retraining runs with different random seeds. As noted in Remark 3.3, some examples like contrastive losses are not deterministic, which could impact the observed correlations.
+
+Table 1 presents the Pearson correlation coefficients comparing VIF with brute-force LOO retraining using different random seeds. The performance of VIF matches that of brute-force LOO retraining in all experimental settings. Except for the Node Embedding scenario, the Pearson correlation coefficients are close to 1, indicating a strong resemblance between the VIF estimates and the retraining results. In the Node Embedding scenario, the correlations are only moderately high for both methods due to the inherent randomness in the random walk procedure that constructs the triplet set in the DeepWalk algorithm. Nevertheless, VIF achieves a correlation close to the upper limit set by brute-force LOO retraining. In Figure 1, we show that the influences predicted by VIF align almost perfectly with the exact test loss differences after LOO retraining that removes each individual training sample in turn. Additional experimental results on larger datasets and models are presented in Appendices B and C.
+
+# 4.3. Runtime
+
+We report the runtime of VIF and brute-force LOO retraining in Table 2. The computational advantage of VIF is significant, reducing the runtime by a factor of up to $1097 \times$. This advantage becomes more pronounced as the dataset size increases. The improvement ratio on the Karate dataset is moderate due to the overhead from the random walk process and potential optimizations in the implementation. All runtime measurements were recorded using an Intel(R) Xeon(R) Gold 6338 CPU.
+
+Figure 1. The influences predicted by VIF versus the exact loss differences after LOO retraining on 6 randomly selected test samples. The experiment is done with Cox Regression on the METABRIC dataset. Each sub-figure corresponds to one test sample. The x-axis indicates the influence of a training sample on the test sample, while the y-axis indicates the change of loss on the test sample after LOO retraining that leaves that training sample out.
+
+Table 2. Runtime comparison of VIF and brute-force LOO retraining.
+
+| Scenario | Dataset | Brute-Force | VIF | Improvement Ratio |
+| --- | --- | --- | --- | --- |
+| Cox Regression | METABRIC | 24 min | 2.43 sec | 593× |
+| Cox Regression | SUPPORT | 225 min | 12.3 sec | 1097× |
+| Node Embedding | Karate | 204 min | 109 min | 1.87× |
+| Listwise Learning-to-Rank | Mediamill | 52 min | 2.6 min | 20× |
+| Listwise Learning-to-Rank | Delicious | 660 min | 2.8 min | 236× |
+
+# 4.4. Case Studies
+
+We present two case studies to show how the influence estimated by VIF can help interpret the behavior of the trained model.
+
+Case study 1: Cox Regression. In Table 3, we show the top-5 most influential training samples, as estimated by VIF, for the relative risk function of two randomly selected test samples. We observe that removing two types of training samples will significantly increase the relative risk function of a test sample, which corresponds to a shorter event time: (1) training samples that share similar features with the test sample and have long event times (e.g., the samples ranked 1, 3, 4, and 5 for test sample 0 and the sample ranked 5 for test sample 1), and (2) training samples that differ in features from the test sample and have short event times (e.g., the sample ranked 2 for test sample 0 and the samples ranked 1, 2, 3, and 4 for test sample 1). These findings align with domain knowledge.
+
+Table 3. The top-5 influential training samples for 2 test samples in the METABRIC dataset. "Feature Similarity" is the cosine similarity between the features of the influential training sample and the test sample. "Observed Time" and "Event Occurred" are the $Y$ and $\Delta$ of the influential training sample as defined in Eq. (4).
+
+| | Influence Rank | Feature Similarity | Observed Time | Event Occurred |
+| --- | --- | --- | --- | --- |
+| Test Sample 0 | 1 | 0.84 | 322.83 | False |
+| | 2 | -0.34 | 9.13 | True |
+| | 3 | 0.77 | 258.17 | True |
+| | 4 | 0.23 | 131.27 | False |
+| | 5 | 0.81 | 183.43 | False |
+| Test Sample 1 | 1 | -0.49 | 16.57 | True |
+| | 2 | -0.22 | 30.97 | True |
+| | 3 | -0.39 | 15.07 | True |
+| | 4 | -0.65 | 4.43 | True |
+| | 5 | 0.72 | 307.63 | False |
+
+Case study 2: Node Embedding. In Figures 2b and 2c, we show the influence of all nodes on the contrastive loss of two pairs of test nodes. The spring layout of the Karate dataset is provided in Figure 2a. We observe that the most influential nodes (top right in Figures 2b and 2c) are the hub nodes that lie on the shortest path between the pair of test nodes. For example, the shortest path from node 12 to node 10 passes through node 0, while the shortest path from node 15 to node 13 passes through node 33. Conversely, the nodes with the most negative influence (bottom left in Figures 2b and 2c) are those that likely "distract" the random walk away from the test node pairs. For instance, node 3 distracts the walk from node 12 to node 10, and node 30 distracts the walk from node 15 to node 13.
+
+# 5. Conclusion
+
+In this work, we introduced the Versatile Influence Function (VIF), a novel method that extends IF-based data attribution to models trained with non-decomposable losses. The key idea behind VIF is a finite-difference approximation of the general IF formulation in the statistics literature, which eliminates the need for case-specific derivations and can be efficiently computed with the auto-differentiation tools provided in modern machine learning libraries. Our theoretical analysis demonstrates that VIF exactly recovers classical influence functions in the case of M-estimators and provides strong approximations for more complex settings such as Cox regression. Empirical evaluations across various tasks show that VIF closely approximates the influence obtained by brute-force leave-one-out retraining while being orders of magnitude faster. By broadening the scope of IF-based data attribution to non-decomposable losses, VIF opens new avenues for data-centric applications in machine learning, empowering practitioners to explore data attribution in more complex and diverse domains.
+
+Figure 2. VIF is applied to Zachary's Karate network to estimate the influence of each node on the contrastive loss of a pair of test nodes. Figure 2a is a spring layout of the Karate network. Figures 2b and 2c illustrate the alignment between the influence estimated by VIF (x-axis) and the brute-force LOO retrained loss difference (y-axis). Panels: (a) Karate Club Graph; (b) Node 12 and Node 10; (c) Node 15 and Node 13.
+
+Limitation and Future Work. Similar to early IF-based methods for decomposable loss (Koh & Liang, 2017), the formal derivation of VIF assumes convexity of the loss function, which requires practical tricks to adapt the proposed method to large-scale neural network models. While we have explored the application of Conjugate Gradient and LiSSA (Agarwal et al., 2017) for efficient inverse Hessian approximation (see Appendix C), more advanced techniques to stabilize and accelerate IF-based methods developed for decomposable losses, such as EK-FAC (Grosse et al., 2023), ensemble (Park et al., 2023), or gradient projection (Choe et al., 2024), may be adapted to further enhance the practical applicability of VIF on large-scale models.
+
+# Impact Statement
+
+This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
+
+# Acknowledgements
+
+WT was partially supported by NSF DMS-2412853. The views and conclusions expressed in this paper are solely those of the authors and do not necessarily reflect the official policies or positions of the supporting agency.
+
+# References
+
+Agarwal, N., Bullins, B., and Hazan, E. Second-order stochastic optimization for machine learning in linear time. Journal of Machine Learning Research, 18(116):1-40, 2017. URL http://jmlr.org/papers/v18/16-491.html.
+Barshan, E., Brunet, M.-E., and Dziugaite, G. K. Relatif: Identifying explanatory training samples via relative influence. In International Conference on Artificial Intelligence and Statistics, pp. 1899-1909. PMLR, 2020.
+Chen, T., Kornblith, S., Norouzi, M., and Hinton, G. A Simple Framework for Contrastive Learning of Visual Representations. In Iii, H. D. and Singh, A. (eds.), Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pp. 1597-1607. PMLR, 2020.
+Chen, Z., Li, P., Liu, H., and Hong, P. Characterizing the influence of graph elements. arXiv preprint arXiv:2210.07441, 2022.
+Choe, S. K., Ahn, H., Bae, J., Zhao, K., Kang, M., Chung, Y., Pratapa, A., Neiswanger, W., Strubell, E., Mitamura, T., et al. What is your data worth to gpt? llm-scale data valuation with influence functions. arXiv preprint arXiv:2405.13954, 2024.
+Cox, D. R. Regression models and life-tables. Journal of the Royal Statistical Society. Series B, Statistical methodology, 34(2):187-202, January 1972. ISSN 1369-7412,1467-9868. doi: 10.1111/j.2517-6161.1972.tb00899.x.
+Cox, D. R. Partial likelihood. Biometrika, 62(2):269-276, 1975.
+Deng, J. and Ma, J. Computational Copyright: Towards A Royalty Model for Music Generative AI. arXiv [cs.AI], December 2023.
+Ghorbani, A. and Zou, J. Data shapley: Equitable valuation of data for machine learning. In Chaudhuri, K. and Salakhutdinov, R. (eds.), Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 2242-2251. PMLR, 09-15 Jun 2019. URL https://proceedings.mlr.press/v97/ghorbani19c.html.
+Grosse, R., Bae, J., Anil, C., Elhage, N., Tamkin, A., Tajdini, A., Steiner, B., Li, D., Durmus, E., Perez, E., Hubinger, E., Lukošiūtė, K., Nguyen, K., Joseph, N., McCandlish, S., Kaplan, J., and Bowman, S. R. Studying large language model generalization with influence functions. arXiv [cs.LG], August 2023.
+Guo, H., Rajani, N. F., Hase, P., Bansal, M., and Xiong, C. Fastif: Scalable influence functions for efficient model interpretation and debugging. arXiv preprint arXiv:2012.15781, 2020.
+Hammoudeh, Z. and Lowd, D. Training data influence analysis and estimation: A survey. Machine Learning, 113(5):2351-2403, 2024.
+Hampel, F. R. The influence curve and its role in robust estimation. Journal of the American Statistical Association, 69(346):383-393, June 1974. ISSN 0162-1459,1537-274X. doi: 10.1080/01621459.1974.10482962.
+Huber, P. J. and Ronchetti, E. M. Robust Statistics. Wiley Series in Probability and Statistics. Wiley-Blackwell, Hoboken, NJ, 2 edition, January 2009. ISBN 9780470129906,9780470434697. doi: 10.1002/9780470434697.
+Ilyas, A., Park, S. M., Engstrom, L., Leclerc, G., and Madry, A. Datamodels: Predicting predictions from training data. arXiv preprint arXiv:2202.00622, 2022.
+Invernizzi, L., Comparetti, P. M., Benvenuti, S., Kruegel, C., Cova, M., and Vigna, G. Evilseed: A guided approach to finding malicious web pages. In 2012 IEEE symposium on Security and Privacy, pp. 428-442. IEEE, 2012.
+Jia, R., Dao, D., Wang, B., Hubis, F. A., Hynes, N., Gürel, N. M., Li, B., Zhang, C., Song, D., and Spanos, C. J. Towards efficient data valuation based on the shapley value. In Chaudhuri, K. and Sugiyama, M. (eds.), Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, volume 89 of Proceedings of Machine Learning Research, pp. 1167-1176. PMLR, 16-18 Apr 2019. URL https://proceedings.mlr.press/v89/jia19a.html.
+Katzman, J. L., Shaham, U., Cloninger, A., Bates, J., Jiang, T., and Kluger, Y. Deepsurv: personalized treatment recommender system using a cox proportional hazards deep neural network. BMC medical research methodology, 18: 1-12, 2018.
+Koh, P. W. and Liang, P. Understanding Black-box Predictions via Influence Functions. In Precup, D. and Teh, Y. W. (eds.), Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pp. 1885-1894. PMLR, 2017.
+
+Kwon, Y. and Zou, J. Beta shapley: a unified and noise-reduced data valuation framework for machine learning. arXiv preprint arXiv:2110.14049, 2021.
+Kwon, Y., Wu, E., Wu, K., and Zou, J. Datainf: Efficiently estimating data influence in lora-tuned llms and diffusion models. arXiv preprint arXiv:2310.00902, 2023.
+Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. Distributed representations of words and phrases and their compositionality. Neural information processing systems, 2013.
+Park, S. M., Georgiev, K., Ilyas, A., Leclerc, G., and Madry, A. Trak: Attributing model behavior at scale. arXiv preprint arXiv:2303.14186, 2023.
+Perozzi, B., Al-Rfou, R., and Skiena, S. DeepWalk: online learning of social representations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 701-710, New York, NY, USA, August 2014. ACM. ISBN 9781450329569. doi: 10.1145/2623330.2623732.
+Pruthi, G., Liu, F., Kale, S., and Sundararajan, M. Estimating training data influence by tracing gradient descent. Advances in Neural Information Processing Systems, 33: 19920-19930, 2020.
+Reid, N. and Crepeau, H. Influence functions for proportional hazards regression. Biometrika, 72(1):1, April 1985. ISSN 0006-3444,1464-3510. doi: 10.2307/2336329.
+Rong, X. word2vec Parameter Learning Explained. arXiv [cs.CL], November 2014.
+Schioppa, A., Zablotskaia, P., Vilar, D., and Sokolov, A. Scaling up influence functions. In Proc. Conf. AAAI Artif. Intell., volume 36, pp. 8179-8186. Association for the Advancement of Artificial Intelligence (AAAI), June 2022. doi: 10.1609/aaai.v36i8.20791.
+Snoek, C. G., Worring, M., Van Gemert, J. C., Geusebroek, J.-M., and Smeulders, A. W. The challenge problem for automated detection of 101 semantic concepts in multimedia. In Proceedings of the 14th ACM international conference on Multimedia, pp. 421-430, 2006.
+Tsoumakas, G., Katakis, I., and Vlahavas, I. Effective and efficient multilabel classification in domains with large number of labels. In Proc. ECML/PKDD 2008 Workshop on Mining Multidimensional Data (MMD'08), volume 21, pp. 53-59, 2008.
+van der Vaart, A. W. Asymptotic Statistics. Cambridge University Press, Cambridge, England, June 2012. ISBN 9780511802256. doi: 10.1017/cbo9780511802256.
+
+Wang, J. T. and Jia, R. Data banzhaf: A robust data valuation framework for machine learning. In International Conference on Artificial Intelligence and Statistics, pp. 6388-6421. PMLR, 2023.
+Wu, J., Yang, Y., Qian, Y., Sui, Y., Wang, X., and He, X. Gif: A general graph unlearning strategy via influence function. In Proceedings of the ACM Web Conference 2023, pp. 651-661, 2023.
+Xia, F., Liu, T.-Y., Wang, J., Zhang, W., and Li, H. Listwise approach to learning to rank: theory and algorithm. In Proceedings of the 25th international conference on Machine learning - ICML '08, pp. 1192-1199, New York, New York, USA, 2008. ACM Press. ISBN 9781605582054. doi: 10.1145/1390156.1390306.
+Yeh, C.-K., Kim, J., Yen, I. E.-H., and Ravikumar, P. K. Representative point selection for explaining deep neural networks. Advances in Neural Information Processing Systems, 31, 2018.
+Zachary, W. W. An information flow model for conflict and fission in small groups. Journal of anthropological research, 33(4):452-473, 1977.
+Zhao, T., McAuley, J., Li, M., and King, I. Improving recommendation accuracy using networks of substitutable and complementary products. In 2017 International Joint Conference on Neural Networks (IJCNN), pp. 3649-3655. IEEE, 2017.
+
+# A. Omitted Derivations
+
+# A.1. Derivation of Eq. (7)
+
+Consider an $\varepsilon$-perturbation toward another distribution $Q$, i.e., $(1 - \varepsilon)P + \varepsilon Q$. Note that $\tilde{\theta}((1 - \varepsilon)P + \varepsilon Q)$ solves $\nabla_{\theta}\tilde{\mathcal{L}}(\theta, (1 - \varepsilon)P + \varepsilon Q) = 0$. Taking the derivative with respect to $\varepsilon$ on both sides and evaluating at $\varepsilon = 0$ leads to
+
+$$
+\nabla_ {\theta} ^ {2} \tilde {\mathcal {L}} (\tilde {\theta} (P), P) \lim _ {\varepsilon \to 0} \frac {\tilde {\theta} ((1 - \varepsilon) P + \varepsilon Q) - \tilde {\theta} (P)}{\varepsilon} + \lim _ {\varepsilon \to 0} \frac {\nabla_ {\theta} \tilde {\mathcal {L}} (\tilde {\theta} (P) , (1 - \varepsilon) P + \varepsilon Q) - \nabla_ {\theta} \tilde {\mathcal {L}} (\tilde {\theta} (P) , P)}{\varepsilon} = 0.
+$$
+
+Given the strict convexity, the Hessian is invertible at the global optimum. Plugging in the definition of the IF, we have
+
+$$
+I F (\tilde {\theta} (P); Q) = - \left[ \nabla_ {\theta} ^ {2} \tilde {\mathcal {L}} (\tilde {\theta} (P), P) \right] ^ {- 1} \lim _ {\varepsilon \to 0} \frac {\nabla_ {\theta} \tilde {\mathcal {L}} (\tilde {\theta} (P) , (1 - \varepsilon) P + \varepsilon Q) - \nabla_ {\theta} \tilde {\mathcal {L}} (\tilde {\theta} (P) , P)}{\varepsilon}.
+$$
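+This limit can be sanity-checked numerically. The sketch below (illustrative, assuming NumPy is available) uses the squared loss $\ell(\theta; z) = (\theta - z)^2 / 2$, for which $\tilde{\theta}(P) = \mathbb{E}_P[z]$, the Hessian equals $1$, and the identity above reduces to $\mathrm{IF}(\tilde{\theta}(P); Q) = \mathbb{E}_Q[z] - \mathbb{E}_P[z]$:
+
+```python
+import numpy as np
+
+# Squared loss l(theta; z) = (theta - z)^2 / 2: the minimizer under any
+# distribution is its mean, and the Hessian of the population loss is 1.
+# All names below are illustrative, not from the paper.
+rng = np.random.default_rng(0)
+P = rng.normal(0.0, 1.0, size=200)   # samples representing P
+Q = rng.normal(3.0, 0.5, size=200)   # samples representing Q
+
+# Closed form from the identity above: IF = E_Q[z] - E_P[z]
+if_closed = Q.mean() - P.mean()
+
+# Finite-difference check: theta_tilde((1-eps)P + eps*Q) = (1-eps)E_P[z] + eps*E_Q[z]
+eps = 1e-6
+theta_mix = (1 - eps) * P.mean() + eps * Q.mean()
+if_numeric = (theta_mix - P.mean()) / eps
+
+assert abs(if_closed - if_numeric) < 1e-6
+```
+
+The difference quotient matches the closed form here because the mixture minimizer is linear in $\varepsilon$ for this loss.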
+
+# A.2. Proof of Lemma 3.4
+
+Proof. Under M-estimation, the objective function becomes the empirical loss, i.e., $\tilde{\mathcal{L}} (\theta ,P) = \mathbb{E}_{z\sim P}[\ell (\theta ;z)]$ where $P = \mathbb{P}_n = \sum_{i = 1}^n\delta_{z_i} / n$ is the empirical distribution over the dataset. Note that $\tilde{\mathcal{L}} (\theta ,P) = \frac{1}{n}\mathcal{L}_D(\theta ,\mathbf{1})$ for any $\theta$ , therefore they share the same minimizer, i.e.,
+
+$$
+\tilde {\theta} (P) = \hat {\theta} _ {D} (\mathbf {1}).
+$$
+
+The gradient and Hessian of $\tilde{\mathcal{L}} (\tilde{\theta} (P),P)$ are respectively
+
+$$
+\nabla_ {\theta} \tilde {\mathcal {L}} (\tilde {\theta} (P), P) = \mathbb {E} _ {z \sim P} [ \nabla_ {\theta} \ell (\tilde {\theta} (P); z) ] = \frac {1}{n} \sum_ {j = 1} ^ {n} \nabla_ {\theta} \ell (\tilde {\theta} (P); z _ {j}) = 0
+$$
+
+and
+
+$$
+\nabla_ {\theta} ^ {2} \tilde {\mathcal {L}} (\tilde {\theta} (P), P) = \mathbb {E} _ {z \sim P} [ \nabla_ {\theta} ^ {2} \ell (\tilde {\theta} (P); z) ] = \sum_ {i = 1} ^ {n} \nabla_ {\theta} ^ {2} \ell (\tilde {\theta} (P); z _ {i}) / n = \frac {1}{n} \nabla_ {\theta} ^ {2} \mathcal {L} _ {D} (\hat {\theta} _ {D} (\mathbf {1}), \mathbf {1}).
+$$
+
+The infinitesimal change of the gradient toward the distribution $Q = \delta_{z_i}$ equals
+
+$$
+\begin{array}{l} \lim _ {\varepsilon \rightarrow 0} \frac {\nabla_ {\theta} \tilde {\mathcal {L}} (\tilde {\theta} (P) , (1 - \varepsilon) P + \varepsilon Q) - \nabla_ {\theta} \tilde {\mathcal {L}} (\tilde {\theta} (P) , P)}{\varepsilon} \\ = \lim _ {\varepsilon \rightarrow 0} \frac {\mathbb {E} _ {z \sim (1 - \varepsilon) P + \varepsilon Q} [ \nabla_ {\theta} \ell (\tilde {\theta} (P) , z) ] - 0}{\varepsilon} \\ = \lim _ {\varepsilon \rightarrow 0} \frac {(1 - \varepsilon) \mathbb {E} _ {z \sim P} [ \nabla_ {\theta} \ell (\tilde {\theta} (P) , z) ] + \varepsilon \mathbb {E} _ {z \sim Q} [ \nabla_ {\theta} \ell (\tilde {\theta} (P) , z) ]}{\varepsilon} \\ = \lim _ {\varepsilon \rightarrow 0} \frac {(1 - \varepsilon) \cdot 0 + \varepsilon \mathbb {E} _ {z \sim Q} [ \nabla_ {\theta} \ell (\tilde {\theta} (P) , z) ]}{\varepsilon} \\ = \mathbb {E} _ {z \sim Q} [ \nabla_ {\theta} \ell (\tilde {\theta} (P), z) ] \\ = \nabla_ {\theta} \ell (\tilde {\theta} (P), z _ {i}) = \nabla_ {\theta} \ell (\hat {\theta} _ {D} (\mathbf {1}), z _ {i}). \\ \end{array}
+$$
+
+Plugging the above equations into Eq. (7), it becomes
+
+$$
+\mathrm {I F} (\tilde {\theta} (\mathbb {P} _ {n}); \delta_ {z _ {i}}) = - n \left[ \nabla_ {\theta} ^ {2} \mathcal {L} _ {D} (\hat {\theta} _ {D} (\mathbf {1}), \mathbf {1}) \right] ^ {- 1} \nabla_ {\theta} \ell (\hat {\theta} _ {D} (\mathbf {1}); z _ {i}).
+$$
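+As a concrete instance, the sketch below (illustrative, assuming NumPy) verifies this formula for least squares, where $\ell(\theta; (x, y)) = (y - x^{\top}\theta)^2 / 2$ and $\nabla_{\theta}^{2}\mathcal{L}_D(\theta, \mathbf{1}) = X^{\top}X$, by comparing the closed form $-n (X^{\top}X)^{-1} \nabla_{\theta}\ell(\hat{\theta}_D(\mathbf{1}); z_i)$ against a finite-difference perturbation toward $\delta_{z_i}$:
+
+```python
+import numpy as np
+
+# Least squares: l(theta; (x, y)) = (y - x^T theta)^2 / 2, so the Hessian of
+# L_D(theta, 1) is X^T X and grad l at z_i is -(y_i - x_i^T theta) x_i.
+# Synthetic data; all names are illustrative.
+rng = np.random.default_rng(1)
+n, d = 60, 3
+X = rng.normal(size=(n, d))
+y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)
+theta_hat = np.linalg.solve(X.T @ X, X.T @ y)     # minimizer of L_D(., 1)
+
+i = 5
+g_i = -(y[i] - X[i] @ theta_hat) * X[i]           # grad of l at z_i
+if_closed = -n * np.linalg.solve(X.T @ X, g_i)    # formula above
+
+# Finite difference under the mixture (1 - eps) P_n + eps delta_{z_i}:
+# reweight sample i and re-solve the weighted least-squares problem.
+eps = 1e-6
+w = np.full(n, (1 - eps) / n)
+w[i] += eps
+theta_eps = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
+if_numeric = (theta_eps - theta_hat) / eps
+
+assert np.allclose(if_closed, if_numeric, atol=1e-4)
+```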
+
+# A.3. Proof of Theorem 3.6
+
+Lemma A.1. Let $\mathbb{P}_n$ and $\mathbb{Q}_{n - 1}^{(-i)}$ be the empirical distributions respectively on $\{z_j\}_{j = 1}^n$ and $\{z_j\}_{j = 1}^n\setminus \{z_i\}$ , while $\delta_{z_i}$ is the distribution concentrated on $z_{i}$ . Then
+
+$$
+(1 + \frac {1}{n - 1}) \mathbb {P} _ {n} - \frac {1}{n - 1} \delta_ {z _ {i}} = \mathbb {Q} _ {n - 1} ^ {(- i)}.
+$$
+
+Proof of Lemma A.1. For any $j \neq i$ ,
+
+$$
+\begin{array}{l} \left(1 + \frac {1}{n - 1}\right) \mathbb {P} _ {n} \left(z _ {j}\right) - \frac {1}{n - 1} \delta_ {z _ {i}} \left(z _ {j}\right) = \left(1 + \frac {1}{n - 1}\right) \cdot \frac {1}{n} - 0 \\ = \frac {1}{n - 1} \\ = \mathbb {Q} _ {n - 1} ^ {(- i)} (z _ {j}). \\ \end{array}
+$$
+
+For $j = i$,
+
+$$
+\left(1 + \frac{1}{n-1}\right) \mathbb{P}_n(z_i) - \frac{1}{n-1} \delta_{z_i}(z_i) = \left(1 + \frac{1}{n-1}\right) \cdot \frac{1}{n} - \frac{1}{n-1} \cdot 1 = 0 = \mathbb{Q}_{n-1}^{(-i)}(z_i).
+$$
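+As a quick numeric check (a sketch assuming NumPy), the lemma is an identity on probability weight vectors:
+
+```python
+import numpy as np
+
+# (1 + 1/(n-1)) * P_n - 1/(n-1) * delta_{z_i} assigns weight 1/(n-1) to each
+# z_j with j != i and weight 0 to z_i, i.e., it equals Q_{n-1}^{(-i)}.
+n, i = 10, 3
+p_n = np.full(n, 1.0 / n)             # empirical distribution P_n
+delta = np.zeros(n); delta[i] = 1.0   # point mass at z_i
+mix = (1 + 1 / (n - 1)) * p_n - delta / (n - 1)
+
+q = np.full(n, 1.0 / (n - 1)); q[i] = 0.0   # Q_{n-1}^{(-i)}
+assert np.allclose(mix, q)
+```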
+
+
+
+Proof of Theorem 3.6. We first prove the first part of Theorem 3.6, where our goal is to show
+
+$$
+\widehat {\mathrm {I F}} _ {- \frac {1}{n - 1}} (\tilde {\theta} (\mathbb {P} _ {n}); \delta_ {z _ {i}}) = - n \left[ \nabla_ {\theta} ^ {2} \mathcal {L} _ {D} (\hat {\theta} _ {D} (\mathbf {1}), \mathbf {1}) \right] ^ {- 1} \nabla_ {\theta} \ell (\hat {\theta} _ {D} (\mathbf {1}); z _ {i}).
+$$
+
+Expanding $\widehat{\mathrm{IF}}_{\varepsilon}(\widetilde{\theta} (\mathbb{P}_n);\delta_{z_i})$ by its definition in Eq. (8),
+
+$$
+\widehat {\mathrm {I F}} _ {\varepsilon} (\tilde {\theta} (\mathbb {P} _ {n}); \delta_ {z _ {i}}) = - \left[ \nabla_ {\theta} ^ {2} \tilde {\mathcal {L}} (\tilde {\theta} (\mathbb {P} _ {n}), \mathbb {P} _ {n}) \right] ^ {- 1} \frac {\nabla_ {\theta} \tilde {\mathcal {L}} (\tilde {\theta} (\mathbb {P} _ {n}) , (1 - \varepsilon) \mathbb {P} _ {n} + \varepsilon \delta_ {z _ {i}}) - \nabla_ {\theta} \tilde {\mathcal {L}} (\tilde {\theta} (\mathbb {P} _ {n}) , \mathbb {P} _ {n})}{\varepsilon}.
+$$
+
+Setting $\varepsilon = -\frac{1}{n - 1}$ and by Lemma A.1,
+
+$$
+\begin{array}{l} \widehat{\operatorname{IF}}_{-\frac{1}{n-1}}(\tilde{\theta}(\mathbb{P}_n); \delta_{z_i}) = -\left[\nabla_{\theta}^{2}\tilde{\mathcal{L}}(\tilde{\theta}(\mathbb{P}_n), \mathbb{P}_n)\right]^{-1} \frac{\nabla_{\theta}\tilde{\mathcal{L}}(\tilde{\theta}(\mathbb{P}_n), \mathbb{Q}_{n-1}^{(-i)}) - \nabla_{\theta}\tilde{\mathcal{L}}(\tilde{\theta}(\mathbb{P}_n), \mathbb{P}_n)}{-1/(n-1)} \quad (12) \\ = -\left[\nabla_{\theta}^{2}\tilde{\mathcal{L}}(\tilde{\theta}(\mathbb{P}_n), \mathbb{P}_n)\right]^{-1} \frac{\mathbb{E}_{z \sim \mathbb{Q}_{n-1}^{(-i)}}[\nabla_{\theta}\ell(\tilde{\theta}(\mathbb{P}_n); z)] - \mathbb{E}_{z \sim \mathbb{P}_n}[\nabla_{\theta}\ell(\tilde{\theta}(\mathbb{P}_n); z)]}{-1/(n-1)} \\ = -\left[\nabla_{\theta}^{2}\tilde{\mathcal{L}}(\tilde{\theta}(\mathbb{P}_n), \mathbb{P}_n)\right]^{-1} \frac{\sum_{j=1, j \neq i}^{n} \nabla_{\theta}\ell(\tilde{\theta}(\mathbb{P}_n); z_j)/(n-1) - \sum_{j=1}^{n} \nabla_{\theta}\ell(\tilde{\theta}(\mathbb{P}_n); z_j)/n}{-1/(n-1)} \\ = -\left[\nabla_{\theta}^{2}\tilde{\mathcal{L}}(\tilde{\theta}(\mathbb{P}_n), \mathbb{P}_n)\right]^{-1} \left[ -\sum_{j=1, j \neq i}^{n} \nabla_{\theta}\ell(\tilde{\theta}(\mathbb{P}_n); z_j) + \frac{n-1}{n} \sum_{j=1}^{n} \nabla_{\theta}\ell(\tilde{\theta}(\mathbb{P}_n); z_j) \right]. \quad (13) \\ \end{array}
+$$
+
+Note that $\tilde{\theta}(\mathbb{P}_n)$ minimizes $\tilde{\mathcal{L}}(\theta, \mathbb{P}_n)$, so
+
+$$
+0 = \nabla_ {\theta} \tilde {\mathcal {L}} (\tilde {\theta} (\mathbb {P} _ {n}), \mathbb {P} _ {n}) = \frac {1}{n} \sum_ {j = 1} ^ {n} \nabla_ {\theta} \ell (\tilde {\theta} (\mathbb {P} _ {n}); z _ {j}).
+$$
+
+Therefore,
+
+$$
+- \sum_ {j = 1, j \neq i} ^ {n} \nabla_ {\theta} \ell (\tilde {\theta} (\mathbb {P} _ {n}); z _ {j}) = \nabla_ {\theta} \ell (\tilde {\theta} (\mathbb {P} _ {n}); z _ {i}).
+$$
+
+Plugging the two equations above into Eq. (13), we have
+
+$$
+\widehat {\mathrm {I F}} _ {- \frac {1}{n - 1}} (\tilde {\theta} (\mathbb {P} _ {n}); \delta_ {z _ {i}}) = - \left[ \nabla_ {\theta} ^ {2} \tilde {\mathcal {L}} (\tilde {\theta} (\mathbb {P} _ {n}), \mathbb {P} _ {n}) \right] ^ {- 1} \nabla_ {\theta} \ell (\tilde {\theta} (\mathbb {P} _ {n}); z _ {i}).
+$$
+
+From the proof of Lemma 3.4 in Appendix A.2, we know
+
+$$
+\tilde {\theta} (\mathbb {P} _ {n}) = \hat {\theta} _ {D} (\mathbf {1}), \quad \nabla_ {\theta} ^ {2} \tilde {\mathcal {L}} (\tilde {\theta} (\mathbb {P} _ {n}), \mathbb {P} _ {n}) = \frac {1}{n} \nabla_ {\theta} ^ {2} \mathcal {L} _ {D} (\hat {\theta} _ {D} (\mathbf {1}), \mathbf {1}).
+$$
+
+Therefore,
+
+$$
+\widehat {\mathrm {I F}} _ {- \frac {1}{n - 1}} (\tilde {\theta} (\mathbb {P} _ {n}); \delta_ {z _ {i}}) = - n \left[ \nabla_ {\theta} ^ {2} \mathcal {L} _ {D} (\hat {\theta} _ {D} (\mathbf {1}), \mathbf {1}) \right] ^ {- 1} \nabla_ {\theta} \ell (\hat {\theta} _ {D} (\mathbf {1}); z _ {i}),
+$$
+
+which completes the proof of the first part of Theorem 3.6. For the second part, the first equation is exactly Lemma A.1, and the second equation follows directly from Eq. (12):
+
+$$
+\begin{array}{l} \widehat {\mathrm {I F}} _ {- \frac {1}{n - 1}} (\tilde {\theta} (\mathbb {P} _ {n}); \delta_ {z _ {i}}) = - \left[ \nabla_ {\theta} ^ {2} \tilde {\mathcal {L}} (\tilde {\theta} (\mathbb {P} _ {n}), \mathbb {P} _ {n}) \right] ^ {- 1} \frac {\nabla_ {\theta} \tilde {\mathcal {L}} (\tilde {\theta} (\mathbb {P} _ {n}) , \mathbb {Q} _ {n - 1} ^ {(- i)}) - \nabla_ {\theta} \tilde {\mathcal {L}} (\tilde {\theta} (\mathbb {P} _ {n}) , \mathbb {P} _ {n})}{- 1 / (n - 1)} \\ = - (n - 1) \left(- \left[ \nabla_ {\theta} ^ {2} \tilde {\mathcal {L}} (\tilde {\theta} (\mathbb {P} _ {n}), \mathbb {P} _ {n}) \right] ^ {- 1} \frac {\nabla_ {\theta} \tilde {\mathcal {L}} (\tilde {\theta} (\mathbb {P} _ {n}) , \mathbb {Q} _ {n - 1} ^ {(- i)}) - \nabla_ {\theta} \tilde {\mathcal {L}} (\tilde {\theta} (\mathbb {P} _ {n}) , \mathbb {P} _ {n})}{1}\right) \\ = - (n - 1) \widehat {\mathrm {I F}} _ {1} (\tilde {\theta} (\mathbb {P} _ {n}); \mathbb {Q} _ {n - 1} ^ {(- i)}). \\ \end{array}
+$$
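+The identity just proved can be checked numerically without taking any limit, since the difference quotient at $\varepsilon = -\frac{1}{n-1}$ is exact. The sketch below (illustrative, assuming NumPy) does so for least squares:
+
+```python
+import numpy as np
+
+# Least squares (synthetic, illustrative names): the difference-quotient
+# estimator IF_hat at eps = -1/(n-1) reproduces -n H^{-1} grad_i exactly,
+# and equals -(n-1) IF_hat_1 toward Q_{n-1}^{(-i)}.
+rng = np.random.default_rng(2)
+n, d = 50, 3
+X = rng.normal(size=(n, d))
+y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)
+theta_hat = np.linalg.solve(X.T @ X, X.T @ y)       # minimizer on P_n
+
+def grad_mean(rows):
+    """Gradient of E_P[l] at theta_hat, with P uniform over the given rows."""
+    r = y[rows] - X[rows] @ theta_hat
+    return -(r[:, None] * X[rows]).mean(axis=0)
+
+i = 7
+keep = np.delete(np.arange(n), i)
+H = (X.T @ X) / n                                   # Hessian of L_tilde at P_n
+
+# By Lemma A.1, (1 - eps) P_n + eps delta_{z_i} = Q_{n-1}^{(-i)} at eps = -1/(n-1)
+eps = -1.0 / (n - 1)
+delta_grad = grad_mean(keep) - grad_mean(np.arange(n))
+if_eps = -np.linalg.solve(H, delta_grad / eps)      # IF_hat at eps = -1/(n-1)
+
+g_i = -(y[i] - X[i] @ theta_hat) * X[i]
+if_loo = -n * np.linalg.solve(X.T @ X, g_i)         # first part of the theorem
+if_one = -np.linalg.solve(H, delta_grad)            # IF_hat_1 toward Q_{n-1}^{(-i)}
+
+assert np.allclose(if_eps, if_loo, atol=1e-6)
+assert np.allclose(if_eps, -(n - 1) * if_one)
+```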
+
+
+
+# A.4. Proof of Proposition 3.8
+
+Proof. It is easy to verify that
+
+$$
+b ^ {\mathbb {P} _ {n}} = \mathbf {1}, \quad b ^ {\mathbb {Q} _ {n - 1} ^ {(- i)}} = \mathbf {1} _ {- i}.
+$$
+
+Hence, based on the definition of $\tilde{\mathcal{L}}$ in Proposition 3.8, we have
+
+$$
+\tilde {\mathcal {L}} (\theta , \mathbb {P} _ {n}) = \mathcal {L} (\theta , \mathbf {1}), \quad \tilde {\mathcal {L}} (\theta , \mathbb {Q} _ {n - 1} ^ {(- i)}) = \mathcal {L} (\theta , \mathbf {1} _ {- i}).
+$$
+
+Therefore, we also have $\tilde{\theta} (\mathbb{P}_n) = \hat{\theta} (\mathbf{1})$ . The result in Eq. (9) follows directly by plugging these quantities into the definition of $\widehat{\mathrm{IF}}_1(\tilde{\theta} (\mathbb{P}_n);\mathbb{Q}_{n - 1}^{(-i)})$ .
+
+# A.5. Formal Statement and Proof of Theorem 3.10
+
+Setup and Notation. For convenience, we adopt slightly different notation tailored to the Cox regression model. Consider $n$ i.i.d. right-censored observations $\{Z_{i} = (X_{i},Y_{i},\Delta_{i})\}_{i = 1}^{n}$, where $Y_{i} = \min \{T_{i},C_{i}\}$ is the observed time, $T_{i}$ is the time to the event of interest, $C_i$ is the censoring time, and $\Delta_i$ is the event indicator. We assume non-informative censoring, i.e., $T_i$ and $C_i$ are independent conditionally on $X_i$, which is a common assumption in the literature. For simplicity, suppose there are no tied events.
+
+A well-known estimate for the coefficients $\beta$ under the Cox model is obtained by minimizing the negative log-partial likelihood:
+
+$$
+\begin{array}{l} \mathcal {L} _ {n} (\theta) := - \sum_ {i = 1} ^ {n} \Delta_ {i} \left(\theta^ {\top} X _ {i} - \log \left(\sum_ {j \in R _ {i}} \exp \left(\theta^ {\top} X _ {j}\right)\right)\right) \\ = - \sum_ {i = 1} ^ {n} \Delta_ {i} \left(\theta^ {\top} X _ {i} - \log \left(\sum_ {j = 1} ^ {n} I (Y _ {j} \geq Y _ {i}) \exp (\theta^ {\top} X _ {j})\right)\right). \\ \end{array}
+$$
+
+Note that $\mathcal{L}_n(\theta)$ is a convex function, and the estimate $\hat{\theta}$ equivalently solves the following score equation:
+
+$$
+\nabla_ {\theta} \mathcal {L} _ {n} (\hat {\theta}) = \sum_ {i = 1} ^ {n} \underbrace {- \Delta_ {i} \left(X _ {i} - \frac {S _ {n} ^ {(1)} (Y _ {i} ; \hat {\theta})}{S _ {n} ^ {(0)} (Y _ {i} ; \hat {\theta})}\right)} _ {\nabla_ {\theta} \ell_ {n} (\hat {\theta}; Z _ {i})} = 0,
+$$
+
+where
+
+$$
+S _ {n} ^ {(0)} (t; \theta) = \frac {1}{n} \sum_ {i = 1} ^ {n} I \left(Y _ {i} \geq t\right) \exp \left(\theta^ {\top} X _ {i}\right), \tag {14}
+$$
+
+$$
+S _ {n} ^ {(1)} (t; \theta) = \frac {1}{n} \sum_ {i = 1} ^ {n} I \left(Y _ {i} \geq t\right) \exp \left(\theta^ {\top} X _ {i}\right) X _ {i}. \tag {15}
+$$
+
+It has been shown that, under mild regularity conditions, $\hat{\theta}$ is a consistent estimator of $\theta^{*}$. Note that the above score equation is not a simple estimating equation formed by summing i.i.d. terms, because $S_{n}^{(0)}(t;\theta)$ and $S_{n}^{(1)}(t;\theta)$ depend on all observations.
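+The score equation above can be checked numerically. The sketch below (synthetic data; all names are illustrative, assuming NumPy) computes $\nabla_{\theta}\mathcal{L}_n(\theta)$ through $S_n^{(0)}$ and $S_n^{(1)}$ and compares it against a central finite difference of $\mathcal{L}_n$:
+
+```python
+import numpy as np
+
+# Synthetic right-censored data with continuous times, so there are no ties.
+rng = np.random.default_rng(3)
+n, d = 40, 2
+X = rng.normal(size=(n, d))
+T = rng.exponential(scale=np.exp(-X @ np.array([0.5, -0.3])))  # event times
+C = rng.exponential(scale=2.0, size=n)                         # censoring times
+Y = np.minimum(T, C)                                           # observed times
+Delta = (T <= C).astype(float)                                 # event indicators
+
+def neg_log_partial(theta):
+    """L_n(theta): negative log-partial likelihood (no ties)."""
+    eta = X @ theta
+    risk = Y[None, :] >= Y[:, None]          # risk[i, j] = I(Y_j >= Y_i)
+    return -np.sum(Delta * (eta - np.log((risk * np.exp(eta)[None, :]).sum(axis=1))))
+
+def score(theta):
+    """grad L_n(theta) = -sum_i Delta_i (X_i - S_n^(1)(Y_i)/S_n^(0)(Y_i))."""
+    e = np.exp(X @ theta)
+    risk = Y[None, :] >= Y[:, None]
+    S0 = (risk * e[None, :]).sum(axis=1) / n   # S_n^(0)(Y_i; theta)
+    S1 = (risk * e[None, :]) @ X / n           # S_n^(1)(Y_i; theta)
+    return -np.sum(Delta[:, None] * (X - S1 / S0[:, None]), axis=0)
+
+theta0 = np.array([0.2, -0.1])
+h = 1e-6
+g_num = np.array([(neg_log_partial(theta0 + h * ei) - neg_log_partial(theta0 - h * ei)) / (2 * h)
+                  for ei in np.eye(d)])
+assert np.allclose(score(theta0), g_num, atol=1e-4)
+```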
+
+Analytical Form of Influence Function in Statistics. Reid & Crepeau (1985) derived the influence function by evaluating the limit in (7) with $P$ being the underlying data-generating distribution and $Q = \delta_{Z_i}$ (i.e., the Gateaux derivative at $\theta^* = \theta(P)$ in the direction $\delta_{Z_i}$ ). To start with, we define the population counterparts of Eq. (14) and Eq. (15):
+
+$$
+s ^ {(0)} (t; \theta) = \mathbb {E} \left(I \left(Y \geq t\right) \exp \left(\theta^ {\top} X\right)\right),
+$$
+
+$$
+s ^ {(1)} (t; \theta) = \mathbb {E} \left(I \left(Y \geq t\right) \exp \left(\theta^ {\top} X\right) X\right),
+$$
+
+and introduce the counting-process notation: $N_i(t) = I(Y_i \leq t, \Delta_i = 1)$ is the counting process associated with the $i$-th data point, $G_n(t) = \frac{1}{n}\sum_{i=1}^{n} N_i(t)$, and $G(t) = \mathbb{E}(G_n(t))$ is its population expectation. Then the influence function for the observation $Z_i = (X_i, Y_i, \Delta_i)$ is given by
+
+$$
+\begin{array}{l} \boldsymbol{A} \cdot \operatorname{IF}(i) = \Delta_i \left(X_i - \frac{s^{(1)}(Y_i; \theta^{*})}{s^{(0)}(Y_i; \theta^{*})}\right) \\ \quad - \exp(\theta^{*\top} X_i) \cdot \int \frac{I(Y_i \geq t)}{s^{(0)}(t; \theta^{*})} \left(X_i - \frac{s^{(1)}(t; \theta^{*})}{s^{(0)}(t; \theta^{*})}\right) dG(t), \\ \end{array}
+$$
+
+where $\boldsymbol{A}$ is the non-singular information matrix. A consistent estimate of $\boldsymbol{A}$ is given by $\nabla_{\theta}^{2}\mathcal{L}_{n}(\hat{\theta}) / n$ (Cox, 1975). The empirical influence function given $n$ data points is obtained by substituting $\boldsymbol{A}$, $\theta^{*}$, and $G(t)$ by $\nabla_{\theta}^{2}\mathcal{L}_{n}(\hat{\theta}) / n$, $\hat{\theta}$, and $G_{n}(t)$, respectively:
+
+$$
+\mathrm{IF}_n(i) = -\left[\nabla_{\theta}^{2}\mathcal{L}_{n}(\hat{\theta}) / n\right]^{-1} \nabla_{\theta}\ell_{n}(\hat{\theta}; Z_i) - \left[\nabla_{\theta}^{2}\mathcal{L}_{n}(\hat{\theta}) / n\right]^{-1} C_i(\hat{\theta}),
+$$
+
+where
+
+$$
+\begin{array}{l} C _ {i} (\hat {\theta}) = \exp (\hat {\theta} ^ {\top} X _ {i}) \cdot \frac {1}{n} \sum_ {j = 1} ^ {n} \int \frac {I (Y _ {i} \geq t)}{S _ {n} ^ {(0)} (t ; \hat {\theta})} \left(X _ {i} - \frac {S _ {n} ^ {(1)} (t ; \hat {\theta})}{S _ {n} ^ {(0)} (t ; \hat {\theta})}\right) d N _ {j} (t) \\ = \exp (\hat {\theta} ^ {\top} X _ {i}) \cdot \frac {1}{n} \sum_ {j = 1} ^ {n} \frac {I (Y _ {i} \geq Y _ {j}) \Delta_ {j}}{S _ {n} ^ {(0)} (Y _ {j} ; \hat {\theta})} \cdot \left(X _ {i} - \frac {S _ {n} ^ {(1)} (Y _ {j} ; \hat {\theta})}{S _ {n} ^ {(0)} (Y _ {j} ; \hat {\theta})}\right). \\ \end{array}
+$$
+
+The first term is analogous to the standard influence function for M-estimators, and the second term captures the influence of the $i$-th observation through the at-risk sets.
+
+The Proposed VIF. Under the Cox regression, the proposed VIF becomes
+
+$$
+\mathrm {V I F} _ {n} (i) := - \left[ \nabla_ {\theta} ^ {2} \mathcal {L} _ {n} (\hat {\theta}) / n \right] ^ {- 1} \left(\nabla_ {\theta} \mathcal {L} _ {n} (\hat {\theta}) - \nabla_ {\theta} \mathcal {L} _ {n - 1} ^ {(- i)} (\hat {\theta})\right),
+$$
+
+where $\nabla_{\theta}\mathcal{L}_{n - 1}^{(-i)}(\hat{\theta})$ is the gradient of the negative log-partial likelihood, evaluated at $\hat{\theta}$, after excluding the $i$-th data point. Since there are no tied events, we can rewrite $\nabla_{\theta}\mathcal{L}_{n - 1}^{(-i)}(\hat{\theta})$ as
+
+$$
+\nabla_ {\theta} \mathcal {L} _ {n - 1} ^ {(- i)} (\hat {\theta}) = - \sum_ {j: Y _ {j} < Y _ {i}} \Delta_ {j} \left(X _ {j} - \frac {S _ {n} ^ {(1)} (Y _ {j} ; \hat {\theta}) - \exp (\hat {\theta} ^ {\top} X _ {i}) X _ {i} / n}{S _ {n} ^ {(0)} (Y _ {j} ; \hat {\theta}) - \exp (\hat {\theta} ^ {\top} X _ {i}) / n}\right) - \sum_ {j: Y _ {j} > Y _ {i}} \Delta_ {j} \left(X _ {j} - \frac {S _ {n} ^ {(1)} (Y _ {j} ; \hat {\theta})}{S _ {n} ^ {(0)} (Y _ {j} ; \hat {\theta})}\right).
+$$
+
+Then it follows that
+
+$$
+\begin{array}{l} \operatorname{VIF}_n(i) = -\left[\nabla_{\theta}^{2}\mathcal{L}_{n}(\hat{\theta}) / n\right]^{-1} \left(\nabla_{\theta}\mathcal{L}_{n}(\hat{\theta}) - \nabla_{\theta}\mathcal{L}_{n-1}^{(-i)}(\hat{\theta})\right) \\ = -\left[\nabla_{\theta}^{2}\mathcal{L}_{n}(\hat{\theta}) / n\right]^{-1} \left(\nabla_{\theta}\ell_{n}(\hat{\theta}; Z_i) + \sum_{j: Y_j < Y_i} \Delta_j \left(\frac{S_n^{(1)}(Y_j; \hat{\theta})}{S_n^{(0)}(Y_j; \hat{\theta})} - \frac{S_n^{(1)}(Y_j; \hat{\theta}) - \exp(\hat{\theta}^{\top} X_i) X_i / n}{S_n^{(0)}(Y_j; \hat{\theta}) - \exp(\hat{\theta}^{\top} X_i) / n}\right)\right) \\ = -\left[\nabla_{\theta}^{2}\mathcal{L}_{n}(\hat{\theta}) / n\right]^{-1} \nabla_{\theta}\ell_{n}(\hat{\theta}; Z_i) \\ \quad - \left[\nabla_{\theta}^{2}\mathcal{L}_{n}(\hat{\theta}) / n\right]^{-1} \left(\exp(\hat{\theta}^{\top} X_i) \cdot \frac{1}{n} \sum_{j=1}^{n} \frac{I(Y_j < Y_i) \Delta_j}{S_n^{(0)}(Y_j; \hat{\theta}) - \exp(\hat{\theta}^{\top} X_i) / n} \cdot \left(X_i - \frac{S_n^{(1)}(Y_j; \hat{\theta})}{S_n^{(0)}(Y_j; \hat{\theta})}\right)\right). \\ \end{array}
+$$
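+The gradient-difference identity used above holds at any $\theta$, not only at $\hat{\theta}$, so it can be verified directly on synthetic data. The sketch below (illustrative, assuming NumPy) compares $\nabla_{\theta}\mathcal{L}_n(\theta) - \nabla_{\theta}\mathcal{L}_{n-1}^{(-i)}(\theta)$, recomputed from the two datasets, with the closed form $\nabla_{\theta}\ell_n(\theta; Z_i)$ plus the at-risk correction term:
+
+```python
+import numpy as np
+
+# Synthetic right-censored data; continuous times, so no ties.  All names
+# here are illustrative, not from the paper.
+rng = np.random.default_rng(4)
+n, d = 30, 2
+X = rng.normal(size=(n, d))
+Y = rng.exponential(size=n)
+Delta = (rng.random(n) < 0.7).astype(float)
+theta = np.array([0.3, -0.2])
+eta = np.exp(X @ theta)
+
+def grad_L(idx):
+    """Gradient of the negative log-partial likelihood over subjects idx."""
+    Xs, Ys, Ds, es = X[idx], Y[idx], Delta[idx], eta[idx]
+    risk = Ys[None, :] >= Ys[:, None]      # ratios are normalization-free
+    S0 = (risk * es[None, :]).sum(axis=1)
+    S1 = (risk * es[None, :]) @ Xs
+    return -np.sum(Ds[:, None] * (Xs - S1 / S0[:, None]), axis=0)
+
+i = 4
+direct = grad_L(np.arange(n)) - grad_L(np.delete(np.arange(n), i))
+
+# Closed form: grad l_n(theta; Z_i) plus the correction through the risk sets
+risk = Y[None, :] >= Y[:, None]
+S0 = (risk * eta[None, :]).sum(axis=1) / n   # S_n^(0)(Y_j; theta)
+S1 = (risk * eta[None, :]) @ X / n           # S_n^(1)(Y_j; theta)
+closed = -Delta[i] * (X[i] - S1[i] / S0[i])
+for j in np.flatnonzero((Y < Y[i]) & (Delta > 0)):
+    closed += (eta[i] / n) / (S0[j] - eta[i] / n) * (X[i] - S1[j] / S0[j])
+assert np.allclose(direct, closed)
+```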
+
+Approximation Error. Below, we formally bound the difference between the analytical form of the IF and our proposed approximation. Our result implies that the difference $\mathrm{VIF}_n(i) - \mathrm{IF}_n(i)$ diminishes at a rate of $1/n$ as the sample size grows, and is therefore of a smaller order than $\mathrm{IF}_n(i)$ itself. This is because $\mathrm{IF}_n(i) = \mathrm{IF}(i) + o_p(1)$, where $\mathrm{IF}(i)$ is a non-degenerate random variable that does not converge to zero in probability; therefore $\mathrm{IF}_n(i)$ remains bounded away from zero in probability, i.e., $\mathrm{IF}_n(i) = \Omega_p(1)$.
+
+Theorem A.2 (Approximation Error Bound under the Cox Model). Assume that (1) the true parameter $\theta^{*}$ is an interior point of a compact set $\mathcal{B} \subset \mathbf{R}^d$; (2) the density of $X$ is bounded below by a constant $c > 0$ over its domain $\mathcal{X}$, which is a compact subset of $\mathbf{R}^d$; (3) there is a truncation time $\tau < \infty$ such that for some constant $\delta_0 > 0$, $\operatorname{Pr}(Y > \tau \mid X) \geq \delta_0$ almost surely; (4) the information matrix $\boldsymbol{A}$ is non-singular. Assuming non-informative censoring, the difference between $\mathrm{IF}_n(i)$ and $\mathrm{VIF}_n(i)$ satisfies
+
+$$
+\mathrm{Diff}_n(i) := \mathrm{VIF}_n(i) - \mathrm{IF}_n(i) = O_p\left(\frac{1}{n}\right).
+$$
+
+Proof. The difference between $\mathrm{IF}_n(i)$ and $\mathrm{VIF}_n(i)$ is given by
+
+$$
+\operatorname {D i f f} _ {n} (i) = \operatorname {V I F} _ {n} (i) - \operatorname {I F} _ {n} (i)
+$$
+
+$$
+\begin{array}{l} = \left[\nabla_{\theta}^{2}\mathcal{L}_{n}(\hat{\theta}) / n\right]^{-1} \exp(\hat{\theta}^{\top} X_i) \cdot \frac{1}{n} \left\{ \sum_{j=1}^{n} \frac{I(Y_j \leq Y_i) \Delta_j}{S_n^{(0)}(Y_j; \hat{\theta})} \cdot \left(X_i - \frac{S_n^{(1)}(Y_j; \hat{\theta})}{S_n^{(0)}(Y_j; \hat{\theta})}\right) \right. \\ \left. \quad - \sum_{j=1}^{n} \frac{I(Y_j < Y_i) \Delta_j}{S_n^{(0)}(Y_j; \hat{\theta}) - \exp(\hat{\theta}^{\top} X_i) / n} \cdot \left(X_i - \frac{S_n^{(1)}(Y_j; \hat{\theta})}{S_n^{(0)}(Y_j; \hat{\theta})}\right) \right\} \\ \end{array}
+$$
+
+$$
+\begin{array}{l} = \left[\nabla_{\theta}^{2}\mathcal{L}_{n}(\hat{\theta}) / n\right]^{-1} \exp(\hat{\theta}^{\top} X_i) \cdot \frac{1}{n} \left\{ \sum_{j=1}^{n} \frac{I(Y_j \leq Y_i) \Delta_j}{S_n^{(0)}(Y_j; \hat{\theta})} \cdot \left(X_i - \frac{S_n^{(1)}(Y_j; \hat{\theta})}{S_n^{(0)}(Y_j; \hat{\theta})}\right) \right. \\ \left. \quad - \sum_{j=1}^{n} \frac{I(Y_j \leq Y_i) \Delta_j}{S_n^{(0)}(Y_j; \hat{\theta}) - \exp(\hat{\theta}^{\top} X_i) / n} \cdot \left(X_i - \frac{S_n^{(1)}(Y_j; \hat{\theta})}{S_n^{(0)}(Y_j; \hat{\theta})}\right) \right. \\ \left. \quad + \frac{\Delta_i}{S_n^{(0)}(Y_i; \hat{\theta}) - \exp(\hat{\theta}^{\top} X_i) / n} \cdot \left(X_i - \frac{S_n^{(1)}(Y_i; \hat{\theta})}{S_n^{(0)}(Y_i; \hat{\theta})}\right) \right\} \\ \end{array}
+$$
+
+$$
+\begin{array}{l} = -\left[\nabla_{\theta}^{2}\mathcal{L}_{n}(\hat{\theta}) / n\right]^{-1} \frac{\exp(2\hat{\theta}^{\top} X_i)}{n} \cdot \frac{1}{n} \sum_{j=1}^{n} \frac{I(Y_j \leq Y_i) \Delta_j}{S_n^{(0)}(Y_j; \hat{\theta})} \cdot \frac{1}{S_n^{(0)}(Y_j; \hat{\theta}) - \exp(\hat{\theta}^{\top} X_i) / n} \cdot \left(X_i - \frac{S_n^{(1)}(Y_j; \hat{\theta})}{S_n^{(0)}(Y_j; \hat{\theta})}\right) \\ \quad + \left[\nabla_{\theta}^{2}\mathcal{L}_{n}(\hat{\theta}) / n\right]^{-1} \frac{\exp(\hat{\theta}^{\top} X_i)}{n} \cdot \frac{\Delta_i}{S_n^{(0)}(Y_i; \hat{\theta}) - \exp(\hat{\theta}^{\top} X_i) / n} \cdot \left(X_i - \frac{S_n^{(1)}(Y_i; \hat{\theta})}{S_n^{(0)}(Y_i; \hat{\theta})}\right). \\ \end{array}
+$$
+
+Define
+
+$$
+J _ {n} (t; \theta , Z _ {i}) = \frac {I (t \leq Y _ {i})}{S _ {n} ^ {(0)} (t ; \theta)} \cdot \frac {1}{S _ {n} ^ {(0)} (t ; \theta) - \exp (\theta^ {\top} X _ {i}) / n} \cdot \left(X _ {i} - \frac {S _ {n} ^ {(1)} (t ; \theta)}{S _ {n} ^ {(0)} (t ; \theta)}\right),
+$$
+
+and
+
+$$
+J (t; \theta , Z _ {i}) = \frac {I (t \leq Y _ {i})}{s ^ {(0)} (t ; \theta)} \cdot \frac {1}{s ^ {(0)} (t ; \theta) - \exp (\theta^ {\top} X _ {i}) / n} \cdot \left(X _ {i} - \frac {s ^ {(1)} (t ; \theta)}{s ^ {(0)} (t ; \theta)}\right).
+$$
+
+Then we rewrite $\mathrm{Diff}_n(i)$ using the empirical process notation:
+
+$$
+\begin{array}{l} \mathrm{Diff}_n(i) = -\left[\nabla_{\theta}^{2}\mathcal{L}_{n}(\hat{\theta}) / n\right]^{-1} \frac{\exp(2\hat{\theta}^{\top} X_i)}{n} \cdot \int_{0}^{\tau} J_n(t; \hat{\theta}, Z_i) \, dG_n(t) \\ \quad + \left[\nabla_{\theta}^{2}\mathcal{L}_{n}(\hat{\theta}) / n\right]^{-1} \frac{\exp(\hat{\theta}^{\top} X_i)}{n} \cdot \frac{\Delta_i}{S_n^{(0)}(Y_i; \hat{\theta}) - \exp(\hat{\theta}^{\top} X_i) / n} \cdot \left(X_i - \frac{S_n^{(1)}(Y_i; \hat{\theta})}{S_n^{(0)}(Y_i; \hat{\theta})}\right). \tag{16} \\ \end{array}
+$$
+
+Next, we show that
+
+$$
+\int_ {0} ^ {\tau} J _ {n} (t; \hat {\theta}, Z _ {i}) d G _ {n} (t) = \int_ {0} ^ {\tau} J (t; \theta^ {*}, Z _ {i}) d G (t) + o _ {p} (1). \tag {17}
+$$
+
+To prove Eq. (17), we further decompose it into four terms:
+
+$$
+\begin{array}{l} \int_ {0} ^ {\tau} J _ {n} (t; \hat {\theta}, Z _ {i}) d G _ {n} (t) - \int_ {0} ^ {\tau} J (t; \theta^ {*}, Z _ {i}) d G (t) = \underbrace {\int_ {0} ^ {\tau} \left(J _ {n} (t ; \hat {\theta} , Z _ {i}) - J (t ; \hat {\theta} , Z _ {i})\right) d (G _ {n} (t) - G (t))} _ {I _ {1}} \\ + \underbrace {\int_ {0} ^ {\tau} J (t ; \hat {\theta} , Z _ {i}) d (G _ {n} (t) - G (t))} _ {I _ {2}} + \underbrace {\int_ {0} ^ {\tau} \left(J _ {n} (t ; \hat {\theta} , Z _ {i}) - J (t ; \hat {\theta} , Z _ {i})\right) d G (t)} _ {I _ {3}} \\ + \underbrace {\int_ {0} ^ {\tau} \left(J (t ; \hat {\theta} , Z _ {i}) - J (t ; \theta^ {*} , Z _ {i})\right) d G (t)} _ {I _ {4}}. \\ \end{array}
+$$
+
+For the first term $I_{1}$ , by the triangle inequality, we have
+
+$$
+\begin{array}{l} \sup_{t \in [0,\tau], \theta \in \mathcal{B}} \left\| J_n(t; \theta, Z_i) - J(t; \theta, Z_i) \right\| \\ \leq \sup_{t \in [0,\tau], \theta \in \mathcal{B}} \left\| \frac{I(t \leq Y_i)}{S_n^{(0)}(t; \theta) \left(S_n^{(0)}(t; \theta) - \exp(\theta^{\top} X_i)/n\right)} \cdot X_i - \frac{I(t \leq Y_i)}{\left[s^{(0)}(t; \theta)\right]^{2}} \cdot X_i \right\| \\ \quad + \sup_{t \in [0,\tau], \theta \in \mathcal{B}} \left\| \frac{I(t \leq Y_i)}{\left[S_n^{(0)}(t; \theta)\right]^{2} \left(S_n^{(0)}(t; \theta) - \exp(\theta^{\top} X_i)/n\right)} \cdot S_n^{(1)}(t; \theta) - \frac{I(t \leq Y_i)}{\left[s^{(0)}(t; \theta)\right]^{3}} \cdot s^{(1)}(t; \theta) \right\| \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \lesssim \sup_{t \in [0,\tau], \theta \in \mathcal{B}} \left| \frac{1}{\left[S_n^{(0)}(t; \theta)\right]^{2}} - \frac{1}{\left[s^{(0)}(t; \theta)\right]^{2}} \right| + O_p\left(\frac{1}{n}\right) \\ \quad + \sup_{t \in [0,\tau], \theta \in \mathcal{B}} \left\| \frac{1}{\left[S_n^{(0)}(t; \theta)\right]^{3}} \cdot S_n^{(1)}(t; \theta) - \frac{1}{\left[s^{(0)}(t; \theta)\right]^{3}} \cdot s^{(1)}(t; \theta) \right\| \tag{18} \\ \end{array}
+$$
+
+where the second inequality relies on the boundedness of the support of $X_i$, of $\tau$, and of $\mathcal{B}$. Here, $W_1 \lesssim W_2$ denotes that there exists a universal constant $C > 0$ such that $W_1 \leq C W_2$. Under Conditions (1)-(3), the function class $\{f_{t,\theta}(x, y) = I(y \geq t) \exp(\theta^{\top} x) : t \in [0,\tau], \theta \in \mathcal{B}\}$ is a Glivenko-Cantelli class, i.e., $\sup_{t \in [0,\tau], \theta \in \mathcal{B}} |S_n^{(0)}(t; \theta) - s^{(0)}(t; \theta)| = o_p(1)$. Similarly, we have $\sup_{t \in [0,\tau], \theta \in \mathcal{B}} \| S_n^{(1)}(t; \theta) - s^{(1)}(t; \theta) \| = o_p(1)$. Applying a first-order Taylor expansion of the function $1/x^2$ at $x = s^{(0)}(t; \theta)$, we obtain
+
+$$
+\frac{1}{\left[S_n^{(0)}(t; \theta)\right]^{2}} = \frac{1}{\left[s^{(0)}(t; \theta)\right]^{2}} - \frac{2}{\left[s^{(0)}(t; \theta)\right]^{3}} \cdot \left(S_n^{(0)}(t; \theta) - s^{(0)}(t; \theta)\right) + o\left(\left| S_n^{(0)}(t; \theta) - s^{(0)}(t; \theta) \right|\right).
+$$
+
+Since $X$ , $\mathcal{B}$ , and $\tau$ are bounded, there exists a constant $C > 0$ such that $\inf_{t\in [0,\tau ],\theta \in \mathcal{B}}s^{(0)}(t;\theta) = \mathbb{E}\left(I(Y\geq t)\exp \left(\theta^{\top}X\right)\right)\geq C$ , which implies that the denominators in the expansion are uniformly bounded away from zero. Therefore, the first term in Eq. (18) satisfies
+
+$$
+\sup_{t \in [0,\tau], \theta \in \mathcal{B}} \left| \frac{1}{\left[S_n^{(0)}(t; \theta)\right]^{2}} - \frac{1}{\left[s^{(0)}(t; \theta)\right]^{2}} \right| \lesssim \sup_{t \in [0,\tau], \theta \in \mathcal{B}} \left| S_n^{(0)}(t; \theta) - s^{(0)}(t; \theta) \right| = o_p(1).
+$$
+
+Similarly, applying a first-order Taylor expansion to the bivariate function $y / x^3$ at the point $(x, y) = (s^{(0)}(t; \theta), s^{(1)}(t; \theta))$, we can show that the third term in Eq. (18) is also $o_p(1)$. Combining these results, we obtain the uniform convergence:
+
+$$
+\sup_{t \in [0,\tau], \theta \in \mathcal{B}} \left\| J_n(t; \theta, Z_i) - J(t; \theta, Z_i) \right\| = o_p(1). \tag{19}
+$$
+
+By empirical process theory, $\sqrt{n}(G_n(t) - G(t))$ converges uniformly to a Gaussian process. Therefore, it follows that
+
+$$
+I _ {1} = \int_ {0} ^ {\tau} \Big (J _ {n} (t; \hat {\theta}, Z _ {i}) - J (t; \hat {\theta}, Z _ {i}) \Big) d (G _ {n} (t) - G (t)) = o _ {p} (1 / \sqrt {n}).
+$$
+
+To bound the second term $I_2$, we first show that $\sup_{t \in [0,\tau], \theta \in \mathcal{B}} J(t; \theta, Z_i) = O(1)$. By definition, all numerators in $J(t; \theta, Z_i)$ are bounded due to the bounded support of $X$, the compactness of $\mathcal{B}$, and the boundedness of $\tau$. The denominators of $J(t; \theta, Z_i)$ involve both $s^{(0)}(t; \theta)$ and $s^{(0)}(t; \theta) - \exp(\theta^{\top} X_i)/n$, which we show are uniformly bounded away from zero. Specifically, since $X$, $\mathcal{B}$, and $\tau$ are bounded, there exists a constant $C > 0$ such that $\inf_{t \in [0,\tau], \theta \in \mathcal{B}} s^{(0)}(t; \theta) = \mathbb{E}\left(I(Y \geq t) \exp(\theta^{\top} X)\right) \geq C$. Moreover, the term $\exp(\theta^{\top} X_i)/n = O(1/n)$ is of a smaller order, so $s^{(0)}(t; \theta) - \exp(\theta^{\top} X_i)/n$ remains bounded below by a positive constant for large $n$. Therefore, $J(t; \theta, Z_i) = O(1)$ uniformly over $t \in [0,\tau]$ and $\theta \in \mathcal{B}$. Plugging in $\hat{\theta} \in \mathcal{B}$ and combining this with the uniform convergence of $\sqrt{n}(G_n(t) - G(t))$ to a Gaussian process, we have $I_2 = O_p(1/\sqrt{n})$.
+
+For the third term $I_{3}$ , due to uniform convergence in Eq. (19), it follows that $I_{3} = o_{p}(1)$ .
+
+To bound the fourth term $I_4$ , we use the Lipschitz continuity of $J(t; \theta, Z_i)$ in $\theta$ , which follows from the boundedness of $X$ , $\mathcal{B}$ , and $\tau$ . Combined with the consistency of $\hat{\theta}$ , i.e., $\hat{\theta} = \theta^* + o_p(1)$ under the regularity conditions of Theorem A.2 (Cox, 1975), we have $\sup_{t \in [0, \tau]} \| J(t; \hat{\theta}, Z_i) - J(t; \theta^*, Z_i) \| = o_p(1)$ , and it thereby follows that $I_4 = o_p(1)$ . This completes the proof of Eq. (17).
+
+Finally, we plug Eq. (17), together with the known consistency results $\hat{\theta} = \theta^{*} + o_{p}(1)$ and $\nabla_{\theta}^{2}\mathcal{L}(\hat{\theta}) / n = A + o_{p}(1)$ , into Eq. (16) and obtain
+
+$$
+\begin{aligned} \operatorname {Diff} (i) &= - \left[ A + o _ {p} (1) \right] ^ {- 1} \frac {\exp \left(2 \theta^ {* \top} X _ {i}\right) + o _ {p} (1)}{n} \cdot \left(\int_ {0} ^ {\tau} J (t; \theta^ {*}, Z _ {i}) \, d G (t) + o _ {p} (1)\right) \\ &\quad + \left[ A + o _ {p} (1) \right] ^ {- 1} \frac {\exp \left(\theta^ {* \top} X _ {i}\right) + o _ {p} (1)}{n} \cdot \frac {\Delta_ {i}}{s ^ {(0)} (Y _ {i} ; \theta^ {*}) + o _ {p} (1)} \left(X _ {i} - \frac {s ^ {(1)} (Y _ {i} ; \theta^ {*}) + o _ {p} (1)}{s ^ {(0)} (Y _ {i} ; \theta^ {*}) + o _ {p} (1)}\right) \\ &= - A ^ {- 1} \frac {\exp \left(2 \theta^ {* \top} X _ {i}\right)}{n} \cdot \int_ {0} ^ {\tau} J (t; \theta^ {*}, Z _ {i}) \, d G (t) + A ^ {- 1} \frac {\exp \left(\theta^ {* \top} X _ {i}\right)}{n} \cdot \frac {\Delta_ {i}}{s ^ {(0)} (Y _ {i} ; \theta^ {*})} \left(X _ {i} - \frac {s ^ {(1)} (Y _ {i} ; \theta^ {*})}{s ^ {(0)} (Y _ {i} ; \theta^ {*})}\right) + o _ {p} \left(\frac {1}{n}\right) \\ &= O _ {p} \left(\frac {1}{n}\right). \end{aligned}
+$$
+
+The second equality holds by the continuous mapping theorem and the third equality holds due to the boundedness of the support of $X$ , $\mathcal{B}$ , and $\tau$ . We used the fact that there exists a positive constant $C > 0$ such that $\inf_{t \in [0, \tau], \theta \in \mathcal{B}} s^{(0)}(t; \theta) = \mathbb{E}\left(I(Y \geq t) \exp(\theta^\top X)\right) \geq C$ . This completes the proof.
+
+
+Figure 3. Empirical verification of the theoretical results in Theorem 3.10. The x-axis indicates the size of the training dataset $n$ , while the y-axis indicates the average $L_{2}$ error. The plot is a log-log plot.
+
+# A.6. Empirical Verification of Theorem 3.10
+
+Here we empirically verify the theoretical results in Theorem 3.10. Specifically, we use randomly sampled subsets of the METABRIC dataset with varying sample size $n$ as the training datasets, and then plot the $\| \mathrm{VIF}_n(i) - \mathrm{IF}_n(i)\|_2$ and $\| \mathrm{IF}_n(i)\|_2$ , averaged over $i$ , as a function of $n$ . As can be seen from Figure 3, $\| \mathrm{VIF}_n(i) - \mathrm{IF}_n(i)\|_2$ decreases roughly in $O(1/n)$ , while $\| \mathrm{IF}_n(i)\|_2$ fluctuates at a constant level for larger $n$ . This aligns with our discussion in the "Approximation Error" paragraph in Appendix A.5.
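As a sanity check on reading such log-log plots, the empirical convergence rate can be estimated by the slope of a least-squares fit of $\log(\text{error})$ against $\log n$; a slope near $-1$ is consistent with the $O(1/n)$ rate of Theorem 3.10. A minimal sketch (the function name `loglog_slope` is our illustration, not from the paper's code):

```python
import numpy as np

def loglog_slope(ns, errs):
    """Least-squares slope of log(err) vs. log(n).

    On (n, ||VIF_n(i) - IF_n(i)||_2) pairs, a slope close to -1 is
    consistent with an O(1/n) decay of the approximation error."""
    return float(np.polyfit(np.log(ns), np.log(errs), 1)[0])
```

For example, an error sequence proportional to $1/n$ yields a slope of exactly $-1$, while one proportional to $1/\sqrt{n}$ yields $-0.5$.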
+
+# B. Detailed Experiment Results
+
+Additional Results on larger experiment settings. Table 4 shows that the Pearson correlation coefficients of VIF remain high on larger datasets with linear models and are non-trivial with neural networks.
+
+Table 4. Additional Results on larger experiment settings.
+
+| Scenario | Dataset | Model | Pearson Correlation |
+| --- | --- | --- | --- |
+| Cox Regression | RR-NL-NHP | Linear | 0.9997 |
+| Cox Regression | RR-NL-NHP | Neural Network | 0.3619 |
+| Listwise Learning-to-Rank | Delicious (full) | Linear | 0.8337 |
+
+Datasets. For Cox regression, both the METABRIC and SUPPORT datasets are split into training, validation, and test sets with a 6:2:2 ratio. The training objects and test objects are defined as the full training and test sets. For node embedding, the test objects are all valid pairs of nodes, i.e., $34 \times 34 = 1156$ objects, while the training objects are the 34 individual nodes. In the case of listwise learning-to-rank, we sample 500 test samples from the pre-defined test set as the test objects. For the Mediamill dataset, we use the full label set as the training objects. For the Delicious dataset, we sample 100 labels from the complete label set for our primary experiments. Additionally, we conduct experiments on the full label set, denoted as "Delicious (full)." The brute-force leave-one-out retraining follows the same training hyperparameters as the full model, with one training object removed at a time.
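The brute-force LOO procedure described above can be sketched as follows. The trainer here is a stand-in closed-form ridge regression (our simplification for illustration); the paper instead retrains the actual model with identical hyperparameters, dropping one training object at a time:

```python
import numpy as np

def train(X, y, lam=1e-2):
    # Stand-in trainer: closed-form ridge regression. In the paper's
    # setting this would be a full retraining run with the same
    # hyperparameters as the full model.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def brute_force_loo(X, y, test_loss):
    """Retrain once per held-out training object and record the change
    in test loss relative to the full model -- the LOO ground truth
    that influence estimates are compared against."""
    full_loss = test_loss(train(X, y))
    diffs = []
    for i in range(len(X)):
        mask = np.arange(len(X)) != i  # drop training object i
        diffs.append(test_loss(train(X[mask], y[mask])) - full_loss)
    return np.array(diffs)
```

The quadratic cost (one retraining per training object) is what makes this ground truth expensive; cf. the 5116s brute-force runtime in Table 7.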
+
+Table 5. Training objects and test objects in different experiment settings.
+
+| Scenario | Dataset | Training obj | Test obj |
+| --- | --- | --- | --- |
+| Cox regression | METABRIC | 1217 samples | 381 samples |
+| Cox regression | SUPPORT | 5677 samples | 1775 samples |
+| Cox regression | RR-NL-NHP | 16000 samples | 5000 samples |
+| Node embedding | Karate | 34 nodes | 1156 pairs of nodes |
+| Listwise learning-to-rank | Mediamill | 101 labels | 500 samples |
+| Listwise learning-to-rank | Delicious | 100 labels | 500 samples |
+| Listwise learning-to-rank | Delicious (full) | 983 labels | 500 samples |
+
+Models. For Cox regression, we train a CoxPH model with a linear function on the features for both the METABRIC and SUPPORT datasets. The model is optimized using the Adam optimizer with a learning rate of 0.01. We train the model for 200 epochs on the METABRIC dataset and 100 epochs on the SUPPORT dataset. For node embedding, we sample 1,000 walks per node, each with a length of 6, and set the window size to 3. The dimension of the node embedding is set to 2. For listwise learning-to-rank, the model is optimized using the Adam optimizer with a learning rate of 0.001, weight decay of 5e-4, and a batch size of 128 for 100 epochs on both the Mediamill and Delicious datasets. We also use TruncatedSVD to reduce the feature dimension to 8.
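The TruncatedSVD feature reduction mentioned above amounts to projecting feature vectors onto the top right singular vectors of the feature matrix. A minimal NumPy sketch (in practice a library implementation such as scikit-learn's `TruncatedSVD` would be used):

```python
import numpy as np

def svd_reduce(X, k=8):
    """Project rows of X onto the top-k right singular vectors,
    reducing the feature dimension to k (here k=8, matching the
    listwise learning-to-rank setting)."""
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T
```

Projecting onto all right singular vectors recovers an orthogonal change of basis, so the reduction only discards the directions of smallest singular value.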
+
+# C. Efficient Inverse Hessian Approximation
+
+Existing methods for efficient inverse Hessian approximation used by the conventional IF for decomposable losses can be adapted to accelerate VIF. Specifically, we consider two methods used by Koh & Liang (2017): Conjugate Gradient (CG) and LiSSA (Agarwal et al., 2017). The application of CG to VIF is straightforward, as it can be directly applied to the original Hessian matrix. LiSSA is originally designed for decomposable losses of the form $\sum_{i=1}^{n} \ell_i(\theta)$ and accelerates the inverse Hessian vector product calculation by sampling the Hessians of individual loss terms, $\nabla_\theta^2 \ell_i(\theta)$ . The adaptation of LiSSA to VIF depends on the specific form of the loss function. In many non-decomposable losses (e.g., all three examples in this paper), the total loss can still be written as a summation of simpler loss terms, even though they are not decomposable to individual data points. In such cases, LiSSA can still be applied to efficiently estimate the inverse Hessian vector product by sampling the simpler loss terms.
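To illustrate the LiSSA-style recursion on sampled loss terms, the sketch below estimates an inverse-Hessian-vector product $H^{-1}v$ from Hessian-vector products of (possibly randomly sampled) simpler loss terms. This is a hedged sketch under our own naming and parameter choices (`scale`, `damping`), not the paper's implementation:

```python
import numpy as np

def lissa_ihvp(sample_hvp, v, num_iters=200, scale=10.0, damping=0.0):
    """LiSSA-style estimate of H^{-1} v.

    sample_hvp(h) should return H_j @ h for the Hessian H_j of a
    sampled simpler loss term. The recursion
        h <- v + (1 - damping) * h - sample_hvp(h) / scale
    converges when the eigenvalues of H / scale lie in (0, 1); the
    final division by scale undoes the rescaling of H."""
    h = v.copy()
    for _ in range(num_iters):
        h = v + (1.0 - damping) * h - sample_hvp(h) / scale
    return h / scale
```

Only Hessian-vector products of individual terms are needed, so the full Hessian is never formed, which is what makes the accelerated variants memory efficient.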
+
+# C.1. Experiments
+
+We implement the CG and LiSSA versions of accelerated VIF for the Cox regression model and evaluate them on the METABRIC dataset. In addition to the linear model, we also experiment with a neural network model, where the relative risk function is implemented as a two-layer MLP with ReLU activation. We use VIF (Explicit) to refer to the VIF with explicit inverse Hessian calculation, and VIF (CG) and VIF (LiSSA) to refer to the accelerated variants.
+
+Performance. As can be seen from Table 6, the accelerated methods VIF (CG) and VIF (LiSSA) achieve similar performance as both the original VIF (Explicit) and the Brute-Force LOO on both the linear and neural network models. The correlation coefficients of all methods on the neural network model are lower than those on the linear model due to the randomness inherent in the model training.
+
+Table 6. The Pearson correlation coefficients of methods for Cox regression on the METABRIC dataset.
+
+| Methods | Linear | Neural Network |
+| --- | --- | --- |
+| VIF (Explicit) | 0.997 | 0.238 |
+| VIF (CG) | 0.997 | 0.201 |
+| VIF (LiSSA) | 0.981 | 0.197 |
+| Brute-Force | 0.997 | 0.219 |
+
+Runtime. We further report the runtime of different methods on neural network models with varying model sizes. VIF (CG) and VIF (LiSSA) are not only faster than VIF (Explicit), especially as the model size grows, but also much more memory efficient. VIF (Explicit) runs out of memory quickly as the model size grows, while VIF (CG) and VIF (LiSSA) can be scaled to much larger models.
+
+Table 7. Runtime comparison of methods for Cox regression on the METABRIC dataset. The “#Param” refers to the total number of parameters in the neural network model.
+
+| #Param | VIF (Explicit) | VIF (CG) | VIF (LiSSA) | Brute-Force |
+| --- | --- | --- | --- | --- |
+| 0.04K | 9.88s | 5.68s | 8.85s | 5116s |
+| 10.3K | 116s | 27.7s | 17.18s | 6289s |
+| 41.0K | OOM | 113s | 67.7s | / |
+| 81.9K | OOM | 171s | 79.1s | / |
+
+# D. Heatmap of Node Embedding
+
+In Figure 4, we present the heatmap of the influence estimated by VIF and the actual LOO loss difference on two pairs of nodes. VIF identifies the top and bottom influential nodes accurately, while its estimates for nodes of intermediate influence are noisier. One caveat of these heatmap plots is the misalignment between the color maps for VIF and LOO: while VIF correlates well with LOO, the absolute influence values tend to be misaligned.
+
+
+Figure 4. VIF is applied to Zachary's Karate network to estimate the influence of each node on the contrastive loss of a pair of test nodes. Figure 4a (VIF) and Figure 4b (LOO groundtruth) show the heatmaps of influence on the node pair (12, 10); Figure 4c (VIF) and Figure 4d (LOO groundtruth) show the heatmaps of influence on the node pair (15, 13).
\ No newline at end of file
diff --git a/aversatileinfluencefunctionfordataattributionwithnondecomposableloss/images.zip b/aversatileinfluencefunctionfordataattributionwithnondecomposableloss/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..1558d03d267466d64eb38e57aa15b121b50e3d23
--- /dev/null
+++ b/aversatileinfluencefunctionfordataattributionwithnondecomposableloss/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1db4cb8c655a3bcca60a720a3ed48097736c817778be0eecbfd93805bbe0054b
+size 1257593
diff --git a/aversatileinfluencefunctionfordataattributionwithnondecomposableloss/layout.json b/aversatileinfluencefunctionfordataattributionwithnondecomposableloss/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..86e4a8d88aa2ec7d7254e9d89d8045cd1c2a99a7
--- /dev/null
+++ b/aversatileinfluencefunctionfordataattributionwithnondecomposableloss/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:762d7cb0b047de359ab410b6a09d1d5a87908feb0bc58e4a43416192e5cd5571
+size 884121