diff --git a/ICLR/2025/$F^3Set$_ Towards Analyzing Fast, Frequent, and Fine-grained Events from Videos/9d39cf39-580f-4f0f-82fe-7a5a25e558da_content_list.json b/ICLR/2025/$F^3Set$_ Towards Analyzing Fast, Frequent, and Fine-grained Events from Videos/9d39cf39-580f-4f0f-82fe-7a5a25e558da_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..38da40610f86f326a434a075616b80fb4ad222c2
--- /dev/null
+++ b/ICLR/2025/$F^3Set$_ Towards Analyzing Fast, Frequent, and Fine-grained Events from Videos/9d39cf39-580f-4f0f-82fe-7a5a25e558da_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:32890b6664c85432ef2c59ba0799b33e8c22d02c0e1deffdf37db9c0a23df8e1
+size 95205
diff --git a/ICLR/2025/$F^3Set$_ Towards Analyzing Fast, Frequent, and Fine-grained Events from Videos/9d39cf39-580f-4f0f-82fe-7a5a25e558da_model.json b/ICLR/2025/$F^3Set$_ Towards Analyzing Fast, Frequent, and Fine-grained Events from Videos/9d39cf39-580f-4f0f-82fe-7a5a25e558da_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..ef1108469afe5c926095e6e36793ac53f5b2df42
--- /dev/null
+++ b/ICLR/2025/$F^3Set$_ Towards Analyzing Fast, Frequent, and Fine-grained Events from Videos/9d39cf39-580f-4f0f-82fe-7a5a25e558da_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5aff2c08b7657bb3c0f7568c1c21342036b06baf19fbc6c11eef28e02974815f
+size 119143
diff --git a/ICLR/2025/$F^3Set$_ Towards Analyzing Fast, Frequent, and Fine-grained Events from Videos/9d39cf39-580f-4f0f-82fe-7a5a25e558da_origin.pdf b/ICLR/2025/$F^3Set$_ Towards Analyzing Fast, Frequent, and Fine-grained Events from Videos/9d39cf39-580f-4f0f-82fe-7a5a25e558da_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..3d20a5583ce55f547f654e1e06979c9d3226251a
--- /dev/null
+++ b/ICLR/2025/$F^3Set$_ Towards Analyzing Fast, Frequent, and Fine-grained Events from Videos/9d39cf39-580f-4f0f-82fe-7a5a25e558da_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a93a8f85a341577b61c89053d91981070d340fc0be3f0740b5b0b146a45ed0a4
+size 6213250
diff --git a/ICLR/2025/$F^3Set$_ Towards Analyzing Fast, Frequent, and Fine-grained Events from Videos/full.md b/ICLR/2025/$F^3Set$_ Towards Analyzing Fast, Frequent, and Fine-grained Events from Videos/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..eda98ea7cc063b500c88848041acdbedbbeb4a81
--- /dev/null
+++ b/ICLR/2025/$F^3Set$_ Towards Analyzing Fast, Frequent, and Fine-grained Events from Videos/full.md
@@ -0,0 +1,295 @@

# $\mathrm{F}^{3}\mathrm{SET}$: TOWARDS ANALYZING FAST, FREQUENT, AND FINE-GRAINED EVENTS FROM VIDEOS

Zhaoyu Liu$^{1,2}$, Kan Jiang$^{2}$, Murong Ma$^{2}$, Zhe Hou$^{3}$, Yun Lin$^{4}$, Jin Song Dong$^{2}$

$^{1}$Ningbo University $^{2}$National University of Singapore $^{3}$Griffith University $^{4}$Shanghai Jiao Tong University

{liuzy, jiangkan}@nus.edu.sg, murongma@u.nus.edu

z.hou@griffith.edu.au, lin-yun@sjtu.edu.cn, dcsdjs@nus.edu.sg

# ABSTRACT

Analyzing Fast, Frequent, and Fine-grained ($\mathrm{F}^{3}$) events presents a significant challenge in video analytics and multi-modal LLMs. Current methods struggle to identify events that satisfy all the $\mathrm{F}^{3}$ criteria with high accuracy due to challenges such as motion blur and subtle visual discrepancies. To advance research in video understanding, we introduce $\mathrm{F}^{3}$Set, a benchmark that consists of video datasets for precise $\mathrm{F}^{3}$ event detection. Datasets in $\mathrm{F}^{3}$Set are characterized by their extensive scale and comprehensive detail, usually encompassing over 1,000 event types with precise timestamps and supporting multi-level granularity. Currently, $\mathrm{F}^{3}$Set contains several sports datasets, and this framework may be extended to other applications as well.
We evaluated popular temporal action understanding methods on $\mathrm{F}^{3}$Set, revealing substantial challenges for existing techniques. Additionally, we propose a new method, $\mathrm{F}^{3}\mathrm{ED}$, for $\mathrm{F}^{3}$ event detection, achieving superior performance. The dataset, model, and benchmark code are available at https://github.com/F3Set/F3Set.

# 1 INTRODUCTION

Recognizing sequences of fast (fast-paced), frequent (many actions in a short period), and fine-grained (diverse types) events with precise timestamps (with a tolerance of 1-2 frames) is a challenging problem for both current video analytics methods and multi-modal large language models (LLMs). Despite advances in fine-grained action recognition [31; 58; 51], temporal action localization [60; 6; 40; 59], segmentation [67; 33; 71; 2], and video captioning [63; 56; 49; 36], limited attention has been paid to this problem. This task is critical for various real-world applications, such as sports analytics, where action forecasting [21; 65], strategic and tactical analysis [44; 45; 46; 48], and player performance evaluation [10; 55] depend on a detailed understanding of event sequences. Other examples include industrial inspection [42], crucial for detecting subtle irregularities in high-speed production lines to ensure quality and safety; computer vision in autonomous driving [25], essential for accurate and instantaneous vehicle control and obstacle detection; and surveillance [53], important for the precise identification of abnormal or sudden events to enhance security. However, existing methods and the datasets foundational to their development only partially address the $\mathrm{F}^3$ scenario.

To facilitate the study of $\mathrm{F}^3$ event understanding, we propose a new benchmark, $\mathrm{F}^3\mathrm{Set}$, for precise temporal event detection and recognition.
$\mathrm{F}^3\mathrm{Set}$ datasets usually have a large number of event types (on the order of 1,000), annotated with exact timestamps, and offer multi-level granularity to capture comprehensive event details. Although $\mathrm{F}^3$ is a general problem, creating such a dataset requires domain-specific knowledge for labeling and processing; thus, in this paper, we use tennis as a case study. We also introduce a general annotation pipeline and toolchain to support domain experts in creating new $\mathrm{F}^3$ datasets. Using this pipeline, we have also been building datasets for table tennis and badminton, and a community of users is actively expanding these to other applications.

Unlike other video analysis tasks, tennis actions are characterized by their rapid succession and diversity, as illustrated in Figure 1. Understanding detailed event attributes such as shot direction, technique, and outcome is essential. For example, analyzing patterns in serve directions (e.g., "T", "body", "wide", defined in Appendix B) or success rates can reveal players' habits and skill levels, offering strategic insights for competitive advantage [15]. This detailed analysis supports coaches and players in developing tailored strategies against different opponents [16; 47].

Figure 1: Example of detecting fast, frequent, and fine-grained events with precise moments.

However, detecting $\mathrm{F}^3$ events from videos poses significant challenges, such as subtle visual differences, motion-induced blurring, and the need for precise event localization. Current video understanding methods are inadequately equipped to address these challenges. For instance, traditional fine-grained action recognition [9; 58; 28] assigns a single label to an entire video rather than identifying a sequence of events.
Temporal action localization (TAL) and temporal action segmentation (TAS) often depend on pre-trained or modestly fine-tuned input features [39; 14], which lack the specificity required to capture the subtle, domain-specific visual details necessary for recognizing diverse events with temporal precision. Some studies [24; 36; 43] attempt to address these issues through dense frame sampling and end-to-end training. However, dense sampling makes the targeted events temporally sparse (e.g., only a few events over hundreds of consecutive frames). As a result, long-term temporal correlation modules operating on dense visual features struggle to capture event-wise causal correlations effectively.

Moreover, large language models (LLMs) [54; 61; 38] have expanded their capabilities to include multi-modal inference, encompassing text, visuals, and audio. Recognizing this potential, we conducted preliminary experiments on $\mathrm{F}^3$Set using GPT-4 and observed that it understood basic video contexts, such as sports types, contextual information (e.g., court type and scoreboard), and simple actions. However, it struggled with $\mathrm{F}^3$ events and temporal relations between frames (e.g., shot directions); see Appendix A for details. Consequently, GPT-4 yields poor results compared to the other methods on $\mathrm{F}^3$ problems, and we do not include it in the experiments. By introducing $\mathrm{F}^3$Set, we hope to help advance multi-modal LLM capabilities in $\mathrm{F}^3$ video understanding.

Leveraging $\mathrm{F}^3$Set, we extensively evaluate existing temporal action understanding methods, aiming to reveal the challenges of $\mathrm{F}^3$ event understanding. To provide guidelines for future research, we conduct a number of ablation studies on modeling choices.
Addressing the shortcomings of existing methods, we also propose a simple yet efficient model, $\mathrm{F}^3\mathrm{ED}$, designed for $\mathrm{F}^3$ event detection that can be trained quickly on a single GPU. It outperforms existing models and can serve as a baseline for further development.

Contributions. The key contributions of this paper are as follows:

- We create $\mathrm{F}^3\mathrm{Set}$, a new benchmark with datasets that feature over 1,000 precisely timestamped event types with multi-level granularity, designed to challenge and advance the state of the art in temporal action understanding.
- We introduce a general annotation toolchain that enables domain experts to create new $\mathrm{F}^3$ datasets.
- We propose an end-to-end model named $\mathrm{F}^3\mathrm{ED}$, which can accurately detect $\mathrm{F}^3$ event sequences from videos through visual features and contextual sequence refinement on a single GPU.
- We assess the performance of leading temporal action understanding methods on $\mathrm{F}^3$Set through comprehensive evaluations and ablation studies and analyze the results.

# 2 RELATED WORK

Existing $\mathrm{F}^3$-related datasets. Although datasets have been developed for temporal action understanding, few focus on $\mathrm{F}^3$ events. Table 1 compares existing datasets with $\mathrm{F}^3\mathrm{Set}$ by scale ("# Vid.", "# Clips") and characteristics such as action speed ("Evt. Len."), frequency ("# Evt. / sec"), and granularity ("# Classes"), which correspond to "fast", "frequent", and "fine-grained", respectively. Datasets such as THUMOS14 [27] and Breakfast [30] focus on coarse-grained actions, where background context provides clear cues and actions span seconds to minutes. In contrast, FineAction [41] and ActivityNet [4] cover a wide range of daily activities with diverse action categories, while FineGym [58] delves into detailed action types within gymnastics.
Like FineGym, $\mathrm{F}^3\mathrm{Set}$ emphasizes domain-specific

Table 1: Comparison of existing $\mathrm{F}^3$-related datasets and $\mathrm{F}^3\mathrm{Set}$. "Evt. Len." is the average duration of each event, and "# Evt. / sec" is the average number of events per second.
| Datasets | # Vid. | # Clips | Avg. Clip Len. | # Classes | Evt. Len. | # Evt. / sec |
| --- | --- | --- | --- | --- | --- | --- |
| (a) Fine-grained | | | | | | |
| FineAction [41] | - | 16,732 | 149.5s | 101 | 6.9s | 0.3 |
| ActivityNet [4] | - | 19,994 | 116.7s | 200 | 49.2s | 0.01 |
| FineGym [58] | 303 | 32,697 | 50.3s | 530 | 1.7s | 0.3 |
| (b) Fast | | | | | | |
| CCTV-Pipe [42] | 575 | 575 | 549.3s | 16 | < 0.1s | 0.02 |
| SoccerNetV2 [11] | 9 | 9 | 99.6min | 12 | < 0.1s | 0.3 |
| (c) Frequent | | | | | | |
| FineDiving [69] | 135 | 3,000 | 4.2s | 29 | 1.1s | ~1 |
| (d) Fast & Frequent | | | | | | |
| ShuttleSet [66] | 44 | 3,685 | 10.9s | 18 | < 0.1s | ~1 |
| P2ANet [3] | 200 | 2,721 | 360.0s | 14 | < 0.1s | ~2 |
| (e) Fast & Frequent & Fine-grained | | | | | | |
| F3Set | 114 | 11,584 | 8.4s | 1,108 | < 0.1s | ~1 |
| Category | Details |
| --- | --- |
| Matches | 114 broadcast matches |
| Players | 75 (30 men, 45 women) |
| Handedness | 68 right-handed, 7 left-handed |
| Frame Rate (FPS) | 25–30 FPS |

| Category | Details |
| --- | --- |
| Clips | 11,584 rallies |
| Average Clip Duration | 8.4 sec |
| Total Shots | 42,846 |
| Shots Per Rally | 1 to 34 |
| Video encoder | Head arch. | $\mathrm{F1}_{evt}$ ($G_{high}$) | $\mathrm{F1}_{elm}$ ($G_{high}$) | Edit ($G_{high}$) | $\mathrm{F1}_{evt}$ ($G_{mid}$) | $\mathrm{F1}_{elm}$ ($G_{mid}$) | Edit ($G_{mid}$) | $\mathrm{F1}_{evt}$ ($G_{low}$) | $\mathrm{F1}_{elm}$ ($G_{low}$) | Edit ($G_{low}$) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| TSN [64] | MS-TCN [19] | 15.9 | 59.8 | 53.5 | 23.2 | 60.9 | 65.8 | 45.7 | 70.4 | 72.8 |
| TSN [64] | ActionFormer [72] | 18.4 | 60.6 | 55.2 | 24.8 | 61.9 | 67.3 | 48.7 | 70.6 | 72.2 |
| TSN [64] | E2E-Spot [24] | 24.7 | 65.3 | 60.1 | 31.5 | 66.2 | 71.0 | 53.5 | 73.6 | 75.0 |
| SlowFast [20] | G-TAD [70] | 23.0 | 66.1 | 64.0 | 29.6 | 66.5 | 74.2 | 53.3 | 76.0 | 77.9 |
| SlowFast [20] | ActionFormer [72] | 28.7 | 70.0 | 67.6 | 35.5 | 70.9 | 76.4 | 59.3 | 77.1 | 81.5 |
| SlowFast [20] | E2E-Spot [24] | 25.9 | 69.4 | 65.7 | 33.8 | 70.4 | 75.4 | 55.5 | 76.5 | 79.5 |
| I3D [5] | E2E-Spot [24] | 22.7 | 59.7 | 68.7 | 27.1 | 60.7 | 74.2 | 51.9 | 67.7 | 78.3 |
| VTN [52] | E2E-Spot [24] | 14.8 | 58.3 | 56.7 | 20.0 | 59.4 | 68.2 | 39.7 | 63.1 | 73.1 |
| TSM [35] | MS-TCN [19] | 21.7 | 67.3 | 58.6 | 30.4 | 69.5 | 73.0 | 50.2 | 74.0 | 75.3 |
| TSM [35] | ASformer [71] | 17.6 | 61.9 | 57.5 | 25.5 | 64.0 | 74.2 | 46.0 | 72.9 | 74.0 |
| TSM [35] | G-TAD [70] | 16.9 | 62.5 | 55.2 | 29.8 | 66.9 | 74.8 | 39.8 | 70.1 | 67.2 |
| TSM [35] | ActionFormer [72] | 22.4 | 65.7 | 60.3 | 31.0 | 68.2 | 74.7 | 52.4 | 73.8 | 74.9 |
| TSM [35] | E2E-Spot [24] | 31.4 | 71.4 | 68.7 | 39.5 | 72.3 | 77.9 | 60.6 | 78.4 | 82.1 |
| TSM [35] | $\mathrm{F}^3$ED | 40.3 | 75.2 | 74.0 | 48.0 | 76.5 | 82.4 | 68.4 | 80.0 | 87.2 |
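The $\mathrm{F1}_{evt}$ and Edit columns above can be illustrated with a short sketch: an event-level F1 that counts a prediction as correct when a same-label ground-truth event lies within a small frame tolerance, and a segmental edit score based on the normalized Levenshtein distance between label sequences. The greedy one-to-one matching and the default tolerance here are illustrative assumptions; the benchmark's exact evaluation protocol is defined by the authors' released code.

```python
def event_f1(pred, gt, tol=1):
    """Event-level F1: a predicted (frame, label) pair is a true positive
    when a ground-truth event with the same label lies within `tol` frames
    (greedy one-to-one matching)."""
    remaining = list(gt)
    tp = 0
    for frame, label in pred:
        match = next((g for g in remaining
                      if g[1] == label and abs(g[0] - frame) <= tol), None)
        if match is not None:
            remaining.remove(match)
            tp += 1
    prec = tp / len(pred) if pred else 0.0
    rec = tp / len(gt) if gt else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0


def edit_score(pred_labels, gt_labels):
    """Segmental edit score: 100 * (1 - normalized Levenshtein distance)
    between predicted and ground-truth event label sequences."""
    m, n = len(pred_labels), len(gt_labels)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if pred_labels[i - 1] == gt_labels[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return 100.0 * (1 - d[m][n] / max(m, n, 1))
```

For example, predicting [(10, "serve"), (40, "forehand")] against ground truth [(11, "serve"), (40, "forehand"), (70, "backhand")] with a 1-frame tolerance gives precision 1.0 and recall 2/3, i.e. F1 = 0.8.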
| Experiment | $\mathrm{F1}_{evt}$ ($G_{high}$) | $\mathrm{F1}_{elm}$ ($G_{high}$) | Edit ($G_{high}$) | $\mathrm{F1}_{evt}$ ($G_{mid}$) | $\mathrm{F1}_{elm}$ ($G_{mid}$) | Edit ($G_{mid}$) | $\mathrm{F1}_{evt}$ ($G_{low}$) | $\mathrm{F1}_{elm}$ ($G_{low}$) | Edit ($G_{low}$) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| TSM + E2E-Spot | 31.4 | 71.4 | 68.7 | 39.5 | 72.3 | 77.9 | 60.6 | 78.4 | 82.1 |
| (a) Feature extractor | | | | | | | | | |
| I3D [5] (clip-wise) | 22.7 | 59.7 | 68.7 | 27.1 | 60.7 | 74.2 | 51.9 | 67.7 | 78.3 |
| VTN [52] (video transformer) | 14.8 | 58.3 | 56.7 | 20.0 | 59.4 | 68.2 | 39.7 | 63.1 | 73.1 |
| ST-GCN++ [17] (skeleton-based) | 25.4 | 62.1 | 56.1 | 32.4 | 63.9 | 63.5 | 55.1 | 69.4 | 73.2 |
| PoseConv3D [18] (skeleton-based) | 20.1 | 54.5 | 53.2 | 26.0 | 55.4 | 61.9 | 48.8 | 63.0 | 69.7 |
| (b) Stride size = 4 | 25.9 | 69.2 | 62.7 | 33.4 | 69.9 | 73.0 | 60.0 | 77.9 | 78.8 |
| Stride size = 8 | 14.0 | 56.7 | 44.3 | 18.5 | 57.4 | 54.8 | 40.4 | 67.0 | 59.2 |
| (c) without GRU | 27.6 | 69.0 | 60.6 | 38.0 | 71.3 | 75.3 | 54.7 | 74.1 | 73.4 |
| (d) Clip length = 32 | 26.3 | 67.4 | 54.5 | 35.5 | 69.4 | 71.8 | 53.2 | 75.1 | 68.9 |
| Clip length = 64 | 30.7 | 71.2 | 67.4 | 38.6 | 72.4 | 77.5 | 58.4 | 77.9 | 81.1 |
| Clip length = 192 | 29.3 | 70.3 | 65.7 | 37.3 | 71.4 | 77.0 | 58.8 | 77.1 | 80.4 |
| (e) Multi-label | 37.9 | 74.3 | 71.7 | 45.9 | 75.6 | 80.1 | 66.6 | 80.1 | 85.1 |
| (f) Multi-label + CTX (Transformer) | 39.0 | 74.3 | 72.8 | 50.5 | 75.5 | 81.8 | 63.4 | 79.6 | 86.8 |
| Multi-label + CTX (BiGRU) | 40.3 | 75.2 | 74.0 | 48.0 | 76.5 | 82.4 | 68.4 | 80.0 | 87.2 |
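Rows (e)-(f) ablate multi-label event prediction and contextual sequence refinement, where a bidirectional GRU over the sequence of per-event predictions gives the best scores at most granularities. A minimal NumPy sketch of that contextual-refinement idea follows; the dimensions, gate packing, and output projection here are illustrative assumptions for exposition, not the paper's exact $\mathrm{F}^3$ED design.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_cell(x, h, W, U, b):
    """One GRU step; gate parameters packed as [update; reset; candidate]."""
    d = h.shape[0]
    z = sigmoid(W[:d] @ x + U[:d] @ h + b[:d])            # update gate
    r = sigmoid(W[d:2*d] @ x + U[d:2*d] @ h + b[d:2*d])   # reset gate
    n = np.tanh(W[2*d:] @ x + r * (U[2*d:] @ h) + b[2*d:])  # candidate state
    return z * h + (1.0 - z) * n

def bigru_refine(seq, params_f, params_b, W_out):
    """Run forward and backward GRUs over per-event element logits and
    re-project the concatenated states to refined per-element probabilities."""
    T, d = seq.shape
    hf, hb = np.zeros(d), np.zeros(d)
    fwd, bwd = [], [None] * T
    for t in range(T):                        # forward pass over events
        hf = gru_cell(seq[t], hf, *params_f)
        fwd.append(hf)
    for t in reversed(range(T)):              # backward pass over events
        hb = gru_cell(seq[t], hb, *params_b)
        bwd[t] = hb
    H = np.concatenate([np.stack(fwd), np.stack(bwd)], axis=1)  # (T, 2d)
    return sigmoid(H @ W_out)                 # (T, n_elements)

d, n_elements = 16, 16  # illustrative element-vocabulary size
def random_params():
    return (rng.normal(size=(3 * d, d)) * 0.1,
            rng.normal(size=(3 * d, d)) * 0.1,
            np.zeros(3 * d))

logits = rng.normal(size=(5, d))              # e.g. 5 events in one rally
probs = bigru_refine(logits, random_params(), random_params(),
                     rng.normal(size=(2 * d, n_elements)) * 0.1)
```

Because each event's refined probabilities depend on both earlier and later events in the rally, this kind of head can exploit causal regularities (e.g., a serve precedes a return) that per-event classifiers miss.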
| Head arch. | $\mathrm{F1}_{evt}$ (ShuttleSet [66]) | Edit (ShuttleSet [66]) | $\mathrm{F1}_{evt}$ (FineDiving [69]) | Edit (FineDiving [69]) | $\mathrm{F1}_{evt}$ (FineGym [58]) | Edit (FineGym [58]) | $\mathrm{F1}_{evt}$ (SoccerNetV2 [11]) | Edit (SoccerNetV2 [11]) | $\mathrm{F1}_{evt}$ (CCTV-Pipe [42]) | Edit (CCTV-Pipe [42]) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MS-TCN [19] | 70.3 | 74.4 | 65.7 | 92.2 | 57.6 | 65.3 | 43.4 | 74.5 | 25.8 | 31.3 |
| ASformer [71] | 55.9 | 70.6 | 49.9 | 87.6 | 53.6 | 66.3 | 46.3 | 76.1 | 15.4 | 33.4 |
| G-TAD [70] | 48.2 | 61.1 | 52.1 | 82.6 | 45.8 | 51.4 | 42.3 | 72.3 | 31.3 | 33.6 |
| ActionFormer [72] | 62.1 | 67.5 | 68.3 | 92.4 | 54.0 | 59.7 | 43.0 | 64.6 | 18.8 | 29.5 |
| E2E-Spot [24] | 70.2 | 75.0 | 75.8 | 93.7 | 62.1 | 65.4 | 46.2 | 72.9 | 27.2 | 35.2 |
| $\mathrm{F}^3$ED | 70.7 | 77.1 | 77.6 | 95.1 | 70.9 | 70.7 | 48.1 | 76.6 | 37.0 | 39.5 |
| Steps Tasks | ReGenesis w/o hints | ReGenesis |
| GSM8K | 98.4% | 99.5% |
| NumGLUE | 89.6% | 98.2% |
| ARC-c | 98.6% | 100.0% |
| Reclor | 97.5% | 100.0% |
| StrategyQA | 96.9% | 99.4% |
| Training Methods | Prompting Method at Inference | GSM8K (Math) | NumGlue (Math) | ARC-c (Logical) | ReClor (Logical) | StrategyQA (Commonsense) |
| w/o FT | CoT Prompting | 44.0% | 40.6% | 77.2% | 57.6% | 77.4% |
| CoT Prompting (3-shot) | 68.3% | 47.2% | 79.1% | 59.4% | 80.8% | |
| Self-Consistency | 60.0% | 38.2% | 80.6% | 56.2% | 80.8% | |
| FT w/ GT | CoT Prompting | 13.8% | 55.0% | 77.4% | 70.4% | 85.6% |
| Self-Consistency | 15.2% | 55.9% | 77.2% | 71.6% | 85.6% | |
| LMSI | CoT Prompting | 51.8% | 46.5% | 67.9% | 51.8% | 78.3% |
| Self-Consistency | 62.3% | 57.1% | 71.7% | 50.8% | 79.0% | |
| LMSI w/ GT | CoT Prompting | 57.4% | 51.8% | 75.9% | 58.0% | 80.2% |
| Self-Consistency | 66.3% | 62.2% | 77.5% | 59.4% | 81.7% | |
| STaR | CoT Prompting | 46.3% | 48.3% | 76.5% | 57.8% | 84.4% |
| Self-Consistency | 66.0% | 64.5% | 84.1% | 63.8% | 85.9% | |
| ReGenesis | CoT Prompting | 63.6% | 52.2% | 78.0% | 68.4% | 81.5% |
| Self-Consistency | 76.6% | 74.7% | 85.4% | 70.6% | 91.3% |
| Training Datasets | Training Methods | Test Datasets | |||||
| ASDIV (Math) | SVAMP (Math) | AQUA (Math) | BBH (Logical) | ANLI (NLI) | OpenbookQA (Commonsense) | ||
| GSM8K (Math) | w/o FT | 77.2% - | 75.4% - | 41.3% - | 60.8% - | 38.4% - | 75.6% - |
| FT w/ GT | 54.0% ↓ | 40.0% ↓ | 29.1% ↓ | 53.6% ↓ | 44.6% ↑ | 72.6% ↓ | |
| LMSI w/ GT | 77.3% ↑ | 72.2% ↓ | 31.1% ↓ | 59.5% ↓ | 43.4% ↑ | 73.2% ↓ | |
| STaR | 79.6% ↑ | 71.5% ↓ | 46.9% ↑ | 47.4% ↓ | 45.0% ↑ | 72.8% ↓ | |
| Ours | 81.2% ↑ | 83.9% ↑ | 48.8% ↑ | 69.3% ↑ | 49.5% ↑ | 81.4% ↑ | |
| NumGLUE (Math) | w/o FT | 77.2% - | 75.4% - | 41.3% - | 60.8% - | 38.4% - | 75.6% - |
| FT w/ GT | 53.8% ↓ | 54.9% ↓ | 32.3% ↓ | 42.4% ↓ | 37.0% ↓ | 64.8% ↓ | |
| LMSI w/ GT | 75.7% ↓ | 78.2% ↑ | 40.6% ↓ | 59.5% ↓ | 35.1% ↓ | 72.6% ↓ | |
| STaR | 79.6% ↑ | 76.1% ↑ | 37.0% ↓ | 58.5% ↓ | 41.9% ↑ | 71.6% ↓ | |
| Ours | 76.9% ↓ | 79.4% ↑ | 48.4% ↑ | 61.7% ↑ | 50.0% ↑ | 79.8% ↑ | |
| ReClor (Logical) | w/o FT | 77.2% - | 75.4% - | 41.3% - | 60.8% - | 38.4% - | 75.6% - |
| FT w/ GT | 62.0% ↓ | 56.0% ↓ | 22.0% ↓ | 44.6% ↓ | 47.1% ↑ | 71.4% ↓ | |
| LMSI w/ GT | 77.9% ↑ | 76.8% ↑ | 45.6% ↑ | 53.9% ↓ | 35.1% ↓ | 72.4% ↓ | |
| STaR | 76.1% ↓ | 74.6% ↓ | 46.1% ↑ | 60.6% ↓ | 42.5% ↑ | 77.2% ↑ | |
| Ours | 76.4% ↓ | 76.5% ↑ | 49.6% ↑ | 66.8% ↑ | 44.8% ↑ | 81.4% ↑ | |
| ARC-c (Logical) | w/o FT | 77.2% - | 75.4% - | 41.3% - | 60.8% - | 38.4% - | 75.6% - |
| FT w/ GT | 57.7% ↓ | 57.2% ↓ | 18.9% ↓ | 37.6% ↓ | 36.3% ↓ | 78.4% ↑ | |
| LMSI w/ GT | 70.9% ↓ | 72.0% ↓ | 32.7% ↓ | 60.8% - | 32.5% ↓ | 79.4% ↑ | |
| STaR | 77.0% ↓ | 76.2% ↑ | 40.6% ↓ | 60.7% ↓ | 47.4% ↑ | 84.2% ↑ | |
| Ours | 81.6% ↑ | 79.5% ↑ | 46.5% ↑ | 66.0% ↑ | 46.4% ↑ | 82.8% ↑ | |
| StrategyQA (Commonsense) | w/o FT | 77.2% - | 75.4% - | 41.3% - | 60.8% - | 38.4% - | 75.6% - |
| FT w/ GT | 69.4% ↓ | 72.3% ↓ | 43.7% ↑ | 52.3% ↓ | 44.9% ↑ | 62.2% ↓ | |
| LMSI w/ GT | 56.3% ↓ | 56.8% ↓ | 40.6% ↓ | 60.6% ↓ | 39.8% ↑ | 68.4% ↓ | |
| STaR | 79.8% ↑ | 76.2% ↑ | 43.3% ↑ | 62.9% ↑ | 37.3% ↓ | 77.4% ↑ | |
| Ours | 81.3% ↑ | 81.1% ↑ | 42.9% ↑ | 65.9% ↑ | 55.3% ↑ | 80.4% ↑ | |
| Training Datasets | Ablation Set | Test Datasets | ||||||
| GSM8K (Test) | ASDIV (Math) | SVAMP (Math) | AQUA (Math) | BBH (Logical) | ANLI (NLI) | OpenbookQA (Commonsense) | ||
| GSM8K (Train) | A+P | 66.2% | 76.1% | 80.3% | 47.6% | 53.4% | 42.1% | 75.6% |
| S+P | 70.0% | 77.5% | 80.1% | 47.2% | 57.9% | 43.0% | 76.4% | |
| A+S+P | 76.6% | 81.2% | 83.9% | 48.8% | 69.3% | 49.5% | 81.4% | |
| Training Datasets | Training Methods | Test Datasets | ||||||
| NumGLUE (Test) | ASDIV (Math) | SVAMP (Math) | AQUA (Math) | BBH (Logical) | ANLI (NLI) | OpenbookQA (Commonsense) | ||
| NumGLUE (Train) | w/o FT | 58.9% | 88.2% | 86.8% | 64.6% | 36.5% | 53.4% | 82.2% |
| STaR | 75.6% | 88.3% | 87.9% | 61.0% | 69.4% | 52.3% | 81.2% | |
| ReGenesis | 79.2% | 89.2% | 91.1% | 63.4% | 70.1% | 53.9% | 83.8% | |
| Training Method | GSM8K (Math) | StrategyQA (Commonsense) |
| w/o FT | 56.3% | 80.8% |
| LMSI | 62.3% | 79.0% |
| ReGenesis (Majority) | 62.8% | 83.1% |
| Training Datasets | Hint Position | Test Datasets | ||||||
| GSM8K (Test) | ASDIV (Math) | SVAMP (Math) | AQUA (Math) | BBH (Logical) | ANLI (NLI) | OpenbookQA (Commonsense) | ||
| GSM8K (Train) | A+S | 75.7% | 79.3% | 79.7% | 50.0% | 67.3% | 41.9% | 77.4% |
| A+S+P | 73.7% | 78.8% | 78.4% | 44.1% | 69.0% | 41.3% | 77.0% | |
| How could I devise an experiment to help solve that problem? |
| Make a list of ideas for solving this problem, and apply them one by one to the problem to see if any progress can be made. |
| How could I measure progress on this problem? |
| How can I simplify the problem so that it is easier to solve? |
| How can I break down this problem into smaller, more manageable parts? |
| Critical Thinking: This style involves analyzing the problem from different perspectives, questioning assumptions, and evaluating the evidence or information available. It focuses on logical reasoning, evidence-based decision-making, and identifying potential biases or flaws in thinking. |
| Try creative thinking, generate innovative and out-of-the-box ideas to solve the problem. Explore unconventional solutions, thinking beyond traditional boundaries, and encouraging imagination and originality. |
| Use systems thinking: Consider the problem as part of a larger system and understanding the interconnectedness of various elements. Focuses on identifying the underlying causes, feedback loops, and interdependencies that influence the problem, and developing holistic solutions that address the system as a whole. |
| Use Reflective Thinking: Step back from the problem, take the time for introspection and self-reflection. Examine personal biases, assumptions, and mental models that may influence problem-solving, and being open to learning from past experiences to improve future approaches. |
| What is the core issue or problem that needs to be addressed? |
| What are the potential obstacles or challenges that might arise in solving this problem? |
| Prompt for prompt adaption: +Without working out the solution, adapt the following reasoning modules to be specific to our task +Reasoning Module: {Seed Prompt praw} +Task: {Instruction qi} |
| Prompt for generating reasoning structure: +Without working out the solution, create an actionable and concise reasoning structure step by step for the task using this adapted reasoning module: {Adapted Prompt pia,dp} +Task: {Instruction qi} |
| Prompt for generating reasoning solutions: +Using the following reasoning structure: {Reasoning Structure rs,i,j} +Task: {Instruction qi} +Solve this task step by step based on the above reasoning structure. |
| Prompt for prompt adaption: +Without working out the solution: {Solution ai}, adapt the following reasoning modules to be specific to our task +Reasoning Module: {Seed Prompt pjraw} +Task:{Instruction qi} |
| Prompt for generating reasoning structure: +Without working out the solution: {Solution ai}, create an actionable and concise reasoning structure step by step for the task using this adapted reasoning module:{Adapted Prompt pj,adp} +Task:{Instruction qi} |
| Prompt for generating reasoning solutions: +Using the following reasoning structure: {Reasoning Structure rs,i,j} +Task:{Instruction qi} +Solve this task step by step based on the above reasoning structure. |
| learning rate | 1e-6 |
| epochs | 3 |
| batch size | 16 |
| weight decay | 0.1 |
| lr scheduler type | cosine |
| warmup ratio | 0.03 |
| learning rate | 1e-6 |
| epochs | 3 |
| batch size | 16 |
| weight decay | 0.1 |
| lr scheduler type | cosine |
| warmup ratio | 0.03 |
| Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today? |
| A: We start with 15 trees. Later we have 21 trees. The difference must be the number of trees they planted. So, they must have planted 21 - 15 = 6 trees. The answer is 6. |
| Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot? |
| A: There are 3 cars in the parking lot already. 2 more arrive. Now there are 3 + 2 = 5 cars. The answer is 5. |
| Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total? |
| A: Leah had 32 chocolates and Leah's sister had 42. That means there were originally 32 + 42 = 74 chocolates. 35 have been eaten. So in total they still have 74 - 35 = 39 chocolates. The answer is 39. |
| Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny? |
| A: Jason had 20 lollipops. Since he only has 12 now, he must have given the rest to Denny. The number of lollipops he has given to Denny must have been 20 - 12 = 8 lollipops. The answer is 8. |
| Q: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now? |
| A: He has 5 toys. He got 2 from mom, so after that he has 5 + 2 = 7 toys. Then he got 2 more from dad, so in total he has 7 + 2 = 9 toys. The answer is 9. |
| Q: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room? |
| A: There are 4 days from monday to thursday. 5 computers were added each day. That means in total 4 * 5 = 20 computers were added. There were 9 computers in the beginning, so now there are 9 + 20 = 29 computers. The answer is 29. |
| Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday? |
| A: Michael initially had 58 balls. He lost 23 on Tuesday, so after that he has 58 - 23 = 35 balls. On Wednesday he lost 2 more so now he has 35 - 2 = 33 balls. The answer is 33. |
| Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left? |
| A: She bought 5 bagels for $3 each. This means she spent 5 * $3 = $15 on the bagels. She had $23 in beginning, so now she has $23 - $15 = $8. The answer is 8. |
| Q: A football team practices for 6 hours daily. This week they could not practice due to rain on 1 days. Find the total number of hours they practiced this week. |
| A: A week has 7 days. Since it rained on 1 day, the team practiced on the remaining days of the week: 7 - 1 = 6 days. The total number of hours they practiced this week are 6 * 6 = 36 hours. The answer is 36. |
| Q: A shopkeeper has 7 decks of playing cards. How many red color cards does he have in total? |
| A: A standard deck of playing cards contains 52 cards. In a standard deck, half of the cards are red. Therefore, the total number of red color cards the shopkeeper has is 7 * 52 * 0.5 = 182. The answer is 182. |
| Q: The hiking team needs to arrange gloves for every participant. If total number of participants is 43, how many minimum number of gloves the hiking team needs to arrange? |
| A: Each participant needs 2 gloves. The minimum number of gloves for 42 participants are 43 * 2 = 86. The answer is 86. |
| Q: Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May? |
| A: Natalia sold 48/2 = 24 clips in May. Natalia sold 48 + 24 = 72 clips altogether in April and May. |
| #####72 |
| Q: Betty is saving money for a new wallet which costs $100. Betty has only half of the money she needs. Her parents decided to give her $15 for that purpose, and her grandparents twice as much as her parents. How much more money does Betty need to buy the wallet? |
| A: In the beginning, Betty has only 100/2 = 50. Betty's grandparents gave her 15 × 2 = 30. This means, Betty needs 100 - 50 - 30 - 15 = 5 more. |
| Q: Julie is reading a 120-page book. Yesterday, she was able to read 12 pages and today, she read twice as many pages as yesterday. If she wants to read half of the remaining pages tomorrow, how many pages should she read? |
| A: Maila read 12 × 2 = 24 pages today. So she was able to read a total of 12 + 24 = 36 pages since yesterday. There are 120 - 36 = 84 pages left to be read. Since she wants to read half of the remaining pages tomorrow, then she should read 84/2 = 42 pages. |
| Q: Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May? (Hints: 72) |
| A: Natalia sold 48/2 = 24 clips in May. Natalia sold 48 + 24 = 72 clips altogether in April and May. |
| ## ## # 72 |
| Q: Betty is saving money for a new wallet which costs $100. Betty has only half of the money she needs. Her parents decided to give her $15 for that purpose, and her grandparents twice as much as her parents. How much more money does Betty need to buy the wallet? (Hints: 5) |
| A: In the beginning, Betty has only 100/2 = 50. Betty's grandparents gave her 15 × 2 = 30. This means, Betty needs 100 - 50 - 30 - 15 = 5 more. |
| Q: Julie is reading a 120-page book. Yesterday, she was able to read 12 pages and today, she read twice as many pages as yesterday. If she wants to read half of the remaining pages tomorrow, how many pages should she read? (Hints: 42) |
| A: Maila read 12 × 2 = 24 pages today. So she was able to read a total of 12 + 24 = 36 pages since yesterday. There are 120 - 36 = 84 pages left to be read. Since she wants to read half of the remaining pages tomorrow, then she should read 84/2 = 42 pages. |
| learning rate | 1e-6 |
| epochs | 3 |
| batch size | 16 |
| weight decay | 0.1 |
| lr scheduler type | cosine |
| warmup ratio | 0.03 |
| learning rate | 1e-6 |
| epochs | 3 |
| batch size | 16 |
| weight decay | 0.1 |
| lr scheduler type | cosine |
| warmup ratio | 0.03 |
| learning rate | 1e-4 |
| epochs | 3 |
| batch size | 8 |
| weight decay | 0.1 |
| lr scheduler type | cosine |
| warmup ratio | 0.1 |
| Q: Jerry needs to shingle 3 roofs. Each roof is made of two slanted rectangular sides measuring 20 feet by 40 feet. If he needs 8 shingles to cover one square foot of roof, how many shingles does he need total? |
| Reasoning Path (Correct): Each roof has two slanted rectangular sides. Let's find the area of one roof. The area of a rectangle is the product of its length and width, so the area of one roof is 2 × (20 feet × 40 feet) = 2 × 800 square feet = 1600 square feet. Since he needs 8 shingles to cover one square foot, he needs 1600 × 8 = 12800 shingles for one roof. Since he needs to shingle 3 roofs, he needs 12800 × 3 = 38400 shingles in total. The answer is 38400. |
| Reasoning Path (Wrong): 20 × 40 = 800 square feet. 8 shingles cover 1 square foot. So 8 × 800 = 6400 shingles are needed. The answer is 6400. |
| Q: Kobe and Pau went to a restaurant. Kobe ordered five pieces of fried chicken, and Pau ordered twice as much fried chicken as Kobe did. If they order another set of fried chicken, how many pieces of fried chicken will Pau have eaten in all? |
| Reasoning Path (Correct): Kobe ordered 5 pieces of fried chicken. Since Pau ordered twice as much, Pau ordered 5 × 2 = 10 pieces of fried chicken. If they order another set, Pau will eat 10 + 10 = 20 pieces of fried chicken in all. The answer is 20. |
| Reasoning Path (Wrong): 5 pieces of fried chicken were ordered the first time. Pau ordered twice as much as Kobe did, so he must have ordered 10 pieces of fried chicken. Now they order another set of fried chicken. So, Pau will have eaten 5 + 10 = 15 pieces of fried chicken. The answer is 15. |
| Q: Betty is saving money for a new wallet which costs $100. Betty has only half of the money she needs. Her parents decided to give her $15 for that purpose, and her grandparents twice as much as her parents. How much more money does Betty need to buy the wallet? |
| Reasoning Path (Correct): Betty needs $100 for the wallet. She has only half of the money she needs, which means she has $50. Her parents give her $15, so she already has 50 + 15 = 65. Her grandparents give her twice as much as her parents, which is 15 × 2 = 30. So now Betty has 65 + 30 = 95. She still needs 100 - 95 = 5 more to buy the wallet. The answer is 5. |
| Reasoning Path (Wrong): 1. Betty needs 100 dollars for a new wallet. 2. Her parents gave her 15 dollars. 3. Her grandparents gave her twice as much as her parents, so they gave her 15 × 2 = 30 dollars. 4. Betty has 15 + 30 = $45. This is 45/100 = 45% of the money she needs. The answer is 55. |
| Q: Ken created a care package to send to his brother, who was away at boarding school. Ken placed a box on a scale, and then he poured into the box enough jelly beans to bring the weight to 2 pounds. Then, he added enough brownies to cause the weight to triple. Next, he added another 2 pounds of jelly beans. And finally, he added enough gummy worms to double the weight once again. What was the final weight of the box of goodies, in pounds? |
| Reasoning Path (Correct): First, the weight of the box with jelly beans that make it weigh 2 pounds is 2 pounds. Then, when he added enough brownies to triple the weight, the weight became 2 × 3 = 6 pounds. Then he added another 2 pounds of jelly beans, so the weight became 6 + 2 = 8 pounds. And finally, he added enough gummy worms to double the weight again, so the weight became 8 × 2 = 16 pounds. The answer is 16. |
| Reasoning Path (Wrong): 2 pounds of jelly beans + 6 pounds + 2 pounds of jelly beans + 2 pounds of gummy worms = 12 pounds. The answer is 12. |
| Q: Tim rides his bike back and forth to work for each of his 5 workdays. His work is 20 miles away. He also goes for a weekend bike ride of 200 miles. If he can bike at 25 mph how much time does he spend biking a week? |
| Reasoning Path (Correct): Tim rides his bike to and from work for each of the 5 workdays. That is 5 × 2 × 20 = 200 miles for his work rides. He also goes for a weekend bike ride of 200 miles. So in total he rides 200 + 200 = 400 miles. He bikes at 25 mph, so the total time he spends biking in a week is 400 miles / 25 mph = 16 hours. The answer is 16. |
| Q: Jerry needs to shingle 3 roofs. Each roof is made of two slanted rectangular sides measuring 20 feet by 40 feet. If he needs 8 shingles to cover one square foot of roof, how many shingles does he need total? |
Reasoning Path (Correct): Jerry needs to shingle two slanted rectangular sides for each roof. The combined area for one roof is 2 × (20 × 40) = 2 × 800 = 1600 square feet per roof. So the total area Jerry needs to shingle is 3 × 1600 = 4800 square feet. Since he needs 8 shingles to cover one square foot, the number of shingles he needs in total is 4800 × 8 = 38,400 shingles. #### 38400 |
Reasoning Path (Wrong): First, let's calculate the area of each roof. Each roof is made of two slanted rectangular sides of 20 feet by 40 feet. So, the area of one roof is (20 × 40) + (20 × 40) = 800 square feet. Since he needs 8 shingles to cover one square foot of roof, he needs (800 × 8) shingles for a single roof. Since Jerry needs to shingle 3 roofs, he needs (800 × 8) × 3 = 24000 shingles. #### 24,000 |
| Q: Kobe and Pau went to a restaurant. Kobe ordered five pieces of fried chicken, and Pau ordered twice as much fried chicken as Kobe did. If they order another set of fried chicken, how many pieces of fried chicken will Pau have eaten in all? |
Reasoning Path (Correct): Pau ordered 2 × 5 = 10 pieces of fried chicken. If they order another set of fried chicken, Pau will eat an additional 2 × 5 = 10 pieces. Pau ate 10 + 10 = 20 pieces of fried chicken in all. #### 20 |
Reasoning Path (Wrong): Kobe ordered 5 pieces of fried chicken, so Pau ordered twice as much: 5 × 2 = 10 pieces. After they order another set, Pau will have eaten a total of 10 + 5 = 15 pieces of fried chicken. #### 15 |
| Q: Betty is saving money for a new wallet which costs $100. Betty has only half of the money she needs. Her parents decided to give her $15 for that purpose, and her grandparents twice as much as her parents. How much more money does Betty need to buy the wallet? |
Reasoning Path (Correct): In the beginning, Betty has only 100/2 = 50. Betty's parents gave her 15, and her grandparents gave her 15 × 2 = 30. This means, Betty now has 50 + 15 + 30 = 95. She needs 100 - 95 = 5 more to buy the wallet. #### 5 |
Reasoning Path (Wrong): Betty needs to have 100 + 15 + (2 × 15) = 130 to buy the wallet. However, she has 100/2 = 50, and she received 15 from her parents and 2 × 15 = 30 from her grandparents, which is a total of 50 + 15 + 30 = 95. So, she doesn't need any more money. She actually has already saved enough to buy the wallet. #### 0 (doesn't need any more money). |
| Q: Ken created a care package to send to his brother, who was away at boarding school. Ken placed a box on a scale, and then he poured into the box enough jelly beans to bring the weight to 2 pounds. Then, he added enough brownies to cause the weight to triple. Next, he added another 2 pounds of jelly beans. And finally, he added enough gummy worms to double the weight once again. What was the final weight of the box of goodies, in pounds? |
Reasoning Path (Correct): Initially, the weight was 2 pounds, and then it tripled, so its weight became 2 × 3 = 6 pounds. Ken then added another 2 pounds of jelly beans, so the total weight was 6 + 2 = 8 pounds. Finally, Ken added gummy worms to double the weight again, so the final weight was 8 × 2 = 16 pounds. #### 16 |
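The arithmetic in the correct reasoning paths above can be verified mechanically. A minimal Python check (variable names are ours, not part of the dataset):

```python
# Jerry's roofs: 3 roofs, 2 sides each, 20 ft x 40 ft, 8 shingles per sq ft.
shingles = 3 * 2 * (20 * 40) * 8
assert shingles == 38400

# Kobe and Pau: Pau orders twice Kobe's 5 pieces, in each of two rounds.
pau_total = 2 * (2 * 5)
assert pau_total == 20

# Betty's wallet: starts with half of $100, receives $15 plus 2 x $15.
remaining = 100 - (100 // 2 + 15 + 2 * 15)
assert remaining == 5

# Ken's care package: 2 lb -> tripled -> +2 lb -> doubled.
final_weight = ((2 * 3) + 2) * 2
assert final_weight == 16

# Tim's biking: 5 round trips of 2 x 20 mi plus a 200 mi ride, at 25 mph.
hours = (5 * 2 * 20 + 200) / 25
assert hours == 16
```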
| Guidelines | GSM8K |
| How can I break down... | 53.45% |
| Try creative thinking... | 20.30% |
| Critical Thinking... | 47.56% |
| Use Reflective Thinking... | 42.69% |
| Use systems thinking... | 39.44% |
| Guidelines | GSM8K | NumGLUE | ARC_c | ReClor | StrategyQA |
| How can I break down... | 4.62% | 4.11% | 6.40% | 3.07% | 4.32% |
| Try creative thinking... | 1.75% | 3.71% | 2.41% | 3.81% | 3.33% |
| Critical Thinking... | 4.11% | 4.09% | 3.78% | 5.12% | 4.72% |
| Use Reflective Thinking... | 3.69% | 3.98% | 1.69% | 3.32% | 3.97% |
| Use systems thinking... | 3.41% | 3.94% | 3.05% | 3.94% | 4.45% |
| Mode | Metric | Base | OnlyF (SFT) | OnlyF (DPO) | F&CF (SFT, Ours) | F&CF (DPO, Ours) | F&CF (DPO+CCF, Ours) |
| In-Domain | Avg-ER | 1.00 | 0.82 | 0.86 | 0.42 | 0.53 | 0.48 |
| Avg-IR | 1.00 | 0.73 | 0.72 | 0.44 | 0.51 | 0.47 | |
| Common-Cause | Avg-ER | 1.00 | 1.35 | 1.75 | 1.17 | 1.62 | 2.04 |
| Avg-IR | 1.00 | 1.30 | 1.49 | 1.14 | 1.42 | 1.80 | |
| Common-Effect | Avg-ER | 1.00 | 0.60 | 0.71 | 0.53 | 0.66 | 0.64 |
| Avg-IR | 1.00 | 0.49 | 0.60 | 0.42 | 0.53 | 0.50 | |
| Inductive | Avg-ER | 1.00 | 0.86 | 0.79 | 0.60 | 0.58 | 0.69 |
| Avg-IR | 1.00 | 0.74 | 0.62 | 0.51 | 0.50 | 0.59 | |
| Deductive, Cause-B. | Avg-ER | 1.00 | 0.70 | 0.67 | 0.61 | 0.58 | 0.57 |
| Avg-IR | 1.00 | 0.62 | 0.59 | 0.53 | 0.53 | 0.51 | |
| Deductive, Effect-B. | Avg-ER | 1.00 | 0.83 | 0.83 | 0.90 | 0.95 | 0.77 |
| Avg-IR | 1.00 | 0.91 | 0.87 | 0.93 | 0.89 | 0.76 | |
| Scenario | Alg. | Correctness: F-ER | C-ER | Avg-ER | Causal Consistency: N-IR | S-IR | AN-IR | AS-IR | Avg-IR |
| Healthcare Breast Cancer Treatment | Base | 23.57(0.00) | 28.93(0.00) | 26.25(0.00) | 20.62(0.00) | 4.34(0.00) | 11.47(0.00) | 32.01(0.00) | 17.11(0.00) |
| SFT-OnlyF | 3.12(0.04) | 20.82(0.02) | 11.97(0.02) | 1.50(0.01) | 2.09(0.02) | 3.01(0.02) | 19.93(0.01) | 6.63(0.01) | |
| DPO-OnlyF | 10.77(1.83) | 28.57(2.07) | 19.67(1.94) | 9.10(2.24) | 4.76(0.10) | 6.00(0.16) | 25.32(1.10) | 11.29(0.66) | |
| SFT-F&CF | 1.64(0.01) | 0.08(0.00) | 0.86(0.00) | 0.85(0.00) | 0.80(0.00) | 0.82(0.00) | 0.87(0.00) | 0.84(0.00) | |
| DPO-F&CF | 12.53(5.04) | 8.74(1.69) | 10.63(3.14) | 8.65(3.04) | 4.32(0.30) | 5.00(0.28) | 11.07(4.93) | 7.26(1.58) | |
| DPO+CCF | 9.55(1.16) | 5.16(0.15) | 7.36(0.52) | 7.97(1.21) | 1.70(0.01) | 3.59(0.04) | 9.31(1.51) | 5.64(0.36) | |
| SFT-OnlyF | 48.86(3.27) | 38.82(0.91) | 43.84(1.65) | 42.38(2.82) | 10.55(0.10) | 13.96(0.03) | 45.71(2.06) | 28.15(0.65) | |
| DPO-OnlyF | 64.56(1.93) | 67.95(0.19) | 66.25(0.70) | 53.01(1.45) | 13.20(0.06) | 14.25(0.00) | 64.77(0.58) | 36.31(0.28) | |
| SFT-F&CF | 25.99(0.55) | 38.14(0.81) | 32.07(0.07) | 25.83(0.35) | 4.10(0.04) | 13.07(0.01) | 41.42(0.38) | 21.10(0.06) | |
| DPO-F&CF | 53.92(2.45) | 63.62(4.52) | 58.77(2.10) | 46.27(0.44) | 14.09(0.01) | 14.44(0.04) | 53.23(2.34) | 32.01(0.33) | |
| DPO+CCF | 80.18(4.78) | 72.85(0.21) | 76.52(1.55) | 70.55(1.67) | 15.96(0.03) | 14.93(0.00) | 68.67(3.67) | 42.53(0.69) | |
| SFT-OnlyF | 3.16(0.04) | 20.87(0.02) | 12.01(0.02) | 1.45(0.01) | 2.21(0.02) | 2.94(0.03) | 20.06(0.01) | 6.67(0.01) | |
| DPO-OnlyF | 5.93(0.02) | 30.30(2.44) | 18.11(0.54) | 11.75(2.24) | 3.77(0.00) | 4.61(0.00) | 21.60(0.03) | 10.43(0.12) | |
| SFT-F&CF | 1.45(0.01) | 20.43(0.02) | 10.94(0.00) | 1.60(0.00) | 0.53(0.00) | 1.40(0.00) | 19.68(0.01) | 5.80(0.00) | |
| DPO-F&CF | 2.96(0.03) | 22.44(0.02) | 12.70(0.00) | 1.40(0.01) | 2.08(0.03) | 4.14(0.01) | 19.84(0.00) | 6.86(0.00) | |
| DPO+CCF | 2.17(0.04) | 22.78(0.11) | 12.48(0.07) | 0.36(0.00) | 2.58(0.07) | 3.16(0.05) | 19.60(0.00) | 6.42(0.02) | |
| SFT-OnlyF | 1.56(0.01) | 20.30(0.01) | 10.93(0.01) | 1.14(0.00) | 0.65(0.01) | 1.40(0.01) | 20.12(0.00) | 5.83(0.00) | |
| DPO-OnlyF | 18.52(4.41) | 32.93(4.36) | 25.73(4.34) | 13.03(2.76) | 6.52(0.21) | 6.80(0.20) | 30.30(2.82) | 14.16(1.11) | |
| SFT-F&CF | 1.32(0.00) | 22.37(0.00) | 11.84(0.00) | 1.28(0.00) | 0.15(0.00) | 3.07(0.00) | 20.48(0.00) | 6.24(0.00) | |
| DPO-F&CF | 4.25(0.14) | 24.37(0.14) | 14.31(0.12) | 1.45(0.04) | 3.21(0.09) | 5.99(0.09) | 20.36(0.03) | 7.75(0.04) | |
| DPO+CCF | 5.05(0.04) | 25.17(0.17) | 15.11(0.07) | 2.95(0.03) | 3.97(0.07) | 6.42(0.08) | 20.12(0.00) | 8.36(0.02) | |
| Engineering Transmission Line Protection | Base | 16.39(0.00) | 27.10(0.00) | 21.75(0.00) | 13.70(0.00) | 27.52(0.00) | 2.71(0.00) | 13.96(0.00) | 14.47(0.00) |
| SFT-OnlyF | 2.94(0.05) | 29.64(0.06) | 16.29(0.01) | 4.84(0.02) | 26.73(0.00) | 0.94(0.02) | 2.18(0.01) | 8.67(0.00) | |
| DPO-OnlyF | 3.76(0.05) | 31.30(0.06) | 17.53(0.00) | 6.63(0.01) | 26.42(0.00) | 1.00(0.00) | 3.23(0.05) | 9.32(0.00) | |
| SFT-F&CF | 1.39(0.00) | 6.43(0.05) | 3.91(0.01) | 3.27(0.04) | 4.32(0.03) | 0.46(0.00) | 0.93(0.00) | 2.25(0.00) | |
| DPO-F&CF | 10.23(2.60) | 33.48(1.15) | 21.86(1.76) | 6.94(0.29) | 30.13(0.77) | 5.34(1.23) | 6.87(0.82) | 12.32(0.72) | |
| DPO+CCF | 7.64(0.89) | 23.00(0.60) | 15.32(0.71) | 6.61(0.12) | 19.61(0.77) | 4.44(0.32) | 4.03(0.37) | 8.67(0.32) | |
| SFT-OnlyF | 14.61(0.08) | 30.28(0.17) | 22.44(0.11) | 13.80(0.08) | 27.23(0.00) | 0.81(0.00) | 14.02(0.07) | 13.96(0.02) | |
| DPO-OnlyF | 11.25(0.15) | 30.78(0.20) | 21.02(0.18) | 11.04(0.15) | 26.60(0.00) | 0.21(0.00) | 11.73(0.16) | 12.39(0.04) | |
| SFT-F&CF | 14.05(0.10) | 34.43(0.15) | 24.24(0.12) | 13.13(0.06) | 32.64(0.18) | 0.92(0.01) | 13.60(0.06) | 15.07(0.05) | |
| DPO-F&CF | 12.52(0.17) | 31.46(0.10) | 21.99(0.11) | 10.84(0.12) | 31.40(0.12) | 1.72(0.08) | 11.41(0.11) | 13.84(0.06) | |
| DPO+CCF | 14.20(0.09) | 35.72(0.00) | 24.96(0.02) | 13.26(0.07) | 35.68(0.01) | 0.94(0.01) | 13.56(0.06) | 15.86(0.02) | |
| SFT-OnlyF | 2.71(0.05) | 29.26(0.04) | 15.99(0.01) | 4.39(0.03) | 26.73(0.00) | 0.78(0.01) | 2.07(0.02) | 8.49(0.01) | |
| DPO-OnlyF | 2.71(0.05) | 29.26(0.04) | 15.99(0.01) | 4.39(0.03) | 26.73(0.00) | 0.78(0.01) | 2.07(0.02) | 8.49(0.01) | |
| SFT-F&CF | 0.95(0.00) | 26.41(0.00) | 13.68(0.00) | 0.82(0.00) | 26.41(0.00) | 0.15(0.00) | 0.80(0.00) | 7.04(0.00) | |
| DPO-F&CF | 2.28(0.09) | 34.30(1.84) | 18.29(0.48) | 3.04(0.10) | 25.71(0.02) | 0.46(0.00) | 8.74(1.32) | 9.49(0.13) | |
| DPO+CCF | 1.51(0.00) | 32.98(2.12) | 17.25(0.49) | 2.41(0.04) | 26.55(0.00) | 0.22(0.00) | 6.50(1.23) | 8.92(0.11) | |
| SFT-OnlyF | 2.58(0.02) | 31.10(0.10) | 16.84(0.03) | 5.85(0.07) | 26.41(0.00) | 0.57(0.00) | 2.38(0.03) | 8.80(0.01) | |
| DPO-OnlyF | 2.68(0.05) | 31.36(0.07) | 17.02(0.00) | 6.22(0.00) | 25.69(0.01) | 0.46(0.00) | 2.89(0.06) | 8.81(0.00) | |
| SFT-F&CF | 1.43(0.00) | 20.25(0.17) | 10.84(0.04) | 0.91(0.00) | 20.53(0.16) | 0.52(0.00) | 0.91(0.00) | 5.72(0.01) | |
| DPO-F&CF | 2.85(0.02) | 22.23(0.69) | 12.54(0.22) | 2.53(0.03) | 22.27(0.67) | 0.33(0.00) | 2.56(0.03) | 6.92(0.07) | |
| DPO+CCF | 2.34(0.06) | 26.23(0.01) | 14.29(0.02) | 1.37(0.01) | 26.03(0.01) | 1.05(0.04) | 1.47(0.01) | 7.48(0.01) | |
| Math Benchmarking Testing on S→T | Base | 47.35(0.00) | 51.11(0.00) | 50.68(0.00) | 0.00(0.00) | 61.50(0.00) | 48.24(0.00) | 0.00(0.00) | 27.44(0.00) |
| SFT-OnlyF | 37.21(0.58) | 28.91(0.08) | 33.06(0.25) | 0.00(0.00) | 52.17(0.41) | 40.10(0.69) | 0.99(0.00) | 23.07(0.14) | |
| DPO-OnlyF | 28.28(0.95) | 57.39(3.22) | 42.83(1.84) | 0.00(0.00) | 66.84(1.15) | 29.78(0.79) | 0.99(0.00) | 24.15(0.24) | |
| SFT-F&CF | 22.52(0.18) | 14.05(0.17) | 18.29(0.07) | 0.00(0.00) | 32.07(0.22) | 23.45(0.18) | 0.99(0.00) | 13.88(0.05) | |
| DPO-F&CF | 14.06(0.10) | 17.71(1.71) | 20.47(0.32) | 0.00(0.00) | 37.78(1.52) | 15.90(0.07) | 0.99(0.00) | 13.42(0.07) | |
| DPO+CCF | 19.38(0.04) | 36.13(0.44) | 27.75(0.19) | 0.00(0.00) | 50.89(1.13) | 21.03(0.02) | 0.99(0.00) | 17.98(0.09) | |
| SFT-OnlyF | 46.75(2.78) | 33.46(0.02) | 35.36(0.17) | 0.00(0.00) | 53.39(0.46) | 42.11(0.62) | 23.88(0.13) | 23.88(0.13) | |
| DPO-OnlyF | 46.75(2.78) | 33.46(0.17) | 35.36(0.17) | 0.00(0.00) | 53.39(0.46) | 42.11(0.62) | 23.88(0.13) | 23.88(0.13) | |
| SFT-F&CF | 24.27(0.08) | 27.13(0.20) | 25.70(0.04) | 0.00(0.00) | 40.98(0.05) | 27.70(0.11) | 0.99(0.00) | 16.99(0.02) | |
| DPO-F&CF | 16.13(1.16) | 26.86(1.39) | 21.49(0.31) | 0.00(0.00) | 38.28(0.95) | 18.44(1.08) | 0.99(0.00) | 14.18(1.18) | |
| DPO+CCF | 22.21(1.33) | 29.94(1.19) | 26.07(0.34) | 0.00(0.00) | 48.44(1.23) | 23.51(1.16) | 0.99(0.00) | 17.99(0.23) | |
| Base | 50.25(0.00) | 51.11(0.00) | 50.68(0.00) | 0.00(0.00) | 67.30(0.00) | 58.65(0.00) | 0.99(0.00) | 31.49(0.00) | |
| SFT-OnlyF | 45.16(0.01) | 56.90(1.33) | 51.03(0.03) | 0.00(0.00) | 68.02(0.02) | 52.95(0.05) | 0.99(0.00) | 30.24(0.13) | |
| DPO-OnlyF | 39.36(0.78) | 48.96(2.24) | 44.16(1.38) | 0.00(0.00) | 65.83(0.41) | 44.21(1.15) | 0.99(0.00) | 29.22(0.00) | |
| SFT-F&CF | 35.11(0.06) | 72.29(0.45) | 55.70(1.16) | 0.00(0.00) | 76.30(0.06) | 40.57(0.06) | 0.99(0.00) | 29.22(0.00) | |
| DPO-F&CF | 44.55(0.01) | 32.88(0.09) | 38.71(0.01) | 0.00(0.00) | 62.18(0.00) | 47.15(0.01) | 0.99(0.00) | 27.33(0.00) | |
| DPO-F&CF | 15.43(3.99) | 8.56(0.68) | 11.99(1.99) | 0.00(0.00) | 20.06(5.01) | 16.48(3.99) | 0.99(0.00) | 9.14(1.12) | |
| DPO+CCF | 17.12(1.78) | 8.06(0.87) | 12.59(1.26) | 0.00(0.00) | 23.48(4.74) | 17.88(1.72) | 0.99(0.00) | 10.34(0.75) | |
| SFT-OnlyF | 46.75(2.78) | 42.46(1.88) | 44.58(1.08) | 0.00(0.00) | 66.05(1.10) | 52.63(0.05) | 0.99(0.00) | 29.67(0.01) | |
| DPO-OnlyF | 46.75(2.78) | 48.58(1.68) | 47.57(2.22) | 0.00(0.00) | 66.05(1.47) | 52.63(0.05) | 0.99(0.00) | 29.67(0.03) | |
| DPO-F&CF | 39.50(0.00) | 61.68(4.62) | 50.59(1.15) | 0.00(0.00) | 73.43(1.47) | 39.61(0.00) | 0.99(0.00) | 28.26(0.03) | |
| DPO+CCF | 31.05(0.96) | 40.27(1.58) | 40.66(1.93) | 0.00(0.00) | 63.75(2.23) | 32.92(0.80) | 0.99(0.00) | 24.17(0.34) | |
| Base | 47.35(0.00) | 58.82(0.00) | 53.08(0.00) | 28.77(0.00) | 50.11(0.00) | 25.69(0.00) | 22.44(0.30) | 31.76(0.00) | |
| SFT-OnlyF | 36.07(0.97) | 65.96(1.23) | 51.01(0.36) | 21.32(0.09) | 54.11(0.22) | 26.60(2.27) | 9.97(0.25) | 28.00(13) | |
| DPO-OnlyF | 28.28(0.95) | 22.81(1.25) | 25.54(1.42) | 32.30(0.31) | 28.88(1.11) | 0.24(0.00) | 28.46(1.92) | 15.97(14) | |
| SFT-F&CF | 23.12(0.12) | 45.03(1.13) | 34.08(1.11) | 16.87(1.22) | 44.39(1.63) | 18.67(1.63) | 47.76(1.44) | 29.92(14) | |
| DPO-F&CF | 24.33(2.42) | 38.68(1.6)(1) | 33.45(1.47) | 30.27(1.35) | 61.14(1.45) | 37.44(1.46) | 21.18(1.63) | 15.32(39) | |
| DPO+CCF | 17.13(1.78) | 25.91(1.55) | 29.75(1.16) | 27.75(1.11) | 5.52(1.17) | 8.83(1.88) | 20.58(1.66) | 13.67(37) | |
| SFT-OnlyF | 37.87(8.81) | 62.75(15, 87) | 56.75(13, 37) | 23.38(11, 37) | 51.63(17, 37) | 26.27(12, 28) | 12.13(25, 25) | 28.35(14) | |
| DPO-OnlyF | 45.16(1, 37) | 62.75(15, 37) | 56.36(11, 37) | 23.38(11, 37) | 51.63(17, 37) | 26.27(12, 28) | 12.13(25, 25) | 28.35(14) | |
| DPO+CCF | 35.67(1, 37) | 62.75(15, 37) | 56.36(11, 37) | 23.38(11, 37) | 51.63(17, 37) | 26.27(12, 28) | 12.13(25, 25) | 28.35(13) | |
| Scenario | Alg. | Correctness: F-ER | C-ER | Avg-ER | Causal Consistency: N-IR | S-IR | AN-IR | AS-IR | Avg-IR |
| Healthcare Breast Cancer Treatment | Base | 23.57(0.00) | 28.93(0.00) | 26.25(0.00) | 20.62(0.00) | 4.34(0.00) | 11.47(0.00) | 32.01(0.00) | 17.11(0.00) | |
| In-domain | SFT-OnlyFx2 | 3.09(0.01) | 21.41(0.01) | 12.25(0.01) | 2.39(0.01) | 1.15(0.00) | 2.82(0.01) | 6.86(0.01) | ||
| DPO-OnlyFx2 | 5.18(0.32) | 24.70(0.61) | 14.94(0.45) | 4.21(0.40) | 2.75(0.05) | 5.09(0.28) | 8.43(0.19) | |||
| Com-Cause | SFT-OnlyFx2 | 31.45(1.57) | 26.48(0.66) | 28.97(1.01) | 24.84(0.79) | 7.86(0.21) | 9.75(0.21) | 19.22(0.35) | ||
| DPO-OnlyFx2 | 76.27(4.04) | 68.74(0.09) | 72.51(0.99) | 65.14(3.45) | 13.27(0.06) | 14.69(0.00) | 40.88(0.55) | |||
| Com-Effect | SFT-OnlyFx2 | 2.58(0.01) | 21.05(0.03) | 11.82(0.01) | 2.01(0.01) | 1.01(0.00) | 2.58(0.02) | 6.54(0.01) | ||
| DPO-OnlyFx2 | 2.23(0.02) | 22.43(0.03) | 12.33(0.03) | 1.23(0.01) | 1.80(0.02) | 3.44(0.03) | 6.56(0.01) | |||
| Deductive (Cause-Based) | SFT-OnlyFx2 | 2.26(0.00) | 22.26(0.02) | 12.26(0.01) | 1.95(0.00) | 0.63(0.00) | 3.10(0.02) | 6.65(0.00) | ||
| DPO-OnlyFx2 | 3.57(0.03) | 22.99(0.02) | 13.28(0.00) | 1.66(0.01) | 2.67(0.02) | 4.44(0.01) | 7.27(0.00) | |||
| Engineering Tx. Line Protection | Base | 16.39(0.00) | 27.10(0.00) | 21.75(0.00) | 13.70(0.00) | 27.52(0.00) | 2.71(0.00) | 13.96(0.00) | 14.47(0.00) | |
| In-domain | SFT-OnlyFx2 | 0.44(0.00) | 31.65(0.02) | 16.05(0.00) | 5.59(0.01) | 26.40(0.00) | 0.07(0.00) | 8.11(0.00) | ||
| DPO-OnlyFx2 | 4.21(0.06) | 30.13(0.00) | 17.17(0.01) | 5.54(0.00) | 26.77(0.01) | 0.92(0.03) | 9.18(0.01) | |||
| Com-Cause | SFT-OnlyFx2 | 16.35(0.00) | 35.69(0.00) | 26.02(0.00) | 15.88(0.00) | 26.86(0.00) | 0.47(0.00) | 14.82(0.00) | ||
| DPO-OnlyFx2 | 13.52(0.02) | 33.53(0.01) | 23.53(0.00) | 13.29(0.03) | 26.65(0.00) | 0.24(0.00) | 13.83(0.00) | |||
| Com-Effect | SFT-OnlyFx2 | 0.43(0.00) | 32.38(0.00) | 16.40(0.00) | 6.21(0.00) | 26.40(0.00) | 0.07(0.00) | 8.28(0.00) | ||
| DPO-OnlyFx2 | 3.01(0.09) | 29.61(0.09) | 16.31(0.03) | 5.13(0.08) | 26.69(0.00) | 0.44(0.00) | 8.74(0.02) | |||
| Inductive | SFT-OnlyFx2 | 0.94(0.00) | 32.55(0.00) | 16.75(0.00) | 6.51(0.00) | 26.32(0.00) | 0.43(0.00) | 8.50(0.00) | ||
| DPO-OnlyFx2 | 1.72(0.01) | 31.98(0.00) | 16.85(0.00) | 6.11(0.00) | 26.37(0.00) | 0.08(0.00) | 8.56(0.00) | |||
| Math Bench. S→T | Base | 47.35(0.00) | 26.32(0.00) | 36.84(0.00) | 0.00(0.00) | 61.50(0.00) | 48.24(0.00) | 0.00(0.00) | 27.44(0.00) | |
| In-domain | SFT-OnlyFx2 | 26.59(0.38) | 26.57(0.01) | 26.58(0.13) | 0.00(0.00) | 42.88(0.31) | 28.33(0.47) | 17.80(0.10) | ||
| DPO-OnlyFx2 | 24.47(0.41) | 48.01(1.30) | 36.24(0.61) | 0.00(0.00) | 59.29(0.82) | 28.08(0.39) | 21.84(0.14) | |||
| Inductive | SFT-OnlyFx2 | 22.30(0.08) | 28.04(0.03) | 25.17(0.03) | 0.00(0.00) | 40.63(0.06) | 23.93(0.09) | 16.14(0.02) | ||
| DPO-OnlyFx2 | 16.72(1.55) | 46.65(2.01) | 31.69(1.70) | 0.00(0.00) | 52.99(1.61) | 18.49(1.35) | 17.87(0.37) | |||
| Math Bench. S→R | Base | 50.25(0.00) | 51.11(0.00) | 50.68(0.00) | 0.00(0.00) | 67.30(0.00) | 58.65(0.00) | 0.00(0.00) | 31.49(0.00) | |
| In-domain | SFT-OnlyFx2 | 35.95(0.36) | 56.36(0.08) | 46.15(0.16) | 0.00(0.00) | 61.39(0.16) | 45.74(0.34) | 26.78(0.06) | ||
| DPO-OnlyFx2 | 39.06(0.84) | 60.64(1.60) | 49.85(0.56) | 0.00(0.00) | 69.24(0.39) | 44.19(1.08) | 0.00(0.00) | |||
| Deductive (Effect-Based) | SFT-OnlyFx2 | 45.97(0.01) | 45.91(0.21) | 45.94(0.04) | 0.00(0.00) | 66.49(0.05) | 50.64(0.07) | 0.00(0.00) | ||
| DPO-OnlyFx2 | 24.75(2.64) | 52.68(2.02) | 38.72(1.88) | 0.00(0.00) | 61.52(1.88) | 27.44(2.60) | 0.00(0.00) | |||
| Math Bench. R→T | Base | 47.35(0.00) | 58.82(0.00) | 53.08(0.00) | 28.77(0.00) | 50.11(0.00) | 25.69(0.00) | 22.44(0.00) | 31.76(0.00) | |
| In-domain | SFT-OnlyFx2 | 7.80(1.14) | 64.03(0.05) | 45.92(0.38) | 19.43(0.17) | 51.94(0.03) | 20.63(0.24) | 24.90(0.17) | ||
| DPO-OnlyFx2 | 24.47(0.41) | 24.54(0.18) | 24.50(0.08) | 29.80(0.21) | 7.40(0.44) | 1.74(0.06) | 15.55(0.04) | |||
| Deductive (Cause-Based) | SFT-OnlyFx2 | 22.84(0.09) | 59.90(0.07) | 41.37(0.07) | 17.28(0.01) | 49.03(0.05) | 17.49(0.03) | 5.84(0.03) | ||
| DPO-OnlyFx2 | 16.72(1.55) | 22.86(0.27) | 19.79(0.70) | 24.01(0.63) | 6.87(0.16) | 1.15(0.01) | 15.90(1.59) | |||
| Mode | Metric | Base | OnlyF (SFT) | OnlyF (DPO) | OnlyFx2 (SFT) | OnlyFx2 (DPO) | F&CF (SFT, Ours) | F&CF (DPO, Ours) | F&CF (DPO+CCF, Ours) |
| In-Domain | Avg-ER | 1.00 | 0.82 | 0.86 | 0.74 | 0.76 | 0.42 | 0.53 | 0.48 |
| Avg-IR | 1.00 | 0.73 | 0.72 | 0.65 | 0.66 | 0.44 | 0.51 | 0.47 | |
| Common-Cause | Avg-ER | 1.00 | 1.35 | 1.75 | 1.14 | 1.92 | 1.17 | 1.62 | 2.04 |
| Avg-IR | 1.00 | 1.30 | 1.49 | 1.07 | 1.67 | 1.14 | 1.42 | 1.80 | |
| Common-Effect | Avg-ER | 1.00 | 0.60 | 0.71 | 0.60 | 0.60 | 0.53 | 0.66 | 0.64 |
| Avg-IR | 1.00 | 0.49 | 0.60 | 0.47 | 0.49 | 0.42 | 0.53 | 0.50 | |
| Inductive | Avg-ER | 1.00 | 0.86 | 0.79 | 0.72 | 0.81 | 0.60 | 0.58 | 0.69 |
| Avg-IR | 1.00 | 0.74 | 0.62 | 0.58 | 0.62 | 0.51 | 0.50 | 0.59 | |
| Deductive, Cause-B. | Avg-ER | 1.00 | 0.70 | 0.67 | 0.62 | 0.43 | 0.61 | 0.58 | 0.57 |
| Avg-IR | 1.00 | 0.62 | 0.59 | 0.54 | 0.40 | 0.53 | 0.53 | 0.51 | |
| Deductive, Effect-B. | Avg-ER | 1.00 | 0.83 | 0.83 | 0.95 | 0.78 | 0.90 | 0.95 | 0.77 |
| Avg-IR | 1.00 | 0.91 | 0.87 | 0.82 | 0.73 | 0.93 | 0.89 | 0.76 | |
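The relative scores in the table above are raw Avg-ER/Avg-IR values divided by the Base model's score, so Base is always 1.00 and values below 1.00 indicate fewer errors or inconsistencies. A minimal sketch of this normalization (the input numbers are illustrative, not taken from the paper):

```python
def normalize(scores, base_key="Base"):
    # Divide every raw score by the Base model's score, as in the table:
    # Base maps to 1.00, and lower ratios mean improvement (lower is better).
    base = scores[base_key]
    return {k: round(v / base, 2) for k, v in scores.items()}

# Illustrative raw Avg-ER values for one scenario.
avg_er = {"Base": 26.25, "SFT-F&CF": 11.03, "DPO+CCF": 12.60}
ratios = normalize(avg_er)
```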
| Benchmark | Phi-3-Mini-128K-Ins | Gemma-7B | Mistral-7B | Mixtral-8x7B | Llama-3-8B-Ins | GPT3.5-Turbo-1106 |
| ARC Challenge 10-shot | 85.5 | 78.3 | 78.6 | 87.3 | 82.8 | 87.4 |
| BoolQ 0-shot | 77.1 | 66 | 72.2 | 76.6 | 80.9 | 79.1 |
| MedQA 2-shot | 56.4 | 49.6 | 50 | 62.2 | 60.5 | 63.4 |
| OpenBookQA 10-shot | 78.8 | 78.6 | 79.8 | 85.8 | 82.6 | 86 |
| PIQA 5-shot | 80.1 | 78.1 | 77.7 | 86 | 75.7 | 86.6 |
| GPQA 0-shot | 29.7 | 2.9 | 15 | 6.9 | 32.4 | 29.9 |
| Social IQA 5-shot | 74.7 | 65.5 | 74.6 | 75.9 | 73.9 | 68.3 |
| TruthfulQA (MC2) 10-shot | 64.8 | 52.1 | 53 | 60.1 | 63.2 | 67.7 |
| WinoGrande 5-shot | 71.0 | 55.6 | 54.2 | 62 | 65 | 68.8 |
| Generalization Mode | Healthcare | Engineering | Math Benchmarking | ||
| S→T | S→R | R→T | |||
| In-Domain | 100 | 100 | 100 | 100 | 100 |
| Common-Cause | 100 | 100 | (n/a) | (n/a) | (n/a) |
| Common-Effect | 100 | 100 | (n/a) | (n/a) | (n/a) |
| Inductive | (n/a) | 200 | 200 | (n/a) | (n/a) |
| Deductive (Cause-Based) | 400 | (n/a) | (n/a) | (n/a) | 200 |
| Deductive (Effect-Based) | (n/a) | (n/a) | (n/a) | 200 | (n/a) |
| Config | #Layers | Hidden dim | #Heads |
| B/2 | 12 | 768 | 12 |
| L/2 | 24 | 1024 | 16 |
| XL/2 | 28 | 1152 | 16 |
| Iter. | Target Repr. | Depth | Objective | FID↓ | sFID↓ | IS↑ | Pre.↑ | Rec.↑ | Acc.↑ |
| 400K | Vanilla SiT-L/2 (Ma et al., 2024a) | - | - | 18.8 | 5.29 | 72.0 | 0.64 | 0.64 | N/A |
| 400K | MAE-L | 8 | NT-Xent | 12.5 | 4.89 | 90.7 | 0.68 | 0.63 | 57.3 |
| 400K | DINO-B | 8 | NT-Xent | 11.9 | 5.00 | 92.9 | 0.68 | 0.63 | 59.3 |
| 400K | MoCov3-L | 8 | NT-Xent | 11.9 | 5.06 | 92.2 | 0.68 | 0.64 | 63.0 |
| 400K | I-JEPA-H | 8 | NT-Xent | 11.6 | 5.21 | 98.0 | 0.68 | 0.64 | 62.1 |
| 400K | CLIP-L | 8 | NT-Xent | 11.0 | 5.25 | 100.4 | 0.67 | 0.66 | 67.2 |
| 400K | SigLIP-L | 8 | NT-Xent | 10.2 | 5.15 | 107.0 | 0.69 | 0.64 | 68.8 |
| 400K | DINOv2-L | 8 | NT-Xent | 10.0 | 5.09 | 106.6 | 0.68 | 0.65 | 68.1 |
| 400K | DINOv2-B | 8 | NT-Xent | 9.7 | 5.13 | 107.5 | 0.69 | 0.64 | 65.7 |
| 400K | DINOv2-L | 8 | NT-Xent | 10.0 | 5.09 | 106.6 | 0.68 | 0.65 | 68.1 |
| 400K | DINOv2-g | 8 | NT-Xent | 9.8 | 5.22 | 108.9 | 0.69 | 0.64 | 65.7 |
| 400K | DINOv2-L | 6 | NT-Xent | 10.3 | 5.23 | 106.5 | 0.69 | 0.65 | 66.2 |
| 400K | DINOv2-L | 8 | NT-Xent | 10.0 | 5.09 | 106.6 | 0.68 | 0.65 | 68.1 |
| 400K | DINOv2-L | 10 | NT-Xent | 10.5 | 5.50 | 105.0 | 0.68 | 0.65 | 68.6 |
| 400K | DINOv2-L | 12 | NT-Xent | 11.2 | 5.14 | 100.2 | 0.68 | 0.64 | 69.4 |
| 400K | DINOv2-L | 14 | NT-Xent | 11.6 | 5.61 | 99.5 | 0.67 | 0.65 | 70.0 |
| 400K | DINOv2-L | 16 | NT-Xent | 12.1 | 5.34 | 96.1 | 0.67 | 0.64 | 71.1 |
| 400K | DINOv2-L | 8 | NT-Xent | 10.0 | 5.09 | 106.6 | 0.68 | 0.65 | 68.1 |
| 400K | DINOv2-L | 8 | Cos. sim. | 9.9 | 5.34 | 111.9 | 0.68 | 0.65 | 68.2 |
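The rows above compare NT-Xent against plain cosine similarity as the alignment objective. A minimal NumPy sketch of NT-Xent between two feature sets, where the positive pair for row i of one view is row i of the other view; the temperature and feature shapes are our assumptions, not values from the paper:

```python
import numpy as np

def nt_xent(h, z, tau=0.5):
    # L2-normalize both sets of features so logits are cosine similarities.
    h = h / np.linalg.norm(h, axis=1, keepdims=True)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    logits = h @ z.T / tau  # pairwise similarities scaled by temperature
    # Cross-entropy where the positive for h_i is z_i (the diagonal).
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
h = rng.normal(size=(8, 16))
# Nearly identical views should score a lower loss than random pairings.
aligned = nt_xent(h, h + 0.01 * rng.normal(size=(8, 16)))
mismatched = nt_xent(h, rng.normal(size=(8, 16)))
```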
| Model | #Params | Iter. | FID↓ |
| DiT-L/2 | 458M | 400K | 23.3 |
| + REPA (ours) | 458M | 400K | 15.6 |
| DiT-XL/2 | 675M | 400K | 19.5 |
| + REPA (ours) | 675M | 400K | 12.3 |
| DiT-XL/2 | 675M | 7M | 9.6 |
| + REPA (ours) | 675M | 850K | 9.6 |
| SiT-B/2 | 130M | 400K | 33.0 |
| + REPA (ours) | 130M | 400K | 24.4 |
| SiT-L/2 | 458M | 400K | 18.8 |
| + REPA (ours) | 458M | 400K | 9.7 |
| + REPA (ours) | 458M | 700K | 8.4 |
| SiT-XL/2 | 675M | 400K | 17.2 |
| + REPA (ours) | 675M | 150K | 13.6 |
| SiT-XL/2 | 675M | 7M | 8.3 |
| + REPA (ours) | 675M | 400K | 7.9 |
| + REPA (ours) | 675M | 1M | 6.4 |
| + REPA (ours) | 675M | 4M | 5.9 |
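The table implies large iteration savings: DiT-XL/2 needs 7M iterations to reach FID 9.6, while the REPA variant matches that FID at 850K, and SiT-XL/2's 7M-iteration FID of 8.3 is beaten by REPA's 7.9 at 400K. A quick check of the implied speedups:

```python
# DiT-XL/2 reaches FID 9.6 at 7M iterations; + REPA matches it at 850K.
dit_speedup = 7_000_000 / 850_000
assert round(dit_speedup, 1) == 8.2

# SiT-XL/2 reaches FID 8.3 at 7M iterations; + REPA reports 7.9 at 400K.
sit_speedup = 7_000_000 / 400_000
assert sit_speedup == 17.5
```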
| Model | Epochs | FID↓ | sFID↓ | IS↑ | Pre.↑ | Rec.↑ |
| Pixel diffusion | ||||||
| ADM-U | 400 | 3.94 | 6.14 | 186.7 | 0.82 | 0.52 |
| VDM++ | 560 | 2.40 | - | 225.3 | - | - |
| Simple diffusion | 800 | 2.77 | - | 211.8 | - | - |
| CDM | 2160 | 4.88 | - | 158.7 | - | - |
| Latent diffusion, U-Net | ||||||
| LDM-4 | 200 | 3.60 | - | 247.7 | 0.87 | 0.48 |
| Latent diffusion, Transformer + U-Net hybrid | ||||||
| U-ViT-H/2 | 240 | 2.29 | 5.68 | 263.9 | 0.82 | 0.57 |
| DiffiT* | - | 1.73 | - | 276.5 | 0.80 | 0.62 |
| MDTv2-XL/2* | 1080 | 1.58 | 4.52 | 314.7 | 0.79 | 0.65 |
| Latent diffusion, Transformer | ||||||
| MaskDiT | 1600 | 2.28 | 5.67 | 276.6 | 0.80 | 0.61 |
| SD-DiT | 480 | 3.23 | - | - | - | - |
| DiT-XL/2 | 1400 | 2.27 | 4.60 | 278.2 | 0.83 | 0.57 |
| SiT-XL/2 | 1400 | 2.06 | 4.50 | 270.3 | 0.82 | 0.59 |
| + REPA (ours) | 200 | 1.96 | 4.49 | 264.0 | 0.82 | 0.60 |
| + REPA (ours) | 800 | 1.80 | 4.50 | 284.0 | 0.81 | 0.61 |
| + REPA (ours)* | 800 | 1.42 | 4.70 | 305.7 | 0.80 | 0.65 |
| λ | 0.25 | 0.5 | 0.75 | 1.0 |
| FID↓ | 8.6 | 7.9 | 7.8 | 7.8 |
| IS↑ | 118.6 | 122.6 | 124.4 | 124.8 |
| Method | Dataset | w/ ImageNet-1K | Text sup. | FID↓ |
| MAE | ImageNet-1K | O | X | 12.5 |
| DINO | ImageNet-1K | O | X | 11.9 |
| MoCov3 | ImageNet-1K | O | X | 11.9 |
| I-JEPA | ImageNet-1K | O | X | 11.6 |
| CLIP | WIT-400M | X | O | 11.0 |
| SigLIP | WebLi-4B | X | O | 10.2 |
| DINOv2 | LVD-142M | O | X | 10.0 |
| Figure 3 | Table 3 (SiT-B) | Table 3 (SiT-L) | Table 3 (SiT-XL) | Table 4 | |
| Architecture | |||||
| Input dim. | 32×32×4 | 32×32×4 | 32×32×4 | 32×32×4 | 32×32×4 |
| Num. layers | 28 | 12 | 24 | 28 | 28 |
| Hidden dim. | 1,152 | 768 | 1,024 | 1,152 | 1,152 |
| Num. heads | 16 | 12 | 16 | 16 | 16 |
| REPA | |||||
| λ | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 |
| Alignment depth | 8 | 4 | 8 | 8 | 8 |
| sim(·,·) | cos. sim. | cos. sim. | NT-Xent | cos. sim. | cos. sim. |
| Encoder f(x) | DINOv2-B | DINOv2-B | DINOv2-L | DINOv2-B | DINOv2-B |
| Optimization | |||||
| Training iteration | 1M | 400K | 700K | 4M | 4M |
| Batch size | 256 | 256 | 256 | 256 | 256 |
| Optimizer | AdamW | AdamW | AdamW | AdamW | AdamW |
| lr | 0.0001 | 0.0001 | 0.0001 | 0.0001 | 0.0001 |
| (β1,β2) | (0.9,0.999) | (0.9,0.999) | (0.9,0.999) | (0.9,0.999) | (0.9,0.999) |
| Interpolants | |||||
| αt | 1-t | 1-t | 1-t | 1-t | 1-t |
| σt | t | t | t | t | t |
| wt | σt | σt | σt | σt | σt |
| Training objective | v-prediction | v-prediction | v-prediction | v-prediction | v-prediction |
| Sampler | Euler-Maruyama | Euler-Maruyama | Euler-Maruyama | Euler-Maruyama | Euler-Maruyama |
| Sampling steps | 250 | 250 | 250 | 250 | 250 |
| Guidance | - | - | - | - | 1.35 |
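The interpolant rows above (αt = 1−t, σt = t, wt = σt, Euler-Maruyama sampling) can be sketched as follows. The drift function here is a stand-in closure, not the trained v-prediction network, and the step omits any score-based correction the actual reverse SDE drift may include:

```python
import numpy as np

def interpolant(x0, eps, t):
    # Linear interpolant from the table: x_t = alpha_t * x0 + sigma_t * eps,
    # with alpha_t = 1 - t and sigma_t = t.
    return (1.0 - t) * x0 + t * eps

def euler_maruyama_step(x, t, dt, drift, w, rng):
    # One Euler-Maruyama update for an SDE of the form
    # dX = b(X, t) dt + sqrt(2 w_t) dW, with diffusion coefficient w_t.
    noise = rng.normal(size=x.shape)
    return x + drift(x, t) * dt + np.sqrt(2.0 * w(t) * abs(dt)) * noise

# Deterministic check: with zero diffusion this reduces to a plain Euler step.
rng = np.random.default_rng(0)
x = np.ones(4)
x_next = euler_maruyama_step(
    x, t=1.0, dt=-0.1, drift=lambda x, t: -x, w=lambda t: 0.0, rng=rng
)
```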
| Model | #Params | Iter. | FID↓ | sFID↓ | IS↑ | Prec.↑ | Rec.↑ | Acc.↑ |
| SiT-B/2 (Ma et al., 2024a) | 130M | 400K | 33.0 | 6.46 | 43.7 | 0.53 | 0.63 | N/A |
| + REPA (ours) | 130M | 50K | 78.2 | 11.71 | 17.1 | 0.33 | 0.48 | 43.2 |
| + REPA (ours) | 130M | 100K | 49.5 | 7.00 | 27.5 | 0.46 | 0.59 | 50.9 |
| + REPA (ours) | 130M | 200K | 33.2 | 6.68 | 43.7 | 0.54 | 0.63 | 50.9 |
| + REPA (ours) | 130M | 400K | 24.4 | 6.40 | 59.9 | 0.59 | 0.65 | 61.2 |
| SiT-L/2 (Ma et al., 2024a) | 458M | 400K | 18.8 | 5.29 | 72.0 | 0.64 | 0.64 | N/A |
| + REPA (ours) | 458M | 50K | 55.4 | 24.0 | 23.0 | 0.43 | 0.53 | 55.3 |
| + REPA (ours) | 458M | 100K | 24.1 | 6.25 | 55.7 | 0.62 | 0.60 | 61.8 |
| + REPA (ours) | 458M | 200K | 14.0 | 5.18 | 86.5 | 0.67 | 0.64 | 66.3 |
| + REPA (ours) | 458M | 400K | 10.0 | 5.20 | 109.2 | 0.69 | 0.65 | 69.4 |
| SiT-XL/2 (Ma et al., 2024a) | 675M | 7M | 8.3 | 6.32 | 131.7 | 0.68 | 0.67 | N/A |
| + REPA (ours) | 675M | 50K | 52.3 | 31.24 | 24.3 | 0.45 | 0.53 | 56.1 |
| + REPA (ours) | 675M | 100K | 19.4 | 6.06 | 67.4 | 0.64 | 0.61 | 62.9 |
| + REPA (ours) | 675M | 200K | 11.1 | 5.05 | 100.4 | 0.69 | 0.64 | 67.3 |
| + REPA (ours) | 675M | 400K | 7.9 | 5.06 | 122.6 | 0.70 | 0.65 | 70.3 |
| + REPA (ours) | 675M | 4M | 5.9 | 5.73 | 157.8 | 0.70 | 0.69 | 74.6 |
| Model | #Params | Iter. | w | FID↓ | sFID↓ | IS↑ | Prec.↑ | Rec.↑ |
| SiT-XL/2 (Ma et al., 2024a) | 675M | 7M | 1.500 | 2.06 | 4.50 | 270.3 | 0.82 | 0.59 |
| + REPA (ours) | 675M | 4M | 1.300 | 1.80 | 4.55 | 268.6 | 0.80 | 0.63 |
| + REPA (ours) | 675M | 4M | 1.325 | 1.79 | 4.51 | 276.8 | 0.81 | 0.62 |
| + REPA (ours) | 675M | 4M | 1.350 | 1.80 | 4.50 | 284.0 | 0.81 | 0.61 |
| + REPA (ours) | 675M | 4M | 1.375 | 1.84 | 4.48 | 291.7 | 0.82 | 0.61 |
| + REPA (ours) | 675M | 4M | 1.400 | 1.90 | 4.48 | 297.5 | 0.82 | 0.60 |
| Model | #Params | Iter. | Interval | w | FID↓ | sFID↓ | IS↑ | Prec.↑ | Rec.↑ |
| SiT-XL/2 (Ma et al., 2024a) | 675M | 7M | [0, 1] | 1.50 | 2.06 | 4.50 | 270.3 | 0.82 | 0.59 |
| + REPA (ours) | 675M | 4M | [0, 0.8] | 2.00 | 2.23 | 4.40 | 360.9 | 0.84 | 0.60 |
| + REPA (ours) | 675M | 4M | [0, 0.75] | 2.00 | 1.78 | 4.50 | 346.2 | 0.82 | 0.62 |
| + REPA (ours) | 675M | 4M | [0, 0.7] | 2.00 | 1.48 | 4.67 | 324.0 | 0.82 | 0.62 |
| + REPA (ours) | 675M | 4M | [0, 0.65] | 2.00 | 1.44 | 4.88 | 308.8 | 0.79 | 0.65 |
| + REPA (ours) | 675M | 4M | [0, 0.6] | 2.00 | 1.56 | 5.11 | 290.7 | 0.78 | 0.66 |
| + REPA (ours) | 675M | 4M | [0, 0.7] | 1.90 | 1.45 | 4.68 | 317.6 | 0.80 | 0.64 |
| + REPA (ours) | 675M | 4M | [0, 0.7] | 1.80 | 1.42 | 4.70 | 305.7 | 0.80 | 0.64 |
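The interval ablation above applies classifier-free guidance only while the sampling time t lies inside a sub-interval of [0, 1], with guidance scale w. A minimal sketch of that sampling rule (function and argument names are illustrative, not from the paper):

```python
def guided_velocity(v_cond, v_uncond, t, w, interval=(0.0, 0.7)):
    """Classifier-free guidance restricted to a time interval.

    Outside the interval, the conditional velocity is used unmodified;
    inside it, the usual CFG combination with guidance scale w applies.
    """
    lo, hi = interval
    if lo <= t <= hi:
        return v_uncond + w * (v_cond - v_uncond)
    return v_cond
```

With `interval=(0, 1)` this reduces to standard classifier-free guidance, matching the baseline row of the table.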
| Model | Epochs | FID↓ | sFID↓ | IS↑ | Pre.↑ | Rec.↑ |
| Pixel diffusion | ||||||
| VDM++ | - | 2.65 | - | 278.1 | - | - |
| ADM-G, ADM-U | 400 | 2.85 | 5.86 | 221.7 | 0.84 | 0.53 |
| Simple diffusion (U-Net) | 800 | 4.28 | - | 171.0 | - | - |
| Simple diffusion (U-ViT, L) | 800 | 4.53 | - | 205.3 | - | - |
| Latent diffusion, Transformer | ||||||
| MaskDiT | 800 | 2.50 | 5.10 | 256.3 | 0.83 | 0.56 |
| DiT-XL/2 | 600 | 3.04 | 5.02 | 240.8 | 0.84 | 0.54 |
| SiT-XL/2 | 600 | 2.62 | 4.18 | 252.2 | 0.84 | 0.57 |
| + REPA (ours) | 80 | 2.44 | 4.21 | 247.3 | 0.84 | 0.56 |
| + REPA (ours) | 100 | 2.32 | 4.16 | 255.7 | 0.84 | 0.56 |
| + REPA (ours) | 200 | 2.08 | 4.19 | 274.6 | 0.83 | 0.58 |
| Method | Type | FID |
| AttnGAN (Xu et al., 2018) | GAN | 35.49 |
| DM-GAN (Zhu et al., 2019) | GAN | 32.64 |
| VQ-Diffusion (Gu et al., 2022) | Discrete Diffusion | 19.75 |
| DF-GAN (Tao et al., 2022) | GAN | 19.32 |
| XMC-GAN (Zhang et al., 2021) | GAN | 9.33 |
| Frido (Fan et al., 2023) | Diffusion | 8.97 |
| LAFITE (Zhou et al., 2021) | GAN | 8.12 |
| U-Net (Bao et al., 2023) | Diffusion | 7.32 |
| U-ViT-S/2 (Bao et al., 2023) | Diffusion | 5.95 |
| U-ViT-S/2 (Deep) (Bao et al., 2023) | Diffusion | 5.48 |
| MMDiT (ODE; NFE=50) | Diffusion | 6.05 |
| MMDiT+REPA (ODE; NFE=50) | Diffusion | 4.73 |
| MMDiT (SDE; NFE=250) | Diffusion | 5.30 |
| MMDiT+REPA (SDE; NFE=250) | Diffusion | 4.14 |
| Approach | Training Metrics | Validation Metrics | ||||
| Codebook Usage (↑) | Rec. Loss (↓) | Quantization Error (↓) | Rec. Loss (↓) | r-FID (↓) | r-IS (↑) | |
| Codebook Lookup: Euclidean, Latent Shape: 32 × 32 × 32, Codebook Size: 1024 | ||||||
| VQ-VAE | 100% | 0.107 | 5.9e-3 | 0.115 | 106.1 | 11.7 |
| VQ-VAE w/ Rotation Trick | 97% | 0.116 | 5.1e-4 | 0.122 | 85.7 | 17.0 |
| Codebook Lookup: Cosine, Latent Shape: 32 × 32 × 32, Codebook Size: 1024 | ||||||
| VQ-VAE | 75% | 0.107 | 2.9e-3 | 0.114 | 84.3 | 17.7 |
| VQ-VAE w/ Rotation Trick | 91% | 0.105 | 2.7e-3 | 0.111 | 82.9 | 18.1 |
| Codebook Lookup: Euclidean, Latent Shape: 64 × 64 × 3, Codebook Size: 8192 | ||||||
| VQ-VAE | 100% | 0.028 | 1.0e-3 | 0.030 | 19.0 | 97.3 |
| Gumbel VQ-VAE | 39% | 0.054 | — | 0.058 | 28.6 | 74.9 |
| VQ-VAE w/ Hessian Approx. | 39% | 0.082 | 6.9e-5 | 0.112 | 35.6 | 65.1 |
| VQ-VAE w/ Exact Gradients | 84% | 0.050 | 2.0e-3 | 0.053 | 25.4 | 80.4 |
| VQ-VAE w/ Rotation Trick | 99% | 0.028 | 1.4e-4 | 0.030 | 16.5 | 106.3 |
| Codebook Lookup: Cosine, Latent Shape: 64 × 64 × 3, Codebook Size: 8192 | |||||
| VQ-VAE | 31% | 0.034 | 1.2e-4 | 0.038 | 26.0 | 77.8 |
| VQ-VAE w/ Hessian Approx. | 37% | 0.035 | 3.8e-5 | 0.037 | 29.0 | 71.5 |
| VQ-VAE w/ Exact Gradients | 38% | 0.035 | 3.6e-5 | 0.037 | 28.2 | 75.0 |
| VQ-VAE w/ Rotation Trick | 38% | 0.033 | 9.6e-5 | 0.035 | 24.2 | 83.9 |
| Approach | Dataset | Codebook Usage | Quantization Error (↓) | Valid Loss (↓) | r-FID (↓) | r-IS (↑) |
| VQGAN (reported) | ImageNet | — | — | — | 7.9 | 114.4 |
| VQGAN (our run) | ImageNet | 95% | 0.134 | 0.594 | 7.3 | 118.2 |
| VQGAN w/ Rotation Trick | ImageNet | 98% | 0.002 | 0.422 | 4.6 | 146.5 |
| VQGAN | FFHQ & CelebA-HQ | 27% | 0.233 | 0.565 | 4.7 | 5.0 |
| VQGAN w/ Rotation Trick | FFHQ & CelebA-HQ | 99% | 0.002 | 0.313 | 3.7 | 5.2 |
| Approach | Latent Shape | Codebook Size | Codebook Usage | Quantization Error (↓) | Valid Loss (↓) | r-FID (↓) | r-IS (↑) |
| VQGAN | 64 × 64 × 3 | 8192 | 15% | 2.5e-3 | 0.183 | 0.53 | 220.6 |
| Gumbel VQGAN | 64 × 64 × 3 | 8192 | 4% | — | 0.197 | 0.60 | 219.7 |
| VQGAN w/ Rotation Trick | 64 × 64 × 3 | 8192 | 86% | 1.7e-4 | 0.142 | 0.27 | 228.0 |
| VQGAN | 32 × 32 × 4 | 16384 | 2% | 1.2e-2 | 0.385 | 5.0 | 141.5 |
| Gumbel VQGAN | 32 × 32 × 4 | 16384 | 12% | — | 0.3031 | 1.7 | 189.5 |
| VQGAN w/ Rotation Trick | 32 × 32 × 4 | 16384 | 27% | 2.4e-4 | 0.269 | 1.1 | 200.2 |
| Approach | Codebook Usage (↑) | Train Loss (↓) | Quantization Error (↓) | Valid Loss (↓) | r-FID (↓) | r-IS (↑) |
| ViT-VQGAN [reported] | — | — | — | — | 22.8 | 72.9 |
| ViT-VQGAN [ours] | 0.3% | 0.124 | 6.7e-3 | 0.127 | 29.2 | 43.0 |
| ViT-VQGAN w/ Rotation Trick | 2.2% | 0.113 | 8.3e-3 | 0.113 | 11.2 | 93.1 |
| Approach | Dataset | Codebook Usage | Train Loss (↓) | Quantization Error (↓) | Valid Loss (↓) | r-FVD (↓) |
| TimeSformer† | BAIR | 0.4% | 0.221 | 0.03 | 0.28 | 1661.1 |
| TimeSformer w/ Rotation Trick | BAIR | 43% | 0.074 | 3.0e-3 | 0.074 | 21.4 |
| TimeSformer† | UCF-101 | 0.1% | 0.190 | 0.006 | 0.169 | 2878.1 |
| TimeSformer w/ Rotation Trick | UCF-101 | 30% | 0.111 | 0.020 | 0.109 | 229.1 |
| Rotation Trick Function | Training Metrics | Validation Metrics | ||||
| Codebook Usage (↑) | Rec. Loss (↓) | Quantization Error (↓) | Rec. Loss (↓) | r-FID (↓) | r-IS (↑) | |
| Codebook Lookup: Euclidean, Latent Shape: 64 × 64 × 3, Codebook Size: 8192 | ||||||
| ||q||Re | 99% | 0.028 | 1.4e-4 | 0.030 | 16.5 | 106.3 |
| ||e||Re | 100% | 0.028 | 4.0e-4 | 0.030 | 16.5 | 106.1 |
| Rotation Trick Function | Latent Shape | Codebook Size | Codebook Usage | Quantization Error (↓) | Valid Loss (↓) | r-FID (↓) | r-IS (↑) |
| ||q||Re | 64 × 64 × 3 | 8192 | 45% | 4.0e-4 | 0.161 | 0.46 | 225.0 |
| Re - (q - Re) | 64 × 64 × 3 | 8192 | 28% | 1.5e-3 | 0.183 | 0.6 | 220.0 |
| ||q||Re | 32 × 32 × 4 | 16384 | 18% | 3.3e-4 | 0.292 | 1.5 | 196.1 |
| Re - (q - Re) | 32 × 32 × 4 | 16384 | 13% | 9.4e-4 | 0.292 | 1.5 | 191.5 |
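The "Rotation Trick" rows above map the encoder output e onto its codebook vector q through a rotation and rescaling that are held constant in the backward pass, instead of the straight-through estimator. A NumPy sketch of the forward transform, assuming the standard two-reflection construction R = I − 2rrᵀ + 2q̂êᵀ with r = (ê + q̂)/‖ê + q̂‖ (a sketch, not the authors' exact implementation):

```python
import numpy as np

def rotation_trick_forward(e, q, eps=1e-8):
    """Map encoder output e onto codebook vector q via rotation + rescaling.

    In training, R and the norm ratio would be treated as constants
    (detached), so gradients flow through e via the fixed rotation.
    """
    e_hat = e / (np.linalg.norm(e) + eps)
    q_hat = q / (np.linalg.norm(q) + eps)
    r = e_hat + q_hat
    r = r / (np.linalg.norm(r) + eps)
    # R rotates e_hat onto q_hat:  R = I - 2 r r^T + 2 q_hat e_hat^T
    R = np.eye(e.shape[0]) - 2.0 * np.outer(r, r) + 2.0 * np.outer(q_hat, e_hat)
    scale = np.linalg.norm(q) / (np.linalg.norm(e) + eps)
    return scale * (R @ e)  # numerically equal to q in the forward pass
```

The forward output matches vanilla vector quantization; only the gradient path differs, which is what the codebook-usage and quantization-error columns probe.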
| Cosine Similarity Lookup | Euclidean Lookup | |||
| (1024, 32) | (8192, 3) | (1024, 32) | (8192, 3) | |
| Input size | 256 × 256 × 3 | 256 × 256 × 3 | 256 × 256 × 3 | 256 × 256 × 3 |
| Latent size | 16 × 16 × 32 | 64 × 64 × 3 | 16 × 16 × 32 | 64 × 64 × 3 |
| β (commitment loss coefficient) | 1.0 | 1.0 | 1.0 | 1.0 |
| encoder/decoder channels | 128 | 128 | 128 | 128 |
| encoder/decoder channel mult. | [1, 1, 2, 2, 4] | [1, 2, 4] | [1, 1, 2, 2, 4] | [1, 2, 4] |
| [Effective] Batch size | 256 | 256 | 256 | 256 |
| Learning rate | 1 × 10⁻⁴ | 1 × 10⁻⁴ | 5 × 10⁻⁵ | 5 × 10⁻⁵ |
| Weight Decay | 1 × 10⁻⁴ | 1 × 10⁻⁴ | 0 | 0 |
| Codebook size | 1024 | 8192 | 1024 | 8192 |
| Codebook dimension | 32 | 3 | 32 | 3 |
| Training epochs | 25 | 20 | 25 | 20 |
| Table 2 VQGAN | Table 3 VQGAN | Table 3 VQGAN | |
| Input size | 256 × 256 × 3 | 256 × 256 × 3 | 256 × 256 × 3 |
| Latent size | 16 × 16 × 256 | 64 × 64 × 3 | 32 × 32 × 4 |
| Codebook weight | 1.0 | 1.0 | 1.0 |
| Discriminator weight | 0.8 | 0.75 | 0.6 |
| encoder/decoder channels | 128 | 128 | 128 |
| encoder/decoder channel mult. | [1,1,2,2,4] | [1,2,4] | [1,2,2,4] |
| [Effective] Batch size | 48 | 16 | 16 |
| [Effective] Learning rate | 4.5 × 10⁻⁶ | 4.5 × 10⁻⁶ | 4.5 × 10⁻⁶ |
| Codebook size | 1024 | 8192 | 16384 |
| Codebook dimensions | 256 | 3 | 4 |
| Training Epochs | 18/50† | 4 | 4 |
| ViT-VQGAN Settings | |
| Input size | 256 × 256 × 3 |
| Patch size | 8 |
| Encoder / Decoder Hidden Dim | 512 |
| Encoder / Decoder MLP Dim | 1024 |
| Encoder / Decoder Hidden Depth | 8 |
| Encoder / Decoder Hidden Num Heads | 8 |
| Codebook Dimension | 32 |
| Codebook Size | 8192 |
| Codebook Loss Coefficient | 1.0 |
| Log Laplace loss Coefficient | 0.0 |
| Log Gaussian Coefficient | 1.0 |
| Perceptual loss Coefficient | 0.1 |
| Adversarial loss Coefficient | 0.1 |
| [Effective] Batch size | 32 |
| Learning rate | 1 × 10⁻⁴ |
| Weight Decay | 1 × 10⁻⁴ |
| Training epochs | 10 |
| TimeSformer-VQGAN Settings | ||
| BAIR Robot Pushing | UCF101 Action Recognition | |
| Input size | 16 × 64 × 64 × 3 | 16 × 128 × 128 × 3 |
| Patch size | 2 | 4 |
| Encoder / Decoder Hidden Dim | 256 | 256 |
| Encoder / Decoder MLP Dim | 768 | 768 |
| Encoder / Decoder Hidden Depth | 8 | 8 |
| Encoder / Decoder Hidden Num Heads | 4 | 4 |
| Codebook Dimension | 32 | 32 |
| Codebook Size | 1024 | 2048 |
| Codebook Loss Coefficient | 1.0 | 1.0 |
| Log Laplace loss Coefficient | 0.0 | 0.0 |
| Log Gaussian Coefficient | 1.0 | 1.0 |
| Perceptual loss Coefficient | 0.1 | 0.1 |
| Adversarial loss Coefficient | 0.1 | 0.1 |
| [Effective] Batch size | 24 | 20 |
| Learning rate | 1 × 10⁻⁴ | 4.5 × 10⁻⁶ |
| Weight Decay | 1 × 10⁻⁴ | 1 × 10⁻⁴ |
| Training epochs | 30 | 3 |
| Randomized Split (MAE) | ||||||
| Bin | Feature: RDKit fingerprint | Feature: Avalon fingerprint | ||||
| Count (Ratio) | PharmHGT | SAM-DTA | Count (Ratio) | PharmHGT | SAM-DTA | |
| [0 , 1/3] | 8 (0.0092) | 1.7551 | 1.6787 | 0 (0.0000) | - | - |
| (1/3, 2/3] | 34 (0.0389) | 1.3214 | 1.0040 | 28 (0.0321) | 1.4646 | 1.3319 |
| (2/3, 1] | 831 (0.9519) | 0.6015 | 0.5743 | 845 (0.9679) | 0.6128 | 0.5770 |
| overall | 873 (1.0000) | 0.6401 | 0.6012 | 873 (1.0000) | 0.6401 | 0.6012 |
| Similarity Measure: Sokal similarity | Similarity Measure: Dice coefficient | |||||
| [0 , 1/3] | 33 (0.0378) | 1.5051 | 1.2444 | 0 (0.0000) | - | - |
| (1/3, 2/3] | 398 (0.4559) | 0.7066 | 0.6619 | 33 (0.0378) | 1.5051 | 1.2444 |
| (2/3, 1] | 442 (0.5063) | 0.5157 | 0.4985 | 840 (0.9622) | 0.6061 | 0.5759 |
| overall | 873 (1.0000) | 0.6401 | 0.6012 | 873 (1.0000) | 0.6401 | 0.6012 |
| Aggregation: Top-3 | Aggregation: Top-5 | |||||
| [0 , 1/3] | 17 (0.0195) | 1.5188 | 1.3149 | 24 (0.0275) | 1.5627 | 1.3469 |
| (1/3, 2/3] | 171 (0.1959) | 0.8890 | 0.7748 | 240 (0.2749) | 0.8014 | 0.7228 |
| (2/3, 1] | 685 (0.7847) | 0.5562 | 0.5401 | 609 (0.6976) | 0.5402 | 0.5239 |
| overall | 873 (1.0000) | 0.6401 | 0.6012 | 873 (1.0000) | 0.6401 | 0.6012 |
| Randomized Split (R²) | ||||||
| Bin | Feature: RDKit fingerprint | Feature: Avalon fingerprint | ||||
| Count (Ratio) | PharmHGT | SAM-DTA | Count (Ratio) | PharmHGT | SAM-DTA | |
| [0 , 1/3] | 8 (0.0092) | 0.1555 | 0.2579 | 0 (0.0000) | - | - |
| (1/3, 2/3] | 34 (0.0389) | -0.0371 | 0.3529 | 28 (0.0321) | 0.1028 | 0.2921 |
| (2/3, 1] | 831 (0.9519) | 0.6327 | 0.6706 | 845 (0.9679) | 0.6169 | 0.6672 |
| overall | 873 (1.0000) | 0.5928 | 0.6505 | 873 (1.0000) | 0.5928 | 0.6505 |
| Similarity Measure: Sokal similarity | Similarity Measure: Dice coefficient | |||||
| [0 , 1/3] | 33 (0.0378) | -0.1562 | 0.1866 | 0 (0.0000) | - | - |
| (1/3, 2/3] | 398 (0.4559) | 0.5412 | 0.6057 | 33 (0.0378) | -0.1562 | 0.1866 |
| (2/3, 1] | 442 (0.5063) | 0.6942 | 0.7156 | 840 (0.9622) | 0.6290 | 0.6711 |
| overall | 873 (1.0000) | 0.5928 | 0.6505 | 873 (1.0000) | 0.5928 | 0.6505 |
| Aggregation: Top-3 | Aggregation: Top-5 | |||||
| [0 , 1/3] | 17 (0.0195) | -0.6280 | -0.3148 | 24 (0.0275) | -0.2379 | 0.0891 |
| (1/3, 2/3] | 171 (0.1959) | 0.4283 | 0.5743 | 240 (0.2749) | 0.4756 | 0.5794 |
| (2/3, 1] | 685 (0.7847) | 0.6483 | 0.6712 | 609 (0.6976) | 0.6584 | 0.6804 |
| overall | 873 (1.0000) | 0.5928 | 0.6505 | 873 (1.0000) | 0.5928 | 0.6505 |
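The bins in these tables group test molecules by their similarity score to the training set into [0, 1/3], (1/3, 2/3], and (2/3, 1], then report the count and the error within each bin alongside the overall figure. A sketch of that bookkeeping for MAE (names are illustrative):

```python
import numpy as np

def binned_mae(similarity, y_true, y_pred):
    """Per-bin and overall MAE over three similarity bins, as in the tables."""
    s = np.asarray(similarity, dtype=float)
    err = np.abs(np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float))
    edges = [(0.0, 1 / 3), (1 / 3, 2 / 3), (2 / 3, 1.0)]
    rows = []
    for i, (lo, hi) in enumerate(edges):
        # first bin is closed on the left, the rest are half-open (lo, hi]
        mask = (s >= lo if i == 0 else s > lo) & (s <= hi)
        rows.append((int(mask.sum()), float(err[mask].mean()) if mask.any() else None))
    rows.append((len(s), float(err.mean())))  # overall row
    return rows
```

Empty bins (count 0) yield `None`, matching the "-" entries above for bins with no molecules.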
| IC50 for Target BACE1 (MAE) | ||||||
| Bin | Randomized Split | SAE (balanced) Split | ||||
| Count (Ratio) | PharmHGT | SAM-DTA | Count (Ratio) | PharmHGT | SAM-DTA | |
| [0 , 1/3] | 10 (0.0108) | 1.3743 | 1.1204 | 309 (0.3330) | 1.1397 | 1.0309 |
| (1/3, 2/3] | 67 (0.0722) | 0.6334 | 0.6928 | 311 (0.3351) | 0.6410 | 0.6693 |
| (2/3, 1] | 851 (0.9170) | 0.4611 | 0.4594 | 308 (0.3319) | 0.4747 | 0.4808 |
| overall | 928 (1.0000) | 0.4834 | 0.4834 | 928 (1.0000) | 0.7518 | 0.7272 |
| IC50 for Target BACE1 (R²) | ||||||
| [0 , 1/3] | 10 (0.0108) | -0.0553 | 0.3702 | 309 (0.3330) | -0.2983 | -0.1261 |
| (1/3, 2/3] | 67 (0.0722) | 0.6789 | 0.6439 | 311 (0.3351) | 0.5848 | 0.5637 |
| (2/3, 1] | 851 (0.9170) | 0.7113 | 0.7150 | 308 (0.3319) | 0.7797 | 0.7803 |
| overall | 928 (1.0000) | 0.7190 | 0.7256 | 928 (1.0000) | 0.5329 | 0.5665 |
| Ki for Target Carbonic anhydrase I (MAE) | ||||||
| Bin | Randomized Split | SAE (balanced) Split | ||||
| Count (Ratio) | PharmHGT | SAM-DTA | Count (Ratio) | PharmHGT | SAM-DTA | |
| [0 , 1/3] | 7 (0.0079) | 1.1467 | 0.8798 | 264 (0.2983) | 0.8410 | 0.8729 |
| (1/3, 2/3] | 205 (0.2316) | 0.5843 | 0.6605 | 311 (0.3514) | 0.6706 | 0.6877 |
| (2/3, 1] | 673 (0.7605) | 0.4986 | 0.4896 | 310 (0.3503) | 0.6039 | 0.5740 |
| overall | 885 (1.0000) | 0.5236 | 0.5323 | 885 (1.0000) | 0.6981 | 0.7031 |
| Ki for Target Carbonic anhydrase I (R²) | ||||||
| [0 , 1/3] | 7 (0.0079) | -0.3232 | 0.1308 | 264 (0.2983) | -0.0389 | -0.0282 |
| (1/3, 2/3] | 205 (0.2316) | 0.5733 | 0.4820 | 311 (0.3514) | 0.3642 | 0.3478 |
| (2/3, 1] | 673 (0.7605) | 0.5037 | 0.5270 | 310 (0.3503) | 0.3917 | 0.4262 |
| overall | 885 (1.0000) | 0.5257 | 0.5174 | 885 (1.0000) | 0.2994 | 0.3071 |
| Ki for Target Carbonic anhydrase II (MAE) | ||||||
| Bin | Randomized Split | SAE (balanced) Split | ||||
| Count (Ratio) | PharmHGT | SAM-DTA | Count (Ratio) | PharmHGT | SAM-DTA | |
| [0 , 1/3] | 8 (0.0087) | 0.5879 | 0.6645 | 244 (0.2667) | 1.0450 | 1.0564 |
| (1/3, 2/3] | 201 (0.2197) | 0.6807 | 0.7009 | 342 (0.3738) | 0.7389 | 0.7572 |
| (2/3, 1] | 706 (0.7716) | 0.5615 | 0.5426 | 329 (0.3596) | 0.6172 | 0.5813 |
| overall | 915 (1.0000) | 0.5879 | 0.5785 | 915 (1.0000) | 0.7768 | 0.7738 |
| Ki for Target Carbonic anhydrase II (R²) | ||||||
| [0 , 1/3] | 8 (0.0087) | 0.6739 | 0.4277 | 244 (0.2667) | -0.0885 | 0.0087 |
| (1/3, 2/3] | 201 (0.2197) | 0.5803 | 0.5742 | 342 (0.3738) | 0.4657 | 0.4690 |
| (2/3, 1] | 706 (0.7716) | 0.5509 | 0.5932 | 329 (0.3596) | 0.4760 | 0.5346 |
| overall | 915 (1.0000) | 0.5684 | 0.5938 | 915 (1.0000) | 0.3776 | 0.4192 |
| Split | Method | Internal Test MAE | Internal Test R² | External Test MAE | External Test R² |
| Random | FusionDTA | 0.6445 | 0.5399 | 1.1198 | 0.0957 |
| ChemBERTa | 0.6389 | 0.5574 | 1.0600 | 0.1517 | |
| MolCLR | 0.5830 | 0.6318 | 1.0976 | 0.1546 | |
| PharmHGT | 0.6166 | 0.5739 | 1.1107 | 0.1323 | |
| SAM-DTA | 0.5805 | 0.6301 | 0.9863 | 0.3206 | |
| Scaffold | FusionDTA | 0.9626 | 0.1544 | 1.0863 | 0.1243 |
| ChemBERTa | 0.9314 | 0.1997 | 1.1972 | -0.0491 | |
| MolCLR | 0.8627 | 0.3145 | 1.1585 | 0.0405 | |
| PharmHGT | 0.8537 | 0.3427 | 1.0930 | 0.1594 | |
| SAM-DTA | 0.8725 | 0.3311 | 1.0187 | 0.3034 | |
| SIMPD | FusionDTA | 0.7215 | 0.3292 | 1.1417 | 0.0528 |
| ChemBERTa | 0.6954 | 0.3642 | 1.1010 | 0.0878 | |
| MolCLR | 0.6334 | 0.4775 | 1.2131 | -0.0016 | |
| PharmHGT | 0.6742 | 0.3958 | 1.1588 | 0.0133 | |
| SAM-DTA | 0.6271 | 0.4867 | 1.0058 | 0.3083 | |
| Stratified (max) | FusionDTA | 0.6753 | 0.5346 | 1.0886 | 0.1517 |
| ChemBERTa | 0.6752 | 0.5384 | 1.1504 | 0.0223 | |
| MolCLR | 0.5968 | 0.6504 | 1.0917 | 0.1207 | |
| PharmHGT | 0.6092 | 0.6302 | 1.0694 | 0.1811 | |
| SAM-DTA | 0.6019 | 0.6404 | 1.0345 | 0.2722 | |
| Stratified (avg) | FusionDTA | 0.6490 | 0.5713 | 1.0957 | 0.1206 |
| ChemBERTa | 0.6724 | 0.5191 | 1.1258 | 0.0896 | |
| MolCLR | 0.5939 | 0.6368 | 1.1556 | 0.1019 | |
| PharmHGT | 0.6103 | 0.5895 | 1.0938 | 0.1667 | |
| SAM-DTA | 0.6099 | 0.6159 | 0.9946 | 0.3345 | |
| Dissimilar | FusionDTA | 0.9425 | -0.1256 | 1.2788 | -0.1063 |
| ChemBERTa | 0.8927 | -0.0139 | 1.6402 | -0.5971 | |
| MolCLR | 0.8462 | 0.0592 | 1.3355 | -0.1366 | |
| PharmHGT | 0.9011 | -0.0029 | 1.6006 | -0.5237 | |
| SAM-DTA | 0.9239 | -0.0845 | 1.2140 | -0.0039 | |
| SAE (mimic) | FusionDTA | 0.9130 | 0.2919 | 1.0605 | 0.2122 |
| ChemBERTa | 0.8976 | 0.2736 | 1.0452 | 0.2477 | |
| MolCLR | 0.8536 | 0.3653 | 1.0002 | 0.2981 | |
| PharmHGT | 0.8826 | 0.3200 | 1.0609 | 0.1861 | |
| SAM-DTA | 0.8545 | 0.3770 | 0.9773 | 0.3367 |
| External Test MAE | ||||||
| Bin | Count | Split | FusionDTA | ChemBERTa | MolCLR | PharmHGT | SAM-DTA |
| [0, 1/3] | 120 | Random | 1.2442 | 1.1167 | 1.2385 | 1.3261 |
| Scaffold | 1.2293 | 1.2242 | 1.1630 | 1.1840 | ||
| SIMPD | 1.4297 | 1.3223 | 1.7376 | 1.3382 | ||
| Stratified (max) | 1.2311 | 1.1663 | 1.2414 | 1.2179 | ||
| Stratified (avg) | 1.5371 | 1.4478 | 1.6005 | 1.3962 | ||
| Dissimilar | 1.4134 | 1.2363 | 1.1912 | 1.2424 | ||
| SAE (mimic) | 1.0626 | 1.0315 | 1.1848 | 1.1082 | ||
| (1/3, 2/3] | 1026 | Random | 1.1416 | 1.0874 | 1.0983 | 1.0989 |
| Scaffold | 1.0729 | 1.2273 | 1.2021 | 1.0891 | ||
| SIMPD | 1.1545 | 1.1272 | 1.2134 | 1.1614 | ||
| Stratified (max) | 1.0756 | 1.1611 | 1.0927 | 1.0677 | ||
| Stratified (avg) | 1.0815 | 1.1450 | 1.1384 | 1.0835 | ||
| Dissimilar | 1.3106 | 1.6954 | 1.3676 | 1.6504 | ||
| SAE (mimic) | 1.0531 | 1.0594 | 0.9979 | 1.0606 | ||
| (2/3, 1] | 186 | Random | 0.9559 | 0.9060 | 1.0208 | 1.0516 |
| Scaffold | 1.0604 | 1.0346 | 0.9449 | 1.0543 | ||
| SIMPD | 0.9741 | 0.9022 | 1.0066 | 1.0775 | ||
| Stratified (max) | 1.0718 | 1.0883 | 1.0017 | 0.9933 | ||
| Stratified (avg) | 0.9694 | 0.9047 | 1.0415 | 1.0102 | ||
| Dissimilar | 1.0201 | 1.6720 | 1.2858 | 1.6248 | ||
| SAE (mimic) | 1.1003 | 0.9762 | 0.8938 | 1.0322 | ||
| Bin | Count | Split | FusionDTA | ChemBERTa | MolCLR | PharmHGT | SAM-DTA |
| [0, 1/3] | 120 | Random | -0.5461 | -0.5827 | -0.5009 | -0.6605 | -0.2855 |
| Scaffold | -0.1335 | -0.3903 | -0.1887 | -0.1096 | -0.0121 | ||
| SIMPD | -0.8717 | -0.7917 | -1.3400 | -0.6245 | -0.1181 | ||
| Stratified (max) | -0.4278 | -0.4195 | -0.7345 | -0.3873 | -0.3106 | ||
| Stratified (avg) | -0.8887 | -1.0024 | -1.1185 | -0.5790 | -0.3962 | ||
| Dissimilar | -0.4209 | -0.0067 | 0.0122 | -0.0173 | -0.4122 | ||
| SAE (mimic) | -0.0736 | -0.1140 | -0.3105 | -0.2592 | -0.0783 | ||
| (1/3, 2/3] | 1026 | Random | 0.0351 | 0.1066 | 0.1433 | 0.1005 | 0.2950 |
| Scaffold | 0.0987 | -0.1211 | -0.0502 | 0.1355 | 0.2824 | ||
| SIMPD | 0.0078 | 0.0332 | -0.0169 | -0.0337 | 0.2780 | ||
| Stratified (max) | 0.1305 | -0.0321 | 0.1016 | 0.1530 | 0.2633 | ||
| Stratified (avg) | 0.1017 | 0.0476 | 0.1156 | 0.1430 | 0.3188 | ||
| Dissimilar | -0.2307 | -0.8279 | -0.2834 | -0.7374 | -0.1244 | ||
| SAE (mimic) | 0.1791 | 0.1848 | 0.2632 | 0.1237 | 0.3020 | ||
| (2/3, 1] | 186 | Random | 0.0869 | 0.1390 | -0.0844 | 0.0577 | 0.2594 |
| Scaffold | -0.0967 | -0.0351 | 0.1475 | -0.0412 | 0.2110 | ||
| SIMPD | 0.1642 | 0.2476 | 0.0842 | 0.0139 | 0.2880 | ||
| Stratified (max) | 0.0272 | -0.0595 | 0.1063 | 0.1111 | 0.1550 | ||
| Stratified (avg) | 0.1114 | 0.2234 | 0.0081 | 0.0860 | 0.3187 | ||
| Dissimilar | 0.0288 | -1.0679 | -0.3223 | -0.9576 | 0.2743 | ||
| SAE (mimic) | -0.2135 | 0.1123 | 0.1807 | 0.0350 | 0.1359 |
| Similarity Measure | Fingerprint | SAE (balanced) | Random | Scaffold | SIMPD | Stratified (max) | Stratified (avg) |
| Cosine | Morgan | 145, 436, 292 | 0, 33, 840 | 6, 159, 708 | 16, 657, 200 | 0, 32, 841 | 0, 27, 846 |
| RDKFP | 18, 426, 429 | 0, 17, 856 | 1, 78, 794 | 1, 124, 748 | 0, 13, 860 | 1, 14, 858 | |
| Avalon | 9, 429, 435 | 0, 7, 866 | 0, 21, 852 | 0, 16, 857 | 0, 6, 867 | 0, 6, 867 | |
| Sokal | Morgan | 292, 289, 292 | 33, 398, 442 | 172, 510, 191 | 689, 126, 58 | 34, 423, 416 | 29, 416, 428 |
| RDKFP | 291, 291, 291 | 19, 80, 774 | 85, 236, 552 | 135, 624, 114 | 14, 82, 777 | 15, 74, 784 | |
| Avalon | 291, 291, 291 | 7, 63, 803 | 26, 275, 572 | 23, 629, 221 | 8, 76, 789 | 6, 73, 794 | |
| Dice | Morgan | 182, 378, 313 | 0, 33, 840 | 9, 163, 701 | 17, 672, 184 | 0, 34, 839 | 2, 27, 844 |
| RDKFP | 60, 463, 350 | 0, 19, 854 | 2, 83, 788 | 1, 134, 738 | 0, 14, 859 | 1, 14, 858 | |
| Avalon | 32, 416, 425 | 0, 7, 866 | 0, 26, 847 | 0, 23, 850 | 0, 8, 865 | 0, 6, 867 | |
| Tanimoto | Morgan | 290, 299, 284 | 16, 98, 759 | 80, 273, 520 | 228, 547, 98 | 12, 99, 762 | 13, 99, 761 |
| RDKFP | 289, 292, 292 | 8, 34, 831 | 15, 184, 674 | 2, 635, 236 | 1, 39, 833 | 4, 33, 836 | |
| Avalon | 220, 325, 328 | 0, 28, 845 | 2, 154, 717 | 1, 324, 548 | 0, 30, 843 | 1, 28, 844 |
| Method | Hyper-parameter | Options |
| SAM-DTA | Optimizer | [Adam, SGD] |
| Learning rate | [1e-3, 1e-4, 1e-5] | |
| Batch size | [10, 32, 64] | |
| MolCLR | Optimizer | [Adam, SGD] |
| Learning rate (Prediction head, GNN encoder) | [(1e-3, 5e-3), (1e-4, 5e-4), (1e-5, 5e-5)] | |
| Dropout ratio | [0.3, 0.5] | |
| Readout pooling | [Mean, Max, Add] | |
| FusionDTA | Optimizer | [Adam, SGD] |
| Learning rate | [1e-2, 1e-3, 1e-4] | |
| Batch size | [128, 256] | |
| Loss function | [L1, MSE] | |
| PharmHGT | Optimizer | [Adam, SGD] |
| Learning rate | [1e-2, 1e-3, 1e-4] | |
| Activation function | [Sigmoid, ReLU] | |
| Loss function | [RMSE, MAE] | |
| ChemBERTa | Optimizer | [AdamW, Adafactor] |
| Learning rate | [4e-3, 4e-4, 4e-5, 4e-6] | |
| Batch size | [4, 8, 16] |
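Each method's hyper-parameter options above define a small grid; e.g., the SAM-DTA grid expands to 2 × 3 × 3 = 18 runs. A minimal enumeration sketch:

```python
from itertools import product

# SAM-DTA search space from the table above
sam_dta_grid = {
    "optimizer": ["Adam", "SGD"],
    "learning_rate": [1e-3, 1e-4, 1e-5],
    "batch_size": [10, 32, 64],
}

# one config dict per (optimizer, learning rate, batch size) combination
configs = [dict(zip(sam_dta_grid, values)) for values in product(*sam_dta_grid.values())]
```

The other methods' grids expand the same way from their rows of the table.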
| Randomized Split (MAE) | ||||||
| Bin | Feature: RDKit fingerprint | Feature: Avalon fingerprint | ||||
| Count (Ratio) | FusionDTA | ChemBERTa | Count (Ratio) | FusionDTA | ChemBERTa | |
| [0, 1/3] | 8 (0.0092) | 1.4442 | 1.4679 | 0 (0.0000) | - | - |
| (1/3, 2/3] | 34 (0.0389) | 1.1294 | 1.1306 | 28 (0.0321) | 1.3299 | 1.2340 |
| (2/3, 1] | 831 (0.9519) | 0.6407 | 0.6546 | 845 (0.9679) | 0.6451 | 0.6623 |
| overall | 873 (1.0000) | 0.6671 | 0.6806 | 873 (1.0000) | 0.6671 | 0.6806 |
| Similarity Measure: Sokal similarity | Similarity Measure: Dice coefficient | |||||
| [0, 1/3] | 33 (0.0378) | 1.2751 | 1.2711 | 0 (0.0000) | - | - |
| (1/3, 2/3] | 398 (0.4559) | 0.7366 | 0.7234 | 33 (0.0378) | 1.2751 | 1.2711 |
| (2/3, 1] | 442 (0.5063) | 0.5591 | 0.5980 | 840 (0.9622) | 0.6432 | 0.6574 |
| overall | 873 (1.0000) | 0.6671 | 0.6806 | 873 (1.0000) | 0.6671 | 0.6806 |
| Aggregation: Top-3 | Aggregation: Top-5 | |||||
| [0, 1/3] | 17 (0.0195) | 1.1567 | 1.3484 | 24 (0.0275) | 1.3682 | 1.3330 |
| (1/3, 2/3] | 171 (0.1959) | 0.8839 | 0.8722 | 240 (0.2749) | 0.7861 | 0.8094 |
| (2/3, 1] | 685 (0.7847) | 0.6008 | 0.6162 | 609 (0.6976) | 0.5926 | 0.6041 |
| overall | 873 (1.0000) | 0.6671 | 0.6806 | 873 (1.0000) | 0.6671 | 0.6806 |
| Randomized Split (R²) | ||||||
| Bin | Feature: RDKit fingerprint | Feature: Avalon fingerprint | ||||
| Count (Ratio) | FusionDTA | ChemBERTa | Count (Ratio) | FusionDTA | ChemBERTa | |
| [0, 1/3] | 8 (0.0092) | 0.2319 | 0.2536 | 0 (0.0000) | - | - |
| (1/3, 2/3] | 34 (0.0389) | 0.2253 | 0.1829 | 28 (0.0321) | 0.2166 | 0.2573 |
| (2/3, 1] | 831 (0.9519) | 0.5899 | 0.5796 | 845 (0.9679) | 0.5845 | 0.5698 |
| overall | 873 (1.0000) | 0.5697 | 0.5585 | 873 (1.0000) | 0.5697 | 0.5585 |
| Similarity Measure: Sokal similarity | Similarity Measure: Dice coefficient | |||||
| [0, 1/3] | 33 (0.0378) | 0.0871 | 0.0615 | 0 (0.0000) | - | - |
| (1/3, 2/3] | 398 (0.4559) | 0.5068 | 0.5186 | 33 (0.0378) | 0.0871 | 0.0615 |
| (2/3, 1] | 442 (0.5063) | 0.6469 | 0.6117 | 840 (0.9622) | 0.5898 | 0.5793 |
| overall | 873 (1.0000) | 0.5697 | 0.5585 | 873 (1.0000) | 0.5697 | 0.5585 |
| Aggregation: Top-3 | Aggregation: Top-5 | |||||
| [0, 1/3] | 17 (0.0195) | -0.0685 | -0.3530 | 24 (0.0275) | -0.0237 | -0.0345 |
| (1/3, 2/3] | 171 (0.1959) | 0.4390 | 0.4408 | 240 (0.2749) | 0.4881 | 0.4647 |
| (2/3, 1] | 685 (0.7847) | 0.5930 | 0.5837 | 609 (0.6976) | 0.5965 | 0.5903 |
| overall | 873 (1.0000) | 0.5697 | 0.5585 | 873 (1.0000) | 0.5697 | 0.5585 |
| Randomized Split (MAE) | ||||
| Bin | Feature: RDKit fingerprint | Feature: Avalon fingerprint | ||
| Count (Ratio) | MolCLR | Count (Ratio) | MolCLR | |
| [0, 1/3] | 8 (0.0092) | 1.4442 | 0 (0.0000) | - |
| (1/3, 2/3] | 34 (0.0389) | 1.1294 | 28 (0.0321) | 1.3299 |
| (2/3, 1] | 831 (0.9519) | 0.6407 | 845 (0.9679) | 0.6451 |
| overall | 873 (1.0000) | 0.6671 | 873 (1.0000) | 0.6671 |
| Similarity Measure: Sokal similarity | Similarity Measure: Dice coefficient | |||
| [0, 1/3] | 33 (0.0378) | 1.2751 | 0 (0.0000) | - |
| (1/3, 2/3] | 398 (0.4559) | 0.7366 | 33 (0.0378) | 1.2751 |
| (2/3, 1] | 442 (0.5063) | 0.5591 | 840 (0.9622) | 0.6432 |
| overall | 873 (1.0000) | 0.6671 | 873 (1.0000) | 0.6671 |
| Aggregation: Top-3 | Aggregation: Top-5 | |||
| [0, 1/3] | 17 (0.0195) | 1.1567 | 24 (0.0275) | 1.3682 |
| (1/3, 2/3] | 171 (0.1959) | 0.8839 | 240 (0.2749) | 0.7861 |
| (2/3, 1] | 685 (0.7847) | 0.6008 | 609 (0.6976) | 0.5926 |
| overall | 873 (1.0000) | 0.6671 | 873 (1.0000) | 0.6671 |
| Randomized Split (R²) | ||||
| Bin | Feature: RDKit fingerprint | Feature: Avalon fingerprint | ||
| Count (Ratio) | MolCLR | Count (Ratio) | MolCLR | |
| [0, 1/3] | 8 (0.0092) | 0.2319 | 0 (0.0000) | - |
| (1/3, 2/3] | 34 (0.0389) | 0.2253 | 28 (0.0321) | 0.2166 |
| (2/3, 1] | 831 (0.9519) | 0.5899 | 845 (0.9679) | 0.5845 |
| overall | 873 (1.0000) | 0.5697 | 873 (1.0000) | 0.5697 |
| Similarity Measure: Sokal similarity | Similarity Measure: Dice coefficient | |||
| [0, 1/3] | 33 (0.0378) | 0.0871 | 0 (0.0000) | - |
| (1/3, 2/3] | 398 (0.4559) | 0.5068 | 33 (0.0378) | 0.0871 |
| (2/3, 1] | 442 (0.5063) | 0.6469 | 840 (0.9622) | 0.5898 |
| overall | 873 (1.0000) | 0.5697 | 873 (1.0000) | 0.5697 |
| Aggregation: Top-3 | Aggregation: Top-5 | |||
| [0, 1/3] | 17 (0.0195) | -0.0685 | 24 (0.0275) | -0.0237 |
| (1/3, 2/3] | 171 (0.1959) | 0.4390 | 240 (0.2749) | 0.4881 |
| (2/3, 1] | 685 (0.7847) | 0.5930 | 609 (0.6976) | 0.5965 |
| overall | 873 (1.0000) | 0.5697 | 873 (1.0000) | 0.5697 |
| IC50 for Target BACE1 (MAE) | ||||||
| Bin | Randomized Split | SAE (balanced) Split | ||||
| Count (Ratio) | FusionDTA | ChemBERTa | Count (Ratio) | FusionDTA | ChemBERTa | |
| [0, 1/3] | 10 (0.0108) | 1.3020 | 1.1440 | 309 (0.3330) | 1.2117 | 1.2503 |
| (1/3, 2/3] | 67 (0.0722) | 0.7270 | 0.6719 | 311 (0.3351) | 0.7444 | 0.6599 |
| (2/3, 1] | 851 (0.9170) | 0.5105 | 0.5267 | 308 (0.3319) | 0.5310 | 0.5352 |
| overall | 928 (1.0000) | 0.5347 | 0.5439 | 928 (1.0000) | 0.8292 | 0.8151 |
| IC50 for Target BACE1 (R²) | ||||||
| [0, 1/3] | 10 (0.0108) | 0.0422 | 0.3204 | 309 (0.3330) | -0.5113 | -0.5641 |
| (1/3, 2/3] | 67 (0.0722) | 0.5980 | 0.6325 | 311 (0.3351) | 0.4238 | 0.5787 |
| (2/3, 1] | 851 (0.9170) | 0.6651 | 0.6446 | 308 (0.3319) | 0.7076 | 0.7235 |
| overall | 928 (1.0000) | 0.6755 | 0.6673 | 928 (1.0000) | 0.4213 | 0.4548 |
| Ki for Target Carbonic anhydrase I (MAE) | ||||||
| Bin | Randomized Split | SAE (balanced) Split | ||||
| Count (Ratio) | FusionDTA | ChemBERTa | Count (Ratio) | FusionDTA | ChemBERTa | |
| [0, 1/3] | 7 (0.0079) | 0.9363 | 0.9564 | 264 (0.2983) | 1.0252 | 0.9245 |
| (1/3, 2/3] | 205 (0.2316) | 0.7086 | 0.7085 | 311 (0.3514) | 0.7362 | 0.7228 |
| (2/3, 1] | 673 (0.7605) | 0.5203 | 0.5440 | 310 (0.3503) | 0.6181 | 0.6060 |
| overall | 885 (1.0000) | 0.5673 | 0.5854 | 885 (1.0000) | 0.7810 | 0.7421 |
| Ki for Target Carbonic anhydrase I (R²) | ||||||
| [0, 1/3] | 7 (0.0079) | 0.0634 | 0.1421 | 264 (0.2983) | -0.4131 | -0.1076 |
| (1/3, 2/3] | 205 (0.2316) | 0.3536 | 0.3761 | 311 (0.3514) | 0.2106 | 0.2829 |
| (2/3, 1] | 673 (0.7605) | 0.4500 | 0.4231 | 310 (0.3503) | 0.3532 | 0.3299 |
| overall | 885 (1.0000) | 0.4259 | 0.4161 | 885 (1.0000) | 0.1253 | 0.2334 |
| Ki for Target Carbonic anhydrase II (MAE) | ||||||
| Bin | Randomized Split | SAE (balanced) Split | ||||
| Count (Ratio) | FusionDTA | ChemBERTa | Count (Ratio) | FusionDTA | ChemBERTa | |
| [0, 1/3] | 8 (0.0087) | 0.8465 | 0.5778 | 244 (0.2667) | 0.9849 | 0.9314 |
| (1/3, 2/3] | 201 (0.2197) | 0.6817 | 0.7419 | 342 (0.3738) | 0.8265 | 0.7390 |
| (2/3, 1] | 706 (0.7716) | 0.5605 | 0.5997 | 329 (0.3596) | 0.6040 | 0.6072 |
| overall | 915 (1.0000) | 0.5896 | 0.6307 | 915 (1.0000) | 0.7888 | 0.7429 |
| Ki for Target Carbonic anhydrase II (R²) | ||||||
| [0, 1/3] | 8 (0.0087) | 0.0349 | 0.3581 | 244 (0.2667) | 0.0488 | 0.2523 |
| (1/3, 2/3] | 201 (0.2197) | 0.5603 | 0.4667 | 342 (0.3738) | 0.3416 | 0.4686 |
| (2/3, 1] | 706 (0.7716) | 0.5513 | 0.5146 | 329 (0.3596) | 0.4499 | 0.4744 |
| overall | 915 (1.0000) | 0.5570 | 0.5087 | 915 (1.0000) | 0.3583 | 0.4659 |
| IC50 for Target BACE1 (MAE) | ||||
| Bin | Randomized Split | SAE (balanced) Split | ||
| Count (Ratio) | MolCLR | Count (Ratio) | MolCLR | |
| [0 , 1/3] | 10 (0.0108) | 1.2452 | 309 (0.3330) | 1.4141 |
| (1/3, 2/3] | 67 (0.0722) | 0.6952 | 311 (0.3351) | 0.6940 |
| (2/3, 1] | 851 (0.9170) | 0.4878 | 308 (0.3319) | 0.4784 |
| overall | 928 (1.0000) | 0.5109 | 928 (1.0000) | 0.8622 |
| IC50 for Target BACE1 (R²) | ||||
| [0 , 1/3] | 10 (0.0108) | -0.0492 | 309 (0.3330) | -0.9211 |
| (1/3, 2/3] | 67 (0.0722) | 0.6184 | 311 (0.3351) | 0.5107 |
| (2/3, 1] | 851 (0.9170) | 0.6919 | 308 (0.3319) | 0.7746 |
| overall | 928 (1.0000) | 0.6974 | 928 (1.0000) | 0.3713 |
| Ki for Target Carbonic anhydrase I (MAE) | ||||
| Bin | Randomized Split | SAE (balanced) Split | ||
| Count (Ratio) | MolCLR | Count (Ratio) | MolCLR | |
| [0 , 1/3] | 7 (0.0079) | 0.8510 | 264 (0.2983) | 1.0059 |
| (1/3, 2/3] | 205 (0.2316) | 0.6141 | 311 (0.3514) | 0.7549 |
| (2/3, 1] | 673 (0.7605) | 0.4762 | 310 (0.3503) | 0.5755 |
| overall | 885 (1.0000) | 0.5111 | 885 (1.0000) | 0.7669 |
| Ki for Target Carbonic anhydrase I (R²) | ||||
| [0 , 1/3] | 7 (0.0079) | 0.3127 | 264 (0.2983) | -0.3331 |
| (1/3, 2/3] | 205 (0.2316) | 0.5338 | 311 (0.3514) | 0.1585 |
| (2/3, 1] | 673 (0.7605) | 0.5598 | 310 (0.3503) | 0.4342 |
| overall | 885 (1.0000) | 0.5572 | 885 (1.0000) | 0.1552 |
| Ki for Target Carbonic anhydrase II (MAE) | ||||
| Bin | Randomized Split | SAE (balanced) Split | ||
| Count (Ratio) | MolCLR | Count (Ratio) | MolCLR | |
| [0 , 1/3] | 8 (0.0087) | 1.0278 | 244 (0.2667) | 0.8873 |
| (1/3, 2/3] | 201 (0.2197) | 0.6882 | 342 (0.3738) | 0.6907 |
| (2/3, 1] | 706 (0.7716) | 0.5497 | 329 (0.3596) | 0.6232 |
| overall | 915 (1.0000) | 0.5843 | 915 (1.0000) | 0.7189 |
| Ki for Target Carbonic anhydrase II (R²) | ||||
| [0 , 1/3] | 8 (0.0087) | -0.3461 | 244 (0.2667) | 0.2192 |
| (1/3, 2/3] | 201 (0.2197) | 0.5635 | 342 (0.3738) | 0.5347 |
| (2/3, 1] | 706 (0.7716) | 0.5964 | 329 (0.3596) | 0.4409 |
| overall | 915 (1.0000) | 0.5856 | 915 (1.0000) | 0.4740 |
| Base Model | Variant | Variation Type |
| Llama-2-7B | Llama-2-7B-80K | Length Extension via Continued Pretraining |
| Llama-2-13B-64K | Model Scaling and Length Extension | |
| Mistral-7B-v0.2 | Mistral-7B-Instruct-v0.2 | SFT and RLHF |
| Mixtral-8x7B-v0.1 | Sparse Upcycling to Mixture of Experts | |
| Yi-6B | Yi-6B-200K | Length Extension via Continual Pretraining |
| Yi-34B-200K | Model Scaling and Length Extension | |
| Qwen1.5-14B | Qwen1.5-14B-Chat | SFT and RLHF |
| Question 1: Mishka bought 3 pairs of shorts, 3 pairs of pants, and 3 pairs of shoes. One pair of shorts costs $16.50. One pair of pants costs $22.50 and one pair of shoes costs $42. How many dollars did Mishka spend on all the clothing items? Masking Top 100 Retrieval Heads: Let's calculate the cost of each type of clothing for Mishka: 1. For shorts, there are 3 pairs, so the total cost is 3 * $16.5 = $59.10. 2. For pants, the cost is the same, so 3 pairs cost $3 * $22 = $66. 3. In total, Mishka spent $59 + $66 = $125. So, the answer is $125. Misunderstood Information | Question 2: James decides to run 3 sprints 3 times a week. He runs 60 meters each sprint. How many total meters does he run a week? Masking Top 100 Retrieval Heads: Let's think through this step by step: 1. James runs 3 times a week for each sprint session. 2. During each session, he covers a distance of 60 meters (3 times the length of a standard football field goal post). 3. Therefore, the total distance James covers in a week is: 6 (sessions in a week) x1 (number of sprints in session) x 60(meters in a sprint)= 360 meters. So, James runs a total of 360 meters every week. |
| | Cora | OGB-arxiv |
| Rand. ASR | 14.29 | 2.5 |
| ASR | 34.72 | 5.4 |
| Attacks | Defense | Cora | CiteSeer | PubMed | Physics | Flickr | OGB-arxiv | ||||||
| ASR(%)↓ | ACC(%)↑ | ASR(%)↓ | ACC(%)↑ | ASR(%)↓ | ACC(%)↑ | ASR(%)↓ | ACC(%)↑ | ASR(%)↓ | ACC(%)↑ | ASR(%)↓ | ACC(%)↑ | ||
| GTA | GCN | 98.98 | 82.58 | 100.00 | 73.70 | 93.09 | 85.18 | 96.42 | 89.30 | 88.45 | 41.23 | 75.34 | 65.76 |
| GNNGuard | 40.22 | 78.52 | 55.26 | 63.55 | 26.93 | 81.68 | 43.87 | 78.72 | 0.00 | 40.40 | 0.04 | 62.58 | |
| RobustGCN | 90.46 | 80.37 | 95.31 | 73.79 | 93.12 | 81.68 | 95.47 | 94.84 | 85.42 | 42.26 | 70.95 | 56.08 | |
| Prune | 17.63 | 83.06 | 12.24 | 72.46 | 28.10 | 85.05 | 8.34 | 88.45 | 12.56 | 41.27 | 0.01 | 63.97 | |
| OD | 0.04 | 83.47 | 0.04 | 72.84 | 0.03 | 85.27 | 0.12 | 90.24 | 2.12 | 41.42 | 0.01 | 65.23 | |
| RS | 53.14 | 73.33 | 52.86 | 65.66 | 42.28 | 84.58 | 57.70 | 92.52 | 52.85 | 42.31 | 42.72 | 58.48 | |
| ABL | 19.93 | 81.85 | 13.09 | 73.19 | 16.18 | 83.92 | 16.82 | 92.06 | 17.54 | 41.46 | 11.28 | 63.24 | |
| RIGBD | 0.00 | 83.70 | 0.34 | 74.10 | 0.01 | 84.32 | 0.32 | 95.10 | 0.00 | 44.21 | 0.01 | 66.51 | |
| UGBA | GCN | 98.76 | 83.42 | 100.00 | 74.70 | 96.42 | 84.64 | 100.00 | 95.94 | 93.14 | 44.71 | 98.82 | 63.95 |
| GNNGuard | 43.17 | 78.15 | 94.53 | 64.76 | 98.97 | 81.48 | 95.26 | 87.54 | 96.93 | 42.15 | 95.21 | 64.61 | |
| RobustGCN | 98.67 | 80.00 | 99.82 | 71.69 | 99.90 | 82.85 | 99.94 | 94.08 | 32.17 | 41.82 | 90.35 | 56.18 | |
| Prune | 98.89 | 82.66 | 97.68 | 74.35 | 92.87 | 85.09 | 94.67 | 93.87 | 91.43 | 43.65 | 93.07 | 62.58 | |
| OD | 0.03 | 83.65 | 0.06 | 73.80 | 0.01 | 85.19 | 0.02 | 95.36 | 1.65 | 43.57 | 0.01 | 65.35 | |
| RS | 54.24 | 70.37 | 50.34 | 69.88 | 44.41 | 84.68 | 47.77 | 94.29 | 20.69 | 42.18 | 40.30 | 58.76 | |
| ABL | 15.13 | 81.48 | 12.08 | 73.19 | 28.60 | 84.37 | 14.87 | 94.69 | 15.32 | 41.66 | 34.26 | 64.93 | |
| RIGBD | 0.01 | 84.81 | 0.00 | 73.80 | 0.01 | 85.13 | 0.12 | 95.71 | 0.00 | 43.66 | 0.01 | 65.21 | |
| DPGBA | GCN | 97.72 | 83.34 | 100.00 | 74.09 | 98.63 | 85.22 | 100.00 | 95.59 | 90.79 | 44.87 | 95.63 | 65.72 |
| GNNGuard | 85.61 | 78.52 | 46.98 | 60.84 | 44.12 | 80.82 | 88.72 | 88.76 | 95.85 | 43.52 | 94.66 | 62.29 | |
| RobustGCN | 96.68 | 81.11 | 91.64 | 71.08 | 94.88 | 82.54 | 91.25 | 94.87 | 24.60 | 41.69 | 90.09 | 60.38 | |
| Prune | 91.82 | 85.28 | 94.80 | 73.21 | 88.64 | 85.13 | 94.27 | 94.73 | 88.96 | 44.75 | 90.47 | 65.53 | |
| OD | 94.33 | 83.58 | 98.42 | 73.66 | 91.32 | 85.12 | 98.72 | 95.48 | 90.42 | 43.63 | 93.30 | 65.47 | |
| RS | 51.29 | 69.63 | 50.34 | 71.08 | 48.83 | 85.44 | 48.19 | 95.30 | 27.94 | 42.21 | 41.18 | 58.44 | |
| ABL | 86.72 | 79.26 | 11.41 | 73.49 | 48.58 | 79.45 | 11.74 | 95.30 | 30.42 | 41.71 | 52.56 | 63.88 | |
| RIGBD | 0.01 | 85.19 | 0.33 | 73.79 | 0.03 | 84.56 | 0.21 | 95.79 | 0.00 | 43.78 | 0.00 | 65.24 | |
| Attacks | Clean ACC | ASR | ACC | Recall | Precision |
| GTA | 65.8 | 0.01 | 66.51 | 84.9 | 90.3 |
| UGBA | 65.8 | 0.01 | 65.21 | 95.6 | 96.3 |
| DPGBA | 65.8 | 0.00 | 65.24 | 90.5 | 93.9 |
| Intuitive | RIGBD (K=10) |
| 42.56s | 0.09s |
| Dataset | Degree-1 | Degree-2 | Degree-3 | Target Nodes |
| Wisconsin | 0.64 | 1.94 | 1.48 | 6.08 |
| Cornell | 0.79 | 1.33 | 1.27 | 5.16 |
| Texas | 0.65 | 1.48 | 1.28 | 4.10 |
| Cora | 0.51 | 0.66 | 0.56 | 12.70 |
| Citeseer | 0.36 | 0.70 | 0.69 | 5.41 |
| Pubmed | 0.67 | 0.80 | 0.57 | 10.67 |
| Physics | 0.44 | 0.52 | 0.36 | 18.80 |
| Flickr | 0.01 | 0.02 | 0.01 | 2.56 |
| OGB-arxiv | 0.57 | 0.79 | 0.67 | 10.67 |
| Datasets | #Nodes | #Edges | #Features | #Classes |
| Cora | 2,708 | 5,429 | 1,443 | 7 |
| Citeseer | 3,327 | 4,552 | 3,703 | 3 |
| Pubmed | 19,717 | 44,338 | 500 | 3 |
| Physics | 34,493 | 495,924 | 8,415 | 5 |
| Flickr | 89,250 | 899,756 | 500 | 7 |
| OGB-arxiv | 169,343 | 1,166,243 | 128 | 40 |
| Dataset | GCN | RIGBD |
| Cora | 84.44 | 84.07 |
| CiteSeer | 73.83 | 73.44 |
| PubMed | 85.13 | 85.05 |
| Physics | 95.78 | 95.40 |
| Flickr | 44.42 | 44.03 |
| OGB-arxiv | 66.13 | 65.53 |
| Attacks | Defense | Cora | PubMed | OGB-arxiv | |||
| ASR(%) ↓ | ACC(%) ↑ | ASR(%) ↓ | ACC(%) ↑ | ASR(%) ↓ | ACC(%) ↑ | ||
| GTA | GAT | 76.97 | 84.44 | 89.77 | 82.60 | OOM | OOM |
| GraphSage | 100.00 | 81.11 | 99.94 | 84.88 | 93.67 | 67.81 | |
| RS-0.7 | 45.02 | 66.67 | 33.20 | 85.24 | 33.12 | 56.93 | |
| RS-0.8 | 34.32 | 65.56 | 24.65 | 85.13 | 25.72 | 55.05 | |
| RIGBD-GAT | 0.02 | 85.19 | 0.08 | 83.66 | OOM | OOM | |
| RIGBD-GraphSage | 0.18 | 81.52 | 0.00 | 85.84 | 0.09 | 67.18 | |
| UGBA | GAT | 100.00 | 85.19 | 100.00 | 83.05 | OOM | OOM |
| GraphSage | 97.33 | 84.07 | 99.08 | 85.24 | 94.69 | 68.25 | |
| RS-0.7 | 46.86 | 69.63 | 37.74 | 85.34 | 29.74 | 57.33 | |
| RS-0.8 | 37.64 | 65.19 | 28.20 | 86.35 | 23.14 | 54.77 | |
| RIGBD-GAT | 0.04 | 84.81 | 0.30 | 82.14 | OOM | OOM | |
| RIGBD-GraphSage | 0.44 | 84.07 | 0.02 | 85.74 | 0.06 | 67.75 | |
| DPGBA | GAT | 91.14 | 83.33 | 93.8 | 83.56 | OOM | OOM |
| GraphSage | 93.48 | 83.33 | 91.8 | 85.79 | 99.53 | 67.76 | |
| RS-0.7 | 40.96 | 68.52 | 36.31 | 84.98 | 32.81 | 56.66 | |
| RS-0.8 | 35.79 | 66.30 | 27.13 | 85.74 | 23.41 | 54.45 | |
| RIGBD-GAT | 0.04 | 84.44 | 0.01 | 83.31 | OOM | OOM | |
| RIGBD-GraphSage | 0.26 | 82.11 | 0.06 | 86.35 | 0.03 | 66.61 | |
| Dataset | Methods | Clean ACC | ASR | Recall | Precision |
| Wisconsin | GCN | 48 | 100 | - | - |
| RIGBD | 48 | 0 | 90 | 95 | |
| Cornell | GCN | 50.0 | 93.3 | - | - |
| RIGBD | 53.5 | 3.3 | 70 | 100 | |
| Texas | GCN | 44.4 | 92.9 | - | - |
| RIGBD | 42.9 | 3.7 | 80 | 70 |
| Datasets | Attacks | Clean ACC | ASR | ACC | Recall | Precision |
| Cora | GTA | 83.5 | 0.00 | 83.70 | 91.2 | 94.0 |
| UGBA | 83.5 | 0.01 | 84.81 | 96.8 | 100.0 | |
| DPGBA | 83.5 | 0.01 | 85.19 | 100.0 | 100.0 | |
| CiteSeer | GTA | 73.8 | 0.34 | 74.10 | 94.7 | 100.0 |
| UGBA | 73.8 | 0.00 | 73.80 | 93.4 | 98.6 | |
| DPGBA | 73.8 | 0.33 | 73.79 | 97.4 | 100.0 | |
| PubMed | GTA | 84.9 | 0.01 | 84.32 | 82.0 | 90.5 |
| UGBA | 84.9 | 0.01 | 85.13 | 88.7 | 96.6 | |
| DPGBA | 84.9 | 0.01 | 84.32 | 85.0 | 91.0 | |
| Physics | GTA | 95.8 | 0.32 | 95.10 | 86.2 | 91.8 |
| UGBA | 95.8 | 0.12 | 95.71 | 96.0 | 98.4 | |
| DPGBA | 95.8 | 0.21 | 95.97 | 96.8 | 99.2 | |
| Flickr | GTA | 44.4 | 0.00 | 44.21 | 69.5 | 98.1 |
| UGBA | 44.4 | 0.00 | 42.74 | 98.6 | 97.2 | |
| DPGBA | 44.4 | 0.00 | 43.80 | 97.9 | 100.0 | |
| OGB-arxiv | GTA | 65.8 | 0.01 | 66.51 | 84.9 | 90.3 |
| UGBA | 65.8 | 0.01 | 65.21 | 95.6 | 96.3 | |
| DPGBA | 65.8 | 0.00 | 65.24 | 90.5 | 93.9 |
| Datasets | Triggers | ASR | Clean ACC | ASR | ACC | Recall | Precision |
| Cora | 10 | 87.82 | 84.81 | 0.05 | 85.56 | 90.0 | 90.0 |
| 20 | 92.62 | 84.44 | 0.00 | 85.19 | 95.0 | 100.0 | |
| 40 | 93.48 | 83.70 | 0.02 | 85.93 | 97.0 | 97.0 | |
| 80 | 96.52 | 82.22 | 0.10 | 84.81 | 94.0 | 98.4 | |
| 160 | 99.13 | 80.00 | 0.10 | 84.07 | 97.1 | 95.8 | |
| PubMed | 20 | 90.73 | 84.88 | 0.01 | 85.21 | 85.0 | 85.0 |
| 80 | 94.32 | 84.68 | 0.01 | 84.56 | 75.5 | 90.2 | |
| 160 | 96.10 | 84.17 | 0.02 | 84.47 | 82.0 | 90.5 | |
| 240 | 97.46 | 84.14 | 0.01 | 84.16 | 66.2 | 95.8 | |
| 320 | 98.07 | 83.77 | 0.02 | 84.04 | 83.0 | 88.2 |
| Dataset | Method | ASR | ACC | Recall | Precision |
| Cora | UGBA | 92.53 | 82.22 | - | - |
| RIGBD | 0.44 | 83.70 | 83.9 | 92.9 | |
| Citeseer | UGBA | 95.24 | 74.69 | - | - |
| RIGBD | 0.00 | 73.20 | 98.7 | 98.7 | |
| Pubmed | UGBA | 92.58 | 84.58 | - | - |
| RIGBD | 0.32 | 84.83 | 94.0 | 97.5 | |
| OGB-arxiv | UGBA | 85.29 | 65.38 | - | - |
| RIGBD | 0.00 | 65.92 | 92.7 | 99.8 |
| Dataset | Training | Test | | |
| | # of Time Steps | # of Sequences (|X|) | Avg. Len. (T) | Avg. # of Root Variables |
| Linear (4) | 5,000 | 100 | 500 | 3.75 |
| Nonlinear (6) | 5,000 | 100 | 500 | 5.25 |
| Lotka-Volterra (40) | 40,000 | 100 | 2,000 | 30.75 |
| Lorenz 96 (20) | 200,000 | 100 | 2,000 | 15.75 |
| SWaT (51) | 49,500 | 20 | 51 | 13.35 |
| MSDS (10) | 29,268 | 4,255 | 21 | 3.05 |
| Model | Linear | Nonlinear | ||||||
| F1 | AUC-PR | AUC-ROC | HD | F1 | AUC-PR | AUC-ROC | HD | |
| VAR | 0.969±0.019 | 0.998±0.003 | 0.999±0.001 | 0.011±0.009 | 0.473±0.164 | 0.529±0.181 | 0.676±0.140 | 0.258±0.130 |
| cMLP | 0.745±0.029 | 0.595±0.038 | 0.829±0.025 | 0.229±0.033 | 0.419±0.134 | 0.327±0.079 | 0.609±0.089 | 0.340±0.217 |
| cLSTM | 0.684±0.042 | 0.522±0.048 | 0.766±0.047 | 0.312±0.062 | 0.378±0.000 | 0.233±0.000 | 0.500±0.000 | 0.767±0.000 |
| TCDF | 0.943±0.070 | 0.933±0.081 | 0.950±0.061 | 0.033±0.040 | 0.473±0.107 | 0.343±0.072 | 0.655±0.087 | 0.307±0.065 |
| eSRU | 0.964±0.070 | 0.958±0.082 | 0.969±0.061 | 0.021±0.041 | 0.408±0.152 | 0.332±0.071 | 0.617±0.092 | 0.267±0.069 |
| PCMCI | 0.969±0.031 | 0.981±0.040 | 0.986±0.042 | 0.025±0.038 | 0.607±0.094 | 0.456±0.172 | 0.742±0.147 | 0.273±0.175 |
| PCMCI+ | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 | 0.000±0.000 | 0.505±0.141 | 0.410±0.133 | 0.669±0.134 | 0.233±0.109 |
| GVAR | 0.862±0.052 | 0.981±0.040 | 0.986±0.042 | 0.131±0.066 | 0.421±0.094 | 0.562±0.145 | 0.683±0.097 | 0.487±0.103 |
| CUTS | 0.810±0.076 | 0.792±0.066 | 0.844±0.050 | 0.104±0.034 | 0.357±0.040 | 0.249±0.014 | 0.536±0.032 | 0.513±0.124 |
| AERCA | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 | 0.000±0.000 | 0.826±0.057 | 0.996±0.013 | 0.998±0.006 | 0.027±0.014 |
| Model | Lotka-Volterra | Lorenz 96 | ||||||
| F1 | AUC-PR | AUC-ROC | HD | F1 | AUC-PR | AUC-ROC | HD | |
| VAR | 0.533±0.013 | 1.000±0.000 | 1.000±0.000 | 0.044±0.003 | 0.404±0.162 | 0.562±0.376 | 0.764±0.204 | 0.360±0.121 |
| cMLP | 0.511±0.011 | 0.065±0.014 | 0.508±0.007 | 0.049±0.001 | 0.472±0.058 | 0.202±0.027 | 0.569±0.038 | 0.193±0.031 |
| cLSTM | 0.356±0.176 | 0.052±0.001 | 0.500±0.000 | 0.400±0.428 | 0.453±0.048 | 0.194±0.021 | 0.572±0.031 | 0.232±0.035 |
| TCDF | 0.853±0.032 | 0.749±0.050 | 0.890±0.021 | 0.019±0.002 | 0.429±0.007 | 0.290±0.006 | 0.645±0.004 | 0.260±0.11 |
| eSRU | 0.422±0.039 | 0.323±0.030 | 0.634±0.016 | 0.055±0.002 | 0.195±0.024 | 0.225±0.009 | 0.539±0.009 | 0.215±0.006 |
| PCMCI | 0.465±0.025 | 0.291±0.019 | 0.906±0.017 | 0.109±0.008 | 0.368±0.004 | 0.227±0.007 | 0.680±0.013 | 0.540±0.021 |
| PCMCI+ | 0.709±0.027 | 0.651±0.121 | 0.851±0.082 | 0.024±0.005 | 0.502±0.020 | 0.329±0.022 | 0.709±0.017 | 0.163±0.009 |
| GVAR | 0.787±0.011 | 0.988±0.015 | 0.999±0.002 | 0.027±0.002 | 0.568±0.330 | 0.582±0.361 | 0.776±0.194 | 0.142±0.109 |
| CUTS | 0.877±0.031 | 0.791±0.047 | 0.892±0.024 | 0.011±0.002 | 0.341±0.003 | 0.206±0.002 | 0.621±0.004 | 0.404±0.012 |
| AERCA | 0.857±0.000 | 1.000±0.000 | 1.000±0.000 | 0.026±0.000 | 0.800±0.000 | 0.998±0.002 | 0.999±0.001 | 0.105±0.000 |
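The F1 and HD columns in the causal-discovery tables above compare an estimated causal adjacency matrix against the ground truth; HD is a (normalized) Hamming distance. A minimal sketch, assuming binary adjacency matrices and illustrative values only:

```python
# Hedged sketch: F1 and normalized Hamming distance (HD) between an
# estimated and a ground-truth binary causal adjacency matrix.

def f1_and_hd(est, true):
    # Flatten both matrices into aligned (estimated, true) edge pairs.
    flat = [(e, t) for row_e, row_t in zip(est, true)
            for e, t in zip(row_e, row_t)]
    tp = sum(1 for e, t in flat if e and t)
    fp = sum(1 for e, t in flat if e and not t)
    fn = sum(1 for e, t in flat if not e and t)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    hd = sum(1 for e, t in flat if e != t) / len(flat)  # fraction of mismatched edges
    return f1, hd

true_adj = [[0, 1], [0, 1]]
est_adj = [[0, 1], [1, 1]]  # one spurious edge
f1, hd = f1_and_hd(est_adj, true_adj)
```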
| Dataset | Model | AC@1 | AC@3 | AC@5 | AC@10 | Avg@10 |
| Linear | ε-Diagnosis | 0.900±0.300 | 0.850±0.189 | 1.000±0.000 | 1.000±0.000 | 0.950±0.043 |
| RCD | 0.500±0.500 | 0.817±0.189 | 1.000±0.000 | 1.000±0.000 | 0.907±0.076 | |
| CIRCA | 0.600±0.490 | 0.800±0.306 | 1.000±0.000 | 1.000±0.000 | 0.910±0.106 | |
| AERCA | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 | |
| Nonlinear | ε-Diagnosis | 0.400±0.490 | 0.667±0.325 | 0.880±0.165 | 1.000±0.000 | 0.837±0.139 |
| RCD | 0.600±0.490 | 0.750±0.344 | 0.880±0.165 | 1.000±0.000 | 0.878±0.118 | |
| CIRCA | 0.700±0.458 | 0.717±0.395 | 0.835±0.295 | 1.000±0.000 | 0.863±0.160 | |
| AERCA | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 | |
| Lotka-Volterra | ε-Diagnosis | 0.100±0.300 | 0.133±0.163 | 0.138±0.149 | 0.247±0.188 | 0.158±0.131 |
| RCD | 0.100±0.300 | 0.133±0.163 | 0.138±0.149 | 0.247±0.188 | 0.158±0.131 | |
| CIRCA | 0.120±0.325 | 0.107±0.169 | 0.120±0.150 | 0.225±0.230 | 0.146±0.163 | |
| AERCA | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 | |
| Lorenz96 | ε-Diagnosis | 0.100±0.300 | 0.200±0.221 | 0.280±0.312 | 0.450±0.330 | 0.314±0.225 |
| RCD | 0.200±0.400 | 0.333±0.333 | 0.400±0.358 | 0.556±0.337 | 0.421±0.278 | |
| CIRCA | 0.360±0.480 | 0.330±0.244 | 0.346±0.249 | 0.539±0.263 | 0.408±0.220 | |
| AERCA | 0.996±0.009 | 0.996±0.009 | 0.997±0.008 | 0.996±0.008 | 0.990±0.011 | |
| SWaT | ε-Diagnosis | 0.075±0.179 | 0.125±0.217 | 0.125±0.217 | 0.375±0.383 | 0.180±0.194 |
| RCD | 0.000±0.000 | 0.000±0.000 | 0.000±0.000 | 0.300±0.458 | 0.100±0.161 | |
| CIRCA | 0.000±0.000 | 0.000±0.000 | 0.000±0.000 | 0.300±0.458 | 0.100±0.161 | |
| AERCA | 0.220±0.111 | 0.290±0.088 | 0.330±0.048 | 0.455±0.044 | 0.342±0.052 | |
| MSDS | ε-Diagnosis | 0.004±0.004 | 0.266±0.002 | 0.452±0.009 | 1.000±0.000 | 0.492±0.001 |
| RCD | 0.412±0.048 | 0.573±0.010 | 0.984±0.001 | 1.000±0.000 | 0.821±0.012 | |
| CIRCA | 0.454±0.238 | 0.860±0.140 | 0.917±0.084 | 1.000±0.000 | 0.809±0.035 | |
| AERCA | 0.381±0.408 | 0.908±0.062 | 0.974±0.027 | 1.000±0.000 | 0.896±0.037 |
| Dataset | Model | AC* @ 1 | AC* @ 10 | AC* @ 100 | AC* @ 500 | Avg* @ 500 |
| Linear | ε-Diagnosis | 0.000±0.000 | 0.000±0.000 | 0.000±0.000 | 0.250±0.316 | 0.086±0.125 |
| RCD | 0.000±0.000 | 0.000±0.000 | 0.000±0.000 | 0.150±0.300 | 0.064±0.160 | |
| CIRCA | 0.000±0.000 | 0.000±0.000 | 0.025±0.075 | 0.233±0.327 | 0.088±0.144 | |
| AERCA | 0.763±0.137 | 0.990±0.018 | 1.000±0.000 | 1.000±0.000 | 0.998±0.001 | |
| Nonlinear | ε-Diagnosis | 0.000±0.000 | 0.000±0.000 | 0.000±0.000 | 0.092±0.142 | 0.056±0.088 |
| RCD | 0.000±0.000 | 0.000±0.000 | 0.080±0.240 | 0.263±0.405 | 0.116±0.215 | |
| CIRCA | 0.000±0.000 | 0.000±0.000 | 0.017±0.050 | 0.160±0.182 | 0.064±0.075 | |
| AERCA | 0.433±0.132 | 0.830±0.094 | 0.994±0.010 | 0.995±0.009 | 0.987±0.094 | |
| Lotka-Volterra | ε-Diagnosis | 0.000±0.000 | 0.000±0.000 | 0.000±0.000 | 0.000±0.000 | 0.000±0.000 |
| RCD | 0.000±0.000 | 0.000±0.000 | 0.000±0.000 | 0.000±0.000 | 0.000±0.000 | |
| CIRCA | 0.000±0.000 | 0.000±0.000 | 0.000±0.000 | 0.000±0.000 | 0.000±0.000 | |
| AERCA | 0.997±0.005 | 0.998±0.004 | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 | |
| Lorenz96 | ε-Diagnosis | 0.000±0.000 | 0.000±0.000 | 0.000±0.000 | 0.000±0.000 | 0.000±0.000 |
| RCD | 0.000±0.000 | 0.000±0.000 | 0.011±0.000 | 0.000±0.000 | 0.000±0.000 | |
| CIRCA | 0.000±0.000 | 0.000±0.000 | 0.000±0.000 | 0.000±0.000 | 0.000±0.000 | |
| AERCA | 0.842±0.016 | 0.970±0.013 | 0.996±0.009 | 0.996±0.009 | 0.987±0.010 | |
| SWaT | ε-Diagnosis | 0.075±0.179 | 0.125±0.217 | 0.125±0.217 | 1.000±0.000 | 0.633±0.128 |
| RCD | 0.000±0.000 | 0.025±0.043 | 0.214±0.066 | 0.799±0.070 | 0.416±0.028 | |
| CIRCA | 0.000±0.000 | 0.025±0.043 | 0.214±0.066 | 0.799±0.070 | 0.416±0.028 | |
| AERCA | 0.020±0.026 | 0.320±0.026 | 1.000±0.000 | 1.000±0.000 | 0.950±0.002 | |
| MSDS | ε-Diagnosis | 0.000±0.000 | 0.389±0.410 | 0.706±0.310 | 1.000±0.000 | 0.880±0.113 |
| RCD | 0.025±0.026 | 0.216±0.806 | 0.806±0.205 | 1.000±0.000 | 0.908±0.078 | |
| CIRCA | 0.000±0.000 | 0.102±0.108 | 0.741±0.273 | 1.000±0.000 | 0.884±0.083 | |
| AERCA | 0.230±0.004 | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 | 0.997±0.000 |
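The AC@k columns above follow the common root-cause-analysis convention: the fraction of true root-cause variables that appear in the top-k of the method's ranked candidate list. A hedged sketch with an illustrative ranking (variable names are invented for the example):

```python
# Hedged sketch of the AC@k metric: fraction of true root causes that
# appear among the top-k ranked candidates. Names below are illustrative.

def ac_at_k(ranked, root_causes, k):
    top_k = set(ranked[:k])
    return len(top_k & set(root_causes)) / len(root_causes)

ranked = ["x3", "x7", "x1", "x5", "x2"]  # candidates, most suspicious first
truth = {"x7", "x2"}
print(ac_at_k(ranked, truth, 3))  # x7 is in the top-3, x2 is not -> 0.5
```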
| Nikhila Ravi*,† | Valentin Gabeur* | Yuan-Ting Hu* | Ronghang Hu* | Chaitanya Ryali* |
| Tengyu Ma* | Haitham Khedr* | Roman Rädle* | Chloe Rolland | Laura Gustafson |
| Eric Mintun | Junting Pan | Kalyan Vasudev Alwala | Nicolas Carion | Chao-Yuan Wu |
| Ross Girshick | Piotr Dollár† | Christoph Feichtenhofer*,† |
| Model in the Loop | Time per Frame | Edited Frames | Clicks per Clicked Frame | Phase 1 Mask Alignment Score (IoU>0.75) | ||||
| All | Small | Medium | Large | |||||
| Phase 1 | SAM only | 37.8 s | 100.00 % | 4.80 | - | - | - | - |
| Phase 2 | SAM + SAM 2 Mask | 7.4 s | 23.25 % | 3.61 | 86.4 % | 71.3 % | 80.4 % | 97.9 % |
| Phase 3 | SAM 2 | 4.5 s | 19.04 % | 2.68 | 89.1 % | 72.8 % | 81.8 % | 100.0 % |
| Training data | SA-V val | 9 zero-shot |
| VOS + SA-1B | 50.0 | 62.5 |
| + Phase 1 | 53.0 | 66.9 |
| + Phase 2 | 58.8 | 70.9 |
| + Phase 3 | 62.5 | 71.2 |
| + Auto | 63.2 | 71.5 |
| | #Videos | Duration | #Masklets | #Masks | #Frames | Disapp. Rate |
| DAVIS 2017 (Pont-Tuset et al., 2017) | 0.2K | 0.1 hr | 0.4K | 27.1K | 10.7K | 16.1 % |
| YouTube-VOS (Xu et al., 2018b) | 4.5K | 5.6 hr | 8.6K | 197.3K | 123.3K | 13.0 % |
| UVO-dense (Wang et al., 2021b) | 1.0K | 0.9 hr | 10.2K | 667.1K | 68.3K | 9.2 % |
| VOST (Tokmakov et al., 2022) | 0.7K | 4.2 hr | 1.5K | 175.0K | 75.5K | 41.7 % |
| BURST (Athar et al., 2022) | 2.9K | 28.9 hr | 16.1K | 600.2K | 195.7K | 37.7 % |
| MOSE (Ding et al., 2023) | 2.1K | 7.4 hr | 5.2K | 431.7K | 638.8K | 41.5 % |
| Internal | 62.9K | 281.8 hr | 69.6K | 5.4M | 6.0M | 36.4 % |
| SA-V Manual | 50.9K | 196.0 hr | 190.9K | 10.0M | 4.2M | 42.5 % |
| SA-V Manual+Auto | 50.9K | 196.0 hr | 642.6K | 35.5M | 4.2M | 27.7 % |
| Method | 1-click | 3-click | 5-click | bounding box | ground-truth mask‡ |
| SAM+XMem++ | 56.9 | 68.4 | 70.6 | 67.6 | 72.7 |
| SAM+Cutie | 56.7 | 70.1 | 72.2 | 69.4 | 74.1 |
| SAM 2 | 64.7 | 75.3 | 77.6 | 74.4 | 79.3 |
| Model | Data | SA-23 All | SA-23 Image | SA-23 Video | 14 new Video | FPS |
| SAM | SA-1B | 58.1 (81.3) | 60.8 (82.1) | 54.5 (80.3) | 59.1 (83.4) | 21.7 |
| SAM 2 | SA-1B | 58.9 (81.7) | 60.8 (82.1) | 56.4 (81.2) | 56.6 (83.7) | 130.1 |
| SAM 2 | our mix | 61.9 (83.5) | 63.3 (83.8) | 60.1 (83.2) | 69.6 (85.8) | 130.1 |
| Method | J&F | G | ||||
| MOSE val | DAVIS 2017 val | LVOS val | SA-V val | SA-V test | YTVOS 2019 val | |
| STCN (Cheng et al., 2021a) | 52.5 | 85.4 | - | 61.0 | 62.5 | 82.7 |
| SwinB-AOT (Yang et al., 2021b) | 59.4 | 85.4 | - | 51.1 | 50.3 | 84.5 |
| SwinB-DeAOT (Yang & Yang, 2022) | 59.9 | 86.2 | - | 61.4 | 61.8 | 86.1 |
| RDE (Li et al., 2022a) | 46.8 | 84.2 | - | 51.8 | 53.9 | 81.9 |
| XMem (Cheng & Schwing, 2022) | 59.6 | 86.0 | - | 60.1 | 62.3 | 85.6 |
| SimVOS-B (Wu et al., 2023b) | - | 88.0 | - | 44.2 | 44.1 | 84.2 |
| JointFormer (Zhang et al., 2023b) | - | 90.1 | - | - | - | 87.4 |
| ISVOS (Wang et al., 2022) | - | 88.2 | - | - | - | 86.3 |
| DEVA (Cheng et al., 2023b) | 66.0 | 87.0 | 55.9 | 55.4 | 56.2 | 85.4 |
| Cutie-base (Cheng et al., 2023a) | 69.9 | 87.9 | 66.0 | 60.7 | 62.7 | 87.0 |
| Cutie-base+ (Cheng et al., 2023a) | 71.7 | 88.1 | - | 61.3 | 62.8 | 87.5 |
| SAM 2 (Hiera-B+) | 76.6 | 90.2 | 78.0 | 76.8 | 77.0 | 88.6 |
| SAM 2 (Hiera-L) | 77.9 | 90.7 | 78.0 | 77.9 | 78.4 | 89.3 |
| Training data | J&F | mIoU | |||||||
| VOS | Internal | SA-V | SA-1B | SA-V val | Internal-test | MOSE dev | 9 zero-shot | SA-23 | |
| 1 | ✓ | 48.1 | 60.2 | 76.9 | 59.7 | 45.4 | |||
| 2 | ✓ | 57.0 | 72.2 | 70.6 | 70.0 | 54.4 | |||
| 3 | ✓ | 63.0 | 72.6 | 72.8 | 69.7 | 53.0 | |||
| 4 | ✓ | ✓ | 62.9 | 73.2 | 73.6 | 69.7 | 58.6 | ||
| 5 | ✓ | ✓ | 63.0 | 73.2 | 73.3 | 70.9 | 55.8 | ||
| 6 | ✓ | ✓ | ✓ | 63.6 | 75.0 | 74.4 | 71.6 | 58.6 | |
| 7 | ✓ | ✓ | 50.0 | 63.2 | 77.6 | 62.5 | 54.8 | ||
| 8 | ✓ | ✓ | 54.9 | 71.5 | 77.9 | 70.6 | 55.1 | ||
| 9 | ✓ | ✓ | 61.6 | 72.8 | 78.3 | 69.9 | 51.0 | ||
| 10 | ✓ | ✓ | ✓ | 62.2 | 74.1 | 78.5 | 70.3 | 57.3 | |
| 11 | ✓ | ✓ | ✓ | 61.8 | 74.4 | 78.5 | 71.8 | 55.7 | |
| 12 | ✓ | ✓ | ✓ | ✓ | 63.1 | 73.7 | 79.0 | 71.6 | 58.9 |
| Setting | J&F | mIoU | |||
| SA-V val | Internal-test | MOSE dev | 9 zero-shot | SA-23 | |
| SA-1B + SA-V 50k random | 63.7 | 70.3 | 72.3 | 68.7 | 59.1 |
| SA-1B + SA-V 50k most edited | 66.2 | 73.0 | 72.5 | 69.2 | 58.6 |
| SA-1B + SA-V | 69.9 | 73.8 | 73.9 | 70.8 | 59.8 |
| res. | J&F | mIoU | |||
| MOSE dev | SA-V val | 9 zero-shot | speed | SA-23 | |
| 512 | 73.0 | 68.3 | 70.7 | 1.00× | 59.7 |
| 768 | 76.1 | 71.1 | 72.5 | 0.43× | 61.0 |
| 1024 | 77.0 | 70.1 | 72.3 | 0.22× | 61.5 |
| #mem. | J&F | mIoU | |||
| MOSE dev | SA-V val | 9 zero-shot | speed | SA-23 | |
| 4 | 73.5 | 68.6 | 70.5 | 1.01× | 59.9 |
| 6 | 73.0 | 68.3 | 70.7 | 1.00× | 59.7 |
| 8 | 73.2 | 69.0 | 70.7 | 0.93× | 59.9 |
| (#sa, #ca) | J&F | mIoU | |||
| MOSE dev | SA-V val | 9 zero-shot | speed | SA-23 | |
| (2, 2) | 73.3 | 67.3 | 70.2 | 1.13× | 59.9 |
| (3, 2) | 72.7 | 64.1 | 69.5 | 1.08× | 60.0 |
| (4, 4) | 73.0 | 68.3 | 70.7 | 1.00× | 59.7 |
| #frames | J&F | mIoU | |||
| MOSE dev | SA-V val | 9 zero-shot | speed | SA-23 | |
| 4 | 71.1 | 60.0 | 67.7 | 1.00× | 60.1 |
| 8 | 73.0 | 68.3 | 70.7 | 1.00× | 59.7 |
| 10 | 74.5 | 68.1 | 71.1 | 1.00× | 59.9 |
| chan. dim. | J&F | mIoU | |||
| MOSE dev | SA-V val | 9 zero-shot | speed | SA-23 | |
| 64 | 73.0 | 68.3 | 70.7 | 1.00× | 59.7 |
| 256 | 73.4 | 66.4 | 70.0 | 0.92× | 60.0 |
| img. enc. | J&F | mIoU | |||
| MOSE dev | SA-V val | 9 zero-shot | speed | SA-23 | |
| S | 70.9 | 65.5 | 69.4 | 1.33× | 57.8 |
| B+ | 73.0 | 68.3 | 70.7 | 1.00× | 59.7 |
| L | 75.0 | 66.3 | 71.9 | 0.60× | 61.1 |
| RPB | 2d-RoPE | J&F | mIoU | ||||
| MOSE dev | SA-V val | LVOSv2 val | 9 zero-shot | speed | SA-23 | ||
| ✓ | 73.0 | 68.3 | 71.6 | 70.7 | 1.00× | 59.7 | |
| ✓ | ✓ | 73.6 | 67.9 | 71.0 | 71.5 | 0.93× | 60.0 |
| 72.8 | 67.1 | 70.3 | 70.3 | 1.04× | 59.9 | ||
| Object Pointers | GRU | J&F | mIoU | ||||
| MOSE dev | SA-V val | LVOSv2 val | 9 zero-shot | speed | SA-23 | ||
| 73.1 | 64.5 | 67.0 | 70.9 | 1.00× | 59.9 | ||
| ✓ | 72.3 | 65.3 | 68.9 | 70.5 | 0.97× | 60.0 | |
| ✓ | 73.0 | 68.3 | 71.6 | 70.7 | 1.00× | 59.7 | |
| config | value |
| data | SA-1B |
| steps | ~90k |
| resolution | 1024 |
| precision | float16 |
| optimizer | AdamW |
| optimizer momentum | β1, β2=0.9, 0.999 |
| gradient clipping | type: ℓ2, max: 0.1 |
| weight decay | 0.1 |
| learning rate (lr) | 4e-4 |
| lr schedule | reciprocal sqrt, timescale=1000 |
| warmup | linear, 1k iters |
| cooldown | linear, 5k iters |
| layer-wise decay | 0.8 (T, S), 0.9 (B+), 0.925 (L) |
| augmentation | hflip, resize to 1024 (square) |
| batch size | 256 |
| drop path | 0.1 (T, S), 0.2 (B+), 0.3 (L) |
| mask losses (weight) | focal (20), dice (1) |
| IoU loss (weight) | ℓ1(1) |
| max # masks per image | 64 |
| # correction points | 7 |
| global attn. blocks | 5-7-9 (T), 7-10-13 (S), 12-16-20 (B+), 23-33-43 (L) |
| (a) Pre-training | |
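The pre-training config above specifies a reciprocal-sqrt learning-rate schedule (timescale 1000) with a 1k-iteration linear warmup and a 5k-iteration linear cooldown. A minimal sketch of one plausible reading of that schedule (the exact formula may differ from the authors' training code):

```python
# Hedged sketch: reciprocal-sqrt LR schedule with linear warmup and cooldown,
# using the hyperparameters listed in the pre-training config table.

def lr_at(step, base_lr=4e-4, warmup=1_000, timescale=1_000,
          total=90_000, cooldown=5_000):
    if step < warmup:
        scale = step / warmup  # linear warmup from 0
    else:
        scale = (timescale / (step - warmup + timescale)) ** 0.5  # 1/sqrt decay
    if step > total - cooldown:
        scale *= (total - step) / cooldown  # linear cooldown to 0 at the end
    return base_lr * scale

for s in (500, 1_000, 5_000, 89_000):
    print(s, lr_at(s))
```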
| config | value |
| data | SA-1B, Internal, SA-V |
| steps | ~150k |
| resolution | 1024 |
| precision | bfloat16 |
| optimizer | AdamW |
| optimizer momentum | β1, β2=0.9, 0.999 |
| gradient clipping | type: ℓ2, max: 0.1 |
| weight decay | 0.1 |
| learning rate (lr) | img. enc.: 6e-5, other: 3.0e-4 |
| lr schedule | cosine |
| warmup | linear, 7.5k iters |
| layer-wise decay | 0.8 (T, S), 0.9 (B+), 0.925 (L) |
| image augmentation | hflip, resize to 1024 (square) |
| video augmentation | hflip, affine (deg: 25, shear: 20), colorjitter (b: 0.1, c: 0.03, s: 0.03, h: null), grayscale (0.05), per frame colorjitter (b: 0.1, c: 0.05, s: 0.05, h: null), mosaic-2×2 (0.1) |
| batch size | 256 |
| drop path | 0.1 (T, S), 0.2 (B+), 0.3 (L) |
| mask losses (weight) | focal (20), dice (1) |
| IoU loss (weight) | ℓ1(1) |
| occlusion loss (weight) | cross-entropy (1) |
| max # masks per frame | image: 64, video: 3 |
| # correction points | 7 |
| global attn. blocks | 5-7-9 (T), 7-10-13 (S), 12-16-20 (B+), 23-33-43 (L) |
| (b) Full training | |
| | 1-click | 3-click | mask |
| gender | |||
| male | 81.9 | 95.1 | 95.9 |
| female | 75.1 | 94.1 | 95.2 |
| age | |||
| 18-26 | 77.2 | 95.0 | 95.7 |
| 26-50 | 76.7 | 94.7 | 95.8 |
| 50+ | 81.4 | 95.1 | 96.2 |
| Method | EndoVis 2018 | ESD | LVOSv2 | LV-VIS | PUMaVOS | UVO | VIPSeg | Virtual KITTI 2 | VOST | (average) |
| SAM + XMem++ | 68.9 | 88.2 | 72.1 | 86.4 | 60.2 | 74.5 | 84.2 | 63.8 | 46.6 | 71.7 |
| SAM + Cutie | 71.8 | 87.6 | 82.1 | 87.1 | 59.4 | 75.2 | 84.4 | 70.3 | 54.3 | 74.7 |
| SAM 2 | 77.0 | 90.2 | 87.9 | 90.3 | 68.5 | 79.2 | 88.3 | 74.1 | 67.5 | 80.3 |
| Method | EndoVis 2018 | ESD | LVOSv2 | LV-VIS | PUMaVOS | UVO | VIPSeg | Virtual KITTI 2 | VOST | (average) |
| SAM + XMem++ | 71.4 | 87.8 | 72.9 | 85.2 | 63.7 | 74.7 | 82.5 | 63.9 | 52.7 | 72.8 |
| SAM + Cutie | 70.5 | 87.3 | 80.6 | 86.0 | 58.9 | 75.2 | 82.1 | 70.4 | 54.6 | 74.0 |
| SAM 2 | 77.5 | 88.9 | 87.8 | 88.7 | 72.7 | 78.6 | 85.5 | 74.0 | 65.0 | 79.8 |
| Method | input type | AUC-J&F | J&F@60s |
| MA-Net (Miao et al., 2020) | scribbles | 0.79 | 0.79 |
| MiVOS (Cheng et al., 2021b) | scribbles | 0.87 | 0.88 |
| MiVOS (Cheng et al., 2021b)‡ | clicks | 0.75 | 0.75 |
| CiVOS (Vujasinović et al., 2022) | clicks | 0.83 | 0.84 |
| SAM 2 | clicks | 0.86 | 0.90 |
| Method | training data | G | λ (lower is better) |
| PAOT (Xu et al., 2023) | VIPOSeg + YouTube-VOS + DAVIS | 77.9 | 0.73 |
| PAOT (Xu et al., 2023) | VIPOSeg | 78.2 | 0.70 |
| SAM 2 (zero-shot) | our mix | 78.4 | 0.68 |
| SAM 2 (fine-tuned on VIPOSeg) | our mix + VIPOSeg | 79.7 | 0.67 |
| Model | Data | SA-23 All | SA-23 Image | SA-23 Video | 14 new Video | FPS |
| SAM (ViT-B) | SA-1B | 55.9 (80.9) | 57.4 (81.3) | 54.0 (80.4) | 54.5 (82.6) | 76.7 |
| SAM (ViT-H) | SA-1B | 58.1 (81.3) | 60.8 (82.1) | 54.5 (80.3) | 59.1 (83.4) | 21.7 |
| HQ-SAM (ViT-B) | HQSEG-44k | 53.9 (72.1) | 56.3 (73.9) | 50.7 (69.9) | 54.5 (75.0) | 73.5 |
| HQ-SAM (ViT-H) | HQSEG-44k | 59.1 (79.8) | 61.8 (80.5) | 55.7 (78.9) | 58.9 (81.6) | 21.4 |
| SAM 2 (Hiera-B+) | SA-1B | 58.9 (81.7) | 60.8 (82.1) | 56.4 (81.2) | 56.6 (83.7) | 130.1 |
| SAM 2 (Hiera-L) | SA-1B | 60.0 (81.8) | 62.0 (82.2) | 57.4 (81.2) | 58.5 (83.8) | 61.4 |
| SAM 2 (Hiera-B+) | our mix | 61.9 (83.5) | 63.3 (83.8) | 60.1 (83.2) | 69.6 (85.8) | 130.1 |
| SAM 2 (Hiera-L) | our mix | 63.6 (83.5) | 64.7 (83.7) | 62.2 (83.2) | 71.1 (85.7) | 61.4 |
| Method | SA-V test | MOSE val | DAVIS 2017 test | LVOS val | 17 zero-shot datasets (§6.2) | | |
| | mask | mask | mask | mask | 5-click | box | mask |
| Cutie | 62.2 | 68.3 | 85.3 | 63.5 | 71.7 | 68.6 | 73.5 |
| Cutie + SA-V | 68.7 | 67.6 | 84.5 | 65.0 | 73.4 | 70.4 | 75.5 |
| SAM 2 | 78.4 | 77.9 | 87.7 | 78.0 | 77.6 | 74.4 | 79.3 |
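The J&F scores throughout these tables average region similarity (J, the Jaccard index between predicted and ground-truth masks) and contour accuracy (F). As a hedged sketch of the J component only, on tiny illustrative 0/1 grids (in practice it is averaged over frames and objects):

```python
# Hedged sketch: the J (region similarity) part of J&F, i.e. the Jaccard
# index (IoU) of a predicted vs. ground-truth binary mask.

def jaccard(pred, gt):
    inter = sum(p & g for row_p, row_g in zip(pred, gt)
                for p, g in zip(row_p, row_g))
    union = sum(p | g for row_p, row_g in zip(pred, gt)
                for p, g in zip(row_p, row_g))
    return inter / union if union else 1.0  # both masks empty -> perfect

pred = [[1, 1, 0],
        [0, 1, 0]]
gt = [[1, 1, 0],
      [0, 0, 1]]
print(jaccard(pred, gt))  # 2 overlapping pixels / 4 in union = 0.5
```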
| dataset | abbreviation & link | video type | description | annotation type | source split | # videos sampled | # masklets sampled | # frames sampled | # masks sampled |
| LV-VIS (Wang et al., 2023) | LV-VIS | Open Vocabulary | Large scale open vocabulary video instance segmentation | Dense | Validation | 690 | 2536 | 15,604 | 54,077 |
| A Drosophila heart optical coherence microscopy database for automatic video segmentation (Fishman et al., 2023) | DH OCM | Microscopy; heart | Segmentation of a fruit fly heart in optical coherence microscopy videos | Sparse | All | 213 | 213 | 608,000 | 607,158 |
| Video Object Segmentation under Transformations (Tokmakov et al., 2022) | VOST | Deformation | Video object segmentation with emphasis on shape transformations | Dense | Validation | 635 | 882 | 67,004 | 89,722 |
| Cityscapes-VPS (Cordts et al., 2016; Kim et al., 2020b) | Cityscapes-VPS | Driving | Panoptic segmentation for Cityscapes driving dataset | Sparse | Validation Instance | 209 | 1372 | 4,259 | 4,579 |
| Corsican Fire Database (Toulouse et al., 2017) | CFD | Wildfire | Segmentation of wildfires | Sparse | All | 5 | 5 | 541 | 541 |
| Partial and Unusual Masks for Video Object Segmentation (Bekuzarov et al., 2023) | PUMaVOS | Parts | Video object segmentation with a focus on parts and practical use cases | Dense | All | 24 | 26 | 21,187 | 21,485 |
| EPIC-KITCHENS VISOR (Darkhalil et al., 2022) | VISOR | Egocentric | Video object segmentation benchmark containing egocentric videos of cooking with an emphasis on segmenting active objects. | Sparse | Validation | 921 | 921 | 736,030 | 4,426 |
| HT1080WT cells embedded in 3D collagen type I matrices (Gómez-de Mariscal et al., 2021) | HT1080WT | Microscopy; cells | Timelapse videos of HT1080WT cell movement | Sparse | All | 60 | 150 | 1,010 | 2,694 |
| Freiburg-Berkeley Motion Segmentation Dataset (Brox et al., 2010) | FBMS | Moving Object | Precise segmentation of moving objects | Sparse | All | 45 | 70 | 9,734 | 755 |
| Virtual KITTI 2 (Cabon et al., 2020) | Virtual KITTI 2 | Synthetic; Driving | Synthetic driving videos generated by a game engine that recreate real world KITTI videos. | Dense | All from video angle Camera_0 | 996 | 1,638 | 109,368 | 162,708 |
| EndoVis 2018 (Allan et al., 2020) | EndoVis 2018 | Endoscopic video; surgery | Segmentation of medical tools in endoscopic videos | Dense | All | 15 | 29 | 2,325 | 4,314 |
| Lindenthal Camera Traps (Haucke & Steinhage, 2021) | LCT | Stereo | Wildlife videos captured using stereo cameras. | Sparse | All | 12 | 12 | 4,012 | 412 |
| LVOSv2 (Hong et al., 2024) | LVOSv2 | Long videos | Long-term video object segmentation benchmark, on average 1.14 minutes | Dense | Validation | 136 | 225 | 64,523 | 91,510 |
| UVO (Wang et al., 2021b) | UVO | Open World | Open World instance segmentation of all objects in a video | Dense | Validation | 54 | 311 | 4,860 | 26,747 |
| EgoExo4d (Grauman et al., 2023) | EgoExo4d | Egocentric | Egocentric videos of participants completing skilled activities. | Sparse | Validation videos on egocentric cameras | 1185 | 1185 | 327,080 | 9,035 |
| VIPSeg (Miao et al., 2022) | VIPSeg | Panoptic | Large scale and real world scenarios for video panoptic segmentation | Dense | Validation | 152 | 1,457 | 3,416 | 30,408 |
| Event-based Segmentation Dataset (Huang et al., 2023) | ESD | Clutter | Tabletop object segmentation in an indoor cluttered environment | Dense | All | 135 | 814 | 13,325 | 78,243 |
| Method | SA-V val | SA-V test | ||||
| J&F | J | F | J&F | J | F | |
| STCN (Cheng et al., 2021a) | 61.0 | 57.4 | 64.5 | 62.5 | 59.0 | 66.0 |
| SwinB-AOT-L (Yang et al., 2021b) | 51.1 | 46.4 | 55.7 | 50.3 | 46.0 | 54.6 |
| SwinB-DeAOT-L (Yang & Yang, 2022) | 61.4 | 56.6 | 66.2 | 61.8 | 57.2 | 66.3 |
| RDE (Li et al., 2022a) | 51.8 | 48.4 | 55.2 | 53.9 | 50.5 | 57.3 |
| XMem (Cheng & Schwing, 2022) | 60.1 | 56.3 | 63.9 | 62.3 | 58.9 | 65.8 |
| SimVOS-B (Wu et al., 2023b) | 44.2 | 40.0 | 48.3 | 44.1 | 40.5 | 47.7 |
| DEVA (Cheng et al., 2023b) | 55.4 | 51.5 | 59.2 | 56.2 | 52.4 | 60.1 |
| Cutie-base (Cheng et al., 2023a) | 60.7 | 57.7 | 63.7 | 62.7 | 59.7 | 65.7 |
| Cutie-base+ (Cheng et al., 2023a) | 61.3 | 58.3 | 64.4 | 62.8 | 59.8 | 65.8 |
| SAM 2 (Hiera-B+) | 76.8 | 73.4 | 80.1 | 77.0 | 73.5 | 80.5 |
| SAM 2 (Hiera-L) | 77.9 | 74.5 | 81.3 | 78.4 | 74.9 | 82.0 |
| SAM 2 (Hiera-T)‡ | 75.2 | 71.7 | 78.7 | 76.5 | 72.9 | 80.1 |
| SAM 2 (Hiera-S)‡ | 77.0 | 73.6 | 80.5 | 76.6 | 73.0 | 80.2 |
| SAM 2 (Hiera-B+)‡ | 77.5 | 74.1 | 80.9 | 78.2 | 74.8 | 81.7 |
| SAM 2 (Hiera-L)‡ | 78.6 | 75.3 | 82.0 | 79.5 | 76.0 | 83.0 |
| Method | LVOS val | | | LVOSv2 val | | | | |
| | J&F | J | F | J&F | Js | Fs | Ju | Fu |
| STCN (Cheng et al., 2021a) | 60.6 | 57.2 | 64.0 | 57.5 | 63.8 | ||||
| RDE (Li et al., 2022a) | 62.2 | 56.7 | 64.1 | 60.8 | 67.2 | ||||
| SwinB-DeAOT-L (Yang & Yang, 2022) | 63.9 | 61.5 | 69.0 | 58.4 | 66.6 | ||||
| XMem (Cheng & Schwing, 2022) | 64.5 | 62.6 | 69.1 | 60.6 | 65.6 | ||||
| SAM 2 (Hiera-B+) | 78.0 | 73.2 | 82.7 | 78.7 | 80.6 | 87.2 | 69.0 | 77.8 | |
| SAM 2 (Hiera-L) | 78.0 | 73.2 | 82.7 | 79.6 | 81.0 | 87.4 | 70.4 | 79.8 | |
| SAM 2 (Hiera-T)‡ | 77.5 | 73.0 | 82.1 | 77.3 | 79.9 | 86.9 | 67.1 | 75.4 | |
| SAM 2 (Hiera-S)‡ | 77.3 | 72.3 | 82.2 | 78.3 | 79.2 | 85.9 | 69.0 | 79.0 | |
| SAM 2 (Hiera-B+)‡ | 77.7 | 73.1 | 82.4 | 78.2 | 80.5 | 87.2 | 68.1 | 76.9 | |
| SAM 2 (Hiera-L)‡ | 80.1 | 75.4 | 84.9 | 80.6 | 81.7 | 88.2 | 71.4 | 81.0 | |
| Method | MOSE val | DAVIS17 val | DAVIS17 test | YTVOS19 val | ||||||||||
| J&F | J | F | J&F | J | F | J&F | J | F | G | Js | Fs | Ju | Fu | |
| STCN (Cheng et al., 2021a) | 52.5 | 48.5 | 56.6 | 85.4 | 82.2 | 88.6 | 76.1 | 72.7 | 79.6 | 82.7 | 81.1 | 85.4 | 78.2 | 85.9 |
| SwinB-AOT-L (Yang et al., 2021b) | 59.4 | 55.5 | 63.2 | 85.4 | 82.4 | 88.4 | 81.2 | 77.3 | 85.1 | 84.5 | 84.0 | 88.8 | 78.4 | 86.7 |
| SwinB-DeAOT-L (Yang & Yang, 2022) | 59.9 | 55.7 | 64.0 | 86.2 | 83.1 | 89.2 | 82.8 | 78.9 | 86.7 | 86.1 | 85.3 | 90.2 | 80.4 | 88.6 |
| RDE (Li et al., 2022a) | 46.8 | 42.4 | 51.3 | 84.2 | 80.8 | 87.5 | 77.4 | 73.6 | 81.2 | 81.9 | 81.1 | 85.5 | 76.2 | 84.8 |
| XMem (Cheng & Schwing, 2022) | 59.6 | 55.4 | 63.7 | 86.0 | 82.8 | 89.2 | 79.6 | 76.1 | 83.0 | 85.6 | 84.1 | 88.5 | 81.0 | 88.9 |
| SimVOS-B (Wu et al., 2023b) | - | - | - | 88.0 | 85.0 | 91.0 | 80.4 | 76.1 | 84.6 | 84.2 | 83.1 | - | 79.1 | - |
| JointFormer (Zhang et al., 2023b) | - | - | - | 90.1 | 87.0 | 93.2 | 88.1 | 84.7 | 91.6 | 87.4 | 86.5 | 90.9 | 82.0 | 90.3 |
| ISVOS (Wang et al., 2022) | - | - | - | 88.2 | 84.5 | 91.9 | 84.0 | 80.1 | 87.8 | 86.3 | 85.2 | 89.7 | 81.0 | 89.1 |
| DEVA (Cheng et al., 2023b) | 66.0 | 61.8 | 70.3 | 87.0 | 83.8 | 90.2 | 82.6 | 78.9 | 86.4 | 85.4 | 84.9 | 89.4 | 79.6 | 87.8 |
| Cutie-base (Cheng et al., 2023a) | 69.9 | 65.8 | 74.1 | 87.9 | 84.6 | 91.1 | 86.1 | 82.4 | 89.9 | 87.0 | 86.0 | 90.5 | 82.0 | 89.6 |
| Cutie-base+ (Cheng et al., 2023a) | 71.7 | 67.6 | 75.8 | 88.1 | 85.5 | 90.8 | 88.1 | 84.7 | 91.4 | 87.5 | 86.3 | 90.6 | 82.7 | 90.5 |
| SAM 2 (Hiera-B+) | 76.6 | 72.6 | 80.6 | 90.2 | 87.0 | 93.4 | 87.9 | 84.7 | 91.1 | 88.6 | 87.1 | 91.6 | 83.9 | 91.9 |
| SAM 2 (Hiera-L) | 77.9 | 73.9 | 81.9 | 90.7 | 87.5 | 94.0 | 87.7 | 84.6 | 90.9 | 89.3 | 87.5 | 92.0 | 84.8 | 92.8 |
| SAM 2 (Hiera-T)‡ | 71.8 | 67.4 | 76.1 | 89.4 | 85.8 | 92.9 | 86.9 | 83.4 | 90.3 | 87.4 | 85.7 | 90.1 | 82.7 | 91.1 |
| SAM 2 (Hiera-S)‡ | 73.5 | 69.2 | 77.7 | 89.6 | 86.3 | 92.9 | 87.6 | 84.1 | 91.1 | 88.0 | 85.9 | 90.2 | 83.9 | 91.9 |
| SAM 2 (Hiera-B+)‡ | 73.8 | 69.7 | 77.9 | 90.0 | 86.8 | 93.1 | 86.6 | 83.3 | 89.8 | 88.2 | 86.1 | 90.6 | 84.0 | 92.1 |
| SAM 2 (Hiera-L)‡ | 74.6 | 70.6 | 78.6 | 90.2 | 86.9 | 93.4 | 88.9 | 85.3 | 92.5 | 88.8 | 86.5 | 91.0 | 84.7 | 92.8 |
| Name | SAM 2 (Segment Anything Model 2) |
| Version | 1.0 |
| Date | 2024 |
| Organization | Meta FAIR |
| Model type | Promptable segmentation model |
| Architecture | See Section 4 |
| Repository | https://github.com/facebookresearch/sam2 |
| License | Apache 2.0 |
| Intended Use | |
| Primary intended users | SAM 2 was designed as a unified model for promptable video and image segmentation tasks. The model was primarily developed for research use cases. SAM 2 is released under an Apache 2.0 license. |
| Out-of-scope use cases | See Ethical considerations and license for restrictions. |
| Caveats and recommendations | See Appendix C for limitations. |
| Relevant Factors | |
| Groups | SAM 2 is class agnostic and was designed for promptable image and video segmentation. It can segment and track any object. |
| Instrumentation and environment | SAM 2 was evaluated across a variety of types of video and image data. The video benchmark suite included domains such as driving data, microscopy, egocentric video, and robotic surgery. See Table 18 for descriptions of the benchmarks and Figure 18 for example frames. SAM 2 was evaluated on the same suite of image benchmarks as Kirillov et al. (2023), which covers domains including underwater images, paintings, and fish-eye images. |
| Metrics | |
| Evaluation metrics | We evaluate the performance of SAM 2 using the following metrics: J&F (Pont-Tuset et al., 2017) for the promptable video segmentation and semi-supervised VOS tasks; G for evaluation on YTVOS 2019 for the semi-supervised VOS task; and mIoU for the promptable image segmentation task. |
| Evaluation data | See Appendix F. |
| Training Data | |
| Data source | SAM 2 was trained on the SA-V dataset alongside internally available licensed video data. See Section 5 of the main text for more details and Appendix J.2 for the SA-V dataset data card. |
| Ethical Considerations | |
| Data | See Section 5 for more details about the SAM 2 training data. In Section E.1 we show a geographic distribution of the videos and demographic distribution of the crowdworkers who collected the videos in the SA-V dataset. |
| Cost and impact of compute | The released SAM 2 was trained on 256 A100 GPUs for 108 hours. This corresponds to 12,165.12 kWh and an estimated emissions of 3.89 metric tons of CO2e (Patterson et al., 2021; Lacoste et al., 2019). The emissions from training the released SAM 2 are equivalent to ~10k miles driven by an average gasoline-powered passenger vehicle (Agency, 2022). |
| Risks and harms | In Section E.1.1 of the main text we analyze SAM 2 performance on people across demographic groups. When using SAM 2 in new settings, we suggest that researchers perform their own fairness evaluation for SAM 2 specific to their use case. |
| Use cases | We implore users to use their best judgement. |
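The compute figures in the model card above are internally consistent; a quick back-of-the-envelope check. Note that the per-GPU average draw (~0.44 kW) and the grid carbon intensity (~0.32 kgCO2e/kWh) are inferred from the stated totals, not quoted from the card itself:

```python
# Sanity-check the model card's compute figures.
# avg_power_kw and kg_co2e_per_kwh are values implied by the stated
# totals (12,165.12 kWh; 3.89 t CO2e), not taken from the source.
gpus, hours = 256, 108
avg_power_kw = 0.44                      # implied average draw per A100
energy_kwh = gpus * hours * avg_power_kw
tons_co2e = energy_kwh * 0.32 / 1000     # 0.32 kgCO2e per kWh, implied

assert round(energy_kwh, 2) == 12165.12
assert round(tons_co2e, 2) == 3.89
```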
| Schedule | FID ↓ | CLIP ↑ | Iterations |
| DDPM | 19.5 | 24.6 | 120K |
| Flow Matching | 16.9 | 25.7 | 120K |
| Prompt Strategy | FID ↓ | CLIP ↑ | Iterations |
| Single | 6.13 | 27.10 | 65K |
| Multi-random | 6.15 | 27.13 | 65K |
| Multi-clipscore | 6.12 | 27.26 | 65K |
| Methods | Latency (s) | CLIP-Score ↑ | Image-Reward ↑ |
| Sana (FP16) | 0.88 | 28.5 | 1.03 |
| + W8A8 Quantization | 0.37 | 28.3 | 0.96 |
| Model | Width | Depth | FFN | #Heads | #Param (M) |
| Sana-0.6B | 1152 | 28 | 2880 | 36 | 590 |
| Sana-1.6B | 2240 | 20 | 5600 | 70 | 1604 |
| Methods | Throughput (samples/s) | Latency (s) | Params (B) | Speedup | FID ↓ | CLIP ↑ | GenEval ↑ | DPG ↑ |
| 512 × 512 resolution | ||||||||
| PixArt-α (Chen et al., 2024b) | 1.5 | 1.2 | 0.6 | 1.0× | 6.14 | 27.55 | 0.48 | 71.6 |
| PixArt-Σ (Chen et al., 2024a) | 1.5 | 1.2 | 0.6 | 1.0× | 6.34 | 27.62 | 0.52 | 79.5 |
| Sana-0.6B | 6.7 | 0.8 | 0.6 | 5.0× | 5.67 | 27.92 | 0.64 | 84.3 |
| Sana-1.6B | 3.8 | 0.6 | 1.6 | 2.5× | 5.16 | 28.19 | 0.66 | 85.5 |
| 1024 × 1024 resolution | ||||||||
| LUMINA-Next (Zhuo et al., 2024) | 0.12 | 9.1 | 2.0 | 2.8× | 7.58 | 26.84 | 0.46 | 74.6 |
| SDXL (Podell et al., 2023) | 0.15 | 6.5 | 2.6 | 3.5× | 6.63 | 29.03 | 0.55 | 74.7 |
| PlayGroundv2.5 (Li et al., 2024a) | 0.21 | 5.3 | 2.6 | 4.9× | 6.09 | 29.13 | 0.56 | 75.5 |
| Hunyuan-DiT (Li et al., 2024c) | 0.05 | 18.2 | 1.5 | 1.2× | 6.54 | 28.19 | 0.63 | 78.9 |
| PixArt-Σ (Chen et al., 2024a) | 0.4 | 2.7 | 0.6 | 9.3× | 6.15 | 28.26 | 0.54 | 80.5 |
| DALLE3 (OpenAI, 2023) | - | - | - | - | - | - | 0.67 | 83.5 |
| SD3-medium (Esser et al., 2024) | 0.28 | 4.4 | 2.0 | 6.5× | 11.92 | 27.83 | 0.62 | 84.1 |
| FLUX-dev (Labs, 2024) | 0.04 | 23.0 | 12.0 | 1.0× | 10.15 | 27.47 | 0.67 | 84.0 |
| FLUX-schnell (Labs, 2024) | 0.5 | 2.1 | 12.0 | 11.6× | 7.94 | 28.14 | 0.71 | 84.8 |
| Sana-0.6B | 1.7 | 0.9 | 0.6 | 39.5× | 5.81 | 28.36 | 0.64 | 83.6 |
| Sana-1.6B | 1.0 | 1.2 | 1.6 | 23.3× | 5.76 | 28.67 | 0.66 | 84.8 |
| Blocks | AE | MACs (T) | TP (/s) | Latency (ms) |
| FullAttn & FFN | F8C4P2 | 6.48 | 0.49 | 2250 |
| + LinearAttn | F8C4P2 | 4.30 | 0.52 | 1931 |
| + MixFFN | F8C4P2 | 4.19 | 0.46 | 2425 |
| + Kernel Fusion | F8C4P2 | 4.19 | 0.53 | 2139 |
| LinearAttn & MixFFN | F32C32P1 | 1.08 | 1.75 | 826 |
| +Kernel Fusion | F32C32P1 | 1.08 | 2.06 | 748 |
| Text-Encoder | #Params (M) | Latency (s) | FID ↓ | CLIP ↑ | GenEval ↑ | HPSv2 ↑ |
| T5-XXL | 4762 | 1.61 | 6.1 | 27.1 | 0.51 | 27.95 |
| T5-Large | 341 | 0.17 | 6.1 | 26.2 | 0.39 | 27.57 |
| Gemma2-2B | 2614 | 0.28 | 6.0 | 26.9 | 0.45 | 27.92 |
| Gemma-2B-IT | 2506 | 0.21 | 5.9 | 26.8 | 0.47 | 27.78 |
| Gemma2-2B-IT | 2614 | 0.28 | 6.1 | 26.9 | 0.50 | 27.92 |
| Training Step | CHI | SHI |
| +2K steps | 0.636 | 0.617 |
| +5K steps | 0.642 | 0.626 |
| Model | Params (B) | Overall ↑ | Single Object | Two Objects | Counting | Colors | Position | Color Attribution |
| 512 × 512 resolution | ||||||||
| PixArt-α | 0.6 | 0.48 | 0.98 | 0.50 | 0.44 | 0.80 | 0.08 | 0.07 |
| PixArt-Σ | 0.6 | 0.52 | 0.98 | 0.59 | 0.50 | 0.80 | 0.10 | 0.15 |
| Sana-0.6B (Ours) | 0.6 | 0.64 | 0.99 | 0.71 | 0.63 | 0.91 | 0.16 | 0.42 |
| Sana-1.6B (Ours) | 1.6 | 0.66 | 0.99 | 0.79 | 0.63 | 0.88 | 0.18 | 0.47 |
| 1024 × 1024 resolution | ||||||||
| LUMINA-Next (Zhuo et al., 2024) | 2.0 | 0.46 | 0.92 | 0.46 | 0.48 | 0.70 | 0.09 | 0.13 |
| SDXL (Podell et al., 2023) | 2.6 | 0.55 | 0.98 | 0.74 | 0.39 | 0.85 | 0.15 | 0.23 |
| PlayGroundv2.5 (Li et al., 2024a) | 2.6 | 0.56 | 0.98 | 0.77 | 0.52 | 0.84 | 0.11 | 0.17 |
| Hunyuan-DiT (Li et al., 2024c) | 1.5 | 0.63 | 0.97 | 0.77 | 0.71 | 0.88 | 0.13 | 0.30 |
| DALLE3 (OpenAI, 2023) | - | 0.67 | 0.96 | 0.87 | 0.47 | 0.83 | 0.43 | 0.45 |
| SD3-medium (Esser et al., 2024) | 2.0 | 0.62 | 0.98 | 0.74 | 0.63 | 0.67 | 0.34 | 0.36 |
| FLUX-dev (Labs, 2024) | 12.0 | 0.67 | 0.99 | 0.81 | 0.79 | 0.74 | 0.20 | 0.47 |
| FLUX-schnell (Labs, 2024) | 12.0 | 0.71 | 0.99 | 0.92 | 0.73 | 0.78 | 0.28 | 0.54 |
| Sana-0.6B (Ours) | 0.6 | 0.64 | 0.99 | 0.76 | 0.64 | 0.88 | 0.18 | 0.39 |
| Sana-1.6B (Ours) | 1.6 | 0.66 | 0.99 | 0.77 | 0.62 | 0.88 | 0.21 | 0.47 |
| Model | Params (B) | Overall ↑ | Global | Entity | Attribute | Relation | Other | ImageReward ↑ |
| 512 × 512 resolution | ||||||||
| PixArt-α (Chen et al., 2024b) | 0.6 | 71.6 | 81.7 | 80.1 | 80.4 | 81.7 | 76.5 | 0.92 |
| PixArt-Σ (Chen et al., 2024a) | 0.6 | 79.5 | 87.5 | 87.1 | 86.5 | 84.0 | 86.1 | 0.97 |
| Sana-0.6B (Ours) | 0.6 | 84.3 | 82.6 | 90.0 | 88.6 | 90.1 | 91.9 | 0.93 |
| Sana-1.6B (Ours) | 1.6 | 85.5 | 90.3 | 91.2 | 89.0 | 88.9 | 92.0 | 1.04 |
| 1024 × 1024 resolution | ||||||||
| LUMINA-Next (Zhuo et al., 2024) | 2.0 | 74.6 | 82.8 | 88.7 | 86.4 | 80.5 | 81.8 | - |
| SDXL (Podell et al., 2023) | 2.6 | 74.7 | 83.3 | 82.4 | 80.9 | 86.8 | 80.4 | 0.69 |
| PlayGroundv2.5 (Li et al., 2024a) | 2.6 | 75.5 | 83.1 | 82.6 | 81.2 | 84.1 | 83.5 | 1.09 |
| Hunyuan-DiT (Li et al., 2024c) | 1.5 | 78.9 | 84.6 | 80.6 | 88.0 | 74.4 | 86.4 | 0.92 |
| PixArt-Σ (Chen et al., 2024a) | 0.6 | 80.5 | 86.9 | 82.9 | 88.9 | 86.6 | 87.7 | 0.87 |
| DALLE3 (OpenAI, 2023) | - | 83.5 | 91.0 | 89.6 | 88.4 | 90.6 | 89.8 | - |
| SD3-medium (Esser et al., 2024) | 2.0 | 84.1 | 87.9 | 91.0 | 88.8 | 80.7 | 88.7 | 0.86 |
| FLUX-dev (Labs, 2024) | 12.0 | 84.0 | 82.1 | 89.5 | 88.7 | 91.1 | 89.4 | - |
| FLUX-schnell (Labs, 2024) | 12.0 | 84.8 | 91.2 | 91.3 | 89.7 | 86.5 | 87.0 | 0.91 |
| Sana-0.6B (Ours) | 0.6 | 83.6 | 83.0 | 89.5 | 89.3 | 90.1 | 90.2 | 0.97 |
| Sana-1.6B (Ours) | 1.6 | 84.8 | 86.0 | 91.5 | 88.9 | 91.9 | 90.7 | 0.99 |
| Blocks | AE | Res. | FID ↓ | CLIP ↑ |
| FullAttn & FFN | F8C4P2 | 256 | 18.7 | 24.9 |
| + Linear | F8C4P2 | 256 | 21.5 | 23.3 |
| + MixFFN | F8C4P2 | 256 | 18.9 | 24.8 |
| + Kernel Fusion | F8C4P2 | 256 | 18.8 | 24.8 |
| Linear + GLUMBConv2.5 | F32C32P1 | 512 | 6.4 | 27.4 |
| + Kernel Fusion | F32C32P1 | 512 | 6.4 | 27.4 |
| Text Encoder | Batch Size | Seq Len | Latency (s) | Params (M) |
| T5-XXL | 32 | 300 | 1.6 | 4762 |
| T5-XL | 32 | 300 | 0.5 | 1224 |
| T5-large | 32 | 300 | 0.2 | 341 |
| T5-base | 32 | 300 | 0.1 | 110 |
| T5-small | 32 | 300 | 0.0 | 35 |
| Gemma-2b | 32 | 300 | 0.2 | 2506 |
| Gemma-2-2b | 32 | 300 | 0.3 | 2614 |
| Methods | Speedup | Throughput (/s) | Latency (ms) | Methods | Speedup | Throughput (/s) | Latency (ms) |
| 512×512 Resolution | 1024×1024 Resolution | ||||||
| SD3 | 7.6x | 1.14 | 1.4 | SD3 | 7.0x | 0.28 | 4.4 |
| FLUX-schnell | 10.5x | 1.58 | 0.7 | FLUX-schnell | 12.5x | 0.50 | 2.1 |
| FLUX-dev | 1.0x | 0.15 | 7.9 | FLUX-dev | 1.0x | 0.04 | 23 |
| PixArt-Σ | 10.3x | 1.54 | 1.2 | PixArt-Σ | 10.0x | 0.40 | 2.7 |
| HunyuanDiT | 1.3x | 0.20 | 5.1 | HunyuanDiT | 1.2x | 0.05 | 18 |
| Sana-0.6B | 44.5x | 6.67 | 0.8 | Sana-0.6B | 43.0x | 1.72 | 0.9 |
| Sana-1.6B | 25.6x | 3.84 | 0.6 | Sana-1.6B | 25.2x | 1.01 | 1.2 |
| 2048×2048 Resolution | 4096×4096 Resolution | ||||||
| SD3 | 5.0x | 0.04 | 22 | SD3 | 4.0x | 0.004 | 230 |
| FLUX-schnell | 11.2x | 0.09 | 10.5 | FLUX-schnell | 13.0x | 0.013 | 76 |
| FLUX-dev | 1.0x | 0.008 | 117 | FLUX-dev | 1.0x | 0.001 | 1023 |
| PixArt-Σ | 7.5x | 0.06 | 18.1 | PixArt-Σ | 5.0x | 0.005 | 186 |
| HunyuanDiT | 1.2x | 0.01 | 96 | HunyuanDiT | 1.0x | 0.001 | 861 |
| Sana-0.6B | 53.8x | 0.43 | 2.5 | Sana-0.6B | 104.0x | 0.104 | 9.6 |
| Sana-1.6B | 31.2x | 0.25 | 4.1 | Sana-1.6B | 66.0x | 0.066 | 5.9 |
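The Speedup column in the table above appears to be throughput normalized to FLUX-dev (the 1.0x row) at the same resolution; this is an inference from the numbers, not stated in the table. A minimal check:

```python
# Speedup = throughput / FLUX-dev throughput at the same resolution
# (inferred normalization; FLUX-dev is the 1.0x baseline row).
def speedup(tp, baseline_tp):
    return round(tp / baseline_tp, 1)

# 512x512: FLUX-dev runs at 0.15 samples/s
assert speedup(6.67, 0.15) == 44.5   # Sana-0.6B row
assert speedup(1.58, 0.15) == 10.5   # FLUX-schnell row
# 1024x1024: FLUX-dev runs at 0.04 samples/s
assert speedup(1.72, 0.04) == 43.0   # Sana-0.6B row
```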
| Method | Rehearsal-free | Inference Efficiency | End-to-end Optimization |
| L2P (Wang et al., 2022b) | ✓ | ✗ | ✗ |
| DualPrompt (Wang et al., 2022a) | ✓ | ✗ | ✗ |
| CODA-Prompt (Smith et al., 2023) | ✓ | ✗ | ✓ |
| HiDe-Prompt (Wang et al., 2024a) | ✗ | ✗ | ✓ |
| InfLoRA (Liang & Li, 2024) | ✗ | ✓ | ✓ |
| SD-LoRA(Ours) | ✓ | ✓ | ✓ |
| Method | ImageNet-R (N=5) | ImageNet-R (N=10) | ImageNet-R (N=20) | |||
| Acc↑ | AAA↑ | Acc↑ | AAA↑ | Acc↑ | AAA↑ | |
| Full Fine-Tuning | 64.92(0.87) | 75.57(0.50) | 60.57(1.06) | 72.31(1.09) | 49.95(1.31) | 65.32(0.84) |
| L2P | 73.04(0.71) | 76.94(0.41) | 71.26(0.44) | 76.13(0.46) | 68.97(0.51) | 74.16(0.32) |
| DualPrompt | 69.99(0.57) | 72.24(0.41) | 68.22(0.20) | 73.81(0.39) | 65.23(0.45) | 71.30(0.16) |
| CODA-Prompt | 76.63(0.27) | 80.30(0.28) | 74.05(0.41) | 78.14(0.39) | 69.38(0.33) | 73.95(0.63) |
| HiDe-Prompt | 74.77(0.25) | 78.15(0.24) | 74.65(0.14) | 78.46(0.18) | 73.59(0.19) | 77.93(0.19) |
| InfLoRA | 76.95(0.23) | 81.81(0.14) | 74.75(0.64) | 80.67(0.55) | 69.89(0.56) | 76.68(0.57) |
| SD-LoRA | 79.15(0.20) | 83.01(0.42) | 77.34(0.35) | 82.04(0.24) | 75.26(0.37) | 80.22(0.72) |
| SD-LoRA-RR | 79.01(0.26) | 82.50(0.38) | 77.18(0.39) | 81.74(0.24) | 74.05(0.51) | 80.65(0.35) |
| SD-LoRA-KD | 78.85(0.29) | 82.47(0.58) | 77.03(0.67) | 81.52(0.26) | 74.12(0.66) | 80.11(0.75) |
| Method | ImageNet-A (N=10) | DomainNet (N=5) | ||
| Acc↑ | AAA↑ | Acc↑ | AAA↑ | |
| Full Fine-Tuning | 16.31(7.89) | 30.04(13.18) | 51.46(0.47) | 67.08(1.13) |
| L2P (Wang et al., 2022b) | 42.94(1.27) | 51.40(1.95) | 70.26(0.25) | 75.83(0.98) |
| DualPrompt (Wang et al., 2022a) | 45.49(0.96) | 54.68(1.24) | 68.26(0.90) | 73.84(0.45) |
| CODA-Prompt (Smith et al., 2023) | 45.36(0.78) | 57.03(0.94) | 70.58(0.53) | 76.68(0.44) |
| HiDe-Prompt (Wang et al., 2024a) | 42.70(0.60) | 56.32(0.40) | 72.20(0.08) | 77.01(0.04) |
| InfLoRA (Liang & Li, 2024) | 49.20(1.12) | 60.92(0.61) | 71.59(0.23) | 78.29(0.50) |
| SD-LoRA | 55.96(0.73) | 64.95(1.63) | 72.82(0.37) | 78.89(0.50) |
| SD-LoRA-RR | 55.59(1.08) | 64.59(1.91) | 72.58(0.40) | 78.79(0.78) |
| SD-LoRA-KD | 54.24(1.12) | 63.89(0.58) | 72.15(0.50) | 78.44(0.66) |
| Training Strategy | ImageNet-R (N=5) | ImageNet-R (N=10) | ||
| Acc ↑ | AAA ↑ | Acc ↑ | AAA ↑ | |
| W0 + αA1B1 | 78.17(0.27) | 81.93(0.51) | 74.82(0.96) | 80.63(0.63) |
| W0 + αAB | 73.24(0.31) | 78.80(0.13) | 70.62(0.78) | 76.32(0.16) |
| W0 + A1B1 + ... + αAtBt | 78.28(0.59) | 82.02(0.71) | 74.29(0.32) | 79.74(0.71) |
| W0 + α1A1B1 + ... + αtAtBt (SD-LoRA) | 79.15(0.20) | 83.01(0.42) | 77.34(0.35) | 82.04(0.24) |
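The strategies in the table above differ in how low-rank updates from successive tasks are composed with the frozen backbone W0. A minimal sketch of the final variant, W = W0 + Σt αt·At·Bt, where one scaled rank-r pair is kept per task (dimensions and scale values here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2                        # hypothetical weight size and LoRA rank
W0 = rng.normal(size=(d, d))       # frozen pre-trained weight

# One rank-r pair (A_t, B_t) per task, each with its own learned scale.
A = [rng.normal(size=(d, r)) for _ in range(3)]
B = [rng.normal(size=(r, d)) for _ in range(3)]
alpha = [0.9, 0.4, 0.7]            # per-task scales (illustrative values)

# Composition: W = W0 + sum_t alpha_t * A_t @ B_t
W = W0 + sum(a * At @ Bt for a, At, Bt in zip(alpha, A, B))
assert W.shape == (d, d)
```

Setting every alpha to zero recovers W0 exactly, which is what makes the per-task scales a knob for trading stability against plasticity.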
| Method | GFLOPs | Learnable Parameters (M) | Stored Features (M) |
| L2P (Wang et al., 2022b) | 70.14 | 0.48 | 0 |
| DualPrompt (Wang et al., 2022a) | 70.26 | 0.06 | 0 |
| CODA-Prompt (Smith et al., 2023) | 70.61 | 0.38 | 0 |
| HiDe-Prompt (Wang et al., 2024a) | 70.36 | 0.08 | 0.15 |
| InfLoRA (Liang & Li, 2024) | 35.12 | 0.37 | 0.10 |
| SD-LoRA | 35.12 | 0.37 | 0 |
| SD-LoRA-RR | 35.12 | 0.23 | 0 |
| Method | CIFAR100 | CUB200 | ||
| Acc ↑ | AAA ↑ | Acc ↑ | AAA ↑ | |
| Full Fine-Tuning | 69.49(0.50) | 80.35(0.87) | 51.43(1.41) | 69.74(0.93) |
| L2P (Wang et al., 2022b) | 83.18(1.20) | 87.69(1.05) | 65.18(2.49) | 76.12(1.27) |
| DualPrompt (Wang et al., 2022a) | 81.48(0.86) | 86.41(0.66) | 68.00(1.06) | 79.40(0.88) |
| CODA-Prompt (Smith et al., 2023) | 86.31(0.12) | 90.67(0.22) | 71.92(0.33) | 78.76(0.65) |
| InfLoRA (Liang & Li, 2024) | 86.75(0.35) | 91.72(0.15) | 70.82(0.23) | 81.39(0.14) |
| SD-LoRA | 88.01(0.31) | 92.54(0.18) | 77.48(0.20) | 85.59(0.44) |
| SD-LoRA-RR | 87.26(0.22) | 92.05(0.31) | 76.35(0.28) | 83.89(0.35) |
| SD-LoRA-KD | 87.09(0.45) | 92.01(0.33) | 75.95(0.55) | 83.21(0.31) |
| Backbone / Optimized for | Size | Cache (bytes / 4K) | RedPj. ppl ↓ | Hella. acc. norm. ↑ | ARC-e acc. ↑ | Wino. acc. ↑ | PiQA acc. ↑ | SciQ acc. ↑ | Avg. ↑ |
| Transformer++ | 85M | 150MB | 7.3 | 28.9 | 38.8 | 51.2 | 61.2 | 64.1 | 48.8 |
| StripedMamba | 80M | 25MB | 7.2 | 28.6 | 39.3 | 51.1 | 60.9 | 67.4 | 49.5 |
| STAR-1 / Quality | 79M | 100MB | 7.0 | 29.8 | 39.3 | 51.2 | 62.2 | 72.5 | 51.0 |
| STAR-2 / Quality | 80M | 82MB | 7.1 | 29.2 | 40.5 | 51.1 | 61.6 | 72.4 | 51.0 |
| STAR-3 / Quality | 78M | 120MB | 7.1 | 29.7 | 40.0 | 50.9 | 62.0 | 71.2 | 51.0 |
| STAR-4 / Quality | 79M | 94MB | 7.1 | 29.3 | 39.7 | 51.0 | 61.5 | 72.6 | 50.8 |
| STAR-1 / Q.+Size | 74M | 63MB | 7.2 | 28.9 | 39.3 | 51.0 | 61.8 | 67.6 | 49.7 |
| STAR-2 / Q.+Size | 74M | 64MB | 7.2 | 28.7 | 37.5 | 52.8 | 61.0 | 68.9 | 49.8 |
| STAR-3 / Q.+Size | 70M | 151MB | 7.2 | 29.2 | 39.5 | 51.9 | 61.5 | 69.4 | 50.3 |
| STAR-4 / Q.+Size | 70M | 114MB | 7.2 | 29.2 | 40.0 | 52.7 | 61.4 | 68.9 | 50.4 |
| STAR-1 / Q.+Cache | 77M | 16MB | 7.2 | 28.9 | 40.0 | 51.3 | 61.0 | 66.4 | 49.5 |
| STAR-2 / Q.+Cache | 83M | 22MB | 7.2 | 28.7 | 40.1 | 50.3 | 62.2 | 66.0 | 49.5 |
| STAR-3 / Q.+Cache | 75M | 23MB | 7.2 | 28.9 | 40.6 | 50.2 | 61.3 | 67.2 | 49.6 |
| STAR-4 / Q.+Cache | 78M | 22MB | 7.2 | 29.1 | 39.9 | 53.0 | 62.2 | 66.7 | 50.2 |
| Backbone | Size | Cache (bytes / 4K) | RedPj. ppl ↓ | ARC-c acc. norm. ↑ | Hella. acc. norm. ↑ | ARC-e acc. ↑ | Wino. acc. ↑ | PiQA acc. ↑ | SciQ acc. ↑ | Avg. ↑ |
| Transf.++ | 1.2B | 805MB | 5.9 | 27.3 | 49.3 | 58.9 | 51.3 | 71.2 | 86.3 | 57.4 |
| StripedMb. | 1.1B | 136MB | 5.7 | 28.3 | 52.8 | 59.8 | 54.1 | 72.9 | 86.0 | 59.0 |
| STAR-1B | 1.1B | 86MB | 5.7 | 27.9 | 52.6 | 60.8 | 53.9 | 71.8 | 87.0 | 59.0 |
| OPTIMIZER | ADAMW |
| OPTIMIZER MOMENTUM | β1,β2=0.9,0.95 |
| BATCH SIZE | 0.25M TOKENS |
| TRAINING STEPS | 5000 |
| LEARNING RATE SCHEDULE | COSINE DECAY |
| LINEAR LEARNING RATE WARM-UP | 500 STEPS |
| BASE LEARNING RATE | 0.0008 |
| WEIGHT DECAY | 0.1 |
| DROPOUT | NONE |
| GRADIENT CLIPPING | 1.0 |
| OPTIMIZER | ADAMW |
| OPTIMIZER MOMENTUM | β1,β2=0.9,0.95 |
| BATCH SIZE | 0.25M TOKENS |
| TRAINING STEPS | 20000 |
| LEARNING RATE SCHEDULE | COSINE DECAY |
| LINEAR LEARNING RATE WARM-UP | 1500 STEPS |
| BASE LEARNING RATE | 0.0008 |
| WEIGHT DECAY | 0.1 |
| DROPOUT | NONE |
| GRADIENT CLIPPING | 1.0 |
| OPTIMIZER | ADAMW |
| OPTIMIZER MOMENTUM | β1,β2=0.9,0.95 |
| BATCH SIZE | 0.75M TOKENS |
| TRAINING STEPS | 50000 |
| LEARNING RATE SCHEDULE | COSINE DECAY |
| LINEAR LEARNING RATE WARM-UP | 3500 STEPS |
| BASE LEARNING RATE | 0.0008 |
| WEIGHT DECAY | 0.1 |
| DROPOUT | NONE |
| GRADIENT CLIPPING | 1.0 |
| Backbone / Optimized for | Size | Cache (bytes / 4K) | RedPj. ppl ↓ | Hella. acc. norm. ↑ | ARC-e acc. ↑ | Wino. acc. ↑ | PiQA acc. ↑ | SciQ acc. ↑ | Avg. ↑ |
| Transformer++ | 85M | 150MB | 7.3 | 28.9 | 38.8 | 51.2 | 61.2 | 64.1 | 48.8 |
| StripedMamba | 80M | 25MB | 7.2 | 28.6 | 39.3 | 51.1 | 60.9 | 67.4 | 49.5 |
| STAR-5 / Quality | 78M | 94MB | 7.1 | 29.2 | 39.1 | 52.1 | 62.1 | 72.7 | 51.0 |
| STAR-6 / Quality | 79M | 94MB | 7.1 | 29.0 | 39.9 | 50.9 | 61.7 | 71.1 | 50.5 |
| STAR-7 / Quality | 79M | 107MB | 7.1 | 29.3 | 38.2 | 51.5 | 61.6 | 70.2 | 50.2 |
| STAR-8 / Quality | 79M | 94MB | 7.1 | 29.1 | 40.6 | 50.8 | 62.0 | 70.3 | 50.6 |
| STAR-5 / Q.+Size | 78M | 64MB | 7.2 | 29.2 | 40.0 | 52.7 | 61.0 | 67.8 | 50.1 |
| STAR-6 / Q.+Size | 73M | 104MB | 7.2 | 27.7 | 39.5 | 53.1 | 61.6 | 69.4 | 50.3 |
| STAR-7 / Q.+Size | 69M | 170MB | 7.2 | 27.8 | 39.2 | 49.9 | 61.2 | 69.5 | 49.5 |
| STAR-8 / Q.+Size | 72M | 92MB | 7.2 | 27.5 | 39.2 | 51.7 | 61.8 | 64.1 | 48.9 |
| STAR-5 / Q.+Cache | 79M | 22MB | 7.2 | 28.9 | 40.0 | 50.2 | 61.1 | 69.1 | 49.9 |
| STAR-6 / Q.+Cache | 68M | 25MB | 7.2 | 29.1 | 40.0 | 51.3 | 60.9 | 68.7 | 50.0 |
| STAR-7 / Q.+Cache | 75M | 22MB | 7.3 | 28.6 | 39.4 | 52.6 | 61.0 | 66.6 | 49.6 |
| STAR-8 / Q.+Cache | 74M | 16MB | 7.3 | 28.8 | 38.8 | 51.2 | 61.0 | 67.0 | 49.4 |
| Refusal Prefixes (r) → | No Prefix | “I cannot” | “I cannot fulfill” | “I apologize” | “I apologize, but I cannot” | “I am unable” | |
| ↓ Harmfulness Rate (%) on HEX-PHI Benchmark with A Refusal Prefix Prefilled During Decoding | |||||||
| Llama-2-7B | Aligned | 0 ± 0 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 0 ± 0 |
| | Base | 68.6 ± 0.8 | 16.4 ± 1.4 | 5.4 ± 1.3 | 14.4 ± 0.6 | 2.1 ± 0.2 | 8.1 ± 0.4 |
| Gemma-7B | Aligned | 2.1 ± 0.2 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 0 ± 0 |
| | Base | 85.4 ± 0.6 | 8.7 ± 1.2 | 2.7 ± 0.5 | 14.1 ± 0.4 | 1.0 ± 0.8 | 3.9 ± 0.4 |
| AlpacaEval | MMLU | BBH | MATH | GSM8K | HumanEval | |
| Initial | 51.8 ± 0.3 | 46.3 ± 0.7 | 38.3 ± 0.5 | 3.6 ± 0.2 | 25.5 ± 0.2 | 11.7 ± 0.1 |
| Augmented | 49.5 ± 0.4 | 46.6 ± 0.5 | 39.6 ± 0.4 | 3.2 ± 0.1 | 25.2 ± 0.3 | 11.5 ± 0.2 |
| ASR (%) → | Prefilling Attacks | GCG Attack | Decoding Parameters Exploit | |||||
| 5 tokens | 10 tokens | 20 tokens | 40 tokens | HEx-PHI | AdvBench | HEx-PHI | MaliciousInstruct | |
| Initial | 42.1 ± 0.9 | 51.5 ± 1.6 | 56.1 ± 2.5 | 57.0 ± 0.4 | 36.5 ± 2.7 | 65.6 ± 3.1 | 54.9 ± 0.6 | 84.3 ± 1.7 |
| Augmented | 2.8 ± 0.4 | 2.9 ± 0.2 | 3.4 ± 0.6 | 4.5 ± 0.6 | 18.4 ± 4.2 | 19.0 ± 2.9 | 11.3 ± 0.4 | 1.0 ± 0 |
| Models → | Llama-2-7B-Chat | Gemma-1.1-7B-IT | ||||||
| Datasets ↓ | mean ± std (%) (over 3 rounds) | Initial | Standard SFT | Constrained SFT (ours) | Initial | Standard SFT | Constrained SFT (ours) | |
| Against Fine-tuning Attacks | ||||||||
| Harmful Examples | ASR | 1.5 ± 0.2 | 88.9 ± 1.2 | 4.6 ± 0.5 | 1.8 ± 0.3 | 81.6 ± 2.9 | 1.9 ± 0.2 | |
| Identity Shifting | ASR | 0 ± 0 | 79.5 ± 2.3 | 8.1 ± 0.1 | 0 ± 0 | 83.6 ± 2.5 | 9.1 ± 1.7 | |
| Backdoor Poisoning | ASR (w/o trigger) | 1.5 ± 0.2 | 7.6 ± 1.1 | 1.9 ± 0.2 | 1.8 ± 0.3 | 2.0 ± 0.2 | 1.5 ± 0.1 | |
| | ASR (w/ trigger) | 1.7 ± 0.1 | 90.9 ± 1.4 | 10.9 ± 2.8 | 1.8 ± 0.3 | 82.3 ± 1.1 | 1.9 ± 0.8 | |
| Fine-tuning with Normal Downstream Datasets | ||||||||
| Samsum | ASR | 1.5 ± 0.2 | 23.4 ± 2.5 | 3.2 ± 0.8 | 1.8 ± 0.3 | 2.0 ± 0.2 | 2.4 ± 0.3 | |
| Utility | 25.5 ± 0.3 | 51.7 ± 0.5 | 50.1 ± 0.2 | 36.0 ± 1.4 | 51.5 ± 0.3 | 51.9 ± 0.5 | ||
| SQL Create Context | ASR | 1.5 ± 0.2 | 15.4 ± 1.4 | 3.2 ± 0.8 | 1.8 ± 0.3 | 2.8 ± 0.2 | 2.4 ± 0.1 | |
| Utility | 14.9 ± 0.4 | 99.1 ± 0.2 | 98.5 ± 0.1 | 88.0 ± 0.5 | 99.2 ± 0.1 | 98.6 ± 0.3 | ||
| GSM8k | ASR | 1.5 ± 0.2 | 3.3 ± 0.4 | 2.0 ± 0.5 | 1.8 ± 0.3 | 2.9 ± 0.2 | 1.7 ± 0.4 | |
| Utility | 25.5 ± 0.2 | 41.7 ± 0.4 | 37.4 ± 0.3 | 28.5 ± 1.2 | 63.3 ± 0.5 | 63.6 ± 0.4 | ||
| ASR (%) → | Self-Cipher (Yuan et al., 2023) | Code Attack (Ren et al., 2024) | Jailbreak Chat (Jailbreak Chat, 2024) |
| Llama-2-7B-Chat (Original) | 0 | 82.5 | 4.0 |
| Llama-2-7B-Chat-Augmented | 0 | 53.5 | 0 |
| ASR of Prefilling Attacks | Utility | |||||||
| 0 token | 5 tokens | 10 tokens | 20 tokens | 40 tokens | 80 tokens | 160 tokens | AlpacaEval | |
| No Augmentation | 0 | 42.1 | 51.5 | 56.1 | 57.0 | 48.2 | 47.9 | 51.8 |
| C=5 | 0 | 3.0 | 8.8 | 30.6 | 41.8 | 39.1 | 40.0 | 50.9 |
| C=25 | 0 | 0.9 | 2.1 | 6.4 | 15.2 | 20.9 | 22.7 | 50.7 |
| C=50 | 0 | 1.5 | 1.5 | 4.5 | 8.5 | 14.5 | 14.8 | 50.1 |
| C=100 | 0 | 2.8 | 2.9 | 3.4 | 4.5 | 4.5 | 6.7 | 49.5 |
| ASR of Prefilling Attacks | Utility | |||||||
| 0 token | 5 tokens | 10 tokens | 20 tokens | 40 tokens | 80 tokens | 160 tokens | AlpacaEval | |
| No Augmentation | 0 | 42.1 | 51.5 | 56.1 | 57.0 | 48.2 | 47.9 | 51.8 |
| p=0.25 | 0 | 2.4 | 2.1 | 3.6 | 6.4 | 11.5 | 14.8 | 51.2 |
| p=0.5 | 0 | 2.8 | 2.9 | 3.4 | 4.5 | 4.5 | 6.7 | 49.5 |
| p=0.75 | 0 | 0.6 | 0.6 | 2.1 | 3.0 | 3.9 | 6.1 | 49.0 |
| p=1.0 | 0 | 0.3 | 0.9 | 2.4 | 2.4 | 3.3 | 5.2 | 48.7 |
| Datasets | Initial | Standard SFT | Constrained SFT (biased βt) | Constrained SFT (uniform β = 0.1) | Constrained SFT (uniform β = 0.5) | Constrained SFT (uniform β = 2.0) | |
| Against Fine-tuning Attacks | |||||||
| Harmful Examples | ASR | 1.5% | 88.9% | 4.6% | 86.2% | 7.2% | 0.5% |
| Identity Shifting | ASR | 0% | 79.5% | 8.1% | 41.6% | 17.1% | 3.4% |
| Backdoor Poisoning | ASR (w/o trigger) | 1.5% | 7.6% | 1.9% | 3.5% | 1.8% | 1.2% |
| ASR (w/ trigger) | 1.7% | 90.9% | 10.9% | 74.4% | 24.3% | 1.4% | |
| Fine-tuning with Normal Downstream Datasets | |||||||
| Samsum | ASR | 1.5% | 23.4% | 3.2% | 3.9% | 3.5% | 2.4% |
| Utility | 25.5% | 51.7% | 50.1% | 51.7% | 49.8% | 42.5% | |
| SQL Create Context | ASR | 1.5% | 15.4% | 3.2% | 3.3% | 2.2% | 2.6% |
| Utility | 14.9% | 99.1% | 98.5% | 99.1% | 98.6% | 92.6% | |
| GSM8k | ASR | 1.5% | 3.3% | 2.0% | 4.0% | 1.5% | 2.0% |
| Utility | 25.5% | 41.7% | 37.4% | 39.4% | 34.8% | 2.1% | |
| Datasets | Initial | Standard SFT | Standard SFT (with warmup) | Constrained SFT | Constrained SFT (with warmup) | |
| Against Fine-tuning Attacks | ||||||
| Harmful Examples | ASR | 1.5% | 88.9% | 89.4% | 29.1% | 4.6% |
| Identity Shifting | ASR | 0% | 79.5% | 44.8% | 69.6% | 8.1% |
| Backdoor Poisoning | ASR (w/o trigger) | 1.5% | 7.6% | 2.7% | 2.7% | 1.9% |
| ASR (w/ trigger) | 1.7% | 90.9% | 80.5% | 9.7% | 10.9% | |
| Fine-tuning with Normal Downstream Datasets | ||||||
| Samsum | ASR | 1.5% | 23.4% | 3.8% | 23.1% | 3.2% |
| Utility | 25.5% | 51.7% | 51.9% | 50.2% | 50.1% | |
| SQL Create Context | ASR | 1.5% | 15.4% | 3.3% | 2.0% | 3.2% |
| Utility | 14.9% | 99.1% | 99.1% | 98.6% | 98.5% | |
| GSM8k | ASR | 1.5% | 3.3% | 2.9% | 3.1% | 2.0% |
| Utility | 25.5% | 41.7% | 41.6% | 37.2% | 37.4% | |
| Datasets | Standard SFT | Standard SFT (with warmup) | Constrained SFT | Constrained SFT (with warmup) | |
| Against Fine-tuning Attacks | |||||
| Harmful Examples | ASR | 55.2% | 51.5% | 9.4% | 5.2% |
| Identity Shifting | ASR | 53.9% | 37.0% | 28.8% | 3.0% |
| Backdoor Poisoning | ASR (w/o trigger) | 3.9% | 2.7% | 1.8% | 0.9% |
| ASR (w/ trigger) | 80.0% | 83.6% | 16.4% | 12.7% | |
| Fine-tuning with Normal Downstream Datasets | |||||
| Samsum | ASR | 2.1% | 0.6% | 2.1% | 1.2% |
| Utility | 52.4% | 51.9% | 50.4% | 50.1% | |
| SQL Create Context | ASR | 3.8% | 1.5% | 2.0% | 0.9% |
| Utility | 99.0% | 99.1% | 98.4% | 98.5% | |
| GSM8k | ASR | 0.9% | 0.9% | 0.3% | 0.3% |
| Utility | 42.0% | 41.2% | 36.5% | 36.9% | |
| Models → | Llama-2-7B-Chat | ||||
| Datasets ↓ | mean ± std (%) (over 3 rounds) | Initial | Standard SFT | Vaccine (Huang et al., 2024b) | Constrained SFT (ours) |
| Against Fine-tuning Attacks | |||||
| Harmful Examples | ASR | 1.5 ± 0.2 | 88.9 ± 1.2 | 87.3 ± 0.3 | 4.6 ± 0.5 |
| Identity Shifting | ASR | 0 ± 0 | 79.5 ± 2.3 | 78.2 ± 0.5 | 8.1 ± 0.1 |
| Backdoor Poisoning | ASR (w/o trigger) | 1.5 ± 0.2 | 7.6 ± 1.1 | 7.1 ± 1.3 | 1.9 ± 0.2 |
| ASR (w/ trigger) | 1.7 ± 0.1 | 90.9 ± 1.4 | 90.0 ± 0.7 | 10.9 ± 2.8 | |
| Fine-tuning with Normal Downstream Datasets | |||||
| Samsum | ASR | 1.5 ± 0.2 | 23.4 ± 2.5 | 22.5 ± 1.0 | 3.2 ± 0.8 |
| Utility | 25.5 ± 0.3 | 51.7 ± 0.5 | 52.0 ± 0.2 | 50.1 ± 0.2 | |
| SQL Create Context | ASR | 1.5 ± 0.2 | 15.4 ± 1.4 | 14.6 ± 0.8 | 3.2 ± 0.8 |
| Utility | 14.9 ± 0.4 | 99.1 ± 0.2 | 99.1 ± 0.1 | 98.5 ± 0.1 | |
| GSM8k | ASR | 1.5 ± 0.2 | 3.3 ± 0.4 | 2.9 ± 0.6 | 2.0 ± 0.5 |
| Utility | 25.5 ± 0.2 | 41.7 ± 0.4 | 42.0 ± 0.5 | 37.4 ± 0.3 | |
| Method | PSNR ↑ | SSIM ↑ | LPIPS ↓ |
| SwitchLight | 18.45 | 0.7024 | 0.3245 |
| DiLightNet | 21.78 | 0.8013 | 0.1721 |
| w/o LTC | 20.32 | 0.7542 | 0.1927 |
| w/o aug. data | 23.95 | 0.8723 | 0.1115 |
| w/o 3d data | 22.10 | 0.8041 | 0.1298 |
| w/o light stage | 23.70 | 0.8501 | 0.1077 |
| Ours | 23.72 | 0.8513 | 0.1025 |
| N | Sequential:Parallel Ratios |
| 1 | 1:1 |
| 2 | 1:2, 2:1 |
| 4 | 1:4, 1:1, 4:1 |
| 8 | 1:8, 1:2, 2:1, 8:1 |
| 16 | 1:16, 1:4, 1:1, 4:1, 16:1 |
| 32 | 1:32, 1:8, 1:2, 2:1, 8:1, 32:1 |
| 64 | 1:64, 1:16, 1:4, 1:1, 4:1, 16:1, 64:1 |
| 128 | 1:128, 1:32, 1:8, 1:2, 2:1, 8:1, 32:1, 128:1 |
| 256 | 1:256, 1:16, 1:1, 16:1, 256:1 |
| Functional Form | Val R² | Number of Fitted Parameters |
| N_eff | 0.82 | 3 |
| Additive/independent power law | 0.71 | 2 |
| D_eff | 0.74 | 3 |
| N_eff and D_eff (tied) | 0.79 | 3 |
| N_eff and D_eff (not tied) | 0.84 | 4 |
| Multiplicative power law, N, P | 0.75 | 2 |
| Constant | Value |
| A | 4.299e3 |
| α | 0.4965 |
| B | 1.806e4 |
| E | 2.7648 |
| γw | 2.6745 |
| nw | 0.3037 |
| γi | 2.2102 |
| ni | 1.4072 |
| γkv | 0.9578 |
| nkv | 2.4185 |
| CT | 0.0598 |
| γD | 0.5068 |
| γN | 0.3439 |
| γ | 0.5907 |
| b | 1.1277 |
| Task Name | Details |
| amazon | McAuley & Leskovec (2013) |
| sciq | Welbl et al. (2017) |
| truthfulqa | Lin et al. (2021) |
| mc_taco | Zhou et al. (2019) |
| piqa | Bisk et al. (2020) |
| quail | Rogers et al. (2020) |
| quartz | Tafjord et al. (2019) |
| justice | Hendrycks et al. (2020) |
| virtue | |
| utilitarianism | |
| deontology | |
| commonsense_qa | Talmor et al. (2022) |
| openbookqa | Mihaylov et al. (2018) |
| base64 | discrimination of base64 vs pretraining data |
| wikidata_isalive | Gurnee et al. (2023) |
| wikidata_sex_or_gender | |
| wikidata_occupation_isjournalist | |
| wikidata_occupation_isathlete | |
| wikidata_occupation_isactor | |
| wikidata_occupation_ispolitician | |
| wikidata_occupation_issinger | |
| wikidata_occupation_isresearcher | |
| phrase_high-school | |
| phrase_living-room | |
| phrase_social-security | |
| phrase_credit-card | |
| phrase_blood-pressure | |
| phrase_prime-factors | |
| phrase_social-media | |
| phrase_gene-expression | |
| phrase_control-group | |
| phrase_magnetic-field | |
| phrase_cell-lines | |
| phrase_trial-court | |
| phrase_second-derivative | |
| phrase_north-america | |
| phrase_human-rights | |
| phrase_side-effects | |
| phrase_public-health | |
| phrase_federal-government | |
| phrase_third-party | |
| phrase_clinical-trials | |
| phrase_mental-health | |
| social_iqa | Sap et al. (2019) |
| wic | Wang et al. (2018) |
| cola | |
| gpt2disc | OpenAI (2019) |
| ag_news_world | Gulli |
| ag_news_sports | |
| ag_news_business | |
| ag_news_scitech | |
| european_is | Koehn (2005) |
| european_en | |
| european_fr | |
| european_nl | |
| european_it | |
| european_el | |
| european_de | |
| european_mt | |
| european_sv | |
| jigsaw | Cjadams et al. (2017) |
| Setup | Method | Computational Cost |
| C-C | EG (Korpelevich, 1976) | O((N+d)ε^{-1}) |
| | NPE (Monteiro & Svaiter, 2012) | O((N+d^2)dε^{-2/3}) |
| | search-free NPE (Alves & Svaiter, 2023) | O((N+d^2)dε^{-2/3}) |
| | OGDA-2 (Jiang & Mokhtari, 2022) | O((N+d^2)dε^{-2/3}) |
| | LEN (Theorem 4.3) | O((N+d^2)(d+d^{2/3}ε^{-2/3})) |
| SC-SC | EG (Korpelevich, 1976) | O((N+d)(L/μ)) |
| | OGDA-2 (Jiang & Mokhtari, 2022) | O((N+d^2)d(ρ/μ)^{2/3}) |
| | ARE_restart (Huang & Zhang, 2022) | O((N+d^2)d(ρ/μ)^{2/3}) |
| | Perseus_restart (Lin & Jordan, 2024) | O((N+d^2)d(ρ/μ)^{2/3}) |
| | LEN_restart (Corollary 4.1) | O((N+d^2)(d+d^{2/3}(ρ/μ)^{2/3})) |
| Model | Dataset | % Lift over Greedy (Accuracy) | Lift over Greedy (Likelihood) |
| Phi3.5-Mini | MATH | 19.24 ± 2.41 | 48.33 ± 0.17 |
| Phi3.5-Mini | GSM8k | 1.82 ± 0.64 | 1.49 ± 0.55 |
| Phi3.5-Mini | ProntoQA | 12.46 ± 1.08 | 5.64 ± 0.01 |
| Mistral-7B | MATH | 8.88 ± 5.55 | 5.71 ± 3.00 |
| Model | Dataset | Weight Decay | LoRA Rank |
| Phi3.5-Mini | MATH | 0.1 | 16 |
| Phi3.5-Mini | GSM8k | 0.5 | 16 |
| Phi3.5-Mini | ProntoQA | 0.0 | 16 |
| Mistral-7B-Instruct-v0.3 | MATH | 1.0 | 8 |
| Type | Coordinates (P) | Vector (V) |
| HBA | (-7.5155, -3.7035, -9.0635) | (0.1900, 0.7041, -0.6842) |
| (-6.7175, -4.8185, -8.8258) | (0.8205, -0.2287, -0.5240) | |
| (-2.3550, -7.3310, -0.0453) | (0.7719, 0.4600, -0.4388) | |
| (-6.5197, -4.3363, -4.1827) | (0.5106, 0.7063, -0.4905) | |
| (-6.9450, -6.9160, -5.0510) | (-0.1783, -0.4378, -0.8812) | |
| (-3.2780, -7.2170, 2.2860) | (-0.7287, 0.6270, -0.2755) | |
| (-5.7270, -2.3950, -0.6220) | (0.4491, -0.6321, 0.6316) | |
| (-1.9580, -8.8820, 3.3910) | (0.7956, -0.5564, -0.2396) | |
| (-5.0980, -6.4300, -1.5740) | (0.4851, 0.6853, -0.5432) | |
| (-5.8830, -1.1980, 0.0490) | (0.4090, -0.3130, 0.8572) | |
| (-5.6690, 0.0200, -2.0780) | (0.5578, 0.5389, -0.6312) | |
| HBD | (-9.4350, -2.6960, -8.2680) | (0.1103, 0.6563, -0.7464) |
| (-3.2780, -7.2170, 2.2860) | (0.7506, 0.2245, 0.6214) | |
| (-1.9580, -8.8820, 3.3910) | (-0.2298, 0.1607, 0.9599) | |
| (-6.9480, -2.0740, -1.9750) | (-0.3293, 0.3106, -0.8917) | |
| Aromatic | (-7.6710, -4.5475, -8.2231) | (-0.5639, -0.4342, -0.7025) |
| (-11.6032, -2.0268, -6.4003) | (0.3775, 0.9134, -0.1523) | |
| (-4.6030, -8.0915, -1.0500) | (0.4302, -0.8132, -0.3919) | |
| (-9.5312, -4.0857, -7.3044) | (-0.5073, -0.3479, -0.7884) | |
| Hydrophobe | (-7.6710, -4.5475, -8.2231) | (0.0, 0.0, 0.0) |
| (-11.6032, -2.0268, -6.4003) | (0.0, 0.0, 0.0) | |
| (-4.6030, -8.0915, -1.0500) | (0.0, 0.0, 0.0) | |
| (-9.5312, -4.0857, -7.3044) | (0.0, 0.0, 0.0) | |
| (-5.6776, -0.4634, -1.0174) | (0.0, 0.0, 0.0) | |
| (-6.9299, -5.4087, -1.1730) | (0.0, 0.0, 0.0) | |
| (-6.7990, -4.5530, -3.9980) | (0.0, 0.0, 0.0) | |
| (-7.9010, 0.1820, -0.7890) | (0.0, 0.0, 0.0) |
| Samples | P(x1, x2) | P(x1, x3) | P(x1, x4) |
| Validity (%) | 96.2 | 92.7 | 73.7 |
| Validity xTB (%) | 96.3 | 93.0 | 73.7 |
| Graph consis. (%) | 99.1 | 99.5 | 98.4 |
| Valid neutral (%) | 84.1 | 93.2 | 89.7 |
| Samples | P(x1|x2) | P(x1|x3) | P(x1|x4) |
| Validity (%) | 96.0 | 91.9 | 80.7 |
| Validity xTB (%) | 96.2 | 92.9 | 79.9 |
| Graph consis. (%) | 97.4 | 92.8 | 93.8 |
| Valid neutral (%) | 84.2 | 82.5 | 72.6 |
| P(x1,x2) | P(x1,x3) | P(x1,x4) | MMFF94 | |
| Strain Energy (kcal/mol) | 2.81 ±5.32 | 5.80 ±73.17 | 5.87 ±47.36 | 7.34 ±9.20 |
| RMSD (Å) | 0.084 ±0.133 | 0.081 ±0.144 | 0.089 ±0.155 | 0.193 ±0.184 |
| P(x1, x3, x4) | MMFF94 | |
| Strain Energy (kcal/mol) | 17.43 ±124.02 | 7.99 ±3.26 |
| RMSD (Å) | 0.273 ±0.292 | 0.337 ±0.274 |
| NP 1 | NP 2 | NP 3 | |
| Validity (%) | 65.6 | 56.5 | 55.6 |
| Validity xTB (%) | 67.6 | 62.4 | 56.0 |
| Graph consis. (%) | 78.5 | 88.6 | 68.2 |
| Valid neutral (%) | 50.5 | 72.3 | 47.0 |
| 1iep | 3eml | 3ny8 | 4rlu | 4unn | 5mo4 | 7l11 | |
| Validity (%) | 32.7 | 73.3 | 52.3 | 88.5 | 66.5 | 71.5 | 53.5 |
| Valid neutral (%) | 47.1 | 60.0 | 75.7 | 78.8 | 55.2 | 71.4 | 62.6 |
| Bioisosteric FM | |
| Validity (%) | 45.0 |
| Validity xTB (%) | 37.5 |
| Graph consis. (%) | 76.6 |
| Valid neutral (%) | 46.0 |
| Samples | Novelty (%) | Uniqueness (%) | |
| ShEPhERD-GDB17 | P(x1, x2) | 96.4 | 99.5 |
| P(x1, x3) | 96.7 | 99.6 | |
| P(x1, x4) | 96.0 | 99.9 | |
| P(x1 | x2) | 99.4 | 97.9 | |
| P(x1 | x3) | 99.3 | 97.6 | |
| P(x1 | x4) | 99.0 | 96.0 | |
| ShEPhERD-MOSES-aq | P(x1, x3, x4) | 100.0 | 100.0 |
| NP 1 | P(x1 | x3, x4) | 100.0 | 99.7 |
| NP 2 | P(x1 | x3, x4) | 100.0 | 100.0 |
| NP 3 | P(x1 | x3, x4) | 100.0 | 100.0 |
| 1iep (lowest energy) | P(x1 | x3, x4) | 100.0 | 100.0 |
| 1iep (pose) | P(x1 | x3, x4) | 100.0 | 100.0 |
| 3eml (lowest energy) | P(x1 | x3, x4) | 100.0 | 100.0 |
| 3eml (pose) | P(x1 | x3, x4) | 100.0 | 100.0 |
| 3ny8 (lowest energy) | P(x1 | x3, x4) | 100.0 | 100.0 |
| 3ny8 (pose) | P(x1 | x3, x4) | 100.0 | 100.0 |
| 4rlu (lowest energy) | P(x1 | x3, x4) | 99.5 | 100.0 |
| 4rlu (pose) | P(x1 | x3, x4) | 99.8 | 100.0 |
| 4unn (lowest energy) | P(x1 | x3, x4) | 100.0 | 100.0 |
| 4unn (pose) | P(x1 | x3, x4) | 100.0 | 100.0 |
| 5mo4 (lowest energy) | P(x1 | x3, x4) | 100.0 | 100.0 |
| 5mo4 (pose) | P(x1 | x3, x4) | 100.0 | 100.0 |
| 7l11 (lowest energy) | P(x1 | x3, x4) | 100.0 | 100.0 |
| 7l11 (pose) | P(x1 | x3, x4) | 100.0 | 100.0 |
| Bioisosteric FM | P(x1 | x3, x4) | 100.0 | 100.0 |
| Number of surface points (n2) | 50 | 100 | 150 | 200 | 300 | 400 |
| Gaussian width (α) | 0.6011 | 0.8668 | 1.022 | 1.118 | 1.216 | 1.258 |
| Training Parameters | |
| Parameter | Value |
| Effective batch size | 48 |
| Learning rate | 0.0003 |
| Gradient clipping value | 5.0 |
| T | 400 |
| n2 | 75 |
| n3 | 75 |
| surface probe radius | 0.6 |
| a and f scaling factors | 0.25 |
| B scaling factor | 1.0 |
| v scaling factor | 2.0 |
| p scaling factor | 2.0 |
| V scaling factor | 2.0 |
| Total Number of Learnable Parameters | |
| Model | Number of Parameters |
| ShEPhERD-GDB17 P(x1, x2) | 4,375,054 |
| ShEPhERD-GDB17 P(x1, x3) | 4,387,407 |
| ShEPhERD-GDB17 P(x1, x4) | 4,407,321 |
| ShEPhERD-MOSES-aq P(x1, x3, x4) | 6,010,427 |
| Denoising Network Hyperparameters | |
| Parameter | Value |
| Default EquiformerV2 Parameters | |
| num_node_channels | 64 |
| lmax_list | [1] |
| mmax_list | [1] |
| ffn_hidden_channels | 32 |
| grid_resolution | 16 |
| num_sphere_samples | 128 |
| edge_channels | 128 |
| activation_function | silu |
| norm_type | layer_norm_sh |
| use_sep_s2_act | True |
| use_grid_mlp | True |
| use_gate_act | False |
| use_attn_renorm | True |
| use_s2_act_attn | False |
| Joint Module Parameters | |
| num_EquiformerV2_layers | 2 |
| attention_channels | 24 |
| num_attention_heads | 2 |
| radius_graph_cutoff | 5.0 |
| RBF_cutoff | 5.0 |
| x1 Embedding Module Parameters | |
| num_EquiformerV2_layers | 4 |
| attention_channels | 32 |
| num_attention_heads | 4 |
| ffn_hidden_channels | 64 |
| radius_graph_cutoff | ∞ (fully connected) |
| RBF_cutoff | 5.0 |
| x2, x3, x4 Embedding Module Parameters | |
| num_EquiformerV2_layers | 2 |
| attention_channels | 24 |
| num_attention_heads | 2 |
| ffn_hidden_channels | 32 |
| radius_graph_cutoff | 5.0 |
| RBF_cutoff | 5.0 |
| x3_scalar_RBF_expansion_min | -10.0 |
| x3_scalar_RBF_expansion_max | 10.0 |
| x1 Denoising Module Parameters | |
| MLP_hidden_dim | 64 |
| num_MLP_hidden_layers | 2 |
| e3nn_ffn_hidden_channels | 32 |
| egnn_normalize_vectors | True |
| egnn_distance_expansion_dim | 32 |
| x2, x3, x4 Denoising Module Parameters | |
| MLP_hidden_dim | 64 |
| num_MLP_hidden_layers | 2 |
| e3nn_ffn_hidden_channels | 32 |
| Model | avgSim_s(↑) | avgSim_g(↓) | QED(↑) |
| ShEPhERD | 0.799 ± 0.058 | 0.223 ± 0.065 | 0.723 |
| SQUID (λ = 1.0) | 0.700 ± 0.107 | 0.245 ± 0.078 | 0.760 |
| SQUID (λ = 0.3) | 0.735 ± 0.117 | 0.346 ± 0.161 | 0.766 |
| ShapeMol* | 0.689 ± 0.044 | 0.239 ± 0.042 | 0.748 |
| ShapeMol+g* | 0.746 ± 0.036 | 0.241 ± 0.050 | 0.749 |
| Model | 1iep | 3eml | 3ny8 | 4rlu | 4unn | 5mo4 | 7l11 |
| Redocked PDB ligand | -9.58 | -9.14 | -8.62 | -8.70 | -9.24 | -9.95 | -9.10 |
| ShEPhERD | -12.25 ± 0.18 | -10.80 ± 0.23 | -10.69 ± 0.23 | -10.19 ± 0.23 | -10.60 ± 0.27 | -10.66 ± 0.20 | -9.44 ± 0.19 |
| 3DSBDD | -9.05 ± 0.38 | -10.02 ± 0.15 | -10.10 ± 0.24 | -9.80 ± 0.55 | -8.23 ± 0.30 | -8.71 ± 0.45 | -8.47 ± 0.18 |
| AutoGrow4 | -13.23 ± 0.11 | -13.03 ± 0.09 | -11.70 ± 0.00 | -11.20 ± 0.00 | -11.14 ± 0.12 | -10.38 ± 0.27 | -8.84 ± 0.33 |
| Pocket2Mol | -10.17 ± 0.53 | -12.25 ± 0.27 | -11.89 ± 0.16 | -10.57 ± 0.12 | -12.20 ± 0.34 | -10.07 ± 0.62 | -9.74 ± 0.38 |
| PocketFlow | -12.49 ± 0.70 | -9.25 ± 0.29 | -8.56 ± 0.35 | -9.65 ± 0.25 | -7.90 ± 0.78 | -7.80 ± 0.42 | -8.35 ± 0.31 |
| ResGen | -10.97 ± 0.29 | -9.25 ± 0.95 | -10.96 ± 0.42 | -11.75 ± 0.42 | -9.41 ± 0.23 | -10.34 ± 0.39 | -8.74 ± 0.24 |
| Unconditional CIFAR-10 | ||
| METHOD | NFE (↓) | FID (↓) |
| Diffusion models & Fast Samplers | ||
| Score SDE (deep) (Song et al., 2021b) | 2000 | 2.20 |
| EDM (Karras et al., 2022) | 35 | 2.01 |
| Flow Matching (Lipman et al., 2022) | 142 | 6.35 |
| OT-CFM (Tong et al., 2023) | 1000 | 3.57 |
| DPM-Solver (Lu et al., 2022a) | 10 | 4.70 |
| DPM-Solver++ (Lu et al., 2022b) | 10 | 2.91 |
| DPM-Solver-v3 (Zheng et al., 2023c) | 10 | 2.51 |
| Joint Training | ||
| Diffusion GAN (Xiao et al., 2022) | 4 | 3.75 |
| Diffusion StyleGAN (Wang et al., 2022) | 1 | 3.19 |
| StyleGAN-XL (Sauer et al., 2022) | 1 | 1.52 |
| CTM (Kim et al., 2023) | 1 | 1.87 |
| Diff-Instruct (Luo et al., 2024) | 1 | 4.53 |
| DMD (Yin et al., 2024b) | 1 | 3.77 |
| SiD (Zhou et al., 2024) | 1 | 1.92 |
| Diffusion Distillation | ||
| DFNO (LPIPS) (Zheng et al., 2023b) | 1 | 3.78 |
| 2-Rectified Flow (Liu et al., 2022) | 1 | 4.85 |
| PID (LPIPS) (Tee et al., 2024) | 1 | 3.92 |
| Consistency-FM (Yang et al., 2024) | 2 | 5.34 |
| PD (Salimans & Ho, 2022) | 1 | 8.34 |
| 2 | 5.58 | |
| TRACT (Berthelot et al., 2023) | 1 | 3.78 |
| 2 | 3.32 | |
| CD (LPIPS) (Song et al., 2023) | 1 | 3.55 |
| 2 | 2.93 | |
| sCD (ours) | 1 | 3.66 |
| 2 | 2.52 | |
| Consistency Training | ||
| iCT (Song & Dhariwal, 2023) | 1 | 2.83 |
| 2 | 2.46 | |
| iCT-deep (Song & Dhariwal, 2023) | 1 | 2.51 |
| 2 | 2.24 | |
| ECT (Geng et al., 2024) | 1 | 3.60 |
| 2 | 2.11 | |
| sCT (ours) | 1 | 2.85 |
| 2 | 2.06 | |
| METHOD | NFE (↓) | FID (↓) |
| Diffusion models & Fast Samplers | ||
| ADM (Dhariwal & Nichol, 2021) | 250 | 2.07 |
| RIN (Jabri et al., 2022) | 1000 | 1.23 |
| DPM-Solver (Lu et al., 2022a) | 20 | 3.42 |
| EDM (Heun) (Karras et al., 2022) | 79 | 2.44 |
| EDM2 (Heun) (Karras et al., 2024) | 63 | 1.33 |
| Joint Training | ||
| StyleGAN-XL (Sauer et al., 2022) | 1 | 1.52 |
| Diff-Instruct (Luo et al., 2024) | 1 | 5.57 |
| EMD (Xie et al., 2024b) | 1 | 2.20 |
| DMD (Yin et al., 2024b) | 1 | 2.62 |
| DMD2 (Yin et al., 2024a) | 1 | 1.28 |
| SiD (Zhou et al., 2024) | 1 | 1.52 |
| CTM (Kim et al., 2023) | 1 | 1.92 |
| 2 | 1.73 | |
| Moment Matching (Salimans et al., 2024) | 1 | 3.00 |
| 2 | 3.86 | |
| Diffusion Distillation | ||
| DFNO (LPIPS) (Zheng et al., 2023b) | 1 | 7.83 |
| PID (LPIPS) (Tee et al., 2024) | 1 | 9.49 |
| TRACT (Berthelot et al., 2023) | 1 | 7.43 |
| 2 | 4.97 | |
| PD (Salimans & Ho, 2022) | 1 | 10.70 |
| (reimpl. from Heek et al. (2024)) | 2 | 4.70 |
| CD (LPIPS) (Song et al., 2023) | 1 | 6.20 |
| 2 | 4.70 | |
| MultiStep-CD (Heek et al., 2024) | 1 | 3.20 |
| 2 | 1.90 | |
| sCD (ours) | 1 | 2.44 |
| 2 | 1.66 | |
| Consistency Training | ||
| iCT (Song & Dhariwal, 2023) | 1 | 4.02 |
| 2 | 3.20 | |
| iCT-deep (Song & Dhariwal, 2023) | 1 | 3.25 |
| 2 | 2.77 | |
| ECT (Geng et al., 2024) | 1 | 2.49 |
| 2 | 1.67 | |
| sCT (ours) | 1 | 2.04 |
| 2 | 1.48 | |
| METHOD | NFE (↓) | FID (↓) | #Params | METHOD | NFE (↓) | FID (↓) | #Params |
| Diffusion models | †Teacher Diffusion Model | ||||||
| ADM-G (Dhariwal & Nichol, 2021) | 250×2 | 7.72 | 559M | EDM2-S (Karras et al., 2024) | 63×2 | 2.29 | 280M |
| RIN (Jabri et al., 2022) | 1000 | 3.95 | 320M | EDM2-M (Karras et al., 2024) | 63×2 | 2.00 | 498M |
| U-ViT-H/4 (Bao et al., 2023) | 250×2 | 4.05 | 501M | EDM2-L (Karras et al., 2024) | 63×2 | 1.87 | 778M |
| DiT-XL/2 (Peebles & Xie, 2023) | 250×2 | 3.04 | 675M | EDM2-XL (Karras et al., 2024) | 63×2 | 1.80 | 1.1B |
| SimDiff (Hoogeboom et al., 2023) | 512×2 | 3.02 | 2B | EDM2-XXL (Karras et al., 2024) | 63×2 | 1.73 | 1.5B |
| VDM++ (Kingma & Gao, 2024) | 512×2 | 2.65 | 2B | Consistency Training (sCT, ours) | |||
| DiffiT (Hatamizadeh et al., 2023) | 250×2 | 2.67 | 561M | sCT-S (ours) | 1 | 10.13 | 280M |
| DiMR-XL/3R (Liu et al., 2024) | 250×2 | 2.89 | 525M | 2 | 9.86 | 280M |
| DiffuSSM-XL (Yan et al., 2024) | 250×2 | 3.41 | 673M | sCT-M (ours) | 1 | 5.84 | 498M |
| DiM-H (Teng et al., 2024) | 250×2 | 3.78 | 860M | | | | |
| U-DiT (Tian et al., 2024b) | 250 | 15.39 | 204M | 2 | 5.53 | 498M | |
| SiT-XL (Ma et al., 2024) | 250×2 | 2.62 | 675M | sCT-L (ours) | 1 | 5.15 | 778M |
| Large-DiT (Alpha-VLLM, 2024) | 250×2 | 2.52 | 3B | 2 | 4.65 | 778M | |
| MaskDiT (Zheng et al., 2023a) | 79×2 | 2.50 | 736M | sCT-XL (ours) | 1 | 4.33 | 1.1B |
| DiS-H/2 (Fei et al., 2024a) | 250×2 | 2.88 | 900M | 2 | 3.73 | 1.1B | |
| DRWKV-H/2 (Fei et al., 2024b) | 250×2 | 2.95 | 779M | sCT-XXL (ours) | 1 | 4.29 | 1.5B |
| EDM2-S (Karras et al., 2024) | 63×2 | 2.23 | 280M | 2 | 3.76 | 1.5B | |
| EDM2-M (Karras et al., 2024) | 63×2 | 2.01 | 498M | Consistency Distillation (sCD, ours) | |||
| EDM2-L (Karras et al., 2024) | 63×2 | 1.88 | 778M | sCD-S | 1 | 3.07 | 280M |
| EDM2-XL (Karras et al., 2024) | 63×2 | 1.85 | 1.1B | 2 | 2.50 | 280M | |
| EDM2-XXL (Karras et al., 2024) | 63×2 | 1.81 | 1.5B | sCD-M | 1 | 2.75 | 498M |
| GANs & Masked Models | 2 | 2.26 | 498M | ||||
| BigGAN (Brock, 2018) | 1 | 8.43 | 160M | sCD-L | 1 | 2.55 | 778M |
| StyleGAN-XL (Sauer et al., 2022) | 1×2 | 2.41 | 168M | 2 | 2.04 | 778M | |
| VQGAN (Esser et al., 2021) | 1024 | 26.52 | 227M | ||||
| MaskGIT (Chang et al., 2022) | 12 | 7.32 | 227M | sCD-XL | 1 | 2.40 | 1.1B |
| MAGVIT-v2 (Yu et al., 2023) | 64×2 | 1.91 | 307M | 2 | 1.93 | 1.1B | |
| MAR (Li et al., 2024) | 64×2 | 1.73 | 481M | sCD-XXL | 1 | 2.28 | 1.5B |
| VAR-d36-s (Tian et al., 2024a) | 10×2 | 2.63 | 2.3B | 2 | 1.88 | 1.5B | |
| Training Iterations | 100k | 200k | 400k |
| ECT | 4.54 | 3.86 | 3.60 |
| sCT (ours) | 3.97 | 3.51 | 3.09 |
| Model Size | ||||
| S | M | L | XL | |
| Model details | ||||
| Batch size | 2048 | 2048 | 2048 | 2048 |
| Channel multiplier | 192 | 256 | 320 | 384 |
| Time embedding layer | positional | positional | positional | positional |
| noise conditioning cnoise(t) | t | t | t | t |
| adaptive double normalization | ✓ | ✓ | ✓ | ✓ |
| Learning rate decay (tref) | 35000 | 35000 | 35000 | 35000 |
| Adam β1 | 0.9 | 0.9 | 0.9 | 0.9 |
| Adam β2 | 0.99 | 0.99 | 0.99 | 0.99 |
| Adam ε | 1.0e-11 | 1.0e-11 | 1.0e-11 | 1.0e-11 |
| Model capacity (Mparams) | 280.2 | 497.8 | 777.6 | 1119.4 |
| Training details of diffusion models (TrigFlow) | ||||
| Training iterations | 1048k | 1486k | 761k | 540k |
| Learning rate max (αref) | 1.0e-2 | 9.0e-3 | 8.0e-3 | 7.0e-3 |
| Dropout probability | 0% | 10% | 10% | 10% |
| Proposal Pmean | -0.8 | -0.8 | -0.8 | -0.8 |
| Proposal Pstd. | 1.6 | 1.6 | 1.6 | 1.6 |
| Shared details of consistency models | ||||
| Learning rate max (αref) | 1.0e-4 | 9.0e-5 | 8.0e-5 | 7.0e-5 |
| Proposal Pmean | -1.0 | -1.0 | -1.0 | -1.0 |
| Proposal Pstd. | 1.6 | 1.6 | 1.6 | 1.6 |
| Tangent normalization constant (c) | 0.1 | 0.1 | 0.1 | 0.1 |
| Tangent warm up iterations | 10k | 10k | 10k | 10k |
| EMA length (σrel) of pretrained diffusion | 0.075 | 0.06 | 0.04 | 0.04 |
| Training details of sCT | ||||
| Training iterations | 400k | 400k | 400k | 400k |
| Dropout probability for resolution ≤ 16 | 45% | 45% | 45% | 45% |
| Training details of sCD | ||||
| Training iterations | 400k | 400k | 400k | 400k |
| Dropout probability for resolution ≤ 16 | 0% | 0% | 0% | 0% |
| Model Size | |||||
| S | M | L | XL | XXL | |
| Model details | |||||
| Batch size | 2048 | 2048 | 2048 | 2048 | 2048 |
| Channel multiplier | 192 | 256 | 320 | 384 | 448 |
| Time embedding layer | positional | positional | positional | positional | positional |
| noise conditioning cnoise(t) | t | t | t | t | t |
| adaptive double normalization | ✓ | ✓ | ✓ | ✓ | ✓ |
| Learning rate decay (tref) | 70000 | 70000 | 70000 | 70000 | 70000 |
| Adam β1 | 0.9 | 0.9 | 0.9 | 0.9 | 0.9 |
| Adam β2 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 |
| Adam ε | 1.0e-11 | 1.0e-11 | 1.0e-11 | 1.0e-11 | 1.0e-11 |
| Model capacity (Mparams) | 280.2 | 497.8 | 777.6 | 1119.4 | 1523.4 |
| Training details of diffusion models (TrigFlow) | |||||
| Training iterations | 1048k | 1048k | 696k | 598k | 376k |
| Learning rate max (αref) | 1.0e-2 | 9.0e-3 | 8.0e-3 | 7.0e-3 | 6.5e-3 |
| Dropout probability | 0% | 10% | 10% | 10% | 10% |
| Proposal Pmean | -0.4 | -0.4 | -0.4 | -0.4 | -0.4 |
| Proposal Pstd. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| Shared details of consistency models | |||||
| Learning rate max (αref) | 1.0e-4 | 9.0e-5 | 8.0e-5 | 7.0e-5 | 6.5e-5 |
| Proposal Pmean | -0.8 | -0.8 | -0.8 | -0.8 | -0.8 |
| Proposal Pstd. | 1.6 | 1.6 | 1.6 | 1.6 | 1.6 |
| Tangent normalization constant (c) | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 |
| Tangent warm up iterations | 10k | 10k | 10k | 10k | 10k |
| EMA length (σrel) of pretrained diffusion | 0.025 | 0.03 | 0.015 | 0.02 | 0.015 |
| Training details of sCT | |||||
| Training iterations | 100k | 100k | 100k | 100k | 100k |
| Dropout probability for resolution ≤ 16 | 25% | 35% | 35% | 35% | 35% |
| Training details of sCD | |||||
| Training iterations | 200k | 200k | 200k | 200k | 200k |
| Dropout probability for resolution ≤ 16 | 0% | 10% | 10% | 10% | 10% |
| Maximum of CFG scale | 2.0 | 2.0 | 2.0 | 2.0 | 2.0 |
| Training details of sCD with adaptive VSD | |||||
| Training iterations | 20k | 20k | 20k | 20k | 20k |
| Learning rate max (αref) for Fφ | 1.0e-4 | 9.0e-5 | 8.0e-5 | 7.0e-5 | 6.5e-5 |
| Dropout probability for Fφ | 0% | 10% | 10% | 10% | 10% |
| Proposal Pmean for LDiff(φ) | -0.8 | -0.8 | -0.8 | -0.8 | -0.8 |
| Proposal Pstd. for LDiff(φ) | 1.6 | 1.6 | 1.6 | 1.6 | 1.6 |
| Number of updating of φ per updating of θ | 1 | 1 | 1 | 1 | 1 |
| One-step sampling starting time tmax | arctan(80/σd) | arctan(80/σd) | arctan(80/σd) | arctan(80/σd) | arctan(80/σd) |
| Proposal Pmean for LVSD(θ) | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 |
| Proposal Pstd. for LVSD(θ) | 2.0 | 2.0 | 2.0 | 2.0 | 2.0 |
| Loss weighting λVSD for LVSD | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
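The one-step sampling start time tmax = arctan(80/σd) in the table above comes from TrigFlow's trigonometric time parameterization, under which a consistency model maps noise straight to a sample. A minimal sketch of that one-step map, where the network `F` is a stand-in for the trained model and σd = 0.5 (the usual EDM data standard deviation) is an assumption:

```python
import numpy as np

def one_step_sample(F, sigma_d=0.5, shape=(4,), seed=0):
    """One-step TrigFlow consistency sampling (sketch).

    x at t_max = arctan(80 / sigma_d) is pure noise; the consistency
    model maps it directly to a sample via
        f(x_t, t) = cos(t) * x_t - sin(t) * sigma_d * F(x_t / sigma_d, t).
    `F` is a placeholder for the trained denoising network.
    """
    rng = np.random.default_rng(seed)
    t_max = np.arctan(80.0 / sigma_d)
    x_t = sigma_d * rng.standard_normal(shape)  # x at t_max ~ N(0, sigma_d^2 I)
    return np.cos(t_max) * x_t - np.sin(t_max) * sigma_d * F(x_t / sigma_d, t_max)

# dummy network that predicts zero "velocity", just to exercise the formula
sample = one_step_sample(lambda x, t: np.zeros_like(x))
```

With σd = 0.5, tmax = arctan(160) ≈ 1.564, i.e. almost π/2, so the cos(t)·x_t term nearly vanishes and the sample is dominated by the network output.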
| Model Size | |||||
| S | M | L | XL | XXL | |
| Sampling by diffusion models (NFE = 126) | |||||
| EMA length (σrel) | 0.025 | 0.030 | 0.015 | 0.020 | 0.015 |
| Guidance scale for FID | 1.4 | 1.2 | 1.2 | 1.2 | 1.2 |
| †Guidance scale for FDINOV2 | 2.0 | 1.8 | 1.8 | 1.8 | 1.8 |
| FID (TrigFlow) | 2.29 | 2.00 | 1.87 | 1.80 | 1.73 |
| FID (EDM2) | 2.23 | 2.01 | 1.88 | 1.85 | 1.81 |
| FDINOV2(TrigFlow) | 52.08 | 43.33 | 39.23 | 36.73 | 35.93 |
| †FDINOV2(EDM2) with σrel for FDINOV2 | 52.32 | 41.98 | 38.20 | 35.67 | 33.09 |
| Sampling by consistency models trained with sCT | |||||
| Intermediate time tmid in 2-step sampling | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
| 1-step FID | 10.13 | 5.84 | 5.15 | 4.33 | 4.29 |
| 2-step FID | 9.86 | 5.53 | 4.65 | 3.73 | 3.76 |
| 1-step FDINOV2 | 278.35 | 192.13 | 169.98 | 147.06 | 146.31 |
| 2-step FDINOV2 | 244.41 | 160.66 | 135.80 | 114.65 | 112.69 |
| Sampling by consistency models trained with sCD | |||||
| Intermediate time tmid in 2-step sampling | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
| Guidance scale for FID, 1-step sampling | 1.5 | 1.3 | 1.3 | 1.3 | 1.3 |
| Guidance scale for FID, 2-step sampling | 1.4 | 1.2 | 1.2 | 1.2 | 1.2 |
| Guidance scale for FDINOV2, 1-step sampling | 2.0 | 2.0 | 2.0 | 2.0 | 2.0 |
| Guidance scale for FDINOV2, 2-step sampling | 2.0 | 2.0 | 1.9 | 1.9 | 1.9 |
| 1-step FID | 3.07 | 2.75 | 2.55 | 2.40 | 2.28 |
| 2-step FID | 2.50 | 2.26 | 2.04 | 1.93 | 1.88 |
| 1-step FDINOV2 | 104.22 | 83.78 | 76.10 | 70.30 | 67.80 |
| 2-step FDINOV2 | 71.15 | 55.70 | 50.63 | 46.66 | 44.97 |
| Sampling by consistency models trained with multistep sCD | |||||
| Guidance scale for FID | 1.4 | 1.2 | 1.2 | 1.15 | 1.15 |
| Guidance scale for FDINOV2 | 2.0 | 2.0 | 2.0 | 1.9 | 1.9 |
| FID, M = 2 | 2.79 | 2.51 | 2.32 | 2.29 | 2.16 |
| FID, M = 4 | 2.78 | 2.46 | 2.28 | 2.22 | 2.10 |
| FID, M = 8 | 2.49 | 2.24 | 2.04 | 2.02 | 1.90 |
| FID, M = 16 | 2.34 | 2.18 | 1.99 | 1.90 | 1.82 |
| FDINOV2, M = 2 | 76.29 | 60.47 | 54.91 | 51.91 | 50.70 |
| FDINOV2, M = 4 | 72.01 | 56.38 | 50.99 | 47.61 | 46.78 |
| FDINOV2, M = 8 | 60.13 | 49.46 | 44.87 | 41.26 | 40.56 |
| FDINOV2, M = 16 | 55.89 | 46.94 | 42.55 | 39.30 | 38.55 |
| Sampling by consistency models trained with sCD + adaptive VSD | |||||
| Intermediate time tmid in 2-step sampling | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
| Guidance scale for FID, 1-step sampling | 1.2 | 1.0 | 1.0 | 1.0 | 1.0 |
| Guidance scale for FID, 2-step sampling | 1.2 | 1.0 | 1.0 | 1.0 | 1.0 |
| Guidance scale for FDINOV2, 1-step sampling | 1.7 | 1.5 | 1.6 | 1.5 | 1.5 |
| Guidance scale for FDINOV2, 2-step sampling | 1.7 | 1.5 | 1.6 | 1.5 | 1.5 |
| 1-step FID | 3.37 | 2.67 | 2.26 | 2.39 | 2.16 |
| 2-step FID | 2.70 | 2.29 | 1.99 | 2.01 | 1.89 |
| 1-step FDINOV2 | 72.12 | 54.81 | 50.46 | 48.11 | 45.54 |
| 2-step FDINOV2 | 69.00 | 53.53 | 48.54 | 46.61 | 43.93 |
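The "Intermediate time tmid in 2-step sampling" rows above refer to two-step consistency sampling: one full denoise from tmax, then re-noising the result to tmid = 1.1 and denoising once more. A hedged numpy sketch under the TrigFlow parameterization, with the consistency function `f` as a placeholder and σd = 0.5 assumed:

```python
import numpy as np

def two_step_sample(f, sigma_d=0.5, t_mid=1.1, shape=(4,), seed=0):
    """Two-step consistency sampling (sketch).

    Step 1: map pure noise at t_max directly to a sample with f.
    Step 2: re-noise that sample to the intermediate time t_mid
            (x_t = cos(t) * x0 + sin(t) * z under TrigFlow) and apply f again.
    """
    rng = np.random.default_rng(seed)
    t_max = np.arctan(80.0 / sigma_d)
    x = sigma_d * rng.standard_normal(shape)
    x0 = f(x, t_max)                          # first denoising step
    z = sigma_d * rng.standard_normal(shape)  # fresh noise for re-noising
    x_mid = np.cos(t_mid) * x0 + np.sin(t_mid) * z
    return f(x_mid, t_mid)                    # second denoising step
```

The second step trades one extra network evaluation for the consistent FID gains visible in the 1-step vs 2-step rows above.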
| Method | |||
| VSD | sCD | sCD + VSD | |
| EMA length (σrel) | 0.05 | 0.05 | 0.05 |
| Guidance scale for FID, 1-step sampling | 1.1 | 1.3 | 1.0 |
| Guidance scale for FID, 2-step sampling | \ | 1.2 | 1.0 |
| Guidance scale for FDINOV2, 1-step sampling | 1.4 | 2.0 | 1.5 |
| Guidance scale for FDINOV2, 2-step sampling | \ | 2.0 | 1.5 |
| 1-step FID | 3.02 | 2.75 | 2.67 |
| 2-step FID | \ | 2.26 | 2.29 |
| 1-step FDINOV2 | 57.19 | 83.78 | 54.81 |
| 2-step FDINOV2 | \ | 55.70 | 53.53 |
| Model Size | ||||
| S | M | L | XL | |
| Sampling by diffusion models (NFE=63) | ||||
| EMA length (σrel) | 0.075 | 0.06 | 0.04 | 0.04 |
| FID (TrigFlow) | 1.70 | 1.55 | 1.44 | 1.38 |
| Sampling by consistency models trained with sCT | ||||
| Intermediate time tmid in 2-step sampling | 1.1 | 1.1 | 1.1 | 1.1 |
| 1-step FID | 3.23 | 2.25 | 2.08 | 2.04 |
| 2-step FID | 2.93 | 1.81 | 1.57 | 1.48 |
| Sampling by consistency models trained with sCD | ||||
| Intermediate time tmid in 2-step sampling | 1.1 | 1.1 | 1.1 | 1.1 |
| 1-step FID | 2.97 | 2.79 | 2.43 | 2.44 |
| 2-step FID | 2.07 | 1.89 | 1.70 | 1.66 |
| Structure | Example clean input | Example output |
| Simple | The parents | p(is) - p(are) |
| Within RC | The athlete that the managers | p(likes) - p(like) |
| Across RC | The athlete that the managers like | p(do) - p(does) |
| Across PP | The secretaries near the cars | p(has) - p(have) |
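Each "Example output" above is a difference of next-token probabilities between two inflections of the same verb, scoring whether the model prefers the form that agrees with the subject. Given a next-token distribution, the score is a subtraction; a toy sketch (the probability table is made up for illustration):

```python
def agreement_score(next_token_probs, correct, incorrect):
    """p(one verb form) - p(the other), e.g. p(is) - p(are).

    A positive score means the model assigns more probability to the
    first (here, grammatical) continuation.
    """
    return next_token_probs.get(correct, 0.0) - next_token_probs.get(incorrect, 0.0)

# hypothetical next-token distribution after the prefix "The parents"
probs = {"are": 0.40, "is": 0.05, "went": 0.10}
score = agreement_score(probs, correct="are", incorrect="is")  # 0.35
```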
| Method | Pythia-70M | Gemma-2-2B | ||||
| ↑Profession | ↓Gender | ↑Worst group | ↑Profession | ↓Gender | ↑Worst group | |
| Original | 61.9 | 87.4 | 24.4 | 67.7 | 81.9 | 18.2 |
| CBP | 83.3 | 60.1 | 67.7 | 90.2 | 50.1 | 86.7 |
| Random | 61.8 | 87.5 | 24.4 | 67.3 | 82.3 | 18.0 |
| SHIFT | 88.5 | 54.0 | 76.0 | 76.0 | 51.5 | 50.0 |
| SHIFT + retrain | 93.1 | 52.0 | 89.0 | 95.0 | 52.4 | 92.9 |
| Neuron skyline | 75.5 | 73.2 | 41.5 | 65.1 | 84.3 | 5.6 |
| Feature skyline | 88.5 | 54.3 | 62.9 | 80.8 | 53.7 | 56.7 |
| Oracle | 93.0 | 49.4 | 91.9 | 95.0 | 50.6 | 93.1 |
| % Variance Explained | L1 | L0 | % Alive | CE Diff | % CE Recovered |
| 96% | 1 | 3 | 36% | 0.17 | 98% |
| Layer | % Variance Explained | L1 | L0 | % Alive | CE Diff | % CE Recovered |
| Attn 0 | 92% | 8 | 128 | 17% | 0.02 | 99% |
| Attn 1 | 87% | 9 | 127 | 17% | 0.03 | 94% |
| Attn 2 | 90% | 19 | 215 | 12% | 0.05 | 93% |
| Attn 3 | 89% | 12 | 169 | 13% | 0.03 | 93% |
| Attn 4 | 83% | 8 | 132 | 14% | 0.01 | 95% |
| Attn 5 | 89% | 11 | 144 | 20% | 0.02 | 93% |
| Layer | % Variance Explained | L1 | L0 | % Alive | CE Diff | % CE Recovered |
| MLP 0 | 97% | 5 | 5 | 40% | 0.10 | 99% |
| MLP 1 | 85% | 8 | 69 | 44% | 0.06 | 95% |
| MLP 2 | 99% | 12 | 88 | 31% | 0.11 | 88% |
| MLP 3 | 88% | 20 | 160 | 25% | 0.12 | 94% |
| MLP 4 | 92% | 20 | 100 | 29% | 0.14 | 90% |
| MLP 5 | 96% | 31 | 102 | 35% | 0.15 | 97% |
| Layer | % Variance Explained | L1 | L0 | % Alive | CE Diff | % CE Recovered |
| Resid 0 | 92% | 11 | 59 | 41% | 0.24 | 97% |
| Resid 1 | 85% | 13 | 54 | 38% | 0.45 | 95% |
| Resid 2 | 96% | 24 | 108 | 27% | 0.55 | 94% |
| Resid 3 | 96% | 23 | 68 | 22% | 0.58 | 95% |
| Resid 4 | 88% | 23 | 61 | 27% | 0.48 | 95% |
| Resid 5 | 90% | 35 | 72 | 45% | 0.55 | 92% |
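In the SAE tables above, "CE Diff" is the increase in cross-entropy loss when an activation is replaced by its SAE reconstruction, and "% CE Recovered" is commonly measured relative to zero-ablation. A sketch under that assumption (this is the standard definition in the SAE literature; the paper's exact variant may differ):

```python
def ce_recovered(ce_orig, ce_sae, ce_zero):
    """Fraction of the loss gap recovered by the SAE reconstruction.

    ce_orig: loss with the original activations
    ce_sae:  loss with activations replaced by SAE reconstructions
    ce_zero: loss with the activations zero-ablated
    Returns 1.0 when the SAE is lossless, 0.0 when it is as bad as ablation.
    """
    return (ce_zero - ce_sae) / (ce_zero - ce_orig)

frac = ce_recovered(3.0, 3.1, 5.0)  # 0.95, i.e. 95% CE recovered
```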
| Activation type | Interpretability |
| Dense (random) | 32.6 |
| Dense (agreement) | 30.2 |
| Dense (BiB) | 36.0 |
| Sparse (random) | 52.8 |
| Sparse (agreement) | 62.3 |
| Sparse (BiB) | 81.5 |
| Rewrite | Categories | Example |
| Surface | Answer format | Tab. 13, replace the single channel with a channel ranking by sessions. |
| Condition parameters | Tab. 14, more complex filter condition: citibike is faster than a taxi. | |
| Advanced calculation | Tab. 15, calculate originality score based on selected publications. | |
| Semantic | Advanced requirements | Tab. 16, change page view order to page conversion rate. |
| Merge related SQLs | Tab. 17, merge geography-related and weather-related queries. | |
| SQL codebase files | App. B.2, change SQL and YML files in the original project. |
| Dataset | # Test Examples | # Test DB | # Col / DB | # Tok. / SQL | # Func. / SQL | External Knowledge | SQL Dialect | Project Level |
| WikiSQL (Zhong et al., 2017) | 15,878 | 5,230 | 6.3 | 12.2 | 0.0 | X | X | X |
| Spider 1.0 (Yu et al., 2018) | 2,147 | 40 | 27.1 | 18.5 | 0.0* | X | X | X |
| KaggleDBQA (Lee et al., 2021) | 272 | 8 | 23.4 | 13.8 | 0.0 | ✓ | X | X |
| SEDE (Hazoom et al., 2021) | 857 | 1 | 212.0 | 46.9 | 1.4 | X | X | X |
| BIRD (Li et al., 2024b) | 1,789 | 15 | 54.2 | 30.9 | 0.4* | ✓ | X | X |
| Spider 2.0-lite | 547 | 158 | 803.6 | 144.5 | 6.5 | ✓ | ✓ | X |
| Spider 2.0-snow | 547 | 152 | 812.1 | 161.8 | 6.8 | ✓ | ✓ | X |
| Spider 2.0 | 632 | 213 | 743.5 | 148.3 | 7.1 | ✓ | ✓ | ✓ |
| Statistics | Number (% of Total) |
| Total Levels (#tokens) | 632 (100%) |
| - Easy (#tokens < 80) | 160 (25.32%) |
| - Medium (80 ≤ #tokens < 160) | 279 (44.15%) |
| - Hard (#tokens ≥ 160) | 193 (30.54%) |
| - With Bigquery | 214 (33.86%) |
| - With Snowflake | 198 (31.33%) |
| - With SQLite | 135 (21.36%) |
| - With DuckDB | 68 (10.76%) |
| - With Postgres | 10 (1.58%) |
| - With Clickhouse | 7 (1.11%) |
| - With Project-level (DBT) | 78 (12.34%) |
| - With Documentation | 82 (12.97%) |
| - With Functions | 474 (75.00%) |
| - With Partition Tables | 54 (8.54%) |
| - With Multiple Schemas | 140 (22.15%) |
| - With Nested Schemas | 117 (18.51%) |
| - With String/Number Answer | 162 (25.63%) |
| - With Table Answer | 392 (62.03%) |
| - With Database Answer | 78 (12.34%) |
| Model | Spider 2.0-Lite | Spider 2.0-Snow | ||||||
| Easy | Medium | Hard | Overall | Easy | Medium | Hard | Overall | |
| o1-preview | 33.59% | 23.58% | 15.03% | 23.22% | 39.84% | 21.14% | 15.61% | 23.77% |
| o3-mini | 32.03% | 26.02% | 13.87% | 23.40% | 31.25% | 18.29% | 11.56% | 19.20% |
| Claude-3.5-Sonnet | 26.56% | 15.85% | 6.94% | 15.54% | 25.00% | 16.26% | 7.51% | 15.54% |
| GPT-4o | 22.66% | 13.41% | 5.78% | 13.16% | 24.22% | 11.38% | 6.94% | 12.98% |
| DeepSeek-V3 | 19.53% | 6.50% | 4.05% | 8.78% | 20.31% | 6.10% | 4.05% | 8.78% |
| Qwen2.5-Coder | 13.89% | 4.17% | 3.38% | 5.30% | 11.72% | 4.47% | 2.31% | 5.48% |
| Method | EX (↑) | ||||||
| Spider 1.0 | BIRD | Spider 2.0-snow | Spider 2.0-lite | ||||
| Easy | Medium | Hard | Overall | ||||
| DIN-SQL + GPT-4o | 85.3% | 55.9% | 0.00% | 5.79% | 0.43% | 0.00% | 1.46% |
| DAIL-SQL + GPT-4o | 86.6% | 57.4% | 2.20% | 13.20% | 5.58% | 1.24% | 5.68% |
| CHESS + GPT-4o | 87.2% | 66.7% | 1.28% | 9.92% | 3.00% | 1.24% | 3.84% |
| SFT CodeS-15B | 85.4% | 59.3% | 0.00% | 1.65% | 0.86% | 0.00% | 0.73% |
| Framework | Model | SR (↑) |
| AutoEval | GPT-4o | 5.70% |
| Reflexion | GPT-4o | 7.28% |
| CodeR | GPT-4o | 7.91% |
| Spider-Agent | o1-preview | 21.36% |
| Claude-3.5-Sonnet | 14.87% | |
| GPT-4o | 12.34% | |
| GPT-4 | 9.86% | |
| Qwen2.5-72B | 6.17% | |
| DeepSeek-V2.5 | 5.22% | |
| Gemini-Pro-1.5 | 2.53% | |
| Llama-3.1-405B | 2.21% |
| Task Subset | % of Total | SR (↑) |
| w/ Nested Columns | 18.51% | 10.34% |
| w/o Nested Columns | 68.04% | 27.38% |
| Task Subset | % of Total | SR (↑) |
| w/ External Doc | 12.97% | 11.54% |
| w/o External Doc | 73.58% | 26.64% |
| Task Subset | % of Total | SR (↑) |
| w/ DBT Project | 12.34% | 12.82% |
| w/o DBT Project | 87.65% | 23.22% |
| Method | EX (↑) | |
| w/ Oracle Func | w/o Oracle Func | |
| DAIL-SQL + GPT-4o | 5.85% | 5.68% |
| DAIL-SQL + o1-preview | 9.51% | 12.60% |
| Method | EX (↑) | ||
| 0-shot | 1-shot | 3-shot | |
| DAIL-SQL + GPT-4o | 5.68% | 6.40% | 6.76% |
| Output Type | Description | Parameters |
| String w/o number | If the answer is found in the string, it is given a score of 1; otherwise, it receives a score of 0. | pred (str): The string in which to search for substrings. gold (List[str]): A list of strings to check within the predicted string. conj (str): The conjunction used for matching ('and' or 'or'). Default is 'or'. exclude (List[str]): Strings that must not be present in the answer. |
| String w. number | For output strings containing numbers, the script captures these numbers and performs number matching for scoring using the number_match function. | pred (str): The string in which to search for substrings. gold (List[str | float]): A list of strings or numbers to check within the predicted string. percentage (bool): Default is False. If the gold answer is related to percentages, set this to True for more robust evaluation. precision (int): The number of decimal places to consider. Defaults to 4. conj (str): The conjunction used for matching ('and' or 'or'). Default is 'or'. |
| Table | If the answer is a CSV file or a table in string format, table-level evaluation is performed. | result (str): Path to the CSV file or result string. gold (str | List[str]): Path(s) to the gold file(s), excluding the root directory. Multiple potential gold answers are supported. condition_cols (List[int] | List[List[int]]): List of column indices to match conditions. For example, [0, 1] uses the 0th and 1st columns in the gold table for matching, while ignoring the others. ignore_order (bool): Whether to ignore the order of rows when matching elements. |
| Database | If the answer is stored in a DB file, database-level evaluation is applied using the db_match function. | result (str): Path to the DuckDB file containing the result tables. gold (str): Path to the DuckDB file containing the gold standard tables. condition_tabs (List[str], optional): List of table names to be checked. If not provided, all tables in the gold DuckDB file will be considered. condition_cols (List[List[int]], optional): A list of lists, where each inner list contains column indices used for matching conditions for the corresponding table. Defaults to considering all columns. ignore_orders (List[bool], optional): A list of boolean values indicating whether to ignore the row order for each table comparison. Defaults to [False] for each table. |
| SQL | If the output is a SQL query, execution-based evaluation is used. This is primarily designed for Spider 2.0-lite. | To compare the execution results of the predicted SQL and the gold SQL, table matching is used. |
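The simplest scoring rule in the table above, substring matching with an 'and'/'or' conjunction and an exclusion list, can be sketched as follows. Parameter names follow the table; this is an illustrative re-implementation, not the benchmark's actual evaluation code:

```python
def string_match(pred, gold, conj="or", exclude=None):
    """Score 1 if the gold answer(s) appear in the predicted string, else 0.

    conj='or'  -> any gold string suffices;
    conj='and' -> every gold string must appear;
    exclude    -> strings that must NOT appear in the prediction.
    """
    exclude = exclude or []
    if any(bad in pred for bad in exclude):
        return 0
    hits = [g in pred for g in gold]
    ok = all(hits) if conj == "and" else any(hits)
    return 1 if ok else 0
```

Table- and database-level matching extend the same idea by comparing only the listed condition columns, row by row, optionally ignoring row order.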
| Question | |
| What is the publication number of the US patent granted in January 2018 with the highest originality score, based on the diversity of 4-digit IPC codes from its backward citations? |
| Reference Plan | |
| 1. Filter US Patents: Select publication numbers and application numbers from the dataset, including only records where the country code is 'US', the grant date is within January 2018, and excluding records with a grant date of 0. Additionally, only consider patents with a specific kind code pattern (e.g., %B2%). 2. Extract IPC Codes: Select the publication number and count the unique 4-digit IPC codes associated with each selected patent. 3. Identify Maximum IPC Code Count: Create a subset of records that have the maximum count of a specific 4-digit IPC code for each patent. 4. Calculate IPC Occurrences in Backward Citations: Join the filtered patents with their backward citations. For each backward citation, join with the subset of records to get the 4-digit IPC codes, counting occurrences of each IPC code in the backward citations for each patent. 5. Compute Originality Score: For each patent, calculate an originality score based on the diversity of the 4-digit IPC codes from the backward citations, using a formula that considers the sum of squared occurrences of each IPC code, normalized by the total number of occurrences. 6. Select Highest Originality Score: From the computed originality scores, select the patent with the highest score. 7. Return Result: Output the publication number of the patent with the highest originality score. | |
| Gold SQL (After rewriting) | |
```sql
WITH patents_sample AS (
  SELECT t1.publication_number, t1.application_number
  FROM `patents-public-data.patents.publications` t1
  WHERE country_code = 'US' AND grant_date BETWEEN 20180101 AND 20180131
    AND grant_date != 0 AND publication_number LIKE '%B2%'
), interim_table AS (
  SELECT t1.publication_number, SUBSTR(ipc_u.code, 0, 4) AS ipc4,
         COUNT(SUBSTR(ipc_u.code, 0, 4)) AS ipc4_count
  FROM `patents-public-data.patents.publications` t1, UNNEST(ipc) AS ipc_u
  GROUP BY t1.publication_number, ipc4
), chosen_ipc4_view AS (
  SELECT * FROM interim_table
  WHERE CONCAT(interim_table.publication_number, interim_table.ipc4_count) IN (
    SELECT CONCAT(publication_number, MAX(ipc4_count))
    FROM interim_table GROUP BY publication_number
  )
), ipc_counts AS (
  SELECT t1.publication_number, t3.ipc4, COUNT(t3.ipc4) AS ipc_occurrences
  FROM patents_sample t1
  LEFT JOIN (
    SELECT x2.publication_number AS citing_publication_number,
           citation_u.publication_number AS backward_citation
    FROM `patents-public-data.patents.publications` x2, UNNEST(citation) AS citation_u
  ) t2 ON t2.citing_publication_number = t1.publication_number
  LEFT JOIN chosen_ipc4_view t3 ON t3.publication_number = t2.backward_citation
  GROUP BY t1.publication_number, t3.ipc4
), max_originality AS (
  SELECT publication_number,
         1 - SUM(POWER(ipc_occurrences, 2)) / POWER(SUM(ipc_occurrences), 2) AS originality
  FROM ipc_counts GROUP BY publication_number
  HAVING SUM(ipc_occurrences) > 0
  ORDER BY originality DESC LIMIT 1
)
SELECT publication_number FROM max_originality
```
| Original SQL | |
```sql
WITH patents_sample AS (
  SELECT t1.publication_number, t1.application_number
  FROM `patents-public-data.patents.publications` t1
  WHERE country_code = 'US' AND grant_date BETWEEN 20180101 AND 20180131
    AND grant_date != 0 AND publication_number LIKE '%B2%'
), interim_table AS (
  SELECT t1.publication_number, SUBSTR(ipc_u.code, 0, 4) AS ipc4,
         COUNT(SUBSTR(ipc_u.code, 0, 4)) AS ipc4_count
  FROM `patents-public-data.patents.publications` t1, UNNEST(ipc) AS ipc_u
  GROUP BY t1.publication_number, ipc4
), chosen_ipc4_view AS (
  SELECT * FROM interim_table
  WHERE CONCAT(interim_table.publication_number, interim_table.ipc4_count) IN (
    SELECT CONCAT(publication_number, MAX(ipc4_count))
    FROM interim_table GROUP BY publication_number
  )
)
SELECT t1.publication_number, t3.ipc4, COUNT(t3.ipc4) AS ipc_occurrences
FROM patents_sample t1
LEFT JOIN (
  SELECT x2.publication_number AS citing_publication_number,
         citation_u.publication_number AS backward_citation
  FROM `patents-public-data.patents.publications` x2, UNNEST(citation) AS citation_u
) t2 ON t2.citing_publication_number = t1.publication_number
LEFT JOIN chosen_ipc4_view t3 ON t3.publication_number = t2.backward_citation
GROUP BY t1.publication_number, t3.ipc4
```
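The originality score computed in the plan and SQL above is one minus a Herfindahl-style concentration index over the 4-digit IPC codes of a patent's backward citations. A small sketch of the same arithmetic:

```python
def originality(ipc_occurrences):
    """1 - sum(n_i^2) / (sum(n_i))^2 over IPC-code occurrence counts n_i.

    0 when all backward citations share one IPC class (no diversity);
    approaches 1 as citations spread evenly over many classes.
    """
    total = sum(ipc_occurrences)
    if total == 0:
        return None  # the SQL filters these out with HAVING SUM(...) > 0
    return 1 - sum(n * n for n in ipc_occurrences) / total ** 2

# e.g. six citations split evenly over two IPC classes
score = originality([3, 3])  # 1 - 18/36 = 0.5
```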
| Question |
| Calculate the conversion rate from product list pages to product detail pages for all sessions on January 2nd, 2021. |
| Reference Plan |
| 1. Query the event data to retrieve all unique event names. 2. Select events data from the Google Analytics 4 (GA4) sample e-commerce dataset for the specific date (20210102). 3. Filter to include only events named page_view, which represent page views. 4. Flatten the nested event_params array and extract values for ga_session_id, ga_session_number, page_title, and page_location. This allows the analysis of individual page views within each user's session. 5. Further process the unnested event data to classify pages based on URL depth and specific keywords into either Product Detail Pages (PDP) or Product Listing Pages (PLP). 6. Apply window functions to the categorized data to calculate the previous and next pages for each session per user, facilitating analysis of navigation paths between pages. 7. Filter sessions where the current page is a PLP and the next page is a PDP. 8. Count the number of sessions transitioning from PLP to PDP and divide this by the total views of PLP pages to calculate the conversion rate. |
| Gold SQL (After rewriting) |
| WITH base_table AS (SELECT event_name, event_date, event_timestamp, user_pseudo_id, user_id, device, geo, traffic_source, event_params, user_properties FROM `bigquery-public-data.ga4_obfuscated_sample_ecommerce.events_*` WHERE _TABLE_SUFFIX BETWEEN '20210101' AND '20210131' AND event_name IN ('page_view')), unnested_events AS (SELECT event_date AS date, event_timestamp AS event_timestamp_microseconds, user_pseudo_id, MAX(CASE WHEN c.key = 'ga_session_id' THEN c.value.int_value END) AS visitID, MAX(CASE WHEN c.key = 'ga_session_number' THEN c.value.int_value END) AS visitNumber, MAX(CASE WHEN c.key = 'page_title' THEN c.value.string_value END) AS page_title, MAX(CASE WHEN c.key = 'page_location' THEN c.value.string_value END) AS page_location FROM base_table, UNNEST(event_params) c GROUP BY 1, 2, 3), unnested_events_categorised AS (SELECT *, CASE WHEN ARRAY_LENGTH(SPLIT(page_location, '/')) >= 5 AND CONTAINS_SUBSTR(ARRAY_REVERSE(SPLIT(page_location, '/'))[SAFE_OFFSET(0)], '+') AND (LOWER(SPLIT(page_location, '/')[SAFE_OFFSET(4)]) IN ('accessories', 'apparel', 'brands', 'campus+collection', 'drinkware', 'electronics', 'google+redesign', 'lifestyle', 'new+2015+logo', 'notebooks+journals', 'shop-by+brand', 'small+goods', 'stationery', 'wearables') OR LOWER(SPLIT(page_location, '/')[SAFE_OFFSET(3)]) IN ('accessories', 'apparel', 'brands', 'campus+collection', 'drinkware', 'electronics', 'google+redesign', 'lifestyle', 'nest', 'new+2015+logo', 'notebooks+journals', 'shop-by+brand', 'small+goods', 'stationery', 'wearables')) THEN 'PDP' WHEN NOT CONTAINS_SUBSTR(ARRAY_REVERSE(SPLIT(page_location, '/'))[SAFE_OFFSET(0)], '+') AND (LOWER(SPLIT(page_location, '/')[SAFE_OFFSET(4)]) IN ('accessories', 'apparel', 'brands', 'campus+collection', 'drinkware', 'electronics', 'google+redesign', 'lifestyle', 'nest', 'new+2015+logo', 'notebooks+journals', 'shop-by+brand', 'small+goods', 'stationery', 'wearables') OR LOWER(SPLIT(page_location, '/')[SAFE_OFFSET(3)]) IN ('accessories', 'apparel', 'brands', 'campus+collection', 'drinkware', 'electronics', 'google+redesign', 'lifestyle', 'nest', 'new+2015+logo', 'notebooks+journals', 'shop-by+brand', 'small+goods', 'stationery', 'wearables')) THEN 'PLP' ELSE 'Other' END AS page_category FROM unnested_events), |
| ranked_screens AS (SELECT *, LAG(page_category, 1) OVER (PARTITION BY user_pseudo_id, visitID ORDER BY event_timestamp_microseconds) AS previous_page, LEAD(page_category, 1) OVER (PARTITION BY user_pseudo_id, visitID ORDER BY event_timestamp_microseconds) AS next_page FROM unnested_events_categorised), |
| PLPtoPDPTransitions AS (SELECT page_title, COUNT(*) AS transitions FROM ranked_screens WHERE page_category = 'PLP' AND next_page = 'PDP' GROUP BY page_title), |
| TotalPLPViews AS (SELECT page_title, COUNT(*) AS total_plp_views FROM ranked_screens WHERE page_category = 'PLP' GROUP BY page_title), |
| ConversionRates AS (SELECT p.page_title, (CAST(t.transitions AS FLOAT64) / p.total_plp_views) * 100.0 AS conversion_rate FROM TotalPLPViews p JOIN PLPtoPDPTransitions t ON p.page_title = t.page_title) |
| SELECT page_title, MAX(conversion_rate) AS max_conversion_rate FROM ConversionRates GROUP BY page_title ORDER BY max_conversion_rate DESC LIMIT 1; |
| Original SQL |
| WITH base_table AS (SELECT event_name, event_date, event_timestamp, user_pseudo_id, user_id, device, geo, traffic_source, event_params, user_properties FROM `bigquery-public-data.ga4_obfuscated_sample_ecommerce.events_*` WHERE _TABLE_SUFFIX BETWEEN '20210101' AND '20210131' AND event_name IN ('page_view')), unnested_events AS (SELECT event_date AS date, event_timestamp AS event_timestamp_microseconds, user_pseudo_id, MAX(CASE WHEN c.key = 'ga_session_id' THEN c.value.int_value END) AS visitNumber, MAX(CASE WHEN c.key = 'page_title' THEN c.value.int_value END) AS page_title, MAX(CASE WHEN c.key = 'page_location' THEN c.value.string_value END) AS page_location FROM base_table, UNNEST(event_params) c GROUP BY 1, 2, 3), unnested_events_categorised AS (SELECT *, CASE WHEN ARRAY_LENGTH(SPLIT(page_location, '/')) >= 5 AND CONTAINS_SUBSTR(ARRAY_REVERSE(SPLIT(page_location, '/'))[SAFE_OFFSET(0)], '+') AND (LOWER(SPLIT(page_location, '/')[SAFE_OFFSET(4)]) |
| IN ('accessories', 'apparel', 'brands', 'campus+collection', 'drinkware', 'electronics', 'google+redesign', 'lifestyle', 'nest', 'new+2015+logo', 'notebooks+journals', 'office', 'shop-by+brand', 'small+goods', 'stationery', 'wearables') OR LOWER(SPLIT(page_location, '/')[SAFE_OFFSET(3)]) IN ('accessories', 'apparel', 'brands', 'campus+collection', 'drinkware', 'electronics', 'google+redesign', 'lifestyle', 'nest', 'new+2015+logo', 'notebooks+journals', 'office', 'shop-by+brand', 'small+goods', 'stationery', 'wearables')) THEN 'PDP' WHEN NOT CONTAINS_SUBSTR(ARRAY_REVERSE(SPLIT(page_location, '/'))[SAFE_OFFSET(0)], '+') AND (LOWER(SPLIT(page_location, '/')[SAFE_OFFSET(4)]) IN ('accessories', 'apparel', 'brands', 'campus+collection', 'drinkware', 'electronics', 'google+redesign', 'lifestyle', 'nest', 'new+2015+logo', 'notebooks+journals', 'office', 'shop-by+brand', 'small+goods', 'stationery', 'wearables') OR LOWER(SPLIT(page_location, '/')[SAFE_OFFSET(3)]) IN ('accessories', 'apparel', 'brands', 'campus+collection', 'drinkware', 'electronics', 'google+redesign', 'lifestyle', 'nest', 'new+2015+logo', 'notebooks+journals', 'office', 'shop-by+brand', 'small+goods', 'stationery', 'wearables')) THEN 'PLP' ELSE 'Other' END AS page_category FROM unnested_events) |
| SELECT (SELECT COUNT(*) FROM unnested_events_categorised WHERE page_category = 'PDP') / (SELECT COUNT(*) FROM unnested_events_categorised) * 100; |
| Database | Documentation Website | # Page | # Category | # Function |
| BigQuery | https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operators | 34 | 34 | 390 |
| Snowflake | https://docs.snowflake.com/en/sql-reference/ | 719 | 30 | 719 |
| Postgres | https://www.postgresql.org/docs/current/functions.html | 30 | 30 | \(30^{\dagger}\) |
| Clickhouse | https://clickhouse.com/docs/en/sql-reference/functions | 226 | 6 | 226 |
| SQLite | https://www.sqlite.org/docs.html | 6 | 6 | 147 |
| DuckDB | https://duckdb.org/docs/sql/functions/overview | 24 | 24 | 513 |
| Total | - | 1039 | 130 | 2025 |
| database="Snowflake", function="ATAN2", category="numeric-functions" |
| Categories: Numeric functions (Trigonometric) |
| ## ATAN2 |
| Computes the inverse tangent (arc tangent) of the ratio of its two arguments. For example, if x > 0, then the expression ATAN2(y, x) is equivalent to ATAN(y/x). The arc tangent is the angle between: |
| The X axis, and the ray from the point (0, 0) to the point (X, Y) (where X and Y are not both 0). |
| See also: ATAN |
| ## Syntax |
| ATAN2(<y>, <x>) |
| Note that the first parameter is the Y coordinate, not the X coordinate. |
| ## Arguments |
| y This parameter is the Y coordinate of the point at the end of the ray. The data type is DOUBLE. x This parameter is the X coordinate of the point at the end of the ray. The data type is DOUBLE. |
| ## Returns |
| The data type of the returned value is DOUBLE. The returned value is in radians, not degrees. The returned value is a number in the interval [-pi, pi]. |
| ## Usage notes |
| If the data type of an argument is a numeric data type other than DOUBLE, then the value is converted to DOUBLE. If the data type of an argument is string, the value is converted to DOUBLE if possible. If the data type of an argument is any other data type, the function returns an error. If either argument is NULL, the returned value is NULL. |
| ## Examples |
| SELECT ATAN2(5, 5); |
| ATAN2(5, 5) |
| 0.7853981634 |
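These Snowflake semantics match the standard `atan2` available in most languages (Y coordinate first, result in radians in [-pi, pi]). As a sanity check of the example above, in Python:

```python
import math

# ATAN2(y, x): the first argument is the Y coordinate, as in the docs above.
angle = math.atan2(5, 5)   # ray at 45 degrees -> pi/4 radians
print(round(angle, 10))    # 0.7853981634

# The result always falls in the interval [-pi, pi].
assert -math.pi <= math.atan2(-1, -1) <= math.pi
```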
| database="DuckDB", function="datediff", category="date-functions" |
| Function: datediff(part, startdate, enddate) |
| Description: The number of partition boundaries between the dates. Example: datediff('month', DATE '1992-09-15', DATE '1992-11-14') Result: 2 Alias: date_diff |
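Note that `datediff` counts crossings of part boundaries, not elapsed whole units. A small Python sketch (`month_diff` is my own helper for illustration, not a DuckDB API) reproduces the month example:

```python
from datetime import date

def month_diff(start: date, end: date) -> int:
    # Count month boundaries crossed between start and end, mirroring
    # DuckDB's datediff('month', start, end): the day-of-month is ignored,
    # only boundary crossings matter.
    return (end.year - start.year) * 12 + (end.month - start.month)

# Sep 15 -> Nov 14 crosses two month boundaries (Oct 1 and Nov 1),
# even though fewer than two full calendar months have elapsed.
print(month_diff(date(1992, 9, 15), date(1992, 11, 14)))  # 2
```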
| Action | Description |
| BASH | Executes shell commands, such as checking file information, running code, or executing DBT commands. |
| CreateFile | Creates a new file with specified content. |
| EditFile | Edits or overwrites the content of an existing file. |
| ExecuteSQL | Executes a SQL query on BigQuery/ Snowflake, with an option to print or save the results. |
| GetTables | Retrieves all table names and schemas from a specified BigQuery/ Snowflake dataset. |
| GetTabInfo | Retrieves detailed column information for a specific table in BigQuery/ Snowflake. |
| SampleRows | Samples a specified number of rows from a BigQuery/ Snowflake table and saves them as JSON. |
| FAIL | Agent decides the task is infeasible. |
| Terminate | Agent decides the task is finished. |
| Error Type | Subcategory | Description | Example of Predicted SQL | Example of Gold SQL |
| Wrong Schema Linking | Wrong Table | Some of the requested tables are incorrect or do not exist, or excessive tables are requested, or some ground-truth tables are missing. | FROM spider2-public-data.worldImporter.sales_Customers c JOIN spider2-public-data.worldImporter.sales_Orders o ON c.CustomerID = c.CustomerID | FROM 'spider2-public-data.world_exporters.sales_Customers' cu INNER JOIN 'spider2-public-data.world_exporters.sales_Orders' o ON cu.CustomerID = o.CustomerID |
| SELECT strftime('%Y', order_purchase_timestamp) AS year, ... | ... | | | |
| Incorrect Dialect Function Usage | Incorrect Dialect Function Usage | Incorrect or missing use of dialect-specific functions for string manipulation (e.g., CONCAT), date processing (e.g., DATE_TRUNC), or geographic data (e.g., ST_DISTANCE), etc. | SELECT (LAG(longitude, 1) OVER (ORDER BY iso_time ASC) - longitude) AS lon_diff, (LAG(latitude, 1) OVER (ORDER BY iso_time ASC) - latitude) AS lat_diff FROM hurricane_data | Refer to Fig. 7(a) |
| Incorrect Data Calculation | Incorrect Data Calculation | Advanced data calculations fail to meet the intended outcomes, often due to errors in grouping, aggregation (e.g., AVG, SUM), window functions (e.g., PARTITION BY, NTILE), or formula application (e.g., CORR, STDDEV). | |
| Incorrect Planning | Incorrect Planning | The gold SQL involves nested queries, intermediate result processing through CTEs, or set operations for merging sub-query results. The model either fails to recognize or misuses these elements. | Refer to Fig. 7(b) | |
| Join Error | The JOIN condition selects the incorrect tables or columns. | SELECT ... FROM spider2-public-data.world_exporters.sales_Invoices T1 JOIN spider2-public-data.world_exporters.sales_InvoiceLines T2 ... | SELECT ... FROM 'spider2-public-data.world_exporters.sales_InvoiceLines' AS INV INNER JOIN ... | |
| Condition Filter Error | The filtering conditions in the WHERE clause are incorrectly defined. | WHERE code.coding(Offset(0)).code IN ('4454000', '3834883') | WHERE code.coding[safe_offset(0)].display = "Diabetes" OR code.coding[safe_offset(0)].display = "Hypertension" | |
| Misunderstanding External Knowledge | The model misinterprets external knowledge relevant to the question. | Refer to Fig. 25 | ||
| Excessive Prompt Length | The input prompt exceeds the LLM's maximum length, causing truncation and making the answer inaccessible. | - | ||
| Syntax Error | The generated SQL query contains invalid syntax that prevents execution. | - | ||
| Task Subset | #Examples | SR (↑) |
| Spider 2.0 | 100.00% | 17.0% |
| - w/ BigQuery | 33.86% | 24.07% |
| - w/ Snowflake | 31.33% | 7.14% |
| - w/ SQLite | 21.36% | 20.74% |
| - w/ DuckDB | 10.76% | 17.69% |
| - w/ Postgres | 1.58% | 12.82% |
| - w/ Clickhouse | 1.11% | 57.14% |
| Method | Avg. Cost (↓) |
| Spider-Agent + o1-preview | $0.75 |
| Spider-Agent + GPT-4-Turbo | $0.58 |
| Spider-Agent + GPT-4o | $0.32 |
| DIN-SQL + GPT-4o | $0.14 |
| DAIL-SQL + GPT-4o | $0.09 |
| DAIL-SQL + o1-preview | $0.32 |
| CHESS + GPT-4o | $0.43 |
| SFT CodeS-15B | $0.00 |
| Models | Gold Label (%) | AlpacaEval 2.0 | MT-Bench Avg. Score (0-10) | |
| Len-control. Win Rate (%) | Win Rate vs. GPT-4 (%) | |||
| Mistral-7B-v0.1 | - | 0.17 | 0.50 | 3.25 |
| Zephyr-7b-β | 100 | 11.75 | 10.03 | 6.87 |
| SFT | - | 7.58 | 4.72 | 6.34 |
| DPO | 3.3 | 9.03 | 7.68 | 6.81 |
| SPA (Ours) | 3.3 | 15.39 | 21.13 | 6.94 |
| Methods | External Model | AlpacaEval 2.0 | MT-Bench Avg. Score (0-10) | |
| Len-control. Win Rate (%) | Win Rate vs. GPT-4 (%) | |||
| Iterative DPO (PairRM) | ✓ | 11.87 | 9.46 | 6.98 |
| Iterative DPO (LLM-as-judge) | ✗ | 9.28 | 9.18 | 6.67 |
| SPA (Ours) | ✗ | 15.39 | 21.13 | 6.94 |
| Methods | Used Ground-truth Preference Data | |||
| 0.8% | 1.7% | 3.3% | 10% | |
| DPO: LC Win Rate (%) | 7.85 | 7.68 | 9.03 | 11.37 |
| DPO: Win Rate (%) | 5.53 | 5.49 | 7.68 | 9.32 |
| SPA: LC Win Rate (%) | 10.36 | 12.36 | 16.23 | 18.52 |
| SPA: Win Rate (%) | 11.34 | 13.72 | 19.94 | 23.79 |
| Methods | 1st Seed Data | 2nd Seed Data | 3rd Seed Data | Average | Variance |
| DPO: LC Win Rate (%) | 9.03 | 8.74 | 9.54 | 9.10 | 0.16 |
| DPO: Win Rate (%) | 7.68 | 7.17 | 7.59 | 7.48 | 0.07 |
| SPA (Ours): LC Win Rate (%) | 16.23 | 13.77 | 16.38 | 15.46 | 2.10 |
| SPA (Ours): Win Rate (%) | 19.94 | 20.06 | 19.74 | 19.91 | 0.03 |
| Methods | Gold Label (%) | Phi-2-2.7B | LLaMA-3-8B-Instruct | Phi-3-14B-Instruct | |||
| Len-control. Win Rate (%) | Win Rate vs. GPT-4 (%) | Len-control. Win Rate (%) | Win Rate vs. GPT-4 (%) | Len-control. Win Rate (%) | Win Rate vs. GPT-4 (%) | ||
| SFT | - | 5.88 | 3.78 | 21.40 | 21.71 | 26.51 | 21.41 |
| DPO | 3.3 | 7.02 | 5.67 | 24.17 | 25.39 | 27.70 | 22.12 |
| SPA (Ours) | 3.3 | 9.10 | 9.43 | 25.03 | 34.84 | 28.77 | 24.14 |
| Methods | AlpacaEval 2.0 | ||||
| DE | SR | DND | Len-control. Win Rate (%) | Win Rate vs. GPT-4 (%) | |
| SFT | - | - | - | 7.58 | 4.72 |
| DPO | - | - | - | 9.03 | 7.68 |
| SPA (Ours) | ✓ | ✗ | ✗ | 14.41 | 19.91 |
| ✓ | ✓ | ✗ | 14.7 | 19.94 | |
| ✓ | ✓ | ✓ | 15.39 | 21.13 | |
| Models | AlpacaEval 2.0 | |
| Len-control. Win Rate (%) | Win Rate vs. GPT-4 (%) | |
| SPA after iteration 1 | 10.57 | 11.89 |
| Eq. 7 with initial SFT model (Ours) | 15.08 | 19.56 |
| Eq. 7 with previous model | 13.73 | 17.66 |
| Judgment with PairRM | 13.57 | 13.72 |
| Judgment without reference model | 12.83 | 12.35 |
| Models | Gold Label (%) | Avg Score |
| Mistral-7B-Instruct-v0.1 | - | 6.78 |
| SPAinst (Ours) | - | 7.12 |
| Phi-2 SFT | - | 5.35 |
| Phi-2 DPO | 3.3 | 6.16 |
| SPAphi (Ours) | 3.3 | 6.33 |
| Methods | AlpacaEval 2.0 | MT-Bench | ||||
| DE | SR | DND | Len-control. Win Rate (%) | Win Rate vs. GPT-4 (%) | Avg. Score (0-10) | |
| SFT | - | - | - | 7.58 | 4.72 | 6.34 |
| DPO | - | - | - | 9.03 | 7.68 | 6.81 |
| SPA (Ours) | ✓ | ✗ | ✗ | 14.41 | 19.91 | 6.86 |
| ✓ | ✓ | ✗ | 14.7 | 19.94 | 7.09 | |
| ✓ | ✓ | ✓ | 15.39 | 21.13 | 6.94 | |
| Models | Gold Label (%) | Len-control. Win Rate | Win Rate vs. GPT-4 | Avg. len (# chars) |
| Mistral-7B-v0.1 | - | 0.17 | 0.50 | 5692 |
| SFT | - | 7.58 | 4.72 | 901 |
| DPO | 3.3 | 9.03 | 7.68 | 1802 |
| Zephyr-7b-β | 100 | 11.75 | 10.03 | 1552 |
| SPA (Original, Iter. 1) | 3.3 | 11.88 | 12.95 | 2150 |
| SPA (Modified, Iter. 1) | 3.3 | 11.39 | 12.31 | 2013 |
| SPA (Original, Iter. 2) | 3.3 | 16.23 | 19.94 | 2749 |
| SPA (Modified, Iter. 2) | 3.3 | 14.46 | 18.23 | 2448 |
| Methods | Len-control. Win Rate (%) | Win Rate vs. GPT-4 (%) |
| LLM-as-judge (Iter. 1) | 8.88 | 8.01 |
| LLM-as-judge (Iter. 2, orig) | 9.49 | 8.46 |
| LLM-as-judge (Iter. 2, prev. init) | 9.74 | 10.09 |
| SPA (Iter. 2, ours) | 15.46 | 19.91 |
| Lower bound | 0.95 | 0.99 | 0.995 | 0.999 | 0.9995 | 0.9999 |
| SE (d) | 172 | 205 | 219 | 250 | 264 | 294 |
| Matérn (d) | 980 | 1040 | 1064 | 1116 | 1137 | 1185 |
| Hartmann6 | \( \ell_0 = 0.1 \) | 0.5 | 0.6931 | 1.0 | \( \sqrt{d} \) |
| SE | 50 | 200 | 200 | 400 | \( \checkmark \) |
| Matérn | 50 | 400 | 600 | \( \checkmark \) | \( \checkmark \) |
| Rosenbrock | \( \ell_0 = 0.1 \) | 0.5 | 0.6931 | 1.0 | \( \sqrt{d} \) |
| SE | 50 | 200 | 200 | 400 | \( \checkmark \) |
| Matérn | 50 | 400 | 600 | \( \checkmark \) | \( \checkmark \) |
| Fun(d, d') | Structure |
| Ackley(300, 150) | Partial variable dependency |
| Rosenbrock(300, 100) | Partial variable dependency, nonoverlap additive decomposition |
| Hartmann6(300,6) | Partial variable dependency |
| Rosenbrock(100, 100) + Stybtang(200, 200) | Nonoverlap additive decomposition |
| Ackley(150, 150) | None |
| Method | Structural Assumption |
| ALEBO, HESBO | Low-dim embedding |
| SaasBO | Partial variable dependency |
| RDUCB, Tree UCB | Additive function decomposition |
| SBO, TURBO, BNN | None |
| Model | Sequence | Structure | |||
| Div. ↓ | Nov. ↑ | LR P@L ↑ | LR P@L/2 ↑ | LR P@L/5 ↑ | |
| Searched MSA | - | - | 0.186 | 0.270 | 0.395 |
| ESM-2 (150M) | 0.565 | 0.691 | 0.086 | 0.116 | 0.167 |
| ESM-2 (650M) | 0.619 | 0.556 | 0.100 | 0.146 | 0.223 |
| PoET-Single (201M) | 0.853 | 0.200 | 0.025 | 0.028 | 0.031 |
| PoET-MSA (201M) | 0.651 | 0.243 | 0.036 | 0.042 | 0.051 |
| EvoDiff-MSA (100M) | 0.225 | 0.668 | 0.061 | 0.089 | 0.168 |
| DPLM (150M) | 0.369 | 0.463 | 0.093 | 0.147 | 0.284 |
| DPLM (650M) | 0.445 | 0.411 | 0.102 | 0.159 | 0.303 |
| ProfileBFN-Single (150M) | 0.368 | 0.646 | 0.126 | 0.197 | 0.321 |
| ProfileBFN-Single (650M) | 0.421 | 0.581 | 0.162 | 0.262 | 0.422 |
| ProfileBFN-Profile (150M) | 0.283 | 0.650 | 0.128 | 0.210 | 0.384 |
| ProfileBFN-Profile (650M) | 0.293 | 0.641 | 0.173 | 0.280 | 0.474 |
| Model | P40925 ↑ | Q7X7H9 ↑ | Q15165 ↑ |
| PoET-MSA | 3.00% | 33.3% | 0.05% |
| EvoDiff-MSA | 27.93% | 88.69% | 1.39% |
| ProfileBFN-Profile (650M) | 95.19% | 98.98% | 42.67% |
| Model | Thermostability | HumanPPI | Metal Ion Binding | EC | GO | DeepLoc | |||
| MF | BP | CC | Subcellular | Binary | |||||
| Spearman's ρ | ACC(%) | ACC(%) | Fmax | Fmax | Fmax | Fmax | ACC(%) | ACC(%) | |
| SaProt* † | 0.724 | 86.41 | 75.75 | 0.884 | 0.678 | 0.356 | 0.414 | 85.57 | 93.55 |
| MIF-ST* † | 0.694 | 75.54 | 75.08 | 0.803 | 0.627 | 0.239 | 0.248 | 78.96 | 91.76 |
| ESM-1 (1B) † | 0.708 | 82.22 | 73.57 | 0.859 | 0.661 | 0.320 | 0.392 | 80.33 | 92.83 |
| ESM-2 (650M) † | 0.680 | 76.67 | 71.56 | 0.877 | 0.668 | 0.345 | 0.411 | 82.09 | 91.96 |
| AR-LM (650M)♥ | 0.638 | 68.48 | 61.16 | 0.691 | 0.566 | 0.258 | 0.287 | 68.53 | 88.31 |
| DPLM (650M)♥ | 0.695 | 86.41 | 75.15 | 0.875 | 0.680 | 0.357 | 0.409 | 84.56 | 93.09 |
| DPLM (650M)◇ | 0.698 | 77.77 | 70.52 | 0.881 | 0.659 | 0.330 | 0.388 | 85.98 | 93.17 |
| ProfileBFN (650M) | 0.710 | 82.22 | 74.58 | 0.887 | 0.673 | 0.342 | 0.416 | 86.80 | 93.58 |
| DPLM (150M) † | 0.687 | 80.98 | 72.17 | 0.822 | 0.662 | 0.328 | 0.379 | 82.41 | 92.63 |
| ProfileBFN (150M) | 0.701 | 78.88 | 77.74 | 0.874 | 0.672 | 0.341 | 0.394 | 82.73 | 93.52 |
| ID | PDB | Chain | Length | Title |
| 1 | 8BL5 | A | 148 | Crystal Structure of Sam0.26 |
| 2 | 8F9Q | A | 505 | Guinea pig sialic acid esterase |
| 3 | 8F9R | A | 501 | Rabbit sialic acid esterase |
| 4 | 8FSL | C | 116 | Human Mesothelin bound to a neutralizing VH domain antibody |
| 5 | 8HJP | A | 459 | Crystal structure of glycosyltransferase SgUGT94-289-3 in complex with UDP state 1 |
| 6 | 8ISO | A | 269 | Crystal structure of extended-spectrum class A beta-lactamase, CESS-1 |
| 7 | 8IXT | A | 427 | Rat Transcobalamin in Complex with Glutathionylcobalamin |
| 8 | 8JHD | A | 166 | Crystal structure of anti-CRISPR AcrlF25 |
| 9 | 8JGO | A | 535 | Crystal structure of Deinococcus radiodurans exopolyphosphatase |
| 10 | 8JI1 | A | 198 | Crystal structure of Ham1 from Plasmodium falciparum |
| 11 | 8JJI | A | 421 | Alanine decarboxylase |
| 12 | 8JA | A | 216 | SP1746 in complex with acetate ions |
| 13 | 8JRB | A | 597 | Structure of DNA polymerase I from Aquifex pyrophilus |
| 14 | 8JYX | A | 635 | Crystal structure of the gasdermin-like protein RCD-1-1 from Neurospora crassa |
| 15 | 8K05 | A | 340 | Pseudouridine 5-monophosphate glycosylase from Arabidopsis thaliana - sulfate bound holoenzyme |
| 16 | 8K40 | A | 456 | Mercuric reductase, GbsMerA - FAD bound |
| 17 | 8OV9 | A | 350 | Crystal structure of Ene-reductase 1 from black poplar mushroom |
| 18 | 8OXR | A | 145 | Structure of the N-terminal didomain d1-d2 of the Thrombospondin type-1 domain-containing 7A |
| 19 | 8OYD | A | 45 | TrkB transmembrane domain NMR structure in DMPC/DHPC bicelles |
| 20 | 8OZZ | A | 114 | PH domain of AKT-like kinase in Trypanosoma cruzi |
| 21 | 8PIH | C | 118 | Structure of Api m1 in complex with two nanobodies |
| 22 | 8QL0 | A | 693 | Structure of human PAD6 Phosphomimic mutant V10E/S446E, apo |
| 23 | 8QLC | A | 627 | Crystal structure of the pneumococcal Substrate-binding protein AliD in open conformation |
| 24 | 8QLH | A | 633 | Crystal structure of the pneumococcal Substrate-binding protein AliC as a domain-swapped dimer |
| 25 | 8QPM | A | 100 | Structure of methylene-tetrahydromethanopterin reductase from Methanocaldococcus jannaschii |
| 26 | 8QQ5 | A | 222 | Structure of WT SpNox DH domain: a bacterial NADPH oxidase. |
| 27 | 8QVC | B | 100 | Deinococcus aerius TR0125 C-glucosyl deglycosidase (CGD), wild type crystal cryoprotected with glycerol |
| 28 | 8QZ1 | C | 136 | Crystal structure of human two pore domain potassium ion channel TREK-2 (K2P10.1) in complex with a nanobody (Nb58) |
| 29 | 8QZ2 | C | 134 | Crystal structure of human two pore domain potassium ion channel TREK-2 (K2P10.1) in complex with an inhibitory nanobody (Nb61) |
| 30 | 8QZ3 | C | 137 | Crystal structure of human two pore domain potassium ion channel TREK-2 (K2P10.1) in complex with an activatory nanobody (Nb67) |
| 31 | 8R3R | A | 673 | Transketolase from Streptococcus pneumoniae in complex with thiamin pyrophosphate |
| 32 | 8R3S | A | 677 | Transketolase from Staphylococcus aureus in complex with thiamin pyrophosphate |
| 33 | 8R8O | A | 275 | Hallucinated de novo TIM barrel with three helical extensions - HalluTIM3-1 |
| 34 | 8S4S | A | 145 | PrgE from plasmid pCF10 |
| 35 | 8SUC | A | 100 | NHL-2 NHL domain |
| 36 | 8SUF | A | 1007 | The complex of TOL-1 ectodomain bound to LAT-1 Lectin domain |
| 37 | 8SUF | A | 114 | The complex of TOL-1 ectodomain bound to LAT-1 Lectin domain |
| 38 | 8SW5 | C | 47 | Protein Phosphatase 1 in complex with PP1-specific Phosphatase targeting peptide (PhosTAP) version 1 |
| 39 | 8TB2 | A | 100 | Structure of SasG (type II) (residues 165-421) from Staphylococcus aureus MW2 |
| 40 | 8TI6 | A | 155 | Crystal structure of Tyr p 36.0101 |
| 41 | 8UAI | B | 494 | Crystal structure of hetero hexameric hazelnut allergen Cor a 9 |
| 42 | 8UAI | D | 493 | Crystal structure of hetero hexameric hazelnut allergen Cor a 9 |
| 43 | 8V8L | A | 237 | Switchgrass Chalcone Isomerase |
| 44 | 8V8P | A | 231 | Sorghum Chalcone Isomerase |
| 45 | 8W1D | A | 177 | CRYSTAL STRUCTURE OF DPS-LIKE PROTEIN PA4480 FROM PSEUDOMONAS AERUGINOSA (DIMERIC FORM) |
| 46 | 8W6V | A | 536 | Structural basis of chorismate isomerization by Arabidopsis isochorismate synthase ICS1 |
| 47 | 8W26 | A | 429 | X-ray crystal structure of the GAF-PHY domains of SyB-Cph1 |
| 48 | 8W53 | B | 488 | Crystal structure of LbUGT in complex with UDP |
| 49 | 8WEX | A | 468 | Crystal structure of N-acetyl sugar amidotransferase from Legionella pneumophila |
| 50 | 8WG0 | D | 100 | Crystal structure of GH97 glucodextranase from Flavobacterium johnsoniae in complex with glucose |
| 51 | 8WOP | A | 100 | Crystal structure of Arabidopsis thaliana UDP-glucose 4-epimerase 2 (AtUGE2) complexed with UDP, wild-type |
| 52 | 8WTB | B | 187 | Crystal structure of McsA/McsB complex truncated by chymotrypsin |
| 53 | 8WU7 | A | 306 | Structure of a cis-Geranylfarnesyl Diphosphate Synthase from Streptomyces clavuligerus |
| 54 | 8X3S | B | 34 | Crystal structure of human WDR5 in complex with PTEN |
| 55 | 8XJE | B | 153 | Crystal structure of the YqeY protein from Campylobacter jejuni |
| 56 | 8XIG | A | 153 | Crystal structure of the YqeY protein from Vibrio parahaemolyticus |
| 57 | 8Y9P | A | 256 | Crystal structure of bacterial activating sulfotransferase SgdX2 |
| 58 | 8YXK | A | 201 | X-ray structure of Clostridioides difficile endolysin Ecd09610 glucosaminidase domain. |
| 59 | 9B1R | A | 562 | Functional implication of the homotrimeric multidomain vacuolar sorting receptor 1 from Arabidopsis thaliana |
| 60 | 9BCZ | A | 644 | Chicken 1-phosphatidylinositol 4,5-bisphosphate phosphodiesterase zeta-1 (PLCZ1) in complex with calcium and phosphorylated threonine |
| 61 | 9F63 | A | 572 | Crystal structure of Saccharomyces cerevisiae pH nine-sensitive protein 1 (PNS1) |
| ID | EC | Length | family |
| P40925 | 1.1.1.37 | 334 | malate dehydrogenase |
| Q7X7H9 | 2.7.1.71 | 287 | shikimate kinase |
| Q15165 | 3.1.1.2 | 354 | arylesterase |
| Model | P40925 | Q7X7H9 | Q15165 | |
| MSA Depth | - | 572 | 443 | 15 |
| Accuracy × Uniqueness ↑ | PoET | 3.00% | 33.3% | 0.05% |
| EvoDiff-MSA | 27.93% | 88.69% | 1.39% | |
| ProfileBFN-profile | 95.19% | 98.98% | 42.67% | |
| Accuracy ↑ | PoET | 98.04% | 99.93% | 100% |
| EvoDiff-MSA | 27.93% | 88.69% | 1.39% | |
| ProfileBFN-profile | 95.19% | 98.98% | 42.67% | |
| Uniqueness ↑ | PoET | 3.06% | 33.32% | 0.05% |
| EvoDiff-MSA | 100% | 100% | 100% | |
| ProfileBFN-profile | 100% | 100% | 100% | |
| Novelty ↑ | PoET | 0.036 | 0.366 | 0.068 |
| EvoDiff-MSA | 0.728 | 0.596 | 0.497 | |
| ProfileBFN-profile | 0.467 | 0.582 | 0.288 | |
| Diversity ↓ | PoET | 0.499 | 0.645 | 0.990 |
| EvoDiff-MSA | 0.138 | 0.184 | 0.143 | |
| ProfileBFN-profile | 0.374 | 0.289 | 0.594 |
| Model | TMscore ↑ | LDDT ↑ | pLDDT ↑ |
| AF2-MSA | 53.20 | 54.01 | 62.91 |
| MSAGPT | 55.72 | 55.59 | 66.38 |
| ProfileBFN | 56.84 | 55.72 | 67.04 |
| Model | CDR-H1 | CDR-H2 | CDR-H3 | CDR-L1 | CDR-L2 | CDR-L3 |
| RAbD | 0.2285 | 0.2550 | 0.2214 | 0.3427 | 0.2630 | 0.2073 |
| DiffAb | 0.6575 | 0.4931 | 0.2678 | 0.5667 | 0.5932 | 0.4647 |
| AntiBERTy | 0.7940 | 0.5932 | 0.4133 | 0.7208 | 0.3996 | 0.2758 |
| AbLang | 0.7039 | 0.7981 | 0.3207 | 0.5799 | 0.5513 | 0.3175 |
| ProfileBFN-single | 0.6766 | 0.6188 | 0.1946 | 0.5356 | 0.5873 | 0.3064 |
| ProfileBFN-Anti | 0.8227 | 0.7236 | 0.3343 | 0.6402 | 0.6156 | 0.4716 |
| ProfileBN-Profile(650M) | APKPLKVAVTGAAGQIGYGICYELATGRVLGIDRRVELNLLDPPQVVEVLGSVAIEIQDAAPNLLSSIKITSNAEEAEIGDVNLVTAGVPRKGEDRELTNLNAKVDVRVLWVKHTKLIIISNPTTFAVQHMSAPTLPKENFTLSVLDLQNRARTFLA...PRTKIVAVTGAAGNQTLVRLALGLEGHPQVQLTLDEPAANLVGLVAMDLEDAEDACYPLLLADHTVTLKTACEGADVLMTAGVPRPGMERADLKLNAKAAEQKALAEVARGDKAVIGIISNPNTTLLIAKHPAGVDRNRFGGLTRLDHNRARMLA...SRQRFKVTVGAAGQIGQTLLNVARGDLIGKDGQVVELVLYDEPEPLLLKGVAMDLSDVAPAALLSITTTDPKAAFEGAYVLMNGVPRKPGMERSDLGINAKIVAAQTAIAIPAAAAGKVLVVNTINPTVRAAAGHDPAKHLFGQARLQNRARSLLA...AKAPVAVVGAAGGIGFLPLKGQPVDLTDPPAHAQALALEVCLKSLGAELEGADVVTATLPKPRPGDRLGIDNKTIVTHVDVAGAAPAAKVLVTVNPNVSTAIARKAPTEKNLFAYSHDLVERARFLSA...RQOGKIAVTVGAAGQIAYHLARVAGEVMDRTPVELRLCDOEQAARLEVGALDLSTLSPAVEDIRKICKEAASAFGDADVLVLVPGKPSRMSRTDLLEINAGIFKEGRNLNETAPANVRFLVISNADTTAVTSKYPQVKANFGAMARGLDHNRALSFAK...KGVNKWAVTGAAGQIGQLTLLLARGTALPGQPVHLVDKTACDLPGVGMELNDALPLYLKLATSEGEETEVKGVADLLGAGTIPRPGMRADKLTNAFAFEKQGKDLKHAKHECKIGVGNPNTNCLNALNAPRNGKILNCTLNNAPNVRVTKVNGVAPNNTNCLNALNAPRNGKILNCTLNNAPNVRVTKVNGVAPNNTNCLNALNAPRNGKILNCTLNNAPNVRVTKVNGVAPNNTNCLNALNAPRNGKILNCTLNNAPNVRVTKVNGVAPNNTNCLNALNAPRNGKILNCTLNNAPNVRVTKVNGVAPNNTNCLALNAPRNGKILNCTLNNAPNVRVTKVNGVAPNNTNCLNALNAPRNGKILNCTLNNAPNVRVTKVNGVAPNNTNCLALNAPRNGKILNCTLNNAPNVRVTKVNGVAPNNTNCLALNAPRNGKILNCTLNNAPNVRVTKVNGVAPNNTNCLALNAPRNGKILNCTLNNAPNVRVTKVNGVAPNNTNCLALNAPRNGKILNCTLNNAPNVRVTKVNGVAPNNTNCLALNAPRPGKQKLDFLDAAPVVLPLDGLADALAATDTAPKAEGLGAVADLLNTHAAPNLAIEVSETDAKTLKDAVDVIISAGVPKGRMTDRLLKINVDVTLKINVDVTLKINVDVTLKINVDVTLKINVDVTLKINVDVTLKINVDVTLKINVDVTLKINVDVTLKINVDVTLKINVDVTLKINVDVTLKINVDVTLKINVDVTLKINVDVTLKINVDVTLKINVDVTLKINVDVTLKINVDVTLKINVDVTLKINVDV |
| PoET-MSA | MKVTVVGAGNVGATCADVLAYREINVVELLIDLKEGVAEGALDIKKAPITQDTKTTGTVNDYSKTANSDDVVITSLPRKPGTRMDLRTNAGIVRVTVSKNPYIVSNPLDVMTTYCAHITSKLPRNKVIGMAGVDLTAYMGILCPKEIQ...MKAVVLGAAGGIGQALALLKLNRLPAGSLDALYDIAPVTGPVAADLSHRIPTPVISIKGYAGTEDPRALEVISAGKVRPGMDRDLFVNFMNAGIVKSLAERIAVCPNACGTITPVNTVPIAAELKKGAGVDKRFLGVTTLVDIRNTMAAIPNIPNVRVTKVNGVAPNNTNCLNALNAPRNGKILNCTLNNAPNVRVTKVNGVAPNNTNCLNALNAPRNGKILNCTLNNAPNVRVTKVNGVAPNNTNCLNALNAPRNGKILNCTLNNAPNVRVTKVNGVAPNNTNCLALNAPRNGKILNCTLNNAPNVRVTLKINVDVTLKINPLAGSLDALYDIAPVTGPVAADLSHRIPTPVISIKGYAGTEDPRALEVISAGKVRPGMDRDLFVNFMNAGIVKSLAERIAVCPNACGTITPVNTVPIAAELKKGAGVDKRFLGVTTLVDIRNTMAAIPNIPNVRVTKVNGVAPNNTNCLNALNAPRNGKILNCTLNNAPNVRVTKKQPEMAIATVAPNAKVGGMVTNLNTVQAGTIPGLDRIAGGLRSFTLS...TKGTVQAVSGAAAGQIGQALLFEIASRGFLGADQPVPLTLLDKIAAPPAPSGLVAMDSLDAATGLQDLVTIEDAKAFGEADVLITVLAGVPKGRMTDRLLGNGNADVIQKDMTAVAVPASARIVVVNTPNCNTVAVKMAPNSKIPKNQVNDLTDENLRLSAIA... |
| Evodiff-MSA | MKKRTRLVVSTAPASGLTAFLASAAHIDSDRTYSVMVDASKVRGKHLLSLMSAAAGATEKIAADLVSEEAFQANVYYGAGPADGKVTDNTRKTMMIEETSRLNKVANDVRLVLNMPTMLTHLKMKSLEGEIDRYGCSLEDPGVVANLA...MEGKRVTFVRLSRESFFLYLVMRVACRAPVDDVFDFHKQNAFQEALKMTLEATLASNLSAOKFVLNRHDLVASDLSFILRMNEATIKAVETQVYMRKSPNIIIMVSNDPLMDVTAYVTHSLGKERIVMAGVDLRTNARFSQDIAVRAVL...MKTPVAWVPDGQQIAAAYLMSLSHSEEATCKSVLVRVLPDEWQLGGVVLALAHLSATAARDMANHGANGFADSDIVTAGLPTRGTLKRELLDKNPOQISVDVTTNAPVADTMMFQKSPNIIIMVSNDPLMDVTAYVTHSLGKERIVMAGVDLRTNARFSQDIAVRAVL...MVAPRAIAYILVGRLAPVLGHKQCATPDTVDFLRLQTLTSNQSGNVIADYTLVPGTTYDLIQLMEALESPLATIRNDRLNKIKWNKMSTVENIKEAEDIDNGLGENVYLNLVIQAATGIPARAIDMQSSIXXXXXXLAX...MKEAKTVAVIGAAGKVGLVELRVDRLPVDKVDMLRDVTVPSTGLAREVDMLVDVTPSGLARELDQKCSEFAGNMRPTGAGEARLHLGSVITTAGRVPVQRTNLQSONQVSLNQSGNVIADYTLVPGTTYDLIQLMEALESPLATIRNDRLNKIKWNKMSTVENIKEAEDIDNGLGENVYLNLVIQAATGIPARAIDMQSSIXXXXXXLAX...MKEAKTVAVIGAAGKVGLVELRVDRLPVDKVDMLRDVTVPSTGLAREVDMLVDVTPSGLARELDQKCSEFAGNMRPTGAGEARLMLVQTNQSGNVIADYTLVPGTTYDLIQLMEALESPLATIRNDRLNKIKWNKMSTVENIKEAEDIDNGLGENVYLNLVIQAATGIPARAIDMQSSIXXXXXXLAX...MKEAKTVAVIGAAGKVGLVELRVDRLPVDKVDMLRDVTVPSTGLAREVDMLVDVTPSGLARELDQKCSEFAGNMRPTGAGEARLMLPVTQKQSGNVIADYTLVPGTTYDLIQLMEALESPLATIRNDRLNKIKWNKMSTVENIKEAEDIDNGLGENVYLNLVIQAATGIPARAIDMQSSIXXXXXXLAX...MKEAKTVAVIGAAGKVGLVELRVDRLPVDKVDMLRDVTVPSTGLAREVDMLVDVTPSGLARELDQKCSEFAGNMRPTGAGEARLMLPVQTNQSGNVIADYTLVPGTTYDLIQLMEALESPLATIRNDRLNKIKWNKMSTVENIKEAEDIDNGLGENVYLNLVIQAATGIPARAIDMQSSIXXXXXXLAX...MKEAKTVAVIGAAGKVGLVELRVDRLPVDKVDMLRDVTVPSTGLAREVDMLVDVTPSGLARELDQKCSEFAGNMRPTGAGEARILVQTNQSGNVIADYTLVPGTTYDLIQLMEALESPLATIRNDRLNKIKWNKMSTVENIKEAEDIDNGLGENVYLNLVIQAATGIPARAIDMQSSIXXXXXXLAX...MKEAKTVAVIGAAGKVGLVELRVDRLPVDKVDMLRDVTVPSTGLAREVDMLVDVTPSGLARELDQKCSEFAGNMRPTGAGEABVQVQKQSGNVIADYTLVPGTTYDLIQLMEALESPLATIRNDRLNKIKWNKMSTVENIKEAEDIDNGLGENVYLNLVIQAATGIPARAIDMQSSIXXXXXXLAX...MKEAKTVAVIGAAGKVGLVELRVDRLPVDKVDMLRDVTVPSTGLAREVDMLVDVTPSGLARELDQKCSEFAGNMRPTGAGEABVQQKQSGNVIADYTLVPGTTYDLIQLMEALESPLATIRNDRLNKIKWNKMSTVENIKEAEDIDNGLGENVYLNLVIQAATGIPARAIDMQSSIXXXXXXLAX...MKEAKTVAVIGAA
GKVGLVELRVDRLPVDK… (remainder of this cell: several dozen near-duplicate generated samples, each of the form "MKEAKTVAVIGAAGKVGLVELRVDRLPVDKVDMLRDVTVPSTGLAREVDMLVDVTPSGLARELDQKCSEFAGNMRPTGASE…QKQSGNVIADYTLVPGTTYDLIQLMEALESPLATIRNDRLNKIKWNKMSTVENIKEAEDIDNGLGENVYLNLVIQAATGIPARAIDMQSSIXXXXXXLAX", differing only in the few residues inserted after "…RPTGASE"; full sequences omitted) |
| ProfileBN-Profile (650M) | LDGVGVGGADRPHARGLWPAGRRDNRAGRNSAVAADFRDPRAQGEPPSPGPEAQAEGSRPYLKM… (eight sampled sequences, most containing the shared "…LVLMGMMGAGKSTIGRL…" segment; the final sample degenerates into long repeats of "QIGRYKTFGKVLKAK" and "QFLVLLVQAKQ"; full sequences omitted) | |
| PoET-MSA | MVLGLPGSGKTTGKRLANLNLVLDTDHMEKLLGKTNGCNQIM… (sampled sequences that collapse into near-verbatim repeats of "MNGAGKCTIGRLLALHKLDFKDEICERFLGGMAGKSTIGRLLAAFLKHFVDSDQEIIEQSGMTIKEQFGEAFRRKETHEVVKRVSAMNNCVIAT…"; full sequences omitted) | |
| Evodiff-MSA | MDXXXXXPLNCSXINSIGGPQVSAHCTPHVSLSSSSQYRPSS… (sampled sequences dominated by short-motif repeats such as "GGMCGPGKHTGKLLRSLGTMGTPVSLVFLV", "FKLNLVLNSELFAEETFVSLNLSLVEFV", and "QFLVLLQAKQ"; full sequences omitted) |
| ProfileBN-Profile (650M) | MARALLLTLLGLLGLPAALCLQNQSGPALLCLAVQNSGPPALREVHSRPGCHLEIGENGEDIDVLPNGLAFISSGLKYPG… (eight sampled sequences sharing the "…LPNGLAFISSGLKYPG…" segment; full sequences omitted) |
| PoET-MSA | MAKLLALTLVGLVLALYKHNRSSQYTRLNAFREVPVLP… (ten near-identical samples of the same sequence, differing only in a handful of residues; full sequences omitted) |
| Evodiff-MSA | FLGFFITQQFFLRFPQFLPEFFLQQEFRIVFFYLLFFLCQFFSYQ… (sampled sequences that degenerate into long runs of short repeats such as "NYNYNY…", "YFYFYF…", "SNLSNL…", and "YVYVYV…"; full sequences omitted) |
| | Cora | | | CiteSeer | | | PubMed | | | Amazon Photo | | | Amazon Computers | | |
| Methods | 3 Clients | 5 Clients | 10 Clients | 3 Clients | 5 Clients | 10 Clients | 3 Clients | 5 Clients | 10 Clients | 3 Clients | 5 Clients | 10 Clients | 3 Clients | 5 Clients | 10 Clients |
| Local | 0.7357(0.0030) | 0.7325(0.0066) | 0.8039(0.0088) | 0.6674(0.0069) | 0.6647(0.0045) | 0.7128(0.0035) | 0.8445(0.0003) | 0.8108(0.0000) | 0.8024(0.0011) | 0.6724(0.0003) | 0.7959(0.0106) | 0.7562(0.0137) | 0.6523(0.0221) | 0.5764(0.0001) | 0.6645(0.0051) |
| FedAvg | 0.8416(0.0044) | 0.6332(0.0166) | 0.7162(0.0382) | 0.7426(0.0024) | 0.7498(0.0049) | 0.7252(0.0035) | 0.7126(0.0000) | 0.8640(0.0024) | 0.8586(0.0010) | 0.7668(0.0141) | 0.5695(0.0483) | 0.5669(0.0974) | 0.5626(0.0715) | 0.4195(0.0173) | 0.4858(0.0187) |
| FedSAGE+ | 0.7560(0.0237) | 0.4156(0.0034) | 0.3522(0.1196) | 0.7505(0.0150) | 0.5167(0.0389) | 0.4929(0.0075) | 0.8980(0.0001) | 0.9091(0.0025) | 0.9041(0.0012) | 0.9239(0.0083) | 0.6670(0.0206) | 0.6246(0.0385) | 0.7539(0.0062) | 0.6934(0.0006) | 0.6656(0.0002) |
| FedGCN | 0.8226(0.0062) | 0.8124(0.0063) | 0.7243(0.0081) | 0.7376(0.0085) | 0.7649(0.0074) | 0.7123(0.0186) | 0.8504(0.0008) | 0.8441(0.0019) | 0.7398(0.0016) | 0.5717(0.0036) | 0.5627(0.0016) | 0.5782(0.0182) | 0.4217(0.0138) | 0.4908(0.0185) | |
| FedPUB | 0.8476(0.0021) | 0.8448(0.0009) | 0.8622(0.0059) | 0.7455(0.0085) | 0.7694(0.0074) | 0.7505(0.0081) | 0.9064(0.0016) | 0.9069(0.0019) | 0.9092(0.0019) | 0.9399(0.0020) | 0.9122(0.0016) | 0.8983(0.0052) | 0.8339(0.0142) | 0.8202(0.0141) | 0.8181(0.0124) |
| FedNTD | 0.8452(0.0067) | 0.8526(0.0024) | 0.6984(0.0030) | 0.7455(0.0069) | 0.7826(0.0047) | 0.7146(0.0079) | 0.9049(0.0002) | 0.9065(0.0009) | 0.9061(0.0012) | 0.9378(0.0029) | 0.9166(0.0021) | 0.9119(0.0036) | 0.8492(0.0107) | 0.8619(0.0034) | 0.8707(0.0055) |
| FedED | 0.8542(0.0084) | 0.8398(0.0024) | 0.6779(0.0343) | 0.7305(0.0086) | 0.7624(0.0050) | 0.6251(0.0149) | 0.9080(0.0006) | 0.9086(0.0027) | 0.8985(0.0025) | 0.9463(0.0014) | 0.9101(0.0027) | 0.8950(0.0059) | 0.8623(0.0136) | 0.8722(0.0035) | 0.8356(0.0158) |
| FedLoG | 0.8601(0.0118) | 0.8575(0.0074) | 0.8451(0.0103) | 0.7663(0.0086) | 0.7728(0.0049) | 0.7624(0.0063) | 0.9180(0.0005) | 0.9129(0.0015) | 0.9115(0.0043) | 0.9653(0.0020) | 0.9496(0.0037) | 0.9305(0.0049) | 0.9073(0.0012) | 0.8986(0.0014) | 0.8742(0.0107) |
| Cora | CiteSeer | PubMed | Amazon Photo | Amazon Computers | |||||||||||
| Methods | 3 Clients | 5 Clients | 10 Clients | 3 Clients | 5 Clients | 10 Clients | 3 Clients | 5 Clients | 10 Clients | 3 Clients | 5 Clients | 10 Clients | 3 Clients | 5 Clients | 10 Clients |
| Local | 0.1250(0.0030) | 0.2957(0.0079) | 0.2854(0.0263) | 0.4443(0.0131) | 0.3471(0.0020) | 0.5177(0.0052) | 0.7510(0.0010) | 0.7292(0.0000) | 0.7489(0.0013) | 0.1333(0.0000) | 0.1900(0.0392) | 0.3958(0.0211) | 0.1687(0.0000) | 0.2488(0.0000) | 0.3890(0.0043) |
| FedAvg | 0.5403(0.0797) | 0.5198(0.0179) | 0.4139(0.1308) | 0.6585(0.0220) | 0.6098(0.0301) | 0.6199(0.0084) | 0.6154(0.0000) | 0.8189(0.0120) | 0.8070(0.0043) | 0.1782(0.0419) | 0.2125(0.0186) | 0.3727(0.0221) | 0.2275(0.0117) | 0.3095(0.0179) | 0.4177(0.0335) |
| FedSAGE+ | 0.5653(0.0546) | 0.4265(0.0062) | 0.3836(0.0705) | 0.6572(0.0093) | 0.4023(0.0339) | 0.6154(0.0034) | 0.8944(0.0048) | 0.8921(0.0089) | 0.8926(0.0051) | 0.4781(0.0093) | 0.2298(0.0394) | 0.4607(0.0391) | 0.3462(0.0325) | 0.3555(0.0196) | 0.3596(0.0024) |
| FedGCN | 0.3689(0.0646) | 0.5877(0.0018) | 0.5075(0.0001) | 0.6232(0.0243) | 0.6530(0.1095) | 0.6139(0.0119) | 0.6154(0.0000) | 0.7759(0.0058) | 0.8188(0.0049) | 0.2565(0.0067) | 0.2604(0.0077) | 0.3708(0.0304) | 0.2086(0.0597) | 0.3084(0.0140) | 0.4103(0.0342) |
| FedPUB | 0.5529(0.0246) | 0.5192(0.0064) | 0.4767(0.0286) | 0.6798(0.0334) | 0.6691(0.0057) | 0.6938(0.0245) | 0.8878(0.0003) | 0.8822(0.0056) | 0.8836(0.0043) | 0.4085(0.0118) | 0.3890(0.0404) | 0.5033(0.0155) | 0.4414(0.0225) | 0.5025(0.0466) | 0.5253(0.0471) |
| FedNTD | 0.6355(0.0195) | 0.5880(0.0041) | 0.3913(0.1255) | 0.7057(0.0173) | 0.7014(0.0614) | 0.6151(0.0155) | 0.8939(0.0068) | 0.8852(0.0044) | 0.8816(0.0031) | 0.4042(0.0155) | 0.5833(0.0043) | 0.5286(0.0306) | 0.5056(0.0440) | 0.6034(0.0309) | 0.6482(0.0226) |
| FedED | 0.7338(0.0294) | 0.5514(0.0117) | 0.3916(0.1184) | 0.6646(0.0658) | 0.6148(0.0097) | 0.5381(0.0781) | 0.9008(0.0027) | 0.8884(0.0036) | 0.8730(0.0077) | 0.6227(0.0429) | 0.4265(0.0675) | 0.4629(0.0137) | 0.4582(0.0176) | 0.5408(0.0223) | 0.4940(0.0275) |
| FedLoG | 0.7341(0.0273) | 0.7413(0.0316) | 0.7406(0.0527) | 0.7624(0.0522) | 0.7415(0.0420) | 0.8044(0.0078) | 0.9044(0.0021) | 0.8956(0.0033) | 0.8965(0.0061) | 0.7065(0.0715) | 0.7077(0.0571) | 0.7176(0.0277) | 0.7677(0.0237) | 0.8156(0.0326) | 0.6735(0.0292) |
| Cora | CiteSeer | PubMed | Amazon Photo | Amazon Computers | |||||||||||
| Methods | 3 Clients | 5 Clients | 10 Clients | 3 Clients | 5 Clients | 10 Clients | 3 Clients | 5 Clients | 10 Clients | 3 Clients | 5 Clients | 10 Clients | 3 Clients | 5 Clients | 10 Clients |
| Local | 0.0000(0.0000) | 0.0000(0.0000) | 0.0000(0.0000) | 0.0000(0.0000) | 0.0000(0.0000) | 0.0000(0.0000) | 0.0000(0.0000) | 0.0000(0.0000) | 0.0000(0.0000) | 0.0000(0.0000) | 0.0000(0.0000) | 0.0000(0.0000) | 0.0000(0.0000) | 0.0000(0.0000) | 0.0000(0.0000) |
| FedAvg | 0.3900(0.1104) | 0.1119(0.0202) | 0.1652(0.0568) | 0.2022(0.0751) | 0.1914(0.0149) | 0.3189(0.0218) | 0.0000(0.0000) | 0.0013(0.0013) | 0.0020(0.0010) | 0.0000(0.0000) | 0.0000(0.0000) | 0.0085(0.0148) | 0.0000(0.0000) | 0.0000(0.0000) | 0.0073(0.0127) |
| FedSAGE+ | 0.5000(0.1250) | 0.1393(0.0285) | 0.1287(0.1187) | 0.5581(0.1284) | 0.1622(0.0823) | 0.3701(0.0823) | 0.0000(0.0015) | 0.0015(0.0014) | 0.0034(0.0014) | 0.0000(0.0015) | 0.0000(0.0015) | 0.0363(0.0283) | 0.0000(0.0010) | 0.0000(0.0146) | 0.0000(0.0226) |
| FedGCN | 0.0702(0.1130) | 0.2123(0.1297) | 0.1549(0.1297) | 0.1648(0.1287) | 0.1702(0.1817) | 0.0833(0.1833) | 0.0000(0.0016) | 0.0000(0.0016) | 0.0006(0.0016) | 0.0156(0.0127) | 0.0097(0.0127) | 0.0007(0.0129) | 0.0000(0.0117) | 0.0000(0.0126) | 0.0085(0.0129) |
| FedPUB | 0.0000(0.0000) | 0.0000(0.0000) | 0.0000(0.0000) | 0.0012(0.0210) | 0.0053(0.0220) | 0.0000(0.0026) | 0.0000(0.0010) | 0.0000(0.0010) | 0.0002(0.0010) | 0.0000(0.0010) | 0.0000(0.0010) | 0.0000(0.0010) | 0.0000(0.0010) | 0.0000(0.0010) | 0.0000(0.0126) |
| FedNTD | 0.3714(0.1273) | 0.1895(0.0999) | 0.1336(0.1317) | 0.4257(0.0777) | 0.2438(0.0476) | 0.2878(0.0459) | 0.0003(0.0082) | 0.0256(0.0256) | 0.0512(0.0550) | 0.0000(0.0010) | 0.0019(0.0090) | 0.0061(0.0640) | 0.0338(0.0150) | 0.0388(0.0128) | 0.0388(0.0124) |
| FedED | 0.5305(0.1078) | 0.1080(0.1518) | 0.1354(0.1325) | 0.1534(0.2660) | 0.1339(0.1479) | 0.1039(0.1785) | 0.0056(0.0045) | 0.0097(0.0020) | 0.0550(0.0220) | 0.2192(0.1935) | 0.1796(0.1634) | 0.0004(0.0070) | 0.1162(0.1297) | 0.0055(0.1125) | 0.0348(0.1112) |
| FedLoG | 0.6472(0.0811) | 0.4948(0.0930) | 0.4037(0.0619) | 0.6142(0.0292) | 0.5922(0.1037) | 0.5958(0.0668) | 0.1070(0.1623) | 0.1700(0.1236) | 0.2290(0.358) | 0.4795(0.0949) | 0.5525(0.1464) | 0.1328(0.1562) | 0.3580(0.1256) | 0.2175(0.1256) | 0.2142(0.1527) |
| Cora | CiteSeer | PubMed | Amazon Photo | Amazon Computers | |||||||||||
| Methods | 3 Clients | 5 Clients | 10 Clients | 3 Clients | 5 Clients | 10 Clients | 3 Clients | 5 Clients | 10 Clients | 3 Clients | 5 Clients | 10 Clients | 3 Clients | 5 Clients | 10 Clients |
| Local | 0.0995(0.0884) | 0.1488(0.0590) | 0.1778(0.1284) | 0.1435(0.0113) | 0.1968(0.0066) | 0.1337(0.0000) | 0.3570(0.0000) | 0.3947(0.0000) | 0.3936(0.0010) | 0.0313(0.0010) | 0.0965(0.0040) | 0.1169(0.0890) | 0.1571(0.0000) | 0.1974(0.0000) | 0.2816(0.1110) |
| FedAvg | 0.3583(0.2026) | 0.2713(0.1880) | 0.3924(0.1880) | 0.2572(0.2220) | 0.2859(0.1348) | 0.2976(0.1318) | 0.3333(0.0000) | 0.5243(0.1318) | 0.5175(0.1240) | 0.2597(0.1118) | 0.8853(0.1142) | 0.1661(0.1720) | 0.1978(0.0000) | 0.2924(0.1637) | 0.4799(0.1110) |
| FedSAGE+ | 0.2411(0.1019) | 0.3250(0.2260) | 0.4129(0.1015) | 0.3630(0.3585) | 0.0646(0.0990) | 0.1048(0.1322) | 0.3834(0.1122) | 0.5616(0.1216) | 0.5279(0.1126) | 0.2782(0.1886) | 0.990(0.1120) | 0.1472(0.1549) | 0.2132(0.3683) | 0.3718(0.1114) | 0.2756(0.1110) |
| FedGCN | 0.3449(0.4049) | 0.3320(0.0520) | 0.4825(0.1819) | 0.2572(0.1777) | 0.4548(0.1900) | 0.2144(0.1566) | 0.3333(0.1126) | 0.4830(0.1126) | 0.4890(0.1126) | 0.2597(0.1126) | 0.6659(0.1226) | - 16O( 16O) | - 16O( 16O) | - 16O( 16O) | - 16O( 16O) |
| FedPUB | 0.3990(0.2039) | - 2258O( 0.153) | - 16O( 16O) | - 3929( 16O) | - 34O( 1113) | - 3939( 1226) | - 4112( 1113) | - 6O( 1113) | - 6O( 1113) | - 5743( 1113) | - 1O( 1113) | - 1O( 1113) | - 254O( 1113) | - 254O( 1113) | - 254O( 1113) |
| FedNTD | - 38O( 1328) | - 3169( 1328) | - 37O( 1328) | - 37O( 1328) | - 37O( 1328) | - 37O( 1328) | - 37O( 1328) | - 37O( 1328) | - 37O( 1328) | - 37O( 1328) | - 37O( 1328) | - 37O( 1328) | - 37O( 1328) | - 37O( 1328) | - 37O( 1328) |
| FedED | - 4527( 1355) | - 2537( 1355) | - 3194( 1364) | - 333O( 1368) | - 41O( 1368) | - 1346( 1316) | - 4842( 1328) | - 6352( 1311) | - 5969( 1311) | - 5541( 1358) | - 1563( 1588) | - 2622( 1527) | - 1162( 1712) | - 2162( 1712) | - 2522( 1616) |
| FedLoG | 0.5047(0.0884) | 0.4439(0.0455) | 0.6055(0.0914) | 0.5079(0.0923) | 0.5067(0.0919) | 0.6053(0.0914) | - 6O53( 1914) | - 6O53( 1914) | - 6O53( 1914) | - 6O53( 1914) | - 6O53( 1914) | - 6O53( 1914) | - 6O53( 1914) | ||
| | No Noise | GN (a=0.01) | GN (a=0.1) | GN (a=0.5) | RP |
| Seen Graph | 0.8601 (0.0118) | 0.8530 (0.0089) | 0.8542 (0.0080) | 0.8560 (0.0010) | 0.8631 (0.0010) |
| Unseen Node | 0.7341 (0.0273) | 0.7127 (0.0191) | 0.7217 (0.0318) | 0.7057 (0.0123) | 0.7351 (0.0054) |
| Missing Class | 0.6472 (0.0811) | 0.6244 (0.05275) | 0.6032 (0.0457) | 0.6277 (0.0148) | 0.6328 (0.0586) |
| New Client | 0.5047 (0.0884) | 0.5278 (0.0326) | 0.5297 (0.0523) | 0.4883 (0.0232) | 0.5199 (0.0421) |
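For context, GN in the table above presumably denotes zero-mean Gaussian noise of standard deviation a added to the input node features, and RP a random perturbation; the exact corruption definitions are not restated here, so the following is only a hedged sketch of that assumed setup:

```python
import numpy as np

def gaussian_noise(x: np.ndarray, a: float, seed: int = 0) -> np.ndarray:
    """Add zero-mean Gaussian noise with standard deviation `a` to features.

    Matching the table's GN (a=0.01 / 0.1 / 0.5) settings is an assumption
    about the experimental setup, not a confirmed definition.
    """
    rng = np.random.default_rng(seed)
    return x + rng.normal(0.0, a, size=x.shape)

features = np.ones((4, 3))          # toy node-feature matrix
noisy = gaussian_noise(features, a=0.1)
```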
| Notation | Description |
| General Notations | |
| S | Server |
| K | Number of clients |
| G | Graph |
| V | Set of nodes |
| E | Set of edges |
| D | Dataset consisting of G and Y |
| Y | Label set for the nodes |
| Xv | Feature matrix of a set of nodes V |
| xv | Feature vector of a node v ∈ V |
| φ | Set of parameters of a global model |
| |Cv| | Number of classes within a set of nodes V |
| r | Current round |
| h_v | Representation of a node v |
| α | Weight of prediction between head and tail branches |
| Local Client k Notations | |
| Gk | Local graph for client k |
| Vk | Set of nodes within a local graph for client k |
| Ek | Set of edges within a local graph for client k |
| Dk | Local dataset for client k |
| Dlocal | Combined local datasets, Dlocal = ∪_{k=1}^{K} Dk |
| Yk | Label set for the nodes within a local graph for client k |
| φk | Set of parameters of a local model for client k |
| φk | Parameters of a GNN embedder for client k |
| θk,H | Parameters of a head-branch classifier for client k |
| θk,T | Parameters of a tail-branch classifier for client k |
| Vk,head | Set of synthetic nodes within a head-degree branch of client k |
| v(k,s) | s-th synthetic node for class c in a head-branch for client k |
| XVk,head | Feature matrix of a set of nodes Vk,head |
| x(k,s) | Feature of the s-th synthetic node for class c in a head-branch for client k |
| h(k,s) | Representation of the s-th synthetic node for class c in a head-branch for client k |
| Pk,head | Set of prototype representations (i.e., representations of synthetic nodes) in a head-branch for client k |
| hVk,c | Average of prototype representations of class c in a head-branch for client k |
| rck | Proportion of nodes labeled c in client k's dataset Dk |
| PGk | Pretrained prompt generator for client k (regardless of specific class) |
| γk | Class-wise adaptive factor for client k |
| Gsyn | Synthetic graph set whose graphs each contain the locally adapted global synthetic nodes connected to their generated prompt nodes |
| vk,p | Generated prompt node for node vk |
| xvk,p | Generated feature of the prompt node vk,p for the input feature xvk of node vk |
| xNv,k | Average of features of the h-hop neighbors of node vk within Gk |
| Global Synthetic Node Notations | |
| Dglobal | Global synthetic dataset |
| Gglobal | Global synthetic graph |
| Vglobal | Set of global synthetic nodes |
| Yglobal | Label set for the global synthetic nodes |
| XVglobal | Feature matrix of the global synthetic nodes Vglobal |
| xv(c,s) | Feature of the s-th global synthetic node v(c,s) for class c |
| PGc | Class-specific prompt generator for class c (obtained by aggregating the PGk of all clients k in a class-wise manner) |
| Hyperparameter Notations | |
| s | Hyperparameter for assigning the number of synthetic nodes per class |
| λ | Hyperparameter for adjusting tail degree threshold |
| β | Hyperparameter for adjusting the extent of regularization of the features of synthetic nodes |
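To make the φ / φk / Dlocal notation above concrete, here is a minimal FedAvg-style aggregation sketch in which the server weights each client's parameters by its share of Dlocal = ∪k Dk. The size-proportional weighting is illustrative of the standard federated baseline, not necessarily the exact scheme this paper uses:

```python
import numpy as np

def aggregate(client_params: list, client_sizes: list) -> np.ndarray:
    """Server-side aggregation: phi = sum_k (|D_k| / |D_local|) * phi_k."""
    total = sum(client_sizes)
    weights = [n / total for n in client_sizes]
    return sum(w * p for w, p in zip(weights, client_params))

# Two toy clients with parameter vectors phi_1, phi_2 and |D_1|=1, |D_2|=3.
phi = aggregate([np.array([1.0, 2.0]), np.array([3.0, 4.0])], [1, 3])
# Weights 1/4 and 3/4 give [2.5, 3.5].
```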
| FedAvg | FedSAGE+ | FedGCN | FedPUB | FedNTD | FedED | FedLoG | |
| Cost (MB) | 393.11 | 1543.58 | 393.11 | 786.03 | 393.11 | 393.11 | 1011.14 |
| Model | FedAvg | FedSAGE+ | FedGCN | FedPUB | FedNTD | FedED | FedLoG |
| Rounds to Reach 0.8 | 58 | 100 (Fails to reach) | 57 | 29 | 19 | 39 | 10 |
| Cost (MB) | 228.00 | 1543.58 | 224.07 | 227.95 | 72.79 | 149.41 | 101.11 |
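The two cost tables are consistent with a fixed per-round payload for most methods: e.g., FedAvg's 393.11 MB over what is presumably 100 rounds is about 3.93 MB/round, and 58 rounds at that rate reproduces the 228.00 MB reported for reaching 0.8. A quick check under that constant-payload assumption:

```python
def cost_to_reach(total_mb: float, total_rounds: int, rounds_needed: int) -> float:
    """Total traffic = per-round payload x rounds; assumes a constant payload.

    The 100-round total is an assumption inferred from the two tables,
    not stated explicitly alongside them.
    """
    per_round = total_mb / total_rounds
    return per_round * rounds_needed

fedavg_cost = cost_to_reach(393.11, 100, 58)  # close to the table's 228.00 MB
```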
| 3 Clients | 5 Clients | 10 Clients | ||||||||||
| SG | UN | MC | NC | SG | UN | MC | NC | SG | UN | MC | NC | |
| FedLoG (α = 0.5) | 0.8613(0.0108) | 0.7154(0.0239) | 0.5668(0.0330) | 0.4852(0.0329) | 0.8519(0.0065) | 0.7397(0.0016) | 0.4837(0.0331) | 0.3982(0.0055) | 0.8377(0.0089) | 0.7160(0.0770) | 0.3769(0.1721) | 0.5285(0.0986) |
| FedLoG | 0.8601(0.0118) | 0.7341(0.0273) | 0.6472(0.0811) | 0.5047(0.0884) | 0.8575(0.0074) | 0.7413(0.0316) | 0.4948(0.0930) | 0.4439(0.0455) | 0.8451(0.0103) | 0.7406(0.0527) | 0.4037(0.0619) | 0.6055(0.0914) |
| Improvement (%p) | -0.12 | +1.87 | +8.04 | +1.95 | +0.56 | +0.16 | +1.11 | +4.57 | +0.74 | +2.46 | +2.68 | +7.70 |
| Class | 1 | 2 | 3 | 4 | 5 | 6 |
| # Nodes | 5 | 27 | 129 | 16 | 0 | 25 |
| γ[c] | 0.288 | 0.289 | 0.297 | 0.282 | 0.282 | 0.286 |
| Cora | CiteSeer | PubMed | Amazon Photo | Amazon Computers | |||||||||||
| Methods | 3 Clients | 5 Clients | 10 Clients | 3 Clients | 5 Clients | 10 Clients | 3 Clients | 5 Clients | 10 Clients | 3 Clients | 5 Clients | 10 Clients | 3 Clients | 5 Clients | 10 Clients |
| Local | 0.1250(0.0030) | 0.2957(0.0077) | 0.2854(0.0263) | 0.4443(0.0131) | 0.3471(0.0020) | 0.5177(0.0052) | 0.7510(0.0010) | 0.7292(0.0000) | 0.7489(0.0013) | 0.1333(0.0000) | 0.1900(0.0039) | 0.3958(0.0211) | 0.1687(0.0001) | 0.2891(0.0000) | 0.3890(0.0043) |
| FedAvg | 0.6696(0.0232) | 0.5939(0.0215) | 0.4243(0.1304) | 0.6055(0.0033) | 0.7126(0.0210) | 0.5255(0.0119) | 0.8679(0.0059) | 0.7192(0.0170) | 0.6793(0.0127) | 0.2481(0.0455) | 0.2491(0.0671) | 0.2692(0.0304) | 0.3480(0.0428) | 0.2980(0.0198) | 0.2617(0.0074) |
| FedSAGE+ | 0.6362(0.0764) | 0.5050(0.0047) | 0.3953(0.0527) | 0.4090(0.0155) | 0.2667(0.0160) | 0.3945(0.0571) | 0.9035(0.0028) | 0.8820(0.0015) | 0.8312(0.0124) | 0.3117(0.0071) | 0.2529(0.0198) | 0.3651(0.0183) | 0.4205(0.0073) | 0.6028(0.0055) | 0.3404(0.0189) |
| FedGCN | 0.6840(0.0083) | 0.6299(0.0022) | 0.4389(0.1433) | 0.6148(0.0124) | 0.6500(0.0319) | 0.5767(0.0254) | 0.8571(0.0027) | 0.7138(0.0114) | 0.6558(0.0044) | 0.2329(0.0448) | 0.2411(0.0512) | 0.2617(0.0238) | 0.3519(0.0256) | 0.2923(0.0235) | 0.2621(0.0041) |
| FedPUB | 0.6772(0.0039) | 0.5971(0.0017) | 0.4717(0.0114) | 0.6097(0.0264) | 0.7222(0.0087) | 0.5958(0.0049) | 0.8842(0.0114) | 0.8954(0.0022) | 0.8864(0.0023) | 0.4842(0.0204) | 0.5109(0.0331) | 0.3790(0.0559) | 0.4886(0.0123) | 0.5068(0.0286) | 0.4574(0.0125) |
| FedNTD | 0.7066(0.0241) | 0.6402(0.0076) | 0.4245(0.1274) | 0.6443(0.0105) | 0.7639(0.0082) | 0.5664(0.0168) | 0.8953(0.0052) | 0.8769(0.0033) | 0.8789(0.0013) | 0.5516(0.0283) | 0.6196(0.0098) | 0.4903(0.0615) | 0.4183(0.0033) | 0.6707(0.0252) | 0.6778(0.0102) |
| FedED | 0.6904(0.0163) | 0.5453(0.0185) | 0.3024(0.0038) | 0.5985(0.0330) | 0.6568(0.0060) | 0.4448(0.0232) | 0.8978(0.0047) | 0.8771(0.0043) | 0.8805(0.0028) | 0.6491(0.0346) | 0.5872(0.0395) | 0.2581(0.0460) | 0.4326(0.0119) | 0.7420(0.0291) | 0.5751(0.0374) |
| FedLoG | 0.7224(0.0102) | 0.7163(0.0216) | 0.6203(0.0089) | 0.6363(0.0153) | 0.7645(0.0141) | 0.6634(0.0235) | 0.8627(0.0078) | 0.8622(0.0058) | 0.8627(0.0062) | 0.8754(0.0049) | 0.8275(0.0340) | 0.6576(0.0202) | 0.7759(0.0475) | 0.8625(0.0180) | 0.7163(0.0279) |
| Cora | CiteSeer | PubMed | Amazon Photo | Amazon Computers | |||||||||||
| Methods | 3 Clients | 5 Clients | 10 Clients | 3 Clients | 5 Clients | 10 Clients | 3 Clients | 5 Clients | 10 Clients | 3 Clients | 5 Clients | 10 Clients | 3 Clients | 5 Clients | 10 Clients |
| Local | 0.0000(0.0000) | 0.0000(0.0000) | 0.0000(0.0000) | 0.0000(0.0000) | 0.0000(0.0000) | 0.0000(0.0000) | 0.0000(0.0000) | 0.0000(0.0000) | 0.0000(0.0000) | 0.0000(0.0000) | 0.0000(0.0000) | 0.0000(0.0000) | 0.0001(0.0000) | 0.0000(0.0000) | 0.0000(0.0000) |
| FedAvg | 0.0000(0.0000) | 0.2091(0.0291) | 0.0000(0.0317) | 0.1801(0.0405) | 0.4269(0.0517) | 0.1490(0.0387) | 0.2771(0.0207) | 0.0499(0.0133) | 0.0166(0.0064) | 0.0000(0.0000) | 0.0000(0.0000) | 0.0000(0.0000) | 0.0000(0.0000) | 0.0000(0.0000) | 0.0000(0.0246) |
| FedSAGE+ | 0.2030(0.0883) | 0.2774(0.0528) | 0.0244(0.0326) | 0.3243(0.2832) | 0.4155(0.0229) | 0.2007(0.1161) | 0.0495(0.0116) | 0.0733(0.0160) | 0.1166(0.0217) | 0.0000(0.0000) | 0.0000(0.0000) | 0.0175(0.0584) | 0.0000(0.0000) | 0.0000(0.0000) | 0.0189(0.0327) |
| FedGCN | 0.0000(0.0000) | 0.2940(0.0280) | 0.0579(0.0250) | 0.0961(0.0364) | 0.3562(0.1246) | 0.1831(0.0253) | 0.2035(0.0165) | 0.0478(0.0058) | 0.0049(0.0112) | 0.0000(0.0000) | 0.0000(0.0000) | 0.0000(0.0000) | 0.0000(0.0000) | 0.0000(0.0000) | 0.0177(0.0885) |
| FedPUB | 0.0000(0.0000) | 0.0082(0.0094) | 0.0352(0.0000) | 0.0000(0.0000) | 0.0251(0.0229) | 0.0070(0.0866) | 0.0318(0.0125) | 0.0002(0.0044) | 0.0100(0.0087) | 0.0000(0.0000) | 0.0000(0.0000) | 0.0000(0.0000) | 0.0000(0.0000) | 0.0000(0.0000) | 0.0000(0.0085) |
| FedNTD | 0.1182(0.0886) | 0.2650(0.1818) | 0.0457(0.0392) | 0.4054(0.2833) | 0.5822(0.1616) | 0.2054(0.1616) | 0.0733(0.0388) | 0.2688(0.1869) | 0.3895(0.1829) | 0.0292(0.1679) | 0.0385(0.1655) | 0.1558(0.1537) | 0.1708(0.1637) | 0.0088(0.1677) | 0.0017(0.1884) |
| FedED | 0.1333(0.1449) | 0.1449(0.1333) | 0.0370(0.1379) | 0.2523(0.2633) | 0.2922(0.1228) | 0.1197(0.1229) | 0.1487(0.1218) | 0.1118(0.1292) | 0.1604(0.1219) | 0.5053(0.1642) | 0.0000(0.1835) | 0.0029(0.1624) | 0.0613(0.1627) | 1232(1232) | 0.0091(1232) |
| FedLoG | 0.4273(0.0567) | 0.5528(0.0569) | 0.2649(0.0174) | 0.4234(0.1324) | 0.5342(0.1449) | 0.4484(0.2019) | 0.5697(0.2118) | 0.4758(0.1292) | 0.5697(0.2169) | 0.3333(0.1818) | 0.5423(1.1865) | 0.4397(1.1615) | 0.7648(0.9885) | 0.2548(0.1824) | 0.2929(1.1835) |
| Dataset | Nodes | Edges | Features | Classes | Description |
| Cora | 2,708 | 5,429 | 1,433 | 7 | Scientific publications |
| CiteSeer | 3,327 | 4,732 | 3,703 | 6 | Scientific publications |
| PubMed | 19,717 | 44,338 | 500 | 3 | Scientific publications |
| Amazon Computers | 13,752 | 245,861 | 767 | 10 | Amazon co-purchase |
| Amazon Photos | 7,650 | 143,663 | 745 | 8 | Amazon co-purchase |
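For orientation, a statistic derived from the table above — mean degree 2|E|/|V| — highlights how much sparser the citation graphs are than the co-purchase graphs (Cora ≈ 4.0 vs. Amazon Computers ≈ 35.8), treating each dataset as an undirected graph:

```python
# (|V|, |E|) pairs copied from the dataset statistics table.
datasets = {
    "Cora": (2708, 5429),
    "CiteSeer": (3327, 4732),
    "PubMed": (19717, 44338),
    "Amazon Computers": (13752, 245861),
    "Amazon Photos": (7650, 143663),
}

def avg_degree(nodes: int, edges: int) -> float:
    """Mean degree of an undirected graph: 2|E| / |V|."""
    return 2 * edges / nodes

cora_deg = avg_degree(*datasets["Cora"])                   # ~4.0
computers_deg = avg_degree(*datasets["Amazon Computers"])  # ~35.8
```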
| Baseline | URL / Note |
| FedAvg* | https://github.com/JinheonBaek/FED-PUB/ |
| FedSAGE | https://github.com/zkhku/fedsage |
| FedGCN | https://github.com/yh-yao/FedGCN |
| FedPub | https://github.com/JinheonBaek/FED-PUB/ |
| FedNTD | https://github.com/Lee-Gihun/FedNTD |
| FedED | Self-implemented due to absence of official code. |
| Dataset | Class | Seen Graph | Unseen Node | Missing Class | ||
| Train | Valid | Test | Test | Test | ||
| Global | 0 | 49 | 37 | 32 | 268 | 86 |
| 1 | 82 | 45 | 75 | 258 | 0 | |
| 2 | 140 | 110 | 133 | 217 | 220 | |
| 3 | 242 | 206 | 171 | 605 | 0 | |
| 4 | 120 | 82 | 90 | 268 | 0 | |
| 5 | 45 | 41 | 32 | 120 | 29 | |
| 6 | 53 | 42 | 27 | 9 | 59 | |
| Client 0 | 0 | 8 | 12 | 4 | 121 | 0 |
| 1 | 7 | 4 | 3 | 124 | 0 | |
| 2 | 4 | 2 | 3 | 190 | 0 | |
| 3 | 208 | 168 | 131 | 125 | 0 | |
| 4 | 33 | 22 | 22 | 96 | 0 | |
| 5 | 0 | 0 | 0 | 0 | 29 | |
| 6 | 0 | 1 | 0 | 1 | 23 | |
| Client 1 | 0 | 0 | 0 | 0 | 0 | 86 |
| 1 | 7 | 1 | 3 | 117 | 0 | |
| 2 | 136 | 108 | 130 | 27 | 0 | |
| 3 | 6 | 15 | 14 | 202 | 0 | |
| 4 | 74 | 49 | 60 | 74 | 0 | |
| 5 | 3 | 4 | 2 | 44 | 0 | |
| 6 | 0 | 0 | 0 | 0 | 36 | |
| Client 2 | 0 | 41 | 25 | 28 | 147 | 0 |
| 1 | 68 | 40 | 69 | 17 | 0 | |
| 2 | 0 | 0 | 0 | 0 | 220 | |
| 3 | 28 | 23 | 26 | 278 | 0 | |
| 4 | 13 | 11 | 8 | 98 | 0 | |
| 5 | 42 | 37 | 30 | 76 | 0 | |
| 6 | 53 | 41 | 27 | 8 | 0 | |
| New Client | 0 | - | - | 222 | - | - |
| 1 | - | - | 12 | - | - | |
| 2 | - | - | 2 | - | - | |
| 3 | - | - | 107 | - | - | |
| 4 | - | - | 87 | - | - | |
| 5 | - | - | 166 | - | - | |
| 6 | - | - | 9 | - | - | |
| Dataset | Class | Seen Graph | Unseen Node | Missing Class | ||
| Train | Valid | Test | Test | Test | ||
| Global | 0 | 41 | 24 | 25 | 72 | 0 |
| 1 | 90 | 75 | 90 | 185 | 0 | |
| 2 | 196 | 163 | 137 | 424 | 93 | |
| 3 | 116 | 85 | 77 | 210 | 0 | |
| 4 | 154 | 96 | 110 | 132 | 121 | |
| 5 | 37 | 25 | 26 | 88 | 53 | |
| Client 0 | 0 | 19 | 5 | 11 | 26 | 0 |
| 1 | 52 | 50 | 58 | 59 | 0 | |
| 2 | 67 | 52 | 32 | 296 | 0 | |
| 3 | 73 | 52 | 44 | 76 | 0 | |
| 4 | 7 | 1 | 7 | 60 | 0 | |
| 5 | 0 | 0 | 0 | 0 | 53 | |
| Client 1 | 0 | 5 | 5 | 2 | 18 | 0 |
| 1 | 27 | 16 | 26 | 79 | 0 | |
| 2 | 129 | 111 | 105 | 128 | 0 | |
| 3 | 16 | 15 | 14 | 66 | 0 | |
| 4 | 0 | 0 | 0 | 0 | 121 | |
| 5 | 25 | 13 | 17 | 44 | 0 | |
| Client 2 | 0 | 17 | 14 | 12 | 28 | 0 |
| 1 | 11 | 9 | 6 | 47 | 0 | |
| 2 | 0 | 0 | 0 | 0 | 93 | |
| 3 | 27 | 18 | 19 | 68 | 0 | |
| 4 | 147 | 95 | 103 | 72 | 0 | |
| 5 | 12 | 12 | 9 | 44 | 0 | |
| New Client | 0 | - | - | 35 | - | - |
| 1 | - | - | 53 | - | - | |
| 2 | - | - | 12 | - | - | |
| 3 | - | - | 110 | - | - | |
| 4 | - | - | 93 | - | - | |
| 5 | - | - | 211 | - | - | |
| Dataset | Class | Seen Graph | Unseen Node | Missing Class | ||
| Train | Valid | Test | Test | Test | ||
| Global | 0 | 1271 | 983 | 949 | 221 | 168 |
| 1 | 1176 | 897 | 951 | 572 | 1003 | |
| 2 | 2972 | 2171 | 2141 | 658 | 0 | |
| Client 0 | 0 | 277 | 222 | 209 | 93 | 0 |
| 1 | 0 | 0 | 0 | 0 | 346 | |
| 2 | 1642 | 1227 | 1189 | 193 | 0 | |
| Client 1 | 0 | 994 | 761 | 740 | 128 | 0 |
| 1 | 0 | 0 | 0 | 0 | 657 | |
| 2 | 594 | 408 | 414 | 213 | 0 | |
| Client 2 | 0 | 0 | 0 | 0 | 0 | 168 |
| 1 | 1176 | 897 | 951 | 572 | 0 | |
| 2 | 736 | 536 | 538 | 252 | 0 | |
| New Client | 0 | - | - | 787 | - | - |
| 1 | - | - | 3500 | - | - | |
| 2 | - | - | 591 | - | - | |
| Dataset | Class | Seen Graph | Unseen Node | Missing Class | ||
| Train | Valid | Test | Test | Test | ||
| Global | 0 | 150 | 102 | 108 | 61 | 28 |
| 1 | 592 | 502 | 468 | 881 | 0 | |
| 2 | 266 | 219 | 197 | 61 | 20 | |
| 3 | 358 | 241 | 258 | 329 | 0 | |
| 4 | 300 | 258 | 250 | 327 | 0 | |
| 5 | 332 | 217 | 247 | 0 | 25 | |
| 6 | 214 | 146 | 128 | 907 | 0 | |
| 7 | 39 | 17 | 27 | 307 | 0 | |
| Client 0 | 0 | 0 | 0 | 0 | 0 | 28 |
| 1 | 71 | 50 | 49 | 345 | 0 | |
| 2 | 261 | 215 | 196 | 3 | 0 | |
| 3 | 49 | 26 | 33 | 135 | 0 | |
| 4 | 5 | 6 | 10 | 186 | 0 | |
| 5 | 332 | 217 | 247 | 0 | 0 | |
| 6 | 9 | 5 | 7 | 96 | 0 | |
| 7 | 9 | 5 | 9 | 59 | 0 | |
| Client 1 | 0 | 146 | 100 | 107 | 7 | 0 |
| 1 | 394 | 369 | 333 | 226 | 0 | |
| 2 | 0 | 0 | 0 | 0 | 20 | |
| 3 | 11 | 8 | 8 | 101 | 0 | |
| 4 | 3 | 1 | 0 | 67 | 0 | |
| 5 | 0 | 0 | 0 | 0 | 18 | |
| 6 | 197 | 132 | 113 | 69 | 0 | |
| 7 | 1 | 0 | 3 | 40 | 0 | |
| Client 2 | 0 | 4 | 2 | 1 | 54 | 0 |
| 1 | 127 | 83 | 86 | 310 | 0 | |
| 2 | 5 | 4 | 1 | 58 | 0 | |
| 3 | 298 | 207 | 217 | 93 | 0 | |
| 4 | 292 | 251 | 240 | 74 | 0 | |
| 5 | 0 | 0 | 0 | 0 | 7 | |
| 6 | 8 | 9 | 8 | 742 | 0 | |
| 7 | 29 | 12 | 15 | 208 | 0 | |
| New Client | 0 | - | - | 0 | - | - |
| 1 | - | - | 72 | - | - | |
| 2 | - | - | 3 | - | - | |
| 3 | - | - | 43 | - | - | |
| 4 | - | - | 64 | - | - | |
| 5 | - | - | 1 | - | - | |
| 6 | - | - | 1412 | - | - | |
| 7 | - | - | 248 | - | - | |
| Dataset | Class | Seen Graph | Unseen Node | Missing Class | ||
| Train | Valid | Test | Test | Test | ||
| Global | 0 | 168 | 120 | 115 | 146 | 140 |
| 1 | 297 | 221 | 201 | 1604 | 0 | |
| 2 | 558 | 442 | 410 | 2 | 479 | |
| 3 | 91 | 63 | 66 | 765 | 0 | |
| 4 | 1399 | 1094 | 1103 | 3397 | 0 | |
| 5 | 129 | 69 | 98 | 0 | 60 | |
| 6 | 182 | 134 | 164 | 132 | 93 | |
| 7 | 343 | 220 | 232 | 4 | 167 | |
| 8 | 748 | 597 | 580 | 1321 | 0 | |
| 9 | 119 | 80 | 79 | 105 | 22 | |
| Client 0 | 0 | 164 | 118 | 114 | 32 | 0 |
| 1 | 98 | 65 | 71 | 398 | 0 | |
| 2 | 558 | 442 | 410 | 2 | 0 | |
| 3 | 66 | 42 | 46 | 206 | 0 | |
| 4 | 41 | 25 | 30 | 813 | 0 | |
| 5 | 0 | 0 | 0 | 0 | 46 | |
| 6 | 0 | 0 | 0 | 0 | 93 | |
| 7 | 343 | 220 | 232 | 4 | 0 | |
| 8 | 12 | 10 | 8 | 447 | 0 | |
| 9 | 108 | 59 | 61 | 25 | 0 | |
| Client 1 | 0 | 4 | 2 | 1 | 114 | 0 |
| 1 | 125 | 84 | 80 | 573 | 0 | |
| 2 | 0 | 0 | 0 | 0 | 262 | |
| 3 | 7 | 6 | 5 | 235 | 0 | |
| 4 | 139 | 123 | 117 | 1504 | 0 | |
| 5 | 129 | 69 | 98 | 0 | 0 | |
| 6 | 181 | 133 | 164 | 4 | 0 | |
| 7 | 0 | 0 | 0 | 0 | 143 | |
| 8 | 707 | 561 | 552 | 178 | 0 | |
| 9 | 11 | 21 | 18 | 80 | 0 | |
| Client 2 | 0 | 0 | 0 | 0 | 0 | 140 |
| 1 | 74 | 72 | 50 | 633 | 0 | |
| 2 | 0 | 0 | 0 | 0 | 217 | |
| 3 | 18 | 15 | 15 | 324 | 0 | |
| 4 | 1219 | 946 | 956 | 1080 | 0 | |
| 5 | 0 | 0 | 0 | 0 | 14 | |
| 6 | 1 | 1 | 0 | 128 | 0 | |
| 7 | 0 | 0 | 0 | 0 | 24 | |
| 8 | 29 | 26 | 20 | 696 | 0 | |
| 9 | 0 | 0 | 0 | 0 | 22 | |
| New Client | 0 | - | - | 30 | - | - |
| 1 | - | - | 1374 | - | - | |
| 2 | - | - | 2 | - | - | |
| 3 | - | - | 302 | - | - | |
| 4 | - | - | 1360 | - | - | |
| 5 | - | - | 1 | - | - | |
| 6 | - | - | 1 | - | - | |
| 7 | - | - | 1 | - | - | |
| 8 | - | - | 167 | - | - | |
| 9 | - | - | 5 | - | - | |
| Method | Metrics | Noisy MNIST | CIFAR-10 | ||||||
| 2 × 2 | 3 × 3 | 4 × 4 | 5 × 5 | 6 × 6 | 2 × 2 | 3 × 3 | 4 × 4 | ||
| Gumbel-Sinkhorn Network (Mena et al., 2018) | Kendall-Tau ↑ | 0.9984 | 0.6908 | 0.3578 | 0.2430 | 0.1755 | 0.8378 | 0.5044 | 0.4016 |
| Accuracy (%) | 99.81 | 44.65 | 0.86 | 0.00 | 0.00 | 76.54 | 6.07 | 0.21 | |
| Correct (%) | 99.91 | 80.20 | 49.51 | 26.94 | 14.91 | 86.10 | 43.59 | 25.31 | |
| RMSE ↓ | 0.0022 | 0.1704 | 0.4572 | 0.8915 | 1.0570 | 0.3749 | 0.9590 | 1.0960 | |
| MAE ↓ | 0.0003 | 0.0233 | 0.1005 | 0.3239 | 0.4515 | 0.1368 | 0.5320 | 0.6873 | |
| DiffSort (Petersen et al., 2022) | Kendall-Tau ↑ | 0.9931 | 0.3054 | 0.0374 | 0.0176 | 0.0095 | 0.6463 | 0.1460 | 0.0490 |
| Accuracy (%) | 99.02 | 5.56 | 0.00 | 0.00 | 0.00 | 59.18 | 0.96 | 0.00 | |
| Correct (%) | 99.50 | 42.25 | 10.77 | 6.39 | 3.77 | 75.48 | 27.87 | 12.27 | |
| RMSE ↓ | 0.0689 | 1.0746 | 1.3290 | 1.4883 | 1.5478 | 0.7389 | 1.2691 | 1.3876 | |
| MAE ↓ | 0.0030 | 0.4283 | 0.6531 | 0.8204 | 0.8899 | 0.2800 | 0.8123 | 0.9737 | |
| Error-free DiffSort (Kim et al., 2024) | Kendall-Tau ↑ | 0.9899 | 0.2014 | 0.0100 | 0.0034 | -0.0021 | 0.6604 | 0.1362 | 0.0318 |
| Accuracy (%) | 98.62 | 0.82 | 0.00 | 0.00 | 0.00 | 60.96 | 0.68 | 0.00 | |
| Correct (%) | 99.28 | 32.65 | 7.40 | 4.39 | 2.50 | 75.99 | 26.75 | 10.33 | |
| RMSE ↓ | 0.0814 | 1.1764 | 1.3579 | 1.5084 | 1.5606 | 0.7295 | 1.2820 | 1.4095 | |
| MAE ↓ | 0.0041 | 0.5124 | 0.6818 | 0.8424 | 0.9041 | 0.2731 | 0.8260 | 0.9990 | |
| Symmetric Diffusers (Ours) | Kendall-Tau ↑ | 0.9992 | 0.8126 | 0.4859 | 0.2853 | 0.1208 | 0.9023 | 0.8363 | 0.2518 |
| Accuracy (%) | 99.88 | 57.38 | 1.38 | 0.00 | 0.00 | 90.15 | 70.94 | 0.64 | |
| Correct (%) | 99.94 | 86.16 | 58.51 | 37.91 | 18.54 | 92.99 | 86.84 | 34.69 | |
| RMSE ↓ | 0.0026 | 0.0241 | 0.1002 | 0.2926 | 0.4350 | 0.3248 | 0.3892 | 0.8953 | |
| MAE ↓ | 0.0001 | 0.0022 | 0.0130 | 0.0749 | 0.1587 | 0.0651 | 0.0977 | 0.5044 | |
| Method | Metrics | Sequence Length | ||||||||
| 3 | 5 | 7 | 9 | 15 | 32 | 52 | 100 | 200 | ||
| DiffSort (Petersen et al., 2022) | Kendall-Tau ↑ | 0.930 | 0.898 | 0.864 | 0.801 | 0.638 | 0.535 | 0.341 | 0.166 | 0.107 |
| Accuracy (%) | 93.8 | 83.9 | 71.5 | 52.2 | 10.3 | 0.2 | 0.0 | 0.0 | 0.0 | |
| Correct (%) | 95.8 | 92.9 | 90.1 | 85.2 | 82.3 | 61.8 | 42.8 | 23.2 | 15.3 | |
| Error-free DiffSort (Kim et al., 2024) | Kendall-Tau ↑ | 0.974 | 0.967 | 0.962 | 0.952 | 0.938 | 0.879 | 0.170 | 0.140 | 0.002 |
| Accuracy (%) | 97.7 | 95.3 | 92.9 | 89.6 | 83.1 | 57.1 | 0.0 | 0.0 | 0.0 | |
| Correct (%) | 98.4 | 97.7 | 97.2 | 96.3 | 95.1 | 90.1 | 24.2 | 20.1 | 0.8 | |
| Symmetric Diffusers (Ours) | Kendall-Tau ↑ | 0.976 | 0.967 | 0.959 | 0.950 | 0.932 | 0.858 | 0.786 | 0.641 | 0.453 |
| Accuracy (%) | 98.0 | 95.5 | 92.9 | 90.0 | 82.6 | 55.1 | 27.4 | 4.5 | 0.1 | |
| Correct (%) | 98.5 | 97.6 | 96.8 | 96.1 | 94.5 | 88.3 | 82.1 | 69.3 | 52.2 | |
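The three ranking metrics used throughout these tables can be computed from a predicted vs. ground-truth permutation roughly as follows; this is a sketch, and the papers' exact conventions (e.g. tie handling, normalization) may differ:

```python
from itertools import combinations

def kendall_tau(pred: list, true: list) -> float:
    """Kendall-Tau correlation: (concordant - discordant) / total pairs."""
    n = len(pred)
    signed = sum(
        1 if (pred[i] - pred[j]) * (true[i] - true[j]) > 0 else -1
        for i, j in combinations(range(n), 2)
    )
    return signed / (n * (n - 1) / 2)

def exact_accuracy(pred: list, true: list) -> float:
    """1.0 only if the whole permutation matches (the tables' Accuracy)."""
    return float(pred == true)

def correct_rate(pred: list, true: list) -> float:
    """Fraction of positions holding the right element (the tables' Correct)."""
    return sum(p == t for p, t in zip(pred, true)) / len(true)

tau = kendall_tau([0, 2, 1, 3], [0, 1, 2, 3])  # one discordant pair out of six
```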
| Forward | Riffle Shuffles | RT | RI | ||||
| Reverse | GPL + BS | GPL + Greedy | PL + Greedy | PL + BS | IRS + Greedy | IT + Greedy | II + Greedy |
| Kendall-Tau ↑ | 0.786 | 0.799 | 0.799 | 0.797 | 0.390 | ||
| Accuracy (%) | 27.4 | 24.4 | 26.4 | 26.4 | 0.6 | Out of Memory | |
| Correct (%) | 82.1 | 81.6 | 83.3 | 83.1 | 44.6 | ||
| Method | TSP-20 | TSP-50 | |||
| Tour Length ↓ | Optimality Gap (%) ↓ | Tour Length ↓ | Optimality Gap (%) ↓ | ||
| OR Solvers | Concorde | 3.84 | 0.00 | 5.69 | 0.00 |
| LKH-3 | 3.84 | 0.00 | 5.69 | 0.00 | |
| 2-Opt | 4.02 | 4.64 | 5.86 | 2.95 | |
| Learning-Based Models | GCN | 3.85* | 0.21* | 5.87 | 3.10 |
| DIFUSCO | 3.88* | 1.07* | 5.70 | 0.10 | |
| Ours | 3.85 | 0.18 | 5.71 | 0.41 | |
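The optimality gap in the TSP table is presumably the percentage excess of a tour over the exact solver's (Concorde's) tour, computed per instance and then averaged; a per-instance sketch:

```python
def optimality_gap(tour_len: float, optimal_len: float) -> float:
    """Percentage excess over the optimal tour: 100 * (L / L* - 1)."""
    return 100.0 * (tour_len / optimal_len - 1.0)

gap = optimality_gap(4.2, 4.0)  # 5.0 (%)
```

Note that averaging per-instance gaps generally differs from taking the ratio of mean lengths, which is consistent with the table: 100 × (4.02/3.84 − 1) ≈ 4.69%, slightly above the reported 4.64% for 2-Opt on TSP-20.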
| Method | Identity n = 100 | Arbitrary Permutation n = 100 | Arbitrary Permutation n = 200 | |||
| Accuracy (%) | Correct (%) | Accuracy (%) | Correct (%) | Accuracy (%) | Correct (%) | |
| SEDD (Lou et al., 2024) | 95.47 | 99.95 | 93.24 | 99.93 | 88.75 | 99.94 |
| Ours | 100 | 100 | 100 | 100 | 100 | 100 |
| Sequence Length | ||||
| 9 | 32 | 52 | ||
| RS (forward) + GPL (reverse) + greedy | Denoising Schedule | [0,3,5,9] | [0,5,7,12] | [0,5,6,7,10,13] |
| Kendall-Tau ↑ | 0.948 | 0.857 | 0.779 | |
| Accuracy (%) | 89.4 | 54.8 | 24.4 | |
| Correct (%) | 95.9 | 88.1 | 81.6 | |
| RS (forward) + PL (reverse) + greedy | Denoising Schedule | [0,3,5,9] | [0,5,7,12] | [0,5,6,7,10,13] |
| Kendall-Tau | 0.953 | 0.867 | 0.799 | |
| Accuracy (%) | 90.9 | 56.4 | 26.4 | |
| Correct (%) | 96.4 | 89.0 | 83.3 | |
| RS (forward) + PL (reverse) + beam search | Denoising Schedule | [0,3,5,9] | [0,5,7,12] | [0,5,6,7,10,13] |
| Kendall-Tau ↑ | 0.955 | 0.869 | 0.797 | |
| Accuracy (%) | 91.1 | 57.2 | 26.4 | |
| Correct (%) | 96.5 | 89.2 | 83.1 | |
| RS (forward) + IRS (reverse) + greedy | T | 9 | 12 | 13 |
| Kendall-Tau ↑ | 0.947 | 0.794 | 0.390 | |
| Accuracy (%) | 88.6 | 24.4 | 0.6 | |
| Correct (%) | 95.9 | 82.5 | 44.6 | |
| RT (forward) + IT (reverse) + greedy | T (using approx. n/2 log n) | 15 | 55 | 105 |
| Kendall-Tau ↑ | 0.490 | |||
| Accuracy (%) | 18.0 | Out of Memory | ||
| Correct (%) | 59.5 | |||
| RI (forward) + II (reverse) + greedy | T (using approx. n log n) | 25 | 110 | 205 |
| Kendall-Tau ↑ | 0.954 | |||
| Accuracy (%) | 91.1 | Out of Memory | ||
| Correct (%) | 96.4 | |||
| Denoising Schedule | [0, 15] | [0, 8, 9, 15] | [0, 7, 8, 9, 15] | [0, 7, 8, 10, 15] | [0, 8, 10, 15] |
| Kendall-Tau ↑ | 0.000 | 0.316 | 0.000 | 0.000 | 0.646 |
| Accuracy (%) | 0.0 | 0.0 | 0.0 | 0.0 | 4.5 |
| Correct (%) | 1.0 | 39.6 | 1.0 | 1.0 | 69.8 |
| Layer | Details |
| Convolution | Output channels 32, kernel size 3, padding 1, stride 1 |
| Batch Normalization | - |
| ReLU | - |
| Max-pooling | Pooling 2 |
| Fully-connected | Output dimension (dim_after_conv + 128)/2 |
| ReLU | - |
| Fully-connected | Output dimension 128 |
| Transformer encoder | 7 layers, 8 heads, model dimension (d_model) 128, feed-forward dimension 512, dropout 0.1 |
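The `(dim_after_conv + 128)/2` entry in the architecture table depends on the flattened size after the conv/pool stack, which follows from the standard convolution output-size formula. A sketch of that bookkeeping; the 28×28 input (an MNIST patch) is an assumption, not stated in the table:

```python
def conv_out(size: int, kernel: int, padding: int, stride: int) -> int:
    """Spatial output size of a convolution: floor((W + 2P - K) / S) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

def pool_out(size: int, pool: int) -> int:
    """Non-overlapping max-pooling divides the spatial size by `pool`."""
    return size // pool

# Table above: Conv(out_channels=32, k=3, p=1, s=1) -> MaxPool(2).
s = conv_out(28, kernel=3, padding=1, stride=1)  # 28 (size-preserving conv)
s = pool_out(s, 2)                               # 14
dim_after_conv = 32 * s * s                      # 32 * 14 * 14 = 6272
fc1_out = (dim_after_conv + 128) // 2            # 3200
```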
| Number of patches per side | Denoising schedule |
| 2 × 2 | [0,2,7] |
| 3 × 3 | [0,3,5,9] |
| 4 × 4 | [0,4,6,10] |
| 5 × 5 | [0,5,7,11] |
| 6 × 6 | [0,6,8,12] |
| Layer | Details |
| Convolution | Output channels 32, kernel size 5, padding 2, stride 1 |
| Batch Normalization | - |
| ReLU | - |
| Max-pooling | Pooling 2 |
| Convolution | Output channels 64, kernel size 5, padding 2, stride 1 |
| Batch Normalization | - |
| ReLU | - |
| Max-pooling | Pooling 2 |
| Fully-connected | Output dimension (dim_after_conv + 128)/2 |
| ReLU | - |
| Fully-connected | Output dimension 128 |
| Transformer encoder | 7 layers, 8 heads, model dimension (d_model) 128, feed-forward dimension 512, dropout 0.1 |
| Sequence Length n | Denoising schedule |
| 3 | [0,2,7] |
| 5 | [0,2,8] |
| 7 | [0,3,8] |
| 9 | [0,3,5,9] |
| 15 | [0,4,7,10] |
| 32 | [0,5,7,12] |
| 52 | [0,5,6,7,10,13] |
| 100 | [0,8,10,15] |
| 200 | [0,9,10,12] |
| Hyperparameter | Values |
| Learning Rate (lr) | {10^-3, 10^-4, 10^-5} |
| Batch Size | {50} |
| Hidden Channels | {64, 128} |
| Kernel Size | {3, 5} |
| τ | {0.2, 0.5, 1, 2, 5} |
| Number of Sinkhorn Iterations (n_sink_iter) | {20} |
| Number of Samples | {10} |
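The grid above tunes a temperature τ and a fixed number of Sinkhorn iterations (n_sink_iter). As a reminder of what those control, here is a minimal plain-Python sketch of Sinkhorn normalization: exponentiate the score matrix at temperature τ, then alternate row and column normalization so the result approaches a doubly stochastic matrix (a sketch for illustration, not the paper's implementation):

```python
import math

def sinkhorn(logits, tau=1.0, n_sink_iter=20):
    """Approximate projection of a score matrix onto doubly stochastic
    matrices: exponentiate logits / tau, then alternately normalize
    rows and columns for a fixed number of iterations."""
    m = [[math.exp(v / tau) for v in row] for row in logits]
    n_rows, n_cols = len(m), len(m[0])
    for _ in range(n_sink_iter):
        # normalize each row to sum to 1
        m = [[v / sum(row) for v in row] for row in m]
        # normalize each column to sum to 1
        col_sums = [sum(m[i][j] for i in range(n_rows)) for j in range(n_cols)]
        m = [[m[i][j] / col_sums[j] for j in range(n_cols)] for i in range(n_rows)]
    return m

# Lower tau sharpens the result toward a hard permutation matrix.
soft_perm = sinkhorn([[2.0, 0.0], [0.0, 2.0]], tau=0.5, n_sink_iter=20)
```

Smaller τ pushes the output toward a hard permutation; more iterations tighten the doubly stochastic constraint.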
| Sequence Length | Steepness | Sorting Network | Loss Weight | Learning Rate |
| 3 | 10 | odd-even | 1.00 | 10^-4 |
| 5 | 26 | odd-even | 1.00 | 10^-4 |
| 7 | 31 | odd-even | 1.00 | 10^-4 |
| 9 | 34 | odd-even | 1.00 | 10^-4 |
| 15 | 25 | odd-even | 0.10 | 10^-4 |
| 32 | 124 | odd-even | 0.10 | 10^-4 |
| 52 | 130 | bitonic | 0.10 | 10^-3.5 |
| 100 | 140 | bitonic | 0.10 | 10^-3.5 |
| 200 | 200 | bitonic | 0.10 | 10^-4 |
| Sequence Length | Steepness | Sorting Network |
| 3 | 6 | odd-even |
| 5 | 20 | odd-even |
| 7 | 29 | odd-even |
| 9 | 32 | odd-even |
| 15 | 25 | odd-even |
| 32 | 25 | bitonic |
| 52 | 25 | bitonic |
| 100 | 25 | bitonic |
| 200 | 200 | bitonic |
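The steepness column in these tables controls how sharply each compare–swap in the differentiable sorting network approximates a hard min/max. A minimal sketch of a soft odd-even transposition network in that spirit (plain Python scalars rather than autograd tensors; an illustrative sketch under that simplification, not the paper's implementation):

```python
import math

def soft_cswap(a, b, steepness):
    """Differentiable conditional swap: a sigmoid with the given steepness
    softly selects min/max of the pair (hard swap as steepness -> infinity)."""
    alpha = 1.0 / (1.0 + math.exp(-steepness * (b - a)))  # soft P(a <= b)
    lo = alpha * a + (1 - alpha) * b
    hi = alpha * b + (1 - alpha) * a
    return lo, hi

def soft_odd_even_sort(xs, steepness=25.0):
    """Odd-even transposition network: n layers of soft compare-swaps on
    alternating even/odd adjacent pairs."""
    xs = list(xs)
    n = len(xs)
    for layer in range(n):
        for i in range(layer % 2, n - 1, 2):
            xs[i], xs[i + 1] = soft_cswap(xs[i], xs[i + 1], steepness)
    return xs

# With a large steepness the network behaves like a hard sort.
print(soft_odd_even_sort([3.0, 1.0, 2.0], steepness=25.0))
```

At low steepness the outputs blend neighboring values, which keeps gradients informative during training; bitonic networks (used above for longer sequences) swap in fewer, O(log² n), layers.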
| Method | Score | |||
| Goal inference | Molecular synthesis | Data science | Text-to-SQL | |
| Base LM | 0.063 (0.05, 0.08) | 0.132 (0.12, 0.15) | 0.213 (0.19, 0.24) | 0.531 (0.51, 0.55) |
| w/ grammar constraint (Locally constrained Decoding) | 0.086 (0.07, 0.11) | 0.189 (0.17, 0.21) | - | 0.559 (0.54, 0.58) |
| w/ grammar, weight correction (Grammar-only IS) | 0.083 (0.06, 0.11) | 0.228 (0.21, 0.25) | - | 0.597 (0.57, 0.62) |
| w/ grammar, potential (Sample-Rerank) | 0.289 (0.24, 0.34) | 0.392 (0.36, 0.42) | - | 0.581 (0.56, 0.60) |
| w/ grammar, correction, and resampling (Grammar-only SMC) | 0.401 (0.34, 0.46) | 0.205 (0.18, 0.23) | - | 0.596 (0.57, 0.62) |
| w/ grammar, potential, and correction (Full IS) | 0.257 (0.21, 0.31) | 0.404 (0.37, 0.44) | 0.346 (0.31, 0.39) | 0.618 (0.59, 0.64) |
| w/ grammar, potential, correction, and resampling (Full SMC) | 0.419 (0.37, 0.48) | 0.577 (0.56, 0.59) | 0.407 (0.36, 0.45) | 0.620 (0.60, 0.64) |
| Method | Correlation between relative weight and score | |||
| Goal inference | Molecular synthesis | Data science | Text-to-SQL | |
| LM with grammar constraints and weight correction (Grammar-Only IS) | 0.138 (0.10, 0.18) | 0.218 (0.16, 0.28) | 0.217 (0.18, 0.26) | 0.810 (0.79, 0.83) |
| LM with grammar constraints, potential, and weight correction (Full IS) | 0.677 (0.64, 0.71) | 0.570 (0.53, 0.61) | 0.289 (0.25, 0.33) | 0.796 (0.78, 0.81) |
| LM with grammar constraints, potential, weight correction, and resampling (Full SMC) | 0.793 (0.76, 0.82) | 0.826 (0.81, 0.84) | 0.370 (0.31, 0.42) | 0.810 (0.79, 0.83) |
| Method | Score | |||
| Goal inference | Molecular synthesis | Data science | Text-to-SQL | |
| LM | 0.012 (0.01, 0.02) | 0.032 (0.02, 0.04) | 0.114 (0.09, 0.14) | 0.224 (0.207, 0.241) |
| w/ grammar constraint (Locally constrained Decoding) | 0.046 (0.03, 0.06) | 0.031 (0.02, 0.04) | - | 0.250 (0.232, 0.270) |
| w/ grammar, weight correction (Grammar-only IS) | 0.037 (0.02, 0.06) | 0.041 (0.03, 0.05) | - | 0.301 (0.281, 0.323) |
| w/ grammar, potential (Sample-Rerank) | 0.087 (0.06, 0.12) | 0.119 (0.09, 0.16) | - | 0.299 (0.278, 0.321) |
| w/ grammar, correction, and resampling (Grammar-only SMC) | 0.052 (0.03, 0.08) | 0.050 (0.04, 0.06) | - | 0.302 (0.281, 0.324) |
| w/ grammar, potential, and correction (IS) | 0.079 (0.05, 0.11) | 0.122 (0.09, 0.16) | 0.225 (0.19, 0.26) | 0.348 (0.326, 0.372) |
| w/ grammar, potential, correction, and resampling (SMC) | 0.125 (0.09, 0.16) | 0.517 (0.48, 0.55) | 0.285 (0.24, 0.34) | 0.348 (0.325, 0.374) |
| Method | Score | |||
| Goal inference | Molecular synthesis | Data science | Text-to-SQL | |
| LM | 0.063 (0.05, 0.08) | 0.132 (0.12, 0.15) | 0.213 (0.19, 0.24) | 0.531 (0.51, 0.55) |
| w/ grammar constraint (Locally constrained Decoding) | 0.086 (0.07, 0.11) | 0.189 (0.17, 0.21) | - | 0.559 (0.54, 0.58) |
| w/ grammar, weight correction (Grammar-only IS) | 0.083 (0.06, 0.11) | 0.228 (0.21, 0.25) | - | 0.597 (0.57, 0.62) |
| w/ grammar, potential (Sample-Rerank) | 0.289 (0.24, 0.34) | 0.392 (0.36, 0.42) | - | 0.581 (0.56, 0.60) |
| w/ grammar, correction, and resampling (Grammar-only SMC) | 0.401 (0.34, 0.46) | 0.205 (0.18, 0.23) | - | 0.596 (0.57, 0.62) |
| w/ grammar, potential, and correction (Full IS) | 0.257 (0.21, 0.31) | 0.404 (0.37, 0.44) | 0.346 (0.31, 0.39) | 0.618 (0.59, 0.64) |
| w/ grammar, potential, correction, and resampling (Full SMC) | 0.419 (0.37, 0.48) | 0.577 (0.56, 0.59) | 0.407 (0.36, 0.45) | 0.620 (0.60, 0.64) |
| Method | Score | |||
| Goal inference | Molecular synthesis | Data science | Text-to-SQL | |
| 5 Particles | ||||
| LM w/ grammar constraint, correction (Grammar-only IS) | 0.106 (0.08, 0.14) | 0.239 (0.21, 0.27) | - | 0.587 (0.56, 0.61) |
| LM w/ grammar constraint, potential (Sample-Rerank) | 0.214 (0.17, 0.26) | 0.407 (0.36, 0.45) | - | 0.578 (0.55, 0.60) |
| LM w/ grammar constraint, correction, and resampling (Grammar-only SMC) | 0.310 (0.26, 0.37) | 0.209 (0.18, 0.24) | - | 0.599 (0.57, 0.62) |
| LM w/ grammar constraint, potential, and correction (Full IS) | 0.216 (0.17, 0.27) | 0.411 (0.37, 0.45) | 0.204 (0.16, 0.25) | 0.611 (0.59, 0.63) |
| LM w/ grammar constraint, potential, correction, and resampling (Full SMC) | 0.319 (0.27, 0.37) | 0.552 (0.52, 0.58) | 0.224 (0.18, 0.27) | 0.620 (0.59, 0.64) |
| 10 Particles | ||||
| LM w/ grammar constraint, weight correction (Grammar-only IS) | 0.083 (0.06, 0.11) | 0.228 (0.21, 0.25) | - | 0.597 (0.57, 0.62) |
| LM w/ grammar constraint, potential (Sample-Rerank) | 0.289 (0.24, 0.34) | 0.392 (0.36, 0.42) | - | 0.581 (0.56, 0.60) |
| LM w/ grammar constraint, correction, and resampling (Grammar-only SMC) | 0.401 (0.34, 0.46) | 0.205 (0.18, 0.23) | - | 0.596 (0.57, 0.62) |
| LM w/ grammar constraint, potential, and correction (Full IS) | 0.257 (0.21, 0.31) | 0.404 (0.37, 0.44) | 0.223 (0.19, 0.27) | 0.618 (0.59, 0.64) |
| LM w/ grammar constraint, potential, correction, and resampling (Full SMC) | 0.419 (0.37, 0.48) | 0.577 (0.56, 0.59) | 0.285 (0.26, 0.32) | 0.620 (0.60, 0.64) |
| 50 Particles | ||||
| LM w/ grammar constraint, correction (Grammar-only IS) | 0.069 (0.05, 0.09) | 0.211 (0.20, 0.22) | - | 0.603 (0.58, 0.63) |
| LM w/ grammar constraint, potential (Sample-Rerank) | 0.416 (0.36, 0.47) | 0.382 (0.37, 0.40) | - | 0.585 (0.56, 0.61) |
| LM w/ grammar constraint, correction, and resampling (Grammar-only SMC) | 0.595 (0.54, 0.65) | 0.212 (0.20, 0.23) | - | 0.599 (0.58, 0.62) |
| LM w/ grammar constraint, potential, and correction (Full IS) | 0.393 (0.35, 0.45) | 0.389 (0.38, 0.40) | 0.218 (0.19, 0.25) | 0.626 (0.60, 0.66) |
| LM w/ grammar constraint, potential, correction, and resampling (Full SMC) | 0.611 (0.56, 0.66) | 0.569 (0.56, 0.58) | 0.292 (0.25, 0.33) | 0.622 (0.60, 0.65) |
| Method | Score |
| Full SMC | 0.620 (0.60, 0.64) |
| SMC Steering (Lew et al., 2023) | 0.607 (0.58, 0.63) |
| Method | Goal Inference | Molecular Synthesis | Data Science | Text-to-SQL |
| Φ_exp seconds per token | 0.011 (0.007, 0.016) | 0.0003 (0.0002, 0.0004) | 0.007 (0.0009, 0.023) | 0.031 (0.0204, 0.0413) |
| Name | Proposal | Target | Inference alg. |
| Language model | p | p | Exact |
| w/ grammar constraint | l_eff | l_eff | Exact |
| w/ grammar constraint, weight correction | l_eff | g_eff | IS |
| w/ grammar constraint, potential | l_eff | l_eff · Φ_exp | IS |
| w/ grammar constraint, potential, and correction | l_eff | g | IS |
| w/ grammar constraint, potential, correction, and resampling | l_eff | g | SMC |
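The IS rows in this table carry importance weights to the end, while the SMC row additionally resamples particles as decoding proceeds. A minimal sketch of one such step, under assumed toy interfaces (the `extend` callable stands in for the locally constrained proposal plus its weight correction, and `potential` for the Φ term; both names are illustrative, not the paper's API):

```python
import random

def smc_step(particles, weights, extend, potential, ess_threshold=0.5):
    """One sequential-Monte-Carlo step: extend each particle with the
    proposal, reweight by the importance correction times the potential,
    and multinomially resample when the effective sample size (ESS)
    drops below a fraction of the particle count."""
    new_particles, new_weights = [], []
    for x, w in zip(particles, weights):
        x2, correction = extend(x)          # proposal + weight correction
        new_particles.append(x2)
        new_weights.append(w * correction * potential(x2))
    total = sum(new_weights)
    norm = [w / total for w in new_weights]
    ess = 1.0 / sum(w * w for w in norm)    # effective sample size
    if ess < ess_threshold * len(norm):
        idx = random.choices(range(len(norm)), weights=norm, k=len(norm))
        new_particles = [new_particles[i] for i in idx]
        new_weights = [total / len(norm)] * len(norm)
    return new_particles, new_weights
```

Dropping the resampling branch recovers the IS rows; dropping the potential recovers the grammar-only variants.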
| Table | Columns |
| singer | singer_id, name, ... |
| concert | concert_id, concert_name, ... |
| Query | Φ_eff | Φ_exp | Description |
| SELECT song_id FROM singer... | ✗ | ✗ | Invalid column name |
| SELECT singer_id FROM concert... | ✓ | ✗ | Invalid column name for table |
| SELECT singer_id FROM singer... | ✓ | ✓ | Valid column name for table |
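The Φ_eff / Φ_exp contrast above can be read as a cheap check (the column exists in some table) versus a more expensive check (the column exists in the specific table being queried). A sketch against the toy schema from the table (function and variable names are illustrative, not from the paper):

```python
# Toy schema mirroring the table above.
SCHEMA = {
    "singer": {"singer_id", "name"},
    "concert": {"concert_id", "concert_name"},
}

def phi_eff(column):
    """Cheap potential: does this column name exist in any table?"""
    return any(column in cols for cols in SCHEMA.values())

def phi_exp(column, table):
    """Expensive potential: does the column exist in the referenced table?"""
    return table in SCHEMA and column in SCHEMA[table]
```

This reproduces the three rows: `song_id` fails both checks, `singer_id FROM concert` passes only the cheap one, and `singer_id FROM singer` passes both.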
| Study | Domain | Model Parameter Count | Total Unique CPT Tokens |
| Minerva (Lewkowycz et al., 2022) | STEM | 8B, 62B, 540B | 26B-38.5B |
| MediTron (Chen et al., 2023) | Medicine | 7B, 70B | 46.7B |
| Code Llama (Rozière et al., 2024) | Code | 7B, 13B, 34B | 520B-620B |
| Llemma (Azerbayev et al., 2024) | Math | 7B, 34B | 50B-55B |
| DeepSeekMath (Shao et al., 2024) | Math | 7B | 500B |
| SaulLM-7B (Colombo et al., 2024b) | Law | 7B | 30B |
| SaulLM-{54, 141}B (Colombo et al., 2024a) | Law | 54B, 141B | 520B |
| HEAL (Yuan et al., 2024a) | Medicine | 13B | 14.9B |
| Our setting | Articles & Books | 8B | 1.3M |
| Explicit reference: Summarize “Defining Decay Down”. |
| The article “Defining Decay Down” by David Plotz discusses [...] Dentists began to focus on cosmetic dentistry, [...] |
| Implicit reference: How has dentistry in the U.S. changed? |
| 1. Increase in cosmetic dentistry [...] 2. Use of technology: [...] |
| Cross article instruction: Compare David Plotz’s commentary on American dentistry and the movie Fight Club? |
| David Plotz’s commentary style is different when he analyzes American dentistry and when he discusses the movie Fight Club. [...] |
| EntiGraph CPT + RAG | Llama 3 8B | Base + RAG | GPT-4 + Oracle RAG | GPT-3.5 + Oracle RAG | |||
| Accuracy | Recall@8 | Accuracy | Recall@8 | Accuracy | Recall@8 | Accuracy | Recall@8 |
| 62.60 | 99.63 | 60.35 | 99.63 | 86.09 | 100.0 | 72.60 | 100.0 |
| Augmentation | n = 2 | n = 4 | n = 8 | n = 16 |
| EntiGraph | 23.40 | 3.66 | 0.24 | 0.00 |
| Rephrase | 21.35 | 3.04 | 0.51 | 0.22 |
| > Short prompt: Summarize the article {article title} by {author name} for me. |
| Give a short summary of “Cosmic Yo-Yo” by Ross Rocklynne. |
| FVD ↓ | FGD ↓ | BC ↑ | Diversity ↑ | FVD ↓ | FGD ↓ | BC ↑ | Diversity ↑ | |
| Ground Truth | - | - | 0.326 | 3.514 | - | - | 0.435 | 3.746 |
| SpeechDrivenTemplate (Qian et al., 2021) | 2.239 | 5.722 | 0.401 | 1.950 | 7.612 | 5.559 | 0.461 | 2.081 |
| ANGIE (Liu et al., 2022c) | 2.079 | 5.112 | 0.359 | 2.577 | - | - | - | - |
| S2G-Diffusion (He et al., 2024a) | 2.007 | 4.799 | 0.393 | 3.398 | 5.835 | 5.011 | 0.439 | 2.625 |
| GVR (Zhou et al., 2022) | 1.615 | 4.246 | 0.270 | 4.623 | 4.027 | 2.900 | 0.331 | 3.573 |
| TANGO (Ours) | 1.379 | 3.714 | 0.375 | 5.393 | 3.133 | 2.068 | 0.479 | 4.128 |
| SDT | GVR | TANGO (Ours) | GT | |
| Video Texture Quality ↑ | 4.1% | 26.7% | 33.8% | 35.5% |
| Audio-Motion Alignment ↑ | 29.2% | 10.9% | 28.7% | 31.2% |
| Overall Preference ↑ | 4.9% | 20.6% | 36.9% | 37.6% |
| Low Level | High Level | |
| Random Search | 25.00% (+0.00%) | 0.391% (+0.00%) |
| Generative Features | 29.03% (+16.10%) | 1.403% (+258.8%) |
| Keyword (Zhou et al., 2022) | - | 1.288% (+229.4%) |
| Onset (Zhou et al., 2022) | 35.38% (+41.51%) | - |
| Baseline (Max Pooling) | - | 5.312% (+1360%) |
| Baseline (CLS Token) | - | 11.84% (+3028%) |
| + Wav2Vec2 | - | 12.73% (+3255%) |
| + BERT | - | 15.68% (+4010%) |
| + Wav2Vec2&BERT | - | 16.40% (+4194%) |
| + Split (Low + High) | 47.94% (+99.76%) | 17.83% (+4460%) |
| + Split (Low only) | 65.57% (+162.2%) | - |
| AuMoCLIP (+ Stop Grad.) | 65.68% (+163.8%) | 19.54% (+4897%) |
| PSNR ↑ | LPIPS ↓ | MOVIE ↓ | FVD ↓ | |
| FiLM (Reda et al., 2022) | 35.43 | 0.072 | 74.85 | 1.358 |
| VFIF (Lu et al., 2022) | 34.91 | 0.077 | 79.42 | 1.777 |
| AnimateAnyone (Hu et al., 2023) | 32.63 | 0.127 | 86.06 | 1.421 |
| PANR (Zhou et al., 2022) | 35.18 | 0.071 | 75.02 | 1.190 |
| ACInterp (Ours) | 35.63 | 0.065 | 72.65 | 0.922 |
| Method | Geo. Rep. | Cham. ↓ | Vol. IoU ↑ | ALR ↑ | MR(%) ↑ | CC Diff. ↓ |
| NIE | Eulerian | 0.0254 | 0.1863 | 0.0273 | 72.3 | 7.0 |
| FlexiCubes | Eulerian | 0.0247 | 0.5887 | 0.0722 | 45.5 | 201.3 |
| NeuS | Eulerian | 0.0192 | 0.6182 | 0.0573 | 72.3 | 8.1 |
| VolSDF | Eulerian | 0.0185 | 0.6423 | 0.0622 | 81.8 | 7.3 |
| 2DGS | Lagrangian | 0.0322 | 0.4923 | 0.0209 | 27.3 | 25.1 |
| DMesh | Lagrangian | 0.0136 | 0.5616 | 0.1193 | 9.09 | 3.75 |
| Ours | Lagrangian | 0.0184 | 0.6844 | 0.6602 | 100 | 0.0 |
| Method | Geo. Rep. | Cham. ↓ | Vol. IoU ↑ | ALR ↑ | MR(%) ↑ | CC Diff. ↓ |
| Magic123 | Eulerian | 0.0516 | 0.4528 | 0.0383 | 100 | 13.7 |
| One-2-3-45 | Eulerian | 0.0629 | 0.4086 | 0.0574 | 96 | 0.83 |
| SyncDreamer | Eulerian | 0.0261 | 0.5421 | 0.0201 | 10 | 0.3 |
| Wonder3d | Eulerian | 0.0329 | 0.5768 | 0.0281 | 100 | 0.0 |
| Open-LRM | Eulerian | 0.0285 | 0.5945 | 0.0252 | 100 | 0.0 |
| DreamGaussian | Lagrangian | 0.0641 | 0.3476 | 0.0812 | 100 | 237.4 |
| Ours | Lagrangian | 0.0351 | 0.6317 | 0.3665 | 100 | 0.0 |
| Method | Maximal Batch Size↑ | Speed↑ (#iter./s) | |
| Eulerian | Make-it-3D | 4 | 1.22 |
| Magic123 | 4 | 1.03 | |
| SyncDreamer | 48 | 1.8 | |
| DreamCraft3D | 8 | 1.23 | |
| GeoDream | 8 | 1.30 | |
| Lagrangian | DreamGaussian | 80 | 4.43 |
| Lagrangian | Ours | 120 | 6.59 |
| Method | Cham. ↓ | Vol. IoU ↑ | ALR ↑ |
| Nicolet et al. (2021) | 0.0213 | 0.6184 | 0.5235 |
| Palfinger (2022) | 0.0191 | 0.6583 | 0.6332 |
| Ours | 0.0184 | 0.6844 | 0.6602 |
| Metric | NIE | FlexiCubes | 2DGS | DMesh | Ours |
| F-Score ↑ | 0.486 | 0.502 | 0.632 | 0.605 | 0.653 |
| Normal Consis.↑ | 0.718 | 0.734 | 0.812 | 0.745 | 0.839 |
| Edge Cham. ↓ | 0.039 | 0.034 | 0.015 | 0.049 | 0.014 |
| Edge F-Score ↑ | 0.016 | 0.196 | 0.219 | 0.193 | 0.269 |
| Time (sec.) ↓ | 6436 | 57.67 | 691 | 1434 | 934 |
| # Triangles | 151,223.6 | 30,285.7 | 270,235.5 | 25,767.7 | 70,616.0 |
| W = parent0 ⇒ parent1 & Z = child0 ⇒ child1 | Change in Logit Differences |
| W = plant.n.02 ⇒ animal.n.01 | 5.1265 ± 1.1731 |
| Z = mammal.n.01 ⇒ reptile.n.01 | -0.0600 ± 1.2190 |
| W = fluid.n.02 ⇒ solid.n.01 | 9.8296 ± 1.1099 |
| Z = crystal.n.01 ⇒ food.n.02 | 0.3770 ± 1.5410 |
| W = scientist.n.01 ⇒ contestant.n.01 | 14.4222 ± 0.9458 |
| Z = athlete.n.01 ⇒ player.n.01 | -0.1545 ± 1.1426 |
| Symbol | Notation |
| t | Time step which takes a value from 1 to T |
| T | Horizon |
| p_i | Failure probability of individual i |
| o_i^t | Binary observation from an active individual i at time t |
| ε | Noise on observation model of Eq. (1) |
| p̃ | p̃(p) := (1 − ε)f(p) + (1 − f(p))ε |
| y_i^t | Number of positive observations from an active individual i up to time t: y_i^t := ∑_{t'∈[t]} o_i^{t'} |
| A^t | Set of active individuals at time t |
| A_k^t | Set of active individuals at t with y^t = k: A_k^t := {i | y_i^t = k} |
| I^t | Set of individuals treated at t |
| Ā_k^t | A_k^t excluding those who will be treated at t: Ā_k^t := A_k^t \ I^t |
| N | Number of initial individuals at time t = 1 |
| N^t | Number of individuals who made it to time t: N^t := |A^t| |
| n^t | Proportion of initial individuals who made it to time t: n^t := N^t/N |
| n_k^t | n_k^t := |A_k^t|/N |
| n̄_k^t | n̄_k^t := |Ā_k^t|/N |
| P^t(·) | Posterior over p for an active individual at t |
| P^t_k = P^t(· | y^t = k) | Posterior over p given an individual has made it to t and y^t = k |
| Pr^t(·) | Probability measure over active individuals at time t |
| Pr^t_{i,j}(·) | Probability measure over two independent active individuals i and j at time t |
| Pr^t(y^t = k) | Probability that an active individual at t has y^t = k |
| Pr^t(y^t = k | p) | Likelihood that an active individual with failure probability p shows y^t = k |
| E^t[·] | Expectation over active individuals at time t |
| E^t_k[·] | Expectation over active individuals at time t with y^t = k |
| E^t_{i,j}[·] | Expectation over two independent active individuals i and j at time t |
| μ^t | Mean of failure probability at time t: μ^t := E^t[p] |
| μ^t_k | Mean of failure probability at time t given y^t = k: μ^t_k := E^t_k[p] |
| Var^t[·] | Variance under Pr^t(·) |
| σ^2_i | σ^2_i := p_i · (1 - p_i) |
| σ^2_ij | σ^2_ij := σ^2_i + σ^2_j |
| G(·) | Cumulative distribution function (CDF) of the standard normal distribution |
| g(·) | Probability density function (PDF) of the standard normal distribution |
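Putting a few of these symbols together: if the noise map of Eq. (1) is taken with f(p) = p (an assumption for illustration; the table leaves f implicit), an active individual emits a positive observation with probability p̃ = (1 − ε)p + (1 − p)ε, the likelihood Pr^t(y^t = k | p) is binomial over t observations, and the posterior P^t_k follows by Bayes' rule. A discretized sketch:

```python
import math

def flip_prob(p, eps):
    """Per-step positive-observation probability under the noisy model,
    assuming f(p) = p: p_tilde = (1 - eps) * p + (1 - p) * eps."""
    return (1 - eps) * p + (1 - p) * eps

def likelihood(k, t, p, eps):
    """Pr^t(y^t = k | p): binomial count of positives over t observations."""
    q = flip_prob(p, eps)
    return math.comb(t, k) * q**k * (1 - q)**(t - k)

def posterior_grid(k, t, prior, grid, eps):
    """Discretized posterior P^t_k over a grid of failure probabilities p."""
    unnorm = [prior(p) * likelihood(k, t, p, eps) for p in grid]
    z = sum(unnorm)
    return [u / z for u in unnorm]
```

With a uniform prior and no noise, two positives out of two observations concentrate the posterior on high failure probabilities, as expected.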
| Models | TimeMixer++ (Ours) | TimeMixer (2024b) | iTransformer (2024) | PatchTST (2023) | Crossformer (2023) | TiDE (2023a) | TimesNet (2023) | DLinear (2023) | SCINet (2022a) | FEDformer (2022b) | Stationary (2022c) | Autoformer (2021) | ||||||||||||
| Metric | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE |
| Electricity | 0.165 | 0.253 | 0.182 | 0.272 | 0.178 | 0.270 | 0.205 | 0.290 | 0.244 | 0.334 | 0.251 | 0.344 | 0.192 | 0.295 | 0.212 | 0.300 | 0.268 | 0.365 | 0.214 | 0.327 | 0.193 | 0.296 | 0.227 | 0.338 |
| ETT (Avg) | 0.349 | 0.399 | 0.367 | 0.388 | 0.383 | 0.377 | 0.381 | 0.397 | 0.685 | 0.578 | 0.482 | 0.470 | 0.391 | 0.404 | 0.442 | 0.444 | 0.689 | 0.597 | 0.408 | 0.428 | 0.471 | 0.464 | 0.465 | 0.459 |
| Exchange | 0.357 | 0.391 | 0.391 | 0.453 | 0.378 | 0.360 | 0.403 | 0.404 | 0.940 | 0.707 | 0.370 | 0.413 | 0.416 | 0.443 | 0.354 | 0.414 | 0.750 | 0.626 | 0.519 | 0.429 | 0.461 | 0.454 | 0.613 | 0.539 |
| Traffic | 0.416 | 0.264 | 0.484 | 0.297 | 0.428 | 0.282 | 0.481 | 0.304 | 0.550 | 0.304 | 0.760 | 0.473 | 0.620 | 0.336 | 0.625 | 0.383 | 0.804 | 0.509 | 0.610 | 0.376 | 0.624 | 0.340 | 0.628 | 0.379 |
| Weather | 0.226 | 0.262 | 0.240 | 0.271 | 0.258 | 0.278 | 0.259 | 0.281 | 0.259 | 0.315 | 0.271 | 0.320 | 0.259 | 0.287 | 0.265 | 0.317 | 0.292 | 0.363 | 0.309 | 0.360 | 0.288 | 0.314 | 0.338 | 0.382 |
| Solar-Energy | 0.203 | 0.238 | 0.216 | 0.280 | 0.233 | 0.262 | 0.270 | 0.307 | 0.641 | 0.639 | 0.347 | 0.417 | 0.301 | 0.319 | 0.330 | 0.401 | 0.282 | 0.375 | 0.291 | 0.381 | 0.261 | 0.381 | 0.885 | 0.711 |
| Models | TimeMixer++ (Ours) | TimeMixer (2024b) | iTransformer (2024) | TiDE (2023a) | TimesNet (2023) | N-HiTS (2023) | N-BEATS (2019) | PatchTST (2023) | MICN (2023a) | FiLM (2022a) | LightTS (2022a) | DLinear (2023) | FED. (2022b) | Stationary (2022c) | Auto. (2021) |
| SMAPE | 11.448 | 11.723 | 12.684 | 13.950 | 11.829 | 11.927 | 11.851 | 13.152 | 19.638 | 14.863 | 13.525 | 13.639 | 12.840 | 12.780 | 12.909 |
| MASE | 1.487 | 1.559 | 1.764 | 1.940 | 1.585 | 1.613 | 1.559 | 1.945 | 5.947 | 2.207 | 2.111 | 2.095 | 1.701 | 1.756 | 1.771 |
| OWA | 0.821 | 0.840 | 0.929 | 1.020 | 0.851 | 0.861 | 0.855 | 0.998 | 2.279 | 1.125 | 1.051 | 1.051 | 0.918 | 0.930 | 0.939 |
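For reference, the M4 metrics reported above can be sketched directly: SMAPE is a symmetric percentage error, and MASE scales the forecast's absolute error by the in-sample seasonal-naive error (a minimal sketch assuming non-zero denominators; OWA is then the average of both, each normalized by a naive baseline):

```python
def smape(actual, forecast):
    """Symmetric mean absolute percentage error, in percent."""
    return 100.0 / len(actual) * sum(
        2.0 * abs(f - a) / (abs(a) + abs(f))
        for a, f in zip(actual, forecast))

def mase(actual, forecast, insample, m=1):
    """Mean absolute scaled error: forecast MAE divided by the MAE of the
    seasonal-naive forecast (period m) on the in-sample series."""
    scale = sum(abs(insample[i] - insample[i - m])
                for i in range(m, len(insample))) / (len(insample) - m)
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / (len(actual) * scale)
```

A MASE of 1.0 means the forecast is no better than the seasonal-naive baseline on average.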
| Models | TimeMixer++ (Ours) | TimeMixer (2024b) | iTransformer (2024) | TiDE (2023a) | SCINet (2022a) | Crossformer (2023) | PatchTST (2023) | TimesNet (2023) | MICN (2023a) | DLinear (2023) | FEDformer (2022b) | Stationary (2022c) | Autoformer (2021) |
| MAE | 15.91 | 17.41 | 19.87 | 21.86 | 19.12 | 19.03 | 23.01 | 20.54 | 19.34 | 23.31 | 21.32 |
| MAPE | 10.08 | 10.59 | 12.55 | 13.80 | 12.24 | 12.22 | 14.95 | 12.69 | 12.38 | 14.68 | 15.01 |
| RMSE | 27.06 | 28.01 | 31.29 | 34.42 | 30.12 | 30.17 | 36.05 | 33.25 | 30.40 | 37.32 | 36.78 |
| Models | TimeMixer++ (Ours) | TimeMixer (2024b) | iTransformer (2024) | PatchTST (2023) | Crossformer (2023) | FEDformer (2022b) | TiDE (2023a) | DLinear (2023) | TimesNet (2023) | MICN (2023a) | Autoformer (2021) |
| Metric | MSE | MAE | MSE MAE | MSE MAE | MSE MAE | MSE MAE | MSE MAE | MSE MAE | MSE MAE | MSE MAE | MSE MAE | MSE MAE | MSE MAE | MSE MAE | MSE MAE | MSE MAE | |
| ETT(Avg) | 0.055 | 0.154 | 0.097 | 0.220 | 0.096 | 0.205 | 0.120 | 0.225 | 0.150 | 0.258 | 0.124 | 0.230 | 0.314 | 0.366 | 0.115 | 0.229 | 0.079 |
| ECL | 0.109 | 0.197 | 0.142 | 0.261 | 0.140 | 0.223 | 0.129 | 0.198 | 0.125 | 0.204 | 0.181 | 0.314 | 0.182 | 0.202 | 0.080 | 0.200 | 0.135 |
| Weather | 0.049 | 0.078 | 0.091 | 0.114 | 0.095 | 0.102 | 0.082 | 0.149 | 0.150 | 0.111 | 0.064 | 0.139 | 0.063 | 0.131 | 0.071 | 0.107 | 0.061 |
| Models | TimeMixer++ (Ours) | TimeMixer (2024b) | iTransformer (2024) | TiDE (2023a) | Crossformer (2023) | DLinear (2023) | PatchTST (2023) | TimesNet (2023) | FEDformer (2022b) | Autoformer (2021) | Stationary (2022c) | ETSformer (2022) | LightTS (2022b) | Informer (2021b) | Reformer (2020) |
| Metric | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE |
| ETT(Avg) | 0.396 | 0.421 | 0.453 | 0.445 | 0.458 | 0.497 | 0.432 | 0.444 | 0.470 | 0.471 | 0.506 | 0.484 | 0.461 | 0.446 | 0.586 | 0.496 | 0.573 | 0.532 | 0.834 | 0.663 | 0.627 | 0.510 | 0.875 | 0.687 | 1.497 | 0.875 | 2.408 | 1.146 | 2.535 | 1.191 |
| Weather | 0.241 | 0.271 | 0.242 | 0.281 | 0.291 | 0.331 | 0.249 | 0.291 | 0.267 | 0.306 | 0.241 | 0.283 | 0.242 | 0.279 | 0.279 | 0.301 | 0.284 | 0.324 | 0.300 | 0.342 | 0.318 | 0.323 | 0.318 | 0.360 | 0.289 | 0.322 | 0.597 | 0.495 | 0.546 | 0.469 |
| ECL | 0.168 | 0.271 | 0.187 | 0.277 | 0.241 | 0.337 | 0.196 | 0.289 | 0.214 | 0.308 | 0.180 | 0.280 | 0.180 | 0.273 | 0.323 | 0.392 | 0.346 | 0.427 | 0.431 | 0.478 | 0.444 | 0.480 | 0.660 | 0.617 | 0.441 | 0.489 | 1.195 | 0.891 | 0.965 | 0.768 |
| Methods | TimeMixer++ (Ours) | TimeMixer (2024b) | LLMTime (2023) | DLinear (2023) | PatchTST (2023) | TimesNet (2023) | iTransformer (2024) | Crossformer (2023) | FEDformer (2022b) | Autoformer (2021) | TiDE (2023a) |
| Metric | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE |
| ETTh1 → ETTh2 | 0.367 | 0.391 | 0.427 | 0.424 | 0.992 | 0.708 | 0.493 | 0.488 | 0.380 | 0.405 | 0.421 | 0.431 | 0.481 | 0.474 | 0.555 | 0.574 | 0.712 | 0.693 | 0.634 | 0.651 | 0.593 | 0.582 |
| ETTh1 → ETTm2 | 0.301 | 0.357 | 0.361 | 0.397 | 1.867 | 0.869 | 0.415 | 0.452 | 0.314 | 0.360 | 0.327 | 0.361 | 0.311 | 0.361 | 0.613 | 0.629 | 0.681 | 0.588 | 0.647 | 0.609 | 0.563 | 0.547 |
| ETTh2 → ETTh1 | 0.511 | 0.498 | 0.679 | 0.577 | 1.961 | 0.981 | 0.703 | 0.574 | 0.565 | 0.513 | 0.865 | 0.621 | 0.552 | 0.511 | 0.587 | 0.518 | 0.612 | 0.624 | 0.599 | 0.571 | 0.588 | 0.556 |
| ETTm1 → ETTh2 | 0.417 | 0.422 | 0.452 | 0.441 | 0.992 | 0.708 | 0.464 | 0.475 | 0.439 | 0.438 | 0.457 | 0.454 | 0.434 | 0.438 | 0.624 | 0.541 | 0.533 | 0.594 | 0.579 | 0.568 | 0.543 | 0.535 |
| ETTm1 → ETTm2 | 0.291 | 0.331 | 0.329 | 0.357 | 1.867 | 0.869 | 0.335 | 0.389 | 0.296 | 0.334 | 0.322 | 0.354 | 0.324 | 0.331 | 0.595 | 0.572 | 0.612 | 0.611 | 0.603 | 0.592 | 0.534 | 0.527 |
| ETTm2 → ETTm1 | 0.427 | 0.448 | 0.554 | 0.478 | 1.933 | 0.984 | 0.649 | 0.537 | 0.568 | 0.492 | 0.769 | 0.567 | 0.559 | 0.491 | 0.611 | 0.593 | 0.577 | 0.601 | 0.594 | 0.597 | 0.585 | 0.571 |
| ETTh1 | ETTh2 | ETTm1 | ETTm2 | ECL | Traffic | Weather | Solar | Average | Promotion | |
| TIMEMIXER++ | 0.419 | 0.339 | 0.369 | 0.269 | 0.165 | 0.416 | 0.226 | 0.203 | 0.300 | - |
| w/o channel mixing | 0.424 | 0.346 | 0.374 | 0.271 | 0.197 | 0.442 | 0.233 | 0.245 | 0.317 | 5.36% |
| w/o time image decomposition | 0.441 | 0.358 | 0.409 | 0.291 | 0.198 | 0.445 | 0.251 | 0.241 | 0.329 | 8.81% |
| w/o multi-scale mixing | 0.447 | 0.361 | 0.391 | 0.284 | 0.172 | 0.427 | 0.239 | 0.234 | 0.320 | 6.25% |
| w/o multi-resolution mixing | 0.431 | 0.350 | 0.374 | 0.280 | 0.181 | 0.432 | 0.241 | 0.233 | 0.316 | 5.10% |
| Tasks | Dataset | Dim | Series Length | Dataset Size | Frequency | Forecastability* | Information |
| Long-term Forecasting | ETTm1 | 7 | {96, 192, 336, 720} | (34465, 11521, 11521) | 15min | 0.46 | Temperature |
| ETTm2 | 7 | {96, 192, 336, 720} | (34465, 11521, 11521) | 15min | 0.55 | Temperature | |
| ETTh1 | 7 | {96, 192, 336, 720} | (8545, 2881, 2881) | 15 min | 0.38 | Temperature | |
| ETTh2 | 7 | {96, 192, 336, 720} | (8545, 2881, 2881) | 15 min | 0.45 | Temperature | |
| Electricity | 321 | {96, 192, 336, 720} | (18317, 2633, 5261) | Hourly | 0.77 | Electricity | |
| Traffic | 862 | {96, 192, 336, 720} | (12185, 1757, 3509) | Hourly | 0.68 | Transportation | |
| Exchange | 8 | {96, 192, 336, 720} | (5120, 665, 1422) | Daily | 0.41 | Weather | |
| Weather | 21 | {96, 192, 336, 720} | (36792, 5271, 10540) | 10 min | 0.75 | Weather | |
| Solar-Energy | 137 | {96, 192, 336, 720} | (36601, 5161, 10417) | 10min | 0.33 | Electricity | |
| Short-term Forecasting | PEMS03 | 358 | 12 | (15617,5135,5135) | 5min | 0.65 | Transportation |
| PEMS04 | 307 | 12 | (10172,3375,3375) | 5min | 0.45 | Transportation | |
| PEMS07 | 883 | 12 | (16911,5622,5622) | 5min | 0.58 | Transportation | |
| PEMS08 | 170 | 12 | (10690,3548,265) | 5min | 0.52 | Transportation | |
| M4-Yearly | 1 | 6 | (23000, 0, 23000) | Yearly | 0.43 | Demographic | |
| M4-Quarterly | 1 | 8 | (24000, 0, 24000) | Quarterly | 0.47 | Finance | |
| M4-Monthly | 1 | 18 | (48000, 0, 48000) | Monthly | 0.44 | Industry | |
| M4-Weekly | 1 | 13 | (359, 0, 359) | Weekly | 0.43 | Macro | |
| M4-Daily | 1 | 14 | (4227, 0, 4227) | Daily | 0.44 | Micro | |
| M4-Hourly | 1 | 48 | (414, 0, 414) | Hourly | 0.46 | Other |
| Dataset | Sample Numbers (train set, test set) | Variable Number | Series Length |
| EthanolConcentration | (261, 263) | 3 | 1751 |
| FaceDetection | (5890, 3524) | 144 | 62 |
| Handwriting | (150, 850) | 3 | 152 |
| Heartbeat | (204, 205) | 61 | 405 |
| JapaneseVowels | (270, 370) | 12 | 29 |
| PEMSSF | (267, 173) | 963 | 144 |
| SelfRegulationSCP1 | (268, 293) | 6 | 896 |
| SelfRegulationSCP2 | (200, 180) | 7 | 1152 |
| SpokenArabicDigits | (6599, 2199) | 13 | 93 |
| UWaveGestureLibrary | (120, 320) | 3 | 315 |
| Dataset | Dataset sizes (train set, val set, test set) | Variable Number | Sliding Window Length |
| SMD | (566724, 141681, 708420) | 38 | 100 |
| MSL | (44653, 11664, 73729) | 55 | 100 |
| SMAP | (108146, 27037, 427617) | 25 | 100 |
| SWaT | (396000, 99000, 449919) | 51 | 100 |
| PSM | (105984, 26497, 87841) | 25 | 100 |
| M4 | PEMS03 | PEMS04 | PEMS07 | PEMS08 | Average | Promotion | |
| TIMEMIXER++ | 11.45 | 13.43 | 11.34 | 7.32 | 8.21 | 10.35 | - |
| w/o channel mixing | 11.44 | 15.57 | 13.31 | 9.74 | 10.78 | 12.17 | 14.95% |
| w/o time image decomposition | 12.37 | 15.59 | 12.97 | 9.65 | 9.97 | 12.11 | 14.51% |
| w/o multi-scale mixing | 11.98 | 14.97 | 13.02 | 9.17 | 9.69 | 11.79 | 12.06% |
| w/o multi-resolution mixing | 11.87 | 15.02 | 13.14 | 8.72 | 9.53 | 11.68 | 11.23% |
| SMD | MSL | SMAP | SWaT | PSM | Average | Promotion |
| TIMEMIXER++ | 86.50 | 85.82 | 73.10 | 94.64 | 97.60 | 87.47 | - |
| w/o channel mixing | 84.51 | 74.03 | 70.91 | 90.41 | 96.17 | 83.21 | 4.94% |
| w/o time image decomposition | 81.21 | 72.43 | 66.02 | 82.41 | 92.53 | 78.92 | 9.84% |
| w/o multi-scale mixing | 82.37 | 75.12 | 92.79 | 86.48 | 94.53 | 86.26 | 1.46% |
| w/o multi-resolution mixing | 83.37 | 79.24 | 77.49 | 88.46 | 96.02 | 86.26 | 2.99% |
| Model | TimeMixer++ | iTransformer (2024) | Confidence | ||
| Dataset | MSE | MAE | MSE | MAE | Level |
| Weather | 0.226 ± 0.008 | 0.262 ± 0.007 | 0.258 ± 0.009 | 0.278 ± 0.006 | 99% |
| Solar-Energy | 0.203 ± 0.027 | 0.238 ± 0.026 | 0.233 ± 0.009 | 0.262 ± 0.011 | 95% |
| Electricity | 0.165 ± 0.017 | 0.253 ± 0.019 | 0.178 ± 0.002 | 0.270 ± 0.017 | 99% |
| Traffic | 0.416 ± 0.027 | 0.264 ± 0.030 | 0.428 ± 0.008 | 0.282 ± 0.027 | 95% |
| ETTh1 | 0.419 ± 0.023 | 0.432 ± 0.021 | 0.454 ± 0.004 | 0.447 ± 0.007 | 99% |
| ETTh2 | 0.339 ± 0.020 | 0.380 ± 0.019 | 0.383 ± 0.004 | 0.407 ± 0.007 | 95% |
| ETTm1 | 0.369 ± 0.019 | 0.378 ± 0.026 | 0.407 ± 0.004 | 0.410 ± 0.009 | 99% |
| ETTm2 | 0.269 ± 0.021 | 0.320 ± 0.019 | 0.288 ± 0.010 | 0.332 ± 0.003 | 95% |
| Model | TimeMixer++ | TimeMixer (2024b) | Confidence | ||||
| Dataset | MAE | MAPE | RMSE | MAE | MAPE | RMSE | Level |
| PEMS03 | 13.99 ± 0.271 | 13.43 ± 0.292 | 24.03 ± 0.269 | 14.63 ± 0.471 | 14.54 ± 0.502 | 23.28 ± 0.468 | 99% |
| PEMS04 | 17.46 ± 0.951 | 11.34 ± 0.970 | 28.83 ± 0.916 | 19.21 ± 0.511 | 12.53 ± 0.523 | 30.92 ± 0.519 | 95% |
| PEMS07 | 18.38 ± 0.991 | 7.32 ± 0.977 | 31.75 ± 0.890 | 20.57 ± 0.372 | 8.62 ± 0.399 | 33.59 ± 0.375 | 95% |
| PEMS08 | 13.81 ± 0.827 | 8.21 ± 0.836 | 23.62 ± 0.877 | 15.22 ± 0.311 | 9.67 ± 0.332 | 24.26 ± 0.317 | 99% |
| Model | TimeMixer++ | TimeMixer (2024b) | Confidence | ||||
| Dataset | SMAPE | MASE | OWA | SMAPE | MASE | OWA | Level |
| Yearly | 13.179 ± 0.021 | 2.934 ± 0.012 | 0.769 ± 0.001 | 13.206 ± 0.121 | 2.916 ± 0.022 | 0.776 ± 0.002 | 95% |
| Quarterly | 9.755 ± 0.001 | 1.159 ± 0.005 | 0.865 ± 0.009 | 9.996 ± 0.101 | 1.166 ± 0.015 | 0.825 ± 0.008 | 95% |
| Monthly | 12.432 ± 0.015 | 0.904 ± 0.012 | 0.841 ± 0.001 | 12.605 ± 0.115 | 0.919 ± 0.011 | 0.869 ± 0.003 | 95% |
| Others | 4.698 ± 0.114 | 2.931 ± 0.027 | 1.01 ± 0.011 | 4.564 ± 0.114 | 3.115 ± 0.027 | 0.982 ± 0.011 | 99% |
| Averaged | 11.448 ± 0.007 | 1.487 ± 0.010 | 0.821 ± 0.002 | 11.723 ± 0.011 | 1.559 ± 0.022 | 0.840 ± 0.001 | 99% |
| Models | TimeMixer++ (Ours) | TimeMixer (2024b) | iTransformer (2024) | PatchTST (2023) | Crossformer (2023) | TiDE (2023a) | TimesNet (2023) | DLinear (2023) | SCINet (2022a) | FEDformer (2022b) | Stationary (2022c) | Autoformer (2021) |
| Metric | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | |
| Weather | 96 | 0.155 | 0.205 | 0.163 | 0.209 | 0.174 | 0.214 | 0.186 | 0.227 | 0.195 | 0.271 | 0.202 | 0.261 | 0.172 | 0.220 | 0.195 | 0.252 | 0.221 | 0.306 | 0.217 | 0.296 | 0.173 | 0.223 | 0.266 | 0.336 |
| 192 | 0.201 | 0.245 | 0.208 | 0.250 | 0.221 | 0.254 | 0.234 | 0.265 | 0.209 | 0.277 | 0.242 | 0.298 | 0.219 | 0.261 | 0.237 | 0.295 | 0.261 | 0.340 | 0.276 | 0.336 | 0.245 | 0.285 | 0.307 | 0.367 | |
| 336 | 0.237 | 0.265 | 0.251 | 0.287 | 0.278 | 0.296 | 0.284 | 0.301 | 0.273 | 0.332 | 0.287 | 0.335 | 0.280 | 0.306 | 0.282 | 0.331 | 0.309 | 0.378 | 0.339 | 0.380 | 0.321 | 0.338 | 0.359 | 0.395 | |
| 720 | 0.312 | 0.334 | 0.339 | 0.341 | 0.358 | 0.347 | 0.356 | 0.349 | 0.379 | 0.401 | 0.351 | 0.386 | 0.365 | 0.359 | 0.345 | 0.382 | 0.377 | 0.427 | 0.403 | 0.428 | 0.414 | 0.410 | 0.419 | 0.428 | |
| Avg | 0.226 | 0.262 | 0.240 | 0.271 | 0.258 | 0.278 | 0.265 | 0.285 | 0.264 | 0.320 | 0.271 | 0.320 | 0.259 | 0.287 | 0.265 | 0.315 | 0.292 | 0.363 | 0.309 | 0.360 | 0.288 | 0.314 | 0.338 | 0.382 | |
| Solar-Energy | 96 | 0.171 | 0.231 | 0.189 | 0.259 | 0.203 | 0.237 | 0.265 | 0.323 | 0.232 | 0.302 | 0.312 | 0.399 | 0.373 | 0.358 | 0.290 | 0.378 | 0.237 | 0.344 | 0.286 | 0.341 | 0.321 | 0.380 | 0.456 | 0.446 |
| 192 | 0.218 | 0.263 | 0.222 | 0.283 | 0.233 | 0.261 | 0.288 | 0.332 | 0.371 | 0.410 | 0.339 | 0.416 | 0.397 | 0.376 | 0.320 | 0.398 | 0.280 | 0.380 | 0.291 | 0.337 | 0.346 | 0.369 | 0.588 | 0.561 | |
| 336 | 0.212 | 0.269 | 0.231 | 0.292 | 0.248 | 0.273 | 0.301 | 0.339 | 0.495 | 0.515 | 0.368 | 0.430 | 0.420 | 0.380 | 0.353 | 0.415 | 0.304 | 0.389 | 0.354 | 0.416 | 0.357 | 0.387 | 0.595 | 0.588 | |
| 720 | 0.212 | 0.270 | 0.223 | 0.285 | 0.249 | 0.275 | 0.295 | 0.336 | 0.526 | 0.542 | 0.370 | 0.425 | 0.420 | 0.381 | 0.357 | 0.413 | 0.308 | 0.388 | 0.380 | 0.437 | 0.375 | 0.424 | 0.733 | 0.633 | |
| Avg | 0.203 | 0.238 | 0.216 | 0.280 | 0.233 | 0.262 | 0.287 | 0.333 | 0.406 | 0.442 | 0.347 | 0.417 | 0.403 | 0.374 | 0.330 | 0.401 | 0.282 | 0.375 | 0.328 | 0.383 | 0.350 | 0.390 | 0.586 | 0.557 | |
| Electricity | 96 | 0.135 | 0.222 | 0.153 | 0.247 | 0.148 | 0.240 | 0.190 | 0.296 | 0.219 | 0.314 | 0.237 | 0.329 | 0.168 | 0.272 | 0.210 | 0.302 | 0.247 | 0.345 | 0.193 | 0.308 | 0.169 | 0.273 | 0.201 | 0.317 |
| 192 | 0.147 | 0.235 | 0.166 | 0.256 | 0.162 | 0.253 | 0.199 | 0.304 | 0.231 | 0.322 | 0.236 | 0.330 | 0.184 | 0.322 | 0.210 | 0.305 | 0.257 | 0.355 | 0.201 | 0.315 | 0.182 | 0.286 | 0.222 | 0.334 | |
| 336 | 0.164 | 0.245 | 0.185 | 0.277 | 0.178 | 0.269 | 0.217 | 0.319 | 0.246 | 0.337 | 0.249 | 0.344 | 0.198 | 0.300 | 0.223 | 0.319 | 0.269 | 0.369 | 0.214 | 0.329 | 0.200 | 0.304 | 0.231 | 0.443 | |
| 720 | 0.212 | 0.310 | 0.225 | 0.310 | 0.225 | 0.317 | 0.258 | 0.352 | 0.280 | 0.363 | 0.284 | 0.373 | 0.220 | 0.320 | 0.258 | 0.350 | 0.299 | 0.390 | 0.246 | 0.355 | 0.222 | 0.321 | 0.254 | 0.361 | |
| Avg | 0.165 | 0.253 | 0.182 | 0.272 | 0.178 | 0.270 | 0.216 | 0.318 | 0.244 | 0.334 | 0.251 | 0.344 | 0.192 | 0.304 | 0.225 | 0.319 | 0.268 | 0.365 | 0.214 | 0.327 | 0.193 | 0.296 | 0.227 | 0.338 | |
| Traffic | 96 | 0.392 | 0.253 | 0.462 | 0.285 | 0.395 | 0.268 | 0.526 | 0.347 | 0.644 | 0.429 | 0.805 | 0.493 | 0.593 | 0.321 | 0.650 | 0.396 | 0.788 | 0.499 | 0.587 | 0.366 | 0.612 | 0.338 | 0.613 | 0.388 |
| 192 | 0.402 | 0.258 | 0.473 | 0.296 | 0.417 | 0.276 | 0.522 | 0.332 | 0.665 | 0.431 | 0.756 | 0.474 | 0.617 | 0.336 | 0.598 | 0.370 | 0.789 | 0.505 | 0.604 | 0.373 | 0.613 | 0.340 | 0.616 | 0.382 | |
| 336 | 0.428 | 0.263 | 0.498 | 0.296 | 0.433 | 0.283 | 0.517 | 0.334 | 0.674 | 0.420 | 0.762 | 0.477 | 0.629 | 0.336 | 0.605 | 0.373 | 0.797 | 0.508 | 0.621 | 0.383 | 0.618 | 0.328 | 0.622 | 0.337 | |
| 720 | 0.441 | 0.282 | 0.506 | 0.313 | 0.467 | 0.302 | 0.552 | 0.352 | 0.683 | 0.424 | 0.719 | 0.449 | 0.640 | 0.350 | 0.645 | 0.394 | 0.841 | 0.523 | 0.626 | 0.382 | 0.653 | 0.355 | 0.660 | 0.408 | |
| Avg | 0.416 | 0.264 | 0.484 | 0.297 | 0.428 | 0.282 | 0.529 | 0.341 | 0.667 | 0.426 | 0.760 | 0.473 | 0.620 | 0.336 | 0.625 | 0.383 | 0.804 | 0.509 | 0.610 | 0.376 | 0.624 | 0.340 | 0.628 | 0.379 | |
| Exchange | 96 | 0.085 | 0.214 | 0.090 | 0.235 | 0.086 | 0.206 | 0.088 | 0.205 | 0.256 | 0.367 | 0.094 | 0.218 | 0.107 | 0.234 | 0.088 | 0.218 | 0.267 | 0.396 | 0.148 | 0.278 | 0.111 | 0.237 | 0.197 | 0.323 |
| 192 | 0.175 | 0.313 | 0.187 | 0.343 | 0.177 | 0.299 | 0.176 | 0.299 | 0.470 | 0.509 | 0.184 | 0.307 | 0.226 | 0.344 | 0.176 | 0.315 | 0.351 | 0.459 | 0.271 | 0.315 | 0.219 | 0.335 | 0.300 | 0.369 | |
| 336 | 0.316 | 0.420 | 0.353 | 0.473 | 0.331 | 0.417 | 0.301 | 0.397 | 1.268 | 0.883 | 0.349 | 0.431 | 0.367 | 0.448 | 0.313 | 0.427 | 1.324 | 0.853 | 0.460 | 0.427 | 0.421 | 0.476 | 0.509 | 0.524 | |
| 720 | 0.851 | 0.689 | 0.934 | 0.761 | 0.847 | 0.691 | 0.901 | 0.714 | 1.767 | 1.068 | 0.852 | 0.698 | 0.964 | 0.746 | 0.839 | 0.695 | 1.058 | 0.797 | 1.195 | 0.695 | 1.092 | 0.769 | 1.447 | 0.941 | |
| Avg | 0.357 | 0.391 | 0.391 | 0.453 | 0.360 | 0.403 | 0.367 | 0.404 | 0.940 | 0.707 | 0.370 | 0.413 | 0.416 | 0.443 | 0.354 | 0.414 | 0.750 | 0.626 | 0.519 | 0.429 | 0.461 | 0.454 | 0.613 | 0.539 | |
| ETTh1 | 96 | 0.361 | 0.403 | 0.375 | 0.400 | 0.386 | 0.405 | 0.460 | 0.447 | 0.423 | 0.448 | 0.479 | 0.464 | 0.384 | 0.402 | 0.397 | 0.412 | 0.654 | 0.599 | 0.395 | 0.424 | 0.513 | 0.491 | 0.449 | 0.459 |
| 192 | 0.416 | 0.441 | 0.429 | 0.421 | 0.441 | 0.512 | 0.477 | 0.429 | 0.471 | 0.474 | 0.525 | 0.492 | 0.436 | 0.429 | 0.446 | 0.441 | 0.719 | 0.631 | 0.469 | 0.470 | 0.534 | 0.504 | 0.500 | 0.482 | |
| 336 | 0.430 | 0.434 | 0.484 | 0.458 | 0.487 | 0.458 | 0.546 | 0.496 | 0.570 | 0.546 | 0.565 | 0.515 | 0.491 | 0.469 | 0.489 | 0.467 | 0.778 | 0.659 | 0.530 | 0.499 | 0.588 | 0.535 | 0.521 | 0.496 | |
| 720 | 0.467 | 0.451 | 0.498 | 0.482 | 0.503 | 0.491 | 0.544 | 0.517 | 0.653 | 0.621 | 0.594 | 0.558 | 0.521 | 0.500 | 0.513 | 0.510 | 0.836 | 0.699 | 0.598 | 0.544 | 0.643 | 0.616 | 0.514 | 0.512 | |
| Avg | 0.419 | 0.432 | 0.447 | 0.440 | 0.454 | 0.447 | 0.516 | 0.484 | 0.529 | 0.522 | 0.541 | 0.507 | 0.458 | 0.450 | 0.461 | 0.457 | 0.747 | 0.647 | 0.498 | 0.484 | 0.570 | 0.537 | 0.496 | 0.487 | |
| ETTh2 | 96 | 0.276 | 0.328 | 0.289 | 0.341 | 0.297 | 0.349 | 0.308 | 0.355 | 0.745 | 0.584 | 0.400 | 0.440 | 0.340 | 0.374 | 0.340 | 0.394 | 0.707 | 0.621 | 0.358 | 0.397 | 0.476 | 0.458 | 0.346 | 0.388 |
| 192 | 0.342 | 0.379 | 0.372 | 0.392 | 0.380 | 0.400 | 0.393 | 0.405 | 0.877 | 0.656 | 0.528 | 0.509 | 0.402 | 0.414 | 0.482 | 0.479 | 0.860 | 0.689 | 0.429 | 0.439 | 0.512 | 0.493 | 0.456 | ||
| 336 | 0.346 | 0.398 | 0.386 | 0.414 | 0.428 | 0.432 | 0.427 | 0.436 | 0.546 | 1.043 | 0.731 | 0.643 | 0.571 | 0.452 | 0.591 | 0.541 | 1.000 | 0.774 | 0.444 | 0.496 | 0.487 | 0.552 | 0.551 | ||
| 720 | 0.392 | 0.415 | 0.412 | 0.434 | 0.427 | 0.445 | 0.436 | 0.450 | 1.104 | 0.763 | 0.874 | 0.679 | 0.462 | 0.468 | 0.839 | 0.661 | 1.249 | 0.838 | 0.463 | 0.474 | 0.562 | 0.560 | |||
| Avg | 0.339 | 0.380 | 0.364 | 0.395 | 0.383 | 0.407 | 0.391 | 0.411 | 0.942 | 0.684 | 0.611 | 0.550 | 0.414 | 0.427 | 0.563 | 0.519 | 0.954 | 0.723 | 0.437 | 0.449 | 0.526 | 0.516 | 0.450 | ||
| ETTm1 | 96 | 0.310 | 0.334 | 0.320 | 0.357 | 0.334 | 0.368 | 0.352 | 0.374 | 0.404 | 0.426 | 0.364 | 0.387 | 0.338 | 0.375 | 0.346 | 0.374 | 0.418 | 0.438 | 0.286 | 0.377 | 0.379 | 0.418 | 0.274 | |
| 192 | 0.348 | 0.362 | 0.361 | 0.381 | 0.390 | 0.393 | 0.374 | 0.387 | 0.450 | 0.451 | 0.398 | 0.404 | 0.374 | 0.387 | 0.382 | 0.391 | 0.451 | 0.415 | 0.415 | 0.498 | 0.485 | 0.512 | 0.553 | ||
| 336 | 0.376 | 0.391 | 0.390 | 0.404 | 0.426 | 0.421 | 0.414 | 0.532 | 0.515 | 0.532 | 0.428 | 0.425 | 0.411 | 0.415 | 0.415 | 0.492 | 0.489 | 0.515 | 0.515 | 0.543 | 0.495 | 0.464 | |||
| 720 | 0.440 | 0.423 | 0.454 | 0.441 | 0.491 | 0.459 | 0.462 | 0.449 | 0.666 | 0.589 | 0.487 | 0.461 | 0.478 | 0.450 | 0.473 | 0.558 | 0.525 | 0.595 | 0.550 | 0.543 | 0.490 | 0.585 | 0.516 | ||
| Avg | 0.369 | 0.378 | 0.381 | 0.395 | 0.407 | 0.410 | 0.406 | 0.407 | 0.513 | 0.495 | 0.419 | 0.419 | 0.400 | 0.406 | 0.404 | 0.485 | 0.485 | 0.481 | 0.448 | 0.452 | 0.481 | 0.568 | |||
| Models | TimeMixer++ (Ours) | TimeMixer (2024b) | iTransformer (2024) | TiDE (2023a) | TimesNet (2023) | N-HiTS (2023) | N-BEATS* (2019) | PatchTST (2023) | MICN (2023a) | FiLM (2022a) | LightTS (2022) | DLinear (2023) | FED. (2022b) | Stationary (2022c) | Auto. (2021) |
| Yearly | SMAPE | 13.179 | 13.206 | 13.923 | 15.320 | 13.387 | 13.418 | 13.436 | 16.463 | 25.022 | 17.431 | 14.247 | 16.965 | 13.728 | 13.717 | 13.974 |
| MASE | 2.934 | 2.916 | 3.214 | 3.540 | 2.996 | 3.045 | 3.043 | 3.967 | 7.162 | 4.043 | 3.109 | 4.283 | 3.048 | 3.078 | 3.134 | |
| OWA | 0.769 | 0.776 | 0.830 | 0.910 | 0.786 | 0.793 | 0.794 | 1.003 | 1.667 | 1.042 | 0.827 | 1.058 | 0.803 | 0.807 | 0.822 | |
| Quarterly | SMAPE | 9.755 | 9.996 | 10.757 | 11.830 | 10.100 | 10.202 | 10.124 | 10.644 | 15.214 | 12.925 | 11.364 | 12.145 | 10.792 | 10.958 | 11.338 |
| MASE | 1.159 | 1.166 | 1.283 | 1.410 | 1.182 | 1.194 | 1.169 | 1.278 | 1.963 | 1.664 | 1.328 | 1.520 | 1.283 | 1.325 | 1.365 | |
| OWA | 0.865 | 0.825 | 0.956 | 1.050 | 0.890 | 0.899 | 0.886 | 0.949 | 1.407 | 1.193 | 1.000 | 1.106 | 0.958 | 0.981 | 1.012 | |
| Monthly | SMAPE | 12.432 | 12.605 | 13.796 | 15.180 | 12.670 | 12.791 | 12.677 | 13.399 | 16.943 | 15.407 | 14.014 | 13.514 | 14.260 | 13.917 | 13.958 |
| MASE | 0.904 | 0.919 | 1.083 | 1.190 | 0.933 | 0.969 | 0.937 | 1.031 | 1.442 | 1.298 | 1.053 | 1.037 | 1.102 | 1.097 | 1.103 | |
| OWA | 0.841 | 0.869 | 0.987 | 1.090 | 0.878 | 0.899 | 0.880 | 0.949 | 1.265 | 1.144 | 0.981 | 0.956 | 1.012 | 0.998 | 1.002 | |
| Others | SMAPE | 4.698 | 4.564 | 5.569 | 6.120 | 4.891 | 5.061 | 4.925 | 6.558 | 41.985 | 7.134 | 15.880 | 6.709 | 4.954 | 6.302 | 5.485 |
| MASE | 2.931 | 3.115 | 3.940 | 4.330 | 3.302 | 3.216 | 3.391 | 4.511 | 62.734 | 5.09 | 11.434 | 4.953 | 3.264 | 4.064 | 3.865 | |
| OWA | 1.01 | 0.982 | 1.207 | 1.330 | 1.035 | 1.040 | 1.053 | 1.401 | 14.313 | 1.553 | 3.474 | 1.487 | 1.036 | 1.304 | 1.187 | |
| Weighted Average | SMAPE | 11.448 | 11.723 | 12.684 | 13.950 | 11.829 | 11.927 | 11.851 | 13.152 | 19.638 | 14.863 | 13.525 | 13.639 | 12.840 | 12.780 | 12.909 |
| MASE | 1.487 | 1.559 | 1.764 | 1.940 | 1.585 | 1.613 | 1.559 | 1.945 | 5.947 | 2.207 | 2.111 | 2.095 | 1.701 | 1.756 | 1.771 | |
| OWA | 0.821 | 0.840 | 0.929 | 1.020 | 0.851 | 0.861 | 0.855 | 0.998 | 2.279 | 1.125 | 1.051 | 1.051 | 0.918 | 0.930 | 0.939 | |
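The SMAPE, MASE, and OWA columns in the M4 tables above are the standard competition metrics. As a reference for reading the tables, here is a minimal plain-Python sketch of SMAPE and MASE (variable names are illustrative, not taken from any of the papers' code):

```python
def smape(y, yhat):
    # Symmetric MAPE in percent: (200 / n) * sum(|y - yhat| / (|y| + |yhat|))
    return 200.0 / len(y) * sum(abs(a - f) / (abs(a) + abs(f))
                                for a, f in zip(y, yhat))

def mase(y, yhat, y_train, m=1):
    # Forecast MAE scaled by the in-sample seasonal-naive MAE (period m)
    naive = sum(abs(y_train[t] - y_train[t - m])
                for t in range(m, len(y_train))) / (len(y_train) - m)
    return sum(abs(a - f) for a, f in zip(y, yhat)) / len(y) / naive
```

OWA is then the average of SMAPE and MASE after normalizing each by the Naïve2 baseline's score, so values below 1 beat that baseline.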
| Models | TimeMixer++ (Ours) | TimeMixer (2024b) | iTransformer (2024) | TiDE (2023a) | SCINet (2022a) | Crossformer (2023) | PatchTST (2023) | TimesNet (2023) | MICN (2023a) | DLinear (2023) | FEDformer (2022b) | Stationary (2022c) | Autoformer (2021) |
| PEMS03 | MAE | 13.99 | 14.63 | 16.72 | 18.39 | 15.97 | 15.64 | 18.95 | 16.41 | 15.71 | 19.70 | 19.00 | 17.64 | 18.08 |
| MAPE | 13.43 | 11.54 | 15.81 | 17.39 | 15.89 | 15.74 | 17.29 | 15.17 | 15.67 | 18.35 | 18.57 | 17.56 | 18.75 | |
| RMSE | 24.03 | 23.28 | 27.81 | 30.59 | 25.20 | 25.56 | 30.15 | 26.72 | 24.55 | 32.35 | 30.05 | 28.37 | 27.82 | |
| PEMS04 | MAE | 17.46 | 19.21 | 21.81 | 23.99 | 20.35 | 20.38 | 24.86 | 21.63 | 21.62 | 24.62 | 26.51 | 22.34 | 25.00 |
| MAPE | 11.34 | 12.53 | 13.42 | 14.76 | 12.84 | 12.84 | 16.65 | 13.15 | 13.53 | 16.12 | 16.76 | 14.85 | 16.70 | |
| RMSE | 28.83 | 30.92 | 33.91 | 37.30 | 32.31 | 32.41 | 40.46 | 34.90 | 34.39 | 39.51 | 41.81 | 35.47 | 38.02 | |
| PEMS07 | MAE | 18.38 | 20.57 | 23.01 | 25.31 | 22.79 | 22.54 | 27.87 | 25.12 | 22.28 | 28.65 | 27.92 | 26.02 | 26.92 |
| MAPE | 7.32 | 8.62 | 10.02 | 11.02 | 9.41 | 9.38 | 12.69 | 10.60 | 9.57 | 12.15 | 12.29 | 11.75 | 11.83 | |
| RMSE | 31.75 | 33.59 | 35.56 | 39.12 | 35.61 | 35.49 | 42.56 | 40.71 | 35.40 | 45.02 | 42.29 | 42.34 | 40.60 | |
| PEMS08 | MAE | 13.81 | 15.22 | 17.94 | 19.73 | 17.38 | 17.56 | 20.35 | 19.01 | 17.76 | 20.26 | 20.56 | 19.29 | 20.47 |
| MAPE | 8.21 | 9.67 | 10.93 | 12.02 | 10.80 | 10.92 | 13.15 | 11.83 | 10.76 | 12.09 | 12.41 | 12.21 | 12.27 | |
| RMSE | 23.62 | 24.26 | 27.88 | 30.67 | 27.34 | 27.21 | 31.04 | 30.65 | 27.26 | 32.38 | 32.97 | 38.62 | 31.52 | |
| Datasets | SMD | MSL | SMAP | SWaT | PSM | Avg F1 (%) |
| Metrics | P | R | F1 | P | R | F1 | P | R | F1 | P | R | F1 | P | R | F1 | |
| LSTM | (1997) | 78.52 | 65.47 | 71.41 | 78.04 | 86.22 | 81.93 | 91.06 | 57.49 | 70.48 | 78.06 | 91.72 | 84.34 | 69.24 | 99.53 | 81.67 | 77.97 |
| Transformer | (2017) | 83.58 | 76.13 | 79.56 | 71.57 | 87.37 | 78.68 | 89.37 | 57.12 | 69.70 | 68.84 | 96.53 | 80.37 | 62.75 | 96.56 | 76.07 | 76.88 |
| LogTrans | (2019) | 83.46 | 70.13 | 76.21 | 73.05 | 87.37 | 79.57 | 89.15 | 57.59 | 69.97 | 68.67 | 97.32 | 80.52 | 63.06 | 98.00 | 76.74 | 76.60 |
| TCN | (2019) | 84.06 | 79.07 | 81.49 | 75.11 | 82.44 | 78.60 | 86.90 | 59.23 | 70.45 | 76.59 | 95.71 | 85.09 | 54.59 | 99.77 | 70.57 | 77.24 |
| Reformer | (2020) | 82.58 | 69.24 | 75.32 | 85.51 | 83.31 | 84.40 | 90.91 | 57.44 | 70.40 | 72.50 | 96.53 | 82.80 | 59.93 | 95.38 | 73.61 | 77.31 |
| Informer | (2021a) | 86.60 | 77.23 | 81.65 | 81.77 | 86.48 | 84.06 | 90.11 | 57.13 | 69.92 | 70.29 | 96.75 | 81.43 | 64.27 | 96.33 | 77.10 | 78.83 |
| Anomaly* | (2022) | 88.91 | 82.23 | 85.49 | 79.61 | 87.37 | 83.31 | 91.85 | 58.11 | 71.18 | 72.51 | 97.32 | 83.10 | 68.35 | 94.72 | 79.40 | 80.50 |
| Pyraformer | (2022b) | 85.61 | 80.61 | 83.04 | 83.81 | 85.93 | 84.86 | 92.54 | 57.71 | 71.09 | 87.92 | 96.00 | 91.78 | 71.67 | 96.02 | 82.08 | 82.57 |
| Autoformer | (2021) | 88.06 | 82.35 | 85.11 | 77.27 | 80.92 | 79.05 | 90.40 | 58.62 | 71.12 | 89.85 | 95.81 | 92.74 | 99.08 | 88.15 | 93.29 | 84.26 |
| LSSL | (2022b) | 78.51 | 65.32 | 71.31 | 77.55 | 88.18 | 82.53 | 89.43 | 53.43 | 66.90 | 79.05 | 93.72 | 85.76 | 66.02 | 92.93 | 77.20 | 76.74 |
| Stationary | (2022c) | 88.33 | 81.21 | 84.62 | 68.55 | 89.14 | 77.50 | 89.37 | 59.02 | 71.09 | 68.03 | 96.75 | 79.88 | 97.82 | 96.76 | 97.29 | 82.08 |
| DLinear | (2023) | 83.62 | 71.52 | 77.10 | 84.34 | 85.42 | 84.88 | 92.32 | 55.41 | 69.26 | 80.91 | 95.30 | 87.52 | 98.28 | 89.26 | 93.55 | 82.46 |
| ETSformer | (2022) | 87.44 | 79.23 | 83.13 | 85.13 | 84.93 | 85.03 | 92.25 | 55.75 | 69.50 | 90.02 | 80.36 | 84.91 | 99.31 | 85.28 | 91.76 | 82.87 |
| LightTS | (2022a) | 87.10 | 78.42 | 82.53 | 82.40 | 75.78 | 78.95 | 92.58 | 55.27 | 69.21 | 91.98 | 94.72 | 93.33 | 98.37 | 95.97 | 97.15 | 84.23 |
| FEDformer | (2022b) | 87.95 | 82.39 | 85.08 | 77.14 | 80.07 | 78.57 | 90.47 | 58.10 | 70.76 | 90.17 | 96.42 | 93.19 | 97.31 | 97.16 | 97.23 | 84.97 |
| TimesNet | (2023) | 88.66 | 83.14 | 85.81 | 83.92 | 86.42 | 85.15 | 92.52 | 58.29 | 71.52 | 86.76 | 97.32 | 91.74 | 98.19 | 96.76 | 97.47 | 86.34 |
| TiDE | (2023a) | 76.00 | 63.00 | 68.91 | 84.00 | 60.00 | 70.18 | 88.00 | 50.00 | 64.00 | 98.00 | 63.00 | 76.73 | 93.00 | 92.00 | 92.50 | 74.46 |
| iTransformer | (2024) | 78.45 | 65.10 | 71.15 | 86.15 | 62.65 | 72.54 | 90.67 | 52.96 | 66.87 | 99.96 | 65.55 | 79.18 | 95.65 | 94.69 | 95.17 | 76.98 |
| TimeMixer++ | (Ours) | 88.59 | 84.50 | 86.50 | 89.73 | 82.23 | 85.82 | 93.47 | 60.02 | 73.10 | 92.96 | 94.33 | 94.64 | 98.33 | 96.90 | 97.60 | 87.47 |
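The P/R/F1 columns above are point-wise precision, recall, and F1 over binary anomaly labels. A minimal sketch of the computation (illustrative only; it does not include the point-adjustment protocol some anomaly-detection papers apply before scoring):

```python
def precision_recall_f1(y_true, y_pred):
    # Point-wise precision/recall/F1 for binary anomaly labels (1 = anomaly)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```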
| Datasets / Models | Classical methods | RNN | TCN | Transformers | MLP | CNN | |||||||||||||||||||
| | DTW (1994) | XGBoost (2016) | Rocket (2020) | LSTM (1997) | LSTNet (2018a) | LSSL (2022b) | TCN (2019) | Trans. (2017) | Re. (2020) | In. (2021a) | Pyra. (2022b) | Auto. (2021) | Station. (2022c) | FED. (2022b) | ETS. (2022) | Flow. (2022a) | iTrans. (2024) | DLinear (2023) | LightTS (2022a) | TiDE (2023a) | TimesNet (2023) | TimeMixer++ (Ours) |
| EthanolConcentration | 32.3 | 43.7 | 45.2 | 32.3 | 39.9 | 31.1 | 28.9 | 32.7 | 31.9 | 31.6 | 30.8 | 31.6 | 32.7 | 28.1 | 31.2 | 33.8 | 28.1 | 32.6 | 29.7 | 27.1 | 35.7 | 39.9 | |||
| FaceDetection | 52.9 | 63.3 | 64.7 | 57.7 | 65.7 | 66.7 | 52.8 | 67.3 | 68.6 | 67.0 | 65.7 | 68.4 | 68.0 | 66.6 | 66.3 | 67.6 | 66.3 | 68.0 | 67.5 | 65.3 | 68.6 | 71.8 | |||
| Handwriting | 28.6 | 15.8 | 58.8 | 15.2 | 25.8 | 24.6 | 53.3 | 32.0 | 27.4 | 32.8 | 29.4 | 36.7 | 31.6 | 28.0 | 32.5 | 33.8 | 24.2 | 27.0 | 26.1 | 23.2 | 32.1 | 26.5 | |||
| Heartbeat | 71.7 | 73.2 | 75.6 | 72.2 | 77.1 | 72.7 | 75.6 | 76.1 | 77.1 | 80.5 | 75.6 | 74.6 | 73.7 | 73.7 | 71.2 | 77.6 | 75.6 | 75.1 | 75.1 | 74.6 | 78.0 | 79.1 | |||
| JapaneseVowels | 94.9 | 86.5 | 96.2 | 79.7 | 98.1 | 98.4 | 98.9 | 98.7 | 97.8 | 98.9 | 98.4 | 96.2 | 99.2 | 98.4 | 95.9 | 98.9 | 96.6 | 96.2 | 96.2 | 95.6 | 98.4 | 97.9 | |||
| PEMS-SF | 71.1 | 98.3 | 75.1 | 39.9 | 86.7 | 86.1 | 68.8 | 82.1 | 82.7 | 81.5 | 83.2 | 82.7 | 87.3 | 80.9 | 86.0 | 83.8 | 87.9 | 75.1 | 88.4 | 86.9 | 89.6 | 91.0 | |||
| SelfRegulationSCP1 | 77.7 | 84.6 | 90.8 | 68.9 | 84.0 | 90.8 | 84.6 | 92.2 | 90.4 | 90.1 | 88.1 | 84.0 | 89.4 | 88.7 | 89.6 | 92.5 | 90.2 | 87.3 | 89.8 | 89.2 | 91.8 | 93.1 | |||
| SelfRegulationSCP2 | 53.9 | 48.9 | 53.3 | 46.6 | 52.8 | 52.2 | 55.6 | 53.9 | 56.7 | 53.3 | 53.3 | 50.6 | 57.2 | 54.4 | 55.0 | 56.1 | 54.4 | 50.5 | 51.1 | 53.4 | 57.2 | 65.6 | |||
| SpokenArabicDigits | 96.3 | 69.6 | 71.2 | 31.9 | 100.0 | 100.0 | 95.6 | 98.4 | 97.0 | 100.0 | 99.6 | 100.0 | 100.0 | 100.0 | 100.0 | 98.8 | 96.0 | 81.4 | 100.0 | 95.0 | 99.0 | 99.8 | |||
| UWaveGestureLibrary | 90.3 | 75.9 | 94.4 | 41.2 | 87.8 | 85.9 | 88.4 | 85.6 | 85.6 | 85.6 | 83.4 | 85.9 | 87.5 | 85.3 | 85.0 | 86.6 | 85.9 | 82.1 | 80.3 | 84.9 | 85.3 | 88.2 | |||
| Average Accuracy | 67.0 | 66.0 | 72.5 | 48.6 | 71.8 | 70.9 | 70.3 | 71.9 | 71.5 | 72.1 | 70.8 | 71.1 | 72.7 | 70.7 | 71.0 | 73.0 | 70.5 | 67.5 | 70.4 | 69.5 | 73.6 | 75.3 | |||
| Model | BLiMP | GLUE | Brain-Score |
| TopoLM | 0.71 | 0.68 | 0.78 |
| Non-topo | 0.76 | 0.65 | 0.80 |
[Figure legend: node-label and edge-label color keys for HOMP, SMCN, and MCN; adjacencies A_{r1,r1} / coA_{r1,r2} / B_{r1,r2}, the "SCL" and "equiv" labels, cells C_{e0} through C_{e3,e0}, the neighborhood-function-induced update, the SCL layer, and the equivariant linear layer.]
| Model | Reference | ZINC | MOLHIV | MOLESOL |
| MAE (↓) | ROC-AUC (↑) | RMSE (↓) | ||
| GCN | Kipf & Welling (2016) | 0.321 ± 0.009 | 76.06 ± 0.97 | 1.114 ± 0.036 |
| GIN | Xu et al. (2018) | 0.163 ± 0.004 | 75.58 ± 1.40 | 1.173 ± 0.057 |
| CIN | Bodnar et al. (2021a) | 0.079 ± 0.006 | 80.94 ± 0.57 | 1.288 ± 0.026 |
| CIN++ | Giusti et al. (2023) | 0.077 ± 0.004 | 80.63 ± 0.94 | - |
| CIN + CycleNet | Yan et al. (2024) | 0.068 | - | - |
| Cellular Transformer | Ballester et al. (2024) | 0.080 | 79.46 | - |
| PPGN | Maron et al. (2019) | 0.079 ± 0.005 | - | - |
| PPGN++ (6) | Puny et al. (2023) | 0.071 ± 0.001 | - | - |
| DS-GNN | Bevilacqua et al. (2023) | 0.087 ± 0.003 | 76.54 ± 1.37 | 0.847 ± 0.015 |
| DSS-GNN | Bevilacqua et al. (2021) | 0.102 ± 0.003 | 76.78 ± 1.66 | - |
| SUN | Frasca et al. (2022) | 0.083 ± 0.003 | 80.03 ± 0.55 | - |
| GNN-SSWL | Zhang et al. (2023b) | 0.082 ± 0.003 | - | - |
| GNN-SSWL+ | Zhang et al. (2023b) | 0.070 ± 0.005 | 79.58 ± 0.35 | 0.837 ± 0.019 |
| Subgraphormer | Bar-Shalom et al. (2023) | 0.067 ± 0.007 | 80.38 ± 1.92 | 0.832 ± 0.043 |
| Subgraphormer + PE | Bar-Shalom et al. (2023) | 0.063 ± 0.001 | 79.48 ± 1.28 | 0.826 ± 0.010 |
| SMCN (ours) | This paper | 0.060 ± 0.004 | 81.16 ± 0.90 | 0.809 ± 0.037 |
| Model | Accuracy (↑) |
| GIN (Xu et al., 2018) | 89.4 ± 5.6% |
| CIN (Bodnar et al., 2021a) | 92.7 ± 6.1% |
| PPGN (Maron et al., 2019) | 90.6 ± 8.7% |
| DS-GNN (Bevilacqua et al., 2021) | 91.0 ± 4.8% |
| DSS-GNN (Bevilacqua et al., 2021) | 91.1 ± 7.0% |
| SUN (Frasca et al., 2022) | 92.7 ± 5.8% |
| SMCN (ours) | 92.5 ± 6.2% |
| SMCN | GNN-SSWL+ | CIN | |
| ZINC | 7.39 ± 0.17 | 9.65 ± 0.19 | 5.35 ± 0.33 |
| MOLHIV | 17.70 ± 0.42 | 51.02 ± 0.25 | 14.34 ± 0.27 |
| SMCN | GNN-SSWL+ | CIN | |
| ZINC | 0.93 ± 0.08 | 1.04 ± 0.03 | 0.71 ± 0.05 |
| MOLHIV | 2.09 ± 0.15 | 3.07 ± 0.03 | 2.02 ± 0.12 |
| Method | Classifier Guidance | Classifier-Free Guidance | Condition Contrastive Alignment |
| Modeling of log p(x|c)/p(x) | log p_θ(c|x) − log p(c) | log p_φ(x|c) − log p_θ(x) | β[log p_θ^sample(x|c) − log p_φ(x|c)] |
| Training loss | max_θ E_{p(x,c)} log p_θ(c|x) | max_θ E_{p(x)} log p_θ(x) | min_θ L_CCA in Eq. 11 |
| Sampling policy | log p_φ(x|c) + s·log p_θ(c|x) | (1 + s)·log p_φ(x|c) − s·log p_θ(x) | log p_θ^sample(x|c) |
| Extra training cost | ~9% of learning pφ | ~10% of learning pφ | ~1% of pretraining pφ |
| Sampling cost | ×1.3 | ×2 | ×1 |
| Applicable area | Diffusion | Diffusion & Autoregressive | Autoregressive |
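The sampling-policy row shows how each method combines conditional and unconditional scores at inference time. For classifier-free guidance this is a per-logit linear combination, sketched below (function and variable names are illustrative):

```python
def cfg_combine(logp_cond, logp_uncond, s):
    # Classifier-free guidance: (1 + s) * log p(x|c) - s * log p(x),
    # matching the sampling-policy row; requires two model evaluations
    # per step, hence the ~x2 sampling cost in the table.
    return [(1 + s) * lc - s * lu for lc, lu in zip(logp_cond, logp_uncond)]
```

CCA's ×1 sampling cost follows from the same table: alignment is trained into p_θ^sample directly, so no unconditional pass or score mixing is needed at sampling time.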
| Model | w/o Guidance | w/ Guidance | |||||
| FID↓ | IS↑ | Precision↑ | Recall↑ | FID↓ | IS↑ | ||
| Diffusion | ADM (Dhariwal & Nichol, 2021) | 7.49 | 127.5 | 0.72 | 0.63 | 3.94 | 215.8 |
| LDM-4 (Rombach et al., 2022) | 10.56 | 103.5 | 0.71 | 0.62 | 3.60 | 247.7 | |
| U-ViT-H/2 (Bao et al., 2023) | - | - | - | - | 2.29 | 263.9 | |
| DiT-XL/2 (Peebles & Xie, 2023) | 9.62 | 121.5 | 0.67 | 0.67 | 2.27 | 278.2 | |
| MDTv2-XL/2 (Gao et al., 2023) | 5.06 | 155.6 | 0.72 | 0.66 | 1.58 | 314.7 | |
| Mask | MaskGIT (Chang et al., 2022) | 6.18 | 182.1 | 0.80 | 0.51 | - | - |
| MAGVIT-v2 (Yu et al., 2023) | 3.65 | 200.5 | - | - | 1.78 | 319.4 | |
| MAGE (Li et al., 2023) | 6.93 | 195.8 | - | - | - | - | |
| Autoregressive | VQGAN (Esser et al., 2021) | 15.78 | 74.3 | - | - | 5.20 | 280.3 |
| ViT-VQGAN (Yu et al., 2021) | 4.17 | 175.1 | - | - | 3.04 | 227.4 | |
| RQ-Transformer (Lee et al., 2022) | 7.55 | 134.0 | - | - | 3.80 | 323.7 | |
| LlamaGen-3B (Sun et al., 2024) | 9.38 | 112.9 | 0.69 | 0.67 | 2.18 | 263.3 | |
| +CCA (Ours) | 2.69 | 276.8 | 0.80 | 0.59 | - | - | |
| VAR-d30 (Tian et al., 2024) | 5.25 | 175.6 | 0.75 | 0.62 | 1.92 | 323.1 | |
| +CCA (Ours) | 2.54 | 264.2 | 0.83 | 0.56 | - | - | |
| Model | FID↓ | IS↑ | sFID↓ | Precision↑ | Recall↑ |
| LlamaGen-L | 19.00 | 64.7 | 8.78 | 0.61 | 0.67 |
| +DPO | 61.69 | 30.8 | 44.98 | 0.36 | 0.40 |
| +Unlearn | 12.22 | 111.6 | 7.99 | 0.66 | 0.64 |
| +CCA | 3.43 | 288.2 | 7.44 | 0.81 | 0.52 |
| Model | FID↓ | IS↑ | sFID↓ | Precision↑ | Recall↑ |
| VAR-d24 | 6.20 | 154.3 | 8.50 | 0.74 | 0.62 |
| +DPO | 7.53 | 232.6 | 19.10 | 0.85 | 0.34 |
| +Unlearn | 5.55 | 165.9 | 8.41 | 0.75 | 0.61 |
| +CCA | 2.63 | 298.8 | 7.63 | 0.84 | 0.55 |
| Type | LlamaGen | VAR | |||||||
| Model Size | B 111M | L 343M | XL 775M | XXL 1.4B | 3B 3.1B | d16 310M | d20 600M | d24 1.0B | d30 2.0B |
| CCA β | 0.02 | 0.02 | 0.02 | 0.02 | 0.02 | 0.02 | 0.02 | 0.02 | 0.02 |
| CCA λ | 1000 | 300 | 1000 | 1000 | 500 | 50 | 50 | 100 | 1000 |
| CCA+CFG β | 0.1 | 0.02 | 0.1 | 0.1 | 0.1 | - | - | - | - |
| CCA+CFG λ | 1 | 1 | 1 | 1 | 1 | - | - | - | - |
| Learning rate | 1e-5 | 1e-5 | 1e-5 | 1e-5 | 1e-5 | 2e-5 | 2e-5 | 2e-5 | 2e-5 |
| Dropout? | Yes | Yes | Yes | Yes | Yes | None | Yes | Yes | Yes |
| Batch size | 256 | 256 | 256 | 256 | 256 | 256 | 256 | 256 | 256 |
| Inference-time λ | 10 | 100 | 300 (Chosen) | 1000 | 3000 | 10000 |
| FID | 7.23 | 4.18 | 3.59 | 3.73 | 4.12 | 5.33 |
| IS | 153.1 | 218.8 | 256.2 | 277.5 | 307.7 | 341.2 |
| Dataset Label Amount | CIFAR-100 | STL-10 | Imagewoof | ImageNet 100K | | | | | | |
| 400 | 2500 | 10000 | 40 | 250 | 1000 | 250 | 1000 | 2000 | ||
| SL | 11.45 ± 0.12 | 40.45 ± 0.50 | 63.77 ± 0.29 | 23.61 ± 1.62 | 38.83 ± 1.12 | 64.08 ± 0.47 | 25.94 ± 1.54 | 42.04 ± 0.90 | 60.77 ± 1.04 | 44.62 ± 1.16 |
| FixMatch | 55.16 ± 0.63 | 71.36 ± 0.44 | 77.25 ± 0.22 | 70.00 ± 4.02 | 88.73 ± 0.92 | 93.45 ± 0.19 | 43.00 ± 1.46 | 64.91 ± 1.18 | 74.05 ± 0.15 | 50.80 ± 0.73 |
| SA-FixMatch | 55.57 ± 0.43 | 72.12 ± 0.20 | 77.46 ± 0.16 | 71.81 ± 4.23 | 89.45 ± 1.19 | 94.04 ± 0.19 | 46.73 ± 1.36 | 67.76 ± 1.29 | 75.62 ± 0.13 | 52.18 ± 0.32 |
| Dataset | STL-10 | CIFAR-100 | ||||||
| Algorithm | FlexMatch | FreeMatch | Dash | SoftMatch | FlexMatch | FreeMatch | Dash | SoftMatch |
| CutOut | 72.13 ± 5.66 | 75.29 ± 1.29 | 67.51 ± 1.47 | 78.55 ± 2.90 | 59.65 ± 1.14 | 58.44 ± 1.92 | 48.56 ± 2.16 | 60.16 ± 2.22 |
| SA-CutOut | 75.91 ± 5.59 | 77.91 ± 2.01 | 78.41 ± 1.91 | 84.04 ± 4.67 | 60.16 ± 1.06 | 59.12 ± 1.69 | 50.24 ± 1.82 | 60.69 ± 1.95 |
| Dataset | STL-10 | CIFAR-100 | ImageNet | ||||
| Data Amount | 40 | 250 | 1000 | 400 | 2500 | 10000 | 100K |
| SL | 19.93 | 44.06 | 67.29 | 9.87 | 40.98 | 63.48 | 41.82 |
| FixMatch | 38.88 | 64.70 | 79.15 | 18.58 | 47.20 | 67.94 | 43.34 |
| SA-FixMatch | 40.25 | 65.85 | 79.74 | 19.72 | 47.71 | 68.30 | 44.88 |
| Mask Size | 4 | 8 | 12 | 16 |
| FixMatch | 48.65 | 50.11 | 53.48 | 55.23 |
| SA-FixMatch | 52.71 | 52.95 | 55.37 | 55.78 |
| Original | CutOut | ShearX | Solarize | TranslationX |
| 80.95 | 38.67 | 78.38 | 52.70 | 80.69 |
| Sharpness | Posterize | Equalize | Rotate | Color |
| 72.99 | 76.17 | 60.14 | 67.97 | 69.71 |
| Original | CutOut | ShearX | Solarize | TranslationX |
| 86.44 | 62.94 | 85.01 | 78.94 | 84.44 |
| Sharpness | Posterize | Equalize | Rotate | Color |
| 82.17 | 86.19 | 80.74 | 82.29 | 84.34 |
| Dataset | STL-10 | CIFAR-100 |
| Weak Augmentation | 92.65 | 77.27 |
| No Weak Augmentation | 91.83 | 77.19 |
| Method | SL | FixMatch |
| CutOut | 23.98 | 68.30 |
| No CutOut | 22.88 | 53.64 |
| Dataset | Total Training Data | Total Labeled Data in Training Data | Test Data |
| STL-10 | 105000 | 5000 | 8000 |
| CIFAR-100 | 50000 | 50000 | 10000 |
| Imagewoof | 9025 | 9025 | 3929 |
| ImageNet | 1281167 | 1281167 | 50000 |
| Dataset | CIFAR-100 | STL-10 | Imagewoof | ImageNet |
| Model | WRN-28-8 | WRN-37-2 | WRN-37-2 | ResNet-50 |
| Weight Decay | 1e-3 | 5e-4 | 5e-4 | 3e-4 |
| Batch Size | 64 | 128 | ||
| Unlabeled Data Ratio μ | 7 | 1 | | |
| Threshold τ | 0.95 | 0.7 | ||
| Learning Rate η | 0.03 | |||
| SGD Momentum | 0.9 | |||
| EMA Momentum | 0.999 | |||
| Unsupervised Loss Weight λ | 1 | |||
| Algorithm | φ1 | φ2 | φ3 | ||||||
| Train | Test | Test-larger | Train | Test | Test-larger | Train | Test | Test-larger | |
| MPNN-1 | 0.907 | 0.903 | 0.909 | 0.867 | 0.865 | 0.666 | 0.866 | 0.865 | 0.527 |
| MPNN-2 | 1.000 | 1.000 | 1.000 | 0.863 | 0.863 | 0.556 | 0.868 | 0.878 | 0.657 |
| MPNN-3 | 1.000 | 1.000 | 1.000 | 0.855 | 0.864 | 0.601 | 0.877 | 0.877 | 0.477 |
| Subgraph-1 | 0.905 | 0.903 | 0.899 | 0.867 | 0.872 | 0.668 | 0.882 | 0.859 | 0.524 |
| Subgraph-2 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.914 | 0.906 | 0.682 |
| Subgraph-3 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.901 | 0.905 | 0.582 |
| 2-FGNN-1 | 1.000 | 1.000 | 1.000 | 0.865 | 0.862 | 0.666 | 0.857 | 0.847 | 0.531 |
| 2-FGNN-2 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.891 | 0.896 | 0.639 |
| 2-FGNN-3 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| Algorithm | φ1 | φ2 | | | | |
| Train | Test | Test-larger | Train | Test | Test-larger | |
| MPNN-1 | 1.000 | 1.000 | 1.000 | 0.818 | 0.819 | 0.586 |
| MPNN-2 | 1.000 | 1.000 | 1.000 | 0.817 | 0.817 | 0.328 |
| NBFNet-1 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| NBFNet-2 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| 2-FGNN-1 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| 2-FGNN-2 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| Graph | MPNN | Subgraph | 2-FGNN | SEAL |
| 2-regular | × | ✓ | ✓ | ✓ |
| Shrikhande & Rook's 4x4 | × | × | × | ✓ |
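The 2-regular row reflects the classic limitation that bounds MPNN expressiveness: 1-WL color refinement assigns identical color histograms to any two 2-regular graphs of the same size, e.g. a 6-cycle versus two disjoint triangles. A self-contained sketch (graph encodings are illustrative):

```python
from collections import Counter

def wl_colors(adj, rounds=3):
    # 1-WL color refinement: each node's new color hashes its old color
    # together with the sorted multiset of its neighbors' colors.
    colors = {v: 0 for v in adj}
    for _ in range(rounds):
        colors = {v: hash((colors[v], tuple(sorted(colors[u] for u in adj[v]))))
                  for v in adj}
    return Counter(colors.values())  # histogram of final colors

# Two non-isomorphic 2-regular graphs on 6 nodes
hexagon = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
```

Both graphs yield the same color histogram, so any model bounded by 1-WL (the MPNN column) cannot separate them, while the subgraph and 2-FGNN columns can.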
| Method | Accuracy@t1 | Accuracy@t2 | Δ(t1, t2) | Δi→c(t1, t2) | Δc→i(t1, t2) |
| Base model | 52.6% | 41.4% | -11.2% | 4.6% | 15.8% |
| STaR w/ D_STaR | 55.4% | 41.2% | -14.2% | 5.4% | 19.6% |
| STaR w/ D_STaR+ | 53.6% | 54.0% | 0.4% | 2.6% | 2.2% |
| Pair-SFT w/ D_SFT | 52.4% | 54.2% | 1.8% | 5.4% | 3.6% |
| Pair-SFT w/ D_SFT+ | 55.0% | 55.0% | 0% | 0% | 0% |
| Approach | Acc.@t1 | Acc.@t2 | Δ(t1, t2) | Δi→c(t1, t2) | Δc→i(t1, t2) |
| Base model | 52.6% | 41.4% | -11.2% | 4.6% | 15.8% |
| Self-Refine (Madaan et al., 2023) | 52.8% | 51.8% | -1.0% | 3.2% | 4.2% |
| STaR w/ D_STaR+ (Zelikman et al., 2022) | 53.6% | 54.0% | 0.4% | 2.6% | 2.2% |
| Pair-SFT w/ D_SFT (Welleck et al., 2023) | 52.4% | 54.2% | 1.8% | 5.4% | 3.6% |
| SCoRe (Ours) | 60.0% | 64.4% | 4.4% | 5.8% | 1.4% |
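The last three columns decompose the accuracy change between attempts: Δ(t1,t2) = Δi→c − Δc→i, where the two terms are the fractions of problems revised from incorrect to correct and vice versa (e.g. for the base model, 4.6% − 15.8% = −11.2%). A minimal sketch of this bookkeeping (names are illustrative):

```python
def edit_deltas(correct_t1, correct_t2):
    # Per-problem transitions between attempt 1 and attempt 2:
    #   delta = overall accuracy change Acc@t2 - Acc@t1
    #   i2c   = fraction revised incorrect -> correct
    #   c2i   = fraction revised correct -> incorrect
    n = len(correct_t1)
    i2c = sum(1 for a, b in zip(correct_t1, correct_t2) if not a and b) / n
    c2i = sum(1 for a, b in zip(correct_t1, correct_t2) if a and not b) / n
    delta = sum(correct_t2) / n - sum(correct_t1) / n
    return delta, i2c, c2i
```

A low Δc→i alongside a healthy Δi→c is what distinguishes genuine self-correction from a model that merely perturbs its first answer.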
| Method | MBPP-R | Acc.@t1 | Acc.@t2 | Δ(t1, t2) | Δi→c(t1, t2) | Δc→i(t1, t2) |
| Base model | 47.3% | 53.7% | 56.7% | 3.0% | 7.9% | 4.9% |
| Self-Refine | 30.7% | 53.7% | 52.5% | -1.2% | 9.8% | 11.0% |
| Pair-SFT | 59.8% | 56.1% | 54.3% | -1.8% | 4.3% | 6.1% |
| SCoRe (Ours) | 60.6% | 52.4% | 64.6% | 12.2% | 15.2% | 3.0% |
| Method | Accuracy@t1 | Accuracy@t2 | Δ(t1, t2) |
| SCoRe (Ours) | 60.0% | 64.4% | 4.4% |
| w/o multi-turn training | 61.8% | 59.4% | -2.4% |
| w/o Stage I | 59.2% | 61.4% | 2.2% |
| w/o reward shaping | 60.0% | 62.6% | 2.6% |
| w/ STaR instead of REINFORCE Stage II | 56.2% | 58.4% | 2.2% |
| w/o online turn 1 samples | 60.4% | 60.6% | 0.2% |
| Approach | Acc.@t1 | Acc.@t2 | Acc.@t3 |
| Base model | 16.8% | 16.8% | 17.0% |
| SCoRe Stage 1(a) | 17.6% | 20.0% | 19.8% |
| SCoRe Stage 1(b) | 16.6% | 18.4% | 23.2% |
| SCoRe Stage 2 | 23.0% | 24.0% | 24.0% |
| Approach | Acc.@t1 | Acc.@t2 | Acc.@t3 |
| Base model | 21.4% | 20.7% | 20.9% |
| SCoRe Stage 1(a) | 17.5% | 20.3% | 20.9% |
| SCoRe Stage 1(b) | 17.7% | 20.4% | 24.8% |
| SCoRe Stage 2 | 23.4% | 25.7% | 25.6% |
| Approach | Acc.@t1 | Acc.@t2 | Acc.@t3 |
| Base model | 4.1% | 3.9% | 3.6% |
| SCoRe Stage 1(a) | 3.1% | 3.1% | 3.4% |
| SCoRe Stage 1(b) | 2.3% | 2.8% | 4.1% |
| SCoRe Stage 2 | 3.9% | 5.2% | 5.7% |
| Hyperparameter | Value |
| Base model | Gemini 1.5 Flash |
| Optimizer | Adam |
| Learning rate | 5e-6 |
| Training steps | 3000 |
| Batch size | 512 |
| Sampling temperature | 1.0 |
| α | 10 |
| β1 | 0.01 |
| β2 | 0.1 |
| Hyperparameter | Value |
| Base model | Gemini 1.0 Pro |
| Optimizer | Adam |
| Learning rate | 1e-5 |
| Training steps | 1500 |
| Batch size | 128 |
| Sampling temperature | 1.0 |
| α | 10 |
| β1 | 0.01 |
| β2 | 0.25 |
| Name | Train date | N | D | HF repository | Citation |
| baichuan-13b | 2023-06 | 13 | 1.4 | baichuan-inc/Baichuan-13B-Base | Yang et al. (2023a) |
| baichuan-7b | 2023-06 | 7 | 1.2 | baichuan-inc/Baichuan-7B | Yang et al. (2023a) |
| baichuan2-13b | 2023-09 | 13 | 2.6 | baichuan-inc/Baichuan2-13B-Base | Yang et al. (2023a) |
| baichuan2-7b | 2023-09 | 7 | 2.6 | baichuan-inc/Baichuan2-7B-Base | Yang et al. (2023a) |
| falcon-11b | 2024-05 | 11 | 5.0 | tiiuae/falcon-11B | Almazrouei et al. (2023) |
| falcon-7b | 2023-04 | 7 | 1.5 | tiiuae/falcon-7b | Almazrouei et al. (2023) |
| gemma-2b | 2024-02 | 2 | 3.0 | google/gemma-2b | Gemma et al. (2024) |
| gemma-7b | 2024-02 | 7 | 6.0 | google/gemma-7b | Gemma et al. (2024) |
| gpt-j-6b | 2021-03 | 6 | 0.4 | EleutherAI/gpt-j-6b | Wang & Komatsuzaki (2021) |
| internlm-20b | 2023-09 | 20 | 2.3 | internlm/internlm-20b | InternLM (2023) |
| internlm-7b | 2023-07 | 7 | 1.0 | internlm/internlm-7b | InternLM (2023) |
| internlm2-base-20b | 2024-01 | 20 | 2.6 | internlm/internlm2-base-20b | Cai et al. (2024) |
| internlm2-base-7b | 2024-01 | 7 | 2.6 | internlm/internlm2-base-7b | Cai et al. (2024) |
| llama-13b | 2023-02 | 13 | 1.0 | None | Touvron et al. (2023a) |
| llama-2-13b | 2023-07 | 13 | 2.0 | meta-llama/Llama-2-13b-hf | Touvron et al. (2023b) |
| llama-2-70b | 2023-07 | 70 | 2.0 | meta-llama/Llama-2-70b-hf | Touvron et al. (2023b) |
| llama-2-7b | 2023-07 | 7 | 2.0 | meta-llama/Llama-2-7b-hf | Touvron et al. (2023b) |
| llama-3-8b | 2024-04 | 8 | 15.0 | meta-llama/Meta-Llama-3-8B | MetaAI (2024) |
| llama-30b | 2023-02 | 32.5 | 1.4 | None | Touvron et al. (2023a) |
| llama-65b | 2023-02 | 65.2 | 1.4 | None | Touvron et al. (2023a) |
| llama-7b | 2023-02 | 7 | 1.0 | None | Touvron et al. (2023a) |
| map-neo-7b | 2024-05 | 7 | 4.5 | m-a-p/neo_7b | Zhang et al. (2024) |
| olmo-1.7-7b | 2024-04 | 7 | 2.0 | allenai/OLMo-1.7-7B-hf | Groeneveld et al. (2024) |
| olmo-1b | 2024-01 | 1 | 2.0 | allenai/OLMo-1B-hf | Groeneveld et al. (2024) |
| olmo-7b | 2024-01 | 7 | 2.5 | allenai/OLMo-7B-hf | Groeneveld et al. (2024) |
| openllama-13b | 2023-06 | 13 | 1.0 | openlm-research/open_llama_13b | OpenLlama (2023) |
| openllama-3b | 2023-06 | 3 | 1.0 | openlm-research/open_llama_3b | OpenLlama (2023) |
| openllama-3b-v2 | 2023-07 | 3 | 1.0 | openlm-research/open_llama_3b_v2 | OpenLlama (2023) |
| openllama-7b | 2023-06 | 7 | 1.0 | openlm-research/open_llama_7b | OpenLlama (2023) |
| openllama-7b-v2 | 2023-07 | 7 | 1.0 | openlm-research/open_llama_7b_v2 | OpenLlama (2023) |
| pythia-1.4b | 2022-10 | 1.4 | 0.3 | EleutherAI/pythia-1.4b | Biderman et al. (2023) |
| pythia-12b | 2022-10 | 12 | 0.3 | EleutherAI/pythia-12b | Biderman et al. (2023) |
| pythia-160m | 2022-10 | 0.16 | 0.3 | EleutherAI/pythia-160m | Biderman et al. (2023) |
| pythia-1b | 2022-10 | 1 | 0.3 | EleutherAI/pythia-1b | Biderman et al. (2023) |
| pythia-2.8b | 2022-10 | 2.8 | 0.3 | EleutherAI/pythia-2.8b | Biderman et al. (2023) |
| pythia-410m | 2022-10 | 0.41 | 0.3 | EleutherAI/pythia-410m | Biderman et al. (2023) |
| pythia-6.9b | 2022-10 | 6.9 | 0.3 | EleutherAI/pythia-6.9b | Biderman et al. (2023) |
| pythia-70m | 2022-10 | 0.07 | 0.3 | EleutherAI/pythia-70m | Biderman et al. (2023) |
| qwen-1.5-0.5b | 2024-01 | 0.5 | 2.4 | Qwen/Qwen1.5-0.5B | Bai et al. (2023) |
| qwen-1.5-1.8b | 2024-01 | 1.8 | 2.4 | Qwen/Qwen1.5-1.8B | Bai et al. (2023) |
| qwen-1.5-14b | 2024-01 | 14 | 4.0 | Qwen/Qwen1.5-14B | Bai et al. (2023) |
| qwen-1.5-4b | 2024-01 | 4 | 2.4 | Qwen/Qwen1.5-4B | Bai et al. (2023) |
| qwen-1.5-7b | 2024-01 | 7 | 4.0 | Qwen/Qwen1.5-7B | Bai et al. (2023) |
| qwen2-0.5b | 2024-06 | 0.5 | 12.0 | Qwen/Qwen2-0.5B | Yang et al. (2024) |
| qwen2-1.5b | 2024-06 | 1.5 | 7.0 | Qwen/Qwen2-1.5B | Yang et al. (2024) |
| qwen2-7b | 2024-06 | 7 | 7.0 | Qwen/Qwen2-7B | Yang et al. (2024) |
| redpajama-3b | 2023-05 | 3 | 0.8 | togethercomputer/RedPajama-INCITE-Base-3B-v1 | TogetherWeCompute (2023) |
| redpajama-7b | 2023-05 | 7 | 1.0 | togethercomputer/RedPajama-INCITE-7B-Base | TogetherWeCompute (2023) |
| skywork-13b | 2023-10 | 13 | 3.2 | Skywork/Skywork-13B-base | Wei et al. (2023) |
| stablelm-2-1.6b | 2024-01 | 1.6 | 2.0 | stabilityai/stablelm-2-1.6b | Bellagente et al. (2024) |
| stablelm-2-12b | 2024-03 | 12.1 | 2.0 | stabilityai/stablelm-2-12b | Bellagente et al. (2024) |
| stablelm-3b-4e1t | 2023-09 | 2.8 | 4.0 | stabilityai/stablelm-3b-4e1t | StabilityAI (2023) |
| stablelm-base-alpha-3b-v2 | 2023-08 | 2.8 | 1.1 | stabilityai/stablelm-base-alpha-3b-v2 | StabilityAI (2023) |
| stablelm-base-alpha-7b-v2 | 2023-08 | 7 | 1.1 | stabilityai/stablelm-base-alpha-7b-v2 | StabilityAI (2023) |
| yi-6b | 2023-11 | 6 | 3.1 | 01-ai/Yi-1.5-6B | Young et al. (2024) |
| ziya2-13b-base | 2023-11 | 13 | 2.65 | IDEA-CCNL/Ziya2-13B-Base | Gan et al. (2023) |
| Name | Train date | Base model | HF repository | Citation |
| falcon-7b-instruct | 2023-04 | falcon-7b | tiiuae/falcon-7b-instruct | Almazrouei et al. (2023) |
| gemma-2b-instruct | 2024-02 | gemma-2b | google/gemma-2b-it | Gemma et al. (2024) |
| gemma-7b-instruct | 2024-02 | gemma-7b | google/gemma-7b-it | Gemma et al. (2024) |
| internlm-chat-20b | 2023-09 | internlm-20b | internlm/internlm-chat-20b | InternLM (2023) |
| internlm-chat-7b | 2023-07 | internlm-7b | internlm/internlm-chat-7b | InternLM (2023) |
| internlm2-7b | 2024-01 | internlm2-base-7b | internlm/internlm2-7b | Cai et al. (2024) |
| internlm2-chat-1_8b | 2024-01 | internlm2-base-7b | internlm/internlm2-chat-1_8b | Cai et al. (2024) |
| internlm2-chat-20b | 2024-01 | internlm2-base-20b | internlm/internlm2-chat-20b | Cai et al. (2024) |
| internlm2-chat-7b | 2024-01 | internlm2-base-7b | internlm/internlm2-chat-7b | Cai et al. (2024) |
| llama-2-13b-chat | 2023-07 | llama-2- +13b | meta-llama/Llama-2-13b-chat-hf | Touvron et al. (2023b) |
| llama-2-7b-chat | 2023-07 | llama-2-7b | meta-llama/Llama-2-7b-chat-hf | Touvron et al. (2023b) |
| llama-3-8b-instruct | 2024-04 | llama-3-8b | meta-llama/Meta-Llama-3-8B-Instruct | MetaAI (2024) |
| map-neo-7b-instruct | 2024-05 | map-neo-7b | m-a-p/neo_7b_instruct_v0.1 | Zhang et al. (2024) |
| map-neo-7b-sft | 2024-05 | map-neo-7b | m-a-p/neo_7b_sft_v0.1 | Zhang et al. (2024) |
| olmo-7b-0724-instruct-hf | 2024-01 | olmo-7b | allenai/OLMo-7B-0724-Instruct-hf | Groeneveld et al. (2024) |
| olmo-7b-0724-sft-hf | 2024-01 | olmo-7b | allenai/OLMo-7B-0724-SFT-hf | Groeneveld et al. (2024) |
| olmo-7b-instruct-hf | 2024-01 | olmo-7b | allenai/OLMo-7B-Instruct-hf | Groeneveld et al. (2024) |
| olmo-7b-sft-hf | 2024-01 | olmo-7b | allenai/OLMo-7B-SFT-hf | Groeneveld et al. (2024) |
| qwen-1.5-0.5b-chat | 2024-01 | qwen-1.5-0.5b | Qwen/Qwen1.5-0.5B-Chat | Bai et al. (2023) |
| qwen-1.5-1.8b-chat | 2024-01 | qwen-1.5-1.8b | Qwen/Qwen1.5-1.8B-Chat | Bai et al. (2023) |
| qwen-1.5-14b-chat | 2024-01 | qwen-1.5-14b | Qwen/Qwen1.5-14B-Chat | Bai et al. (2023) |
| qwen-1.5-4b-chat | 2024-01 | qwen-1.5-4b | Qwen/Qwen1.5-4B-Chat | Bai et al. (2023) |
| qwen-1.5-7b-chat | 2024-01 | qwen-1.5-7b | Qwen/Qwen1.5-7B-Chat | Bai et al. (2023) |
| redpajama-7b-chat | 2023-05 | redpajama-7b | togethercomputer/RedPajama-INCITE-7B-Chat | TogetherWeCompute (2023) |
| redpajama-chat-3b-v1 | 2023-05 | redpajama-3b | togethercomputer/RedPajama-INCITE-Chat-3B-v1 | TogetherWeCompute (2023) |
| redpajama-instruct-3b-v1 | 2023-05 | redpajama-3b | togethercomputer/RedPajama-INCITE-Instruct-3B-v1 | TogetherWeCompute (2023) |
| redpajama-instruct-7b | 2023-05 | redpajama-7b | togethercomputer/RedPajama-INCITE-7B-Instruct | TogetherWeCompute (2023) |
| stablelm-2-1.6b-chat | 2024-01 | stablelm-2-1.6b | stabilityai/stablelm-2-1_6b-chat | Bellagente et al. (2024) |
| stablelm-2-12b-chat | 2024-03 | stablelm-2-12b | stabilityai/stablelm-2-12b-chat | Bellagente et al. (2024) |
| stablelm-2-zephyr-1.6b | 2024-06 | stablelm-2-1.6b | stabilityai/stablelm-2-zephyr-1_6b | Bellagente et al. (2024) |
| stablelm-zephyr-3b | 2023-11 | stablelm-3b-4e1t | stabilityai/stablelm-zephyr-3b | Tunstall et al. (2023) |
| vicuna-13b-v1.1 | 2023-04 | llama-13b | lmsys/vicuna-13b-v1.1 | Chiang et al. (2023) |
| vicuna-13b-v1.3 | 2023-06 | llama-13b | lmsys/vicuna-13b-v1.3 | Chiang et al. (2023) |
| vicuna-7b-v1.1 | 2023-04 | llama-7b | lmsys/vicuna-7b-v1.1 | Chiang et al. (2023) |
| vicuna-7b-v1.3 | 2023-06 | llama-7b | lmsys/vicuna-7b-v1.3 | Chiang et al. (2023) |
| | MMLU | GSM8K |
| φ | 0.071 | 0.168 |
| | (0.018) | (0.032) |
| R² | 0.530 | 0.503 |
| | MMLU | GSM8K |
| ψ | -0.004 | 0.000 |
| | (0.009) | (0.032) |
| R² | 0.926 | 0.763 |
| Discriminator test | MMLU | GSM8K |
| Older models vs newer models | 64.6% | 73.9% |
| Fine-tuned, older models vs newer models | 52.2% | 52.5% |
| Model | C4 PPL (↓) | Wiki PPL (↓) | Llama Acc (↑) | MS-COCO CDr (↑) | MS-COCO FID (↓) | MS-COCO CLIP (↑) |
| Transfusion | 7.72 | 4.28 | 61.5 | 27.2 | 16.8 | 25.5 |
| Chameleon | 8.41 | 4.69 | 59.1 | 18.0 | 29.6 | 24.3 |
| Parity FLOP Ratio | 0.489 | 0.526 | 0.600 | 0.218 | 0.029 | 0.319 |
| Model | Batch | C4 PPL (↓) | Wiki PPL (↓) | Llama Acc (↑) |
| Llama 2 | 1M Text Tokens | 10.1 | 5.8 | 53.7 |
| Transfusion + Diffusion | + 1M Image Patches | (+0.3) 10.4 | (+0.2) 6.0 | (-1.0) 52.7 |
| Chameleon + Stability Modifications | 1M Text Tokens | (+0.9) 11.0 | (+0.5) 6.3 | (-1.8) 51.9 |
| + LM Loss on Image Tokens | + 1M Image Tokens | (+0.8) 11.8 | (+0.5) 6.8 | (-3.0) 48.9 |
| Enc/Dec | Attention | C4 PPL (↓) | Wiki PPL (↓) | Llama Acc (↑) | MS-COCO CDr (↑) | MS-COCO FID (↓) | MS-COCO CLIP (↑) |
| Linear | Causal | 10.4 | 6.0 | 51.4 | 12.7 | 61.3 | 23.0 |
| Linear | Bidirectional | 10.4 | 6.0 | 51.7 | 16.0 | 20.3 | 24.0 |
| U-Net | Causal | 10.3 | 5.9 | 52.0 | 23.3 | 16.8 | 25.3 |
| U-Net | Bidirectional | 10.3 | 5.9 | 51.9 | 25.4 | 16.7 | 25.4 |
| Enc/Dec | Latent/Patch | Pixel/Patch | Patch/Image | C4 PPL (↓) | Wiki PPL (↓) | Llama Acc (↑) | MS-COCO CDr (↑) | MS-COCO FID (↓) | MS-COCO CLIP (↑) |
| None | 1×1 | 8×8 | 1024 | 10.3 | 5.9 | 52.2 | 12.0 | 21.0 | 24.0 |
| U-Net | 2×2 | 16×16 | 256 | 10.3 | 5.9 | 51.9 | 25.4 | 16.7 | 25.4 |
| U-Net | 4×4 | 32×32 | 64 | 10.7 | 6.2 | 50.7 | 29.9 | 16.0 | 25.7 |
| U-Net | 8×8 | 64×64 | 16 | 11.4 | 6.6 | 49.2 | 29.5 | 16.1 | 25.2 |
| Model Params | Enc/Dec | Δ Enc/Dec Params | C4 PPL (↓) | Wiki PPL (↓) | Llama Acc (↑) | MS-COCO CDr (↑) | MS-COCO FID (↓) | MS-COCO CLIP (↑) |
| 0.37B | Linear | 0.4% | 12.0 | 7.0 | 47.9 | 11.1 | 21.5 | 22.4 |
| 0.37B | U-Net | 71.3% | 11.8 | 6.9 | 48.8 | 21.1 | 18.1 | 24.9 |
| 1.4B | Linear | 0.4% | 9.5 | 5.4 | 53.8 | 19.1 | 19.4 | 24.3 |
| 1.4B | U-Net | 19.3% | 9.4 | 5.4 | 53.4 | 28.1 | 16.6 | 25.7 |
| 7B | Linear | 0.3% | 7.7 | 4.3 | 61.5 | 27.2 | 18.6 | 25.9 |
| 7B | U-Net | 3.8% | 7.8 | 4.3 | 61.1 | 33.7 | 16.0 | 26.5 |
| Model | Model Params | Text Tokens | Images | Llama Acc (↑) | COCO FID (↓) | Gen Eval (↑) |
| Llama 1 (Touvron et al., 2023a) | 7.1B | 1.4T | — | 66.1 | — | — |
| Llama 2 (Touvron et al., 2023b) | 7.1B | 2.0T | — | 66.3 | — | — |
| Chameleon (Chameleon Team, 2024) | 7.1B | 6.0T | 5.0B | 67.1 | 26.74 | 0.39 |
| Imagen (Saharia et al., 2022) | 2.6B + 4.7B* | — | 5.0B | — | 7.27 | — |
| Parti (Yu et al., 2022) | 20B | — | 4.8B | — | 7.23 | — |
| SD 2.1 (Rombach et al., 2022b) | 0.9B + 0.1B* | — | 2.3B | — | — | 0.50 |
| DALL-E 2 (Ramesh et al., 2022) | 4.2B + 1B* | — | 2.6B | — | 10.39 | 0.52 |
| SDXL (Podell et al., 2023) | 2.6B + 0.8B* | — | 1.6B | — | — | 0.55 |
| DeepFloyd (Stability AI, 2024) | 5.5B + 4.7B* | — | 7.5B | — | 6.66 | 0.61 |
| SD 3 (Esser et al., 2024b) | 8B + 4.7B* | — | 2.0B | — | — | 0.68 |
| Transfusion (Ours) | 7.3B | 1.0T | 3.5B | 66.1 | 6.78 | 0.63 |
| Input | Output | Benchmark | Metric |
| Text | Text | Wikipedia | Perplexity (↓) |
| Text | Text | C4 | Perplexity (↓) |
| Text | Text | Llama 2 Eval Suite | Accuracy (↑) |
| Image | Text | MS-COCO 5k | CIDEr (↑) |
| Text | Image | MS-COCO 30k | FID (↓), CLIP (↑) |
| Text | Image | GenEval | GenEval score (↑) |
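Since several of the tables above report perplexity, note that it is simply the exponential of the mean per-token negative log-likelihood; a minimal sketch:

```python
import math

def perplexity(token_log_probs):
    """Perplexity = exp of the average negative log-likelihood per token."""
    nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(nll)

# Four tokens, each assigned probability 0.25 by the model:
log_probs = [math.log(0.25)] * 4
print(perplexity(log_probs))  # 4.0 -- equivalent to a uniform choice over 4 tokens
```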
| Size | Layers | Emb Dim | Att Heads |
| 0.16B | 16 | 768 | 12 |
| 0.37B | 24 | 1024 | 16 |
| 0.76B | 24 | 1536 | 24 |
| 1.4B | 24 | 2048 | 16 |
| 7B | 32 | 4096 | 32 |
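The size column can be roughly cross-checked against the layer and width columns using the common 12 · L · d² rule of thumb for decoder-only transformers (attention projections plus a 4× MLP per layer, embeddings excluded); a sketch under that assumption, not the paper's exact accounting:

```python
def approx_params(layers, d_model):
    # ~4*d^2 for attention (Q, K, V, O) + ~8*d^2 for a 4x-width MLP, per layer
    return 12 * layers * d_model ** 2

configs = {"0.16B": (16, 768), "0.37B": (24, 1024), "0.76B": (24, 1536),
           "1.4B": (24, 2048), "7B": (32, 4096)}
for name, (layers, dim) in configs.items():
    print(name, f"~{approx_params(layers, dim) / 1e9:.2f}B (excl. embeddings)")
```

Embedding and output matrices account for most of the remaining gap at the smaller sizes.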
| Enc/Dec | Latent/Patch | Pixel/Patch | Patch/Image | C4 PPL (↓) | Wiki PPL (↓) | Llama Acc (↑) | MS-COCO CDr (↑) | MS-COCO FID (↓) | MS-COCO CLIP (↑) |
| None | 1×1 | 8×8 | 1024 | 10.3 | 5.9 | 52.2 | 12.0 | 21.0 | 24.0 |
| Linear | 2×2 | 16×16 | 256 | 10.4 | 6.0 | 51.7 | 16.0 | 20.3 | 24.0 |
| Linear | 4×4 | 32×32 | 64 | 10.9 | 6.3 | 49.8 | 14.3 | 25.6 | 22.6 |
| Linear | 8×8 | 64×64 | 16 | 11.7 | 6.9 | 47.7 | 11.3 | 43.5 | 18.9 |
| U-Net | 2×2 | 16×16 | 256 | 10.3 | 5.9 | 51.9 | 25.4 | 16.7 | 25.4 |
| U-Net | 4×4 | 32×32 | 64 | 10.7 | 6.2 | 50.7 | 29.9 | 16.0 | 25.7 |
| U-Net | 8×8 | 64×64 | 16 | 11.4 | 6.6 | 49.2 | 29.5 | 16.1 | 25.2 |
| Model Params | Enc/Dec | Δ Enc/Dec Params | C4 PPL (↓) | Wiki PPL (↓) | Llama Acc (↑) | MS-COCO CDr (↑) | MS-COCO FID (↓) | MS-COCO CLIP (↑) |
| 0.16B | Linear | 0.5% | 14.8 | 8.8 | 44.2 | 6.2 | 37.6 | 20.0 |
| 0.16B | U-Net | 106.1% | 14.4 | 8.5 | 45.7 | 15.3 | 18.8 | 23.9 |
| 0.37B | Linear | 0.4% | 12.0 | 7.0 | 47.9 | 11.1 | 21.5 | 22.4 |
| 0.37B | U-Net | 71.3% | 11.8 | 6.9 | 48.8 | 21.1 | 18.1 | 24.9 |
| 0.76B | Linear | 0.4% | 10.4 | 6.0 | 51.7 | 16.0 | 20.3 | 24.0 |
| 0.76B | U-Net | 35.5% | 10.3 | 5.9 | 51.9 | 25.4 | 16.7 | 25.4 |
| 1.4B | Linear | 0.4% | 9.5 | 5.4 | 53.8 | 19.1 | 19.4 | 24.3 |
| 1.4B | U-Net | 19.3% | 9.4 | 5.4 | 53.4 | 28.1 | 16.6 | 25.7 |
| 7B | Linear | 0.3% | 7.7 | 4.3 | 61.5 | 27.2 | 18.6 | 25.9 |
| 7B | U-Net | 3.8% | 7.8 | 4.3 | 61.1 | 33.7 | 16.0 | 26.5 |
| Model Params | Noise Limit | C4 PPL (↓) | Wiki PPL (↓) | Llama Acc (↑) | MS-COCO CDr (↑) | MS-COCO FID (↓) | MS-COCO CLIP (↑) |
| 0.76B | ✓ | 10.3 | 5.9 | 51.9 | 25.4 | 16.7 | 25.4 |
| 0.76B | ✗ | 10.3 | 5.9 | 52.1 | 29.4 | 16.5 | 25.4 |
| 7B | ✓ | 7.8 | 4.3 | 61.1 | 33.7 | 16.0 | 26.5 |
| 7B | ✗ | 7.7 | 4.3 | 60.9 | 35.2 | 15.7 | 26.3 |
| Model | Method | AlpacaEval Acc. | AlpacaEval ECE ↓ | AlpacaEval AUROC | AlpacaEval AUPRC | TL;DR Acc. | TL;DR ECE ↓ | TL;DR AUROC | TL;DR AUPRC |
| GPT-4-turbo | Predictive Probability | 0.724 | 0.217 | 0.642 | 0.852 | 0.760 | 0.196 | 0.731 | 0.890 |
| GPT-4-turbo | Verbalized Confidence | 0.724 | 0.215 | 0.550 | 0.774 | 0.760 | 0.194 | 0.548 | 0.792 |
| GPT-4-turbo | Randomized Annotators | 0.720 | 0.113 | 0.705 | 0.866 | 0.779 | 0.079 | 0.734 | 0.905 |
| GPT-4-turbo | Simulated Annotators (Maj.) | 0.730 | 0.106 | 0.718 | 0.873 | 0.783 | 0.062 | 0.755 | 0.921 |
| GPT-4-turbo | Simulated Annotators (Ind.) | 0.734 | 0.095 | 0.723 | 0.877 | 0.788 | 0.039 | 0.755 | 0.921 |
| GPT-3.5-turbo | Predictive Probability | 0.644 | 0.293 | 0.581 | 0.691 | 0.667 | 0.228 | 0.653 | 0.786 |
| GPT-3.5-turbo | Verbalized Confidence | 0.644 | 0.306 | 0.505 | 0.595 | 0.667 | 0.211 | 0.607 | 0.716 |
| GPT-3.5-turbo | Simulated Annotators (Ind.) | 0.694 | 0.058 | 0.632 | 0.793 | 0.725 | 0.043 | 0.704 | 0.842 |
| Mistral-7B-it | Predictive Probability | 0.618 | 0.374 | 0.457 | 0.579 | 0.661 | 0.306 | 0.613 | 0.735 |
| Mistral-7B-it | Verbalized Confidence | 0.618 | 0.414 | 0.490 | 0.627 | 0.661 | 0.335 | 0.578 | 0.680 |
| Mistral-7B-it | Simulated Annotators (Ind.) | 0.684 | 0.075 | 0.632 | 0.772 | 0.696 | 0.103 | 0.654 | 0.807 |
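The ECE column in tables like the one above is typically computed by binning predictions by confidence and averaging |accuracy − mean confidence| over bins, weighted by bin size; a minimal sketch assuming equal-width bins (the papers' exact binning may differ):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin by confidence, sum bin-weighted |accuracy - mean confidence|."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap
    return float(ece)

# Well calibrated: 80% confidence, 4 of 5 correct -> ECE ~ 0
print(round(expected_calibration_error([0.8] * 5, [1, 1, 1, 1, 0]), 6))  # 0.0
```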
| Method | Mistral-7B (%) | GPT-3.5-turbo (%) | GPT-4-turbo (%) | Coverage (%) | Guarantee Success Rate (%) |
| No Selection | 0.0 | 0.0 | 100.0 | 100.0 | 0.0 |
| Heuristic Selection | 0.0 | 0.0 | 100.0 | 89.6 | 42.0 |
| Cascaded Heuristic Selection | 59.6 | 15.0 | 25.5 | 64.6 | 0.0 |
| Point-Estimate Calibration | 100.0 | 0.0 | 0.0 | 5.6 | 54.7 |
| Point-Estimate Calibration | 0.0 | 100.0 | 0.0 | 9.4 | 79.0 |
| Point-Estimate Calibration | 0.0 | 0.0 | 100.0 | 57.7 | 47.5 |
| Cascaded Selective Evaluation | 28.3 | 28.2 | 43.5 | 55.7 | 90.8 |
| Method | Mistral-7B (%) | GPT-3.5-turbo (%) | GPT-4-turbo (%) | Coverage (%) | Guarantee Success Rate (%) |
| No Selection | 0.0 | 0.0 | 100.0 | 100.0 | 0.0 |
| Heuristic Selection | 0.0 | 0.0 | 100.0 | 95.2 | 0.1 |
| Cascaded Heuristic Selection | 57.1 | 15.2 | 27.7 | 79.7 | 0.3 |
| Point-Estimate Calibration | 100.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| Point-Estimate Calibration | 0.0 | 100.0 | 0.0 | 40.5 | 57.2 |
| Point-Estimate Calibration | 0.0 | 0.0 | 100.0 | 60.9 | 54.4 |
| Cascaded Selective Evaluation | 23.7 | 58.8 | 17.5 | 63.2 | 91.0 |
| Dimension | Abstained Samples | Evaluated Samples |
| Human IAA | 0.815 (0.031) | 0.902 (0.025) |
| Length Ratio | 0.242 (0.014) | 0.245 (0.025) |
| Token Overlap | 0.623 (0.049) | 0.592 (0.054) |
| Target Human Agreement (%) | Empirical Human Agreement (%) | Coverage (%) | Guarantee Success Rate (%) |
| 70.0 | 73.4 | 100.0 | 100.0 |
| 75.0 | 75.3 | 91.4 | 92.5 |
| 80.0 | 80.8 | 72.1 | 90.8 |
| 85.0 | 85.2 | 55.4 | 91.0 |
| 90.0 | 90.1 | 31.8 | 90.7 |
| Method | Empirical Human Agreement (%) | Coverage (%) | Guarantee Success Rate (%) | Relative API Cost |
| GPT-4 (N=1) | 77.8 | 100.0 | 0.0 | 1.000 |
| Cascaded Selective Evaluation (N=1) | 85.2 | 60.9 | 90.3 | 0.655 |
| GPT-4 (N=2) | 78.2 | 100.0 | 0.0 | 2.000 |
| Cascaded Selective Evaluation (N=2) | 85.7 | 61.5 | 90.8 | 1.288 |
| GPT-4 (N=3) | 78.1 | 100.0 | 0.0 | 3.000 |
| Cascaded Selective Evaluation (N=3) | 85.5 | 62.1 | 90.3 | 1.920 |
| GPT-4 (N=5) | 78.5 | 100.0 | 0.0 | 5.000 |
| Cascaded Selective Evaluation (N=5) | 85.8 | 63.2 | 91.0 | 2.849 |
| Method | Empirical Human Agreement (%) | Coverage (%) | Guarantee Success Rate (%) | Relative API Cost |
| GPT-4 | 77.8 | 100.0 | 13.9 | 1.000 |
| Cascaded Selective Evaluation (stronger) | 80.2 | 77.6 | 90.5 | 0.215 |
| Cascaded Selective Evaluation (weaker) | 80.3 | 68.3 | 90.8 | 0.126 |
| Cascaded Selective Evaluation (weaker + GPT-4) | 80.4 | 78.2 | 90.6 | 0.192 |
| Method | Mistral-7B (%) | GPT-3.5-turbo (%) | GPT-4-turbo (%) | Coverage (%) | Guarantee Success Rate (%) |
| No Selection | 0.0 | 0.0 | 100.0 | 100.0 | 0.0 |
| Heuristic Selection | 0.0 | 0.0 | 100.0 | 72.0 | 2.4 |
| Cascaded Heuristic Selection | 36.6 | 40.2 | 23.2 | 84.4 | 0.8 |
| Point-Estimate Calibration | 100.0 | 0.0 | 0.0 | 12.1 | 54.3 |
| Point-Estimate Calibration | 0.0 | 100.0 | 0.0 | 27.9 | 52.1 |
| Point-Estimate Calibration | 0.0 | 0.0 | 100.0 | 42.9 | 56.5 |
| Cascaded Selective Evaluation | 49.3 | 21.8 | 28.9 | 42.6 | 90.2 |
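The cascade behind these evaluator-composition numbers can be sketched as: query judges from cheapest to strongest, accept the first whose confidence clears a calibrated threshold, and abstain if none does. A toy illustration; the judge functions, confidences, and thresholds below are hypothetical, not the paper's calibrated values:

```python
def cascaded_selective_evaluation(example, judges):
    """judges: list of (name, judge_fn, threshold), ordered cheapest first.
    judge_fn(example) -> (verdict, confidence). Returns the first verdict whose
    confidence clears its threshold, else (None, None) to abstain."""
    for name, judge_fn, threshold in judges:
        verdict, confidence = judge_fn(example)
        if confidence >= threshold:
            return name, verdict          # accept this judge's verdict
    return None, None                     # abstain (e.g. route to a human)

# Hypothetical judges: a cheap, under-confident one and a strong fallback.
weak = lambda x: ("A", 0.60)
strong = lambda x: ("B", 0.95)
judges = [("mistral-7b", weak, 0.80), ("gpt-4", strong, 0.90)]
print(cascaded_selective_evaluation("prompt", judges))  # ('gpt-4', 'B')
```

Coverage is then the fraction of examples on which the cascade does not abstain.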
| Model | Method | AlpacaEval Acc. | AlpacaEval ECE ↓ | AlpacaEval AUROC | AlpacaEval AUPRC | TL;DR Acc. | TL;DR ECE ↓ | TL;DR AUROC | TL;DR AUPRC |
| Mistral-7B-it | Predictive Probability (CoT) | 0.636 | 0.292 | 0.527 | 0.655 | 0.669 | 0.275 | 0.637 | 0.751 |
| Mistral-7B-it | Lexical Similarity | 0.636 | 0.478 | 0.478 | 0.623 | 0.669 | 0.459 | 0.520 | 0.639 |
| Mistral-7B-it | Semantic Sets | 0.636 | - | 0.513 | 0.638 | 0.669 | - | 0.545 | 0.665 |
| Mistral-7B-it | Semantic Entropy | 0.636 | - | 0.572 | 0.684 | 0.669 | - | 0.650 | 0.762 |
| Mistral-7B-it | Simulated Annotators (Ind.) | 0.684 | 0.075 | 0.632 | 0.772 | 0.696 | 0.103 | 0.654 | 0.807 |
| Method | Relative API Cost |
| Predictive Probability | 1.000 |
| Predictive Probability (CoT) | 20.376 |
| Verbalized Confidence | 2.215 |
| Semantic Set | 20.376 |
| Semantic Entropy | 20.376 |
| Simulated Annotators | 6.910 |
| User: How do I convert Celsius to Fahrenheit? | |
| Model Response A | Model Response B |
| To convert Celsius to Fahrenheit, you need to add 32 degrees to the Celsius temperature. For example, if you have a temperature of 10 degrees Celsius, the equivalent temperature in Fahrenheit is 50 degrees Fahrenheit. To convert the temperature from Fahrenheit to Celsius, you need to subtract 32 degrees from the Fahrenheit temperature. For example, if you have a temperature of 90 degrees Fahrenheit, the equivalent temperature in Celsius is 32 degrees Celsius. | To convert Celsius to Fahrenheit, you can use the following formula: +F = C \* 9/5 + 32 +Where: +F = Fahrenheit temperature in degrees Celsius +C = Celsius temperature in degrees Celsius +For example, if the temperature is 20°C in Celsius, to convert it to Fahrenheit you would do: +F = 20 \* 9/5 + 32 = 68°F (degrees Fahrenheit) |
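Response B gives the standard conversion F = C × 9/5 + 32, while Response A's "add 32" rule contradicts its own example (10 °C is 50 °F under the standard formula, but adding 32 yields 42). A quick check:

```python
def c_to_f(celsius):
    return celsius * 9 / 5 + 32   # the standard formula (Response B)

def add_32(celsius):
    return celsius + 32           # Response A's incorrect rule

print(c_to_f(20))   # 68.0, matching Response B's worked example
print(add_32(10))   # 42 -- not the 50 claimed in Response A's own example
```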
| Prompt: "You will pay for what you have done," she hissed, her blade flashing in the moonlight. The battle that ensued ___________ |
| Token | τ=1 | τ=3 | Top-p | Min-p |
| was | 70.3 | 11.9 | 13.1 | 18.5 |
| lasted | 9.5 | 6.1 | 6.7 | 9.5 |
| between | 6.2 | 5.3 | 5.9 | 8.2 |
| left | 4.5 | 4.8 | 5.3 | 7.4 |
| would | 3.2 | 4.3 | 4.7 | 6.6 |
| seemed | 0.5 | 2.3 | 2.5 | 3.5 |
| Prompt: A rainbow is an optically brilliant meteorological event resulting from refraction, reflection, and dispersion of_ |
| Token | τ=1 | τ=3 | Top-p | Min-p |
| light | 98.3 | 34.4 | 38.2 | 80.9 |
| sunlight | 1.3 | 8.1 | 9.0 | 19.1 |
| water | 0.1 | 3.4 | 3.8 | - |
| sunshine | 0.1 | 2.9 | 3.2 | - |
| a | 0.05 | 2.7 | 3.0 | - |
| moisture | 0.05 | 2.7 | 3.0 | - |
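The min-p columns in the two tables above truncate the distribution relative to the most likely token: every token whose probability is at least p_base times the top token's probability is kept, and the survivors are renormalized. A minimal sketch (pure Python, using the first prompt's τ=1 probabilities expressed as fractions):

```python
def min_p_filter(probs, p_base=0.1):
    """Keep tokens with prob >= p_base * max(prob), then renormalize."""
    threshold = p_base * max(probs.values())
    kept = {tok: p for tok, p in probs.items() if p >= threshold}
    total = sum(kept.values())
    return {tok: p / total for tok, p in kept.items()}

probs = {"was": 0.703, "lasted": 0.095, "between": 0.062,
         "left": 0.045, "would": 0.032, "seemed": 0.005}
filtered = min_p_filter(probs, p_base=0.1)
print(sorted(filtered))  # ['lasted', 'was'] -- only tokens within 10% of the top
```

Because the threshold scales with the top probability, min-p stays strict on peaked distributions and permissive on flat ones, which is why it degrades more gracefully at high temperatures than top-p.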
| Method | GPQA Main (5-shot): τ = 0.7 | τ = 1.0 | τ = 1.5 | τ = 2.0 | τ = 3.0 | GSM8K CoT (8-shot): τ = 0.7 | τ = 1.0 | τ = 1.5 | τ = 2.0 | τ = 3.0 |
| Temp' Only | 27.23 | 22.77 | 25.22 | 5.80 | 0.89 | 29.56 | 17.51 | 0.00 | 0.00 | 0.00 |
| Top-k | 26.34 | 23.66 | 22.77 | 16.52 | 5.88 | 30.63 | 17.59 | 0.00 | 0.00 | 0.00 |
| η Sampling | 28.13 | 25.45 | 24.55 | - | - | 32.63 | 26.99 | 0.49 | - | - |
| ε Sampling | 27.90 | 25.45 | 24.11 | - | - | 31.69 | 26.84 | 0.56 | - | - |
| Top-p | 29.02 | 25.00 | 24.78 | 6.47 | 0.46 | 36.09 | 27.67 | 0.68 | 0.00 | 0.00 |
| Min-p | 29.18 | 25.89 | 28.13 | 26.34 | 24.55 | 35.18 | 30.86 | 18.42 | 6.21 | 0.00 |
| Method | τ = 0.5 | τ = 1.0 | τ = 1.5 | τ = 2.0 | τ = 3.0 | τ = 4.0 |
| Temp' Only | 37.72 | 31.25 | 29.02 | 20.09 | 2.90 | 0.89 |
| Top-p = 0.95 | 39.51 | 33.26 | 29.24 | 18.75 | 2.01 | 0.67 |
| Top-p = 0.90 | 40.18 | 34.38 | 29.69 | 21.21 | 2.01 | 0.89 |
| Min-p (ours) | 38.17 | 34.60 | 31.03 | 27.46 | 22.77 | 13.84 |
| Method | τ = 1.0 | τ = 1.5 |
| Temperature Only | 49.97 | 53.18 |
| Mirostat | 16.69 | 14.23 |
| ε Sampling | 43.50 | 45.51 |
| Top-p | 50.43 | - |
| Min-p (ours) | 52.01 | 56.54 |
| Temp | Metric | Standard Sampling (lower div.) | Top-p = 0.1 | Min-p = 0.2 | Standard Sampling (higher div.) | Top-p = 0.9 | Min-p = 0.05 |
| T=1 | Quality | 7.55±0.2365 | 5.96±0.3077 | 7.06±0.2039 | 7.96±0.1532 | 7.67±0.1931 | 8.02±0.1858 |
| T=1 | Diversity | 7.08±0.2921 | 2.40±0.2765 | 5.83±0.2795 | 7.66±0.2014 | 7.04±0.2587 | 7.74±0.2235 |
| T=2 | Quality | 7.76±0.2640 | 5.43±0.3083 | 7.62±0.2105 | 7.85±0.1931 | 7.75±0.1885 | 7.98±0.1952 |
| T=2 | Diversity | 7.66±0.2459 | 1.83±0.2212 | 6.91±0.2658 | 7.57±0.2058 | 7.66±0.2066 | 7.96±0.2117 |
| T=3 | Quality | 6.75±0.3509 | 5.75±0.3201 | 7.74±0.2418 | 6.83±0.2895 | 7.11±0.2869 | 7.57±0.2303 |
| T=3 | Diversity | 7.04±0.2836 | 2.25±0.3353 | 7.60±0.2550 | 7.43±0.2830 | 7.49±0.2396 | 7.66±0.1996 |
| Model | Method | Parameter | Risk Std Error ↓ | Recall ↑ |
| Mixtral-7b | Top-k | 181 | 1.759 | 0.364 |
| Mixtral-7b | Top-p | 0.9315 | 6.315 | 0.447 |
| Mixtral-7b | Adaptive | 2.2e-5 | 2.757 | 0.466 |
| Mixtral-7b | Eta | 1.96e-4 | 4.712 | 0.505 |
| Mixtral-7b | Mirostat | 6.71 | 2.213 | 0.468 |
| Repository | Stars | Downstream Repos |
| Hugging Face Transformers | 144,000 | 291,000 |
| Ollama | 141,000 | 400 |
| open-webui | 95,100 | — |
| llama.cpp | 80,400 | — |
| vLLM | 47,500 | — |
| text-generation-webui | 43,600 | — |
| unsloth | 38,900 | — |
| continue | 26,300 | — |
| SGLang | 14,400 | — |
| sillytavern | 14,500 | — |
| mamba | 14,900 | — |
| kobold.cpp | 7,400 | — |
| Total | 668,000 | 291,400+ |
| Model | Winrate (%) ± 95% CI | Winrate (LC) (%) ± 95% CI | Avg Length (# runs) |
| temp150_minp_10 | 56.54 ± 3.68 | 58.12 ± 3.67 | 1852 (696) |
| temp150_minp_15 | 53.37 ± 3.70 | 56.73 ± 3.67 | 1816 (698) |
| temp150_minp_20 | 53.38 ± 3.71 | 55.45 ± 3.69 | 1835 (696) |
| quad_20_100 | 52.80 ± 3.71 | 55.43 ± 3.69 | 1821 (696) |
| temp100_minp_05 | 52.01 ± 3.71 | 55.07 ± 3.69 | 1808 (697) |
| temp200_minp_20 | 53.08 ± 3.70 | 54.82 ± 3.69 | 1861 (698) |
| temp80_topp_98 | 51.29 ± 3.71 | 54.65 ± 3.69 | 1810 (697) |
| dynatemp_50_150_75_minp_05 | 51.58 ± 3.72 | 54.42 ± 3.69 | 1807 (695) |
| dynatemp_50_200_100_minp_10 | 51.87 ± 3.71 | 54.33 ± 3.69 | 1825 (696) |
| temp150_minp_10 (seed 1337) | 52.86 ± 3.70 | 53.84 ± 3.69 | 1856 (699) |
| temp170_minp_15 | 52.65 ± 3.70 | 53.75 ± 3.69 | 1855 (697) |
| temp120_minp_10 | 51.36 ± 3.71 | 53.75 ± 3.69 | 1829 (698) |
| quad_15_100 | 51.65 ± 3.71 | 53.70 ± 3.69 | 1843 (697) |
| tfs_95 | 50.79 ± 3.71 | 53.49 ± 3.69 | 1802 (696) |
| tfs_98 | 50.72 ± 3.71 | 53.39 ± 3.69 | 1807 (697) |
| temp100_minp_10 | 50.14 ± 3.72 | 53.24 ± 3.69 | 1793 (695) |
| temp100_topp_98 | 50.43 ± 3.71 | 53.00 ± 3.69 | 1834 (697) |
| temp100_topp_90 | 50.07 ± 3.73 | 52.57 ± 3.70 | 1815 (692) |
| temp80 | 49.28 ± 3.71 | 52.40 ± 3.69 | 1797 (698) |
| temp100_topp_95 | 50.22 ± 3.71 | 51.80 ± 3.69 | 1835 (696) |
| temp100_minp_02 | 50.43 ± 3.72 | 51.62 ± 3.69 | 1853 (696) |
| temp80_minp_02 | 48.85 ± 3.71 | 51.46 ± 3.69 | 1802 (696) |
| temp80_minp_05 | 47.84 ± 3.71 | 50.99 ± 3.69 | 1808 (696) |
| temp80_topp_95 | 48.78 ± 3.71 | 50.76 ± 3.69 | 1793 (696) |
| temp100 | 50.00 | 50.00 | 1902 (700) |
| dynatemp_100_250_100_minp_10 | 50.86 ± 3.71 | 50.00 ± 3.69 | 2227 (698) |
| quad_25_100 | 47.85 ± 3.71 | 49.94 ± 3.69 | 1807 (697) |
| temp150_tfs_95 | 51.08 ± 3.71 | 49.94 ± 3.69 | 1969 (697) |
| greedy | 46.64 ± 3.70 | 49.90 ± 3.69 | 1765 (700) |
| temp150_minp_05 | 48.57 ± 3.71 | 48.13 ± 3.69 | 1919 (697) |
| temp150_topp_80 | 24.21 ± 4.42 | 17.65 ± 3.91 | 5258 (95) |
| temp150_minp_02 | 44.83 ± 3.70 | 42.09 ± 3.67 | 2149 (696) |
| dynatemp_50_150_100 | 35.37 ± 3.55 | 34.94 ± 3.52 | 2764 (697) |
| mirostat_40_10 | 16.69 ± 2.76 | 16.04 ± 2.72 | 1848 (698) |
| mirostat_50_10 | 16.40 ± 2.75 | 16.04 ± 2.72 | 1822 (698) |
| mirostat_60_10 | 15.62 ± 2.69 | 14.97 ± 2.64 | 1838 (698) |
| temp150_topp_98 | 0.00 | 0.00 ± 0.03 | 3125 (97) |
| temp150_topp_95 | 1.03 ± 1.03 | 0.61 ± 0.79 | 4665 (97) |
| temp150_topp_90 | 2.06 ± 1.45 | 0.93 ± 0.98 | 6139 (97) |
| Temperature | 0.0 | 0.3 | 0.5 | 0.7 | 1.0 | 1.5 | 2.0 | 3.0 | 4.0 | 5.0 |
| Temp' Only | 28.57 | 28.57 | 25.89 | 26.79 | 24.55 | 12.95 | 3.35 | 2.46 | 2.23 | 2.01 |
| Top-p = 0.9 | 28.57 | 27.23 | 28.79 | 27.23 | 25.45 | 18.75 | 3.57 | 2.68 | 2.01 | 2.23 |
| Top-p = 0.95 | 28.57 | 29.24 | 26.34 | 26.56 | 25.67 | 19.64 | 6.03 | 2.68 | 2.46 | 2.90 |
| Min-p = 0.05 | 28.57 | 29.46 | 30.13 | 27.46 | 23.88 | 23.44 | 22.32 | 16.52 | 6.47 | 3.12 |
| Min-p = 0.1 | 28.57 | 27.46 | 28.57 | 27.01 | 26.56 | 25.67 | 21.43 | 19.42 | 14.29 | 6.92 |
| Temperature | 0.0 | 0.3 | 0.5 | 0.7 | 1.0 | 1.5 | 2.0 | 3.0 | 4.0 | 5.0 |
| Temp' Only | 46.55 | 45.03 | 42.99 | 37.60 | 28.89 | 0.23 | 0.00 | 0.00 | 0.00 | 0.00 |
| Top-p = 0.9 | 46.55 | 45.19 | 44.20 | 40.11 | 37.23 | 5.23 | 0.00 | 0.00 | 0.00 | 0.00 |
| Top-p = 0.95 | 46.55 | 44.73 | 44.28 | 41.62 | 33.89 | 2.50 | 0.00 | 0.00 | 0.00 | 0.00 |
| Min-p = 0.05 | 46.55 | 43.67 | 45.11 | 42.23 | 36.24 | 24.64 | 7.05 | 0.00 | 0.00 | 0.00 |
| Min-p = 0.1 | 46.55 | 45.49 | 43.63 | 42.68 | 40.18 | 29.11 | 17.06 | 9.82 | 0.15 | 0.00 |
| Temperature | 0.0 | 0.3 | 0.5 | 0.7 | 1.0 | 1.5 | 2.0 | 3.0 | 4.0 | 5.0 |
| Temp' Only | 27.23 | 25.89 | 24.55 | 27.68 | 25.00 | 20.09 | 5.36 | 2.23 | 2.23 | 1.79 |
| Top-p = 0.9 | 27.23 | 29.46 | 27.68 | 28.79 | 30.36 | 25.45 | 9.82 | 2.68 | 2.68 | 1.79 |
| Top-p = 0.95 | 27.23 | 28.79 | 27.23 | 27.68 | 29.46 | 23.00 | 5.58 | 3.13 | 1.79 | 1.79 |
| Min-p = 0.05 | 27.23 | 28.35 | 27.46 | 27.23 | 32.37 | 27.68 | 27.46 | 21.65 | 11.38 | 3.79 |
| Min-p = 0.1 | 27.23 | 28.35 | 28.79 | 31.70 | 29.24 | 31.25 | 23.66 | 23.66 | 16.96 | 8.93 |
| Temperature | 0.0 | 0.3 | 0.5 | 0.7 | 1.0 | 1.5 | 2.0 | 3.0 | 4.0 | 5.0 |
| Temp' Only | 76.72 | 77.10 | 76.42 | 74.00 | 64.59 | 3.11 | 0.00 | 0.00 | 0.00 | 0.00 |
| Top-p = 0.9 | 76.72 | 76.65 | 76.72 | 75.66 | 73.31 | 28.81 | 0.00 | 0.00 | 0.00 | 0.00 |
| Top-p = 0.95 | 76.72 | 77.41 | 77.63 | 76.50 | 71.11 | 16.83 | 0.00 | 0.00 | 0.00 | 0.00 |
| Min-p = 0.05 | 76.72 | 76.12 | 76.50 | 75.51 | 73.24 | 57.92 | 26.61 | 0.15 | 0.00 | 0.00 |
| Min-p = 0.1 | 76.72 | 77.18 | 75.51 | 75.36 | 73.01 | 75.44 | 52.08 | 2.50 | 0.00 | 0.00 |
| Temperature | 0.0 | 0.3 | 0.5 | 0.7 | 1.0 | 1.5 | 2.0 | 3.0 | 4.0 | 5.0 |
| Temp' Only | 29.02 | 27.46 | 28.79 | 29.91 | 30.36 | 22.99 | 6.92 | 3.12 | 2.46 | 3.35 |
| Top-p = 0.9 | 29.02 | 28.57 | 29.69 | **30.80** | 28.79 | 24.55 | 9.38 | 2.46 | 2.46 | 2.90 |
| Top-p = 0.95 | 29.02 | **30.36** | **31.92** | 27.68 | **30.58** | 25.67 | 10.04 | 2.68 | 2.90 | 2.90 |
| Min-p = 0.05 | 29.02 | 30.13 | 31.70 | **30.80** | 28.35 | 26.34 | **29.24** | 22.77 | 9.82 | 5.13 |
| Min-p = 0.1 | 29.02 | 30.13 | 29.69 | 29.24 | 29.24 | **32.14** | 25.22 | **22.99** | **20.54** | **12.50** |
| Temperature | 0.0 | 0.3 | 0.5 | 0.7 | 1.0 | 1.5 | 2.0 | 3.0 | 4.0 | 5.0 |
| Temp' Only | 84.91 | 84.61 | 84.84 | 81.50 | 75.21 | 10.39 | 0.00 | 0.00 | 0.00 | 0.00 |
| Top-p = 0.9 | 84.91 | 84.91 | 84.08 | 83.24 | 80.36 | 48.37 | 0.08 | 0.00 | 0.00 | 0.00 |
| Top-p = 0.95 | 84.91 | 84.23 | 84.08 | 82.26 | 80.06 | 32.52 | 0.00 | 0.00 | 0.00 | 0.00 |
| Min-p = 0.05 | 84.91 | 85.06 | 84.31 | 83.32 | 80.44 | 67.25 | 32.15 | 0.08 | 0.00 | 0.00 |
| Min-p = 0.1 | 84.91 | 84.46 | 84.84 | 82.87 | 82.18 | 75.44 | 52.08 | 2.50 | 0.00 | 0.00 |
| Temperature | 0.5 | 0.7 | 1.0 | 2.0 | 3.0 |
| Temp' Only | 40.85 | 39.51 | 41.07 | 5.58 | 2.46 |
| Top-p = 0.9 | 40.85 | 42.19 | 40.63 | 7.81 | 3.35 |
| Top-p = 0.95 | 40.63 | 41.29 | 39.51 | 6.47 | 2.68 |
| Min-p = 0.05 | 40.85 | 41.52 | 38.62 | 33.71 | 23.88 |
| Min-p = 0.1 | 41.07 | 42.19 | 41.52 | 33.04 | 24.55 |
| Min-p = 0.2 | 43.30 | 41.96 | 40.40 | 34.38 | 31.47 |
| Temperature | 0.7 | 3.0 |
| Temp' Only | 93.33 | 0.08 |
| Top-p = 0.9 | 93.48 | 0.08 |
| Min-p = 0.05 | 93.03 | 6.07 |
| Min-p = 0.2 | 92.42 | 61.03 |
| Temperature | 0.0 | 0.05 | 0.1 | 0.2 | 0.3 | 0.5 |
| Temp Only | 27.68 | 27.68 | 26.34 | 25.22 | 24.33 | 24.11 |
| Top-p = 0.9 | 27.68 | 28.13 | 27.23 | 26.56 | 24.78 | 24.55 |
| Top-p = 0.95 | 27.68 | 27.90 | 27.23 | 25.22 | 24.55 | 24.11 |
| Min-p = 0.05 | 27.68 | 27.90 | 27.23 | 25.45 | 24.55 | 24.33 |
| Min-p = 0.1 | 27.68 | 28.35 | 27.46 | 26.56 | 24.33 | 24.33 |
| Temperature | 0.0 | 0.05 | 0.1 | 0.2 | 0.3 | 0.5 |
| Temp Only | 39.35 | 38.59 | 38.21 | 38.59 | 37.23 | 36.32 |
| Top-p = 0.9 | 39.35 | **39.27** | **40.03** | 39.27 | 37.53 | **38.36** |
| Top-p = 0.95 | 39.35 | 38.74 | 38.74 | 39.58 | 37.38 | 36.62 |
| Min-p = 0.05 | 39.35 | 38.59 | 39.65 | **40.33** | **38.44** | 37.68 |
| Min-p = 0.1 | 39.35 | 39.20 | 38.51 | 38.21 | 38.06 | 37.07 |
| Top-k | GPQA Main: τ = 0.5 | τ = 0.7 | τ = 1.0 | τ = 2.0 | τ = 3.0 |
| 10.0 | 27.0 | 27.7 | 27.0 | 27.2 | 26.3 |
| 50.0 | 27.0 | 27.7 | 27.0 | 28.1 | 19.4 |
| 177.0 | 27.0 | 27.7 | 27.0 | 27.0 | 18.1 |
| Top-k | GSM8K CoT: τ = 0.5 | τ = 0.7 | τ = 1.0 | τ = 2.0 | τ = 3.0 |
| 10.0 | 39.3 | 38.5 | 38.7 | 30.4 | 12.5 |
| 50.0 | 38.0 | 39.5 | 37.1 | 18.8 | 0.5 |
| 177.0 | 38.0 | 38.6 | 40.3 | 12.1 | 0.0 |
| Top-k | GPQA Main: τ = 0.5 | τ = 0.7 | τ = 1.0 | τ = 2.0 | τ = 3.0 |
| 10.0 | 26.6 | 27.0 | 27.0 | 23.7 | 19.4 |
| 50.0 | 26.6 | 27.0 | 27.0 | 20.8 | 10.9 |
| 177.0 | 26.6 | 27.0 | 27.7 | 22.1 | 5.6 |
| Top-k | GSM8K CoT: τ = 0.5 | τ = 0.7 | τ = 1.0 | τ = 2.0 | τ = 3.0 |
| 10.0 | 41.8 | 39.6 | 34.3 | 4.4 | 0.9 |
| 50.0 | 40.6 | 39.4 | 34.3 | 1.1 | 1.5 |
| 177.0 | 38.9 | 37.4 | 32.8 | 1.3 | 0.6 |
| Top-k | GPQA Main: τ = 0.5 | τ = 0.7 | τ = 1.0 | τ = 2.0 | τ = 3.0 |
| 10.0 | 26.8 | 28.6 | 26.1 | 24.1 | 14.5 |
| 50.0 | 26.8 | 27.2 | 24.3 | 19.4 | 10.3 |
| 177.0 | 26.8 | 27.2 | 24.3 | 17.9 | 7.8 |
| Top-k | GSM8K CoT: τ = 0.5 | τ = 0.7 | τ = 1.0 | τ = 2.0 | τ = 3.0 |
| 10.0 | 35.9 | 33.1 | 26.3 | 1.4 | 0.2 |
| 50.0 | 37.0 | 33.9 | 24.9 | 0.1 | 0.0 |
| 177.0 | 37.0 | 35.2 | 24.6 | 0.0 | 0.0 |
| Top-k | GPQA Main: τ = 0.5 | τ = 0.7 | τ = 1.0 | τ = 2.0 | τ = 3.0 |
| 10.0 | 26.8 | 25.2 | 23.4 | 22.1 | 15.8 |
| 50.0 | 27.5 | 23.0 | 25.4 | 17.4 | 10.5 |
| 177.0 | 27.5 | 23.0 | 26.8 | 14.7 | 4.2 |
| Top-k | GSM8K CoT: τ = 0.5 | τ = 0.7 | τ = 1.0 | τ = 2.0 | τ = 3.0 |
| 10.0 | 34.6 | 29.8 | 20.3 | 0.8 | 0.0 |
| 50.0 | 33.1 | 31.8 | 17.5 | 0.0 | 0.0 |
| 177.0 | 33.3 | 32.8 | 19.0 | 0.0 | 0.0 |
| Temperature | min_p | GSM8K Score | GPQA Score |
| 3.0 | 0.7 | 0.4041 | 0.2478 |
| 3.0 | 0.6 | 0.3533 | 0.2500 |
| 3.0 | 0.5 | 0.3237 | 0.2433 |
| 3.0 | 0.4 | 0.2297 | 0.2299 |
| 3.0 | 0.3 | 0.1327 | 0.2746 |
| 2.0 | 0.7 | 0.4071 | 0.2455 |
| 2.0 | 0.6 | 0.4162 | 0.2567 |
| 2.0 | 0.5 | 0.3738 | 0.2567 |
| 2.0 | 0.4 | 0.3397 | 0.2545 |
| 2.0 | 0.3 | 0.3078 | 0.2813 |
| 1.0 | 0.7 | 0.4215 | 0.2790 |
| 1.0 | 0.6 | 0.3958 | 0.2545 |
| 1.0 | 0.5 | 0.4124 | 0.2478 |
| 1.0 | 0.4 | 0.4185 | 0.2589 |
| 1.0 | 0.3 | 0.3942 | 0.2522 |
| 0.7 | 0.7 | 0.4412 | 0.2835 |
| 0.7 | 0.6 | 0.4208 | 0.2835 |
| 0.7 | 0.5 | 0.4177 | 0.2813 |
| 0.7 | 0.4 | 0.4200 | 0.2567 |
| 0.7 | 0.3 | 0.4155 | 0.2679 |
| 0.5 | 0.7 | 0.4200 | 0.2857 |
| 0.5 | 0.6 | 0.4155 | 0.2835 |
| 0.5 | 0.5 | 0.4147 | 0.2835 |
| 0.5 | 0.4 | 0.4117 | 0.2746 |
| 0.5 | 0.3 | 0.4359 | 0.2612 |
| Model | Greedy Score (T=0) | Best Score | Hyperparameters | Temperature |
| GPQA (<10B Models) | ||||
| Mistral 7B | 27.35% | 29.18% (+1.83%) | Min P = 0.1 | 0.7 |
| Llama 3.2 3B | 27.23% | 32.37% (+5.14%) | Min P = 0.05 | 1.0 |
| Llama 3.1 8B | 29.02% | 32.15% (+3.13%) | Min P = 0.1 | 1.5 |
| GPQA (Larger Models) | ||||
| Llama 3.1 70B | 41.07% | 43.30% (+2.23%) | Min P = 0.2 | 0.3 |
| Llama 3.1 70B | 41.07% | 42.19% (+1.12%) | Min P = 0.1 | 0.5 |
| Llama 3.1 70B | 41.07% | 41.52% (+0.45%) | Min P = 0.1 | 1.0 |
| GSM8K | ||||
| Mistral 7B | 39.35% | 40.33% (+0.98%) | Min P = 0.05 | 0.3 |
| Llama 3.1 8B | 84.91% | 85.06% (+0.15%) | Min P = 0.05 | 0.2 |
| Temp | Metric | Standard Sampling (lower div.) | Top-p = 0.9 | Min-p = 0.1 | Standard Sampling (higher div.) | Top-p = 0.95 | Min-p = 0.05 |
| T=1 | Quality | 7.43±0.2525 | 6.67±0.3139 | 7.57±0.2695 | 7.20±0.2599 | 7.20±0.2844 | 7.33±0.2722 |
| T=1 | Diversity | 6.70±0.3371 | 6.23±0.3672 | 6.97±0.3415 | 6.50±0.3965 | 6.53±0.3358 | 6.63±0.3349 |
| T=2 | Quality | 1.03±0.0328 | 7.37±0.2185 | 7.50±0.2656 | 1.30±0.2317 | 1.37±0.2645 | 6.03±0.3785 |
| T=2 | Diversity | 1.07±0.0455 | 6.97±0.3696 | 7.10±0.3601 | 1.33±0.2325 | 1.37±0.2645 | 7.80±0.3812 |
| T=3 | Quality | 1.03±0.0328 | 1.23±0.0905 | 5.80±0.3664 | 1.00±0.0000 | 1.03±0.0328 | 1.27±0.0935 |
| T=3 | Diversity | 1.07±0.0455 | 1.27±0.0935 | 5.90±0.4555 | 1.00±0.0000 | 1.07±0.0455 | 1.30±0.1261 |
| Metric | min_p | top_p | Difference |
| Creativity | 3.55 | 3.09 | +0.46 |
| Originality | 3.28 | 2.85 | +0.43 |
| Narrative Flow | 2.96 | 2.26 | +0.70 |
| Emotional Impact | 2.62 | 2.10 | +0.52 |
| Imagery | 2.98 | 2.36 | +0.62 |
| Model | Metric | min_p (T=0.5) | top_p (T=0.5) | min_p (T=2.0) | top_p (T=2.0) | min_p (T=3.0) | top_p (T=3.0) |
| Llama-1B | Creativity | 6.33 | 4.93 | 3.78 | 2.44 | 2.12 | 1.92 |
| Llama-1B | Originality | 5.70 | 4.48 | 3.63 | 2.59 | 1.96 | 1.73 |
| Llama-1B | Narrative Flow | - | - | - | - | 1.19 | 1.04 |
| Llama-1B | Emotional Impact | - | - | - | - | 1.12 | 0.88 |
| Llama-1B | Imagery | - | - | - | - | 1.27 | 1.15 |
| Mistral-7B | Creativity | 5.56 | 5.56 | 3.44 | 2.70 | 1.78 | 1.59 |
| Mistral-7B | Originality | 5.07 | 4.89 | 3.04 | 2.44 | 1.81 | 1.48 |
| Mistral-7B | Narrative Flow | - | - | - | - | 0.96 | 0.74 |
| Mistral-7B | Emotional Impact | - | - | - | - | 1.00 | 0.78 |
| Mistral-7B | Imagery | - | - | - | - | 1.11 | 0.81 |
| Model | Config | min_p Creat | min_p Orig | min_p Flow | min_p Emot | min_p Imag | top_p Creat | top_p Orig | top_p Flow | top_p Emot | top_p Imag |
| Llama-1B | p=0.05/0.9 | 4.78 | 4.22 | 4.56 | 3.44 | 4.11 | 4.33 | 3.89 | 3.44 | 3.11 | 3.33 |
| Llama-1B | p=0.1/0.95 | 5.00 | 4.67 | 4.11 | 3.89 | 3.67 | 5.44 | 4.78 | 3.33 | 3.00 | 3.11 |
| Llama-1B | p=0.2/0.99 | 5.33 | 5.33 | 4.56 | 3.78 | 5.00 | 4.44 | 4.11 | 3.56 | 4.11 | 4.33 |
| Mistral-7B | p=0.05/0.9 | 5.44 | 5.11 | 5.22 | 4.67 | 4.67 | 4.67 | 4.44 | 4.67 | 4.11 | 5.11 |
| Mistral-7B | p=0.1/0.95 | 6.89 | 6.22 | 6.67 | 5.89 | 7.00 | 5.33 | 4.78 | 4.22 | 3.89 | 4.11 |
| Mistral-7B | p=0.2/0.99 | 5.11 | 4.44 | 4.44 | 3.44 | 4.22 | 4.00 | 3.89 | 3.44 | 3.33 | 3.22 |
| Downstream task | Modality gap (L2M) | Modality gap (RMG) | Model size | Embedding size | Dataset size |
| MS COCO | 0.167 (✗) / 0.148 (✗) | 0.083 (✗) / -0.007 (✗) | -0.354 (✗) / 0.579 (✓) | 0.264 (✗) / 0.62 (✓) | -0.129 (✗) / 0.252 (✓) |
| ImageNet | -0.008 (✗) / 0.28 (✓) | -0.109 (✗) / 0.169 (✓) | -0.5 (✓) / 0.62 (✓) | 0.318 (✗) / 0.668 (✓) | -0.034 (✗) / 0.206 (✓) |
| Model | CIFAR-10 | CIFAR-100 | ImgNet-100-1 | ImgNet-100-2 | ImgNet-100-3 |
| CLIP ViT-B/16 | 0.3399 | 0.4965 | 0.4975 | 0.5046 | 0.5081 |
| SigLIP ViT-B/16 | 0.5044 | 0.4981 | 0.5003 | 0.4965 | 0.4987 |
| RMG | |||||
| Evaluation dataset | DataComp-1B (10) | LAION-400M (7) | LAION-2B (20) | OpenAI (12) | WebLI (9) |
| MS COCO | 0.244 (p=0.375) | -0.619 (p=0.069) | -0.263 (p=0.114) | -0.143 (p=0.548) | -0.611 (p=0.02) |
| ImageNet | 0.644 (p=0.011) | -0.714 (p=0.03) | -0.168 (p=0.319) | 0.302 (p=0.195) | -0.5 (p=0.073) |
| L2M | |||||
| MS COCO | 0.333 (p=0.221) | -0.619 (p=0.069) | -0.189 (p=0.259) | 0.238 (p=0.304) | -0.5 (p=0.073) |
| ImageNet | 0.733 (p=0.003) | -0.714 (p=0.03) | -0.063 (p=0.738) | 0.397 (p=0.077) | -0.333 (p=0.257) |
| CommonPool scale | none | basic | text-based | image-based | LAION-2B-based | CLIP-based |
| small | 0.533 | 0.462 | 0.461 | 0.445 | 0.447 | 0.461 |
| medium | 0.537 | 0.499 | 0.507 | 0.494 | 0.446 | 0.487 |
| large | 0.603 | 0.577 | 0.593 | 0.587 | 0.511 | 0.544 |
| xlarge | 0.586 | n/a | n/a | n/a | 0.531 | 0.543 |
| ImageNet, RMG | ||||||
| small | 0.598 | 0.502 | 0.492 | 0.461 | 0.468 | 0.485 |
| medium | 0.595 | 0.547 | 0.553 | 0.525 | 0.478 | 0.523 |
| large | 0.651 | 0.628 | 0.628 | 0.625 | 0.557 | 0.584 |
| xlarge | 0.647 | n/a | n/a | n/a | 0.593 | 0.603 |
| MS COCO, L2M | ||||||
| small | 0.733 | 0.508 | 0.515 | 0.356 | 0.346 | 0.519 |
| medium | 0.773 | 0.678 | 0.702 | 0.677 | 0.482 | 0.639 |
| large | 0.898 | 0.86 | 0.892 | 0.877 | 0.742 | 0.8 |
| xlarge | 0.875 | n/a | n/a | n/a | 0.849 | 0.862 |
| ImageNet, L2M | ||||||
| small | 0.805 | 0.525 | 0.516 | 0.3 | 0.3 | 0.491 |
| medium | 0.837 | 0.735 | 0.745 | 0.701 | 0.514 | 0.681 |
| large | 0.947 | 0.915 | 0.92 | 0.911 | 0.792 | 0.843 |
| xlarge | 0.961 | n/a | n/a | n/a | 0.935 | 0.95 |
| CLIP ViT B/16 on MIT-States | |||
| Variant | cosine similarity between ideal words & ideal images | attribute top-1 test accuracy | object top-1 test accuracy |
| real words | n/a | 18.65 | 44.69 |
| ideal words | 0.388 | 19.74 | 46.87 |
| ideal words with Δgap | 0.858 | 19.70 | 46.84 |
| CLIP ViT B/16 on UT-Zappos | |||
| real words | n/a | 15.92 | 60.54 |
| ideal words | 0.381 | 21.83 | 62.46 |
| ideal words with Δgap | 0.898 | 21.76 | 62.42 |
| real words | n/a | 21.82 | 50.23 |
| ideal words | 0.456 | 25.81 | 52.54 |
| ideal words with Δgap | 0.858 | 25.73 | 52.66 |
| real words | n/a | 27.14 | 73.27 |
| ideal words | 0.439 | 35.38 | 72.07 |
| ideal words with Δgap | 0.870 | 35.14 | 72.00 |
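The rows above compare real words, "ideal words", and ideal words shifted by an estimate Δgap of the modality-gap vector, scored by cosine similarity against ideal image embeddings. A minimal sketch of the two operations involved; `delta_gap` is assumed to be precomputed elsewhere (e.g., as a mean difference between image and text embeddings), which is not shown here:

```python
import math

def cosine_similarity(u, v):
    # cos(u, v) = <u, v> / (||u|| * ||v||)
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def shift_by_gap(word_vec, delta_gap):
    # "ideal words with Δgap": translate the word embedding by the gap vector
    return [w + d for w, d in zip(word_vec, delta_gap)]
```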
| Condition | resulting p(word) for given p(word|image) | | | | | |
| | object: digit | attribute: thickening | attribute: swelling | attribute: fracture | attribute: scaling | attribute: color |
| A | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 |
| B | 0.1 | 0.177 | 0.233 | 0.233 | 0.233 | 0.114 |
| C | 0.1 | 0.255 | 0.366 | 0.366 | 0.366 | 0.129 |
| D | 0.1 | 0.33 | 0.5 | 0.5 | 0.5 | 0.143 |
| Condition | MOAD img (emb size 12) | MOAD img (emb size 15) | MOAD img (emb size 18) |
| A | 0.41 | 0.40 | 0.34 |
| B | 0.34 | 0.25 | 0.31 |
| C | 0.29 | 0.30 | 0.29 |
| D | 0.14 | 0.17 | 0.22 |
| information imbalance | training time | L2M | RMG | I2I |
| | after initialization | 0.0 | 0.007 | 1.0 |
| × | after 1200 epochs | 0.058 | 0.006 | 1.0 |
| ✓ | after 1200 epochs | 0.265 (+0.207) | 0.149 (+0.143) | 1.0 (±0.0) |
| Step | Loss | RMG |
| 0 | 7.3789 | 0.4987 |
| 1000 | 0.0060 | 0.0061 |
| 2000 | 0.0014 | 0.0060 |
| MS COCO | CLIP ViT-B/16 | | SigLIP ViT-B/16 | |
| | before fine-tuning | after fine-tuning | before fine-tuning | after fine-tuning |
| L2M | 0.816 | 0.700 | 1.046 | 0.958 |
| RMG | 0.572 | 0.499 | 0.623 | 0.562 |
| R@1 | 52.84 | 59.24 | 67.68 | 67.58 |
| ImageNet | ||||
| L2M | 0.864 | 0.747 | 1.116 | 1.04 |
| RMG | 0.606 | 0.531 | 0.683 | 0.63 |
| Top-1 accuracy | 66.75 | 65.13 | 75.61 | 73.87 |
| | UNI | IG | BlurIG | GIG |
| ResNet-18 | .97 ± .222 | .69 ± .460 | .57 ± .576 | .45 ± .476 |
| Eff-v2-s | .95 ± .258 | .28 ± .615 | .34 ± .613 | .38 ± .437 |
| ConvNeXt-T | .99 ± .121 | .76 ± .379 | .77 ± .486 | .46 ± .485 |
| VGG-16-bn | .94 ± .286 | .69 ± .474 | .60 ± .544 | .46 ± .479 |
| ViT-B-16 | .89 ± .396 | .71 ± .399 | .27 ± .648 | .44 ± .468 |
| SwinT | .97 ± .189 | .88 ± .326 | .88 ± .482 | .45 ± .474 |
| | UNI | IG | BlurIG | GIG | AGI | GBP | DeepLIFT |
| ResNet-18 | .12 ± .124 | .06 ± .068 | .07 ± .076 | .07 ± .080 | .10 ± .110 | .09 ± .094 | .08 ± .082 |
| EfficientNetv2s | .06 ± .046 | .05 ± .043 | .05 ± .044 | .05 ± .044 | .06 ± .045 | .05 ± .043 | .05 ± .043 |
| ConvNeXt-Tiny | .16 ± .115 | .11 ± .086 | .15 ± .121 | .18 ± .149 | .17 ± .131 | .09 ± .072 | .11 ± .084 |
| VGG-16-bn | .18 ± .141 | .08 ± .066 | .09 ± .076 | .13 ± .108 | .14 ± .104 | .13 ± .108 | .10 ± .082 |
| ViT-B_16 | .15 ± .114 | .10 ± .074 | .10 ± .077 | .11 ± .079 | .14 ± .104 | .09 ± .070 | .10 ± .072 |
| Swin-T-Tiny | .13 ± .100 | .09 ± .071 | .12 ± .102 | .12 ± .104 | .13 ± .102 | .09 ± .069 | .10 ± .076 |
| | UNI | IG | BlurIG | GIG | AGI | GBP | DeepLIFT |
| ResNet-18 | .06 ± .128 | .10 ± .174 | .27 ± .252 | .11 ± .150 | .13 ± .147 | .08 ± .160 | .13 ± .165 |
| EfficientNetv2s | .19 ± .212 | .26 ± .217 | .50 ± .158 | .19 ± .216 | .18 ± .207 | .23 ± .163 | .27 ± .215 |
| ConvNeXt-Tiny | .11 ± .139 | .16 ± .164 | .46 ± .172 | .21 ± .160 | .17 ± .123 | .16 ± .099 | .21 ± .162 |
| VGG-16-bn | .08 ± .143 | .12 ± .181 | .18 ± .241 | .10 ± .163 | .14 ± .178 | .14 ± .194 | .12 ± .186 |
| ViT-B_16 | .14 ± .185 | .22 ± .207 | .60 ± .166 | .17 ± .190 | .13 ± .152 | .23 ± .141 | .17 ± .189 |
| Swin-T-Tiny | .13 ± .181 | .22 ± .217 | .47 ± .174 | .22 ± .207 | .21 ± .172 | .21 ± .123 | .23 ± .207 |
| | UNI | IG | BlurIG | GIG | AGI | GBP | DeepLIFT |
| ResNet-18 | .64 ± .138 | .26 ± .045 | .34 ± .131 | .36 ± .048 | .56 ± .068 | .11 ± .066 | .18 ± .042 |
| EfficientNetv2s | .64 ± .227 | .38 ± .127 | .51 ± .283 | .37 ± .138 | .38 ± .204 | .23 ± .192 | .37 ± .137 |
| ConvNeXt-Tiny | .63 ± .231 | .21 ± .114 | .40 ± .252 | .56 ± .122 | .52 ± .088 | .22 ± .160 | .17 ± .162 |
| VGG-16-bn | .56 ± .335 | .37 ± .061 | .31 ± .274 | .38 ± .071 | .47 ± .078 | .26 ± .057 | .17 ± .056 |
| ViT-B_16 | .71 ± .237 | .32 ± .107 | .59 ± .292 | .28 ± .125 | .43 ± .089 | .35 ± .172 | .28 ± .123 |
| Swin-T-Tiny | .68 ± .245 | .28 ± .145 | .63 ± .282 | .26 ± .153 | .25 ± .156 | .31 ± .202 | .26 ± .152 |
| | UNI | IG | BlurIG | SG | DeepL |
| ResNet-18 | .271 | .088 | .084 | .014 | .139 |
| Eff-v2-s | .302 | .009 | .076 | .008 | .018 |
| ConvNeXt-T | .292 | .010 | .127 | .011 | .012 |
| VGG-16-bn | .290 | .143 | .098 | .014 | .108 |
| ViT-B-16 | .319 | .018 | .066 | .023 | .023 |
| SwinT | .271 | .088 | .084 | .014 | .139 |
| | UNI | IG | BlurIG | SG | DeepL |
| ResNet-18 | 37.3 | 20.0 | 25.3 | 18.2 | 24.8 |
| Eff-v2-s | 39.4 | 17.4 | 23.3 | 18.6 | 18.0 |
| ConvNeXt-T | 34.8 | 15.0 | 26.2 | 16.7 | 15.1 |
| VGG-16-bn | 35.7 | 25.5 | 25.3 | 18.8 | 25.2 |
| ViT-B-16 | 40.7 | 17.1 | 21.7 | 19.6 | 17.2 |
| SwinT | 37.3 | 20.0 | 25.3 | 18.2 | 24.8 |
| | Δ% Confidence | LPIPS (VGG) | LPIPS (Alex) |
| ResNet-18 | -82.3% | .021 ± .025 | .003 ± .005 |
| Eff-v2-s | -76.9% | .025 ± .024 | .004 ± .011 |
| ConvNeXt-T | -95.1% | .018 ± .016 | .002 ± .003 |
| VGG-16-bn | -71.6% | .017 ± .020 | .001 ± .002 |
| ViT-B-16 | -69.7% | .014 ± .015 | .004 ± .007 |
| SwinT | -84.6% | .014 ± .015 | .002 ± .002 |
| | UNI | Random |
| Pythia-1b-v0 | 3.12 | 6.64 |
| GPT2-medium | 15.26 | 35.00 |
| Llama-3.2-1B | 5.25 | 10.17 |
| | A(xt) | B(xt) | dec(Ht, xt) |
| Mamba | Diag(exp(−Δt ⊙ exp(w1))) | kt (Δt ⊙ xt)ᵀ | ψ(Htᵀ qt + w2 ⊙ xt) |
| GLA | Diag(αt) | kt vtᵀ | ψ(Htᵀ qt) |
| DeltaNet | I − βt kt ktᵀ | βt kt vtᵀ | ψ(Htᵀ qt) |
| | [0, 1] | [−1, 1] |
| Mamba | Diag(s(xt)) | Diag(2 s(xt) − 1) |
| DeltaNet | I − βt kt ktᵀ | I − 2 βt kt ktᵀ |
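The table above contrasts state-transition matrices whose eigenvalues lie in [0, 1] with variants extended to [−1, 1]. Why the extended range matters for parity can be seen in a one-dimensional caricature (illustrative only, not the actual Mamba/DeltaNet update): a negative eigenvalue lets the scalar state flip sign on each 1-bit, so the sign of the final state encodes the parity, whereas eigenvalues confined to [0, 1] can only shrink the state.

```python
def parity_via_negative_eigenvalue(bits):
    # h_t = a_t * h_{t-1}, with a_t = -1 on a 1-bit and +1 otherwise;
    # requires eigenvalues in [-1, 1]. sign(h) encodes sum(bits) mod 2.
    h = 1.0
    for b in bits:
        a = -1.0 if b == 1 else 1.0
        h = a * h
    return 0 if h > 0 else 1

def decay_only_state(bits):
    # eigenvalues restricted to [0, 1]: the state can only shrink monotonically,
    # never oscillate, so its final value cannot encode sum(bits) mod 2
    h = 1.0
    for b in bits:
        h = (0.5 if b == 1 else 1.0) * h
    return h
```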
| | Parity | Mod. Arithm. (w/o brackets) | Mod. Arithm. (w/ brackets) |
| Transformer | 0.022 | 0.031 | 0.067 |
| mLSTM | 0.087 (0.04) | 0.040 (0.04) | 0.114 (0.03) |
| sLSTM | 1.000 (1.00) | 0.787 (1.00) | 0.178 (0.57) |
| Mamba [0, 1] | 0.000 | 0.095 | 0.123 |
| Mamba [−1, 1] | 1.000 | 0.241 | 0.116 |
| DeltaNet [0, 1] | 0.017 | 0.314 | 0.194 |
| DeltaNet [−1, 1] | 1.000 | 0.971 | 0.260 |
| Model | Wiki. ppl ↓ | LMB. ppl ↓ | LMB. acc ↑ | PIQA acc ↑ | Hella. acc_n ↑ | Wino. acc ↑ | ARC-e acc ↑ | ARC-c acc_n ↑ | Avg. ↑ | SWDE cont. ↑ | SQUAD cont. ↑ | FDA cont. ↑ | |
| SlimPajama 15B | 340M params | ||||||||||||
| Transformer++ | 28.39 | 42.69 | 31.0 | 63.3 | 34.0 | 50.4 | 44.5 | 24.2 | 41.2 | 42.2 | 22.1 | 21.4 | |
| Mamba [0, 1] | 28.39 | 39.66 | 30.6 | 65.0 | 35.4 | 50.1 | 46.3 | 23.6 | 41.8 | 12.4 | 23.0 | 2.1 | |
| GLA [0, 1] | 29.47 | 45.53 | 31.3 | 65.1 | 33.8 | 51.6 | 44.4 | 24.6 | 41.8 | 24.0 | 24.7 | 7.3 | |
| DeltaNet [0, 1] | 28.24 | 37.37 | 32.1 | 64.8 | 34.3 | 52.2 | 45.8 | 23.5 | 42.1 | 26.4 | 28.9 | 12.8 | |
| FineWeb 100B | 340M params | ||||||||||||
| DeltaNet [0, 1] | 24.68 | 31.49 | 33.7 | 70.3 | 45.1 | 51.3 | 50.0 | 26.1 | 46.1 | 35.2 | 28.7 | 11.8 | |
| DeltaNet [-1, 1] | 24.54 | 31.15 | 34.0 | 69.9 | 44.6 | 51.9 | 50.0 | 24.4 | 45.8 | 37.2 | 33.1 | 6.6 | |
| 370M params | |||||||||||||
| Mamba [0, 1] | 24.84 | 24.69 | 35.6 | 70.6 | 48.4 | 51.2 | 53.4 | 24.8 | 47.3 | 21.6 | 27.7 | 2.8 | |
| Mamba [-1, 1] | 25.02 | 24.71 | 36.2 | 70.5 | 47.8 | 53.3 | 54.7 | 26.7 | 48.2 | 20.9 | 24.8 | 2.5 | |
| SlimPajama 100B | 1.3B params | ||||||||||||
| Transformer++ | 16.85 | 13.44 | 48.9 | 70.8 | 49.6 | 53.6 | 56.0 | 26.5 | 50.9 | 66.6 | 31.5 | 27.4 | |
| Mamba [0, 1] | 17.06 | 13.89 | 46.2 | 72.2 | 40.1 | 54.1 | 59.0 | 28.2 | 50.0 | 41.4 | 35.2 | 6.2 | |
| GLA [0, 1] | 17.22 | 14.47 | 46.9 | 71.8 | 49.8 | 53.9 | 57.2 | 26.6 | 51.0 | 50.6 | 42.6 | 19.9 | |
| DeltaNet [0, 1] | 16.87 | 12.21 | 48.9 | 71.2 | 50.2 | 53.6 | 57.2 | 28.3 | 51.6 | 49.5 | 37.4 | 17.2 | |
| FineWeb 100B | 1.3B params | ||||||||||||
| DeltaNet [0, 1] | 18.54 | 14.32 | 43.5 | 73.7 | 56.2 | 56.9 | 58.2 | 29.9 | 53.1 | 49.1 | 35.1 | 8.6 | |
| DeltaNet [-1, 1] | 18.57 | 12.73 | 43.7 | 73.3 | 55.8 | 56.8 | 56.9 | 27.9 | 52.4 | 48.8 | 33.9 | 12.3 |
| | Parity | Mod. Arithmetic (w/o brackets) | Mod. Arithmetic (w/ brackets) | Mod. Arithm. (w/ brackets, no multi) |
| Transformer | 0.003 ± 0.013 | 0.018 ± 0.009 | 0.064 ± 0.003 | 0.025 ± 0.000 |
| mLSTM | 0.018 ± 0.035 | 0.027 ± 0.013 | 0.114 ± 0.000 | 0.034 ± 0.001 |
| sLSTM | 1.000 ± 0.000 | 0.124 ± 0.000 | 0.163 ± 0.015 | 0.153 ± 0.020 |
| Mamba [0,1] | 0.000 ± 0.000 | 0.066 ± 0.029 | 0.116 ± 0.007 | 0.072 ± 0.008 |
| Mamba [−1,1] | 1.000 ± 0.000 | 0.214 ± 0.027 | 0.098 ± 0.009 | 0.126 ± 0.010 |
| DeltaNet [0,1] | 0.010 ± 0.005 | 0.214 ± 0.056 | 0.162 ± 0.018 | 0.113 ± 0.009 |
| DeltaNet [−1,1] | 0.999 ± 0.006 | 0.826 ± 0.146 | 0.227 ± 0.011 | 0.129 ± 0.016 |
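For reference, the parity task scored in the tables above is simple to construct: a random bit string labeled with its sum modulo 2. A sketch of a plausible data generator (the exact sampling protocol and sequence lengths used in the experiments may differ):

```python
import random

def make_parity_example(length, rng):
    # random bit string; label is the parity of the number of 1s
    bits = [rng.randint(0, 1) for _ in range(length)]
    return bits, sum(bits) % 2

# example draw with a fixed seed for reproducibility
example_bits, example_label = make_parity_example(16, random.Random(0))
```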
| | Zero-Shot Performance in General Task | | | | | In-Context Performance in General Task | | | | | Performance in Trained Task | | |
| | Hella. | Com. | Alpa. | Ob. | GP ↑/△ ↑ | Hella. | Com. | Alpa. | Ob. | IP ↑/△ ↑ | AP ↑ | FP ↑ | Forget ↓ |
| Llama2-7b-chat | |||||||||||||
| M0 | 57.89 | 57.37 | 26.5 | 27.12 | 42.22 | 58.95 | 57.89 | 35.17 | 34.21 | 46.55 | / | / | / |
| NI-Seq-C1 | 47.37 | 40.00 | 32.00 | 31.61 | -4.48 | 24.21 | 27.89 | 28.89 | 26.84 | -19.60 | 86.10 | 83.80 | 2.30 |
| NI-Seq-G1 | 48.95 | 39.47 | 27.36 | 39.72 | -3.35 | 37.89 | 42.63 | 28.84 | 38.95 | -9.48 | 24.96 | 19.35 | 5.61 |
| NI-Seq-M1 | 52.11 | 42.63 | 31.09 | 29.51 | -3.39 | 45.79 | 31.05 | 24.58 | 33.16 | -12.91 | 59.02 | 54.32 | 4.69 |
| Llama3-8b-chat | |||||||||||||
| M0 | 81.58 | 58.42 | 22.64 | 40.04 | 50.67 | 85.26 | 63.16 | 27.42 | 49.47 | 56.32 | / | / | / |
| NI-Seq-C1 | 79.47 | 46.84 | 23.27 | 32.32 | -5.20 | 79.47 | 40.00 | 25.62 | 45.79 | -8.60 | 83.40 | 82.10 | 1.30 |
| NI-Seq-G1 | 72.63 | 35.79 | 22.05 | 29.39 | -10.70 | 67.89 | 31.05 | 19.77 | 41.58 | -16.26 | 28.29 | 21.09 | 7.19 |
| NI-Seq-M1 | 78.42 | 40.00 | 21.93 | 21.58 | -10.19 | 76.84 | 40.00 | 21.32 | 35.91 | -12.81 | 60.74 | 52.62 | 8.11 |
| Mistral-7b-instruct | |||||||||||||
| M0 | 73.68 | 60 | 24.74 | 5.02 | 40.86 | 79.47 | 66.32 | 32.36 | 37.89 | 54.01 | / | / | / |
| NI-Seq-C1 | 63.16 | 50.00 | 32.04 | 15.3 | -0.75 | 66.84 | 51.05 | 36.8 | 37.89 | -5.87 | 84.70 | 85.40 | -0.70 |
| NI-Seq-G1 | 57.37 | 45.26 | 26.3 | 13.81 | -5.18 | 57.89 | 35.79 | 32.04 | 39.47 | -12.71 | 27.62 | 19.77 | 7.85 |
| NI-Seq-M1 | 65.26 | 47.89 | 33.02 | 12.35 | -1.23 | 63.68 | 38.42 | 34.79 | 45.79 | -8.34 | 61.96 | 57.00 | 4.95 |
| Llama2-13b-chat | |||||||||||||
| M0 | 69.47 | 51.05 | 28.99 | 15.09 | 41.15 | 75.26 | 57.89 | 35.46 | 43.16 | 52.94 | / | / | / |
| NI-Seq-C1 | 65.79 | 52.63 | 34.18 | 21.51 | +2.38 | 66.32 | 48.42 | 38.48 | 38.95 | -4.90 | 83.20 | 82.26 | 0.93 |
| NI-Seq-G1 | 63.16 | 38.95 | 28.12 | 13.84 | -5.13 | 65.79 | 32.11 | 30.92 | 34.21 | -12.18 | 25.64 | 18.16 | 7.47 |
| NI-Seq-M1 | 71.58 | 49.47 | 34.10 | 28.09 | +4.66 | 70.53 | 48.42 | 36.51 | 37.37 | -4.73 | 60.10 | 56.33 | 3.76 |
| | Method | NI-Seq-G1 | | | NI-Seq-C1 | | | NI-Seq-M1 | | | TRACE | | |
| | | GP ↑ | IP ↑ | FP ↑ | GP ↑ | IP ↑ | FP ↑ | GP ↑ | IP ↑ | FP ↑ | GP ↑ | IP ↑ | FP ↑ |
| Llama2-7b-chat | M0 | 49.85 | 54.43 | 49.85 | 54.43 | 49.85 | 54.43 | 49.85 | 54.43 | ||||||
| LoraInc | 47.16 | 30.94 | 19.35 | 45.83 | 27.71 | 83.80 | 47.55 | 37.23 | 54.33 | 46.17 | 38.08 | 41.20 | |||
| +FVG | +3.34 | +25.25 | +2.84 | +3.98 | +25.53 | +1.70 | +2.65 | +15.78 | +3.52 | +6.92 | +16.17 | +12.13 | |||
| Ewc | 33.48 | 26.87 | 17.72 | 46.08 | 38.76 | 85.00 | 44.47 | 41.69 | 55.85 | 49.07 | 47.98 | 54.22 | |||
| +FVG | +15.73 | +27.18 | +0.85 | +3.11 | +15.96 | +0.37 | +6.18 | +13.99 | +0.01 | +2.21 | +6.01 | -8.77 | |||
| O-lora | 45.15 | 31.90 | 22.67 | 41.54 | 20.54 | 79.33 | 50.16 | 39.52 | 56.94 | 36.96 | 29.38 | 37.13 | |||
| +FVG | +4.89 | +23.59 | +0.11 | +8.38 | +33.93 | +6.2 | +0.29 | +14.95 | -0.42 | +14.32 | +24.61 | +8.32 | |||
| InsCL | 45.80 | 41.79 | 27.14 | 44.03 | 35.69 | 81.67 | 49.76 | 43.09 | 60.83 | 46.46 | 41.63 | 52.95 | |||
| +FVG | +2.65 | +8.30 | +0.91 | +5.00 | +16.11 | +1.23 | +0.98 | +8.32 | -2.22 | +6.70 | +11.04 | +0.92 | |||
| Llama3-8b-c | M0 | 56.61 | 60.61 | 56.61 | 60.61 | 56.61 | 60.61 | 56.61 | 60.61 | ||||||
| LoraInc | 45.51 | 39.85 | 21.10 | 51.89 | 54.63 | 82.10 | 48.00 | 47.82 | 52.63 | 50.31 | 52.61 | 27.14 | |||
| +FVG | +7.79 | +15.31 | +3.10 | +3.99 | +5.19 | +0.30 | +4.88 | +4.75 | +5.78 | +3.84 | +6.29 | +7.22 | |||
| InsCL | 46.48 | 49.46 | 28.53 | 52.11 | 57.30 | 82.50 | 49.46 | 53.50 | 60.92 | 51.87 | 51.22 | 37.32 | |||
| +FVG | +6.60 | +8.06 | -0.85 | +3.52 | +1.58 | -0.60 | +4.34 | +2.75 | -2.80 | +2.04 | +7.92 | +6.66 | |||
| Mistral-7b-i. | M0 | 47.55 | 57.51 | 47.55 | 57.51 | 47.55 | 57.51 | 47.55 | 57.51 | ||||||
| LoraInc | 42.81 | 38.82 | 19.78 | 48.00 | 53.00 | 85.4 | 49.79 | 51.02 | 57.01 | 51.91 | 51.37 | 44.68 | |||
| +FVG | +4.49 | +16.61 | +0.64 | +2.35 | +2.67 | -0.50 | -2.41 | +4.02 | +0.43 | -1.49 | +5.14 | +10.37 | |||
| InsCL | 43.46 | 51.06 | 25.78 | 40.77 | 49.49 | 83.03 | 42.38 | 52.27 | 58.01 | 50.90 | 50.39 | 55.99 | |||
| +FVG | +2.71 | +4.64 | -0.30 | +6.75 | +4.27 | +2.07 | +6.13 | +3.40 | -0.84 | -0.98 | +6.12 | +1.19 | |||
| Dataset | Source | Category | Avg len | Metric | Language | #data |
| ScienceQA | Science | Multi-Choice QA | 210 | ROUGE-L | English | 3,000 |
| FOMC | Finance | Multi-Choice QA | 51 | ROUGE-L | English | 3,000 |
| MeetingBank | Meeting | Summary | 2853 | ROUGE-L | English | 3,000 |
| C-STANCE | Social media | Multi-Choice QA | 127 | ROUGE-L | Chinese | 3,000 |
| Py150 | Github | Code generation | 422 | ROUGE-L | Python | 3,000 |
| NumGLUE-cm | Math | Math reasoning | 32 | ROUGE-L | English | 3,000 |
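All datasets above are scored with ROUGE-L, the longest-common-subsequence variant of ROUGE. A minimal self-contained sketch; β = 1.2 is a common default in open-source implementations, and whitespace tokenization is an assumption — the papers' scorer may differ in both respects:

```python
def lcs_len(a, b):
    # dynamic-programming longest common subsequence length
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l(candidate, reference, beta=1.2):
    # ROUGE-L F-score from LCS-based precision and recall
    c, r = candidate.split(), reference.split()
    l = lcs_len(c, r)
    if l == 0:
        return 0.0
    p, rec = l / len(c), l / len(r)
    return (1 + beta ** 2) * p * rec / (rec + beta ** 2 * p)
```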
| | Sequence | Task type | Num. per task |
| NI-Seq-C1 | NI195 → NI1343 → NI1310 → NI1292 → NI363 | Classification | 1,000 |
| NI-Seq-C2 | NI231 → NI1343 → NI220 → NI224 → NI273 | Classification | 1,000 |
| NI-Seq-G1 | NI618 → NI1290 → NI589 → NI511 → NI1357 | Generation | 1,000 |
| NI-Seq-G2 | NI1355 → NI141 → NI619 → NI163 → NI002 | Generation | 1,000 |
| NI-Seq-M1 | NI360 → NI363 → NI1290 → NI339 → NI1510 | Classification & Generation | 1,000 |
| NI-Seq-M2 | NI195 → NI611 → NI292 → NI488 → NI024 | Classification & Generation | 1,000 |
| TRACE | Cstance → Fomc → Meet → Py150 → SciQA → Numgluecm | Classification & Generation | 3,000 |
| Dataset | Source | Category | Avg len | Metric | Language | #data |
| NI002 | Quoref | Question Answering | 360 | ROUGE-L | English | 1000 |
| NI1290 | Xsum | Summarization | 363 | ROUGE-L | English | 1000 |
| NI1292 | Yelp review full | Sentiment Analysis | 130 | ROUGE-L | English | 1000 |
| NI141 | Odd man out | Word Semantics | 9 | ROUGE-L | English | 1000 |
| NI273 | Europarl | Text Matching | 15 | ROUGE-L | English | 1000 |
| NI024 | Cosmosqa | Question Answering | 82 | ROUGE-L | English | 1000 |
| NI1310 | Multilingual amazon reviews | Sentiment Analysis | 59 | ROUGE-L | English | 1000 |
| NI163 | Synthetic | Program Execution | 23 | ROUGE-L | English | 1000 |
| NI292 | Storycommonsense | Information Extraction | 48 | ROUGE-L | English | 1000 |
| NI1343 | Amazon us reviews | Sentiment Analysis | 70 | ROUGE-L | English | 1000 |
| NI195 | Sentiment140 | Sentiment Analysis | 14 | ROUGE-L | English | 1000 |
| NI1355 | Sentence compression | Summarization | 25 | ROUGE-L | English | 999 |
| NI589 | Amazon fine food reviews | Summarization | 84 | ROUGE-L | English | 1000 |
| NI1357 | Xlsum | Summarization | 454 | ROUGE-L | English | 1000 |
| NI360 | Numersense | Fill in The Blank | 26 | ROUGE-L | English | 1000 |
| NI339 | Record | Question Answering | 185 | ROUGE-L | English | 1000 |
| NI220 | Rocstories | Title Generation | 60 | ROUGE-L | English | 1000 |
| NI224 | Scruples | Ethics Classification | 338 | ROUGE-L | English | 1000 |
| NI611 | Mutual | Dialogue Generation | 162 | ROUGE-L | English | 1000 |
| NI1510 | Evaluation | Information Extraction | 7 | ROUGE-L | English | 1000 |
| NI231 | Iirc | Question Answering | 229 | ROUGE-L | English | 1000 |
| NI488 | Synthetic | Program Execution | 16 | ROUGE-L | English | 1000 |
| NI618 | Multilingual amazon reviews | Summarization | 47 | ROUGE-L | English | 1000 |
| NI363 | Sst2 | Sentiment Analysis | 19 | ROUGE-L | English | 1000 |
| NI619 | Ohsumed | Title Generation | 161 | ROUGE-L | English | 1000 |
| NI511 | Reddit tifu dataset | Summarization | 400 | ROUGE-L | English | 1000 |
| Task | Prompts |
| ScienceQA | "Input": "Choose an answer for the following question and give your reasons. Question: [x] Answer:", "Output": "[y]" |
| FOMC | "Input": "What is the monetary policy stance for the following text? A. dovish, B. hawkish, C. neutral. Choose one from A, B and C. Text: [x] Stance:", "Output": "[y]" |
| C-STANCE | (Translate Chinese to English) "Input": "Determine the attitude of the following text towards the specified object. Select one: A. Support, B. Oppose, C. Neutral. Output A, B or C. Text: [x1] Object: [x2] Attitude:", "Output": "[y]" |
| MeetingBank | "Input": "Write a summary of the following meeting transcripts. Meeting transcripts: [x] Summary:", "Output": "[y]" |
| Py150 | "Input": "[sic[x]", "Output": "[y]" |
| NumGLUE-cm | "Input": "Solve the following math problem. Question: [x] Answer:", "Output": "[y]" |
| NI-xxx | "Input": "Definition: In this task, you're given [Description]. Now complete the following examples Input: [x] Output:", "Output": "[y]" |
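The NI-xxx template in the last row can be instantiated mechanically. A sketch, where `definition` stands in for the bracketed [Description] placeholder and `x` for the instance input:

```python
def build_ni_prompt(definition, x):
    # fills the NI-xxx template from the table above
    return (f"Definition: In this task, you're given {definition}. "
            f"Now complete the following examples Input: {x} Output:")

prompt = build_ni_prompt("a sentence", "Hello")
```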
| | Zero-Shot Performance in General Task | | | | | In-Context Performance in General Task | | | | | Performance in Trained Task | | |
| | Hella. | Com. | Alpa. | Ob. | Avg./Del. | Hella. | Com. | Alpa. | Ob. | Avg./Del. | AP | FP | Forget |
| Llama2-7b-chat | |||||||||||||
| M0 | 57.89 | 57.37 | 26.50 | 27.12 | 42.22 | 58.95 | 57.89 | 35.17 | 34.21 | 46.56 | / | / | / |
| NI-Seq-C1 | 47.37 | 40.00 | 32.00 | 31.61 | 37.75 | 24.21 | 27.89 | 28.89 | 26.84 | 26.96 | 86.10 | 83.80 | 2.30 |
| NI-Seq-C2 | 65.26 | 55.79 | 30.99 | 23.47 | 43.88 | 67.37 | 54.21 | 32.66 | 27.89 | 45.53 | 91.80 | 88.10 | 3.70 |
| NI-Seq-G1 | 48.95 | 39.47 | 27.36 | 39.72 | 38.88 | 37.89 | 42.63 | 28.84 | 38.95 | 37.08 | 24.97 | 19.36 | 5.61 |
| NI-Seq-G2 | 48.42 | 32.11 | 26.30 | 31.05 | 34.47 | 17.89 | 27.89 | 15.37 | 38.95 | 25.03 | 49.42 | 43.37 | 6.06 |
| NI-Seq-M1 | 52.11 | 42.63 | 31.09 | 29.51 | 38.84 | 45.79 | 31.05 | 24.58 | 33.16 | 33.65 | 59.02 | 54.33 | 4.70 |
| NI-Seq-M2 | 65.26 | 43.16 | 30.08 | 37.09 | 43.90 | 44.21 | 27.89 | 27.64 | 37.37 | 34.28 | 73.19 | 55.79 | 17.40 |
| Llama3-8b-chat | |||||||||||||
| M0 | 81.58 | 58.42 | 22.64 | 40.04 | 50.67 | 85.26 | 63.16 | 27.42 | 49.47 | 56.33 | / | / | / |
| NI-Seq-C1 | 79.47 | 46.84 | 23.27 | 32.32 | 45.48 | 79.47 | 40.00 | 25.62 | 45.79 | 47.72 | 83.40 | 82.10 | 1.30 |
| NI-Seq-C2 | 80.53 | 55.79 | 23.62 | 42.19 | 50.54 | 84.21 | 50.00 | 24.70 | 44.21 | 50.78 | 91.00 | 89.90 | 1.10 |
| NI-Seq-G1 | 72.63 | 35.79 | 22.05 | 29.39 | 39.97 | 67.89 | 31.05 | 19.77 | 41.58 | 40.07 | 28.29 | 21.10 | 7.20 |
| NI-Seq-G2 | 63.68 | 41.58 | 20.9 | 15.37 | 35.38 | 66.84 | 44.74 | 15.35 | 23.58 | 37.63 | 57.77 | 55.66 | 2.11 |
| NI-Seq-M1 | 78.42 | 40.00 | 21.93 | 21.58 | 40.48 | 76.84 | 40.00 | 21.32 | 35.91 | 43.52 | 60.74 | 52.63 | 8.11 |
| NI-Seq-M2 | 80.53 | 58.42 | 20.95 | 42.28 | 50.55 | 74.21 | 51.05 | 23.93 | 19.79 | 42.26 | 74.49 | 52.34 | 22.16 |
| Mistral-7b-instruct | |||||||||||||
| M0 | 73.68 | 60.00 | 24.74 | 5.02 | 40.86 | 79.47 | 66.32 | 32.36 | 37.89 | 54.01 | / | / | / |
| NI-Seq-C1 | 63.16 | 50.00 | 32.04 | 15.30 | 40.13 | 66.84 | 51.05 | 36.80 | 37.89 | 48.15 | 84.70 | 85.40 | -0.70 |
| NI-Seq-C2 | 75.79 | 60.00 | 32.07 | 36.63 | 51.12 | 73.68 | 58.95 | 35.76 | 37.37 | 51.44 | 91.50 | 90.30 | 1.20 |
| NI-Seq-G1 | 57.37 | 45.26 | 26.30 | 13.81 | 35.69 | 57.89 | 35.79 | 32.04 | 39.47 | 41.30 | 27.63 | 19.78 | 7.85 |
| NI-Seq-G2 | 33.68 | 42.63 | 29.68 | 52.11 | 39.53 | 41.58 | 36.32 | 20.46 | 30.53 | 32.22 | 51.05 | 43.86 | 7.19 |
| NI-Seq-M1 | 65.26 | 47.89 | 33.02 | 12.35 | 39.63 | 63.68 | 38.42 | 34.79 | 45.79 | 45.67 | 61.96 | 57.01 | 4.96 |
| NI-Seq-M2 | 57.37 | 48.42 | 31.67 | 35.58 | 43.26 | 67.37 | 47.89 | 34.72 | 46.53 | 49.13 | 72.22 | 65.95 | 6.27 |
| Llama2-13b-chat | |||||||||||||
| M0 | 69.47 | 51.05 | 28.99 | 15.09 | 41.15 | 75.26 | 57.89 | 35.46 | 43.16 | 52.94 | / | / | / |
| NI-Seq-C1 | 65.79 | 52.63 | 34.18 | 21.51 | 43.53 | 66.32 | 48.42 | 38.48 | 38.95 | 48.04 | 83.20 | 82.27 | 0.93 |
| NI-Seq-G1 | 63.16 | 38.95 | 28.12 | 13.84 | 36.02 | 65.79 | 32.11 | 30.92 | 34.21 | 40.76 | 25.64 | 18.17 | 7.47 |
| NI-Seq-M1 | 71.58 | 49.47 | 34.10 | 28.09 | 45.81 | 70.53 | 48.42 | 36.51 | 37.37 | 48.21 | 60.10 | 56.34 | 3.76 |
| | Method | NI-Seq-G1 | | | NI-Seq-C1 | | | NI-Seq-M1 | | |
| | | GP↑ | IP↑ | FP↑ | GP↑ | IP↑ | FP↑ | GP↑ | IP↑ | FP↑ |
| Llama2-7b-chat | M0 | 49.85 | 54.43 | 49.85 | 54.43 | 49.85 | 54.43 | |||
| LoraInc | 47.16 | 30.94 | 19.35 | 45.83 | 27.71 | 83.80 | 47.55 | 37.23 | 54.33 | |
| +MA | +0.87 | +10.35 | +1.46 | +2.81 | +16.55 | -0.17 | +3.90 | +9.95 | +2.22 | |
| +FVG | +3.10 | +18.97 | +0.84 | +3.98 | +25.53 | +1.70 | +2.65 | +15.78 | +3.52 | |
| Ewc | 33.48 | 26.87 | 17.72 | 46.08 | 38.76 | 85.00 | 44.47 | 41.69 | 55.85 | |
| +MA | +6.58 | +11.27 | +2.59 | +1.57 | +7.64 | +0.40 | +5.54 | +7.71 | +0.92 | |
| +FVG | +15.73 | +27.18 | +0.85 | +3.11 | +15.96 | +0.37 | +6.18 | +13.99 | +0.01 | |
| Llama3-8b-c | M0 | 56.61 | 60.61 | 56.61 | 60.61 | 56.61 | 60.61 | |||
| LoraInc | 45.51 | 39.85 | 21.10 | 51.89 | 54.63 | 82.10 | 48.00 | 47.82 | 52.63 | |
| +MA | +4.39 | +8.01 | +2.07 | +1.99 | +2.42 | +2.00 | +3.67 | +5.82 | +4.70 | |
| +FVG | +7.79 | +15.31 | +3.10 | +3.99 | +5.19 | +0.30 | +4.88 | +4.75 | +5.78 | |
| | d = 20, dy = 1 | d = 200, dy = 1 |
| MGPS | 2.11 ± 0.30 | 3.66 ± 0.53 |
| DPS | 8.93 ± 0.49 | 9.15 ± 0.44 |
| PGDM | 2.44 ± 0.36 | 5.23 ± 0.38 |
| DDNM | 4.24 ± 0.37 | 7.10 ± 0.50 |
| DIFFPIR | 4.14 ± 0.42 | 8.43 ± 0.92 |
| REDDIFF | 6.70 ± 0.45 | 8.35 ± 0.39 |
| Task | FFHQ | | | | | | ImageNet | | | | | |
| | MGPS | DPS | PGDM | DDNM | DIFFPIR | REDDIFF | MGPS | DPS | PGDM | DDNM | DIFFPIR | REDDIFF |
| SR (×4) | 0.09 | 0.09 | 0.33 | 0.14 | 0.13 | 0.36 | 0.30 | 0.41 | 0.78 | 0.34 | 0.36 | 0.56 |
| SR (×16) | 0.26 | 0.24 | 0.44 | 0.30 | 0.28 | 0.51 | 0.53 | 0.50 | 0.60 | 0.70 | 0.63 | 0.83 |
| Box inpainting | 0.10 | 0.19 | 0.17 | 0.12 | 0.18 | 0.19 | 0.22 | 0.34 | 0.29 | 0.28 | 0.28 | 0.36 |
| Half mask | 0.20 | 0.24 | 0.26 | 0.22 | 0.23 | 0.28 | 0.29 | 0.44 | 0.38 | 0.38 | 0.35 | 0.44 |
| Gaussian Deblur | 0.15 | 0.16 | 0.87 | 0.19 | 0.12 | 0.23 | 0.32 | 0.35 | 1.00 | 0.45 | 0.29 | 0.54 |
| Motion Deblur | 0.13 | 0.16 | - | - | - | 0.21 | 0.22 | 0.39 | - | - | - | 0.40 |
| JPEG (QF = 2) | 0.16 | 0.39 | 1.10 | - | - | 0.32 | 0.42 | 0.63 | 1.31 | - | - | 0.51 |
| Phase retrieval | 0.11 | 0.46 | - | - | - | 0.25 | 0.47 | 0.62 | - | - | - | 0.60 |
| Nonlinear deblur | 0.23 | 0.52 | - | - | - | 0.66 | 0.44 | 0.88 | - | - | - | 0.67 |
| High dynamic range | 0.07 | 0.49 | - | - | - | 0.20 | 0.10 | 0.85 | - | - | - | 0.21 |
| Task | MGPS | RESAMPLE | PSLD |
| SR (×4) | 0.11 | 0.20 | 0.22 |
| SR (×16) | 0.30 | 0.36 | 0.35 |
| Box inpainting | 0.16 | 0.22 | 0.26 |
| Half mask | 0.25 | 0.30 | 0.31 |
| Gaussian Deblur | 0.16 | 0.15 | 0.35 |
| Motion Deblur | 0.18 | 0.19 | 0.41 |
| JPEG (QF = 2) | 0.20 | 0.26 | - |
| Phase retrieval | 0.34 | 0.41 | - |
| Nonlinear deblur | 0.26 | 0.30 | - |
| High dynamic range | 0.15 | 0.15 | - |
| Metric | MGPS | DPS | PGDM | DDNM | DIFFPIR | REDDIFF | TRAINEDDIFF |
| MAE | 0.111 ± 2e-3 | 0.117 ± 4e-3 | 0.118 ± 2e-3 | 0.103 ± 2e-3 | 0.115 ± 2e-3 | 0.171 ± 3e-3 | 0.116 ± 2e-3 |
| RMSE | 0.225 ± 4e-3 | 0.232 ± 4e-3 | 0.233 ± 4e-3 | 0.224 ± 4e-3 | 0.233 ± 4e-3 | 0.287 ± 5e-3 | 0.266 ± 3e-3 |
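The MAE and RMSE rows above use the standard elementwise error definitions; for completeness, a minimal sketch:

```python
import math

def mae(pred, target):
    # mean absolute error: (1/n) * sum |p_i - t_i|
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def rmse(pred, target):
    # root mean squared error: sqrt((1/n) * sum (p_i - t_i)^2)
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred))
```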
| Method | RBBB | LBBB | AF | SB |
| MGPS50 | 0.81 | 0.92 | 0.94 | 0.66 |
| MGPS300 | 0.90 | 0.93 | 0.92 | 0.66 |
| DPS | 0.54 | 0.84 | 0.79 | 0.50 |
| PGDM | 0.65 | 0.87 | 0.88 | 0.55 |
| DDNM | 0.71 | 0.83 | 0.86 | 0.59 |
| DIFFPIR | 0.57 | 0.80 | 0.77 | 0.53 |
| REDDIFF | 0.73 | 0.86 | 0.88 | 0.60 |
| Ground-truth | 0.99 | 0.98 | 0.94 | 0.70 |
| Dataset | Warm start | Threshold (w) | Diffusion steps | ℓk | Learning rate | Gradient steps Mk |
| FFHQ | ✓ | ⌊3n/4⌋ | n ∈ {50, 100, 300} | ⌊k/2⌋ | 0.03 | 20 if k ≥ n − 5; 20 if k mod 10 = 0; 2 otherwise |
| FFHQ (LDM) | ✓ | ⌊3n/4⌋ | n ∈ {50, 100, 300} | ⌊k/2⌋ if k > ⌊n/2⌋ | 0.03 | 20 if k ≥ n − 5; 10 if k mod 10 = 0; 5 otherwise |
| ImageNet | ✓ | ⌊3n/4⌋ | n ∈ {50, 100, 300} | ⌊k/2⌋ | 0.03 | 20 if k ≥ n − 5; 10 if k mod 20 = 0; 2 otherwise |
| Gaussian Mixture | ✗ | - | n = 300 | ⌊k/2⌋ | 0.1 | 20 if k ≥ n − 5; 20 if k mod 10 = 0; 2 otherwise |
| PTB-XL (ECG) | ✓ | ⌊3n/4⌋ | n ∈ {50, 300} | ⌊k/2⌋ | 0.03 | 20 if k ≥ n − 5; 5 if k mod 20 = 0; 5 otherwise |
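The gradient-step schedules Mk above are piecewise constants in the diffusion step k. As read from the FFHQ row (a reconstruction of slightly garbled source formatting; verify against the original paper before reuse):

```python
def gradient_steps(k, n):
    # M_k for FFHQ: 20 near the end of sampling and every 10th step, else 2
    if k >= n - 5:
        return 20
    if k % 10 == 0:
        return 20
    return 2

def ell(k):
    # the common choice in the table: l_k = floor(k / 2)
    return k // 2
```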
| Metric | FFHQ | | | ImageNet | | | FFHQ-LDM | |
| | MGPS | DDNM | DIFFPIR | MGPS | DDNM | DIFFPIR | MGPS | RESAMPLE |
| Half mask | ||||||||
| FID | 27.0 | 38.6 | 45.2 | 40.0 | 50.0 | 57.0 | 49.5 | 66.6 |
| LPIPS | 0.19 ± 0.00 | 0.23 ± 0.00 | 0.25 ± 0.00 | 0.30 ± 0.00 | 0.38 ± 0.01 | 0.40 ± 0.01 | 0.26 ± 0.00 | 0.30 ± 0.00 |
| PSNR | 15.9 ± 0.2 | 16.3 ± 0.2 | 16.1 ± 0.3 | 15.0 ± 0.1 | 16.0 ± 0.1 | 15.8 ± 0.1 | 15.6 ± 0.1 | 15.7 ± 0.1 |
| SSIM | 0.70 ± 0.00 | 0.74 ± 0.00 | 0.72 ± 0.01 | 0.63 ± 0.00 | 0.68 ± 0.01 | 0.67 ± 0.01 | 0.69 ± 0.00 | 0.67 ± 0.00 |
| | MGPS | DPS | DIFFPIR | MGPS | DPS | DIFFPIR | MGPS | RESAMPLE |
| Motion deblur | ||||||||
| FID | 29.7 | 36.7 | 77.0 | 35.3 | 55.0 | 87.3 | 44.6 | 51.8 |
| LPIPS | 0.12 ± 0.00 | 0.17 ± 0.00 | 0.22 ± 0.00 | 0.20 ± 0.01 | 0.40 ± 0.01 | 0.39 ± 0.01 | 0.19 ± 0.00 | 0.20 ± 0.00 |
| PSNR | 26.7 ± 0.1 | 24.1 ± 0.1 | 27.4 ± 0.1 | 24.4 ± 0.1 | 21.4 ± 0.1 | 24.2 ± 0.1 | 26.4 ± 0.1 | 26.7 ± 0.1 |
| SSIM | 0.77 ± 0.00 | 0.70 ± 0.01 | 0.71 ± 0.00 | 0.67 ± 0.01 | 0.55 ± 0.01 | 0.61 ± 0.00 | 0.76 ± 0.00 | 0.72 ± 0.00 |
| JPEG (QF = 2) | ||||||||
| FID | 31.6 | 87.6 | 109 | 61.4 | 128.8 | 92.8 | 45.0 | 65.3 |
| LPIPS | 0.15 ± 0.00 | 0.37 ± 0.00 | 0.33 ± 0.01 | 0.40 ± 0.01 | 0.60 ± 0.01 | 0.61 ± 0.00 | 0.21 ± 0.00 | 0.26 ± 0.01 |
| PSNR | 25.2 ± 0.1 | 19.0 ± 0.2 | 24.5 ± 0.1 | 22.2 ± 0.1 | 16.7 ± 0.1 | 22.2 ± 0.1 | 24.6 ± 0.1 | 24.8 ± 0.1 |
| SSIM | 0.73 ± 0.01 | 0.55 ± 0.02 | 0.70 ± 0.00 | 0.60 ± 0.01 | 0.41 ± 0.02 | 0.60 ± 0.01 | 0.71 ± 0.00 | 0.66 ± 0.01 |
| Nonlinear deblur | ||||||||
| FID | 50.8 | 164 | 88.4 | 113 | 272 | 112 | 69.2 | 71.5 |
| LPIPS | 0.23 ± 0.01 | 0.51 ± 0.02 | 0.68 ± 0.01 | 0.43 ± 0.01 | 0.83 ± 0.01 | 0.66 ± 0.01 | 0.26 ± 0.01 | 0.32 ± 0.01 |
| PSNR | 24.3 ± 0.2 | 16.2 ± 0.5 | 21.9 ± 0.1 | 22.2 ± 0.2 | 9.9 ± 0.4 | 20.7 ± 0.2 | 23.9 ± 0.1 | 24.2 ± 0.1 |
| SSIM | 0.70 ± 0.01 | 0.45 ± 0.02 | 0.42 ± 0.01 | 0.58 ± 0.01 | 0.41 ± 0.01 | 0.241 ± 0.01 | 0.69 ± 0.01 | 0.67 ± 0.01 |
| High dynamic range | ||||||||
| FID | 20.9 | 153 | 47.5 | 20.2 | 316 | 35.7 | 44.2 | 38.7 |
| LPIPS | 0.08 ± 0.01 | 0.40 ± 0.04 | 0.20 ± 0.01 | 0.11 ± 0.01 | 0.83 ± 0.02 | 0.20 ± 0.01 | 0.14 ± 0.00 | 0.12 ± 0.00 |
| PSNR | 27.0 ± 0.1 | 18.7 ± 0.2 | 21.7 ± 0.1 | 26.3 ± 0.2 | 9.9 ± 0.2 | 21.9 ± 0.1 | 25.5 ± 0.1 | 26.0 ± 0.1 |
| SSIM | 0.83 ± 0.01 | 0.55 ± 0.04 | 0.72 ± 0.01 | 0.83 ± 0.01 | 0.23 ± 0.02 | 0.71 ± 0.01 | 0.80 ± 0.01 | 0.83 ± 0.01 |
| Task | MGPS50 | MGPS100 | MGPS300 | DPS | PGDM | DDNM | DIFFPIR | REDDIFF |
| LPIPS ↓ | ||||||||
| SR (×4) | 0.13 | 0.10 | 0.09 | 0.09 | 0.33 | 0.14 | 0.13 | 0.36 |
| SR (×16) | 0.27 | 0.26 | 0.26 | 0.24 | 0.44 | 0.30 | 0.28 | 0.51 |
| Box inpainting | 0.16 | 0.12 | 0.10 | 0.19 | 0.17 | 0.12 | 0.18 | 0.19 |
| Half mask | 0.24 | 0.22 | 0.20 | 0.24 | 0.26 | 0.22 | 0.23 | 0.28 |
| Gaussian Deblur | 0.21 | 0.18 | 0.15 | 0.16 | 0.87 | 0.19 | 0.12 | 0.26 |
| Motion Deblur | 0.19 | 0.15 | 0.13 | 0.16 | - | - | - | 0.21 |
| JPEG (QF = 2) | 0.20 | 0.17 | 0.16 | 0.39 | 1.10 | - | - | 0.32 |
| Phase retrieval | 0.20 | 0.14 | 0.11 | 0.46 | - | - | - | 0.25 |
| Nonlinear deblur | 0.23 | 0.23 | 0.23 | 0.52 | - | - | - | 0.66 |
| High dynamic range | 0.13 | 0.09 | 0.07 | 0.49 | - | - | - | 0.20 |
| PSNR ↑ | ||||||||
| SR (×4) | 27.83 | 27.79 | 27.79 | 28.24 | 23.34 | 29.52 | 27.17 | 27.25 |
| SR (×16) | 20.45 | 20.34 | 20.22 | 20.67 | 17.65 | 22.43 | 20.75 | 21.91 |
| Box inpainting | 21.55 | 22.22 | 22.68 | 18.39 | 21.13 | 22.35 | 21.96 | 21.79 |
| Half mask | 15.10 | 15.32 | 15.54 | 14.82 | 16.03 | 16.16 | 15.17 | 16.21 |
| Gaussian Deblur | 25.09 | 25.19 | 25.89 | 24.20 | 13.36 | 26.69 | 25.89 | 26.72 |
| Motion Deblur | 26.07 | 26.64 | 26.48 | 24.24 | - | - | - | 27.58 |
| JPEG (QF = 2) | 25.00 | 25.23 | 24.94 | 18.50 | 12.76 | - | - | 24.42 |
| Phase retrieval | 24.20 | 26.60 | 27.25 | 14.87 | - | - | - | 24.85 |
| Nonlinear deblur | 24.16 | 24.21 | 24.37 | 15.89 | - | - | - | 21.97 |
| High dynamic range | 24.77 | 26.07 | 27.74 | 16.83 | - | - | - | 21.25 |
| SSIM ↑ | ||||||||
| SR (×4) | 0.81 | 0.80 | 0.79 | 0.81 | 0.50 | 0.85 | 0.77 | 0.70 |
| SR (×16) | 0.58 | 0.57 | 0.55 | 0.59 | 0.38 | 0.67 | 0.60 | 0.62 |
| Box inpainting | 0.80 | 0.81 | 0.82 | 0.76 | 0.70 | 0.83 | 0.80 | 0.70 |
| Half mask | 0.69 | 0.70 | 0.71 | 0.66 | 0.56 | 0.73 | 0.67 | 0.65 |
| Gaussian Deblur | 0.71 | 0.73 | 0.75 | 0.69 | 0.14 | 0.77 | 0.73 | 0.76 |
| Motion Deblur | 0.77 | 0.78 | 0.77 | 0.71 | - | - | - | 0.71 |
| JPEG (QF = 2) | 0.75 | 0.74 | 0.73 | 0.51 | 12.76 | - | - | 0.71 |
| Phase retrieval | 0.74 | 0.78 | 0.79 | 0.43 | - | - | - | 0.61 |
| Nonlinear deblur | 0.70 | 0.70 | 0.69 | 0.46 | - | - | - | 0.42 |
| High dynamic range | 0.79 | 0.81 | 0.86 | 0.48 | - | - | - | 0.71 |
| Task | MGPS50 | MGPS100 | MGPS300 | DPS | PGDM | DDNM | DIFFPIR | REDDIFF |
| LPIPS ↓ | ||||||||
| SR (×4) | 0.36 | 0.33 | 0.30 | 0.41 | 0.78 | 0.34 | 0.36 | 0.56 |
| SR (×16) | 0.58 | 0.55 | 0.53 | 0.50 | 0.60 | 0.70 | 0.63 | 0.83 |
| Box inpainting | 0.31 | 0.26 | 0.22 | 0.34 | 0.29 | 0.28 | 0.28 | 0.36 |
| Half mask | 0.39 | 0.34 | 0.29 | 0.44 | 0.38 | 0.38 | 0.35 | 0.44 |
| Gaussian Deblur | 0.35 | 0.29 | 0.32 | 0.35 | 1.00 | 0.45 | 0.29 | 0.52 |
| Motion Deblur | 0.35 | 0.25 | 0.22 | 0.39 | - | - | - | 0.40 |
| JPEG (QF = 2) | 0.50 | 0.46 | 0.42 | 0.63 | 1.31 | - | - | 0.51 |
| Phase retrieval | 0.54 | 0.52 | 0.47 | 0.62 | - | - | - | 0.60 |
| Nonlinear deblur | 0.49 | 0.47 | 0.44 | 0.88 | - | - | - | 0.67 |
| High dynamic range | 0.22 | 0.15 | 0.10 | 0.85 | - | - | - | 0.21 |
| PSNR ↑ | ||||||||
| SR (×4) | 24.68 | 24.70 | 24.77 | 23.52 | 15.67 | 25.55 | 24.26 | 24.24 |
| SR (×16) | 18.56 | 18.42 | 18.04 | 18.22 | 15.80 | 20.43 | 19.37 | 19.95 |
| Box inpainting | 17.52 | 17.95 | 18.25 | 14.34 | 17.35 | 20.08 | 19.77 | 18.90 |
| Half mask | 14.98 | 15.14 | 15.60 | 14.65 | 14.36 | 17.06 | 15.79 | 16.96 |
| Gaussian Deblur | 22.56 | 21.96 | 20.10 | 21.20 | 9.93 | 23.29 | 22.10 | 23.27 |
| Motion Deblur | 23.91 | 25.03 | 24.50 | 21.59 | - | - | - | 24.43 |
| JPEG (QF = 2) | 21.96 | 22.17 | 22.44 | 16.11 | 5.29 | - | - | 22.15 |
| Phase retrieval | 16.36 | 16.94 | 18.10 | 14.40 | - | - | - | 15.78 |
| Nonlinear deblur | 21.89 | 22.23 | 22.36 | 8.49 | - | - | - | 20.76 |
| High dynamic range | 23.93 | 25.64 | 27.04 | 9.32 | - | - | - | 22.88 |
| SSIM ↑ | ||||||||
| SR (×4) | 0.65 | 0.66 | 0.66 | 0.60 | 0.23 | 0.70 | 0.63 | 0.61 |
| SR (×16) | 0.41 | 0.38 | 0.35 | 0.40 | 0.24 | 0.50 | 0.46 | 0.47 |
| Box inpainting | 0.71 | 0.74 | 0.76 | 0.70 | 0.62 | 0.78 | 0.74 | 0.67 |
| Half mask | 0.61 | 0.63 | 0.65 | 0.55 | 0.53 | 0.69 | 0.63 | 0.62 |
| Gaussian Deblur | 0.56 | 0.53 | 0.44 | 0.51 | 0.07 | 0.60 | 0.51 | 0.57 |
| Motion Deblur | 0.64 | 0.69 | 0.65 | 0.55 | - | - | - | 0.60 |
| JPEG (QF = 2) | 0.59 | 0.60 | 0.60 | 0.39 | 0.01 | - | - | 0.59 |
| Phase retrieval | 0.37 | 0.40 | 0.43 | 0.29 | - | - | - | 0.26 |
| Nonlinear deblur | 0.57 | 0.58 | 0.58 | 0.24 | - | - | - | 0.41 |
| High dynamic range | 0.75 | 0.81 | 0.84 | 0.25 | - | - | - | 0.73 |
| Task | LPIPS ↓ | | | PSNR ↑ | | | SSIM ↑ | | |
| | MGPS | RESAMPLE | PSLD | MGPS | RESAMPLE | PSLD | MGPS | RESAMPLE | PSLD |
| SR (×4) | 0.11 | 0.20 | 0.22 | 28.46 | 26.08 | 25.53 | 0.83 | 0.69 | 0.70 |
| SR (×16) | 0.30 | 0.36 | 0.35 | 20.64 | 21.09 | 21.42 | 0.57 | 0.56 | 0.63 |
| Box inpainting | 0.16 | 0.22 | 0.26 | 22.94 | 18.80 | 20.39 | 0.79 | 0.75 | 0.66 |
| Half mask | 0.25 | 0.30 | 0.31 | 15.11 | 14.59 | 14.75 | 0.69 | 0.67 | 0.61 |
| Gaussian Deblur | 0.16 | 0.15 | 0.35 | 27.57 | 27.44 | 19.95 | 0.79 | 0.75 | 0.47 |
| Motion Deblur | 0.18 | 0.19 | 0.41 | 26.49 | 26.85 | 18.14 | 0.77 | 0.72 | 0.39 |
| JPEG (QF = 2) | 0.20 | 0.26 | - | 24.75 | 24.33 | - | 0.72 | 0.67 | - |
| Phase retrieval | 0.34 | 0.41 | - | 22.21 | 19.05 | - | 0.62 | 0.47 | - |
| Nonlinear deblur | 0.26 | 0.30 | - | 23.79 | 24.44 | - | 0.70 | 0.68 | - |
| High dynamic range | 0.15 | 0.15 | - | 25.17 | 25.42 | - | 0.79 | 0.81 | - |
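The tables above report PSNR in dB (higher is better) alongside SSIM and LPIPS. For reference, here is a minimal sketch of the standard PSNR definition, assuming images normalized to [0, 1]; the function name `psnr` is illustrative and not taken from any of the compared methods' code:

```python
import numpy as np

def psnr(reference, estimate, max_val=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, max_val]."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# A reconstruction off by a constant 0.1 on a [0, 1] image gives 20 dB.
ref = np.zeros((8, 8))
est = np.full((8, 8), 0.1)
print(round(psnr(ref, est), 2))  # prints 20.0
```

A 1-dB gap in the tables thus corresponds to roughly a 21% difference in mean squared error.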
| Task | ECG size (seconds) | Total leads | Used leads | Input shape |
| MB | 2.56 | 12 | I, II, III, V1-6 | 256 × 9 |
| ML | 10 | 12 | I, II, III, V1-6 | 1024 × 9 |
| Split | All | RBBB | LBBB | AF | SB |
| Train | 17,403 | 432 | 428 | 1211 | 503 |
| Val | 2,183 | 55 | 54 | 151 | 64 |
| Test | 2,203 | 54 | 54 | 152 | 64 |
| Hyper-parameter | Value |
| Residual layers | 4 |
| Pooling factor | [1, 2, 2] |
| Feature expansion | 2 |
| Diffusion embedding dim. 1 | 128 |
| Diffusion embedding dim. 2 | 512 |
| Diffusion embedding dim. 3 | 512 |
| Diffusion steps | 1000 |
| Optimizer | Adam |
| Number of iterations | 150k |
| Loss function | MSE |
| Learning rate | 0.002 |
| Batch size | 128 |
| Number of parameters | 16 million |
| MGPS50 | MGPS300 | DPS | PGDM | DDNM | DIFFPIR | REDDIFF |
| 50 | 300 | 400 | 200 | 200 | 500 | 400 |
| Hyper-parameter | Value |
| Blocks | 4 |
| Layers per block | [3, 4, 6, 3] |
| Expansion | 4 |
| Stride | 1 |
| Optimizer | Adam |
| Learning rate | 0.001 |
| Batch size | 0.001 |
| Epochs | 100 |
| Metric | MGPS | DDNM | TRAINEDDIFF-MB | TRAINEDDIFF-RMB |
| MAE | 0.111 ± 2e-3 | 0.103 ± 2e-3 | 0.116 ± 2e-3 | 0.0879 ± 2e-3 |
| RMSE | 0.225 ± 4e-3 | 0.224 ± 4e-3 | 0.266 ± 3e-3 | 0.217 ± 6e-3 |
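The MAE and RMSE figures above follow the standard definitions over the reconstructed signals. A minimal sketch (function names are illustrative, not the paper's code):

```python
import numpy as np

def mae(y_true, y_pred):
    # Mean absolute error: average magnitude of the residuals.
    return float(np.mean(np.abs(y_true - y_pred)))

def rmse(y_true, y_pred):
    # Root mean squared error: penalizes large residuals more than MAE.
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

y_true = np.array([0.0, 1.0, 2.0])
y_pred = np.array([0.1, 0.9, 2.3])
print(mae(y_true, y_pred), rmse(y_true, y_pred))
```

Because RMSE weights outliers more heavily, RMSE ≥ MAE always holds, as the table rows reflect.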
| Parameter | Backprop Model | Hidden Layer | Output Layer |
| NEpochs | 100 | 100 | 100 |
| NBatch | 1024 | 1024* | 1024* |
| Optimizer | Adam | Adam | Adam |
| Learning rate η | 0.001 | 0.002 | 0.003 |
| Weight decay | 0.0 | 0.00035 | 0.00015 |
| Number of bins per dim. | - | 20** | 20** |
| Binning ranges | - | (-20,20) | adaptive |
| Objective function | cross-entropy | trivariate γ | γ = (-0.2, 0.1, 1.0, 0.1, 0.0)T |
| Atom (ΠAntichain) | Meaning |
| Π{F}{C}{L} | information that is redundant in all of the three sources |
| Π{F}{C} | redundant information between feedforward and context input |
| Π{F}{L} | redundant information between feedforward and lateral input |
| Π{C}{L} | redundant information between context and lateral input |
| Π{F}{CL} | information provided both by the feedforward input and by the synergy between the context and the lateral input |
| Π{C}{FL} | information provided both by the context input and by the synergy between the feedforward and the lateral input |
| Π{L}{FC} | information provided both by the lateral input and by the synergy between the feedforward and the context input |
| Π{F} | unique information provided by the feedforward input |
| Π{C} | unique information provided by the context input |
| Π{L} | unique information provided by the lateral input |
| Π{FC}{FL}{CL} | information redundantly provided by each of the pairwise synergies |
| Π{FC}{FL} | information provided both by the synergy between the feedforward and the context input and by the synergy between the feedforward and the lateral input |
| Π{FC}{CL} | information provided both by the synergy between the feedforward and the context input and by the synergy between the context and the lateral input |
| Π{FL}{CL} | information provided both by the synergy between the feedforward and the lateral input and by the synergy between the context and the lateral input |
| Π{FC} | synergy between the feedforward input and the context input |
| Π{FL} | synergy between the feedforward input and the lateral input |
| Π{CL} | synergy between the context input and the lateral input |
| Π{FCL} | information encoded by all input sources synergistically |
| parameter | definition | value | interpretation of local information processing goal |
| γ{F}{C} | redundancy between F and C | 0.98 | maximize information that is provided both by feedforward AND context but not by other neurons—and thus relevant for the task |
| γ{F}{L} | redundancy between F and L | -0.99 | minimize information that is redundant with other neurons, but NOT provided by the context—and thus not relevant for the task |
| γ{F}{C}{L} | redundancy between F, C and L | 0.33 | moderately maximize information that is redundant with other neurons AND the context—thus relevant, but already encoded |
| γ{FC}{FL} | redundancy between synergies requiring F | -0.97 | minimize synergistic information that requires F and either C or L to be recovered |
| Heuristic Models | Baseline Models | Optimized Models |
| Lateral Heuristic | Random Projection | Lateral Optimized |
| Context Heuristic | Backpropagation | Context Optimized |
| Trivariate Heuristic | Readout Only - no hidden layer | Trivariate Optimized |
| Trivariate Optimized Sparse | ||
| Trivariate with Feedback |
| Category | Variable | Meaning |
| Main outcome | GPA_year1 | 1st year GPA |
| goodstandi~1 | Good standing in year 1 | |
| prob_year1 | On probation in year 1 | |
| credits_ea~1 | Credits earned in year 1 | |
| mathsci | Number of math and science credits attempted | |
| Personal backgrounds | female | Sex (Female dummy) |
| age | Age | |
| english | Whether Mother tongue is English | |
| gpa0 | High school GPA | |
| Other covariates | hcom | Whether Lives at Home |
| chooseUTM | Whether At first choice school | |
| work1 | Whether Plans to work while in school | |
| dad_edn | Father education | |
| mom_edn | Mother education | |
| lm_rarely | Whether Rarely puts off studying for tests | |
| lm_never | Whether Never puts off studying for tests | |
| lastmin | How often studying is left until the last minute for tests and exams | |
| graddeg | Whether Wants more than a BA | |
| finish4 | Whether Intends to finish in 4 years |
| “Irrelevant” Tasks | “Aligned” Tasks | “Contradictory” Tasks | ||||
| Multi-Task | Unlearning | Multi-Task | Unlearning | Multi-Task | Unlearning | |
| Best λ | 1.4 | -0.6 | 0.2 | 0.0 | 0.6 | -1.0 |
| T1Acc | 91.83 (-3.06) | 95.02 (-0.56) | 95.62 (0.00) | 95.20 (-0.42) | 79.54 (-16.70) | 94.21 (-0.61) |
| T2Acc | 88.40 (-5.65) | 50.34 (-45.24) | 92.46 (-3.23) | 90.51 (-5.18) | 62.52 (-33.72) | 4.97 (-89.85) |
| Fine-Tuning | ΨT1* | ΨT2* | Searching λ1, λ2 in [−2,3] | |
| (λ1, λ2) | N/A | (1,0) | (0,1) | (1.2, −0.6) |
| T' Acc | 92.21 | 88.10 | 45.06 | 91.74 |
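The λ values searched above scale task vectors ΔΨT = ΨT* − Ψ(0) that are added to the pre-trained weights; negative λ steers the model away from a task (unlearning). A minimal sketch of this arithmetic, assuming parameters flattened into NumPy vectors (all names here are illustrative, not the paper's implementation):

```python
import numpy as np

def task_vector(pretrained, finetuned):
    # ΔΨ_T = Ψ_T* − Ψ(0): the parameter delta induced by fine-tuning on task T.
    return finetuned - pretrained

def apply_task_vectors(pretrained, deltas, lambdas):
    # Ψ = Ψ(0) + Σ_i λ_i · ΔΨ_{T_i}
    params = pretrained.copy()
    for lam, delta in zip(lambdas, deltas):
        params += lam * delta
    return params

psi0 = np.zeros(4)                          # toy "pre-trained" weights
psi_t1 = np.array([1.0, 0.0, 0.0, 0.0])     # toy fine-tuned weights, task 1
psi_t2 = np.array([0.0, 1.0, 0.0, 0.0])     # toy fine-tuned weights, task 2
deltas = [task_vector(psi0, psi_t1), task_vector(psi0, psi_t2)]
merged = apply_task_vectors(psi0, deltas, lambdas=[1.2, -0.6])
```

Searching (λ1, λ2) over a grid such as [−2, 3], as in the table, then amounts to evaluating `apply_task_vectors` at each grid point and keeping the best validation accuracy.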
| λ | 0 (baseline) | -0.2 | -0.4 | -0.6 | -0.8 | -1 |
| THP1 | 0.2213 | 0.2211 | 0.1732 | 0.1866 | 0.1572 | 0.1389 (37.23% ↓) |
| THP2 | 0.2302 | 0.2032 | 0.2111 | 0.2034 | 0.1695 | 0.1503 (34.71% ↓) |
| TPP | 0.1983 | 0.1888 | 0.1877 | 0.1802 | 0.1932 | 0.1683 (15.13% ↓) |
| λ | 0 (baseline) | -0.2 | -0.4 | -0.6 | -0.8 | -1 |
| THP1 | 0.2432 | 0.2033 | 0.1857 | 0.1665 | 0.1439 | 0.1568 (35.53% ↓) |
| THP2 | 0.2335 | 0.1932 | 0.2065 | 0.1813 | 0.1664 | 0.1772 (24.11% ↓) |
| TPP | 0.2111 | 0.2001 | 0.1884 | 0.1963 | 0.1849 | 0.1819 (13.83% ↓) |
| λ | 0 (baseline) | -0.2 | -0.4 | -0.6 | -0.8 | -1 |
| THP1 | 0.2573 | 0.1989 | 0.1933 | 0.1888 | 0.1572 | 0.1142 (55.61% ↓) |
| THP2 | 0.2688 | 0.2113 | 0.1993 | 0.1938 | 0.1622 | 0.1563 (52.29% ↓) |
| TPP | 0.1942 | 0.1825 | 0.1644 | 0.1687 | 0.1592 | 0.1541 (20.65% ↓) |
| Notations | Annotation |
| X, xi, Xn, yn | X is the input data, which contains P tokens. xi is the i-th token of X. Xn is the n-th input data with yn as the corresponding label. |
| Ψ | Ψ = {{a(l)}Pl=1, WO, WV, WK, WQ} denotes the set of all the model parameters. a(l) ∈ Rm and WO ∈ Rm×ma are the weights in the MLP layer. WV ∈ Rma×d, WK, WQ ∈ Rmb×d are weights in the self-attention layer. |
| Ψ(0), ΨT*, ΔΨT | Ψ(0) is the pre-trained model. ΨT* is the fine-tuned model on a given task T. ΔΨT is the task vector of the task T, which is computed as ΔΨT = ΨT* - Ψ(0). |
| μT, vj | μT is the discriminative pattern of the task T. vj is the j-th task-irrelevant pattern, j ∈ [M]. |
| δ*, δ# | δ* is the average fraction of label-relevant pattern in the input data. δ# is the average fraction of confusion pattern in the input data. |
| q1(t), ζ1,t, pn(t) | q1(t) = μ1T W(t) μ1 denotes the value of the product, where the patterns on both sides of W(t) are the same. ζ1,t denotes the modified value embedding of μ1 at the t-th iteration. pn(t) refers to the summation of attention weights where the key and the query are the same discriminative pattern. |
| Wn,l, Un,l | Wn,l and Un,l respectively denote the sets of positive and negative neurons for which the ReLU activation is active with xln as the query. |
| Bb | Bb is the SGD batch at the b-th iteration. |
| O(), Ω(), Θ() | We follow the convention that f(x) = O(g(x)) (resp. Ω(g(x)), Θ(g(x))) means that f(x) grows at most, at least, or exactly in the order of g(x), respectively. |
| a | a = |a(l)i| = 1/√m for i ∈ [m]. |
| ≥, ≤ | f(x) ≥ g(x) (or f(x) ≤ g(x)) means that f(x) ≥ Ω(g(x)) (or f(x) ≤ O(g(x))). |
| Model | Base | Params | GSM8k | MATH |
| Proprietary models | ||||
| GPT-o1 (OpenAI, 2023) | - | - | - | 94.8 |
| GPT-o1-mini | - | - | - | 90.0 |
| Gemini-1.5 002 | - | - | - | 86.5 |
| Claude 3.5 Sonnet (Bai et al., 2022) | - | - | 96.4 | 71.1 |
| GPT-4o-2024-0513 | - | - | 96.1 | 76.6 |
| GPT-4-turbo-0125 (OpenAI, 2023) | - | - | 94.2 | 64.5 |
| GPT-4-0314 | - | - | 94.7 | 52.6 |
| GPT-4 (original version) | - | - | 92.0 | 42.5 |
| Baichuan-3 (Yang et al., 2023) | - | - | 88.2 | 49.2 |
| GLM-4 (GLM et al., 2024) | - | - | 87.6 | 47.9 |
| Gemini Pro (Team, 2023) | - | - | 86.5 | 32.6 |
| Claude2 | - | - | 85.2 | 32.5 |
| GPT-3.5-Turbo | - | - | 81.6 | 43.1 |
| PaLM2 (Anil et al., 2023) | - | - | 80.7 | 34.3 |
| Minerva (Lewkowycz et al., 2022) | - | 540B | 58.8 | 33.6 |
| GPT3.5 (Brown et al., 2020a) | - | - | 57.1 | - |
| Open-Source Models (0.1B-3B) | ||||
| GPT-2-Small (Brown et al., 2020b) | - | 0.1B | 6.9 | 5.4 |
| GPT-2-Medium (Brown et al., 2020b) | - | 0.3B | 11.2 | 6.2 |
| GPT-2-Large (Brown et al., 2020b) | - | 0.7B | 13.6 | 6.4 |
| GPT-2-XL (Brown et al., 2020b) | - | 1.5B | 15.4 | 6.9 |
| WizardMath-GPT | GPT-2-Small | 0.1B | 26.4 | 12.3 |
| WizardMath-GPT | GPT-2-Medium | 0.3B | 38.7 | 15.6 |
| WizardMath-GPT | GPT-2-Large | 0.7B | 50.1 | 21.2 |
| WizardMath-GPT | GPT-2-XL | 1.5B | 58.9 | 25.4 |
| WizardMath-Qwen | Qwen-Math-2.5 | 1.5B | 86.7 | 68.6 |
| Llama-3.2-Instruct (Dubey et al., 2024) | Llama 3.2 | 1B | 44.4 | 30.6 |
| WizardMath-Llama | Llama 3.2 | 1B | 63.3 | 33.5 |
| Llama-3.2-Instruct | Llama 3.2 | 3B | 77.7 | 48.0 |
| WizardMath-Llama | Llama 3.2 | 3B | 85.5 | 49.9 |
| Open-Source Models (7B-8B) | ||||
| Llama-2 (Touvron et al., 2023b) | - | 7B | 14.6 | 2.5 |
| MAmmoTH-CoT (Yue et al., 2023) | Llama-2 | 7B | 50.5 | 10.4 |
| MathScale (Tang et al., 2024) | Llama-2 | 7B | 66.3 | 31.1 |
| MetaMath (Yu et al., 2023b) | Llama-2 | 7B | 66.5 | 19.8 |
| MuggleMath (Li et al., 2023a) | Llama-2 | 7B | 68.4 | - |
| Skywork-Math (Zeng et al., 2024) | Llama-2 | 7B | 72.9 | 47.7 |
| Math-Shepherd (Wang et al., 2024a) | Llama-2 | 7B | 73.2 | 21.6 |
| Xwin-Math (Li et al., 2024a) | Llama-2 | 7B | 82.6 | 40.6 |
| WizardMath-Llama | Llama-2 | 7B | 84.1 | 43.5 |
| Mistral-v0.1 (Jiang et al., 2023) | - | 7B | 42.9 | 12.9 |
| MathScale (Tang et al., 2024) | Mistral-v0.1 | 7B | 74.8 | 35.2 |
| MMIQC (Liu & Yao, 2024) | Mistral-v0.1 | 7B | 74.8 | 36.0 |
| MetaMath (Yu et al., 2023b) | Mistral-v0.1 | 7B | 77.9 | 28.6 |
| KPMath-Plus (Huang et al., 2024) | Mistral-v0.1 | 7B | 82.1 | 46.8 |
| DART-Math (Tong et al., 2024) | Mistral-v0.1 | 7B | 82.6 | 43.5 |
| Skywork-Math (Zeng et al., 2024) | Mistral-v0.1 | 7B | 83.9 | 51.2 |
| Math-Shepherd (Wang et al., 2024a) | Mistral-v0.1 | 7B | 84.1 | 33.0 |
| MAmmoTH2-Plus (Yue et al., 2024) | Mistral-v0.1 | 7B | 84.7 | 45.0 |
| JiuZhang3.0 (Zhou et al., 2024) | Mistral-v0.1 | 7B | 88.6 | 52.8 |
| Xwin-Math (Li et al., 2024a) | Mistral-v0.1 | 7B | 89.2 | 43.7 |
| WizardMath-Mistral | Mistral-v0.1 | 7B | 90.7 | 55.4 |
| WizardMath-Mistral | Mistral-v0.3 | 7B | 90.4 | 55.6 |
| WizardMath-Mathstral | Mathstral-v0.1 | 7B | 93.8 | 70.9 |
| WizardMath-Qwen | Qwen2.5-Math | 7B | 93.9 | 77.8 |
| WizardMath-Qwen | Qwen2.5 | 7B | 94.0 | 74.5 |
| DeepSeekMath-Base (Shao et al., 2024) | - | 7B | 64.2 | 36.2 |
| NuminaMath-CoT (Li et al., 2024b) | DeepSeekMath | 7B | 75.4 | 55.2 |
| MMIQC (Liu & Yao, 2024) | DeepSeekMath | 7B | 79.0 | 45.3 |
| KPMath-Plus (Huang et al., 2024) | DeepSeekMath | 7B | 83.9 | 48.8 |
| DeepSeekMath-RL (Shao et al., 2024) | DeepSeekMath | 7B | 88.2 | 51.7 |
| DART-Math (Tong et al., 2024) | DeepSeekMath | 7B | 88.2 | 52.9 |
| WizardMath-DeepSeek | DeepSeekMath | 7B | 91.0 | 64.6 |
| MetaMath (Yu et al., 2023b) | Llama 3 | 8B | 77.3 | 20.6 |
| MMIQC (Liu & Yao, 2024) | Llama 3 | 8B | 77.6 | 29.5 |
| DART-Math (Tang et al., 2024) | Llama 3 | 8B | 82.5 | 45.3 |
| MAmmoTH2-Plus (Yue et al., 2024) | Llama 3 | 8B | 84.1 | 42.8 |
| Llama 3.1-Instruct (Dubey et al., 2024) | Llama 3 | 8B | 84.5 | 51.9 |
| JiuZhang3.0 (Zhou et al., 2024) | Llama 3 | 8B | 88.6 | 51.0 |
| WizardMath-Llama | Llama 3 | 8B | 90.3 | 58.8 |
| Open-Source Models (13B) | ||||
| Llama-2 (Touvron et al., 2023b) | - | 13B | 28.7 | 3.9 |
| MAmmoTH-CoT (Yue et al., 2023) | Llama 2 | 13B | 56.3 | 12.9 |
| MathScale (Tang et al., 2024) | Llama 2 | 13B | 71.3 | 33.8 |
| MetaMath (Yu et al., 2023b) | Llama 2 | 13B | 72.3 | 22.4 |
| MuggleMath (Li et al., 2023a) | Llama 2 | 13B | 74.0 | - |
| KPMath-Plus (Huang et al., 2024) | Llama 2 | 13B | 81.6 | 41.0 |
| Xwin-Math (Li et al., 2024a) | Llama 2 | 13B | 88.1 | 44.9 |
| WizardMath-Llama | Llama 2 | 13B | 89.7 | 50.6 |
| Open-Source Models (70B) | ||||
| Llama-2 (Touvron et al., 2023b) | - | 70B | 56.8 | 13.5 |
| MAmmoTH-CoT (Yue et al., 2023) | Llama-2 | 70B | 72.4 | 21.1 |
| MetaMath (Yu et al., 2023b) | Llama-2 | 70B | 82.3 | 26.6 |
| KPMath-Plus (Huang et al., 2024) | Llama-2 | 70B | 87.4 | 48.6 |
| Xwin-Math (Li et al., 2024a) | Llama-2 | 70B | 90.6 | 52.8 |
| WizardMath-Llama | Llama-2 | 70B | 92.8 | 58.6 |
| MATH subtopics | WizardMath 70B |
| Intermediate Algebra | 36.3 |
| Precalculus | 38.9 |
| Geometry | 48.3 |
| Number Theory | 58.5 |
| Counting & Probability | 54.8 |
| Prealgebra | 74.6 |
| Algebra | 78.5 |
| Overall | 58.6 |
| Models | GSM8K | MATH |
| GPT-2-XL-1.5B: WizardMath-SFT | 51.9 | 18.3 |
| + PRM | 55.8 | 22.1 |
| + PRM + IRM | 58.9 | 25.4 |
| Llama2-7B: WizardMath-SFT | 77.4 | 35.6 |
| + PRM | 81.7 | 39.9 |
| + PRM + IRM | 84.1 | 43.5 |
| Mistral-7B: WizardMath-SFT | 82.8 | 48.1 |
| + PRM | 87.2 | 52.7 |
| + PRM + IRM | 90.7 | 55.4 |
| Models | GSM8K | MATH |
| Llama2-7B: WizardMath-SFT | 77.4 | 35.6 |
| + ORM (ours) | 79.1 | 36.8 |
| + PRM800k | 79.7 | 38.7 |
| + Math-Shepherd | 80.3 | 38.2 |
| + PRM (ours) | 81.7 | 39.9 |
| Mistral-7B: WizardMath-SFT | 82.8 | 48.1 |
| + ORM (ours) | 84.6 | 49.6 |
| + PRM800k | 85.4 | 50.8 |
| + Math-Shepherd | 86.1 | 50.3 |
| + PRM (ours) | 87.2 | 52.7 |
| Generators | Verifiers | GSM8K | MATH |
| SFT | Self-Consistency | 90.7 | 57.5 |
| ORM | 93.0 | 58.3 | |
| PRM | 93.9 | 61.7 | |
| SFT + ORM | Self-Consistency | 91.2 | 57.7 |
| ORM | 93.4 | 59.4 | |
| PRM | 94.1 | 63.3 | |
| SFT + PRM | Self-Consistency | 92.3 | 59.3 |
| ORM | 94.1 | 60.8 | |
| PRM | 95.2 | 64.7 |
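The verifier comparison above contrasts majority voting (self-consistency) with score-based reranking by an ORM or PRM. A minimal sketch of the two selection rules over sampled answers (names are illustrative; in the actual setup the scores come from trained reward models):

```python
from collections import Counter

def self_consistency(answers):
    # Majority vote over sampled final answers; needs no verifier.
    return Counter(answers).most_common(1)[0][0]

def best_of_n(answers, scores):
    # Verifier reranking: return the answer whose sample scored highest.
    return max(zip(scores, answers))[1]

samples = ["120", "100", "120", "110", "120"]
print(self_consistency(samples))            # prints 120
orm_scores = [0.91, 0.35, 0.88, 0.40, 0.95]
print(best_of_n(samples, orm_scores))       # prints 120
```

The two rules can disagree when a rare answer receives a very high verifier score, which is why the table reports them separately for each generator.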
| Data | GSM8K | MATH | ||||||||||||
| Ori | D-1 | D-2 | U-1 | U-2 | U-3 | pass@1 | Ori | D-1 | D-2 | U-1 | U-2 | U-3 | pass@1 | |
| Ori | ✓ | ✘ | ✘ | ✘ | ✘ | ✘ | 59.7 | ✓ | ✘ | ✘ | ✘ | ✘ | ✘ | 15.1 |
| Math Evol | ✓ | ✓ | ✘ | ✘ | ✘ | ✘ | 71.9 | ✓ | ✓ | ✘ | ✘ | ✘ | ✘ | 30.3 |
| ✓ | ✘ | ✓ | ✘ | ✘ | ✘ | 70.5 | ✓ | ✘ | ✓ | ✘ | ✘ | ✘ | 28.7 | |
| ✓ | ✘ | ✘ | ✓ | ✘ | ✘ | 73.7 | ✓ | ✘ | ✘ | ✓ | ✘ | ✘ | 33.4 | |
| ✓ | ✘ | ✘ | ✘ | ✓ | ✘ | 71.6 | ✓ | ✘ | ✘ | ✘ | ✓ | ✘ | 32.6 | |
| ✓ | ✘ | ✘ | ✘ | ✘ | ✓ | 70.2 | ✓ | ✘ | ✘ | ✘ | ✘ | ✓ | 30.9 | |
| ✓ | ✓ | ✓ | ✘ | ✘ | ✘ | 74.5 | ✓ | ✓ | ✓ | ✘ | ✘ | ✘ | 34.7 | |
| ✓ | ✘ | ✘ | ✓ | ✓ | ✘ | 77.1 | ✓ | ✘ | ✘ | ✓ | ✓ | ✘ | 38.6 | |
| ✓ | ✘ | ✘ | ✓ | ✓ | ✓ | 78.6 | ✓ | ✘ | ✘ | ✓ | ✓ | ✓ | 42.5 | |
| ✓ | ✓ | ✓ | ✓ | ✘ | ✘ | 76.6 | ✓ | ✓ | ✓ | ✓ | ✘ | ✘ | 40.3 | |
| ✓ | ✓ | ✓ | ✓ | ✓ | ✘ | 79.8 | ✓ | ✓ | ✓ | ✓ | ✓ | ✘ | 44.6 | |
| ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | 81.2 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | 46.2 | |
| Models | College Math | TAL | Math23k | Ape210k | Gaokao Bench Math | AGIE Gaokao Math | AGIE SAT Math | AVG |
| Proprietary models | ||||||||
| GPT-4 | 24.4 | 51.8 | 76.5 | 61.5 | 35.4 | 28.2 | 68.6 | 49.5 |
| GPT-3.5-Turbo | 21.6 | 42.9 | 62.5 | 44.0 | 23.2 | 15.3 | 55.8 | 37.9 |
| Models based on LLaMA-2 13B | ||||||||
| LLaMA-2 13B | 1.2 | 6.3 | 9.5 | 7.9 | 0.7 | 0.4 | 6.8 | 4.7 |
| MAmmoTH-CoT | 6.5 | 17.3 | 39.5 | 28.1 | 5.9 | 4.9 | 20.5 | 17.5 |
| GAIR-Abel | 7.9 | 21.1 | 42.2 | 27.8 | 7.0 | 4.9 | 30.3 | 20.2 |
| MetaMath | 10.1 | 25.4 | 48.6 | 31.6 | 9.6 | 5.6 | 38.2 | 24.2 |
| MathScale 13B | 20.4 | 38.1 | 61.1 | 43.7 | 20.0 | 12.3 | 55.8 | 35.9 |
| WizardMath | 22.9 | 43.3 | 70.3 | 50.8 | 33.1 | 25.7 | 64.7 | 44.4 |
| Models based on LLaMA-2 7B | ||||||||
| LLaMA-2 7B | 2.3 | 7.6 | 6.8 | 7.3 | 2.1 | 2.9 | 2.9 | 4.6 |
| MAmmoTH-CoT | 6.2 | 13.3 | 34.6 | 21.4 | 3.9 | 2.7 | 19.6 | 14.5 |
| GAIR-Abel | 6.6 | 18.3 | 35.4 | 24.5 | 4.3 | 4.4 | 23.5 | 16.7 |
| MetaMath | 9.4 | 22.5 | 44.0 | 29.9 | 5.9 | 5.1 | 36.2 | 21.9 |
| MathScale 7B | 20.9 | 35.2 | 59.0 | 41.8 | 19.6 | 12.6 | 57.8 | 35.3 |
| WizardMath | 21.2 | 40.2 | 67.3 | 46.1 | 28.9 | 18.7 | 62.7 | 40.7 |
| Models based on Mistral 7B | ||||||||
| Mistral 7B | 7.5 | 17.9 | 18.5 | 15.5 | 6.2 | 5.9 | 22.5 | 13.4 |
| MetaMath Mistral | 15.7 | 31.4 | 55.1 | 38.1 | 15.3 | 10.1 | 50.9 | 30.9 |
| MathScale Mistral | 21.8 | 39.9 | 64.4 | 46.0 | 21.4 | 14.3 | 57.8 | 37.9 |
| WizardMath Mistral | 24.8 | 44.8 | 71.2 | 52.6 | 37.2 | 24.5 | 64.7 | 45.7 |
| Question | The farm has 30 cows and the zoo has 20 sheep. The zoo has twice as many cows as the farm does. The farm has half as many sheep as the zoo does. How many animals do the farm and zoo have combined? Ground truth answer: 120 |
| Steps | Step 1: we know that the farm has 30 cows. PRM Score: 0.95. Step 2: we are told that the zoo has twice as many cows as the farm does. PRM Score: 0.97. Step 3: So, we can calculate the number of cows in the zoo by multiplying the number of cows in the farm by 2. So, 2 * 30 = 60. So, the zoo has 60 cows. PRM Score: 0.97. Step 4: we know that the farm has half as many sheep as the zoo does. PRM Score: 0.96. Step 5: So, we can calculate the number of sheep in the farm by dividing the number of sheep in the zoo by 2. So, 20 / 2 = 10. So, the farm has 10 sheep. PRM Score: 0.93. Step 6: Now, we can calculate the total number of animals in the farm and the zoo by adding the number of cows and the number of sheep in both places. PRM Score: 0.92. Step 7: So, 30 (farm's cows) + 60 (zoo's cows) + 10 (farm's sheep) = 100. PRM Score: 0.11. Step 8: So, the farm and the zoo have a total of 100 animals combined. PRM Score: 0.10. Step 9: Therefore, the final answer is 100. The answer is: 100. PRM Score: 0.06. ORM Score: 0.89. |
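In the worked example above, the PRM scores drop sharply at step 7, where the arithmetic first goes wrong (the farm's 20 sheep are forgotten), while the ORM still rates the completed wrong answer at 0.89. One common way to turn per-step PRM scores into a solution-level score is min-aggregation, so that a single bad step sinks the whole solution; this aggregation choice is an assumption here, not necessarily the paper's:

```python
def prm_solution_score(step_scores):
    # Min-aggregation: the solution is only as trustworthy as its worst step.
    return min(step_scores)

# Per-step PRM scores from the worked example; the error first appears at step 7.
steps = [0.95, 0.97, 0.97, 0.96, 0.93, 0.92, 0.11, 0.10, 0.06]
print(prm_solution_score(steps))  # prints 0.06
```

Under this rule the faulty solution scores 0.06, far below the ORM's 0.89, illustrating why step-level supervision can reject answers that outcome-level supervision accepts.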
| Models | GSM8k | MATH |
| Mistral-7B-v0.1 | 42.9 | 12.9 |
| WizardMath-SFT-GPT-4-Evol | 82.8 | 48.1 |
| WizardMath-SFT-Llama3-Evol | 76.7 | 43.5 |
| Math instructions | |
| GPT-4 Ranking | Instruction 1. Write x^10 + x^5 + 1 as the product of two polynomials with integer coefficients. Rank: 2. Reason: The problem is well-defined and clear. It requires a deep understanding of polynomial factorization, making it conceptually difficult. The computational complexity is high as it involves the factorization of a high-degree polynomial. Instruction 2. Express the following sum as a common fraction: \frac{1}{1 \cdot 2} + \frac{1}{2 \cdot 3} + \frac{1}{3 \cdot 4} + \frac{1}{4 \cdot 5} + \cdots + \frac{1}{9 \cdot 10}. Rank: 1. Reason: This problem is clear and complete, with well-defined mathematical notation. It requires a good understanding of series and fractions, making it conceptually challenging. The computational complexity is moderate as it involves the summation of fractions. Instruction 3. Given that I read 6 to 8 novels monthly, each novel having 200 to 350 pages, and considering a year may have 12 or 13 months due to a leap year or a year when I choose to read more, also taking into account that my reading speed can fluctuate between 1 to 2 novels per day, and the number of days in a month can be 28, 30, or 31, and the genre of the novel can influence the number of pages (for instance, a thriller may contain 300 to 350 pages, while a romance novel may contain 200 to 250 pages), calculate the total pages of novels I will read in a year. Rank: 3. Reason: This problem is complex and requires multiple steps to solve, making it conceptually and computationally challenging. It is well-contextualized and relevant, but the problem statement could be clearer and more concise. Instruction 4. James decides to run 3 sprints 3 times a week. He runs 60 meters each sprint. How many total meters does he run a week? Rank: 4. Reason: This problem is clear, complete, and well-contextualized. However, it is conceptually and computationally less challenging than the others, as it only requires basic multiplication. Instruction 5. What is √2 times √3? Rank: 5. Reason: This problem is clear and complete, but it lacks a practical context. It is conceptually and computationally less challenging than the others, as it only requires knowledge of the multiplication of square roots. Instruction 6. What is √-2 times √3? Rank: 6. Reason: This problem is clear and complete, but it lacks a practical context. It is conceptually challenging as it involves the multiplication of a real number and an imaginary number. However, the computational complexity is low. |
| Question | John and his two brothers decide to split the cost of an apartment. It is 40% more expensive than John's old apartment which costs $1200 per month. How much does John save per year by splitting the apartment compared to living alone? |
| GPT-4 Labeling | Step 1: Calculate the cost of John's old apartment. John's old apartment costs $1200 per month. Score: 1. Step 2: Calculate the cost of the new apartment. Score: 1. Step 3: The new apartment costs 40% more than John's old apartment, so we need to find 40% of $1200 and add that to the cost of the old apartment. Score: 1. Step 4: 40% of $1200 is (40/100) * $1200 = $480. So the new apartment costs $1200 + $480 = $1680 per month. Score: 1. Step 5: Calculate the savings for John by splitting the apartment. Score: 1. Step 6: If John lived alone, he would pay $1680 per month for the new apartment. Score: 1. Step 7: However, since he splits the cost with his two brothers, he only needs to pay 1/3 of the total cost. Score: 1. Step 8: So, John's share of the cost is $1680 / 3 = $560 per month. Score: 1. Step 9: Calculate the savings per year for John. Score: 1. Step 10: There are 12 months in a year, so the total savings for John in a year would be 12 * $560 = $6720. Score: -1. Reason: The step is incorrect. The calculation is wrong. The total savings for John in a year should be calculated as the difference between what he would pay living alone and what he pays when splitting the cost, multiplied by 12. The correct calculation should be (1200 - 560) * 12 = $7680. Step 11: So, John saves $6720 per year by splitting the apartment compared to living alone. Score: -1. Reason: The step is incorrect. The final answer is wrong. The correct answer should be $7680. Step 12: The answer is: 6720. Score: -1. Reason: The step is incorrect. The final answer is wrong. The correct answer should be $7680. |
| Question | Compute cos 120°. |
| GPT-4 Labeling | Step 1: First, we need to find a right triangle with an angle of 120° and then use the Pythagorean theorem to find the length of the hypotenuse. Score: 0. |
| Step 2: The first step is to visualize a right triangle that has an angle of 120°. Score: -1. Reason: This step is incorrect. A right triangle cannot have an angle of 120 degrees. | |
| Step 3: We can do this by drawing a line and then rotating it 120° around the midpoint of the line. Score: -1. Reason: This step is incorrect. Rotating a line around its midpoint does not create a right triangle with an angle of 120 degrees. | |
| Step 4: This will give us a triangle with two 60° angles and one 120° angle. Score: -1. Reason: This step is incorrect. A triangle with two angles of 60 degrees and one angle of 120 degrees is not a right triangle. | |
| Step 5: Next, we need to find the length of the hypotenuse of this triangle. Score: 0. | |
| Step 6: We can do this by using the Pythagorean theorem, which states that in a right triangle, the square of the length of the hypotenuse is equal to the sum of the squares of the lengths of the other two sides. Score: 0. | |
| Step 7: In this case, the other two sides of the triangle are the two legs of the 60° angles, which are each 1 unit long. Score: -1. Reason: This step is incorrect. The sides of a right triangle do not correspond to the legs of the 60 degree angles in this context. | |
| Step 8: So, we have h² = 1 + 1 = 2. This means that the length of the hypotenuse h is √2. Now, we need to find the cosine of 120°. Score: -1. Reason: This step is incorrect. The calculation is based on incorrect assumptions from previous steps. | |
| Step 9: The cosine of an angle is defined as the ratio of the length of the side opposite the angle to the length of the hypotenuse. Now, we need to find the cosine of 120°. Score: -1. Reason: This step is incorrect. The cosine of 120 degrees is not 1/sqrt(2). | |
| Step 10: The cosine of an angle is defined as the ratio of the length of the side opposite the angle to the length of the hypotenuse. Score: 1. | |
| Step 11: In this case, the opposite side is the leg of the 60° angle, which is 1 unit long. So, we have cos 120° = 1/√2. Score: -1. Reason: This step is not entirely correct. The cosine of an angle is defined as the ratio of the adjacent side to the hypotenuse, not the opposite side to the hypotenuse. | |
| Step 12: Finally, we need to simplify this expression. The square root of 2 is usually written as √2, so we can rewrite cos 120° as 1/√2. Score: -1. Reason: This step is incorrect. The cosine of 120 degrees is not 1/sqrt(2). | |
| Step 13: So, the final answer is 1/√2. The answer is: 1/√2. Score: -1. Reason: This step is incorrect. The final answer is not 1/sqrt(2). The correct answer is -1/2. |
| Model | Base | Params | GSM8k | MATH |
| Proprietary models | ||||
| GPT-o1 (OpenAI, 2023) | - | - | - | 94.8 |
| GPT-o1-mini (OpenAI, 2023) | - | - | - | 90.0 |
| Gemini-1.5 002 (Team et al., 2023) | - | - | - | 86.5 |
| Claude 3.5 Sonnet (Bai et al., 2022) | - | - | 96.4 | 71.1 |
| GPT-4o-2024-0513 (OpenAI, 2023) | - | - | 96.1 | 76.6 |
| GPT-4-turbo-0125 (OpenAI, 2023) | - | - | 94.2 | 64.5 |
| GPT-4-0314 (OpenAI, 2023) | - | - | 94.7 | 52.6 |
| GPT-4 (original version) (OpenAI, 2023) | - | - | 92.0 | 42.5 |
| Baichuan-3 (Yang et al., 2023) | - | - | 88.2 | 49.2 |
| GLM-4 (GLM et al., 2024) | - | - | 87.6 | 47.9 |
| Gemini Pro (Team, 2023) | - | - | 86.5 | 32.6 |
| Claude2 (Bai et al., 2022) | - | - | 85.2 | 32.5 |
| GPT-3.5-Turbo (OpenAI, 2023) | - | - | 81.6 | 43.1 |
| PaLM2 (Anil et al., 2023) | - | - | 80.7 | 34.3 |
| Minerva (Lewkowycz et al., 2022) | - | 540B | 58.8 | 33.6 |
| GPT3.5 (Brown et al., 2020a) | - | - | 57.1 | - |
| Open-Source Models (0.1B-3B) | ||||
| GPT-2-Small (Brown et al., 2020b) | - | 0.1B | 6.9 | 5.4 |
| GPT-2-Medium (Brown et al., 2020b) | - | 0.3B | 11.2 | 6.2 |
| GPT-2-Large (Brown et al., 2020b) | - | 0.7B | 13.6 | 6.4 |
| GPT-2-XL (Brown et al., 2020b) | - | 1.5B | 15.4 | 6.9 |
| WizardMath-GPT-SFT | GPT-2-Small | 0.1B | 21.2 | 9.1 |
| WizardMath-GPT-RL | GPT-2-Small | 0.1B | 26.4 | 12.3 |
| WizardMath-GPT-SFT | GPT-2-Medium | 0.3B | 30.6 | 11.4 |
| WizardMath-GPT-RL | GPT-2-Medium | 0.3B | 38.7 | 15.6 |
| WizardMath-GPT-SFT | GPT-2-Large | 0.7B | 43.7 | 16.4 |
| WizardMath-GPT-RL | GPT-2-Large | 0.7B | 50.1 | 21.2 |
| WizardMath-GPT-SFT | GPT-2-XL | 1.5B | 51.9 | 18.3 |
| WizardMath-GPT-RL | GPT-2-XL | 1.5B | 58.9 | 25.4 |
| WizardMath-Qwen-SFT | Qwen-Math-2.5 | 1.5B | 82.3 | 62.1 |
| WizardMath-Qwen-RL | Qwen-Math-2.5 | 1.5B | 86.7 | 68.6 |
| Llama-3.2-Instruct (Dubey et al., 2024) | Llama 3.2 | 1B | 44.4 | 30.6 |
| MetaMath (Yu et al., 2023b) | Llama 3.2 | 1B | 51.9 | 15.5 |
| DART-Math-Prop2Diff (Tong et al., 2024) | Llama 3.2 | 1B | 49.2 | 23.4 |
| DART-Math-Uniform (Tong et al., 2024) | Llama 3.2 | 1B | 55.8 | 22.0 |
| WizardMath-Llama-SFT | Llama 3.2 | 1B | 57.1 | 29.7 |
| WizardMath-Llama-RL | Llama 3.2 | 1B | 63.3 | 33.5 |
| Llama-3.2-Instruct (Dubey et al., 2024) | Llama 3.2 | 3B | 77.7 | 48.0 |
| MetaMath (Yu et al., 2023b) | Llama 3.2 | 3B | 72.6 | 25.9 |
| DART-Math-Prop2Diff (Tong et al., 2024) | Llama 3.2 | 3B | 74.0 | 37.8 |
| DART-Math-Uniform (Tong et al., 2024) | Llama 3.2 | 3B | 77.8 | 36.4 |
| WizardMath-Llama-SFT | Llama 3.2 | 3B | 80.3 | 45.2 |
| WizardMath-Llama-RL | Llama 3.2 | 3B | 85.5 | 49.9 |
| Model | Base | Params | GSM8k | MATH |
| Open-Source Models (7B-8B) | ||||
| Llama-2 (Touvron et al., 2023b) | - | 7B | 14.6 | 2.5 |
| MAmmoTH-CoT (Yue et al., 2023) | Llama-2 | 7B | 50.5 | 10.4 |
| MathScale (Tang et al., 2024) | Llama-2 | 7B | 66.3 | 31.1 |
| MetaMath (Yu et al., 2023b) | Llama-2 | 7B | 66.5 | 19.8 |
| MuggleMath (Li et al., 2023a) | Llama-2 | 7B | 68.4 | - |
| Skywork-Math (Zeng et al., 2024) | Llama-2 | 7B | 72.9 | 47.7 |
| Math-Shepherd (Wang et al., 2024a) | Llama-2 | 7B | 73.2 | 21.6 |
| DART-Math-Prop2Diff (Tong et al., 2024) | Llama-2 | 7B | 69.9 | 30.7 |
| DART-Math-Uniform (Tong et al., 2024) | Llama-2 | 7B | 73.8 | 29.5 |
| Xwin-Math (Li et al., 2024a) | Llama-2 | 7B | 82.6 | 40.6 |
| WizardMath-Llama-SFT | Llama-2 | 7B | 77.4 | 35.6 |
| WizardMath-Llama-RL | Llama-2 | 7B | 84.1 | 43.5 |
| Mistral-v0.1 (Jiang et al., 2023) | - | 7B | 42.9 | 12.9 |
| MathScale (Tang et al., 2024) | Mistral-v0.1 | 7B | 74.8 | 35.2 |
| MMIQC (Liu & Yao, 2024) | Mistral-v0.1 | 7B | 74.8 | 36.0 |
| MetaMath (Yu et al., 2023b) | Mistral-v0.1 | 7B | 77.9 | 28.6 |
| DART-Math-Prop2Diff (Tong et al., 2024) | Mistral-v0.1 | 7B | 81.1 | 45.5 |
| KPMath-Plus (Huang et al., 2024) | Mistral-v0.1 | 7B | 82.1 | 46.8 |
| DART-Math-Uniform (Tong et al., 2024) | Mistral-v0.1 | 7B | 82.6 | 43.5 |
| Skywork-Math (Zeng et al., 2024) | Mistral-v0.1 | 7B | 83.9 | 51.2 |
| Math-Shepherd (Wang et al., 2024a) | Mistral-v0.1 | 7B | 84.1 | 33.0 |
| MAmmoTH2-Plus (Yue et al., 2024) | Mistral-v0.1 | 7B | 84.7 | 45.0 |
| JiuZhang3.0 (Zhou et al., 2024) | Mistral-v0.1 | 7B | 88.6 | 52.8 |
| Xwin-Math (Li et al., 2024a) | Mistral-v0.1 | 7B | 89.2 | 43.7 |
| WizardMath-Mistral-SFT | Mistral-v0.1 | 7B | 82.8 | 48.1 |
| WizardMath-Mistral-RL | Mistral-v0.1 | 7B | 90.7 | 55.4 |
| WizardMath-Mistral-SFT | Mistral-v0.3 | 7B | 84.5 | 49.9 |
| WizardMath-Mistral-RL | Mistral-v0.3 | 7B | 90.4 | 55.6 |
| WizardMath-Mathstral-SFT | Mathstral-v0.1 | 7B | 88.3 | 64.2 |
| WizardMath-Mathstral-RL | Mathstral-v0.1 | 7B | 93.8 | 70.9 |
| Qwen2.5-Math-Base (Yang et al., 2024) | Qwen2.5-Math | 7B | 91.6 | 55.4 |
| WizardMath-Qwen-SFT | Qwen2.5-Math | 7B | 92.3 | 72.3 |
| WizardMath-Qwen-RL | Qwen2.5-Math | 7B | 93.9 | 77.8 |
| WizardMath-Qwen-SFT | Qwen2.5 | 7B | 89.8 | 68.1 |
| WizardMath-Qwen-RL | Qwen2.5 | 7B | 94.0 | 74.5 |
| DeepSeekMath-Base (Shao et al., 2024) | - | 7B | 64.2 | 36.2 |
| NuminaMath-CoT (Li et al., 2024b) | DeepSeekMath | 7B | 75.4 | 55.2 |
| MMIQC (Liu & Yao, 2024) | DeepSeekMath | 7B | 79.0 | 45.3 |
| KPMath-Plus (Huang et al., 2024) | DeepSeekMath | 7B | 83.9 | 48.8 |
| DART-Math-Prop2Diff (Tong et al., 2024) | DeepSeekMath | 7B | 86.8 | 53.6 |
| DeepSeekMath-RL (Shao et al., 2024) | DeepSeekMath | 7B | 88.2 | 51.7 |
| DART-Math-Uniform (Tong et al., 2024) | DeepSeekMath | 7B | 88.2 | 52.9 |
| WizardMath-DeepSeek-SFT | DeepSeekMath | 7B | 88.9 | 58.2 |
| WizardMath-DeepSeek-RL | DeepSeekMath | 7B | 91.0 | 64.6 |
| MetaMath (Yu et al., 2023b) | Llama 3 | 8B | 77.3 | 20.6 |
| MMIQC (Liu & Yao, 2024) | Llama 3 | 8B | 77.6 | 29.5 |
| DART-Math-Prop2Diff (Tong et al., 2024) | Llama 3 | 8B | 81.1 | 46.6 |
| DART-Math-Uniform (Tong et al., 2024) | Llama 3 | 8B | 82.5 | 45.3 |
| MAmmoTH2-Plus (Yue et al., 2024) | Llama 3 | 8B | 84.1 | 42.8 |
| Llama 3.1-Instruct (Dubey et al., 2024) | Llama 3 | 8B | 84.5 | 51.9 |
| JiuZhang3.0 (Zhou et al., 2024) | Llama 3 | 8B | 88.6 | 51.0 |
| WizardMath-Llama-SFT | Llama 3 | 8B | 88.9 | 53.3 |
| WizardMath-Llama-RL | Llama 3 | 8B | 90.3 | 58.8 |
| MetaMath (Yu et al., 2023b) | Llama 3.1 | 8B | 80.4 | 35.4 |
| DART-Math-Prop2Diff (Tong et al., 2024) | Llama 3.1 | 8B | 84.3 | 46.5 |
| DART-Math-Uniform (Tong et al., 2024) | Llama 3.1 | 8B | 86.7 | 45.1 |
| WizardMath-Llama-SFT | Llama 3.1 | 8B | 89.2 | 55.8 |
| WizardMath-Llama-RL | Llama 3.1 | 8B | 93.4 | 62.3 |
| Open-Source Models (13B) | ||||
| Llama-2 (Touvron et al., 2023b) | - | 13B | 28.7 | 3.9 |
| MAmmoTH-CoT (Yue et al., 2023) | Llama-2 | 13B | 56.3 | 12.9 |
| MathScale (Tang et al., 2024) | Llama-2 | 13B | 71.3 | 33.8 |
| MetaMath (Yu et al., 2023b) | Llama-2 | 13B | 72.3 | 22.4 |
| MuggleMath (Li et al., 2023a) | Llama-2 | 13B | 74.0 | - |
| KPMath-Plus (Huang et al., 2024) | Llama-2 | 13B | 81.6 | 41.0 |
| Xwin-Math (Li et al., 2024a) | Llama-2 | 13B | 88.1 | 44.9 |
| WizardMath-Llama-SFT | Llama-2 | 13B | 86.8 | 46.5 |
| WizardMath-Llama-RL | Llama-2 | 13B | 89.7 | 50.6 |
| Open-Source Models (70B) | ||||
| Llama-2 (Touvron et al., 2023b) | - | 70B | 56.8 | 13.5 |
| MAmmoTH-CoT (Yue et al., 2023) | Llama-2 | 70B | 72.4 | 21.1 |
| MetaMath (Yu et al., 2023b) | Llama-2 | 70B | 82.3 | 26.6 |
| KPMath-Plus (Huang et al., 2024) | Llama-2 | 70B | 87.4 | 48.6 |
| Xwin-Math (Li et al., 2024a) | Llama-2 | 70B | 90.6 | 52.8 |
| WizardMath-Llama-SFT | Llama-2 | 70B | 89.5 | 54.4 |
| WizardMath-Llama-RL | Llama-2 | 70B | 92.8 | 58.6 |
| Model | Base | Params | GSM8k | MATH |
| DART-Math-Prop2Diff | Llama 3.2 | 1B | 49.2 | 23.4 |
| MetaMath | Llama 3.2 | 1B | 51.9 | 15.5 |
| DART-Math-Uniform | Llama 3.2 | 1B | 55.8 | 22.0 |
| WizardMath-Llama-SFT | Llama 3.2 | 1B | 57.1 | 29.7 |
| MetaMath | Llama 3.2 | 3B | 72.6 | 25.9 |
| DART-Math-Prop2Diff | Llama 3.2 | 3B | 74.0 | 37.8 |
| DART-Math-Uniform | Llama 3.2 | 3B | 77.8 | 36.4 |
| WizardMath-Llama-SFT | Llama 3.2 | 3B | 80.3 | 45.2 |
| MetaMath | Llama-2 | 7B | 66.5 | 19.8 |
| DART-Math-Prop2Diff | Llama-2 | 7B | 69.9 | 30.7 |
| DART-Math-Uniform | Llama-2 | 7B | 73.8 | 29.5 |
| WizardMath-Llama-SFT | Llama-2 | 7B | 77.4 | 35.6 |
| MetaMath | Mistral-v0.1 | 7B | 77.9 | 28.6 |
| DART-Math-Prop2Diff | Mistral-v0.1 | 7B | 81.1 | 45.5 |
| DART-Math-Uniform | Mistral-v0.1 | 7B | 82.6 | 43.5 |
| WizardMath-Mistral-SFT | Mistral-v0.1 | 7B | 82.8 | 48.1 |
| DART-Math-Prop2Diff | DeepSeekMath | 7B | 86.8 | 53.6 |
| DART-Math-Uniform | DeepSeekMath | 7B | 88.2 | 52.9 |
| WizardMath-DeepSeek-SFT | DeepSeekMath | 7B | 88.9 | 58.2 |
| MetaMath | Llama 3 | 8B | 77.3 | 20.6 |
| DART-Math-Prop2Diff | Llama 3 | 8B | 81.1 | 46.6 |
| DART-Math-Uniform | Llama 3 | 8B | 82.5 | 45.3 |
| WizardMath-Llama-SFT | Llama 3 | 8B | 88.9 | 53.3 |
| MetaMath | Llama 3.1 | 8B | 80.4 | 35.4 |
| DART-Math-Prop2Diff | Llama 3.1 | 8B | 84.3 | 46.5 |
| DART-Math-Uniform | Llama 3.1 | 8B | 86.7 | 45.1 |
| WizardMath-Llama-SFT | Llama 3.1 | 8B | 89.2 | 55.8 |
| Models | Base | Params | GSM8k | MATH |
| Mathstral-v0.1-Base | - | 7B | 77.1 | 56.6 |
| WizardMath-Mathstral | Mathstral-v0.1-Base | 7B | 93.8 | 70.9 |
| Qwen2.5-Math-Base | - | 1.5B | 76.8 | 49.8 |
| WizardMath-Qwen2.5-Math | Qwen2.5-Math-Base | 1.5B | 86.7 | 68.6 |
| Qwen2.5-Math-Base | - | 7B | 91.6 | 55.4 |
| WizardMath-Qwen2.5-Math | Qwen2.5-Math-Base | 7B | 93.9 | 77.8 |
| Qwen2.5-Base | - | 7B | 85.4 | 49.8 |
| WizardMath-Qwen2.5 | Qwen2.5-Base | 7B | 94.0 | 74.5 |
| Model | Base | Params | GSM8k | MATH |
| MetaMath-SFT | Llama-2 | 7B | 66.5 | 19.8 |
| MetaMath-RL | Llama-2 | 7B | 75.6 | 25.1 |
| DART-Math-Prop2Diff-SFT | Llama-2 | 7B | 69.9 | 30.7 |
| DART-Math-Prop2Diff-RL | Llama-2 | 7B | 76.8 | 37.1 |
| DART-Math-Uniform-SFT | Llama-2 | 7B | 73.8 | 29.5 |
| DART-Math-Uniform-RL | Llama-2 | 7B | 79.1 | 35.2 |
| Xwin-Math-SFT | Llama-2 | 7B | 82.6 | 40.6 |
| Xwin-Math-RL | Llama-2 | 7B | 88.2 | 48.5 |
| WizardMath-Llama-SFT | Llama-2 | 7B | 77.4 | 35.6 |
| WizardMath-Llama-RL | Llama-2 | 7B | 84.1 | 43.5 |
| MetaMath-SFT | Mistral-v0.1 | 7B | 77.9 | 28.6 |
| MetaMath-RL | Mistral-v0.1 | 7B | 86.4 | 35.2 |
| DART-Math-Prop2Diff-SFT | Mistral-v0.1 | 7B | 81.1 | 45.5 |
| DART-Math-Prop2Diff-RL | Mistral-v0.1 | 7B | 87.5 | 51.4 |
| DART-Math-Uniform-SFT | Mistral-v0.1 | 7B | 82.6 | 43.5 |
| DART-Math-Uniform-RL | Mistral-v0.1 | 7B | 88.1 | 48.7 |
| WizardMath-Mistral-SFT | Mistral-v0.1 | 7B | 82.8 | 48.1 |
| WizardMath-Mistral-RL | Mistral-v0.1 | 7B | 90.7 | 55.4 |
| Model (Llama-2 7B as the base model) | Data size | MATH |
| DART-Math-Uniform | 591k | 29.5 |
| DART-Math-Prop2Diff | 585k | 30.7 |
| Xwin-Math | 1440k | 40.6 |
| Skywork-Math | 360k | 29.36 |
| Skywork-Math | 720k | 34.54 |
| Skywork-Math | 2500k | 47.7 |
| WizardMath-SFT | 418k | 35.6 |
| Models | AI-Label | GSM8k | MATH |
| WizardMath-SFT | - | 82.8 | 48.1 |
| + PRM-Llama-3.1-405B-Instruct | Llama-3.1-405B-Instruct | 85.8 | 51.5 |
| + PRM-GPT-4 | GPT-4 | 87.2 | 52.7 |
| Models | College Math | TAL | Math23k | Ape210k | Gaokao Bench Math | AGIE Gaokao Math | AGIE SAT Math | AVG |
| Proprietary models | ||||||||
| GPT-4 | 24.4 | 51.8 | 76.5 | 61.5 | 35.4 | 28.2 | 68.6 | 49.5 |
| GPT-3.5-Turbo | 21.6 | 42.9 | 62.5 | 44.0 | 23.2 | 15.3 | 55.8 | 37.9 |
| Models based on LLaMA-2 13B | ||||||||
| LLaMA-2 13B | 1.2 | 6.3 | 9.5 | 7.9 | 0.7 | 0.4 | 6.8 | 4.7 |
| MAmmoTH-CoT | 6.5 | 17.3 | 39.5 | 28.1 | 5.9 | 4.9 | 20.5 | 17.5 |
| GAIR-Abel | 7.9 | 21.1 | 42.2 | 27.8 | 7.0 | 4.9 | 30.3 | 20.2 |
| MetaMath | 10.1 | 25.4 | 48.6 | 31.6 | 9.6 | 5.6 | 38.2 | 24.2 |
| MathScale 13B | 20.4 | 38.1 | 61.1 | 43.7 | 20.0 | 12.3 | 55.8 | 35.9 |
| WizardMath-SFT | 22.2 | 42.5 | 65.9 | 47.6 | 31.6 | 23.5 | 59.7 | 41.9 |
| WizardMath-RL | 22.9 | 43.3 | 70.3 | 50.8 | 33.1 | 25.7 | 64.7 | 44.4 |
| Models based on LLaMA-2 7B | ||||||||
| LLaMA-2 7B | 2.3 | 7.6 | 6.8 | 7.3 | 2.1 | 2.9 | 2.9 | 4.6 |
| MAmmoTH-CoT | 6.2 | 13.3 | 34.6 | 21.4 | 3.9 | 2.7 | 19.6 | 14.5 |
| GAIR-Abel | 6.6 | 18.3 | 35.4 | 24.5 | 4.3 | 4.4 | 23.5 | 16.7 |
| MetaMath | 9.4 | 22.5 | 44.0 | 29.9 | 5.9 | 5.1 | 36.2 | 21.9 |
| DART-Math-Uniform | 12.0 | 27.3 | 47.9 | 32.9 | 14.8 | 11.1 | 45.1 | 27.3 |
| DART-Math-Prop2Diff | 11.9 | 27.7 | 49.9 | 34.3 | 12.8 | 10.6 | 47.1 | 27.8 |
| Xwin-Math-V1.1 | 14.9 | 29.7 | 59.6 | 40.8 | 15.9 | 8.4 | 51.0 | 31.5 |
| MathScale 7B | 20.9 | 35.2 | 59.0 | 41.8 | 19.6 | 12.6 | 57.8 | 35.3 |
| WizardMath-SFT | 21.1 | 38.5 | 62.4 | 43.8 | 26.3 | 17.7 | 58.3 | 38.3 |
| WizardMath-RL | 21.2 | 40.2 | 67.3 | 46.1 | 28.9 | 18.7 | 62.7 | 40.7 |
| Models based on Mistral 7B | ||||||||
| Mistral 7B | 7.5 | 17.9 | 18.5 | 15.5 | 6.2 | 5.9 | 22.5 | 13.4 |
| MetaMath Mistral | 15.7 | 31.4 | 55.1 | 38.1 | 15.3 | 10.1 | 50.9 | 30.9 |
| DART-Math-Uniform | 19.4 | 34.8 | 61.6 | 44.8 | 27.0 | 16.1 | 59.8 | 37.6 |
| MathScale Mistral | 21.8 | 39.9 | 64.4 | 46.0 | 21.4 | 14.3 | 57.8 | 37.9 |
| DART-Math-Prop2Diff | 19.9 | 37.4 | 62.2 | 44.9 | 27.2 | 18.1 | 62.7 | 38.9 |
| WizardMath-Mistral-SFT | 24.3 | 42.7 | 66.6 | 49.7 | 35.2 | 22.7 | 63.1 | 43.5 |
| WizardMath-Mistral-RL | 24.8 | 44.8 | 71.2 | 52.6 | 37.2 | 24.5 | 64.7 | 45.7 |
| Metric | Score (max value) |
| Adjusted Mutual Information (AMI) | 0.29 (1.00) |
| Normalized Mutual Information (NMI) | 0.29 (1.00) |
| Jaccard Similarity | 0.06 (1.00) |
| Exact Matching (%) | 45.54% (100.00%) |
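The clustering-agreement metrics in the table above (AMI, NMI, pairwise Jaccard, exact matching) can be reproduced with a short script. This is an illustrative sketch, not the paper's evaluation code: the pair-based Jaccard definition and the unaligned exact-match computation are assumptions.

```python
# Sketch of the four agreement metrics in the table above.
# Assumptions: Jaccard is computed over co-clustered index pairs, and
# exact matching compares labels position-wise without label alignment.
from itertools import combinations
from sklearn.metrics import adjusted_mutual_info_score, normalized_mutual_info_score

def co_clustered_pairs(labels):
    """Set of index pairs (i, j), i < j, assigned to the same cluster."""
    return {(i, j) for i, j in combinations(range(len(labels)), 2)
            if labels[i] == labels[j]}

def clustering_agreement(true_labels, pred_labels):
    a, b = co_clustered_pairs(true_labels), co_clustered_pairs(pred_labels)
    jaccard = len(a & b) / len(a | b) if (a | b) else 1.0
    exact = sum(t == p for t, p in zip(true_labels, pred_labels)) / len(true_labels)
    return {
        "AMI": adjusted_mutual_info_score(true_labels, pred_labels),
        "NMI": normalized_mutual_info_score(true_labels, pred_labels),
        "Jaccard": jaccard,
        "ExactMatch": exact,
    }
```

All four metrics reach their maximum of 1.0 (100%) only when the two clusterings agree perfectly, which is why the table reports each score alongside its maximum.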
| MTEB Tasks | CLF | Clust. | PairCLF | Rerank | STS | Summ. | Avg. |
| DeepSeekMoE-16B | |||||||
| Hidden State (HS) | 44.79 | 25.87 | 44.34 | 38.13 | 34.54 | 24.51 | 35.36 |
| Routing Weight (RW) | 44.06 | 17.53 | 50.59 | 35.94 | 41.11 | 26.22 | 35.91 |
| MoEE (concat) | 44.93 | 24.15 | 51.88 | 41.20 | 46.82 | 31.17 | 40.03 |
| MoEE (sum) | 48.74 | 32.83 | 52.12 | 47.88 | 48.34 | 29.89 | 43.30 |
| Qwen1.5-MoE-A2.7B | |||||||
| Hidden State (HS) | 46.41 | 24.31 | 44.43 | 44.91 | 28.36 | 22.65 | 35.18 |
| Routing Weight (RW) | 38.99 | 10.55 | 42.26 | 33.53 | 23.97 | 27.44 | 29.46 |
| MoEE (concat) | 44.81 | 26.75 | 49.79 | 49.23 | 37.93 | 27.61 | 39.35 |
| MoEE (sum) | 50.70 | 31.35 | 51.87 | 49.82 | 45.75 | 24.00 | 42.25 |
| OLMoE-1B-7B | |||||||
| Hidden State (HS) | 44.23 | 23.79 | 47.56 | 45.60 | 35.44 | 20.94 | 36.26 |
| Routing Weight (RW) | 43.54 | 17.66 | 53.12 | 40.91 | 44.68 | 28.68 | 38.10 |
| MoEE (concat) | 44.62 | 22.83 | 51.64 | 46.58 | 48.84 | 31.67 | 41.03 |
| MoEE (sum) | 48.54 | 30.67 | 50.93 | 47.77 | 49.45 | 28.77 | 42.69 |
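The rows above compare hidden-state (HS) embeddings, routing-weight (RW) embeddings, and the two combined MoEE variants. A minimal sketch of the two combination schemes, under the assumption that `concat` joins the two vectors before a single cosine similarity while `sum` adds the two cosine scores, with `alpha` as a hypothetical weighting hyperparameter:

```python
import numpy as np

def cos(a, b):
    """Cosine similarity between two 1-D vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def moee_concat_sim(hs1, rw1, hs2, rw2):
    # "concat": join hidden-state and routing-weight vectors, then one cosine.
    return cos(np.concatenate([hs1, rw1]), np.concatenate([hs2, rw2]))

def moee_sum_sim(hs1, rw1, hs2, rw2, alpha=1.0):
    # "sum": combine the two similarity scores; `alpha` is an assumed weight.
    return cos(hs1, hs2) + alpha * cos(rw1, rw2)
```

Note that `sum` operates at the similarity level, so the HS and RW spaces never need to share a dimensionality; `concat` instead fixes a single joint space, which may explain why the two variants rank differently across task groups in the tables.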
| MTEB Tasks | CLF | Clust. | PairCLF | Rerank | STS | Summ. | Avg. |
| Self-Supervised Methods | |||||||
| Glove* (Pennington et al., 2014) | 51.04 | 23.11 | 62.90 | 48.72 | 60.52 | 28.87 | 45.86 |
| Komninos* (Komninos & Manandhar, 2016) | 50.21 | 24.96 | 66.63 | 50.03 | 61.73 | 30.49 | 47.34 |
| BERT* (Devlin et al., 2018) | 52.36 | 23.48 | 66.10 | 48.47 | 52.89 | 29.82 | 45.52 |
| SimCSE-BERT-unsup* (Gao et al., 2021) | 54.80 | 22.59 | 70.79 | 52.42 | 75.00 | 31.15 | 51.13 |
| Supervised Methods | |||||||
| SimCSE-BERT-sup* | 58.98 | 29.49 | 75.82 | 53.61 | 79.97 | 23.31 | 53.53 |
| coCondenser-msmarco* (Gao & Callan, 2021) | 53.89 | 32.85 | 74.56 | 60.08 | 76.41 | 29.50 | 54.55 |
| SPECTER* (Cohan et al., 2020) | 42.59 | 27.94 | 56.24 | 55.87 | 60.68 | 27.66 | 45.16 |
| LaBSE (Feng et al., 2020) | 54.31 | 24.05 | 73.68 | 54.63 | 70.95 | 31.05 | 51.45 |
| LASER2 | 42.54 | 14.01 | 70.52 | 46.99 | 64.52 | 26.80 | 44.23 |
| SGPT-125M-nli (Muennighoff, 2022) | 53.28 | 26.59 | 68.80 | 53.65 | 75.01 | 30.26 | 51.27 |
| DeepSeekMoE-16B | |||||||
| Hidden State (HS) | 58.24 | 24.64 | 48.76 | 38.13 | 59.66 | 24.38 | 42.30 |
| Routing Weight (RW) | 49.52 | 19.97 | 68.30 | 37.48 | 59.52 | 29.26 | 44.01 |
| MoEE (concat) | 54.21 | 26.10 | 72.44 | 53.31 | 67.59 | 28.89 | 50.42 |
| MoEE (sum) | 58.31 | 34.52 | 70.95 | 55.99 | 70.66 | 29.22 | 53.28 |
| Qwen1.5-MoE-A2.7B | |||||||
| Hidden State (HS) | 59.34 | 29.50 | 74.29 | 56.51 | 67.39 | 23.01 | 51.67 |
| Routing Weight (RW) | 47.84 | 16.74 | 64.85 | 43.55 | 51.71 | 27.74 | 42.07 |
| MoEE (concat) | 54.23 | 27.18 | 73.93 | 56.12 | 68.52 | 28.57 | 51.43 |
| MoEE (sum) | 59.57 | 38.33 | 72.21 | 56.25 | 72.78 | 31.09 | 55.04 |
| OLMoE-1B-7B | |||||||
| Hidden State (HS) | 58.18 | 32.83 | 72.10 | 58.31 | 72.91 | 27.96 | 53.72 |
| Routing Weight (RW) | 45.02 | 19.93 | 61.58 | 43.91 | 54.33 | 29.49 | 42.38 |
| MoEE (concat) | 52.59 | 33.92 | 71.85 | 56.69 | 71.13 | 30.21 | 52.73 |
| MoEE (sum) | 57.46 | 36.46 | 71.26 | 60.43 | 74.63 | 30.71 | 55.16 |
| STS Datasets | STS12 | STS13 | STS14 | STS15 | STS16 | Avg. |
| DeepSeekMoE-16B | ||||||
| HS - last token, last layer | 51.99 | 69.56 | 54.68 | 58.04 | 68.47 | 60.40 |
| HS - last token, all layers | 59.82 | 60.59 | 45.20 | 51.08 | 58.88 | 55.03 |
| HS - all tokens, last layer | 30.95 | 34.42 | 26.77 | 34.90 | 37.11 | 32.78 |
| HS - all tokens, all layers | 60.81 | 62.46 | 46.90 | 52.38 | 59.99 | 56.34 |
| RW - last token | 61.97 | 65.86 | 51.38 | 65.86 | 62.49 | 61.18 |
| RW - all tokens | 50.76 | 46.42 | 41.47 | 43.68 | 48.37 | 46.03 |
| MoEE (best) | 67.39 | 81.43 | 68.98 | 67.76 | 74.26 | 71.75 |
| | Sentence 1 | Sentence 2 |
| 1 | the vote will take place today at 5.30 p.m | the vote will take place at 17h30 |
| 2 | the standards are scarcely comparable, let alone transferable | the norms are hardly comparable and still less transferable |
| 3 | that provision could open the door wide to arbitrariness | this point of procedure opens the door to the arbitrary |
| 4 | A woman puts flour on a piece of meat | A woman is putting flour onto some meat. |
| 5 | the fishermen are inactive, tired and disappointed | fishermen are inactive, tired and disappointment |
| | Sentence 1 | Sentence 2 |
| 1 | He did, but the initiative did not get very far. | What happened is that the initiative does not go very far. |
| 2 | then perhaps we could have avoided a catastrophe | we might have been able to prevent a disaster |
| 3 | it increases the power of the big countries at the expense of the small countries | it has the effect of augmenting the potency of the big countries to the detriment of babies |
| 4 | festive social event, celebration | an occasion on which people can assemble for social interaction and entertainment. |
| 5 | group of people defined by a specific profession | organization of performers and associated personnel (especially theatrical). |
| | Prompt | STS12 | STS13 | STS14 | STS15 | STS16 | BIOSSES | SICK-R | STSBenchmark |
| DeepSeekMoE-16b | |||||||||
| Hidden State (HS) | none | 20.90 | 43.39 | 24.02 | 37.75 | 47.15 | 29.87 | 42.66 | 30.61 |
| Routing Weight (RW) | none | 45.22 | 41.38 | 28.75 | 38.63 | 50.36 | 34.14 | 51.98 | 38.44 |
| MoEE (concat) | none | 46.26 | 55.88 | 37.90 | 42.37 | 54.19 | 41.20 | 53.66 | 43.06 |
| MoEE (sum) | none | 46.41 | 60.58 | 41.50 | 42.85 | 54.98 | 42.33 | 53.70 | 44.36 |
| Hidden State (HS) | PromptEOL | 51.99 | 69.56 | 54.68 | 58.04 | 68.47 | 45.29 | 63.78 | 65.48 |
| Routing Weight (RW) | PromptEOL | 61.97 | 65.86 | 51.38 | 65.86 | 62.49 | 53.97 | 57.93 | 56.68 |
| MoEE (concat) | PromptEOL | 66.79 | 77.60 | 63.56 | 64.60 | 71.22 | 61.96 | 66.29 | 68.72 |
| MoEE (sum) | PromptEOL | 67.39 | 81.43 | 68.98 | 67.76 | 74.26 | 62.09 | 69.98 | 73.41 |
| Qwen1.5-MoE-A2.7B | |||||||||
| Hidden State (HS) | none | 8.39 | 25.23 | 15.76 | 22.08 | 38.11 | 28.69 | 51.73 | 36.88 |
| Routing Weight (RW) | none | 27.96 | 18.89 | 13.88 | 17.11 | 36.29 | 25.40 | 29.42 | 22.80 |
| MoEE (concat) | none | 33.36 | 36.30 | 24.68 | 25.86 | 47.16 | 39.06 | 53.92 | 43.09 |
| MoEE (sum) | none | 35.72 | 47.29 | 31.51 | 31.00 | 50.61 | 53.40 | 62.35 | 54.11 |
| Hidden State (HS) | PromptEOL | 55.05 | 77.48 | 63.63 | 73.60 | 73.49 | 61.42 | 67.01 | 67.42 |
| Routing Weight (RW) | PromptEOL | 54.39 | 59.05 | 45.49 | 48.11 | 56.96 | 43.65 | 55.46 | 50.54 |
| MoEE (concat) | PromptEOL | 64.44 | 77.38 | 64.05 | 67.18 | 71.48 | 64.87 | 69.01 | 69.71 |
| MoEE (sum) | PromptEOL | 65.54 | 82.44 | 71.39 | 72.88 | 75.43 | 67.84 | 71.15 | 75.57 |
| OLMoE-1B-7B | |||||||||
| Hidden State (HS) | none | 21.53 | 41.47 | 22.71 | 39.88 | 51.49 | 44.11 | 39.98 | 22.36 |
| Routing Weight (RW) | none | 47.16 | 43.92 | 32.62 | 43.87 | 51.91 | 44.30 | 52.89 | 40.77 |
| MoEE (concat) | none | 48.82 | 52.69 | 37.48 | 46.80 | 56.06 | 54.58 | 52.24 | 42.02 |
| MoEE (sum) | none | 49.59 | 54.19 | 38.87 | 47.27 | 56.11 | 54.58 | 52.82 | 42.16 |
| Hidden State (HS) | PromptEOL | 65.51 | 81.86 | 69.37 | 77.64 | 77.19 | 73.54 | 66.62 | 71.51 |
| Routing Weight (RW) | PromptEOL | 55.76 | 60.01 | 48.08 | 49.88 | 57.88 | 56.28 | 56.02 | 50.72 |
| MoEE (concat) | PromptEOL | 67.35 | 80.13 | 68.42 | 68.76 | 73.35 | 73.02 | 67.51 | 70.47 |
| MoEE (sum) | PromptEOL | 68.84 | 84.34 | 74.02 | 73.81 | 76.88 | 73.02 | 70.56 | 75.58 |
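STS entries like those in the table above are conventionally Spearman rank correlations (scaled by 100) between the model's predicted sentence similarities and human ratings. A sketch of that scoring step, illustrative rather than the paper's actual evaluation harness:

```python
# Sketch of the standard STS reporting convention: Spearman correlation x 100
# between predicted similarity scores and gold human ratings.
from scipy.stats import spearmanr

def sts_score(pred_sims, gold_ratings):
    """Return Spearman's rho between predictions and gold labels, times 100."""
    rho, _pvalue = spearmanr(pred_sims, gold_ratings)
    return 100.0 * rho
```

Because Spearman correlation depends only on ranks, the absolute scale of the cosine similarities does not matter, which is what makes HS, RW, and MoEE scores directly comparable in these tables.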
| | Prompt | TweetSentimentExtractionClassification | EmotionClassification | ToxicConversationsClassification |
| DeepSeekMoE-16b | ||||
| Hidden State (HS) | none | 49.14 | 27.55 | 57.69 |
| Routing Weight (RW) | none | 52.37 | 26.49 | 53.32 |
| MoEE (concat) | none | 52.64 | 28.02 | 54.12 |
| MoEE (sum) | none | 50.32 | 27.52 | 68.39 |
| Hidden State (HS) | PromptEOL | 60.13 | 49.11 | 65.47 |
| Routing Weight (RW) | PromptEOL | 57.68 | 35.57 | 55.32 |
| MoEE (concat) | PromptEOL | 61.12 | 45.59 | 55.93 |
| MoEE (sum) | PromptEOL | 59.32 | 46.86 | 68.76 |
| Qwen1.5-MoE-A2.7B | ||||
| Hidden State (HS) | none | 48.83 | 31.02 | 59.38 |
| Routing Weight (RW) | none | 42.80 | 20.63 | 53.53 |
| MoEE (concat) | none | 49.60 | 30.93 | 53.90 |
| MoEE (sum) | none | 48.84 | 32.76 | 70.50 |
| Hidden State (HS) | PromptEOL | 61.14 | 48.09 | 68.80 |
| Routing Weight (RW) | PromptEOL | 55.33 | 33.82 | 54.37 |
| MoEE (concat) | PromptEOL | 60.78 | 46.10 | 55.82 |
| MoEE (sum) | PromptEOL | 60.72 | 47.97 | 70.03 |
| OLMoE-1B-7B | ||||
| Hidden State (HS) | none | 50.29 | 30.29 | 52.10 |
| Routing Weight (RW) | none | 50.15 | 25.53 | 54.93 |
| MoEE (concat) | none | 51.59 | 28.76 | 53.51 |
| MoEE (sum) | none | 51.00 | 29.75 | 64.86 |
| Hidden State (HS) | PromptEOL | 59.58 | 47.50 | 67.46 |
| Routing Weight (RW) | PromptEOL | 52.79 | 28.51 | 53.75 |
| MoEE (concat) | PromptEOL | 59.72 | 42.78 | 55.27 |
| MoEE (sum) | PromptEOL | 59.92 | 45.63 | 66.84 |
| | Prompt | TwitterURLCorpus | TwitterSemEval2015 |
| DeepSeekMoE-16b | |||
| Hidden State (HS) | none | 49.04 | 39.63 |
| Routing Weight (RW) | none | 53.39 | 47.79 |
| MoEE (concat) | none | 57.27 | 46.48 |
| MoEE (sum) | none | 58.99 | 45.25 |
| Hidden State (HS) | PromptEOL | 36.72 | 60.79 |
| Routing Weight (RW) | PromptEOL | 76.58 | 60.01 |
| MoEE (concat) | PromptEOL | 80.08 | 64.79 |
| MoEE (sum) | PromptEOL | 79.20 | 62.70 |
| Qwen1.5-MoE-A2.7B | |||
| Hidden State (HS) | none | 45.71 | 43.14 |
| Routing Weight (RW) | none | 48.78 | 35.74 |
| MoEE (concat) | none | 53.74 | 45.83 |
| MoEE (sum) | none | 57.78 | 45.95 |
| Hidden State (HS) | PromptEOL | 82.50 | 66.07 |
| Routing Weight (RW) | PromptEOL | 73.72 | 55.98 |
| MoEE (concat) | PromptEOL | 82.34 | 65.51 |
| MoEE (sum) | PromptEOL | 80.21 | 64.20 |
| OLMoE-1B-7B | |||
| Hidden State (HS) | none | 55.07 | 40.04 |
| Routing Weight (RW) | none | 54.25 | 51.99 |
| MoEE (concat) | none | 56.97 | 46.31 |
| MoEE (sum) | none | 57.03 | 44.82 |
| Hidden State (HS) | PromptEOL | 82.32 | 61.87 |
| Routing Weight (RW) | PromptEOL | 70.37 | 52.79 |
| MoEE (concat) | PromptEOL | 82.32 | 61.38 |
| MoEE (sum) | PromptEOL | 80.98 | 61.53 |
| | Prompt | TwentyNewsgroupsClustering | MedrxivClusteringS2S |
| DeepSeekMoE-16b | |||
| Hidden State (HS) | none | 25.62 | 26.11 |
| Routing Weight (RW) | none | 15.33 | 19.72 |
| MoEE (concat) | none | 22.94 | 25.35 |
| MoEE (sum) | none | 31.44 | 34.22 |
| Hidden State (HS) | PromptEOL | 27.02 | 22.26 |
| Routing Weight (RW) | PromptEOL | 21.89 | 18.04 |
| MoEE (concat) | PromptEOL | 29.13 | 23.06 |
| MoEE (sum) | PromptEOL | 35.77 | 33.27 |
| Qwen1.5-MoE-A2.7B | |||
| Hidden State (HS) | none | 26.14 | 22.48 |
| Routing Weight (RW) | none | 9.71 | 11.38 |
| MoEE (concat) | none | 28.99 | 24.51 |
| MoEE (sum) | none | 32.07 | 30.62 |
| Hidden State (HS) | PromptEOL | 34.04 | 24.95 |
| Routing Weight (RW) | PromptEOL | 16.94 | 16.54 |
| MoEE (concat) | PromptEOL | 30.45 | 23.91 |
| MoEE (sum) | PromptEOL | 42.05 | 34.60 |
| OLMoE-1B-7B | |||
| Hidden State (HS) | none | 21.05 | 26.52 |
| Routing Weight (RW) | none | 17.14 | 18.17 |
| MoEE (concat) | none | 20.72 | 24.94 |
| MoEE (sum) | none | 27.58 | 33.75 |
| Hidden State (HS) | PromptEOL | 38.96 | 26.69 |
| Routing Weight (RW) | PromptEOL | 22.13 | 17.72 |
| MoEE (concat) | PromptEOL | 41.23 | 26.60 |
| MoEE (sum) | PromptEOL | 38.58 | 34.33 |
| | Prompt | AskUbuntuDupQuestions | SciDocsRR | StackOverflowDupQuestions |
| DeepSeekMoE-16b | ||||
| Hidden State (HS) | none | 43.75 | 45.23 | 25.79 |
| Routing Weight (RW) | none | 41.97 | 42.65 | 23.21 |
| MoEE (concat) | none | 44.10 | 53.43 | 26.06 |
| MoEE (sum) | none | 45.26 | 70.79 | 27.58 |
| Hidden State (HS) | PromptEOL | 43.75 | 45.23 | 25.41 |
| Routing Weight (RW) | PromptEOL | 46.57 | 42.65 | 23.21 |
| MoEE (concat) | PromptEOL | 50.66 | 72.63 | 36.65 |
| MoEE (sum) | PromptEOL | 52.93 | 76.17 | 38.88 |
| Qwen1.5-MoE-A2.7B | ||||
| Hidden State (HS) | none | 43.71 | 60.91 | 30.12 |
| Routing Weight (RW) | none | 41.00 | 36.85 | 22.75 |
| MoEE (concat) | none | 44.95 | 68.42 | 34.31 |
| MoEE (sum) | none | 44.30 | 70.85 | 34.31 |
| Hidden State (HS) | PromptEOL | 54.69 | 75.06 | 39.79 |
| Routing Weight (RW) | PromptEOL | 44.65 | 55.03 | 30.96 |
| MoEE (concat) | PromptEOL | 52.15 | 75.69 | 40.51 |
| MoEE (sum) | PromptEOL | 51.30 | 74.53 | 42.91 |
| OLMoE-1B-7B | ||||
| Hidden State (HS) | none | 43.67 | 69.08 | 24.05 |
| Routing Weight (RW) | none | 42.83 | 54.17 | 25.72 |
| MoEE (concat) | none | 43.91 | 70.33 | 25.49 |
| MoEE (sum) | none | 44.57 | 72.54 | 26.20 |
| Hidden State (HS) | PromptEOL | 55.32 | 78.24 | 41.36 |
| Routing Weight (RW) | PromptEOL | 45.11 | 55.43 | 31.20 |
| MoEE (concat) | PromptEOL | 52.81 | 77.14 | 40.13 |
| MoEE (sum) | PromptEOL | 56.68 | 81.19 | 43.41 |
| Benchmark | Language | Premise | Full Context | Multi-source | Temporal Split |
| miniF2F (Zheng et al., 2022) | Multiple | ✗ | ✗ | ✗ | ✗ |
| ProofNet (Azerbayev et al., 2023) | Lean | ✗ | ✓ | ✓ | ✗ |
| LeanDojo (Yang et al., 2023) | Lean | ✓ | ✗ | ✗ | ✗ |
| LeanStep (Han et al., 2022) | Lean | ✓ | ✗ | ✓ | ✗ |
| CoqGym (Yang & Deng, 2019) | Coq | ✓ | ✗ | ✓ | ✗ |
| PISA (Jiang et al., 2021) | Isabelle | ✗ | ✗ | ✓ | ✗ |
| miniCTX (Ours) | Lean | ✓ | ✓ | ✓ | ✓ |
| Split | | Problems (valid + test) | Context Size (tokens) | In-File Premises (premises / 100 tokens) | Repo. Premises (premises / 100 tokens) | Proof Size (lines) |
| miniF2F | | 244 + 244 | 153* | — | — | 3.0† |
| miniCTX | PNT | 85 + 85 | 10,858 | 1.87 | 0.30 | 3.3 |
| | PFR | 51 + 51 | 18,059 | 0.65 | 1.10 | 27.2 |
| | PFRcross | 43 + 43 | 4,351 | 0.44 | 2.75 | 2.7 |
| | Mathlib | 50 + 50 | 14,440 | — | — | 6.1 |
| | HTPI | 45 + 45 | 65,082 | 2.85 | 0.00 | 10.7† |
| | HEP | 61 + 61 | 3,585 | 5.65 | 4.25 | 3.1 |
| | SciLean | 46 + 46 | 6,249 | 2.08 | 9.72 | 1.8 |
| All | | 381 + 381 | 18,690 | 1.94 | 2.63 | 8.5 |
| Method | miniF2F | miniCTX-test | |||||||
| | Test | PNT | PFR | \( \mathbf{PFR}_{\text{cross}} \) | Mathlib | HTPI | HEP | SciLean | Avg. |
| GPT-4o (full proof) | 13.52 | 7.06 | 1.85 | 6.98 | 14.00 | 13.33 | 31.15 | 6.52 | 11.72 |
| + context | — | 31.76 | 5.56 | 34.88 | 26.00 | 17.78 | 49.18 | 17.39 | 27.08 |
| + context + premise | — | 29.41 | 7.41 | 39.53 | — | 15.56 | 44.26 | 21.74 | 26.82 |
| State-tactic prompting | 28.28 | 20.00 | 5.56 | 0.00 | 16.00 | 0.00 | 31.15 | 19.57 | 14.58 |
| State-tactic tuning | 32.79 | 17.65 | 5.56 | 0.00 | 22.00 | 11.11 | 52.46 | 19.57 | 19.53 |
| File tuning | 33.61 | 40.00 | 5.56 | 44.19 | 34.00 | 15.56 | 60.66 | 45.65 | 35.94 |
| + premise | — | 42.35 | 11.11 | 16.28 | — | 8.89 | 50.82 | 32.61 | 30.21 |
| Environment | Definitions | Lemma Statement | Lemma Proof | Natural Language Comments | File-tuning | GPT-4o |
| ✗ | ✗ | ✗ | ✗ | ✗ | 14.12% | 8.24% |
| ✓ | ✗ | ✗ | ✗ | ✗ | 25.88% | 2.35% |
| ✓ | ✓ | ✗ | ✗ | ✗ | 24.71% | 9.41% |
| ✓ | ✓ | ✓ | ✗ | ✗ | 27.06% | 22.35% |
| ✓ | ✓ | ✓ | ✓ | ✗ | 32.94% | 34.12% |
| ✓ | ✓ | ✓ | ✗ | ✓ | 28.24% | 23.53% |
| ✓ | ✓ | ✓ | ✓ | ✓ | 35.29% | 31.76% |
| Models | Math2001 |
| GPT-4o (full proof) | 11.76% |
| GPT-4o (+ context) | 43.13% |
| State-tactic prompting | 31.37% |
| State-tactic tuning | 27.45% |
| File tuning | 41.18% |
| Automation | File (%) | State-tactic (%) |
| Enabled | 41.18 | 11.76 |
| Disabled | 27.45 | 7.84 |